\section{Implementation Details} \label{app:hyper}
\begin{table*}[!tb]
\centering
\small
\begin{tabular}{c|c|c|c}
\hline
\multicolumn{2}{c|}{\textbf{Attraction}} & \multicolumn{2}{c}{\textbf{Train}} \\
\hline
Info & Reqt & Info & Reqt \\
\hline
area=centre & entrance fee & arriveBy=13:00 & duration\\
type=museum & address & day=wednesday & price\\
& & departure=norwich & trainID\\
& & destination=cambridge & \\
\hline
\end{tabular}
\caption{User goal used in the case study. The user goal contains two domains (i.e., attraction and train). The user is supposed to inform the constraints displayed in Info and request the information displayed in Reqt. The system is required to answer the requests according to the user's demands.}
\label{case:usergoal}
\end{table*}
\begin{table*}[!tb]
\small
\begin{tabular}{l@{~}p{7cm}|l@{~}p{7cm}}
\hline
\multicolumn{2}{c|}{\textbf{Rule}} & \multicolumn{2}{c}{\textbf{S-PPO}} \\
\hline
U:& I am planning a trip to town and want to sight see a bit . Can you let me know some attractions I may be interested in ? A museum type of attraction . I 'd like something in the centre . & U:& I am planning a trip to town and want to sight see a bit . Can you let me know some attractions I may be interested in ? A museum type of attraction . I 'd like something in the centre . \\
S:& Would you like regency gallery ? Sure , there are 11 for you to choose from . & S:& They are located within the centre . We have 11 of those ! Anything specific you need or just a recommendation ? Absolutely . There are some wonderful museum in that area . You are welcome . Is there anything else I can help you with today ? \\
U:& Can I have the address of a good one ? Can you just recommend one and tell me the entrance fee ? & U:& Just select one for me. I would like to visit on in town centre please . \\
S:& It is located in 98 king street. The entry fee is free . & S:& How about broughton house gallery ? they 're pretty fun . That one is located in the centre . It is listed as a museum attraction . Did you need any further assistance today ? \\
U:& Yeah i need a train too. I 'd like to arrive by 13:00 , do you have something close to that time ? I would like to leave on wednesday . \textcolor{red}{Is it going to norwich ? } & U:& Is there an exact address , like a street number ? Thanks !. Can you just recommend one and tell me the entrance fee ? \\
S:& \textcolor{red}{Where are you departing from ? } & S:& They are located at 98 king street . It is on 6 trinity street . The park is free . You are welcome . Is there anything else I can help you with today ? \\
U:& Is it going to norwich ? & U:& I also need a train. I want to get there by 13:00 . I would like to leave on wednesday. I 'll be departing from norwich . \\
S:& Where will you be leaving from ? & S:& \textcolor{blue}{Where are you headed ? When would you like the train to depart ? The arrival time is 06:35 . There are 7 . }\\
U:& \textcolor{red}{Great I also need a train departs from norwich .} & U:& Just select one for me. I don't care about the leave of the train. Ok , great thanks . Can you also help me find a train going to cambridge ? \\
S:& \textcolor{red}{I ' m sorry , we do n't have any trains arriving by 13:00 . There are no trains on wednesday . I ' m sorry there are no trains to \#TRAIN-NOOFFER-Dest\# . There are no trains leaving from norwich .} & S:& It arrives at 06:35 . TR4203 would be your perfect fit . I would say you should leave by 05:16. The price of those tickets are 17.60 pounds . \\
U:& Actually , I ' m all set . Thank you ! Bye !. & U:& What 's the travel time on that trip ? \\
S:& Thank you for using our services . & S:& \textcolor{blue}{It should arrive by 06:35. The trip will cost 17.60 pounds . The trip is 79 minutes . Did you need any further assistance today ? }\\
& & U:& \textcolor{blue}{You were great . Goodbye .} \\
& & S:& It should arrive by 06:35. The fare is 17.60 pounds. I ' m happy to help , and I hope you enjoy your stay !. \\
\hline
\multicolumn{2}{c|}{\multirow[t]{2}{*}[0ex]{\textbf{Failure:}}
\begin{tabular}[c]{@{}p{6.5cm}@{}}System does not answer the user's request. \end{tabular}} & \multicolumn{2}{c}{\textbf{Success}}
\\ \hline
\end{tabular}
\small
\centering
\caption{A sample dialog session comparison in simulated evaluation between the rule-based system and the S-PPO system in natural language, with the user goal in Table~\ref{case:usergoal}.
The rule-based system often provides very concise responses to users. Moreover, it misunderstands the user's utterance (i.e., it treats `norwich' as the destination instead of the departure), which prevents it from finding matching entities in the database. In contrast, our S-PPO system provides more informative sentences, understands the user's utterance correctly, and thus can answer the user's requests.}
\label{case:sample}
\end{table*}
\clearpage
\begin{table*}[!tb]
\centering
\small
\begin{tabular}{c|c|c|c}
\hline
\multicolumn{2}{c|}{\textbf{Hotel}} & \multicolumn{2}{c}{\textbf{Train}} \\
\hline
Info & Reqt & Info & Reqt \\
\hline
parking=yes & pricerange & day=monday & duration \\
star=4 & & depart=london liverpool street & \\
type=guesthouse & & dest=cambridge & \\
& & leaveAt=15:00 & \\
\hline
\end{tabular}
\caption{User goal used in the case study.
The user goal contains two domains (i.e., hotel and train). The user is supposed to inform the constraints displayed in Info and request the information displayed in Reqt. The system is required to answer the requests according to the user's demands.}
\label{case:usergoal-humaneval}
\end{table*}
\begin{table*}[!tb]
\small
\begin{tabular}{l@{~}p{7cm}|l@{~}p{7cm}}
\hline
\multicolumn{2}{c|}{\textbf{Rule}} & \multicolumn{2}{c}{\textbf{S-PPO}} \\
\hline
U:& I am looking for a 4-star hotel with parking. The hotel should be in the type of guesthouse.
&
U:& I am looking for a 4-star hotel with parking. The hotel should be in the type of guesthouse. \\
S:& Would archway house work for you ? The phone number is 01223575314 . We have 16 such places .
&
S:& Yes , where would you like to stay and in what price range ? We have 16 moderate options , any preferences .\\
U:& What's the price range of it?
&
U:& What's the price range of it? \\
S:& \textcolor{red}{The aylesbray lodge guest house has free wifi . Sure , i have 16 options for you.}
&
S:& \textcolor{blue}{It is in north and has 4 stars and is priced moderate . I have found one , Can I make a booking for you? } \\
U:& Is it expensive?
&
U:& No, I do not need to book. Thanks! I also need a train on Monday. Depart from london liverpool street and goes to cambridge. It should leave at 15:00. \\
S:& \textcolor{red}{After further review I believe acorn guest house would be perfect for your stay !. Great , i have 16 options for you.}
&
S:& I found one for you. The train ID is TR9202 . \\
U:& I need to know its price range.
&
U:& How long will it take to get there? \\
S:& \textcolor{red}{That one is a cheap restaurant . It 's fantastic !. It 's in the moderate price range .}
&
S:& \textcolor{blue}{It leaves at 15:39 and arrives at 17:07 for a total travel time of 88 minutes.} \\
U:& So is it the hotel in the cheap price range?
&
U:& Thanks! \\
S:& \textcolor{red}{I can suggest the finches bed and breakfast . Great I have 7 different options for you !.}
&
S:& Have a good day . Bye!\\
U:& Well, I also need a train on Monday. The train departs from london liverpool street and goes to cambridge. It should leave at 15:00.
&
& \\
S:& \textcolor{red}{How many tickets shall I book for you on the 15:00 train on monday from london liverpool street to cambridge ?}
&
& \\
U:& How long will it take to get there?
&
\\
S:& I found a train from london liverpool street to cambridge at 15:00 on monday , will that work ?
&
\\
U:& How many minutes will it take to get the destination?
&
\\
S:& What time would you like to leave or arrive by ?
&
\\
\hline
\multicolumn{2}{c|}{\multirow[t]{2}{*}[0ex]{\textbf{Failure:}}
\begin{tabular}[c]{@{}p{6.5cm}@{}}System does not answer the user's requests. \end{tabular}} & \multicolumn{2}{c}{\textbf{Success}}
\\ \hline
\end{tabular}
\small
\centering
\caption{A sample dialog session comparison in human evaluation between the rule-based system and the S-PPO system in natural language, with the user goal in Table~\ref{case:usergoal-humaneval}.
The rule-based system cannot understand the user's request about the price range and gives contradictory responses (e.g., answering `cheap' and `moderate' at the same time, which confuses the user). Moreover, it cannot understand the user's request about the duration of the train. Therefore, the dialog fails. In contrast, the S-PPO system quickly gets the point of the user's requests and answers the user's questions correctly.
}
\label{case:sample-humaneval}
\end{table*}
\iffalse
\clearpage
\begin{table*}[!tb]
\centering
\small
\begin{tabular}{c|c|c|c|c|c}
\hline
\multicolumn{2}{c|}{\textbf{Attraction}} & \multicolumn{2}{c}{\textbf{Restaurant}} & \multicolumn{2}{c}{\textbf{Taxi}} \\
\hline
Info & Reqt & Info & Reqt & Info & Reqt \\
\hline
type=museum & entrance fee & food=italian & address & arriveBy=17:30 & car type \\
& postcode & price=expensive & & & phone \\
& address & & & & \\
\hline
\end{tabular}
\caption{User goal used in the case study.
The user goal contains three domains (i.e., attraction, restaurant and taxi). The user is supposed to inform the constraints displayed in Info, request information displayed in Reqt. The system is required to answer the request according to the user's demands.
}
\label{case:usergoal-onpolicy}
\end{table*}
\begin{table*}[!tb]
\small
\begin{tabular}{l@{~}p{7cm}|l@{~}p{7cm}}
\hline
\multicolumn{2}{c|}{\textbf{M-PPO}} & \multicolumn{2}{c}{\textbf{S-PPO}} \\
\hline
U:& I need an attraction. Can you make a suggestion? Are there any musuems in the centre?
&
U:& I need an attraction. Can you make a suggestion? Are there any musuems in the centre? \\
S:& Would you like broughton house gallery? It is located in the centre. It's a museum with free parking.
&
S:& Would you like broughton house gallery ? It 's located in the centre . There are some wonderful museum in that area. would that work for you ? \\
U:& Can I get the address and postcode please?
&
U:& Can you let me have their postcode ? I will need the entrance fee please . \\
S:& They are located in 98 king street. Its postcode is cb11ln.
&
S:& You would love broughton house gallery. It is on 14 king's parade . Its entrance fee is free . The postcode is cb11ln. \\
U:& I would also like a taxi to commute between the two places. My arrival time will be 17:30.
&
U:& I need a restaurant . Are there any italian restaurants ? I am looking for a expensive restaurant .\\
S:& You are welcome. Is there anything else I can help you with today ?
&
S:& The stazione restaurant and coffee bar is a nice place.Can I make a reservation for you ? Is there anything else I can help you with ? \\
U:& Can you give me the car type ? \textcolor{red}{(user out DA: Request-Taxi-Car-?)}
&
U:& Yes. May I have the address for the restaurant please ? \\
S:& How about broughton house gallery ? they're pretty fun. \textcolor{red}{(sys in DA: Request-Hotel-Type-?)}
&
S:& They are located at Market Hill City Centre. \\
U:& I need the car type please .
&
U:& Can you schedule me a taxi to take me there ? I would like to arrive by 17:30. \\
S:& How about broughton house gallery ? they 're pretty fun. 98 king street is the address.
&
S:& I need to know what time you need to leave. \\
U:& What is the car type for the taxi ?
&
U:& I don't care about the leaving time. Please tell me the car type and phone number. \textcolor{blue}{(user out DA: Request-Taxi-Car-?, Request-Taxi-Phone-?)} \\
S:& How about broughton house gallery ? they 're pretty fun.
&
S:& A tesla is booked for you! The contact number is 53433616982. \textcolor{blue}{(sys in DA: Request-Taxi-Car-?, Request-Taxi-Phone-?)} \\
\hline
\multicolumn{2}{c|}{\multirow[t]{2}{*}[0ex]{\textbf{Failure:}}
\begin{tabular}[c]{@{}p{6.5cm}@{}}System NLU does not understand the user's utterances. \end{tabular}} & \multicolumn{2}{c}{\textbf{Success}}
\\ \hline
\end{tabular}
\small
\centering
\caption{A sample dialog session comparison in simulated evaluation between M-PPO system and S-PPO system in natural language with the user goal in Table~\ref{case:usergoal-onpolicy}.
In M-PPO, which suffers from dialog state distributional shift, the system cannot understand the user's utterances in the online interactive stage. The system NLU of M-PPO misinterprets the user's intent 'Request-Taxi-Car-?' as 'Request-Hotel-Type-?', causing the dialog to fail. In contrast, in S-PPO, which trains the system NLU with on-policy data, the system understands the user's utterances well and thus helps the user complete the tasks.}
\label{case:sample-onpolicy}
\end{table*}
\clearpage
\begin{table*}[!tb]
\centering
\small
\begin{tabular}{c|c}
\hline \multicolumn{2}{c}{\textbf{Attraction}}\\
\hline
Info & Reqt \\
\hline
type=park & address \\
& phone \\
\hline
\end{tabular}
\caption{User goal used in the case study.
The user goal contains one domain (i.e., attraction). The user is supposed to inform the constraints displayed in Info, request information displayed in Reqt. The system is required to answer the request according to the user's demands.}
\label{case:usergoal-bonus}
\end{table*}
\begin{table*}[!tb]
\small
\begin{tabular}{l@{~}p{7cm}|l@{~}p{7cm}}
\hline
\multicolumn{2}{c|}{\textbf{S-PPO w/o bonus}} & \multicolumn{2}{c}{\textbf{S-PPO}} \\
\hline
U:& I am planning a trip and need some help with a particular attraction. I' m looking for a park type attraction.
&
U:& I am planning a trip and need some help with a particular attraction. I' m looking for a park type attraction. \\
S:& Is there a certain area of town you would prefer? What sort of attraction would you like it to be? botanic gardens is available; It is listed as a park attraction. would that work for you? There are no park in that area.
&
S:& Is there a certain area of town you would prefer ? botanic gardens is available; It 's a park attraction. would that work for you ? \\
U:& I don't care about the area of the attraction.
&
U:& I don't care about the area of the attraction. \\
S:& Would you like botanic gardens? It is listed as park. You might want to try the botanic gardens. I don't care about the area of the attraction. I' m sorry. There are no park listed in that area . \textcolor{red}{(sys out DA: Recommend-Attraction-Name-botanic gardens, Inform-Attraction-Type-park, NoOffer-Attraction-Area-do n't care, NoOffer-Attraction-Type-park)}
&
S:& I recommend botanic gardens; it's got a lot of great features to watch! The entry fee is 4 pounds. It is a park. \textcolor{blue}{(sys out DA: Recommend-Attraction-Name-botanic gardens, Inform-Attraction-Fee-4 pounds, Inform-Attraction-Type-park)} \\
U:& You were great. Goodbye. \textcolor{red}{(user in DA: Request-Attraction-Type-?, Recommend-Attraction-Name-botanic gardens, NoOffer-Attraction-Type-park)}
&
U:& Can you give me their phone number please? Can I get the precise address please? \textcolor{blue}{(user in DA: Recommend-Attraction-Name-botanic gardens, Inform-Attraction-Fee-4 pounds, Inform-Attraction-Type-park)} \\
S:& botanic gardens looks good; botanic gardens is available ; would that work for you? Ok; thank you. Have a good day.
&
S:& They are located at Regent Street City Centre. The phone number is 01223336265. Did you need any further assistance today ? \\
\hline
\multicolumn{2}{c|}{\multirow[t]{2}{*}[0ex]{\textbf{Failure:}}
\begin{tabular}[c]{@{}p{6.5cm}@{}}User NLU does not understand the system's utterances. \end{tabular}} & \multicolumn{2}{c}{\textbf{Success}}
\\ \hline
\end{tabular}
\small
\centering
\caption{A sample dialog session comparison in simulated evaluation between S-PPO system without reward bonus and S-PPO system in natural language with the user goal in Table~\ref{case:usergoal-bonus}.
In the S-PPO system w/o bonus, the system outputs contradictory dialog acts and utterances (e.g., it outputs `NoOffer-Attraction-Type-park' and 'Inform-Attraction-Type-park' at the same time), which confuses the user. Therefore, the dialog fails. In contrast, training the system with the reward bonus (S-PPO) avoids this issue: the system outputs consistent dialog acts and utterances (e.g., 'Inform-Attraction-Type-park') that the user NLU can understand, and thus the dialog completes successfully.}
\label{case:sample-bonus}
\end{table*}
\fi
\section{Conclusion}
In this paper, we propose novel joint system-wise optimization techniques for pipeline goal-oriented dialog systems.
To mitigate the data sparsity problem of NLU, we propose a novel data augmentation approach.
To enhance policy exploration, we propose a novel stochastic policy parameterization with a Poisson distribution as well as an additional reward bonus to encourage the policy to explore successful dialogs.
Our extensive experiments with both automatic and human evaluation demonstrate that our approach outperforms prior work by a large margin and achieves a state-of-the-art success rate in system-wise evaluation on the
multi-domain goal-oriented benchmark dataset MultiWOZ.
In the future, we plan to explore two directions: training the NLU and NLG components jointly, and applying our techniques to end-to-end neural dialog systems.
\section{Experimental Results}
\subsection{Experimental setup}
We experiment on the common benchmark dataset MultiWOZ 2.1~\cite{eric2019multiwoz}, a multi-domain, multi-intent task-oriented dialog corpus that contains 7 domains, 13 intents, 25 slot types, 10,483 dialog sessions, and 71,544 dialog turns. We apply the agenda-based user simulator~\cite{schatzmann2007agenda}.
The simulator initializes a user goal when the dialog starts, provides the agent with a simulated user response at each dialog turn and works at the dialog act level.
We compare our system with the following published baseline systems:
\begin{itemize}
\item End-to-end trained Neural Systems: neural models that take in user utterances and output responses in natural language. We consider TSCP~\cite{lei2018sequicity}, DAMD~\cite{zhang2020task}, and SOLOIST+~\cite{zhang2021hybrid}, an improved variant of SOLOIST~\cite{peng2020soloist} that achieved the best performance in the DSTC9 competition~\cite{gunasekara2020overview}.
\item Joint Systems: systems that jointly learn some of the components. We consider the word-level DST SUMBT~\cite{lee2019sumbt} and the word-level policy LaRL~\cite{zhao2019rethinking}.
\item Modularized GDPL System (M-GDPL)~\cite{takanobu2019guided}: a dialog policy learning algorithm which uses inverse reinforcement learning for reward estimation.
\item{Rule-based System (Rule)}~\cite{takanobu2020your}: achieves the state-of-the-art results in system-wise evaluation. It consists of a trained BERT-NLU, rule-based DST and policy, and template-based NLG components.
\end{itemize}
We also compare with the Modularized PPO System (M-PPO)~\cite{takanobu2020your}, which is trained with the PPO algorithm in an environment that assumes all the other components are exactly correct. Note that the only difference between the M-PPO system and the rule-based system is the policy component.
We refer to our proposed system as the {\bf Joint System-wise PPO System (S-PPO)}: we use our proposed joint system-wise optimization techniques to fine-tune the M-PPO system above. We also compare with a variant of S-PPO in which we replace the learned policy with a rule-based policy --- we call it Aug-Rule since it uses our data augmentation.
\begin{table*}[t]
\centering
\small
\begin{tabular}{|c|cccc|}
\hline
& Turns & Info. & Match. & Succ. \\
\hline
TSCP~\cite{lei2018sequicity} & 18.20 & 32.0 & 13.68 & 11.8 \\
SUMBT~\cite{lee2019sumbt} & 13.71 & 44.0 & 46.44 & 27.8 \\
LaRL~\cite{zhao2019rethinking} & 13.08 & 68.0 & 68.95 & 47.7 \\
DAMD~\cite{zhang2020task} & 11.27 & 64.0 & 59.7 & 48.5 \\
M-GDPL~\cite{takanobu2019guided} & 10.86 & 69.0 & 68.3 & 54.1 \\
M-PPO~\cite{takanobu2020your} & 8.30 & 92.3 & 98.3 & 74.1 \\
Rule~\cite{takanobu2020your} & \textbf{6.505} & 96.3 & 98.2 & 81.6 \\
SOLOIST+~\cite{zhang2021hybrid} & 7.44 & 95.7 & 97.9 & 83.5 \\
\hline
Aug-Rule (ours) & 7.52 & 96.3 & 98.6 & 91.4 \\
S-PPO (ours) & 7.21 & \textbf{98.4} & \textbf{99.6} & \textbf{93.8} \\
\hline
\end{tabular}
\caption{System-wise automatic evaluation performance of dialog turns, inform recall, match rate, and success rate, for all baseline systems and our S-PPO system. S-PPO beats all baselines and achieves state-of-the-art results.}
\label{table:main}
\end{table*}
\subsection{Training}
For all the RL-based methods, we use two-hidden-layer MLPs with 100 hidden units for the system policy and 50 hidden units for the value function. The action space of the system policy contains 209 actions. The activation function of the MLPs is ReLU.
We follow \citet{takanobu2019guided} and pre-train the policy by simple imitation learning (with cross-entropy loss and a learning rate of 1e-5) until convergence on the state-action pairs on the MultiWOZ dataset~\cite{eric2019multiwoz}.
We then use the pre-trained policy as initialization to interact with the user simulator and continue to train the policy using the PPO algorithm (M-PPO system). After that, we fine-tune M-PPO with our joint system-wise optimization techniques (S-PPO system).
For PPO training, we use RMSprop optimizer with a learning rate of 1e-4 and a batch size of 2000.
For NLU learning, we fine-tune all the parameters of the NLU, including the BERT model, with the AdamW~\cite{loshchilov2017decoupled} optimizer using a learning rate of 1e-4 and a batch size of 40 (32 augmented samples and 8 offline MultiWOZ samples).
For the reward bonus, we found that the coefficient $\alpha$ is not sensitive and we set $\alpha=10$.
For SOLOIST+~\cite{zhang2021hybrid}, we use the publicly available code and rerun their model in our test settings\footnote{Therefore, the result numbers differ from those reported in the original paper.}.
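For concreteness, the snippet below gathers the hyper-parameters listed above into a single Python dictionary; the key names are ours and purely illustrative, not part of ConvLab-2 or any released code.
\begin{verbatim}
# Illustrative hyper-parameter summary (key names are ours).
S_PPO_CONFIG = {
    "policy_hidden_dim": 100,   # two-hidden-layer MLP for the policy
    "value_hidden_dim": 50,     # two-hidden-layer MLP for the value function
    "action_space": 209,        # number of delexicalized dialog acts
    "imitation_lr": 1e-5,       # cross-entropy pre-training of the policy
    "ppo_optimizer": "RMSprop",
    "ppo_lr": 1e-4,
    "ppo_batch_size": 2000,
    "nlu_optimizer": "AdamW",
    "nlu_lr": 1e-4,
    "nlu_batch_size": 40,       # 32 augmented + 8 offline MultiWOZ samples
    "reward_bonus_alpha": 10,
}
\end{verbatim}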
\subsection{Evaluation Metric}
We use the number of \textit{dialog turns}, averaged over all dialog sessions, to measure the efficiency of accomplishing a task. The system should help each user accomplish his/her goal within 20 turns; otherwise, the dialog is regarded as failed. We also utilize two other metrics, \textit{inform recall} and \textit{match rate}, to estimate task success. Both metrics are calculated based on dialog acts~\cite{stolcke2000dialogue}: the dialog acts from the input and output of the agenda-based user simulator's policy are used to compute the two scores. \textit{Inform recall} evaluates whether all the information requests are fulfilled, and \textit{match rate} assesses whether the offered entity meets all the constraints specified in a user goal. A dialog is marked as successful if and only if both \textit{inform recall} and \textit{match rate} are 1. For each agent, the \textit{success rate} and the other metrics are averaged over 1000 dialog sessions.
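As a minimal illustration of how these metrics combine into the success criterion (function and variable names are ours, not part of any evaluation toolkit), consider the following sketch:
\begin{verbatim}
# Sketch of the success criterion described above (names are illustrative).
MAX_TURNS = 20

def dialog_success(inform_recall, match_rate, n_turns):
    """A dialog succeeds iff both metrics equal 1 within 20 turns."""
    if n_turns > MAX_TURNS:
        return False
    return inform_recall == 1.0 and match_rate == 1.0

def success_rate(sessions):
    """Average success over the evaluated sessions (1000 in our setup)."""
    return sum(dialog_success(*s) for s in sessions) / len(sessions)
\end{verbatim}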
\subsection{Main Results}
Table~\ref{table:main} shows the performance comparison with baselines. We compare all methods on the number of dialog turns, inform recall, match rate and success rate.
Our method S-PPO outperforms end-to-end neural systems (TSCP, DAMD, SOLOIST+), joint systems (SUMBT, LaRL) and other pipeline systems (M-GDPL, M-PPO, Rule), and achieves a state-of-the-art success rate in system-wise evaluation.
We discover that our method Aug-Rule also outperforms other baselines by a large margin. This demonstrates that our data augmentation approach can also significantly boost the performance of other pipeline systems.
Our proposed S-PPO system significantly outperforms the M-PPO system and beats the results of other systems by a large margin of 10\% in \textit{success rate}.
S-PPO also achieves the best performance in \textit{inform recall}, \textit{match rate} and \textit{success rate}. This is because our techniques enable the system to understand and answer the user's requests, encourage the system to provide correct answers and thus have a higher success rate.
We also observe that the number of \textit{dialog turns} of S-PPO is lower than that of other learning-based systems and only slightly higher than that of the rule-based system.
\subsection{Human Evaluation Results}
We recruit 20 people to interact with dialog systems and collect their judgment on task success. Following the setting of DSTC competition~\cite{kim2019eighth}, we collect 100 dialogs for each system. For each dialog, we anonymize the system ID to reduce the user's bias.
\begin{table}[h]
\centering
\footnotesize
\begin{tabular}{|cccc|}
\hline
Rule & SOLOIST+ & Aug-Rule & S-PPO \\
\hline
62\% & 77\% & \textbf{78\%} & \textbf{78\%} \\
\hline
\end{tabular}
\caption{Success rate in human evaluation.}
\label{table:humaneval}
\end{table}
\iffalse
\begin{wraptable}{r}{.24\textwidth}
\vspace{-0.1in}
\centering
\small
\begin{tabular}{|c|ccc|}
\hline
& M-PPO & Rule & S-PPO \\
\hline
Succ. & 52\% & 48\% & \textbf{66\%} \\
\hline
\end{tabular}
\caption{Human evaluation.}
\label{table:humaneval}
\vspace{-0.1in}
\end{wraptable}
\fi
\begin{figure*}[t]
\centering
\includegraphics[width=0.23\textwidth]{figs/succ_solo.png}
\includegraphics[width=0.23\textwidth]{figs/info_solo.png}
\includegraphics[width=0.23\textwidth]{figs/match_solo.png}
\caption{Performance (y-axis) with different number of domains (x-axis). S-PPO can handle complex tasks much better than baselines.}
\label{fig:dom_number}
\end{figure*}
For each session, users are asked to mark whether the system completes the dialog. If a dialog is completed, users are also asked to provide all requested slot values for database query verification purposes. A dialog is successful if the fulfilled requested slots match the values in the database. We compare four systems which achieve the highest performance in automatic system-wise evaluation. The success rate of 62\% from the rule-based system is taken from~\cite{takanobu2020your}, while the success rate from our own human evaluation is lower than this number.
As shown in Table~\ref{table:humaneval}, S-PPO and Aug-Rule outperform other systems in human evaluation, which demonstrates the effectiveness of our approaches in real-world applications.
We find that S-PPO and Aug-Rule are significantly better than the rule-based system, which indicates that the systems trained with our data augmentation approach can understand human utterances much better.
Our methods S-PPO and Aug-Rule also slightly outperform SOLOIST+, even though SOLOIST+ leverages a stronger GPT-2 model for language generation (NLG).
Tables~\ref{case:sample} and \ref{case:sample-humaneval} in the Appendix show sampled dialog sessions from the automatic evaluation and the human evaluation, respectively.
\iffalse
\begin{table}[h]
\centering
\small
\begin{tabular}{|c|cc|}
\hline
& Rule & S-PPO \\
\hline
Succ. & 39.8\% & \textbf{52.0\%} \\
\hline
\end{tabular}
\caption{Human evaluation.}
\label{table:humaneval}
\end{table}
\fi
\subsection{Multi-domain tasks}
Figure~\ref{fig:dom_number} demonstrates how the performance varies with the number of domains in a task. S-PPO outperforms all baseline systems consistently when the number of domains increases from 1 to 3. We find that the performance of M-PPO drops dramatically when the number of domains increases. On the contrary, our proposed S-PPO system can scale well to the multi-domain tasks, and the performance of S-PPO only drops slightly when the number of domains increases. These results show that S-PPO can deal with complex tasks better than baselines.
\subsection{Ablation study}
In this section, we conduct ablation studies to investigate the contribution of the proposed techniques. Specifically, we test four variants: 1) `Vanilla': optimizing the policy in a system-wise manner without any of our techniques; 2) `Poiss': using the stochastic policy parameterization with the Poisson distribution; 3) `Aug': training the system NLU with data augmentation; 4) `Bonus': training the policy with the reward bonus.
\begin{table}[h]
\centering
\small
\begin{tabular}{|c|cccc|}
\hline
& Turn & Info. & Match. & Succ. \\
\hline
Vanilla & 8.41 & 93.9 & 98.2 & 77.7 \\
\hline
Poiss & 8.23 & 94.6 & 98.8 & 80.3 \\
\hline
Poiss+Aug & 7.34 & 97.4 & 99.3 & 90.2 \\
\hline
Poiss+Aug+Bonus & 7.21 & 98.4 & 99.6 & 93.8 \\
\hline
\end{tabular}
\caption{Ablative results in system-wise evaluation.}
\label{table:ablation}
\end{table}
The ablation results are shown in Table~\ref{table:ablation}. First, our new stochastic policy parameterization improves the success rate by about 3\% compared with the vanilla system. This is because the stochastic parameterization controls the number of sampled dialog acts and enables better exploration. Second, training the system NLU with our proposed data augmentation yields the largest performance gain (10\%). Third, using the reward bonus to train the policy further improves the success rate by about 3\%.
Our ablation results show that the main bottleneck with pipeline systems comes from the NLU component.
To further show how our proposed techniques work, we provide an analysis of intermediate results. Figure~\ref{fig:consistency} shows the F1 score of the NLU during training. On the one hand, training the \textit{system NLU} with data augmentation improves its performance. On the other hand, training the policy with the reward bonus indirectly boosts the performance of the \textit{user NLU}, even though we do not train the user NLU.
\begin{figure}[h]
\centering
\includegraphics[width=0.2\textwidth]{figs/sys_cons.png}
\includegraphics[width=0.2\textwidth]{figs/user_cons.png}
\vspace{-0.1in}
\caption{Performance of system NLU and user NLU.}
\label{fig:consistency}
\vspace{-0.1in}
\end{figure}
\section{Introduction}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.90\textwidth]{figs/pipeline3.png}
\caption{Workflow in a pipeline dialog system (using an agenda-based user simulator). ``DA'' stands for ``dialog acts''. First, the user generates a user goal and user dialog acts, which are passed to the user NLG to produce the user utterance. The system NLU takes in the user utterance and recovers the dialog acts. The DST takes in the dialog acts, queries the database, and returns the belief state and DB state. The policy receives the state and outputs the system output DA. Finally, the system NLG decodes the system output DA into the system utterance and responds to the user.}
\label{fig:pipeline}
\end{figure*}
Goal-oriented dialog systems have evolved from single domain to complex multi-domain tasks, with daily applications in customer support and personal assistants
~\cite{levin1997learning,stolcke2000dialogue,sarikaya2016overview,crook2016task,gao2018neural,zhang2020recent}.
Existing approaches include
(1) pipeline system, which typically consists of four modular components: \textit{Natural Language Understanding} (NLU)~\cite{goo2018slot,devlin2018bert,pentyala2019multi}, \textit{Dialog State Tracker} (DST)~\cite{xie2015recurrent,lee2016task}, \textit{Dialog Policy}~\cite{peng2017composite,takanobu2019guided}, and \textit{Natural Language Generation} (NLG)~\cite{wen2015semantically,balakrishnan2019constrained};
(2) end-to-end neural systems with a learned model that takes in the conversation history and outputs the response sentences~\cite{lei2018sequicity,zhang2020task,peng2020soloist,ham2020end};
and (3) hybrid versions of (1) and (2)~\cite{zhao2019rethinking,lee2019sumbt}. Compared with end-to-end systems, the modular structure makes pipeline systems more interpretable. In this paper, we focus on pipeline systems\footnote{We refer the readers to the right red box in Fig.~\ref{fig:pipeline} for an illustration of the pipeline system and the associated building blocks.}.
Recently, \citet{takanobu2020your} proposed \textit{system-wise} evaluations of various configurations of goal-oriented dialog systems, which measure the success rate of the entire system for systematic benchmarking. \citet{takanobu2020your} showed that improvements on individual components in prior work do not necessarily benefit the pipeline systems in system-wise evaluation.
More surprisingly, the best system (by a decent margin) in their paper turned out to be a simple pipeline system with a trained BERT-NLU~\cite{devlin2018bert}, rule-based DST, rule-based policy, and template-based NLG components. We refer to this system as the \textit{rule-based} system for simplicity.
In this paper, we pose a question that has not been studied in prior work: what is the bottleneck that affects the pipeline system's performance?
To answer this question, we propose joint system-wise optimization that trains components of the pipeline system jointly and synergistically to improve system-wise performance for pipeline systems. We train the NLU and policy together while fixing the DST and the NLG components as used in the {\em rule-based} system. This is because the rule-based DST and the template-based NLG both already have high performance. Therefore to simplify the joint system-wise optimization process, we focus on training the NLU and policy together with the fixed DST and NLG modules in the pipeline.
First, it is well known that BERT-NLU is data-hungry and requires a large amount of labeled training data~\cite{devlin2018bert}. However, labeled data is expensive and limited~\cite{zhang2020recent}. Existing labeling methods require humans to label intents and slots from utterances, which is quite time-consuming. In this paper, we design a novel automatic data labeling approach by leveraging the fact that the system NLU and the user NLG solve the inverse problems (as shown in Fig.~\ref{fig:pipeline}).
Given the dialog acts output by any user policy, we leverage a well-trained user NLG component to generate diverse utterances from them.
Then, we train the system NLU by using the generated utterances as input and the dialog acts as labels. By training the system NLU component with this data augmentation technique, system NLU can achieve better performance and provide the system policy with correctly recognized intents/slots.
Second, we train the system policy to adapt to the output of its upstream components (i.e., system NLU). To encourage the exploration of policy, we propose a novel policy network with Poisson distribution to control the number of dialog acts. We also propose a reward bonus to help the policy reduce errors of the user NLU and explore successful dialog turns.
By training policy with enhanced exploration, the policy can explore more successful dialogs and provide the system NLU with more diverse training data.
We conduct extensive experiments on the multi-domain dialog benchmark dataset MultiWOZ 2.1~\cite{eric2019multiwoz} with agenda-based user simulator~\cite{schatzmann2007agenda} following the system-wise evaluation~\cite{takanobu2020your}.
We show that our approaches significantly improve the learning efficiency of the pipeline dialog system and outperform the existing learning-based approaches as well as the rule-based system in both automatic and human evaluation. Our method also outperforms the best result (Team-1) in the DSTC9 competition~\cite{gunasekara2020overview}\footnote{In DSTC9, most teams adopt end-to-end neural systems based on GPT-2~\cite{radford2018improving} for NLU and NLG, while we use BERT for NLU and a template-based method for NLG as in \cite{takanobu2020your}.}, which is concurrent work with ours.
In summary, our contributions are:
\begin{itemize}
\itemsep -0.0em
\item[1.] We propose novel techniques to enable effective joint system-wise optimization for the pipeline dialog system.
\item[2.] Our approaches outperform competitive pipeline systems by large margins of 12\% success rate in automatic system-wise evaluation and 16\% success rate in human evaluation on the multi-domain dataset MultiWOZ, and also outperform the recent state-of-the-art end-to-end trained model from DSTC9.
\item[3.] Our ablation studies demonstrate that the bottleneck with the pipeline system comes from the NLU component.
\end{itemize}
\section*{Acknowledgments}
\bibliographystyle{acl_natbib}
\section{Methodology}
To improve system-wise performance for the pipeline system, we propose joint system-wise optimization to train the components jointly and synergistically. As mentioned before, since rule-based DST and template-based NLG already achieve high performance, we focus on the joint training of NLU and policy.
On the one hand, training the NLU requires a large amount of labeled data~\cite{devlin2018bert}. To ease the time-consuming labeling process, we design an automatic data augmentation approach that generates utterances with the user NLG from given dialog acts to train the system NLU
(Section~\ref{method:NLU}).
On the other hand, the existing policy parameterization lacks mechanisms for adequate exploration. Therefore, we propose a novel stochastic policy parameterization as well as a reward bonus to encourage exploration (Section~\ref{method:RL}).
\subsection{Training NLU with Data Augmentation}
\label{method:NLU}
Let us denote $d$ as dialog act and $u$ as dialog utterance. We also denote the delexicalized dialog act as $\bar{d}$, which will be combined with the slot value from the database to result in a full dialog act.
The NLU component $f_{\omega}$ maps an utterance $u$ to the corresponding dialog act $d$. We use the pre-trained BERT model as token encoder to train the NLU.
The NLG component maps the dialog act to an utterance. In this paper, we apply a template-based NLG implemented in Convlab-2~\cite{zhu2020convlab}.
We first observe that NLG and NLU are inverse processes of each other.
Therefore, instead of labeling dialog acts from given utterances, our method generates utterances from given dialog acts. This allows us to adopt any well-trained user NLG models (or human feedback input if there is any) to provide diverse training data for system NLU.
Since human labeling is time-consuming, in this paper, we allow our system to interact with a user simulator (which is also a pipeline system) to collect data. The user simulator consists of BERT-NLU, rule-based DST, agenda-based policy~\cite{schatzmann2007agenda} and template-based NLG.
During the conversation, the user simulator first outputs dialog acts $d$ and subsequently generates utterances $u$ by template-based NLG, which forms a new pair of training data ($u, d$) and provides training data augmentation for system NLU.
After collecting training data from the user simulator, we mix the augmented data with the offline training data from MultiWOZ 2.1~\cite{eric2019multiwoz} to fine-tune the system NLU.
We describe the NLU learning in more detail below. The BERT-NLU consists of two sub-tasks: slot-value extraction ($f^{slot}$) and intent classification ($f^{intent}$).
We use cross-entropy loss to train both the slot classification and the intent detection tasks as implemented in Convlab-2~\cite{zhu2020convlab}.
Let $\omega$ denote all the parameters and let
$\mathcal{L}_{slot}(\omega)$ and $\mathcal{L}_{intent}(\omega)$ denote the two losses. The final loss function is the sum of the two:
\begin{equation} \label{eq:nluloss}
\mathcal{L}_{slot}(\omega) + \mathcal{L}_{intent}(\omega).
\end{equation}
To fine-tune the NLU, we first derive a rule-based algorithm to automatically convert the dialog acts $d$ to
supervision signals for NLU (i.e., slot-values and intents). Then we use Eq.~\eqref{eq:nluloss} to fine-tune the NLU with the augmented training data.
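The following sketch illustrates one NLU fine-tuning step with augmented data; the helper objects (\texttt{user\_nlg}, \texttt{acts\_to\_labels}, \texttt{bert\_nlu}) are placeholders standing in for the corresponding components rather than actual API calls.
\begin{verbatim}
# Hedged sketch of NLU data augmentation; helper names are placeholders.
def augment_and_finetune_step(user_acts_batch, offline_batch,
                              user_nlg, bert_nlu, optimizer):
    # 1) Generate utterances from the user's dialog acts (inverse of NLU).
    augmented = [(user_nlg.generate(d), d) for d in user_acts_batch]
    # 2) Mix 32 augmented (u, d) pairs with 8 offline MultiWOZ pairs.
    batch = augmented[:32] + offline_batch[:8]
    # 3) Convert dialog acts into slot/intent supervision (rule-based)
    #    and fine-tune BERT-NLU with the summed cross-entropy losses.
    loss = 0.0
    for u, d in batch:
        slot_labels, intent_labels = acts_to_labels(u, d)
        loss = loss + bert_nlu.slot_loss(u, slot_labels) \
                    + bert_nlu.intent_loss(u, intent_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
\end{verbatim}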
Note that the template-based NLG in the user simulator can be replaced by any other well-trained NLG models (including humans). Moreover, using an ensemble of NLG models simultaneously can increase the diversity of training data, which helps enhance the robustness of system NLU. We leave it to future work.
\subsection{Exploration with Stochastic Policy Parameterization and Reward Bonus} \label{method:RL}
The dialog policy generates the next system action conditioned on the dialog state. The dialog state consists of (1) belief state from DST; (2) DB state from database; (3) user actions at current turn; (4) system actions at the last turn. The dialog state is then represented as a binary vector of size $m$ ($m$ is the dimension of state) which serves as the input of policy. Given the input, the policy then outputs a distribution over all candidate delexicalized dialog acts and samples actions from that distribution.
Following common practice~\cite{takanobu2020your}, we use a sparse reward function: we give a positive reward of $2L$ for a successful dialog and a negative reward of $-L$ for a failed one, where $L$ represents the number of dialog turns. We denote this reward function as $r_{\textup{origin}}$.
To improve the performance of policy in system-wise evaluation, we train the policy using reinforcement learning in a system-wise manner by considering all the components (instead of assuming that they are perfect). Moreover, we propose two techniques to encourage the policy to explore successful dialogs.
\subsubsection{Stochastic Policy Parameterization}
Our starting point is the off-the-shelf RL algorithm PPO~\cite{schulman2017proximal}, one of the most commonly used model-free algorithms in prior work on dialog~\cite{takanobu2019guided,takanobu2020your}.
We improve the PPO algorithm based on the ConvLab-2 implementation~\cite{zhu2020convlab}.
First, we observe that existing policy parameterization maintains a separate Bernoulli distribution for each dialog act, and takes a simple threshold to output the set of dialog acts. There are several potential drawbacks: (1) the policy that collects data is deterministic, whereas the PPO algorithm requires a stochastic policy. Therefore there is a mismatch between the intended use of PPO and the actual implementation; (2) the determinism leads to insufficient exploration; (3) the parameterization does not utilize the mutual exclusiveness between different dialog acts.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{figs/policy8.png}
\caption{Policy network architecture.}
\label{fig:policy}
\end{figure}
To overcome these drawbacks, we introduce a new parameterization of the stochastic policy that enables a principled way of using policy gradient algorithms as well as better exploration.
As shown in Figure~\ref{fig:policy}, the policy network is given a state $s$ and outputs a sequence of delexicalized dialog acts (which will be combined with the slot values from the database to form full dialog acts). We first model the number of delexicalized dialog acts to output, denoted by $k$, as
\begin{align}
k \sim \textup{Poisson}(\lambda(s))
\end{align}
where $\lambda(s)$ is a function of the state parameterized by a neural net.
We follow Convlab-2~\cite{zhu2020convlab} and use $209$ atomic actions. Then, we model each delexicalized dialog act with a categorical distribution over the $209$ choices:
\begin{align}
\bar{d}|s \sim \textup{softmax}(g(s))
\end{align}
where $g(s)$ is a function of the state parameterized by a neural net that outputs a $209$-dimensional logit vector.
We assume that all $k$ dialog acts are independent of each other conditioned on $k$, resulting in the joint distribution of the action $a = (
\bar{d}_1,\dots, \bar{d}_k)$:
\begin{align}
\pi(a|s) = \frac{\lambda(s)^k}{k!} \cdot e^{-\lambda(s)} \cdot \prod_{1\le i\le k} p(\bar{d}_i|s).
\end{align}
We then compute the policy gradient update according to the PPO algorithm. During data collection, we directly sample from the stochastic policy instead of taking a deterministic threshold as done in the existing implementation~\cite{zhu2020convlab}.
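The following PyTorch-style sketch illustrates this parameterization and the sampling procedure (a simplified illustration under the description above, not our exact implementation; the class and variable names are ours).
\begin{verbatim}
# Hedged sketch of the stochastic policy with a Poisson act count.
import torch
import torch.nn as nn

class PoissonActPolicy(nn.Module):
    def __init__(self, state_dim, n_acts=209, hidden=100):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden), nn.ReLU())
        self.log_lambda = nn.Linear(hidden, 1)       # lambda(s) > 0 via exp
        self.act_logits = nn.Linear(hidden, n_acts)  # g(s)

    def forward(self, s):
        h = self.body(s)
        lam = self.log_lambda(h).exp().squeeze(-1)
        cat = torch.distributions.Categorical(logits=self.act_logits(h))
        return torch.distributions.Poisson(lam), cat

    def sample(self, s):
        pois, cat = self.forward(s)
        k = int(pois.sample().item())              # number of dialog acts
        acts = [cat.sample() for _ in range(k)]    # k delexicalized acts
        log_prob = pois.log_prob(torch.tensor(float(k))) \
                 + sum(cat.log_prob(a) for a in acts)
        return acts, log_prob                      # log pi(a|s) as above
\end{verbatim}
The sampled delexicalized acts are then combined with slot values from the database downstream, as described above.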
\subsubsection{Reward Bonus}
The second novel technique is motivated by the observation that when the user simulator is not perfect (e.g., the user NLU does not understand the system output utterances), the dialog system is not able to finish tasks successfully.
We thus design a reward bonus to encourage the policy to select dialog acts whose translated utterances result in low errors for the user NLU (which is fixed without training) to reduce the errors from the user simulator.
We first measure the performance of the user NLU by precision and recall at the dialog-act level and then use the F1 score as the reward bonus $r_{\textup{bonus}}$.
Therefore, the final reward function is:
\begin{equation}
r = r_{\textup{origin}} + \alpha \cdot r_{\textup{bonus}}
\end{equation}
where $\alpha$ is a hyper-parameter.
The new reward function encourages the policy to lower the errors from the user simulator and explore successful dialogs during training.
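A minimal sketch of the shaped reward, assuming access to the dialog acts emitted by the system policy and those recovered by the (fixed) user NLU, is given below; the function names are illustrative.
\begin{verbatim}
# Sketch of the reward with the user-NLU F1 bonus (alpha = 10 in our runs).
def f1(system_acts, user_nlu_acts):
    sys_set, nlu_set = set(system_acts), set(user_nlu_acts)
    tp = len(sys_set & nlu_set)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(nlu_set), tp / len(sys_set)
    return 2 * precision * recall / (precision + recall)

def shaped_reward(r_origin, system_acts, user_nlu_acts, alpha=10.0):
    r_bonus = f1(system_acts, user_nlu_acts)  # how well the user NLU
    return r_origin + alpha * r_bonus         # recovers the system's acts
\end{verbatim}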
\begin{algorithm}[htbp]
\caption{Joint System-Wise Optimization for Pipeline Systems (S-PPO)}
\label{alg}
\begin{algorithmic}[1]
\State Given user NLU and NLG, rule-based DST, template-based NLG, data buffer $\mathcal{D}$ and $\mathcal{M}$, MultiWOZ dataset $\mathcal{B}$. \label{code:init}
\State Pre-train system NLU and user NLU on MultiWOZ dataset. Pre-train dialog policy $\pi_\theta$ (by assuming all the NLU and NLG are perfect, that is, passing along the dialog act between users and system directly). \label{code:pretrain}
\For {each epoch}
\State Trajectory data buffer $\mathcal{D} \leftarrow \emptyset$, NLU data buffer $\mathcal{M} \leftarrow \emptyset$
\For {step $t=0,1,2,...,\textup{batch\_size}$} \label{code:turn}
\If {new session}
\State Generate a user goal. \label{code:usergoal}
\EndIf
\State Receive user's utterance $u_{\textup{userout},t}$ with its corresponding dialog act $d_{\textup{userout},t}$. \label{code:userutterance}
\State Process user's utterance (with sampling from the policy) and send response to the user.\label{code:sys_response}
\State Receive the immediate reward $r_{\textup{origin}}$ and $r_{\textup{bonus}}$.
\label{code:reward}
\State Update the data buffers:
$\mathcal{D} \leftarrow \mathcal{D} \cup \{(s_t,a_t,r_{\textup{origin}}+\alpha \cdot r_{\textup{bonus}})\}$,
$\mathcal{M} \leftarrow \mathcal{M} \cup \{(u_{\textup{userout},t}, d_{\textup{userout},t})\} $.
\EndFor
\State Train the dialog policy using PPO algorithm on collected data from $\mathcal{D}$. \label{code:updatepolicy}
\State Train system NLU on dataset $\mathcal{M}$ and $\mathcal{B}$ jointly to enforce consistency. \label{code:updateNLU}
\EndFor
\end{algorithmic}
\end{algorithm}
\subsection{Joint System-Wise Optimization}
Algorithm~\ref{alg} shows the pseudo code of our pipeline training process. We first initialize all components of the pipeline system (line \ref{code:init}) and pre-train the NLU and dialog policy components (line \ref{code:pretrain}). For each new session, we first initialize a user goal (line \ref{code:usergoal}). At each dialog turn, the system processes the user utterance, selects actions and responds to the user (line \ref{code:sys_response}). We then evaluate the current turn and calculate the original reward and the reward bonus (line \ref{code:reward}). After collecting a batch of data, we train the dialog policy using the PPO algorithm (line \ref{code:updatepolicy}) and train the system NLU (line \ref{code:updateNLU}).
In our algorithm, the joint training of NLU and policy can benefit each other. On the one hand, a better NLU can provide more accurate input for policy; on the other hand, a good policy can explore better training data for NLU. By performing joint system-wise optimization for NLU and policy, the system-wise performance of pipeline systems can be improved.
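In code, one training epoch of Algorithm~\ref{alg} roughly corresponds to the sketch below; the environment and component objects are placeholders for the corresponding ConvLab-2 modules, not actual APIs.
\begin{verbatim}
# Hedged sketch of one S-PPO epoch (object names are placeholders).
def train_one_epoch(env, policy, system_nlu, multiwoz_data,
                    batch_size=2000, alpha=10.0):
    traj, nlu_data = [], []                       # buffers D and M
    for _ in range(batch_size):
        if env.needs_new_session():
            env.generate_user_goal()
        user_utt, user_act = env.user_turn()      # utterance + its dialog acts
        state = env.dialog_state()                # NLU -> DST -> state vector
        action, logp = policy.sample(state)       # stochastic Poisson policy
        r_origin, r_bonus = env.respond(action)   # NLG -> user, then rewards
        traj.append((state, action, logp, r_origin + alpha * r_bonus))
        nlu_data.append((user_utt, user_act))
    ppo_update(policy, traj)                          # update dialog policy
    finetune_nlu(system_nlu, nlu_data, multiwoz_data) # update system NLU
\end{verbatim}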
\section{Preliminaries on Pipeline Dialogue System}
\subsection{Natural Language Understanding (NLU)}
The task of NLU~\cite{devlin2018bert,radford2018improving,zheng2020out} is typically decomposed into two sub-tasks: intent detection and slot-value extraction.
We will use $d$ to denote the dialog act, and $u$ to denote the utterance. The NLU component $f_{\omega}$ maps an utterance $u$ to the corresponding dialog act $d$.
Another important notion is the delexicalized dialog act, denoted by $\bar{d}$, which will be combined with the slot value from the database to result in a full dialog act.
We describe the two sub-tasks below in more detail as they are relevant to our algorithm. Let us denote the slot classification function as $f^{slot}$, which maps an utterance $u$ of $k$ tokens to a sequence of $k$ slot IDs, one for each token (each token can be classified into one of the slots, or no slot). Let $f^{intent}$ denote the intent detection function, which maps an utterance $u$ to one of the possible intents. We use pre-trained BERT~\cite{devlin2018bert} to obtain token/sentence embeddings and add two linear classifiers for the two tasks. We use the cross-entropy loss to train both the slot classification and the intent detection tasks (in the former case, we sum the cross-entropy losses over all tokens).
Let $\omega$ denote all the parameters and let
$\mathcal{L}_{slot}(\omega)$ and $\mathcal{L}_{intent}(\omega)$ denote the two losses; the final loss function is their sum:
\begin{equation} \label{eq:nluloss}
\mathcal{L}_{slot}(\omega) + \mathcal{L}_{intent}(\omega)
\end{equation}
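The sketch below illustrates this two-headed architecture on top of a BERT encoder (a simplified, HuggingFace-style illustration under our assumptions; it is not the ConvLab-2 implementation).
\begin{verbatim}
# Simplified sketch of the BERT-NLU heads (not the ConvLab-2 code).
import torch.nn as nn
from transformers import BertModel

class BertNLU(nn.Module):
    def __init__(self, n_slots, n_intents, name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(name)
        dim = self.bert.config.hidden_size
        self.slot_head = nn.Linear(dim, n_slots)      # f^slot: per-token label
        self.intent_head = nn.Linear(dim, n_intents)  # f^intent: per-utterance
        self.ce = nn.CrossEntropyLoss()

    def loss(self, input_ids, attention_mask, slot_labels, intent_label):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        slot_logits = self.slot_head(out.last_hidden_state)  # (B, T, n_slots)
        intent_logits = self.intent_head(out.pooler_output)  # (B, n_intents)
        l_slot = self.ce(slot_logits.transpose(1, 2), slot_labels)  # token-level CE
        l_intent = self.ce(intent_logits, intent_label)             # utterance-level CE
        return l_slot + l_intent        # sum of the two losses
\end{verbatim}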
\subsection{Dialog State Tracking (DST)}
A DST infers the belief state (or user goal) from dialog history. It receives the intents and slot-values from NLU and outputs a belief state, which encodes the extracted information as a compact set of dialog state that contains a set of informable slots and their corresponding values (user constraints), and a set of requested slots. The belief state is often used to query a task-specific database (DB) to obtain the DB state, such as the number of entities that match the user goal.
\subsection{Dialog Policy}
Conditioned on the dialog state, the dialog policy generates the next system action. Since the dialog acts in a session are generated sequentially, the task is often formulated as a Markov Decision Process (MDP), which can be addressed by Reinforcement Learning (RL). The dialog state consists of (1) the belief state; (2) the DB state; (3) the user actions at the current turn; (4) the system actions at the last turn. The dialog state is then vectorized as a binary vector which serves as the input of the policy. Given the input, the policy outputs a distribution over all candidate delexicalized dialog acts and samples actions from that distribution.
Denoting the state space as $\mathcal{S}$ and the action space as $\mathcal{A}$, the policy learns to map states to actions, $\pi_{\theta}: \mathcal{S} \rightarrow \mathcal{A}$, where $\theta$ denotes the parameters of the policy. At the $t$-th turn, the policy observes a state $s_t\in \mathcal{S}$, takes an action $a_t \in \mathcal{A}$ and receives a reward $r_t \in \mathcal{R}$. In this paper, we use proximal policy optimization (PPO)~\cite{schulman2017proximal} and train our policy by maximizing:
\begin{equation} \label{eq:ppo}
\scriptsize{
\mathrm{min} \left( \frac{\pi_{\theta}(a_t|s_t)}{\pi_{old}(a_t|s_t)} A_t, \mathrm{clip} \left(\frac{\pi_{\theta}(a_t|s_t)}{\pi_{old}(a_t|s_t)}, 1-\epsilon, 1+\epsilon \right) A_t \right),
}
\end{equation}
where $\epsilon$ is a hyper-parameter of PPO and $A_t$ is the advantage function. Following common practice~\cite{takanobu2020your}, we use a sparse reward function --- we give a positive reward of $2L$ for a successful dialog and a negative reward of $-L$ for a failed one, where $L$ represents the number of dialog turns. We denote this reward function as $r_{\textup{origin}}$.
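For reference, the clipped surrogate objective above corresponds to the following generic PyTorch-style computation (a standard PPO sketch, not our exact training code):
\begin{verbatim}
# Generic sketch of the PPO clipped surrogate objective.
import torch

def ppo_loss(logp_new, logp_old, advantages, eps=0.2):
    ratio = torch.exp(logp_new - logp_old)           # pi_theta / pi_old
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps)
    # maximizing the objective <=> minimizing its negation
    return -torch.min(ratio * advantages, clipped * advantages).mean()
\end{verbatim}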
\subsection{Natural Language Generation (NLG)}
Given the dialog act generated by the dialog policy, the NLG component maps the act to a natural language utterance, which is often modeled as a conditioned language generation task~\cite{wen2015semantically}. To improve user experience, the generated utterance should (1) fully convey the semantics of a dialog act for task-completion, and (2) be natural, specific, and informative, analogous to human language. In this paper, we apply a retrieval-based NLG implemented in Convlab-2~\cite{zhu2020convlab}.
\section{Related Work}
\subsection{Data Augmentation for NLU}
Similar to ours, \citet{liu2018dialogue} collected extra data to train the dialog system. However, their proposed approaches require access to human teaching, which is time-consuming and laborious.
\citet{liu2020robustness} proposed a model-agnostic toolkit LAUG to approximate natural perturbation and provided different data augmentation approaches to train NLU.
\citet{li2020textat} proposed adversarial token-level perturbation as data augmentation to improve the robustness of NLU.
\citet{wei2019eda} proposed four simple but powerful data augmentation operations to boost the performance of text classification tasks.
Different from the above work, our data augmentation leverages the inverse relationship between NLU and NLG and is generated online during dialog interactions, which is more helpful for the NLU in goal-oriented dialog systems.
\subsection{Dialog Policy Learning}
Reinforcement Learning (RL) is commonly used to learn dialog policy, where users are modeled as a part of the environment and the policy is learned through interactions with users~\cite{zhao2016towards,liu2017iterative}.
\citet{peng2018deep} used pre-collected dialog acts as discrete actions and leveraged model-based reinforcement learning to train deep Q-networks~\cite{mnih2015human}.
To further boost the learning efficiency of dialog systems, \citet{peng2017composite} formulated the task as options over Markov Decision Processes and used hierarchical RL to learn a dialog manager that operates at different option levels. To leverage human-human offline data, \citet{chen2017agent} addressed the problem of when and how to learn from the teacher's experiences.
\citet{takanobu2019guided} proposed a policy network with a multinomial distribution to enlarge the exploration space, together with an efficient reward estimation approach for dialog policy learning.
While these prior works mainly focused on improving dialog policy in component-wise evaluation, our method aims to boost the system-wise performance for the overall dialog system.
Our proposed stochastic policy parameterization is related to \citet{jhunjhunwala2020multi} in the sense that our dialog model samples multiple actions. \citet{jhunjhunwala2020multi} proposed to filter out invalid actions by rules and human interaction, while our stochastic policy parameterization enables better exploration and offers a principled way to compute policy gradient.
\subsection{End-to-End Neural Dialog System}
Our work is also related to end-to-end trained neural systems. An end-to-end trained model takes user utterances as input and directly outputs system responses in natural language, so it can be trained in a system-wise manner naturally.
\citet{lei2018sequicity} proposed a holistic and extendable framework based on a single sequence-to-sequence (seq2seq) model~\cite{sutskever2014sequence} which can be optimized with supervised or reinforcement learning (RL) in end-to-end fashion.
One drawback of end-to-end neural systems is that training the word-level policy with RL is very difficult due to the large action space. To mitigate this issue, \citet{zhao2019rethinking} proposed to learn policy networks in the latent action space.
\citet{lee2019sumbt} proposed a universal and scalable belief tracker by jointly learning the NLU and DST modules, improving the flexibility of domain ontology configurations.
Some recent work~\cite{hosseini2020simple,peng2020soloist,ham2020end} proposed simple end-to-end neural systems that predict the belief state and the sentence response jointly based on strong auto-regressive models such as GPT-2~\cite{radford2019language}. Following this line, \citet{kulhanek2021augpt} and \citet{zhang2021hybrid} proposed improved pre-training techniques and careful post-processing approaches to boost the performance of GPT-2, achieving the best performance in the DSTC9 competition~\cite{gunasekara2020overview}.
Despite the good performance and model simplicity, end-to-end neural systems are not as interpretable as the pipeline system.
In this paper, we focus on optimization for the pipeline dialog system and aim for system-wise improvement.
\section{Introduction}
\label{sec:Introduction}
Nowadays many actors in the soccer industry, from television broadcasters to scouts and professionals in soccer clubs, are increasingly relying on data-driven scores to rank players, find promising talents, and increase fan engagement \cite{gudmundsson2017spatio, pappalardo2017quantifying, bornn2018soccer}.
Several online platforms, such as wyscout.com or whoscored.com, allow for searching players in a database and show aggregated statistics of their performance. Unfortunately, these tools provide no intuitive way to \emph{compare} the \emph{evolution} of performance of players or \emph{suggest} players behaving in a similar manner.
In this paper we describe a web dashboard for searching and comparing data-driven performance scores of soccer players.
The dashboard provides the user with a graphical interface to interact with the {\sf PlayeRank} algorithm \cite{pappalardo2018playerank} that offers a principled evaluation of the performance of players based on data describing all the spatio-temporal events (e.g., passes, shots, etc.) that occur during a match.
Several actors in the sports industry may benefit from our dashboard:
\begin{itemize}
\item a \emph{talent scout}, who searches for promising talents that meet specific constraints (e.g., age or role);
\item a \emph{coach}, who needs to visualize the evolution of the performance of their players to select the team's lineup in the next match;
\item a \emph{sports journalist}, who wants to comment in an article about the performances in a recent match;
\item a \emph{soccer enthusiast}, who wants support in setting up the lineup of their fantasy football team.
\end{itemize}
The design of the dashboard is hence motivated by the need of providing these actors with: (i) a way to visualize the evolution of the performance of a player in time; (ii) a compact way to compare the performance of two or more players; (iii) the possibility to search players by role and to filter them according to specific constraints (e.g., age, trend of growth of their performance, matches played); (iv) the possibility to change the parameters of the {\sf PlayeRank} algorithm to obtain tailored evaluations of performance.
A demo-video of the web dashboard is available at the following link: \url{youtu.be/SzrDRKucRjE}, while an online version will be available soon at \url{playerank.d4science.org}.
\section{Dashboard Architecture}
\label{sec:Architecture}
\begin{figure}[htb]
\centering\includegraphics[scale=0.3]{platform_flow.pdf}
\caption{Schema of the communication between the web dashboard and the API.
}
\label{fig:schema1}
\end{figure}
The web dashboard is designed in Python as a Dash \cite{Dash} app that communicates with the {\sf PlayeRank} algorithm \cite{pappalardo2018playerank} (\autoref{fig:schema1}). The communication channel is realized through an API that implements the exchange of data in two directions: (i)
the web dashboard sends a request for aggregated data through the HTTP protocol, using an URL containing the parameters of the request;
(ii) the API returns the desired aggregated data using JSON format.
A Dash app consists of two parts: the layout and the set of callbacks (\autoref{fig:schema1}).
In the layout, all the components of the graphical interface are specified, each having a unique and unambiguous identifier. This identifier is the connection point to the callbacks part, a dynamic section that specifies the actions to be
executed when an event occurs on a layout component (e.g., the user clicks on a button or
changes the value in a dropdown). The callbacks are the part that actually communicates with the API functions. This communication is transparent to the user, who can only interact with the layout part of the dashboard.
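As a minimal illustration of this layout/callback pattern, the sketch below wires a dropdown to a callback that queries the API over HTTP; the component identifiers and the endpoint URL are placeholders and do not correspond to the actual deployment.
\begin{verbatim}
# Minimal sketch of the Dash layout/callback pattern described above.
# The API endpoint and component ids are illustrative placeholders.
import requests
from dash import Dash, dcc, html, Input, Output

API_URL = "https://example.org/playerank/api"  # hypothetical endpoint

app = Dash(__name__)

# Layout: every component gets a unique identifier.
app.layout = html.Div([
    dcc.Dropdown(id="search-by-name", placeholder="Search by name"),
    html.Div(id="players-table"),
])

# Callback: triggered when the dropdown value changes; it sends an HTTP
# request to the API and returns the data that updates the players table.
@app.callback(Output("players-table", "children"),
              Input("search-by-name", "value"))
def update_players_table(player_name):
    if not player_name:
        return "No player selected."
    payload = requests.get(f"{API_URL}/players",
                           params={"name": player_name}).json()
    return html.Ul([html.Li(p["name"]) for p in payload])

if __name__ == "__main__":
    app.run_server(debug=True)
\end{verbatim}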
\section{Dashboard Layout}
\label{sec:Platform}
Figure \ref{fig:Scheme} shows the components in the layout of the web dashboard and the callbacks associated with each graphical component. The design of the layout has been guided by the \emph{Visual Exploration Paradigm} \cite{du2010visual}, consisting of a three-step process, called the \emph{Information Seeking Mantra}: overview, zoom and filter, and details-on-demand.
The upper part of the layout (Navbar) is a navigation bar containing two search dropdowns (\autoref{fig:Scheme}a, c) with their corresponding buttons (\autoref{fig:Scheme}b, d). The dropdowns are associated with callbacks that: (i) call the API to retrieve all players having the name or role typed by the user (\autoref{fig:Scheme}); (ii) highlight on the pitch part the selected role; (iii) add the selected players in the players table.
The second part of the layout (Pitch \& Settings) contains two elements: the visualization of a soccer pitch that highlights the roles selected in the navigation bar; and the settings panel (\autoref{fig:Scheme}g) that, when its filters are modified, updates the boxplot visualizing the distribution of the players' performance score per role (\autoref{fig:Scheme}f).
The third part (Table \& Settings) contains a set of sliders (\autoref{fig:Scheme}h, i, l) that allow the user to filter the players by age, trend of performance growth and number of matches played. Sliders are associated with a callback that updates the players table, a component that visualizes the names of the selected players, as well as other information like age and role, average performance score and trend of performance growth (\autoref{fig:Scheme}m).
The fourth part (Trend) contains a line chart (\autoref{fig:Scheme}n) visualizing the evolution of the performance scores of the players that have been selected in the players table.
Finally, the last part (Cards) contains some cards that show further information about the selected players and, in addition, it shows other players that are {\em similar} according to the way they have played on the soccer pitch (\autoref{fig:Scheme}p).
In the next sections we describe in detail some of the layout components of the web dashboard, in order to explore its functionalities, and refer the interested reader to the demo-video available at the following link: \url{youtu.be/SzrDRKucRjE}.
\begin{figure}[htb]
\centering
\includegraphics[scale = 0.35]{Scheme-3.pdf}
\caption{Graphical components in the dashboard's layout. The boxes list the callbacks that are triggered when the user interacts with a layout component. }
\label{fig:Scheme}
\end{figure}
\subsection{Pitch Plot}
The soccer pitch component (\autoref{fig:Pitch}) is linked to the ``Search by role'' dropdown in the navigation bar.
When the value of the dropdown changes, a callback is triggered, drawing the corresponding role data on the soccer pitch component. The role is drawn by highlighting the positions of the pitch associated with that role. \autoref{fig:Pitch} shows how the soccer pitch component looks when three roles (left CB, right CB and central FW) are selected in the ``Search by role'' dropdown.
\begin{figure}[htb]
\centering
\includegraphics[scale = 0.4]{Pitch-2}
\caption{Drawing of three roles on the soccer pitch plot: left central back (left CB), right central back (right CB) and central forward (central FW).}
\label{fig:Pitch}
\end{figure}
\subsection{Settings panel}
\label{sec:settings}
\begin{figure}[htb]
\centering
\includegraphics[scale=0.4]{Boxplot-Settings.pdf}
\caption{Illustration of the Settings panel}
\label{fig:Boxplot}
\end{figure}
In general, {\sf PlayeRank} \cite{pappalardo2018playerank} computes a player's performance score in a match as a scalar product between a vector of features describing their performance (e.g., number of shots, number of cards, expected goals, etc.) and a vector of weights specifying the importance of each feature. The web dashboard allows the user to recalibrate the feature weights of the {\sf PlayeRank} algorithm so as to obtain tailored evaluations of performance.
For instance, the user can change the importance of scoring a goal in the player evaluation (using the slider in Figure \ref{fig:Boxplot}c), so as to give more credit to players who score goals. Similarly, the weight associated with each performance feature can be changed by the corresponding slider (Figure \ref{fig:Boxplot}e), which triggers a callback that in turn asks the API to recompute the performance scores with the new weights.
The boxplot in Figure \ref{fig:Boxplot}a provides a visual summary of the distribution of performance scores per role.
It is updated every time a feature weight is changed by the corresponding slider, i.e., the two buttons in \autoref{fig:Boxplot}b-d activate a callback that re-draws the boxplot with the new performance scores.
Note that changing the feature weights from the Settings panel implies changing the visualization of the performance scores in the trend plot (\autoref{fig:trends}) as well.
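The following sketch illustrates the weighted scalar product behind this recomputation; the feature names and weight values are purely illustrative and are not the weights learned by {\sf PlayeRank}.
\begin{verbatim}
import numpy as np

# Hypothetical per-match performance features for one player.
features = {"goals": 1, "shots": 4, "passes": 38, "yellow_cards": 0}
# Weights as set by the sliders in the Settings panel (illustrative values).
weights = {"goals": 1.2, "shots": 0.15, "passes": 0.02, "yellow_cards": -0.5}

def playerank_score(features, weights):
    """Scalar product between the feature vector and the weight vector."""
    keys = sorted(features)
    x = np.array([features[k] for k in keys], dtype=float)
    w = np.array([weights[k] for k in keys], dtype=float)
    return float(x @ w)

print(playerank_score(features, weights))
# Moving the 'goals' slider changes weights['goals']; the callback then
# recomputes every score, which re-draws the boxplot and the trend plot.
\end{verbatim}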
\subsection{Trend plot}
\label{sec:Selected}
The trend plot (\autoref{fig:trends}) allows the user to compare the evolution in time of the performance scores of soccer players based on {\sf PlayeRank}. For instance, \autoref{fig:trends} compares three top players in the Italian first division season 2018/2019: Mauro Icardi (FC Internazionale), Cristiano Ronaldo (Juventus FC) and Lorenzo Insigne (SSC Napoli). The striking impact of Cristiano Ronaldo (the orange curve) on the Italian league is immediately recognizable from the plot: after a shaky start, probably due to adaptation to a new league, the performance scores of C. Ronaldo soon overtake those of renowned strikers in the league, such as Icardi and Insigne.
\begin{figure*}[htb]
\centering\includegraphics[scale=0.455]{ica-ron-ins-3.pdf}
\caption{Comparing the performance scores of Icardi, Ronaldo and Insigne (blue, orange and green curves, respectively).}
\label{fig:trends}
\end{figure*}
By using a proper dropdown (\autoref{fig:trends}b) the user can choose among two types of trends: (i) \emph{trend\_long}, calculated taking equally into account all matches' scores; (ii) \emph{trend\_short} which weights more the player's score in the most recent matches.
The trends specified by the user in the dropdown are drawn, via a dedicated callback and for each player, in the trend plot.
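As an illustration, the snippet below computes the two trends for a sequence of match scores; the exponential weighting used for \emph{trend\_short} is an assumption made for the sake of the example and may differ from the exact scheme implemented in the dashboard.
\begin{verbatim}
import numpy as np

def trend_long(scores):
    """Equally weighted average over all match scores."""
    return float(np.mean(scores))

def trend_short(scores, decay=0.8):
    """Weighted average that emphasises the most recent matches.
    The exponential decay is an assumption for illustration only."""
    scores = np.asarray(scores, dtype=float)
    # Most recent match gets weight 1, the previous one `decay`, etc.
    w = decay ** np.arange(len(scores))[::-1]
    return float(np.sum(w * scores) / np.sum(w))

match_scores = [0.41, 0.38, 0.52, 0.61, 0.58]  # illustrative scores
print(trend_long(match_scores), trend_short(match_scores))
\end{verbatim}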
\section{The talent scout use scenario}
To demonstrate the usefulness of our web dashboard, we consider a crucial task in the soccer industry: talent scouting. The main purpose of a talent scout is searching for promising talents, i.e., \emph{young} and \emph{unknown} players who show good performance and a positive trend of growth. A scout can achieve this goal using the web dashboard as follows.
The scout first searches for players of specific roles (e.g., central forward or central midfielder) using the ``Search by role'' dropdown. Then, s/he uses the age slider (\autoref{fig:talent}a) to select young players (< 22 yo). To focus on players who promise a bright future, the scout selects a positive trend of growth using the trend slider (\autoref{fig:talent}c).
The scout sorts the resulting players by trend of growth (column \texttt{TrendPercentage} in \autoref{fig:talent}), by age and finally by average performance score (column \texttt{PlayeRankMean}).
\autoref{fig:talent} shows the players table resulting from the above described operations.
Note that a player can appear in several rows because he can play different roles in different matches. For example Kean occurs twice in the ordered list, once as central MF and once as central FW.
The scout can eventually select some players in the table by using the appropriate check boxes, hence visualizing the evolution of their performance in a specific trend plot. Referring to the example shown in \autoref{fig:talent}, the talent scout selects Kean, Mancini and Cassata as the most promising young talents in the Italian first division and asks the interface to show their performance plot.
\begin{figure}[htb]
\centering\includegraphics[scale=0.34]{Talent-2.pdf} \caption{Players table resulting from the interaction of the talent scout with the web dashboard.}
\label{fig:talent}
\end{figure}
\section{Conclusions}
In this paper we presented a web dashboard for searching and comparing soccer performance scores. Through a set of API endpoints, the web dashboard can retrieve data about performances and players.
Users can search for players by name or role, getting an immediate comparison of selected players and their performance evolution. Users can also search for players behaving similarly in previously played games. Considering that player scouting in soccer is a highly time-consuming task, our dashboard can help scouts save a considerable amount of time, partially automating the complex process of finding promising talents.
\begin{acks}
This work has been partially funded by EU project SoBigData RI, grant \#654024.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\section{Experiments}\label{sec:experiments}
In this section, we present experiments with our proposed model. First, we introduce two publicly available group activity datasets, the Volleyball dataset~\cite{IbrahimCVPR2016} and the Collective dataset~\cite{ChoiICCV2009}, on which we evaluate our approach. Then we describe implementation details followed by ablation study of the model. Lastly, we compare our approach with the state-of-the-art and provide a deeper analysis of the results. For simplicity, we call our static branch as ``Pose", the dynamic branch with RGB frames as ``RGB" and the dynamic branch with optical flow frames as ``Flow" in the following sections.
\subsection{Datasets}\label{sec:experiments:datasets}
\textbf{The Volleyball dataset}~\cite{IbrahimCVPR2016} consists of clips from 55 videos of volleyball games, which are split into two sets: 39 training videos and 16 testing videos. There are 4830 clips in total, 3493 training clips and 1337 clips for testing. Each clip is 41 frames in length. Available annotations include the group activity label, individual players' bounding boxes and their respective actions, which are provided only for the middle frame of the clip. Bagautdinov~\etal~\cite{BagautdinovCVPR2017} extended the dataset with ground truth bounding boxes for the remaining frames of each clip, which we also use in our experiments. The list of group activity labels contains four main activities (\textit{set}, \textit{spike}, \textit{pass}, \textit{winpoint}) which are divided into two subgroups, \textit{left} and \textit{right}, having eight group activity labels in total. Each player can perform one of nine individual actions: \textit{blocking}, \textit{digging}, \textit{falling}, \textit{jumping}, \textit{moving}, \textit{setting}, \textit{spiking}, \textit{standing} and \textit{waiting}.
\textbf{The Collective dataset}~\cite{ChoiICCV2009} consists of 44 clips with varying lengths, ranging from 193 to around 1800 frames per clip. Every 10th frame is annotated with persons' bounding boxes and one of five individual actions: \textit{crossing}, \textit{waiting}, \textit{queueing}, \textit{walking} and \textit{talking}. The group activity is determined by the action that most people perform in the clip. Following~\cite{QiECCV2018} we use 32 videos for training and 12 videos for testing.
\subsection{Implementation details}
\label{sec:experiments:implementation}
To make a fair comparison with related works we use $T=10$ frames as the input to our model on both datasets: the middle frame, 5 frames before and 4 frames after. For the Volleyball dataset we resize each frame to $720\times1280$ resolution, for the Collective to $480\times720$. During training we randomly sample one frame $F_{t_p}$ from the $T$ input frames for the pose network. During testing we use the middle frame of the input sequence. Following the conventional approach we also use ground truth person bounding boxes for fair comparison with related work. We crop person bounding boxes from the frame $F_{t_p}$ and resize them to $256\times192$, which we process with the pose network obtaining actor-level feature maps. For the I3D network, we use feature maps obtained from the \textit{Mixed\_4f} layer after additional average pooling over the temporal dimension. Then we resize the feature maps to $90\times160$ and use the RoIAlign~\cite{HeICCV2017} layer to extract features of size $5\times5$ for each person bounding box in the middle frame of the input video. We then embed both pose and I3D features into a vector space with the same dimension $d=128$. The transformer encoder uses dropout $0.1$ and the size of the linear layer in the feed-forward network $L$ is set to $256$.
For the training of the static branch we use a batch size of 16 samples and for the dynamic branch we use a batch size of 8 samples. We train the model for 20,000 iterations on both datasets. On the Volleyball dataset we use an SGD optimizer with momentum $0.9$. For the first 10,000 iterations we train with the learning rate $0.01$ and for the last 10,000 iterations with the learning rate $0.001$. On the Collective dataset, the ADAM~\cite{KingmaICLR15} optimizer with hyper-parameters $\beta_1=0.9$, $\beta_2=0.999$ and $\epsilon=e^{-10}$ is used. Initially, we set the learning rate to 0.0001 and decrease it by a factor of ten after 5,000 and 10,000 iterations. The code of our model will be available upon publication.
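For illustration, the following sketch shows how actor-level features can be extracted from the resized feature maps with RoIAlign and embedded to $d=128$ dimensions; the tensor shapes follow the numbers above, while the channel count and the box coordinates are placeholders.
\begin{verbatim}
import torch
from torchvision.ops import roi_align

# Feature map from the dynamic branch after temporal average pooling and
# resizing to 90x160 (the channel count C is illustrative).
B, C, H, W = 1, 832, 90, 160
feature_map = torch.randn(B, C, H, W)

# N actor bounding boxes in the middle frame, given in feature-map
# coordinates as (x1, y1, x2, y2); the values here are random placeholders.
N = 12
boxes = torch.rand(N, 4) * torch.tensor([W / 2, H / 2, W / 2, H / 2])
boxes[:, 2:] += boxes[:, :2]          # ensure x2 > x1 and y2 > y1

# Extract a 5x5 feature crop per actor, as in the implementation details.
actor_feats = roi_align(feature_map, [boxes], output_size=(5, 5))
print(actor_feats.shape)              # -> torch.Size([12, C, 5, 5])

# Each actor crop is then embedded into a d = 128 dimensional vector.
embed = torch.nn.Linear(C * 5 * 5, 128)
actor_tokens = embed(actor_feats.flatten(1))   # -> (N, 128)
\end{verbatim}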
\subsection{Ablation study}
\label{sec:experiments:ablation}
\begin{table}
\centering
\begin{tabular}{cccc}
\toprule
\multirow{2}{*}{\centering\bfseries\# Layers} & \multirow{2}{*}{\centering\bfseries\# Heads} & \multicolumn{1}{p{2.0cm}}{\centering\bfseries\ Positional \\ Encoding} & \multicolumn{1}{p{1.5cm}}{\centering\bfseries\ Group \\ Activity} \\
\bottomrule
1 & 1 & \xmark & 91.0 \\
1 & 1 & \cmark & 92.3 \\
1 & 2 & \cmark & 91.4 \\
2 & 1 & \cmark & 92.1 \\
\bottomrule
\end{tabular}
\smallskip
\caption{\textbf{Actor-Transformer} ablation on the Volleyball dataset using the static actor representation. Positional encoding improves the strength of the representation. Adding more heads and layers did not yield improvements, likely due to the limited number of available training samples.}
\label{table:experiments:volleyball_transformer}
\end{table}
We first perform an ablation study of our approach on the Volleyball dataset~\cite{IbrahimCVPR2016} to show the influence of all three stages of the model. We use group activity accuracy as an evaluation metric in all ablations.
\textbf{Actor-Transformer.} We start with the exploration of the parameters of the actor-transformer. We experiment with the number of layers, the number of heads and positional encoding. Only the static branch represented by the pose network is considered in this experiment. The results are reported in Table~\ref{table:experiments:volleyball_transformer}. Positional encoding is a vital component, giving around a $1.3\%$ improvement. This is expected as the group activity classes of the Volleyball dataset are divided into two subcategories according to the location at which the activity is performed: \textit{left} or \textit{right}. Therefore, explicitly adding information about actors' positions helps the transformer better reason about this part of the group activity. Typically, transformer-based language models benefit from using more layers and/or heads due to the availability of large datasets. However, the Volleyball dataset has a relatively small size and the transformer cannot fully reach its potential with a larger model. Therefore we use one layer with one head in the rest of the experiments.
\begin{table}
\centering
\begin{tabular}{lccc}
\toprule
\multirow{2}{*}{\bfseries Method} & \multicolumn{1}{p{1.25cm}}{\centering\bfseries Static} & \multicolumn{2}{p{2.0cm}}{\centering\bfseries Dynamic} \\
\cmidrule(lr){2-2} \cmidrule(lr){3-4}
& Pose & RGB & Flow \\
\toprule
Base Model & 89.9 & 89.0 & 87.8 \\
Graph~\cite{WuCVPR2019} & 92.0 & 91.1 & 89.5\\
Activity Maps~\cite{AzarCVPR2019} & - & 92.0 & 91.5 \\
\cmidrule{1-4}
Actor-Transformer (ours) & 92.3 & 91.4 & 91.5\\
\bottomrule
\end{tabular}
\smallskip
\caption{\textbf{Actor Aggregation} ablation of person-level features for group activity recognition on the Volleyball dataset. Our actor-transformer outperforms a graph while matching the results of activity maps.}
\label{table:experiments:volleyball_comparison_alternatives}
\end{table}
\textbf{Actor Aggregation.}
Next, we compare the actor-transformer with two recent approaches that combine information across actors to infer group activity. We use a static single frame (pose) and dynamic multiple frames (I3D) models as a baseline. It follows our single branch model without using the actor-transformer part, by directly applying action and activity classifiers on actor-level features from the pose and the I3D networks. The first related method uses relational graph representation to aggregate information across actors~\cite{WuCVPR2019}. We use the authors' publicly available code for the implementation of the graph model. We also use an embedded dot-product function for the appearance relation and distance masking for the position relation, which performed best in ~\cite{WuCVPR2019}. For fair comparison, we replace the actor-transformer with a graph and keep the other parts of our single branch models untouched. The second related method is based on multiple refinement stages using spatial activity maps~\cite{AzarCVPR2019}. As we are using the same backbone I3D network, we directly compare with the results obtained in~\cite{AzarCVPR2019}. The comparisons are reported in Table~\ref{table:experiments:volleyball_comparison_alternatives}. Our actor-transformer outperforms the graph for all backbone networks with good improvement for optical flow features without explicitly building any relationship representation. We match the results of activity maps~\cite{AzarCVPR2019} on optical flow and having slightly worse results on RGB. However, we achieve these results without the need to convert bounding box annotations into segmentation masks and without multiple stages of refinement.
\begin{table}
\centering
\begin{tabular}{lcc}
\toprule
\bfseries Method & \bfseries Pose + RGB & \bfseries Pose + Flow \\
\bottomrule
Early - summation & 91.2 & 88.5 \\
Early - concatenation & 91.8 & 89.7 \\
Late & 93.5 & 94.4 \\
\bottomrule
\end{tabular}
\smallskip
\caption{\textbf{Fusion} ablation of static and dynamic representations on the Volleyball dataset. The late fusion outperforms the early fusion approaches. }
\label{table:experiments:volleyball_comparison_fusion}
\end{table}
\textbf{Fusion.}
In the last ablation, we compare different fusion strategies to combine the static and dynamic representations of our model. For the late fusion, we set the weight for the static representation to be twice as large as the weight for the dynamic representation. The results are presented in Table~\ref{table:experiments:volleyball_comparison_fusion}. The early fusion is not beneficial for our model, performing similar or even worse than single branch models. Early fusion strategies require the actor-transformer to reason about both static and dynamic features. Due to the small size of the Volleyball dataset, our model can not fully exploit this type of fusion. Concentrating on each of two representations separately helps the model to better use the potential of static and dynamic features. Despite Flow only slightly outperforming RGB ($91.5\%$ vs. $91.4\%$), fusion with static representation has a bigger impact ($93.9\%$ vs. $93.1\%$) showing that Flow captures more complementary information to Pose than RGB.
\begin{table}
\centering
\resizebox{0.99\columnwidth}{!}{
\begin{tabular}{lccc}
\toprule
\multirow{2}{*}{\bfseries Method} & \multirow{2}{*}{\bfseries Backbone} & \multicolumn{1}{p{1.2cm}}{\centering\bfseries Group \\ Activity} & \multicolumn{1}{p{1.4cm}}{\centering\bfseries Individual \\ Action} \\
\bottomrule
Ibrahim~\etal~\cite{IbrahimCVPR2016} & AlexNet & 81.9 & - \\
Shu~\etal~\cite{ShuCVPR2017} & VGG16 & 83.3 & - \\
Qi~\etal~\cite{QiECCV2018} & VGG16 & 89.3 & - \\
Ibrahim and Mori~\cite{IbrahimECCV2018} & VGG19 & 89.5 & - \\
Bagautdinov~\etal~\cite{BagautdinovCVPR2017} & Inception-v3 & 90.6 & 81.8 \\
Wu~\etal~\cite{WuCVPR2019} & Inception-v3 & 92.5 & 83.0 \\
Azar~\etal~\cite{AzarCVPR2019} & I3D & 93.0 & - \\
\cmidrule{1-4}
Ours (RGB + Flow) & I3D & 93.0 & 83.7 \\
Ours (Pose + RGB) & HRNet + I3D & 93.5 & 85.7 \\
Ours (Pose + Flow) & HRNet + I3D & \textbf{94.4} & \textbf{85.9} \\
\bottomrule
\end{tabular}}
\smallskip
\caption{\textbf{Volleyball dataset comparison} for individual action prediction and group activity recognition. Our Pose + Flow model outperforms the state-of-the-art.}
\label{table:experiments:volleyball_state_of_the_art}
\end{table}
\begin{table}
\centering
\resizebox{0.92\columnwidth}{!}{
\begin{tabular}{lcc}
\toprule
\multirow{2}{*}{\bfseries Method} & \multirow{2}{*}{\bfseries Backbone} & \multicolumn{1}{p{1.5cm}}{\centering\bfseries Group \\ Activity}\\
\bottomrule
Lan~\etal~\cite{LanPAMI2012} & None & 79.7 \\
Choi and Salvarese~\cite{ChoiECCV2012} & None & 80.4 \\
Deng~\etal~\cite{DengCVPR2016} & AlexNet & 81.2 \\
Ibrahim~\etal~\cite{IbrahimCVPR2016} & AlexNet & 81.5 \\
Hajimirsadeghi~\etal~\cite{HajimirsadeghiCVPR2015} & None & 83.4 \\
Azar~\etal~\cite{AzarCVPR2019} & I3D & 85.8 \\
Li and Chuah~\cite{LiICCV2017} & Inception-v3 & 86.1 \\
Shu~\etal~\cite{ShuCVPR2017} & VGG16 & 87.2 \\
Qi~\etal~\cite{QiECCV2018} & VGG16 & 89.1 \\
Wu~\etal~\cite{WuCVPR2019} & Inception-v3 & 91.0 \\
\cmidrule{1-3}
Ours (RGB + Flow) & I3D & \textbf{92.8} \\
Ours (Pose + RGB) & HRNet + I3D & 91.0 \\
Ours (Pose + Flow) & HRNet + I3D & 91.2 \\
\bottomrule
\end{tabular}}
\smallskip
\caption{\textbf{Collective dataset comparison} for group activity recognition. Our Pose + RGB and Pose + Flow models achieve the state-of-the-art results.}
\label{table:experiments:collective_state_of_the_art}
\end{table}
\subsection{Comparison with the state-of-the-art}\label{sec:experiments:stateoftheart}
\textbf{Volleyball dataset.} Next, we compare our approach with the state-of-the-art models on the Volleyball dataset in Table~\ref{table:experiments:volleyball_state_of_the_art} using the accuracy metrics for group activity and individual action predictions. We present two variations of our model, late fusion of Pose with RGB (Pose + RGB) and Pose with optical flow (Pose + Flow). Both variations surpass all the existing methods by a considerable margin: $0.5\%$ and $1.4\%$ for group activity, $2.7\%$ and $2.9\%$ for individual action recognition. This supports our hypothesis that the transformer-based model with the static and dynamic actor representations is beneficial for the group activity task. Moreover, we also compare the late fusion of RGB with the optical flow representation (RGB + Flow) and achieve the same group activity accuracy as in~\cite{AzarCVPR2019}, which also uses a backbone I3D network. However, we achieve these results with a much simpler approach and without requiring any segmentation annotation. Combining all three representations gives the same performance as Pose + Flow, showing that using a single dynamic representation is sufficient.
\textbf{Collective dataset.} We further evaluate our model on the Collective dataset and provide comparisons with previous methods in Table~\ref{table:experiments:collective_state_of_the_art}. We use only group activity accuracy as a metric, following the same approach as the related work. Interestingly, our individual branches on the Collective dataset have much more variation in their performance than on the Volleyball dataset: Flow - $83.8\%$, Pose - $87.9\%$, RGB - $90.8\%$. However, with both fused models, Pose + RGB and Pose + Flow, we achieve state-of-the-art results, slightly outperforming the best published results of~\cite{WuCVPR2019}. We also explore the fusion of RGB and Flow representations and find that this combination performs best on the Collective dataset, reaching $92.8\%$ accuracy. We hypothesize that the Pose and RGB representations capture similar information that is complementary to the optical flow representation, as supported by the results of the Pose + RGB model, which is just slightly better than the RGB representation alone. We also try to combine all three representations without receiving any additional improvement over RGB + Flow. It is worth noting that with the same backbone I3D network Azar \etal \cite{AzarCVPR2019} achieve $85.8\%$ accuracy, which is $7.0\%$ lower than our results, showing the benefits of the transformer-based model over their activity maps approach.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{images/attn_vis.pdf}
\caption{\textbf{Example of per-actor attention} obtained by the actor-transformer. Most attention is concentrated on the key actor (5), who performs the \emph{setting} action, which helps to correctly predict the \emph{left set} group activity. Best viewed in the digital version.}
\label{fig:attn_vis_volleyball}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.7\linewidth]{images/volleyball_activities.pdf}
\caption{\textbf{Volleyball dataset confusion matrix} for group activity recognition. Our model achieves over $90\%$ accuracy for each group activity.}
\label{fig:cm_volleyball}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.7\linewidth]{images/collective_activities.pdf}
\caption{\textbf{Collective dataset confusion matrix} for group activity recognition. Most confusion comes form distinguishing \textit{crossing} and \textit{walking}.}
\label{fig:cm_collective}
\end{figure}
\subsection{Analysis}\label{sec:experiments:analysis}
To analyze the benefits of our actor-transformer we illustrate the attention of the transformer in Figure~\ref{fig:attn_vis_volleyball}. Each row of the matrix on the right represents the distribution of attention $A_h$ in Equation~\ref{eq:att_eqn} using the representation of the actor indexed by that row as the query. For most actors, the transformer concentrates mostly on key actor number 5, who performs the \emph{setting} action of the \emph{left set} group activity. To further understand the performance of our model we also present confusion matrices for group activity recognition on the Volleyball dataset in Figure~\ref{fig:cm_volleyball} and the Collective dataset in Figure~\ref{fig:cm_collective}. For every group activity on the Volleyball dataset our model achieves an accuracy over $90\%$, with the lowest accuracy for the \textit{right set} class ($90.6\%$). Most of the confusion arises from discriminating \textit{set}, \textit{spike} and \textit{pass} from each other, irrespective of their spatial location, \textit{left} or \textit{right}. Also, the model struggles to distinguish between \textit{right winpoint} and \textit{left winpoint}. On the Collective dataset, our approach reaches perfect recognition for the \textit{queueing} and \textit{talking} classes. However, two activities, \textit{crossing} and \textit{walking}, lead to the most confusion for our model. Several works~\cite{WangCVPR2017, AzarCVPR2019} argue that \textit{crossing} and \textit{walking} are naturally the same activity, as they differ only in the relation between the person and the street. Integrating global scene-level information could potentially help to distinguish these two activities, which we leave for future work.
\section{Introduction} \label{sec:intro}
The goal of this paper is to recognize the activity of an individual and the group that it belongs to~\cite{ChoiICCV2009}. Consider for example a volleyball game where an individual player \emph{jumps} and the group is performing a \emph{spike}. Besides sports, such group activity recognition has several applications including crowd monitoring, surveillance and human behavior analysis. Common tactics to recognize group activities exploit representations that model spatial graph relations between individual actors (\eg \cite{IbrahimECCV2018, QiECCV2018,WuCVPR2019}) and follow actors and their movements over time (\eg \cite{IbrahimCVPR2016, QiECCV2018, ShuCVPR2017}). The majority of previous works explicitly model these spatial and temporal relationships based on the location of the actors. We propose an implicit spatio-temporal model for recognizing group activities.
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{images/figure_1.pdf}
\caption{We explore two complementary static and dynamic actor representations for group activity recognition. The static representation is captured by 2D pose features from a single frame while the dynamic representation is obtained from multiple RGB or optical flow frames. These representations are processed by a transformer that infers group activity.}
\label{fig:intro}
\end{figure}
We are inspired by progress in natural language processing (NLP) tasks, which also require temporal modeling to capture the relationship between words over time. Typically, recurrent neural networks (RNN) and their variants (long short-term memory (LSTM) and gated recurrent unit (GRU)) were the first choices for NLP tasks~\cite{ChronICCV2015, MikolovICASSP2011, SutskeverICML2011}. While designed to model a sequence of words over time, they experience difficulty modeling long sequences~\cite{CollinsArxiv2016}. More recently, the transformer network~\cite{VaswaniNIPS2017} has emerged as a superior method for NLP tasks~\cite{DaiACL2019, DevlinNAACL2019, LampleArxiv2019, YangArxiv2019} since it relies on a self-attention mechanism that enables it to better model dependencies across words over time without a recurrent or recursive component. This mechanism allows the network to selectively extract the most relevant information and relationships. We hypothesize a transformer network can also better model relations between actors and combine actor-level information for group activity recognition compared to models that require explicit spatial and temporal constraints. A key enabler is the transformer's self-attention mechanism, which learns interactions between the actors and selectively extracts information that is important for activity recognition. Therefore, we do not rely on any \textit{a priori} spatial or temporal structure like graphs~\cite{QiECCV2018, WuCVPR2019} or models based on RNNs~\cite{DengCVPR2016, IbrahimCVPR2016}. We propose transformers for recognizing group activities.
Besides introducing the transformer in group activity recognition, we also pay attention to the encoding of individual actors. First, by incorporating simple yet effective positional encoding~\cite{VaswaniNIPS2017}. Second, by explicit modeling of static and dynamic representations of the actor, which is illustrated in Figure~\ref{fig:intro}. The static representation is captured by pose features that are obtained by a 2D pose network from a single frame. The dynamic representation is achieved by a 3D CNN taking as input the stacked RGB or optical flow frames, similarly to~\cite{AzarCVPR2019}. This representation enables the model to capture the motion of each actor without explicit temporal modeling via RNN or graphical models. Meanwhile, the pose network can easily discriminate between actions with subtle motion differences. Both types of features are passed into a transformer network where relations are learned between the actors, enabling better recognition of the activity of the group. We refer to our approach as actor-transformers. Finally, given that static and dynamic representations capture unique, but complementary, information, we explore the benefit of aggregating this information through different fusion strategies.
We make three contributions in this paper. First, we introduce the transformer network for group activity recognition. It refines and aggregates actor-level features, without the need for any explicit spatial and temporal modeling. Second, we feed the transformer with a rich static and dynamic actor-specific representation, expressed by features from a 2D pose network and 3D CNN. We empirically study different ways to combine these representations and show their complementary benefits. Third, our actor-transformers achieve state-of-the-art results on two publicly available benchmarks for group activity recognition, the Collective~\cite{ChoiICCV2009} and Volleyball ~\cite{IbrahimCVPR2016} datasets, outperforming the previous best published results~\cite{AzarCVPR2019, WuCVPR2019} by a considerable margin.
\section{Conclusion}\label{sec:conclusion}
We proposed a transformer-based network as a refinement and aggregation module of actor-level features for the task of group activity recognition. We show that without any task-specific modifications the transformer matches or outperforms related approaches optimized for group activity recognition. Furthermore, we studied static and dynamic representations of the actor, including several ways to combine these representations in an actor-transformer. We achieve the state-of-the-art on two publicly available benchmarks surpassing previously published results by a considerable margin.
{\small
\bibliographystyle{ieee_fullname}
\section{Model}\label{sec:proposed}
The goal of our method is to recognize group activity in a multi-actor scene through enhancement and aggregation of individual actor features. We hypothesize that the self-attention mechanism provided by transformer networks is a flexible enough model that can be successfully used out-of-the-box, without additional tricks or tweaks, for the inference of the activity of the whole group given the representation of each actor.
Our approach consists of three main stages presented in Figure \ref{fig:model}: actor feature extractor, group activity aggregation and fusion. In brief, the input to our model is a sequence of video frames $F_t, t=1,..,T$ with $N$ actor bounding boxes provided for each frame where $T$ is the number of frames. We obtain the static and the dynamic representation of each actor by applying a 2D pose network on a single frame and a 3D CNN on all input frames. The dynamic representation can be built from RGB or optical flow frames, which are processed by a 3D CNN followed by a RoIAlign~\cite{HeICCV2017} layer. Next, actor representations are embedded into a subspace such that each actor is represented by a 1-dimensional vector. In the second stage, we apply a transformer network on top of these representations to obtain the action-level features. These features are max pooled to capture the activity-level features. A linear classifier is used to predict individual actions and group activity using the action-level and group activity-level features, respectively. In the final stage we introduce fusion strategies before and after the transformer network to explore the benefit of fusing information across different representations. We describe each stage in more details in the following subsections.
\subsection{Actor feature extractor}
\label{sec:proposed:person_extractor}
All human actions involve the motion of body joints, such as hands and legs. This applies not only to fine-grained actions that are performed in sports activities (\eg \textit{spike} and \textit{set} in volleyball) but also to every day actions such as \textit{walking} and \textit{talking}. This means that it is important to capture not only the position of joints but their temporal dynamics as well. For this purpose, we utilize two distinct backbone models to capture both position and motion of joints and actors themselves.
To obtain joint positions, a pose estimation model is applied. It receives as input a bounding box around the actor and predicts the location of key joints. Our approach is independent of the particular choice of the pose estimation model. We select the recently published HRNet~\cite{SunCVPR2019} as our pose network as it has a relatively simple design, while achieving state-of-the-art results on pose estimation benchmarks. We use the features from the last layer of the network, right before the final classification layer, in all our experiments. Specifically, we use the smallest network \textit{pose\_hrnet\_w32} trained on COCO keypoints~\cite{LinECCV2014}, which shows good enough performance for our task as well.
The second backbone network is responsible for modeling the temporal dynamics. Several studies have demonstrated that 3D CNNs, with enough available data for training~\cite{TranICCV2015, CarreiraCVPR2017}, can build strong spatio-temporal representations for action recognition. Accordingly, we utilize the I3D~\cite{CarreiraCVPR2017} network in our framework since the pose network alone cannot capture the motion of the joints from a single frame. The I3D network processes the stacked $F_t, t=1,..,T$ frames with inflated 3D convolutions. We consider RGB and optical flow representations as they can capture different motion aspects. As 3D CNNs are computationally expensive, we employ a \textit{RoIAlign}~\cite{HeICCV2017} layer to extract features for each of the $N$ actor bounding boxes while processing the whole set of input frames with the network only once.
\subsection{Transformer}\label{sec:proposed:transformer}
Transformer networks were originally introduced for machine translation in~\cite{VaswaniNIPS2017}. The transformer network consists of two parts: encoder and decoder. The encoder receives an input sequence of words (source) that is processed by a stack of identical layers consisting of a multi-head self-attention layer and a fully-connected feed-forward network. Then, a decoder generates an output sequence (target) through the representation generated by the encoder. The decoder is built in a similar way as the encoder having access to the encoded sequence. The self-attention mechanism is the vital component of the transformer network, which can also be successfully used to reason about actors' relations and interactions. In the following section, we describe the self-attention mechanism itself and how the transformer architecture can be applied to the challenging task of group activity recognition in video.
Attention $A$ is a function that represents a weighted sum of the values $V$. The weights are computed by matching a query $Q$ with the set of keys $K$. The matching function can have different forms, most popular is the scaled dot-product~\cite{VaswaniNIPS2017}. Formally, attention with the scaled dot-product matching function can be written as:
\begin{align}
A(Q, K, V)= \textup{softmax}(\frac{QK^T}{\sqrt{d}})V
\end{align}
where $d$ is the dimension of both queries and keys. In the self-attention module all three representations ($Q$, $K$, $V$) are computed from the input sequence $S$ via linear projections so $A(S) = A(Q(S), K(S), V(S))$.
Since attention is a weighted sum of all values it overcomes the problem of forgetfulness over time, which is well-studied for RNNs and LSTMs~\cite{CollinsArxiv2016}. In sequence-to-sequence modeling this mechanism gives more importance to the most relevant words in the source sequence. This is a desirable property for group activity recognition as well because we can enhance the information of each actor's features based on the other actors in the scene without any spatial constraints. Multi-head attention $A_h$ is an extension of attention with several parallel attention functions using separate linear projections $h_i$ of ($Q$, $K$, $V$):
\begin{align}
A_h(Q, K, V)= \textup{concat}(h_1, ..., h_m)W,
\label{eq:att_eqn}
\end{align}
\begin{align}
h_i = A(QW_i^Q, KW_i^K, VW_i^V)
\end{align}
Transformer encoder layer $E$ consists of multi-head attention combined with a feed-forward neural network $L$:
\begin{align}
L(X) = Linear(Dropout(ReLU(Linear(X))))
\end{align}
\begin{align}
\hat{E}(S) = LayerNorm(S + Dropout(A_h(S)))
\end{align}
\begin{align}
E(S) = LayerNorm(\hat{E}(S) + Dropout(L(\hat{E}(S))))
\end{align}
The transformer encoder can contain several such layers, which sequentially process the input $S$.
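A compact sketch of a single-head encoder layer following Eqs.~(1)--(6) is given below; it is meant as an illustration and may differ from our actual implementation in minor details.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActorTransformerLayer(nn.Module):
    """One encoder layer E(S); a single attention head is shown."""
    def __init__(self, d=128, d_ff=256, dropout=0.1):
        super().__init__()
        self.q = nn.Linear(d, d)
        self.k = nn.Linear(d, d)
        self.v = nn.Linear(d, d)
        self.ff = nn.Sequential(nn.Linear(d, d_ff), nn.ReLU(),
                                nn.Dropout(dropout), nn.Linear(d_ff, d))
        self.norm1 = nn.LayerNorm(d)
        self.norm2 = nn.LayerNorm(d)
        self.drop = nn.Dropout(dropout)
        self.scale = d ** 0.5

    def forward(self, S):                     # S: (N, d) actor features
        Q, K, V = self.q(S), self.k(S), self.v(S)
        A = F.softmax(Q @ K.t() / self.scale, dim=-1) @ V     # Eq. (1)
        E_hat = self.norm1(S + self.drop(A))                  # Eq. (5)
        return self.norm2(E_hat + self.drop(self.ff(E_hat)))  # Eq. (6)

# Example: refine N = 12 actor embeddings of dimension d = 128.
S = torch.randn(12, 128)
layer = ActorTransformerLayer()
action_feats = layer(S)                        # action-level features
activity_feat = action_feats.max(dim=0).values # max-pooled group feature
\end{verbatim}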
In our case $S$ is a set of actors' features $S=\{s_i|i=1,..,N\}$ obtained by actor feature extractors. As features $s_i$ do not follow any particular order, the self-attention mechanism is a more suitable model than RNN and CNN for refinement and aggregation of these features. An alternative approach can be incorporating a graph representation as in~\cite{WuCVPR2019} which also does not rely on the order of the $s_i$. However, the graph representation requires explicit modeling of connections between nodes through appearance and position relations. The transformer encoder mitigates this requirement relying solely on the self-attention mechanism. However, we show that the transformer encoder can benefit from implicitly employing spatial relations between actors via positional encoding of $s_i$. We do so by representing each bounding box $b_i$ of the respective actor's features $s_i$ with its center point $(x_i, y_i)$ and encoding the center point with the same function $PE$ as in~\cite{VaswaniNIPS2017}. To handle 2D space we encode $x_i$ with the first half of dimensions of $s_i$ and $y_i$ with the second half. In this work we consider only the encoder part of the transformer architecture leaving the decoder part for future work.
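The following sketch illustrates this 2D positional encoding of a bounding box center, with $x$ occupying the first half of the dimensions and $y$ the second half; coordinate normalization is omitted for simplicity.
\begin{verbatim}
import torch

def sinusoidal_pe(pos, dims):
    """Standard sinusoidal encoding of a scalar position into `dims` values."""
    i = torch.arange(dims // 2, dtype=torch.float32)
    freq = 1.0 / (10000 ** (2 * i / dims))
    angles = pos * freq
    return torch.cat([torch.sin(angles), torch.cos(angles)])

def box_center_pe(box, d=128):
    """Encode the (x, y) center of a box: x fills the first d/2 dimensions,
    y the second d/2, mirroring the construction described in the text."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    return torch.cat([sinusoidal_pe(cx, d // 2), sinusoidal_pe(cy, d // 2)])

# Illustrative bounding box (x1, y1, x2, y2); the encoding is added to s_i.
pe = box_center_pe([40.0, 20.0, 60.0, 80.0])
print(pe.shape)   # torch.Size([128])
\end{verbatim}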
\subsection{Fusion}\label{sec:proposed:fusion}
The work by Simonyan and Zisserman~\cite{SimonyanNIPS2014} demonstrated the improvements in performance that can be obtained by fusing different modalities that contain complementary information. Following their example, we also incorporate several modalities into one framework. We explore two branches, static and dynamic. The static branch is represented by the pose network which captures the static position of body joints, while the dynamic branch is represented by I3D and is responsible for the temporal features of each actor in the scene. As RGB and optical flow can capture different aspects of motion, we study dynamic branches with both representations of the input video. To fuse the static and dynamic branches we explore two fusion strategies: early fusion of actors' features before the transformer network and late fusion which aggregates the predictions of the classifiers, similarly to~\cite{SimonyanNIPS2014}. Early fusion enables access to both static and dynamic features before inference of group activity. Late fusion processes static and dynamic features separately for group activity recognition and can concentrate on each type of feature individually.
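The sketch below contrasts the two strategies on illustrative tensors; fusing softmax scores in the late variant and the 2:1 static-to-dynamic weighting reflect the setup used in our ablation, but the exact aggregation details are simplified assumptions.
\begin{verbatim}
import torch
import torch.nn.functional as F

# Actor features from the two branches (N = 12 actors, d = 128 each).
static_feats, dynamic_feats = torch.randn(12, 128), torch.randn(12, 128)

# Early fusion: combine features before the actor-transformer.
early_sum = static_feats + dynamic_feats                      # summation
early_cat = torch.cat([static_feats, dynamic_feats], dim=-1)  # concatenation

# Late fusion: a separate branch (transformer + classifier) per modality,
# then aggregate class scores; the static branch gets twice the weight.
static_scores, dynamic_scores = torch.randn(8), torch.randn(8)  # logits
late = 2.0 * F.softmax(static_scores, dim=-1) \
     + 1.0 * F.softmax(dynamic_scores, dim=-1)
group_activity = late.argmax()
\end{verbatim}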
\subsection{Training objective}
\label{sec:proposed:training}
Our model is trained in an end-to-end fashion to simultaneously predict individual actions of each actor and group activity. For both tasks we use a standard cross-entropy loss for classification and combine two losses in a weighted sum:
\begin{align}
\mathcal{L}= \lambda_g\mathcal{L}_{g}(y_g, \tilde{y}_g) + \lambda_a\mathcal{L}_{a}(y_a, \tilde{y}_a)
\end{align}
where $\mathcal{L}_{g}, \mathcal{L}_{a}$ are cross-entropy losses, ${y}_g$ and ${y}_a$ are ground truth labels, $\tilde{y}_g$ and $\tilde{y}_a$ are model predictions for group activity and individual actions, respectively. $\lambda_g$ and $\lambda_a$ are scalar weights of the two losses. We find that equal weights for individual actions and group activity perform best so we set $\lambda_g=\lambda_a=1$ in all our experiments, which we detail next.
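A minimal sketch of this objective in PyTorch is shown below; the shapes are illustrative (one clip with 8 group classes and 12 actors with 9 action classes).
\begin{verbatim}
import torch
import torch.nn.functional as F

lambda_g = lambda_a = 1.0   # equal weights, as in all our experiments

def combined_loss(group_logits, action_logits, y_group, y_actions):
    """Weighted sum of group-activity and individual-action cross-entropies."""
    loss_g = F.cross_entropy(group_logits, y_group)
    loss_a = F.cross_entropy(action_logits, y_actions)
    return lambda_g * loss_g + lambda_a * loss_a

group_logits = torch.randn(1, 8, requires_grad=True)
action_logits = torch.randn(12, 9, requires_grad=True)
loss = combined_loss(group_logits, action_logits,
                     torch.tensor([3]), torch.randint(0, 9, (12,)))
loss.backward()
\end{verbatim}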
\section{Related Work}\label{sec:related}
\subsection{Video action recognition}
\textbf{CNNs for video action recognition.} While 2D convolutional neural networks (CNN) have experienced enormous success in image recognition, initially they could not be directly applied to video action recognition, because they do not account for time, which is vital information in videos. Karpathy~\etal~\cite{KarpathyCVPR2014} proposed 2D CNNs to process individual frames and explored different fusion methods in an effort to include temporal information. Simonyan and Zisserman~\cite{SimonyanNIPS2014} employed a two-stream CNN architecture that independently learns representations from input RGB image and optical flow stacked frames. Wang~\etal~\cite{WangECCV2016} proposed to divide the video into several segments and used a multi-stream approach to model each segment with their combination in a learnable way. Many leveraged LSTMs to model long-term dependencies across frames~\cite{DonahuePAMI2014, LiCVIU2018, NgCVPR2015, SharmaICLR2016}. Ji~\etal~\cite{Ji2PAMI2010} were the first to extend 2D CNN to 3D, where time was the third dimension. Tran~\etal~\cite{TranICCV2015} demonstrated the effectiveness of 3D CNNs by training on a large collection of noisy labeled videos~\cite{KarpathyCVPR2014}. Carreira and Zisserman~\cite{CarreiraCVPR2017} inflated 2D convolutional filters to 3D, exploiting training on large collections of labeled images and videos. The recent works explored leveraging feature representation of the video learned by 3D CNNs and suggesting models on top of that representation~\cite{HusseinCVPR2019, WangECCV2018}. Wang and Gupta~\cite{WangECCV2018} explored spatio-temporal graphs while Hussein~\etal~\cite{HusseinCVPR2019} suggested multi-scale temporal convolutions to reason over minute-long videos. Similarly, we also rely on the representation learned by a 3D CNN~\cite{CarreiraCVPR2017} to capture the motion and temporal features of the actors. Moreover, we propose to fuse this representation with the static representation of the actor-pose to better capture exact positions of the actor's body joints.
\textbf{Attention for video action recognition.} Originally proposed for NLP tasks~\cite{BahdanauICLR2014} attention mechanisms have also been applied to image caption generation~\cite{XuICML2015}. Several studies explored attention for video action recognition by incorporating attention via LSTM models~\cite{LiCVIU2018, SharmaICLR2016}, pooling methods~\cite{GirdharNIPS2017, LongCVPR2018} or graphs~\cite{WangECCV2018}. Attention can also be guided through different modalities, such as pose~\cite{Baradel2018HumanAR, DuICCV2017} and motion~\cite{LiCVIU2018}. More recently, transformer networks~\cite{VaswaniNIPS2017} have received special recognition due to the self-attention mechanism that can better capture long-term dependencies, compared to RNNs. Integrating the transformer network for visual tasks has also emerged~\cite{GirdharCVPR2019, ParmarICML2018}. Parmar~\etal~\cite{ParmarICML2018} generalized the transformer to an image generation task, while Girdhar~\etal~\cite{GirdharCVPR2019} created a video action transformer network on top of a 3D CNN representation~\cite{CarreiraCVPR2017} for action localization and action classification. Similarly, we explore the transformer network as an approach to refine and aggregate actor-level information to recognize the activity of the whole group. However, we use representations of all actors to create query, key and values to refine each individual actor representation and to infer group activity, while ~\cite{GirdharCVPR2019} used only one person box proposal for query and clip around the person for key and values to predict the person's action.
\textbf{Pose for video action recognition.}
Most of the human actions are highly related to the position and motion of body joints. This has been extensively explored in the literature, including hand-crafted pose features~\cite{JhuangICCV2013, NieCVPR2015, WangCVPR2013}, skeleton data~\cite{DuCVPR2015, HouCSVT2018, LiuECCV2016, ShahroudyCVPR2016, SongAAAI2017}, body joint representation~\cite{CaoIJCAI2016, ChronICCV2015} and attention guided by pose~\cite{Baradel2018HumanAR, DuICCV2017}. However, these approaches were only trained to recognize an action for one individual actor, which does not generalize well to inferring group activity. In our work we explore the fusion of the pose features with dynamic representations, following the multi-stream approach~\cite{ChoutasCVPR2018, TuPR2018, ZolfaghariICCV2017} for action recognition, but we leverage it to infer group activity.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.85\linewidth]{images/model.pdf}
\caption{\textbf{Overview of the proposed model.} An input video with $T$ frames and $N$ actor bounding boxes is processed by two branches: static and dynamic. The static branch outputs an HRNet~\cite{SunCVPR2019} pose representation for each actor bounding box. The dynamic branch relies on I3D~\cite{CarreiraCVPR2017}, which receives as input either stacked RGB or optical flow frames. To extract actor-level features after I3D we apply a RoIAlign~\cite{HeICCV2017} layer.
A transformer encoder ($E$) refines and aggregates actor-level features followed by individual action and group activity classifiers. Two fusion strategies are supported. For early fusion we combine actor-level features of the two branches before $E$, in the late fusion we combine the classifier prediction scores. }
\label{fig:model}
\end{figure*}
\subsection{Group activity recognition}
Group activity recognition has recently received more attention largely due to the introduction of the public Collective dataset~\cite{ChoiICCV2009} and Volleyball dataset~\cite{IbrahimCVPR2016}. Initially, methods relied on hand-crafted features extracted for each actor, which were then processed by probabilistic graphical models~\cite{AmerECCV2014, ChoiECCV2012, Choi2014PAMI, ChoiCVPR2011, HajimirsadeghiCVPR2015, LANCVPR2012, LanPAMI2012}. With the emergence of deep learning, the performance of group activity recognition has steadily increased. Some of the more successful approaches utilized RNN-type networks. Ibrahim~\etal~\cite{IbrahimCVPR2016} used LSTM to model the action dynamics of individual actors and aggregate the information to predict group activity. Deng~\etal~\cite{DengCVPR2016} integrated graphical models with RNN. Shu~\etal~\cite{ShuCVPR2017} used a two-level hierarchy of LSTMs that simultaneously minimized the energy of the predictions while maximizing the confidence. Bagautdinov~\etal~\cite{BagautdinovCVPR2017} jointly detected every actor in a video, predicted their actions and the group activity by maintaining temporal consistency of box proposals with the help of RNN. Wang~\etal~\cite{WangCVPR2017} utilizes single person dynamics, intra-group and inter-group interactions with LSTM-based model. Li and Chuah~\cite{LiICCV2017} took an alternative approach, where captions were generated for every video frame and then were used to infer group activity. Ibrahim and Mori~\cite{IbrahimECCV2018} created a
relational representation of each person which is then used for multi-person activity recognition. Qi~\etal~\cite{QiECCV2018} proposed an attentive semantic RNN that utilized spatio-temporal attention and semantic graphs to capture inter-group relationships. Lately, studies have been moving away from RNNs. Azar~\etal~\cite{AzarCVPR2019} used intermediate representations called activity maps, generated by a CNN, to iteratively refine group activity predictions. Wu~\etal~\cite{WuCVPR2019} built an actor relation graph using a 2D CNN and graph convolutional networks to capture both the appearance and position relations between actors. Like Wu~\etal~\cite{WuCVPR2019} we also rely on actor-level representations but differently, we utilize the self-attention mechanism that has the ability to selectively highlight actors and group relations, without explicitly building any graph. Moreover, we enrich actor features by using static and dynamic representations. Similarly to~\cite{AzarCVPR2019} we build our dynamic representation with a 3D CNN.
\section{Introduction}\label{sec:intro}
In-play football bets are traded live during a football game.
The prices of these bets are driven by the goals scored in the underlying game
in a way such that prices move smoothly between goals and jump to
a new level at times when goals are scored. This is similar to financial markets
where the price of an option changes according to the price changes
of the underlying instrument. We show that the Fundamental Theorems
of Asset Pricing can be applied to the in-play football betting market
and that these bets can be priced in the risk-neutral framework.
Distribution of final scores of football games has been studied by
several authors. In particular, \citet{maher1982modelling} found that an independent
Poisson distribution gives a reasonably accurate description of football
scores and achieved further improvements by applying a bivariate Poisson
distribution. This was further developed by \citet{dixon1997modelling}
who proposed a model in which the final scores of the two teams are
not independent, but the marginal distributions of each team's scores
still follow standard Poisson distributions.
Distribution of in-play goal times has been studied by \citet{dixon1998birth}
who applied a state-dependent Poisson model where the goal intensities
of the teams depend on the current score and time. The model also
accounts for other factors such as home effect and injury time. The
standard Poisson model has been applied by \citet{fitt2005valuation}
to develop analytical valuation formulae for in-play spread bets on
goals and also on corners. A stochastic intensity model has been suggested by
\citet{jottreau2009cir} where the goals are driven by Poisson processes
with intensities that are stochastic,
in particular driven by a Cox-Ingerson-Ross process.
\citet{vecer2009estimating} have shown that
in-play football bets may have additional sensitivities on the top of the
standard Poisson model, for instance sensitivities to red cards.
The Fundamental Theorems of Asset Pricing form the basis of the risk-neutral
framework of financial mathematics and derivative pricing
and have been developed by several authors,
including \citet{cox1976valuation}, \citet{harrison1979martingales},
\citet{harrison1981martingales}, \citet{harrison1983stochastic},
\citet{huang1985information}, \citet{duffie1988security} and \citet{back1991fundamental}.
The first fundamental theorem states that a market is arbitrage free if and only if there
exists a probability measure under which the underlying asset prices
are martingales. The second fundamental theorem states that the market
is complete (that is, any derivative product of the underlying assets
can be dynamically replicated) if and only if the martingale measure
is unique.
In this paper we use independent standard time-homogeneous Poisson
processes to model the scores of the two teams. We construct a market of three
underlying assets and show that within this model a unique martingale
measure exists and therefore the market of in-play football bets is
arbitrage-free and complete. Then we demonstrate calibration and replication
performance using market data.
The structure of this paper is the following. Section \ref{sec:inplay} contains a
general overview of in-play football betting and an overview
of the data set. Section \ref{sec:Maths} defines the formal model and contains pricing formulae for
Arrow-Debreu securities among others. In Section \ref{sub:Calibration} we calibrate the model to
historical market quotes of in-play bets and in Section \ref{sec:nextgoal} we use the
same data to show that Next Goal bets are natural hedging instruments
that can be used to build a replicating portfolio to match the values of other bets,
in particular the liquidly traded Match Odds bets.
The Appendix reports analytical pricing formulae for some of the most liquidly traded bets.
\section{In-Play Football Betting}\label{sec:inplay}
In traditional football betting, also known as pre-game or fixed odds
betting, bets are placed before the beginning of the game. In-play
football betting enables bettors to place bets on the outcome of a
game after it started. The main difference is that during in-play
betting, as the game progresses and as the teams score goals, the
chances of certain outcomes jump to new levels and so do the odds
of the bets. Prices move smoothly between goals and jump once a goal is scored.
In-play betting has become increasingly popular in recent years.
For instance, \citet{inplaytracker2013}
recently reported that for one particular bookmaker (Unibet) in-play
betting revenues exceeded pre-game betting revenues by 2013Q2 as shown
in Figure \ref{fig:unibet}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.55\paperwidth]{plots/Unibet}
\end{center}
\caption{\label{fig:unibet}Revenue distribution of one particular bookmaker's
(Unibet) football betting revenues between In-Play and Pre-Game football
betting.}
\end{figure}
There are two main styles of in-play betting: odds betting and spread
betting. In odds betting, the events offered are similar to digital
options in the sense that the bettor wins a certain amount if the
event happens and loses a certain amount otherwise. Typical odds
bets are whether one team wins the game, whether the total number
of goals is above a certain number or whether the next goal is scored
by the home team. In spread betting, the bets offered are such that
the bettor can win or lose an arbitrary amount. A typical example
is a bet called ``total goal minutes'' which pays the bettor the
sum of the minute time of each goal. In this paper we focus on odds
betting, but most of the results can also be applied to spread betting.
A study of spread betting containing analytical pricing formulae for
various spread bets was published by \citet{fitt2005valuation}.
In-play betting offers various types of events such as total goals,
home and away goals, individual player goals, cards, corners, injuries
and other events. This paper focuses on bets related to goal events
only.
Throughout the paper we refer to the value $X_{t}$ of a bet as the
price at which the bet can be bought or sold at time $t$ assuming
that the bet pays a fixed amount of 1 unit in case it wins and zero
otherwise. This is a convenient notation from a mathematical point
of view, however it is worth noting that different conventions are
used for indicating prices in betting markets. The two most popular
conventions are called fractional odds and decimal odds. Both of these
conventions rely on the assumption that the bettor wagers a fixed
stake when the bet is placed and enjoys a payoff in case the bet wins
or no payoff in case it loses. Fractional odds is the net payoff
of the bet in case the bet wins (that is, payoff minus stake), divided
by the stake. Decimal odds is the total payoff of the bet in case
the bet wins, divided by the stake. Therefore, the value of a bet
$X_{t}$ is always equal to the reciprocal of the decimal odds which
is equal to the reciprocal of fractional odds plus one, formally:
\begin{equation}
X_{t}=\frac{1}{\it{Decimal}_{t}}=\frac{1}{\it{Fractional}_{t}+1},
\end{equation}
where $\it{Decimal}_{t}$ denotes decimal and $\it{Fractional}_{t}$ denotes fractional odds.
Most of the market data we used was originally represented as decimal
odds, but they were converted to bet values using the above formula
for all the figures and for the underlying calculations in this paper.
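As a simple illustration of this conversion, the following minimal Python sketch implements the formula above (the function names are ours and do not refer to any particular betting API):
\begin{verbatim}
def value_from_decimal(decimal_odds):
    # Bet value (price of a unit payoff) implied by decimal odds.
    return 1.0 / decimal_odds

def value_from_fractional(fractional_odds):
    # Fractional odds quote the net payoff per unit stake.
    return 1.0 / (fractional_odds + 1.0)

# Example: decimal odds of 4.0 and fractional odds of 3 (i.e. 3/1)
# both correspond to a bet value of 0.25.
assert abs(value_from_decimal(4.0) - 0.25) < 1e-12
assert abs(value_from_fractional(3.0) - 0.25) < 1e-12
\end{verbatim}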
It is also worth noting that bets can be bought or sold freely during
the game. This includes going short which is referred to as lay betting.
Mathematically this means that the amount held can be a negative number.
In-play bets can be purchased from retail bookmakers
at a price offered by the bookmaker, but can also be traded
on centralized marketplaces where the exchange merely matches orders of participants
trading with each other through a limit order book and keeps a deposit from each party
to cover potential losses.
\subsection{An example game}\label{sec:examplegame}
In order to demonstrate our results we selected the Portugal
vs. Netherlands game from the UEFA Euro 2012 Championship which was
played on the 22nd of June 2012. The reason for selecting this particular
game is that the game had a rather complex unfolding with Netherlands
scoring the first goal, but then Portugal taking the lead in the second
half and finally winning the game. This made the odds jump several times
during the game which makes it a good candidate for demonstrating how
the model performs in an extreme situation. The number of goals as
a function of game time is shown in Figure \ref{fig:Number-of-goals}.
Figures \ref{fig:Match-Odds-values.} and \ref{fig:Over-Under-values.}
show market values of two bet types traded on a betting
market called Betfair: Match Odds and Over-Under. Match Odds contains
three bets: home team winning the game, away team winning the game
and the draw. Over-Under contains bets on the total number of goals
where Under X.5 is a bet that pays off if the total number of goals
is equal or less than X. The dashed lines show the best buy and sell
offers on the market while the continuous lines show the calibrated
model values (see Section \ref{sub:Calibration}).
In case of Match Odds, the value of the bet for Netherlands winning
the game jumped after Netherlands scored the first goal. When the
scores became even after Portugal scored a goal, the value of the
Draw bet jumped up and when Portugal took the lead by scoring the
third goal, the value of the bet for Portugal winning the game jumped
up. Finally, by the end of the game the value of the bet for Portugal
winning the game converged to 1 and the value of the other bets went
to zero.
In case of the Over-Under bets, trading ceased for the Under 0.5 bet
after the first goal when the value of this bet jumped to zero. By the end
of the game, the value of the Under 3.5, 4.5, 5.5, 6.5 and 7.5 bets
reached 1 because the total number of goals was actually 3 and
the values of the Under 0.5, 1.5 and 2.5 bets went to zero.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.55\paperwidth]{plots/Soccer-Euro2012-Fixtures17June-PortugalvNetherlands_Scores}
\end{center}
\caption{\label{fig:Number-of-goals}Scores of the two teams during the Portugal
vs. Netherlands game on the 22nd of June, 2012. The half time result
was 1-1 and the final result was a 2-1 win for Portugal.}
\end{figure}
\begin{figure}[t]
\begin{centering}
\includegraphics[width=0.55\paperwidth]{plots/Soccer-Euro2012-Fixtures17June-PortugalvNetherlandsValuesContractMainTypeMATCHODDS.png}
\par\end{centering}
\protect\caption{\label{fig:Match-Odds-values.}Values of the three Match Odds bets
during the game: Draw (black), Portugal Win (red), Netherlands Win
(blue). Dashed lines represent the best market buy and sell offers
while the continuous lines represent the calibrated model values.
Note that the value of the Netherlands Win bet jumps up after the first
goal because the chance for Netherlands winning the game suddenly
increased. It jumped down for similar reasons when Portugal scored
its first goal and at the same time the value of the Portugal Win
and Draw bets jumped up. By the end of the game, because Portugal
actually won the game, the value of the Portugal Win bet reached 1
while both other bets became worthless.}
\end{figure}
\begin{figure}[t]
\begin{centering}
\includegraphics[width=0.55\paperwidth]{plots/Soccer-Euro2012-Fixtures17June-PortugalvNetherlandsValuesContractMainTypeOVERUNDER.png}
\par\end{centering}
\protect\caption{\label{fig:Over-Under-values.}Values of Over/Under bets during the
game. Under X.5 is a bet that pays off in case the total number of
goals by the end of the game is below or equal to X. Marked lines
represent the calibrated model prices while the grey bands show the
best market buy and sell offers. Note that after the first goal trading
in the Under 0.5 bet ceased and it became worthless. By the end of
the game when the total number of goals was 3, all the bets up until
Under 2.5 became worthless while the Under 3.5 and higher bets reached
a value of 1.}
\end{figure}
\section{Mathematical framework}\label{sec:Maths}
In this section we present a risk-neutral valuation framework for in-play
football betting. To do so we follow the financial mathematical approach,
in which we start by assuming a probability space, then identify a
market of underlying tradable assets and postulate a model for the
dynamics of these assets. We show that the first and second fundamental
theorems of asset pricing apply to this market, that is the market
is arbitrage-free and complete which means that all derivatives can
be replicated by taking a dynamic position in the underlying assets.
In classical finance, the distinction between the underlying asset
(for example a stock) and a derivative (for example an option
on the stock) is natural. This is not the case in football betting; there is
no such clear distinction between underlying and derivative assets because all bets are made on the scores,
and the score process itself is not a tradable asset. In order to be able to apply the
Fundamental Theorems of Asset Pricing we need to artificially introduce underlying assets
and define the model by postulating a price dynamics for these assets in the physical measure.
It is also desirable to choose underlying assets that have a simple enough
price dynamics so that developing the replicating portfolio becomes as straightforward as possible.
For these reasons, the two underlying assets of our choice are assets that at the end of the game pay out the number of goals scored
by the home and away teams, respectively. It is important to note that these assets are not traded in practice
and the choice therefore seems unnatural. However, these underlying assets can be statically replicated from
Arrow-Debreu securities that are referred to as Correct Score bets in football betting and are traded in practice.
Furthermore, towards the end of Section \ref{sec:riskneutralpricing} we arrive at Proposition \ref{prop:replication-from-anything} which
states that any two linearly independent bets can be used as hedging instruments. Therefore the choice of the underlying assets is practically
irrelevant and only serves a technical purpose. This result is applied in Section
\ref{sec:nextgoal} where Next Goal bets are used as natural hedging instruments.
\subsection{Setup}
Let us consider a probability space $\left(\Omega,\mathcal{F},\mathbb{P}\right)$
that carries two independent Poisson processes $N_{t}^{1}$, $N_{t}^{2}$
with respective intensities $\mu_{1}$, $\mu_{2}$ and the filtration
$\left(\mathcal{F}_{t}\right)_{t\in\left[0,T\right]}$ generated by
these processes. Let time $t=0$ denote the beginning and $t=T$ the
end of the game. The Poisson processes represent the number of goals
scored by the teams, the superscript $1$ refers to the home and $2$
refers to the away team. This notation is used throughout, the distinction
between superscripts and exponents will always be clear from the context.
The probability measure $\mathbb{P}$ is the real-world or physical
probability measure.
We assume that there exists a liquid market where three assets can
be traded continuously with no transaction costs or any restrictions
on short selling or borrowing. The first asset $B_{t}$ is a risk-free
bond that bears no interest, an assumption that is motivated by the
relatively short time frame of a football game. The second and third
assets $S_{t}^{1}$ and $S_{t}^{2}$ are such that their values at
the end of the game are equal to the number of goals scored by the
home and away teams, respectively.
\begin{defn}[\bf model]
The model is defined by the following price dynamics of the assets:
\begin{eqnarray}
B_{t} & = & 1\nonumber \\
S_{t}^{1} & = & N_{t}^{1}+\lambda_{1}\left(T-t\right)\label{eq:modeldef}\\
S_{t}^{2} & = & N_{t}^{2}+\lambda_{2}\left(T-t\right)\nonumber
\end{eqnarray}
where $\lambda_{1}$ and $\lambda_{2}$ are known real constants.
\end{defn}
Essentially, the underlying asset prices are compensated Poisson processes, but the compensators $\lambda_1,\lambda_2$
are not necessarily equal to the intensities $\mu_1,\mu_2$ and therefore the prices are not necessarily
martingales in the physical measure $\mathbb{P}$. This is similar to the Black-Scholes model where the stock's drift in the physical measure
is not necessarily equal to the risk-free rate.
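For illustration, the following Python sketch simulates the goal processes and the corresponding asset paths of the model on a discrete time grid; the intensities and compensators are assumed values, and the Poisson increments are approximated by Bernoulli draws with per-step probability $\mu_i\,dt$:
\begin{verbatim}
import numpy as np

T = 1.0                 # game length, normalised to 1
mu = (1.3, 1.1)         # physical goal intensities (assumed values)
lam = (1.4, 1.2)        # compensators lambda_1, lambda_2 (assumed values)
steps = 900
dt = T / steps
rng = np.random.default_rng(0)

t = np.linspace(0.0, T, steps + 1)
# Goal counting processes N_t^1, N_t^2 (approximate Poisson sampling).
N = [np.concatenate(([0], np.cumsum(rng.random(steps) < m * dt)))
     for m in mu]
# Underlying assets S_t^i = N_t^i + lambda_i (T - t).
S = [N[i] + lam[i] * (T - t) for i in range(2)]
\end{verbatim}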
We are now closely following \citet{harrison1981martingales} in defining the necessary concepts.
\subsection{Risk-neutral pricing of bets}\label{sec:riskneutralpricing}
\begin{defn}[\bf trading strategy]
A \textit{trading strategy} is an $\mathcal{F}_{t}$-predictable
vector process $\phi_{t}=\left(\phi_{t}^{0},\phi_{t}^{1},\phi_{t}^{2}\right)$
that satisfies $\int_{0}^{t}\left|\phi_{s}^{i}\right|ds<\infty$ for
$i\in\left\{ 0,1,2\right\} $. The associated \textit{value process}
is denoted by
\begin{equation}
V_{t}^{\phi}=\phi_{t}^{0}B_{t}+\phi_{t}^{1}S_{t}^{1}+\phi_{t}^{2}S_{t}^{2}.
\end{equation}
The trading strategy is \textit{self-financing }if
\begin{equation}
V_{t}^{\phi}=V_{0}^{\phi}+\int_{0}^{t}\phi_{s}^{1}dS_{s}^{1}+\int_{0}^{t}\phi_{s}^{2}dS_{s}^{2}.
\end{equation}
where $\int_{0}^{t}\phi_{s}^{i}dS_{s}^{i}$, $i\in\left\{ 1,2\right\} $
is a Lebesgue Stieltjes integral which is well defined according to
Proposition 2.3.2 on p17 of \citet{bremaud1981point}.
\end{defn}
\begin{defn}[\bf arbitrage-freeness]
The model is\textit{ arbitrage-free}
if no self-financing trading strategy $\phi_{t}$ exist such that
$\mathbb{P}\left[V_{t}^{\phi}-V_{0}^{\phi}\ge0\right]=1$ and $\mathbb{P}\left[V_{t}^{\phi}-V_{0}^{\phi}>0\right]>0$.
\end{defn}
\begin{defn}[\bf bet]
A \textit{bet} (also referred to as a \emph{contingent claim} or \emph{derivative}) is an $\mathcal{F}_{T}$-measurable
random variable $X_{T}$.
\end{defn}
In practical terms this means that the value of a bet is revealed
at the end of the game.
\begin{defn}[\bf completeness]
The model is \textit{complete} if for every
bet $X_{T}$ there exists a self-financing trading strategy $\phi_{t}$
such that $X_{T}=V_{T}^{\phi}$. In this case we say that the bet
$X_{T}$ is \textit{replicated} by the trading strategy $\phi_{t}$.
\end{defn}
\begin{thm}[risk-neutral measure]
\label{prop:equivalentMartingaleMeasure} There
exists a probability measure $\mathbb{Q}$ referred to as the risk-neutral
equivalent martingale measure such that:
\begin{enumerate}
\item[(a)]
The asset processes $B_{t}$, $S_{t}^{1}$, $S_{t}^{2}$ are $\mathbb{Q}$-martingales.
\item[(b)]
The goal processes $N_{t}^{1}$ and $N_{t}^{2}$ in measure $\mathbb{Q}$
are standard Poisson processes with intensities $\lambda_{1}$ and
$\lambda_{2}$ respectively (which are in general different from the
$\mathbb{P}$-intensities $\mu_{1}$ and $\mu_{2}$).
\item[(c)]
$\mathbb{Q}$ is an equivalent measure to $\mathbb{P}$, that
is the set of events having zero probability is the same for both
measures.
\item[(d)]
$\mathbb{Q}$ is unique.
\end{enumerate}
\end{thm}
\begin{proof}
The proof relies on Girsanov's theorem for point processes (see Theorem
2 on p.165 and Theorem 3 on page 166 in \citet{bremaud1981point})
which states that $N_{t}^{1}$ and $N_{t}^{2}$ are Poisson processes
with intensities $\lambda_{1}$ and $\lambda_{2}$ under the probability
measure $\mathbb{Q}$ which is defined by the Radon-Nikodym-derivative
\begin{equation}
\frac{d\mathbb{Q}}{d\mathbb{P}}=L_{t},
\end{equation}
where
\begin{equation}
L_{t}=\prod_{i=1}^{2}\left(\frac{\lambda_{i}}{\mu_{i}}\right)^{N_{t}^{i}}\exp\left[\left(\mu_{i}-\lambda_{i}\right)t\right].
\end{equation}
Then uniqueness follows from Theorem 8 on p.64 in \citet{bremaud1981point}
which states that if two measures have the same set of intensities,
then the two measures must coincide. The Integration Theorem on p.27
of \citet{bremaud1981point} states that $N_{t}^{i}-\lambda_{i}t$
are $\mathbb{Q}$-martingales, therefore the assets $S_{t}^{i}$ are
also $\mathbb{Q}$-martingales for $i\in\left\{ 1,2\right\} $. Proposition
9.5 of \citet{tankov2004financial} claims that $\mathbb{P}$ and
$\mathbb{Q}$ are equivalent probability measures. The process of
the bond asset $B_{t}$ is a trivial martingale in every measure because
it is a deterministic constant which therefore does not depend on the
measure.
\end{proof}
\begin{rem}
Changing the measure of a Poisson process changes the intensity and
leaves the drift unchanged. This is in contrast with the case of a
Wiener process where change of measure changes the drift and leaves
the volatility unchanged.
\end{rem}
\begin{thm}
\label{prop:arbFreeComplete}(arbitrage-free and complete) The model is arbitrage-free
and complete.
\end{thm}
\begin{proof}
This follows directly from the first and second fundamental theorems
of finance. To be more specific, arbitrage-freeness follows from theorem
1.1 of \citet{delbaen1994general} which states that the existence
of a risk-neutral measure implies a so-called condition ``no free
lunch with vanishing risk'' which implies arbitrage-freeness. Completeness
follows from theorem 3.36 of \citet{harrison1981martingales} which
states that the model is complete if the risk-neutral measure is unique.
Alternatively it also follows from theorem 3.35 which states that
the model is complete if the martingale representation theorem holds
for all martingales which is the case according to Theorem 17, p.76
of \citet{bremaud1981point}.
\end{proof}
\begin{cor}
\label{prop:value_eq_expectedvalue}The time-$t$ value of a bet is
equal to the risk-neutral expectation of its value at the end of the
game, formally:
\begin{equation}
X_{t}=\mathbf{E}^{\mathbb{Q}}\left[X_{T}|\mathcal{F}_{t}\right].
\end{equation}
\end{cor}
\begin{proof}
This follows directly from Proposition 3.31 of \citet{harrison1981martingales}.
\end{proof}
\begin{cor}
\label{prop:value_selffinancingstrategy}The time-$t$ value of a
bet is also equal to the value of the associated self-financing trading
strategy $\phi_{t}$, formally:
\begin{equation}
X_{t}=V_{t}^{\phi}=V_{0}^{\phi}+\int_{0}^{t}\phi_{s}^{1}dS_{s}^{1}+\int_{0}^{t}\phi_{s}^{2}dS_{s}^{2}.\label{eq:betvalue_replication}
\end{equation}
\end{cor}
\begin{proof}
This follows directly from Proposition 3.32 of \citet{harrison1981martingales}.
\end{proof}
\begin{defn}[\bf linear independence]
\label{def-linearindependence}
The
bets $Z_{T}^{1}$ and $Z_{T}^{2}$ are \textit{linearly independent}
if the self-financing trading strategy $\phi_{t}^{1}=\left(\phi_{t}^{10},\phi_{t}^{11},\phi_{t}^{12}\right)$
that replicates $Z_{T}^{1}$ is $\mathbb{P}$-almost surely linearly
independent from the self-financing trading strategy $\phi_{t}^{2}=\left(\phi_{t}^{20},\phi_{t}^{21},\phi_{t}^{22}\right)$
that replicates $Z_{T}^{2}$. Formally, at any time $t\in\left[0,T\right]$
and for any constants $c_{1},c_{2}\in\mathbb{R}$ not both zero,
\begin{equation}
c_{1}\phi_{t}^{1}\ne c_{2}\phi_{t}^{2}\;\;\;\mathbb{P}\,{\it a.s.}
\end{equation}
\end{defn}
\begin{prop}[replication]
\label{prop:replication-from-anything} Any bet $X_{T}$
can be replicated by taking a dynamic position in any two linearly
independent bets $Z_{T}^{1}$ and $Z_{T}^{2}$, formally:
\begin{equation}
X_{t}=X_{0}+\int_{0}^{t}\psi_{s}^{1}dZ_{s}^{1}+\int_{0}^{t}\psi_{s}^{2}dZ_{s}^{2},\label{eq:replication_from_anything}
\end{equation}
where the weights $\psi_{t}^{1},\psi_{t}^{2}$ are equal to the solution
of the following equation:
\begin{equation}
\left(\begin{array}{cc}
\phi_{t}^{11} & \phi_{t}^{21}\\
\phi_{t}^{12} & \phi_{t}^{22}
\end{array}\right)\left(\begin{array}{c}
\psi_{t}^{1}\\
\psi_{t}^{2}
\end{array}\right)=\left(\begin{array}{c}
\phi_{t}^{1}\\
\phi_{t}^{2}
\end{array}\right)\label{eq:replication_equation}
\end{equation}
where $\left(\phi_{t}^{11},\phi_{t}^{12}\right)$, $\left(\phi_{t}^{21},\phi_{t}^{22}\right)$
and $\left(\phi_{t}^{1},\phi_{t}^{2}\right)$ are the components of
the trading strategy that replicates $Z_{T}^{1}$, $Z_{T}^{2}$ and
$X_{T}$, respectively. The integral $\int_{0}^{t}\psi_{s}^{1}dZ_{s}^{1}$
is to be interpreted in the following sense:
\begin{equation}
\int_{0}^{t}\psi_{s}^{1}dZ_{s}^{1}=\int_{0}^{t}\psi_{s}^{1}\phi_{s}^{11}dS_{s}^{1}+\int_{0}^{t}\psi_{s}^{1}\phi_{s}^{12}dS_{s}^{2}
\end{equation}
and similarly for $\int_{0}^{t}\psi_{s}^{2}dZ_{s}^{2}$.
\end{prop}
\begin{proof}
Substituting $dZ_{t}^{1}=\phi_{t}^{11}dS_{t}^{1}+\phi_{t}^{12}dS_{t}^{2}$,
$dZ_{t}^{2}=\phi_{t}^{21}dS_{t}^{1}+\phi_{t}^{22}dS_{t}^{2}$ and
Equation \ref{eq:betvalue_replication} into Equation \ref{eq:replication_from_anything}
verifies the proposition.
\end{proof}
\subsection{European bets}
\begin{defn}[\bf European bet]
\label{def:european}A \textit{European bet}
is a bet with a value depending only on the final number of goals
$N_{T}^{1}$, $N_{T}^{2}$, that is one of the form
\begin{equation}
X_{T}=\Pi\left(N_{T}^{1},N_{T}^{2}\right)
\end{equation}
where $\Pi$ is a known scalar function $\mathbb{N}\times\mathbb{N}\rightarrow\mathbb{R}$ which is referred to as the \textit{payoff function}.
\end{defn}
\begin{example}
A typical example is a bet that pays out $1$ if the home team scores
more goals than the away team (home wins) and pays nothing otherwise,
that is $\Pi\left(N_{T}^{1},N_{T}^{2}\right)=\mathbf{1}\left(N_{T}^{1}>N_{T}^{2}\right)$
where the function $\mathbf{1}\left(A\right)$ takes the value of
1 if $A$ is true and zero otherwise. Another example is a bet that
pays out $1$ if the total number of goals is strictly higher than
2 and pays nothing otherwise, that is $\Pi\left(N_{T}^{1},N_{T}^{2}\right)=\mathbf{1}\left(N_{T}^{1}+N_{T}^{2}>2\right)$.
\end{example}
\begin{prop}[pricing formula]
\label{prop:european_closedform}
The time-$t$ value of a European bet with payoff function $\Pi$ is given by the explicit formula
\begin{equation}
X_{t}=\sum_{n_{1}=N_{t}^{1}}^{\infty}\sum_{n_{2}=N_{t}^{2}}^{\infty}\Pi\left(n_{1},n_{2}\right)P\left(n_{1}-N_{t}^{1},\lambda_{1}\left(T-t\right)\right)P\left(n_{2}-N_{t}^{2},\lambda_{2}\left(T-t\right)\right),\label{eq:europeanformula-1}
\end{equation}
where $P\left(N,\Lambda\right)$ is the Poisson probability,
that is $P\left(N,\Lambda\right)=\frac{e^{-\Lambda}}{N!}\Lambda^{N}$
if $N\ge0$ and $P\left(N,\Lambda\right)=0$ otherwise.
\end{prop}
\begin{proof}
This follows directly from Proposition \ref{prop:value_eq_expectedvalue}
and Definition \ref{def:european}.
\end{proof}
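Numerically, the double sum above can be evaluated by truncating the Poisson tails. The following Python sketch does this for an arbitrary payoff function; the truncation level and the parameter values in the example are our own choices:
\begin{verbatim}
from math import exp, factorial

def poisson_pmf(n, lam):
    # P(N, Lambda) as defined above; zero for negative N.
    return exp(-lam) * lam**n / factorial(n) if n >= 0 else 0.0

def european_value(payoff, t, T, n1, n2, lam1, lam2, n_max=25):
    # Truncated version of the pricing formula of the proposition.
    rem1, rem2 = lam1 * (T - t), lam2 * (T - t)
    value = 0.0
    for k1 in range(n1, n1 + n_max):
        for k2 in range(n2, n2 + n_max):
            value += (payoff(k1, k2)
                      * poisson_pmf(k1 - n1, rem1)
                      * poisson_pmf(k2 - n2, rem2))
    return value

# Example: home win bet at a 1-1 score with a third of the game left.
home_win = lambda k1, k2: 1.0 if k1 > k2 else 0.0
x = european_value(home_win, t=2/3, T=1.0, n1=1, n2=1, lam1=1.4, lam2=1.2)
\end{verbatim}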
As we have seen, the price of a European bet is a function of the
time $t$ and the number of goals $\left(N_{t}^{1},N_{t}^{2}\right)$
and intensities $\left(\lambda_{1},\lambda_{2}\right)$. Therefore,
from now on we will denote this function by $X_{t}=X_{t}\left(N_{t}^{1},N_{t}^{2}\right)$
or $X_{t}=X_{t}\left(t,N_{t}^{1},N_{t}^{2},\lambda_{1},\lambda_{2}\right)$,
depending on whether the context requires the explicit dependence
on intensities or not.
It is important to note that Arrow-Debreu bets do exist in in-play football betting and are referred to as Correct Score bets.
\begin{defn}[\bf Arrow-Debreu bets]\label{defn:ad} \textit{Arrow-Debreu bets}, also known as Correct Score bets are European bets
with a payoff function $\Pi_{AD \left(K_1, K_2\right)}$ equal to $1$ if the final
score $\left(N_{T}^{1},N_{T}^{2}\right)$ is equal to a specified result $\left(K_1,K_2\right)$
and $0$ otherwise:
\begin{equation}
\Pi_{AD \left(K_1, K_2\right)} = \mathbf{1} \left( N_T^1=K_1, N_T^2=K_2\right)
\end{equation}
\end{defn}
According to the following proposition, Arrow-Debreu bets can be used to statically replicate any European bet:
\begin{prop}[static replication]
The time-$t$ value of a European bet with payoff function $\Pi$ in terms of time-$t$ values of Arrow-Debreu bets is given by:
\begin{equation}
X_{t}=\sum_{K_1=N_{t}^{1}}^{\infty}\sum_{K_2=N_{t}^{2}}^{\infty}\Pi\left(K_1,K_2\right)X_{t,AD\left(K_1,K_2\right)},
\end{equation}
where $X_{t,AD\left(K_1,K_2\right)}$ denotes the time-$t$ value of an Arrow-Debreu bet that pays out if the final scores are equal
to $\left(K_1,K_2\right)$.
\end{prop}
\begin{proof}
This follows directly from Proposition \ref{prop:european_closedform} and Definition \ref{defn:ad}.
\end{proof}
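In code, the static replication amounts to a single weighted sum over quoted Correct Score values; the dictionary of quotes below is a hypothetical, truncated example (in practice the quotes should cover all scores with non-negligible probability):
\begin{verbatim}
def static_replication(payoff, correct_score_values):
    # correct_score_values maps final scores (K1, K2) to the time-t
    # market value of the corresponding Correct Score bet.
    return sum(payoff(k1, k2) * v
               for (k1, k2), v in correct_score_values.items())

# Example: value of the Under 2.5 bet from (truncated) quotes.
cs = {(0, 0): 0.10, (1, 0): 0.15, (0, 1): 0.12, (1, 1): 0.18,
      (2, 0): 0.08, (0, 2): 0.06, (2, 1): 0.10, (1, 2): 0.08}
under_25 = static_replication(
    lambda k1, k2: 1.0 if k1 + k2 <= 2 else 0.0, cs)
\end{verbatim}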
Let us now define the partial derivatives of the bet values with respect to changes in time and in the number of goals scored. These are required for hedging and serve
the same purpose as the \textit{greeks} in the Black-Scholes framework.
\begin{defn}[\bf Greeks]
\label{def:deltas_def}
The \textit{greeks} are the
values of the following forward difference operators ($\delta_{1}$,
$\delta_{2}$) and partial derivative operator applied to the bet
value:
\begin{eqnarray}
\delta_{1}X_{t}\left(N_{t}^{1},N_{t}^{2}\right) & = & X_{t}\left(N_{t}^{1}+1,N_{t}^{2}\right)-X_{t}\left(N_{t}^{1},N_{t}^{2}\right)\label{eq:delta1}\\
\delta_{2}X_{t}\left(N_{t}^{1},N_{t}^{2}\right) & = & X_{t}\left(N_{t}^{1},N_{t}^{2}+1\right)-X_{t}\left(N_{t}^{1},N_{t}^{2}\right)\label{eq:delta2}\\
\partial_{t}X_{t}\left(N_{t}^{1},N_{t}^{2}\right) & = & \lim_{dt\rightarrow0}\frac{1}{dt}\left[X_{t+dt}\left(N_{t}^{1},N_{t}^{2}\right)-X_{t}\left(N_{t}^{1},N_{t}^{2}\right)\right]
\end{eqnarray}
\end{defn}
\begin{rem*}
The forward difference operators $\delta_{1}$, $\delta_{2}$ play
the role of Delta and the partial derivative operator $\partial_{t}$
plays the role of Theta in the Black-Scholes framework.
\end{rem*}
\begin{thm}[Kolmogorov forward equation]
\label{prop:PIDE} The value of a European
bet $X\left(t,N_{t}^{1},N_{t}^{2}\right)$ with a payoff function
$\Pi(N_{T}^{1},N_{T}^{2})$ satisfies the following Feynman-Kac representation
on the time interval $t\in\left[0,T\right]$ which is also known as
the Kolmogorov forward equation:
\begin{eqnarray}
\partial_{t}X\left(t,N_{t}^{1},N_{t}^{2}\right) & = & -\lambda_{1}\delta_{1}X\left(t,N_{t}^{1},N_{t}^{2}\right)-\lambda_{2}\delta_{2}X\left(t,N_{t}^{1},N_{t}^{2}\right)\label{eq:PIDE}
\end{eqnarray}
with boundary condition:
\[
X_{T}\left(T,N_{T}^{1},N_{T}^{2}\right)=\Pi\left(N_{T}^{1},N_{T}^{2}\right).
\]
\end{thm}
\begin{proof}
The proposition can be easily verified using the closed form formula
from Proposition \ref{prop:european_closedform}. Furthermore, several
proofs are available in the literature, see for example Proposition
12.6 in \citet{tankov2004financial}, Theorem 6.2 in \citet{ross2006introduction}
or Equation 13 in \citet{feller1940integro}.
\end{proof}
\begin{rem}
Equation \ref{eq:PIDE} also has the consequence that any portfolio
of European bets whose value does not change if either team scores a goal
(Delta-neutral) does not change value between goals either (Theta-neutral).
We note without proof that this holds for all bets in general.
\end{rem}
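This relation is easy to check numerically. The following Python sketch reuses the european_value helper and the home win payoff from the earlier sketch and verifies that the time decay is offset by the intensity-weighted deltas:
\begin{verbatim}
def greeks(payoff, t, T, n1, n2, lam1, lam2, dt=1e-4):
    x = european_value(payoff, t, T, n1, n2, lam1, lam2)
    d1 = european_value(payoff, t, T, n1 + 1, n2, lam1, lam2) - x
    d2 = european_value(payoff, t, T, n1, n2 + 1, lam1, lam2) - x
    theta = (european_value(payoff, t + dt, T, n1, n2, lam1, lam2) - x) / dt
    return d1, d2, theta

d1, d2, theta = greeks(home_win, 2/3, 1.0, 1, 1, 1.4, 1.2)
# Kolmogorov equation: theta is approximately -lam1*d1 - lam2*d2.
print(theta, -1.4 * d1 - 1.2 * d2)
\end{verbatim}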
\begin{cor}\label{prop:dX_dLambda}
The value of a European bet $X\left(t,N_{t}^{1},N_{t}^{2},\lambda_{1},\lambda_{2}\right)$
satisfies the following:
\begin{eqnarray}
\frac{\partial}{\partial\lambda_{i}}X_{t} & = & \left(T-t\right)\delta_{i}X_{t}\label{eq:dX_dLambda}
\end{eqnarray}
where $i\in\left\{ 1,2\right\} $.
\end{cor}
\begin{proof}
This follows directly from Proposition \ref{prop:european_closedform}.
\end{proof}
\begin{prop}[portfolio weights]\label{prop:tradingstrategy_eq_deltas}
The components
$\left(\phi_{t}^{1},\phi_{t}^{2}\right)$ of the trading strategy
that replicates a European bet $X_{T}$ are equal to the forward difference
operators $\left(\delta_{1},\delta_{2}\right)$ of the bet, formally:
\begin{eqnarray}
\phi_{t}^{1} & = & \delta_{1}X\left(t,N_{t}^{1},N_{t}^{2}\right)\\
\phi_{t}^{2} & = & \delta_{2}X\left(t,N_{t}^{1},N_{t}^{2}\right).
\end{eqnarray}
\end{prop}
\begin{proof}
Recall that according to Proposition \ref{prop:value_selffinancingstrategy},
the time-$t$ value of a bet is equal to $X_{t}=X_{0}+\sum_{i=1}^{2}\int_{0}^{t}\phi_{s}^{i}dS_{s}^{i}$,
which after substituting $dS_{t}^{i}=dN_{t}^{i}-\lambda_{i}dt$ becomes
\begin{eqnarray}
X_{t} & = & X_{0}+\int_{0}^{t}\left(\phi_{s}^{1}\lambda_{1}+\phi_{s}^{2}\lambda_{2}\right)ds\nonumber \\
& & +\sum_{k=0}^{N_{t}^{1}}\phi_{t_{k}^{1}}^{1}+\sum_{k=0}^{N_{t}^{2}}\phi_{t_{k}^{2}}^{2},\label{eq:proof_replication_def}
\end{eqnarray}
where we used $\int_{0}^{t}\phi_{s}^{i}dN_{s}^{i}=\sum_{k=0}^{N_{t}^{i}}\phi_{t_{k}^{i}}^{i}$,
and $0\le t_{k}^{i}\le t$ is the time of the $k$-th jump (goal)
of the process $N_{t}^{i}$ for $i\in\left\{ 1,2\right\} $.
On the other hand, using Ito's formula for jump processes (Proposition
8.15, \citet{tankov2004financial}), which applies because the closed
form formula in Proposition \ref{prop:european_closedform} is infinitely
differentiable, the value of a European bet is equal to
\begin{eqnarray}
X_{t} & = & X_{0}+\int_{0}^{t}\partial_{s}X\left(s,N_{s}^{1},N_{s}^{2}\right)ds\nonumber \\
& & +\sum_{k=0}^{N_{t}^{1}}\delta_{1}X\left(t_{k}^{1},N_{t_{k}^{1}-}^{1},N_{t_{k}^{1}-}^{2}\right)+\sum_{k=0}^{N_{t}^{2}}\delta_{2}X\left(t_{k}^{2},N_{t_{k}^{2}-}^{1},N_{t_{k}^{2}-}^{2}\right),\label{eq:proof_replication_ito}
\end{eqnarray}
where $t_{k}^{i}-$ refers to the fact that the value of the processes
is to be taken before the jump.
Because the equality between Equations \ref{eq:proof_replication_def}
and \ref{eq:proof_replication_ito} holds for all possible jump times,
the terms behind the sums are equal, which proves the proposition.
\end{proof}
\section{Model Calibration}\label{sub:Calibration}
In this section we discuss how to calibrate the model parameters to historical
market prices. We demonstrate that a unique equivalent martingale measure
$\mathbb{Q}$ exists, that is, a set of intensities $\lambda_{1},\lambda_{2}$
exist that are consistent with the prices of all bets observed on the market (see
Propositions \ref{prop:equivalentMartingaleMeasure} and \ref{prop:arbFreeComplete}).
We apply a least squares approach in which we consider market prices
of a set of bets and find model intensities that deliver model prices
for these bets that are as close as possible to the market prices.
Specifically, we minimize the sum of the square of the weighted differences
between the model and market mid prices as a function of model intensities,
using market bid-ask spreads as weights. The reason for choosing a
bid-ask spread weighting is that we would like to take into account
bets with a lower bid-ask spread with a higher weight because the
price of these bets is assumed to be more certain. Formally, we minimize the following
expression:
\begin{equation}
R\left(\lambda_{t}^{1},\lambda_{t}^{2}\right)=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left[\frac{X_{t}^{i,{\it MID}}-X_{t}^{i}\left(\lambda_{t}^{1},\lambda_{t}^{2},N_{t}^{1},N_{t}^{2}\right)}{\frac{1}{2}\left(X_{t}^{i,{\it SELL}}-X_{t}^{i,{\it BUY}}\right)}\right]^{2}},\label{eq:calibration}
\end{equation}
where $n$ is the total number of bets used, $X_{t}^{i,{\it BUY}}$
and $X_{t}^{i,{\it SELL}}$ are the best market buy and sell quotes
of the $i$-th type of bet at time $t$, $X_{t}^{i,{\it MID}}$ is
the market mid price which is the average of the best buy and sell
quotes, $X_{t}^{i}\left(N_{t}^{1},N_{t}^{2},\lambda_{t}^{1},\lambda_{t}^{2}\right)$
is the model price of the $i$-th bet at time $t$, given the current
number of goals $N_{t}^{1},N_{t}^{2}$ and model intensity parameters
$\lambda_{t}^{1},\lambda_{t}^{2}$, see Proposition \ref{prop:european_closedform}.
This minimization procedure is referred to as model calibration.
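A minimal sketch of this procedure in Python (using scipy's Nelder-Mead minimiser and the european_value helper sketched earlier; the bet list and quotes below are placeholders rather than actual market data) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def calibration_error(lams, bets, t, T, n1, n2):
    # bets: list of (payoff_function, best_buy, best_sell) tuples.
    lam1, lam2 = lams
    resid = []
    for payoff, buy, sell in bets:
        mid, spread = 0.5 * (buy + sell), sell - buy
        model = european_value(payoff, t, T, n1, n2, lam1, lam2)
        resid.append((mid - model) / (0.5 * spread))
    return np.sqrt(np.mean(np.square(resid)))

# Placeholder quotes (buy, sell) at a 1-1 score, half way through the game.
bets = [
    (lambda k1, k2: 1.0 if k1 > k2 else 0.0, 0.47, 0.49),       # home win
    (lambda k1, k2: 1.0 if k1 < k2 else 0.0, 0.22, 0.24),       # away win
    (lambda k1, k2: 1.0 if k1 + k2 <= 2 else 0.0, 0.30, 0.32),  # under 2.5
]
res = minimize(calibration_error, x0=[1.0, 1.0],
               args=(bets, 0.5, 1.0, 1, 1), method="Nelder-Mead")
lam1_hat, lam2_hat = res.x
\end{verbatim}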
Calibration has been performed using a time step of 1 minute during the
game, independently at each time step. We used the three
most liquid groups of bets which in our case were Match Odds, Over
/ Under and Correct Score with a total of 31 bet types in these three
categories. Appendix \ref{sec:Valuation-of-Bets} describes these
bet types in detail.
The continuous lines in Figures \ref{fig:Match-Odds-values.} and
\ref{fig:Over-Under-values.} show the calibrated model prices while
the dashed lines are the market buy and sell offers. It can be seen
that the calibrated values are close to the market quotes, although
they are not always within the bid-ask spread. As the measure of
the goodness of fit we use the optimal value of the cost function
of Equation \ref{eq:calibration}, which is the average distance of
the calibrated values from the market mid prices in units of bid-ask
spread; the calibration error is shown in Figure \ref{fig:residual}. We performed
calibration for multiple games of the Euro 2012 Championship; the
time average of the calibration errors for each game is shown in Table
\ref{tab:Calibration-errors}. The mean and standard deviation of
the calibration errors across games is $1.57\pm0.27$ which is to
be interpreted in units of bid-ask spread because of the weighting
of the error function in Equation \ref{eq:calibration}. This means
that, on average, the calibrated values are outside of the bid-ask
spread, but not significantly. Given that a model of only 2 parameters
has been calibrated to a total of 31 independent market quotes, this
is a reasonably good result.
Finally, the implied intensities, along with the estimated uncertainties
of the calibration using the bid-ask spreads, are shown in Figure \ref{fig:Calibrated-intensities}.
Contrary to our initial assumption of constant intensities, the actual intensities
fluctuate over time and there also seems to be an increasing trend in the implied goal intensities of both teams.
In order to better understand the nature of the implied intensity
process, we estimated the drift and volatility of the log total intensity,
that is we assumed the following:
\begin{equation}
d\ln\left(\lambda_{t}^{1}+\lambda_{t}^{2}\right)=\mu dt+\sigma dW_{t}
\end{equation}
where $\mu$ and $\sigma$ are the drift and volatility of the process.
Table \ref{tab:ModelparamDriftVol} shows the results of the estimation
for multiple games. The mean and standard deviation of the drift terms
are $\mu=0.55\pm0.16\;1/90{\it min}$ while the mean and standard
deviation of the volatility terms are $\sigma=0.51\pm0.19\;1/\sqrt{90{\it min}}$.
The fact that implied goal intensities are increasing during the game
is consistent with findings of \citet{dixon1998birth} who found gradual
increase of scoring rates by analysing goal times of 4012 matches
between 1993 and 1996.
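The drift and volatility estimates of Table \ref{tab:ModelparamDriftVol} can be obtained from the series of implied intensities in a few lines; in the following Python sketch, lam_total is a placeholder array containing the implied total intensity sampled once per minute:
\begin{verbatim}
import numpy as np

def drift_and_vol(lam_total, dt=1.0 / 90.0):
    # lam_total: implied lambda_1 + lambda_2 on a regular time grid,
    # with dt expressed in units of the 90 minute game.
    dlog = np.diff(np.log(lam_total))
    mu_hat = dlog.mean() / dt                    # drift, per 90 minutes
    sigma_hat = dlog.std(ddof=1) / np.sqrt(dt)   # vol, per sqrt(90 minutes)
    return mu_hat, sigma_hat
\end{verbatim}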
\begin{figure}[t]
\begin{centering}
\includegraphics[width=0.55\paperwidth]{plots/Soccer-Euro2012-Fixtures17June-PortugalvNetherlandsCalibrationError.png}
\par\end{centering}
\protect\caption{\label{fig:residual}Calibration error during the game. Calibration
error is defined as the average distance of all 31 calibrated bet
values from the market mid prices in units of bid-ask spread. A formal
definition is given by Equation \ref{eq:calibration}. Note that the
calibration error for this particular game is usually between 1 and
2 bid-ask spreads which is a reasonably good result, given that the
model has only 2 free parameters to explain all 31 bet values.}
\end{figure}
\begin{figure}[t]
\begin{centering}
\includegraphics[width=0.55\paperwidth]{plots/Soccer-Euro2012-Fixtures17June-PortugalvNetherlandsModelParams.png}
\par\end{centering}
\protect\caption{\label{fig:Calibrated-intensities}Calibrated model parameters, also
referred to as implied intensities during the game. Formally, this
is equal to the minimizer $\lambda_{t}^{1},\lambda_{t}^{2}$ of Equation
\ref{eq:calibration}. The bands show the parameter uncertainties
estimated from the bid-ask spreads of the market values of the bets.
Note that the intensities appear to have an increasing trend and also
fluctuate over time.}
\end{figure}
\begin{table}[t]
\centering \begin{tabular}{|l|c|} \hline \textbf{Game} & \textbf{Calibration Error}\tabularnewline\hline Denmark v Germany & 1.65\tabularnewline\hline Portugal v Netherlands & 1.18\tabularnewline\hline Spain v Italy & 2.21\tabularnewline\hline Sweden v England & 1.58\tabularnewline\hline Italy v Croatia & 1.45\tabularnewline\hline Germany v Italy & 1.50\tabularnewline\hline Germany v Greece & 1.34\tabularnewline\hline Netherlands v Germany & 1.78\tabularnewline\hline Spain v Rep of Ireland & 1.64\tabularnewline\hline Spain v France & 1.40\tabularnewline\hhline{|=|=|} Average & 1.57\tabularnewline\hline Standard deviation & 0.27\tabularnewline\hline \end{tabular}
\protect\caption{\label{tab:Calibration-errors}Average calibration errors in units of bid-ask spread as shown
in Figure \ref{fig:residual} have been calculated for multiple games
of the UEFA Euro 2012 Championship and are shown in this table. Note
that the mean of the averages is just 1.57 bid-ask spreads with a standard deviation
of 0.27 which shows that the model fit is reasonably good for
the games analysed.}
\end{table}
\begin{table}[t]
\centering \begin{tabular}{|l|c|c|} \hline \textbf{Game} & \textbf{Drift [$1/90\it{min}$]} & \textbf{Vol [$1/\sqrt{90\it{min}}$]}\tabularnewline\hline Denmark v Germany & 0.36 & 0.28\tabularnewline\hline Portugal v Netherlands & 0.49 & 0.44\tabularnewline\hline Spain v Italy & 0.60 & 0.76\tabularnewline\hline Sweden v England & 0.58 & 0.59\tabularnewline\hline Italy v Croatia & 0.82 & 0.60\tabularnewline\hline Germany v Italy & 0.76 & 0.39\tabularnewline\hline Germany v Greece & 0.65 & 0.66\tabularnewline\hline Netherlands v Germany & 0.43 & 0.32\tabularnewline\hline Spain v Rep of Ireland & 0.32 & 0.78\tabularnewline\hline Spain v France & 0.48 & 0.25\tabularnewline\hhline{|=|=|=|} Average & 0.55 & 0.51\tabularnewline\hline Standard deviation & 0.16 & 0.19\tabularnewline\hline \end{tabular}
\protect\caption{\label{tab:ModelparamDriftVol}Average drift and volatility of total
log-intensities estimated for multiple games of the UEFA Euro 2012
Championship. Note that the drift term is positive for all games which
is consistent with the empirical observation of increasing goal frequencies
as the game progresses.}
\end{table}
\section{Hedging with Next Goal bets}\label{sec:nextgoal}
\begin{figure}[t]
\begin{centering}
\includegraphics[width=0.55\paperwidth]{plots/multiplot_MATCH_ODDS.png}
\par\end{centering}
\protect\caption{\label{fig:ReplicationPerformance}Replicating the Match Odds home, away and draw contracts
using Next Goal home and away contracts as hedging instruments. The left column shows the replication performance
with the dashed line showing the value of the original Match Odds contracts and the continuous line showing the value of
the replicating portfolio. The right column shows the weights of the replicating portfolio with the dashed line
showing the weight of the Next Goal home contract and the dotted line showing the weight of the Next Goal away contract.}
\end{figure}
In this section we demonstrate market completeness and we show that
Next Goal bets are natural hedging instruments that can be used to
dynamically replicate and hedge other bets.
Recall that according to Proposition \ref{prop:replication-from-anything}
any European bet $X_{t}$ can be replicated by dynamically trading
in two linearly independent instruments $Z_{t}^{1}$ and $Z_{t}^{2}$:
\begin{equation}
X_{t}=X_{0}+\int_{0}^{t}\psi_{s}^{1}dZ_{s}^{1}+\int_{0}^{t}\psi_{s}^{2}dZ_{s}^{2}
\end{equation}
where the portfolio weights $\psi_{t}^{1},\psi_{t}^{2}$ are equal
to the solution of the equation
\begin{equation}
\left(\begin{array}{cc}
\delta_{1}Z_{t}^{1} & \delta_{1}Z_{t}^{2}\\
\delta_{2}Z_{t}^{1} & \delta_{2}Z_{t}^{2}
\end{array}\right)\left(\begin{array}{c}
\psi_{t}^{1}\\
\psi_{t}^{2}
\end{array}\right)=\left(\begin{array}{c}
\delta_{1}X_{t}\\
\delta_{2}X_{t}
\end{array}\right),\label{eq:replication_equation-1}
\end{equation}
where the values of the finite difference operators $\delta$ (Definition
\ref{def:deltas_def}) can be computed from Proposition \ref{prop:european_closedform}
using the calibrated model intensities. Equation \ref{eq:replication_equation-1}
tells us that the change in the replicating portfolio must match the
change of the bet value $X_{t}$ in case either team scores a goal.
This approach is analogous to delta hedging in the Black Scholes framework.
The two bets that we use as replicating instruments are the Next Goal home
and the Next Goal away bets. These bets settle during the game in a way such that
when the home team scores a goal the price of the Next Goal home bet jumps to 1 and the price of the Next Goal away bet jumps
to zero and vice versa for the away team. After the goal the bets reset and trade again at their regular market price. The values of the bets are:
\begin{eqnarray}
Z^{\it{NG_1}}_{t} & = & \frac{\lambda_{1}}{\lambda_{1}+\lambda_{2}}\left[1-e^{-\left(\lambda_{1}+\lambda_{2}\right)\left(T-t\right)}\right] \\
Z^{\it{NG_2}}_{t} & = & \frac{\lambda_{2}}{\lambda_{1}+\lambda_{2}}\left[1-e^{-\left(\lambda_{1}+\lambda_{2}\right)\left(T-t\right)}\right].
\end{eqnarray}
The matrix of deltas, that is, the changes of the contract values in case of a goal as defined in Definition \ref{def:deltas_def}, is:
\begin{equation}
\left(\begin{array}{cc}
\delta_{1}Z_{t}^{NG_1} & \delta_{1}Z_{t}^{NG_2}\\
\delta_{2}Z_{t}^{NG_1} & \delta_{2}Z_{t}^{NG_2}
\end{array}\right) =
\left(\begin{array}{cc}
1 - Z_{t}^{NG_1} & - Z_{t}^{NG_2}\\
- Z_{t}^{NG_1} & 1 - Z_{t}^{NG_2}
\end{array}\right)
\end{equation}
The reason for choosing Next Goal bets as hedging instruments is that these bets are linearly independent (see Definition \ref{def-linearindependence}), that is the delta matrix is non-singular even if there is a
large goal difference between the two teams. Note that this is an advantage compared to using the Match Odds bets as hedging instruments: in case one team leads by several goals,
it is almost certain that the team will win. In that case the value of the Match Odds bets goes close to 1 for the given team and 0 for the other team. An additional goal does not change the values
significantly, therefore the delta matrix becomes nearly singular and the bets are not suitable for hedging because the portfolio weights blow up.
This is never the case with Next Goal bets which can therefore be used as natural hedging instruments.
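Putting the pieces together, the hedge weights of Equation \ref{eq:replication_equation-1} are obtained by solving a 2-by-2 linear system at every time step. The following Python sketch reuses the european_value helper and the home win payoff from the earlier sketches; the intensities are placeholder values:
\begin{verbatim}
import numpy as np

def next_goal_values(t, T, lam1, lam2):
    decay = 1.0 - np.exp(-(lam1 + lam2) * (T - t))
    return lam1 / (lam1 + lam2) * decay, lam2 / (lam1 + lam2) * decay

def hedge_weights(payoff, t, T, n1, n2, lam1, lam2):
    # Deltas of the bet to be replicated.
    x = european_value(payoff, t, T, n1, n2, lam1, lam2)
    dx = np.array([european_value(payoff, t, T, n1 + 1, n2, lam1, lam2) - x,
                   european_value(payoff, t, T, n1, n2 + 1, lam1, lam2) - x])
    # Delta matrix of the two Next Goal bets.
    z1, z2 = next_goal_values(t, T, lam1, lam2)
    D = np.array([[1.0 - z1, -z2],
                  [-z1, 1.0 - z2]])
    return np.linalg.solve(D, dx)   # psi_1, psi_2

psi = hedge_weights(home_win, 2/3, 1.0, 1, 1, 1.4, 1.2)
\end{verbatim}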
We used the Portugal vs. Netherlands game from Section \ref{sec:examplegame} to replicate the values of the three Match Odds bets, using the Next Goal bets as
hedging instruments. Figure \ref{fig:ReplicationPerformance} shows the values of the original Match Odds bets along with the values of the replicating portfolios
(left column) and the replicating portfolio weights (right column).
Figure \ref{fig:ReplicationErrorNoGoals} shows the jumps of contract
values against the jumps of replicating portfolio values at times when a goal
was scored. This figure contains several different types of bets, that is not only Match Odds bets,
but also Over/Under and Correct Score bets. The figure also contains all 3 goals scored during the
Portugal vs. Netherlands game. It can be seen that the jumps of the original
contract values are in line with the jumps of the replicating portfolio
values with a correlation of 89\%. Table \ref{tab:ReplicationCorrelation}
shows these correlations for multiple games of the UEFA Euro 2012 Championship. It can be seen that the
correlations are reasonably high for all games with an average of
80\% and a standard deviation of 19\%.
\begin{figure}[t]
\begin{centering}
\includegraphics[width=0.55\paperwidth]{plots/Soccer-Euro2012-Fixtures17June-PortugalvNetherlandsReplicationWithGoals.png}
\par\end{centering}
\protect\caption{\label{fig:ReplicationErrorNoGoals}Jumps of actual contract values
(horizontal axis) versus jumps of replicating
portfolio values (vertical axis) at times of goals scored during the Portugal vs. Netherlands game. The changes are
computed between the last traded price before a goal and the first
traded price after a goal, for all goals. The figure contains Match Odds, Over/Under and Correct Score bets.
Next Goal home and away bets were used as hedging instruments to build the replicating portfolios.
Note that the value changes of the replicating portfolios correspond reasonably well to the value
changes of the original contracts with a correlation of 89\%.}
\end{figure}
\begin{table}[t]
\centering \begin{tabular}{|l|c|} \hline \textbf{Game} & \textbf{Correlation}\tabularnewline\hline Denmark vs. Germany & 79\%\tabularnewline\hline Portugal vs. Netherlands & 89\%\tabularnewline\hline Spain vs. Italy & 97\%\tabularnewline\hline Italy vs. Croatia & 47\%\tabularnewline\hline Spain vs. France & 86\%\tabularnewline\hline Germany vs. Italy & 99\%\tabularnewline\hline Germany vs. Greece & 60\%\tabularnewline\hline Netherlands vs. Germany & 93\%\tabularnewline\hline Spain vs. Rep of Ireland & 98\%\tabularnewline\hline Sweden vs. England & 50\%\tabularnewline\hhline{=|=} Average & 80\%\tabularnewline\hline Standard deviation & 19\%\tabularnewline\hline \end{tabular}
\protect\caption{\label{tab:ReplicationCorrelation}Correlation between the jumps of
bet values and jumps of replicating portfolios at times of goals for
all bets of a game.}
\end{table}
\iffalse
\section{Possible extensions of the model}\label{sec:extension}
In order to account for the deviations observed, the constant Poisson
model can be extended in several ways. In this section we discuss
a set of possible extensions of the model.
\subsection{Time-Inhomogeneous Intensity}
The simplest extension that accounts for the drift observed in the
implied intensities is to use deterministic, but time-dependent intensities.
If the time-dependent intensity of team $i\in\left\{ 1,2\right\} $
is denoted by $\lambda_{t}^{i}$, then all that needs to be done in
order to incorporate this in the above model is to change $\lambda_{i}$
to $\lambda_{t}^{i}=\frac{1}{T-t}\int_{t}^{T}\lambda_{i}\left(\hat{t}\right)d\hat{t}$
in the formulae of Appendix \ref{sec:Valuation-of-Bets}. It can be
shown that the results of section \ref{sec:Maths} still hold within
this model. The number of parameters would increase depending on the
parametrization of the intensity function, but the number of instruments
to build a replicating portfolio would remain 2. For example, one
of the most obvious choices is to use an exponential function of the
form
\[
\lambda_{t}^{i}=\lambda_{0}^{i}e^{\alpha_{i}t}
\]
where $\alpha_{i}$ is the rate of intensity change of team $i$.
\subsection{Local Intensity}
Another possible extension is to use a deterministic, but state-dependent
intensity surface that is dependent on the current number of goals
and time, formally:
\[
\lambda_{i}\left(N_{t}^{1},N_{t}^{2},t\right).
\]
This is similar to local volatility models developed by \citet{dupire1994pricing}
and others where volatility depends on the stock price. To calibrate
such a model to market prices, the number of calibration instruments
depends on the number of parameters that define the intensity surface.
However, it can be shown that by assuming the same number of underlying
assets, a unique risk-neutral measure exists also in this model, and
therefore the model is arbitrage-free and complete. To actually compute
the value of bets within this model, most of the simple analytical
formulae are lost due to the increased complexity, however prices
can be computed on a grid according to an equation similar to Equation
\ref{prop:PIDE} or Monte-Carlo techniques can also be applied. The
model suggested by \citet{dixon1998birth} belongs to this category.
\subsection{Stochastic Intensity}
Another possible extension is to mimic stochastic volatility models,
for example the Heston model \citet{heston1993closed} by assuming
a stochastic process that drives the intensities. These processes
are known as doubly stochastic Poisson processes or Cox processes
and are widely used in modelling credit derivatives, see for example
\citet{cox1955some}, \citet{cox1985theory} and \citet{lando1998cox}.
One possibility is to use a Cox-Ingersoll-Ross process (also known
as Feller process) as suggested by \citet{jottreau2009cir} which
has the following form:
\begin{equation}
d\lambda_{t}=a\left(b-\lambda_{t}\right)dt+\sigma\sqrt{\lambda_{t}}dW,
\end{equation}
Using a stochastic intensity model has three potential advantages.
First, introducing stochastic intensities can account for the change
in intensities observed in Section \ref{sub:Calibration} which turned
out to be the major drawback of the constant intensity model. It can
be shown that a unique martingale measure exists also within this
model, if the number of underlying assets is increased to 4. Therefore,
a hedging portfolio would contain 4 assets where the additional 2
assets would allow for hedging against changes in intensities.
Second, both within the Heston and the Cox-Ingersoll-Ross model, some
analytical or semi-analytical formulae are available for the values
of some derivatives which suggests that similar formulae might be
also available for some football bets.
Third, if the characteristic time of the Feller process $1/a$ is
small enough compared to the length of the game $T$ and the process
therefore has enough time to reach its asymptotic distribution, then
the time-$T$ distribution of the intensities can be assumed to be the Gamma
distribution (see for example Equation 20 in \citet{cox1985theory}).
Because a compound Poisson distribution with a Gamma distributed intensity
is equal to the negative binomial distribution, therefore the marginal
distributions of the final scores of each team are distributed according
to the negative binomial distribution. This is something that has
been observed empirically earlier by authors including \citet{moroney1943facts,reep1968skill,reep1971skill},
however to our knowledge no consistent stochastic model has been suggested
so far that explains this effect.
\fi
\section{Conclusions}\label{sec:conclusion}
In this paper we have shown that the Fundamental Theorems of Asset
Pricing apply to the market of in-play football bets if the scores
are assumed to follow independent Poisson processes of constant intensities.
We developed general formulae for pricing and replication. We have
shown that the model of only 2 parameters calibrates to 31 different
bets with an error of less than 2 bid-ask spreads. Furthermore, we
have shown that the model can also be used for replication and hedging.
Overall we obtained good agreement between actual contract values
and the values of the corresponding replicating portfolios, however
we point out that hedging errors can sometimes be significant due
to the fact that the implied intensities are in practice not constant.
\section*{Funding}
PD and TA acknowledge support of the Economic and Social Research
Council (ESRC) in funding the Systemic Risk Centre (ES/K002309/1).
\bibliographystyle{rAMF}
\chapter*{Acknowledgements}
\label{sec:acknowledgements}
\addcontentsline{toc}{chapter}{\protect\numberline{}Acknowledgements}
\paragraph{}
I would like to thank all the people and institutions that have been close to me during these four years of my degree.
\paragraph{}
Among them, I would like to highlight the work carried out by the \emph{Universidad de Valladolid} in general, and by the \emph{Escuela de Ingeniería Informática de Valladolid} in particular, for their collaboration whenever it was required. I would also like to thank the company \emph{Brooktec S.L.} for hosting me during my internship.
\paragraph{}
Many thanks to all the professors who have always been there to welcome me into their offices whenever I needed their help. In particular, many thanks to \emph{Manuel Barrio Solórzano} for supervising this work.
\paragraph{}
I would also like to thank my family for their unconditional support, both financial and emotional. In particular, my mother, for all those stressful days on which she always put up with my bad mood and worries with a smile.
\paragraph{}
Finally, I have to highlight the support of all the people who have stood by me during these years: my classmates, without whose company all of this would have been much harder; my lifelong friends, for never ceasing to be there; and all those people who have been putting up with my dramas this last year, they know who they are.
\end{document}
\chapter{Algorithms Applied to Graphs}
\label{chap:graphs}
\section{Introduction}
\label{sec:graphs_intro}
\paragraph{}
Over the last decades, a great deal of effort has been devoted to the study of phenomena based on interrelations between elements. This can be understood as the analysis of a network of interconnections modelled as arrows that relate a set of points. A large number of everyday situations can be represented in this way, which provides a rich mathematical formalism on which to solve the different problems that arise over such networks.
\paragraph{}
These networks of interconnections appear in very different settings, such as cities and the roads that connect them, people and the friendships between them, telephones and the calls made between them, or web pages and the links used to navigate from one to another. The study of these situations is of great interest in modern societies, since it allows performance to be improved through the extraction of non-obvious information by means of different techniques, which leads to cost reductions and a higher degree of satisfaction for the users of such services.
\paragraph{}
However, using a richer representation language entails additional costs that simpler models do not incur, which requires sophisticated techniques to tackle the problems one intends to solve. Moreover, the exponential growth of the technological infrastructure has brought with it a large increase in the size of these networks. Some notable examples in the context of the internet are the IPv6 protocol, which offers $2^{128}$ possible addresses to communicate with, or the case of the social network \emph{Facebook}, with one trillion friendship relations, as reported in \emph{One trillion edges: Graph processing at facebook-scale} \cite{ching2015one}.
\paragraph{}
As the size of the networks grows, a reasonable strategy is to use approximate solutions for the problems posed on them, through which one aims to find a solution close to the exact one (admitting a maximum deviation of $\epsilon$, which holds with probability $\delta$), which has the advantage of a significant reduction of both the time and the space cost of solving the problem.
\paragraph{}
This chapter aims to describe the different possible alternatives for speeding up the resolution of problems on massive networks using approximate techniques. Therefore, a formal description of the graph representation model is given first in Section \ref{sec:graph_formalism}. Next, we describe a model for designing algorithms that solve problems on graphs while making the most of the (limited) memory space and reducing the number of accesses to the storage space, namely the \emph{semi-streaming model}, discussed in Section \ref{sec:semi_streaming_model}. The next topic covered in this chapter concerns techniques that try to reduce the complexity of a massive network by removing relations between its points through the use of \emph{Spanners} and \emph{Sparsifiers}, in Section \ref{sec:spanners_sparsifiers}. Afterwards, a brief description is given of several well-known problems for which solutions have been found in the \emph{semi-streaming model}, in Section \ref{sec:graph_problems}. Finally, a brief conclusion is presented in Section \ref{sec:graph_conclusions}.
\section{Formal Definition}
\label{sec:graph_formalism}
\paragraph{}
This section describes the basic concepts needed to follow the study of problems modelled as \emph{graphs}. The formal description of these structures is based on the lecture notes of the \emph{Matemática Discreta} course \cite{matematicaDiscreta2016notes} taught at the \emph{Universidad de Valladolid}, as well as those of the equivalent course (\emph{Discrete Mathematics CS 202} \cite{aspnes2013notes}) taught by \emph{Aspnes} at \emph{Yale University}.
\paragraph{}
\textbf{Graph Theory} is the discipline that studies structures composed of vertices and edges from a mathematical perspective. Vertices represent objects or elements, while edges correspond to the relations between vertices. A graph $G$ is therefore defined as the pair formed by the set of vertices $V = \{ v_1, v_2, ..., v_n \}$ and the set of edges $E = \{ e_1, e_2, ..., e_m \}$, such that $e_j = (v_{i_1}, v_{i_2})$ represents the edge connecting vertex $v_{i_1}$ with vertex $v_{i_2}$. Note that the graph is therefore composed of $n$ vertices and $m$ edges, and can be written as $G = (V, E)$. Figure \ref{img:graph_example} shows a graphical representation of an undirected graph with $6$ vertices and $7$ edges.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{graph-example}
\caption{Example of an \emph{Undirected Graph}. (Taken from \cite{wiki:Graph_(discrete_mathematics)})}
\label{img:graph_example}
\end{figure}
\paragraph{}
Graphs in which the edge $e_j$ represents both directions of the relation, that is, $v_{i_1}$ is related to $v_{i_2}$ and $v_{i_2}$ is related to $v_{i_1}$, are called \emph{undirected graphs}, whereas when the relation is not reciprocal we speak of \emph{directed graphs}. When each edge $e_j$ has an associated weight $w_j \in W = \{ w_1, w_2, ..., w_m\}$, $G$ is said to be a \emph{weighted graph} and is denoted $G=(V, E, W)$; when all edges are assumed to have the same weight, $W$ is omitted from the notation.
\paragraph{}
When a vertex $v_{i_1} \in V$ is directly related to another vertex $v_{i_2} \in V$, that is, there exists an edge $e_j \in E$ connecting them ($e_j = (v_{i_1}, v_{i_2})$), the edge $e_j$ is said to be \emph{incident} on those vertices. Likewise, $v_{i_1}$ and $v_{i_2}$ are said to be \emph{adjacent} to each other.
\paragraph{}
Regarding the set of edges incident on each vertex, its cardinality is called the \emph{degree} of the vertex. In directed graphs one distinguishes between the \emph{in-degree} (incoming edges) and the \emph{out-degree} (outgoing edges). The notation $d(v_i)$ refers to the degree of the $i$-th vertex, $d^+(v_i)$ to its \emph{in-degree} and $d^-(v_i)$ to its \emph{out-degree}. Note, therefore, that the following property holds: $d(v_i) = d^+(v_i) + d^-(v_i)$.
\paragraph{}
A \emph{path} is a sequence of edges $P_p = \{ e_{k_1}, e_{k_2}, ..., e_{k_p}\}$ such that the destination vertex of the $k$-th edge is the source vertex of edge $k+1$. Note that the value $p$ indicates the \emph{length} of the path. When the destination vertex of edge $e_{k_p}$ coincides with the source vertex of $e_{k_1}$, the path is called a \emph{cycle} and is denoted $C_p$.
\paragraph{}
$K_n$ denotes the graph composed of $n$ vertices and $n \cdot (n-1)$ edges, such that each vertex is connected to all the others. Graphs satisfying this property are called complete graphs on $n$ vertices. Note that the number of edges is halved in the case of undirected graphs.
\paragraph{}
When, upon studying the structure of a graph, the set of vertices can be split into two disjoint subsets $V_1$ and $V_2$ such that for every edge $e_j = (v_{i_1}, v_{i_2})$ the vertex $v_{i_1}$ belongs to the subset $V_1$ and the vertex $v_{i_2}$ belongs to the subset $V_2$, then we speak of a \emph{bipartite graph}. An example of this situation is shown in figure \ref{img:bipartite_graph_example}. Note that the concept extends easily to more than two subsets, in which case the graph is called \emph{$k$-partite}. These graphs are very useful to model problems such as those arising at companies like \emph{Netflix}, where $V_1$ may consist of the set of users while $V_2$ represents the multimedia content. Each edge can then be understood as a view by user $v_{i_1} \in V_1$ of content $v_{i_2} \in V_2$.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{bipartite-graph-example}
\caption{Example of a \emph{Bipartite Graph}. (Taken from \cite{wiki:Graph_(discrete_mathematics)})}
\label{img:bipartite_graph_example}
\end{figure}
\paragraph{}
A \emph{subgraph} $H$ is the graph composed of a subset of the vertices and edges of a graph $G$. Note that whenever a vertex is removed, all of its incident edges must be removed as well. Mathematically: $H \subseteq G$, so that $H_V \subseteq G_V$ and $H_E \subseteq G_E$.
\paragraph{}
From the point of view of the transformations that can be applied to graphs, an \emph{isomorphism} from the graph $G =(V_G, E_G)$ to the graph $H = (V_H, E_H)$ is a bijective map over the vertices, $f: V_G \rightarrow V_H$, such that for every edge $(u, v) \in E_G$ we have $(f(u), f(v)) \in E_H$, so that the structure of $G$ is preserved in $H$. In that case $G$ and $H$ are said to be isomorphic. If the bijectivity condition of the \emph{isomorphism} is relaxed and only injectivity is required, we speak of a \emph{homomorphism}. This can be seen as a transformation under which the structure of $G$ is not fully preserved: $H$ is no longer equivalent to $G$, but contains a \emph{subgraph} corresponding to it.
\paragraph{}
Transformations are interesting when dealing with massive graphs, since they are used to reduce the complexity when size makes the graphs intractable. In this setting, one is interested in transformations that reduce the size of the graph while keeping its structure as similar as possible to the original. The most notable ones are \emph{Spanners} (discussed in section \ref{sec:spanners}) and \emph{Sparsifiers} (section \ref{sec:sparsifiers}).
\subsection{Representation Methods}
\label{sec:representation_methods}
\paragraph{}
There are several strategies to represent a graph as a data structure, each with different advantages and drawbacks, both in space and in access time. The choice among them is made taking the structure of the graph into account. This section discusses adjacency matrices, adjacency lists and the Laplacian matrix, which enjoys a set of interesting properties that can be very useful when solving some problems.
\subsubsection{Adjacency Matrix}
\label{sec:adjacency_matrix}
\paragraph{}
The adjacency matrix $A$ of a graph $G = (V,E)$ composed of $n=|V|$ vertices, $m=|E|$ edges and edge-weight set $W$ is the $n \times n$ matrix built as indicated in equation \eqref{eq:adjacency_matrix}. Note that this definition is valid both for weighted and unweighted graphs, assuming in the latter case that $w_k = 1, k \in [1, m]$.
\begin{equation}
\label{eq:adjacency_matrix}
A_{i,j} =
\begin{cases}
w_k, & (v_i, v_j) = e_k \in E\\
0, &\text{otherwise}
\end{cases}
\end{equation}
\paragraph{}
This representation strategy is appropriate when working with dense graphs (those with a large number of edges), since the number of zero entries in the matrix is small and the indexed matrix structure provides $O(1)$ access time to each position. However, $O(n^2)$ space is required to store the matrix, which is acceptable when $n^2 \approx m$, in which case this representation is close to its optimal space usage.
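\paragraph{}
As a minimal illustrative sketch (not taken from any of the referenced implementations, with freely chosen names), the following Python fragment builds the adjacency matrix of a weighted directed graph from a list of edges:
\begin{verbatim}
def adjacency_matrix(n, edges):
    """Build the n x n adjacency matrix from (u, v, w) triples."""
    A = [[0] * n for _ in range(n)]
    for u, v, w in edges:
        A[u][v] = w   # for an undirected graph, also set A[v][u] = w
    return A

# Example: 3 vertices, 2 weighted edges.
print(adjacency_matrix(3, [(0, 1, 2.5), (1, 2, 1.0)]))
\end{verbatim}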
\subsubsection{Adjacency List}
\label{sec:adjacency_list}
\paragraph{}
The alternative to encoding the graph as an adjacency matrix is known as the \emph{adjacency list}. In this case a list is maintained (possibly sorted, or using other structured strategies such as linked lists or trees to reduce access times) to store the set of edges $E$. Note that this encoding is space-optimal, $O(m)$: no representation strategy can store the exact structure of the graph using less space. However, as one may expect, the access time is $O(m)$ in the worst case. This solution is appropriate for very sparse graphs (those in which $m \ll n^2$).
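\paragraph{}
Again as a hypothetical sketch, an adjacency list can be kept as a dictionary mapping each vertex to its outgoing neighbours, so that the storage stays proportional to the number of edges:
\begin{verbatim}
from collections import defaultdict

def adjacency_list(edges):
    """Map each vertex to its list of (neighbour, weight) pairs."""
    adj = defaultdict(list)
    for u, v, w in edges:
        adj[u].append((v, w))   # add adj[v].append((u, w)) if undirected
    return adj

print(adjacency_list([(0, 1, 2.5), (1, 2, 1.0)]))
\end{verbatim}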
\subsubsection{Laplacian Matrix}
\label{sec:laplacian_matrix}
\paragraph{}
The \emph{Laplacian matrix} is a graph representation strategy that satisfies important properties which greatly simplify the resolution of several problems, among them \emph{spanning trees} (section \ref{sec:minimum_spanning_tree}); it is also used to estimate the probability distribution of \emph{random walks} (section \ref{sec:random_walks_overview}).
\paragraph{}
The construction of the \emph{Laplacian matrix} is shown in equation \eqref{eq:laplacian_matrix}, which yields an $n \times n$ matrix $L$ (just as in the case of the adjacency matrix) carrying information about the underlying graph. The diagonal of the Laplacian contains the degree of the corresponding vertex, while the remaining cells are built by placing the value $-w_k$ when there exists an edge between vertices $v_i$ and $v_j$ ($e_k = (v_i,v_j) \in E$) and $0$ when there is none. The Laplacian matrix can also be understood as $L = D - A$, where $D$ is the $n \times n$ diagonal matrix storing the degree of vertex $v_i$ at position $D_{i,i}$ and $0$ elsewhere, and $A$ is the adjacency matrix described above.
\begin{align}
\label{eq:laplacian_matrix}
L_{{i,j}} = {\begin{cases}
d(v_{i})&{\mbox{if}}\ i=j\\
-w_k&{\mbox{if}}\ i\neq j\ {\mbox{and}}\ e_k = (v_{i}, v_{j}) \in E\\
0&{\mbox{otherwise}}\end{cases}}
\end{align}
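\paragraph{}
As a small illustrative sketch (under the same assumptions as the previous fragments), the Laplacian of an undirected weighted graph can be accumulated directly from the edge list following $L = D - A$:
\begin{verbatim}
def laplacian_matrix(n, edges):
    """Compute L = D - A for an undirected weighted graph."""
    L = [[0.0] * n for _ in range(n)]
    for u, v, w in edges:
        L[u][u] += w          # (weighted) degree on the diagonal
        L[v][v] += w
        L[u][v] -= w          # -w_k off the diagonal
        L[v][u] -= w
    return L

print(laplacian_matrix(3, [(0, 1, 1.0), (1, 2, 1.0)]))
\end{verbatim}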
\paragraph{}
This representation simplifies the resolution of several problems, as indicated above. There are also several variations of the one described in this section, such as the \emph{normalized Laplacian matrix} or the \emph{random-walk Laplacian matrix}, which will be discussed in chapter \ref{chap:pagerank} for the computation of the \emph{PageRank} importance ranking.
\paragraph{}
In practice, massive graphs are very often extremely sparse. Some examples are the \emph{web graph} (a directed graph), in which each vertex represents a web site and each edge a link to another site. It is easy to see that this graph is highly sparse, since it is very unlikely that a site contains links to all the other sites of a network with millions of vertices. Something similar happens with social networks such as \emph{Facebook} (an undirected graph built from friendship relations) or \emph{Twitter} (a directed graph built from follower relations). Consequently, the adjacency matrix is a useful tool at the conceptual level, but in practice it is not used for such graphs because of the infeasibility derived from its large storage cost.
\section{The Semi-Streaming Model}
\label{sec:semi_streaming_model}
\paragraph{}
Just as with numerical and categorical data sets, in the graph setting it is also necessary to cope with the large size of the problem through new algorithm-design strategies. The \emph{streaming model}, discussed in section \ref{sec:streaming_model}, is a good alternative to speed up the search for solutions that vary over time, while working in a reduced space that better exploits the capabilities of the underlying hardware. As indicated earlier, this strategy reduces the number of accesses to the storage device by working only with the estimators kept in main memory, whose access time is much lower.
\paragraph{}
However, for graph problems this model presents greater difficulties, both in terms of space and of the number of passes over the data stream, due to the linked structure of graph representations. For this reason, variations of the original \emph{streaming model} have been defined to meet the characteristic requirements of this kind of problem. In the papers \emph{On graph problems in a semi-streaming model} \cite{feigenbaum2005graph} and \emph{Analyzing graph structure via linear measurements} \cite{ahn2012analyzing}, the authors describe such a model, which they call the \textbf{semi-streaming model}.
\paragraph{}
It is a relaxation of the standard \emph{streaming model} that allows more freedom both in space and in the number of passes over the data stream. For this reason, when the number of passes $p$ is greater than 1 ($p > 1$), the model can no longer be used in settings that require processing the information and answering queries in real time, something that was possible in the \emph{streaming model} defined previously. The \emph{semi-streaming model} is therefore an algorithm-design strategy that aims to obtain estimates over graphs in a reduced space and with a sensible disk-access pattern when the whole graph cannot be kept in main memory.
\paragraph{}
The \emph{semi-streaming model} prescribes how the graph is received as a data stream, as follows. Let $G = (V, E)$ be a directed graph (its adaptation to undirected graphs is trivial) composed of $n = |V|$ vertices and a number of edges that is unknown to the algorithm processing the stream. The number of vertices of the graph is assumed to be known \emph{a priori}, while the stream consists of the tuples representing the edges. The edge set $E$ is defined as $E = \{ e_{i_1}, e_{i_2}, ..., e_{i_j}, ..., e_{i_m} \}$, as in previous sections, so the graph $G$ contains $m = |E|$ edges (again, the algorithm processing the stream does not know this value). The edges are processed sequentially in an unknown order given by an arbitrary permutation $\{ i_1, i_2, ..., i_j, ..., i_m \}$ of $[1, m]$.
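\paragraph{}
The following Python skeleton, included only as a hypothetical illustration of this processing scheme, shows the general shape of a one-pass semi-streaming algorithm: the number of vertices is known in advance, a state whose size is expected to stay within $O(n \cdot polylog(n))$ is kept in memory, and each edge of the stream is processed exactly once:
\begin{verbatim}
def semi_streaming_pass(n, edge_stream, init_state, process_edge):
    """Generic one-pass skeleton: edges arrive in arbitrary order and
    only the in-memory state is kept between edges."""
    state = init_state(n)
    for u, v, w in edge_stream:      # single sequential pass
        process_edge(state, u, v, w)
    return state

# Example use: keep only the weighted degree of every vertex.
def add_degree(deg, u, v, w):
    deg[u] += w
    deg[v] += w

stream = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.5)]
print(semi_streaming_pass(4, stream, lambda n: [0.0] * n, add_degree))
\end{verbatim}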
\paragraph{}
Once the processing strategy of the \emph{semi-streaming model} has been described, the next step is to state the measures used to analyze the complexity of the algorithms developed in this model. In \cite{feigenbaum2005graph}, the authors define $S(n,m)$ as the space used to process the edge stream, $P(n,m)$ as the number of passes over that stream and $T(n,m)$ as the time needed to process each edge. In this setting, for an algorithm to be valid in the \emph{semi-streaming model}, $S(n,m)$ must lie within $O(n \cdot polylog(n))$. As can be seen, this space restriction is much more relaxed than the one imposed by the standard \emph{streaming model}, which requires $o(N)$. However, it must be kept in mind that a large number of graph problems require $O(n^2)$ space, so finding a solution in $O(n \cdot polylog(n))$ is a complex task, but one that yields a significant improvement.
\paragraph{}
As with the standard \emph{streaming model}, since the space used is smaller than what is needed to find exact solutions, the solutions found admit a maximum error rate bounded by $\epsilon$, which must hold with probability $\delta$. For data sets over which searching for an exact solution is not feasible, or for which a small error rate is acceptable, this algorithm-design strategy is therefore a sound alternative.
\section{Spanners and Sparsifiers}
\label{sec:spanners_sparsifiers}
\paragraph{}
Different alternatives have been proposed to speed up the computations needed to solve graph problems, many of which apply only to specific problems; those ideas are discussed in section \ref{sec:graph_problems}. This section describes techniques used to \say{summarize} or reduce the space needed to store a given graph $G$ (just as the techniques of the previous chapter did for numerical values) by transforming it into a new graph $H$, in such a way that its structure remains as similar as possible to the original with respect to different measures.
\paragraph{}
The description of these techniques is based on the ideas collected in the papers \emph{Graph stream algorithms: a survey} \cite{mcgregor2014graph} by \emph{McGregor} and \emph{Graph sketches: sparsification, spanners, and subgraphs} \cite{ahn2012graph} by \emph{Ahn et al.}, as well as the notes of \emph{Lecture 11} of the \emph{Randomized Algorithms} course \cite{harvey2011randomized} taught at the \emph{University of British Columbia}. As indicated in the previous paragraph, these techniques consist of removing vertices and/or edges in such a way that the distortion produced by these modifications is minimal.
\paragraph{}
There is a trivial approach to reducing the size of a graph, but it only offers acceptable results on dense graphs (those similar to the complete graph $K_n$). The technique consists of removing each edge with a given probability $\alpha$. With this strategy, the expected space needed to store the edges is reduced to a fraction $(1-\alpha)$ of the original. As stated, this solution is only valid when the graph is dense. When the graph is sparse, on the contrary, removing edges by uniform selection can produce a large distortion with respect to the original graph.
\begin{figure}
\centering
\includegraphics[width=0.3\textwidth]{graph-community-structure}
\caption{Example of a \emph{Sparse Graph}. (Taken from \cite{wiki:Community_structure})}
\label{img:graph_community_structure}
\end{figure}
\paragraph{}
Figure \ref{img:graph_community_structure} shows a sparse graph which, in addition, forms 3 communities (sets of vertices that are highly connected among themselves). Note that when a graph has a structure similar to the one in the figure, the strategy described in the previous paragraph is no longer suitable, because not all edges are equally relevant for preserving its structure. This is easy to see by checking how much the graph changes when one of the edges connecting two communities is removed: the structural change is significantly larger than the one caused by removing an edge connecting two vertices of the same community.
\paragraph{}
For this reason, several authors have worked on techniques to keep the structure of the generated subgraph as close as possible to that of the original graph. The most popular strategies are based on \emph{Spanners} and \emph{Sparsifiers}, described below in sections \ref{sec:spanners} and \ref{sec:sparsifiers} respectively. These techniques have been studied extensively; here, however, the description focuses on the \emph{semi-streaming model} discussed earlier. In this model, efficient solutions have been found for \emph{undirected graphs}. For \emph{directed graphs}, on the other hand, the search for efficient solutions remains an open problem.
\subsection{Spanners}
\label{sec:spanners}
\paragraph{}
A \emph{spanner} is a subgraph that preserves the distances between all pairs of vertices of the original graph up to a maximum variation rate bounded by $\alpha$. An \emph{$\alpha$-spanner} of the graph $G = (V, E)$ is therefore a subgraph $H = (V, E')$ with $E' \subset E$, built so that equation \eqref{eq:alpha_spanner} holds, where $d_G(v_{i_1},v_{i_2})$ represents the shortest-path distance between vertices $v_{i_1}$ and $v_{i_2}$ in the graph $G$.
\begin{equation}
\label{eq:alpha_spanner}
\forall v_{i_1}, v_{i_2} \in V, \ d_G(v_{i_1},v_{i_2}) \leq d_H(v_{i_1},v_{i_2}) \leq \alpha \cdot d_G(v_{i_1},v_{i_2})
\end{equation}
\paragraph{}
As equation \eqref{eq:alpha_spanner} indicates, what this strategy bounds is the error in terms of the distance between vertices. Intuitively, satisfying this property solves the problem described above for sparse graphs: if the only edge connecting two communities is removed, the shortest-path distance between vertices of different communities changes greatly, most likely exceeding the bound $\alpha$.
\paragraph{}
To build an \emph{$\alpha$-spanner} in the streaming model there is a simple algorithm that solves this problem in the cash register model (section \ref{sec:streaming_cash_register}), that is, when only insertions are allowed. This strategy is illustrated in algorithm \ref{code:basic_spanner}. In this case the solution is trivial: an edge is added to the edge set $E'$ of the graph $H$ only if the shortest-path distance between its endpoints in $H$ is greater than $\alpha$, which guarantees the \emph{$\alpha$-spanner} property.
\paragraph{}
\begin{algorithm}
\SetAlgoLined
\KwResult{$E'$ }
$E' \gets \emptyset$\;
\For{each $(u, v) \in E$}{
\If{$d_H(u,v) > \alpha$}{
$E' \gets E' \cup \{(u,v)\}$\;
}
}
\caption{Basic Spanner}
\label{code:basic_spanner}
\end{algorithm}
\paragraph{}
However, this technique requires computing the shortest path between the vertices $u$ and $v$ on each update, which costs $O(card(E'))$ time, or maintaining an auxiliary data structure storing those distances, which requires $O(n^2)$ space. The best known result for this problem is the construction of a \emph{$(2k-1)$-spanner} using $O(n^{1+1/k})$ space. This solution has been shown to be optimal with respect to precision while using a single pass over the edge stream. Its complete description can be found in the work \emph{Streaming and Fully Dynamic Centralized Algorithms for Constructing and Maintaining Sparse Spanners} \cite{elkin2007streaming} by \emph{Elkin}.
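\paragraph{}
As a hypothetical sketch of algorithm \ref{code:basic_spanner} (insertion-only, unweighted case, with hop distances computed by breadth-first search), an edge is kept only when its endpoints are still further apart than $\alpha$ in the spanner built so far:
\begin{verbatim}
from collections import defaultdict, deque

def basic_spanner(edge_stream, alpha):
    """Keep (u, v) only if dist_H(u, v) > alpha in the current spanner H."""
    adj = defaultdict(set)
    kept = []
    for u, v in edge_stream:
        if bfs_distance(adj, u, v, alpha) > alpha:
            adj[u].add(v); adj[v].add(u)
            kept.append((u, v))
    return kept

def bfs_distance(adj, src, dst, limit):
    """Hop distance from src to dst, truncated once it exceeds limit."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        x = queue.popleft()
        if x == dst or dist[x] > limit:
            break
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return dist.get(dst, float('inf'))

print(basic_spanner([(0, 1), (1, 2), (0, 2), (2, 3)], alpha=2))
\end{verbatim}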
\paragraph{}
For the general case, in which both insertions and deletions are allowed (the turnstile model described in section \ref{sec:streaming_turnstile}), the basic solution is no longer trivial. The algorithm described in \cite{elkin2007streaming} is based on generating incremental trees built from randomly selected vertices. However, this is not simple when deletions are allowed. To adapt these techniques to the turnstile model, one solution is the use of \emph{$L_0$-samplers}, described in section \ref{sec:lp_samplers}, which requires multiple passes over the edge stream.
\paragraph{}
As seen in this section, building \emph{spanners} adds an overhead to the resolution of graph problems, together with a certain error rate in terms of the distance between vertices. However, these drawbacks are in many cases outweighed by the reduction in time and space obtained when solving the problem on the resulting subgraph.
\subsection{Sparsifiers}
\label{sec:sparsifiers}
\paragraph{}
Another alternative for generating a subgraph $H$ built over the same vertex set and a subset of the edges of the graph $G$ are \emph{sparsifiers}. In this case, instead of preserving the shortest-path distance between pairs of vertices, the goal is to preserve the value of the \emph{minimum cut} between every pair of vertices, that is, the minimum number of edge removals needed to separate the vertex set into two disjoint subsets containing each of the two vertices. This quantity is denoted $\lambda_{u,v}(G)$: the number of edges to remove to form two disjoint subsets $V_{1}, V_{2}$ such that $u\in V_{1}$ and $v \in V_{2}$. Note, therefore, that in a directed graph the \emph{minimum cut} can take values in the interval $[1, n\cdot(n-1)]$. Equation \eqref{eq:sparsifier_cut} shows the formal definition of a \emph{sparsifier}, where $A$ ranges over the vertex subsets defining the cuts and $\epsilon$ is the maximum deviation allowed by the \emph{sparsifier}, so that the result is a \emph{$(1 +\epsilon)$-sparsifier}.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{graph-sparsifier}
\caption{Example of a \emph{Sparsifier}. (Taken from \cite{harvey2011randomized})}
\label{img:graph_sparsifier}
\end{figure}
\begin{equation}
\label{eq:sparsifier_cut}
\forall A \subseteq V \quad (1-\epsilon)\lambda_A(G)\leq\lambda_A(H)\leq(1+\epsilon)\lambda_A(G)
\end{equation}
\paragraph{}
The following definition conveys the underlying idea on which \emph{sparsifiers} are based. To build a \emph{$(1 +\epsilon)$-sparsifier} of $G$, the sampling probability $p$ must be chosen as indicated in equation \eqref{eq:sparsifier_p}, where $\lambda(G)$ represents the minimum cut of $G$. The proof can be found in the paper \emph{On graph problems in a semi-streaming model} \cite{feigenbaum2005graph} by \emph{Feigenbaum et al.} The problem therefore reduces to maintaining an \emph{$L_{0}$-sampler} that returns a subset of edges selected with probability $p$ from the edge stream.
\begin{equation}
\label{eq:sparsifier_p}
p \geq min\{6\lambda(G)^{-1}\epsilon^{-2}log(n),1\}
\end{equation}
\paragraph{}
A simple construction strategy for \emph{$(1 +\epsilon)$-sparsifiers} of undirected graphs is described next. It is based on selecting edges with probability $p$, defined as indicated above in equation \eqref{eq:sparsifier_p}.
\paragraph{}
\begin{algorithm}
\SetAlgoLined
\KwResult{$E'$ }
$E' \gets \emptyset$\;
\For{each $(u, v) \in E$}{
$r \gets Uniform(0,1)$\;
\If{$r < p$}{
$E' \gets E' \cup \{(u,v)\}$\;
}
}
\caption{Basic Sparsifier}
\label{code:basic_sparsifier}
\end{algorithm}
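\paragraph{}
A hypothetical Python sketch of algorithm \ref{code:basic_sparsifier} simply keeps each edge of the stream with probability $p$ (re-weighting the kept edges by $1/p$ is a common refinement, so that cut values are preserved in expectation):
\begin{verbatim}
import random

def basic_sparsifier(edge_stream, p):
    """Keep each edge independently with probability p."""
    kept = []
    for u, v in edge_stream:
        if random.random() < p:
            kept.append((u, v))
    return kept

random.seed(0)
print(basic_sparsifier([(0, 1), (1, 2), (2, 3), (3, 0)], p=0.5))
\end{verbatim}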
\paragraph{}
A more general approach to building \emph{sparsifiers} relies on the \emph{Laplacian matrix} (defined in section \ref{sec:laplacian_matrix}) of the graph $H$, which we denote $L_{H}$. \emph{Sparsifiers} built so that the property defined in equation \eqref{eq:sparsifier_spectral} holds are called \emph{spectral sparsifiers}. If the range of values of $x$ is restricted to the subset $\{ 0, 1\}^n$, equation \eqref{eq:sparsifier_cut} is recovered, since the vector $x$ then encodes the set $A$ of that equation. Spectral sparsifiers can therefore be seen as a generalization of the previous ones.
\begin{equation}
\label{eq:sparsifier_spectral}
\forall x \in \mathbb{R}^n \quad (1 - \epsilon)\, x^TL_{G}x \leq x^TL_{H}x \leq (1 + \epsilon)\, x^TL_{G}x
\end{equation}
\paragraph{}
With the \emph{spectral sparsifier} strategy, besides approximating the minimum cut $\lambda(G)$, other properties can be approximated, such as properties of random walks (described in section \ref{sec:random_walks_overview}). The work \emph{Twice-Ramanujan Sparsifiers} \cite{batson2012twice} by \emph{Batson et al.} describes a strategy to build \emph{$(1 +\epsilon)$-spectral sparsifiers} in $O(\epsilon^{-2}n)$ space.
\paragraph{}
The strategies described in this section to reduce the complexity of graphs within the \emph{semi-streaming model} provide a significant space advantage, by shrinking the edge set on which other problems are then solved. In exchange, they introduce a certain error rate.
\paragraph{}
Whenever optimality has been mentioned in this section, it has been from the point of view of \emph{undirected graphs}: for \emph{directed graphs}, no optimal methods for generating \emph{spanners} or \emph{sparsifiers} have been found yet, due to their higher complexity. The next section describes several problems for which a solution has been found in the \emph{semi-streaming model}, together with the corresponding restrictions.
\section{Graph Problems}
\label{sec:graph_problems}
\paragraph{}
The graph representation model provides a flexible framework on which a large number of situations can be modelled. A characteristic example are social networks, in which each person is a vertex of the graph while the edges represent the relations between them. The set of relations generated by the links between web pages (the \emph{web graph}) is also a clear example of a problem that can be represented as a graph. The model also captures many other situations, such as task scheduling, where each vertex represents a task and the edges indicate precedence constraints. Graphs can also be used to model the problem of finding certain objects or structures in images.
\paragraph{}
Many of these problems can be extended to a dynamic setting, in which the structure of the graph varies over time. Some examples are new friendships formed between users in a graph modelling a social network, the removal of links between pages in the case of the \emph{web graph}, last-minute changes due to external factors in task scheduling, or the extension of structure recognition to videos, which can be seen as images varying over time.
\paragraph{}
For these reasons, this section describes several graph problems in the \emph{semi-streaming model}. The problems described are generally basic in nature; they are nevertheless of great importance, since solutions to more complex problems can be built on top of them.
\paragraph{}
The rest of the section is organized as follows: subsection \ref{sec:bipartite_matchings} describes the problem of \emph{bipartite graph verification}; the \emph{triangle counting} problem is then discussed in subsection \ref{sec:counting_triangles}; the problem of finding the \emph{minimum spanning tree} is presented in subsection \ref{sec:minimum_spanning_tree}; subsection \ref{sec:graph_connected_components} discusses the \emph{connected components} problem; and finally, subsection \ref{sec:random_walks_overview} gives a brief introduction to \emph{random walks}, which will be covered in depth in the next chapter, devoted exclusively to the \emph{PageRank algorithm}. Once again, these problems are described from the perspective of the \emph{semi-streaming model}.
\subsection{Bipartite Graph Verification}
\label{sec:bipartite_matchings}
\paragraph{}
In section \ref{sec:graph_formalism}, where the formal definitions about graphs used throughout the chapter were given, \emph{bipartite graphs} were introduced: graphs whose vertex set can be split into two disjoint subsets, related to each other through edges whose two endpoints belong to different subsets. As indicated earlier, figure \ref{img:bipartite_graph_example} shows an example of a bipartite graph.
\paragraph{}
In the paper \emph{On graph problems in a semi-streaming model} \cite{feigenbaum2005graph} (in which the semi-streaming model was first presented), the authors describe an algorithm with a space cost of $O(n \cdot log(n))$, processing each edge in $O(1)$ and performing $O(log(1/\epsilon)/\epsilon)$ passes over the edge stream, that indicates whether the bipartiteness property holds.
\paragraph{}
The algorithm works as follows: an initial phase builds a \emph{maximal matching} (a selection of edges such that no two of them are incident on the same vertex). This task is carried out during the first pass over the edge stream. The remaining passes add edges to make the graph connected in such a way that no cycles are formed. If this strategy succeeds for all the edges of the stream, then the bipartiteness property holds.
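\paragraph{}
As an illustrative sketch of the first phase only (not of the full algorithm from \cite{feigenbaum2005graph}), a maximal matching can be computed greedily in a single pass over the edge stream, keeping an edge whenever neither endpoint is already matched:
\begin{verbatim}
def greedy_maximal_matching(edge_stream):
    """One-pass greedy maximal matching over an edge stream."""
    matched = set()
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

print(greedy_maximal_matching([(0, 1), (1, 2), (2, 3), (3, 0)]))
\end{verbatim}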
\paragraph{}
There are other strategies to prove that a graph is bipartite. When connectivity problems are discussed in section \ref{sec:graph_connected_components}, another strategy for validating the bipartiteness property will be presented.
\subsection{Triangle Counting}
\label{sec:counting_triangles}
\paragraph{}
One of the problems studied in depth in the \emph{semi-streaming model} for graphs is \emph{triangle counting}. The problem asks for the number of vertex triples connected to each other through three edges incident on those vertices. A more precise formulation over the graph $G=(V,E)$ follows.
\paragraph{}
Let $v_{i_1},v_{i_2},v_{i_3} \in V$ be three distinct vertices of the graph $G$ and $e_{j_1}, e_{j_2}, e_{j_3} \in E$ three distinct edges of that graph. The triple $\{e_{j_1}, e_{j_2}, e_{j_3}\}$ is called a triangle if $e_{j_1} = (v_{i_1},v_{i_2})$, $e_{j_2} = (v_{i_2},v_{i_3})$ and $e_{j_3} = (v_{i_3},v_{i_1})$. The number of distinct such triples in the graph $G$ is denoted $T_3$. This definition can be extended to figures with a larger number of vertices by replacing the value $3$ with a larger one. However, those cases are of less interest, since they provide similar information about the structure of the graph while requiring a higher computational cost.
\paragraph{}
Strategies have been found for approximate solutions to the \emph{triangle counting} problem under the restrictions of the \emph{semi-streaming model}. The first proposal, presented in \emph{Reductions in streaming algorithms, with an application to counting triangles in graphs} \cite{bar2002reductions} by \emph{Bar-Yossef et al.}, modelled the problem as counting the vertex triples, denoted $x_{\{v_{i_1},v_{i_2},v_{i_3}\}}$, whose number of connecting edges equals $3$.
\paragraph{}
Through the estimation of frequency moments (discussed in section \ref{sec:streaming_frecuency_moment_aproximation}), the number of distinct triangles can be computed as indicated in equation \eqref{eq:graph_triangles_frecuency}. The work \cite{bar2002reductions} describes an algorithm to obtain a \emph{$(1 + \epsilon)$-estimate} based on this technique with a space cost of $O(\alpha^{-2})$, where $\alpha = \epsilon / (8 \cdot m \cdot n)$.
\begin{equation}
\label{eq:graph_triangles_frecuency}
T_3 = F_0 - 1.5 F_1 + 0.5 F_2
\end{equation}
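\paragraph{}
The following Python sketch (a brute-force illustration, not the streaming algorithm itself) builds the derived stream of vertex triples used in this reduction, in which each triple $\{u,v,w\}$ appears once per edge it contains, and evaluates equation \eqref{eq:graph_triangles_frecuency} over its frequency moments:
\begin{verbatim}
from collections import Counter

def triangle_count_via_moments(n, edges):
    """Check T3 = F0 - 1.5*F1 + 0.5*F2 on the derived triple stream."""
    freq = Counter()
    for u, v in edges:
        for w in range(n):
            if w != u and w != v:
                freq[frozenset((u, v, w))] += 1
    f0 = len(freq)
    f1 = sum(freq.values())
    f2 = sum(c * c for c in freq.values())
    return f0 - 1.5 * f1 + 0.5 * f2

# Square with one diagonal: exactly 2 triangles.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(triangle_count_via_moments(4, edges))   # 2.0
\end{verbatim}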
\paragraph{}
Another proposal is to estimate $T_3$ using an \emph{$L_0$-sampler} (discussed in section \ref{sec:lp_samplers}). This technique is described in \cite{ahn2012graph} and assumes that, if satisfying the property $x_{\{v_{i_1},v_{i_2},v_{i_3}\}} = 3$ is denoted by $Y$ and modelled as a \emph{Bernoulli trial}, then it extends to a binomial distribution when all possible combinations are tried. The authors show that $\mathbb{E}[Y] = T_3/F_0$ and, since there are techniques to estimate $F_0$ (such as the \emph{Flajolet-Martin} algorithm), $T_3$ can be obtained.
\paragraph{}
The reason why triangle counting is interesting is that it is used to compute the \emph{clustering coefficient}, which we denote $C$. This quantity is computed as shown in equation \eqref{eq:clustering_coefficient_graph}. From this indicator one can obtain an estimate of the structure of the graph, that is, whether it is highly connected or, on the contrary, sparse.
\begin{equation}
\label{eq:clustering_coefficient_graph}
C = \frac{1}{n} \sum_{v\in V} \frac{T_3(v)}{\binom{deg(v)}{2}}
\end{equation}
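\paragraph{}
A small illustrative sketch of equation \eqref{eq:clustering_coefficient_graph}, again written only as an example, averages over all vertices the local ratio between the triangles through each vertex, $T_3(v)$, and the number of pairs of its neighbours:
\begin{verbatim}
from itertools import combinations

def clustering_coefficient(adj):
    """Average local clustering; adj maps v -> set of neighbours."""
    total = 0.0
    for v, neigh in adj.items():
        pairs = len(neigh) * (len(neigh) - 1) / 2
        if pairs == 0:
            continue                      # degree < 2 contributes nothing
        closed = sum(1 for a, b in combinations(neigh, 2) if b in adj[a])
        total += closed / pairs
    return total / len(adj)

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(clustering_coefficient(adj))
\end{verbatim}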
\subsection{Minimum Spanning Tree}
\label{sec:minimum_spanning_tree}
\paragraph{}
The search for the \emph{minimum spanning tree} is one of the most studied graph problems. It consists in finding a subgraph $H = (V, E')$ of the graph $G=(V, E)$ formed by all the vertices of $G$ and a subset of its edges.
\paragraph{}
The property that the edge subset $E'$ must satisfy is that there is exactly one path from any vertex to every other vertex of the graph and, in addition, the sum of the weights of the edges in $E'$ must be minimum among all subgraphs of $G$ satisfying the first condition. Figure \ref{img:minimum_spanning_tree} shows an example of the \emph{minimum spanning tree} of a weighted graph.
\paragraph{}
Note that for a \emph{minimum spanning tree} to exist, the graph $G$ must be connected. When $G$ is not connected, one speaks instead of a \emph{minimum spanning forest}. The \emph{minimum spanning tree} problem does not always have a unique solution: different subgraphs $H_i$ satisfying the property may be found using different edge subsets. The most characteristic case of this situation is searching for the \emph{minimum spanning tree} of an unweighted graph, that is, one in which all edges have the same weight.
\paragraph{}
This problem has usually been defined over undirected graphs, because of their greater simplicity and practical applicability. The formulation nevertheless extends to directed graphs, for which the problem is much harder, since for each vertex both an incoming and an outgoing edge must be maintained.
\begin{figure}
\centering
\includegraphics[width=0.3\textwidth]{minimum-spanning-tree}
\caption{Example of a \emph{Minimum Spanning Tree}. (Taken from \cite{wiki:Minimum_spanning_tree})}
\label{img:minimum_spanning_tree}
\end{figure}
\paragraph{}
The static version of the \emph{minimum spanning tree} problem has been studied extensively in the literature and a large number of alternatives exist, among which \emph{Kruskal's algorithm}, described in the paper \emph{On the shortest spanning subtree of a graph and the traveling salesman problem} \cite{kruskal1956shortest}, stands out historically, with a time complexity of $O(m \cdot log(n))$ using $O(n+m)$ space.
\paragraph{}
Probabilistic algorithms offering optimality guarantees have also been proposed. The most efficient version found to date is described in the work \emph{An optimal minimum spanning tree algorithm} \cite{pettie2002optimal} by \emph{Pettie et al.} However, due to the techniques it uses (a reduction to \emph{decision trees}), its running time cannot be expressed as an explicit function, although its optimality has been proven. The problem has also been studied from the perspective of parallelization: in the work \emph{Fast shared-memory algorithms for computing the minimum spanning forest of sparse graphs} \cite{bader2006fast}, \emph{Bader et al.} present an algorithm that, through multiprocessing, obtains solutions $5$ times faster than the optimized sequential version.
\paragraph{}
Once the problem and its current state in the static model have been described, the rest of the subsection covers it under the restrictions of the \emph{semi-streaming model}. To this end, we describe the exact algorithm proposed by \emph{Ahn et al.} in \emph{Analyzing graph structure via linear measurements} \cite{ahn2012analyzing}. This strategy is able to find the \emph{minimum spanning tree} in $O(log(n)/log(log(n)))$ passes over the edge stream while respecting the space restrictions of $O(n \cdot polylog(n))$.
\paragraph{}
The semi-streaming algorithm is initially based on \emph{Borůvka's algorithm} (a brief description can be found in \cite{wiki:Boruvkas_algorithm}), whose approach is the following: $O(log(n))$ phases are performed, in each of which every connected component selects the minimum-weight edge joining it to another component. The \emph{minimum spanning tree} is thus built incrementally until all the vertices of the graph are connected.
\paragraph{}
As indicated in the paper, \emph{Borůvka's algorithm} can easily be emulated in the semi-streaming model by emulating each phase with a pass over the stream (to find the minimum-weight edges), which entails $O(log(n))$ passes in total. The idea the algorithm uses to reduce the number of passes over the stream is to carry out the search for the minimum-weight edge of each node \say{in parallel}, taking into account the space limitations ($O(n \cdot polylog(n))$), which is possible in $O(log(n)/log(log(n)))$ passes.
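\paragraph{}
The following Python sketch shows the in-memory (non-streaming) version of \emph{Borůvka's algorithm} that the semi-streaming strategy emulates; it is included only to illustrate the phase structure described above:
\begin{verbatim}
def boruvka_mst(n, edges):
    """In-memory Boruvka: edges are (weight, u, v) triples."""
    comp = list(range(n))                  # component label of each vertex

    def find(x):
        while comp[x] != x:
            comp[x] = comp[comp[x]]
            x = comp[x]
        return x

    mst = []
    while True:
        best = {}                          # cheapest outgoing edge per component
        for w, u, v in edges:
            cu, cv = find(u), find(v)
            if cu != cv:
                for c in (cu, cv):
                    if c not in best or w < best[c][0]:
                        best[c] = (w, u, v)
        if not best:                       # every component already merged
            break
        for w, u, v in best.values():
            cu, cv = find(u), find(v)
            if cu != cv:
                comp[cu] = cv
                mst.append((w, u, v))
    return mst

print(boruvka_mst(4, [(1, 0, 1), (4, 1, 2), (3, 2, 3), (2, 3, 0), (5, 0, 2)]))
\end{verbatim}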
\paragraph{}
The \emph{minimum spanning tree} problem has practical applications in many real settings, such as the design of telecommunication or transportation networks. It has also been used in reductions related to the \emph{Travelling Salesman Problem} and the \emph{minimum cut problem}. Other uses include the analysis of graph structure, clustering problems, and the recognition of structures in images.
\subsection{Connected Components}
\label{sec:graph_connected_components}
\paragraph{}
The \emph{connected components} problem asks for the number of vertex subsets in which there is a path between every pair of vertices of the subset. From this result one can therefore determine whether a path between two vertices exists simply by checking whether they belong to the same connected component. Note that a connected graph has a single connected component containing all the vertices of the graph.
\paragraph{}
The number of connected components of a graph $G$ is denoted $cc(G)$. The classical algorithm to solve this problem requires $O(log(n))$ phases and is based on merging connected vertices. In each phase, every vertex is merged with another vertex joined to it by an incident edge, creating a \emph{super-vertex} that inherits the edges of the two merged vertices. After repeating this operation, the algorithm finishes when no more vertices can be merged. The number of resulting \emph{super-vertices} is then equal to $cc(G)$.
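\paragraph{}
As a simplified illustration (using a union-find structure instead of the explicit super-vertex construction), $cc(G)$ can be obtained in a single pass over the edge stream while keeping only $O(n)$ identifiers in memory:
\begin{verbatim}
def connected_components(n, edge_stream):
    """Count connected components with a union-find over the edge stream."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    components = n
    for u, v in edge_stream:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv                 # merge the two components
            components -= 1
    return components

print(connected_components(5, [(0, 1), (1, 2), (3, 4)]))   # 2
\end{verbatim}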
\paragraph{}
The work \emph{Analyzing graph structure via linear measurements} \cite{ahn2012analyzing} describes an algorithm for this task in the \emph{semi-streaming model} using a single pass and a space complexity of $O(n \cdot log(n))$. It relies on building sketches from \emph{$L_0$-samplers}, denoted $S_1, S_2, ..., S_t$ with $t = O(log(n))$. This entails a space cost of $O(n\cdot t \cdot log(log(n)))$, which is valid in the \emph{semi-streaming model}. The idea is to build these sketches hierarchically, so that $S_1$ represents the sketch of the edges of the vertices of the graph $G$, $S_2$ the edges of the \emph{super-vertices} generated in the first iteration, and so on, in such a way that $cc(G)$ can be obtained from $S_t$ just as in the basic algorithm.
\paragraph{}
In \cite{ahn2012analyzing} this algorithm is further extended to the \emph{$k$-edge connectivity} problem, which asks for the minimum number of edges that must be removed to disconnect the graph. The practical applications of both this problem and \emph{connected components} are useful for understanding the structure of the graph. A practical example arises in social-network graphs, where one wants to find the number of groups (or \emph{clusters}) of users isolated from the rest, as well as how robustly the users of each group are connected among themselves.
\subsection{Random Walks}
\label{sec:random_walks_overview}
\paragraph{}
A random walk is defined as the probability distribution over walks of length $l$ starting from a given initial vertex fixed \emph{a priori}. Assuming that each vertex has a probability distribution over its incident edges, this problem is intimately related to \emph{Markov chains}.
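\paragraph{}
As a minimal sketch (for a small in-memory graph, assuming uniform transition probabilities over the outgoing edges of each vertex), the distribution after $l$ steps can be obtained by propagating the probability mass step by step:
\begin{verbatim}
def random_walk_distribution(adj, start, length):
    """Distribution after `length` steps; adj maps v -> list of neighbours."""
    dist = {start: 1.0}
    for _ in range(length):
        nxt = {}
        for v, p in dist.items():
            for u in adj[v]:
                nxt[u] = nxt.get(u, 0.0) + p / len(adj[v])
        dist = nxt
    return dist

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}   # triangle graph
print(random_walk_distribution(adj, start=0, length=3))
\end{verbatim}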
\paragraph{}
The PageRank algorithm computes the stationary state of the probability distribution of random walks, with some modifications. Both \emph{random walks} and Markov chains are described in detail in chapter \ref{chap:pagerank}, devoted to the PageRank algorithm.
\section{Conclusions}
\label{sec:graph_conclusions}
\paragraph{}
Throughout the chapter, different strategies have been discussed that aim to reduce the complexity of solving graph problems, which implies a reduction of the computation time and the space required. These techniques are gaining great relevance nowadays due to the need to obtain answers within a short period of time, which improves decision making.
\paragraph{}
Due to the dynamism of the environment we live in, on many occasions it is more beneficial to find approximate solutions which, despite not yielding the optimal result, are obtained in a much shorter time, allowing a quick adaptation to change. From this point of view, larger benefits can be obtained on average, since the time overhead of exact solutions often leads to a rapid obsolescence of the results.
\paragraph{}
It is expected that, in the coming years, study and research on solving problems over massive graphs will keep growing, just as in the numerical case discussed in previous chapters. Nevertheless, because of its higher degree of difficulty, progress is slow and currently at a very early stage of development, but, as indicated, this situation is expected to change in the future.
\chapter{How Was This Document Generated?}
\label{chap:how_it_was_build}
\paragraph{}
This appendix describes both the structure and the technologies used to write this document. The visual style applied to the document has been adapted as closely as possible to the specifications provided in the \emph{teaching guide} of the \emph{Trabajo de Fin de Grado} course \cite{uva:tfg-teaching-guide}.
\paragraph{}
This document has been written using the document preparation system \LaTeX \cite{tool:latex}; specifically, the distribution for \emph{OS X} systems called \emph{MacTeX} \cite{tool:mactex}, developed by the \emph{\TeX \ User Group}, has been used. With this strategy, all compilation tasks and the generation of \emph{PDF} documents (as specified in the teaching guide) are carried out locally. This alternative has been preferred over others, such as online \LaTeX \ writing platforms like \emph{ShareLaTeX} \cite{tool:sharelatex} or \emph{Overleaf} \cite{tool:overleaf}, for reasons of flexibility, since it allows working in places where an internet connection is not available. Nevertheless, those services are a good alternative for writing documents without having to worry about installing the distribution or other aspects such as a text editor, and they also guarantee a high degree of reliability against unexpected losses.
\paragraph{}
Together with the \LaTeX \ distribution, a large number of packages have been used that extend and simplify the writing process. However, given the size of the package list, it is omitted here; it can be consulted in the corresponding \texttt{thestyle.sty} file of the document.
\paragraph{}
Since the chosen alternative was to generate the document with local tools, a text editor and a results viewer are needed. In this case \emph{Atom} \cite{tool:atom} has been used, a general-purpose text editor that stands out for being developed under a free software licence (\emph{MIT License}) and maintained by a broad community of developers, together with an extensive collection of packages that extend its functionality. To adapt the behaviour of \emph{Atom} to the needs of writing \LaTeX \ documents, the following packages have been used: \emph{latex} \cite{tool:atom-latex}, \emph{language-latex} \cite{tool:atom-language-latex} and \emph{pdf-view} \cite{tool:atom-pdf-view}, which respectively add the ability to compile \LaTeX \ files, provide the syntax highlighting and allow viewing the results.
\paragraph{}
Since the \emph{Trabajo de Fin de Grado} requires a long period of elaboration and undergoes a large number of changes, it was considered convenient to use a version control tool that allows tracking changes in an organized way. For this purpose the \emph{Git} \cite{tool:git} technology, originally developed by \emph{Linus Torvalds}, has been used. In this case, instead of relying on the local environment or a private server, the \emph{GitHub} \cite{tool:github} platform has been preferred, which offers a high degree of reliability against possible losses and hosts a large number of free software projects. Although it offers student licences that allow keeping the repository private, this was not considered necessary here, so the repository can be accessed through the following url: \url{https://github.com/garciparedes/tf_G}
\paragraph{}
Once the different technologies and tools used to produce this work have been described, the next step is to discuss the file organization. All the files used for this document (except the bibliographic references) have been included in the repository indicated above.
\paragraph{}
For the main document, located in the \texttt{/document/} directory, a modular structure has been followed, splitting the chapters, appendices and special parts such as the cover, bibliography or preface into different files, which allows easy access to each of them. Appendices and chapters have been placed in separate subdirectories. The \emph{subfiles} package has been used to combine the set of files into a single document. The root file from which the document is compiled is \texttt{document.tex}. The import of the different packages, as well as the adaptation of the document style to the imposed requirements, is done in \texttt{thestyle.sty}, while the set of required variables, such as the names of the authors, the title of the work, etc., is included in \texttt{thevars.sty}.
\paragraph{}
Regarding the summary document, which presents a panoramic view of the different disciplines related to \emph{Big Data}, a single file has been kept due to its short length. It is located in the \texttt{/summary/} directory.
\paragraph{}
Finally, another directory called \texttt{/notes/} has been added, containing different ideas written informally, as well as links to the courses, articles and websites on which the bibliographic basis of this work rests. Figure \ref{fig:repository-tree} shows the structure of the repository as a tree.
\begin{figure}
\centering
\BVerbatimInput{directory-tree.txt}
\caption{Directory tree of the repository}
\label{fig:repository-tree}
\end{figure}
\chapter{Implementation, Results and Future Work}
\label{chap:implementation}
\section{Introduction}
\label{sec:implementation_intro}
\paragraph{}
Throughout this document, different ideas have been described related to new techniques for tackling the complexity derived from the size of massive data sets, for which sophisticated techniques are needed to speed up the corresponding procedures. These concepts have been described from a theoretical perspective, setting aside implementation issues and other factors. This abstraction has made it possible to simplify the descriptions, taking only their algorithmic aspects into account.
\paragraph{}
However, the approach followed in this chapter is intended to be very different, focusing on implementation details and leaving the mathematical content aside. The aim is to describe the developed source code from the perspective of its structure and organization since, although it is an implementation meant to exemplify concepts described throughout the document, special care has been taken to write quality code that is maintainable and reusable.
\paragraph{}
Before going into the details of the implementation itself, it is necessary to explain what was intended to be achieved with it. Given the context in which it is framed (the \emph{Trabajo de Fin de Grado} of the \emph{Ingeniería Informática} degree) and the methodology followed (a \emph{research project}), this implementation is in the early stages of its development, so it does not yet have the maturity expected of production environments. Even so, continuing its development is believed to be a worthwhile task which, with the necessary hours of work, could become an interesting tool compared to other currently existing alternatives.
\paragraph{}
To understand what this implementation aims at, a similar implementation carried out with other technologies in recent years is given as an example. That implementation (and set of ideas) is known as \emph{GraphX}, a library for processing massive graphs in a distributed fashion, presented in \emph{2013} in the work \emph{GraphX: A resilient distributed graph system on Spark} \cite{xin2013graphx} by \emph{Xin et al.} It was initially developed as a set of simple utilities and procedures to ease the representation of graphs and the development of algorithms on top of them.
\paragraph{}
\emph{GraphX} is built on top of the distributed computing platform \emph{Spark}, published in the work \emph{Spark: Cluster computing with working sets} \cite{zaharia2010spark} by \emph{Zaharia et al.} This platform processes large data sets in batches, which provides great improvements over other solutions such as \emph{Hadoop}, presented in the paper \emph{The Hadoop distributed file system} \cite{shvachko2010hadoop} by \emph{Shvachko et al.}
\paragraph{}
These platforms try to abstract the idea of distributed processing and make it as transparent as possible for the user, without ever forgetting that the data sets being processed are not fully contained in a single machine, which imposes restrictions with respect to classical programming strategies, for instance in the way the storage system is read and written. There are also solutions that abstract these distributed storage tasks, such as the \emph{Google File System} \cite{ghemawat2003google} or the \emph{Hadoop File System} \cite{shvachko2010hadoop}.
\paragraph{}
What is characteristic of \emph{GraphX} is that it was developed as a graph-processing library using \emph{Spark} as its base platform while keeping both independent. That is, \emph{GraphX} uses the utilities provided by \emph{Spark}, but \emph{Spark} has no dependency on \emph{GraphX}. This can therefore be understood as a layered system, where \emph{Spark} is the lower layer and \emph{GraphX} sits in the layer immediately above it.
\paragraph{}
In this work, a similar implementation has been attempted (at a very basic level, given the time restrictions under which it was developed), likewise trying to provide an abstraction layer that models the concept of a graph on top of another high-performance computing platform. In this case, the intensive numerical computation library \emph{TensorFlow} has been chosen; it was made public in \emph{2016} in the work \emph{TensorFlow: Large-scale machine learning on heterogeneous distributed systems} \cite{abadi2016tensorflow}, developed by the research department of \emph{Google} and currently released under an open-source license.
\paragraph{}
\emph{TensorFlow} provides a \emph{framework} for implementing algorithms whose operation is based on computing operations over \emph{tensors} (a generalization of the concept of a matrix). The description of this platform is postponed until section \ref{sec:tensorflow}, since the set of technologies used for the implementation is described next.
\paragraph{}
The motivation for implementing a library that simplifies the development of graph algorithms on top of an intensive mathematical computation platform is the following: a large number of graph analytics can be computed by viewing the graph as a matrix data structure, through its \emph{adjacency matrix} (section \ref{sec:adjacency_matrix}) or other representations such as the \emph{Laplacian matrix} (section \ref{sec:laplacian_matrix}). This conceptual framework leads to algorithms with a high degree of parallelism, which, as will be seen later, the \emph{TensorFlow} platform satisfies.
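\paragraph{}
As a purely illustrative example of this matrix-centric view (a minimal \emph{NumPy} sketch with a made-up edge list, not part of the \texttt{tf\_G} code), the following fragment builds the adjacency matrix of a small directed graph and obtains the out-degree of every vertex as a row sum:
\begin{verbatim}
import numpy as np

# Hypothetical edge list of a directed graph with 4 vertices (0..3).
edges = np.array([[0, 1], [0, 2], [1, 2], [2, 0], [3, 2]])

n = 4
A = np.zeros((n, n))            # adjacency matrix
A[edges[:, 0], edges[:, 1]] = 1.0

out_degrees = A.sum(axis=1)     # row sums give the out-degree of each vertex
print(out_degrees)              # [2. 1. 1. 1.]
\end{verbatim}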
\paragraph{}
On top of this framework one can also develop optimization algorithms, such as route planning, vehicle routing or area covering, by modelling the graph conveniently and using different linear programming strategies, something that has been studied extensively in the literature.
\paragraph{}
In this case, the implementation carried out for this work is in the early stages of its development. It therefore only contains the set of utilities needed to implement the \emph{PageRank algorithm} (chapter \ref{chap:pagerank}), together with a \emph{sparsifier} (section \ref{sec:sparsifiers}) that reduces the number of edges of the graph, so that results can afterwards be compared in terms of precision.
\paragraph{}
However, as will be pointed out later, the intention is to keep working on this graph library in order to extend its functionality and develop other implementations that compute further analytics on the graph.
\paragraph{}
The rest of the chapter is organized as follows: section \ref{sec:implementation} describes the decisions taken in the implementation of the library, indicating the technologies used (section \ref{sec:used_technologies}), the services that supported the development (section \ref{sec:used_services}) and the design followed by the implementation (section \ref{sec:implementation_design}). Afterwards, section \ref{sec:future_work} points out different lines along which it would be interesting to keep working on the implementation and, finally, section \ref{sec:implementation_conclusions} gives a brief conclusion about the work carried out.
\section{Implementation}
\label{sec:implementation}
\paragraph{}
This section presents the implementation, which is intended to behave as a library of utilities that makes it easy to model the concept of a graph on top of an intensive mathematical computation platform. The implementation pays special attention to reducing external dependencies, so that it can be distributed as a compact package that can be integrated into larger systems. For this reason the package distribution system of the \emph{Python} language has been used, which simplifies that task. Before describing the decisions taken, the technologies used are briefly described below, since they influence those decisions.
\subsection{Technologies Used}
\label{sec:used_technologies}
\paragraph{}
The implementation has been developed using the \emph{Python} language, together with different libraries that extend its behaviour and give it greater functionality. In addition, the \emph{git} version control system has been used, which makes it possible to work in an orderly manner on different parts of the project.
\subsubsection{Python}
\label{sec:python}
\paragraph{}
\emph{Python} is a general-purpose language based on an imperative paradigm, but with functional programming utilities such as lambda functions or the treatment of functions as first-class values. It is object oriented and dynamically typed, which simplifies writing code but limits its safety against incorrect inputs. Python is interpreted at run time, so no compiler is required. Internally there are implementations of \emph{Python} in different compiled programming languages; in this case the development has been done on \emph{cpython}, which is based on an interpreter written in the \emph{C} language.
\paragraph{}
The \emph{PEP-484} standard (\url{https://www.python.org/dev/peps/pep-0484/}) adds a type annotation system to the language, which is not yet enforced at run time (by the interpreter) but allows static type checking. This type checking system, introduced in \emph{Python 3.5}, has been used in the implementation, so that version has been chosen as the minimum supported one.
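\paragraph{}
The following is a minimal sketch of how these annotations look (an illustrative example, not taken from the module): the hints are ignored by the interpreter at run time, but a static checker such as \emph{mypy} can verify them.
\begin{verbatim}
from typing import List

def mean(values: List[float]) -> float:
    """Return the arithmetic mean of a non-empty list of floats."""
    return sum(values) / len(values)

ratio: float = mean([1.0, 2.0, 3.0])
\end{verbatim}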
\paragraph{}
As its built-in indexed data structure, \emph{Python} provides lists that store references to generic objects. This gives great versatility, since elements of any type can be appended efficiently, but it penalizes intensive numerical computation: the values are boxed as full Python objects rather than stored contiguously as raw numbers. For this reason, external libraries that improve these costs have been used.
\subsubsection{NumPy and Pandas}
\label{sec:numpy_pandas}
\paragraph{}
To overcome the efficiency problems of \emph{Python}'s built-in indexed data structures for numerical work, there is a library that implements these structures over contiguous memory, which removes the problem. This library is known as \emph{NumPy} \cite{walt2011numpy}, and it also provides a large set of mathematical operations over its array structure. It thus makes it possible to develop algorithms with a heavy mathematical load in a very efficient and at the same time simple way, in the style of languages such as \emph{MatLab} or \emph{R}.
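\paragraph{}
As an illustrative sketch of this style of programming (not part of the module), a \emph{NumPy} array stores its elements contiguously and applies operations element-wise without explicit loops:
\begin{verbatim}
import numpy as np

x = np.arange(1000000, dtype=np.float64)   # contiguous array of doubles
y = np.sqrt(x) + 2.0 * x                   # vectorized element-wise operations
print(y[:3])                               # [0.  3.  5.41421356]
\end{verbatim}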
\paragraph{}
For some parts of the code, the \emph{Pandas} library \cite{mckinney2010data} has been used. This library is an extension of \emph{NumPy} that allows the data structure to be viewed as a data set rather than as a mathematical structure, which simplifies tasks such as reading and writing data sets from and to storage.
\subsubsection{TensorFlow}
\label{sec:tensorflow}
\paragraph{}
\emph{TensorFlow} \cite{abadi2016tensorflow} is a library initially developed by \emph{Google}. The main idea behind it is to simplify the implementation of machine learning and \emph{deep learning} algorithms, whose computational load is highly parallelizable and can be understood as operations between \emph{tensors}. The library provides interfaces for the \emph{Python}, \emph{C++}, \emph{Java} and \emph{Go} languages.
\paragraph{}
A \emph{tensor} is a generalization of the concept of a matrix to any number of dimensions. This is easily understood by noting that a tensor of rank 0 corresponds to a scalar, one of rank 1 can be seen as a vector, rank 2 corresponds to matrices, and so on. From the set of operations applicable to these \emph{tensors} a flow is created, which can be seen as a dependency graph between them. Hence the library was named \emph{TensorFlow} (a \emph{flow of tensors}).
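\paragraph{}
The following minimal sketch (using the \emph{TensorFlow} 1.x API available at the time of this work) shows tensors of ranks 0, 1 and 2 and one operation node added to the flow of operations:
\begin{verbatim}
import tensorflow as tf

scalar = tf.constant(3.0)                  # rank 0 tensor (a scalar)
vector = tf.constant([1.0, 2.0, 3.0])      # rank 1 tensor (a vector)
matrix = tf.constant([[1.0, 2.0],
                      [3.0, 4.0]])         # rank 2 tensor (a matrix)

product = tf.matmul(matrix, matrix)        # an operation node of the graph
\end{verbatim}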
\paragraph{}
\emph{TensorFlow} therefore provides data structures to represent the concept of a \emph{tensor}, as well as a set of basic operations to apply between them in order to obtain new \emph{tensors} with the results. The library can be divided into two clearly differentiated blocks: the first one is low level and corresponds to simple mathematical operations (addition, multiplication, division, exponentiation, maxima, etc.), and the second one, built on top of the first, contains the set of high-level operations that make it easy to implement machine learning algorithms, such as implementations of \emph{gradient descent} and similar concepts. Independently of these two blocks, the library also provides other utilities, such as the ability to define variables and constants, to save and restore mathematical models, and other facilities.
\paragraph{}
In this implementation, methods related to algebraic operations have been used, since the library has been employed exclusively from its mathematical perspective, completely ignoring the machine learning block, which corresponds neither to the topic of this work nor to the ideas described in previous chapters.
\paragraph{}
\emph{TensorFlow} provides performance advantages over other intensive numerical computing alternatives because it is built as a high-level interface that abstracts the user from the system where the computation is being executed. This means that the implementation can run both on a classical \emph{CPU} and on external accelerators such as \emph{CUDA} \emph{GPUs} or the \emph{Tensor Processing Units} \cite{jouppi2017datacenter} designed by \emph{Google} specifically to be used with this library.
\paragraph{}
Because of these ideas, the programming style is divided into two phases: one in which tensors, operations and the dependency relations between them are defined, building the so-called flow or graph of operations, and a second phase corresponding to the call that executes those operations. This idea makes sense both because of the large size of the input data of the operations and because of the computation model on external accelerators, which requires transfer tasks between the system and those units. By separating definition from execution, this can be optimized efficiently even in interpreted languages such as \emph{Python}, which would otherwise incur a high time penalty derived from the transfer cost.
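\paragraph{}
A minimal sketch of this two-phase style, again assuming the \emph{TensorFlow} 1.x API used during the development of this work, is the following: the first block only builds the graph of operations, and nothing is computed until the session runs it.
\begin{verbatim}
import tensorflow as tf

# Phase 1: define the operations (no computation happens here).
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0], [1.0]])
c = tf.matmul(a, b)

# Phase 2: execute the graph; the device (CPU, GPU, ...) is transparent.
with tf.Session() as sess:
    result = sess.run(c)
print(result)   # [[3.], [7.]]
\end{verbatim}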
\subsubsection{pytest}
\label{sec:pytest}
\paragraph{}
\emph{pytest} is a test-case tool for the \emph{Python} language. It is based on executing different user-defined functions containing a set of assertions (the \emph{assert} keyword in \emph{Python}), all of which must pass for the test to finish successfully. In the test cases, these assertions have been used together with the ones provided by \emph{NumPy} to check the similarity between its data structures efficiently.
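\paragraph{}
A minimal illustrative test (not one of the module's actual test cases) combining a plain \emph{pytest} function with a \emph{NumPy} assertion would look as follows:
\begin{verbatim}
import numpy as np

def normalize(v):
    """Divides a vector by its 1-norm so that its entries sum to one."""
    return v / np.abs(v).sum()

def test_normalize_sums_to_one():
    v = np.array([3.0, 1.0, 4.0])
    np.testing.assert_allclose(normalize(v).sum(), 1.0)
\end{verbatim}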
\subsubsection{sphinx}
\label{sec:sphinx}
\paragraph{}
\emph{sphinx} is a tool that extracts the documentation embedded in the code into other formats that are easier to browse, such as a website, \emph{PDF} documents and other alternatives. It relies on the internal documentation of the code. In \emph{Python}, this documentation is called a \emph{docstring} and allows a brief explanation of each block of code, as well as of the inputs and outputs of its methods. In this case the documentation style defined by \emph{Google} has been used, in order to stay as close as possible to the documentation style followed by \emph{TensorFlow}.
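\paragraph{}
As an illustrative example (the function and its parameters are made up), a \emph{Google}-style \emph{docstring} that \emph{sphinx} can render looks like this:
\begin{verbatim}
def l1_distance(x, y):
    """Computes the 1-norm distance between two vectors.

    Args:
        x: A list of floats representing the first vector.
        y: A list of floats of the same length as x.

    Returns:
        The sum of the absolute element-wise differences.
    """
    return sum(abs(a - b) for a, b in zip(x, y))
\end{verbatim}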
\subsubsection{git}
\label{sec:git}
\paragraph{}
\emph{git} is a \emph{version control system} that allows several users to work collaboratively through different development branches, which are then merged to reach a final development state. It also stores a history of all the changes made, which is of great help when it is necessary to understand the reason for past changes or to go back to a previous state if convenient. Another advantage of this tool is the ability to synchronize different systems, as well as the possibility of keeping a copy of the repository on an external server, which prevents problems caused by failures of the local machine.
\subsection{Services Used}
\label{sec:used_services}
\paragraph{}
For the development of this work, different services from external companies have been used that simplify some of the tasks inherent in the technologies described above. These tasks could be carried out independently of such services; however, using them is worthwhile because of the greater reliability they provide compared to working only on a local machine, which may fail and lose all the work. All external services have been used in their free tier, which offers enough functionality for the correct development of the project.
\paragraph{GitHub}
Provides an external \emph{git} server to which copies of the local work are pushed, increasing safety against unexpected losses. Besides the \emph{git}-based version control service, it provides other functionality such as incident management through the concept of \emph{issues}, safe branch merging through \emph{pull requests}, and project management through a \emph{kanban} board that can be configured according to the specific needs of the project.
\paragraph{Read the Docs}
Offers a service for generating and publishing websites based on the \emph{sphinx} documentation of software projects in a simple way, since it only requires the address of the \emph{git} working repository together with the environment options needed to run \emph{sphinx} on it. Publishing the code documentation so that it is easily accessible was considered convenient, which is why this service was chosen.
\paragraph{Travis CI}
Consists of an environment for running test cases previously configured by the user through libraries such as \emph{pytest} or similar ones. The service deploys a test environment previously configured by the user and then runs the indicated set of test cases. After each change in the repository, it reports whether the tests have passed or, on the contrary, failures have occurred, indicating where they happened.
\paragraph{Codecov}
Is a useful tool that determines the degree of coverage of the test cases over the implementation, which provides a good estimate of the set of lines of code being exercised by the tests. However, even though this is a good estimate, the result can only be taken as indicative, since it only considers whether some test case reaches a given line of code. It does not account for specific situations such as divisions by zero, out-of-range value checks or similar cases.
\paragraph{WakaTime}
Is a tool for tracking working hours, which reports both the proportion of time spent on each language and on each project. It is based on collecting information from code editors, so it records the time spent on that task. This is an interesting metric, but it does not take into account the time spent on learning and research tasks such as reading papers or the documentation of the libraries used during development.
\subsubsection{Astah}
\label{sec:astah}
\paragraph{}
\emph{Astah} is a piece of software for generating different diagrams related to software design. Through a user interface it allows systems to be modelled following the \emph{UML} standard. This software has been used to draw the class and component diagrams, which makes it possible to understand the relations between the implemented classes quickly and simply.
\paragraph{}
All these services have been used in connection with one another, in some cases directly and in others indirectly. The connection point between all of them is the \emph{git} version control system, which they use to relate their task to a given project. The \emph{Read the Docs} documentation system triggers a new build after each change in the \emph{GitHub} \emph{git} repository. In the same way, \emph{Travis CI} and \emph{Codecov} carry out their corresponding tasks. Finally, \emph{WakaTime} keeps constant track of the time spent on the repository. The result is a working environment that enables agile development, letting the developer focus only on research and development tasks and later validate the execution results of these services, which reduces time and costs.
\subsection{Implementation Design}
\label{sec:implementation_design}
\paragraph{}
Once the external dependencies used for the implementation have been described, the implementation itself can be discussed. To this end, several diagrams have been produced, among them a \emph{component diagram} and \emph{class diagrams} that try to make the code easier to understand from different points of view; finally, \emph{operation diagrams} generated by \emph{TensorFlow} have been added, which help to understand the behaviour of the algorithms visually.
\paragraph{}
However, before starting with the design-level description of the implementation, it is worth briefly describing the package distribution system used by the \emph{Python} language, since this strategy has been used to encapsulate the implementation and ease its use in other systems.
\paragraph{}
\emph{Python} provides a simple standard for distributing module-based packages. To use this functionality, a \texttt{/setup.py} file must be included in the directory immediately above the module to be packaged. This file records different values about the module, among them its location, its dependencies on external modules, the minimum \emph{Python} version required to use it, and other metadata such as the author name, the module name, the version or the license under which it is distributed.
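\paragraph{}
A minimal sketch of such a \texttt{setup.py} file is shown below; the concrete values (version number, dependency list) are illustrative assumptions rather than the exact content of the project's file.
\begin{verbatim}
from setuptools import setup, find_packages

setup(
    name='tf_G',
    version='0.1',                  # illustrative version number
    packages=find_packages('src'),  # the module lives under /src/
    package_dir={'': 'src'},
    install_requires=['numpy', 'pandas', 'tensorflow'],
)
\end{verbatim}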
\paragraph{}
The implementation has been encapsulated in a module named \texttt{tf\_G}. It has been named this way after the technology used for its implementation, together with its functionality related to \emph{graphs}: the acronym \texttt{tf}, used as the standard alias when importing the \emph{tensorflow} library (\texttt{import tensorflow as tf}), has been combined with a \texttt{G} to form the name \texttt{tf\_G}.
\paragraph{}
The implementation itself is located in the \texttt{/src/} directory, which in turn contains another directory with the name of the module (\texttt{tf\_G}). In addition to the implementation, which will be described next, the \texttt{/tests/} directory contains different test cases (at a very basic, almost demonstrative level) to check that the implementation works correctly. The \texttt{/examples/} directory contains different scripts that show the behaviour of the implementation in a practical way.
\subsubsection{Component Diagram}
\label{sec:component_diagram}
\paragraph{}
Figure \ref{img:component_diagram} shows the component diagram of the \texttt{tf\_G} module. As can be seen, the implementation has been divided into 3 packages named \texttt{graph} (for the implementation of graphs), \texttt{algorithms} (where the implementations of graph algorithms are located) and \texttt{utils} (which contains different general-purpose classes needed for the rest of the packages to work properly).
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth,height=0.9\textheight,keepaspectratio]{component-diagram}
\caption{Component diagram of the \texttt{tf\_G} module.}
\label{img:component_diagram}
\end{figure}
\paragraph{}
The \texttt{utils} package contains two sub-packages: one with the classes that offer the \texttt{callbacks} functionality (described below together with its class diagram), and \texttt{math}, which contains classes for mathematical utilities (in this case vector norms and convergence criteria).
\paragraph{}
The \texttt{algorithms} package contains the algorithmic implementations, with one sub-package for each of them. In this case only the \emph{PageRank} algorithm has been implemented, so it contains a single package, although this is expected to grow in the future. The \texttt{pagerank} package contains an algebraic and an iterative version of the algorithm, together with their respective transition matrices contained in the \texttt{transition} sub-package. The relation between them is described in more detail later in the corresponding class diagram.
\subsubsection{Class Diagram}
\label{sec:class_diagram}
\paragraph{}
Once the component diagram of the implementation has been described, the next step is to discuss its \emph{class diagram}. First, several factors must be taken into account, among them the peculiarities of the \emph{Python} language with respect to \emph{object orientation}. \emph{Python} has neither \emph{abstract classes} nor \emph{interfaces}. However, the language can emulate their behaviour through the concept of \emph{multiple inheritance}, which allows a class to have more than one parent class.
\paragraph{}
Based on this concept, the same ideas that in other languages such as \emph{Java} would be expressed with \emph{abstract classes} and \emph{interfaces} can be described. To do so, in addition to \emph{multiple inheritance}, the \texttt{NotImplementedError} exception has been used to signal that a descendant class has not implemented the required functionality. Alternatively, the mechanism proposed by the \texttt{abc} package could have been used, but the former option was preferred.
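\paragraph{}
A minimal sketch of this emulation technique (with generic names, not the actual classes of the module) is the following: the parent class only declares the interface and raises \texttt{NotImplementedError}, and each descendant supplies the real behaviour.
\begin{verbatim}
class AbstractNorm:
    """Plays the role of an abstract class: it only declares the method."""

    def calc(self, vector):
        raise NotImplementedError('Descendant classes must implement calc()')


class OneNorm(AbstractNorm):
    def calc(self, vector):
        return sum(abs(v) for v in vector)
\end{verbatim}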
\paragraph{}
With these remarks in place, the implementation can be discussed. Only the structure of the classes is described here; to learn about the behaviour of each particular method, the internal documentation of each class can be consulted in the source code, or online at \url{http://tf-g.readthedocs.io/en/latest/}.
\paragraph{}
The complete class diagram is shown in figure \ref{img:class_diagram}. As can be seen, it is difficult to understand how the implementation works from this diagram alone, because of the relations that overlap one another as a result of multiple inheritance. It was therefore considered a better solution to split the diagram into parts and describe it from different points of view.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth,height=0.9\textheight,keepaspectratio]{complete-class-diagram}
\caption{Complete class diagram of the \texttt{tf\_G} module.}
\label{img:class_diagram}
\end{figure}
\paragraph{}
However, figure \ref{img:class_diagram} shows one isolated class. This class is called \emph{DataSets} and is not directly related to the rest; it is supplied to ease the task of obtaining data sets that represent the edges of a graph. To this end it provides functions that generate the data structures storing those edges. In addition, the module contains several data sets that can be used to test the implementation, and this class supplies the methods to access them.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth,height=0.9\textheight,keepaspectratio]{tensorflowobject-class-diagram}
\caption{Class diagram of the relations with the abstract class \texttt{TensorFlowObject} of the \texttt{tf\_G} module.}
\label{img:tensorflowobject_class_diagram}
\end{figure}
\paragraph{}
Figure \ref{img:tensorflowobject_class_diagram} shows a subset of classes related to one another through inheritance from the \texttt{TensorFlowObject} class. The relations between them are discussed later; here, the reason for implementing this abstract class is given. Since the implementation relies heavily on the \emph{TensorFlow} library, many of these classes needed common attributes and operations, such as obtaining the result of a given \emph{TensorFlow} operation through the \texttt{run\_tf} method or identifying a variable by its name. \texttt{TensorFlowObject} supplies this functionality.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth,height=0.9\textheight,keepaspectratio]{updateedge-class-diagram}
\caption{Class diagram of the relations with the abstract classes \texttt{UpdateEdgeListener} and \texttt{UpdateEdgeNotifier} of the \texttt{tf\_G} module.}
\label{img:update_edge_diagram}
\end{figure}
\paragraph{}
The next class diagram is shown in figure \ref{img:update_edge_diagram}. It corresponds to the set of classes related to \texttt{UpdateEdgeListener} and \texttt{UpdateEdgeNotifier}. These are the classes contained in the \texttt{utils.callbacks} package, and the functionality they provide is to notify, and to be notified by, other classes of any change in the edge set of the graph they interact with.
\paragraph{}
This behaviour is very similar to the \emph{Observer} pattern; however, it cannot be given that name because the classes descending from \texttt{UpdateEdgeListener} do know the observed object. Therefore, instead of the \emph{observer-observed} terminology, \emph{listener-notifier} has been preferred. The descendant classes are given the classic \texttt{attach}, \texttt{detach} and \texttt{notify} methods in the case of \texttt{UpdateEdgeNotifier}, and \texttt{update\_edge} in the case of \texttt{UpdateEdgeListener}.
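\paragraph{}
The following sketch illustrates the \emph{listener-notifier} mechanism; the class and method names follow the diagram, but the signatures are assumptions made for the example and need not match the actual \texttt{tf\_G} code.
\begin{verbatim}
class UpdateEdgeNotifier:
    """Keeps a list of listeners and tells them about edge changes."""

    def __init__(self):
        self._listeners = []

    def attach(self, listener):
        self._listeners.append(listener)

    def detach(self, listener):
        self._listeners.remove(listener)

    def notify(self, edge, change):
        for listener in self._listeners:
            listener.update_edge(edge, change)


class UpdateEdgeListener:
    def update_edge(self, edge, change):
        raise NotImplementedError('Descendants react to edge updates here')
\end{verbatim}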
\paragraph{}
The sets of classes descending from \texttt{UpdateEdgeListener} and \texttt{UpdateEdgeNotifier} therefore have the capacity to implement their algorithms dynamically. In this case, since only the \emph{PageRank} algorithm has been implemented, it is the only algorithm with this dynamic capability. Nevertheless, users of the module can implement classes that descend from \texttt{UpdateEdgeListener} in order to be notified of changes in the graph they refer to.
\paragraph{}
The rest of the class diagrams included correspond to concrete implementations that provide functionality directly to the user. They refer to the representation of graphs on top of \emph{TensorFlow} (carried out by the \texttt{graph} package) and the computation of \emph{PageRank} (carried out in the \texttt{algorithms/pagerank} package).
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth,height=0.9\textheight,keepaspectratio]{graph-class-diagram}
\caption{Class diagram of the relations with the \texttt{Graph} class of the \texttt{tf\_G} module.}
\label{img:graph_class_diagram}
\end{figure}
\paragraph{}
Figure \ref{img:graph_class_diagram} shows the set of classes that represent a graph using the \emph{TensorFlow} library as a base. This part consists of only 3 classes: \texttt{Graph} (which represents a graph), \texttt{GraphSparsifier} (which represents a \emph{pseudo-sparsifier} of a given graph) and \texttt{GraphConstructor} (which provides different utilities to build a graph in a simple way).
\paragraph{}
As indicated before, to understand the functionality provided by each method of the \texttt{Graph} class it is more appropriate to follow the internal documentation of the classes or its published online version. Due to the time restrictions of the implementation, only the functionality needed for the development of the \emph{PageRank} algorithm has been implemented.
\paragraph{}
As for the \texttt{GraphSparsifier} class, it was referred to above as a \emph{pseudo-sparsifier} because no proof has been given about the precision of this implementation or algorithm. Because of time restrictions it has not been possible to properly implement one of the sparsifiers discussed in section \ref{sec:sparsifiers}. However, those ideas have been followed in its implementation, even though, as noted, \emph{it offers no guarantee from the analytical point of view}.
\paragraph{}
Finally, the \texttt{GraphConstructor} class provides a series of methods that allow a graph to be built on top of the library in a simple way through an intuitive interface. The \texttt{/examples/} directory contains different examples that make use of this class.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth,height=0.9\textheight,keepaspectratio]{pagerank-class-diagram}
\caption{Class diagram of the relations with the \texttt{PageRank} class of the \texttt{tf\_G} module.}
\label{img:pagerank_diagram}
\end{figure}
\paragraph{}
The last class diagram included is the one referring to the \emph{PageRank} algorithm, shown in figure \ref{img:pagerank_diagram}. Note that the relation between the \texttt{Transition} class and the \texttt{Graph} class has been omitted from this diagram, in order to reduce its complexity and improve its readability.
\paragraph{}
Regarding inheritance, both \texttt{PageRank} and \texttt{Transition} descend from \texttt{TensorFlowObject}, which, as indicated above, offers the functionality of executing the \texttt{tf.Tensor} data structure provided by the \emph{TensorFlow} library. As for the notification of modifications in the edges of the graph, \texttt{PageRank} can be notified of such changes, so that the PageRank ranking is recomputed as the edge set of the graph is modified.
\paragraph{}
It should be noted that \texttt{Transition} implements both the notifying and the notified functionality. This is because when an edge is modified, the transition matrix must be notified and, once that is done, the PageRank algorithm must in turn be notified so that it recomputes the ranking from it. Hence the need to implement both functionalities.
\paragraph{}
The classes descending from \texttt{PageRank} and \texttt{Transition} implement the computation of the \emph{PageRank} ranking algebraically (\texttt{AlgebraicPageRank} together with \texttt{TransitionMatrix}) and iteratively (\texttt{IterativePageRank} together with \texttt{TransitionResetMatrix}), respectively.
\paragraph{}
As for the \texttt{VectorNorm} class, it provides different vector norms used to compare different rankings. It is also used to compute the convergence point in the \texttt{ConvergenceCriterion} class, which is used by the iterative implementation of the \emph{PageRank} algorithm. Finally, the \texttt{Utils} class provides the functionality of generating the vertex ranking from the obtained \emph{PageRank} values, that is, it sorts the vertices.
\paragraph{}
Once the implementation has been described through the structure of its classes and modules, the next section adds diagrams of the set of operations performed to compute the result.
\subsubsection{Operations Diagram}
\label{sec:operations_diagram}
\paragraph{}
The implementation is based on developing the \emph{PageRank} algorithm on top of the intensive mathematical computation library \emph{TensorFlow}. However, the details of the algorithm's implementation cannot be appreciated through the diagrams shown in the previous sections. For that reason this section has been included, presenting the operation graphs of the algebraic and iterative implementations of the algorithm.
\paragraph{}
Properly understanding these operation trees requires familiarity with the library. However, a quick look at them gives a high-level perspective of the operations needed to compute the ranking.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth,height=0.9\textheight,keepaspectratio]{tensorflow-graph-algebraic-pagerank}
\caption{Operations diagram of the algebraic implementation of the \emph{PageRank} algorithm using \emph{TensorFlow} in the \texttt{tf\_G} module.}
\label{img:pagerank_algebraic_diagram}
\end{figure}
\paragraph{}
Figure \ref{img:pagerank_algebraic_diagram} shows the tree of operations needed to compute the \emph{PageRank} ranking algebraically. These operations correspond to the ones previously described in section \ref{sec:pagerank_algorithm_algebraic}, which was devoted entirely to the study of that algorithm.
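\paragraph{}
As a complement to the diagram, the following fragment is an illustrative \emph{NumPy} sketch of the algebraic formulation (solving the linear system $v = \beta T v + \frac{1-\beta}{n}\mathbf{1}$); it is not the \emph{TensorFlow} code of the module and it assumes that every vertex has at least one outgoing edge.
\begin{verbatim}
import numpy as np

def algebraic_pagerank(A, beta=0.85):
    """Solves (I - beta*T) v = (1-beta)/n * 1 for the PageRank vector v."""
    n = A.shape[0]
    T = (A / A.sum(axis=1, keepdims=True)).T   # column-stochastic transitions
    b = (1.0 - beta) / n * np.ones(n)
    return np.linalg.solve(np.eye(n) - beta * T, b)

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
print(algebraic_pagerank(A))   # the entries sum to one
\end{verbatim}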
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth,height=0.9\textheight,keepaspectratio]{tensorflow-graph-iterative-pagerank}
\caption{Operations diagram of the iterative implementation of the \emph{PageRank} algorithm using \emph{TensorFlow} in the \texttt{tf\_G} module.}
\label{img:pagerank_iterative_diagram}
\end{figure}
\paragraph{}
Figure \ref{img:pagerank_iterative_diagram} shows the iterative version implemented to compute \emph{PageRank}. In this case, as in the previous one, the algorithm described in section \ref{sec:pagerank_algorithm_iterative} has been followed. Note that here a loop performing the iterations is needed, controlled by a convergence criterion chosen from among those implemented in the \texttt{ConvergenceCriterion} class.
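\paragraph{}
The iterative version can likewise be sketched in a few lines of \emph{NumPy} (again an illustrative example, not the module's code): the ranking vector is updated until the 1-norm of the change falls below a tolerance, which plays the role of the convergence criterion.
\begin{verbatim}
import numpy as np

def iterative_pagerank(A, beta=0.85, tol=1e-8, max_iter=100):
    """Power iteration: v <- beta*T*v + (1-beta)/n until convergence."""
    n = A.shape[0]
    T = (A / A.sum(axis=1, keepdims=True)).T   # column-stochastic transitions
    v = np.ones(n) / n
    for _ in range(max_iter):
        v_new = beta * T.dot(v) + (1.0 - beta) / n
        if np.abs(v_new - v).sum() < tol:      # 1-norm convergence criterion
            return v_new
        v = v_new
    return v
\end{verbatim}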
\paragraph{}
Note that these diagrams represent a completely different view from the one described by the component or class diagrams, since they ignore that organization entirely and focus only on the set of operations needed to compute the algorithm.
\paragraph{}
This section has presented a series of implementation details, starting with the description of the tools used to develop the application. The dependencies on external services were then indicated. The rest of the section described the implementation through component, class and operation diagrams. Sequence diagrams showing the order of calls between methods have been omitted, since it was considered that, for an implementation of an algorithmic nature such as this one, the operations diagram generated by the \emph{TensorFlow} library conveys similar information, and it additionally shows the set of dependencies between the variables used during the computation.
\section{Future Work}
\label{sec:future_work}
\paragraph{}
The work carried out can be framed in an intermediate zone between a research project and a practical implementation project, since a large part of it has been devoted to acquiring new concepts related to the processing of large data sets. This is reflected in the first chapters of the work, markedly more theoretical than this last one, which gives details and reasoning about the implementation decisions.
\paragraph{}
Because of the limited time available, the academic context, and the fact that the work was carried out at the same time as other courses of the degree, it has not been possible to make it as extensive as desired. Moreover, the breadth of the concepts described throughout the work, as well as the iterative process that can be followed to keep working on the developed implementation, make it a project worth continuing in the future.
\paragraph{}
It would be interesting to keep working on the project in a similar way, keeping a fair balance between hours of research on techniques for reducing the complexity of massive graphs and the creation of an ecosystem of utilities that allows these techniques to be implemented, as well as new metrics such as \emph{triangle counting} or other algorithms, taking advantage of the implementation initially developed.
\paragraph{}
Many of these ideas could lead to interesting results in the future through an intensive study of the problem and an adequate mathematical background that eases the understanding of the works developed by the experts in the field, since it is believed that techniques such as the ones studied will be needed to face problems arising nowadays, such as route planning and others, which moreover have a very important dynamic component.
\paragraph{}
As in the case of the research, it is believed that continuing the implementation of a library of utilities that allows graph-based solutions to be developed on a platform such as \emph{TensorFlow} could be very interesting, just as happened with \emph{GraphX} and \emph{Spark}.
\paragraph{}
There are therefore different lines of work along which to continue in this area, all of them closely related to one another, which could lead to satisfactory results with the corresponding work and dedication.
\section{Conclusions}
\label{sec:implementation_conclusions}
\paragraph{}
This section describes the results obtained after completing the whole \textbf{Bachelor's Thesis}: both the research and study part on \emph{algorithms for Big Data}, from the \emph{streaming} model (chapter \ref{chap:streaming}) and the \emph{summarization strategies} (chapter \ref{chap:summaries}) needed to speed up the extraction of results from massive data sets, to those applied to \emph{graphs} (chapter \ref{chap:graphs}) and the study of the implemented algorithm, \emph{PageRank} (chapter \ref{chap:pagerank}).
\paragraph{}
These studies have made it possible to understand more deeply the factors that make it difficult to design algorithms whose execution remains feasible even when the data set grows drastically and queries must be answered within a short period of time. This is a complex task, which requires an extensive mathematical background to speed up the reasoning about, and understanding of, the information found.
\paragraph{}
However, despite the difficulty caused by this mathematical load, the research work has been tackled by reading a large number of scientific papers containing very precise explanations and ideas. This has served as an introduction to what the research process is like, often frustrating and complex, but also highly satisfying personally when the intended goals are achieved.
\paragraph{}
From the perspective of the research results, they have not been satisfactory in terms of discovering or analysing a new technique applicable to the research area studied throughout the work. However, this is not understood as something entirely negative, since through this process a large number of new concepts have been learned, broader than those prescribed for a \emph{graduate in Computer Science}, which are considered interesting and useful for the future.
\paragraph{}
The introduction to the research world, the learning of different strategies for reading papers (which require practice and methodology compared to other kinds of literature), and the time management of these tasks have been a set of transversal skills which, besides the learning inherently related to \emph{Big Data}, are considered extremely useful for the future. In addition, studying the problem in depth through papers in which the original authors expose their ideas has been very enriching for the subsequent implementation tasks.
\paragraph{}
Regarding the results obtained from the implementation point of view, learning how to set up a development environment that runs automated tests, generates documentation and keeps track of working hours, as well as encapsulating the implementation in a self-contained \emph{Python} package that can be distributed and installed on other systems, has been an interesting task that had not been carried out before, due to the specialization chosen in the last two years of the degree (focused mostly on algorithmic and computational aspects).
\paragraph{}
The implementation has served to understand the \emph{PageRank} algorithm in depth, as well as the underlying ideas that explain where it comes from and why it converges to a satisfactory result. As for the implementation of the \emph{sparsifiers}, it has helped to better understand the advantages and difficulties derived from adding a degree of non-determinism to a \emph{graph} while perturbing its structure as little as possible, which is a complex task but one considered very interesting for the future.
\paragraph{}
The \emph{Bachelor's Thesis} has served to understand the degree of difficulty involved in carrying out a project of greater scope than a course assignment, with a fixed deadline and a relatively long period of time for its execution. This work has required organization and daily personal perseverance in order to assimilate the large number of concepts studied, as well as the complexity of the technologies used for the implementation, many of which had not been used before. This has provided a degree of experience considered favourable and necessary for someone finishing a degree in \emph{Computer Science}. However, this work should mark a starting point on which to improve in the future from experience, since the mistakes made during its development should not be understood as something negative, but as factors that should not be repeated in similar future projects.
\end{document}
\chapter{Introduction}
\label{chap:intro}
\section{Context}
\label{sec:introduction_context}
\paragraph{}
The \emph{Bachelor's Thesis} represents the last stage towards obtaining a \emph{Bachelor's degree} in the Spanish educational model. In order for the student to present their final work before a committee, they must have completed the rest of the credits of the degree. The \emph{Bachelor's Thesis} is therefore the last barrier before becoming a graduate. In this work, the student is expected to demonstrate the abilities and knowledge acquired throughout their university education, from a practical perspective closer to what they are expected to do once they enter the labour market.
\paragraph{}
These ideas are of a general nature and do not depend on the degree being studied. However, the thesis depends strongly on the degree to which it refers. It is easy to see that the work of a student who has studied \emph{Hispanic Philology} will have little to do with that of a student whose studies belong to other areas of knowledge such as \emph{Physics}, since their competences are very different. All of them will share a common base: a prior introduction to the topic they intend to develop, possibly describing its historical context, then the development of the main topic, and finally specific conclusions.
\paragraph{}
In this case, this \emph{Bachelor's Thesis} belongs to the \emph{Bachelor's degree} in \emph{Computer Science} taught at the \emph{Escuela Técnica Superior de Ingeniería Informática} of \emph{Valladolid}, part of the \emph{Universidad de Valladolid}. For this reason, the work refers entirely to the field of computer science. However, something similar to what was described in the previous paragraph happens here as well: this degree has 3 specializations (or tracks) that separate the competences taught, so that students can reach a deeper knowledge of the discipline they prefer.
\paragraph{}
The reason for this separation during the second cycle of the degree is the enormous growth that the field has undergone in recent years: although there is a set of common knowledge that every \emph{computer engineer} should have, there comes a point where the diversification of areas makes it difficult to acquire all those concepts in depth. It therefore seems appropriate to split these disciplines into separate tracks. In the degree for which this work is carried out there are 3 specializations: \emph{Information Technologies}, \emph{Software Engineering} and \emph{Computing}.
\paragraph{}
This document will not describe each of them, nor differentiate between them, since that can be consulted on the website of the \emph{Escuela Técnica Superior de Ingeniería Informática} of \emph{Valladolid} at \url{https://www.inf.uva.es/}. However, it should be noted that this work has been carried out after following the \emph{Computing} specialization, which deals with the more theoretical, mathematical and abstract aspects of computer science, trying to leave aside the context of the problem in order to focus on finding the solution efficiently.
\paragraph{}
The reason for this explanation about the different specializations of the \emph{Bachelor's degree} in \emph{Computer Science}, as well as for the initial example about the difference in content between different \emph{Bachelor's Theses} depending on the degree, is the following:
\paragraph{}
This work has aimed to focus on the study of \emph{algorithms for Big Data, graphs and PageRank} from a mostly theoretical perspective, leaving aside aspects and questions related to their implementation. Although an implementation has been carried out as part of the work, it is of an illustrative nature and would require additional work if it were to become an implementation suitable for production environments.
\paragraph{}
This contrasts with the topics usually developed for this degree, which generally devote a greater effort to the implementation part, in many cases reaching a final product or service; that is a consequence of the competences they develop, which focus on that kind of activity. Those competences, in turn, differ from the ones acquired in the \emph{Computing} specialization, which, as indicated above, focuses mostly on the theoretical and mathematical side of solving problems efficiently.
\paragraph{}
Once this distinction has been made, the topic of this \emph{Bachelor's Thesis} can be addressed. Section \ref{sec:introduction_motivation} discusses the personal and academic motivations that led to the choice of this topic. It was considered convenient to describe the initial ideas held before starting the work, which are drastically different from those held once it has been finished; this is done in section \ref{sec:introduction_initial_ideas}. Afterwards, sections \ref{sec:introduction_big_data} and \ref{sec:introduction_graphs} give a high-level description of \emph{Big Data} and of \emph{graph} modelling respectively, since they are the main topics of this work. Finally, section \ref{sec:introduction_goals} states the goals that this work has tried to achieve.
\section{Motivation}
\label{sec:introduction_motivation}
\paragraph{}
The original reason for choosing the topic of \emph{Big Data} for this work came from consulting professor \emph{Manuel Barrio-Solórzano} (\texttt{[email protected]}), who considered a project close to the research of the underlying algorithms that make it possible to solve massive-scale problems appropriate as a \emph{Bachelor's Thesis}.
\paragraph{}
Once the way in which the work was proposed has been explained, several reasons are given below for why this topic is considered interesting and very much alive today. During the last few years great technological advances have been made, allowing the construction of computing systems whose cost is much lower and whose computing capacity is much higher.
\paragraph{}
These advances are clearly visible, both directly to users through mobile devices, smart TVs or personal computers, which offer computing capacities unimaginable only a few decades ago, and internally in the computing industry, with the construction of supercomputers such as the \emph{Sunway TaihuLight}, which doubles the capacity of its predecessor. The need to speed up intensive mathematical computation has led companies such as \emph{Google} to design their own chips specifically for that purpose, which they call \emph{Tensor Processing Units} \cite{jouppi2017datacenter}. In addition, different techniques based on machine virtualization and parallelization have allowed better use of the existing computing capacity.
\paragraph{}
All this has led to an explosion in the amount of information: year after year the amount of data generated by users grows at an astonishing rate. This has created new challenges, both in storage and retrieval and in processing and drawing new conclusions from its analysis. However, due to different factors, most notably the large size of the data and its frequently dynamic nature, which creates an underlying time window that restricts its period of validity, different works have been carried out in recent years focused on researching techniques that try to speed up this processing and analysis.
\paragraph{}
Many of the processes carried out in everyday life are based on the interrelation of objects, which implicitly generates a network of relationships that can be modelled mathematically through the concept of a graph. Such phenomena are also being studied and analysed and therefore also belong to the world of \emph{Big Data}. It is not hard to see that many of the most popular companies in the technology sector base their activity on processes of this kind. Some examples are \emph{Google}'s search over websites and the interconnections these have through links, the networks of friends formed on \emph{Facebook}, the content-similarity relationships in services such as \emph{Netflix} or \emph{Spotify}, the route-planning systems of companies such as \emph{Tesla} for their autonomous driving systems, etc. It is therefore considered interesting to study the techniques and concepts that make it possible to obtain such results efficiently.
\paragraph{}
As for the \emph{PageRank} algorithm, it was considered a suitable meeting point between \emph{Big Data} techniques for massive information processing and the mathematical model of \emph{Graphs}. Moreover, the mathematical concepts on which it is based are very useful both for understanding its behaviour and as an introduction to the area of probabilistic graphical models (\emph{Markov Chains}). Another reason for studying this algorithm is the significant impact it has had on the world of computing, drastically improving the search results obtained up to the time of its publication and turning the \emph{Google} search engine into the most widely used alternative with respect to its competitors.
\paragraph{}
After stating the reasons why the study of \emph{Algorithms for Big Data} was considered interesting, focusing especially on \emph{Graph} problems and discussing the \emph{PageRank Algorithm} in depth, it was considered appropriate to add a section describing the view held about these strategies at the beginning of the work. This is done in the next section.
\section{Initial Ideas}
\label{sec:introduction_initial_ideas}
\paragraph{}
It was considered appropriate to add a section to the document explaining the vision and prior knowledge about the topic of the work before starting the research and study process. This section therefore describes what was previously understood by \emph{Big Data}, the intuitions about the techniques used to tackle problems on massive graphs, and the \emph{PageRank} algorithm. It would have been more appropriate to write this section at the beginning of the project, so that the contrast between the initial knowledge and the knowledge acquired during the project would be much more visible and realistic. However, this task was carried out at the end of the project, so an effort is made to be as faithful as possible to the prior view.
\paragraph{}
Until the in-depth study began and courses such as \emph{Algorithms for Big Data} \cite{bigdata2015jelani}, taught by \emph{Jelani Nelson}, were followed, the view held of these techniques was very limited. They were understood as techniques to extract information more quickly from datasets stored in databases, or even in real time through data \emph{streams}. It was suspected that, in order to process such an amount of information, the implementations relied on \emph{Probabilistic Algorithms} that sped up the computations at the cost of a certain error rate. However, there was no knowledge about the strategies on which the solutions were based.
\paragraph{}
Something worth highlighting is the need to clarify the concept of \emph{Big Data}, which at the beginning of the work was conceived merely as the computation of metrics over large datasets. However, after completing the work, the understanding of this concept has broadened, leading to the conclusion that \say{\emph{Big Data} consists of all those solutions designed to tackle problems whose solution cannot be computed using only the system's memory}. Section \ref{sec:introduction_big_data}, devoted to a high-level description of \emph{Big Data}, presents the different alternatives proposed to deal with this issue.
\paragraph{}
Regarding \emph{Graph} modelling, understood as the mathematical representation of relational structures so that they can be seen as a network of interconnections between points, a certain background in the subject was already held from the beginning of this work. This knowledge was acquired from the set of courses taught in the degree, most notably \emph{Matemática Discreta} \cite{matematicaDiscreta2016notes}, in which the mathematical formalism is studied, together with a broad set of basic definitions related to properties of \emph{Graphs} and their vertices. That course describes algorithms such as \emph{Prim}'s or \emph{Kruskal}'s for solving the minimum spanning tree problem.
\paragraph{}
However, due to the general and introductory nature of that course, these algorithms are described without taking their computational cost into account, and therefore without considering the scalability of such strategies on graphs made up of trillions of edges \cite{ching2015one}. As understood throughout the development of this work, there are different strategies to cope with the large size of graphs, depending on the problem at hand. The prior intuition was the processing of suitably selected sub-graphs in order to solve problems whose solutions could then be extrapolated to the full graph. However, there was no knowledge about which properties were meant to be preserved or how these techniques were carried out.
\paragraph{}
As for the \emph{PageRank} algorithm studied in detail in this work, as in the previous cases there was only a vague intuition about how it works and about the information it provides. It was known that it was initially designed to obtain the degree of importance of a given vertex of a graph, as happens in the graph formed by websites and the links relating them (the \emph{Web Graph}). However, it was only known that the score was based on the propagation of influence between vertices, so that being related to a small number of important vertices generates more relevance than being related to a larger number of less important vertices. Despite having this view of the algorithm, there were no clear ideas about how it can be computed, its personalization capabilities, or its relationship with the concept of \emph{Markov Chains}.
\paragraph{}
After this brief explanation of the knowledge about the topic held at the beginning of the work, the next step is to state, in the following section, the goals this work is intended to achieve.
\section{Goals}
\label{sec:introduction_goals}
\paragraph{}
A series of goals were set for this work, which served as a guide for carrying it out. This task, however, was not simple, given the exploratory nature of the project. As discussed in Appendix \ref{chap:methodology}, this meant that although the work had a theme fixed \emph{a priori}, narrowing it down to a concrete, studiable scope was guided by the research process itself. The general goals set at the beginning of the work are listed below, together with an indication of the specific topics chosen.
\begin{itemize}
  \item Obtaining a panoramic view of the different techniques and strategies used to solve problems on massive datasets (\emph{Big Data}).
  \item Selecting a specific topic to be studied in greater depth, closely related to the strategies and algorithms studied from the \emph{Big Data} point of view. For this task, the field of massive \emph{Graphs} was chosen, together with the different techniques for reducing their size while keeping a similar structure of relationships.
  \item Implementing and studying a specific algorithm closely related to the rest of the work, allowing the set of concepts studied throughout the project to be put into practice. In this case, the chosen algorithm was \emph{PageRank}, because of its conceptual relationship with the \emph{Graph} model, its foundations strongly related to \emph{Statistics}, and the strong need for efficient implementations to cope with the large size of the \emph{Big Data} problems to which it is applied.
\end{itemize}
\paragraph{}
These main goals are not the only ones pursued during the work; there is a wider set of sub-goals required to fulfil them. This group includes research-oriented tasks, which require maintaining a certain level of curiosity that eases the search for new definitions. This entails reading various scientific articles, with the corresponding difficulty caused by their extremely formal tone. It also requires rigour, both in terms of comprehension and citation and in terms of keeping clear objectives that avoid wandering among a set of widely scattered topics.
\paragraph{}
These sub-goals also include the need to maintain an appropriate level of personal discipline, since both the scale of the work and the amount of time required to carry it out are considerable. Without proper organization this can cause problems derived from leaving the work until the last minute, so such organization has been included as a sub-goal.
\paragraph{}
Regarding the implementation, it was also considered appropriate to meet a series of software quality goals. The first of them is the correct behaviour of the implementation, which, given its importance, should even be taken for granted. In addition, the implementation should be designed as a self-contained module requiring only a small set of dependencies, in order to ease its installation and distribution. As for the code, special attention should be paid to the clarity of the source code, so that it is easy to read. Another goal is the creation of a test suite that validates its behaviour, as well as the inclusion of a self-documentation system that allows other users to use the implementation by following its instructions, without having to understand the underlying source code.
\paragraph{}
The following sections give a high-level overview of the different disciplines covered by the field of \emph{Big Data}, as well as massive graphs.
\section{Big Data}
\label{sec:introduction_big_data}
\paragraph{}
Processing massive amounts of information poses a great computational challenge, due to the high cost caused by the large size of the input. To address this issue, algorithms with sub-linear complexity ($o(N)$), especially in space, are preferred. These techniques are carried out on parallel computing paradigms, which allows the available hardware to be exploited to a greater extent.
\subsection{Streaming Algorithms}
\paragraph{}
\emph{Streaming Algorithms} are characterized by processing the instances of the dataset sequentially, with the restriction that the order of this operation must be irrelevant to the final result. The advantage they offer over other real-time alternatives, such as \emph{Online Algorithms}, is the use of statistical properties (they therefore fall within \emph{Probabilistic Algorithms}) to reduce their cost, which in return adds a certain error rate. The discovery of highly efficient methods to estimate the \emph{Frequency Moments} was a major milestone within this algorithmic category.
\subsection{Summarization Strategies}
\paragraph{}
To reduce the cost of obtaining valuable results from massive datasets, it is necessary to rely on different strategies that synthesize them, so that processing based on these structures becomes a much more affordable task. They are used on datasets of different kinds, such as \emph{real-time streams}, \emph{static databases} or \emph{graphs}. There are different techniques such as \emph{Sampling}, \emph{Histograms}, \emph{Wavelets} or \emph{Sketches}. A brief description of the latter technique follows.
\subsection{Sketch}
\paragraph{}
Sketches are data structures based on the idea of applying the same operation to each instance of the dataset (which allows their use in both static and dynamic settings) in order to collect various characteristics. \emph{Linear sketches} stand out because they can be processed in a distributed fashion. \emph{Streaming Algorithms} are used to maintain these structures, since they fit perfectly into the described context. \emph{Sketches} make it possible to answer different questions about statistical properties of the dataset. The most notable examples are: \emph{Count-Sketch}, \emph{CountMin-Sketch}, \emph{AMS Sketch}, \emph{HyperLogLog}, etc.
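\paragraph{}
As an illustration of the previous idea, the following is a minimal Python sketch of a \emph{CountMin-Sketch}; the class name, the choice of hash function and the default dimensions are illustrative assumptions rather than part of any particular library.
\begin{verbatim}
import hashlib

class CountMinSketch:
    """Minimal Count-Min Sketch: estimates element frequencies in a stream."""

    def __init__(self, width=256, depth=4):
        self.width = width          # counters per row
        self.depth = depth          # independent hash rows
        self.table = [[0] * width for _ in range(depth)]

    def _hash(self, item, row):
        digest = hashlib.md5(f"{row}:{item}".encode()).hexdigest()
        return int(digest, 16) % self.width

    def update(self, item, count=1):
        for row in range(self.depth):
            self.table[row][self._hash(item, row)] += count

    def estimate(self, item):
        # Collisions can only inflate counters, so the minimum over the rows
        # overestimates (never underestimates) the true frequency.
        return min(self.table[row][self._hash(item, row)]
                   for row in range(self.depth))

sketch = CountMinSketch()
for token in ["a", "b", "a", "c", "a"]:
    sketch.update(token)
print(sketch.estimate("a"))   # 3, or slightly more if collisions occur
\end{verbatim}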
\subsection{Dimensionality Reduction}
\paragraph{}
Algorithms using dimensionality reduction techniques are based on the intuition derived from the \emph{Johnson–Lindenstrauss} lemma, which proves the existence of functions that reduce the spatial dimension with a bounded distortion ratio. These techniques are used in algorithms for \emph{nearest neighbour search}, \emph{approximate matrix multiplication} or learning via \emph{Manifold Learning}.
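\paragraph{}
The following Python sketch shows the intuition behind the \emph{Johnson–Lindenstrauss} lemma using a Gaussian random projection; the function name and the concrete dimensions are only illustrative assumptions.
\begin{verbatim}
import numpy as np

def random_projection(X, k, seed=0):
    """Project n points from R^d down to R^k with a Gaussian random matrix.

    By the Johnson-Lindenstrauss lemma, pairwise distances are preserved up
    to a (1 +/- eps) factor with high probability when k = O(log n / eps^2).
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    R = rng.normal(size=(d, k)) / np.sqrt(k)   # scaling preserves expected norms
    return X @ R

X = np.random.rand(100, 1000)      # 100 points in 1000 dimensions
Y = random_projection(X, k=50)     # the same points in 50 dimensions
\end{verbatim}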
\subsection{Large-Scale Parallelization}
\paragraph{}
The high-level paradigm on which the processing of large-scale datasets is carried out relies heavily on parallelization techniques. The reason is the large size of the input, which does not allow its storage in the memory of a single system.
\subsection{The MapReduce Model}
\paragraph{}
The \emph{MapReduce} model has experienced exponential growth in recent years due to its high level of abstraction, which hides almost all low-level implementation concerns from the developer, and its ability to fit a large number of problems efficiently.
\subsection{Data Mining Techniques}
\paragraph{}
One of the reasons why it is necessary to investigate new sub-linear algorithms is the need to obtain valuable information from massive datasets. This phenomenon is called \emph{Data Mining}. There are two big categories: \emph{Classification} (determining a class label) and \emph{Regression} (determining a continuous value). For this purpose, different techniques are used, such as \emph{Decision Trees}, \emph{Bayesian Methods}, \emph{Neural Networks}, \emph{Support Vector Machines}, \emph{Manifold Learning}, etc.
\section{Graphs}
\label{sec:introduction_graphs}
\paragraph{}
Graphs are a representation method for solving problems from a mathematical perspective by modelling a network of objects related through interconnections. This abstraction, which sets aside the application context to focus only on the relationships and their structure, makes it possible to design simpler algorithms that take into account only the information needed to solve the problem.
\paragraph{}
Graph problems have been widely studied in the literature for a long time. However, recent years have seen a considerable growth of techniques that allow these problems to be solved at a lower cost. This yields a reduction in the time and space needed to solve them, at the cost of introducing a certain error rate.
\paragraph{}
An interesting proposal is the generation of a smaller sub-graph whose structural properties remain as close as possible to those of the graph on which the problem is to be solved. There are different techniques for this purpose, known as \emph{Spanners} and \emph{Sparsifiers}. The latest research works on the subject aim to design algorithms that apply these techniques following the same ideas as \emph{Sketches} do for numerical values.
\paragraph{}
An algorithm based on statistical concepts and applied to massive graphs is the \emph{PageRank} algorithm, which produces an importance ranking over the points of the network based solely on its interconnection structure. This ranking is intimately related to probability concepts such as \emph{Markov Chains}.
\paragraph{}
Due to the enormous amount of time that a research work containing descriptions of all the \emph{Big Data} concepts summarized in the previous sections would require, a sub-set of them has been selected. The rest of the document is therefore organized as follows: Chapter \ref{chap:streaming} describes \emph{Streaming Algorithms}. Then, Chapter \ref{chap:summaries} presents different \emph{Summarization Strategies}. Next, the perspective changes to discuss \emph{Graphs} in Chapter \ref{chap:graphs}. Afterwards, the \emph{PageRank} algorithm is described in detail in Chapter \ref{chap:pagerank}. Finally, Chapter \ref{chap:implementation} describes various implementation details and the results obtained, and draws conclusions about the work carried out.
\paragraph{}
Additionally, several appendices have been included: Appendix \ref{chap:methodology} describes the work methodology followed during the development of the project. Appendix \ref{chap:how_it_was_build} explains how this document was built using the \LaTeX{} tool. Finally, a user guide for the implementation is included in Appendix \ref{chap:user_guide}.
\chapter{The PageRank Algorithm}
\label{chap:pagerank}
\section{Introduction}
\label{sec:pagerank_intro}
\paragraph{}
The \emph{PageRank} algorithm was first named in the work \emph{The PageRank citation ranking: Bringing order to the web} \cite{page1999pagerank}, published by \emph{Larry Page} and \emph{Sergey Brin}. Its motivation was to produce an \emph{importance ranking} (or relevance ranking) over the nodes of an \emph{unweighted directed graph}, based solely on the structure of that graph.
\paragraph{}
The motivation for that work was to improve the ranking of results of the \emph{Google} web search engine they were working on. Until its publication, search systems relied on heuristics such as the number of occurrences of the keyword on which the search was based, or the number of links pointing to the page.
\paragraph{}
However, rankings based on this kind of strategy could easily be manipulated in order to reach the top positions of the search system. For example, a web page that simply repeated the same word many times would appear first in rankings based on the number of occurrences of that keyword. In the case of rankings based on the number of incoming links, it would not be difficult either to manipulate the result by creating a large number of web pages containing links to the page whose position is to be improved.
\paragraph{}
The solution proposed by \emph{Page} and \emph{Brin} to address this problem is based on generating a ranking over websites derived from the structure of the underlying graph, so that vertices (websites) with incoming edges (links) coming from other relevant vertices receive a higher score than vertices whose incoming edges relate them to less relevant vertices.
\paragraph{}
The idea behind the ranking is therefore that websites reachable from other websites considered important should appear in the top positions. This idea extends inductively over all the vertices of the graph since, as we will see in the following sections, it converges towards a stable state (or \emph{stationary distribution}, from the statistical point of view).
\paragraph{}
To ease the understanding of this idea, an example follows: suppose that in a social network such as \emph{Twitter} (which can be understood as a set of users related to each other through follow relationships, and can therefore be seen as an unweighted directed graph where the set of users corresponds to the vertices and the set of follow relationships to the edges), an ordinary user (who does not have a large number of followers) starts following an old university friend, who does not have a large number of followers either.
\paragraph{}
The \emph{Twitter} social network sends a notification to all the followers of that user indicating that he has started following his university friend. Since his number of followers is low, this action will not have a great impact and, most likely, nobody else will start following the university friend. However, suppose that our user, instead of having a small set of followers, is an influential person in the social network, followed by millions of people; then the notification of the new follow will reach many more users and the university friend will probably see his number of followers increase noticeably.
\paragraph{}
Broadly speaking, this is the idea on which the \emph{PageRank} algorithm is based: the score of a vertex of the graph depends on the relevance of the vertices with edges pointing to it.
\paragraph{}
The initial purpose of the \emph{PageRank} algorithm was to produce a ranking based on the structure of the web graph (\emph{Web Graph}); however, as will be seen throughout the chapter, the mathematical concepts on which this ranking is based can be extrapolated to any setting that can be represented as a network or graph. In the work \emph{PageRank beyond the Web} \cite{gleich2015pagerank}, \emph{Gleich} surveys the different settings to which these ideas have been applied.
\paragraph{}
Among others, the \emph{PageRank} algorithm has been applied in areas as diverse as Biology, to identify the most important cells based on the interrelations between them. It has also been applied in neuroscience for similar reasons. In literature, it has been applied to the graph generated by the citation system of research articles. Other application domains include task scheduling systems and even studies of sports results, to identify the most relevant matches.
\paragraph{}
The rest of the chapter is organized as follows: first, \emph{Random Walks} are discussed in Section \ref{sec:random_walks}, which will help to better understand the ideas on which the \emph{PageRank} ranking is based. Next, a formal definition of the problem to be solved is given in Section \ref{sec:pagerank_formal_definition}. Once the problem is understood, Section \ref{sec:pagerank_algorithm} describes formulations to solve it from an \emph{Algebraic} point of view (Section \ref{sec:pagerank_algorithm_algebraic}), an \emph{Iterative} one (Section \ref{sec:pagerank_algorithm_iterative}) and one \emph{based on Random Walks} (Section \ref{sec:pagerank_algorithm_random_walks}). The next step is to discuss how personalization can be added to the ranking, which is done in Section \ref{sec:pagerank_algorithm_personalized}. Finally, different alternatives to the \emph{PageRank} algorithm are presented in Section \ref{sec:pagerank_alternativas}, which covers \emph{HITS} (Section \ref{sec:hits}), \emph{SALSA} (Section \ref{sec:salsa}) and \emph{SimRank} (Section \ref{sec:simrank}). The chapter closes with a brief summary in Section \ref{sec:pagerank_conclusions}.
\section{Random Walks}
\label{sec:random_walks}
\paragraph{}
This section deals with the concept of \emph{Random Walks}, which is intimately related to the \emph{PageRank} algorithm. The book \emph{Randomized Algorithms} \cite{motwani2010randomized} by \emph{Motwani} and \emph{Raghavan} has been used as the bibliographic reference for this section, paying special attention to \emph{Chapter 6: Markov Chains and Random Walks}. The rest of the section describes properties related to \emph{Random Walks}, ending with an illustration of their relationship with \emph{PageRank}.
\paragraph{}
First of all, we describe what a \emph{Random Walk} is. For this purpose we refer to the unweighted directed graph $G=(V,E)$, composed of $n = card(V)$ vertices and $m = card(E)$ edges. A random walk is then a path of length $l$ starting at vertex $v_{i_1}$ and ending at $v_{i_l}$. For such a path to constitute a random walk, each step must have been generated by selecting, uniformly at random, the next vertex to visit among those adjacent to the current one. Note that this behaviour can be seen as the average way in which users browse the Internet, accessing web pages through the links they find on the page they are currently viewing. \emph{Markov Chains} are described next, given their role as a tool for studying random walks.
\subsection{Markov Chains}
\label{sec:markov_chains}
\paragraph{}
For the study of random walks, it is convenient to use the abstraction known as \emph{Markov Chains}, which are intimately related to the concepts of graph and state machine. A \emph{Markov Chain} $M$ is defined as a stochastic process consisting of $n$ possible states, which have an associated set of probabilities denoted $p_{ij}=\frac{A_{ij}}{d^-(i)}$, indicating the probability of moving from state $i$ to state $j$.
\paragraph{}
These probabilities can be represented in matrix form through a transition matrix $P$ of size $n*n$, such that position $(i,j)$ contains the value $p_{ij}$ constructed as indicated in the previous paragraph. Note that $\sum_{j}p_{ij}=1$ must hold for the probability distribution to be valid, i.e., the transition probabilities of each state must add up to $1$.
\paragraph{}
Suppose a random walk is performed on the \emph{Markov Chain} $M$ whose length $l$ is very large ($l \gg n^2$); it is then easy to see that each state will be visited more than once. However, the rate at which each state is visited will most likely not be uniform: some states will be visited many more times than others. This depends largely on the transition matrix $P$. The probability distribution over the visit rate of each node after a random walk of large length is known as the \emph{stationary distribution} and is denoted $\pi$.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{markov-chain-example}
  \caption{Example of a \emph{Markov Chain}. (Taken from \cite{sanchez2012wireless})}
\label{img:markov_chain_example}
\end{figure}
\paragraph{}
Figure \ref{img:markov_chain_example} (taken from \cite{sanchez2012wireless}) shows a Markov chain made up of 3 states, both as a directed graph and in matrix form.
\paragraph{}
The stationary distribution $\pi$ exists as long as the \emph{Markov Chain} $M$ allows at least one other state $j$ to be reached from any given state $i$. The reason is that if state $i$ is reached and it has no outgoing transitions, the walk will remain in that state for all subsequent epochs. States with this characteristic are called sinks. The second restriction for the stationary distribution $\pi$ to be computable is that the transition matrix $P$ must not be periodic, i.e., it must not contain constant-probability cycles over which the random walk would keep iterating indefinitely.
\paragraph{}
The definitions described in this section extend trivially to directed graphs simply by treating each vertex of the graph as a state and building the matrix $P$ so that $p_{ij}=\frac{A_{ij}}{d^-(i)}$, where $d^-(i)$ denotes the number of edges whose origin is vertex $i$. As will be seen later, the vector $\pi$ corresponds to the result obtained by the \emph{PageRank} algorithm on a modified transition matrix $P$.
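\paragraph{}
The following Python sketch shows how the transition matrix $P$ can be built from the adjacency matrix following the definition $p_{ij}=\frac{A_{ij}}{d^-(i)}$; the function name is an illustrative assumption, and sink vertices are simply left as zero rows since they are handled later by the \emph{PageRank} modification.
\begin{verbatim}
import numpy as np

def transition_matrix(A):
    """Row-stochastic transition matrix P with p_ij = A_ij / d^-(i)."""
    A = np.asarray(A, dtype=float)
    out_degree = A.sum(axis=1)                    # d^-(i) for every vertex
    P = np.zeros_like(A)
    nonzero = out_degree > 0
    P[nonzero] = A[nonzero] / out_degree[nonzero, None]
    return P

A = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]])
print(transition_matrix(A))
\end{verbatim}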
\subsection{Random Walk Normalized Laplacian Matrix}
\label{sec:random_walk_normalized_laplacian_matrix}
\paragraph{}
Section \ref{sec:laplacian_matrix} discussed the \emph{Laplacian Matrix}, a representation strategy that exposes different properties of the underlying graph. Here we describe a variation of it that is better suited to problems related to \emph{Random Walks}. It is denoted $L^{{{\text{rw}}}}$ and called the \emph{Random Walk Normalized Laplacian Matrix}. Its construction is shown in equation \eqref{eq:random_walk_normalized_laplacian_matrix}: position $(i,j)$ is assigned the negative of the transition probability from vertex $i$ to vertex $j$, and the diagonal entry $(i,i)$ is set to $1$ whenever the degree of the vertex is greater than $0$.
\begin{equation}
\label{eq:random_walk_normalized_laplacian_matrix}
L_{{i,j}}^{{{\text{rw}}}}:={
\begin{cases}
1&{\mbox{if}}\ i=j\ {\mbox{and}}\ d(v_{i})\neq 0\\
-{P_{ij}}&{\mbox{if}}\ i\neq j\ {\mbox{and}}\ v_{i}{\mbox{ is adjacent to }}v_{j}\\
0&{\mbox{otherwise}}.
\end{cases}}
\end{equation}
\paragraph{}
Having described \emph{Random Walks}, together with \emph{Markov Chains} and the \emph{Random Walk Normalized Laplacian Matrix}, we are now in a position to formally describe the \emph{PageRank} of a given graph, which is done in the next section. To this end, the difficulties that arise for this problem on real graphs are pointed out, as well as the solutions used to cope with them.
\section{Formal Definition}
\label{sec:pagerank_formal_definition}
\paragraph{}
The \emph{PageRank} is defined as the \emph{stationary distribution} $\pi$ of a given unweighted directed graph $G$ whose transition matrix $P$ has been slightly modified. As seen in the previous section, the \emph{stationary distribution} gives the probability of being in state $i$ during a long random walk over the \emph{Markov Chain} $M$. As also indicated, for a Markov chain to be valid there must be no \emph{sink} states (states with no possible next states).
\paragraph{}
For these reasons, in order to obtain the \emph{stationary distribution} of a graph $G$, it must not contain \emph{sink} vertices. The solution proposed by \emph{Page} and \emph{Brin} in \cite{page1999pagerank} to find the \emph{stationary distribution} or \emph{PageRank} of the graph generated by web links (the \emph{Web Graph}) is to add a certain probability with which users stop following links between web pages and access a different one by typing its URL directly. Relying on this strategy therefore solves the problem of \emph{sink} vertices, besides resembling more closely the actual behaviour of a user browsing the Internet.
\paragraph{}
In \cite{page1999pagerank}, \emph{Page} and \emph{Brin} propose modifying the transition matrix $P$ to implement the adaptation described in the previous paragraph, which is shown in equation \eqref{eq:pagerank_transition_matrix}. In this way, it models the situation in which a user who reaches a web page without links to others (a \emph{sink}) moves to another page selected uniformly at random (this is modelled by the vector $p$, built so that $p_{i} = \frac{1}{n}, \ \forall i \in [1,n]$). In addition, a parameter $\beta$ is introduced, corresponding to the probability that the user keeps following links on the current page or, on the contrary, jumps to another page chosen uniformly at random. Typically $\beta$ is set to $0.85$, although it admits any value in the interval $[0,1]$.
\begin{equation}
\label{eq:pagerank_transition_matrix}
p'_{ij} =
\begin{cases}
\beta * \frac{A_{ij}}{d^-(i)} + (1- \beta) * p_{i} & \mbox{if} \ d^-(i) \neq 0 \\
p_{i}&\mbox{otherwise}
\end{cases}
\end{equation}
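\paragraph{}
As a complement to equation \eqref{eq:pagerank_transition_matrix}, the following Python sketch builds the modified matrix $P'$; the function name and the default values are illustrative assumptions.
\begin{verbatim}
import numpy as np

def modified_transition_matrix(A, beta=0.85, p=None):
    """Modified transition matrix P' of the PageRank formulation.

    Rows with outgoing edges mix the link-following distribution with the
    jump vector p (weight 1 - beta); sink rows use p directly. The vector p
    defaults to the uniform distribution.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    p = np.full(n, 1.0 / n) if p is None else np.asarray(p, dtype=float)
    out_degree = A.sum(axis=1)
    P_prime = np.tile(p, (n, 1))                  # sink rows jump according to p
    nonzero = out_degree > 0
    P_prime[nonzero] = (beta * A[nonzero] / out_degree[nonzero, None]
                        + (1 - beta) * p)
    return P_prime
\end{verbatim}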
\paragraph{}
As indicated above, the vector $p$ represents the probability distribution of the jumps a user makes between websites without taking into account the links of the current one. In the previous paragraph it was stated that this vector is built following a uniform distribution, so the probability of jumping from one website to any other is the same. However, this action could follow a different probability distribution depending on each user of the network. For this reason, \cite{page1999pagerank} speaks of \emph{Personalized PageRank} when the vector $p$ follows a probability distribution other than the uniform one (biased towards the websites the user visits most).
\paragraph{}
In the work \emph{Topic-sensitive pagerank} \cite{haveliwala2002topic}, \emph{Haveliwala} proposes generating 16 different \emph{stationary distributions} (\emph{PageRanks}) by personalizing the vector $p$, and then combining them so that the final ranking is personalized.
\paragraph{}
Having described the transformations that must be applied to the transition matrix $P$ so that it adapts to graphs with \emph{sink} vertices, and so that it better emulates the behaviour of a user on the web graph (\emph{Web Graph}), the next step is to explain how the \emph{stationary distribution} or \emph{PageRank} of the graph can be obtained. For this purpose, the \emph{Perron–Frobenius Theorem} is described next; it gives an idea of how the computation is carried out, besides guaranteeing the convergence of the transition matrix towards a stationary state of the vector $\pi$.
\subsection{The Perron–Frobenius Theorem}
\label{sec:perron_frobenius_theorem}
\paragraph{}
The \emph{Perron–Frobenius theorem} concerns the existence of a \textbf{unique} \emph{eigenvector} for positive real square matrices. This description has been taken from the document \emph{Notes on the perron-frobenius theory of nonnegative matrices} \cite{boyle2005notes} by \emph{Boyle} (professor of mathematics at the \emph{University of Maryland}). First it is necessary to describe the concepts of \emph{eigenvector} and \emph{positive real square matrix}, and then to see that the \emph{stationary distribution} and the \emph{transition matrix} of a \emph{Markov Chain} $M$ can be viewed in this way.
\paragraph{}
A \emph{positive real square matrix} $A$ is one made up of $n$ rows and $n$ columns ($n*n$ cells) for which $\forall i,j \in [1,n]$ it holds that $A_{ij} \in \mathbb{R}$ and $A_{ij} \geq 0$. As can be seen, the \emph{modified transition matrix} $P'$ of the graph $G$ satisfies this property, since $\forall i,j \in [1,n] \ P'_{ij} \in [0,1]$.
\paragraph{}
As for the concept of an \emph{eigenvector} $\lambda$ of a matrix, it refers to a vector of $n$ columns ($1*n$) such that when it is multiplied by a given matrix $A$ the result remains the same, i.e., $\lambda = \lambda * A$ holds. Note, therefore, that this idea is equivalent to the \emph{stationary distribution} from the point of view of reaching a stable state.
\paragraph{}
The \emph{Perron–Frobenius theorem} therefore guarantees that for a \emph{positive real square matrix} $A$ there is only one such \emph{eigenvector} $\lambda$ and its entries are all positive, i.e., $\forall i \in [1,n] \ \lambda_i \geq 0$. The proof of this theorem can be found in \cite{boyle2005notes}.
\paragraph{}
Moreover, when the matrix $A$ has been normalized so that each of its rows sums to one, i.e., $\forall i \in [1,n] \ \sum_j A_{ij} = 1$, the eigenvector $\lambda$ also satisfies the normalization property ($\sum_i \lambda_i = 1$). Thanks to this result, it is possible to compute the \emph{stationary distribution} $\pi$ of a \emph{Markov Chain} as the \emph{eigenvector} of its transition matrix.
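\paragraph{}
The following Python sketch computes the stationary distribution as the eigenvector associated with eigenvalue $1$, using the \texttt{numpy} linear algebra routines; the function name is an illustrative assumption and the matrix \texttt{P\_prime} is assumed to be built as described earlier in this section.
\begin{verbatim}
import numpy as np

def pagerank_eigen(P_prime):
    """Stationary distribution as the dominant left eigenvector of P'."""
    eigenvalues, eigenvectors = np.linalg.eig(P_prime.T)
    # Column whose eigenvalue is closest to 1 (unique by Perron-Frobenius).
    idx = np.argmin(np.abs(eigenvalues - 1.0))
    pi = np.real(eigenvectors[:, idx])
    return pi / pi.sum()
\end{verbatim}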
\paragraph{}
Once the \emph{Perron–Frobenius theorem} has been described, we are ready to present the different alternatives to compute the \emph{PageRank} of a given graph, which, as indicated in this section, amounts to finding the \emph{eigenvector} of the modified transition matrix of the Markov chain. The different strategies to obtain this result are described in the next section.
\section{Basic Algorithm}
\label{sec:pagerank_algorithm}
\paragraph{}
This section describes how to obtain the \emph{PageRank} vector of a given graph $G$. To do so, 3 parameters must be fixed, listed below:
\begin{itemize}
  \item The adjacency matrix $A$, from which the structure of the graph is obtained (discussed in Section \ref{sec:adjacency_matrix}).
  \item The probability $\beta$ of continuing the random walk according to the distribution of the current vertex (discussed in the previous section).
  \item The personalization vector $p$, i.e., the probability distribution of the random jumps between vertices (also discussed in the previous section).
\end{itemize}
\paragraph{}
There are different mathematical strategies to compute the \emph{eigenvector} $\lambda$. This section discusses two of them: the first is based on solving a system of linear equations, while the second approaches the solution iteratively. As will be seen below, the algebraic strategy entails a very high computational cost, so it is not feasible on massive graphs. In those cases the iterative strategy is used, or other alternatives based on generating \emph{Random Walks}.
\subsection{Algebraic Strategy}
\label{sec:pagerank_algorithm_algebraic}
\paragraph{}
The idea of the algebraic strategy is to find the vector $\lambda$ that solves the equation $\lambda = \lambda * P'$ as a system of linear equations. This can be done following the derivation in equation \eqref{eq:pagerank_algorithm_algebraic_1}. Note that the \emph{modified transition matrix} $P'$ is not used explicitly; instead, it is represented implicitly through the operations in \eqref{eq:pagerank_algorithm_algebraic_2} and \eqref{eq:pagerank_algorithm_algebraic_3}.
\paragraph{}
To understand these equations, we first state the notation used as well as the interpretation of some operations: the symbol $\boldsymbol{I}$ denotes the identity matrix ($1$'s on the diagonal and $0$'s elsewhere) of size $n$. The symbol $d^-$ denotes a column vector of size $n$ whose position $j$ holds the number of edges whose origin is vertex $j$; this can be seen as $\forall j \in [1,n] \ \sum_i A_{ij} = d^{-}_{j}$. Regarding the operations, the division $\frac{A}{d^-}$ deserves attention because of its matrix nature: it is carried out element-wise, column by column.
\begin{align}
\label{eq:pagerank_algorithm_algebraic_1}
\lambda =& \lambda * P' \\
\label{eq:pagerank_algorithm_algebraic_2}
=& \lambda * \beta * \frac{A}{d^-} + (1-\beta)*p \\
\label{eq:pagerank_algorithm_algebraic_3}
=& \bigg(\boldsymbol{I} - \beta * \frac{A}{d^-}\bigg)^{-1} * (1-\beta)*p
\end{align}
\paragraph{}
From the operations described in equation \eqref{eq:pagerank_algorithm_algebraic_3} we therefore obtain the \emph{eigenvector} $\lambda$, which in this case is the \emph{stationary distribution} of the \emph{Markov chain} described by the modified transition matrix $P'$, and is thus equivalent to the \emph{PageRank} vector. However, computing the \emph{PageRank} vector with this strategy entails a high computational cost derived from the need to invert an $n*n$ matrix, which is unaffordable for massive graphs.
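\paragraph{}
A possible Python sketch of the algebraic strategy follows. Instead of explicitly inverting the matrix, the equivalent linear system is solved; the function name is an illustrative assumption and sink vertices are treated in a simplified way (their rows are left empty and the result is renormalized), which deviates slightly from the exact formulation above.
\begin{verbatim}
import numpy as np

def pagerank_algebraic(A, beta=0.85, p=None):
    """Exact PageRank via the linear system of the algebraic formulation."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    p = np.full(n, 1.0 / n) if p is None else np.asarray(p, dtype=float)
    out_degree = A.sum(axis=1)
    out_degree[out_degree == 0] = 1               # avoid dividing by zero on sinks
    M = A / out_degree[:, None]                   # row-normalized link matrix
    # lambda * (I - beta * M) = (1 - beta) * p, solved as a linear system.
    pi = np.linalg.solve((np.eye(n) - beta * M).T, (1 - beta) * p)
    return pi / pi.sum()
\end{verbatim}
Even with \texttt{np.linalg.solve} instead of an explicit inverse, the cubic cost in $n$ makes this approach unaffordable for massive graphs, which motivates the iterative strategy described next.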
\subsection{Iterative Strategy}
\label{sec:pagerank_algorithm_iterative}
\paragraph{}
The \emph{iterative strategy} for computing the \emph{PageRank} vector approximates it by repeatedly multiplying it by the modified transition matrix $P'$ until it reaches a stable state with respect to a given vector norm.
\paragraph{}
The first step is to compute the \emph{modified transition matrix} $P'$ from the three input parameters $(A, \beta, p)$ following the definition in equation \eqref{eq:pagerank_transition_matrix}. Once this matrix is available, the \emph{PageRank} vector can be computed following the idea of the unique \emph{eigenvector} presented in the previous section.
\paragraph{}
The algorithm for this task is shown in the figure corresponding to \emph{Algorithm \ref{code:iterative_pagerank}}. It takes as input the modified transition matrix $P'$ together with a convergence threshold $conv \in (0,1)$, which determines both the precision of the result and the number of iterations needed to reach it.
\paragraph{}
\begin{algorithm}
\SetAlgoLined
\KwResult{$\pi(t)$ }
$t \gets 0$\;
$\pi(t) \gets \frac{1}{n}*\boldsymbol{1}$\;
\Do{$||\pi(t) - \pi(t-1)|| > conv$}{
$t \gets t + 1$\;
$\pi(t) = \pi(t-1) * P'$\;
}
\caption{Iterative PageRank}
\label{code:iterative_pagerank}
\end{algorithm}
\paragraph{}
As for the \emph{PageRank} vector $\pi$, it must be initialized before starting the iteration loop. The only restriction is that its entries must sum to $1$, i.e., its 1-norm must equal 1 ($||\pi||_1 = \sum_i \pi_i = 1$) so that it remains a probability distribution. There are different heuristics aimed at reducing the number of iterations of the algorithm through the initialization of this vector; in this case, however, it has been initialized following a uniform distribution. This initialization does not determine the result, only the convergence time towards it.
\paragraph{}
As can be seen, the iteration loop consists only of multiplying the \emph{PageRank} vector by the matrix $P'$ and updating the iteration index. The next point to discuss is the convergence criterion. Different vector norms can be used, among them the 1-norm (described in the previous paragraph) or the infinity norm ($||\pi||_{\infty}=max_i\{\pi_i\}$). Here the \emph{1-norm} was considered more convenient as a convergence criterion, since the goal is to reduce all the entries of the difference vector and not only the largest one, which yields a more homogeneous approximation to the exact \emph{PageRank}.
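\paragraph{}
The following Python sketch mirrors Algorithm \ref{code:iterative_pagerank}, with uniform initialization and the 1-norm as convergence criterion; the function name and the default threshold are illustrative assumptions.
\begin{verbatim}
import numpy as np

def pagerank_iterative(P_prime, conv=1e-8):
    """Iterative PageRank: repeated multiplication by P' until convergence."""
    n = P_prime.shape[0]
    pi = np.full(n, 1.0 / n)                      # uniform initialization
    while True:
        pi_next = pi @ P_prime                    # pi(t) = pi(t-1) * P'
        if np.abs(pi_next - pi).sum() <= conv:    # 1-norm convergence criterion
            return pi_next
        pi = pi_next
\end{verbatim}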
\subsection{Random Walk Based Strategy}
\label{sec:pagerank_algorithm_random_walks}
\paragraph{}
There is also a strategy based on random walks for computing the \emph{PageRank} vector, which follows a very different approach from those described in the previous sections. In this case, the main idea is to perform random walks over the vertices of the graph following the probability distribution described by the modified transition matrix $P'$, and then estimate the \emph{PageRank} vector from the number of times each vertex has appeared in the random walks.
\paragraph{}
The algorithmic difficulty in this case lies in generating random values from a weighted distribution (obtained from the transition matrix $P'$), since it is necessary to maintain $n$ random generators indicating the next vertex to visit in each case. Apart from this difficulty, the rest of the algorithm is fairly simple and is based on maintaining the visit rate of each vertex as the random walk grows in length. As in the iterative case, it also relies on repeating the same operation until a convergence criterion (identical to that of the iterative solution) is reached.
\paragraph{}
The procedure is shown in the figure corresponding to Algorithm \ref{code:random_walks_pagerank}, which updates the \emph{PageRank} vector as the next vertex to visit is drawn. In this way, in each iteration $n$ random walks of length $1$ are performed and combined to reach the stationary state. The conceptual difficulty in this case lies both in the generation of weighted random numbers and in maintaining a normalized incremental average over the \emph{PageRank} vector $\pi$. As indicated in the previous paragraph, the convergence criterion is equivalent to the iterative case.
\paragraph{}
\begin{algorithm}
\SetAlgoLined
\KwResult{$\pi(t)$ }
$t \gets 0$\;
$\pi(t) \gets \frac{1}{n}*\boldsymbol{1}$\;
\Do{$||\pi(t) - \pi(t-1)|| > conv$}{
$t \gets t + 1$\;
$\pi_u(t) \gets \frac{t}{t+n}*\pi_u(t) $\;
  \For{each $v \in V$}{
$u \gets randomWeighted(v)$\;
$\pi_u(t) \gets \pi_u(t) + \frac{1}{t+n}$\;
}
}
\caption{Random Walks PageRank}
\label{code:random_walks_pagerank}
\end{algorithm}
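\paragraph{}
The following Python sketch follows the update rule of Algorithm \ref{code:random_walks_pagerank}, drawing one weighted step from every vertex per iteration and folding the visits into the running estimate; the function name, the seed and the convergence threshold are illustrative assumptions.
\begin{verbatim}
import numpy as np

def pagerank_random_walks(P_prime, conv=1e-6, seed=0):
    """Random-walk estimation of the PageRank vector."""
    rng = np.random.default_rng(seed)
    n = P_prime.shape[0]
    pi = np.full(n, 1.0 / n)
    t = 0
    while True:
        t += 1
        pi_prev = pi.copy()
        pi = pi * (t / (t + n))                   # rescale the running average
        for v in range(n):
            u = rng.choice(n, p=P_prime[v])       # weighted choice of next vertex
            pi[u] += 1.0 / (t + n)
        if np.abs(pi - pi_prev).sum() <= conv:
            return pi
\end{verbatim}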
\paragraph{}
The strategies described in this chapter for obtaining the stationary distribution $\pi$ of the modified transition matrix $P'$ of the graph $G$ have been presented from a basic perspective. There is a large body of work on variations of these methods aimed at reducing the convergence time.
\section{Personalized PageRank}
\label{sec:pagerank_algorithm_personalized}
\paragraph{}
As indicated throughout the chapter, the \emph{PageRank} algorithm generates a ranking of the importance of each vertex of a given unweighted directed graph $G$. This measure for a vertex $v$ is computed as the probability that, after a random walk over the vertices of the graph following its edges, the final vertex of the path is $v$.
\paragraph{}
To handle the cases in which a vertex with no outgoing edges (a \emph{sink vertex}) is reached, it is necessary to jump to another vertex of the graph without taking the edges into account. As indicated, this process is carried out both at \emph{sink vertices}, where it is required in order to compute the ranking, and at the remaining vertices with probability $1-\beta$.
\paragraph{}
This section deals with the probability distribution followed by the process of \emph{jumping} from one vertex to another. This distribution is encoded by the vector $p = [ p_1, p_2,...,p_i,...,p_n ]$, whose only restriction is that its values must add up to 1, i.e., $p$ must be normalized ($||p||_1 = 1$).
\paragraph{}
The basic case is the uniform distribution, in which all positions of the vector take the same value, i.e., $\forall i \in [1,n], \ p_i = \frac{1}{n}$. This models the situation in which, when a jump takes place, every vertex has the same probability of being the destination. Note that in this case there is no degree of personalization in the result, which is why it is known as the \emph{General PageRank}: it provides an estimate of the importance of the vertices of the graph from a general perspective, which is interesting for obtaining graph analytics.
\paragraph{}
The situation described in the previous paragraph, however, does not resemble the real behaviour of a user browsing the web. The reason is that when someone directly accesses a website by typing its URL into their favourite browser, they do not pick one at random from among the millions of possible pages, but choose it from a small sub-set of them, which moreover is not uniform, since users do not visit all web pages the same number of times.
\paragraph{}
With this idea, the result of the \emph{PageRank} algorithm changes: instead of generating a general ranking of the graph $G$, it now produces a biased ranking that gives a higher score to the vertices close to the destination vertices of the jumps, which greatly alters the previous result.
\paragraph{}
To understand this idea, an example of an \emph{unweighted directed graph} is included in Figure \ref{img:directed_graph_example}. Suppose the \emph{PageRank} is computed choosing the vector $p$ so that $p_3 = 1$ and $p_i = 0$ for all $i \in [1,7], i \neq 3$. In this situation all jumps (those from sink vertices and those made with probability $1-\beta$) have vertex $3$ as their destination. For this reason, vertex $3$ will certainly obtain the best \emph{PageRank} score of the graph. In addition, the scores of the remaining vertices will also have changed, with a high probability that vertex $6$ comes second.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{graph-directed}
  \caption{Example of an \emph{Unweighted Directed Graph}. (Taken from \cite{freedman2010graphs})}
\label{img:directed_graph_example}
\end{figure}
\paragraph{}
By modifying the probability distribution of the vector $p$, the \emph{PageRank} ranking is therefore adapted to a locality-aware perspective, which becomes a much more interesting classification from the user's point of view: in the case of the web graph, it reflects much more faithfully the web pages that are important from that user's individual perspective.
\paragraph{}
However, personalizing the \emph{PageRank} ranking entails different computational problems. In this case, instead of maintaining a single ranking for the whole graph, it is necessary to maintain one ranking per user who wishes to query the PageRank of the graph. This problem is not as serious as it seems, since the stationary distribution vector $\pi$ has the linearity property, which simplifies the issue by reducing the problem to maintaining one vector $\pi$ per vertex of the graph, i.e., $n$ \emph{PageRank} vectors must be generated.
\paragraph{}
Now suppose a given user wants to compute their own personalized ranking. The task reduces to taking a weighted average of the vectors $\pi_i$, choosing the weights according to that user's personalization vector $p$. Even so, this task remains complex, both in time and in space, since the cost grows by a factor of order $n$ with respect to the \emph{General PageRank}.
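\paragraph{}
Thanks to linearity, the query-time combination reduces to a weighted average, as the following Python sketch shows; the argument names are illustrative assumptions, and \texttt{basis} stands for the $n$ precomputed \emph{PageRank} vectors, one per vertex.
\begin{verbatim}
import numpy as np

def personalized_pagerank(basis, p):
    """Personalized PageRank by linearity of the stationary distribution.

    basis is an (n, n) array whose row i is the PageRank vector computed with
    the jump distribution concentrated on vertex i; p is the user's
    personalization vector. The result is their weighted average.
    """
    basis = np.asarray(basis, dtype=float)
    p = np.asarray(p, dtype=float)
    return p @ basis                              # sum_i p_i * pi_i
\end{verbatim}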
\paragraph{}
Given the size of graphs such as the web graph (\emph{Web Graph}), whose number of vertices is so large that it is not possible to keep a \emph{PageRank} vector for each of them in memory, computing the personalized ranking for a given user at query time becomes difficult. Different solutions to this problem have therefore been proposed.
\paragraph{}
One of the first is the one described in \cite{haveliwala2002topic}, based on maintaining \emph{16} \emph{PageRank} vectors corresponding to topics different from one another, which are then combined at query time to generate a personalized ranking. In \cite{kamvar2003exploiting} the authors propose computing \emph{PageRank} vectors taking into account the block structure that arises among vertices with similar ranking. This is carried out through a 3-phase algorithm (searching for similar vertices, computing the \emph{PageRank} vector for the resulting sets and, at query time, combining them to obtain the personalized ranking).
\paragraph{}
In \cite{jeh2003scaling} a technique is explained that allows the personalized \emph{PageRank} vector to be computed using an incremental or hierarchical approach based on \emph{Dynamic Programming}, which makes it possible to compute a ranking based on the combination of a hundred thousand vectors at query time. In \cite{sarlos2006randomize} a similar solution is proposed that uses less space by means of the \emph{Count-Min Sketch} (Section \ref{sec:count_min_sketch}). In the work \emph{Estimating pagerank on graph streams} \cite{sarma2011estimating}, \emph{Sarma et al.} propose an algorithm based on generating random walks to estimate the \emph{PageRank} ranking under the restrictions imposed by the \emph{semi-streaming model}.
\section{Alternatives to PageRank}
\label{sec:pagerank_alternativas}
\paragraph{}
The \emph{PageRank} algorithm has become the most widely used alternative for ranking the vertices of a graph based solely on its structure (relationships through edges). Its popularity is largely due both to its simplicity from the algorithmic point of view (even though its computational cost is high) and to the great fame of the \emph{Google} search engine, which from its beginnings provided very good results by relying on this ranking.
\paragraph{}
However, \emph{PageRank} is not the only alternative that has been studied in the field of computing the importance of the vertices of a graph. This section briefly describes several algorithms with a similar purpose. In addition, an extension of \emph{PageRank} known as \emph{SimRank} is discussed, which obtains the degree of similarity between the vertices of the graph from a structural point of view. Section \ref{sec:hits} describes \emph{HITS}, Section \ref{sec:salsa} then describes \emph{SALSA} (an improvement over the former) and, finally, Section \ref{sec:simrank} discusses \emph{SimRank}.
\subsection{HITS}
\label{sec:hits}
\paragraph{}
The \emph{HITS} algorithm emerged in parallel with \emph{PageRank}, since the articles describing both algorithms were published in \emph{1999}. \emph{HITS} was first described in the work \emph{Authoritative sources in a hyperlinked environment} \cite{kleinberg1999authoritative} by \emph{Kleinberg}.
\paragraph{}
Unlike \emph{PageRank}, this algorithm generates the ranking at query time. To do so, it uses an initial subset of vertices obtained from the text-based similarity ranking with respect to the query at hand, and it works on the sub-graph induced by this subset. Each iteration of the algorithm produces two scores for every vertex, known as the \emph{Authority score} and the \emph{Hub score}. The \emph{Authority score} of a vertex is built as the sum of the \emph{Hub scores} of the vertices that point to it, while its \emph{Hub score} is the sum of the \emph{Authority scores} of the vertices it points to. These values are normalized in every iteration, and the process is repeated until a certain degree of convergence is reached.
\paragraph{}
The differences with respect to \emph{PageRank} therefore lie in generating the ranking at query time instead of at indexing time (statically), so the ranking varies with each search. Moreover, instead of a single ranking, two are obtained: one indicating the most important vertices and another indicating the vertices that generate the most importance. In addition, rather than ranking the whole graph, \emph{HITS} operates on a sub-graph of it (obtained through textual search, as indicated above).
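\paragraph{}
As an illustration of the iterations just described, the following sketch (a toy version written for this discussion, not the original formulation) computes the \emph{Authority} and \emph{Hub} scores on a small sub-graph given as a dictionary that maps every vertex, including sinks, to its list of out-neighbours; it also assumes the sub-graph contains at least one edge so that the normalization is well defined:
\begin{verbatim}
# Illustrative sketch of the HITS iterations on a small directed sub-graph.
import numpy as np

def hits(out_links, iters=50):
    nodes = list(out_links)
    idx = {v: i for i, v in enumerate(nodes)}
    auth = np.ones(len(nodes))
    hub = np.ones(len(nodes))
    for _ in range(iters):
        # Authority score: sum of the hub scores of the vertices pointing in.
        new_auth = np.zeros(len(nodes))
        for u, targets in out_links.items():
            for v in targets:
                new_auth[idx[v]] += hub[idx[u]]
        # Hub score: sum of the authority scores of the vertices pointed to.
        new_hub = np.zeros(len(nodes))
        for u, targets in out_links.items():
            new_hub[idx[u]] = sum(new_auth[idx[v]] for v in targets)
        auth = new_auth / np.linalg.norm(new_auth)   # normalization step
        hub = new_hub / np.linalg.norm(new_hub)
    return dict(zip(nodes, auth)), dict(zip(nodes, hub))
\end{verbatim}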
\subsection{SALSA}
\label{sec:salsa}
\paragraph{}
The \emph{SALSA} algorithm is a combination of \emph{PageRank} and \emph{HITS} (described in the previous section), and therefore shares characteristics with both alternatives. It was described in the document \emph{SALSA: the stochastic approach for link-structure analysis} \cite{lempel2001salsa} by \emph{Lempel and Moran}.
\paragraph{}
Since this algorithm is a combination of the ones cited previously, its similarities to them are indicated next. \emph{SALSA} is based on the same idea as \emph{HITS}, in the sense that it generates two rankings, referring to authorities and generators of authority (\emph{Authority} and \emph{Hub}). It also produces these rankings at query time, using a sub-graph generated from the textual similarity ranking. The difference with respect to \emph{HITS} lies in the strategy used to compute the rankings: in this case it uses the same \emph{random walk} approach as \emph{PageRank} (which is why it is said to be a combination of both).
\paragraph{}
Generating the rankings through random walks gave it a significant advantage in computational cost over \emph{HITS}, in addition to offering results at query time, which distinguishes it from \emph{PageRank}. In the document \emph{WTF: The Who to Follow Service at Twitter} \cite{gupta2013wtf} the authors describe how the social network \emph{Twitter} developed a variation of this algorithm for its system that recommends users to follow.
\subsection{SimRank}
\label{sec:simrank}
\paragraph{}
The \emph{SimRank} algorithm has a different purpose from those described above. In this case, instead of trying to determine the degree of importance of a given vertex of the graph, the goal is to obtain a ranking of the similarity of the remaining vertices with respect to a specific one. The work \emph{SimRank: a measure of structural-context similarity} \cite{jeh2002simrank} by \emph{Jeh and Widom} fully describes how the algorithm works, together with the proof of its correctness. A brief description of its operation follows.
\paragraph{}
As with \emph{PageRank}, in this case the algorithm is also based on the iterative computation of the ranking until a certain convergence threshold is reached. The first iteration consists of computing a \emph{Personalized PageRank} from the vertex on which the comparison is to be based. After this initialization, the rest of the algorithm consists of repeating equation \eqref{eq:simrank_iteration} until convergence.
\begin{equation}
\label{eq:simrank_iteration}
Sim^{(k)}_{u_1}({u_2}) =
\begin{cases}
(1-c) * \frac{\sum_{\{(u_1,v_1),(u_2,v_2)\} \in E} Sim^{(k-1)}_{v_1}({v_2})}{d^-({u_1})*d^-({u_2})}, & \text{if} \ u_1 \neq u_2 \\
1, & \text{if} \ u_1 = u_2
\end{cases}
\end{equation}
\paragraph{}
Equation \eqref{eq:simrank_iteration} therefore computes the degree of similarity of vertex $u_2$ with respect to vertex $u_1$ at iteration $k$, denoted $Sim^{(k)}_{u_1}({u_2})$. As indicated above, this value increases as the vertices $u_1$ and $u_2$ relate to similar sets of vertices, in the sense that those sets contain the same vertices. This should not be confused with other problems such as \emph{Matchings} (finding substructures with the same shape within a graph).
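\paragraph{}
The following sketch implements the classical in-neighbour recurrence of \emph{Jeh and Widom}, which the iteration above follows in spirit; it is added purely as an illustration and assumes the graph is given as a dictionary of in-neighbour lists and a decay constant $c$:
\begin{verbatim}
# Illustrative sketch of the classical SimRank recurrence.
def simrank(in_links, c=0.8, iters=10):
    nodes = list(in_links)
    sim = {u: {v: 1.0 if u == v else 0.0 for v in nodes} for u in nodes}
    for _ in range(iters):
        new = {u: {v: 0.0 for v in nodes} for u in nodes}
        for u in nodes:
            for v in nodes:
                if u == v:
                    new[u][v] = 1.0   # a vertex is fully similar to itself
                elif in_links[u] and in_links[v]:
                    total = sum(sim[a][b]
                                for a in in_links[u] for b in in_links[v])
                    new[u][v] = c * total / (len(in_links[u]) * len(in_links[v]))
        sim = new
    return sim
\end{verbatim}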
\paragraph{}
There are numerous practical applications of this algorithm as a recommender system over datasets with a graph structure. Some examples where it could be used are the generation of website advertisements in a search engine, the generation of video playlists from a given video by taking into account how users have previously navigated among them, or a purchase recommendation system in an online store based on the same idea.
\section{Conclusions}
\label{sec:pagerank_conclusions}
\paragraph{}
As seen throughout the chapter, computing the importance of the vertices of a graph is a complex task, whose difficulty increases considerably when the graph is massive, which makes it hard to hold it entirely in memory. However, this chapter has not discussed solutions to that problem; instead, it has presented a study of the \emph{PageRank} algorithm, which enjoys great popularity thanks to the \emph{Google} search engine.
\paragraph{}
The description of this algorithm has made it possible to understand the algebraic perspective for solving a problem on graphs, which can be seen as a matrix problem. Furthermore, the closeness of this ranking to \emph{Markov Chains} and random walks, which converge to the \emph{stationary distribution}, has been illustrated. It was then shown how to compute it using different strategies (\emph{algebraic}, \emph{iterative} or \emph{random-walk based}).
\paragraph{}
Afterwards, the \emph{Personalized PageRank} was also discussed, along with the problem posed by the number of rankings that would have to be maintained for an exact solution. Finally, similar strategies for ranking vertices were covered, as well as the \emph{SimRank} algorithm, which indicates the degree of similarity of the vertices with respect to a given vertex.
\paragraph{}
Thanks to this algorithm it is possible to obtain an indicator of the importance of the vertices of a given graph, which is an interesting task applicable to a large number of domains in which the problem can be modeled as a network. Observing the importance of the vertices from a structural point of view is an interesting factor that can improve decision making in many cases.
\end{document}
\chapter*{Preface}
\addcontentsline{toc}{chapter}{\protect\numberline{}Preface}
\paragraph{}
In order to understand the content of this document as well as the methodology followed in its preparation, several factors must be taken into account, including the academic context in which it was written, as well as the technological and social context. For this reason, a brief description of them is given below, to try to make the scope of this text easier to understand.
\paragraph{}
The first thing to bear in mind is the academic context in which this work has been carried out. This document has been written for the course \textbf{Trabajo de Fin de Grado (mención en Computación)} of the \emph{Grado de Ingeniería Informática}, taught at the \emph{E.T.S de Ingeniería Informática} of the \emph{Universidad de Valladolid}. This course is characterized by requiring that all other courses of the degree be passed before it can be assessed. Its workload is \textbf{12 ECTS credits}, equivalent to \emph{300 hours} of student work, which have been carried out over a period of 4 months.
\paragraph{}
The topic chosen for this work is \textbf{Algorithms for Big Data}. Big Data is the discipline that deals with \say{all the activities related to systems that manipulate large data sets. The most common difficulties associated with managing these amounts of data concern collection and storage, search, sharing, analysis, and visualization. The trend towards manipulating huge amounts of data is due to the need, in many cases, to include this information in the creation of statistical reports and predictive models used in various fields.}\cite{wiki:big_data}
\paragraph{}
One of the most important points in understanding the motivation behind this choice of topic is the technological context we find ourselves in. Owing to the significant evolution of other disciplines within computing and new technologies, it is increasingly easy and inexpensive to collect large amounts of information about any real-life process. This is due to a large number of factors, among which the following stand out:
\begin{itemize}
\item \textbf{Reduction of the costs of collecting information}: Due to constant technological evolution, it is increasingly cheap to have mechanisms (both at the hardware and the software level) with which to gather data about a given event.
\item \textbf{Greater computing and storage capacity}: Collecting and manipulating the large amounts of data gathered from sensors or other methods therefore requires the support of high computing and storage capacities. Current trends rely on virtualization techniques that make it possible to manage large systems located in different geographical areas as a single unit, which provides great advantages in terms of reducing algorithmic complexity at the application level.
\item \textbf{Improvement of telecommunications}: One of the factors that has greatly reduced the problems related to virtualization and its responsiveness has been the global progress that telecommunications have undergone in recent years, lowering the geographical barriers between dispersed technological systems.
\end{itemize}
\paragraph{}
Thanks to this set of improvements, we have reached the point where there is an opportunity to use a large amount of knowledge which, taken individually or without appropriate processing, has no value as information.
\paragraph{}
The third factor to take into account is the current social trend, which is increasingly aware of the value of information. This is reflected in a wide range of aspects of people's behavior:
\begin{itemize}
\item \textbf{Monitoring of work processes}: Many companies are paying attention to improving the productivity of their employees and machines. They therefore look for new techniques to carry out this task. In recent years much effort has been devoted to implementing monitoring systems that provide information which can then be processed to obtain valuable results for those organizations.
\item \textbf{Exponential growth of social network platforms}: The inherently social nature of human beings creates the need to publicly express their feelings and actions, which in the technology world has been reflected in the great growth of information-sharing and communication platforms.
\item \textbf{Open data initiatives by public administrations}: Many public institutions are devoting great effort to making the information they hold visible, which brings a social improvement by increasing their transparency as well as the level of collective knowledge, which can be beneficial both for citizens and for companies.
\end{itemize}
\paragraph{}
As a consequence of this social change, possibly fostered by the aforementioned technological progress, the population is more curious about matters it previously lacked the capacity to understand, due to the level of complexity derived from the size of the sample sets needed to obtain reliable results.
\paragraph{}
This document does not intend to address topics related to the techniques used to derive new data from existing data. Nevertheless, a brief introduction will be given to that set of strategies, which include: \emph{heuristics}, \emph{Linear Regression}, \emph{Decision Trees}, \emph{Support Vector Machines (SVM)} and \emph{Artificial Neural Networks}.
\paragraph{}
Instead, the aim is to analyze the different algorithms needed to handle such huge amounts of information, especially their manipulation at the level of basic operations, such as arithmetic operations, search or the handling of missing fields. To that end, this problem will be tackled taking into account parallelization strategies that make better use of the computing capacities available today.
\paragraph{}
Another important aspect this work focuses on is the dynamic factor needed to understand the information, which entails the search for new algorithmic strategies for real-time processing. Therefore, the goal is to present an analysis of the existing solutions in each case with respect to the static solution, indicating the advantages and disadvantages of the dynamic version as appropriate.
\end{document}
\chapter{Work Methodology}
\label{chap:methodology}
\paragraph{}
The methodology followed to carry out a project of such magnitude for an undergraduate student, as the final degree project is, requires a proper definition. In this way, the path to follow for its development can be clarified.
\paragraph{}
As can be seen in the document \emph{Portal of research methods and methodologies for research projects and degree projects} \cite{haakansson2013portal}, which has been used as a starting point to learn about the possible methodologies to follow in a research project, this decision must be made at the beginning of the project. In this way, the search for the goals to achieve during the work is greatly simplified. That document \cite{haakansson2013portal} distinguishes between research strategies and research methods and then describes each of them. Later in this appendix it is indicated how this work has been developed following those distinctions.
\paragraph{}
A common problem that arises during the development of one's first research projects is the difficulty of narrowing the topic of study down to a specific scope that can be properly analyzed. There are several reasons why this problem may occur; however, the most notable one is the high degree of interrelation between the different branches of knowledge, which often complicates the task of \say{isolating} a small subset of ideas on which to carry out a more extensive study.
\paragraph{}
As indicated above, it is necessary to state both the research strategies and the research methods followed during this work. First, however, it must be made clear what each of them refers to. When we speak of a \emph{Research Strategy} we are referring to the final purpose to be achieved by carrying out the project, that is, something similar to its objectives. In contrast, when we speak of \emph{Research Methods} we are referring to the set of conceptual \say{tools} used to reach the purpose of the work.
\paragraph{}
The research strategy followed throughout the development of this project has been a \emph{survey}, carried out with the aim of better understanding the very broad research area related to \emph{Big Data}, from which a more specific field has been progressively explored in depth: the study of massive graphs and the implementation of \emph{PageRank}, which together have provided a detailed view of these areas from a clear perspective. As for the method followed, the first step was the choice of \emph{Big Data} as the topic, fixed by the project supervisor. From that point on, the method can be divided into two parts. The first corresponds to a period of \emph{descriptive research}, aimed at obtaining a global view of the research areas of \emph{Big Data}; this phase was therefore guided by the syllabi of different courses on the subject taught at universities around the world. After reaching an adequate level of understanding, the methodology was changed to what \cite{haakansson2013portal} calls \emph{Fundamental Research}. This is characterized by focusing the work on a specific field driven by personal curiosity, which in this case gradually moved towards the study of graphs.
\paragraph{}
Since the final degree project belongs to a degree in \emph{Ingeniería Informática}, it was considered interesting that it should not be based solely on research tasks, but that it should also contain a small part of source code implementation. For this task, the \emph{PageRank} algorithm was chosen, since its study covers a large number of concepts related to the rest of those studied.
\paragraph{}
Many of these decisions were made as the project progressed, which has different advantages and disadvantages. The most notable advantage is the degree of freedom enjoyed throughout the process, which has made it possible to focus on the most motivating parts. However, this also generates a number of disadvantages, including the difficulty of reaching points from which it is not clear in which direction to continue. Nevertheless, this disadvantage also represents a lesson that will help in the future to face similar problems with much less pressure, thanks to the experience acquired in this case.
\paragraph{}
Many computer engineering projects are carried out following different project management methodologies, such as \emph{waterfall methodologies} or the so-called \emph{agile} ones such as \emph{SCRUM}. In the first weeks of this project, an attempt was made to follow a methodology based on \emph{SCRUM}, trying to carry out a series of tasks divided into two-week blocks. However, this approach was quickly abandoned because of the inherently research-oriented nature of this project. The reason is that it is very difficult to fit research tasks, which are based on acquiring knowledge, into a specific, concrete scope and a bounded period of time.
\paragraph{}
The reasons behind this view are related to the research process based on reading scientific articles, which are strongly related to one another. This entails the need to read a group of articles related to the one to be understood, through the set of citations that assume knowledge of the terms taken from other articles. These reasons make it difficult to estimate the time needed to understand the idea described in an article, which sometimes comes down to a few works that have already been understood, while on other occasions it is necessary to acquire a large number of new concepts.
\paragraph{}
Added to this factor is another difficulty derived from it: in a large number of cases these difficulties are not known until one has delved into understanding the work, so they cannot be estimated beforehand. However, the \emph{surveys} carried out on specific topics are of great help, as they provide a panoramic view of the topic through the reading of a single work, which the reader can then expand by reading the references contained in the survey.
\paragraph{}
All these factors have made it possible to learn in greater detail what the research process is like, as well as the methodological challenges that arise during such projects. There are many points that could have been handled more appropriately, with the consequent reduction of the time spent on those tasks. However, it is believed that all these complications have provided experience that, on future occasions, will speed up the process and help avoid such mistakes.
\end{document}
\chapter{Streaming Algorithms}
\label{chap:streaming}
\section{Introduction}
\label{sec:streaming_intro}
\paragraph{}
\emph{Streaming algorithms} are an algorithm design strategy based on processing the input sequentially, which fits settings in which the data have a dynamic component. Moreover, this context fits perfectly in cases where the size of the data is so large that it is not possible to keep them entirely in the system's memory. This is precisely the problem that arises with so-called \emph{Big Data}: when working with massive data sets on the order of gigabytes, terabytes or even petabytes, they cannot be processed using classical strategies that assume all the data are directly and immediately available.
\paragraph{}
In this context, a disk storage space of unlimited size is therefore assumed, while the working space or memory is restricted to a limited size, much smaller than the data set being processed. Under these a priori assumptions, the design of algorithms in the streaming model becomes especially important, as they try to reduce the number of requests to the storage space or disk, which yields a great reduction in processing time.
\paragraph{}
Furthermore, under this model it is trivial to extend the algorithms and techniques for use in dynamic environments in which the data set varies over time, adding and removing data. Because of these characteristics, research in the field of \emph{streaming algorithms} has gained great importance. This chapter aims to give a conceptual introduction to them, as well as a presentation of the most relevant algorithms in this area.
\subsection{Real-Time Computing}
\label{sec:realtime_computing}
\paragraph{}
The first concept we will discuss is \textbf{Real-Time Computing}, which, as described by Shin and Ramanathan \cite{259423}, is characterized by the three terms set out below:
\begin{itemize}
\item \textbf{Time}: In the discipline of \emph{Real-Time Computing}, the execution time of a given task is especially crucial to guarantee the correct development of the computation, since a permitted execution deadline is assumed, after which the solution to the problem ceases to be valid. Shin and Ramanathan \cite{259423} distinguish three categories within this restriction, which they call \emph{hard}, \emph{firm} and \emph{soft}, depending on how much it is relaxed.
\item \textbf{Correctness}: Another crucial point in a \emph{Real-Time Computing} system is the definition of a measure or indicator of the guarantees that a given algorithmic solution offers of doing what it promises, correctly and within the expected time.
\item \textbf{Environment}: The last factor that Shin and Ramanathan \cite{259423} indicate to describe a \emph{Real-Time Computing} system is its environment, since it conditions the set of tasks and the frequency with which they must be carried out. For this reason, they distinguish between:
\begin{enumerate*}[label=\itshape\alph*\upshape)]
\item \emph{periodic tasks}, which are executed sequentially upon the completion of a time window, and
\item \emph{aperiodic tasks}, which are carried out in response to a given external event.
\end{enumerate*}
\end{itemize}
\subsection{Dynamic Problems}
\label{sec:dynamic_problems}
\paragraph{}
Having described what can be defined as \emph{Real-Time Computing}, it is worth giving a description from the point of view of \emph{computational complexity theory}. To define this type of problem, the term \emph{dynamic problems} is used, referring to those whose solution must be recomputed as time goes on due to variations in the input parameters of the problem (note that this term should not be confused with the \emph{dynamic programming} strategy for algorithm design).
\paragraph{}
There are different variants depending on the point of view from which they are studied, both regarding the nature of the problem (solutions that depend on each other over time or isolated solutions) and the input parameters (the complete input in each new execution or a variation with respect to the previous one). \emph{Streaming algorithms} are designed to solve \emph{dynamic problems}, so Section \ref{sec:streaming_model} describes in depth the model in which they are framed.
\paragraph{}
The main indicators used to describe the complexity of a given algorithmic solution intended to solve a problem of this nature are listed below:
\begin{itemize}
\item Space: amount of memory used during the execution of the algorithm.
\item Initialization: time needed to initialize the algorithm.
\item Processing: time needed to process a given input element.
\item Query: time needed to compute the solution from the input data processed so far.
\end{itemize}
\subsection{Online Algorithms vs Offline Algorithms}
\paragraph{}
Having described the problem of \emph{Real-Time Computing} in Section \ref{sec:realtime_computing} and the category of \emph{Dynamic Problems} in Section \ref{sec:dynamic_problems}, this section aims to illustrate the difference between \emph{Online Algorithms} and \emph{Offline Algorithms}. To do so, we follow the distinction proposed by Karp \cite{Karp:1992:OAV:645569.659725}, in which the problem is posed as follows (the same notation as Muthukrishnan \cite{Muthukrishnan:2005:DSA:1166409.1166410} will be used to keep consistency throughout the document): let $A$ be the set of input data or events, with $A[i]$ the \emph{$i$-th} element of the set, which in the case of \emph{Online Algorithms} we will assume is the element received at instant \emph{i}. The characteristics of each subgroup are shown below:
\begin{itemize}
\item \textbf{Offline Algorithms}: This category contains all algorithms that perform the computation assuming access to any element of the data set $A$ at any moment of their execution. Furthermore, this category imposes the restriction that $A$ must be invariant over time, which means that, to adapt the result to changes in the input, the algorithm must be re-run from its initial state. Note that this group therefore includes most commonly used algorithms.
\item \textbf{Online Algorithms}: These compute the result from a sequence of events $A[i]$, producing a result that depends on the current input and possibly on the previous ones. This strategy adds a dynamic component, which means that no a priori restriction is imposed on the length of the input data set $A$. In return, in this model it is not allowed to know the event $A[i+1]$ at instant $i$. This fits perfectly with the model that will be described in Section \ref{sec:streaming_model}.
\end{itemize}
\paragraph{}
According to the distinction just described, these two algorithm design strategies fit different disciplines: \emph{Offline Algorithms} have a great advantage in efficiency in the static case, but become practically unusable when computing in real time, where it is much more appropriate to use \emph{Online Algorithm} design strategies.
\paragraph{}
As an efficiency measure for \emph{Online Algorithms}, Karp \cite{Karp:1992:OAV:645569.659725} proposes the \textbf{Competitive Ratio}, which bounds the cost incurred by the online algorithm on any input with respect to the lowest-cost (optimal) solution. However, this efficiency measure is not commonly used for \emph{streaming algorithms} because of their stochastic component, for which probabilistic measures are more appropriate. The advantages of the latter over their deterministic counterpart are described next.
\subsection{Probabilistic Algorithms}
\paragraph{}
\emph{Probabilistic algorithms} are a design strategy that employs a certain degree of randomness in some part of its logic. They use uniform probability distributions to try to achieve an improvement in their average-case performance. The two types of probabilistic algorithms, according to the classification made by Babai \cite{Babai79monte-carloalgorithms}, are described below:
\begin{itemize}
\item \textbf{Las Vegas algorithms}: They return an incorrect result with a certain probability, but they report it when this happens. To counteract this, it suffices to run the algorithm again, which after an indeterminate number of executions produces a valid result.
\item \textbf{Monte Carlo algorithms}: They fail with a certain probability, but in this case they do not report the incorrect result. Therefore, all that can be obtained is an estimate of the correct result towards which the algorithm converges after several executions. In addition, a given error bound $\epsilon$ is guaranteed, which holds with probability $\delta$.
\end{itemize}
\paragraph{}
The anecdotal reason why Babai \cite{Babai79monte-carloalgorithms} decided to name these categories of algorithms this way is the following (in an English-speaking context): when you go to a casino in \emph{Las Vegas} and place a bet, the \emph{croupier} can tell you whether you have won or lost because he speaks the same language. However, if the same situation occurs in \emph{Monte Carlo}, you can only obtain a measure of probability, because in this case the \emph{croupier} cannot communicate the outcome due to the language difference.
\subsection{Probabilistic vs Deterministic Online Algorithms}
\paragraph{}
The idea underlying the design of \emph{Online Algorithms} is to improve efficiency with respect to their static counterparts when the set of input values depends on previous results. However, there are cases in which, due to a high arrival rate of input values and hence a high execution frequency of the algorithm, deterministic solutions become poorly scalable alternatives.
\paragraph{}
This problem has grown exponentially due to technological progress and the large amount of information generated nowadays, which keeps increasing at a staggering rate. This phenomenon has made it necessary to design strategies based on probabilistic techniques that greatly reduce the computational cost and, as a consequence, remove the determinism of the solution.
\section{Streaming Model}
\label{sec:streaming_model}
\paragraph{}
This section describes the formal aspects of the \emph{Streaming Model}. To do so, the representation defined by Muthukrishnan \cite{Muthukrishnan:2005:DSA:1166409.1166410} is followed. The first step is therefore to define a data stream as a \say{sequence of digitally encoded signals used to represent an information transmission} \cite{ITS-def-data-stream}. Muthukrishnan \cite{Muthukrishnan:2005:DSA:1166409.1166410} clarifies this definition and adds the condition that the input data must arrive at a high rate. For this reason there is complexity at three levels:
\begin{itemize}
\item \textbf{Transmission}: Because of the high arrival rate, it is necessary to design an interconnection system that prevents congestion when obtaining the input data.
\item \textbf{Computation}: Processing the large amount of information that arrives per unit of time produces bottlenecks in the computation of the solution, so it is necessary to implement algorithmic techniques with a low level of computational complexity to counteract this problem.
\item \textbf{Storage}: Given the large amount of data presented at the input, there must be techniques to store this information efficiently. This can be seen from two different points of view: \begin{enumerate*}[label=\itshape\alph*\upshape)]
\item from the point of view of space, trying to minimize the size of the stored data while maximizing the amount of information that can be recovered from it,
\item and from the point of view of the time needed to perform search, insertion, deletion or edit operations.
\end{enumerate*}. In addition, special attention must be paid to the information that is stored, trying to reduce it as much as possible by discarding redundant or irrelevant data.
\end{itemize}
\subsection{Streaming Formalism}
\label{sec:streaming_formalism}
\paragraph{}
Having described the levels of complexity that must be addressed to tackle problems in the \emph{Streaming Model}, the different models proposed by Muthukrishnan \cite{Muthukrishnan:2005:DSA:1166409.1166410} are described in Sections \ref{sec:streaming_time_series}, \ref{sec:streaming_cash_register} and \ref{sec:streaming_turnstile}. The specification described in those sections will be followed throughout the rest of the chapter. It relies on the following formalism:
\paragraph{}
Let $a_1, a_2, ..., a_t, ...$ be an input stream, such that each element arrives sequentially with respect to $t \in \mathbb{N}$. This can also be seen as follows: the element following the arrival of $a_{t-1}$ must be $a_{t}$ and, by induction, the next one will be $a_{t+1}$. It should be made clear that $t$ does not refer to proper time units but to the position in the input.
\begin{equation}
\label{eq:streaming_A_function}
\boldsymbol{A}_t:[1...N] \rightarrow \mathbb{R}^2
\end{equation}
\paragraph{}
The next step in describing the formalism is to add the function $\boldsymbol{A}_t$, whose domain and image are shown in equation \eqref{eq:streaming_A_function}. This function has different interpretations depending on the \emph{Streaming Model} under which one is working in each case, but the underlying idea can be summarized by assuming that the first component stores the value, while the second stores the number of occurrences of that value. Common to all of them is the subscript $t$, which corresponds to the result of the function at time instant $t$. For the sake of clarity, in cases where we refer to a single moment, this variable will be omitted from the notation.
\subsection{Time Series Model}
\label{sec:streaming_time_series}
\paragraph{}
The \emph{Time Series Model} refers, as its name indicates, to a time series; that is, it models the values taken by the variable $i$ with respect to $t$, encoded in the model as $a_t = (i,1)$. Note that the value $1$ is used in the second component of $a_t$; the reason for this lies in the definition of the image of $\boldsymbol{A}$ in equation \eqref{eq:streaming_A_function}. Even so, this field is irrelevant in this model, so any other value could have been chosen arbitrarily. The value $1$ has been used to reinforce the idea that, in this case, the value taken by $a_t$ at a given moment will not change again, but will become obsolete with the arrival of $a_{t+1}$.
\paragraph{}
The model is described mathematically by means of the function $\boldsymbol{A}$, as illustrated in equation \eqref{eq:streaming_time_series}. In words, this can be expressed by saying that the function $\boldsymbol{A}$ represents a data structure that stores the value of all the elements received at the input up to time instant $t$, that is, it acts as a history. An example of this model is the stock price of a given company over time.
\begin{equation}
\label{eq:streaming_time_series}
\boldsymbol{A}(t) = a_t
\end{equation}
\subsection{Cash Register Model}
\label{sec:streaming_cash_register}
\paragraph{}
El \emph{Modelo de Caja Registradora} o \emph{Cash Register Model} consiste en la recepción de incrementos de un determinado valor $i$. El nombre del modelo hace referencia al funcionamiento de una caja registradora (suponiendo que el pago se realiza de manera exacta), que recibe billetes o monedas de tipos diferentes de manera secuencial.
\paragraph{}
To describe this model, it is first necessary to clarify the content of the element $a_t = (i, I_t)$: $i$ represents the value received, while $I_t \geq 0$ indicates the increment at instant $t$. With this definition in place, the function $\boldsymbol{A}_{t}$ is built as indicated in equation \eqref{eq:streaming_cash_register}.
\begin{equation}
\label{eq:streaming_cash_register}
\boldsymbol{A}_{t}(i) = \boldsymbol{A}_{t-1}(i) + I_{t}
\end{equation}
\paragraph{}
The \emph{Cash Register Model} is widely used to formalize real problems, because many phenomena follow this structure. One example is counting the accesses to a given website, which correspond to increments $I_t$, in this case of unit size, made by a given user $i$ at moment $t$.
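\paragraph{}
A minimal sketch of the model follows (illustrative only: this exact counter is not sub-linear in space, it merely shows the restriction that every update is a non-negative increment):
\begin{verbatim}
# Illustrative sketch of the cash register model: the stream only carries
# non-negative increments (i, I_t), e.g. hit counts per page of a web site.
from collections import defaultdict

class CashRegister:
    def __init__(self):
        self.A = defaultdict(int)      # A_0(i) = 0 for every i

    def process(self, i, increment=1):
        assert increment >= 0          # only increments are allowed
        self.A[i] += increment         # A_t(i) = A_{t-1}(i) + I_t

    def query(self, i):
        return self.A[i]

# hits = CashRegister(); hits.process("/index.html"); hits.query("/index.html")
\end{verbatim}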
\subsection{Turnstile Model}
\label{sec:streaming_turnstile}
\paragraph{}
The \emph{Turnstile Model} corresponds to the most general case, in which not only increments are allowed but also decrements of the count. The model's name comes from the operation of the turnstiles in subway stations that control the passage of users: at the entrance they increment the count of people, while at the exit they decrement it. The relaxation provided by the ability to decrement offers greater versatility, which allows a large number of problems to be framed in this model. In return, it adds numerous computational complications, as will be seen throughout the chapter.
\paragraph{}
As in the previous case, to describe this model the first step is to consider the structure of the input elements, which are of the form $a_t = (i, U_t)$, very similar to what was described in the \emph{Cash Register Model}. However, in this case $U_t$ has no restriction on its range: it can take any value, positive or negative, which adds the notion of decrement. The construction of the function $\boldsymbol{A}_{t}$ is described in equation \eqref{eq:streaming_turnstile}.
\begin{equation}
\label{eq:streaming_turnstile}
\boldsymbol{A}_{t}(i) = \boldsymbol{A}_{t-1}(i) + U_{t}
\end{equation}
\paragraph{}
Muthukrishnan \cite{Muthukrishnan:2005:DSA:1166409.1166410} makes a distinction within this model depending on how demanding it is: it is called a \emph{strict Turnstile Model} when the restriction $\forall i, \forall t \ \boldsymbol{A}_{t}(i) \geq 0$ is added, whereas it is called a \emph{relaxed Turnstile Model} when this restriction is not taken into account.
\paragraph{}
An example of this model is counting the number of users who are currently visiting a given website, with $U_t$ taking the value $1$ in the case of a new connection and $-1$ in the case of a disconnection. In this example the value $i$ represents a specific page within the website.
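\paragraph{}
The sketch below illustrates the update rule of equation \eqref{eq:streaming_turnstile} together with the check that distinguishes the strict variant from the relaxed one (again an exact counter, shown only to make the model concrete):
\begin{verbatim}
# Illustrative sketch of the turnstile model: updates (i, U_t) may be
# positive or negative, e.g. connections (+1) and disconnections (-1).
from collections import defaultdict

class Turnstile:
    def __init__(self, strict=True):
        self.A = defaultdict(int)
        self.strict = strict           # strict variant: A_t(i) >= 0 at all times

    def process(self, i, update):
        value = self.A[i] + update     # A_t(i) = A_{t-1}(i) + U_t
        if self.strict and value < 0:
            raise ValueError("strict turnstile model forbids negative counts")
        self.A[i] = value

    def query(self, i):
        return self.A[i]
\end{verbatim}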
\section{Basic Structure}
\label{sec:streaming_structure}
\paragraph{}
Since the intrinsic nature of \emph{streaming algorithms} makes them process the input elements as they arrive, they present peculiarities with respect to other, more established and widely used algorithmic categories. Therefore, the basic structure followed by the most commonly used algorithms is described first, and then the strategy followed in the streaming case is shown.
\paragraph{}
The algorithms classically studied to solve most problems are based on the idea of mathematical functions. That is, they are presented with a set of values at the input and, from them, they perform a given transformation on the data which produces an output as a result. Note that this idea does not impose any restriction on what can happen in that process, i.e., the use of auxiliary data structures or similar techniques is not restricted.
\paragraph{}
This view does not fit correctly in the context of \emph{streaming algorithms}. The reason is that the input is not properly a data set, but rather a flow in itself. As a consequence, in a large number of cases it is no longer necessary to obtain the results after each call to the algorithm, since they might be of no interest or require an unnecessary overhead. Therefore, the concept of a mathematical function loses its meaning in this case, since it requires the existence of a value as a result.
\paragraph{}
A more accurate concept for modeling a \emph{streaming algorithm} could be what programming languages derived from \emph{Fortran} call a subroutine, that is, a sequence of instructions that perform a task encapsulated as a unit. However, to correctly describe the structure of a \emph{streaming algorithm} something more is needed. The reason is that with that design model it would not be possible to make requests about the computed result, i.e., it would be a purely procedural strategy. To correct this, the concept of query arises. This idea is meant to represent the way of obtaining a result from the computation carried out up to the current moment.
\paragraph{}
In summary, this design strategy separates the processing of the input from the querying of the result, which provides great advantages for the model followed by \emph{streaming algorithms}. However, it produces a space overhead with respect to the classical algorithm model, due to the need to maintain a data structure in which the partial results concerning the input stream are stored.
\paragraph{}
\emph{Streaming algorithms} are therefore composed of an algorithm that processes the data stream, a data structure that stores these partial results and, finally, an algorithm that processes the query needed to obtain the required results. The phases of operation of an algorithm with these characteristics are described below and illustrated in the sketch that follows the list.
\begin{itemize}
\item \textbf{Initialization}: In this phase the set of tasks needed to initialize the data structure that will act as an information store during the processing of the input stream is carried out. Generally this consists of allocating memory, initializing the data structure to a default value, and so on. However, there are more sophisticated techniques that require a greater computational load in this phase.
\item \textbf{Processing}: This corresponds to processing the data stream sequentially. The underlying idea in this phase is to perform a given operation on the data structure and the current input element, so that the structure is updated. Note that the way this data structure is manipulated largely determines the set of queries that can later be made on it.
\item \textbf{Query}: The query phase differs from the previous one in being read-only. By this we mean that this task does not modify the current state of the data structure, but gathers information from it, possibly transforms it through some operation, and then produces a value as the result of the request.
\end{itemize}
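\paragraph{}
As announced above, the following sketch makes the three phases concrete with a deliberately simple summary (a running mean over the stream); the class and method names are chosen here only for illustration:
\begin{verbatim}
# Illustrative sketch of the three-phase structure: initialization,
# processing and query, instantiated with a running mean.
class StreamingMean:
    def __init__(self):
        # Initialization: constant-size summary structure.
        self.count = 0
        self.total = 0.0

    def process(self, value):
        # Processing: update the summary using only the incoming element.
        self.count += 1
        self.total += value

    def query(self):
        # Query: read-only; derives the answer from the current summary.
        return self.total / self.count if self.count else 0.0
\end{verbatim}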
\section{Analysis Measures and Mathematical Concepts}
\label{sec:streaming_analysis}
\paragraph{}
\emph{Streaming algorithms} are characterized by using statistical properties in some part of their computation (generally in the processing phase) to obtain the solution at a lower computational cost. In this case, the cost to be minimized is the space needed to store the auxiliary data structure. As stated above, the reason is that a massive input data set is assumed, so the space complexity with respect to it is intended to be sub-linear ($o(N)$).
\paragraph{}
The goal is to find solutions with a bounded error interval that reach the solution using logarithmic space ($O(log(N))$). However, there are occasions on which it is not possible to reach a solution within that complexity order, as is the case of \emph{streaming algorithms} applied to \emph{graph} problems, for which the restriction is relaxed to a \emph{poly-logarithmic} complexity order ($O(polylog(N))$).
\paragraph{}
The \emph{poly-logarithmic} complexity order encompasses the set of functions whose complexity grows according to a polynomial in logarithms. Mathematically this is modeled by equation \eqref{eq:polylog-complexity}.
\begin{equation}
\label{eq:polylog-complexity}
a_{k}\log ^{k}(N)+\cdots +a_{1}\log(N)+a_{0} = O(polylog(N)) \in o(N).
\end{equation}
\paragraph{}
This section shows different strategies for proving that a \emph{streaming algorithm} belongs to a given complexity order. Because of the substantial statistical background these proofs require, some basic concepts related to statistical estimators are defined next, followed by brief derivations of different concentration bounds for the values of a distribution in subsections \ref{sec:markov_inequality}, \ref{sec:chebyshev_inequality} and \ref{sec:chernoff_inequality}. The definitions given below are taken from the lecture notes of the course on \emph{Randomized Algorithms} \cite{aspnes2014notes} taught by Aspnes at Yale University, as well as those of the \emph{Estadística} course \cite{estadistica2016notes} taught in the Grado de Ingeniería Informática of the \emph{Universidad de Valladolid}.
\subsection{Basic Statistical Concepts}
\label{sec:basic_statistics}
\paragraph{}
We will denote by $x_1$ any observation contained in the space of all possible ones. The set of all possible observations will be denoted $\Omega$ and called the sample space, so $x \in \Omega$. This concept can be understood more easily through the following example. Consider the toss of a coin, whose outcome can be heads or tails. We then define $x_1 = \text{heads}$ and $x_2 = \text{tails}$ as the possible outcomes of tossing a coin. Therefore the space $\Omega$ is defined as $\Omega = \{x_1, x_2\} = \{\text{heads}, \text{tails}\}$.
\paragraph{}
The next step is to define the concept of a \textbf{Random Variable}, which represents a function that maps the realization of a given event over the space $\Omega$. This function is denoted with capital letters and, since its input parameters are unknown, they are omitted from the notation. We will therefore denote random variables as ($\boldsymbol{E}, \boldsymbol{X}, \boldsymbol{Y}, \text{etc.}$). For a random variable $\boldsymbol{X}$, let $x_1, x_2, ..., x_i, ...$ be each of the possible observations. Following the previous example, the toss of a coin can be modeled as $X$. Note therefore that a random variable can be defined in words as the modeling of the outcome of an event that is unknown \emph{a priori}.
\paragraph{}
We define probability as the measure of certainty associated with a future event, expressed as a value in the interval $[0,1]$, where $0$ corresponds to an impossible event and $1$ to a certain one. The notation used to represent this will be $Pr[\boldsymbol{X} = x_i]$. Assuming equal probabilities in the coin example, we can define its probability values as $Pr[\boldsymbol{X} = \text{heads}] = \tfrac{1}{2}$ and $Pr[\boldsymbol{X} = \text{tails}] = \tfrac{1}{2}$.
\paragraph{}
Having described these simple concepts, we next discuss different statistical notions used in the analysis of probabilistic algorithms, such as \emph{Expectation}, \emph{Variance}, \emph{Independent Variables} and \emph{Conditional Probability}.
\paragraph{}
We call \textbf{Expectation} the mean value that a given random variable is expected to take. The mathematical modeling of this concept is shown in equation \eqref{eq:expectation}. Moreover, expectation is linear, so equations \eqref{eq:expectation_l1} and \eqref{eq:expectation_l2} hold.
\begin{equation}
\label{eq:expectation}
\mathbb{E}[\boldsymbol{X}] = \sum_{i=1}^\infty x_i \cdot Pr[\boldsymbol{X} = x_i]
\end{equation}
\begin{equation}
\label{eq:expectation_l1}
\mathbb{E}[c \boldsymbol{X}] = c \mathbb{E}[\boldsymbol{X}]
\end{equation}
\begin{equation}
\label{eq:expectation_l2}
\mathbb{E}[\boldsymbol{X} + \boldsymbol{Y}] = \mathbb{E}[\boldsymbol{X}] + \mathbb{E}[\boldsymbol{Y}]
\end{equation}
\paragraph{}
The \textbf{Variance} is defined as a measure of dispersion of a random variable. This estimator represents the expected squared deviation from the expectation. Its mathematical modeling is shown in equation \eqref{eq:variance}. By applying algebraic properties, the properties described in equations \eqref{eq:variance_p1} and \eqref{eq:variance_p2} can be proved.
\begin{equation}
\label{eq:variance}
Var[\boldsymbol{X}] = \mathbb{E}[(\boldsymbol{X} - \mathbb{E}[\boldsymbol{X}])^2]
\end{equation}
\begin{equation}
\label{eq:variance_p1}
Var[\boldsymbol{X}] = \mathbb{E}[\boldsymbol{X}^2] - \mathbb{E}^2[\boldsymbol{X}]
\end{equation}
\begin{equation}
\label{eq:variance_p2}
Var[c \boldsymbol{X}] = c^2 Var[\boldsymbol{X}]
\end{equation}
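\paragraph{}
As a brief worked example (a fair six-sided die, chosen here purely for illustration), the previous definitions give:
\begin{equation}
\mathbb{E}[\boldsymbol{X}] = \sum_{i=1}^{6} i \cdot \frac{1}{6} = \frac{7}{2}, \qquad
Var[\boldsymbol{X}] = \mathbb{E}[\boldsymbol{X}^2] - \mathbb{E}^2[\boldsymbol{X}] = \frac{91}{6} - \frac{49}{4} = \frac{35}{12}
\end{equation}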
\paragraph{}
Next, the concept of \textbf{Independence} between two random variables $\boldsymbol{X}, \boldsymbol{Y}$ is described. Two variables are said to be independent when the outcomes of each of them are not conditioned by those of the other. This can be seen as the fulfillment of the equality in equation \eqref{eq:independence}.
\begin{equation}
\label{eq:independence}
Pr[\boldsymbol{X} = x \cap \boldsymbol{Y} = y] = Pr[\boldsymbol{X} = x] \cdot Pr[\boldsymbol{Y} = y]
\end{equation}
\paragraph{}
When we refer to the concept of independence for a set of $n$ random variables $\boldsymbol{X_1}, \boldsymbol{X_2},..., \boldsymbol{X_n}$ we call it \textbf{Mutual Independence}, which imposes the restriction described in equation \eqref{eq:mutual_independence}.
\begin{equation}
\label{eq:mutual_independence}
Pr \bigg[ \bigcap_{i=1}^n \boldsymbol{X_i} = x_i \bigg] = \prod_{i=1}^n Pr[\boldsymbol{X_i} = x_i]
\end{equation}
\paragraph{}
Also of special interest in the field of probabilistic algorithms is the case of \textbf{k-independence} over a set of $n$ random variables $\boldsymbol{X_1}, \boldsymbol{X_2},..., \boldsymbol{X_n}$. This idea can be summarized as the independence of the variables taken in groups of $k$. This concept is very important in the field of \emph{Sketches}, as will be seen in Section \ref{sec:sketch}. The simplest case is $k = 2$, called \textbf{pairwise independence}, whose mathematical modeling is shown in equation \eqref{eq:pairwise_independence}.
\begin{equation}
\label{eq:pairwise_independence}
\forall i, \forall j \ Pr[\boldsymbol{X_i} = x_i \cap \boldsymbol{X_j} = x_j] = Pr[\boldsymbol{X_i} = x_i] \cdot Pr[\boldsymbol{X_j} = x_j]
\end{equation}
\paragraph{}
From the point of view of sets of $n$ random variables $\boldsymbol{X_1}, \boldsymbol{X_2},..., \boldsymbol{X_n}$, several linearity properties hold for the \emph{Expectation} and the \emph{Variance}. In the case of the \emph{Expectation}, linearity with respect to the sum (equation \eqref{eq:expectation_linearity}) holds for both dependent and independent variables. In the case of the \emph{Variance}, however, linearity with respect to the sum (equation \eqref{eq:variance_linearity}) holds only for \textbf{pairwise independent} variables.
\begin{equation}
\label{eq:expectation_linearity}
\mathbb{E}\bigg[\sum_{i=1}^n \boldsymbol{X_i}\bigg] = \sum_{i=1}^n \mathbb{E}[\boldsymbol{X_i}]
\end{equation}
\begin{equation}
\label{eq:variance_linearity}
Var\bigg[\sum_{i=1}^n \boldsymbol{X_i}\bigg] = \sum_{i=1}^n Var[\boldsymbol{X_i}]
\end{equation}
\paragraph{}
The \textbf{Conditional Probability} between two events $\boldsymbol{E_1}$ and $\boldsymbol{E_2}$ is the likelihood of the occurrence of $\boldsymbol{E_1}$ given that $\boldsymbol{E_2}$ has already occurred. It is defined as shown in equation \eqref{eq:conditional_probability}.
\begin{equation}
\label{eq:conditional_probability}
Pr[\boldsymbol{E_1} \rvert \boldsymbol{E_2}] = \frac{Pr[\boldsymbol{E_1} \cap \boldsymbol{E_2}]}{Pr[\boldsymbol{E_2}]}
\end{equation}
\paragraph{}
In the case of \textbf{Conditional Probability} over independent variables, the property described in equation \eqref{eq:conditional_probability_independence} arises. The reason is easy to understand: if two random variables are unrelated, the occurrence of one of them does not condition the outcome of the other.
\begin{equation}
\label{eq:conditional_probability_independence}
Pr[\boldsymbol{X_1} = x_1 \rvert \boldsymbol{X_2} = x_2] =
\frac{Pr[\boldsymbol{X_1} = x_1 \cap \boldsymbol{X_2} = x_2]}{Pr[\boldsymbol{X_2} = x_2]} =
\frac{Pr[\boldsymbol{X_1} = x_1] \cdot Pr[\boldsymbol{X_2} = x_2]}{Pr[\boldsymbol{X_2} = x_2]} =
Pr[\boldsymbol{X_1} = x_1]
\end{equation}
\paragraph{}
Having described the basic statistical concepts for the analysis of probabilistic algorithms, the next step is to present the main concentration bounds, which make it possible to obtain guarantees about the results expected from such algorithms as well as their complexity. We first describe the \emph{Boole Inequality}, and then the inequalities of \emph{Markov} (\ref{sec:markov_inequality}), \emph{Chebyshev} (\ref{sec:chebyshev_inequality}) and \emph{Chernoff} (\ref{sec:chernoff_inequality}).
\paragraph{}
The \textbf{Boole Inequality} (union bound) is a basic property stating that the probability of the union of a set of events is at most the sum of the probabilities of the individual events. This is formalized in equation \eqref{eq:boole_inequality}.
\begin{equation}
\label{eq:boole_inequality}
Pr\bigg[\bigcup_{i=1}^n \boldsymbol{E_i}\bigg] \leq \sum_{i=1}^n Pr[\boldsymbol{E_i}]
\end{equation}
\subsection{Markov's Inequality}
\label{sec:markov_inequality}
\paragraph{}
\emph{Markov's Inequality} is the basic technique on which other, more sophisticated inequalities rely to bound the deviation of a \emph{Random Variable} through its \emph{Expectation}. It provides an upper bound on the tail probability in terms of the \emph{Expectation}, as shown in equation \eqref{eq:markov_inequality}. As one may guess, this bound is rather loose; nevertheless, it is very useful as a building block. Let $f: \boldsymbol{X} \rightarrow \mathbb{R}^+$ be a positive function; then the inequality of equation \eqref{eq:markov_inequality_function} also holds. The interesting point arises when $f$ is chosen to be strictly increasing, in which case the property described in equation \eqref{eq:markov_inequality_function_positive} holds, from which much tighter bounds can be obtained. This idea is related to \emph{Jensen's Inequality}.
\begin{equation}
\label{eq:markov_inequality}
\forall \lambda \geq 0, \ Pr[\boldsymbol{X} \geq \lambda ] \leq \frac{\mathbb{E}[\boldsymbol{X}]}{\lambda}
\end{equation}
\begin{equation}
\label{eq:markov_inequality_function}
\forall \lambda \geq 0, \ Pr[f(\boldsymbol{X}) \geq f(\lambda) ] \leq \frac{\mathbb{E}[f(\boldsymbol{X})]}{f(\lambda)}
\end{equation}
\begin{equation}
\label{eq:markov_inequality_function_positive}
\forall \lambda \geq 0, \ Pr[\boldsymbol{X} \geq \lambda ] = Pr[f(\boldsymbol{X}) \geq f(\lambda) ] \leq \frac{\mathbb{E}[f(\boldsymbol{X})]}{f(\lambda)}
\end{equation}
\subsection{Chebyshev's Inequality}
\label{sec:chebyshev_inequality}
\paragraph{}
\emph{Chebyshev's Inequality} uses the technique described in the previous sub-section, relying on a suitable choice of the function $f$ to obtain a much tighter concentration bound based on the \emph{Variance}. The property is shown in equation \eqref{eq:chebyshev_inequality}. In this case $f(\boldsymbol{X}) = \boldsymbol{X}^2$ is used, which is strictly increasing over the domain of application, and the random variable considered is $|\boldsymbol{X} - \mathbb{E}[\boldsymbol{X}]|$, i.e.\ the absolute deviation of $\boldsymbol{X}$ from its expected value. The derivation is shown in equation \eqref{eq:chebyshev_inequality_demo}.
\begin{equation}
\label{eq:chebyshev_inequality}
\forall \lambda \geq 0, \ Pr[|\boldsymbol{X} - \mathbb{E}[\boldsymbol{X}]| \geq \lambda] \leq \frac{Var[\boldsymbol{X}]}{\lambda^2}
\end{equation}
\begin{equation}
\label{eq:chebyshev_inequality_demo}
\forall \lambda \geq 0, \
Pr[|\boldsymbol{X} - \mathbb{E}[\boldsymbol{X}]| \geq \lambda] =
Pr[(\boldsymbol{X} - \mathbb{E}[\boldsymbol{X}])^2 \geq \lambda^2] \leq
\frac{\mathbb{E}[(\boldsymbol{X} - \mathbb{E}[\boldsymbol{X}])^2]}{\lambda^2} =
\frac{Var[\boldsymbol{X}]}{\lambda^2}
\end{equation}
\subsection{Chernoff Bound}
\label{sec:chernoff_inequality}
\paragraph{}
This section describes the \emph{Chernoff Bound}. The presentation is based on the lecture notes of the course on \emph{Randomized Algorithms} \cite{chawla2004chernoff} taught by Shuchi Chawla at \emph{Carnegie Mellon University}, Pennsylvania.
\paragraph{}
The \emph{Chernoff Bound} provides much tighter bounds but, in exchange, requires stronger assumptions. The random variable must be of the form $\boldsymbol{S} = \sum_{i=1}^n \boldsymbol{X_i}$, where each $\boldsymbol{X_i}$ is a 0/1 (Bernoulli) random variable independent of the rest. We denote the expectation of each variable $\boldsymbol{X_i}$ by $\mathbb{E}[\boldsymbol{X_i}] = p_i$.
\paragraph{}
We denote by $\mu$ the expectation of $\boldsymbol{S}$, as described in equation \eqref{eq:chernoff_inequality_expectation}. The function $f$ is chosen as $f(\boldsymbol{S}) = e^{t\boldsymbol{S}}$.
\begin{equation}
\label{eq:chernoff_inequality_expectation}
\mu = \mathbb{E}\bigg[\sum_{i=1}^n \boldsymbol{X_i}\bigg] = \sum_{i=1}^n \mathbb{E}[\boldsymbol{X_i}] = \sum_{i=1}^n p_i
\end{equation}
\paragraph{}
The next step is to apply equation \eqref{eq:markov_inequality_function_positive} of \emph{Markov's Inequality} with the function $f$, which is possible since it is strictly increasing. Instead of the constant $\lambda$, it is customary to use a parameter $\delta \geq 0$, related to the former by $\lambda = (1 + \delta)\mu$. Equation \eqref{eq:chernoff_inequality_raw} then shows the initial step towards the \emph{Chernoff Bound}.
\begin{equation}
\label{eq:chernoff_inequality_raw}
Pr[ \boldsymbol{S} > (1+\delta) \mu] =
Pr[ e^{t \boldsymbol{S}} > e^{(1+\delta)t\mu}] \leq
\frac{\mathbb{E}[ e^{t \boldsymbol{S}}] }{e^{(1 + \delta) t \mu}}
\end{equation}
\paragraph{}
By applying arithmetic manipulations and further statistical properties, the validity of equations \eqref{eq:chernoff_inequality_upper} and \eqref{eq:chernoff_inequality_lower} can be proved; they provide very tight concentration bounds for a \emph{random variable} formed as the sum of $n$ independent Bernoulli random variables.
\begin{equation}
\label{eq:chernoff_inequality_upper}
\forall \delta \geq 0, \ Pr[\boldsymbol{S} \geq (1 + \delta)\mu] \leq e^\frac{-\delta^2\mu}{2 + \delta}
\end{equation}
\begin{equation}
\label{eq:chernoff_inequality_lower}
\forall \delta \geq 0, \ Pr[\boldsymbol{S} \leq (1 - \delta)\mu] \leq e^\frac{-\delta^2\mu}{2 + \delta}
\end{equation}
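\paragraph{}
As an illustration of how tight these bounds are, the following minimal Python sketch (our own, not part of the cited notes; all names and parameter values are arbitrary illustrative choices) compares the empirical upper tail of a sum of independent Bernoulli variables with the bound of equation \eqref{eq:chernoff_inequality_upper}.
\begin{verbatim}
import random
import math

# Empirical check of the Chernoff upper-tail bound for a sum of
# independent Bernoulli variables (illustrative sketch).
n, p, delta, trials = 200, 0.3, 0.5, 20000
mu = n * p

exceed = 0
for _ in range(trials):
    s = sum(1 for _ in range(n) if random.random() < p)
    if s >= (1 + delta) * mu:
        exceed += 1

empirical = exceed / trials
bound = math.exp(-delta ** 2 * mu / (2 + delta))
print(f"empirical tail: {empirical:.4f}  Chernoff bound: {bound:.4f}")
\end{verbatim}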
\subsection{Hash Functions}
\label{sec:hash_functions}
\paragraph{}
Hash functions are mathematical transformations that map a value from a discrete space of $n$ possible values to another space of $m$ values of smaller size, trying to avoid collisions in the image. They are therefore functions that try to behave injectively over a destination space smaller than the source space. However, as can be understood intuitively, this property is impossible to satisfy exactly because of the relative sizes of the source and destination spaces.
\paragraph{}
Universal hash families are categories into which hash functions can be grouped according to the properties they satisfy. These categories are called \emph{k-universal hash families}, where the value of $k$ indicates how unlikely collisions are. \emph{2-universal} functions are those satisfying $Pr[h(x) = h(y)] \leq 1/m$, where $h$ is the hash function, $x$ and $y$ are two possible inputs with $x \neq y$, and $m$ is the size of the destination space. A hash function is said to be \emph{strongly 2-universal} (pairwise independent) if $Pr[h(x) = a \cap h(y) = b] \leq 1/m^2$ for any pair of destination values $a, b$, and, in general, a family is said to be \emph{k-universal} if the probability that $k$ distinct inputs are mapped to any fixed $k$ values is at most $1/m^{k}$. Two basic strategies for designing hash functions are described next, followed by a description of locality-sensitive hash functions.
\subsubsection{Congruential Hashing}
\label{sec:hash_congruential}
\paragraph{}
Congruential hash functions have the property of being \emph{2-universal}. They are based on the idea of using as destination space $\mathbb{Z}_p$, that is, the integers modulo $p$, where $p \geq m$ is a prime number. In addition, two integers $a,b \in \mathbb{Z}_p$ with $a \neq 0$ are chosen. The hash function is then defined as indicated in equation \eqref{eq:hash_congruential}.
\begin{equation}
\label{eq:hash_congruential}
h_{ab}(x) = (ax + b) \bmod p
\end{equation}
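\paragraph{}
The following Python sketch illustrates this family, adding the usual extra reduction modulo $m$ to map the result into $m$ buckets; the choice of the prime and the function names are ours and merely illustrative.
\begin{verbatim}
import random

# Minimal sketch of a 2-universal hash family
# h_{a,b}(x) = ((a*x + b) mod p) mod m, for integer keys smaller than p.
P = 2 ** 61 - 1  # a large Mersenne prime (illustrative choice)

def make_hash(m: int):
    a = random.randrange(1, P)  # a != 0
    b = random.randrange(0, P)
    return lambda x: ((a * x + b) % P) % m

h = make_hash(1024)
print(h(42), h(43))
\end{verbatim}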
\subsubsection{Tabulation Hashing}
\label{sec:hasch_tabulation}
\paragraph{}
Tabulation hashing is a method for generating hash values that restricts its input domain to fixed-length strings (or other inputs that can be encoded in that form). We denote the fixed length by $c$ and by $T_i, i \in [1,c]$ the lookup tables that map the $i$-th character to a random value. The hash function then applies an \emph{exclusive-or} over the values $T_i[x_i]$, as indicated in equation \eqref{eq:hash_tabulation}. Hash functions built following this strategy belong to the class of \emph{3-universal} families.
\begin{equation}
\label{eq:hash_tabulation}
h(x) = T_1[x_1] \oplus T_2[x_2] \oplus ... \oplus T_c[x_c]
\end{equation}
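\paragraph{}
A minimal Python sketch of tabulation hashing for keys of $c=4$ bytes is shown below; the table sizes and names are illustrative choices of ours.
\begin{verbatim}
import random

# Sketch of tabulation hashing for fixed-length byte strings (here c = 4).
# Each table T_i maps one byte to a random 32-bit value; the hash is their XOR.
c, BITS = 4, 32
T = [[random.getrandbits(BITS) for _ in range(256)] for _ in range(c)]

def tab_hash(key: bytes) -> int:
    assert len(key) == c
    h = 0
    for i, byte in enumerate(key):
        h ^= T[i][byte]
    return h

print(tab_hash(b"abcd"), tab_hash(b"abce"))
\end{verbatim}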
\subsubsection{Locality-Sensitive Hash Functions}
\label{sec:hash_lsh}
\paragraph{}
A category worth highlighting is that of \emph{locality-sensitive hash functions}. These have the property of distributing values over the destination space while trying to preserve the proximity relations among them. The first article discussing them is \emph{Approximate Nearest Neighbor: Towards Removing the Curse of Dimensionality} \cite{indyk1998approximate} by \emph{Indyk} and \emph{Motwani}, originally aimed at the \emph{Nearest Neighbor Search} problem. They are multidimensional hash functions, i.e.\ their inputs are composed of more than one element. These functions have become especially important in recent years due to their use in very high dimensional problems, since they serve as a dimensionality-reduction strategy for the data set, which in turn reduces the computational cost of the problem.
\paragraph{}
\emph{Indyk} states that a hash function is locality sensitive, or $(r_1,r_2, p_1, p_2)$-sensitive, if for every pair of input points $p,q$ and the distance function $d$, equation \eqref{eq:hash_lsh} holds. Hash functions satisfying this property are interesting when $p_1 > p_2$ and $r_1 < r_2$, so that points that are close in the input space are likely to receive the same hash value and can thus be assumed to be close to each other.
\begin{align}
\label{eq:hash_lsh}
\text{if} \ d(p,q) \leq r_1, \ & \text{then} \ Pr[h(p) = h(q)] \geq p_1 \\
\text{if} \ d(p,q) \geq r_2, \ & \text{then} \ Pr[h(p) = h(q)] \leq p_2
\end{align}
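\paragraph{}
As a concrete and deliberately simple example, the following Python sketch builds a locality-sensitive hash for binary vectors under the Hamming distance by sampling $k$ random coordinates; close vectors collide with higher probability than distant ones. This is a standard construction for the Hamming distance, written here with names of our own choosing.
\begin{verbatim}
import random

# Bit-sampling LSH for binary vectors under Hamming distance:
# the hash of a vector is the tuple of k randomly chosen coordinates.
def make_bit_sampling_lsh(dim: int, k: int):
    coords = random.sample(range(dim), k)
    return lambda v: tuple(v[i] for i in coords)

dim = 16
h = make_bit_sampling_lsh(dim, k=4)
p = [0] * dim
q = list(p); q[0] = 1          # q is close to p (Hamming distance 1)
r = [1 - x for x in p]         # r is far from p (Hamming distance 16)
print(h(p) == h(q), h(p) == h(r))
\end{verbatim}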
\paragraph{}
Having described the basics of \emph{Streaming Algorithms} and the statistical foundations needed to understand how they work, their complexity and their precision, the following sections describe some of the most relevant algorithms in this area. In particular, \emph{Morris'} algorithm is explained in section \ref{sec:streaming_morris_algorithm}, the \emph{Flajolet-Martin} algorithm in section \ref{sec:streaming_flajolet_martin_algorithm}, and finally the \emph{Approximation of Frequency Moments} in the streaming model is discussed in section \ref{sec:streaming_frecuency_moment_aproximation}.
\section{Morris' Algorithm}
\label{sec:streaming_morris_algorithm}
\paragraph{}
The \emph{Morris Algorithm} was first presented in the article \emph{Counting Large Numbers of Events in Small Registers} \cite{morris1978counting} written by \emph{Robert Morris}. That paper seeks a solution to the problem of counting the occurrences of an event under space restrictions caused by the very high rate of occurrences observed in many phenomena. The occurrence counting problem (\textbf{Count Problem}) is also known as the frequency moment $F_1$, as will be seen in section \ref{sec:streaming_frecuency_moment_aproximation}.
\paragraph{}
\emph{Morris} therefore proposes estimating this count in order to reduce the space needed to store it. Intuitively, from this restriction a sub-linear space complexity ($o(N)$) with respect to the number of occurrences is obtained. It can be said that Morris' article marked the starting point of this research area. Probabilistic counting is trivial if it is restricted to incrementing the counter following a \emph{Bernoulli} distribution with a fixed parameter $p$: this yields a relatively small absolute error with respect to the chosen value of $p$. However, the relative error obtained when the number of occurrences is small is very high, which makes it an impractical solution.
\paragraph{}
To solve this problem and obtain a reduced bound on the relative error, the solution proposed by \emph{Morris} makes the parameter $p$ vary with the number of occurrences, so that the decision to increment the counter is very likely in the first steps, which removes the relative-error problem. \emph{Morris} proposes incrementing the counter $X$ with probability $\frac{1}{2^X}$. After $n$ occurrences, the result returned by the algorithm is $\widetilde{n} = 2^X -1$. The pseudocode is shown in algorithm \ref{code:morris-algorithm}.
\paragraph{}
\begin{algorithm}[h]
\SetAlgoLined
\KwResult{$\widetilde{n} = 2^X -1$ }
$X \gets 0$\;
\For{each event}{
$X \gets X + 1 \ \text{with probability} \ \frac{1}{2^X}$\;
}
\caption{Morris-Algorithm}
\label{code:morris-algorithm}
\end{algorithm}
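\paragraph{}
A direct Python transcription of algorithm \ref{code:morris-algorithm} is shown below as an illustrative sketch (the class and variable names are ours).
\begin{verbatim}
import random

# Minimal sketch of Morris' probabilistic counter: the register X is
# incremented with probability 1/2^X and the estimate is 2^X - 1.
class MorrisCounter:
    def __init__(self):
        self.x = 0

    def update(self):
        if random.random() < 1.0 / (2 ** self.x):
            self.x += 1

    def estimate(self) -> float:
        return 2 ** self.x - 1

c = MorrisCounter()
for _ in range(100_000):
    c.update()
print(c.estimate())   # close to 100000 on average, stored in very few bits
\end{verbatim}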
\paragraph{}
An analysis of this solution follows; it has been taken from the lecture notes of the course \emph{Algorithms for Big Data} \cite{bigdata2015jelani} taught by \emph{Jelani Nelson} at \emph{Harvard University}. We denote by $X_n$ the value of the counter $X$ after $n$ occurrences. Then the equalities described in equations \eqref{eq:morris_expectation_1} and \eqref{eq:morris_expectation_2} hold; they can be proved by induction on $n$.
\begin{equation}
\label{eq:morris_expectation_1}
\mathbb{E}[2^{X_n}] = n + 1
\end{equation}
\begin{equation}
\label{eq:morris_expectation_2}
\mathbb{E}[2^{2X_n}] = \frac{3}{2}n^2 + \frac{3}{2}n + 1
\end{equation}
\paragraph{}
Using \emph{Chebyshev's Inequality} we can bound the error made after $n$ occurrences, as shown in equation \eqref{eq:morris_bound}.
\begin{align}
\label{eq:morris_bound}
Pr[|\widetilde{n} - n| > \epsilon n ] < \frac{1}{\epsilon^2n^2}\cdot\mathbb{E}[(\widetilde{n} - n)^2]
&= \frac{1}{\epsilon^2n^2}\cdot\mathbb{E}[(2^{X_n}-1-n)^2]
\\&\leq \frac{1}{\epsilon^2n^2}\cdot \frac{n^2}{2}
\\&= \frac{1}{2\epsilon^2}
\end{align}
\paragraph{}
The advantage of this algorithmic strategy over the trivial one is the bound on the relative error, which remains constant with respect to the number of occurrences and therefore makes the method more broadly applicable. Nevertheless, other solutions have been proposed to further reduce this error bound. The \emph{Morris+} algorithm maintains $s$ independent copies of \emph{Morris} and returns the average of their results. With this strategy, the error rates indicated in equation \eqref{eq:morris+_bound} are obtained.
\begin{align}
\label{eq:morris+_bound}
Pr[|\widetilde{n} - n| > \epsilon n ] < \frac{1}{2s\epsilon^2} && s > \frac{3}{2\epsilon^2}
\end{align}
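\paragraph{}
The following sketch illustrates \emph{Morris+} as just described: $s$ independent Morris counters whose estimates are averaged (an illustrative, unoptimized version; names are ours).
\begin{verbatim}
import random

# Morris+: keep s independent Morris counters and return the mean
# of their estimates, which reduces the variance of the estimator.
def morris_plus(stream_length: int, s: int) -> float:
    xs = [0] * s
    for _ in range(stream_length):
        for i in range(s):
            if random.random() < 1.0 / (2 ** xs[i]):
                xs[i] += 1
    return sum(2 ** x - 1 for x in xs) / s

print(morris_plus(stream_length=50_000, s=30))
\end{verbatim}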
\section{Flajolet-Martin Algorithm}
\label{sec:streaming_flajolet_martin_algorithm}
\paragraph{}
This section describes the \emph{Flajolet-Martin Algorithm}, presented in the article \emph{Probabilistic Counting Algorithms for Data Base Applications} \cite{flajolet1985probabilistic} by \emph{Philippe Flajolet} and \emph{G. Nigel Martin}. In this case the problem to be solved is not counting the occurrences of a given event, but counting the number of distinct elements (\textbf{Count Distinct Problem}) in the input. This problem is also known as the computation of the frequency moment $F_0$, as will be seen in section \ref{sec:streaming_frecuency_moment_aproximation}. As in the case of \emph{Morris'} algorithm, it relies on probabilistic strategies to achieve sub-linear space complexity ($o(N)$) while keeping a tight error bound.
\paragraph{}
The intuition behind the \emph{Flajolet-Martin Algorithm} is to transform the input elements with a binary universal \emph{hash function} whose outputs are uniformly and independently distributed. The uniformity property then implies that about half of the elements will have a $1$ in the least significant bit, a quarter of the elements will have a $1$ in the second least significant bit, and so on. From this idea a probabilistic approximation of the number of distinct elements seen in the input can be obtained. The algorithm requires $L$ bits of space to keep track of the number of distinct elements; with the notation of previous sections, $L = \log(n)$, where $n$ is the maximum number of distinct elements in the input. The strategy is explained next, using the following functions:
\begin{itemize}
\item $hash(x)$: the hash function with uniform and independent output distribution, which maps an arbitrary input to an integer value in the range $[0,...,2^L-1]$.
\item $bit(y, k)$: returns the $k$-th bit of the binary representation of $y$, so that $y = \sum_{k \geq 0} bit(y,k)2^k$.
\item $\rho(y)$: returns the position of the least significant bit with value $1$. By convention, it returns $L$ if $y$ does not contain any $1$ in its binary representation, that is, if $y = 0$. This is formalized in equation \eqref{eq:rho_function}.
\end{itemize}
\begin{equation}
\label{eq:rho_function}
\rho(y) =
\begin{cases}
\min\{k \geq 0 : bit(y, k) \neq 0\} & y > 0\\
L & y =0
\end{cases}
\end{equation}
\paragraph{}
\emph{Flajolet} and \emph{Martin} rely on an indexed data structure called \textbf{BITMAP}, of size $[0...L-1]$, which stores binary values $\{ 0, 1\}$ and is initialized with all values set to $0$. Note that this data structure can be encoded as a binary string of length $L$. The idea of the algorithm is to mark position \textbf{BITMAP[$\rho(hash(x))$]} with a $1$. It remains to define the answer to the query about how many distinct elements have appeared in the input data stream: for this, $2^{\rho(\textbf{BITMAP})}$ is computed, i.e.\ $2$ raised to the position of the least significant $0$ of the bitmap. The pseudocode is shown in algorithm \ref{code:fm-algorithm}.
\paragraph{}
\begin{algorithm}[h]
\SetAlgoLined
\KwResult{$2^{\rho(\textbf{BITMAP})}$}
\For{$i \in [0,...,L-1]$}{
$\textbf{BITMAP[$i$]} \gets 0$\;
}
\For{each event}{
\If{$\textbf{BITMAP[$\rho(hash(x))$]} = 0$}{
$\textbf{BITMAP[$\rho(hash(x))$]} \gets 1$\;
}
}
\caption{FM-Algorithm}
\label{code:fm-algorithm}
\end{algorithm}
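\paragraph{}
The following Python sketch illustrates the estimator of algorithm \ref{code:fm-algorithm}. The ideal random hash function is simulated with Python's built-in hashing combined with a random mask, the estimate is taken as $2^r$ where $r$ is the first position of \textbf{BITMAP} still at $0$, and the correction constant of the original paper is omitted; all of these are simplifying assumptions of ours.
\begin{verbatim}
import random

L = 32
MASK = random.getrandbits(64)

def fm_hash(x) -> int:
    # Stand-in for an ideal uniform hash function over [0, 2^L - 1].
    return hash((x, MASK)) & ((1 << L) - 1)

def rho(y: int) -> int:
    # Position of the least significant 1 bit, or L if y == 0.
    if y == 0:
        return L
    r = 0
    while (y >> r) & 1 == 0:
        r += 1
    return r

def fm_estimate(stream) -> float:
    bitmap = [0] * L
    for x in stream:
        bitmap[rho(fm_hash(x))] = 1
    r = next((i for i, b in enumerate(bitmap) if b == 0), L)
    return 2 ** r

stream = [random.randrange(10_000) for _ in range(100_000)]
print(fm_estimate(stream), "approx. distinct; true:", len(set(stream)))
\end{verbatim}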
\paragraph{}
The analysis of this solution has been taken from the book \emph{Mining of massive datasets} \cite{leskovec2014mining}, published by \emph{Cambridge University Press}. There, the estimate is expressed in terms of the number of consecutive $0$'s in the least significant part of the binary representation of $h(x)$. Note that this is equivalent to the value of the function $\rho(h(x))$, so we adapt the analysis to the original formulation proposed by \emph{Flajolet} and \emph{Martin}. The probability that $\rho(h(x)) = r$ holds is $2^{-r}$. Suppose the number of distinct elements in the stream is $m$. Then the probability that none of them satisfies $\rho(h(x)) = r$ is $(1- 2^{-r})^m$, which can be rewritten as $((1- 2^{-r})^{2^r})^{m2^{-r}}$. For sufficiently large values of $r$ it can be assumed that $(1-\epsilon)^{1/\epsilon} \approx 1/e$. Hence the probability that no element satisfies $\rho(h(x)) = r$ after $m$ distinct elements have appeared in the stream is approximately $e^{-m2^{-r}}$.
\paragraph{}
The problem with this algorithm stems from the assumption that fully random hash functions can be generated, which is not achievable in practice. \emph{Flajolet} therefore continued working on the distinct counting problem in \emph{Loglog counting of large cardinalities} \cite{durand2003loglog} and \emph{Hyperloglog: the analysis of a near-optimal cardinality estimation algorithm} \cite{flajolet2007hyperloglog}, trying to improve the precision of his counting strategy. In the article \emph{An optimal algorithm for the distinct elements problem} \cite{kane2010optimal}, \emph{Daniel Kane et al.} present an optimal algorithm for the problem. The results of these works are discussed in section \ref{sec:hyper_log_log} because of their close relation to \emph{summary data structures}.
\section{Approximation of Frequency Moments}
\label{sec:streaming_frecuency_moment_aproximation}
\paragraph{}
The last idea worth discussing to finish the introduction to \emph{Streaming Algorithms} is that of \emph{Frequency Moments}: a generalization of the number of distinct elements ($F_0$) and of the element count ($F_1$) that can be extended to any $F_k$ with $k \geq 0$. The mathematical definition of the $k$-th frequency moment is shown in equation \eqref{eq:frecuency_moments}, where $m_i$ denotes the number of occurrences of element $i$ in the stream. Note the special case $F_\infty$, shown in equation \eqref{eq:frecuency_moments_max}, which corresponds to the most frequent element of the \emph{Stream}. These ideas have been taken from the document \emph{Frequency Moments} \cite{woodruff2009frequency} written by \emph{David Woodruff}.
\begin{equation}
\label{eq:frecuency_moments}
F_k = \sum_{i=1}^n m_i^k
\end{equation}
\begin{equation}
\label{eq:frecuency_moments_max}
F_\infty = max_{1 \leq i \leq n} m_i
\end{equation}
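\paragraph{}
For reference, the frequency moments can be computed exactly (without any streaming restriction) as in the following Python sketch, which is useful for checking the approximation algorithms discussed below; the function name is ours.
\begin{verbatim}
from collections import Counter

# Direct (non-streaming) computation of the frequency moment F_k.
def frequency_moment(stream, k: int) -> float:
    counts = Counter(stream)
    if k == 0:
        return len(counts)                    # number of distinct elements
    return sum(m ** k for m in counts.values())

stream = ["a", "b", "a", "c", "a", "b"]
print(frequency_moment(stream, 0),   # 3
      frequency_moment(stream, 1),   # 6
      frequency_moment(stream, 2))   # 9 + 4 + 1 = 14
\end{verbatim}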
\paragraph{}
The rest of the section presents the algorithms for computing frequency moments described in the article \emph{The space complexity of approximating the frequency moments} \cite{alon1996space} by \emph{Noga Alon}, \emph{Yossi Matias} and \emph{Mario Szegedy}, for which they were awarded the \emph{Gödel} Prize in 2005. Besides presenting \emph{Streaming Algorithms} for computing $F_k$ (their solution for $F_2$ is particularly notable), that work also provides lower bounds for the \emph{Frequency Moments} problem. Later, \emph{Piotr Indyk} and \emph{David Woodruff} found an optimal algorithm for the \emph{Frequency Moments} problem, as presented in \emph{Optimal Approximations of the Frequency Moments of Data Streams} \cite{indyk2005optimal}. The results of these works are discussed next.
\paragraph{}
To compute $F_k$ for $k \geq 0$, \emph{Alon}, \emph{Matias} and \emph{Szegedy} propose an approach similar to the previous algorithms: the definition of a random variable $X$ such that $\mathbb{E}[X] = F_k$. The novelty in this case is that their algorithm is not restricted to a specific $k$, but generalizes to any positive integer; the presentation here, however, remains at a theoretical level.
\paragraph{}
We define the constants $S_1 = O(n^{1-1/k}/\lambda ^{2})$ and $S_2 = O(\log(1/\varepsilon ))$. The algorithm uses $S_2$ random variables $Y_1, Y_2, \ldots, Y_{S_2}$ and returns their median, denoted $Y$. Each variable $Y_i$ is the average of $S_1$ random variables $X_{ij}$ with $1 \leq j \leq S_1$. The way the state of the algorithm is updated after each arrival relies on $S_2$ uniformly distributed, mutually independent hash functions that map each symbol to a given index $j$.
\paragraph{}
For the analysis of the algorithm we assume that the length $m$ of the \emph{Stream} is known \emph{a-priori}. The proof relies on a random variable $X$ built as follows:
\begin{itemize}
\item We select uniformly at random a position $p \in \{1,2,...,m\}$ of the stream, and let $a_p = l \in \{1,2,...,n\}$; that is, the element processed at position $p$ corresponds to an arrival of symbol $l$.
\item We define $r=|\{q : q\geq p, a_{q}=l\}|$ as the number of occurrences of the symbol $l$ from position $p$ onwards.
\item The random variable $X$ is defined as $X=m(r^{k}-(r-1)^{k})$.
\end{itemize}
\paragraph{}
Expanding the expectation of the random variable $X$, it can be shown that it equals the $k$-th frequency moment, as shown in equation \eqref{eq:expectation_frecuency_moments}.
\begin{equation}
\label{eq:expectation_frecuency_moments}
{\displaystyle {\begin{array}{lll}\mathbb{E}[X]&=&\sum _{i=1}^{n}\sum _{j=1}^{m_{i}}(j^{k}-(j-1)^{k})\\&=&{\frac {m}{m}}[(1^{k}+(2^{k}-1^{k})+\ldots +(m_{1}^{k}-(m_{1}-1)^{k}))\\&&\;+\;(1^{k}+(2^{k}-1^{k})+\ldots +(m_{2}^{k}-(m_{2}-1)^{k}))+\ldots \\&&\;+\;(1^{k}+(2^{k}-1^{k})+\ldots +(m_{n}^{k}-(m_{n}-1)^{k}))]\\&=&\sum _{i=1}^{n}m_{i}^{k}=F_{k}\end{array}}}
\end{equation}
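\paragraph{}
The following Python sketch implements a single estimator $X$ as constructed above and averages many of them to approximate $F_2$. For clarity it is written as a two-pass, non-streaming version (it assumes random access to the stream and knowledge of $m$), which the actual algorithm avoids; names are ours.
\begin{verbatim}
import random
from collections import Counter

# Single AMS-style estimator: pick a uniform position p of the stream and
# count the occurrences r of a_p from p onwards; X = m(r^k - (r-1)^k).
def ams_single_estimator(stream, k: int) -> float:
    m = len(stream)
    p = random.randrange(m)
    symbol = stream[p]
    r = sum(1 for q in range(p, m) if stream[q] == symbol)
    return m * (r ** k - (r - 1) ** k)

stream = [random.randrange(20) for _ in range(10_000)]
estimates = [ams_single_estimator(stream, k=2) for _ in range(2_000)]
exact = sum(c ** 2 for c in Counter(stream).values())
print(sum(estimates) / len(estimates), "vs exact F_2 =", exact)
\end{verbatim}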
\paragraph{}
Regarding the space cost of the algorithm, it can be shown, as \emph{Alon}, \emph{Matias} and \emph{Szegedy} indicate in their original article \cite{alon1996space}, that it follows the order described in equation \eqref{eq:complexity_frecuency_moments}, since it is necessary to store $a_p$ and $r$, which requires $\log(n) + \log(m)$ bits of memory, in addition to the $S_1 \times S_2$ random variables needed to maintain $X$.
\begin{equation}
\label{eq:complexity_frecuency_moments}
{\displaystyle O\left({\dfrac {k\log {1 \over \varepsilon }}{\lambda ^{2}}}n^{1-{1 \over k}}\left(\log n+\log m\right)\right)}
\end{equation}
\section{Conclusions}
\label{sec:streaming_conclusions}
\paragraph{}
As illustrated throughout this chapter, \emph{streaming algorithms} are an appropriate solution, both conceptually and in practice, for problems in which the size of the input data set is so large that it cannot be handled with classical strategies. On the other hand, these solutions are demanding because of the heavy mathematical and statistical machinery they involve. Moreover, the imprecision of their results restricts their use in cases where exactness is an essential requirement, so they should only be used when no other solution can compute the answer within the imposed time and space restrictions.
\paragraph{}
This chapter has given an introductory overview of this model; the implementations and descriptions shown are mainly of theoretical and conceptual interest. Chapter \ref{chap:summaries} therefore continues the exposition of techniques for processing large amounts of data from a more practical perspective, discussing \emph{summary data structures}. Special attention is paid to \emph{sketch}-based structures, which internally rely on \emph{streaming algorithms}.
\chapter{Summarization Strategies}
\label{chap:summaries}
\section{Introduction}
\label{sec:summaries_intro}
\paragraph{}
The great technological growth currently taking place at all levels is also causing an exponential increase in the amount of information generated. The falling cost of sensors that collect information from many production processes, as well as the meta-data obtained from users' activity on the internet and social networks, have made the amount of information generated per unit of time grow at a frantic pace.
\paragraph{}
One of the reasons behind this trend is the decrease in the cost of storing information, together with the increase in the computing capacity available to process it. However, given the exponential growth of data sets, it is necessary to investigate new techniques and strategies to obtain satisfactory answers from the large amount of available information within a reasonable time.
\paragraph{}
Traditionally, research in the field of \emph{databases} has focused on obtaining exact answers to queries, trying to do so as efficiently as possible and to reduce the space needed to store the information. \emph{Acharya et al.} propose in the article \emph{Join synopses for approximate query answering} \cite{acharya1999join} the concept of \emph{Approximate Query Processing}, which is presented in sub-section \ref{sec:aproximate_query_processing}.
\subsection{Approximate Query Processing}
\label{sec:aproximate_query_processing}
\paragraph{}
\emph{Approximate Query Processing} (\textbf{AQP}) is a query answering strategy based on statistical concepts and properties that greatly reduce the computational and space complexity required by a database to answer queries. In exchange, this reduction introduces a certain level of imprecision in the results, called error. The aim is for this error to be bounded within an interval centred on the true value with a maximum deviation $\epsilon$, and for the solution to belong to this interval with probability $\delta$. As in the previous chapter, special importance is given to minimizing the relative error, which makes the solutions obtained through \emph{approximate query processing} valid both for small and for large queries.
\paragraph{}
The rest of the chapter describes and analyses different strategies for implementing \emph{approximate query processing}, paying special attention to \emph{Sketches} because of their closeness to the \emph{Streaming Model} described in chapter \ref{chap:streaming}. Section \ref{sec:summaries_types} gives a description intended to clarify the differences between the various strategies for summarizing large data sets. The strategies described are \emph{probabilistic sampling}, maintenance of a \emph{histogram}, use of \emph{wavelets} and, finally, concepts related to \emph{Sketches}, which are described in depth. Sections \ref{sec:bloom_filter}, \ref{sec:count_min_sketch}, \ref{sec:count_sketch} and \ref{sec:hyper_log_log} discuss the \emph{Bloom-Filter}, \emph{Count-Min Sketch}, \emph{Count Sketch} and \emph{HyperLogLog}, respectively.
\section{Types of Summarization Strategies}
\label{sec:summaries_types}
\paragraph{}
There are different strategies for designing solutions based on \emph{approximate query processing}, each with advantages and drawbacks, so each of them is better suited to a particular task, although overlaps between them sometimes arise, as this section intends to show. These descriptions have been taken from the book \emph{Synopses for massive data} \cite{cormode2012synopses} written by \emph{Cormode et al.} Sections \ref{sec:sampling}, \ref{sec:histogram}, \ref{sec:wavelet} and \ref{sec:sketch} cover \emph{Sampling}, \emph{Histograms}, \emph{Wavelets} and \emph{Sketches}, respectively.
\subsection{Sampling}
\label{sec:sampling}
\paragraph{}
\emph{Sampling} (probabilistic sampling) is the most consolidated of the strategies presented here, thanks to its conceptual simplicity and its widespread use in statistics. One of the first articles discussing sampling applied to databases is \emph{Accurate estimation of the number of tuples satisfying a condition} \cite{piatetsky1984accurate} by \emph{Piatetsky-Shapiro} and \emph{Connell}. The intuition behind this solution is the selection of a sub-set of elements, called the \emph{sample}, extracted from the global set, called the \emph{population}. Once the \emph{sample} has been obtained from the global data set, its size being significantly smaller (which drastically reduces the computational cost), the intended computation is carried out on the sample, and an estimator of the value that would have been computed over the whole \emph{population} is then derived.
\paragraph{}
For summarization strategies to obtain valid or significant results with respect to the data set, the instances of the \emph{sample} must be chosen appropriately, so as to maximize the similarity of the result to the one that would have been obtained over the whole population. Different strategies exist for this task, from the simplest ones based on random selection to much more sophisticated ones based on maintaining stratified \emph{samples}. Let $R$ be the population and $|R|$ its size. We denote by $t_j$ the $j$-th value of the population and by $X_j$ the number of occurrences of that value in the \emph{sample}. Different sampling techniques are described below; a code sketch of the corresponding SUM estimators follows the list.
\begin{itemize}
\item \textbf{Random Selection With Replacement}: This is the simplest strategy for generating \emph{samples}. It consists of selecting a random integer $r$ in the range $[1, |R|]$ and adding the element located at position $r$ of the \emph{population} to the \emph{sample} (possibly more than once). This process is repeated $n$ times to generate a \emph{sample} of size $n$. As an example, the estimator for the \emph{SUM} operation is shown in equation \eqref{eq:sum_with_replacement}, together with the variance of this estimator in equation \eqref{eq:sum_with_replacement_deviation}.
\begin{align}
\label{eq:sum_with_replacement}
Y &= \frac{|R|}{n}\sum_jX_jt_j \\
\label{eq:sum_with_replacement_deviation}
\sigma^2(Y) &= \frac{|R|^2\sigma^2(R)}{n}
\end{align}
\item \textbf{Random Selection Without Replacement}: In this case each instance of the population can be selected at most once, so $\forall j, \ X_j \in \{0,1\}$. The selection is carried out as follows: a random integer $r$ in the range $[1, |R|]$ is generated and the element located at position $r$ of the \emph{population} is added to the \emph{sample} if it has not been added already; otherwise another value $r$ is generated. This sequence is repeated until a \emph{sample} of size $n$ has been generated. As with the previous strategy, the estimator for the \emph{SUM} operation is shown in equation \eqref{eq:sum_without_replacement}; note that the computation is the same as in the with-replacement case. However, the variance obtained with this strategy is smaller, as shown in equation \eqref{eq:sum_without_replacement_deviation}.
\begin{align}
\label{eq:sum_without_replacement}
Y &= \frac{|R|}{n}\sum_jX_jt_j \\
\label{eq:sum_without_replacement_deviation}
\sigma^2(Y) &= \frac{|R|(|R| - n)\sigma^2(R)}{n}
\end{align}
\item \textbf{Bernoulli and Poisson}: This sampling alternative follows a completely different strategy. Instead of selecting the next instance at random among all possible ones, $|R|$ independent random values $r_j$ are generated in the interval $[0,1]$, so that if $r_j$ is smaller than a value $p_j$ fixed a priori, the instance is added to the \emph{sample}. When $\forall i, j \ p_i = p_j$ holds we speak of \emph{Bernoulli} sampling; otherwise we speak of \emph{Poisson} sampling. The computation of the estimator for the \emph{SUM} is very different from the previous ones, as shown in equation \eqref{eq:sum_bernoulli_poisson}. The variance of this estimator is shown in equation \eqref{eq:sum_bernoulli_poisson_deviation}; in general it yields worse results (higher variance) than the other alternatives, but it has the advantage of allowing different weights for each instance of the population, so that an adequate choice of the values $p_j$ can significantly improve the precision of the results.
\begin{align}
\label{eq:sum_bernoulli_poisson}
Y &= \sum_{i \in muestra }\frac{t_i}{p_i} \\
\label{eq:sum_bernoulli_poisson_deviation}
\sigma^2(Y) &= \sum_i(\frac{1}{p_i}-1)t_i^2
\end{align}
\item \textbf{Stratified Sampling}: Stratified sampling tries to minimize the differences between the distribution of the \emph{population} and that of the \emph{sample} to be generated. Different alternatives exist, among them a selection procedure that updates the weights $p_j$ after each iteration, which drastically reduces the variance of the \emph{sample} but has a high computational cost. There are therefore other, more intuitive strategies based on partitioning the \emph{population} into disjoint sub-sets of minimum variance, called \emph{strata}. A \emph{sample} is then selected for each \emph{stratum} using any of the methods described above, which greatly reduces the overall standard deviation of the estimator.
\end{itemize}
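\paragraph{}
The following Python sketch illustrates the \emph{SUM} estimators described above for with-replacement, without-replacement and Bernoulli sampling (the population, sample sizes and names are illustrative choices of ours).
\begin{verbatim}
import random

population = [random.gauss(100, 20) for _ in range(100_000)]
R, n = len(population), 1_000
true_sum = sum(population)

# With replacement: Y = (|R|/n) * sum of the n sampled values.
sample_wr = [population[random.randrange(R)] for _ in range(n)]
est_wr = R / n * sum(sample_wr)

# Without replacement: same estimator, each position used at most once.
sample_wor = random.sample(population, n)
est_wor = R / n * sum(sample_wor)

# Bernoulli sampling: keep item i with probability p_i, weight it by 1/p_i.
p = n / R
est_bern = sum(t / p for t in population if random.random() < p)

print(round(true_sum), round(est_wr), round(est_wor), round(est_bern))
\end{verbatim}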
\paragraph{}
The strategy of summarizing information through \emph{sampling} has as advantages the independence of its complexity from the dimensionality of the data (something that, as will be shown in later sections, does not hold for the other alternatives) as well as its conceptual simplicity. Error bounds also exist for the queries, and no restrictions are imposed on the type of query (since queries are run on a sub-set with the same structure as the global one). Sampling is appropriate for learning general information about the data set. Moreover, it can be modified and adapted in real time, i.e.\ instances can be added to or removed from the sample as they are added to or removed from the global data set.
\paragraph{}
However, in environments where the rate of additions/deletions is very high, the computational cost of maintaining the sample may not scale well. \emph{Sampling} is a good alternative for homogeneous data sets, in which the presence of outliers is irrelevant. It also performs poorly on queries related to counting distinct elements. The following sections describe alternatives that solve these difficulties and limitations more satisfactorily.
\subsection{Histogram}
\label{sec:histogram}
\paragraph{}
\emph{Histograms} are data structures used to summarize large data sets by maintaining frequency tables, so they follow a completely different approach from the \emph{sampling} strategies of the previous section. The concept is similar to the statistical view of histograms: the domain of values that the instances of the data set can take is divided into disjoint intervals or buckets, and a count of the number of instances belonging to each partition is maintained.
\paragraph{}
The rest of the section briefly describes different strategies for estimating the value of the partitions, as well as the different strategies for partitioning the data set. The notation used in this section is the following: let $D$ be the data set and $i \in [1,M]$ each of the categories present in it. We denote by $g(i)$ the number of occurrences of category $i$. To refer to each of the partitions we use the notation $S_j$ with $j \in [1, B]$. Note that $M$ is the number of distinct categories while $B$ is the number of partitions used to \say{compress} the data. The gain in space efficiency comes from choosing $B \ll M$.
\paragraph{}
When we speak of \emph{estimation schemes} we refer to the way in which the content of each partition $S_j$ of the histogram is stored or treated. This is an important factor when characterizing a histogram because it is closely tied to its precision.
\begin{itemize}
\item \textbf{Uniform Scheme}: Schemes that assume a uniform distribution of the instances within the bucket are subdivided into two categories: \begin{enumerate*} [label=\itshape\alph*\upshape)]
\item the \emph{continuous-value assumption}, which assumes that all categories $i$ contained in partition $S_j$ have the same value of the function $g(i)$, and
\item the \emph{uniform-spread assumption}, which assumes that the occurrences of partition $S_j$ are uniformly distributed, as in the previous case, but over the elements of a sub-set $P_j$ generated by iterating with a given step $k$ over the categories $i$ contained in $S_j$.
\end{enumerate*} The second approach gives better results for quantile queries that span more than one partition $S_j$.
\item \textbf{Spline-Based Scheme}: In the spline-based strategy it is assumed that the values within each partition $S_j$ follow a linear function of the form $y_j = a_jx_j + b_j$, so that the whole data set $D$ can be seen as a continuous piecewise linear function, i.e.\ the endpoints of the function in each partition match those of the previous and the next one. Note that the strategy has been described here using a linear function, but it can be extended to non-linear functions.
\item \textbf{Tree-Based Scheme}: It consists of storing the frequencies of each partition $S_j$ in the form of a binary tree, which makes it possible to select the level of the tree that minimizes the number of operations needed to estimate the count of occurrences, depending on the width of the value range of the query. The reason for choosing a binary tree is that the space needed to store these values can be halved by keeping only one of the branches of each sub-tree: the value of the other branch can be recomputed by subtracting the stored branch from the value stored in the parent node.
\item \textbf{Heterogeneous Scheme}: The heterogeneous scheme is based on the intuition that the frequency distribution of each partition $S_j$ is not uniform and has its own peculiarities, so a different approach is followed in each of them, trying to minimize the error produced. Different heuristics exist for this purpose, based on distances or information theory, among others.
\end{itemize}
\paragraph{}
Having described different strategies for estimating the frequency value of a given partition $S_j$, the next step in describing a \emph{histogram} is to describe the different ways of generating the partitions or buckets. In order to adapt better to the distribution of the data, a \emph{sample} can be drawn from which the partitions are generated. The most common techniques for this task are described below:
\begin{itemize}
\item \textbf{Heuristic Partitioning}: Heuristic partitioning strategies rely on different assumptions that in practice have shown acceptable precision in the results; however, they provide no guarantee of optimality. Their use is widespread because of their reduced computational cost. Within this category the most popular heuristics are the following:
\begin{itemize}
\item \textbf{Equi-Width}: It divides the domain of categories $[1,M]$ into equally spaced partitions. This strategy only requires knowing \emph{a-priori} the range of possible values. It is the solution with the lowest computational cost; even so, in practice its results are similar to those of more sophisticated strategies when the frequency distribution is uniform.
\item \textbf{Equi-Depth}: This partitioning strategy requires knowing the frequency distribution \emph{a-priori} (or an approximation of it, obtainable through some strategy such as sampling). It divides the domain of values so that all partitions have the same total frequency, which leads to partitions of different widths. A code sketch of the equi-width and equi-depth heuristics follows this list.
\item \textbf{Singleton-Bucket}: To try to improve precision, this partitioning strategy uses two special partitions containing the categories with the highest and lowest frequencies respectively, and then covers the remaining categories with another strategy (generally \emph{equi-depth}). The idea is that the special partitions will contain the outliers, so the rest of the data will be more uniform.
\item \textbf{Maxdiff}: In this case the partitioning method uses the points of greatest frequency variation, measured by $|g(i+1) - g(i)|$, to divide the set of categories into partitions, so that the frequencies contained in each partition are as homogeneous as possible.
\end{itemize}
\item \textbf{Partitioning with Optimality Guarantees}: This category comprises partition-generation strategies that offer optimality guarantees on the precision of query results. \emph{Dynamic Programming} (DP) techniques are used, so that the selection of the partitions is posed as an \emph{optimization} problem. However, these strategies entail a high computational cost that is often not affordable given the large size of the data set to be summarized. As a solution, different strategies have been proposed that solve the optimization problem over a \emph{sample} of the global data set, which voids the optimality guarantees but can be a good approximation if the selected sample is highly representative of the population.
\item \textbf{Hierarchical Partitioning}: Hierarchical partitioning strategies organize the partitions following the idea of a binary tree. The partitions are therefore not disjoint, but contain one another. This follows the same idea described for the \emph{tree-based estimation schemes}. Relying on this partitioning strategy, range-frequency queries have a lower computational cost on average (even when the range is very wide). Notable histograms in this category are the \emph{nLT} (n-level Tree) histograms and the \emph{Lattice Histograms}. The latter try to combine the flexibility and precision of histograms with the hierarchical summarization strategies on which \emph{Wavelets} rely, as described in the next section.
\end{itemize}
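\paragraph{}
The following Python sketch illustrates the \emph{equi-width} and \emph{equi-depth} heuristics described above (it only computes bucket counts and boundaries, with no estimation scheme; names are ours).
\begin{verbatim}
import random

# Equi-width: buckets of equal width over the value range; returns counts.
def equi_width(values, b: int):
    lo, hi = min(values), max(values)
    width = (hi - lo) / b or 1.0
    counts = [0] * b
    for v in values:
        counts[min(int((v - lo) / width), b - 1)] += 1
    return counts

# Equi-depth: boundaries chosen so every bucket holds (roughly) the
# same number of values.
def equi_depth(values, b: int):
    ordered = sorted(values)
    size = len(ordered) // b
    return [ordered[i * size] for i in range(1, b)]

data = [random.expovariate(1.0) for _ in range(10_000)]
print(equi_width(data, 5))
print(equi_depth(data, 5))
\end{verbatim}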
\paragraph{}
The ideas described in this section about \emph{histograms} can be extended as the dimensionality of the data grows. For the estimation schemes this extension is direct. For the partitioning schemes, however, different problems arise due to the exponential growth in both space and time as the number of dimensions of the data increases, which makes them impractical for data sets of high dimensionality.
\paragraph{}
\emph{Histograms} are a simple strategy, both to build and to query, which offers good results in a large number of cases. These structures have been extensively tested for approximating queries related to range sums or point frequencies. As stated above, their behaviour in the one-dimensional case has been widely studied; however, due to the exponential growth in complexity as the dimensionality of the data set increases, these strategies are often discarded. \emph{Histograms} require a set of parameters fixed \emph{a-priori}, which strongly affect the precision of the results (although when they are chosen appropriately this solution gets very close to the optimum); therefore, in cases where estimating these \emph{a-priori} values becomes complicated, other techniques offer better results.
\subsection{Wavelet}
\label{sec:wavelet}
\paragraph{}
The summarization structures called \emph{Wavelets}, unlike the previous ones, have begun to be used in the field of \emph{approximate query processing} relatively recently, so their use is not fully established in commercial solutions but is still at the research stage. \emph{Wavelets} rely on the idea of representing the frequency table of the data set as a discrete wave function. To that end, a set of values (depending on the type of \emph{Wavelet}) that allow the frequency table to be reconstructed is stored, as will be described below for the \emph{Haar transform}; the gain in space efficiency of this summarization structure comes from keeping only an approximation of the values needed to reconstruct the data set.
\paragraph{}
The \emph{Haar transform} is described next; it introduces the ideas on which this kind of summarization structure relies. In recent years more complex strategies have been developed, such as the \emph{Daubechies Wavelet} \cite{akansu2001multiresolution} of \emph{Akansu et al.} or the \emph{dual-tree complex wavelet transform} \cite{selesnick2005dual} of \emph{Selesnick et al.}
\subsubsection{Haar Wavelet Transform}
\paragraph{}
The \emph{Haar Wavelet Transform} (\textbf{HWT}) is a hierarchical construction that collapses the frequencies of the different categories pairwise and recursively until a single element is reached. The idea is therefore similar to building a binary tree from the leaves up to the root, and resembles the strategy followed by the \emph{hierarchical histograms} of the previous section. Space is optimized in a way equivalent to the hierarchical histograms, by storing only the deviation of one of the children with respect to the parent, which allows the whole tree to be reconstructed with a simple operation.
\paragraph{}
To make the construction of the \emph{Haar transform} easier to understand, we describe an example taken from the book \emph{Synopses for massive data} \cite{cormode2012synopses} by \emph{Cormode et al.} Suppose the frequency values are $A = [2,2,0,2,3,5,4,4]$. To build the transform we recursively compute the average of each pair of contiguous elements, obtaining for the different levels the results of table \ref{table:wavelet_example}. The detail coefficients are also shown; each one is half the difference between the first and the second element of the corresponding pair of the previous level.
\begin{table}[H]
\centering
\begin{tabular}{| c | c | c |}
\hline
Level & Averages & Detail coefficients \\ \hline \hline
$3$ & $[2,2,0,2,3,5,4,4]$ & $-$ \\ \hline
$2$ & $[2,1,4,4]$ & $[0,-1,-1,0]$ \\ \hline
$1$ & $[3/2,4]$ & $[1/2, 0]$ \\ \hline
$0$ & $[11/4]$ & $[-5/4]$ \\
\hline
\end{tabular}
\caption{Example of the construction of the \emph{Haar Wavelet Transform}}
\label{table:wavelet_example}
\end{table}
\paragraph{}
Note that from the level-0 average, which we denote $c_0 = 11/4$, together with the set of detail coefficients, denoted $c_1 = -5/4, c_2 = 1/2, ..., c_7 = 0$, it is possible to reconstruct the frequency table $A$.
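\paragraph{}
As an illustration, the following Python sketch (illustrative code, not taken from the cited works; the coefficient ordering, overall average first and then detail coefficients from coarse to fine, is an assumption made for this example) reproduces the construction of the transform for the table above, as well as the inverse reconstruction:
\begin{verbatim}
def haar_transform(values):
    """Return (overall average, detail coefficients) of the HWT."""
    details = []
    level = list(values)
    while len(level) > 1:
        averages, level_details = [], []
        for i in range(0, len(level), 2):
            averages.append((level[i] + level[i + 1]) / 2)
            level_details.append((level[i] - level[i + 1]) / 2)
        details = level_details + details   # coarser coefficients go first
        level = averages
    return level[0], details

def haar_reconstruct(average, details):
    """Rebuild the original frequency table from c_0 and c_1..c_{N-1}."""
    level, k = [average], 0
    while k < len(details):
        next_level = []
        for value in level:
            next_level.append(value + details[k])
            next_level.append(value - details[k])
            k += 1
        level = next_level
    return level

avg, det = haar_transform([2, 2, 0, 2, 3, 5, 4, 4])
print(avg, det)   # 2.75 [-1.25, 0.5, 0.0, 0.0, -1.0, -1.0, 0.0]
print(haar_reconstruct(avg, det))   # [2.0, 2.0, 0.0, 2.0, 3.0, 5.0, 4.0, 4.0]
\end{verbatim}
Zeroing the smallest detail coefficients before calling \texttt{haar\_reconstruct} yields precisely the approximate, space-reduced representation discussed below.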
\paragraph{}
Once the construction strategy behind the \emph{Haar transform} is understood, one can see that by itself it offers no storage advantage over the frequency table from which it was built. However, it has the following property, on which this summarization strategy relies: \emph{for contiguous categories whose frequencies vary very little, the detail coefficients tend to be close to $0$}.
\paragraph{}
For the reason described in the previous paragraph, those detail coefficients can be dropped, so that the space used to store the \emph{Wavelet} becomes sub-linear ($o(N)$) instead of linear ($O(N)$) with respect to the size of the data set. To choose which detail coefficients to keep, strategies that try to minimize the error are used. \emph{Wavelets} have commonly been built by minimizing the \emph{mean squared error} or \emph{$L_2$ norm}, described in Equation \eqref{eq:l2_error}. However, several studies, such as the one carried out in the article \emph{Probabilistic wavelet synopses} \cite{garofalakis2004probabilistic} by \emph{Garofalakis et al.}, show that this error measure gives poor results when applied to data summarization with \emph{Wavelets}.
\paragraph{}
Therefore, other error measures are proposed, such as minimizing the maximum absolute or relative error, illustrated in Equations \eqref{eq:abs_error} and \eqref{eq:rel_error}. Another alternative is minimizing the \emph{$L_p$ norm} shown in Equation \eqref{eq:lp_error}, which generalizes the \emph{mean squared error} (the case $p=2$) to any value $p \geq 0$. Finally, Equation \eqref{eq:lp_w_error} shows the weighted \emph{$L_p$ norm}, which makes it possible to set the importance of each category in the \emph{Wavelet} representation and thus increase the precision of the most significant categories.
\begin{align}
\label{eq:l2_error}
||A - \widetilde{A} ||_2 &= \sqrt{\sum_{i}(A[i]-\widetilde{A}[i])^2} \\
\label{eq:abs_error}
max_i\{absErr_i\} &= max_i\{|A[i]-\widetilde{A}[i]|\} \\
\label{eq:rel_error}
max_i\{relErr_i\} &= max_i\bigg\{\frac{|A[i]-\widetilde{A}[i]|}{|A[i]|} \bigg\} \\
\label{eq:lp_error}
||A - \widetilde{A} ||_{p} &= (\sum_{i}(A[i]-\widetilde{A}[i])^p)^{\frac{1}{p}} \\
\label{eq:lp_w_error}
||A - \widetilde{A} ||_{p,w} &= (\sum_{i}w_i \cdot(A[i]-\widetilde{A}[i])^p)^{\frac{1}{p}}
\end{align}
\paragraph{}
As with \emph{Histograms}, \emph{Wavelets} suffer efficiency problems when used in settings where the data set has a large number of attributes. They are therefore said to suffer from the \emph{Curse of Dimensionality}, which causes an exponential growth in both space and time cost.
\paragraph{}
As can be seen, this strategy is very similar to the one based on \emph{Histograms}, since both rely on storing values that describe or summarize the frequency table of the data. However, while \emph{Histograms} stand out when one wants to know the overall structure of the data, \emph{Wavelets} give very good results when one wants to find atypical or extreme values (the so-called \emph{Heavy Hitters}).
\paragraph{}
Because of their construction strategy, \emph{Wavelets} can summarize a larger amount of information in a small space. Moreover, the \emph{Haar transform}, being linear, can be adapted easily to the \emph{streaming model}. As stated in the previous paragraph, the drawbacks of this alternative derive largely from the problems related to the growth of the dimensionality of the data.
\subsection{Sketch}
\label{sec:sketch}
\paragraph{}
The summarization structures known as \emph{Sketches} are the youngest among those described, and are therefore still at a very early stage, so their use in production systems remains anecdotal. Nevertheless, they have several characteristics that are expected to make them very important strategies in the field of \emph{approximate query processing} in the future. \emph{Sketches} fit perfectly into the streaming model discussed in the previous chapter in Section \ref{sec:streaming_model}. This model matches many rapidly changing processes that require analytics nowadays. One example is a financial transaction system, where transactions occur at a high rate and real-time metrics would improve decision making. They also fit naturally the transaction model of a database, which is constantly modified through insertions, updates and deletions.
\paragraph{}
\emph{Sketches} are data structures that work by maintaining estimators over each of the instances processed so far, that is, they perform an internal modification for every input. This contrasts with the strategies described in previous sections, which process only part of the data set or require it to be static. These updates can be framed under the different variants of the streaming model (time series, cash register model or turnstile model), sometimes requiring small adaptations of some \emph{Sketches}, while for others an implementation of the turnstile model (which allows deletions) scales poorly.
\paragraph{}
\emph{Linear Sketches} are a subset of \emph{Sketches} that can be seen as a linear transformation of the summarization structure, which can be interpreted as a vector of length $n$ that we denote $S$. To generate this vector we rely on a matrix of size $n*m$, denoted $A$, which represents the linear transformation of the data set, itself encoded as a vector $D$ of length $m$. This linear transformation is shown in Equation \eqref{eq:linear_sketch} and graphically in Figure \ref{fig:linear_sketch}.
\begin{equation}
\label{eq:linear_sketch}
A * D = S
\end{equation}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{linear-sketch-abstraction}
\caption{Abstraction of the \emph{Linear Sketch}. \emph{(taken from \cite{cormode2012synopses})}}
\label{fig:linear_sketch}
\end{figure}
\paragraph{}
Since the transformation shown is linear, it has the property that if $S_1$ and $S_2$ are two linear \emph{Sketches}, then $S_1 + S_2 = S_{sum}$, which can be understood as combining the estimators of the two \emph{Sketches}. Each update can therefore be processed as the transformation of the new instance encoded in the vector $D$, whose resulting \emph{Sketch} is then combined with the one generated so far.
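\paragraph{}
A minimal numerical illustration of this linearity is given below (the matrix $A$ is a toy random transformation chosen only for the example):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 1000                  # sketch length n much smaller than domain m
A = rng.choice([-1.0, 1.0], size=(n, m))    # toy linear transformation
D1 = rng.integers(0, 5, size=m)             # frequency vector of stream 1
D2 = rng.integers(0, 5, size=m)             # frequency vector of stream 2

S1, S2 = A @ D1, A @ D2
S_sum = A @ (D1 + D2)
assert np.allclose(S1 + S2, S_sum)   # sketches can be merged by addition
\end{verbatim}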
\paragraph{}
For space-efficiency reasons, the transformation matrix $A$ is not used explicitly; instead, in many cases different \emph{hash functions} are used (described in Section \ref{sec:hash_functions}) that perform the same transformation as the matrix $A$, but whose computational cost is significantly lower than the cost of storing and multiplying a matrix of that size.
\paragraph{}
As can be seen in Figure \ref{fig:linear_sketch}, the summarization advantage of this strategy comes from the size reduction obtained in the linear transformation. Each \emph{Sketch} structure then has an associated set of \emph{queries} that extract the valuable information contained in it. Since the way these queries are carried out varies between alternatives, these techniques are discussed in depth in their corresponding sections.
\paragraph{}
Following the restrictions described above, there is a trivial implementation of \emph{Sketch} structures consisting of maintaining a frequency table over every possible input instance, from which queries such as frequency sums, counting distinct elements, finding the maximum or minimum, or the mean or variance of the data set can be answered. However, this solution provides no space advantage, so the task is to find techniques that \say{collapse} this frequency table in order to minimize that cost while accepting a certain degree of imprecision.
\paragraph{}
There are two main categories according to the nature of the queries the \emph{Sketch} can answer. They are described below:
\begin{itemize}
\item \textbf{Frequency estimation}: \emph{Sketches} based on frequency estimation are those that maintain estimators approximating the frequency of each element $f(i)$. The \emph{Count-Min Sketch} and the \emph{Count Sketch} fall into this category.
\item \textbf{Distinct elements}: \emph{Sketches} based on counting distinct elements (which can also answer queries about the presence of a given element) maintain indicators that approximate these values. The most prominent in this category are the \emph{Bloom Filter} and the \emph{HyperLogLog}.
\end{itemize}
\paragraph{}
Although all \emph{Sketches} follow a similar basic structure, there are several distinguishing parameters that characterize one alternative against another, notably the supported queries, their size (some need less space to reach a precision similar to others), the processing time of each new update (which must be very small because of the streaming model), the query-answering time and the initialization strategy.
\paragraph{}
Extending \emph{Sketches} to multidimensional data sets can be done with hash functions that map a multidimensional input onto a one-dimensional space, after which they are used as in the one-dimensional case. This solution poses no problem for point frequency estimation or for finding maxima or minima; however, it is not feasible for range-sum queries, which instead require multidimensional \emph{sketches} or the assumption of independence between dimensions, keeping one \emph{sketch} structure per dimension.
\paragraph{}
\emph{Sketches} have very simple implementations, which makes them efficient and well suited to models requiring constant updates such as the streaming model. Their difficulty lies in the underlying mathematical concepts, which greatly complicate proving their convergence towards the true value they estimate. To reduce their space they generally rely on hash functions with various properties that can also be computed efficiently. Their main limitation concerns the range of queries that each type of \emph{Sketch} can answer efficiently, being especially limited for multiple nested queries.
\paragraph{}
\emph{Sketches} are an interesting research area with a very promising outlook, which places them as the best medium-to-long-term solution for \emph{approximate query processing} because of their low computational cost, both in space and in processing time. Prestigious universities such as the \emph{University of California, Berkeley} and \emph{MIT} are currently working on the design of databases that rely heavily on these structures, such as the database called \emph{BlinkDB} described in the article \emph{BlinkDB: queries with bounded errors and bounded response times on very large data} \cite{agarwal2013blinkdb}.
\paragraph{}
The following sections describe in depth different \emph{Sketch}-based data structures, emphasizing a conceptual view of each one without neglecting the practical side. In addition, proofs are included about their precision with respect to the space used for their storage relative to the size of the whole data set.
\section{Bloom Filter}
\label{sec:bloom_filter}
\paragraph{}
The \emph{Bloom-Filter} can be regarded as the first data structure based on the \emph{Sketch} idea, since it processes every new input in the same way and has a sub-linear ($o(N)$) space cost with respect to the cardinality of the possible values that can be processed. This data structure was originally designed by \emph{Burton Howard Bloom}, who describes it in the article \emph{Space/time trade-offs in hash coding with allowable errors} \cite{bloom1970space}, published in 1970.
\paragraph{}
The functionality provided by this data structure is answering queries about the presence of a given element. The implementation described in the original article allows neither increments nor deletions, only the existence or non-existence of an element, so this data structure falls within the time-series setting of the \emph{streaming model}. The \emph{Bloom-Filter} guarantees the absence of false negatives (elements reported as not inserted when they actually were), but in exchange it admits a certain rate of false positives (elements reported as inserted when they actually were not).
\paragraph{}
Owing to the long lifetime of this strategy, its use is very well established, and it is employed in several systems for tasks where one wants to save time but some mistakes are admissible. It is commonly used in commercial databases to limit the number of disk accesses during element lookups. This is done by maintaining a \emph{Bloom-Filter} that is queried about the presence of the element to be searched: if the answer is negative, the access to the storage device is skipped, while if it is positive the disk access is performed. Note that in some cases that access will be unnecessary, but the consequences are only negative in terms of efficiency, not of results.
\paragraph{}
Having described the functionality provided by the \emph{Bloom-Filter} and a practical use case, we now describe its composition. The data structure maintains a \emph{bit map} $S$ of length $n$ ($|S| = n$) such that $n \ll m$, where $m$ is the cardinality of the distinct elements that can appear in the input. In addition, $k$ hash functions $h_1, h_2,..., h_j,..., h_k$ are used, which distribute the $i$-th element independently over $S$. The bit map $S$ is initialized with all its positions set to $0$.
\paragraph{}
The \emph{Bloom-Filter} works as follows. Every time a new element appears in the input (denoted $i \in [1, m]$), the $k$ values of the hash functions indicated above are computed and the value $1$ is assigned to those positions of the bit map $S$; that is, the operation described in Equation \eqref{eq:bloom_filter_update} is performed. Querying the presence of the $i$-th element therefore consists of checking those positions: if $\forall j \in [1,k], \ S[h_j(i)]$ takes the value $1$, the element is reported as having appeared in the input, whereas if any position differs from $1$ (takes the value $0$) the element is reported as not having been inserted.
\begin{align}
\label{eq:bloom_filter_update}
S[h_j(i)] = 1 && \forall j \in [1,k]
\end{align}
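\paragraph{}
The following Python sketch illustrates the update and query operations just described (the construction of the $k$ hash functions by salting a cryptographic hash is an assumption made for the example, not the scheme of the original article):
\begin{verbatim}
import hashlib

class BloomFilter:
    def __init__(self, n_bits, k):
        self.n, self.k = n_bits, k
        self.bits = [0] * n_bits          # bit map S initialized to 0

    def _positions(self, item):
        for j in range(self.k):           # k salted hash functions h_j
            digest = hashlib.sha256(f"{j}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.n

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1            # S[h_j(i)] = 1

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter(n_bits=1024, k=4)
for item in ("x", "y", "z"):
    bf.add(item)
print("x" in bf)   # True (no false negatives)
print("w" in bf)   # False with high probability (false positives possible)
\end{verbatim}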
\paragraph{}
Note that the claim about the presence of a given element is much weaker than the claim of non-existence. The reason is that collisions between the different hash functions can make that answer wrong. However, as indicated above, when the \emph{Bloom-Filter} reports that an element is not present, that answer is always valid. Below we analyse the error rate for false positives, following the article \emph{Network applications of bloom filters: A survey} \cite{broder2004network} by \emph{Broder} and \emph{Mitzenmacher}. Note that this analysis depends on 3 parameters: the size $n$ of the bit map, the cardinality $m$ of the set of possible input elements and the number $k$ of hash functions used.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{bloom-filter}
\caption{Operation of the \emph{Bloom Filter}, which inserts the values $\{ x, y, z\}$ and checks the existence of $w$. Image taken from \cite{wiki:Bloom_filter}.}
\label{fig:bloom_filter}
\end{figure}
\paragraph{}
For the analysis, the hash functions $h_j$ are assumed to be fully random, that is, they distribute values uniformly over the interval $[1,n]$ and are fully independent of each other. The probability $p'$ that any given position of $S$ is still equal to zero after the appearance of $l$ distinct elements is shown in Equation \eqref{eq:bloom_filter_p_estimator}, where the base on the right-hand side is the probability that a single hash function leaves a given position at $0$.
\begin{equation}
\label{eq:bloom_filter_p_estimator}
p' = \bigg(1-\frac{1}{n}\bigg)^{kl}
\end{equation}
\paragraph{}
An element that has not appeared in the input has all its corresponding positions of $S$ set to $1$ with probability $(1-p)^k$ and, since $p'$ is a good estimator of $p$, we can approximate the false-positive rate. Expanding this expression leads to Equation \eqref{eq:bloom_filters_false_positives}, which approximates the false-positive rate appropriately.
\begin{equation}
\label{eq:bloom_filters_false_positives}
f = (1-e^{-kl/n})^k
\end{equation}
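\paragraph{}
The estimate can be checked numerically; for instance, with the (arbitrarily chosen) values $n=1024$, $k=4$ and $l=100$ the expected false-positive rate is about $1\%$:
\begin{verbatim}
from math import exp

def false_positive_rate(n, k, l):
    return (1 - exp(-k * l / n)) ** k

print(false_positive_rate(n=1024, k=4, l=100))   # ~0.011
\end{verbatim}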
\paragraph{}
The \emph{Bloom-Filter} is a good fit when one needs to reduce the overhead of checking accesses to expensive media, but it can also be used in anti-spam filters or other settings where a certain error rate is admissible. Note that this type of \emph{Sketch} cannot be grouped into the subset of \emph{Linear Sketches}, since the operation of assigning the value $1$ is not linear.
\section{Count-Min Sketch}
\label{sec:count_min_sketch}
\paragraph{}
The \emph{Count-Min Sketch} is another data structure whose space cost is sub-linear ($o(N)$) with respect to the cardinality of possible input elements. The \emph{Count-Min Sketch} (hereafter \emph{CM Sketch}) was first described by \emph{Cormode} and \emph{Muthukrishnan} in the article \emph{An improved data stream summary: the count-min sketch and its applications} \cite{cormode2005improved}, published in 2005. Note, therefore, that this data structure is very recent.
\paragraph{}
The purpose of the \emph{CM Sketch} is frequency estimation over the range of possible values appearing in the input. Its initial formulation falls within the cash register model, which only allows additions, although improvements have later been proposed to extend its use to settings that also allow deletions (turnstile model). As will be seen below, the name of this \emph{Sketch} refers to the operations it relies on, namely counting (addition) and taking the minimum.
\paragraph{}
The frequency estimates of the \emph{CM Sketch}, as one would expect from its sub-linear character, are not guaranteed to be exact; as with the \emph{Bloom-Filter}, it returns an approximation. In this case the approximation guarantees that the estimated result is always greater than or equal to the true one, that is, it over-estimates the real value. The reason the \emph{CM Sketch} uses the minimum operation is precisely to reduce this effect as much as possible.
\paragraph{}
Because it is so recent, its use is not yet established in production systems, but there is a wide variety of situations framed in the streaming model in which inexact frequencies have no serious drawbacks. One example could be counting visits to a web site made up of several pages: the estimated number of visits to each page could be maintained with this data structure, since the task can tolerate a certain error rate and requires constant updates.
\paragraph{}
Having described the functionality provided by the \emph{CM Sketch}, we now describe its internal structure. It consists of a two-dimensional matrix $S$ with $w$ columns and $d$ rows, so that $w * d$ counters are maintained. Each of the $d$ rows has an associated hash function $h_j$ that distributes elements from the domain $[1, m]$ (with $m$ the cardinality of distinct elements) over that row of the matrix, that is, over $[1,w]$. For the precision guarantees of the \emph{CM Sketch} to hold, each hash function $h_j$ must be independent of the rest and follow a uniform distribution. Regarding initialization, all positions take the value zero, as shown in Equation \eqref{eq:count_min_sketch_init}.
\begin{align}
\label{eq:count_min_sketch_init}
S[j,k] = 0 && \forall j \in [1,d], \ \forall k \in [1,w]
\end{align}
\paragraph{}
Regarding its operation, as indicated above, the \emph{CM Sketch} fits the cash register model: each update is a tuple formed by the identifier of the element it refers to and a positive value indicating the increment $c$ of the update. Its behaviour is similar to the \emph{Bloom-Filter} in that each new input triggers $d$ updates (one per row). The counting operation refers to this update, shown mathematically in Equation \eqref{eq:count_min_sketch_update} and graphically in Figure \ref{fig:count_min_sketch}. Note that the update cost is $O(d)$ and that this operation is linear, so the \emph{CM Sketch} belongs to the subset of \emph{Linear Sketches}.
\begin{align}
\label{eq:count_min_sketch_update}
S[j, h_j(i)] = S[j, h_j(i)] + c && \forall j \in [1,d]
\end{align}
\paragraph{}
Querying the frequency of the $i$-th element is carried out by collecting the value stored in each row and returning the smallest one, which reduces the effect of the collisions produced when the hash functions distribute elements over a smaller space, since it is assumed that $w * d \ll m$ for the space advantage to hold. This operation is shown in Equation \eqref{eq:count_min_sketch_estimate}.
\begin{equation}
\label{eq:count_min_sketch_estimate}
\widetilde{f}(i) = min_{j \in [1,d]}\{S[j, h_j(i)]\}
\end{equation}
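\paragraph{}
A minimal Python sketch of these two operations is shown below (again, the salted-hash construction of the $h_j$ functions is an assumption made for the example):
\begin{verbatim}
import hashlib

class CountMinSketch:
    def __init__(self, w, d):
        self.w, self.d = w, d
        self.S = [[0] * w for _ in range(d)]   # d x w counters set to 0

    def _h(self, j, item):
        digest = hashlib.sha256(f"{j}:{item}".encode()).hexdigest()
        return int(digest, 16) % self.w

    def update(self, item, c=1):               # "count" step
        for j in range(self.d):
            self.S[j][self._h(j, item)] += c

    def estimate(self, item):                  # "min" step
        return min(self.S[j][self._h(j, item)] for j in range(self.d))

cms = CountMinSketch(w=200, d=5)
for token in ["a", "b", "a", "c", "a"]:
    cms.update(token)
print(cms.estimate("a"))   # >= 3; equal to 3 unless collisions occur
\end{verbatim}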
\paragraph{}
As for the space needed to keep the \emph{CM Sketch} in memory, it depends on the degree of precision one wants to guarantee. However, the analysis is not carried out with respect to the cardinality of possible input elements, but depends on the sum of their counts. This value is denoted $F_1$ (the first frequency moment) and is defined as indicated in Equation \eqref{eq:count_min_sketch_sum}.
\begin{equation}
\label{eq:count_min_sketch_sum}
F_1 = \sum_{i=1}^m f(i)
\end{equation}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{count-min-sketch}
\caption{Operation of the \emph{Count-Min Sketch} during the insertion of a new element. Image taken from \cite{cormode2005improved}.}
\label{fig:count_min_sketch}
\end{figure}
\paragraph{}
The precision of this probabilistic data structure is therefore stated by saying that $\widetilde{f}(i)$ has a maximum error of $\epsilon F_1$ (with $F_1$ the first frequency moment defined above), which holds with probability $1-\delta$. These parameters are fixed when the \emph{CM Sketch} is initialized by setting the number of rows and columns so that $d = log(1/\delta)$ and $w=2/\epsilon$. The proof behind this choice of values can be found in the original article \cite{cormode2005improved}.
\paragraph{}
Regarding the frequency estimate obtained by the \emph{CM Sketch}, strictly speaking it is biased with respect to the true value. The reason is that it always over-estimates: even though taking the minimum of the per-row estimates reduces this effect, the result is always greater than or equal to (never smaller than) the true value. Several heuristics have been proposed to counteract this problem in the cash register model (where only insertions are allowed).
\paragraph{}
This problem does not arise, however, in the general or turnstile model, where deletions are allowed. In that case it is more appropriate to use the median of the values stored in each row as the estimate, since if the minimum is chosen for an element that has only received negative updates, the estimate will probably be very different from the true value. The median is chosen instead of the mean because of its robustness against atypical values or \emph{outliers}. In this case the proof can be carried out relying on the Chernoff inequality described in Section \ref{sec:chernoff_inequality}; it can be found in the article \emph{Selection and sorting with limited storage} \cite{munro1980selection} by \emph{Munro} and \emph{Paterson}.
\paragraph{}
The \emph{Count-Min Sketch} is an appropriate strategy for counting occurrences in sub-linear space. Moreover, its implementation is simple compared with other more sophisticated strategies. Later sections discuss the \emph{Count Sketch}, which provides equivalent precision in less space at the price of greater conceptual complexity.
\section{Count Sketch}
\label{sec:count_sketch}
\paragraph{}
The \emph{Count Sketch} is a structure very similar to the \emph{Count-Min Sketch}, since it follows the same layout and very similar ideas for updating and querying values, but it provides space advantages over the former, reducing by a factor of two the space needed to maintain the data structure. The \emph{Count Sketch} was first defined in the work \emph{Finding frequent items in data streams} \cite{charikar2002finding} by \emph{Charikar et al.} Note that this work was carried out in 2002, i.e., before the publication of the \emph{Count-Min Sketch} (2005). The reason for following this order rather than the chronological one is that the \emph{Count Sketch} can be seen as an improvement over the structure described in the previous section, which makes both easier to understand.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{count-sketch}
\caption{Operation of the \emph{Count Sketch} during the insertion of a new element. Image taken from \cite{cormode2012synopses}.}
\label{fig:count_sketch}
\end{figure}
\paragraph{}
The \emph{Count Sketch} relies on the same elements as its simpler counterpart, so we reuse the earlier definition of the matrix $S$ with $w$ columns and $d$ rows, together with the hash functions $h_1, h_2, ..., h_j,..., h_d$, mutually independent and uniformly distributed. The variation appears in the operations performed when updating and querying the data structure. The \emph{Count Sketch} can be viewed in several ways; we first describe its extended version and then a version that halves the space needed to keep it in memory.
\begin{equation}
\label{eq:count_sketch_hash_2}
h'_j(i) =
\begin{cases}
h_j(i) - 1, & h_j(i) \ mod \ 2 = 0\\
h_j(i) + 1, & h_j(i) \ mod \ 2 = 1
\end{cases}
\end{equation}
\paragraph{}
We now describe the extended version, which relies on the hash function $h'_j(i)$ defined in Equation \eqref{eq:count_sketch_hash_2}, a variation of the base hash function $h_j(i)$. This function is used only at estimation time, not for the updates triggered by arriving elements. The intuition behind it is to reduce the bias produced by the collisions of other elements in each row, which always over-estimates the results (on average, the over-estimation of each row is $\frac{\sum_{k=1}^wS[j,k]}{w}$). To counteract this effect, taking the minimum is no longer appropriate; instead the estimate is obtained with the median, as indicated in Equation \eqref{eq:count_sketch_estimation_1}.
\begin{equation}
\label{eq:count_sketch_estimation_1}
\widetilde{f}(i) = median_{j \in [1,d]}\{S[j, h_j(i)] - S[j, h'_j(i)]\}
\end{equation}
\paragraph{}
The reduced version of this alternative relies on $d$ additional hash functions $g_1, ..., g_j, ..., g_d$, uniform and mutually independent, with input domain $[1,m]$ and image in the subset $\{-1,1\}$. If the update rule is replaced by the one described in Equation \eqref{eq:count_sketch_update}, where $i$ identifies the $i$-th element and $c$ its variation in the update, then the same precision as in the previous case is obtained with half the space. Since this is equivalent to maintaining $S[j, h_j(i)] - S[j, h'_j(i)]$ in a single position of $S$, the frequency estimate is replaced by Equation \eqref{eq:count_sketch_estimation_2}.
\begin{equation}
\label{eq:count_sketch_update}
S[j,h_j(i)] = S[j,h_j(i)] + g_j(i)c
\end{equation}
\begin{equation}
\label{eq:count_sketch_estimation_2}
\widetilde{f}(i) = median_{j \in [1,d]}\{g_j(i) \cdot S[j, h_j(i)]\}
\end{equation}
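\paragraph{}
The compact version can be sketched as follows (the per-row estimate multiplies by $g_j(i)$ so that the contribution of element $i$ is recovered with the correct sign; the hash construction is, as before, an assumption made for the example):
\begin{verbatim}
import hashlib

class CountSketch:
    def __init__(self, w, d):
        self.w, self.d = w, d
        self.S = [[0] * w for _ in range(d)]

    def _h(self, j, item):                     # column hash h_j
        digest = hashlib.sha256(f"h{j}:{item}".encode()).hexdigest()
        return int(digest, 16) % self.w

    def _g(self, j, item):                     # sign hash g_j in {-1, +1}
        digest = hashlib.sha256(f"g{j}:{item}".encode()).hexdigest()
        return 1 if int(digest, 16) % 2 == 0 else -1

    def update(self, item, c=1):
        for j in range(self.d):
            self.S[j][self._h(j, item)] += self._g(j, item) * c

    def estimate(self, item):                  # median of per-row estimates
        values = sorted(self._g(j, item) * self.S[j][self._h(j, item)]
                        for j in range(self.d))
        return values[len(values) // 2]

cs = CountSketch(w=200, d=5)
for token in ["a", "b", "a", "c", "a"]:
    cs.update(token)
print(cs.estimate("a"))   # close to 3
\end{verbatim}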
\paragraph{}
For the analysis of the precision of the \emph{Count Sketch} an approach similar to that of the \emph{CM Sketch} is used. In this case, however, the value $\epsilon$ is chosen taking into account the second frequency moment, $F_2 = \sum_{i=1}^mf(i)^2$, so that the estimate obtained from each row lies in the interval $[f(i) - \sqrt{F_2/w}, f(i) + \sqrt{F_2/w}]$. The overall estimate therefore has a maximum error of $\epsilon\sqrt{F_2}$ with probability $1- \delta$.
\paragraph{}
For this error bound to hold, the size of the matrix $S$ must be chosen so that the number of columns is $w = O(1/\epsilon^2)$ and the number of rows is $d = O(log(1/\delta))$. The full proof behind this choice can be found in the original article \cite{charikar2002finding}. Note the difference with respect to the error bound of the \emph{CM Sketch}: in the first case the second frequency moment ($F_2$) is used, whereas the error bound of the \emph{CM Sketch} relies on the first frequency moment ($F_1$).
\paragraph{}
In this implementation of the \emph{Count Sketch} the only restriction imposed on the hash functions is that they be \emph{2-universal}, which is a sufficient condition for estimating point frequencies. However, this data structure can be extended to estimate the second frequency moment ($F_2$) using the technique described in Section \ref{sec:streaming_frecuency_moment_aproximation}, taken from the article by \emph{Alon et al.} \cite{alon1996space}. To estimate $F_2$ it suffices to compute the median of the sums of the squared elements of each row, as indicated in Equation \eqref{eq:ams_sketch_f2}. However, for this solution to be correct from the point of view of the variance of the estimator, the hash functions used must be \emph{4-universal} (because of the squaring involved). When this property holds the technique is called the \textbf{AMS Sketch}.
\begin{equation}
\label{eq:ams_sketch_f2}
\widetilde{F}_2 = median_{j \in [1,d]}\{\sum_{k=1}^w S[j, k]^2\}
\end{equation}
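\paragraph{}
Reusing the matrix $S$ of the previous example, the $F_2$ estimate can be computed as follows (remember that the analysis requires the $g_j$ functions to be 4-universal, a property that the illustrative salted hashes above do not guarantee):
\begin{verbatim}
def estimate_f2(S):
    row_estimates = sorted(sum(cell * cell for cell in row) for row in S)
    return row_estimates[len(row_estimates) // 2]   # median over the d rows

print(estimate_f2(cs.S))   # approximates F_2 = 3^2 + 1^2 + 1^2 = 11
\end{verbatim}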
\paragraph{}
In practice these two alternatives give very similar results when the same space restrictions are imposed. They also share a notable property: in many cases the estimate obtained is much more precise because many probability distributions are skewed, that is, some elements appear very frequently while others appear only rarely, which improves the precision of the estimators obtained by these \emph{sketches}.
\section{HyperLogLog}
\label{sec:hyper_log_log}
\paragraph{}
This section discusses the \emph{HyperLogLog} \emph{Sketch}, whose approach and purpose differ from the alternatives described in previous sections. The \emph{HyperLogLog} is used to estimate accurately the number of distinct elements that have appeared in a stream. This data structure is an evolution of the basic version discussed in the chapter on \emph{Streaming Algorithms}, in Section \ref{sec:streaming_flajolet_martin_algorithm}.
\paragraph{}
That version was later extended in \emph{Loglog counting of large cardinalities} \cite{durand2003loglog}, and finally in 2007 the \emph{HyperLogLog} was presented. The \emph{HyperLogLog} is described in the work \emph{Hyperloglog: the analysis of a near-optimal cardinality estimation algorithm} \cite{flajolet2007hyperloglog} by \emph{Flajolet}. The explanations in this section follow the article \emph{HyperLogLog in practice: algorithmic engineering of a state of the art cardinality estimation algorithm} \cite{heule2013hyperloglog}, which presents several improvements over the \emph{HyperLogLog} that are described later.
\paragraph{}
As indicated in the description of the \emph{Flajolet-Martin algorithm} (Section \ref{sec:streaming_flajolet_martin_algorithm}), the task the \emph{HyperLogLog} tries to solve is counting the distinct elements, or cardinality, of a set of elements. To do so, this strategy maps every element appearing in the input through a binary hash function that follows a uniform probability distribution. Under these conditions, the intuition behind the \emph{HyperLogLog} is that, in the target space of the hash function, $50\%$ of the elements will be encoded with a $0$ as their most significant bit, $25\%$ will be encoded with $00$ as their two most significant bits and, in general, a fraction $1/2^k$ will have $k$ leading zeros.
\paragraph{}
The \emph{HyperLogLog} therefore relies on the function $\rho(x)$, which returns the number of leading zeros in the binary representation generated by the hash function, plus one. Up to this point the strategy is equivalent to the \emph{Flajolet-Martin algorithm}; however, a variation is introduced that yields an estimate with a much smaller variance. Instead of treating the stream of elements $S$ as a whole, it is divided into $m$ sub-streams of similar length, denoted $S_i$ with $i \in [1,m]$.
\begin{equation}
\label{eq:hyper_log_log_counter}
M[i] = \max_{x \in S_i}\rho(x)
\end{equation}
\paragraph{}
To keep track of each estimator, a vector $M$ storing these values is maintained; its construction is shown in Equation \eqref{eq:hyper_log_log_counter}. For the estimation of results, the strategy described in Equation \eqref{eq:hyper_log_log_estimation} is used, which relies on the term $\alpha_m$ defined in Equation \eqref{eq:hyper_log_log_alpha}. Note that the last factor corresponds to a harmonic mean of the counters. The choice of these operators, as well as of the term $\alpha_m$, is described in the original article \cite{flajolet2007hyperloglog}; they are chosen to reduce the variance of the estimate and thus improve its precision.
\begin{equation}
\label{eq:hyper_log_log_estimation}
card(S) = \alpha_m \cdot m^2 \cdot \bigg( \sum_{j=1}^{m}2^{-M[j]}\bigg)^{-1}
\end{equation}
\begin{equation}
\label{eq:hyper_log_log_alpha}
\alpha_m = \bigg(m \cdot \int_0^\infty \bigg(log_2\bigg(\frac{2+u}{1+u}\bigg)\bigg)^m du\bigg)^{-1}
\end{equation}
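\paragraph{}
A simplified implementation of the estimator is sketched below (the value of $\alpha_m$ for $m=64$ and the splitting of the 64-bit hash into register index and remaining bits follow the usual practical presentation and are assumptions of the example):
\begin{verbatim}
import hashlib

def _hash64(item):
    h = hashlib.sha256(str(item).encode()).hexdigest()
    return int(h, 16) & ((1 << 64) - 1)

def _rho(value, bits):
    """Number of leading zeros plus one within a `bits`-bit word."""
    return bits - value.bit_length() + 1

def hyperloglog(stream, b=6):
    m = 1 << b                       # m = 64 sub-streams / registers
    M = [0] * m
    for item in stream:
        x = _hash64(item)
        i = x >> (64 - b)            # first b bits choose the register
        w = x & ((1 << (64 - b)) - 1)
        M[i] = max(M[i], _rho(w, 64 - b))
    alpha = 0.709                    # approximate alpha_m for m = 64
    return alpha * m * m / sum(2.0 ** (-Mj) for Mj in M)

print(hyperloglog(range(10000)))     # ~10000, typical error around 1.04/sqrt(m)
\end{verbatim}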
\paragraph{}
As for the improvements proposed in \cite{heule2013hyperloglog} over the original \emph{HyperLogLog}, they rely on a \emph{64-bit} hash function (instead of the 32-bit one of the original proposal), on a different estimator when $card(S) < \frac{5}{2}m$, and on a sparse representation strategy that makes better use of the memory space.
\section{$L_p$-Samplers}
\label{sec:lp_samplers}
\paragraph{}
This section describes the concept of \emph{$L_p$-Sampler}, following the ideas presented in \emph{Tight bounds for Lp samplers, finding duplicates in streams, and related problems} \cite{jowhari2011tight} by \emph{Jowhari et al.} \emph{$L_p$-Samplers} are data structures that process a stream of elements defined over the set $N= \{1, 2, ...,i, ...n\}$ on which both additions and deletions are allowed, that is, they fall within the turnstile model. We denote by $x_i \in x$ the number of occurrences of the $i$-th element in the stream.
\paragraph{}
\emph{$L_p$-Samplers} return a subset of elements $N'$ drawn from the global set $N$ such that each element is selected with probability $\frac{|x_i|^p}{||x||_p^p}$, with $p\geq0$. In this way, a sample of the global set is obtained that keeps the same proportion of elements as the whole but in a smaller space, using the $p$ norm as the measure driving the sampling.
\paragraph{}
The difficulty of the problem lies in maintaining the sample, bearing in mind that elements can be both added to and removed from it as their count $x_i$ varies while the stream is processed.
\paragraph{}
Thus, when we choose $p =1$, the elements of the sample are selected with probability proportional to their number of occurrences in the stream. In \cite{jowhari2011tight} an algorithm is described for \emph{$L_p$-Samplers} with $p \in [0,2]$ that fits the space restrictions of the streaming model, relying on the \emph{Count-Sketch} (Section \ref{sec:count_sketch}). The case of \emph{$L_0$-Samplers} is described in more detail below.
\subsection{$L_0$-Samplers}
\label{sec:l0_samplers}
\paragraph{}
\emph{$L_0$-Samplers} follow the same sampling idea described in this section. In this case they use the $0$ norm, and therefore select the elements of the sample with uniform probability among those that have appeared in the stream. This is the most basic case of the problem, since it is only necessary to maintain a counter indicating whether each element is present. \emph{$L_0$-Samplers} are very useful in the context of graphs, since they fit that model naturally, making it possible to maintain a subset of the edges of the graph based only on whether or not they have appeared in the stream.
\paragraph{}
This is easy when only additions are allowed, since once an element has appeared it will not disappear. With deletions, however, the task becomes harder because, as indicated in the previous paragraph, it is necessary to keep the count of occurrences to know whether the element has disappeared or has only had its count decremented.
\paragraph{}
Below, the basic structure of these algorithms is described following the work by \emph{Cormode and Firmani} in \emph{On unifying the space of L0-sampling algorithms} \cite{cormode2013unifying}, which shows that all \emph{$L_0$-Sampler} proposals are based on three phases (a minimal code sketch of the overall semantics is given after this list):
\begin{itemize}
\item \emph{Sampling}: the count of occurrences is maintained in an auxiliary data structure with ideas similar to those of the \emph{Count-Sketch}.
\item \emph{Recovery}: the auxiliary data structure is queried to recover the count of occurrences of each element in the stream (to know whether it has appeared in it).
\item \emph{Selection}: a selection is made among the elements that appeared in the stream following a uniform probability distribution.
\end{itemize}
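\paragraph{}
As a reference for the semantics only (it keeps exact counts, so it does not achieve the sub-linear space of the real streaming algorithms built from the three phases above), an \emph{$L_0$-Sampler} behaves as follows:
\begin{verbatim}
import random
from collections import defaultdict

class NaiveL0Sampler:
    def __init__(self, seed=0):
        self.counts = defaultdict(int)
        self.rng = random.Random(seed)

    def update(self, i, c):          # turnstile update: c may be negative
        self.counts[i] += c

    def sample(self):
        support = [i for i, c in self.counts.items() if c != 0]
        return self.rng.choice(support) if support else None

s = NaiveL0Sampler()
s.update("edge_ab", 1); s.update("edge_bc", 1); s.update("edge_ab", -1)
print(s.sample())   # always "edge_bc", since "edge_ab" was deleted
\end{verbatim}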
\section{Conclusions}
\label{sec:summaries_conclusions}
\paragraph{}
This chapter has described different strategies and techniques for estimating statistical indicators that help to understand massive data sets. These estimators try to speed up the computation of metrics over the data set which, because of its large size, are often impractical to obtain deterministically due to the cost in both space and time. To this end, different solutions have been described, from drawing samples, histograms or wavelets, to newer techniques that better fit the changing environments in which one often works.
\paragraph{}
By contrast, \emph{Sketch}-based solutions are very recent and are therefore still at a research stage. Even so, there are solutions for checking the appearance of elements with the \emph{Bloom Filter}, for frequency estimation with the \emph{Count-Min Sketch} or the \emph{Count Sketch}, and for counting distinct elements with the \emph{HyperLogLog}.
\paragraph{}
It is expected that the set of probabilistic data structures based on \emph{Sketches} will keep growing over time and that more sophisticated alternatives will be designed in the future, allowing a large number of parameters to be estimated.
\chapter{User Guide}
\label{chap:user_guide}
\paragraph{}
This section describes the installation and usage of the implementation developed. There are several alternatives, including installation with the \texttt{python} command or with module managers such as \texttt{easy\_install} or \texttt{pip}. Here we describe how to install the project using the \texttt{pip} manager.
\paragraph{}
First of all, the \texttt{Python} programming language, version \textbf{3.5} or higher, must be installed on the system, together with its corresponding version of \texttt{pip}. The module can be installed either from the repository hosted on \emph{GitHub} or directly from a local copy.
\paragraph{}
The command to install the implementation on the system from the \emph{GitHub} repository (note that in this case the \texttt{git} utility must be installed) is shown below:
\begin{verbatim}
$ pip install git+https://github.com/garciparedes/tf_G.git
\end{verbatim}
\paragraph{}
If you prefer to install from a local copy of the repository, it is enough to run the following command:
\begin{verbatim}
$ pip install .
\end{verbatim}
\paragraph{}
Once the installation has completed successfully, the implementation is ready to be used. It can be imported in \texttt{python} source files executed on interpreters of version \textbf{3.5} or higher.
\paragraph{}
The implementation can also be used from an interpreter running on the system command line, such as \texttt{python3} or the extended version \texttt{ipython3}. Once in the interpreter, the module can be imported simply by running:
\begin{verbatim}
>>> import tf_G
\end{verbatim}
\paragraph{}
After running that statement you have access to the class ecosystem described in the documentation, which is contained in the code itself through the \texttt{docstring} standard. This documentation is also accessible as a web site at the following url: \url{http://tf-g.readthedocs.io/en/latest/}.
\paragraph{}
If you have a local copy of the repository, it is also possible to run the test suite that verifies the correctness of the code. This requires the \texttt{pytest} utility. Running it from the root directory of the repository executes all the unit tests contained in it. To do so, run the following commands:
\begin{verbatim}
$ pip install -e .
$ pytest
\end{verbatim}
\paragraph{}
The local copy, in addition to the test cases that verify the correct behaviour of the code, includes a series of examples. These examples are small scripts that make different calls to the developed module and then print the results on screen. They can be found in the \texttt{/examples/} directory.
\paragraph{}
As this installation and usage manual shows, thanks to the \emph{Python} module management system, the tasks of distributing modules and managing their dependencies are drastically simplified, being limited to the installation command together with the corresponding import needed to use them.
\end{document}
\section{Introduction}
Coreference resolution is the task of clustering textual mentions that refer to the same discourse entity.
This fundamental task requires many decisions. In this work, we argue that different \emph{kinds} of decisions involve different challenges. To illustrate that, consider the following text:
\emph{``\textbf{Lionel Messi} has won a record seven Ballon d'Or awards. \textbf{He} signed for Paris Saint-Germain in August 2021. ``\textbf{I} would like to thank \textbf{my} family'', said \textbf{the Argentinian footballer}. \textbf{Messi} holds the records for most goals in La Liga''}
To correctly identify that the pronoun ``He'' refers to ``Lionel Messi'', models need to model the discourse, while linking ``my'' to ``I'' may rely more heavily on morphological agreement. Likewise, linking ``the Argentinian footballer'' to ``Lionel Messi'' requires world knowledge, while linking ``Messi'' to ``Lionel Messi'' may be achieved by simple lexical heuristics.
Despite these inherent differences, recent models are based on a \emph{single} pairwise scorer for all types of pairs, regardless of the different challenges that need to be addressed~\citep{lee-etal-2017-end, lee-etal-2018-higher, joshi-etal-2019-bert, kantor-globerson-2019-coreference, joshi-etal-2020-spanbert, xu-choi-2020-revealing, xia-etal-2020-incremental, toshniwal-etal-2020-learning, thirukovalluru-etal-2021-scaling, kirstain-etal-2021-coreference, cattan-etal-2021-cross-document, dobrovolskii-2021-word}.
\begin{figure}[t]
\centering
\includegraphics[scale=0.35]{head_arie.png}
\caption{Architecture of our multi-head expert model. Given two spans \emph{``Lionel Messi''} and \emph{``He''}, the contextualized vectors (green) are fed into $\V{m_s(\cdot)}$ and $\V{m_e(\cdot)}$ to compute mention scores $f_m(\cdot)$ (black), and to $\V{a^t_s(\cdot)}$, $\V{a^t_e(\cdot)}$ to perform per category antecedent scores (blue family). The relevant category (Pron-Ent) score $f_{a}^t(\cdot, \cdot)$ (turquoise) and the general score $f_{a}(\cdot, \cdot)$ (white). The final score $F(c, q)$ is the sum of these four scores.}
\label{fig:head}
\end{figure}
In this work, we identify a set of linguistically meaningful classes of decisions: (a) linking pronouns to pronouns (\textsc{Pron-Pron}); (b) linking pronouns to entities (\textsc{Ent-Pron}); (c) linking entities which share the exact lexical form (\textsc{Match}); (d) linking entities where the lexical form of one contains the lexical form of the other (\textsc{Contains}); (e) other cases. Each of these classes is easy to identify deterministically, each contains both positive and negative instances, and each could benefit from a somewhat different decision process. An example of each class is given in Table~\ref{tab:categories}.
\input{phenomena_table}
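For concreteness, the following Python snippet is a minimal sketch of one possible rule-based routing of a mention pair to its category; the pronoun list and the exact matching rules are illustrative assumptions and may differ from the ones used by the model.
\begin{verbatim}
PRONOUNS = {"i", "me", "my", "he", "him", "his", "she", "her", "it", "its",
            "we", "us", "our", "they", "them", "their", "you", "your"}

def category(c_tokens, q_tokens):
    c_text = " ".join(c_tokens).lower()
    q_text = " ".join(q_tokens).lower()
    c_pron, q_pron = c_text in PRONOUNS, q_text in PRONOUNS
    if c_pron and q_pron:
        return "PRON-PRON"
    if c_pron or q_pron:
        return "ENT-PRON"
    if c_text == q_text:
        return "MATCH"
    if c_text in q_text or q_text in c_text:
        return "CONTAINS"
    return "OTHER"

print(category(["Lionel", "Messi"], ["He"]))      # ENT-PRON
print(category(["Lionel", "Messi"], ["Messi"]))   # CONTAINS
\end{verbatim}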
We present \textbf{Ling}uistically Informed \textbf{M}ulti \textbf{E}xpert \textbf{S}corer\textbf{s} (\model{}), a coreference model which categorizes each pairwise decision into one of these classes,\footnote{Others---more fine-grained---distinctions are of course also possible, but we leave exploration of them to future work.} and learns a separate scoring function for each class. Specifically, we extend the recent \emph{s2e}'s model~\citep{kirstain-etal-2021-coreference} by adding per-category scoring, but the method is general and may work with other coreference models as well. As illustrated in Figure~\ref{fig:head}, the final coreference score between two spans is composed---in addition to the individual mention scores---of two scores: a general antecedent-compatibility score and an ``expert'' antecedent compatibility score which depends on the linguistic-type of the pair.
We show that this significantly improves pairwise F1 scores, and is also reflected in a 1-point increase in cluster-level CoNLL F1 score on OntoNotes~\citep{pradhan-etal-2012-conll}.
We also inspect the performance of the model for each category separately, showing that some classes improve more than others. This analysis further provides a finer-grained understanding of the models and the coreference phenomena, and points out directions for future research.
\section{Background: the s2e Model}
\label{sec:bg}
The \emph{s2e} model~\citep{kirstain-etal-2021-coreference} is a lightweight and efficient coreference model, which avoids the costly construction of full-fledged span representations by considering only span boundaries, and also achieves the current best coreference scores among all practical models.\footnote{\citet{dobrovolskii-2021-word}'s model slightly surpasses the \emph{s2e} model but requires more GPU memory for training. The CorefQA model~\citep{wu-etal-2020-corefqa} achieves a substantially higher score, but also requires running a separate BERT inference for each mention, making it highly impractical.}
Given a sequence of tokens $x_1, ..., x_n$, each span is represented by its start and end tokens. For a pair of spans $c=(x_i, x_j)$, $q=(x_k, x_\ell)$ where $c$ (``candidate'') appears before $q$ (``query''), a parameterized function $f_m(c)$ ($f_m(q)$) scores how fit a given span is to be a mention, while a parameterized function $f_a(c,q)$ scores how fit $c$ is to be a coreferring antecedent of $q$ (as computing antecedent scores for all possible pairs would result in $\mathcal{O}(n^4)$ complexity, in practice only the highest-scoring spans according to $f_m$ are fed to the antecedent scorer $f_a$).
These functions operate over contextualized vector representations, obtained by a BERT-like model. For the exact function form, see \cite{kirstain-etal-2021-coreference}.
Finally, similarly to~\citet{lee-etal-2017-end}, the final pairwise score $F_g(c, q)$ is composed of the two mention scores $f_m(q)$, $f_m(c)$ and the antecedent score $f_a(c, q)$.
\begin{align*}
F_g(c, q) =
\begin{cases}
f_m(c) + f_m(q) + f_a(c, q) & c \neq \varepsilon \\
0 & c = \varepsilon
\end{cases}
\end{align*}
where $\varepsilon$ is the null antecedent.
For each possible mention $q$, the learning objective optimizes the sum of probabilities over the true antecedents $\hat{c}$ of $q$:
\begin{align*}
L_{g}(q) = \log \sum_{\hat{c} \in \mathcal{C}(q) \cap \textsc{gold}(q)}P_g(\hat{c} \mid q)
\end{align*}
where $\mathcal{C}(q)$ is the set of all candidate antecedents\footnote{All spans before $q$ that passed some pruning threshold.} together with a null antecedent $\varepsilon$. $\textsc{gold}(q)$ is the set of the true antecedents of $q$.
$P_g(c \mid q)$ is computed as a softmax over $F_g(c, q)$ scores for $c$ values in $\mathcal{C}(q)$:
\begin{align*}
P_g(c \mid q) = \frac{\exp{F_g(c, q)}}{\sum\limits_{c' \in \mathcal{C}(q)} \exp{F_g(c', q)}}
\end{align*}
\section{\model{}}
Clustering coreferring entities typically involves many different phenomena, which we argue should be addressed in a different manner.
Indeed, linking two entities with the same string, such as \emph{Hong Kong} and \emph{Hong Kong}, is different from linking an entity to a pronoun, for instance \emph{Lionel Messi} and \emph{he}. In the string-match case, the mention tokens alone are indicative features for the linking decision (though there are cases where lexical identity should be ignored, such as \emph{Washington} the U.S. government vs. \emph{Washington} the city), whereas the entity-pronoun case requires a different kind of discourse analysis. Therefore, our core contribution is proposing to allocate a dedicated scorer $f^t_a(c, q)$ for each phenomenon type $t$, in addition to the general antecedent scorer $f_a(c, q)$. The overall architecture of our model is shown in Figure~\ref{fig:head}.
Concretely, we extend the \emph{s2e} model with five additional antecedent scorers $f_a^{t}(\cdot, \cdot)$ where $t\in\{\textsc{Pron-Pron}, \textsc{Ent-Pron}, \textsc{Match}, \textsc{Contains},$ $\textsc{Other}\}$, the five categories we list in Table \ref{tab:categories}.
The pairwise scoring function now becomes:
\begin{align*}
F(c, q) =
\begin{cases}
f_m(c) + f_m(q) + f(c, q) & c \neq \varepsilon \\
0 & c = \varepsilon \\
\end{cases}
\numberthis \label{eq:F}
\end{align*}
\begin{align*}
f(c, q) = f_a(c, q) + f^{T(c,q)}_{a}(c, q)
\end{align*}
where $T(c, q)$ is a rule-based function to determine the category $t$ of the pair $(q, c)$.
$F(c, q)$ is the final score function, a sum of four scores: $f_m(q)$, how likely span $q$ is to be a mention; $f_m(c)$, how likely span $c$ is to be a mention; $f_a(c, q)$, the ``general'' score of how likely $c$ is to be an antecedent of $q$; and $f^t_a(c, q)$, the ``expert'' score for category $t$ of how likely $c$ is to be an antecedent of $q$. Each of the six pairwise scoring functions ($f_a$ and the five $f_a^t$) is parameterized separately using its own set of matrices.
The transformer-based encoder and the mention scorer are shared between all the different pairwise scorers.
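The exact string-level rules behind $T(c, q)$ follow the category definitions of Table~\ref{tab:categories}; the sketch below is only a plausible minimal implementation (the pronoun list and the containment test are our own simplifications, not necessarily the exact rules used by the model).
\begin{verbatim}
PRONOUNS = {"he", "she", "it", "they", "him", "her", "them",
            "his", "hers", "its", "their", "i", "you", "we"}  # illustrative

def T(c_tokens, q_tokens):
    """Assign a span pair to one of the five categories.

    c_tokens, q_tokens: lists of lower-cased tokens of the candidate
    and query spans.  The actual rules may differ; this only illustrates
    the kind of deterministic categorization used.
    """
    c_is_pron = len(c_tokens) == 1 and c_tokens[0] in PRONOUNS
    q_is_pron = len(q_tokens) == 1 and q_tokens[0] in PRONOUNS
    if c_is_pron and q_is_pron:
        return "PRON-PRON"
    if c_is_pron or q_is_pron:
        return "ENT-PRON"
    if c_tokens == q_tokens:
        return "MATCH"
    c_str, q_str = " ".join(c_tokens), " ".join(q_tokens)
    if c_str in q_str or q_str in c_str:
        return "CONTAINS"
    return "OTHER"

print(T(["hong", "kong"], ["hong", "kong"]))  # MATCH
print(T(["lionel", "messi"], ["he"]))         # ENT-PRON
\end{verbatim}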
\paragraph{Learning}
Through training, for each span $q$, our model optimizes the objective function $L_{coref}$ over the sum of probabilities of all true antecedents of $q$:
\begin{align*}
L_{coref}(q) = \log \sum_{\hat{c} \in \mathcal{C}(q) \cap \textsc{gold}(q)}P(\hat{c} \mid q)
\end{align*}
Here, $P(\hat{c} \mid q)$ is a softmax over the $F(\hat{c}, q)$ scores, i.e., over our new score function described in Figure~\ref{fig:head}.
\begin{align*}
P(\hat{c} \mid q) = \frac{\exp{F(\hat{c}, q)}}{\sum\limits_{c' \in \mathcal{C}(q)} \exp{F(c', q)}}
\end{align*}
This model is also the one used in inference. However, this objective does not explicitly push each category (``expert'') to specialize. For example, for the \textsc{Pron-Pron} cases, it would be useful to explicitly train the model to distinguish between the possible antecedents of that type (without regard to antecedents of other types), as well as to explicitly distinguish between a pronoun antecedent and a null antecedent. To this end, we extend the training objective by also training each expert separately:
\begin{align*}
L_{t}(q) = \log \sum_{\hat{c} \in \mathcal{C}_t(q) \cap \textsc{gold}(q)}P_t(\hat{c} \mid q)
\end{align*}
\begin{align*}
F_t(c, q) =
\begin{cases}
f_m(c) + f_m(q) + f^t_a(c, q) & c \neq \varepsilon \\
0 & c = \varepsilon
\end{cases}
\end{align*}
\begin{align*}
P_t(\hat{c} \mid q) = \frac{\exp{F_t(\hat{c}, q)}}{\sum\limits_{c' \in \mathcal{C}_t(q)} \exp{F_t(c', q)}}
\end{align*}
Note that for $L_t(q)$ we replace $\mathcal{C}(q)$ with $\mathcal{C}_t(q)$, considering only the potential antecedents that are compatible with the span $q$ for the given type (for example, for $L_{\textsc{Pron-Pron}}$ and a span $q$ corresponding to a pronoun, we will only consider candidates $c$ which appear before $q$ and are also pronouns).
Our final objective for each mention span $q$ is thus:
\begin{align*}
L(q) &= L_{coref}(q) + L_{tasks}(q) \\
L_{tasks}(q) &= \sum_{t} L_t(q) + L_g(q)
\end{align*}
\paragraph{Inference} We form the coreference chains by linking each mention $q$ to its most likely antecedent $c$ according to $F(c, q)$ (Eq.~\ref{eq:F}). We do not use higher-order inference as it has been shown to have a marginal impact~\citep{xu-choi-2020-revealing}.
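A minimal sketch of this decoding step, greedy antecedent selection followed by a union-find pass that turns the predicted links into clusters, is given below (the score matrix is assumed to be precomputed, and variable names are ours, not the authors').
\begin{verbatim}
import numpy as np

def decode_clusters(F):
    """Greedy antecedent decoding followed by transitive closure.

    F: (n, n+1) array where F[q, c] is the pairwise score of candidate c < q
       for mention q; column n holds the null-antecedent score (0 in the
       model) and entries with c >= q are assumed to be -inf.
    Returns the predicted clusters as sets of mention indices.
    """
    n = F.shape[0]
    parent = list(range(n))

    def find(i):                      # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for q in range(n):
        best = int(np.argmax(F[q]))
        if best < n:                  # a real antecedent beat the null one
            parent[find(q)] = find(best)

    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), set()).add(i)
    return [c for c in clusters.values() if len(c) > 1]

# toy example: mention 2 links to mention 0, the others to the null antecedent
F = np.full((4, 5), -np.inf)
F[:, 4] = 0.0
F[2, 0], F[3, 1] = 3.0, -1.0
print(decode_clusters(F))            # [{0, 2}]
\end{verbatim}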
\section{Method}
Clustering coreferring entities typically involves many different phenomena, which should be addressed in a different manner.
Indeed, linking two string-match entities such as \emph{Hong Kong} to \emph{Hong Kong} is different from linking an entity to a pronoun, for instance, \emph{Barack Obama} and \emph{he}. In the string-match case, the mention tokens alone are indicative features for the linking decision, whereas the entity-pronoun case requires a deeper understanding of the discourse. Therefore, instead of having a single antecedent scorer $f_a(c, q)$, our model is composed of a separate scorer $f^t_a(c, q)$ for each phenomenon $t$ that we define in Table 1. As a result, each scorer will distinguish better between positive and negative pairs than a single scorer that learns from all possible phenomenon pairs. The overall architecture of our model is shown in Figure~\ref{fig:head}.
More formally, our method starts out similarly to the \emph{s2e} model. Given a sequence of tokens from an input document, a transformer-based encoder first forms contextualized representation vectors, $\V{x_1}, ..., \V{x_n}$, for each token in the sequence. Then,
the mention scorer $f_m(\cdot)$ assigns a score to all possible spans in the document (up to a certain length).
Spans with the highest scores are extracted and fed to the linking stage, where the model predicts five distributions over possible pairs of spans, one for every category $t \in \{pron-pron, pron-ent, match, contain, other\}$. Examples of each category are shown in Table~\ref{tab:categories}. In addition to these five categories, our model predicts one more distribution, the non-category distribution, i.e., over any possible combination of two spans in the text. Given two spans $q$ and $c$, where $c$ appears before $q$, we compute how likely $c$ is to be a coreferring antecedent of $q$ as follows:
\begin{align*}
F(c, q) =
\begin{cases}
f_m(c) + f_m(q) + f(c, q) & c \neq \varepsilon \\
0 & c = \varepsilon \\
\end{cases}
\end{align*}
\begin{align*}
f(c, q) = f_a(c, q) + \sum_t {f^{t}_{a}(c, q)}\mathbbm{1}_{T(c, q) = t}
\end{align*}
where $T(c, q)$ is a rule-based function to determine the category $t$ of the pair $(q, c)$.
$F(c, q)$ is the final score function, a sum of four scores: $f_m(q)$, how likely span $q$ is to be a mention; $f_m(c)$, how likely span $c$ is to be a mention; $f_a(c, q)$, the ``general'' score of how likely $c$ is to be an antecedent of $q$; and lastly, $f^t_a(c, q)$, the ``expert'' score for category $t$ of how likely $c$ is to be an antecedent of $q$.
We define $f^t_a(c, q)$ similar to \citet{kirstain-etal-2021-coreference}'s $f_a(c, q)$ function, as a combination of non-linear and linear functions for every category $t$ as follows:
\begin{align*}
\resizebox{1.0\hsize}{!}{
$\V{a}^{t}_{s}(\V{x}) = \text{GeLU} (\mathbf{W}^{t}_{a_s} \V{x}) \qquad
\V{a}^{t}_{e}(\V{x}) = \text{GeLU} (\mathbf{W}^{t}_{a_e} \V{x})
$}
\end{align*}
\begin{equation*}
\resizebox{1.0\hsize}{!}{
$\begin{aligned}
f^t_a(c, q) &= \V{a}^{t}_{s}(\V{c_s}) \cdot \V{B}_{ss}^{t} \cdot \V{a}^{t}_{s}(\V{q_s}) + \V{a}^{t}_{e}(\V{c_e}) \cdot \V{B}_{es}^{t} \cdot \V{a}^{t}_{s}(\V{q_s}) \\
&+ \V{a}^{t}_{s}(\V{c_s}) \cdot \V{B}_{se}^{t} \cdot \V{a}^{t}_{e}(\V{q_e}) + \V{a}^{t}_{e}(\V{c_e}) \cdot \V{B}_{ee}^{t} \cdot \V{a}^{t}_{e}(\V{q_e})
\end{aligned}$%
}
\end{equation*}
Our method defines three new functions for each category $t$: $\V{a}^{t}_{s}(\V{x})$, $\V{a}^{t}_{e}(\V{x})$ and $f^t_a(c, q)$, with 6 learnable matrices for each phenomenon and 6 more learnable matrices for the non-category scorer, introducing a total of 36 learnable matrices. The transformer-based encoder and the mention scorer are shared between all the different pairwise scorers.
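As an illustration of the bilinear form above, the following PyTorch sketch implements a single category-specific scorer $f^t_a$ on unbatched token vectors; the dimensions are illustrative (1024 is the hidden size of Longformer-Large), and the real implementation additionally handles batching, span pruning and careful parameter initialization.
\begin{verbatim}
import torch
import torch.nn as nn

class CategoryAntecedentScorer(nn.Module):
    """One 'expert' scorer f_a^t, built from start/end token vectors."""

    def __init__(self, hidden_dim=1024, ffnn_dim=2048):
        super().__init__()
        self.start_mlp = nn.Sequential(nn.Linear(hidden_dim, ffnn_dim), nn.GELU())
        self.end_mlp = nn.Sequential(nn.Linear(hidden_dim, ffnn_dim), nn.GELU())
        # the four bilinear matrices B_ss, B_se, B_es, B_ee
        self.B = nn.ParameterDict({
            k: nn.Parameter(torch.randn(ffnn_dim, ffnn_dim) * 0.01)
            for k in ("ss", "se", "es", "ee")})

    def forward(self, c_start, c_end, q_start, q_end):
        # each argument: (hidden_dim,) contextualised token representation
        a_cs, a_ce = self.start_mlp(c_start), self.end_mlp(c_end)
        a_qs, a_qe = self.start_mlp(q_start), self.end_mlp(q_end)
        return (a_cs @ self.B["ss"] @ a_qs + a_ce @ self.B["es"] @ a_qs
                + a_cs @ self.B["se"] @ a_qe + a_ce @ self.B["ee"] @ a_qe)

# toy usage with random token vectors
scorer = CategoryAntecedentScorer()
x = [torch.randn(1024) for _ in range(4)]
print(scorer(*x).item())
\end{verbatim}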
\paragraph{Learning} Through the training objective, for each span $q$, the model optimizes the loss function $L$ over the sum of the probabilities of all true antecedents:
\begin{align*}
L(q) = \log \sum_{c \in \textsc{gold}(q)} P(a = c|q)
\end{align*}
where
\begin{align*}
P(a = c|q) = \frac{\exp{F(c, q)}}{\sum_{c'} \exp{F(c', q)}}
\end{align*}
In addition to learning $L$, we aim to train our ``experts'' to solve the sub-task of coreference resolution within their respective expertise, i.e., within category $t$. For this, we also optimize a loss function for each expert and category:
\begin{align*}
L_t(q) = \log \sum_{c \in \textsc{gold}(q)} P_t(a = c|q)
\end{align*}
where
\begin{align*}
P_t(a = c|q) = \frac{\exp{F_t(c, q)}}{\sum_{c'} \exp{F_t(c', q)}}
\end{align*}
\begin{align*}
F_t(c, q) =
\begin{cases}
f_m(c) + f_m(q) + f^t_a(c, q) & T(c, q) = t \\
0 & c = \varepsilon \\
-\infty & otherwise
\end{cases}
\end{align*}
To restrict each expert to its sub-coreference task, $F_t(c, q)$ assigns $-\infty$ to all pairs $q$ and $c$ that are not in category $t$. All the learning objectives are trained end-to-end together.
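In an implementation, the $-\infty$ assignment is typically realized by masking the pairwise score matrix; a small PyTorch sketch with hypothetical tensor names is:
\begin{verbatim}
import torch

def category_score_matrix(f_m, f_a_t, category_ids, t):
    """Build F_t for one category t by masking out all other pairs.

    f_m          : (n,) mention scores
    f_a_t        : (n, n) expert scores f_a^t(c, q); rows index q, columns c
    category_ids : (n, n) integer tensor with category_ids[q, c] = T(c, q)
    Returns an (n, n+1) tensor whose last column is the null antecedent (0).
    """
    n = f_m.shape[0]
    pair = f_a_t + f_m.unsqueeze(0) + f_m.unsqueeze(1)   # f_m(c)+f_m(q)+f_a^t
    pair = pair.masked_fill(category_ids != t, float("-inf"))
    return torch.cat([pair, torch.zeros(n, 1)], dim=1)

# The per-category loss L_t is then a log-softmax over each row, summed over
# the gold antecedents; rows with no candidate of category t put all their
# probability mass on the null antecedent.
\end{verbatim}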
\paragraph{Inference} For each span pair $q$ and $c$, our model predicts and sums up four scores as described in Figure 1: $f_m(q)$ and $f_m(c)$ reflect the mention perspective, while $f_a(c, q)$ reflects the ``general'' antecedent perspective and $f^t_a(c, q)$ the ``expert'' perspective.
\section{Experiments}
\label{sec:experiments}
In our experiments, we use the English OntoNotes 5.0 dataset \cite{pradhan-etal-2012-conll} to train and evaluate our model. This dataset contains 2802 documents for training, 343 for development, and 348 for test. To implement our category-based method, we extend the \emph{s2e} implementation, which is based on PyTorch~\citep{NEURIPS2019_9015} and the Hugging Face Transformers library~\citep{wolf-etal-2020-transformers}.
\paragraph{Baseline} We consider the \emph{s2e} model of \citet{kirstain-etal-2021-coreference} as our baseline; it is an efficient model that achieves 80.3 F1 on the OntoNotes test set.
\paragraph{Hyperparameters} We used the same hyperparameters as the baseline except for the size of the feed-forward neural network (FFNN) used by the functions $f_m(\cdot)$ and $f_a(\cdot, \cdot)$. The FFNN size of the baseline is 3072. Our method introduces an $f^t_a(\cdot, \cdot)$ function for each category $t$; thus, to fit into memory, we reduce the FFNN size to 2048. Like the baseline, our method is built on top of Longformer-Large~\cite{Beltagy2020LongformerTL}, resulting in a total of 569M learnable parameters, a size comparable to the baseline, which contains 494M learnable parameters.
\paragraph{Performance}
Table~\ref{table:results} presents the performance of \model{} in comparison to the baseline with the standard evaluation metrics for coreference resolution: MUC~\citep{vilain-etal-1995-model}, B\textsuperscript{3}~\citep{bagga-baldwin-1998-entity-based}, and CEAF\textsubscript{$\phi_4$}~\citep{luo-2005-coreference}. The main evaluation is the average F1 of the three metrics. \model{} outperforms previous baselines according to all evaluation metrics. The CoNLL F1 on the development set is 81.4.
\paragraph{Advantages of Categories}
To verify that the improvement of \model{} is due to the decomposition into our set of categories and not due to the added parameters, we conducted two experiments. First, we trained a random baseline, which randomly assigns a category to each pair.\footnote{For each pair of mentions $(c, q)$, we take the modulo of the sum of the ASCII codes of the last tokens of $c$ and $q$.} Second, we trained our model by optimizing only the overall loss $L_{coref}$ and not $L_{tasks}$. In both experiments, we obtain similar results to the baseline, likely due to the dominance of the non-category parameters.
In addition to the standard coreference evaluation, we measure pairwise performance and report the results for each category. Given a mention pair $(q, c)$, if $F(c, q)$ is greater than 0, we treat it as a positive prediction, otherwise as negative. We then measure precision, recall and F1 based on the gold cluster labels. Table~\ref{table:pairwise} shows the pairwise performance of the \emph{s2e} model and \model{}. \model{} outperforms \emph{s2e} by a significant margin for all categories (e.g., +7.2 F1 for \textsc{Pron-Pron}, +3.9 F1 for \textsc{Contain}, etc.) except for \textsc{Ent-Pron}, where the \emph{s2e} model surpasses \model{} by only 0.6 F1. The gain in the coreference metrics is not as significant because the coreference chains are formed by linking a mention to only the single antecedent with the highest $F(c, q)$. Nonetheless, \model{} provides scores of higher quality, which can be useful for injecting coreference signals into downstream tasks.
We also note that the models address each category differently. Indeed, \model{} achieves 88.9 F1 on \textsc{Pron-Pron} and 93.0 F1 on \textsc{Match}, whereas it achieves only 78.4 F1 for \textsc{Contain} and 69.9 F1 for \textsc{Other}. These results indicate the vast room for improvement in the categories with the lowest scores.
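The pairwise evaluation used above amounts to only a few lines of code; the sketch below (with hypothetical input containers) shows the computation of precision, recall and F1 from thresholded pairwise scores and gold cluster ids.
\begin{verbatim}
def pairwise_prf(scores, gold_cluster_id):
    """Pairwise precision/recall/F1 from thresholded pairwise scores.

    scores          : dict mapping a mention pair (c, q) to its score F(c, q)
    gold_cluster_id : dict mapping each mention to its gold cluster id
    A pair counts as a positive prediction when its score exceeds 0.
    """
    tp = fp = fn = 0
    for (c, q), s in scores.items():
        gold_pos = gold_cluster_id[c] == gold_cluster_id[q]
        pred_pos = s > 0
        tp += pred_pos and gold_pos
        fp += pred_pos and not gold_pos
        fn += (not pred_pos) and gold_pos
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# toy example with gold clusters {a, b} and {c}
print(pairwise_prf({("a", "b"): 1.3, ("a", "c"): -0.2, ("b", "c"): 0.4},
                   {"a": 0, "b": 0, "c": 1}))
\end{verbatim}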
\section{Related Work}
The deterministic sieve-based model~\citep{lee-etal-2013-deterministic} is an early system that breaks down the coreference decisions into multiple linguistic categories. They adopt an easy-first approach where coreference decisions in the first sieves help disambiguate later decisions. \citet{lu-ng-2020-conundrums} empirically analyze the performance of recent coreference resolvers on various fine-grained resolution classes of mentions (e.g., gendered pronouns vs.\ 1st and 2nd person pronouns). Our work makes progress in that direction by separately optimizing a supervised model for different categories of mentions.
\section{Conclusion}
We propose \model{}, an approach for coreference resolution that learns a separate antecedent scorer for different classes of coreference cases. \model{} outperforms the baseline on Ontonotes according to both cluster-level and pairwise F1 scores. These results demonstrate that optimizing separately the different linguistic challenges of a general NLP task is an appealing approach for improving performance.
\section{\model{}}
Clustering coreferring entities typically involves many different phenomena, which we argue should be addressed in a different manner.
Indeed, linking two string-match entities such as \emph{Hong Kong} to \emph{Hong Kong} is different from linking an entity to a pronoun, for instance, \emph{Lionel Messi} and \emph{he}. In the string-match case, the mention tokens alone are indicative features for the linking decision (though there are cases where the lexical identity should be ignored, such as \emph{Washington} the U.S.\ government vs.\ \emph{Washington} the city), whereas the entity-pronoun case requires a different kind of discourse analysis. Therefore, our core contribution is proposing to allocate a dedicated scorer $f^t_a(c, q)$ for each phenomenon type $t$, in addition to the general antecedent scorer $f_a(c, q)$. The overall architecture of our model is shown in Figure~\ref{fig:head}.
Concretely, we extend the \emph{s2e} model with five additional antecedent scorers $f_a^{t}(\cdot, \cdot)$ where $t\in\{\textsc{Pron-Pron}, \textsc{Ent-Pron}, \textsc{Match}, \textsc{Contains},$ $\textsc{Other}\}$, the five categories we list in Table \ref{tab:categories}.
We integrate this into the \emph{s2e} model as follows. In addition to the overall antecedent scorer $f_a(\cdot, \cdot)$, we introduce five antecedent scorers $f_{a}^{t}(\cdot, \cdot)$, one for every phenomenon type $t \in \{pron-pron, pron-ent, match, contain, other\}$, as defined in Table~\ref{tab:categories}.
Given two spans $q$ and $c$, where $c$ appears before $q$, we compute how likely $c$ is a coreferring antecedent of $q$ as follows:
\begin{align*}
F(c, q) =
\begin{cases}
f_m(c) + f_m(q) + f(c, q) & c \neq \varepsilon \\
0 & c = \varepsilon \\
\end{cases}
\end{align*}
\begin{align*}
f(c, q) = f_a(c, q) + \sum_t {f^{t}_{a}(c, q)}\mathbbm{1}_{T(c, q) = t}
\end{align*}
where $T(c, q)$ is a rule-based function to determine the category $t$ of the pair $(q, c)$.
$F(c, q)$ is the final score function, a sum of four scores: $f_m(q)$, how likely span $q$ is to be a mention; $f_m(c)$, how likely span $c$ is to be a mention; $f_a(c, q)$, the ``general'' score of how likely $c$ is to be an antecedent of $q$; and lastly, $f^t_a(c, q)$, the ``expert'' score for category $t$ of how likely $c$ is to be an antecedent of $q$. For each pair of spans $(c=(x_i, x_j), q=(x_k,x_l))$ where $c$ appears before $q$, we define $f^t_a(c, q)$ using the same formula as Equation~\eqref{eq:a_repr} and~\eqref{eq:antecedent} with type-specific parameters:
\begin{align*}
\resizebox{1.0\hsize}{!}{
$\V{a}^{t}_{s}(\V{x_i}) = \text{GeLU} (\mathbf{W}^{t}_{a_s} \V{x_i}) \qquad
\V{a}^{t}_{e}(\V{x_i}) = \text{GeLU} (\mathbf{W}^{t}_{a_e} \V{x_i})
$}
\end{align*}
\begin{equation*}
\resizebox{1.0\hsize}{!}{
$\begin{aligned}
f^t_a(c, q) &= \V{a}^{t}_{s}(\V{x_i}) \cdot \V{B}_{ss}^{t} \cdot \V{a}^{t}_{s}(\V{x_k}) + \V{a}^{t}_{s}(\V{x_i}) \cdot \V{B}_{se}^{t} \cdot \V{a}^{t}_{e}(\V{x_l})
\\
&+ \V{a}^{t}_{e}(\V{x_j}) \cdot \V{B}_{es}^{t} \cdot \V{a}^{t}_{s}(\V{x_k}) + \V{a}^{t}_{e}(\V{x_j}) \cdot \V{B}_{ee}^{t} \cdot \V{a}^{t}_{e}(\V{x_l})
\end{aligned}$
}
\end{equation*}
Our method defines three new functions for each category $t$: $\V{a}^{t}_{s}(\V{x})$, $\V{a}^{t}_{e}(\V{x})$ and $f^t_a(c, q)$, with 6 learnable matrices for each phenomenon and 6 more learnable matrices for the non-category scorer, introducing a total of 36 learnable matrices. The transformer-based encoder and the mention scorer are shared between all the different pairwise scorers.
\paragraph{Learning}
Through training, for each span $q$, our model optimizes the objective function $L_{coref}$ over the sum of probabilities of all true antecedents of $q$:
\begin{align*}
L_{coref}(q) = \log \sum_{\hat{c} \in \mathcal{C}(q) \cap \textsc{gold}(q)}P(\hat{c} \mid q)
\end{align*}
Here, $P(\hat{c} \mid q)$ is a softmax over the $F(\hat{c}, q)$ scores, i.e., over our new score function described in Figure~\ref{fig:head}.
\begin{align*}
P(\hat{c} \mid q) = \frac{\exp{F(\hat{c}, q)}}{\sum\limits_{c' \in \mathcal{C}(q)} \exp{F(c', q)}}
\end{align*}
Furthermore, to maximize performance and to make our scorers ``experts'' in their associated categories, we also directly learn multiple coreference tasks individually: the general task, i.e., any pair $q$ and $c$ regardless of their category, and sub-tasks, each of which is a sub-coreference task restricted to pairs $c$ and $q$ associated with category $t$. For the general task, our model optimizes $L_{g}(q)$, while for the sub-tasks it optimizes $L_{t}(q)$ for each $t$, where each expert learns only from positive and negative pairs of category $t$.
\begin{align*}
L_{t}(q) = \log \sum_{\hat{c} \in \mathcal{C}_t(q) \cap \textsc{gold}(q)}P_t(\hat{c} \mid q)
\end{align*}
In this case, $P_t(\hat{c} \mid q)$ is a softmax over $F_t(\hat{c}, q)$, which optimizes the category scorer $f^t_a(c, q)$ individually, similar to $P_g(\hat{c} \mid q)$, which optimizes the general scorer $f_a(c, q)$ individually.
\begin{align*}
F_t(c, q) =
\begin{cases}
f_m(c) + f_m(q) + f^t_a(c, q) & c \neq \varepsilon \\
0 & c = \varepsilon
\end{cases}
\end{align*}
\begin{align*}
P_t(\hat{c} \mid q) = \frac{\exp{F_t(\hat{c}, q)}}{\sum\limits_{c' \in \mathcal{C}_t(q)} \exp{F_t(c', q)}}
\end{align*}
$\mathcal{C}_t(q) =\{x\,\mid\, T(q, x) = t\}$ is the set of possible assignments for $\hat{c}$ such that the pair $(\hat{c}, q)$ belongs to category $t$. Therefore, $\bigcup\limits_{t} \mathcal{C}_t(q) \subseteq \mathcal{C}(q)$.
Finally, our loss function objective is:
\begin{align*}
L_{tasks}(q) = \sum_{t} L_t(q) + L_g(q)
\end{align*}
\begin{align*}
L(q) = L_{coref}(q) + L_{tasks}(q)
\end{align*}
\paragraph{Inference} Given a span $q$ whose antecedent we want to predict and a set of antecedent candidates $\mathcal{C}(q)$, we score each pair $q$, $\hat{c} \in \mathcal{C}(q)$ as a sum of four scores (see Figure~\ref{fig:head}): (1) $f_m(q)$, whether $q$ is a mention, (2) $f_m(\hat{c})$, whether $\hat{c}$ is a mention, (3) $f_a(\hat{c}, q)$, whether $\hat{c}$ is an antecedent of $q$ from the global perspective, and (4) $f^t_a(\hat{c}, q)$, whether $\hat{c}$ is an antecedent of $q$ from the perspective of the expert for the specific category. We then select the highest-scoring candidate as the predicted antecedent of $q$. To construct the final clusters, we iterate linearly over the predicted pairs and build the clusters from them. We do not use higher-order inference in \model{}, as it has been shown to have a marginal impact on existing models~\citep{xu-choi-2020-revealing}.
\section{Introduction}
It is well known to cricketers of all skill levels that the longer a batsman is in for, the easier batting tends to become. This is probably due to a large number of psychological and technique-related effects: for example, it is generally agreed that it takes a while for a batsman's footwork to ``warm up'' and for them to adapt to the subtleties of the prevailing conditions and the bowling attack. Consequently, it is frequently observed that players are far more likely to be dismissed early in their innings than is predicted by a constant-hazard model, where the probability of getting out on your current score (called the {\it Hazard}) is exactly the same regardless of your current score. Note that a constant hazard model leads to an exponential probability distribution over the non-negative integers (i.e. the geometric distribution) as describing our prediction of a batsman's score. The aim of this paper is to develop a Bayesian method \citep{2004kats.book.....O} for inferring how a player's Hazard varies throughout an innings, thus giving quantitative answers to the questions ``how long do we have to wait until batsmen get their eye in, and how much better do they really become?''. This question has been addressed previously using nonparametric frequentist survival analysis \citep{frequentist,cai}. However, using a nonparametric approach in a Bayesian setting would give the hazard function far too much freedom and would lead to very poorly constrained inferences of the hazard function if applied to individual players. To simplify matters, this paper uses a parametric model, which is effectively a single change-point model with a smooth transition rather than a sudden jump.
\section{Sampling Distribution}
Consider predicting the score $X \in \{0,1,2,3,...\}$ that a batsman will make in a single innings. We will now assign a probability distribution for $X$, conditional on some parameters. Define a hazard function $H(x) \in [0,1]$ as the probability of being dismissed on score $x$ (i.e. $P(X=x)$) given that the batsman is currently on score $x$ (i.e. given $X \geq x$):
\begin{equation}\label{hazardDefinition}
H(x) = P(X = x | X \geq x) = \frac{P(X = x, X\geq x)}{P(X \geq x)} = \frac{P(X = x)}{P(X \geq x)}
\end{equation}
Define a backwards cumulative distribution function by:
\begin{equation}
G(x) = P(X \geq x)
\end{equation}
Using $G(x)$ rather than the conventional cumulative distribution $F(x)=P(X \leq x)$ simplifies some expressions in this case, and also helps because $G(x)$ will also serve as the likelihood function for the ``not-outs'', or uncompleted innings. With this definition, Equation~\ref{hazardDefinition} becomes, after some rearrangement, a difference equation for $G$:
\begin{equation}
G(x+1) = \left[1 - H(x)\right]G(x)
\end{equation}
With the initial condition $G(0) = 1$, this can be solved, giving:
\begin{equation}
G(x) = \prod _{a=0}^{x-1}\left[1 - H(a)\right]
\end{equation}
This is the product of the probabilities of surviving to score $1$ run, times the probability of reaching a score of $2$ runs given that you scored $1$, etc, up to the probability of surviving to score $x$ given that you scored $x-1$. Thus, the probability distribution for $X$ is given by the probability of surviving up to a score $x$ and then being dismissed:
\begin{equation}
P(X = x) = H(x)\prod _{a=0}^{x-1}\left[1 - H(a)\right]
\end{equation}
This is all conditional on a choice of the hazard function $H$, which we will parameterise by parameters $\theta$. Assuming independence (this is not a physical assertion, rather, an acknowledgement that we are not interested in any time dependent effects for now), the probability distribution for a set of scores $\{x_i\}_{i=1}^{I-N}$ ($I$ and $N$ are the number of innings and not-outs respectively) and a set of not-out scores $\{y_i\}_{i=1}^N$ is:
\begin{equation}\label{likelihood}
p(\mathbf{x},\mathbf{y}|\theta) = \prod_{i=1}^{I-N}\left(H(x_i;\theta)\prod _{a=0}^{x_i-1}\left[1 - H(a;\theta)\right]\right) \times \prod_{i=1}^{N}\left(\prod _{a=0}^{y_i-1}\left[1 - H(a;\theta)\right]\right)
\end{equation}
When the data $\{\mathbf{x},\mathbf{y}\}$ are fixed and known, Equation~\ref{likelihood} gives the likelihood for any proposed model of the Hazard function - that is, for any value of $\theta$. The log likelihood is:
\begin{equation}
\log p(\mathbf{x},\mathbf{y}|\theta) = \sum_{i=1}^{I-N} \log H(x_i;\theta) + \sum_{i=1}^{I-N} \sum_{a = 0}^{x_i - 1} \log \left[1 - H(a;\theta) \right] + \sum_{i=1}^{N} \sum_{a = 0}^{y_i - 1} \log \left[1 - H(a;\theta) \right]
\end{equation}
\section{Parameterisation of the Hazard Function}
Rather than seek clever parameterisations of $H(x;\theta)$ and priors over $\theta$ that are conjugate to the likelihood, we take the simpler approach of directly defining a model and prior, and doing the inference with a Metropolis-Hastings sampler (C++ source code and data files for carrying this out will be provided by the author on request). To capture the phenomenon of ``getting your eye in'', the Hazard function will need to be high for low $x$ and decrease to a constant value as $x$ increases and the batsman becomes more comfortable. Note that if $H(x) = h$, a constant value, the sampling distribution becomes a geometric distribution with expectation $\mu = 1/h - 1$. This suggests modelling the Hazard function in terms of an ``effective batting average'' that varies with time, which is helpful because it is easier to think of playing ability in terms of batting averages than dismissal probabilities. $H(x)$ is obtained from $\mu(x)$ as follows:
\begin{equation}
H(x) = \frac{1}{\mu(x) + 1}
\end{equation}
A simple change-point model for $\mu(x)$ would be to have $\mu(x) = \mu_1 + (\mu_2 - \mu_1)\textnormal{Heaviside}(x - \tau)$, where $\tau$ is the change-point. However, a more realistic model would have $\mu$ changing smoothly from one value to the other. Replacing the Heaviside step function with a logistic sigmoid function of the form $1/(1 + e^{-t})$ gives the following model, which will be adopted throughout this paper:
\begin{equation}\label{model}
\mu(x) = \mu_1 + \frac{\mu_2 - \mu_1}{1 + \exp \left(-(x-\tau) / L \right)}
\end{equation}
Hence
\begin{equation}
H(x) = \left[1 + \mu_1 + \frac{\mu_2 - \mu_1}{1 + \exp \left(-(x-\tau) / L \right)}\right]^{-1}
\end{equation}
This has four parameters: $\mu_1$ and $\mu_2$, the two effective abilities of the player, $\tau$, the midpoint of the transition between them, and $L$, which describes how abrupt the transition is. As $L \to 0$ this model resembles the simpler change-point model. A few examples of the kind of hazard models that can be described by varying these parameters are shown in Figure~\ref{models}. It is possible (although we don't really expect it) for the risk of being dismissed to increase as your score increases; more commonly it will decrease. Slow or abrupt transitions are possible and correspond to different values of $L$.
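For concreteness, the parameterisation of Equation~\ref{model} and the log-likelihood of Equation~\ref{likelihood} translate directly into code. The sketch below is written in Python rather than the C++ used for the actual analysis, and the parameter values and scores in the example are merely indicative.
\begin{verbatim}
import numpy as np

def effective_average(x, mu1, mu2, tau, L):
    """mu(x): effective batting average at score x (logistic model)."""
    return mu1 + (mu2 - mu1) / (1.0 + np.exp(-(x - tau) / L))

def hazard(x, theta):
    """H(x) = 1 / (mu(x) + 1): probability of dismissal at score x."""
    return 1.0 / (effective_average(x, *theta) + 1.0)

def log_likelihood(theta, out_scores, not_out_scores):
    """Log-likelihood of completed innings (out_scores) and
    not-out innings (not_out_scores) under the hazard H(.; theta)."""
    ll = 0.0
    for x in out_scores:
        survived = np.arange(x)            # scores survived: 0, ..., x-1
        ll += np.log(hazard(x, theta))
        ll += np.sum(np.log1p(-hazard(survived, theta)))
    for y in not_out_scores:
        survived = np.arange(y)
        ll += np.sum(np.log1p(-hazard(survived, theta)))
    return ll

theta = (15.0, 60.0, 5.0, 2.5)             # roughly Lara-like values
print(log_likelihood(theta, out_scores=[0, 12, 45, 153], not_out_scores=[20]))
\end{verbatim}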
\section{Prior Distribution}\label{priors}
Now we must assign a prior probability distribution on the space of possible values for the parameters $(\mu_1, \mu_2, \tau, L)$. All of these parameters are non-negative and can take any positive real value. It is possible to take into account prior correlations between $\mu_1$ and $\mu_2$ (describing the expectation that a player who is excellent when set is also more likely to be good just after arriving at the crease, and that $\mu_2$ is probably greater than $\mu_1$). However, this will almost certainly be supported by the data anyway. Hence, for simplicity we assigned the more conservative independent $\textnormal{Normal}(30,20^2)$ priors\footnote{A reasonable first-order description of the range of expected variation in batting abilities and hence our state of knowledge about a player whose identity is unspecified - we intend the algorithm to apply to any player. It is possible to parameterise this prior with unknown hyperparameters and infer them from the career data of many players, yielding information about the cricket population as a whole. However, such a calculation is beyond the scope of this paper.}, truncated to disallow negative values:
\begin{equation}\label{prior1}
p(\mu_1,\mu_2) \propto \exp \left[-\frac{1}{2}\left(\frac{\mu_1 - 30}{20}\right)^2 -\frac{1}{2}\left(\frac{\mu_2 - 30}{20}\right)^2\right] \hspace{2cm} \mu_1,\, \mu_2 > 0
\end{equation}
The joint prior for $L$ and $\tau$ is chosen to be independent of the $\mu$'s, and the two are also independent of each other. A typical player can expect to become accustomed to the batting conditions after $\sim$ 20 runs. An exponential prior with mean 20 for $\tau$ and an exponential prior with mean 3 for $L$ were found to produce a range of plausible hazard functions:
\begin{equation}\label{prior2}
p(\tau,L) \propto \exp\left(-\frac{\tau}{20}-\frac{L}{3}\right) \hspace{2cm} \tau,\,L > 0
\end{equation}
Some hazard functions sampled from the prior are displayed in Figure~\ref{models}. The posterior distribution for $\mu_1, \mu_2, \tau$ and $L$ is proportional to \texttt{prior} $\times$ \texttt{likelihood}, i.e. the product of the right hand sides of Equations~\ref{likelihood},~\ref{prior1} and~\ref{prior2}. Qualitatively, the effect of Bayes' theorem is to take the set of possible hazard functions and their probabilities (Figure~\ref{models}) and reweight the probabilities according to how well each hazard function fits the observed data.
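A random-walk Metropolis-Hastings sampler for this posterior can be sketched in a few lines; the version below reuses the \texttt{log\_likelihood} function from the previous sketch, the prior is written up to additive constants, and the proposal step sizes are illustrative rather than tuned.
\begin{verbatim}
import numpy as np

def log_prior(theta):
    """Truncated-normal priors on mu1, mu2 and exponential priors on tau, L
    (up to additive constants)."""
    mu1, mu2, tau, L = theta
    if min(theta) <= 0:
        return -np.inf                     # all four parameters must be positive
    return (-0.5 * ((mu1 - 30.0) / 20.0) ** 2
            - 0.5 * ((mu2 - 30.0) / 20.0) ** 2
            - tau / 20.0 - L / 3.0)

def metropolis(log_post, theta0, n_steps=50000, step=(2.0, 2.0, 1.0, 0.3)):
    """Random-walk Metropolis-Hastings with Gaussian proposals."""
    theta = np.array(theta0, dtype=float)
    lp = log_post(theta)
    samples = []
    for _ in range(n_steps):
        proposal = theta + np.random.randn(4) * np.array(step)
        lp_prop = log_post(proposal)
        if np.log(np.random.rand()) < lp_prop - lp:    # accept/reject
            theta, lp = proposal, lp_prop
        samples.append(theta.copy())
    return np.array(samples)

# def log_post(theta):                      # posterior = prior x likelihood
#     lp = log_prior(theta)
#     if not np.isfinite(lp):
#         return lp
#     return lp + log_likelihood(theta, out_scores, not_out_scores)
# samples = metropolis(log_post, theta0=(30.0, 30.0, 20.0, 3.0))
\end{verbatim}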
\section{The Data}
Data were obtained from the StatsGuru utility on the Cricinfo website (\texttt{http://www.cricinfo.com/}) for the following players: Brian Lara, Chris Cairns, Nasser Hussain, Gary Kirsten, Justin Langer, Shaun Pollock, Steve Waugh and Shane Warne. These players were chosen arbitrarily but subject to the condition of having recently completed long careers. A selection of batsmen, quality all-rounders and bowlers was chosen. The MCMC was run for a large number of steps - mixing is quite rapid because the likelihood evaluation is fast and the parameter space is only 4-dimensional. For brevity, we will display posterior distributions for Brian Lara only. For the other players, summaries such as the posterior means and standard deviations will be displayed instead.
\section{Results}
\subsection{Marginal Posterior Distributions}
In this section we will focus on the posterior distributions of the parameters $(\mu_1, \mu_2, \tau, L)$ for Brian Lara. The marginal distributions (approximated by samples from the MCMC simulation) are plotted in Figure~\ref{lara_results}. These results imply that when Lara is new to the crease, he bats like a player with an average of $\sim$ 15, until he has scored $\sim$ 5 runs. After a transition period with a scale length of $\sim$ 2 runs (although the form of the logistic function shows that the transition is more gradual than indicated by $L$), he then begins to bat as though he has an average of $\sim$ 60. In this case, the analysis has confirmed the folklore about Brian Lara - that if you don't get him out early, you can never really tell when he might get a huge score. ``Form'' doesn't really come into it. The only surprise to emerge from this analysis is the low value of the change-point $\tau$ - Lara is halfway through the process of getting his eye in after scoring only about 5 runs. However, there is still a reasonable amount of uncertainty about the parameters, even though Brian Lara's long test career consisted of 232 innings. The posterior distribution for Brian Lara's parameters does not contain strong correlations (the maximum absolute value in the correlation matrix is 0.4). This is also true of the posterior distributions for the other players. Hence, in the next section, summaries of the marginal distributions for the four parameters will be presented for each player.
\subsection{Summaries}
The estimates and uncertainties (posterior mean $\pm$ standard deviation) for the four parameters are presented in Table~\ref{summaries}. Figure~\ref{points} also shows graphically where each of the eight players is estimated to lie on the $\mu_1$-$\mu_2$ plane. One interesting result that is evident from this analysis is that it is not just the gritty specialist batsmen that are robust in the sense that $\mu_1$ is quite high compared to $\mu_2$. The two aggressive allrounders Shaun Pollock and Chris Cairns also show this trait, and are even more robust than, for example, Justin Langer and Gary Kirsten. It is possible that the technique or mindset shown by these players is one that does not require much warming up, or that it is more difficult to get your eye in at the top of the order than in the middle/lower order - although this ought to be a very tentative conclusion given that it is based on only two examples.
The estimated value of $\mu_1$ for Steve Waugh is lower than all other players in the sample apart from Shane Warne. Even Shaun Pollock appears to be a better batsman than Steve Waugh at the beginning of his innings. The plausibility of this statement can be measured by asking the question ``what is the posterior probability that $H_{Pollock}(0) < H_{Waugh}(0)$?''. From the MCMC output, this probability was found to be 0.92.
The marginal likelihood or ``evidence'' for this entire model and choice of priors can be measured effectively using annealed importance sampling, or AIS \citep{ais,1997PhRvE..56.5018J,1997PhRvL..78.2690J}. AIS is a very generally applicable MCMC-based algorithm that produces an unbiased estimate of $Z = \int \texttt{prior}(\theta) \times \texttt{likelihood}(\theta) \,d\theta$. $Z$ is the probability of the data that were actually observed, under the model, averaged over all possible parameter values (weighted according to the prior). It is the crucial quantity for updating a state of knowledge about which of two distinct models is correct \citep{mackay,2004kats.book.....O}. To test our model for the hazard function, we computed the evidence value for each player for the varying-hazard model and also for a constant hazard model, with a truncated $N(30,20^2)$ prior on the constant effective average. The logarithm of the Bayes Factor (evidence ratio) describing how well the data support the varying-hazard model over constant hazard is shown in the right-hand column of Table~\ref{summaries}. Since these were computed using a Monte-Carlo procedure, they are not exact, but the AIS simulations were run for long enough so that the standard error in the Bayes Factor for each player was less than 5\% of its value. The data decisively favour the varying-hazard model in all cases, and this would be expected to persist under slight changes to the hazard function parameterisation or the prior distributions.
\begin{table}
\caption{Parameter estimates for the players studied in this paper. The right hand column, the logarithm of the Bayes Factor, shows that the data support the varying hazard model over a constant hazard model by a large factor in all cases. The smallest Bayes Factor is still over 2500 to 1 in favour of the varying Hazard Model. Thus, the varying hazard model would likely still be significantly favoured even if the priors for the parameters were slightly modified.\label{summaries}}\vspace{0.5cm}
\centering
\begin{tabular}{l c c c c | c c}
\hline\hline
Player & $\mu_1$ & $\mu_2$ & $\tau$ & $L$ & log$_e(Z)$ & log$_{e}(Z/Z_0)$\\
\hline
Cairns & 26.9 $\pm$ 9.2 & 36.7 $\pm$ 5.5 & 14.5 $\pm$ 17.7 & 3.1 $\pm$ 3.0 & -444.11 & 8.82\\
Hussain & 15.6 $\pm$ 9.1 & 42.1 $\pm$ 4.4 & 5.2 $\pm$ 7.1 & 2.2 $\pm$ 1.0 & -707.16 & 12.28\\
Kirsten & 16.6 $\pm$ 9.3 & 54.1 $\pm$ 5.7 & 7.3 $\pm$ 5.5 & 2.9 $\pm$ 2.4 & -757.16 & 16.94\\
Langer & 24.3 $\pm$ 11.5 & 49.6 $\pm$ 4.9 & 8.9 $\pm$ 14.3 & 2.8 $\pm$ 2.9 & -810.34 & 11.66\\
Lara & 14.5 $\pm$ 8.3 & 60.2 $\pm$ 4.7 & 5.1 $\pm$ 2.9 & 2.8 $\pm$ 1.8 & -1105.65 & 21.62\\
Pollock & 22.1 $\pm$ 7.7 & 38.9 $\pm$ 5.4 & 9.7 $\pm$ 9.3 & 3.1 $\pm$ 2.9 & -519.39 & 7.87\\
Warne & 3.5 $\pm$ 2.0 & 21.1 $\pm$ 2.0 & 1.1 $\pm$ 0.6 & 0.5 $\pm$ 0.4 & -686.59 & 22.54\\
Waugh & 10.5 $\pm$ 5.5 & 57.3 $\pm$ 4.4 & 1.8 $\pm$ 1.6 & 0.8 $\pm$ 1.2 & -1030.69 & 25.29\\
\hline
{\bf Prior} & 32.8 $\pm$ 17.6 & 32.8 $\pm$ 17.6 & 20.0 $\pm$ 20.0 & 3.0 $\pm$ 3.0 & N/A & N/A
\end{tabular}
\end{table}
\subsection{Predictive Hazard Function}
In the usual way \citep{2004kats.book.....O}, a predictive distribution for the next data point (score in the next innings) can be found by averaging the sampling distribution (Equation~\ref{likelihood}) over all possible values of the parameters that are allowed by the posterior. Of course, all of these players have retired, so this prediction is simply a conceptual device to get a single distribution over scores, and hence a single estimated hazard function via Equation~\ref{hazardDefinition}.
This predictive hazard function is plotted (in terms of the effective average) for three players (Brian Lara, Justin Langer and Steve Waugh) in Figure~\ref{predictive}. The latter two are noted for their grit, whilst Brian Lara is considered an aggressive batsman. These different styles may translate to noticeable differences in their predictive hazard functions. It is clear from Figure~\ref{predictive} that Justin Langer is more ``robust'' than Brian Lara, as the difference between his abilities when fresh and when set is smaller ($P\left(\left(\frac{\mu_2}{\mu_1}\right)_{Lara} > \left(\frac{\mu_2}{\mu_1}\right)_{Langer}\right) = 0.80$). This is probably a good trait for an opening batsman. On the other hand, Steve Waugh is actually significantly worse when he is new to the crease than Brian Lara ($P(H_{Waugh}(0) > H_{Lara}(0)) = 0.85$). This surprising result shows that popular perceptions are not necessarily accurate, given that many people regard Steve Waugh as the player they would choose to play ``for their life''\footnote{Technically, the correct choice would be to choose a player that minimised the expected loss, where loss is defined as the amount of injury inflicted on the spectator as a function of the batsman's score.}. However, Steve Waugh's predictive hazard function has a transition to its high equilibrium value that is sooner and faster than both Lara's and Langer's ($P(\tau_{Waugh}<\tau_{Langer} \textnormal{ and } \tau_{Waugh}<\tau_{Lara} \textnormal{ and } L_{Waugh}<L_{Langer} \textnormal{ and } L_{Waugh}<L_{Lara} ) = 0.53$, while the prior probability of this event is 0.0625). Therefore, perhaps his reputation is upheld, except at the very beginning of an innings.
Note that the general tendency of the effective average to drift upwards as a function of score does not imply that all batsman get better the longer their innings goes on - since once the transition has occurred, our model says that the hazard rate should stay basically constant. Instead, the hazard function of the predictive distribution describes a gradual change in our state of knowledge: the longer a batsman stays in, the more convinced we are that our estimate of their overall ability $\mu_2$ should be higher than our prior estimate. This is why there is an upwards tendency in the predictive ability function for all players, even well after the change-point transition is completed.
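Given posterior samples of $(\mu_1,\mu_2,\tau,L)$, the predictive hazard is obtained by averaging the per-sample dismissal and survival probabilities over the posterior; a sketch (reusing the \texttt{hazard} function defined earlier, with a hypothetical array of posterior draws) is given below.
\begin{verbatim}
import numpy as np

def predictive_hazard(samples, x_max=150):
    """Posterior predictive hazard H_pred(x) = P(X = x | X >= x, data).

    samples: (m, 4) array of posterior draws of (mu1, mu2, tau, L).
    P(X = x | data) and P(X >= x | data) are each averaged over the
    posterior draws before taking their ratio.
    """
    xs = np.arange(x_max + 1)
    p_x = np.zeros(x_max + 1)              # E[ P(X = x)  ]
    g_x = np.zeros(x_max + 1)              # E[ P(X >= x) ]
    for theta in samples:
        h = hazard(xs, theta)              # vectorised dismissal probabilities
        surv = np.concatenate(([1.0], np.cumprod(1.0 - h)[:-1]))   # G(x)
        p_x += h * surv / len(samples)
        g_x += surv / len(samples)
    return p_x / g_x

# effective predictive average at each score: 1 / H_pred(x) - 1
# eff_avg = 1.0 / predictive_hazard(samples) - 1.0
\end{verbatim}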
\section{Conclusions}
This paper has presented a simple model for the hazard function of a batsman in test or first class cricket. Applying the model to data from several cricketers, we found the expected conclusion: that batsmen are more vulnerable towards the beginning of the innings. However, this analysis now provides a quantitative measurement of this effect, showing how significant it is, and the fact that there is substantial variation in the size of the effect for different cricketers. Surprisingly, we found that Steve Waugh was the second most vulnerable player in the sample at the beginning of an innings - only Shane Warne, a bowler, was more vulnerable. Even Shaun Pollock is better at the beginning of his innings. This surprising result would have been very hard to anticipate.
From this starting point, there are several possible avenues for further research. One interesting study would involve much larger samples of players so we can identify any trends. For instance, is it true that all-rounders are more robust batsmen in general, or are Chris Cairns and Shaun Pollock atypical? Also, it should be possible to create a more rigorous definition of the notion of robustness discussed above. Once this is done, we could characterise the population as a whole, and search for possible correlations between batting average, strike-rate (runs scored per 100 balls faced) and robustness. Modelling the entire cricket population would also allow for a more objective choice for the parameterisation of the hazard function, and the prior distribution over its parameter space. Depending on the results, these kinds of analyses could have implications for selection policy, especially for opening batsmen where consistency is a highly desirable trait.
\bibliographystyle{plainnat}
\section{Introduction}\label{sci-sec:introduction}
SciSports (\url{http://www.scisports.com/}) is a Dutch sports analytics company taking a data-driven approach to football. The company conducts scouting activities for football clubs, gives advice to football players about which football club might suit them best, and quantifies the abilities of football players through various performance metrics.
So far, most of these activities have been supported by either coarse event data, such as line-ups and outcomes of matches, or more fine-grained event data such as completed passes, distances covered by players, yellow cards received and goals scored.
In the long term, SciSports aims to install specialized cameras and sensors across football fields to create a two- and three-dimensional virtual rendering of the matches, by recording players' coordinate positions and gait data in millisecond time intervals. From this massive amount of data, SciSports is interested in predicting future game courses and extracting useful analytics. Insights gained from this learning process can be used as preliminary steps towards determining the quality and playing style of football players. In this project we based our work on a dataset containing the recorded two-dimensional positions of all players and the ball during 14 standard football matches at $0.1$ second time intervals.
Football kinematics such as acceleration, maximal sprinting speed and distance covered during a match can be extracted automatically from trajectory data. However, there are also important unobservable factors/features determining the soccer game, e.g., a player can be of enormous value to a game without being anywhere near the ball. These latent factors are key to understanding the drivers of motion and their roles in predicting future game states. There are in general two basic approaches to uncovering these factors: we can either postulate a model or structure for these factors, based on physical laws and other domain knowledge (model-based), or we can use machine learning techniques and let the algorithms discover these factors on their own (data-driven).
Model-based approaches have been widely used to analyze football trajectories. Examples in the literature include statistical models such as state space models \cite{sci-bib:yu2003a,sci-bib:yu2003b,sci-bib:ren2008} and physical models based on equations of motion and aerodynamics \cite{sci-bib:goff2009}. These methods have the advantage of producing interpretable results and they can quickly give reasonable predictions using relatively few past observations. In Section~\ref{sci-sec:kalman}, we build state space models based on principles of Newtonian mechanics to illustrate these approaches.
The need to specify an explicit model is a drawback, however, since human players probably follow complicated rules of behavior. To this end, data-driven approaches embody the promise of taking advantage of having large amounts of data through machine learning algorithms, without specifying the model; in a sense the model is chosen by the algorithm as part of the training.
We implemented a Variational Autoencoder (VAE), as introduced by \cite{sci-bib:vae}, and a Generative Adversarial Net (GAN) as developed in \cite{sci-bib:Goodfellow2014}.
The paper is organized as follows. In the next section, we describe the two-dimensional positional data used for our analyses. We present the model-based state-space approach in Section \ref{sci-sec:methods} and the data-driven methods based on GANs and VAEs in Sections \ref{sci-sec:gan} and \ref{sci-sec:vae}, respectively. We introduce the discriminator network to differentiate movements in \ref{sci-sec:dis}. We conclude in Section~\ref{sci-sec:con} and discuss future work.
\medskip
The \texttt{R} and \texttt{Python} codes used to reproduce all our analyses can be found in \url{https://bitbucket.org/AnatoliyBabic/swi-scisports-2018}.
\begin{figure}[t]
\centerline{\includegraphics[scale=0.42]{data}}
\caption{A snapshot in time ($\approx$ 2 minutes into the game) of the positional data for all players (blue and red teams) and the ball (circle). Note that the goalkeepers can be identified as the players standing at the leftmost and rightmost positions on the field.}
\label{sci-fig:game}
\end{figure}
\section{The data}\label{sci-sec:data}
The data that we used for this project was provided by SciSports and is taken from $14$ complete $90$-minute football matches. For each player and the ball ($23$ entities in total) the $(x,y)$-coordinates on the field have been recorded with a resolution of $10$ cm and $10$ frames per second; i.e., the trajectory of a player over a $10$-second timespan corresponds to a $(2\times 100)$-vector of $(x,y)$-coordinates. The field measures $68$ by $105$ meters, and the origin of the coordinate system is the center of the pitch. For all football fields illustrated in this report, the dimensions are given in centimeters, which means that the field corresponds to the rectangle $[-5250,5250]\times [-3400,3400]$.
For illustration, Figure \ref{sci-fig:game} shows a single-time snapshot of the positional data for the ball and all players.
\section{Methods: model-based}\label{sci-sec:methods}
In this section we describe a model-based approach to extract information from the data. With this approach we have two goals: first, to extract velocities from the position data in such a way that the impact of the noise in position measurements is minimized, and secondly, to estimate acceleration profiles of different players.
\subsection{Newtonian mechanics and the Kalman filter}\label{sci-sec:kalman}
\subsection*{A single football player}
We first consider the case of modeling the movement of one football player in the first match.
We assume that this player is not a goalkeeper, since we would like to model movement ranges that span at least half the field. The data provides a player's $(x,y)$-position at every fixed $100$ milliseconds as long as he remains in the game. Let $\Delta t$ be the time difference between successive timesteps, and let us denote a player's position in the $(x,y)$ plane at timestep $t$ as $\boldsymbol{x}_t$, with the velocity and acceleration as $\boldsymbol{v}_t$ and $\boldsymbol{a}_t$; they are related by $\boldsymbol{a}_t=d\boldsymbol{v}_t/dt$ and $\boldsymbol{v}_t=d\boldsymbol{x}_t/dt$. By approximating these derivatives by finite differences we obtain
\begin{align}\label{sci-eq:newton}
\boldsymbol{x}_t&=\boldsymbol{x}_{t-1}+\Delta t\,\boldsymbol{v}_{t-1}+\frac{1}{2}(\Delta t)^2\boldsymbol{a}_t,\nonumber\\
\boldsymbol{v}_t&=\boldsymbol{v}_{t-1}+\Delta t\, \boldsymbol{a}_t.
\end{align}
We now model the acceleration $\boldsymbol a_t$. We assume that at each timestep $t$ the acceleration $\boldsymbol a_t$ is independently and normally distributed with mean $\boldsymbol{0}$ and unknown covariance matrix $\boldsymbol{Q}$ (we write this as $\boldsymbol a_t \sim \mathrm N(\boldsymbol 0,\boldsymbol Q)$). Since acceleration is proportional to force by Newton's second law of motion, this induces a normal distribution on the corresponding force exerted by the player, and the exponential decay of its tails translate to natural limits imposed on muscular work output.
In view of \eqref{sci-eq:newton}, we take position and velocity $(\boldsymbol{x}_t,\boldsymbol{v}_t)$ as our underlying state vector, and we consider the following model:
\begin{align}
\label{sci-eq:latent}
\begin{pmatrix}\boldsymbol{x}_t\\ \boldsymbol{v}_t\end{pmatrix}
&=\underbrace{\begin{pmatrix}\boldsymbol{I}_2&\Delta t\boldsymbol{I}_2\\ \boldsymbol{0}&\boldsymbol{I}_2\end{pmatrix}}_{\boldsymbol{T}_t}\begin{pmatrix}\boldsymbol{x}_{t-1}\\ \boldsymbol{v}_{t-1}\end{pmatrix}
+\underbrace{\begin{pmatrix}\frac{1}{2}(\Delta t)^2\boldsymbol{I}_2\\ \Delta t\boldsymbol{I}_2\end{pmatrix}}_{\boldsymbol{R}_t}\boldsymbol{a}_t,\\
\boldsymbol{\eta}_t&=
\underbrace{\begin{pmatrix}
1&0&0&0\\
0&1&0&0
\end{pmatrix}}_{\boldsymbol{W}_t}
\underbrace{\begin{pmatrix}
\boldsymbol{x}_t\\
\boldsymbol{v}_t
\end{pmatrix}}_{\boldsymbol{z}_t}
+
\;\boldsymbol{\varepsilon}_t,
\label{sci-eq:observe}
\end{align}
In the state equation \eqref{sci-eq:latent}, the state vector $\boldsymbol{z}_t:=(\boldsymbol{x}_t,\boldsymbol{v}_t)$ propagates forward in time according to the Newtonian dynamics of \eqref{sci-eq:newton}, driven by an acceleration $\boldsymbol{a}_t\sim\mathrm{N}(\boldsymbol{0},\boldsymbol{Q})$.
In the observation equation \eqref{sci-eq:observe}, the observed quantity~$\boldsymbol \eta_t$ records the player's position and not his/her velocity, and we assume that these position data are recorded with Gaussian measurement errors: $\boldsymbol{\varepsilon}_t\sim\mathrm{N}(\boldsymbol{0},\boldsymbol{\Sigma})$ with $\boldsymbol{\Sigma}=\mathrm{Diag}(\sigma_x^2,\sigma_y^2)$.
We initialize $\boldsymbol{z}_1\sim\mathrm{N}(\boldsymbol{0},\boldsymbol{P}_1)$ and we assume that $\boldsymbol{\varepsilon}_t,\boldsymbol{a}_t$, and $\boldsymbol{z}_1$ are mutually independent, and independent across different times.
\medskip
We use a Kalman filter to integrate this model with the measurements; this should lead to an estimate for the velocity that is less noisy than simply calculating finite differences. However, the Kalman filter parameters depend on the noise levels as characterized by the player's acceleration variance $\boldsymbol Q$ and the measurement error parameters $\sigma_x,\sigma_y$, and these we do not know; therefore we combine the Kalman filter with parameter estimation.
\medskip
In each Kalman-filter timestep we assume that we have access to observations~$\boldsymbol\eta_t$, and we compute the one-step state prediction $\boldsymbol{Z}_{t+1}=\mathrm{E}(\boldsymbol{z}_{t+1}|\boldsymbol{\eta}_t,\dotsc,\boldsymbol{\eta}_1)$ and its error $\boldsymbol{\delta}_t=\boldsymbol{\eta}_t-\boldsymbol{W}_t\boldsymbol{Z}_t$, in conjunction with their estimated covariance matrices $\boldsymbol{P}_{t+1}=\mathrm{Var}(\boldsymbol{z}_{t+1}|\boldsymbol{\eta}_t,\dotsc,\boldsymbol{\eta}_1)$ and $\boldsymbol{F}_t=\mathrm{Var}(\boldsymbol{\delta}_t)=\boldsymbol{W}_t\boldsymbol{P}_t\boldsymbol{W}_t^T+\boldsymbol{\Sigma}$. The Kalman recursion formulas for these calculations are given by (see Appendix A of \citealp{sci-bib:kfas})
\begin{subequations}
\label{sci-eq:Kalman}
\begin{align}
\boldsymbol{Z}_{t+1}&=\boldsymbol{T}_t(\boldsymbol{Z}_t+\boldsymbol{K}_t\boldsymbol{F}_t^{-1}\boldsymbol{\delta}_t)\\
\boldsymbol{P}_{t+1}&=\boldsymbol{T}_t(\boldsymbol{P}_t-\boldsymbol{K}_t\boldsymbol{F}_t^{-1}\boldsymbol{K}_t^T)\boldsymbol{T}_t^T+\boldsymbol{R}_t\boldsymbol{Q}\boldsymbol{R}_t^T,
\end{align}
\end{subequations}
where $\boldsymbol{K}_t=\boldsymbol{P}_t\boldsymbol{W}_t^T$. For given values of $\boldsymbol Q$ and $\sigma_x,\sigma_y$ this leads to time courses of the state $\boldsymbol Z_t$, the covariance $\boldsymbol P_t$, and the derived quantities $\boldsymbol \delta_t$ and $\boldsymbol F_t$.
We have a total of $6$ unknown parameters in our state space model, i.e., the two diagonal entries of $\boldsymbol{\Sigma}$ and all the $2\times2$ entries of $\boldsymbol{Q}$ (we did not exploit the symmetry of $\boldsymbol Q$).
Given the result of a calculation for given $\boldsymbol Q$ and $\sigma_x,\sigma_y$, the log-likelihood function~\citep{sci-bib:kfas} is given by
\begin{align}
\label{sci-eq:log-likelihood-Kalman}
l_n=-\frac{np}{2}\log{(2\pi)}-\frac{1}{2}\sum_{t=1}^n\left(\log{\det{\boldsymbol{F}_t}}+\boldsymbol{\delta}_t^T\boldsymbol{F}_t^{-1}\boldsymbol{\delta}_t\right),
\end{align}
where $p$ is the dimension of $\boldsymbol{\eta}_t$ at a fixed $t$, which in our present case is $2$. We then compute the maximum likelihood estimator for the $6$ covariance parameters using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) optimization algorithm.
This setup leads to the following multilevel iteration.
\begin{enumerate}
\item We select the first 10 timesteps from the data; this means that we know the values of $\boldsymbol \eta_1$ to $\boldsymbol \eta_{10}$.
\item \label{sci-en:iteration-Kalman}
At the outer level we maximize the log-likelihood function~\eqref{sci-eq:log-likelihood-Kalman} with respect to~$\boldsymbol Q$ and $\sigma_x,\sigma_y$.
\item At the inner level, i.e.\ for each evaluation of the log-likelihood, we run the Kalman filter~\eqref{sci-eq:Kalman} for 10 steps, ending at time $t=11$.
\item After completing the optimization over $\boldsymbol Q$ and $\sigma_x,\sigma_y$ for this choice of 10 timesteps, we have both an estimate of $\boldsymbol Q$ and $\sigma_x,\sigma_y$ during that period and a prediction for $\boldsymbol z_t = (\boldsymbol x_t,\boldsymbol v_t)$, for $t=1,\dots,11$. We then shift the 10-step window by one timestep, to $2,\dots,11$, and go back to step~\ref{sci-en:iteration-Kalman}.
\end{enumerate}
At the end of this process, we have for each 10-step window of times a series of estimates of $\boldsymbol x_t$, $\boldsymbol v_t$, $\boldsymbol P_t$, $\boldsymbol Q$, and $\sigma_x,\sigma_y$.
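To make the recursion concrete, the following NumPy sketch runs Equation~\eqref{sci-eq:Kalman} over a single window of observed positions and accumulates the log-likelihood terms of Equation~\eqref{sci-eq:log-likelihood-Kalman} for fixed $\boldsymbol{Q}$ and $\boldsymbol{\Sigma}$. It is a simplified stand-in for the \texttt{KFAS}-based \texttt{R} implementation (in particular, it replaces the exact diffuse initialisation by a simple large-variance prior), and the toy data at the bottom are invented.
\begin{verbatim}
import numpy as np

dt = 0.1                                         # 100 ms between frames
T = np.block([[np.eye(2), dt * np.eye(2)],       # state transition matrix T_t
              [np.zeros((2, 2)), np.eye(2)]])
R = np.vstack([0.5 * dt ** 2 * np.eye(2),        # acceleration loading R_t
               dt * np.eye(2)])
W = np.hstack([np.eye(2), np.zeros((2, 2))])     # observation matrix W_t

def kalman_window(eta, Q, Sigma, p1_scale=1e7):
    """One window of the filtering recursion, for fixed Q and Sigma.

    eta   : (n, 2) array of observed (x, y) positions (in cm)
    Q     : (2, 2) acceleration covariance; Sigma : (2, 2) measurement covariance
    Returns the one-step-ahead state predictions Z_t and the log-likelihood.
    """
    Z = np.zeros(4)
    P = p1_scale * np.eye(4)      # large-variance prior (the paper uses exact
                                  # diffuse initialisation instead)
    Zs, loglik = [Z], 0.0
    for y in eta:
        delta = y - W @ Z                          # innovation
        F = W @ P @ W.T + Sigma                    # innovation covariance
        K = P @ W.T
        Z = T @ (Z + K @ np.linalg.solve(F, delta))
        P = T @ (P - K @ np.linalg.solve(F, K.T)) @ T.T + R @ Q @ R.T
        loglik -= 0.5 * (np.log(np.linalg.det(F))
                         + delta @ np.linalg.solve(F, delta)
                         + 2.0 * np.log(2.0 * np.pi))
        Zs.append(Z)
    return np.array(Zs), loglik

# toy usage: a player moving east at 3 m/s, observed with ~10 cm noise
eta = np.column_stack([np.arange(10) * 30.0, np.zeros(10)])
eta += np.random.randn(10, 2) * 10.0
Zs, ll = kalman_window(eta, Q=1e4 * np.eye(2), Sigma=100.0 * np.eye(2))
print(Zs[-1])   # predicted (x, y, vx, vy) one step beyond the window
\end{verbatim}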
\begin{remark}
Each of the 11-step runs of the Kalman filter equations~\eqref{sci-eq:Kalman} needs to be initialized. We initialize $\boldsymbol z_1$ randomly, drawn from $\mathrm N(\boldsymbol 0, \boldsymbol P_1)$, as mentioned above. Concerning the choice of $\boldsymbol{P}_1$, a commonly used default is to set $\boldsymbol{P}_1=10^7\boldsymbol{I}$ as a diffuse prior distribution. However, this is numerically unstable and prone to cumulative roundoff errors. Instead, we use the exact diffuse initialization method by decomposing $\boldsymbol{P}_1$ into its diffusive and non-diffusive parts; for more details see \cite{sci-bib:koopman2003}.
\end{remark}
\begin{remark}
In actual implementation, some technical modifications are needed to speed up computations, particularly when $\boldsymbol{\eta}_t$ consists of high-dimensional observations at each time point (which happens when we estimate all 23 entities, as we do below). To solve for this dimensionality issue and to avoid direct inversion of $\boldsymbol{F}_t$, the state space model of \eqref{sci-eq:observe} and \eqref{sci-eq:latent} is recast into an equivalent univariate form and the latent states are estimated using a univariate Kalman filter (cf.~\citealp{sci-bib:koopman2000}).
\end{remark}
The Kalman filter algorithm and parameter estimation (including the univariate formulation and diffuse initialization) were performed using the \texttt{KFAS} package (see \citealp{sci-bib:kfas}) in the \texttt{R} software package.
\subsection*{Results for a single player}
We modeled the movement of the player with number 3, who appears to hold the position of left central midfielder, and who was on the pitch for the entire game. As described above, we use a sliding window of $10$ training samples for predictions, such that we first use $10$ time points to predict the $11$th point (one-step-ahead), then we shift the window one timestep ahead and use the next $10$ time points to predict the $12$th point, and so on.
\begin{figure}[ht!]
\centerline{\includegraphics[scale=0.43]{prediction}}
\caption{Blue: One-step-ahead predicted position, Red: True recorded position.}
\label{sci-fig:prediction}
\end{figure}
\begin{figure}[ht!]
\centerline{\includegraphics[scale=0.45]{velocity}}
\caption{One-step-ahead predicted velocity vector field $\boldsymbol{v}_t$, arrow points to direction of motion and vector length is speed.}
\label{sci-fig:velocity}
\end{figure}
Figure \ref{sci-fig:prediction} shows one-step-ahead predicted positions of our midfielder (blue dots) for the first 2500 time points. We see that the state space model is able to make accurate predictions (when compared to the red true positions), even though we have used only the past $10$ locations in our algorithm. Moreover, the model is able to trace out complicated movements and sharp corners, as is evident from the figure.
As mentioned above, one reason for applying a Kalman filter to the data is to extract the velocity. Figure \ref{sci-fig:velocity} illustrates the velocity vectors as arrows tangent to the position curve.
We also plot the scalar speeds $\|\boldsymbol{v}_t\|$ against the 2500 time points in Figure~\ref{sci-fig:speed}.
To see the correspondence between these three figures, let us focus on a distinctive stretch of movement made by our midfielder, who starts at $(0, -1000)$, sprints towards the goal post in the East, makes two loops towards the North, and then moves back across the field to the West, tracing out a somewhat elongated rectangle on the field. We know that he is sprinting towards the goal from Figure \ref{sci-fig:velocity}, because of the long arrows pointing to the East, with exact magnitudes given by the peak slightly after time $1000$ in Figure \ref{sci-fig:speed}. The midfielder moves at relatively lower speeds when making the double loop (from time $1200$ to $1500$ in Figure \ref{sci-fig:speed}) and then picks up momentum when moving towards the West, as is evident from the marked increase in speed after time $1500$.
\begin{figure}[ht!]
\centerline{\includegraphics[scale=0.3]{speed}}
\caption{One-step-ahead predicted speed $\|\boldsymbol{v}_t\|$ ($y$-axis) against timesteps ($x$-axis).}
\label{sci-fig:speed}
\end{figure}
Figure \ref{sci-fig:prediction5} shows the predictive performance of this model for longer time horizons; in this case we are using $10$ time points to predict $5$ steps ahead. When compared with the one-step-ahead case of Figure \ref{sci-fig:prediction}, we see that there is some deterioration in this model's predictive capability, particularly for places where the player's trajectory is curved. From this plot, we can deduce that positional uncertainties are the greatest when the midfielder is moving in loops or in circles.
\begin{figure}[ht!]
\centerline{\includegraphics[scale=0.45]{prediction5}}
\caption{Blue dot: 5-step-ahead predicted position; blue square: $95\%$-prediction rectangle; red dot: true recorded position. The horizontal and vertical lines are artefacts of the algorithm.}
\label{sci-fig:prediction5}
\end{figure}
\subsection*{Results for the ball and all $22$ football players}
Let us now consider the general case of modeling all $22$ football players, including goalkeepers, and the ball (collectively called `entities'). A snapshot of the positional data at around $2$ minutes into the game is shown in Figure \ref{sci-fig:game}. We choose the same equations for all entities, giving for all $k=1,\dotsc,23$,
\begin{align}\label{sci-eq:newtonp}
\boldsymbol{x}_t^{(k)}&=\boldsymbol{x}_{t-1}^{(k)}+\Delta t\,\boldsymbol{v}_{t-1}^{(k)}+\frac{1}{2}(\Delta t)^2\boldsymbol{a}_t^{(k)},\nonumber\\
\boldsymbol{v}_t^{(k)}&=\boldsymbol{v}_{t-1}^{(k)}+\Delta t\,\boldsymbol{a}_t^{(k)}.
\end{align}
By stacking up $23$ copies of the single player case \eqref{sci-eq:observe} and \eqref{sci-eq:latent}, we convert the equations of motion above to the following state space model:
\begin{align*}
\begin{pmatrix}
\boldsymbol{x}_t^{(1)}\\ \boldsymbol{v}_t^{(1)}\\ \boldsymbol{x}_t^{(2)}\\ \boldsymbol{v}_t^{(2)}\\ \vdots\\ \boldsymbol{x}_t^{(23)}\\ \boldsymbol{v}_t^{(23)}\end{pmatrix}
&=\begin{pmatrix}
\boldsymbol{I}_2&\Delta t\boldsymbol{I}_2&\boldsymbol{0}&\boldsymbol{0}&\cdots&\boldsymbol{0}&\boldsymbol{0}\\
\boldsymbol{0}&\boldsymbol{I}_2&\boldsymbol{0}&\boldsymbol{0}&\cdots&\boldsymbol{0}&\boldsymbol{0}\\
\boldsymbol{0}&\boldsymbol{0}&\boldsymbol{I}_2&\Delta t\boldsymbol{I}_2&\cdots&\boldsymbol{0}&\boldsymbol{0}\\
\boldsymbol{0}&\boldsymbol{0}&\boldsymbol{0}&\boldsymbol{I}_2&\cdots&\boldsymbol{0}&\boldsymbol{0}\\
\vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\
\boldsymbol{0}&\boldsymbol{0}&\boldsymbol{0}&\boldsymbol{0}&\cdots&\boldsymbol{I}_2&\Delta t\boldsymbol{I}_2\\
\boldsymbol{0}&\boldsymbol{0}&\boldsymbol{0}&\boldsymbol{0}&\cdots&\boldsymbol{0}&\boldsymbol{I}_2\end{pmatrix}
\begin{pmatrix}
\boldsymbol{x}_{t-1}^{(1)}\\ \boldsymbol{v}_{t-1}^{(1)}\\ \boldsymbol{x}_{t-1}^{(2)}\\ \boldsymbol{v}_{t-1}^{(2)}\\ \vdots\\ \boldsymbol{x}_{t-1}^{(23)}\\ \boldsymbol{v}_{t-1}^{(23)}\end{pmatrix}\\
&\qquad+\begin{pmatrix}\frac{1}{2}(\Delta t)^2\boldsymbol{I}_2&\boldsymbol{0}&\boldsymbol{0}&\cdots&\boldsymbol{0}&\boldsymbol{0}\\
\Delta t\boldsymbol{I}_2&\boldsymbol{0}&\boldsymbol{0}&\cdots&\boldsymbol{0}&\boldsymbol{0}\\
\boldsymbol{0}&\frac{1}{2}(\Delta t)^2\boldsymbol{I}_2&\boldsymbol{0}&\cdots&\boldsymbol{0}&\boldsymbol{0}\\
\boldsymbol{0}&\Delta t\boldsymbol{I}_2&\boldsymbol{0}&\cdots&\boldsymbol{0}&\boldsymbol{0}\\
\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\
\boldsymbol{0}&\boldsymbol{0}&\boldsymbol{0}&\cdots&\boldsymbol{0}&\frac{1}{2}(\Delta t)^2\boldsymbol{I}_2\\
\boldsymbol{0}&\boldsymbol{0}&\boldsymbol{0}&\cdots&\boldsymbol{0}&\Delta t\boldsymbol{I}_2\end{pmatrix}\begin{pmatrix}\boldsymbol{a}_t^{(1)}\\ \boldsymbol{a}_t^{(2)}\\ \vdots\\ \boldsymbol{a}_t^{(23)}\end{pmatrix},
\end{align*}
with measurement vector
\begin{align*}
\boldsymbol{y}_t=\begin{pmatrix}
\boldsymbol{I}_2&\boldsymbol{0}&\boldsymbol{0}&\boldsymbol{0}&\boldsymbol{0}&\boldsymbol{0}&\cdots&\boldsymbol{0}&\boldsymbol{0}\\
\boldsymbol{0}&\boldsymbol{0}&\boldsymbol{I}_2&\boldsymbol{0}&\boldsymbol{0}&\boldsymbol{0}&\cdots&\boldsymbol{0}&\boldsymbol{0}\\
\boldsymbol{0}&\boldsymbol{0}&\boldsymbol{0}&\boldsymbol{0}&\boldsymbol{I}_2&\boldsymbol{0}&\cdots&\boldsymbol{0}&\boldsymbol{0}\\
\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\
\boldsymbol{0}&\boldsymbol{0}&\boldsymbol{0}&\boldsymbol{0}&\boldsymbol{0}&\boldsymbol{0}&\cdots&\boldsymbol{I}_2&\boldsymbol{0}
\end{pmatrix}\begin{pmatrix}
\boldsymbol{x}_t^{(1)}\\ \boldsymbol{v}_t^{(1)}\\ \boldsymbol{x}_t^{(2)}\\ \boldsymbol{v}_t^{(2)}\\ \vdots\\ \boldsymbol{x}_t^{(23)}\\ \boldsymbol{v}_t^{(23)}\end{pmatrix}+\begin{pmatrix}
\boldsymbol{\varepsilon}_t^{(1)}\\ \boldsymbol{\varepsilon}_t^{(2)}\\ \vdots \\ \boldsymbol{\varepsilon}_t^{(23)}\end{pmatrix}.
\end{align*}
Here the measurement error vector is $(\boldsymbol{\varepsilon}_t^{(1)}\quad\boldsymbol{\varepsilon}_t^{(2)}\quad\cdots\quad\boldsymbol{\varepsilon}_t^{(23)})\sim\mathrm{N}(\boldsymbol{0},\boldsymbol{\Sigma})$ with $\boldsymbol{\Sigma}=\mathrm{Diag}(\sigma_{x,1}^2,\sigma_{y,1}^2,\sigma_{x,2}^2,\sigma_{y,2}^2,\dotsc,\sigma_{x,23}^2,\sigma_{y,23}^2)$ and the acceleration vector $(\boldsymbol{a}_t^{(1)}\cdots\boldsymbol{a}_t^{(23)})\sim\mathrm{N}(\boldsymbol{0},\boldsymbol{Q})$.
It would be interesting to use this framework to model the interactions between different football players and the ball through the covariance matrix $\boldsymbol{Q}$; obviously, in a real match one expects a strong correlation between all entities. An unstructured~$ \boldsymbol{Q}$ consists of $46^2=2116$ parameters and adding the diagonal elements of $\boldsymbol{\Sigma}$ yields a total of $2162$ parameters. We found that this general case takes a prohibitively long time to optimize, and we have to simplify the problem by imposing additional structure on~$\boldsymbol{Q}$. To keep computations manageable, we disregard correlations between entities, by assuming that $\boldsymbol{Q}$ is a block diagonal matrix given by $\boldsymbol{Q}=\mathrm{BlockDiag}(\boldsymbol{Q}_1,\dotsc,\boldsymbol{Q}_{23})$ where $\boldsymbol{Q}_k=\mathrm{Var}(\boldsymbol{a}_t^{(k)})$ for $k=1,\dotsc,23$. In other words, each player's movement is modeled using his/her own state space equations that are independent of the other players.
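To illustrate the bookkeeping of the joint model, the system matrices for all $23$ entities can be assembled from the single-entity blocks with Kronecker products. The sketch below (Python/\texttt{numpy}; the covariances are placeholders and $\Delta t = 0.1$\,s is assumed) only shows the construction of the matrices; the fitting was again done with \texttt{KFAS}.
\begin{verbatim}
import numpy as np
from scipy.linalg import block_diag

dt, K = 0.1, 23                # sampling interval (assumed), number of entities
I2, O2 = np.eye(2), np.zeros((2, 2))

# Single-entity blocks corresponding to the equations of motion
T1 = np.block([[I2, dt * I2],
               [O2, I2]])                # 4x4 transition block
R1 = np.vstack([dt**2 / 2 * I2,
                dt * I2])                # 4x2 block loading the acceleration noise
Z1 = np.hstack([I2, O2])                 # 2x4 observation block (positions only)

# Stacked system: state dimension 4*23 = 92, observation dimension 2*23 = 46
T = np.kron(np.eye(K), T1)
R = np.kron(np.eye(K), R1)
Z = np.kron(np.eye(K), Z1)

# Block-diagonal acceleration covariance Q = BlockDiag(Q_1, ..., Q_23)
Q = block_diag(*[np.eye(2) for _ in range(K)])   # placeholder 2x2 blocks

# Measurement error covariance Sigma = Diag(sigma_{x,1}^2, ..., sigma_{y,23}^2)
Sigma = np.diag(np.ones(2 * K))                  # placeholder variances
\end{verbatim}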
If the prediction horizon is short, e.g., one step ahead, we found that this choice of $\boldsymbol{Q}$ gives reasonable predictive performance as shown in Figure \ref{sci-fig:footballgame}. Here we have used $5$ past time points to predict one timestep ahead and we see that the one-step-ahead predicted player's position (blue) closely follows the truth (red) over the span of $206$ time points. Moreover, the path of the ball is instantly recognizable as the zig-zag dotted line (due to it being the fastest object) embedded among the network of trajectories. If longer prediction horizons are sought, then this simplifying assumption might not give good performance and cross-covariance terms between players and ball are needed. To that end, one can consider low-rank approximations or imposing sparsity constraints on $\boldsymbol{Q}$. Alternatively, we can turn to machine-learning methods by training a (deep) multi-level neural network to learn these complex interactions; this is the subject of the next section.
\begin{figure}[ht!]
\centerline{\includegraphics[scale=0.45]{footballgame}}
\caption{One-step-ahead predicted positions for the ball and all $22$ players (blue) with their true paths (red). The path of the ball is the zig-zag dotted line.}
\label{sci-fig:footballgame}
\end{figure}
\section{Methods: data-driven}
\label{sci-sec:data-driven}
In this section we describe machine-learning techniques to model spatio-temporal trajectories of players and the ball throughout the game, in order to acquire meaningful insight into football kinematics. Our philosophy is to construct networks that can \emph{generate} trajectories that are \emph{statistically indistinguishable} from the actual data. Successfully trained networks of this type have a number of benefits. They allow one to quickly generate more data; the components of such networks can be re-used (we show an example in Section~\ref{sci-sec:dis}); when they produce `latent spaces', these latent spaces may be interpretable by humans; and the structure of successful networks and the values of the trained parameters should, in theory, give information about the trajectories themselves.
In Section \ref{sci-sec:gan}, we use Generative Adversarial Networks, such that two networks are pitted against each other to generate trajectories. Next, in Section \ref{sci-sec:vae}, we consider another class of networks called Variational Autoencoders, where we do data compression and train the network to replicate trajectories by learning important features. Finally, in Section~\ref{sci-sec:dis} we investigate a method to discriminate between walking patterns of two different football players.
\subsection{Generative Adversarial Network}\label{sci-sec:gan}
Generative Adversarial Networks (GANs) are deep neural net architectures introduced by \cite{sci-bib:Goodfellow2014} which exploit the competition between two (adversarial) networks: a generative network called the Generator and a discriminative network called the Discriminator.
Both the Generator and Discriminator are trained with a training set of real observations, and against each other. The Discriminator is a classifier; it has to learn to differentiate between real and generated observations, labeling them as ``realistic'' and ``fake'' respectively. The Generator, on the other hand, has to learn to reproduce features of the real data and generate new observations which are good enough to fool the Discriminator into labeling them as ``realistic''.
\subsection*{2D positional data into images}
GANs have been used with great success in image recognition, 3D-model reconstruction and photorealistic imaging; see e.g. \cite{sci-bib:karazeev}. Because of the limited time available to us, we decided to capitalize on existing code for images; we use~\cite{sci-bib:oreilly}.
By rescaling the data accordingly we map the football field to the square $[-1,1]^2$ and interpret a $10$-second trajectory as a $2\times100$ gray-scale image: for each of the 100 time points, the two degrees of freedom indicate the rescaled $x$- and $y$-positions. This ``image'' is what we input into the neural network machinery.
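As an illustration of this preprocessing step, the sketch below rescales a $100$-point trajectory and stores it as a $2\times100$ array; the half-length and half-width of the pitch (\texttt{x\_half}, \texttt{y\_half}) are hypothetical values standing in for the dimensions used with the actual data.
\begin{verbatim}
import numpy as np

def trajectory_to_image(xy, x_half, y_half):
    """Map a (100, 2) array of pitch coordinates to a 2 x 100 'image' in [-1, 1]."""
    img = np.empty((2, len(xy)))
    img[0] = np.clip(xy[:, 0] / x_half, -1.0, 1.0)   # rescaled x-coordinates
    img[1] = np.clip(xy[:, 1] / y_half, -1.0, 1.0)   # rescaled y-coordinates
    return img

# Example with a synthetic 10-second trajectory sampled at 10 Hz
rng = np.random.default_rng(0)
trajectory = np.cumsum(rng.normal(scale=10.0, size=(100, 2)), axis=0)
image = trajectory_to_image(trajectory, x_half=5250.0, y_half=3400.0)  # hypothetical pitch size
\end{verbatim}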
\subsection*{Network setup}
The algorithm we use is a repurposed version of the basic convolutional neural network found at \cite{sci-bib:oreilly}, which is meant to recognize and reproduce handwritten digits. There is a structural difference between the two:
\begin{itemize}
\item the original algorithm works with the MNIST digit dataset, which consists of $28\times28$ black-and-white images of $10$ possible states (the digits $0$-$9$);
\item our algorithm works with $2\times100$ gray-scale images, containing an aggregation of $10$ seconds of play.
\end{itemize}
If we were to convert our gray-scale images to black-and-white, we would lose too much information.
Another important difference is in the intrinsic asymmetry of the data:
\begin{itemize}
\item in the original version, both the Discriminator and the Generator look at $3\times3$ or $5\times5$ spatial features of the images: useful information about the topology of the shape can be obtained by looking at spatial neighborhoods of any given pixel;
\item in our case we want to look at the $x$ and $y$ coordinates independently, therefore our Discriminator and Generator work with
one-dimensional temporal features: the information regarding the $x$- or $y$-trajectory in a temporal neighborhood of each position, i.e., its recent past and future. This temporal neighborhood should not be too small, otherwise the feature only observes the position of a player. On the other hand, if the feature is too large, it observes almost the entire 10-second trajectory, and the trajectory only contains a few features. To balance this trade-off we use $1\times5$ and $1\times10$ temporal features (sketched in code after this list).
\end{itemize}
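The sketch below illustrates this design choice in \texttt{tensorflow.keras} (it is not the code of \cite{sci-bib:oreilly}, and the filter counts and pooling are illustrative choices): a small Discriminator whose convolutions use $1\times5$ and $1\times10$ kernels, so that the $x$- and $y$-rows of the $2\times100$ input are processed independently and only temporal neighborhoods are combined.
\begin{verbatim}
from tensorflow.keras import layers, models

# Discriminator sketch: 1 x k kernels act along the time axis only, so the
# x-row and the y-row of the 2 x 100 "image" are never mixed by a convolution.
discriminator = models.Sequential([
    layers.Conv2D(16, kernel_size=(1, 5), padding="same", activation="relu",
                  input_shape=(2, 100, 1)),
    layers.AveragePooling2D(pool_size=(1, 2)),
    layers.Conv2D(32, kernel_size=(1, 10), padding="same", activation="relu"),
    layers.AveragePooling2D(pool_size=(1, 2)),
    layers.Flatten(),
    layers.Dense(1),          # single logit: "realistic" vs "fake"
])
\end{verbatim}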
By making this tweak to the original algorithm we exploit the natural directionality of the data and avoid conflating the spatial properties (i.e., the shade of gray) with the temporal properties (i.e., the variation in shade). To get a sense of what this means, we visualize the correspondence between the $(x,y)$-coordinates and the real trajectory of a player in Figure \ref{sci-fig:Real_example}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\textwidth]{Real_example.pdf}
\caption{A non-trivial real trajectory and its twofold representation. The $(x,y)$-coordinates as gray-scale image (top) and the real trajectory on the football field (bottom).}
\label{sci-fig:Real_example}
\end{figure}
\subsection*{The algorithm}
We limit our training set to all random samplings of 20-second trajectories of any single player (excluding goalkeepers and the ball) during a single fixed match. This should give some extra structure for the network to work with while maintaining a diverse enough data sample.
The initialization of the parameters is the same as in the original algorithm: the Generator takes a standard Gaussian noise vector as input and then produces a new image based on the updates made by the network. To get a glimpse of what an untrained Generator produces, see Figure \ref{sci-fig:Untrained_Gen}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\textwidth]{Untrained_Gen0.pdf}
\caption{A trajectory from the untrained Generator.}
\label{sci-fig:Untrained_Gen}
\end{figure}
The Discriminator is then pre-trained with real and generated trajectories. After this first training epoch, the Discriminator is able to correctly discriminate between the real trajectories and the untrained noisy ones produced by the Generator. Here an epoch consists of one full learning cycle on the training set. Then the main training session begins. From the second epoch onwards, the Discriminator is trained with real and generated data and the Generator itself is trained against the Discriminator. This produces a Generator-Discriminator feedback loop that forces both networks to improve, each with the objective of outperforming the other. This is achieved by implementing a loss function that measures three quantities:
\begin{itemize}
\item Discriminator loss vs real: it measures how far the Discriminator is from labeling a real trajectory as ``realistic'';
\item Discriminator loss vs Generator: it measures how far the Discriminator is from labeling a generated image as ``fake'';
\item Generator loss vs Discriminator: it measures how far the Discriminator is from labeling a generated image as ``realistic''.
\end{itemize}
The first loss function deals with the interaction between the Discriminator and the real world: it makes sure that the network is adapting to recognize new real observations. The second and third loss functions, on the other hand, work against each other: one is trying to force the Discriminator to always label ``fake'' when presented with a generated image, while the other is forcing the Generator to produce data that mimics the Discriminator's perception of the real world. The loss function used throughout the algorithm is the cross-entropy loss; for a discussion see \cite{sci-bib:takeshi}.
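In code, these three quantities amount to binary cross-entropy losses with different target labels. The sketch below (again \texttt{tensorflow.keras}; \texttt{D} stands for the Discriminator network and the batches are assumed to be given) shows the standard formulation we have in mind rather than the exact implementation of \cite{sci-bib:oreilly}.
\begin{verbatim}
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def discriminator_loss(D, real_batch, generated_batch):
    real_logits = D(real_batch)
    fake_logits = D(generated_batch)
    # loss vs real: distance from labeling real trajectories as "realistic" (1)
    loss_real = bce(tf.ones_like(real_logits), real_logits)
    # loss vs Generator: distance from labeling generated trajectories as "fake" (0)
    loss_fake = bce(tf.zeros_like(fake_logits), fake_logits)
    return loss_real + loss_fake

def generator_loss(D, generated_batch):
    # distance from the Discriminator labeling generated trajectories as "realistic" (1)
    fake_logits = D(generated_batch)
    return bce(tf.ones_like(fake_logits), fake_logits)
\end{verbatim}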
\subsection*{Performance and limitations}
Properly training a GAN requires a long time and much can go wrong in the process. The Generator and Discriminator need to maintain a perfect balance, otherwise one will outperform the other, causing either the Discriminator to blindly reject any generated image, or the Generator to exploit blind spots the Discriminator may have. After a training session of $15$ hours our GAN managed to go from random noise trajectories to smooth and structured ones, although it did not fully learn the underlying structure of the data. While the generated movements look impressive when compared to the untrained ones, they still underperform when confronted with the real world. First and foremost, the acceleration patterns of the players make no physical sense, i.e., the algorithm is not able to filter out small local noise, and the trajectories are not smooth enough. The evolution of the network during training is shown in Figure \ref{sci-fig:Training}. In the end the GAN is not consistent enough when asked to generate large samples of data: too many trajectories do not look realistic.
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{.30\linewidth}
\includegraphics[width=\linewidth]{train01.pdf}
\end{subfigure}
\begin{subfigure}[b]{.30\linewidth}
\includegraphics[width=\linewidth]{train02.pdf}
\end{subfigure}
\begin{subfigure}[b]{.30\linewidth}
\includegraphics[width=\linewidth]{train03.pdf}
\end{subfigure}
\begin{subfigure}[b]{.30\linewidth}
\includegraphics[width=\linewidth]{train04.pdf}
\end{subfigure}
\begin{subfigure}[b]{.30\linewidth}
\includegraphics[width=\linewidth]{train05.pdf}
\end{subfigure}
\begin{subfigure}[b]{.30\linewidth}
\includegraphics[width=\linewidth]{train06.pdf}
\end{subfigure}
\caption{Different stages of GAN training (from left to right and from top to bottom). The network goes from random noise to shape recovery, but it is not able to filter out local noise consistently.}
\label{sci-fig:Training}
\end{figure}
\subsection{Variational Autoencoder}\label{sci-sec:vae}
In parallel, we implemented a Variational Autoencoder (VAE) as introduced by \cite{sci-bib:vae}. Like a GAN, a VAE is an unsupervised machine-learning algorithm that gives rise to a generative model.
We will apply the VAE algorithm on normalized trajectory data spanning 50 seconds. We call the set of all such trajectory data $X$. As the trajectories are sampled at intervals of 0.1 seconds, this means that we can identify $X$ with $[0,1]^{1000}$.
A VAE consists of two neural networks, an encoder and a decoder. The encoder is a function (parametrized by a vector $\phi$)
\[
\mathsf{Enc}_\phi: X \times \mathcal{E} \to Z
\]
that maps from the product of the space $X$ of input data and a space of noise variables $\mathcal{E}$, to the so-called latent space $Z$. We identify the space $Z$ with $\mathbb{R}^d$ ($d=10$).
The decoder is a function (parametrized by a vector $\theta$)
\[
\mathsf{Dec}_\theta : Z \times \Omega \to X
\]
which maps from the latent space $Z$ and a second space of noise variables $\Omega$ back to the data space $X$.
We choose the spaces of noise variables $\mathcal{E}$ and $\Omega$ to be Euclidean, with the same dimension as $Z$ and $X$ respectively, and endow them with standard Gaussian measures.
The encoder and decoder have a special structure. We implemented (as neural networks) functions
\[
\mu_{Z, \phi} : X \to Z \quad \text{ and } \quad \sigma_{Z, \phi} : X \to Z
\]
and chose
\[
\mathsf{Enc}_\phi(x,\epsilon) := \mu_{Z, \phi}(x) + \mathrm{diag}(\sigma_{Z, \phi}(x)) \epsilon.
\]
Here, $\mathrm{diag}(\sigma_{Z,\phi}(x))$ is a diagonal matrix with $\sigma_{Z, \phi}(x)$ on the diagonal. Equivalently, $\mathrm{diag}(\sigma_{Z, \phi}(x)) \epsilon$ is just the elementwise product of $\sigma_{Z,\phi}(x)$ and $\epsilon$.
Similarly, we implemented a function
\[
\mu_{X, \theta} : Z \to X
\]
and selected a constant $\sigma_X \in (0,\infty)$ and chose
\[
\mathsf{Dec}_\theta(z, \omega) := \mu_{X, \theta}(z) + \sigma_X \omega.
\]
The decoder provides us with a generative model for the data: to generate a data point we first sample $z$ and $\omega$ independently according to standard normal distributions, after which we apply the decoder to the pair $(z,\omega)$. Alternatively, we can generate zero-noise samples by only sampling $z$ and computing $\mathsf{Dec}_\theta(z,0)$.
The Variational Autoencoder $\mathsf{VAE}_{\phi,\theta}:X \times \mathcal{E} \times \Omega \to X$ is the composition of the encoder and decoder in the sense that
\[
\mathsf{VAE}_{\phi, \theta}(x,\epsilon,\omega) = \mathsf{Dec}_\theta ( \mathsf{Enc}_\phi(x, \epsilon) , \omega).
\]
The parameters $\phi$ and $\theta$ of the VAE are optimized simultaneously, so that when we apply the VAE to a randomly selected triple of trajectory $x$, noise variable $\epsilon$ and noise variable $\omega$, the result is close to the original trajectory, at least on average.
To this end, we follow \cite{sci-bib:vae} and minimize an average loss, for the loss function $\mathcal{L}_{\phi,\theta}:X \times \mathcal{E} \to \mathbb{R}$ given by
\begin{align}\label{sci-eq:loss}
\frac{1}{\sigma_X^2} \mathcal{L}_{\phi, \theta}(x, \epsilon)
&:= \frac{1}{\sigma_X^2} \left\| x - \mu_{X, \theta}\big(\mathsf{Enc}_\phi(x,\epsilon) \big)\right\|^2+ \|\mu_{Z,\phi}(x)\|^2 - d \nonumber\\
&\qquad
- \mathrm{tr} \big(\log \big( \mathrm{diag}(\sigma_{Z,\phi}(x))^2\big)\big)+ \mathrm{tr}\big(\mathrm{diag}(\sigma_{Z,\phi}(x))^2\big).
\end{align}
For a derivation of this loss function, we refer the reader to the Appendix.
We implemented the Autoencoder in the Keras library for Python (\citealp{sci-bib:keras}). The library comes with an example VAE which we took as a starting point. We introduced a hidden layer $H_E$ in the encoder and $H_D$ in the decoder, which we both identified with $\mathbb{R}^{400}$, and implemented the functions $\mu_{Z,\phi}$ and $\sigma_{Z,\phi}$ as
\[
\begin{split}
\mu_{Z,\phi} &= m_{Z,\phi} \circ h_{E,\phi}\\
\sigma_{Z,\phi} &= \exp \circ \, {l}_{Z,\phi} \circ h_{E,\phi}
\end{split}
\]
where $h_{E,\phi}: X \to H_E$ is the composition of an affine map and ReLu activation functions, the functions $m_{Z,\phi}, {l}_{Z,\phi} : H_E \to Z$ are linear and $\exp: Z \to Z$ is the exponential function applied componentwise.
Similarly,
\[
\mu_{X, \theta} = m_{X, \theta} \circ h_{D, \theta}
\]
where the function $h_{D,\theta}: Z \to H_D$ is again a composition of an affine map and ReLu activation functions and the function $m_{X,\theta}: H_D \to X$ is a composition of an affine map and sigmoid activation functions.
We trained the model, i.e.~we adjusted the parameters $\phi$ and $\theta$ to minimize the average loss, using the `rmsprop' optimizer in its default settings. Whether the model trained successfully or not did seem to depend crucially on the version of the libraries used. For the results presented below, we used Keras version 2.1.3 on top of Theano version 1.0.1. We first set $\sigma_X \approx 0.15$. After training for 1000 epochs, the average loss was slightly below $2$.
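A compact sketch of this architecture is given below. It is written for \texttt{tensorflow.keras} rather than the Keras~2.1.3/Theano setup used for the reported results, so details (in particular how the loss is attached to the model) may need adjusting for other library versions; the loss implements the displayed quantity in \eqref{sci-eq:loss}.
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers, Model

n_in, h, d = 1000, 400, 10     # trajectory length, hidden width, latent dimension
sigma_X = 0.15

# Encoder: x -> (mu_Z(x), sigma_Z(x)) with Enc(x, eps) = mu_Z(x) + diag(sigma_Z(x)) eps
x_in = layers.Input(shape=(n_in,))
h_e = layers.Dense(h, activation="relu")(x_in)
mu_z = layers.Dense(d)(h_e)
log_sigma_z = layers.Dense(d)(h_e)            # l_Z; sigma_Z = exp(l_Z)

def reparameterize(args):
    mu, log_sigma = args
    eps = tf.random.normal(tf.shape(mu))
    return mu + tf.exp(log_sigma) * eps

z = layers.Lambda(reparameterize)([mu_z, log_sigma_z])

# Decoder: z -> mu_X(z), sigmoid output so that mu_X(z) lies in [0, 1]^1000
z_in = layers.Input(shape=(d,))
h_d = layers.Dense(h, activation="relu")(z_in)
mu_x = layers.Dense(n_in, activation="sigmoid")(h_d)
decoder = Model(z_in, mu_x)

x_hat = decoder(z)
vae = Model(x_in, x_hat)

# Loss: squared reconstruction error / sigma_X^2 plus the KL-type penalty
recon = tf.reduce_sum(tf.square(x_in - x_hat), axis=-1) / sigma_X**2
kl = tf.reduce_sum(tf.square(mu_z) - 1.0 - 2.0 * log_sigma_z
                   + tf.exp(2.0 * log_sigma_z), axis=-1)
vae.add_loss(tf.reduce_mean(recon + kl))
vae.compile(optimizer="rmsprop")
# vae.fit(X_train, epochs=1000, batch_size=100)  # X_train: trajectories in [0,1]^1000
\end{verbatim}
Zero-noise reconstructions correspond to decoding $\mu_{Z,\phi}(x)$, and new trajectories are generated by decoding samples $z\sim\mathrm N(\boldsymbol 0,\boldsymbol I_d)$, as used below.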
We used the VAE to approximate trajectories. We sampled at random trajectories $x_i$ from the data, and compared them to their approximations
\[
\hat{x}_i := \mathsf{VAE}_{\phi, \theta}(x_i, 0, 0).
\]
The average absolute deviation per coordinate per time-step (expressed as a ratio with respect to the dimensions of the playing field) was approximately $0.02$, the average squared error per coordinate per time step was approximately $0.0008$ and the average maximum error per coordinate, taken over the whole trajectory, was less than $0.09$.
\begin{figure}[h]
\centering
\includegraphics{plot_real_vs_gen_1000_v2.pdf}
\caption{A collection of sampled trajectories (orange) and an approximation calculated by the VAE (black). In general, the approximating trajectories are much smoother. We chose $\sigma_X \approx 0.15$ in training the VAE.}
\label{sci-fig:traj-reproduction}
\end{figure}
In Figure \ref{sci-fig:traj-reproduction} we show the result of sampling four random trajectories $x_i$ from the data, and comparing them to their approximation by the VAE. The approximating trajectories are much smoother than the original ones.
Some qualitative features of the original paths, such as turns and loops, are also present in the approximating paths. Even though the average error in the distance per coordinate per time step is relatively small, visually there is still quite some deviation between the true and the approximating trajectories. We expect, however, that with a more extensive network, consisting of more convolutional layers, we can greatly improve the approximation.
\begin{figure}[h]
\centering
\includegraphics{generated_traj_1000_ep_N01.pdf}
\caption{Six random trajectories generated by the generative model, i.e.~by the decoder part of the VAE.}
\label{sci-fig:traj-generation}
\end{figure}
Next, we use the decoder of the VAE as a generative model. In particular, we sample trajectories in $X$ at random by first sampling $z\in Z$ according to a standard normal distribution, and computing the trajectory $\mathsf{Dec}_\theta(z, 0)$. A collection of six trajectories generated in this way is shown in Figure \ref{sci-fig:traj-generation}. At first sight, the generated trajectories look like they could have been real trajectories of football players. However, they are in general smoother than the real trajectories. We could also have generated trajectories by sampling both $z$ and $\omega$ according to standard normal distributions and computing $\mathsf{Dec}_\theta(z,\omega)$. However, those trajectories would have been much too noisy.
If we reduce the value of $\sigma_X$ to approximately $0.008$ and retrain the model, the approximation of the trajectories becomes slightly better, and the final average loss reduces to 0.67 after training for 600 epochs. The corresponding plots look similar to Figure \ref{sci-fig:traj-reproduction}. However, if we now use the decoder to generate trajectories, most of the trajectories end up close to the boundary of the playing field: the dynamics of the generated trajectories are then clearly very different from the original dynamics.
In Appendix \ref{sci-sec:appendix}, we explain this effect by investigating the different parts of the loss function given in \eqref{sci-eq:loss}. The upshot is that when $\sigma_X$ is very small, the proportion of latent variables $z \in Z$ that are in the range of the encoder is very small (measured with the Gaussian measure on $Z$). If one applies the decoder to a $z \in Z$ which is in the range of the encoder, one probably gets a realistic trajectory. But for latent variables $z$ not in the range of the encoder, there is no reason for the decoded trajectories to look realistic at all.
\subsection{Discriminator}\label{sci-sec:dis}
In the previous sections, we studied several methods to create generative models for the movement trajectories of football players, with the aim of capturing the underlying dynamics and statistics. In this section, we study to what extent movement trajectories of different soccer players can be distinguished. To this end,
we test the Discriminator network of the GAN introduced in Section~\ref{sci-sec:gan} on data of different soccer players. We train the Discriminator on the data of two soccer players, and then test if the Discriminator is able to distinguish their motion patterns.
The success rate of the Discriminator in distinguishing one player from the other then gives some insight into how different the movement behaviors of two players are.
The loss function for the Discriminator is the same as in Section~\ref{sci-sec:gan}. The data we use as input for the Discriminator are $(x,y)$-coordinates of 10-second player trajectories. We test the Discriminator on these unedited $(x,y)$-trajectories, and on centered $(x,y)$-trajectories, where the coordinates of each trajectory are centered such that the first coordinate always equals $(0,0)$. Thus, by using the uncentered data, the Discriminator may distinguish two players by using their positions on the field, whereas the Discriminator can only use movement patterns of particular players when the centered data are used.
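For clarity, the centering step is a single translation of each trajectory (a minimal sketch; \texttt{traj} is a $(T,2)$ array of $(x,y)$-coordinates):
\begin{verbatim}
import numpy as np

def center(traj):
    """Translate a (T, 2) trajectory so that its first coordinate is (0, 0)."""
    traj = np.asarray(traj, dtype=float)
    return traj - traj[0]
\end{verbatim}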
Figure~\ref{sci-fig:loss} shows the Discriminator loss function for both players as a function of the number of training steps for two different sets of two players. We see that the loss function declines more for the uncentered data than for the centered data. Thus, the Discriminator distinguishes uncentered trajectories based on the location on the field where the movement pattern happens. The two different examples also show that it is easier to distinguish some players than others.
\begin{figure}[tb]
\centering
\begin{subfigure}{0.45\linewidth}
\centering
\includegraphics[width=\textwidth]{loss1.pdf}
\caption{example 1}
\label{sci-fig:ex1}
\end{subfigure}
\begin{subfigure}{0.45\linewidth}
\centering
\includegraphics[width=\textwidth]{loss2.pdf}
\caption{example 2}
\label{sci-fig:ex2}
\end{subfigure}
\caption{Two examples of the Discriminator loss function for both players as a function of the number of training steps. The solid lines are the results for uncentered data and the dashed lines contain the results for the centered data. The two examples contain four different players.}
\label{sci-fig:loss}
\end{figure}
Table~\ref{sci-tab:succrate} shows the success rate of correctly identifying the player corresponding to a given trajectory after the training period for the two sets of players of Figure~\ref{sci-fig:loss}. The success rate of the Discriminator using the uncentered data is higher than for the centered data in both examples. Using the centered data, the Discriminator has difficulties distinguishing between players 1 and 2 in the first example. In the second example, the success rate is much higher. Thus, some players display more similarities in their movement patterns than other players.
\begin{table}[htbp]
\centering
\begin{tabular}{rrrrrr}
\toprule
\textbf{} & \textbf{} & \textbf{Player 1} & \textbf{Player 2}& \textbf{Player 3} & \textbf{Player 4} \\
\midrule
\multicolumn{1}{c}{\multirow{2}[0]{*}{example 1}} & non-centered & 0.74 & 0.9 \\
\multicolumn{1}{c}{} & centered & 0.2 & 0.96 \\
\multicolumn{1}{c}{\multirow{2}[0]{*}{example 2}} & non-centered &&& 0.98 & 0.82 \\
\multicolumn{1}{c}{} & centered &&& 0.54 & 0.95 \\
\bottomrule
\end{tabular}%
\caption{The success rate of the Discriminator after training on the two examples of Figure~\ref{sci-fig:loss}. We use separate data sets for training and validation.}
\label{sci-tab:succrate}%
\end{table}%
\section{Conclusion and future work}\label{sci-sec:con}
We used several methods to learn the spatio-temporal structure of trajectories of football players. With the state-space modeling approach we extracted velocity information from the trajectory data, and learned basic statistics on the motion of individual players.
With deep generative models, in particular Variational Autoencoders, we captured the approximate statistics of trajectories by encoding them into a lower dimensional latent space. Due to limitations on time and computational power, we did not manage to successfully train Generative Adversarial Nets on the data. Nonetheless, we were able to use the Discriminator network to distinguish between different football players based on their trajectory data. The algorithm was more successful if we used non-centered rather than centered data, and was better at distinguishing between some players than others.
It is very likely that with deeper convolutional neural networks, we can train VAEs that approximate the statistics of the player trajectories even better.
Besides, the approach can easily be extended to approximate trajectories of multiple players and the ball, although we may need more data to get an accurate model.
A big challenge is to interpret the latent space of the VAE. Ideally, one would be able to recognize qualities of the players as variables in the latent space. Although this is a difficult task in general, we expect that by adding additional structure in the architecture of the VAE, we can at least extract some relevant performance variables per player and recognize differences between players. Moreover, we could unify state-space models with VAEs to increase the interpretability of the latent variables.
By continuing this line of work, we could conceivably find an appropriate state space such that the football game can be fitted into a Reinforcement Learning framework. This framework may then be used to find optimal strategies, and to extract individual qualities of football players.
\newcommand{\qpart}[1]{\paragraph{(#1)}}
\newcommand{\bigparen}[1]{\left( #1 \right)}
\renewcommand\arraystretch{1.0}%
\begin{document}
\begin{titlepage}
\begin{center}
{\LARGE \bf Who's good this year? Comparing the Information Content of Games in the Four Major US Sports} \\
\ \\
{\large Julian Wolfson and Joseph S. Koopmeiners \\
\ \\
Division of Biostatistics, University of Minnesota, Minneapolis, Minnesota }\\
\ \\
{\large \it Corresponding author's email address: [email protected]} \\
\ \\
{\large \today}
\end{center}
\begin{abstract}
In the four major North American professional sports (baseball, basketball, football, and hockey), the primary purpose of the regular season is to determine which teams most deserve to advance to the playoffs. Interestingly, while the ultimate goal of identifying the best teams is the same, the number of regular season games played differs dramatically between the sports, ranging from 16 (football) to 82 (basketball and hockey) to 162 (baseball). Though length of season is partially determined by many factors including travel logistics, rest requirements, playoff structure and television contracts, it is hard to reconcile the 10-fold difference in the number of games between, for example, the NFL and MLB unless football games are somehow more ``informative'' than baseball games. In this paper, we aim to quantify the amount of information games yield about the relative strength of the teams involved. Our strategy is to assess how well simple paired comparison models fitted from $X$\% of the games within a season predict the outcomes of the remaining ($100-X$)\% of games, for multiple values of $X$. We compare the resulting predictive accuracy curves between seasons within the same sport and across all four sports, and find dramatic differences in the amount of information yielded by individual game results in the four major U.S. sports.
\end{abstract}
\end{titlepage}
\newpage
\section{Introduction}
In week 14 of the 2012 NFL season, the 9-3 New England Patriots squared off on Monday Night Football against the 11-1 Houston Texans in a game with major implications for both teams. At the time, the Texans had the best record in the AFC and were in line to earn home-field advantage throughout the playoffs, while the New England Patriots had the best record in their division and were hoping to solidify their playoff position and establish themselves as the favorites in the AFC. The Patriots ultimately defeated the Texans 42-14, which led some commentators to conclude that the Patriots were the favorites to win the Super Bowl \citep{Walker2012,MacMullan2012} and that Tom Brady was the favorite for the MVP award \citep{Reiss2012}, while others opined that the Texans were closer to pretenders than the contenders they appeared to be for the first 13 weeks of the season \citep{Kuharsky2012}. These are strong conclusions to reach based on the results of a single game, but the power of such ``statement games'' is accepted wisdom in the NFL. In contrast, it is rare for the outcome of a single regular-season game to create or change the narrative about a team in the NBA, NHL, or MLB. While one might argue that the shorter NFL season simply drives commentators to imbue each game with greater metaphysical meaning, an alternative explanation is that the outcome of a single NFL contest actually does carry more information about the relative strengths of the teams involved than a single game result in the other major North American professional sports. In this paper, we ask and attempt to answer the basic question: how much does the outcome of a single game tell us about the relative strength of the two teams involved?
In the four major North American professional sports (baseball, basketball, football, and hockey), the primary purpose of the regular season is to determine which teams most deserve to advance to the playoffs. Interestingly, while the ultimate goal of identifying the best teams is the same, the number of regular season games played differs dramatically between the sports, ranging from 16 (football) to 82 (basketball and hockey) to 162 (baseball). Though length of season is partially determined by many factors including travel logistics, rest requirements, playoff structure and television contracts, it is hard to reconcile the 10-fold difference in the number of games in the NFL and MLB seasons unless games in the former are somehow more informative about team abilities than games in the latter. Indeed, while it would be near-heresy to determine playoff eligibility based on 16 games of an MLB season (even if each of the 16 games was against a different opponent), this number of games is considered adequate for the same purpose in the NFL.
There is a well-developed literature on the topic of competitive balance and parity in sports leagues \citep{Owen2010,Horowitz1997,Mizak2005,Lee2010,Hamlen2007,Cain2006,Larsen2006,Ben-Naim2006a,Kesenne2000,Vrooman1995,Koopmeiners2012}. However, most papers focus on quantifying the degree of team parity over consecutive years along with the effects of measures taken to increase or decrease it. In papers which compare multiple sports, season length is often viewed as a nuisance parameter to be adjusted for rather than a focus of inquiry. Little attention has been directed at the question of how information on relative team strength accrues over the course of a single season.
In this paper, we aim to quantify the amount of information each game yields about the relative strength of the teams involved. We estimate team strength via paired-comparison \citep{Bradley1952} and margin-of-victory models, which have been applied to ranking teams in a variety of sports \citep{McHale2011, Koehler1982, Sire2009, Martin1999}. The growth in information about the relative strength of teams is quantified by considering how well these comparison models fitted from $X$\% of the games in a season predict the outcomes of the remaining ($100-X$)\% of games, for multiple values of $X$ (games are partitioned into training and test sets at random to reduce the impact of longitudinal trends over the course of a season). We begin by describing the data and analysis methods we used in Section 2. Section 3 presents results from recent seasons of the four major North American sports, and compares the ``information content'' of games across the four sports. In Section 4 we discuss the strengths and limitations of our analysis.
\section{Methods}
\subsection{Data}
We consider game results (home and away score) for the 2004-2012 seasons for the NFL, the 2003-2004 to 2012-2013 seasons of the NBA, the 2005-2006 to 2012-2013 seasons of the NHL, and the 2006-2012 seasons of MLB. Game results for the NFL, NBA and NHL were downloaded from Sports-Reference.com \citep{pfr} and game results for MLB were downloaded from Retrosheet \citep{retrosheet}. Only regular season games were considered in our analysis. The NFL plays a 16-game regular season, the NBA and NHL play 82 regular season games and MLB plays 162 regular season games.
\subsection{Methods}
Let $\mathcal{G}$ represent all the games within a single season of a particular sport. Our goal is to quantify the amount of information on relative team strength contained in the outcomes of a set of games $G \subset \mathcal{G}$, as the number of games contained in $G$ varies. We consider how well the results of games in the ``training set'' $G$ allow us to predict the outcomes of games in a ``test set'' $G' = \mathcal{G} \setminus G$. Specifically, given $G$ and $G'$, our information metric (which we formally define later) is the percentage of games in $G'$ which are correctly predicted using a paired comparison model applied to $G$.
We consider two types of paired comparison models in our work. Each game $g \in G$ provides information on the home team ($H_g = i$), away team ($A_g = j$) and the game result as viewed from the home team's perspective. When only the binary win/loss game result $W_g$ is considered, we fit a standard Bradley-Terry model \citep{Bradley1952, Agresti02},
\begin{equation}
logit \left(\pi_{i,j}\right) = \beta_{i} - \beta_{j} + \alpha,
\label{eq:BT}
\end{equation}
where $\pi_{i,j} = P(W_g = 1)$ is the probability that the home team, team $i$, defeats the visiting team, team $j$. $\beta_{i}$ and $\beta_{j}$ are the team strength parameters for teams $i$ and $j$, respectively, and $\alpha$ is a home-field advantage parameter.
We fit a similar model when the actual game scores are considered. In this context, home team margin of victory (MOV) $\Delta_g$ is recorded for each game; $\Delta_g$ is positive for a home team win, negative for a home team loss, and zero for a tie. The paired comparison model incorporating margin of victory is:
\begin{equation}
\mu_{i,j} = \delta_{i} - \delta_{j} + \lambda,
\label{eq:MOV}
\end{equation}
where $\mu_{i,j} = E(\Delta_g)$ is the expected margin of victory for the home team, team $i$, over the visiting team, team $j$. $\delta_{i}$ and $\delta_{j}$ are team strengths on the margin-of-victory scale for teams $i$ and $j$, respectively, and $\lambda$ is a home-field advantage on the margin-of-victory scale.
Both models \eqref{eq:BT} and \eqref{eq:MOV} can be fit using standard statistical software, such as R \citep{rsoftware}. Given estimates $\hat \beta_i$, $\hat \beta_j$, and $\hat \alpha$ derived by fitting model \eqref{eq:BT} to a set of games $G$, a predicted home team win probability $\hat \pi_g$ can be derived for every game $g \in G'$ based on which teams $i$ and $j$ are involved. A binary win/loss prediction for the home team is obtained according to whether $\hat \pi_g$ is greater/less than 0.5. Given estimates $\hat \delta_i$, $\hat \delta_j$, and $\hat \lambda$ from fitting model \eqref{eq:MOV}, home team margin of victory $\hat \mu_g$ can similarly be predicted for every game in $g \in G'$. A binary win/loss prediction for the home team is obtained according to whether $\hat \mu_g$ is positive, negative, or zero.
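Although we fit the models in \texttt{R}, the construction is easy to sketch in any environment. The Python example below, on a handful of hypothetical game results, encodes each game by a $+1/-1$ team-indicator row, so that the intercept plays the role of the home-advantage parameter ($\alpha$ or $\lambda$) and the coefficients play the role of the team strengths; one team is dropped as a reference level for identifiability.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Hypothetical game results: (home team, away team, home score, away score)
games = [("A", "B", 24, 17), ("C", "A", 20, 27), ("B", "C", 31, 28),
         ("A", "C", 14, 20), ("B", "A", 10, 13), ("C", "B", 21, 24)]

teams = sorted({t for g in games for t in g[:2]})
idx = {t: k for k, t in enumerate(teams)}

# Design matrix: +1 for the home team, -1 for the away team
X = np.zeros((len(games), len(teams)))
for row, (home, away, _, _) in enumerate(games):
    X[row, idx[home]], X[row, idx[away]] = 1.0, -1.0
X = X[:, 1:]                    # drop one team as the reference level

home_win = np.array([1 if hs > vs else 0 for _, _, hs, vs in games])
home_mov = np.array([hs - vs for _, _, hs, vs in games])

# Bradley-Terry model (1): intercept ~ alpha, coefficients ~ beta
bt = LogisticRegression(C=1e6).fit(X, home_win)   # large C: essentially unpenalized

# Margin-of-victory model (2): intercept ~ lambda, coefficients ~ delta
mov = LinearRegression().fit(X, home_mov)

# Predictions for a new game: home team "B" against away team "C"
x_new = np.zeros((1, len(teams)))
x_new[0, idx["B"]], x_new[0, idx["C"]] = 1.0, -1.0
x_new = x_new[:, 1:]
print(bt.predict_proba(x_new)[0, 1])   # estimated P(home win)
print(mov.predict(x_new)[0])           # estimated home margin of victory
\end{verbatim}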
Our metrics for summarizing the amount of information on relative team strength available from a set of game results $G$ for predicting the outcomes of a set of games in $G'$ are simply the fraction of games that are correctly predicted by the paired comparison models:
\begin{align*}
\mathcal{I}^{BT}(G,G') &= \frac{ \sum_{g \in G'} \left[ W_g \indicatorBig{\hat \pi_g > 0.5} + (1-W_g) \indicatorBig{\hat \pi_g \leq 0.5} \right]}{ | G' | } \ \ \text{for the Bradley-Terry model \eqref{eq:BT}}\\
\mathcal{I}^{MOV}(G,G') &= \frac{ \sum_{g \in G'} \left[ W_g \indicatorBig{\hat \mu_g > 0} + (1-W_g) \indicatorBig{\hat \mu_g \leq 0} \right]}{ | G' | } \ \ \text{for the margin-of-victory model \eqref{eq:MOV}}
\end{align*}
where $\hat \pi_g$ and $\hat \mu_g$ are estimates derived from game results in $G$, and $|G'|$ denotes the number of games in $G'$.
For a given season, training data sets $G_1, G_2, \dots, G_K$ were formed by randomly sampling games corresponding to X\% of that season. Test data sets $G'_1, G'_2, \dots, G'_K$ were created as the within-season complements of the training sets, i.e., if $G_k$ consists of a number of games corresponding to X\% of the season, then $G'_k$ contains the remaining (100-X)\% of games in that season. Training (and corresponding test) data sets were created for X\% = 12.5\%, 25.0\%, 37.5\%, 50.0\%, 62.5\%, 75.0\% and 87.5\% of the games in each available season. Games were sampled at random so as to reduce the influence of temporal trends over the course of a season, for example, baseball teams who are out of playoff contention trading away valuable players and giving playing time to minor league prospects in August and September.
Information on relative team strength over a single season was computed and summarized as follows (a schematic code sketch is given after the list):
\begin{enumerate}
\item For X = 12.5, 25, 37.5, 50, 62.5, 75, and 87.5:
\begin{enumerate}
\item Generate 100 training sets $G_1, G_2, \dots, G_{100}$ (and complementary test sets $G'_1, G'_2, \dots, G'_{100}$) by randomly sampling X\% of games without replacement from $\mathcal{G}$.
\item For each training set $G_k$:
\begin{enumerate}
\item Fit models \eqref{eq:BT} and \eqref{eq:MOV} to the games in $G_k$.
\item Obtain binary win/loss predictions for the games in the test set $G'_k$.
\item Evaluate the information metrics $\mathcal{I}^{BT}(G_k, G'_k)$ and $\mathcal{I}^{MOV}(G_k,G'_k)$
\end{enumerate}
\item Average the computed information metrics to estimate the predictive accuracy of paired comparison models fitted to data from X\% of the entire season ($\mathcal{I}^{BT}$ and $\mathcal{I}^{MOV}$).
\end{enumerate}
\item Tabulate and plot $\mathcal{I}^{BT}$ and $\mathcal{I}^{MOV}$ across different values of X.
\end{enumerate}
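A schematic version of this procedure is sketched below on a simulated season (hypothetical team strengths and schedule; the real analysis uses the observed game results and both models \eqref{eq:BT} and \eqref{eq:MOV}). The logistic fits are mildly regularized for numerical stability on small training fractions.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_teams, n_games = 8, 300
beta = rng.normal(size=n_teams)        # hypothetical "true" team strengths
alpha = 0.4                            # hypothetical home advantage

# Simulate a season: random pairings, outcomes drawn from model (1)
pairs = np.array([rng.choice(n_teams, size=2, replace=False) for _ in range(n_games)])
home, away = pairs[:, 0], pairs[:, 1]
p_home = 1.0 / (1.0 + np.exp(-(beta[home] - beta[away] + alpha)))
home_win = rng.uniform(size=n_games) < p_home

# +1/-1 design matrix, dropping one team as the reference level
X = np.zeros((n_games, n_teams))
X[np.arange(n_games), home] = 1.0
X[np.arange(n_games), away] = -1.0
X = X[:, 1:]

for frac in (0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875):
    accs = []
    for _ in range(100):               # 100 random training/test splits
        train = rng.choice(n_games, size=int(frac * n_games), replace=False)
        test = np.setdiff1d(np.arange(n_games), train)
        fit = LogisticRegression().fit(X[train], home_win[train])
        pred = fit.predict_proba(X[test])[:, 1] > 0.5
        accs.append(np.mean(pred == home_win[test]))
    print(f"{frac:5.3f} of season: average I^BT = {np.mean(accs):.3f}")
\end{verbatim}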
The natural comparison value for our information metrics is the predictive accuracy of a naive model which chooses the home team to win every game. As shown in the plots below the average win probability for the home team (as determined by the parameters $\alpha$ and $\lambda$ in models \eqref{eq:BT} and \eqref{eq:MOV} respectively) varies from approximately 53\% to 61\% across the four sports we consider.
\section{Results}
\subsection{National Football League}
\begin{figure}[!h]
\centering
\includegraphics*[width = 6in]{NFL04-12.pdf}
\caption{Percent of games correctly predicted on test set vs. average number of games per team in training set, NFL seasons 2004-2012.\label{nfl_results}}
\end{figure}
Figure~\ref{nfl_results} plots the percent of games correctly predicted on the test set versus the average number of games per team in the training set for the 2004-2012 National Football League seasons. Both paired comparison models (i.e., those which incorporate and ignore margin of victory) outperform simply picking the home team to win every game. The margin of victory model appears to perform slightly better than the paired comparison model, though the differences are modest and in some seasons (e.g., 2004 and 2008) are non-existent. The prediction accuracy of both models improves throughout the season in most seasons (years 2008 and 2009 being notable exceptions), indicating that we are gaining information about the relative strengths of teams even in the final weeks of the season.
\subsection{National Basketball Association}
\begin{figure}[!h]
\centering
\includegraphics*[width = 6in]{NBA03-13.pdf}
\caption{Percent of games correctly predicted on test set vs. average number of games per team in training set, NBA seasons 2003-2013. \label{nba_results}}
\end{figure}
Results for the National Basketball Association can be found in Figure~\ref{nba_results}. The NBA was the most predictable of the four major North American professional sports leagues. Using 87.5\% of games as a training set, our model was able to correctly predict up to 70\% of games across seasons. The NBA also had the largest home-court advantage, with home teams winning approximately 60\% of games. There was virtually no advantage in including margin of victory in our model; indeed, it led to slightly worse predictions during the 05-06 season. The only major difference between the NFL and NBA was the growth of information over the season. While the accuracy of our predictions for the NFL continued to improve as more games were added to the training set, model accuracy for the NBA was no better when 75\% of games were included in the training set than when 25\% of games were included. Analyses using the \texttt{segmented} package in R for fitting piecewise linear models \citep{Muggeo2003, Muggeo2008} confirmed an inflection point in the prediction accuracy curve approximately 25-30 games into the season.
\subsection{Major League Baseball and the National Hockey League}
\begin{figure}[!h]
\centering
\includegraphics[width = 6in]{MLB06-12.pdf}
\caption{Percent of games correctly predicted on test set vs. average number of games per team in training set, MLB seasons 2006-2012. \label{mlb_results}}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width = 6in]{NHL05-13.pdf}
\caption{Percent of games correctly predicted on test set vs. average number of games per team in training set, NHL seasons 2005-2013. \label{nhl_results}}
\end{figure}
Results from Major League Baseball and the National Hockey League are found in Figures~\ref{mlb_results} and ~\ref{nhl_results}, respectively. The results for MLB and the NHL were quite similar, in that both leagues were substantially less predictable than the NFL and NBA. The percentage of games correctly predicted for MLB never exceeded 58\% even when 140 games (7/8 of a season) were included in the training set. The NHL was slightly better but our model was never able to predict more than 60\% of games correctly (and this was only achieved in the 2005-2006 season when the home team win probability was relatively high at 58\%). More importantly, prediction accuracy was rarely more than 2-3 percentage points better than the simple strategy of picking the home team in every game for either league. In fact, during the 2007-2008 and 2011-2012 seasons picking the home team performed better than paired comparison models constructed using a half-season's worth of game results.
It is perhaps not surprising that the outcome of a randomly chosen baseball game is hard to predict based on previous game results given the significant role that the starting pitcher plays in determining the likelihood of winning. In a sense, the ``effective season length'' of MLB is far less than 162 games because each team-pitcher pair carries a different win probability. In additional analyses (results not shown), we fit paired comparison models including a starting pitcher effect, but this did not substantially affect our results.
\subsection{Comparing the sports}
Figure \ref{allsports} displays curves summarizing the predictive accuracy of the MOV model for the four major sports, aggregated across the years of available data (results from the win-loss model were similar). We see that, even after only 1/8th of the games in a season have been played, substantial information on relative team strength has already accrued in the NBA, while much less can be said at this point about the NFL, NHL, and MLB. Predictive accuracy increases most rapidly with additional games in the NFL, so that it approaches that of the NBA when a substantial fraction of games is used for prediction. As seen above, the overall predictive accuracies for the MLB and NHL remain low, and do not increase markedly with the fraction of games in the training set.
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{allsports_MOV.pdf}
\caption{Percent of games correctly predicted by margin of victory model on test set vs. percent of season in training set, for four major U.S. sports leagues. For each sport, connected plotting symbols represent the average predictive accuracy and shaded regions enclose the range of predictive accuracies across the seasons of available data. \label{allsports}}
\end{figure}
Table \ref{EOS.OR} gives one way of summarizing the informativeness of games in the four major sports, via an odds ratio comparing the predictive accuracy of two models: 1) the MOV paired comparison model using game data from 87.5\% of the season, and 2) a prediction ``model'' which always picks the home team. There is a clear separation between the NFL/NBA, where games played during the season improve the odds of making correct predictions by about 40\% over a ``home-field advantage only'' model, and the NHL/MLB, where the improvement is under 10\%.
\begin{table}[!h]
\centering
\caption{Odds ratio comparing the predictive accuracy of a MOV paired comparison model using data from 87.5\% of a season to the accuracy of a model which always picks the home team. \label{EOS.OR}}
\begin{tabular}{l|c}
& OR\\
\hline
\textbf{NBA} & 1.41\\
\textbf{NFL} & 1.46\\
\textbf{NHL} & 1.09\\
\textbf{MLB} & 1.06
\end{tabular}
\end{table}
Table \ref{infopergame} summarizes the per-game rate of increase in predictive model accuracy for the four sports. The estimates are obtained by fitting least-squares regression lines to the data displayed in Figure \ref{allsports}. The lines for each sport are constrained to an intercept of 0.5, representing the predictive accuracy of a ``no-information'' model before any games have been played. In developing prediction models for actual use, one might want to incorporate prior information on the home-field advantage based on previous seasons, but in our simple paired comparison models both team strengths and the home-field advantage are estimated purely from current-season data. Hence, prior to any games being played these models can perform no better than flipping a fair coin. The columns of Table \ref{infopergame} correspond to the estimated rate of increase in predictive accuracy, on a percentage point per game scale, over 25\%, 37.5\%, 50\% and 87.5\% of the season.
\begin{table}[!h]
\centering
\caption{Estimated per-game percentage point increase in predictive accuracy of a margin-of-victory model for the four U.S. sports leagues, by percentage games used to train the model. \label{infopergame}}
\begin{tabular}{l|cccc}
& 25\% of games & 37.5\% of games & 50\% of games & 87.5\% of games\\
\hline
\textbf{NBA} & 0.91 & 0.69 & 0.55 & 0.34\\
\textbf{NFL} & 2.6 & 2.3 & 2 & 1.4\\
\textbf{NHL} & 0.29 & 0.23 & 0.19 & 0.13\\
\textbf{MLB} & 0.12 & 0.094 & 0.079 & 0.053\\
\end{tabular}
\end{table}
The results in Table \ref{infopergame} allow us to compute a ``per-game informativeness ratio'' between pairs of sports. For example, considering the last column allows us to estimate that, over the course of the season, NFL games are approximately 4 times more informative than NBA games, which are in turn about 2-3 times more informative than NHL games, which are themselves approximately 2-3 times more informative than MLB games. The ``informativeness ratio'' of NFL to MLB games is thus on the order of 26 ($\approx 1.4/0.053$), or about 2.5 times larger than the inverse ratio of their respective season lengths (162/16 $\approx$ 10). In contrast, the ratio comparing NFL to NBA games ($\approx$ 4) is slightly smaller than the inverse ratio of their respective season lengths (82/16 $\approx$ 5).
\section{Conclusions and discussion}
Our results reveal substantial differences between the major North American sports according to how well one is able to discern team strengths using game results from a single season. NBA games are most easily predicted, with paired comparison models having good predictive accuracy even early in the season; indeed, since our information metric for the NBA appears to plateau around game 30, an argument could be made that the latter half of the NBA season could be eliminated without substantially affecting the ability to identify the teams most deserving of a playoff spot. NFL game results also give useful information for determining relative team strength. On a per-game basis, NFL contests contain the largest amount of information. With the exception of the 2008 season, there was no obvious ``information plateau'' in the NFL, though the rate of increase in information did appear to slow somewhat after the first 5 games. These results suggest that games in the latter part of the NFL season contribute useful information in determining who the best teams are.
The predictive ability of paired comparison models constructed from MLB and NHL game data remains limited even when results from a large number of games are used. One interpretation of this finding is that, in comparison to the NBA and NFL, games in MLB and the NHL carry little information about relative team strength. Our results may also reflect smaller variance in team strengths (i.e., greater parity) in hockey and baseball: Because our information metric considers the predictive accuracy averaged across all games in the test set, if most games are played between opposing teams of roughly the same strength then most predictive models will fare poorly. Indeed, the inter-quartile range for winning percentage in these sports is typically on the order of $\sim$20\%, while in football and basketball it is closer to 30\%. Our observation that the hockey and baseball regular seasons do relatively little to distinguish between teams' abilities is reflected in playoff results, where ``upsets'' of top-seeded teams by teams who barely qualified for the postseason happen much more regularly in the NHL and MLB than in the NFL and NBA. One possible extension of this work would be to quantify this effect more formally.
Indeed, given the relative inability of predictive models to distinguish between MLB teams upon completion of the regular season, a compelling argument could be made for increasing the number of teams that qualify for the MLB playoffs since the current 10-team format is likely to exclude teams of equal or greater ability than ones that make it. Using similar logic, one might also argue that if the goal of the playoffs is to identify the best team (admittedly an oversimplification), then perhaps the NBA playoffs are \emph{overly} inclusive as there is ample information contained in regular season game outcomes to distinguish between the best teams and those that are merely average.
More surprising to us was the enormous discrepancy in the informativeness of game results between hockey and basketball, which both currently play seasons of the same length but perhaps ought not to. One possible explanation for why basketball game results more reliably reflect team strength is that a large number of baskets are scored, and the Law of Large Numbers dictates that each team approaches their ``true'' ability level more closely. In contrast, NHL games are typically low-scoring affairs, further compounded by the fact that a large fraction of goals are scored on broken plays and deflections which seem to be strongly influenced by chance. We have not analyzed data from soccer, but it would be interesting to explore whether the ``uninformativeness'' of hockey and baseball game results extends to other low-scoring sports.
Our analysis has several limitations. First, we chose to quantify information via the predictive accuracy of simple paired comparison models. It is possible that using more sophisticated models for prediction might change our conclusions, though we doubt it would erase the sizable between-sport differences that we observed. Indeed, as mentioned above, accounting for starting pitcher effects in our MLB prediction model did not substantially affect the results. Second, it could be argued that team win probabilities change over the course of a season due to roster turnover, injuries, and other effects. By randomly assigning games to our training and test set without regard to their temporal ordering, we are implicitly estimating ``average'' team strengths over the season, and applying these to predict the outcome of an ``average'' game. We chose a random sampling approach over one which would simply split the season because we wanted to eliminate time trends in team strengths when describing how information accrued as more game results were observed. While our approach does not directly describe how predictive accuracy improves as games are played in their scheduled order, we anticipate that the patterns would be similar to what we observed.
\bibliographystyle{plainnat}
\section{Introduction}
Careful coordination of the travel behavior of tourists may reduce congestion, improve their travel experience, improve the quality of the environment, improve the quality of life for the local population, and avoid many of the problems caused by mass tourism \cite{ivanovic2009fresh, de2015personalized, ccolak2016understanding}.
Coordination has become even more important as leisure travel has continued to increase, both internationally and domestically, contributing 10\% of global GDP and 6\% of exports in 2015 \cite{unwto}. International tourism has grown by around 4.2\% to 6.6\% per year since 2010 and reached a record of 1.2 billion traveler arrivals in the same year \cite{unwto}. This paper proposes a personalized event recommendation system that mitigates the adverse effects of mass tourism while respecting the needs of all stakeholders.
Our solution can be considered a type of {\em travel demand management}, i.e., a paradigm to reduce or shift travel demand in space or time in order to reduce its negative impacts \cite{halvorsen2015improving}. However, little research has focused on managing tourism travel demand. Unlike commuting trips, where the travel times and destinations are fixed, leisure travelers are more flexible. Figure \ref{flex} illustrates how individuals' preferences over location and time bundles are relatively flat above a certain threshold, indicating the flexibility of tourists' travel \cite{de2015personalized}.
\begin{figure}
\begin{center}
\includegraphics[width=.8\linewidth]{Pics/newChart2}
\end{center}
\caption{Choice flexibilities.}
\label{flex}
\end{figure}
Recommendation systems have been successful tools in eCommerce. Current location recommendation algorithms, mostly adopted from movie or book recommendation, simply make recommendations based on inferred personal preferences. Applied to travel, however, this approach may lead to even more severe congestion and longer waiting times if the most popular locations are recommended at the same time. We argue that \emph{system efficiency}, i.e., the interplay between location preferences at the user level and traffic congestion at the system level, should be taken into account when making location recommendations, which can then serve as strategies for travel demand management.
We emphasize that it is the whole travel experience, including getting to the recommended destination, that matters to the tourist, and that the proposed trade-off does not require a great sacrifice on the part of the individual traveler.
The availability of large-scale geolocation data from mobile devices, such as the Call Detail Records (CDR) used in this study, offers an unprecedented opportunity for location-based service providers, transportation agencies, tourism departments, and governments to understand human mobility patterns, provide personalized information, and improve system-wide performance \cite{brockmann2006scaling, gonzalez2008understanding, song2010modelling}. This comprehensive picture of population-wide behaviors enables decision-makers to intervene at the system level.
Although Call Detail Records have been used to understand travel behaviors, they have rarely been used to manage travel demand or to recommend locations \cite{de2015personalized, jiangactivity, Alex2015}.
A recommendation system can therefore be built that not only satisfies personal preferences on the demand side but also makes the best use of the system capacity on the supply side.
In this paper, we use location recommendation to manage travel demand and to achieve system efficiency. We show that uncoordinated travel behaviors lead to severe traffic congestion. At the same time, we propose a method that allows transportation practitioners and authorities to optimize the trade-off between satisfied preferences and road congestion. We use matrix factorization to mine travelers' implicit preferences, taking advantage of underlying similarities among locations and travelers. We then formulate an optimization problem to maximize satisfied location preferences at the user level under pre-defined road congestion constraints. The method reveals the interplay between system congestion and user preferences. With an implementation on CDR data from Andorra under various compliance rates, we show the effectiveness of the method. For example, under a 100\% compliance rate, a 52\% reduction in travel delay (from 11.73 minutes per hour to 5.61 minutes per hour) sacrifices only 31\% of the satisfaction regarding the recommendations. Similarly, under a 60\% compliance rate, a 41\% reduction in travel delay (to 6.98 minutes per hour) sacrifices only 17\% of the satisfaction regarding the recommendations.
This paper is organized as follows: Section \ref{work} summarizes the current literature on travel demand management and location recommendation. The data required to perform the study are described in Section \ref{data}. Section \ref{method} presents the framework of the method and details each step, including preference inference and collective satisfaction maximization. Section \ref{case} describes a case study in Andorra, in which the performance of the proposed method is compared with a baseline model that recommends locations based solely on travelers' preferences, and the impact of the method is analyzed under various compliance rates. Section \ref{future} concludes the paper and discusses future work.
\section{Related works}
\label{work}
\subsection{Travel demand management}
Travel Demand Management (TDM) encompasses strategies that alter demand patterns to increase transportation system efficiency, instead of adding more capacity to the system \cite{halvorsen2015improving}. TDM has broad applications, including energy savings, air quality improvements, peak period congestion alleviation, etc.\ \cite{tdm}.
Categories of TDM strategies include economic policies, physical change measures, legal policies, and information or education measures \cite{garling2007travel, halvorsen2015improving}. Economic policies, the most popular TDM strategies, include taxing vehicles, congestion pricing, lowering transit costs, etc.\ \cite{santos2005urban, halvorsen2015improving}. Physical change measures, such as walking/cycling improvements and park-and-ride schemes, represent another category of TDM \cite{tdm_pr}. Legal policies prohibit traffic in some areas or impose parking controls.
Many of the above strategies are not applicable in the context of tourism. Relatively little TDM research has targeted tourism demand, which is more flexible than commuting-related travel. The following characteristics therefore differentiate this work from prior travel demand management research, namely:
\begin{itemize}
\item We focus on flexible travel demand, which can be manipulated at the destination and time-of-day levels.
\item We propose to use Call Detail Records, a large-scale and opportunistic data source, to understand travel patterns.
\end{itemize}
\subsection{Location recommendations}
In the early 2010s, several studies introduced traditional recommender engines to personalized location recommendation. Ye (2011) \cite{ye2011exploiting} introduced user-based and item-based Collaborative Filtering (CF) to location recommendation using user check-in data, based on the assumption that similar users have similar tastes and are interested in similar Points of Interest (POIs). Berjani (2011) \cite{berjani2011recommendation} employed the more effective and efficient matrix factorization for POI recommendation on check-in histories; regularized matrix factorization is used to learn the latent feature matrix, which has better performance than item-based CF. Recent research focuses on utilizing additional information, with geospatial factors, social networks, and temporal influences being three main examples. Some researchers argue that users prefer nearby locations rather than distant ones, which is defined as the geographical clustering phenomenon \cite{Zhang:2013:IPG:2525314.2525339, yuan2013time}. Others assume that friends share more common interests than non-friends in order to exploit social influence in recommendation \cite{Pham:jucs_17_4:a_clustering_approach_for, Gao:2014:PLR:2645710.2645776}. Finally, to make use of temporal influences on activities, some researchers make separate location recommendations for different temporal states \cite{Bao2015, yuan2013time, lian2014geomf}. Quercia (2010) \cite{l5694070} is the only work that makes recommendations using mobile phone data. However, that work is based exclusively on item-based CF, which is computationally inefficient and hard to scale \cite{ye2011exploiting, wang2013location}. Applying existing methods while ignoring service capacity constraints may result in traffic congestion and long waiting times, no matter how sophisticated these methods are in inferring preferences.
Therefore, this paper has the following improvements in location recommendation:
\begin{itemize}
\item We argue that capacity constraints are an important characteristic of location recommendation that is currently ignored by the literature. We integrate capacity constraints into the location recommendation method.
\item We develop a framework to recommend locations for system efficiency based on Call Detail Records. It can be used in other cities when call records, traffic counts and road network GIS files are available. It can also be applied to other longitudinal data sources, such as WiFi, GPS, AFC, etc.
\end{itemize}
\subsection{Next-location prediction}
Next-location prediction has become an increasingly popular topic in pervasive computing based on GPS, Bluetooth, check-in histories, etc. Accurate next-location prediction, given previous footprints from these data sources, is a significant building block benefiting many areas, including mobile advertising, public transit planning, and urban infrastructure management \cite{alhasouncity, gomes2013will, lu2013approaching, petzold2006comparison}. Different data sources vary in their spatial and temporal scales and in the availability of contextual information \cite{de2013interdependence}. Most researchers build Markov models and predict longitude and latitude as continuous variables based on previous travel trajectories \cite{asahara2011pedestrian, ashbrook2003using, gambs2012next}. Mathew \cite{mathew2012predicting} predicts next locations using a Hidden Markov Model with contextual information, such as activities and purposes. Domenico \cite{de2013interdependence} and Alhasoun \cite{alhasouncity} use mobility correlations, measured either by social interactions or by mutual information, to improve forecasting accuracy. Although existing research achieves acceptable accuracy in predicting next locations, the performance is poor when Call Detail Records are sparse \cite{alhasouncity}.
In this paper, we develop a new perspective that views mobility traces derived from Call Detail Records as sentences. We then introduce Recurrent Neural Networks, a successful tool in language modeling, into mobility prediction.
\section{Data}
\label{data}
In addition to the Call Detail Records, we make use of the topology of the Andorra road network, its capacity, and periodically recorded traffic counts. This information is combined to understand travel demand patterns and transportation system performance.
\subsection{Call Detail Records}
The Call Detail Records were originally collected for billing purposes. A record is stored whenever a mobile phone user connects to the network of a mobile carrier. Each Call Detail Record entry contains an encrypted user ID, the start and end times of the phone call, the ID of the connected cell tower, and the origin nationality and model type of the phone; see Figure \ref{cdr}. The cell tower ID is easily converted to the geographic location of the cell tower in the provider's network.
The anonymized CDR data used in this paper were collected in Andorra, a small country situated between France and Spain. As a case study, we target travelers visiting Andorra during a week in May 2015 to provide personalized recommendations for system efficiency. These 47,743 tourists include 20,311 tourists from France and 27,432 from Spain. The spatial distribution of the cell towers is shown in Figure \ref{tower}, with different colors representing the different cities in which the towers are located.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{Pics/cdr2.png}
\caption{Snapshot of Call Detail Records.}
\label{cdr}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{Pics/towerciti.png}
\caption{Cell tower distributions in Andorra. \normalfont{Each circle represents a cell tower. Cell towers in the same city are colored the same, with the legend an abbreviation of a city name.}}
\label{tower}
\end{figure}
\subsection{Traffic flow derived from CDR and traffic counts}
We assume that the number of travelers using their cell phones while they are on the highway is a constant fraction of the number of vehicles on the highway. Under this assumption, traffic flow derived from the Call Detail Records can be used to approximate actual traffic flow.
The Andorra road network is limited, with a single major route between each pair of cities. Traffic counts have been collected at key locations by local authorities using cameras to monitor internal mobility. These data are publicly available \cite{andorra_traffic}.
The GIS shapefiles of the road network were acquired from the Andorra Transportation Department. Important attributes used in the simulation include the connected cities, the number of lanes, capacities, and free-flow travel times, which enable the computation of travel times based on traffic flow. Free-flow travel times were obtained from the Google Maps API. Road capacities were obtained from an NCHRP report \cite{report_road}.
From the phone record times and cell tower information, we can generate an Origin-Destination matrix, i.e., the number of trips between cell towers. Combining it with the road network information gives traffic flow over time, by group, and between pairs of locations.
\section{Method}
\label{method}
The goal is to send recommendations for places to visit and when to visit them.
This is done in three steps, which are outlined in Figure \ref{frame}.
The first step infers travel demand, in terms of vehicle trips along road links, from the mobile phone records.
The second step infers personal location preferences from location traces with no explicit ratings. Using matrix factorization, we infer these implicit preferences over all locations via hidden factors, which correlate with the characteristics of both travelers and locations. With the inferred preferences, an optimization problem is formulated with the objective of maximizing satisfaction regarding the recommendations subject to tolerable congestion levels, which can be determined by the decision-makers.
Finally, we predict each traveler's next location from their historical traces and compare it with the location to be recommended; a recommendation is sent only if the predicted next location differs from the recommended one.
\begin{figure}
\begin{center}
\includegraphics[width=1\linewidth]{Pics/Loc_Rec_New}
\end{center}
\caption{Methodological framework.}
\label{frame}
\end{figure}
\subsection{Terminology}
The following terms are used throughout the method as defined below. \\
\textbf{User profile}. User profiles contain the longitudes, latitudes, timestamps, and characteristics of the user. A user profile $l_{u, g} $ is generated for each user based on individual mobility traces ($l_{u,t_{1}, g}$, $l_{u,t_{2}, g}$, ...). $g$ is the user group of user $u$ based on his/her characteristics directly obtained or inferred from CDR. $t_i$ is the number of presences at location $i$.\\
\textbf{Realized trips}. Realized trips $p_{ij}$ are calculated by the number of times individual $i$ traveled to location $j$. Realized trips are used as a proxy for location preferences. \\
\textbf{Idealized trips}. Idealized trips $\hat{p}_{ij}$ measure the preference of traveler $i$ regarding a location $j$ with no observed presences. They thus serve as a proxy for travelers' preferences regarding locations with no observations. \\
\textbf{Tolerable excess throughput}. Tolerable excess throughput is determined by the level of congestion that decision-makers consider tolerable.
\subsection{Traffic flow inference}
Mobile phone data provide an imperfect estimate of the travel demand and traffic flow along road links. In order to understand travel delays, the CDR data need to be processed to derive individual movements and to scale the CDR-based travel demand to the actual vehicle trips along each road link.
The first step is to extract the tower-to-tower Origin-Destination (O-D) matrix, aggregated from the individual movements from one cell tower to another. Peak hour traffic flow is learned separately, as traffic flow varies heterogeneously across links and hours of the day. The O-D pairs are then assigned to road links given the road network. The last step is to scale the aggregated movements to actual vehicle trips using the traffic counts as ground truth, as calculated in Equation \ref{flow_2}.
\begin{equation}
\label{flow_1}
R_{it} = \sum_{j,k}O_{jt}D_{kt}
\end{equation}
\begin{equation}
\label{flow_2}
TC_{it} = R_{it}\times \beta_{it}
\end{equation}
where $i$ is the index of the road link, $t$ is the index for the hour of the day, $O_jD_k$ represents the OD pair with origin $j$ and destination $k$ assigned to link $i$, $R_{it}$ is the number of vehicle trips along road link $i$, $TC_{it}$ is the actual traffic count, and $\beta_{it}$ is the scaling factor.
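To make the scaling step concrete, the following Python sketch carries out the computation of Equations (\ref{flow_1}) and (\ref{flow_2}) on a toy example. The column names, the OD-to-link assignment, and all numbers are illustrative assumptions and not the actual Andorra data or implementation.
\begin{verbatim}
# Illustrative sketch of the traffic flow inference step.
# Column names and the OD-to-link lookup are hypothetical placeholders.
import pandas as pd

# toy CDR-derived movements: one row per tower-to-tower movement
moves = pd.DataFrame({
    "user":   ["a", "a", "b", "c", "c"],
    "hour":   [8, 8, 8, 9, 9],
    "origin": ["T1", "T2", "T1", "T2", "T3"],
    "dest":   ["T2", "T3", "T2", "T3", "T1"],
})

# hypothetical assignment of OD pairs to road links (from the road network)
od_to_link = {("T1", "T2"): "L1", ("T2", "T3"): "L2", ("T3", "T1"): "L3"}
moves["link"] = moves.apply(lambda r: od_to_link[(r["origin"], r["dest"])], axis=1)

# R_it: CDR-based trips per link and hour
R = moves.groupby(["link", "hour"]).size().rename("cdr_trips").reset_index()

# ground-truth traffic counts at monitored links (toy values)
counts = pd.DataFrame({"link": ["L1", "L2"], "hour": [8, 8],
                       "traffic_count": [400, 250]})

# beta_it: scaling factor from CDR trips to vehicle trips
merged = R.merge(counts, on=["link", "hour"], how="left")
merged["beta"] = merged["traffic_count"] / merged["cdr_trips"]

# links without a counter borrow the average observed scaling factor
merged["beta"] = merged["beta"].fillna(merged["beta"].mean())
merged["vehicle_trips"] = merged["cdr_trips"] * merged["beta"]
print(merged)
\end{verbatim}
In practice, the OD-to-link assignment would come from routing the O-D pairs over the road network, and links without a traffic counter borrow the scaling factor from monitored links, as in the last step of the sketch.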
\subsection{Preference inference }
As no explicit review or rating regarding locations is available in CDR, we propose to use ``visiting frequencies'' as a proxy for location preferences. The next problem is to infer preferences regarding locations with no observed visits.
Matrix factorization, a type of latent factor model, is used to infer travelers\textquoteright{} preferences regarding new locations. This model characterizes both locations and users by vectors of factors inferred from location visiting patterns, mapping both travelers and locations to a joint latent factor space of dimensionality $k$. The latent factor space captures why and how travelers like each location based on hidden characteristics, which can be interpreted as personal interests or land use categories. High correspondence between location and user factors, based on the characterization of both the locations and the travelers, leads to a recommendation \cite{lian2014geomf}.
\begin{figure}
\begin{center}
\includegraphics[width=1\linewidth]{Pics/nmf}
\end{center}
\caption{Illustration of the preference inference methodology via matrix factorization to infer the preferences of travelers regarding locations with no observations. Matrix $U$ captures users' characteristics or interests (hidden factors). Location matrix $L$ characterizes the associations of the locations with the latent factors (point of interest categories). The multiplication of the two predicts travelers' preferences across the location space.}
\end{figure}
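As a minimal illustration of this step, the sketch below factorizes a toy visit-count matrix with scikit-learn's non-negative matrix factorization; the matrix, the number of latent factors $k=2$, and the variable names are illustrative assumptions rather than the actual implementation used for the CDR data.
\begin{verbatim}
# Minimal sketch of preference inference via matrix factorization.
# The visit-count matrix and the number of latent factors are illustrative.
import numpy as np
from sklearn.decomposition import NMF

# rows = travelers, columns = locations; entries = observed visiting frequencies
visits = np.array([
    [5, 0, 2, 0],
    [4, 1, 0, 0],
    [0, 3, 0, 4],
    [0, 2, 1, 5],
], dtype=float)

# factorize into traveler factors U (n_users x k) and location factors L (k x n_locs)
model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
U = model.fit_transform(visits)   # traveler-by-factor matrix
L = model.components_             # factor-by-location matrix

# reconstructed matrix: predicted ("idealized") preferences, including
# locations a traveler has never visited
pred = U @ L
unvisited_score = pred[0, 1]      # e.g. traveler 0, location 1 (no observed visit)
print(np.round(pred, 2), round(float(unvisited_score), 2))
\end{verbatim}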
\subsection{Optimization for system efficiency - individual preferences vs. congestion}
The key idea of the proposed method is to optimize travelers' location preferences under the constraint of acceptable congestion. The authorities then have the freedom to trade off between these two factors. An optimization model was built to maximize preferences regarding location recommendations subject to road capacity constraints. This model can easily be extended to other cases where capacity constraints exist by modifying the constraint accordingly.
This paper formalizes the traffic problem by modeling destination and time choice as follows: each traveler $i$ makes a choice of location $j$ and a travel day $t$. The choice is made based on personal utility $p_{ijt}$, which is assumed to be the preference regarding location $j$ inferred from the call records. Since travelers make selfish choices independently of one another, the system may settle into a suboptimal state. In a suboptimal state, the travel time delay of the whole system as well as the congestion are higher than they need to be. The set of destination choices that occurs when every traveler maximizes their own satisfied preferences is referred to as the user equilibrium flows \cite{de2015personalized, acemoglu2016informational}, which is similar to Wardrop's principles in route choice \cite{wardrop1952road}.
The objective function of the formulated optimization model aims to maximize the overall satisfied preferences regarding the location recommendations. The constraint function is determined by the congestion level deemed acceptable by the decision-makers.
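The following sketch illustrates the structure of such an optimization problem: satisfied preferences are maximized subject to per-bundle capacity (tolerable throughput) constraints, here as a linear programming relaxation solved with scipy. The preference matrix, the capacities, and the relaxation itself are illustrative assumptions; the actual model is formulated on the inferred preferences and the road congestion constraints described above.
\begin{verbatim}
# Sketch of the recommendation optimization: maximize satisfied preferences
# subject to capacity ("tolerable throughput") constraints.  Preferences and
# capacities below are toy values.
import numpy as np
from scipy.optimize import linprog

pref = np.array([          # preference of traveler i for location/time bundle j
    [0.9, 0.4, 0.1],
    [0.8, 0.7, 0.2],
    [0.3, 0.9, 0.5],
    [0.2, 0.8, 0.6],
])
cap = np.array([2, 1, 4])  # tolerable number of travelers per bundle
n, m = pref.shape

# decision variable x_ij (relaxed to [0, 1]): traveler i is recommended bundle j
c = -pref.flatten()                      # linprog minimizes, so negate

# each traveler gets exactly one recommendation: sum_j x_ij = 1
A_eq = np.zeros((n, n * m))
for i in range(n):
    A_eq[i, i * m:(i + 1) * m] = 1.0
b_eq = np.ones(n)

# capacity constraint per bundle: sum_i x_ij <= cap_j
A_ub = np.zeros((m, n * m))
for j in range(m):
    A_ub[j, j::m] = 1.0
b_ub = cap.astype(float)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, 1), method="highs")
x = res.x.reshape(n, m)
print(np.round(x, 2), -res.fun)   # recommendations and total satisfied preference
\end{verbatim}
Varying the capacity vector corresponds to varying the tolerable excess throughput, which traces out the trade-off between satisfied preferences and congestion.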
\subsection{Next-location prediction with Recurrent Neural Network}
In order to distribute recommendations more efficiently, the method sends recommendations only when the predicted next-locations are not in line with the ones to be recommended. In this section, we predict individual next-locations based on historical location traces.
The recurrent neural network (RNN) is an adaptation of the traditional feed-forward neural network that can process variable-length sequences of inputs. It has been used successfully in language modeling, learning word embeddings, and financial time series prediction \cite{pascanu2013construct, rumelhart1985learning}. In this project, we apply recurrent neural networks to mobility prediction by mapping mobility models to language models: the cell tower trace of each individual is modeled as a sentence and each cell tower as a word. We use a simple RNN architecture, with an input layer, a hidden layer, a long short-term memory (LSTM) layer, and an output layer. The cell tower with the maximum predicted probability is taken as the next location. The predicted next location can then be compared with the recommended location to determine the action to be taken.
RNN is advantageous in predicting next locations in two ways.\\
{\em Location sequences:} Travelers visit locations in a sequential way and RNN reads in data sequentially.\\
{\em Variable number of visited locations:} The heterogeneity in the size of location traces and frequency of mobile phone usage makes traditional machine learning techniques inapplicable. The ability to handle variable input lengths makes RNN appropriate in this situation \cite{de2015survey}.
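To illustrate the architecture described above, the sketch below builds a small next-location model with the Keras API; the vocabulary size, sequence length, layer sizes, and training data are placeholders and not the tuned values used in this project.
\begin{verbatim}
# Minimal sketch of the next-location RNN: cell tower traces are treated like
# sentences, each tower ID like a word.  All sizes/hyperparameters are placeholders.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

n_towers = 200          # size of the "vocabulary" of cell towers (placeholder)
max_len = 20            # traces are padded/truncated to this length (placeholder)

model = tf.keras.Sequential([
    layers.Embedding(input_dim=n_towers, output_dim=32, mask_zero=True),
    layers.LSTM(64),                                 # long short-term memory layer
    layers.Dense(n_towers, activation="softmax"),    # probability of each next tower
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# toy training data: padded sequences of tower IDs and the subsequent tower
X = np.random.randint(1, n_towers, size=(500, max_len))
y = np.random.randint(1, n_towers, size=(500,))
model.fit(X, y, epochs=1, batch_size=32, verbose=0)

# the tower with the highest predicted probability is the predicted next location
next_tower = int(np.argmax(model.predict(X[:1], verbose=0), axis=-1)[0])
\end{verbatim}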
\section{Application}
\label{case}
We performed extensive simulations to gain insights into the interplay between satisfaction regarding the recommendations and the travel delay caused by congestion. We vary the compliance rate across the population, i.e. the probability that travelers will follow the recommendation, in order to evaluate the potential traffic improvement of the recommendation system.
The simulation assumes that the individuals who do not comply will follow their preference with no behavioral change.
\subsection{Location recommendation for system efficiency}
Using the method described in Section \ref{method}, we simulate the average travel delay per hour and the overall idealized trips to assess the system-wide impacts and effectiveness of the method. The number of idealized trips is a measure of satisfaction regarding the recommendations, while the average travel delay measures the congestion externalities of the recommendations. In transportation, practitioners and planners model the relationship between traffic flow and travel time based on the characteristics of the road infrastructure. One of the most widely used methods is the Bureau of Public Roads (BPR) function \cite{akcelik1991travel}, which models travel time as a function of the ratio between actual traffic volume and road capacity, the volume-over-capacity (VOC) ratio \cite{ben1999discrete}, as shown in Equation (\ref{bpr1}).
\begin{equation}
\label{bpr1}
t_{\text{simulated}} = t_{\text{free-flow}} \times (1 + \alpha(V/C)^\beta)
\end{equation}
Average travel delay:
\begin{equation}
\label{bpr2}
\Delta t = t_{\text{simulated}} - t_{\text{free flow}}
\end{equation}
where $t_{\text{free-flow}}$ is the free flow travel time on the road segment, $t_{\text{simulated}}$ is the simulated travel time, $\Delta{t}$ is the delay caused by congestion, $\alpha$ and $\beta$ are parameters that are used to characterize the non-linear relationship between $V/C$ and $t_{simulated}$. The default parameter for the BPR equation are $\alpha=0.15$ and $\beta=4$.
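For illustration, the following small helper evaluates Equations (\ref{bpr1}) and (\ref{bpr2}) with the default parameters; the flow and capacity values are toy numbers.
\begin{verbatim}
# Sketch of the BPR travel-time and delay computation, using the default
# parameters alpha = 0.15 and beta = 4; flows and capacities are toy values.
def bpr_delay(free_flow_time, volume, capacity, alpha=0.15, beta=4):
    """Return (simulated travel time, delay) for one road link."""
    t_sim = free_flow_time * (1 + alpha * (volume / capacity) ** beta)
    return t_sim, t_sim - free_flow_time

# example: 10-minute free-flow link loaded at 120% of its capacity
t_sim, delay = bpr_delay(free_flow_time=10.0, volume=1800, capacity=1500)
print(round(t_sim, 2), round(delay, 2))   # about 13.11 and 3.11 minutes
\end{verbatim}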
We compare our method with a baseline model, which is referred to as the \emph{preference only} method. This method makes recommendations purely according to personal preferences, with no system performance taken into account. Table \ref{res_c} summarizes the average travel delay per hour and the idealized trips of: 1) the preference only method; 2) the proposed method under various compliance rates. Under full compliance, the average travel delay is 5.61 minutes per hour with 44930 idealized trips, which indicates that a 52\% reduction in congestion time sacrifices only 31\% of the idealized trips. When the compliance rate is 80\%, a 47\% reduction in congestion time is achieved with only 23\% of the idealized trips being sacrificed. The lower the compliance rate, the larger the number of idealized trips and the longer the travel delay.
\begin{table}
\begin{tabular}{p{2.5cm}|p{2cm}|p{2.5cm}}
\hline
\textbf{Scenario} & \textbf{$\Delta \text{\textbf{t}}$ (min/h)} & \textbf{Idealized trips} \\
\hline
Preference only & 11.73 & 64925 \\ \hline
100\% comp. & 5.61 & 44930 \\ \hline
80\% comp. & 6.17 & 49997 \\ \hline
60\% comp. & 6.98 & 53680 \\ \hline
40\% comp. & 8.37 & 57442 \\
\hline
20\% comp. & 10.40 & 61219 \\
\hline
\end{tabular}
\caption{Results comparisons.
\normalfont{$\Delta t$ is the average travel delay during the peak hour. Idealized trips measures travelers' satisfaction regarding the recommended locations. Comp. is short for compliance rate. Preference only is the baseline model where recommendations are based on predicted preferences.}}
\label{res_c}
\end{table}
Figure \ref{compl_ana1} shows the relationship between idealized trips and tolerable excess throughput, which indicates the individually perceived benefits under various levels of congestion control. Higher compliance rates allow more individual preferences to be satisfied, which in turn generates more traffic. The concave relationship reveals that preferences are satisfied quickly at first, with diminishing returns afterwards. The tolerable excess throughput enables decision-makers to keep congestion at an acceptable level.
\begin{figure}
\begin{center}
\includegraphics[width=1\linewidth]{Pics/compiance41}
\end{center}
\caption{Relationships between idealized trips and tolerable excess throughput.}
\label{compl_ana1}
\end{figure}
Figure \ref{analysis_2} reveals the interplay between idealized trips and average travel delay. A horizontal reference line shows that, for the same level of idealized trips, coordinated behaviors generate less congestion. Conversely, a vertical reference line shows that, for the same travel delay, coordinated behaviors are more effective. This indicates the importance of effective schemes that incentivize behavioral change in order to achieve these synergetic effects.
\begin{figure}
\begin{center}
\includegraphics[width=1\linewidth]{Pics/compiance51}
\end{center}
\caption{Interplay between idealized trips and average travel delay.}
\label{analysis_2}
\end{figure}
\subsection{Next-location prediction}
As a case study, we evaluate different methods using CDR data collected over three weeks in May 2015 in Andorra. We apply the method specifically to tourists, who can be identified by the country code in the CDR. We do not exclude travelers with few observations as long as their call records include more than one cell tower, which suggests the generalizability of the method. We use two settings with different spatial resolutions, the cell-tower level and the merged-cell-tower level.
We introduce two baseline models: the \enquote{Most Frequent} model and the Markov model. The \enquote{Most Frequent} model, referred to as the naive model, predicts the next location to be the most frequently visited location. The Markov model is built on the contextual co-occurrences between sequences of locations \cite{hsieh2015t, gambs2012next, lu2013approaching}. Various parameters were tuned to optimize the performance of the RNN, including the activation function, the dimension of the embedding layer, dropout rates, sample size, and batch size.
Tables \ref{compare} and \ref{compare_1} show that the naive model has 50\% and 63\% accuracy in predicting the next location at the cell-tower and merged-cell-tower level, respectively. The population-wide Markov model has 54\% accuracy at the cell-tower level, an 8\% improvement. At the merged-cell-tower level, however, the population-wide Markov model has lower accuracy than the naive model. In comparison, the RNN improves the accuracy of next-location prediction significantly, reaching 67\% and 78\% in the two settings, corresponding to improvements of 34\% and 41\% in accuracy in the two settings. This indicates that the RNN substantially improves the accuracy of next-location prediction.
\begin{table}
\caption{Cell-tower level}
\label{compare}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
 & Accuracy & Improvement \\ \hline
Naive model & 50\% & NA \\ \hline
Markov model & 54\% & 8\% \\ \hline
RNN & 67\% & 34\% \\ \hline
\end{tabular}
\caption{Merged-cell-tower level}
\label{compare_1}
\begin{tabular}{|c|c|c|}
\hline
 & Accuracy & Improvement \\ \hline
Naive model & 63\% & NA \\ \hline
Markov model & 57\% & -16\% \\ \hline
RNN & 78\% & 41\% \\ \hline
\end{tabular}
\end{center}
\end{table}
\section{Conclusions and Future work}
\label{future}
We have shown that individual travel decisions made without accounting for system efficiency lead to traffic congestion. Existing location recommendation algorithms exacerbate congestion by recommending popular locations. To address this issue, we develop a location recommendation method to manage travel demand. Call Detail Records are the raw input and are processed to infer personal location preferences, traffic congestion, and other information about both tourists and native travelers. This study, as far as we know, is the first to make location recommendations based on Call Detail Records. Most importantly, we factor in a special characteristic of location recommendation, namely capacity constraints, which distinguishes our system from existing recommendation methods. The trade-off between congestion relief and overall satisfied location preferences learned from the simulation results indicates that moderate sacrifices in individual utility lead to significant collective travel time savings.
The simulation results from our Andorra case study reveal a noticeable impact in reducing traffic congestion with moderate sacrifices of individual preferences. For example, under 100\% compliance, there is a 52\% reduction in travel delay (from 11.73 minutes per hour to 5.61 minutes per hour) with a 31\% dissatisfaction rate regarding the recommendations. Even with a much smaller compliance rate of 60\%, there is a 41\% reduction in travel delay (to 6.98 minutes per hour) with only a 17\% dissatisfaction rate. We use a Recurrent Neural Network approach to predict next locations from the input data. With the implementation of the RNN on the large-scale CDR collected in Andorra, this paper demonstrates the applicability of the method with accuracies of 67\% and 78\% at the cell-tower and merged-cell-tower levels, representing an improvement of greater than 30\% compared to the two baseline models.
The method developed in this paper specifically targets leisure travel, where travelers are relatively flexible in their decision-making at the spatial and temporal levels. We aim to divert travelers to their
\enquote{secondary} choices of location and time to visit, while gaining travel-time savings both for the individuals and for society as a whole.
This research opens up multiple directions for advancing the topic of location recommendation for system efficiency using Call Detail Records. In this study, we only use travel delay as the measure of system efficiency. However, other externality measures, such as air pollution or energy consumption, should also be factored in for a comprehensive evaluation of the system side. In addition, besides time and destination, other interesting factors could be included in the choice bundle, such as travel routes, budgets, etc.
The proof-of-concept experiments in this study demonstrate the effectiveness of our approach. The natural next step is to investigate how to incentivize users to sacrifice perceived benefits for better system performance. A comprehensive framework for applying the method in real situations, detailing the distribution channels, frequencies, and target markets, needs to be studied further from a marketing perspective.
An interesting extension of the paper concerns the information configurations presented to travelers, specifically how to strategically present information while accounting for travelers' willingness to accept it. A web-interface-based experiment could be developed to help understand travel behaviors and decision-making processes, and how the system dynamics perform under various recommendation configurations.
Call Detail Records constitute one longitudinal data source for understanding travel behaviors. Integrating them with other data sources could enhance the application: WiFi and Bluetooth data provide better spatial and temporal resolution, while social media data capture momentary feelings.
\section{Acknowledgement}
This project is a collaboration with the Changing Places Group at the MIT Media Lab, directed by Professor Kent Larson. We thank them for the data support and discussions.
\nocite{*}
\bibliographystyle{abbrv}
\section{Introduction}
In sports, the performance of players is frequently discussed by fans and journalists. An often discussed phenomenon in several sports is the ``hot hand'', meaning that players may enter a state where they experience extraordinary success.
For example, the former German football player Gerd M\"uller potentially was in a ``hot'' state when scoring 11 penalties in a row between 1975 and 1976. However, with 3 penalties missed in a row earlier in 1971, he potentially was in a ``cold'' state when taking these penalty kicks.
Academic research on the hot hand started with \citet{gilovich1985hot}.
In their seminal paper, they analyzed basketball free-throw data and found
no evidence for the hot hand, arguing that people tend to believe in the hot hand due to memory bias.
In the past decade, however, some studies provided evidence for the hot hand
while others failed to find such an effect (see \citealp{bar2006twenty}, for a review).
Hence, the existence of a hot hand effect in different sports still remains an open question.
In our analysis, we investigate a potential ``hot shoe'' effect of penalty takers in the German Bundesliga. Our data set comprises all penalties taken in the Bundesliga from the first season (1963/64) until the 2016/17 season, totaling $n = 3,482$ observations. Specifically, to explicitly account for the underlying (latent) form of a player, we consider hidden Markov models (HMMs) to investigate a potential hot shoe effect. HMMs were first used to investigate the hot hand by \citet{albert1993statistical} in an analysis of baseball data, and more recently by \citet{green2017hot}, who also analyze baseball data, and by \citet{otting2018hot}, who analyze data from darts.
There are several potential confounding factors when analyzing the
outcome of penalty kicks, such as the score of the match and the abilities
of the two involved players, i.e.\ the penalty taker and the opposing team's
goalkeeper. Accounting for these factors leads to a large number of covariates,
some of them also exhibiting a noteworthy amount of correlation/multicollinearity,
which makes model fitting and interpretation of parameters difficult.
Hence, sparser models are desirable. To tackle these problems,
variable selection is performed here by applying a LASSO penalization
approach (see \citealp{Tibshirani:96}).
Our results suggest clear evidence for two different states of penalty
takers, which can be tied to a cold and a hot state. In addition, the results
shed some light on exceptionally well-performing goalkeepers.
The remainder of the manuscript is structured as follows. The data on
penalty kicks from the German Bundesliga is described in Section~\ref{chap:data}.
In Section~\ref{chap:methods} the considered methodology is presented, namely a LASSO penalization technique for HMMs.
The proposed approach is further investigated in a short simulation study in
Section~\ref{chap:simulation} and the results of our hot shoe analysis on German Bundesliga data
are presented in Section~\ref{chap:results}. Finally, we discuss the results and conclude in Section~\ref{chap:concl}.
\section{Data}\label{chap:data}
The considered data set comprises all penalty kicks taken in the German
Bundesliga from its first season (1963/1964) until the end of the 2016/2017 season. Parts of the data have already been used in \citet{bornkamp2009penalty}.
In the analysis, we include all players who took at least 5 penalty kicks
during the time period considered, resulting in $n = 3,482$ penalty kicks taken by
310 different players. For these penalty kicks considered, 327 different
goalkeepers were involved. The resulting variable of interest is a binary variable
indicating whether the player converted
the penalty kick or not. Hence, we consider binary time series
$\{ y_{p,t} \}_{t=1,\ldots,T_{p}}$, with $T_p$ denoting the total number of
penalties taken by player $p$,
indicating whether player $p$ scored the
penalty at attempt $t$, i.e.:
$$
y_{p,t} =
\begin{cases}
1, & \text{if the $t$--th penalty kick is converted;} \\
0, & \text{otherwise.}
\end{cases}
$$
Since several other factors potentially affect the outcome of a penalty kick
(such as the score of the match), we consider further covariates. For the
choice of covariates, we follow \citet{dohmen2008professionals}, who analyzed
the effect of pressure when taking penalty kicks and, hence, accounted for
different potential confounders. These additional covariates include a
dummy indicating whether the match was played at home, the matchday,
the minute in which the penalty was taken, the experience of both the penalty
taker and the goalkeeper (quantified by the number of years the player played for a professional team), and the categorized score difference, with categories
more than 2 goals behind, 2 goals behind, 1 goal behind, 1 goal ahead, 2 goals
ahead or more than 2 goals ahead. Since the effect of the score might depend
on the minute of the match, we further include interaction terms between the
categories of the score difference and the minute. To consider rule changes for
penalty kicks (see \citealp{dohmen2008professionals}, for more details), we
include dummy variables for different time intervals (season
1985/86 and before, between season 1986/87 and season 1995/96, season 1996/1997,
and from season 1997/1998 up to season 2016/2017).
Table \ref{tab:descriptives} summarizes descriptive statistics for all
metric covariates considered as well as for $\{ y_{p,t} \}$.
\begin{table}[!htbp] \centering
\caption{Descriptive statistics.}
\label{tab:descriptives}
\begin{tabular}{@{\extracolsep{5pt}}lcccc}
\\[-1.8ex]\hline
\hline \\[-1.8ex]
& \multicolumn{1}{c}{Mean} & \multicolumn{1}{c}{St.\ Dev.} & \multicolumn{1}{c}{Min.} & \multicolumn{1}{c}{Max.} \\
\hline \\[-1.8ex]
successful penalty & 0.780 & 0.414 & 0 & 1 \\
matchday & -- & -- & 1 & 38 \\
home & 0.316 & 0.465 & 0 & 1 \\
experience (penalty taker) & 6.323 & 3.793 & 0 & 19 \\
experience (goalkeeper) & 5.343 & 4.187 & 0 & 19 \\
minute & 51.92 & 24.91 & 1 & 90 \\
\hline \\[-1.8ex]
\end{tabular}
\end{table}
Finally, to explicitly account for player-specific characteristics, we include
intercepts for all penalty takers as well as for all goalkeepers considered in our sample.
These parameters can be interpreted as the players penalty abilities (i.e., the penalty shooting skill
for the penalty taker and the (negative) penalty saving skill for the goalkeeper).
To illustrate the typical structure of our data, an example time series from our sample of the famous German attacker Gerd M\"uller, who played in the Bundesliga for Bayern
Munich from 1964 until 1979, is shown in Figure \ref{fig:data}. The corresponding part in the data set is shown in Tables \ref{tab:design1} and \ref{tab:design2}.
\begin{figure}[h ]
\centering
\includegraphics[scale=0.85]{data_option2.pdf}
\caption{Penalty history over time of the player Gerd M\"uller for the time period
from 1964 until 1979; a successful penalty is shown in yellow, a failure in black.}
\label{fig:data}
\end{figure}
\begin{table}[ht]
\centering
\caption{Part of the data set corresponding to the metric covariates.}
\label{tab:design1}
\scalebox{0.9}{
\begin{tabular}{cccccccc}
\hline
\thead{\textbf{player}} & \thead{\textbf{successful} \\ \textbf{penalty}} & \thead{\textbf{matchday}} & \thead{\textbf{home}} & \thead{\textbf{experience} \\ \textbf{(penalty taker)}} & \thead{\textbf{experience} \\ \textbf{(goalkeeper)}} & \thead{\textbf{minute}} & \thead{$\pmb{\cdots}$} \\
\hline
$\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\cdots$ \\
Gerd M\"uller & 1 & 15 & 0 & 1 & 3 & 90 & $\cdots$ \\
Gerd M\"uller & 1 & 25 & 0 & 1 & 3 & 81 & $\cdots$ \\
$\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\cdots$ \\
Gerd M\"uller & 0 & 7 & 0 & 13 & 2 & 37 & $\cdots$ \\
Gerd M\"uller & 0 & 8 & 1 & 13 & 8 & 68 & $\cdots$ \\
$\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\ddots$ \\
\hline
\end{tabular}}
\end{table}
\begin{table}[ht]
\centering
\caption{Part of the data set corresponding to the player- and goalkeeper specific effects.}
\label{tab:design2}
\scalebox{0.85}{
\begin{tabular}{ccccccccc}
\hline
\thead{\textbf{Hans} \\ \textbf{M\"uller} \\ \textbf{(player)}} & \thead{\textbf{Gerd} \\ \textbf{M\"uller} \\ \textbf{(player)}} & \thead{\textbf{Ludwig} \\ \textbf{M\"uller} \\ \textbf{(player)}} & \thead{$\pmb{\cdots}$} &
\thead{\textbf{G\"unter} \\ \textbf{Bernard} \\ \textbf{(goalkeeper)}} &
\thead{\textbf{Wolfgang} \\ \textbf{Schnarr} \\ \textbf{(goalkeeper)}} &
\thead{$\pmb{\cdots}$} &
\thead{\textbf{Dieter} \\ \textbf{Burdenski} \\ \textbf{(goalkeeper)}} &
\thead{\textbf{Wolfgang} \\ \textbf{Kneib} \\ \textbf{(goalkeeper)}} \\
\hline
$\vdots$ & $\vdots$ & $\vdots$ & $\cdots$ & $\vdots$ & $\vdots$ & $\cdots$ & $\vdots$ & $\vdots$ \\
0 & 1 & 0 & $\cdots$ & 0 & 1 & $\cdots$ & 0 & 0 \\
0 & 1 & 0 & $\cdots$ & 1 & 0 & $\cdots$ & 0 & 0 \\
$\vdots$ & $\vdots$ & $\vdots$ & $\cdots$ & $\vdots$ & $\vdots$ & $\cdots$ & $\vdots$ & $\vdots$ \\
0 & 1 & 0 & $\cdots$ & 0 & 0 & $\cdots$ & 0 & 1 \\
0 & 1 & 0 & $\cdots$ & 0 & 0 & $\cdots$ & 1 & 0 \\
$\vdots$ & $\vdots$ & $\vdots$ & $\cdots$ & $\vdots$ & $\vdots$ & $\cdots$ & $\vdots$ & $\ddots$ \\
\hline
\end{tabular}}
\end{table}
\section{Methods}\label{chap:methods}
Figure \ref{fig:data} indicates that there are phases in the career of Gerd M\"uller
where he scored several penalty kicks in a row, e.g.\ between 1975 and 1976 as already
mentioned in the introduction. In other parts of his career, however, successful penalty kicks
were occasionally followed by one or more missed penalty kicks. To
explicitly account for such phases, we consider HMMs, where the latent state
process can be interpreted as the underlying varying form of a player.
Moreover, \citet{stone2012measurement} argues that HMMs are more suitable
for analyzing the hot hand than analyses of serial correlation in the outcomes, since the observed
outcomes are only noisy measures of the underlying (latent) form of a player.
\subsection{Hidden Markov models}\label{sec:hmm}
In HMMs, the observations $y_{p,t}$ are assumed to be driven by an underlying state process
$s_{p,t}$, in the sense that the $y_{p,t}$ are generated by one of $N$ distributions
according to the Markov chain. In our application, the state process $s_{p,t}$
represents the underlying, varying form of a player. For notational simplicity,
we drop the player-specific subscript $p$ in the following. Switching between
the states is taken into account by the transition probability matrix (t.p.m.)
$\boldsymbol{\Gamma} = (\gamma_{ij})$, with $\gamma_{ij} = \Pr(s_{t} = j | s_{t-1} = i),\, i,j = 1,\ldots,N$.
We further allow for additional covariates at time $t$, $\boldsymbol{x}_t = (x_{1t}, \ldots, x_{Kt})'$,
each of which is assumed to have the same effect in each state,
whereas the intercept is assumed to vary across the states,
leading to the following linear state-dependent predictor:
$$
\eta^{(s_{t})} = \beta_0^{(s_t)} + \beta_1 x_{1t} + \ldots + \beta_K x_{Kt}.
$$
In fact, this is a simple Markov-switching regression model, where only the intercept varies across the states (see, e.g., \citealp{goldfeld1973markov}). The dependence structure of the HMM considered
is shown in Figure \ref{fig:HMM}.
\begin{figure}[h!]
\centering
\begin{tikzpicture}
\node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (A) at (2, -5) {$s_{t-1}$};
\node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (A1) at (-0.5, -5) {...};
\node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (B) at (4.5, -5) {$s_{t}$};
\node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (C) at (7, -5) {$s_{t+1}$};
\node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (C1) at (9.5, -5) {...};
\node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (Y1) at (2, -2.5) {$y_{t-1}$};
\node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (Y2) at (4.5, -2.5) {$y_{t}$};
\node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (Y3) at (7, -2.5) {$y_{t+1}$};
\node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (X3) at (7, 0) {$\mathbf{x}_{t+1}$};
\node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (X2) at (4.5, 0) {$\mathbf{x}_{t}$};
\node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (X1) at (2, 0) {$\mathbf{x}_{t-1}$};
\draw[-{Latex[scale=2]}] (A)--(B);
\draw[-{Latex[scale=2]}] (B)--(C);
\draw[-{Latex[scale=2]}] (A1)--(A);
\draw[-{Latex[scale=2]}] (C)--(C1);
\draw[-{Latex[scale=2]}] (A)--(Y1);
\draw[-{Latex[scale=2]}] (B)--(Y2);
\draw[-{Latex[scale=2]}] (C)--(Y3);,
\draw[-{Latex[scale=2]}] (X1)--(Y1);
\draw[-{Latex[scale=2]}] (X2)--(Y2);
\draw[-{Latex[scale=2]}] (X3)--(Y3);
\end{tikzpicture}
\caption{Graphical structure of the HMM considered. Each observation $y_{t}$ is
assumed to be generated by one of $N$ distributions according to the state process
$s_{t}$, which represents the underlying form of a player.
In addition, covariates $\mathbf{x}_{t}$ are assumed to affect $y_t$.}
\label{fig:HMM}
\end{figure}
For our response variable $y_t$, indicating whether the penalty attempt $t$
was successful or not, we assume $y_{t} \sim \text{Bern}(\pi_{t})$ and
link $\pi_t$ to our state-dependent linear predictor $\eta_t$ using the logit link function, i.e.\
$\text{logit}(\pi_t) = \eta^{(s_{t})}$. Defining an $N \times N$ diagonal matrix
$\boldsymbol{P}(y_t)$ with $i$--th diagonal element equal to
$\Pr(y_t|s_t = i)$ and assuming that the initial distribution $\boldsymbol{\delta}$
of a player is equal to the stationary distribution, i.e.\ the solution to
$\boldsymbol{\delta} \boldsymbol{\Gamma} = \boldsymbol{\delta}$, the likelihood for a single player $p$ is given by
\begin{equation*}
L_p(\pmb{\alpha}) = \boldsymbol{\delta} \mathbf{P}(y_{p1}) \mathbf{\Gamma}\mathbf{P}(y_{p2}) \dots \mathbf{\Gamma}\mathbf{P}(y_{pT_p}) \mathbf{1}\,,
\end{equation*}
with column vector $\mathbf{1}=(1,\ldots,1)' \in \mathbb{R}^N$
(see \citealp{zucchini2016hidden})
and parameter vector $\pmb{\alpha}=(\gamma_{11},\gamma_{12},\ldots,\gamma_{1N},\ldots,\gamma_{NN},\beta_0^{(1)},\ldots,\beta_0^{(N)},
\beta_1,\ldots,\beta_K)'$ collecting all unknown parameters. Specifically, formulating the likelihood as above amounts to running the forward algorithm, which allows one to calculate the
likelihood recursively at computational cost $\mathcal{O}(TN^2)$ only, thus rendering
numerical maximization of the likelihood feasible \citep{zucchini2016hidden}. To obtain the full
likelihood for all 310 players considered in the sample, we assume independence between
the individual players such that the likelihood is calculated by the
product of the individual likelihoods of the players:
\begin{equation*}
L(\pmb{\alpha}) = \prod_{p=1}^{310} L_p(\pmb{\alpha}) = \prod_{p=1}^{310} \boldsymbol{\delta} \mathbf{P}(y_{p1}) \mathbf{\Gamma}\mathbf{P}(y_{p2}) \dots \mathbf{\Gamma}\mathbf{P}(y_{pT_p}) \mathbf{1}.
\end{equation*}
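For illustration, the following Python sketch evaluates such a likelihood contribution for a single player via the forward algorithm in a 2-state Bernoulli HMM. For brevity, the covariate effects are omitted, so only the state-dependent intercepts enter; the parameter values are toy choices and not estimates from our data, and the actual model is fitted in R as described below.
\begin{verbatim}
# Illustrative forward-algorithm evaluation of the log-likelihood for one
# player's penalty series in a 2-state Bernoulli HMM (no covariates).
import numpy as np

def hmm_loglik(y, Gamma, beta0):
    """y: 0/1 penalty outcomes; Gamma: 2x2 t.p.m.; beta0: state intercepts."""
    pi = 1.0 / (1.0 + np.exp(-np.asarray(beta0)))      # P(success | state)
    # stationary distribution: delta Gamma = delta, subject to sum(delta) = 1
    A = np.vstack([(np.eye(2) - Gamma).T, np.ones(2)])
    delta = np.linalg.lstsq(A, np.array([0.0, 0.0, 1.0]), rcond=None)[0]
    ll, phi = 0.0, delta
    for t, obs in enumerate(y):
        p_obs = pi if obs == 1 else 1.0 - pi           # diagonal of P(y_t)
        v = phi * p_obs if t == 0 else (phi @ Gamma) * p_obs
        ll += np.log(v.sum())                          # scaled forward recursion
        phi = v / v.sum()
    return ll

Gamma = np.array([[0.9, 0.1], [0.2, 0.8]])
beta0 = np.array([1.8, 0.4])                           # "hot" and "cold" intercepts
y = np.array([1, 1, 0, 1, 1, 1, 0, 0, 1])
print(hmm_loglik(y, Gamma, beta0))
\end{verbatim}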
For our analysis of a potential hot shoe effect, we initially select $N=2$ states, which can potentially be tied to a ``hot'' and a ``cold'' state, i.e.\ states with superior and poor performance, respectively. The parameter vector $\pmb{\alpha}$ hence reduces to $\pmb{\alpha}=(\gamma_{11}, \gamma_{12}, \gamma_{21}, \gamma_{22},\beta_0^{(1)}, \beta_0^{(2)}, \beta_1, \ldots, \beta_K)'$. The choice of $N$ will be further discussed in Section \ref{chap:results}.
Parameter estimation is done by maximizing the likelihood numerically using \texttt{nlm()} in R \citep{rcoreteam}. However, if we consider all covariates introduced in our model formulation from Section~\ref{chap:data}, the model becomes rather complex, is hard to interpret, and multicollinearity issues might occur. Hence, we propose to employ a penalized likelihood approach based on a LASSO penalty,
which is described in the next section.
\subsection{Variable selection by the LASSO}\label{sec:lasso}
In order to obtain a sparse and interpretable model,
the estimation of the covariate effects will be performed by a
regularized estimation approach. The idea is to first set up a model with a rather large number
of possibly influential variables (in particular, with regard to the player-specific ability parameters)
and then to regularize the effect of the single covariates. This way, the variance of the parameter
estimates is diminished and, hence, usually lower prediction
error is achieved than with the unregularized maximum likelihood (ML) estimator.
The basic concept of regularization is to maximize a penalized version of the likelihood
$\ell(\boldsymbol{\alpha}) = \log\left(L(\boldsymbol{\alpha})\right)$.
More precisely, one maximizes the penalized log-likelihood
\begin{equation}
\ell_{\text{pen}}(\boldsymbol{\alpha}) = \log\left(L(\boldsymbol{\alpha})\right) - \lambda \sum_{k=1}^{K} |\beta_k|\,,
\label{eq:pen_likelihood}
\end{equation}
where $\lambda$ represents a tuning parameter, which controls the
strength of the penalization. The optimal value for this tuning parameter
has to be chosen either by cross-validation or suitable model selection criteria. The latter
usually constitute a compromise between the model fit (e.g., in terms of the likelihood)
and the complexity of the model. Frequently used are the Akaike information criterion (AIC; \citealp{Akaike:73}) or Bayesian information criterion (BIC; \citealp{Schwarz:78}). In the context of LASSO, the effective degrees of freedom for the AIC and BIC are estimated as the number of non-zero coefficients (see \citealp{zou2007degrees}). Since our longitudinal data structure with multiple short time series from 310 individuals renders cross-validation rather difficult, we select the tuning parameter $\lambda$ in the following by information criteria.
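As a small illustration, the helper below computes the AIC and BIC for a fitted penalized model, with the effective degrees of freedom taken as the number of non-zero coefficients plus the number of unpenalized parameters; the tolerance used to declare a coefficient non-zero and the toy inputs are assumptions.
\begin{verbatim}
# Sketch of the information criteria used to select lambda: the effective
# degrees of freedom are the number of non-zero coefficients (plus the
# unpenalized parameters).  The zero-threshold is an assumption.
import numpy as np

def information_criteria(loglik, beta, n_obs, n_other_params, tol=1e-4):
    """loglik: unpenalized log-likelihood at the fitted parameters;
    beta: fitted (penalized) covariate effects; n_other_params: e.g. the
    state-dependent intercepts and t.p.m. entries, which are not penalized."""
    df = np.sum(np.abs(beta) > tol) + n_other_params
    aic = -2.0 * loglik + 2.0 * df
    bic = -2.0 * loglik + np.log(n_obs) * df
    return aic, bic

# toy example: 3 of 10 fitted effects survive the penalization
beta_hat = np.array([0.4, 0.0, 0.0, -0.7, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0])
print(information_criteria(loglik=-1500.0, beta=beta_hat,
                           n_obs=3482, n_other_params=4))
\end{verbatim}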
Note that in contrast to the ridge penalty, which penalizes
the squared coefficients and shrinks them towards zero (see \citealp{HoeKen:70}),
the LASSO penalty on the absolute values of the coefficients, first proposed by \citet{Tibshirani:96},
can set coefficients to exactly zero and, hence, enforces variable selection.
Another advantage of the employed penalization is the way correlated predictors are treated.
For example, if two (or more) predictors are highly correlated, parameter estimates are stabilized by the penalization.
In such scenarios, the LASSO penalty tends to include only one of the predictors and
only includes a second predictor if it entails additional information for the response variable.
Therefore, if several variables possibly contain information on the outcome of the penalty,
they can be used simultaneously.
To incorporate the LASSO penalty in our setting, the non-differentiable $L_1$ norm $|\beta_k|$ in Eq.\ (\ref{eq:pen_likelihood}) is approximated as suggested by \citet{oelker2017uniform}. Specifically, the $L_1$ norm is approximated by $\sqrt{\beta_k^2 + c}$, where $c$ is a small positive number (say $c = 10^{-5}$). With this approximation of the penalty, the corresponding penalized likelihood is still maximized numerically using \texttt{nlm()} in R, as described above.
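The sketch below illustrates the penalized fitting scheme on a small simulated example: the approximated penalty is added to the negative log-likelihood of a 2-state Bernoulli HMM with common covariate effects, and the result is minimized numerically. Here \texttt{scipy.optimize.minimize} stands in for \texttt{nlm()} in R; the data, the parameterization of the t.p.m., and the starting values are illustrative assumptions.
\begin{verbatim}
# Sketch of the penalized fit: the approximated LASSO penalty
# sum_k sqrt(beta_k^2 + c) is added to the negative HMM log-likelihood
# and the result is minimized numerically on toy data.
import numpy as np
from scipy.optimize import minimize

def neg_pen_loglik(params, y, X, lam, c=1e-5):
    # parameters: 2 intercepts, 2 logit-scale off-diagonal t.p.m. entries, K slopes
    b0 = params[0:2]
    g12 = 1 / (1 + np.exp(-params[2]))
    g21 = 1 / (1 + np.exp(-params[3]))
    Gamma = np.array([[1 - g12, g12], [g21, 1 - g21]])
    beta = params[4:]
    eta = X @ beta                                   # common covariate effects
    delta = np.array([g21, g12]) / (g12 + g21)       # stationary distribution
    ll, phi = 0.0, delta
    for t in range(len(y)):
        pi = 1 / (1 + np.exp(-(b0 + eta[t])))        # success prob. in each state
        p_obs = pi if y[t] == 1 else 1 - pi
        v = phi * p_obs if t == 0 else (phi @ Gamma) * p_obs
        ll += np.log(v.sum())
        phi = v / v.sum()
    penalty = lam * np.sum(np.sqrt(beta ** 2 + c))   # differentiable |beta_k| approx.
    return -ll + penalty

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(300, 5))                # 5 covariates, toy data
y = rng.binomial(1, 0.7, size=300)                   # toy outcomes
start = np.zeros(4 + X.shape[1])
fit = minimize(neg_pen_loglik, start, args=(y, X, 2.0), method="BFGS")
print(np.round(fit.x[4:], 3))                        # penalized covariate effects
\end{verbatim}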
In the simulation study from the subsequent section, we also investigate a relaxed LASSO-type
version of our fitting scheme. The relaxed LASSO \citep{Mein:2007}
is known to often produce sparser models with equal or lower prediction loss than the regular LASSO.
To be more precise, for each value of the tuning parameter $\lambda$, in a final
step we fit an (unregularized) model that includes only the variables
corresponding to the non-zero parameters of the preceding LASSO estimates.
\section{A short simulation study}\label{chap:simulation}
We consider a simulation scenario similar to our real-data application, with a Bernoulli-distributed response variable, an underlying 2-state Markov chain and 50
covariates, 47 of which are noise covariates:
$$
y_t \sim \text{Bern}(\pi^{(s_{t})}),
$$
with
$$
\text{logit}(\pi^{(s_{t})}) = \eta^{(s_{t})} = \beta_0^{(s_t)} + 0.5 \cdot x_{1t} + 0.7 \cdot x_{2t} -0.8 \cdot x_{3t} + \sum_{j=4}^{50} 0\cdot x_{jt}\,.
$$
We further set $\beta_0^{(1)} = \text{logit}^{-1}(0.75), \beta_0^{(2)} = \text{logit}^{-1}(0.35)$ and
\begin{align*}
\pmb{\Gamma} =
\begin{pmatrix}
0.9 & 0.1 \\
0.1 & 0.9 \\
\end{pmatrix}.
\end{align*}
The covariate values were drawn independently from a uniform distribution within the interval $[-2, 2]$. The interval boundaries as well as the
corresponding effects $\beta_1, \beta_2$ and $\beta_3$ were chosen such that reasonable values for the response are obtained (i.e., moderate proportions of ones and zeros). We conduct 100 simulation runs,
in each run generating $T = 5100$ observations $y_t, t = 1, \ldots, 5100,$ from the model specified above, with the sample size being about the same size as for the real data application. Out of these 5100 simulated observations, the first 5000 are used for model fitting, whereas for the last 100 observations (which are denoted by $y_t^{\text{test}}$), the predictive performance of several different models is compared (see below).
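For illustration, data from this simulation setup can be generated along the following lines (an R sketch under the model specification above; the initial state is drawn uniformly, which is an additional assumption):
\begin{verbatim}
set.seed(1)
n_obs <- 5100; p <- 50
Gamma <- matrix(c(0.9, 0.1, 0.1, 0.9), nrow = 2, byrow = TRUE)
beta0 <- c(plogis(0.75), plogis(0.35))        # state-dependent intercepts
beta  <- c(0.5, 0.7, -0.8, rep(0, p - 3))     # 3 informative, 47 noise effects
x <- matrix(runif(n_obs * p, -2, 2), nrow = n_obs)
s <- numeric(n_obs); s[1] <- sample(1:2, 1)   # initial state drawn uniformly
for (tt in 2:n_obs) s[tt] <- sample(1:2, 1, prob = Gamma[s[tt - 1], ])
eta <- as.vector(beta0[s] + x %*% beta)
y   <- rbinom(n_obs, size = 1, prob = plogis(eta))
y_train <- y[1:5000]; y_test <- y[5001:5100]  # fitting vs. prediction part
\end{verbatim}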
For the choice of the tuning parameter $\lambda$, we consider a (logarithmic) grid of length 50, $\Lambda= \{5000, \ldots, 0.0001\}$. To compare the performance of the above
described LASSO-type estimation, we consider the following five fitting schemes:
\begin{itemize}
\item HMM without penalization (i.e., with $\lambda = 0$)
\item LASSO-HMM with $\lambda$ selected by AIC
\item LASSO-HMM with $\lambda$ selected by BIC
\item relaxed-LASSO-HMM with $\lambda$ selected by AIC
\item relaxed-LASSO-HMM with $\lambda$ selected by BIC
\end{itemize}
For all five methods considered, we calculate the mean squared error (MSE) of the
coefficients $\beta_1, \ldots, \beta_{50}$:
$$
\text{MSE}_{\boldsymbol{\beta}} = \dfrac{1}{50} \sum_{k=1}^{50} (\hat{\beta}_k -\beta_k)^2.
$$
We also calculate the MSE for the state-dependent intercepts $\beta_0^{(1)}$ and $\beta_0^{(2)}$, and for the entries of the t.p.m., $\gamma_{11}$ and $\gamma_{22}$, which is done analogously to the MSE for $\beta_1, \ldots, \beta_{50}$ shown above.
To further compare the predictive performance of the fitting schemes considered, we predict the distribution for each of the 100 out-of-sample observations, i.e.\ the success probabilities $\hat{\pi}_{t}^{\text{pred}}$, and compare these to the 100 remaining simulated observations $y_t^{\text{test}}$. For that purpose, we calculate the Brier score and the average predicted probability, which are given as follows:
\begin{align*}
\begin{split}
B & = \dfrac{1}{100} \sum_{t=1}^{100} (\hat{\pi}_{t}^{\text{pred}} - y_t^{\text{test}})^2 \\
A & = \dfrac{1}{100} \sum_{t=1}^{100} \left(\hat{\pi}_t^{\text{pred}} \mathds{1}_{\{ y_t^{\text{test}} = 1 \}} + (1 - \hat{\pi}_t^{\text{pred}}) \mathds{1}_{\{ y_t^{\text{test}} = 0 \}} \right),
\end{split}
\end{align*}
with $\mathds{1}_{\{.\}}$ denoting the indicator function. For the Brier score $B$, more accurate predictions correspond to lower values, with the lowest possible value being 0. For the average predicted probability $A$, higher values correspond to more precise predictions. In addition, the average predicted probability can be directly interpreted as the probability for a correct prediction.
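Both measures are straightforward to compute; for completeness, a short R sketch reads:
\begin{verbatim}
## Brier score and average predicted probability for out-of-sample data.
## pi_pred: predicted success probabilities; y_test: observed test outcomes.
brier_score   <- function(pi_pred, y_test) mean((pi_pred - y_test)^2)
avg_pred_prob <- function(pi_pred, y_test)
  mean(ifelse(y_test == 1, pi_pred, 1 - pi_pred))
\end{verbatim}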
The boxplots showing the MSEs over the 100 simulation runs, and the boxplots showing the Brier score and the average predicted probability for the predictive performance, are shown in Figures \ref{fig:simulation1} and \ref{fig:simulation2}, respectively. In both figures, the HMM without penalization is denoted by ``MLE", the LASSO-HMM with $\lambda$ selected by AIC and BIC are denoted by ``AIC" and ``BIC", respectively, and the relaxed-LASSO-HMM with $\lambda$ selected by AIC and BIC are denoted by ``AIC relaxed" and ``BIC relaxed", respectively. For the state-dependent intercepts $\beta_0^{(1)}$ and $\beta_0^{(2)}$, we see that the median MSE for the models with $\lambda$ chosen by the BIC is fairly high compared to all other models considered. A similar behaviour is observed for the entries of the t.p.m., $\gamma_{11}$ and $\gamma_{22}$.
The middle row in Figure \ref{fig:simulation1} shows the MSE for $\beta_1, \ldots, \beta_{50}$ as well as the corresponding true and false positive rates (denoted by TPR and FPR, respectively). The simulation results indicate that the non-noise covariates are detected by all models, whereas especially the LASSO-HMM with $\lambda$ selected by the AIC detects several noise covariates. A fairly low number of noise coefficients is selected by the relaxed-LASSO-HMM fitting scheme with $\lambda$ selected by the AIC and by the LASSO-HMM with $\lambda$ selected by the BIC. The corresponding medians for the FPR are 0.149 and 0.170, respectively. The most promising results are given by the relaxed-LASSO-HMM with $\lambda$ chosen by the BIC. In 84 out of 100 simulations, no noise covariates were selected by this model.
The left plot in the middle row of Figure \ref{fig:simulation1} shows that the median MSE of the coefficients $\beta_j$ for the LASSO-HMM with $\lambda$ selected by the BIC is higher than the median MSE of all other models considered. This arises since the BIC tends to select a rather high $\lambda$, which can be seen from the median MSE separated for the noise and non-noise coefficients (last row of Figure \ref{fig:simulation1}). With the fairly high $\lambda$ chosen by the BIC, i.e.\ with more shrinkage involved, the MSE for the non-noise coefficients is rather large. At the same time, since only a few covariates are selected with a rather high $\lambda$, the MSE for the noise coefficients is very low.
For the predictive performance of the methods considered, we see that visually there is no clear difference in the Brier score between the models, which is shown in the top panel of Figure \ref{fig:simulation2}. The results for the average predicted probability -- shown in the bottom panel of Figure \ref{fig:simulation2} -- confirm that the LASSO-HMM with $\lambda$ chosen by the BIC and the HMM without penalization perform worse than the other models considered.
The results of the simulation study are very encouraging, with the LASSO penalty allowing for variable selection. The performance in terms of MSE, TPR and FPR suggests that the LASSO-HMM with $\lambda$ selected by the BIC performs worst, with the MSE being higher than for the HMM without penalization. However, the LASSO-HMM with $\lambda$ selected by the AIC as well as the relaxed-LASSO-HMM with $\lambda$ selected by AIC and BIC, respectively, (partly substantially) outperform the HMM without penalization in terms of MSE. The relaxed-LASSO-HMM with $\lambda$ selected by the BIC performs best in terms of MSE, TPR, FPR and the predictive performance.
Finally, in all simulation runs the overall pattern was captured with regard to the true underlying state-dependent intercepts $\beta_0^{(1)}, \beta_0^{(2)}$ and diagonal entries of the t.p.m., i.e.\ $\gamma_{11}$ and $\gamma_{22}$. Fitting the LASSO-HMM and the relaxed-LASSO-HMM on the grid containing 50 different tuning parameters $\lambda$ took on average 47 minutes using a 3.4 GHz Intel\textsuperscript{\textcopyright} Core$^{\text{TM}}$ i7 CPU. This is remarkably fast, considering that both the LASSO-HMM and the relaxed-LASSO-HMM are fitted to the data for each value of $\lambda$, leading to 100 fitted models in total.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.69]{mse_sim_relax_new.pdf}
\caption{Boxplots of the MSE, TPR and FPR obtained in 100 simulation runs. ``AIC'' and ``BIC'' denote the LASSO-HMM fitting scheme with $\lambda$ chosen by AIC and BIC, respectively. ``AIC relaxed'' and ``BIC relaxed'' denote the relaxed-LASSO-HMM fitting scheme with $\lambda$ chosen by AIC and BIC, respectively. ``MLE'' denotes the HMM without penalization.}
\label{fig:simulation1}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.775]{mse_sim_relax_pred_new.pdf}
\caption{Boxplots of the Brier score (top panel) and the average predicted probability (bottom panel) obtained in 100 simulation runs. ``AIC'' and ``BIC'' denote the LASSO-HMM fitting scheme with $\lambda$ chosen by AIC and BIC, respectively. ``AIC relaxed'' and ``BIC relaxed'' denote the relaxed-LASSO-HMM fitting scheme with $\lambda$ chosen by AIC and BIC, respectively. ``MLE'' denotes the HMM without penalization.}
\label{fig:simulation2}
\end{figure}
\FloatBarrier
\section{Results}\label{chap:results}
We now apply our LASSO-HMM approach to the German Bundesliga penalty data.
For the analysis of a potential hot shoe effect, we include all covariates from Section~\ref{chap:data} into the predictor and choose $N=2$ for the number of states, which can be interpreted as hot and cold states, respectively.\footnote{A psychological reason for being hot or cold may be a higher/lower level of self-confidence.} This yields the following linear state-dependent predictor\footnote{Throughout this paper, all metric covariates are considered as linear. Since the main interest of this contribution is to investigate the LASSO penalty in HMMs, future research on the hot shoe could focus also on non-linear effects, for example for the matchday and the minute.}:
$$
\text{logit}(\pi^{(s_t)}) = \beta_0^{(s_t)}\, +\, \beta_1 \text{home}_{t}\, + \,\beta_2 \text{minute}_{t} \,+ \ldots + \,\beta_{100} \text{GerdMueller}_{t}\, + \ldots +\, \beta_{656}\text{WolfgangKneib}_{t}\,.
$$
Since the simulation study above indicates that the unpenalized maximum likelihood estimator is not appropriate for such a large number of covariates, we only consider the LASSO-type fitting schemes for the real data application. Specifically, since the relaxed-LASSO-HMM with $\lambda$ selected by the BIC showed the most promising results in the simulation, we focus on the results obtained by this fitting scheme, but we also present the results obtained by the two LASSO-HMM fitting schemes\footnote{The relaxed-LASSO-HMM with optimal $\lambda$ selected by the AIC yielded a rather unreasonable model, where almost all of the more than 600 covariates were selected and with partly unrealistically large corresponding estimated covariate effects, indicating some tendency of overfitting. For this reason, we excluded this model from the analysis.}.
The parameter estimates obtained (on the logit scale) indicate that the baseline level for scoring a penalty is higher in the model's state 1 than in state 2 ($\hat{\beta}_0^{(1)} = 1.422 > \hat{\beta}_0^{(2)} = -14.50$), thus indicating evidence for a hot shoe effect. State 1, hence, can be interpreted as a hot state, whereas state 2 refers to a cold state. In addition, with the t.p.m.\ estimated as
\begin{align*}
\hat{\mathbf{\Gamma}} =
\begin{pmatrix}
0.978 & 0.022 \\
0.680 & 0.320 \\
\end{pmatrix},
\end{align*}
there is high persistence in state 1, i.e.\ in the hot state, whereas the cold state 2 is a transient state, where switching to state 1 is most likely. The stationary distribution as implied by the estimated t.p.m.\ is $\hat{\boldsymbol{\delta}} = (0.969, 0.031)$, i.e., according to the fitted model, players are in the hot state about 96.9\% of the time and in the cold state about 3.1\% of the time. The diagonal elements of the t.p.m.\ for the other fitting schemes are obtained as $\hat{\gamma}_{11, \text{AIC}} = 0.989, \, \hat{\gamma}_{22, \text{AIC}} = 0.386$ and $\hat{\gamma}_{11, \text{BIC}} = 0.987, \, \hat{\gamma}_{22, \text{BIC}} = 0.368$, respectively.
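The stationary distribution reported above can be reproduced directly from the estimated t.p.m., e.g.\ by the following short R snippet:
\begin{verbatim}
## Stationary distribution implied by the estimated t.p.m.:
## solve delta %*% Gamma = delta subject to sum(delta) = 1.
Gamma_hat <- matrix(c(0.978, 0.022, 0.680, 0.320), nrow = 2, byrow = TRUE)
delta_hat <- solve(t(diag(2) - Gamma_hat + 1), rep(1, 2))
round(delta_hat, 3)   # 0.969 0.031
\end{verbatim}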
The corresponding state-dependent intercepts are obtained as $\hat{\beta}_{0, \text{AIC}}^{(1)} = -14.71, \, \hat{\beta}_{0, \text{AIC}}^{(2)} = 1.347$ and $\hat{\beta}_{0, \text{BIC}}^{(1)} = -18.83, \, \hat{\beta}_{0, \text{BIC}}^{(2)} = 1.360$, respectively. These results further confirm evidence for a hot shoe effect.
For the grid of potential tuning parameters $\lambda$, Figure \ref{fig:aicbic_fin} shows the progress of the AIC and BIC for the LASSO-HMM, indicating that the AIC selects a lower tuning parameter than the BIC. The corresponding coefficient paths of the LASSO-HMM approach together with the associated optimal tuning parameters selected by the AIC and BIC, respectively, are shown in Figure \ref{fig:pathslasso}.\footnote{We abstain from showing the coefficient paths plot for the relaxed LASSO-HMM model, because due to the unpenalized re-fit the paths look rather irregular.} No covariates are selected by the LASSO-HMM with $\lambda$ selected by the BIC, whereas Jean-Marie Pfaff, a former goalkeeper of Bayern Munich, is selected by the LASSO-HMM with $\lambda$ selected by the AIC and by the relaxed-LASSO-HMM model based on BIC. The negative coefficient implies that the odds for scoring a penalty decrease if Jean-Marie Pfaff is the goalkeeper of the opponent's team.
The coefficient is substantially larger in magnitude for the relaxed-LASSO model, since for this fitting scheme the model is re-fitted with $\lambda = 0$ on the set of selected coefficients from the first model fit (see above). An overview of the selected effects is given in Table \ref{tab:overview_effects}.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.75]{coef_paths_aic_bic_2.pdf}
\caption{Coefficient paths of all covariates considered in the LASSO-HMM models. The dashed lines indicate the optimal penalty parameters $\lambda$ selected by AIC and BIC, respectively. For the $\lambda$ selected by the BIC, no covariates are selected, whereas for the AIC one covariate is selected (see also Table \ref{tab:overview_effects}).
}
\label{fig:pathslasso}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.75]{aic_bic_2.pdf}
\caption{Paths of AIC and BIC in the LASSO-HMM models. The vertical lines indicate the optimal penalty parameters $\lambda$ selected by AIC and BIC, respectively.
}
\label{fig:aicbic_fin}
\end{figure}
\begin{table}[ht]
\centering
\caption{Overview of selected players and goalkeepers by all models considered.}
\label{tab:overview_effects}
\begin{tabular}{lrrr}
\hline
& \thead{\textbf{BIC}} & \thead{\textbf{AIC}} & \thead{\textbf{BIC} \\ \textbf{relaxed}}\\
\hline
Jean-Marie Pfaff (goalkeeper) & 0.000 & -0.001 & -0.125 \\
Rudolf Kargus (goalkeeper) & 0.000 & 0.000 & 0.000\\
$\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ \\
Manuel Neuer (goalkeeper) & 0.000 & 0.000 & 0.000\\
Lothar Matth\"aus (player) & 0.000 & 0.000 & 0.000\\
$\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ \\
Nuri Sahin (player) & 0.000 & 0.000 & 0.000\\
\hline
\end{tabular}
\end{table}
\FloatBarrier
\section{Discussion}\label{chap:concl}
The modelling framework developed in this contribution, a LASSO-HMM and a relaxed-LASSO-HMM, respectively, allows for implicit variable selection in the state-dependent process of HMMs. The performance of the variable selection is first investigated in a simulation study, indicating that the relaxed-LASSO-HMM with the corresponding tuning parameter selected by the BIC is the best-performing fitting scheme considered.
For the analysis of a hot shoe effect, we fit both LASSO-HMMs and relaxed-LASSO-HMMs to data on penalty kicks in the German Bundesliga. Factors potentially affecting the performance of penalty-takers, such as the current score of the match, are included in the predictors. In addition, dummy variables for the penalty-takers as well as for the goalkeepers are included. The results indicate evidence for a hot shoe effect. In addition, our results also shed some light on exceptionally performing players such as Jean-Marie Pfaff, a former goalkeeper of Bayern Munich, who has been selected by several of the considered fitting schemes.
A clear limitation of the real data application considered is the problem of self-selection. Since the manager (or the team) can decide which player has to take the penalty, players who have been rather unsuccessful in the past may not take penalty kicks anymore. However, several teams have demonstrated in the past that they rely on and trust in a certain player for taking penalty kicks, regardless of the outcome of the kick. Whereas penalty kicks in football yield a time series due to the way in which penalties take place, the corresponding time intervals between actions are irregular. Although our data cover all attempts in the German Bundesliga, there are sometimes several months between two attempts. Moreover, some players might be involved in penalty kicks in matches from other competitions such as the UEFA Champions League, the UEFA European Cup or in matches with their national teams. These could also affect whether a player is currently in a hot or cold state. From this perspective, the time series of Bundesliga penalties could be considered as partly incomplete for some players.
From a methodological point of view, the number of states selected (i.e., $N=2$) may be too coarse for modelling the underlying form of a player. Considering a continuously varying underlying state variable instead may be more realistic, since gradual changes in a player's form could then be captured. This could be achieved by considering state-space models, where regularized estimation approaches are a natural first direction for further research. The motivation in this contribution for $N = 2$ states, however, was to approximate the potential psychological states in a simple manner for ease of interpretation, e.g., in the sense of hot (``player is confident'') or cold (``player is nervous'') states. Moreover, our main focus was to show the usefulness of our method developed in a rather simple/intuitive setting with two states. In addition, future research could focus on regularization in HMMs where not only the intercept (as considered here), but also the parameters $\beta_j$ are allowed to depend on the current state. Regularization in this model formulation could be taken into account by applying so-called fused LASSO techniques (see, e.g., \citealp{Lasso:GerTut:2010}), where the parameters could either be shrunk to zero or to the same size for all states considered. For the former, a single tuning parameter for each state has to be introduced, which brings the need for efficient tuning strategies.
The modeling framework developed here can easily be tailored to other applications, where implicit variable selection in HMMs is desired. For the application considered in this contribution, i.e.\ an analysis of a potential hot hand/hot shoe effect, other sports such as basketball or hockey could be analyzed. Potential covariates --- whose corresponding effects are penalized --- in these sports cover the shot type, shot origin, and game score, to name but a few.
\subsection*{Acknowledgements}
We want to thank the group of researchers B. Bornkamp, A. Fritsch, L. Geppert, P. Gn\"andinger, K. Ickstadt and O. Kuss
for providing the German Bundesliga penalty data set.
\newpage
\FloatBarrier
\bibliographystyle{apalike}
The Traveling Tournament Problem (TTP), first systematically introduced in~\cite{easton2001traveling}, is a hard but interesting sports scheduling problem inspired by Major League Baseball.
This problem is to find a double round-robin tournament satisfying several constraints that minimizes the total distances traveled by all participant teams.
There are $n$ participating teams in the tournament, where $n$ is always even. Each team should play $2(n-1)$ games in $2(n-1)$ consecutive days. Since each team can only play one game on each day, there are exactly $n/2$ games scheduled on each day.
There are exactly two games between any pair of teams,
where one game is held at the home venue of one team and the other one is held at the home venue of the other team.
The two games between the same pair of teams could not be scheduled in two consecutive days.
These are the constraints for TTP. We can see that it is not easy to construct a feasible schedule.
Now we need to find an optimal schedule that minimizes the total traveling distances by all the $n$ teams.
A well-known variant of TTP is TTP-$k$, which has one more constraint:
each team is allowed to take at most $k$ consecutive home or away games.
If $k$ is very large, say $k=n-1$, then this constraint loses its meaning and the problem becomes TTP again. In this case, a team can make its total travel distance as short as in an optimal solution to the traveling salesman problem. On the other hand,
in a sports schedule, it is generally believed that home stands and road trips should alternate as regularly as possible for each team~\cite{campbell1976minimum,thielen2012approximation}.
The smaller the value of $k$, the more frequently teams have to return their homes.
TTP and its variants have been extensively studied in the literature~\cite{kendall2010scheduling,rasmussen2008round,thielen2012approximation,DBLP:conf/mfcs/XiaoK16}.
\subsection{Related Work}
In this paper, we will focus on TTP-2. We first survey the results on TTP-$k$.
For $k=1$, TTP-1 is trivial and there is no feasible schedule~\cite{de1988some}.
But when $k\geq 2$, the problem becomes hard. Feasible schedules exist, but it is not easy to construct one. Even a good
brute-force algorithm with single-exponential running time has not been found yet.
In the online benchmark~\cite{trick2007challenge} (there is also a new benchmark website, updated by Van Bulck \emph{et al.}~\cite{DBLP:journals/eor/BulckGSG20}), most instances with more than $10$ teams are still unsolved completely even by using high-performance machines.
The NP-hardness of TTP was proved in \cite{bhattacharyya2016complexity}.
TTP-3 was also proved to be NP-hard \cite{thielen2011complexity} and the idea of the proof can be extended to prove the NP-hardness of TTP-$k$ for each constant $k\geq 4$~\cite{DBLP:journals/corr/abs-2110-02300}.
Although the hardness of TTP-2 has not been theoretically proved yet, most people believe TTP-2 is also hard since no single exponential algorithm to find an optimal solution to TTP-2 has been found after 20 years of study.
In the literature, there is a large number of contributions on approximation algorithms~\cite{miyashiro2012approximation,yamaguchi2009improved,imahori2010approximation,westphal2014,hoshino2013approximation,thielen2012approximation,DBLP:conf/mfcs/XiaoK16} and heuristic algorithms~\cite{easton2003solving,lim2006simulated,anagnostopoulos2006simulated,di2007composite,goerigk2014solving}.
In terms of approximation algorithms, most results are based on the assumption that the distance holds the symmetry and triangle inequality properties. This is natural and practical in the sports schedule.
For TTP-3, the first approximation algorithm, proposed by Miyashiro \emph{et al.}, admits a $2+O(1/n)$ approximation ratio~\cite{miyashiro2012approximation}.
They first proposed a randomized $(2+O(1/n))$-approximation algorithm and then derandomized the algorithm without changing the approximation ratio~\cite{miyashiro2012approximation}.
Then, the approximation ratio was improved to $5/3+O(1/n)$ by Yamaguchi \emph{et al.}~\cite{yamaguchi2009improved} and to $139/87+O(1/n)$ by Zhao \emph{et al.}~\cite{zhao2022improved}.
For TTP-4, the approximation ratio has been improved to $17/10+O(1/n)$~\cite{zhao2022improved}. For TTP-$k$ with $k\geq 5$, the approximation ratio has been improved to $(5k-7)/(2k)+O(k/n)$~\cite{imahori2010approximation}.
For TTP-$k$ with $k\geq n-1$, Imahori \emph{et al.} \cite{imahori2010approximation} proved an approximation ratio of 2.75. At the same time, Westphal and Noparlik \cite{westphal2014} proved an approximation ratio of 5.875 for any choice of $k\geq 4$ and $n\geq 6$.
In this paper, we will focus on TTP-2.
The first record of TTP-2 seems from the schedule of a basketball conference of ten teams
in~\cite{campbell1976minimum}. That paper did not discuss the approximation ratio.
In fact, any feasible schedule for TTP-2
is a 2-approximation solution under the metric distance~\cite{thielen2012approximation}.
Although any feasible schedule will not have a very bad performance, no simple construction of feasible schedules is known now.
In the literature, all known algorithms for TTP-2 are different for $n/2$ being even and odd. This may be caused by different structural properties. One significant contribution to TTP-2 was done by Thielen and Westphal~\cite{thielen2012approximation}.
They proposed a $(1+16/n)$-approximation algorithm for $n/2$ being even and a $(3/2+6/n)$-approximation algorithm for $n/2$ being odd, and asked as an open problem whether the approximation ratio could be improved to $1+O(1/n)$ for the case that $n/2$ is odd. For even $n/2$, the approximation ratio was improved to $1+4/n$ by Xiao and Kou~\cite{DBLP:conf/mfcs/XiaoK16}.
There is also a known algorithm with the approximation ratio $1+\frac{\lceil\log_2 {n}\rceil+2}{2(n-2)}$, which is better for $n\leq 32$~\cite{DBLP:conf/atmos/ChatterjeeR21}.
For odd $n/2$, two papers solved Thielen and Westphal's open problem independently by giving algorithms with approximation ratio $1+O(1/n)$: Imahori~\cite{imahori20211+} proposed an idea to solve the case of $n\geq 30$ with an approximation ratio of $1+24/n$; in a preliminary version of this paper~\cite{DBLP:conf/ijcai/ZhaoX21}, we provided a practical algorithm with an approximation ratio of $1+12/n$. In this version, we will further improve the approximation ratio by using refined analysis.
\subsection{Our Results}
In this paper, we design two practical algorithms for TTP-2, one for even $n/2$ and one for odd $n/2$.
For even $n/2$, we first propose an algorithm with an approximation ratio of $1+\frac{3}{n}-\frac{10}{n(n-2)}$ by using the packing-and-combining method.
Then, we apply a divide-and-conquer method to our packing-and-combining method, and propose a more general algorithm with an approximation ratio $1+\frac{3}{n}-\frac{18}{n(n-2)}$ for $n\geq 16$ and $n\equiv0 \pmod 8$. Our results improve the previous result of $1+\frac{4}{n}+\frac{4}{n(n-2)}$ in~\cite{DBLP:conf/mfcs/XiaoK16}.
For odd $n/2$, we prove an approximation ratio of $1+\frac{5}{n}-\frac{10}{n(n-2)}$, improving the result of $\frac{3}{2}+\frac{6}{n-4}$ in~\cite{thielen2012approximation}.
In practice, our algorithms are easy to implement and run very fast.
Experiments show that our results can beat all previously-known solutions on the 33 tested benchmark instances in \cite{trick2007challenge}: for even $n/2$ instances, the average improvement is $2.86\%$; for odd $n/2$ instances, the average improvement is $8.65\%$.
Partial results of this paper were presented at the 27th International Computing and Combinatorics Conference (COCOON 2021)~\cite{DBLP:conf/cocoon/ZhaoX21} and the 30th International Joint Conference on Artificial Intelligence (IJCAI 2021)~\cite{DBLP:conf/ijcai/ZhaoX21}. In \cite{DBLP:conf/cocoon/ZhaoX21}, we proved an approximation ratio of $1+\frac{3}{n}-\frac{6}{n(n-2)}$ for TTP-2 with even $n/2$. In \cite{DBLP:conf/ijcai/ZhaoX21}, we proved an approximation ratio of $1+\frac{12}{n}+\frac{8}{n(n-2)}$ for TTP-2 with odd $n/2$. In this paper, we make further improvements for both cases. In the experiments, we also get a better performance.
\section{Preliminaries}\label{sec_pre}
We will always use $n$ to denote the number of teams and let $m=n/2$, where $n$ is an even number.
We use $\{t_1, t_2, \dots, t_n\}$ to denote the set of the $n$ teams.
A sports scheduling on $n$ teams is \emph{feasible} if it holds the following properties.
\begin{itemize}
\item \emph{Fixed-game-value}: Each team plays two games with each of the other $n-1$ teams, one at its home venue and one at its opponent's home venue.
\item \emph{Fixed-game-time}: All the games are scheduled in $2(n-1)$ consecutive days and each team plays exactly one game in each of the $2(n-1)$ days.
\item \emph{Direct-traveling}: All teams are initially at home before any game begins, all teams will come back home after all games, and a team travels directly from its game venue on the $i$th day to its game venue on the $(i+1)$th day.
\item \emph{No-repeat}: No two teams play against each other on two consecutive days.
\item \emph{Bounded-by-$k$}: The number of consecutive home/away games for any team is at most $k$.
\end{itemize}
The TTP-$k$ problem is to find a feasible schedule minimizing the total traveling distance of all the $n$ teams.
The input of TTP-$k$ contains an $n \times n$ distance matrix $D$ that indicates the distance between each pair of teams.
The distance from the home of team $i$ to the home of team $j$ is denoted by $D_{i,j}$.
We also assume that $D$ satisfies the symmetry and triangle inequality properties, i.e., $D_{i,j}=D_{j,i}$ and $D_{i,j} \leq D_{i,h} + D_{h,j}$ for all $i,j,h$. We also let $D_{i,i}=0$ for each $i$.
We will use $G$ to denote an edge-weighted complete graph on $n$ vertices representing the $n$ teams.
The weight of the edge between two vertices $t_i$ and $t_j$ is $D_{i,j}$, the distance from the home of $t_i$ to the home of $t_j$.
We also use $D_i$ to denote the weight sum of all edges incident on $t_i$ in $G$, i.e., $D_i=\sum_{j=1}^n D_{i,j}$.
The sum of all edge weights of $G$ is denoted by $D_G$.
We let $M$ denote a minimum weight perfect matching in $G$. The weight sum of all edges in $M$ is denoted by $D_M$.
We will consider the endpoint pair of each edge in $M$ as a \emph{super-team}. We use $H$ to denote the complete graph on the $m$ vertices representing the $m$ super-teams. The weight of the edge between two super-teams $u_i$ and $u_j$, denoted by $D(u_i,u_j)$, is the sum of the weight of the four edges in $G$ between one team in $u_i$ and one team in $u_j$, i.e., $D(u_i, u_j)=\sum_{t_{i'}\in u_i \& t_{j'}\in u_j}D_{i',j'}$. We also let $D(u_i,u_i)=0$ for any $i$.
We give an illustration of graphs $G$ and $H$ in Figure~\ref{fig001}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.8]{001.pdf}
\caption{An illustration of graphs $G$ and $H$, where there four dark lines form a minimum perfect matching $M$ in $G$}
\label{fig001}
\end{figure}
The sum of all edge weights of $H$ is denoted by $D_H$. It holds that
\begin{eqnarray} \label{eqn_GH}
D_H=D_G-D_M.
\end{eqnarray}
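For concreteness, the super-team distance $D(u_i,u_j)$ can be computed directly from the distance matrix $D$, as in the following illustrative R sketch:
\begin{verbatim}
## Distance between super-teams u_i = {t_{2i-1}, t_{2i}} and
## u_j = {t_{2j-1}, t_{2j}}: the sum of the four cross distances.
super_team_distance <- function(D, i, j) {
  if (i == j) return(0)
  sum(D[c(2 * i - 1, 2 * i), c(2 * j - 1, 2 * j)])
}
\end{verbatim}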
\subsection{Independent Lower Bound and Extra Cost}
The \emph{independent lower bound} for TTP-2 was firstly introduced by Campbell and Chen~\cite{campbell1976minimum}.
It has become a frequently used lower bound.
The basic idea of the independent lower bound is to obtain a lower bound $LB_i$ on the traveling distance of a single team $t_i$ independently without considering the feasibility of other teams.
The road of a team $t_i$ in TTP-$2$, starting at its home venue and coming back home after all games, is called
an \emph{itinerary} of the team. The itinerary of $t_i$ is also regarded as a graph on the $n$ teams,
which is called the \emph{itinerary graph} of $t_i$.
In an itinerary graph of $t_i$, the degree of every vertex except $t_i$ is 2, since team $t_i$ visits the venue of each other team exactly once. The degree of $t_i$ is greater than or equal to $n$: each road trip contains at most two away games, so $t_i$ makes at least $\lceil (n-1)/2\rceil = n/2$ road trips, each contributing 2 to the degree of $t_i$.
Furthermore, for any other team $t_j$, there is at least one edge between $t_i$ and $t_j$, because $t_i$ can visit at most 2 teams on each road trip and hence team $t_i$ either travels from its home directly to team $t_j$ or returns home directly after visiting team $t_j$. We decompose the itinerary graph of $t_i$ into two parts: a spanning star centered at $t_i$ (a spanning tree in which only vertex $t_i$ has degree $> 1$) and the forest formed by the remaining edges. Note that in the forest, only $t_i$ may be a vertex of degree $\geq 2$ and all other vertices are degree-1 vertices. See Figure~\ref{fig002} for illustrations of the itinerary graphs.
\begin{figure}[ht]
\centering
\includegraphics[scale=1.1]{002.pdf}
\caption{The itinerary graph of $t_i$, where the light edges form a spanning star and the dark edges form the remaining forest. In the right example (b), the remaining forest is a perfect matching of $G$}
\label{fig002}
\end{figure}
For different itineraries of $t_i$, the spanning star is fixed and only the remaining forest may be different.
The total distance of the spanning star is $\sum_{j\neq i} D_{i,j}=D_i$. Next, we show an upper and lower bound on the total distance of the remaining forest. For each edge between two vertices $t_{j_1}$ and $t_{j_2}$ ($j_1,j_2\neq i$), we have that $D_{j_1,j_2}\leq D_{i,j_1}+D_{i,j_2}$ by the triangle inequality property. Thus, we know that the total distance of the remaining forest is at most the total distance of the spanning star. Therefore, the distance of any feasible itinerary of $t_i$ is at most $2D_i$.
\begin{lemma}\label{perfecti}
The traveling distance of any itinerary of a team $t_i$ is at most $2D_i$.
\end{lemma}
Lemma~\ref{perfecti} implies that the worst itinerary only consists of road trips containing one game.
On the other hand, the distance of the remaining forest is at least as that of a minimum perfect matching of $G$ by the triangle inequality.
Recall that we use $M$ to denote a minimum perfect matching of $G$.
\begin{lemma}\label{perfecti+}
The traveling distance of any itinerary of a team $t_i$ is at least $D_i+D_M$.
\end{lemma}
Thus, we have a lower bound $LB_i$ for each team $t_i$:
\begin{eqnarray} \label{eqn_lower1}
LB_i=D_i+D_M.
\end{eqnarray}
The itinerary of $t_i$ to achieve $LB_i$ is called the \emph{optimal itinerary}.
The \emph{independent lower bound} for TTP-2 is the traveling distance such that all teams reach their optimal itineraries, which is denoted as
\begin{eqnarray} \label{eqn_lowerbound}
LB=\sum_{i=1}^n LB_i =\sum_{i=1}^n (D_i +D_M)=2D_G+nD_M.
\end{eqnarray}
Lemma~\ref{perfecti} and (\ref{eqn_lower1}) imply that
\begin{lemma}\label{useful}
The traveling distance of any feasible itinerary of a team $t_i$ is at most $2D_i\leq 2LB_i$.
\end{lemma}
Hence, we can get that
\begin{theorem} \label{twoapp}
Any feasible schedule for TTP-2 is a 2-approximation solution.
\end{theorem}
Theorem~\ref{twoapp} was first proved in~\cite{thielen2012approximation}.
For any team, it is possible to reach its optimal itinerary. However, it is impossible for all teams to reach their optimal itineraries synchronously in a feasible schedule even for $n=4$~\cite{thielen2012approximation}. It is easy to construct an example. So the independent lower bound for TTP-2 is not achievable.
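For concreteness, the independent lower bound in (\ref{eqn_lowerbound}) is easy to evaluate once the minimum weight perfect matching is known; the following R sketch assumes the matching $M$ is given as a two-column matrix of team index pairs:
\begin{verbatim}
## Independent lower bound LB = 2*D_G + n*D_M.
## D: symmetric n x n distance matrix with zero diagonal;
## M: perfect matching, one row per edge (pair of team indices).
independent_lower_bound <- function(D, M) {
  DG <- sum(D[upper.tri(D)])   # total edge weight of G
  DM <- sum(D[M])              # weight of the perfect matching M
  2 * DG + nrow(D) * DM
}
\end{verbatim}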
To analyze the quality of a schedule of the tournament, we will compare the itinerary of each team with the optimal itinerary.
The different distance is called the \emph{extra cost}. Sometimes it is not convenient to compare the whole itinerary directly.
We may consider the extra cost for a subpart of the itinerary.
We may split an itinerary into several trips and each time we compare some trips.
A \emph{road trip} in an itinerary of team $t_i$ is a simple cycle starting and ending at $t_i$.
So an itinerary consists of several road trips. For TTP-2, each road trip is a triangle or a cycle on two vertices.
Let $L$ and $L'$ be two itineraries of team $t_i$, $L_s$ be a sub itinerary of $L$ consisting of several road trips in $L$, and
$L'_s$ be a sub itinerary of $L'$ consisting of several road trips in $L'$.
We say that the sub itineraries $L_s$ and $L'_s$ are \emph{coincident} if they visit the same set of teams.
We will only compare a sub itinerary of our schedule with a coincident sub itinerary of the optimal itinerary and consider the extra cost between them.
\section{Framework of The Algorithms}
Our algorithms for even $n/2$ and odd $n/2$ are different due to the different structural properties.
However, the two algorithms have a similar framework. We first introduce the common structure of our algorithms.
\subsection{The Construction}
The cases that $n=4$ and $6$ can be solved easily by a brute force method. For the sake of presentation, we assume that the number of teams $n$ is at least $8$.
Our construction of the schedule for each case consists of two parts. First we arrange \emph{super-games} between \emph{super-teams}, where each super-team contains
a pair of normal teams. Then we extend super-games to normal games between normal teams.
To make the itinerary as similar as the optimal itinerary, we may take each pair of teams in the minimum perfect matching $M$ of $G$ as a \emph{super-team}. There are $n$ normal teams and then there are $m=n/2$ super-teams. Recall that we use $\{u_1, u_2, \dots, u_{m-1}, u_{m}\}$ to denote the set of super-teams and relabel the $n$ teams such that $u_i=\{t_{2i-1},t_{2i}\}$ for each $i$.
Each super-team will attend $m-1$ super-games in $m-1$ time slots.
Each super-game in the first $m-2$ time slots will be extended to normal games between normal teams on four days, and each super-game in the last time slot will be extended to normal games between normal teams on six days. So each normal team $t_i$ will attend $4\times (m-2)+6=4m-2=2n-2$ games.
This is the number of games each team $t_i$ should attend in TTP-2.
For even $n/2$ and odd $n/2$, the super-games and the way to extend super-games to normal games will be different.
\subsection{The Order of Teams}
To get a schedule with a small total traveling distance, we order teams such that the pair of teams in each super-team corresponds to an edge in the minimum matching $M$. However, there are still $m!\cdot 2^m$ choices to order all the $n$ teams, where there are $m!$ choices to order super-teams $\{u_1,\dots,u_m\}$ and $2^m$ choices to order all teams in $m$ super-teams (there are two choices to order the two teams in each super-team). To find an appropriate order, we propose a simple randomized algorithm which contains the following four steps.
\medskip
\noindent\textbf{Step~1.} Compute a minimum perfect matching $M$ of $G$.
\noindent\textbf{Step~2.} Randomly sort all $m$ edges in $M$ and get a set of super-teams $\{u_1,\dots,u_m\}$ by taking the pair of teams corresponding to the $i$-th edge of $M$ as super-team $u_i$.
\noindent\textbf{Step~3.} Randomly order teams $\{t_{2i-1}, t_{2i}\}$ in each super-team $u_i$.
\noindent\textbf{Step~4.} Apply the order of $n$ teams to our construction.
\medskip
The randomized versions of the algorithms are easier to present and analyze. We show that the algorithms can be derandomized efficiently by using the method of conditional expectations~\cite{motwani1995randomized}.
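The random relabeling in Steps~2 and 3 only takes a few lines; the following R sketch assumes that the minimum weight perfect matching $M$ computed in Step~1 is given as a two-column matrix of team index pairs:
\begin{verbatim}
## Steps 2-3: randomly order the super-teams and the two teams inside each.
random_team_order <- function(M) {
  M <- M[sample(nrow(M)), , drop = FALSE]          # Step 2: shuffle super-teams
  flip <- sample(c(TRUE, FALSE), nrow(M), replace = TRUE)
  M[flip, ] <- M[flip, c(2, 1)]                    # Step 3: swap within pairs
  as.vector(t(M))                                  # relabeled order t_1, ..., t_n
}
\end{verbatim}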
\begin{lemma}\label{core}
Assuming $t_i\in u_{i'}$ and $t_j\in u_{j'}$ where $i'\neq j'$, it holds that $\EE{D_{i,j}}\leq\frac{1}{n(n-2)}\mbox{LB}$.
\end{lemma}
\begin{proof}
An edge $t_it_j$ with $i'\neq j'$ corresponds to an edge in $G-M$.
Since we label super-teams and teams in each super-team randomly, the probability of each edge in $G-M$ being $t_it_j$ is $\frac{1}{\size{G-M}}$.
Hence, the expected weight of the edge $t_it_j$ is
$$\frac{1}{\size{G-M}}(D_G-D_M)=\frac{1}{n(n-1)/2-n/2}D_H=\frac{2}{n(n-2)}D_H\leq \frac{1}{n(n-2)}\mbox{LB},$$
because $\mbox{LB}=2D_G+nD_M\geq 2D_G\geq 2D_H$ by (\ref{eqn_GH}) and (\ref{eqn_lowerbound}).
\end{proof}
Next, we will show the details of our constructions.
\section{The Construction for Even $n/2$}
In this section, we study the case of even $n/2$. We will first introduce the construction of the schedule. Then, we analyze the approximation quality of the randomized algorithm. Finally, we propose a divide-and-conquer method to obtain some further improvements.
\subsection{Construction of the Schedule}
We construct the schedule for super-teams from the first time slot to the last time slot $m-1$.
In each of the $m-1$ time slots, there are $\frac{m}{2}$ super-games.
In total, we have four different kinds of super-games: \emph{normal super-games}, \emph{left super-games}, \emph{penultimate super-games}, and \emph{last super-games}.
Each of the first three kinds of super-games will be extended to eight normal games on four consecutive days. Each last super-game will be extended to twelve normal games on six consecutive days.
We will indicate what kind each super-game belongs to.
For the first time slot, the $\frac{m}{2}$ super-games are arranged as shown in Figure~\ref{figa1}. Super-team $u_i$ plays against super-team $u_{m-1-i}$ for $i\in \{1, \dots, m/2-1\}$ and super-team $u_{m-1}$ plays against super-team $u_{m}$. Each super-game is represented by a directed edge, the direction information of which will be used to extend super-games to normal games between normal teams.
All the super-games in the first time slot are normal super-games.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.55]{a1.pdf}
\caption{The super-game schedule in the first time slot for an instance with $m=10$}
\label{figa1}
\end{figure}
From Figure~\ref{figa1}, we can see that the super-team $u_m=u_{10}$ is denoted as a dark node and all other super-teams $u_1, \dots, u_{m-1}$ are denoted as white nodes.
The white nodes form a cycle $(u_1,u_2,\dots,u_{m-1}, u_1)$.
In the second time slot, we keep the position of $u_m$ unchanged, change the positions of white super-teams in the cycle by moving one position in the clockwise direction, and also change the direction of each edge except for the leftmost edge incident on $u_m$. Please see Figure~\ref{figa2} for an illustration of the schedule in the second time slot.
In the second time slot, the super-game including $u_m$ is a left super-game and we put a letter `L' on the edge in Figure~\ref{figa2} to indicate this.
All other super-games are still normal super-games. In the second time slot, there are $\frac{m}{2}-1$ normal super-games and one left super-game.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.55]{a2.pdf}
\caption{The super-game schedule in the second time slot for an instance with $m=10$}
\label{figa2}
\end{figure}
In the third time slot,
we also change the positions of white super-teams in the cycle by moving one position in the clockwise direction while the direction of all edges is reversed. The position of the dark node $u_m$ will always keep the same.
In this time slot, there are still $\frac{m}{2}-1$ normal super-games and one left super-game that contains the super-team $u_m$. An illustration of the schedule in the third time slot is shown in Figure~\ref{figa3}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.55]{a3.pdf}
\caption{The super-game schedule in the third time slot for an instance with $m=10$}
\label{figa3}
\end{figure}
The schedules for the other time slots are derived analogously.
However, the kinds of super-games in different time slots may be different.
For the first time slot, all the $\frac{m}{2}$ super-games in it are normal super-games.
For time slot $i$ $(2\leq i \leq m-3)$, the super-game involving super-team $u_m$ is a left super-game and all other super-games are normal super-games.
For time slot $m-2$, all the $\frac{m}{2}$ super-games in it are penultimate super-games.
For time slot $m-1$, all the $\frac{m}{2}$ super-games in it are last super-games. Figure~\ref{figa45} shows an illustration of the super-game schedule in the last two time slots, where we put a letter `P' (resp., `T') on the edge to indicate that the super-game is a penultimate (resp., last) super-game.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.55]{a45.pdf}
\caption{The super-game schedules in the last two time slots for an instance with $m=10$}
\label{figa45}
\end{figure}
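The rotation of the white super-teams over the $m-1$ time slots is an instance of the classic circle method with $u_m$ kept fixed. The following R sketch produces one standard realization of such a pairing; the concrete positions and edge directions used in our schedule are those of Figures~\ref{figa1}--\ref{figa45} and may differ from this generic labeling:
\begin{verbatim}
## Generic circle-method pairing of m super-teams over m-1 time slots,
## with super-team u_m fixed and the others rotating by one position per slot.
circle_pairings <- function(m) {
  lapply(0:(m - 2), function(r) {
    others <- sapply(1:(m / 2 - 1), function(i)
      c((r + i) %% (m - 1), (r - i) %% (m - 1)))
    rbind(c(r, m - 1), t(others)) + 1   # 1-based super-team labels
  })
}
\end{verbatim}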
Next, we explain how to extend the super-games to normal games.
Recall that we have four kinds of super-games: normal, left, penultimate, and last.
\textbf{Case~1. Normal super-games}:
Each normal super-game will be extended to eight normal games on four consecutive days.
Assume that in a normal super-game, super-team $u_{i}$ plays against the super-team $u_{j}$ at the home venue of $u_j$ in time slot $q$ ($1\leq i,j\leq m$ and $1\leq q\leq m-3$). Recall that $u_{i}$ represents normal teams \{$t_{2i-1}, t_{2i}$\} and $u_{j}$ represents normal teams \{$t_{2j-1}, t_{2j}$\}. The super-game will be extended to eight normal games on four corresponding days from $4q-3$ to $4q$, as shown in Figure~\ref{figa6}. A directed edge from team $t_{i'}$ to team $t_{i''}$ means that $t_{i'}$ plays against $t_{i''}$ at the home venue of $t_{i''}$.
Note that if the super-game is at the home venue of $u_i$, i.e., there is directed edge from $u_j$ to $u_i$, then the direction of all edges in the figure will be reversed.
\begin{figure}[ht]
\centering
\includegraphics[scale=1]{a6.pdf}
\caption{Extending normal super-games}
\label{figa6}
\end{figure}
\textbf{Case~2. Left super-games}:
Assume that in a left super-game, super-team $u_{i}$ plays against super-team $u_{m}$ at the home venue of $u_m$ in (even) time slot $q$ ($3\leq i\leq m-2$ and $2\leq q\leq m-3$). Recall that $u_{m}$ represents normal teams \{$t_{2m-1}, t_{2m}$\} and $u_{i}$ represents normal teams \{$t_{2i-1}, t_{2i}$\}. The super-game will be extended to eight normal games on four corresponding days from $4q-3$ to $4q$, as shown in Figure~\ref{figa7}, for even time slot $q$. Note that the direction of edges in the figure will be reversed for odd time slot $q$.
\begin{figure}[ht]
\centering
\includegraphics[scale=1]{a7.pdf}
\caption{Extending left super-games}
\label{figa7}
\end{figure}
\textbf{Case~3. Penultimate super-games}:
Assume that in a penultimate super-game, super-team $u_{i}$ plays against super-team $u_{j}$ at the home venue of $u_j$ in time slot $q=m-2$ ($1\leq i,j\leq m$). The super-game will be extended to eight normal games on four corresponding days from $4m-11$ to $4m-8$, as shown in Figure~\ref{figa8}.
\begin{figure}[ht]
\centering
\includegraphics[scale=1]{a8.pdf}
\caption{Extending penultimate super-games}
\label{figa8}
\end{figure}
\textbf{Case~4. Last super-games}:
Assume that in a last super-game, super-team $u_{i}$ plays against super-team $u_{j}$ at the home venue of $u_j$ in time slot $q=m-1$ ($1\leq i,j\leq m$). The super-games will be extended to twelve normal games on six corresponding days from $4m-7$ to $4m-2$, as shown in Figure~\ref{figa9}.
\begin{figure}[ht]
\centering
\includegraphics[scale=1]{a9.pdf}
\caption{Extending last super-games}
\label{figa9}
\end{figure}
We have described the main part of the scheduling algorithm. Before proving its feasibility, we first show an example of the schedule for $n=8$ teams constructed by our method. In Table~\ref{ansexample}, the $i$-th row indicates team $t_i$, the $j$-th column indicates the $j$-th day in the double round-robin, item $+t_{x}$ (resp., $-t_x$) on the $i$-th row and $j$-th column means that team $t_i$ plays against team $t_{x}$ in the $j$-th day at the home venue of team $t_{x}$ (resp., $t_i$).
\begin{table}[ht]
\centering
\begin{tabular}{c|cccccccccccccc}
& 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14\\
\hline
$t_{1}$ & $-t_{3}$ & $-t_{4}$ & $+t_{3}$ & $+t_{4}$ & $-t_{5}$ & $+t_{6}$ & $+t_{5}$ & $-t_{6}$ &
$+t_{2}$ & $-t_{7}$ & $-t_{8}$ & $+t_{7}$ & $+t_{8}$ & $-t_{2}$ \\
$t_{2}$ & $-t_{4}$ & $-t_{3}$ & $+t_{4}$ & $+t_{3}$ & $-t_{6}$ & $-t_{5}$ & $+t_{6}$ & $+t_{5}$ &
$-t_{1}$ & $-t_{8}$ & $+t_{7}$ & $+t_{8}$ & $-t_{7}$ & $+t_{1}$ \\
$t_{3}$ & $+t_{1}$ & $+t_{2}$ & $-t_{1}$ & $-t_{2}$ & $+t_{7}$ & $+t_{8}$ & $-t_{7}$ & $-t_{8}$ &
$+t_{4}$ & $-t_{5}$ & $-t_{6}$ & $+t_{5}$ & $+t_{6}$ & $-t_{4}$ \\
$t_{4}$ & $+t_{2}$ & $+t_{1}$ & $-t_{2}$ & $-t_{1}$ & $+t_{8}$ & $-t_{7}$ & $-t_{8}$ & $+t_{7}$ &
$-t_{3}$ & $-t_{6}$ & $+t_{5}$ & $+t_{6}$ & $-t_{5}$ & $+t_{3}$ \\
$t_{5}$ & $+t_{7}$ & $+t_{8}$ & $-t_{7}$ & $-t_{8}$ & $+t_{1}$ & $+t_{2}$ & $-t_{1}$ & $-t_{2}$ &
$+t_{6}$ & $+t_{3}$ & $-t_{4}$ & $-t_{3}$ & $+t_{4}$ & $-t_{6}$ \\
$t_{6}$ & $+t_{8}$ & $+t_{7}$ & $-t_{8}$ & $-t_{7}$ & $+t_{2}$ & $-t_{1}$ & $-t_{2}$ & $+t_{1}$ &
$-t_{5}$ & $+t_{4}$ & $+t_{3}$ & $-t_{4}$ & $-t_{3}$ & $+t_{5}$ \\
$t_{7}$ & $-t_{5}$ & $-t_{6}$ & $+t_{5}$ & $+t_{6}$ & $-t_{3}$ & $+t_{4}$ & $+t_{3}$ & $-t_{4}$ &
$+t_{8}$ & $+t_{1}$ & $-t_{2}$ & $-t_{1}$ & $+t_{2}$ & $-t_{8}$ \\
$t_{8}$ & $-t_{6}$ & $-t_{5}$ & $+t_{6}$ & $+t_{5}$ & $-t_{4}$ & $-t_{3}$ & $+t_{4}$ & $+t_{3}$ &
$-t_{7}$ & $+t_{2}$ & $+t_{1}$ & $-t_{2}$ & $-t_{1}$ & $+t_{7}$ \\
\end{tabular}
\caption{The schedule for $n=8$ teams, where the rows represent the teams, the columns represent the days, and
`$+$' (resp., `$-$') means that the team in the corresponding row plays at its opponent's home (resp., its own home)}
\label{ansexample}
\end{table}
From Table~\ref{ansexample}, we can roughly check the feasibility of this instance: on each line there are at most two consecutive `$+$'/`$-$', and each team plays the required games.
Next, we formally prove the correctness of our algorithm.
\begin{theorem}\label{feas1}
For TTP-$2$ with $n$ teams such that $n\equiv 0 \pmod 4$ and $n\geq 8$, the above construction can generate a feasible schedule.
\end{theorem}
\begin{proof}
By the definition of feasible schedules, we need to prove the five properties: fixed-game-value, fixed-game-time, direct-traveling, no-repeat, and bounded-by-$k$.
The first two properties -- fixed-game-value and fixed-game-time are easy to see.
Each super-game in the first $m-2$ time slots will be extended to eight normal games on four days and each team participates in four games on four days. Each super-game in the last time slot will be extended to twelve normal games on six days and each team participates in six games on six days. So each team plays $2(n-1)$ games on $2(n-1)$ different days. Since there is a super-game between each pair super-teams, it is also easy to see that each team pair plays exactly two games, one at the home venue of each team. For the third property, we assume that the itinerary obeys the direct-traveling property and it does not need to be proved.
It is also easy to see that each team will not violate the no-repeat property.
In any time slot, no two normal games between the same pair of normal teams are arranged on two consecutive days according to the ways to extend super-games to normal games. For two days of two different time slots, each super-team will play against a different super-team and then a normal team will also play against a different normal team.
Last, we prove that each team does not violate the bounded-by-$k$ property. We will use `$H$' and `$A$' to denote a home game and an away game, respectively. We will also let $\overline{H}=A$ and $\overline{A}=H$.
First, we look at the games in the first $m-3$ time slots, i.e., the first $2n-12$ days. For the two teams in $u_m$, the four games in the first time slot will be $HHAA$, in an even time slot will be $HAAH$ (see Figure~\ref{figa7}), and in an odd time slot (not including the first time slot) will be $AHHA$. So two consecutive time slots can combine well without creating three consecutive home/away games.
Next, we consider a team $t_i$ in $u_j$ $(j\in \{1,2,\dots,m-1\})$.
For a normal super-game involving $u_j$, if the direction of the edge (the normal super-game) is from $u_j$ to another super-team, then the corresponding four games including $t_i$ will be $AAHH$, and if the direction of the edge is reversed, then the corresponding four games including $t_i$ will be $\overline{AAHH}=HHAA$.
For a left super-game involving $u_j$, if the direction of the edge (the normal super-game) is from $u_j$ to another super-team, then the corresponding four games including $t_i$ will be $AHHA$, and if the direction of the edge is reversed, then the corresponding four games including $t_i$ will be $\overline{AHHA}=HAAH$.
Note that in the first $m-3$ time slots the direction of the edge incident on super-team $u_j$ will only change after the left super-game. So two consecutive time slots can combine well without creating three consecutive home/away games.
Finally, we consider the last ten days in the last two time slots (time slots $m-2$ and $m-1$).
For the sake of presentation, we let $t_{i_1}=t_{2i-1}$ and $t_{i_2}=t_{2i}$.
We just list out the last ten games in the last two time slots for each team, which are shown in Figure~\ref{figa10}.
There are four different cases for the last ten games: $u_o$, $u_e$, $u_2$, and $u_m$, where $o\in \{3,\dots, m-1\}$ and $e\in \{1\}\cup \{4,\dots, m-2\}$.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.7]{a10.pdf}
\caption{The last ten games for the case of $n\equiv 0 \pmod 4$, where $o\in \{3,\dots, m-1\}$ and $e\in \{1\}\cup \{4,\dots, m-2\}$}
\label{figa10}
\end{figure}
From Figure~\ref{figa10}, we can see that there are no three consecutive home/away games in the last ten days. It is also easy to see that on day $2n-12$ (the last day in time slot $m-3$), the games for $t_{2_1}$ and $t_{2_2}$ (in $u_2$) are $H$, the games for $t_{m_1}$ and $t_{m_2}$ (in $u_m$) are $A$, the games for $t_{o_1}$ and $t_{o_2}$ (in $u_o$) are $H$, and the games for $t_{e_1}$ and $t_{e_2}$ (in $u_e$) are $A$. So time slots $m-3$ and $m-2$ can also combine
well without creating three consecutive home/away games.
Thus, the bounded-by-$k$ property also holds.
Since our schedule satisfies all the five properties of feasible schedules, we know our schedule is feasible for TTP-2.
\end{proof}
\subsection{Analyzing the Approximation Quality}
To show the quality of our schedule, we compare it with the independent lower bound. We will check the difference between our itinerary of each team $t_i$ and the optimal itinerary of $t_i$ and compute the expected extra cost.
As mentioned in the last paragraph of Section~\ref{sec_pre}, we will compare some sub itineraries of a team.
According to the construction, we can see that all teams stay at home before the first game in a super-game and return home after the last game in the super-game.
Hence, we will look at the sub itinerary of a team on the four or six days in a super-game, which is coincident with a sub itinerary of the optimal itinerary.
In our algorithm, there are four kinds of super-games: normal super-games, left super-games, penultimate super-games and last super-games. We analyze the total expected extra cost of all normal teams caused by each kind of super-game.
\begin{lemma}\label{extra}
Assume there is a super-game between super-teams $u_i$ and $u_j$ at the home of $u_j$ in our schedule.
\begin{enumerate}
\item [(a)] If the super-game is a normal super-game, then the expected extra cost of all normal teams in $u_i$ and $u_j$ is 0;
\item [(b)] If the super-game is a left super-game, then the expected extra cost of all normal teams in $u_i$ and $u_j$ is at most $\frac{4}{n(n-2)}\mbox{LB}$;
\item [(c)] If the super-game is a penultimate/last super-game, then the expected extra cost of all normal teams in $u_i$ and $u_j$ is at most $\frac{2}{n(n-2)}\mbox{LB}$.
\end{enumerate}
\end{lemma}
\begin{proof}
From Figure~\ref{figa6} we can see that in a normal super-game any of the four normal teams will visit the home venue of the two normal teams in the opposite super-team in one road trip. So they have the same road trips as that in their optimal itineraries.
The extra cost is 0. So (a) holds.
Next, we assume that the super-game is a left super-game; note that in this case the home super-team $u_j$ is $u_m$.
From Figure~\ref{figa7} we can see that the two teams in the super-team $u_i$ play $AHHA$ on the four days (Recall that $A$ means an away game and $H$ means a home game), and the two teams in the super-team $u_j$ play $HAAH$.
The two teams in $u_j$ will have the same road trips as that in the optimal itinerary and then the extra cost is 0.
We compare the road trips of the two teams $t_{2i-1}$ and $t_{2i}$ in $u_i$ with their coincident sub itineraries of their optimal itineraries. In our schedule, team $t_{2i-1}$ contains two road trips $(t_{2i-1},t_{2j-1},t_{2i-1})$ and $(t_{2i-1},t_{2j},t_{2i-1})$ while the coincident sub itinerary of its optimal itinerary contains one road trip $(t_{2i-1},t_{2j-1},t_{2j},t_{2i-1})$, and team $t_{2i}$ contains two road trips $(t_{2i},t_{2j},t_{2i})$ and $(t_{2i},t_{2j-1},t_{2i})$ while the coincident sub itinerary of its optimal itinerary contains one road trip $(t_{2i},t_{2j-1},t_{2j},t_{2i})$. The expected difference is
\[
\EE{D_{2i-1,2j-1}+D_{2i-1,2j}+D_{2i,2j-1}+D_{2i,2j}-2D_{2j-1,2j}}\leq\frac{4}{n(n-2)}\mbox{LB},
\]
since $\EE{D_{2i-1,2j-1}}=\EE{D_{2i-1,2j}}=\EE{D_{2i,2j-1}}=\EE{D_{2i,2j}}\leq\frac{1}{n(n-2)}\mbox{LB}$ by Lemma~\ref{core}. So (b) holds.
Then, we suppose that the super-game is a penultimate super-game. From Figure~\ref{figa8} we can see that the two teams $t_{2i-1}$ and $t_{2i}$ in the super-team $u_i$ play $AAHH$ and $AHHA$, respectively, and the two teams $t_{2j-1}$ and $t_{2j}$ in the super-team $u_j$ play $HAAH$ and $HHAA$, respectively.
Teams $t_{2i-1}$, $t_{2j-1}$, and $t_{2j}$ will have the same road trip as that in their optimal itineraries and then the extra cost is 0.
We compare the road trips of team $t_{2i}$ with its optimal road trip. In our schedule, team $t_{2i}$ contains two road trips $(t_{2i},t_{2j},t_{2i})$ and $(t_{2i},t_{2j-1},t_{2i})$ while the coincident sub itinerary of its optimal itinerary contains one road trip $(t_{2i},t_{2j-1},t_{2j},t_{2i})$.
By Lemma~\ref{core}, we can get the expected difference is
\[
\EE{D_{2i,2j-1}+D_{2i,2j}-D_{2j-1,2j}}\leq \frac{2}{n(n-2)}\mbox{LB}.
\]
Finally, we consider the case that the super-game is a last super-game.
From Figure~\ref{figa9} we can see that the two teams $t_{2i-1}$ and $t_{2i}$ in the super-team $u_i$ play $AAHHAH$ and $HAAHHA$, respectively, and the two teams $t_{2j-1}$ and $t_{2j}$ in the super-team $u_j$ play $AHHAAH$ and $HHAAHA$, respectively.
Teams $t_{2i}$, $t_{2j-1}$, and $t_{2j}$ will have the same road trips as that in their optimal itineraries and then the extra cost is 0.
We compare the road trips of team $t_{2i-1}$ with its optimal road trips. In our schedule, team $t_{2i-1}$ contains two road trips $(t_{2i-1},t_{2i},t_{2j-1},t_{2i-1})$ and $(t_{2i-1},t_{2j},t_{2i-1})$ while the coincident sub itinerary of its optimal itinerary contains two road trips $(t_{2i-1},t_{2j-1},t_{2j},t_{2i-1})$ and $(t_{2i-1},t_{2i},t_{2i-1})$.
By Lemma~\ref{core}, the team $t_{2i-1}$ in $u_i$ has an expected extra cost of
\[
\EE{D_{2i-1,2j}+D_{2i,2j-1}-D_{2i-1,2i}-D_{2j-1,2j}}\leq \frac{2}{n(n-2)}\mbox{LB}.
\]
So (c) holds.
\end{proof}
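To make the quantities in the proof concrete, the following Python sketch evaluates the three extra-cost expressions above on a small random metric. The distance matrix and the team indices are hypothetical and only illustrate the terms that are bounded in expectation by Lemma~\ref{core}.
\begin{verbatim}
import numpy as np

# A hypothetical metric on 8 teams, only used to illustrate the
# expressions appearing in the proof of the lemma.
rng = np.random.default_rng(0)
pts = rng.random((8, 2))
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)

# a, a2 play the roles of t_{2i-1}, t_{2i} (super-team u_i);
# b, b2 play the roles of t_{2j-1}, t_{2j} (super-team u_j).
a, a2, b, b2 = 0, 1, 2, 3

# (b) left super-game: each team of u_i makes two single-visit road trips
#     instead of one road trip visiting both teams of u_j.
extra_left = D[a, b] + D[a, b2] + D[a2, b] + D[a2, b2] - 2 * D[b, b2]

# (c) penultimate super-game: only team t_{2i} splits its road trip.
extra_penultimate = D[a2, b] + D[a2, b2] - D[b, b2]

# (c) last super-game: only team t_{2i-1} changes its two road trips.
extra_last = D[a, b2] + D[a2, b] - D[a, a2] - D[b, b2]

print(extra_left, extra_penultimate, extra_last)
\end{verbatim}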
In our schedule, there are $\frac{m}{2}+(m-4)(\frac{m}{2}-1)$ normal super-games, which contribute 0 to the expected extra cost.
There are $m-4$ left super-games in the $m-4$ middle time slots, $\frac{m}{2}$ penultimate super-games in the penultimate time slot, and $\frac{m}{2}$ last super-games in the last time slot.
By Lemma~\ref{extra}, we know that the total expected extra cost is
\[
(m-4)\times\frac{4}{n(n-2)}\mbox{LB}+\lrA{\frac{m}{2}+\frac{m}{2}}\times\frac{2}{n(n-2)}\mbox{LB}=\lrA{\frac{3}{n}-\frac{10}{n(n-2)}}\mbox{LB}.
\]
Dominated by the time of computing a
minimum weight perfect matching~\cite{gabow1974implementation,lawler1976combinatorial}, the running time of the algorithm is $O(n^3)$. Thus, we get the following theorem.
\begin{theorem}\label{res_1}
For TTP-$2$ with $n$ teams, when $n\geq 8$ and $n\equiv 0 \pmod 4$, there is a randomized $O(n^3)$-time algorithm with an expected approximation ratio of $1+{\frac{3}{n}}-{\frac{10}{n(n-2)}}$.
\end{theorem}
Next, we show that the analysis of our algorithm is tight, i.e., the approximation ratio in Theorem~\ref{res_1} is the best for our algorithm. We give an example in which this ratio is attained.
In the example, the distance of each edge in the minimum perfect matching $M$ is 0 and the distance of each edge in $G-M$ is 1.
We can see that the triangle inequality property still holds.
By (\ref{eqn_lowerbound}), we know that the independent lower bound of this instance is
\[
\mbox{LB}=2D_G+nD_M=2\size{G-M}=n(n-2).
\]
In this instance, the extra costs of a normal super-game, left super-game, penultimate super-game and last super-game are 0, 4, 2 and 2, respectively.
In our schedule, there are $m-4$ left super-games, $\frac{m}{2}$ penultimate super-games and $\frac{m}{2}$ last super-games in total.
Thus, the total extra cost of our schedule is $(m-4)\times4+(\frac{m}{2}+\frac{m}{2})\times2=3n-16$. Thus, the ratio is
\[
1+\frac{3n-16}{n(n-2)}=1+\frac{3}{n}-\frac{10}{n(n-2)}.
\]
This example only shows the ratio is tight for this algorithm. However, it is still possible that some other algorithms can achieve a better ratio.
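As a quick sanity check of this counting, the following Python sketch re-evaluates the extra cost and the ratio of the tight instance for several values of $n$; it only recomputes the closed-form expressions derived above.
\begin{verbatim}
from fractions import Fraction

def tight_instance(n):
    # matching edges have distance 0, all other edges have distance 1
    assert n >= 8 and n % 4 == 0
    m = n // 2
    LB = n * (n - 2)                             # independent lower bound
    extra = (m - 4) * 4 + (m // 2 + m // 2) * 2  # left + penultimate + last
    ratio = 1 + Fraction(extra, LB)
    closed = 1 + Fraction(3, n) - Fraction(10, n * (n - 2))
    return extra, ratio, closed

for n in (8, 12, 16, 20, 40):
    extra, ratio, closed = tight_instance(n)
    assert extra == 3 * n - 16 and ratio == closed
    print(n, extra, float(ratio))
\end{verbatim}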
\subsection{Techniques for Further Improvement}
From Lemma~\ref{extra}, we know that there are three kinds of super-games: left super-games, penultimate super-games and last super-games that can cause extra cost. If we can reduce the total number of these super-games, then we can improve our schedule.
Based on this idea, we will apply divide-and-conquer to our packing-and-combining method, which reduces the total number of left super-games in some cases.
We first extend the packing-and-combining method. Suppose $n\geq 8p$ and $n\equiv 0 \pmod {4p}$, then we pack $p$ super-teams as a \emph{group-team}. There are $g=\frac{n}{2p}$ group-teams. Since the previous construction corresponds to the case $p=1$, we assume $p\geq 2$ here.
Similar to the previous construction, there are $g-1$ group-slots and each group-slot contains $\frac{g}{2}$ \emph{group-games}. But, there are only three kinds of group-games: normal, left, and last.
In the first group-slot, all group-games are normal group-games. In the middle $g-3$ group-slots, there is always one left group-game and $\frac{g}{2}-1$ normal group-games. In the last group-slot, there are $\frac{g}{2}$ last group-games.
Next, we show how to extend these three kinds of group-games.
We assume that there is a group-game between two group-teams $U_i$ and $U_j$ at the home of $U_j$, where $U_i=\{u_{i_1},\dots,u_{i_p}\}$ and $U_j=\{u_{j_1},\dots,u_{j_p}\}$.
\textbf{Normal group-games:}
For a normal group-game, we extend it into $p$ time slots where in the $l$-th time slot, there are $p$ normal super-games: $\{u_{i_{i'}}\rightarrow u_{j_{(i'+l-2 \bmod p)+1}}\}_{i'=1}^{p}$.
Hence, there are $p^2$ normal super-games in a normal group-game. According to the design of normal super-games, all games between group-teams $U_i$ and $U_j$ (i.e., between one team in a super-team in $U_i$ and one team in a super-team in $U_j$) are arranged.
Note that all super-teams in $U_i$ (resp., $U_j$) play away (resp., home) super-games.
\textbf{Left group-games:}
For a left group-game, we also extend it into $p$ time slots and the only difference with the normal group-game is that the $p$ super-games of the left group-game in the $p$-th time slot are $p$ left super-games. Hence, there are $p$ left super-games and $p(p-1)$ normal super-games in a left group-game.
According to the design of normal/left super-games, all games between group-teams $U_i$ and $U_j$ are arranged.
Note that all super-teams in $U_i$ (resp., $U_j$) play away (resp., home) super-games.
\textbf{Last group-games:}
For a last group-game, we need to arrange all games inside each group-team as well as between these two group-teams, which form a double round-robin for teams in these two group-teams. Similar to the normal and left group-games, the super-teams in $U_i$ are ready to play away normal super-games while the super-teams in $U_j$ are ready to play home normal super-games.
Both the general construction ($p\geq2$) and the previous construction ($p=1$) start with normal super-games.
Thus, the last group-game between $U_i$ and $U_j$ can be seen as a sub-problem of our construction with $4p$ teams, which leads to a divide-and-conquer method. Note that in this sub-problem we need to make sure that the super-teams in $U_i$ start with away super-games and the super-teams in $U_j$ start with home super-games.
Before the last group-slot, there are $(g-3)\times p=\frac{n}{2}-3p$ left super-games in total.
In the last group-slot, there are $\frac{g}{2}=\frac{n}{4p}$ last group-games. Therefore, if $n\geq 8p$ and $n\equiv 0 \pmod{4p}$, then the minimum total number of left super-games in our construction of packing $p$ super-teams, denoted by $L_p(n)$, satisfies that
\begin{eqnarray}\label{formula}
L_p(n)=\frac{n}{2}-3p+\frac{n}{4p}\min_{1\leq i< p} L_{i}(4p).
\end{eqnarray}
Since the framework of our general construction is the same as our initial construction, the correctness is easy to observe. Here, we show an example of our general construction on an instance of $n=16$ and $p=2$ in Figure~\ref{figa11}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.55]{a11.pdf}
\caption{
An illustration of our general construction on an instance of $n=16$ and $p=2$: the general construction contains $g-1=\frac{n}{2p}-1=3$ group-slots as shown in (a); the instance in the last time slot contains $\frac{g}{2}=\frac{n}{4p}=2$ last group-games, which can be seen as two sub-problems of TTP-2 with $4p=8$ teams; each sub-problem uses the initial construction as shown in (b) and (c)
\label{figa11}
\end{figure}
Since the sub-problem will always reduce to the case $p=1$, the total number of penultimate/last super-games remains unchanged. Hence, by the previous analysis, the total expected extra cost of $m/2$ penultimate super-games and $m/2$ last super-games is still $(\frac{m}{2}+\frac{m}{2})\times\frac{2}{n(n-2)}\mbox{LB}=\frac{n}{n(n-2)}\mbox{LB}$.
To analyze the approximation quality of this general construction, we only need to compute the number of left super-games used, which depends on the value $p$. The minimum number of left super-games, denoted by $L(n)$, satisfies that
\[
L(n)=\min_{\substack{p: n\geq 8p,\\n\bmod {4p}=0 }}L_p(n).
\]
By Lemma~\ref{core}, the expected extra cost of one left super-game is $\frac{4}{n(n-2)}\mbox{LB}$.
Therefore, the expected extra cost of $L(n)$ left super-games is $\frac{4L(n)}{n(n-2)}\mbox{LB}$. The expected approximation ratio of our algorithm is
\[
1+\frac{4L(n)+n}{n(n-2)}.
\]
Note that the value $L(n)$ can be computed in $O(n^2)$ time by using a dynamic programming method.
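For illustration, the following Python sketch is one possible way to evaluate the recurrence (\ref{formula}) with memoization; using $L_1(n)=\frac{n}{2}-4$ as the base case, it reproduces the values in Table~\ref{number}, e.g., $L(16)=2$, $L(32)=8$, and $L(40)=14$.
\begin{verbatim}
from functools import lru_cache

@lru_cache(maxsize=None)
def L_p(n, p):
    # L_p(n): number of left super-games when packing p super-teams
    # into each group-team.
    if n < 8 * p or n % (4 * p) != 0:
        return None                    # packing p is not applicable to n
    if p == 1:
        return n // 2 - 4              # the initial construction
    sub = [v for v in (L_p(4 * p, i) for i in range(1, p)) if v is not None]
    if not sub:
        return None
    return n // 2 - 3 * p + (n // (4 * p)) * min(sub)

def L(n):
    # minimum over all applicable packings p
    return min(v for v in (L_p(n, p) for p in range(1, n // 8 + 1))
               if v is not None)

def ratio(n):
    # expected approximation ratio for n = 0 mod 4
    return 1 + (4 * L(n) + n) / (n * (n - 2))

for n in range(8, 44, 4):
    print(n, L(n), round(ratio(n), 4))
\end{verbatim}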
\begin{theorem}
For TTP-$2$ with $n$ teams, there is a randomized $O(n^3)$-time algorithm with an expected approximation ratio of $1+\frac{4L(n)+n}{n(n-2)}$, where $L(n)\leq \frac{n}{2}-4$ for $n\geq 8$ and $n\equiv 0 \pmod 4$, and $L(n)\leq \frac{n}{2}-6$ for $n\geq 16$ and $n\equiv 0 \pmod 8$, i.e., the expected approximation ratio is at most $1+\frac{3}{n}-\frac{10}{n(n-2)}$ for the former case and $1+\frac{3}{n}-\frac{18}{n(n-2)}$ for the latter case.
\end{theorem}
\begin{proof}
According to our initial construction, we have $L(n)\leq L_1(n)=\frac{n}{2}-4$ for $n\geq 8$ and $n\equiv 0\pmod 4$. In particular, $L_1(8)=0$, and hence by (\ref{formula}) we can get $L(n)\leq L_2(n)=\frac{n}{2}-6$ for $n\geq 16$ and $n\equiv 0\pmod 8$.
For $n\geq 16$ and $n\equiv 0\pmod 8$, the general construction can reduce at least two left super-games. Hence, for this case, the expected approximation ratio of our algorithm is at most $1+\frac{3}{n}-\frac{18}{n(n-2)}$.
\end{proof}
We can reduce more than two left super-games for some other cases. For example, we can get $L(32)=L_4(32)=8$ and $L(64)=L_8(64)=24$, which have four fewer left super-games compared with $L_1(32)=12$ and $L_1(64)=28$.
Since the biggest instance in the benchmark has 40 teams, we list the number of left super-games before and after using divide-and-conquer (D\&C) for $n$ varying from $8$ to $40$ in Table~\ref{number}. It is worth noting that for $n>64$ our program shows that the reduced number of left super-games is at most 2. Hence, we conjecture that $L(n)=L_2(n)=\frac{n}{2}-6$ for $n>64$ and $n\equiv 0\pmod 8$, i.e., as $n$ grows larger, we cannot reduce more left super-games.
\begin{table}[ht]
\centering
\begin{tabular}{c|cc}
\hline
Data & Before & After\\
size & D\&C & D\&C\\
\hline
$n=40$ & 16 & $\mathbf{14}$\\
$n=36$ & 14 & 14\\
$n=32$ & 12 & $\mathbf{8}$ \\
$n=28$ & 10 & 10\\
$n=24$ & 8 & $\mathbf{6}$ \\
$n=20$ & 6 & 6 \\
$n=16$ & 4 & $\mathbf{2}$ \\
$n=12$ & 2 & 2 \\
$n=8$ & 0 & 0
\end{tabular}
\caption{Results on the number of left super-games}
\label{number}
\end{table}
The generalized algorithm does not work for $n\equiv 4\pmod 8$.
In Table~\ref{number}, we can see that the improved number of left super-games for $n=32$ (bigger case) is even less than the number for $n=28$ (smaller case).
We conjecture that there also exists a method to reduce the number of left super-games for $n\equiv 4\pmod 8$.
\section{The Construction for Odd $n/2$}
\subsection{Construction of the Schedule}
When $n/2$ is odd, the construction will be slightly different for $n\equiv 2 \pmod 8$ and $n\equiv 6 \pmod 8$. We will describe the algorithm for the case of $n\equiv 2 \pmod 8$. For the case of $n\equiv 6 \pmod 8$, only some edges will have different directions (the corresponding games take place at the opposite venues). These edges will be denoted by dashed lines and we will explain them later.
In the schedule, there will be two special super-teams $u_{m-1}$ and $u_m$. For the sake of presentation, we will denote $u_l = u_{m-1}$ (resp., $u_r=u_{m}$), and denote the two teams in $u_l$ as $\{t_{l1}, t_{l2}\}$ (resp., the two teams in $u_r$ as $\{t_{r1}, t_{r2}\}$).
We first design the super-games between super-teams in the first $m-2$ time slots, and then consider super-games in the last time slot.
In each of the first $m-2$ time slots, we have $\frac{m-1}{2}$ super-games (note that $m$ is odd), where one super-game involves three super-teams and all other super-games involve two super-teams.
In the first time slot, the $\frac{m-1}{2}$ super-games are arranged as shown in Figure~\ref{figb1}.
The rightmost super-game, which involves $u_r$, is called the \emph{right super-game}. The right super-game is the only super-game involving three super-teams.
The other $\frac{m-1}{2}-1$ super-games are called \emph{normal super-games}. There are also directed edges in the super-games, which will be used to extend super-games to normal games.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.55]{b1.pdf}
\caption{The super-game schedule in the first time slot}
\label{figb1}
\end{figure}
Note that the dashed edges in Figure~\ref{figb1} will be reversed for the case of $n\equiv 6\pmod 8$. We can also observe that the white nodes (super-teams $u_1, \dots, u_{m-2}$) in Figure~\ref{figb1} form a cycle $(u_1, u_2, \dots ,u_{m-2},u_1)$.
In the second time slot, super-games are scheduled as shown in Figure~\ref{figb2}. We change the positions of white super-teams in the cycle by moving one position in the clockwise direction, and also change the direction of each edge except for the leftmost edge incident on $u_l$. The super-game including $u_l$ is called a \emph{left super-game}. So in the second time slot, there is one left super-game, $\frac{m-1}{2}-2$ normal super-games and one right super-game.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.55]{b2.pdf}
\caption{The super-game schedule in the second time slot}
\label{figb2}
\end{figure}
In the third time slot, there is also one left super-game, $\frac{m-1}{2}-2$ normal super-games and one right super-game.
We also change the positions of white super-teams in the cycle by moving one position in the clockwise direction while the direction of each edge is reversed. The positions of the dark nodes will always keep the same.
An illustration of the schedule in the third time slot is shown in Figure~\ref{figb2+}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.55]{b2+.pdf}
\caption{The super-game schedule in the third time slot}
\label{figb2+}
\end{figure}
The schedules for the other middle time slots are derived analogously; however, in time slot $m-2$, the left super-game will be special and we will explain it later. Next, we show how to extend the super-games in these time slots to normal games.
\textbf{Case~1. Normal super-games}:
We first consider normal super-games, each of which will be extended to four normal games on four days.
Assume that in a normal super-game, super-team $u_{i}$ plays against the super-team $u_{j}$ at the home venue of $u_j$ in time slot $q$ ($1\leq i,j \leq m-1$ and $1\leq q\leq m-2$). Recall that $u_{i}$ represents normal teams $\{t_{2i-1}, t_{2i}\}$ and $u_{j}$ represents normal teams \{$t_{2j-1}, t_{2j}$\}. The super-game will be extended to eight normal games on four corresponding days from $4q-3$ to $4q$, as shown in Figure~\ref{figb3}. There is no difference compared with the normal super-games in Figure~\ref{figa6} for the case of even $n/2$.
\begin{figure}[ht]
\centering
\includegraphics[scale=1]{b3.pdf}
\caption{Extending normal super-games}
\label{figb3}
\end{figure}
\textbf{Case~2. Left super-games}:
Assume that in a left super-game, super-team $u_{l}$ plays against the super-team $u_{i}$ at the home venue of $u_l$ in (even) time slot $q$ ($1\leq i\leq m-3$ and $2\leq q\leq m-2$). There are $m-3$ left super-games: the first $m-4$ left super-games in time slot $q$ with $2\leq q<m-2$ are (normal) left super-games; the left super-game in time slot $q$ with $q=m-2$ is a (special) left super-game.
We first consider normal left super-games.
Recall that $u_{l}$ represents normal teams \{$t_{l1}, t_{l2}$\} and $u_{i}$ represents normal teams \{$t_{2i-1}, t_{2i}$\}.
The super-game will be extended to eight normal games on four corresponding days from $4q-3$ to $4q$, as shown in Figure~\ref{figb4}, for even time slot $q$. Note that the direction of edges in the figure will be reversed for odd time slot $q$. There is no difference compared with the left super-games in Figure~\ref{figa7} for the case of even $n/2$.
\begin{figure}[ht]
\centering
\includegraphics[scale=1]{b4.pdf}
\caption{Extending left super-games}
\label{figb4}
\end{figure}
In time slot $q=m-2$, there is a directed arc from super-team $u_l$ to super-team $u_1$. The super-game will be extended to eight normal games on four corresponding days from $4q-3$ to $4q$, as shown in Figure~\ref{figb4+}, where we put a letter `S' on the edge to indicate that the super-game is a (special) left super-game. There is no difference compared with the penultimate super-games in Figure~\ref{figa8} for the case of even $n/2$.
\begin{figure}[ht]
\centering
\includegraphics[scale=1]{b4+.pdf}
\caption{Extending the left super-game in the time slot $q=m-2$}
\label{figb4+}
\end{figure}
\textbf{Case~3. Right super-games}:
Assume that in a right super-game, there are three super-teams $u_{i-1}$, $u_i$, and $u_{r}$ in time slot $q$ ($1\leq i\leq m-2$ and $1\leq q\leq m-2$) (we let $u_{0}=u_{m-2}$). Recall that $u_{r}$ represents normal teams \{$t_{r_1}, t_{r_2}$\}, $u_{i-1}$ represents normal teams \{$t_{2i-3}, t_{2i-2}$\}, and $u_{i}$ represents normal teams \{$t_{2i-1}, t_{2i}$\}. The super-game will be extended to twelve normal games on four corresponding days from $4q-3$ to $4q$. Before extending the right super-games, we first introduce four types of right super-games $\{R_1,R_2,R_3,R_4\}$, as shown in Figure~\ref{figb5}. If the directions of the edges in $R_j$ are reversed, then we denote this type as $\overline{R_j}$.
\begin{figure}[ht]
\centering
\includegraphics[scale=1]{b5.pdf}
\caption{Four basic types of right super-games}
\label{figb5}
\end{figure}
To extend right super-games, we consider three cases: $1\leq q\leq \frac{m-3}{2}$, $q=\frac{m-1}{2}$ and $\frac{m+1}{2}\leq q\leq m-2$.
For the case of $1\leq q\leq \frac{m-3}{2}$, if there is a directed edge from $u_{i-1}$ to $u_i$ (we know that $i-1$ is even and $i$ is odd), the four days of extended normal games are arranged as $R_4\cdot R_3\cdot \overline{R_4}\cdot \overline{R_3}$, otherwise, $\overline{R_2}\cdot \overline{R_1}\cdot R_2\cdot R_1$.
For the case of $q=\frac{m-1}{2}$, we know that there is a directed edge from $u_{m-2}$ to $u_1$ (both $m-2$ and $1$ are odd), and the four days of extended normal games are arranged as $R_1\cdot R_3\cdot \overline{R_1}\cdot \overline{R_3}$.
For the case of $\frac{m+1}{2}\leq q\leq m-2$, if there is a directed edge from $u_{i-1}$ to $u_i$ (we know that $i-1$ is odd and $i$ is even), the four days of extended normal games are arranged as $R_1\cdot R_2\cdot \overline{R_1}\cdot \overline{R_2}$, otherwise, $\overline{R_3}\cdot \overline{R_4}\cdot R_3\cdot R_4$.
The design of right super-games has two advantages. First, the construction can always keep team $t_{r1}$ playing $AHHA$ and team $t_{r2}$ playing $HAAH$ in each time slot, which reduces the frequency of their returning home.
Second, we can make sure that the road trips of the teams in the super-team with an even index do not cause any extra cost in each time slot $q$.
\textbf{The last time slot}: Now we are ready to design the unarranged games on six days in the last time slot.
There are three kinds of unarranged games.
First, for each super-team $u_i$, by the design of left/normal/right super-games, the games between teams in $u_i$, $\{t_{2i-1}\leftrightarrow t_{2i}\}$, were not arranged.
Second, for super-teams $u_{i-1}$ and $u_i$ ($1\leq i\leq m-2$), there is a right super-game between them, and then there are two days of games unarranged.
Third, since there is no super-game between super-teams $u_l$ and $u_r$, we know that the four games between teams in $u_l$ and $u_r$ were not arranged.
Note that super-teams $(u_1,u_2,...,u_{m-2},u_1)$ form a cycle with an odd length. The unarranged games on the last six days can be seen in Figure~\ref{figb6}. We can see that each team has six unarranged games.
\begin{figure}[ht]
\centering
\includegraphics[scale=1]{b6.pdf}
\caption{The remaining games on the last six days}
\label{figb6}
\end{figure}
To arrange the remaining games, we introduce three days of games: $self$, $A_1$ and $A_2$. An illustration is shown in Figure~\ref{figb7}.
\begin{figure}[ht]
\centering
\includegraphics[scale=1]{b7.pdf}
\caption{An illustration of the three days of games: the games in $self$ are denoted by orange edges; the games in $A_1$ are denoted by blue edges; the games in $A_2$ are denoted by red edges}
\label{figb7}
\end{figure}
Note that we also use $\overline{self}$, $\overline{A_1}$, $\overline{A_2}$ to denote the day of games with the reversed directions in $self$, $A_1$, $A_2$, respectively, i.e., the partner is the same but the game venue changes to the other team's home. We can see that the unarranged games in Figure~\ref{figb6} can be presented by the six days of games $self \cup A_1 \cup A_2 \cup \overline{self} \cup \overline{A_1} \cup \overline{A_2}$.
Next, we arrange the six days $\{self,\overline{self},$ $A_1,A_2,$ $\overline{A_1},\overline{A_2}\}$ to combine the previous $2n-8$ days without violating the bounded-by-$k$ and no-repeat constraints. We consider two cases:
For super-teams $u_i$ ($1\leq i\leq m-2$), the six days are arranged in the order: $A_1\cdot self\cdot A_2\cdot \overline{A_1}\cdot \overline{A_2}\cdot \overline{self}$; For super-teams $u_l$ and $u_r$, the six days are arranged in the order: $self\cdot A_1\cdot A_2\cdot \overline{A_1}\cdot \overline{A_2}\cdot \overline{self}$.
We have described the main part of the scheduling algorithm. Next, we will prove its feasibility.
\begin{theorem}\label{feas2}
For TTP-$2$ with $n$ teams such that $n\geq 10$ and $n\equiv 2 \pmod 4$, the above construction can generate a feasible schedule.
\end{theorem}
\begin{proof}
First, we show that each team plays all the required $2n-2$ games in the $2n-2$ days.
According to the schedule, we can see that each team will attend one game in each of the $2n-2$ days. Furthermore,
it is not hard to observe that no pair of teams plays both of their two games at the same venue.
So each team will play the required $2n-2$ games.
Second, it is easy to see that each team will not violate the no-repeat property.
In any time slot, no two games between the same teams are arranged in two consecutive days. Especially, $self$ and $\overline{self}$ are not arranged on two consecutive days, which we can see in the last time slot. In two different time slots, each team will play against different teams.
Last, we prove that each team does not violate the bounded-by-$k$ property. We still use `$H$' and `$A$' to denote a home game and an away game, respectively. We will also let $\overline{H}=A$ and $\overline{A}=H$.
We first look at the games in the first $2n-12$ days. For the two teams in $u_l$, the four games in the first time slot will be $HHAA$ (see Figure~\ref{figb3}), the four games in an even time slot will be $HAAH$ (see Figure~\ref{figb4}), and the four games in an odd time slot (not containing the first time slot) will be $AHHA$. So two consecutive time slots can combine well.
For the two teams in $u_r$, the four games of team $t_{r_1}$ will always be $AHHA$, and the four games of team $t_{r_2}$ will always be $HAAH$ in each time slot, which can be seen from the construction of right super-games.
Two consecutive time slots can still combine well.
Next, we consider a team $t_i$ in $u_j$ $(j\in \{1,2,\dots,m-2\})$.
In the time slots for away normal/right super-games (the direction of the edge is out of $u_j$), the four games will be $AAHH$, and $\overline{AAHH}$ otherwise.
In the time slots for away left super-games, the four games will be $AHHA$, and $\overline{AHHA}$ otherwise.
According to the rotation scheme of our schedule, super-team $u_j$ will always play away (resp., home) normal/right super-games until it plays an away (resp., a home) left super-game. After playing the away (resp., home) left super-game, it will always play home (resp., away) normal/right super-games. So two consecutive time slots can combine well.
Finally, we have the last ten days in the last two time slots not analyzed yet. We just list out the last ten games in the last two time slots for each team. For the sake of presentation, we also let $t_{i_1}=t_{2i-1}$ and $t_{i_2}=t_{2i}$. We will have five different cases: teams in $u_l$, $u_r$, $u_1$, $u_o$ for odd $o\in \{3,\dots, m-2\}$, and $u_e$ for even $e\in \{2,\dots, m-3\}$.
The last ten games are shown in Figure~\ref{figb8}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.7]{b8.pdf}
\caption{The last ten games for the case of $n\equiv 2\pmod 4$, where $o\in \{3,\dots, m-2\}$ and $e\in \{2,\dots, m-3\}$}
\label{figb8}
\end{figure}
From Figure~\ref{figb8}, we can see that there are no three consecutive home/away games. It is also easy to see that on day $2n-12$ (the last day in time slot $m-3$), the games for $t_{l_1}$ and $t_{l_2}$ (in $u_l$) are $H$, the games for $t_{r_1}$ and $t_{r_2}$ (in $u_r$) are $A$ and $H$, the games for $t_{1_1}$ and $t_{1_2}$ (in $u_1$) are $A$, the games for $t_{e_1}$ and $t_{e_2}$ (in $u_e$) are $A$, and the games for $t_{o_1}$ and $t_{o_2}$ (in $u_o$) are $H$. So time slots $m-3$ and $m-2$ can also combine well without creating three consecutive home/away games.
Thus, the bounded-by-$k$ property also holds, and hence our schedule is feasible for TTP-2.
\end{proof}
\subsection{Analyzing the Approximation Quality}
We still compare the itinerary of each team with its optimal itinerary.
For teams in super-teams $u_i$ ($1\leq i\leq m-1$), we can see that they stay at home before the first game in a super-game and return home after the last game in the super-game (for the last two super-games in the last two time slots, we can see it in Figure~\ref{figb8}).
We can look at the sub itinerary of a team on the four days in a left/normal super-game, which is coincident with a sub itinerary of the optimal itinerary (see the proof of Lemma~\ref{extra}). But, the sub itinerary of a team in a right/last super-game may not be coincident with a sub itinerary of the optimal itinerary. For example, in the right super-game between super-teams $u_1=\{t_1,t_2\}$ and $u_{m-2}=\{t_{2m-5},t_{2m-4}\}$, the sub itinerary of team $t_{1}$ contains one road trip $(t_1,t_{r1},t_{2m-5},t_1)$, while its optimal itinerary, which contains the two road trips $(t_1,t_{r1},t_{r2},t_1)$ and $(t_1,t_{2m-5},t_{2m-4},t_1)$, cannot contain a coincident sub itinerary.
So we may compare the sub itineraries in the two right super-games and the last super-game together, which will be coincident with some sub itineraries of the optimal itinerary.
For teams in super-team $u_r$, since team $t_{r1}$ always plays $AHHA$ and team $t_{r2}$ always plays $HAAH$ in each right super-game of the first $m-2$ time slots, the sub itinerary of them in each of these time slots may not be coincident with a sub itinerary of the optimal itinerary. We may just compare their whole itineraries with their optimal itineraries.
In the following part, we always assume that $u_0=u_{m-2}$, i.e., $t_{-1}=t_{2m-5}$ and $t_{0}=t_{2m-4}$.
We use $\Delta_i$ to denote the total expected extra cost of teams $t_{2i-1}$ and $t_{2i}$ in super-team $u_i$.
For the sake of presentation, we attribute the extra cost of teams in left super-games to $\Delta_{m-1}$, which will be analyzed on super-team $u_l$. Thus, when we analyze super-teams $u_i$ ($1\leq i\leq m-2$), we will not analyze the extra cost caused in its left super-game again.
\textbf{The extra cost of super-team $u_l$:}
For super-team $u_l$, there are $m-3$ left super-games (including one special left super-game) in the middle $m-3$ time slots and one last super-game in the last time slot.
As mentioned before, the design of (normal) left super-game is still the same, while the design of (special) left super-game is the same as the penultimate super-game in the case of even $n/2$.
By Lemma~\ref{extra}, we know that the expected extra cost of $m-4$ (normal) left super-games and one (special) left super-game is bounded by $(m-4)\times\frac{4}{n(n-2)}\mbox{LB}+\frac{2}{n(n-2)}\mbox{LB}=\frac{2n-14}{n(n-2)}\mbox{LB}$.
In the last time slot, we can directly compare teams' road trips with their coincident sub itineraries of their optimal itineraries. Both teams $t_{l1}$ and $t_{l2}$ will have the same road trips as that in the optimal itineraries (see the design of the six days: $self\cdot A_1\cdot A_2\cdot \overline{A_1}\cdot \overline{A_2}\cdot \overline{self}$). We can get that
\begin{equation}\label{u_l}
\Delta_{m-1}\leq \frac{2n-14}{n(n-2)}\mbox{LB}.
\end{equation}
\textbf{The extra cost of super-team $u_1$:}
For super-team $u_1$, it plays $m-5$ normal super-games, one (special) left super-game, two right super-games, and one last super-game. The normal super-games do not cause extra cost by Lemma~\ref{core}. Since the extra cost of the left super-game has been analyzed on super-team $u_l$, we only need to analyze the extra cost of the two right super-games and the last super-game. Figure~\ref{figbb1} shows their road trips in these three time slots and their coincident sub itineraries of their optimal itineraries.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.7]{bb1.pdf}
\caption{The road trips of teams in $u_1$ and their coincident sub itineraries of their optimal itineraries}
\label{figbb1}
\end{figure}
By Lemma~\ref{core}, we can get that
\begin{equation}\label{u_1}
\begin{aligned}
\Delta_1 =&\ \EE{(D_{1,2m-4}+D_{2,3}+D_{r1,2m-5}+D_{r2,4})-(D_{1,2}+D_{3,4}+D_{2m-5,2m-4}+D_{r1,r2})}\\
&\ +\EE{(D_{3,2m-5}+D_{r1,4}+D_{r2,2m-4})-(D_{3,4}+D_{2m-5,2m-4}+D_{r1,r2})}\\
\leq&\ \frac{7}{n(n-2)}\mbox{LB}.
\end{aligned}
\end{equation}
\textbf{The extra cost of super-team $u_i$ ($i=3,5,\dots,m-4$):}
For super-team $u_i$, it plays $m-5$ normal super-games, one left super-game, two right super-games, and one last super-game.
Similarly, the normal super-games do not cause extra cost and the left super-game has been analyzed.
Only the two right super-games and the last super-game can cause extra cost. Figure~\ref{figbb2} shows their road trips in these three time slots and their coincident sub itineraries of their optimal itineraries.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.7]{bb2.pdf}
\caption{The road trips of teams in $u_i$ ($i=3,5,\dots,m-4$) and their coincident sub itineraries of their optimal itineraries}
\label{figbb2}
\end{figure}
By Lemma~\ref{core}, we can get that
\begin{equation}\label{u_oddi}
\begin{aligned}
\Delta_i =&\ \EE{(D_{2i-2,2i-1}+D_{2i,2i+1}+D_{r1,2i-3}+D_{r2,2i+2})-(D_{2i-3,2i-2}+D_{2i-1,2i}+D_{2i+1,2i+2}+D_{r1,r2})}\\
&\ +\EE{(D_{2i-2,2i}+D_{2i,2i+1}+D_{r1,2i+2}+D_{r2,2i-3})-(D_{2i-3,2i-2}+D_{2i+1,2i+2}+D_{r1,r2})}\\
\leq&\ \frac{8}{n(n-2)}\mbox{LB}.
\end{aligned}
\end{equation}
\textbf{The extra cost of super-team $u_i$ ($i=2,4,\dots,m-3$):}
For super-team $u_i$, it plays $m-5$ normal super-games, one left super-game, two right super-games, and one last super-game.
Similarly, only these two right super-games and the last super-game could cause extra cost. However, by the design of right super-games and the last super-game, the two teams in $u_i$ will have the same road trips as that in the optimal itinerary and then the extra cost is 0.
\textbf{The extra cost of super-team $u_{m-2}$:}
For super-team $u_{m-2}$, it also plays $m-5$ normal super-games, one left super-game, two right super-games, and one last super-game.
Only the two right super-games and the last super-game can cause extra cost.
Figure~\ref{figbb3} shows their road trips in these three time slots and their coincident sub itineraries of their optimal itineraries.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.7]{bb3.pdf}
\caption{The road trips of teams in $u_{m-2}$ and their coincident sub itineraries of their optimal itineraries}
\label{figbb3}
\end{figure}
By Lemma~\ref{core}, we can get that
\begin{equation}\label{u_m-2}
\begin{aligned}
\Delta_{m-2} =&\ \EE{(D_{2,2m-4}+D_{2m-6,2m-5}+D_{r1,2m-7}+D_{r2,1})-(D_{1,2}+D_{2m-7,2m-6}+D_{2m-5,2m-4}+D_{r1,r2})}\\
&\ +\EE{(D_{1,2m-4}+D_{2m-6,2m-4}+D_{r1,2}+D_{r2,2m-7})-(D_{1,2}+D_{2m-7,2m-6}+D_{r1,r2})}\\
\leq&\ \frac{8}{n(n-2)}\mbox{LB}.
\end{aligned}
\end{equation}
By (\ref{u_l}), (\ref{u_1}), (\ref{u_oddi}), and (\ref{u_m-2}), we can get that
\begin{equation}\label{sum1}
\sum_{i=1}^{m-1}\Delta_i\leq \frac{7}{n(n-2)}\mbox{LB}+\frac{m-3}{2}\times\frac{8}{n(n-2)}\mbox{LB}+\frac{2n-14}{n(n-2)}\mbox{LB}=\frac{4n-19}{n(n-2)}\mbox{LB}.
\end{equation}
\textbf{The extra cost of super-team $u_r$:}
For super-team $u_r$, there are two teams $t_{r1}$ and $t_{r2}$.
For team $t_{r1}$, there is one road trip visiting $t_{l1}$ and $t_{l2}$ in the last time slot (see the design of the six days: $self\cdot A_1\cdot A_2\cdot \overline{A_1}\cdot \overline{A_2}\cdot \overline{self}$), which does not cause extra cost. So we will not compare this sub itinerary. Figure~\ref{figbb4} shows the other road trips of $t_{r1}$ in the first $m-2$ time slots and the whole road trips of $t_{r2}$ in $m-1$ time slots. Note that the road trips in Figure~\ref{figbb4} correspond to the case of $n\equiv 2 \pmod 8$.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.7]{bb4.pdf}
\caption{The road trips of teams in $u_{r}$ for the case of $n\equiv 2 \pmod 8$}
\label{figbb4}
\end{figure}
By Lemma~\ref{core}, for the case of $n\equiv 2\pmod 8$, we can get that
\begin{equation}\label{u_r}
\begin{aligned}
\Delta_m =&\ \EEE{\sum_{i=1}^{\frac{m-5}{4}}(D_{4i-3,4i-1}+D_{4i-2,4i})+\sum_{i=\frac{m+3}{4}}^{\frac{m-3}{2}}(D_{4i-1,4i+1}+D_{4i,4i+2})}\\
&\ +\EEE{D_{m-4,m-2}+D_{m-1,m+1}+D_{r1,m-3}+D_{r2,m}-\sum_{i=1}^{m-2}D_{2i-1,2i}-D_{r1,r2}}\\
&\ +\EEE{\sum_{i=1}^{\frac{m-3}{2}}(D_{4i-3,4i-1}+D_{4i,4i+2})+(D_{2,2m-5}+D_{r1,l2}+D_{r2,l1})-\sum_{i=1}^{m}D_{2i-1,2i}}\\
\leq&\ \lrA{2\times\frac{m-5}{4}+2\times\frac{m-5}{4}+4+2\times\frac{m-3}{2}+3}\times\frac{1}{n(n-2)}\mbox{LB}\\
=&\ \frac{n-1}{n(n-2)}\mbox{LB}.
\end{aligned}
\end{equation}
For the case of $n\equiv 6 \pmod 8$, the dashed edges will be reversed (see Figure \ref{figb1}) and then the road trips of teams in $u_r$ may be slightly different. Figure~\ref{figbb5} shows the other road trips of $t_{r1}$ in the first $m-2$ time slots and the whole road trips of $t_{r2}$ in $m-1$ time slots. We can see that the road trips of team $t_{r2}$ are still the same while the road trips of team $t_{r1}$ are different.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.7]{bb5.pdf}
\caption{The road trips of teams in $u_{r}$ for the case of $n\equiv 6 \pmod 8$}
\label{figbb5}
\end{figure}
By Lemma~\ref{core}, for the case of $n\equiv 6\pmod 8$, we can get that
\begin{equation}\label{u_r+}
\begin{aligned}
\Delta_m =&\ \EEE{\sum_{i=1}^{\frac{m-3}{4}}(D_{4i-3,4i-1}+D_{4i-2,4i})+\sum_{i=\frac{m+1}{4}}^{\frac{m-3}{2}}(D_{4i-1,4i+1}+D_{4i,4i+2})}\\
&\ +\EEE{D_{r1,m-2}+D_{r2,m-1}-\sum_{i=1}^{m-2}D_{2i-1,2i}-D_{r1,r2}}\\
&\ +\EEE{\sum_{i=1}^{\frac{m-3}{2}}(D_{4i-3,4i-1}+D_{4i,4i+2})+D_{2,2m-5}+D_{r1,l2}+D_{r2,l1}-\sum_{i=1}^{m}D_{2i-1,2i}}\\
\leq&\ \lrA{2\times\frac{m-3}{4}+2\times\frac{m-3}{4}+2+2\times\frac{m-3}{2}+3}\times\frac{1}{n(n-2)}\mbox{LB}\\
=&\ \frac{n-1}{n(n-2)}\mbox{LB}.
\end{aligned}
\end{equation}
Therefore, the upper bound of $\Delta_m$ in these two cases is identical.
By (\ref{sum1}), (\ref{u_r}), and (\ref{u_r+}), we can get that the
total expected extra cost is
\[
\sum_{i=1}^{m}\Delta_i\leq\frac{4n-19}{n(n-2)}\mbox{LB}+\frac{n-1}{n(n-2)}\mbox{LB}=\lrA{\frac{5}{n}-\frac{10}{n(n-2)}}\mbox{LB}.
\]
\begin{theorem}\label{res_2}
For TTP-$2$ with $n$ teams, when $n\geq 10$ and $n\equiv 2 \pmod 4$, there is a randomized $O(n^3)$-time algorithm with an expected approximation ratio of $1+{\frac{5}{n}}-{\frac{10}{n(n-2)}}$.
\end{theorem}
The analysis of our algorithm for odd $n/2$ is also tight.
We can consider the same example as before, where the distance of each edge in the minimum perfect matching $M$ is 0 and the distance of each edge in $G-M$ is 1. Recall that the independent lower bound satisfies $\mbox{LB}=n(n-2)$.
For odd $n/2$, by the analysis of $\Delta_i$, we can compute that the total extra cost of our construction is $5n-20$. Thus, in this case, the ratio is
\[
1+\frac{5n-20}{n(n-2)}=1+\frac{5}{n}-\frac{10}{n(n-2)}.
\]
\section{The Derandomization}
In the previous sections, we proposed a randomized $(1+\frac{4L(n)+n}{n(n-2)})$-approximation algorithm for even $n/2$ and a randomized $(1+\frac{5}{n}-\frac{10}{n(n-2)})$-approximation algorithm for odd $n/2$. In this section, we show how to derandomize our algorithms efficiently by using the method of conditional expectations~\cite{motwani1995randomized}.
This method was also used to derandomize a $(2+O(1/n))$-approximation algorithm for TTP-3 in \cite{miyashiro2012approximation}.
The difference is that their algorithm randomly orders all teams, while our algorithms first randomly order the super-teams and then randomly order the two teams in each super-team.
We will also analyze a running-time bound of the derandomization.
According to the analysis of our algorithms, the total extra cost is bounded by
\begin{equation}\label{de1}
W=\sum_{\substack{1\leq i'<j'\leq m\\t_i\in u_{i'} \& t_j\in u_{j'}}} n_{ij}D_{i,j},
\end{equation}
where $n_{ij}$ is the number of times edge $t_it_j$ appears in computing the total extra cost.
In the main framework of our algorithms, there are two steps using the randomized methods: the first is that we use $\{u_1,\dots,u_m\}$ to label the $m$ edges in $M$ randomly; the second is that we use $\{t_{2i-1}, t_{2i}\}$ to label the two teams in each super-team $u_i$ randomly.
We first consider the derandomization of super-teams, i.e., use $\{u_1,\dots,u_m\}$ to label the $m$ edges in $M$ in a deterministic way such that the expected approximation ratio still holds.
\subsection{The Derandomization of Super-teams}
Suppose the $m$ edges in $M$ are denoted by $\{e_1,\dots, e_m\}$. We wish to find a permutation $\sigma: (1,2,\dots,m)\leftrightarrow (\sigma_1, \sigma_2,\dots,\sigma_m)$ such that
\[
\EE{W|u_1=e_{\sigma_1},u_2=e_{\sigma_2},\dots,u_m=e_{\sigma_m}}\leq \EE{W}.
\]
We can determine each $\sigma_i$ sequentially. Suppose we have already determined $(\sigma_1, \sigma_2,\dots,\sigma_{s-1})$ such that
\[
\EE{W|u_1=e_{\sigma_1},u_2=e_{\sigma_2},\dots,u_{s-1}=e_{\sigma_{s-1}}}\leq \EE{W}.
\]
To determine $\sigma_{s}$, we can simply let $\sigma_s$ be
\begin{equation}\label{de2}
\sigma_s=\arg\min_{\sigma_s\in \{1,2,\dots,m\}\setminus\{\sigma_1, \sigma_2,\dots,\sigma_{s-1}\}} \EE{W|u_1=e_{\sigma_1},u_2=e_{\sigma_2},\dots,u_s=e_{\sigma_s}}\leq \EE{W}.
\end{equation}
Then, we can get
\begin{equation}\label{de3}
\EE{W|u_1=e_{\sigma_1},u_2=e_{\sigma_2},\dots,u_s=e_{\sigma_s}}\leq \EE{W|u_1=e_{\sigma_1},u_2=e_{\sigma_2},\dots,u_{s-1}=e_{\sigma_{s-1}}}\leq \EE{W}.
\end{equation}
Therefore, we can repeat this procedure to determine the permutation $\sigma$.
Next, we show how to compute $\EE{W|u_1=e_{\sigma_1},u_2=e_{\sigma_2},\dots,u_s=e_{\sigma_s}}$. Recall that $D(u_{i'},u_{j'})=\sum_{t_{i}\in u_{i'} \& t_{j}\in u_{j'}}D_{i,j}$.
When $t_i\in u_{i'}$ and $t_j\in u_{j'}$ ($i'\neq j'$), we can get that
\[
\EE{D_{i,j}|u_1=e_{\sigma_1},u_2=e_{\sigma_2},\dots,u_s=e_{\sigma_s}}=\frac{1}{4}\EE{D(u_{i'},u_{j'})|u_1=e_{\sigma_1},u_2=e_{\sigma_2},\dots,u_s=e_{\sigma_s}},
\]
since the two teams in each super-team are still labeled randomly.
Hence, by (\ref{de1}) and (\ref{de3}), we can get
\[
\begin{aligned}
&\EE{W|u_1=e_{\sigma_1},u_2=e_{\sigma_2},\dots,u_s=e_{\sigma_s}}\\
&=\ \EEE{\sum_{\substack{1\leq i'<j'\leq m\\t_i\in u_{i'} \& t_j\in u_{j'}}} n_{ij}D_{i,j}|u_1=e_{\sigma_1},u_2=e_{\sigma_2},\dots,u_s=e_{\sigma_s}}\\
&=\ \sum_{\substack{1\leq i'<j'\leq m\\t_i\in u_{i'} \& t_j\in u_{j'}}}n_{ij}\EE{ D_{i,j}|u_1=e_{\sigma_1},u_2=e_{\sigma_2},\dots,u_s=e_{\sigma_s}}\\
&=\ \sum_{\substack{1\leq i'<j'\leq m\\t_i\in u_{i'} \& t_j\in u_{j'}}}\frac{1}{4}n_{ij}\EE{D(u_{i'},u_{j'})|u_1=e_{\sigma_1},u_2=e_{\sigma_2},\dots,u_s=e_{\sigma_s}}\\
&=\ \sum_{\substack{1\leq i'<j'\leq m}}\frac{1}{4}m_{i'j'}\EE{D(u_{i'},u_{j'})|u_1=e_{\sigma_1},u_2=e_{\sigma_2},\dots,u_s=e_{\sigma_s}},
\end{aligned}
\]
where $m_{i'j'}=\sum_{t_i\in u_{i'} \& t_j\in u_{j'}}n_{ij}$.
Let $S_s=\{\sigma_1, \sigma_2,\dots,\sigma_s\}$, $\overline{S_s}=\{1,2,\dots,m\}\setminus S_s$, $T_s=\{1, 2,\dots,s\}$, and $\overline{T_s}=\{1,2,\dots,m\}\setminus T_s$.
The value $\EE{D(u_{i'},u_{j'})|u_1=e_{\sigma_1},u_2=e_{\sigma_2},\dots,u_s=e_{\sigma_s}}$ can be computed as follows:
\[
\begin{aligned}
&\EE{D(u_{i'},u_{j'})|u_1=e_{\sigma_1},u_2=e_{\sigma_2},\dots,u_s=e_{\sigma_s}} &\\
&=\left\{
\begin{array}{*{20}l}
D(u_{i'},u_{j'}), & i'\in T_s, j'\in T_s,\\
\frac{1}{m-s}\sum_{\sigma_{j'}\in\overline{S_s}}D(u_{i'}, e_{\sigma_{j'}}), & i'\in T_s, j'\in \overline{T_s},\\
\frac{1}{m-s}\sum_{\sigma_{i'}\in\overline{S_s}}D(e_{\sigma_{i'}}, u_{j'}), & i'\in \overline{T_s}, j'\in T_s,\\
\frac{1}{(m-s-1)(m-s)}\sum_{\sigma_{i'},\sigma_{j'}\in\overline{S_s},\,\sigma_{i'}\neq\sigma_{j'}}D(e_{\sigma_{i'}}, e_{\sigma_{j'}}), & i'\in \overline{T_s}, j'\in \overline{T_s},\\
\end{array}
\right.
\end{aligned}
\]
where $D(e_{\sigma_{i'}}, e_{\sigma_{j'}})$ is the sum distance of all four edges between vertices of $e_{\sigma_{i'}}$ and vertices of $e_{\sigma_{j'}}$ (the edge $e_{\sigma_{i'}}$/$e_{\sigma_{j'}}$ can be regarded as a super-team).
Next, we analyze the running time of our derandomization on super-teams.
When $i'\in T_s$ and $j'\in T_s$, there are $O(s^2)=O(n^2)$ variables, the expected conditional value of each variable $D(u_{i'},u_{j'})$ can be computed in $O(1)$ time, and hence the expected conditional values of these variables can be computed in $O(n^2)$ time.
When $i'\in T_s$ and $j'\in \overline{T_s}$, there are $O(s(m-s))=O(n^2)$ variables, the expected conditional value of each variable $D(u_{i'},u_{j'})$, which does not depend on $j'$, can be computed in $O(m-s)=O(n)$ time, and hence these variables can be computed in $O(ns)=O(n^2)$ time.
Similarly, when $i'\in \overline{T_s}$ and $j'\in T_s$, these variables can be computed in $O(n^2)$ time.
When $i'\in \overline{T_s}$ and $j'\in \overline{T_s}$, there are $O((m-s)^2)=O(n^2)$ variables, and the expected conditional value of each variable $D(u_{i'},u_{j'})$ does not depend on $i'$ or $j'$ (it is the same constant), which can be computed in $O((m-s)^2)=O(n^2)$ time; hence, all these variables can be computed in $O(n^2)$ time.
Therefore, all of the $O(n^2)$ variables can be computed in $O(n^2)$ time. Hence, the value $\EE{W|u_1=e_{\sigma_1},u_2=e_{\sigma_2},\dots,u_s=e_{\sigma_s}}$ can be computed in $O(n^2)$ time. To determine $\sigma_{s}$, by (\ref{de2}), we need $O(n^3)$ time. Therefore, to determine the permutation $\sigma$, the total running time is $O(n^4)$.
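To illustrate the procedure, the following Python sketch fixes the super-teams one by one by the method of conditional expectations. The pairwise distances $D(e_a,e_b)$ between matching edges and the coefficients $m_{i'j'}$ are treated as given input (the coefficients depend on the schedule structure). The sketch recomputes every conditional expectation from scratch, so it is slower than the $O(n^4)$ bound argued above, but it follows the same case analysis.
\begin{verbatim}
from itertools import combinations

def expected_W(D_super, weights, assigned):
    # E[ W | u_1 = e_{assigned[0]}, ..., u_s = e_{assigned[s-1]} ], where
    # W = sum_{i'<j'} (1/4) * m_{i'j'} * D(u_{i'}, u_{j'}) and the remaining
    # super-teams are assigned to the free matching edges uniformly at random.
    # D_super[a][b] : D(e_a, e_b), the sum of the four team distances
    #                 between the endpoints of edges e_a and e_b.
    # weights[i][j] : the coefficient m_{i'j'} (assumed to be given).
    m, s = len(D_super), len(assigned)
    free = [a for a in range(m) if a not in assigned]
    total = 0.0
    for ip, jp in combinations(range(m), 2):
        if ip < s and jp < s:       # both super-teams already fixed
            e = D_super[assigned[ip]][assigned[jp]]
        elif ip < s:                # u_{j'} is a uniformly random free edge
            e = sum(D_super[assigned[ip]][b] for b in free) / (m - s)
        elif jp < s:                # u_{i'} is a uniformly random free edge
            e = sum(D_super[a][assigned[jp]] for a in free) / (m - s)
        else:                       # both are distinct random free edges
            e = sum(D_super[a][b] for a in free for b in free
                    if a != b) / ((m - s) * (m - s - 1))
        total += 0.25 * weights[ip][jp] * e
    return total

def derandomize_super_teams(D_super, weights):
    # Fix u_1, u_2, ... one by one, each time choosing the free edge
    # that minimizes the conditional expectation.
    m = len(D_super)
    assigned = []
    for _ in range(m):
        best = min((a for a in range(m) if a not in assigned),
                   key=lambda a: expected_W(D_super, weights, assigned + [a]))
        assigned.append(best)
    return assigned                 # assigned[i] is the edge labeled u_{i+1}
\end{verbatim}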
Next, we consider the derandomization of each super-team, i.e., use $\{t_{2i-1}, t_{2i}\}$ to label the two vertices of edge $e_{\sigma_i}$ in a deterministic way such that the expected approximation ratio still holds.
\subsection{The Derandomization of Each Super-team}
Now, we assume that the $m$ edges in $M$ are directed edges. Let $a(e)$ and $b(e)$ be the tail vertex and the head vertex of the directed edge $e$, respectively. In the previous derandomization, we have determined $u_i=e_{\sigma_i}$, i.e., the super-team $u_i$ refers to the edge $e_{\sigma_i}$. Hence, the weight in (\ref{de1}) can be written as $W(\sigma)$. Note that $\EE{W(\sigma)}\leq \EE{W}$. Then, we need to determine the labels of $a(e_{\sigma_i})$ and $b(e_{\sigma_i})$. We use $u_i=e^{0}_{\sigma_i}$ (resp., $u_i=e^{1}_{\sigma_i}$) to mean that we let $t_{2i-1}=a(e_{\sigma_i})$ and $t_{2i}=b(e_{\sigma_i})$ (resp., $t_{2i-1}=b(e_{\sigma_i})$ and $t_{2i}=a(e_{\sigma_i})$).
We wish to find a vector $(\pi_1,\pi_2,\dots,\pi_m)\in\{0,1\}^m$ such that
\[
\EE{W(\sigma)|u_1=e^{\pi_1}_{\sigma_1},u_2=e^{\pi_2}_{\sigma_2},\dots,u_m=e^{\pi_m}_{\sigma_m}}\leq \EE{W(\sigma)}.
\]
The idea of derandomization is the same. We can determine each $\pi_i$ sequentially.
Suppose we have already determined $(\pi_1, \pi_2,\dots,\pi_{s-1})$ such that
\[
\EE{W(\sigma)|u_1=e^{\pi_1}_{\sigma_1},u_2=e^{\pi_2}_{\sigma_2},\dots,u_{s-1}=e^{\pi_{s-1}}_{\sigma_{s-1}}}\leq \EE{W(\sigma)}.
\]
To determine $\pi_{s}$, using a similar argument, we know that we can let $\pi_s$ be
\begin{equation}\label{de22}
\pi_s=\arg\min_{\pi_s\in \{0,1\}} \EE{W(\sigma)|u_1=e^{\pi_1}_{\sigma_1},u_2=e^{\pi_2}_{\sigma_2},\dots,u_s=e^{\pi_s}_{\sigma_s}}\leq \EE{W(\sigma)}.
\end{equation}
Next, we show how to compute $\EE{W(\sigma)|u_1=e^{\pi_1}_{\sigma_1},u_2=e^{\pi_2}_{\sigma_2},\dots,u_s=e^{\pi_s}_{\sigma_s}}$.
By (\ref{de1}), we can get
\[
\begin{aligned}
&\EE{W(\sigma)|u_1=e^{\pi_1}_{\sigma_1},u_2=e^{\pi_2}_{\sigma_2},\dots,u_s=e^{\pi_s}_{\sigma_s}}\\
&=\ \EEE{\sum_{\substack{1\leq i'<j'\leq m\\t_i\in u_{i'} \& t_j\in u_{j'}}} n_{ij}D_{i,j}|u_1=e^{\pi_1}_{\sigma_1},u_2=e^{\pi_2}_{\sigma_2},\dots,u_s=e^{\pi_s}_{\sigma_s}}\\
&=\ \sum_{\substack{1\leq i'<j'\leq m\\t_i\in u_{i'} \& t_j\in u_{j'}}}n_{ij}\EE{ D_{i,j}|u_1=e^{\pi_1}_{\sigma_1},u_2=e^{\pi_2}_{\sigma_2},\dots,u_s=e^{\pi_s}_{\sigma_s}}.
\end{aligned}
\]
Note that $t_i\in u_{i'}$ and $t_j\in u_{j'}$.
Let $T_s=\{1, 2,\dots,s\}$ and $\overline{T_s}=\{1,2,\dots,m\}\setminus T_s$.
The value $\EE{D_{i,j}|u_1=e^{\pi_1}_{\sigma_1},u_2=e^{\pi_2}_{\sigma_2},\dots,u_s=e^{\pi_s}_{\sigma_s}}$ can be computed as follows:
\[
\begin{aligned}
&\EE{D_{i,j}|u_1=e^{\pi_1}_{\sigma_1},u_2=e^{\pi_2}_{\sigma_2},\dots,u_s=e^{\pi_s}_{\sigma_s}} &\\
&=\left\{
\begin{array}{*{20}l}
D_{i,j}, & i'\in T_s, j'\in T_s,\\
D(t_i, u_{j'})/2, & i'\in T_s, j'\in \overline{T_s},\\
D(u_{i'}, t_j)/2, & i'\in \overline{T_s}, j'\in T_s,\\
D(u_{i'}, u_{j'})/4, & i'\in \overline{T_s}, j'\in \overline{T_s},
\end{array}
\right.
\end{aligned}
\]
where $D(t_i, u_{j'})=\sum_{t_j\in u_{j'}}D_{i,j}$ and $D(u_{i'}, t_j)=\sum_{t_i\in u_{i'}}D_{i,j}$.
Using a similar argument, the value $\EE{W(\sigma)|u_1=e^{\pi_1}_{\sigma_1},u_2=e^{\pi_2}_{\sigma_2},\dots,u_s=e^{\pi_s}_{\sigma_s}}$ can be computed in $O(n^2)$ time. To determine $\pi_{s}$, by (\ref{de22}), we need $O(n^2)$ time. Therefore, to determine the vector $(\pi_1,\pi_2,\dots,\pi_m)$, the total running time is $O(n^3)$.
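The second phase follows the same pattern. The short Python sketch below only shows the greedy loop; it assumes a routine (called \texttt{expected\_W\_flips} here) that evaluates the conditional expectation of $W(\sigma)$ by the case analysis above, whose implementation mirrors that of the previous subsection and is omitted.
\begin{verbatim}
def derandomize_team_labels(m, expected_W_flips):
    # Decide, edge by edge, which endpoint of e_{sigma_i} becomes t_{2i-1}.
    # expected_W_flips(prefix) is assumed to return
    # E[ W(sigma) | pi_1, ..., pi_s fixed to `prefix` ].
    prefix = []
    for _ in range(m):
        # try both labelings of the current edge and keep the cheaper one
        prefix.append(min((0, 1),
                          key=lambda b: expected_W_flips(prefix + [b])))
    return prefix   # prefix[i] = 0 keeps the original orientation of e_{sigma_i}
\end{verbatim}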
Therefore, the derandomization takes $O(n^4)$ extra time in total.
\section{Experimental Results}
To evaluate the performance of our schedule algorithms, we implement our algorithms and test them on well-known benchmark instances.
In fact, the full derandomizations of our algorithms are not efficient in practice.
We will use some heuristic methods to get a good order of teams in our schedules. Thus, in the implementation we use the randomized algorithms and combine them with some simple local search heuristics.
\subsection{The Main Steps in Experiments}
In our algorithms and experiments, we first compute a minimum weight perfect matching.
After that, we pack each edge in the matching as a super-team and randomly order them with $\{u_1,u_2,...,u_{m}\}$.
For each super-team $u_i$, we also randomly order the two teams in it with $\{t_{2i-1}, t_{2i}\}$. Using the obtained order, we can get a feasible solution according to our construction methods. To get possible improvements, we will use two simple swapping rules: the first is to swap two super-teams, and the second is to swap the two teams in each super-team. These two swapping rules can keep the pairs of teams in super-teams corresponding to the minimum weight perfect matching. The details of the two swapping rules are as follows.
\medskip
\noindent
\textbf{The first swapping rule on super-teams:} Consider an initial schedule obtained by our randomized algorithms, where
the super-teams $\{u_1,u_2,...,u_{m}\}$ are randomly ordered. We are going to swap the positions of some pairs of super-teams $(u_i, u_j)$ to reduce the total traveling distance.
There are $\frac{m(m-1)}{2}$ pairs of super-teams $(u_i, u_j)$ with $1\leq i<j\leq m$.
We consider the $\frac{m(m-1)}{2}$ pairs $(i,j)$ in lexicographic order.
From the first pair to the last pair $(i,j)$, we test whether the total traveling distance can be reduced after we swap the positions of the two super-teams $u_i$ and $u_j$. If no, we do not swap them and go to the next pair. If yes, we swap the two super-teams and go to the next pair. After considering all the $\frac{m(m-1)}{2}$ pairs, if there is an improvement, we repeat the whole procedure. Otherwise, the procedure ends.
\medskip
\noindent
\textbf{The second swapping rule on teams in each super-team:}
There are $m$ super-teams $u_i$ in our algorithm. For each super-team $u_i$, there are two normal teams in it, which are randomly ordered initially. We are going to swap the positions of two normal teams in a super-team to reduce the total traveling distance.
We consider the $m$ super-teams $u_i$ in an order. For each super-team $u_i$, we test whether the total traveling distance can be reduced after we swap the positions of the two teams in the super-team $u_i$. If no, we do not swap them and go to the next super-team. If yes, we swap the two teams and go to the next super-team. After considering all the $m$ super-teams, if there is an improvement, we repeat the whole procedure. Otherwise, the procedure ends.
\medskip
In our experiments, one suite of swapping operations is to iteratively apply the two swapping rules until no further improvement can be obtained.
Since the initial order of the teams is random, we may generate several different initial orders and apply the swapping operations on each of them.
In our experiments, when we say executing $x$ rounds, we mean that we generate $x$ initial random orders, apply the suite of swapping operations to each of them, and return the best result.
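For concreteness, the following Python sketch shows the overall experimental loop. The routine \texttt{total\_distance} is a placeholder that is assumed to build the schedule from the current labeling (a permutation of the super-teams plus one bit per super-team fixing which of its two teams is $t_{2i-1}$) using the constructions of the previous sections and to return the total traveling distance.
\begin{verbatim}
import random

def run_rounds(m, total_distance, rounds=10, seed=0):
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(rounds):
        order = list(range(m))
        rng.shuffle(order)                             # random order of super-teams
        flips = [rng.randint(0, 1) for _ in range(m)]  # random order inside each one
        cur, improved = total_distance(order, flips), True
        while improved:
            improved = False
            # rule 1: try to swap every pair of super-teams
            for i in range(m):
                for j in range(i + 1, m):
                    order[i], order[j] = order[j], order[i]
                    cand = total_distance(order, flips)
                    if cand < cur:
                        cur, improved = cand, True
                    else:
                        order[i], order[j] = order[j], order[i]   # undo
            # rule 2: try to swap the two teams inside each super-team
            for i in range(m):
                flips[i] ^= 1
                cand = total_distance(order, flips)
                if cand < cur:
                    cur, improved = cand, True
                else:
                    flips[i] ^= 1                                 # undo
        best = min(best, cur)
    return best
\end{verbatim}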
\subsection{Applications to Benchmark Sets}
We implement our schedule algorithms to solve the benchmark instances in~\cite{trick2007challenge}.
The benchmark website provides 62 instances, most of which come from real-world sports scheduling scenarios, such as the Super 14 Rugby League, the National Football League, and the 2003 Brazilian soccer championship. The number of teams in the instances varies from 4 to 40.
There are 34 instances of even $n/2$, and 28 instances of odd $n/2$.
Almost half of the instances are very small ($n\leq 8$) or very special (all teams are on a cycle or the distance between any two teams is 1), and they were not tested in previous papers~\cite{thielen2012approximation,DBLP:conf/mfcs/XiaoK16}. So we only test the remaining 33 instances, including 17 instances of even $n/2$ and 16 instances of odd $n/2$.
Due to the difference between the two algorithms for even $n/2$ and odd $n/2$, we will show the results separately for these two cases.
\medskip
\noindent
\textbf{Results of even $n/2$:}
For the case of even $n/2$, we compare our results with the best-known results in Table~\ref{experimentresult1}. In the table, the column
`\emph{Lower Bounds}' indicates the independent lower bounds;
`\emph{Previous Results}' lists previous known results in~\cite{DBLP:conf/mfcs/XiaoK16};
`\emph{Initial Results}' shows the results given by our initial randomized schedule;
`\emph{$x$ Rounds}' shows the best results after generating $x$ initial randomized schedules and applying the suite of swapping operations on them (we show the results for $x=1,10, 50$ and 300);
`\emph{Our Gap}' is defined as $\frac{300~Rounds~-~Lower~Bounds}{Lower~Bounds}$ (in percent), and `\emph{Improvement Ratio}' is defined as $\frac{Previous~Results~-~300~Rounds}{Previous~Results}$ (in percent).
\begin{table}[ht]
\footnotesize
\centering
\begin{tabular}{c|ccccccccc}
\hline
Data & Lower & Previous & Initial & 1 & 10 & 50 & 300 & Our & Improvement \\
Set & Bounds & Results & Results & Round & Rounds & Rounds & Rounds & Gap & Ratio\\
\hline
Galaxy40 & 298484 & 307469 &314114& 305051 & 304710 & 304575 & ${304509}$ & 2.02 & 0.96\\
Galaxy36 & 205280 & 212821 &218724& 210726 & 210642 & 210582 & ${210461}$ & 2.52 & 1.11
\\
Galaxy32 & 139922 & 145445 &144785& 142902 & 142902 & 142834 & ${142781}$ & 2.04 & 1.83
\\
Galaxy28 & 89242 & 93235 &94173& 92435 & 92121 & 92105 & ${92092}$ & 3.19 & 1.23
\\
Galaxy24 & 53282 & 55883 &55979& 54993 & ${54910}$ & 54910 & 54910 & 3.06 & 1.74\\
Galaxy20 & 30508 & 32530 &32834& 32000 & 31926 & ${31897}$ & 31897 & 4.55 & 1.95
\\
Galaxy16 & 17562 & 19040 &18664& 18409 & ${18234}$ & 18234 & 18234 & 3.83 & 4.23
\\
Galaxy12 & 8374 & 9490 &9277& 8956 & 8958 & ${8937}$ & 8937 & 6.72 & 5.83\\
NFL32 & 1162798 & 1211239 &1217448& 1190687 & 1185291 & 1185291 & ${1184791}$ & 1.89 & 2.18\\
NFL28 & 771442 & 810310 &818025& 800801 & 796568 & ${795215}$ & 795215 & 3.08 & 1.86
\\
NFL24 & 573618 & 611441 &602858& 592422 & 592152 & ${591991}$ & 591991 & 3.20 & 3.18
\\
NFL20 & 423958 & 456563 &454196& 443718 & ${441165}$ & 441165 & 441165 & 4.06 & 3.37
\\
NFL16 & 294866 & 321357 &312756& ${305926}$ & 305926 & 305926 & 305926 & 3.75 & 4.80
\\
NL16 & 334940 & 359720 &355486& 351250 & ${346212}$ & 346212 & 346212 & 3.37 & 3.76
\\
NL12 & 132720 & 144744 &146072& 139394 & ${139316}$ & 139316 & 139316 & 4.97 & 3.75\\
Super12 & 551580 & 612583 &613999& ${586538}$ & 586538 & 586538 & 586538 & 6.34 & 4.25
\\
Brazil24 & 620574 & 655235 &668236& 642251 & ${638006}$ & 638006 & 638006 & 2.81 & 2.63\\
\end{tabular}
\caption{Experimental results for even $n/2$ with an average improvement of 2.86\%}
\label{experimentresult1}
\end{table}
\medskip
\noindent
\textbf{Results of odd $n/2$:}
For odd $n/2$, we compare our results with the best-known results in Table~\ref{experimentresult2}.
Note that the previously known results in the column `\emph{Previous Results}' are now from another reference~\cite{thielen2012approximation}.
\begin{table}[ht]
\footnotesize
\centering
\begin{tabular}{c|ccccccccc}
\hline
Data & Lower & Previous &Initial& 1 & 10 & 50 & 300 & Our & Improvement\\
Set & Bounds & Results &Results& Round & Rounds & Rounds & Rounds & Gap & Ratio\\
\hline
Galaxy38 & 244848 & 274672 &268545& 256430 & 255678 & ${255128}$ & 255128 & 4.20 & 7.12\\
Galaxy34 & 173312 & 192317 &188114& 180977 & 180896 & ${180665}$ & 180665 & 4.24 & 6.06\\
Galaxy30 & 113818 & 124011 &123841& 119524 & 119339 & 119122 & ${119076}$ & 4.62 & 3.98\\
Galaxy26 & 68826 & 77082 &75231& 73108 & 72944 & 72693 & ${72639}$ & 5.54 & 5.76\\
Galaxy22 & 40528 & 46451 &45156& 43681 & 43545 & 43478 & ${43389}$ & 7.06 & 6.59\\
Galaxy18 & 23774 & 27967 &27436& 26189 & ${26020}$ & 26020 & 26020 & 9.45 & 6.96\\
Galaxy14 & 12950 & 15642 &15070& 14540 & 14507 & ${14465}$ & 14465 & 11.70 & 7.52\\
Galaxy10 & 5280 & 6579 &6153& 5917 & ${5915}$ & 5915 & 5915 & 12.03 & 10.09
\\
NFL30 & 951608 & 1081969 &1051675& 1008487 & 1005000 & 1002665 & ${1001245}$ & 5.22 & 7.46\\
NFL26 & 669782 & 779895 &742356& 715563 & 715563 & ${714675}$ & 714675 & 6.70 & 8.36\\
NFL22 & 504512 & 600822 &584624& 548702 & 545791 & ${545142}$ & 545142 & 8.05 & 9.27\\
NFL18 & 361204 & 439152 &418148& 400390 & 398140 & ${397539}$ & 397539 & 10.06 & 9.48\\
NL14 & 238796 & 296403 &281854& 269959 & ${266746}$ & 266746 & 266746 & 11.70 & 10.01\\
NL10 & 70866 & 90254 &83500& 81107 & 80471 & ${80435}$ & 80435 & 13.50 & 10.88\\
Super14 & 823778 & 1087749 &1025167& ${920925}$ & 920925 & 920925 & 920925 & 11.79 & 15.34\\
Super10 & 392774 & 579862 &529668& 503275 & ${500664}$ & 500664 & 500664 & 27.47 & 13.66\\
\end{tabular}
\caption{Experimental results for odd $n/2$ with an average improvement of 8.65\%}
\label{experimentresult2}
\end{table}
From Tables~\ref{experimentresult1} and \ref{experimentresult2}, we can see that our algorithm improves all the 17 instances of even $n/2$ and the 16 instances of odd $n/2$.
For one round, the average improvement on the 17 instances of even $n/2$ is $2.49\%$, and the average improvement on the 16 instances of odd $n/2$ is $8.18\%$.
For 10 rounds, the improvements will be $2.81\%$ and $8.51\%$, respectively. For 50 rounds, the improvements will be $2.85\%$ and $8.63\%$, respectively.
If we run more rounds, the improvement will be very limited, and there is almost no improvement after 300 rounds.
Another important issue is the running time of the algorithms. Indeed, our algorithms are very efficient.
Our algorithms are coded using C-Free 5.0, on a standard desktop computer with a 3.20GHz AMD Athlon 200GE CPU and 8 GB RAM.
Under the above setting, for one round, all the 33 instances can be solved together within 1 second. If we run $x$ rounds, the running time will be less than $x$ seconds.
Considering that the algorithms are already very fast, we did not specifically optimize the code.
The code of our algorithms can be found at \url{https://github.com/JingyangZhao/TTP-2}.
\section{Conclusion}
We design two schedules that generate feasible solutions to TTP-2 with $n\equiv 0 \pmod 4$ and $n\equiv 2 \pmod 4$ separately, which guarantee a total traveling distance of at most $(1+{\frac{3}{n}}-{\frac{10}{n(n-2)}})$ times the optimal for the former case and $(1+{\frac{5}{n}}-{\frac{10}{n(n-2)}})$ times the optimal for the latter case. Both improve the previous best approximation ratios.
For the sake of analysis, we adopt some randomized methods. The randomized algorithms can be derandomized efficiently with an extra running-time factor of $O(n^4)$. It is possible to improve it to $O(n^3)$ by using more detailed analysis (as we argued this for the case of even $n/2$ in \cite{DBLP:conf/cocoon/ZhaoX21}).
In addition to theoretical improvements, our algorithms are also very practical.
In the experiment, our schedules beat the best-known solutions for all instances in the well-known benchmark of TTP, with an average improvement of $2.86\%$ for even $n/2$ and an average improvement of $8.65\%$ for odd $n/2$. The improvements in both theory and practice are significant.
For further study, it should be interesting to extend the construction techniques and analyses in this paper to more TTP-related problems, for example, TTP-$k$ with $k\geq 3$, the single round-robin version of TTP-2 (STTP-2) \cite{imahori20211+}, the linear distance relaxation of TTP-$k$ (LDTTP-$k$)~\cite{DBLP:journals/jair/HoshinoK12}, and so on.
\section*{Acknowledgments}
The work is supported by the National Natural Science Foundation of China, under grant 61972070.
\bibliographystyle{plain}
\section{Introduction}
The Traveling Tournament Problem (TTP), first systematically introduced in~\cite{easton2001traveling}, is a hard but interesting sports scheduling problem inspired by Major League Baseball.
This problem is to find a double round-robin tournament satisfying several constraints that minimizes the total distances traveled by all participant teams.
There are $n$ participating teams in the tournament, where $n$ is always even. Each team should play $2(n-1)$ games in $2(n-1)$ consecutive days. Since each team can only play one game on each day, there are exact $n/2$ games scheduled on each day.
There are exact two games between any pair of teams,
where one game is held at the home venue of one team and the other one is held at the home venue of the other team.
The two games between the same pair of teams could not be scheduled in two consecutive days.
These are the constraints for TTP. We can see that it is not easy to construct a feasible schedule.
Now we need to find an optimal schedule that minimizes the total traveling distances by all the $n$ teams.
A well-known variant of TTP is TTP-$k$, which has one more constraint:
each team is allowed to take at most $k$ consecutive home or away games.
If $k$ is very large, say $k=n-1$, then this constraint loses its meaning and the problem becomes TTP again. In this case, each team can schedule its travel to be as short as in the traveling salesman problem. On the other hand,
in a sports schedule, it is generally believed that home stands and road trips should alternate as regularly as possible for each team~\cite{campbell1976minimum,thielen2012approximation}.
The smaller the value of $k$, the more frequently teams have to return to their homes.
TTP and its variants have been extensively studied in the literature~\cite{kendall2010scheduling,rasmussen2008round,thielen2012approximation,DBLP:conf/mfcs/XiaoK16}.
\subsection{Related Work}
In this paper, we will focus on TTP-2. We mainly survey the results on TTP-$k$.
For $k=1$, TTP-1 is trivial and there is no feasible schedule~\cite{de1988some}.
But when $k\geq 2$, the problem suddenly becomes very hard, and it is not easy to find even a simple feasible schedule. No good
brute-force algorithm with single-exponential running time has been found yet.
In the online benchmark \cite{trick2007challenge}, most instances with more than $10$ teams are still unsolved completely even by using high-performance machines.
The NP-hardness of
TTP-$k$ with $k=3$ or $k=n-1$ has been proved \cite{bhattacharyya2016complexity,thielen2011complexity}.
Although the hardness of other cases has not been theoretically proved, most people believe TTP-$k$ with $k\geq 2$ is very hard.
In the literature, there is a large number of contributions on approximation algorithms~\cite{yamaguchi2009improved,imahori2010approximation,miyashiro2012approximation,westphal2014,hoshino2013approximation,thielen2012approximation,DBLP:conf/mfcs/XiaoK16} and heuristic algorithms~\cite{easton2003solving,lim2006simulated,anagnostopoulos2006simulated,di2007composite,goerigk2014solving}.
In terms of approximation algorithms, most results are based on the assumption that the distance holds the symmetry and triangle inequality properties. This is natural and practical in the sports schedule.
For TTP or TTP-$k$ with $k\geq n-1$, Westphal and Noparlik \cite{westphal2014} proved an approximation ratio of 5.875 and Imahori \emph{et al.} \cite{imahori2010approximation} proved an approximation ratio of 2.75 at the same time.
For TTP-3, the current approximation ratio is $5/3+O(1/n)$~\cite{yamaguchi2009improved}.
The first record of TTP-2 seems from the schedule of a basketball conference of ten teams
in~\cite{campbell1976minimum}. This paper did not discuss the approximation ratio.
In fact, any feasible schedule for TTP-2
is a 2-approximation solution~\cite{thielen2012approximation}.
Although any feasible schedule thus has a bounded performance guarantee, no simple construction of feasible schedules is known.
In the literature, all known algorithms for TTP-2 are different for $n/2$ being even and odd. This may be caused by different structural properties. One significant contribution to TTP-2 was done by Thielen and Westphal~\cite{thielen2012approximation}.
They proposed a $(3/2+O(1/n))$-approximation algorithm for $n/2$ being odd and a $(1+16/n)$-approximation algorithm for $n/2$ being even.
Now the approximation ratio was improved to $(1+\frac{12}{n}+\frac{8}{n(n-2)})$ for odd $n/2$~\cite{DBLP:conf/ijcai/ZhaoX21} and to $(1+\frac{4}{n}+\frac{4}{n(n-2)})$ for even $n/2$~\cite{DBLP:conf/mfcs/XiaoK16}.
\subsection{Our Results}
In this paper, we design an effective algorithm for TTP-2 with $n/2$ being even with an approximation ratio $(1+\frac{3}{n}-\frac{6}{n(n-2)})$, improving the ratio from $(1+\frac{4}{n}+\Theta(\frac{1}{n(n-2)}))$ to
$(1+\frac{3}{n}-\Theta(\frac{1}{n(n-2)}))$. Now the ratio is small and improvement becomes harder and harder.
Our major algorithm is based on packing minimum perfect matching. We first find a minimum perfect matching in the distance graph, then pair the teams according to the matching, and finally construct a feasible schedule based on the paired teams (called super-teams).
Our algorithm is also easy to implement and runs fast.
Experiments show that our results beat all previously-known solutions on the 17 tested instances in~\cite{DBLP:conf/mfcs/XiaoK16} with an average improvement of $2.10\%$.
\section{Preliminaries}\label{sec_pre}
We will always use $n$ to denote the number of teams and let $m=n/2$, where $n$ is an even number.
We also use $\{t_1, t_2, \dots, t_n\}$ to denote the set of the $n$ teams.
A sports scheduling on $n$ teams is \emph{feasible} if it holds the following properties.
\begin{itemize}
\item \emph{Fixed-game-value}: Each team plays two games with each of the other $n-1$ teams, one at its home venue and one at its opponent's home venue.
\item \emph{Fixed-game-time}: All the games are scheduled in $2(n-1)$ consecutive days and each team plays exactly one game in each of the $2(n-1)$ days.
\item \emph{Direct-traveling}: All teams are initially at home before any game begins, all teams will come back home after all games, and a team travels directly from its game venue in the $i$th day to its game venue in the $(i+1)$th day.
\item \emph{No-repeat}: No two teams play against each other on two consecutive days.
\item \emph{Bounded-by-$k$}: The number of consecutive home/away games for any team is at most $k$.
\end{itemize}
The TTP-$k$ problem is to find a feasible schedule minimizing the total traveling distance of all the $n$ teams.
The input of TTP-$k$ contains an $n \times n$ distance matrix $D$ that indicates the distance between each pair of teams.
The distance from the home of team $i$ to the home of team $j$ is denoted by $D_{i,j}$.
We also assume that $D$ satisfies the symmetry and triangle inequality properties, i.e., $D_{i,j}=D_{j,i}$ and $D_{i,j} \leq D_{i,h} + D_{h,j}$ for all $i,j,h$. We also let $D_{i,i}=0$ for each $i$.
We will use $G$ to denote an edge-weighted complete graph on $n$ vertices representing the $n$ teams.
The weight of the edge between two vertices $t_i$ and $t_j$ is $D_{i,j}$, the distance from the home of $t_i$ to the home of $t_j$.
We also use $D_i$ to denote the weight sum of all edges incident on $t_i$ in $G$, i.e., $D_i=\sum_{j=1}^n D_{i,j}$.
The sum of all edge weights of $G$ is denoted by $D_G$.
We let $M$ denote a minimum weight perfect matching in $G$. The weight sum of all edges in $M$ is denoted by $D_M$.
We may consider the endpoint pair of each edge in $M$ as a \emph{super-team}. We use $H$ to denote the complete graph on the $m$ vertices representing the $m$ super-teams. The weight of the edge between two super-teams $u_i$ and $u_j$, denoted by $D(u_i,u_j)$, is the sum of the weight of the four edges in $G$ between one team in $u_i$ and one team in $u_j$, i.e., $D(u_i, u_j)=\sum_{t_{i'}\in u_i \& t_{j'}\in u_j}D_{i',j'}$. We also let $D(u_i,u_i)=0$ for any $i$.
We give an illustration of the graphs $G$ and $H$ in Figure~\ref{fig:fig001}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.6]{001.pdf}
\caption{An illustration of graphs $G$ and $H$, where the four dark lines form a minimum perfect matching $M$ in $G$}
\label{fig:fig001}
\end{figure}
The sum of all edge weights of $H$ is denoted by $D_H$. It holds that
\begin{eqnarray} \label{eqn_GH}
D_H=D_G-D_M.
\end{eqnarray}
\subsection{Independent lower bound and extra cost}
The \emph{independent lower bound} for TTP-2 was firstly introduced by Campbell and Chen~\cite{campbell1976minimum}.
It has become a frequently used lower bound.
The basic idea of the independent lower bound is to obtain a lower bound $LB_i$ on the traveling distance of a single team $t_i$ independently without considering the feasibility of other teams.
The road of a team $t_i$ in TTP-$2$, starting at its home venue and coming back home after all games, is called
an \emph{itinerary} of the team. The itinerary of $t_i$ is also regarded as a graph on the $n$ teams,
which is called the \emph{itinerary graph} of $t_i$.
In an itinerary graph of $t_i$, the degree of every vertex except $t_i$ is 2, and the degree of $t_i$ is greater than or equal to $n$: team $t_i$ visits each other venue exactly once and each road trip covers at most two venues, so there are at least $\lceil (n-1)/2\rceil=n/2$ road trips, each contributing $2$ to the degree of $t_i$.
Furthermore, for any other team $t_j$, there is at least one edge between $t_i$ and $t_j$: since $t_i$ visits at most two teams on each road trip, team $t_i$ either travels directly from its home to team $t_j$ or returns directly to its home after visiting team $t_j$. We decompose the itinerary graph of $t_i$ into two parts: a spanning star centered at $t_i$ (a spanning tree in which only vertex $t_i$ may have degree $>1$) and the forest formed by the remaining edges. Note that in the forest, only $t_i$ may be a vertex of degree $\geq 2$ and all other vertices are degree-1 vertices. See Figure~\ref{fig:fig002} for illustrations of the itinerary graphs.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.85]{002.pdf}
\caption{The itinerary graph of $t_i$, where the light edges form a spanning star and the dark edges form the remaining forest. In the right example (b), the remaining forest is a perfect matching of $G$}
\label{fig:fig002}
\end{figure}
For different itineraries of $t_i$, the spanning star is fixed and only the remaining forest may be different.
The total distance of the spanning star is $\sum_{j\neq i} D_{i,j}=D_i$. Next, we show an upper and lower bound on the total distance of the remaining forest. For each edge between two vertices $t_{j_1}$ and $t_{j_2}$ ($j_1,j_2\neq i$), we have that $D_{j_1,j_2}\leq D_{i,j_1}+D_{i,j_2}$ by the triangle inequality property. Thus, we know that the total distance of the remaining forest is at most the total distance of the spanning star. Therefore, the distance of any feasible itinerary of $t_i$ is at most $2D_i$. This also implies that any feasible solution to TTP-2 is a 2-approximation solution.
On the other hand, the distance of the remaining forest is at least as that of a minimum perfect matching of $G$ by the triangle inequality.
Recall that we use $M$ to denote a minimum perfect matching of $G$. Thus, we have
a lower bound $LB_i$ for each team $t_i$:
\begin{eqnarray} \label{eqn_lower1}
LB_i=D_i+D_M.
\end{eqnarray}
The itinerary of $t_i$ to achieve $LB_i$ is called the \emph{optimal itinerary}.
The \emph{independent lower bound} for TTP-2 is the traveling distance such that all teams reach their optimal itineraries, which is denoted as
\begin{eqnarray} \label{eqn_lowerbound}
LB=\sum_{i=1}^n LB_i =\sum_{i=1}^n (D_i +D_M)=2D_G+nD_M.
\end{eqnarray}
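For concreteness, the following sketch (ours, for illustration only) computes $D_M$, $D_G$, and the independent lower bound from a distance matrix with $D_{i,i}=0$; a brute-force matching is used in place of an $O(n^3)$ blossom algorithm, so it is only meant for small $n$.
\begin{verbatim}
from itertools import combinations

def min_perfect_matching_weight(D):
    """Brute-force minimum-weight perfect matching (fine for small n)."""
    def best(rest):
        if not rest:
            return 0.0
        i = rest[0]
        return min(D[i][j] + best(tuple(t for t in rest[1:] if t != j))
                   for j in rest[1:])
    return best(tuple(range(len(D))))

def independent_lower_bound(D):
    n = len(D)
    D_M = min_perfect_matching_weight(D)
    D_G = sum(D[i][j] for i, j in combinations(range(n), 2))
    LB_i = [sum(D[i]) + D_M for i in range(n)]   # D_i + D_M for each team
    assert abs(sum(LB_i) - (2 * D_G + n * D_M)) < 1e-9
    return sum(LB_i)
\end{verbatim}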
For any team, it is possible to reach its optimal itinerary. However, it is impossible for all teams to reach their optimal itineraries synchronously in a feasible schedule~\cite{thielen2012approximation}, even for $n=4$. So the independent lower bound for TTP-2 is not achievable.
To analyze the quality of a schedule of the tournament, we will compare the itinerary of each team with the optimal itinerary.
The different distance is called the \emph{extra cost}. Sometimes it is not convenient to compare the whole itinerary directly.
We may consider the extra cost for a subpart of the itinerary.
We may split an itinerary into several trips and each time we compare some trips.
A \emph{road trip} in an itinerary of team $t_i$ is a simple cycle starting and ending at $t_i$.
So an itinerary consists of several road trips. For TTP-2, each road trip is a triangle or a cycle on two vertices.
Let $L$ and $L'$ be two itineraries of team $t_i$, $L_s$ be a sub itinerary of $L$ consisting of several road trips in $L$, and
$L'_s$ be a sub itinerary of $L'$ consisting of several road trips in $L'$.
We say that the sub itineraries $L_s$ and $L'_s$ are \emph{coincident} if they visit the same set of teams.
We will only compare a sub itinerary of our schedule with a coincident sub itinerary of the optimal itinerary and consider the extra cost between them.
\section{Constructing the Schedule}
We will introduce a method to construct a feasible tournament first.
Our construction consists of two parts. First, we arrange \emph{super-games} between \emph{super-teams}, where each super-team contains
a pair of normal teams. Then we extend super-games to normal games between normal teams.
To make the itinerary as similar as the optimal itinerary, we take each team pair in the minimum perfect matching $M$ of $G$ as a \emph{super-team}. There are $n$ normal teams and then there are $m=n/2$ super-teams. We denote the set of super-teams as $\{u_1, u_2, \dots, u_{m}\}$ and relabel the $n$ teams such that $u_i=\{t_{2i-1},t_{2i}\}$ for each $i$.
Each super-team will attend $m-1$ super-games in $m-1$ time slots.
Each super-game on the first $m-2$ time slots will be extended to eight normal games between normal teams on four days, and each super-game on the last time slot will be extended to twelve normal games between normal teams on six days. So each normal team $t_i$ will attend $4\times (m-2)+6=4m-2=2n-2$ games.
This is the number of games each team $t_i$ should attend in TTP-2.
In our algorithm, the case of $n=4$ is easy, and hence we assume here that $n\geq 8$.
We construct the schedule for super-teams from the first time slot to the last time slot $m-1$.
In each of the $m-1$ time slots, we have $\frac{m}{2}$ super-games.
In fact, our schedules in the first time slot and in the last time slot are different from the schedules in the middle time slots.
For the first time slot, the $\frac{m}{2}$ super-games are arranged as shown in Figure~\ref{fig:figa}. All of these super-games are called \emph{normal super-games}. Each super-game is represented by a directed edge, the information of which will be used to extend super-games to normal games between normal teams.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.45]{a.pdf}
\caption{The super-game schedule on the first time slot for an instance with $m=10$}
\label{fig:figa}
\end{figure}
In Figure~\ref{fig:figa}, the last super-team $u_m$ is denoted as a dark node, and all other super-teams
$u_1, \dots, u_{m-1}$ are denoted as white nodes.
The white nodes form a cycle, and we may change the positions of the white nodes along this cycle in the other time slots.
In the second time slot, we keep the position of $u_m$ and change the positions of the white super-teams in the cycle by moving one position in the clockwise direction, and we also change the direction of each edge except for the leftmost edge incident on $u_m$. This edge will be replaced by a double-arrow edge. The super-game including $u_m$ is also called a \emph{left super-game} in the middle $m-3$ time slots. So in the second time slot, there are $\frac{m}{2}-1$ normal super-games and one left super-game.
An illustration of the schedule in the second time slot is shown in Figure~\ref{fig:figb}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.45]{b.pdf}
\caption{The super-game schedule on the second time slot for an instance with $m=10$}
\label{fig:figb}
\end{figure}
In the third time slot, there are also $\frac{m}{2}-1$ normal super-games and one left super-game.
We again change the positions of the white super-teams in the cycle by moving one position in the clockwise direction, while the direction of each edge is reversed. The position of the dark node always stays the same.
An illustration of the schedule in the third time slot is shown in Figure~\ref{fig:figc}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.45]{c.pdf}
\caption{The super-game schedule on the third time slot for an instance with $m=10$}
\label{fig:figc}
\end{figure}
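For illustration, the following sketch (ours) generates one standard rotation of this kind: super-team $u_m$ stays fixed while $u_1,\dots,u_{m-1}$ move around the cycle, so that every pair of super-teams meets exactly once over the $m-1$ time slots. The exact slot-by-slot labeling and the edge orientations used by our schedule are the ones shown in Figures~\ref{fig:figa}--\ref{fig:figc}; the sketch only illustrates the rotation idea.
\begin{verbatim}
def super_game_slots(m):
    """One standard rotation scheme: u_m is fixed and u_1,...,u_{m-1}
    rotate along a cycle, giving m-1 time slots with m/2 super-games
    each (every pair of super-teams meets exactly once)."""
    others = list(range(1, m))          # u_1 ... u_{m-1}
    slots = []
    for q in range(m - 1):
        ring = others[q:] + others[:q]  # rotate by one position per slot
        pairs = [(m, ring[0])]          # the super-game involving u_m
        for k in range(1, m // 2):
            pairs.append((ring[k], ring[m - 1 - k]))
        slots.append(pairs)
    return slots
\end{verbatim}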
The schedules for the other middle slots are derived analogously. Before we introduce the super-games in the last time slot $m-1$,
we first explain how to extend the super-games in the first $m-2$ time slots to normal games.
In these time slots, we have two different kinds of super-games: normal super-games and left super-games. We first consider normal super-games.
\textbf{Case~1. Normal super-games}:
Each normal super-game will be extended to eight normal games on four days.
Assume that in a normal super-game, super-team $u_{i}$ plays against the super-team $u_{j}$ on time slot $q$ ($1\leq i,j\leq m$ and $1\leq q\leq m-2$). Recall that $u_{i}$ represents normal teams \{$t_{2i-1}, t_{2i}$\} and $u_{j}$ represents normal teams \{$t_{2j-1}, t_{2j}$\}. The super-game will be extended to eight normal games on four corresponding days from $4q-3$ to $4q$, as shown in Figure~\ref{fig:fig003}. A directed edge from team $t_{i'}$ to team $t_{i''}$ means $t_{i'}$ plays against $t_{i''}$ at the home venue of $t_{i''}$.
Note that if there is a directed edge from $u_j$ to $u_i$, then the direction of all the edges in Figure~\ref{fig:fig003} should be reversed.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.75]{003.pdf}
\caption{Extending normal super-games}
\label{fig:fig003}
\end{figure}
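In code, the eight normal games of a normal super-game can be written down directly; the following sketch (ours) reproduces the pattern that can be read off the normal super-games in Table~\ref{ansexample}: the first two days are hosted by one super-team and the last two by the other, and reversing the arc in Figure~\ref{fig:fig003} swaps the two roles.
\begin{verbatim}
def extend_normal_super_game(u_i, u_j, first_day):
    """Eight games of a normal super-game on four consecutive days for
    super-teams u_i = (a, b) and u_j = (c, d), as (day, home, away)
    triples.  The first two days are hosted by u_i and the last two by
    u_j; reversing the arc between the super-teams swaps the roles."""
    (a, b), (c, d) = u_i, u_j
    return [
        (first_day,     a, c), (first_day,     b, d),
        (first_day + 1, a, d), (first_day + 1, b, c),
        (first_day + 2, c, a), (first_day + 2, d, b),
        (first_day + 3, d, a), (first_day + 3, c, b),
    ]
\end{verbatim}
For example, \texttt{extend\_normal\_super\_game((1, 2), (3, 4), 1)} reproduces the first four days of Table~\ref{ansexample} for teams $t_1,\dots,t_4$.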
\textbf{Case~2. Left super-games}:
Assume that in a left super-game, super-team $u_{m}$ plays against super-team $u_{i}$ on time slot $q$ ($2\leq i\leq m-2$ and $2\leq q\leq m-2$). Recall that $u_{m}$ represents normal teams \{$t_{2m-1}, t_{2m}$\} and $u_{i}$ represents normal teams \{$t_{2i-1}, t_{2i}$\}. The super-game will be extended to eight normal games on four corresponding days from $4q-3$ to $4q$, as shown in Figure~\ref{fig:fig004} for even time slot $q$. For odd time slot $q$, the direction of edges in the figure will be reversed.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.8]{004.pdf}
\caption{Extending left super-games}
\label{fig:fig004}
\end{figure}
The first $m-2$ time slots will be extended to $4(m-2)=2n-8$ days according to the above rules. Each normal team will have six remaining games,
which will be corresponding to the super-games on the last time slot.
We will call a super-game on the last time slot a \emph{last super-game}.
Figure~\ref{fig:fig005} shows an example of the schedule on the last time slot.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.45]{005.pdf}
\caption{The super-game schedule on the last time slot for an instance with $m=10$}
\label{fig:fig005}
\end{figure}
\textbf{Case~3. Last super-games}:
Next, we extend a last super-game into twelve normal games on six days.
Assume that on the last time slot $q=m-1$, super-team $u_i$ plays against super-team $u_j$ ($1\leq i,j\leq m$). Recall that $u_{i}$ represents normal teams \{$t_{2i-1}, t_{2i}$\} and $u_{j}$ represents normal teams \{$t_{2j-1}, t_{2j}$\}. The last super-game will be extended to twelve normal games on six corresponding days from $4q-3$ to $4q+2$, as shown in Figure~\ref{fig:fig006}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.75]{006.pdf}
\caption{Extending last super-games}
\label{fig:fig006}
\end{figure}
The above is the main part of the schedule. Now, we give an example of the schedule for $n=8$ teams constructed by using the above rules. In Table~\ref{ansexample}, the $i$th row indicates team $t_i$, the $j$th column indicates the $j$th day in the double round-robin,
item $+t_{x}$ on the $i$th row and $j$th column means that team $t_i$ plays against team $t_{x}$ on the $j$th day at the home venue of the opposite team $t_{x}$, and item $-t_{x}$ on the $i$th row and $j$th column means that team $t_i$ plays against team $t_{x}$ on the $j$th day at its home venue.
\begin{table}[ht]
\centering
\begin{tabular}{ m{0.3cm}<{\centering}|*{14}{m{0.70cm}<{\centering}} }
& 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14\\
\hline
$t_{1}$ & $-t_{3}$ & $-t_{4}$ & $+t_{3}$ & $+t_{4}$ & $-t_{5}$ & $-t_{6}$ & $+t_{5}$ & $+t_{6}$ &
$-t_{7}$ & $+t_{2}$ & $-t_{8}$ & $+t_{7}$ & $+t_{8}$ & $-t_{2}$ \\
$t_{2}$ & $-t_{4}$ & $-t_{3}$ & $+t_{4}$ & $+t_{3}$ & $-t_{6}$ & $-t_{5}$ & $+t_{6}$ & $+t_{5}$ &
$-t_{8}$ & $-t_{1}$ & $+t_{7}$ & $+t_{8}$ & $-t_{7}$ & $+t_{1}$ \\
$t_{3}$ & $+t_{1}$ & $+t_{2}$ & $-t_{1}$ & $-t_{2}$ & $+t_{7}$ & $-t_{8}$ & $-t_{7}$ & $+t_{8}$ &
$-t_{5}$ & $+t_{4}$ & $-t_{6}$ & $+t_{5}$ & $+t_{6}$ & $-t_{4}$ \\
$t_{4}$ & $+t_{2}$ & $+t_{1}$ & $-t_{2}$ & $-t_{1}$ & $+t_{8}$ & $-t_{7}$ & $-t_{8}$ & $+t_{7}$ &
$-t_{6}$ & $-t_{3}$ & $+t_{5}$ & $+t_{6}$ & $-t_{5}$ & $+t_{3}$ \\
$t_{5}$ & $+t_{7}$ & $+t_{8}$ & $-t_{7}$ & $-t_{8}$ & $+t_{1}$ & $+t_{2}$ & $-t_{1}$ & $-t_{2}$ &
$+t_{3}$ & $+t_{6}$ & $-t_{4}$ & $-t_{3}$ & $+t_{4}$ & $-t_{6}$ \\
$t_{6}$ & $+t_{8}$ & $+t_{7}$ & $-t_{8}$ & $-t_{7}$ & $+t_{2}$ & $+t_{1}$ & $-t_{2}$ & $-t_{1}$ &
$+t_{4}$ & $-t_{5}$ & $+t_{3}$ & $-t_{4}$ & $-t_{3}$ & $+t_{5}$ \\
$t_{7}$ & $-t_{5}$ & $-t_{6}$ & $+t_{5}$ & $+t_{6}$ & $-t_{3}$ & $+t_{4}$ & $+t_{3}$ & $-t_{4}$ &
$+t_{1}$ & $+t_{8}$ & $-t_{2}$ & $-t_{1}$ & $+t_{2}$ & $-t_{8}$ \\
$t_{8}$ & $-t_{6}$ & $-t_{5}$ & $+t_{6}$ & $+t_{5}$ & $-t_{4}$ & $+t_{3}$ & $+t_{4}$ & $-t_{3}$ &
$+t_{2}$ & $-t_{7}$ & $+t_{1}$ & $-t_{2}$ & $-t_{1}$ & $+t_{7}$ \\
\end{tabular}
\caption{The schedule for $n=8$ teams, where each row corresponds to a team and each column to a day;
`$-$' means that the team of the corresponding row plays at its home venue, and `$+$' means that it plays at the opposite team's home venue}
\label{ansexample}
\end{table}
From Table~\ref{ansexample}, we can see that in each row there are at most two consecutive `$+$'/`$-$' symbols, and hence this is a feasible schedule.
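Such feasibility checks are also easy to automate. The sketch below (ours, for illustration) verifies the double round-robin, no-repeat, and bounded-by-$k$ properties for a schedule encoded as \texttt{schedule[i][d] = (opponent, 'H' or 'A')}, i.e., the information contained in a table such as Table~\ref{ansexample}.
\begin{verbatim}
def check_feasible(schedule, n, k=2):
    days = 2 * (n - 1)
    for i in range(n):
        assert len(schedule[i]) == days
        # each opponent is met exactly twice, once home and once away
        for j in range(n):
            if j != i:
                venues = sorted(v for (opp, v) in schedule[i] if opp == j)
                assert venues == ['A', 'H']
        # no-repeat: different opponent on consecutive days
        for d in range(days - 1):
            assert schedule[i][d][0] != schedule[i][d + 1][0]
        # bounded-by-k: at most k consecutive home or away games
        run = 1
        for d in range(1, days):
            run = run + 1 if schedule[i][d][1] == schedule[i][d - 1][1] else 1
            assert run <= k
    # the two sides of every game must agree
    for i in range(n):
        for d in range(days):
            j, v = schedule[i][d]
            assert schedule[j][d] == (i, 'A' if v == 'H' else 'H')
    return True
\end{verbatim}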
\begin{theorem}
For TTP-$2$ with $n$ teams such that $n\equiv 0 \pmod 4$, the above construction can generate a feasible schedule.
\end{theorem}
\begin{proof}
According to the definition of feasible schedules, we only need to prove the five properties in the definition.
The first two properties of fixed-game-value and fixed-game-time are easy to see from the construction.
Each super-game on the first $m-2$ time slot will be extended to eight normal games on four days and each team participates in four games on four days. Each super-game on the last time slot will be extended to twelve normal games on six days and each team participates in six games on six days. So each team plays $2(n-1)$ games on $2(n-1)$ different days. It is also easy to see from the construction that each team pair plays exactly two games, one at the home venue of each team.
We also assume that the itineraries obey the direct-traveling property, so this property does not need to be proved.
Next, we focus on the bounded-by-$2$ property. We will use `$H$' and `$A$' to denote a home game and an away game, respectively. We will also let $\overline{H}=A$ and $\overline{A}=H$.
We first look at the games in the first $2n-8$ days. For the two teams in $u_m$, the 4 games in the first time slot will be $HHAA$, in an even time slot they will be $HAAH$ (see Figure~\ref{fig:fig004}), and in an odd time slot (not including the first time slot) they will be $AHHA$. So two consecutive time slots can be joined well.
Next, we consider a team $t_i$ in $u_j$ $(j\in \{1,2,\dots,m-1\})$.
In the time slots for normal super-games, the 4 games for $t_i$ will be $AAHH$ if the arc between the two super-teams is out of $u_j$ and $\overline{AAHH}$ otherwise. In the time slots for left super-games, the 4 games will be $AHHA$ or $\overline{AHHA}$.
In the time slot after a left super-game, the 4 games will be $HHAA$ or $\overline{HHAA}$.
In the time slot before a left super-game, the 4 games will be $AAHH$ or $\overline{AAHH}$.
So two consecutive time slots join well, no matter whether they are two time slots of normal super-games, or one time slot of a normal super-game and one time slot of a left super-game.
Only the last 6 days on the last time slot have not been analyzed yet. For the sake of presentation,
we simply list out all the games on the last two time slots for each team.
There are 10 games for each team, 4 on the second to last time slot and 6 on the last time slot.
Let $A\in \{3,\dots, m-1\}$ and $H\in \{4,6,\dots, m-2\}\cup\{1\}$ such that super-team $u_A$ plays $AAHH$ and $u_H$ plays $HHAA$ on the penultimate time slot. Note that the forms of super-teams $u_2$ and $u_m$ are different because they play against each other in a left super-game on the penultimate time slot.
We also denote teams $\{t_{2i-1},t_{2i}\}$ in super-team $u_i$ by $\{t_{i_1},t_{i_2}\}$.
The last 10 games for all the teams are shown in Figure~\ref{fig:fig007}.
We can see that there are no three consecutive home games or away games. So the \emph{bounded-by-$2$} property holds.
All the properties are satisfied and then the schedule is a feasible schedule for TTP-2.
\end{proof}
\begin{figure}[h]
\centering
\includegraphics[scale=0.7]{007.pdf}
\caption{The last 10 games for the case of $n\equiv 0 \pmod 4$}
\label{fig:fig007}
\end{figure}
We have introduced a method to construct a feasible schedule. However, it is not our final schedule. We may still need to specify the order of some teams or super-teams to minimize the extra cost. We will introduce this when we analyze the approximation ratio.
\section{Approximation Quality of the Schedule}
\subsection{Analysis of the approximation ratio}
To show the quality of our schedule, we compare it with the independent lower bound. We will check the difference between our itinerary of each team $t_i$ and the optimal itinerary of $t_i$ and compute the extra cost.
As mentioned in the last paragraph of Section~\ref{sec_pre}, we will compare some sub itineraries of a team.
We will look at the sub itinerary of a team on the four or six days in a super-game, which is coincident with a sub itinerary of the optimal itinerary:
all teams stay at home before the first game in a super-game and return home after the last game in the super-game.
In our algorithm, there are three types of super-games: normal super-games, left super-games, and last super-games. We analyze the total extra cost of all normal teams caused by each type of super-games.
\begin{lemma}\label{extra}
Assume there is a super-game between super-teams $u_i$ and $u_j$ in our schedule.
\begin{enumerate}
\item [(a)] If the super-game is a normal super-game, then the extra cost of all normal teams in $u_i$ and $u_j$ is 0;
\item [(b)] If the super-game is a left or last super-game, then the extra cost of all normal teams in $u_i$ and $u_j$ is at most $D(u_i,u_j)$.
\end{enumerate}
\end{lemma}
\begin{proof}
From Figure~\ref{fig:fig003} we can see that in a normal super-game each of the four normal teams visits the home venues of the two normal teams in the opposite super-team in one road trip. So they have the same road trips as in their optimal itineraries.
The extra cost is 0. So (a) holds.
Next, we assume that the super-game is a left super-game and $u_i=u_m$. From Figure~\ref{fig:fig004}, we can see that the two teams $t_{n-1}$ and $t_{n}$ in the super-team $u_m$ play $AHHA$ in the four days (Recall that $A$ means an away game and $H$ means a home game), and the two teams
in the super-team $u_j$ play $HAAH$.
The two teams in $u_j$ will have the same road trip as that in the optimal itinerary and then the extra cost is 0.
We compare the road trips of the two teams $t_{n-1}$ and $t_{n}$ with their optimal road trips together. The difference is that
$$D_{n-1,2j-1}+D_{n-1,2j}+D_{n,2j-1}+D_{n,2j}-2D_{2j-1,2j}\leq D(u_m,u_j),$$
by the triangle inequality.
Last, we consider the case of a last super-game. We assume without loss of generality that on the first day of the six days, the games are held at the home venues of the teams in $u_i$.
From Figure~\ref{fig:fig006}, we can see that teams in $u_j$ do not have any extra cost and teams in $u_i$ have a total extra cost of
$$2D_{2i-1,2j}+D_{2i,2j-1}+D_{2i,2j}-D_{2i-1,2i}-2D_{2j-1,2j}\leq D(u_i,u_j),$$
by the triangle inequality.
\end{proof}
In our schedule, there are $\frac{m}{2}+(m-3)(\frac{m}{2}-1)$ normal super-games, which contribute 0 to the extra cost.
There are $m-3$ left super-games on the $m-3$ middle time slots. By Lemma~\ref{extra}, we know that the total extra cost is $E_1=\sum_{i=2}^{m-2}D(u_m,u_i)$.
There are $\frac{m}{2}$ last super-games on the last time slot. By Lemma~\ref{extra}, we know that the total extra cost is $E_2=\sum_{i=1}^{m/2}D(u_i,u_{m+1-i})$.
\begin{lemma}
The total extra cost of our schedule is at most
\[ E_1+E_2= \sum_{i=2}^{m-2}D(u_m,u_i)+\sum_{i=1}^{m/2}D(u_i,u_{m+1-i}).\]
\end{lemma}
Next, we will make $E_1$ and $E_2$ as small as possible by reordering the teams.
First, we consider $E_2$. The extra cost is the sum of the weight of edges $\{u_iu_{m+1-i}\}_{i=1}^{m/2}$ in $H$, which form a matching in $H$.
Our algorithm is to reorder $u_i$ such that $\{u_iu_{m+1-i}\}_{i=1}^{m/2}$ is a minimum perfect matching in $H$.
Note that $H$ is a complete graph on $m$ (even) vertices, and hence we can use an $O(m^3)$-time
algorithm to find a minimum perfect matching $M_H$ in $H$.
Our algorithm will reorder $u_i$ such that $\{u_iu_{m+1-i}\}_{i=1}^{m/2}=M_H$. For the cost of $M_H$, we have that
\begin{eqnarray} \label{M_H}
E_2=D_{M_H}\leq \frac{1}{m-1}D_H.
\end{eqnarray}
Second, we consider $E_1$. Our idea is to choose $u_m$ such that $\sum_{i=2}^{m-2}D(u_m,u_i)$ is minimized.
Note that once $u_m$ is determined, super-team $u_1$ is also determined by the matching $M_H$ ($u_mu_1$ should be an edge in $M_H$).
After determining $u_m$ and $u_1$ together, we still need to decide $u_{m-1}$.
We first let $u_m$ be the super-team such that $\sum_{i=2}^{m-1}D(u_m,u_i)$ is minimized (There are $m$ possible candidates for $u_m$). Thus, we have that
\[\sum_{i=2}^{m-1}D(u_m,u_i)\leq
\frac{2(D_H-D_{M_H})}{m}.
\]
Then we let $u_{m-1}$ be the super-team such that $D(u_m,u_{m-1})\geq D(u_m,u_i)$ for all $2\leq i \leq m-2$. Thus, we have that
\begin{eqnarray} \label{L}
E_1 = \sum_{i=2}^{m-2}D(u_m,u_i)\leq \sum_{i=2}^{m-1}D(u_m,u_i)\frac{m-3}{m-2}\leq \frac{2(m-3)(D_H-D_{M_H})}{m(m-2)}.
\end{eqnarray}
By (\ref{eqn_GH}), (\ref{eqn_lowerbound}), (\ref{M_H}) and (\ref{L}), we know that the total extra cost of our schedule is
\begin{eqnarray} \label{Even}
\begin{array}{*{20}l}
E_1+E_2&\leq& D_{M_H}+\frac{2(m-3)(D_H-D_{M_H})}{m(m-2)}\\
&=&(1-\frac{3}{m}+\frac{1}{m-2})D_{M_H}+(\frac{3}{m}-\frac{1}{m-2})D_H\\
&\leq&(\frac{3}{m}-\frac{3}{m(m-1)})D_H\\
&\leq&(\frac{3}{2m}-\frac{3}{2m(m-1)})LB=(\frac{3}{n}-\frac{6}{n(n-2)})LB.
\end{array}
\end{eqnarray}
Next, we analyze the running-time bound of our algorithm.
Our algorithm first uses $O(n^3)$ time to compute the minimum perfect matching $M$ and the minimum perfect matching $M_H$. It takes $O(n^2)$ time to determine $u_m$ and $u_{m-1}$ such that (\ref{M_H}) and (\ref{L}) hold, and the remaining construction of the schedule also uses $O(n^2)$ time. Thus, our algorithm runs in $O(n^3)$ time.
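The reordering step itself is short. The following sketch (ours, for illustration) takes the super-team distances $D(u_p,u_q)$ and the matching $M_H$ and produces a relabeling with the three properties used above: matched pairs occupy positions $(u_i,u_{m+1-i})$, $u_m$ minimizes its total distance to the super-teams other than its partner, and $u_{m-1}$ is the super-team farthest from $u_m$; the remaining pairs fill the remaining positions in any order.
\begin{verbatim}
def order_super_teams(Dsuper, matching_pairs):
    """Dsuper[p][q] = D(u_p, u_q); matching_pairs = M_H as index pairs.
    Returns the new labeling as a list [u_1, ..., u_m] (0-indexed)."""
    m = len(Dsuper)
    partner = {}
    for p, q in matching_pairs:
        partner[p], partner[q] = q, p
    def out_of_pair(p):
        return sum(Dsuper[p][q] for q in range(m) if q not in (p, partner[p]))
    u_m = min(range(m), key=out_of_pair)     # minimizes sum_{i=2..m-1} D(u_m,u_i)
    u_1 = partner[u_m]
    rest = [p for p in range(m) if p not in (u_m, u_1)]
    u_m1 = max(rest, key=lambda p: Dsuper[u_m][p])   # farthest from u_m
    u_2 = partner[u_m1]
    chosen = {u_m, u_1, u_m1, u_2}
    labels = [None] * m
    labels[0], labels[m - 1] = u_1, u_m
    labels[1], labels[m - 2] = u_2, u_m1
    pos = 2
    for p, q in matching_pairs:
        if p not in chosen and q not in chosen:
            labels[pos], labels[m - 1 - pos] = p, q
            pos += 1
    return labels
\end{verbatim}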
\begin{theorem}\label{result1}
For TTP-2 on $n$ teams where $n\geq 8$ and $n\equiv 0 \pmod 4$, a feasible schedule can be computed in $O(n^3)$ time such that the total traveling distance is at most $(1+{\frac{3}{n}}-{\frac{6}{n(n-2)}})$ times of the independent lower bound.
\end{theorem}
\subsection{The tightness of the analysis}
Next, we show that the analysis of our algorithm is tight, i.e., the approximation ratio in Theorem~\ref{result1} is the best possible for our algorithm. We give an example in which the ratio is attained.
In the example, the distance of each edge in the minimum perfect matching $M$ is 0 and the distance of any other edge in $G$ is 1.
We can see that the triangle inequality property still holds.
Let $E(G)$ denote the edge set of graph $G$. By (\ref{eqn_lowerbound}), we know that the independent lower bound of this instance is
\begin{eqnarray}\label{worst_case}
2D_G+nD_M=2(\left|E(G)\right|-\left|M\right|)=n(n-2).
\end{eqnarray}
In this instance, the extra costs of a normal super-game, left super-game and last super-game are 0, 4 and 4, respectively.
In our schedule, there are $m-3$ left super-games and $\frac{m}{2}$ last super-games in total.
Thus, the total extra cost of our schedule is $4\times(m-3)+4\times\frac{m}{2}=3n-12$. Thus, the ratio is
\begin{eqnarray}\label{worst_ratio}
1+\frac{3n-12}{n(n-2)}=1+\frac{3}{n}-\frac{6}{n(n-2)}.
\end{eqnarray}
This example only shows the ratio is tight for this algorithm. However, it is still possible that some other algorithms can achieve a better ratio.
\section{Experimental Results}
To test the performance of our schedule algorithm, we will implement it on well-known benchmark instances.
The above construction method guarantees a good approximation ratio. However, in the experiments we may still be able to get further improvements by some simple heuristic methods.
\subsection{Heuristics based on Local Search}
Above we have introduced a method to construct a feasible schedule for TTP-2. Based on one feasible schedule,
we may be able to get many feasible schedules by just changing the ordering of the teams.
There are total $n!$ permutations of the $n$ teams, each of which may lead to a feasible schedule.
In the above analysis, we choose the permutation such that we can get a good approximation ratio.
This is just for the purpose of the analysis. We do not guarantee this permutation is optimal.
Other permutations may lead to better results on each concrete instance.
However, the number of permutations is exponential and it is not effective to check all of them.
If we check all permutations, the running-time bound will increase by a factor of $n!$, which is not polynomial and hence not practical.
Our idea is to only consider the permutations obtained by swapping the indexes of two super-teams and by swapping the indexes of the two teams in the same super-team.
First, to check all possible swaps between two super-teams, we will have $O(m^2)$ loops, and the running-time bound will increase by a factor of $m^2$.
Second, for each last super-game between two super-teams, we consider the two orders of the two teams in each super-team and then we get four cases.
We directly compute the extra cost for the four cases and select the best one. There are $m/2$ last super-games and then we only have $O(m)$ additional time.
Note that we do not apply the second swapping for normal and left super-games since this operation will not get any improvement on them (this can be seen from the proof of Lemma~\ref{extra}).
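A simplified sketch of this local search is given below (ours, for illustration). Here \texttt{total\_cost} stands for a helper that builds the schedule of Section 3 for a given ordering of the teams and returns its total traveling distance; unlike our actual implementation, the sketch evaluates the intra-pair swaps through this helper rather than by the direct $O(m)$ computation described above, and it iterates the super-team swaps until no swap improves.
\begin{verbatim}
def local_search(perm, total_cost):
    """perm: ordering of the n teams (consecutive pairs = super-teams)."""
    perm = list(perm)
    m = len(perm) // 2
    best = total_cost(perm)
    improved = True
    while improved:                      # swaps of two super-teams
        improved = False
        for a in range(m):
            for b in range(a + 1, m):
                cand = list(perm)
                cand[2*a:2*a+2], cand[2*b:2*b+2] = \
                    perm[2*b:2*b+2], perm[2*a:2*a+2]
                c = total_cost(cand)
                if c < best:
                    perm, best, improved = cand, c, True
    for a in range(m):                   # swaps inside a super-team
        cand = list(perm)
        cand[2*a], cand[2*a+1] = perm[2*a+1], perm[2*a]
        c = total_cost(cand)
        if c < best:
            perm, best = cand, c
    return perm, best
\end{verbatim}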
\subsection{Applications to Benchmark Sets}
Our tested benchmark comes from~\cite{trick2007challenge}, which introduces 62 instances, most of them taken from the real world. There are 34 instances of $n$ teams with $n\geq 4$ and $n\equiv 0 \pmod 4$. Half of them are very small ($n\leq 8$) or very special (all teams are on a cycle or the distance between any two teams is 1) and they were not tested in previous papers.
So we only test the remaining 17 instances.
The results are shown in
Table~\ref{experimentresult}, where the column `\emph{ILB Values}' indicates the independent lower bounds, `\emph{Previous Results}' lists previously known results in~\cite{DBLP:conf/mfcs/XiaoK16}, `\emph{Before Swapping}' is the results obtained by our schedule algorithm without using the local search method of swapping, `\emph{After Swapping}' shows the results after swapping, `\emph{Our Gap}' is defined to be $\frac{After Swapping~-~ILB~Values}{ILB~Values}$ and `\emph{Improvement Ratio}' is defined as $\frac{Previous~Results~-~After Swapping}{Previous~Results}$.
\begin{table}[ht]
\centering
\begin{tabular}{ m{1.4cm}<{\centering} *{1}{|m{1.2cm}<{\centering}} *{1}{m{1.3cm}<{\centering}} *{1}{m{1.4cm}<{\centering}}*{1}{m{1.4cm}<{\centering}}*{1}{m{1.2cm}<{\centering}}*{1}{m{1.9cm}<{\centering}} }
\hline
Data & ILB & Previous & Before & After &Our & Improvement \\
Set & Values & Results & Swapping & Swapping &Gap(\%) & Ratio(\%)\\
\hline
\vspace{+1mm}
Galaxy40 & 298484 & 307469 & 306230 & 305714 & 2.42 & 0.57 \\
Galaxy36 & 205280 & 212821 & 211382 & 210845 & 2.71 & 0.93 \\
Galaxy32 & 139922 & 145445 & 144173 & 144050 & 2.95 & 0.96 \\
Galaxy28 & 89242 & 93235 & 92408 & 92291 & 3.42 & 1.01 \\
Galaxy24 & 53282 & 55883 & 55486 & 55418 & 4.01 & 0.83 \\
Galaxy20 & 30508 & 32530 & 32082 & 32067 & 5.11 & 1.42 \\
Galaxy16 & 17562 & 19040 & 18614 & 18599 & 5.90 & 2.32 \\
Galaxy12 & 8374 & 9490 & 9108 & 9045 & 8.01 & 4.69 \\
NFL32 & 1162798& 1211239& 1199619& 1198091& 3.04 & 1.09 \\
NFL28 & 771442 & 810310 & 798208 & 798168 & 3.46 & 1.50 \\
NFL24 & 573618 & 611441 & 598437 & 596872 & 4.05 & 2.38 \\
NFL20 & 423958 & 456563 & 444426 & 442950 & 4.48 & 2.98 \\
NFL16 & 294866 & 321357 & 310416 & 309580 & 4.99 & 3.66 \\
NL16 & 334940 & 359720 & 351647 & 350727 & 4.71 & 2.50 \\
NL12 & 132720 & 144744 & 140686 & 140686 & 6.00 & 2.80 \\
Super12 & 551580 & 612583 & 590773 & 587387 & 6.49 & 4.11 \\
Brazil24 & 620574 & 655235 & 643783 & 642530 & 3.54 & 1.94 \\
\end{tabular}
\caption{Experimental results on the 17 instances with $n$ teams ($n$ is divisible by 4)}
\label{experimentresult}
\end{table}
From Table~\ref{experimentresult}, we can see that our schedule algorithm can improve all the 17 instances with an average improvement of $2.10\%$. In these tested instances, the number of teams is at most 40. So our algorithm runs very fast.
On a standard laptop with a 2.30GHz Intel(R) Core(TM) i5-6200 CPU and 8 gigabytes of memory, all the 17 instances can be solved together within 0.1 seconds before applying the local search and within 8 seconds including local search.
\section{Conclusion}
In this paper, we introduce a new schedule for TTP-2 with $n\equiv 0 \pmod 4$ and prove an approximation ratio of $(1+{\frac{3}{n}}-{\frac{6}{n(n-2)}})$, improving the previous ratio of $(1+\frac{4}{n}+\frac{4}{n(n-2)})$ in~\cite{DBLP:conf/mfcs/XiaoK16}.
The improvement looks small. However, the ratio is quite close to 1 now and further improvements become harder and harder.
Furthermore, the new construction method is simpler and more intuitive, compared with the previous method in~\cite{DBLP:conf/mfcs/XiaoK16}.
Experiments also show that the new schedule improves the results on all tested instances in the benchmark~\cite{trick2007challenge}.
In the analysis, we can see that the extra cost of our schedule is contributed by left and last super-games.
So we can decompose the analysis of the whole schedule into the analysis of left and last super-games.
To get further improvements, we only need to reduce the number of left and last super-games.
\subsection{Our results}
\begin{wrapfigure}{r}{0.4\textwidth}
\begin{center}
\includegraphics[width=0.38\textwidth]{img/ski.pdf}
\end{center}
\caption{Tight deterministic and randomized trade-offs for learning-augmented ski-rental.}
\end{wrapfigure}
\paragraph{Ski-rental.}
The \emph{ski rental problem} is a classical online algorithms problem \cite{Karlin1988} with a particularly simple model of decision-making under uncertainty. In the problem, there is a skier who is out to ski for an \emph{unknown} number of days. The first morning, the skier must either rent skis for a cost of \$$1$ or buy skis for a cost of \$$B$. Each day thereafter, the skier must make the same decision again as long as she has not yet purchased skis. The goal for the skier is to follow a procedure that minimizes competitive ratio. Variations of the ski-rental problem have been used to model a diverse set of scenarios, including snoopy caching~\cite{Karlin1988}, dynamic TCP acknowledgement~\cite{karlin2001dynamic}, and renting cloud servers \cite{khanafer2013constrained}.
In our setting of ski-rental with a machine-learned prediction, we assume that, in addition to knowing $B$, the skier has access to a prediction $y$ for the number of days she will ski. Let $\eta$ denote the absolute error of the prediction $y$ (i.e., if she actually skis for $x$ days, then $\eta = |x - y|$). Furthermore, define $c(\eta)$ to be the skier's worst-case competitive ratio over all $y$ given $\eta$. We say that the procedure is \emph{$\rob$-robust} if $c(\eta) \leq \rob$ for any $\eta$ and that it is \emph{$\beta$-consistent} if $c(0) \leq \beta$. We prove deterministic and randomized lower bounds on the robustness-consistency trade-off that match the algorithmic results in~\cite{NIPS2018_8174}. Specifically, we show:
\begin{theorem}[Deterministic Lower Bound for Ski-Rental; also in \cite{gollapudi2019online,recent}]\label{thm:det-lo}
Let $\lambda \in (0,1)$ be a fixed parameter. Any $(1+\lambda)$-consistent deterministic algorithm for the ski-rental with machine-learned prediction problem is at least $(1+1/\lambda)$-robust.
\end{theorem}
We remark that this deterministic bound is simple to prove and has also appeared in two prior works~\cite{gollapudi2019online,recent}.
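For intuition, a deterministic strategy in the spirit of~\cite{NIPS2018_8174} (our paraphrase, not a quotation of their pseudocode) buys early when the prediction suggests a long season and late otherwise; up to rounding, one can check that this rule is $(1+\lambda)$-consistent and $(1+1/\lambda)$-robust, matching the trade-off above.
\begin{verbatim}
import math

def buy_day(y, B, lam):
    """Trust a large prediction by buying early; distrust a small one by
    buying late.  lam in (0, 1) trades consistency for robustness."""
    return math.ceil(lam * B) if y >= B else math.ceil(B / lam)

def cost(x, t, B):
    # rent on days 1..t-1 and buy at the start of day t (if we ski that long)
    return x if x < t else (t - 1) + B

# worst-case ratio for a given prediction y (empirical check):
# max(cost(x, buy_day(y, B, lam), B) / min(x, B) for x in range(1, 10 * B))
\end{verbatim}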
\begin{theorem}[Randomized Lower Bound for Ski-Rental]\label{thm:rand-low}
Any (randomized) algorithm for ski-rental with machine-learned prediction that achieves robustness $\rob$ must have consistency
\[ \cons\ge\rob\log\left(1 + \frac{1}{\rob - 1}\right). \]
In particular, any (randomized) algorithm achieving robustness $\rob\le 1/(1 - e^{-\lambda})$ for the ski-rental with machine-learned prediction problem must have consistency $\beta\ge \lambda / (1 - e^{-\lambda})$.
\end{theorem}
\begin{wrapfigure}{r}{0.4\textwidth}
\begin{center}
\includegraphics[width=0.38\textwidth]{img/schedule.pdf}
\end{center}
\caption{Tight trade-offs for scheduling two jobs}\label{fig:sc}
\end{wrapfigure}
\paragraph{Non-clairvoyant scheduling.}
The \emph{non-clairvoyant job scheduling problem} was first studied in an online setting by Motwani, Phillips, and Torng \cite{MOTWANI199417}. This problem models scheduling jobs on a single processor, where the jobs have unknown processing times and the objective is to minimize the completion time (\ie, the sum of the job completion times). More formally, the algorithm initially receives $n$ job requests with \textit{unknown} processing times $x_1,x_2,\cdots, x_n$ and is asked to schedule them on a single machine, allowing for preemptions. If the completion time of job $i$ is $t_i$, then the total \emph{completion time} of the algorithm is $\sum_{i=1}^n t_i$.
In the learning-augmented version of the problem, we additionally provide the algorithm with predictions $y_1,y_2,\cdots, y_n$ of the processing times $x_1,x_2, \cdots, x_n$. Let $\eta = \sum_i |x_i - y_i|$ be the $\ell_1$ error of the prediction and $c(\eta)$ be the algorithm's worst-case competitive ratio given $\eta$. As before, we say an algorithm is {$\rob$-robust} if $c(\eta) \leq \rob$ for any $\eta$ and {$\beta$-consistent} if $c(0) \leq \beta$.
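As a concrete reference point, the following sketch (ours, for illustration) computes total completion times for two natural baselines: round-robin, which ignores the predictions, and running jobs to completion in increasing order of predicted time, which is $1$-consistent but can suffer when the predictions err.
\begin{verbatim}
def round_robin_total(x):
    """Continuous-time round-robin: all unfinished jobs share the
    processor at equal rates."""
    x, t, total, prev = sorted(x), 0.0, 0.0, 0.0
    n = len(x)
    for i, xi in enumerate(x):
        t += (xi - prev) * (n - i)   # n-i jobs are still running
        total += t
        prev = xi
    return total

def predicted_order_total(x, y):
    """Run jobs to completion in increasing order of prediction."""
    t, total = 0.0, 0.0
    for i in sorted(range(len(x)), key=lambda i: y[i]):
        t += x[i]
        total += t
    return total

# OPT equals predicted_order_total(x, x); e.g. for x = [1.0, 1.0]:
# round_robin_total(x) / predicted_order_total(x, x) == 4/3.
\end{verbatim}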
Our first result is a lower bound on the robustness-consistency trade-off in the general case.
\begin{theorem}[Lower bound for non-clairvoyant scheduling with $n$ jobs]\label{thm:njob}
Any $(1+\lambda)$-consistent algorithm for non-clairvoyant scheduling with machine-learned prediction must have robustness
\begin{align*}
\gamma \geq \frac{n + n(n+1)\lambda}{1 + \lambda{(n+1)(n+2)}/{2}}.
\end{align*}
\end{theorem}
This bound is tight at the endpoints of the trade-off. When $\lambda = 0$, we have $c(\eta)\ge n$, which is achieved by any (non-idling) algorithm. On the other hand, when $\lambda = 1 - \smash{\tfrac{2}{n+1}}$ (so $1 + \lambda = 2 - \smash{\tfrac{2}{n+1}}$), we have \smash{$c(\eta)\ge 2 - \tfrac{2}{n+1}$}, which is the tight bound of~\cite{MOTWANI199417} (achieved by round-robin).\footnote{A competitive ratio of $2 - 2 / (n+1)$ can always be achieved (even without ML predictions)~\cite{MOTWANI199417}, so we do not need to consider consistency $1 + \lambda$ for $\lambda\ge 1- 2/(n+1)$}
On the other hand, Kumar, Purohit and Svitkina~\cite{NIPS2018_8174} give an algorithm that is $(1+\lambda)/2\lambda$-consistent and $2/(1-\lambda)$-robust for $\lambda\in (0,1)$.
In the case of $n=2$, the robustness can be improved to $4/(3-3\lambda)$. We provide a significantly better trade-off (\cref{fig:sc}) and a matching lower bound in this regime.
Our algorithm is $2$-competitive over all parameter choices,
while the robustness of their algorithm tends to infinity as its consistency goes to $1$.
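The two guarantees can be tabulated directly from the stated formulas, mirroring Figure~\ref{fig:sc}; note that the parameter $\lambda$ means different things in the two results (see the footnote of Theorem~\ref{thm:2}), so the snippet below (ours) simply prints (consistency, robustness) pairs for each guarantee in its own parametrization.
\begin{verbatim}
def our_guarantee(lam):       # two-job bound of Theorem 4, lam in (0, 1/3)
    return 1 + lam, 1 + 1 / (1 + 6 * lam)        # (consistency, robustness)

def kps_two_job_guarantee(lam):   # two-job bound of [KPS18], their parameter
    return (1 + lam) / (2 * lam), 4 / (3 * (1 - lam))

for lam in (0.05, 0.1, 0.2, 0.3):
    print(our_guarantee(lam), kps_two_job_guarantee(lam))
\end{verbatim}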
\begin{theorem}[Tight bound for non-clairvoyant scheduling of $2$ jobs]\label{thm:2}
In the case of $2$ jobs, there is an algorithm that achieves $(1+\lambda)$-consistency and $(1+ 1/(1+6\lambda))$-robustness for non-clairvoyant scheduling with machine-learned prediction, for any $\lambda\in (0,1/3)$.\footnote{Kumar, Purohit and Svitkina~\cite{NIPS2018_8174}
uses large $\lambda$ to indicate low consistency, whereas we use small $\lambda$ for low consistency. The results are comparable up to a reparametrization. Also, round-robin has a competitive ratio of $4/3$ for $2$ jobs (without using predictions) \cite{MOTWANI199417}, so we do not need to consider consistency $1 + \lambda$ for $\lambda\ge 1/3$.} Moreover, this bound is tight.
\end{theorem}
\subsection{Related work}
For learning-based ski-rental, the result of \cite{NIPS2018_8174} has since been extended by~\cite{lee2019learning, gollapudi2019online}. Scheduling with predictions is also studied by \cite{soda20, mitzenmacher2019scheduling, mitzenmacher2019supermarket}, though under different prediction models or problem settings.
The results of \cite{DBLP:conf/icml/LykourisV18} on online caching with ML predictions have been improved and generalized by~\cite{antoniadis2020online,rohatgi2020near,deb20weight,wei2020better}.
Several other learning-augmented online problems have also been considered in the literature, including matching, optimal auctions and bin packing~\cite{devanur2009adwords,kumar2018semi,medina2017revenue,antoniadis2020secretary,recent}.
Online algorithms (without ML predictions) are a classical subject in the algorithms literature. The (classic) ski-rental problem is well-understood: It is known that there exists a
$2$-competitive deterministic algorithm~\cite{Karlin1988}. This can be further improved to $e/(e-1)$ using randomization and is known to be optimal~\cite{DBLP:journals/algorithmica/KarlinMMO94}.
There are also numerous extensions of the problem, including snoopy caching~\cite{Karlin1988} and dynamic TCP acknowledgment~\cite{karlin2001dynamic}.
The non-clairvoyant scheduling problem was first studied by~\cite{MOTWANI199417}. They show that for $n$ jobs the round-robin heuristic achieves a competitive ratio of $2-2/(n+1)$ and provide a matching lower bound. They also show that randomization provides at most a minor lower-order improvement to the competitive ratio. Our work revisits these classical results by extending their lower bounds to settings where we want to optimize for consistency (with respect to a prediction) in addition to worst-case competitive ratio.
Another related line of inquiry is the study of online problems in stochastic settings, where the inputs come from certain distribution~\cite{hentenryck2009online,feldman2009online, DBLP:journals/talg/MahdianNS12, DBLP:conf/soda/MirrokniGZ12, mitzenmacher2019scheduling, esfandiari2018allocation}. We note that this model differs from ours in that we do not make any assumptions on the distribution or stochasticity of inputs.
Finally, using machine learning to design algorithms under uncertainty has been explored in other settings as well, such as online learning~\cite{kong2018new, bhaskara2020online} and data streams \cite{ali19, Aamand2019LearnedFE, jiang2020learningaugmented, cohen2020composable}.
A number of works also study learning-based methods for numerical linear algebra, combinatorial optimization, and integer programming~\cite{bello2016neural,khalil2017learning,pmlr-v80-balcan18a,nazari2018reinforcement,NIPS2018_7335,kool19,selsam2018learning,amizadeh2018learning, chawla2019learning,indyk2019learning,alabi2019learning,dao2019learning}.
\subsection{Preliminaries and notations}
In our analyses that follow, we use $\mathsf{ALG}$ to denote the cost incurred by the algorithm on a given input and prediction. We use $\mathsf{OPT}$ to denote the optimal cost achievable by an algorithm with full knowledge of the future (i.e., an offline algorithm). Note that $\mathsf{ALG}$ is a function of the input and the prediction, while $\mathsf{OPT}$ depends only on the input. The competitive ratio for a given input and prediction is simply the ratio $\mathsf{ALG} / \mathsf{OPT}$.
In terms of this notation, an algorithm is $\cons$-consistent if $\mathsf{ALG} / \mathsf{OPT}\le\cons$ for all situations where the input is the same as the prediction; an algorithm is $\rob$-robust if $\mathsf{ALG} / \mathsf{OPT}\le\rob$ for all pairs of input and prediction.
\section{Introduction}
\input{intro.tex}
\section{Ski Rental}
\input{ski-rental.tex}
\section{Non-clairvoyant Scheduling}
\input{scheduling.tex}
\section{Conclusion}
\input{conclusion_1.tex}
\section*{Acknowledgements}
We would like to thank Constantinos Daskalakis, Piotr Indyk, and Jelani Nelson for their comments on drafts of this paper.
\bibliographystyle{alpha}
\subsection{A General Lower Bound}
Our first result is a lower bound on the robustness-consistency trade-off that is tight at the endpoints of the trade-off curve. Note that since the classic work~\cite{MOTWANI199417} provides a $c=2-2/(n+1)$ competitive ratio (with no ML prediction), one can always achieve $c$-robustness and $c$-consistency simultaneously. Hence, as we remarked, \autoref{thm:njob} is tight at $\lambda = 0$ and $\lambda = 1 - \tfrac{2}{n+1}$. We now prove the theorem.
\begin{proof}[Proof of \autoref{thm:njob}]
Consider an algorithm that achieves $1+\lambda$ consistency. Let the predictions be $y_1 = y_2 = \cdots = y_n = 1$. Let $d(i,j)$ denote the amount of processing time on job $i$ before job $j$ finishes. Assume without loss of generality that job $1$ is the first job to finish and that when it finishes, we have $d(i, i)\ge d(j, j)$ for all $i < j$. Consistency requires
\[ (1 + \lambda)\cdot\OPT = \frac{n(n+1)}{2}(1 + \lambda)\ge\sum_{i,j} d(i,j) + \sum_{i=2}^n (n - i + 1)(1 - d(i, i)), \]
where the first term represents the costs incurred thus far, and the second term represents the minimum cost required to finish from this state. Simplifying, we obtain the condition
\begin{equation}\label{eqn}
\frac{n(n+1)}{2}\lambda\ge \sum_{i=2}^n (i - 1)\cdot d(i, i),
\end{equation}
as $d(i, j) = d(i, i)$ for all $i$ at this point in the execution.
Now, consider an (adversarial) setup with $x_i = d(i, i) + \eps$, where we take $d(i, i)$ to be as measured upon the completion of job $1$ and $\eps > 0$ to be a small positive number. For this instance, we have
\[ \OPT = 1 + \sum_{i=2}^n ix_i + O(\eps). \]
We also have, based on the execution of the algorithm up to the completion of job $1$, that
\[ \ALG\ge n\bigp{1 + \sum_{i=2}^n x_i}. \]
To show a consistency-robustness lower bound, it suffices to lower bound $\ALG / \OPT$ subject to the consistency constraint. Equivalently, we can upper bound
\[ \frac{\OPT}{\ALG} - \frac 1n\le\frac 1n\bigp{\frac{1 + \sum_{i=2}^n (i-1)x_i + O(\eps)}{1 + \sum_{i=2}^n x_i}}. \]
Suppose we know a priori that the value of the numerator is $C + 1 + O(\eps)$ (i.e., $\sum_{i=2}^n (i-1)x_i = C$). To maximize the quantity on the right-hand side, we would want to have $\sum_{i=2}^n x_i$ be as small as possible subject to the constraints that $x_i\ge x_j\ge 0$ if $i < j$ and
\[ \sum_{i=2}^n (i-1)x_i = C. \]
Observe that this optimization problem is a linear program. For this linear program, suppose we have a feasible solution with $x_i > x_{i+1}$. Such a solution cannot be optimal, as we can set $x_i\gets x_i - \frac{\alpha}{i-1}$ and $x_{i+1}\gets x_{i+1} + \frac{\alpha}{i}$ for sufficiently small $\alpha > 0$, reducing the objective while remaining feasible.
Thus, if an optimal solution exists, it must have $x_2 =x_3= \cdots = x_n$. It is not hard to see that this linear program is bounded and feasible, so an optimum does exist. It follows that for a given $C$, we want to set $x_2=x_3 = \cdots = x_n = \frac{2C}{n(n-1)}$,
in which case the right-hand side is equal to
\[ \frac{C + 1 + O(\eps)}{1 + \frac{2C}{n}} - \frac{n-1}{2} + \frac{n-1}{2} = \frac{-\frac{n-3}{2} + O(\eps)}{ 1 + \frac{2C}{n}} + \frac{n-1}{2}. \]
To maximize the leftmost term, which has a negative numerator (for sufficiently small $\eps$), we want to maximize $C$. However, we know from \eqref{eqn} that $C = \sum_{i=2}^n (i-1)x_i\le\frac{n(n+1)}{2}\lambda$. Therefore, we have the upper bound
\[ \frac{\OPT}{\ALG} - \frac 1n\le \frac 1n\bigp{\frac{\frac{n(n+1)}{2}\lambda + 1 + O(\eps)}{1 + {(n+1)}\lambda}}. \]
Finally, taking $\eps\to 0$ yields the desired bound
\[ \frac{\ALG}{\OPT}\ge\frac{n + n(n+1)\lambda}{1 + \frac{(n+1)(n+2)}{2}\lambda}. \tag*{\qedhere} \]
\end{proof}
\subsection{A Tight Complete Trade-off for Two Jobs}
We now consider the special case of having $n=2$ jobs. It is always possible to achieve $4/3$ competitiveness by round-robin~\cite{MOTWANI199417}, and with machine-learned predictions, Kumar, Purohit, and Svitkina \cite{NIPS2018_8174} prove a $(1+\lambda)/(2\lambda)$-consistency and $4/(3-3\lambda)$-robustness trade-off. We show that this trade-off can be significantly improved and that our new bound is in fact tight.
\paragraph{Lower bound.} We start by proving our lower bound.
Here, we remark that any lower bound for $k$ jobs directly implies the same lower bound for any $n\geq k$ jobs, since one can add $n-k$ dummy jobs with $0$ predicted and actual processing times.
Thus, the lemma below also holds for $n > 2$.
\begin{lemma}[Lower bound for non-clairvoyant scheduling]\label{lem:lbs}
For the non-clairvoyant scheduling problem of $2$ jobs, any algorithm that achieves $(1+\lambda)$-consistency must be at least $1+(1/(1+6\lambda))$-robust for a $\lambda\in (0,1/3)$.
\end{lemma}
\begin{proof}
Consider a $(1+\lambda)$-consistent algorithm $\mathcal{A}$.
Suppose the inputs are predictions $y_1 = y_2 =1$.
First, we focus on an instance $I$, where $x_1 = y_1,x_2=y_2$.
Let $d(i,j)$ denote the amount of processing time on job $i$ before job $j$ finishes for this instance,
and assume without loss of generality that the algorithm finishes job $1$ first.
Observe in this scenario the consistency requirement asks that $\mathcal{A}$ must produce a schedule with total completion time at most $(1+\lambda) (2y_1 + y_2)=3+3\lambda$. As job $1$ finishes first, $d(1,2)=1$. Since $x_1=x_2=1$ and $\textsf{ALG} = x_1+x_2 + d(1,2)+d(2,1)$, we must have
\begin{equation}\label{eq:scl}
d(2,1) \leq \lambda(2y_1 + y_2) = 3\lambda.
\end{equation}
Now we consider an adversarial instance $I'$ with the same predictions ($y_1=y_2=1$), but different choices of actual processing times. In particular, let $x_1 = 1$ but $x_2= d(2,1)+\epsilon$ for an infinitesimal constant $\epsilon$.
Since the inputs to the algorithm are the same as in the previous instance $I$, it would start off by producing the same schedule.
In particular, the algorithm would finish job $1$ first at time $1+ d(2,1)$, then finish job $2$ immediately afterwards.
Therefore,
\begin{equation}
\textsf{ALG} = 2+2d(2,1)+\epsilon.
\end{equation}
On the other hand,
since $\lambda \leq 1/3$ implies $x_2 \leq x_1$ (for sufficiently small $\epsilon$), we have
\begin{equation}
\textsf{OPT}= 2x_2 + x_1 = 2d(2,1) + 2\epsilon + 1.
\end{equation}
By~\eqref{eq:scl}, we get that the competitive ratio is at least $1+ 1/(1+6\lambda)$ as $\epsilon \rightarrow 0$.
\end{proof}
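As a small symbolic sanity check of the final computation above (a sketch assuming SymPy is available):
\begin{verbatim}
# ALG/OPT with x_2 = d + eps tends to (2 + 2d)/(1 + 2d) as eps -> 0,
# which is decreasing in d; plugging in the largest allowed value
# d = 3*lam gives exactly 1 + 1/(1 + 6*lam).
import sympy as sp

d, lam, eps = sp.symbols('d lam eps', positive=True)
ratio = (2 + 2*d + eps) / (1 + 2*d + 2*eps)
limit = sp.limit(ratio, eps, 0)
print(sp.simplify(limit.subs(d, 3*lam) - (1 + 1/(1 + 6*lam))))  # 0
\end{verbatim}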
\paragraph{Upper bound.}
To complete the proof of \autoref{thm:2}, we show that the algorithm from~\cite{NIPS2018_8174} can be improved. Our new scheduling scheme proceeds in two stages. First, it follows the round-robin algorithm until the consistency constraint is tight. Then, it processes jobs in a greedy order, starting with the job of minimum predicted time. We name the algorithm {\texttt{Two-Stage-Schedule}} and prove the following guarantee:
\begin{lemma}[Algorithm for non-clairvoyant scheduling]\label{lem:ubs}
For the non-clairvoyant scheduling problem of $2$ jobs,
the algorithm {\texttt{Two-Stage-Schedule}} achieves $(1+\lambda)$-consistency and $(1+1/(1+6\lambda))$-robustness for a $\lambda\in (0,1/3)$.
\end{lemma}
The proof can be found in Appendix~\ref{sec:ubs}. Finally, combining \cref{lem:ubs} and \cref{lem:lbs} proves \autoref{thm:2}.
\subsection{Deterministic Lower Bound}
In this section, we prove~\autoref{thm:det-lo}, which also appeared in~\cite{gollapudi2019online,recent}. Since the algorithm is deterministic, we proceed by an adversarial argument.
Let $x$ be the last day of the ski season. The high-level idea is to fix a specific $y$, and then consider two instances, one where $x = y$ and one where $x\neq y$.
Since the algorithm does not know $x$, it cannot distinguish between these two cases and therefore must output a unique day $t$ (for purchasing skis) given $y$ and $B$.
Suppose $y$ is large, say, greater than $B$. Then, intuitively, $t$ must be fairly small to satisfy consistency.
Given this constraint, in the other instance, we let the adversary choose an $x\neq y$ that yields the worst possible competitive ratio.
We will show that this competitive ratio indeed matches the robustness upper bound.
\begin{proof}[Proof of \autoref{thm:det-lo}]
Let $y$ be the prediction and $\eta = |y-x|$ be the error. Consider a deterministic algorithm that achieves $(1+\lambda)$ consistency.
Suppose $y> (1+\lambda) B$, and let $t$ be the day on which the algorithm purchases skis (given $y$ and $B$).
First, suppose $t \geq y$.
When $x= y$, we have $\OPT = B$ and $\ALG =y$. Then the competitive ratio is $y/B$, which must be bounded by $1+\lambda$ by our consistency requirement, but this contradicts the assumption $y> (1+\lambda)B$.
Second, suppose $B<t < y$. Again, when $x=y$, $\OPT = B$, and $\ALG = t+B-1$. By the $(1+\lambda)$-consistency, $(t+B-1)/B \leq 1+\lambda$. Thus, $(t-1)/ B\leq \lambda <1$, contradicting the assumption that $t> B$.
Therefore, simply to achieve $(1+\lambda)$-consistency, the algorithm must output $t\le B$. Now under this condition, we consider two cases. We use the case when $y=x$ to derive a bound on $\lambda$, and apply this along with an adversarial argument in the case when $y\neq x$ to obtain our robustness lower bound.
\begin{enumerate}[(i)]
\item Suppose $x = y$. Since $y > B$, we have $\OPT = B$. On the other hand, $\ALG = t+B-1$, as $ t < x$. Thus, the algorithm does $1+ (t-1)/B$ times worse than optimal. Assuming that the algorithm is $(1+\lambda)$-consistent, we have $1+ (t-1)/B \leq 1+\lambda$, so $t\le\lambda B + 1$.
\item Suppose $x\neq y$. We adversarially set $x = t$; note that $x \leq B$. Thus, $\OPT = x = t$ and $\ALG = t + B-1$.
Our bound on $t$ from (i) now lower bounds the competitive ratio as $(t + B - 1) / t\ge 1 + (B - 1) / (\lambda B + 1)$. For large $B$, this lower bound approaches $1 + 1 / \lambda$.
This shows that $c(\eta) \geq 1 + 1/\lambda$ and thus completes the proof.\qedhere
\end{enumerate}
\end{proof}
\subsection{Randomized Lower Bound}
The starting point of our randomized lower bound is the well-known fact that the ski-rental problem can be expressed as a linear program (see, \eg, \cite{buchbinder2009design}). Our key observation then is that the consistency and robustness constraints are in fact also linear. Somewhat surprisingly, we show that the resulting linear program can be solved \textit{analytically} in certain regimes. By exploiting the structure of the linear program, we will determine the optimal robustness for any fixed consistency, and this matches the trade-off given by~\autoref{thm:ski-rand} (when $y \gg B$ and for large $B$).
The proof of our randomized lower bound (\Cref{thm:rand-low}) is fairly technical. Thus, we defer the proof to Appendix~\ref{apx:A} and only present a sketch here.
\begin{proof}[Proof sketch of \autoref{thm:rand-low}]
As a first step,
we can characterize algorithms for ski rental as feasible solutions to an infinite linear program, with variables $\{p_i\}_{i\in\mathbb N}$ indicating the probability of buying at day $i$. The constraints of robustness and consistency can be written as linear constraints on this representation. Given $\gamma$ and $\beta$, understanding whether a $\rob$-robust and $\cons$-consistent algorithm exists therefore reduces to checking if this linear program is feasible. (In particular, we do not have an objective for the linear program.)
First, we ask that the $p_i$'s define a probability distribution. That is, $p_i \geq 0$ and
\begin{align}
\sum_{i=1}^\infty p_i= 1.
\end{align}
Second, to satisfy the consistency constraint, the algorithm must have expected cost within $\beta\cdot \OPT$ when $y = x$. In this case, the ski season ends at $i=y$, so there is no additional cost afterwards.
\begin{align}
\sum_{i=1}^y (B+i-1)p_i
+
y\sum_{i=y+1}^\infty p_{i} \leq \beta \min\{ B , y\}.
\end{align}
Third, each value of $x$ gives a distinct constraint for robustness, where the left side is the expected cost and the right side is $\gamma \cdot \OPT$. When $x\leq B$, $\OPT = x$, so we have
\begin{align}
\sum_{i=1}^x (B+i-1)p_i + x\sum_{i=x+1}^\infty p_i \leq \rob x
\quad\forall x \leq B.
\end{align}
If $x> B$, then $\OPT = B$. The robustness constraints are infinitely many, given by
\begin{align}
\sum_{i=1}^x (B+i-1)p_i + x\sum_{i=x+1}^\infty p_i \leq \rob B
\quad\forall x > B.
\end{align}
Having set up this LP, the remainder of the proof follows in two steps. First, we show that this (infinite) LP can be reduced to a finite one with $B+1$ constraints and $y$ variables. We then proceed to analytically understand the solution to the LP. This allows us to lower bound the parameter $\rob$ given any $\cons$, and it indeed matches the upper bound given by~\cite{NIPS2018_8174}.
\end{proof}
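To make the constraints above concrete, the following sketch (assuming SciPy; it is meant purely as an illustration, not as part of the proof) checks feasibility of the linear program when the algorithm is restricted to buy no later than some day $N$; feasibility of this restricted program is sufficient for the existence of a $\rob$-robust and $\cons$-consistent strategy:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def exists_strategy(B, y, gamma, beta, N=None):
    N = N or 2 * max(B, y)
    days = np.arange(1, N + 1)          # p[i-1] = probability of buying on day i
    buy_cost = B + days - 1             # cost if the season lasts past the buy day
    def cost_row(x):                    # expected-cost coefficients for season length x
        return np.where(days <= x, buy_cost, x)
    A_ub = [cost_row(y)]                # consistency constraint (x = y)
    b_ub = [beta * min(B, y)]
    for x in range(1, N + 1):           # robustness, one constraint per season length
        A_ub.append(cost_row(x))        # (x > N adds nothing new once N >= B)
        b_ub.append(gamma * min(x, B))
    res = linprog(np.zeros(N), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.ones((1, N)), b_eq=[1.0], bounds=[(0, None)] * N)
    return res.status == 0              # status 0: a feasible distribution p exists

print(exists_strategy(B=10, y=30, gamma=5.0, beta=1.2))    # True (e.g., always buy on day 3)
print(exists_strategy(B=10, y=30, gamma=1.5, beta=1.05))   # False (both requirements too tight)
\end{verbatim}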
\section{Proof of \autoref{lem:ubs}}\label{sec:ubs}
Now we present our algorithmic result. Although our analysis deals with the case of $2$ jobs,
it is convenient to describe the algorithm in the general case of $n$ jobs.
The algorithm starts by running round robin for a while, then switches to a greedy strategy of processing jobs in the increasing order of the predicted times.
If at any point we learn that $x_i \neq y_i$ for some job $i$,
we switch to round robin forever. Assuming jobs are indexed so that $y_1\le y_2\le\cdots\le y_n$, we use $\textsf{OPT}_y = \sum_{i} (n-i+1)y_i$ to denote $\textsf{OPT}$ under perfect predictions.
\begin{algo}
\textul{$\texttt{Two-Stage-Schedule}$}$(y_1,y_2,\cdots,y_n)$:\+\\
At any point, if a job finishes with processing time different from its prediction, \+\\
round robin forever.\-\\
\textit{Stage} $1$: Round robin for at most $\lambda n \cdot \textsf{OPT}_y /\binom{n}{2}$ units of time.\\
\textit{Stage} $2$: Process jobs in predicted order\\
\quad \quad \quad\,\,\,(starting from the unfinished job with the least predicted time).
\end{algo}
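For concreteness, the following is a minimal discrete-time sketch of the two-job case (an illustrative simulation with a small time step, not the exact procedure analyzed in the proof; the function name and the step size are ours):
\begin{verbatim}
def two_stage_schedule(x, y, lam, dt=1e-4):
    """Total completion time of Two-Stage-Schedule for n = 2 jobs with
    true processing times x, predictions y (y[0] <= y[1]), parameter lam."""
    n = 2
    opt_y = 2 * y[0] + y[1]            # OPT under perfect predictions
    budget = lam * n * opt_y           # = lam * n * OPT_y / C(n,2), since C(2,2) = 1
    rem = list(x)                      # remaining true processing times
    done, t, total = [False, False], 0.0, 0.0
    round_robin, mispredicted = True, False
    while not all(done):
        if round_robin and t >= budget and not mispredicted:
            round_robin = False        # stage 2: follow the predicted order
        active = [i for i in range(n) if not done[i]]
        if round_robin or mispredicted:
            for i in active:           # round robin splits the processor evenly
                rem[i] -= dt / len(active)
        else:
            rem[min(active, key=lambda j: y[j])] -= dt
        t += dt
        for i in active:
            if rem[i] <= 0 and not done[i]:
                done[i], total = True, total + t
                if abs(x[i] - y[i]) > 1e-9:
                    mispredicted = True   # switch to round robin forever
    return total

# Perfect predictions, lam = 0.2: OPT = 4 and ALG is close to (1 + lam) * OPT = 4.8.
print(two_stage_schedule(x=[1.0, 2.0], y=[1.0, 2.0], lam=0.2))
\end{verbatim}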
The intuition behind the algorithm is simple. On one hand,
to ensure robustness, the algorithm switches to round robin when any misprediction is noticed.
On the other hand, we ask the algorithm to be $(1+\lambda)$-consistent.
Suppose $y_1<y_2< \cdots < y_n$.
If the predictions are perfect, then we expect that a consistent algorithm would produce a schedule that finishes the jobs in the correct order, \ie, job $1$ finishes first, job $2$ second, and so on.
In this case, the consistency requirement reduces to
\begin{equation}\label{eq:csr}
\sum_{i>j} d(i,j) \leq \lambda\, \textsf{OPT}_y,
\end{equation}
where $d(i,j)$ denotes the amount job $i$ delays job $j$ in this scenario.
Observe that when no job is completed, round robin increases each term in the summation at the same rate of $1/n$.
Thus, stage 1 of the algorithm would make the inequality~\eqref{eq:csr} tight.
Then as we can no longer disobey the predictions in the ideal scenario, we switch to the greedy strategy in the second stage. Next, we analyze the performance of the algorithm in the case of two jobs.
We now prove \autoref{lem:ubs}
\begin{proof}[Proof of \autoref{lem:ubs}]
Let $t= 2y_1 + y_2$. To show consistency, assume $x_1= y_1, x_2= y_2$, so $\textsf{OPT} = t$. In stage $1$, the algorithm runs round robin for $2\lambda t$ units of time. Observe that job $2$ cannot finish before job $1$ in this stage: since $\lambda<1/3$, job $2$ can receive at most $(2y_1+y_2)/3 <y_2$ units of processing time. Consider two cases.
\begin{enumerate}[(i)]
\item Suppose job $1$ finishes in stage $1$. Then since two jobs share the same rate,
\begin{equation}\label{eq:y1s}
y_1 \leq \lambda t.
\end{equation}
Moreover, in this case, the algorithm runs round robin for $2y_1$ units of time and finishes job $2$ in $y_2-y_1$ additional units of time. Thus, $\textsf{ALG} = 3y_1+y_2$, and $\textsf{OPT} = t$. By~\eqref{eq:y1s}, we have $\textsf{ALG} \leq (1+\lambda)\,\textsf{OPT}$.
\item Suppose job $1$ does not finish in stage $1$. Then both jobs have been processed for $\lambda t $ units of time at the beginning of stage $2$. In stage $2$, the algorithm prioritizes job $1$. Thus,
\begin{equation}
\textsf{ALG} = 4\lambda t+ 2(y_1-\lambda t) + (y_2-\lambda t) = (1+\lambda)\,\textsf{OPT}
\end{equation}
\end{enumerate}
To show robustness, we consider mispredictions, and suppose without loss of generality $y_1 =1$.
Throughout, we let $\epsilon$ denote an infinitesimal quantity.
Notice that if any misprediction is found or job $1$ finishes in stage $1$, the algorithm is equivalent to round robin and therefore achieves a competitive ratio of $4/3$, which is better than $1+1/(1+6\lambda)$ for any $\lambda \in (0,1/3)$, so we are done.
We do a case-by-case analysis, assuming in stage $1$ no misprediction is detected and both jobs are finished in stage $2$.
Notice that under the assumptions, $x_1,x_2 \geq \lambda t$, so $\textsf{OPT} \geq 3\lambda t$.
\begin{enumerate}[(i)]
\item Suppose job $1$ finishes no later than its prediction ($x_1 \leq 1$). We have $\textsf{ALG} = \lambda t + 2x_1 +x_2$. \label{it:i}
\begin{enumerate}[(a)]
\item If $x_1 < x_2$, then $\textsf{OPT} = 2x_1 + x_2$. Since $\lambda t \leq \textsf{OPT} /3$, we have $\textsf{ALG}/\textsf{OPT} \leq 4/3$. \label{it:ia}
\item If $x_1 \geq x_2$, then $\textsf{OPT} = 2x_2 + x_1$. Observe that setting $x_1=y_1= y_2 =1, x_2 = \lambda t + \epsilon$ maximizes the competitive ratio, and this yields a ratio of $1+1/(1+6\lambda)$. \label{it:ib}
\end{enumerate}
\item Suppose job $1$ finishes later than its prediction ($x_1 > 1 $). In this case, stage $2$ starts off by processing job $1$ for $y_1 - \lambda t$ units of time and then switches to round robin.
\begin{enumerate}[(a)]
\item If job $1$ finishes no later than job $2$, then we calculate that
$
\textsf{ALG} =
\lambda t +3 x_1 +x_2 -1.
$
If $x_1 < x_2$, then $\textsf{OPT} = 2x_1 + x_2$, the competitive ratio is at most $4/3$, where the worst case is achieved at $x_1 = 1+\epsilon$ and we use $\lambda t \leq \textsf{OPT}/ 3$.
If $x_1 \geq x_2$, then $\textsf{OPT} = 2x_2 + x_1$. The competitive ratio is bounded by $1+1/(1+6\lambda)$, where the worst case is achieved when $x_1 = 1+\epsilon, x_2 = \lambda t + 2\epsilon, y_2= 1$.
\item If job $1$ finishes later than job $2$, then $\textsf{ALG} = 1 + x_1 + 3x_2 -\lambda t$. Observe that in this case, it is impossible that $x_2 > x_1$, since job $1$ receives more processing than job $2$ throughout the schedule. Assume $x_2 \leq x_1$; then the competitive ratio is bounded by $1+1/(1+6\lambda)$ with the worst case being $x_2=\lambda t + \epsilon,x_1=1$.
\qedhere
\end{enumerate}
\end{enumerate}
\end{proof}
\subsubsection{Clustering and Label Identification}
\subsection{Mobility input data}
\label{sec:Methodology_MobilityData}
\input{Methodology/MobilityData}
\subsection{Travel demand zones generation}
\label{sec:Methodology_ZonesGeneration}
\input{Methodology/01_2_TravelDemandZonesGeneration}
\subsection{User segmentation based on zonal visiting profiles}
\label{sec:Methodology_ZonalVisitingProfiles}
\input{Methodology/03_1_ZonalVisitingProfile}
\subsection{User segmentation based on the spatial extent of locations visited}
\label{sec:Methodology_SpatialExtent}
\input{Methodology/03_4_GridCellVisitingHeatmap}
\subsection{Generating travel demand zones}
\input{Results and Analysis/01_2_TravelDemandZonesGeneration}
\subsection{User exploration segments}
\input{Results and Analysis/03_1_ZonalVisitingProfile}
\subsection{User segments by travel pattern spatial extent}
\input{Results and Analysis/03_4_GridCellVisitingHeatmap}
\section{Introduction}
\input{Introduction/Introduction}
\section{Methodology}
\input{Methodology/Method}
\section{Application}
\input{Application/Application}
\section{Results and Analysis}
\input{Results and Analysis/Results}
\section{Conclusion}
\input{Discussion_and_Conclusions/Discussion_and_Conclusions}
\section*{Acknowledgements}
This study is funded by Region Stockholm, project "Unravelling travel demand patterns using Access card data" RS 2019-0499. We also thank Region Stockholm for providing the smart card data that made this study possible. The authors also thank Isak Rubensson, Matej Cebecauer and Erik Jenelius for their support in the process.
\bibliographystyle{apa-good}
\section{Introduction}\label{sec:introduction}
Sports analytics has received extensive attention over the past few years. While a lot of work in sports analysis emphasizes on visual \cite{visual1,visual2} and tactical analysis \cite{tactical}, there have been recent attempts to predict the outcome of individual games and entire seasons. However, most of these attempts only predict the outcome without providing insights or internal statistics to corroborate their results. Another issue is the lack of large clean datasets for this task. While most of the existing datasets provide data summarising matches, there is little focus on the little intricacies of matches that might be of interest. To tackle this, our proposed \textbf{UCLData} dataset consists of both match and individual statistics from Champions League matches played over the past six years. Further, we handle dataset size issues with the help of some intuitive priors or handcrafted features which make our model robust and realistic.
In this work, our proposed novel autoencoder based architecture not only predicts the outcome of a game but also predicts its internal statistics, to give a more holistic picture of how a match is expected to pan out. Moreover, apart from match-wise statistics, we also present player-wise statistics to provide details about the contribution of each player and minor details about a match which are generally ignored. The code for our work is made publicly available.\footnote[2]{ \textit{https://github.com/ashwinvaswani/whatif}}
\section{Dataset}\label{sec:dataset}
The following section details our approach for creating a dataset from which we can derive meaningful predictions.
\begin{figure}[ht]
\centering
\includegraphics[width=0.8 \textwidth, height=65mm]{images/cl_overview.JPG}
\caption{Overview of Dataset}
\label{fig:dataset}
\end{figure}
\subsection{Data Collection}
We scrape data from the official UEFA Champions League website to build our dataset. Data from the years $2014$ to $2020$ is used. Overall we collect the data for $157$ knockout stage matches. We do not collect data for group stage matches because our predictions will be on the knockout stage games of the $2019-20$ season of the Champions League, and hence we did not want the context of group stage matches misleading our model.
To scrape the data, we use the Python library Beautiful Soup \cite{soup}, which allows us to extract the data directly from the relevant websites. We divide our data into two categories - team data and player data. Team data contains the statistics for the entire team playing in the match on both sides, while player data includes the statistics of the teams' individual players.
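The following is an illustrative scraping sketch with Beautiful Soup; the table layout and the CSS selector are placeholders and do not reflect the actual structure of the pages we scraped:
\begin{verbatim}
import requests
from bs4 import BeautifulSoup

def scrape_match_stats(url: str) -> dict:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    stats = {}
    for row in soup.select("table.stats tr"):      # hypothetical table layout
        cells = [c.get_text(strip=True) for c in row.find_all("td")]
        if len(cells) == 3:                        # home value, stat name, away value
            stats[cells[1]] = (cells[0], cells[2])
    return stats
\end{verbatim}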
To obtain team data, we use the official UEFA website for the individual matches. However, the official website does not contain the statistics for individual players. Hence, we extract individual player data from the FBref website \cite{fbref} and the Global Sports Archive website \cite{gsa}. Table \ref{tab:stats} summarises the attributes we considered for our dataset.
\begin{table}
\centering
\begin{tabular}{|l|l|}
\hline
& \hspace{5pt}Attributes \\
\hline
Team & \hspace{5pt}Total goals, total attempts, attempts on and off target, blocked shots, \\ & \hspace{5pt}shots which hit the woodwork, corners, off-sides, amount of possession, \\ & \hspace{5pt}total passes, passing accuracy, completed passes, distance covered, \\ & \hspace{5pt}number of balls recovered, tackles, clearances, blocks, yellow and \\ & \hspace{5pt}red cards, fouls.\\
\hline
Individual \hspace{5pt}&\hspace{5pt}Goals scored, total shots, shots on target, assists, interceptions, crosses,\\ & \hspace{5pt}fouls committed, player off-sides, total time played\\
\hline
\end{tabular}
\vspace{5pt}
\caption{List of attributes for a team and an player}
\label{tab:stats}
\end{table}
\subsection{Data Pre-processing}
Our data in its raw form contains numbers spanning a wide range - from hundreds in fields such as passes completed to only one or two in fields such as goals. Passing such fields to the model without any pre-processing would prevent it from accurately capturing this wide range. Hence we normalize our data to the range of zero to one using MinMax scaling. This ensures that our model does not give undue importance to any field because of scaling issues. After pre-processing, we create embeddings from our normalized data.
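A minimal sketch of this normalization step is shown below (assuming pandas and scikit-learn; the helper name is ours):
\begin{verbatim}
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

def normalize(df: pd.DataFrame) -> pd.DataFrame:
    scaler = MinMaxScaler(feature_range=(0, 1))
    scaled = scaler.fit_transform(df.values)     # maps each column to [0, 1]
    return pd.DataFrame(scaled, columns=df.columns, index=df.index)
\end{verbatim}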
\subsection{Creation of Embeddings}
There are some problems with using individual match data throughout. First, information from earlier matches cannot be used efficiently. This argument can be demonstrated with the help of an example. Let us say two teams A and B play against each other in years Y1 and Y2. Now, these two games are not independent as the two sides have played multiple other teams in this period and improved their game-play. Thus, it is not ideal to directly use individual match stats without capturing this context. Another issue is regarding players switching teams, which is quite common in sports. If a player plays in team A in year Y1 and switches to team B in year Y2, we need a way to represent it so that their individual information is maintained. We solve these problems with the use of embeddings. We create embeddings for each team and each player so that when two teams are matched up, these representations can capture the interactions with other teams and players and can preserve contextual information from previous data.
\section{Methodology}\label{sec:method}
\subsection{Handling problem of Data bias}\label{subsec:databias}
Our data consists of matches from the last six years of Champions League games. Although we found this data sufficient to capture relationships between teams and players, there were a few issues due to imbalance. Some teams, not being Champions League regulars, had fewer data points. We found that our initial results were biased by the small number of data points for these teams and lacked generalization. We attempted to overcome this issue with the help of prior information, which is important in the field of football analysis. We propose three additional hand-crafted features which are crucial in the context of a game. We also infer that regularisation and dropout help in solving some of these problems. We show in the following sections how the addition of each of these features helps in making our results more robust.
\subsubsection{Home / Away status:}\label{subsec:homeaway}
An important feature of the Champions League knockout stages is the Home / Away concept. A fixture consists of two games, one played at the home ground of each of the two teams. Figure \ref{fig:home}\subref{sub:1} shows some analysis of the importance of the location of the fixture.
\begin{figure}[ht]
\centering
\subfloat[Home / Away wins \label{sub:1}]{\includegraphics[width=0.62 \textwidth, height = 45mm]{images/Home_Away_Wins.png}}
\subfloat[Outcome vs Form - Colour intensity represents higher concentration of matches with a particular outcome. \label{sub:2}]{\includegraphics[width=0.38 \textwidth, height = 45mm]{images/OutcomeVSForm.png}}
\caption{Home / Away wins and Outcome vs Form}
\label{fig:home}
\end{figure}
It can be seen that there is a general trend for most teams to perform better at home than away, which is quite intuitive. We incorporate this information by adding, alongside our embeddings, an extra flag that indicates whether the team is playing at home when giving input to the model.
\subsubsection{Form Index:}\label{subsec:form}
Another essential feature, relevant to the context of a match, is the form of the two teams playing. It can be seen in Figure \ref{fig:home}\subref{sub:2} that at lower values of form ($<7$), teams are less likely to win, whereas in the middle range it is difficult to predict with form alone. We used the recent results of each team (results from the five most recent games before the fixture) to generate a form index by giving a score of three points to a win, one to a draw, and zero to a loss. This additional information helped in improving the results of certain matches, as a team would rather go into a game with a form of $15$ (five straight wins) than $0$ (five straight losses).
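A small helper reflecting this definition (a sketch; the function name is ours) could read:
\begin{verbatim}
def form_index(last_five_results):
    """Three points per win, one per draw, zero per loss."""
    points = {"W": 3, "D": 1, "L": 0}
    return sum(points[r] for r in last_five_results)

form_index(["W", "W", "D", "L", "W"])   # -> 10
\end{verbatim}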
\subsubsection{Experience:}\label{subsec:experience}
Figure \ref{fig:dataset} shows that some teams such as Real Madrid, being Champions League regulars have plenty of data points. In contrast, teams like Atalanta, who are new to the Champions League, have few data points. Hence, results of matches involving Atalanta were biased to the data from these limited games resulting in Atalanta performing exceptionally well against the odds in our initial experiments. While this can be considered a case of an "upset" or Atalanta being "dark horses", we wanted to improve our results and make our predictions robust. A critical factor is a team's experience in the Champions League, due to the pressure of playing in such a high-profile platform. We accumulated total matches played by every team in our data to account for this experience factor, which helped in solving the issue of predictions being biased because of limited data.
\subsection{Details of the Model}\label{subsec:details}
\begin{figure}
\centering
\begin{tabular}{c}
\subfloat[Teams Model]{\includegraphics[width=0.99 \textwidth]{images/model.png}}
\\
\subfloat[Players Model]{\includegraphics[width=0.99 \textwidth]{images/model_players.png}}
\\
\end{tabular}
\caption{Details of the models used}
\label{fig:models}
\end{figure}
Our network is based on the idea of autoencoders\cite{autoencoder}, which are widely used for data compression. The aim of our training process is to learn the various features of the teams and the players. To achieve this, we aim to learn an embedding in a latent dimension. We also want this latent representation to be robust to factors which cannot always be predicted from the data. The model architectures are as shown in Figure \ref{fig:models}. We add Gaussian noise to this embedding in order to create a "noisy" embedding, which is given as an input to the network. The intuition for adding Gaussian noise is that it helps take into consideration factors which are not consistent with the data (for example, a player having a lucky or an off day, or weather conditions that affect play). We use the embedding without Gaussian noise as our ground truth labels. The schematic of the training process is given in Figure \ref{fig:pipeline}. Thus, after the training process, the model has learned important insights about each team's and player's performance, which are later helpful during the playoffs to decide the winner of a particular match. For training, the loss is taken to be the \textbf{mean squared error}, and the metric that we have considered is the \textbf{root mean squared error} (RMSE). We used the Adam optimizer with a learning rate of $0.01$, and the batch size was $10$ embeddings for both our models. The RMSE values in the training and validation process are not metrics of performance of the model on new matches; rather, they are indicators of the model's efficiency in learning the embedding. The training RMSE value for the team model is $0.1380$, and for the players model it is $0.1127$. The validation RMSE values for both models are pretty close to the training values, at $0.1379$ for the team model and $0.1126$ for the players' model. The overall summary of our pipeline can be seen in Figure \ref{fig:pipeline}.
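A minimal sketch of this training setup is shown below (assuming Keras; the noise level, embedding dimension, and layer widths are illustrative choices, not the exact architecture of Figure \ref{fig:models}):
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(emb_dim: int = 32) -> tf.keras.Model:
    inp = layers.Input(shape=(emb_dim,))
    noisy = layers.GaussianNoise(0.1)(inp)        # noise is only active at training time
    hidden = layers.Dense(16, activation="relu")(noisy)
    out = layers.Dense(emb_dim, activation="linear")(hidden)
    model = models.Model(inp, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
                  loss="mse",
                  metrics=[tf.keras.metrics.RootMeanSquaredError()])
    return model

# Training uses the clean embeddings as both input and target:
# build_model().fit(embeddings, embeddings, batch_size=10, epochs=50)
\end{verbatim}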
\begin{figure}[ht]
\centering
\includegraphics[width=0.8 \textwidth, height=50mm]{images/arch.png}
\caption{Summary of our pipeline}
\label{fig:pipeline}
\end{figure}
\section{Results and Observations}\label{sec:results}
\begin{figure}[ht]
\centering
\includegraphics[width=0.9 \textwidth, height=45mm]{images/CL20_final.png}
\caption{Overview of Simulation}
\label{fig:sim}
\end{figure}
\begin{figure}[ht]
\centering
\subfloat[Correlation with goals \label{sub:cor}]{\includegraphics[width=0.5 \textwidth, height=50mm]{images/correlation.png}}
\subfloat[Average Goals per team \label{sub:cross}]{\includegraphics[width=0.5 \textwidth, height=50mm]{images/avg_goals.png}}
\caption{(a) Shows high correlation between goals and shots on target (b) Identifies high / low scoring teams}
\label{fig:corr}
\end{figure}
\begin{figure}[ht]
\centering
\begin{tabular}{c c c}
\subf{\includegraphics[width=0.32 \textwidth, height=40mm]{images/passes_updated.png}}
{(1a) Passes}
&
\subf{\includegraphics[width=0.32 \textwidth, height=40mm]{images/poss_updated.png}}
{(2a) Possession}
&
\subf{\includegraphics[width=0.32 \textwidth, height=40mm]{images/corners_updated.png}}
{(3a) Corners}
\\
\subf{\includegraphics[width=0.32 \textwidth, height=40mm]{images/sim_passes.png}}
{(1b) Simulation of Passes}
&
\subf{\includegraphics[width=0.32 \textwidth, height=40mm]{images/sim_poss.png}}
{(2b) Simulation of Possession}
&
\subf{\includegraphics[width=0.32 \textwidth, height=40mm]{images/sim_corners.png}}
{(3b) Simulation of Corners}
\\
\end{tabular}
\caption{Distribution of Passes, Possession and Corners in training data and in our simulations. The similarity between the plots show that our model is able to learn the distribution effectively. Figure (1a) and (1b) are Passes vs Teams, Figure (2a) and (2b) are Possession vs Teams and Figure (3a) and (3b) are Corners vs Teams}
\label{fig:stats}
\end{figure}
Figure \ref{fig:sim} gives an overview of the simulation of the interrupted knockout stages of Champions League $2019$-$20$. Our model predicts both match(Total Goals, Total Passes, Possession, Blocks, Corners, etc.) and player statistics(Who scored the goals, Assists, Shots, Crosses, etc.) for the two teams in the fixture. The winner(team with a higher aggregate score over two legs) proceeds to the next round.
In the case of a draw in the overall fixture (equal aggregate score from home/away legs), the team with the highest number of shots on target qualifies. We picked \textbf{Shots on target} as a decider, as it has the highest correlation with goals, which can be seen in Figure \ref{fig:corr}\subref{sub:cor}.
The first simulation is between Bayern Munich and Chelsea(2nd Leg). Bayern Munich beat Chelsea comprehensively in the first leg fixture, which was conducted before the season was interrupted. Bayern entered the game with a form of five wins in its last five games, whereas Chelsea had mixed results recently. The odds favored Bayern to win this tie, which is also backed up by our results. Bayern beat Chelsea comfortably with a scoreline of $2$-$1$ dominating the possession($57\%$) and total passing($597$) stats. These stats are also backed up, as our data shows that Bayern Munich is one of the best teams in Europe in terms of passing and possession stats, which can be seen in Figure \ref{fig:stats}(1a) and Figure \ref{fig:stats}(2a). The goal scorers for Bayern were Robert Lewandowski and Jerome Boateng. Jorginho was the lone scorer for Chelsea. Our analysis shows Lewandowski as one of the most prolific goal scorers in Europe over the past few years, which is backed up by these results.
A similar result was found in the simulation of the game between Barcelona and Napoli. Barcelona being European giants and one of the best passers in Europe dominated the passing($571$) and possession($56\%$) stats and won with a scoreline of $2-1$ at home with Rakitic scoring for Barcelona. Rakitic has a good record of scoring in Champions League knockouts, which is an interesting observation that our model is able to capture. Also, Barcelona has a great home record, as can be seen in Figure \ref{fig:dataset}, which is also corroborated by our results.
In another match, Paris (PSG) beat Atlético by two goals to one in both fixtures. Our analysis shows that Paris, a team with a good scoring record (from Figure \ref{sub:cross}), have a tendency to perform better against more defensive teams like Atlético. Cavani, who is one of the most prolific scorers, scored in the fixture- thus validating our results. Another big fixture was the game between Juventus and Man. City in which Ronaldo scored one goal, and Dybala scored two goals. However, their efforts were in vain, as Laporte scored two headed goals off corners, and Gabriel Jesus scored one to take Manchester City to the semi-finals against Paris. Paris, being the in-form team in the semi-finals, beat Manchester City by dominating them in terms of both possession($58\%$) and passing stats, where Cavani and Peredes scored. This fixture at Manchester City's home ground was level in terms of possession($50\%$) and passing statistics which can be explained by Man. City's strong record at home, wherein they lost only $3$ out of $18$ games as seen in Figure \ref{fig:dataset}. These results validate our model's ability to learn about interactions between features.
The other semi-final was a close fixture between Bayern Munich and Barcelona. Both teams, being two of the favorites, dominated the stats at home. They established a strong home record and the match ended in a draw, with Bayern decided as winners on the basis of the highest number of shots on target (as per our chosen method). Another interesting observation was that our model could not decide the winner in this fixture over both legs, which is expected since Bayern and Barcelona were favorites to win the competition.
The final was played between Bayern Munich and Paris, where Bayern Munich emerged victorious. Few exciting observations from this simulation are discussed as follows: Lewandowski scored two goals for Bayern Munich, making a substantial contribution to Bayern's success. Bayern Munich had the highest blocks per game in the simulations, which can be explained by Manuel Neuer's brilliant performances over the last few years. Finally, the results of our model are also backed up by the fact that Bayern Munich is one of the strongest teams in the competition, and had the best form leading up to the knockout stages.
Our model can not only be used for predicting match statistics, but also for tactical analysis to help teams prepare better. We have shown that our model can make optimal predictions, and thus teams can use these predictions to be better prepared against their opposition. For example, in the simulation of a game between Bayern and Chelsea, our model predicted a significantly large number of crosses from Bayern, which matches their playing style and also reflects how a team is likely to play against another. Such analysis can help teams to plan better by focusing more on defending crosses if it is the opposition team's expected mode of attack. In addition, masked relations such as the performance of a team against relatively aggressive/defensive teams can be analysed and used to alter tactics accordingly. Finally, in order to verify the robustness of our model, we present some visualizations in Figure \ref{fig:stats}. We show the distributions of Passes, Possession, and Corners in the training data and their distributions in predictions of our simulation. It is seen that Barcelona and Bayern lead most of these stats in the training plots, and similar distributions can be seen in the simulations. It is evident from the plots in Figure \ref{fig:stats} that our model is robust and can capture the information and interactions among features very well.
\section{Conclusion and Future Work}
Inspired by the recent focus on sports analytics, and by the community's curiosity about how the interrupted season would have concluded, we conducted a simulation to find out how the rest of the season would pan out. We present \textbf{UCLData}, which contains data from the UCL games between the seasons $2014$-$2020$. We also propose a novel architecture that can efficiently capture the information and interactions within this data and make robust predictions on how individual matches of the season will pan out. We also propose solutions to handle some common problems related to data bias. Finally, we predict the results of the remaining Champions League games and thus predict the winners of this year's Champions League.
Future work can focus on giving weightage to the time of the matches, i.e. older matches will have a lower weightage as compared to the newer ones in the embedding. Although our model seems to work great on UCLData, it would be interesting to assess its learning capabilities on future football events and data from other leagues as well. Our methodology can be extended to predict other specific statistics such as the exact time of goals. Also, in the cases of a tied fixture over both legs, a penalty shootout simulation can also be added. In addition we would like to extend this work to more sporting events in the future.
\section{Related Work}\label{sec:relatedworks}
Most of the previous approaches based on machine learning for predicting results of sports games aim to predict simply the outcome of matches, instead of running a simulation predicting all match-related statistics.
Kampakis \textit{et al.}\cite{kampakis2015using} used both player and team data for cricket matches to predict the performance of teams based on different features. A study by Rotshtein \textit{et al.}\cite{geneticball} used several predictive models to predict outcomes in the English Premier League and the Premiership Rugby in England. There are various works based on Bayesian models \cite{footbayesian,kickoffai,ieee34}, but these limit themselves to predicting the outcomes of individual football matches instead of running simulations. A work based on the Gaussian Process model by L. Maystre \textit{et al.}\cite{gaussian} attempts to learn the strengths and traits of a team by player wise contributions. This is an inspiration for our present study.
Huang \textit{et al.}\cite{worldcup} focus on using neural networks to predict the results of the $2006$ Football World Cup and this is the most similar to what we have tried to achieve in this paper. They achieved an accuracy of $76.9\%$ on the games' results, having special difficulty in predicting draws. Hucaljuk \textit{et al.}\cite{noconf} incorporated expert opinion into Champions League matches, but in this case, there was no increase in accuracy in their prediction of game scores. S. Mohammad Arabzad \textit{et al.} \cite{nn1} incorporated the use of neural networks for the Iranian premier league. Flitman \textit{et al.} \cite{nn2} developed a model that will readily predict the winner of Australian Football League games together with the probability of that win. This model was developed using a genetically modified neural network to calculate the likely winner, combined with a linear program optimisation to determine the probability of that win occurring in the context of the tipping competition scoring regime.
\section{Introduction}
In recent years, the interest in studying the emergence of complexity in team sports competition has increased \cite{ribeiro2012anomalous,clauset2015safe,kiley2016game,ibanez2018relative,ruth2020dodge,chacoma2020modeling,moritz2021risk,yamamoto2021preferential}.
Prompted by the advances in data acquisition, and theoretically supported by state-of-the-art statistical tools and artificial intelligence techniques, this area has moved beyond the boundaries of academia to position itself as a vigorous driver of innovation in the sports industry \cite{rajvsp2020systematic}.
Particularly, in the game of football, the use of network science to describe the dynamics of a match is currently ubiquitous, especially the utilization of the so-called passing networks \cite{gonccalves2017exploring,yamamoto2018examination,ichinose2021robustness}.
In that framework, the information to set the network's links is given by the number of passes between teammates.
The network structure, in this context, allows analysts to quantify the interaction in the field and the team performance via the use of classical network metrics like the clustering coefficient, the shortest path length, or the eigenvector centrality, among others \cite{martinez2020spatial,buldu2019defining}.
However, this approach considers only the interaction among teammates ignoring the interaction between opponents, i.e., neglecting the effect of the marking dynamic.
In this context, it is necessary to highlight that marking is the basis of the tactical system in the game of football \cite{sampaio2012measuring}, since it defines the strategy of the team.
Hence, to carry out a complete analysis of the game it is crucial to characterize this phenomenon.
In this paper, we aim to study the marking dynamics using network science.
To do so, we survey a database containing the coordinates of the players in the field at each second of three professional games.
With this information, we define a bipartite graph where the nodes are the players of both teams, but the connections can be only between opponents.
To establish the connections, we will use the euclidean distance of the players in the field since the opponents' closeness is strictly related to the marking \cite{low2021porous}.
This particular type of graph is known as {\it proximity network} and has been widely used to study other phenomena in complexity science \cite{isella2011s,farine2015proximity,cattuto2010dynamics,gauvin2013activity,cencetti2021digital}.
The manuscript is divided into three parts: In the next section, we describe the database and give further information on the acquisition process.
In the Results and discussion section, we first study the evolution of the proximity networks during the game. In our analysis, we observe and characterize statistical regularities that confirm the emergence of complexity in the system.
Second, we propose a model to simulate the players' motion in the field and analyze the outcomes by performing the same analysis we used to study the empirical data. Despite the model's simplicity, we achieve a satisfactory performance, obtaining agreement with what is observed in the real case.
In the last section, our main results are briefly summarized.
\section{Data}
We use tracking data from three professional football games provided by the company {\it Metrica Sports} \cite{metrica}.
They use artificial intelligence applied to visual recognition to extract, from video records and with high resolution, the coordinates on the pitch of the players of both teams.
The data is separated into three datasets, hereafter referred to as $DS1$, $DS2$, and $DS3$, and is publicly available for the community in \cite{data@metrica}.
All the data is anonymized, there are no references to players' names, teams, or matches.
Notice that the provided temporal resolution is $0.04~sec$. However, we averaged the data to avoid noise, preserving a resolution of $1~sec$.
\section{Results and discussion}
\subsection{Interaction distance}
In this section, we focus on defining the scope of the interaction range between players.
With this aim, we describe some aspects of players' movement during the first half of the game recorded in $DS1$.
Let us focus in Fig.~\ref{fi:1}. In panel (a), we show the players on the field at $t\approx20~min$. %
The heat map in the background shows the areas explored by the player of team 1 highlighted with a $star$, giving an example of the typical motion of a player in the field.
Since both teams move around in a single block, like a bird flock, it is helpful to analyze the system from the center of mass reference frame.
From this perspective, we calculated the players' average position, hereafter referred to as $\vec{c}_n$, where $n$ indicates the $n$th player in the field. The result is shown in panel (b).
The ellipse surrounding the starred player is calculated from the cloud of points given by all the positions explored by the player over time.
The ellipse's center corresponds to the player's average position, and the radii $r_1$ and $r_2$ to the standard deviation intervals calculated along the two principal components of the cloud, obtained via a $PCA$ analysis.
Note that the ellipse measures the range of the player's movement. We will call this the {\it player's action zone}.
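A sketch of this computation for a single player is shown below (assuming NumPy and scikit-learn, with the trajectory stored as an array of one position per second):
\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA

def action_zone(xy):
    """xy: array of shape (T, 2) with the player's coordinates in the
    center-of-mass reference frame, one row per second."""
    center = xy.mean(axis=0)                  # average position c_n
    pca = PCA(n_components=2).fit(xy - center)
    radii = np.sqrt(pca.explained_variance_)  # std. dev. along the two principal axes
    return center, radii, pca.components_     # ellipse center, (r1, r2), axis directions
\end{verbatim}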
We define $\delta$ as the distance between a player's average position and that of its closest opponent.
Note that this parameter is an emergent of the marking dynamics and gives a measure of the interaction distances between nearest opponents.
Since the marking dynamic is diverse due to the different players involved, we obtain a wide variety of values for $\delta$. However, all the values fall into the small interval $(3.88, 12)~m$, neglecting the contribution of the goalkeepers who do not mark opponents. The same calculation in datasets $DS2$ and $DS3$ gives similar results.
Therefore, we can conclude that this is a good measure for the range of interaction distances where the marking dynamic occurs.
\subsection{Proximity Networks}
We now focus on describing the marking dynamics with a bipartite temporal proximity network.
In this frame, the nodes are the players of both teams, and the links will be only between players of different teams.
To establish a link, we proceed as follows: At every second of the game, we compute the $2D$ euclidean distance between all the pair of opponents in the field. When the distance between two opponents is lower than a given threshold $\theta$, then we will set a link between them.
To select the threshold value in our study, we explore the range of interaction distances defined in the previous section.
Notice that the three games under analysis have slightly different ranges but are still very similar, as we can observe in the box plots of Fig.~\ref{fi:2}~(a).
For every value of $\theta$, we define a network at every second and compute the heterogeneity parameter $\kappa(t)= \frac{\avg{k^2}}{\avg{k}}$, where $k$ is the node degree. Then we calculate the mean value over time and obtain the relation $\theta$ vs.
$\avg{\kappa}$ shown in Fig.~\ref{fi:2}~(b).
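A sketch of this per-second computation is shown below (assuming NumPy, with the positions of the two teams at a given second stored as arrays of shape $(n_1,2)$ and $(n_2,2)$):
\begin{verbatim}
import numpy as np

def heterogeneity(team1, team2, theta):
    """kappa = <k^2>/<k> of the bipartite proximity network at one time step."""
    d = np.linalg.norm(team1[:, None, :] - team2[None, :, :], axis=-1)
    adj = d < theta                                               # bipartite adjacency
    degrees = np.concatenate([adj.sum(axis=1), adj.sum(axis=0)])  # all nodes, both teams
    k_mean = degrees.mean()
    return np.nan if k_mean == 0 else (degrees ** 2).mean() / k_mean
\end{verbatim}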
We observe a smooth evolution where, from the range of analyzed values of $\theta$, we obtain values of $\avg{\kappa} $ into the interval $(1,3)$.
The black horizontal line in the plot shows the theoretical percolation point, $\kappa=2$, derived by Molloy and Reed in \cite{molloy1995critical}.
For networks with no degree correlation in the thermodynamic limit, this point defines a continuous phase transition, where, for $\kappa <2$, all the components in the network are small clusters of trees, and for $\kappa >2$, a giant component of size proportional to $N$ emerges.
In our research, since the networks are relatively small, we cannot frame our results in the theory of phase transitions.
However, this result evidences the emergence of complexity in the system.
We now turn to analyze the temporal evolution of the heterogeneity parameter $\kappa$.
Since we aim to compare the curves of the three datasets, we tune the value of $\theta$ in each case such that $\avg{\kappa}\approx 2$.
With this approach we obtain the values $\theta_1=8.5$, $\theta_2=8$ and $\theta_3=9$.
In Fig.~\ref{fi:2}~(c), we show the evolution of $\kappa$ during the $90$ minutes (plus extra time) of the three games. As expected, we can observe that the values fluctuate around $\kappa=2$. This behavior indicates that the network structure oscillates between periods of high clustering and periods of high fragmentation.
In the supplementary material, we incorporate an animation to visualize this result.
The higher peaks in the series correspond to special situations of the game where the players group together, as for instance in a corner kick or a dead ball \cite{pulling2013defending}.
In this regard, we want to highlight that $\kappa$ cannot grow without bound. There is a limit graph, given by all the opponents connected, with 22 nodes, 121 edges, and $\kappa=11$.
We can also define a mean graph by using the average position of the players. For instance, for $DS1$, we have a mean graph with 17 nodes, 16 edges, and $\kappa=2.25$. Note $\kappa \approx 2$, as expected.
Finally, in Fig.~\ref{fi:2} panels (d), (e), and (f), we show a visualization of the proximity network in a given time for the three datasets.
Panel (d) exhibits a situation of high clustering while the ball is in motion, panel (e) a case of high fragmentation, and panel (f) shows a moment of the game when the players are marking in the context of a free-kick.
\subsection{Temporal structure of the time series $\kappa(t)$}
\label{se:avalanches}
The evolution of $\kappa(t)$ bears essential information to describe the development of the marking dynamics.
In this section, we analyze the statistical regularities of these series.
Let us focus firstly on characterizing the successive increments in the series, defined as $i(t)=\kappa(t+1)-\kappa(t)$.
In Fig.~\ref{fi:3} panels (a) and (b), we show the probability density and the power spectrum density of $I(t)=i(t)/\sigma_{\kappa}$ calculated for the three datasets. In the panels, for comparison, we additionally show the results obtained for Gaussian noise.
We observe that the data deviate from the Gaussian behavior, exhibiting heavy tails in the distribution and a decay in the power spectrum density at low frequencies.
Additionally, we performed a detrended fluctuation analysis (DFA) on the three series aiming to calculate the generalized Hurst exponent, $h$. We found values close to zero, which shows that the series are anti-persistent.
According to these results, the nontrivial behavior of the succession of increments reveals a complex time evolution of $\kappa$ in the three games.
In the following, we complement these results by studying the presence of self-similar behavior in the fluctuations.
We define an event $x$ of the series $\kappa(t)$ as the consecutive points starting when $\kappa>2$ and ending when $\kappa<2$.
Note that this is equivalent to the definition of avalanches in other contexts \cite{laurson2013evolution}.
In addition, given an event $x$, we define the event lifetime $T$ as the duration of the event; and the event size $S$ as the integral under the curve.
In the following, we perform a statistical analysis of the events' lifetime and sizes for the 1050 events gathered from the time series of $\kappa(t)$ linked to the three datasets.
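A sketch of this event extraction is shown below (assuming NumPy; here the event size is computed as the area above the threshold, which is one possible reading of ``the integral under the curve''):
\begin{verbatim}
import numpy as np

def extract_events(kappa, threshold=2.0):
    """Return a list of (lifetime T, size S) pairs from the series kappa(t)."""
    above = kappa > threshold
    events, start = [], None
    for t, flag in enumerate(above):
        if flag and start is None:
            start = t                                    # event begins
        elif not flag and start is not None:
            excursion = kappa[start:t] - threshold
            events.append((t - start, excursion.sum()))  # lifetime T, size S
            start = None
    return events
\end{verbatim}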
In Fig.~\ref{fi:4} (a), (b), and (c), we show the distribution of events' lifetime $P(T)$, the distribution of events' sizes $P(S)$, and the relation $T$~vs.~$\avg{S}$, respectively.
We can observe a power-law behavior in the three cases.
According to \cite{bak2013nature,chialvo2015we}, if these relations follow the universal functional forms:
\begin{equation}
\label{eq:1}
\begin{split}
P(T) &\sim T^{-\alpha},\\
P(S) &\sim S^{-\tau},\\
\avg{S} &\sim T^{\mu+1},
\end{split}
\end{equation}
and, in addition, the following relation between the exponents holds,
\begin{equation}
\label{eq:2}
\frac{\alpha -1}{\tau-1}= \mu+1,
\end{equation}
then, we are in the presence of a self-similar process.
To analyze this hypothesis, we perform a nonlinear fit on the empirical curves using the maximum likelihood method proposed in \cite{clauset2009power}.
We found that the empirical exponents closely fulfill Eq.~(\ref{eq:2}) over the whole range of values of $T$ and $S$; in particular, for the region delimited by $T \in (2, 32)~sec$ and $S \in (5,85)$, we obtain full agreement with $\alpha=2.085$, $\tau= 1.974$, and $\mu=0.115$.
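For reference, a minimal sketch of such a fit using the \texttt{powerlaw} Python package (which follows the approach of \cite{clauset2009power}; here \texttt{lifetimes} is assumed to hold the event lifetimes $T$) could read:
\begin{verbatim}
import powerlaw

fit = powerlaw.Fit(lifetimes, xmin=2.0, xmax=32.0, discrete=True)
print(fit.power_law.alpha)   # exponent estimate for P(T) ~ T^(-alpha)
\end{verbatim}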
Therefore, from scaling arguments \cite{baldassarri2003average}, it is expected that the average profile of an event of lifetime $T$, $\chi:=\avg{x(T,t)}$, scales as:
\begin{equation}
\label{eq:3}
\chi= T^\mu \rho(t/T),
\end{equation}
where events of different lifetimes, rescaled by the parameter $\mu$, should collapse onto a single scaling function given by $\rho(t/T)$.
With this idea in mind, in Fig.~\ref{fi:4} panel (d), we show several examples of averaged events profiles with different lifetimes and, in panel (e), the collapse using Eq.~(\ref{eq:3}).
In the latter, we normalize the profiles as $\tilde{\chi}=\chi/\chi_{MAX}$ where $\chi_{MAX}$ is the maximum value observed in the set of all the curves in the plot.
With this data, we perform a nonlinear fit via the function:
\begin{equation}
\label{eq:4}
\rho(t^\prime)=(A~t^\prime(1-t^\prime))^{\tilde{\mu}},
\end{equation}
with $t^\prime= t/T$, obtaining $A=1.37$ and $\tilde{\mu}=0.125$.
Note that the value obtained for $\tilde{\mu}$ is consistent with the value of $\mu$ obtained from Eqs.~(\ref{eq:1}).
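A minimal sketch of this collapse fit (assuming SciPy, with \texttt{t\_over\_T} and \texttt{chi\_norm} holding the pooled rescaled times $t/T$ and normalized profiles $\tilde{\chi}$) could read:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def rho(tp, A, mu):
    return (A * tp * (1.0 - tp)) ** mu       # Eq. (4)

popt, _ = curve_fit(rho, t_over_T, chi_norm, p0=[1.0, 0.2])
A_fit, mu_fit = popt                         # reported values: A = 1.37, mu = 0.125
\end{verbatim}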
\subsection{Model}
\subsubsection{The equations of players' evolution}
Football dynamics can be thought of as the outcome of a particular interaction between players.
Teammates interact while running a tactical scheme, and opponents interact on the marking.
In this section, based on the ideas of our previous work \cite{chacoma2021stochastic}, we propose to model the players' motion in the field within the following framework: In the center of mass frame of reference, we define
$\vec{r}_n(t)= \big(x_n(t), y_n(t)\big)^T$ and
$\vec{v}_n(t)= \big(v_n^x(t), v_n^y(t)\big)^T$
as the position and velocity of player $n$ at time $t$.
We propose that every player is bound to $(i)$ a place in the field related to their natural position in the tactical scheme of the team, $\vec{b}_n$, and $(ii)$ the other players, both teammates and opponents.
In this frame, the equation of motion for a player $n$ can be written as follow,
\begin{equation}
\centering
M_n \ddot{\vec{r}}_n =
-\gamma_n\vec{v}_n
+ k_{bn}(\vec{b}_n-\vec{r}_n) +
{\sum_m}^\prime k_{nm}(\vec{r}_m-\vec{r}_n),
\label{eq:eqmov}
\end{equation}
where the first term is a damping force, the second one is an ``anchor'' to the player's position, and the sum in the third term is the contributions of the interaction forces related to both teammates and opponents.
We propose different interaction constants along the horizontal and vertical axes; thus the parameters $\gamma_n$, $k_{bn}$, and $k_{nm}$ are $2D$ diagonal matrices of the form
$\gamma_n=
\big(\begin{smallmatrix}
\gamma_n^x & 0\\
0 & \gamma_n^y
\end{smallmatrix}\big)$,
$k_{bn}=
\big(\begin{smallmatrix}
k_{bn}^x & 0\\
0 & k_{bn}^y
\end{smallmatrix}\big)$, and
$k_{nm}=
\big(\begin{smallmatrix}
k_{nm}^x & 0\\
0 & k_{nm}^y
\end{smallmatrix}\big)$.
Note that these forces are not isotropic.
Moreover, since it is expected that players have similar mass, for simplicity we will consider $M_n=1$ for all the players in the field.
\subsubsection{Fitting the model's parameters}
In this section we show how to obtain the parameters $\gamma_n$, $k_{bn}$, $k_{nm}$, and $\vec{b}_n$ by fitting Eq.~(\ref{eq:eqmov}) to the datasets.
To perform this calculation, we considered the following:
\begin{enumerate}
\item The velocity is calculated as $\vec{v}_n(t):=\frac{\vec{r}_n(t+\Delta t)- \vec{r}_n(t)}{\Delta t}$ ($\Delta t=1~s$).
\item The discrete version of the system of first-order equations given by Eq.~(\ref{eq:eqmov}) provides the tool to estimate the states of the players at time $t+\Delta t$ by using as inputs the real states at time $t$ and the model's parameters,
\begin{equation*}
\begin{split}
\vec{r}_n(t+\Delta t)^{\prime} &=
\vec{r}_n(t)+\vec{v}_n(t)\Delta t\\
%
\vec{v}_n(t+\Delta t)^{\prime} &= \vec{v}_n(t)+ \\
& \big(-\gamma_n \vec{v}_n(t)-
\big(k_{bn}+{\sum_m}^\prime k_{nm}\big) \vec{r}_n(t)+\\
&{\sum_m}^\prime k_{nm}\vec{r}_m(t)+ k_{bn}\vec{b}_n\big) \Delta t.
%
\end{split}
\label{eq:discretesystem}
\end{equation*}
Where $\vec{r}_n(t+\Delta t)^{\prime}$ and $\vec{v}_n(t+\Delta t)^{\prime}$ are the model's estimations.
\item Note that, given the definition of the velocity in item $1$, we have $\vec{r}_n(t+\Delta t) = \vec{r}_n(t+\Delta t)^{\prime}$.
Hence, at every step, the model's parameters are only used to predict the new velocities.
\item We can choose the values of $\vec{b}_n$ such that the equilibrium positions of the players are their average positions, $\vec{c}_n$. By doing this, we can calculate $\vec{b}_n$ using Eq.~(\ref{eq:eqmov}), the values of $\vec{c}_n$, and the other parameters.
\item We define the error $\vec{\xi}_n(t) := \vec{v}_n(t+\Delta t)-\vec{v}_n(t+\Delta t)^\prime$, and fit $\gamma_n$, $k_{bn}$, and $k_{nm}$ by minimizing the sum $\sum_t \sum_n \big|\vec{\xi}_n(t)\big|$.
\end{enumerate}
With this methodology, we obtain a unique set of parameters that control the players' motion equations and, consequently, the dynamics of the game in each dataset.
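As an illustration, note that after substituting the choice of $\vec{b}_n$ from item 4, the force becomes linear in the fitted parameters when written in centered coordinates $\vec{r}_n-\vec{c}_n$; a simplified per-player, per-axis sketch (using ordinary least squares instead of the absolute-error minimization of item 5) could therefore read:
\begin{verbatim}
import numpy as np

def fit_player_axis(u, u_others, v, dt=1.0):
    """u: (T,) centered positions of player n on one axis; u_others: (T, N-1)
    centered positions of the other players; v: (T,) velocities of player n."""
    a = np.diff(v) / dt                               # observed accelerations
    T = len(a)
    X = np.column_stack([-v[:T],                      # damping term  (-gamma * v)
                         -u[:T],                      # anchor term   (-k_b * u)
                         u_others[:T] - u[:T, None]]) # couplings     (k_m * (u_m - u))
    coef, *_ = np.linalg.lstsq(X, a, rcond=None)
    gamma, k_b, k_couplings = coef[0], coef[1], coef[2:]
    return gamma, k_b, k_couplings
\end{verbatim}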
\subsubsection{Simulations}
Once the fit is performed and the model's parameters are obtained, we use these outcomes to simulate the coupled system of equations given by Eq.~(\ref{eq:eqmov}).
To input energy into the system, we add uncorrelated Gaussian noise, such that
$\vec{\xi}_n(t)= \vec{\sigma}_n \xi_n$, with
$\avg{\xi_n(t)}=0$,
$\avg{\xi_n(t)\xi_n(t^\prime)}= \delta (t-t^\prime)$,
$\avg{\xi_n(t)\xi_m(t)}= 0$.
Here, for $\vec{\sigma_n}= (\sigma^x_n, \sigma^y_n)$, we use the scale of the velocity fluctuations measured in the fit.
In this manner, the noise acts as a proxy to introduce higher-order contributions of the interaction forces into the model.
In this frame, we can write,
\begin{equation}
\begin{split}
d\vec{r}_n &= \vec{v}_n dt\\
d\vec{v}_n &= \big[
-\big(k_{nb}+{\sum_m}^\prime k_{nm}\big) \vec{r}_n
+ {\sum_m}^\prime k_{nm}\vec{r}_m -\\
&\gamma_n \vec{v}_n
+ k_{nb}\vec{b}_n
\big] dt + \vec{dW_n},
\end{split}
\label{eq:system2}
\end{equation}
where $d\vec{W}_n = \vec{\sigma}_n \xi_n dt$.
To solve this system of stochastic differential equations (SDEs), we use the Euler--Maruyama algorithm for Ito equations.
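A sketch of a single Euler--Maruyama step for this system (assuming NumPy; the array layout is our own convention) could read:
\begin{verbatim}
import numpy as np

def euler_maruyama_step(r, v, gamma, kb, K, b, sigma, dt=0.1, rng=np.random):
    """r, v, gamma, kb, b, sigma: (N, 2) arrays; K: (N, N, 2) couplings k_nm
    (zero on the diagonal).  Returns the updated positions and velocities."""
    coupling = np.einsum('nmd,nmd->nd', K, r[None, :, :] - r[:, None, :])
    force = -gamma * v + kb * (b - r) + coupling
    dW = sigma * rng.normal(size=r.shape) * np.sqrt(dt)   # Wiener increments
    return r + v * dt, v + force * dt + dW
\end{verbatim}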
We performed simulations using the set of parameters obtained by fitting the first half of the game recorded in $DS1$.
We simulated a continuous game of $10^5~sec$. From this outcome, we extracted the players' trajectories to analyze.
In the following, we extend our discussion on the results.
Let us focus on Fig.~\ref{fi:5} panels (a) and (b). Here we compare the players' empirical action zones with those obtained from the simulations.
We can see a reasonably good approximation, which indicates that the model allows us to reproduce the players' motion in the field.
On the other hand, in panel (c), we show the time evolution of parameter $\kappa$. Notice that to calculate the proximity networks, we use $\theta=8.5$, the same value used for $DS1$. In the inset, we show the total evolution, and in the main figure, the values for the first $500~sec$. In the latter we can see the emergence of avalanches as in the empirical case.
Lastly, to analyze the temporal structure of the series, we calculate the successive increments and study, as we did in section \ref{se:avalanches}, the probability density and the power spectral density. In panel (d), we show these results. We can see that the data deviate from Gaussian behavior. Moreover, we performed a $DFA$ analysis to obtain the generalized Hurst exponent, which gave a value close to zero, indicating the presence of anti-persistency.
These results are consistent with the empirical case, indicating that the model succeeds in capturing the overall statistics of the complex evolution of the heterogeneity parameter.
\subsubsection{Analysis of avalanches in the series $\kappa(t)$ obtained from simulations}
We repeat the analysis of self-similarity that we performed in section \ref{se:avalanches} but in this case, using the series of $\kappa$ obtained from the simulations.
In Fig.~\ref{fi:6}, panels (a), (b), and (c), we show the distributions of avalanches' lifetimes, sizes, and the relation $T$ vs. $\avg{S}$, respectively.
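For reference, a sketch of how lifetimes $T$ and sizes $S$ can be extracted from a series $\kappa(t)$ is given below. Here an avalanche is taken to be an excursion of $\kappa$ above its time average, which is one common convention and may differ in detail from the definition adopted in section~\ref{se:avalanches}; the synthetic series only makes the sketch self-contained.
\begin{verbatim}
import numpy as np

def avalanches(kappa, dt=1.0, level=None):
    """Lifetimes T and sizes S of the excursions of kappa above `level`."""
    if level is None:
        level = kappa.mean()
    above = kappa > level
    change = np.diff(above.astype(int))
    starts = np.where(change == 1)[0] + 1
    ends = np.where(change == -1)[0] + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, len(kappa)]
    T = (ends - starts) * dt
    S = np.array([np.sum(kappa[s:e] - level) * dt
                  for s, e in zip(starts, ends)])
    return T, S

# Synthetic stand-in for the simulated kappa series.
rng = np.random.default_rng(0)
kappa = np.cumsum(rng.normal(size=10000)) * 0.01 + rng.normal(size=10000)
T, S = avalanches(kappa)
\end{verbatim}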
In this case, we can see a cut-off in the distributions $P(T)$ and $P(S)$, particularly evident to the naked eye in the latter.
This cut-off indicates that the model cannot generate the larger avalanches observed in the empirical case.
This is because there are particular waiting times during the game when the players pack together in a sector of the field, for instance, during a corner kick or a dead ball. In these moments, $\kappa$ increases, shaping an avalanche that is not related to the players' motion in the field but to a particular dead time in the game.
Our minimalist model cannot capture this phenomenon; therefore, we do not see the same tail in the distribution of avalanches sizes $P(S)$.
However, if we fit these three relations using Eqs.~(\ref{eq:1}), as we did with the empirical case, and in a similar range $T \in (2, 29)~sec$ and $S \in (5,73)$, we obtain $\alpha=2.041$, $\tau= 1.944$, and $\mu=0.1$.
Note that these values fulfill the scaling relation expressed in Eq.~(\ref{eq:2}).
Therefore, as in the empirical case, we can write a scaling law to universally describe the avalanches' profile. In Fig.~\ref{fi:6} panels (d) and (e), we show average events of different lifetimes and the collapse of the curves into a universal form, respectively. We fit the collapse via Eq.~(\ref{eq:4}), obtaining $\tilde{\mu}=0.104$, which is consistent with the value obtained for the parameter $\mu$.
Additionally, in panel (e), we include the nonlinear fit that we have previously shown in Fig.~\ref{fi:4} (e) (see the curve in black dashed line) to exhibit the differences between the results of the model and the empirical case.
\subsubsection{The effect of the tactical system structure in the proximity networks and the evolution of $\kappa$.}
If the process $\kappa(t)$ were a Wiener process, the average shape of the fluctuations would be a semi-circle, with $\tilde{\mu}=1/2$ \cite{baldassarri2003average}.
In our case, a value $\tilde{\mu}\ll 1/2$ indicates anti-persistency, where the evolution of the walker suffers a restitution force that drives it to the mean value.
This leads to the breakdown of the scaling laws at large times, affecting the events' lifetimes $T$ and, consequently, the average shape of the fluctuations.
According to \cite{baldassarri2003average}, to consider this effect, we can model the evolution of $\kappa(t)$ such as,
\begin{equation}
\kappa(t+1)= \kappa(t) - \frac{1}{\tau}\; \kappa(t) + \xi(t),
\label{eq:7}
\end{equation}
where $\xi(t)$ is a random variable and the term $\frac{1}{\tau}\; \kappa(t)$ represents the effect of a parabolic well which introduces a characteristic time $\tau$ in the system.
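A few lines suffice to generate the process of Eq.~(\ref{eq:7}); the noise amplitude below is an arbitrary choice made only for illustration.
\begin{verbatim}
import numpy as np

def kappa_walk(tau, steps=100000, noise=1.0, seed=0):
    """Discrete walk kappa(t+1) = kappa(t) - kappa(t)/tau + xi(t)."""
    rng = np.random.default_rng(seed)
    kappa = np.zeros(steps)
    for t in range(steps - 1):
        kappa[t + 1] = kappa[t] - kappa[t] / tau + noise * rng.normal()
    return kappa

series = kappa_walk(tau=6.0)
\end{verbatim}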
Via the image method, one can calculate the analytical expression for the average fluctuation as a function of $t$, $T$, and $\tau$,
\begin{equation}
\chi=
\sqrt{\frac{4\;\tau}{\pi}
\frac{(1-e^{-2t/\tau})(1-e^{-2(T-t)/\tau})}
{(1-e^{-2T/\tau})}}.
\label{eq:8}
\end{equation}
Aiming to estimate the value of $\tau$, we used Eq.~(\ref{eq:8}) to perform a nonlinear fit of the average shape of the fluctuations. From this procedure, we obtain $\tau=6.06\pm 0.07~ sec$.
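Such a fit can be reproduced, for instance, with \texttt{scipy.optimize.curve\_fit}; in the sketch below the arrays \texttt{t\_data} and \texttt{chi\_data} are placeholders for the measured average profile of events of a given lifetime, not the actual data.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def chi(t, tau, T):
    """Average fluctuation shape chi(t; T, tau) defined above."""
    return np.sqrt(4 * tau / np.pi
                   * (1 - np.exp(-2 * t / tau))
                   * (1 - np.exp(-2 * (T - t) / tau))
                   / (1 - np.exp(-2 * T / tau)))

# Placeholder data: average profile of events of lifetime T0 (with noise).
T0 = 20.0
t_data = np.linspace(0.5, T0 - 0.5, 40)
rng = np.random.default_rng(1)
chi_data = chi(t_data, 6.0, T0) * (1 + 0.05 * rng.normal(size=40))

popt, pcov = curve_fit(lambda t, tau: chi(t, tau, T0),
                       t_data, chi_data, p0=[5.0])
tau_hat, tau_err = popt[0], np.sqrt(pcov[0, 0])
\end{verbatim}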
Interestingly, this value is of the same order of magnitude as the average ball possession time reported in \cite{chacoma2020modeling}, which is $\sim13.72~sec$. Therefore, the emergence of this time scale in the system could be related to teams moving from offensive to defensive positions or vice versa.
Note that the intrinsic marking dynamics is strictly related to the anti-persistency: the players leave their centers to mark an opponent and return to their action zones to cover space, which induces a damped dynamics in the marking.
In other words, the constraint that the players experience in order to maintain the structure of the tactical system is responsible for the anti-persistence that we see in the time series $\kappa(t)$.
Moreover, we can observe that this effect is stronger in the model, where we explicitly govern the players' motion with restitution forces. This could indicate that, in the empirical case, the players actually move around with more freedom than in the model.
\section{Conclusions}
Summarizing, we observed that the proximity network evolves following the marking dynamics, exhibiting oscillating periods of high fragmentation and high clusterization.
To characterize this phenomenon, we calculated the heterogeneity parameter and found that the system evolves in a regime similar to a transition in percolation theory.
As we previously remarked, since the system is far from the thermodynamic limit, we cannot frame our results in the theory of phase transitions. Our observations, however, evidence the emergence of complexity in the marking dynamics.
We were able to study this complex behavior by analyzing the temporal structure of the time series of $\kappa$.
We found the presence of anti-persistency and self-similarity, which we characterized by uncovering a scaling law in the average shape of the fluctuations.
Lastly, we proposed a model to simulate the players' motion on the field.
From simulations, we obtained the evolution of a synthetic proximity network that we analyzed with the same methodology we used in our analysis of the empirical data.
Remarkably, the model showed a good performance in recovering the statistics of the empirical trajectories; and, consequently, the statistics of the temporal structure of the parameter $\kappa$.
The correlations observed in the marking dynamics could be related to the high level of coordination required to keep running the tactical system.
At each game challenge, the entire team will proceed in coordination to give a response.
They will tend to react optimally, according to the training precepts received.
Therefore, it is expected that, in similar situations, they will produce equivalent responses.
In our framework, these responses are encoded in the proximity networks as recurrent configurations and yield the memory effects we observe in the evolution of the heterogeneity parameter.
Moreover, the presence of correlations reveals the players are strongly connected.
These connections drive the team to behave flexibly and adaptively to stimuli, something crucial for the development of the game.
We can compare this ``state of alert'' of the teams with what occurs in bird flocks or fish shoals, in which connections among the individuals make the group better able to avoid predators \cite{albano1996self,cavagna2010scale,hattori1999self,juanico2007dissipative}.
The difference between these cases and the dynamics of a football team relies on the cognitive capabilities required to achieve this level of organization among the group's individuals.
The emergence of complexity in the game of football is somewhat similar to that observed in living systems.
In these systems, when the delicate equilibrium between inhibition and promotion, cooperation and competition, is unbalanced, something abnormal occurs.
This effect is observed, for example, in the appearance of cancer cells \cite{SEDIVY1999271}, in diseases of the nervous system \cite{moustafa2016complexity}, in diseased mitochondria \cite{zamponi2018mitochondrial}, etc.
When the complexity of the system is lacking, its functioning is severely damaged.
Analogously, in the case of football dynamics, the lack of complexity would be related to poorly played, low-level games.
Therefore, our framework provides a tool that can help to detect a lack of performance in the teams.
Finally, we point out that other tactical-oriented types of analysis can be performed via the study of the temporal proximity network.
For instance, the value of $\theta$ that brings the marking system to the critical threshold could indicate the type of marking: A low value can correspond to man-to-man marking, whereas a high value to a zone or a hybrid system.
In the same line, it is possible to study the players' performance by characterizing recurrent configurations in the networks and the formation of small communities.
In this regard, we leave the door open to further studies on this topic.
\section*{Acknowledgement}
This work was partially supported by CONICET under Grant number PIP 112 20200 101100; FonCyT under Grant number PICT-2017-0973; and SeCyT-UNC (Argentina).
\section{Introduction}
Suppose $n$ runners are running on a circular track of circumference 1. It is not a race. The runners all start at the same point on the track and at the same time, and each runs at their own distinct constant speed. We say that a runner is lonely at time $t$ if the distance (along the track) to the nearest of the other runners is at least $1/n$. The lonely runner conjecture asserts that every runner is lonely at some point in time. This problem originally arose in the context of diophantine approximations and view obstruction problems \cite{wills1967zwei,cusick1973view}. (The poetic formulation given here is due to Goddyn \cite{bienia1998flows}.) It is easier to work with the following restatement of the conjecture, which we obtain by subtracting the speed of one runner from all speeds (then one of the runners is `standing still'). In the original problem speeds are real-valued, but it is known that the general problem can be reduced to the case where all speeds are integers (see \cite{bohman2001six}). So we henceforth consider only integer speeds.
For $x$ real, let $ \{ x \}$ be the fractional part of $x$ (i.e. $\{x\} = x - \lfloor x \rfloor $).
\begin{conj}[Lonely Runner Conjecture] \label{runnerdream}
For any $n$ positive integers $ v_1 < v_2 < \dots < v_n $, \eqn{\label{constraint}\exists t\in \mathbb{R}\ \text{ such that }\ 1/(n+1) \le \{ v_i t\} \le n/(n+1) \ \text{ for } \ i = 1, \dots n.}
\end{conj}
\noindent
There are examples of sets of speeds which ``almost'' break Condition \eqref{constraint}.
\begin{defi}
\label{tinsdef}Positive integers $ v_1 < v_2 < \dots < v_n $ are said to be a {\bf tight instance} for the lonely runner conjecture if Condition \eqref{constraint} holds, but only with equality. In other words, the instance $ v_1 < v_2 < \dots < v_n $ is tight if (\ref{constraint}) holds
and there does not exist $ t\in \mathbb{R}$ such that $ 1/(n+1) < \{ v_i t\} < n/(n+1)$ for $i = 1, \dots n$.
An instance that is neither a counterexample nor tight is a {\bf loose instance} of the lonely runner conjecture. An instance is loose if
\eqn{\label{loose}\exists t\in \mathbb{R}\ \text{ such that }\ 1/(n+1) < \{ v_i t\} < n/(n+1) \ \text{ for } \ i = 1, \dots n.}
\end{defi}
\noindent
The canonical example of a tight instance is $(1,2,\dots,n)$.
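These notions are easy to probe by brute force for small $n$: finding a time $t$ that satisfies the strict inequalities in \eqref{loose} certifies that an instance is loose, while failing to find one among rational times of bounded denominator is, of course, not a proof of tightness. The following sketch is purely illustrative.
\begin{verbatim}
from fractions import Fraction

def dist_to_int(x):
    """Distance from the non-negative rational x to the nearest integer."""
    f = x - int(x)
    return min(f, 1 - f)

def loose_witness(speeds, denom_bound=200):
    """Search times t = a/q witnessing the strict (loose) condition."""
    n = len(speeds)
    for q in range(2, denom_bound + 1):
        for a in range(1, q):
            t = Fraction(a, q)
            if all(dist_to_int(v * t) > Fraction(1, n + 1) for v in speeds):
                return t
    return None

print(loose_witness([1, 2, 3, 4]))   # canonical tight instance: None
print(loose_witness([1, 2, 3, 5]))   # loose: prints the witness 1/4
\end{verbatim}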
Tight instances were studied by Goddyn and Wong \cite{Goddyn2006TIGHTIO}. They showed that the canonical tight instance can be modified to create another tight instance by accelerating a speed that is slightly less than $n$ -- and satisfies certain number theoretic conditions -- by a suitable integer factor. For example, $(1,2,3,4,5,7,12)$ is a tight instance. They also showed that small sets of speeds in the canonical instance that satisfy these conditions can be simultaneously accelerated to produce tight instances.
Tao \cite[Proposition 1.5]{tao2017remarks} showed that \eqref{constraint} holds if $v_1,\dots,v_n\le 1.2n$. He also suggested that proving the conjecture holds
for $ v_1, \dots, v_n \le 2n$ is a natural target: the condition $v_1,\dots,v_n\le 2n$, unlike $1.2n$, allows multiple tight instances. So the desired statement would prove the conjecture for instances that are in the vicinity of tight instances. We prove an approximate version of this target:
\begin{thm}
\label{thm:run}
There exists a constant $\mathcal{C}$ such that for sufficiently large $n$, if $n<v_n\le 2n-\exp(\mathcal{C}\cdot(\log\log n)^2)$, then positive integers $v_1<v_2<\dots<v_n$ are a loose instance for the lonely runner conjecture.
\end{thm}
\noindent
Unfortunately, as seen in the theorem statement, the underlying objective of separating tight instances from counterexamples is not achieved.
The key ingredient in our proof of Theorem~\ref{thm:run} is inspired by coprime mappings.
\begin{defi}
If $ A, B$ are sets of integers then a bijection $ f: A \to B$ is a coprime mapping if $ a$ and $ f(a)$ are coprime for every $a \in A$.
\end{defi}
\noindent
Initial interest in coprime mappings was focused on the case $A = [n]$.
D.J. Newman conjectured that for all $n\in\mathbb{Z}^+$ there is a coprime mapping between $[n]$ and any set of $n$ consecutive integers.
This conjecture was proved by Pomerance and Selfridge \cite{pomerance1980proof} (after Daykin and Baines \cite{daykin_baines_1963}
and Chvátal \cite{chvatal} established special cases of the conjecture.) Robertson and Small \cite{robertsonsmall} determined when
a coprime mapping exists between $A=[n]$ (or $A=\{1,3,5,\dots,2n-1\}$) and an $n$-term arithmetic progression (AP).
More recently there has been interest in coprime mappings where neither $A$ nor $B$ contains 1. Note that if $A = \{2, \dots, n+1 \}$, the integer $s$ is the product of all primes that are at most $n+1$,
and $ s \in B $ then there is no coprime map from $A$ to $B$. So we must place some restriction on the set $B$ if we consider sets
$A$ that do not contain 1.
Larsen et al \cite{Larsen2017CoprimeMO} considered sets of adjacent intervals of integers. They conjectured that
if $ 1 \le \ell < k$ and $ k \neq 3$ then there is a coprime mapping from $A=\{\ell+1,\dots,\ell+k\}$ to $ B=\{\ell+k+1,\dots,\ell+2k\}$.
The application of coprime mappings that we use in the context of the lonely runner conjecture requires the further generalization to the case
that $A$ and $B$ are not adjacent. For each positive integer $n$ we define the number $ f(n)$ to be the smallest integer
such that for all $2m \ge f(n)$ there is a coprime mapping between every pair of intervals $A,B \subset [n]$ with $|A|=|B|=2m$.\footnote{We force the cardinality to be even, because when the cardinality is odd, $A$ and $B$ can both have a majority of even numbers, making a coprime mapping impossible.}
Note that the example we give above
establishes the bound
\eq{ f(n) > (1-o(1))\log n. }
Note further that the Conjecture of Larsen et al requires only a linear upper bound on $f(n)$. We establish a stronger asymptotic bound.
\begin{thm}
\label{thm:co}
$f(n) = \exp(O((\log\log n)^2)). $
\end{thm}
\noindent
We do not believe that this result is sharp. Indeed, we conjecture that $ f(n)$ is at most polylogarithmic in $n$. We also resolve the conjecture of Larsen et al.
\begin{thm}
\label{tnconj}
If $ 0 \le \ell < k$ and $ k \ge 4$ then there is a coprime mapping from $A=\{\ell+1,\dots,\ell+k\}$ to $\ B=\{\ell+k+1,\dots,\ell+2k\}$.
\end{thm}
\noindent
Note that there is no coprime mapping from $A = \{2,3,4\} $ to $ B = \{ 5,6,7\}$. Thus, some condition on $ k$ is required.
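Whether a coprime mapping exists between two given finite sets can be checked directly with a maximum bipartite matching; the short routine below (a standard augmenting-path matching, included purely for illustration) reproduces, for instance, the failure for $A=\{2,3,4\}$, $B=\{5,6,7\}$.
\begin{verbatim}
import math

def coprime_mapping(A, B):
    """Return a coprime bijection A -> B as a dict, or None if none exists."""
    A, B = list(A), list(B)
    adj = {a: [b for b in B if math.gcd(a, b) == 1] for a in A}
    match_b = {}                        # b -> a currently matched to it

    def try_assign(a, seen):            # augmenting-path step
        for b in adj[a]:
            if b not in seen:
                seen.add(b)
                if b not in match_b or try_assign(match_b[b], seen):
                    match_b[b] = a
                    return True
        return False

    for a in A:
        if not try_assign(a, set()):
            return None
    return {a: b for b, a in match_b.items()}

print(coprime_mapping(range(2, 5), range(5, 8)))    # None: no mapping exists
print(coprime_mapping(range(2, 6), range(6, 10)))   # a coprime bijection
\end{verbatim}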
The remainder of this paper is organized as follows. In the next Section we prove our central result, which can be viewed as a number theoretic version of Hall's condition. Theorem~\ref{thm:co} follows immediately from the central result. In Section~3 we use the central result to prove Theorem~\ref{thm:run}. In the final section we prove the conjecture of Larsen et al regarding coprime mappings between adjacent intervals; that is, we prove Theorem~\ref{tnconj}.
\section{The central result}
Our central result is as follows:
\begin{thm}\label{evencase}
There exists a constant $\mathcal{C}$ such that the following is true for sufficiently large $n$. If $I,J\subseteq [n]$ are both sets of $2m$ consecutive integers and $2m\ge\exp(\mathcal{C}\cdot(\log\log n)^2)$, then for all subsets $S\subseteq I$, $T\subseteq J$ that satisfy $|S|+|T|\ge 2m$, exactly one of the following happens:
\begin{enumerate}
\item $S=\emptyset$;
\item $T=\emptyset$;
\item $S=I\cap 2\mathbb{Z}$ and $T=J\cap 2\mathbb{Z}$;
\item there exist $s\in S$, $t\in T$ that are coprime.
\end{enumerate}
In particular, if $|S|+|T|>2m$ then there is a coprime pair.
\end{thm}
\begin{rem}
In order to apply Hall's Theorem to establish the existence of a coprime mapping between $I$ and $J$, it suffices to show that every pair of sets $ S \subseteq I, T \subseteq J$ such that $ |S| + |T| \ge 2m+1$ contains a coprime pair $s,t$ such that $s \in S$ and $t \in T$. Thus, Theorem~\ref{thm:co} follows immediately from Theorem~\ref{evencase}. Note that Theorem~\ref{evencase} is stronger than necessary for this purpose as it treats the case $ |S| + |T| = 2m$. This case is needed for the application to the lonely runner problem. (I.e. for the proof of Theorem~\ref{thm:run}.)
\end{rem}
\noindent
We now turn to the proof of Theorem~\ref{evencase}.
Lemma~\ref{interlem} is the core of the proof. It has a weaker condition (APs) and a weaker result (2-coprime) than Theorem~\ref{evencase}. For the remainder of this section we assume $ I,J \subset [n]$ are APs of cardinality $m$ with common difference 1 or 2.
\begin{defi}
Two numbers $s,t\in\mathbb{Z}^+$ are said to be 2-coprime if no prime other than 2 divides them both. E.g., (3,4), (12,16).
\end{defi}
\begin{lem}\label{interlem}
There exists a constant $\mathcal{C}$ such that the following is true for sufficiently large $n$.
If $m \ge \exp(\mathcal{C}\cdot(\log\log n)^2)$ then for all
nonempty subsets $S\subseteq I$, $T\subseteq J$ such that $|S|+|T|\ge m$ there exist $s\in S,\ t\in T$ that are 2-coprime.
\end{lem}
\noindent
The main ingredients of the proof of Lemma~\ref{interlem} are the following two lemmas and one fact.
\begin{lem}\label{rgcoprime}
Let $S\subseteq I$, $T\subseteq J$ be nonempty subsets such that $|S|+|T|\ge m$, and let $r=m/|S|$. If $r\ge16$, $m>5\log(n)^{\log_2(2r)}$, and $n$ is sufficiently large
then there exist $s\in S,\ t\in T$ that are 2-coprime.
\end{lem}
\begin{lem}\label{rlcoprime}
Let $S\subseteq I$, $T\subseteq J$ be nonempty subsets such that $|S|+|T|\ge m$, and let $r=m/|S|$. If $ 2\le r\le16$, $m>\log(n)^3$ and $n$ is sufficiently large then there exist $s\in S,\ t\in T$ that are 2-coprime.
\end{lem}
\begin{fact}[See \cite{10.2307/24489279}]\label{coprimeconsec}
There are positive constants $c_1,c_2$ such that for all $n\in \mathbb{Z}^+$, among any sequence of $c_1\cdot \omega(n)^{c_2}$ consecutive integers, there is at least one that is coprime to $n$. (Here $\omega(n)$ is the number of distinct prime divisors of $n$.)
\end{fact}
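For intuition, the quantity controlled by Fact~\ref{coprimeconsec} is easy to probe numerically; the sketch below measures the longest run of consecutive integers that all share a factor with $n$, for moduli chosen (purely for illustration) as products of the first few primes.
\begin{verbatim}
import math

def omega(n):
    """Number of distinct prime factors of n (trial division)."""
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 1
    return count + (1 if n > 1 else 0)

def longest_noncoprime_run(n):
    """Longest run of consecutive integers sharing a factor with n."""
    best = run = 0
    for x in range(1, 2 * n + 1):       # the pattern has period n
        if math.gcd(x, n) > 1:
            run += 1
            best = max(best, run)
        else:
            run = 0
    return best

for n in (30, 210, 2310, 30030):        # products of the first few primes
    print(n, omega(n), longest_noncoprime_run(n))
\end{verbatim}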
\noindent
We now prove Lemma \ref{interlem}, assuming Lemma~\ref{rgcoprime} -- Fact~\ref{coprimeconsec}.
We will prove Lemmas \ref{rgcoprime} and \ref{rlcoprime} immediately after. We end the section with the proof of Theorem~\ref{evencase}.
\begin{proof}[Proof of Lemma \ref{interlem}]
Set $\mathcal{C} =2c_2\log_2(e)$, where $c_2$ is the constant in Fact~\ref{coprimeconsec}.
Without loss of generality, assume that~$|S|+|T|=m$. Since the roles of $S$ and $T$ are the same, one can assume that $|S|\le m/2$. That is, $r:=m/|S|\ge 2$. Note that Lemma~\ref{interlem} follows immediately from either Lemma~\ref{rgcoprime} or Lemma~\ref{rlcoprime} (depending on the value of $r$) unless
\eqn{ \label{larger} \exp(\mathcal{C}\cdot(\log\log n)^2) \le m < 5\log(n)^{\log_2(2r)} .}
(Note that we clearly have $ \exp(\mathcal{C}\cdot(\log\log n)^2) > \log(n)^3$ for $n$ sufficiently large.) It remains to prove Lemma~\ref{interlem} when $r$ and $m$ satisfy (\ref{larger}).
To this end, we first observe that if we replace coprime with 2-coprime, then we can extend Fact~\ref{coprimeconsec} to APs with common difference 2.
\begin{cla*}
For all $n\in \mathbb{Z}^+$, any integer AP with common difference 1 or 2 and at least $c_1\cdot \omega(n)^{c_2}$ terms contains at least one term that is 2-coprime to $n$. (Using the same constants as in Fact~\ref{coprimeconsec}.)
\end{cla*}
\begin{quote}
\begin{proof}[Proof of Claim]
Assume without loss of generality that $n$ is odd.
Say the AP is $a_1,\dots,a_\ell$. If the AP contains only even numbers, consider $a_1/2,a_2/2,\dots,a_\ell/2$; if only odd numbers, consider $(a_1+n)/2,(a_2+n)/2,\dots,(a_\ell+n)/2$. In any case, we have a sequence of $\ell$ consecutive integers, where the $i$-th number is coprime to $n$ if and only if $a_i$ is coprime to $n$. Fact~\ref{coprimeconsec} implies that the new sequence has a number coprime to $n$.
\end{proof}
\end{quote}
Now consider $s\in S$. By the Claim, among any $c_1\cdot \omega(s)^{c_2}$ consecutive terms of $J$, at least one is 2-coprime to $s$. Thus, $J$ has at least $\lfloor m/(c_1\omega(s)^{c_2})\rfloor$ terms that are 2-coprime to $s$. If any of these terms are in $T$ then we have the desired 2-coprime pair, so we may assume for the sake of contradiction that they
are all in $J\setminus T$.
Note that $\omega(s)\le \log_2(s)\le \log_2(n)$, and, for sufficiently large $n$, we have \eq{\frac{m}{c_1\omega(s)^{c_2}}\ge\frac{\exp(\mathcal{C}\cdot(\log\log n)^2)}{c_1\log_2(n)^{c_2}}.}
As this quantity is arbitrarily large for large $n$, the floor function has negligible effect, and it follows that we have
\eq{ r = \frac{m}{|S|} = \frac{m}{|J \setminus T|} \le \frac{m}{\left\lfloor\frac{m}{c_1\omega(s)^{c_2}}\right\rfloor} \le (1+o(1)) c_1\log_2(n)^{c_2} .}
Then, again appealing to (\ref{larger}), we have \eq{ m < 5\log(n)^{\log_2(2r)} \le 5\log(n)^{(c_2+o(1))\log_2\log_2n}<\exp(\mathcal{C}\cdot(\log\log n)^2)\le m.}
This is a contradiction.
\end{proof}
We now prove Lemmas \ref{rgcoprime} and \ref{rlcoprime}. The key idea is to count the non-coprime pairs in $S\times T$ by summing up $|S\cap p\mathbb{Z}||T\cap p\mathbb{Z}|$ over primes $p$ greater than 2. Note that if $S$ and $T$ are random subsets of $I$ and $J$, respectively, then we expect to have
\eq{\sum_{p >2} |S\cap p\mathbb{Z}||T\cap p\mathbb{Z}| \approx \sum_{p>2} \frac{|S|}{p} \frac{|T|}{p} = |S \times T| \sum_{p >2} \frac{1}{p^2} \approx 0.2 |S \times T|,}
and there are many 2-coprime pairs.
Of course, we have to complete the proof for all sets $S$ and $T$ (rather than just random ones) and we do this by `zooming in' on primes $p$ for which $ |S \cap p\mathbb{Z}|$ is large. We `zoom in' by looking at $S\cap p\mathbb{Z}$ and $T\setminus p\mathbb{Z}$ instead of $S$ and $T$, and we iterate this process if necessary.
Before the proof, we state a simple approximation that we use throughout this Section.
\begin{lem}
\label{lem:simpleapprox}
Let $A$ be an integer AP with common difference $d$, and suppose $P\in\mathbb{Z}^+$ is coprime with $d$. Then for all $R\subseteq [P]$, \eqn{\label{absrem}\frac{|A\cap (R+P\mathbb{Z})|}{|R|}-\frac{|A|}{P}\in(-1,1).}
As a result, if $|A|/P\ge \delta^{-1}$ for some $\delta>0$, then \eqn{\label{relrem}\frac{|A\cap (R+P\mathbb{Z})|}{|A||R|/P}\in(1-\delta,1+\delta).}
\end{lem}
\begin{proof}
Among every $P$ consecutive terms of $A$ (a ``chunk''), exactly $|R|$ of them are in $A\cap (R+P\mathbb{Z})$. $\lfloor |A|/P\rfloor$ disjoint chunks can cover a subset of $A$ and $\lceil |A|/P\rceil$
chunks can cover a superset of $A$, so $|A\cap (R+P\mathbb{Z})|$ is between $\lfloor |A|/P\rfloor |R|$ and $\lceil |A|/P \rceil |R|$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{rgcoprime}]
Let $S_0=S,\ T_0=T,\ I_0=I,\ J_0=J$. Let \eqn{\label{capmdefrg}M={\left(\frac{m}{5}\right)}^{1/\log_2(2r)}.} Note that by assumption, \eq{M>\left(\log(n)^{\log_2(2r)}\right)^{1/\log_2(2r)}=\log(n).}
For each $i\in\mathbb{Z}^+$, if there exists prime $p_i\notin\{2,p_1,\dots,p_{i-1}\}, p_i\le M$ such that \eq{
\frac{\left|S_{i-1}\cap p_i\mathbb{Z}\right|}{\left|I_{i-1}\cap p_i\mathbb{Z}\right|}\ge 2\frac{|S_{i-1}|}{|I_{i-1}|},
} define \eq{S_i=S_{i-1}\cap p_i\mathbb{Z},\ T_i=T_{i-1}\setminus p_i\mathbb{Z},\ I_i=I_{i-1}\cap p_i\mathbb{Z},\ J_i=J_{i-1}\setminus p_i\mathbb{Z}.}
Let's say $k$ is the last index where these are defined. For every $0\le i\le k$ define $P_i=\mathtt{Primes}\setminus\{2,p_1,\dots,p_i\}$ for convenience. Now we have \eqn{
\label{skconc} &\frac{|S_i|}{|I_i|} \ge 2\frac{|S_{i-1}|}{|I_{i-1}|} \hskip1cm & \text{ for } i =1,
\dots k, \text{ and }\\
\label{gkconc} & \frac{\left|S_k\cap p\mathbb{Z}\right|}{\left|I_k\cap p\mathbb{Z}\right|} < 2\frac{|S_k|}{|I_k|} \hskip3mm &
\forall p\in P_k \text{ such that } p\le M.
}
Here, \eqref{skconc} implies that \eq{\frac{|S_k|}{|I_k|}\ge 2^k \frac{|S_0|}{|I_0|}=\frac{2^k}r.}
Since $S_i$ is a subset of $I_i$, we have $ |S_k|/ |I_k| \le 1$, and hence $k\le \log_2(r)$.
Define $\Gamma=\prod_{i=1}^k p_i$. Note that \eqn{\label{mgamma}M\Gamma=M\prod p_i\le M\cdot M^k \le M^{\log_2(2r)}=\frac m{5}.}
We establish some estimates for $ |I_k|, |J_k| $ and $ |I_k \cap p \mathbb{Z}|, |J_k \cap p\mathbb{Z}|$ for $ p \in P_k$ such that $p < M$.
Since $I_k=I\cap \Gamma\mathbb{Z}$ and $|I|/\Gamma\ge 5M$, by \eqref{relrem} we have \eqn{\label{ikest}|I_k|\in ( 1 \pm 1/M)m/\Gamma.}
$I_k$ is once again an AP. For $p\in P_k$ such that $p\le M$, $p$ does not divide the common difference of $ I_k$ and $|I_k|/p\ge |I_k|/M>(1-1/M)m/M\Gamma\ge 5-5/M$. Hence by \eqref{relrem} \eqn{\label{ikpest}|I_k\cap p\mathbb{Z}|\in \left(\left(1-\frac1{5-5/M}\right)\frac{|I_k|}{p},\ \left(1+\frac1{5-5/M}\right)\frac{|I_k|}{p}\right).}
Similarly, $J_k$ and $J_k\cap p\mathbb{Z}$ can be bounded from both sides. Since $J_k=J\setminus p_1\mathbb{Z}\setminus \dots\setminus p_k\mathbb{Z}$, and $|J|/\Gamma\ge 5M$, by \eqref{relrem} we have
\eq{|J_k|\in ( 1 \pm 1/M)m\cdot \prod(p_i-1)/\Gamma.}
Define $\Phi=m\prod(p_i-1)/\Gamma$. Then \eqn{\label{jkest}|J_k|\in( 1\pm 1/M) \Phi.}
For $J_k\cap p\mathbb{Z}$, we apply \eqref{relrem} with $A=J$, $P=\Gamma p$, $R=([\Gamma p]\cap p\mathbb{Z})\setminus p_1\mathbb{Z}\setminus\dots\setminus p_k\mathbb{Z}$. Since $|J|/\Gamma p\ge 5$, we have \eqn{\label{jkpest}\forall p\in P_k,p\le M,\ |J_k\cap p\mathbb{Z}|&\in \left(0.8m|R|/|P|,\ 1.2m|R|/|P|\right)\nonumber\\&=(0.8\Phi/p, 1.2\Phi/p).}
Now consider $\Phi$. Note that
\eq{\Phi =m\cdot \prod_{i=1}^k\left(1-\frac1{p_i}\right)
& \ge m\cdot \prod_{i=1}^{k}\left(1-\frac1{i+2}\right)
=2m/(k+2) \\
& \ge 2m/\log_2(4r)
> \frac{2r |S| }{\log_2(4r)}
> \frac{16}{3}|S| \ \ \ \ \ (\text{as } r\ge 16).}
That is, $|S|<\fracflat{3\Phi}{16}.$
Since $J_k\setminus T_k\subseteq J\setminus T$, whose cardinality is at most $|S|$, by \eqref{jkest}, we have
\eqn{\label{tkjkest}\frac{|T_k|}{|J_k|}=1-\frac{|J_k\setminus T_k|}{|J_k|}&>1-\frac{3\Phi/16}{(1-1/M)\Phi}=\frac{13/16-1/M}{1-1/M}>\frac34;
\\ \label{tkest}|T_k|&> \frac{13/16-1/M}{1-1/M}|J_k|\ge (13/16-1/M)\Phi\ge\frac{3\Phi}4
.}
With these estimates in hand, we consider the quantity \eq{\lambda(p)=\frac{\left|S_k\cap p\mathbb{Z}\right|}{|S_k|}\frac{\left|T_k\cap p\mathbb{Z}\right|}{|T_k|}}
for every odd prime $p$. Note that the set of pairs $(s,t) \in S_k \times T_k$ that are {\bf not} 2-coprime because $p$ divides both $s$ and $t$ is $ (S_k\cap p\mathbb{Z}) \times (T_k\cap p\mathbb{Z}) $. Therefore, it follows from pigeonhole that if $\sum_{p\in P_0}\lambda(p)<1$ then there is a pair $(s,t)$ that is in none of these sets. Such a pair is 2-coprime. Thus, it suffices to show $\sum_{p\in P_0}\lambda(p)<1$. In order to estimate $ \lambda(p)$ we divide into cases:
\begin{itemize}
\item \textbf{When $p\in \{p_1,\dots,p_k\}$:} By definition, $T_k$ has already excluded multiples of such $p$, so $\lambda(p)=0$.
\item \textbf{When $p\in P_k,p\le M$:} By \eqref{gkconc} and \eqref{ikpest}, \eqn{\label{skpskest}\frac{\left|S_k\cap p\mathbb{Z}\right|}{|S_k|}< 2\frac{\left|I_k\cap p\mathbb{Z}\right|}{|I_k|}< \left(1+\frac1{5-5/M}\right)\frac{2}{p}<\frac{5}{2p}.} By \eqref{jkpest} and \eqref{tkest}, \eq{\frac{\left|T_k\cap p\mathbb{Z}\right|}{|T_k|}\le\frac{\left|J_k\cap p\mathbb{Z}\right|}{|T_k|}<\frac{1.2\Phi/p}{3\Phi/4}=\frac{8}{5p}.}
Hence, $\lambda(p)<\frac{4}{p^2}$.
\item \textbf{When $p\in P_k, p>M$:} In this case, we don't have equally good individual bounds. Here we use the following simple observation: \eqn{\label{sskp}\sum_{p\in P_k, p>M} \left|S_k\cap p\mathbb{Z}\right| <\log_M(n) |S_k|.} (This follows from the fact that every number in $S_k$ is at most $n$, and so is divisible by less than $\log_M(n)$ primes greater than $M$.)
Also by \eqref{absrem} and \eqref{tkjkest}, \eq{\frac{|T_k\cap p\mathbb{Z}|}{|T_k|}\le\frac{|J_k\cap p\mathbb{Z}|}{|T_k|}<\frac{|J_k|/p+1}{|T_k|}<\frac{4}{3p}+\frac1{|T_k|}.} By \eqref{tkest} and \eqref{mgamma}, \eq{|T_k|>\frac{3\Phi}{4}=\frac{3m\prod(p_i-1)}{4\Gamma}\ge \frac{3m}{4\Gamma} > \frac{15M}4.}
Since $p>M$, \eq{\frac{|T_k\cap p\mathbb{Z}|}{|T_k|}<\frac{4}{3M}+\frac{4}{15M}=\frac{8}{5M}.} In view of \eqref{sskp},
\eq{\sum_{p\in P_k, p>M}\lambda(p)<\log_M(n) \cdot \frac{8}{5M} = \frac{ 8 \log(n)}{ 5 M \log M} .}
\end{itemize}
Summing up all cases, \eq{\sum_{p\in P_0}\lambda(p)&<\sum_{p\in P_k,p<M}\frac{4}{p^2}+\sum_{p\in P_k,p>M}\lambda(p)\nonumber\\ &< 4\left(P(2)-\frac14\right)+\frac{ 8 \log(n)}{ 5 M \log M} \nonumber\\ &< 4\left(P(2)-\frac14\right)+\frac{ 8 }{5 \log\log(n)}\tag{as $M>\log(n)$} \nonumber\\ & < 1,}
for $n$ sufficiently large,
where $P(2)=\sum_{p\text{ prime}}p^{-2}$ is the prime zeta function. (Note that $ \log \log (n) \ge 9 $ suffices.)
As this sum is less than 1, we conclude that some $s\in S_k\subseteq S$ and $t\in T_k\subseteq T$ are 2-coprime.\end{proof}
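The last step relies on the numerical value of the prime zeta function at $2$; a direct check of the final bound at the threshold $\log\log(n)=9$ can be done as follows (the truncation point is an arbitrary choice, with a tail smaller than $10^{-6}$).
\begin{verbatim}
from sympy import primerange

P2 = sum(1 / p**2 for p in primerange(2, 10**6))  # prime zeta P(2), truncated
bound = 4 * (P2 - 0.25) + 8 / (5 * 9)             # value at log log n = 9
print(round(P2, 4), round(bound, 4), bound < 1)   # 0.4522  0.9868  True
\end{verbatim}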
\vskip5mm
\begin{proof}[Proof of Lemma \ref{rlcoprime}]
Let $\alpha := 1/r = |S|/m\in [1/16, 1/2]$, $P_0=\mathtt{Primes}\setminus\{2\}$ and
\eqn{\label{capmdefrl}M = m^{1/3} >\log(n).}
Let $P_M=P_0\cap [M]$. For each $p\in P_0$, define $\alpha_p=|S\cap p\mathbb{Z}|/m\le \alpha$.
We begin with some estimates.
Note that we have \eqn{\label{test}|T| \ge m-|S|=(1-\alpha)m.} For every $ p,q\in P_M$ $(p\neq q)$, $m/pq\ge m/M^2 = M$. By \eqref{relrem} we have,
\eqn{\label{ipqest} |I\cap pq\mathbb{Z}| = ( 1 \pm 1/M) \frac{m}{pq} \hskip1cm & \text {for all } p\neq q\in P_M ,\\
|(J\setminus p\mathbb{Z})\cap q\mathbb{Z}| = (1 \pm 1/M) \frac{(p-1)m}{pq} \hskip1cm & \text{for all } p\neq q\in P_M, \text{ and } \\
\label{ijpest}{|I\cap p\mathbb{Z}|}, {|J\cap p\mathbb{Z}|} \in (1 \pm 1/M) \frac{m}{p} \hskip1cm & \text{ for all } p\in P_M.
}
A consequence of \eqref{ijpest} and \eqref{test} is \eqn{\label{tpest}\forall p\in P_M,\ |T\setminus p\mathbb{Z}|\ge |T|-|J\cap p\mathbb{Z}|\ge (1-\alpha- ( 1 + 1/M)/p)m.}
As in Lemma \ref{rgcoprime} we have
\eqn{\label{ssps} \sum_{p\in P_0\setminus P_M} |S\cap p\mathbb{Z}| < |S| \log_M(n) &
;\\\label{sspqsp} \sum_{q\in P_0\setminus P_M} |(S\cap p\mathbb{Z})\cap q\mathbb{Z}| < | S \cap p\mathbb{Z}| \log_M(n) & \text{ for all } p\in P_0 \text{ such that } \alpha_p > 0.}
For $p\in P_M$ such that $\alpha_p>0$, by \eqref{absrem} and \eqref{tpest}, we have \eqn{\label{tpqtp}\forall q\in P_0\setminus P_M,\ \frac{|(T\setminus p\mathbb{Z})\cap q\mathbb{Z}|}{|T\setminus p\mathbb{Z}|} &\le \frac{|J\cap q\mathbb{Z}|}{(1-\alpha- (1 + 1/M)/p)m}\nonumber\\&\le\frac{m/q+1}{(1/6- 1 /(Mp))m}\nonumber\\&=\frac6M\frac{M/q+1/M^2}{1-6/Mp}\nonumber\\(\text{assuming }M\ge15,)\ \ &< \frac{7}{M}.}
(Remark: Note that it is crucial that we excluded 2 from the primes.) With these estimates in hand, we consider two cases: \\[0mm]
\noindent
{\bf Case (a)}:
There exists some $p\in P_M$ such that \eqn{\label{alphapboundn}\alpha_p> \frac{0.24}{p(1-3\alpha/2)} .}
\noindent
Fix such $p\in P_M$, and for $q\in P_0$ define \eq{\lambda_1(q)=\frac{|(S\cap p\mathbb{Z})\cap q\mathbb{Z}|}{|S\cap p\mathbb{Z}|}\frac{|(T\setminus p\mathbb{Z})\cap q\mathbb{Z}|}{|T\setminus p\mathbb{Z}|}.} In view of \eqref{tpest}, $T\setminus p\mathbb{Z}$ is nonempty, and so we will have the desired 2-coprime pair if $\sum_{q\in P_0}\lambda_1(q)<1$. Note that when $q=p$, $\lambda_1(q)=0$. Thus, by \eqref{tpqtp},
\eqref{sspqsp},
\eqref{ijpest},
and \eqref{tpest}, we have \eq{\sum_{q\in P_0}\lambda_1(q) &= \sum_{q\in P_M\setminus\{p\}}\lambda_1(q)+\sum_{q\in P_0\setminus P_M}\lambda_1(q)\\
&\le \sum_{q\in P_M\setminus\{p\}}\frac{|I\cap pq\mathbb{Z}|}{|S\cap p\mathbb{Z}|}\frac{|(J\setminus p\mathbb{Z})\cap q\mathbb{Z}|}{|T\setminus p\mathbb{Z}|}+\left(\sum_{q\in P_0\setminus P_M}\frac{|(S\cap p\mathbb{Z})\cap q\mathbb{Z}|}{|S\cap p\mathbb{Z}|}\right)\frac{7}{M}\\
&\le \sum_{q\in P_M\setminus\{p\}} \frac{ (1+1/M)m/pq}{\alpha_pm}\frac{(1+1/M)(p-1)m/pq}{(1 -1/M)(1-\alpha-1/p)m}\ +\ \log_M(n) \cdot \frac{7}{M}\\
&=\frac{ (1 + 1/M)^2}{ 1 - 1/M}\frac{(p-1)}{\alpha_pp(p-\alpha p-1)}\left(\sum_{q\in P_M\setminus\{p\}}\frac1{q^2}\right)\ +\ \frac{7 \log_M(n) }{M}.}
By \eqref{alphapboundn}, \eqn{\label{quantityrl1}\sum_{q\in P_0}\lambda_1(q)
&< \frac{ (1 + 1/M)^2}{ 1 - 1/M} \frac{(P(2)-1/4)}{ 0.24} \frac{ (p-1)( 1 -3\alpha/2)}{( p - \alpha p -1 )}
+ \frac{7 \log_M(n) }{M} \\
&\le \frac{ (1 + 1/M)^2}{ 1 - 1/M} \frac{(P(2)-1/4)}{ 0.24}
+ \frac{7 }{ \log\log(n) } \\ & < 1}
for $n$ sufficiently large ($ \log\log(n) \ge 45$ suffices), and we have the desired 2-coprime pair.\\[0mm]
\noindent
{\bf Case (b)}: For all $p\in P_M$, \eqn{\label{alphapbound}\alpha_p \le \frac{0.24}{p(1-3\alpha/2)}.}
\noindent
For every $p\in P_0$, consider the quantity \eq{\lambda_0(p)=\frac{|S\cap p\mathbb{Z}|}{|S|}\frac{|T\cap p\mathbb{Z}|}{|T|}.} We show that the sum of these terms is less than 1. We bound this term in two cases:
\begin{itemize}
\item \textbf{$p\in P_M$.} In this case, by \eqref{test} and \eqref{ijpest}, \eqn{\label{l0plm}\lambda_0(p)\le\frac{\alpha_p}\alpha \frac{|J\cap p\mathbb{Z}|}{(1-\alpha)m}<\frac{\alpha_p}\alpha \frac{(1 + 1/M) m/p}{(1-\alpha)m}=\frac{( 1 + 1/M)\alpha_p} {\alpha(1-\alpha)p}.}
\item \textbf{$p\in P_0\setminus P_M$.} By \eqref{absrem}, (assuming $M\ge15$,) \eqn{\label{tppgm}\frac{|T\cap p\mathbb{Z}|}{|T|}< \frac{m/p+1}{(1-\alpha)m}\le\frac{2}{M}\left(\frac Mp+\frac1{M^2}\right)<\frac{2.01}{M}.}
\end{itemize}
Therefore, by \eqref{l0plm}, \eqref{tppgm}, \eqref{ssps} and \eqref{alphapbound} (noting that $\alpha_p\le \alpha$ and assuming $ M \ge 16$), we have \eqn{\sum_{p\in P_0}\lambda_0(p)&<\sum_{p\in P_M}\frac{17\alpha_p} {16\alpha(1-\alpha)p}\ +\left(\sum_{p\in P_0\setminus P_M}\frac{|S\cap p\mathbb{Z}|}{|S|}\right)\frac{2.01}{M}\nonumber\\
&< \sum_{p\in P_M}\frac{ (1 + 1/M)\alpha_p} {\alpha(1-\alpha)p}\ +\ \log_M(n) \cdot \frac{2.01}{M}\nonumber\\
&\le \label{quantityrl2} (1 + 1/M) \frac{ 0.24}{\alpha( 1 -\alpha)( 1 -3\alpha/2) } \left(\sum_{p\in P_M} \frac{1}{p^2}\right) + \frac{2.01}{\log\log(n)}.}
Define function $f: \alpha \mapsto \alpha( 1 -\alpha)( 1 -3\alpha/2)$,
and observe that $f$ is concave on the interval $ (0,5/9)$. It follows that $f$ takes its minimum value in the interval $ [1/16, 1/2]$ at one of the endpoints. As
$f(1/2) = 1/16$ and $f(1/16) = 435/2^{13} > 1/19$,
we have
\eqn{\sum_{p\in P_0}\lambda_0(p) < (1 + 1/M) 0.24 \cdot 19 (P(2) - 0.25) + \frac{2.01}{\log\log(n)} < 1,}
for $n$ sufficiently large. (Here $ \log\log(n) \ge 28$ suffices.)
\end{proof}
\vskip5mm
\begin{rem} \label{largen} The explicit conditions on $n$ that are sufficient for the proofs of Lemmas~\ref{rgcoprime}~and~\ref{rlcoprime} play a role when we apply these Lemmas in Section~4. Note that we can establish better bounds by writing some of the conditions in terms of both $n$ and the parameter $M$ (instead of simply applying the bound $ M > \log(n)$). Indeed, the conditions
\[ M = \left( \frac{m}{5} \right)^{ 1/ \log_2(2r)} > \log n \ \ \ \text{ and } \ \ \ \frac{ \log(n)}{M \log(M)} < \frac{1}{9} \]
are sufficient for Lemma~\ref{rgcoprime}, and the conditions
\[ M = m^{1/3} \ge 16 \ \ \ \text{ and } \ \ \
\frac{ \log(n)}{ M \log(M)} < \frac{1}{45} \]
are sufficient for Lemma~\ref{rlcoprime}.
\end{rem}
\vskip5mm
\begin{proof}[Proof of Theorem \ref{evencase}]
The four outcomes are clearly pairwise disjoint. We will show that at least one of them happens.
Let $I_1=I\setminus 2\mathbb{Z}$, $I_2=I\cap 2\mathbb{Z}$, $J_1=J\setminus 2\mathbb{Z}$, $J_2=J\cap 2\mathbb{Z}$. They are all integer APs with cardinality $m$ and common difference 2. For $i=1,2$, define $S_i=S\cap I_i$, $T_i=T\cap J_i$.
Since $|S_1|+|S_2|+|T_1|+|T_2|\ge 2m$, at least one of the following happens:
\begin{itemize}
\item Case I: $|S_1|+|T_2|>m$ (which implies $S_1,T_2\neq\emptyset$);
\item Case II: $|S_2|+|T_1|>m$ (which implies $S_2,T_1\neq\emptyset$);
\item Case III: $|S_1|+|T_2|=|S_2|+|T_1|=m$.
\end{itemize}
If $|S_1|+|T_2|\ge m$ and $S_1,T_2\neq \emptyset$, then by Lemma \ref{interlem}, there are $s\in S_1$, $t\in T_2$ that are 2-coprime. Because $s$ is odd, that means they are coprime, so the last outcome applies. Analogously, a coprime pair exists if $|S_2|+|T_1|\ge m$ and $S_2,T_1\neq\emptyset$. To \textit{not} fall into the last outcome, we must be in Case III, with at least one of $S_1,T_2$ empty and at least one of $S_2,T_1$ empty.
\begin{itemize}
\item If $S_1,S_2=\emptyset$, we are in Outcome 1.
\item If $T_2,T_1=\emptyset$, we are in Outcome 2.
\item If $S_1,T_1=\emptyset$, we are in Outcome 3.
\item If $T_2,S_2=\emptyset$, then $|S_1|=|T_1|=m$. By Lemma \ref{interlem} there are $s\in S_1$, $t\in T_1$ that are 2-coprime. Because both $s$ and $t$ are odd, they are coprime, so we are in Outcome 4.
\end{itemize}
In conclusion, one of the four outcomes must happen, so we are done.
\end{proof}
\section{The lonely runner with slow runners}
In this Section
we prove Theorem \ref{thm:run}.
Let $\mathcal{C}_0$ be the constant in Theorem \ref{evencase}. Set $\mathcal{C}$ as a constant for which the following holds for sufficiently large $n$: \eq{\exp(\mathcal{C}\cdot(\log\log n)^2)\ge 8\ceil{\exp(\mathcal{C}_0\cdot(\log\log(2n))^2)}+6.} For convenience, let \eq{k=k(n)=\frac{\exp(\mathcal{C}\cdot(\log\log n)^2)}2,\ \mathcal{M}=\mathcal{M}(n)=\ceil{\exp(\mathcal{C}_0\cdot(\log\log(2n))^2)}.}
The assumption then becomes $k\ge 4\mathcal{M}+3$. We will show that for sufficiently large $n$, if positive integers $v_1<v_2<\dots<v_n$ satisfy that $n<v_n\le 2n-2k$, then Condition \eqref{loose} holds.
Consider such $v_1,\dots,v_n$. Denote $V=\{v_1,\dots,v_n\}$. Since $v_n>n$, in particular $V\neq[n]$, so there exists a ``largest missing number in $[n]$'', \eq{x=\max\left([n]\setminus V\right).}
We first claim that if $x>n-k$ then Condition \eqref{loose} holds. Note that $x\neq v_i$ for all $i$, and larger multiples of $x$ are $>2n-2k$, which is too large to be one of $v_1,\dots,v_n$. Letting $t=1/x$, the quantity $\{v_it\}=\{v_i/x\}$ cannot be zero; moreover, its denominator is at most $x\le n$. Thus, $1/(n+1)<\{v_it\}<n/(n+1)$ for all~$i$.
Hence, we may assume $x\le n-k$. Next, we claim that it is possible to cut $[n]$ into some groups of consecutive integers in ascending order, such that all but the last group have cardinality either $2\mathcal{M}$ or $2\mathcal{M}+2$, the last group's cardinality is between $2\mathcal{M}+2$ and $4\mathcal{M}+3$, and $x$, $x+1$ and $x+2$ belong to the same group. One can start with $\{1,\dots,2\mathcal{M}+2\},\{2\mathcal{M}+3,\dots,4\mathcal{M}+4\}$, and continue to append groups of size $2\mathcal{M}+2$ by default; but if a new group would cut $x$, $x+1$ and $x+2$ apart, shrink its size by 2 to avoid the separation. There will likely be a residue of size $<2\mathcal{M}+2$ at the end of $[n]$, in which case we merge it with the previous group. Given that the distance between $x$ and $n$ is at least $k\ge 4\mathcal{M}+3$, the resulting last group does not contain $x$, and hence it is unaffected by the potential shrinking.
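One way to realize this grouping is sketched below (our construction, following the description above; the small values in the usage line are only for illustration).
\begin{verbatim}
def make_groups(n, M, x):
    """Cut [1, n] into consecutive groups of size 2M or 2M+2, the last one of
    size between 2M+2 and 4M+3, keeping x, x+1, x+2 in the same group."""
    groups, start = [], 1
    while start <= n:
        size = 2 * M + 2
        # a cut right after x or x+1 would separate x, x+1, x+2: shrink by 2
        if start + size - 1 in (x, x + 1):
            size = 2 * M
        # absorb a too-small residue into the current (then last) group;
        # since x is at least k >= 4M+3 away from n, this never coincides
        # with the shrinking step above
        if n - (start + size) + 1 < 2 * M + 2:
            size = n - start + 1
        groups.append(list(range(start, start + size)))
        start += size
    return groups

groups = make_groups(100, 3, x=42)
\end{verbatim}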
Denote the groups $I_1,I_2,\dots,I_\ell,I_{\ell+1}$ in ascending order, and say $x,x+1,x+2\in I_r$ for some $r\in[\ell]$. Now we will also partition $\{n+1,\dots,2n\}$ into groups. Let $J_\ell$ be the interval starting at $n+1$ with the same cardinality as $I_\ell$. Let $J_{\ell-1}$ be the next contiguous interval, with the same cardinality as $I_{\ell-1}$. (That is, $J_{\ell-1}=\{n+|I_\ell|+1,n+|I_\ell|+2,\dots,n+|I_\ell|+|I_{\ell-1}|\}$.) Define $J_{\ell-2},\dots,J_1$ analogously. The remaining numbers would form the interval $J_{\ell+1}$, whose cardinality is equal to that of $I_{\ell+1}$.
\begin{center}
\begin{tikzpicture}[scale=0.17]
\fill [lightgray] (0,0) -- (9,0) -- (9,-2) -- (0,-2);
\draw (4.5,0) [above] node {$I_1$};
\fill [lightgray] (10,0) -- (19,0) -- (19,-2) -- (10,-2);
\draw (14.5,0) [above] node {$I_2$};
\fill [lightgray] (36,0) -- (44,0) -- (44,-2) -- (36,-2);
\draw (40,0) [above] node {$I_r$};
\fill [lightgray] (50,0) -- (59,0) -- (59,-2) -- (50,-2);
\draw (54.5,0) [above] node {$I_\ell$};
\fill [lightgray] (60,0) -- (74,0) -- (74,-2) -- (60,-2);
\draw (67,0) [above] node {$I_{\ell+1}$};
\draw (-0.5,-1) [right] node {$1\ 2\ 3$};
\draw (27.5,-0.2) [below] node {$\dots$};
\draw (44,-1) [left] node {$x$};
\draw (47,-0.2) [below] node {$\dots$};
\draw (74.6,-1) [left] node {$n$};
\fill [lightgray] (0,-5) -- (9,-5) -- (9,-7) -- (0,-7);
\draw (4.5,-7) [below] node {$J_\ell$};
\fill [lightgray] (10,-5) -- (19,-5) -- (19,-7) -- (10,-7);
\draw (14.5,-7) [below] node {$J_{\ell-1}$};
\fill [lightgray] (25,-5) -- (33,-5) -- (33,-7) -- (25,-7);
\draw (29,-7) [below] node {$J_r$};
\fill [lightgray] (50,-5) -- (59,-5) -- (59,-7) -- (50,-7);
\draw (54.5,-7) [below] node {$J_1$};
\fill [lightgray] (60,-5) -- (74,-5) -- (74,-7) -- (60,-7);
\draw (67,-7) [below] node {$J_{\ell+1}$};
\draw (-0.5,-6) [right] node {$n\hspace{-0.04in}+\hspace{-0.04in}1$};
\draw (22,-5.2) [below] node {$\dots$};
\draw (41.5,-5.2) [below] node {$\dots$};
\draw (74.6,-6) [left] node {$2n$};
\end{tikzpicture}
\end{center}
Note inductively that for all $1\le j\le \ell$, $\min (I_j)+\max(J_j)=2n+1-|I_{\ell+1}|$. Thus, \eq{\min(I_j)+\min(J_j)&=2n+1-|I_{\ell+1}|-(|J_j|-1)\\
&\ge 2n+1-(4\mathcal{M}+3)-(2\mathcal{M}+1)\\
&=2n-6\mathcal{M}-3\\
&>2n-2k.\\
\max(I_j)+\max(J_j)&=2n+1-|I_{\ell+1}|+(|J_j|-1)\\
&\le 2n+1-(2\mathcal{M}+2)+(2\mathcal{M}+1)\\
&=2n.}
For all $1\le j\le \ell+1$, let $S_j=I_j\setminus V$ and $T_j=J_j\setminus V$. We claim that if for some $j\in[\ell]$ there exist coprime $s\in S_j$ and $t\in T_j$ then Condition \eqref{loose} holds. By the above calculation, $2n-2k< s+t\le 2n$. Consider the time $t_0=q/(s+t)$, where $q$ is the inverse of $s$ in $(\mathbb{Z}/(s+t)\mathbb{Z})^\times$. Observe that no $v_i$ can be $0$, $s$ or $t$ modulo $s+t$, because $s,t$ are non-speeds by definition and $s+t>2n-2k$ is too large. But these are the only ways to have $v_iq$ be $0,1$ or $-1$ modulo $s+t$. Thus, for all $i$, $v_iq/(s+t)$ is at least $2/(s+t)\ge 2/(2n)$ away from the nearest integer. Hence $1/(n+1)<\{v_it_0\}<n/(n+1)$ for all $i$.
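This modular-inverse construction is easy to verify numerically; the toy example below uses $s=5$, $t=7$ (so $s+t=12$) and a speed set with no element congruent to $0$, $s$ or $t$ modulo $12$. The numbers are not meant to satisfy the cardinality constraints of the surrounding argument, only to illustrate the check.
\begin{verbatim}
import math
from fractions import Fraction

def check_pair(speeds, s, t):
    """Time t0 = q/(s+t) as above; check 1/(n+1) < {v*t0} < n/(n+1) for all v."""
    n, m = len(speeds), s + t
    q = pow(s, -1, m)                  # inverse of s modulo s + t (Python >= 3.8)
    t0 = Fraction(q, m)
    lo, hi = Fraction(1, n + 1), Fraction(n, n + 1)
    fracs = [v * t0 - math.floor(v * t0) for v in speeds]
    return t0, all(lo < f < hi for f in fracs)

print(check_pair([2, 3, 4, 6, 8, 9], 5, 7))   # (Fraction(5, 12), True)
\end{verbatim}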
Define $\alpha_j=|S_j|+|T_j|$ and $m_j=|I_j|/2=|J_j|/2$ for all $1\le j\le \ell+1$. By the grouping conditions, for all $j\le \ell$ we have $m_j\in \mathbb{Z}$ and $m_j\ge \mathcal{M}$. Note that $\sum_{j=1}^{\ell+1}\alpha_j=n=\sum_{j=1}^{\ell+1}2m_j$. Recall that $I_{\ell+1}\subseteq V$ because $x\le n-k\le n-|I_{\ell+1}|$, and $J_{\ell+1}\cap V=\emptyset$ because $v_n\le 2n-2k\le2n-|J_{\ell+1}|$. Hence, $\alpha_{\ell+1}=0+|J_{\ell+1}|=2m_{\ell+1}$. By pigeonhole, either $\alpha_j>2m_j$ for some $j\in[\ell]$, or $\alpha_j=2m_j$ for all $j\in[\ell]$. In the former case, according to Theorem~\ref{evencase} applied on $(2n,m_j,I_j,J_j,S_j,T_j)$, there must be a coprime pair between $S_j$ and $T_j$, witnessing Condition \eqref{loose}. In the second case, consider $S_r$ and $T_r$. We have $|S_r|+|T_r|=\alpha_r=2m_r$, so by Theorem \ref{evencase}, one of the four outcomes happens: $S_r=\emptyset$, $T_r=\emptyset$, ($S_r=I_r\cap 2\mathbb{Z}$ and $T_r=J_r\cap 2\mathbb{Z}$), or there is a coprime pair between $S_r$ and $T_r$. But as $x\notin V$, $S_r\neq\emptyset$; as $x+1\in V$, $T_r\neq\emptyset$; as $x+1$ and $x+2$ have different parities and are both in $V$, $S_r$ misses a number of each parity. Thus, only the last outcome is possible, which also witnesses Condition \eqref{loose}.
\section{Coprime mappings for adjacent intervals}
In this Section we prove Theorem~\ref{tnconj}; that is, we show that if $ 0 \le \ell < k$ and $ k \ge 4 $ then there is a coprime mapping from $A=\{\ell+1,\dots,\ell+k\}$ to $\ B=\{\ell+k+1,\dots,\ell+2k\}$.
\noindent
Theorem~\ref{tnconj} follows from Lemmas~\ref{rgcoprime}~and~\ref{rlcoprime} when $n$ is sufficiently large.
We prove Theorem~\ref{tnconj} for smaller values of $n$ using a separate argument. We note in passing that
there are parallels between this argument and the earlier
work of Pomerance and Selfridge \cite{pomerance1980proof}. Both arguments make use of a statistical study of the parameter $ \phi(x)/x$, where $\phi$ is the Euler totient function, and both proofs make use of estimates on the prime counting functions given by Rosser and Schoenfeld \cite{older} (but our reliance on these estimates is significantly less extensive).
To handle smaller $n$ we need some additional definitions and elementary
observations. Let $ p_i$ be the $i^{\rm th}$ odd prime, and set $q_i = \prod_{j=1}^i p_j$. For each integer $x$ we let $ P(x) $ be the set of odd prime factors of $x$ and define
\eq{ \gamma(x) = \prod_{p \in P(x)} \frac{ p-1}{p}.}
Note that $ \gamma(x)$ is the proportion of numbers that are 2-coprime with $x$. (Note further that $ \gamma(x) = \phi(x)/x$ when $x$ is odd.)
\begin{cla}\label{prodarg}
Let $J$ be an integer AP with cardinality~$m$ and common difference~$d$. Let $x$ be an integer such that $d$ is coprime with all elements of $P(x)$. Then more than $\gamma(x) m - 2^{|P(x)|}+1$ numbers in $J$ are 2-coprime with $x$.
\end{cla}
\begin{proof}
Write $P=P(x)$. For $p\in P$, let $J_p=J\cap p\mathbb{Z}$. By inclusion-exclusion, \eq{\left|\bigcup_{p\in P} J_p\right|=\sum_{\emptyset\neq Q\subseteq P} (-1)^{|Q|+1}\left|\bigcap_{q\in Q}J_q\right|.}
Because the elements of $P\cup\{d\}$ are pairwise coprime, the cardinality of $\bigcap_{q\in Q}J_q$ can be approximated: \eq{\left|\bigcap_{q\in Q}J_q\right|-\frac{m}{\prod_{q \in Q}q}\in (-1,1).}
Note that $m/\prod_{q \in Q}q$ is the ``heuristic'' cardinality as the size of $J$ gets large.
By summing these terms for all nonempty subsets of $P$, the cardinality of $\bigcup_{p\in P} J_p$ can be approximated, \eq{\left(1-\prod_{p\in P} \frac{p-1}p\right)m=(1-\gamma) m,} with an error less than $2^{|P|}-1$. The claim follows by taking the complement.
\end{proof}
\begin{cla}
\label{single}
Let $J \subset [n]$ be an AP with common difference 1 or 2 such that $ |J| =m \ge 2$. Let $ T \subseteq J$ such that $ |T| \ge (m+1)/2$. If $ x $ is an integer such that $ |P(x)| \le 1$ then there is $ y \in T$ such that $x$ and $y$ are 2-coprime.
\end{cla}
\begin{proof}
Since $ m \ge 2$, the set $T$ contains two consecutive elements or two elements out of 3 consecutive elements of $J$. In either case, at least one of these numbers is not divisible by the odd prime that divides $x$.
\end{proof}
With these preliminary observations in hand we are ready to prove Theorem~\ref{tnconj}.
Set $ A_0 = A \cap 2 \mathbb{Z}$, $A_1 = A \setminus A_0$, $ B_0 = B \cap 2 \mathbb{Z}$, $B_1 = B \setminus B_0$. Note that it suffices to find a 2-coprime mapping from $ A_0$ to $B_1$ and another from $A_1$ to $B_0$. (As the intervals $A$ and $B$ are consecutive we have $|A_0| = |B_1|$ and $|A_1| = |B_0|$.) Hall's Theorem states that there is a 2-coprime mapping between $ A_1$ and $B_0$ if and only if for every pair of sets $ S \subseteq A_1$ and $ T \subseteq B_0 $ such that $ |S| + |T| = |A_1| + 1 = |B_0| + 1$ there exists a 2-coprime pair $x,y$ such that $x\in S$ and $y\in T$. When $k$ is sufficiently large we can apply Lemmas~\ref{rgcoprime}~and~\ref{rlcoprime} -- with $ n = 2k+\ell$ and $m = |A_0| = |B_1|$ or $ m = |A_1|=|B_0|$ -- to conclude that the desired 2-coprime pair exists, and hence the desired 2-coprime mappings exist.
Let $I$ and $J$ be disjoint APs in $[n]$ with common difference 2 such
that $|I| = |J| = m \ge \frac{n-2}{6}$. (We take $ \{ I,J \} = \{ A_0, B_1\} $ or $ \{I,J\} = \{ A_1, B_0\}$. Note that $ \frac{ n-2}{6} \le \lfloor k/2 \rfloor $.)
Let $ S \subseteq I$ and $T \subseteq J$ such that $ |S| + |T| = m+1$ and $ |S| \le |T|$. As above we set $r = m/|S|$. We show that there is a 2-coprime pair $x, y$ such that $ x \in S$ and $y \in T$. We consider two cases depending on the value of $n$. For large $n$ we appeal to Lemmas~\ref{rgcoprime}~and~\ref{rlcoprime}, and for small $n$ we provide a direct argument.\\[-1mm]
\noindent
{\bf Case 1:} $ 2k+ \ell = n > 3 \cdot 10^7$.\\[-1mm]
First consider $ 2 \le r \le 16$. Here we apply Lemma~\ref{rlcoprime}. We clearly
have $ m > \log(n)^3$. As noted in Remark~\ref{largen}, the parameter $n$ is sufficiently large if the following two conditions hold:
\[ m^{1/3} = M \ge 16\ \ \ \text{ and } \ \ \ \frac{ \log(n)}{\frac{1}{3} \log(m) \cdot m^{1/3} } = \frac{ \log_M(n)}{ M} < \frac{1}{45}. \]
Both conditions hold in the range in question,
and Lemma~\ref{rlcoprime} gives the desired 2-coprime pair.
Now consider $ r > 16$. We first consider $ 3 \cdot 10^7 <
n < 10^{50}$.
Assume for the sake of contradiction that we do not have the desired 2-coprime pair. Then it follows from Claim~\ref{prodarg} that for any $ x \in S$
we have
\eq{ 15m/16 < m -|S| < |T| < m - m \gamma(x) + 2^{|P(x)|}.}
Observe that $ |P(x)| \le \log_3(n) $ for all integers $x$. Furthermore, as $ q_{32} > 10^{50}$, the assumed restriction on $n$ implies $ \gamma(x) > \gamma( q_{31} ) > 1/5$.
It follows that we have
\begin{multline*}
1/5 < \gamma(x) < 1 /16 + 2^{ |P(x)|}/m < 1/16 + 2^{ \log_3(n)}/( (n-2)/6) \\
< 1/16 + 6 \cdot n^{ \frac{\log(2)}{ \log(3)} -1} + 12/n < 1/5,
\end{multline*}
which is a contradiction.
It remains to consider $ r > 16$ and $ n > 10^{50}$.
We apply Lemma~\ref{rgcoprime} when $r < \log(n)^2$. In order to handle the
large $r$ case, we first note that if $ r > \log(n)^2$ then $T$ either contains
primes $ p_1, p_2 > k/4$ or contains $ 2 p_1, 2p_2$ where $ p_1, p_2 > k/8$ are primes. To see this, we appeal to bounds on the prime counting function $ \pi(x)$ (see Theorem~1 in \cite{older}). First suppose $J$ consists of odd numbers and $a$ is the largest element of $J$. Then the number of primes in $ J \cap [a-m+1,a]$ is
\begin{equation*}
\pi(a) - \pi( a - m ) \ge \frac{a}{ \log(a)}
- \frac{a - m }{ \log(a - m)} \left(1+\frac2{\log (a - m)}\right)
\end{equation*}
For ease of notation, letting $ a - m = \eta a$, we have
\eq{
\pi(a) - \pi( \eta a ) & \ge \frac{a}{ \log(a)}
- \frac{\eta a }{ \log(\eta a)} \left(1+\frac2{\log (\eta a)}\right) \\
& = \frac{ a - \eta a}{ \log(a)} + \frac{ \eta \log( \eta) \cdot a}{ \log( \eta a) \log (a) }
- \frac{ 2 \eta a}{ \log( \eta a)^2} \\
& \ge \frac{m}{ \log(n)} - \frac{2n}{ \log(n)^2} \\
& \ge \frac{2 m}{ \log(n)^2 } \\
& > |S|.
}
Recalling that $ |S| + |T| \ge m + 1$, the number of elements of $J$ that are NOT elements of $T$ is at most $ |S|-1$. Thus we have the two desired primes in $T$. If $J$ consists of even numbers then we apply the same estimates to $ J/2 = \{ x/2: x \in J \}$ to get the desired elements $ 2p_1, 2p_2$ where $ p_1$ and $p_2$ are prime. In either case,
no element of $S$ is divisible by both $ p_1$ and $ p_2$, and we have the desired 2-coprime pair.
When $16 < r < \log(n)^2 $ and $ n > 10^{50}$ we apply Lemma~\ref{rgcoprime}. Recall that, as noted in Remark~\ref{largen}, $n$ is sufficiently large if the following conditions hold:
\[ M = \left( \frac{m}{5} \right)^{ 1/ \log_2(2r)} > \log n \ \ \ \text{ and } \ \ \ \frac{ \log(n)}{M \log(M)} < \frac{1}{9}. \]
These conditions hold here (indeed, we have $ M > 3 \log(n)/2$), and Lemma~\ref{rgcoprime} applies.\\[-1mm]
\noindent{\bf Case 2:} $ 83 \le 2k + \ell = n \le 3 \cdot 10^7$ \\[-1mm]
\noindent
Here we apply Claim~\ref{prodarg}. Note if
$ x \in S$ and
\eqn{\label{gmajor} |T| \ge ( 1 -\gamma(x)) m + 2 ^{|P(x)|} - 1,}
then Claim~\ref{prodarg} implies that the desired 2-coprime pair exists. As
many elements of $ [ 3 \cdot 10^7]$ have large values of $ \gamma(x) $, this
observation is usually sufficient to complete the proof for $n$ in this
interval. In order to make the proof
precise we consider cases. In some cases we will need to make a more detailed study of the collection of sets $ \{ P(x): x \in S\}$.\\[-1mm]
\noindent
{\bf Case 2a:} There exists $s\in S$ that is not divisible by 3. \\[-1mm]
\noindent
As $ |T| \ge (m+1)/2 $ it suffices to show that there exists $ x \in S$ such that
\eqn{ \label{no3} (\gamma(x) - 1/2) m \ge 2 ^{|P(x)|} - 3/2 }
Set $ w_a = q_{a+1}/3 = \prod_{i=2}^{a+1} p_i$.
Among numbers $x$ such that $ 3 \nmid x$ with a fixed value of $a= |P(x)|$, $x$ is minimized by $w_a$ and the parameter $ \gamma(x)$ is minimized by $ \gamma( w_a )$. Set \eq{\chi_a=\frac{2 ^a - 3/2}{\gamma(w_a) - 1/2}\cdot 6+2} and observe that if for a particular value of $a$ we have $n\ge \chi_a$
and there exists $ s \in S\setminus 3\mathbb{Z}$ such that $ |P(s)| = a $
then\footnote{We implicitly also need $\gamma(w_a)>1/2$, which holds within our range of consideration.} we have (\ref{no3}) and Claim~\ref{prodarg} implies that the desired 2-coprime pair exists.
We refer to the following table for values of $w_a$ and $\chi_a$.
\begin{center}
\begin{tabular}{|r|r|r|r|}
\hline
$a$ & $w_a$ & $\gamma( w_a)$ & $\chi_a$ \\
\hline 1 & 5 & 0.8 & 12 \\
\hline 2 & 35 & 0.6857 & 82.8 \\
\hline 3 & 385 & 0.6234 & 318.1 \\
\hline 4 & 5,005 & 0.5754 & 1155.5 \\
\hline 5 & 85,085 & 0.5416 & 4403.6 \\
\hline 6 & 1,616,615 & 0.5131 & 28689.1 \\
\hline
\end{tabular}
\end{center}
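The entries of this table (and, mutatis mutandis, of the next one) are straightforward to reproduce; a quick check, included only for illustration:
\begin{verbatim}
from math import prod

odd_primes = [3, 5, 7, 11, 13, 17, 19, 23]

def gamma_of(primes):
    return prod((p - 1) / p for p in primes)

for a in range(1, 7):
    ps = odd_primes[1:a + 1]            # prime factors of w_a = q_{a+1}/3
    w = prod(ps)
    chi = (2**a - 1.5) / (gamma_of(ps) - 0.5) * 6 + 2
    print(a, w, round(gamma_of(ps), 4), round(chi, 1))
\end{verbatim}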
Take any $s\in S\setminus 3\mathbb{Z}$ and let $a=|P(s)|$. As $w_7>3\cdot 10^7$, we have $a\le 6$. If $a\le 2$, we have $n\ge 83>\chi_a$ immediately. If $a\ge 3$, then we also have $n\ge s\ge w_a \ge \chi_a$. Thus, we always have the desired 2-coprime pair.\\[-1mm]
\noindent
{\bf Case 2b:} Every number in $S$ is divisible by 3. \\
\noindent
As $|S|\le |I\cap 3\mathbb{Z}|\le(m+2)/3$ and hence $|T|\ge(2m+1)/3$,
it suffices to show that there exists $ x \in S$ such that
\eqn{ \label{with3} (\gamma(x) - 1/3) m \ge 2 ^{|P(x)|} - 4/3. }
Proceeding as in the previous case, if there exists a value of $a$ such that
\eq{n\ge \kappa_a := \frac{2 ^a - 4/3}{\gamma(q_a) - 1/3}\cdot 6+2}
and $ x\in S$ such that $ |P(x)| =a$ then Claim~\ref{prodarg} implies that the desired
2-coprime pair exists. It follows from the following table that we have the desired condition
for $ a=1,2,4,5,6,7$.
\begin{center}
\begin{tabular}{|r|r|r|r|}
\hline
$a$ & $q_a$ & $\gamma( q_a)$ & $\kappa_a$ \\
\hline 1 & 3 & 0.6667 & 14 \\
\hline 2 & 15 & 0.5333 & 82 \\
\hline 3 & 105 & 0.4571 & 325.1 \\
\hline 4 & 1,155 & 0.4156 & 1071.9 \\
\hline 5 & 15,015 & 0.3836 & 3661.3 \\
\hline 6 & 255,255 & 0.3611 & 13567.5 \\
\hline 7 & 4,849,845 & 0.3420 & 87210.9 \\
\hline
\end{tabular}
\end{center}
As $ q_8 > 3 \cdot 10^7$ we do not need to consider larger values of $a$. It remains to consider the case where $ |P(x)| = 3$ for all $ x \in S$. Again, if $n \ge \kappa_3=325.1$ we will have the desired 2-coprime pair, so assume $n\le 325$.
Note that \eq{S\subseteq \{x\in [325]:|P(x)|=3\}=\{105,165,195,210,231,255,273,285,315\}.} For \eqref{gmajor} to not hold, we must have \eq{m + 1 - |S| \le |T| &< (1-\gamma(q_3))m+2^3-1\\\Rightarrow m&<\frac{6+|S|}{\gamma(q_3)}\\\Rightarrow \max(S)\le n&< 6\cdot\frac{6+|S|}{\gamma(q_3)}+2<14|S|+81.} One can verify that this is not possible. Hence, \eqref{gmajor} holds and we have the desired 2-coprime pair.\\[-1mm]
\noindent{\bf Case 3:} $ 8 \le 2k + \ell = n < 83$ \\[-1mm]
\noindent
Here we make a more careful analysis of the collections of sets $ \{ P(x): x \in S \} $. We recall Claim~\ref{single}: If any of these sets has cardinality at most 1 then we have the desired 2-coprime pair. It follows that, as $ n<83< q_3$, we may assume that $ |P(x)|=2$ for all $ x \in S$, and so we can view
$ \{ P(x): x \in S \} $ as a graph $G$ on a vertex set consisting of the odd primes.
Now we consider some further cases.\\
\noindent{\bf Case 3a:} $G$ contains disjoint edges $p_1p_2$ and $ p_3p_4$.
\begin{quote}
Note that in this case we have $n \ge 35$ and $ m \ge 6$. Note that a number $y$ is 2-coprime with neither of the corresponding elements of $S$ if and only if $ P(y) $ contains a set in $ \{ p_1, p_2\} \times \{ p_3, p_4\}$.
As 15 is the smallest such product, at most 4 elements out of any 15 consecutive elements of $J$ are divisible by one of these products.
It follows that we have the desired 2-coprime pair so long as $m \ge 8$. In the case $ 4 \le m \le 7$ we have $ n \le 44$ and there are at most 4 elements $z$ of $ I \cup J$ such that $ |P(z)| \ge 2$. Two of them are already in $S$, and
$T$ has at least 3 elements, so some element $t\in T$ satisfies $|P(t)|\le 1$, giving the desired 2-coprime pair. We finally note that if $ m \le 3$ then $ n \le 20$ and this case is not possible.
\end{quote}
\noindent{\bf Case 3b:} $G$ has a vertex of degree 2.
\begin{quote}
Suppose $G$ contains the edges $ p_1p_2$ and $p_1p_3$. Note if $y$ is an element of $J$ such that $ p_1 \not\in P(y)$ and $ \{p_2,p_3\} \not\subseteq P(y)$ then $y$ is 2-coprime with one of the corresponding elements of $S$.
First consider $ p_1 > 3$. In this case $ n \ge 35$, which implies $ m \ge 6$. Furthermore, if $ p_1> 3$ then at most 3 out of 10 consecutive elements of $J$ are not 2-coprime with the corresponding elements of $S$. It follows from these observations that we have the desired 2-coprime pair.
So, we may assume $ p_1=3$. As $m\le (k+1)/2 \le (n+2)/4 < 22$, $J$ contains at most one element that is a multiple of $ p_2p_3$, and the number of elements in $J$ that are not 2-coprime to any of the corresponding elements of $S$ is at most $ (m+2)/3 + 1$. As $|T| \ge (m+1)/2$, we have the desired 2-coprime pair if $ m \ge 8$.
Finally consider $m \le 7$, which implies $ n \le 44$. Here we claim that $J$ does not contain any element that is a multiple of $ p_2p_3$. Assume for the sake of contradiction that $J$ has such an element. Given the bound on $n$, 35 is the only product of distinct odd primes in $ I \cup J$ that does not include 3. So, we have $ 35 \in J$. Recalling that one of $I$ and $J$ consists of only even numbers, we observe that $ 35 \in J$ implies that $I$ consists of even numbers and $ 30, 42 \in S$. This is a contradiction as $A$ and $B$ are disjoint intervals. It follows that the number of elements of $J$ that are not 2-coprime to the corresponding elements of $S$ is at most $ (m+2)/3$, and we have the desired 2-coprime pair.
\end{quote}
\noindent{\bf Case 3c:} $G$ consists of a single edge $ p_1p_2$
\begin{quote}
Here we observe that $ |S| \le 2$ and that at most three elements out of any 5 consecutive elements of $J$ are not 2-coprime with the corresponding elements of $S$. As $ |T| \ge m +1 -|S| \ge m-1$, we have the desired 2-coprime pair when $ m\ge 5$. But $m \le 4$ implies $ n \le 26 $, which implies $|S| \le 1$ and $ T =J$. As at least 2 out of any 3 consecutive elements of $J$ are 2-coprime with the corresponding elements of $S$, we have the desired 2-coprime pair when $ m = 3,4$. Finally, if $m\le2$ then we have $ n \le 14$ and this case is not possible.
\end{quote}
\bibliographystyle{plain}
\section{Flights versus trains: a comparison of different access modes to Paris}
\label{sec:appli_paris}
Let us consider a traveler leaving from Amsterdam city center to reach the Paris area. We have chosen the city center of Amsterdam as it covers both tourists and business travelers, but the proposed door-to-door travel time model and subsequent analysis can be applied from any zone. Three possible means of transportation are under study for this trip: flying from Amsterdam Airport Schiphol into the Paris area via Paris Charles de Gaulle airport (CDG) or via Paris Orly (ORY), or taking the train from Amsterdam Centraal via Paris Gare du Nord (GDN).
\definecolor{green1}{RGB}{0,147,156}
\definecolor{green2}{RGB}{93,186,191}
\definecolor{green3}{RGB}{186,225,226}
\definecolor{red1}{RGB}{248,192,170}
\definecolor{red2}{RGB}{221,119,85}
\definecolor{red3}{RGB}{194,46,0}
\subsection{Flight and train schedules}
As in Section\,\ref{sec:model_eu_in}, the flight and train schedules used for this study are weekly schedules. The weekly flight schedules between Amsterdam Airport Schiphol (AMS) and CDG or ORY were extracted from the actual flight schedules from December 2019 to January 2020, and it was assumed that the obtained weekly schedules would run from January 1$^\text{st}$\, 2018 to September 30$^\text{th}$\, 2019. These weekly schedules are summarized in Table\,\ref{tab:sched_ams2ory} for flights between AMS and ORY, and in Table\,\ref{tab:sched_ams2cdg} for flights between AMS and CDG.
\begin{table}[h!t]
\parbox[t]{.49\textwidth}{
\centering
\caption{Extracted weekly schedule from Amsterdam to Paris via ORY.}
\label{tab:sched_ams2ory}
\csvautotabular[table head=\hline\bfseries Mo & \bfseries Tu & \bfseries We & \bfseries Th & \bfseries Fr & \bfseries Sa & \bfseries Su & \bfseries Ams. & \bfseries Paris\\\hline]{tables/ams2ory.csv}
}\hfill
\parbox[t]{.49\textwidth}{
\centering
\caption{Extracted weekly schedule from Amsterdam to Paris via CDG.}
\label{tab:sched_ams2cdg}
\csvautotabular[table head=\hline\bfseries Mo & \bfseries Tu & \bfseries We & \bfseries Th & \bfseries Fr & \bfseries Sa & \bfseries Su & \bfseries Ams. & \bfseries Paris\\\hline]{tables/ams2cdg.csv}
}
\end{table}
The weekly train schedule between Amsterdam Centraal station and Paris Gare du Nord (GDN) is similarly extracted from the actual train schedule of the year 2019 and applied to the year 2018. It is summarized in Table\,\ref{tab:sched_ams2gdn}. Night trains are not considered for this study.
\begin{table}[h!t]
\centering
\caption{Extracted weekly schedule from Amsterdam to Paris via GDN.}
\label{tab:sched_ams2gdn}
\csvautotabular[table head=\hline\bfseries Mo & \bfseries Tu & \bfseries We & \bfseries Th & \bfseries Fr & \bfseries Sa & \bfseries Su & \bfseries Ams. & \bfseries Paris\\\hline]{tables/amc2gdn.csv}
\end{table}
These schedules already highlight the major differences between the three considered modes. The flight schedule through ORY contains the fewest possibilities, limited to two flights daily, whereas the other two modes offer hourly scheduled transport. Another notable difference can be seen with respect to the station-to-station travel times: flights between Amsterdam and Paris (both CDG and ORY) take 1h20 ($\pm$ 5 minutes) while train rides between Amsterdam Centraal and Paris GDN take 3h20 ($\pm$ 3 minutes).
\subsection{Average total travel time mode comparison}
\label{sec:appli_bestmode}
The proposed data-driven door-to-door model can be used to evaluate and compare the range of each considered mode, which helps to better understand the urban structure and behavior from a transportation point of view.
For each trip $\tau$ (flight or rail) over the period $\mathcal{D}$ from January 1$^\text{st}$\,2018 to September 30$^\text{th}$\,2019, the associated average full door-to-door travel time $\bar{T}(\tau)$ is computed as
\begin{equation}
\label{eq:avgd2d_eu}
\bar{T}(\tau) = \bar{t}_\text{to}(\tau) + t_\text{dep}(\tau) + t_\text{in}(\tau) + t_\text{arr}(\tau) + \bar{t}_\text{from}(\tau)~,
\end{equation}
where $\bar{t}_\text{to}(\tau)$ is the average ride time between Amsterdam city center and the departure station (AMS or Amsterdam Centraal) for the trip $\tau$, $\bar{t}_\text{from}(\tau)$ is the average ride time from the arriving station (CDG, ORY or GDN) to the final arrival zone for the trip $\tau$.
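As an illustration, equation \eqref{eq:avgd2d_eu} can be evaluated with a few lines of code once the schedule information and the relevant Uber zone-to-zone averages have been extracted. The sketch below is illustrative only: the attribute and key names are assumptions about how the extracted data could be organized, not the actual data model of the released datasets, and all quantities are assumed to be expressed in minutes.
\begin{verbatim}
def average_door_to_door(trip, uber_times):
    # uber_times[(from_zone, to_zone, period)] -> average road time in minutes
    t_to = uber_times[(trip.origin_zone, trip.dep_station_zone, trip.dep_period)]
    t_from = uber_times[(trip.arr_station_zone, trip.dest_zone, trip.arr_period)]
    # t_dep, t_in and t_arr come from the schedules and assumed dwell times
    return t_to + trip.t_dep + trip.t_in + trip.t_arr + t_from
\end{verbatim}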
The same daily periods as those used in the Uber data (see Section\,\ref{sec:model_uber}) are considered here to categorize the trips into five groups depending on the time of arrival at the final destination. For each day $d$ and each period $p$, the mean per arrival zone $z$ of the average door-to-door travel times is calculated for each mode $m$,
\begin{equation}
\label{eq:mavgd2d_eu}
E^{d,p}_z(m) = \frac{1}{|\mathcal{T}^{d,p}_z|}\sum_{\tau \in \mathcal{T}^{d,p}_z} \bar{T}(\tau)~,
\end{equation}
where $\mathcal{T}^{d,p}_z$ is the set of flight and rail trips that end at zone $z$ on day $d$ and period $p$.
For each day $d$, each period $p$ and each arrival zone $z$, the mode with the shortest mean travel time $E^{d,p}_z(m)$ is kept. The number of times $N^p_z(m)$ a mode $m$ has had the shortest mean travel time is counted for each zone $z$ and for each period $p$ over the twenty-one month period $\mathcal{D}$,
\begin{equation}
\label{eq:numbestd2d_eu}
N^p_z(m) = | \{ d\in \mathcal{D} ~|~ m = \text{arg}\min_{m_1} E^{d,p}_z(m_1) \} | ~.
\end{equation}
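A possible implementation of \eqref{eq:mavgd2d_eu} and \eqref{eq:numbestd2d_eu} is sketched below, assuming each trip record carries its day, period, arrival zone, mode and average door-to-door time $\bar{T}(\tau)$; the names are again illustrative.
\begin{verbatim}
from collections import defaultdict

def count_fastest_mode(trips):
    by_key = defaultdict(lambda: defaultdict(list))
    for t in trips:                    # group trips by (day, period, zone, mode)
        by_key[(t.day, t.period, t.zone)][t.mode].append(t.avg_T)
    N = defaultdict(lambda: defaultdict(int))
    for (day, period, zone), per_mode in by_key.items():
        means = {m: sum(v) / len(v) for m, v in per_mode.items()}  # E^{d,p}_z(m)
        N[(period, zone)][min(means, key=means.get)] += 1          # N^p_z(m)
    return N
\end{verbatim}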
This distribution of modes over the different zones can help travelers choose the mode of transport that is best suited depending on the desired arrival zone and on the desired time of arrival. It can also help urban planners to better understand the road network linking the different stations to the city.
Figure\,\ref{fig:paris_best_cmp} shows the fastest mode to reach the different zones in the Paris dataset for the daily periods used by the Uber dataset. For each zone $z$ and each period $p$, the associated fastest mode is the mode $m$ having the highest $N^p_z(m)$, i.e. the highest number of days with the lowest average total travel time over the considered date range. The zones best reached through CDG are indicated in blue, ORY in red and GDN in green.
\begin{figure}[h!t]
\centering
\begin{subfigure}[b]{.49\textwidth}
\includegraphics[width=\textwidth]{paris_best_am.png}
\caption{Morning (AM)}
\end{subfigure}
\begin{subfigure}[b]{.49\textwidth}
\includegraphics[width=\textwidth]{paris_best_midday.png}
\caption{Midday}
\end{subfigure}
\caption{Comparison of the average total travel times to the Paris area between the three considered arrival stations (CDG: blue, ORY: red, GDN: green) for a trip starting from Amsterdam city center for different trip termination periods.}
\label{fig:paris_best_cmp}
\end{figure}
We can draw several conclusions from these maps. The absence of zones best reached by plane via ORY (in red) is particularly noticeable in the morning period: a sizable area of south-west Paris is reached by Uber rides neither from GDN nor from CDG. From a traveler's perspective, these maps would advocate for an increase in the frequency of AMS-ORY flights.
From a structural perspective, the highway linking Paris to CDG is visible on these maps since it enables travelers through GDN to reach zones close to CDG faster than if they flew to CDG directly. The perimeter highway circling Paris is also a major aid to GDN and is visible on the maps where there is strong competition between GDN and CDG. The section of the perimeter highway farthest from GDN (i.e. in the south-west of Paris) is, however, overtaken by either airport depending on the period of the day. The rest of GDN's influence zone is fairly invariant from one period to another.
Using a similar map representation for trips ending in the afternoon, not shown here for space considerations, we observe that ORY's range is limited during the afternoon, with CDG taking over some zones close to ORY. This is essentially due to the limited number of flights landing in the afternoon (one per week, every Friday) compared to the daily arrivals of CDG flights.
\subsection{Average total travel time distribution analysis}
\label{sec:appli_time}
Once the fastest mode to reach each zone is determined, it is possible to analyze the fastest full door-to-door travel time for each zone. This approach gives an overview of the level of integration of airports, train stations, and road structure and can indicate zones that are less reachable than others and would thus require more attention from urban planners.
The fastest full door-to-door travel time associated with a zone $z$ at a period $p$ is calculated as the average fastest travel time to reach zone $z$ at period $p$ across all modes and over the period $\mathcal{D}$,
\begin{equation}
\label{eq:avgfasttime_eu}
\bar{E}_z^p = \frac{1}{|\mathcal{D}|} \sum_{d \in \mathcal{D}} \min_m E_z^{d,p}(m) ~.
\end{equation}
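Continuing the illustrative sketch above, $\bar{E}^p_z$ is obtained by taking the minimum over modes of the per-day means before averaging over the days of $\mathcal{D}$:
\begin{verbatim}
from collections import defaultdict

def fastest_average_times(E):
    # E[(day, period, zone)] -> {mode: mean door-to-door time E^{d,p}_z(m)}
    acc = defaultdict(list)
    for (day, period, zone), per_mode in E.items():
        acc[(period, zone)].append(min(per_mode.values()))
    return {key: sum(v) / len(v) for key, v in acc.items()}
\end{verbatim}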
Figure\,\ref{fig:paris_time_cmp} displays the fastest full door-to-door travel times $\{\bar{E}_z^p\}_{p,z}$ to reach the different zones in the Paris dataset for trips finishing in the morning or at midday. The color scale that indicates the fastest full door-to-door travel times is identical in all subfigures. The contour of each zone indicates the fastest mode to reach it according to the results from Section\,\ref{sec:appli_bestmode} using the same color code as Figure\,\ref{fig:paris_best_cmp}, i.e. the zones reached faster through CDG are surrounded in blue, ORY in red and GDN in green.
\begin{figure*}[h!t]
\centering
\begin{subfigure}[b]{.49\textwidth}
\includegraphics[width=\textwidth]{paris_time_am.png}
\caption{Morning (AM)}
\end{subfigure}
\begin{subfigure}[b]{.49\textwidth}
\includegraphics[width=\textwidth]{paris_time_midday.png}
\caption{Midday}
\end{subfigure}
\caption{Comparison of the fastest full door-to-door travel times to the Paris area between the three considered arrival stations (CDG: blue, ORY: red, GDN: green) for a trip starting from Amsterdam city center for different trip termination periods. The contour color of each zone indicates the fastest mode to reach it.}
\label{fig:paris_time_cmp}
\end{figure*}
For a better comparison, the distribution of the number of zones per period reached within four time intervals (less than 4 hours, between 4 hours and 4 hours and 30 minutes, between 4 hours and 30 minutes and 5 hours, and more than 5 hours) is presented in Table\,\ref{tab:paris_count}.
\begin{table}[htb]
\centering
\caption{Number of zones per mode and period of the day grouped by full door-to-door travel time intervals. The original dataset is the same as that used to generate Figure\,\ref{fig:paris_time_cmp}.}
\label{tab:paris_count}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\bfseries Mode & \bfseries Time interval & \bfseries Early & \bfseries AM & \bfseries Midday & \bfseries PM & \bfseries Late \\\hline
\input{tables/count_paris_90.csv}
\hline
\end{tabular}
\end{table}
Looking only at the time color scale in Figure\,\ref{fig:paris_time_cmp}, we see an asymmetry between the north and the south of Paris. The number of green zones, i.e. zones reachable in less than 4 hours and 20 minutes, and the surface covered by these green zones are both larger in the northern part of Paris than in the southern part, including in areas close to Paris Orly airport. Combining this observation with the contour color of each zone suggests that Paris Orly airport is not as well integrated into the Parisian road structure as Paris Charles de Gaulle airport, which can make it less attractive for travelers wishing to fly and thus less competitive.
We can complete a more quantitative analysis from Table\,\ref{tab:paris_count}, with some of the main findings listed here:
\begin{itemize}
\item Only 10\% of the arrival zones are reached in less than 4 hours from Amsterdam city center.
\item Zones that can be reached in less than 4 hours from Amsterdam city center are overwhelmingly reached by train through Paris Gare du Nord (98\%).
\item A trip from Amsterdam city center to Paris going through ORY always takes more than 4 hours.
\item 78\% of the arrival zones are reached in less than 4 hours and 30 minutes from Amsterdam city center when combining all three possible modes.
\end{itemize}
Therefore, we can use these results to assess how close the 4-hour goal from ACARE FlightPath 2050 is to being met.
\subsection{Reliability issues}
The proposed full door-to-door travel time model assumes that passengers choose their departure time in order to arrive exactly $t_\text{sec}$ before the scheduled departure of their plane or train and that they also know how long it takes to reach the departure station. However, in reality, there is an uncertainty in the time the traveler will spend reaching the airport and in airport processing times. This uncertainty often leads to an additional buffer time implying an earlier departure time for the traveler. Using the presented model with the available data, we can find the most reliable mode to use per arrival zone. The most reliable mode for a given arrival zone is defined as the mode with the lowest variability in travel time, i.e. the mode where the difference between the maximum travel time and the minimum travel time to reach that zone is the lowest. This comparison is useful for passengers or trips that require an accurate arrival time rather than a minimum travel time.
Figure\,\ref{fig:paris_safest_cmp} shows the most reliable mode on average to reach the different zones in the Paris dataset for trips finishing in the morning or at midday. As in the previous analysis, the period was determined using the departure time of the full door-to-door trip, and the same color code is used, i.e. the zones reached most reliably through CDG are indicated in blue, ORY in red and GDN in green. For each zone and each period of the day, the associated most reliable mode is the mode having the highest number of days with the lowest average travel time variability over the considered date range.
\begin{figure}[h!t]
\centering
\begin{subfigure}[b]{.49\textwidth}
\includegraphics[width=\textwidth]{paris_safest_am.png}
\caption{Morning (AM)}
\end{subfigure}
\begin{subfigure}[b]{.49\textwidth}
\includegraphics[width=\textwidth]{paris_safest_midday.png}
\caption{Midday}
\end{subfigure}
\caption{Comparison of the average variability of travel times to the Paris area between the three considered arrival stations (CDG: blue, ORY: red, GDN: green) for a trip starting from Amsterdam city center for different trip termination periods.}
\label{fig:paris_safest_cmp}
\end{figure}
Though Figure\,\ref{fig:paris_safest_cmp} and Figure\,\ref{fig:paris_best_cmp} are similar, there are some major differences between average efficiency and average reliability. For example, though it is on average faster to reach the zones close to the highway leading to CDG by train, after 10:00 it is safer from a time variability perspective to reach them via CDG. From a reliability perspective, CDG has claimed almost all of the zones surrounding it, except in the early morning, where trips through GDN are still better. When we compare all three modes using this metric, it appears that GDN has the greatest decrease in competitiveness, with its range smaller than when considering average travel times.
\subsection{Impact of faster processing times}
The major difference between air and rail travel is the necessary processing time both at departure and at arrival. In this particular study, with a flight time of about 80 minutes, the current assumption of a departure processing time of 90 minutes implies that travelers spend more time at their departure airport than in flight, which greatly reduces the speed advantage of air travel. With the presented model, one can modify these assumed processing times in order to study the impact of improving them, both from an airport perspective and from a passenger perspective. Let us assume that the processing time at airports is improved from 90 to 60 minutes at departure and from 45 to 30 minutes at arrival. These improvements could realistically be achieved considering that this is an intra-Schengen trip with no border control.
\begin{figure}[h!t]
\centering
\begin{subfigure}[b]{.49\textwidth}
\includegraphics[width=\textwidth]{paris_best60_early.png}
\caption{Early morning with faster processing times}
\label{fig:paris60_best_early}
\end{subfigure}
\begin{subfigure}[b]{.49\textwidth}
\includegraphics[width=\textwidth]{paris_best_early.png}
\caption{Early morning with normal processing times}
\label{fig:paris90_best_early}
\end{subfigure}
\caption{Comparison of the average total travel times to the Paris area assuming faster airport processing times between the three considered arrival stations (CDG: blue, ORY: red, GDN: green) for a trip starting from Amsterdam city center for trips arriving at destination early in the morning. The corresponding map with normal processing times is also reproduced.}
\label{fig:paris60_time_cmp_early}
\end{figure}
Figures\,\ref{fig:paris60_time_cmp_early}\&\,\ref{fig:paris60_time_cmp_midday} show the fastest mode on average to reach the different zones in the Paris dataset for trips arriving at their destination early in the morning or at midday.
The first major difference with this processing time improvement can be seen for trips arriving in the early morning (Figure\,\ref{fig:paris60_time_cmp_early}): all zones previously reached through CDG are no longer accessed in this period since they were associated with the 21:45 flight of the previous day. This indicates that all trips through CDG start and end on the same day, with no trips finishing after midnight.
\begin{figure*}[h!t]
\centering
\begin{subfigure}[b]{.49\textwidth}
\includegraphics[width=\textwidth]{paris_best60_midday.png}
\caption{Midday with faster processing times}
\label{fig:paris60_best_midday}
\end{subfigure}
\begin{subfigure}[b]{.49\textwidth}
\includegraphics[width=\textwidth]{paris_best_midday.png}
\caption{Midday with normal processing times}
\label{fig:paris90_best_midday}
\end{subfigure}
\caption{Comparison of the average total travel times to the Paris area assuming faster airport processing times between the three considered arrival stations (CDG: blue, ORY: red, GDN: green) for a trip starting from Amsterdam city center for trips arriving at destination at midday. The corresponding maps with normal processing times are also reproduced.}
\label{fig:paris60_time_cmp_midday}
\end{figure*}
Looking at trips arriving at midday (Figure\,\ref{fig:paris60_time_cmp_midday}), trips through CDG are greatly advantaged by this time improvement, with CDG taking over more than half of GDN's previous influence zone. This range increase for CDG can be explained both by faster door-to-door travel times and by the increased number of trips through CDG arriving at midday (rather than in the afternoon). In fact, except in the early morning, GDN loses competitiveness against both airports, with its range shrinking greatly in size. The competition between CDG and ORY remains unchanged, which is understandable since they both received the same processing time improvement.
A quantitative analysis similar to the analysis presented in Table\,\ref{tab:paris_count} concludes that all trips are now conducted in less than five hours and that 99.8\% of the zones reachable are reached in less than 4h30. ORY sees some major improvements with 97.5\% of the zones best reached through it reached in less than four hours (compared to no trips in less than 4h in the initial model), while increasing the number of zones it reaches the fastest.
Using a map representation similar to that of Section\,\ref{sec:appli_time}, but not presented here due to space considerations, it is possible to notice a 20--30 minute shift in the time distribution for every period except for early morning trips, since train processing times were unchanged. The upper bound on travel time is also unchanged for trips arriving in the morning, which would indicate that for some zones the processing time improvement resulted in no gain, or even a worsening, of the full trip travel time. That exception aside, in this case a 45-minute improvement in airport processing time leads to at most 30 minutes of improvement in the average total travel time, due to the influence of train trips through GDN.
\section{A multi-modal analysis of the US air transportation system}
\label{sec:appli_us}
Additional insights can be gained from this full door-to-door travel time model thanks to the availability of complementary data. The United States is a federal state the size of a continent, and various aggregated and centralized datasets are therefore more easily available. Several of these datasets are used in this section to extend the range of applications of the presented full door-to-door model. This US study is limited to the period from January 1$^\text{st}$\, 2018 to March 31$^\text{st}$\, 2018.
\subsection{Flight schedule}
As presented in the model definition in Section\,\ref{sec:model_us_in}, both the scheduled flight times and the actual flight schedules of most domestic flights can be obtained via the Department of Transportation Bureau of Transportation Statistics (BTS) \cite{BTSwebsite}. This study considers only the six US airports presented in Section\,\ref{sec:model_wait}, three East-coast airports - Hartsfield-Jackson Atlanta International Airport (ATL), Boston's Logan International Airport (BOS) and Ronald Reagan Washington National Airport (DCA) - and three West-coast airports - Los Angeles International Airport (LAX), Seattle-Tacoma International Airport (SEA) and San Francisco International Airport (SFO).
During this three-month period, 38,826 flights were considered, which corresponds to 3,523 early flights, 8,170 morning flights, 13,451 midday flights, 6,695 afternoon flights and 6,987 evening flights.
The full door-to-door travel times were then calculated for each scheduled flight from January 1$^\text{st}$, 2018 to March 31$^\text{st}$, 2018, using the model presented in Section\,\ref{sec:model}.
\subsection{Leg analysis}
We can use the full door-to-door model to better understand the time spent in each leg proportionally to the time spent on the overall trip. For each trip, we calculate the percentage of time spent at each phase based on the full door-to-door travel time. We then calculate the average percentage time spent for each phase and for each city pair trip. Figure\,\ref{fig:bar_time_split} shows the bar plot of these average percentage times for the thirty considered city pairs. The city pairs are sorted according to the percentage of time spent in the actual flight phase.
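The per-phase shares can be computed directly from the five components of the model; a minimal sketch, assuming each trip record stores the duration of each phase in minutes (names are illustrative):
\begin{verbatim}
from collections import defaultdict

PHASES = ("t_to", "t_dep", "t_in", "t_arr", "t_from")

def phase_shares_by_city_pair(trips):
    shares = defaultdict(lambda: defaultdict(list))
    for t in trips:
        total = sum(getattr(t, p) for p in PHASES)
        for p in PHASES:
            shares[(t.origin, t.destination)][p].append(getattr(t, p) / total)
    return {pair: {p: sum(v) / len(v) for p, v in d.items()}
            for pair, d in shares.items()}
\end{verbatim}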
\begin{figure}[h!bt]
\centering
\includegraphics[width=\columnwidth]{bar_time_split_atl}
\caption{Bar plot of the average proportion of the time spent within each phase of the full door-to-door journey for all thirty considered trips.}
\label{fig:bar_time_split}
\end{figure}
With the proposed full door-to-door model, for all considered trips, passengers spend on average more time at the departure airport than riding to and from the airports. This figure also shows that, with this model, for some short-haul flights, such as between SFO and LAX or between BOS and DCA, passengers spend on average more time at the departure airport than in the plane. Refining the full door-to-door model by considering tailored airport processing times $t_\text{sec}$ at departure depending on the city pair and not only on the departure airport could lead to a different conclusion. However, this modification of the model would require more access to passenger data.
\subsection{Airport integration comparison}
The proposed full door-to-door model allows us to compare each airport's integration within its metropolitan area. Each census tract is associated with an internal point within its boundaries, and this internal point can be used to automatically calculate the distance between airports and each census tract of their metropolitan area. The internal points were defined using an online database\footnote{\url{www.usboundary.com}} based on the US government 2010 census. Figure\,\ref{fig:dist_avg_mean_daily} shows the scatter plot of the average daily ride time to each airport versus the geodesic distance to the airport for the six considered airports. The geodesic distance is the shortest distance between two points along the surface of the Earth. The plot also shows a linear regression of these average times with respect to the distance to the airport. A steeper slope for the linear regression indicates that it takes longer to reach the airport from a given distance.
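This regression is straightforward to reproduce from the data; the sketch below is a minimal, illustrative version assuming census-tract internal points and airport locations are given as (latitude, longitude) pairs and average ride times in minutes.
\begin{verbatim}
import math
import numpy as np

def geodesic_km(p, q):
    # haversine approximation of the geodesic distance in kilometres
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def airport_access_slope(tract_points, avg_ride_times, airport_point):
    dists = [geodesic_km(p, airport_point) for p in tract_points]
    slope, intercept = np.polyfit(dists, avg_ride_times, 1)
    return slope  # minutes of ride time per kilometre from the airport
\end{verbatim}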
\begin{figure}[h!t]
\centering
\includegraphics[width=\columnwidth]{dist_avg_mean_daily_atl}
\caption{Scatter plot of the average ride time to the airport $t_\text{to}$ versus the distance to the airport from January 1$^\text{st}$\, 2018 to March 31$^\text{st}$\, 2018. Straight lines indicate the linear regression fit for each city.}
\label{fig:dist_avg_mean_daily}
\end{figure}
Figure\,\ref{fig:dist_avg_mean_daily} highlights the disparity in range between the considered airports within the available data: DCA has a range limited to 20 km while SFO attracts Uber riders from more than 120 km away. The other four airports have a similar range. The difference in slope of their associated linear regressions is nonetheless useful to rank their integration within their region of attraction. From this perspective, Seattle has the best integrated airport, i.e. the smallest slope, followed by Atlanta, Boston and then Los Angeles.
\subsection{Impact of severe weather analysis}
Using the same door-to-door travel time visualization process and applying it to different days can be a tool to better analyze the effects of severe weather perturbations on the full door-to-door journey. As an example, the winter storm previously studied in \cite{marzuoli2018PassengercentricMetricsAir} is analyzed for trips between Washington D.C. and Boston using Figure\,\ref{fig:bos_from_dca}.
This winter storm hit the East Coast of the United States on January 4$^\text{th}$\, 2018, and led to the closure of two airports in New York City, along with the cancellation of the majority of flights flying to or from the North-Eastern US coast.
\begin{figure}[h!t]
\centering
\begin{subfigure}[b]{\columnwidth}
\includegraphics[width=\columnwidth]{BOS_02Jan18.png}
\caption{Before landfall, on January 2$^\text{nd}$, 2018}
\label{fig:bos_from_dca_02}
\end{subfigure}
\begin{subfigure}[b]{\columnwidth}
\includegraphics[width=\columnwidth]{BOS_05Jan18.png}
\caption{After landfall, on January 5$^\text{th}$, 2018}
\label{fig:bos_from_dca_05}
\end{subfigure}
\caption{Average door-to-door travel times from Washington D.C. city hall to Boston over a single day, before and after the Bomb Cyclone of January 2018.}
\label{fig:bos_from_dca}
\end{figure}
Figure\,\ref{fig:bos_from_dca} shows the map of the average full door-to-door travel times to reach the Boston area starting from Washington D.C. city hall on January 2$^\text{nd}$\, 2018, before the landfall of this winter storm, and on January 5$^\text{th}$\, 2018, after the landfall of the winter storm.
The color scales representing the full door-to-door travel time are identical from one map to another. A shift towards the red is visible from January 2$^\text{nd}$\, 2018 (Figure\,\ref{fig:bos_from_dca_02}) to January 5$^\text{th}$\, 2018 (Figure\,\ref{fig:bos_from_dca_05}), along with some census tracts disappearing from the considered range on January 5$^\text{th}$\, 2018, due to lack of sufficient Uber ride data. These two observations indicate that on January 5$^\text{th}$\, 2018 the full door-to-door travel times are closer to the maximum average travel time than to the minimum travel time, compared to January 2$^\text{nd}$\, 2018, and that some zones might have been sufficiently adversely impacted by the weather to prevent rides from the airport from reaching them.
\subsection{On the importance of a passenger-centric approach to delays}
A final application of the full door-to-door model presented in this paper emphasizes the difference between flight delay and passenger delay. Since Uber splits the day into five different periods, each with its own traffic idiosyncrasies with respect to peak times, we can calculate how much extra travel time is required for a passenger when a flight does not arrive in the scheduled period. For example, a flight expected to arrive in the early morning that lands after 10:00 AM could result in the passenger getting stranded in traffic when trying to leave the airport. Though airlines are not responsible for road traffic, passengers can choose flights based on their arrival time to avoid peak-time traffic.
To calculate this extra travel time at an aggregated level, we compute, for each arrival zone, the difference in average travel time between the two periods concerned by a flight not arriving according to schedule. These travel time differences are then aggregated into one travel time difference per city pair by weighting the travel time associated with each census tract by the proportion of passengers initiating or finishing their trips there. The number of passengers originating from or finishing within a census tract is assumed to be proportional to the population density of the considered census tract.
Another measure of sensitivity is the maximum, over all zones, of the difference between the maximum travel times of the two considered periods. This second measure indicates the worst variation of the travel time upper bound, i.e. the largest extra time a traveler can experience going from the airport to their final destination zone.
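Both measures can be computed directly from the aggregated Uber data; the following sketch assumes that average and maximum zone travel times per period and census-tract population densities are available (all names are illustrative):
\begin{verbatim}
def passenger_delay_measures(zones, avg_time, max_time, density,
                             period_sched, period_actual):
    # avg_time[(zone, period)] and max_time[(zone, period)] are in minutes;
    # density[zone] is used as a proxy for the share of passengers per zone
    total = sum(density[z] for z in zones)
    weighted_extra = sum(
        density[z] * (avg_time[(z, period_actual)] - avg_time[(z, period_sched)])
        for z in zones) / total
    worst_extra = max(
        max_time[(z, period_actual)] - max_time[(z, period_sched)] for z in zones)
    return weighted_extra, worst_extra
\end{verbatim}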
Let us consider flight UA460 from LAX to SFO, scheduled to arrive on Thursday February 15, 2018 at 18:02 local time, which landed with a minor delay of 16 minutes. Due to the 45-minute processing time required to leave the airport, this 16-minute delay shifts the departure period from the airport from afternoon (PM) to late evening. The aggregated average extra travel time is 15 minutes and 40 seconds, i.e. a 16-minute flight delay triggered an average 31-minute total delay for the passengers. Looking at the second considered measure, the maximum travel time difference for this flight delay is 72 minutes, meaning that one passenger could experience a delay of 88 minutes resulting from this 16-minute flight delay. This first example illustrates that passenger delays and aircraft delays are distinct.
Paradoxically, arriving earlier than scheduled for a flight does not necessarily mean that the full door-to-door trip ends earlier. For example, flight VX1929 from LAX to SFO, scheduled to arrive on Thursday February 8, 2018 at 15:22 local time, actually landed 25 minutes earlier. This implied that the passengers were no longer leaving the airport in the afternoon (PM) period but at midday. The aggregated average extra travel time is here 15 minutes and 2 seconds, so on average travelers did arrive earlier than scheduled, but only by about ten minutes and not the twenty-five minutes announced by the airline. However, looking at the second measurement method again, the maximum ride time difference is 66 minutes and 44 seconds, which means that a passenger could end up arriving forty minutes later than if the flight had landed on time.
\section{Discussion \& Conclusion}
\label{sec:concl}
By leveraging Uber's recently released data and combining it with several other available data sources, this paper proposed a data-driven model for the estimation of full door-to-door travel times for multi-modal trips both in Europe and in the United States. Though the model is used for one city pair in Europe and six different cities in the United States, it can be implemented between any world city pair with sufficient available ride-sharing or taxi data. The proposed model can be adapted depending on how much data about the considered travel modes are available, since a weekly schedule containing no information relative to delays can already lead to some meaningful insights from both a passenger perspective and a city planner perspective.
Once aggregated at a city level, the presented door-to-door travel time model can be used for a paired comparison of the different phases of the full door-to-door journey between two cities. It also enables us to analyze the actual time spent while travelling between two specific cities. The evaluation on a national level of some of the passenger-centric objectives proposed within NextGen in the United States and within ACARE FlightPath 2050 in Europe is possible thanks to the proposed model, especially the objectives regarding how well integrated airports are within their cities. The model can also provide insight into how multi-modal trips are affected by severe weather disruptions, indicating where improvements can be made. It also provides a valuable measure of the difference between flight delays and passenger delays, emphasizing the need for passenger-centric metrics to evaluate the performance of the air transportation system, which does not consist solely of planes.
Future studies should consider integrating additional data into the model as they become available, such as alternative transportation modes (e.g. the subway) when calculating the time needed to reach the departure station or to leave the arrival station. Additionally, knowledge of the actual daily proportion of passengers travelling via the different approach modes (road or rail) would improve the precision of the proposed full door-to-door travel time model. Aggregated information from GPS or mobile phone sources could possibly be used to determine this proportion without infringing on passenger privacy.
\section*{Acknowledgments}
The authors would like to thank Nikunj Oza from NASA-Ames, the BDAI team from Verizon Media in Sunnyvale as well as the \emph{Ecole Nationale de l'Aviation Civile} and King Abdullah University of Science and Technology for their financial support. The authors are also grateful for the help of Marine Lebeater for her feedback. The authors would also like to thank the SESAR Joint Undertaking for its support of the project ER4-10 "TRANSIT: Travel
Information Management for Seamless Intermodal Transport". Data retrieved from Uber Movement, (c) 2020 Uber Technologies, Inc., https://movement.uber.com.
\bibliographystyle{IEEEtran}
\IEEEtriggeratref{14}
\section{Introduction}
\label{sec:intro}
Both in Europe and in the United States, national or supra-national agencies promote the need for seamless door-to-door travel and data sharing. Both were deemed necessary by the European Commission's 2011 White Paper \cite{darecki2011Flightpath2050Europe} and were reconfirmed by the Federal Aviation Administration (FAA) in 2017 \cite{2017NextGenPrioritiesJoint}. Data sharing was already a main focus in the early 2000s; in response, Europe created and adopted SWIM - System Wide Information Management \cite{meserole2006WhatSystemWide} - and the FAA followed suit. The Next Generation Air Transportation System (NextGen) \cite{nextgen} in the United States and the Advisory Council for Aeronautics Research in Europe (ACARE) Flightpath 2050 \cite{darecki2011Flightpath2050Europe} both aim to have a more passenger-centric approach. To this end, ACARE Flightpath 2050 sets some ambitious goals, which are not all measurable yet due to a lack of available data. For example, it aims for 90\% of travelers within Europe to be able to complete their door-to-door journey within 4 hours. In the US, the Joint Planning and Development Office has proposed and tested metrics regarding NextGen's goals \cite{gawdiak2011NextGenMetricsJoint}, but the passenger-centric metrics, especially regarding door-to-door travel times, are still missing.
Cook et al. \cite{cook2012PassengerOrientedEnhancedMetrics} first explored the shift from flight-centric to passenger-centric metrics in the project POEM - Passenger Oriented Enhanced Metrics - where they propose propagation-centric and passenger-oriented performance metrics and compare them with existing flight-centric metrics. Later, Laplace et al. \cite{laplace2014METACDMMultimodalEfficient} introduce the concept of Multimodal, Efficient Transportation in Airports and Collaborative Decision Making (META-CDM); they propose to link both airside CDM and landside CDM, thus taking passenger perspective into account. In this perspective, Kim et al. \cite{kim2013AirportGateScheduling} propose an airport gate scheduling model for improved efficiency with a balance between aircraft, operator and passenger objectives. Dray et al. \cite{dray2015AirTransportationMultimodal} illustrate the importance of multimodality by considering ground transportation as well during major disturbances of the air transportation system in order to offer better solutions to passengers.
The estimation of door-to-door travel time for multi-modal trips has been previously studied, but for trips contained within the same metropolitan area. Peer et al. \cite{peer2013DoortodoorTravelTimesa} focus on commutes within a Dutch city by studying door-to-door travel times and schedule delays for daily commuters, and show that, for the estimation of the overall travel time, it is important to consider the correlation of travel times across different road links. Salonen and Toivonen \cite{salonen2013ModellingTravelTime} investigate the need for comparable models and measures for trips by car or public transport with focus on the city of Helsinki. Their multi-modal approach takes into account the walking and waiting times necessary to reach a station or a parking place. Duran-Hormazabal and Tirachini \cite{duran-hormazabal2016EstimationTravelTime} analyze travel time variability for multi-modal trips within the city of Santiago, Chile, using human surveyors and GPS data to estimate the time spent in the different transportation modes, namely walking, driving a car, riding a bus and taking the subway. Pels et al. \cite{pels2003AccessCompetitionAirports} analyze the relative importance of access time to airports in the passengers' choice within the San Francisco Bay Area based on a passenger survey, offering perspective from air transportation. These works emphasize the importance of considering all relevant modes when estimating door-to-door travel times, but are limited in scope with respect to the size of the area considered and the amount of data available.
Thanks to the increasing use of mobile phones as data sources, larger scale studies with a focus on air transportation have been possible. In the United States, Marzuoli et al. \cite{marzuoli2019ImplementingValidatingAir} implement and validate a method to detect domestic air passengers using mobile phone data available on a nationwide scale. Though the main focus of this work is the passenger behavior at airports, the granularity of the data facilitates analysis of each phase within the full door-to-door trip. Marzuoli et al. \cite{marzuoli2018PassengercentricMetricsAir} then combine mobile phone data with social media data to analyze passenger experiences in airports during a major disruption. In Europe, within the BigData4ATM project\footnote{\url{www.bigdata4atm.eu}}, Garcia-Albertos et al. \cite{garcia-albertos2017UnderstandingDoortoDoorTravel} also present a methodology for measuring door-to-door travel times using mobile phone data, illustrated through trips between Madrid and Barcelona. Mobile phone data are, however, proprietary data, which are difficult to access for research.
Grimme and Martens \cite{grimme2019Flightpath2050Revisited} propose a model analyzing the feasibility of the 4-hour goal proposed by FlightPath 2050 based on airport-to-airport flight times and a simplified model of access to and from airports. Sun et al. \cite{sun2018CompetitivenessOndemandAir} use open source maps and datasets to calculate door-to-door minimum travel time estimations in order to study the possible competitiveness of air taxis.
In the upcoming sections, the model and analysis presented are also based on already available online data, but with a post-operations approach. The aim of this model is to provide a method for measuring average door-to-door travel times once trips are completed, in order to analyze and compare available modes of transportation. We have applied the first version of this method to two intra-European multi-modal trips, thus comparing air transportation and rail transportation \cite{monmousseau2019DoortodoorTravelTime}. We then used an improved version of the same method, leveraging four different data sources (ride-sharing, flight, phone, and census data) and adapted to the conditions in the United States, to compare trips using direct flights between five US cities, three of them on the West Coast and the other two on the East Coast \cite{monmousseau2020DoortodoorAirTravel}.
In this paper, we offer a data-driven model for the computation of door-to-door travel times that harnesses recently available data along with public data. The data-driven methods developed can be applied to most multi-modal trips between two cities where relevant data are available and are not limited to the air transportation system. The range of new analyses available using this model is illustrated with a multi-modal analysis of an intra-European trip, a per-leg analysis of multiple intra-USA trips and an analysis of the impact of severe weather disruptions. These analyses have direct applications for passengers, urban planners and decision makers and highlight the difference between taking a flight-centric approach to the air transportation system and taking a passenger-centric approach.
Section\,\ref{sec:model} of this paper presents the data-driven full door-to-door travel time model; Section\,\ref{sec:appli_paris} showcases a first set of analyses and applications facilitated by this model for trips between Amsterdam and Paris; Section\,\ref{sec:appli_us} focuses on a set of analyses for trips within the United States, where more data are available; finally, Section\,\ref{sec:concl} concludes this paper and proposes future research directions.
\section{The full door-to-door data-driven model}
\label{sec:model}
Similarly to \cite{garcia-albertos2017UnderstandingDoortoDoorTravel} and \cite{sun2018CompetitivenessOndemandAir}, we can deconstruct the travel time $T$ for trips with direct flights or direct train links into five different trip phases, represented in Figure\,\ref{fig:door2door_model} and summarized in equation \eqref{eq:door2door},
\begin{equation}
\label{eq:door2door}
T = t_\text{to} + t_\text{dep} + t_\text{in} + t_\text{arr} + t_\text{from} ~,
\end{equation}
where
\begin{itemize}
\item $t_\text{to}$ is the time spent traveling from the start of the journey to the departure station (e.g. train station or airport),
\item $t_\text{dep}$ is the time spent waiting and going through security processes (if any) at the departure station,
\item $t_\text{in}$ is the time actually spent in flight or on rails,
\item $t_\text{arr}$ is the time spent at the arrival station (e.g. going through security processes),
\item $t_\text{from}$ is the time spent traveling from the arrival station to the final destination.
\end{itemize}
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\node[] (q0) {};
\node [draw,fill=gray!20,align=center, rounded corners, right of=q0,opacity=1] (dep) {Departure\\ station};
\node[right of=dep] (qi) {};
\node [draw,fill=gray!20,align=center, rounded corners, right of=qi,opacity=1] (arr) {Arrival\\ station};
\node[right of=arr] (qf) {};
\draw[opacity=1] (q0) edge node {$t_\text{to}$} (dep);
\draw[opacity=1] (dep) edge[loop above] node {$t_\text{dep}$} (dep);
\draw[opacity=1] (dep) edge[bend left=45] node {$t_\text{in}$} (arr);
\draw[opacity=1] (arr) edge[loop above] node {$t_\text{arr}$} (arr);
\draw[opacity=1] (arr) edge node {$t_\text{from}$} (qf);
\end{tikzpicture}
\caption{Model of the full door-to-door travel time.}
\label{fig:door2door_model}
\end{figure}
The full model for door-to-door travel time proposed in this paper is established by data-driven methods used to calculate the values of the different times contained in equation \eqref{eq:door2door}. These data-driven methods are described in Sections \ref{sec:model_uber} through \ref{sec:model_total}.
This study focuses on air and rail transportation as main transportation modes, which give the value of $t_\text{in}$, though the process can also be applied to inter-city bus trips. Furthermore, it is assumed that passengers travel by road when arriving or leaving the main station (airport or train station) for the calculation of $t_\text{to}$ and $t_\text{from}$.
In response to data availability, the case studies only consider six major US cities (Atlanta, Boston, Los Angeles, Seattle, San Francisco and Washington D.C.) and two European capitals (Amsterdam and Paris).
\subsection{Travel time from the origin location to the departure station and from the arrival station to the final destination}
\label{sec:model_uber}
We can estimate the road transit times from origin location to departure station ($t_\text{to}$) and from arrival station to final destination ($t_\text{from}$) by using aggregated and publicly available data from taxi or ride-sharing services.
Uber \cite{UberWebsite} is a ride-sharing service launched in 2010 and operating in major urban areas on six continents; it has recently released anonymized and aggregated travel time data for some of the urban areas where it operates. The available data consist of the average, minimum and maximum travel times between different zones (e.g. census tracts in the case of US cities) within the serviced area, computed from all Uber rides and aggregated over five different periods for each considered day. The five considered periods, used throughout this study, are defined as follows:
\begin{itemize}
\item Early Morning: from midnight to 7am
\item AM: from 7am to 10am
\item Midday: from 10am to 4pm
\item PM: from 4pm to 7pm
\item Late Evening: from 7pm to midnight
\end{itemize}
There are days when the travel times between some zones are only aggregated at a daily level. Travel times are associated with their mean starting door time, i.e. the mean of all the time stamps from the trip contained in the zone of departure.
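For reference, mapping a local time of day to one of the five periods listed above can be done with a simple helper such as the following sketch (the boundaries are treated as half-open intervals):
\begin{verbatim}
def uber_period(hour):
    # hour: local hour of day as an integer in [0, 24)
    if hour < 7:
        return "Early Morning"
    if hour < 10:
        return "AM"
    if hour < 16:
        return "Midday"
    if hour < 19:
        return "PM"
    return "Late Evening"
\end{verbatim}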
Since Uber was initially introduced in the US, the impact of Uber on US urban transit has already been the focus of several studies prior to this data release.
Li et al. \cite{li2016OndemandRidesharingServices} conclude that, at an aggregated level, Uber tends to decrease congestion in the US urban areas where it was introduced. Later, Erhardt et al. \cite{erhardt2019TransportationNetworkCompanies} build a model showing that ride-sharing companies do increase congestion, using the example of San Francisco. Hall et al. \cite{hall2018UberSubstituteComplement} focus on whether Uber complemented or substituted public transit by studying the use of public transit systems before and after Uber's entry date in different US cities. Wang and Mu \cite{wang2018SpatialDisparitiesUber} study Uber's accessibility in Atlanta, GA (US) by using the average wait time for a ride as a proxy and conclude that Uber use is not associated with a specific social category. Following the release of Uber data, Pearson et al. \cite{pearson2018TrafficFlowAnalysis} propose a traffic flow model based on this aggregated Uber data and use it to analyze traffic patterns for seven cities worldwide. Considering Uber rides as part of the road traffic flow, this study treats Uber's travel times as an acceptable proxy for the actual travel times by road. In cities where buses do not have dedicated lanes, these travel times are a valid proxy for both car and bus trips. This paper limits its scope to road access to and egress from the considered stations; subway alternatives are not considered in this paper.
Each US city is divided into its census tracts; Paris into the IRIS zones used by INSEE \cite{InseeWebsite} for the census; and Amsterdam into its official districts called \emph{wijk}.
\subsection{Dwell time at stations}
\label{sec:model_wait}
The dwell time at a station, either $t_\text{dep}$ or $t_\text{arr}$, is defined as the time spent at the station, whether going through security processes, walking through the station, or waiting. The time spent at each station depends on the mode considered, the specific trip, and whether the passenger is departing or arriving. The dwell time at departure can be split into two components,
\begin{equation}
\label{eq:split_dep}
t_\text{dep} = t_\text{sec} + t_\text{wait} ~,
\end{equation}
a processing time, $t_\text{sec}$, necessary to get through security (if any) and through the station to the desired gate or track, and an extra wait time, $t_\text{wait}$, due to unanticipated delays.
Processing times at US airports are based on the average wait times at airports extracted from the study of Marzuoli et al. \cite{marzuoli2019ImplementingValidatingAir}. The six US airports under study in this paper are: Hartsfield-Jackson Atlanta International Airport (ATL), Boston's Logan International Airport (BOS) and Ronald Reagan Washington National Airport (DCA) for the East Coast, Los Angeles International Airport (LAX), Seattle-Tacoma International Airport (SEA) and San Francisco International Airport (SFO) for the West Coast. Processing times at European airports are assumed invariant between airports and determined using most airline recommendations. The three European airports under study are: Paris Charles de Gaulle Airport (CDG), Paris Orly Airport (ORY) and Amsterdam Airport Schiphol (AMS).
The average dwell times at these airports are summarized in Table\,\ref{tab:t_at_mode_us} for US airports and in Table\,\ref{tab:t_at_mode_eu} for European airports.
\begin{table}[h!]
\centering
\caption{Average dwell time spent at US airports in minutes.}
\label{tab:t_at_mode_us}
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
& \textbf{ATL} & \textbf{BOS} & \textbf{DCA} & \textbf{LAX} & \textbf{SEA} & \textbf{SFO} \\ \hline
Time at departure & 110 & 105 & 100 & 125 & 105 & 105 \\
Time at arrival & 60 & 40 & 35 & 65 & 50 & 45 \\
\hline
\end{tabular}
\vspace{1em}
\caption{Average dwell time spent at European airports in minutes.}
\label{tab:t_at_mode_eu}
\begin{tabular}{|l|c|c|c|}
\hline
& \textbf{AMS} & \textbf{CDG} & \textbf{ORY} \\ \hline
Time at departure & 90 & 90 & 90 \\
Time at arrival & 45 & 45 & 45 \\
\hline
\end{tabular}
\end{table}
With regard to processing times at train stations, based on the recommendation of the train station websites, the departure dwell time is set at 15 minutes and the arrival dwell time is set at 10 minutes for all train stations. We can improve these estimates by gathering data from GPS or mobile phone sources as well as WiFi beacons within airports and train stations, and by using a method similar to Nikoue et al. \cite{nikoue2015PassengerFlowPredictions}.
We can calculate the extra wait times when the scheduled and actual departure or arrival times are available. For US airports, these wait times are calculated only for departure using the publicly available data from the Bureau of Transportation Statistics (BTS) \cite{BTSwebsite}. They were obtained by subtracting the scheduled departure time from the actual flight departure time.
\subsection{Time in flight or on rail}
\subsubsection{US flights}
\label{sec:model_us_in}
The actual flight time was calculated based on the data from BTS using the actual departure/arrival times of all direct flights between each city pair from January 1$^\text{st}$\, 2018 to March 31$^\text{st}$\, 2018. Cancelled flights are not considered in this study and were discarded.
\subsubsection{European trips}
\label{sec:model_eu_in}
In Europe, we assume that flights and trains are on time and follow a weekly schedule, due to a lack of publicly centralized flight schedule data. The weekly schedules are extracted from actual train and flight schedules gathered over a period of several months and are assumed applicable over the full period under study.
\subsection{Full door-to-door travel time}
\label{sec:model_total}
Our model assumes that travelers plan their departure time to arrive at the departure station exactly $t_\text{sec}$ minutes (eq. \eqref{eq:split_dep}) before the scheduled departure time of their flight or train. We use this assumption to determine the value of $t_\text{to}$ since it defines the period of the day to consider when extracting the Uber average time from the origin location to the departure station. We extract the value of $t_\text{from}$ by using the actual arrival time of the flight or train. When only daily aggregated times are available in the Uber data, these daily times are used as a proxy for each period of the day.
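Putting these assumptions together, the access and egress legs can be looked up as in the following sketch, which reuses the period helper sketched in Section\,\ref{sec:model_uber}; all attribute and key names are illustrative, and the choice of the hour used to select each period is one possible reading of the assumptions above.
\begin{verbatim}
from datetime import timedelta

def access_egress_times(trip, uber_avg, uber_period):
    # uber_avg[(from_zone, to_zone, period)] -> average road time in minutes
    at_station = trip.sched_dep - timedelta(minutes=trip.t_sec)
    t_to = uber_avg[(trip.origin_zone, trip.dep_station_zone,
                     uber_period(at_station.hour))]
    # the arrival dwell time could also be added before picking the period
    t_from = uber_avg[(trip.arr_station_zone, trip.dest_zone,
                       uber_period(trip.actual_arr.hour))]
    return t_to, t_from
\end{verbatim}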
\section*{Acknowledgments}
\begin{acks}
This work was partially supported by NSF awards: CCF-1533564,CNS-1544753, CNS-1730396, CNS-1828576.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction} \label{sec:introduction}
Sports produce thousands of hours of video a season. Video is ubiquitous in sports as it is not only the main mode of sports consumption for fans, but also forms the primary analysis method for teams. However, video-based analysis is difficult to scale, as it is hard to find situations of interest across large video collections. In recent years, large amounts of sports data are being generated in the form of event or tracking data~\cite{basole2016sports}. The size of this data can exceed a terabyte per game, as is the case for baseball~\cite{woodie2014}. This growing source of data is being leveraged to proliferate new modes of analysis and to power unique visualization and play retrieval systems across many contemporary sports, such as baseball, soccer or basketball.
Esports, also known as professional gaming, is one sport that is rapidly gaining traction. For example, in 2013, the League of Legends World Championship garnered 32 million live streaming views~\cite{keiper2017no}. Additionally, esports betting markets are attracting billions of dollars in action, surpassing those of conventional sports~\cite{macey2020predicts}. However, unlike other sports, esports has not attracted similar analytical interest. This is largely due to a lack of accessible data, which has typically fostered analytics in other sports~\cite{basole2016sports}. Accordingly, the bulk of game analysis in esports is still accomplished through video, whereby players, coaches and analysts systematically review both their opposition's and their own recorded games to discover strengths and weaknesses. Game review can be an extremely time-intensive and manual process, especially for a team with limited resources, as a single game can stretch for many hours. For media organizations, which may have the resources to parse through thousands of videos to discover relevant highlights for broadcast, large databases can hold relevant slices of video tagged by keywords, which are often imprecise~\cite{sha2016chalkboarding}. Further complicating game review is the difficulty of quickly parsing between recorded games -- many tools and applications used to review esports games are cumbersome to use, and may not easily provide a global view of the game. Ultimately, for many stakeholders in esports, the ability to query particular game situations would greatly improve productivity for an otherwise arduous and time-consuming workload.
Because manual game review is difficult to scale, sports like baseball, soccer and basketball have developed multiple data acquisition methods to support quantitative analysis. For example, there exist services, such as OPTA, which manually tag events in sports games~\cite{liu2013inter}. Recently, computer vision based systems which automatically generate spatial and event data from video are gaining traction~\cite{d2010review}. With large, clean and reliable spatiotemporal and event data, contemporary sports have developed visual analytics systems that allow analysts to quickly query for game situations of interest, thereby greatly improving productivity in the game review process. These systems utilize the growing amount of player tracking and event data, which can be used to visualize portions of a game in place of video.
Since esports are played entirely virtually, one might expect data capture to be straightforward. However, the tools for data acquisition are surprisingly limited, and if they exist, undocumented~\cite{charleer2018towards}. Furthermore, esports data is often stored in esoteric formats that are riddled with inconsistencies and lacks a commonly accepted data model~\cite{bednarek2017data}. Thus, most data in esports exists only as simple counting statistics, as opposed to complex spatiotemporal and event data seen in other sports~\cite{maymin2018open}. These aforementioned data challenges have hampered development of state querying and retrieval systems for esports, since player tracking and event data is hard to acquire.
This paper presents ggViz, a visual analytics system to navigate a large corpus of spatiotemporal Counter-Strike: Global Offensive (CSGO) data. We focus on CSGO as it is one of the most popular esports and, compared to other video games, allows for easy data collection. ggViz allows users to search for game situations of interest through a drawing-based query method. Central to our system is a fast and performant state tokenization algorithm that efficiently encodes game scenarios. To do so, we exploit a graph of discrete spatial information that computer-controlled players use to move in-game. Our approach is sport-agnostic, and can easily be extended to other esports or conventional sports where the playing surface can be discretized, such as a soccer field or basketball court. We motivate ggViz's development through interviews with analysts, coaches and managers from top esports teams. We then evaluate ggViz through case studies and expert interviews, and show that our system enables quick game review and tactic discovery and is well received by professional esports stakeholders.
\\[1em]
\textbf{Contributions.} We make the following contributions:
\begin{enumerate}
\item We detail a sport-agnostic tokenization algorithm to quickly summarize game scenarios by generating a token based on the spatial distributions of players.
\item We introduce ggViz, a visual analytics system that allows users to sketch game situations of interest to query for similar situations.
\item We evaluate ggViz through expert-inspired use cases, along with feedback from coaches, analysts and managers from top esports teams.
\end{enumerate}
The rest of the paper is structured as follows. Section~\ref{sec:related-work} reviews relevant literature in sports play retrieval and sports visualization systems. In Section~\ref{sec:play-clustering}, we describe CSGO, its data and our tokenization algorithm. Section~\ref{sec:system} presents the ggViz system, along with the interviews we conducted to develop requirements to guide the system's development. In Section~\ref{sec:evaluation} we conduct an expert study with analysts, coaches and managers from top esports teams and show use cases of ggViz inspired by the experts' workflows. Finally, we describe future work and conclude the paper in Section~\ref{sec:conclusion}.
\section{Related Work} \label{sec:related-work}
\subsection{Play Similarity and Retrieval}
Clustering similar scenarios, plays and trajectories in sports is a challenging but important research direction. As much of the workflow for analysts in both teams and media involves reviewing video, retrieving specific game scenarios can be an arduous process~\cite{clegg2020data}. With the increasing proliferation of tracking data, determining play similarity has become a growing data mining topic, and the interfaces and query mechanisms to search databases of plays have developed alongside it as a growing visualization and human-computer interaction research space. Game state retrieval was first applied to chess, where Ganguly~et~al.~proposed a similarity metric between two chess game states~\cite{DBLP:conf/sigir/GangulyLJ14}. Additionally, sketch based querying has gained traction in other fields, such as time series search~\cite{siddiqui2020shapesearch, mannino2018expressive}, image retrieval~\cite{zhang2016sketch} and user interface discovery~\cite{huang2019swire}. With newly emerging data, these efforts can be extended to esports~\cite{perin2018state}.
Shao~et~al.~first introduced a system for searching trajectory data in soccer by means of an interactive search interface that allows a user to sketch trajectories of interest~\cite{shao2016visual}. To find similar plays, the authors propose a multi-stage approach. The first stage involves a coarse filtering step, where trajectories are filtered out if they differ from the queried trajectory in start position, end position, or length. Additionally, trajectories that are outside of the bounding box of the queried trajectory are discarded. In the second step, each candidate is assigned a feature vector according to the discrete portions of the field that the trajectory passes through. Then, each trajectory is assigned a ``direction'' label, which corresponds to the direction that the trajectory travels. Candidate trajectories are ranked by edit distance to the queried trajectory. Users of their system can draw trajectories on a soccer pitch to query for similar trajectories. Stein~et~al.~expand upon the previous work by considering more context around the play, such as opposing player trajectories and actions during the play, while allowing a user to enforce more constraints via the visual analytics system~\cite{stein2019tackling}. Their visual interface includes the ability for a user to query player movement and on-ball events by sketching.
Sha~et~al.~(2016) introduced \textit{Chalkboarding}, a query paradigm and retrieval system which allows users to draw plays of interest in basketball~\cite{sha2016chalkboarding}. Chalkboarding aligns plays according to a template matching scheme which assigns roles to players. Each play is also assigned a cluster based on ball movement during the play. To facilitate a quick search speed, the queried play is only considered against those in the same cluster. The authors conducted a user study which showed great improvements over a keyword-based play search system. They further improved upon this work by proposing a tree-based method to perform multi-agent alignment~\cite{sha2017fine}.
Sha~et~al.~(2018)~showed how an interface for Chalkboarding can be used for broadcast purposes~\cite{sha2018interactive}. Their system allowed a user to draw trajectories on a video feed of the game. Additionally, their system allows for interactive analytics. For example, their system allows the user to move players around and observe how statistics change, such as expected points. Di~et~al.~extended upon Chalkboarding by learning a user-specific model that ranks play search results based on search result click behavior~\cite{di2018large}.
While various filtering and discretization steps constituted the first play and trajectory retrieval systems, other methods drawing from topic-modeling and deep learning have emerged. Miller~et~al.~drew upon topic-modeling by generating a ``vocabulary'' of player actions~\cite{miller2017possession}. Each offensive possession in a basketball game is then composed of these actions. The authors adapt Latent Dirichlet Allocation to treat each possession as a ``document'', consisting of multiple strategies (topics) and each action as a ``word''. Wang~et~al.~presented a deep learning approach to play clustering, dubbed \textit{play2vec}~\cite{wang2019effective}. Their method breaks plays into smaller segments and finds low dimensional representations for the play segments using a Denoising Sequential Encoder-Decoder (DSED) framework.
The use of sketch-based retrieval is becoming an important query paradigm in sports play retrieval, as it lends itself to a more familiar query formulation for less technical users. Probst~et~al.~presented \textit{SportsSense}, a sports play retrieval system that allows users to sketch plays of interest~\cite{probst2018sportsense}. SportsSense was built for sports video analysts and supports spatio-temporal queries for players, the ball and interactions between players. Specifically, SportsSense returns videos of drawn plays, which enhances the video search process. Play similarity is calculated by finding trajectories within similar spatial bounds, and similar starting and end positions~\cite{al2013towards}. Richly~et~al.~also provided another system which mimics a tactical board to offer a graphical query interface for soccer plays~\cite{richly2018leveraging}.
Along with more sophisticated approaches, keywords are a natural choice to power situational or play similarity. For example, a simple system could assign multiple keywords to each game situation or play. A user could then query the data by searching for sets of keywords, and each retrieved game scenario could be ranked by the number of keyword matches. The main downside to using a keyword-based search is the massive amount of human annotation it requires. Ushiku~et~al.~used game commentary to generate labels to create a keyword-based game state retrieval system~\cite{DBLP:conf/sigir/UshikuMKT17}. While assisted annotation systems, like \textit{HistoryTracker} by Ono~et~al., can shorten annotation times, esports data can be very fine grained~\cite{piazentin2019historytracker}. For example, CSGO matches may contain up to 128 frames per second, which renders human annotation a cumbersome and hard to scale endeavor. Additionally, keyword-based systems have been shown to return results less favorable to users than more sophisticated and automated systems, like Chalkboarding.
While play retrieval has received increasing interest in sports analytics, to the best of our knowledge, there is no current work specific to esports. To a large degree, esports differ from the sports in the aforementioned works, which revolve mostly around basketball and soccer. Esports have a much more decentralized structure than conventional ball-based sports: they lack a ball to serve as a focal point, and players do not have well-defined positions. Accordingly, methods like Chalkboarding, which rely on clustering ball movement, are hard to apply. Finally, since in many esports, such as CSGO, players stay stationary for long periods of time, we may be interested in \textit{states} (positions) rather than \textit{trajectories} (movement).
\subsection{Esports Visualization}
Esports visualization is a nascent field, particularly concerning applications geared towards strategy identification and understanding how players play a game~\cite{KriglsteinMVKRT21, wallner2018introduction}. Most previous esports visualization work considers applications geared towards media production and spectating, or analyzing games themselves. Concerning media production, Block~et~al.~introduce \textit{Echo}, a production tool used to detect extraordinary player performances to translate into audience-facing graphics~\cite{BlockHHSDUDC18}. Echo was developed specifically for Defense of the Ancients 2 (Dota 2). Charleer~et~al.~designed informational dashboards for a variety of games including League of Legends (LoL) and CSGO~\cite{CharleerGGCLV18}. They found that dashboards were helpful for spectators, but there were tradeoffs and considerations when it came to the cognitive load of those spectators. Finally, Kokkinakis~et~al.~propose \textit{Weavr}, a companion app that allows audiences to consume data-driven insights during a match. They found that users had a strong interest in consuming analytical esports content~\cite{KokkinakisDNOPR20}.
Esports visualization efforts directed towards game analysis have thus far revolved around game summarization rather than play retrieval. For example, Wallner drew inspiration from military battle maps to generate visual game summaries for World of Tanks~\cite{Wallner18}. Li~et~al.~developed a visual analytics system to identify specific game occurrences in a game similar to Dota 2~\cite{LiXCWWQM17}. They extended this work by incorporating machine learning based models to recommend interesting portions of a match~\cite{LiWXQM18}. Ahmad~et~al.~present \textit{Interactive Behavior Analytics} (IBA), a methodology to model player behavior in Dota 2~\cite{AhmadBKTNE19}. Their framework consists of a playback and labeling visualization system which can generate labeled data, which then feeds the Glyph visualization system to show player and team behaviors. Goncalves~et~al. and Afonso~et~al.~present VisuaLeague I and VisuaLeague II, respectively~\cite{GoncalvesV0CM18, Afonso19}. These systems, designed for LoL analysis, provide match playback capabilities. They found that users were receptive to animated 2D replays. Recently, Weixelbaum and Matkovic introduce Rumble Flow++, a visual analytics system used to explore Dota 2 games by visualizing interactions between players using graphs~\cite{weixelbaum2021rumble}. We build upon the above literature by focusing on CSGO -- one of the most popular and oldest esports. In fact, CSGO is the most played game on Steam, a popular video gaming platform, with over 800,000 peak players at time of writing. Additionally, we focus on play retrieval, rather than match summarization. Finally, compared to prior works which focus on fans and players, our user study is directed towards professional esports individuals, such as coaches and analysts.
\section{Play Clustering} \label{sec:play-clustering}
\subsection{Counter-Strike Description} \label{sec:csgo-description}
Counter-Strike: Global Offensive (CSGO) is a popular first-person shooter (FPS) video game where two teams of five players compete across a variety of objectives. A professional game takes place over one or more \textit{maps}, which are distinct virtual worlds. Most professional CSGO games are structured as a best of three maps. There is a pool of seven maps for competitive play, and the maps played in a game are determined through a picking and banning process that both teams take part in before the game. On each map, teams are initially assigned a side, either the Terrorists (T) or the Counter-Terrorists (CT), play 15 rounds as their assigned side, and then switch sides for the remaining rounds. A team wins a map by winning 16 rounds.
Each side can win a round by completing one of a variety of objectives. Both sides win a round if they eliminate all members of the opposing side. The T side can also win by planting and exploding the bomb at one of the two bombsites on a map, denoted A or B. At the start of each round, one T player is assigned the bomb, which can only be planted at one of the two bombsites. Once planted, the bomb explodes in 35 seconds unless defused by the CT side. If the CT side successfully defuses the bomb, they win the round. If no win condition is met by the end of the round time (roughly two minutes), the CT side wins by default. Players start each round with 100 health points (HP) and are eliminated from a round when they reach 0 HP. Players lose HP when they are damaged -- typically by gunfire and grenades from the opposing side. Players buy equipment, such as guns, grenades (also called utility) and armor, at the beginning of a round, using virtual money earned from doing well in previous rounds.
\subsection{CSGO Data} \label{sec:csgo-data}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{imgs/csgo_data_pipeline_lato.pdf}
\caption{Clients (players) provide input through keystrokes and mouse clicks, which changes their local game state. Clients then send these local game states to the game server, which reconciles the client states and sends a global game state back to all clients. While doing so, the server writes the global state to a demo file, which we then parse to generate a JSON containing spatiotemporal information about a match, including player events and trajectories.}
\label{fig:demofile_generation}
\end{figure}
CSGO games take place in a client-server architecture. The clients (players) connect to a game server where they play their games. Client inputs, such as keyboard presses and mouse clicks, are recorded by the server and resolved across all clients every \textit{tick}. In competitive play, there are 128 ticks in a second, meaning a tick is roughly 8 milliseconds. Additionally, server side changes, such as round starts and ends, or bomb explosions, are also resolved across all clients. We show how these events are generated, and their parsed output, in Figure~\ref{fig:demofile_generation}.
A game can be recorded by the server through the use of a \textit{demo file} (\texttt{.dem}). The demo file contains a serialization of all client-server communications. When a demo file is recording, it writes data in tick bursts, which contain around eight ticks~\cite{bednarek2017data}. While a demo file can be recorded by a client, we focus solely on server-recorded demo files, as they tend to be the predominant source of publicly available demo files. Server-recorded demo files are oftentimes released by tournaments and competition organizers on a variety of community sites. Additionally, players can access a service called \textit{GOTV} in-game, which broadcasts real-time games through streaming demo files as they are written by the server.
Since the demo files are effectively unstructured log files, it is important to impose a data structure on them to better facilitate data analysis. To do so, we use a demo file parser that translates demo files into an interpretable JSON format. The parser converts the game into a series of consecutive \textit{game states}. Let $G_{r,t}$ be the game state in round $r$ at time $t$. Each game state contains the complete set of information about the world at time $t$, such as player locations, health, armor and grenade positions.
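For illustration, a single parsed game state might be represented with a structure like the following; the field names are indicative only and do not reproduce the parser's exact JSON schema.
\begin{verbatim}
# Indicative structure of one parsed game state G_{r,t}; the field names are
# illustrative and do not reproduce the parser's exact JSON schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PlayerState:
    name: str
    side: str              # "CT" or "T"
    x: float
    y: float
    z: float
    hp: int
    armor: int
    equipment_value: int

@dataclass
class GameState:
    round_num: int         # round r
    seconds: float         # time t within the round
    players: List[PlayerState] = field(default_factory=list)
    bomb_planted: bool = False
\end{verbatim}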
\subsection{Tokenizing Game States}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{imgs/nav_mesh.pdf}
\caption{Navigation meshes can be constructed as graphs. ``Areas'' are traversable surfaces (nodes), and a collection of adjacent areas forms a ``Place'' (set of nodes). Above, we visualize areas and places for popular CSGO map Inferno.}
\label{fig:map_places}
\end{figure}
Estimating situational similarity is important for a variety of sports use cases. For example, a coach may want to search for video clips in a repository containing thousands of sports plays or a player may want to tabulate their own statistics given a distinct game context. In each case, it is important that queries not only return accurate results, but also do so in a timely manner. Accurate results are especially important in a field like sports analytics, where many users, like coaches and players, may be new to working with data or model outputs, and have little experience understanding faulty output. Speed is important as we seek to improve upon the slow game review and play retrieval process. Since player locations are a large component of a game scenario, a retrieval method which can efficiently represent player locations is crucial to powering visual analytics systems which query game scenarios.
Hashing, central to many existing play clustering methods, can be used to provide quick lookup for similar scenarios. If a state can be assigned to a cluster quickly, the search space can be markedly reduced. In other sports, plays and game states may be clustered by applying clustering algorithms, such as k-means (used in Chalkboarding~\cite{sha2016chalkboarding}), on ball movement. However, as CSGO, along with most other esports, is not a ball based sport, such techniques, like those in Chalkboarding, cannot carry over. Furthermore, clusters can greatly be affected by the choice of parameters for the clustering algorithm, and may not be interpretable. To solve the problem of quickly generating interpretable game state representations, we turn to computer-controlled \textit{bot} movement in video games.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{imgs/token_pipeline_lato.pdf}
\caption{Our retrieval system consists of a preprocessing and retrieval step. In the preprocessing step, game states are assigned a token based on coarse player locations. This token is stored as an indexed field in a database to facilitate quick lookup. To query similar game states, a sketch containing player coordinates are translated into a token representing player locations, and then the system searches the database for game states with the same token. Candidate game states are then ranked and displayed to the user.}
\label{fig:retrieval_system}
\end{figure}
A bot is a computer-controlled player which can play against real players, and CSGO provides the functionality for players to play against bots. To move around the virtual world, bots use \textit{navigation meshes}, which provide information on all traversable surfaces of the virtual world~\cite{snook2000simplified, navigation}. In CSGO, each surface in a map's navigation mesh, also called an \textit{area}, is assigned a unique numerical identifier. Each surface also belongs to a human-interpretable \textit{place} identifier, which is composed of many areas. Each place has a name corresponding to a predefined region on the map, like ``Bombsite A'' or ``CT Spawn''. We show examples of areas and places extracted from the navigation mesh of the popular CSGO map Inferno in Figure~\ref{fig:map_places}. Effectively, the navigation mesh is a graph describing the connections between discrete parts of a map. This discretization of a playing space is not only found in video games, but also in conventional sports. For example, some data providers discretize the field into zones to tabulate player fielding statistics. In soccer, it is common for coaches to discretize the pitch into different ``channels'' when talking about passing and formations.
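As a simplified sketch of how a player coordinate can be resolved to a place, suppose each navigation mesh area is stored as an axis-aligned box together with its place label; the real \texttt{.nav} format is more involved, so the structure below is illustrative only.
\begin{verbatim}
# Simplified place lookup: each navigation-mesh area is assumed to be an
# axis-aligned box carrying a place label. Illustrative only; the actual
# CSGO navigation mesh format is more involved.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class NavArea:
    area_id: int
    place: str       # e.g. "BombsiteA", "CTSpawn"
    x_min: float
    x_max: float
    y_min: float
    y_max: float

def find_place(x: float, y: float, areas: List[NavArea]) -> Optional[str]:
    """Return the place name of the first area containing (x, y), if any."""
    for a in areas:
        if a.x_min <= x <= a.x_max and a.y_min <= y <= a.y_max:
            return a.place
    return None
\end{verbatim}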
We present our game state retrieval framework in Figure~\ref{fig:retrieval_system}. To facilitate quick retrieval, we draw from previous play clustering research, which utilizes hashing of similar plays, as well as from CSGO spatial data characteristics, particularly the navigation mesh described above. Our system has the following advantages: (1) the representation of spatial player information is easily interpretable, (2) little data needs to be stored beyond three tokens for each game state, and (3) performance easily scales to millions of game states. Furthermore, our method is not beholden to navigation meshes to discretize a playing surface. As it only relies on \textit{some} discretization of a playing surface, our method is sport-agnostic and can easily be adopted by other esports and conventional sports. One important distinction from previous work on play retrieval is that prior systems focused mostly on trajectory similarity. As we see in Section~\ref{sec:system}, users in esports are often searching for specific \textit{scenarios} (game states), rather than trajectories, since in CSGO players are often stationary.
The retrieval process starts with curating a large set of CSGO game states. Each game state includes precise player positions in the form of $(X, Y, Z)$ coordinates. From these player coordinates, we can determine where the player is located in the navigation mesh. Place names are human-interpretable and generally correspond to what players call that portion of the map. For each state, we assign a token which is simply a count of the number of players of each side in each place. The token is represented as a string of the form $0001000200...$, where the non-zero entries indicate indices of places where there is at least one alive player of a particular side. We calculate a token for each of the two sides, T and CT. To create a single token for the overall state, we simply concatenate the two sides' tokens. We show an example of the token creation process in Figure~\ref{fig:play_tokenization}. Each token is indexed in a database for fast lookup. The system can support querying player positions for a single side, as opposed to both Terrorists and Counter-Terrorists, since the intermediate tokens are saved. Furthermore, our construction allows for queries on portions of a token, such as queries to find states where there are X players in place Y.
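The tokenization step itself reduces to counting players per side and place; the sketch below assumes a fixed, illustrative ordering of place names, whereas in practice the ordering comes from the map's navigation mesh.
\begin{verbatim}
# Sketch of game-state tokenization: count alive players of each side in
# each place and concatenate the two per-side count strings. The place list
# and its ordering are illustrative; in practice they come from the map's
# navigation mesh.
PLACES = ["BombsiteA", "BombsiteB", "Banana", "CTSpawn", "TSpawn", "Middle"]

def side_token(player_places, side):
    """player_places: list of (side, place) pairs for alive players."""
    counts = [0] * len(PLACES)
    for s, place in player_places:
        if s == side and place in PLACES:
            counts[PLACES.index(place)] += 1
    return "".join(str(c) for c in counts)

def state_token(player_places):
    return side_token(player_places, "CT") + side_token(player_places, "T")

# Two CTs on Bombsite B and one T in Banana:
# CT token "020000", T token "001000", full token "020000001000".
print(state_token([("CT", "BombsiteB"), ("CT", "BombsiteB"), ("T", "Banana")]))
\end{verbatim}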
\begin{figure}
\centering
\includegraphics[width=\linewidth]{imgs/tokenization_algorithm.pdf}
\caption{Our tokenization process for a game state on the \textit{Inferno} map. Each colored area on the map represents a distinct place. Orange represents the Terrorist side and blue represents the Counter-Terrorists. Place names are annotated under their position in the token string. Player counts are tabulated per side on each place and concatenate into a string.}
\label{fig:play_tokenization}
\end{figure}
To query a large set of CSGO game states, a user can provide a list of player coordinates for both the Terrorists and Counter-Terrorists. Providing these player coordinates can be accomplished via a visual interface where users can draw game states of interest. These coordinates are then translated into their corresponding places, and from the list of places, the system generates a token. Then, using this token, the system queries the database using the index of tokens to find similar states. The number of candidate states can be further reduced by imposing non-spatial constraints, such as those on equipment value, number of grenades or specific teams. One can also construct other metrics to calculate intra-token similarity and rank returned game states. For example, given two game states $G_1$ and $G_2$, one could find, for each player in $G_1$, the distance to the closest player of the same side in $G_2$. Then, the total distance between $G_1$ and $G_2$ can be the sum of these distances. This method clearly becomes intractable for queries which return large sets, or for searching over the entire database of game states, but it may be suitable for smaller sets of game states. To calculate inter-token similarity, for situations where the generated query token does not exist in the database, we can consider a modified Hamming distance, where the distance between two positions in the token strings is the absolute difference of their values.
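The inter-token distance described above can be written compactly; the sketch below assumes the two tokens are equal-length strings built over the same place ordering.
\begin{verbatim}
# Modified Hamming distance between two equal-length state tokens: positions
# are compared by the absolute difference of their player counts rather than
# by a 0/1 mismatch.
def token_distance(token_a: str, token_b: str) -> int:
    assert len(token_a) == len(token_b), "tokens must use the same place ordering"
    return sum(abs(int(a) - int(b)) for a, b in zip(token_a, token_b))

# One CT moving from the second place to the third yields a distance of 2.
print(token_distance("020000001000", "011000001000"))
\end{verbatim}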
\section{ggViz} \label{sec:system}
\subsection{Domain Requirements} \label{sec:domain-req}
To guide the development of our visual analytics system, we conducted interviews with primary esports stakeholders, including coaches, analysts and managers from professional CSGO teams. A1 and A2 are analysts, C1 is a coach and M1 is a manager. Each participant is employed by a professional esports organization, and has at least three years of experience in esports (median of five years). At the time of writing, A1, A2 and M1 were part of top 10 ranked teams, and C1 was part of a top 50 ranked team. We asked each expert about how they review game replays, what tasks they try to achieve and, what pain points they have.
A1 said that his organization's coach, analyst and in-game leader (the player who directs other players over the course of a match), typically prepare for matches by reviewing recent demo files of their opponents on the maps which they are most likely to play. This general workflow was also corroborated by the other professional esports stakeholders. Specifically, A1 said he looks for setups of a team under different parameters, such as how teams defend given specific equipment values or with different grenade loadouts. C1 said that their team's workflow revolved around trying to summarize setup tendencies of teams under different circumstances. For example, answering questions like ``given that a team does not have much money, how do they defend a bombsite?'' is especially important.
A1 and M1 used the in-game replay viewer to review demo files. Additionally, all interviewees also used industry-standard demo file viewers, but each agreed that there is still much work needed to make these viewers complete. These viewers have intuitive interfaces to navigate demo files, but lack any retrieval functionality, meaning users still have to watch demo files manually to find the scenarios they want. While these third-party tools are easier to use than the in-game viewer, they are often not as visually faithful to the actual game. In fact, A2 said that his team eschewed the in-game viewer in favor of third-party tools and plugins. Because neither the in-game nor the industry demo file viewers can search for similar plays, each stakeholder remarked that reviewing demos can be an arduous and slow process, which prohibits any sort of large scale analysis.
Specifically, A1, like A2, mentioned that the hardest part of the scouting process was going through demo files and finding specific instances to clip or screenshot as demonstration to players and coaches. Furthermore, A1 mentioned that he seeks to find specific game scenarios ``all the time'', and that a tool to do so would be incredibly beneficial to his workload. Specifically, A2 said that sharing points of a match can be difficult when using demo file viewers, and that 2D reconstructions were sometimes easier to share. M1 said that the ``tools aren't there to find specific spatial setups'', and for that reason, demo analysis can become quite broad and be overloading for both analysts and players. C1 said that the hardest part of scouting and game review was ``finding certain situations that are relevant to you, as you do not have the time to look through ten demo files to find enough examples''.
These responses suggest that a system which can return accurate spatial setups quickly could greatly improve these esports professionals' workloads. Furthermore, existing sports play retrieval systems revolve around retrieving \textit{trajectories}, whereas A1, C1 and M1 each added that they were instead seeking to search for particular defensive (CT) positions rather than movement patterns. This is because teams often think about CSGO in terms of attacking and defending bombsites, and teams defend bombsites without much player movement.
From these interviews, we develop the following requirements:
\begin{enumerate}[start=1,label={\bfseries R\arabic*}]
\item \label{req:query_context} \textit{Query game scenarios by context.} In the CSGO game review process, a user primarily watches matches with the goal of discovering \textit{strategies} that occur under specific \textit{conditions}. One of the strategies users are most interested in is understanding how teams defend bombsites. In general, strategies are represented spatially through player locations. Concerning game conditions, users want to query for strategies in specific equipment level environments, such as when teams have good equipment or bad equipment. Furthermore, most game review is done with a specific opponent in mind, meaning our system must allow a user to query matches of a particular team. The proposed design must also allow a user to query by player location and game conditions.
\item \label{req:result_metadata} \textit{Visualize game state metadata.} The game scenario search space is large: one map has at minimum 16 rounds, teams often play three maps a match, meaning a single match can last multiple hours, and teams can play multiple times a week. Accordingly, users want to identify especially interesting scenarios without having to watch each scenario from the result set. Thus, the proposed design should not only return results quickly, but also allow a user to easily identify interesting game scenarios returned by the query system.
\item \label{req:playback} \textit{Playback for player trajectories and events.} Watching a demo file is important as doing so helps users gain the full context of a state in a given round. The proposed design should provide the ability for users to playback the round in 2D, since manually watching a game is a fundamental component of game review. Additionally, player actions, like grenades and kills, should also be visualized in the playback, since they influence a situation's context.
\item \label{req:heatmap} \textit{Summarize query results.} Stakeholders are interested in summarizing tendencies of teams for presentation to other stakeholders, but especially players. For example, C1 often displayed visuals and statistics to his players before games to identify the tendencies of the opposition. The system should be able to provide an intuitive visual summary that describes a team's strategy in a given context.
\end{enumerate}
\subsection{System Design}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{imgs/ggviz_system_large.pdf}
\caption{ggViz system querying a game scenario. Users can query for game scenarios in the query view (A) by sketching player positions (A1) and applying team or round filters (A2). After querying, the result view (B) shows a list of similar game scenarios. Users can navigate by looking at icons for how the round ended, or by observing win probability charts (B1). Users can also summarize the result set by viewing heatmaps of common CT/T positions for scenarios in the result set (C). Finally, users can play back the rounds that the returned game scenarios occur in through the playback view (D). The playback view shows both player trajectories (D1) and important round metadata, such as grenade usage, kills and win probability (D2).}
\label{fig:ggviz_system}
\end{figure*}
To fulfill the requirements identified by the interviews with esports experts, we developed the ggViz system, named after the popular gaming phrase ``good game (gg)''. In Figure~\ref{fig:ggviz_system}, we see ggViz applied to a retrieval problem on the popular CSGO map \textit{Inferno}. ggViz contains three components: (1) the \textit{query view}, to design a query across strategies and contexts, (2) the \textit{result view}, to observe the query result set and each element's associated metadata, and (3) the \textit{playback view}, to replay the round in which the game scenario occurred and identify key points of the round.
\subsubsection{Query View}
In \boxed{\textbf{A}} we see the query view, where a user can design a query consisting of player locations and game context to find similar game scenarios (\ref{req:query_context}). Drawing from previous sports retrieval systems, we utilize a drawing interface, whereby a user can mark player positions by clicking on a map. The user differentiates between T and CT sided players through sliders to indicate side. Additionally, a user can issue two kinds of spatial queries: (1) a full query, where every player is drawn or (2) a partial query, where only a subset of players are drawn. The full query is useful for situations where users want to identify how often a team utilizes certain global strategies, which is similar to the idea of scouting a team's ``formation'' in soccer. The partial query is useful for situations when a user wants to analyze a partial region of the map, such as a bombsite. A user can mark if they want to perform a full or partial query through a slider. Once a player is placed on the map, the user can manipulate the placed player's position, which facilitates easy editing of game states. Such functionality also allows for rapid turnaround when a user tries to answer ``what if'' questions, such as what happens when a player assumes a different position on the map.
Below the spatial query view, a user can select from a variety of filters. These filters were motivated by conversations with our expert users, who often found themselves searching and filtering data on specific teams, equipment values and the number of grenades on each team. The team filter is particularly important for targeted analysis, as we gathered from our experts that most game analysis is done with a particular opposition in mind. Furthermore, much of the analysis process is also spent understanding how teams operate under specific \textit{buy types}. Buy types are determined by the aggregate equipment value of a team. For example, if a team is saving money for a round and lacks the best equipment, they are described as being in an ``eco'' or ``semi buy'' situation. Alternatively, if a team starts the round with the best equipment, they are said to be in a ``full buy'' situation.
\subsubsection{Result View}
After a query is designed and executed from the query view, ggViz returns the set of results in \boxed{\textbf{B}}, the result view. Each element in the query's result set is displayed as a card which contains relevant information on that game scenario (\ref{req:result_metadata}), such as the match date, competition name, round score and buy types, and how the round ended.
As the result set can be large for certain queries, users need a way to navigate through it to find interesting rounds. One approach to identifying rounds with interesting moments is through analyzing \textit{win probability}. Win probability is an estimate of a side's chance of winning a given round. Effectively, we are estimating $P(Y_r = 1 | G_{r,t})$, where $Y_r$ is the outcome of round $r$ (1 if the CT side wins, 0 otherwise), and $G_{r,t}$ is the state of the game in round $r$ at time $t$. In order to estimate this probability, we can learn some function $f(G_{r,t}) = \widehat{Y}_{r,t}$, where $\widehat{Y}_{r,t}$ is the estimated outcome of round $r$ given the state at time $t$. To learn $f$, we use the methodology in \cite{XenopoulosWPACSGO} to train an XGBoost model which predicts a side's win probability at time $t$ given $G_{r,t}$.
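As a minimal sketch of fitting such a model, assume the game states have already been summarized into a numeric feature table; the feature names below are synthetic placeholders, and the model in \cite{XenopoulosWPACSGO} uses a richer game state representation.
\begin{verbatim}
# Minimal sketch of a win-probability model f(G_{r,t}) -> P(CT wins round r).
# The features here are synthetic placeholders; the actual model uses a
# richer game-state representation.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((5000, 4))                 # e.g. ct_alive, t_alive, ct_equip, time_left
y = (rng.random(5000) < 0.5).astype(int)  # 1 if the CT side won the round

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)
ct_win_prob = model.predict_proba(X_te)[:, 1]   # estimated P(Y_r = 1 | G_{r,t})
\end{verbatim}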
In each result view card, we plot the win probability over the course of the round in which the game scenario occurred as a line chart. CT win probability is blue, and T win probability is orange. We indicate where in the round the game scenario happened with a small circle at the top of the line chart. Finally, we also mark important events in the round, such as the bomb plant (black circle), which distinguishes frames that are ``pre-plant'' from those that are ``post-plant'', since teams care about identifying ``post-plant'' situations. Essentially, the win probability chart gives the overall story of a round and shows where the retrieved scenario fits in.
After the result view populates, there is an option to summarize the result set through a heatmap, which a user can select in the upper left corner of ggViz. To construct the heatmap, we calculate the density of one side's player locations in all game scenarios in the result set. Users can select whether they want to view the CT or the T position heatmap. By viewing the heatmap, users can identify common spots in which teams position themselves (\ref{req:heatmap}). Furthermore, by adjusting the filters in the query view, users can create different visualizations as content for other downstream stakeholders, such as players receiving a pre-game presentation from a coach.
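The density underlying the heatmap can be obtained with a simple 2D histogram over the player coordinates returned by the query; the sketch below omits kernel smoothing and the map overlay.
\begin{verbatim}
# Sketch of the position heatmap: a 2D histogram of one side's player
# coordinates across all returned game states (smoothing and the map
# overlay are omitted).
import numpy as np
import matplotlib.pyplot as plt

def position_heatmap(xs, ys, bins=50):
    density, x_edges, y_edges = np.histogram2d(xs, ys, bins=bins)
    plt.imshow(density.T, origin="lower", cmap="hot",
               extent=[x_edges[0], x_edges[-1], y_edges[0], y_edges[-1]])
    plt.colorbar(label="player-position density")
    plt.show()
\end{verbatim}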
\subsubsection{Playback View}
When a user selects a game scenario of interest, the playback view in \boxed{\textbf{D}} populates. The playback view allows a user to view the round in which the game scenario occurred (\ref{req:playback}). Player positions are displayed on the map. Below the map is a slider which one can use to animate the round frame by frame. This feature acts as a quick substitute for video review, which typically takes place by playing back a demo file. Playing a demo file incurs a lengthy setup cost, as doing so typically requires running CSGO and loading the demo file, which greatly hinders quickly switching between rounds in different demo files. However, if a user does want to view the demo file, we display a button to the right of the playback slider which launches the CSGO game to watch it.
By presenting the whole round for playback, a user can accomplish a variety of tasks. For example, a user can look at the game scenarios prior to the selected one to develop a sense of how the selected game scenario came to be. On the other hand, a user can view the part of the round which occurs after the selected state to uncover the result of a particular scenario. Below the slider are tabs to view the round win probability, kills and grenades. In the win probability graph, we show added context for the round the user is reviewing. As major round events, like kills, are highly correlated with win probability~\cite{XenopoulosWPACSGO}, a user is quickly able to visit important round events. In the ``kill'' and ``grenades'' tabs, a user can click to go to the frame where a kill occurred or a grenade was thrown. These specific player events add more context to the 2D view, and can help a user understand the circumstances of a given round.
\subsection{Implementation}
Early play retrieval systems, such as Chalkboarding, relied on non-web based interfaces. However, web-based applications are now ubiquitous, are able to support data-intensive workloads, such as play retrieval, and are easy to access for a wide array of users. As such, we prioritized ggViz to be accessible via a web-based client. ggViz is implemented with a client-server architecture. The client is a web application built with Angular, which renders the spatial query, similar state and round playback views. Using Flask, we develop an API which fulfills client requests, and in particular implements the retrieval system described in Figure~\ref{fig:retrieval_system}. The API interacts with a SQL database that contains roughly 100 GB of data on 1,600 professional matches occurring from April 1st, 2020 to August 31st, 2020. This corresponds to over 10 million game states and over 100 million player locations. Game states are stored in a clustered SQL index, where the index is created on $(Map, Token)$ combinations. We acquired the demo files from the popular Counter-Strike website HLTV.org. We parsed each demo file using the demo parser from~\cite{XenopoulosWPACSGO} at a frequency of one state per second.
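To illustrate the pattern of serving indexed token lookups through a small API, the sketch below uses SQLite and Flask; the table, column and route names are hypothetical placeholders and do not reproduce ggViz's actual schema or endpoints.
\begin{verbatim}
# Hypothetical endpoint illustrating an indexed (map, token) lookup; the
# table, column and route names are placeholders, not ggViz's actual
# schema or API.
import sqlite3
from flask import Flask, jsonify, request

app = Flask(__name__)

SCHEMA = """
CREATE TABLE IF NOT EXISTS game_states (
    map TEXT, token TEXT, match_id TEXT, round_num INTEGER, seconds REAL
);
CREATE INDEX IF NOT EXISTS idx_map_token ON game_states (map, token);
"""

@app.route("/states")
def states():
    map_name = request.args.get("map")
    token = request.args.get("token")
    con = sqlite3.connect("ggviz_demo.db")
    con.executescript(SCHEMA)
    rows = con.execute(
        "SELECT match_id, round_num, seconds FROM game_states "
        "WHERE map = ? AND token = ?",
        (map_name, token),
    ).fetchall()
    con.close()
    return jsonify([{"match_id": m, "round": r, "seconds": s} for m, r, s in rows])
\end{verbatim}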
Both the client and server architectures are modular. The visual components, such as the playback view or query interface, can easily be extended to other systems. This is important for future systems, as most web-based visual analytics applications in esports will ostensibly require some 2D playback or map-based view. Additionally, our API can broadly support many CSGO specific applications, and can easily be extended by teams, organizations and players. For example, our API contains endpoints to retrieve event data, such as grenades, kills and damages, or player-tracking data. These endpoints can be used to retrieve data to power machine learning models or other visual analytics systems.
\section{Evaluation} \label{sec:evaluation}
To evaluate the usefulness of ggViz in esports analytics, we conduct expert interviews, where we showcased ggViz to staff from professional esports teams and recorded their feedback. Additionally, we showcase two use cases of ggViz, which were identified as routine game analysis tasks by CSGO experts.
\subsection{Expert Interviews}
To evaluate ggViz, we held ``think aloud'' interviews with the four esports experts mentioned in the pre-design interviews in Section~\ref{sec:domain-req}, as well as an additional analyst, denoted A3. A3 was part of the same esports organization as C1. The experts were free to use ggViz in whichever way they wanted, and while doing so, they were encouraged to speak about their thought process. During each interview, which lasted an hour, we recorded the expert's thoughts on the system and took note of how the expert used ggViz.
\subsubsection{Filtering Scenarios}
Each expert's first actions with ggViz were to confirm tendencies for their own team. However, before drawing scenarios of interest, each user first applied the filters below the map query view. Of particular note were the team, round equipment and grenade count filters. Almost all queries that the users issued used these filters, which is consistent with the conclusions from the pre-design interviews. Before each query, each expert would vocalize what situation they wanted to find. In every instance, each expert referenced the ``buy type'' of a particular side, which is a rough hierarchy of equipment values. For example, an ``eco'' round is one where a team spends little to no money for a round in an effort to save money, and a ``full buy'' is one where a team acquires the best guns available. Our filters used the same nomenclature, which the experts appreciated. Overall, A2 remarked that the filtering was quite extensive and was satisfied that the filters corresponded to his team's general analysis workflows, saying ``the filters are exactly what we need''.
\subsubsection{Drawing Game Scenarios}
After selecting the filters, each expert would then draw the game state, which normally took under 10 seconds. All participants found the interface intuitive, as the map view is familiar to anybody who has played the game. A1 mentioned that such an interface is ``super useful for coaches and for match preparation''. While ggViz allows a user to specify both CT and T players, most experts issued either partial queries or full queries on a specific side. M1 and C1 said that they especially appreciated the ability to perform single-sided and partial queries, as a large part of their workflows involves seeing how specific teams spatially distribute themselves on the CT side, irrespective of how the T side is distributed. After we later explained the retrieval process to the experts, their drawing speed increased, since each expert understood that their game state drawings need not be exact, and they spent less time placing players in exact locations. We also observed A1 and A2 trying to understand the retrieval methodology further by placing players near the borders of navigation mesh surfaces and seeing how the result set changed.
\subsubsection{Viewing Retrieval Results}
The retrieved game states, along with the speed of the retrieval, were well received. M1 said that the similar plays looked ``great, they correspond to what I'm looking for''. Additionally, M1 mentioned that the query speed was well-suited to delivering fast turnaround for his workload. A2 remarked that the retrieval was ``fluid and fast, I don't have to wait forever to find what I want'' and that the retrieved game states made sense given all of his queries. C1 said ``I drew the exact setups I know my team uses, and I can see all of them within seconds. The returned states are excellent''. A3 remarked that the system is ``very accurate and returns the game states I expect, and at the point in the round I expect them to occur''. The amount of time the experts spent analyzing the result set was variable. For example, C1 would quickly look at a few rounds (two to four), since his workflow was trying to understand how a team's CT sets up. C1 found the heatmap feature useful as well, since he could use these heatmaps to send to players. Conversely, A1 would spend a few minutes on a single game scenario by playing back the round's trajectories multiple times and jumping to different points in the round in order to understand the specific events of that round. Experts found that the win probability graph was useful in finding important points in a round as it clearly identified kills and deaths through large increases or decreases in win probability.
\subsubsection{Exploring Tactics On a Large Scale}
While team filters were present on almost all queries, A2 started to use the tool without a team filter towards the end of his interview. He then suggested that although ggViz's direct use case is game review, it could also be used for large scale analysis, since the database of games is large and the retrieval speed is fast. A2 mentioned that most coaches and analysts in the CSGO community watch demos of their own team or their upcoming opposition, and thus lack the time to watch replays of other top teams in depth; ggViz could allow them to learn from other teams much faster than going through individual demo files.
\subsubsection{Expert Feedback}
The feedback from the participants was very positive, and each of them was excited to use the tool in depth and to suggest new features for future work. A2 said that the system solved his problem of finding specific game setups and contexts. C1 stated that he saw potential for ggViz to be ``more valuable than the other tools out there''. A2, C1 and M1 expressed interest in future versions of ggViz including more analytics, such as assigning values to player actions. One option is to assign values using the change in win probability, as is done in~\cite{XenopoulosWPACSGO}. Most of the negative feedback on ggViz concerned cosmetic details in the round playback view, such as the directions players were facing or animations for grenades. One limitation of ggViz is dealing with height. Some maps in CSGO have buildings with multiple floors, and in situations like these, a single 2D view may be confusing. To address this, we can use multiple maps to display locations, where each map visualizes a certain height level. For example, in CSGO there already exist ``upper'' and ``lower'' images for Nuke and Vertigo, two maps with large height differences.
\subsection{Analyzing Retakes}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{imgs/retakes_ggviz.pdf}
\caption{S1, S2 and S3 detail three possible retake strategies for the Counter-Terrorists in a 2 CT versus 3 T scenario for the B bombsite on the \textit{Inferno} map. As shown in Table~\ref{tab:retake-wins}, retakes originating from ``Banana'' have a lower success rate than those from ``CT''. The banana and CT areas are labeled in S1.}
\label{fig:retake-examples}
\end{figure}
Understanding how a given game state affects subsequent game states is an important part of match analysis. C1 said that the use case for which he found ggViz most useful was analyzing specific CT defensive setups and retake scenarios. A \textit{retake} is a common maneuver in CSGO, whereby the CT side attempts to regain control of a bombsite where the T team has planted the bomb. In fact, A1 also mentioned that understanding a team's retake strategy is a key game preparation point. ggViz is uniquely positioned to analyze retakes, and how teams play them, as one can identify not only retake strategies but also retake defense setups. A user can define a given bombsite defense from the T side, and then analyze the retake strategies used against similar defense setups. Even though there is no definitive ``retake'' filter in ggViz, simply placing T players in common retake defense places, and enforcing a bomb exploded or defused filter, ensures that the returned states are mostly retakes.
Let's consider a case where a user is interested in retake strategies for a two CT versus three T situation for bombsite B on the map \textit{de\_inferno}. We will assume that all three T players are defending from inside bombsite B, which is a common strategy verified by our experts. A two CT on three T situation is not uncommon, as usually two to three CTs defend a bombsite, so if a T side gains control of a bombsite, it is likely the other CT players were killed in the process. To isolate retake scenarios, we can filter for rounds that ended with the bomb exploding, the CT side being killed or the bomb being defused. In the former two scenarios, the T side wins the round. For the latter scenario, the CT side wins the round. By analyzing successful and unsuccessful retakes, we can parse through example rounds to find successful strategies. In our example, there are two general paths a player can take to retake the B bombsite: the ``banana'' or the ``CT'' path. In Figure~\ref{fig:retake-examples} we show three queried game situations. Because the B bombsite is considered a single place in the navigation mesh, we just place the three T players in the bombsite. In S1, there are two players in the CT path. In S2, there are two players in the banana path. In S3, there is one player in the CT and one player in the banana path.
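Using the illustrative helpers sketched earlier, the S1 query could be composed as a token match plus a round-end filter; the place and round-end names below are placeholders rather than the exact identifiers used by the parser.
\begin{verbatim}
# Sketch of composing retake query S1: three Ts on Bombsite B and two CTs on
# the CT path, restricted to rounds ending in an explosion, a defuse, or the
# CT side being eliminated. Place and round-end names are placeholders.
S1_PLAYERS = [("T", "BombsiteB")] * 3 + [("CT", "CTSpawn")] * 2
S1_TOKEN = state_token(S1_PLAYERS)          # from the earlier tokenization sketch

RETAKE_END_REASONS = {"TargetBombed", "BombDefused", "TerroristsWin"}

def is_s1_retake(state_token_value, round_end_reason):
    return (state_token_value == S1_TOKEN
            and round_end_reason in RETAKE_END_REASONS)
\end{verbatim}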
When a user issues these queries, they can quickly parse through the returned game states and see the outcomes by viewing the round end icons, which are colored by the winning side and represent how the round ended, such as through a bomb explosion, defuse, and so on. We present the summarized round end results for each situation query in Table~\ref{tab:retake-wins}. Although a two CT versus three T retake is already a difficult situation for the CT side to win, it is clear that coming from the banana direction is the least advantageous, and rarely results in a win for the CT side. Additionally, we see that S1 is the most common situation in our database of CSGO games, indicating the community typically uses the highest win rate retake strategy.
\begin{table}[]
\centering
\begin{tabular}{@{}cccc@{}}
\toprule
\textbf{Situation} & \textbf{T Wins} & \textbf{CT Wins} & \textbf{CT Win Rate} \\ \midrule
S1 & 51 & 6 & 12\% \\
S2 & 19 & 1 & 5\% \\
S3 & 7 & 0 & 0\% \\ \bottomrule
\end{tabular}%
\caption{Situations which involve attacks emanating from the banana area (S2 and S3) have a lower win rate than those from the CT area (S1).}
\label{tab:retake-wins}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{imgs/heatmaps.pdf}
\caption{We issue a partial query for two CT players on Inferno's B bombsite. We generate heatmaps for four different buy scenarios, and we provide the callouts for reference. We see that in full buy scenarios, the CT players are more likely to position themselves at 1st/2nd and Dark. In Semi Buys, players are more likely to position themselves closer to Banana. }
\label{fig:heatmap}
\end{figure*}
\subsection{Summarizing Team Setups}
In the previous use case, we demonstrated the use of ggViz for understanding the events that occur \textit{after} a given game state. To do so, one must primarily use the result view and playback view. However, it is also important to summarize the result set itself. For example, a coach may want to create a visualization to share among players that describes how an opposing team defends a specific bombsite.
Consider the query in Figure~\ref{fig:heatmap}. Here, we issue a partial query that places two players on the B bombsite of the map Inferno. Typically, two players defend the B bombsite on the CT side. We are interested in summarizing which positions these two players are likely to play. Furthermore, we are also interested in how these positions differ under different buy conditions, such as a full buy or a semi-buy. This is a common task performed by coaches and analysts to find what they call a team's ``default''.
In Figure~\ref{fig:heatmap}, we see the player-position heatmaps for scenarios with two players on bombsite B under different buy scenarios. We provide the callouts for reference on the right side of the figure. We can see that there are clear positioning distinctions between the different buy types. For example, in full buys, players are more likely to position themselves at 1st/2nd and Dark. In semi-buys, players are likely to play closer to Banana, leaving Dark and 1st/2nd sparsely occupied. Interestingly, in pistol rounds, players take positions deep in the bombsite, such as at Coffins or New Box.
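As a rough sketch of how such summaries can be computed from the returned states, the snippet below bins player coordinates into one 2D histogram per buy type. The field names (\texttt{buy\_type}, \texttt{x}, \texttt{y}) and the coordinate bounds are assumptions for illustration rather than the actual parsed-demo schema.
\begin{verbatim}
import numpy as np
from collections import defaultdict

def positions_by_buy_type(states, bins=64,
                          bounds=((-2500.0, 2500.0), (-2500.0, 2500.0))):
    """Group queried player positions by buy type and bin them into
    2D histograms. `states` is an iterable of dicts with the assumed
    keys 'buy_type', 'x' and 'y'. Returns {buy_type: 2D count array}.
    """
    coords = defaultdict(list)
    for s in states:
        coords[s["buy_type"]].append((s["x"], s["y"]))

    heatmaps = {}
    for buy_type, xy in coords.items():
        xs, ys = zip(*xy)
        # One histogram per buy type over the map's coordinate range.
        hist, _, _ = np.histogram2d(xs, ys, bins=bins, range=bounds)
        heatmaps[buy_type] = hist
    return heatmaps
\end{verbatim}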
\section{Conclusion} \label{sec:conclusion}
This paper presents ggViz, an exploratory visual analytics system to navigate a large corpus of novel spatiotemporal CSGO data. Our system allows users to search for similar game situations through a sketch-based query method. Powering our retrieval system is a fast and performant tokenization algorithm that exploits information on player locations using navigation meshes. ggViz supports a wide variety of queries, including both full and partial spatial queries. Informed through interviews with top CSGO coaches and analysts, we design ggViz to aid game review and tactic discovery. Our evaluation, performed through expert interviews and cases motivated by expert use, suggests that ggViz can greatly enhance the game review and tactic discovery process for esports analysts, coaches and managers by decreasing the time it takes to find interesting game scenarios.
Although we presented ggViz against the backdrop of esports, and in particular CSGO, our backend and frontend are easily extensible to other esports, as demo parsing and navigation-mesh mechanics are largely the same across most esports, including those with large maps, such as PUBG, Fortnite or Warzone. Furthermore, the general concept of discretizing a playing surface is not only common in other sports, but also generally easy to implement. One can also consider non-sports domains in which our tokenization of ``states'' is useful for rapid retrieval and visualization of spatial data. For example, consider retrieving urban data, where data may be available at the spatial resolution of zip codes or neighborhoods, which effectively form a navigation mesh of their own. In this example, the values at different token indices can correspond to commuters or 911 calls in a particular area. Thus, we can encode different ``states'' of a city with a token and search for historical states using these tokens, as sketched below.
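A minimal sketch of this idea, assuming the spatial units and per-unit counts are already available: each ``city state'' becomes a fixed-length count vector over the zones, and exact-match retrieval of historical states reduces to hashing that vector. The record format below is hypothetical.
\begin{verbatim}
from collections import defaultdict

def tokenize_counts(zone_ids, counts):
    """Encode one 'state' (e.g., 911 calls per zone at a given hour)
    as a token. `zone_ids` fixes an ordering of the spatial units
    (zip codes, neighborhoods, or navigation-mesh places); `counts`
    maps zone -> count. The token is a hashable tuple of counts.
    """
    return tuple(counts.get(z, 0) for z in zone_ids)

def build_index(states, zone_ids):
    """Index historical (timestamp, counts) states by token for
    exact-match retrieval."""
    index = defaultdict(list)
    for timestamp, counts in states:
        index[tokenize_counts(zone_ids, counts)].append(timestamp)
    return index
\end{verbatim}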
ggViz as a system contains several avenues for future work. On the algorithmic side, we seek to extend our algorithm to accommodate trajectory search. A simple approach could involve treating periods of the game as ``sentences'', where each game state is a ``word'' with an associated token. Similar trajectories may share similar subsequences, and one could use a metric such as the longest common subsequence to rank candidate states (see the sketch after this paragraph). Another area of interest is determining the ranking of returned results in the game state retrieval process. Di~et~al.\ proposed a pairwise learning-to-rank approach for returned sports play results~\cite{di2018large} and found that users favored results ranked through their approach. One limitation of ggViz is that when a player crosses a place border, the corresponding state token changes, which can be undesirable; however, none of the experts ran into situations where this was an issue. Future work could allow users to provide their own discretization of a map.
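The sketch below illustrates this idea under the stated assumptions: each period of a round is a sequence of state tokens, and candidate rounds are ranked by the length of their longest common subsequence with the query sequence. This is illustrative code, not existing ggViz functionality; the \texttt{candidates} format is assumed.
\begin{verbatim}
def lcs_length(a, b):
    """Length of the longest common subsequence of two token sequences."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            if x == y:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rank_trajectories(query_tokens, candidates, top_k=10):
    """Rank candidate token sequences by LCS length with the query.

    `candidates` is an iterable of (round_id, token_sequence) pairs
    (hypothetical format).
    """
    scored = [(lcs_length(query_tokens, tokens), rid)
              for rid, tokens in candidates]
    scored.sort(reverse=True)
    return [rid for _, rid in scored[:top_k]]
\end{verbatim}
Because the LCS computation is quadratic in sequence length per candidate, a practical system would likely apply it only to a pre-filtered candidate set, e.g., rounds that share at least one state token with the query.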
User feedback suggests that there is demand for a feature that allows a user to annotate frames of interest. Ono~et~al.\ present HistoryTracker, a system to quickly annotate baseball games~\cite{piazentin2019historytracker}. One of the main features of their system is that it allows a user to produce tracking data quickly. Occasional data inconsistencies appeared during the user studies; these result from errors in the demo files and cause some player trajectories to be clearly incorrect. Drawing from HistoryTracker, future renditions of ggViz could include annotation and debugging functionality to tag incorrect trajectories and fix them in place.
\section{Introduction}
Let $d$ be a quasi-metric on a set $\X$, that is, $d$ is a nonnegative function on $\mathcal X\times \mathcal X$ satisfying
\begin{enumerate}[\hskip 0.5cm $\bullet$]
\item $d(x,y)=d(y,x)$,
\item $d(x,y)>0$ if and only if $x\ne y$,
\item there exists a constant $\kappa_1\geq 1$ such that for all $x,y,z\in \mathcal X$,
\begin{equation}\label{the metric constant}
d(x,z)\leq \kappa_1(d(x,y)+ d(y,z)).
\end{equation}
\end{enumerate}
A triple $(\mathcal X, d,\mu)$ is called a {\sl space of homogeneous type} in the sense of Coifman and Weiss if $\mu$ is a regular Borel measure satisfying the {\sl doubling property}, i.e. there exists a constant $\kappa_2 >1$ such that for all $x\in \mathcal X$ and $r>0$,
\begin{equation}\label{the doubling constant}
\mu(B(x,2r))\leq \kappa_2 \mu(B(x,r)).
\end{equation}
In this paper, we always assume that $(\mathcal X, d,\mu)$ is a complete space of homogeneous type, $\mu(\X)=\infty$ and $0<\mu(B)<\infty$ for any ball $B\subset \X$.
Recall (see \cite{CW}) that a function $a$ is called an {\sl $H^1$-atom} related to the ball $B\subset \X$ if
\begin{enumerate}[\hskip 0.5cm $\bullet$]
\item $\supp a\subset B$;
\item $\|a\|_{L^\infty(\X)}\leq \mu(B)^{-1}$;
\item $\int_{\X} a(x) d\mu(x)=0$.
\end{enumerate}
The Hardy space $H^1(\X)$ is defined as the set of all $f=\sum_{j=1}^\infty \lambda_j a_j$, where the $a_j$ are $H^1$-atoms and $\{\lambda_j\}_{j=1}^\infty \subset \mathbb C$ is such that $\sum_{j=1}^\infty |\lambda_j|<\infty$. The norm on $H^1(\X)$ is then defined by
$$\|f\|_{H^1(\X)}:= \inf\left\{\sum_{j=1}^\infty |\lambda_j|: f=\sum_{j=1}^\infty \lambda_j a_j\right\}.$$
It is well known (see \cite{CW}) that the dual space of $H^1(\X)$ is $BMO(\X)$, the space of all locally integrable functions $f$ with
$$\|f\|_{BMO(\X)}:=\sup\limits_{B}\frac{1}{\mu(B)}\int_B \Big|f(x)-\frac{1}{\mu(B)}\int_B f(y) d\mu(y)\Big|d\mu(x)<\infty,$$
where the supremum is taken over all balls $B\subset \X$. Furthermore, $H^1(\X)$ is itself the dual space of $VMO(\X)$, the closure in the $BMO(\X)$ norm of the set $C_c(\X)$ of all continuous functions with compact support.
The aim of the present paper is to establish the following.
\begin{theorem}\label{the main theorem}
Suppose that $\{f_n\}_{n=1}^\infty$ is a bounded sequence in $H^1(\X)$, and that $\lim_{n\to\infty} f_n(x) = f(x)$ for almost every $x\in\X$. Then, $f\in H^1(\X)$ and $\{f_n\}_{n= 1}^\infty$ weak$^*$-converges to $f$, that is, for every $\varphi\in VMO(\X)$, we have
$$\lim_{n\to\infty} \int_{\X} f_n(x) \varphi(x)d\mu(x) = \int_{\X} f(x) \varphi(x) d\mu(x).$$
\end{theorem}
It should be pointed out that, when $\X=\R^d$, Theorem \ref{the main theorem} was first proved by Jones and Journ\'e \cite{JJ}, while answering a question of Lions and Meyer \cite{CLMS}. When $\X$ is a normal space of homogeneous type, Theorem \ref{the main theorem} was established by Grafakos and Rochberg \cite{GR}.
\section{Proof of Theorem \ref{the main theorem}}
Let $M$ be the classical Hardy-Littlewood maximal function. The following result is well-known (see \cite{CW}).
\begin{lemma}\label{the weak type (1,1) for the Hardy-Littlewood function}
There exists a constant $C_1>0$ such that
$$\mu(\{x\in \X: Mg(x)>\lambda\})\leq C_1 \frac{1}{\lambda} \int_{\X} |g(x)| d\mu(x)$$
for all $g\in L^1(\X)$ and $\lambda>0$.
\end{lemma}
By a standard argument (cf. \cite{CR, GR}), we get the following lemma.
\begin{lemma}\label{a lemma of Coifman and Rochberg}
There exists a constant $C_2>0$ such that, for any Borel set $E$,
$$\|\log(M(\chi_E))\|_{BMO(\X)}\leq C_2.$$
\end{lemma}
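For the reader's convenience, we briefly sketch the standard argument behind Lemma \ref{a lemma of Coifman and Rochberg}; full details can be found in \cite{CR, GR}. By the Coifman--Rochberg theorem, which remains valid on spaces of homogeneous type, for each $0<\delta<1$ there is a constant $A_\delta$, depending only on $\delta$, $\kappa_1$ and $\kappa_2$, such that
$$M\big((M(\chi_E))^{\delta}\big)(x)\leq A_\delta\, (M(\chi_E))^{\delta}(x) \quad\mbox{for almost every } x\in\X;$$
that is, $(M(\chi_E))^{\delta}$ is an $A_1$ weight with constant independent of $E$. Since the logarithm of an $A_1$ weight belongs to $BMO(\X)$ with norm controlled by its $A_1$ constant, it follows that $\log(M(\chi_E))=\delta^{-1}\log\big((M(\chi_E))^{\delta}\big)$ lies in $BMO(\X)$ with a bound $C_2$ independent of $E$.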
\begin{proof}[Proof of Theorem \ref{the main theorem}]
Without loss of generality, we may assume that $\|f_n\|_{H^1(\X)}\leq 1$ for all $n\geq 1$. By Fatou's lemma, we see that $\|f\|_{L^1(\X)}\leq 1$. By this and a standard function-theoretic argument, it suffices to show that
\begin{equation}\label{the main theorem, 1}
\lim_{n\to\infty}\int_{\X} f_n(x) \varphi(x) d\mu(x) = \int_{\X} f(x) \varphi(x) d\mu(x)
\end{equation}
for all $\varphi\in C_c(\X)$ with $\|\varphi\|_{L^1(\X)}, \|\varphi\|_{L^\infty(\X)}\leq 1$.
Fix $\varepsilon >0$. As $f\in L^1(\X)$, there exists a positive number $\beta$ such that $\int_{A}|f|d\mu<\varepsilon$ for any Borel set $A$ satisfying $\mu(A)<\beta$. Since $\varphi\in C_c(\X)$, by \cite[Theorems 2 and 3]{MS} there exists $\alpha\in (0, \frac{\beta}{C_1 \varepsilon})$, where the constant $C_1$ is as in Lemma \ref{the weak type (1,1) for the Hardy-Littlewood function}, such that if $B$ is a ball in $\X$ satisfying $\mu(B)<\alpha$, then
\begin{equation}\label{Macias, Segovia and Ky}
|\varphi(x) - \varphi(y)|<\varepsilon
\end{equation}
for all $x,y\in B$. On the other hand, by Egorov's theorem and the regularity of $\mu$, there exists an open set $E\subset\X$ such that $\mu(E)< \alpha \varepsilon e^{-\frac{1}{\varepsilon}}$ and $f_n\to f$ uniformly on $\supp\varphi\setminus E$. Define $\tau:= \max\{0, 1+ \varepsilon \log(M(\chi_{E}))\}$. It is clear that $0\leq \tau\leq 1$ and $\tau(x)=1$ for all $x\in E$. We now claim that
\begin{equation}\label{BMO-estimate of Jones and Journe}
\|\varphi\tau\|_{BMO(\X)}\leq (2+ 2C_1+ 3C_2)\varepsilon,
\end{equation}
where the constant $C_2$ is as in Lemma \ref{a lemma of Coifman and Rochberg}.
Assume for the moment that (\ref{BMO-estimate of Jones and Journe}) holds. Since $f_n\to f$ uniformly on $\supp\varphi\setminus E$, there exists $n_0\in\mathbb N$ such that $\|f_n-f\|_{L^{\infty}(\supp \varphi\setminus E)}<\varepsilon$ for all $n\geq n_0$. Applying Lemma \ref{the weak type (1,1) for the Hardy-Littlewood function} with $g=\chi_E$ and $\lambda= e^{-\frac{1}{\varepsilon}}$, we obtain that
$$\mu(\supp \tau)\leq C_1 \mu(E) e^{\frac{1}{\varepsilon}}< C_1 \alpha \varepsilon <\beta.$$
Therefore, for any $n\geq n_0$,
\begin{eqnarray*}
&&\left|\int_{\X} (f_n-f)\varphi d\mu\right| \\
&\leq& \left|\int_{\supp \varphi\setminus E} (f_n -f) \varphi(1-\tau) d\mu\right| + \left|\int_{\supp \tau} f \varphi \tau d\mu\right| + \left|\int_{\X} f_n \varphi \tau d\mu\right|\\
&\leq& \|f_n-f\|_{L^{\infty}(\supp \varphi\setminus E)} \|\varphi\|_{L^1(\X)} + \|\varphi\|_{L^\infty(\X)}\int_{\supp\tau} |f| d\mu + \|f_n\|_{H^1(\X)}\|\varphi\tau\|_{BMO(\X)}\\
&\leq& \varepsilon + \varepsilon + (2+ 2C_1 + 3 C_2)\varepsilon= (4+ 2C_1 + 3 C_2) \varepsilon.
\end{eqnarray*}
This proves that (\ref{the main theorem, 1}) holds.
Let us now show (\ref{BMO-estimate of Jones and Journe}). Let $B$ be an arbitrary ball in $\X$. If $\mu(B)\geq \alpha$, then
\begin{eqnarray*}
\frac{1}{\mu(B)}\int_{B}\left|\varphi(x) \tau(x) -\frac{1}{\mu(B)}\int_B \varphi(y)\tau(y)d\mu(y)\right|d\mu(x) &\leq& 2 \frac{1}{\mu(B)} \int_B |\varphi\tau| d\mu\\
&\leq& 2 \frac{1}{\alpha}\|\varphi\|_{L^\infty(\X)} \mu(\supp \tau)\\
&\leq& 2 C_1 \varepsilon.
\end{eqnarray*}
Otherwise, by (\ref{Macias, Segovia and Ky}) and Lemma \ref{a lemma of Coifman and Rochberg}, we have
\begin{eqnarray*}
&&\frac{1}{\mu(B)}\int_{B}\left|\varphi(x) \tau(x) -\frac{1}{\mu(B)}\int_B \varphi(y)\tau(y)d\mu(y)\right|d\mu(x)\\
&\leq& 2 \frac{1}{\mu(B)}\int_{B}\left| \varphi(x) \tau(x) -\frac{1}{\mu(B)}\int_B \varphi(y) d\mu(y) \frac{1}{\mu(B)}\int_B \tau(y) d\mu(y) \right|d\mu(x) \\
&\leq& 2\|\tau\|_{L^\infty(\X)}\sup_{x,y\in B} |\varphi(x) -\varphi(y)| + 2 \|\varphi\|_{L^\infty(\X)} \|\tau\|_{BMO(\X)}\\
&\leq& 2 \varepsilon + 2 \cdot \frac{3}{2} \|\varepsilon \log(M(\chi_E))\|_{BMO(\X)}\leq (2+ 3 C_2)\varepsilon,
\end{eqnarray*}
which proves that (\ref{BMO-estimate of Jones and Journe}) holds.
\end{proof}
{\bf Acknowledgements.} The paper was completed when the second author was visiting
the Vietnam Institute for Advanced Study in Mathematics (VIASM). He would like
to thank the VIASM for financial support and hospitality.
BkiUcww5qhDACtS7gfrD | \section{What demand modelers have always wanted to do}
\label{sec:travel_demand_goals}
Consider the following three quotations.
\begin{quotation}
``Travel demand models are used to aid in the evaluation of alternative policies. The purpose of the models is to predict the consequences of alternative policies or plans. [...] Predictions made by the model are conditional on the correctness of the behavioral assumptions and, therefore, are no more valid than the behavioral assumptions on which the model is based. A model can duplicate the data perfectly, but may serve no useful purpose for prediction if it represents erroneous behavioral assumptions. For example, consider a policy that will drastically change present conditions. In this case the future may not resemble the present, and simple extrapolation from present data can result in significant errors. However, if the behavioral assumptions of the model are well captured, the model is then valid under radically different conditions.'' ---\citep{ben1973structure}
\end{quotation}
\begin{quotation}
``Indeed, causal models (assuming they are valid) are much more informative than probability models. A joint distribution tells us how probable events are and how probabilities would change with subsequent observations, but a causal model also tells us how these probabilities would change as a result of external interventions---such as those encountered in policy analysis, treatment management, or planning everyday activity. Such changes cannot be deduced from a joint distribution, even if fully specified.'' ---\citep{pearl2009causality}
\end{quotation}
\begin{quotation}
``The goal of many sciences is to understand the mechanisms by which variables came to take on the values they have (that is, to find a generative model), and to predict what the values of those variables would be if the naturally occurring mechanisms were subject to outside manipulations. [...] Finding answers to questions about the mechanisms by which variables come to take on values, or predicting the value of a variable after some other variable has been manipulated, is characteristic of causal inference.''---\citep{spirtes2010introduction}
\end{quotation}
Based on personal communication with many travel demand modelers, i.e. based on anecdote, we believe that the first quotation, by Moshe Ben-Akiva, accurately represents the opinions of most researchers and practitioners within the field of transportation. Moreover, we think it is safe to say that a ``policy that will drastically change present conditions'' can be categorized as an ``external intervention'' or ``outside manipulation.'' If one accepts these two premises, then based on the two quotations by Pearl and Spirtes, it is clear that the implicit goal of travel demand modeling is to make causal inferences (i.e. ``to predict the consequences of alternative policies or plans'')\footnote{As noted by an anonymous referee, ``some might argue that the purpose of demand modeling is to make predictions, as opposed to discover the causal mechanism.'' We believe such distinctions are red herrings. The predominant role of travel demand modelers, especially practitioners, is to predict the effects of particular policies on a future population's travel behavior. As stated by Spirtes (\citeyear{spirtes2010introduction}), ``predicting the value of a variable [i.e. travel behavior] after some other variable [i.e. a policy] has been manipulated is characteristic of causal inference.'' Put succinctly, counterfactual prediction is a causal inference task. Identifying causal mechanisms is also a causal inference task, but identifying causal mechanisms is not always necessary for making counterfactual predictions.}. Moreover, in order to produce such causal inferences, it is clear that travel demand models should be ``causal models.''
In the rest of this paper, we further investigate the relationship between travel demand models and ``causal models'' as seen in other disciplines. Section \ref{sec:brief_primer} provides a brief overview of what causal inference is and why it should be seen as a distinct field from travel demand modeling. In Section \ref{sec:current_state_of_affairs}, we describe the current state of relations between the fields of causal inference and travel demand modeling. There, we pay special attention to the differences between practices in the causal inference literature and practices in travel demand modeling. In Section \ref{sec:why_the_disconnect}, we continue this focus by hypothesizing about why the travel demand modeling literature seems so far removed from the causal inference literature. Finally, although we do not try to ``solve'' the issues of drawing valid causal inferences from travel demand models, we try to bridge the gap between the two literatures in Section \ref{sec:looking_towards_future}. In this section, we emphasize what travel demand modelers can learn from causal inference researchers, we provide an extended example that illustrates the use of the techniques described in this paper, and we conclude with a statement about how travel demand modelers can contribute to the causal inference literature.
\section{A brief primer}
\label{sec:brief_primer}
Despite sharing the same goals (as highlighted in Section \ref{sec:travel_demand_goals}), we believe travel demand modelers are generally uninformed or misinformed\footnote{Note, we do not use the adjectives ``uninformed'' and ``misinformed'' to be disparaging. We mean very literally that travel demand modelers do not seem to widely read the causal inference literature, and because the concepts and findings of that literature are non-trivial and sometimes un-intuitive, travel demand modelers often express sentiments that (1) show a lack of awareness of the technical details and definitions from the causal inference literature or (2) show beliefs that directly contradict findings from the causal inference literature. This second point is supported in the next paragraph.} about key concepts from the field of causal inference.
Here are some recent examples of this point. On April 20th and 21st, 2017, the ``Advancing the Science of Travel Demand Modeling'' National Science Foundation Workshop was held at the University of California, Berkeley. This workshop convened many travel demand modeling scholars and practitioners, young and old, from both within and outside of the United States. As such, the comments made during the workshop represent a wide cross-section of voices within the field. Of special interest was panel discussion \#2: ``How critical is causality? And how can we make clear statements about causality in travel demand models?'' In particular, some direct quotes\footnote{Note that the names of individuals who made each quote have been redacted to respect participant privacy because individuals did not make these statements ``on the record.''} from the discussion after Panel \#2 were:
\begin{itemize}
\item ``What is causality? What is the clear definition of causality?''
\item ``What is causality? What about the context? It's not just $Y$ and $X$.''
\item ``How do we define causality? How much causality is needed in the models to give robust predictions?''
\item ``A model that predicts successfully implies that we are accounting for causality.''
\item ``If we take a certain intervention, will it have the outcome desired by the policy makers? It's not about getting causality right. It's more about what confidence do we have in our projected outcome.''
\end{itemize}
As illustrated by these comments from attendees and the overall tenor of the conversations throughout the workshop, the topic of causality in travel demand modeling is beginning to be widely discussed, but it is still far from being widely and correctly understood\footnote{To be completely explicit, we note that on the topic of making inferences about outcomes under external manipulation or intervention, we generally assume that if the statements of travel demand modelers and causal inference researchers disagree, then the travel demand modeler is incorrect. Of course, we examine the statements and supporting arguments made by both parties, but we have found our assumption to typically hold true. Again, this is not a pejorative remark against travel demand modelers. It is an expected outcome based on the fact that causal inference researchers are trained to focus on this topic, whereas travel demand modelers are typically not.}. Specifically, travel demand modelers seemed most uninformed or misinformed about what causal inference is and how it differs from prevailing practices in travel demand modeling. Below, we briefly address these two questions.
First, for the purposes of this paper, causal inference is defined as the use of data and assumptions to make inferences about outcomes under external manipulation or intervention in a particular context \citep{dawid_2010_beware}. We will begin by introducing some notation. Let $Y_i$ be a discrete dependent variable for an individual $i$. In the field of travel demand, concrete examples of $Y_i$ might be the vector of zeros and ones that represents the travel mode that an individual takes, a count of how many automobiles an individual or household owns, or the time period during which an individual departs from work. Now, let $X_i$ be some explanatory variable for that individual, which is amenable to change via political action. Concrete examples of $X_i$ include the speed of public transit, the cost of driving (as affected by gas taxes), or the prevalence of bicycle lanes between the individual's home and work. Finally, let $Z_i$ be a set of covariates for individual $i$ that also affect the outcome $Y_i$ but are not being subjected to any external change. $Z_i$ might include, for instance, socio-demographics or attributes of a travel mode that are not being subjected to change by the policy in question (e.g. walking time between one's origin and destination).
Using this notation\footnote{Note, our discussion is in terms of a discrete dependent variable because much of travel demand modeling focuses on predicting discrete outcomes. However, if one is interested in a continuous dependent variable, then the quantities described in this paragraph would change as follows. Instead of inferring the probability mass function $P \left( Y_i | \textrm{do} \left( X_i = x \right), Z_i \right)$, we would instead focus on inferring the probability density $f \left( Y_i | \textrm{do} \left( X_i = x \right), Z_i \right)$. Additionally, using $Y_{ij}$ to denote the outcome for individual $i$ if policy $j$ is enacted, we would define the individual causal effect as $Y_{i2} - Y_{i1}$, and we would define the average causal effect as $E \left[ Y_{i2} - Y_{i1} \right]$.}, causal inference focuses on inferring $P \left( Y_i | \textrm{do} \left( X_i = x \right), Z_i \right)$ \citep[Section 3.2.1]{pearl2009statistics}. Here, $x$ is a particular value of $X_i$. The notation $\textrm{do} \left( X_i = x \right)$ explicitly denotes the fact that we are interested in the so-called post-intervention or controlled distribution of the outcome $Y_i$, where we externally set $X_i$ to the value $x$ \citep{pearl_2014_external}. From this post-intervention distribution, numerous quantities of interest may be calculated. For instance, let $X_{i1}$ and $X_{i2}$ be two different values of $X_i$, each corresponding to a different policy: policy 1 and policy 2. Then the individual causal effect of policy 2 versus policy 1 can be defined as $P \left( Y_i | do \left( X_i = X_{i2} \right) \right) - P \left( Y_i | do \left( X_i = X_{i1} \right) \right)$ \citep[Section 8.2.1]{pearl2009statistics}. Other quantities of interest can also be calculated. For instance, the average treatment effect can be defined as the average of the individual causal effects over the population $E \left[ P\left( Y_i | \textrm{do} \left( X_i = X_{i2} \right) \right) - P\left( Y_i | \textrm{do} \left( X_i = X_{i1} \right) \right) \right]$.
We emphasize here that the post-intervention distributions, i.e. the distributions using the ``do'' operator, contrast with the observational distributions of $Y_i$ where individuals choose to have $X_i = x$, e.g. $P \left( Y_i | X_i = x, Z_i \right)$. In general, the two distributions are not equivalent: $P \left( Y_i | \textrm{do} \left( X_i = x \right) \right) \neq P \left( Y_i | X_i = x \right)$. As some readers may already be thinking, this difference is related to the traditionally defined concept of endogeneity. However, as we will discuss two paragraphs from now, this difference in distributions is \textbf{\textit{broader}} than the traditional concept of endogeneity. Now, to clarify what we mean by differences in the post-intervention and observational distributions, imagine the following (fictitious) public health study. Here, $Y_i$ is the number of times an individual rides a bike for recreation in a given month. As an explanatory variable, $X_i$, consider the average number of times in a week that the individual rides his/her bicycle to work. The post-intervention distribution may be observed where participants in a randomized controlled trial are made to ride a bicycle to work 3 or more times a week. This might differ from the observational distribution where perhaps only ``serious'' cyclists rode a bicycle to work 3 or more times a week. Intuitively, one might expect that more recreational bike rides would be observed amongst those who frequently commuted by bicycle without the study intervention as compared to those who were forced to commute by bicycle frequently. In other words, one might expect $E \left[ Y_i | X_i = x, Z_i \right] > E \left[ Y_i | \textrm{do} \left( X_i = x \right), Z_i \right]$ in this example.
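To make this distinction concrete, the short Python sketch below simulates a stylized version of the bicycle example. Everything in it---the ``cycling affinity'' confounder, the coefficient values, and the Poisson outcome model---is a hypothetical construction of ours for illustration only; it is not drawn from any cited study. The point is simply that, when the data-generating process is known, the observational quantity $E\left[ Y_i | X_i = 1 \right]$ and the interventional quantity $E\left[ Y_i | \textrm{do}\left( X_i = 1 \right) \right]$ can be computed side by side, and that they differ whenever an unobserved factor drives both the treatment and the outcome.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Unobserved confounder: each person's affinity for cycling.
affinity = rng.normal(size=n)

# Observational regime: high-affinity people choose to bike-commute often.
# commute = 1 denotes bike-commuting 3+ times a week (chosen, not assigned).
commute = (affinity + rng.normal(size=n) > 0.5).astype(int)

def recreational_rides(commute, affinity):
    # Outcome depends on the treatment AND on the unobserved affinity.
    lam = np.exp(0.5 + 0.3 * commute + 0.8 * affinity)
    return rng.poisson(lam)

y_obs = recreational_rides(commute, affinity)

# Interventional regime: do(X = 1) for everyone, affinity left untouched.
y_do1 = recreational_rides(np.ones(n, dtype=int), affinity)

print("E[Y | X = 1]     =", y_obs[commute == 1].mean())
print("E[Y | do(X = 1)] =", y_do1.mean())
\end{verbatim}
In this simulation the observational average exceeds the interventional one because conditioning on $X_i = 1$ implicitly selects individuals with a high unobserved affinity for cycling, whereas the ``do'' calculation holds the population distribution of that affinity fixed.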
The primary reason for the inequality of the post-intervention and the observational distributions is that individuals choose the values of $X_i$ that they are observed to have. Continuing the example from the last paragraph, individuals choose how often they wish to commute to work by bicycle\footnote{Again, we realize that some readers in the travel demand community may be already thinking that this is simply about endogeneity. Endogeneity, as typically defined, is a \textit{\textbf{subset}} of the concepts used in the causal inference literature when judging the identifiability of $P \left( Y_i | \textrm{do} \left( X_i = x \right) \right)$. We will return to this point in the next paragraph where we will discuss how causal inference is a larger topic than simply dealing with endogeneity as known to travel demand modelers.}. Because individuals choose their observed values of $X_i$, there may be unobserved factors influencing their choice of $X_i$ that also affect their outcome $Y_i$. We discuss this point further in Section \ref{sec:current_state_of_affairs}, but for now, note that in our example, a person with an unobserved aversion to bicycling may still choose not to bicycle a lot for recreation, even if he/she is forced to bicycle to work by his/her doctor. In general, simply looking at the observational distribution $P \left( Y_i | X_i, Z_i \right)$ may lead one to incorrectly overestimate or underestimate the effect of externally setting $X_i = x$ while holding all of the other unobserved variables constant. As put eloquently by statistician A.P. Dawid:
\begin{quotation}
``[I]t is a logically trivial but fundamentally important point that there is no necessary connexion between the different regimes of seeing and doing: a system may very well behave entirely differently when it is kicked than when it is left alone.''---\citep{dawid_2010_beware}
\end{quotation}
As travel demand modelers, we do ourselves a disservice by not paying special attention to this distinction in our modeling efforts. Indeed, our policy problems call for the post-intervention distribution $P \left( Y_i | \textrm{do} \left( X_i = x \right) \right)$, but we typically estimate $P \left( Y_i | X_i = x \right)$. Then, when we apply our models, we erroneously behave as if we have estimated $P \left( Y_i | \textrm{do} \left( X_i = x \right) \right)$. This causal non sequitur leads to misguided statements such as those quoted above where success or confidence in predictions of $P \left( Y_i | X_i = x \right)$ is taken to be the important feature in a causal inference problem, even though the observational distribution may be arbitrarily far from the post-intervention distribution that we truly need.
The discussion above should be helpful for travel demand modelers who are unfamiliar with the field of causal inference and who seek a basic understanding of what the vast and growing body of causal inference studies is about. However, there may be other travel demand modelers who see little added value in the preceding (and following) discussions. Presumably, their thought will be that since the concept of endogeneity and self-selection already exists within travel demand modeling, there is nothing new to be learned. This thought is incorrect. For example, current definitions of endogeneity typically refer to the case where one's explanatory variables are correlated with the error terms in one's model \citep{louviere_2005_recent}. Concretely, imagine that (1) $X_i$ and $T_i$ are two observed, explanatory variables that affect one's outcome $Y_i$, (2) that $X_i$ is currently excluded from one's model while $T_i$ is included, and (3) that $X_i$ and $T_i$ are correlated. Using common definitions, $T_i$ would be labelled endogenous because it is correlated with $X_i$, which is excluded from one's model and therefore part of one's error terms. As such, travel demand scholars who research endogeneity might say that the correct action would be to include $X_i$ in one's model. We will not delve into the details here, but researchers from the causal inference literature would (correctly) point out that the decision of whether or not one should include $X_i$ in one's model depends on one's causal assumptions about how $X_i, T_i,\textrm{ and } Y_i$ are related. In some cases, including $X_i$ can increase bias in one's estimation of $P \left( Y_i | \textrm{do} \left(T_i = t \right) \right)$ instead of reducing it \citep{elwert_2013_graphical, ding_2015_adjust}. For some scenarios where the statistical definitions of endogeneity are insufficient for determining whether a variable should included in one's model, see the literature on M-bias \citep{ding_2015_adjust}, butterfly-bias \citep{ding_2015_adjust}, overcontrol or overadjustment bias \citep{schisterman_2009_overadjustment, elwert_2014_endogenous}, and endogenous selection bias \citep{elwert_2014_endogenous} for more information.
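As a concrete illustration of why the statistical definition of endogeneity can mislead, the sketch below simulates the classic M-structure in a linear setting. The variable names, coefficients, and sample size are all assumptions made purely for illustration. Here $X_i$ is a pre-treatment covariate that is correlated with the treatment $T_i$, yet it is a collider on a path joining two unobserved causes; regressing the outcome on the treatment alone recovers the true coefficient of $1.0$, while ``controlling for'' $X_i$ introduces bias.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

u1 = rng.normal(size=n)                # unobserved; affects X and T
u2 = rng.normal(size=n)                # unobserved; affects X and Y
x = u1 + u2 + rng.normal(size=n)       # observed covariate: a collider of U1 and U2
t = u1 + rng.normal(size=n)            # treatment
y = 1.0 * t + u2 + rng.normal(size=n)  # true structural effect of T on Y is 1.0

def ols(design, response):
    # Ordinary least squares coefficients via a least-squares solve.
    coefs, *_ = np.linalg.lstsq(design, response, rcond=None)
    return coefs

ones = np.ones(n)
print("T coefficient, no adjustment:  ",
      ols(np.column_stack([ones, t]), y)[1])
print("T coefficient, adjusting for X:",
      ols(np.column_stack([ones, t, x]), y)[1])
\end{verbatim}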
The basic point that we reiterate and elaborate on further in the remaining sections of this paper is that (1) techniques, approaches, and insights from the causal inference literature are distinct from and broader than those in the current travel demand literature, and (2) that given their common goals, the travel demand literature should both adopt and contribute to methods from the causal inference literature.
\section{The current state of the union}
\label{sec:current_state_of_affairs}
Starting in the 1970's with the so-called Rubin Causal Model \citep{holland1986statistics} and continuing to the present, an impressive amount of scholarly study on causal inference has been performed. This research has largely taken place outside the field of travel demand modeling, within disciplines such as economics, statistics, artificial intelligence/computer science, sociology, and epidemiology. In particular, the causal inference literature has come to focus on a number of discoveries and concepts that are not widely emphasized or utilized within the field of travel demand modeling. The critically important point here is that some of these discoveries in the causal inference literature show that the common viewpoints and practices of travel demand modelers, as exemplified by the Ben-Akiva quotation in Section \ref{sec:travel_demand_goals}, are incorrect or misguided. As a result, the field of travel demand modeling can be improved by incorporating these concerns into its own practice.
Let us give a concrete example to motivate this section. Thus far, much of the causal inference research has focused on the necessary and sufficient conditions for estimating various kinds of causal effects from observational data. Said differently, much causal inference work has focused on specifying the ``requirements for a causal interpretation of an empirical relationship'' \citep{heckman2000causal}. That so much effort has been expended on this topic is instructive. It is now known that having ``valid behavioral assumptions'' that are ``well captured''\footnote{Note that ``well captured'' is taken, here, to mean that one's model is based on and mathematically represents one's behavioral assumptions.} in one's model \textbf{\textit{is not}} sufficient for one to justifiably draw causal inferences from a model estimated from observational data. In the words of \citet{imbens2015causal},
\begin{quotation}
``we cannot simply look at the observed values of [...] outcomes under different treatments [...] and reach valid causal conclusions irrespective of the assignment mechanism. In order to draw valid causal inferences we must consider why some units received one treatment rather than another'' (p.15).
\end{quotation}
We will revisit this notion of treatment assignment later, but for now, the point is that if one estimates a travel model based on ``valid behavioral assumptions'' but fails to consider why the individual decision makers had particular values for the treatment or treatments received (e.g. travel costs and travel times), then one will not be able to make valid causal inferences. To a certain extent, this fact has been acknowledged by academics who work in the field of travel demand modeling \citep{petrin2010control, mabit2010mode, pinjari2011modeling, guevara2015critical}, but such knowledge is not routinely reflected in travel demand research, and it is largely ignored by travel demand modeling practitioners.
Travel models are almost always estimated using observational as opposed to experimental data, and as just noted, there is a discord between the concerns of causal inference researchers and the norms of travel demand modelers. Such a disagreement should spur large changes in how we approach our work as travel demand modelers. In particular, since the predominant purpose of travel demand modeling is to make causal inferences, one might expect travel demand modelers to (as much as possible) have done two things. First, one might have expected modelers to have incorporated the existing causal inference techniques into their own practices. Secondly, one might have expected travel demand modelers to have begun contributing to the general field of causal inference based on their need to make causal inferences in settings that are distinct from the settings typically faced by scholars from other fields. However, despite the two academic disciplines developing roughly simultaneously, no such merger of the causal inference and travel demand modeling worlds has occurred.
To be clear, we recognize that some concepts from the study of causality have made their way into transportation studies. For instance, when trying to determine the effect of the built environment on travel behavior, transportation researchers have long spoken about the ``self-selection'' problem \citep[see for example the review of][]{cao2009examining}. As a specific illustration, consider the impact of transit-oriented development (TOD) on transit ridership. Here, the issue is that it may not be the presence of TOD that causes higher rates of transit ridership in a given area, but perhaps individuals who prefer to take transit chose to live in TODs. Using terminology from the Rubin Causal Model, one might say that the treatment assignment (TOD or not) mechanism is not random---people choose where to live and therefore choose to be exposed to the treatment. Clearly then, transportation researchers of select topics, such as the land-use and transportation connection or traffic safety, have begun to make use of techniques from the causal inference literature. However, as exemplified by the work done by metropolitan planning organizations, cities, and discrete choice researchers, the general practice of travel demand modeling remains disconnected from the causal inference literature.
Let us provide an example of the disconnect that we are referring to. Treasure Island is between San Francisco and Oakland, California. This island is under the jurisdiction of San Francisco, and a major suite of residential and commercial developments is planned for the island. An important policy objective for San Francisco is that, when the initial suite of development is complete, the majority of travel to, from, and within the island takes place via public transit, walking, and bicycling \citep{treasure2015mobility}. This objective has triggered massive travel demand modeling efforts, both by practitioners and academics. A key piece of these modeling efforts is the creation of travel mode choice models. These models typically take as inputs individual characteristics (e.g. age, gender, family structure, automobile ownership, etc.) and alternative-specific attributes (e.g. travel times and travel costs for a particular individual traveling from a particular origin to a particular destination). As outputs, travel mode choice models return the probability that an individual chooses to complete a trip by a particular travel mode (e.g. car, bus, train, bicycle, walk, taxi, etc.). Given a travel mode choice model, as well as a model that can simulate a synthetic population to represent the individuals expected to be living in, working in, and visiting Treasure Island, one can estimate the expected share of people traveling via each available travel mode. Moreover, such models will be used to study the causal effects on the aggregate travel mode shares due to the introduction of various types of transportation policies (e.g. transit signal priority, parking restrictions, heavy investments in bicycle infrastructure, etc.).
A major difference between traditional causal inference studies and such travel demand modeling efforts is that there is typically no accounting for the treatment assignment mechanism (i.e. no accounting for confounding\footnote{The term confounding is used in a somewhat technical manner in this paper. Let $C$ denote the set of confounding variables. $C$ may comprise any mix of observed or unobserved variables. Let $Y$ denote the set of outcome variables, and let $D$ denote the set of treatment variables. This paper uses the term confounding to refer to the condition where $C$ has two causal pathways through which it affects $Y$. One is where $C \rightarrow D \rightarrow Y$, i.e. where $C$ affects the value of $D$, that in turn affects the value of $Y$. The second causal pathway is where $C \rightarrow Y$, i.e. where $C$ affects $Y$ through means that do not involve affecting the treatment variables $D$.}) in travel demand modeling work. For instance, the travel mode choice models just described will likely be estimated using disaggregate data collected from household travel surveys. The key parameters being estimated are those that correspond to variables being manipulated by the transportation policies, namely the parameters related to travel times, travel costs, and infrastructure conditions. However, the values of those time, cost, and infrastructure variables were not randomly assigned to the individuals being used for model estimation. Instead, the observed time, cost, and infrastructure values are the result of individuals choosing to live in, work in, and visit particular locations. For instance, since I (Timothy) enjoy commuting by bicycle, I chose to limit my household location search to areas that were within three miles of my workplace. Similar to the TOD example, my choice of bicycling to work is therefore not due solely to having a low bicycle travel time---my bicycle travel time is low because I want to bicycle to work. Put another way, in observational studies such as the kind performed in travel demand modeling, the variables of interest may be endogenous or confounded. Without accounting for this confounding or endogeneity, one has not accounted for the treatment assignment mechanism, and one cannot hope to draw valid causal inferences.
Broadly, we think that a serious problem of the travel demand modeling field is that it ignores findings and methods from the causal inference literature. In particular, travel demand analyses are often not explicit about the causal effects that they are meant to estimate. Moreover, travel demand analyses often lack transparent accounts of how their assumptions and techniques combine to identify the desired causal effects. In the upcoming sections, we will review why we think this gap between the two fields exists, what lessons travel demand modelers can immediately take from the causal inference literature, and where we think travel demand modelers can contribute to the travel demand literature.
\section{Why the disconnect?}
\label{sec:why_the_disconnect}
Given the current state of affairs just described, it may be useful to reflect on why there is a disconnect between travel demand modeling and the study of causality. Below, we state and discuss our (admittedly) subjective views on this topic.
In general, if two academic disciplines address (or appear to address) markedly different problems, then it is quite understandable that those disciplines might not rely on common techniques. For instance, as an extreme example, it is not surprising that creative writing and transportation engineering have very little methodological overlap. These disciplines attempt to answer very different questions. More relevant to this discussion is the fact that travel demand modeling takes place in a setting that is quite different from typical causal inference work. This difference in setting is manifest in terms of the effects or target quantities being studied, how treatments are defined, and the data that is available for use in our studies. We will expound on each of these areas of difference below, but given such differences, it is lamentable---though not surprising---that there is little methodological overlap between causal inference studies and most travel demand modeling efforts. At first glance, travel demand modelers might not think that the causal inference literature will be of much assistance in the sorts of transportation policy questions being addressed.
\subsection{Different Quantities of Interest}
\label{sec:diff_quantities_of_interest}
In terms of the effects or target quantities being studied, questions regarding transportation policy may be ambitious compared to the types of questions typically studied in the causal inference literature. Consider the Treasure Island example once more. The target quantities of interest can be defined as the combined mode shares of public transit, walking, and bicycling under different suites of transportation policies. Given that this is a future development, we observe neither the ``treatment outcome'' nor the reference or control outcome being used as the basis for comparison. Such a setting stands in stark contrast to the typical settings described by prominent researchers of causality. For instance, consider the following three quotes. In ``The State of Applied Econometrics: Causality and Policy Evaluation,'' Athey and Imbens write that
\begin{quotation}
``[w]e focus on the case where the policies of interest had been implemented for at least some units in an available dataset, and the outcome of interest is also observed in that dataset. We do not consider here questions about outcomes that cannot be directly measured in that dataset, such as consumer welfare or worker well-being, and we do not consider questions about policies that have never been implemented. The latter type of question is considered a branch of applied work referred to as ``structural'' analysis; the type of analysis considered in this review is sometimes referred to as ``reduced-form,'' or ``design-based,'' or `causal methods.'"---\citep{athey2016state}
\end{quotation}
Here, Athey and Imbens are explicit about their description of ``causal methods'' not including questions about policies that have never been implemented. Earlier, \citeauthor{imbens2009recent}, in ``Recent Developments in the Econometrics of Program Evaluation'' wrote that
\begin{quotation}
``[t]he central problem studied in this literature is that of evaluating the effect of the exposure of a set of units to a program, or treatment, on some outcome. [...] Moreover, this literature is focused on settings with observations on units exposed, and not exposed, to the treatment, with the evaluation based on comparisons of units exposed and not exposed. As opposed to studies where the causal effect of fundamentally new programs is predicted through direct identification of preferences and production functions.''---\citep{imbens2009recent}
\end{quotation}
And even before this, Nobel Laureate James Heckman wrote that
\begin{quotation}
``[t]he treatment effect literature focuses almost exclusively on policy problem P1 [(evaluating the impact of historical interventions on outcomes)] for the subset of outcomes that is observed. It ignores the problems of forecasting a policy in a new environment [...] or a policy never experienced [...]. Forecasting the effects of new policies is a central task of science and public policy that the treatment effect literature ignores.''---\citep{heckman2005scientific}.
\end{quotation}
The ``treatment effect literature'' that Heckman references is a large subset of the causal inference literature, and these papers are silent about the types of problems that travel demand models are being used for. As a result, travel demand modelers would need to perform a rather substantive search of the causal inference literature to see that some causality work (the so-called structural analysis) is addressing questions that mirror those found in transportation policy analysis.
\subsection{Different Treatments}
\label{sec:diff_treatments}
In much of the standard causal inference literature, the treatment variable in one's analysis is defined as the policy being evaluated. However, in transportation policy analysis, and in the structural analysis segment of the causal inference literature more generally, policies stipulate bundles of treatment variables that are thought to affect one's potential outcomes. For example, when forecasting the effect of a congestion pricing scheme, it is the manipulated automobile travel costs and travel times that will affect one's travel mode choice. Here, the treatment effects of interest are the dose-response relationships between levels of automobile travel costs and travel time, and the probability of an individual choosing to drive.
There are numerous ramifications from redefining treatment variables to be distinct from particular policies. The biggest benefit of this redefinition is noted by Heckman, below.
\begin{quotation}
``This approach models different treatments as consisting of different bundles of characteristics. [...] Different treatments $s$ are characterized by different bundles of the same characteristics that generate all outcomes. This framework provides the basis for solving policy problem P3 [(forecasting the impacts of interventions never historically experienced to new environments)] since new policies (treatments) are generated as different packages of common characteristics, and all policies are put on a common basis''---\citep{heckman2005scientific}.
\end{quotation}
However, despite the increased capabilities brought about by such a redefinition of one's treatment variables, there are at least four drawbacks\footnote{See \citet[Footnote 10]{mokhtarian_2016_quantifying} for a similar discussion of how the structural definition of treatments leads to difficulties in applying standard causal inference techniques in a residential choice setting.}. The first drawback is that to estimate the causal effects of interest, one must now make much stronger assumptions about \textit{how} the treatment variables affect one's outcome variables, as compared to researchers who only study policies that have already been implemented. For instance, instead of simply observing how the construction of transit oriented development changes transit usage rates of residents in an area, one must make assumptions about the mechanisms by which TOD does and does not affect transit usage (e.g. by reducing travel time from the transit station to destinations of interest, but not by making transit usage a more socially acceptable travel mode). In Heckman's words, one must now make assumptions about the ``causes of effects'' instead of simply measuring the ``effects of causes'' \citep{heckman2005scientific}. As a result of discomfort with making such strong assumptions, many scholars who are interested in causal inference do not take the structural analysis approach, and it becomes easy to miss the work of scholars who do focus on forecasting questions that are similar to those seen in transportation.
The second drawback is that while the typical treatment effect literature focuses on categorical treatments (e.g. implement one of a finite set of policies), the redefinition described above typically makes use of continuous treatment variables in transportation contexts (e.g. travel times and travel costs). Continuous treatments require one to make even more assumptions in order to arrive at identifiable quantities that can be regarded as treatment effects. In particular, when using the presence of a particular policy as the treatment variable, dummy variables sufficed to describe the treatment effect (e.g. when assuming additive and homogeneous treatment effects). Now, when using continuous treatment variables, one must specify the form of the relationship between the treatment variable and the response (e.g. $x$, $x^2$, $\ln \left( x \right)$, etc.). As before, increasing the number of assumptions that must be made decreases the amount of causal inference literature that is devoted to this sort of transportation-relevant analysis.
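The practical consequence of this point can be seen in the following sketch, which is our own illustration rather than a recommended estimation procedure. Data are simulated from an assumed binary logit model in which driving cost enters through its logarithm; two analysts then fit the model with cost entering linearly and logarithmically, respectively, and each forecasts the response to a pricing policy that pushes costs outside the range observed in the estimation data. Both specifications may fit the estimation sample comparably well while yielding noticeably different policy forecasts, which is precisely the burden of the extra functional-form assumptions discussed above.
\begin{verbatim}
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 50_000

cost = rng.uniform(1.0, 10.0, size=n)        # observed driving costs (dollars)
utility = 2.0 - 1.5 * np.log(cost)           # assumed "true" systematic utility
drive = rng.binomial(1, 1.0 / (1.0 + np.exp(-utility)))

# Two candidate dose-response specifications for the same treatment variable.
fit_lin = sm.Logit(drive, sm.add_constant(cost)).fit(disp=0)
fit_log = sm.Logit(drive, sm.add_constant(np.log(cost))).fit(disp=0)

# Forecast for a pricing policy that raises costs to 15 dollars,
# beyond the range of costs seen in the estimation data.
new_cost = np.array([15.0])
x_lin = sm.add_constant(new_cost, has_constant='add')
x_log = sm.add_constant(np.log(new_cost), has_constant='add')
print("P(drive) at cost = 15, linear-in-cost spec:", fit_lin.predict(x_lin))
print("P(drive) at cost = 15, log-of-cost spec:   ", fit_log.predict(x_log))
\end{verbatim}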
Thirdly, in a ``selection-on-observables'' regime where one believes that he or she has observed all the variables that influence both a person's outcome and his/her observed level of the treatment variables, the redefinition just described may open the analyst up to problems due to the curse of dimensionality. Specifically, many causal inference techniques in ``selection-on-observables'' settings rely on the ``propensity score''---the probability or probability density of the observed treatment level given the observed covariates. When there are multiple treatment variables involved, there are typically multiple propensity scores \citep{imai2004causal}. As the number of treatment variables used to characterize a policy increases, one can encounter a situation with very low numbers of individuals with similar values for all of their propensity scores for the various treatment variables. In such settings, common causal inference techniques such as matching and sub-classification may become difficult to use in practice \citep{imai2004causal}. Travel demand modelers may see such an issue and be initially discouraged, noting that their particular applications suffer from issues that have not even been resolved in the causal inference literature itself.
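To give a sense of the combinatorial problem, the sketch below works through the stratification arithmetic only. It assumes that a separate propensity score has somehow already been estimated for each treatment variable and simply draws those scores at random; the sample size, the number of strata, and the threshold of twenty observations per cell are arbitrary choices of ours. The output shows how quickly the joint strata become empty or too thin to support within-cell comparisons as the number of treatment variables characterizing a policy grows.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n = 5_000          # roughly a household-travel-survey sample size
n_strata = 5       # quintile strata per propensity score

def sparse_cells(n_treatments):
    # Hypothetical, already-estimated propensity scores, one per treatment.
    scores = rng.uniform(size=(n, n_treatments))
    strata = np.floor(scores * n_strata).astype(int)   # quintile index per score
    counts = {}
    for row in map(tuple, strata):
        counts[row] = counts.get(row, 0) + 1
    total = n_strata ** n_treatments
    thin = sum(1 for c in counts.values() if c < 20)
    empty = total - len(counts)
    return total, empty + thin

for k in (1, 2, 3, 4, 5):
    total, unusable = sparse_cells(k)
    print(f"{k} treatment variable(s): {total:5d} joint strata, "
          f"{unusable:5d} empty or with fewer than 20 observations")
\end{verbatim}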
Finally, as noted earlier, a requirement for drawing causal inferences is that one accounts for the treatment assignment mechanism. Given that the redefinition above typically leads to the creation of multiple treatment variables per policy, travel demand modelers should be concerned about the assignment mechanism for each of the treatment variables for the observations in their sample. Moreover, since confounding due to unobserved variables is usually a serious concern in observational studies with only one treatment variable, it may be reasonable to expect the potential for unobserved confounding to be increased when there are multiple treatment variables. If unobserved confounding exists in one's study, then the prospects for drawing credible causal inferences are grim, partially due to the cross-sectional datasets used in transportation\footnote{For instance, cross-sectional datasets preclude fixed-effects and random-effects estimators that may be used to deal with unobserved heterogeneity.}.
In the context of travel demand models and cross-sectional data, unobserved confounding shows up as an endogeneity issue. Endogeneity in travel demand models may currently be addressed through a number of techniques such as the use of proxy variables, the ``Berry-Levinsohn-Pakes'' technique (in particular instances), and instrumental variable techniques \citep{guevara2015critical}. Of these strategies, instrumental variable approaches are the most generally applicable\footnote{As noted by \citet{guevara2015critical}, proxy variable methods are easy to apply but their assumptions are commonly violated. The ``Berry-Levinsohn-Pakes'' method requires the endogeneity to be present at the level of groups of observations and is not applicable when the endogeneity is present for individual observations \citep{guevara2015critical}.}. Instrumental variable approaches such as control function, latent variable, or multiple indicator solution methods all rely on researchers being able to find variables that are ``valid instruments.'' That is, one needs variables that are correlated with the endogenous variable but conditionally independent of the outcome, given the endogenous variable(s) \citep{guevara2015critical}. Unfortunately, a travel demand modeler might be dismayed due to the consensus in the causal inference literature that ``[g]ood instruments are hard to find, however, so we'd like to have other tools to deal with unobserved confounders'' \citep{angrist2008mostly}. Even Philip G. Wright, the inventor of the instrumental variable estimator in econometrics, wrote that ``[s]uch factors, [i.e. valid instruments] I fear, especially in the case of demand conditions, are not easy to find'' \citep{angrist2015mastering}.
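For readers who have not seen a control-function estimator before, the following minimal linear sketch may help fix ideas. The instrument, the unobserved confounder, and every coefficient are assumed solely for illustration, and the linear-regression setting deliberately sidesteps the additional complications that arise when the outcome model is a discrete choice model (see \citet{guevara2015critical} for those details). The naive regression is biased because the regressor is correlated with the omitted confounder, while adding the first-stage residual as a control recovers an estimate close to the true coefficient of $-0.5$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
n = 200_000

z = rng.normal(size=n)                        # instrument: shifts x, excluded from y
u = rng.normal(size=n)                        # unobserved confounder
x = 1.0 * z + 1.0 * u + rng.normal(size=n)    # endogenous regressor (e.g., travel time)
y = -0.5 * x + 1.0 * u + rng.normal(size=n)   # outcome; true effect of x is -0.5

def ols(design, response):
    coefs, *_ = np.linalg.lstsq(design, response, rcond=None)
    return coefs

ones = np.ones(n)

# Naive regression: biased because x is correlated with the omitted u.
print("naive OLS estimate:       ", ols(np.column_stack([ones, x]), y)[1])

# Control function: regress x on the instrument, then add the first-stage
# residual (which absorbs the confounded part of x) to the outcome equation.
first_stage = ols(np.column_stack([ones, z]), x)
resid = x - np.column_stack([ones, z]) @ first_stage
print("control-function estimate:", ols(np.column_stack([ones, x, resid]), y)[1])
\end{verbatim}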
\subsubsection{Summary}
In summary, travel demand modelers often hope to draw causal inferences regarding policies that either have not been implemented yet or have not been implemented in the population of interest yet. In such settings, modelers must redefine the ``treatment variables'' in their studies from being particular policies to being sets of characteristics that define policies. This redefinition permits a so-called ``structural analysis'' that is used in a small subset of the causal inference literature. Moreover, this redefinition requires the use of strong assumptions to provide identification of the causal parameters of interest. As a result, the type of causal inference work that most directly pertains to travel demand modelers is not highly visible within the causal inference literature. Additionally, the forays of travel demand modelers into the causal inference literature may not be well received by scholars of the more common ``treatment effect literature'' that do not typically concern themselves with the more speculative studies that are needed in transportation policy analysis. Beyond research visibility and reception, the redefinition of treatment variables may lead to practical difficulties in credibly employing common techniques from the causal inference literature for dealing with confounding/treatment-assignment due to observed or unobserved factors. Such difficulties may be discouraging for travel demand modelers, but they also point to areas where travel demand modeling could contribute to the causal inference literature.
\subsection{Different Datasets}
\label{sec:diff_datasets}
Lastly, as mentioned in the previous subsection, the redefinition of treatment variables, from representing a policy of interest to representing characteristics of policies, may lead to a greater opportunity for an analyst's study to suffer from unobserved confounding. That is, one's treatment variables and one's outcome may both be a function of some unobserved factor(s). Causality researchers, especially economists, have developed a number of techniques for dealing with unobserved confounding, beyond the aforementioned methods. Such techniques include difference-in-difference, fixed effects, and random effect models, to name a few. While this fact may initially seem encouraging to travel demand modelers, these techniques rely on panel data to achieve identification of the causal effects of interest. As already mentioned, travel demand models are typically (though not always) estimated using cross-sectional datasets, thereby precluding the use of many of the existing models for dealing with unobserved confounding.
\subsection{Recapping the rift}
\label{sec:disconnect_summary}
To summarize this section, we have attempted to detail our opinions about why travel demand modeling does not incorporate many of the techniques developed in the causal inference literature. The main reasons that come to mind are that first, not all areas of the causal inference literature are directly applicable to the inferential settings of travel demand modeling. In particular, travel demand modeling often seeks to forecast the effect of a policy that has not been implemented in the target population of interest (e.g. different policies for the future Treasure Island development). Much of the existing causal inference literature ignores this problem in favor of evaluating the effects of policies that have already been implemented in the population of interest. As a result of this difference in questions, a minority subset of the causal inference literature (i.e. the literature on structural analysis) is of greater relevance to travel demand modeling than the more common ``treatment effect literature.''
Secondly, as a result of asking different questions, travel demand modelers will likely need to change their definition of what a treatment variable is. By moving from treatment variables that are equivalent to policies being evaluated, to treatment variables that define characteristics of policies, travel demand modelers are able to draw causal inferences about the effects of policies that have not yet been implemented in the populations of interest. However, in redefining what a treatment variable is, travel demand modelers may face difficulties in applying standard causal inference techniques. For instance, there may be a greater chance of suffering from the curse of dimensionality when applying propensity score techniques. Additionally, there may be a greater need for sensitivity analysis due to modelers making strong assumptions about the nature of the relationship between treatment and outcome variables. And finally, the redefinition of treatment effects may expose modelers to a greater chance of confounding from unobserved factors. Unfortunately, travel demand modeling's ubiquitous cross-sectional data disqualifies many of the tools that have been developed to combat just this type of unobserved confounding.
Put simply, travel demand modeling may not have adopted techniques from the causal inference literature because the relevant techniques are not widely visible, nor are they necessarily straightforward or possible to apply in a transportation setting.
\section{Where we can go from here?}
\label{sec:looking_towards_future}
While the previous section may appear to be a rather somber conclusion about the intermingling ability of the causal inference and travel demand modeling worlds, we are actually quite optimistic that the two fields can be mutually beneficial. Presently, we think that there are many practices and perspectives that can be usefully adopted from the various branches of the causal inference literature. We will use Subsection \ref{sec:causal_lessons} to provide an overview of lessons that we think may be most valuable for travel demand modelers. Subsection \ref{sec:final_example} will then illustrate these points on a final, concrete example. It is hoped that this example will be familiar enough to travel demand modelers that they can go forth and begin trying to apply techniques from the causal inference literature in their own work. Finally, we use Subsection \ref{sec:giving_back_to_causality} to conclude by pointing out the potential contributions to the causal inference literature that can come from travel demand researchers.
\subsection{Lessons to learn from the causal inference literatures}
\label{sec:causal_lessons}
This subsection is targeted towards travel demand modelers. Herein, we provide a high-level and subjective overview of what we believe are three key and useful points from the various causal inference literatures. In particular, we make note of topics discussed in the computer science, machine learning, econometrics, statistics, and epidemiology literatures. Where appropriate, we also point out areas that we believe should be of future research interests to the travel demand modeling discipline.
\subsubsection{Lesson 1: Be explicit}
\label{sec:be-explicit}
As expressed by Judea Pearl, ``behind every causal conclusion there must lie some causal assumption that is not testable in observational studies'' \citep[p.99]{pearl2009statistics}. Consequently, travel demand modelers should be explicit about the assumptions they have made in order to draw their conclusions. Such an upfront statement of one's assumptions would facilitate an honest evaluation of the validity of one's claimed causal inferences. In particular, two pieces of information seem key. First, it would be useful for travel demand modelers to explicitly state their assumptions about the causal relationships between the observed explanatory variables, the outcome variables, and the unobserved variables that are thought to affect the outcomes. Secondly, it would be useful for travel demand modelers to explicitly state their identification strategy---i.e., how their dataset and methodology allow them to make use of their causal assumptions to identify the causal relationships of interest \citep{keele2015statistics}. Both of these points will be expounded on below.
In stating one's assumptions about how the observed and unobserved variables of interest are causally related, such statements would ideally be made both graphically and verbally. The figures most frequently used for these graphical displays are directed, acyclic graphs. When used to encode causal assumptions, such graphs are known as ``structural equation models\footnote{Note, structural equation models are often based on linear models \citep{golob_2003_structural}. These parametric assumptions need not be made, and indeed, for the use of encoding causal assumptions we refer to non-parametric structural equation models \citep{bollen_2013_eight} because we are not making any parametric assumptions at this stage of the analysis.}'' in transportation, econometrics, psychology, and sociology \citep{golob_2003_structural, bollen_2013_eight}; ``causal flow diagrams'' or ``system maps'' in systems dynamics \citep{abbas_1994_system, shepherd_2014_review}; ``causal diagrams'' in computer science and systems dynamics \citep{pearl2009causality, abbas_1994_system}; ``influence diagrams'' in statistics \citep{dawid2015statistical}; and ``causal graphs'' or ``path diagrams'' in the social sciences \citep{morgan2015counterfactuals}. These graphs serve multiple purposes. First, they aid one in communicating one's assumptions about a potentially complicated system of relations between various sets of observed and unobserved variables. Additionally, the graphs aid one in determining how and which causal effects are theoretically identifiable given one's assumptions. Once a graph has been shown, a verbal description can follow, explaining any additional causal assumptions, explaining the unobserved variables in greater detail, and/or justifying the exclusion of other variables from the graph.
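As a small illustration of how such a graph can be written down and interrogated in code, the sketch below encodes a hypothetical causal diagram for the bicycle-commuting example from Section \ref{sec:brief_primer} using the \texttt{networkx} library. The node names and the diagram itself are our own assumptions, and the path enumeration shown here is only a rough stand-in for a full implementation of Pearl's back-door criterion; its purpose is merely to show that, once the diagram is explicit, the paths that threaten identification can be listed mechanically.
\begin{verbatim}
import networkx as nx

# Hypothetical diagram: an unobserved cycling preference drives both
# residential location and mode choice, while location determines the
# bicycle travel time whose effect on mode choice we wish to identify.
graph = nx.DiGraph()
graph.add_edges_from([
    ("CyclingPreference", "ResidentialLocation"),  # CyclingPreference is unobserved
    ("CyclingPreference", "ModeChoice"),
    ("ResidentialLocation", "BikeTravelTime"),
    ("BikeTravelTime", "ModeChoice"),
])

treatment, outcome = "BikeTravelTime", "ModeChoice"

# List every path between treatment and outcome, ignoring edge direction.
# A path whose first edge points INTO the treatment is a back-door path;
# unblocked back-door paths confound the treatment-outcome association.
for path in nx.all_simple_paths(graph.to_undirected(), treatment, outcome):
    is_backdoor = graph.has_edge(path[1], path[0])
    label = "back-door path:" if is_backdoor else "directed path: "
    print(label, " -> ".join(path))
\end{verbatim}
Running the sketch lists the directed path through which the travel-time effect operates and one back-door path running through residential location and the unobserved cycling preference; because that preference is unobserved, the back-door path cannot be blocked by covariate adjustment alone, which is exactly the self-selection concern raised in Section \ref{sec:current_state_of_affairs}.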
While the preceding paragraphs concerned one's beliefs about how the world works in theory, it is also important to state one's assumptions about how the dataset in one's study permits the identification of causal effects. This corresponds to making explicit statements about the details of the dataset being used in one's study and how one's methodology will account for the treatment assignment mechanism of one's observations. As econometricians might say, one should be explicit about where the ``identifying variation'' in one's dataset is coming from and what one's ``identification strategy'' is \citep{angrist2010credibility, keele2015statistics}. Is one saying that all covariates of interest and all confounding variables have been observed? Is one relying on an instrumental variable approach to identification, and if so, what are one's instruments, and how strong are they? How is one dealing with unobserved confounding if any is suspected? These types of questions should be clearly answered in one's study in order to help others judge the validity of one's research.
\subsubsection{Lesson 2: Make fewer assumptions}
\label{sec:fewer-assumptions}
In 1983, Edward Leamer wrote a scathing critique of data analysis practices within economics. Bemoaning the lack of robustness in the conclusions that were drawn from various analyses, Leamer wrote that
\begin{quotation}
``an inference is not believable if it is fragile, if it can be reversed by minor changes in assumptions. As consumers of research, we correctly reserve judgement on an inference until it stands up to a study of fragility [...]. [...] The professional audience consequently and properly withholds belief until an inference is shown to be adequately insensitive to the choice of assumptions.''---\citep{leamer1983let}
\end{quotation}
Echoing these sentiments, a strong wave of criticism swept the academic world of econometrics and the social sciences more broadly in the 1970's and 1980's \citep{leontief1971theoretical, freedman1985statistics, abbott1988transcending}. The main intellectual thrust of these critiques was that the inferences made by many researchers rested on strong assumptions that could not be credibly defended. As pointed out by econometrician Charles Manski \citep{manski2003partial}, ``the credibility of inference decreases with the strength of the assumptions maintained,'' so based on the dubiously strong assumptions invoked by researchers, scholarly inferences themselves were also deemed untrustworthy.
Within travel demand modeling, where there is nearly ubiquitous appeal to assumptions of utility maximization and Type I extreme value distribution assumptions for unobserved factors, there has been some response to the credibility concerns just mentioned. Discrete choice modelers have relaxed assumptions to allow for taste heterogeneity amongst individuals (mixed logit), substitution patterns across alternatives (nested logit, cross-nested logit, etc.), distributional heterogeneity across alternatives (heteroskedastic logit, mixed logit with alternative specific variances), attribute non-attendance, and more. However, many academic studies, and most travel demand models used in practice, still rely on stringent assumptions about how one's explanatory variables lead to the probability of a given outcome.
In this sense, travel demand modelers may do well to follow the lead of researchers in other disciplines who also conduct model-based causal inference. In disciplines such as econometrics, epidemiology, biostatistics, etc., non-parametric models are beginning to see increased use. These models make substantially weaker assumptions than the assumptions typically made in travel demand models. For example, consider the words of biostatistician Mark van der Laan:
\begin{quotation}
``Why do we need a revolution? Can we not keep doing what we have been doing? Sadly, nearly all data analyses are based on the application of so-called parametric (or other restrictive) statistical models that assume the data-generating distributions have specific forms. Many agree that these statistical models are wrong. That is, everybody knows that linear or logistic regression in parametric statistical models and Cox proportional hazards models are specified incorrectly. [...] However, today statisticians still use these models to draw conclusions in high-dimensional data and then hope these conclusions are not too wrong. It is too easy to state that using methods we know are wrong is an acceptable practice: it is not! [...] That is, instead of assuming misspecified parametric or heavily restrictive semi-parametric statistical models, and viewing the (regression) coefficients in these statistical models as the target parameters of interest, we need to define the statistical estimation problem in terms of non-parametric or semi-parametric statistical models that represent realistic knowledge, and in addition we must define the target parameter as a particular function of the true probability distribution of the data.''---\citep{vanderlaan2011targed}
\end{quotation}
As van der Laan counsels, travel demand modelers should make greater use of non-parametric and semi-parametric models that ``represent realistic knowledge.'' Here, there is likely much room to learn from the practices of modern econometricians who make use of non-parametric models. Likewise, given that the machine learning community builds models of discrete outcomes with minimal assumptions, travel demand modelers can probably benefit from adapting techniques from the machine learning literature. Such a melding of techniques has already begun to occur in other disciplines. For instance, a growing cohort of econometricians and statisticians have begun making use of machine learning techniques for making causal inferences \citep[see for example][]{hill2011bayesian, su2012facilitating, athey2016recursive}, and the fields of epidemiology and biostatistics have begun to do the same \citep[e.g.][]{cruz2006applications, van2010targeted, lee2010improving}. Though machine learning techniques are not widely used within the field of travel demand modeling, we think this can and should change.
\subsubsection{Lesson 3: Validate one's inferences}
\label{sec:validate-inferences}
Undoubtedly, the prospective analyses that are needed in travel demand modeling require a ``structural'' approach to causal inference, where explicit models are used for the probabilities of individual travel choices. However, it would be wise to pay attention to the critiques that have already been levied at the structural approach to causal inference. In particular, it seems prudent to adopt a healthy dose of skepticism towards our travel demand models and subject them to numerous means of validation.
Looking first at in-sample means of validating one's inferences, Leamer writes in a rejoinder to his original critique that ``sensitivity analysis would help'' (\citeyear{leamer1985sensitivity}). We agree. It should be standard practice to subject one's model assumptions to multiple changes (changes in variable specification, radical changes in model form, etc.) in order to assess the robustness of one's results. However, sensitivity analyses by themselves are not enough. As noted by Angrist and Pischke (\citeyear{angrist2010credibility}),
\begin{quotation}
``[a] good structural equation model might tell us something about economic mechanisms as well as causal effects. But if the information about mechanisms is to be worth anything, the structural estimates should line up with those derived under weaker assumptions. [...] We find the empirical results generated by a good research design more compelling than the conclusions derived from a good theory [...].''
\end{quotation}
Such sentiments have been echoed numerous times in the causal inference literature \citep[e.g.][]{hendry1980econometrics, lalonde1986evaluating}. To ensure that our structural models are producing reasonable inferences, we should also be validating our models using out-of-sample data. Note that this out-of-sample validation does not simply test one's model on more samples from the observational distribution (e.g., via hold-out samples or cross-validation). The out-of-sample validation being spoken of here uses samples from a post-intervention distribution where the variables of interest have actually been ``manipulated'' and the samples being used for validation were not part of the original model estimation process.
Such out-of-sample validation can take numerous forms. First, in the case where we are making predictions about some future event (e.g., travel mode shares on Treasure Island), we should be performing post-evaluations using the actual results that are observed after the event in question (e.g. the actual mode shares after the Treasure Island development is opened). This is reminiscent of the early Bay Area Rapid Transit (BART) studies that were performed by Daniel McFadden \citep{mcfadden1974measurement, mcfadden2000disaggregate}. Before the BART system opened, McFadden predicted BART mode share, and he compared those predictions with the actual mode shares after the system opened. Such comparisons allow one to judge the credibility of a given structural analysis.
Beyond the use of post-evaluation studies, travel demand modelers should take advantage of ``natural experiments'' and highly credible observational studies (e.g. well done regression discontinuity and difference-in-difference designs). For instance, has a transit strike temporarily eliminated the public transit option for travelers? This presents an opportunity to observe whether travelers redistribute themselves according to the patterns predicted by one's travel demand model. Alternatively, is one's city or region considering the implementation of dynamic parking prices? Provided that (1) there is adequate public notice and (2) that prices remain stable long enough for people to reach new equilibrium behaviors, one can observe how people's driving habits change in response to changing driving costs. Do people's real changes match the predictions from one's travel model? Overall, our transportation systems are continually buffeted\footnote{Thanks to Michael Anderson for pointing out the importance of this fact.} by sporadic disturbances that change travel times, travel costs, and various types of physical infrastructure \citep[e.g.][]{marsden_2013_insights}. Such disturbances are invaluable opportunities to observe how well our analyses predict the effects of external changes to these key attributes.
Lastly, one should also strive whenever possible to make use of randomized controlled trials (RCTs). We recognize that there are formidable ethical and logistic challenges to performing RCTs in transportation settings. This is a large part of why RCTs have not been performed more frequently by travel demand modelers. However, as noted by Donald Rubin
\begin{quotation}
``[f]or obtaining causal inferences that are objective, and therefore have the best chance of revealing scientific truths, carefully designed and executed randomized experiments are generally considered to be the gold standard. Observational studies, in contrast, are generally fraught with problems that compromise any claim for objectivity of the resulting causal inferences.'' ---\citep{rubin2008objective}
\end{quotation}
Fortunately, as digital transportation services rise in popularity, the ease with which RCTs can be performed is also increasing. For example, the use of transit smartcards can help transit agencies perform experiments related to transit prices (via electronically distributed discounts) \citep{carrel_2017_san}. Private transportation network companies such as Lyft and Uber already perform large numbers of RCTs on their users, varying attributes such as prices, displayed wait times, etc. \citep{chamandy_2016_experimentation, attwell_2017_engineering}. To the extent that the results of travel demand models built on observational data match the results of these and other RCTs, one can have greater confidence in the inferences from one's model. And critically, if predictions from one's model built on observational data do not align with the results of one's RCT, then one should investigate which assumptions need to be modified in order to produce valid inferences.
Importantly, as a result of using post-evaluation, highly credible observational studies of the kind employed in the ``treatment effect'' literature, and RCTs, it often becomes easier to actually implement new transportation policies. The sad, and perhaps justified, truth is that many individuals in the public, many politicians, and even many transportation practitioners do not trust travel model outputs. Based on our experience, travel demand models are often viewed with suspicion. At the same time however, actual data on the result of implemented policies are viewed as having greater credibility. If we are to not just analyze transportation policies but actually be useful in helping good policies get implemented, then evaluation (not just forecasting) must be employed. To this end, consider the following quote from former New York City Department of Transportation Commissioner Janette Sadik-Khan. Known for her dizzying array of completed projects and her transformation of New York City's streets, she wrote that
\begin{quotation}
``like all politics, all transportation is local and intensely personal. A transit project that could speed travel for tens of thousands of people can be halted by objections to the loss of a few parking spaces or by the simple fear that the project won't work. [...] Data showed that interventions that resolved street problems improved safety and had neutral or even positive effects on overall traffic and business. The public discussion slowly graduated from anecdote to analysis. [...] Data change the scope of how we understand the street. They change the question from whether people like or want redesigned roads to whether these redesigns make the street work better.''---\citep{sadik2016streetfight}
\end{quotation}
In sum, post-evaluation, ``treatment effect'' studies, and RCTs are the opposite side of the travel demand modeling coin. All of these actions can help increase model credibility for both analysts and the public, thereby speeding the identification, adoption, and implementation of sound transportation policies.
\subsection{A final example}
\label{sec:final_example}
To end the discussion of what travel demand modelers can learn from the causal inference literature, we will sketch out how one might apply the various lessons from this paper to a travel demand question. In keeping with our stated goal of encouraging discussion and experimentation, as opposed to ``trying to solve issues related to the drawing of causal inferences from travel demand models,'' we do not carry the analysis through. Instead, we merely describe how such an analysis might proceed. This is partially because methodological issues such as those described in Section \ref{sec:why_the_disconnect} remain currently unresolved, and it is beyond the scope of this paper to make such methodological advancements. We emphasize that our example is merely given so travel demand modelers can have a concrete illustration that helps enable them to go forth and begin working out how to use such causal inference techniques in their own research.
The basic problem we will use to illustrate the methods described in this paper is the following. Imagine one is a transportation planner in Berkeley, CA. The policy question of interest is ``if I install a bicycle lane on University Avenue, from the Berkeley Marina to the University of California, Berkeley, how many additional Berkeley residents are expected to commute to work or to school by bicycle?''
Given that this is a question about the effects of an intervention, we are dealing with a causal inference problem. We will first state what we think the steps of analysis might be, and then we will expound on the less familiar steps afterwards. To be clear, we do not think these steps are necessary or sufficient for every causal inference task. However, we think these steps will be useful for and commonly used by many researchers. Now, the steps of analysis might proceed as follows:
\begin{enumerate}
\item
\label{step:re-express-problem}
Re-express the problem in terms of the ``treatment variables'' as described in Section \ref{sec:diff_treatments} instead of the ``treatment policy'' that was initially used to define the problem.
\item
\label{step:draw-diagram}
State one's causal assumptions, both verbally and graphically. This includes drawing the causal diagram that encodes one's belief about how the treatment variables and the other variables of substantive interest in this problem all relate to the outcomes of interest.
\item
\label{step:test-causal-assumptions}
Attempt to falsify the assumptions encoded in the causal diagram by deriving all testable implications from the diagram and testing them on the data at hand. If any of the testable implications are found to be false, then one or more of the assumptions in the causal diagram are false, and we must reformulate our assumptions. If all tests are passed, then the causal assumptions are compatible with the data at hand. Note, however, that we still cannot say the causal assumptions are true.
\item
\label{step:identification-check}
Determine whether or not the desired causal effects are identifiable given one's causal diagram (i.e. given one's causal assumptions about how the world works) and given the type of data one has access to. Note that this involves determining which variables, if any, need to be conditioned on and how the causal effect will be identified.
\item
\label{step:model-building}
Build models for the various quantities that are involved in the expression for one's causal effect. This is the step travel demand modelers are most familiar with and spend the most time on. It includes tasks such as modeling the outcome of interest (e.g. traveler mode choice) as a function of the covariates determined in the previous step.
\item
\label{step:post-evaluation}
Use natural experiments, ``real'' experiments (such as RCTs), or post-evaluation studies to validate one's analysis and determine what, if anything, should be changed about how such analyses are approached in the future.
\end{enumerate}
\subsubsection*{Step \ref{step:re-express-problem}:}
In this example, the treatment policy is the installation of the bicycle lane on University Avenue. However, expressed in this way, the policy is too narrowly defined. Indeed, because there was never a bicycle lane on University Avenue, there is no data on that \textit{\textbf{exact}} policy that can be used to inform our analyses. One cannot, for example, compare the bicycling rate of individuals before and after the installation of the bicycle lane. Instead, we need a variable that can be thought of as representing the mechanism through which all bike lane projects work (not just a lane on University Avenue). For instance, we might (simplistically) hypothesize that installing a bicycle lane on University Avenue affects people solely by changing the percentage of roadways between an individual's home and work that have bicycle lanes on them. Let us define the treatment variable $T_i$ as this percentage\footnote{We understand that in this example, we could have used other treatment variables. For instance we could let the treatment variable be the aggregate quality of the bicycling environment, as measured by the log-sum from a route choice model. We have instead used the treatment variable defined in the text since less background material is required to understand it.}, where ``between'' is some precisely defined region for each individual that is anchored by his/her home and commute destination.
Note that $T_i$ is a function of the policies employed. For each individual, we can define $T_i \left( \textrm{No bike lane} \right) = T_i ^{NBL}$, and this corresponds to the current percentage of roadways with bicycle lanes, since there is currently no bicycle lane on University Avenue. Likewise, we can define $T_i \left( \textrm{Bike lane on University Avenue} \right) = T_i ^{BL}$ as the percentage of roadways with bicycle lanes between individual $i$'s home and destination given that a bike lane on University Avenue is installed. Now, define $Y_i$ as an indicator variable that denotes what travel mode a person uses to commute. The quantity we want to estimate is
$$E \left[ P \left( Y_i = \textrm{bicycle} | \textrm{do} \left( T_i = T_i ^{BL} \right) \right) - P \left( Y_i = \textrm{bicycle} | \textrm{do} \left( T_i = T_i ^{NBL} \right) \right) \right]$$
where the expectation is taken over the entire population of Berkeley.
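To make the construction of the treatment variable concrete, the sketch below computes $T_i$ from a hypothetical list of roadway segments in an individual's home--work region under both policies. The \texttt{Segment} type, the helper function, and the segment lengths are illustrative assumptions rather than part of any actual analysis.
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class Segment:
    length_m: float       # roadway length in metres (hypothetical data)
    has_bike_lane: bool

def bike_lane_pct(segments):
    """T_i: share of roadway length with a bicycle lane in person i's region."""
    total = sum(s.length_m for s in segments)
    laned = sum(s.length_m for s in segments if s.has_bike_lane)
    return laned / total if total > 0 else 0.0

# Toy region for one commuter; the 900 m segment stands in for University Ave.
region = [Segment(900, False), Segment(1200, True), Segment(600, False)]
t_nbl = bike_lane_pct(region)                            # T_i under no bike lane
t_bl = bike_lane_pct([Segment(900, True)] + region[1:])  # T_i with the new lane
print(f"T_i^NBL = {t_nbl:.2f}, T_i^BL = {t_bl:.2f}")     # 0.44 and 0.78
\end{verbatim}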
\subsubsection*{Step \ref{step:draw-diagram}:}
Now, given the well defined problem specified above, we need to draw the causal diagram that depicts our beliefs about how the world works. Guidance on how to construct such diagrams is given in \citep{pearl_1995_causal, greenland_1999_causal, elwert_2014_endogenous, morgan2015counterfactuals}. In order to avoid lengthening this paper, we do not repeat their instructions. The main point, however, is that constructing a causal diagram involves the explicit representation of relationships between the outcome variable $Y_i$, the treatment variable $T_i$, and the miscellaneous other variables that affect $Y_i$---both observed and unobserved.
As noted by Morgan and Winship,
\begin{quotation}
``[w]riting down a full graph that represents a consensus position, or a set of graphs that represent alternative positions can be very difficult, especially if the arguments put forward in alternative pieces of research are open to multiple interpretations. Yet little progress on estimating causal effects is possible until such graphs are drawn, or at least some framework consistent with them is brought to bear on the questions of central interest.''---\citep[pg.~33]{morgan2015counterfactuals}
\end{quotation}
One result of this difficulty is that in studies purporting to draw causal inferences, the statement of one's assumptions can be one of the most viciously debated points. Indeed, ``assumptions are self-destructive in their honesty. The more explicit the assumption, the more criticism it invites, for it tends to trigger a richer space of alternative scenarios in which the assumption may fail'' \citep[~pg. 580]{pearl_2014_external}. Here, we do not wish to engage in such debates over our causal assumptions. The example of a causal diagram that we provide in Figure \ref{fig:bike-causal-diagram} is meant to be just that: an example. In a real application, the causal diagram and its embedded assumptions would have to be defended. We simply present such a diagram to give a concrete example of what one might look like and how such a diagram might be used.
\begin{figure}[h!]
\begin{centering}
\begin{tikzpicture}
[scale=0.8,
bend angle=25,
pre/.style={{Stealth[width=6pt]}-, shorten <=1pt, thick},
post/.style={ -{Stealth[width=6pt]}, shorten >=1pt, thick},
latent/.style={ellipse, draw=blue!50},
observed/.style={rectangle, draw=black!50}]
\node[observed, align=center] (characteristics) at ( 1.25, -1) {Individual Characteristics:\\\{Gender, Age, Income, Education,\\Physical Fitness, \# Children, \\\# Dependents, \# Housemates,\\Marital Status\}};
\node[latent] (bike-preference) at (1.35, 3) {Bicycle Preference};
\node[observed, align=center] (locations) at ( 10, 1.5) {Locations:\\\{Home Location, Work Location\}};
\node[observed] (bike-ownership) at ( -3, 5) {Bicycle Ownership};
\node[observed] (auto-ownership) at ( 6, 5) {Automobile Ownership};
\node[observed, align=center] (level-of-service) at ( 13, 6) {Level of Service:\\ \{Transit Availability,\\Travel Distance (Walk, Bike),\\ Travel Cost (Transit, Auto)\\Travel Time (Transit, Auto)\\Bike Lane Percentage\}};
\node[observed, align=center] (travel-mode) at ( 1.25, 9) {Travel Mode:\\\{Bike, Walk, Transit, Drive\}};
\node (imaginary) at (-5.5, 5) {};
\path[->] (locations.north) edge [post] (level-of-service.south)
edge [post] (auto-ownership.south)
(bike-preference.north) edge[post] (auto-ownership.south)
edge[post] (bike-ownership.south)
edge[post] (travel-mode.south)
(bike-preference.east) edge[post, bend right] (locations.west)
(travel-mode.south) edge[pre] (level-of-service.west)
edge[pre] (auto-ownership.north)
edge[pre] (bike-ownership.north)
(characteristics.north) edge[post] (bike-preference.south)
edge[post, bend right] (auto-ownership.south)
edge[post, bend left, bend angle=45] (bike-ownership.south)
(characteristics.east) edge[post, bend right] (locations.south)
(characteristics.west) edge[-, thick, out=180, in=270] (imaginary.south)
(imaginary.south) edge[post, out=90, in=180] (travel-mode.west);
\node (info) at (4, -4) {Note: Squares denote observed variables. Ovals denote unobserved variables.};
\end{tikzpicture}
\caption{Example Causal Diagram for Travel Mode Choice}
\label{fig:bike-causal-diagram}
\end{centering}
\end{figure}
When reading Figure \ref{fig:bike-causal-diagram}, note that the variables in the squares are observed, and the variables in ovals are unobserved. In particular, the ``bicycle preference'' node refers to a latent preference for cycling and self-identification as ``a cyclist.'' Additionally, the ``Individual Characteristics'' node and the ``Level of Service'' node denote sets of variables (given in the curly braces in each box). Each of the variables in the curly braces in these two boxes can be thought of as their own node, with the exact same ``parent nodes'' and ``child nodes.'' For instance, both bike lane percentage and transit availability are functions of home and work location, and both bike lane percentage and transit availability influence the travel mode that one chooses to commute by.
As mentioned earlier, the causal diagram encodes one's causal assumptions. For example, Figure \ref{fig:bike-causal-diagram} expresses the assumption that conditional on one's home and work locations, individual characteristics have no effect on the level-of-service variables. However, not all causal assumptions are displayed by the diagram. One important assumption that is not explicitly shown on the diagram is that the travel mode of a given individual is not affected by the bike lane percentage for other individuals in the population. This assumption of no interference between individuals is known as the Stable Unit Treatment Value Assumption (or SUTVA for short) \citep[p.~10]{imbens2015causal}. SUTVA allows one to estimate causal effects by comparing the probability distributions of the outcomes across groups of individuals with different bike lane percentage levels. Overall, as demonstrated in this paragraph, any causal assumptions of importance that are not encoded in the causal diagram should be explicitly stated in words.
\subsubsection*{Step \ref{step:test-causal-assumptions}:}
Along with a causal diagram comes a set of testable implications. Three such types of testable implications are (1) conditional independence tests, (2) tests of functional constraints, and (3) ``over-identification'' tests. First, note that our conditional independence assumptions are encoded in the causal diagram. For instance, given the causal diagram in Figure \ref{fig:bike-causal-diagram}, the following conditional independence\footnote{Note, $a \ \indep \ b \mid c$ means ``$a$ is conditionally independent of $b$ given $c$.''} assumptions are implied:
\begin{itemize}
\item $\textrm{Automobile Ownership } \indep \textrm{ Level of Service} \mid \textrm{Locations}$
\item $\textrm{Bicycle Ownership } \indep \textrm{ Level of Service} \mid \textrm{Locations}$
\item $\textrm{Individual Characteristics } \indep \textrm{ Level of Service} \mid \textrm{Locations}$
\end{itemize}
Each of these assumptions can be tested using one's actual dataset.
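As a hedged illustration of how such a test might be run in practice, the sketch below residualizes the two variables of interest on the conditioning set using linear regressions and then tests the correlation of the residuals. This is only a linear (partial-correlation) approximation to full conditional independence, and all column names are hypothetical stand-ins for the variables in Figure \ref{fig:bike-causal-diagram}.
\begin{verbatim}
import pandas as pd
from scipy import stats
from sklearn.linear_model import LinearRegression

def partial_corr_test(df, x, y, cond):
    """Linear test of x independent of y given cond: correlate the residuals
    of regressions of x and y on dummy-coded conditioning variables."""
    Z = pd.get_dummies(df[cond].astype("category"), drop_first=True)
    Z = Z.astype(float).values
    rx = df[x].values - LinearRegression().fit(Z, df[x].values).predict(Z)
    ry = df[y].values - LinearRegression().fit(Z, df[y].values).predict(Z)
    return stats.pearsonr(rx, ry)   # (correlation, p-value)

# Hypothetical column names for the first implication listed above:
# r, p = partial_corr_test(df, "auto_ownership", "transit_travel_time",
#                          ["home_zone", "work_zone"])
# A small p-value is evidence against the implied conditional independence.
\end{verbatim}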
Secondly, in models containing latent variables, there may be ``functional constraints'' that can be tested. These constraints are basically statements that certain causal effects (i.e. certain functions) depend only on a particular subset of variables. One can then verify that this is indeed the case by ensuring that the computed causal effect is constant across different values of the variables that are supposed to have no influence on the causal effect. See \citet{tian2002testable} for more information.
Lastly, in a similar fashion, ``over-identification'' tests can be used when trying to falsify a given causal diagram. ``Over-identification'' refers to the situation where, given a particular causal diagram, there are two or more distinct ways to compute a given causal effect. While this is not the case in the causal diagram of Figure \ref{fig:bike-causal-diagram}, the general idea is that one computes the causal effect by all methods, and then tests for equality of the computed values. See for example \citet{sargan1958estimation, kirby2009using}.
\subsubsection*{Step \ref{step:identification-check}:}
Once one has tried and failed to falsify one's causal diagram, one can perform an identification analysis. This step is now automatic because the necessary procedures have been reduced to algorithms that are implemented in software that is freely available online. See, for example, \citet{breitling2010dagr, textor2016robust, tikka_2017_identifying}. In an observational setting, if the effects one wants to estimate (i.e. $P \left( Y_i = \textrm{bicycle} \mid \textrm{do} \left( T_i = t \right) \right)$) are identifiable, then such software will return an expression for this quantity in terms of observational distributions (i.e. distributions without the ``do'' operator) or a set of variables to be conditioned on in order to estimate the causal effect of interest. By looking at the variables contained in this expression, one will know what variables must be conditioned on, and in looking at the various probability distributions that are returned, one will know what models need to be built and estimated.
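For instance, the identification step can be sketched in Python with the \texttt{dowhy} package, one freely available alternative to the R tools cited above. The simplified graph string below is a stand-in for Figure \ref{fig:bike-causal-diagram}, the data frame and variable names are hypothetical, and the exact graph-string syntax and handling of latent nodes may differ across package versions.
\begin{verbatim}
from dowhy import CausalModel

# Simplified stand-in for the diagram; "bike_pref" is latent and therefore
# deliberately absent from the observed data frame df.
gml = """graph[directed 1
  node[id "bike_pref" label "bike_pref"]
  node[id "locations" label "locations"]
  node[id "T" label "T"]
  node[id "mode" label "mode"]
  edge[source "bike_pref" target "locations"]
  edge[source "bike_pref" target "mode"]
  edge[source "locations" target "T"]
  edge[source "T" target "mode"]]"""

model = CausalModel(data=df, treatment="T", outcome="mode", graph=gml)
print(model.identify_effect())  # should report a back-door set {locations}
\end{verbatim}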
For a concrete example, see the diagram given in Figure \ref{fig:bike-causal-diagram} once more. Here, the causal effect is identified, and it is given by the following expression:
\begin{equation}
\label{eq:bike-covariate-adjustment}
P \left( Y_i = \textrm{bicycle} \mid \textrm{do} \left( T_i = t \right) \right) = \int P \left( Y_i = \textrm{bicycle} \mid T_i = t, \textrm{Locations$_i$} \right) P \left( \textrm{Locations$_i$} \mid T_i = t \right) d \left( \textrm{Locations}_i \right)
\end{equation}
From the given expression, one can see that a researcher would need to condition on the home and workplace locations of the various individuals in the dataset, and the researcher would need to build a model for the joint home and workplace location choices of the individual. Here, standard mode choice models are insufficient\footnote{As noted by an anonymous referee, it is not necessarily the case that a standard mode choice model will provide inconsistent estimates of $P \left( Y_i = \textrm{bicycle} \mid \textrm{do} \left( T_i = t \right) \right)$. However, the estimating expressions derived from the ``do-calculus'' operations on causal graphs have already been shown to be sufficient for consistently estimating causal effects \citep{galles_1998_axiomatic, huang_2006_pearl}. If analysts wish to use differing expressions, then the analysts should show that their expressions also consistently estimate the desired causal effects or at least meet some other desired criteria.}. Although conventional mode choice models condition on individual characteristics, bicycle ownership, automobile ownership, and the level-of-service variables, controlling for these variables still allows for the possibility that the remaining variation in bike lane percentage is due to the ``confounding'' variable: the individual's latent bicycle preference (through one's home and work location choices). As a result, one cannot treat the observed distribution as being equal to the desired post-intervention distribution. We have to directly condition on the home and work locations to sever any ties between the confounding bicycle preference and the bike lane percentage\footnote{Note that if bicycle preference did not directly affect an individual's travel mode choice, then conventional travel demand models would be sufficient for estimating the effect of bicycle lane percentage on mode choice. In that hypothetical scenario, bicycle lane percentage would not be endogenous because bicycle preference would not be part of the error term.}.
\subsubsection*{Steps \ref{step:model-building} and \ref{step:post-evaluation}:}
In the previous subsection, we noted that Equation \ref{eq:bike-covariate-adjustment} called for models of the following probabilities: (1) the probability of bicycling given the bike lane percentage and one's home and work locations, and (2) the joint probability of an individual choosing his/her home and workplace locations, conditional on the bike lane percentage $t$. These models differ from typical travel demand models. The first difference is that the mode choice model, $P \left( Y_i = \textrm{bicycle} \mid T_i = t, \textrm{Locations$_i$} \right)$, conditions on far fewer variables than typical mode choice models. Secondly, the mode choice model directly conditions on the home and workplace locations instead of using proxies such as the level-of-service variables. Thirdly, the full model used to estimate the causal effect combines a joint location choice model with a mode choice model.
Differences from usual travel demand models notwithstanding, at least\footnote{There are definitely multiple sources of analytical difficulty in our example. We are not aiming to be comprehensive. If readers think of their own challenges in this example and wish to know how to address those challenges, we view this as a success for our efforts to spark consideration and discussion of causality in travel demand settings.} two problems will be encountered when trying to construct the needed models. The first issue is the fact that based on subject-matter insight, we know that even in the population, there are few individuals with the same home and workplace location. As a result, one will not have enough individuals to estimate $P \left( Y_i = \textrm{bicycle} \mid T_i = t, \textrm{Locations$_i$} \right)$ after conditioning. Secondly, even if one had multiple individuals with the same home and workplace location, the level-of-service variables such as bicycle lane percentage are a deterministic function of these two locations. There will therefore be no variation in the bike lane percentage after conditioning. Again, the requisite probabilities will not be estimable. Resolving these two issues will be key to solving the causal inference problem given in this example. Such a resolution is beyond the scope of this paper, but we think it is instructive to identify the problem so that other researchers may join us in working on this and similar issues.
As a first step in thinking about how one might identify the causal effects of bike lane percentage on the probability of bicycling to work or school, we offer the following preliminary thoughts. First, while conditioning on variables that influence ``self-selection'' of the bike lane percentage is one way to identify the causal effect of interest, it is not the only way. The identification strategy of conditioning on the variables that lead to bike lane percentage is known as using the ``back-door criterion.'' If we can identify a variable that provides insight into the ``mechanism'' by which bike lane percentage influences an individual's travel mode, then we may be able to use the alternative ``front-door criterion'' \citep{elwert_2013_graphical, knight_2013_causal} to identify the desired causal effect. We will give an example to illustrate this alternative identification strategy. Assume that increasing an individual's bike lane percentage only influences that individual's travel mode by increasing his or her perceived sense of traffic safety for the specific commute trip by bicycle. If we are able to collect measurements of an individual's perceived sense of safety, then using integrated choice and latent variable techniques, we may be able to estimate (1) the effect of bike lane percentage on perceived safety, and (2) the effect of perceived safety on an individual's travel mode. Combining these two estimates with our assumptions, we will be able to estimate the effect of bike lane percentage on an individual's probability of traveling by bicycle. We do not claim that this is the only way to estimate the desired causal effect, or even a correct way to estimate the desired effect (since the assumptions may be incorrect), but we use the discussion as an example of how one might proceed.
Now, once one completes Step \ref{step:model-building}, one will have a model for $P \left( Y_i = \textrm{bicycle} \mid \textrm{do} \left( T_i = t \right) \right)$. This model can then be used to calculate the desired quantity:
$$E \left[ P \left( Y_i = \textrm{bicycle} \mid \textrm{do} \left( T_i = T_i ^{BL} \right) \right) - P \left( Y_i = \textrm{bicycle} \mid \textrm{do} \left( T_i = T_i ^{NBL} \right) \right) \right]$$ The end result will be a causally valid inference about the effect of the University Avenue bike lane on citywide demand for bicycle commuting.
For Step \ref{step:post-evaluation}, one should use data from actual bicycle lane interventions to corroborate the model that one is making inferences from. Note that evaluating real outcomes to see whether or not they match one's predictions has been a part of travel demand analysis from the beginning (see the discussion in Section \ref{sec:validate-inferences} about the early BART studies). While such evaluations may not be performed regularly by travel demand modelers any more, the knowledge of how to do so exists. See, for example, Section \ref{sec:validate-inferences}. In the context of our example, natural experiments might take the form of measuring bicycle usage before and after the construction of bicycle lanes that are built in an ``unexpected'' manner. For instance, one might look at bicycle usage before and after the stealthy construction of bicycle lanes in New York City for public trials. We can then compare our predictions of the change in bicycle mode share before and after the installation of these lanes. Alternatively, an RCT might be analyzed whereby low-income residents applying for housing assistance are randomly placed in housing and the individuals have differing values of $T_i$. Bicycle commuting rates can then be studied using the different individuals in the program. Finally, if the bicycle lane is actually constructed on University Avenue, one should perform a post-evaluation study whereby the bicycling rates amongst residents are measured before and after the construction of the bicycle lane\footnote{Of course, care should be taken to measure bicycling rates amongst those who lived and worked in the area for a sufficiently long time before and after the construction of the bike lane. This would be done to remove any influence of individuals who may have moved their home or work location to the area after they knew the bike lane existed or was going to be built.}. Such studies will confirm whether one's model is actually performing well.
\subsection{Conclusion}
\label{sec:giving_back_to_causality}
So far, we have discussed what causal inference is, the overlapping goals of causal inference and travel demand modeling, and some reasons why we think a gulf exists between these two disciplines. Moreover, we have extracted some lessons from the causal inference literature that we think can be of use for travel demand modelers, and we have tried to show how these lessons might be used in a concrete, travel demand setting. To conclude, we now turn to the prospect of the causal inference literature being enriched by the work of travel demand modelers, and we end with a distinctly hopeful outlook.
Travel demand modeling, in its modern incarnation, grew out of the application of econometrics to the study of human travel patterns. The problems faced in modeling human travel choices are difficult, and as a result, travel demand modeling applications provided the impetus for many of the most advanced discrete choice modeling techniques to date. Concurrently, the broader field of econometrics has moved on to embrace the challenge of determining causal effects from observational and experimental data \citep{angrist2010credibility}, and we think it is only natural that travel demand modeling ``catch-up'' to its progenitor. As noted in Sections \ref{sec:why_the_disconnect} and \ref{sec:causal_lessons}, there are a number of challenges to be faced in bringing causal inference techniques and perspectives to bear on travel demand modeling applications. However, methodological challenges have always provided the most fertile ground for progress. Accordingly, we highlight three causal inference topics that we think will be particularly fruitful grounds for research and application by travel demand modelers.
First, causal inference researchers in computer science and epidemiology have been producing a small but growing literature on the topic of ``causal transportability'' and ``meta-synthesis'' \citep{hernan_2011_compound, petersen_2011_compound, pearl_2011_transportability, bareinboim_2012_transportability, pearl_2012_calculus, lee_2013_m, bareinboim_2013_transportability, singleton_2014_motivating, bareinboim_2014_transportability}. Put simply, the study of causal transportability seeks to determine the conditions and procedures with which it is possible to \textit{transport} (i.e. generalize) causal inferences learned in one setting to another \citep{pearl_2011_transportability, pearl_2014_external}. Similarly, ``meta-synthesis'' \citep{pearl_2012_calculus, lee_2013_m, bareinboim_2013_transportability, bareinboim_2014_transportability} is concerned with the formalization of procedures for combining inferences from multiple studies into one aggregated measure for a target population that need not have been involved in any of the studies being combined. Thus far, papers about causal transportability have mainly focused on the abstract mathematical conditions that permit the transport of causal inferences. There has been a comparative lack of research applying the knowledge obtained thus far to real applications\footnote{Notable exceptions include work such as \citet{singleton_2014_motivating}.}. There is likely much to be gained from applying the abstract mathematics of causal transportability research to real problems and from attempting to integrate transportability notions with domain specific modeling techniques and traditions. Already, travel demand modelers have much experience with two specific transportability processes: (1) transferring model results from one time and place to another \citep{agyemang_1997_spatial, fox_2015_temporal}, and (2) generalizing insights from stated preference (SP) experiments to revealed preference (RP) studies. In the latter case, travel demand studies have used a number of techniques to facilitate the desired transport of model results. For example, these techniques include methods such as joint RP-SP estimation techniques \citep{brownstone_2000_joint, feit_2010_reality}, incentive-aligned SP experiments to increase the similarity between the SP study and the RP environment where the results will be used \citep{ding_2005_incentive, ding_2007_incentive, moser_2010_using, chung_2017_willingness}, and certainty calibration \citep{beck_2016_can}. From both model transferability and RP/SP studies, travel demand modelers may have much accumulated wisdom to offer the causal transportability literature. Conversely, travel demand modelers may also have much to gain by incorporating the existing causal transportability techniques, especially when trying to determine whether or not the desired inferential transport can actually be performed.
Secondly, feedback processes and change over time have been ignored in much of the causal inference research performed thus far\footnote{Thanks to two anonymous referees for raising this point.}. Indeed, much of the causal inference work performed thus far takes the directed \textit{acyclic} graph as its starting point (or equivalently, the \textit{recursive} structural equation model used in much of the social sciences). Such work explicitly excludes systems of relationships where variable 1 both causes and is affected by variable 2 (possibly offset by a time lag). A transportation example of such a feedback process is where an individual's attitudes towards bicycling affect the individual's choice of bicycling or not, but the individual's experiences while bicycling will then affect his/her future attitudes towards bicycling. While possibly a rare occurrence in other disciplines, we expect that such feedback processes are common in travel demand settings. As another example, consider the effect of increased driving costs. At the outset, the increase in costs is expected to damp demand by some initial amount. However, the initial decrease in the number of drivers may alleviate congestion, thereby causing an increase in traffic speeds and leading some potential drivers to begin driving due to the faster speeds. The overall decrease in the number of drivers will therefore be less than the initial decrease due to the increased prices. This overall decrease in the number of drivers may be over-predicted if the feedback mechanism is ignored. To give credit where it is due, a limited amount of causal inference work has tried to account for such feedback processes. This work includes the use of chain graphs \citep{lauritzen_2002_chain}, directed \textit{cyclic} graphs \citep{schmidt_2009_modeling}, the ``settable systems'' framework developed by econometricians Halbert White and Karim Chalak (\citeyear{white_2009_settable}), and dynamic causal networks \citep{blondel_2017_identifiability}. However, as with the aforementioned causal transportability studies, these techniques have seldom been used in real applications. Here, travel demand modelers would essentially forge the link between systems dynamics researchers who commonly use causal diagrams to portray systems with feedback processes over time (for example \citet{abbas_1994_system} and \citet{shepherd_2014_review}) and causal inference researchers who use causal diagrams to explicitly identify and compute causal effects.
Finally, travel demand researchers face severe challenges when trying to make robust quantitative claims. The fragility of travel demand modeling results often comes from ambiguity over how to choose the variables to be conditioned on when trying to estimate the probability of interest, how to specify the ``systematic utility'' equations\footnote{We realize that not all travel demand models have systematic utilities. However, given that most travel demand models are based on random utility maximization, the comments in this paragraph are valid for most travel demand analyses.}, how the preferences underlying the systematic utilities might change over time, how to specify the probability function that links the systematic utilities with the probability of the observed choice, and even ambiguity in the causal diagram upon which the entire analysis should be built (e.g., see the literature on observationally equivalent causal graphs). As was recently noted by Dagsvik:
\begin{quotation}
``A well-known problem in quantitative economic analysis is that economic theory provides limited guidance for the specification of functional forms of quantitative structural economic models. An unfortunate consequence is that it becomes difficult to discriminate between econometric model formulations based on the same theoretical framework which fit the data reasonably well but result in different counterfactual predictions. Given this state of affairs, the analyst is forced to choose between model specifications without adequate theoretical or empirical support.''--\citep{dagsvik_2017_invariance}
\end{quotation}
Overall, we believe that this ambiguity in travel demand models will remain for the foreseeable future. So although the type of sensitivity analysis advocated in Section \ref{sec:validate-inferences} will help uncover how much uncertainty there actually is in one's estimates, one will likely always need good methods of reporting this uncertainty. Here, we have seen few \textit{structural} causal inference studies that managed to present the uncertainty inherent in their analysis in an easy-to-understand and useful manner. In our opinion, the inherent ambiguity in travel demand models gives transportation researchers an opportunity to take the lead on discovering parsimonious and meaningful ways of representing the uncertainty in one's analysis.
In conclusion, if the past is any indication of the future, then based on the topics above, we believe that travel demand modelers will once again be able to advance the arsenal of quantitative techniques, this time in the causal inference arena as opposed to simply in the arena of discrete choice. By generalizing and adapting causal inference techniques for travel demand applications, travel modelers can simultaneously contribute to the field of causal inference and fulfill their original purpose of \textit{validly} ``predict[ing] the consequences of alternative policies or plans [...] under radically different conditions'' than those present at the time of one's analysis.
\section*{Acknowledgements}
We thank Elias Bareinboim, Kenneth Train, Eric Miller, Mohamed Amine Bouzaghrane, and Hassan Obeid for comments on earlier drafts of this paper, and we thank Michael Anderson for providing us with early introductions to the topic of causal inference. We are also very grateful to our anonymous reviewers for their thoughtful and detailed comments on an earlier draft of this manuscript. Nevertheless, any viewpoints (and, of course, any errors) expressed in this manuscript are entirely our own.
\newpage
\section*{\refname}
\section{Introduction} \label{sec:Introduction}
Measuring the quality of soccer players is the main task of many staff members at soccer clubs. Coaches, scouts and sporting directors evaluate players daily, with the goal of helping tactical or recruitment decisions. Judgments about player quality significantly impact a team's chances of success (or failure).
There are many approaches to rate players' quality. Traditionally, soccer clubs work with qualitative analysis from scouts, generating reports from live and on-demand video analysis. Recently, the trend started to change towards data-driven quantitative analysis of players. Here, data analysts use tools built with event or tracking data, which summarize the data into key features that the analyst uses to filter players who fit their team. Teams and service providers are also investing in machine learning models, for example, to predict how much value players add to their team or to predict a player's market value. These models also have value from the regulators' perspective, such as policing accounting fraud in soccer transfers.
Due to the value of these player rating models, even from the outsider's perspective (fan engagement, fantasy, betting), there are many approaches in the academic literature and in private businesses, such as live score apps and data service providers. Our work focuses on VAEP, a framework that evaluates actions in a game by estimating how an action changes the probability of scoring.
Instantaneous approaches to rate player quality are helpful. They summarize the player to a value (or a set of values), making it much easier to guide decision-making. Nevertheless, they have one flaw: they do not consider how the player developed over time. Because they focus on a single value, they can misguide decisions. What if the recent player performance was an outlier? What if luck played an important role in recent performances? Is the player improving over time? How likely is it that the player hit a skill ceiling? From the perspective of recruitment and player development, it is important to track a player's performance over time to make a more informed (and better) decision.
In this paper, we make the following contributions:
\begin{itemize}
\item \textbf{Section \ref{sec:Literature Review}} reviews the literature on machine learning-based player evaluation systems and fundaments our choice regarding VAEP.
\item \textbf{Section \ref{sec:Defining the VAEP model}} describes our VAEP models with three significant changes versus the original proposal \cite{decroos_actions_2019}: (1) we modify the label to reflect how much time before a goal the action occurs, (2) address the problem using regression, and (3) use the Random Forest algorithm with a smaller set of features, which creates a simpler model. Furthermore, we distinguish between the Intent VAEP (I-VAEP) model and the Outcome-aware VAEP (O-VAEP) model.
\item \textbf{Section \ref{sec:Creating tracking metrics}} details the process of extracting series and preprocessing them to obtain robust tracking metrics for players' skills. Specifically, we use data from Spanish La Liga between 2009 and 2019 to generate the time series. We debate the differences between using the game date or the cumulative games played as the series index, and create short-term and long-term metrics.
\item \textbf{Section \ref{sec:Experimental Use Cases}} presents the following experiments:
\begin{itemize}
\item We show visualizations of the data to find the best players and their best way of creating value for the team. We exemplify how the generated data helps us visualize playstyle changes, with an example of the evolution of Lionel Messi's playstyle over time.
\item Continuing to explore our dataset, we evaluate whether player performance volatility should play a more critical role in decision-making.
\item We build an expected player development curve with our time-series data, handling dataset biases such as younger players in our dataset being much better than the average young player.
\end{itemize}
\item \textbf{Section \ref{sec:Conclusion}} discusses the advantages and disadvantages of treating the player evaluation metrics as time series while also discussing some of the limitations of our overall approach.
\end{itemize}
\section{Literature Review} \label{sec:Literature Review}
There are many options for building player evaluation frameworks. The simplest is perhaps the ELO rating system \cite{elo_rating_1978}. Originally developed to measure the strength of chess players, ELO systems can also evaluate relative team strength. The original ELO system does not rate individual players in team sports. However, we can adapt the model to measure a player's contribution by calculating the differences in the ELO rating when the player is on and off the pitch. This system works quite well in high-rotation sports, like basketball, where players cycle through bench time and play time several times during a game.
In the past decade, the concept of expected goals (xG) \cite{pollard_estimating_2004} gained popularity in the field. xG measures the probability of a shot ending in a goal. For this, it uses game-state information, like the ball's position, which part of the body the player is using, the position of the goalkeeper, and much more. The major drawback of xG is that it only evaluates one action (shots). Although useful to measure the performance of strikers, it does not scale to other positions. Expected assists, which measures the likelihood of a pass leading to an assist, has similar drawbacks.
Some frameworks enable extending player evaluation across multiple action types. Expected Threat (xT) \cite{roy_valuing_2020}, Expected Possession Value (EPV) \cite{fernandez_framework_2021}, and Valuing Actions by Estimating Probabilities (VAEP) \cite{decroos_actions_2019} are frameworks that calculate the likelihood of a position or an action leading to a goal in the near future. In a way, these methods extend xG-like concepts to a broader set of actions and game states.
In this paper, we use a custom VAEP model to estimate the value of actions in our dataset. Different methods can be used to generate results equivalent to the ones we obtained. However, we opted for VAEP because (1) it is easier to apply to the datasets available, and (2) the authors are more familiar with this framework.
\section{Defining the VAEP model} \label{sec:Defining the VAEP model}
To build our VAEP models, we used the following event data: the training set is from the German Bundesliga, English Premier League, French Ligue 1, and Italian Serie A from 2018-2019. The test set is from the Spanish La Liga between 2009-2010 and 2018-2019. The final feature set is available in Appendix \ref{ap:features}.
From the feature list, there is a particular set of features of interest: \textit{endAngleToGoal}, \textit{endDistanceToGoal}, \textit{outcome}, \textit{distanceToPost}, and \textit{distanceToBar}. These features include information about the outcome of actions. They inform the algorithm on the outcome of the action, which impacts how the action is valued. By introducing these features, our model evaluates the execution of the action by the player. When these features are not included, the model measures a player's intent. We call the models Outcome-aware VAEP (O-VAEP) and Intent VAEP (I-VAEP).
In opposition to the original proposal \cite{decroos_actions_2019}, we build a regression model. Instead of splitting the state value between defensive and offensive values, we use a continuous label between -1 and 1, where -1 indicates certainty that the team will concede, and 1 indicates certainty that the team will score a goal.
We describe the label's value $l$ in Equation \ref{eq:label}, where $l_e$ indicates the value of the label of event $e$, $T$ is the team performing the event, $t$ is the time of the event in minutes, and $O$ is the ordinal number of the action (after sorting by time of occurrence). The component $\max\{1 - (t_{goal} - t_{e}), 0\}$ captures how much time before the goal the action occurs, capped at 1 minute, and $int(O_{goal} - O_e \le 5)$ captures whether the action is one of the last 5 actions before a goal. The component $(2*(T_e == T_{goal}) - 1)$ relates to whether the team that scored is the same team that performed the event and gives negative values to actions that lead to conceding goals. After calculating the label, we check whether the action occurred in the same period/game to ensure data consistency.
\begin{equation}
l_e=(2*(T_e == T_{goal}) - 1) * \max \{ \max\{1 - (t_{goal} - t_{e}), 0\}, int(O_{goal} - O_e \leq 5)\}
\label{eq:label}
\end{equation}
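For illustration, a minimal sketch of this labelling step is shown below. It assumes a pandas data frame of events in which each row already carries the time, team, and ordinal number of the next goal in the same period/game; all column names are hypothetical.
\begin{verbatim}
import pandas as pd

def label_events(events: pd.DataFrame) -> pd.Series:
    # Sign: +1 if the acting team scored the next goal, -1 otherwise.
    sign = 2 * (events["team"] == events["next_goal_team"]).astype(int) - 1
    # Time component: linear decay over the minute before the goal.
    time_part = (1 - (events["next_goal_minute"] - events["minute"])).clip(lower=0)
    # Order component: 1 if the event is among the last 5 actions before the goal.
    order_part = (events["next_goal_order"] - events["order"] <= 5).astype(int)
    label = sign * pd.concat([time_part, order_part], axis=1).max(axis=1)
    # Events with no subsequent goal in the same period/game get label 0.
    return label.where(events["next_goal_minute"].notna(), 0.0)
\end{verbatim}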
To train our model, we use the \textit{Random Forest} learner from \textit{scikit-learn} with default parameters, except for \textit{min\_samples\_split}, which was set to 50 to avoid overfitting. Both models share the same parameters. We measure \textit{Mean Absolute Error} (MAE) and \textit{Median Absolute Error} (MedAE) to evaluate our models, and the results are available in Appendix \ref{ap:results}.
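A minimal training and evaluation sketch, assuming the feature matrices and labels described above are already assembled as \texttt{X\_train}, \texttt{y\_train}, \texttt{X\_test}, and \texttt{y\_test}, could look as follows.
\begin{verbatim}
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, median_absolute_error

# Same configuration for the I-VAEP and O-VAEP models; only the feature
# set differs (outcome-related features are dropped for I-VAEP).
model = RandomForestRegressor(min_samples_split=50, random_state=42)
model.fit(X_train, y_train)

preds = model.predict(X_test)
print("MAE:  ", mean_absolute_error(y_test, preds))
print("MedAE:", median_absolute_error(y_test, preds))
\end{verbatim}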
Having estimated the probability of scoring/conceding at the end of each event, we can now calculate how much value an action creates. For this, we calculate the difference between the current probability of scoring and the probability two actions earlier. We opted for this approach since our dataset contains many paired actions. For example, if a foul precedes a dribble, both actions should contribute with the same magnitude to the change in the probability of scoring. Using a lag of two, we ensure we adequately value paired actions. To ensure consistency, we remove kick-off actions.
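A sketch of this step, assuming the events are sorted by game and ordinal number and that \texttt{feature\_cols} names the model's features (hypothetical names), is:
\begin{verbatim}
# Action value: change in the predicted scoring probability relative to
# two actions earlier, computed within each game.
events = events.sort_values(["game_id", "order"])
events["p_score"] = model.predict(events[feature_cols])
events["vaep_value"] = events.groupby("game_id")["p_score"].diff(2)
events = events[events["action_type"] != "kick_off"]
\end{verbatim}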
\section{Creating tracking metrics} \label{sec:Creating tracking metrics}
After calculating the value of each action, there is one additional step to build the tracking metric: grouping all player actions per game, building a time series that contains a player's total VAEP in each game. To ensure consistency in the data, we calculate VAEP per minute played and only keep games where the player played more than 60 minutes.
Besides tracking all actions, we can track a specific subset of actions. This paper also generates time series related to passes, dribbles, and shots. Due to the specificity of the position, we excluded goalkeepers from the analysis.
As shown in Figure \ref{fig:dataviz}, performance per game is very volatile. To extract meaningful information, we need to capture the trend within the time series, ignoring the fluctuations between games. We use moving averages across different windows to capture the trend in our data. Another option would be using exponential moving averages, but our experiments showed that they are less robust to outliers.
\begin{figure}
\includegraphics[width=\textwidth]{img/001_dataviz.pdf}
\caption{On the left, we present the I-VAEP and O-VAEP values per game in the background, and the short-term and long-term ratings with the difference between both ratings in front. On the right, we observe the player's game ratings distribution.} \label{fig:dataviz}
\end{figure}
Before calculating the moving averages, we need to define the moving windows. The first decision is which index we will use in our series. We have two options: (1) use the cumulative count of games played, or (2) use calendar time. In the first option, we define the window as a number of games. The series has consistent indexes without problems regarding missing data from injuries or season breaks. In the second option, we define the window as a number of days. The index is inconsistent, but this format contains more information, like periods of player injury or players leaving the league. We use a game-count index for this work, due to its simplicity.
To track the evolution of players, we create two different metrics: short-term and long-term player performance. Each metric has a specific role. The short-term metric captures the short-term trend of a player without overreacting to a single performance, acting as a proxy for a player's recent form. The long-term metric considers the player's long-term performance and provides a consistent, less volatile measure of player performance.
In this work, we set the short-term window to 10 games, with a minimum of 5 games in this period. The long-term window is 40 games with at least 20 games. We set these values by trial and error on the training set. Our goal was to create a metric that captured player quality as fast as possible without overreacting to a single game's performance.
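As an illustrative sketch, assuming one row per player-game with a \texttt{vaep\_per\_minute} column (hypothetical names), the two ratings can be computed with rolling means:
\begin{verbatim}
ratings = ratings.sort_values(["player_id", "date"])
grouped = ratings.groupby("player_id")["vaep_per_minute"]
ratings["short_term"] = grouped.transform(
    lambda s: s.rolling(window=10, min_periods=5).mean())
ratings["long_term"] = grouped.transform(
    lambda s: s.rolling(window=40, min_periods=20).mean())
\end{verbatim}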
\section{Experimental Use Cases} \label{sec:Experimental Use Cases}
\subsection{Evaluating Players}
The first use case for our performance metrics is to evaluate the best players in the league. Figure \ref{fig:ltovaep} shows the long-term O-VAEP rating over time.
\begin{figure}
\includegraphics[width=\textwidth]{img/002_long_term_ovaep.pdf}
\caption{The long-term O-VAEP ratings of the 10 players with highest peak.} \label{fig:ltovaep}
\end{figure}
We observe that Lionel Messi was the best player in La Liga during the period in our dataset. Except for a small period at the end of 2014, when Cristiano Ronaldo took the lead, Messi dominated first place. Ronaldo is also a clear second most of the time, until he is overtaken by several players in 2016. Luis Suárez completes the podium. It is interesting to note that many series look correlated, even for players of different teams, which might indicate that competition between top players gets the best out of them.
\subsection{Rating's Granularity}
We can also evaluate players' ratings by their performance in specific actions. Figure \ref{fig:granular} presents the VAEP ratings of passing, dribbling, and shooting actions over time for the five players with the highest O-VAEP values.
\begin{figure}
\includegraphics[width=\textwidth]{img/004_granular.pdf}
\caption{The contribution of different action types to the player's O-VAEP ratings.} \label{fig:granular}
\end{figure}
Unsurprisingly, Messi leads in value created from passes and dribbles for nearly the entirety of the data set. But more interesting than analyzing who is best at what is to understand how players change over time. In Figure \ref{fig:granular}, we observe that the value Messi creates from dribbles follows the opposite behavior of the value he creates from passes. As Barcelona's key playmakers like Xavi and Iniesta retired, Messi adapted his game, increasing his role in creating chances through passes and sacrificing the value he can produce by dribbling.
\subsection{Player Consistency}
Consistency is a key factor for player selection. A consistent player provides a similar amount of value to the team across all games. Especially when contending for championships, performance consistency is key to ensuring that teams do not miss a seasonal objective due to a single poor match.
Equations \ref{eq:gametogamevol}, \ref{eq:negativegamevol}, and \ref{eq:negativeshorttermvol} describe our volatility metrics for player performance, where $\vec{r_{G}}$, $\vec{r_{ST}}$, and $\vec{r_{LT}}$ are the vectors of a player's game, short-term, and long-term ratings, respectively, with one entry per game, and $\sigma$ is the standard deviation function.
\begin{equation}
Game\ to\ Game\ Volatility = \sigma(\vec{r_{G}} - \vec{r_{LT}})\label{eq:gametogamevol}
\end{equation}
\begin{equation}
Negative\ Game\ Volatility = \sigma(\Delta * (\Delta < 0)),\ where\ \Delta = \vec{r_{G}} - \vec{r_{LT}}
\label{eq:negativegamevol}
\end{equation}
\begin{equation}
Negative\ Short\ Term\ Volatility = \sigma(\Delta * (\Delta < 0)), where\ \Delta = \vec{r_{ST}} - \vec{r_{LT}}
\label{eq:negativeshorttermvol}
\end{equation}
By measuring the standard deviation of the difference between a player's exhibitions and his long-term performance, we can obtain a proxy for how consistent a player is in delivering his average performance.
However, due to the nature of our ratings, higher-rated players will have higher standard deviations. To fix this, we fit a linear regression model and use it to control for player ratings. Figure \ref{fig:consistency} presents the transformation of the volatility metric and shows the consistency results for the top players in our dataset.
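A minimal sketch of these computations is shown below; the arrays of per-player raw volatilities and median ratings (\texttt{raw\_volatility}, \texttt{median\_ratings}) are assumed to have been assembled beforehand.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LinearRegression

def volatilities(r_game, r_short, r_long):
    # The three volatility metrics for one player (NumPy arrays, one entry per game).
    d_game = r_game - r_long
    d_short = r_short - r_long
    return (np.std(d_game),
            np.std(d_game * (d_game < 0)),
            np.std(d_short * (d_short < 0)))

# Adjust for the tendency of higher-rated players to be more volatile:
# regress raw volatility on the player's median rating, keep the residual.
reg = LinearRegression().fit(median_ratings.reshape(-1, 1), raw_volatility)
adjusted_volatility = raw_volatility - reg.predict(median_ratings.reshape(-1, 1))
\end{verbatim}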
\begin{figure}
\includegraphics[width=\textwidth]{img/005_consistency.pdf}
\caption{On the left: the volatility ratings of the top 10 players in our data set, plus 10 additional players arbitrarily selected. On the right: a visual explanation of the ratings on top, and the process of adjusting volatility according to the median rating of the player on the bottom.} \label{fig:consistency}
\end{figure}
Although we did not explore this, it would be interesting to evaluate a player's consistency across different ranges of game difficulty. For example, one could find a set of players that play better in challenging games and another that excels in easier games. This can be key when building the squad, since the coach will want options for every type of opposition the team will face.
\subsection{Player Development Curve}
With access to information about players' performance over time, we can understand how a player develops on average. Figure \ref{fig:pdc} shows the player development curve (PDC) obtained from the procedure described in Appendix \ref{ap:pdc}.
\begin{figure}
\includegraphics[width=\textwidth]{img/007_pdc.pdf}
\caption{The average PDC, from the player's lowest (0) to the highest rating (1).} \label{fig:pdc}
\end{figure}
To build the PDC, we group players' ratings relative to their peak across all ages available in the dataset. However, after performing this step, we face a bias in our dataset. There are far fewer young (under 20 years old) and older (over 33 years old) players in our dataset. This occurs because players in these age groups will only be playing in the first team if they are exceptional.
This means that the average young/old player in our dataset differs from the average of a uniform sample of young/old players, who are more likely to play in academies or lower divisions. The dataset contains a sample of the top young/old players. To control for this, we assume that the number of players across all ages should be uniform. Therefore, we multiply the value of each age by $1-relative\ amount\ of\ players$.
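A sketch of this construction, under the assumption that each player's long-term rating is min-max normalised between his lowest (0) and highest (1) rating, could be:
\begin{verbatim}
# Normalise each player's long-term rating to [0, 1], average per age,
# and reweight to soften the survivorship bias at extreme ages.
ratings["relative_rating"] = ratings.groupby("player_id")["long_term"].transform(
    lambda s: (s - s.min()) / (s.max() - s.min()))
per_age = ratings.groupby("age").agg(curve=("relative_rating", "mean"),
                                     n=("player_id", "nunique"))
per_age["curve"] *= 1 - per_age["n"] / per_age["n"].sum()
\end{verbatim}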
In agreement with similar studies on the subject \cite{dendir_soccer_peak_2016}, the PDC shows players peaking between 25 and 27 years old. The curve also indicates the ages at which a young player starts to provide more value to the team than an older player.
One interesting application of the PDC is to detect late bloomers. Late bloomers are players who reach their peak long past the 25 to 27 years old range. Since the players' market value decreases substantially after their peak age, clubs can find bargains by purchasing older players who still have room to improve. This is the case with players presented in Figure \ref{fig:latebloomers} like Tiago, Aritz Aduriz, and Joaquín, who played at their peaks in their mid-30s.
\begin{figure}
\includegraphics[width=\textwidth]{img/008_latebloomers.pdf}
\caption{A sample of late bloomers in our dataset. These players either hit or were close to hitting their peak performance much later than the average player.} \label{fig:latebloomers}
\end{figure}
\section{Discussion} \label{sec:Discussion}
This work discussed potential use cases for player ratings treated as time series. The main goal was to highlight situations where handling ratings as a continuous evaluation mechanism can yield better results and novel methods when compared to handling the problem as a tabular task.
In player valuation, we showed that tracking a rating over time can help us identify clear trends in player performance. Furthermore, it allows for a comparison between players over time. Besides ratings, we can measure players' consistency, identify which actions the players use to provide value, and understand long-term changes in player style.
Our approach does not come without limitations: our methods demand many data points for the ratings to be reliable, which might not always be available. Although we can evaluate the most important players reliably, we cannot assign ratings to players with smaller game samples. We cannot simply reduce the number of games required: doing so would result in unreliable and unstable ratings. We require methods to rate players that do not rely on so many data points. Another drawback is the lack of a cap on how old a game can be and still count towards a player's rating. Due to our game-count index, players keep their rating even after years without playing a game.
Event data contains much information but has limitations. The context around certain actions (like tackles and interceptions) is limited, and the models value defensive actions poorly. That being said, we still see many avenues to improve our results. For example, normalizing player ratings by position, game difficulty, competition difficulty, and teammates' ratings are some of the possibilities for reducing biases in our current framework.
The PDC shows some of the potential of treating data as time series. So far, we have defined a curve that shows the expected player development across ages. However, we see several use cases that can help in many facets of a team's management. Locating where a player is within the curve, detecting late bloomers, or projecting future player performance are some of the ways this method can generate value for a club.
\section{Conclusion} \label{sec:Conclusion}
This paper presented a framework to treat player ratings as time series. First, we defined our VAEP models and methodology to measure short-term and long-term player performance. Then, we presented several use cases for the time series produced. From player evaluation to player development, we see the potential to apply this approach in many parts of the game.
As for future work, we plan to address the following:
\begin{itemize}
\item Adding external context to the time series (such as game difficulty);
\item Understanding the trade-off between average value and total value per game;
\item Improving the quality of the valuation of defensive actions;
\item Increasing data efficiency, decreasing the amount of data required for ratings without reducing their quality.
\end{itemize}
\subsubsection{Acknowledgements}
This work is financed by National Funds through the Portuguese funding agency, FCT - Fundação para a Ciência e a Tecnologia, within project UIDB/50014/2020.
The second author would like to thank Futebol Clube Porto – Futebol SAD for the funding.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the cited entity.
\bibliographystyle{splncs04}
\section{Introduction}
The core competency required of a manager is the ability to assess a player's performance. This is a challenging endeavor, as the performance of a player depends on a number of attributes and understandably varies from season to season. The approach adopted by this paper is to create a model that uses only player characteristics such as height, weight, expected time on ice, and expected shooting percentage to create a projection for a player. Various regression-based models were created in order to estimate the offensive production of a player in the form of points per game (PPG). Neural nets were found to produce the estimates that were closest to the established baseline. This gives managers and coaches a way to quantify a player's contribution to the team based on a model that was trained using data accumulated from 2006-2017. This model can also serve as a guide to how players entering the league, for whom previous National Hockey League data is not available, will perform based on their physical attributes and their usage during an 82-game NHL season. The paper takes these projections a step further and proposes a metric that uses the projected player contribution (PPG), the actual performance during the current season (PPG), and the team's performance (points percentage) to rank players that might be available for trade. This metric can thus serve as a useful tool for any general manager, real or fantasy, to make roster decisions.
\section{Process Diagram}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.35]{Flowchart.JPG}
\caption{Visual representation of flow of data for producing the PAR rating}
\label{fig:flowchart}
\end{figure}
\section{Formal Description}
The performance of players in the National Hockey League varies from season to season due to a variety of reasons, such as overall team performance, player usage, physiological attributes, and coaching styles. Each of the 31 teams in the league employs an army of scouts responsible for analyzing players over the course of a season, which can be a daunting task. This paper proposes a tool that can be used by general managers to evaluate the performance of players during the course of a season based on their predicted and actual offensive performances, the long-term and short-term performances of their teams, and their usage per game. The algorithm used to determine the performance of the players is presented in Algorithm 1.
\begin{algorithm}
\SetAlgoLined
\KwResult{ PAR (Player Availability Rating) }
$PPG_{predicted}$, $PPG_{actual}$, $PPCG_{season}$, $PPCG_{recent}$, $X_{test}$, $Y_{test}$, $X_{train}$, $Y_{train}$, $w_o=2$\;
method=\{'linearRegression', 'k-NN', 'NeuralNets', 'decisionTree', 'randomForest'\}\;
$model_{minerror}$ = Choose method that produces the lowest average error\;
\While {items in $X_{test}$}{
Use $model_{minerror}$ with lowest calculated error and recalculate $PPG_{predicted}$\;
Extract latest PPG values from NHL.com and assign to $PPG_{actual}$\;
Extract team PPCG values (season) from NHL.com and assign to $PPCG_{season}$\;
Extract team PPCG values (recent) from NHL.com and assign to $PPCG_{recent}$\;
PAR = $\frac{(PPG_{predicted}-PPG_{actual})}{(PPCG_{season})} + w_o*\frac{(PPG_{predicted}-PPG_{actual})}{(PPCG_{recent})}$\;
}
\caption{PAR Algorithm}
\end{algorithm}
Each of the models was implemented using existing libraries (scikit-learn) and was optimized by running a grid search over its parameters.
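For illustration, the tuning step could be implemented as follows; the hyperparameter grids shown are placeholders rather than the grids actually searched, and \texttt{X\_train}, \texttt{y\_train} are assumed to hold the normalized features and PPG labels.
\begin{verbatim}
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor

candidates = {
    "neural_net": (MLPRegressor(max_iter=2000),
                   {"hidden_layer_sizes": [(16,), (32, 16)],
                    "alpha": [1e-4, 1e-3, 1e-2]}),
    "random_forest": (RandomForestRegressor(),
                      {"n_estimators": [100, 300],
                       "max_depth": [None, 5, 10]}),
}

best = {}
for name, (estimator, grid) in candidates.items():
    search = GridSearchCV(estimator, grid,
                          scoring="neg_mean_absolute_error", cv=5)
    search.fit(X_train, y_train)
    best[name] = search.best_estimator_
\end{verbatim}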
This insight is invaluable, as players with high PAR values are either severely under-performing, playing on teams that are not performing well, or a combination of both. These players may be more likely to be surrendered by opposing general managers during trade negotiations and might be considered under-the-radar acquisitions with the potential for very high reward. The formula used to determine this rating is presented below:
\[
PAR = \frac{(PPG_{predicted}-PPG_{actual})}{(PPCG_{season})}
+ w_o*\frac{(PPG_{predicted}-PPG_{actual})}{(PPCG_{recent})}
\]
where
\begin{conditions*}
PPG_{predicted} & Predicted points/game for skater \\
PPG_{actual} & Actual points/game for skater \\
PPCG_{season} & Percentage of points earned by team during current season \\
PPCG_{recent} & Percentage of points earned by team during last 10 games \\
w_{o} & A parameter to control the influence of recent team performance on player availability \\
\end{conditions*}
The difference between the predicted and actual PPG values for each player is indicative of the player's performance. A negative value corresponds to the player out-performing expectations whereas a positive value means that the player is under-performing. The short term and long term winning percentages of the player's team can be indicative of the pressure that the opposing general manager might be facing to make a move to either trade for a player or to trade an under-performing player. Based on other factors such as health and confidence levels, which are challenging to quantify, the player might simply need a change of scenery and a trade to a different team might be a winning proposition for all parties involved. Regularly using the proposed algorithm can help managers stay on top of such situations.
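A direct implementation of the rating, together with a hypothetical worked example, is sketched below.
\begin{verbatim}
def par(ppg_predicted, ppg_actual, ppcg_season, ppcg_recent, w_o=2):
    # Player Availability Rating for one skater.
    diff = ppg_predicted - ppg_actual
    return diff / ppcg_season + w_o * diff / ppcg_recent

# Example: a player predicted at 0.90 PPG producing only 0.70 PPG, on a
# team with a 0.55 season points percentage and 0.40 over the last 10 games.
print(round(par(0.90, 0.70, 0.55, 0.40), 2))  # 1.36
\end{verbatim}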
The PAR estimate captures the essence of existing ratings that are dependent on probabilistic considerations. However, the formulation presented above is not derived from any of these sources. It also does not use a probabilistic method to make predictions.
The PAR estimate uses a combination of neural nets and an empirical formulation to quantify the availability of an individual during an NHL season. Such a formulation was not found in the literature. However, a number of sources have attempted to quantify player and team performance using a probabilistic approach, the most popular being TrueSkill \cite{trueskill}, a Bayesian rating system that has been applied to other sports such as basketball and football \cite{ncaa}.
\section{Related Work}
The following sources were consulted during literature review:
\newline
i.) \textit{Forecasting Success in the National Hockey League using In-Game Statistics and Textual Data \cite{predict}:}
This paper utilizes traditional and advanced statistics for individual players on a team to predict how teams will perform over the course of a season. The core concept of using statistics to determine the cumulative performance of players is similar to the idea presented in this paper. However, PAR makes a point not to use advanced statistics and in its stead makes use of physical player characteristics and their expected usage over the course of an NHL season.
\newline
ii.) \textit{Estimating the Value of Major League Baseball Players \cite{fields}:}
This paper attempts to quantify the value of players to determine how much they should be paid. The author proposes a formulation that considers a number of features/factors that might determine player value. PAR attempts to consider similar features but only looks at offensive contribution.
\newline
iii.) \textit{Predicting the Major League Baseball Season \cite{jia}:}
This paper uses neural networks to solve a binary classification problem in the form of wins and losses for baseball teams over the course of a Major League Baseball season. Their use of neural networks along with a large amount of data to make these predictions is similar in spirit to the approach taken in this paper.
\newline
iv.) \textit{TrueSkill - A Bayesian skill rating system \cite{trueskill}:}
The paper above uses a probabilistic approach to skill assessment to produce a rating based on the outcome of previously played games. This paper uses chess rankings to illustrate their approach.
\newline
v.) \textit{Knowing what we don't know in NCAA Football ratings: Understanding and using structured uncertainty \cite{ncaa}:}
This paper uses the TrueSkill method and applies it to evaluate team performance for NCAA football games. The focus is on team performance as opposed to player performance.
\newline
The papers reviewed above focus on a probabilistic evaluation of performance. After an exhaustive search of the literature, no papers were found that use non-probabilistic machine learning algorithms to produce real-valued outputs to evaluate player contributions (offensive or defensive). An attempt had been made to do so in baseball, but it was limited to the outcome of games based on recent performance \cite{jia}.
\section*{Comparison or Demonstration}
In order to demonstrate the effectiveness of neural nets at predicting player performance based on historic data, five models were created using five different regression-based methods, and the performance of the neural network was compared against the other four. The table below summarizes the error values observed (predicted PPG minus actual PPG) for these methods. Table 1 also looks at predictions made by TSN.ca and NHL.com before the start of the 2017-2018 hockey season and compares them to the baseline (current PPG values) for the top 100 players in the league as determined by their PPG.
The training, validation and test datasets were created by creating scripts that extracted the required data from NHL.com and using an 80-10-10 split. Similarly, additional scripts were created in order to extract baseline data from NHL.com\cite{nhl} and TSN.ca\cite{tsn}. Each feature was modified using z-score normalization before training the model.
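One way to implement this preprocessing (the exact procedure is not specified in the paper, so this is a sketch under common conventions) is:
\begin{verbatim}
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# 80-10-10 split of the feature matrix X and PPG labels y.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest,
                                                test_size=0.5, random_state=0)

# z-score normalization, fit on the training set only.
scaler = StandardScaler().fit(X_train)
X_train, X_val, X_test = (scaler.transform(X_train),
                          scaler.transform(X_val),
                          scaler.transform(X_test))
\end{verbatim}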
Based on the results in Table 1, neural nets provided the lowest error values among the five methods, with linear regression having the worst performance, as expected. Figure 2 illustrates the performance of each of the methods in predicting the top 100 players with the highest PPG values.
\begin{table}
\centering
\label{tab:errors}
\begin{tabular}{|l|l|l|}
\hline
Method/Source & Mean Error & Median Error \\ \hline
Neural Nets & 0.211 & 0.188 \\ \hline
Decision Tree & 0.222 & 0.21 \\ \hline
Random Forest & 0.215 & 0.193 \\ \hline
k-NN Regression & 0.234 & 0.21 \\ \hline
Linear Regression & 0.245 & 0.22 \\ \hline
TSN.ca & 0.202 & 0.173 \\ \hline
NHL.com & 0.197 & 0.167 \\ \hline
\end{tabular}
\caption{Calculated error values comparing predicted and actual PPG values for top 100 scorers in the NHL}
\end{table}
\begin{table}
\centering
\label{tab:top10}
\begin{tabular}{|c|c|c|}
\hline
Player Name & Actual & Predicted \\ \hline
Nikita Kucherov & 1.48 & 1.18 \\ \hline
Brad Marchand & 1.17 & 1.02 \\ \hline
Claude Giroux & 1.07 & 1 \\ \hline
Connor McDavid & 1.17 & 0.99 \\ \hline
Anze Kopitar & 1.17 & 0.99 \\ \hline
Johnny Gaudreau & 1.28 & 0.98 \\ \hline
John Tavares & 1.14 & 0.97 \\ \hline
Evgeny Kuznetsov & 1.06 & 0.96 \\ \hline
Brayden Point & 0.85 & 0.93 \\ \hline
Mark Scheifele & 1.21 & 0.91 \\ \hline
\end{tabular}
\caption{Comparison of actual and predicted PPG values for the current top 10 offensive contributors in the NHL}
\end{table}
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.4\linewidth}
\includegraphics[width=\linewidth]{lReg.png}
\caption{Linear Regression}
\end{subfigure}
\begin{subfigure}[b]{0.4\linewidth}
\includegraphics[width=\linewidth]{kNN.png}
\caption{k-Nearest Neighbors}
\end{subfigure}
\begin{subfigure}[b]{0.4\linewidth}
\includegraphics[width=\linewidth]{dTree.png}
\caption{Decision Trees}
\end{subfigure}
\begin{subfigure}[b]{0.4\linewidth}
\includegraphics[width=\linewidth]{rFor.png}
\caption{Random Forest}
\end{subfigure}
\begin{subfigure}[b]{1.0\linewidth}
\includegraphics[width=\linewidth]{nn.png}
\caption{Neural Networks}
\end{subfigure}
\caption{Comparison of various regression methods against baselines for the top 100 players with the highest PPG values in the NHL}
\label{fig:regcompare}
\end{figure}
The top 10 players based on the predicted PPG determined by the neural nets are presented in Table 2. Their current PPG values are also presented as a reference. The difference between their predicted and actual values can be attributed to the current season being only 30\% complete. Over the course of the season, the actual PPG values are expected to decrease.
The final step of the PAR algorithm is to apply the PAR formula using the predicted PPG projections, baseline data and team-based statistics. The top 10 players with the highest PAR values are presented in Table 3.
\begin{table}
\centering
\label{tab:par}
\begin{tabular}{|c|c|}
\hline
Player Name & PAR \\ \hline
Ryan Dzingel & 2.29 \\ \hline
Mark Stone & 2.07 \\ \hline
Cam Fowler & 2.03 \\ \hline
Brandon Montour & 1.66 \\ \hline
Tyler Myers & 1.64 \\ \hline
Brendan Perlini & 1.21 \\ \hline
Nick Foligno & 1.14 \\ \hline
Tomas Tatar & 1.12 \\ \hline
Dion Phaneuf & 1.02 \\ \hline
Gabriel Landeskog & 0.98 \\ \hline
\end{tabular}
\caption{Top 10 players with the highest PAR estimates}
\end{table}
\section*{Limitations}
The presented model provides estimates that are far from the norm for players that are highly skilled. An example of this behavior is for players that are very large in size (taller than 6' 3'' and weigh more than 220 pounds). Over the course of the last 10 years, the majority of such players have traditionally been enforcers (players that are not relied upon to score points). Only a handful of players that meet this criteria are prolific scorers. The model thus assigns low point per game estimates to these types of players. An example of this is Patrik Laine who was assigned a value of 0.63. This was extremely surprising because his actual performance in his first year in the league far exceeded the estimate (by a factor of 1.5). Similarly, players that are smaller in size (smaller than 5' 9'' and weigh less than 170 pounds) are more likely to be assigned much higher point per game values because traditionally such players have scored at very high rates.
One of the most important ways to improve this model is to include more features. These features should be a combination of player characteristics and situational usage. Another possible extension of this project would be to assign monetary values to these players in an attempt to present what their valuations should be at any point during the season. The incorporation of the methodology in the paper by Fields \cite{fields} would draw upon various features related to team performance, situational usage, and situational performance to present estimates of valuations for players. Such a tool would be invaluable for general managers in the National Hockey League as they attempt to assemble their rosters while working under tight constraints such as the unavailability of funds (for managers of small market teams) and a salary cap enforced by the National Hockey League, which limits the amount of money that each team can spend on players.
The prediction criteria defined in the algorithm make it unique, as they can also be applied to other sports such as basketball, baseball, and football. This is another area that might be worth exploring in the future.
\section*{Conclusions}
The goal of the paper was to assess the viability of using neural nets to predict player performance. The results indicate that neural nets and other regression-based methods can be used to adequately complete this task. The results were compared to actual PPG values as well as to other sources, such as TSN.ca \cite{tsn} and NHL.com \cite{nhl}, that are considered top resources for player projections. The project was also extended to estimate the Player Availability Rating (PAR), a novel metric that aims to quantify the availability of a player based on the current performance of the player and the performance of their team. The results indicate that neural nets outperform the other regression-based methods and produce projections comparable to those made by TSN.ca \cite{tsn} and NHL.com \cite{nhl}.
The author has also launched a website that has adopted the algorithm presented in this paper:
\href{https://gmaiplaybook.com}{gmaiplaybook.com}
\\
\bibliographystyle{ieee}
\section{Introduction}
Each March, more than an estimated 50 million Americans fill out a bracket for the National Collegiate Athletic Association (NCAA) men's Division 1 basketball tournament \citep{Atlantic}. While paid entry into tournament pools is technically outlawed, prosecution has proved rare and ineffective; an estimated \$2.5 billion was illegally wagered on the tournament in 2012 \citep{LAT, BusinessWeek}.
Free tournament pools are legal, however, and Kaggle, a website that organizes free analytics and modeling contests, hosted its first college basketball competition in the early months of 2014. Dubbed the `March Machine Learning Mania' contest, and henceforth simply referred to as the Kaggle contest, the competition drew more than 400 submissions, each competing for a grand prize of \$15,000, which was sponsored by Intel. We submitted two entries, detailed in Section \ref{MC}, one of which earned first place in this year's contest.
This manuscript both describes our novel predictive models and quantifies the possible benefits, with respect to contest standings, of having a strong model. First, we describe our submission, building on themes first suggested by \cite{carlin1996improved} by merging information from the Las Vegas point spread with team-based possession metrics. The success of our entry reinforces longstanding themes of predictive modeling, including the benefits of combining multiple predictive tools and the importance of using the best possible data.
Next, we use simulations to estimate the fraction of our success which can be attributed to chance and to skill, using different underlying sets of probabilities for each conceivable 2014 tournament game. If one of our two submissions contained the exact win probabilities, we estimate that the submission increased our chances of winning by about a factor of 50, relative to a contest in which the winner was chosen at random. Despite this advantage, due to the contest's popularity, that submission would have had no more than about a 50-50 chance of finishing in the top 10, even under optimal conditions.
This paper is laid out as follows. Section 2 describes the data, methods, and scoring systems pertinent to predicting college basketball outcomes. Section 3 details our submission, and in Section 4, we present simulations with the hope of quantifying the proportions of our success which were due to skill and chance. Section 5 summarizes and concludes.
\section{NCAA tournament modeling}
\subsection{Data selection}
Two easily accessible sets of predictors for NCAA basketball tournament outcomes are information from prior tournaments and results from regular season competition. Regular season data would generally include information like each game's home team, away team, location, and the final score. For tournament games, additional information would include each team's seed (No. 1 to No. 16), region, and the distance from each school's campus to the game location.
The specific viability of using team seed to predict tournament success has been examined extensively; see, for example, \cite{schwertman1996more} and \cite{boulier1999sports}. In place of team seeds, which are approximate categorizations of team strengths based mostly on perceived talent, we supplemented regular season data with two types of information that we thought would be more relevant towards predicting tournament outcomes: (1) the Las Vegas point spread and (2) team efficiency metrics.
\subsubsection{The Las Vegas point spread}
One pre-game measurement available for the majority of Division 1 men's college basketball games over the last several seasons is the Las Vegas point spread. This number provides the predicted difference in total points scored between the visiting and the home team; a spread of -5.5, for example, implies that the home team is favored to win by 5.5 points. To win a wager placed on a 5.5 point favorite, one would need that squad to win by six points or more. Meanwhile, a bet on the underdog at that same point spread would win either if the underdog loses by 5 points or fewer, thereby covering the spread, or if the underdog outright wins. In principle, the point spread accounts for all pre-game factors which might determine the game's outcome, including relative team strength, injuries, and location.
Rules of efficient gambling markets imply that, over the long run, it is nearly impossible to outperform the point spreads set by sportsbooks in Las Vegas. A few landmark studies, including \cite{harville1980predictions} and \cite{stern1991probability}, used data from National Football League (NFL) games to argue that, in general, point spreads should act as the standards on which to judge any pre-game predictions. While recent work has looked at gambling markets within, for example, European soccer \citep{constantinou2013profiting}, the Women's National Basketball Association \citep{paul2014market}, the NFL \citep{nichols2014impact}, and NCAA men's football \citep{linna2014effects}, most research into the efficiency of men's college basketball markets was produced several years ago. \cite{colquitt2001testing}, for example, argued that, overall, evidence of market inefficiencies in men's college basketball were limited. These authors also found higher degrees of efficiency in betting markets among contests in which a higher amount of pre-game information was available. \cite{paul2005market} highlighted inefficiencies with respect to larger point spreads using men's college basketball games played between 1996-1997 and 2003-2004, and found that placing wagers on heavy underdogs could be profitable. Lastly, \cite{carlin1996improved} modeled tournament outcomes from the 1994 NCAA season, finding that the point spread was among the easiest and most useful predictors.
As a result of our relative confidence in the efficiency of men's basketball markets, we extracted the point spread from every Division 1 men's basketball contest since the 2002-2003 season using www.covers.com, and linked these results to a spreadsheet with game results.
\subsubsection{Efficiency metrics}
One aspect lost in the final scores of basketball games is the concept of a possession. Given that NCAA men's teams have 35 seconds on each possession with which to attempt a shot that hits the rim, the number of possessions for each team in a 40-minute game can vary wildly, depending on how quickly each squad shoots within each 35-second window. In the 2013-2014 season, for example, Northwestern State led all of Division 1 with 79.3 possessions per game, while Miami (Florida) ranked last of the 351 teams with 60.6 per game \citep{TR}. As a result, it is not surprising that Northwestern State scored 20.1 more points per game than Miami, given the large discrepancy in each team's number of opportunities \citep{TR}. As score differentials will also be impacted by the number of possessions in a game, offensive and defensive per-possession scoring rates may provide a greater insight into team strength, relative to the game's final score.
Several examples of possession-based metrics can be found on a popular blog developed by Ken Pomeroy (www.kenpom.com). Pomeroy provides daily updated rankings of all Division 1 teams, using offensive and defensive efficiency metrics that he adjusts for game location and opponent caliber. The larger umbrella of possession-based statistics, of which Pomeroy's metrics fall under, are summarized by \cite{kubatko2007starting}.
Pomeroy's website provides team-specific data for all seasons since 2001-2002. We extracted several different variables that we thought would plausibly be associated with a game's results, including a team's overall rating and its possession-based offensive and defensive efficiencies. These metrics provide a unique summary of team strength at each season; one downside, however, is that the numbers that we extracted were calculated $after$ tournament games, meaning that postseason outcomes were included. As a result, fitting postseason outcomes using Pomeroy's end-of-postseason numbers may provide too optimistic a view of how well his numbers produced at the end of the regular season would do. Given that there are many more regular season games than postseason games, however, we anticipated that changes between a team's possession-based efficiency metrics, as judged at the end of the regular season and at the end of the postseason, would be minimal. Relatedly, \cite{kvam2006logistic} found that most of the variability in a team's success during the tournament could be explained by games leading up to mid-February, implying that games at the end of the regular season do not have a dramatic impact on evaluation metrics.
\subsection{Contest requirements}
Standard systems for scoring NCAA basketball tournament pools, including those used in contests hosted online by ESPN (11 million participants in 2014) and Yahoo (15 million), award points based on picking each tournament game winner correctly, where picks are made prior to the start of the tournament \citep{espn, yahoo}. In these pools, there are no lost points for incorrect picks, but it is impossible to pick a game correctly if you had previously eliminated both participating teams in earlier rounds. The standard point allocation ranges from 1 point per game to 32 points for picking the tournament winner, or some function thereof, with successive rounds doubling in value. With the final game worth 32 times each first round game, picking the eventual tournament champion is more or less a prerequisite for a top finish. For example, among roughly one million entries in one 2014 ESPN pool, the top 106 finishers each correctly pegged the University of Connecticut as the champion \citep{pagels}. In terms of measuring the best prognosticator of all tournament games, the classic scoring system is inadequate, leading some to call for an updated structure among the websites hosting these contests \citep{pagels}; for more on optimal strategies in standard pools, see \cite{metrick1996march} and \cite{breiter1997play}.
Systems that classify games as `win' or `lose' fail to provide probability predictions, and without probabilities, there is no information provided regarding the strength of victory predictions. For example, a team predicted to win with probability 0.99 by one system and 0.51 by another would both yield a `win' prediction, even though these are substantially different evaluations. An alternative structure would submit a probability of victory for each participating team in each contest. In the Kaggle contest, for example, each participant's submissions consisted of a 2278 x 2 file. The first column contained numerical identifications for each pair of 2014 tournament qualifiers, representing all possible games which could occur in the tournament. For simplicity, we refer to these teams as Team 1 and Team 2. The second column consisted of each submission's estimated probability of Team 1 defeating Team 2. Alphabetically, the first possible match-up in the 2014 tournament pitted the University of Albany (Team 1) against American University (Team 2); like 2213 of the other possible match-ups, however, this game was never played due to the tournament's single elimination bracket structure.
There were 433 total entries into the 2014 Kaggle contest, submitted by 248 unique teams. Each team was allowed up to 2 entries, with only the team's best score used in the overall standings. Let $\hat{y}_{ij}$ be the predicted probability of Team 1 beating Team 2 in game $i$ on submission $j$, where $i = 1, ...2278$ and $j = 1, ... , 433$, and let $y_i$ equal 1 if Team 1 beats Team 2 in game $i$ and 0 otherwise. Each Kaggle submission $j$ was judged using a log-loss function, $LogLoss_{j}$, where, letting $I(Z_i=1)$ be an indicator for whether or not game $i$ was played,
\begin{eqnarray}
LogLoss_{ij}& = & -\bigg(y_i \log(\hat{y}_{ij}) + (1 - y_i) \log(1 - \hat{y}_{ij})\bigg)*I(Z_i=1) \label{LL} \\
LogLoss_{j} & = & \frac{1}{\sum_{i = 1} ^ {2278} I(Z_i=1)} \sum_{i=1}^{2278} \left[ LogLoss_{ij} \right] \\
& = & \frac{1}{63} \sum_{i=1}^{2278} \left[ LogLoss_{ij} \right].
\end{eqnarray}
\noindent Smaller log-loss scores are better, and the minimum log-loss score (i.e., picking all games correctly with probability 1) is 0. Only the 63 games which were eventually played counted towards the participants' standing; i.e., $\sum_{i = 1} ^ {2278} I(Z_i=1) = 63$.
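For concreteness, the contest score can be computed as in the sketch below, where \texttt{y\_hat} is a submission's vector of 2278 predictions, \texttt{y} the game outcomes, and \texttt{played} a Boolean mask of the 63 games actually played (variable names are hypothetical).
\begin{verbatim}
import numpy as np

def kaggle_log_loss(y_hat, y, played):
    y_hat = np.clip(y_hat, 1e-15, 1 - 1e-15)   # guard against log(0)
    per_game = -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
    return per_game[played].mean()
\end{verbatim}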
There are several unique aspects of this scoring system. Most importantly, the probabilities that minimize the log-loss function are the same as the probabilities that maximize the likelihood of a logistic regression function. As a result, we begin our prediction modeling by focusing on logistic regression.
Further, all games are weighted equally, meaning that the tournament's first game counts as much towards the final standings as the championship game. Finally, as each entry picks a probability associated with every possible contest, prior picks do not prevent a submission from scoring points in future games.
\subsubsection{Predicting NCAA games under probability based scoring function}
Despite the intuitiveness of a probability based scoring system, little research has explored NCAA men's basketball predictions based on the log-loss or related functions. In one example that we build on, \cite{carlin1996improved} supplemented team-based computer ratings with pre-game spread information to improve model performance on the log-loss function in predicting the 1994 college basketball tournament. This algorithm showed a better log-loss score when compared to, among other methods, seed-based regression models and models using computer ratings only. However, Carlin's model was limited to computer ratings based only on the final scores of regular season games, and not on possession-based metrics, which are preferred for basketball analysis \citep{kubatko2007starting}. Further, the application was restricted to the tournament's first four rounds in one season, and may not extrapolate to the final two rounds or to other seasons.
\cite{kvam2006logistic} applied similar principles in using logistic regression as the first step in developing a team ranking system prior to each tournament and found that their rankings outperformed seed-based evaluation systems. However, this proposal was more focused on picking game winners than improving scoring under the log-loss function.
\section{Model selection}
\label{MC}
Our submission was based on two unique sets of probabilities, $\boldsymbol{\hat{y}_{m_1}}=\left[\hat{y}_{1,m_1}, ... \hat{y}_{2278,m_1}\right]$ and $\boldsymbol{\hat{y}_{m_2}}=\left[\hat{y}_{1,m_2}, ... \hat{y}_{2278,m_2}\right]$, generated using a point spread-based model ($M_1$) and an efficiency-based model ($M_2$), respectively.
For $M_1$, we used a logistic regression model with all Division 1 NCAA men's basketball games from the prior 12 seasons for which we had point spread information. For game $g$, $g = 1, ..., 65043$, let $y_g$ be our outcome variable, a binary indicator for whether or not the first team (Team 1) was victorious. Our only covariate in $M_1$ is the game's point spread, $spread_g$, as shown in Equation (\ref{m1}),
\begin{eqnarray}
\text{logit}(Pr(y_g = 1)) = \beta_0 + \beta_1 * spread_g \label{m1}. \end{eqnarray}
We used the maximum likelihood estimates of $\beta_0$ and $\beta_1$ from Equation (\ref{m1}), $\hat{\beta}_{0,m_1}$ and $\hat{\beta}_{1,m_1}$, to calculate $\hat{y}_{i, m_1}$ for any $i$ using $spread_i$, the point spread for 2014 tournament game $i$ such that
\begin{equation} \hat{y}_{i,m_1} = \hat{\pi}_i = \frac{\text{exp}^{\hat{\beta}_{0,m_1}+\hat{\beta}_{1,m_1}* spread_i}}{1+\text{exp}^{\hat{\beta}_{0,m_1}+\hat{\beta}_{1,m_1}*spread_i}}. \end{equation}
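A sketch of this fit using \texttt{scikit-learn} is shown below; a very large $C$ approximates the unpenalized maximum likelihood fit, and \texttt{spread\_train}, \texttt{y\_train}, and \texttt{spread\_2014} are assumed NumPy arrays.
\begin{verbatim}
from sklearn.linear_model import LogisticRegression

m1 = LogisticRegression(C=1e6)
m1.fit(spread_train.reshape(-1, 1), y_train)

# Estimated probability that Team 1 wins each possible 2014 game.
y_hat_m1 = m1.predict_proba(spread_2014.reshape(-1, 1))[:, 1]
\end{verbatim}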
The actual point spread was available for the tournament's first 32 games; for the remaining 2246 contests, we predicted the game's point spread using a linear regression model with 2013-2014 game results.\footnote{This specific aspect of the model has been previously used for proprietary reasons, and we are unfortunately not at liberty to share it.} Of course, just 31 of these 2246 predicted point spreads would eventually be needed, given that there are only 31 contests played in each tournament after the first round.
An efficiency model ($M_2$) was built using logistic regression on game outcomes, with seven team-based metrics for each of the game's home and away teams as covariates, along with an indicator for whether or not the game was played at a neutral site. These covariates are shown in Table \ref{KP}. Each team's rating represents its expected winning percentage against a league average team \citep{KP}. Offensive efficiency is defined as points scored per 100 possessions, defensive efficiency as points allowed per 100 possessions, and tempo as possessions per minute. Adjusted versions of offensive efficiency, defensive efficiency, and tempo are also shown; these standardize efficiency metrics to account for opposition quality, site of each game, and when each game was played \citep{KP}.
\begin{table*}
\centering
\caption{Team-based efficiency metrics}
\label{KP}
\begin{tabular}{l l l}
\toprule
Variable & Description & Team \\
\hline
$X_1$&Rating & Home \\
$X_2$&Rating & Away \\
$X_3$&Offensive Efficiency & Home \\
$X_4$&Offensive Efficiency & Away \\
$X_5$&Defensive Efficiency & Home \\
$X_6$&Defensive Efficiency & Away \\
$X_7$&Offensive Efficiency, Adjusted & Home \\
$X_8$&Offensive Efficiency, Adjusted & Away \\
$X_9$&Defensive Efficiency, Adjusted & Home \\
$X_{10}$&Defensive Efficiency, Adjusted & Away \\
$X_{11}$&Tempo & Home \\
$X_{12}$&Tempo & Away \\
$X_{13}$&Tempo, Adjusted & Home \\
$X_{14}$&Tempo, Adjusted & Away \\
$X_{15}$&Neutral & N/A \\
\bottomrule
\end{tabular}
\end{table*}
We considered several different logistic regression models, using different combinations and functions of the 15 variables in Table \ref{KP}. Our training data set, on which models were fit and initial parameters were estimated, consisted of every regular season game held before March 1, using each of the 2002-2003 through 2012-2013 seasons. For our test data, on which we averaged the log-loss function in Equation (\ref{LL}) and selected our variables for $M_2$, we used all contests, both regular season and postseason, played after March 1 in each of these respective regular seasons. We avoided only using the Division 1 tournament outcomes as test data because only about 1\% of a season's contests are played during these postseason games. Given that March includes conference tournament games, which are perhaps similar to those in the Division 1 tournament, and our desire to increase the pool of test data, we chose the earlier cutoff. Table \ref{Modelbuild} shows examples of the models we considered and their $LogLoss$ score averaged on the test data.
\begin{table*}
\centering
\caption{Model building results}
\label{Modelbuild}
\begin{tabular}{l l l}
\toprule
Fit & Variables & $LogLoss$\^{}\\
\midrule
(a)&($X_1$ - $X_2$) & 0.509\\
(b)&($X_1$ - $X_2$), $X_{15}$ & 0.496\\
(c)&$X_1$, $X_2$ & 0.510\\
(d)&$X_1$, $X_2$, $X_{15}$ & 0.496\\
(e)&$X_3$, $X_4$, $X_5$, $X_6$, $X_{15}$ & 0.538\\
\hlc[green]{(f)}&$X_7$, $X_8$, $X_9$, $X_{10}$, $X_{15}$ &\hlc[green]{0.487}\\
(g)&($X_7$ - $X_8$), ($X_9$ - $X_{10}$), $X_{15}$ &0.487\\
(h)&$X_1$, $X_2$, $X_7$, $X_8$, $X_9$, $X_{10}$, $X_{15}$ & 0.487\\
(i)**&($X_7$, $X_8$, $X_9$, $X_{10}$, $X_{15})^2$ & 0.488\\
(j)**&($X_1$, $X_2$, $X_7$, $X_8$, $X_9$, $X_{10}$, $X_{13}$, $X_{14}$ , $X_{15})^2 $ & 0.488\\
(k)***&($X_1$, $X_2$, $X_7$, $X_8$, $X_9$, $X_{10}$, $X_{13}$, $X_{14}$ , $X_{15})^3 $ & 0.493\\
\hline
\multicolumn{3}{l}{\^{}\ Games after March 1, in each of the 2002-2003 to 2012-2013 seasons} \\
\multicolumn{3}{l}{**\ all two-way interactions of these variables} \\
\multicolumn{3}{l}{***\ all three-way interactions of these variables} \\
\multicolumn{3}{l}{Chosen model is \hlc[green]{highlighted}} \\ \bottomrule
\end{tabular}
\end{table*}
While not the complete set of the fits that we considered, Table \ref{Modelbuild} gives an accurate portrayal of how we determined which variables to include. First, given the improvement in the loss score from fits (a) to (b) and (c) to (d), inclusion of $X_{15}$, an indicator for whether the game was played on a neutral court, seemed automatic. Next, fit (f), which included the team efficiency metrics that had been adjusted for opponent quality, provided a marked improvement over the unadjusted efficiency metrics in fit (e). Meanwhile, inclusion of overall team rating (fit (h)), and linear functions of team efficiency metrics (fit (g)), failed to improve upon the log-loss score from fit (f). Higher order terms, as featured in models (i), (j), and (k), resulted in worse log-loss performances on the test data, and an ad-hoc approach using trial and error determined that there were no interaction terms worth including.
The final model for $M_2$ contained the parameter estimates from a logistic regression fit of game outcomes on $X_7$, $X_8$, $X_9$, $X_{10}$, and $X_{15}$ (Adjusted offensive efficiency for home and away teams, adjusted defensive efficiency for home and away teams, and a neutral site indicator, respectively). We estimated $\boldsymbol{\hat{y}_{m_2}}$ using the corresponding team specific metrics from kenpom.com, taken immediately prior to the start of the 2014 tournament.
Our final step used ensembling, in which individually produced classifiers are merged via a weighted average \citep{opitz1999popular}. Previous work has shown that ensemble methods work best using accurate classifiers which make errors in different input regions, because areas where one classifier struggles would be offset by other classifiers \citep{hansen1990neural}. While our two college basketball classifiers, $M_1$ and $M_2$, likely favor some of the same teams, each one is produced using unique information, and it seems plausible that each model would offset areas in which the other one struggles.
A preferred ensemble method takes the additional step of calculating the optimal weights \citep{dietterich2000ensemble}. Our chosen weights were based on evidence that efficiency metrics were slightly more predictive of tournament outcomes than the model based on spreads. Specifically, using a weighted average of $M_1$ and $M_2$, we calculated a $LogLoss$ score averaged over each of the Division 1 tournaments between 2008 and 2013 (incidentally, this was the `pre-test' portion of the Kaggle contest). The balance yielding the best $LogLoss$ score gave a weight of 0.69 to $M_2$ and 0.31 to $M_1$.
Thus, we wanted one of our submissions to give more importance to the efficiency model. However, given that each season's efficiency metrics may be biased because they are calculated after the tournament has concluded, for our other entry, we reversed the weightings to generate our two submissions, $\boldsymbol{S_1}$ and $\boldsymbol{S_2}$, rounding our weights for simplification.
\begin{eqnarray} \boldsymbol{S_1} = 0.75*\boldsymbol{\hat{y}_{m_1}} + 0.25* \boldsymbol{\hat{y}_{m_2}} \nonumber \label{WTE}\\
\boldsymbol{S_2} = 0.25*\boldsymbol{\hat{y}_{m_1}} + 0.75* \boldsymbol{\hat{y}_{m_2}} \nonumber \label{WTE2}\end{eqnarray}
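The weight selection and the two submissions can be sketched as follows, reusing the \texttt{kaggle\_log\_loss} helper above and assuming \texttt{y\_hat\_m1} and \texttt{y\_hat\_m2} hold each model's predictions for the 2008-2013 tournament games, with \texttt{y} and \texttt{played} the corresponding outcomes and played-game mask.
\begin{verbatim}
import numpy as np

# Grid search over the ensemble weight on the 2008-2013 tournaments.
weights = np.linspace(0, 1, 101)
scores = [kaggle_log_loss(w * y_hat_m2 + (1 - w) * y_hat_m1, y, played)
          for w in weights]
best_w = weights[int(np.argmin(scores))]   # roughly 0.69 in our tuning

# Final submissions on the 2014 predictions, with rounded weights.
S1 = 0.75 * y_hat_m1 + 0.25 * y_hat_m2
S2 = 0.25 * y_hat_m1 + 0.75 * y_hat_m2
\end{verbatim}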
The correlation between $\boldsymbol{S_1}$ and $\boldsymbol{S_2}$ was 0.94, and 78\% of game predictions on the two entries were within 0.10 of one another. Our top submission, $\boldsymbol{S_2}$, finished in first place in the 2014 Kaggle contest with a score of 0.52951. Submission $\boldsymbol{S_1}$, while not officially shown in the standings as it was our second best entry, would have been good enough for fourth place (score of 0.54107).
\section{Simulation Study}
In order to evaluate the luck involved in winning a tournament pool with probability entries, we performed a simulation study, assigning each entry a $LogLoss$ score at many different realizations of the 2014 NCAA basketball tournament. The contest organizer provided each of the 433 submissions to the 2014 Kaggle contest for this evaluation.
To simulate the tournament, ``true'' win probabilities must be specified for each game. We evaluate tournament outcomes over five sets of true underlying game probabilities: $\boldsymbol{S_1}$, $\boldsymbol{S_2}$, M($\boldsymbol{S_{All}})$, M($\boldsymbol{S_{Top10}})$, and $\boldsymbol{0.5}$, listed as follows.
\begin{itemize}
\item Our first entry ($\boldsymbol{S_1}$)
\item Our second entry ($\boldsymbol{S_2}$)
\item Median of all Kaggle entries (M($\boldsymbol{S_{All}})$)
\item Median of the top 10 Kaggle entries (M($\boldsymbol{S_{Top10}})$)
\item All games were a coin flip (i.e. $p=0.5$ for all games) ($\boldsymbol{0.5}$)
\end{itemize}
Let rank($\boldsymbol{S_1}$) and rank($\boldsymbol{S_2}$) be vectors containing the ranks of each of our submissions across the 10,000 simulations at a given set of game probabilities. We are interested in the median rank and percentiles (2.5, 97.5) for each submission (abbreviated as M (2.5 - 97.5)), across all simulations. We are also interested in how often each submission finishes first and in the top 10.
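A simplified sketch of the simulation loop is shown below; the bracket-progression logic that determines which of the 2278 possible games are actually played is abstracted into a \texttt{bracket} function and omitted here.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def simulate_ranks(true_prob, entries, bracket, n_sims=10000):
    # true_prob: length-2278 vector of assumed true win probabilities
    # entries:   (n_entries, 2278) matrix of submitted probabilities
    ranks = np.zeros((n_sims, entries.shape[0]), dtype=int)
    for s in range(n_sims):
        outcomes = (rng.random(true_prob.shape) < true_prob).astype(float)
        played = bracket(outcomes)          # Boolean mask of 63 games
        scores = np.array([kaggle_log_loss(e, outcomes, played)
                           for e in entries])
        ranks[s] = scores.argsort().argsort() + 1   # 1 = best entry
    return ranks
\end{verbatim}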
\begin{landscape}
\begin{table*}
\centering
\caption{Simulation results}
\label{Tab2}
\begin{tabular}{l l l l l l l}
\toprule
& & \multicolumn{5}{c}{True Game Probabilities} \\ \cline{3-7}
Outcome & Type & $\boldsymbol{S_1}$ & $\boldsymbol{S_2}$ & M($\boldsymbol{S_{Top10}})$ &M($\boldsymbol{S_{All}})$ & $\boldsymbol{0.5}$\\
\hline
rank($\boldsymbol{S_1}$)& M (2.5 - 97.5) & 11 (1-168) & 59 (1-202) & 99 (2-236) & 145 (4-261) & 264 (186-299) \\
rank($\boldsymbol{S_2}$)& M (2.5 - 97.5)& 53 (2-205) & 14 (1-164)& 92 (2-245) & 146 (5-266) & 226 (135-285) \\
rank($\boldsymbol{S_1}$) = 1& \%& 15.57 &3.90 & 2.02 & 0.88 & 0 \\
rank($\boldsymbol{S_2}$) = 1& \%& 2.22 &11.65 & 1.89 & 0.63 & 0 \\
rank($\boldsymbol{S_1}) \leq 10$& \%& 48.79 & 17.69& 8.85 &5.04 & 0 \\
rank($\boldsymbol{S_2}) \leq 10$& \%&20.72 & 44.47& 11.96 & 4.77& $<$0.01 \\
Unique winners & Total & 332 & 336 & 337 &348 & 217\\
\hline
\multicolumn{7}{l}{M: Median, 2.5: 2.5th percentile, 97.5: 97.5th percentile}\\ \bottomrule
\end{tabular}
\end{table*}
\end{landscape}
\normalsize
Lastly, we extract the number of unique winners across the simulations, which can give us a sense of how many entries had a reasonable chance of winning at each set of underlying game probabilities.
The results of the simulations appear in Table \ref{Tab2}. Each column represents a different ``true" probability scenario and each row records the results of a statistic of interest. The first and second rows show results of our first and second entry, respectively. We can see that if the ``true" probabilities were $\boldsymbol{S_1}$, our entry finished at a median of 11th place, whereas if the true probabilities were $\boldsymbol{S_2}$, our median finish was 14th place. If the true probabilities were $\boldsymbol{S_1}$, our entry containing those probabilities would finish in first place around 15\% of the time. Likewise, with $\boldsymbol{S_2}$ as ``true" probabilities, that entry would win around 12\% of the time. Relative to a contest based entirely on luck, where each entry would have a 1 in 433 chance of finishing first, our chances of winning were roughly 50 to 60 times higher using $\boldsymbol{S_1}$ and $\boldsymbol{S_2}$ as the truth. This conceivably represents the upper bound of our submission's `skill.'
On the whole, our simulations indicated that the amount of luck required to win a contest like the Kaggle one is enormous; even if you knew the true probabilities of a win for every single game with certainty, you'd still only win about 1 in 8 times! In fact, even if our submissions were correct, we'd only finish in the top 10 about 49\% and 45\% of the time, respectively, for $\boldsymbol{S_1}$ and $\boldsymbol{S_2}$.
If the median of all entries (M($\boldsymbol{S_{All}})$) or the median of the top 10 entries (M($\boldsymbol{S_{Top10}})$) is used as the true probabilities, our chances of winning diminish. For M($\boldsymbol{S_{Top10}})$, our chances of winning on entries $\boldsymbol{S_1}$ and $\boldsymbol{S_2}$ were both about 2\%. For M($\boldsymbol{S_{All}})$, our chances of winning on entries $\boldsymbol{S_1}$ and $\boldsymbol{S_2}$ were both less than 1\%. Lastly, if each game was truly a coin flip, neither of our entries finished first in any of the simulations.
Of the 433 total entries, fewer than 350 finished in first place at least once across the 10,000 simulations when either of our entries was used as the truth. This suggests that if either of our submitted probabilities were close to the ``true'' probabilities, roughly 20\% of entries had little to no chance of winning.
Lastly, Figure 1 shows the smoothed density estimates (dark line) of all winning $LogLoss$ scores from 10,000 simulated tournaments under the game probabilities in $\boldsymbol{S_2}$, along with the density estimates for $\boldsymbol{S_2}$ on simulations in which that entry was the winner. The winning score of $\boldsymbol{S_2}$ in 2014 (0.529) is shown by a vertical line. Relative to the simulated winning scores, $\boldsymbol{S_2}$ winning scores have a lower density in the tails. The 2014 winning score was relatively higher than most of the scores that won the simulated tournaments, perhaps because the University of Connecticut, a seven seed, won all six of its games en route to becoming Division 1 champions. Prior to Connecticut's win, only four previous champions since 1979 were seeded lower than three (four seed Arizona (1997), six seeds Kansas (1988) and North Carolina (1983), and eight seed Villanova (1985)). As a result, most predictive models would have a seven seed as a substantial underdog in games against higher seeded opponents in the later rounds, leading to comparatively larger values of the loss function than when favorites prevail.
\begin{figure}\begin{center}
***********************Insert Figure 1 here***********************\end{center}
\caption{Smoothed density estimates of winning scores across 10,000 tournament simulations using underlying game probabilities $\boldsymbol{S_2}$. The dotted line refers to the winning scores on simulations won by the $\boldsymbol{S_2}$ entry.}\end{figure}
\section{Conclusion}
While traditional NCAA basketball tournament bracket pools are here to stay, Kaggle has developed an alternative scoring system that requires a probability prediction rather than simply picking a winner. Given these guidelines, we used Las Vegas point spread data and Ken Pomeroy's efficiency ratings to build predictive models that ultimately led to a first place finish in this contest.
We employed logistic regression models in part because the maximum likelihood estimates derived from logistic regression are obtained by maximizing a function equivalent to the contest scoring function. While logistic regression is a fairly standard statistical technique, we propose that it was particularly important in this context precisely because of that scoring function.
While our choice of a class of models possibly played a role in our victory, our choice of data likely played a much larger role. It is extremely difficult to generate predictive models that outperform the Las Vegas point spread, particularly in high profile games like the ones in the NCAA tournament, and both the point spread and efficiency ratings have previously been shown to work well in predicting college basketball outcomes \citep{carlin1996improved}. Conceptually, one could argue that the Las Vegas point spread is a subjective prior based on expert knowledge, whereas Pomeroy's ratings are based entirely on data. In this way, our ensembling of these two sources of data follows the same principles as a Bayesian analysis.
Given the size of the Kaggle contest, it is reasonable to estimate that our models increased our chances of winning anywhere from five-fold to fifty-fold, relative to a contest that just randomly picked a winner. However, even with a good choice of models and useful data, we demonstrated that luck also played a substantial role. Even in the best scenario, where we assumed that one of our predicted probabilities was correct, we found that this entry had less than a 50\% chance of finishing in the top ten and well under a 20\% chance of winning, given a contest size of 433 entrants. Under different, but fairly realistic, true probability scenarios, our chances of winning decreased to less than 2\%.
\bibliographystyle{DeGruyter}
\section{Introduction}
\label{introduction}
Box score statistics are the baseline measures of performance in the National Basketball Association (NBA). These metrics, either in their raw form or as components of advanced metrics such as player efficiency rating (PER) \citep{hollinger} or win shares (WS) \citep{bbref}, are used by both fans and team officials to help measure and understand player performance. Thus, box score statistics play an influential role in determining playing time, salaries, trade negotiations, marketing potential, and public perception of players, so any inaccuracies or inconsistencies in their attribution can have far reaching impacts.
Box score statistics for each game are identified and recorded by a team of scorekeepers employed by the home team. Some statistics, such as points scored, are objective and there is little possibility of error by the scorekeepers. However, other statistics, such as assists and blocks, are more subjective in nature. For example, an assist ``is awarded only if, in the judgment of the statistician, the last player's pass contributed directly to a made basket'' \citep{nbau}.
An example of the consequences of this subjectivity occurred in 1997 when the Vancouver Grizzlies hosted the Los Angeles Lakers. Laker Nick Van Exel was awarded 23 assists, including several that were ``comically bad'', by a disgruntled Grizzlies scorekeeper, in protest of the inaccuracy of box score statistics \citep{craggs09}. The questionable score keeping went undetected, the scorekeeper unpunished, and the recorded box score unaltered.
While this example is extreme, it demonstrates that inconsistencies in the attribution of box score statistics can occur without notice. Something as simple as scorekeepers having differing views of statistic definitions can affect the comparability of the statistics and thus the perception of player performance.
With the recent rise in popularity of fantasy sports, these inconsistencies also have monetary implications for participants. FanDuel Inc. and DraftKings Inc., currently the two largest daily fantasy operators in North America, both offer daily fantasy contests for the NBA with point scoring systems relying exclusively on box score statistics \citep{fanduel, draftkings} and participants in the daily fantasy community have noticed the influence of scorekeepers on the scores. In a November 17, 2015 daily fantasy basketball article on ESPN.com, DraftKings analyst Peter Jennings recommends participants select Anthony Davis in the New Orleans Pelicans home game against the Denver Nuggets because ``Davis was much better at home last season (scorekeeper might be a Davis fan) and the trend is continuing" \citep{espn}.
In this paper we seek to improve the existing methods of examining statistic inconsistency in other sports and apply these methods to NBA data. Our main contributions are the development of specific methods for NBA data, an improved regression model that uses spatio-temporal information provided by optical tracking technology, and a new method of adjusting statistics to correct for inconsistencies. Our adjustment method also allows for the examination of individual scorekeeper accuracy distributions, providing insights into scorekeeper impact on daily fantasy sports.
The remainder of the paper is organized as follows. In the following section we discuss related work examining statistic inconsistencies in team sports. In Section \ref{assists_and_blocks} we conduct exploratory analysis at the game level into the tendency of scorekeepers to award assists and blocks. Then, in Section \ref{team_adjusted}, we introduce a regression model of assist and block attribution which accounts for the game location, the teams playing, and scorekeeper impacts. This model mirrors the current best practices for estimating the factors influencing the attribution of statistics. Section \ref{new_assists_model} introduces our improvements to the existing methods through a new assist model which takes advantage of optical player tracking data to predict the probability of individual passes being recorded as assists. This section also presents adjusted assist totals which correct for scorekeeper and other biases for a selection of players most affected by the adjustment process. Section \ref{daily_fantasy} examines the impact of scorekeeper inconsistencies on daily fantasy contests. Finally, Section \ref{conclusion} presents conclusions from the results of the paper, as well as a discussion of possible future work and extensions.
\section{Related Work}
\label{related_work}
Regression models are a common tool for sports analytics research, though their application is largely focused within one of two categories: analyzing player performance and predicting win probabilities. Such examinations have spanned several team sports including basketball \citep{deshpande, baghal, fearnhead, basketball_regression, teramoto}, hockey \citep{hockey_regression, macdonald}, baseball \citep{hamrick, neal}, and soccer \citep{groll, oberstone}.
Regression models have also been employed to examine the effects of biases and inconsistencies in sports. \cite{ref_bias} use NBA box score statistics and game information along with racial data of players, coaches, and referees to examine racial biases of referees. Their models treat referees and the race of players similarly to how our models treat scorekeepers and the home or away status of a team. However, we seek to quantify the behaviour of each scorekeeper individually while \cite{ref_bias} group referees by race. A more applicable model for our objectives was developed by \cite{park_factors} to improve the park factors statistic in Major League Baseball (MLB). They use a regression model to estimate the effects of the design of each park (park factors) on hitting and pitching statistics, while also controlling for a home field advantage and the strength of both teams. These estimated park factor effects are similar to the scorekeeper effects we wish to estimate, except they arise from the unique physical characteristics of each MLB park as opposed to human biases. \cite{schuckers} develop a similar model to estimate the differences in the recording of a number of events across National Hockey League (NHL) rinks. The key difference between this model and the park factors model is that the rink effects model deals with human behaviour and thus includes a rink by home ice interaction effect to capture the bias of the scorekeepers. The estimated rink effects are analogous to our estimated scorekeeper effects and our Model \ref{simple_model} for assists and blocks in the NBA extends this state of the art from hockey and baseball into the domain of basketball.
In Section \ref{new_assists_model}, we improve upon the existing methods through the inclusion of spatio-temporal covariates, available through optical tracking systems which have recorded data for all NBA courts since the 2013-2014 season. This spatio-temporal information has expanded the scope of possible research, leading to insights that were previously impossible. \cite{cervone} use the spatio-temporal information (including player locations, event annotations, and ball movement) leading up to a given time point to estimate a multiresolution stochastic process model to predict player movements and events after the given time point, with the ultimate goal of computing an expected possession value at any moment in a possession. \cite{basketball_dynamics} use the information in a similar manner to predict the occurrence of near term events, such as a pass or a shot, at a given time point. Aside from the introduction of a novel method of measuring the distance traveled by a player in possession of the ball, our work uses the same features and extraction methods of these previous applications. However, to our knowledge, our work is the first in any sport to use spatio-temporal information to model scorekeeper inconsistencies.
Finally, both \cite{schuckers} and \cite{park_factors} present statistic adjustment methods, which scale the recorded values linearly according to the estimated effects. This is a reasonable adjustment method given their models but since our models contain spatio-temporal covariates, we make use of this additional information and develop a new method to adjust recorded assist numbers over the course of a season. Our adjustment method has the additional advantage of isolating the impact of a variety of effects within the adjustment, providing a more detailed examination of the factors that inflate or deflate statistics.
\section{Assists and Blocks: The Grey Zone of Basketball Analytics}
\label{assists_and_blocks}
According to a former NBA scorekeeper \citep{craggs09}, scorekeepers are given broad discretion over two box score statistics: assists and blocks. Thus, we focus our attention on these metrics. Since assists are highly dependent on the number of made field goals, and blocks dependent on the number of opposing field goal attempts, we examine the assist ratio (AR) and block ratio (BR) rather than the raw totals. Here, the AR and BR for a team are defined as
\[
\text{AR} = \frac{\text{Team Assists}}{\text{Team Field Goals Made}}
\hspace{20pt}
\text{BR} = \frac{\text{Team Blocks}}{\text{Opponent Field Goals Attempted}}
\]
and can be computed for any given duration, such as a quarter, game, or season.
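For concreteness, both ratios are simple quotients of box score counts; a minimal sketch (argument names are placeholders) is:
\begin{verbatim}
def assist_ratio(team_assists, team_fg_made):
    """Assists per made field goal over any chosen duration."""
    return team_assists / team_fg_made

def block_ratio(team_blocks, opp_fg_attempted):
    """Blocks per opposing field goal attempt over any chosen duration."""
    return team_blocks / opp_fg_attempted
\end{verbatim}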
To examine the scorekeeper impact on these ratios, we use box score data from ESPN.com for the entire 2015-2016 NBA season to compute the season-long AR and BR awarded by each scorekeeper to both their home team and the away teams. Figure \ref{ar_and_br} displays the results, demonstrating noticeable differences among scorekeepers. Note that since a scorekeeper is hired by an NBA team to record statistics for all of that team's home games, we use the team names and logos to reference the scorekeepers. Examining assist ratios, many scorekeepers award similar ratios to both home and away teams; however, the ratios awarded by some scorekeepers are quite different, either in favour of the home team (Golden State Warriors) or the away team (Toronto Raptors). Similar variability occurs with block ratios, with some even more extreme differences favouring either the home team (Miami Heat) or the away team (Los Angeles Lakers).
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{2016_no_model_ar_logo.png}
\end{subfigure}
~
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{2016_no_model_br_logo.png}
\end{subfigure}
\caption{Home team and away team assist ratios (left) and block ratios (right) for the 2015-2016 NBA season for each scorekeeper}
\label{ar_and_br}
\end{figure}
Examining the results for the Warriors scorekeeper, it may be the case that they have a bias (either intentional or unintentional) toward their team and thus are more generous when awarding assists. However, it may also be that the Warriors' offensive style focuses on ball movement, resulting in the Warriors making more passes and earning a higher assist ratio than the average NBA team. Similarly, we cannot be certain if the Heat scorekeeper seeks out blocks to award to their team more diligently than the Lakers scorekeeper, or if Hassan Whiteside (a Heat player who led the league in blocks for the 2015-2016 season) is simply that much better of a shot blocker than any player on the Lakers team, such that it inflates the Heat's ratio to well above that of the Lakers. In the following section, we reduce this uncertainty through models of AR and BR which separate the influence of the teams from the influence of the scorekeepers.
\section{Team Adjusted Models}
\label{team_adjusted}
We now introduce a regression model which examines the factors influencing the AR or BR for a single team in a given game. Aside from the actions of the scorekeeper, we suspect the primary variables influencing the assist or block ratio awarded by a scorekeeper in a game are the teams in that game. Both the style of play and the skill of the players on a team influence its likelihood to record an assist or block. Similarly, the style and player skill of a team also influence its likelihood of allowing an assist or block. Thus, the models estimate a ``team'' ($\pmb{\beta}_T$) and ``opponent'' ($\pmb{\beta}_O$) effect for each team, corresponding to the likelihood of each team to respectively record or allow a given statistic. Our models also include a ``home'' ($\beta_H$) effect, common to all teams. This effect is present only when the team of interest is the home team and represents any possible league-wide home court advantage resulting in increased performance with respect to the given statistic.
The final two non-intercept effects estimated in our models are the ``scorekeeper generosity'' ($\pmb{\beta}_G$) and ``scorekeeper bias'' ($\pmb{\beta}_B$) effects corresponding to each team's scorekeeper. $\pmb{\beta}_G$ measures how likely a scorekeeper is to award assists or blocks to both teams while $\pmb{\beta}_B$ measures how much more likely a scorekeeper is to award an assist or block to the home team compared to the away team. Isolated from the influence of the other previously mentioned effects, these effects will provide insight into the consistency of scorekeepers across the NBA.
Let $\text{R}_{i}$ be the expected ratio of interest (either AR or BR) for a given team-game combination $i$. Our model for $\text{R}_{i}$ takes the form
\begin{equation}
\label{simple_model}
\text{R}_{i} = \beta_0 + H_{i} \beta_H + \text{T}_{i} \pmb{\beta}_T + \text{O}_{i} \pmb{\beta}_O + \text{S}_{i} \pmb{\beta}_{G} + \text{S}'_{i} \pmb{\beta}_{B}
\end{equation}
where $\text{H}_{i}$ is an indicator variable denoting whether the team in team-game combination $i$ is the home team, $\text{T}_{i}, \text{O}_{i}$, and $\text{S}_{i}$ are each $30 \times 1$ indicator (one-hot encoded) vectors for the team, its opponent, and the scorekeeper for team-game combination $i$, respectively, and $\text{S}'_{i} = \text{S}_{i} \times \text{H}_{i}$ is a $30 \times 1$ indicator vector denoting the scorekeeper if the team in team-game combination $i$ is the home team (and is a zero vector otherwise). These variable definitions are summarized in Table \ref{initial_parameters}. Additionally, $\beta_0$ and $\beta_H$ are estimated coefficients and $\pmb{\beta}_T, \pmb{\beta}_O, \pmb{\beta}_{G},$ and $\pmb{\beta}_{B}$ are $1 \times 30$ row vectors of estimated coefficients. The coefficients measuring scorekeeper effects ($\pmb{\beta}_{G}$ and $\pmb{\beta}_{B}$) are the coefficients of interest, while the remaining coefficients account for the impact of other influential factors.
Model \ref{simple_model} is essentially the model presented by \cite{schuckers} applied to NBA statistics, but with two main differences. First, we introduce two scorekeeper effects to mirror their single rink effect. We believe there are two distinct potential differences in scorekeeper behaviour and thus dividing the scorekeeper effect provides additional insight. Second, we model ratios of statistics rather than simple counts. Using ratios focuses the model on the scorekeepers' decisions rather than the teams' ability to generate shots, since assists and blocks are dependent on made and missed field goals respectively. Since we do not examine count data, we also do not employ the logarithmic transformation used by \cite{schuckers}.
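As an illustrative sketch (not the exact estimation code), Model \ref{simple_model} can be fit by least squares on a table with one row per team-game combination; the column names below are placeholders, and the over-parameterized one-hot design is handled by taking the minimum-norm least squares solution:
\begin{verbatim}
import numpy as np
import pandas as pd

def fit_team_adjusted_model(games):
    """games: one row per team-game combination with columns
    'ratio' (AR or BR), 'home' (0/1), 'team', 'opponent', 'scorekeeper'."""
    X = pd.concat([
        games['home'].rename('H'),
        pd.get_dummies(games['team'], prefix='T'),          # beta_T
        pd.get_dummies(games['opponent'], prefix='O'),      # beta_O
        pd.get_dummies(games['scorekeeper'], prefix='G'),   # beta_G
        pd.get_dummies(games['scorekeeper'], prefix='B')
          .mul(games['home'], axis=0),                      # beta_B
    ], axis=1).astype(float)
    X.insert(0, 'intercept', 1.0)
    # Minimum-norm solution for the rank-deficient one-hot design.
    beta, *_ = np.linalg.lstsq(X.values, games['ratio'].values, rcond=None)
    return pd.Series(beta, index=X.columns)
\end{verbatim}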
\begin{table}
\caption{Variables for the team adjusted models}
\label{initial_parameters}
\begin{tabularx}{\textwidth}{ l X }
\hline\noalign{\smallskip}
\textbf{Notation} & \textbf{Definition} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$H_i$ & Indicator variable equal to 1 if the team is the home team and 0 otherwise \\
$T_i$ & $30 \times 1$ indicator vector denoting the team \\
$O_i$ & $30 \times 1$ indicator vector denoting the opponent \\
$S_i$ & $30 \times 1$ indicator vector denoting the scorekeeper \\
$S'_i$ & $30 \times 1$ indicator vector denoting the interaction of home and scorekeeper ($H_i$ and $S_i$) \\
\hline\noalign{\smallskip}
\end{tabularx}
\end{table}
To estimate the model parameters, we again use box score data from ESPN.com for the entire 2015-2016 NBA season, but this time we compute the assist and block ratios of each team in every game. To compare the consistency of the 30 scorekeepers for each ratio, we compute predicted ratios awarded by each scorekeeper to both the home team and an unspecified away team. Let the predicted home and away team ratios (either AR or BR) for scorekeeper $s$ be denoted $\text{PR}_{H_s}$ and $\text{PR}_{A_s}$ respectively. Then
\begin{align*}
\text{PR}_{H_s} &= \overline{\text{LR}} + \left( \pmb{\beta}_{G}^{(s)} - \bar{\pmb{\beta}}_{G} \right) + \left( \pmb{\beta}_{B}^{(s)} - \bar{\pmb{\beta}}_{B} \right) \\
\text{PR}_{A_s} &= \overline{\text{LR}} + \left( \pmb{\beta}_{G}^{(s)} - \bar{\pmb{\beta}}_{G} \right)
\end{align*}
where $\overline{\text{LR}}$ is the season-long league ratio (AR or BR), the $\pmb{\beta}^{(s)}$ are the entries in the $\pmb{\beta}$ vectors corresponding to scorekeeper $s$, and the $\bar{\pmb{\beta}}$ are the averages of the elements in the corresponding $\pmb{\beta}$ vectors. The resulting $\text{PR}_{H_s}$ and $\text{PR}_{A_s}$ values for all 30 scorekeepers for both AR and BR are presented in Figure \ref{sk_impact_on_ar_br}. Note that the scaled $\pmb{\beta}_{G}$ values are the differences between the league average and the away team predicted ratios, while the scaled $\pmb{\beta}_{B}$ values can be determined by subtracting the away team predicted ratios from the home team predicted ratios. Some of the observations from Figure \ref{ar_and_br} still hold. For example, both AR figures indicate the Utah Jazz scorekeeper is unbiased but not generous. However, there are also substantial differences between the figures. The BR results for the Atlanta Hawks and Sacramento Kings scorekeepers are nearly on opposite ends of the home team block ratio range in Figure \ref{ar_and_br}, but in Figure \ref{sk_impact_on_ar_br}, the two scorekeepers have very similar results.
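Given the fitted generosity and bias coefficients, the predicted ratios above reduce to the following computation (a sketch; \texttt{beta\_G} and \texttt{beta\_B} are assumed to be pandas Series indexed by scorekeeper):
\begin{verbatim}
def predicted_ratios(league_ratio, beta_G, beta_B, s):
    """Predicted home and away ratios awarded by scorekeeper s."""
    away = league_ratio + (beta_G[s] - beta_G.mean())
    home = away + (beta_B[s] - beta_B.mean())
    return home, away
\end{verbatim}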
\begin{figure*}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{2016_model_ar_logo.png}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{2016_model_br_logo.png}
\end{subfigure}
\caption{Predicted assist ratios (left) or block ratios (right) awarded by each scorekeeper to the home team and an unspecified away team based on the coefficients of the team-level model}
\label{sk_impact_on_ar_br}
\end{figure*}
The model results appear to confirm several interesting characteristics. First, some scorekeepers are biased against their home team. Away teams are much more likely to be awarded an assist by the Jazz scorekeeper or a block from the Dallas Mavericks scorekeeper than the corresponding home teams are. Also, the effect of the scorekeeper on different statistics is not necessarily consistent. The Philadelphia 76ers scorekeeper is among the most likely to award an assist to either team but is among the least likely to award a block, particularly to the home team.
However, when examining how well the models fit the data, we see that the AR model and BR model have coefficients of determination ($R^2$ values) of 0.279 and 0.228, respectively. Thus, while the results provide some indication of the factors influencing the attribution of statistics, there is certainly room for improvement. Additionally, since the scorekeeper bias effect for a team's scorekeeper is only present when a team is at home and there is only a single home effect common to all teams, it is not clear whether the scorekeeper bias effect truly measures scorekeeper bias or whether it measures a team-specific home effect. It may be the case that the average pass made by a team at home is of a different quality than passes made by that team away from home. In the following section, we focus on assists and introduce a new model which uses spatio-temporal information to improve the results and reduce the potential for confounding effects.
\section{A New Assist Model: Adjusting for Context}
\label{new_assists_model}
While assists and blocks are both subjective statistics, the factors involved in their attribution differ. Blocks occur in an instant (the moment that a defender makes contact with an opposing player's shot) while assists involve two components (a pass and a made basket) which can develop over the course of several seconds and can include additional actions such as pivots, dribbles, and defender movements. Thus the context surrounding an assist is fundamental to its attribution. This section takes advantage of this context to build a contextual assist model that improves upon the currently available methods.
For this new model, we narrow our focus to the level of individual passes and examine passes with the potential to be recorded as assists. We define a potential assist to be a completed pass from a passer to a shooter who then scores a field goal within seven seconds of receiving the pass. The shooter is permitted to dribble and move after receiving the pass, as long as he maintains possession of the ball until the successful shot (no rebounds, turnovers, or additional passes may occur). Note that while an inbounds pass can be credited as an assist, for simplicity we will only examine passes made while the ball was in play.
\subsection{Extracting Spatio-Temporal Features}
\label{spatio-temporal_features}
When examining an individual potential assist, there are many contextual spatio-temporal factors that influence its probability of being recorded as an assist by the scorekeeper. Characteristics of the shooter's possession, such as possession length, number of dribbles, and distance traveled, are particularly relevant to the determination of assists, since the pass must be considered to lead directly to the made field goal. The locations of the passer and shooter may also influence the probability of a recorded assist. In order to measure these location impacts, we divide the court into the 6 distinct zones displayed in Figure \ref{court_zone_figure}.
\begin{figure*}
\centering
\includegraphics[width=0.6\textwidth]{court_zone_figure.pdf}
\caption{The offensive half court divided into six court zones: Dunk, Paint, Long 2, Arc 3, Corner 3, and Heave. The gray lines are court lines that do not divide the zones, the solid black lines are court lines that divide the zones, and the dashed black lines divide the zones but are not found on the court. The gray line surrounding the Dunk zone forms a circle of radius three feet centered at the center of the basket, the gray lines separating the Corner 3 and Arc 3 zones are drawn horizontally from the sidelines to the points at which the three point line begins to curve, and the dividing line between the Arc 3 and Heave zones is drawn ten feet beyond the three point arc. Note that the Heave zone extends beyond the half court line and covers the remainder of the court.}
\label{court_zone_figure}
\end{figure*}
To measure the contextual variables for potential assists, we use SportVu optical player tracking data from STATS LLC which contains the X- and Y-coordinates of each of the 10 players on the floor and X-, Y-, and Z-coordinates of the ball, which are recorded 25 times per second throughout the course of a game. Annotations for events including passes, dribbles, and shots are also included, as well as additional information for player and team identification, dates and times, and game clock and shot clock times. The data set contains game data for 1227 of the 1230 2015-2016 NBA regular season games and all teams have at least 81 of their 82 total games included. We examine the 82,493 potential assists contained in the data set, of which, 54,111 (65.59\%) were recorded assists.
The spatio-temporal context variables that will be included in our new contextual model are presented in Table \ref{context_parameters}. Note that the event annotations in the data set mark when a player releases or receives a pass and when the ball is dribbled or released for a shot. Thus, the methods of obtaining values for $C_{(1)}, C_{(2)}, C_{(4)}, C_{(7)}, C_{(8)},$ and $C_{(9)}$ are straightforward. Since the SportVu location data is noisy, summing the distance between each observation for a player's location over the range of time that player is in possession of the ball is likely to overestimate the total distance traveled by that player. To correct for this, we take advantage of the NBA traveling violation, which prevents a player in control of the ball from moving without dribbling the ball (with the exception of pivoting or of two steps allowed immediately after receiving a pass or concluding a final dribble), and define $C_{(3)}$ to be the sum of the distances between the observations of a player's location when he receives possession of the ball, each time he dribbles the ball, and when he releases a shot. Finally, $C_{(5)}$ and $C_{(6)}$ are determined by calculating the distance between the corresponding offensive player and each of the five opposing players on the court (at the defined moment in time) and taking the minimum of those five distance values. The stages of a sample potential assist are displayed in Figure \ref{sample_play} to demonstrate the computation of the parameters in Table \ref{context_parameters}.
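The two less standard computations, the possession travel distance $C_{(3)}$ and the nearest defender distances $C_{(5)}$ and $C_{(6)}$, can be sketched as follows (coordinates are assumed to be in feet, with one row per tracked location; function and argument names are placeholders):
\begin{verbatim}
import numpy as np

def distance_traveled(event_xy):
    """C3: summed straight-line distance between the shooter's location
    at ball receipt, each dribble, and shot release ((k, 2) array)."""
    pts = np.asarray(event_xy, dtype=float)
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

def nearest_defender_distance(player_xy, defenders_xy):
    """C5/C6: distance to the closest of the five defenders on the court."""
    d = np.asarray(defenders_xy, dtype=float)   # (5, 2) defender locations
    p = np.asarray(player_xy, dtype=float)      # (2,) offensive player
    return float(np.min(np.linalg.norm(d - p, axis=1)))
\end{verbatim}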
\begin{table}
\caption{Spatio-temporal variables for the contextual model}
\label{context_parameters}
\begin{tabularx}{\textwidth}{ l X }
\hline\noalign{\smallskip}
\textbf{Notation} & \textbf{Definition} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$C_{(1)}$ & Continuous variable denoting the time (seconds) of the shooter's possession \\
$C_{(2)}$ & Discrete variable denoting the number of dribbles taken during the shooter's possession \\
$C_{(3)}$ & Continuous variable denoting the distance (feet) traveled by the shooter during possession \\
$C_{(4)}$ & Continuous variable denoting the distance (feet) traveled by the pass \\
$C_{(5)}$ & Continuous variable denoting the distance (feet) of the nearest defender to the passer at the time of the pass \\
$C_{(6)}$ & Continuous variable denoting the distance (feet) of the nearest defender to the shooter at the start of the shooter's possession \\
$C_{(7)}$ & $6 \times 1$ indicator vector denoting the court zone corresponding to the passer at the time of the pass \\
$C_{(8)}$ & $6 \times 1$ indicator vector denoting the court zone corresponding to the shooter at the start of the shooter's possession \\
$C_{(9)}$ & $36 \times 1$ indicator vector denoting the interaction of passer and shooter court zones ($C_{(7)}$ and $C_{(8)}$) \\
\hline\noalign{\smallskip}
\end{tabularx}
\end{table}
\begin{figure*}
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\caption{}
\includegraphics[width=\textwidth]{3panelplot-pass.pdf}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\caption{}
\includegraphics[width=\textwidth]{3panelplot-dribble.pdf}
\end{subfigure}
\caption{Optical tracking data for the stages of a potential assist which occurred in the first quarter of a January 7, 2015 game featuring the Los Angeles Clippers hosting the Los Angeles Lakers. Each point represents one of the 10 players on the floor, with the lighter points representing Clippers players (on offense) and the darker points representing Lakers players (on defense). The ball is marked by the black dot on top of the player who has possession. In \textbf{i}, Clippers guard Chris Paul is in possession of the ball in the $C_{(7)}=$ Paint court zone and is about to make a pass of distance $C_{(4)} = 11.09$ ft along arrow \textbf{a} to Clippers forward Blake Griffin. At the moment of the pass, the distance of the nearest defender to Paul is $C_{(5)} = 3.58$ ft, measured by line \textbf{b}. In \textbf{ii}, Griffin receives the pass in the $C_{(8)}=$ Long 2 court zone. At this moment, the distance of the nearest defender to Griffin is $C_{(6)} = 13.63$ ft, measured by line \textbf{c}. After receiving the pass (\textbf{d0}), Griffin drives to the basket, takes $C_{(2)} = 2$ dribbles (\textbf{d1} and \textbf{d2}), and shoots the ball (\textbf{d3}) (the locations of the other players during the drive are held constant for simplicity). During this drive, Griffin travels $C_{(3)} = 20.41$ ft over $C_{(1)} = 1.82$ seconds. On this play, Griffin's shot attempt was successful and the Clippers scorekeeper awarded Paul an assist.}
\label{sample_play}
\end{figure*}
\subsection{Estimating the Contextual Assist Model}
\label{estimating_new_assist_model}
For the contextual assist model, we use logistic regression to predict the probability of the $j^{th}$ potential assist being recorded as an assist, given a variety of contextual factors. The contextual model takes the form
\begin{equation}
\label{contextual_model}
P(A_j = 1) = \sigma
\begin{pmatrix}
\beta^*_0 + \text{H}_{j} \beta^*_H + \text{T}_{j} \pmb{\beta}^*_T + \text{O}_{j} \pmb{\beta}^*_O + \text{S}_{j} \pmb{\beta}^*_{G} + \text{S}'_{j} \pmb{\beta}^*_{B} \\
+ \text{N}_j \pmb{\beta}^*_N + \text{P}_j \pmb{\beta}^*_P + \displaystyle\sum_{k=1}^{9} C_{(k)j} \pmb{\beta}^*_{C_k}
\end{pmatrix}
\end{equation}
where $\sigma(x) = \exp(x)/(1+\exp(x))$ and $A_j$ is an indicator function equal to 1 when potential assist $j$ is a recorded assist. The terms common to the team-level model (Model \ref{simple_model}) share the same definitions as outlined in Table \ref{initial_parameters}, except that the index $j$ refers to a single potential assist achieved in a given team-game combination. For the new model terms, $N_j$ is an indicator vector denoting the name of the passer (from the 486 unique passers in the data), $P_j$ is an indicator vector denoting the primary position (point guard, shooting guard, small forward, power forward, or center) of the passer, and the $C_{(k)j}$ variables are the additional spatio-temporal context variables defined in Table \ref{context_parameters}. Finally, $\beta^*_0$ and $\beta^*_H$ are estimated coefficients and the $\pmb{\beta}^*_l$ coefficient vectors are $1 \times n_l$ row vectors of estimated coefficients, where $n_l$ is the number of rows of the observation vector which is multiplied by the corresponding coefficient vector. In estimating the model, we use logistic regression with an L2 penalty on the $\beta^*$ coefficients, with the penalty weight $\lambda$ selected through cross-validation. Thus, the model estimation solves
\[
\min_{\pmb{\beta}^*} \left[ - \frac{1}{N} \sum^N_{j=1} \left( A_j M_j - \log(1 + e^{M_j}) \right) + \lambda ||\pmb{\beta}^*||^2 \right]
\]
over a grid of values of $\lambda$ covering the range of interest, where $\pmb{\beta}^*$ is a vector of all $\beta^*$ coefficients estimated in the contextual model, $N=82,493$ (the number of potential assists in the data set), and $\sigma(M_j)$ is the right hand side of Equation \ref{contextual_model} describing the contextual model for potential assist $j$.
The inclusion of contextual information in the model fixes a key issue of Model \ref{simple_model}. Using Model \ref{simple_model}, it is impossible to isolate the impacts of scorekeeper bias and specific home team effects within the scorekeeper bias effect, since passes made by some or all home teams may differ in quality from passes made by those teams when they are away from home. However, with the inclusion of measures of pass quality within the model (nearest defender distance to shooter when the pass is received, number of dribbles and distance traveled by shooter to attempt a shot, pass location, and pass distance), the only possible confounding effects are other measures of pass quality not included in the model. Therefore, we can confidently assume that true scorekeeper bias is the main factor influencing the estimated scorekeeper bias effect.
Our choice of an L2 penalty differs from that of \cite{hockey_regression} who choose an L1 penalty for their logistic regression model estimating player contribution in hockey. Their decision is based on the variable selection benefits of the L1 penalty. However, the covariates of their model are limited to teams and players while our model contains additional covariates, including several contextual covariates that are highly correlated (such as number of dribbles and distance traveled). Because of this, an L2 penalty is a natural choice for its improved predictive performance in the presence of collinearity \citep{tibshirani}.
It is important to note that the results of the contextual assist model depend on the selected value of $\lambda$. In particular, the estimated coefficients for variables with relatively few observations (such as some passer effects in $\pmb{\beta}^*_N$ and some court zone interaction effects in $\pmb{\beta}^*_{C_9}$) are more sensitive to shrinkage. Thus, while the relative order of the resulting effects associated with these coefficient values are largely unaffected, the magnitudes of the effects are impacted by the selected $\lambda$. In order to mitigate this impact we use 100-fold cross validation to select the optimal $\lambda$ value and use this value to estimate the model used to generate the results presented in the remainder of this paper.
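For illustration, the penalized fit can be carried out with off-the-shelf software; a sketch using scikit-learn is shown below (the choice of software here is an assumption made for illustration only). The design matrix \texttt{X} is assumed to already contain the indicator and contextual covariates of Equation \ref{contextual_model}, and scikit-learn parameterizes the penalty as $C = 1/\lambda$, so selected values correspond to $\lambda$ only up to that reparameterization.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

def fit_contextual_model(X, y, n_folds=100):
    """L2-penalized logistic regression with cross-validated penalty.
    X: design matrix of potential assists, y: 1 if recorded assist else 0."""
    model = LogisticRegressionCV(
        Cs=np.logspace(-4, 4, 25),    # grid covering the range of interest
        cv=n_folds,
        penalty='l2',
        scoring='neg_log_loss',
        solver='lbfgs',
        max_iter=5000,
    )
    return model.fit(X, y)
\end{verbatim}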
\subsection{Contextual Assist Model Results}
\label{new_assist_model_results}
The primary focus of this section is to examine the results corresponding to the new variables introduced to the contextual model that were not included in the team-level model. However, we first examine the impact of these additional variables on the scorekeeper effects. Comparing the $\pmb{\beta}_{G}$ and $\pmb{\beta}_{B}$ coefficients of the team-level model to the $\pmb{\beta}^*_{G}$ and $\pmb{\beta}^*_{B}$ coefficients of the contextual model, both pairs of estimated coefficients are positively correlated. However, the correlation of $\pmb{\beta}_{G}$ and $\pmb{\beta}^*_{G}$ is 0.892 compared to only 0.597 for $\pmb{\beta}_{B}$ and $\pmb{\beta}^*_{B}$. One possible explanation for this difference is that the range of generosity coefficient values compared to the bias values is much greater in the contextual model. Thus, it may be the case that the variation explained by $\pmb{\beta}^*_{B}$ is better explained by $\pmb{\beta}^*_{G}$ or other new coefficients in the contextual model. Another possible explanation is that the bias values are estimated using fewer observations (since bias coefficients apply only to the home team of each game and the generosity coefficients apply to both teams) making them less reliable compared to the generosity values.
Shifting focus to the new variables introduced in the contextual model, we can isolate the impact of a single variable on the probability of a potential assist being recorded as an assist by examining an average potential assist, and observing how its recorded assist probability changes as we manipulate the value of the variable of interest. An average potential assist is a potential assist with no impact from the indicator variables or vectors, and the average values over all potential assists of the other variables. Thus an average potential assist is a pass that travels 18.21 feet from a passer whose nearest defender is 6.67 feet away, to a shooter whose nearest defender is 9.63 feet away when he catches the ball and who then takes 1.87 dribbles, traveling 16.00 feet, over 2.59 seconds before scoring a field goal. The model predicts that this average potential assist has a 39.23\% chance of being a recorded assist.
Let $V$ be the sum of the estimated intercept coefficient and the average potential assist values multiplied by their corresponding estimated coefficient values, that is $V = \beta^*_0 + 2.59 \beta^*_{C_1} + 1.87 \beta^*_{C_2} + 16.00 \beta^*_{C_3} + 18.21 \beta^*_{C_4} + 6.67 \beta^*_{C_5} + 9.63 \beta^*_{C_6}$. Also, let $I$ be any variable of interest from the contextual assist model, with corresponding estimated coefficient vector $\pmb{\beta}_I^*$. If $I$ is a variable included in $V$, redefine $V$ without the corresponding variable term. The influence of a given value $I^*$ of the variable of interest $I$ has the following effect ($E$) on the probability of an average potential assist being recorded as an assist:
\begin{equation}
E = \sigma \left(V + I^* \pmb{\beta}_I^* \right) - \sigma \left(V\right)
\label{effect_computation}
\end{equation}
where $E$ is measured in units of change in probability.
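In code, this effect computation is a one-line difference of logistic transforms (a sketch; $V$, $I^*$, and $\pmb{\beta}_I^*$ are passed in as numeric values or aligned arrays):
\begin{verbatim}
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def effect(V, I_star, beta_I):
    """Change in recorded-assist probability for the average potential
    assist when the variable of interest takes the value I_star."""
    return sigmoid(V + np.dot(I_star, beta_I)) - sigmoid(V)
\end{verbatim}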
We first use the above effect computation expression to examine the impact of some of the contextual variables in the model. Figure \ref{time_dribbles} demonstrates the effects of changing the possession length or the number of dribbles of the average potential assist. While both variables affect the probability of a potential assist being a recorded assist, possession length has a more substantial effect, as demonstrated by the wider range of probabilities and the more drastic decrease in probability as the value of the variable increases.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{time_dribbles.pdf}
\caption{Select predicted recorded assist probabilities of the average potential assist as a function of the continuous possession length (left) or the discrete number of dribbles (right), computed using Equation \ref{effect_computation}}
\label{time_dribbles}
\end{figure*}
We also examine the impact of the passer and shooter locations. Here we ignore passes to or from the Heave zone as they have relatively low probabilities of being recorded assists. The resulting impact of each remaining pair of zones, and the frequency of passes between them, are presented in Figure \ref{court_zone_results}. Accounting for the other contextual model factors (including player positions, which are discussed in the following paragraph), for passer locations in zones closer to the basket (Paint and Dunk), passes to the Corner 3 zone are the most likely to be recorded assists. For the Long 2 and Arc 3 zones, passes to the Paint zone are those most likely to be recorded assists. For four of the zones, passes to the Dunk zone are the least likely to lead to assists (for the Arc 3 zone, passes within that zone are slightly less likely to be recorded assists). Overall, passes from the Arc 3 zone to the Paint zone and passes within the Dunk zone are respectively the most and least likely to be recorded as assists.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{court_zone_results_bw.pdf}
\caption{The impact of passer and shooter location on the recorded assist probability of an average potential assist and the frequency of passes between each location pair. For each pair of zones, the arrow points in the direction of the pass. Note that the point in each zone represents passes made within that zone.}
\label{court_zone_results}
\end{figure*}
Next, we shift our focus to the non-contextual variables of the contextual model, beginning with the primary position of the passer. The results of the position effects are displayed in Table \ref{position_effects} and show that even after accounting for all contextual variables in the model, the probability of an average potential assist being a recorded assist is greatest for point guards and least for centers, with a 7.77\% difference in probabilities between the two positions. We propose two possible explanations for this discrepancy. First, the average pass by a point guard may have a higher probability of being a recorded assist due to characteristics beyond the scope of the contextual model. However, if the model captures all important characteristics, then the discrepancy may be the result of bias from scorekeepers with respect to passer positions.
\begin{table}
\caption{Passer position effects where ``Effect'' is the isolated effect of the passer position on the recorded assist probability of an average assist and is computed using Equation \ref{effect_computation}}
\label{position_effects}
\begin{tabular}{l c c c c c}
\hline\noalign{\smallskip}
\textbf{Position} & Point Guard & Shooting Guard & Small Forward & Power Forward & Center \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\textbf{Effect (\%)} & 3.76 & 0.59 & -0.28 & -0.33 & -4.01 \\
\hline\noalign{\smallskip}
\end{tabular}
\end{table}
Continuing with the non-contextual variables, we end the results examination of the contextual model with the impact of the passer on the probability of a recorded assist. The results for the top and bottom ten passer effects are displayed in Table \ref{passer_effects}. Since passer positions are accounted for separately, they should not be the primary influence of passer coefficients. This idea holds true in the results as both the top and bottom ten include players from all five positions. The difference in effects between the player with the highest effect (Collison) and the player with the lowest effect (Roberson) is a substantial 27.41\%. Similarly to the position effect, we suspect the differing passer effects are the result of either characteristics of passes not picked up by the model, or bias from scorekeepers with respect to individual passers.
\addtolength{\tabcolsep}{-3pt}
\begin{table}
\caption{NBA players with the top and bottom ten passer effects where ``Effect'' is the isolated effect of the passer on the recorded assist probability of an average assist (computed using Equation \ref{effect_computation}), and ``Rank'' is the rank of ``Effect'' in decreasing order (and ranges from 1 to 473)}
\label{passer_effects}
\begin{minipage}{.5\linewidth}
\begin{tabular}{l l c c }
\hline\noalign{\smallskip}
\textbf{Rank} & \textbf{Passer Name} & \textbf{Position} & \textbf{Effect (\%)} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
1 & Nick Collison & C & 14.42 \\
2 & James Harden & SG & 13.29 \\
3 & LeBron James & SF & 13.03 \\
4 & Russell Westbrook & PG & 12.06 \\
5 & Josh McRoberts & PF & 11.61 \\
6 & G. Antetokounmpo & SF & 11.55 \\
7 & Chris Paul & PG & 11.52 \\
8 & Luke Babbitt & SF & 11.37 \\
9 & Tony Allen & SF & 11.20 \\
10 & Ricky Rubio & PG & 10.71 \\
\hline\noalign{\smallskip}
\end{tabular}
\end{minipage}
\vrule{}
\begin{minipage}{.5\linewidth}
\begin{tabular}{l l c c }
\hline\noalign{\smallskip}
\textbf{Rank} & \textbf{Passer Name} & \textbf{Position} & \textbf{Effect (\%)} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
473 & Andre Roberson & SG & -12.99 \\
472 & Andre Drummond & C & -11.63 \\
471 & Richard Jefferson & SF & -11.23 \\
470 & Quincy Acy & PF & -10.75 \\
469 & Tony Wroten & PG & -10.41 \\
468 & Hassan Whiteside & C & -9.88 \\
467 & Enes Kanter & C & -9.49 \\
466 & Lavoy Allen & C & -9.30 \\
465 & Andre Miller & PG & -9.21 \\
464 & Hollis Thompson & SG & -9.07 \\
\hline\noalign{\smallskip}
\end{tabular}
\end{minipage}
\end{table}
\addtolength{\tabcolsep}{3pt}
\subsection{Model Validation and Consistency}
\label{model_validation_and_consistency}
We wish to test the accuracy of our model in predicting whether a new potential assist will be recorded as an assist. To measure this accuracy, we use 10-fold cross validation to obtain mean log likelihood values and misclassification rates for the held-out data. The misclassification rate is computed by using the model as a classification tool with a probability cutoff of 0.5. We also compare the accuracy results of our model to those of three other models. That is, we compare the results for the full contextual model (Model \ref{contextual_model}) to the results for a model with no scorekeeper covariates, a model with no context covariates, and an intercept model. The model with no scorekeeper covariates is simply Model \ref{contextual_model} without the scorekeeper generosity and scorekeeper bias information. Comparing this model to the full model demonstrates whether the inclusion of the scorekeeper information improves the model. The model with no context covariates removes all information obtained from the optical tracking data, leaving Model \ref{simple_model}. Comparing this model to the full model demonstrates the performance differences between the existing methods and our new methods. Finally, the intercept model contains only an intercept term and no other covariates, and thus it treats every potential assist in an identical fashion (classifying each as a recorded assist). This model provides an estimate of the baseline accuracy to compare with the other models.
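A sketch of this evaluation loop is shown below, assuming numpy arrays \texttt{X} and \texttt{y} and a fixed penalty setting \texttt{C} in scikit-learn's parameterization; the reduced-covariate and intercept-only comparisons simply swap in different design matrices.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

def cross_validate(X, y, C, n_folds=10, seed=0):
    """Out-of-sample mean log likelihood and misclassification rate
    (0.5 probability cutoff) for an L2-penalized logistic regression."""
    ll, err = [], []
    folds = KFold(n_folds, shuffle=True, random_state=seed)
    for train, test in folds.split(X):
        fit = LogisticRegression(C=C, penalty='l2', solver='lbfgs',
                                 max_iter=5000).fit(X[train], y[train])
        p = np.clip(fit.predict_proba(X[test])[:, 1], 1e-15, 1 - 1e-15)
        ll.append(np.mean(y[test] * np.log(p) +
                          (1 - y[test]) * np.log(1 - p)))
        err.append(np.mean((p >= 0.5) != y[test]))
    return np.mean(ll), np.mean(err)
\end{verbatim}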
The model validation results are presented in Table \ref{model_validation}. The results show that our new methods (Model \ref{contextual_model}) far outperform the previous best practice (Model \ref{simple_model}), leading to a misclassification rate of only 0.066. Additionally, by these metrics, the previous best practice had little to no improvement over simply using an intercept model. Finally, the results demonstrate a noticeable improvement with scorekeeper covariates included, even with all other covariates present.
\begin{table}
\caption{Out of sample mean log likelihood and misclassification rate results from a 10-fold cross validation performed on the set of all potential assists from the 2015-2016 season. The mean log likelihood values represent the average value for a single out of sample observation.}
\label{model_validation}
\begin{tabular}{l c c c}
\hline\noalign{\smallskip}
\textbf{Model} & \textbf{Mean Log Likelihood} & \textbf{Misclassification Rate} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Model \ref{contextual_model} (Full Contextual Model) & -0.176 & 0.066 \\
No Scorekeeper Covariates & -0.182 & 0.070 \\
Model \ref{simple_model} (No Context Covariates) & -0.638 & 0.344 \\
Intercept Model & -0.644 & 0.344 \\
\hline\noalign{\smallskip}
\end{tabular}
\end{table}
We now wish to examine the stability of our model across NBA seasons. Optical tracking data is available in all stadiums for each of the seasons ending in 2014, 2015, and 2016, so we estimate models for each of these seasons and compare the resulting coefficient values. The results of this comparison are presented in Table \ref{model_comparison} in the form of correlation values across seasons for groups of coefficients. The results show that the coefficients which are common to all teams and scorekeepers (position and court zone effects) are all highly correlated across seasons. This result implies that these coefficients are detecting a meaningful signal that remains consistent over time. Additionally, the scorekeeper bias and scorekeeper generosity effects display a moderate and a high level of correlation, respectively. Again, it appears as though these coefficients are detecting a meaningful and consistent signal. As previously discussed, the scorekeeper bias coefficients are estimated using less data than the generosity coefficients, likely leading to their slightly reduced correlation values. Finally, both the opponent and team coefficients have little to no correlation across seasons. This result is expected since the players, coaches, and playing styles of a team are much more likely to change across seasons than the scorekeepers of a team. Thus, the consistency of these coefficients is highly team dependent. For example, across the three seasons, both the team (0.014, 0.020, and 0.068) and opponent (0.062, 0.046, -0.011) coefficients of the Boston Celtics have remained fairly consistent. This consistency may be explained by the system implemented by Brad Stevens, head coach of the Celtics across all three seasons. Conversely, in the 2014 off season, LeBron James announced his return to the Cleveland Cavaliers. In addition, the team acquired Kevin Love through trade and hired David Blatt to be its new head coach. This coaching change and addition of two all-star players drastically altered the playing style of the team, and this change was reflected in both the team (-0.280 and 0.326) and opponent (-0.012 and 0.217) coefficients across the seasons ending in 2014 and 2015.
\begin{table}
\caption{Correlation of estimated coefficient values for a variety of effects across models estimated for the seasons ending in 2014, 2015, and 2016. Each estimated model follow the form of Model \ref{contextual_model}. The effects are sorted in increasing order by their 2014 and 2016 correlation values.}
\label{model_comparison}
\begin{tabular}{l r r r}
\hline\noalign{\smallskip}
\textbf{Effect} & \textbf{2014 and 2015} & \textbf{2015 and 2016} & \textbf{2014 and 2016} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Opponent & -0.065 & 0.003 & -0.164 \\
Team & -0.284 & -0.001 & 0.204 \\
Scorekeeper Bias & 0.486 & 0.631 & 0.460 \\
Court Zone Interaction & 0.820 & 0.742 & 0.644 \\
Passer Court Zone & 0.688 & 0.924 & 0.757 \\
Scorekeeper Generosity & 0.887 & 0.856 & 0.776 \\
Positions & 0.965 & 0.953 & 0.898 \\
Shooter Court Zone & 0.920 & 0.990 & 0.903 \\
\hline\noalign{\smallskip}
\end{tabular}
\end{table}
\subsection{Adjusting Player Assist Totals}
\label{adjusting_player_assist_totals}
Since we have isolated the impact of each variable in the contextual model on the probability of a potential assist being a recorded assist and verified that Model \ref{contextual_model} produces accurate prediction results, we can use the model to compute adjusted assist totals for each player. We compute adjusted assists using Equation \ref{effect_computation} to estimate the predicted probability of a pass being a recorded assist after removing the effects of all non-contextual variables (scorekeeper effects, position effects, etc.). As opposed to the adjustment methods of \cite{park_factors} and \cite{schuckers}, our method examines every potential assist individually. Our method also allows us to determine the expected number of recorded assists gained or lost by a player due to an individual factor, such as the passer effect for that player. The ten players with the greatest increase and the greatest decrease in total assists after this adjustment are presented in Table \ref{adjusted_results}. The results show that the ``Home Scorekeeper'' effect, which is the sum of all assists gained or lost due to the generosity and bias of the home scorekeeper for a player, tends to be the greatest potential deflator of the players' recorded totals. This observation is emphasized by the fact that seven of the ten players with the greatest increase in assists were members of the Utah Jazz, the team with the most negative scorekeeper bias coefficient. Examining the players whose assist totals decrease, the ``Home Scorekeeper'' effect is again often an important factor, as is the ``Passer'' effect. For point guards, the ``Position'' effect also contributed to the decreased totals. Conversely, the ``Away Scorekeeper'' effect, which is the sum of all assists gained or lost due to the generosity of the away scorekeepers for a player, tends to be relatively insubstantial across all players.
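The sketch below is illustrative only and shows one way such an adjustment could be organized; it assumes that the full-model linear predictors for a player's potential assists, and each non-contextual effect's contribution to them (the corresponding $x\beta$ terms), have already been extracted, and all function and argument names are placeholders.
\begin{verbatim}
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def assists_from_effect(eta_full, contrib):
    """Expected recorded assists attributable to one effect, summed over
    a player's potential assists.  eta_full: full-model linear predictors;
    contrib: that effect's contribution (x * beta) to each predictor."""
    return float(np.sum(sigmoid(eta_full) - sigmoid(eta_full - contrib)))

def adjusted_assists(recorded_total, eta_full, noncontextual_contribs):
    """Recorded total minus the expected assists gained (or plus those
    lost) due to each non-contextual effect."""
    return recorded_total - sum(assists_from_effect(eta_full, c)
                                for c in noncontextual_contribs)
\end{verbatim}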
\addtolength{\tabcolsep}{-2.5pt}
\begin{table}[ht]
\caption{Comparisons of the total recorded and adjusted assists for the 10 players experiencing the greatest increases and the 10 players experiencing the greatest decreases. The adjusted totals are computed using Equation \ref{effect_computation} to compute the effects of all non-contextual variables and remove them from the recorded totals. The ``Assist Change'' column measures the difference between the recorded and adjusted totals. The ``Original Rank'' and ``Adjusted Rank'' columns provide the players' ranks (1-473) before and after the adjustment respectively. The four right-most columns display the estimated number of additional assists a player originally received due to the given effect (SK is short for scorekeeper) which were removed in the adjustment process. Note that not all factors in the adjustment are displayed, so the ``Assist Change'' column is not equal to the negative sum of the four displayed adjustment effects.}
\label{adjusted_results}
\begin{tabular}{ l r r r r r r r r }
\hline\noalign{\smallskip}
& \multicolumn{1}{c}{\textbf{Assist}} & \multicolumn{1}{c}{\textbf{Recorded}} & \multicolumn{1}{c}{\textbf{Original}} & \multicolumn{1}{c}{\textbf{Adjusted}} &
& & \multicolumn{1}{c}{\textbf{Home}} & \multicolumn{1}{c}{\textbf{Away}} \\
\textbf{Player} & \multicolumn{1}{c}{\textbf{Change}} & \multicolumn{1}{c}{\textbf{Assists}} & \multicolumn{1}{c}{\textbf{Rank}} & \multicolumn{1}{c}{\textbf{Rank}} & \multicolumn{1}{c}{\textbf{Position}} & \multicolumn{1}{c}{\textbf{Passer}} & \multicolumn{1}{c}{\textbf{SK}} & \multicolumn{1}{c}{\textbf{SK}} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Gordon Hayward & 28.38 & 287 & 46 & 36 & -0.26 & -2.80 & -24.90 & -0.17 \\
George Hill & 18.64 & 257 & 55 & 48 & 3.98 & -4.53 & -20.60 & 0.25 \\
Trevor Booker & 18.29 & 83 & 222 & 187 & -0.19 & -3.75 & -12.71 & -0.01 \\
Rodney Hood & 16.83 & 208 & 71 & 67 & 0.41 & 0.25 & -16.74& -1.39 \\
Joe Ingles & 16.74 & 93 & 203 & 171 & 0.29 & -4.35 & -11.28 & 0.58 \\
Lavoy Allen & 12.93 & 76 & 234 & 214 & -1.60 & -3.88 & -6.03 & 0.71 \\
Tyson Chandler & 12.88 & 64 & 262 & 233 & -2.09 & -4.11 & -7.06 & -0.63 \\
Derrick Favors & 11.73 & 93 & 204 & 178 & -0.20 & 0.32 & -11.18 & -0.92 \\
Trey Lyles & 10.16 & 60 & 273 & 242 & -0.11 & -1.33 & -8.22 & 0.35 \\
P.J. Tucker & 9.83 & 176 & 96 & 81 & -0.19 & -2.34 & -9.03 & -1.62 \\
\hline\noalign{\smallskip}
Ish Smith & -19.60 & 495 & 11 & 12 & 4.99 & 1.13 & 11.78 & -0.66 \\
James Harden & -22.63 & 602 & 6 & 6 & 0.82 & 16.37 & 6.36 & -2.65 \\
Reggie Jackson & -24.34 & 483 & 14 & 15 & 5.79 & 6.76 & 10.66 & -0.22 \\
Jrue Holiday & -27.22 & 386 & 20 & 22 & 4.56 & 2.28 & 14.11 & -1.74 \\
LeBron James & -29.95 & 505 & 10 & 13 & -0.32 & 13.32 & 14.72 & -0.90 \\
Tony Parker & -30.07 & 377 & 21 & 27 & 3.83 & 7.97 & 9.67 & -0.05 \\
Draymond Green & -30.09 & 587 & 7 & 7 & -0.47 & 12.41 & 6.26 & -1.02 \\
G. Antetokounmpo & -30.56 & 342 & 29 & 39 & -0.38 & 14.23 & 4.96 & 1.69 \\
Ricky Rubio & -31.12 & 645 & 5 & 5 & 8.06 & 21.67 & -7.03 & -0.14 \\
Chris Paul & -34.80 & 729 & 4 & 4 & 5.73 & 16.34 & 11.06 & -0.90 \\
\hline\noalign{\smallskip}
\end{tabular}
\end{table}
\addtolength{\tabcolsep}{2.5pt}
\section{Impact of Scorekeeper Inconsistency on Daily Fantasy Sports}
\label{daily_fantasy}
In daily fantasy contests, individuals pay entrance fees, select a roster of players who generate fantasy points, and potentially receive a payout depending on the performance of their roster, all within the span of 24 hours. The popularity of such games is increasing, and so too is the amount of money at stake. In 2014, FanDuel Inc. and DraftKings Inc., currently the two largest daily fantasy operators in North America, together awarded over \$800 million in prizes across all sports and pledged to increase that number to over \$3 billion in 2015 \citep{okeeffe}. Since the NBA daily fantasy contest scoring systems for both companies rely exclusively on box score statistics (including assists and blocks), scorekeeper behaviour has a significant influence on these scores.
In addition to the overall bias or generosity of scorekeepers, the variability of their behaviour is also important to daily fantasy participants. Depending on their selection criteria and the contest they enter, a participant may either seek or avoid a player in a game whose scorekeeper has a high level of variability in the recording of statistics, since this variability affects the overall variability of a player's performance.
Using the contextual model, we can compute adjusted assist totals for each team in all 1227 games in the data set by computing the sum of the expected probabilities of the potential assists after removing the estimated scorekeeper effects. These adjusted values represent the expected assist totals for each team in every game if an average scorekeeper had recorded the statistics. We can then compare the number of recorded assists to the number of adjusted assists and collect the difference values for both the home and away teams for each scorekeeper to obtain a home and away ``scorekeeper bonus'' distribution for each of the 30 NBA scorekeepers. The home and away team distributions for 5 selected scorekeepers are presented in Figure \ref{assist_beanplot}. The means of the distributions range from -3.44 for the home team of the Utah Jazz scorekeeper to 2.32 for the home team of the New Orleans Pelicans scorekeeper. Given that teams averaged 22.05 assists per game over the 1227 games in the data set, this difference of 5.76 assists per game is substantial. Both the home and away teams are more likely to get extra assists when playing in Atlanta, where almost all of both distributions are above zero, than in Utah, where the majority of observations are below zero. Additionally, the Pelicans scorekeeper is the most unpredictable, with the greatest distribution variance values for both the home (4.03) and away (4.54) teams. However, not all scorekeepers are inaccurate and inconsistent. The scorekeeper for the Los Angeles Clippers is the most consistent in the league, with a home distribution variance of 1.09 and an away distribution variance of 1.21, while the scorekeeper for the Houston Rockets was the most accurate by the metric of mean absolute distance from zero, with a home value of 1.06 and an away value of 0.997.
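A hedged Python sketch of how these per-scorekeeper ``bonus'' distributions could be assembled, assuming game-level recorded and adjusted team assist totals are already available; the data layout is hypothetical.
\begin{scriptsize}
\begin{verbatim}
import statistics
from collections import defaultdict

def scorekeeper_bonus(games):
    """Collect home and away 'scorekeeper bonus' values per scorekeeper.

    games: iterable of dicts with
      'scorekeeper'                    - the home team's scorekeeper,
      'home_recorded', 'home_adjusted' - home team assist totals,
      'away_recorded', 'away_adjusted' - away team assist totals.
    The bonus for a game is recorded minus adjusted assists.
    """
    home, away = defaultdict(list), defaultdict(list)
    for g in games:
        home[g["scorekeeper"]].append(g["home_recorded"] - g["home_adjusted"])
        away[g["scorekeeper"]].append(g["away_recorded"] - g["away_adjusted"])
    return {sk: {"home_mean": statistics.mean(home[sk]),
                 "home_var": statistics.variance(home[sk]),
                 "away_mean": statistics.mean(away[sk]),
                 "away_var": statistics.variance(away[sk])}
            for sk in home}
\end{verbatim}
\end{scriptsize}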
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{assist_beanplot.pdf}
\caption{Scorekeeper bonus distributions of the home and away teams for 5 selected scorekeepers}
\label{assist_beanplot}
\end{figure*}
\section{Discussion and Conclusion}
\label{conclusion}
In this paper we have presented evidence of inconsistencies in the awarding of box score statistics by the 30 team-hired scorekeepers. To quantify these inconsistencies, we used spatio-temporal data from optical tracking systems to develop a new model for predicting recorded assists. Our model was shown to have a greater predictive accuracy than previous methods, and also allowed us to develop an improved method of statistic adjustment. Though we only presented adjusted results for the contextual model from Section \ref{new_assists_model}, the results of the team-level model from Section \ref{team_adjusted} can also be used to adjust recorded assist and block totals.
In addition to demonstrating inconsistencies in the awarding of assists by the scorekeepers, both to all opposing teams and to their corresponding home teams, the results of the contextual model indicate scorekeepers may have biases in regard to both passer positions and the individual passers. Though this model attempts to include the coefficients we suspect have the greatest impact on the probability of a recorded assist, basketball is a complex system of positioning, events, and interactions, and it is impossible to include all potential factors in any model. As such, the difference in position and passer effects may be the result of characteristics that extend beyond the model. Further work must be completed in order to verify these results.
The same level of detail we used to examine assists could also be applied to the examination of other statistics. We have already presented evidence of scorekeeper inconsistencies in the recording of blocks, and the same may be true for other statistics such as rebounds or turnovers.
In addition to the inconsistencies among the 30 NBA scorekeepers, Section \ref{daily_fantasy} provides evidence of the inconsistency of individual scorekeepers among different games. These inconsistencies have real world consequences, including the rising potential of monetary consequences for daily fantasy participants due to the growing popularity of the contests.
In light of the findings of this paper, the NBA and its players may be well served to adopt a more proactive stance towards monitoring the attribution of subjective box score statistics. While our approach provides an adjustment for players' assist totals, significant work remains to understand the impact scorekeeper inconsistencies have on aggregate metrics (such as PER and WS) and on the salaries, perception, and careers of players.
This work has been partly supported by the Spanish MINECO under project TIN2014-55637-C2-2-R.
\bibliographystyle{style/aaai}
\section{Conclusion}
This paper describes the tourist problem, which consists of creating a personalized tourist agenda that takes into account, apart from the usual constraints and the goal of maximizing the user satisfaction with the visits, other user preferences related to the travel style. We detail how this problem can be solved using both an automated planner and a CSP solver. We tested various plan metrics in two problem sets and showed that a plan metric that takes into account all the activities of the tour, including the travelling times between places, generally yields a better utility for the user.
\section{CSP model}\label{CSP}
This section details the specification of the tourist problem $P^u=<R,V,H,T>$ as a CSP.
\subsection{Constraints}
In this section we explain the constraints that it is necessary to specify in order to correctly solve the tourist problem.
\subsubsection{Plan structure.}
Among the $V$ places recommended by the RS, not all of them will be possibly included in the agenda due to several temporal restrictions. We define an array $P$ of $|V|+3$ elements that is used to record the places that will be included in the tourist route. The $|V|$ variables take a value in $\{0,1\}$ to denote whether or not the respective place is a visit to realize in the route. The three extra variables defined in $P$ represent the initial location of the user (always set to 1), the restaurant (this variable equals 1 if the user selected a lunch time interval) and the destination (always set to 1).
For example, for the first plan of Figure \ref{plans}, and assuming $|V|=6$, the final array $P$ will be $P = \left< Orig, Vis1, Vis2, Vis3, Vis4, Vis5, Vis6, Rest, Dest \right>$, where variables $P_0, P_1, P_2, P_7$ and $P_8$ are set to 1 and the value of variables from $P_3$ ($Vis3$) to $P_6$ ($Vis6$) is 0 because they are not included in the plan.
\subsubsection{Plan sorting.}
The constraints explained in this subsection are devoted to obtaining a correct plan from the point of view of the ordering of the visits. Specifically, assuming that each visit included in a plan is assigned a number in the sequence, a plan is {\em correctly ordered} if the current location of the user is assigned the 0th position and the destination of the user is assigned the nth position (if the plan has n+1 visits). This implies that there are no {\em empty positions} in the plan.
In order to obtain a correct plan, several additional structures and constraints must be added. Let $A_{ij}$ be a 2-dimensional matrix ($(|V|+3) \times (|V|+3)$) with components in a $\{0,1\}$ domain. $A_{ij}$ is used to represent the sequence of visits in the plan, where $i$ is the visited place and $j$ is the order of $i$ in the sequence.
For example, the first plan in Figure \ref{plans} would be stored in a matrix $A$ of $9 \times 9$, where all the elements $A_{ij}$ are equal to 0, except $A_{00}$, which indicates that the user is initially at $start\_location$; $A_{21}$ to represent that the first place to visit is $Vis2$; $A_{72}$ to indicate that next the user heads to the restaurant; $A_{13}$ to represent that in the next step the user visits $Vis1$ and $A_{84}$ to indicate that the user finishes her route at the destination.
$A_{ij}$ must fulfill two conditions: a place can be visited at most once, and each position of the sequence can hold at most one place.
\begin{equation*}
\forall i \sum_{j=0}^{|V|+2}A_{ij} \leq 1
\quad \quad \quad \quad \quad
\forall j \sum_{i=0}^{|V|+2}A_{ij} \leq 1
\end{equation*}
Let $m_{ijk}$ be a 3-dimensional matrix ($(|V|+3) \times (|V|+3) \times (|V|+2)$) whose components take a value in $\{0,1\}$. This matrix establishes a relationship between a place to visit and the next one. That is, $m_{ijk}$ is set to 1 if place $i$ is visited immediately before $j$, and $i$ is visited in position $k$. More formally:
\begin{multline*}
\forall i,j,k / i,j \in [0, |V|+2], k \in [0, |V|+1] : \\ m_{ijk} = A_{ik} \ast A_{j(k+1)}
\end{multline*}
Considering the matrix $A$ above, all the elements in the matrix $m_{ijk}$ are 0 except $m_{020}, m_{271}, m_{712}, m_{183}$. For example, $m_{271}$ denotes that, after visiting $Vis2$ (element 2 of $P$) in position 1, the user heads to the restaurant (element 7 of $P$).
The aforementioned constraints do not by themselves prevent the model from generating an incorrectly ordered solution. In order not to have \emph{empty positions} in the sequence of visits, i.e., positions in which no visit is assigned, we use the following constraint:
\begin{equation*}
\forall j \in [1, |V|+2]: \sum_{i=0}^{|V|+2}A_{ij} \leq \sum_{i=0}^{|V|+2}A_{i(j-1)}
\end{equation*}
The last place in the tourist agenda must be the user destination. Hence, if the user destination appears in the $j^{th}$ position of the sequence of visits, all the columns to the right of the $j^{th}$ column in matrix $A$ must be 0:
\begin{equation*}
\text{If} \; A_{(|V|+2)j} = 1 \Rightarrow \forall z / j < z \leq |V|+2 : \sum_{i=0}^{|V|+2} A_{iz}=0
\end{equation*}
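The ordering constraints above can be checked on a candidate assignment with a few lines of plain Python; the following is only a verification sketch of these constraints (not a {\small{\sf CHOCO}} model), using the 9-element example with $|V|=6$.
\begin{scriptsize}
\begin{verbatim}
def valid_ordering(A):
    """Check the ordering constraints on a candidate 0/1 matrix A,
    where A[i][j] = 1 iff place i is visited in position j and places
    are indexed 0..|V|+2 (origin, the |V| POIs, restaurant, destination)."""
    n = len(A)                      # n = |V| + 3
    dest = n - 1
    # Each place is visited at most once.
    if any(sum(row) > 1 for row in A):
        return False
    # Each position holds at most one place.
    occupied = [sum(A[i][j] for i in range(n)) for j in range(n)]
    if any(o > 1 for o in occupied):
        return False
    # No empty positions: the occupied positions form a prefix.
    if any(occupied[j] > occupied[j - 1] for j in range(1, n)):
        return False
    # Nothing is scheduled after the destination.
    for j in range(n):
        if A[dest][j] == 1 and any(occupied[z] for z in range(j + 1, n)):
            return False
    return True

# Example from the text: Orig, Vis2, Rest, Vis1, Dest in positions 0..4.
A = [[0] * 9 for _ in range(9)]
for place, pos in [(0, 0), (2, 1), (7, 2), (1, 3), (8, 4)]:
    A[place][pos] = 1
print(valid_ordering(A))  # True
\end{verbatim}
\end{scriptsize}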
\subsubsection{Temporal constraints over the visits included in the plan.}
These constraints are used to determine the value of $(start_i, finish_i)$ for each visit and lunch action $i$ included in the plan.
A recommended interval of the duration of each visit $[dmin_i, dmax_i]$ is provided by the RS. The following constraint establishes that the actual duration of a visit must fall within the recommended interval:
\begin{multline*}
\forall i \in [1,|V|+1] / P_i=1 : dmin_i \leq duration_i \leq dmax_i
\end{multline*}
The finish time of a visit is specified as:
\begin{equation*}
\forall i \in [1,|V|+1] / P_i=1 : finish_i = start_i + duration_i
\end{equation*}
A visit to a POI $i$ must be performed within the interval of the opening hours in $H$, denoted as $[open_i, close_i]$:
\begin{multline*}
\forall i \in [1,|V|+1] / P_i=1 : \\ open_i \leq start_i \wedge finish_i \leq close_i
\end{multline*}
In order to calculate the start time of a visit $j$, the estimated time needed to move from the prior visit $i$ (defined as $dur_{i,j}$ in $T$) must be taken into account:
\begin{multline*}
\forall i,j,k / i,j \in [0, |V|+2], k \in [0, |V|+1]: \\ \text{if} \; m_{ijk} = 1 \Rightarrow start_j > finish_i + dur_{i,j}
\end{multline*}
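As with the ordering constraints, these temporal constraints can be checked on a candidate schedule; the sketch below is a plain-Python verification helper (not part of the CSP encoding), assuming the selected visits are given in visiting order together with their chosen start times and durations.
\begin{scriptsize}
\begin{verbatim}
def valid_schedule(sequence, dmin, dmax, hours, travel):
    """Check the temporal constraints for a plan given in visiting order.

    sequence: list of (poi, start, duration) tuples in visiting order.
    dmin, dmax: recommended duration bounds per POI.
    hours: dict poi -> (open, close), in minutes from 00:00.
    travel: dict (p, q) -> travelling time between p and q.
    """
    prev_poi, prev_finish = None, None
    for poi, start, duration in sequence:
        finish = start + duration
        # The duration must fall within the recommended interval.
        if not (dmin[poi] <= duration <= dmax[poi]):
            return False
        # The visit must be performed within the opening hours.
        if not (hours[poi][0] <= start and finish <= hours[poi][1]):
            return False
        # A visit cannot start before the previous visit finishes
        # plus the travelling time between the two locations.
        if prev_poi is not None and \
           start < prev_finish + travel[(prev_poi, poi)]:
            return False
        prev_poi, prev_finish = poi, finish
    return True
\end{verbatim}
\end{scriptsize}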
\subsection{Optimization function}
The constraints specified in the previous section enable the CSP solver to obtain a valid plan. Since our aim is to obtain a high-quality plan that fits the user's preferences, some other factors must be considered. In this case, we have implemented all the designed metrics as defined in Section {\em Metrics} by using the variables defined above. For example, the $P_{journey}$ penalty can be formalized as follows:
\begin{equation}
P_{journey} = \frac{\sum_{\forall i,j,k} dur_{i,j} * m_{ijk}}{total\_time}
\end{equation}
where $dur_{i,j}$ is the travelling time from $i$ to $j$ as defined in $T$.
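A direct Python transcription of this formula, assuming $m$ and the travelling times are stored as plain nested lists (a sketch for illustration only):
\begin{scriptsize}
\begin{verbatim}
def p_journey_from_m(m, dur, total_time):
    """P_journey computed from the CSP variables m[i][j][k] and the
    travelling times dur[i][j], both indexed over the |V|+3 places."""
    n = len(dur)
    positions = len(m[0][0])
    travel = sum(dur[i][j] * m[i][j][k]
                 for i in range(n) for j in range(n)
                 for k in range(positions))
    return travel / total_time
\end{verbatim}
\end{scriptsize}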
\begin{figure*}[htb]
\centering
\begin{minipage}{.3\textwidth}
\centering
\includegraphics[width=5.5cm]{./figures/experiments/avg_utility3}
\caption{U1*}
\label{fig:u3}
\end{minipage}%
\begin{minipage}{0.3\textwidth}
\centering
\includegraphics[width=5.5cm]{./figures/experiments/avg_utility2}
\caption{U2}
\label{fig:u2}
\end{minipage}
\begin{minipage}{0.3\textwidth}
\centering
\includegraphics[width=5.5cm]{./figures/experiments/avg_utility}
\caption{U3}
\label{fig:u1}
\end{minipage}
\end{figure*}
\begin{figure*}[htb]
\vspace{-0.4cm}
\centering
\begin{minipage}{.3\textwidth}
\centering
\includegraphics[width=5.5cm]{./figures/experiments/avg_perc_occupation}
\caption{Percentage of occupation}
\label{fig:occup}
\end{minipage}%
\begin{minipage}{0.3\textwidth}
\centering
\includegraphics[width=5.5cm]{./figures/experiments/avg_num_visits}
\caption{Number of visits}
\label{fig:visits}
\end{minipage}
\begin{minipage}{0.3\textwidth}
\centering
\includegraphics[width=5.5cm]{./figures/experiments/avg_execution}
\caption{Execution time}
\label{fig:execution}
\end{minipage}
\end{figure*}
\subsection{Solution}
A CSP solution is a pair of the form $(start_i, finish_i)$ for the lunch action and for each recommended visit $i$. If the POI is included in the plan, $start_i$ and $finish_i$ indicate the start and finish time of the visit; otherwise, these values are set to 0. Therefore, $V_{\Pi}$ would contain all the visits with values greater than 0 in $start_i$ and $finish_i$. On the other hand, each travelling action to be included in $T_{\Pi}$ would be obtained from the gaps between visits.
The following table represents the CSP output for the first plan in Figure \ref{plans}. In this case, visits $V1$ and $V2$ and the restaurant would be included into $V_\Pi$ and, for example, the travelling action from the start location to the first visit ($V2$) would start at time 0 with a duration of 20 (a small sketch of this extraction is given after the table).
\begin{center}
\begin{tabular}{l|l|l|l|l|l|l|l|}
\cline{2-8}
& \textbf{V1} & \textbf{V2} & \textbf{V3} & \textbf{V4} & \textbf{V5} & \textbf{V6} & \textbf{R} \\ \hline
\multicolumn{1}{|l|}{\textbf{Start}} & 320 & 20 & 0 & 0 & 0 & 0 & 180 \\ \hline
\multicolumn{1}{|l|}{\textbf{Finish}} & 570 & 170 & 0 & 0 & 0 & 0 & 300 \\ \hline
\end{tabular}
\end{center}
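A possible Python sketch of how the plan actions could be read off such an output; the dictionaries mirror the table above, and the travelling times are chosen to be consistent with the gaps in the table and the journey durations quoted in the penalties example (the names are illustrative).
\begin{scriptsize}
\begin{verbatim}
def extract_plan(start, finish, travel,
                 origin="start_loc", dest="final_loc"):
    """Build the visit and move actions of the plan from the CSP output.

    start, finish: dicts POI -> chosen start/finish time (0 if not included).
    travel: dict (p, q) -> travelling time between locations p and q.
    """
    visits = sorted((s, p, finish[p] - s)
                    for p, s in start.items() if finish[p] > 0)
    visit_actions = [("visit", p, s, d) for s, p, d in visits]
    move_actions, prev = [], origin
    for s, p, _ in visits:
        t = travel[(prev, p)]
        move_actions.append(("move", prev, p, s - t, t))
        prev = p
    last_finish = visits[-1][0] + visits[-1][2] if visits else 0
    move_actions.append(("move", prev, dest, last_finish, travel[(prev, dest)]))
    return visit_actions, move_actions

# Values from the table above (R denotes the restaurant).
start = {"V1": 320, "V2": 20, "V3": 0, "V4": 0, "V5": 0, "V6": 0, "R": 180}
finish = {"V1": 570, "V2": 170, "V3": 0, "V4": 0, "V5": 0, "V6": 0, "R": 300}
travel = {("start_loc", "V2"): 20, ("V2", "R"): 10,
          ("R", "V1"): 20, ("V1", "final_loc"): 30}
print(extract_plan(start, finish, travel))
\end{verbatim}
\end{scriptsize}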
\section{Experiments}
This section presents the results obtained when solving the tourist problem with the PDDL-encoding and the CSP-encoding.
According to the features required to formulate the PDDL problem, we need a planner that handles PDDL3.0. A few automated planners are capable of this, such as {\small{\sf MIPS-XXL}} \cite{Gerevini06-mips}, {\small{\sf SGPLAN5}} \cite{chen05} or {\small{\sf OPTIC}} \cite{BentonCC12}. We opted for {\small{\sf OPTIC}} because it handles non-fixed durations, preferences and other helpful functionalities. {\small{\sf OPTIC}} is a heuristic planner that adapts the FF's relaxed-plan heuristic \cite{Hoffmann03} to temporal settings and numeric preference satisfaction. {\small{\sf OPTIC}} uses a hill-climbing algorithm in combination with a greedy algorithm, which enables it to obtain good-quality solutions efficiently.
However, {\small{\sf OPTIC}} presents a severe limitation as it is not able to handle nonlinear functions. This prevents us from testing the metric $M3$, which involves the nonlinear penalty $U3$. On the other hand, let us assume the preference {\small\texttt{(preference p1 (visit location id\_1))}} in a problem; given that preference violation is expressed in PDDL through the variable {\small\texttt{(is-violated p1)}}, encoding $U2$ would require being able to express {\small\texttt{(* (is-violated p1) (duration\_visit id\_1))}}, which is not allowed in {\small{\sf OPTIC}}. This nonlinearity restriction also affects the definition of the low temporal occupation penalty, which is why we implemented the occupation formula presented in \cite{IbanezSO16}, where $P_{occup} = \frac{free\_time}{total\_time}$ if $high$ and $P_{occup} = \frac{total\_time - free\_time }{total\_time}$ if $low$.
In summary, {\small{\sf OPTIC}} has been tested with a modified version of metric $M1$, which we will refer to as $M1'$.
For solving the CSP encoding, we chose a fast CSP solver, {\small{\sf CHOCO}} \cite{choco}. We used the {\small{\sf CHOCO}} function that, according to the manual, returns the optimal solution provided that no stop criterion is applied in the search. The variable and value selector is based on the ``DomOverWDeg + LB'' strategy, which addresses the hard constraints first and reaches a good solution in a short time. Unlike {\small{\sf OPTIC}}, {\small{\sf CHOCO}} provides greater flexibility and the possibility of testing all the defined metrics.
We tested the quality of the solutions obtained with the four possibilities ($M1'$ in {\small{\sf OPTIC}}, denoted as OPT-M1', and from $M1$ to $M3$ in {\small{\sf CHOCO}}, denoted by CHO-M1 to CHO-M3), as well as the temporal performance of both solvers\footnote{Experiments were performed in an intel i7-4790 3.6 Ghz machine with 16 GB DDR3.}. To do this, we randomly generated a set of problems with the following parameters: (1) a random $v_p$ value for each POI in the interval $[180,300]$; (2) a random distance between every two locations in $T$ in the interval $[1,60]$; (3) for the visit duration, we take a random value between $30$ and $200$ minutes, which determines the average visit duration (if this value is larger than the total available time, the system discards it); then, we compute the duration interval as explained in \cite{IbanezSO16} (i.e., we apply a normal distribution to obtain $dmin_p$ and $dmax_p$).
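A Python sketch of this instance generator; the normal-distribution step used to widen the average duration into $[dmin_p, dmax_p]$ follows \cite{IbanezSO16}, whose exact parameters are not reproduced here, so the spread used below is an assumption.
\begin{scriptsize}
\begin{verbatim}
import random

def generate_instance(n_pois, total_time, seed=None):
    """Randomly generate a tourist problem instance as described in the text."""
    rng = random.Random(seed)
    pois = []
    for p in range(n_pois):
        value = rng.randint(180, 300)          # (1) recommendation value v_p
        avg = rng.randint(30, 200)             # (3) average visit duration
        while avg > total_time:                # redrawn if it exceeds the time
            avg = rng.randint(30, 200)
        spread = abs(rng.gauss(0, 0.2 * avg))  # assumed 20% standard deviation
        pois.append({"poi": p, "value": value,
                     "dmin": max(1, int(avg - spread)),
                     "dmax": int(avg + spread)})
    locations = [q["poi"] for q in pois] + ["start_loc", "final_loc",
                                            "restaurant"]
    travel = {(a, b): rng.randint(1, 60)       # (2) travelling times in T
              for a in locations for b in locations if a != b}
    return pois, travel

pois, travel = generate_instance(n_pois=7, total_time=300, seed=0)
\end{verbatim}
\end{scriptsize}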
\begin{table}[tbp]
\centering
\caption{Average results wrt. the preference of occupation}
\label{tab:occup}
\begin{tabular}{|l|l|l|l|l|l|}
\hline
\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Pref\\ Occup\end{tabular}} & \multirow{2}{*}{Metric} & \multicolumn{4}{l|}{Results} \\ \cline{3-6}
& & Occup & U1* & U2 & U3 \\ \hline
\multirow{4}{*}{High} & OPT/M1' & 0.924 & 0.881 & 0.669 & 0.897 \\ \cline{2-6}
& CHO/M1 & 0.937 & 0.876 & 0.662 & 0.885 \\ \cline{2-6}
& CHO/M2 & 0.940 & 0.899 & 0.669 & 0.907 \\ \cline{2-6}
& CHO/M3 & 0.939 & 0.888 & 0.678 & 0.896 \\ \hline
\multirow{4}{*}{Indif.} & OPT/M1' & 0.614 & 0.822 & 0.421 & 0.834 \\ \cline{2-6}
& CHO/M1 & 0.690 & 0.855 & 0.468 & 0.860 \\ \cline{2-6}
& CHO/M2 & 0.919 & 0.900 & 0.660 & 0.908 \\ \cline{2-6}
& CHO/M3 & 0.898 & 0.895 & 0.660 & 0.905 \\ \hline
\multirow{4}{*}{Low} & OPT/M1' & 0.360 & 0.850 & 0.204 & 0.851 \\ \cline{2-6}
& CHO/M1 & 0.640 & 0.860 & 0.427 & 0.864 \\ \cline{2-6}
& CHO/M2 & 0.860 & 0.906 & 0.617 & 0.912 \\ \cline{2-6}
& CHO/M3 & 0.813 & 0.895 & 0.596 & 0.900 \\ \hline
\end{tabular}
\end{table}
We set three different values for the number of recommended places, $|V|=5$, $|V|=7$ and $|V|=10$, and three different values for the $total\_time$ of the route: a short plan (3 hours), half-day (5 hours) and all-day (9 hours). This makes a total of 9 different problem types. Only in the case of all-day routes does the plan include the lunch time interval, which is a fixed-duration interval in a fixed time slot. Then, we added the 9 possible combinations of user preferences $P_{\#visits}=\{Many, Indifferent, Few\}$ and $P_{occupation}=\{High, Indifferent, Low\}$ to each problem type, thus having a total of 81 combinations. We generated two problem instances for each combination, creating a total of 162 problems with a wide range of opening hours for each place.
We used different measures to evaluate the quality of the obtained plans with respect to the user preferences. We analyzed the number of visits in the plan and the level of occupation of the plan, that is, the ratio of the time the user is doing any activity (a visit, a journey between two places or having lunch) to the $total\_time$:
\begin{equation*}
Occup=1-\frac{free\_time}{total\_time}
\end{equation*}
Moreover, we analyzed the utility of the obtained plans with the utility measures $U1$, $U2$ and $U3$ defined in Equations \ref{u1}, \ref{u2} and \ref{u3}, except for $U1$, which has been slightly modified as follows:
\begin{equation} U1^*=\frac{\sum_{\forall i \in V_\Pi} v_i}{|V_\Pi|*vmax_p} \label{utility1*} \end{equation}
The results for the generated problem set are shown in the plots from Figure \ref{fig:u3} to Figure \ref{fig:execution}. Each Figure consists of nine plots, resulting from the combination of the values '\textbf{F}ew', '\textbf{I}ndif' or '\textbf{M}any' for the preference $P_{\#visits}$ along the $X$ axis; and '\textbf{L}ow', '\textbf{I}ndif' or '\textbf{H}igh' for $P_{occupation}$ along the $Y$ axis. Each single plot pictures 4 bars, where the first corresponds to OPT-M1' and the other three correspond to CHO-M1 to CHO-M3. For example, the plot in the left-bottom corner of Figure \ref{fig:occup} shows, for each of the four plan metrics, the average percentage of occupation of the agendas for all the problems where the user has defined a '\textbf{H}igh' temporal occupation and a '\textbf{F}ew' number of visits.
\setlength{\tabcolsep}{5pt}
\begin{table}[tbp]
\centering
\caption{Average results wrt. the pref. of number of visits}
\label{tab:visits}
\begin{tabular}{|l|l|l|l|l|l|}
\hline
\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Pref\\ \#Visit\end{tabular}} & \multirow{2}{*}{Metric} & \multicolumn{4}{l|}{Results} \\ \cline{3-6}
& & \#Visit & U1* & U2 & U3 \\ \hline
\multirow{4}{*}{Many} & OPT/M1' & 2.704 & 0.830 & 0.467 & 0.846 \\ \cline{2-6}
 & CHO/M1 & 3.259 & 0.856 & 0.580 & 0.862 \\ \cline{2-6}
 & CHO/M2 & 3.148 & 0.873 & 0.651 & 0.885 \\ \cline{2-6}
 & CHO/M3 & 3.074 & 0.872 & 0.654 & 0.884 \\ \hline
\multirow{4}{*}{Indif.} & OPT/M1' & 2.444 & 0.836 & 0.468 & 0.850 \\ \cline{2-6}
 & CHO/M1 & 3.185 & 0.857 & 0.574 & 0.865 \\ \cline{2-6}
 & CHO/M2 & 2.463 & 0.910 & 0.682 & 0.918 \\ \cline{2-6}
 & CHO/M3 & 2.556 & 0.893 & 0.670 & 0.902 \\ \hline
\multirow{4}{*}{Few} & OPT/M1' & 1.296 & 0.886 & 0.359 & 0.887 \\ \cline{2-6}
 & CHO/M1 & 1.685 & 0.878 & 0.403 & 0.881 \\ \cline{2-6}
 & CHO/M2 & 1.407 & 0.922 & 0.612 & 0.923 \\ \cline{2-6}
 & CHO/M3 & 1.648 & 0.913 & 0.610 & 0.916 \\ \hline
\end{tabular}
\end{table}
Figures from \ref{fig:u3} to \ref{fig:u1} show the average utility measured by $U1^*$, $U2$ and $U3$ (equations \ref{utility1*}, \ref{u2} and \ref{u3}, respectively) for the 9 combinations of user preferences. We can observe that the utility values are very similar for the four metrics in Figures \ref{fig:u3} and \ref{fig:u1}. However, in Figure \ref{fig:u2}, the values are significantly lower for {\small{\sf OPTIC}}. This is because {\small{\sf CHOCO}}, which returns the optimal value for the three metrics, tends principally to minimize the penalty for the non-visited POIs of the list $V$. Consequently, {\small{\sf CHOCO}} plans will typically include the most-valued POIs, a larger number of POIs or the POIs that render more utility per unit time, and so the plans will have higher utility values, which is especially notable for the '\textbf{L}ow' occupation. This is also confirmed with the results of Tables \ref{tab:occup} and \ref{tab:visits}. Table \ref{tab:occup} shows the average results of the utility measures in the solution plans obtained for the problem instances with occupation '\textbf{H}igh', '\textbf{I}ndif' and '\textbf{L}ow'. And Table \ref{tab:visits} shows the average results of the utility in the solution plans obtained for the problem instances where the user selected '\textbf{M}any', '\textbf{I}ndif' or '\textbf{F}ew' number of visits. We can also observe in the Tables that the average occupation and number of visits are always higher in {\small{\sf CHOCO}} for the aforementioned reason. For instance, for problems where the user selected '\textbf{F}ew' visits, {\small{\sf OPTIC}} includes only one POI in the majority of plans while {\small{\sf CHOCO}} includes two POIs.
The plots in Figure \ref{fig:occup} show the percentage of occupation for each of the 9 possible combinations of user preferences. The four plan metrics yield high values of occupation when $P_{occupation}$ is '\textbf{H}igh'. However, when the user selects '\textbf{L}ow' or '\textbf{I}ndif', we can observe that the values of {\small{\sf OPTIC}} are significantly lower, which could be interpreted as a result more compliant with a '\textbf{L}ow' occupation (for the case '\textbf{I}ndif' any value would be equally acceptable). Again, this is explained by the fact that the {\small{\sf CHOCO}} solution represents a plan in which the user is visiting highly-recommendable POIs for longer or the plan includes more POIs than the plans returned by {\small{\sf OPTIC}}, and this negatively affects the occupation in problems where this preference is set to '\textbf{L}ow'. However, given that the values of {\small{\sf OPTIC}} plans are around 40\% of the route occupancy, one might see this as an 'extremely low' value. This interpretation would obviously depend on the user's likes, an indication that a more accurate definition of user preferences might be preferable. On the other hand, CHO-M2 and CHO-M3 are clearly the metrics less compliant with a '\textbf{L}ow' occupancy preference, an indication that {\small{\sf CHOCO}} tends to maximize the time the user is visiting highly-recommendable POIs.
The plots in Figure \ref{fig:visits} show the average number of visits. In general, metrics OPT-M1' and CHO-M1 are the best performers, except in the '\textbf{F}ew-\textbf{H}igh' dimension and the '\textbf{M}any-\textbf{L}ow' dimension, respectively. This reflects the fact that OPT-M1' is more sensitive to the '{\bf L}ow' occupation preference, similarly to Figure \ref{fig:occup}, whereas CHO-M1 is more sensitive to the '{\bf H}igh' occupation preference. With respect to the execution time shown in Figure \ref{fig:execution} (time in minutes), again OPT-M1' and CHO-M1 are the top performers in all the dimensions. In general, we can observe that when $P_{occupation}$ is '\textbf{H}igh' and $P_{\#visits}$ is '\textbf{M}any', both solvers take longer to solve the problems with any metric.
The values in Tables \ref{tab:occup} and \ref{tab:visits} allow us to examine the plan metrics with respect to the achievement of the user preferences. Table \ref{tab:occup} shows that the best metrics wrt the utility and level of occupation for a '\textbf{H}igh' and '{\bf I}ndif' $P_{occupation}$ are CHO-M2 and CHO-M3, although the differences in the '\textbf{H}igh' case are smaller. In the '{\bf L}ow' case, the best results are obtained by OPT-M1', followed by CHO-M1. This follows the same tendency as above, where {\small{\sf OPTIC}} is more compliant with '{\bf L}ow' occupation values. In Table \ref{tab:visits}, the best metric for '\textbf{M}any' and '{\bf I}ndif' number of visits is CHO-M1; however, CHO-M2 obtains better utility values. This means that, although CHO-M2 includes fewer visits in the plan, the utility per time unit for the user is higher. For the case of '\textbf{F}ew' visits, the best metric is OPT-M1', in line with the rest of the results, followed by CHO-M2, which obtains higher utility values. We can conclude that maximizing $\sum\limits_{p \in V_\Pi}(v_p*dur_p)$ provides the best utility to the user and that CHO-M2 is the best plan metric.
\section{Problem description}
In this section, we describe the tourist problem to be solved with a planner and a CSP solver. The problem is inspired by the tourist setting introduced in \cite{IbanezSO16}, which describes the problem of generating an agenda for a tourist. The information included in the problem definition is: (a) a set of recommended points of interest (POIs) for the particular tourist; we assume the existence of a Recommender System (RS) that returns a set of preferable POIs for the user according to her likes; (b) the tourist's preferences regarding the travel style and mode of transport; and (c) context-aware information such as the location or hours of the tourist attractions.
Initially, the user enters the basic details of the route: the date of the visit, the start and finish point, the start and finish hour, the time interval reserved for lunch and the mode of transport she prefers, which may determine the time needed to move between locations. Then, she also indicates her preferences related to her travel style, namely, whether she prefers to include many or few visits in the tour (or has no preference over it) and whether she prefers to obtain an agenda with a high or a low temporal occupation (or has no preference over it). Figure \ref{plans} shows four examples of agendas that reflect four combinations of these travel style preferences. The first one shows an agenda with a low number of visits but a high temporal occupation with no free time between visits. The second shows an agenda with a low occupation rate, where visits are shorter in order to have some free time between visits. The third example shows a high occupation rate agenda that also contains a high number of visits (in this case, visits 1 and 2 are shorter to be able to include visits 3, 4 and 5). The last example is an agenda with a low occupation rate but a high number of visits.
\begin{figure*}
\centering
\includegraphics[width=18cm]{./figures/planes-CSP.png}
\caption{Example of agendas with different travel preferences ($vmax_p=300$)}
\label{plans}
\end{figure*}
\subsection{Problem formulation}
In order to define a tourist problem, we need to distinguish between the domain data relative to the particular area or city the user wants to visit, and the user data, which specify the personalized features of the tourist route.
\begin{enumerate}
\item Domain information. Two sources of knowledge from the domain are relevant for a tourist problem: the POIs of the city along with their opening and closing times and the travelling time between POIs.
\begin{enumerate}
\item Information about the POIs. For each POI $p$ of the city, we store three values: the POI identifier and the opening and closing time of $p$ (denoted by $open_p$ and $close_p$, respectively). Times are measured in minutes from 00:00. For example, $<$Cathedral, 600, 1170$>$ represents that the POI 'Cathedral' opens at 10:00; i.e., $open_{Cathedral}$=600 (10h*60min/h+0min = 600); and closes at 19:30; i.e., $close_{Cathedral}$=1170 (19h*60min/h+30min = 1170). The information of the hours of the city POIs will be denoted as $H^*$.
\item Travelling times. Additionally, for every two POIs of the city, including the initial and final location of the user's route, we store the travelling time between them accordingly to the transport mode (e.g., walk, bus, car). For example, $<$Hotel, Cathedral, walk, 20$>$, $<$Hotel, Cathedral, bus, 8$>$. We will denote the data of travelling times between POIs by $T^*$.
\end{enumerate}
\item Personalized route information. We distinguish between the data that are directly provided by the user (route details) and the data estimated by the Recommendation System for the user:
\begin{enumerate}
\item Route details. The user introduces the following data:
\begin{itemize}
\item initial and final time of the route, which define the $total\_time$ available for the route
\item initial and final location of the route, denoted by $start\_loc$ and $final\_loc$, respectively
\item initial and final time for the lunch break, if specified
\item the transport mode (e.g., walk, bus, car)
\item preference of the user for the number of visits; the user can select among the values $\{few,many,indif\}$
\item preference of the user for the temporal occupation of the route; the user can select among the values $\{high,low,indif\}$
\end{itemize}
We will refer to the route details introduced by the user as $R$.
\item Recommendation. A set of recommendable POIs for the user to visit is obtained through the RS. Specifically, for each recommended visit, the RS returns a tuple of the form $\langle p,v_p,dmin_p, dmax_p \rangle$, where $p$ is the POI to visit, $v_p \in [0,vmax_p]$ is the estimated value of $p$ to the user (i.e., the estimated degree of interest of the user in $p$), and $dmin_{p}$ and $dmax_{p}$ are the minimum and maximum recommended duration for visiting $p$, respectively. We will denote the set of recommended visits to a user as $V$\footnote{The RS we used to elicit the list of recommended POIs and values assumes $vmax_p=300$.}.
\end{enumerate}
\end{enumerate}
A tourist problem $P^u$ for a particular user is defined as a tuple $P^u=<R,V,H,T>$, where $R$ is the set of route details specified by the user, $V$ is the set of recommended visits to the user, $H \subseteq H^*$ contains the hours of the POIs in $V$ and $T \subseteq T^*$ contains the travelling times between the POIs in $V \cup \{start\_loc, final\_loc, restaurant\}$ according to the transport mode selected by the user.
A {\bf solution for a problem $P^u=<R,V,H,T>$} is a sequence of actions or plan $\Pi$ that contains \texttt{move} actions from $T$ and \texttt{visit} actions from $V$. More specifically, $\Pi=\{T_{\Pi},V_{\Pi}\}$, where:
\begin{itemize}
\item $T_{\Pi}$ are actions of the form $(\texttt{move} \; p \; q \; t_s \; dur_{p,q})$, where $p$ and $q$ are two POIs, $t_s$ is the start time of the move action and $dur_{p,q}$ is the travelling time between $p$ and $q$ according to the user's selected transport.
\item $V_{\Pi}$ are actions of the form $(\texttt{visit} \; p \; t_s \; dur_p)$, where $p$ is the POI to visit, $t_s$ is the start time of the visit and $dur_p \in [dmin_p,dmax_p]$ is the duration recommended for the visit. The $restaurant$ is also included in this set with $v_{restaurant}=0$\footnote{In this work, restaurants are not defined as POIs. We consider a generic POI \emph{restaurant} that the user must visit if a lunch break is specified. Including a list of restaurants as POIs of $V$ alongside their recommended value to the user is straightforward.}.
\end{itemize}
The goal is to maximize the user satisfaction; that is, to include the most-valued recommended visits of $V$ and to meet the user preferences with respect to the number of visits and temporal occupation. We define a \textbf{penalty cost} for the violation of user preferences and we pose the problem as finding the solution with minimal penalty.
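A possible, purely illustrative Python representation of the tuple $P^u=<R,V,H,T>$ and of a solution plan $\Pi$; the field names are ours and not part of the formulation.
\begin{scriptsize}
\begin{verbatim}
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class RecommendedVisit:          # one element of V: <p, v_p, dmin_p, dmax_p>
    poi: str
    value: float                 # v_p in [0, vmax_p]
    dmin: int                    # minimum recommended duration (minutes)
    dmax: int                    # maximum recommended duration (minutes)

@dataclass
class TouristProblem:            # P^u = <R, V, H, T>
    route_details: Dict                        # R: times, locations, preferences
    visits: List[RecommendedVisit]             # V
    hours: Dict[str, Tuple[int, int]]          # H: poi -> (open, close)
    travel: Dict[Tuple[str, str], int]         # T: (p, q) -> minutes

@dataclass
class Plan:                      # a solution Pi = {T_Pi, V_Pi}
    visit_actions: List[Tuple[str, int, int]] = field(default_factory=list)
    move_actions: List[Tuple[str, str, int, int]] = field(default_factory=list)
\end{verbatim}
\end{scriptsize}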
\subsection{Penalties}
There can be many different solutions for the tourist problem defined above, given that different visits can be selected to be included in the plan, with different durations, in different order (which implies different movements from one location to another), etc. Each of these solutions will fit the user preferences in a better or worse way. The solver is aimed at finding the best plan according to some metrics that rely on four types of penalties to assess the degree to which the user preferences are not satisfied. Next, we present the four penalties, which are values in the interval [0,1].
\begin{itemize}
\item \textbf{Non-visited POIs:} This penalty is used to force the solver to include in the plan those visits with the highest recommendation value. That is, the idea is to obtain a plan with a high utility for the user, where {\em utility} is defined with respect to the recommendation value. We have designed three different utilities which, in turn, define three different penalties by subtracting the obtained utility from the highest possible utility.
Equation \ref{u1} calculates the utility as the ratio of the recommendation value of the visits included in the plan to the maximum recommendation value that a plan could have, that is, the sum of the recommendation value of all the elements in $V$\footnote{Abusing the notation, we will write $p \in V$ or $p \in V_{\Pi}$ when we refer to the POI $p$ of an action ({\tt visit} $p \; t_s \; dur_p$). And we will use $(p,q) \in T_{\Pi}$ when we refer to the POIs $p$ and $q$ of an action ({\tt move} $p \; q \; t_s \; dur_{p,q}$).}:
\begin{equation}\label{u1}
U1 = \frac{\sum_{p \in V_\Pi} v_p}{\sum_{p \in V} v_p} \;\;\;\;\;\;\;\; P_{U1} = 1 - U1
\end{equation}
The second way to calculate this penalty is formalized in equation \ref{u2}. In this case, the utility is calculated as the sum of the recommendation value of each visit included in the final plan multiplied by the time spent in that visit, divided by the total available time (thus implicitly taking into account the movements between locations). The aim of this penalty is to consider how long the user is visiting a place with a high recommendation value, not only whether this place is visited.
\begin{equation}\label{u2}
U2=\frac{\sum\limits_{p \in V_\Pi}(v_p*dur_p)}{total\_time} \;\;\;\;\;
P_{U2} = \left(vmax_p - U2 \right) / vmax_p
\end{equation}
And finally, the third way to calculate this penalty, shown in equation \ref{u3}, is similar to equation \ref{u2}, but the recommendation value per time unit is divided by the total time spent on visiting POIs, instead of the total available time:
\begin{equation}\label{u3}
U3=\frac{\sum\limits_{p \in V_\Pi} (v_p*dur_p)}{\sum\limits_{p \in V_\Pi} dur_p} \;\;\;\;\;
P_{U3} = \left(vmax_p - U3 \right) / vmax_p
\end{equation}
For example, for the first plan in Figure \ref{plans}, where the recommendation value $v$ and the duration interval for each visit are shown on the right and $vmax_p=300$, the penalties would be calculated as follows (a small script reproducing these computations is sketched after this list):
\begin{itemize}
\item $P_{U1}=1-\frac{300+280}{1480}=0.61$, where 1480 is the sum of $v$ for all the visits.
\item $P_{U2}=(300-\frac{300*240+280*150}{600})/300=0.37$, where 240 and 150 are the duration of visits 1 and 2, respectively, and 600 is the total available time of the user.
\item $P_{U3}=(300-\frac{300*240+280*150}{240+150})/300=0.03$. This is a low value because, in fact, the recommendation value per time unit is nearly maximal.
\end{itemize}
If we compare these values of the penalties for the first plan with the values obtained for the fourth plan, which are 0.14, 0.57 and 0.15, respectively, we can observe that: (1) $P_{U1}$ is much better for the fourth plan, because it includes three more visits; (2) however, $P_{U2}$ and $P_{U3}$ are better in the first plan, because the utility per time unit is higher, as the new visits included in the fourth plan have a lower recommendation value.
\item \textbf{Movement time:} Movement time is the sum of all time needed to move between the locations included in the plan. The aim of this penalty is to force the solver to reduce the time spent in moving from one location to another.
\begin{equation*}\label{p_journey}
P_{journey}=\frac{\sum_{(p,q) \in T_\Pi} dur_{p,q}}{total\_time}
\end{equation*}
Using the first plan as in the example above, this penalty would be calculated as: $P_{journey}=\frac{20+10+30+20}{600}=0.13$
\item \textbf{Number of visits:} As for the preference regarding the number of visits, which denotes whether the user desires a tour with many POIs to visit, with few POIs, or whether she is indifferent, the penalty considers the number of visits included in the plan with respect to the total number of recommended places:
\begin{equation*}\label{p_visits}
P_{\#visits} =
\begin{cases}
\frac{|V| - |V_\Pi|}{|V|} & \text{if} \quad many\\
0 & \text{if} \quad indif.\\
\frac{|V_\Pi|}{|V|} & \text{if} \quad few\\
\end{cases}
\end{equation*}
Taking into account that the first plan in Figure \ref{plans} represents a plan with a few number of visits, this penalty would be calculated as: $P_{\#visits} = \frac{2}{6}=0.33$.
Obviously, the fourth plan with the same preference would have a higher value of the penalty, specifically 0.83, given that it includes 5 visits.
\item \textbf{Occupation:} If the user selects a high temporal occupation, the free time must be minimized. If the user selects a low temporal occupation, then the variables to minimize are the time spent on visits or travelling. If the user selects ``indifferent'', no expression needs to be minimized. $free\_time$ is defined as the slack time between activities and it is calculated as the difference between the total available time and the time spent in actions in the plan:
\begin{equation*}
free\_time=total\_time - \sum\limits_{p \in V_\Pi} dur_p - \sum\limits_{(p,q) \in T_\Pi} dur_{p,q}
\end{equation*}
Therefore, the penalty is defined as follows:
\begin{equation*}\label{p_occup}
P_{occup} =
\begin{cases}
\frac{free\_time}{total\_time} & \text{if} \quad high\\
0 & \text{if} \quad indif.\\
\frac{1}{free\_time*total\_time} & \text{if} \quad low\\
\end{cases}
\end{equation*}
Taking into account that the first plan in Figure \ref{plans} represents a plan with a high temporal occupation, this penalty would be calculated as: $P_{occup} = \frac{0}{600}=0$,
which is consistent with the fact that there is no free time in this case and, therefore, there is no penalty due to occupation. In contrast, this penalty for the second plan would take a value of 0.47.
\end{itemize}
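To make the running example concrete, the following Python sketch recomputes the four penalties for the first plan of Figure \ref{plans}. It is only a worked check, not part of the CSP or PDDL encodings: the 130-minute lunch duration is inferred from the figure (the text above states that this plan has no free time), and the individual values of the four non-included POIs are hypothetical placeholders whose sum is chosen so that the total recommendation value is 1480, as in the $P_{U1}$ example.
\begin{scriptsize}
\begin{verbatim}
def penalties(visits, travel_times, all_values, total_time, vmax,
              pref_visits, pref_occup, other_time=0):
    """Compute the four penalties for a candidate plan.

    visits: (v_p, dur_p) pairs for the recommended POIs included in the plan.
    travel_times: travelling times between consecutive locations of the plan.
    all_values: recommendation values v_p of all |V| recommended POIs.
    other_time: time spent in other scheduled activities (e.g. lunch).
    pref_visits in {'many','few','indif'}; pref_occup in {'high','low','indif'}.
    """
    vis_time = sum(d for _, d in visits)
    journey = sum(travel_times)
    free_time = total_time - vis_time - journey - other_time
    p_u = {"U1": 1 - sum(v for v, _ in visits) / sum(all_values),
           "U2": (vmax - sum(v * d for v, d in visits) / total_time) / vmax,
           "U3": (vmax - sum(v * d for v, d in visits) / vis_time) / vmax}
    p_journey = journey / total_time
    p_visits = {"many": (len(all_values) - len(visits)) / len(all_values),
                "few": len(visits) / len(all_values), "indif": 0}[pref_visits]
    p_occup = {"high": free_time / total_time,
               "low": 1 / (free_time * total_time) if free_time > 0 else 0,
               "indif": 0}[pref_occup]
    return p_u, p_journey, p_visits, p_occup

# First plan: Vis1 (v=300, 240 min), Vis2 (v=280, 150 min), journeys of
# 20+10+30+20 minutes, a lunch break of 130 minutes (inferred) and |V|=6;
# the values of the non-included POIs are hypothetical (only their sum matters).
p_u, p_j, p_v, p_o = penalties(visits=[(300, 240), (280, 150)],
                               travel_times=[20, 10, 30, 20],
                               all_values=[300, 280, 250, 230, 220, 200],
                               total_time=600, vmax=300,
                               pref_visits="few", pref_occup="high",
                               other_time=130)
# p_u ~ {'U1': 0.61, 'U2': 0.37, 'U3': 0.03}; p_j ~ 0.13; p_v ~ 0.33; p_o = 0
\end{verbatim}
\end{scriptsize}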
\subsection{Metrics}
In this section, we define the set of metrics that make use of the penalties introduced above. Given that these are penalties, a plan better satisfies the user preferences when the penalty values are lower; therefore, the metrics must be minimized.
In general, these metrics consist of the sum of the penalties defined above; that is, all the factors are weighted equally.
\begin{itemize}
\item \textbf{M1: utility per POI:} This metric uses the first non-visited POIs penalty, $P_{U1}$; therefore, emphasis is put on how many POIs with high utility are visited. It takes a value in the interval [0,4].
\begin{equation*}\label{p_total}
M1=P_{U1}+P_{journey}+P_{\#visits}+P_{occup}
\end{equation*}
For example, assuming that the user prefers few visits and a high temporal occupation in the plan, this metric would take the following values for each plan in Figure \ref{plans}: 1.09, 1.54, 1.18 and 1.28. This means that, according to this metric, the best plan would be the first plan and the second plan would be the worst one; that is, in this particular example, the penalty due to the number of visits does not have a great weight.
\item \textbf{M2: utility per time unit with respect to the total available time:} $M2$ uses the second non-visited POIs penalty, $P_{U2}$, which considers the utility of each visit per time unit with respect to the total time, thus taking into account the travelling actions, which do not provide any reward. Given that journeys are already considered, the $P_{journey}$ penalty is excluded from $M2$, so this metric takes a value in the interval [0,3]. Specifically:
\begin{equation*}\label{pm2}
M2=P_{U2}+P_{\#visits}+P_{occup}
\end{equation*}
For example, assuming the same preferences as above (few visits and a high temporal occupation), this metric would take the following values for each plan in Figure \ref{plans}: 0.7, 1.61, 1.34 and 1.48; that is, the best plan would be the first plan and the second plan would be the worst according to this metric. Again, the penalty for the number of visits has a lower impact than the penalty due to occupation in this example.
\item \textbf{M3: utility per time unit with respect to the time spent in visits:}
$M3$ considers the utility of the selected visits per time unit with respect to the time spent in visits only, with the aim of focusing on how satisfying the visits are for the user, regardless of the time spent on travelling from one place to another. Specifically:
\begin{equation*}
M3=P_{U3}+P_{journey}+P_{\#visits}+P_{occup}
\end{equation*}
For example, assuming the same preferences as above (few visits and a high temporal occupation), this metric would take the following values for each plan in Figure \ref{plans}: 0.51, 0.96, 1.2 and 1.28; that is, the best plan would be the first plan and the fourth plan would be the worst according to this metric. Unlike the previous metrics, in this particular example, $P_{U3}$ penalizes the plans with many visits to a greater degree than the plans with a low occupation.
\end{itemize}
In summary, we can observe that, in all cases, the metrics have correctly selected the best plan according to the user preferences. The degree to which the other plans are penalized depends on the particular configuration of the plan, as the results will show (a short sketch combining the penalties into these metrics is given below).
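Given the penalty helper sketched after the list of penalties, the three metrics are simple combinations of its outputs; the snippet below is only an illustration of how they fit together.
\begin{scriptsize}
\begin{verbatim}
def plan_metrics(p_u, p_journey, p_visits, p_occup):
    """Combine the penalties into the three plan metrics (to be minimized)."""
    m1 = p_u["U1"] + p_journey + p_visits + p_occup   # M1, in [0, 4]
    m2 = p_u["U2"] + p_visits + p_occup               # M2: journeys already in P_U2
    m3 = p_u["U3"] + p_journey + p_visits + p_occup   # M3, in [0, 4]
    return m1, m2, m3

# With the first-plan penalties computed in the earlier sketch (few visits,
# high occupation), m1, m2 and m3 come out close to the 1.09, 0.7 and 0.51
# reported in the text; small differences are due to rounding of the
# durations read off the figure.
\end{verbatim}
\end{scriptsize}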
\section{Introduction}
The exponential growth of the Internet of Things and the surge of open data platforms provided by city governments worldwide are providing a new foundation for travel-related mobile products and services. With technology being embedded in all organizations and entities, and the application of the smartness concept to address travellers' needs before, during and after their trip, destinations could increase their competitiveness level \cite{buhalis13}.
Many tourism applications provide a personalized tourist agenda with the list of recommended activities to the user \cite{Sebastia:2009,vansteenwegen11,RefanidisE15}. In most cases, applications return a tourist route or agenda indicating the most convenient order for the user to realize the activities and, additionally, the path to follow between activities. In other cases, tools provide a dynamic interaction that allows the user to interact with such agenda by adding or removing activities or changing their order.
{\sf SAMAP} \cite{castillo08}, for instance, elicits a tourist plan including information about the transportation mode, restaurants and bars or leisure attractions such as cinemas or theaters, all this accompanied by a detailed plan explanation. Scheduled routes presented in a map along with a timetable are nowadays a common functionality of many tourist applications like {\sf e-Tourism} \cite{Sebastia:2009}, including also context information such as the opening and closing hours of the places to visit and the geographical distances between places. In {\sf CT-Planner} \cite{kurata14}, personalization is understood as taking into account preferences like the user's walking speed or reluctance to walk, in which case the planner will suggest short walking distances. {\sf PersTour}~\cite{LimCLK15} calculates a personalized duration of a visit using the popularity of the point of interest and the user preferences. The work in~\cite{RodriguezMPC12} considers user preferences based on the number of days of the trip and the pace of the tour; i.e., whether the user wants to perform many activities in one day or travel at a more relaxed pace.
Personalized tourism applications must undoubtedly deal with the constraints and preferences that define the interests of the user. This can be addressed through a scheduler designed for the automatic scheduling of a user's individual activities in an electronic calendar such as {\sf SelfPlanner} \cite{RefanidisA11}. In {\sf SelfPlanner}, user preferences are defined over alternative schedules and the user specifies whether she prefers the task to be scheduled as early or as late as possible or whether she is indifferent on how a task will be scheduled within its temporal domain. {\sf SelfPlanner} and its descendant \cite{AlexiadisR16} enable users to express their preferences over the way their activities should be scheduled in time. {\sf e-Tourism} \cite{Sebastia:2009,IbanezSO16} approaches the problem of creating a customized tourist plan as a preference-based planning problem encoded in PDDL3.0 and solved with the planner {\sf OPTIC} \cite{BentonCC12}. One limitation of {\sf OPTIC} is that it does not enable the use of non-linear plan metrics. This makes it infeasible to deal with situations in which, for instance, the satisfaction of the user with the tourist plan does not always increase linearly with its duration.
Tourist preferences are not only about scheduling activities at the user's preferred time but also dealing with the travel style or personal circumstances of the tourist such as the rhythm of the trip, handling the number of visits to include in the tour or giving more priority to visits of special predilection for the user. Thus, we envision the task of creating a customized tourist agenda as a planning and scheduling (P\&S) application capable of conveniently scheduling the most appropriate goals (visits) so as to maximize the user satisfaction with the tourist route. Our proposal relies upon
encoding the tourist agenda problem as a CSP (Constraint Satisfaction Problem). The challenge when using a CSP-based approach lies specifically in (a) the encoding of a planning problem as a constraint programming formulation \cite{Garrido09,Sebastia:2009} and (b) the encoding of user preferences, which must be maximized, as a function to minimize.
The paper is organized as follows. First, a description of the tourist agenda problem is given and we detail the metrics we will use to evaluate each obtained solution. Then, we describe the formulation of this problem for being solved by an automated planner. Afterwards, the problem is encoded as a CSP. In the section {\em Experiments}, we analyze the results we have obtained for a set of tourist problems. Finally, we draw some conclusions about this work.
\section{PDDL encoding}
This section describes the planning formulation of the tourist problem described in the prior section. We will specify the problem with PDDL (Planning Domain Definition Language), the standard language for encoding planning problems.
The features required to define the tourist problem in PDDL are: (1) temporal planning and the management of durative actions (e.g., duration of visits, time spent in transportation, etc.); (2) the ability to reason with temporal constraints (e.g., scheduling the activities within the opening hours of places, planning the tour within the available time slot of the tourist, etc.); and (3) the ability to reason with the tourist preferences (e.g., selecting the preferred activities of the user for planning the tour). Specifically, apart from durative actions, which were introduced in PDDL 2.1 \cite{fox2003pddl2}, we also need the following features:
\begin{itemize}
\item \textbf{Duration inequality} to define the duration of an action as a value within an interval. This functionality was included in PDDL2.1.
\item \textbf{Timed initial literals:} to describe deterministic and unconditional exogenous events. They were included in PDDL2.2 \cite{edelkamp04}.
\item \textbf{Preferences} or soft goals to express the user preferences. They were included in PDDL3.0 \cite{gerevini2009deterministic}.
\item \textbf{Plan metrics} to allow the quantitative evaluation of plans for selecting the best plan. This was included in PDDL2.1.
\end{itemize}
Subsequently, we describe the PDDL problem formulation from the input data $P^u=<R,V,H,T>$.
\subsection{Variables}
The problem variables are specified through predicates and functions. Visiting a POI is described by means of:
\begin{itemize}
\item The interval duration of visiting a POI $p$ is defined through the functions {\tt (min\_visit\_time ?p)} and {\tt (max\_visit\_time ?p)}, matching the functions used in the action definitions below. They will be assigned the values $dmin_p$ and $dmax_p$ of the corresponding POI $p$ in the list $V$.
\item The opening and closing time of a POI $p$ are specified by a timed-initial literal: {\tt (at $open_p$ (open $p$))} and {\tt (at $close_p$ (not (open $p$)))}, where $open_p$ and $close_p$ are defined in $H$.
\end{itemize}
The time for moving from one location $p$ to another location $q$ is defined by the function {\tt (location\_time $p$ $q$)}, which indicates the time in minutes to move from $p$ to $q$ as indicated in $T$. We note that $T$ contains the travelling time between the POIs of $V$ according to the transport mode specified by the user. We also need the following predicates and functions:
\begin{itemize}
\item A predicate that represents the user initial location, {\tt (person\_at $start\_loc$)}, which will be modified when a \texttt{move} action is applied.
\item The function {\tt (free\_time)} represents the remaining available time; the initial value is set to $total\_time$, which will decrease as new activities are included in the plan.
\item Two functions to compute the metrics during the plan construction, namely, a function to count the number of visits included in the plan, {\tt (number\_visit\_location)}; and the function {\tt (transport\_time)} to add up the time spent in \texttt{move} actions.
\end{itemize}
\subsection{Actions}
\begin{figure}[t]
\begin{scriptsize}
\begin{verbatim}
(:durative-action move
  :parameters (?x - location ?y - person ?z - location)
  :duration (= ?duration (location_time ?x ?z))
  :condition
    (and
      (at start (person_at ?y ?x))
      (at start (>= (free_time) (location_time ?x ?z))))
  :effect
    (and
      (at start (not (person_at ?y ?x)))
      (at end (person_at ?y ?z))
      (at end (decrease (free_time)
                        (location_time ?x ?z)))
      (at end (increase (transport_time)
                        (location_time ?x ?z)))))
\end{verbatim}
\end{scriptsize}
\caption{Action {\tt move} of the tourism domain} \label{fig:move}
\end{figure}
\begin{figure}
\begin{scriptsize}
\begin{verbatim}
(:durative-action visit
  :parameters (?x - location ?y - person)
  :duration
    (and
      (>= ?duration (min_visit_time ?x))
      (<= ?duration (max_visit_time ?x))
      (<= ?duration (free_time)))
  :condition
    (and
      (at start (not_visit_location ?x))
      (over all (person_at ?y ?x))
      (over all (open ?x)))
  :effect
    (and
      (at start (not (not_visit_location ?x)))
      (at end (visit_location ?x))
      (at end (increase (number_visit_location) 1))
      (at end (decrease (free_time) ?duration))))
\end{verbatim}
\end{scriptsize}
\caption{Action {\tt visit} of the tourism domain} \label{fig:visit}
\end{figure}
We define three types of actions in the tourist problem: \texttt{move}, \texttt{visit} and \texttt{eat} actions.
The action to \texttt{move} from one location to another is shown in Figure \ref{fig:move}. The parameters are the initial place {\tt ?x}, the user {\tt ?y} and the destination {\tt ?z}. The action duration is set to the time specified in $T$. The preconditions for this action to be applicable are: (1) the user is at location {\tt ?x} and (2) the free time is greater than the \texttt{move} duration. The effects of the action assert that (1) the user is no longer at the initial location, (2) the user is at the new location at the end of the action and (3) the free time and the time spent in the movement are modified according to the action duration.
The action to \texttt{visit} a POI is defined in Figure \ref{fig:visit}. The parameters are the POI to visit {\tt ?x} and the user {\tt ?y}. The action duration is a value between {\tt (min\_visit\_time ?x)} and {\tt (max\_visit\_time ?x)} which must be smaller than the remaining available time {\tt (free\_time)}. The planner will choose the actual duration of the action according to these constraints. The conditions for this action to be applicable are: (1) the POI has not been visited yet; (2) the user is at the POI during the whole execution of the action; and (3) the place is open during the whole execution of the action. The effects of the action assert that (1) the POI has been visited\footnote{Two predicates are necessary to indicate a visit has been done if the planner does not allow for negated conditions.}, (2) the number of visited locations is increased and (3) the free time is updated according to the visit duration.
Finally, the \texttt{eat} action, which represents the activity of {\em ``having lunch''}, is defined similarly to the \texttt{visit} action.
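A possible encoding of this action is sketched below. It mirrors the structure of \texttt{visit}; the predicates {\tt (restaurant ?x)}, {\tt (not\_had\_lunch ?y)} and {\tt (had\_lunch ?y)} are illustrative assumptions and are not part of the formulation given so far.
\begin{scriptsize}
\begin{verbatim}
(:durative-action eat
  :parameters (?x - location ?y - person)
  :duration
    (and
      (>= ?duration (min_visit_time ?x))
      (<= ?duration (max_visit_time ?x))
      (<= ?duration (free_time)))
  :condition
    (and
      (at start (restaurant ?x))      ; assumed predicate
      (at start (not_had_lunch ?y))   ; assumed predicate
      (over all (person_at ?y ?x))
      (over all (open ?x)))
  :effect
    (and
      (at start (not (not_had_lunch ?y)))
      (at end (had_lunch ?y))
      (at end (decrease (free_time) ?duration))))
\end{verbatim}
\end{scriptsize}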
\subsection{Goal and optimization function}
We define two different types of goals:
\begin{itemize}
\item {\em Hard goals} that represent the realization of an action that the user has specified as mandatory (e.g., being at the location the user has indicated as the final destination: {\tt (person\_at $location_{final}$)}).
\item {\em Soft goals or preferences} that we wish to satisfy in order to generate a good plan but that do not have to be achieved in order for the plan to be correct \cite{gerevini2009deterministic}. We will assign penalties to violated preferences.
\end{itemize}
The objective is to find a plan that achieves all the hard goals and minimizes the total penalty for unsatisfied preferences. For example, the specification of metric $M1$ (see Sections \emph{Penalties} and \emph{Metrics}) in PDDL is expressed as:
\texttt{(:metric minimize (+ $P_{U1} \; P_{journey} \; P_{\#visits} \; P_{occup}$))}
\vspace{0.1cm}
\textbf{(1)}$\;P_{U1}$. The specification of $P_{U1}$ requires defining a preference for every POI in $V$; e.g.
\vspace{0.1cm}
{\small
\texttt{(preference p1 (visit\_location id\_1))}
\texttt{(preference p2 (visit\_location id\_2))}}
\vspace{0.1cm}
and defining the penalty $P_{U1}$ for each preference:
{\small
\texttt{(/ (* 250 (is-violated p1)) 532)}
\texttt{(/ (* 282 (is-violated p2)) 532)}}
\vspace{0.1cm}
where $v_{id_1}=250$, $v_{id_2}=282$ and $\sum_{p \in V} v_p = 532$ (assuming $V$ contains only two POIs, {\small \texttt{id\_1}} and {\small \texttt{id\_2}}).
\vspace{0.1cm}
\textbf{(2)}$\,P_{journey}$. This penalty is specified as {\small \texttt{(/ (transport\_time) 540)}} where $total\_time=540$ in this particular example.
\vspace{0.1cm}
\textbf{(3)}$\;P_{\#visits}$. If the user selected \emph{few} visits, this penalty is expressed as {\small \texttt{(/ (number\_visit\_location) 10)}}, where $|V|=10$ in this example.
\vspace{0.1cm}
\textbf{(4)}$\;P_{occup}$. Assuming the user selected a \emph{high} temporal occupation in the plan, this is expressed as {\small \texttt{(/ (free\_time) 540)}} with $total\_time=540$.
\vspace{0.1cm}
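Putting the four penalty terms together, and assuming as above that the user selected \emph{few} visits and a \emph{high} temporal occupation, the full metric $M1$ could be assembled as follows, simply combining the illustrative expressions given above (the constants $532$, $540$ and $10$ are the example values used in the individual penalties and would be generated automatically from the input data):
\begin{scriptsize}
\begin{verbatim}
(:metric minimize
  (+ (/ (* 250 (is-violated p1)) 532)
     (/ (* 282 (is-violated p2)) 532)
     (/ (transport_time) 540)
     (/ (number_visit_location) 10)
     (/ (free_time) 540)))
\end{verbatim}
\end{scriptsize}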
\section{Introduction} \label{Sec1}
Every sports tournament has to provide appropriate incentives for the contestants to exert effort \citep{Szymanski2003}. In particular, the ranking method should not reward teams for poor performance \citep{VaziriDabadghaoYihMorin2018}. However, there are a number of reasons why a team may consider \emph{tanking} (deliberately losing) in a competition: \citet{KendallLenten2017} identify several examples when the misaligned rules had such unforeseen consequences.
Unsurprisingly, academic scholars have studied various theoretical models of sports contests in view of \emph{incentive incompatibility}.
\citet{Pauly2014} derives an impossibility theorem for championships consisting of two qualifying tournaments with disjoint sets of participants. \citet{Vong2017} proves that, if more than one contestant advances to the next round, some players can benefit from shirking to qualify with a lower rank.
\citet{DagaevSonin2018} investigate tournament systems composed of multiple round-robin and knockout tournaments with the same set of participants when the sets of winners of noncumulative prizes have a nonempty intersection. \citet{Csato2020f} considers group-based qualification systems where teams from different groups are compared, which can create incentives for both teams to play a draw instead of winning \citep{Csato2020d}. \citet{Csato2021a} presents how the ignorance of these theoretical findings has led to problems in European football.
\citet{KrumerMegidishSela2020b} show that strategic considerations may motivate a contestant to lose in a round-robin tournament because this can result in a higher expected payoff.
Although the round-robin format in which each team meets all the others is one of the most common sports tournaments, it requires a lot of time. On the other hand, if the competitors can play only against a limited number of opponents, the set of matches should be chosen carefully. This can be achieved through \emph{seeding}, by ordering the entrants based on playing history and/or the judgement of experts to pair them according to their ranks.
The problem of seeding in knockout tournaments has been thoroughly explored, see e.g.\ \citet{Hwang1982, HorenRiezman1985, Schwenk2000, GrohMoldovanuSelaSunde2012, DagaevSuzdaltsev2018, ArlegiDimitrov2020, DellaCroceDragottoScatamacchia2020}. The seeding rules of the most prominent football competition, the FIFA World Cup have also got serious attention \citep{ScarfYusof2011, Guyon2015a, LalienaLopez2019, CeaDuranGuajardoSureSiebertZamorano2020}. Similarly, several statistical papers have analysed the effect of seeding on tournament outcome \citep{MonksHusch2009, CoronaForrestTenaWiper2019, DagaevRudyak2019, EngistMerkusSchafmeister2021}.
However, the previous literature has scarcely addressed the incentive compatibility of the seeding rules except for \citet{Csato2020c}, a paper revealing a unique shortcoming in the UEFA Champions League group stage draw that emerged only in the 2015/16 season due to a misaligned way of filling vacant slots.
Our main contribution resides in a more universal result: traditional seeding systems based on exogenous measures of teams' strengths are generically incentive incompatible, but they can easily be made strategyproof.
Our roadmap is as follows.
Section~\ref{Sec2} presents two real-world cases to highlight the issue. A mathematical model is given in Section~\ref{Sec3}. We provide a strategyproof seeding mechanism and summarise policy implications in Section~\ref{Sec4}. Finally, Section~\ref{Sec6} concludes.
\section{Case studies from the real world} \label{Sec2}
Let us see two motivating examples.
\begin{example} \label{Examp1}
Assume the following hypothetical modifications to real-world results in the \href{https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_qualification}{2018 FIFA World Cup qualification}:
\begin{itemize}
\item
Wales vs.\ Republic of Ireland was 1-1 (instead of 0-1) on 9 October 2017 in \href{https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_qualification_\%E2\%80\%93_UEFA_Group_D}{UEFA Group D}. Consequently, Wales would have had 18 points and the Republic of Ireland 17 in that group, thus Wales would have advanced to the \href{https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_qualification_\%E2\%80\%93_UEFA_Second_Round}{UEFA Second Round}. There Wales would have been in Pot 1 rather than Denmark, therefore the tie Wales vs.\ Denmark would have been possible (in fact, Denmark played against the Republic of Ireland). Suppose that Wales qualified for the World Cup instead of Denmark.
\item
The first leg of Sweden vs.\ Italy was 1-1 (instead of 1-0) on 10 November 2017 in the UEFA Second Round, hence Italy qualified for the World Cup.
\end{itemize}
In the \href{https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_seeding}{draw for the 2018 FIFA World Cup}, the composition of the pots depended on the \href{https://www.fifa.com/fifa-world-ranking/ranking-table/men/rank/id11976/}{October 2017 FIFA World Ranking}. The only exception was the automatic assignment of the host---Russia---to Pot 1 besides the seven highest-ranked qualified teams.
Hence Uruguay ($17$th in the relevant FIFA ranking) would have been drawn from Pot 3, as among the best $16$ teams only Chile would not have qualified in the above scenario (Wales was the $14$th and Italy was the $15$th in the FIFA World Ranking of October 2017).
\begin{table}[t!]
\centering
\caption{Pot composition in the hypothetical 2018 FIFA World Cup}
\label{Table1}
\rowcolors{3}{}{gray!20}
\begin{threeparttable}
\begin{tabularx}{\textwidth}{LLLL} \toprule
Pot 1 & Pot 2 & Pot 3 & Pot 4 \\ \bottomrule
Russia (65) & Spain (8) & \textbf{Uruguay (17)} & Serbia (38) \\
Germany (1) & \textbf{Peru (10)} & Iceland (21) & Nigeria (41) \\
Brazil (2) & Switzerland (11) & Costa Rica (22) & Australia (43) \\
Portugal (3) & England (12) & Sweden (25) & Japan (44) \\
Argentina (4) & Colombia (13) & Tunisia (28) & Morocco (48) \\
Belgium (5) & \emph{Wales (14)} & Egypt (30) & Panama (49) \\
Poland (6) & \emph{Italy (15)} & Senegal (32) & South Korea (62) \\
France (7) & Mexico (16) & Iran (34) & Saudi Arabia (63) \\ \bottomrule
\end{tabularx}
\begin{tablenotes} \footnotesize
\item The pots are determined by the FIFA World Ranking of October 2017, see the numbers in parenthesis.
\item Russia is the top seed as host.
\item Teams written in \emph{italics} qualified only in the hypothetical but feasible scenario of Example~\ref{Examp1}.
\item Uruguay (17)---the top team in Pot 3---would have been drawn from Pot 2 due to losing against Paraguay (34) in the South American qualifiers of the 2018 FIFA World Cup since then either Paraguay (34) or New Zealand (122) would have qualified for the World Cup instead of Peru (10). The national teams affected by this tanking are written in \textbf{bold}.
\end{tablenotes}
\end{threeparttable}
\end{table}
The allocation of the teams in the above scenario is given in Table~\ref{Table1}.
Consider what would have happened if the result of the match Paraguay ($34$) vs.\ Uruguay, played on 5 September 2017 in the \href{https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_qualification_(CONMEBOL)}{South American qualifiers}, would have been 2-1 instead of 1-2. Then Uruguay would have remained second and Paraguay would have been fifth in this qualifying competition. Paraguay would have played against New Zealand ($122$) in the \href{https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_qualification_(OFC\%E2\%80\%93CONMEBOL_play-off)}{OFC--CONMEBOL qualification play-off}, thus Peru ($10$) could not have qualified for the World Cup. Therefore, Uruguay would have been drawn from the stronger Pot 2 instead of Pot 3 merely due to its loss against Paraguay.
It probably means a substantial advantage: in the 2018 FIFA World Cup, seven and two teams advanced to the knockout stage from Pots 2 and 3, respectively.
\end{example}
Example~\ref{Examp1} contains a minor imprecision since we have not checked whether the October 2017 FIFA World Ranking itself would have been modified if the result of Paraguay vs.\ Uruguay had changed. However, this issue does not affect the potential case of incentive incompatibility.
\begin{example} \label{Examp2}
In the \href{https://en.wikipedia.org/wiki/2020\%E2\%80\%9321_UEFA_Europa_League_group_stage#Draw}{draw for the 2020/21 UEFA Europa League group stage}, the composition of the pots was determined by the 2020 UEFA club coefficients, available at \url{https://kassiesa.net/uefa/data/method5/trank2020.html}.
Assume the following hypothetical modifications to real-world results in the \href{https://en.wikipedia.org/wiki/2020\%E2\%80\%9321_UEFA_Europa_League_qualifying_phase_and_play-off_round\#Play-off_round}{play-off round of the qualifying phase}, with the first club listed (the favourite) advancing to the group stage in place of the second (the unseeded underdog); the UEFA club coefficients are given in parenthesis:
\begin{itemize}
\item
Viktoria Plze{\v n} ($34.0$) against Hapoel Be'er Sheva ($14.0$);
\item
Basel ($58.5$) against CSKA Sofia ($4.0$);
\item
Sporting CP ($50.0$) against LASK ($14.0$);
\item
Copenhagen ($42.0$) against Rijeka ($11.0$);
\item
VfL Wolfsburg ($36.0$) against AEK Athens ($16.5$).
\end{itemize}
There were $48$ teams in the group stage. Leicester City ($22.0$) was ranked $20$th because Rapid Wien ($22.0$) had the same 2020 UEFA club coefficient but the tie-breaking criterion---coefficient in the next most recent season in which they are not equal \citep[Annex~D.8]{UEFA2020b}---preferred the latter club. Due to the above changes, five teams with a higher coefficient than Leicester City would have qualified instead of five teams with a lower coefficient. Hence, Leicester City would have been only the $25$th highest-ranked, namely, the best club in Pot 3 as each of the four pots contains $12$ clubs.
In addition, suppose that Leicester defeated Norwich City at home by 2-1 (instead of 0-0) in the \href{https://en.wikipedia.org/wiki/2019\%E2\%80\%9320_Premier_League}{2019/20 English Premier League}. Then Leicester City would have remained fifth at the end of the season with $64$ points.
\begin{table}[t!]
\centering
\caption{Pot composition in the hypothetical 2020/21 UEFA Europa League}
\label{Table2}
\rowcolors{3}{}{gray!20}
\begin{tabularx}{1\textwidth}{LL} \toprule
Pot 1 & Pot 2 \\ \bottomrule
Arsenal (91.0) & Gent (39.5) \\
\textbf{Tottenham Hotspur (85.0)} & PSV Eindhoven (37.0) \\
Roma (80.0) & \emph{VfL Wolfsburg (36.0)} \\
Napoli (77.0) & Celtic (34.0) \\
Benfica (70.0) & \emph{Viktoria Plze{\v n} (34.0)} \\
Bayer Leverkusen (61.0) & Dinamo Zagreb (33.5) \\
\emph{Basel (58.5)} & Sparta Prague (30.5) \\
Villarreal (56.0) & Slavia Prague (27.5) \\
\emph{Sporting CP (50.0)} & Ludogorets Razgrad (26.0) \\
CSKA Moscow (44.0) & Young Boys (25.5) \\
\emph{Copenhagen (42.0)} & Crvena Zvezda (22.75) \\
Braga (41.0) & Rapid Wien (22.0) \\ \bottomrule
\end{tabularx}
\vspace{0.25cm}
\begin{threeparttable}
\rowcolors{3}{}{gray!20}
\begin{tabularx}{1\textwidth}{LL} \toprule
Pot 3 & Pot 4 \\ \bottomrule
\textbf{Leicester City (22.0)} & 1899 Hoffenheim (14.956) \\
PAOK (21.0) & CFR Cluj (12.5) \\
Qaraba{\u g} (21.0) & Zorya Luhansk (12.5) \\
Standard Li{\` e}ge (20.5) & Nice (11.849) \\
Real Sociedad (20.476) & Lille (11.849) \\
Granada (20.456) & Dundalk (8.5) \\
Milan (19.0) & Slovan Liberec (8.0) \\
AZ Alkmaar (18.5) & Antwerp (7.58) \\
Feyenoord (17.0) & Lech Poznan (7.0) \\
Maccabi Tel Aviv (16.5) & Sivasspor (6.72) \\
Rangers (16.25) & Wolfsberger AC (6.585) \\
Molde (15.0) & Omonia (5.35) \\ \bottomrule
\end{tabularx}
\begin{tablenotes} \footnotesize
\item The pots are determined by the 2020 UEFA club coefficients, shown in parenthesis.
\item Teams written in \emph{italics} qualified only in the hypothetical but feasible scenario of Example~\ref{Examp2}.
\item Leicester City (22.0)---the top team in Pot 3---could have been drawn from Pot 2 due to losing against Wolverhampton (18.092) in the 2019/20 English Premier League since then the latter team could have qualified for the UEFA Europa League group stage instead of Tottenham Hotspur (85.0). The clubs affected by this tanking are written in \textbf{bold}.
\end{tablenotes}
\end{threeparttable}
\end{table}
The allocation of the clubs in the above scenario is presented in Table~\ref{Table2}.
Consider what would have happened if the outcome of the match Wolverhampton Wanderers ($18.092$) vs.\ Leicester City, played on 14 February 2020 in the 2019/20 Premier League, had been 1-0 instead of 0-0. Leicester would have remained fifth with $63$ points, while Wolverhampton would have been sixth with $61$ points rather than Tottenham Hotspur ($85.0$), which scored $59$ points. Consequently, Wolverhampton would have entered the Europa League qualification in the second qualifying round, and it could have qualified for the group stage in the place of Tottenham. Then Leicester would have been drawn from the stronger Pot 2 merely due to its loss against Wolverhampton.
This probably means an advantage, although in the 2020/21 Europa League, six teams advanced to the knockout stage from each of Pots 2 and 3.
\end{example}
\section{Theoretical background} \label{Sec3}
Consider a round-robin qualifying tournament with a set of teams $T$, where each team $t \in T$ has a coefficient $\xi_t$. The teams ranked between the $p$th and $q$th ($p \leq q$) qualify for the second stage. A qualified team $t$ has a seeding value $\Psi_t$.
Every team prefers that as many as possible of the other teams in the second round have a lower seeding value than its own.
The reason is the common belief that these measures positively correlate with true abilities. All coefficients used in practice are constructed along this line. For instance, the \href{https://en.wikipedia.org/wiki/FIFA_World_Rankings}{FIFA World Ranking} and the \href{https://en.wikipedia.org/wiki/UEFA_coefficient}{UEFA coefficients} for national teams, countries, and clubs alike award more points for wins (draws) than for draws (losses), thus better achievements in the past translate into a higher value.
Therefore, the first goal for every team is to qualify and the second goal is to qualify together with teams having a lower coefficient.
Any team $t \in T$ may tank to improve the second objective without deteriorating the first. Denote the (strict) rankings of the qualifying tournament by $\succ$ and $\succ'$, as well as the sets of teams qualified by
\[
Q = \{ s \in T: p-1 \leq |r \in T: r \succ s| \leq q-1 \} \text{ and}
\]
\[
Q' = \{ s \in T: p-1 \leq |r \in T: r \succ' s| \leq q-1 \}
\]
before and after tanking, respectively.
\begin{definition} \label{Def1}
\emph{Incentive incompatibility with respect to seeding}:
A round-robin qualifying tournament is said to be \emph{incentive incompatible with respect to seeding} if there exists a team $t \in T$ with a tanking strategy such that:
\begin{itemize}
\item
team $t$ qualifies for the second stage both before and after tanking, that is, $t \in Q$ and $t \in Q'$;
\item
team $t$ has a better seeding position after tanking than before tanking, namely, $|s \in Q': \Psi_s > \Psi_t| < |s \in Q: \Psi_s > \Psi_t|$.
\end{itemize}
Otherwise, the qualifying tournament is called \emph{incentive compatible with respect to seeding}.
\end{definition}
As Section~\ref{Sec2} reveals, a qualifying tournament is incentive incompatible if $\Psi_t = \xi_t$ for all $t \in Q$ and $2 \leq |Q| < |T|$. In particular, a situation may exist where team $i$ has already secured qualification, while teams $j$ and $k$ compete for another slot such that $\xi_k > \xi_i > \xi_j$. Then team $i$ may consider losing against team $j$ in order to push it to the next stage at the expense of team $k$ as team $i$ can get a better seeding pot by taking team $j$ to the second round instead of team $k$.
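As a purely hypothetical numerical illustration: let $\xi_i=50$, $\xi_j=20$ and $\xi_k=80$, suppose that exactly two teams qualify, that team $i$ has already secured one of the two slots, and that $\Psi_t=\xi_t$ for the qualified teams. If team $i$ beats team $j$, then team $k$ takes the remaining slot and $|s \in Q: \Psi_s > \Psi_i|=1$, so team $i$ is seeded behind team $k$. If team $i$ loses to team $j$, then team $j$ qualifies instead, $|s \in Q': \Psi_s > \Psi_i|=0$, and team $i$ obtains the best seeding position; hence tanking is profitable in the sense of Definition~\ref{Def1}.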
The result below provides sufficient conditions to prevent a strategic manipulation of this type. For the sake of simplicity, we assume that ties in the seeding values of qualified teams are broken in favour of the team ranked higher in the qualifying tournament.
\begin{proposition} \label{Prop1}
A round-robin qualifying tournament is incentive compatible with respect to seeding if at least one of the following conditions holds:
\begin{itemize}
\item
Only one team is allowed to qualify: $p = q$;
\item
All teams qualify: $p = 1$ and $q = |T|$;
\item
The seeding value of each qualified team $t \in Q$ in the second stage is equal to the maximal coefficient of the teams that are ranked lower than team $t$ in the qualifying round-robin tournament: $\Psi_t = \max \{ \xi_s: t \succ s \}$ for all $t \in Q$.
\end{itemize}
\end{proposition}
\begin{proof}
If $p = q$, then $t \in Q$ implies $Q=\{t\}$, hence $|s \in Q: \Psi_s > \Psi_t| = 0$. Consequently, no tanking strategy can satisfy Definition~\ref{Def1}. \\
$p = 1$ and $q = |T|$ result in $Q = T$, thus $|s \in Q': \Psi_s > \Psi_t| = |s \in Q: \Psi_s > \Psi_t|$, which excludes the existence of a tanking strategy satisfying Definition~\ref{Def1}. \\
$\Psi_t = \max \{ \xi_s: t \succ s \}$ for all $t \in Q$ implies that
\[
s \in Q \text{ and } \Psi_s > \Psi_t \iff s \in Q \text{ and } s \succ t.
\]
Since team $t$ cannot be ranked higher in the round-robin qualifying tournament after tanking,
\[
|s \in Q': \Psi_s > \Psi_t| = |s \in Q': s \succ t| \geq |s \in Q: s \succ t| = |s \in Q: \Psi_s > \Psi_t|
\]
holds. Hence, the qualifying tournament is incentive compatible with respect to seeding.
\end{proof}
\section{Discussion} \label{Sec4}
Now we present a general procedure to guarantee our requirement, incentive compatibility with respect to seeding, on the basis of the theoretical model. Some alternative ideas are also outlined shortly.
According to Proposition~\ref{Prop1}, there are three ways to achieve strategyproofness in a round-robin qualifying tournament. However, the first two conditions---when exactly one team qualifies or all teams qualify for the next round---could not offer a universal rule. Nonetheless, they can be exploited in certain cases, for example, only one team advanced from the \href{https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_qualification_(OFC)}{Oceanian (OFC) section of the 2018 FIFA World Cup qualification}.
Fortunately, there is a third opportunity, that is, to calculate the seeding value of any qualified team $t \in Q$ as $\Psi_t = \max \{ \xi_s: t \succ s \}$. In other words, team $t$ is seeded in the second stage based on the maximum of coefficients $\xi_s$ of all teams $s$ ranked lower than team $t$ in its round-robin qualifying competition. This is a reasonable rule: if team $i$ finishes ahead of team $j$ in a league, why is it judged worse for the draw in the next round?
Our proposal is called \emph{strategyproof seeding}.
\begin{sidewaystable
\centering
\caption{Alternative rules for the draw of the UEFA Champions League group stage in the 2020/21 season}
\label{Table3}
\renewcommand\arraystretch{0.8}
\begin{threeparttable}
\rowcolors{1}{gray!20}{}
\begin{tabularx}{\textwidth}{lccCCcCCc} \toprule \hiderowcolors
Club & Country & Position & \multicolumn{2}{c}{Coefficient} & Inherited from & \multicolumn{3}{c}{Pot allocation} \\
& & & Official & Proposed & & Official & Proposed & Change \\ \bottomrule \showrowcolors
Bayern Munich & \multicolumn{2}{c}{CL TH (Germany 1st)} & 136 & 136 & --- & 1 (1) & 1 (1) & --- \\
Sevilla & \multicolumn{2}{c}{EL TH (Spain 4th)} & 102 & 102 & --- & 1 (1) & 1 (1) & --- \\
Real Madrid & Spain & 1st & 134 & 134 & --- & 1 (1) & 1 (1) & --- \\
Liverpool & England & 1st & 99 & 116 & Manchester City (2nd) & 1 (1) & 1 (1) & --- \\
Juventus & Italy & 1st & 117 & 117 & --- & 1 (1) & 1 (1) & --- \\
Paris Saint-Germain & France & 1st & 113 & 113 & --- & 1 (1) & 1 (1) & --- \\
Zenit Saint Petersburg & Russia & 1st & 64 & 64 & --- & 1 (1) & 1 (1) & --- \\
Porto & Portugal & 1st & 75 & 75 & --- & 1 (2) & 1 (3) & --- \\
Barcelona & Spain & 2nd & 128 & 128 & --- & 2 (2) & 2 (2) & --- \\
Atl\'etico Madrid & Spain & 3rd & 127 & 127 & --- & 2 (2) & 2 (2) & --- \\
Manchester City & England & 2nd & 116 & 116 & --- & 2 (2) & 2 (2) & --- \\
Manchester United & England & 3rd & 100 & 100 & --- & 2 (2) & 2 (2) & --- \\
Shakhtar Donetsk & Ukraine & 1st & 85 & 85 & --- & 2 (2) & 2 (2) & --- \\
Borussia Dortmund & Germany & 2nd & 85 & 85 & --- & 2 (1) & 2 (1) & --- \\
Chelsea & England & 4th & 83 & 91 & Arsenal (8th) & 2 (2) & 2 (2) & --- \\
Ajax & Netherlands & 1st & 69.5 & 69.5 & --- & 2 (2) & 3 (3) & \textcolor{BrickRed}{\rotatebox[origin=c]{270}{\ding{212}}} \\
Dynamo Kyiv & Ukraine & 2nd & 55 & 55 & --- & 3 (3) & 3 (3) & --- \\
Red Bull Salzburg & Austria & 1st & 53.5 & 53.5 & --- & 3 (3) & 4 (4) & \textcolor{BrickRed}{\rotatebox[origin=c]{270}{\ding{212}}} \\
RB Leipzig & Germany & 3rd & 49 & 61 & Bayer Leverkusen (5th) & 3 (3) & 3 (3) & --- \\
Inter Milan & Italy & 2nd & 44 & 80 & Roma (5th) & 3 (3) & 3 (3) & --- \\
Olympiacos & Greece & 1st & 43 & 43 & --- & 3 (3) & 4 (4) & \textcolor{BrickRed}{\rotatebox[origin=c]{270}{\ding{212}}} \\
Lazio & Italy & 4th & 41 & 80 & Roma (5th) & 3 (3) & 3 (3) & --- \\
Krasnodar & Russia & 3rd & 35.5 & 44 & CSKA Moscow (4th) & 3 (3) & 4 (4) & \textcolor{BrickRed}{\rotatebox[origin=c]{270}{\ding{212}}} \\
Atalanta & Italy & 3rd & 33.5 & 80 & Roma (5th) & 3 (3) & 3 (3) & --- \\
Lokomotiv Moscow & Russia & 2nd & 33 & 44 & CSKA Moscow (4th) & 4 (4) & 4 (4) & --- \\
Marseille & France & 2nd & 31 & 83 & Lyon (7th) & 4 (4) & 2 (2) & \textcolor{PineGreen}{\rotatebox[origin=c]{90}{\ding{212}}} \textcolor{PineGreen}{\rotatebox[origin=c]{90}{\ding{212}}} \\
Club Brugge & Belgium & 1st & 28.5 & 39.5 & Gent (2nd) & 4 (4) & 4 (4) & --- \\
Borussia M\"onchengladbach & Germany & 4th & 26 & 61 & Bayer Leverkusen (5th) & 4 (4) & 3 (3) & \textcolor{PineGreen}{\rotatebox[origin=c]{90}{\ding{212}}} \\
Istanbul Ba{\c s}ak{\c s}ehir & Turkey & 1st & 21.5 & 54 & Be{\c s}ikta{\c s} (3rd) & 4 (4) & 4 (4) & --- \\
Midtjylland & Denmark & 1st & 14.5 & 42 & Copenhagen (2nd) & 4 (4) & 4 (4) & --- \\
Rennes & France & 3rd & 14 & 83 & Lyon (7th) & 4 (4) & 3 (2) & \textcolor{PineGreen}{\rotatebox[origin=c]{90}{\ding{212}}} \\
Ferencv\'aros & Hungary & 1st & 9 & 10.5 & Feh\'erv\'ar (2nd) & 4 (4) & 4 (4) & --- \\
\bottomrule
\end{tabularx}
\begin{tablenotes} \footnotesize
\item CL (EL) TH stands for the UEFA Champions League (Europa League) titleholder.
\item The column ``Inherited from'' shows the club of the domestic league whose UEFA club coefficient is taken over.
\item Proposed pot is the pot that contains the club if the current seeding policy applies to Pot 1. Since this rule is incentive compatible \citep{Csato2020c}, the pot according to the amendment suggested by \citet[Section~5]{Csato2020c} is reported in parenthesis for both the official and the strategyproof seeding systems.
\item The column ``Change'' shows the movements of clubs between the pots due to the strategyproof seeding if the current seeding regime applies to Pot 1.
\end{tablenotes}
\end{threeparttable}
\end{sidewaystable}
Table~\ref{Table3} applies strategyproof seeding for the 2020/21 Champions League group stage. Even though the club coefficients of $15$ teams, including the $11$ lowest-ranked, are increased, it has only a moderate effect on the composition of pots as one German and two French teams benefit at the expense of four teams from the Netherlands, Austria, Greece, and Russia. That amendment usually favours the highest-ranked associations, where some clubs emerging without a robust European record (recall the unlikely triumph of Leicester City in the 2015/16 English Premier League \citep{BBC2016}) can ``obtain'' the performances of clubs with considerable achievements at the international level. Thus strategyproof seeding contributes to the success of underdogs in the European cups, which may be advantageous for the long-run competitive balance in the top leagues. In addition, it probably better reflects the true abilities of the teams since playing more matches reduces the role of luck in sports tournaments \citep{McGarrySchutz1997, ScarfYusofBilbao2009, LasekGagolewski2018, Csato2021b, CsatoBiroSziklai2021}. Consequently, it is more difficult to perform better in a round-robin league than in the Champions League or Europa League.
From the 2018/19 season onwards, UEFA club coefficients are determined either as the sum of all points won in the previous five years or as the association coefficient over the same period, \emph{whichever is the higher} \citep{Kassies2021, UEFA2021}. While this rule was not effective in the 2020/21 Champions League, the lower bound applied in the case of some Spanish, German, and French teams in the 2020/21 Europa League.
A somewhat similar policy is used in the UEFA Champions League and Europa League qualification, too, if a later round is drawn before the identity of the teams is known: ``\emph{If, for any reason, any of the participants in such rounds are not known at the time of the draw, the coefficient of the club with the higher coefficient of the two clubs involved in an undecided tie is used for the purposes of the draw.}'' \citep[Article~13.03]{UEFA2020a}.
Therefore, the principle of strategyproof seeding is not unknown in UEFA club competitions, which can support its implementation.
Table~\ref{Table3} reinforces that the strategyproof seeding system may result in more ties than the current definition. If some teams inherit their seeding values from the same lower-ranked team, then these remain identical, and the tie should be broken by drawing of lots \citep[Annex~D.8]{UEFA2020b}. Although tie-breaking does not affect incentive compatibility, it is reasonable to prefer the teams ranked higher in the domestic league. If clubs from other associations also have the same coefficient (which has a much lower probability), they can be assigned arbitrarily in this equivalence class. Alternatively, the original club coefficients can be used for tie-breaking.
Our incentive compatible mechanism has further favourable implications. UEFA has modified the pot allocation policy in the Champions League from the 2015/16 season, probably inspired by the previous year when Manchester City, the English champion, was drawn from the second pot but Arsenal, the fourth-placed team in England, was drawn from the first pot. This decision---intended to strengthen the position of domestic titleholders \citep{UEFA2015e}---has considerable sporting effects \citep{CoronaForrestTenaWiper2019, DagaevRudyak2019}, especially since the poor way of filling vacancies leads to incentive incompatibility \citep{Csato2020c}. On the other hand, the proposed seeding rule guarantees that a national champion has at least the same seeding value as any team ranked lower in its domestic league.
Naturally, other strategyproof seeding policies can be devised. One example is the system of the 2020 UEFA European Championship: the ranking of all entrants on the basis of their results in the qualification. However, that principle is not appropriate if the achievements in the qualifying tournament(s) cannot be compared.
Another solution might be to associate seeding positions not with the coefficients but with the path of qualification. For instance, a club can be identified in the UEFA Champions League as the Spanish runner-up rather than by its name. The results of these ``labels'' can be measured by the achievements of the corresponding teams \citep{Guyon2015b}. Nonetheless, this principle seems to be difficult to apply for the UEFA European Championship.
To conclude, the recommended strategyproof seeding mechanism provides incentive compatibility in any setting. While other rules are also able to eliminate perverse incentives, they are unlikely to be independent of the particular characteristics of the tournament.
\section{Conclusions} \label{Sec6}
The present work has analysed a mathematical model of seeding for sports tournaments where the teams qualify from round-robin competitions. Several contests are designed this way, including the most prestigious football tournaments (FIFA World Cup, UEFA European Championship, UEFA Champions League). The sufficient conditions of incentive compatibility have turned out to be quite restrictive: if each competitor is considered with its own coefficient (usually a measure of its past performance), only one or all of them should qualify from every round-robin contest.
Similarly to the main findings of \citet{Vong2017} and \citet{KrumerMegidishSela2020b}, our result has the flavour of an impossibility theorem at first glance. However, here we can achieve strategyproofness by giving to each qualified competitor the highest coefficient of all competitors that are ranked lower in its round-robin qualifying tournament for seeding purposes.
The central message of our paper for decision makers is consonant with the conclusion of \citet{HaugenKrumer2021}, that is, tournament design should be included into the family of traditional topics discussed by sports management. In particular, administrators are strongly encouraged to follow our recommendation in order to prevent the occurrence of costly scandals in the future.
\section*{Acknowledgements}
\addcontentsline{toc}{section}{Acknowledgements}
\noindent
Three anonymous reviewers provided valuable comments on an earlier draft. \\
We are indebted to the \href{https://en.wikipedia.org/wiki/Wikipedia_community}{Wikipedia community} for collecting and structuring valuable information on the sports tournaments discussed. \\
The research was supported by the MTA Premium Postdoctoral Research Program grant PPD2019-9/2019.
\bibliographystyle{apalike}
\section{Introduction}
The main object of study in this article is the so-called Lyapunov exponent, which measures the cost of traveling for the simple random walk in an i.i.d.~nonnegative potential on the $d$-dimensional cubic lattice $\mathbb{Z}^d$ ($d \geq 1$).
We now focus on the fact that the Lyapunov exponent depends on the distribution function of the potential.
The aim of this article is then to show that the Lyapunov exponent is strictly monotone in the distribution function of the potential, with respect to the order given by strict dominance.
In addition, since the Lyapunov exponent describes the rate function of the large deviation principle for the simple random walk in a random potential, we can lift the strict monotonicity of the Lyapunov exponent to the rate function.
\subsection{The model}\label{subsect:model}
Let $d \geq 1$ and consider the simple random walk $(S_k)_{k=0}^\infty$ on $\mathbb{Z}^d$.
For $x \in \mathbb{Z}^d$, write $P^x$ for the law of the simple random walk starting at $x$, and $E^x$ for the associated expectation.
Independently of $(S_k)_{k=0}^\infty$, let $\omega=(\omega(x))_{x \in \mathbb{Z}^d}$ be a family of i.i.d.~random variables taking values in $[0,\infty)$, and we call $\omega$ the \emph{potential}.
Denote by $\P$ and $\mathbb{E}$ the law of the potential $\omega$ and the associated expectation, respectively.
For any subset $A$ of $\mathbb{R}^d$, $H(A)$ stands for the hitting time of the simple random walk to $A$, i.e.,
\begin{align*}
H(A):=\inf\{ k \geq 0:S_k \in A \}.
\end{align*}
When $A=\{y\}$ is a single vertex set, we write $H(y):=H(\{y\})$ for simplicity.
Then, define for $x,y \in \mathbb{Z}^d$,
\begin{align*}
e(x,y,\omega):=E^x\Biggl[ \exp\Biggl\{ -\sum_{k=0}^{H(y)-1}\omega(S_k) \Biggr\} \1{\{ H(y)<\infty \}} \Biggr],
\end{align*}
with the convention that $e(x,y,\omega):=1$ if $x=y$.
Moreover, let us consider the following two-point functions $a(x,y,\omega)$ and $b(x,y)$ on $\mathbb{Z}^d$:
For $x,y \in \mathbb{Z}^d$,
\begin{align*}
a(x,y,\omega):=-\log e(x,y,\omega)
\end{align*}
and
\begin{align*}
b(x,y):=-\log\mathbb{E}[e(x,y,\omega)].
\end{align*}
We call $a(x,y,\omega)$ and $b(x,y)$ the \emph{quenched} and \emph{annealed travel costs} from $x$ to $y$ for the simple random walk, respectively.
The quenched travel cost $a(x,y,\omega)$ can be thought of as measuring the cost of traveling from $x$ to $y$ for the simple random walk in a fixed potential $\omega$.
On the other hand, Fubini's theorem and the independence of the potential imply that if $x \not= y$, then
\begin{align*}
b(x,y)
= -\log E^x\Biggl[ \prod_{z \in \mathbb{Z}^d}\mathbb{E}\bigl[ \exp\{ -\ell_z(H(y))\omega(0) \} \bigr] \1{\{ H(y)<\infty \}} \Biggr],
\end{align*}
where for $z \in \mathbb{Z}^d$ and $N \in \mathbb{N}$, $\ell_z(N)$ is the number of visits to $z$ by the simple random walk up to time $N-1$, i.e.,
\begin{align*}
\ell_z(N):=\#\{ 0 \leq k<N:S_k=z \}.
\end{align*}
Here we used the identity $\sum_{k=0}^{H(y)-1}\omega(S_k)=\sum_{z \in \mathbb{Z}^d}\ell_z(H(y))\,\omega(z)$. Hence, the annealed travel cost $b(x,y)$ can be rewritten as a quantity averaged over the potential, and we can interpret $b(x,y)$ as the cost of both optimizing the potential and transporting the simple random walk from $x$ to $y$.
It is easy to see from the strong Markov property that the above travel costs satisfy the following triangle inequalities:
For any $x,y,z \in \mathbb{Z}^d$,
\begin{align*}
a(x,z,\omega) \leq a(x,y,\omega)+a(y,z,\omega)
\end{align*}
and
\begin{align*}
b(x,z) \leq b(x,y)+b(y,z).
\end{align*}
(For more details we refer the reader to \cite[(12) in Section~3]{Flu07} and \cite[Proposition~2]{Zer98a}.)
As seen above, this paper treats the quenched and annealed situations simultaneously.
Therefore, to simplify statements, we always make the following assumption only for the quenched situation:
\begin{itemize}
\item[\bf (Qu)]
The potential $\omega$ satisfies $\mathbb{E}[\omega(0)]<\infty$ in $d=1$ (there is no additional assumption at all if $d \geq 2$).
\end{itemize}
Under this assumption, the next proposition exhibits the asymptotic behaviors of the travel costs, which were obtained by Flury~\cite[Theorem~A]{Flu07}, Mourrat~\cite[Theorem~{1.1}]{Mou12} and Zerner~\cite[Proposition~4]{Zer98a}.
\begin{prop}\label{prop:lyaps}
There exist norms $\alpha(\cdot)$ and $\beta(\cdot)$ on $\mathbb{R}^d$ (which are called the \emph{quenched} and \emph{annealed Lyapunov exponents}, respectively) such that for all $x \in \mathbb{Z}^d$,
\begin{align*}
\lim_{n \to \infty} \frac{1}{n}a(0,nx,\omega)=\alpha(x) \qquad \text{in probability},
\end{align*}
and
\begin{align*}
\lim_{n \to \infty} \frac{1}{n}b(0,nx)
= \inf_{n \in \mathbb{N}} \frac{1}{n} b(0,nx)
= \beta(x).
\end{align*}
Furthermore, the quenched and annealed Lyapunov exponents have the following bounds:
For $x \in \mathbb{R}^d \setminus \{0\}$,
\begin{align*}
-\log\mathbb{E}[e^{-\omega(0)}] \leq \frac{\alpha(x)}{\|x\|_1} \leq \log(2d)+\mathbb{E}[\omega(0)]
\qquad (\text{whenever } \mathbb{E}[\omega(0)]<\infty)
\end{align*}
and
\begin{align*}
-\log\mathbb{E}[e^{-\omega(0)}] \leq \frac{\beta(x)}{\|x\|_1} \leq \log(2d)-\log\mathbb{E}[e^{-\omega(0)}],
\end{align*}
where $\|\cdot\|_1$ is the $\ell^1$-norm on $\mathbb{R}^d$.
\end{prop}
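For a concrete feel for these bounds, consider the following purely illustrative computation: let $d=2$ and let $\omega(0)$ take the values $0$ and $1$ with probability $1/2$ each. Then the lower bound in both estimates is $-\log\mathbb{E}[e^{-\omega(0)}]=-\log\frac{1+e^{-1}}{2}\approx 0.38$, while the upper bounds are $\log 4+\mathbb{E}[\omega(0)]=\log 4+\tfrac12\approx 1.89$ in the quenched case and $\log 4-\log\frac{1+e^{-1}}{2}\approx 1.77$ in the annealed case.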
The Lyapunov exponents play a key role in large deviation principles for the simple random walk in random potentials.
For more details, we consider the quenched and annealed path measures $Q_{n,\omega}^\textrm{qu}$ and $Q_n^\textrm{an}$ defined as follows:
\begin{align*}
\frac{dQ_{n,\omega}^\mathrm{qu}}{dP^0}
= \frac{1}{Z_{n,\omega}^\mathrm{qu}} \exp\biggl\{ -\sum_{k=0}^{n-1}\omega(S_k) \biggr\}
\end{align*}
and
\begin{align*}
\frac{dQ_n^\mathrm{an}}{dP^0}
= \frac{1}{Z_n^\mathrm{an}} \mathbb{E}\biggl[ \exp\biggl\{ -\sum_{k=0}^{n-1}\omega(S_k) \biggr\} \biggr],
\end{align*}
where $Z_{n,\omega}^\mathrm{qu}$ and $Z_n^\mathrm{an}$ are the corresponding normalizing constants.
In addition, write $\alpha(\lambda,\cdot)$ and $\beta(\lambda,\cdot)$ for the quenched and annealed Lyapunov exponents associated with the potential $\omega+\lambda=(\omega(x)+\lambda)_{x \in \mathbb{Z}^d}$, respectively.
Note that $\alpha(\lambda,x)$ and $\beta(\lambda,x)$ are continuous in $(\lambda,x) \in [0,\infty) \times \mathbb{R}^d$ and concave increasing in $\lambda$ (see \cite[Theorem~A]{Flu07} and \cite[below~(64)]{Zer98a}).
Then, define the functions $I$ and $J$ on $\mathbb{R}^d$ as follows:
For $x \in \mathbb{R}^d$,
\begin{align*}
I(x):=\sup_{\lambda \geq 0}(\alpha(\lambda,x)-\lambda)
\end{align*}
and
\begin{align*}
J(x):=\sup_{\lambda \geq 0}(\beta(\lambda,x)-\lambda).
\end{align*}
It is known from \cite[below Theorem~A]{Flu07} and \cite[below~(66)]{Zer98a} that $I$ and $J$ are continuous and convex on their effective domains, which are equal to the closed $\ell^1$-unit ball.
The following proposition states the quenched and annealed large deviation principles for the simple random walk in a random potential, which were obtained by Flury~\cite[Theorem~B]{Flu07} and Mourrat~\cite[Theorem~{1.10}]{Mou12}.
\begin{prop}\label{prop:ldp}
Suppose that $\essinf \omega(0)=0$.
Then, the law of $S_n/n$ obeys the following quenched and annealed large deviation principles with the rate functions $I$ and $J$, respectively:
\begin{itemize}
\item (Quenched case)
For $\P$-almost every $\omega$ and for any Borel set $\Gamma$ in $\mathbb{R}^d$,
\begin{align*}
-\inf_{x \in \Gamma^o}I(x)
&\leq \liminf_{n \to \infty} \frac{1}{n}\log Q_{n,\omega}^\mathrm{qu}(S_n \in n\Gamma)\\
&\leq \limsup_{n \to \infty} \frac{1}{n}\log Q_{n,\omega}^\mathrm{qu}(S_n \in n\Gamma)
\leq -\inf_{x \in \bar{\Gamma}}I(x).
\end{align*}
\item (Annealed case)
For any Borel set $\Gamma$ in $\mathbb{R}^d$,
\begin{align*}
-\inf_{x \in \Gamma^o}J(x)
&\leq \liminf_{n \to \infty} \frac{1}{n}\log Q_n^\mathrm{an}(S_n \in n\Gamma)\\
&\leq \limsup_{n \to \infty} \frac{1}{n}\log Q_n^\mathrm{an}(S_n \in n\Gamma)
\leq -\inf_{x \in \bar{\Gamma}}J(x).
\end{align*}
\end{itemize}
Here $\Gamma^o$ and $\bar{\Gamma}$ denote the interior and closure of $\Gamma$, respectively.
\end{prop}
\begin{rem}\label{rem:F_dep}
By definition, the annealed travel cost $b(x,y)$, the Lyapunov exponents $\alpha(\cdot)$ and $\beta(\cdot)$ and the rate functions $I$ and $J$ depend on the distribution function of $\omega(0)$, say $\phi$.
If the specification of the dependence on $\phi$ is necessary, we put a subscript $\phi$ on the above notations: $b(x,y)=b_\phi(x,y)$, $\alpha(x)=\alpha_\phi(x)$, $\beta(x)=\beta_\phi(x)$, $I(x)=I_\phi(x)$ and $J(x)=J_\phi(x)$.
\end{rem}
\subsection{Main results}
As mentioned in Remark~\ref{rem:F_dep}, the Lyapunov exponents and the rate functions depend on the distribution function of $\omega(0)$.
In particular, it immediately follows that if $F$ and $G$ are distribution functions on $[0,\infty)$ and satisfy $F \leq G$, then
\begin{align*}
\alpha_F \geq \alpha_G,\qquad \beta_F \geq \beta_G
\end{align*}
and
\begin{align*}
I_F \geq I_G,\qquad J_F \geq J_G.
\end{align*}
This raises the question whether we can obtain ``strict'' inequalities in the above inequalities.
To discuss this problem, we introduce the following order between distribution functions on $[0,\infty)$:
For any two distribution functions $F$ and $G$ on $[0,\infty)$, we say that $F$ \emph{strictly dominates} $G$ if $F \leq G$ but $F \not\equiv G$.
Let us now formulate our main results, which are strict comparisons for the quenched and annealed Lyapunov exponents.
\begin{thm}\label{thm:strict_qlyap}
Suppose that $F$ strictly dominates $G$.
Then, there exists a constant $0<\Cl{qlyap}<\infty$ (which may depend on $d$, $F$ and $G$) such that for all $x \in \mathbb{R}^d \setminus \{0\}$,
\begin{align*}
\alpha_F(x)-\alpha_G(x) \geq \Cr{qlyap}\|x\|_1.
\end{align*}
\end{thm}
\begin{thm}\label{thm:strict_alyap}
Suppose that $F$ strictly dominates $G$.
For $d=1$, assume additionally that
\begin{align}\label{eq:add_a}
F(0)<e^{-\beta_G(1)}.
\end{align}
Then, there exists a constant $0<\Cl{alyap}<\infty$ (which may depend on $d$, $F$ and $G$) such that for all $x \in \mathbb{R}^d \setminus \{0\}$,
\begin{align*}
\beta_F(x)-\beta_G(x) \geq \Cr{alyap}\|x\|_1.
\end{align*}
\end{thm}
Since the rate functions are defined by the Lyapunov exponents, strict comparisons for the rate functions are direct consequences of Theorems~\ref{thm:strict_qlyap} and \ref{thm:strict_alyap}.
\begin{cor}\label{cor:strict_rate}
Under the assumption of Theorem~\ref{thm:strict_qlyap} (resp.~Theorem~\ref{thm:strict_alyap}), we have $I_F(x)>I_G(x)$ (resp.~$J_F(x)>J_G(x)$) for all $x \in \mathbb{R}^d$ with $0<\|x\|_1<1$.
\end{cor}
\begin{rem}
It is clear that for any distribution function $\phi$ on $[0,\infty)$, we have $\alpha_\phi(0)=\beta_\phi(0)=0$ and $I_\phi(0)=J_\phi(0)=0$.
This is the reason why we omit the case $\|x\|_1=0$ in Theorems~\ref{thm:strict_qlyap} and \ref{thm:strict_alyap} and Corollary~\ref{cor:strict_rate}.
Furthermore, since the effective domains of the rate functions are equal to the closed $\ell^1$-unit ball, in the case $\|x\|_1>1$, $I_\phi(x)=J_\phi(x)=\infty$ holds for any distribution function $\phi$ on $[0,\infty)$.
Therefore, we can also omit the case $\|x\|_1>1$ in Corollary~\ref{cor:strict_rate}.
However, we do not know whether Corollary~\ref{cor:strict_rate} is still true in the case $\|x\|_1=1$ for a technical reason (see Lemma~\ref{lem:finiteness} below).
\end{rem}
Let us here comment on earlier works related to the above results.
Zerner~\cite{Zer98a} and Flury~\cite{Flu07} first introduced the quenched and annealed Lyapunov exponents for the simple random walk in random potentials, respectively.
In addition, Mourrat~\cite{Mou12} gave optimal conditions for the existence of the quenched Lyapunov exponent.
As mentioned in Subsection~\ref{subsect:model}, the Lyapunov exponents play an important role in large deviation principles for the simple random walk in random potentials.
Accordingly, the Lyapunov exponents have been investigated from various viewpoints.
Flury~\cite{Flu08} and Zygouras~\cite{Zyg09} proved that the quenched and annealed Lyapunov exponents coincide in $d \geq 4$ and the low disorder regime.
In particular, the low disorder regime enables us to study the behaviors of the quenched and annealed Lyapunov exponents well.
In fact, Wang~\cite{Wan01,Wan02} observed that the quenched and annealed Lyapunov exponents were of the same order in the low disorder regime.
After that, Kosygina et al.~\cite{KosMouZer11} improved Wang's result, and explicitly computed the asymptotic behavior of the quenched and annealed Lyapunov exponents as the potential tends to zero.
The aforementioned results compare the quenched and annealed Lyapunov exponents for a fixed law of the potential.
On the other hand, there are a few results on the comparison between Lyapunov exponents for different laws of the potential.
As a work on this topic, Le~\cite{Le17} considered different laws of the potential simultaneously and proved that in $d \geq 3$, the quenched and annealed Lyapunov exponents are continuous in the law of the potential, i.e., if $F_n$ converges weakly to $F$, then we have for all $x \in \mathbb{R}^d$,
\begin{align*}
\lim_{n \to \infty} \alpha_{F_n}(x)=\alpha_F(x),\qquad
\lim_{n \to \infty} \beta_{F_n}(x)=\beta_F(x).
\end{align*}
Le's result naturally raises the question whether $\alpha_{F_n}(x)$ (resp.~$\beta_{F_n}(x)$) coincides with $\alpha_F(x)$ (resp.~$\beta_F(x)$) for all sufficiently large $n$, and this is a motivation for the present article.
Our results are also related to the first passage percolation on $\mathbb{Z}^d$.
In this model, a main object of study is the behavior of the \emph{first passage time} $\tau(x,y)$ from $x$ to $y$ defined as follows:
Assign independently to each edge $e$ of $\mathbb{Z}^d$ a nonnegative random weight $t_e$ with a common distribution function $\phi$.
Then, define
\begin{align}\label{eq:fpt}
\tau(x,y):=\inf\biggl\{ \sum_{e \in \gamma} t_e:\text{$\gamma$ is a lattice path on $\mathbb{Z}^d$ from $x$ to $y$} \biggr\}.
\end{align}
It is known from \cite[Theorem~{2.18}]{Kes86_book} that under some mild moment condition for the weights, there exists a norm $\mu_\phi(\cdot)$ on $\mathbb{R}^d$ (which is called the \emph{time constant}) such that for all $x \in \mathbb{Z}^d$,
\begin{align*}
\lim_{n \to \infty} \frac{1}{n}\tau(0,nx)=\mu_\phi(x),\qquad \text{a.s.~and in $L^1$}.
\end{align*}
The first passage time and the time constant correspond to the quenched travel cost and the quenched Lyapunov exponent, respectively.
In the context of the first passage percolation, Marchand~\cite{Mar02} and van~den~Berg--Kesten~\cite{vdBerKes93} studied the strict comparison for the time constant, and obtained the following result:
Assume that $d=2$ and $F(0)<1/2$.
If $F$ is \emph{strictly more variable} than $G$, i.e.,
\begin{align*}
\int_0^\infty h(t) \,dF(t)<\int_0^\infty h(t) \,dG(t)
\end{align*}
for every convex increasing function $h:\mathbb{R} \to \mathbb{R}$ for which the two integrals converge absolutely, then $\mu_F(\xi_1)<\mu_G(\xi_1)$ holds, where $\xi_1$ is the first coordinate vector.
Note that the strict more variability is a much weaker condition than the strict dominance (see \cite[Section~3]{vdBerKes93}).
We believe that Theorems~\ref{thm:strict_qlyap} and \ref{thm:strict_alyap} are established under the strict more variability.
However, it may be difficult to apply the arguments taken in \cite{Mar02,vdBerKes93} to the quenched and annealed Lyapunov exponents.
This is because in \cite{Mar02,vdBerKes93}, the key to deriving the strict comparison for the time constant is the analysis of ``optimal paths'' for the first passage time (which are lattice paths attaining the infimum on the right side of \eqref{eq:fpt}).
For the quenched and annealed travel costs, we cannot fix such an optimal path since the travel costs are averaged over trajectories of the simple random walk.
Hence, strict dominance is thought of as a reasonable order for the strict comparison between Lyapunov exponents.
Although we consider i.i.d.~potentials and the simple random walk on $\mathbb{Z}^d$ in the present and aforementioned articles, let us finally mention results for models with various changes of our setting.
In \cite{JanNurRA20_arXiv,RASepYil13}, the underlying space is $\mathbb{Z}^d$, but the potential is stationary and ergodic and each step of the random walk is in an arbitrary finite set.
Under such a more general setting, \cite{RASepYil13} studied the quenched large deviation principle and \cite{JanNurRA20_arXiv} constructed the quenched Lyapunov exponent.
On the other hand, \cite[Part~II]{Szn98_book} treats a Brownian motion evolving in a Poissonian potential, which is a continuum version of our model.
In that model, the Lyapunov exponent and the large deviation principle were also studied in both the quenched and annealed situations.
For further related works, see the references given in the aforementioned articles.
\subsection{Organization of the paper}
Let us describe how the present article is organized.
In Section~\ref{sect:pre}, we first introduce a coupling of potentials based on the pseudo-inverse function of the distribution function.
Our next purpose is to observe that the strict dominance for distribution functions causes a definite difference between their pseudo-inverse functions (see Lemma~\ref{lem:pseudo} below).
Throughout the paper, this observation is useful to derive a definite difference between Lyapunov exponents.
Section~\ref{sect:qu_strict} is devoted to the proof of Theorem~\ref{thm:strict_qlyap}, which is the strict inequality for the quenched Lyapunov exponent.
The idea of the proof is as follows:
Assume that $F$ strictly dominates $G$, and let $\omega_F$ and $\omega_G$ be the potentials distributed as $F$ and $G$, respectively.
Then, the observation of Section~\ref{sect:pre} yields that with high probability, there exist a lot of sites whose potentials for $F$ and $G$ are definitely different.
Hence, when we focus on such a typical situation, the simple random walk passes through a lot of sites $z$ with a definite gap between $\omega_F(z)$ and $\omega_G(z)$.
This shows that the quenched travel cost in $\omega_F$ is strictly bigger than that in $\omega_G$, and the strict inequality is inherited to the quenched Lyapunov exponents $\alpha_F$ and $\alpha_G$.
In Section~\ref{sect:an_strict}, we prove Theorem~\ref{thm:strict_alyap}, which is the strict inequality for the annealed Lyapunov exponent.
The idea of the proof is essentially the same as the quenched case.
However, since the annealed travel cost is the quantity after averaging over the potential, it is not sufficient to treat only a typical situation as in the quenched case.
Hence, the main task of this section is to construct an event which is typical for both the potential and the simple random walk and is harmless to the comparison for the annealed Lyapunov exponent.
We need a slightly different construction of such an event in $d=1$ and $d \geq 2$.
Therefore, this section consists of three subsections:
Subsections~\ref{subsect:harmless} and \ref{subsect:pf_anl_multi} treat the proof of Theorem~\ref{thm:strict_alyap} for $d \geq 2$, and Subsection~\ref{subsect:pf_anl_one} gives the proof of Theorem~\ref{thm:strict_alyap} for $d=1$.
The aim of Section~\ref{sect:rf_strict} is to prove Corollary~\ref{cor:strict_rate}, which is the strict inequality for the quenched and annealed rate functions.
This is a direct consequence of Theorems~\ref{thm:strict_qlyap} and \ref{thm:strict_alyap}.
Section~\ref{sect:one-dim} is devoted to the discussion of comparisons for one-dimensional Lyapunov exponents and rate functions without assumptions (Qu) and \eqref{eq:add_a}.
The main work here is to check that \eqref{eq:add_a} is a necessary and sufficient condition for strict comparison between one-dimensional annealed Lyapunov exponents.
It is clear from Theorem~\ref{thm:strict_alyap} that \eqref{eq:add_a} is a sufficient condition for strict comparison between Lyapunov exponents.
Conversely, the reason why the lack of \eqref{eq:add_a} causes the coincidence of the annealed Lyapunov exponents is as follows:
Assume that $F \leq G$ but \eqref{eq:add_a} fails to hold (i.e., $F(0) \geq e^{-\beta_G(1)}$).
Then, for all large $n$,
\begin{align*}
b_F(0,n) \geq b_G(0,n) \approx n\beta_G(1) \geq -n\log F(0).
\end{align*}
Roughly speaking, $-n\log F(0)$ is regarded as the cost of adjusting all the potentials for $F$ on the interval $[0,n)$ to zero, and this is one of the worst strategies for the annealed travel cost.
It follows that
\begin{align*}
-n\log F(0) \geq b_F(0,n) \gtrsim n\beta_G(1) \geq -n\log F(0).
\end{align*}
Therefore, we obtain $\beta_F(1)=\beta_G(1)=-\log F(0)$ by dividing $n$ and letting $n \to \infty$, and $\beta_F(1)$ and $\beta_G(1)$ coincide (see Section~\ref{sect:one-dim} for more details).
We close this section with some general notation.
Write $\|\cdot\|_1$ and $\|\cdot\|_\infty$ for the $\ell^1$ and $\ell^\infty$-norms on $\mathbb{R}^d$.
Throughout the paper, $c$, $c'$ and $C_i$, $i=1,2,\dots$, denote some constants with $0<c,c',C_i<\infty$.
\section{Preliminary}\label{sect:pre}
In this section, we introduce a coupling of potentials.
This is useful to compare Lyapunov exponents for different distribution functions simultaneously.
Independently of $(S_k)_{k=0}^\infty$, let $(U(x))_{x \in \mathbb{Z}^d}$ be a family of independent random variables with the uniform distribution on $(0,1)$.
Then, for a given distribution function $\phi$ on $[0,\infty)$, define
\begin{align*}
\omega_\phi(x):=\phi^{-1}(U(x)),\qquad x \in \mathbb{Z}^d,
\end{align*}
where $\phi^{-1}$ is the pseudo-inverse function of $\phi$:
\begin{align*}
\phi^{-1}(s):=\sup\{ t \geq 0:\phi(t)<s \},\qquad s \in (0,1),
\end{align*}
with the convention $\sup\emptyset:=0$.
Note that the potential $\omega_\phi=(\omega_\phi(x))_{x \in \mathbb{Z}^d}$ is a family of i.i.d.~random variables with the common distribution function $\phi$.
The following lemma says that the strict dominance for distribution functions causes a definite difference between their pseudo-inverse functions.
\begin{lem}\label{lem:pseudo}
If $F$ strictly dominates $G$, then the following results hold:
\begin{enumerate}
\item\label{item:pseudo_H} There exists an $\eta_0=\eta_0(F,G)>0$ and a closed interval
$\mathcal{H}=\mathcal{H}(\eta_0) \subset (0,1)$ with the Lebesgue measure $|\mathcal{H}| \in (0,1)$
such that for all $s \in \mathcal{H}$,
\begin{align*}
F^{-1}(s)-G^{-1}(s) \geq \eta_0.
\end{align*}
\item\label{item:pseudo_0} $F(0)<1$ holds.
\end{enumerate}
\end{lem}
\begin{proof}
Let us first prove part~\eqref{item:pseudo_H}.
Since $F$ strictly dominates $G$, we can find some $t' \geq 0$ such that $G(t')>F(t')$.
Then, set
\begin{align*}
\epsilon:=\frac{1}{2}(G(t')-F(t'))>0.
\end{align*}
The right-continuity of $F$ enables us to take $\eta_0>0$ such that $F(t'+\eta_0) \leq F(t')+\epsilon$.
We now consider the interval
\begin{align*}
\mathcal{H}:=\biggl[ F(t'+\eta_0)+\frac{1}{3}(G(t')-F(t'+\eta_0)),G(t') -\frac{1}{3}(G(t')-F(t'+\eta_0)) \biggr].
\end{align*}
Clearly, $\mathcal{H}$ is a closed interval included in $(0,1)$ and $|\mathcal{H}| \in (0,1)$ holds.
Moreover, for any $s \in \mathcal{H}$, we have $F^{-1}(s) \geq t'+\eta_0$ and $G^{-1}(s) \leq t'$.
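Indeed, by the definition of $\mathcal{H}$ and since $F(t'+\eta_0) \leq F(t')+\epsilon<G(t')$, every $s \in \mathcal{H}$ satisfies
\begin{align*}
F(t'+\eta_0)<s<G(t').
\end{align*}
The first inequality shows that $t'+\eta_0$ belongs to $\{ t \geq 0:F(t)<s \}$, so that $F^{-1}(s) \geq t'+\eta_0$, while the second inequality and the monotonicity of $G$ give $G(t) \geq G(t')>s$ for all $t \geq t'$, so that $G^{-1}(s) \leq t'$.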
This implies that for all $s \in \mathcal{H}$,
\begin{align*}
F^{-1}(s)-G^{-1}(s) \geq t'+\eta_0-t'=\eta_0,
\end{align*}
and the proof of part~\eqref{item:pseudo_H} is complete.
To prove part~\eqref{item:pseudo_0}, assume $F(0)=1$.
Then, $F \equiv 1$ follows.
Since $F \leq G$, we have $F \equiv G \equiv 1$.
This contradicts $F \not\equiv G$, and part~\eqref{item:pseudo_0} is proved.
\end{proof}
\section{Strict inequality for the quenched Lyapunov exponent}\label{sect:qu_strict}
The aim of this section is to prove Theorem~\ref{thm:strict_qlyap}.
To this end, throughout this section, we fix two distribution functions $F$ and $G$ on $[0,\infty)$ such that $F$ strictly dominates $G$, and let $\eta_0=\eta_0(F,G)$ be the constant appearing in Lemma~\ref{lem:pseudo}-\eqref{item:pseudo_H}.
The idea of the proof of Theorem~\ref{thm:strict_qlyap} is as follows:
Since $F$ strictly dominates $G$, Lemma~\ref{lem:pseudo}-\eqref{item:pseudo_H} implies that for each $z \in \mathbb{Z}^d$, one has $\omega_F(z)>\omega_G(z)$ with positive probability.
Hence, in a typical situation, during a certain time interval, the simple random walk starting at $0$ passes through ``enough'' sites $z$ with $\omega_F(z)>\omega_G(z)$.
It follows that with high probability, the travel cost in $\omega_F$ is strictly bigger than that in $\omega_G$, and this strict comparison is inherited to the quenched Lyapunov exponents $\alpha_F$ and $\alpha_G$.
To carry out the above idea, for each $R \in 2\mathbb{N}$, consider the boxes $\Lambda_R(v):=Rv+[-R/2,R/2)^d$, $v \in \mathbb{Z}^d$, which are called $R$-boxes.
Note that $R$-boxes form a partition of $\mathbb{Z}^d$.
Hence, each $z \in \mathbb{Z}^d$ is contained in precisely one $R$-box, and denote by $[z]_R$ the index $v$ such that $z \in \Lambda_R(v)$.
For a given $M \in \mathbb{N}$, we say that an $R$-box $\Lambda_R(v)$ is \emph{$M$-white} if the following conditions \eqref{item:white1} and \eqref{item:white2} hold:
\begin{enumerate}
\item\label{item:white1}
$\omega_F(z) \geq \omega_G(z)+\eta_0$ holds for some $z \in \Lambda_R(v)$.
\item\label{item:white2}
$\omega_G(z) \leq M$ holds for all $z \in \Lambda_R(v)$.
\end{enumerate}
The next lemma guarantees that if $R$ and $M$ are large enough, then each $R$-box can be $M$-white with high probability.
\begin{lem}\label{lem:white}
We have
\begin{align*}
\lim_{R \to \infty} \lim_{M \to \infty} \P(\Lambda_R(0) \text{ is $M$-white})=1.
\end{align*}
\end{lem}
\begin{proof}
Since the $\omega_G(z)$'s are finite and the event $\{ \omega_G(z) \leq M \text{ for all } z \in \Lambda_R(0) \}$ is increasing in $M$, we have
\begin{align*}
\lim_{M \to \infty} \P(\Lambda_R(0) \text{ is $M$-white})
&= \P(\omega_F(z) \geq \omega_G(z)+\eta_0 \text{ for some } z \in \Lambda_R(0))\\
&= 1-\{ 1-\P(\omega_F(0) \geq \omega_G(0)+\eta_0) \}^{R^d}.
\end{align*}
Note that Lemma~\ref{lem:pseudo}-\eqref{item:pseudo_H} and the definition of $\omega_F$ and $\omega_G$ imply
\begin{align*}
\P(\omega_F(0) \geq \omega_G(0)+\eta_0)
&= \int_{(0,1)} \1{\{ F^{-1}(s)-G^{-1}(s) \geq \eta_0 \}}\,ds\\
&\geq \int_\mathcal{H} \1{\{ F^{-1}(s)-G^{-1}(s) \geq \eta_0 \}}\,ds
=|\mathcal{H}| \in (0,1).
\end{align*}
Hence,
\begin{align*}
\lim_{M \to \infty} \P(\Lambda_R(0) \text{ is $M$-white})
\geq 1-(1-|\mathcal{H}|)^{R^d},
\end{align*}
and the lemma follows by letting $R \to \infty$.
\end{proof}
Define for $0<\delta<p<1$,
\begin{align}\label{eq:D}
D(\delta\|p):=\delta\log\frac{\delta}{p}+(1-\delta)\log\frac{1-\delta}{1-p}.
\end{align}
It is clear that for each $\delta \in (0,1)$,
\begin{align*}
\lim_{p \nearrow 1}D(\delta\| p)=\infty.
\end{align*}
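Indeed, since $-\delta\log p \geq 0$, the definition \eqref{eq:D} gives
\begin{align*}
D(\delta\|p) \geq \delta\log\delta+(1-\delta)\log(1-\delta)+(1-\delta)\log\frac{1}{1-p},
\end{align*}
and the right side diverges as $p \nearrow 1$.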
Moreover, set for $R \in 2\mathbb{N}$ and $M \in \mathbb{N}$,
\begin{align*}
p_{R,M}:=\P(\Lambda_R(0) \text{ is $M$-white}).
\end{align*}
Thanks to Lemma~\ref{lem:white}, there exist $R$ and $M$ such that
\begin{align}\label{eq:D-log}
D(1/2\|p_{R,M})>2\log(2d),
\end{align}
and we fix such $R$ and $M$ throughout this section.
The next proposition ensures that with high probability, the simple random walk starting at $0$ must pass through ``enough'' $M$-white $R$-boxes by reaching a remote point.
\begin{prop}\label{prop:QLA}
There exist constants $\Cl{QLA1}$ and $\Cl{QLA2}$ (which may depend on $d$, $F$, $G$, $\eta_0$, $R$ and $M$) such that for all $N \in \mathbb{N}$,
\begin{align*}
\P(\mathcal{E}(N)^c) \leq \Cr{QLA1}e^{-\Cr{QLA2}N},
\end{align*}
where $\mathcal{E}(N)$ is the event that for all lattice animals $\mathbb{A}$ on $\mathbb{Z}^d$ containing $0$ with $\#\mathbb{A} \geq N$,
\begin{align*}
\sum_{v \in \mathbb{A}} \1{\{ \Lambda_R(v) \text{ is $M$-white} \}} \geq \frac{\#\mathbb{A}}{2}.
\end{align*}
\end{prop}
\begin{proof}
The union bound shows that
\begin{align*}
\P(\mathcal{E}(N)^c)
\leq \sum_{\ell=N}^\infty \sum_\mathbb{A}
\P\biggl( \sum_{v \in \mathbb{A}}\1{\{ \Lambda_R(v) \text{ is $M$-white} \}}<\frac{\ell}{2} \biggr),
\end{align*}
where the second sum is taken over all lattice animals $\mathbb{A}$ on $\mathbb{Z}^d$ containing $0$ with $\#\mathbb{A}=\ell$.
Note that $(\1{\{ \Lambda_R(v) \text{ is $M$-white} \}})_{v \in \mathbb{Z}^d}$ is a family of independent Bernoulli random variables with parameter $p_{R,M}$.
Hence, we can use the Chernoff bound to estimate the last probability as follows:
\begin{align*}
\P\biggl( \sum_{v \in \mathbb{A}}\1{\{ \Lambda_R(v) \text{ is $M$-white} \}}<\frac{\ell}{2} \biggr)
\leq e^{-\ell D(1/2\|p_{R,M})}.
\end{align*}
Since $(2d)^{2\ell}$ is a rough upper bound on the number of lattice animals of size $\ell$ on $\mathbb{Z}^d$ containing $0$ (see \cite[Lemma~1]{CoxGanGriKes93}), one has
\begin{align*}
\P(\mathcal{E}(N)^c)
&\leq \sum_{\ell=N}^\infty (2d)^{2\ell}e^{-\ell D(1/2\|p_{R,M})}\\
&= \sum_{\ell=N}^\infty \exp\{ -\ell(D(1/2\|p_{R,M})-2\log(2d)) \}.
\end{align*}
Therefore, the proposition immediately follows by \eqref{eq:D-log}.
\end{proof}
We next observe that $M$-white boxes contribute to the difference between the travel costs in $\omega_F$ and $\omega_G$.
To do this, set for $x,v \in \mathbb{Z}^d$,
\begin{align*}
\Delta_{F,G}(x):=\omega_F(x)-\omega_G(x)
\end{align*}
and
\begin{align}\label{eq:ET}
T_R(v):=\inf\{ k>0: S_k \not\in \Lambda_R(v) \}.
\end{align}
Furthermore, define for $x,y \in \mathbb{Z}^d$,
\begin{align*}
g(x,y):=E^x\Biggl[ \exp\Biggl\{ -\sum_{k=0}^{T_R([S_0]_R)-1}\Delta_{F,G}(S_k) \Biggr\}
\exp\Biggl\{ -\sum_{k=0}^{H(y)-1}\omega_G(S_k) \Biggr\} \1{\{ H(y)<\infty \}} \Biggr].
\end{align*}
\begin{prop}\label{prop:gap}
If $\Lambda_R(v)$ is $M$-white and $y \not\in \Lambda_R(v)$, then for all $x \in \Lambda_R(v)$,
\begin{align*}
g(x,y) \leq \delta_0 \times e(x,y,\omega_G),
\end{align*}
where
\begin{align*}
\delta_0:=1-(1-e^{-\eta_0})\Bigl( \frac{1}{2de^M} \Bigr)^{2dR} \in (0,1).
\end{align*}
\end{prop}
\begin{proof}
Assume that $\Lambda_R(v)$ is $M$-white and fix $x \in \Lambda_R(v)$ and $y \not\in \Lambda_R(v)$.
Furthermore, let $A$ be the set of all sites $z \in \mathbb{Z}^d$ such that $\Delta_{F,G}(z) \geq \eta_0$.
Note that $F \leq G$ yields $F^{-1} \geq G^{-1}$, so that $\Delta_{F,G}(z) \geq 0$ holds for all $z \in \mathbb{Z}^d$. In particular, $P^x$-a.s.~on the event $\{ H(A)<T_R(v) \}$,
\begin{align*}
\exp\Biggl\{ -\sum_{k=0}^{T_R(v)-1} \Delta_{F,G}(S_k) \Biggr\}
\leq \exp\{ -\Delta_{F,G}(S_{H(A)}) \}
\leq e^{-\eta_0}.
\end{align*}
This proves
\begin{align*}
g(x,y)
&\leq e^{-\eta_0} \times E^x\Biggl[ \exp\Biggl\{ -\sum_{k=0}^{H(y)-1} \omega_G(S_k) \Biggr\}
\1{\{ H(y)<\infty,\,H(A)<T_R(v) \}} \Biggr]\\
&\quad +E^x\Biggl[ \exp\Biggl\{ -\sum_{k=0}^{H(y)-1} \omega_G(S_k) \Biggr\}
\1{\{ H(y)<\infty,\,H(A) \geq T_R(v) \}} \Biggr]\\
&= e(x,y,\omega_G)
-(1-e^{-\eta_0})E^x\Biggl[ \exp\Biggl\{ -\sum_{k=0}^{H(y)-1} \omega_G(S_k) \Biggr\}
\1{\{ H(y)<\infty,\,H(A)<T_R(v) \}} \Biggr].
\end{align*}
To estimate the last expectation, we consider a shortest lattice path $\gamma$ which starts and ends at the same site $x$ and goes through a site in $A$.
Since $\Lambda_R(v)$ is $M$-white, it is clear that $\gamma$ has at most $2dR$ vertices and each $z \in \gamma$ satisfies that $z \in \Lambda_R(v)$ and $\omega_G(z) \leq M$.
This combined with the Markov property implies that
\begin{align*}
&E^x\Biggl[ \exp\Biggl\{ -\sum_{k=0}^{H(y)-1} \omega_G(S_k) \Biggr\}
\1{\{ H(y)<\infty,\,H(A)<T_R(v) \}} \Biggr]\\
&\geq \exp\Biggl\{ -\sum_{z \in \gamma}\omega_G(z) \Biggr\}
P^x(S_\cdot \text{ follows } \gamma) \times e(x,y,\omega_G)\\
&\geq \Bigl( \frac{1}{2de^M} \Bigr)^{2dR} \times e(x,y,\omega_G).
\end{align*}
With these observations, one has
\begin{align*}
g(x,y)
&\leq e(x,y,\omega_G)-(1-e^{-\eta_0})\Bigl( \frac{1}{2de^M} \Bigr)^{2dR} \times e(x,y,\omega_G)\\
&= \delta_0 \times e(x,y,\omega_G),
\end{align*}
and the proof is complete.
\end{proof}
We are now in a position to prove Theorem~\ref{thm:strict_qlyap}.
\begin{proof}[\bf Proof of Theorem~\ref{thm:strict_qlyap}]
We first introduce the entrance times $(\sigma_i)_{i=1}^\infty$ in $M$-white $R$-boxes and the exit times $(\tau_i)_{i=1}^\infty$ from them:
Set $\tau_0:=1$ and define for $j \geq 0$,
\begin{align*}
\sigma_{j+1}:=\inf\{ k \geq \tau_j: S_k \text{ is in an $M$-white $R$-box} \}
\end{align*}
and
\begin{align*}
\tau_{j+1}:=\inf\{ k>\sigma_{j+1}: S_k \not\in \Lambda_R([S_{\sigma_{j+1}}]_R) \}.
\end{align*}
Furthermore, for $z \in \mathbb{Z}^d$, let $\mathbb{A}_R(z)$ stand for the lattice animal on $\mathbb{Z}^d$ which is made of the labels $v$ of $R$-boxes $\Lambda_R(v)$ visited by the simple random walk up to but not including time $H(\Lambda_R([z]_R))$, i.e.,
\begin{align}\label{eq:LA}
\mathbb{A}_R(z):=\bigl\{ [S_k]_R:0 \leq k<H(\Lambda_R([z]_R)) \bigr\}.
\end{align}
Fix $x \in \mathbb{Z}^d \setminus \{0\}$ and let $n$ be a sufficiently large integer.
To shorten notation, write $N:=\lfloor n\|x\|_\infty/R \rfloor$ and $L:=\lceil N/2 \rceil$.
We now restrict ourselves on the event $\mathcal{E}(N)$ (which appears in Proposition~\ref{prop:QLA}).
Then, since $P^0$-a.s., $\mathbb{A}_R(nx)$ is a lattice animal on $\mathbb{Z}^d$ containing $0$ with $\#\mathbb{A}_R(nx) \geq N$,
\begin{align*}
\sum_{v \in \mathbb{A}_R(nx)} \1{\{ \Lambda_R(v) \text{ is $M$-white} \}}
\geq \frac{1}{2} \#\mathbb{A}_R(nx)
\geq \frac{N}{2}.
\end{align*}
It follows that $P^0$-a.s.,
\begin{align}\label{eq:st}
\tau_{i-1} \leq \sigma_i<\tau_i \leq H(\Lambda_R([nx]_R)),\qquad 1 \leq i \leq L.
\end{align}
Then, set for $1 \leq i \leq L$,
\begin{align*}
f_i:=E^0\Biggl[ \exp\Biggl\{ -\sum_{k=0}^{\tau_i-1}\Delta_{F,G}(S_k) \Biggr\}
\exp\Biggl\{ -\sum_{k=0}^{H(nx)-1}\omega_G(S_k) \Biggr\} \1{\{ H(nx)<\infty \}}\Biggr].
\end{align*}
Note that $P^0$-a.s.,
\begin{align}\label{eq:q_induction}
e(0,nx,\omega_F) \leq f_L.
\end{align}
On the other hand, the strong Markov property implies that for each $1 \leq i \leq L$,
\begin{align}\label{eq:f}
f_i
= E^0\Biggl[ \exp\Biggl\{ -\sum_{k=0}^{\sigma_i-1}\Delta_{F,G}(S_k) \Biggr\}
\exp\Biggl\{ -\sum_{k=0}^{\sigma_i-1}\omega_G(S_k) \Biggr\} \times g(S_{\sigma_i},nx) \Biggr],
\end{align}
where $g(\cdot,\cdot)$ is the two-point function on $\mathbb{Z}^d$ introduced above Proposition~\ref{prop:gap}.
The definition of $\sigma_i$'s and \eqref{eq:st} yield that $P^0$-a.s., for all $1 \leq i \leq L$, $\Lambda_R([S_{\sigma_i}]_R)$ is $M$-white and $nx \not\in \Lambda_R([S_{\sigma_i}]_R)$ holds.
Therefore, we can use Proposition~\ref{prop:gap} to obtain that $P^0$-a.s.,
\begin{align*}
g(S_{\sigma_i},nx)
&\leq \delta_0 \times e(S_{\sigma_i},nx,\omega_G),\qquad 1 \leq i \leq L.
\end{align*}
Substituting this into \eqref{eq:f} and using the strong Markov property again, one has for each $1 \leq i \leq L$,
\begin{align*}
f_i
&\leq \delta_0 \times E^0\Biggl[ \exp\Biggl\{ -\sum_{k=0}^{\sigma_i-1}\Delta_{F,G}(S_k) \Biggr\}
\exp\Biggl\{ -\sum_{k=0}^{H(nx)-1}\omega_G(S_k) \Biggr\} \1{\{ H(nx)<\infty \}} \Biggr]\\
&\leq \delta_0 \times f_{i-1},
\end{align*}
with the convention $f_0:=e(0,nx,\omega_G)$.
This together with \eqref{eq:q_induction} implies
\begin{align*}
e(0,nx,\omega_F) \leq \delta_0^L \times e(0,nx,\omega_G).
\end{align*}
With these observations, on the event $\mathcal{E}(N)$,
\begin{align*}
a(0,nx,\omega_F) \geq a(0,nx,\omega_G)+L\log\delta_0^{-1},
\end{align*}
and Proposition~\ref{prop:QLA} shows that
\begin{align*}
\P\bigl( a(0,nx,\omega_F)<a(0,nx,\omega_G)+L\log\delta_0^{-1} \bigr)
\leq \P(\mathcal{E}(N)^c)
\leq \Cr{QLA1}e^{-\Cr{QLA2}N}.
\end{align*}
Hence, the definition of the quenched Lyapunov exponent (see Proposition~\ref{prop:lyaps}) proves that for all $x \in \mathbb{Z}^d \setminus \{0\}$,
\begin{align*}
\alpha_F(x) \geq \alpha_G(x)+\frac{\log\delta_0^{-1}}{2dR} \|x\|_1.
\end{align*}
Since $\alpha_F(\cdot)$ and $\alpha_G(\cdot)$ are norms on $\mathbb{R}^d$ and the constants $\delta_0$ and $R$ are independent of $x$, we can easily extend the above inequality to all $x \in \mathbb{R}^d \setminus \{0\}$.
This completes the proof.
\end{proof}
\section{Strict inequality for the annealed Lyapunov exponent}\label{sect:an_strict}
This section is devoted to the proof of Theorem~\ref{thm:strict_alyap}.
To this end, throughout this section, we fix two distribution functions $F$ and $G$ such that $F$ strictly dominates $G$.
In the quenched situation, the strict comparison follows from typical potentials caused by the event $\mathcal{E}(N)$ (which appears in Proposition~\ref{prop:QLA}).
However, in the annealed situation, we have to consider the travel cost after averaging over the potential, and it is not enough to focus on typical potentials as in the quenched situation.
The key to overcoming this difficulty is to control how long the simple random walk stays around sites with high potential.
To this end, in Subsection~\ref{subsect:harmless}, we construct some events which are harmless to the comparison for the annealed Lyapunov exponent.
Since those harmless events are slightly different in one and more dimensions, the proof of Theorem~\ref{thm:strict_alyap} is divided into two subsections (see Subsections~\ref{subsect:pf_anl_multi} and \ref{subsect:pf_anl_one} for $d \geq 2$ and $d=1$, respectively).
\subsection{Some events harmless to the annealed comparison}\label{subsect:harmless}
Assume $d \geq 2$ in this subsection.
For any $\kappa>0$, we say that an $R$-box $\Lambda_R(v)$ is \emph{$\kappa$-good} if $\Lambda_R(v)$ contains a site $z$ of $\mathbb{Z}^d$ with $\omega_F(z) \geq \kappa$.
Our first objective is to observe that an $R$-box can be $\kappa$-good with high probability if $\kappa$ and $R$ are sufficiently small and large, respectively.
\begin{lem}\label{lem:kappa}
There exists $\kappa>0$ such that $\P(\omega_F(0)<\kappa)<1$ holds.
In addition, we have for all $R \in 2\mathbb{N}$,
\begin{align*}
q_{\kappa,R}:=\P(\Lambda_R(0) \text{ is $\kappa$-good})=1-\P( \omega_F(0)<\kappa)^{R^d}.
\end{align*}
\end{lem}
\begin{proof}
Since we have assumed that $F$ strictly dominates $G$, Lemma~\ref{lem:pseudo}-\eqref{item:pseudo_0} implies $\P(\omega_F(0)=0)=F(0)<1$.
Hence, $\P(\omega_F(0)<\kappa)<1$ holds for some small $\kappa>0$, and the first assertion follows.
The proof of the second assertion is straightforward because we are now working on the i.i.d.~setting.
\end{proof}
From now on, we fix such a $\kappa$ and an arbitrary $x \in \mathbb{Z}^d \setminus \{ 0 \}$.
Then, the next proposition guarantees that with high probability, the simple random walk starting at $0$ must go through ``enough'' $\kappa$-good $R$-boxes by reaching a remote point.
\begin{prop}\label{prop:E1}
There exists an $R=R(d,\kappa) \in 2\mathbb{N}$ such that for all large $n$,
\begin{align*}
\P(\Cr{E1}(R,n)^c) \leq \Bigl( \frac{1}{4d}\mathbb{E}[e^{-\omega_G(0)}] \Bigr)^{n\|x\|_1},
\end{align*}
where $\Cl[E]{E1}(R,n)$ is the event that
\begin{align*}
\sum_{v \in \mathcal{A}}\1{\{ \Lambda_R(v) \text{ is $\kappa$-good} \}}
\geq \frac{\#\mathcal{A}}{2}
\end{align*}
holds for all lattice animals $\mathcal{A}$ on $\mathbb{Z}^d$ containing $0$ with $\#\mathcal{A} \geq \lfloor n\|x\|_\infty/R \rfloor$.
\end{prop}
\begin{proof}
Thanks to Lemma~\ref{lem:kappa} and the fact that $(\1{\{ \Lambda_R(v) \text{ is $\kappa$-good} \}})_{v \in \mathbb{Z}^d}$ is a family of Bernoulli random variables with parameter $q_{\kappa,R}$, we can apply the same argument as in the proof of Proposition~\ref{prop:QLA} to obtain that for all large $R \in 2\mathbb{N}$ and $n \in \mathbb{N}$ with $n \geq 2R$,
\begin{align}\label{eq:E1}
\P(\Cr{E1}(R,n)^c)
\leq 2\exp\biggl\{ -\frac{n\|x\|_1}{2dR}(D(1/2 \| q_{\kappa,R})-2\log(2d)) \biggr\}.
\end{align}
On the other hand, the definition of $D(1/2 \| q_{\kappa,R})$ (see \eqref{eq:D}) implies that for all $R \in 2\mathbb{N}$,
\begin{align*}
\frac{1}{R}(D(1/2 \| q_{\kappa,R})-2\log(2d))
\geq \frac{R^{d-1}}{2}\log\P(\omega_F(0)<\kappa)^{-1}-\frac{3}{R}\log(2d).
\end{align*}
Due to the hypothesis $d \geq 2$ and Lemma~\ref{lem:kappa}, the right side above goes to infinity as $R \to \infty$.
With these observations, we can find an $R \in 2\mathbb{N}$ such that for all $n \in \mathbb{N}$, the right side of \eqref{eq:E1} is smaller than or equal to $\{ \mathbb{E}[e^{-\omega_G(0)}]/(4d) \}^{n\|x\|_1}$, and the proposition follows.
\end{proof}
Our second objective is to estimate the number of $\kappa$-good $R$-boxes gone through by the simple random walk starting at $0$.
To do this, for $z \in \mathbb{Z}^d$, let $\mathcal{A}_R(z)$ stand for the lattice animal which is made of the labels $v$ of $R$-boxes $\Lambda_R(v)$ visited by the simple random walk up to but not including time $H(z)$, i.e.,
\begin{align*}
\mathcal{A}_R(z):=\bigl\{ [S_k]_R:0 \leq k<H(z) \bigr\}.
\end{align*}
In addition, denote by $\mathcal{G}_R$ the set of all sites $v$ of $\mathbb{Z}^d$ such that the $R$-box $\Lambda_R(v)$ is $\kappa$-good:
\begin{align*}
\mathcal{G}_R:=\{ v \in \mathbb{Z}^d:\Lambda_R(v) \text{ is $\kappa$-good} \}.
\end{align*}
The next proposition says that with high probability, there are only a few $\kappa$-good $R$-boxes which the simple random walk starting at $0$ goes through many times before reaching a remote point.
\begin{prop}\label{prop:E2}
There exists an $M=M(d,\kappa,R) \in \mathbb{N}$ such that for all large $n$,
\begin{align}\label{eq:E2}
\begin{split}
&\mathbb{E} \otimes E^0\Biggl[ \exp\Biggl\{ -\sum_{k=0}^{H(nx)-1}\omega_F(S_k) \Biggr\}
\1{\{ H(nx)<\infty \} \cap \Cr{E2}(M,n)^c} \Biggr]\\
&\leq 2\Bigl( \frac{1}{4d}\mathbb{E}[e^{-\omega_G(0)}] \Bigr)^{n\|x\|_1},
\end{split}
\end{align}
where $\Cl[E]{E2}(M,n)$ is the event that
\begin{align*}
\sum_{v \in \mathcal{G}_R}
\1{\{ \text{$(S_k)_{k=0}^{H(nx)}$ goes through $\Lambda_R(v)$ at least $M$ times}\}}
\leq \frac{1}{3}\#(\mathcal{A}_R(nx) \cap \mathcal{G}_R).
\end{align*}
\end{prop}
\begin{proof}
We use Proposition~\ref{prop:E1} to obtain that for all large $n$, the left side of \eqref{eq:E2} is not larger than
\begin{align}\label{eq:E1E2}
\begin{split}
&\Bigl( \frac{1}{4d}\mathbb{E}[e^{-\omega_G(0)}] \Bigr)^{n\|x\|_1}\\
&+\mathbb{E} \otimes E^0\Biggl[ \exp\Biggl\{ -\sum_{k=0}^{H(nx)-1}\omega_F(S_k) \Biggr\}
\1{\{ H(nx)<\infty \} \cap \Cr{E1}(R,n) \cap \Cr{E2}(M,n)^c} \Biggr].
\end{split}
\end{align}
Hence, our task is to show that the second term of \eqref{eq:E1E2} is bounded from above by $\{ \mathbb{E}[e^{-\omega_G(0)}]/(4d) \}^{n\|x\|_1}$.
To this end, take $M \in \mathbb{N}$ large enough to have
\begin{align*}
\biggl\{ 1-(1-e^{-\kappa})\Bigl( \frac{1}{2d} \Bigr)^{2dR} \biggr\}^{M/(12dR)}
\leq \frac{1}{4d}\mathbb{E}[e^{-\omega_G(0)}].
\end{align*}
Moreover, consider the entrance times $(\sigma_i)_{i=1}^\infty$ in $\kappa$-good $R$-boxes and the exit times $(\tau_i)_{i=1}^\infty$ from them:
Set $\tau_0:=1$ and define for $j \geq 0$,
\begin{align*}
\sigma_{j+1}:=\inf\{ k \geq \tau_j: S_k \text{ is in a $\kappa$-good $R$-box} \}
\end{align*}
and
\begin{align*}
\tau_{j+1}:=\inf\{ k>\sigma_{j+1}: S_k \not\in \Lambda_R([S_{\sigma_{j+1}}]_R) \}.
\end{align*}
Let $n$ be a sufficiently large integer.
Since $P^0 \textrm{-} \textrm{a.s.}$, $\mathcal{A}_R(nx)$ is a lattice animal on $\mathbb{Z}^d$ containing $0$ with $\#\mathcal{A}_R(nx) \geq \lfloor n\|x\|_\infty/R \rfloor$ and
\begin{align*}
\#(\mathcal{A}_R(nx) \cap \mathcal{G}_R)=\sum_{v \in \mathcal{A}_R(nx)} \1{\{ \Lambda_R(v) \text{ is $\kappa$-good} \}},
\end{align*}
it follows that $\P \otimes P^0$-a.s.~on the event $\{ H(nx)<\infty \} \cap \Cr{E1}(R,n) \cap \Cr{E2}(M,n)^c$,
\begin{align*}
&\sum_{v \in \mathcal{G}_R}
\1{\{ \text{$(S_k)_{k=0}^{H(nx)}$ goes through $\Lambda_R(v)$ at least $M$ times}\}}\\
&> \frac{1}{3}\#(\mathcal{A}_R(nx) \cap \mathcal{G}_R)
\geq \frac{1}{6}\#\mathcal{A}_R(nx)
\geq \frac{1}{6} \biggl\lfloor \frac{n\|x\|_\infty}{R} \biggr\rfloor
\geq \frac{n\|x\|_1}{12dR}.
\end{align*}
Therefore, setting $N:=M \lceil n\|x\|_1/(12dR) \rceil$, one has $P^0$-a.s.~on the event $\{ H(nx)<\infty \} \cap \Cr{E1}(R,n) \cap \Cr{E2}(M,n)^c$,
\begin{align*}
\tau_{i-1} \leq \sigma_i<\tau_i \leq H(nx),\qquad 1 \leq i \leq N.
\end{align*}
This tells us that the second term of \eqref{eq:E1E2} is bounded from above by
\begin{align*}
\mathbb{E} \otimes E^0\Biggl[ \prod_{i=1}^N \exp\Biggl\{ -\sum_{k=\sigma_i}^{\tau_i-1} \omega_F(S_k) \Biggr\}
\1{\{ \sigma_i<\infty \}} \Biggr].
\end{align*}
On the other hand, the same argument as in the proof of Theorem~\ref{thm:strict_qlyap} implies that $\P \textrm{-} \textrm{a.s.}$,
\begin{align*}
E^0\Biggl[ \prod_{i=1}^N \exp\Biggl\{ -\sum_{k=\sigma_i}^{\tau_i-1} \omega_F(S_k) \Biggr\}
\1{\{ \sigma_i<\infty \}} \Biggr]
\leq \Bigl\{ 1-(1-e^{-\kappa})\Bigl( \frac{1}{2d} \Bigr)^{dR} \Bigr\}^N.
\end{align*}
By the choice of $M$, the right side above is smaller than or equal to
\begin{align*}
\Bigl\{ 1-(1-e^{-\kappa})\Bigl( \frac{1}{2d} \Bigr)^{dR} \Bigr\}^{Mn\|x\|_1/(12dR)}
\leq \Bigl( \frac{1}{4d}\mathbb{E}[e^{-\omega_G(0)}] \Bigr)^{n\|x\|_1}.
\end{align*}
With these observations, $\{ \mathbb{E}[e^{-\omega_G(0)}]/(4d) \}^{n\|x\|_1}$ is an upper bound on the second term of \eqref{eq:E1E2}, and the proof is complete.
\end{proof}
While Proposition~\ref{prop:E2} estimates the number of $\kappa$-good $R$-boxes which the simple random walk goes through many times, the next proposition bounds the total number of times the simple random walk goes through $\kappa$-good $R$-boxes.
\begin{prop}\label{prop:E3}
There exists a positive integer $A=A(d,\kappa,R) \geq 6$ such that for all large $n$,
\begin{align*}
&\mathbb{E} \otimes E^0\Biggl[ \exp\Biggl\{ -\sum_{k=0}^{H(nx)-1}\omega_F(S_k) \Biggr\}
\1{\{ H(nx)<\infty \} \cap \Cr{E3}(A,n)^c} \Biggr]\\
&\leq \Bigl( \frac{1}{4d}\mathbb{E}[e^{-\omega_G(0)}] \Bigr)^{n\|x\|_1},
\end{align*}
where $\Cl[E]{E3}(A,n)$ is the event that $(S_k)_{k=0}^{H(nx)}$ goes through $\kappa$-good $R$-boxes at most $A\lfloor n\|x\|_\infty/R \rfloor$ times.
\end{prop}
\begin{proof}
Take $A$ large enough to satisfy that $A \geq 6$ and
\begin{align*}
\Bigl\{ 1-(1-e^{-\kappa})\Bigl( \frac{1}{2d} \Bigr)^{dR} \Bigr\}^{A/(2dR)}
\leq \frac{1}{4d}\mathbb{E}[e^{-\omega_G(0)}].
\end{align*}
Then, the same argument as in the proof of Proposition~\ref{prop:E2} is applicable, and we obtain that for all large $n$,
\begin{align*}
&\mathbb{E} \otimes E^0\Biggl[ \exp\Biggl\{ -\sum_{k=0}^{H(nx)-1} \omega_F(S_k) \Biggr\}
\1{\{ H(nx)<\infty \} \cap \Cr{E3}(A,n)^c} \Biggr]\\
&\leq \Bigl\{ 1-(1-e^{-\kappa})\Bigl( \frac{1}{2d} \Bigr)^{dR} \Bigr\}^{A\lfloor n\|x\|_\infty/R \rfloor}
\leq \Bigl( \frac{1}{4d}\mathbb{E}[e^{-\omega_G(0)}] \Bigr)^{n\|x\|_1}.
\end{align*}
Hence, the proof is complete.
\end{proof}
Our third objective is to observe that with high probability, the simple random walk does not stay in any $\kappa$-good $R$-box for a long time.
To this end, we set $L_B=L_B(d,R):=BR^{2d}$ and $\mathcal{R}_k:=\{ S_j:0 \leq j \leq k \}$ for $B,k \in \mathbb{N}$, and begin by proving the following lemma.
\begin{lem}\label{lem:range}
We have
\begin{align*}
\lim_{B \to \infty} \max_{z \in \Lambda_R(0)} P^z(\mathcal{R}_{L_B} \subset \Lambda_R(0))=0.
\end{align*}
\end{lem}
\begin{proof}
The same argument as in \cite[Lemma~{4.2}]{Kub20} tells us that there exists a constant $c$ such that for all large $k$,
\begin{align*}
P^0\biggl( \#\mathcal{R}_k<\frac{ck^{1/2}}{\log k} \biggr)
\leq \exp\biggl\{ -\frac{c\lfloor k^{1/2} \rfloor}{\log k} \biggr\}.
\end{align*}
Hence, if $B$ is large enough to have $B \geq R$ and $cB^{1/2}>(2d+1)\log B$, then
\begin{align*}
&\max_{z \in \Lambda_R(0)} P^z(\mathcal{R}_{L_B} \subset \Lambda_R(0))\\
&= \max_{z \in \Lambda_R(0)} P^0(\mathcal{R}_{L_B} \subset \Lambda_R(0)-z)
\leq P^0(\#\mathcal{R}_{L_B} \leq R^d)\\
&\leq P^0\biggl( \#\mathcal{R}_{L_B}<\frac{cL_B^{1/2}}{\log L_B} \biggr)
\leq \exp\biggl\{ -\frac{c\lfloor L_B^{1/2} \rfloor}{\log L_B} \biggr\}.
\end{align*}
The rightmost side converges to zero as $B \to \infty$, and the lemma follows.
\end{proof}
After the preparation above, the next proposition gives our desired conclusion for the staying times in $\kappa$-good $R$-boxes.
\begin{prop}\label{prop:E4}
There exists $B=B(d,R,A) \in \mathbb{N}$ such that for all large $n$,
\begin{align*}
\P \otimes P^0(\{ H(nx)<\infty\} \cap \Cr{E4}(B,n)^c)
\leq 3\Bigl( \frac{1}{4d}\mathbb{E}[e^{-\omega_G(0)}] \Bigr)^{n\|x\|_1},
\end{align*}
where $\Cl[E]{E4}(B,n)$ is the event that
\begin{align*}
\sum_{\substack{i \geq 1\\ \tau_i \leq H(nx)}}\1{\{ \tau_i-\sigma_i \leq L_B \}}
\geq (1-A^{-2}) \times \#\{ i \geq 1:\tau_i \leq H(nx) \}.
\end{align*}
\end{prop}
\begin{proof}
Let $n$ be a sufficiently large integer.
For simplicity of notation, write $\delta:=1-A^{-2}$ and
\begin{align*}
N:=\biggl\lceil \frac{1}{2} \biggl\lfloor \frac{n\|x\|_\infty}{R} \biggr\rfloor \biggr\rceil-1
\geq \frac{n\|x\|_1}{2dR}-2.
\end{align*}
In addition, thanks to Proposition~\ref{prop:E1}, we may restrict our attention to the event $\Cr{E1}(R,n)$.
Then, the simple random walk goes through $\kappa$-good $R$-boxes at least $N$ times before hitting $nx$, and the union bound shows that for any $a>0$,
\begin{align*}
&P^0(\{ H(nx)<\infty\} \cap \Cr{E4}(B,n)^c)\\
&\leq \sum_{\ell=N}^\infty P^0\Biggl( \sum_{i=1}^\ell \1{\{ \tau_i-\sigma_i \leq L_B \}}<\delta\ell
\text{ and } \sigma_i<\infty,\, 1 \leq i \leq \ell \Biggr)\\
&\leq \sum_{\ell=N}^\infty e^{a\delta\ell}
E^0\Biggl[ \prod_{i=1}^\ell \exp\bigl\{ -a\1{\{ \tau_i-\sigma_i \leq L_B \}} \bigr\} \1{\{ \sigma_i<\infty \}} \Biggr].
\end{align*}
To estimate the last expectations, set
\begin{align*}
r_B:=\min_{z \in \Lambda_R(0)}P^z(T_R(0) \leq L_B),
\end{align*}
where $T_R(0)$ is the exit time from the $R$-box $\Lambda_R(0)$ (see \eqref{eq:ET}).
From Lemma~\ref{lem:range}, $\lim_{B \to \infty} r_B=1$ holds, and hence $D(\delta\| r_B)$ goes to infinity as $B \to \infty$.
This enables us to take $B$ large enough to have $r_B>\delta$, $e^{-D(\delta\| r_B)} \leq 1/2$ and
\begin{align*}
\exp\biggl\{ -\frac{1}{4dR} D(\delta\| r_B) \biggr\} \leq \frac{1}{4d}\mathbb{E}[e^{-\omega_G(0)}].
\end{align*}
The same argument as in the proof of Proposition~\ref{prop:E2} works to obtain
\begin{align*}
E^0\Biggl[ \prod_{i=1}^\ell \exp\bigl\{ -a\1{\{ \tau_i-\sigma_i \leq L_B \}} \bigr\} \1{\{ \sigma_i<\infty \}} \Biggr]
\leq \{ 1-(1-e^{-a})r_B \}^\ell.
\end{align*}
It follows that setting $f(a):=-a\delta-\log\{ 1-(1-e^{-a})r_B \}$, one has for any $a>0$
\begin{align*}
P^0(\{ H(nx)<\infty\} \cap \Cr{E4}(B,n)^c)
\leq \sum_{\ell=N}^\infty e^{-\ell f(a)}.
\end{align*}
Note that the function $f(a)$ attains its maximum at the point
\begin{align*}
a_0:=\log\frac{r_B(1-\delta)}{\delta(1-r_B)}>0,
\end{align*}
and $f(a_0)=D(\delta\| r_B)$ holds.
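Indeed, writing $r:=r_B$ for brevity, one has $e^{-a_0}=\frac{\delta(1-r)}{r(1-\delta)}$ and hence $1-(1-e^{-a_0})r=\frac{1-r}{1-\delta}$, which leads to
\begin{align*}
f(a_0)
=\delta\log\frac{\delta(1-r)}{r(1-\delta)}+\log\frac{1-\delta}{1-r}
=\delta\log\frac{\delta}{r}+(1-\delta)\log\frac{1-\delta}{1-r}
=D(\delta\|r).
\end{align*}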
With these observations, on the event $\Cr{E1}(R,n)$,
\begin{align*}
P^0(\{ H(nx)<\infty\} \cap \Cr{E4}(B,n)^c)
\leq \sum_{\ell=N}^\infty e^{-\ell D(\delta\| r_B)}
= \frac{1}{1-e^{-D(\delta\| r_B)}} e^{-ND(\delta\| r_B)}.
\end{align*}
By the choice of $B$, the rightmost side is smaller than or equal to
\begin{align*}
2\exp\biggl\{ -\biggl( \frac{n\|x\|_1}{2dR}-2 \biggr)D(\delta\|r_B) \biggr\}
\leq 2\biggl( \frac{1}{4d}\mathbb{E}[e^{-\omega_G(0)}] \biggr)^{n\|x\|_1},
\end{align*}
and the proposition follows by combining this with Proposition~\ref{prop:E1}.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:strict_alyap} in the multi-dimensional case}\label{subsect:pf_anl_multi}
The aim of this subsection is to prove Theorem~\ref{thm:strict_alyap} in $d \geq 2$.
We fix $x \in \mathbb{Z}^d \setminus \{0\}$ and summarize the events appearing in the previous subsection for the convenience of the reader:
\begin{align*}
&\Cr{E1}(R,n)=\biggl\{
\begin{minipage}{9.8truecm}
$\sum_{v \in \mathcal{A}}\1{\{ \Lambda_R(v) \text{ is $\kappa$-good} \}} \geq \#\mathcal{A}/2$ holds for all lattice\\
animals $\mathcal{A}$ on $\mathbb{Z}^d$ containing $0$ with $\#\mathcal{A} \geq \lfloor n\|x\|_\infty/R \rfloor$
\end{minipage}
\biggr\},\\
&\Cr{E2}(M,n)=\Biggl\{
\sum_{v \in \mathcal{G}_R}
\1{\{ \text{$(S_k)_{k=0}^{H(nx)}$ goes through $\Lambda_R(v)$ at least $M$ times}\}}
\leq \frac{1}{3}\#(\mathcal{A}_R(nx) \cap \mathcal{G}_R)
\Biggr\},\\
&\Cr{E3}(A,n)=\Bigl\{
\text{$(S_k)_{k=0}^{H(nx)}$ goes through $\kappa$-good $R$-boxes
at most $A\lfloor n\|x\|_\infty/R \rfloor$ times}
\Bigr\},\\
&\Cr{E4}(B,n)=\Biggl\{
\sum_{\substack{i \geq 1\\ \tau_i \leq H(nx)}}\1{\{ \tau_i-\sigma_i \leq L_B \}}
\geq (1-A^{-2}) \times \#\{ i \geq 1:\tau_i \leq H(nx) \} \Biggr\},
\end{align*}
where $\kappa$, $R=R(d,\kappa)$, $M=M(d,\kappa,R)$, $A=A(d,\kappa,R) \geq 6$ and $B=B(d,R,A)$ are the constants chosen in Lemma~\ref{lem:kappa}, Propositions~\ref{prop:E1}, \ref{prop:E2}, \ref{prop:E3} and \ref{prop:E4}, respectively.
For simplicity of notation, write
\begin{align*}
\mathcal{E}'(n):=\Cr{E1}(R,n) \cap \Cr{E2}(M,n) \cap \Cr{E3}(A,n) \cap \Cr{E4}(B,n).
\end{align*}
Our first task is to prove that if $n$ is large enough, then $\P \otimes P^0 \textrm{-} \textrm{a.s.}$ on the event $\{ H(nx)<\infty \} \cap \mathcal{E}'(n)$,
\begin{align}\label{eq:l_bound}
\#\{ z \in \mathbb{Z}^d:1 \leq \ell_z(H(nx)) \leq ML_B \} \geq \frac{n\|x\|_1}{12dR}.
\end{align}
To this end, set
\begin{align*}
&V_1:=\bigl\{ v \in \mathcal{A}_R(nx) \cap \mathcal{G}_R:
\text{$(S_k)_{k=0}^{H(nx)}$ goes through $\Lambda_R(v)$ at most $M$ times}
\bigr\},\\
&V_2:=\biggl\{ v \in \mathcal{A}_R(nx) \cap \mathcal{G}_R:
\begin{minipage}{6truecm}
$(S_k)_{k=0}^{H(nx)}$ exits from $\Lambda_R(v)$ within\\
time $L_B$ each time it visits $\Lambda_R(v)$
\end{minipage}
\biggr\},
\end{align*}
and consider the cardinality
\begin{align*}
\mathcal{N}(n):=\#(V_1 \cap V_2).
\end{align*}
Note that for $v \in V_1 \cap V_2$, $\Lambda_R(v)$ is $\kappa$-good and contains at least one site $z$ of $\mathbb{Z}^d$ with $1 \leq \ell_z(H(nx)) \leq ML_B$.
Hence, $\mathcal{N}(n)$ is a lower bound on the left side of \eqref{eq:l_bound}.
Therefore, for \eqref{eq:l_bound}, it suffices to prove that $\mathcal{N}(n)$ is bounded from below by $n\|x\|_1/(12dR)$.
Note that $\P \otimes P^0 \textrm{-} \textrm{a.s.}$ on the event $\Cr{E1}(R,n) \cap \Cr{E2}(M,n)$,
\begin{align*}
\#V_1
&\geq \#(\mathcal{A}_R(nx) \cap \mathcal{G}_R)
-\sum_{v \in \mathcal{G}_R}\1{\{ \text{$(S_k)_{k=0}^{H(nx)}$ goes through $\Lambda_R(v)$ at least $M$ times}\}}\\
&\geq \frac{2}{3}\#(\mathcal{A}_R(nx) \cap \mathcal{G}_R)
\geq \frac{1}{3} \#\mathcal{A}_R(nx).
\end{align*}
Moreover, $\P \otimes P^0 \textrm{-} \textrm{a.s.}$ on the event $\Cr{E3}(A,n) \cap \Cr{E4}(B,n)$,
\begin{align*}
\sum_{\substack{i \geq 1\\ \tau_i \leq H(nx)}}\1{\{ \tau_i-\sigma_i>L_B \}}
&\leq A^{-2} \times \#\{ i \geq 1:\tau_i \leq H(nx) \}\\
&\leq \frac{1}{A} \biggl\lfloor \frac{n\|x\|_\infty}{R} \biggr\rfloor,
\end{align*}
which guarantees that there exist at most $\lfloor \lfloor n\|x\|_\infty/R \rfloor/A \rfloor$ $\kappa$-good $R$-boxes $\Lambda_R(v)$ such that $(S_k)_{k=0}^{H(nx)}$ stays in $\Lambda_R(v)$ for more than time $L_B$ at some visit to $\Lambda_R(v)$.
Therefore, $\P \otimes P^0 \textrm{-} \textrm{a.s.}$ on the event $\Cr{E3}(A,n) \cap \Cr{E4}(B,n)$,
\begin{align*}
\# V_2^c \leq \frac{1}{A} \biggl\lfloor \frac{n\|x\|_\infty}{R} \biggr\rfloor
\leq \frac{1}{A} \#\mathcal{A}_R(nx).
\end{align*}
The estimates for $\#V_1$ and $\#V_2^c$ above together with $A \geq 6$ imply that if $n$ is large enough, then $\P \otimes P^0 \textrm{-} \textrm{a.s.}$ on the event $\{ H(nx)<\infty \} \cap \mathcal{E}'(n)$,
\begin{align*}
\mathcal{N}(n)
\geq \#V_1-\# V_2^c
\geq \biggl( \frac{1}{3}-\frac{1}{A} \biggr)\#\mathcal{A}_R(nx)
\geq \frac{n\|x\|_1}{12dR},
\end{align*}
and \eqref{eq:l_bound} follows.
We next prove that there exists $\rho_0 \in (0,1)$ (which is independent of $x$) such that for all large $n$,
\begin{align}\label{eq:rho}
\begin{split}
&\mathbb{E} \otimes E^0\Biggl[ \exp\Biggl\{ -\sum_{k=0}^{H(nx)-1}\omega_F(S_k) \Biggr\}
\1{\{ H(nx)<\infty \} \cap \mathcal{E}'(n)} \Biggr]\\
&\leq \rho_0^{n\|x\|_1}\mathbb{E}[e(0,nx,\omega_G)].
\end{split}
\end{align}
To do this, let $\mathcal{L}(n)$ be the event that \eqref{eq:l_bound} holds.
The bound \eqref{eq:l_bound} tells us that if $n$ is large enough, then the left side of \eqref{eq:rho} is bounded from above by
\begin{align*}
&\mathbb{E} \otimes E^0\Biggl[ \exp\Biggl\{ -\sum_{k=0}^{H(nx)-1}\omega_F(S_k) \Biggr\}
\1{\{ H(nx)<\infty \} \cap \mathcal{L}(n)} \Biggr]\\
&= E^0\Biggl[ \prod_{\substack{z \in \mathbb{Z}^d\\ \ell_z(H(nx)) \geq 1}} \mathbb{E}[e^{-\ell_z(H(nx))\omega_F(0)}]
\1{\{ H(nx)<\infty \} \cap \mathcal{L}(n)} \Biggr]\\
&\leq E^0\Biggl[ \prod_{\substack{z \in \mathbb{Z}^d\\ 1 \leq \ell_z(H(nx)) \leq ML_B}}R_z(nx)
\times \prod_{\substack{z \in \mathbb{Z}^d\\ \ell_z(H(nx)) \geq 1}}
\mathbb{E}[e^{-\ell_z(H(nx))\omega_G(0)}] \1{\{ H(nx)<\infty \} \cap \mathcal{L}(n)} \Biggr],
\end{align*}
where for $y,z \in \mathbb{Z}^d$,
\begin{align*}
R_z(y)
:= \frac{\mathbb{E}[e^{-\ell_z(H(y))\omega_F(0)}]}{\mathbb{E}[e^{-\ell_z(H(y))\omega_G(0)}]}
= \frac{\int_0^1e^{-\ell_z(H(y))F^{-1}(s)}\,ds}{\int_0^1e^{-\ell_z(H(y))G^{-1}(s)}\,ds}
\in [0,1].
\end{align*}
We use Lemma~\ref{lem:pseudo}-\eqref{item:pseudo_H} to estimate the denominator in the definition of $R_z(nx)$ as follows: For $z \in \mathbb{Z}^d$ with $1 \leq \ell_z(H(nx)) \leq ML_B$,
\begin{align*}
\int_0^1e^{-\ell_z(H(nx))G^{-1}(s)}\,ds
&= \int_0^1e^{-\ell_z(H(nx))F^{-1}(s)} \times e^{\ell_z(H(nx))(F^{-1}(s)-G^{-1}(s))}\,ds\\
&\geq \int_0^1e^{-\ell_z(H(nx))F^{-1}(s)}\,ds+(e^{\eta_0}-1)\int_\mathcal{H}e^{-\ell_z(H(nx))F^{-1}(s)}\,ds\\
&\geq \int_0^1e^{-\ell_z(H(nx))F^{-1}(s)}\,ds+a,
\end{align*}
where
\begin{align*}
a:=|\mathcal{H}|(e^{\eta_0}-1)\exp\Bigl\{ -ML_B\sup_{s \in \mathcal{H}}F^{-1}(s) \Bigr\} \in (0,\infty).
\end{align*}
Since the function $f(t):=t/(t+a)$ is increasing in $t \geq 0$ and $\ell_z(H(nx)) \geq 1$, one has for $z \in \mathbb{Z}^d$ with $1 \leq \ell_z(H(nx)) \leq ML_B$,
\begin{align*}
R_z(nx)
\leq f\biggl( \int_0^1e^{-\ell_z(H(nx))F^{-1}(s)}\,ds \biggr)
\leq f\biggl( \int_0^1e^{-F^{-1}(s)}\,ds \biggr)
=: \rho \in (0,1).
\end{align*}
Accordingly, the left side of \eqref{eq:rho} is not greater than
\begin{align*}
&E^0\Biggl[ \rho^{\#\{z \in \mathbb{Z}^d:1 \leq \ell_z(H(nx)) \leq ML_B \}}
\times \prod_{\substack{z \in \mathbb{Z}^d\\ \ell_z(H(nx)) \geq 1}}
\mathbb{E}[e^{-\ell_z(H(nx))\omega_G(0)}] \1{\{ H(nx)<\infty \} \cap \mathcal{L}(n)} \Biggr]\\
&\leq \rho^{n\|x\|_1/(12dR)} \times
E^0\Biggl[ \prod_{\substack{z \in \mathbb{Z}^d\\ \ell_z(H(nx)) \geq 1}}
\mathbb{E}[e^{-\ell_z(H(nx))\omega_G(0)}] \1{\{ H(nx)<\infty \}} \Biggr]\\
&= \bigl( \rho^{1/(12dR)} \bigr)^{n\|x\|_1} \times \mathbb{E}[e(0,nx,\omega_G)],
\end{align*}
and we get \eqref{eq:rho} by taking $\rho_0:=\rho^{1/(12dR)}$.
Let us finally complete the proof of Theorem~\ref{thm:strict_alyap} for $d \geq 2$.
For a given $x \in \mathbb{Z}^d \setminus \{0\}$, Propositions~\ref{prop:E1}, \ref{prop:E2}, \ref{prop:E3} and \ref{prop:E4} and \eqref{eq:rho} imply that for all large $n$,
\begin{align*}
\mathbb{E}[e(0,nx,\omega_F)]
\leq 7\Bigl( \frac{1}{4d}\mathbb{E}[e^{-\omega_G(0)}] \Bigr)^{n\|x\|_1}+\rho_0^{n\|x\|_1} \times \mathbb{E}[e(0,nx,\omega_G)].
\end{align*}
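Note that forcing the simple random walk to follow a fixed shortest lattice path from $0$ to $nx$, which consists of $n\|x\|_1$ steps and visits pairwise distinct sites, yields
\begin{align*}
\mathbb{E}[e(0,nx,\omega_G)] \geq \Bigl( \frac{1}{2d}\mathbb{E}[e^{-\omega_G(0)}] \Bigr)^{n\|x\|_1}.
\end{align*}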
Since $\mathbb{E}[e(0,nx,\omega_G)] \geq \{ \mathbb{E}[e^{-\omega_G(0)}]/(2d) \}^{n\|x\|_1}$, we have for all large $n$,
\begin{align*}
\mathbb{E}[e(0,nx,\omega_F)]
\leq 2\max\Bigl\{ 7\Bigl( \frac{1}{2} \Bigr)^{n\|x\|_1}, \rho_0^{n\|x\|_1}\Bigr\} \times \mathbb{E}[e(0,nx,\omega_G)],
\end{align*}
or equivalently
\begin{align*}
b_F(0,nx) \geq b_G(0,nx)-\log 2-\log\max\biggl\{ 7\Bigl( \frac{1}{2} \Bigr)^{n\|x\|_1}, \rho_0^{n\|x\|_1}\biggr\}.
\end{align*}
Therefore, dividing by $n$ and letting $n \to \infty$ proves that for any $x \in \mathbb{Z}^d \setminus \{0\}$,
\begin{align*}
\beta_F(x) \geq \beta_G(x)+\|x\|_1 \min\{ \log 2,-\log\rho_0\}.
\end{align*}
Since $\beta_F(\cdot)$ and $\beta_G(\cdot)$ are norms on $\mathbb{R}^d$ and the constant $\rho_0$ is independent of $x$, we can easily extend the above inequality to $x \in \mathbb{R}^d \setminus \{0\}$, and the proof is complete.\qed
\subsection{Proof of Theorem~\ref{thm:strict_alyap} in the one-dimensional case}\label{subsect:pf_anl_one}
Let $d=1$ and assume \eqref{eq:add_a} (i.e., $F(0)<e^{-\beta_G(1)}$).
In this case, the proof of Theorem~\ref{thm:strict_alyap} is simpler than the case $d \geq 2$.
Since $\beta_F(\cdot)$ and $\beta_G(\cdot)$ are norms on $\mathbb{R}$, it suffices to prove that there exists a constant $\Cl{d=1}$ such that
\begin{align}\label{eq:beta}
\beta_F(1)-\beta_G(1) \geq \Cr{d=1}.
\end{align}
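Indeed, since $\beta_F(\cdot)$ and $\beta_G(\cdot)$ are norms on $\mathbb{R}$, we have $\beta_F(x)=|x|\beta_F(1)$ and $\beta_G(x)=|x|\beta_G(1)$, so that \eqref{eq:beta} immediately yields
\begin{align*}
\beta_F(x) \geq \beta_G(x)+\Cr{d=1}|x|,\qquad x \in \mathbb{R}.
\end{align*}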
To prove \eqref{eq:beta}, we first prepare some notation and a lemma.
Due to the assumption~\eqref{eq:add_a}, it is possible to take $\delta \in (0,1)$ such that
\begin{align}\label{eq:F}
F(0)^{1-\delta}<(1-\delta)e^{-\beta_G(1)}.
\end{align}
Then, for $K,n \in \mathbb{N}$, let $\mathcal{L}'(K,n)$ be the event that
\begin{align*}
\#\{ z \in \mathbb{Z}: 1 \leq \ell_z(H(n)) \leq K \} \geq \delta n.
\end{align*}
This event plays a role similar to the event $\mathcal{L}(n)$ in the previous subsection, and the next lemma guarantees that the complement of $\mathcal{L}'(K,n)$ is harmless to the one-dimensional annealed comparison.
\begin{lem}\label{lem:L'}
There exists $K=K(\delta) \in \mathbb{N}$ such that for all $n \geq 1$,
\begin{align}\label{eq:L'}
\mathbb{E} \otimes E^0\Biggl[ \exp\Biggl\{ -\sum_{k=0}^{H(n)-1}\omega_F(S_k) \Biggr\} \1{\mathcal{L}'(K,n)^c} \Biggr]
\leq (1-\delta)^ne^{-n\beta_G(1)}.
\end{align}
\end{lem}
\begin{proof}
Note that $P^0 \textrm{-} \textrm{a.s.}$ on the event $\mathcal{L}'(K,n)^c$, the number of sites $z$ such that $\ell_z(H(n))>K$ is bigger than $(1-\delta)n$.
Hence, the left side of \eqref{eq:L'} is smaller than or equal to
\begin{align*}
E^0\Biggl[ \prod_{\substack{z \in \mathbb{Z}\\ \ell_z(H(n))>K}} \mathbb{E}[e^{-\ell_z(H(n))\omega_F(0)}] \1{\mathcal{L}'(K,n)^c} \Biggr]
\leq \mathbb{E}[e^{-K\omega_F(0)}]^{(1-\delta)n}.
\end{align*}
Lebesgue's dominated convergence theorem together with \eqref{eq:F} shows that
\begin{align*}
\lim_{K \to \infty} \mathbb{E}[e^{-K\omega_F(0)}]^{1-\delta}
= F(0)^{1-\delta}
< (1-\delta)e^{-\beta_G(1)}.
\end{align*}
Therefore, if $K$ is large enough, then the left side of \eqref{eq:L'} is bounded from above by $(1-\delta)^ne^{-n\beta_G(1)}$, and the proof is complete.
\end{proof}
We move to the proof of Theorem~\ref{thm:strict_alyap} in $d=1$.
Lemma~\ref{lem:L'} implies that
\begin{align}\label{eq:conclusion}
\mathbb{E}[e(0,n,\omega_F)]
\leq (1-\delta)^ne^{-n\beta_G(1)}
+\mathbb{E} \otimes E^0\Biggl[ \exp\Biggl\{ -\sum_{k=0}^{H(n)-1}\omega_F(S_k) \Biggr\} \1{\mathcal{L}'(K,n)} \Biggr].
\end{align}
To estimate the right side, we follow the argument used to obtain \eqref{eq:rho}.
The second term in the right side of \eqref{eq:conclusion} is smaller than or equal to
\begin{align*}
E^0\Biggl[ \prod_{\substack{z \in \mathbb{Z}\\1 \leq \ell_z(H(n)) \leq K}} R_z(n)
\times \prod_{\substack{z \in \mathbb{Z}\\ \ell_z(H(n)) \geq 1}} \mathbb{E}[e^{-\ell_z(H(n))\omega_G(0)}] \1{\mathcal{L}'(K,n)} \Biggr].
\end{align*}
Note that for any $z \in \mathbb{Z}$ with $1 \leq \ell_z(H(n)) \leq K$,
\begin{align*}
R_z(n) \leq \frac{\int_0^1e^{-F^{-1}(s)}\,ds}{\int_0^1e^{-F^{-1}(s)}\,ds+a}=:\rho \in (0,1),
\end{align*}
where
\begin{align*}
a:=|\mathcal{H}|(e^{\eta_0}-1)\exp\Bigl\{ -K\sup_{s \in \mathcal{H}}F^{-1}(s) \Bigr\} \in (0,\infty).
\end{align*}
This, combined with the definition of the annealed Lyapunov exponent (see Proposition~\ref{prop:lyaps}), implies that the second term in the right side of \eqref{eq:conclusion} is bounded from above by
\begin{align*}
\rho^{\delta n} \times
E^0\Biggl[ \prod_{\substack{z \in \mathbb{Z}\\ \ell_z(H(n)) \geq 1}} \mathbb{E}[e^{-\ell_z(H(n))\omega_G(0)}] \1{\mathcal{L}'(K,n)} \Biggr]
\leq \rho^{\delta n} \times \mathbb{E}[e(0,n,\omega_G)]
\leq \rho^{\delta n} e^{-n\beta_G(1)}.
\end{align*}
Hence, one has
\begin{align*}
\mathbb{E}[e(0,n,\omega_F)]
&\leq (1-\delta)^ne^{-n\beta_G(1)}+\rho^{\delta n}e^{-n\beta_G(1)}\\
&\leq 2 \max\{ (1-\delta)^n,\rho^{\delta n} \}e^{-n\beta_G(1)},
\end{align*}
which proves that
\begin{align*}
\frac{1}{n}b_F(0,n) \geq \beta_G(1)-\frac{1}{n}\log 2-\log\max\{ 1-\delta,\rho^\delta \}.
\end{align*}
Consequently, \eqref{eq:beta} immediately follows by letting $n \to \infty$.\qed
\section{Strict inequalities for the rate functions}\label{sect:rf_strict}
This section is devoted to the proof of Corollary~\ref{cor:strict_rate}.
\begin{proof}[\bf Proof of Corollary~\ref{cor:strict_rate}]
Let $\phi$ be a distribution function on $[0,\infty)$.
Recall that $\alpha_\phi(\lambda,\cdot)$ and $\beta_\phi(\lambda,\cdot)$ are the quenched and annealed Lyapunov exponents associated with the potential $\omega_\phi+\lambda=(\omega_\phi(x)+\lambda)_{x \in \mathbb{Z}^d}$, respectively.
For each $x \in \mathbb{R}^d$, we introduce the quantities
\begin{align*}
\lambda_\phi^\mathrm{qu}(x):=\inf\{ \lambda>0:\partial_- \alpha_\phi(\lambda,x) \leq 1 \}
\end{align*}
and
\begin{align*}
\lambda_\phi^\mathrm{an}(x):=\inf\{ \lambda>0:\partial_-\beta_\phi(\lambda,x) \leq 1 \},
\end{align*}
where $\partial_-\alpha_\phi(\lambda,x)$ and $\partial_-\beta_\phi(\lambda,x)$ are the left derivatives of $\alpha_\phi(\lambda,x)$ and $\beta_\phi(\lambda,x)$ with respect to $\lambda$, respectively.
Clearly, $\lambda_\phi^\mathrm{qu}(x)$ (resp.~$\lambda_\phi^\mathrm{an}(x)$) attains the supremum in the definition of $I_\phi(x)$ (resp.~$J_\phi(x)$).
Then, the following lemma is the key to proving the corollary.
\begin{lem}\label{lem:finiteness}
We have for any $x \in \mathbb{R}^d$ with $0<\|x\|_1<1$,
\begin{align}\label{eq:finite_a}
\limsup_{\lambda \to \infty} \frac{\alpha_\phi(\lambda,x)}{\lambda}<1
\end{align}
and
\begin{align}\label{eq:finite_b}
\limsup_{\lambda \to \infty} \frac{\beta_\phi(\lambda,x)}{\lambda}<1.
\end{align}
\end{lem}
The proof of the lemma is postponed until the end of the section, and we shall complete the proof of the corollary.
Fix $x \in \mathbb{R}^d$ with $0<\|x\|_1<1$.
As mentioned above Proposition~\ref{prop:ldp}, $\alpha_G(\lambda,x)$ and $\beta_G(\lambda,x)$ are concave increasing in $\lambda$.
This together with Lemma~\ref{lem:finiteness} implies that there exists $\lambda_0 \in (0,\infty)$ such that
\begin{align*}
\partial_- \alpha_G(\lambda_0,x) \leq \frac{\alpha_G(\lambda_0,x)}{\lambda_0}<1
\end{align*}
and
\begin{align*}
\partial_- \beta_G(\lambda_0,x) \leq \frac{\beta_G(\lambda_0,x)}{\lambda_0}<1,
\end{align*}
which proves that $\lambda_G^\mathrm{qu}(x) \vee \lambda_G^\mathrm{an}(x) \leq \lambda_0<\infty$.
Therefore, Theorems~\ref{thm:strict_qlyap} and \ref{thm:strict_alyap} yield that there exist constants $c$ and $c'$ (which depend on $\lambda_G^\mathrm{qu}(x)$ and $\lambda_G^\mathrm{an}(x)$, respectively) such that
\begin{align*}
I_F(x)-I_G(x)
\geq \alpha_F(\lambda_G^\mathrm{qu}(x),x)-\alpha_G(\lambda_G^\mathrm{qu}(x),x)
\geq c\|x\|_1>0
\end{align*}
and
\begin{align*}
J_F(x)-J_G(x)
\geq \beta_F(\lambda_G^\mathrm{an}(x),x)-\beta_G(\lambda_G^\mathrm{an}(x),x)
\geq c'\|x\|_1>0,
\end{align*}
and the corollary follows.
\end{proof}
We close this section with the proof of Lemma~\ref{lem:finiteness}.
\begin{proof}[\bf Proof of Lemma~\ref{lem:finiteness}]
Fix $x \in \mathbb{R}^d$ with $0<\|x\|_1<1$.
If $\mathbb{E}[\omega_\phi(0)]<\infty$ holds, then Proposition~\ref{prop:lyaps} tells us that
\begin{align*}
\alpha_\phi(\lambda,x) \leq \|x\|_1(\lambda+\log(2d)+\mathbb{E}[\omega_\phi(0)]).
\end{align*}
Since we have assumed the finiteness of $\mathbb{E}[\omega_\phi(0)]$ in the one-dimensional quenched situation (see assumption~(Qu) above Proposition~\ref{prop:lyaps}), \eqref{eq:finite_a} is valid for $d=1$.
Proposition~\ref{prop:lyaps} also implies that
\begin{align*}
\beta_\phi(\lambda,x) \leq \|x\|_1\bigl( \lambda+\log(2d)-\log\mathbb{E}[e^{-\omega_\phi(0)}] \bigr),
\end{align*}
and \eqref{eq:finite_b} holds for all $d \geq 1$.
It remains to prove \eqref{eq:finite_a} for $d \geq 2$ (because (Qu) does not guarantee the finiteness of $\mathbb{E}[\omega_\phi(0)]$ for $d \geq 2$).
Although the proof is essentially the same as above, we need some more work to carry it out.
Let $M>0$ and consider the independent Bernoulli site percolation $\eta_M$ on $\mathbb{Z}^d$ defined as
\begin{align*}
\eta_M=(\eta_M(z))_{z \in \mathbb{Z}^d}:=(\1{\{ \omega_\phi(z) \leq M \}})_{z \in \mathbb{Z}^d}.
\end{align*}
Then, \emph{$M$-clusters} of the configuration $\eta_M$ are the connected components of the graph $\{ z \in \mathbb{Z}^d: \eta_M(z)=1 \}$ with the usual adjacency relation on $\mathbb{Z}^d$: $u,v \in \mathbb{Z}^d$ are adjacent if $\|u-v\|_1=1$.
It is well known that there exists $p_c=p_c(d) \in (0,1)$ such that if $\P(\eta_M(0)=1)>p_c$ holds, then $\P$-a.s., we have a unique infinite $M$-cluster, say $\mathcal{C}_{\infty,M}$, with $\P(0 \in \mathcal{C}_{\infty,M})>0$ (see \cite[Theorems~{1.10} and 8.1]{Gri99_book} for instance).
In addition, define the \emph{chemical distance} $d_M(u,v)$ between $u$ and $v$ as the minimal length of a lattice path from $u$ to $v$ which uses only sites $z$ with $\eta_M(z)=1$.
Note that the chemical distance $d_M(u,v)$ may be equal to infinity if $u$ or $v$ is not in $\mathcal{C}_{\infty,M}$.
To avoid this, for each $z \in \mathbb{Z}^d$, let us consider the closest point to $z$ in $\mathcal{C}_{\infty,M}$ for the $\ell^1$-norm, with a deterministic rule to break ties, and denote it by $\tilde{z}^M$.
From \cite[Lemma~{4.1}]{GarMar10}, there exists a norm $\mu_M(\cdot)$ on $\mathbb{R}^d$ such that for each $y \in \mathbb{Z}^d$,
\begin{align}\label{eq:GM}
\lim_{n \to \infty} \frac{1}{n}d_M(\tilde{0}^M,\tilde{ny}^M)=\mu_M(y),\qquad \text{$\P$-a.s.~and in $L^1(\P)$}.
\end{align}
Moreover, since
\begin{align*}
\lim_{M \to \infty}\P(\eta_M(0)=1)=\lim_{M \to \infty} \phi(M)=1,
\end{align*}
we can apply \cite[Theorem~{8.8}]{Gri99_book} and \cite[Corollary~{1.5}]{GarMar07} (or \cite[Theorem~{1.2}]{GarMarProThe17}) to obtain that
\begin{align*}
\lim_{M \to \infty} \P(0 \in \mathcal{C}_{\infty,M})=1
\end{align*}
and
\begin{align*}
\lim_{M \to \infty} \sup_{\|x\|_1 \leq 1}|\mu_M(x)-\|x\|_1|=0.
\end{align*}
Fix $x \in \mathbb{R}^d$ with $0<\|x\|_1<1$ and take $M>0$ large enough to have $\P(0 \in \mathcal{C}_{\infty,M})>1/2$ and $\mu_M(x)<1$.
Note that for all $\lambda>0$, $y \in \mathbb{Z}^d$ and $n \in \mathbb{N}$,
\begin{align*}
0<1-2\P(0 \not\in \mathcal{C}_{\infty,M})
&\leq \P(0,ny \in \mathcal{C}_{\infty,M})\\
&\leq \P\bigl( a(0,ny,\omega_\phi+\lambda) \leq d_M(\tilde{0}^M,\tilde{ny}^M)(\lambda+\log(2d)+M) \bigr).
\end{align*}
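Here, the last inequality follows from the fact that on the event $\{ 0,ny \in \mathcal{C}_{\infty,M} \}$, one has $\tilde{0}^M=0$ and $\tilde{ny}^M=ny$, and forcing the simple random walk to follow a lattice path from $0$ to $ny$ of length $d_M(0,ny)$ which uses only sites $z$ with $\omega_\phi(z) \leq M$ yields
\begin{align*}
e(0,ny,\omega_\phi+\lambda) \geq \Bigl( \frac{1}{2d}e^{-(\lambda+M)} \Bigr)^{d_M(0,ny)},
\end{align*}
or equivalently $a(0,ny,\omega_\phi+\lambda) \leq d_M(0,ny)(\lambda+\log(2d)+M)$.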
Hence, Proposition~\ref{prop:lyaps} and \eqref{eq:GM} imply that for all $\lambda>0$ and $y \in \mathbb{Z}^d$,
\begin{align*}
\frac{\alpha_\phi(\lambda,y)}{\lambda} \leq \frac{1}{\lambda}\mu_M(y)(\lambda+\log(2d)+M).
\end{align*}
This inequality is also valid for all $y \in \mathbb{R}^d$ because $\alpha_\phi(\lambda,\cdot)$ and $\mu_M(\cdot)$ are norms on $\mathbb{R}^d$.
It follows that
\begin{align*}
\limsup_{\lambda \to \infty} \frac{\alpha_\phi(\lambda,x)}{\lambda} \leq \mu_M(x)<1,
\end{align*}
and \eqref{eq:finite_a} is also valid for $d \geq 2$.
\end{proof}
\section{Discussion on the one-dimensional annealed situation}\label{sect:one-dim}
In the statements of Theorems~\ref{thm:strict_qlyap} and \ref{thm:strict_alyap}, additional conditions (Qu) and \eqref{eq:add_a} are assumed for $d=1$.
Hence, we finally discuss comparisons for one-dimensional Lyapunov exponents and rate functions without (Qu) and \eqref{eq:add_a}.
Let us first comment on the one-dimensional quenched situation without (Qu) (i.e., $\mathbb{E}[\omega(0)]=\infty$).
In this situation, for all $x \in \mathbb{Z} \setminus \{0\}$,
\begin{align*}
\lim_{n \to \infty} \frac{1}{n}a(0,nx,\omega)=\infty \qquad \P \textrm{-} \textrm{a.s.}
\end{align*}
Indeed, we have for any $L>0$,
\begin{align*}
a(0,nx,\omega) \geq \sum_{k=0}^{nx-1}(\omega(k) \wedge L),
\end{align*}
and the law of large numbers yields that
\begin{align*}
\lim_{n \to \infty} \frac{1}{n}\sum_{k=0}^{nx-1}(\omega(k) \wedge L)
=|x| \times \mathbb{E}[\omega(0) \wedge L] \qquad \P \textrm{-} \textrm{a.s.}
\end{align*}
Since $\mathbb{E}[\omega(0)]=\infty$, we get the desired conclusion by letting $L \to \infty$.
Therefore, the quenched Lyapunov exponent does not exist in the sense of Proposition~\ref{prop:lyaps}, and we cannot also define the quenched rate function by using the quenched Lyapunov exponent.
Consequently, if $F$ and $G$ are distribution functions on $[0,\infty)$ and one of them does not satisfy (Qu), then the comparisons for $\alpha_F$, $\alpha_G$, $I_F$ and $I_G$ are not well-defined.
As long as we argue comparisons for the Lyapunov exponent and the rate function in the present setting, assumption~(Qu) is necessary in the one-dimensional quenched situation.
On the other hand, regardless of whether (Qu) holds, the annealed Lyapunov exponent is always well-defined in $d=1$.
Therefore, we can expect the one-dimensional annealed situation to differ from the quenched one.
In fact, the next proposition exhibits criteria for strict comparison of the one-dimensional annealed Lyapunov exponents.
\begin{prop}\label{prop:criteria}
Let $d=1$.
Then, the following results hold:
\begin{enumerate}
\item\label{item:criteria_equiv}
If $F \leq G$ and \eqref{eq:add_a} fails to hold (i.e., $F(0) \geq e^{-\beta_G(1)}$), then
\begin{align*}
-\log F(0)=\beta_F(1)=\beta_G(1)=-\log G(0).
\end{align*}
In particular, if $F$ strictly dominates $G$, then $F(0)<e^{-\beta_G(1)}$ is a necessary and sufficient condition
for $\beta_F(1)>\beta_G(1)$.
\item\label{item:criteria_gap}
If $F$ strictly dominates $G$ and $F(0)<G(0)$, then $\beta_F(1)>\beta_G(1)$ holds.
\item\label{item:criteria_zero}
If $F$ strictly dominates $G$ and $F(0)=0$, then $\beta_F(1)>\beta_G(1)$ holds.
\end{enumerate}
\end{prop}
\begin{proof}
We first prove part~\eqref{item:criteria_equiv}.
Assume that $F \leq G$ and $F(0) \geq e^{-\beta_G(1)}$.
For an arbitrary $\epsilon>0$,
\begin{align*}
\mathbb{E}[e(0,n,\omega_F)]
&\geq E^0\Bigl[ F(0)^{\#\{z \in \mathbb{Z}:\ell_z(H(n)) \geq 1 \}} \Bigr]\\
&\geq E^0\Bigl[ F(0)^{\#\{z \in \mathbb{Z}:\ell_z(H(n)) \geq 1 \}} \1{\{ H(n)<H(-\lceil \epsilon n \rceil) \}} \Bigr]\\
&\geq F(0)^{(1+\epsilon)n} P^0(H(n)<H(-\lceil \epsilon n \rceil)).
\end{align*}
An easy computation shows that
\begin{align*}
P^0(H(n)<H(-\lceil \epsilon n \rceil))
= \frac{\lceil \epsilon n \rceil}{n+\lceil \epsilon n \rceil}
\end{align*}
(see for instance \cite[(1.20)]{Law91_book}), and we have
\begin{align*}
\frac{1}{n}b_F(0,n) \leq -(1+\epsilon)\log F(0)-\frac{1}{n}\log\frac{\lceil \epsilon n \rceil}{n+\lceil \epsilon n \rceil}.
\end{align*}
Hence, letting $n \to \infty$ proves $\beta_F(1) \leq -(1+\epsilon)\log F(0)$.
Since $\epsilon$ is arbitrary, one has
\begin{align}\label{eq:key}
\beta_F(1) \leq -\log F(0).
\end{align}
This, combined with the assumption $F(0) \geq e^{-\beta_G(1)}$ and the fact that $\beta_F \geq \beta_G$, proves that
\begin{align*}
F(0) \geq e^{-\beta_G(1)} \geq e^{-\beta_F(1)} \geq F(0),
\end{align*}
which implies $\beta_F(1)=\beta_G(1)=-\log F(0)$.
Furthermore, since \eqref{eq:key} with $F$ replaced by $G$ is valid, we have
\begin{align*}
G(0) \geq F(0) \geq e^{-\beta_G(1)} \geq G(0),
\end{align*}
and $\beta_G(1)=-\log G(0)$ holds.
With these observations, the first assertion of \eqref{item:criteria_equiv} follows.
For the second assertion of part~\eqref{item:criteria_equiv}, assume that $F$ strictly dominates $G$.
If $F(0)<e^{-\beta_G(1)}$ holds, then $\beta_F(1)>\beta_G(1)$ follows from Theorem~\ref{thm:strict_alyap}.
Conversely, suppose that $\beta_F(1)>\beta_G(1)$ holds.
Then, the first assertion of part~\eqref{item:criteria_equiv} implies $F(0)<e^{-\beta_G(1)}$.
Therefore, the second assertion of part~\eqref{item:criteria_equiv} is proved.
We next prove part~\eqref{item:criteria_gap}.
Note that $-\log F(0)>-\log G(0)$ holds if $F(0)<G(0)$.
Hence, if $F(0) \geq e^{-\beta_G(1)}$ held, then the first assertion of part~\eqref{item:criteria_equiv} would give $-\log F(0)=-\log G(0)$, a contradiction, and we must have $F(0)<e^{-\beta_G(1)}$.
Therefore, Theorem~\ref{thm:strict_alyap} gives $\beta_F(1)>\beta_G(1)$, and part~\eqref{item:criteria_gap} follows.
Finally, part~\eqref{item:criteria_zero} is a direct consequence of Theorem~\ref{thm:strict_alyap}.
Indeed, since $\beta_G(1)$ is finite, $F(0)<e^{-\beta_G(1)}$ holds if $F(0)=0$.
Hence, Theorem~\ref{thm:strict_alyap} leads to $\beta_F(1)>\beta_G(1)$.
\end{proof}
Proposition~\ref{prop:criteria} guarantees the existence of a threshold for the coincidence of the one-dimensional annealed rate functions as follows.
\begin{cor}\label{cor:criteria_arate}
Let $d=1$.
Suppose that $F$ strictly dominates $G$ and \eqref{eq:add_a} fails to hold (i.e., $F(0) \geq e^{-\beta_G(1)}$).
Then, there exists a constant $v_0 \in (0,1)$ (which may depend on $F$ and $G$) such that
\begin{align}\label{eq:v}
J_F(x)-J_G(x)
\begin{cases}
>0, & \text{if } v_0<|x|<1,\\
=0, & \text{if } |x| \leq v_0.
\end{cases}
\end{align}
\end{cor}
\begin{proof}
Our proof starts with the observation that for any $x \in \mathbb{R}$ with $0<|x|<1$,
\begin{align}\label{eq:J_eqiv}
J_F(x)-J_G(x)>0 \,\Longleftrightarrow\,
\lambda_F^\mathrm{an}(x)>0 \text{ or } \lambda_G^\mathrm{an}(x)>0.
\end{align}
To this end, fix $x \in \mathbb{R}$ with $0<|x|<1$.
Note that $\lambda_G^\mathrm{an}(x)$ is finite as seen in the proof of Corollary~\ref{cor:strict_rate}.
We first treat the case where $\lambda_G^\mathrm{an}(x)>0$.
Let $\tilde{F}$ and $\tilde{G}$ be the distribution functions of $\omega_F(0)+\lambda_G^\mathrm{an}(x)$ and $\omega_G(0)+\lambda_G^\mathrm{an}(x)$, respectively.
Then, we have $\tilde{F}(0)=0$, and Proposition~\ref{prop:criteria}-\eqref{item:criteria_zero} implies that
\begin{align*}
J_F(x)-J_G(x)
&\geq \beta_F(\lambda_G^\mathrm{an}(x),x)-\beta_G(\lambda_G^\mathrm{an}(x),x)\\
&= \beta_{\tilde{F}}(x)-\beta_{\tilde{G}}(x)>0.
\end{align*}
Next, in the case where $\lambda_G^\mathrm{an}(x)=0$ but $\lambda_F^\mathrm{an}(x)>0$, there exists $\lambda'>0$ such that $\partial_- \beta_F(\lambda',x)>1$.
Then, since $\beta_F \geq \beta_G$ and $\beta_F(\lambda,x)$ is concave in $\lambda$, one has
\begin{align*}
J_F(x)-J_G(x)
&\geq \biggl( \frac{\beta_F(\lambda',x)-\beta_F(0,x)}{\lambda'}-1 \biggr) \lambda'\\
&\geq (\partial_- \beta_F(\lambda',x)-1)\lambda'>0.
\end{align*}
Finally consider the case where $\lambda_F^\mathrm{an}(x)=\lambda_G^\mathrm{an}(x)=0$.
Then, Proposition~\ref{prop:criteria}-\eqref{item:criteria_equiv} gives
\begin{align*}
J_F(x)-J_G(x)=\beta_F(x)-\beta_G(x)=0.
\end{align*}
With these observations, \eqref{eq:J_eqiv} immediately follows.
We now refer to the following result obtained by Kosygina--Mountford~\cite[Theorem~1.2]{KosMou12} in our setting:
If $\phi$ is a distribution function on $[0,\infty)$ satisfying that $\P(\omega_\phi(0)=0)<1$ and $\essinf \omega_\phi(0)=0$, then there exists a constant $v_\phi \in (0,1)$ such that
\begin{align}\label{eq:KM}
\partial_+\beta_\phi(0,1)=\frac{1}{v_\phi},
\end{align}
where $\partial_+\beta_\phi(\lambda,x)$ stands for the right derivative of $\beta_\phi(\lambda,x)$ with respect to $\lambda$.
Then, we prove that if $\phi$ is a distribution function on $[0,\infty)$ satisfying that $\P(\omega_\phi(0)=0)<1$ and $\essinf \omega_\phi(0)=0$, then for $x \in \mathbb{R}$,
\begin{align}\label{eq:anlam}
\lambda_\phi^\mathrm{an}(x)
\begin{cases}
>0, & \text{if } v_\phi<|x|<1,\\
=0, & \text{if } |x|<v_\phi.
\end{cases}
\end{align}
In the case where $v_\phi<|x|<1$, \eqref{eq:KM} implies that
\begin{align*}
\partial_+ \beta_\phi(0,x)
= |x| \times \partial_+ \beta_\phi(0,1)
= \frac{|x|}{v_\phi}>1.
\end{align*}
This means that there exists $\lambda_1>0$ such that
\begin{align*}
\frac{\beta_\phi(\lambda_1,x)-\beta_\phi(0,x)}{\lambda_1}>1.
\end{align*}
By the continuity of $\beta_\phi(\lambda,x)$ in $\lambda$, we can take $\lambda_2 \in (0,\lambda_1)$ such that
\begin{align*}
\frac{\beta_\phi(\lambda_1,x)-\beta_\phi(\lambda_2,x)}{\lambda_1-\lambda_2}>1.
\end{align*}
This, together with the concavity of $\beta_\phi(\lambda,x)$ in $\lambda$, proves
\begin{align*}
1<\frac{\beta_\phi(\lambda_1,x)-\beta_\phi(\lambda_2,x)}{\lambda_1-\lambda_2}
\leq \partial_-\beta_\phi(\lambda_2,x).
\end{align*}
Therefore, $\lambda_\phi^\mathrm{an}(x) \geq \lambda_2>0$ holds in the case where $v_\phi<|x|<1$.
On the other hand, if $|x|<v_\phi$ holds, then
\begin{align*}
\partial_+\beta_\phi(0,x) =\frac{|x|}{v_\phi}<1,
\end{align*}
and hence, by the concavity of $\beta_\phi(\lambda,x)$ in $\lambda$, we have $\partial_-\beta_\phi(\lambda,x) \leq \partial_+\beta_\phi(0,x)<1$ for all $\lambda>0$, which forces $\lambda_\phi^\mathrm{an}(x)=0$.
Therefore, \eqref{eq:anlam} is proved.
We are now in a position to prove Corollary~\ref{cor:criteria_arate}.
Note that since $F$ strictly dominates $G$, Lemma~\ref{lem:pseudo}-\eqref{item:pseudo_0} implies $\P(\omega_F(0)=0)<1$.
In addition, since \eqref{eq:add_a} fails to hold, we have
\begin{align*}
G(0) \geq F(0) \geq e^{-\beta_G(1)}>0,
\end{align*}
which proves that $\essinf \omega_F(0)=\essinf \omega_G(0)=0$.
Hence, \eqref{eq:anlam} holds for $F$.
Assume that $\P(\omega_G(0)=0)<1$.
Then, \eqref{eq:anlam} is also established for $G$.
It follows that $\lambda_F^\mathrm{an}(x)>0$ or $\lambda_G^\mathrm{an}(x)>0$ holds for $v_F \wedge v_G<|x|<1$, and $\lambda_F^\mathrm{an}(x)=\lambda_G^\mathrm{an}(x)=0$ holds for $|x|<v_F \wedge v_G$.
Therefore, \eqref{eq:J_eqiv} shows that
\begin{align*}
J_F(x)-J_G(x)
\begin{cases}
>0, & \text{if } v_F \wedge v_G<|x|<1,\\
=0, & \text{if } |x|<v_F \wedge v_G.
\end{cases}
\end{align*}
This is also valid for $|x|=v_F \wedge v_G$ because $J_F$ and $J_G$ are continuous on $[-1,1]$ (see the statement above Proposition~\ref{prop:ldp}).
Thus, in the case where $\P(\omega_G(0)=0)<1$, \eqref{eq:v} follows by taking $v_0:=v_F \wedge v_G$.
For the case where $\P(\omega_G(0)=0)=1$, taking $v_0:=v_F$ establishes \eqref{eq:v}.
Indeed, Proposition~\ref{prop:criteria}-\eqref{item:criteria_equiv} gives that for all $x \in \mathbb{R}$,
\begin{align*}
\beta_G(x)=-\log G(0)=0,
\end{align*}
which implies that $\lambda_G^\mathrm{an}(x)=0$ holds for all $x \in \mathbb{R}$.
This together with \eqref{eq:J_eqiv} and \eqref{eq:anlam} tells us that \eqref{eq:v} holds for $v_0=v_F$.
Consequently, we can find the desired constant $v_0$ in any case, and the proof is complete.
\end{proof}
As seen above, \eqref{eq:add_a} is an important condition for the strict comparison of the one-dimensional annealed Lyapunov exponents and rate functions.
Unfortunately, in the one-dimensional case, we do not know whether or not the simple random walk in random potentials always satisfies \eqref{eq:add_a}.
This problem is left as an interesting direction for future work.
\hypertarget{intro}{%
\section{Introduction}\label{intro}}
Data-driven decision making is a ubiquitous strategy in today's
marketplace and is becoming increasingly common amongst professional
sports organizations. From Major League Baseball's Moneyball
movement \citep{lewis03} to the Moreyball strategies employed by the National Basketball Association's Houston Rockets \citep{walsh19},
analytics are no longer the sole purview of the academy as teams try to
improve their performance by investigating the data. The general
consensus, however, is that the National Football League (NFL) lags
behind other professional sports leagues in their use of analytics
\citep{clark}. This does seem to be changing, as evidenced by a recent
hiring trend of data analysts to NFL teams \citep{loque19}, as well as
within the league office in New York.
In the NFL, perhaps the most widely debated research question regards
the decision of going for it on fourth down. This has also been the
subject of several academic articles \citep{romer2006firms,yam2019lost},
the New York Times ``4th down bot'' \citep{fdbot, causey15}, and a new calculator from sports analyst Ben Baldwin \citep{baldwin_athletic}. The consensus among researchers and analysts is NFL coaches tend to be too conservative in their fourth-down calls, often preferring to kick the football (punt or field goal attempt) when the data suggests they should pass or run the ball.
While the decision to go for it on fourth down is much discussed, there
are a plethora of other strategies a team may wish to investigate.
Potential strategies for deeper investigation range from the frequency
and type of plays run, the use of a team's (limited) timeouts during a
game, defensive alignment, and so on. The seemingly infinite
possibilities for NFL strategy evaluation made us wonder how one could
determine which strategies offer the best chance of winning. Some
work has been done on strategies such as passing versus running the
football. In the sole peer-reviewed article that we are aware of, \cite{levitt09} found NFL teams did not pass as much as they should.
\cite{hermsmeyer} similarly noted, in an article for the data journalism
website fivethirtyeight.com, that even though the NFL has transitioned
to become a more passing heavy league, teams should still pass more.
Apart from the passing versus rushing and fourth down decision making,
there is a lack of research regarding NFL strategies in the literature.
In this paper we present an R software package, \texttt{NFLSimulatoR},
and an analytically rigorous method for analyzing NFL strategies. Our
method consists of simulating strategies via the sampling of NFL
play-by-play datasets realized in previous seasons. This
simulation method is flexible and allows for the investigation of many
possible strategies and offers a tool for informed decision making with
respect to sport performance. We have embedded the simulation framework
into an open source software package to share the method with the
broader sports analytics community. The rest of the paper is outlined as
follows. In the next section we present the R software package we wrote
for simulating NFL strategies. Section 3 describes the use of the
software package for the two strategies we have discussed thus far:
fourth down decision making and passing versus rushing. Finally, we
offer some concluding thoughts about using (and contributing to) the
package moving forward and other discussions in the final section.
\hypertarget{sec:nfl_sim}{%
\section{NFLSimulatoR}\label{sec:nfl_sim}}
The ideas presented in this paper are, in part, inspired by a blog post
by Mike Lopez, currently the Director of Data and Analytics for the NFL,
in which he used a simulation-based approach to investigate a potential
overtime rule change in the NFL \citep{statsbylopez}. In contrast to the one-off solution presented on his blog, we provide a robust software platform for assessing NFL strategies in the \texttt{NFLSimulatoR} R package. Our desire is for the wider analytics community to use this package, extend our work, and study other strategies in an analytically sound manner.
The ideas embedded in \texttt{NFLSimulatoR} are simple, yet extremely
powerful. The key feature is that we rely on simulations of actual NFL
play-by-play data to evaluate potential strategies. We define a strategy
broadly as any set of principled decisions consistently made by an NFL
team during a game. An example, albeit possibly extreme, is for a team
to employ only passing plays rather than a mixture of passes and runs
while on offense. This is a simple strategy, but one we can nonetheless
examine using our package. To examine a particular strategy, we sample
plays satisfying the criteria of the strategy at hand. Going back to our
simplistic example, we would sample only passing plays if we wanted to
see what happened when a team only passes the football.
Sampling data to make estimates, inferences, or decisions about a larger
population is at the core of statistics and lends important rigor to our
method. In our package, we select probability samples according to a
simple random sample with replacement from our population of interest
(NFL play-by-play data) to produce unbiased and representative results.
An excellent resource for more on statistical sampling can be found in
\cite{lohr}.
The package relies on NFL play-by-play data available via the NFL's
Application Programming Interface. These datasets are accessible within
R using either the \texttt{nflscrapR} or the \texttt{nflfastR} R
packages \citep{scrapr,fastr} (or by downloading them directly from the nflscrapR-data or nflfastR-data websites \citep{yurko,baldwin}). The \texttt{NFLSimulatoR} package includes two functions, \texttt{download\_nflscrapr\_data()} and
\texttt{download\_nflfastr\_data()}, for directly downloading regular-,
pre-, or post-season NFL play-by-play data from either source for
several years, currently from 2009 - 2019. Each year contains
approximately 48,000 plays of data. In addition, we include a function
called \texttt{prep\_pbp\_data()} to eliminate extraneous information
and prepare the NFL data for use in \texttt{NFLSimulatoR} functions.
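For instance, a single season of play-by-play data might be pulled and prepared along the following lines; the function names are those described above, while the argument names \texttt{year} and \texttt{type} are illustrative assumptions rather than the definitive interface:

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{## Download one season of regular-season play-by-play data}
\CommentTok{## (the argument names shown here are illustrative assumptions)}
\NormalTok{pbp <- }\KeywordTok{download_nflfastr_data}\NormalTok{(year = 2019, type = }\StringTok{"regular"}\NormalTok{)}
\CommentTok{## Remove extraneous information and prepare the data for the}
\CommentTok{## NFLSimulatoR sampling functions}
\NormalTok{pbp <- }\KeywordTok{prep_pbp_data}\NormalTok{(pbp)}
\end{Highlighting}
\end{Shaded}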
Our package is built primarily on the function \texttt{sample\_play()}.
This function samples from NFL data according to a given strategy for a
particular down and distance. The strategy is passed to the function via
the \texttt{strategy} parameter. Down and distance information refer to
what down it is (1 - 4), how many yards are required for a first down,
and the yardline at which the play occurs (1 - 99). The down is passed
to the function via the \texttt{what\_down} parameter, the distance to
go is passed via the \texttt{yards\_to\_go} parameter, and the yardline
is passed via the \texttt{yards\_from\_own\_goal} parameter. Our
sampling is done randomly and so we are confident in the outcomes from
the simulations. However, some combinations of sampling parameters
(strategy, down, distance, yardline) rarely occur in an NFL game. For
example, it may be there are few or no plays where a team had the ball
on 3rd down, on the 47th yardline, with 15 yards to go for a first down,
and chose to run the ball. In such cases we widen our sampling range to
include plays from yardlines close the to the yardline of interest or
with one less yard to go for a first down (the user can also choose a
window to expand the yardline selection via the
\texttt{window\_yards\_from\_own\_goal} parameter). We have built
flexibility into the \texttt{sample\_play()} function so the user can
seamlessly implement it in their unique settings.
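As a concrete illustration, a single third-down play with two yards to go from a team's own 47 yardline could be drawn as follows; the name of the data argument (\texttt{play\_data}) and the exact spelling of the strategy label are placeholders rather than fixed package values:

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{## Sample one play for a given down, distance, and field position}
\CommentTok{## (data-argument name and strategy label are placeholders)}
\NormalTok{one_play <- }\KeywordTok{sample_play}\NormalTok{(}
\NormalTok{  what_down = 3,}
\NormalTok{  yards_to_go = 2,}
\NormalTok{  yards_from_own_goal = 47,}
\NormalTok{  play_data = pbp,}
\NormalTok{  strategy = }\StringTok{"empirical"}
\NormalTok{)}
\end{Highlighting}
\end{Shaded}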
The other main function of interest in the package is called
\texttt{sample\_drives()}. This allows the user to simulate a series of
plays by one team (a drive) following some specific strategy versus
another team employing a ``normal'' strategy. By ``normal'' we mean the
plays of the opposing team are simply sampled at random from all plays
without a specific strategy in mind. The \texttt{sample\_drives()}
function shows how a specific strategy is expected to perform if
implemented during an NFL game when the opposing team is employing the
status quo. The function can either sample drives until one team scores,
or it can sample a single drive and return the outcome of the drive
(i.e., touchdown, field goal, punt, or turnover). By simulating many
drives one can identify statistics such as expected points per drive and
proportion of drives resulting in a score for a variety of strategies.
The \texttt{sample\_drives()} function takes parameters for the number
of simulations to be run (\texttt{n\_sims}), the starting yardline of
the simulations (\texttt{from\_yard\_line}), the strategy
(\texttt{strategy}), and if the simulation is of a single drive
(\texttt{single\_drive}). Within \texttt{sample\_drives()}, the function
\texttt{down\_distance\_updater()} updates the down, distance, and yards
to go and then samples the next play from all plays satisfying the
updated criteria.
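A minimal call mirroring this description might look as follows, again with the data-argument name and the strategy label serving only as placeholders:

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{## Simulate 100 single drives, each starting at the 25 yard line,}
\CommentTok{## under one chosen strategy (placeholder label)}
\NormalTok{drives <- }\KeywordTok{sample_drives}\NormalTok{(}
\NormalTok{  n_sims = 100,}
\NormalTok{  from_yard_line = 25,}
\NormalTok{  play_data = pbp,}
\NormalTok{  strategy = }\StringTok{"fourth_downs"}\NormalTok{,}
\NormalTok{  single_drive = TRUE}
\NormalTok{)}
\end{Highlighting}
\end{Shaded}

The returned object can then be summarized, for example by tabulating how many of the simulated drives end in a touchdown, a field goal, a punt, or a turnover.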
To demonstrate the use of this software and to offer an idea of how to
extend our work, we provide two strategies in the package. The first is
a strategy related to fourth-down decision making and the second is
associated with how often a team should pass (or run) the football.
Within the fourth down strategy we include several sub-strategies to make
a decision about going for it or not on fourth down. As mentioned above,
the fourth down strategy has been studied in the academic domain, see
e.g. \cite{yam2019lost} and \cite{romer2006firms}. We include it in this
manuscript due to its popularity and to give our own perspective on this
well-known problem. In the next section, we discuss these two strategies
in more detail.
The \texttt{NFLSimulatoR} package is available on CRAN (Comprehensive R
Archive Network) and the latest developmental version is available on
github. Adding this package to CRAN was an important step to make sure
our package passed rigorous software checks and to make installation
simpler.
Additional package details related to issues, recent changes, etc. can
be found at the \href{http://datacolorado.com/r/NFLSimulatoR}{NFLSimulatoR
website}. The package can be installed within R using either option
given below.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{## From CRAN}
\KeywordTok{install.packages}\NormalTok{(}\StringTok{"NFLSimulatoR"}\NormalTok{)}
\CommentTok{## From Github}
\KeywordTok{install.packages}\NormalTok{(}\StringTok{"remotes"}\NormalTok{)}
\NormalTok{remotes}\OperatorTok{::}\KeywordTok{install_github}\NormalTok{(}\StringTok{"rtelmore/NFLSimulatoR"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\hypertarget{sec:apps}{%
\section{Applications}\label{sec:apps}}
\hypertarget{fourth-down-strategy}{%
\subsection{Fourth Down Strategy}\label{fourth-down-strategy}}
The first strategy we examine concerns fourth down decision making. This
is one of the most well-known and discussed NFL strategies. On a fourth
down the offensive team has two options: go for it or kick. If they
kick, they can either punt the ball and allow the other team to take
offensive position or kick a field goal. The other option a team has on
fourth down is to attempt to run or pass the ball and gain enough yards
for a first down. Historically, NFL coaches tend to not go for it on
fourth down unless time is running out and/or the only possible way to
win the game involves increasing the risk of a turnover for the
potential benefit of a first down. However, thanks to the analytics
movement, teams are beginning to challenge the status quo.
In 2006, Romer began the discussion about optimal decision making on
fourth down by estimating the expected point value of kicking versus
going for it on fourth down. This was done by estimating the value of a
team having the ball at each yardline on the field. These values were
estimated from NFL play-by-play data from 1998, 1999, and 2000 \citep{romer2006firms}. This work was updated in 2013 via the New York Times' Fourth Down Bot \citep{fdbot}. Burke and Quealy use a similar calculation of the value of being at each yardline and then estimate the probability of gaining enough yards for a first down. The expected points for some fourth down can be calculated as the product of the probability of
securing a first down and the point value of a first down at the
specific yardline added to the product of the probability of not
securing a first down and the point value of the other team taking
possession at the given yardline.
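In symbols, writing $p$ for the probability of gaining the first down, $V_{\mathrm{FD}}(y)$ for the point value of a first down at yardline $y$, and $V_{\mathrm{opp}}(y)$ for the point value (to the offense) of the other team taking possession at $y$, the expected points can be written as
\[
\mathrm{E}[\mathrm{points}] = p\,V_{\mathrm{FD}}(y) + (1-p)\,V_{\mathrm{opp}}(y),
\]
where the notation is introduced here only to restate the preceding sentence.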
The estimated value of being at a given yardline takes into account
field goals and the expectation can be either positive or negative. If
it is positive, the Fourth Down Bot recommends going for it on fourth
down. \cite{yam2019lost} used data from the New York Times' Fourth Down Bot in a causal
analysis and determined, on average, if teams employed the (more
aggressive) strategy of the Fourth Down Bot they would enjoy
approximately 0.4 more wins per year. In the NFL where there are only 16
games in a season, 0.4 is a substantial increase in wins. For further
examination into the history of fourth down decision making see \cite{yam2019lost}.
Because this strategy is of such interest we include it in our package.
We offer five sub-strategies regarding decision making on fourth downs
to compare various methods. The first is called the \emph{empirical}
sub-strategy. Here, our functions simply select the fourth down play at
random from among all similar plays (i.e., similar with respect to down,
distance and yardline). The majority of the time this will be a punt or
field goal attempt, but there are occasions where a team may try for a
fourth down (perhaps if there is very little yardage needed for a first
down and the yardline is close to the opposing endzone). The second
sub-strategy is \emph{always go for it} and samples non-kicking plays
from the given down and distance. In this sub-strategy we do not require
the sampling to be exclusively from fourth down plays. In fact, we
expand the pool of potential plays to sample from on each of downs two
through four. That is, we sample from downs \(d\) and \(d-1\) on down
\(d\), for \(d = 2, 3, 4\). We assume the impact of, and mental anxiety
among, players due to it being fourth down is negligible because the
defensive team would have similar anxieties, the players are
professional and should be more immune to such inhibitions, and because
previous literature followed this procedure (e.g., \cite{romer2006firms} used third downs instead of fourth downs). The third sub-strategy is
\emph{never go for it} and in it the team always punts or kicks a field
goal. This offers us a conservative strategy to study, and we simply
sample kicks (and their outcome) from the given location.
The fourth sub-strategy is \emph{go for it if yardage is smaller than
Y}. Here the user sets the parameter \(Y\), a threshold on the number of
yards required for a first down. If the distance for a first down is
less than \(Y\), the strategy says to go for it, and to kick
if the distance is greater than or equal to \(Y\). This allows the
examination of a stricter sub-strategy but one offering a trade-off
between \emph{always go for it} and \emph{never go for it}. This
sub-strategy is likely more palatable for NFL teams since having a rule
to go for it on fourth if there is always less than, say, 1 yard to go
for a first down might be more acceptable than always going for it. The
final sub-strategy is \emph{expected points}. Here we use the expected
points estimated from the \texttt{nflscrapR} R package to find the
expected points at each yardline on the field. We further empirically
estimate the probability of gaining a first down and making a field
goal. Then we solve for the expected value of going for it, punting it,
and kicking a field goal. The decision is made by selecting the choice
which maximizes this expected points value. This last sub-strategy is
the most analytically reliant, and best mirrors current literature.
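In other words, for a given fourth-down situation this sub-strategy selects
\[
\arg\max\bigl\{\mathrm{E}[\mathrm{points}\mid\mathrm{go}],\ \mathrm{E}[\mathrm{points}\mid\mathrm{punt}],\ \mathrm{E}[\mathrm{points}\mid\mathrm{field\ goal}]\bigr\},
\]
with each expectation built from the empirically estimated conversion and field goal probabilities and the yardline values described above.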
Because we offer these sub-strategies within a free software package,
they can be re-run each season as more data becomes available, allowing
analysts to make recommendations based on the most recent NFL data.
We compare these sub-strategies by plotting the percent of drives
resulting in no score, a field goal, or a touchdown for the five
sub-strategies. For the \emph{go for it if yardage is smaller than Y}
option we let \emph{Y=5}. For this and subsequent fourth down analyses,
we only keep plays occurring before the final 2 minutes of each half of
the game and only plays where one team is within 28 points of the other.
This allows us to remove any plays that result from extreme decision
making because the outcome of the game is all but determined. We use
play-by-play data from both 2018 and 2019.
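A filter of this kind can be expressed directly on the prepared play-by-play data; the column names below follow the nflscrapR/nflfastR conventions and are stated here as an assumption:

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{## Keep plays outside the final two minutes of each half and with the}
\CommentTok{## score within 28 points (column names assumed from nflscrapR/nflfastR)}
\NormalTok{pbp_sub <- }\KeywordTok{subset}\NormalTok{(pbp,}
\NormalTok{  half_seconds_remaining > 120 & }\KeywordTok{abs}\NormalTok{(score_differential) <= 28)}
\end{Highlighting}
\end{Shaded}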
For the simulations, we generate 10000 drives for each sub-strategy
starting at the 25 yard line for all plays from these two regular
seasons. This corresponds to the usual starting position to begin a half
or after an opposing team scores (assuming the kickoff is a touchback).
For each drive we use the \texttt{sample\_drives()} function and set the
\texttt{single\_drive} argument equal to \texttt{TRUE}. Thus, we only
care about simulating one drive and storing its outcome for each
simulated drive. In other words, we start each drive with first down and
ten yards to go from the 25 and sample plays accordingly. The summarized
results are displayed in Figure \ref{fig:fourth-down-perc-score}.
\begin{figure}
\centering
\includegraphics{fourth-down-perc-score-1.pdf}
\caption{\label{fig:fourth-down-perc-score}The percentage of simulated
drives that resulted in no score (green), a field goal (orange), or a
touchdown (purple) in 2018 and 2019, for the fourth-down sub-strategies}
\end{figure}
From this figure we see the \emph{never go for it} strategy offers the
largest probability for scoring on a single drive with the majority of
the scores coming from field goals. The \emph{expected points} strategy
has the second largest percentage of simulated drives resulting in a
score, followed by \emph{yardage smaller than 5 yards}, and then the
\emph{empirical} sub-strategy. For further investigation, in Table
\ref{tab:tab1} we examine the percent of drives resulting in a field goal
(FG) or touchdown (TD), the average score per drive (assuming a
touchdown always results in 7 points), and a 95\% confidence interval
for the average score, for the 5 sub-strategies.
\rowcolors{2}{gray!25}{white}
\begin{table}[H]
\begin{center}
\caption{\label{tab:tab1} A summary of 10000 simulations (2018 and 2019 data) for each fourth down sub-strategy. The percentage of drives (out of 10000) ending in either a field goal or touchdown, the average score, and 95\% confidence intervals for each sub-strategy are reported.}
\begin{tabular}{ lrrrr }
\toprule
\addlinespace[.1cm]
\multicolumn{1}{l}{\bf Sub-strategy} & \multicolumn{1}{c}{Field Goals} & \multicolumn{1}{c}{Touchdowns} & \multicolumn{1}{c}{Mean Score} & \multicolumn{1}{c}{$95\%$ CI} \\
\midrule
{\em Always Go} & 0\% & 33\% & 2.28 & (2.22, 2.35) \\
\addlinespace[.1cm]
{\em Empirical} & 15\% & 22\% & 1.96 & (1.91, 2.02)\\
\addlinespace[.1cm]
{\em Expected Points} & 28\% & 19\% & 2.19 & (2.14, 2.25) \\
\addlinespace[.1cm]
{\em Never Go} & 33\% & 15\% & 2.03 & (1.98, 2.08)\\
\addlinespace[.1cm]
{\em Go if Yards $<$ 5} & 11\% & 27\% & 2.24 & (2.18, 2.30) \\
\toprule
\end{tabular}
\end{center}
\end{table}
From Table \ref{tab:tab1}, the fourth down sub-strategy with the largest
average points per simulated drive is \emph{always go for it} (average
of 2.28 points) followed by \emph{yardage smaller than 5 yards} (average
of 2.24 points), and \emph{expected points} (average of 2.19 points). We
also see the \emph{always go for it} sub-strategy is boom or bust
resulting in only touchdowns or no scores. Interestingly the
\emph{yardage smaller than 5 yards} has an average score similar to
\emph{always go for it}, yet it does recommend field goals to be taken.
The confidence interval for the \emph{yardage smaller than 5 yards} mean
score is also narrower than that for \emph{always go for it}. Taking
this into account along with the fact that the averages of these two
sub-strategies are so close, a recommendation for a team nervous about
always going for it on fourth down might be to always go for it if there
are less than five yards to go for a first down, regardless of field
position. Figure \ref{fig:fourth-down-perc-score} shows this strategy
will produce scoring drives more often and has nearly the highest
average score per drive.
If a team wishes to pursue this sub-strategy (going for it on fourth
down when there are fewer than five yards to go), a logical next question is:
what about other \emph{yards to go} values? That is, what if the team
went for it if the yards required for a first down are 4, or 6, or
something else entirely? Figure 2.1 shows the percent of drives
resulting in a score for a range of \(Y\) values. Figure 2.2 displays
the average (and 95\% confidence interval) score per drive for the
various \(Y\) values, and Figure 2.3 gives the average (and 95\%
confidence interval) yardline at which the ball is turned over when the
drive does not result in a score.
\clearpage
\begin{figure}
\centering
\includegraphics{yds-less-than-1.pdf}
\caption{\label{fig:yds-less-than} 2.1: The percentage of simulated
drives that resulted in a field goal (green) or a touchdown (orange);
2.2: Average score per drive for the \emph{yardage less than Y yards}
sub-strategy as a function of \(Y\); 2.3: Average turnover yardline
resulting from the \emph{yardage less than Y yards} sub-strategy as a
function of \(Y\)}
\end{figure}
In Figure 2.1 the largest percent score value (of about 38\%) is nearly
exactly achieved by \(Y\) values of 3, 4, and 5. Figure 2.2 shows the
\(Y\) value of 8 has the highest average score per drive, and this
average decreases as \(Y\) decreases. Figure 2.3 shows the average
turnover yardline gets further away from the offensive team's goal for
larger values of \(Y\). Taking all this together, a value of 5 yards may
be the best option for the fourth down sub-strategy \emph{go for it if
yardage less than \(Y\)} because it has nearly the highest percent score
value, a higher average score than all smaller \(Y\) values, a more
advantageous average turnover yardline than all larger \(Y\) values, and
(speculatively) may be more acceptable by NFL coaching staffs than a
value of, say, \(Y = 8\).
Here, we caution the reader that this is by no means a causal
investigation of fourth down strategies. Indeed, we could further
analyze the data by evaluating the performance of a specific
sub-strategy amongst better or worse teams, but do not do so as our
primary purpose is to demonstrate the usefulness of the
\texttt{NFLSimulatoR} package and its core functionality.
\hypertarget{runpass-percentage}{%
\subsection{Run/Pass Percentage}\label{runpass-percentage}}
\cite{levitt09} and \cite{hermsmeyer} argue NFL teams should
pass more often. In this section we investigate this thesis using the
simulation-based approach of \texttt{NFLSimulatoR}. Though perhaps
simple on its surface, examining a strategy having to do with the
proportion of plays that are a pass instead of a run proves interesting.
Even if the NFL is not as analytically forward as other professional
sports leagues, the league seems to be trending towards passing more.
The \texttt{NFLSimulatoR} package includes a strategy allowing the user
to study the effect of passing the ball more or less often.
When employing this strategy in the \texttt{sample\_play()} or
\texttt{sample\_drives()} functions, the argument \texttt{p} must be included
as a parameter. \texttt{p} is the probability a given offensive play on
first, second, or third down is a pass. To keep the strategy
straightforward, we follow an empirical procedure when the play to be
sampled is a fourth down. That is, when a fourth down situation arises
in the sample, we assume the play is simply sampled from all fourth down
plays at the given yardline (or within a neighborhood of the yardline)
and distance to go until a first down. Fourth down plays sampled at
their regular rates usually result in a punt or a field goal attempt. By
varying \texttt{p} we can study how pass proportion affects statistics such
as the expected points per drive and the proportion of drives resulting
in a score, among a host of other metrics.
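For example, a single point of such a sweep, with 60\% of first- through third-down plays drawn as passes, might be obtained as follows (the strategy label, data-argument name, and outcome labels in the last line are illustrative placeholders):

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{## Simulate drives with a 60% passing rate on downs one through three}
\CommentTok{## (labels and argument names below are placeholders)}
\NormalTok{drives_60 <- }\KeywordTok{sample_drives}\NormalTok{(}
\NormalTok{  n_sims = 1000,}
\NormalTok{  from_yard_line = 25,}
\NormalTok{  play_data = pbp,}
\NormalTok{  strategy = }\StringTok{"passes_rushes"}\NormalTok{,}
\NormalTok{  single_drive = TRUE,}
\NormalTok{  p = 0.6}
\NormalTok{)}
\CommentTok{## Proportion of simulated drives ending in any score}
\KeywordTok{mean}\NormalTok{(drives_60$outcome %in% }\KeywordTok{c}\NormalTok{(}\StringTok{"Touchdown"}\NormalTok{, }\StringTok{"Field Goal"}\NormalTok{))}
\end{Highlighting}
\end{Shaded}

Repeating this call over a grid of \texttt{p} values traces out curves like those shown in the figures below.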
Figure \ref{fig:pass-rush-all-facet} shows the proportion of simulated
drives resulting in a score for the offensive team (field goal or
touchdown) in 2018 and 2019. Note that we include a vertical dashed line
showing the league-wide proportion of passing plays on first through
third downs. This proportion of passing (running) plays on first through
third downs was roughly 59\% (41\%) in both 2018 and 2019. At first
inspection this figure suggests passing more often results in scoring
\textbf{less} on average. Obviously this initial glance requires more
scrutiny and indeed, subsetting by the type of score reveals additional
insight. Specifically, Figure \ref{fig:pass-rush-by-type} shows the same
data subsetted by the type of score: either a touchdown or field goal.
There is a clear trend showing more touchdowns are scored as the
proportion of plays that are passes increases.
\begin{figure}
\centering
\includegraphics{pass-rush-all-facet-1.pdf}
\caption{\label{fig:pass-rush-all-facet}The percentage of simulated
drives that resulted in a score (touchdown or field goal) in 2018 and
2019. The dashed line represents the actual proportion of passing plays
on first, second, and third downs in both years.}
\end{figure}
\begin{figure}
\centering
\includegraphics{pass-rush-by-type-1.pdf}
\caption{\label{fig:pass-rush-by-type}The percentage of simulated drives
that resulted in either a touchdown (orange) or a field goal (green) in
2018 and 2019. The dashed line represents the actual proportion of
passing plays on first, second, and third downs in both years.}
\end{figure}
Next, we look at the percentage of drives resulting in a score broken
down by the quality of the team. In this case we subset by whether or
not a team made the playoffs, and use playoff appearance as a proxy for
quality. To do this we simulate one set of drives by sampling plays from teams that made the playoffs and another set of drives by sampling plays from teams that did not.
Figure \ref{fig:pass-rush-facet} shows drives using plays from the
better teams (i.e., playoff teams) tend to result in a score more often
when employing a heavier passing-based strategy than the drives from
non-playoff teams.
\begin{figure}
\centering
\includegraphics{pass-rush-facet-1.pdf}
\caption{\label{fig:pass-rush-facet} The percentage of simulated drives
that resulted in a score by type (touchdown or field goal) in 2018 and
2019 colored by playoff teams (orange) versus non-playoff teams
(purple).}
\end{figure}
Again, we stress that our approach is not causal by any stretch of the
imagination. That is, we are not saying that passing more will
necessarily lead to more scores, particularly if the team has a
sub-standard quarterback. This result, of course, is likely confounded
by playoff teams (traditionally) having better quarterbacks. Thus, we
next subset the pool of plays by each team's overall passer rating (RTG)
and sample plays from three distinct pools: High, Medium, and Low passer
rating teams. A team in the pool of High passer rating teams had an
overall rating falling into the upper-most tercile of teams. The pools
for Medium and Low are similarly defined. The results of the simulated
drives using these groups are displayed in Figure
\ref{fig:pass-rush-qbr}. Here, we see that, as the proportion of passing
plays increases, teams in the upper tercile of passer ratings score
touchdowns more often than teams in the other two groups. However, the
percentage of field goals scored as a function of the proportion of
passing plays is similar for all three team groupings.
\begin{figure}
\centering
\includegraphics{pass-rush-qbr-1.pdf}
\caption{\label{fig:pass-rush-qbr}The percentage of simulated drives
that resulted in a score by type (touchdown or field goal) in 2018 and
2019 colored by overall team passer rating classification: High (green),
Medium (purple), and Low (orange).}
\end{figure}
Our overall conclusion, based on these simulations, is that passing more
should lead to a higher percentage of touchdowns scored. This conclusion
is not uniformly true across all types of teams, however. That is, the
better teams, or those teams with a higher-quality quarterback relative
to the rest of the teams in the league, will benefit much more than the
others.
Finally, we should mention that the time remaining in a game, as well as
other variables, could also confound these results. That is, teams that
are winning in the fourth quarter might elect to rush more often in order
to ``shorten the game'' and losing teams might pass more. Therefore, a future study might include a time-in-game parameter, $\tau$, in order to sample from plays within a window around $\tau$.
\hypertarget{sec:conc}{%
\section{Conclusions and Discussion}\label{sec:conc}}
Even though the NFL has existed since 1920, teams are still seeking
inefficiencies in the game to exploit. There always seems to be a
brilliant new coach ready to introduce a new strategy to push teams to
more success. The purpose of creating \texttt{NFLSimulatoR} is to give
the wider community a tool to examine a multitude of NFL strategies. The
package contains a set of robust and statistically sound tools to
simulate plays and drives to examine NFL game plans. This package will
also age well as it can be continually updated with data from the most
recent NFL season. Another use for the package is to examine the
ramifications of rule changes by the league. This would allow the league
to take a data-driven approach to such changes. One example of a rule
change that has been debated is eliminating the kickoff after a score.
In the package we include two strategies of interest, passing versus
rushing the ball and going for it or not on fourth down. We have
examined each strategy in this paper as examples of possibilities for
the package. We imagine many extensions of our work, including
strategies regarding whether to run or throw on first down, what play
works best after a penalty or timeout, and what plays to run in the
first or last few minutes of a game or quarter. Another obvious extension to this work is the implementation of more game- or team-specific scenarios. For example, we might study game-specific strategies by sampling from plays satisfying, or nearly satisfying, additional simulated in-game characteristics such as time-in-game, current score, weather, location, among a host of additional parameters. Taking this concept a step further, we could also assign higher sampling probabilities to plays most closely matching these given in-game characteristics.
In addition, we might study team-specific strategies if we assume some teams perform better at various aspects of the game than others. For example, a team could have an excellent run game but a poor pass game, or excel at throwing the ball fewer than 20 yards but struggle when the throw is over 20 yards. In such cases, we could only sample plays which match a given team's profile to study the outcomes of strategies a specific team is more likely to employ. Our methodology is robust enough to include these subsetting parameters in any of the sampling functions given in \texttt{NFLSimulatoR}. Furthermore, the simulations can either be at the individual drive level (as we did) or by evaluating a strategy over the course of an entire game.
Another possible extension of our work is to choose strategies at random to implement. By randomly selecting and setting various parameters for a strategy (or combination of strategies) one can compare a bevy of strategies. As the adoption of \texttt{NFLSimulatoR} grows and accumulates more available strategies, randomly choosing and combining strategies and then testing them seems immensely useful. Harnessing computational power to examine a plethora of strategies (such as: always go for it on fourth down while also passing on 77\% of first down plays) will only lead to further optimization of in-game decision making in the NFL.
We welcome collaboration from the sports analytics community and hope
for contributions to our package, which are easy to make given its
open-source nature. The fact that there are so many possible analysis options of game strategies makes us more excited about the existence of the \texttt{NFLSimulatoR} package because now the wider sports analytics community can take our initial work and extend it. As an example, recently a new model-based fourth down decision maker was introduced by Ben Baldwin, an author of the previously mentioned \texttt{nflfastR} package \cite{baldwin_athletic}. This is exactly the sort of contribution we hope will be added to the \texttt{NFLSimulatoR} package. Such a strategy could be integrated and tested within the simulation based framework we created and shared with the community at large. We look forward to what new strategies will be devised and tested and hope to see even more analytics used in the NFL and other sports leagues.
Finally, we stress that \texttt{NFLSimulatoR} is in its infancy with the current release being v0.3.1. As previously mentioned, we encourage interested parties to contribute to the package as this project evolves. Contributions could be in the form of new strategies, vignettes showing new and interesting analyses, or simply code enhancements. For reproducibility we have included code to generate the figures and table used in this paper as a github repository located at \href{https://github.com/williamsbenjamin/nflsimulator_aoor}{\text{github.com/williamsbenjamin/nflsimulator\_aoor}}.
\section{Introduction}
\label{sec:intro}
This report describes our submission to the action spotting track of the SoccerNet Challenge 2022. Our submission is largely based on our recently proposed method~\cite{soares2022temporally}. It uses a dense set of detection anchors, with the goal of increasing the temporal precision of the detections. This method is briefly reviewed here in Section~\ref{sec:dense}. It consists of a two-phase approach, in which first a set of features is computed from the video frames, and then a model is applied on those features to infer which actions are present in the video. Section~\ref{sec:features} describes the different features we experimented with, which are derived from previously published ones~\cite{zhou2021feature,deliege2021soccernet}. Section~\ref{sec:training} describes our experimental protocols and training procedures, Section~\ref{sec:nms} describes the post-processing that we applied (which is based on Soft-NMS~\cite{bodla2017soft}), and Section~\ref{sec:results} presents our results.
\section{Action spotting via dense detection anchors}
\label{sec:dense}
For our submissions, we used the recent action spotting approach proposed by Soares et al.~\cite{soares2022temporally}. The approach uses a dense set of anchors in order to produce temporally precise detections. Each anchor is defined as a pair formed by a time instant and an action class. These are taken at regularly spaced time instants, at the same frequency as the input feature vectors, usually 1 or 2 per second. For each anchor, both a detection confidence and a fine-grained temporal displacement are inferred, with the temporal displacement indicating exactly when an action was predicted to happen, thus further increasing the temporal precision of the results. Both types of outputs are then combined by displacing each confidence by its respective predicted temporal displacement. Finally, a non-maximum suppression (NMS) step is applied to obtain the final detections. For the trunk of the model, we used the 1D version of the u-net, which was shown to be significantly faster than a Transformer Encoder alternative, while producing similar results~\cite{soares2022temporally}.
\begin{table*}[t]
\centering
\begin{tabular}{lccccc}
\toprule
Features & \makecell{Experimental\\protocol} &
\makecell{Validation\\tight a-mAP} & \makecell{Validation\\a-mAP} &
\makecell{Challenge\\tight a-mAP} & \makecell{Challenge\\a-mAP}\\
\midrule[2\arrayrulewidth]
Combination$\times 2$ & Challenge Validated & 65.8 & 76.5 & 65.1$^*$ & 75.9$^*$\\
Combination$\times 2$ & Challenge Validated & 65.6 & 76.3 & 64.6$^*$ & 74.9$^*$\\
\hline
Combination$\times 2$ & Challenge & - & - & 67.0$^*$ & 77.3$^*$\\
Combination$\times 2$ & Challenge & - & - & 66.8$^*$ & 76.8$^*$\\
\hline
Combination$\times 2$ + ResNet & Challenge & - & - & 67.6$^*$ & 77.9$^*$\\
Combination$\times 2$ + ResNet & Challenge & - & - & 67.8$^*$ & 78.0$^*$\\
\bottomrule
\end{tabular}
\caption{Results using different protocols for training, validation, and testing, and using different sets of input features. For each configuration determined by a set of features and experimental protocol, we have two runs. Each run uses a different random initialization during training, leading to small variations in the results. Combination$\times 2$ refers to the Combination features from Zhou et al.~\cite{zhou2021feature} resampled to 2 feature vectors per second, while Combination$\times 2$ + ResNet refers to our late fusion approach. *~Results provided by the evaluation server.
}
\label{tab:results}
\end{table*}
\section{Features}
\label{sec:features}
Our method uses a standard two-phase approach, consisting of feature extraction followed by action spotting. It was implemented in such a way that the set of possible time instants for the detections is the same as that of the extracted feature vectors. Our experiments use the features provided by Zhou et al.~\cite{zhou2021feature}, which we refer to here as {\it Combination} features, given that they were produced by concatenating the features from a series of different fine-tuned models. These features were originally computed at 1 feature vector per second, which, for our method, results in output detections of the same frequency. We noticed that this frequency was too low to appropriately compute the tight average-mAP metric, whose tolerance radius (equal to half the tolerance window size $\delta$) increases in half-second increments. We thus resampled the Combination features using linear interpolation, to the desired frequency of 2 feature vectors per second.
We also experimented with combining the resampled Combination features with the ResNet features from Deliege et al.~\cite{deliege2021soccernet}, which were already available at the desired frequency of 2 per second. As a straightforward way of combining these features, we adopted a late fusion approach. To obtain the fused detection confidences, we calculated a weighted average of the logits of the confidences from two previously trained models: one trained on the resampled Combination features, and the other trained on the ResNet features. The optimal weights were found through exhaustive search on the validation set. In a similar manner, we experimented with fusing the temporal displacements from models trained on the different feature types. However, this did not provide any improvement, so we opted to use just the single temporal displacement model trained on the resampled Combination features.
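In other words, writing $c_{\mathrm{comb}}$ and $c_{\mathrm{res}}$ for the confidences predicted from the two feature types and $\sigma^{-1}$ for the logit function, the fused score can be written as
\[
s = \alpha\,\sigma^{-1}(c_{\mathrm{comb}}) + (1-\alpha)\,\sigma^{-1}(c_{\mathrm{res}}),
\]
where $\alpha \in [0,1]$ denotes the fusion weight found by the exhaustive search on the validation set described above (the symbols are introduced here only to make the fusion rule explicit).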
\section{Experimental protocol and model training}
\label{sec:training}
We used two different protocols in our experiments, each combining the SoccerNet data splits in different ways. The dataset provides the following pre-determined set of splits: training, validation, test, and challenge. The challenge split labels are not provided, so metrics on this split can only be obtained by submitting results to the evaluation server. Our first protocol, named {\it Challenge Validated}, trains on the training and test splits, runs validation on the validation split, and tests on the challenge split. Our second protocol, named {\it Challenge}, trains on all the available labeled data (training, validation, and test splits), does not include any validation, and tests on the challenge split. The hyper-parameters for this last protocol are taken directly from those found using the {\it Challenge Validated} protocol.
Our confidence and temporal displacement models each have 22.9M parameters when built on the resampled Combination features, and 18.5M when built on the ResNet features. For each model, following~\cite{soares2022temporally}, we use the validation set to tune each of the hyper-parameters in turn: the learning rate, $\rho$ for Sharpness-Aware Minimization (SAM)~\cite{foret2021sharpnessaware}, weight decay, and $\alpha$ for mixup data augmentation~\cite{zhang2018mixup}.
\section{Soft non-maximum suppression}
\label{sec:nms}
To post-process the detection results, we applied a one-dimensional version of Soft Non-Maximum Suppression (Soft-NMS)~\cite{bodla2017soft}. Soft-NMS was originally proposed for post-processing object detection results, as a generalization of the well-known NMS algorithm. Whereas standard NMS will remove any detections that have a high overlap with an accepted high-confidence detection, Soft-NMS opts to instead only {\it decay} the confidences of those overlapping detections, with the decay being proportional to the overlap. To adapt this to the one-dimensional action spotting setting, we define the decay as being proportional to the temporal distance between detections. Namely, given an accepted high-confidence detection at time $t$, for any detection of the same class at time $s$, we define the decay function as $f(s) = \min\{\abs{s - t} / (\nicefrac{w}{2}), 1\}$, where $w$ is a window size that we choose on the validation set. The confidences are then updated by multiplication with the decay function $f$.
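Written out, once a detection at time $t$ is accepted, every remaining detection of the same class at time $s$ has its confidence $c(s)$ rescored as
\[
c(s) \,\leftarrow\, c(s) \cdot \min\!\left\{\frac{|s-t|}{w/2},\, 1\right\},
\]
so that detections farther than $w/2$ from $t$ are left untouched, while closer ones are linearly suppressed rather than discarded.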
\section{Results}
\label{sec:results}
All the results presented here were obtained using Soft-NMS. Generally, applying Soft-NMS gives a very small but consistent improvement relative to standard NMS. For example, when using the {\it Challenge Validated} protocol, on the validation set, one of our models produced 65.78 tight average-mAP using Soft-NMS, versus 65.46 when using standard NMS, where each result corresponds to its respective optimal window size.
Table~\ref{tab:results} presents all our results. The table shows an improvement in the challenge tight average-mAP of around 2.1 when switching protocols from {\it Challenge Validated} to {\it Challenge} protocol, most likely due to the latter having more training data available. When we use late fusion to add the ResNet features to the resampled Combination features, we see a further average improvement of 0.8 on the challenge tight average-mAP, leading to our best submitted results.
\vspace{4mm}
\noindent {\bf Acknowledgements} We are grateful to Topojoy Biswas and Gaurav Srivastava for insightful discussions and feedback.
\clearpage
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
\vspace{-2pt}
Sports is a lucrative sector, with large amounts of money being invested in players and teams.
The global sports market is estimated to generate an annual revenue of \$91 billion~\cite{GlobalSportsMarket}, whereby the European soccer market contributes about \$28.7 billion~\cite{EuropeanFootballMarket}, from which \$15.6 billion alone come from the Big Five European soccer leagues (EPL, La Liga, Ligue 1, Bundesliga and Serie A)~\cite{BigFiveMarket1,BigFiveMarket2}.
After merchandising, TV broadcast rights are the second major revenue stream for a soccer club~\cite{BroadcastingRevenue}.
Even though the main scope of soccer broadcast is entertainment, such videos are also used by professionals to generate statistics, analyze strategies, and scout new players.
Platforms such as Wyscout~\cite{wyscout}, Reely~\cite{reely}, and Stats SPORTVU~\cite{sportvu} have made sports analytics their core business and already provide various products for advanced statistics and highlights.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{img/Teaser/Slide1}\\ \vspace{1mm}
\includegraphics[width=0.48\textwidth]{img/Teaser/Slide2}\\ \vspace{1mm}
\includegraphics[width=0.48\textwidth]{img/Teaser/Slide3}
\caption{Example of events defined in the context of soccer.
From top to bottom:
\textbf{Goal}: the instant the ball crosses the goal line.
\textbf{Substitution}: the instant a player enters the field to substitute another player.
\textbf{Card}: the instant the referee shows a card to a player.}
\vspace{-10pt}
\label{fig:Teaser}
\end{figure}
In order to get such statistics, professional analysts watch a lot of broadcasts and identify all the events that occur within a game.
According to Matteo Campodonico, CEO of Wyscout, a 400 employee company focusing on soccer data analytics~\cite{wyscout}, it takes over 8 hours to provide up to 2000 annotations per game.
With more than 30 soccer leagues in Europe, the number of games is very large and requires an army of annotators.
Even though Amazon Mechanical Turk (AMT) can provide such a workforce, building an annotated dataset of soccer games comes at a significant cost.
Automated methods for sports video understanding can help in the localization of the salient actions of a game.
Several companies such as Reely~\cite{reely} are trying to build automated methods to understand sports broadcasts and would benefit from a large-scale annotated dataset for training and evaluation.
Many recent methods exist to solve generic human activity localization in video focusing on sports~\cite{bettadapura2016leveraging,caba2017scc,Felsen_2017_ICCV,KarpathyCVPR14}.
However, detecting soccer actions is a difficult task due to the sparsity of the events within a video.
Soccer broadcast understanding can thus be seen as a sub-problem of video understanding, focusing on a vocabulary of sparse events defined within its own context.
\vspace{4pt}\noindent\textbf{Contributions.}
\textbf{(i)} We propose the task of \emph{event spotting} within a soccer context.
We define \emph{events} as actions anchored to a single timestamp in a video and, thus, proceed to define and study the task of \emph{spotting} events within soccer videos (Section~\ref{sec:Spotting}).
\textbf{(ii)} We propose \emph{SoccerNet}, a scalable dataset for soccer video understanding. It contains 764 hours of video and 6,637 instances split into three classes (Goal, Yellow/Red Card, and Substitution), which makes it the largest localization dataset in terms of total duration and number of instances per class (Section~\ref{sec:DataCollection}).
\textbf{(iii)} We provide baselines for our dataset in the tasks of video chunk classification and event spotting. Our \emph{minute classifier} reaches a performance of 67.8\% (mAP) and our \emph{event spotter} an Average-mAP of 49.7\% (Section~\ref{sec:Experiments}).
\section{Related Work}
This paper relates to the topics of Sports Analytics, Activity Recognition and Action Localization Datasets.
We give a brief overview of work relevant to each of these topics and highlight how our paper contributes to each of them.
\vspace{4pt}\noindent\textbf{Sports Analytics.~~} Many automated sports analytics methods have been developed in the computer vision community to understand sports broadcasts~\cite{bettadapura2016leveraging,d2010review,Felsen_2017_ICCV,kapela2014real,ramanathan2016detecting}.
They produce statistics of events within a game by analyzing either camera shots or semantic information.
Ekin \etal~\cite{ekin2003automatic} present a cornerstone work for game summarization based on camera shot segmentation and classification, followed by Ren \etal~\cite{ren2005football} who focus on identifying video production patterns.
Huang \etal~\cite{huang2006semantic} analyze semantic information to automatically detect goals, penalties, corner kicks, and card events.
Tavassolipour \etal~\cite{tavassolipour2014event} use Bayesian networks to summarize games by means of semantic analysis.
More recent work in this category focuses on deep learning pipelines to localize salient actions in soccer videos.
Baccouche \etal~\cite{baccouche2010action} use a Bag-of-Words (BOW) approach with SIFT features to extract visual content within a frame.
They use such representations to train a Long Short Term Memory (LSTM) network that temporally traverses the video to detect the main actions.
Jiang \etal~\cite{jiang2016automatic} propose a similar methodology using Convolution Neural Networks (CNN) to extract global video features rather than local descriptors.
They also use a play-break structure to generate candidate actions.
Tsagkatakis \etal~\cite{tsagkatakis2017goal} present a two-stream approach to detect goals,
while Homayounfar \etal~\cite{Homayounfar_2017_CVPR} recently present a deep method for sports field localization, which is crucial for video registration purposes.
The main impediment for all these works is the lack of reference datasets/benchmarks that can be used to evaluate their performance at large-scale and standardize their comparison. They all use small and custom-made datasets, which contain a few dozen soccer games at most. We argue that intelligent sports analytics solutions need to be scalable to the size of the problem at hand. Therefore, to serve and support the development of such scalable solutions, we propose a very large soccer-centric dataset that can be easily expanded and enriched with various types of annotations.
\vspace{4pt}\noindent\textbf{Activity Recognition.~~}
Activity recognition focuses on understanding videos by either detecting activities or classifying segments of video according to a predefined set of human-centric action classes.
A common pipeline consists of proposing temporal segments~\cite{Buch_2017_CVPR,caba2016fast,Gao_2017_ICCV,shou2016temporal}, which are in turn further pruned and classified~\cite{girdhar2017actionvlad,wang2015action}.
Common methods for activity classification and detection make use of
dense trajectories~\cite{van2015apt,wang2013action,wang2015action,wang2016temporal},
actionness estimation~\cite{chen2014actionness,Gao_2017_ICCV,Zhao_2017_ICCV},
Recurrent Neural Networks (RNN)~\cite{sstad_buch_bmvc17,Buch_2017_CVPR,escorcia2016daps},
tubelets~\cite{Kalogeiton_2017_ICCV, Saha_2017_ICCV},
and handcrafted features~\cite{caba2016fast,mettes2015bag,yu2015fast}.
In order to recognize or detect activities within a video, a common practice consists of \textbf{aggregating} local features and \textbf{pooling} them, looking for a consensus of characteristics~\cite{KarpathyCVPR14,simonyan2014two}.
While naive approaches use mean or maximum pooling,
more elaborate techniques such as
Bag-of-Words (BOW)~\cite{csurka2004visual,sivic2003video},
Fisher Vector (FV)~\cite{jaakkola1999exploiting,perronnin2007fisher,perronnin2010improving}, and
VLAD~\cite{arandjelovic2013all,jegou2010aggregating}
look for a structure in a set of features by clustering and learning to pool them in a manner that improves discrimination.
Recent works extend those pooling techniques by incorporating them into Deep Neural Network (DNN) architectures, namely
NetFV~\cite{lev2016rnn,perronnin2015fisher,sydorov2014deep},
SoftDBOW~\cite{philbin2008lost}, and
NetVLAD~\cite{arandjelovic2016netvlad,girdhar2017actionvlad}.
By looking for correlations between a set of primitive action representations, ActionVLAD~\cite{girdhar2017actionvlad} has shown state-of-the-art performance in several activity recognition benchmarks.
To further improve activity recognition, recent works focused on exploiting \textbf{context}~\cite{caba2017scc,Dai_2017_ICCV,miech2017learnable}, which represent and harness information in both temporal and/or spatial neighborhood, or on \textbf{attention}~\cite{nguyen2015stap}, which learns an adaptive confidence score to leverage this surrounding information.
In this realm, Caba Heilbron \etal~\cite{caba2017scc} develop a semantic context encoder that exploits evidence of objects and scenes within video segments to improve activity detection effectiveness and efficiency. Miech \etal~\cite{miech2017learnable}, winners of the first annual Youtube 8M challenge~\cite{abu2016youtube}, show how learnable pooling can produce state-of-the-art recognition performance on a very large benchmark, when recognition is coupled with context gating. More recently, several works use temporal context to localize activities in videos~\cite{Dai_2017_ICCV} or to generate proposals~\cite{Gao_2017_ICCV}. Furthermore, Nguyen \etal~\cite{nguyen2015stap} present a pooling method that uses spatio-temporal attention for enhanced action recognition, while Pei \etal~\cite{Pei_2017_CVPR} use temporal attention to gate neighboring observations in a RNN framework. Note that attention is also widely used in video captioning~\cite{Hori_2017_ICCV,Krishna_2017_ICCV,Mazaheri_2017_ICCV}.
Activity recognition and detection methods are able to provide good results for these complicated tasks. However, those methods are based on DNNs and require large-scale and rich datasets to learn a model. By proposing a large-scale dataset focusing on event spotting and soccer, we encourage algorithmic development in those directions.
\vspace{4pt}\noindent\textbf{Datasets.~~}
Multiple datasets are available for video understanding, especially for video classification. They include
\textbf{Hollywood2}~\cite{marszalek2009actions} and
\textbf{HMDB}~\cite{Kuehne11}, both focusing on movies;
\textbf{MPII Cooking}~\cite{rohrbach2012database}, focusing on cooking activities;
\textbf{UCF101}~\cite{soomro2012ucf101}, for classification in the wild;
\textbf{UCF Sports}~\cite{rodriguez2008action},
\textbf{Olympics Sports}~\cite{niebles2010modeling} and
\textbf{Sports-1M}~\cite{KarpathyCVPR14}, focusing on sports;
\textbf{Youtube-8M}~\cite{abu2016youtube} and
\textbf{Kinetics}~\cite{kay2017kinetics}, both tackling large scale video classification in the wild.
They are widely used in the community but serve the objective of video classification rather than activity localization.
The number of benchmark datasets focusing on action localization is much smaller.
\textbf{THUMOS14}~\cite{THUMOS14} is the first reasonably scaled benchmark for the localization task with a dataset of 413 untrimmed videos, totaling 24 hours and 6,363 activities, split into 20 classes.
\textbf{MultiTHUMOS}~\cite{yeung2015every} is a subset of THUMOS, densely annotated for 65 classes over unconstrained internet videos.
\textbf{ActivityNet}~\cite{caba2015activitynet} tackles the issue of general video understanding using a semantic ontology, proposing challenges in trimmed and untrimmed video classification, activity localization, activity proposals and video captioning. ActivityNet 1.3 provides a dataset of 648 hours of untrimmed videos with 30,791 activity candidates split among 200 classes. It is so far the largest localization benchmark in terms of total duration.
\textbf{Charades}~\cite{sigurdsson2016hollywood} is a recently compiled benchmark for temporal activity segmentation that crowd-sources the video capturing process. After collecting a core set of videos from YouTube, they use AMT to augment their data by recording them at home. This dataset consists of a total of 9,848 videos and 66,500 activities.
More recently, Google proposed \textbf{AVA}~\cite{gu2017ava} as a dataset to tackle dense activity understanding. They provide 57,600 clips of 3 seconds duration taken from featured films, annotated with 210,000 dense spatio-temporal labels across 100 classes, for a total of 48 hours of video. While the main task of AVA is to classify these 3 seconds segments, such dense annotation can also be used for detection.
Within the multimedia community, \textbf{TRECVID} has been the reference benchmark for over a decade~\cite{awad2016trecvid,smeaton2006evaluation}.
They host a ``Multimedia Event Detection'' (MED) and a ``Surveillance Event Detection'' (SED) task every year, using the \textbf{HAVIC} dataset~\cite{strassel2012creating}.
These tasks focus on finding all clips in a video collection that contain a given event, with a textual definition, in multimedia and surveillance settings. Also, Ye \etal~\cite{ye2015eventnet} propose \textbf{EventNet}, a dataset for event retrieval based on a hierarchical ontology, similar to ActivityNet.
We argue that these two datasets both focus on large-scale information retrieval rather than video understanding.
We propose \emph{SoccerNet}, a scalable and soccer-focused dataset for event spotting. It contains 500 games, 764 hours of video and 6,637 instances split into three classes (Goal, Yellow/Red Card, and Substitution), which makes it one of the largest datasets in terms of total duration and number of instances per class. With an average of one event every 6.9 minutes, our dataset has a sparse distribution of events in long untrimmed videos, which makes the task of localization more difficult. The annotations are obtained at no cost, at one minute resolution, by parsing sports websites, and are further refined in house to one second resolution. We define our dataset as easily scalable since annotations are obtained for \emph{free} from online match reports.
Table~\ref{tab:DatasetsComparison} shows a breakdown description and comparison of the datasets available for the problem of action localization. Figure~\ref{fig:DatasetsComparison} shows a graphical comparison between these datasets in terms of the number of instances per class and the total duration of videos they contain.
\begin{table*}[htb]
\centering
\caption{Comparison of benchmark datasets currently tackling the task of action localization.}
\vspace{5pt}
\csvreader[tabular=l||c|r|r|r|r|r|r,
table head= \textbf{Dataset} & \textbf{Context} & \textbf{\#Video} & \textbf{\#Instance} & \textbf{Duration} & \multicolumn{1}{c|}{\textbf{Sparsity}} & \textbf{Classes} & \multicolumn{1}{c}{\textbf{Instance}} \\
\textbf{} & \textbf{} & \textbf{} & \textbf{} & \multicolumn{1}{c|}{\textbf{(hrs)}} & \textbf{(event/hr)} & \textbf{} & \textbf{per class} \\\midrule,
late after line=\ifthenelse{\equal{\Dataset}{Ours}}{\\\midrule}{\\}]
{img/Dataset/datasets.csv}%
{Dataset=\Dataset, Context=\Context, NumVideo=\NumVideo, NumActivity=\NumActivity, Duration=\Duration, SparsityVideo=\SparsityVideo, SparsityHour=\SparsityHour, AnnotSource=\AnnotSource, NumClass=\NumClass, ActivityClass=\ActivityClass}%
{\textbf{\Dataset} &
\Context &
\NumVideo &
\NumActivity &
\Duration &
\SparsityHour &
\NumClass &
\ActivityClass }
\label{tab:DatasetsComparison}
\end{table*}
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{img/Dataset/datasetsGraph}
\caption{
Dataset comparison in terms of number of instances per class and total duration.
The size of the hexagon shows the density of the event within the video.
Our dataset has the largest amount of instances per class and the largest total duration, despite being sparse, which makes the task of localization more difficult.}
\label{fig:DatasetsComparison}
\end{figure}
\section{Spotting Sparse Events in Soccer}
\label{sec:Spotting}
In this context, we define the concept of \emph{events} and the task of \emph{spotting events} within soccer videos.
\vspace{4pt}\noindent\textbf{Events:~~}
Sigurdsson \etal~\cite{Sigurdsson_2017_ICCV} recently questioned the concept of temporal boundaries in activities. They re-annotated Charades~\cite{sigurdsson2016hollywood} and MultiTHUMOS~\cite{yeung2015every} (using AMT) and concluded that the average agreement with the ground truth is respectively 72.5\% and 58.8\% tIoU. This clearly indicates that temporal boundaries are ambiguous. However, Sigurdsson \etal~\cite{Sigurdsson_2017_ICCV} also observe that central frames within an activity offer more consensus among annotators.
Chen \etal~\cite{chen2014actionness} define the concept of \emph{action} and \emph{actionness} by underlining 4 necessary aspects that define an action: an agent, an intention, a bodily movement, and a side-effect. Dai \etal~\cite{Dai_2017_ICCV} define an \emph{activity} as a set of events or actions, with a beginning and an ending time. In our work, we define the concept of \emph{event} as an action that is anchored in a single time instance, defined within a specific context respecting a specific set of rules. We argue that defining every action with temporal boundaries is ambiguous for multiple reasons:
\vspace{-5pt}
\begin{enumerate}
\item An action can occur in a glimpse, such as \emph{``a man dropped his wallet''} or \emph{``a man put a letter in the mail''}. While there are no well-defined boundaries for such actions, a sole instant can readily define these events.
\vspace{-5pt}
\item An action can be continuous within a live video, hence it is unclear when it starts or stops. For instance, time boundaries in video for actions such as \emph{``the night is falling''} or \emph{``the ice is melting in my glass''}, rely on a subjective discrimination between measurable quantities such as the illumination level or visual changes in matter state.
\vspace{-5pt}
\item An action can overlap and conflict with another. Consider a video of a man walking his dog, when he suddenly receives a phone call.
It is not clear whether the activity \emph{``taking a phone call''} actually cancels out the activity \emph{``walking a dog''}, or the activity \emph{``walking a dog''} should be split into two parts as opposed to one single segment overlapping the \emph{``taking a phone call''} instance.
\vspace{-5pt}
\end{enumerate}
Current benchmarks such as THUMOS14~\cite{THUMOS14}, ActivityNet~\cite{caba2015activitynet}, and Charades~\cite{sigurdsson2016hollywood} only focus on activities with temporal boundaries and cope with ambiguities by anchoring an activity with a consensus between several annotators.
This ambiguity motivates the recently developed AVA~\cite{gu2017ava} dataset that attempts to tackle the atomic characteristic of actions by providing dense fine-scale annotations within a short time duration (3 seconds).
In the multimedia community, the concept of events is generally more vague and overlaps with the concept of actions and activities.
In the MED task of the TRECVID benchmark~\cite{awad2016trecvid}, an event is defined as a kit which consists of a \emph{mnemonic title} for the event, a \emph{textual definition}, an \emph{evidential description} that indicates a non-exhaustive list of textual attributes, and a set of \emph{illustrative video examples}.
They propose a specific set of events, providing a description and defining rules for the start and end times. Such work underlines our hypothesis that events need to be defined with a set of rules and within specific circumstances.
In the context of live soccer broadcasts, it is unclear when a given action such as \emph{``scoring a goal''} or \emph{``making a foul''} starts and stops.
For similar reasons, the beginning and end of activities such as \emph{``scoring a 3 points shot''} or a \emph{``slam dunk''} in a basketball broadcast are ambiguous.
We argue that sports respect well-established rules and define an action vocabulary anchored in a single time instance.
In fact, soccer rules provide a strict definition of \emph{``goal''}, \emph{``foul''}, \emph{``card''}, \emph{``penalty kick''}, \emph{``corner kick''}, \etc and also anchor them within a single time.
Similarly, Ramanathan \etal~\cite{ramanathan2016detecting} define the action \emph{``basket-ball shot''} as a 3 seconds activity and its ending time as the moment the ball crosses the basket.
Defining starting or stopping anchors around such events, or fixing their duration, would be subjective and biased by the application.
\vspace{4pt}\noindent\textbf{Spotting:~~}
Rather than identifying the boundaries of an action within a video and looking for similarities within a given temporal Intersection-over-Union (tIoU), we introduce the task of \emph{spotting}.
Spotting consists of finding the anchor time (or \emph{spot}) that identifies an \emph{event}. Intuitively, the closer the candidate spot is to the target, the better the spotting is; its quality is measured by its distance from the target. Since perfectly spotting a target is intrinsically arduous, we introduce a tolerance $\delta$ within which an event is considered to be spotted (\emph{hit}) by a candidate.
We believe that event spotting is better defined and easier than detection since it focuses only on identifying the presence of an event within a given tolerance. An iterative process can refine such tolerance at will by using fine localization methods around candidate spots.
By introducing the task of spotting, we also define the metric to be used for evaluation.
First of all, we define a candidate spot as positive if it lands within a tolerance $\delta$ around the anchor of an event. For each tolerance, we can recast the spotting problem as a general temporal detection problem where the tIoU threshold used is very small. In that case, we can compute the recall, precision and Average Precision (AP) for each given class, and a mean Average Precision (mAP) across all classes. For general comparison, we also define an Average-mAP over a set of predefined $\delta$ tolerances, in our case below one minute.
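For illustration, the matching step behind this metric can be sketched in a few lines of Python (a simplified version, assuming candidate spots are sorted by decreasing confidence and all times are expressed in seconds; function names are illustrative):
\begin{verbatim}
import numpy as np

def match_spots(candidates, anchors, delta):
    # Greedily flag each candidate as a hit (1) if it lands within
    # +/- delta seconds of a not-yet-matched ground-truth anchor.
    unmatched, hits = list(anchors), []
    for t in candidates:
        m = next((a for a in unmatched if abs(t - a) <= delta), None)
        hits.append(1 if m is not None else 0)
        if m is not None:
            unmatched.remove(m)
    return hits

def average_precision(hits, num_anchors):
    # Non-interpolated AP from the ordered hit/miss list.
    hits = np.array(hits, dtype=float)
    precision = np.cumsum(hits) / (np.arange(len(hits)) + 1)
    return float(np.sum(precision * hits) / max(num_anchors, 1))
\end{verbatim}
The mAP then averages the AP over the classes, and the Average-mAP averages the mAP over the set of tolerances $\delta$.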
\section{Data Collection}
\label{sec:DataCollection}
We build our dataset in three steps:
\textbf{(i)} we collect videos from online sources;
\textbf{(ii)} we synchronize the game and video times by detecting and reading the game clock;
and \textbf{(iii)} we parse match reports available on the web to generate temporally aligned annotations.
\subsection{Collecting Videos}
We compile a set of 500 games from the main European championships during the last 3 seasons, as detailed in Table~\ref{tab:DatasetVideoCollected}. Each game is composed of 2 untrimmed videos, one for each half period. The videos come from online providers, in a variety of encodings (MPEG, H264), containers (MKV, MP4, and TS), frame rates (25 to 50 fps), and resolutions (SD to FullHD). The dataset consumes almost 4TB, for a total duration of 764 hours. The games are randomly split into 300, 100, and 100 games for training, validation, and testing, ensuring similar distributions of the events between the classes and the splits.
\begin{table}[htb]
\centering
\caption{Summary of the video collection for our dataset.}
\vspace{5pt}
\csvreader[tabular=l||c|c|c||>{\bfseries}c,
table head= & \multicolumn{3}{c||}{Seasons} & \\
League & 14/15 & 15/16 & 16/17 & Total \\\midrule,
late after line=\ifthenelse{\equal{\Championship}{Total}}{\\\midrule}{\\}]
{img/Dataset/games_crop224.csv}%
{Championship=\Championship,
seasA=\seasA,
seasB=\seasB,
seasC=\seasC,
Total=\Total}%
{\ifthenelse{\equal{\Championship}{Total}}
{\textbf{\Championship}}{\Championship} &
\ifthenelse{\equal{\Championship}{Total}}
{\textbf{\seasA}}{\seasA} &
\ifthenelse{\equal{\Championship}{Total}}
{\textbf{\seasB}}{\seasB} &
\ifthenelse{\equal{\Championship}{Total}}
{\textbf{\seasC}}{\seasC} &
\ifthenelse{\equal{\Championship}{Total}}
{\textbf{\Total}}{\Total}
}
\label{tab:DatasetVideoCollected}
\end{table}
\vspace{-15pt}
\subsection{Game Synchronization with OCR}\label{subsec:OCR}
The videos of the games are untrimmed and contain spurious broadcast content before and after the playing time. Finding a mapping between the game time and the video time is necessary to align the annotations from the web sources with the videos. Soccer games have a continuous game flow, \ie the clock never stops before the end of a half, hence there is a simple linear relationship (with slope 1) between the video time and the game time. Wang \etal~\cite{wang2017soccer} propose a method using the center circle of the field and the sound of the referee whistle to identify the start of the game. We argue that focusing the effort on a single instant is prone to error. In contrast, we detect the game clock region within multiple video frames and identify the game time through Optical Character Recognition (OCR) at different instants.
The clock is displayed in most frames throughout the video, though its shape and position vary between leagues. We leverage a statistical study of the pixel intensity deviation within a set of $N$ random frames to identify candidates for the clock region. We run the Tesseract OCR Engine~\cite{smith2007overview} on the candidate clocks and look for a coherent time format in each of the $N$ frames. To cope with occasional misreadings of the clock, we use a RANSAC~\cite{fischler1981random} approach to estimate the linear relation between the game clock and the video time, enforcing a unitary gradient in our linear model. Our method also checks the temporal integrity of the video, reporting temporal inconsistencies. To verify the quality of this game-to-video temporal alignment, we manually annotate the start of the game for all 500 games and report an accuracy of 90\% for automatically estimating the start of both halves within a tolerance of two seconds, using a set of $N=200$ frames.
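A simplified sketch of the alignment step, assuming the OCR readings have already been collected as pairs of video time and game time (both in seconds); with a unitary gradient, only the offset of the line needs to be estimated:
\begin{verbatim}
import random

def ransac_unit_slope(pairs, n_iter=1000, tol=2.0):
    # Fit game_time = video_time + offset from noisy OCR readings.
    best_offset, best_inliers = None, []
    for _ in range(n_iter):
        v, g = random.choice(pairs)      # one sample defines the line
        offset = g - v
        inliers = [(vv, gg) for vv, gg in pairs
                   if abs(gg - vv - offset) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
            # refine the offset as the mean residual over the inliers
            best_offset = sum(gg - vv for vv, gg in inliers) / len(inliers)
    return best_offset, best_inliers
\end{verbatim}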
\subsection{Collecting Event Annotations}
For our dataset, we obtain event annotations for \emph{free} by parsing match reports provided by league websites\footnote{We choose \url{www.flashscore.info} to get our annotations since they provide a wide number of summaries and have a consistent format across their match reports.}. They summarize the main actions of the game and provide the minute at which the actions occur. We categorize these events into our three main categories: \emph{``goals''}, \emph{``cards''} and \emph{``substitutions''}. We parse and mine the annotations for all games of the Big Five European leagues (EPL, La Liga, Ligue 1, Bundesliga and Serie A) as well as the Champions League from 2010 to 2017, for a total of 171,778 annotations corresponding to 13,489 games.
For the sake of storage, we focus on our subset of videos for the 500 games and use only 6,637 events. To resolve these free annotations to the one second level, we manually annotate each event at one second resolution by first retrieving its minute annotation and then refining it within that minute window. To do so, we define the temporal anchors for our events from their definitions within the rules of soccer. We define a \emph{``goal''} event as the instant the ball crosses the goal line to end up in the net. We define the \emph{``card''} event as the instant the referee shows a player a yellow or a red card because of a foul or a misbehaviour. Finally, we define the \emph{``substitution''} event as the instant a new player enters the field.
We ensured those definitions were applied consistently when annotating the dataset.
Apart from the substitutions that occur during the half-time break, almost all of our instances follow their definitions.
\subsection{Dataset Scalability}
We believe that scaling our dataset is cheap and easy, since web annotations are freely available with one minute resolution.
Algorithms can either use the weakly annotated events at one minute resolution or generate a complete one second resolution annotation, which is estimated to take less than 10 minutes per game.
We also argue that broadcast providers can easily scale up such datasets by simply providing more videos and richer annotations.
\section{Experiments}
\label{sec:Experiments}
We focus the attention of our experiments on two tasks: event classification for chunks of one minute duration, and event spotting within an entire video.
For these tasks, we report and compare the performance metrics for various baseline methods when trained on weakly annotated data (\ie one minute resolution) and the improvement that is achieved by training on one second resolution annotations.
\subsection{Video Representation}
Before running any experiments, we extract C3D~\cite{tran2015learning}, I3D~\cite{carreira2017quo}, and ResNet~\cite{he2016deep} features from our videos to be used by our baselines.
The videos are trimmed at the game start, resized and cropped at a $224\times 224$ resolution, and unified at 25fps.
Such representation guarantees storage efficiency, fast frame access, and compatible resolution for feature extraction.
\textbf{C3D}~\cite{tran2015learning} is a 3D CNN feature extractor that stacks 16 consecutive frames and outputs at the \emph{fc7} layer a feature of dimension 4,096. It is pretrained on Sport-1M~\cite{KarpathyCVPR14}.
\textbf{I3D}~\cite{carreira2017quo} is based on Inception~V1~\cite{szegedy2016rethinking}, uses 64 consecutive frames, and is pretrained on Kinetics~\cite{kay2017kinetics}. In this work, we only extract the RGB features at the \emph{PreLogits} layer, of length 1,024, so as to maintain a reasonable computational runtime; adding flow features has been shown to bring only meager improvements~\cite{carreira2017quo}.
\textbf{ResNet}~\cite{he2016deep} is a very deep network that outputs a 2,048 feature representation per frame at the \emph{fc1000} layer. In particular, we use ResNet-152 pretrained on ImageNet~\cite{deng2009imagenet}. Since ResNet-152 applies to single images, it does not intrinsically embed contextual information along the time axis.
We use TensorFlow~\cite{tensorflow2015-whitepaper} implementations to extract features every 0.5 second (s).
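As an illustrative sketch of the per-frame extraction (our pipeline uses TensorFlow; the snippet below shows an equivalent extraction of the 2,048-dimensional pooled ResNet-152 activation with PyTorch and OpenCV, and the function name is illustrative):
\begin{verbatim}
import cv2, torch, torchvision
import numpy as np

backbone = torchvision.models.resnet152(pretrained=True)
backbone = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()
normalize = torchvision.transforms.Normalize(
    mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

def extract_features(video_path, fps=25, step_sec=0.5):
    # One 2048-d feature every step_sec seconds of video.
    cap, feats, idx = cv2.VideoCapture(video_path), [], 0
    step = int(round(fps * step_sec))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            rgb = cv2.cvtColor(cv2.resize(frame, (224, 224)),
                               cv2.COLOR_BGR2RGB)
            x = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
            with torch.no_grad():
                feats.append(backbone(normalize(x)[None]).flatten().numpy())
        idx += 1
    cap.release()
    return np.stack(feats)
\end{verbatim}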
In order to simplify and unify the dimension of those features, we reduce their dimensionality by applying Principal Component Analysis (PCA) on the 5.5M features we extract per model.
We reduce C3D, I3D, and ResNet-152 features to a dimension of 512 and respectively maintain 94.3\%, 98.2\%, and 93.9\% of their variance.
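The reduction itself is standard; a sketch with scikit-learn, assuming the per-frame features of the dataset are stacked in a single array (an incremental PCA could be substituted for memory reasons):
\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA

def reduce_features(features, dim=512):
    # features: (num_frames, feat_dim) array.
    pca = PCA(n_components=dim)
    reduced = pca.fit_transform(features)
    # fraction of variance retained (93.9% to 98.2% in our setting)
    retained = float(np.sum(pca.explained_variance_ratio_))
    return reduced, retained
\end{verbatim}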
For the benchmark purpose, we provide the original and cropped versions of the videos, as well as the original and the reduced versions of all the features extracted every 0.5s.
\subsection{Video Chunk Classification}
\label{subsec:Classifier}
Similar to the setup in the AVA dataset~\cite{gu2017ava}, localization can be cast as a classification problem for densely annotated chunks of video, especially since we gather webly annotations. We split our videos into chunks of one minute duration, annotated with all events occurring within that minute, gathering respectively 1,246, 1,558, 960 and 23,750 chunks for cards, substitutions, goals and background in the training set, 115 of which have multiple labels. We aggregate the 120 features within a minute as input for different versions of shallow pooling neural networks. By using a sigmoid activation function at the last layer of these networks, we allow for multi-labelling across the candidates. We use an Adam optimizer that minimizes a multi binary cross-entropy loss over all the classes.
We use a step decay for the learning rate and an early stopping technique based on the validation set performance.
Following best practices in the field, the evaluation metric in this case is mAP (classification) across the three classes on the designated testing set.
In what follows, we report strong baseline results using different video features, different pooling techniques, and compare solutions to cope with the imbalanced dataset.
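To make the chunking above concrete, a sketch (with illustrative names) of how one-minute chunks and multi-hot labels can be assembled from features sampled every 0.5s and event annotations given in seconds:
\begin{verbatim}
import numpy as np

CLASSES = ["card", "substitution", "goal"]

def make_chunks(features, events, chunk_sec=60, step_sec=0.5):
    # features: (T, 512) array sampled every step_sec seconds.
    # events: list of (time_sec, class_name) pairs.
    per_chunk = int(chunk_sec / step_sec)        # 120 features per minute
    n_chunks = features.shape[0] // per_chunk
    chunks = features[:n_chunks * per_chunk].reshape(n_chunks, per_chunk, -1)
    labels = np.zeros((n_chunks, len(CLASSES)), dtype=np.float32)
    for t, name in events:
        c = int(t // chunk_sec)
        if c < n_chunks:
            labels[c, CLASSES.index(name)] = 1.0   # multi-hot target
    return chunks, labels
\end{verbatim}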
\vspace{4pt}\noindent\textbf{Learning How to Pool:~~}
We investigate the usage of different feature representations and various pooling methods. We propose shallow neural networks that handle the input matrix of dimension $120\times512$. We test a \textbf{mean} and a \textbf{max pooling} operation along the aggregation axis that output 512-long features. We use a custom \textbf{CNN} with a kernel of dimension $512\times20$ that traverses the temporal dimension to gather temporal context. Finally, we use the implementations of \textbf{SoftDBOW}, \textbf{NetFV}, \textbf{NetVLAD} and \textbf{NetRVLAD} provided by Miech \etal~\cite{miech2017learnable}, who leverage a further context-gating layer. After each of these pooling layers, we stack a fully connected layer followed by dropout (keep probability 60\%), which predicts the labels for each minute of video while preventing overfitting.
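A minimal PyTorch sketch of the simplest of these classifiers (max pooling over the 120 features, a fully connected layer with dropout, and a multi-label output realized through a sigmoid-based loss); the learnable pooling layers of Miech \etal~\cite{miech2017learnable} would replace the pooling step:
\begin{verbatim}
import torch
import torch.nn as nn

class MaxPoolClassifier(nn.Module):
    def __init__(self, feat_dim=512, num_classes=3, keep_prob=0.6):
        super().__init__()
        self.dropout = nn.Dropout(p=1.0 - keep_prob)
        self.fc = nn.Linear(feat_dim, num_classes)

    def forward(self, x):             # x: (batch, 120, feat_dim)
        pooled, _ = x.max(dim=1)      # aggregate along the temporal axis
        return self.fc(self.dropout(pooled))   # logits

model = MaxPoolClassifier()
criterion = nn.BCEWithLogitsLoss()    # multi binary cross-entropy
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
\end{verbatim}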
Table~\ref{tab:PoolingFeatures} summarizes a performance comparison between the various pooling methods when applied to the testing set.
First of all, we notice similar results across features when using mean and max pooling, which rely only on a single representative of the set of 120 features and not on its distribution.
Using the custom CNN layer, which is an attempt to gather temporal context, ResNet-152 performs better than C3D, which in turn performs better than I3D. We believe this is because I3D and C3D already gather temporal information over 64 and 16 frames, respectively.
We notice that the gap between the features increases when using the pooling methods proposed by Miech \etal~\cite{miech2017learnable}, which embed context along the temporal dimension. I3D and C3D features already rely on a temporal characterization within their stack of frames, whereas ResNet-152 provides a representation that focuses only on the spatial content of a frame. We believe the temporal pooling methods therefore provide more redundant information for I3D and C3D than for ResNet-152, which explains why ResNet-152 features give better results when coupled with any of the temporal pooling methods of Miech \etal~\cite{miech2017learnable}.
Focusing on the pooling, VLAD-based methods are at the top of the ranking, followed by the deep versions of the FV and BOW methods. Such improvement is attributed to the efficient clustering of the 120 features learned in NetVLAD~\cite{arandjelovic2016netvlad}, which provides state-of-the-art results for action classification~\cite{girdhar2017actionvlad}. Note that NetRVLAD performs similarly if not better than NetVLAD by relying only on the average and not the residuals for each cluster, reducing the computational load~\cite{miech2017learnable}. For the rest of the experiments, we rely exclusively on ResNet-152 features.
\begin{table}[htb]
\centering
\caption{Classification metric (mAP) for different combinations of frame representations and pooling methods.}
\vspace{5pt}
\csvreader[tabular=l||c|c|c,
table head= & \multicolumn{3}{c}{\textbf{Frame features}} \\
\textbf{Pooling} & \textbf{I3D} & \textbf{C3D} & \textbf{ResNet} \\\midrule,
late after line=\\]
{img/Results/Results_Network_Features.csv}%
{Pooling=\Pooling,
IED=\IED,
CED=\CED,
ResNet=\ResNet}%
{\textbf{\Pooling} &
~~\IED~~ &
~~\CED~ &
~~\ResNet~~}
\label{tab:PoolingFeatures}
\vspace{-4mm}
\end{table}
For the various pooling methods, the number of clusters can be fine-tuned. In Table~\ref{tab:PoolingFeatures}, we use $k=64$ clusters, which can be interpreted as the vocabulary of atomic elements that are learned to describe the events.
Intuitively, one can expect that a richer and larger vocabulary enables better overall performance~\cite{girdhar2017actionvlad}. We show in Table~\ref{tab:PoolingClusters} that this intuition holds within a certain range of values of $k$, beyond which the improvement is negligible and overfitting occurs. The performance of all pooling methods seems to plateau when more than 256 clusters are used for the quantization. The best results are obtained when NetVLAD is used with 512 clusters. Nevertheless, the computational complexity increases linearly with the number of clusters, hence computational times grow drastically.
\begin{table}[htb]
\centering
\caption{Classification metric (mAP) for different numbers of clusters for the pooling methods proposed by Miech \etal~\cite{miech2017learnable}.}
\vspace{5pt}
\csvreader[tabular=l||c|c|c|c,
table head= & \multicolumn{4}{c}{\textbf{Pooling Methods}} \\
\textbf{$k$} & \textbf{SoftBOW} & \textbf{NetFV} & \textbf{NetRVLAD} & \textbf{NetVLAD} \\\midrule,
late after line=\\]
{img/Results/Results_Network_Cluster2.csv}%
{K=\K,
SOFTBOW=\SOFTBOW,
NETFV=\NETFV,
RVLAD=\RVLAD,
VLAD=\VLAD}%
{\textbf{\K} &
\SOFTBOW &
\NETFV &
\RVLAD &
\VLAD}
\label{tab:PoolingClusters}
\vspace{-4mm}
\end{table}
\vspace{4pt}\noindent\textbf{Coping with Imbalanced Data:~~}
The performance of classifiers is significantly affected when training sets are imbalanced.
Due to the sparsity of our events, we have numerous background instances. Here, we present three main techniques to cope with this imbalance.
The first method \textbf{weights (Weig)} the binary cross-entropy with the ratio of negative samples to enforce the learning of the positive examples.
The second method downsamples the highest frequency classes, either by \textbf{random downsampling (Rand)} or by \textbf{hard negative mining (HNM)}, \ie by sampling the examples that are misclassified the most in the previous epoch.
The third method uses \textbf{Data Augmentation (Augm)} to balance the classes.
In that case, we use the fine annotation of the event and slide the minute window with a stride of 1s within $\pm$20s of the event spot to sample more video segments for the sparsest event classes.
We argue that a chunk of 1 minute within $\pm$10s around the anchor of the event still contains this event, and the pooling method should be able to identify it.
Note, however, that this data augmentation requires the data to be finely annotated.
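A sketch of this augmentation step, assuming one second resolution event times and features sampled every 0.5s; the one-minute window is re-centred on the event and slid around it:
\begin{verbatim}
import numpy as np

def augment_around_event(features, event_sec, step_sec=0.5,
                         chunk_sec=60, max_shift=20, stride=1):
    # Yield extra one-minute chunks whose start slides around the event.
    per_chunk = int(chunk_sec / step_sec)
    chunks = []
    for shift in range(-max_shift, max_shift + 1, stride):
        start = int(round((event_sec - chunk_sec / 2 + shift) / step_sec))
        if 0 <= start and start + per_chunk <= features.shape[0]:
            chunks.append(features[start:start + per_chunk])
    return chunks
\end{verbatim}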
Table~\ref{tab:Imbalanced} shows the classification mAP on the testing dataset, training the previous pooling methods on ResNet features and using the aforementioned strategies to cope with dataset imbalance.
We see that weighting slightly improves the metric.
Both downsampling methods actually lead to the worst results, because of the reduced amount of data the model has been trained on at each epoch.
Using the second resolution annotations to augment the data helps to achieve slightly better classification results.
\begin{table}[htb]
\centering
\caption{ Classification metric (mAP) using different solutions to cope with an imbalanced dataset on our pooling methods, using ResNet-152 features.}
\vspace{5pt}
\csvreader[tabular=l||c|c|c|c|c,
table head= & \multicolumn{5}{c}{\textbf{Imbalance}} \\
\textbf{Pooling} & \textbf{Orig} & \textbf{Weig} & \textbf{Rand} & \textbf{HNM} & \textbf{Augm} \\\midrule,
late after line=\\]
{img/Results/Results_Imbalanced_Pooling.csv}%
{Imbalance=\Imbalance,
Nothing=\Nothing,
Weighting=\Weighting,
Rand=\Rand,
HNM=\HNM,
Augm=\Augm}%
{\textbf{\Imbalance} &
\Nothing &
\Weighting &
\Rand &
\HNM &
\Augm}
\label{tab:Imbalanced}
\vspace{-5mm}
\end{table}
\subsection{Spotting}
In this section, we discuss the task of event spotting in soccer videos.
We use the models trained for the classification task and apply them in a sliding window fashion on each testing video, with a stride of 1s, thus leading to a one second resolution score for each event class.
We investigate the spotting results of three strong baselines:
\textbf{(i)} a watershed method that computes segment proposals and uses the \textbf{center} time of each segment as the candidate;
\textbf{(ii)} the time index of the \textbf{maximum} value within each watershed segment as the candidate; and
\textbf{(iii)} the local maxima along the whole video, to which we apply \emph{non-maximum-suppression} (\textbf{NMS}) within a one minute window.
The evaluation metric is the mAP with tolerance $\delta$ as defined for spotting in Section~\ref{sec:Spotting}, as well as the Average-mAP, expressed as the area under the mAP curve with tolerances ranging from 5 to 60 seconds.
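For illustration, the first and third baselines can be sketched as follows, given the 1Hz score curve of one class (the 50\% threshold matches the watershed threshold used below; names are illustrative):
\begin{verbatim}
import numpy as np

def watershed_center_spots(scores, threshold=0.5):
    # Baseline (i): center time (in s) of every contiguous segment
    # of the 1 Hz score curve lying above the threshold.
    above = np.asarray(scores) >= threshold
    spots, start = [], None
    for t, flag in enumerate(above):
        if flag and start is None:
            start = t
        elif not flag and start is not None:
            spots.append((start + t - 1) / 2.0)
            start = None
    if start is not None:
        spots.append((start + len(above) - 1) / 2.0)
    return spots

def nms_spots(scores, window=60):
    # Baseline (iii): local maxima with non-maximum-suppression
    # within a one minute window.
    kept = []
    for t in np.argsort(scores)[::-1]:
        if all(abs(int(t) - k) > window for k in kept):
            kept.append(int(t))
    return kept
\end{verbatim}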
\begin{figure*}[htb]
\centering
\subfloat[\textbf{Model trained on chunks of 60s}]
{\includegraphics[width=0.32\linewidth]{img/Results/GraphSpotting_ModelVLAD512_Wind60}
\label{fig:GraphSpotting_ModelVLAD512_Wind60}}
\subfloat[\textbf{Model trained on chunks of 20s}]
{\includegraphics[width=0.32\linewidth]{img/Results/GraphSpotting_ModelVLAD64_Wind20}
\label{fig:GraphSpotting_ModelVLAD64_Wind20}}
\subfloat[\textbf{Model trained on chunks of 5s}]
{\includegraphics[width=0.32\linewidth]{img/Results/GraphSpotting_ModelVLAD64_Wind5}
\label{fig:GraphSpotting_ModelVLAD64_Wind5}}
\caption{Spotting metric (mAP) as a function of the tolerance $\delta$ for models trained on chunks of size (a) 60s, (b) 20s and (c) 5s.
The Average-mAP is estimated through the area under the curve between 5s and 60s for each baseline.
}
\label{fig:GraphSpotting}
\end{figure*}
\vspace{4pt}\noindent\textbf{Comparison of Spotting Baselines:~~}
We investigate the results for event spotting for our best weakly-trained classifier, to leverage the use of webly-parsed annotations, \ie we train on the imbalanced minute resolution annotated data and do not perform data augmentation.
Specifically, we use a NetVLAD model with $k=512$ clusters based on ResNet features, and the watershed threshold is set to 50\%.
Figure~\ref{fig:GraphSpotting_ModelVLAD512_Wind60} plots the mAP of each spotting baseline as a function of the tolerance $\delta$ to the spot.
As expected, the mAP decreases with the spot tolerance $\delta$.
Above a tolerance $\delta$ of 60s, all three baselines plateau at 62.3\%.
Below 60s, baselines (ii) and (iii) perform similarly and decrease linearly with the tolerance.
On the other hand, baseline (i) decreases more gradually, hence providing a better Average-mAP of 40.6\%.
Even though the model has been trained using chunks of 1 minute, the method is still able to achieve good spotting results for tolerances below 60s.
We argue that our model predicts positively any window that contains an event, creating a plateau.
\vspace{4pt}\noindent\textbf{Training on Smaller Windows:~~}
Here, we train our classifiers from Section~\ref{subsec:Classifier} using a smaller chunk size, ranging from 60 seconds to 5 seconds.
We expect these models to perform in a similar fashion, with a drop in performance (mAP) occurring for tolerances below the chunk size.
Note that we use finely annotated data to train such classifiers.
Figure~\ref{fig:GraphSpotting} depicts the spotting mAP as a function of the tolerance $\delta$ for the models trained on 60, 20 and 5 second chunks.
They all have a similar shape: the metric plateaus for spotting tolerances $\delta$ above the chunk length they have been trained on, and decreases below this threshold.
By using baseline (i) on chunks of 20s we obtain the best Average-mAP of 50\% (see Figure~\ref{fig:GraphSpotting_ModelVLAD64_Wind20}).
Also, a drop in performance occurs with models trained with chunks of 5s (see Figure~\ref{fig:GraphSpotting_ModelVLAD64_Wind5}).
We believe such a gap in performance is related to the amount of context we allow around the event.
With these experiments, we set up baselines for the spotting task, but the best performance is far from satisfactory.
Nevertheless, we see our newly compiled and scalable dataset as a rich environment for further algorithm development and standardized evaluations, especially when it comes to novel spotting techniques.
\section{Future Work}
Activity detection is commonly solved by proposing candidates that are further classified.
We believe that detection can be solved by spotting a candidate and focusing attention around the spot to localize the activity boundaries.
In future work, we encourage the usage of RNNs to embed a further temporal aspect that captures the evolution of the game.
We will also include more classes of soccer events to enrich the dataset and enable learning potential causal relationships between events.
We believe for instance that the event \emph{``card''} is mostly the result of an event \emph{``foul''}.
Also, embedding semantic relationship information from the players, the ball and the field can improve soccer video understanding.
Our videos also contain an audio track that could be exploited; visual and audio sentiment analysis could localize the salient moments of the game.
The match reports from our online provider also include match commentaries.
We collected and will release a total of 506,137 commentaries for the six aforementioned leagues with a one second resolution.
We believe such data can be used for captioning events in soccer videos.
\section{Conclusion}
In this paper, we focus on soccer understanding in TV broadcast videos.
This work is an attempt to establish a benchmark for soccer analysis by providing a large-scale annotated dataset of soccer game broadcasts.
We discussed the concept of \emph{event} within the soccer context, proposed definitions of \emph{``goal''}, \emph{``card''} and \emph{``substitution''}, and parsed a large amount of annotations from the web.
We defined the task of \emph{spotting} and provided baselines for it.
For the minute classification task, we have shown a performance of 67.8\% (mAP) using ResNet-152 features and NetVLAD pooling with a vocabulary of 512 clusters, using only coarse annotations.
Regarding the spotting task, we have established an Average-mAP of 49.7\% with fine annotations and 40.6\% using only weakly annotated data.
We believe that by focusing effort on spotting, new algorithms can improve the state of the art in detection tasks.
\section{Supplementary Material}
We provide further details on the dataset and additional results for the spotting baselines.
\subsection{Dataset Details}
Table~\ref{tab:DatasetSplit} provides more details on the distribution of the events for the training (300 games), validation (100 games) and testing (100 games) sets. We verify that the events are similarly distributed across the different sets.
\begin{table}[htb]
\centering
\caption{Details on the events split between Training, Validation and Testing sets.}
\vspace{5pt}
\csvreader[tabular=l||c|c|c||>{\bfseries}c,
table head= & \multicolumn{3}{c||}{Events} & \\
Split & Goals & Cards & Subs & Total \\\midrule,
late after line=\ifthenelse{\equal{\Split}{Total}}{\\\midrule}{\\}]
{img/Dataset/CountEventSplit.csv}%
{Split=\Split,
goal=\goal,
card=\card,
subs=\subs,
Total=\Total}%
{\ifthenelse{\equal{\Split}{Total}}
{\textbf{\Split}}{\Split} &
\ifthenelse{\equal{\Split}{Total}}
{\textbf{\goal}}{\goal} &
\ifthenelse{\equal{\Split}{Total}}
{\textbf{\card}}{\card} &
\ifthenelse{\equal{\Split}{Total}}
{\textbf{\subs}}{\subs} &
\ifthenelse{\equal{\Split}{Total}}
{\textbf{\Total}}{\Total} }
\label{tab:DatasetSplit}
\end{table}
\subsection{PCA reduction}
Reducing the dimension of the frame features reduces the complexity of the successive pooling layers. Nevertheless, the features lose some variance. We ensure in Figure~\ref{fig:supplPCA} that the loss in variance is minimal when reducing the dimension to 512 for ResNet, C3D and I3D.
\begin{figure}[htb]
\centering
\includegraphics[width=\linewidth]{img/PCA/cumsum_explained_variance_ratio_zoom.png}
\caption{Dimensionality reduction using Principal Component Analysis (PCA).}
\label{fig:supplPCA}
\end{figure}
\subsection{Insight for the metrics}
We show in Figure~\ref{fig:supplInsightMetrics} some insight into the metric we are defining. A candidate spot is considered positive if it lands within a tolerance $\delta$ of a ground truth spot. In Figure~\ref{fig:supplInsightMetrics}, candidate A lands within a tolerance $\delta$, candidate B within a tolerance $3\delta$ and candidate C within a tolerance $4\delta$; each is hence considered positive for tolerances greater than or equal to that value, and negative for smaller tolerances. Recall and Precision are defined for a given tolerance $\delta$, as well as the Average Precision (AP), the mean Average Precision (mAP) over the three classes and the Average-mAP over a set of tolerances. Note that the Average-mAP can also be estimated through the area under the curve of the mAP as a function of the spotting tolerance (see Figure~\ref{fig:SupplGraphSpottingWindows}).
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{img/Supplementary/InsightMetrics}
\caption{Insight to understand the metrics.
Candidate A spots the event within a tolerance of $\delta$,
Candidate B within $3\delta$ and
Candidate C within $4\delta$.}
\label{fig:supplInsightMetrics}
\end{figure}
\subsection{Spotting Results}
We show in Figure~\ref{fig:supplRecallPrecision} the Recall-Precision curves for the 3 classes, using the best result for classification training, \ie ResNet-152 features, NetVLAD pooling with $k=64$ and the segment-center spotting baseline. Goal events are the easiest to spot with an AP of 73.0\%, followed by substitutions with 59.3\% and cards with 52.1\%.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{img/Results/RecallPrecision}
\caption{Recall Precision curve for the three classes.}
\label{fig:supplRecallPrecision}
\end{figure}
Also, Figure~\ref{fig:SupplGraphSpottingWindows} illustrates the mAP and the Average-mAP for models trained with different window sizes. It shows that the best result is achieved with a window size of 20 seconds during classification training. A drop in performance is visible for windows of 5 seconds. Note that these models use the fine annotation at one second resolution.
\begin{figure*}[t]
\centering
\subfloat[\textbf{Model trained on chunks of 50s}]
{\includegraphics[width=0.45\linewidth]{img/Results/GraphSpotting_ModelVLAD64_Wind50}}
\subfloat[\textbf{Model trained on chunks of 40s}]
{\includegraphics[width=0.45\linewidth]{img/Results/GraphSpotting_ModelVLAD64_Wind40}}\\
\subfloat[\textbf{Model trained on chunks of 30s}]
{\includegraphics[width=0.45\linewidth]{img/Results/GraphSpotting_ModelVLAD64_Wind30}}
\subfloat[\textbf{Model trained on chunks of 20s}]
{\includegraphics[width=0.45\linewidth]{img/Results/GraphSpotting_ModelVLAD64_Wind20}}\\
\subfloat[\textbf{Model trained on chunks of 10s}]
{\includegraphics[width=0.45\linewidth]{img/Results/GraphSpotting_ModelVLAD64_Wind10}}
\subfloat[\textbf{Model trained on chunks of 5s}]
{\includegraphics[width=0.45\linewidth]{img/Results/GraphSpotting_ModelVLAD64_Wind5}}
\caption{Spotting metric (mAP) as a function of the tolerance $\delta$ for models trained on chunks of size (a) 50s, (b) 40s, (c) 30s, (d) 20s, (e) 10s and (f) 5s.
The Average-mAP is estimated through the area under the curve between 5s and 60s for each baseline.}
\label{fig:SupplGraphSpottingWindows}
\end{figure*}
\subsection{Qualitative results}
Figure~\ref{fig:supplQualitativeResults} shows qualitative results for games in the training, validation and testing set. Each tile shows the activation around ground truth events for a given class, and depicts the candidate spot using our best model (ResNet-152, NetVLAD with $k=512$) and the center segment (i) spotting baseline.
The prediction usually activates over a 60 second range around the spot, which validates our hypothesis that any sliding window containing the ground truth spot triggers the prediction for that class.
\begin{figure*}[htb]
\centering
\subfloat
{\includegraphics[width=\linewidth]{img/Supplementary/QualitativeResults_Train0}
\label{fig:QualitativeResults_Train}}\\
\addtocounter{subfigure}{-1}
\subfloat[\textbf{Example of games from the training set}]
{\includegraphics[width=\linewidth]{img/Supplementary/QualitativeResults_Train1}
\label{fig:QualitativeResults_Train}}\\
\vspace{10pt}
\subfloat
{\includegraphics[width=\linewidth]{img/Supplementary/QualitativeResults_Valid0}
\label{fig:QualitativeResults_Valid}}\\
\addtocounter{subfigure}{-1}
\subfloat[\textbf{Example of games from the validation set}]
{\includegraphics[width=\linewidth]{img/Supplementary/QualitativeResults_Valid1}
\label{fig:QualitativeResults_Valid}}\\
\vspace{10pt}
\subfloat
{\includegraphics[width=\linewidth]{img/Supplementary/QualitativeResults_Test0}
\label{fig:QualitativeResults_Test}}\\
\addtocounter{subfigure}{-1}
\subfloat[\textbf{Example of games from the testing set}]
{\includegraphics[width=\linewidth]{img/Supplementary/QualitativeResults_Test1}
\label{fig:QualitativeResults_Test}}
\caption{Qualitative results for the Training (a), Validation (b) and Testing (c) examples. The time scale (X axis) is in minute.}
\label{fig:supplQualitativeResults}
\end{figure*}
Figure~\ref{fig:supplQualitativeResultsWindow} shows further results with smaller window sizes used in training. As expected, the activation width reduces from 60 seconds to the size of the video chunks used in training.
\begin{figure*}[htb]
\centering
\includegraphics[width=\linewidth]{img/Supplementary/QualitativeResults_Wind}
\caption{Qualitative results for window size ranging from 60 to 5 s.}
\label{fig:supplQualitativeResultsWindow}
\end{figure*}
\section{Introduction}\label{sec:Int}
Former England international, long-time Arsenal player and present SKYSPORTS commentator Paul Merson predicted the final Premier League (PL) table for the 2016$/$2017 season in~\cite{Merson}. With the hindsight of time, we can check Merson's predictions against the true final table, as indicated in table~\ref{tab:table_1}.
\begin{table}[htpb]
\begin{center}
\caption{Paul Merson's predictions compared to true final Premier League table.}
\label{tab:table_1}
\begin{tabular}{ll|c|c}
& \em{Final PL-table} & \em{Merson's predictions} & $|P(i)-i|$ \\
\hline
1. & Chelsea & 1 & 0 \\
2. & Tottenham & 6 & 4 \\
3. & Manchester City & 2 & 1 \\
4. & Liverpool & 5 & 1 \\
5. & Arsenal & 4 & 1 \\
6. & Manchester United & 3 & 3 \\
7. & Everton & 8 & 1 \\
8. & Southampton & 9 & 1 \\
9. & Bournemouth & 18 & 9 \\
10. & West Bromwich & 17 & 7 \\
11.& West Ham & 7 & 4 \\
12. & Leicester & 11 & 1 \\
13. & Stoke & 10 & 3 \\
14. & Crystal Palace & 12 & 2 \\
15. & Swansea & 15 & 0 \\
16. & Burnley & 20 & 4 \\
17. & Watford & 16 & 1 \\
18. & Hull & 19 & 1 \\
19. & Middlesbrough & 13 & 6 \\
20. & Sunderland & 14 & 6 \\
\end{tabular}
\end{center}
\end{table}
In table~\ref{tab:table_1}, the final table outcome is given in the leftmost column ({\em Final PL-table}), while Merson's predictions are given in the middle column ({\em Merson's predictions}). By defining the true (correct) final table as the consecutive integers $\{1, 2, \ldots, n\footnote{$n$ is the number of teams in the league.}\}$, and a table prediction as a certain permutation $P(i)$\footnote{In this example, $P(i)$ denotes Merson's predictions.} of the integers $\{1, 2, \ldots, n\}$, the absolute deviations between forecasts and true values can be computed as in the rightmost column of table~\ref{tab:table_1}.
If we examine Merson's tips more closely, we observe that he obtained two zeros (or perfect hits) in the rightmost column of table~\ref{tab:table_1} -- Chelsea as the winner, and Swansea as number 15. Furthermore, he missed by only one placement for 8 outcomes, but also missed by a greater margin in some cases, for instance Bournemouth, which he predicted to finish in 18th place but which in fact ended 9th.
The question that we are initially interested in is the quality of Merson's predictions. Is Merson's permutation in table~\ref{tab:table_1} a good guess? In order to attempt to answer such a question, we need to define quality. It seems reasonable to look for some function
\begin{equation}
f(|P(1)-1|, |P(2)-2|, \ldots, |P(n)-n|)
\end{equation}
which produces a single numerical value tailored for comparison. Of course, infinitely many possibilities exist for such a function. Fortunately, the forecasting literature comes to the rescue -- refer for instance to~\cite{Makr}. The two most common measures used in similar situations are $MAE$, Mean Absolute Error, and $MSE$, Mean Squared Error. With our notation, these two measures are defined as:
\begin{equation}\label{eq:MAE}
MAE = \frac{1}{n} \sum_{i=1}^n |P(i)-i| \mbox{, } MSE = \frac{1}{n} \sum_{i=1}^n \left( P(i)-i \right)^2
\end{equation}
Although $MSE$ is more commonly applied in statistics, presumably due to its nicer mathematical properties\footnote{Strictly convex for instance.}, we choose to use $MAE$. It weighs errors equally, and it also produces an easily interpretable result: a $MAE$ of 3 means that a prediction misplaces each team by 3 places on average.
Given this choice, Merson's $MAE$ in table~\ref{tab:table_1} can easily be calculated as $MAE=2.8$. Still, we are in no position to make any statements about the quality of this $MAE$ of 2.8.
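For illustration, this computation can be written in a few lines of Python (a sketch; the predicted positions are taken from the middle column of table~\ref{tab:table_1}):
\begin{verbatim}
merson = [1, 6, 2, 5, 4, 3, 8, 9, 18, 17,
          7, 11, 10, 12, 15, 20, 16, 19, 13, 14]   # P(i) for i = 1..20

mae = sum(abs(p - i) for i, p in enumerate(merson, start=1)) / len(merson)
print(mae)   # 2.8
\end{verbatim}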
One obvious alternative way to answer our initial question would be to gather information on other predictions (the internet is indeed full of them~\cite{FT},~\cite{Forbes},~\cite{BBC}), calculate $MAE$ for these guesses and compare. Unfortunately, this is a formidable task. As a consequence, we have chosen a slightly different path. Instead of empirical comparisons, we can investigate some basic statistical properties\footnote{Under an assumption of a random guess.} of $MAE$; for instance to establish the minimal ($MAE_{MIN}$), maximal ($MAE_{MAX}$) as well as the expected value of $MAE$ ($E[MAE]$) as a simpler (or at least less time consuming) way of assessing the quality of Merson's predictions. It turns out that\footnote{Refer to Appendix~\ref{sec:appa} for the derivation of these results as well as some other relevant statistical properties of $MAE$.} $MAE_{MIN}=0$, $MAE_{MAX}=\frac{n}{2}$ and $E[MAE]= \frac{1}{3} \cdot \frac{n^2-1}{n}$.
The above results provide interesting information. A random table permutation (or prediction) for PL ($n=20$) can at best produce $MAE=0$, while at worst, it can produce $MAE=\frac{n}{2}= \frac{20}{2}=10$. On average, a random prediction should produce $E[MAE]=\frac{1}{3} \cdot \frac{n^2-1}{n} = \frac{1}{3} \cdot \frac{20^2-1}{20}=6.65$.
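These closed-form values are easy to verify numerically; a small Monte Carlo sketch in Python, drawing random permutations of $\{1,\ldots,20\}$, reproduces the expected value of 6.65:
\begin{verbatim}
import random

n, trials = 20, 100000
total = 0.0
for _ in range(trials):
    perm = random.sample(range(1, n + 1), n)   # a random table prediction
    total += sum(abs(p - i) for i, p in enumerate(perm, start=1)) / n
print(total / trials)   # approximately 6.65 = (n**2 - 1) / (3 * n)
\end{verbatim}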
The task of guessing randomly and hitting the correct table is definitely a formidable one. There are $n!$ different tables to guess, and a random guess would hence (in the case of PL, $n=20$) have a probability of $\frac{1}{20!} = \frac{1}{2432902008176640000} \approx 4 \cdot 10^{-19}$ of hitting the correct table. Consequently, Merson's table prediction is really impressive. On average, a $MAE$ of 6.65 compared to Merson's 2.8 indicates high quality in Merson's prediction. One could of course argue that Paul Merson is an expert, and one should expect him to know this business\footnote{Some might even argue that PL is a league with low uncertainty of outcome (a competitively imbalanced league). Hence, it is not that hard to guess final tables. Chelsea, Arsenal and Manchester United have for instance a recurring tendency to end up among the 5 best.}. Still, information from other countries, for instance Norway, which we will focus more on in subsequent sections, indicates that even experts may have challenges in providing tips that fit final tables.
In the next section (section~\ref{sec:lit}), we investigate some scientific attempts to produce football table forecasts. In section~\ref{sec:fcast} we argue that the trend in present research seems to be oriented in a non-parsimonious fashion, and argue why this is perhaps not a good idea. In section~\ref{sec:hyp}, we discuss alternative parsimonious modelling hypotheses and test one involving goal difference as the prime explanatory factor. Section~\ref{sec:conc} concludes, and discusses and suggests further research.
\section{The science of football table prediction}\label{sec:lit}
Although the internet is ``full'' of football table predictions, it would be an exaggeration to state that the research literature is full of serious attempts to predict the same. Still, some noteworthy exceptions exist. Three relatively recent papers by Brillinger~\cite{Brill1},~\cite{Brill2},~\cite{Brill3} seem to sum up the state of the art in the area. Brillinger's touch of difference compared to other previous work seems to be that he models game outcomes in the form of Win, Tie or Loss -- W, T, L -- directly, as opposed to other authors who use some distributional assumptions on goal scoring frequency, typically as seen in~\cite{Lee} or in~\cite{Meeden}\footnote{Meeden's paper does get some interesting and perhaps unexpected criticism in~\cite{HaugenChance}. Here, the whole assumption of using probability theory to model goal scoring or match outcomes is questioned by game theoretic arguments.}.
Almost all of the work discussed above relies on simulation to produce actual forecasts. The idea is simple. Let the computer play the games, either by drawing goal scores or W, T, L outcomes (by estimated probabilistic mechanisms) for all predefined matches in the league. Register the match outcomes, either by counting goals or more directly by Brillinger's approach. Then, when all match outcomes are defined, the league table can be set up. Repeating the simulation produces a new final league table, and through a large number of simulation runs, expected table placements or probabilistic table predictions can be generated. Of course, such a method opens up for updating or re-estimating the underlying probabilities for team quality, which are then applied in a rolling horizon approach. Such rolling horizon approaches seem to be quite popular in the media -- refer for instance to~\cite{Sig}.
\section{Parsimonious forecasting}\label{sec:fcast}
The concept of parsimony (or parameter minimization) is both well known and well studied in the time series forecasting literature. Already Box and Jenkins~\cite{Box} pointed out that parsimony is desirable if forecasting accuracy is the objective. An interesting empirical test of the actual consequences of parsimony versus non-parsimony can be found in~\cite{Ledolter}.
The reason why parsimony is desirable is obvious. A model where many parameters need to be estimated generates more aggregate uncertainty than a model with fewer parameters. As a consequence, the outcome -- the forecasts -- tends to be more uncertain and inaccurate.
Furthermore, in many cases where causal (regression type) models are used, either alone or in combination with time series models, the causal variables will often have to be predicted in order to obtain model estimates for the target variable. And these causal variables are typically just as hard, or (perhaps) even harder, to predict reasonably correctly than the target variable. Suppose you want to predict the number of flats sold in a certain area of London this month next year. You know that many relevant economic variables like the UK salary level, unemployment rate and interest rates (just to name a few) affect this target variable. If a prediction model contains these variables, you need to predict them in order to predict the number of flats sold in London next year. And predicting next year's UK unemployment, salary and interest rate levels is (obviously) not an easy task.
If we return to our focus -- football table prediction -- it should seem quite obvious that the reported methodology discussed in section~\ref{sec:lit} can hardly be described as parsimonious. On the contrary, probability estimates for many teams, possibly conditional on future events like injuries or talent logistics, should generate much added uncertainty, and it should not come as a surprise that such methods produce quite bad predictions. The fact that the team in~\cite{Sig} missed Greece as a potential winner of EURO 2004 may serve as an adequate example.
This said, non-parsimonious models, either causal or not, have other interesting properties; they can for instance (far better) answer questions of the `what if' type, which in some situations are more desirable than accurate forecasts.
So, what would be a parsimonious model for football table forecasting? The answer is simple and obvious: the table itself. Either last year's table, if one predicts the final table in between seasons, or the latest available table, if the aim is to predict the final table within a rolling horizon.
Obviously, even such a simple strategy holds challenges. In most leagues there is relegation and promotion, which has the obvious effect that last season's table contains some teams other than those in this season's table. Furthermore, if the tables that are to be predicted are group tables in, say, European or World Championships, there is no previous season.
Still, such problems may be solved at least if we restrict our focus to prediction after some games or rounds are played.
\section{Testing a hypothesis of parsimonious football table prediction}\label{sec:hyp}
One simple way of testing the predictive power of the table in a certain round $r$ on the final table is to perform a set of linear regressions, one for each round, with the final table rank as the dependent variable and the table rank in round $r$ as the independent variable. Or, in our notation (the obvious $r$-subscript is omitted for simplicity):
\begin{equation}
i = \beta_0 + \beta_1 P(i) + \epsilon_i
\end{equation}
By calculating $R^2$ in all these regressions, a function $R^2(r)$ is obtained. Presumably, this function will have some kind of increasing pattern (not necessarily strict), but common sense indicates that football tables change less in later rounds than in early rounds. Figure~\ref{fig:fig_1} shows an example from last year's Tippeligaen\footnote{The name Tippeligaen has been changed to Eliteserien for this (2017) season.} in Norway.
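As a sketch of how $R^2(r)$ can be computed (assuming a matrix of table positions per round is available; variable names are illustrative), note that for a single-regressor linear regression, $R^2$ equals the squared correlation between the round-$r$ ranks and the final ranks:
\begin{verbatim}
import numpy as np

def r2_per_round(pos):
    # pos[i, r] holds the table rank of team i after round r+1;
    # the last column is the final table.
    final = pos[:, -1]
    return [float(np.corrcoef(pos[:, r], final)[0, 1] ** 2)
            for r in range(pos.shape[1])]
\end{verbatim}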
\begin{figure}[htpb]
\centerline{\includegraphics[scale=0.40]{R2pos}}
\caption{An example of $R^2(r)$ from Tippeligaen 2016}
\label{fig:fig_1}
\end{figure}
Looking at figure~\ref{fig:fig_1}, we observe our predicted pattern of non-strict positive monotonicity in $R^2(r)$. However, we also observe something else: $R^2(r)$ reaches 80\% explanatory power already in round 7. That is, 80\% of the final table is there, already in round 7. Surely, $R^2(r)$ drops slightly in rounds 7 to 19, but this observation indicates that our parsimonious hypothesis actually may be of relevance.
Now, is the table rank the only possible parsimonious alternative? The answer is of course no. A table contains home wins, away wins, points, and goal score, to name some potential additional information. Let us focus on goal score. At the start of a season, many teams may have new players or new managers, and we may suspect that the full potential of certain good teams may not be revealed in early table rankings. Vice versa, other teams have luck, are riding a wave and take more points than expected. These not-so-good teams may have a tendency to win even matches by a single goal, but also to lose other matches (against very good teams) by many goals. As such, we could suspect that goal difference (at least in earlier rounds) could perhaps hold more and better predictive information than table ranking (or points for that matter). That is, we could hope to observe patterns (of course not as smooth) similar to the ``fish-form'' in figure~\ref{fig:fig_2}.
\begin{figure}[htpb]
\centerline{\includegraphics[scale=0.40]{fish-form}}
\caption{A hypothetical comparison of $R^2(r)$ for regressions with both goal difference ($R^2_{gd}$) and table position ($R^2_{pos}$) as independent variables}
\label{fig:fig_2}
\end{figure}
In order to check these two hypotheses more thoroughly, we examined the Norwegian top league in previous seasons. We went back to 2009, as the number of teams changed from 14 to 16 that year, and performed regressions like those described above with both table rank and goal difference as independent variables. In this way, 8 $R^2_{pos}(r)$ and 8 $R^2_{gd}(r)$ functions were generated. The result is shown in figure~\ref{fig:fig_3}.
\begin{figure}[htpb]
\centerline{\includegraphics[scale=0.70]{All-R2-figs}}
\caption{Full output from empirical analysis}
\label{fig:fig_3}
\end{figure}
If we examine figure~\ref{fig:fig_3}, we observe that 7 out of 8 seasons show patterns, although perhaps not visually very similar to figure~\ref{fig:fig_2}, where goal difference explains the final table better than table rank -- typically early in the season. Furthermore, around 80\% explanatory power ($R^2 > 0.8$) is obtained (for table rank) roughly as early as mid-season in most cases.
Hence, an operative predictive strategy seems reasonable in which a table prediction is generated simply by sorting goal differences in the early part of the season, and by using the last observed table as the forecast later in the season.
We have thus demonstrated that our hypotheses are supported by these Norwegian data. Our results do of course not state anything about other leagues in other countries, but we believe that similar patterns should be observable in most international leagues.
\section{Conclusions and suggestions for further research}\label{sec:conc}
Apart from the fact that Paul Merson is a good predictor of the final PL table (or at least he was before the 2016/2017 season), we have demonstrated that table rank, or sometimes even better, goal difference, explains major parts of the final table early on. We have not actually checked empirically whether our prediction method is ``better'' than existing methods in the research literature. This is of course feasible, but time consuming. What we can conclude without doubt is that the methods applied by researchers in football table prediction are far more time and resource consuming than ours. Simulation experiments take both coding and computing time, and if one is in doubt about which approach is better, we would at least recommend trying our approach first.
So, is football table prediction important? Does it contribute to world welfare? Is it really necessary to spend time and resources addressing this problem? Perhaps not. Still, most modern news outlets spend a lot of time and (valuable) space on distributing such predictions each season. As a consequence, some real-world demand seems to exist.
In any case, we have examined the problem from our perspective and even found some mathematical results that we find interesting. Hopefully, our small effort may inspire other researchers to start the tedious job of empirically testing whether our approach performs better or worse than the simulation-based approaches.
\section{Introduction}
\label{sec:introduction}
\IEEEPARstart{I}{n} motorsport events, strategic decisions may have a relevant impact on the result of the competition.
Particularly, with reference to the World Endurance Championship (WEC) \cite{boretti_chapter_2018}, there are four vehicle categories characterised by very different performance. This results in the formation of traffic congestion, obliging the fastest category, i.e. Le Mans Prototype 1 (LMP1), to lap other vehicles many times during the six hours of an event. The LMP1 category consists of hybrid vehicles that have to respect the constraints set by the technical regulation \cite{regulations} on the usage of both electrical and thermal energy. Race engineers define how to use the energy budget in an optimal way along the track with the aim of minimising the lap time.
In recent years, many state-of-the-art works have addressed the problem of lap strategy optimisation for electric \cite{anselma_optimal_2021, yesil_strategy_2013, betancur_heuristic_2017, herrmann_minimum_2020, borsboom_convex_2021, broere_minimum-lap-time_2021} and hybrid \cite{duhr_time-optimal_2021, salazar_real-time_2017, salazar_time-optimal_2017} race vehicles. In the majority of cases, the formulations include very detailed and complex models of the powertrain, but do not include the presence of competitors in the driving scenario, which instead represents a fundamental aspect, as discussed above. Competitors are typically neglected due to the difficulty of effectively simulating their behaviour, as well as of including this information in the optimisation problem, which may become burdensome and not solvable in real time. However, this may result in overtakings being performed at points along the track where a significant deviation from the pre-computed optimal trajectory is necessary, thus causing relevant time losses.
\vspace{0.3cm}
\noindent \textbf{Statement of contributions.} To bridge this gap, we propose a computationally efficient procedure to identify offline the best strategy for the utilisation of the energy budget along the track, taking into account realistic traffic conditions. By best strategy we mean the one that statistically minimises the lap time while respecting the technical regulation.
Our contribution is threefold.
\paragraph{Statistical modelling of the competitors' behaviour} we perform a statistical analysis to practically describe the behaviour of the competitors along the track in WEC events, in terms of travel time and of their mutual interactions, i.e. overtaking probability. The statistical analysis is necessary since the behavioural model of the competitors is unknown. Previous events have been analysed to extract the free sector time distributions and the overtaking probabilities. The latter refer to the probability of occurrence of overtakings between two vehicle categories along the track. This information is later employed to generate realistic simulations of the competitors' behaviour.
\paragraph{Computation of traffic-free optimal strategies}
theoretically, it would be necessary to define the best points of application of the electric motor in real time according to the actual traffic conditions, but this task is computationally too expensive to guarantee sufficiently fast cycle times. Therefore, the following solution is proposed.
We generate a set of possible traffic-free optimal solutions, which aim to minimise the lap time while respecting the constraints imposed by the technical regulation. The set of varied solutions is obtained by imposing different extra constraints on the distribution of the powertrain energy budget along the track, according to engineering expertise. The optimisations are solved using Genetic Algorithms, for their computational speed and ease of tuning. This choice is supported by a comparison with a more classical Mixed Integer Quadratically Constrained Program (MIQCP). In both cases, the ego-vehicle is described through a longitudinal model. The set of traffic-free candidate optimal strategies is finally tested in realistic traffic conditions to determine the most suitable one, as explained in the next point.
\paragraph{Optimal strategy in presence of traffic}
to identify the optimal strategy in the presence of traffic, we propose an evaluation metric based on Stochastic Dynamic Programming (SDP) \cite{SDP1,SDP2}. The dataset used by SDP is generated through Monte Carlo (MC) simulations \cite{Alexandrov2011}, relying on a multi-agent influence/reaction model and on the previously computed statistics of the competitors.
\vspace{0.3cm}
\noindent \textbf{Sections organisation.} The remainder of this paper is organised as follows. Sec. \ref{sec: Related work} presents state-of-the-art related work. In Sec. \ref{sec:competitors_performance}, we perform the statistical analysis of the past WEC events, to compute the free sector times and overtaking probability distributions. Sec. \ref{sec:vehicle_model} details the longitudinal vehicle model that is used to solve the powertrain energy budget optimisation problem. The latter, considering traffic-free conditions, is then formulated and solved in Sec. \ref{sec:off_lap_optimization}, comparing a Mixed Integer Quadratically Constrained Program (MIQCP) formulation and Genetic Algorithms. In Sec. \ref{sec:on_simulations}, we show how to generate Monte Carlo numerical simulations of the competitors' positions along the track, employing the free sector times and the overtaking probabilities distributions, as well as a multi-agent model. Finally, in Sec. \ref{sec:results}, the previously computed traffic-free strategies are combined with the traffic-aware Monte Carlo simulations, and Stochastic Dynamic Programming is employed to evaluate the statistically best strategy in presence of traffic. Finally, conclusions are discussed in Sec. \ref{sec: conclusions}.
\section{Related work}
\label{sec: Related work}
Simulating the competitors' motion in racing events is an active field of research. A realistic simulator for circuit motorsports is proposed in \cite{heilmeier_race_2018}. It includes the effects
of tire degradation, fuel mass loss, pit stops and overtaking. However, it employs a lap-wise discretisation, which is incompatible with the goal of our work. In fact, we aim to generate an optimal lap strategy that is section-wise. A discrete-event simulation model is developed in \cite{bekker:toyota}, which is suitable for decision making to define the race strategy. The track is divided into sections. For each of them, the vehicle and environment characteristics, such as the fuel mass and the air resistance penalty, are taken into account. However, the computation of the lap time is performed using a deterministic approach, without considering the uncertainty related to the interactions between the vehicles, which is our target.
Many state-of-the-art works deal with energy management for electric and hybrid vehicles from a lower-level perspective compared to our work. Time-optimal energy management and gear shifting for hybrid race cars is investigated in \cite{duhr_time-optimal_2021}. Given fuel and battery consumption targets, the authors implement a computationally efficient algorithm to solve the problem, mixing convex optimisation, Dynamic Programming and Pontryagin's minimum principle. In \cite{7986999}, a similar problem is solved for real-time control of the Formula 1 power unit using a two-level Model Predictive Control scheme. Minimum-lap-time optimisation for all-wheel-drive electric race cars is presented in \cite{broere_minimum-lap-time_2021}. An optimal adaptive race strategy for Formula E cars is presented in \cite{anselma_optimal_2021}. It is based on an adaptive equivalent consumption minimisation strategy (A-ECMS) approach, and compared with a globally optimal benchmark provided by Dynamic Programming. In \cite{yesil_strategy_2013}, a lap strategy optimisation method based on a Big Bang - Big Crunch approach is introduced for solar cars in long-distance races. Finally, heuristic methods for solar cars are compared in \cite{betancur_heuristic_2017}. One of the implemented methods is Genetic Algorithms, which is adopted in our framework.
The works presented here accurately represent the dynamics of the vehicle powertrain. They cannot however be employed in our framework since they neglect competitors and overtakings, which affect higher-level strategic decisions.
\section{Statistical analysis of the competitors' performance}
\label{sec:competitors_performance}
One of the key contributions of the proposed methodology for lap strategy optimisation is the modelling of the competitors' motion along the track. Modelling and simulating the behaviour of the competitors in fact allows designing and evaluating lap strategies that can be effectively actuated in realistic racing situations. In this section, we detail how to extract a set of useful statistical indices from the publicly available fraction of the data collected in previous WEC races. The set of statistical indices is meant to concisely describe the competitors' performance during the race, and it will later be used to simulate their motion along the track.
Since WEC events are long-lasting races, the amount of generated data provides a relevant statistical basis for our purpose. However, since the GPS data of the vehicles are provided by the Federation Internationale de l'Automobile (FIA) to each team (for its own vehicle only) during the race and are not publicly available, the proposed approach relies on an alternative procedure. Based on the statistical distribution of the sector time data, we aim to describe the behaviour of each competitor during the race in a probabilistic fashion. The following sections illustrate the entire procedure.
\subsection{Dataset and data cleaning}
In motorsport competitions, circuits are typically divided into three main sectors. The sector times indicate the time interval spent by a car in each sector of the circuit during a specific lap. Unlike the GPS data, the sector time database is freely available\footnote{\url{http://fiawec.alkamelsystems.com/}}. An example of the database structure is reported in Table \ref{tab:data_structure}, where the columns $S_1$, $S_2$ and $S_3$ contain the three sector times.
\begin{table*}[htb]\normalsize
\caption{General structure of the FIA sector times database}
\label{tab:data_structure}
\centering
\begin{tabular}{cccccccccc}
\toprule
\# & Lap & Stop & {$S_1$} & {$S_2$} & {$S_3$} & {Elapsed} & Class & Group & Team \\
& & & {[s]} & {[s]} & {[s]} & {[s]} & & & \\
\midrule
1 & 1 & & 33.978 & 38.779 & 32.358 & 105.115 & LMP1 & H & Porsche \\
1 & 2 & & 33.846 & 37.727 & 31.753 & 208.441 & LMP1 & H & Porsche \\
1 & 3 & & 33.340 & 37.789 & 36.823 & 316.393 & LMP1 & H & Porsche \\
\vdots & \vdots & & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
77 & 30 & & 40.246 & 46.643 & 40.224 & 3833.398 & LMGTE Am & \vdots & Porsche \\
77 & 31 & B & 40.622 & 47.265 & 45.367 & 3966.652 & LMGTE Am & \vdots & Porsche \\
77 & 32 & & 121.453 & 45.261 & 38.340 & 4171.706 & LMGTE Am & \vdots & Porsche \\
\bottomrule
\end{tabular}
\end{table*}
To eliminate spurious laps from the dataset, e.g. laps led by the safety car, the corresponding sector times have been clustered through the DBSCAN algorithm \cite{DBSCAN}, and only the data belonging to the cluster with the fastest sector times have been considered for the successive analyses.
The clusters of fast sector times and the outliers are shown in Figure \ref{fig:clusters} for each car.
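A minimal sketch of this cleaning step is reported below in Python, assuming the sector times of one car are stored in a NumPy array; the DBSCAN parameters shown here are illustrative and not the tuned values used for the figures.
\begin{verbatim}
import numpy as np
from sklearn.cluster import DBSCAN

def keep_fast_laps(sector_times, eps=2.0, min_samples=5):
    """sector_times: (n_laps, 3) array with S1, S2, S3 of one car.
    Returns the laps in the cluster with the fastest mean lap time."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(sector_times)
    clusters = [c for c in np.unique(labels) if c != -1]   # -1 marks outliers
    best = min(clusters,
               key=lambda c: sector_times[labels == c].sum(axis=1).mean())
    return sector_times[labels == best]
\end{verbatim}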
\begin{figure}[tbp]
\centering
\subfloat[$S1$.]{\includegraphics[width=0.7\columnwidth]{Cluster1}}\\
\subfloat[$S2$.]{\includegraphics[width=0.7\columnwidth]{Cluster2}}\\
\subfloat[$S3$.]{\includegraphics[width=0.7\columnwidth]{Cluster3}}
\caption{Clusters subdivision of the sector times according to DBSCAN.}
\label{fig:clusters}
\end{figure}
\subsection{Forecasting the competitors' speed profiles}
Resorting to the sector times, we aim to estimate the speed profile of each competitor. From it, it is straightforward to derive an estimate of the competitors' positions during the race, which is fundamental to statistically model their behaviour.
The proposed procedure to estimate the speed profiles is now described. Considering a lap performed by our vehicle in the absence of traffic and without using the KERS, we extract a reference profile from GPS data, as shown in Fig. \ref{fig:speed_profile_reconstructed}. We then scale it according to the measured sector times to estimate the speed profile of each competitor. The adopted linear scaling equation is
\begin{equation}
    V_{c,i,j}(k) = \dfrac{V^{ref}_{i}(k)\, T^{ref}_{i}}{T_{c,i,j}},
\end{equation}
where
\begin{itemize}
\item $T^{ref}_{i}$ is the $i$-th sector time performed by the car in the reference video;
\item $V^{ref}_{i}(k)$ is the speed of the car in the reference video, at the frame $k$ of the $i$-th sector;
\item $T_{c,i,j}$ is the $i$-th sector time of the $j$-th lap performed by the $c$-th competitor;
\item $V_{c,i,j}(k)$ is the reconstructed speed of the $c$-th competitor, at the frame $k$ in the $i$-th sector of the $j$-th lap;
\item $i = 1, 2, 3$ indicates the three sectors;
\item $j = 1, \dots, J(c)$, with $J(c)\in\mathbb{N}_0$ being the total number of laps performed by the competitor $c$;
\item $c = 1, \dots, C$, with $C\in\mathbb{N}_0$ being the total number of competitors.
\end{itemize}
\begin{figure}[htb]
\centerline{\includegraphics[width=0.95\columnwidth]{Speed_profile}}
\caption{Speed profile reconstructed using the linear scaling approach.}
\label{fig:speed_profile_reconstructed}
\end{figure}
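A compact sketch of the scaling step is given below in Python; the variable names are illustrative, and the reference speed samples are assumed to be grouped per sector.
\begin{verbatim}
def scaled_speed_profile(v_ref, t_ref, t_comp):
    """v_ref:  list of three arrays, frame-wise reference speed per sector.
    t_ref:  reference sector times [s] (length 3).
    t_comp: measured sector times [s] of one competitor lap (length 3).
    A slower sector time yields a proportionally lower speed estimate."""
    return [v_ref[i] * t_ref[i] / t_comp[i] for i in range(3)]
\end{verbatim}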
\subsection{Statistics of the competitors' behaviour}
Referring to the predicted speed profiles, the following statistics have been extracted from the cleaned dataset:
\begin{itemize}
\item the \textit{free sector times} distributions;
\item the \textit{overtaking probabilities}.
\end{itemize}
The free sector times are defined as the sector times performed by a competitor with the preceding vehicle at least 100 m ahead for the whole duration of the sector. Under these conditions, it is possible to assume that the competitor's performance and behaviour have not been influenced by the other competitors. Free sector times are useful to forecast the performance of the competitors in the absence of interactions. In Figure \ref{fig:FreeSectorTimes}, an example of the sector time distributions is shown for a competitor.
\begin{figure}[htb]
\centerline{\includegraphics[width=0.95\columnwidth]{FreeSectorTimes}}
\caption{Free sector times distributions for a single competitor.}
\label{fig:FreeSectorTimes}
\end{figure}
The overtaking probabilities are modelled as functions of the section and of the categories of the two competitors involved. These statistics are useful to practically describe the interactions between competitors along the track. Sections are a finer subdivision of the circuit with respect to the sectors, and they allow distinguishing between straights and curves, which are typically characterised by very different probabilities of overtaking. Figure \ref{fig:sections} represents an arbitrary subdivision into $37$ sections of the Bahrain circuit, which has been considered as the test circuit to validate the proposed approach.
\begin{figure}[htb]
\centerline{\includegraphics[width=0.9\columnwidth]{SectionsPNG}}
\caption{Subdivision of the Bahrain circuit into sections.}
\label{fig:sections}
\end{figure}
Denoting with $A$ the category of the vehicle that is attempting an overtake and with $B$ the category of the vehicle that may be overtaken, the overtaking probability $\mathcal{P}(A,B,i)$ of the pair $(A,B)$ along section $i$ is computed as
\begin{equation*}
\mathcal{P}(A,B,i) =
\frac{\xi(A,B,i)}{\phi(A,B,i)},
\end{equation*}
where $\xi(A,B,i)$ is the number of overtakings of $A$ on $B$ in section $i$, and $\phi(A,B,i)$ is the number of times $A$ and $B$ have been in section $i$ at a distance of at most $10$ meters from each other.
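A possible implementation of this counting procedure is sketched below in Python; the encounter log format is an illustrative assumption about how close-range situations could be stored after reconstructing the positions.
\begin{verbatim}
from collections import defaultdict

def overtaking_probabilities(encounters):
    """encounters: iterable of tuples (cat_A, cat_B, section, overtaken),
    one entry per situation in which A ran within 10 m of B in a section;
    'overtaken' is True if A passed B within that section."""
    xi = defaultdict(int)    # overtakings of A on B in section i
    phi = defaultdict(int)   # close encounters of (A, B) in section i
    for cat_a, cat_b, sec, overtaken in encounters:
        phi[(cat_a, cat_b, sec)] += 1
        xi[(cat_a, cat_b, sec)] += int(overtaken)
    return {key: xi[key] / phi[key] for key in phi}
\end{verbatim}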
Let us define the following notation:
\begin{itemize}
\item LMP1: Le Mans Prototype 1;
\item LMP2: Le Mans Prototype 2;
\item LMGTE Pro: Le Mans Grand Touring Endurance Professionals;
\item LMGTE Am: Le Mans Grand Touring Endurance Amateurs.
\end{itemize}
Then, we can represent the overtaking probabilities for each pair of vehicle categories and each section. An example is provided in Figure \ref{fig:over_prob}, for the classes LMP1 and LMP2.
\begin{figure}[htb]
\centering
\subfloat[Class LMP1.]{\includegraphics[width=0.95\columnwidth]{Prob1}}\\
\subfloat[Class LMP2.]{\includegraphics[width=0.95\columnwidth]{Prob2}}
\caption{Examples of overtaking probabilities distributions.}
\label{fig:over_prob}
\end{figure}
\section{Ego-vehicle model}
\label{sec:vehicle_model}
The statistics derived in the previous section are used to model the behaviour of the competitors, whose actual dynamics is unknown due to lack of information. Considering the ego-vehicle instead, for which we aim to design the optimal lap strategy, the necessary information to model its dynamics is supposed to be available. Therefore, we propose here a dynamic model of the ego-vehicle, which is used to study the effect of the electrical energy usage on the speed profile. In the next section, the model will be used to compute optimal lap strategies.
The designed model is the result of a trade-off between computational complexity
and accuracy in the description of the relevant
dynamics. The longitudinal dynamics of the vehicle is modelled as
\begin{equation}
m \dot{v} = F_{x,f} + F_{x,r} - F_{aero} - R_f - R_r - m g \sin{\alpha},
\label{eq:EOM}
\end{equation}
where
\begin{itemize}
\item $m$ is the total vehicle mass, accounting also for the fuel and the driver;
\item $g$ is the gravity acceleration;
\item $\alpha$ is the ground slope;
\item $v$ is the vehicle speed;
\item $h$ is the height with respect to the road at which the vehicle centre of mass is located;
\item $R_f$ is the resistive rolling force at the front wheels;
\item $R_r$ is the resistive rolling force at the rear wheels;
\item $F_{aero}$ is the aerodynamic resistive force;
\item $F_{x,f}$ is the thrust applied to ground by the electric motor through the front tires;
\item $F_{x,r}$ is the thrust applied to ground by the combustion engine through the rear tires;
\item $F_{down,f}$ and $F_{down,r}$ are the aerodynamic downforces, split between the front and rear tires, respectively;
\item $F_{z,f}$ and $F_{z,r}$ are the front and rear vertical tire forces, respectively;
\item $2L$ is the distance between the front and rear axles.
\end{itemize}
A schematic representation of the involved quantities is depicted in Figure \ref{fig:car}.
\begin{figure}[htb]
\centerline{\includegraphics[width=\columnwidth]{Auto}}
\caption{Schematic representation of the ego-vehicle (LMP1).}
\label{fig:car}
\end{figure}
Since the torque curves of the electric motor and of the combustion engine are known, it is possible to write
\begin{equation}
F_{comb} = F_{comb} \Bigl(T_{comb}(r_{comb}), \tau_{comb}(q_{comb}) \Bigr),
\label{eq: eq comb}
\end{equation}
\begin{equation}
F_{el} = F_{el} \Bigl(T_{el}(r_{el}), \tau_{el} \Bigr).
\label{eq: eq el}
\end{equation}
With reference to \eqref{eq: eq comb} and \eqref{eq: eq el}, $F_{comb}$ represents the theoretically available thrust at the rear wheels provided by the combustion engine, $T_{comb}$ is the engine torque, $r_{comb}$ is the engine speed in rpm, $\tau_{comb}$ is the gear ratio and $q_{comb}$ is the gear selected by the driver. Similarly, $F_{el}$ represents the theoretical thrust provided by the electric motor at the front wheels, having in this case just one transmission ratio $\tau_{el}$ available.
The amount of thrust transferred from the motors to ground depends on the maximum forces that can be generated by the tires through friction. The absolute value of the maximum tire forces $|F_{ad,f}|$ and $|F_{ad,r}|$ at the front and rear wheels, respectively, are given by
\begin{subequations}
\begin{equation}
|F_{ad,f}| = \mu |F_{z,f}|, \end{equation}
\begin{equation}
|F_{ad,r}| = \mu |F_{z,r}|,
\end{equation}
\end{subequations}
where $\mu$ is the tire-road friction coefficient. The vertical forces can be calculated through moment equilibrium as
\begin{subequations}
\begin{equation}
\begin{split}
F_{z,f} = & \,\,\,\frac{- F_{aero} \cdot h_{aero} + F_{down,f}\cdot 2L}{2L}+\\[1ex]
& + \frac{- m\dot{v}h - mgh\sin(\alpha) + mgL\cos(\alpha)}{2L},
\end{split}
\end{equation}
\begin{equation}
\begin{split}
F_{z,r} = & \,\,\,\frac{F_{aero} \cdot h_{aero} + F_{down,r}\cdot 2L}{2L} + \\[1ex]
& + \frac{m\dot{v}h + mgh\sin(\alpha) + mgL\cos(\alpha)}{2L},
\end{split}
\end{equation}
\end{subequations}
where
\begin{equation}
\begin{split}
F_{down,\cdot} &= \frac{1}{4} \rho c_z S v^2,
\end{split}
\label{eq: downforce}
\end{equation}
$\rho$ is the air density, $c_z$ is the lift coefficient and $S$ is the reference surface. The aerodynamic force is given by
\begin{equation}
F_{aero} = \frac{1}{2} \rho c_x S v^2,
\end{equation}
where $c_x$ is the drag coefficient. Finally, the resistive rolling forces are
\begin{equation}
R_{\cdot} = C_{res} \cdot F_{z,\cdot},
\end{equation}
where $C_{res}$ is the rolling resistance coefficient.
The maximum longitudinal thrust that each tire can generate through friction can be calculated by vector difference between the maximum tire force and the lateral tire force experienced during curves. The lateral tire forces $F_{y,f}$ and $F_{y,r}$ at the front and rear tires, respectively, can be computed with fair approximation considering a curve with a radius of curvature $r$ and constant vehicle speed $v$ as
\begin{equation}
m \frac{v^2}{2r} = F_{y,\cdot}.
\label{eq: lateral forces}
\end{equation}
Therefore, the absolute value of the maximum longitudinal forces $F_{t,f}$ and $F_{t,r}$ exchangeable with ground by the front and rear tires, respectively, are
\begin{equation}
|F_{t,\cdot}| = \sqrt{|F_{ad,\cdot}|^2 - |F_{y,\cdot}|^2}.
\label{eq: maximum allowed thrust forces}
\end{equation}
The driving/braking torques commanded by the vehicle powertrain depend on the usage mode. In the WEC events under analysis, there are four different modes. The vehicle central unit can decide whether to power the electric motor and combustion engine at the same time (mode 1), power the combustion engine only (mode 2), undergo sailing\footnote{Sailing is the condition for which the combustion engine is automatically powered off by the control unit to satisfy the technical constraint on the fuel usage, even if the driver applies full throttle. In this powertrain usage mode, the vehicle is decelerated by the aerodynamic forces and by the intervention of the KERS, as explained later.} (mode 3) or actuate the brakes (mode 4). Different modes imply different longitudinal tire forces, and thus different vehicle accelerations. In modes 1 and 2, the absolute value of the longitudinal tire forces can be computed as the minimum value between the thrust that the motors are capable to provide and the thrust that the tires are capable to transfer, that is
\begin{subequations}
\begin{equation}
|F_{x,f}| = \min (|F_{el}|, |F_{t,f}|),
\end{equation}
\begin{equation}
|F_{x,r}| = \min (|F_{comb}|, |F_{t,r}|).
\end{equation}
\end{subequations}
In mode 3, the total longitudinal force is equal to the force deriving from the electric torque during sailing
\begin{equation}
|F_{x,r}| + |F_{x,f}| = |F_{sail}|.
\end{equation}
Finally, during braking the total longitudinal force is
\begin{equation}
|F_{x,r}| + |F_{x,f}| = \min(|F_{dec}|, |F_{t,f}| + |F_{t,r}|),
\label{eq: braking longitudinal force}
\end{equation}
where $F_{dec}$ is the deceleration force generated by brakes.
Considering a spatial discretisation of $2$ m, the speed profile is reconstructed using the longitudinal model and compared to a reference one.
Given the vehicle velocity and the throttle/brake commands at the current spatial discretisation point, all of the quantities are evaluated through \eqref{eq:EOM}-\eqref{eq: braking longitudinal force}, so as to obtain the vehicle acceleration $\dot{v}$, and then the velocity at the next discretisation point.
The procedure is repeated iteratively for each discretisation point.
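The iterative reconstruction can be summarised by the short Python sketch below, where the net longitudinal force is left as a user-supplied function evaluating the model equations above; the names and the guard on low speeds are illustrative.
\begin{verbatim}
import numpy as np

def speed_profile(v0, ds, n_points, mass, f_long):
    """Forward-Euler reconstruction of the speed profile on a spatial grid.
    f_long(v, k): net longitudinal force [N] at grid point k for speed v,
    i.e. thrust minus aerodynamic, rolling and gravity terms."""
    v = np.empty(n_points)
    v[0] = v0
    for k in range(n_points - 1):
        a = f_long(v[k], k) / mass            # m * dv/dt = sum of forces
        # chain rule: dv/ds = a / v
        v[k + 1] = max(v[k] + ds * a / max(v[k], 1e-3), 0.0)
    return v
\end{verbatim}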
In the following, the speed profile computed for the Bahrain circuit is compared with the one that the reference car experienced during the real race. The results are shown in Fig. \ref{fig:model_application1}.
\begin{figure}[tbp]
\centerline{\includegraphics[width=0.8\columnwidth]{Speed_comparison}}
\caption{Comparison between the reference speed profile and the one computed by means of the longitudinal vehicle model.}
\label{fig:model_application1}
\end{figure}
It is evident that the longitudinal vehicle model generates an overestimated speed profile with respect to the real one. This is due to different effects that have not been taken into account by the model. Therefore, three corrective coefficients are introduced into the model and properly tuned to better fit the real data. The corrective coefficients are:
\begin{itemize}
\item the \textit{engine coefficient}, which scales the thrust provided by the combustion engine;
\item the \textit{adherence coefficient}, which scales the friction coefficient $\mu$ in low-speed curves;
\item the \textit{downforce coefficient}, which scales the lift coefficient $c_z$ in high-speed curves.
\end{itemize}
After introducing the corrective coefficients into the model, the two speed profiles result to be coherent, as shown in Figure \ref{fig:model_application2}.
\begin{figure}[tbp]
\centerline{\includegraphics[width=0.8\columnwidth]{Speed_comparison2}}
\caption{Comparison between the reference speed profile and the one computed by means of the longitudinal vehicle model, after tuning the corrective coefficients.}
\label{fig:model_application2}
\end{figure}
\section{Traffic-free lap strategy optimisation}
\label{sec:off_lap_optimization}
Having developed and validated the ego-vehicle model, it is now possible to formulate and solve an optimisation problem for lap time minimisation through the management of the powertrain energy budget. We remark that, at this stage, we aim to identify the best points of activation of the electric motor in a lap, assuming the absence of traffic.
Theoretically, the optimal strategy should be computed online during the race, in order to take into account the actual traffic conditions. However, since this computation is time consuming, we compute offline a set of traffic-free energy management strategies, and subsequently evaluate the best one in simulated traffic conditions (see Sec. \ref{sec:results}).
The optimisation problem was initially solved using a MIQCP formulation. Although the method can provide optimal solutions, it has many shortcomings. First of all, it is time consuming, even for the resolution of a single optimisation problem. Therefore, generating a set of offline strategies would have been intractable. Moreover, numerical issues can degrade the performance of the solver, e.g. the presence of dense columns in the resulting matrices. The solver detects and eliminates as many dense columns as possible before optimising, but this may cause numerical instability. Finally, the problem could be ill-conditioned, which may lead to inconsistent results.
These issues may be attenuated by tuning the solver parameters, which is a difficult and time-consuming operation. To cope with this, we propose an alternative method to solve the optimisation problem, based on Genetic Algorithms (GA), which provides suboptimal solutions whereas it does not suffer from the aforementioned problems. The MIQCP approach is presented just to provide a baseline for assessing the suitability of the GA. Then, the optimisation is solved multiple times with GA, varying the constraints on the electric motor usage, to generate the set of candidate lap strategies.
The basic optimisation problem, written according to the spatial discretisation, is given by
\begin{subequations}
\begin{align}
\min_{\mathbf{u}(s)} \quad & t_{lap}, \label{eq: minimization 1}\\
\text{subject to} \quad & \mbox{\eqref{eq:EOM}-\eqref{eq: braking longitudinal force}},\\
& E_{el,used} \leq E_{el,used}^{max}, \label{eq: minimization 3}\\
& p \leq p^{max}, \label{eq: minimization 4}\\
& E_{el,rec} \geq E_{el,rec}^{min}, \label{eq: minimization 5}
\end{align}
\label{eq:minimization}
\end{subequations}
where
\begin{itemize}
\item $t_{lap}$ is the lap time;
\item $\mathbf{u}(s)$ is the optimisation variable, and represents the powertrain usage mode at each spatial discretisation point along the track through a one-hot encoding vector of four elements;
\item $E_{el,used}$ is the consumed electrical energy in a lap;
\item $E_{el,used}^{max}$ is the maximum allowed electrical energy consumption in a lap;
\item $p$ represents the kilograms of fuel consumption in a lap;
\item $p^{max}$ represents the maximum allowed kilograms of fuel consumption in a lap;
\item $E_{el,rec}$ is the amount of recovered electrical energy in a lap;
\item $E_{el,rec}^{min}$ is the minimum allowed amount of recovered electrical energy in a lap.
\end{itemize}
The constraints \eqref{eq: minimization 3} and \eqref{eq: minimization 4} are set by the WEC technical regulation. Referring to the Bahrain International Circuit, the corresponding limits are $E_{el,used}^{max} = 4924 \, \frac{\text{kJ}}{\text{lap}}$ and $
p^{max} ~=~ 1.381 \, \frac{\text{kg}}{\text{lap}}$.
\vspace*{0.2cm}
Instead, the constraint \eqref{eq: minimization 5} is necessary to keep the state of charge of the battery greater than or equal to a constant value at the end of each lap. Thus, the minimum amount of recovered electrical energy must be equal to the amount of consumed electrical energy, that is
\begin{equation}
E_{el,rec}^{min} = E_{el,used}.
\end{equation}
Since the electrical energy can be recovered through both the Heat Energy Recovery System (HERS), $E_{el,rec-HERS}$, and the Kinetic Energy Recovery System (KERS), $E_{el,rec-KERS}$, we can write
\begin{equation}
E_{el,rec} = E_{el,rec-HERS} + \text{E}_{el, rec-KERS},
\end{equation}
where the amount recovered through the HERS is track dependent and fixed for each lap.
In the next sections, the terms in \eqref{eq:minimization} will be linked to the longitudinal vehicle dynamics. Lap time optimisation based on a MIQCP formulation is detailed in Sec. \ref{sec: Mathematical Programming solver}, whereas the GA approach is presented in Sec. \ref{sec: Genetic Algorithm}.
\subsection{MIQCP solver}
\label{sec: Mathematical Programming solver}
We decide to divide the track into $N \in \mathbb{N}$ subportions with spatial discretisation $\Delta s=5$ m. The cost function and constraints are then reformulated as functions of the optimisation variables.
\subsubsection{Cost function}
We first link the vehicle acceleration to the lap time, whose minimisation is the objective of \eqref{eq:minimization}. The kinetic energy $E_{kin}$ of the vehicle satisfies
\begin{equation}
m \ddot{x}(s) = \frac{d E_{kin}}{ds} (s),
\end{equation}
where $s$ refers to the generic curvilinear abscissa. Identifying with $k=0,1,...,N$ the discretisation points along the curvilinear abscissa, i.e. for the generic variable $\phi$ it holds that $\phi(k):=\phi(k \Delta s)$, the above equation can be discretised using the forward Euler method, obtaining
\begin{equation}
E_{kin}(k+1) = E_{kin}(k) + m\, \Delta s\, \ddot{x}(k),
\label{eq: Ekin difference}
\end{equation}
where
\begin{equation}
E_{kin}(k) := \frac{1}{2} m v^2(k).
\label{Kinetic_Energy}
\end{equation}
Computing the vehicle speed through the above equation would involve a
square root operator. In order to preserve convexity, which is convenient for solving optimisation problems, the method presented in \cite{hybrid} is employed. It is possible to prove that a geometric mean inequality constraint written as
\begin{equation}
\sqrt{x_1 \cdot x_2} \geq x_3, \quad x_1, x_2 \geq 0,
\end{equation}
where $x_i \in \mathbb{R}$, can be reformulated as a second-order conic constraint
\begin{equation}
\begin{Vmatrix} 2 \cdot x_3 \\ x_1 - x_2 \end{Vmatrix}_2 \leq x_1 + x_2.
\end{equation}
Relaxing \eqref{Kinetic_Energy} as
\begin{equation}
E_{kin}(k) \geq \frac{1}{2} m v^2(k),
\end{equation}
and taking the square root from both sides we obtain
\begin{equation}
\sqrt{\dfrac{2E_{kin}(k)}{m}} \geq v(k).
\end{equation}
Hence, it is possible to reformulate the relaxed constraint as the following convex quadratic constraint
\begin{equation}
\begin{Vmatrix}
2 \cdot v(k+1) \\[1ex] \dfrac{2E_{kin}(k+1)}{m} - 1
\end{Vmatrix}_2 \leq \dfrac{2E_{kin}(k+1)}{m} + 1,
\label{eq: Ekin relaxed}
\end{equation}
linking the speed and the kinetic energy. The inverse of speed is usually defined as 'lethargy',
\begin{equation}
\frac{dt}{ds}(k) = \frac{1}{v(k)},
\label{eq:lethargy}
\end{equation}
that is the spatial derivative of time. Since \eqref{eq:lethargy} is a nonlinear constraint, we relax it and transform into a convex quadratic constraint
\begin{equation}
\begin{Vmatrix}
2 \\ \frac{dt}{ds}(k) - v(k)
\end{Vmatrix}_2 \leq \frac{dt}{ds}(k) + v(k).
\label{eq: lethargy relaxed}
\end{equation}
Finally, the lap time $t_{lap}$ can be expressed as
\begin{equation}
t_{lap} = \Delta s \sum_{k=0}^{N} \frac{dt}{ds}(k).
\end{equation}
Therefore, the relationship between the control inputs $\dot{v}(k)$, $k=0,...,N-1$ and the objective $t_{lap}$ is fully described by \eqref{eq: Ekin difference}, \eqref{eq: Ekin relaxed} and \eqref{eq: lethargy relaxed}, resorting to the intermediate variables $E_{kin}(k),\,v(k)$ and $\frac{dt}{ds}(k)$.
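For illustration, the two relaxed constraints can be written in a disciplined-convex form, for instance with the CVXPY modelling package in Python, which compiles them into second-order cone constraints equivalent to \eqref{eq: Ekin relaxed} and \eqref{eq: lethargy relaxed}; the sizes and parameter values below are placeholders, and the integer mode variables and the vehicle dynamics are omitted.
\begin{verbatim}
import cvxpy as cp

N, m, ds = 100, 900.0, 5.0               # placeholder grid size, mass, step
v    = cp.Variable(N, nonneg=True)       # speed at each grid point
E    = cp.Variable(N, nonneg=True)       # relaxed kinetic energy
dtds = cp.Variable(N, nonneg=True)       # lethargy dt/ds

constraints = [
    E >= 0.5 * m * cp.square(v),         # E_kin(k) >= 1/2 m v(k)^2
    dtds >= cp.inv_pos(v),               # dt/ds(k) * v(k) >= 1
]
lap_time = ds * cp.sum(dtds)             # objective to be minimised
problem = cp.Problem(cp.Minimize(lap_time), constraints)
\end{verbatim}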
\subsubsection{Constraints} To make the constraints of \eqref{eq:minimization} explicit, it is necessary to calculate the consumed fuel and used/recovered electrical energy in each portion of the track. The technical regulation defines the maximum fuel consumption per second at full thrust, that is $p^{max/s} = 0.0223 \, \frac{\text{kg}}{\text{s}}$. Given the combustion engine torque $T_{comb}(k)$,
the amount of consumed fuel kilograms can be computed as
\begin{equation}
p(k) = p^{max/s} \cdot \frac{T_{comb}(k)}{T_{comb}^{max}(k)} \cdot \frac{dt}{ds}(k) \cdot \Delta s.
\end{equation}
From the thrust provided by the electric motor and its efficiency in traction $\eta_{el, traction}$, it is straightforward to compute the electrical energy usage as
\begin{equation}
E_{el,used}(k) = \frac{T_{el}(k)}{\eta_{el, traction}} \cdot \Delta s.
\end{equation}
The electrical energy can be recovered through the KERS either during sailing or in the braking phase. In the first case, the KERS intervenes to partly recover the kinetic energy ($F_{sail}\leq 0$). In the second case, instead, the energy is recovered through a braking force $F_{dec} \leq 0$, lower in magnitude than the maximum value $F_{dec,max} \leq 0$ that can be guaranteed by the brakes and by the adhesion with the ground. Hence, according to the situation, the recovered electrical energy can be estimated using one of the two expressions
\begin{subequations}
\begin{equation}
E_{el, rec-KERS}(k) = |F_{sail}|(k) \cdot \Delta s \cdot \eta_{el, rec},
\end{equation}
\begin{equation}
E_{el,rec-KERS}(k) = \min(|F_{dec}|(k), |F_{dec,max}|) \cdot \Delta s \cdot \eta_{el, rec},
\end{equation}
\end{subequations}
where $\eta_{el, rec}$ is the efficiency of the electric motor in the recuperation phase. The constraints of \eqref{eq:minimization} can finally be reformulated as
\begin{subequations}
\begin{equation}
\sum_{k=0}^{N} p(k) \leq p^{max},
\end{equation}
\begin{equation}
\sum_{k=0}^{N} E_{el,used}(k) \leq E_{el,used}^{max},
\end{equation}
\begin{equation}
\sum_{k=0}^{N} E_{el,rec-KERS}(k) \geq \sum_{k=0}^{N} E_{el,used}(k) - E_{el,rec-HERS}.
\end{equation}
\end{subequations}
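The per-lap accounting behind these constraints is simple to reproduce; the Python sketch below sums the discretised fuel and electrical-energy terms and checks them against the regulation limits. The efficiency value and the array names are illustrative assumptions.
\begin{verbatim}
import numpy as np

def within_regulation(T_comb, T_comb_max, dtds, T_el, ds,
                      p_max_s=0.0223,       # kg/s at full thrust
                      p_max=1.381,          # kg/lap
                      e_el_max=4924e3,      # J/lap
                      eta_traction=0.95):   # illustrative efficiency
    """T_comb, T_comb_max, T_el and dtds are arrays over the spatial grid."""
    fuel = np.sum(p_max_s * (T_comb / T_comb_max) * dtds * ds)   # kg/lap
    e_el = np.sum((T_el / eta_traction) * ds)                    # J/lap
    return (fuel <= p_max) and (e_el <= e_el_max)
\end{verbatim}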
The optimisation problem was built using the toolbox YALMIP \cite{yalmip1}. The resulting Mixed Integer Quadratically Constrained Program (MIQCP) was then solved with IBM\textsuperscript{\textregistered} ILOG\textsuperscript{\textregistered} CPLEX\textsuperscript{\textregistered} version 12.8.0. The optimised speed profile is shown in Figure \ref{fig:speed_comp}, compared with the reference one. There is close agreement between the reference speed profile and the one resulting from the optimisation. Finally, the points of activation of the electric motor are highlighted in Figure \ref{fig:speed_KERS}, from which we can conclude that the solver decides to use the electric motor in the first part of the straights, achieving the highest possible speed in the shortest time. This is consistent with the behaviour of professional human drivers.
\begin{figure}[htb]
\centerline{\includegraphics[width=\columnwidth]{Speed_comp}}
\caption{Comparison between the reference speed profile and the optimal one, obtained by solving the MIQCP.}
\label{fig:speed_comp}
\end{figure}
\begin{figure}[htb]
\centerline{\includegraphics[width=\columnwidth]{Speed_kers}}
\caption{MIQCP-based optimal speed profile with detail of the electric motor activation points.}
\label{fig:speed_KERS}
\end{figure}
\subsection{Genetic-Algorithms-based solver}
\label{sec: Genetic Algorithm}
As previously mentioned, we now propose an alternative resolution method for the optimisation problem based on Genetic Algorithms.
The circuit is divided into $M=8$ regions, each one spanning from the beginning of a straight to the beginning of the following one. Unlike the MIQCP approach, whose optimisation variable embodies the powertrain usage mode directly, we consider $2$ optimisation variables for each region, namely the electrical energy consumption $E_{el, used}$ and the fuel consumption $p$. The total number of optimisation variables in a lap is therefore $2M=16$. We further assume that the energy budget of each region, encoded by the optimisation variables, is actuated continuously from the beginning of the region (straight) until it is completely consumed. This assumption is supported by the fact that the optimal way to apply energy is to use it in the first part of the acceleration region, as known to experts of the field. Finally, we highlight that, given an instance of the optimisation variables and the previous assumption, it is possible to uniquely identify the corresponding powertrain usage mode, speed profile and lap time.
With reference to the traditional nomenclature of GA, an instance of the $2M$ optimisation variables constitutes the genome of an individual of the population. We employ a discretisation step of $1$ kJ for the electrical energy consumption and $1$ g for the fuel consumption, and select the optimisation variable ranges as $\left[ 0, 2300 \right]$ $\frac{\text{kJ}}{\text{region}}$ and $\left[ 0, 1381 \right]$ $\frac{\text{g}}{\text{region}}$, respectively. The discretisation speeds up mutations and crossovers towards the optimised solution, and constitutes a fair approximation, since the step sizes are three orders of magnitude smaller than the maximum values that the variables can take. The inequality constraints \eqref{eq: minimization 3} and \eqref{eq: minimization 4} are then enforced through linear inequalities on the optimisation variables.
Finally, the fitness function is the lap time generated by the genome.
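A stripped-down version of this search is sketched below in Python. It hand-rolls the genetic loop (truncation selection, one-point crossover, uniform mutation) and, purely for brevity, handles the regulation limits by rescaling the genome instead of using the linear inequalities described above; the lap-time evaluator is assumed to wrap the longitudinal model, and the hot-start initialisation is omitted.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M = 8                                    # regions (straight to straight)
E_MAX, P_MAX = 4924, 1381                # kJ/lap and g/lap limits

def random_genome():
    """Genome = [E_el per region (kJ), fuel per region (g)], integer steps."""
    return np.concatenate([rng.integers(0, 2301, M),
                           rng.integers(0, 1382, M)]).astype(float)

def repair(g):
    """Scale energy/fuel down so the per-lap regulation limits hold."""
    e, p = g[:M].copy(), g[M:].copy()
    if e.sum() > E_MAX: e *= E_MAX / e.sum()
    if p.sum() > P_MAX: p *= P_MAX / p.sum()
    return np.concatenate([np.rint(e), np.rint(p)])

def evolve(lap_time, pop_size=2000, generations=200, mut_prob=0.05):
    """lap_time(genome) -> seconds, e.g. a wrapper of the vehicle model."""
    pop = [repair(random_genome()) for _ in range(pop_size)]
    for _ in range(generations):
        fitness = np.array([lap_time(g) for g in pop])
        parents = [pop[i] for i in np.argsort(fitness)[:pop_size // 2]]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.choice(len(parents), 2, replace=False)
            cut = rng.integers(1, 2 * M)               # one-point crossover
            child = np.concatenate([parents[a][:cut], parents[b][cut:]])
            mask = rng.random(2 * M) < mut_prob        # uniform mutation
            child[mask] = random_genome()[mask]
            children.append(repair(child))
        pop = parents + children
    return min(pop, key=lap_time)
\end{verbatim}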
A reasonable initial population was provided to the GA to hot start the optimisation and improve computational times. The simplest way to obtain a hot-start genome in the population is to compute the amount of electrical energy and fuel consumed in each region of the circuit in a real lap. The chosen reference lap was performed in absence of interactions with competitors and with the lowest lap time possible, so as to be close to the optimal solution. Another important hyperparameter to be carefully selected is the population size, which is critical for the convergence of the GA. Literature provides some heuristic methods to select the dimension of the population. Anyway, for the majority of the problems they indicate that an increase in the population size statistically reduces the error between the solution found by the GA and the exact solution \cite{GA1},
at the expense of an increase in the computational time. The most common rule of thumb establishes that the dimension of the population needs to be at least equal to the size of the genome. Converting the variables from decimal to binary, as typical for GA, we obtain
\begin{equation}
\begin{split}
(2300)_{10} = (100011111100)_2 \, &\rightarrow \, 12 \, \text{bit},\\
(1381)_{10} = (10101100101)_2 \, &\rightarrow \, 11 \, \text{bit},
\end{split}
\end{equation}
from which the initial population needs to have at least $23M = 184$ individuals. After analysis of the results, we finally changed the population size to $2000$ individuals to obtain stable and repeatable solutions.
Unlike the previous method based on the MIQCP formulation, which generates a single optimal solution in the absence of competitors, we employ the GA to create a set of optimal energy strategies, whose efficacy will then be evaluated in realistic traffic conditions. The evaluation is performed in Sec. \ref{sec:results}. The set of strategies is obtained by launching the GA many times, each time with different constraints on the electrical energy usage. The underlying reasoning for heuristically imposing extra constraints is the following. Since the strategies will be evaluated in complex traffic scenarios, activating the electric motor where overtaking is easier to perform, e.g. on a straight, can be beneficial to reduce the time loss related to the manoeuvre and, hence, the lap time. Table \ref{tab:policiesGA} shows the constraints applied in each strategy and the corresponding optimal lap time. It can be noticed that the applied constraints have an action range of at least 100 meters. This is done to respect the sensitivity/capability of the driver, who manually operates the power-off/on button of the electric motor.
Some exemplary optimal strategies are shown in Figure \ref{fig:speed_GA}.
\begin{table*}[htb] \normalsize
\caption{Strategies and their Electrical Energy constraints generated by the Genetic Algorithm}
\label{tab:policiesGA}
\centering
\begin{tabular}{ccc}
\toprule
{\bfseries Strategies \#} & {\bfseries Lap time} (s) & {\bfseries Electrical Energy constraints}\\
\midrule
1 & 104.028 & No constraints \\
2 & 104.092 & No KERS first 100 meters $8^{th}$ straight \\
3 & 104.152 & No KERS first 100 meters $2^{nd}$ straight \\
4 & 104.273 & No KERS first 110 meters $4^{th}$ straight \\
5 & 104.290 & No KERS first 100 meters $1^{st}$ straight \\
6 & 104.310 & No KERS first 200 meters $8^{th}$ straight \\
7 & 104.373 & No KERS first 100 meters $7^{th}$ straight \\
8 & 104.396 & No KERS first 300 meters $8^{th}$ straight \\
9 & 104.402 & No KERS first 100 meters $7^{th}$ and $8^{th}$ straights \\
10 & 104.415 & No KERS first 100 meters $1^{st}$ and $7^{th}$ straights \\
11 & 104.417 & No KERS first 200 meters $7^{th}$ straight \\
12 & 104.566 & No KERS first 100 meters $1^{st}$ and $8^{th}$ straights \\
13 & 104.603 & No KERS first 200 meters $1^{st}$ straight \\
14 & 104.674 & No KERS first 70 meters $5^{th}$ straight \\
15 & 104.733 & No KERS first 140 meters $5^{th}$ straight \\
\bottomrule
\end{tabular}
\end{table*}
\begin{figure}[tbp]
\centering
\subfloat[Strategy 1.]{\includegraphics[width=0.48\columnwidth]{Speed_opt}}
\subfloat[Strategy 5.]{\includegraphics[width=0.48\columnwidth]{Speed_1_100}}\\
\subfloat[Strategy 13.]{\includegraphics[width=0.48\columnwidth]{Speed_1_200}}
\subfloat[Strategy 10.]{\includegraphics[width=0.48\columnwidth]{Speed_1_7_100}}
\caption{Speed profiles resulting from the GA-based optimisation. The green lines represent the points of activation of the electric motor.}
\label{fig:speed_GA}
\end{figure}
\section{Multi-agent simulations of the competitors' behaviour}
\label{sec:on_simulations}
The approach presented in Sec. \ref{sec:off_lap_optimization} generated a set of offline strategies for the lap time minimisation in absence of competitors. To evaluate them in realistic race conditions, where competitors are present along the track and mutually interact with each other, it is necessary to simulate their behaviour. In this section, we explain how to build multi-agent Monte Carlo simulations of the competitors' behaviour from the statistics computed in Sec. \ref{sec:competitors_performance}.
The forecasts have to cover a time horizon that is long enough to allow the ego-car to complete the lap. For this reason, we decide to perform forecasts spanning two laps, starting from the initial condition.
The positions of the competitors in the absence of mutual interactions are forecast by means of the free sector time probability distributions derived in Sec. \ref{sec:competitors_performance}.
With reference to Figure \ref{fig:FreeSectorTimes}, we remark that the associated probability distributions are not Gaussian.
To cope with this, we perform random sampling from the empirical distributions, with enough draws to guarantee that the probabilistic behaviour of the competitors is reliably approximated.
The forecast position along the track of an exemplary competitor for different simulations is shown in Figure \ref{fig:free_sector2}.
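A minimal sketch of this sampling step is given below in Python; it bootstraps lap-by-lap sector times from the empirical free sector time samples of one competitor and accumulates them into elapsed times. The dictionary layout is an illustrative assumption.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def sample_elapsed_times(free_sector_times, n_laps=2):
    """free_sector_times: dict {sector: 1-D array of observed free sector
    times [s]} for one competitor. Returns the forecast elapsed time at
    each sector boundary over n_laps, drawn from the empirical samples."""
    draws = [rng.choice(free_sector_times[s], size=n_laps) for s in (1, 2, 3)]
    per_sector = np.column_stack(draws)        # shape (n_laps, 3)
    return np.cumsum(per_sector.ravel())
\end{verbatim}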
\begin{figure}[tbp]
\centerline{\includegraphics[width=0.95\columnwidth]{Space_profiles}}
\caption{Simulated distribution of forecast positions of an exemplary competitor in absence of mutual interactions.}
\label{fig:free_sector2}
\end{figure}
At this point, we introduce the interactions between the competitors.
To do so, a multi-agent model has been developed, resorting to the Influence/Reaction principle \cite{InfluenceReaction, Agent1, Agent2}. Each competitor is considered as a rational agent that tries to minimise its lap time. If a competitor gets close to the preceding one, it perceives its influence and has to react, either overtaking or following.
When an overtaking may be performed, the categories of the involved vehicles are evaluated and, taking into account the section of the circuit where the overtaking may occur, the overtaking probability is retrieved from the statistics derived in Sec. \ref{sec:competitors_performance}.
A random number in the range $[0,1]$ is extracted and compared to the overtaking probability. If it is lower, the overtaking succeeds; otherwise, the car keeps following the preceding car for the entire length of the section. The procedure is then repeated in the successive section.
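This decision step can be condensed into a few lines of Python, reusing the overtaking probability table estimated earlier; the function name and the table layout are illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def resolve_encounter(cat_a, cat_b, section, prob_table):
    """Influence/Reaction step: a car of category cat_a has caught up with
    a car of category cat_b in the given section. prob_table maps
    (cat_a, cat_b, section) to the estimated overtaking probability."""
    p = prob_table.get((cat_a, cat_b, section), 0.0)
    return rng.random() < p      # True: overtaking succeeds, else follow
\end{verbatim}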
Repeating these steps for all possible overtakings involving the competitors along the circuit generates a Monte Carlo numerical simulation. Two examples of Monte Carlo simulation are reported in Figure \ref{fig:MC}.
\begin{figure}[htb]
\subfloat[Example 1.]{\includegraphics[width=0.95\columnwidth]{MC1}}\\
\subfloat[Example 2.]{\includegraphics[width=0.95\columnwidth]{MC2}}
\caption{Monte Carlo numerical simulations of the competitors' motion along the track. The colours of the lines represent different categories: red for LMP1, blue for LMP2, green for LMGTE Pro and finally black for LMGTE Am. The circles, instead, have the following meaning: the blue one represents the position along the circuit where two vehicles get close enough to activate the Influence/Reaction principle, whereas the black one defines the position where the overtaking occurs.}
\label{fig:MC}
\end{figure}
To reliably forecast the behaviour of the competitors, it is necessary to generate a significant number of Monte Carlo numerical simulations. This means performing a large number of random samplings of free sector times and overtaking events.
These computations may however be time consuming. Indeed, they have to be performed in real time at the beginning of each lap, as they depend on the actual positions of the competitors along the circuit. This issue can be addressed through parallel computing, since the simulations are completely independent of each other. The statistical analysis of the outputs and the evaluation of the optimal strategy in the presence of realistic traffic conditions are described in the next section.
\section{Stochastic lap strategy optimisation in traffic conditions}
\label{sec:results}
Optimal strategies for the powertrain energy budget in the absence of traffic have been presented in Sec. \ref{sec:off_lap_optimization}. In turn, the Monte Carlo approach defined in Sec. \ref{sec:on_simulations} allows forecasting the positions of the competitors, accounting for their mutual interactions. We now propose a novel way to combine the two features and build a stochastic optimal solver, aiming to identify the strategy
that statistically guarantees the minimum lap time in the presence of traffic.
Stochastic Dynamic Programming (SDP), which is the method we employ in this work, is an optimisation-based technique for making sequential decisions under uncertainty.
In our framework, uncertainty is associated with the occurrence of overtakings. Since SDP is a discrete decision-making process, considering $n$ stages $t = 0, \dots, n-1$, the following ingredients are defined at each stage $t$:
\begin{itemize}
\item an initial state $s_t \in \mathbb{S}_t$, where $\mathbb{S}_t$ is the set of feasible states at stage $t$;
\item a decision variable $x_t \in \mathbb{X}_t$, where $\mathbb{X}_t$ is the set of feasible actions that can be chosen at stage $t$;
\item an immediate cost or reward function $r_t (s_t, x_t)$, representing the incurred cost or the gained reward at stage $t$ if an action $x_t$ is chosen at the state $s_t$;
\item a state transition function $g_t(s_t, x_t)$, defining the state change from $s_t$ towards $s_{t+1}$;
\item a discount factor $\alpha \in [0,1]$;
\item a conditional probability $Pr(s_{t+1} \lvert s_t,x_t)$, representing the probability to move into state $s_{t+1}$ given the current state $s_t$ and the chosen action $x_t$.
\end{itemize}
The optimal control problem can be iteratively described through the value function $f_t$ at the generic stage $t$ as
\begin{equation} \footnotesize
f_t(s_t) = \min_{x_t \in \mathbb{X}_t(s_t)} \biggl(r_t(s_t,x_t) + \alpha \sum_{s_{t+1}} Pr(s_{t+1} \lvert s_t,x_t) f_{t+1}(s_{t+1}) \biggr),
\label{eq:SDP}
\end{equation}
which represents the expected optimal cost that can be attained from the state $s_t$ if the optimal action $x_t$ is chosen.
According to \eqref{eq:SDP}, the optimal strategy is the one that minimises the value function at the initial stage of the problem, and can be computed resorting to policy iteration. However, we have already selected fifteen strategies a priori in Sec. \ref{sec:off_lap_optimization}, through the energy budget optimisation problem. Thus, we do not perform policy iteration, but policy evaluation: the value function deriving from each of the pre-computed strategies is evaluated. Finally, the best strategy is identified as the one that minimises the value function at the first stage.
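Policy evaluation on the resulting decision trees reduces to the backward recursion of \eqref{eq:SDP} with a fixed strategy. A minimal Python sketch is given below; the node layout and the numbers in the toy tree are purely hypothetical.
\begin{verbatim}
def evaluate_policy(node, alpha=1.0):
    """Backward pass of the value-function recursion for a fixed strategy.
    Each node is a dict with the immediate time loss 'cost' and a list of
    (probability, child) pairs in 'children'."""
    future = sum(p * evaluate_policy(child, alpha)
                 for p, child in node["children"])
    return node["cost"] + alpha * future

# toy single-encounter tree: overtaking succeeds with probability 0.7
tree = {"cost": 0.0, "children": [
    (0.7, {"cost": 0.0, "children": []}),   # pass, no time loss
    (0.3, {"cost": 1.2, "children": []}),   # follow, 1.2 s lost
]}
f0 = evaluate_policy(tree)                  # expected time loss of the strategy
\end{verbatim}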
To apply this technique, we first test each of the fifteen optimal energy strategies in each of the Monte Carlo numerical simulations. Figure \ref{fig:MC+Policy} reports an example of this procedure.
\begin{figure}[tbp]
\centerline{\includegraphics[width=0.95\columnwidth]{MC+Policy}}
\caption{Testing the energy budget strategy in a Monte Carlo numerical simulation. The thick red line represents the spatial profile of the reference car, whereas the other lines represent the competitors.}
\label{fig:MC+Policy}
\end{figure}
The parameters of this technique have been defined in the following way. The variable $s$, representing the state of the system, is defined as minus the number of possible overtakings still to be performed in the reference lap. Indeed, the positions of the reference car and of the competitors are known from the optimisation problem and from the Monte Carlo numerical simulations, respectively. Thus, for each combination of optimal strategy and numerical simulation, it is possible to compute the number of possible overtakings that the reference car can perform in the lap. For instance, with reference to Figure \ref{fig:MC+Policy}, three overtakings can occur. This implies that the state of the system is initialised to $s_1 = -3$. When an overtaking occurs, the state is increased by 1. The decision variable $x_t$ consists in the usage of the powertrain energy budget, coming from both the combustion engine and the electric motor. The variable $Pr$ refers to the probability of moving from state $s_t$ to $s_{t+1}$ once a specific decision $x_t$ is chosen. Since we have defined the state through the number of possible overtakings still to be performed in a lap, this probability is exactly the overtaking probability that we have drawn from the analysis of previous years' events, as a function of the categories of the involved vehicles and of the section where the overtaking may occur.
The variable $r_t$ is defined as a cost. When the reference car approaches a competitor, there is a specific overtaking probability that depends on the categories of the two vehicles and on the section where they are.
If the overtaking is not performed, the ego-vehicle needs to slow down, which involves a time loss. The latter can be computed as the difference between the time instant at which the reference car reaches the end of the section behind the preceding car and the time instant at which it would have reached the end of the section without traffic. Moreover, if the reference car is obliged to stay behind the preceding car in a section, it has to reduce its speed.
Once it is able to overtake the slower vehicle in a subsequent section, it needs time before reaching its reference optimal speed profile again. This time loss is also added to the cost.
The parameter $\alpha$ is a factor that discounts future costs, which is typically employed in infinite-time problems.
Since we deal with a finite-time problem, the parameter is set equal to $1$.
The $n$ stages of SDP coincide with the $37$ sections defined for the computation of the overtaking probabilities in Sec. \ref{sec:competitors_performance}. In each section the algorithm evaluates whether the reference car may perform an overtaking. If a competitor is close enough, there is a specific probability of performing the overtaking.
If the overtaking succeeds, the state is increased by one and there is no associated time loss. Otherwise, the reference car is obliged to follow the preceding vehicle, the state does not change, and a time loss is introduced in the cost function.
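For concreteness, the stage dynamics just described can be sketched in a few lines of Python (a minimal illustrative sketch, not the implementation used in this work: the section data, overtaking probabilities and time losses are placeholders, and, for simplicity, each potential overtaking is attempted only once, in the section where the competitor is caught):
\begin{verbatim}
import random

def simulate_lap(sections, rng=random):
    """Forward simulation of one lap of the stage dynamics.

    `sections` is a list of dictionaries, one per track section, with keys:
      'competitor_ahead' : True if a slower car is caught in the section
      'p_overtake'       : overtaking probability for that section/category
      'time_loss'        : time loss (s) if the ego-car stays behind
    Returns the total time loss and the final state (minus the number of
    overtakings still to be performed).
    """
    state = -sum(1 for sec in sections if sec['competitor_ahead'])
    total_loss = 0.0
    for sec in sections:
        if not sec['competitor_ahead']:
            continue
        if rng.random() < sec['p_overtake']:
            state += 1                       # overtaking succeeds: no cost
        else:
            total_loss += sec['time_loss']   # forced to follow: time loss
    return total_loss, state
\end{verbatim}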
It is convenient to graphically represent all of the probabilistic events arising from overtakings in a lap. In this regard, we adopt a suitable and common representation called Decision Tree. To build Decision Trees, the possible occurrence of an overtaking is evaluated in each section from the beginning to the end of the track (forward pass \cite{forward}). Then, a backward pass computes the value function at the initial stage of the problem, i.e. $f_0$. It consists of multiplying the probability of each event by the associated cost, and summing the results up stage by stage from the end to the beginning, according to \eqref{eq:SDP}. A portion of the Decision Trees for the optimal strategy is reported in Figure \ref{fig:tree}.
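Since the strategy is fixed, no minimisation is involved and the backward pass of \eqref{eq:SDP} reduces to a probability-weighted sum of costs over the tree (policy evaluation with $\alpha=1$). A minimal sketch, with an illustrative two-branch tree that does not reproduce the one of Figure \ref{fig:tree}:
\begin{verbatim}
def expected_cost(node):
    """Backward pass over a Decision Tree for a fixed strategy.

    A node is a dictionary with:
      'cost'     : time loss (s) incurred when this node is reached
      'children' : list of (probability, child_node) pairs (empty at a leaf)
    """
    return node['cost'] + sum(p * expected_cost(child)
                              for p, child in node['children'])

# Illustrative tree: one overtaking attempt succeeding with probability 0.6
# (no cost) and failing with probability 0.4 (0.8 s time loss).
tree = {'cost': 0.0,
        'children': [(0.6, {'cost': 0.0, 'children': []}),
                     (0.4, {'cost': 0.8, 'children': []})]}
f0 = expected_cost(tree)   # f_0 = 0.32 s
\end{verbatim}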
\begin{figure}[tbp]
\subfloat[Probability Decision Tree. The number above the state represents the probability of occurrence of a particular event.]{\includegraphics[width=0.95\columnwidth]{Tree}}\\
\subfloat[Cost Decision Tree. The number above the state represents the cost of a particular event, i.e. its associated time loss expressed in seconds.]{\includegraphics[width=0.95\columnwidth]{Tree2}}
\caption{Example of Probability and Cost Decision Trees. The negative value over each circle represents the state of the system, i.e. the number of overtakings still to be performed in the lap. A red circle with state equal to 1 indicates that all possible overtakings have already been performed for that probabilistic event. The left column of numbers identifies the decision stages where overtakings may occur, whereas the right column specifies the section along the track.}
\label{fig:tree}
\end{figure}
The value functions at time zero $f_0$ of each strategy are reported in Table \ref{tab:f1_results}.
The fifth strategy turns out to be the optimal one, having the lowest time loss. Although this strategy is approximately 0.3 s slower than the first one in the absence of traffic (i.e.\ for the reference car alone), it is statistically faster by approximately 0.5 s when traffic conditions are considered. Moreover, the fifth strategy has a high statistical significance, since it turned out to be the optimal strategy in approximately 70\% of the Monte Carlo numerical simulations. Further graphical interpretations are provided in Figures \ref{fig:time_loss_track} and \ref{fig:strategies_speed_prof} to support our approach. They both show that the optimal strategy tends to save electric power and release it to perform overtakings only in portions of the track where they produce lower time losses.
\begin{table}[tbp]\normalsize
\caption{Value function at the initial stage from Stochastic Dynamic Programming}
\centering
\begin{tabular}{cc}
\toprule
{\bfseries Strategy \#} & \bfseries $f_0$\\
& (s)\\
\midrule
1 & 2.131 \\
2 & 1.987 \\
3 & 1.792 \\
4 & 1.884 \\
5 & 1.662 \\
6 & 2.092 \\
7 & 2.176 \\
8 & 2.338 \\
9 & 2.359 \\
10 & 2.421 \\
11 & 2.428 \\
12 & 2.619 \\
13 & 2.211 \\
14 & 2.886 \\
15 & 2.534 \\
\bottomrule
\end{tabular}
\label{tab:f1_results}
\end{table}
\begin{figure}[htb]
\centerline{\includegraphics[width=0.7\columnwidth]{Time_loss_Bahrain_png2}}
\caption{Analysis of time losses due to overtakings in a full lap. The black-orange-red colours of the sections represent increasing time losses due to overtakings. The blue circle marks the position where the first overtaking is likely to occur, according to the forecast of the competitors' positions and to the default energy budget strategy, i.e. the first strategy. The red circle expresses the same concept for the fifth energy strategy, i.e. the optimal one. Since this strategy forces the solver not to use the electric motor in the first straight, the ego-car is expected to perform the first overtaking further along the track. This highlights that the proposed approach is suitable to shift overtakings to positions where they cause lower time losses.}
\label{fig:time_loss_track}
\end{figure}
\begin{figure}[htb]
\centerline{\includegraphics[width=0.95\columnwidth]{Speed_comp_result}}
\caption{Speed profiles resulting from the first and fifth strategies. The optimal strategy reaches a lower top speed in the first straight, due to a lower usage of the electrical energy. The saved energy is then spread over the last two straights, where it is easier to perform overtakings and reach higher speeds.}
\label{fig:strategies_speed_prof}
\end{figure}
The analysis can be extended to a sequence of laps, i.e. a stint of the race. Considering the fifth stint of the Bahrain 2017 event, we launch a Monte Carlo numerical simulation starting with the initial positions of the competitors, and identify the best among the fifteen pre-computed strategies through SDP. The positions of the ego-car during the real race are then replaced by the space profile deriving from the optimal strategy. Then, SDP is employed a second time to evaluate the statistical time loss due to overtakings that would have been generated through the optimal strategy. Figure \ref{fig:stint_result} shows that our method can lead to a time gain of $6.44$ s compared to the original strategy used during the race.
Running the procedure multiple times, we obtained an upper bound equal to $6.44 + 1.44$ s and a lower bound of $6.44-1.83$ s, corresponding to a $90\%$ confidence interval. Figure \ref{fig:stint_temp_gain} represents the temporal gain lap by lap, with the associated confidence interval. It is evident that the application of the proposed method can have a significant impact on the result of an event, since the time difference between the winner and the competitors is usually a few seconds.
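The bounds above are obtained empirically from the repeated runs; the percentile computation can be sketched as follows (the array of gains is synthetic and only serves as a placeholder for the outcomes of the runs):
\begin{verbatim}
import numpy as np

# gains[i] = total stint time gain (s) obtained in the i-th independent run of
# the Monte Carlo + SDP procedure (synthetic values, for illustration only).
rng = np.random.default_rng(0)
gains = 6.44 + rng.normal(0.0, 1.0, size=200)

lower, upper = np.percentile(gains, [5, 95])      # two-sided 90% interval
print(f"mean gain = {gains.mean():.2f} s, "
      f"90% CI = [{lower:.2f}, {upper:.2f}] s")
\end{verbatim}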
\begin{figure}[tbp]
\centerline{\includegraphics[width=0.95\columnwidth]{Stint_result}}
\caption{Result from the application of Stochastic Dynamic Programming to an entire stint of the race. The thick red line represents the space profile deriving from the suggested strategy at the final lap of the stint, whereas the thick red dotted line represents the positions occupied by the reference car during the last lap of the stint in the real race. The other lines represent the competitors' motion. Applying our approach to an entire stint of 26 laps, a time gain of 6.44 s would have been achieved.}
\label{fig:stint_result}
\end{figure}
\begin{figure}[htb]
\centerline{\includegraphics[width=0.95\columnwidth]{Stint_result2}}
\caption{Lap-by-lap temporal gain generated by our approach.}
\label{fig:stint_temp_gain}
\end{figure}
\section{Conclusions}
\label{sec: conclusions}
In this work, we have presented an efficient procedure to generate optimal lap strategies for LMP1 hybrid electric vehicles in WEC events. Particularly, the framework computes the optimal energy budget utilisation that statistically minimises the ego-vehicle lap time in presence of competitors, while complying with the technical regulations.
Our approach relies on multiple contributions. First, we have shown how to extract meaningful statistics regarding the competitors' motion along the track, using the limited amount of publicly available data from previous races. The statistics model the sector times and overtaking probability distributions of each vehicle type. They are used to develop realistic Monte Carlo simulations of the agents' motion and interactions in a lap. Second, we have computed a set of candidate traffic-free solutions to the ego-vehicle lap strategy optimisation problem using Genetic Algorithms, presenting them as a more computationally efficient alternative to the classical MIQCP formulation. Finally, the traffic-aware optimal solution is statistically identified among the candidate traffic-free policies using Stochastic Dynamic Programming. Relying on several Monte Carlo simulations of the competitors' motion, Stochastic Dynamic Programming is used to evaluate the best strategy among the candidate ones.
To validate our approach, we have applied it to a stint of a real race. The results show that our approach leads to an average time gain of $6.44$ s, compared to the strategy that was actually used during the real race. The time gain is particularly significant, since the time difference between the winner and the leading vehicles is usually a few seconds in WEC events.
Finally, we highlight that our strategy can be easily extended to other types of racing events. In fact, it only relies on an ego-vehicle longitudinal model and on the sector times data of the vehicles from previous races, which are commonly available. The extension to different types of events is left to future work.
\appendices
\section*{Acknowledgment}
This work has been developed in collaboration with Dr. Ing. h.c. F. Porsche AG. We would like to thank the Porsche Motorsport Team for providing sensitive data on WEC events and their vehicle.
A sincere thanks to Stephen Mitas, Porsche Chief Race Engineer, and to Emiliano Giangiulio and Vincenzo Scali, Porsche engineers, for
contributing to the establishment of this work and for their support.
\bibliographystyle{IEEEtran}
Multi-player competitions are a recurrent theme in physics, biology, sociology, and economics since they model several phenomena. We refer to the introduction of \cite{ben2007efficiency} for various examples of models of multi-player competitions related to evolution, social stratification, business world and other related fields.
Among all these fields, sports are indubitably one of the most famous examples where modelling competitions is important. This is the case for two reasons: first, there is accurate and easily accessible data; second, sports competitions are typically head-to-head, and so they can be viewed as a nice model of multi-player competitions with binary interactions.
Sports multi-player leagues where the outcome of each game is completely deterministic have been studied for instance by Ben-Naim, Kahng, and Kim~\cite{ben2006dynamics} (see also references therein and explanations on how this model is related to urn models). Later, Ben-Naim and Hengartner~\cite{ben2007efficiency} investigated how randomness affects the outcome of multi-player competitions. In their model, the considered randomness is quite simple: they start with $N$ teams ranked from best
to worst and in each game they assume that the weaker team
can defeat the stronger team with a fixed probability. They investigate the total number of games needed for the best team to win the championship with high probability.
A more sophisticated model was studied by Bena\"{\i}m, Benjamini, Chen, and Lima~\cite{MR3346459}: they consider a finite connected graph, and place a team at each vertex of the graph. Two
teams are called a \emph{pair} if they share an edge. At discrete times, a match is
played between each pair of teams. In each match, one of the teams defeats the other (and gets a point) with
probability proportional to its current number of points raised to some fixed
power $\alpha>0$. They characterize the limiting behavior of the proportion of points
of the teams.
In this paper we propose a more realistic model for sport multi-player leagues that can be briefly described as follows (for a precise and formal description see \cref{sect:the_model}): we start with $2n$ teams having i.i.d.\ initial strengths. Then we consider a calendar of the league composed of $2n-1$ different days, and we assume that on each day each team plays exactly one match against another team, in such a way that at the end of the calendar every team has played exactly one match against every other team (this coincides with the standard way in which calendars are built in real football leagues for the first half of the season). Moreover, we assume that on each day of the league, the initial strengths of the teams are modified by independent ergodic processes. Finally, we assume that a team wins against another team with probability given by a chosen function of the strengths of the two teams on the day the match is played.
We prove (see \cref{sect:main_results} for precise statements) a quenched law of large numbers and a quenched central limit theorem for the number of victories of a team according to its initial strength. Here quenched means that the results hold a.s.\ conditionally on the initial strengths of the teams in the league.
\subsection{The model}\label{sect:the_model}
We start by fixing some simple and standard notation.
\bigskip
\textbf{Notation.}
Given $n\in\mathbb{N}=\{1,2,3,\dots\}$, we set $[n]=\{1,2,\dots,n\}$ and $[n]_0=\{0,1,2,\dots,n\}$. We also set $\mathbb{R}_+=\mathbb{R}\cap[0,\infty)$.
We refer to random quantities using \textbf{bold} characters. Convergence in distribution is denoted by $\xrightarrow{d}$, almost sure convergence is denote by $\xrightarrow{a.s.}$, and convergence in probability by $\xrightarrow{P}$.
Given a collection of sets $\mathcal A$, we denote by $\sigma\left(\mathcal A\right)$ and $\lambda\left(\mathcal A\right)$ the $\sigma$-algebra and the monotone class generated by $\mathcal A$, respectively. Given a random variable $\bm X$, we denote by $\sigma\left(\bm X\right)$ the $\sigma$-algebra generated by $\bm X$.
For any real-valued function $f$ we denote with $f^2$ the function such that $f^2(\cdot)=(f(\cdot))^2$.
\bigskip
\noindent We can now proceed with the precise description of our model for multi-player leagues.
\bigskip
\noindent\textbf{The model.} We consider a league of $2n\in2\mathbb{N}$ teams denoted by $\{T_{i}\}_{i\in [2n-1]_0}$ whose \emph{initial random strengths} are denoted by $\{\bm s_{i}\}_{i\in [2n-1]_0}\in \mathbb{R}_+^{2n}$.
In the league every team $T_i$ plays $2n-1$ matches, one against each of the remaining teams $\{T_{j}\}_{j\in [2n-1]_0\setminus\{i\}}$. Note that there are in total ${2n}\choose{2}$ matches in the league. These matches are played in $2n-1$ different days in such a way that each team plays exactly one match every day.
For all $i\in[2n-1]_0$, the initial strength $\bm s_i$ of the team $T_i$ is modified every day according to a discrete time $\mathbb{R}_+$-valued stochastic process $\bm \xi^i=(\bm \xi^i_j)_{j\in \mathbb{N}}$. More precisely, the strength of the team $T_i$ on the $p$-th day is equal to $\bm s_i\cdot \bm \xi^i_p\in \mathbb{R}_+$.
We now describe the \emph{rules} for determining the winner of a match in the league. We fix a function $f:\mathbb R_+^2\to[0,1]$ that controls the winning probability of a match between two teams given their strengths. When a team with strength $x$ plays a match against another team with strength $y$, its probability of winning the match is equal to $f(x,y)$ and its probability of losing is equal to $1-f(x,y)$ (we are excluding the possibility of having a draw). Therefore, if the match between the teams $T_i$ and $T_j$ is played the $p$-th day, then, conditionally on the random variables $\bm s_i, \bm s_j,\bm \xi^i_p,\bm \xi^j_p$, the probability that $T_i$ wins is $f(\bm s_i\cdot \bm \xi^i_p,\bm s_j\cdot \bm \xi^j_p)$.
Moreover, conditionally on the strengths of the teams, the results of different matches are independent.
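The model can be summarised by the following minimal simulation sketch (Python; the calendar is built with the standard circle method, the tilting processes are taken, for simplicity, i.i.d.\ in time -- a particular case of the stationary weakly-mixing processes considered in \cref{ass:LLN} -- and the choices of $f$, $\mu$ and $\nu$ at the end are mere placeholders):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def round_robin(teams):
    """Circle-method calendar: 2n-1 days, each team plays once per day."""
    teams = list(teams)
    n = len(teams)
    days = []
    for _ in range(n - 1):
        days.append([(teams[i], teams[n - 1 - i]) for i in range(n // 2)])
        teams = [teams[0]] + [teams[-1]] + teams[1:-1]  # rotate all but the first
    return days

def simulate_league(n, f, sample_mu, sample_nu):
    """One league with 2n teams; returns initial strengths and numbers of wins."""
    s = sample_mu(2 * n)                    # initial strengths, i.i.d. with law mu
    wins = np.zeros(2 * n, dtype=int)
    for day in round_robin(range(2 * n)):
        xi = sample_nu(2 * n)               # daily tilts, marginal law nu
        for i, j in day:
            if rng.random() < f(s[i] * xi[i], s[j] * xi[j]):
                wins[i] += 1
            else:
                wins[j] += 1
    return s, wins

# Placeholder choices: f(x, y) = x / (x + y), mu = Unif[0, 1], nu = Unif[1/2, 2].
s, wins = simulate_league(n=50,
                          f=lambda x, y: x / (x + y),
                          sample_mu=lambda m: rng.uniform(0.0, 1.0, m),
                          sample_nu=lambda m: rng.uniform(0.5, 2.0, m))
\end{verbatim}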
\subsection{Goal of the paper}
The goal of this work is to study the model defined above when the number $2n$ of teams in the league is large. We want to look at the limiting behavior of the number of wins of a team with initial strength $s\in \mathbb{R}_+$ at the end of the league. More precisely, given $s\in \mathbb{R}_+$, we assume w.l.o.g.\ that the team $T_{0}$ has deterministic initial strength $s$, i.e.\ $\bm s_{0}=s$ a.s., and we set
\begin{equation}
\bm W_n(s)\coloneqq\text{Number of wins of the team }T_{0}\text{ at the end of a league with $2n$ players}.
\end{equation}
We investigate a quenched law of large numbers and a quenched central limit theorem for $\bm W_n(s)$.
\subsection{Our assumptions}
In the following two subsections we state some assumptions on the model.
\subsubsection{Assumptions for the law of large numbers}\label{ass:LLN}
We make the following natural assumptions\footnote{The second hypothesis is not needed to prove our results but it is very natural for the model.} on the function $f:\mathbb R_+^2\to[0,1]\:$:
\begin{itemize}
\item $f(x,y)$ is measurable;
\item $f(x,y)$ is weakly-increasing in the variable $x$ and weakly-decreasing in the variable $y$.
\end{itemize}
Recall also that it is not possible to have a draw, i.e.\ $f(x,y)+f(y,x)=1$, for all $x,y\in \mathbb{R}_+$.
Before describing our additional assumptions on the model, we introduce some further quantities.
Fix a Borel probability measure $\nu$ on $\mathbb{R}_+$; let $\bm \xi=(\bm \xi_\ell)_{\ell\in \mathbb{N}}$ be a discrete time $\mathbb{R}_+$-valued stochastic process such that
\begin{equation}\label{eq:stationarity0}
\bm \xi_\ell\stackrel{d}{=}\nu,\quad\text{for all}\quad \ell\in \mathbb{N},
\end{equation}
and\footnote{This is a weak form of the \emph{stationarity property} for stochastic processes.}
\begin{equation}\label{eq:stationarity}
\left(\bm \xi_\ell,\bm \xi_k\right)\stackrel{d}{=} \left(\bm \xi_{\ell+\tau},\bm \xi_{k+\tau}\right), \quad\text{for all}\quad \ell,k,\tau\in\mathbb{N}.
\end{equation}
We further assume that the process $\bm \xi$ is \emph{weakly-mixing}, that is, for every $A \in \sigma(\bm \xi_1)$ and every collection of sets $B_\ell \in \sigma(\bm \xi_\ell)$, it holds that
\begin{equation}\label{eq:unif_weak_mix1}
\frac{1}{n} \sum_{\ell=1}^n \left|\mathbb{P}(A \cap B_\ell)-\mathbb{P}(A)\mathbb{P}(B_\ell)\right|\xrightarrow{n\to\infty} 0.
\end{equation}
The additional assumptions on our model are the following:
\begin{itemize}
\item For all $i\in[2n-1]_0$, the stochastic processes $\bm \xi^i$ are independent copies of $\bm \xi$.
\item The initial random strengths $\{\bm s_i\}_{i\in[2n-1]}$ of the teams different from $T_0$ are i.i.d.\ random variables on $\mathbb{R}_+$ with distribution $\mu$, for some Borel probability measure $\mu$ on $\mathbb{R}_+$.
\item The initial random strengths $\{\bm s_i\}_{i\in[2n-1]}$ are independent of the processes $\{\bm \xi^i\}_{i\in[2n-1]_0}$ and of the process $\bm \xi$.
\end{itemize}
\subsubsection{Further assumptions for the central limit theorem}\label{ass:CLT}
In order to prove a central limit theorem, we need to make some stronger assumptions. The first assumption concerns the mixing properties of the process $\bm \xi$. For $k \in \mathbb{N}$, we introduce the two $\sigma$-algebras $\mathcal{A}_1^{k} = \sigma \left(\bm \xi_1, \dots, \bm \xi_k \right)$ and $\mathcal{A}_k^{\infty} = \sigma \left(\bm \xi_k, \dots \right)$ and we define for all $n\in\mathbb{N}$,
\begin{equation}\label{eq:def_alpha_n}
\alpha_n = \sup_{\substack{k \in \mathbb{N}\\ A \in \mathcal{A}_1^{k},B \in \mathcal{A}_{k+n}^{\infty}}}
\left| \mathbb{P} \left( A \cap B \right) - \mathbb{P}(A)\mathbb{P}(B) \right|.
\end{equation}
We assume that
\begin{equation}\label{eq:strongly_mix_plus}
\sum_{n=1}^{\infty} \alpha_n < \infty.
\end{equation}
Note that this condition, in particular, implies that the process $\bm \xi$ is \emph{strongly mixing}, that is, $\alpha_n \to 0$ as $n \to \infty$.
Finally, we assume that there exist two sequences $p=p(n)$ and $q=q(n)$ such that:
\begin{itemize}
\item $p\xrightarrow{n\to\infty} +\infty$ and $q\xrightarrow{n\to\infty} +\infty$,
\item $q=o(p)$ and $p=o(n)$ as $n \to \infty$,
\item $ n p^{-1 } \alpha_q =o(1)$,
\item $ \frac{p}{n} \cdot \sum_{j=1}^p j \alpha_j = o(1)$.
\end{itemize}
\begin{rem}
The latter assumption concerning the existence of the sequences $p$ and $q$ is not very restrictive. For example, simply assuming that $\alpha_n = O(\frac{1}{n \log(n)})$ ensures that the four conditions are satisfied for $p=\frac{\sqrt n}{\log \log n}$ and $q=\frac{\sqrt n}{(\log \log n)^2}$. Indeed, in this case, the first two conditions are trivial, the fourth one follows by noting that $\sum_{j=1}^p j \alpha_j=O(p)$ thanks to the assumption in \cref{eq:strongly_mix_plus}, and finally the third condition follows by standard computations.
\end{rem}
\begin{rem}\label{rem:fbkwufobw}
Note also that as soon as $p= O(\sqrt n)$ then the fourth condition is immediately verified.
Indeed, $\sum_{j=1}^p j \alpha_j\leq \sqrt p \sum_{j=1}^{\sqrt p} \alpha_j+p \sum_{j=\sqrt p}^{ p} \alpha_j =o(p)$.
\end{rem}
\subsection{Main results}\label{sect:main_results}
\subsubsection{Results for our model of multi-player leagues}
Let $\bm{V},\bm{V'},\bm U, \bm U'$ be four independent random variables such that $\bm{V}\stackrel{d}{=}\bm{V'}\stackrel{d}{=}\nu$ and $\bm U\stackrel{d}{=}\bm U'\stackrel{d}{=}\mu$. Given a deterministic sequence $\vec{s}=(s_i)_{{i\in\mathbb{N}}}\in\mathbb{R}^{\mathbb{N}}_{+}$, denote by $\mathbb{P}_{\vec{s}}$ the law of the random variable $\frac{\bm W_n(s)}{2n}$ when the initial strengths of the teams $(T_i)_{i\in[2n-1]}$ are equal to $\vec{s}=(s_i)_{{i\in[2n-1]}}$, i.e.\ we study $\frac{\bm W_n(s)}{2n}$ on the event
$$\bm s_0=s\quad \text{ and }\quad(\bm s_i)_{{i\in[2n-1]}}=(s_i)_{{i\in[2n-1]}}.$$
\begin{thm}(Quenched law of large numbers)\label{thm:LLN}
Suppose that the assumptions in \cref{ass:LLN} hold. Fix any $s\in\mathbb{R}_+$. For $\mu^{\mathbb{N}}$-almost every sequence $\vec{s}=(s_i)_{{i\in\mathbb{N}}}\in\mathbb{R}^{\mathbb{N}}_{+}$, under $\mathbb{P}_{\vec{s}}$ the following convergence holds
\begin{equation}
\frac{\bm W_n(s)}{2n}\xrightarrow[n\to\infty]{P}\ell(s),
\end{equation}
where
\begin{equation}
\ell(s)=\mathbb{E}\left[f\left(s\cdot\bm{V},\bm U\cdot\bm{V'}\right)\right]=\int_{\mathbb{R}^3_+} f\left(s\cdot v,u\cdot v'\right)d\nu(v)d\nu(v')d\mu(u).
\end{equation}
\end{thm}
We now state our second result.
\begin{thm}(Quenched central limit theorem)\label{thm:CLT}
Suppose that the assumptions in \cref{ass:LLN} and \cref{ass:CLT} hold. Fix any $s\in\mathbb{R}_+$. For $\mu^{\mathbb{N}}$-almost every sequence $\vec{s}=(s_i)_{i\in\mathbb{N}}\in\mathbb{R}^{\mathbb{N}}_{+}$, under $\mathbb{P}_{\vec{s}}$ the following convergence holds
\begin{equation}\label{eq:CLT}
\frac{\bm W_n(s)- \mathbb{E}_{\vec{s}}[\bm W_n(s)] }{\sqrt{2n}}\xrightarrow{d} \bm{\mathcal{N}}\left(0,\sigma(s)^2 + \rho(s)^2\right),
\end{equation}
where, for $F_s(x,y)\coloneqq\mathbb{E}\left[f\left(s\cdot x,y\cdot \bm V'\right)\right]$ and $\tilde F_s \left(x,y\right) \coloneqq F_s\left(x,y \right) - \mathbb{E}[F_s\left(\bm V, y\right)]$,
\begin{equation}\label{eq:variance1_CLT}
\sigma(s)^2=\mathbb{E}\left[F_s(\bm V,\bm U)-\left(F_s(\bm V,\bm U)\right)^2\right]=\ell(s)-\mathbb{E}\left[\left(F_s(\bm V,\bm U)\right)^2\right]
\end{equation}
and
\begin{equation}\label{eq:def_rho_s}
\rho(s)^2= \mathbb{E} \left[ \tilde F_s (\bm V, \bm U)^2 \right] + 2 \cdot \sum_{k=1}^{\infty} \mathbb{E}\left[ \tilde F_s(\bm{\xi}_1, \bm U) \tilde F_s (\bm{\xi}_{1+k}, \bm U') \right],
\end{equation}
the last series being convergent.
\end{thm}
\begin{rem}
The assumption that the initial strengths of the teams $(\bm s_i)_{i\in\mathbb{N}}$ are independent could be relaxed from a theoretical point of view, but we believe that this is a very natural assumption for our model.
\end{rem}
\subsubsection{Originality of our results and analysis of the limiting variance: a quenched CLT for functionals of ergodic processes and i.i.d.\ random variables}\label{sect:litterature_sum_var}
We now comment on the originality of our results and contextualize them in relation to the established literature and prior studies on sums of dependent and not equi-distributed random variables. Additionally, we give some informal explanations on the two components $\sigma(s)^2$ and $\rho(s)^2$ of the variance of the limiting Gaussian random variable in \cref{eq:CLT}.
We start by noticing that, without loss of generality, we can assume that for every $j\in[2n-1]$, the team $T_{0}$ plays against the team $T_j$ on the $j$-th day of the league. Denoting by $W_{j}=W_j(s)$ the event that the team $T_{0}$ wins against the team $T_j$, the random variable $\bm W_n(s)$ rewrites as
\begin{equation}
\bm W_n(s)=\sum_{j=1}^{2n-1}\mathds{1}_{W_j}.
\end{equation}
Note that the Bernoulli random variables $(\mathds{1}_{W_j})_{j\in[2n-1]}$ are only independent conditionally on the process $(\bm \xi^{0}_j)_{j\in[2n-1]}$.
In addition, under our assumptions, we have that the conditional parameters of the Bernoulli random variables are given by $\mathbb{P}_{\vec{s}} \left(W_j\middle | \bm \xi^{0}_j\right)= F_s (\bm \xi^{0}_j, s_j )$.
Therefore, under $\mathbb{P}_{\vec{s}}$, the random variable $\bm W_n(s)$ is a sum of Bernoulli random variables that are \emph{neither independent nor identically distributed}. As a consequence, the proofs of the quenched law of large numbers (see \cref{sect:LLN}) and of the quenched central limit theorem (see \cref{sect:CLT}) do not follow from a simple application of classical results in the literature.
We recall that it is quite simple to relax the identically distributed assumption in order to obtain a central limit theorem: the Lindeberg criterion, see for instance \cite[Theorem 27.2]{MR1324786}, gives a sufficient (and almost necessary) criterion for a sum of independent random
variables to converge towards a Gaussian random variable (after suitable renormalization). Relaxing independence is more delicate and there is no universal theory to do it. In the present paper, we combine two well-known theories to obtain our results: the theory for stationary ergodic processes (see for instance \cite{MR74711, MR0148125, MR0322926, MR1176496, MR2325294}) and the theory for $m$-dependent random variables (see for instance \cite{MR26771,MR350815,MR1747098}) and dependency graphs (see for instance \cite{MR681466, MR920273, MR1048950, MR2068873, MR4105789}).
To explain the presence of the two terms in the variance, we start by noticing that using the law of total conditional variance, the conditional variance of $\mathds{1}_{W_j}$ is given by $\Var_{\vec{s}} \left(\mathds{1}_{W_j}\middle | \bm \xi^{0}_j\right)= F_s (\bm \xi^{0}_j, s_j ) - (F_s (\bm \xi^{0}_j, s_j ))^2$. The term $\sigma(s)^2$ arises as the limit of
$$ \frac{1}{2n} \sum_{j=1}^{2n-1} \Var_{\vec{s}} \left(\mathds{1}_{W_j}\middle | \bm \xi^{0}_j\right),$$
and this is in analogy with the case of sums of independent but not identically distributed Bernoulli random variables. The additional term $\rho(s)^2$, on the contrary, arises from the fluctuations of the conditional parameters of the Bernoulli variables, i.e.\ $\rho(s)^2$ comes from the limit of
\begin{equation}\label{eq:fbewbfw}
\frac{1}{2n} \sum_{j=1}^{2n-1} \Var \left(\mathbb{P}_{\vec{s}} \left(W_j\middle | \bm \xi^{0}_j\right) \right)=\frac{1}{2n} \sum_{j=1}^{2n-1} \Var \left( F_s (\bm \xi^{0}_j, s_j ) \right).
\end{equation}
Note that an additional difficulty arises from the fact that the sums in the last two equations are not independent (but we will show that they are asymptotically independent).
To study the limit in \cref{eq:fbewbfw} we prove in \cref{sect:CLT} the following general result that we believe to be of independent interest.
\begin{thm}\label{prop:clt_sum_of_the_g}
Suppose that the assumptions in \cref{eq:stationarity0,eq:stationarity}, and in \cref{ass:CLT} hold. Let $g:\mathbb{R}^2_+\to\mathbb{R}_+$ be a bounded, measurable function, and define $\tilde g :\mathbb{R}^2_+\to\mathbb{R}$ by $\tilde g (x,y) \coloneqq g(x,y) - \mathbb{E} \left[ g\left( \bm V, y\right) \right]$. Then, the quantity
\begin{equation}\label{eq:def_of_rho}
\rho_g^2 \coloneqq \mathbb{E} \left[ \tilde g (\bm V, \bm U)^2 \right] + 2 \cdot \sum_{k=1}^{\infty} \mathbb{E}\left[ \tilde g(\bm{\xi}_1, \bm U) \tilde g (\bm{\xi}_{1+k}, \bm U') \right]
\end{equation}
is finite. Moreover, for $\mu^{\mathbb{N}}$-almost every sequence $(s_i)_{{i\in\mathbb{N}}}\in\mathbb{R}^{\mathbb{N}}_{+}$, the following convergence holds
\begin{equation}\label{eq:CLT2}
\frac{\sum_{j=1}^{2n-1}\tilde g\left(\bm \xi_j,s_j\right)}{\sqrt{2n}}\xrightarrow{d} \bm{\mathcal{N}}(0,\rho_g^2).
\end{equation}
\end{thm}
The main difficulty in studying the sum $\sum_{j=1}^{2n-1}g\left(\bm \xi_j,s_j\right)$ is that fixing a deterministic realization $(s_i)_{{i\in\mathbb{N}}}\in\mathbb{R}^{\mathbb{N}}_{+}$ implies that the random variables $g\left(\bm \xi_j,s_j\right)$ are no longer stationary.
In many of the classical results available in the literature (see the references given above) -- in addition to moment and mixing conditions -- stationarity is assumed, and so we cannot directly apply these results. Exceptions are \cite{MR1492353,MR3257385}, where stationarity is not assumed but a stronger version\footnote{This is assumption (B3) in \cite{MR3257385}, which is also assumed in \cite{MR1492353}. Since in our case all moments of $g\left(\bm \xi_j,s_j\right)$ are finite (because $g$ is a bounded function), we can take the parameter $\delta$ in \cite{MR3257385} equal to $+\infty$ and thus condition (B3) in \cite{MR3257385} requires that $\sum_{n=1}^{\infty} n^2 \alpha_n <+\infty$.} of our condition in \cref{eq:strongly_mix_plus} is assumed. Therefore, to the best of our knowledge, \cref{prop:clt_sum_of_the_g} does not follow from known results in the literature.
\subsection{Calibration of the parameters of the model and some examples}\label{sect:param_and_examples}
An interesting feature of our model is that the main parameters characterizing the evolution of the league can be statistically calibrated so that the model describes real-life tournaments. Such parameters are:
\begin{itemize}
\item The function $f:\mathbb R_+^2\to[0,1]$ that controls the winning probability of the matches.
\item The distribution $\mu$ of the initial strengths $(\bm s_i)_i$ of the teams.
\item The marginal distribution $\nu$ of the tilting process $\bm \xi$.
\end{itemize}
We end this section with two examples. The first one is more theoretical and the second one more related to the statistical calibration.
\begin{exmp}\label{exmp:league}
We assume that:
\begin{itemize}
\item $f(x,y)=\frac{x}{x+y}$;
\item for all $i\in\mathbb{N}$, the initial strengths $\bm s_i$ are uniformly distributed in $[0,1]$, i.e.\ $\mu$ is the Lebesgue measure on $[0,1]$;
\item the tilting process $\bm \xi$ is a Markov chain with state space $\{a,b\}$ for some $a\in(0,1),b\in(1,\infty)$, with transition matrix $\begin{pmatrix}
p_a & 1-p_a \\
1-p_b & p_b
\end{pmatrix}$ for some $p_a,p_b\in(0,1)$. Note that the invariant measure $\nu=(\nu_a,\nu_b)$ is equal to $ \left(\frac{1-p_b}{2-p_a-p_b},\frac{1-p_a}{2-p_a-p_b}\right)$.
\end{itemize}
Under these assumptions, \cref{thm:LLN} guarantees that for any fixed $s\in\mathbb{R}_+$ and for almost every realization $\vec{s}=( s_i)_{{i\in\mathbb{N}}}$ of the random sequence $(\bm s_i)_{{i\in\mathbb{N}}}\in[0,1]^{\mathbb{N}}$, the following convergence holds under $\mathbb{P}_{\vec{s}}$:
\begin{equation}
\frac{\bm W_n(s)}{2n}\xrightarrow[n\to\infty]{P}\ell(s),
\end{equation}
where
\begin{equation}\label{eq:exemp_l_S}
\ell(s)=\int_{\mathbb{R}^3_+} \frac{s v}{s v+u v'}d\nu(v)d\nu(v')d\mu(u)=\sum_{i,j\in\{a,b\}} \frac{s\cdot i}{j}\log\left(1+\frac{j}{s\cdot i}\right)\nu_i\cdot \nu_j.
\end{equation}
In particular if $a=\frac{1}{2}, b=2, p_a=\frac 1 2, p_b=\frac 1 2$ then
\begin{equation}\label{eq:expression_exemp_ls}
\ell(s)=\frac{s}{16}\log\left(\frac{(1+s)^8(4+s)(1+4s)^{16}}{2^{32}\cdot s^{25}}\right),
\end{equation}
whose graph is plotted in \cref{fig:simple_exemp}.
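The closed form \eqref{eq:expression_exemp_ls} can be checked numerically, for instance by comparing it with a plain Monte Carlo estimate of $\mathbb{E}\left[f\left(s\cdot\bm{V},\bm U\cdot\bm{V'}\right)\right]$ (a sketch with the parameters $a=\frac12$, $b=2$, $p_a=p_b=\frac12$ of the example, for which $\nu$ is uniform on $\{1/2,2\}$):
\begin{verbatim}
import numpy as np

def ell_closed_form(s):
    """Closed form of ell(s) for a = 1/2, b = 2, p_a = p_b = 1/2."""
    return (s / 16.0) * np.log((1 + s)**8 * (4 + s) * (1 + 4 * s)**16
                               / (2.0**32 * s**25))

def ell_monte_carlo(s, m=10**6, seed=0):
    """Monte Carlo estimate of E[f(s V, U V')] with f(x, y) = x / (x + y),
    U ~ Unif[0, 1] and V, V' uniform on {1/2, 2} (the invariant law nu)."""
    rng = np.random.default_rng(seed)
    v, vp = rng.choice([0.5, 2.0], size=(2, m))
    u = rng.uniform(0.0, 1.0, size=m)
    return np.mean(s * v / (s * v + u * vp))

print(ell_closed_form(0.3), ell_monte_carlo(0.3))  # the two values should agree
\end{verbatim}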
\begin{figure}[htbp]
\centering
\includegraphics[scale=.5]{simple_exemp}
\caption{The graph of the function $\ell(s)$ in \cref{eq:expression_exemp_ls} for $s\in[0,1]$. \label{fig:simple_exemp}}
\end{figure}
In addition, \cref{thm:CLT} implies that for any $s\in\mathbb{R}_+$ and for almost every realization $\vec{s}=( s_i)_{{i\in\mathbb{N}}}$ of the random sequence $(\bm s_i)_{{i\in\mathbb{N}}}\in[0,1]^{\mathbb{N}}$, the following convergence also holds under $\mathbb{P}_{\vec{s}}$:
\begin{equation}
\frac{\bm W_n(s)- \mathbb{E}_{\vec{s}}[\bm W_n(s)] }{\sqrt{2n}}\xrightarrow{d} \bm{\mathcal{N}}(0,\sigma(s)^2 + \rho(s)^2),
\end{equation}
where $\sigma(s)^2$ and $\rho(s)^2$ can be computed as follows. From \cref{eq:variance1_CLT,eq:def_rho_s} we know that
\begin{equation}
\sigma(s)^2=\ell(s)-\mathbb{E}\left[\left(F_s(\bm V,\bm U)\right)^2\right]
\end{equation}
and that
\begin{equation}
\rho(s)^2= \mathbb{E} \left[ \tilde F_s (\bm V, \bm U)^2 \right] + 2 \cdot \sum_{k=1}^{\infty} \mathbb{E}\left[ \tilde F_s(\bm{\xi}_1, \bm U) \tilde F_s (\bm{\xi}_{1+k}, \bm U') \right].
\end{equation}
Recall that $\ell(s)$ was computed in \cref{eq:exemp_l_S}. Note that $$F_s(x,y)=\mathbb{E}\left[f\left(s\cdot x,y\cdot \bm V'\right)\right]=\sum_{v'\in \{a,b\}}f\left(s\cdot x,y\cdot v'\right)\nu_{v'}$$
and that
$$\tilde F_s(x,y)=F_s(x,y)-\mathbb{E}\left[F_s(\bm V,y)\right]=\sum_{v'\in\{a,b\}}f\left(s\cdot x,y\cdot v'\right)\nu_{v'}-\sum_{(v,v')\in \{a,b\}^2}f\left(s\cdot v,y\cdot v'\right)\nu_{v'}\nu_{v}.$$
Therefore
\begin{equation}
\mathbb{E}\left[\left(F_s(\bm V,\bm U)\right)^2\right]=\sum_{v\in \{a,b\}}\left(\int_0^1\left(\sum_{v'\in\{a,b\}}f\left(s\cdot v,u\cdot v'\right)\nu_{v'}\right)^2du\right)\nu_{v}
\end{equation}
and
\begin{equation}
\mathbb{E}\left[\left(\tilde F_s(\bm V,\bm U)\right)^2\right]=\sum_{w\in \{a,b\}}\left(\int_0^1\left(\sum_{v'\in\{a,b\}}f\left(s\cdot w,u\cdot v'\right)\nu_{v'}-\sum_{(v,v')\in \{a,b\}^2}f\left(s\cdot v,u\cdot v'\right)\nu_{v'}\nu_{v}\right)^2du\right)\nu_{w}.
\end{equation}
Moreover,
\begin{equation}
\sum_{k=1}^{\infty} \mathbb{E}\left[ \tilde F_s(\bm{\xi}_1, \bm U) \tilde F_s (\bm{\xi}_{1+k}, \bm U') \right]= \sum_{k=1}^{\infty} \sum_{(i,j)\in \{a,b\}^2} \mathbb{P}(\bm{\xi}_1=i,\bm{\xi}_{1+k}=j) \int_0^1 \tilde F_s(i, u)\; du \int_0^1 \tilde F_s (j, u') \; du'.
\end{equation}
It remains to compute $\mathbb{P}(\bm{\xi}_1=i,\bm{\xi}_{1+k}=j)$ for $i,j\in \{a,b\}$. Note that
\begin{equation}
\begin{pmatrix}
p_a & 1-p_a \\
1-p_b & p_b
\end{pmatrix}=
\begin{pmatrix}
1 & \frac{1-p_a}{p_b-1} \\
1 & 1
\end{pmatrix}
\begin{pmatrix}
1 & 0 \\
0 & (p_a+p_b-1)
\end{pmatrix}
\begin{pmatrix}
\frac{p_b-1}{p_a+p_b-2} & \frac{p_a-1}{p_a+p_b-2} \\
\frac{1-p_b}{p_a+p_b-2} & \frac{p_b-1}{p_a+p_b-2}
\end{pmatrix}
\eqqcolon SJS^{-1}.
\end{equation}
Therefore,
\begin{align}
&\mathbb{P}(\bm{\xi}_1=a,\bm{\xi}_{1+k}=a)=\frac{p_b-1 + (p_a-1) (p_a + p_b-1)^k}{p_a + p_b-2}\cdot \nu_a,\\
&\mathbb{P}(\bm{\xi}_1=a,\bm{\xi}_{1+k}=b)=\frac{(p_a-1) ((p_a + p_b-1)^k-1)}{p_a + p_b-2}\cdot \nu_a,\\
&\mathbb{P}(\bm{\xi}_1=b,\bm{\xi}_{1+k}=a)=\frac{(p_b-1)((p_a + p_b-1)^k-1)}{p_a + p_b-2}\cdot \nu_b,\\
&\mathbb{P}(\bm{\xi}_1=b,\bm{\xi}_{1+k}=b)=\frac{p_a-1 + (p_b-1) (p_a + p_b-1)^k}{p_a + p_b-2}\cdot \nu_b.
\end{align}
With some tedious but straightforward computations\footnote{We developed a \emph{Mathematica} script to quickly perform such computations for various choices of the function $f(x,y)$ and of the parameters $a,b,p_a,p_b$. The script is available at the following \href{https://drive.google.com/drive/folders/1CXZVpe-HJvtJNNGlThu2J-PO9OHme0KP?usp=sharing}{link}.}, we can explicitly compute $\sigma(s)^2$ and $\rho(s)^2$.
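As an alternative to symbolic computation, $\sigma(s)^2$ and $\rho(s)^2$ can also be evaluated numerically, integrating over $u$ by quadrature, using the identity $\mathbb{P}(\bm{\xi}_1=i,\bm{\xi}_{1+k}=j)=\nu_i (P^k)_{ij}$ (equivalent to the closed forms above) and truncating the series defining $\rho(s)^2$. A sketch with the parameters of the example (the grid size and the truncation level are arbitrary choices):
\begin{verbatim}
import numpy as np
from numpy.linalg import matrix_power

a, b, p_a, p_b = 0.5, 2.0, 0.5, 0.5                # parameters of the example
vals = np.array([a, b])
P    = np.array([[p_a, 1 - p_a], [1 - p_b, p_b]])  # transition matrix on {a, b}
nu   = np.array([1 - p_b, 1 - p_a]) / (2 - p_a - p_b)
f    = lambda x, y: x / (x + y)
u    = (np.arange(10**4) + 0.5) / 10**4            # midpoint grid, U ~ Unif[0, 1]

def variances(s, k_max=200):
    # F[i, :] = F_s(vals[i], u)  and  Ft = tilde F_s  evaluated on the grid.
    F  = np.array([sum(nu[k] * f(s * v, u * vals[k]) for k in range(2))
                   for v in vals])
    Ft = F - nu @ F
    ell    = nu @ F.mean(axis=1)
    sigma2 = ell - nu @ (F**2).mean(axis=1)
    m      = Ft.mean(axis=1)                       # integrals of tilde F_s in u
    rho2   = nu @ (Ft**2).mean(axis=1)
    for k in range(1, k_max + 1):                  # truncated series
        joint = nu[:, None] * matrix_power(P, k)   # P(xi_1 = i, xi_{1+k} = j)
        rho2 += 2 * m @ joint @ m
    return sigma2, rho2
\end{verbatim}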
The graphs of the two functions $\sigma(s)^2$ and $\rho(s)^2$ for $s\in[0,1]$ are plotted in \cref{fig:diagram_variance} for three different choices of the parameters $a,b,p_a,p_b$. It is interesting to note that $\sigma(s)^2$ is much larger than $\rho(s)^2$ when $p_a$ and $p_b$ are small, $\sigma(s)^2$ is comparable with $\rho(s)^2$ when $p_a$ and $p_b$ are around 0.9, and $\rho(s)^2$ is much larger than $\sigma(s)^2$ when $p_a$ and $p_b$ are very close to 1.
\begin{figure}[htbp]
\centering
\includegraphics[scale=.4]{diagram_variance2}
\hspace{0.1cm}
\includegraphics[scale=.4]{diagram_variance1}
\hspace{0.1cm}
\includegraphics[scale=.4]{diagram_variance}
\caption{In green the graph of $\sigma(s)^2$. In red the graph of $\rho(s)^2$. In blue the graph of $\sigma(s)^2+\rho(s)^2$.
\textbf{Left:} The parameters of the model are $a=1/2,b=2,p_a=2/5,p_b=2/5$.
\textbf{Middle:} The parameters of the model are $a=1/2,b=2,p_a=92/100,p_b=92/100$.
\textbf{Right:} The parameters of the model are $a=1/2,b=2,p_a=99/100,p_b=99/100$. \label{fig:diagram_variance}}
\end{figure}
\end{exmp}
\begin{exmp}
We collect here some data related to the Italian national basketball league in order to compare some real data with our theoretical results. We believe that it would be interesting to develop a more accurate and precise analysis of real data in some future projects.
In \cref{fig:table_basket}, the rankings of the last 22 national leagues played among exactly 16 teams are shown (some leagues, like the 2011-12 one, are not tracked since in those years the league did not consist of 16 teams).
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.95]{table_basket}
\caption{The rankings of the last 22 Italian national basketball leagues played with exactly 16 teams. In the Italian basketball league every team plays two matches against every other team and every victory gives two points.\label{fig:table_basket}}
\end{figure}
The mean numbers of points over the 22 collected leagues are (from the team ranked first to the team ranked 16th):
\begin{multicols}{4}
\begin{enumerate}
\item 47.45
\item 42.27
\item 40.18
\item 37.59
\item 35.91
\item 34.09
\item 31.73
\item 30.55
\item 29.18
\item 27.77
\item 26.09
\item 24.36
\item 22.00
\item 19.59
\item 17.82
\item 12.41
\end{enumerate}
\end{multicols}
The diagram of this ranking is given in the left-hand side of \cref{fig:diagram_basket}.
\medskip
As mentioned above, an interesting question is to assess whether it is possible to describe the behaviour of these leagues using our model. More precisely, we looked for a function $f(x,y)$ and two distributions $\mu$ and $\nu$ such that the graph of $\ell(s)$ for $s\in[0,1]$ approximates well the graph in the left-hand side of \cref{fig:diagram_basket}. We found that choosing $\mu$ to be the uniform measure on the interval $[0.1,0.999]$, $\nu=0.6\cdot\delta_{0.25}+0.9\cdot\delta_{1.3},$ and
\begin{equation}\label{eq:guess_f}
f(x,y)\coloneqq\frac{g(x)}{g(x)+g(y)},\quad\text{with}\quad g(x)\coloneqq\log \left(1-\min\left\{\frac{x}{1.3},0.999\right\}\right),
\end{equation}
then the graph of $\ell(s)$ for $s\in[0.1,0.999]$ is the one given in the right-hand side of \cref{fig:diagram_basket}. Note that the two graphs have a similar convexity.
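A Monte Carlo sketch of this calibration check is given below (the weights of $\nu$ above are normalised to sum to one for the purpose of sampling; all other values are those given in (and before) \cref{eq:guess_f}):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    return np.log(1.0 - np.minimum(x / 1.3, 0.999))

def f(x, y):                                # f(x, y) = g(x) / (g(x) + g(y))
    return g(x) / (g(x) + g(y))

def ell(s, m=10**5):
    """Monte Carlo estimate of ell(s) = E[f(s V, U V')]."""
    u  = rng.uniform(0.1, 0.999, size=m)                 # U  ~ mu
    w  = np.array([0.6, 0.9])
    w  = w / w.sum()                                     # weights of nu, normalised
    v  = rng.choice([0.25, 1.3], size=m, p=w)            # V  ~ nu
    vp = rng.choice([0.25, 1.3], size=m, p=w)            # V' ~ nu
    return np.mean(f(s * v, u * vp))

# ell(s) on a grid of 16 strengths, to be compared (up to Monte Carlo error)
# with the right-hand side of the figure below.
curve = [ell(s) for s in np.linspace(0.1, 0.999, 16)]
\end{verbatim}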
\begin{figure}[htbp]
\centering
\includegraphics[scale=.55]{diagram_basket}
\hspace{1cm}
\includegraphics[scale=.55]{diagram_guessed}
\caption{\textbf{Left:} The diagram of the mean number of points of the 22 leagues collected in \cref{fig:table_basket}. On the $x$-axis the teams are ranked from the weakest to the strongest; on the $y$-axis the mean number of points is plotted.
\textbf{Right:} The graph of $\ell(s)$ for $s\in[0.1,0.999]$ for the specific $\mu,\nu$ and $f(x,y)$ given in (and before) \cref{eq:guess_f}. \label{fig:diagram_basket}}
\end{figure}
\end{exmp}
\subsection{Open problems}
We collect some open questions and conjectures that we believe might be interesting to investigate in future projects:
\begin{itemize}
\item Conditioning on the initial strengths of the teams, how many times do we need to run the league in order to guarantee that the strongest team a.s.\ wins the league? We point out that a similar question was investigated in \cite{ben2007efficiency} for the model considered by the authors.
\item In the spirit of large deviations results, we believe it would be interesting to study the probability that the weakest team wins the league, again conditioning on the initial strengths of the teams.
\item Another natural question is to investigate the whole final ranking of the league. We conjecture the following.
For a sequence of initial strengths $(s_i)_{i\in [2n-1]_0}$, we denote by $(\tilde T_i)_{i\in [2n-1]_0}$ the sequence of teams reordered according to their initial strengths $(s_i)_{i\in [2n-1]_0}$ (from the weakest to the strongest). Set
\begin{equation}
\tilde{\bm W}_n(i)\coloneqq\text{Number of wins of the team }\tilde T_{i}\text{ at the end of a league with $2n$ players}.
\end{equation}
Let $\tilde{\bm {\mathcal W}}_n(x):[0,1]\to \mathbb{R}$ denote the piece-wise linear continuous function obtained by interpolating the values $\tilde{\bm W}_n(i)$ at the points $i/(2n-1)$, for all $i\in [2n-1]_0$.
Denote by $H_{\mu}(y)$ the cumulative distribution function of $\mu$ and by $H_{\mu}^{-1}(x)$ the generalized inverse distribution function, i.e.\ $H_{\mu}^{-1}(x)=\inf\{y\in \mathbb{R}_{+}: H_{\mu}(y)\geq x \}$.
\begin{conj}
Suppose that the assumptions in \cref{ass:LLN} hold with the additional requirement that the function $f$ is continuous\footnote{The assumption that $f$ is continuous might be relaxed.}. For $\mu^{\mathbb{N}\cup\{0\}}$-almost every sequence $\vec{s}=(s_i)_{{i\in\mathbb{N}\cup\{0\}}}\in\mathbb{R}^{\mathbb{N}\cup\{0\}}_{+}$ and for every choice of the calendar of the league, under $\mathbb{P}_{\vec{s}}$ the following convergence of càdlàg processes holds
\begin{equation}
\frac{\tilde{\bm {\mathcal W}}_n(x)}{2n}\xrightarrow[n\to\infty]{P}\ell(H_{\mu}^{-1}(x)),
\end{equation}
where $\ell(s)$ is defined as in \cref{thm:LLN} by
\begin{equation}
\ell(s)=\mathbb{E}\left[f\left(s\cdot\bm{V},\bm U\cdot\bm{V'}\right)\right]=\int_{\mathbb{R}^3_+} f\left(s\cdot v,u\cdot v'\right)d\nu(v)d\nu(v')d\mu(u).
\end{equation}
\end{conj}
Note that the correlations between various teams in the league strongly depend on the choice of the calendar but we believe that this choice does not affect the limiting result in the conjecture above. We refer the reader to \cref{fig:sim_whole_league} for some simulations that support our conjecture.
We also believe that the analysis of the local limit (as defined in \cite{MR4055194}) of the whole ranking should be an interesting but challenging question (and in this case we believe that the choice of the calendar will affect the limiting object). Here, with local limit we mean the limit of the ranking induced by the $k$ teams -- for every fixed $k\in\mathbb{N}$ -- in the neighborhood (w.r.t.\ the initial strengths) of a distinguished team (that can be selected uniformly at random or in a deterministic way, say for instance the strongest team).
\item As mentioned above, we believe that it would be interesting to develop a more accurate and precise analysis of real data in order to correctly calibrate the parameters of our model collected at the beginning of \cref{sect:param_and_examples}.
\end{itemize}
\begin{figure}[htbp]
\centering
\includegraphics[scale=.4]{sim_whole_league2}
\hspace{0.1cm}
\includegraphics[scale=.4]{sim_whole_league1}
\hspace{0.1cm}
\includegraphics[scale=.4]{sim_whole_league3}
\caption{Two simulations of the diagrams of $(\tilde{\bm W}_n(i))_{i\in[999]_0}$ in the setting of \cref{exmp:league} with different choices of $a, b, p_a$ and $p_b$. The values $(\tilde{\bm W}_n(i))_{i\in[999]_0}$ are plotted in blue. The limiting functions $1000\cdot\ell(H_{\mu}^{-1}(x/1000))$ for $x\in[0,1000]$ are plotted in green and red respectively. \textbf{Left:} In this simulation the parameters are $a=\frac{1}{2}, b=2, p_a=\frac 1 2, p_b=\frac 1 2$.
\textbf{Middle:} In this simulation the parameters are $a=\frac{1}{10}, b=10, p_a=\frac{1}{10}, p_b=\frac{1}{10}$.
\textbf{Right:} The two limiting functions are overlapped in the same diagram to highlight the different slopes.
\label{fig:sim_whole_league}}
\end{figure}
\section{Proof of the law of large numbers}\label{sect:LLN}
The proof of \cref{thm:LLN} follows from the following result using standard second moment arguments.
\begin{prop}\label{prop:first_mom}
We have that
\begin{equation}
\mathbb{E}\left[\frac{\bm W_n(s)}{2n}\middle| (\bm s_i)_i\right]\xrightarrow[n\to\infty]{a.s.} \ell(s) ,
\qquad\text{and}\qquad
\mathbb{E}\left[\left(\frac{\bm W_n(s)}{2n}\right)^2\middle| (\bm s_i)_i\right]\xrightarrow[n\to\infty]{a.s.} \ell(s)^2.
\end{equation}
\end{prop}
The rest of this section is devoted to the proof of \cref{prop:first_mom}.
\medskip
We recall (see \cref{sect:litterature_sum_var}) that we assumed that, for every $j\in[2n-1]$, the team $T_{0}$ plays against the team $T_j$ on the $j$-th day, and we denoted by $W_{j}=W_j(s)$ the event that the team $T_{0}$ wins the match. In particular, $\bm W_n(s)=\sum_{j=1}^{2n-1}\mathds{1}_{W_j}$ and
\begin{equation}\label{eq:prob_win_match}
\mathbb{P}\left(W_j\middle | \bm s_j, \bm \xi^{0}_j,\bm \xi^{j}_j\right)=f(s\cdot \bm \xi^{0}_j,\bm s_j \cdot \bm \xi^{j}_j).
\end{equation}
\begin{proof}[Proof of \cref{prop:first_mom}]
We start with the computations for the first moment. From \cref{eq:prob_win_match} we have that
\begin{equation}\label{eq:evfwryibfweonfpiwe}
\mathbb{E}\left[\bm W_n(s)\middle| (\bm s_i)_i\right]=\sum_{j=1}^{2n-1}\mathbb{P}(W_j| (\bm s_i)_i)=\sum_{j=1}^{2n-1}\mathbb{E}\left[f\left(s\cdot \bm \xi^{0}_j,\bm s_j\cdot \bm \xi^j_j\right)\middle| \bm s_j\right].
\end{equation}
Since for all $j\in [2n-1]$, $\bm \xi^j_j$, $\bm \xi^{0}_j$ and $\bm s_j$ are independent, $\bm \xi^{0}_j\stackrel{d}{=}\bm V$, and $\bm \xi^j_j\stackrel{d}{=}\bm V'$, we have
\begin{equation}
\mathbb{E}\left[f\left(s\cdot \bm \xi^{0}_j,\bm s_j\cdot \bm \xi^j_j\right)\middle| \bm s_j\right]=G_s\left(\bm s_j\right),
\end{equation}
where $G_s(x)=\mathbb{E}\left[f\left(s\cdot \bm V,x\cdot \bm V'\right)\right]$.
By the Law of large numbers, we can conclude that
\begin{equation}\label{eq:evfwuitgwrefbwofnwryibfweonfpiwe}
\mathbb{E}\left[\frac{\bm W_n(s)}{2n}\middle| (\bm s_i)_i\right]=\frac{1}{2n}\sum_{j=1}^{2n-1}G_s\left(\bm s_j\right)
\xrightarrow[n\to\infty]{a.s.} \mathbb{E}\left[G_s\left(\bm U\right)\right]=\ell(s).
\end{equation}
We now turn to the second moment.
Since for all $i,j\in [2n-1]$ with $i\neq j$, conditioning on $(\bm s_i,\bm s_j, \bm \xi^{0}_i,\bm \xi^{i}_i,\bm \xi^{0}_j,\bm \xi^{j}_j)$ the events $W_i$ and $W_j$ are independent, we have that, using \cref{eq:prob_win_match},
\begin{multline}
\mathbb{E}\left[\bm W_n(s)^2\middle| (\bm s_i)_i\right]=\mathbb{E}\left[\bm W_n(s)\middle| (\bm s_i)_i\right]+\sum_{\substack{\ell,j=1\\ \ell\neq j}}^{2n-1}\mathbb{P}(W_\ell\cap W_j | \bm s_\ell,\bm s_j)\\
=\mathbb{E}\left[\bm W_n(s)\middle| (\bm s_i)_i\right]+\sum_{\substack{\ell,j=1\\ \ell\neq j}}^{2n-1}\mathbb{E}\left[f\left(s\cdot \bm \xi^{0}_\ell,\bm s_\ell\cdot \bm \xi^\ell_\ell\right)f\left(s\cdot \bm \xi^{0}_j,\bm s_j\cdot \bm \xi^j_j\right)\middle| \bm s_\ell,\bm s_j\right].
\end{multline}
For all $\ell,j\in [2n-1]$ with $\ell\neq j$, $\bm s_\ell$, $\bm s_j$, $\bm \xi^\ell_\ell$ and $\bm \xi^j_j$ are mutually independent and independent of $(\bm \xi^{0}_\ell,\bm \xi^{0}_j)$. In addition, $(\bm \xi^{0}_\ell,\bm \xi^{0}_j)\stackrel{d}{=}(\bm \xi_\ell,\bm \xi_j)$ and $\bm \xi^\ell_\ell\stackrel{d}{=}\bm \xi^j_j\stackrel{d}{=}\bm V'$. Thus, we have that
\begin{equation}
\mathbb{E}\left[f\left(s\cdot \bm \xi^{0}_\ell,\bm s_\ell\cdot \bm \xi^\ell_\ell\right)f\left(s\cdot \bm \xi^{0}_j,\bm s_j\cdot \bm \xi^j_j\right)\middle| \bm s_\ell,\bm s_j\right]
=\mathbb{E}\left[F_s\left(\bm \xi_\ell,\bm s_\ell\right)F_s\left(\bm \xi_j,\bm s_j\right)\middle| \bm s_\ell,\bm s_j\right],
\end{equation}
where we recall that $F_s(x,y)=\mathbb{E}\left[f\left(s\cdot x,y\cdot \bm V'\right)\right]$.
Simple consequences of the computations done for the first moment are that
$$\frac{\mathbb{E}\left[\bm W_n(s)\middle| (\bm s_i)_i\right]}{n^2}\xrightarrow[n\to\infty]{a.s.} 0\quad\text{and}\quad\frac{\mathbb{E}\left[\sum_{j=1}^{2n-1} F_s^2 (\bm \xi_j, \bm s_j)\middle| (\bm s_i)_i\right]}{n^2}\xrightarrow[n\to\infty]{a.s.} 0.$$ Therefore, we can write
\begin{multline}\label{eq:fbnwiruefbeownfw}
\mathbb{E}\left[\left(\frac{\bm W_n(s)}{2n}\right)^2\middle| (\bm s_i)_i\right]=
\mathbb{E}\left[\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1}F_s\left(\bm \xi_\ell,\bm s_\ell\right) F_s\left(\bm \xi_j,\bm s_j\right)\middle| (\bm s_i)_i\right]+\bm o(1)\\
=
\mathbb{E}\left[\left(\frac{1}{2n}\sum_{j=1}^{2n-1} F_s\left(\bm \xi_j,\bm s_j\right)\right)^2\middle| (\bm s_i)_i\right]+\bm o(1),
\end{multline}
where $\bm o(1)$ denotes a sequence of random variables that a.s.\ converges to zero.
We now need the following result, whose proof is postponed to the end of this section.
\begin{prop} \label{prop:conv_for_bnd_cont_funct}
For all bounded, measurable functions $g:\mathbb{R}^2_+\to\mathbb{R}_+$, we have that
\begin{equation}\label{eq:second_mom_funct}
\mathbb{E}\left[\left(\frac{1}{2n}\sum_{j=1}^{2n-1}g\left(\bm \xi_j,\bm s_j\right)\right)^2\middle| (\bm s_i)_i\right]\xrightarrow[n\to\infty]{a.s.}\mathbb{E}\left[g\left(\bm V,\bm U\right)\right]^2.
\end{equation}
\end{prop}
From \cref{eq:fbnwiruefbeownfw} and the proposition above, we conclude that
\begin{equation}
\mathbb{E}\left[\left(\frac{\bm W_n(s)}{2n}\right)^2\middle| (\bm s_i)_i\right]\xrightarrow[n\to\infty]{a.s.}\mathbb{E}\left[F_s\left(\bm V,\bm U\right)\right]^2=\ell(s)^2.
\end{equation}
This concludes the proof of \cref{prop:first_mom}.
\end{proof}
It remains to prove \cref{prop:conv_for_bnd_cont_funct}. We start with the following preliminary result.
\begin{lem} \label{lem:first_sec_mom_for_ret}
For every quadruple $(A,A',B,B')$ of Borel subsets of $\mathbb{R}_+$, we have that
\begin{equation}
\mathbb{E}\left[\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1} \mathds{1}_{A \times B}\left(\bm \xi_\ell,\bm s_\ell\right) \mathds{1}_{A' \times B'}\left(\bm \xi_j,\bm s_j\right)\middle| (\bm s_i)_i\right]\xrightarrow[n\to\infty]{a.s.} \mathbb{E}\left[\mathds{1}_{A \times B}(\bm V,\bm U)\right]\mathbb{E}\left[\mathds{1}_{A' \times B'}(\bm V,\bm U)\right]\label{eq:second_mom_ind_ret}.
\end{equation}
\end{lem}
\begin{proof}
Note that since the process $\bm \xi$ is independent of $(\bm s_i)_i$,
\begin{multline}\label{eq:rewriting_the_expression}
\mathbb{E}\left[\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1} \mathds{1}_{A \times B}\left(\bm \xi_\ell,\bm s_\ell\right) \mathds{1}_{A' \times B'}\left(\bm \xi_j,\bm s_j\right)\middle| (\bm s_i)_i\right]\\
=\frac{1}{2n}\sum_{\ell=1}^{2n-1}\mathds{1}_{B}(\bm s_\ell) \cdot \frac{1}{2n}\sum_{j=1}^{2n-1} \mathbb{P}\left(\bm \xi_\ell\in A,\bm \xi_j\in A'\right)\mathds{1}_{B'}(\bm s_j).
\end{multline}
For all $\ell\in[2n-1]$, we can write
\begin{align}\label{eq:split_sum_prob}
\frac{1}{2n}\sum_{j=1}^{2n-1} \mathbb{P}\left(\bm \xi_\ell\in A,\bm \xi_j\in A'\right)&\mathds{1}_{B'}(\bm s_j)\\
=\mathbb{P}\left(\bm V \in A\right)&\mathbb{P}\left(\bm V \in A'\right)\frac{1}{2n}\sum_{j=1}^{2n-1} \mathds{1}_{B'}(\bm s_j)\\
&+\frac{1}{2n}\sum_{j=1}^{2n-1} \left(\mathbb{P}\left(\bm \xi_\ell\in A,\bm \xi_j\in A'\right)-\mathbb{P}\left(\bm V \in A\right)\mathbb{P}\left(\bm V \in A'\right)\right)\mathds{1}_{B'}(\bm s_j).
\end{align}
First, from the Law of large numbers we have that
\begin{equation}\label{eq:first_lim_real}
\frac{1}{2n}\sum_{j=1}^{2n-1}\mathds{1}_{B'}(\bm s_j)\xrightarrow[n\to\infty]{a.s.}\mu(B').
\end{equation}
Secondly, we show that the second sum in the right-hand side of \cref{eq:split_sum_prob} is negligible. We estimate
\begin{multline}
\left|\frac{1}{2n}\sum_{j=1}^{2n-1} \left(\mathbb{P}\left(\bm \xi_\ell\in A,\bm \xi_j\in A'\right)-\mathbb{P}\left(\bm V \in A\right)\mathbb{P}\left(\bm V \in A'\right)\right)\mathds{1}_{B'}(\bm s_j)\right|\\
\leq
\frac{1}{2n}\sum_{j=1}^{2n-1} \left|\mathbb{P}\left(\bm \xi_\ell\in A,\bm \xi_j\in A'\right)-\mathbb{P}\left(\bm V\in A\right)\mathbb{P}\left(\bm V \in A'\right)\right|\\
=
\frac{1}{2n}\sum_{j=1}^{\ell-1} \left|\mathbb{P}\left(\bm \xi_\ell\in A,\bm \xi_j\in A'\right)-\mathbb{P}\left(\bm V\in A\right)\mathbb{P}\left(\bm V \in A'\right)\right|\\
+\frac{1}{2n}\sum_{j=\ell}^{2n-1} \left|\mathbb{P}\left(\bm \xi_\ell\in A,\bm \xi_j\in A'\right)-\mathbb{P}\left(\bm V\in A\right)\mathbb{P}\left(\bm V \in A'\right)\right|
\end{multline}
Using the stationarity assumption in \cref{eq:stationarity}, the right-hand side of the equation above can be rewritten as follows
\begin{multline}
\frac{1}{2n}\sum_{j=2}^{\ell} \left|\mathbb{P}\left(\bm \xi_j\in A,\bm \xi_1\in A'\right)-\mathbb{P}\left(\bm V\in A\right)\mathbb{P}\left(\bm V \in A'\right)\right|\\
+\frac{1}{2n}\sum_{j=1}^{2n-\ell} \left|\mathbb{P}\left(\bm \xi_1\in A,\bm \xi_j\in A'\right)-\mathbb{P}\left(\bm V\in A\right)\mathbb{P}\left(\bm V \in A'\right)\right|.
\end{multline}
Therefore, we obtain that
\begin{multline}
\left|\frac{1}{2n}\sum_{j=1}^{2n-1} \left(\mathbb{P}\left(\bm \xi_\ell\in A,\bm \xi_j\in A'\right)-\mathbb{P}\left(\bm V \in A\right)\mathbb{P}\left(\bm V \in A'\right)\right)\mathds{1}_{B'}(\bm s_j)\right|\\
\leq
\frac{1}{2n}\sum_{j=1}^{2n} \left|\mathbb{P}\left(\bm \xi_1\in A',\bm \xi_j\in A\right)-\mathbb{P}\left(\bm V\in A\right)\mathbb{P}\left(\bm V \in A'\right)\right|\\
+\frac{1}{2n}\sum_{j=1}^{2n} \left|\mathbb{P}\left(\bm \xi_1\in A,\bm \xi_j\in A'\right)-\mathbb{P}\left(\bm V\in A\right)\mathbb{P}\left(\bm V \in A'\right)\right|.
\end{multline}
The upper bound above is independent of $\ell$ and tends to zero because the process $\bm \xi$ is weakly-mixing (see \cref{eq:unif_weak_mix1}); we can thus deduce from \cref{eq:split_sum_prob,eq:first_lim_real} that, uniformly for all $\ell\in[2n-1]$,
\begin{equation}\label{eq:ifbuewbfoewnfoiewnfew}
\frac{1}{2n}\sum_{j=1}^{2n-1} \mathbb{P}\left(\bm \xi_\ell\in A,\bm \xi_j\in A'\right)\mathds{1}_{B'}(\bm s_j)
\xrightarrow[n\to\infty]{a.s.}
\mathbb{P}\left(\bm V\in A\right)\mathbb{P}\left(\bm V \in A'\right)\mu(B').
\end{equation}
Hence, from \cref{eq:rewriting_the_expression,eq:ifbuewbfoewnfoiewnfew} and the Law of large numbers, we can conclude that
\begin{multline}
\mathbb{E}\left[\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1} \mathds{1}_{A \times B}\left(\bm \xi_\ell,\bm s_\ell\right) \mathds{1}_{A' \times B'}\left(\bm \xi_j,\bm s_j\right)\middle| (\bm s_i)_i\right]\\
\xrightarrow[n\to\infty]{a.s.} \mathbb{P}\left(\bm V\in A\right)\mu(B)\mathbb{P}\left(\bm V \in A'\right)\mu(B')
=\mathbb{E}\left[\mathds{1}_{A \times B}(\bm V,\bm U)\right]\mathbb{E}\left[\mathds{1}_{A' \times B'}(\bm V,\bm U)\right].\qedhere
\end{multline}
\end{proof}
We now show that we can extend the result of \cref{lem:first_sec_mom_for_ret} to all Borel subsets of $\mathbb{R}_+^2$, denoted by $\mathcal{B}(\mathbb{R}_+^2)$.
We also denote by $\mathcal{R}(\mathbb{R}_+^2)$ the set of rectangles $A\times B$ of $\mathbb{R}_+^2$ with $A,B\in\mathcal{B}(\mathbb{R}_+)$.
We recall that we denote by $\sigma\left(\mathcal A\right)$ and $\lambda\left(\mathcal A\right)$ the $\sigma$-algebra and the monotone class generated by a collection of sets $\mathcal A$, respectively.
\begin{lem}\label{lem:ind_fnct}
For all $C,C'\in\mathcal{B}(\mathbb{R}_+^2)$, we have that
\begin{equation}
\mathbb{E}\left[\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1}\mathds{1}_{C}\left(\bm \xi_\ell,\bm s_\ell\right)\mathds{1}_{C'}\left(\bm \xi_j,\bm s_j\right)\middle| ( \bm s_i)_i\right]\xrightarrow[n\to\infty]{a.s.} \mathbb{E}\left[\mathds{1}_{C}(\bm V,\bm U)\right]\mathbb{E}\left[\mathds{1}_{C'}(\bm V,\bm U)\right]\label{eq:second_mom_ind_bor}.
\end{equation}
\end{lem}
\begin{proof}
We first fix a rectangle $A\times B\in\mathcal{R}(\mathbb{R}_+^2)$ and we consider the set
\begin{equation}
\mathcal{A}_{A\times B}\coloneqq \left\{C\in\mathcal{B}(\mathbb{R}_+^2)\middle |\text{Eq. }\eqref{eq:second_mom_ind_bor} \text{ holds with } C'=A\times B \right\}.
\end{equation}
By \cref{lem:first_sec_mom_for_ret}, we have $\mathcal{R}(\mathbb{R}_+^2)\subseteq\mathcal{A}_{A\times B}\subseteq \mathcal{B}(\mathbb{R}_+^2)$. If we show that $\mathcal{A}_{A\times B}$ is a monotone class, then we can conclude that $\mathcal{A}_{A\times B}=\mathcal{B}(\mathbb{R}_+^2)$. Indeed, by the monotone class theorem (note that $\mathcal{R}(\mathbb{R}_+^2)$ is closed under finite intersections), we have that
\begin{equation}\label{eq:class_inclusions}
\mathcal{B}(\mathbb{R}_+^2)= \sigma\left(\mathcal{R}(\mathbb{R}_+^2)\right)=\lambda\left(\mathcal{R}(\mathbb{R}_+^2)\right) \subseteq\mathcal{A}_{A\times B}.
\end{equation}
The equality $\mathcal{A}_{A\times B}=\mathcal{B}(\mathbb{R}_+^2)$ implies that \cref{eq:second_mom_ind_bor} holds for every pair of sets in $\mathcal{B}(\mathbb{R}_+^2)\times \mathcal{R}(\mathbb{R}_+^2)$. Finally, if we also show that for any fixed Borel set $C^*\in\mathcal{B}(\mathbb{R}_+^2)$, the set
\begin{equation}
\mathcal{A}_{C^*}\coloneqq \left\{C'\in\mathcal{B}(\mathbb{R}_+^2)\middle |\text{Eq. }\eqref{eq:second_mom_ind_bor} \text{ holds with } C=C^* \right\}
\end{equation}
is a monotone class, then using again the same arguments that we used in \cref{eq:class_inclusions} (note that $\mathcal{R}(\mathbb{R}_+^2)\subseteq\mathcal{A}_{C^*}$ thanks to the previous step) we can conclude that \cref{eq:second_mom_ind_bor} holds for every pair of sets in $\mathcal{B}(\mathbb{R}_+^2)\times \mathcal{B}(\mathbb{R}_+^2)$, proving the lemma. Therefore, in order to conclude the proof, it is sufficient to show that $\mathcal{A}_{A \times B}$ and $\mathcal{A}_{C^*}$ are monotone classes.
\medskip
We start by proving that $\mathcal{A}_{A\times B}$ is a monotone class:
\begin{itemize}
\item Obviously $\mathbb{R}_+^2\in \mathcal{A}_{A\times B}$.
\item If $C,D\in\mathcal{A}_{A\times B}$ and $C\subseteq D$, then $D\setminus C\in \mathcal{A}_{A\times B}$ because $\mathds{1}_{D\setminus C}=\mathds{1}_{D}-\mathds{1}_{C}$.
\item Let now $(C_m)_{m\in\mathbb{N}}$ be a sequence of sets in $\mathcal{A}_{A\times B}$ such that $C_m \subseteq C_{m+1}$ for all $m\in\mathbb{N}$. We want to show that $C\coloneqq\bigcup_m C_m\in \mathcal{A}_{A\times B}$, i.e.\ that
\begin{equation}\label{eq:ifbueiwbfowebnfoewinf}
\mathbb{E}\left[\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1}\mathds{1}_{C}\left(\bm \xi_\ell,\bm s_\ell\right)\mathds{1}_{A\times B}\left(\bm \xi_j,\bm s_j\right)\middle| ( \bm s_i)_i\right]
\xrightarrow[n\to\infty]{a.s.}
\mathbb{E}\left[\mathds{1}_{C}(\bm V,\bm U)\right]\mathbb{E}\left[\mathds{1}_{A \times B}(\bm V,\bm U)\right].
\end{equation}
Since $\mathds{1}_{C}=\lim_{m\to \infty}\mathds{1}_{C_m}$, by monotone convergence we have, for all $n\in \mathbb{N}$,
\begin{multline}\label{eq:lim_of_indicator}
\mathbb{E}\left[\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1}\mathds{1}_{C_m}\left(\bm \xi_\ell,\bm s_\ell\right)\mathds{1}_{A\times B}\left(\bm \xi_j,\bm s_j\right)\middle| ( \bm s_i)_i\right]
\xrightarrow[m\to\infty]{a.s.}\\
\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1}\mathbb{E}\left[\mathds{1}_{C}\left(\bm \xi_\ell,\bm s_\ell\right)\mathds{1}_{A\times B}\left(\bm \xi_j,\bm s_j\right)\middle| \bm s_\ell,\bm s_j\right].
\end{multline}
We also claim that:
\begin{itemize}
\item[(a)] For all $m\in\mathbb{N}$,
\begin{equation}
\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1}\mathbb{E}\left[\mathds{1}_{C_m}\left(\bm \xi_\ell,\bm s_\ell\right)\mathds{1}_{A\times B}\left(\bm \xi_j,\bm s_j\right)\middle| \bm s_\ell,\bm s_j\right]\xrightarrow[n\to\infty]{a.s.} \mathbb{E}\left[\mathds{1}_{C_m}(\bm V,\bm U)\right]\mathbb{E}\left[\mathds{1}_{A \times B}(\bm V,\bm U)\right].
\end{equation}
\item[(b)] The convergence in \cref{eq:lim_of_indicator} holds uniformly for all $n\in\mathbb{N}$.
\end{itemize}
Item (a) holds since $C_m \in \mathcal{A}_{A\times B}$.
Item (b) will be proved at the end. Items (a) and (b) allow us to exchange the following a.s.-limits:
\begin{align}
\lim_{n\to \infty}\mathbb{E}&\left[\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1}\mathds{1}_{C}\left(\bm \xi_\ell,\bm s_\ell\right)\mathds{1}_{A\times B}\left(\bm \xi_j,\bm s_j\right)\middle| ( \bm s_i)_i\right]\\
\stackrel{\eqref{eq:lim_of_indicator}}{=} &\lim_{n\to \infty}\lim_{m\to \infty}
\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1}\mathbb{E}\left[\mathds{1}_{C_m}\left(\bm \xi_\ell,\bm s_\ell\right)\mathds{1}_{A\times B}\left(\bm \xi_j,\bm s_j\right)\middle| \bm s_\ell,\bm s_j\right]\\
= &\lim_{m\to \infty}\lim_{n\to \infty}
\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1}\mathbb{E}\left[\mathds{1}_{C_m}\left(\bm \xi_\ell,\bm s_\ell\right)\mathds{1}_{A\times B}\left(\bm \xi_j,\bm s_j\right)\middle| \bm s_\ell,\bm s_j\right]\\
=
&\lim_{m\to \infty}\mathbb{E}\left[\mathds{1}_{C_m}(\bm V,\bm U)\right]\mathbb{E}\left[\mathds{1}_{A \times B}(\bm V,\bm U)\right]=\mathbb{E}\left[\mathds{1}_{C}(\bm V,\bm U)\right]\mathbb{E}\left[\mathds{1}_{A \times B}(\bm V,\bm U)\right],
\end{align}
where the last equality follows by monotone convergence. This proves \cref{eq:ifbueiwbfowebnfoewinf} and concludes the proof (up to proving item (b)) that $\mathcal{A}_{A\times B}$ is a monotone class.
\textbf{Proof of item (b). }Since $\mathds{1}_{C\setminus C_m}=\mathds{1}_{C}-\mathds{1}_{C_m}$, it is enough to show that
\begin{equation}
\sup_{n}\mathbb{E}\left[\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1}\mathds{1}_{C\setminus C_m}\left(\bm \xi_\ell,\bm s_\ell\right)\mathds{1}_{A\times B}\left(\bm \xi_j,\bm s_j\right)\middle| ( \bm s_i)_i\right]\xrightarrow[m\to\infty]{a.s.}0.
\end{equation}
We set $D_m\coloneqq C\setminus C_m$, and define
$$(D_m)^{s}\coloneqq\{x\in \mathbb{R}_+|(x,s)\in D_m\},\quad \text{for all} \quad s\in\mathbb{R},$$
$$\pi_Y(D_m)\coloneqq\{y\in \mathbb{R}_+| \exists x\in \mathbb{R}_+ \text{ s.t. }(x,y)\in D_m\}.$$
Since $\bm \xi_\ell\stackrel{d}{=}\bm V$ for all $\ell\in[2n-1]$, we have, for all $\ell,j\in[2n-1]$, a.s.
\begin{align}
\mathbb{E}\left[\mathds{1}_{D_m}\left(\bm \xi_\ell,\bm s_\ell\right)\mathds{1}_{A\times B}\left(\bm \xi_j,\bm s_j\right)\middle| \bm s_\ell,\bm s_j\right]
&=\mathbb{P}\left(\bm \xi_\ell\in (D_m)^{\bm s_\ell},\bm \xi_j\in A\middle| \bm s_\ell\right)\mathds{1}_{\pi_Y(D_m)}(\bm s_\ell)\mathds{1}_{B}(\bm s_j)\\
&\leq \mathbb{P}\left(\bm \xi_\ell\in (D_m)^{\bm s_\ell}\middle| \bm s_\ell\right)=\mathbb{P}\left(\bm V\in (D_m)^{\bm s_\ell}\middle| \bm s_\ell\right).\label{eq:bound_for_expect1}
\end{align}
Therefore a.s.
\begin{equation}\label{eq:bnd_for_exp}
\sup_{n}\mathbb{E}\left[\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1}\mathds{1}_{C\setminus C_m}\left(\bm \xi_\ell,\bm s_\ell\right)\mathds{1}_{A\times B}\left(\bm \xi_j,\bm s_j\right)\middle| ( \bm s_i)_i\right]
\leq\mathbb{P}\left(\bm V\in (C\setminus C_m)^{\bm s_\ell}\middle| \bm s_\ell\right).
\end{equation}
Now the sequence $\mathbb{P}\left(\bm V\in (C\setminus C_m)^{\bm s_\ell}\middle| \bm s_\ell\right)$ is a.s.\ non-increasing (because $C_m \subseteq C_{m+1}$) and hence has an a.s.\ limit. The limit is non-negative and its expectation is the limit of expectations which is $0$ because $C\coloneqq\bigcup_m C_m$. This completes the proof of item (b).
\end{itemize}
It remains to prove that $\mathcal{A}_{C^*}$ is also a monotone class.
The proof is similar to the proof above, replacing the bound in \cref{eq:bound_for_expect1} by
\begin{multline}\label{eq:bound_for_expect2}
\mathbb{E}\left[\mathds{1}_{C^*}\left(\bm \xi_\ell,\bm s_\ell\right)\mathds{1}_{D_m}\left(\bm \xi_j,\bm s_j\right)\middle| \bm s_\ell,\bm s_j\right]\\
=\mathbb{P}\left(\bm \xi_\ell\in (C^*)^{\bm s_\ell},\bm \xi_j\in (D_m)^{\bm s_j}\middle| \bm s_\ell,\bm s_j\right)\mathds{1}_{\pi_Y(C^*)}(\bm s_\ell)\mathds{1}_{\pi_Y(D_m)}(\bm s_j)
\leq \mathbb{P}\left(\bm \xi_j\in (D_m)^{\bm s_j}\middle|\bm s_j\right).
\end{multline}
This completes the proof of the lemma.
\end{proof}
We now generalize the result in \cref{lem:ind_fnct} to all bounded and measurable functions, hereby proving \cref{prop:conv_for_bnd_cont_funct}.
\begin{proof}[Proof of \cref{prop:conv_for_bnd_cont_funct}]
We further assume that $g:\mathbb{R}^2_+\to\mathbb{R}_+$ is non-negative, the general case following by standard arguments.
Fubini's theorem, together with the fact that $g(x,y)=\int_0^{\|g\|_\infty}\mathds{1}_{\left\{z\leq g(x,y)\right\}}dz$, yields
\begin{multline}
\mathbb{E}\left[\left(\frac{1}{2n}\sum_{j=1}^{2n-1}g\left(\bm \xi_j,s_j\right)\right)^2\middle| (\bm s_i)_i\right]
=\mathbb{E}\left[\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1}\int_0^{\|g\|_\infty}\mathds{1}_{\left\{z\leq g\left(\bm \xi_\ell,s_\ell\right)\right\}}dz \int_0^{\|g\|_\infty}\mathds{1}_{\left\{t\leq g\left(\bm \xi_j,s_j\right)\right\}}dt\middle| (\bm s_i)_i\right]\\
=\int_0^{\|g\|_\infty}\int_0^{\|g\|_\infty}\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1}\mathbb{E}\left[\mathds{1}_{A(z)}\left(\bm \xi_\ell,\bm s_\ell\right)\mathds{1}_{A(t)}\left(\bm \xi_j,\bm s_j\right)\middle| \bm s_\ell,\bm s_j\right]dz \; dt,
\end{multline}
where $A(s)=\left\{(x,y)\in\mathbb{R}_+^2 \middle | g(x,y)\geq s \right\}$.
By \cref{lem:ind_fnct}, for almost every $(z,t)\in\mathbb{R}^2_+$
\begin{equation}
\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1}\mathbb{E}\left[\mathds{1}_{A(z)}\left(\bm \xi_\ell,\bm s_\ell\right)\mathds{1}_{A(t)}\left(\bm \xi_j,\bm s_j\right)\middle| (\bm s_i)_i\right]\xrightarrow[n\to \infty]{a.s.} \mathbb{E}\left[\mathds{1}_{A(z)}\left(\bm V,\bm U\right)\right]\mathbb{E}\left[\mathds{1}_{A(t)}\left(\bm V,\bm U\right)\right].
\end{equation}
Since the left-hand side is bounded by $1$, we can conclude by dominated convergence that
\begin{equation}
\mathbb{E}\left[\left(\frac{1}{2n}\sum_{j=1}^{2n-1}g\left(\bm \xi_j,s_j\right)\right)^2\middle| (\bm s_i)_i\right]\xrightarrow[n\to \infty]{a.s.}\mathbb{E}\left[g\left(\bm V,\bm U\right)\right]^2,
\end{equation}
completing the proof of the proposition.
\end{proof}
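\medskip
\noindent Although it plays no role in the proofs, the convergence stated in \cref{prop:conv_for_bnd_cont_funct} is easy to illustrate numerically. The following Python sketch is purely illustrative and rests on arbitrary choices (a two-state stationary Markov chain standing for the weakly-mixing process $\bm \xi$, scores $s_j$ drawn uniformly on $[0,1]$, and the bounded function $g(x,y)=xy$); it estimates the conditional second moment by simulation and compares it with $\mathbb{E}\left[g(\bm V,\bm U)\right]^2$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Two-state stationary Markov chain standing for the weakly-mixing process (xi_j).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
pi = np.array([4., 3.]) / 7.        # stationary distribution of P

def sample_chain(length):
    x = np.empty(length, dtype=int)
    x[0] = rng.choice(2, p=pi)
    for t in range(1, length):
        x[t] = rng.choice(2, p=P[x[t - 1]])
    return x

g = lambda x, y: x * y              # bounded measurable test function
n = 500
s = rng.uniform(size=2 * n - 1)     # one fixed realization of the scores (s_i)

# Monte Carlo estimate of E[ ( (1/2n) sum_j g(xi_j, s_j) )^2 | (s_i) ]
reps = 200
vals = []
for _ in range(reps):
    xi = sample_chain(2 * n - 1)
    vals.append((g(xi, s).sum() / (2 * n)) ** 2)
print("conditional second moment ~", np.mean(vals))

# Limit E[g(V, U)]^2 with V ~ pi, U ~ Uniform[0, 1]; here E[g(V, U)] = E[V] * E[U]
print("E[g(V, U)]^2 =", (pi[1] * 0.5) ** 2)
\end{verbatim}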
\section{Proof of the central limit theorems}\label{sect:CLT}
In this section, we start by proving \cref{thm:CLT} using \cref{prop:clt_sum_of_the_g} and then we prove the latter result.
\begin{proof}[Proof of \cref{thm:CLT}]
We set $\bm X_n\coloneqq\frac{\bm W_n(s)- 2n \cdot \mathbb{E}_{\vec{s}}[\bm W_n(s)]}{\sqrt{2n}}$. In order to prove the convergence in \cref{eq:CLT}, it is enough to show that, for every $t \in \mathbb{R}$,
\begin{equation}\label{eq:goal_proof_MGF}
\mathbb{E}_{\vec{s}} \left[e^{it\bm X_n}\right]\xrightarrow{n\to \infty}e^{-\frac{t^2}{2} \left(\sigma(s)^2+\rho(s)^2 \right)},
\end{equation}
where we recall that $\sigma(s)^2=\mathbb{E}\left[F_s(\bm V,\bm U)-F^2_s(\bm V,\bm U)\right]$ and $\rho(s)^2 = \rho_{F_s}^2$. Note that $\sigma(s)^2+\rho(s)^2$ is finite thanks to \cref{prop:clt_sum_of_the_g} and the fact that $F_s-F_s^2$ is a bounded and measurable function.
Recalling that $\bm W_n(s)=\sum_{j=1}^{2n-1}\mathds{1}_{W_j}$, $W_{j}$ being the event that the team $T_{0}$ wins against the team $T_j$,
and that, conditioning on $\left( \bm \xi^{0}_r \right)_{_{r \in [2n-1]}}$, the results of different matches are independent, we have that
\begin{multline}
\mathbb{E}_{\vec{s}}\left[e^{it\bm X_n}\right]
=e^{-\sqrt{2n} \cdot \mathbb{E}_{\vec{s}}[\bm W_n(s)] \cdot it} \cdot \mathbb{E}_{\vec{s}} \left[e^{\frac{it}{\sqrt{2n}} \sum_{j=1}^{2n-1}\mathds{1}_{W_j}}\right]\\
=e^{-\sqrt{2n} \cdot \mathbb{E}_{\vec{s}}[\bm W_n(s)] \cdot it}\cdot \mathbb{E}_{\vec{s}}\left[\mathbb{E}_{\vec{s}}\left[e^{\frac{it}{\sqrt{2n}}\sum_{j=1}^{2n-1}\mathds{1}_{W_j}}\middle|\left( \bm \xi^{0}_r \right)_{_{r \in [2n-1]}} \right]\right] \\
=e^{-\sqrt{2n} \cdot \mathbb{E}_{\vec{s}}[\bm W_n(s)] \cdot it}\cdot \mathbb{E}_{\vec{s}}\left[ \prod_{j=1}^{2n-1} \mathbb{E}_{\vec{s}}\left[e^{\frac{it}{\sqrt{2n}}\mathds{1}_{W_j}}\middle| \bm \xi^{0}_j \right]\right].
\end{multline}
Since, by assumption, we have that
$
\mathbb{P}_{\vec{s}}\left(W_j\middle | \bm \xi^{0}_j,\bm \xi^{j}_j\right)=f(s\cdot \bm \xi^{0}_j,s_j \cdot \bm \xi^{j}_j)
$
and, for all $j\in [2n-1]$, $\bm \xi^j_j$ is independent of $\bm \xi^{0}_j$, $\bm \xi^{0}_j\stackrel{d}{=}\bm \xi_j$ and $\bm \xi^j_j\stackrel{d}{=}\bm V'$, we have that
\begin{multline}
\mathbb{E}_{\vec{s}}\left[e^{it\bm X_n}\right]
= e^{-\sqrt{2n} \cdot \mathbb{E}_{\vec{s}}[\bm W_n(s)] \cdot it}\cdot \mathbb{E}\left[ \prod_{j=1}^{2n-1} \mathbb{E}\left[ 1 + \left( e^{\frac{it}{\sqrt{2n}}} -1 \right) f(s\cdot \bm \xi^{0}_j,s_j \cdot \bm \xi^{j}_j) \middle| \bm \xi^{0}_j \right]\right] \\
= e^{-\sqrt{2n} \cdot \mathbb{E}_{\vec{s}}[\bm W_n(s)] \cdot it}\cdot \mathbb{E}\left[ \prod_{j=1}^{2n-1} \left(1 + \left( e^{\frac{it}{\sqrt{2n}}} -1 \right) \cdot F_s(\bm \xi_j,s_j) \right)\right],
\end{multline}
where we recall that $F_s(x,y)=\mathbb{E}\left[f\left(s\cdot x,y\cdot \bm V'\right)\right]$.
Rewriting the last term as
\begin{equation}
e^{-\sqrt{2n} \cdot \mathbb{E}_{\vec{s}}[\bm W_n(s)] \cdot it}\cdot \mathbb{E}\left[ e^{ \sum_{j=1}^{2n-1} \log \left(1 + (e^{it /\sqrt{2n}} -1 ) \cdot F_s(\bm \xi_j,s_j ) \right) } \right],
\end{equation}
and observing that
\begin{multline}
\sum_{j=1}^{2n-1} \log \left( 1 + \left( e^{it /\sqrt{2n}} -1 \right) \cdot F_s\left(\bm \xi_j,s_j \right) \right) \\
= \sum_{j=1}^{2n-1} \left(\frac{it}{\sqrt{2n}} \cdot F_s\left(\bm \xi_j,s_j \right) - \frac{t^2}{4n} \left( F_s\left(\bm \xi_j,s_j \right) - F_s^2\left(\bm \xi_j,s_j \right) \right) + O \left(\frac{1}{n\sqrt{n}} \right)\right) \\
= \frac{i t}{\sqrt{2n}} \sum_{j=1}^{2n-1} F_s\left(\bm \xi_j,s_j \right) - \frac{t^2}{2} \cdot \frac{1}{2n} \sum_{j=1}^{2n-1} \left( F_s\left(\bm \xi_j,s_j \right) - F_s^2\left(\bm \xi_j,s_j \right) \right) + O \left(\frac{1}{\sqrt{n}} \right),
\end{multline}
we obtain that the characteristic function $\mathbb{E}_{\vec{s}}\left[e^{it\bm X_n}\right]$ is equal to
\begin{equation}
e^{O\left( \frac{1}{\sqrt n} \right)} \cdot e^{-\frac{t^2}{2}\sigma(s)^2}
\cdot \mathbb{E} \left[ e^{ \frac{it}{\sqrt{2n}} \left( \sum_{j=1}^{2n-1} F_s (\bm \xi_j,s_j ) - 2n \cdot \mathbb{E}_{\vec{s}}[\bm W_n(s)] \right) }
\cdot e^{- \frac{t^2}{2} \cdot \left(\frac{1}{2n} \sum_{j=1}^{2n-1} \left( F_s\left(\bm \xi_j,s_j \right) - F_s^2\left(\bm \xi_j,s_j \right) \right) - \sigma(s)^2 \right) } \right].
\end{equation}
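For the reader's convenience, we spell out the expansion used above: since $F_s$ takes values in $[0,1]$ (being a conditional winning probability), writing $w_j\coloneqq\big(e^{it/\sqrt{2n}}-1\big)F_s\left(\bm \xi_j,s_j\right)$ we have, uniformly in $j$,
\begin{equation}
e^{\frac{it}{\sqrt{2n}}}-1=\frac{it}{\sqrt{2n}}-\frac{t^2}{4n}+O\left(n^{-3/2}\right),
\qquad
w_j^2=-\frac{t^2}{2n}\, F_s^2\left(\bm \xi_j,s_j\right)+O\left(n^{-3/2}\right),
\end{equation}
so that $\log(1+w_j)=w_j-\frac{w_j^2}{2}+O\left(|w_j|^3\right)$ gives the $j$-th summand of the expansion above, and summing the $2n-1$ error terms of order $O(n^{-3/2})$ yields the global $O(n^{-1/2})$.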
Now we set
\begin{align}
&\bm A_n\coloneqq e^{ \frac{it}{\sqrt{2n}} \left( \sum_{j=1}^{2n-1} F_s (\bm \xi_j,s_j ) - 2n \cdot \mathbb{E}_{\vec{s}}[\bm W_n(s)] \right) },\\
&\bm B_n\coloneqq e^{- \frac{t^2}{2} \cdot \left(\frac{1}{2n} \sum_{j=1}^{2n-1} \left( F_s\left(\bm \xi_j,s_j \right) - F_s^2\left(\bm \xi_j,s_j \right) \right) - \sigma(s)^2 \right) },
\end{align}
obtaining that $\mathbb{E}_{\vec{s}}\left[e^{it\bm X_n}\right]= e^{O\left( \frac{1}{\sqrt n} \right)} \cdot e^{-\frac{t^2}{2}\sigma(s)^2}\left(\mathbb{E}\left[\bm A_n\right]-\mathbb{E}\left[\bm A_n(1-\bm B_n)\right]\right)$.
Hence, \cref{eq:goal_proof_MGF} holds if we show that
\begin{enumerate}
\item $\mathbb{E}\left[\bm A_n\right] \to e^{-\frac{t^2}{2} \rho(s)^2} $, \label{eq:clt_delta_small2}
\item $\mathbb{E}\left[\bm A_n(1-\bm B_n)\right] \to 0 $ .\label{eq:clt_delta_big2}
\end{enumerate}
Item 1 follows from \cref{prop:clt_sum_of_the_g}. For Item 2, since $|\bm A_n|=1$, we have that
\begin{equation}
\left|\mathbb{E}\left[\bm A_n(1-\bm B_n)\right] \right|\leq \mathbb{E}\left[|1-\bm B_n|\right].
\end{equation}
Recalling that $\sigma(s)^2=\mathbb{E}\left[F_s(\bm V,\bm U)-F^2_s(\bm V,\bm U)\right]$, and that $\bm \xi_j\stackrel{d}{=}\bm V$ for all $j\in[2n-1]$, we have that
\begin{multline}
\frac{1}{{2n}} \sum_{j=1}^{2n-1} \left( F_s\left(\bm \xi_j,s_j \right) - F_s^2\left(\bm \xi_j,s_j \right) \right) - \sigma(s)^2
=\\
\frac{1}{{2n}} \sum_{j=1}^{2n-1} \left( F_s\left(\bm \xi_j,s_j \right) - F_s^2\left(\bm \xi_j,s_j \right) \right)
-\frac{1}{{2n}} \sum_{j=1}^{2n-1} \mathbb{E}\left[ F_s\left(\bm V,s_j \right) - F_s^2\left( \bm V,s_j \right) \right]\\
+\frac{1}{{2n}} \sum_{j=1}^{2n-1} \mathbb{E}\left[ F_s\left(\bm V,s_j \right) - F_s^2\left( \bm V,s_j \right) \right]
-\mathbb{E}\left[F_s(\bm V,\bm U)-F^2_s(\bm V,\bm U)\right] \xrightarrow{P} 0,
\end{multline}
where for the limit we used once again \cref{prop:clt_sum_of_the_g} and similar arguments to the ones already used in the proof of \cref{prop:first_mom}.
Since the function $e^{-t^2x/2}$ is continuous and the random variable $\frac{1}{{2n}} \left( \sum_{j=1}^{2n-1} F_s\left(\bm \xi_j,s_j \right) - F_s^2\left(\bm \xi_j,s_j \right) \right) - \sigma(s)^2$ is bounded, we can conclude that $\mathbb{E}\left[|1-\bm B_n|\right]\to 0$. This ends the proof of \cref{thm:CLT}.
\end{proof}
The rest of this section is devoted to the proof of \cref{prop:clt_sum_of_the_g}. We start by stating a lemma that shows how the coefficients $\alpha_n$ defined in \cref{eq:def_alpha_n} control the correlations of the process $\bm \xi$.
\begin{lem}[Theorem 17.2.1 in \cite{MR0322926}]\label{lem:decay_correlations}
Fix $\tau \in \mathbb{N}$ and let $\bm X$ be a random variable measurable w.r.t.\ $\mathcal{A}_1^{k}$ and $\bm Y$ a random variable measurable w.r.t.\ $\mathcal{A}^{\infty}_{k + \tau}$. Assume, in addition, that $| \bm X | < C_1$ almost surely and $| \bm Y | < C_2$ almost surely. Then
\begin{equation}\label{eq:decay_correlations}
\left| \mathbb{E} \left[\bm X \bm Y\right] - \mathbb{E} [\bm X] \mathbb{E}[\bm Y] \right| \leq 4 \cdot C_1 \cdot C_2 \cdot \alpha_{\tau}.
\end{equation}
\end{lem}
We now focus on the behaviour of the random variables $\sum_{j=1}^{2n-1}g\left(\bm \xi_j,s_j\right)$ appearing in the statement of \cref{prop:clt_sum_of_the_g}. It follows directly from \cref{prop:conv_for_bnd_cont_funct} and
Chebyshev's inequality that, for $\mu^{\mathbb{N}}$-almost every sequence $(s_i)_{{i\in\mathbb{N}}}\in\mathbb{R}^{\mathbb{N}}_{+}$,
\begin{equation}
\frac{1}{2n} \sum_{j=1}^{2n-1}g\left(\bm \xi_j,s_j\right) \xrightarrow{P} \mathbb{E} \left[g \left( \bm V, \bm U \right)\right].
\end{equation}
We aim at establishing a central limit theorem.
Recalling the definition of the function $\tilde g$ in the statement of \cref{prop:clt_sum_of_the_g}, we note that, for all $j \in \mathbb{N}$,
\begin{equation}
\tilde{g} \left(\bm \xi_j,s_j \right) = g\left(\bm \xi_j,s_j \right) - \mathbb{E}[g\left(\bm V, s_j\right)],
\end{equation}
and so $\mathbb{E} \left[\tilde{g} \left(\bm \xi_j,s_j \right) \right] = 0$.
Define
\begin{equation}
\rho_{g,n}^2 \coloneqq \Var \left( \sum_{j=1}^{2n-1}\tilde{g} \left(\bm \xi_j,s_j\right) \right) = \Var \left( \sum_{j=1}^{2n-1}g\left(\bm \xi_j,s_j\right) \right).
\end{equation}
The following lemma shows that the variance $\rho_{g,n}^2$ is asymptotically linear in $n$ and proves the first part of \cref{prop:clt_sum_of_the_g}.
\begin{lem} \label{lem:variance_is_linear}
The quantity $\rho_g^2$ defined in \cref{eq:def_of_rho} is finite. Moreover, for $\mu^{\mathbb{N}}$-almost every sequence $(s_i)_{{i\in\mathbb{N}}}\in\mathbb{R}^{\mathbb{N}}_{+}$, we have that
\begin{equation} \label{eq:asympt_for_var}
\rho_{g,n}^2 = 2n\cdot \rho_g^2 \cdot (1+o(1)).
\end{equation}
\end{lem}
\begin{proof}
We have that
\begin{multline}
\rho_{g,n}^2 = \Var \left( \sum_{j=1}^{2n-1}\tilde{g} \left(\bm \xi_j,s_j\right) \right)
= \mathbb{E} \left[ \left( \sum_{j=1}^{2n-1}\tilde{g} \left(\bm \xi_j,s_j\right) \right)^2 \right] \\
= \mathbb{E} \left[ \sum_{j=1}^{2n-1}\tilde{g}^2 \left(\bm \xi_j,s_j\right) \right]
+2\cdot \mathbb{E} \left[ \sum_{i=1}^{2n-2} \sum_{j=i+1}^{2n-1} \tilde{g} \left(\bm \xi_i,s_i\right) \tilde{g} \left(\bm \xi_j,s_j\right) \right] \\
= \sum_{j=1}^{2n-1} \mathbb{E} \left[ \tilde{g}^2 \left(\bm \xi_j,s_j\right) \right]
+ 2\cdot \sum_{i=1}^{2n-2} \sum_{k=1}^{2n-1 -i} \mathbb{E} \left[ \tilde{g} \left(\bm \xi_i,s_i\right) \tilde{g} \left(\bm \xi_{i+k},s_{i+k}\right) \right].
\end{multline}
Using similar arguments to the ones already used in the proof of \cref{prop:first_mom}, we have that
\begin{equation}\label{eq:fibiwfwofnbew}
\sum_{j=1}^{2n-1} \mathbb{E} \left[\tilde{g}^2 \left(\bm \xi_j,s_j\right) \right] = 2n \cdot \mathbb{E} \left[ \tilde{g}^2 (\bm V, \bm U) \right] + o(n) .
\end{equation}
We now show that
\begin{equation}\label{eq:wfejbbfweqibfdwequobfd}
\lim_{n\to \infty} \frac{1}{2n} \sum_{i=1}^{2n-2} \sum_{k=1}^{2n-1 -i} \mathbb{E} \left[ \tilde{g} \left(\bm \xi_i,s_i\right) \tilde{g} \left(\bm \xi_{i+k},s_{i+k}\right) \right] = \sum_{k=1}^{\infty} \mathbb{E}\left[ \tilde{g} (\bm{\xi}_1, \bm U) \tilde{g} (\bm{\xi}_{1+k}, \bm U') \right].
\end{equation}
First we show that the right-hand side is convergent. We start by noting that from Fubini's theorem and \cref{lem:decay_correlations},
$$ \mathbb{E} \left[\mathbb{E} \left[ \tilde{g} (\bm{\xi}_1, \bm U) \tilde{g} (\bm{\xi}_{1+k}, \bm U') \middle| \bm{\xi}_1, \bm{\xi}_{1+k} \right]\right] =\int_{\mathbb{R}^2} \mathbb{E} \left[\tilde{g} (\bm{\xi}_1, x) \tilde{g} (\bm{\xi}_{1+k}, y)\right] d\mu(x) d\mu(y)\leq 4\cdot \alpha_k.$$
Therefore, thanks to the assumption in \cref{eq:strongly_mix_plus}, we have that
\begin{multline}\label{eq:bnd_with_alpha}
\sum_{k=1}^{\infty} \mathbb{E}\left[ \tilde{g} (\bm{\xi}_1, \bm U) \tilde{g} (\bm{\xi}_{1+k}, \bm U') \right] = \sum_{k=1}^{\infty} \mathbb{E}\left[ \mathbb{E} \left[ \tilde{g} (\bm{\xi}_1, \bm U) \tilde{g} (\bm{\xi}_{1+k}, \bm U') \middle| \bm{\xi}_1, \bm{\xi}_{1+k} \right] \right]
\leq \sum_{k=1}^{\infty} 4 \cdot \alpha_k < \infty.
\end{multline}
Now we turn to the proof of the limit in \cref{eq:wfejbbfweqibfdwequobfd}. Using the stationarity assumption for the process $\bm\xi$ in \cref{eq:stationarity}, we can write
\begin{equation}
\frac{1}{2n} \sum_{i=1}^{2n-2} \sum_{k=1}^{2n-1 -i} \mathbb{E} \left[ \tilde{g} \left(\bm \xi_i,s_i\right) \tilde{g} \left(\bm \xi_{i+k},s_{i+k}\right) \right]
= \sum_{k=1}^{2n-2} \frac{1}{2n} \sum_{i=1}^{2n-1-k} \mathbb{E} \left[ \tilde{g} \left(\bm \xi_1,s_i\right) \tilde{g} \left(\bm \xi_{1+k},s_{i+k}\right) \right].
\end{equation}
Using a monotone class argument similar to the one used for the law of large numbers, we will show that the right-hand side of the equation above converges to
\begin{equation}
\sum_{k=1}^{\infty} \mathbb{E}\left[ \tilde{g} (\bm{\xi}_1, \bm U) \tilde{g} (\bm{\xi}_{1+k}, \bm U') \right].
\end{equation}
We start, as usual, from indicator functions. We have to prove that for all quadruplets $(A,A',B,B')$ of Borel subsets of $\mathbb{R}_+$, it holds that
\begin{multline}\label{eq:dim_for_ind_fct_centered}
\sum_{k=1}^{2n-2} \frac{1}{2n} \sum_{i=1}^{2n-1-k} \mathbb{E} \left[ \tilde{\mathds{1}}_{A \times B}\left(\bm \xi_1,s_i\right)
\tilde{\mathds{1}}_{A' \times B'}\left(\bm \xi_{1+k},s_{i+k}\right) \right] \\
\to
\sum_{k=1}^{\infty} \mathbb{E}\left[ \tilde {\mathds{1}}_{A \times B}(\bm{\xi}_1, \bm U) \tilde {\mathds{1}}_{A' \times B'} (\bm{\xi}_{1+k}, \bm U') \right] <\infty,
\end{multline}
where for every rectangle $R$ of $\mathbb{R}_+^2$,
\begin{align}
\tilde{\mathds{1}}_{R}\left(x,y\right)\coloneqq\mathds{1}_{R}\left(x,y\right)-\mathbb{E}\left[ \mathds{1}_{R}\left(\bm \xi_1,y\right) \right].
\end{align}
Setting $S_n\coloneqq\sum_{k=1}^{n}\mathbb{E}\left[ \tilde {\mathds{1}}_{A \times B}(\bm{\xi}_1, \bm U) \tilde {\mathds{1}}_{A' \times B'} (\bm{\xi}_{1+k}, \bm U') \right]$, we estimate
\begin{multline}\label{eq:rehbgre0uq-9grg}
\left| S_{\infty} -
\sum_{k=1}^{2n-2} \frac{1}{2n} \sum_{i=1}^{2n-1-k} \mathbb{E} \left[ \tilde{\mathds{1}}_{A \times B}\left(\bm \xi_1,s_i\right) \tilde{\mathds{1}}_{A' \times B'}\left(\bm \xi_{1+k},s_{i+k}\right) \right] \right| \\
\leq \left| S_{\infty}-S_{2n-2}\right|
+ \left| S_{2n-2} -
\sum_{k=1}^{2n-2} \frac{1}{2n} \sum_{i=1}^{2n-2} \mathbb{E} \left[ \tilde{\mathds{1}}_{A \times B}\left(\bm \xi_1,s_i\right) \tilde{\mathds{1}}_{A' \times B'}\left(\bm \xi_{1+k},s_{i+k}\right) \right] \right| \\
+ \left| \sum_{k=1}^{2n-2} \frac{1}{2n} \sum_{i=2n-1-k}^{2n-2} \mathbb{E} \left[ \tilde{\mathds{1}}_{A \times B}\left(\bm \xi_1,s_i\right) \tilde{\mathds{1}}_{A' \times B'}\left(\bm \xi_{1+k},s_{i+k}\right) \right] \right|.
\end{multline}
Clearly, the first term in the right-hand side of the inequality above tends to zero, being the tail of a convergent series (the fact that $S_{\infty}<\infty$ follows via arguments already used for \cref{eq:bnd_with_alpha}). For the last term, we notice that, using \cref{lem:decay_correlations},
\begin{equation}
\left| \mathbb{E} \left[ \tilde{\mathds{1}}_{A \times B}\left(\bm \xi_1,s_i\right) \tilde{\mathds{1}}_{A' \times B'}\left(\bm \xi_{1+k},s_{i+k}\right) \right] \right| \leq 4 \cdot \alpha_k,
\end{equation}
and thus
\begin{equation}
\left| \sum_{k=1}^{2n-2} \frac{1}{2n} \sum_{i=2n-1-k}^{2n-2} \mathbb{E} \left[ \tilde{\mathds{1}}_{A \times B}\left(\bm \xi_1,s_i\right) \tilde{\mathds{1}}_{A' \times B'}\left(\bm \xi_{1+k},s_{i+k}\right) \right] \right| \leq \frac{1}{2n} \sum_{k=1}^{2n-2} 4 k \cdot \alpha_k,
\end{equation}
which converges to $0$ as $n$ goes to infinity by the assumption in \cref{eq:strongly_mix_plus} and the same arguments used in \cref{rem:fbkwufobw}.
It remains to bound the second term. Expanding the products and recalling that $\bm V, \bm V', \bm U, \bm U'$ are independent random variables such that $\bm{\xi}_{1}\stackrel{d}{=}\bm{\xi}_{1+k}\stackrel{d}{=}\bm {V}\stackrel{d}{=}\bm {V}'$ and $\bm {U}\stackrel{d}{=}\bm {U}'\stackrel{d}{=}\mu$, we have that
\begin{multline}
S_{2n-2}=\sum_{k=1}^{2n-2} \mathbb{E}\left[ \tilde {\mathds{1}}_{A \times B}(\bm{\xi}_1, \bm U) \tilde {\mathds{1}}_{A' \times B'} (\bm{\xi}_{1+k}, \bm U') \right]\\
=
\sum_{k=1}^{2n-2} \mathbb{E}\left[ \mathds{1}_{A \times B}(\bm{\xi}_1, \bm U) \mathds{1}_{A' \times B'} (\bm{\xi}_{1+k}, \bm U') \right]-\mathbb{E}\left[ \mathds{1}_{A \times B}(\bm V, \bm U) \mathds{1}_{A' \times B'} (\bm{V}', \bm U') \right]\\
=
\sum_{k=1}^{2n-2} \mu \left(B\right) \cdot \mu \left(B'\right) \cdot
\left(\mathbb{E}\left[ \mathds{1}_{A \times A'}(\bm{\xi}_1,\bm{\xi}_{1+k} )\right]-\mathbb{E}\left[ \mathds{1}_{A \times A'}(\bm V,\bm{V}') \right]\right).
\end{multline}
Similarly, we obtain
\begin{multline}
\mathbb{E} \left[ \tilde{\mathds{1}}_{A \times B}\left(\bm \xi_1,s_i\right)
\tilde{\mathds{1}}_{A' \times B'}\left(\bm \xi_{1+k},s_{i+k}\right) \right]\\
=
\mathbb{E} \left[ \mathds{1}_{A \times B}\left(\bm \xi_1,s_i\right) \mathds{1}_{A' \times B'}\left(\bm \xi_{1+k},s_{i+k}\right)\right]-\mathbb{E} \left[ \mathds{1}_{A \times B}\left( \bm V,s_i\right)\right]\mathbb{E}\left[ \mathds{1}_{A' \times B'}\left(\bm V',s_{i+k}\right)\right]\\
= \mathds{1}_{B \times B'}(s_i,s_{i+k})\cdot \left(\mathbb{E} \left[ \mathds{1}_{A \times A'}\left(\bm \xi_1,\bm \xi_{1+k}\right)\right]-\mathbb{E} \left[ \mathds{1}_{A \times A'}\left( \bm V,\bm V'\right)\right]\right).
\end{multline}
Therefore the second term in the right-hand side of \cref{eq:rehbgre0uq-9grg} is bounded by
\begin{equation}\label{eq:erbgorobgegoe}
\sum_{k=1}^{2n-2} \left| \mathbb{E} \left[ \mathds{1}_{A \times A'}\left(\bm \xi_1,\bm \xi_{1+k}\right)\right]-\mathbb{E} \left[ \mathds{1}_{A \times A'}\left( \bm V,\bm V'\right)\right]\right| \cdot \left| \mu (B)\mu(B') -\frac{1}{2n} \sum_{i=1}^{2n-2} \mathds{1}_{B \times B'} (s_i, s_{i+k}) \right|.
\end{equation}
Using \cref{prop:uniform_bound} we have that
$$\sup_{k \in [2n-2]} \left| \mu (B)\mu(B') -\frac{1}{2n} \sum_{i=1}^{2n-2} \mathds{1}_{B \times B'} (\bm s_i, \bm s_{i+k}) \right| \xrightarrow[n\to\infty]{a.s.} 0.$$
In addition, using once again \cref{lem:decay_correlations} and the assumption in \cref{eq:strongly_mix_plus} we have that $$\sum_{k=1}^{2n-2} \left| \mathbb{E} \left[ \mathds{1}_{A \times A'}\left(\bm \xi_1,\bm \xi_{1+k}\right)\right]-\mathbb{E} \left[ \mathds{1}_{A \times A'}\left( \bm V,\bm V'\right)\right]\right|<\infty.$$
The last two equations imply that the bound in \cref{eq:erbgorobgegoe} tends to zero as $n$ tends to infinity for $\mu^{\mathbb{N}}$-almost every sequence $(s_i)_{{i\in\mathbb{N}}}\in\mathbb{R}^{\mathbb{N}}_{+}$, completing the proof of \cref{eq:dim_for_ind_fct_centered}.
In order to conclude the proof of the lemma, it remains to generalize the result in \cref{eq:dim_for_ind_fct_centered} to all bounded and measurable functions. This can be done using the same techniques adopted to prove the law of large numbers, therefore we skip the details.
\end{proof}
We now complete the proof of \cref{prop:clt_sum_of_the_g}.
\begin{proof}[Proof of \cref{prop:clt_sum_of_the_g}]
Recalling that $\tilde g (x,y) \coloneqq g(x,y) - \mathbb{E} \left[ g\left( \bm V, y\right) \right]$ and thanks to \cref{lem:variance_is_linear}, it is enough to show that
\begin{equation}\label{eq:clt_h}
\frac{1}{\rho_{g,n}} \sum_{j=1}^{2n-1}\tilde{g}\left(\bm \xi_j,s_j\right)\xrightarrow{d} \bm{\mathcal{N}}(0, 1).
\end{equation}
The difficulty in establishing this convergence lies in the fact that we are dealing with a sum of random variables that are neither independent nor identically distributed.
We proceed in two steps. First, we apply Bernstein's method, reducing the problem to the study of a sum of ``almost'' independent random variables. More precisely, we use the decay of the correlations for the process $\bm \xi$ to decompose $\sum_{j=1}^{2n-1}\tilde{g}\left(\bm \xi_j,s_j\right)$ into two distinct sums, in such a way that one of them is a sum of ``almost'' independent random variables and the other one is negligible (in a sense that will be specified in due time). After having dealt with the lack of independence, we settle the issue that the random variables are not identically distributed using Lyapounov's condition.
We start with the first step. Recall that we assume the existence of two sequences $p=p(n)$ and $q=q(n)$ such that:
\begin{itemize}
\item $p\xrightarrow{n\to\infty} +\infty$ and $q\xrightarrow{n\to\infty} +\infty$,
\item $q=o(p)$ and $p=o(n)$ as $n \to \infty$,
\item $n p^{-1 } \alpha_q=o(1)$,
\item $ \frac{p}{n} \cdot \sum_{j=1}^p j \alpha_j = o(1)$.
\end{itemize}
As said above, we represent the sum $\sum_{j=1}^{2n-1}\tilde{g}\left(\bm \xi_j,s_j\right)$ as a sum of nearly independent random variables (the \emph{big blocks} of size $p$, denoted $\bm \beta_i$ below) alternating with other terms (the \emph{small blocks} of size $q$, denoted $\bm \gamma_i$ below) whose sum is negligible.
We define $k = \lfloor (2n-1)/(p+q) \rfloor$.
We can thus write
\begin{equation}
\sum_{j=1}^{2n-1}\tilde{g}\left(\bm \xi_j,s_j\right) = \sum_{i=0}^{k-1} \bm \beta_i + \sum_{i=0}^k \bm \gamma_i,
\end{equation}
where, for $0\leq i \leq k-1$,
\begin{equation}
\bm \beta_i= \bm \beta_i (\tilde g, n) \coloneqq \sum_{j=ip+iq+1}^{(i+1)p+iq} \tilde g \left(\bm \xi_j,s_j\right), \quad\quad
\bm \gamma_i= \bm \gamma_i (\tilde g, n) \coloneqq \sum_{j=(i+1)p+iq+1}^{(i+1)p+(i+1)q} \tilde g\left(\bm \xi_j,s_j\right),
\end{equation}
and
\begin{equation}
\quad \quad \bm \gamma_k=\bm \gamma_k(\tilde g, n) \coloneqq \sum_{j=kp+kq+1}^{2n-1} \tilde g\left(\bm \xi_j,s_j\right).
\end{equation}
Henceforth, we will omit the dependence on $\tilde g$ and on $n$ simply writing $\bm \beta_i$ and $\bm \gamma_i$, in order to simplify the notation (whenever it is clear). Setting $\bm H_n'\coloneqq \frac{1}{\rho_{g,n}} \sum_{i=0}^{k-1} \bm \beta_i$ and $\bm H_n''\coloneqq \frac{1}{\rho_{g,n}} \sum_{i=0}^k \bm \gamma_i $, we can write
\begin{equation}
\frac{1}{\rho_{g,n}}\sum_{j=1}^{2n-1}\tilde{g}\left(\bm \xi_j,s_j\right) = \bm H_n' + \bm H_n''.
\end{equation}
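The decomposition into blocks is purely combinatorial; the following Python sketch (illustrative only) builds the index sets of the big and small blocks and checks that they partition $\{1,\dots,2n-1\}$. The choice $p=\lfloor n^{2/3}\rfloor$, $q=\lfloor n^{1/3}\rfloor$ is just one admissible example, valid for instance when $\sum_j j\alpha_j<\infty$ and $\alpha_m=o(1/m)$; it is not imposed by the assumptions above.
\begin{verbatim}
def bernstein_blocks(n, p, q):
    """Index sets of the big blocks (beta_i) and small blocks (gamma_i,
    including the remainder gamma_k) for the sum over 1..2n-1."""
    N = 2 * n - 1
    k = N // (p + q)
    beta, gamma = [], []
    for i in range(k):
        start = i * (p + q)
        beta.append(range(start + 1, start + p + 1))            # ip+iq+1 .. (i+1)p+iq
        gamma.append(range(start + p + 1, start + p + q + 1))   # (i+1)p+iq+1 .. (i+1)(p+q)
    gamma.append(range(k * (p + q) + 1, N + 1))                 # gamma_k: remainder
    return beta, gamma

n = 1000
p, q = int(n ** (2 / 3)), int(n ** (1 / 3))   # one admissible choice (see text)
beta, gamma = bernstein_blocks(n, p, q)
covered = sorted(j for blk in beta + gamma for j in blk)
assert covered == list(range(1, 2 * n))       # blocks partition {1, ..., 2n-1}
\end{verbatim}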
The proof of \cref{eq:clt_h} now consists of two steps. First, we show that $\bm H_n''\xrightarrow{P} 0$, and secondly we show that the characteristic function of $\bm H_n'$ converges to the characteristic function of a standard Gaussian random variable. Then, we can conclude using standard arguments.
We start by proving that $\bm H_n'' \xrightarrow{P} 0$. By
Chebyshev's inequality and the fact that $\mathbb{E}[\bm H_n'']=0$, it is enough to show that $\mathbb{E} \left[\left( \bm H_n''\right)^2\right] \to 0$ as $n \to \infty$.
We can rewrite $\mathbb{E} \left[\left( \bm H_n''\right)^2\right]$ as
\begin{multline}
\frac{1 }{\rho_{g,n}^2} \cdot \mathbb{E} \left[\left( \sum_{i=0}^{k} \bm \gamma_i \right)^2\right]
= \frac{1 }{\rho_{g,n}^2}
\left(
\mathbb{E} \left[ \sum_{i=0}^{k-1} \bm \gamma_i^2 \right]
+ \mathbb{E} \left[\bm \gamma_k^2 \right]
+\mathbb{E} \left[ \sum_{ \substack{ i,j=0\\i \neq j} }^{k-1} \bm \gamma_i \bm \gamma_j \right]
+ 2 \mathbb{E} \left[ \sum_{i=0}^{k-1} \bm \gamma_i \bm \gamma_k \right]
\right).
\end{multline}
Note that, by definition of $\bm \gamma_i$ and using \cref{lem:decay_correlations} once again, for $i\neq j$, we have the bounds
\begin{equation}\label{eq:ebfgreubfoeqrf}
\mathbb{E} \left[ \bm \gamma_i \bm \gamma_j \right] \leq q^2 \cdot \alpha_{p(i-j)},
\end{equation}
and
\begin{equation}
\mathbb{E} \left[ \bm \gamma_i \bm \gamma_k \right] \leq q \cdot (p+q) \cdot \alpha_{p(k-i)}.
\end{equation}
Moreover, by the same argument used in \cref{lem:variance_is_linear}, we have that
\begin{equation}
\mathbb{E} \left[ \bm \gamma_i^2 \right] = \rho_g^2 \cdot q \cdot (1+o(1))=O(q)
\end{equation}
and
\begin{equation}
\mathbb{E} \left[ \bm \gamma_k^2 \right] = O(p+q) = O(p).
\end{equation}
Hence, using \cref{lem:variance_is_linear},
\begin{equation}
\frac{1}{\rho_{g,n}^2} \mathbb{E} \left[ \sum_{i=0}^{k-1} \bm \gamma_i^2 \right] = O\left(\frac{kq}{n}\right) = O\left(\frac{q}{p}\right) = o(1)
\end{equation}
and
\begin{equation}
\frac{1 }{\rho_{g,n}^2} \mathbb{E} \left[\bm \gamma_k^2 \right] = o(1).
\end{equation}
Using \cref{eq:ebfgreubfoeqrf}, we also have that
\begin{multline}
\frac{1}{\rho_{g,n}^2} \mathbb{E} \left[ \sum_{i,j=1, i \neq j}^{k-1} \bm \gamma_i \bm \gamma_j \right]
\leq
\frac{2}{\rho_{g,n}^2} \sum_{j=0}^{k-1} \sum_{i=j+1}^{k-1} q^2 \cdot \alpha_{p(i-j)}
= \frac{2}{\rho_{g,n}^2} \sum_{j=0}^{k-1} \sum_{m=1}^{k-j} q^2 \cdot \alpha_{pm} \\
\leq \frac{2}{\rho_{g,n}^2}\cdot kq^2 \sum_{m=1}^{k} \alpha_{pm} \leq \frac{2kq^2}{\rho_{g,n}^2\cdot p} \sum_{m=1}^{\infty} \alpha_{m}= o(1),
\end{multline}
where in the last inequality we used that, since $\alpha_n$ is decreasing,
\begin{equation}
\sum_{m=1}^{k} \alpha_{pm} \leq \sum_{m=1}^{k} \frac{1}{p} \cdot \sum_{s=(m-1)p+1}^{mp} \alpha_s \leq \frac{1}{p} \cdot \sum_{s=1}^{\infty} \alpha_s.
\end{equation}
Analogously, we can prove that
\begin{equation}
\frac{2}{\rho_{g,n}^2} \mathbb{E} \left[ \sum_{i=0}^{k-1} \bm \gamma_i \bm \gamma_k \right] = o(1),
\end{equation}
concluding the proof that $\bm H_n'' \xrightarrow{P} 0$.
Now we turn to the study of the limiting distribution of $\bm H_n'$.
We have that, for $t \in \mathbb{R}$,
\begin{equation}
\exp \left\{ it \bm H_n' \right\} = \exp \left\{ \frac{it}{\rho_{g,n}} \sum_{i=0}^{k-1} \bm \beta_i \right\}.
\end{equation}
We now look at
$\exp \left\{ \frac{it}{\rho_{g,n}} \sum_{i=1}^{k-2} \bm \beta_i \right\}$
and
$ \exp \left\{ \frac{it}{\rho_{g,n}} \bm \beta_{k-1} \right\}$.
We have that the first random variable is measurable with respect to $\mathcal A_1^{(k-1)p + (k-2)q}$ and the second one is measurable with respect to $\mathcal A_{(k-1)p + (k-1)q+1}^{\infty}$. So, by \cref{lem:decay_correlations},
\begin{equation}
\left| \mathbb{E} \left[\exp \left\{ \frac{it}{\rho_{g,n}}\sum_{i=0}^{k-1} \bm \beta_i \right\} \right] - \mathbb{E} \left[ \exp \left\{ \frac{it}{\rho_{g,n}}\sum_{i=0}^{k-2} \bm \beta_i \right\}\right] \mathbb{E} \left[ \exp \left\{ \frac{it}{\rho_{g,n}} \bm \beta_{k-1} \right\}\right]\right| \leq 4 \cdot \alpha_q.
\end{equation}
Iterating, we get
\begin{equation}\label{eq:char_function_ofh_converges}
\left| \mathbb{E} \left[\exp \left\{ \frac{it}{\rho_{g,n}}\sum_{i=0}^{k-1} \bm \beta_i \right\} \right] -
\prod_{i=0}^{k-1} \mathbb{E} \left[ \exp \left\{ \frac{it}{\rho_{g,n}} \bm \beta_{i} \right\}\right]\right| \leq 4 \cdot (k-1) \cdot \alpha_q,
\end{equation}
the latter quantity tending to $0$ as $k \to \infty$ thanks to the assumptions on the sequences $p$ and $q$.
The last step of the proof consists in showing that, as $n \to \infty$,
\begin{equation}\label{eq:ifbueiwufbweonfew}
\prod_{i=0}^{k-1} \mathbb{E} \left[ \exp \left\{ \frac{it}{\rho_{g,n}} \bm \beta_{i} \right\}\right] \to e^{ -\frac{t^2}{2} }.
\end{equation}
Consider a collection of independent random variables $\bm X_{n,i}$, $n \in \mathbb{N}, i \in [k-1]_0$, such that $\bm X_{n,i}$ has the same distribution as $\frac{1}{\rho_{g,n}}\bm \beta_i (n)$.
By \cite[Theorem 27.3]{MR1324786}, a sufficient condition ensuring that $\sum_{i=0}^{k-1}\bm X_{n,i}\xrightarrow{d} \bm{\mathcal N}(0,1)$, and hence \cref{eq:ifbueiwufbweonfew},
is the well-known Lyapounov condition:
\begin{equation}\label{eq:fbbfoqehfoiewqhf}
\lim_{n \to \infty} \frac{1 }{\rho_{g,n}^{2+\delta}} \sum_{i=0}^{k-1} \mathbb{E} \left[ \bm Y_{n,i}^{2+\delta} \right] = 0, \quad \text{for some} \quad \delta>0,
\end{equation}
where $\bm Y_{n,i} \coloneqq \bm X_{n,i} \cdot \rho_{g,n} \stackrel{d}{=} \sum_{j=ip+iq+1}^{(i+1)p+iq} \tilde{g} \left( \bm \xi_j, s_j \right)$.
The condition is satisfied with $\delta=2$ thanks to the following lemma.
\begin{lem}\label{lem:CLT_bound_fourth_moment}
Under the assumptions of \cref{prop:clt_sum_of_the_g}, we have that for $\mu^{\mathbb{N}}$-almost every sequence $(s_i)_{{i\in\mathbb{N}}}\in\mathbb{R}^{\mathbb{N}}_{+}$, uniformly for all $i \in [k-1]_0$,
\begin{equation}
\mathbb{E} \left[ \left( \bm Y_{n,i} \right)^4 \right] = O \left( p^2 \cdot \sum_{j=1}^{p} j\alpha_j \right).
\end{equation}
\end{lem}
Before proving the lemma above, we explain how it implies the condition in \cref{eq:fbbfoqehfoiewqhf} with $\delta=2$. By \cref{lem:variance_is_linear}, we have that $\rho_{g,n}^2 = 2 n \cdot \rho^2_g \cdot (1+o(1))$, thus
\begin{equation}
\frac{1}{\rho_{g,n}^4 } \sum_{i=0}^{k-1} \mathbb{E} \left[ \bm Y_{n,i}^4 \right]
\leq
C \cdot \frac{k\cdot p^2 \cdot \sum_{j=1}^{p} j\alpha_j}{4 n^2 \cdot \rho^4_g} \to 0,
\end{equation}
where for the limit we used the fact that $k \cdot p = O(n)$ and that, by assumption, $ \frac{ p \cdot \sum_{j=1}^{p} j\alpha_j}{n} \to 0$.
\medskip
We conclude the proof of \cref{prop:clt_sum_of_the_g} by proving \cref{lem:CLT_bound_fourth_moment}. Let $A_0 \coloneqq [p]$ and $A_i \coloneqq [(i+1)p+iq] \setminus [ip+iq]$ for $i \geq 1$. Note that $|A_i|=p$ for all $i\geq 0$. We have that
\begin{align}
&\mathbb{E} \left[ \left( \bm Y_{n,i} \right)^4 \right] = \mathbb{E} \left[ \left( \sum_{j=ip+iq+1}^{(i+1)p+iq} \tilde g \left( \bm \xi_j, s_j \right) \right)^4 \right] \\
=&O\Bigg(\sum_{j \in A_i} \mathbb{E} \left[ \tilde g^4 \left( \bm \xi_j, s_j \right) \right]
+ \sum_{\substack {j,k \in A_i \\ j\neq k}} \mathbb{E} \left[ \tilde g ^2 \left( \bm \xi_j, s_j \right) \tilde g ^2 \left( \bm \xi_k, s_k\right) \right]
+ \sum_{\substack{j,k \in A_i\\ j\neq k}} \mathbb{E} \left[ \tilde g^3 \left( \bm \xi_j, s_j \right) \tilde g \left( \bm \xi_k, s_k\right) \right] \\
+ &\sum_{\substack{j,k, l \in A_i \\ j\neq k\neq l}} \mathbb{E} \left[ \tilde g^2 \left( \bm \xi_j, s_j \right) \tilde g \left( \bm \xi_k, s_k\right) \tilde g \left( \bm \xi_l, s_l\right) \right]
+ \sum_{\substack{j,k, l,m \in A_i \\ j\neq k\neq l \neq m}} \mathbb{E} \left[ \tilde g\left( \bm \xi_j, s_j \right) \tilde g \left( \bm \xi_k, s_k\right) \tilde g \left( \bm \xi_l, s_l\right) \tilde g \left( \bm \xi_m, s_m\right) \right]\Bigg).
\end{align}
The fact that $\tilde g$ is bounded and the decay of the correlations will give us some bounds for each of these terms. First of all, since $\tilde g$ is bounded, we have that
$
\sum_{j \in A_i} \mathbb{E} \left[ \tilde g^4 \left( \bm \xi_j, s_j \right) \right] = O(p),
$
$
\sum_{j,k \in A_i, j\neq k} \mathbb{E} \left[ \tilde g^2 \left( \bm \xi_j, s_j \right) \tilde g^2 \left( \bm \xi_k, s_k\right) \right] = O \left(p^2 \right),
$
and
$
\sum_{j,k \in A_i, j\neq k} \mathbb{E} \left[ \tilde g^3 \left( \bm \xi_j, s_j \right) \tilde g \left( \bm \xi_k, s_k\right) \right] = O \left(p^2 \right).
$
We now look at the fourth addend.
We have that
\begin{multline}
\sum_{\substack{j,k, l \in A_i\\ j\neq k\neq l}} \mathbb{E} \left[ \tilde g^2 \left( \bm \xi_j, s_j \right) \tilde g\left( \bm \xi_k, s_k\right) \tilde g \left( \bm \xi_l, s_l\right) \right] \\
=
O \left(
\sum_{\substack{j,k, l \in A_i\\ j< k< l}} \mathbb{E} \left[ \tilde g^2 \left( \bm \xi_j, s_j \right) \tilde g\left( \bm \xi_k, s_k\right) \tilde g \left( \bm \xi_l, s_l\right) \right]
\right)
=
O \left(
\sum_{l \in A_i} \sum_{k = ip+iq+1 }^{l-1} \sum_{j=ip+iq+1}^{k} \alpha_{l-k}
\right) = O(p^2),
\end{multline}
since $\sum_{i=1}^{\infty} \alpha_{i} < + \infty$ by assumption and $|A_i|=p$.
Finally, we estimate the last addend.
We have that
\begin{multline}
\sum_{\substack{j,k, l,m \in A_i\\ j\neq k\neq l \neq m}} \mathbb{E} \left[ \tilde g \left( \bm \xi_j, s_j \right) \tilde g\left( \bm \xi_k, s_k\right) \tilde g \left( \bm \xi_l, s_l\right) \tilde g \left( \bm \xi_m, s_m\right) \right] \\
=
O \left(
\sum_{\substack{j,k, l,m \in A_i\\ j < k < l <m}} \mathbb{E} \left[ \tilde g\left( \bm \xi_j, s_j \right) \tilde g \left( \bm \xi_k, s_k\right) \tilde g \left( \bm \xi_l, s_l\right) \tilde g\left( \bm \xi_m, s_m\right) \right]
\right)
=
O \left(
\sum_{\substack{j,k, l,m \in A_i\\j < k < l <m}} \min \{\alpha_{k-j}, \alpha_{m-l}\}
\right).
\end{multline}
We analyse the last expression. Since the sequence $(\alpha_n)_{n\in\mathbb{N}}$ is decreasing, we see that
\begin{multline}
\sum_{\substack{j,k, l,m \in A_i\\j < k < l <m}} \min \{\alpha_{k-j}, \alpha_{m-l}\}
=
\sum_{j \in A_i} \sum_{x =1}^{p-j} \sum_{\substack{l \in A_i \\ l>j+x}} \sum_{y =1}^{p-l} \min \{\alpha_{x}, \alpha_{y}\} \\
=
\sum_{j \in A_i} \sum_{x =1}^{p-j} \sum_{\substack{l \in A_i \\ l>j+x}} \sum_{y =1}^{x} \alpha_{x}
+
\sum_{j \in A_i} \sum_{x =1}^{p-j} \sum_{\substack{l \in A_i \\ l>j+x}} \sum_{y =x+1}^{p-l} \alpha_{y} .
\end{multline}
Since $|A_i|=p$ we have that
\begin{equation}
\sum_{j \in A_i} \sum_{x =1}^{p-j} \sum_{\substack{l \in A_i \\ l>j+x}} \sum_{y =1}^{x} \alpha_{x}
\leq p^2 \cdot \sum_{x=1}^p x\alpha_{x},
\end{equation}
and that
\begin{equation}
\sum_{j \in A_i} \sum_{x =1}^{p-j} \sum_{\substack{l \in A_i \\ l>j+x}} \sum_{y =x+1}^{p-l} \alpha_{y}
\leq
p^2 \cdot \sum_{y=1}^p y \alpha_{y},
\end{equation}
from which we conclude that
\begin{equation}
\sum_{\substack{j,k, l,m \in A_i\\ j\neq k\neq l \neq m}} \mathbb{E} \left[ \tilde g \left( \bm \xi_j, s_j \right) \tilde g\left( \bm \xi_k, s_k\right) \tilde g \left( \bm \xi_l, s_l\right) \tilde g \left( \bm \xi_m, s_m\right) \right]
=
O \left( p^2 \cdot \sum_{j=1}^p j \alpha_j \right).
\end{equation}
This concludes the proof of \cref{lem:CLT_bound_fourth_moment}, and hence of \cref{prop:clt_sum_of_the_g} as well.
\end{proof}
\section{Introduction}
In most online competitive games,
players need an ``avatar'' (an online identity) to log into the game network.
Nothing forbids a player from having several avatars
and, actually, it is a very common practice for cyberathletes.
Players generally have one official avatar for official tournaments,
and several others to conceal their game tactics without being recognized by other players
they may meet online: global rankings and leagues are public just as in chess and tennis,
while game logs are available and amenable to analysis by means of visualization and
machine learning, just as in standard sport analytics.
Accordingly, we are facing a set of players, generating behavioural data,
in an unknown one-to-many relationship with avatars (handling \textit{many-to-many}
relationships is left to future work).
In this context, the \prob{} aims at discovering the group of avatars belonging to the same player.
Solving this problem is motivated by the growing need of e-sport structures to
study the games and strategies of the opponents (match preparation),
and the security challenges of game editors (detecting avatar usurpers).
Yan et al. showed that a classifier can be trained to predict
with high accuracy the avatars involved in a game play
of \starcraft{}~\cite{DBLP:conf/chi/YanHC15}.
Nevertheless, they purposely considered datasets without players having several avatars
(what we call avatar aliases): in the presence of such \textit{avatar aliases},
the prediction accuracy drastically degrades,
since prediction models fail at differentiating two avatars of the same player.
We extend this work and address the \prob{}:
our approach relies on mining the confusion matrix yielded by a supervised classifier
using Formal Concept Analysis~\cite{ganter99},
and exploits the confusion a classifier has in the presence of avatar aliases
that belong to the same player.
Experimental evaluation shows promising results.
\section{Basic notations and general intuition\label{sec:method}}
\newcommand{\ensuremath{\tilde{C}^\rho}}{\ensuremath{\tilde{C}^\rho}}
\newcommand{\ensuremath{C^\rho}}{\ensuremath{C^\rho}}
Let $A$ be a set of avatars and $T$ be a set of traces such that, for a given avatar $a \in A$, the set $T_a \subseteq T$ is the set of all traces generated by $a$. Consider a classifier $\rho$ whose labels are the avatars to predict. A classifier is a function $\rho: T \rightarrow A$ that assigns the avatar $\rho(t) \in A$ to a given trace $t \in T$.
Let $n = |A|$ be the number of avatars in $A$,
from any classifier $\rho$, one can derive a confusion matrix
$\ensuremath{C^\rho}_{n \times n} = (c_{i,j})$
where $c_{i,j} = \vert \{ t \in T_{a_i} ~s.t.~ \rho(t) = a_j \} \vert$.
Each row and column of $\ensuremath{C^\rho}$ correspond to an avatar,
while the value $c_{ij}$ is the number of traces of avatar $a_i$
that are classified by $\rho$ as of avatar $a_j$.
The normalized confusion matrix is given by
$\ensuremath{\tilde{C}^\rho} = [c_{i,j}/|T_{a_i}|]$
where $\ensuremath{\tilde{C}^\rho}_{i,i} = 1$, for a given $i \in [1, |A|]$, means that all the traces of avatar $a_i$ are correctly classified by $\rho$.
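\smallskip
\noindent\textit{Sketch}. The normalization is a one-line operation on the matrix of raw counts. In the Python sketch below, the counts are arbitrary (each avatar is given 20 traces) but chosen so that the normalized matrix coincides with the one of Figure~\ref{table.1}.
\begin{verbatim}
import numpy as np

# Raw counts c_{ij}: number of traces of avatar a_i classified as a_j by rho.
C_counts = np.array([[12,  8,  0,  0,  0],
                     [ 8, 11,  1,  0,  0],
                     [ 0,  0, 16,  3,  1],
                     [ 0,  1,  0, 14,  5],
                     [ 0,  0,  0, 10, 10]])

# Row-wise normalization by |T_{a_i}| gives the normalized confusion matrix.
C_tilde = C_counts / C_counts.sum(axis=1, keepdims=True)
print(C_tilde)
\end{verbatim}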
\begin{wrapfigure}{r}{40mm}\centering
{\scriptsize
\vspace{-7mm}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
& $a_1$ & $a_2$ & $a_3$ & $a_4$ & $a_5$ \\
\hline
$a_1$ & 0.6 & 0.4 & 0 & 0 & 0 \\ \hline
$a_2$ & 0.4 & 0.55 & 0.05 & 0 & 0 \\ \hline
$a_3$ & 0 & 0 & 0.8 & 0.15 & 0.05 \\ \hline
$a_4$ & 0 & 0.05 & 0 & 0.7 & 0.25 \\ \hline
$a_5$ & 0 & 0 & 0 & 0.5 & 0.5 \\ \hline
\end{tabular}
\caption{\scriptsize Confusion matrix}
\label{table.1}
}
\vspace{-8mm}
\end{wrapfigure}
Our goal is to discover the group of avatars that belong to the same player.
Our intuition is that a classifier will hardly differentiate these avatar aliases,
hence the confusion matrix values should be high and concentrated around them.
This is exemplified in Figure \ref{table.1}: avatars $\{a_1, a_2\}$ are candidates
to belong to the same player, $\{a_4, a_5\}$ shall belong to another player,
while ${a_3}$ stays as singleton with a diagonal high value.
A reasonable clustering of avatars would be given by
$\{ \{a_1, a_2\}, \{ a_3 \}, \{a_4, a_5\}\}$.
More formally, given a normalized confusion matrix $\ensuremath{\tilde{C}^\rho}$,
we would like to find pairs of avatars $a_i,a_j \in A$
such that $\ensuremath{\tilde{C}^\rho}_{ij} \simeq \ensuremath{\tilde{C}^\rho}_{ji} \simeq \ensuremath{\tilde{C}^\rho}_{ii} \simeq \ensuremath{\tilde{C}^\rho}_{jj}$
and $\ensuremath{\tilde{C}^\rho}_{ij} + \ensuremath{\tilde{C}^\rho}_{ji} + \ensuremath{\tilde{C}^\rho}_{ii} + \ensuremath{\tilde{C}^\rho}_{jj} \simeq 2$.
These conditions come from the fact that, if $a_i,a_j$ correspond to the same player,
traces in $T_{a_i}$ have the same probability of being classified as $a_i$ or $a_j$
(the same for traces in $T_{a_j}$).
Furthermore, for a trace of avatar $a_i$, it is required that the probability of classification is spread between $a_i$ and $a_j$ only,
meaning that $\ensuremath{\tilde{C}^\rho}_{ij} + \ensuremath{\tilde{C}^\rho}_{ii} \simeq 1$ (similarly for $a_j$).
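\smallskip
\noindent\textit{Sketch}. On the matrix of Figure~\ref{table.1}, the second condition already isolates the interesting pairs. The Python sketch below (with an arbitrary tolerance of $0.1$ on the total mass) retains exactly $\{a_1,a_2\}$ and $\{a_4,a_5\}$; note that the first condition is only loosely met by $\{a_4,a_5\}$, which is one reason why the method described next relies on scores rather than on hard thresholds.
\begin{verbatim}
import numpy as np
from itertools import combinations

# Normalized confusion matrix of Figure 1.
C = np.array([[0.60, 0.40, 0.00, 0.00, 0.00],
              [0.40, 0.55, 0.05, 0.00, 0.00],
              [0.00, 0.00, 0.80, 0.15, 0.05],
              [0.00, 0.05, 0.00, 0.70, 0.25],
              [0.00, 0.00, 0.00, 0.50, 0.50]])

eps = 0.1                     # arbitrary tolerance, for illustration only
for i, j in combinations(range(len(C)), 2):
    mass = C[i, i] + C[i, j] + C[j, i] + C[j, j]
    if mass >= 2 - eps:
        print("candidate pair:", (i + 1, j + 1), "mass =", round(mass, 2))
# -> prints (1, 2) and (4, 5)
\end{verbatim}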
\section{Method}
Our method first extracts fuzzy concepts from
the confusion matrix, then scores and post-processes
them to generate avatar pairs
that are candidate aliases.
\medskip
\noindent\textbf{Fuzzy concepts in a confusion matrix.} Let us define the fuzzy set of membership degrees $L^A$ where $L = [0,1]$,
such that the mapping $\delta:A \rightarrow L^A$ assigns membership values
to the avatar $a_i$ in the fuzzy set $L^A$ based on the normalized confusion matrix.
Simply, this is a mapping that assigns to $a_i$ its corresponding row in $\ensuremath{\tilde{C}^\rho}$ which we denote $\ensuremath{\tilde{C}^\rho}_i$. We model a confusion matrix $\ensuremath{\tilde{C}^\rho}$ as a pattern structure $(A,(L^A,\sqcap),\delta)$ \cite{ganter01a}.
The operator $\sqcap$ is a meet operator in a semi-lattice (idempotent, commutative and associative),
and is defined as follows, given two avatars $a_i,a_j \in A$:
\begin{align*}
\delta(a_i) \sqcap \delta(a_j) &= \langle min(\ensuremath{\tilde{C}^\rho}_{ik}, \ensuremath{\tilde{C}^\rho}_{jk}) \rangle,\, k \in [1,|A|] \\
\delta(a_i) \sqsubseteq \delta(a_j) &\iff \delta(a_i) \sqcap \delta(a_j) = \delta(a_i)
\end{align*}
\smallskip
\noindent\textit{Example}. The Figure \ref{table.1} illustrates a confusion matrix
obtained from a classifier $\rho$. We have
$\delta(a_1) = \{ a_1^{0.6}, a_2^{0.4}, a_3^{0}, a_4^0, a_5^0 \}$,
$\delta(a_2) = \{ a_1^{0.4}, a_2^{0.55}, a_3^{0.05}, a_4^0, a_5^0 \}$
and $\delta(a_1) \sqcap \delta(a_2) = \{ a_1^{0.4}, a_2^{0.4}, a_3^{0}, a_4^0, a_5^0 \} $.
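\smallskip
\noindent\textit{Sketch}. In practice, $\delta(a_i)$ is simply the $i$-th row of $\ensuremath{\tilde{C}^\rho}$ and $\sqcap$ is an element-wise minimum; the following Python lines (illustrative) reproduce the example above on the matrix of Figure~\ref{table.1}.
\begin{verbatim}
import numpy as np

C = np.array([[0.60, 0.40, 0.00, 0.00, 0.00],
              [0.40, 0.55, 0.05, 0.00, 0.00],
              [0.00, 0.00, 0.80, 0.15, 0.05],
              [0.00, 0.05, 0.00, 0.70, 0.25],
              [0.00, 0.00, 0.00, 0.50, 0.50]])

delta = lambda i: C[i]                       # delta(a_i): row i of the matrix
meet = lambda d1, d2: np.minimum(d1, d2)     # the similarity operator (element-wise min)
print(meet(delta(0), delta(1)))              # -> [0.4 0.4 0.  0.  0. ]
\end{verbatim}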
\smallskip
\newcommand{\ensuremath{\mathtt{A}}}{\ensuremath{\mathtt{A}}}
\newcommand{\ensuremath{\mathtt{d}}}{\ensuremath{\mathtt{d}}}
\newcommand{\pc}[1][]{\ensuremath{\mathtt{(A_{#1},d_{#1})}}}
Actually, $\sqcap$ corresponds to the fuzzy set intersection and $(L^A,\sqsubseteq)$
is a partial order over the elements of $L^A$ which can be represented as a semi-lattice. %
The pattern structure $(A,(L^A,\sqcap),\delta)$ is provided with two derivation operators,
forming a Galois connection \cite{ganter01a}. Formally, we have, for a subset of avatars $\ensuremath{\mathtt{A}} \subseteq A$
and a fuzzy set $\ensuremath{\mathtt{d}} \in L^A$: $\ensuremath{\mathtt{A}}^\square = \bigsqcap_{a \in \ensuremath{\mathtt{A}}} \delta(a)$ and $\ensuremath{\mathtt{d}}^\square = \{a \in A ~|~ \ensuremath{\mathtt{d}} \sqsubseteq \delta(a) \}$.
The pair $\pc$ is a pattern concept iff $\ensuremath{\mathtt{A}}^\square = \ensuremath{\mathtt{d}}$ and $\ensuremath{\mathtt{d}}^\square = \ensuremath{\mathtt{A}}$. Pattern concepts are ordered by extent inclusion such that for $\pc[1]$ and $\pc[2]$ we have:
$\pc[1] \leq \pc[2] \iff \ensuremath{\mathtt{A}}_1 \subseteq \ensuremath{\mathtt{A}}_2 \, (\text{or }\ensuremath{\mathtt{d}}_1 \sqsupseteq \ensuremath{\mathtt{d}}_2)$. A pattern concept $\pc$ contains a fuzzy set $\ensuremath{\mathtt{d}}$ which can be represented as a \emph{vector} $\ensuremath{\mathtt{d}} = \langle \ensuremath{\mathtt{d}}^j \rangle$ with length $|A|$ where each value $\ensuremath{\mathtt{d}}^j$
is the minimum for all rows $i$ in column $j$ of matrix $\ensuremath{\tilde{C}^\rho}$ s.t. $a_i \in \ensuremath{\mathtt{A}}$.
\medskip
\noindent\textbf{Computing and scoring concepts.} %
From the confusion matrix we compute all possible pattern concepts using the \texttt{addIntent} algorithm~\cite{vandermerwe2004addintent}. Pattern concepts are then ranked according to a score and converted into a list of pairs. For example, if a pattern concept extent contains three avatars $a_1,a_2$ and $a_3$, we convert this concept into pairs $(a_1,a_2)$, $(a_1,a_3)$ and $(a_2,a_3)$. The scoring function $s:L^A \rightarrow [0,1]$ is given as follows: for a pattern $\ensuremath{\mathtt{d}}$, $s(\ensuremath{\mathtt{d}}) = \Sigma_{j = 1}^{|A|} \ensuremath{\mathtt{d}}^j $.
\smallskip
\noindent\textit{Example}. In Figure \ref{table.1}, we have:
$s(\{a_1,a_2\}^\square) = 0.8$, $s(\{a_4,a_5\}^\square) = 0.75$ and
$s(\{a_1,a_2,a_4\}^\square) = 0.05$.
\smallskip
It is clear that the function $s$ is decreasing w.r.t. the order of pattern concepts, i.e. $\pc[1] \leq \pc[2] \implies s(\ensuremath{\mathtt{d}}_1) \leq s(\ensuremath{\mathtt{d}}_2)$.
Thus, pattern concepts can be mined up to a given score threshold analogously
as formal concepts can be mined up to a given minimal support.
We can appreciate that the higher the score of a given pattern,
the more \emph{confused} is the classification of traces of avatars $a \in \ensuremath{\mathtt{A}}$ by $\rho$ in $\ensuremath{\tilde{C}^\rho}$ and thus,
they become candidates for merging.
This property directly follows from the choice of our similarity operator $\sqcap$ as a fuzzy set intersection,
which behaves as a pessimistic operator (returning minimum values).
\medskip
\noindent\textbf{Ranking avatar aliases.} %
Consider the clustering conditions previously formalized as $\ensuremath{\tilde{C}^\rho}_{ij} \simeq \ensuremath{\tilde{C}^\rho}_{ji} \simeq \ensuremath{\tilde{C}^\rho}_{ii} \simeq \ensuremath{\tilde{C}^\rho}_{jj}$ and $\ensuremath{\tilde{C}^\rho}_{ii} + \ensuremath{\tilde{C}^\rho}_{ij} + \ensuremath{\tilde{C}^\rho}_{ji} + \ensuremath{\tilde{C}^\rho}_{jj} \simeq 2$. Consider that the pair of avatars ($a_i,a_j$) respects these conditions. It is easy to see that ($a_i,a_j$) will necessarily be a highly ranked candidate pair from the previous step.
\begin{align*}
\ensuremath{\tilde{C}^\rho}_{ij} \simeq \ensuremath{\tilde{C}^\rho}_{jj} \simeq min(\ensuremath{\tilde{C}^\rho}_{ij},\ensuremath{\tilde{C}^\rho}_{jj})
\mathrm{~~and~~}
\ensuremath{\tilde{C}^\rho}_{ii} \simeq \ensuremath{\tilde{C}^\rho}_{ji} \simeq min(\ensuremath{\tilde{C}^\rho}_{ii},\ensuremath{\tilde{C}^\rho}_{ji}) \\
\implies min(\ensuremath{\tilde{C}^\rho}_{ij},\ensuremath{\tilde{C}^\rho}_{jj}) + min(\ensuremath{\tilde{C}^\rho}_{ii},\ensuremath{\tilde{C}^\rho}_{ji}) \simeq 1
\end{align*}
Thus, the set of avatar clusters we are looking for is contained within the set of candidate pairs and, moreover, its elements are highly ranked. In order to remove from the list of candidates the pairs that do not satisfy the avatar cluster definition, we propose a cosine similarity measure between a couple of vectors calculated for each avatar as follows. Let ($a_i,a_j$) be a candidate pair; the cluster score is defined as: $cluster\_score(a_i,a_j) = cosine(\langle \ensuremath{\tilde{C}^\rho}_{ii}, \ensuremath{\tilde{C}^\rho}_{ij}\rangle, \langle \ensuremath{\tilde{C}^\rho}_{jj}, \ensuremath{\tilde{C}^\rho}_{ji}\rangle)$.
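\smallskip
\noindent\textit{Sketch}. The cluster score only involves four entries of the normalized confusion matrix. A direct Python transcription (illustrative; the convention of returning $0$ when one of the two vectors is null follows the discussion below) gives, on the matrix of Figure~\ref{table.1}, a value close to $1$ for $(a_1,a_2)$ and a lower but still high value for $(a_4,a_5)$.
\begin{verbatim}
import numpy as np

def cluster_score(C, i, j):
    # cosine( <C_ii, C_ij> , <C_jj, C_ji> ), with 0 for null vectors
    u = np.array([C[i, i], C[i, j]])
    v = np.array([C[j, j], C[j, i]])
    d = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / d) if d > 0 else 0.0

C = np.array([[0.60, 0.40, 0.00, 0.00, 0.00],
              [0.40, 0.55, 0.05, 0.00, 0.00],
              [0.00, 0.00, 0.80, 0.15, 0.05],
              [0.00, 0.05, 0.00, 0.70, 0.25],
              [0.00, 0.00, 0.00, 0.50, 0.50]])
print(cluster_score(C, 0, 1))   # ~0.999 for (a_1, a_2)
print(cluster_score(C, 3, 4))   # ~0.90  for (a_4, a_5)
\end{verbatim}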
\begin{wrapfigure}{r}{15mm}\centering
{
\scriptsize
\begin{tabular}{|c|c|c|}
\hline
& $a_i$ & $a_j$ \\ \hline
$a_i$ & 1 & 0 \\ \hline
$a_j$ & 1 & 0 \\ \hline
\end{tabular}}
\vspace{-8mm}
\end{wrapfigure}
The cluster score establishes a measure of how close is a candidate pair from being an avatar cluster.
The logic comes from the following scenario. Consider that the traces of avatar $a_i$ were all correctly classified meaning that $\ensuremath{\tilde{C}^\rho}_{ii} = 1$ and that the traces of avatar $a_j$ were all incorrectly classified as $a_i$, meaning that $\ensuremath{\tilde{C}^\rho}_{ji} = 1$, thus we have the section of the normalized confusion matrix illustrated on the right hand side. We can observe that the pair $(a_i,a_j)$ will be contained in the set of candidate pairs and will be highly ranked, even though it is not an avatar cluster since it violates the first condition. The cluster score for this particular case can be calculated as:
$cluster\_score(a_i,a_j) = cosine (\langle 1,0 \rangle, \langle 0,1 \rangle) = 0$,
meaning that this candidate pair is not an avatar cluster. Notice that for a pair of avatars such that $\ensuremath{\tilde{C}^\rho}_{ii} = 1$ and $\ensuremath{\tilde{C}^\rho}_{jj} = 1$, the cluster score is 1 (cosine between parallel vectors) while the pair is not an avatar cluster. However, this pair would have a score $s$ equal to 0 and would be at the bottom of the ranked candidate pairs. A third kind of pair occurs when the traces of $a_i$ and $a_j$ are all incorrectly classified as a third avatar $a_k$. In such a case, the cluster score is 0.
The post-processing step is executed as follows. Given the ranked list of candidate pairs yielded by the previous step, each pair is evaluated using the cluster score. Given an arbitrary threshold $\lambda$, if the cluster score of the candidate pair is below this threshold, then it is rejected. Candidate pairs are re-ranked into a final list of avatar clusters.
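\smallskip
\noindent\textit{Sketch}. The whole post-processing can be summarized in a few lines. The sketch below is a simplified variant: it scores each pair $\{a_i,a_j\}$ directly by $s(\{a_i,a_j\}^\square)$ instead of enumerating all pattern concepts with \texttt{addIntent}, and the thresholds (a minimal score of $0.5$ and $\lambda=0.9$) are arbitrary. On the matrix of Figure~\ref{table.1}, it returns exactly the two expected alias pairs.
\begin{verbatim}
import numpy as np
from itertools import combinations

C = np.array([[0.60, 0.40, 0.00, 0.00, 0.00],
              [0.40, 0.55, 0.05, 0.00, 0.00],
              [0.00, 0.00, 0.80, 0.15, 0.05],
              [0.00, 0.05, 0.00, 0.70, 0.25],
              [0.00, 0.00, 0.00, 0.50, 0.50]])

def pattern_score(C, i, j):          # s({a_i, a_j}^square) = sum_k min(C_ik, C_jk)
    return float(np.minimum(C[i], C[j]).sum())

def cluster_score(C, i, j):          # cosine(<C_ii, C_ij>, <C_jj, C_ji>)
    u = np.array([C[i, i], C[i, j]])
    v = np.array([C[j, j], C[j, i]])
    d = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / d) if d > 0 else 0.0

min_score, lam = 0.5, 0.9            # arbitrary thresholds, for illustration
aliases = [(i + 1, j + 1, cluster_score(C, i, j))
           for i, j in combinations(range(len(C)), 2)
           if pattern_score(C, i, j) >= min_score
           and cluster_score(C, i, j) >= lam]
aliases.sort(key=lambda t: -t[2])    # final re-ranking by cluster score
print(aliases)                       # -> [(1, 2, ...), (4, 5, ...)]
\end{verbatim}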
\section{Experiments}
\label{sec:xp}
\noindent\textbf{Data collections and objectives.}
We constructed two collections of Starcraft 2 replays to test our method. A replay contains all
data necessary for the game engine to replay the game. Replays are shared on dedicated
websites\footnote{\url{http://wiki.teamliquid.net/starcraft2/Replay_Websites}} and can be parsed to extract relevant features\footnote{\url{http://sc2reader.readthedocs.org/}}. The first collection has been chosen for studying the accuracy of classifiers to recognize avatars from their traces:
we have selected 955 professional games of 171 unique players, which cannot contain avatar aliases\footnote{\url{http://wcs.battle.net/sc2/en/articles/wcs-2014-season-2-replays}}. The second collection, which may contain avatar aliases, is built with all replays available on \textit{SpawningTool.com} in July 2014, for a total of 10,108 one-versus-one games and 3,805 players. This collection corresponds to a real-world situation, and is used for evaluating our avatar alias resolution approach.
\medskip
\noindent\textbf{Classifying avatars.}
Our method analyses the confusion matrix of a given classifier $\rho$.
Good features, as well as a prediction method, should first be chosen.
As features, we use the hotkey usage counts \cite{DBLP:conf/chi/YanHC15}
during the first $\tau$ seconds of the game:
there are 30 such features ($\{0,...,9\} \times \{assign, remove, select\}$).
We also consider the \textit{faction} of the player,
the game outcome (\textit{winner} or \textit{loser}) and the average number of actions per minute (APM).
We generated several datasets depending on the $\tau$ parameter,
and also introduced a minimum number $\theta$ of games an avatar should have to be considered in the dataset.
Each dataset is classified using the Weka machine learning software\footnote{\url{http://www.cs.waikato.ac.nz/ml/weka/}} and evaluated using 10-fold cross-validation, from which we obtain a confusion matrix. We chose four different classifiers, namely K Nearest Neighbours (knn), Naive Bayes (nbayes), J48 decision tree (j48) and Sequential Minimal Optimization (smo). Parameters of each classifier were left at their default values. Figure \ref{fig:results.1} shows the ROC area and the precision obtained for 92 datasets created for Collection 1. The parameter $\tau$ ranged over 23 values on an exponential scale, initially from 10 to 90 seconds, then from 100 to 900, and finally from 1000 to 5000 seconds (the longest game in this collection lasts around 5300 seconds); thus, the x axis of each figure is in logarithmic scale.
For each measure, four figures corresponding to four different settings of $\theta$ are presented.
Each line corresponds to a different classifier. The figures provide empirical evidence for the initial assumption that avatars are easily recognizable from the signatures left in the traces they generate while playing. For each setting, the ROC area is always around 100\%, showing the robustness of the approach under different parametrizations. Precision always stays above 60\%, reaching its minimum for the SMO classifier with $\theta = 5$ and $\tau > 1000$. This also supports the following observations. Firstly, it is hard to recognize users that have played only a few games, meaning that the larger the value of the $\theta$ threshold, the more discriminative power the classifier has. Secondly, users are recognizable within the first few minutes of the game. The precision curves show a slightly concave behaviour, hinting at a maximum of the precision w.r.t.\ the time cut used for traces. Users can be efficiently discovered from their hotkey binding settings. As the game progresses, traces may diverge, since the number of options in the game greatly increases and their execution varies with different opponents.
\begin{figure}[h!]
\centering
\vspace{-0.6cm}
\hfill
\includegraphics[width=0.51\textwidth]{ROC.pdf}
\hspace{-0.5cm}
\includegraphics[width=0.51\textwidth]{Precision.pdf}
\hfill
\vspace{-0.6cm}
\caption{\scriptsize Classification results for Collection 1: ROC area under the curve (AUC) and precision distribution on 23 points of $\tau$ for four $\theta$ values.
}
\label{fig:results.1}
\vspace{-0.5cm}
\end{figure}
\medskip
\noindent\textbf{Main method evaluation strategy.}
As we do not have information about the users behind the avatars,
it is not possible to evaluate the candidate avatar pairs against a ``ground truth''.
Hence we performed an evaluation of our approach using three different strategies.
First consider that an avatar of \starcraft2{} is given by its
\textit{Battle.net account URL}, made of a server name (Europe, America, etc.),
a unique identifier, and an avatar name. We use the whole URL as avatar class labels
in our classifiers.
Note that players may have several accounts, on different servers, that may share the same name.
Players can also change the name of their avatar:
it does not affect the ID and server that identify their account.
As our method returns an ordered list of candidate pairs for merging,
we consider the following indicators for each pair.
\begin{itemize}
\item \textit{Avatar names.} Two avatars may have the same name but different Battle.net IDs. It is a weak indicator, as the name can be a common one (e.g., \textit{Batman}).
\item \textit{Battle.net account unique ID.} Two avatars may have two different names but the same unique identifier. This is a strong indicator.
\item \textit{Surrogates.} We create surrogate avatars $a^1,a^2$ from an avatar $a \in A$ by partitioning its traces into two different subsets. Our goal is to retrieve that $a^1$ and $a^2$ are avatar aliases. For splitting the traces of an avatar into surrogates,
we introduce a parameter $\beta$ controlling the balance of traces distributed over the surrogate avatars
($\beta = 0.5$ means that each surrogate avatar receives half of the traces).
We introduce further parameters: $\gamma$ is the proportion of avatars that are converted into surrogate aliases. Since professionals play many games, we select the avatars that have played more than $\theta$ games. As we have observed, it is not necessary to analyse the entire replay to discriminate an avatar, so we keep only the first $\tau$ actions or seconds (a surrogate-generation sketch is given after this list).
\end{itemize}
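The following minimal Python sketch (our own illustration; the parameter handling and variable names are assumptions) shows how surrogate avatars could be generated from the traces of the most active avatars.
\begin{verbatim}
import random

def make_surrogates(traces, gamma=0.05, theta=20, beta=0.5, seed=0):
    # traces: dict mapping an avatar id to its list of traces
    rng = random.Random(seed)
    eligible = [a for a, ts in traces.items() if len(ts) > theta]
    eligible.sort(key=lambda a: len(traces[a]), reverse=True)
    chosen = eligible[:int(gamma * len(eligible))]  # most active gamma fraction
    surrogates = {}
    for a in chosen:
        ts = list(traces[a])
        rng.shuffle(ts)
        cut = int(beta * len(ts))                   # balance between surrogates
        surrogates[(a, 1)], surrogates[(a, 2)] = ts[:cut], ts[cut:]
    return surrogates
\end{verbatim}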
\begin{wrapfigure}{r}{5.5cm}\centering
\vspace{-1.2cm}
\includegraphics[width=1\linewidth]{./ranking_lambda09.pdf}
\vspace{-1.2cm}
\caption{Candidate pairs ranking\label{fig:results.lambda}.}
\vspace{-0.7cm}
\end{wrapfigure}
To evaluate our approach, we measure the precision, recall and F-measure of the first 100 ranked avatar clusters. Given the ranking $r$, $TP$, $FP$ and $FN$ stand for true positives, false positives and false negatives, respectively. A pair is a false positive when we do not have enough information to consider it a true positive, meaning that the avatar names do not match, the URLs are different and the avatars are not part of our own set of surrogates. \emph{They are in fact the kind of pairs we are looking for}.
As an example, Figure~\ref{fig:results.lambda} shows the initial candidate pairs extracted from a confusion matrix generated by a Sequential Minimal Optimization (SMO) classification with $\gamma = 0.05$, $\theta = 5$ and $\lambda~=~0.9$. Within the figure, a point represents a pair of avatars: a red circle if the avatars are surrogates, a green triangle if they share the same account, a yellow star if they share the same name and, in the other cases, a blue cross annotated with the nicknames of the avatars. The only FP in this list is a pair of avatars that belong to the player known as \texttt{aLive}\footnote{\url{http://wiki.teamliquid.net/starcraft2/ALive}}. We also report three other measures, namely P@10 (precision over the first 10 elements of the ranking), mean average precision (MAP) and the area under the receiver operating characteristic (ROC) curve (AUC).
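For reference, the ranking measures reported below can be computed with small helpers such as the following sketch (our own code; over a single ranking, MAP reduces to average precision). Here \texttt{relevant} is the list of 0/1 relevance flags of the ranked pairs.
\begin{verbatim}
def precision_at_k(relevant, k=10):
    return sum(relevant[:k]) / k

def average_precision(relevant):
    hits, total = 0, 0.0
    for rank, rel in enumerate(relevant, start=1):
        if rel:
            hits += 1
            total += hits / rank          # precision at each relevant rank
    return total / hits if hits else 0.0
\end{verbatim}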
\medskip
\noindent\textbf{Identifying multiple aliases.} The goal of these experiments is to assess our approach for finding avatar aliases based on the evaluation introduced above. For generating datasets, we selected three different $\tau$ values, namely $30, 60$ and $90$ seconds. We picked the same values for $\theta$ as in the previous experiments. Surrogates were generated for the first $5$, $10$, $15$ and $20$ percent of the most active users in the dataset ($\gamma$) and we set the balance $\beta = 0.5$. For each of the previously selected classifiers, the confusion matrix was processed by the Sephirot addIntent implementation\footnote{\url{https://code.google.com/p/sephirot/}} to obtain a set of pattern concepts. Scoring and post-processing were implemented in ad-hoc Python scripts.
Table~\ref{table.avatars.details.1} shows a summary of the evaluation results using the top 100 pairs of avatar clusters. Results indicate that our approach is very effective at identifying surrogate avatars, particularly for the KNN and J48 classifiers, which achieve very high recall values. In the upper part of the table, while precision is low, it is worth noticing that the top 100 contains only 41 surrogates, meaning that the maximum achievable precision is 0.41. The KNN classifier is particularly good on this measure, achieving an almost perfect value (0.4 out of 0.41). All four classifiers achieve a very high precision in the first 10 results (P@10), and two of them obtain a perfect score. Indeed, one of the main characteristics of our approach is the good ranking it generates over the avatar pairs. This is confirmed by the good MAP and ROC area under the curve (AUC) values achieved by all four classifiers. Both measures degrade slightly when URLs and names are included in the set of true positives. This can be understood, since not all avatars with the same name necessarily belong to the same user. Thus, pairs of avatars with the same name are more evenly distributed over the ranking, or can even be found at the bottom, indicating that they do not belong to the same user. This is reflected in the gap between the high growth of precision and the low degradation of recall, i.e.\ avatars with the same name are distributed between the pairs retrieved and those that were not. As we have discussed, avatars with the same URL necessarily belong to the same user. Hence, we would have expected an even distribution of surrogates and URLs among the first 10 pairs retrieved. Instead, for all classifiers, P@10 is more than 80\% surrogates (while the rest is always URLs; see P@10 in the middle part of the table). Table~\ref{table.avatars.details.2} shows a summary of results when looking only for surrogates while varying the balance in the distribution of traces between them. We can clearly observe that the performance of the approach quickly degrades as the distribution becomes more imbalanced (the higher the $\beta$ value). For some classifiers it is not possible to obtain a single good result, even when the $\lambda$ threshold is lowered to 0.8. As URLs are not necessarily balanced, classifiers tend to assign the traces of an avatar with fewer traces to an avatar with more traces. Issues related to learning from imbalanced datasets are reviewed in~\cite{He:2009:LID:1591901.1592322} and need to be considered when selecting a proper classifier for our particular application.
\begin{table*}[ht!]
\begin{minipage}[c]{.50\linewidth}
\scriptsize
\centering
{ \begin{tabular}{lcccccc}
\hline
\multicolumn{7}{c}{\textbf{Parameters:$\,\gamma = 0.2,\,\theta = 20,\,\lambda = 0.9,\, \tau = 90$}} \\
\hline\hline
\multicolumn{7}{l}{\textbf{SUG}} \\
Cl & F1 & MAP & Recall & AUC & Prec. & P@10\\
\hline
$j48$&0.468 & 0.824& 0.805 & 0.904 & 0.33 & 1.0\\
$nbayes$&0.226& 0.740 & 0.390 & 0.915 & 0.16 & 0.8\\
$smo$& 0.312 & 0.971 & 0.536 & 0.993 & 0.22 & 1.0\\
$knn$&0.567& 0.822 & 0.976& 0.882 & 0.4 & 0.9\\
\hline\hline
\multicolumn{7}{l}{\textbf{SUG \& URLS}} \\
Cl &F1 & MAP & Recall & AUC & Prec. & P@10\\
\hline
$j48$&0.588& 0.907 & 0.606 & 0.866 & 0.57 & 1.0\\
$nbayes$&0.443 & 0.857 & 0.457 & 0.864 & 0.43 & 1.0\\
$smo$&0.257 & 0.912 & 0.266 & 0.945 & 0.25 & 1.0\\
$knn$&0.670 & 0.937 & 0.691 & 0.874 & 0.65 & 1.0\\
\hline\hline
\multicolumn{7}{l}{\textbf{SUG \& URLS \& Names}} \\
Cl&F1 & MAP & Recall & AUC & Prec.& P@10\\
\hline
$j48$&0.689 & 0.983 & 0.606 & 0.935 & 0.8 & 1.0\\
$nbayes$&0.560 & 0.943 & 0.492 & 0.906 & 0.65 & 1.0\\
$smo$&0.258 & 0.949 & 0.227 & 0.960 & 0.3 & 1.0\\
$knn$&0.758 & 0.967 & 0.667 & 0.792 & 0.88 & 1.0\\
\end{tabular}
\caption{Main results}
\label{table.avatars.details.1}}
\end{minipage}
\hfill
\begin{minipage}[c]{.50\linewidth}
\scriptsize
\centering{
\begin{tabular}{lcccccc}
\hline
\multicolumn{7}{c}{\textbf{Parameters:$\,\gamma = 0.2,\,\theta = 20,\,\lambda = 0.8,\, \tau = 90$}} \\
\hline\hline
\multicolumn{7}{l}{\textbf{J48}} \\
Balance &F1 & MAP & Recall & AUC & Prec. & P@10\\
\hline
$\beta = 0.5$&0.925 & 0.996 & 0.929 & 0.955 & 0.920 & 1.0\\
$\beta = 0.6$&0.545 & 0.927 & 0.632 & 0.921 & 0.480 & 1.0\\
$\beta = 0.7$&0.053 & 0.695 & 0.077 & 0.977 & 0.040 & 0.3\\
\hline\hline
\multicolumn{7}{l}{\textbf{Naive Bayes}} \\
Balance &F1 & MAP & Recall & AUC & Prec. & P@10\\
\hline
$\beta = 0.5$&0.472 & 0.902 & 0.475 & 0.953 & 0.470 & 0.9\\
$\beta = 0.6$&0.273 & 0.923 & 0.316 & 0.973 & 0.240 & 1.0\\
$\beta = 0.7$&0.197 & 0.9 & 0.288 & 0.978 & 0.150 & 0.9\\
$\beta = 0.8$&0.048 & 0.533 & 0.120 & 0.983 & 0.030 & 0.3\\
\hline\hline
\multicolumn{7}{l}{\textbf{SMO}} \\
Balance&F1 & MAP & Recall & AUC & Prec. & P@10\\
\hline
$\beta = 0.5$&0.392 & 0.983 & 0.394 & 0.992 & 0.390 & 1.0\\
\hline\hline
\multicolumn{7}{l}{\textbf{KNN}} \\
Balance&F1 & MAP & Recall & AUC & Prec. & P@10\\
\hline
$\beta = 0.5$&0.905 & 0.964 & 0.909 & 0.732 & 0.9 & 1.0\\
$\beta = 0.6$&0.750 & 0.957 & 0.868 & 0.929 & 0.660 & 1.0\\
$\beta = 0.7$&0.184 & 0.706 & 0.269 & 0.949 & 0.140 & 0.7\\
\end{tabular}
\caption{Varying surrogate balance ($\beta$)}
\label{table.avatars.details.2}}
\end{minipage}
\end{table*}
\vspace{-1.3cm}
\section{Conclusion}
\label{sect:conclusions}
We introduced the problem of avatar alias identification
when there exists no mapping between individuals and their avatars.
This is an important problem for game publishers, but also for e-sport structures.
Our method relies on the fact that behavioural data hide individual characteristic patterns,
which allows predictive approaches to be very accurate.
Nevertheless, this good performance quickly degrades when data hides avatar aliases,
which is why we based our analysis on confusion matrices.
As future work, we plan to study other competitive games,
and how biclustering could tackle the problem.
We also believe that our approach can be used to solve other application problems,
such as identifying users across different devices (smartphones, tablets, computers, etc.)
from the usage traces they leave.
\bibliographystyle{plain}
\section*{Acknowledgments}
We thank the anonymous reviewers for their insightful comments,
and Emma Strubell, Patrick Verga, David Fisher, Katie Keith, Su Lin Blodgett, and other members
of the UMass NLP group for help with
earlier drafts of the paper.
This work was partially supported by
NSF IIS-1814955 and a Figure Eight AI for Everyone award.
\section{Conclusion}\label{sec:conclusion}
We collect and release \textsc{football}\ to support large-scale, longitudinal analysis of racial bias in sports commentary, a major category of mass media. Our analysis confirms the results of prior smaller-scale social science studies on commentator sentiment and naming patterns.
However, we find that baseline NLP methods for quantifying mention-level genderedness~\citep{gender:naacl19} and modeling covariate effects ~\citep{eisenstein2011sparse} cannot overcome the statistical and linguistic confounds in this dataset. We hope that presenting such a technically-challenging resource, along with an analysis showing the limitations of current bias-detection techniques, will contribute to the emerging literature on bias in language. Important future directions include examining the temporal aspect of bias as well as developing more precise mention identification techniques.
\section{Collecting the \textsc{football}\ dataset}
\label{sec:dataset}
We collect transcripts of 1,455 full game broadcasts from the
U.S.\ NFL and
National Collegiate Athletic Association (NCAA) recorded between 1960 and 2019. Next, we identify and link mentions of players within these transcripts to information about their race (white or nonwhite) and position (e.g., quarterback). In total, \textsc{football}\ contains 267,778 mentions of 4,668 unique players, 65.7\% of whom are nonwhite.\footnote{See Appendix for more detailed statistics.}
We now describe each stage of our data collection process.
\subsection{Processing broadcast transcripts}
We collect broadcast transcripts by downloading YouTube videos posted by nonprofessional, individual users identified by querying YouTube for football archival channels.\footnote{We specifically query for \texttt{full NFL$|$NCAA$|$college football games 1960s$|$1970s$|$1980s$|$1990s$|$2000}, and the full list of channels is listed in the Appendix.}
YouTube automatically captions many videos, allowing us to scrape caption transcripts from 601 NFL games and 854 NCAA games. We next identify the teams playing and the game's year by searching for exact string matches in the video title and manually labeling any videos with underspecified titles.
After downloading videos, we tokenize transcripts using spaCy.\footnote{https://spacy.io/ (2.1.3), \citet{spacy2}} As part-of-speech tags predicted by spaCy are unreliable on our transcript text, we tag \textsc{football}\ using the ARK TweetNLP POS tagger~\cite{Owoputi2013ImprovedPT}, which is more robust to noisy and fragmented text, including TV subtitles~\citep{jorgensen-etal-2016-learning}.
Additionally, we use \texttt{phrasemachine} \cite{Handler2016BagOW} to identify all corpus noun phrases. Finally, we identify player mentions in the transcript text using exact string matches of first, last, and full names to roster information from online archives; these rosters also contain the player's position.\footnote{Roster sources listed in Appendix. We tag first and last name mentions only if they can be disambiguated to a single player in the rosters from opposing teams.} Although we initially had concerns about the reliability of transcriptions of player names, we noticed minimal errors on more common names. Qualitatively, we noticed that even uncommon names were often correctly transcribed and capitalized. We leave a more systematic study for future work.
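A minimal sketch of this linking step (our own illustration; the actual matching code is not shown here) could look as follows, keeping a first- or last-name match only when it resolves to a single player on the two opposing rosters.
\begin{verbatim}
from collections import defaultdict

def build_name_index(rosters):
    # rosters: list of (player_id, first, last, position) for both teams
    index = defaultdict(set)
    for pid, first, last, pos in rosters:
        index[f"{first} {last}".lower()].add(pid)
        index[first.lower()].add(pid)
        index[last.lower()].add(pid)
    return index

def link_mentions(tokens, index):
    mentions = []
    for i in range(len(tokens)):
        for n in (2, 1):                      # prefer full-name matches
            span = " ".join(tokens[i:i + n]).lower()
            pids = index.get(span, set())
            if len(pids) == 1:                # keep only unambiguous matches
                mentions.append((i, i + n, next(iter(pids))))
                break
    return mentions
\end{verbatim}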
\subsection{Identifying player race}
Racial identity in the United States is a creation of complex, fluid social and historical processes \cite{omi2014racial}, rather than a reflection of innate differences between fixed groups. Nevertheless, popular \textit{perceptions} of race in the United States and the prior scholarship on racial bias in sports broadcasts which informs our work \cite{rainville1977extent,rada1996color,billings2004depicting,rada2005color} typically assume hard distinctions between racial groups, which measurably affect commentary. In this work, we do not reify these racial categories; we use them as commonly understood within the context of the society in which they arise.
To conduct a large-scale re-examination of this prior work, we must identify whether each player in \textsc{football}\ is perceived as white or nonwhite.\footnote{While we use the general term ``nonwhite'' in this paper, the majority of nonwhite football players are black: in 2013, 67.3\% of the NFL was black and most of the remaining players (31\%)
were white~\citep{lapchick20122012}.} Unfortunately, publicly available rosters or player pages do not contain this information, so we resort to crowdsourcing. We present crowd workers on the Figure Eight platform with 2,720 images of professional player headshots from the Associated Press paired with player names. We ask them to ``read the player's name and examine their photo'' to judge whether the player is white or nonwhite. We collect five judgements per player from crowd workers in the US, whose high inter-annotator agreement (all five workers agree on the race for 93\% of players) suggests that their perceptions are very consistent.
Because headshots were only available for a subset of players, the authors labeled the race of an additional 1,948 players by performing a Google Image search for the player's name\footnote{We appended ``NFL'' to every query to improve precision of results.} and manually examining the resulting images. Players whose race could not be determined from the search results were excluded from the dataset.
\section{Related Work}
\label{sec:related_work}
Our work revisits specific findings from social science (\S\ref{sec:analysis}) on racial bias in sports broadcasts. Such non-computational studies typically examine a small number of games drawn from a single season and rely on manual coding to identify differences in announcer speech \cite{rainville1977extent,billings2004depicting,rada2005color}. For example, \newcite{rada1996color} perform a fine-grained analysis of five games from the 1992 season, coding for aspects such as players' cognitive or physical attributes. Our computational approach allows us to revisit this type of work (\S\ref{sec:analysis}) using \textsc{football}, without relying on subjective human coding.
Within NLP, researchers have studied gender bias in word embeddings \cite{Bolukbasi2016ManIT,Caliskan183}, racial bias in police stops \cite{Voigt6521} and on Twitter~\citep{hasanuzzaman2017demographic}, and biases in NLP tools like sentiment analysis systems~\citep{Kiritchenko2018ExaminingGA}. Especially related to our work is that of~\newcite{gender:naacl19}, who analyze mention-level gender bias, and \newcite{tennisgender:IJCAI}, who examine gender bias in tennis broadcasts.
Other datasets in the sports domain include the event-annotated baseball commentaries of~\newcite{keshet2011ballgame} and the WNBA and NBA basketball commentaries of~\newcite{aull2013fighting}, but we emphasize that \textsc{football}\ is the first large-scale sports commentary corpus annotated for race.
\section{Introduction}
\label{sec:introduction}
Sports broadcasts are major events in contemporary popular culture: televised American football (henceforth ``football'') games regularly draw tens of millions of viewers
\citep{Palotta2019CNN}.
\xdef\@thefnmark{}\@footnotetext{$^{\bigstar}$\textnormal{Authors contributed equally.}}
Such broadcasts feature
live sports commentators
who
weave the game's mechanical details
into a broader, more subjective narrative.
Previous work suggests that this form of storytelling exhibits racial bias: nonwhite players are less frequently praised for good plays ~\cite{rainville1977extent}, while white players are more often credited with ``intelligence''~\cite{bruce2004marking, billings2004depicting}. However, such prior scholarship forms conclusions from small datasets\footnote{ \citet{rainville1977extent}, for example, study only 16 games.} and subjective manual coding of race-specific language.
We revisit this prior work using large-scale computational analysis.
From YouTube, we collect broadcast football transcripts and identify mentions of players, which we link
to metadata about each player's race and position. Our resulting \textsc{football}\ dataset contains over 1,400 games
spanning six decades, automatically annotated with
$\sim$250K player mentions (Table~\ref{tab:dataset_examples}). Analysis of \textsc{football}\ identifies two confounding factors for research on racial bias: (1) the racial composition of many positions is very skewed (e.g., only $\sim$5\% of running backs are white), and (2) many mentions of players describe only their actions on the field (not player attributes). We experiment with an additive log-linear model for teasing apart these confounds. We also confirm prior social science studies on racial bias in naming patterns and sentiment. Finally, we publicly release \textsc{football},\footnote{ \href{http://github.com/jmerullo/football}{\texttt{http://github.com/jmerullo/football}}} the first large-scale sports commentary corpus annotated with player race, to spur further research into characterizing racial bias in mass media.
\begin{table}[]
\small
\begin{tabular}{p{1.2cm}p{1.1cm}p{4cm}}
\toprule
\bf Player & \bf Race & \bf Mention text\\
\midrule
Baker\hspace{0.5cm}Mayfield & white & ``Mayfield the ultimate competitor he's tough he's scrappy''\\ \\
Jesse James & white & ``this is a guy \dots does nothing but work brings his lunch pail''\\
\midrule
Manny Lawson & nonwhite & ``good specs for that defensive end freakish athletic ability''\\ \\
B.J. Daniels & nonwhite & ``that otherworldly athleticism he has saw it with Michael Vick''\\
\bottomrule
\end{tabular}
\caption{Example mentions from \textsc{football}\ that highlight racial bias in commentator sentiment patterns.}
\label{tab:dataset_examples}
\vspace{-0.1in}
\end{table}
\section{Analyzing \textsc{football}}
\label{sec:analysis}
\begin{figure}[t!]
\centering
\includegraphics[width=3in]{figs/position_plot.png}
\caption{Almost all of the eight most frequently-mentioned positions in \textsc{football}\ are heavily skewed in favor of one race.} \label{fig:position}
\vspace{-0.1in}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=3in]{figs/race_qbs_decade.pdf}
\caption{The percentage of nonwhite quarterback mentions has drastically increased over time, exemplifying the changing racial landscape captured in \textsc{football}.} \label{fig:qb_decades}
\vspace{-0.1in}
\end{figure}
We now demonstrate confounds in the data and revisit several established results from racial bias studies in sports broadcasting.
For all experiments, we seek to analyze the statistics of contextual terms that describe or have an important association with a mentioned player. Thus, we preprocess the transcripts by collecting contextual terms in windows of five tokens around each player mention, following the approach of~\citet{gender:naacl19} for gendered mention analysis.\footnote{If multiple player mentions fall within the same window, we exclude each term to avoid ambiguity.}
We emphasize that different term extraction strategies are possible, corresponding to different precision--recall tradeoffs. For instance, instead of collecting all terms in a window (high recall) we might instead only collect terms in copular constructions with the entity mention (high precision), such as `devoted' in ``Tebow is devoted''. Because mention detection strategies affect conclusions about bias in \textsc{football}, systematically defining, analyzing or even learning different possible strategies offers an exciting avenue for future work.
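A sketch of the high-recall window strategy used here (our own code; the span conventions are assumptions) is given below: terms within five tokens of a mention are collected, and windows containing more than one mention are discarded.
\begin{verbatim}
def context_terms(tokens, mentions, width=5):
    # mentions: list of (start, end, player_id) token spans
    starts = [m[0] for m in mentions]
    terms = []
    for start, end, pid in mentions:
        lo, hi = max(0, start - width), min(len(tokens), end + width)
        if any(lo <= s < hi and s != start for s in starts):
            continue                          # another mention in the window
        terms.append((pid, tokens[lo:start] + tokens[end:hi]))
    return terms
\end{verbatim}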
\subsection{Statistical and linguistic confounds}
\label{subsec:confounds}
Identifying racial bias in football broadcasts presents both statistical and linguistic modeling challenges. Many descriptions of players in broadcasts describe temporary player states (e.g., ``Smith deep in the backfield'') or discrete player actions (``Ogden with a huge block''), rather than possibly-biased descriptions of players themselves (``Cooper is one scrappy receiver''). Moreover, many players' actions (``passes the ball downfield'') depend on the position they play, which is often skewed by race (Figure~\ref{fig:position}). Furthermore, the racial composition of mentions across different decades can differ dramatically---Figure~\ref{fig:qb_decades} shows these changes for quarterback mentions---which makes the problem even more complex. Modeling biased descriptions of players thus requires disentangling attributes describing shifting, position-dependent player actions on field (e.g., ``Paulsen the tight end with a \textit{fast} catch'') from attributes referring to intrinsic characteristics of individual players (``Paulsen is just so, so \textit{fast}'').
To demonstrate this challenge, we distinguish between per-position effects and racial effects using an additive, log-linear model which represents the log probability that a word or noun phrase $w$ will describe a player entity $e$ as the sum of two learned coefficients, corresponding to two observed covariates. One observed covariate records a player's race and the other a player's position, which allows us to use learned coefficients to represent how much a player's race or position contributes to the chance of observing an $(w,e)$ pair.
Formally, we model such effects using a
sparse MAP estimation variant of SAGE \cite{eisenstein2011sparse}.
We define the binary vector $y_e\in\{0,1\}^J$
to represent the observed player covariates of race (white or nonwhite) and position. For example, component $y_{e,k}$ will be set to 1 if player $e$ is a quarterback and the component $k$ indexes the quarterback covariate; $y_e$ is a concatenation of two one-hot vectors.
We then model
$ p(w \mid e) \propto \exp\left( \beta_w + (\gamma y_e)_w \right)$,
with $\beta_w \in \mathbb{R}^{|\mathcal{V}|}$ as a background distribution over the vocabulary $\mathcal{V}$,
set to empirical corpus-wide word and phrase log-probabilities,
and
$\gamma \in \mathbb{R}^{J \times |\mathcal{V}|}$ as a matrix of feature effects on those log probabilities.
$\gamma_{j,w}$ denotes the \emph{difference} in log-probability of $w$ for the $j^{th}$ player feature being on versus off. For example, if $j$ indexes the quarterback covariate and $w$ indexes the word ``tough'', then $\gamma_{j,w}$ represents how much more likely the word ``tough'' is to be applied to quarterbacks over the base distribution. We impose a uniform Laplace prior on all elements of $\gamma$ to induce sparsity, and learn a MAP estimate with the LibLBFGS implementation of OWL-QN, an L1-capable quasi-Newtonian convex optimizer \cite{Andrew2007OWLQN,Okazaki2010LibLBFGS}. We learn from a sample of one million noun phrases and noun tokens from the corpus.
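To make the parameterization concrete, the following sketch (our own, illustrative only; it does not include the OWL-QN fitting step) computes the word distribution for a player entity given learned $\beta$ and $\gamma$.
\begin{verbatim}
import numpy as np

def word_distribution(beta, gamma, y_e):
    # beta: (V,) background log-probabilities; gamma: (J, V) covariate
    # effects; y_e: (J,) 0/1 race and position indicators
    logits = beta + y_e @ gamma          # add effects of active covariates
    logits -= logits.max()               # numerical stability
    p = np.exp(logits)
    return p / p.sum()                   # normalize over the vocabulary
\end{verbatim}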
Table \ref{t:covar} shows several of the highest-valued $\gamma_{j,w}$ for a subset of the $J$ covariates. The adjective ``big'' is predictive of running backs, but refers to an action on the field (``big hole''), not an attribute of running backs. We also find that since ``strong safety'' is a kind of defensive back, the adjective ``strong'' is often associated with defensive backs, who are often nonwhite. In this case,
``strong'' does not reflect racial bias. Preliminary experiments with per-position mention-level race classifiers,
as per \citet{gender:naacl19}, were also unable
to disentangle race and position.
These results suggest that a more sophisticated approach
may be necessary to isolate race effects from the confounds;
it also raises sharp conceptual questions
about the meaning of race-conditional statistical effects
in social scientific inquiry,
since race is a multifaceted construct
(a ``bundle of sticks,'' as \citet{Sen2016Bundle} argue).
For future work, it may be useful to think of comparisons between otherwise similar players: how do broadcasters differ in their discussions of two players who are both quarterbacks, and who have similar in-game performance, but differ by race?
We now describe two experiments that sidestep some of these confounds,
each motivated by prior work in social science: the first
examines player naming patterns, which are less tied to action on field than player attributes. The other uses words with known sentiment polarity to identify positive and negative attributes, regardless of player position or game mechanics.
\begin{table}[t!]
\small
\begin{tabular}{@{}rl@{}}
\textbf{White} & {\textit{\small long time, official, free safety}} \\
\textbf{DB} & {\textit{\small great coverage, strong safety, free safety}} \\
\textbf{RB} & {\textit{\small big hole, more yards, great block}} \\
\textbf{QB} & {\textit{\small plenty of time, florida state, comfortable}} \\
\textbf{WR} & {\textit{\small double coverage, total, wide receivers}} \\
\end{tabular}
\caption{Top terms for the white, defensive back (DB), running back (RB), quarterback (QB),
and wide receiver (WR)
covariates for the log linear model.}
\label{t:covar}
\vspace{-0.1in}
\end{table}
\subsection{Exploring naming patterns} \label{naming}
\emph{Naming patterns} in sports broadcasting---how commentators refer to players by name (e.g., first or last name)---are influenced by player attributes, as shown by prior small-scale studies. For example,~\newcite{koivula1999gender} find that women are more frequently referred to by their first names than men in a variety of sports. \citet{bruce2004marking} discover a similar trend for race in basketball games: white players are more frequently referred to by their last names than nonwhite players, often because commentators believe their first names sound too ``normal''.~\citet{bruce2004marking} further points out that the ``practice of having fun or playing with the names of people from non-dominant racial groups'' contributes to racial ``othering''. A per-position analysis of player mentions in~\textsc{football}\ corroborates these findings for all offensive positions (Table~\ref{tab:naming}).
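The per-position breakdown in Table~\ref{tab:naming} amounts to simple proportions over mention types; a sketch of such a tally (our own helper, with assumed field names) is shown below.
\begin{verbatim}
from collections import Counter, defaultdict

def naming_proportions(mentions):
    # mentions: iterable of (position, race, kind), kind in first/last/full
    counts = defaultdict(Counter)
    for pos, race, kind in mentions:
        counts[(pos, race)][kind] += 1
    table = {}
    for key, c in counts.items():
        total = sum(c.values())
        table[key] = {k: c[k] / total for k in ("first", "last", "full")}
    return table
\end{verbatim}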
\begin{table}[t]
\small
\begin{center}
\begin{tabular}{ llrrr }
\toprule
\bf Position & \bf Race & \bf First & \bf Last & \bf Full \\
\midrule
QB & white & 8.3\% & 20.0\% & 71.7\% \\
QB & nonwhite & 18.1\% & 7.5\% & 74.5\% \\
\midrule
WR & white & 6.9\% & 36.5\% & 56.5\% \\
WR & nonwhite & 11.3\% & 24.1\% & 64.6\% \\
\midrule
RB & white & 10.5\% & 41.2\% & 48.4\% \\
RB & nonwhite & 8.5\% & 35.4\% & 56.1\% \\
\midrule
TE & white & 16.6\% & 18.7\% & 64.7\% \\
TE & nonwhite & 13.8\% & 16.6\% & 69.7\% \\
\bottomrule
\end{tabular}
\end{center}
\caption{White players at the four major offensive positions are referred to by last name more often than nonwhite players at the same positions, a discrepancy that may reflect unconscious racial boundary-marking.}
\label{tab:naming}
\vspace{-0.1in}
\end{table}
\subsection{Sentiment patterns} \label{sentiment}
Prior studies examine the relationship between commentator sentiment and player race:~\citet{rainville1977extent} conclude that white players receive more positive coverage than black players, and~\citet{rada1996color} shows that nonwhite players are praised more for physical attributes and less for cognitive attributes than white ones.
To examine sentiment patterns within \textsc{football}, we assign a binary sentiment label to contextualized terms (i.e., a window of words around a player mention) by searching for words that match those in domain-specific sentiment lexicons from~\newcite{hamilton2016inducing}.\footnote{We use a filtered intersection of lexicons from the NFL, CFB, and sports subreddits, yielding 121 positive and 125 negative words.}
This method identifies 49,787 windows containing sentiment-laden words, only 12.8\% of which are of negative polarity, similar to the 8.3\% figure reported by~\citet{rada1996color}.\footnote{Preliminary experiments with a state-of-the-art sentiment model trained on the Stanford Sentiment Treebank \cite{peters-etal-2018-deep} produced qualitatively unreliable predictions due to the noise in \textsc{football}.} We compute a list of the most positive words for each race ranked by ratio of relative frequencies~\citep{monroe2008fightin}.\footnote{We follow~\citet{monroe2008fightin} in removing infrequent words before ranking; specifically, a word must occur at least ten times for each race to be considered.} A qualitative inspection of these lists (Table~\ref{tab:polarize_terms}) confirms that nonwhite players are much more frequently praised for physical ability than white players, who are praised for personality and intelligence (see Table~\ref{tab:dataset_examples} for more examples). \\\\
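The ranking step can be sketched as follows (our own code; it implements the plain ratio of relative frequencies with the minimum-count cutoff of ten described above, not the full model of \citet{monroe2008fightin}).
\begin{verbatim}
from collections import Counter

def top_words_by_ratio(words_a, words_b, min_count=10, k=10):
    # words_a, words_b: sentiment-window tokens for the two groups
    ca, cb = Counter(words_a), Counter(words_b)
    na, nb = sum(ca.values()), sum(cb.values())
    shared = [w for w in ca
              if ca[w] >= min_count and cb.get(w, 0) >= min_count]
    ratio = {w: (ca[w] / na) / (cb[w] / nb) for w in shared}
    return sorted(ratio, key=ratio.get, reverse=True)[:k]
\end{verbatim}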
\noindent\textbf{Limitations:}
The small lexicon results in the detection of relatively few sentiment-laden windows; furthermore, some of those are false positives (e.g., ``beast mode'' is the nickname of former NFL running back Marshawn Lynch). The former issue precludes a per-position analysis for all non-QB positions, as we are unable to detect enough sentiment terms to draw meaningful conclusions. The top two rows of Table~\ref{tab:polarize_terms}, which were derived from all mentions regardless of position, are thus tainted by the positional confound discussed in Section~\ref{subsec:confounds}.
The bottom two rows of Table~\ref{tab:polarize_terms} are derived from the same analysis applied to just quarterback windows; qualitatively, the results appear similar to those in the top two rows. That said, we hope that future work on contextualized term extraction and sentiment detection in noisy domains can shed more light on the relationship between race and commentator sentiment patterns.
\begin{table}[t!]
\small
\begin{center}
\begin{tabular}{ lp{4.6cm} }
\toprule
\bf Race & \bf Most positive words \\
\midrule
white (all) & \emph{enjoying, favorite, calm, appreciate, loving, miracle, spectacular, perfect, cool, smart}\\
nonwhite (all) & \emph{speed, gift, versatile, gifted, playmaker, natural, monster, wow, beast, athletic}\\
\midrule
white (QBs) & \emph{cool, smart, favorite, safe, spectacular, excellent, class, fantastic, good, interesting}\\
nonwhite (QBs) & \emph{ability, athletic, brilliant, awareness, quiet, highest, speed, wow, excited, wonderful}\\
\bottomrule
\end{tabular}
\end{center}
\caption{Positive comments for nonwhite players (top two rows: all player mentions; bottom two rows: only quarterback mentions) focus on their athleticism, while white players are praised for personality and intelligence.}
\label{tab:polarize_terms}
\vspace{-0.1in}
\end{table}
\section*{Appendix}
\label{sec:appendixA}
Here we provide tables that give a much more detailed breakdown of \textsc{football}.\ For instance, we show breakdowns by position (Table~\ref{tab:num_position}), time period (Table~\ref{tab:num_games}), and race (Table~\ref{tab:lab_ments_total}). Additionally, we fully specify the transcript and roster collection process.
\begin{table}[h!]
\begin{center}
\scalebox{1.0}{%
\begin{tabular}{ lrrr }
\toprule
Position & white & nonwhite & Total \\
\midrule
QB & 54.0k & 17.2k & 71.3k \\
RB & 3.1k & 42.8k & 45.8k \\
WR & 6.5k & 38.1k & 44.6k \\
DB & 2.5k & 32.1k & 34.6k \\
LB & 4.4k & 14.0k & 18.4k \\
DE & 3.0k & 9.2k & 12.2k \\
TE & 6.1k & 5.4k & 11.4k \\
DT & 1.3k & 6.7k & 8.0k \\
OT & 2.7k & 3.3k & 6.0k \\
K & 5.7k & 279 & 5.9k \\
OG & 1.9k & 2.1k & 4.0k \\
P & 3.6k & 219 & 3.8k \\
C & 1.4k & 254 & 1.6k \\
LS & 92 & 0 & 92 \\
OL & 27 & 38 & 65 \\
DL & 11 & 51 & 62 \\ \bottomrule
\end{tabular}}
\end{center}
\caption{Number of mentions in \textsc{football}\ by position. There are 267,778 player mentions in total.}
\label{tab:num_position}
\end{table}
\begin{table}[h!]
\begin{center}
\scalebox{1.0}{%
\begin{tabular}{ lrrrr }
\toprule
Years & NFL & NCAA & Total \\
\midrule
1960-1969 & 5 & 0 & 5 \\
1970-1979 & 53 & 50 & 103 \\
1980-1989 & 36 & 76 & 112 \\
1990-1999 & 57 & 106 & 163 \\
2000-2009 & 105 & 194 & 299 \\
2010-present & 345 & 428 & 773 \\
\bottomrule
\end{tabular}}
\end{center}
\caption{Number of games in \textsc{football}\ by decade}
\label{tab:num_games}
\end{table}
\begin{table}[h!]
\small
\begin{center}
\scalebox{1.0}{%
\begin{tabular}{ lrrrr }
\toprule
& \multicolumn{2}{c}{Mentions by Decade}\\
Years & white & nonwhite & Total \\
\midrule
1960-1969 & 1.0k & 641 & 1.6k \\
1970-1979 & 9.4k & 9.2k & 18.6k \\
1980-1989 & 7.3k & 11.1k & 18.4k \\
1990-1999 & 9.5k & 18.9k & 28.4k \\
2000-2009 & 18.7k & 35.8k & 54.5k \\
2010-present & 50.2k & 95.9k & 146.1k \\
\bottomrule
\end{tabular}}
\end{center}
\caption{Number of mentions in \textsc{football}\ by decade}
\label{tab:data_stats}
\end{table}
\begin{table}[h!]
\small
\begin{center}
\scalebox{1.0}{
\begin{tabular}{ lrr }
\toprule
League & nonwhite & white \\
\midrule
NFL & 137.4k & 84.0k \\
NCAA & 34.2k & 12.1k \\ \midrule
\textbf{Total} & \textbf{171.6k} & \textbf{96.1k} \\ \bottomrule
\end{tabular}}
\end{center}
\caption{Mentions in \textsc{football}\ by race and league}
\label{tab:lab_ments_total}
\end{table}
\begin{table}[h!]
\begin{center}
\scalebox{0.8}{%
\begin{tabular}{ lrrrr }
\toprule
& \multicolumn{2}{c}{NFL} & \multicolumn{2}{c}{NCAA} \\
Years & nonwhite & white & nonwhite & white\\
\midrule
1960-1969 & 75 & 40 & 0 & 0 \\
1970-1979 & 189 & 198 & 18 & 39 \\
1980-1989 & 185 & 291 & 15 & 85 \\
1990-1999 & 136 & 419 & 27 & 130 \\
2000-2009 & 265 & 761 & 76 & 312 \\
2010-present & 481 & 1.3k & 116 & 419 \\
\bottomrule
\end{tabular}}
\end{center}
\caption{Number of distinct labeled players}
\label{tab:num_players}
\end{table}
\begin{table}[h!]
\begin{center}
\scalebox{0.8}{%
\begin{tabular}{ lrrr }
\toprule
Years & \% white (dataset) & \% white (NFL) \\
\midrule
1960-1969 & 61.0 & * \\
1970-1979 & 47.1 & * \\
1980-1989 & 38.4 & * \\
1990-1999 & 29.9 & 33.0 \\
2000-2009 & 40.5 & 30.6 \\
2010-present & 28.1 & 29.9 \\
\bottomrule
\end{tabular}}
\end{center}
\caption{Percentage of mentions by race and decade in \textsc{football}\ compared with the actual race distribution of players in the NFL by decade \cite{lapchick2017}. The distributions are very similar. *\citet{lapchick2017} does not provide data for this time span.}
\label{tab:data_stats_pct}
\end{table}
\subsection*{Transcript collection: additional details}
We collect YouTube videos from the following YouTube channels, which aggregate football games: \emph{Mark Jones},
\emph{NFL Full Games 2018 / 2019},
\emph{Adrián GTZ Montoya},
\emph{NFL Full Games},
\emph{Bart Simpson},
\emph{Bryan Mears},
\emph{NFL},
\emph{Alex Roberts},
\emph{Sports},
\emph{Danger zone},
\emph{Nittany 96}. (Some channels publish dedicated lists of football games, amid other content. Other channels post football games only).
After downloading videos, we identified the teams playing and the year of each video by matching strings in video titles. In isolated cases when string matching failed, we manually identified the teams and year from the video itself.
We remove stop words (using the NLTK English stopwords list) and entities from the original text before processing mentions.
\subsection*{Roster collection: additional details}
We collected rosters for all NFL teams from 1960 to the present from \url{footballdb.com}. Because some players have the same name, we used the player name-position pair (e.g. Tom Brady, QB) to identify a unique entity in our dataset.
We collect NCAA rosters from the college football page \url{sports-reference.com}, downloading rosters for the 290 available schools in the NCAA from 1869 to the present.
We lower case all names on rosters. We also remove periods, apostrophes and hyphens within names (e.g., Odell Beckham, Jr., Ra'shede Hageman, Dominique Rodgers-Cromartie).
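A tiny sketch of this normalization (our own helper) is:
\begin{verbatim}
import re

def normalize_name(name):
    # lower-case and strip periods, apostrophes, and hyphens within names
    return re.sub(r"[.'\-]", "", name.lower())

# e.g. normalize_name("Ra'shede Hageman") -> "rashede hageman"
\end{verbatim}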
In total, including the mentions for which we could not acquire racial metadata, we gathered 545,232 labeled and unlabeled player mentions: 265,879 NCAA mentions and 279,353 NFL mentions. The higher number of NFL mentions (despite fewer NFL games) is due to incomplete roster information for NCAA teams (some years are incomplete, some years missing altogether). For the players we were able to acquire race labels for, 137,428 nonwhite and 84,038 white mentions are collected from NFL games, and 34,196 nonwhite and 12,116 white mentions are collected from NCAA games.
\section*{Acknowledgments}
We thank the anonymous reviewers for their insightful comments,
and Emma Strubell, Patrick Verga, David Fisher, Katie Keith, Su Lin Blodgett, and other members
of the UMass NLP group for help with
earlier drafts of the paper.
This work was partially supported by
NSF IIS-1814955 and a Figure Eight AI for Everyone award.
\section{Conclusion}\label{sec:conclusion}
We collect and release \textsc{football}\ to support large-scale, longitudinal analysis of racial bias in sports commentary, a major category of mass media. Our analysis confirms the results of prior smaller-scale social science studies on commentator sentiment and naming patterns.
However, we find that baseline NLP methods for quantifying mention-level genderedness~\citep{gender:naacl19} and modeling covariate effects ~\citep{eisenstein2011sparse} cannot overcome the statistical and linguistic confounds in this dataset. We hope that presenting such a technically-challenging resource, along with an analysis showing the limitations of current bias-detection techniques, will contribute to the emerging literature on bias in language. Important future directions include examining the temporal aspect of bias as well as developing more precise mention identification techniques.
\section{Collecting the \textsc{football}\ dataset}
\label{sec:dataset}
We collect transcripts of 1,455 full game broadcasts from the
U.S.\ NFL and
National Collegiate Athletic Association (NCAA) recorded between 1960 and 2019. Next, we identify and link mentions of players within these transcripts to information about their race (white or nonwhite) and position (e.g., quarterback). In total, \textsc{football}\ contains 267,778 mentions of 4,668 unique players, 65.7\% of whom are nonwhite.\footnote{See Appendix for more detailed statistics.}
We now describe each stage of our data collection process.
\subsection{Processing broadcast transcripts}
We collect broadcast transcripts by downloading YouTube videos posted by nonprofessional, individual users identified by querying YouTube for football archival channels.\footnote{We specifically query for \texttt{full NFL$|$NCAA$|$college football games 1960s$|$1970s$|$1980s$|$1990s$|$2000}, and the full list of channels is listed in in the Appendix.}
YouTube automatically captions many videos, allowing us to scrape caption transcripts from 601 NFL games and 854 NCAA games. We next identify the teams playing and game's year by searching for exact string matches in the video title and manually labeling any videos with underspecified titles.
After downloading videos, we tokenize transcripts using spaCy.\footnote{https://spacy.io/ (2.1.3), \citet{spacy2}} As part-of-speech tags predicted by spaCy are unreliable on our transcript text, we tag \textsc{football}\ using the ARK TweetNLP POS tagger~\cite{Owoputi2013ImprovedPT}, which is more robust to noisy and fragmented text, including TV subtitles~\citep{jorgensen-etal-2016-learning}.
Additionally, we use \texttt{phrasemachine} \cite{Handler2016BagOW} to identify all corpus noun phrases. Finally, we identify player mentions in the transcript text using exact string matches of first, last, and full names to roster information from online archives; these rosters also contain the player's position.\footnote{Roster sources listed in Appendix. We tag first and last name mentions only if they can be disambiguated to a single player in the rosters from opposing teams.} Although we initially had concerns about the reliability of transcriptions of player names, we noticed minimal errors on more common names. Qualitatively, we noticed that even uncommon names were often correctly transcribed and capitalized. We leave a more systematic study for future work.
\subsection{Identifying player race}
Racial identity in the United States is a creation of complex, fluid social and historical processes \cite{omi2014racial}, rather than a reflection of innate differences between fixed groups. Nevertheless, popular \textit{perceptions} of race in the United States and the prior scholarship on racial bias in sports broadcasts which informs our work \cite{rainville1977extent,rada1996color,billings2004depicting,rada2005color} typically assume hard distinctions between racial groups, which measurably affect commentary. In this work, we do not reify these racial categories; we use them as commonly understood within the context of the society in which they arise.
To conduct a large-scale re-examination of this prior work, we must identify whether each player in \textsc{football}\ is perceived as white or nonwhite.\footnote{While we use the general term ``nonwhite'' in this paper, the majority of nonwhite football players are black: in 2013, 67.3\% of the NFL was black and most of the remaining players (31\%)
were white~\citep{lapchick20122012}.} Unfortunately, publicly available rosters or player pages do not contain this information, so we resort to crowdsourcing. We present crowd workers on the Figure Eight platform with 2,720 images of professional player headshots from the Associated Press paired with player names. We ask them to ``read the player's name and examine their photo'' to judge whether the player is white or nonwhite. We collect five judgements per player from crowd workers in the US, whose high inter-annotator agreement (all five workers agree on the race for 93\% of players) suggests that their perceptions are very consistent.
Because headshots were only available for a subset of players, the authors labeled the race of an additional 1,948 players by performing a Google Image search for the player's name\footnote{We appended ``NFL'' to every query to improve precision of results.} and manually examining the resulting images. Players whose race could not be determined from the search results were excluded from the dataset.
\section{Related Work}
\label{sec:related_work}
Our work revisits specific findings from social science (\S\ref{sec:analysis}) on racial bias in sports broadcasts. Such non-computational studies typically examine a small number of games drawn from a single season and rely on manual coding to identify differences in announcer speech \cite{rainville1977extent,billings2004depicting,rada2005color}. For example, \newcite{rada1996color} perform a fine-grained analysis of five games from the 1992 season, coding for aspects such as players' cognitive or physical attributes. Our computational approach allows us to revisit this type of work (\S\ref{sec:analysis}) using \textsc{football}, without relying on subjective human coding.
Within NLP, researchers have studied gender bias in word embeddings \cite{Bolukbasi2016ManIT,Caliskan183}, racial bias in police stops \cite{Voigt6521} and on Twitter~\citep{hasanuzzaman2017demographic}, and biases in NLP tools like sentiment analysis systems~\citep{Kiritchenko2018ExaminingGA}. Especially related to our work is that of~\newcite{gender:naacl19}, who analyze mention-level gender bias, and \newcite{tennisgender:IJCAI}, who examine gender bias in tennis broadcasts.
Other datasets in the sports domain include the event-annotated baseball commentaries of~\newcite{keshet2011ballgame} and the WNBA and NBA basketball commentaries of~\newcite{aull2013fighting}, but we emphasize that \textsc{football}\ is the first large-scale sports commentary corpus annotated for race.
\section{Introduction}
\label{sec:introduction}
Sports broadcasts are major events in contemporary popular culture: televised American football (henceforth ``football``) games regularly draw tens of millions of viewers
\citep{Palotta2019CNN}.
\xdef\@thefnmark{}\@footnotetext{$^{\bigstar}$\textnormal{Authors contributed equally.}}
Such broadcasts feature
live sports commentators
who
weave the game's mechanical details
into a broader, more subjective narrative.
Previous work suggests that this form of storytelling exhibits racial bias: nonwhite players are less frequently praised for good plays ~\cite{rainville1977extent}, while white players are more often credited with ``intelligence''~\cite{bruce2004marking, billings2004depicting}. However, such prior scholarship forms conclusions from small datasets\footnote{ \citet{rainville1977extent}, for example, study only 16 games.} and subjective manual coding of race-specific language.
We revisit this prior work using large-scale computational analysis.
From YouTube, we collect broadcast football transcripts and identify mentions of players, which we link
to metadata about each player's race and position. Our resulting \textsc{football}\ dataset contains over 1,400 games
spanning six decades, automatically annotated with
$\sim$250K player mentions (Table~\ref{tab:dataset_examples}). Analysis of \textsc{football}\ identifies two confounding factors for research on racial bias: (1) the racial composition of many positions is very skewed (e.g., only $\sim$5\% of running backs are white), and (2) many mentions of players describe only their actions on the field (not player attributes). We experiment with an additive log-linear model for teasing apart these confounds. We also confirm prior social science studies on racial bias in naming patterns and sentiment. Finally, we publicly release \textsc{football},\footnote{ \href{http://github.com/jmerullo/football}{\texttt{http://github.com/jmerullo/football}}} the first large-scale sports commentary corpus annotated with player race, to spur further research into characterizing racial bias in mass media.
\begin{table}[]
\small
\begin{tabular}{p{1.2cm}p{1.1cm}p{4cm}}
\toprule
\bf Player & \bf Race & \bf Mention text\\
\midrule
Baker\hspace{0.5cm}Mayfield & white & ``Mayfield the ultimate competitor he's tough he's scrappy''\\ \\
Jesse James & white & ``this is a guy \dots does nothing but work brings his lunch pail''\\
\midrule
Manny Lawson & nonwhite & ``good specs for that defensive end freakish athletic ability''\\ \\
B.J. Daniels & nonwhite & ``that otherworldly athleticism he has saw it with Michael Vick''\\
\bottomrule
\end{tabular}
\caption{Example mentions from \textsc{football}\ that highlight racial bias in commentator sentiment patterns.}
\label{tab:dataset_examples}
\vspace{-0.1in}
\end{table}
\section{Analyzing \textsc{football}}
\label{sec:analysis}
\begin{figure}[t!]
\centering
\includegraphics[width=3in]{figs/position_plot.png}
\caption{Almost all of the eight most frequently-mentioned positions in \textsc{football}\ are heavily skewed in favor of one race.} \label{fig:position}
\vspace{-0.1in}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=3in]{figs/race_qbs_decade.pdf}
\caption{The percentage of nonwhite quarterbacks mentions has drastically increased over time, exemplifying the changing racial landscape in \textsc{football}\ across time.} \label{fig:qb_decades}
\vspace{-0.1in}
\end{figure}
We now demonstrate confounds in the data and revisit several established results from racial bias studies in sports broadcasting.
For all experiments, we seek to analyze the statistics of contextual terms that describe or have an important association with a mentioned player. Thus, we preprocess the transcripts by collecting contextual terms in windows of five tokens around each player mention, following the approach of~\citet{gender:naacl19} for gendered mention analysis.\footnote{If multiple player mentions fall within the same window, we exclude each term to avoid ambiguity.}
We emphasize that different term extraction strategies are possible, corresponding to different precision--recall tradeoffs. For instance, instead of collecting all terms in a window (high recall) we might instead only collect terms in copular constructions with the entity mention (high precision), such as `devoted' in ``Tebow is devoted''. Because mention detection strategies affect conclusions about bias in \textsc{football}, systematically defining, analyzing or even learning different possible strategies offers an exciting avenue for future work.
\subsection{Statistical and linguistic confounds}
\label{subsec:confounds}
Identifying racial bias in football broadcasts presents both statistical and linguistic modeling challenges. Many descriptions of players in broadcasts describe temporary player states (e.g., ``Smith deep in the backfield'') or discrete player actions (``Ogden with a huge block''), rather than possibly-biased descriptions of players themselves (``Cooper is one scrappy receiver''). Moreover, many players' actions (``passes the ball downfield'') depend on the position they play, which is often skewed by race (Figure~\ref{fig:position}). Furthermore, the racial composition of mentions across different decades can differ dramatically---Figure~\ref{fig:qb_decades} shows these changes for quarterback mentions---which makes the problem even more complex. Modeling biased descriptions of players thus requires disentangling attributes describing shifting, position-dependent player actions on field (e.g., ``Paulsen the tight end with a \textit{fast} catch'') from attributes referring to intrinsic characteristics of individual players (``Paulsen is just so, so \textit{fast}'').
To demonstrate this challenge, we distinguish between per-position effects and racial effects using an additive, log-linear model which represents the log probability that a word or noun phrase $w$ will describe a player entity $e$ as the sum of two learned coefficients, corresponding to two observed covariates. One observed covariate records a player's race and the other a player's position, which allows us to use learned coefficients to represent how much a player's race or position contributes to the chance of observing an $(w,e)$ pair.
Formally, we model such effects using a
sparse MAP estimation variant of SAGE \cite{eisenstein2011sparse}.
We define the binary vector $y_e\in\{0,1\}^J$
to represent the observed player covariates of race (white or nonwhite) and position. For example, component $y_{e,k}$ will be set to 1 if player $e$ is a quarterback and the component $k$ indexes the quarterback covariate; $y_e$ is a concatenation of two one-hot vectors.
We then model
$ p(w \mid e) \propto \exp\left( \beta_w + (\gamma y_e)_w \right)$,
with $\beta_w \in \mathbb{R}^{|\mathcal{V}|}$ as a background distribution over the vocabulary $\mathcal{V}$,
set to empirical corpus-wide word and phrase log-probabilities,
and
$\gamma \in \mathbb{R}^{J \times |\mathcal{V}|}$ as a matrix of feature effects on those log probabilities.
$\gamma_{j,w}$ denotes the \emph{difference} in log-probability of $w$ for the $j^{th}$ player feature being on versus off. For example, if $j$ indexes the quarterback covariate and $w$ indexes the word ``tough'', then $\gamma_{j,w}$ represents how much more likely the word ``tough'' is to be applied to quarterbacks over the base distribution. We impose a uniform Laplace prior on all elements of $\gamma$ to induce sparsity, and learn a MAP estimate with the LibLBFGS implementation of OWL-QN, an L1-capable quasi-Newtonian convex optimizer \cite{Andrew2007OWLQN,Okazaki2010LibLBFGS}. We learn from a sample of one million noun phrases and noun tokens from the corpus.
Table \ref{t:covar} shows several highest-valued $\gamma_{j,w}$ for a subset of the $J$ covariates. The adjective ``big'' is predictive of running backs, but refers to an action on the field (``big hole''), not an attribute of running backs. We also find that since ``strong safety'' is a kind of defensive back, the adjective ``strong'' is often associated with defensive backs, who are often nonwhite. In this case,
``strong'' does not reflect racial bias. Preliminary experiments with per-position mention-level race classifiers,
as per \citet{gender:naacl19}, were also unable
to disentangle race and position.
These results suggest that a more sophisticated approach
may be necessary to isolate race effects from the confounds;
the analysis also raises sharp conceptual questions
about the meaning of race-conditional statistical effects
in social scientific inquiry,
since race is a multifaceted construct
(a ``bundle of sticks,'' as \citet{Sen2016Bundle} argue).
For future work, it may be useful to think of comparisons between otherwise similar players: how do broadcasters differ in their discussions of two players who are both quarterbacks, and who have similar in-game performance, but differ by race?
We now describe two experiments that sidestep some of these confounds,
each motivated by prior work in social science: the first
examines player naming patterns, which are less tied to action on field than player attributes. The other uses words with known sentiment polarity to identify positive and negative attributes, regardless of player position or game mechanics.
\begin{table}[t!]
\small
\begin{tabular}{@{}rl@{}}
\textbf{White} & {\textit{\small long time, official, free safety}} \\
\textbf{DB} & {\textit{\small great coverage, strong safety, free safety}} \\
\textbf{RB} & {\textit{\small big hole, more yards, great block}} \\
\textbf{QB} & {\textit{\small plenty of time, florida state, comfortable}} \\
\textbf{WR} & {\textit{\small double coverage, total, wide receivers}} \\
\end{tabular}
\caption{Top terms for the white, defensive back (DB), running back (RB), quarterback (QB),
and wide receiver (WR)
covariates for the log-linear model.}
\label{t:covar}
\vspace{-0.1in}
\end{table}
\subsection{Exploring naming patterns} \label{naming}
\emph{Naming patterns} in sports broadcasting---how commentators refer to players by name (e.g., first or last name)---are influenced by player attributes, as shown by prior small-scale studies. For example,~\newcite{koivula1999gender} find that women are more frequently referred to by their first names than men in a variety of sports. \citet{bruce2004marking} discovers a similar trend for race in basketball games: white players are more frequently referred to by their last names than nonwhite players, often because commentators believe their first names sound too ``normal''.~\citet{bruce2004marking} further points out that the ``practice of having fun or playing with the names of people from non-dominant racial groups'' contributes to racial ``othering''. A per-position analysis of player mentions in~\textsc{football}\ corroborates these findings for all offensive positions (Table~\ref{tab:naming}).
\begin{table}[t]
\small
\begin{center}
\begin{tabular}{ llrrr }
\toprule
\bf Position & \bf Race & \bf First & \bf Last & \bf Full \\
\midrule
QB & white & 8.3\% & 20.0\% & 71.7\% \\
QB & nonwhite & 18.1\% & 7.5\% & 74.5\% \\
\midrule
WR & white & 6.9\% & 36.5\% & 56.5\% \\
WR & nonwhite & 11.3\% & 24.1\% & 64.6\% \\
\midrule
RB & white & 10.5\% & 41.2\% & 48.4\% \\
RB & nonwhite & 8.5\% & 35.4\% & 56.1\% \\
\midrule
TE & white & 16.6\% & 18.7\% & 64.7\% \\
TE & nonwhite & 13.8\% & 16.6\% & 69.7\% \\
\bottomrule
\end{tabular}
\end{center}
\caption{White players at the four major offensive positions are referred to by last name more often than nonwhite players at the same positions, a discrepancy that may reflect unconscious racial boundary-marking.}
\label{tab:naming}
\vspace{-0.1in}
\end{table}
\subsection{Sentiment patterns} \label{sentiment}
Prior studies examine the relationship between commentator sentiment and player race:~\citet{rainville1977extent} conclude that white players receive more positive coverage than black players, and~\citet{rada1996color} shows that nonwhite players are praised more for physical attributes and less for cognitive attributes than white ones.
To examine sentiment patterns within \textsc{football}, we assign a binary sentiment label to contextualized terms (i.e., a window of words around a player mention) by searching for words that match those in domain-specific sentiment lexicons from~\newcite{hamilton2016inducing}.\footnote{We use a filtered intersection of lexicons from the NFL, CFB, and sports subreddits, yielding 121 positive and 125 negative words.}
This method identifies 49,787 windows containing sentiment-laden words, only 12.8\% of which are of negative polarity, similar to the 8.3\% figure reported by~\citet{rada1996color}.\footnote{Preliminary experiments with a state-of-the-art sentiment model trained on the Stanford Sentiment Treebank \cite{peters-etal-2018-deep} produced qualitatively unreliable predictions due to the noise in \textsc{football}.} We compute a list of the most positive words for each race ranked by ratio of relative frequencies~\citep{monroe2008fightin}.\footnote{We follow~\citet{monroe2008fightin} in removing infrequent words before ranking; specifically, a word must occur at least ten times for each race to be considered.} A qualitative inspection of these lists (Table~\ref{tab:polarize_terms}) confirms that nonwhite players are much more frequently praised for physical ability than white players, who are praised for personality and intelligence (see Table~\ref{tab:dataset_examples} for more examples). \\\\
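Concretely, the ranking step amounts to comparing each word's relative frequency in the two groups of sentiment windows after applying the minimum-count filter; the sketch below is an illustrative implementation (the function and variable names are our own, and the actual analysis may differ in details).
\begin{verbatim}
from collections import Counter

def top_words_by_ratio(windows_a, windows_b, min_count=10, k=10):
    """Rank words by the ratio of relative frequencies in group A vs. group B.

    windows_a, windows_b : lists of tokenized sentiment windows (lists of words)
    """
    counts_a = Counter(w for win in windows_a for w in win)
    counts_b = Counter(w for win in windows_b for w in win)
    total_a, total_b = sum(counts_a.values()), sum(counts_b.values())

    scores = {}
    for word in set(counts_a) & set(counts_b):
        # a word must occur at least min_count times in each group
        if counts_a[word] < min_count or counts_b[word] < min_count:
            continue
        scores[word] = (counts_a[word] / total_a) / (counts_b[word] / total_b)
    return sorted(scores, key=scores.get, reverse=True)[:k]
\end{verbatim}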
\noindent\textbf{Limitations:}
The small lexicon results in the detection of relatively few sentiment-laden windows; furthermore, some of those are false positives (e.g., ``beast mode'' is the nickname of former NFL running back Marshawn Lynch). The former issue precludes a per-position analysis for all non-QB positions, as we are unable to detect enough sentiment terms to draw meaningful conclusions. The top two rows of Table~\ref{tab:polarize_terms}, which were derived from all mentions regardless of position, are thus tainted by the positional confound discussed in Section~\ref{subsec:confounds}.
The bottom two rows of Table~\ref{tab:polarize_terms} are derived from the same analysis applied to just quarterback windows; qualitatively, the results appear similar to those in the top two rows. That said, we hope that future work on contextualized term extraction and sentiment detection in noisy domains can shed more light on the relationship between race and commentator sentiment patterns.
\begin{table}[t!]
\small
\begin{center}
\begin{tabular}{ lp{4.6cm} }
\toprule
\bf Race & \bf Most positive words \\
\midrule
white (all) & \emph{enjoying, favorite, calm, appreciate, loving, miracle, spectacular, perfect, cool, smart}\\
nonwhite (all) & \emph{speed, gift, versatile, gifted, playmaker, natural, monster, wow, beast, athletic}\\
\midrule
white (QBs) & \emph{cool, smart, favorite, safe, spectacular, excellent, class, fantastic, good, interesting}\\
nonwhite (QBs) & \emph{ability, athletic, brilliant, awareness, quiet, highest, speed, wow, excited, wonderful}\\
\bottomrule
\end{tabular}
\end{center}
\caption{Positive comments for nonwhite players (top two rows: all player mentions; bottom two rows: only quarterback mentions) focus on their athleticism, while white players are praised for personality and intelligence.}
\label{tab:polarize_terms}
\vspace{-0.1in}
\end{table}
\section*{Appendix}
\label{sec:appendixA}
Here we provide tables that give a much more detailed breakdown of \textsc{football}.\ For instance, we show breakdowns by position (Table~\ref{tab:num_position}), time period (Table~\ref{tab:num_games}), and race (Table~\ref{tab:lab_ments_total}). Additionally, we fully specify the transcript and roster collection process.
\begin{table}[h!]
\begin{center}
\scalebox{1.0}{%
\begin{tabular}{ lrrr }
\toprule
Position & white & nonwhite & Total \\
\midrule
QB & 54.0k & 17.2k & 71.3k \\
RB & 3.1k & 42.8k & 45.8k \\
WR & 6.5k & 38.1k & 44.6k \\
DB & 2.5k & 32.1k & 34.6k \\
LB & 4.4k & 14.0k & 18.4k \\
DE & 3.0k & 9.2k & 12.2k \\
TE & 6.1k & 5.4k & 11.4k \\
DT & 1.3k & 6.7k & 8.0k \\
OT & 2.7k & 3.3k & 6.0k \\
K & 5.7k & 279 & 5.9k \\
OG & 1.9k & 2.1k & 4.0k \\
P & 3.6k & 219 & 3.8k \\
C & 1.4k & 254 & 1.6k \\
LS & 92 & 0 & 92 \\
OL & 27 & 38 & 65 \\
DL & 11 & 51 & 62 \\ \bottomrule
\end{tabular}}
\end{center}
\caption{Number of mentions in \textsc{football}\ by position. There are 267,778 player mentions in total.}
\label{tab:num_position}
\end{table}
\begin{table}[h!]
\begin{center}
\scalebox{1.0}{%
\begin{tabular}{ lrrrr }
\toprule
Years & NFL & NCAA & Total \\
\midrule
1960-1969 & 5 & 0 & 5 \\
1970-1979 & 53 & 50 & 103 \\
1980-1989 & 36 & 76 & 112 \\
1990-1999 & 57 & 106 & 163 \\
2000-2009 & 105 & 194 & 299 \\
2010-present & 345 & 428 & 773 \\
\bottomrule
\end{tabular}}
\end{center}
\caption{Number of games in \textsc{football}\ by decade}
\label{tab:num_games}
\end{table}
\begin{table}[h!]
\small
\begin{center}
\scalebox{1.0}{%
\begin{tabular}{ lrrrr }
\toprule
& \multicolumn{2}{c}{Mentions by Decade}\\
Years & nonwhite & white & Total \\
\midrule
1960-1969 & 1.0k & 641 & 1.6k \\
1970-1979 & 9.4k & 9.2k & 18.6k \\
1980-1989 & 7.3k & 11.1k & 18.4k \\
1990-1999 & 9.5k & 18.9k & 28.4k \\
2000-2009 & 18.7k & 35.8k & 54.5k \\
2010-present & 50.2k & 95.9k & 146.1k \\
\bottomrule
\end{tabular}}
\end{center}
\caption{Number of mentions in \textsc{football}\ by decade}
\label{tab:data_stats}
\end{table}
\begin{table}[h!]
\small
\begin{center}
\scalebox{1.0}{
\begin{tabular}{ lrr }
\toprule
League & nonwhite & white \\
\midrule
NFL & 137.4k & 84.0k \\
NCAA & 34.2k & 12.1k \\ \midrule
\textbf{Total} & \textbf{171.6k} & \textbf{96.1k} \\ \bottomrule
\end{tabular}}
\end{center}
\caption{Mentions in \textsc{football}\ by race and league}
\label{tab:lab_ments_total}
\end{table}
\begin{table}[h!]
\begin{center}
\scalebox{0.8}{%
\begin{tabular}{ lrrrr }
\toprule
& \multicolumn{2}{c}{NFL} & \multicolumn{2}{c}{NCAA} \\
Years & nonwhite & white & nonwhite & white\\
\midrule
1960-1969 & 75 & 40 & 0 & 0 \\
1970-1979 & 189 & 198 & 18 & 39 \\
1980-1989 & 185 & 291 & 15 & 85 \\
1990-1999 & 136 & 419 & 27 & 130 \\
2000-2009 & 265 & 761 & 76 & 312 \\
2010-present & 481 & 1.3k & 116 & 419 \\
\bottomrule
\end{tabular}}
\end{center}
\caption{Number of distinct labeled players}
\label{tab:num_players}
\end{table}
\begin{table}[h!]
\begin{center}
\scalebox{0.8}{%
\begin{tabular}{ lrrr }
\toprule
Years & \% white (dataset) & \% white (NFL) \\
\midrule
1960-1969 & 61.0 & * \\
1970-1979 & 47.1 & * \\
1980-1989 & 38.4 & * \\
1990-1999 & 29.9 & 33.0 \\
2000-2009 & 40.5 & 30.6 \\
2010-present & 28.1 & 29.9 \\
\bottomrule
\end{tabular}}
\end{center}
\caption{Percentage of mentions by race and decade in \textsc{football}\ compared with the actual race distribution of players in the NFL by decade \cite{lapchick2017}. The distributions are very similar. *\citet{lapchick2017} does not provide data for this time span.}
\label{tab:pct_by_decade}
\end{table}
\subsection*{Transcript collection: additional details}
We collect YouTube videos from the following YouTube channels, which aggregate football games: \emph{Mark Jones},
\emph{NFL Full Games 2018 / 2019},
\emph{Adrián GTZ Montoya},
\emph{NFL Full Games},
\emph{Bart Simpson},
\emph{Bryan Mears},
\emph{NFL},
\emph{Alex Roberts},
\emph{Sports},
\emph{Danger zone},
\emph{Nittany 96}. (Some channels publish dedicated lists of football games amid other content; other channels post only football games.)
After downloading videos, we identified the teams playing and the year of each video by matching strings in video titles. In isolated cases when string matching failed, we manually identified the teams and year from the video itself.
Before processing mentions, we remove stop words (identified using the NLTK English stopwords list) and entities from the original text.
\subsection*{Roster collection: additional details}
We collected rosters for all NFL teams from 1960 to the present from \url{footballdb.com}. Because some players have the same name, we used the player name-position pair (e.g., Tom Brady, QB) to identify a unique entity in our dataset.
We collect NCAA rosters from the college football pages of \url{sports-reference.com}, downloading rosters for the 290 available schools in the NCAA from 1869 to the present.
We lowercase all names on rosters and remove periods, apostrophes, and hyphens within names (e.g., Odell Beckham, Jr., Ra'shede Hageman, Dominique Rodgers-Cromartie).
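A minimal sketch of this normalization step (the function name is illustrative):
\begin{verbatim}
import re

def normalize_name(name):
    """Lowercase a roster name and strip periods, apostrophes, and hyphens."""
    return re.sub(r"[.'\-]", "", name.lower())

# e.g., "Ra'shede Hageman"            -> "rashede hageman"
#       "Dominique Rodgers-Cromartie" -> "dominique rodgerscromartie"
\end{verbatim}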
In total, including the mentions for which we could not acquire racial metadata, we gathered 545,232 labeled and unlabeled player mentions: 265,879 NCAA mentions and 279,353 NFL mentions. The higher number of NFL mentions (despite fewer NFL games) is due to incomplete roster information for NCAA teams (some years are incomplete, and some are missing altogether). Among the players for whom we acquired race labels, 137,428 nonwhite and 84,038 white mentions come from NFL games, and 34,196 nonwhite and 12,116 white mentions come from NCAA games.
\section{Introduction}
A gifted offensive college basketball player, Kris Jenkins (Villanova), made a three-point buzzer beater against UNC (2015--2016 season) and recorded one of the greatest endings in NCAA championship history. He was arguably one of the best players in the entire NCAA tournament. A natural question is: ``what makes him stand out from his peers?''. His stats, e.g., average points and rebounds per game, offer one measure of his excellence. However, these measures do not capture every basketball aspect that a coach may want to use for assessing his potential impact on a future team, much of which is difficult to measure quantitatively. NBA coaches and scouts are eager to catch every nuance of a basketball player's abilities by watching a large number of his basketball videos.
\captionsetup{labelformat=default}
\captionsetup[figure]{skip=10pt}
\begin{figure}
\begin{center}
\includegraphics[width=1.025\linewidth]{./paper_figures/main/main14.pdf}
\end{center}
\vspace{-0.4cm}
\caption{Our goal is to assess a basketball player's performance based on an evaluator's criterion from an unscripted first-person basketball video of a player. During training, we learn such a model from the pairs of weakly labeled first-person basketball videos. During testing, our model predicts a performance measure customized to a particular evaluator from a first-person basketball video. Additionally, our model can also discover basketball events that contribute positively and negatively to a player's performance.\vspace{-0.4cm}}
\label{main_fig}
\end{figure}
Now consider a college recruitment process where there is a massive number of high school players. In such conditions, the task of searching for the best players becomes much more challenging, more expensive, and more labor-intensive. More importantly, the recruiters need to measure and evaluate a sequence of atomic decisions, e.g., when a player shoots, whether he makes a shot, how good his passing ability is, etc. There exists neither a universal measure nor a gold standard for doing this, i.e., most scouts and coaches have their own subjective evaluation criteria.
In this paper, we address the problem of computational basketball player assessment customized to a coach's or scout's evaluation criterion. Our conjecture is that a first-person video captures a player's basketball actions and his/her basketball decision making in the form of the camera motion and the visual semantics of the scene. A key challenge of first-person videos is that they immediately violate the primary assumptions made by third-person recognition systems: first-person videos are highly unstable and jittery, and their visual semantics is not as iconic as in third-person video~\cite{imagenet_cvpr09}.
Our first-person approach departs from traditional assessment methods, e.g., watching hours of third-person videos taken by non-professional videographers and assessing the players in them. In contrast, a first-person video records what the player sees, which directly tells us what is happening to the player himself, e.g., it captures the body pose of a point guard who is about to pass at HD resolution, while a third-person video provides only limited visual access to such subtle signals. Furthermore, the 3D camera egomotion of the first-person video reflects the decision making of how the player responds to the team configuration, e.g., can I drive towards the basket and successfully finish a layup? Finally, a first-person camera eliminates the tracking and player-association tasks of third-person video analysis, which often prevent the application of computational approaches to amateur games\footnote{Usage of commercial tracking systems using multiple calibrated cameras is limited due to a high cost~\cite{stats_vu}}.
Our system takes a first-person video of basketball players and outputs a basketball assessment metric that is specific to an evaluator's preference. The evaluator provides the comparative weak labels of the performance of the players, e.g., the player $\mathsf{A}$ is better than $\mathsf{B}$ based on his own subjective criteria.
Our method first uses a convolutional LSTM to detect atomic basketball events from a first-person video. Our network's ability to localize the most informative regions in a first-person image is essential for first-person videos, where severe head movement makes the video blurry. These atomic events are then passed through Gaussian mixtures to produce a highly non-linear visual spatiotemporal basketball assessment feature. Finally, our basketball assessment model is learned from pairs of labeled first-person basketball videos by minimizing a hinge loss function. We learn such a basketball skill assessment model from our new $10.3$-hour-long first-person basketball dataset that captures $48$ distinct college-level basketball players in an unscripted basketball game.
\textbf{Impact} Ample money and effort have been invested in recruiting, assessing, and drafting basketball players every year. However, limited progress has been made on developing computational models that can be used to automatically assess an athlete's performance in a particular sport~\cite{10.1109/ICDM.2014.106,open_shot}. As wearable technology advances, cameras can be non-invasively worn by players, which delivers a vivid sense of their dynamics; e.g., the Spanish Liga ACB has demonstrated the possibility of a jersey camera that lets you put yourself on the court~\cite{acb}. This trend will open up a new opportunity to share experiences and evaluate performance across players in different continents without bias and discrimination. Our work takes a first step towards enabling a computational analysis for such first-person data.
\textbf{Contribution} To the best of our knowledge, this is the first paper that addresses practical behavioral assessment tasks using first-person vision specific to an evaluator's preference. The core technical contributions of the paper include 1) a basketball assessment model that assesses players based on an evaluator's assessment criterion, which we learn from pairs of weakly labeled first-person basketball videos; 2) a predictive network that learns the visual semantics of important actions and localizes salient regions of first-person images to handle unstable first-person videos; and 3) a new $10.3$-hour-long first-person basketball video dataset capturing $48$ players in an unscripted basketball game.
\section{Related Work}
\noindent\textit{Talent wins games, but teamwork and intelligence wins championships.} --- Michael Jordan\\
Accurate diagnosis and evaluation of athletes is a key factor in building synergic teamwork. However, it is highly subjective and task dependent, and the psychological and financial cost of the process is enormous. A large body of work in sports analytics and kinesiology has studied computational approaches to providing a quantitative measure of performance~\cite{10.1109/ICDM.2014.106,open_shot,tennis,ghosting,nba_strat}.
Kinematic abstraction (position, orientation, velocity, and trajectory) of the players offers a global, court-centered representation of team behaviors, which allows a detailed analysis of the game, such as the probability of shot success, rebounds, and future movement prediction~\cite{tennis,ghosting,park_cvpr:2017}. Not only individual performance but also team performance can be measured through this kinematic abstraction~\cite{nba_strat,open_shot}.
These kinematic data are often obtained from multiple third-person videos~\cite{stats_vu, ghosting, open_shot} where the players and ball are detected using recognition algorithms combined with multiple view geometry~\cite{Hartley2004}. Tracking and data association is a key issue, where the role of the players provides a strong cue to disambiguate appearance-based tracking~\cite{10.1109/CVPR.2013.349}. Events, such as ball movement, can also be recognized using a spatiotemporal analysis~\cite{Maksai_2016_CVPR}. As players behave strategically and collectively, their group movement can be predicted~\cite{Kim:2012:GP-ROI} and the ball can be localized without detection. Various computational models have been used for such tasks, e.g., Dynamic Bayesian Networks~\cite{Swears+Hoogs+Ji+Boyer2014}, hierarchical LSTMs~\cite{msibrahi16deepactivity}, and attention-based LSTMs~\cite{ramanathan_cvpr16} learned from large collections of third-person videos.
Unlike third-person videos, first-person cameras closely capture what the players see. This property is beneficial for understanding activities that are highly correlated with visual attention, e.g., object manipulation and social communication. Important objects to the camera wearer are detected and segmented~\cite{DBLP:journals/ijcv/LeeG15,BMVC.28.30,conf/cvpr/RenG10,conf/cvpr/FathiRR11,DBLP:journals/corr/BertasiusPYS16}, which can be used to compress life-log videos~\cite{DBLP:journals/ijcv/LeeG15,Lu:2013:SSE:2514950.2516026}. As visual attention is also related to the intent of the camera wearer, her/his future movement can be predicted~\cite{park_ego_future}. Beyond individual behaviors, joint attention is a primary indicator of social interactions, which can be directly computed from first-person videos~\cite{Fathi_socialinteractions:,park_nips:2012} and further used for human-robot interactions~\cite{Ryoo:2015:RAP:2696454.2696462,DBLP:journals/corr/GoriAR15}.
In sports, the complex interactions with a scene in first-person videos can be learned through spatiotemporal visual patterns. For instance, the scene can tell us about the activity~\cite{conf/cvpr/KitaniOSS11}, and the egomotion can tell us about the physical dynamics of the activity~\cite{park_force}. Joint attention also exists in team sports, where it can be described by the team formation~\cite{park_cvpr:2015} and future behaviors~\cite{park_cvpr:2017}.
Unlike previous work that mainly focuses on recognizing and tracking objects, activities, and joint attention, we take one step further: performance assessment based on the evaluator's preference. We introduce a computational model that exhibits strong predictive power when applied to real-world first-person basketball video data.
\begin{figure*}
\begin{center}
\includegraphics[width=1\linewidth]{./paper_figures/arch/test_arch15.pdf}
\end{center}
\vspace{-0.6cm}
\caption{A detailed illustration of our basketball assessment prediction scheme. Given a video segment from time interval $[t,t+10]$, we first feed it through a function $f_{\rm crop}$, which zooms-in to the relevant parts of a video. We then apply $f_{\rm event}$ to predict $4$ atomic basketball events from a zoomed-in video and a player's $(x,y)$ location on the court. We then feed these predictions through a Gaussian mixture function $f_{\rm gm}$, which produces a highly non-linear visual spatiotemporal assessment feature. Finally, we use this feature to compute a player's assessment measure by multiplying it with linear weights learned from the data, and with a predicted relevance indicator for a given video segment.\vspace{-0.4cm}}
\label{test_arch_fig}
\end{figure*}
\section{Basketball Performance Assessment Model}
\label{model_sec}
We define a measure of performance assessment using a first-person video:
\begin{align}
S(\mathcal{V}) = \frac{\sum_{t=1}^T p^{(1)}_t \mathbf{w}^\mathsf{T}\boldsymbol{\phi} (\mathbf{V}_t, \mathbf{x})}{\sum_{t=1}^T p^{(1)}_t} \label{Eq:assessment}
\end{align}
where $\mathcal{V}$ is a first-person video of $T$ number of frames, $\phi$ is a visual spatiotemporal basketball assessment feature, and $\mathbf{w}$ is a weight vector of performance regressor. $\mathbf{V}_t \subset \mathcal{V}$ is a segmented video starting at the $t^{\rm th}$ frame with a fixed length, $T_s$. $p^{(1)}_t \in [0, 1]$ is a relevance of $\mathbf{V}_t$ to evaluate a given player's performance. $\mathbf{x} \in \mathbb{R}^2$ is the 2D coordinate of the basketball player, i.e., the projection of 3D camera pose computed by structure from motion~\cite{Hartley2004} onto the canonical basketball court. In Figure~\ref{test_arch_fig}, we provide a detailed illustration of our basketball assessment prediction framework.
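For concreteness, once the per-segment features, the relevance scores, and the weights are available, Equation~(\ref{Eq:assessment}) reduces to a relevance-weighted average of per-segment scores; a minimal NumPy sketch (with illustrative variable names) is given below.
\begin{verbatim}
import numpy as np

def assessment_score(phi, relevance, w):
    """Relevance-weighted performance measure of Eq. (1).

    phi       : (T, D) visual spatiotemporal assessment feature per segment
    relevance : (T,)   p^(1)_t, the relevance of each video segment
    w         : (D,)   learned assessment weights
    """
    segment_scores = phi @ w        # w^T phi(V_t, x) for every segment t
    return float(relevance @ segment_scores / relevance.sum())
\end{verbatim}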
\subsection{Visual Spatiotemporal Assessment Feature}
Our first goal is to use a first-person basketball video to build a powerful feature representation that can be used for effective player performance assessment. We identify three key challenges related to building such a representation from first-person basketball videos: 1) our system needs to handle the camera wearer's severe head motion, 2) we need an interpretable basketball representation in terms of its atomic events, and 3) our feature representation has to be highly discriminative for the player performance prediction task.
To address these problems, we propose to represent the visual feature of the segmented video, $\mathbf{V}_t$, as follows, where each function below addresses one of the listed challenges:
\begin{align}
\boldsymbol{\phi}(\mathbf{V}_t,\mathbf{x}) = f_{\rm gm} \left( f_{\rm event} \left( f_{\rm crop} \left(\mathbf{V}_t\right), \mathbf{x}\right)\right),
\end{align}
where $f_{\rm crop}$ is a function that handles the camera wearer's severe head motion by producing a cropped video that zooms in on the important regions, $f_{\rm event}$ is a function that computes the probability of atomic basketball events, and $f_{\rm gm}$ is a Gaussian mixture function that computes a highly non-linear visual feature of the video.
\textbf{Zooming-In.} A key property of $f_{\rm crop}$ is the ability to zoom in on relevant pixels, which allows us to learn an effective visual representation for basketball performance assessment. This regional cropping minimizes the effect of the jittery and unstable nature of first-person videos, which causes large variation in the visual data. In our experimental section, we demonstrate that using $f_{\rm crop}$ in our model substantially improves the prediction performance. Thus, we initially process a first-person video to produce a cropped video:
\begin{align}
\overline{\mathbf{V}}_t = f_{\rm crop} (\mathbf{V}_t; \mathbf{w}_{\rm crop}),\nonumber
\end{align}
where $f_{\rm crop}$ is parametrized by $\mathbf{w}_{\rm crop}$, $\overline{\mathbf{V}}_t$ is the cropped video with fixed size $C_w\times C_w\times 3 \times T_s$, and $C_w$ is the width and height of the cropping window.
We predict the center of the cropping window by learning $\mathbf{w}_{\rm crop}$ using a fully convolutional network~\cite{DBLP:journals/corr/ChenYWXY15}. To do this, we train the network to predict the location of the ball, which is typically where most players are looking. Afterwards, for each frame in a video, we compute an average of the $XY$ pixel coordinates weighted by the detected ball probabilities, and then crop a fixed-size patch around this weighted-average location. We show qualitative zoom-in examples in Figure~\ref{best_worst_fig}.
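A minimal sketch of this zoom-in step, assuming a per-pixel ball-probability map from the fully convolutional network (the function and variable names are illustrative):
\begin{verbatim}
import numpy as np

def crop_around_ball(frame, ball_prob, crop_size):
    """Crop a fixed-size window around the probability-weighted ball location.

    frame     : (H, W, 3) RGB frame
    ball_prob : (H, W)    per-pixel ball probabilities
    crop_size : side length C_w of the square crop
    """
    H, W = ball_prob.shape
    ys, xs = np.mgrid[0:H, 0:W]
    total = ball_prob.sum() + 1e-8
    cy = int((ys * ball_prob).sum() / total)   # weighted average row
    cx = int((xs * ball_prob).sum() / total)   # weighted average column
    half = crop_size // 2
    # clamp so the fixed-size window stays inside the frame
    y0 = min(max(cy - half, 0), H - crop_size)
    x0 = min(max(cx - half, 0), W - crop_size)
    return frame[y0:y0 + crop_size, x0:x0 + crop_size]
\end{verbatim}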
\textbf{Atomic Basketball Event Detection.} To build an interpretable representation in terms of atomic basketball events, we predict the events of 1) somebody shooting the ball, 2) the camera wearer possessing the ball, and 3) a made shot. Note that the cropped video focuses on the ball and its visual context, which allows the network to learn the visual semantics of each atomic event more effectively. To do this we use a multi-path convolutional LSTM network, where each pathway predicts its respective atomic basketball event. Such a multi-path architecture is beneficial, as it allows each pathway to focus on learning a single atomic basketball concept. In contrast, we observed that training a similar network with a single pathway failed to produce accurate predictions for all three atomic events. Given a cropped video, our multi-path network is jointly trained to minimize the following cross-entropy loss:
\begin{equation*}
\begin{split}
\mathcal{L}_{\rm event} = -\sum_{t=1}^{T_s} \sum_{b=1}^{3} \left[ y^{(b)}_t \log p^{(b)}_{t}+\left(1-y^{(b)}_t\right) \log \left(1-p^{(b)}_{t}\right) \right], \nonumber
\end{split}
\end{equation*}
where $p^{(b)}_t$ depicts a network's prediction for an atomic basketball event $b$ at a time step $t$; $y^{(b)}_t \in \{0,1\}$ is a binary atomic basketball event ground truth value for frame $t$ and basketball event $b$.
We also note that because many important basketball events occur when somebody shoots the ball~\cite{nba_savant,sloan}, the detected probability $p^{(1)}_t$ is also later used in Equation~(\ref{Eq:assessment}), as a relevance indicator for each video segment, $\mathbf{V}_t$.
As our fourth atomic basketball event $p^{(4)}_t$, we use a binary value indicating whether the player is in the 2-point or 3-point zone, which is obtained from the player's $(x,y)$ location coordinates on the court.
We then split each of the $4$ basketball event predictions in half across the temporal dimension, and perform temporal max pooling for each of the $8$ blocks. All the pooled values are then concatenated into a single vector $\mathbf{b}_t$:
\vspace{-0.3cm}
\begin{align}
\mathbf{b}_t = f_{\rm event} (\overline{\mathbf{V}}_t,\mathbf{x};\mathbf{w}_{\rm event}) \nonumber
\end{align}
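A sketch of this pooling step is shown below; the ordering of the concatenation is an assumption, since it is not specified above, and the variable names are illustrative.
\begin{verbatim}
import numpy as np

def pooled_event_vector(event_probs):
    """Build b_t by splitting each atomic-event track in half and max pooling.

    event_probs : (4, T_s) per-frame scores for the four atomic events
                  (shot attempt, ball possession, made shot, 2pt/3pt zone)
    returns     : (8,) first-half maxima followed by second-half maxima
    """
    half = event_probs.shape[1] // 2
    first = event_probs[:, :half].max(axis=1)
    second = event_probs[:, half:].max(axis=1)
    return np.concatenate([first, second])
\end{verbatim}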
\textbf{Gaussian Mixtures.} To build a representation that is discriminative and yet generalizable, we construct a highly non-linear feature that works well with a linear classifier. To achieve these goals, we employ Gaussian mixtures that transform the atomic basketball event feature into a complex basketball assessment feature, which we will show to be very effective in our assessment model. Formally, given a vector $\mathbf{b}_t$ over $T_s$, we compute the visual spatiotemporal assessment features for a given video segment as:
\begin{align}
\boldsymbol{\phi}_t = f_{\rm gm} \left(\mathbf{b}_t;\{\boldsymbol{\mu}_n,\boldsymbol{\Sigma}_n\}_{n=1}^N\right)\nonumber
\end{align}
\begin{figure*}
\begin{center}
\includegraphics[width=1\linewidth]{./paper_figures/arch/train_arch15.pdf}
\end{center}
\vspace{-0.6cm}
\caption{An illustration of our training procedure to learn the linear weights $w$ that are used to assess a given basketball player's performance. As an input we take a pair of labeled first-person basketball videos, with a label provided by a basketball expert indicating which of the two players is better. Then, we compute visual spatiotemporal basketball assessment features for all input video segments, and use them to learn the weights $w$ by minimizing our formulated hinge loss function. \vspace{-0.4cm}}
\label{train_arch_fig}
\end{figure*}
where $f_{\rm gm}$ is parametrized by Gaussian mixtures, $\{\boldsymbol{\mu}_n,\boldsymbol{\Sigma}_n\}_{n=1}^N$, and $N$ is the number of mixtures. Each mixture $j$ is defined by a function $z(y^{(1)}_{t_1},y^{(1)}_{t_2}, \hdots , y^{(4)}_{t_1},y^{(4)}_{t_2})=j$. Here $y^{(i)}_{t_1}, y^{(i)}_{t_2} \in \{0,1\}$ refer to the binary ground truth values associated with an atomic basketball event $i \in \{1,2,3,4\}$; the index $t_1$ indicates the first half of an input video segment, whereas $t_2$ indicates the second half. Every possible combination of these values defines one of the $2^8=256$ Gaussian mixtures. We learn the parameters of each Gaussian mixture from the training data using maximum likelihood with diagonal covariances.
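The exact output of $f_{\rm gm}$ admits more than one implementation; one plausible sketch, in which $\boldsymbol{\phi}_t$ collects the normalized likelihood of $\mathbf{b}_t$ under each of the $256$ diagonal Gaussians, is given below (this reading and the variable names are assumptions).
\begin{verbatim}
import numpy as np

def gaussian_mixture_feature(b_t, means, variances):
    """Soft assignment of b_t to the 256 Gaussian mixtures.

    b_t       : (8,)      pooled atomic-event vector
    means     : (256, 8)  per-mixture means (fit by maximum likelihood)
    variances : (256, 8)  per-mixture diagonal covariances
    returns   : (256,)    normalized likelihoods of b_t under each mixture
    """
    diff = b_t[None, :] - means
    log_lik = -0.5 * np.sum(diff ** 2 / variances
                            + np.log(2 * np.pi * variances), axis=1)
    log_lik -= log_lik.max()              # numerical stability
    lik = np.exp(log_lik)
    return lik / lik.sum()
\end{verbatim}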
\subsection{Basketball Assessment Prediction}
We learn a linear weight $\mathbf{w}$ in Equation~(\ref{Eq:assessment}) based on the comparative assessment of players provided by a former professional basketball player in Section~\ref{data_sec}. We minimize the following hinge loss:
\begin{align}
\mathcal{L}_\mathbf{w} = \sum_{i=1}^D \max \left(0, \left(\frac{1}{2}-Y_i\right)\left(S(\mathcal{V}_1^i)-S(\mathcal{V}_2^i)\right)\right),
\end{align}
where $Y_i=1$ if a basketball expert declared Player 1 to be better than Player 2, and $Y_i=0$ otherwise. $S(\mathcal{V}_1^i)$ and $S(\mathcal{V}_2^i)$ denote our predicted performance measures for Players 1 and 2, respectively, $\mathcal{V}_1^i$ and $\mathcal{V}_2^i$ are the first-person basketball videos of Player 1 and Player 2, and $D$ is the number of data points. Based on Equation~\ref{Eq:assessment}, we can compute the subgradients of this loss function with respect to $w$ and minimize it via standard gradient descent. In Figure~\ref{train_arch_fig}, we provide an illustration of this learning framework.
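Because $S(\mathcal{V})$ is linear in $\mathbf{w}$ once each video is summarized by its relevance-weighted average feature, the subgradient step takes a simple form; the following is a minimal sketch with illustrative names.
\begin{verbatim}
import numpy as np

def aggregate_feature(phi, relevance):
    """Relevance-weighted average feature of one video, so S(V) = w . f."""
    return (relevance[:, None] * phi).sum(axis=0) / relevance.sum()

def train_weights(pairs, labels, dim, lr=1e-3, iters=100):
    """Learn w by subgradient descent on the pairwise hinge loss.

    pairs  : list of (f1, f2) aggregated features for Player 1 and Player 2
    labels : list of Y_i in {0, 1}; Y_i = 1 means Player 1 was judged better
    """
    w = np.zeros(dim)
    for _ in range(iters):
        grad = np.zeros(dim)
        for (f1, f2), y in zip(pairs, labels):
            margin = (0.5 - y) * (w @ f1 - w @ f2)
            if margin > 0:                      # hinge term is active
                grad += (0.5 - y) * (f1 - f2)
        w -= lr * grad
    return w
\end{verbatim}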
\textbf{Why Linear Classifier?} We only have $250$ labeled pairs for learning the weights, which is a small amount of training data. Thus, making the classifier more complex typically results in severe overfitting. Through our experiments, we found that linear weights work best.
\subsection{Implementation Details}
For all of our experiments involving CNNs, we used a Caffe library~\cite{jia2014caffe}. Both networks were based on DeepLab's~\cite{DBLP:journals/corr/ChenYWXY15} architecture and were trained for $4000$ iterations with a learning rate of $10^{-8}$, $0.9$ momentum, the weight decay of $5 \cdot 10^{-5}$, and $30$ samples per batch. The LSTM layers inside the atomic basketball event network spanned $10$ consecutive frames in the video input. Each pathway in the atomic basketball event network was composed of two $1024$ dimensional convolution layers with kernel size $1 \times 1$ and a $1024$ dimensional LSTM layer. The networks were trained using standard data augmentation. To learn the weights $w$ we used a learning rate of $0.001$ and ran gradient descent optimization for $100$ iterations.
\section{First-Person Basketball Dataset}
\label{data_sec}
We present a first-person basketball dataset composed of 10.3 hours of videos with 48 college players. Each video is about 13 minutes long, captured by a GoPro Hero 3 Black Edition mounted on a head strap. It is recorded at 1280$\times$960 and 100 fps. We record $48$ videos over two days, with a different group of people playing each day. We use $24$ videos from the first day for training and $24$ videos from the second day for testing. We extract the video frames at $5$ fps to get $98,452$ frames for training and $87,393$ frames for testing.
We ask a former professional basketball player (who played for a European national team) to label which player performs better given a pair of first-person videos. In total, 500 pairs are used: 250 for training and 250 for testing. Note that no players overlap between the training and testing splits.
We also label three simple basketball events: 1) somebody shooting a ball, 2) the camera wearer possessing the ball, and 3) a made shot. These are the key atomic events that drive a basketball game. In total, we obtain $3,734$, $4,502$, and $2,175$ annotations for each of these three events respectively.
Furthermore, to train a ball detector, we label the location of the ball in $5,073$ images by clicking once on the location. We then place a fixed-size Gaussian around those locations and use it as a ground truth label.
\setlength{\tabcolsep}{3.5pt}
\begin{table}[t]
\footnotesize
\begin{center}
\begin{tabular}{ c | c | c | c | c |}
\cline{2-5}
& \multicolumn{4}{ c |}{Atomic Events} \\
\cline{2-5}
& \multicolumn{1}{ c |}{$p^{(1)}$} & \multicolumn{1}{ c |}{$p^{(2)}$} & \multicolumn{1}{ c |}{$p^{(3)}$} & \multicolumn{1}{ c |}{mean}\\ \cline{1-5}
\multicolumn{1}{| c |}{Tran et al.~\cite{Tran:2015:LSF:2919332.2919929}} & 0.312 & 0.428 & 0.193 & 0.311 \\
\multicolumn{1}{| c |}{Singh et al~\cite{Singh_2016_CVPR}} & 0.469 & 0.649 & 0.185 & 0.434 \\
\multicolumn{1}{| c |}{Bertasius et al~\cite{gberta_2017_RSS}} & 0.548 & 0.723 & 0.289 & 0.520 \\
\multicolumn{1}{| c |}{Ma et al~\cite{ma2016going}} & 0.622 & 0.718 & 0.364 & 0.568 \\ \hline
\multicolumn{1}{| c |}{Ours: no LSTM \& no zoom-in} & 0.711 & 0.705 & 0.192 & 0.536 \\
\multicolumn{1}{| c |}{Ours: no zoom-in} & 0.693 & 0.710 & 0.248 & 0.550 \\
\multicolumn{1}{| c |}{Ours: single path} & 0.678 & 0.754 & 0.308 & 0.580 \\
\multicolumn{1}{| c |}{Ours: no LSTM} & 0.718 & 0.746 &\bf 0.397 & 0.620 \\ \hline
\multicolumn{1}{| c |}{Ours} & \bf 0.724 & \bf 0.756 & 0.395 & \bf 0.625 \\ \hline
\end{tabular}
\end{center}
\vspace{-0.2cm}
\caption{The quantitative results for atomic basketball event detection on our first-person basketball dataset according to the max F-score (MF) metric. These results show 1) that our method outperforms prior first-person methods and 2) that each component plays a critical role in our system.\vspace{-0.4cm}}
\label{att_table}
\end{table}
\setlength{\tabcolsep}{1.5pt}
\begin{figure*}[t]
\centering
\subfigure[A Pair of Players \#1]{\label{good_spatial_fig}\includegraphics[height=0.15\textheight]{./paper_figures/dynamic_results/1_4.pdf}}~
\subfigure[A Pair of Players \#2]{\label{bad_spatial_fig}\includegraphics[height=0.15\textheight]{./paper_figures/dynamic_results/8_16.pdf}}~
\subfigure[A Pair of Players \#3]{\label{good_spatial_fig}\includegraphics[height=0.15\textheight]{./paper_figures/dynamic_results/9_23.pdf}}~
\subfigure[A Pair of Players \#4]{\label{bad_spatial_fig}\includegraphics[height=0.15\textheight]{./paper_figures/dynamic_results/18_22.pdf}}~
\vspace{-0.4cm}
\caption{We randomly select $4$ pairs of basketball players, and visualize how our assessment model evaluates each player over time. The red plot denotes the better player in a pair, whereas the blue plot depicts the worse player. The $y$-axis in the plot illustrates our predicted performance measure for an event occurring at a specific time in a player's first-person video.\vspace{-0.4cm}}
\label{dynamic_fig}
\end{figure*}
\section{Experimental Results}
\subsection{Quantitative Results}
\textbf{Atomic Basketball Event Detection.} In Table~\ref{att_table}, we first present our results for the atomic basketball event detection task. The results are evaluated according to the maximum F-score (MF) metric by thresholding the predicted atomic event probabilities at small intervals and then computing a precision and recall curve. First, we compare our model's predictions with several recent first-person activity recognition baselines~\cite{Singh_2016_CVPR,gberta_2017_RSS,ma2016going} and also with the successful video activity recognition baseline C3D~\cite{Tran:2015:LSF:2919332.2919929}. We show that our model outperforms all of these baselines for each atomic event.
Furthermore, to justify our model's design choices, in Table~\ref{att_table} we also include several experiments studying the effect of 1) the multi-path architecture, 2) the LSTM layers, and 3) the zooming-in scheme. Our experiments indicate that each of these components is crucial for achieving solid atomic event recognition accuracy, i.e., the system achieves the best performance when all three components are included in the model.
\begin{table}
\footnotesize
\begin{center}
\begin{tabular}{ c | c | c |}
\cline{2-3}
& \multicolumn{2}{|c |}{Accuracy} \\
\cline{2-3}
& \multicolumn{1}{ c |}{Pred. Events} & \multicolumn{1}{ c |}{GT Events} \\ \cline{1-3}
\multicolumn{1}{| c |}{LRCN~\cite{lrcn2014} 2-pt made shot detector} & fail & - \\
\multicolumn{1}{| c |}{ LRCN~\cite{lrcn2014} 3-pt made shot detector} & fail & - \\ \hline
\multicolumn{1}{| c |}{Ours: no GMs} & 0.477 & -\\
\multicolumn{1}{| c |}{Ours: no $p^{(3)}$} & 0.496 & -\\
\multicolumn{1}{| c |}{Ours: no $p^{(2)}$} & 0.515 & -\\
\multicolumn{1}{| c |}{ Ours: no $p^{(1)}$} & 0.536 & -\\
\multicolumn{1}{| c |}{Ours: single GM-top2} & 0.537 & - \\
\multicolumn{1}{| c |}{Ours: all weights $w$ set to $1$} & 0.583 & -\\
\multicolumn{1}{| c |}{Ours: single GM-top1} & 0.609 & - \\
\multicolumn{1}{| c |}{Ours: no $p^{(4)}$} & 0.649 & -\\ \hline
\multicolumn{1}{| c |}{Ours} & \bf 0.765 & \bf 0.793\\ \hline
\end{tabular}
\end{center}\vspace{-.4cm}
\caption{The quantitative results for our basketball assessment task. We evaluate our method on $250$ labeled pairs of players and predict which of the two players in a pair is better. We then compute the accuracy as the fraction of correct predictions. We report the results of various baselines in two settings: 1) using our predicted atomic events, and 2) using ground truth atomic events. These results show that 1) our model achieves the best results, 2) each of our proposed components is important, and 3) our system is fairly robust to atomic event recognition errors. \vspace{-0.4cm}}
\label{skill_table}
\end{table}
\textbf{Basketball Assessment Results.} In Table~\ref{skill_table}, we present our results for assessing the $24$ basketball players from our testing dataset. To test our method's accuracy, we evaluate it on $250$ labeled pairs of players, where a label provided by a basketball expert indicates which of the two players is better. For each player, our method produces an assessment measure indicating which player is better (the higher the better). To obtain the accuracy, we compute the fraction of correct predictions across all $250$ pairs.
We note that to the best of our knowledge, we are the first to formally investigate a basketball performance assessment task from a first-person video. Thus, there are no well established prior baselines for this task. As a result, we include the following list of baselines for a comparison.
First, we include two basketball activity baselines: detectors of 1) 2-point and 2) 3-point shots made by the camera wearer. We label all instances in our dataset where these activities occur and find $\approx 100$ such instances. Note that such a small number of instances is not a flaw of our dataset, but an inherent characteristic of our task. Such basketball activities belong to a long-tail data distribution, i.e., they occur rarely, and thus it is difficult to train supervised classifiers for such activity recognition. We then train an LRCN~\cite{lrcn2014} model as 1) a 2-point made shot detector, and 2) a 3-point made shot detector. We report that, due to the small amount of training data, in all cases the network severely overfit the training data and did not learn any meaningful pattern.
Furthermore, to justify each of our proposed components, in Table~\ref{skill_table} we also include several ablation baselines. First, we study how 1) Gaussian mixtures (GM) and 2) the process of learning the weights affect the performance assessment accuracy. We do this 1) with our predicted atomic events and 2) with the ground truth atomic events. We show that in both cases, each of our proposed components is beneficial. We also observe that our system is robust to atomic event recognition errors: the accuracy when using the ground truth atomic events is only $2.8\%$ better compared to our original model.
We also present the performance assessment results when we remove one of the four atomic events from our system. We show that our method performs the best when all four atomic events are used, suggesting that each atomic event is useful. Finally, as two extra baselines we manually select two Gaussian mixtures with the largest weight magnitudes and use each of their predictions independently (denoted as single GM-top1,2 in Table~\ref{skill_table}). We show that our full model outperforms all the other baselines, thus, indicating that each of our proposed component in our model is crucial for an accurate player performance assessment.
\captionsetup{labelformat=empty}
\captionsetup[figure]{skip=5pt}
\begin{figure*}
\centering
\myfiguresixcol{./paper_figures/gmm_top1/ex1/22305.jpg}
\myfiguresixcol{./paper_figures/gmm_top1/ex1/22323.jpg}
\myfiguresixcol{./paper_figures/gmm_top1/ex1/22332.jpg}
\myfiguresixcol{./paper_figures/gmm_top1/ex1/22350.jpg}
\myfiguresixcol{./paper_figures/gmm_top1/ex1/22359.jpg}
\myfiguresixcol{./paper_figures/gmm_top1/ex1/22368.jpg}
\myfiguresixcol{./paper_figures/gmm_top2/ex2/47266.jpg}
\myfiguresixcol{./paper_figures/gmm_top2/ex2/47284.jpg}
\myfiguresixcol{./paper_figures/gmm_top2/ex2/47320.jpg}
\myfiguresixcol{./paper_figures/gmm_top2/ex2/47329.jpg}
\myfiguresixcol{./paper_figures/gmm_top2/ex2/47347.jpg}
\myfiguresixcol{./paper_figures/gmm_top2/ex2/47383.jpg}
\myfiguresixcol{./paper_figures/gmm_top3/ex2/35723.jpg}
\myfiguresixcol{./paper_figures/gmm_top3/ex2/35732.jpg}
\myfiguresixcol{./paper_figures/gmm_top3/ex2/35750.jpg}
\myfiguresixcol{./paper_figures/gmm_top3/ex2/35759.jpg}
\myfiguresixcol{./paper_figures/gmm_top3/ex2/35777.jpg}
\myfiguresixcol{./paper_figures/gmm_top3/ex2/35786.jpg}
\captionsetup{labelformat=default}
\setcounter{figure}{4}
\caption{A visualization of basketball activities that we discovered by manually inspecting Gaussian mixtures associated with the largest basketball assessment model weights $w$. Each row in the figure depicts a separate event, and the columns illustrate the time lapse of the event (from left to right), We discover that the two most positive Gaussian mixtures correspond to the events of a player making a 2 point and a 3 point shot respectively (the first two rows), while the mixture with the most negative weight captures an event when a player misses a 2 point shot (last row). \vspace{-0.5cm}}
\label{gmm_fig}
\end{figure*}
\captionsetup{labelformat=default}
\captionsetup[figure]{skip=10pt}
\subsection{Qualitative Results}
In addition, in Figure~\ref{dynamic_fig}, we also include a more dynamic visualization of how our assessment model works over time. To do this, we randomly select $4$ pairs of basketball players, and visualize how our model evaluates each player over time. The red plot in each pair denotes the better player, whereas the blue plot depicts the worse player. The $y$-axis in the plot illustrates our predicted performance measure for an event occurring at a specific time in a player's first-person video.
Furthermore, in Figure~\ref{best_worst_fig} we also include examples of short sequences illustrating 1) a player's actions that contributed most positively to his/her performance assessment and 2) actions that contributed most negatively. We select these action sequences by picking the first-person video sequences with the largest positive and negative values of the terms inside the summation of Equation~\ref{Eq:assessment} (which also correspond to the positive and negative peaks in Figure~\ref{dynamic_fig}). Such terms depict each video segment's contribution to the overall basketball skill assessment measure.
We note that it is quite difficult to present such results in an image format, because 1) images are static and thus cannot capture the full content of the videos, and 2) images in the paper appear at a much lower resolution than the original $480 \times 640$ videos, which makes it more difficult to understand what kind of events are depicted in them. To address some of these issues, we include more of these qualitative examples in a video format in our supplementary material.
\textbf{Understanding the Feature Representation.} Earlier, we claimed that Gaussian mixtures produce a highly non-linear feature representation. We now want to get a better insight into what it represents. To do so, we analyze the learned weights $w$ and then manually inspect the Gaussian mixtures associated with the largest-magnitude weights in $w$. Upon doing so, we discover that the two mixtures with the most positive weights learn to capture the activities of the camera wearer making a 2-point shot and a 3-point shot, respectively. Conversely, the mixtures with the two most negative weights represent the activities of the camera wearer missing a 2-point shot and of the camera wearer's defender making a shot, respectively. In Figure~\ref{gmm_fig}, we include several sequences corresponding to these discovered activities.
\section{Conclusions}
In this work, we introduced a basketball assessment model that evaluates a player's performance from his/her first-person basketball video. We showed that we can learn powerful visual spatiotemporal assessment features from first-person videos, and then use them to learn our skill assessment model from the pairs of weakly labeled first-person basketball videos. We demonstrated that despite not knowing the labeler's assessment criterion, our model learns to evaluate players with a solid accuracy. In addition, we can also use our model to discover the camera wearer's activities that contribute positively or negatively to his/her performance assessment.
We also note that performance assessment is an important problem in many different areas not just basketball. These include musical instrument playing, job related activities, and even our daily moments such as cooking a meal. In our future work, we plan to investigate these new areas, and try to generalize our model to such activities too.
\captionsetup{labelformat=empty}
\captionsetup[figure]{skip=5pt}
\begin{figure*}
\centering
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/20_best2/46901.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/20_best2/46910.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/20_best2/46919.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/20_best2/46928.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/20_best2/46964.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/20_best2/46973.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/15_best/18773.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/15_best/18782.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/15_best/18800.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/15_best/18818.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/15_best/18827.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/15_best/18836.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/18_best2/19553.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/18_best2/19562.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/18_best2/19571.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/18_best2/19589.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/18_best2/19598.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/18_best2/19607.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/17_best/1262.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/17_best/1271.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/17_best/1280.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/17_best/1289.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/17_best/1298.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/17_best/1307.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/16_best/22822.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/16_best/22831.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/16_best/22849.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/16_best/22876.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/16_best/22894.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/16_best/22903.jpg}
\vspace{-0.1cm}
\small{(a) The detected events that contributed most \textbf{positively} to a player's performance assessment score according to our model}
\normalsize{}
\vspace{0.2cm}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/4_worst/29529.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/4_worst/29538.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/4_worst/29547.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/4_worst/29556.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/4_worst/29565.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/4_worst/29574.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/16_worst/42847.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/16_worst/42856.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/16_worst/42865.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/16_worst/42883.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/16_worst/42910.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/16_worst/42919.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/14_worst/13298.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/14_worst/13307.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/14_worst/13334.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/14_worst/13343.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/14_worst/13361.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/14_worst/13370.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/9_worst/33282.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/9_worst/33300.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/9_worst/33309.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/9_worst/33327.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/9_worst/33345.jpg}
\myfiguresixcol{./paper_figures/best_worst_events_FIG_wBB_MAX/9_worst/33354.jpg}
\vspace{-0.1cm}
\small{(b) The detected events that contributed most \textbf{negatively} to a player's performance assessment score according to our model}
\normalsize{}\vspace{0.1cm}
\captionsetup{labelformat=default}
\setcounter{figure}{5}
\vspace{-0.1cm}
\caption{A figure illustrating the events that contribute most positively (top figure) and most negatively (bottom figure) to a player's performance measure according to our model. The red box illustrates the location where our method zooms in. Each row in the figure depicts a separate event, and the columns illustrate the time lapse of the event (from left to right). We note that among the detected positive events our method recognizes events such as assists, made layups, and made three pointers, whereas among the detected negative events, our method identifies events such as missed layups and missed jumpshots. We present more such video examples in the supplementary material.\vspace{-0.5cm}}
\label{best_worst_fig}
\end{figure*}
\captionsetup{labelformat=default}
\captionsetup[figure]{skip=10pt}
\bibliographystyle{plain}
\section{Limitations and Discussion}
\label{sec:discussion}
\noindent\textbf{Subject tracking} is needed for {VPD}\xspace to ensure that the pose is of the correct person. Real-world sports video often contains many people, such as audience and judges, in addition to the subject. The tracking annotations in the datasets in~\autoref{sec:action_recognition} are computed automatically using off-the-shelf models and heuristics (see supplemental for details). This is possible because athletes are salient in appearance, space, and time --- sports video is a natural application for work on tracking~\cite{sort,deepsort} and detecting salient regions~\cite{saliencymodellooknext}.
We observe that the difference in accuracy between the tracked and non-tracked inputs on other prior methods such as~\cite{gsm,tsn,stgcn} can be staggering (48\% on {FSJump6}\xspace for GSM~\cite{gsm} and 40\% on {FX35}\xspace for ST-GCN~\cite{stgcn}; see~\autoref{tab:full_action_dataset}).
To evaluate the quality of our pose features, we focused on motion by a single athlete or synchronized athletes (contained in Diving48). Tasks and actions involving many people require a more sophisticated downstream model that can handle multiple descriptors or poses per frame.
\heading{Future work.}
First, the 2D pose estimates used to supervise {VPD}\xspace are inherently ambiguous with respect to camera view, and additional information such as depth or a behavioral prior could help alleviate this ambiguity. Other weak supervision sources, in addition to motion and VIPE, may also help.
Second, our distillation process is offline; supporting online training, similar to~\cite{jitnet,tttrain}, at the pose feature extraction stage could be beneficial in time-evolving datasets.
Distillation for explicit 2D or 3D pose estimation is another possibility.
Although {VPD}\xspace features can improve accuracy with limited data, performance on few-shot and semi-supervised tasks still has much room to improve, and we hope that future work continues to explore these topics.
\section{Conclusion}
\label{sec:conclusion}
Pose features are useful for studying human-centric action in novel sports video datasets.
However, such datasets are often challenging for off-the-shelf models.
Our method, {VPD}\xspace, improves the reliability of pose features in difficult and label-poor settings,
by distilling knowledge from existing pose estimators. {VPD}\xspace learns features that improve accuracy on both traditional and few-shot action understanding tasks in the target (sport) domain.
We believe that our distillation-based method is a useful paradigm for addressing challenges faced by applications in new video domains.
\section{Introduction}
Analyzing sports video requires robust algorithms to automate
fine-grained action recognition, retrieval, and detection in large-scale video collections.
Human pose is a useful feature when sports are centered around people.
State-of-the-art skeleton-based deep learning techniques for action recognition~\cite{msg3d,stgcn} rely on accurate 2D pose detection to
extract the athletes' motion, but the best pose detectors~\cite{hrnet,detectron2}
routinely fail on fast-paced sports video with complex blur and occlusions, often in frames crucial to the action (\autoref{fig:bad_poses}).
To circumvent these issues, end-to-end learned models operate directly on the video
stream~\cite{i3d,slowfast,tsm,gsm,tsn,trn}.
However, because they consume pixel instead of pose inputs, when trained with few labels, they
tend to latch onto specific visual patterns~\cite{danceinmall,mimetics} rather than the fine-grained motion (e.g., an athlete's clothes or the presence of a ball).
As a result, prior pose and end-to-end methods often generalize poorly on fine-grained tasks in challenging sports video, when labels are scarce.
While collecting large datasets with fine action and pose annotations is possible, doing so for each new sport does not scale.
We propose \textbf{Video Pose Distillation} ({VPD}\xspace), a weakly-supervised technique in which a \emph{student} network learns to extract robust pose features from RGB video frames in a new video domain (a single sport).
{VPD}\xspace is designed such that, whenever pose
is reliable, the features match the output of a pretrained \emph{teacher}
pose detector.
Our strategy retains the best of both pose and end-to-end worlds.
First, like directly supervised end-to-end methods, our student can exploit the rich
visual patterns present in the raw frames, including but not limited to the
athlete's pose, and continue to operate when pose estimation is unsuccessful.
Second, by constraining our descriptors to agree with the pose estimator
whenever high-confidence pose is available, we avoid the pitfall of overfitting to visual patterns unrelated to the athlete's action.
And third, weak pose supervision allows us to enforce an additional
constraint: we require that the student predicts not only instantaneous pose
but also its temporal derivative.
This encourages our features to pick up on visual similarities over time (e.g., an athlete
progressing from pose to pose).
When we train the student with weak-supervision over a corpus of unlabeled sports video,
the student learns to `fill in the gaps' left by the noisy pose teacher.
Together, these properties lead to a student network whose features outperform
the teacher's pose output when used in downstream applications.
{VPD}\xspace features improve performance on few-shot, fine-grained action
recognition, retrieval, and detection tasks in the target sport domain,
without requiring additional ground-truth action or pose labels.
We demonstrate the benefits of {VPD}\xspace on four diverse sports video datasets with {\em fine-grained} action labels: diving~\cite{diving48}, floor exercises~\cite{finegym}, tennis~\cite{vid2player}, and a new dataset for figure skating.
In a few-shot --- limited supervision --- setting, action recognition models trained with distilled {VPD}\xspace features
can significantly outperform models trained directly on features from the teacher as well as baselines from prior skeleton-based and end-to-end learning work.
For instance, when restricted to between 8 and 64 training examples per class from diving and floor exercises, the two datasets that are most challenging for pose, {VPD}\xspace features improve fine-grained classification accuracy by 6.8 to 22.8\% and by 5.0 to 10.5\%, respectively, over the next best method(s).
Even when labels are plentiful, {VPD}\xspace remains competitive, achieving superior accuracy on three of the four test datasets.
To summarize, {VPD}\xspace surpasses its teacher in situations where leveraging pose is crucial (e.g., few-shot) and is also competitive when end-to-end methods dominate (e.g., unreliable pose and the high-data / full supervision setting).
Finally, we show applications of {VPD}\xspace features to fine-grained action retrieval and few-shot temporal detection tasks.
This paper makes the following contributions:
\begin{enumerate}
\item A weakly-supervised method, {VPD}\xspace, to adapt pose features to
new video domains, which significantly improves performance on downstream tasks
like action recognition, retrieval, and detection in scenarios where 2D
pose estimation is unreliable.
\item State-of-the-art accuracy in few-shot, fine-grained action understanding
tasks using {VPD}\xspace features, for a variety of sports. On action recognition,
{VPD}\xspace features perform well with as few as 8 examples per class and remain competitive or state-of-the-art even as the training data is increased.
\item
A new dataset (figure skating) and extensions to three datasets of real-world sports video,
to include tracking of the performers, in order to facilitate future research on
fine-grained sports action understanding.
\end{enumerate}
\section{Video Pose Distillation}
\label{sec:method}
Our strategy is to distill inaccurate pose estimates from an existing,
off-the-shelf pose detector (the \emph{teacher}), trained on generic pose
datasets, into a \emph{student} network that is specialized to generate
robust pose descriptors for videos in a specific target sport domain
(\autoref{fig:system_diagram}).
The student (\autoref{sec:student}) takes RGB pixels and optical flow, cropped around the athlete, as input.
It produces a descriptor, from which we regress the athlete's pose as emitted by
the teacher (\autoref{sec:teacher}).
We run this distillation process over a large, \emph{uncut and unlabeled} corpus
of target domain videos (\autoref{sec:training_data}), using the sparse set of
high-confidence teacher outputs as weak supervision for the student.
Since the teacher is already trained, {VPD}\xspace requires no new pose annotations in
the target video domain. Likewise, no downstream application-specific labels (e.g.,
action labels for recognition) are needed to learn pose features.
{VPD}\xspace does, however, require that the athlete be identified in each input frame,
so we assume that an approximate bounding box for the athlete is provided in each frame as part of the dataset.
Refer to~\autoref{sec:discussion} for discussion and limitations.
\subsection{Teacher Network}
\label{sec:teacher}
To stress that {VPD}\xspace is a general approach that can be applied to different teacher models, we propose two teacher variants of {VPD}\xspace.
The first uses an off-the-shelf pose estimator~\cite{hrnet} to estimate 2D joint
positions from \Frame{\notation{t}}, the RGB pixels of the \notation{t}-th frame.
We normalize the 2D joint positions by rescaling and centering as in~\cite{prvipe},
and we collect the joint coordinates into a vector
$\WeakPose{\notation{t}}\in\mathbb{R}^\notation{d}$.
We refer to this as 2D-{VPD}\xspace since the teacher generates 2D joint
positions.
Our second teacher variant further processes the 2D joint positions into a
\emph{view-invariant} pose descriptor, emitted as $\WeakPose{\notation{t}}$.
Our implementation uses {VIPE$^\star$}\xspace to generate this descriptor.
{VIPE$^\star$}\xspace is a reimplementation of concepts from Pr-VIPE~\cite{prvipe} that is extended to
train on additional synthetic 3D pose data~\cite{amass,3dpeople,nba2k} for better
generalization.
We refer to this variation as VI-{VPD}\xspace since the teacher generates a
view-invariant pose representation.
(See supplemental for details about {VIPE$^\star$}\xspace and its quality compared to
Pr-VIPE.)
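For concreteness, a minimal Python sketch of the 2D joint normalization step (rescaling and centering) is shown below; the joint indices and the exact scaling are illustrative, and the procedure we actually use follows~\cite{prvipe}:
\begin{verbatim}
import numpy as np

def normalize_2d_pose(joints):
    """Center and rescale a (J, 2) array of 2D joints.

    Simplified stand-in for a Pr-VIPE-style normalization: center on the
    mid-hip point and scale by the torso length (mid-hip to mid-shoulder).
    Indices assume a 13-keypoint COCO subset and are illustrative.
    """
    L_SHOULDER, R_SHOULDER, L_HIP, R_HIP = 1, 2, 7, 8
    mid_hip = (joints[L_HIP] + joints[R_HIP]) / 2.0
    mid_shoulder = (joints[L_SHOULDER] + joints[R_SHOULDER]) / 2.0
    torso = np.linalg.norm(mid_shoulder - mid_hip) + 1e-8
    return ((joints - mid_hip) / torso).reshape(-1)
\end{verbatim}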
\subsection{Student Feature Extractor}
\label{sec:student}
Since understanding an athlete's motion, not just their current pose, is a key
aspect of many sports analysis tasks, we design a student feature extractor that
encodes information about both the athlete's current pose \WeakPose{\notation{t}} and
the rate of change in pose $\PoseMotion{\notation{t}}:=\WeakPose{\notation{t}}-\WeakPose{\notation{t}-1}$.
The student is a neural network \notation{F} that consumes a color
video frame $\Frame{\notation{t}}\in\mathbb{R}^{\notation{3\Height\Width}}$, cropped around the
athlete, along with its optical flow
$\Flow{\notation{t}}\in\mathbb{R}^{\notation{2\Height\Width}}$, from the previous frame. \notation{h} and \notation{w}
are the crop's spatial dimensions, and \notation{t} denotes the frame index.
The student produces a descriptor
$\notation{F}\left(\Frame{\notation{t}},\Flow{\notation{t}}\right)\in\mathbb{R}^\notation{d}$,
with the same dimension \notation{d} as the teacher's output.
We implement \notation{F} as a standard ResNet-34~\cite{resnet} with 5 input channels, and we resize the input
crops to $128\times128$.
During distillation, the features emitted by \notation{F} are passed through an
auxiliary decoder \notation{D}, which predicts \emph{both} the current pose
\WeakPose{\notation{t}} and the temporal derivative \PoseMotion{\notation{t}}.
%
Exploiting the temporal aspect of video, \PoseMotion{\notation{t}} provides an additional
supervision signal that forces our descriptor to capture motion in addition to the current pose.
\notation{D} is implemented as a fully-connected network, and
we train the combined student pathway
$\notation{D} \circ \notation{F}$
using the following objective:
\begin{equation}
\minimize_{\notation{F},\notation{D}}
\sum_{\notation{t}=1}^\notation{N} {
\begin{Vmatrix}
D\left(
F\left(\Frame{\notation{t}},\Flow{\notation{t}}\right)
\right) -
\begin{bmatrix}
\WeakPose{\notation{t}} \\
\PoseMotion{\notation{t}}
\end{bmatrix}
\end{Vmatrix}
}^2_2
\end{equation}
Since only \notation{F} is needed to produce descriptors during inference,
we discard \notation{D} at the end of training.
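A minimal PyTorch sketch of the student pathway and this objective is given below; layer sizes follow the description here and in the supplemental, but the names are illustrative and the snippet is a simplification rather than our exact implementation:
\begin{verbatim}
import torch
import torch.nn as nn
import torchvision

class VPDStudent(nn.Module):
    """ResNet-34 over a 5-channel (RGB + flow) crop, producing a d-dim descriptor."""
    def __init__(self, d=64):
        super().__init__()
        backbone = torchvision.models.resnet34()
        # Replace the stem so the network accepts 3 RGB + 2 flow channels.
        backbone.conv1 = nn.Conv2d(5, 64, kernel_size=7, stride=2,
                                   padding=3, bias=False)
        backbone.fc = nn.Linear(backbone.fc.in_features, d)
        self.backbone = backbone

    def forward(self, frame, flow):
        # frame: (B, 3, 128, 128), flow: (B, 2, 128, 128)
        return self.backbone(torch.cat([frame, flow], dim=1))

class AuxDecoder(nn.Module):
    """Fully connected decoder predicting [pose, pose derivative]; discarded after training."""
    def __init__(self, d=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * d))

    def forward(self, z):
        return self.net(z)

def distillation_loss(student, decoder, frame, flow, pose, prev_pose):
    """Squared L2 loss against the teacher's pose and its temporal derivative."""
    target = torch.cat([pose, pose - prev_pose], dim=1)
    pred = decoder(student(frame, flow))
    return ((pred - target) ** 2).sum(dim=1).mean()
\end{verbatim}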
Unlike its teacher, which was trained to recognize a general distribution of poses
and human appearances, the student~\notation{F} \emph{specializes} to frames and optical flow
in the new target domain (e.g., players in tennis courts).
Specialization via distillation allows \notation{F} to focus on patterns present in the sports data that explain pose.
We do not expect, nor do downstream tasks require, that \notation{F} encode poses or people not seen in the target
domain (e.g., sitting on a bench, ballet dancers), although they may be part of the teacher's training distribution.
Experiments in~\autoref{sec:eval} show that our pose descriptors,
$\notation{F}(\Frame{\notation{t}},\Flow{\notation{t}})$, improve accuracy on
several applications, including few-shot, fine-grained action recognition.
\subsection{Training Data Selection and Augmentation}
\label{sec:training_data}
\noindent\textbf{Data selection.}
The teacher's output may be noisy due to challenges such as motion blur and occlusion
or because of domain shift between our target videos and the data that the teacher was trained on.
To improve the student's ability to learn and to discourage memorization of the teacher's noise,
we exclude frames with low pose confidence scores (specifically, {\em mean estimated joint score}) from the teacher's weak-supervision set.
By default, the threshold is 0.5, although 0.7 is used for tennis. Tuning this threshold has an effect on the quality of the distilled features (see supplemental for details).
We also withhold a fixed fraction of frames (20\%) uniformly at random as a validation set for the student.
\heading{Data augmentation.}
We apply standard image augmentation techniques such as
random resizing and cropping; horizontal flipping; and color and noise jitter,
when training the student \notation{F}.
To ensure that left-right body orientations are preserved when horizontally augmenting
\Frame{\notation{t}} and \Flow{\notation{t}}, we also must flip the teacher's output
\WeakPose{\notation{t}}.
For 2D joint positions and 2D-{VPD}\xspace, this is straightforward.
To flip {VIPE$^\star$}\xspace (itself a chiral pose embedding) used to train VI-{VPD}\xspace,
we must flip the 2D pose inputs to {VIPE$^\star$}\xspace and then re-embed them.
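As an illustration, a chirality-aware flip of normalized 2D joints can be sketched as follows (the left/right pair indices are illustrative):
\begin{verbatim}
import numpy as np

# Illustrative left/right joint pairs for a 13-keypoint COCO subset (no eyes/ears).
FLIP_PAIRS = [(1, 2), (3, 4), (5, 6), (7, 8), (9, 10), (11, 12)]

def hflip_pose(joints):
    """Mirror a (J, 2) array of centered 2D joints and swap left/right labels."""
    flipped = joints.copy()
    flipped[:, 0] = -flipped[:, 0]  # mirror the x-coordinate about the center
    for left, right in FLIP_PAIRS:
        flipped[[left, right]] = flipped[[right, left]]
    return flipped
\end{verbatim}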
\section{Related Work}
\noindent\textbf{Pose representations} provide a powerful abstraction for
human action understanding.
Despite significant progress in 2D and 3D pose
estimation~\cite{personlab,videopose3d,hrnet}, downstream algorithms that
depend on pose continue to suffer from unreliable estimates in sports video.
With few labels available, for tasks such as fine-grained action recognition,
models must learn both the actions and to cope with noisy inputs.
VIPE~\cite{prvipe} and CV-MIM~\cite{cvmim} show that learned pose
embeddings, which factor-out camera view and
forgo explicit 3D pose estimation, can be useful; they are
trained on out-of-domain 3D pose data to embed 2D pose inputs and are effective when 2D pose is reliable.
{VPD}\xspace extends these works by using distillation to replace the unreliable 2D pose estimation step with a model that embeds directly from pixels to pose-embedding.
\cite{learnhumanmotion,videopose3d,phd} learn human motion from video but produce 3D pose rather than embeddings.
\input{figures/system_diagram}
\heading{Video action recognition} is dominated by end-to-end models~\cite{timesformer,i3d,slowfast,tsm,gsm,r3d,tsn,trn}, which are often evaluated on diverse but coarse-grained classification tasks (e.g., `golf', `tennis', etc.)~\cite{kinetics,hmdb51,minimit,ucf101,pennaction}.
Fine-grained action recognition in sports is a recent development~\cite{diving48,finegym}.
Besides being necessary for sports video analysis, fine-grained classification within a single sport is interesting because it avoids many contextual biases in coarse-grained tasks~\cite{danceinmall,diving48,mimetics}.
\cite{ikea,epickitchens,ssv1,ssv2} are also fine-grained datasets, but differ from body-centric actions in sports.
Pose or skeleton-based methods~\cite{potion,msg3d,stgcn} appear to be a good fit for action recognition in human-centric sports.
They depend on reliable 2D or 3D pose, which exists in datasets captured in controlled settings~\cite{nturgbd120,nturgbd} but not for public sports video, where no ground-truth is available and automatic detectors often perform poorly (e.g.,~\cite{diving48,finegym}).
{VPD}\xspace improves upon pose-based and end-to-end methods in human-centric sports datasets, especially when pose is not reliable.
Like VIPE~\cite{prvipe}, {VPD}\xspace produces effective pose features, to the extent that comparatively simple downstream models such as nearest neighbor search~\cite{prvipe} or a generic BiGRU~\cite{rnnchapter} network can compete with the state-of-the-art in action recognition --- in both few-shot and high-data regimes.
To show this, we compare against several recent action recognition methods~\cite{msg3d,gsm} in~\autoref{sec:action_recognition}.
{VPD}\xspace features can be used for any tasks where pretrained pose features may be helpful, such as action retrieval and temporally fine-grained detection (e.g., identifying tennis racket swings at 200 ms granularity).
The latter is interesting because prior baselines~\cite{activitynet,thumos14} focus on more general categories than human-centric action within a single sport and few papers~\cite{taen,detectionsimilarity} address the few-shot setting.
\heading{Few-shot action recognition} literature follows a number of paradigms, including meta-learning, metric learning, and data-augmentation approaches~\cite{taen,temporalalignment,protogan,generativefewshot}.
These works focus on coarse-grained datasets~\cite{activitynet,kinetics,hmdb51,ucf101} and adopt various protocols that partition the dataset into seen/unseen classes and/or perform a reduced N-way, K-shot classification (e.g., 5-way, 1- or 5-shot).
{VPD}\xspace differs in that it is completely agnostic to action labels when training features and does not require a particular architecture for downstream tasks such as action recognition.
In contrast to `few-shot' learning that seeks to generalize to unseen classes, {\em we evaluate on the standard classification task, with all classes known, but restricted to only $k$-examples per class at training time.}
Our evaluation is similar to~\cite{fixmatch,cvmim}, which perform action and image recognition with limited supervision, and, like~\cite{fixmatch,cvmim}, we test at different levels of supervision.
\heading{Self-supervision/distillation.}
{VPD}\xspace relies on only machine-generated pose annotations for weak-supervision and distillation.
{VPD}\xspace is similar to~\cite{noisystudentimagenet} in that the main goal of distillation is to improve the robustness and accuracy of the student rather than improve model efficiency.
Most self-supervision work focuses on pretraining and joint-training scenarios, where self-supervised losses are secondary to the end-task loss, and subsequent or concurrent fine-tuning is necessary to obtain competitive results~\cite{simclr,coclr,temporaltransform,cubicpuzzles,ms2l}.
By contrast, our {VPD}\xspace student is fixed after distillation.
\subsection{Action Retrieval}
\label{sec:eval:actionretrieval}
Action retrieval measures how well {VPD}\xspace features can be used to search for similar unlabeled actions. Here, the {VPD}\xspace features are distilled on the entire unlabeled corpus.
\heading{Experiment protocol.}
Given a query action, represented as a sequence of pose features, we rank all other actions in the corpus using the $L_2$ distance between pose features and dynamic time warping to compute an alignment score.
A result is considered relevant if it has the same fine-grained action label as the query, and we assess relevance by the precision at \emph{k} results, averaged across all the queries.
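A small sketch of this evaluation metric, given the DTW-ranked result lists for each query (names are illustrative):
\begin{verbatim}
import numpy as np

def precision_at_k(ranked_labels, query_label, k):
    """Fraction of the top-k retrieved actions sharing the query's fine-grained label."""
    return float(np.mean([label == query_label for label in ranked_labels[:k]]))

def mean_precision_at_k(rankings, query_labels, k):
    """Average precision at k over all queries (each ranking excludes the query itself)."""
    return float(np.mean([precision_at_k(r, q, k)
                          for r, q in zip(rankings, query_labels)]))
\end{verbatim}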
\heading{Results.}
At all cut-offs in~\autoref{tab:action_retrieval} and in all four datasets, {VPD}\xspace features outperform their teachers.
Sizeable improvements are seen on {FX35}\xspace and Diving48.
View-invariance does not always result in the highest precision if the number of camera angles is limited (e.g., {Tennis7}\xspace and Diving48), though it may be helpful for retrieving more diverse results.
\subsection{Pose Features for Few-Shot Action Detection}
\label{sec:eval:detection}
Detection of fine-grained actions, at fine temporal granularity and with few
labels, enables tasks such as few-shot recognition and retrieval.
We evaluate {VPD}\xspace features on the figure skating and tennis datasets, to temporally
localize the jumps and the swings, respectively.
The average jump is 1.6 seconds in length ($\approx$40 frames), while
a swing is defined to be the 200 ms around the frame of ball contact ($\approx$5 frames).
\heading{Experiment protocol.}
{\em We follow the same video-level train/test splits as {FSJump6}\xspace and {Tennis7}\xspace, and distill features on the training videos only.}
As a simple baseline method, we train a BiGRU that outputs per-frame predictions, which are merged to
produce predicted action intervals (see supplemental for details).
The BiGRU is trained on ground-truth temporal labels from five routines (figure skating)
and five points (tennis).
For more consistent results, we perform five-fold cross-validation and ensemble the per-frame predictions.
In \autoref{tab:detection}, we report average precision (AP) at various levels of temporal intersection over union (tIoU).
\heading{Results.}
{VPD}\xspace improves AP on both tasks.
The short duration of tennis swings means that noise in per-frame pose estimates has a large impact, and
VI-{VPD}\xspace improves AP at every tIoU threshold (up to 7.4 over {VIPE$^\star$}\xspace at $\text{tIoU}=0.5$).
\section{Results}
\label{sec:eval}
We evaluate the features produced by {VPD}\xspace on four
fine-grained sports datasets that exhibit a wide range of motions.
\heading{Figure skating}
consists of 371 singles men's and women's short program performances from the
Winter Olympics (2010-18) and World Championships (2017-19), totalling 17 video
hours.
In the classification task, {FSJump6}\xspace, there are six jump types defined by the
ISU~\cite{isu}.
All videos from 2018 (134 routines, 520 jumps) are held out for
testing. The remaining jumps are split 743/183 for training/validation.
\heading{Tennis}
consists of nine singles matches from two tournaments
(Wimbledon and US Open), with swings annotated at the frame of ball contact~\cite{vid2player}.
There are seven swing classes in {Tennis7}\xspace.
The training/validation sets contain 4,592/1,142
examples from five matches and the test set contains 2,509 from the remaining four matches.
Split by match video, this dataset is challenging due to the limited diversity in clothing and unique individuals (10 professional players).
\heading{Floor exercise.}
We use the women's floor exercise event ({FX35}\xspace) of the FineGym99
dataset~\cite{finegym}, containing 1,214 routines (34 hours). There are 35
classes and 7,634 actions.
\heading{Diving48}~\cite{diving48}
contains 16,997 annotated instances of 48 dive sequences, defined by FINA~\cite{fina}.
We evaluate on the corrected V2 labels released by the authors and retrain the
existing state-of-the-art method, GSM~\cite{gsm}, for comparison.
\vspace{0.5em}
All four datasets contain frames in which pose is not well
estimated or uncertain, though their distribution varies (see supplemental for details).
As noted earlier, pose estimates are typically worse in frames with fast motion, due to motion
blur and atypical, athletic poses such as flips or dives; see~\autoref{fig:bad_poses} for examples.
A common challenge across these datasets is that these fast-motion frames are
often necessary for discriminating the fine-grained actions of interest.
We assume the subject of the action is identified and tracked.
Otherwise, with multiple humans in the frame, fast-moving athletes in challenging poses are
often missed: they are detected at lower confidence than static audience members or judges, or not detected at all.
{\em For fair comparison, we boost the baselines by providing them the same
inputs as our method, which improves their results significantly.}
\input{tables/full_data_action}
\input{tables/ablations}
\subsection{Fine-Grained Action Recognition}
\label{sec:action_recognition}
Fine-grained action recognition tests {VPD}\xspace's ability to capture precise details about an athlete's pose and motion.
We consider both the few-shot setting, where only a limited number of action examples are
provided, and the traditional full supervision setting, where all of the action
examples in the training set are available.
Our {VPD}\xspace features are distilled over the training videos in the sports corpus, uncut and without labels.
To extract features on the test set, we use the fixed {VPD}\xspace student \notation{F}.
VI-{VPD}\xspace and 2D-{VPD}\xspace features maintain the same
dimension \notation{d} as their teachers: $\notation{d}=64$ for {VIPE$^\star$}\xspace and
$\notation{d}=26$ for normalized 2D joints.
For Diving48, {VIPE$^\star$}\xspace has $\notation{d}=128$ because we also extract pose embeddings on the vertically flipped poses and concatenate them.
This data augmentation is beneficial for
{VIPE$^\star$}\xspace due to the often inverted nature of diving poses, which are less well represented in the out-of-domain 3D pose datasets that {VIPE$^\star$}\xspace is trained on.
\heading{Action recognition model.}
To use {VPD}\xspace for action recognition, we first represent each action as a sequence of pose features.
We then classify actions using a bidirectional Gated Recurrent Unit network (BiGRU)~\cite{rnnchapter} trained atop the (fixed) features produced by the student \notation{F}.
Since our features are chiral and many actions can be performed with either left-right orientation, we embed both the regular and horizontally
flipped frames with the student.
See supplemental for implementation details.
Prior pose embedding work has explored using sequence alignment followed by nearest-neighbor retrieval~\cite{prvipe}.
We also tested a nearest-neighbor search (NNS) approach that
uses dynamic time warping to compute a matching cost between sequences of pose features.
For NNS, each test example is searched against all the training
examples, and the label of the best aligned match is predicted.
The BiGRU is superior in most settings, though NNS can be effective in few-shot situations, and we indicate when this is the case.
\heading{Baselines.}
We compare our distilled 2D-{VPD}\xspace and VI-{VPD}\xspace features against several baselines.
\begin{enumerate}
\item The {\em features from the teacher:}
{VIPE$^\star$}\xspace or the normalized 2D
joint positions, using the same downstream action recognition models and data
augmentations.
\item {\em Skeleton-based:} a MS-G3D ensemble~\cite{msg3d} and
ST-GCN~\cite{stgcn}. Both baselines receive the same tracked 2D poses used to supervise
{VPD}\xspace.
\item {\em End-to-end:} GSM~\cite{gsm}, TSN~\cite{tsn}, and TRNms~\cite{trn} (multiscale).
We test with both the cropped athletes and the full frame (w/o cropping) as inputs,
and we find that cropping significantly improves accuracy in both the
few-shot setting on all four datasets, and the full supervision
setting on all datasets except for Diving48.
When applicable, combined results with RGB and optical flow models are indicated as 2-stream.
\end{enumerate}
\subsubsection{Few-shot and limited supervision setting}
\label{sub:few_shot_action}
\noindent\textbf{Experiment protocol.}
Each model is presented $k$ examples of each action class
but may utilize unlabeled data or knowledge from other datasets as pre-training.
For example, skeleton-based methods rely on 2D pose detection; {VIPE$^\star$}\xspace leverages out-of-domain 3D pose data; and
{VPD}\xspace features are distilled on the uncut, unlabeled training videos.
This experimental setup mirrors real-world
situations where few labels are present but unlabeled and out-of-domain data are plentiful.
Our evaluation metric is top-1 accuracy on the full test set.
To control for variation in the training examples selected for each few-shot
experiment, we run each algorithm on five randomly sampled and fixed subsets of the data,
for each $k$, and report the mean accuracy.
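The construction of these fixed limited-supervision subsets can be sketched as follows (function and variable names are illustrative):
\begin{verbatim}
import random
from collections import defaultdict

def sample_k_shot(examples, labels, k, seed):
    """Select k training examples per class; the fixed subsets are shared by all methods."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for example, label in zip(examples, labels):
        by_class[label].append(example)
    subset = []
    for label, items in by_class.items():
        rng.shuffle(items)
        subset.extend((x, label) for x in items[:k])
    return subset

# Example: five randomly sampled, fixed splits for each k
# splits = {k: [sample_k_shot(train_x, train_y, k, seed) for seed in range(5)]
#           for k in (8, 16, 32, 64)}
\end{verbatim}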
\heading{Results.}
\autoref{fig:few_shot_action} compares 2D-{VPD}\xspace and VI-{VPD}\xspace features to their teachers (and other baselines).
On {FSJump6}\xspace and {Tennis7}\xspace, VI-{VPD}\xspace provides a slight improvement over its state-of-the-art teacher,~{VIPE$^\star$}\xspace, with accuracies within a few percent.
{FX35}\xspace shows a large improvement and VI-{VPD}\xspace increases accuracy by up to 10.5\% over {VIPE$^\star$}\xspace at $k\leq32$ and 5\% over the MS-G3D ensemble at $k=64$.
Likewise, on Diving48, where end-to-end GSM and 2-stream TSN are otherwise better than the non-{VPD}\xspace pose-based methods, VI-{VPD}\xspace improves accuracy by 6.8 to 22.8\%.
Our results on {FX35}\xspace and Diving48 suggest that VI-{VPD}\xspace helps to transfer the benefits of pose to datasets where it is most unreliable.
While view-invariant (VI) features generally perform better than their 2D analogues, the difference in accuracy between VI-{VPD}\xspace and 2D-{VPD}\xspace is more noticeable in sports with diverse camera angles (such as figure skating and floor exercise) and at small $k$, where
the action recognition model can only observe a few views during training.
\input{tables/retrieval}
\subsubsection{Traditional, full training set setting}
\label{sec:many_shot}
{VPD}\xspace features are competitive even in the high-data regime (\autoref{tab:full_action_dataset}).
On all four datasets, both VI-{VPD}\xspace and 2D-{VPD}\xspace significantly improve accuracy over their teachers.
VI-{VPD}\xspace also achieves state-of-the-art accuracy on the {FSJump6}\xspace (0.6\% over {VIPE$^\star$}\xspace), {Tennis7}\xspace (1.5\% over {VIPE$^\star$}\xspace), and {FX35}\xspace (1.0\% over GSM, with cropped inputs) datasets.
Diving48 is especially challenging for pose-based
methods, and VI-{VPD}\xspace performs worse than GSM, without cropping, by 1.6\%.
GSM, with cropping, is also worse by 1.5\%, possibly due to errors and limitations of our tracking.
VI-{VPD}\xspace does, however, perform significantly better than the top pose-based baseline (8.4\% over MS-G3D, ensemble).
Our results demonstrate that {VPD}\xspace's success is not limited to few-shot regimes. However, because many methods in~\autoref{tab:full_action_dataset} can produce high accuracies, at or above 90\%, when given ample data, we view improving label efficiency as a more important goal for {VPD}\xspace and future work.
\subsubsection{Ablations and additional experiments}
\label{sub:ablations}
We highlight two important ablations of {VPD}\xspace to understand the source of {VPD}\xspace's improvements: (1) analyzing parts of the
distillation method and (2) distilling with only the action segments of the
video. We also consider (3) an unlabeled setting where {VPD}\xspace is distilled over the entire video corpus.
Please refer to supplemental for additional experiments.
\heading{Analysis of the distillation method.}
\autoref{tab:ablate_table}(a) shows the increase in accuracy on action
recognition for ablated 2D-{VPD}\xspace and VI-{VPD}\xspace features when we distill without flow input $\Flow{\notation{t}}$ and without motion prediction\footnote{The student mimics the teacher's $\WeakPose{\notation{t}}$ output directly, without the auxiliary decoder \notation{D} and $\PoseMotion{\notation{t}}$ in the training loss.}.
The incremental improvements are typically most pronounced in the few-shot setting, on the {FX35}\xspace and Diving48 datasets, where {VPD}\xspace produces the largest benefits (see \autoref{sub:few_shot_action}).
With {VIPE$^\star$}\xspace as the teacher, distillation alone from RGB can have a large effect (2.7\% and 7.7\%, at full and 16-shot settings on {FX35}\xspace; 7.9\% and 19.9\% on Diving48).
Adding flow in addition to RGB, without motion, gives mixed results. Finally, adding motion prediction and the decoder \notation{D}
further improves results (1.1\% and 1.5\% on {FX35}\xspace, at full and 16-shot; 2.1\% and 3.9\% on Diving48).
The effect of distilling motion on {FSJump6}\xspace and {Tennis7}\xspace is mixed at the 16-shot setting, though the full setting shows improvement.
2D-{VPD}\xspace can be seen as an ablation of view-invariance ({VIPE$^\star$}\xspace) and shows a similar pattern when further ablated.
\heading{Training {VPD}\xspace on action parts of video only.}
Fine-grained action classes represent less than 7\%, 8\%, and 28\% of the video
in {FSJump6}\xspace, {FX35}\xspace, and {Tennis7}\xspace.
We evaluate whether distillation of VI-{VPD}\xspace over uncut video improves
generalization on action recognition, by distilling VI-{VPD}\xspace features with
{\em only the action parts} of the training videos.
The results are summarized in~\autoref{tab:ablate_table}(b) and
show that distilling with only the action video performs worse on our
datasets.
This is promising because (1) uncut performances are much easier to obtain than
performances with actions detected, and (2) in the low-supervision setting,
VI-{VPD}\xspace improves accuracy even if actions have not been detected in the
rest of the training corpus.
This also suggests that distilling over more video improves the quality of the features.
\heading{Distillation with the entire video corpus.}
An unlabeled corpus is often the starting point
when building real-world applications with videos in a new domain (e.g., \cite{vid2player}).
Because {VPD}\xspace is supervised only by machine-generated
pose estimates from unlabeled video, {VPD}\xspace features can be distilled over all of the video available, not just the training data.\footnote{This setting is similar to \cite{selfdomainshift,tttrain}, which propose self-supervision to align the training and testing distributions in situations with large domain shift.}
\autoref{tab:ablate_table}(c) shows results when VI-{VPD}\xspace is
distilled jointly with both the training and testing videos, {\em \mbox{uncut} and \mbox{without labels}}.
The improvement, if any, is minor on all four datasets ($\leq$1.5\%, attained on {Tennis7}\xspace at 16-shot) and demonstrates that VI-{VPD}\xspace, distilled over a large dataset, is able to generalize without seeing the test videos.
\section{Additional Dataset Details}
\label{sec:dataset_details}
This section provides additional details about the fine-grained sports video datasets used in the results section.
\heading{Figure skating} is a new dataset that contains the jumps in 371 singles short programs.
Because professional skaters often repeat the same routine in a competitive season, all performances from 2018 are held out for testing.
The six jump types that occur in the {FSJump6}\xspace dataset are:
Axel, flip, loop, Lutz, Salchow, and toe-loop (see~\autoref{tab:fs_dist}).
The labels are verified against the ISU's~\cite{isu} publicly accessible scoring data.
For the classification task,
the average label duration is 3.3 seconds and includes the poses from
before taking off and after landing.
\heading{Tennis} consists of Vid2Player's~\cite{vid2player} swing annotations in nine matches.
For action recognition, {Tennis7}\xspace has seven swing types: forehand topspin, backhand topspin, forehand slice, backhand slice, forehand volley, backhand volley, and overhead.
Note that the distribution of actions in tennis is unbalanced, with forehand topspin being the most common.
Serves are intentionally excluded from the action recognition task because they always occur at the start of points and do not need to be classified.
For swing detection, however, serves are included.
All action recognition models receive a one second interval, centered around the frame of ball contact for the swing.
\heading{Floor exercise.} We use the videos, labels, and official train/validation split from the floor exercise event of FineGym99~\cite{finegym}.
We focus on floor exercises ({FX35}\xspace) because the data is readily tracked and because the~\cite{finegym} authors report accuracies on this subset.
Because actions are often short, for each action, we extracted frames from 250 ms prior to the annotated start time to the end time, and we use these frames as the inputs to our methods and the baselines.
\heading{Diving48~\cite{diving48}} contains both individual and synchronized diving.
We use the standard train/validation split.
For synchronized diving, we track either diver as the subject, and tracks can flicker between divers due to missed detections.
Tracking is the most challenging in this dataset because of the low resolution, motion blur, and occlusion upon entering the water.
Also, because the clips are short, it is more difficult to initialize tracking heuristics that utilize periods of video before and after an action, where the athlete is more static and can be more easily detected and identified.
\subsection*{Subject Tracking}
To focus on the athletes, we introduce subject tracking to the figure skating, floor exercises~\cite{finegym}, and Diving48~\cite{diving48} datasets.
Our annotations are created with off-the-shelf person detection and tracking algorithms.
First, we run a Mask R-CNN detector with a ResNeXt-152-32x8d backbone~\cite{detectron2} on
every frame to detect instances of people.
We use heuristics such as ``the largest person in the frame'' (e.g., in figure skating, floor exercise, and diving)
and ``upside down pose'' (e.g., in floor exercise and diving) to select the athlete.
These selections are tracked across nearby frames with bounding box
intersection-over-union, SORT~\cite{sort}, and OpenCV~\cite{opencv} object tracking (CSRT~\cite{csrt}) when detections are missed.
This heuristic approach is similar to the one taken by the authors of Vid2Player~\cite{vid2player}.
Example images of tracked and cropped athletes are shown in~\autoref{fig:example_crops}.
We run pose estimation on the pixels contained in and around the tracked boxes.
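A simplified sketch of the selection-and-linking heuristic (largest person per frame, linked by bounding-box IoU) is given below; the full pipeline additionally uses SORT~\cite{sort} and CSRT~\cite{csrt} tracking:
\begin{verbatim}
def box_area(box):
    x1, y1, x2, y2 = box
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = box_area(a) + box_area(b) - inter
    return inter / union if union > 0 else 0.0

def track_subject(detections_per_frame, iou_thresh=0.5):
    """Greedy subject selection: prefer the detection overlapping the previous
    subject box; otherwise fall back to the largest person detection."""
    track, prev = [], None
    for boxes in detections_per_frame:
        if not boxes:
            track.append(prev)  # missed detection: hold the last known box
            continue
        if prev is None:
            prev = max(boxes, key=box_area)
        else:
            best = max(boxes, key=lambda b: iou(b, prev))
            prev = best if iou(best, prev) >= iou_thresh else max(boxes, key=box_area)
        track.append(prev)
    return track
\end{verbatim}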
\section{Additional Experiments}
\label{sec:extra_experiments}
This section includes results of additional ablations, analysis, and baselines omitted from the main text.
\subsection{Ablation: Data Selection Criterion}
\label{sub:data_selection}
{\em Mean estimated joint score} from the teacher pose estimator is used as the weak-pose selection criterion.
\autoref{fig:pose_score} shows the distribution of such scores in each of the four sports datasets.
Notice that the teacher produces significantly less confident pose estimates on the floor exercise ({FX35}\xspace) and Diving48 datasets, and also on the labeled action portions of all four datasets.
While the optimal selection threshold is ultimately dependent on the calibration and quality of the pose estimator used,
we evaluate the effect of tuning the weak-pose selection criterion on three of our datasets: {Tennis7}\xspace, {FX35}\xspace, and Diving48. \autoref{tab:sparse_pose} shows results with VI-{VPD}\xspace when various thresholds are applied.
There is benefit to ignoring the least confident pose estimates, though setting the threshold too high also diminishes performance, as insufficient data remains to supervise the student.
\subsection{Ablation: NNS vs. BiGRU for Recognition}
\autoref{fig:few_shot_action} notes that the BiGRU classifier for action recognition generally performed better than NNS, except in extremely data-scarce settings, where there are simultaneously few classes and examples per class.
\autoref{tab:fs_nns_dtw} presents results for both the BiGRU and NNS.
\subsection{Ablation: Activation Threshold for Detection}
In~\autoref{sec:impl:fewshotdetection}, we use a frame-level activation threshold of 0.2 when proposing action intervals for few-shot action detection.
\autoref{tab:detection_threshold} shows the impact on average precision (AP) of other thresholds, scored at 0.5 temporal intersection over union (tIoU).
The results are similar at nearby thresholds and results at 0.2 are reported for consistency.
\subsection{Ablation: Action Recognition Architectures}
The BiGRU described in~\autoref{sub:gru} was used in our experiments for consistency.
This section includes a number of additional simple, well-studied architectures that we also tested.
Results from these models are given in~\autoref{tab:other_action_arch} and are often similar; the BiGRU is not necessarily the best performing model in all situations.
As~\autoref{sec:action_recognition} shows, however, the BiGRU is competitive with recent, state-of-the-art methods when trained with {VIPE$^\star$}\xspace or our VI-{VPD}\xspace features.
\subsection{Baseline: GSM Without Cropping on Diving48}
In~\autoref{sub:few_shot_action}, on few-shot action recognition, we reported results from GSM~\cite{gsm} with cropping.
This is despite GSM, without cropping, having higher accuracy in the full supervision setting on Diving48~\cite{diving48} (see~\autoref{tab:full_action_dataset}).
\autoref{tab:gsm_few_shot} shows that GSM, with cropping, is the stronger baseline when limited supervision is available.
We speculate that cropping forces the GSM model to focus on the diver in few-shot settings. In the full supervision setting, the GSM model can learn this information by itself and is limited by noise in the crops and the loss of other information from the frame (e.g., the other diver in synchronized diving; the 3 metre springboard or 10 metre platform; and spatial information).
\subsection{Analysis: Visualizing Distilled 2D Pose}
\label{sub:visualize_distilled_2d}
Although the goal of this paper is to distill pose features for downstream tasks, this section provides preliminary qualitative results on how well distilled features mimic their teachers and reflect the explicit 2D pose.
Because the learned {VIPE$^\star$}\xspace and {VPD}\xspace features are not designed to be human interpretable, we use normalized 2D joint positions (described in~\autoref{sec:impl:vpd}) as the teacher instead, and we train an ablated student without the auxiliary decoder for motion.
\autoref{fig:pose_examples} compares the teacher's normalized 2D joint features to the student's distilled outputs.
Visible errors in the student's predictions show that our distillation method presented in this paper does not solve the explicit 2D pose estimation problem in challenging sports data.
However, solving this explicit task is not necessarily required to improve results in downstream tasks that depend on pose.
\section{Implementation: Video Pose Distillation}
\label{sec:impl:vpd}
This section provides additional implementation details for our method described in~\autoref{sec:method}.
\heading{Pose: $\WeakPose{\notation{t}}$ definition.}
{VPD}\xspace is not dependent on a specific 2D pose estimator or joint definition. We use an off-the-shelf HRNet~\cite{hrnet} to estimate pose in the detected region of the athlete, as is typical for top-down pose estimation. Heuristic tracking, described in~\autoref{sec:dataset_details}, can often provide bounding boxes in frames where person detection fails. We use only 13 of the 17 COCO~\cite{coco} keypoints (ignoring LEye, REye, LEar, and REar), and we apply the same joint normalization procedure as in~\cite{prvipe}.
\heading{Student inputs.}
The RGB crops \Frame{\notation{t}} are derived from the spatial bounding boxes of the athlete in frame \notation{t}.
We expand the bounding box to a square and then pad each side by 10\% or 25 pixels, whichever is greater.
Optical flow \Flow{\notation{t}} is computed using RAFT~\cite{raft} between \Frame{\notation{t}} and \Frame{\notation{t}-1}, where
we crop the same location as \Frame{\notation{t}} in the previous frame for \Frame{\notation{t}-1}. In datasets where the frame rate differs between videos, a target frame rate of 25 frames per second (fps) determines \Frame{\notation{t}-1}.
To obtain the final \Flow{\notation{t}}, we subtract the median of the RAFT output, clip to $\pm20$ pixels, and quantize into 8-bits.
During training and inference, $\Frame{\notation{t}}$ is scaled to a range of $\pm1$ and standardized with respect to the dataset RGB mean and standard deviation;
\Flow{\notation{t}} is also centered to $\pm0.5$.
In video frames where the athlete was explicitly detected by Mask R-CNN with a score above 0.8 (see~\autoref{sec:dataset_details}), we use the predicted mask to jitter the background with Gaussian noise ($\sigma=0.05$) as data augmentation.
For performance reasons, we pre-compute \WeakPose{\notation{t}}, \Frame{\notation{t}}, and \Flow{\notation{t}} in an offline manner for the entire corpus.
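A sketch of the flow post-processing described above:
\begin{verbatim}
import numpy as np

def preprocess_flow(raw_flow, clip=20.0):
    """Median-subtract, clip to +/- 20 px, and quantize a (H, W, 2) flow field to 8 bits."""
    flow = raw_flow - np.median(raw_flow, axis=(0, 1), keepdims=True)
    flow = np.clip(flow, -clip, clip)
    return np.round((flow + clip) / (2 * clip) * 255).astype(np.uint8)

def flow_to_network_input(quantized):
    """Dequantize to roughly [-0.5, 0.5], matching the centering used for training."""
    return quantized.astype(np.float32) / 255.0 - 0.5
\end{verbatim}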
\heading{Auxiliary decoder \notation{D}} is a standard fully connected network, whose sole purpose is to provide supervision for training the student \notation{F}. We use two hidden layers, each with dimension of 128. Note that the ablations without motion in~\autoref{tab:ablate_table} do not use \notation{D} and directly optimize $L_2$ loss between the student's output and the teacher's $\WeakPose{\notation{t}}$.
\heading{Student training.}
The student is initialized with random weights.
In each training epoch, we randomly sample 20,000 frames \notation{t} that meet the pose selection criteria outlined in~\autoref{sub:data_selection}.
We use an AdamW~\cite{adamw} optimizer with learning rate $5\times10^{-4}$ and a batch size of 100.
The student is trained for 1,000 epochs, though in practice the model often converges sooner and using a higher learning rate is also possible.
We use the loss on the held-out validation frames to select the best epoch.
On a single Titan V GPU, the student model trains in approximately 8 hours.
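A condensed Python sketch of one training epoch under these settings (module names follow the sketch in~\autoref{sec:method}; the loader is assumed to yield sampled, pre-processed frames):
\begin{verbatim}
import torch

def train_epoch(student, decoder, loader, optimizer):
    """One pass over a random sample of confident frames (batches of 100)."""
    student.train(); decoder.train()
    for frame, flow, pose, prev_pose in loader:
        target = torch.cat([pose, pose - prev_pose], dim=1)
        pred = decoder(student(frame, flow))
        loss = torch.nn.functional.mse_loss(pred, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Example optimizer, matching the settings above:
# optimizer = torch.optim.AdamW(
#     list(student.parameters()) + list(decoder.parameters()), lr=5e-4)
\end{verbatim}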
\section{Implementation: Action Recognition}
This section provides details about our fine-grained action recognition models and baselines.
\subsection{BiGRU Architecture}
\label{sub:gru}
This is a standard bidirectional-GRU~\cite{rnnchapter} architecture.
The model is trained on sequences of VI-{VPD}\xspace, 2D-{VPD}\xspace, {VIPE$^\star$}\xspace, and normalized 2D joint position features.
\heading{The inputs} are variable length sequences of per-frame pose features (for each action). The features are sampled to 25 fps in {FX35}\xspace and Diving48, where frame rate varies from 25 to 60 fps. {FSJump6}\xspace is a small dataset and normalizing the features also reduces overfitting.
\heading{Architecture.} We use a two-layer BiGRU as the backbone, with a hidden dimension $h=128$.
The output of the BiGRU is a sequence $H\in \mathbb{R}^{2h \times t}$ of hidden states from the final layer.
To obtain a fixed size encoding of this sequence, we max-pool across the time steps in $H$.
To output an action class, the pooled encoding is sent to a fully connected network consisting of BN-Dropout-FC-ReLU-BN-Dropout-FC, with the FC dimensions being $2h$ and the number of output classes.
\heading{Training.} We train the network with AdamW~\cite{adamw} and a batch size of 50 for 500 epochs (200 on Diving48 due to the larger dataset). Learning rate is initially set to $1\times10^{-3}$ and adjusted with a cosine schedule. Dropout rate is $0.5$ on the dense layers and $0.2$ on the input sequence. Data augmentation consists of the horizontally flipped input sequences.
On a single Titan V GPU, our model takes 7 minutes to train for {FSJump6}\xspace, 25 minutes for {Tennis7}\xspace, 50 minutes for {FX35}\xspace, and 100 minutes for Diving48 over the full datasets.
\heading{Inference.} At inference time, we feed the input sequence and its horizontal flip to the model; sum the predictions; and output the top predicted class.
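A minimal PyTorch sketch of this classifier (input dropout omitted; the head follows the BN-Dropout-FC-ReLU-BN-Dropout-FC structure described above, and names are illustrative):
\begin{verbatim}
import torch
import torch.nn as nn

class BiGRUClassifier(nn.Module):
    """Two-layer BiGRU over per-frame pose features, max-pooled over time."""
    def __init__(self, feature_dim, num_classes, hidden=128, dropout=0.5):
        super().__init__()
        self.gru = nn.GRU(feature_dim, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.head = nn.Sequential(
            nn.BatchNorm1d(2 * hidden), nn.Dropout(dropout),
            nn.Linear(2 * hidden, 2 * hidden), nn.ReLU(),
            nn.BatchNorm1d(2 * hidden), nn.Dropout(dropout),
            nn.Linear(2 * hidden, num_classes))

    def forward(self, features):               # features: (B, T, feature_dim)
        hidden_states, _ = self.gru(features)   # (B, T, 2 * hidden)
        pooled, _ = hidden_states.max(dim=1)    # max-pool across time steps
        return self.head(pooled)
\end{verbatim}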
\input{figures/weak_pose}
\subsection{Nearest-Neighbor Search}
\label{sub:nns_dtw}
Our nearest-neighbor search (NNS) uses sequence alignment cost with dynamic time warping (DTW).
\heading{The inputs} are the same as in~\autoref{sub:gru}, but with each feature vector normalized to unit length.
\heading{Inference.} We treat the training set as an index. Alignment cost between two sequences of features, normalized by sequence length, is calculated using DTW with pairwise $L_2$ distance and the symmetricP2 step pattern~\cite{sakoe1978}. Combinations of the regular and horizontally flipped pose sequences in the testing set and training set are considered, with the lowest cost match returned.
Because the computational complexity of inference grows linearly with training set size, this method is unsuited for larger datasets with more examples or classes.
DTW is also sensitive to factors such as the precision of the temporal boundaries and the duration of the actions.
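For illustration, a basic alignment-cost and nearest-neighbor sketch, using a plain symmetric DTW recursion instead of the symmetricP2 step pattern:
\begin{verbatim}
import numpy as np

def dtw_cost(a, b):
    """Alignment cost between two (T, d) sequences of unit-normalized pose
    features, normalized by the combined sequence length."""
    dist = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise L2
    T1, T2 = dist.shape
    acc = np.full((T1 + 1, T2 + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, T1 + 1):
        for j in range(1, T2 + 1):
            acc[i, j] = dist[i - 1, j - 1] + min(acc[i - 1, j],
                                                 acc[i, j - 1],
                                                 acc[i - 1, j - 1])
    return acc[T1, T2] / (T1 + T2)

def nearest_neighbor_label(query, index_sequences, index_labels):
    """Predict the label of the best-aligned training sequence."""
    costs = [dtw_cost(query, seq) for seq in index_sequences]
    return index_labels[int(np.argmin(costs))]
\end{verbatim}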
\subsection{Additional Baselines}
We evaluated ST-GCN~\cite{stgcn}, MS-G3D~\cite{msg3d}, multiscale TRN~\cite{trn}, and GSM~\cite{gsm} on our datasets using the reference implementations released by the authors.
For TSN~\cite{tsn}, we used the code from the authors of GSM~\cite{gsm}.
The GSM~\cite{gsm} codebase extends the TRN~\cite{trn} and TSN frameworks, and we backported ancillary improvements (e.g., learning rate schedule) to the TRN codebase for fairness.
\heading{Skeleton based.}
The inputs to ST-GCN and MS-G3D are the tracked 2D skeletons of only the identified athlete.
For MS-G3D, we trained both the bone and joint feature models and reported their ensemble accuracy. Ensemble accuracy exceeded the separate accuracies in all of our experiments.
\heading{End-to-end.}
We follow the best Diving48 configuration in the GSM~\cite{gsm} paper for the GSM, TSN, and TRNms baselines. This configuration uses 16 frames, compared to 3 to 7 in earlier work~\cite{trn}, and samples 2 clips at inference time.
As seen in benchmarks by the authors of~\cite{finegym}, additional frames are immensely beneficial for fine-grained action recognition tasks compared to coarse-grained tasks, where the class can often be guessed in a few frames from context~\cite{danceinmall,mimetics}.
The backbone for these baselines is an InceptionV3~\cite{inceptionv3}, initialized using pretrained weights.
When comparing to TSN and TRN with optical flow, we train using the same cropped flow images as {VPD}\xspace, described in~\autoref{sec:impl:vpd}.
Flow and RGB model predictions are ensembled to obtain the 2-stream result.
Recent architectures that model temporal information in RGB, such as GSM, often perform as well as or better than earlier flow based work.
\section{Implementation: Action Retrieval}
The search algorithm for action retrieval is identical to nearest neighbor search described in~\autoref{sub:nns_dtw}, for action recognition, except that the pose sequence alignment scores are retained for ranking.
\heading{Query set.}
For {FSJump6}\xspace, {Tennis7}\xspace, and {FX35}\xspace we evaluate with the entire corpus as queries. For the much larger Diving48 dataset, we use the 1,970 test videos as queries.
\section{Implementation: Action Detection}
\label{sec:impl:fewshotdetection}
We evaluated pose features for few-shot figure skating jump and tennis swing detection. Our method should be interpreted as a baseline approach to evaluate {VPD}\xspace features, given the lack of prior literature on temporally fine-grained, few-shot video action detection, using pose features.
More sophisticated architectures for accomplishing tasks such as generating action proposals and refining boundaries are beyond the scope of this paper.
\heading{The inputs} are the uncut, per-frame pose feature sequences.
For figure skating, the sequences are entire, 160 second long, short programs.
ISU~\cite{isu} scoring rules require that each performance contains two individual jumps and a jump combination (two jumps).
For tennis, each point yields two pose sequences, one for each player.
The points sampled for training have at least five swings each per player.
For the ResNet-3D~\cite{resnet3d} baseline, we extracted features for each frame using a Kinetics-400~\cite{kinetics} pretrained model on the $128\times128$ subject crops, with a window of eight frames. A limitation of this baseline is that actions (e.g., tennis swings) can be shorter than the temporal window.
\heading{Architecture.} We use a two-layer BiGRU as the backbone with a hidden dimension $h=128$. The hidden states at each time step from the final GRU layer are sent to a fully connected network consisting of BN-Dropout-FC-ReLU-BN-Dropout-FC, with the FC dimensions being $2h$ and 2 (a binary label for whether the frame is part of an action).
\heading{Training.} The BiGRU is trained on randomly sampled sequences of 250 frames from the training set. We use a batch size of 100, $10^4$ steps with the AdamW~\cite{adamw} optimizer, and a learning rate of $1\times10^{-3}$. We apply dropout rates of $0.5$ on the dense layers and $0.2$ on the input sequence.
Because only five examples are provided in this few-shot setting, we use five-fold cross validation to train an ensemble.
The reported results are an average of separate runs on five randomly sampled, fixed few-shot dataset splits.
\heading{Inference.} We apply the trained BiGRU ensemble to the uncut test videos to obtain averaged frame-level activations.
Consecutive activations above 0.2 are selected as proposals; the low threshold is due to the large class imbalance because actions represent only a small fraction of total time.
A minimum proposal length of three frames is required.
The mean action length in the training data was also used to expand or trim proposals that are too short (less than $0.67\times$) or too long (greater than $1.33\times$).
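The proposal post-processing described above can be sketched as follows (thresholds as stated; names are illustrative):
\begin{verbatim}
def activations_to_proposals(scores, mean_len, threshold=0.2, min_len=3):
    """Turn per-frame activations into action intervals.

    Consecutive frames above `threshold` form a proposal; proposals shorter
    than `min_len` frames are dropped, and proposals far from the mean
    training action length are expanded or trimmed around their center.
    """
    proposals, start = [], None
    for t, s in enumerate(list(scores) + [0.0]):  # sentinel flushes the last run
        if s > threshold and start is None:
            start = t
        elif s <= threshold and start is not None:
            proposals.append((start, t))          # [start, end) in frames
            start = None
    adjusted = []
    for s, e in proposals:
        length = e - s
        if length < min_len:
            continue
        if length < 0.67 * mean_len or length > 1.33 * mean_len:
            center = (s + e) / 2.0
            s = int(round(center - mean_len / 2.0))
            e = int(round(center + mean_len / 2.0))
        adjusted.append((max(0, s), e))
    return adjusted
\end{verbatim}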
\section{{VIPE$^\star$}\xspace Details}
\label{sec:jamesvipe}
We provide details of~{VIPE$^\star$}\xspace, which is used as the teacher for our view-invariant VI-{VPD}\xspace student.
{VIPE$^\star$}\xspace is used because the evaluation code and documentation for Pr-VIPE~\cite{prvipe}
were not released at the time of development.
The experiments in this section are intended to demonstrate that
{VIPE$^\star$}\xspace is a suitable substitute, based
on~\cite{prvipe}'s evaluation on coarse-grained action recognition.
\heading{Overview.}
View-invariant pose embedding (VIPE) methods embed 2D joints such that different camera views of the same pose
in 3D are similar in the embedding space. {VIPE$^\star$}\xspace is trained via 3D lifting
to canonicalized features (w.r.t. rotation and body shape).
We designed {VIPE$^\star$}\xspace to train on multiple (publicly available) datasets with differing 3D
joint semantics; we use Human3.6M~\cite{human36m} as well as synthetic pose data
from 3DPeople~\cite{3dpeople}, AMASS~\cite{amass}, and NBA2K~\cite{nba2k}.
\heading{Inputs.}
{VIPE$^\star$}\xspace learns view-invariant embeddings by regressing 3D joint
features from 2D joint pose.
The 2D joint inputs are the 13 COCO~\cite{coco}
keypoints (excluding eyes and ears) normalized as in~\cite{prvipe}.
To obtain canonicalized 3D features, first, we rotate the 3D pose around the vertical-axis,
aligning the torso-normal vector to the depth-axis.
Then, we normalize each
joint as two unit length offsets from its parent and from the hip (centered to 0).
We also concatenate the cosine bone angle at each 3D joint.
These transformations standardize 3D poses with respect to body appearance and camera view.
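A simplified sketch of the rotation and centering step (assuming a y-up coordinate system with depth along $z$; the per-joint offset and bone-angle features are omitted, and joint indices are passed in explicitly):
\begin{verbatim}
import numpy as np

def canonicalize_3d_pose(joints3d, l_hip, r_hip, l_shoulder, r_shoulder):
    """Rotate a (J, 3) pose about the vertical (y) axis so the torso-normal
    points along the depth (z) axis, then center on the hip midpoint."""
    mid_hip = (joints3d[l_hip] + joints3d[r_hip]) / 2.0
    mid_shoulder = (joints3d[l_shoulder] + joints3d[r_shoulder]) / 2.0
    across = joints3d[l_shoulder] - joints3d[r_shoulder]
    up = mid_shoulder - mid_hip
    normal = np.cross(across, up)             # torso-normal vector
    theta = np.arctan2(normal[0], normal[2])  # angle to the +z (depth) axis
    c, s = np.cos(-theta), np.sin(-theta)
    rot_y = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return (joints3d - mid_hip) @ rot_y.T
\end{verbatim}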
\heading{Model.} {VIPE$^\star$}\xspace uses a similar neural network backbone to~\cite{3dliftingbaseline,prvipe} and is trained with two losses:
\begin{itemize}
\item {\em 3D feature reconstruction loss.}
We use a fully-connected decoder that takes embeddings as input.
This decoder is discarded after training. To support multi-task training
with 3D datasets with different ground-truth joint semantics, we specialize
the output layer weights for each dataset.
\item {\em Contrastive embedding loss.}
We minimize the pairwise $L_2$ distance between embeddings of different 2D views of the same 3D
pose (positive pairs).
We also negatively sample pairs of 2D poses, corresponding to different
3D poses in each action sequence, and maximize their embedding distance.
Two 3D poses are considered to be different if one of their
joint-bone angles differs by 45$^\circ$ or more.
\end{itemize}
\heading{Substitute for Pr-VIPE.}
We compare {VIPE$^\star$}\xspace's performance to the coarse-grained action recognition results reported by~\cite{prvipe,cvmim}
on the Penn Action~\cite{pennaction} dataset. Our results suggest parity with Pr-VIPE when trained with Human3.6M only
and a small improvement from extra synthetic data.
{VIPE$^\star$}\xspace has 98.2\% top-1 accuracy (compared to 98.4\%, the best result for Pr-VIPE~\cite{cvmim})
when trained on the same subjects of the
Human3.6M dataset and using nearest-neighbor search as the action recognition method (see~\autoref{sub:nns_dtw}).
{VIPE$^\star$}\xspace obtains 98.6\% accuracy when trained with extra synthetic 3D data.
The saturated accuracies of {VIPE$^\star$}\xspace, Pr-VIPE~\cite{prvipe}, and other prior
work~\cite{cvmim} on the Penn Action dataset suggest that more challenging datasets,
such as fine-grained sports, are needed to evaluate new techniques.
For fine-grained action recognition in sports, additional synthetic 3D data improves
{VIPE$^\star$}\xspace (\autoref{tab:more_data_for_vipe}).
This is especially notable on {FX35}\xspace and Diving48, which contain a variety
of poses that are not well represented by Human3.6M.
We use {VIPE$^\star$}\xspace, improved with the synthetic 3D data, as the teacher for all of
our VI-{VPD}\xspace experiments.
\input{tables/vipe_synthetic_data}
\input{figures/explicit_2d_pose}
\input{figures/example_crops}
BkiUaYrxK7Ehm9ic96Yc | \section{Introduction}
While the research on adequate statistical models to predict the outcome of (men's) international soccer tournaments,
such as European championships (EUROs) or FIFA World Cups, has substantially advanced in recent years,
to the best of our knowledge there exists no significant scientific literature on modeling women's soccer.
In this work, we propose a combined ranking-based machine learning approach
that is then used to forecast the FIFA Women's World Cup 2019.
A model class frequently used to model soccer results is the class of Poisson regression models. These
directly model the number of goals scored by both competing teams in the single soccer matches.
Let $X_{ij}$ and $Y_{ij}$ denote the goals of the first and second team, respectively, in a match between teams $i$ and $j$, where $i,j\in\{1,\ldots,n\}$ and let $n$ denote the total number of teams in the regarded set of matches.
One assumes $X_{ij}\sim Po(\lambda_{ij})$ and $Y_{ij}\sim Po(\mu_{ij})$, with intensity parameters
$\lambda_{ij}$ and $\mu_{ij}$ (reflecting the expected numbers of goals). For these intensity parameters several modeling strategies exist, which incorporate playing abilities or covariates of the competing teams in different ways.
In the simplest case, the Poisson distributions are treated as (conditionally) independent,
conditional on the teams' abilities or covariates. For example, \citet{Dyte:2000} applied this
model to FIFA World Cup data, with Poisson intensities that depend on the FIFA ranks of
both competing teams. \citet{GroAbe:2013} and \citet{GroSchTut:2015}
considered a large set of potential predictors for EURO and World Cup data,
respectively, and used $L_1$-penalized approaches
to detect sparse sets of relevant covariates. Based on these, they calculated predictions for the EURO
2012 and FIFA World Cup 2014 tournaments. Their findings show that,
when many covariates are regarded and/or the predictive power of the single
predictors is not clear in advance, regularization can be beneficial.
These approaches can be generalized in different ways to allow for dependent scores.
For example, \citet{DixCol:97} identified a (slightly negative) correlation between the scores
and introduced an additional dependence parameter. \citet{KarNtz:2003} and \citet{GrollEtAl2018}
model the scores of both teams by a bivariate Poisson distribution, which is able to account for (positive)
dependencies between the scores. If also negative dependencies should be accounted for,
copula-based models can be used (see, e.g., \citealp{McHaSca:2007}, \citealp{McHaSca:2011} or \citealp{boshnakov2017}).
Closely related to the covariate-based Poisson regression models are Poisson-based
ranking methods for soccer teams. On basis of a (typically large) set of matches, ability
parameters reflecting the current strength of the teams can be estimated by means of maximum likelihood.
An overview of the most frequently used Poisson-based ranking methods can be found in \citet{LeyWieEet2018}.
An alternative ranking approach that is solely based on bookmakers' odds was proposed by
\citet{Leit:2010a}. They calculate winning probabilities for each team by aggregating winning
odds from several online bookmakers. Based on these
winning probabilities, by inverse tournament simulation
team-specific {\it bookmaker consensus abilities} can be computed by paired comparison
models, automatically stripping the effects of the tournament
draw. Next, pairwise probabilities for each
possible game at the corresponding tournament can be predicted
and, finally, the whole tournament can be simulated.
A fundamentally different modeling approach is based on a
random forest -- a popular ensemble learning method for classification and regression \citep{Breiman:2001},
which originates from the machine learning and data mining community.
Firstly, a multitude of so-called decision trees (\citealp{qui:1986}; \citealp{BreiFrieOls:84})
is constructed on different training data sets, which are resampled from the original dataset. The predictions from
the individual trees are then aggregated, either by taking the mode of the predicted classes
(in classification) or by averaging the predicted values (in regression). Random forests reduce
the tendency of overfitting and the variance compared to regular decision trees, and are a common
powerful tool for prediction. To investigate the predictive potential of random forests, \citet{SchauGroll2018}
compared different types of random forests on data containing all matches of the FIFA
World Cups 2002--2014 with conventional regression methods for count data, such as the
Poisson models from above. The random forests provided very satisfactory results
and generally outperformed the regression approaches.
\citet{GroEtAl:WM2018b} showed on the same FIFA
World Cup data that the predictive performance of random forests could
be further improved by combining it with the Poisson ranking methods, leading to what they
call a \emph{hybrid random forest model}.
In the present work, we carry this strategy forward and combine the
random forest with both the Poisson ranking methods as well as the bookmaker consensus abilities from \citet{Leit:2010a}.
So in a sense, this results in a {\it doubly-hybrid} or {\it combined ranking-based} random forest.
The model is fitted to all matches of the FIFA Women's World Cups 2011 and 2015 and based on the resulting estimates,
the FIFA Women's World Cup 2019 is then simulated
100,000 times to determine winning probabilities for all 24 participating teams.
The rest of the manuscript is structured as follows. In Section~\ref{sec:data} we describe
the three underlying data sets. The first covers all matches of the two preceding FIFA
Women's World Cups 2011 and 2015 including covariate information, the second consists
of the match results of all international matches played by all national teams during certain time periods
and the third contains the winning odds from several bookmakers for the single World Cups regarded in this analysis.
Next, in Section~\ref{sec:methods} we briefly explain the basic idea of random forests and the two different
ranking methods and, finally, how they can be combined to a hybrid random forest model.
In Section~\ref{sec:prediction}, we fit the hybrid random forest model to
the data of the two World Cups 2011 and 2015.
Based on the resulting estimates, the FIFA Women's World Cup 2019 is simulated
repeatedly and winning probabilities for all teams are presented.
Finally, we conclude in Section~\ref{sec:conclusion}.
\section{Data}\label{sec:data}
In this section, we briefly describe three fundamentally different types of data
that can be used to model and predict international soccer tournaments such
as the FIFA World Cup. The first type of data covers variables that characterize
the participating teams of the single tournaments and connects them to the results
of the matches that were played during these tournaments.
The second type of data is simply based on the match results of all international
matches played by all national teams during certain time periods. These data do not only cover
the matches from the specific tournaments but also all qualifiers and friendly matches.
The third type of data contains the winning odds from different bookmakers separately for
single World Cups.
\subsection{Covariate data}\label{sec:covariate}
The first type of data we describe covers all
matches of the two FIFA Women's World Cups 2011 and 2015 together with several
potential influence variables. Basically, we use a similar (but smaller\footnote{It turned out that,
compared to men, for women's national teams covariates were generally more difficult to get or were simply not
recorded at all, as for women's soccer data archives are less detailed and sometimes incomplete. For example, while
for men's national coaches, their age, the duration of their tenure and their nationality
could be obtained manually from the website of the German soccer magazine {\it kicker},
\url{http://kicker.de}, from \url{http://transfermarkt.de} and from \url{https://en.wikipedia.org}, this was not possible
for women.}) set of covariates as introduced in \citet{GroSchTut:2015}.
For each participating team, the covariates are observed either for the year of the respective World Cup
(e.g.,\ GDP per capita) or shortly before the start of the World Cup (e.g.,\ average age), and,
therefore, vary from one World Cup to another.
Several of the variables contain information about the recent performance
and sportive success of national teams, as the current form of a national team is supposed to
have an influence on the team's success in the upcoming tournament. One additional covariate in this regard,
which we will introduce later, is reflecting the national teams' current playing abilities and
is related to the second type of data introduced in Section~\ref{sec:historic}.
The estimates of these ability parameters are based on a separate Poisson ranking model,
see Section~\ref{subsec:ranking} for details, and are denoted by {\it PoisAbil}.
Another additional covariate, which is also introduced later, reflects
the bookmaker consensus abilities (denoted by {\it OddsAbil}) from \citet{Leit:2010a} and is related to the third type of data
introduced in Section~\ref{sec:bookmaker:data}. Details on this ranking method can be found in
Section~\ref{subsec:consensus}.
Beside these sportive variables, also certain economic factors as well as variables describing the structure of a team's squad are collected. We shall now describe in more detail these variables.
\begin{description}
\item \textbf{Economic Factors:}
\begin{description}
\item[\it GDP per capita.] To account for the general
increase of the gross domestic product (GDP) during 2011--2015, a ratio of the GDP per capita of the respective country and the worldwide average GDP per capita is used (source: mostly \url{http://www.imf.org/external/pubs/ft/weo/2018/01/weodata/index.aspx}, but for England, Scotland, South and North Korea some additional research was necessary).
\item[\it Population.] The population size is
used in relation to the respective global population to account for the general world population growth (source: \url{http://data.worldbank.org/indicator/SP.POP.TOTL}).
\end{description}\bigskip
\item \textbf{Sportive factors:}
\begin{description}
\item[\it FIFA rank.] The FIFA Coca-Cola Women's World Ranking is based on an Elo-type rating system, which was
originally developed by Dr.\ Arpad Elo to rate the playing abilities of chess players. It aims at reflecting the current strength of a soccer team relative to its competitors (source: \url{https://de.fifa.com/fifa-world-ranking/ranking-table/women/}).
\end{description}\bigskip
\item \textbf{Home advantage:}
\begin{description}
\item[\it Host.] A dummy variable
indicating if a national team is a hosting country.
\item[\it Continent.] A dummy variable indicating if a national team is from the same continent as the host of the World Cup (including the host itself).
\item[\it Confederation.] This categorical variable comprises the teams' confederation with six possible values: Africa (CAF);
Asia (AFC); Europe (UEFA); North, Central America and Caribbean (CONCACAF); Oceania (OFC); South America (CONMEBOL).
\end{description}
\bigskip
\item \textbf{Factors describing the team's structure:}
The following variables describe the structure of the teams.
They were observed with the 23-player-squad
nominated for the respective World Cup and were obtained manually
both from the website of the German soccer magazine {\it kicker}, \url{http://kicker.de}, and
from \url{http://transfermarkt.de}\footnote{Note that for the World Cup 2011 the size of the
national teams' squads was restricted to 21 players. Hence, all of the following factors that
add up players with a certain characteristic (namely all factors except for the {\it average age}) have been divided by the respective
squad size (i.e.\ 21 or 23) to make them comparable across tournaments.}.\medskip
\begin{description}
\item[\it (Second) maximum number of teammates.] For each squad, both the maximum
and second maximum number of teammates playing together in the same national club are counted.
\item[\it Average age.] The average age of each squad is collected.
\item[\it Number of Champions League players.]
As a measurement of the success of the players on club level, the number of
players in the semi finals (taking place only few weeks before the
respective World Cup) of the UEFA Champions
League (CL) is counted.
\item[\it Number of Major League Soccer players.]
As the US Major League Soccer (MLS) is supposedly the best
national soccer league on the globe in women's soccer,
for each national team the number of players in the MLS is counted.
\item[\it Number of players abroad/Legionnaires.] For each squad, the number of players
playing in clubs abroad (in the season preceding the respective World Cup) is counted.
\end{description}
\end{description}
\noindent In addition, we include a dummy variable indicating whether a certain match is a group- or a knockout match.
The motivation for this is that soccer teams might change their playing style and be more cautious in knockout matches.
In total, together with the two ability variables from the two ranking methods this adds up to 15 variables which were collected separately for each World Cup and each participating team. As an illustration, Table~\ref{data1} shows the results (\ref{tab:results}) and (parts of) the covariates (\ref{tab:covar}) of the respective teams, exemplarily for the first four matches of the FIFA Women's World Cup 2011. We use this data excerpt to illustrate how the final data set is constructed.
\begin{table}[h]
\small
\caption{\label{data1} Exemplary table showing the results of four matches and parts of the covariates of the involved teams.}
\centering
\subfloat[Table of results \label{tab:results}]{
\begin{tabular}{lcr}
\hline
& & \\
\hline
NGA \includegraphics[width=0.4cm]{NGA.png} & 0:1 & \includegraphics[width=0.4cm]{FRA.png} \;FRA\\
GER \includegraphics[width=0.4cm]{GER.png} & 2:1 & \includegraphics[width=0.4cm]{CAN.png} \;CAN\\
CAN\includegraphics[width=0.4cm]{CAN.png} & 0:4 & \includegraphics[width=0.4cm]{FRA.png} \;FRA\\
GER \includegraphics[width=0.4cm]{GER.png} & 1:0 & \includegraphics[width=0.4cm]{NGA.png} \;NGA\\
\vdots & \vdots & \vdots \\
\hline
\end{tabular}}
\hspace*{0.8cm}
\subfloat[Table of covariates \label{tab:covar}]{
\begin{tabular}{llrrrrr}
\hline
World Cup & Team & PoisAbil & OddsAbil & Age & \ldots \\
\hline
2011 & France & $1.69$ & $0.02$ & $25.86$ & \ldots \\
2011 & Germany & $2.35$ & $1.25$ & $25.95$ & \ldots\\
2011 & Nigeria & $1.39$ & $-0.47$ & $22.24$ & \ldots \\
2011 & Canada & $1.82$ & $-0.17$ & $25.52$ & \ldots\\
\vdots & \vdots & \vdots & \vdots & \vdots & $\ddots$ \\
\hline
\end{tabular}
}
\end{table}
For the modeling techniques that we shall introduce in the following sections, all of the metric
covariates are incorporated in the form of differences between the two competing teams. For example, the final
variable {\it PoisAbil} will be the difference between the Poisson ranking abilities of both teams. The categorical variables {\it Host},
{\it Continent} and {\it Confederation}, however, are included as separate variables for both competing teams.
For the variable {\it Confederation}, for example, this results in two columns of the corresponding design matrix denoted by
{\it Confed} and {\it Confed.Oppo}, where {\it Confed} is referring to the confederation of the first-named team
and {\it Confed.Oppo} to the one of its opponent.
As we use the number of goals of each team directly as the response variable, each match
corresponds to two different observations, one per team. For the covariates, we consider
differences which are computed from the perspective of the first-named team.
The dummy variable {\it groupstage} corresponds to a single column in the
design matrix and is either zero or one for both rows corresponding to the same match. For illustration,
the resulting final data structure for the exemplary matches from Table~\ref{data1} is displayed in Table~\ref{data2}.
\begin{table}[!h]
\small
\centering
\caption{Exemplary table illustrating the data structure.}\label{data2}
\begin{tabular}{rllrrrrr}
\hline
Goals & Team & Opponent & Groupstage & PoisAbil & OddsAbil & Age & ... \\
\hline
0 & Nigeria & France & 1 & $-0.49$ & $-0.30$ & $-3.62$ & ... \\
1 & France & Nigeria & 1 & $0.49$ & $0.30$ & $3.62$ & ... \\
2 & Germany & Canada & 1 & $1.42$ & $0.53$ & $0.43$ & ... \\
1 & Canada & Germany & 1 & $-1.42$ & $-0.53$ & $-0.43$ & ... \\
0 & Canada & France & 1 & $-0.18$ & $0.13$ & $-0.33$ & ... \\
4 & France & Canada & 1 & $0.18$ & $-0.13$ & $0.33$ & ... \\
1 & Germany & Nigeria & 1 & $1.73$ & $0.96$ & $3.71$ & ... \\
1 & Nigeria & Germany & 1 & $-1.73$ & $-0.96$ & $-3.71$ & ... \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & $\ddots$ \\
\hline
\end{tabular}
\end{table}
\subsection{Historic match results}\label{sec:historic}
The data used for estimating the abilities of the teams consist of the results of every international game played in the last 8 years preceding the considered World Cup. Besides the number of goals, we also need information on the venue of the game (in order to correct for the home effect) and on the point in time when the match was played.
The reason is that, in the ranking method described in Section~\ref{subsec:ranking},
each match is assigned a weight depending on the time elapsed since the game took place. For example, Table~\ref{tab:historicdata}
shows an excerpt of the historic match data used to obtain ability estimates for the teams at the FIFA Women's World Cup 2011.
\begin{table}[ht]
\caption{Historical match result data used for estimating the abilities,
exemplarily for the FIFA Women's World Cup 2011}
\label{tab:historicdata}
\centering
\begin{tabular}{lllcrr}
\hline
Date & Home team & Away team & Score & Country & Neutral \\
\hline
2011-05-19 & Iceland & Bulgaria & 6:0 & Iceland & no \\
2011-03-09 & United States & Iceland & 4:2 & Portugal & yes \\
2011-03-09 & Portugal & Finland & 2:1 & Portugal & no \\
2011-03-09 & Wales & China PR & 1:2 & Portugal & yes \\
$\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ \\
\hline
\end{tabular}
\end{table}
\subsection{Bookmaker data}\label{sec:bookmaker:data}
The basis for the bookmaker consensus ranking model from \citet{Leit:2010a}, which is explained in more
detail in Section~\ref{subsec:consensus}, are the winning odds of the bookmakers\footnote{The possibility of betting on the overall cup winner before the start of the tournament is quite novel. While for men, the German state
betting agency ODDSET offered the bet for the first time at the FIFA
World Cup 2002, for women we could not get any odds before the World Cup 2011.}. These are typically available
already a few weeks before the tournament start. The popularity of this specific bet has substantially increased over time:
while for the World Cup 2011 we could only obtain the odds from the German state
betting agency ODDSET (upon request), for the World Cup 2015 we found already
corresponding odds of three different bookmakers publicly available, see Table~\ref{tab:consensus}.
For the upcoming World Cup 2019 tournament we easily obtained the winning odds from 18 different bookmakers.
\begin{table}[H]
\centering
\caption{Winning odds of all 24 participating teams from several bookmakers,
exemplarily for the FIFA Women's World Cup~2015}\label{tab:consensus}
\begin{tabular}{rrrr}
\hline
& MyTopSportsbooks & SportsInsights & BovadaSportsbook \\
\hline
United States & 4.00 & 4.00 & 3.25 \\
Germany & 4.50 & 4.35 & 4.50 \\
France & 5.50 & 10.00 & 9.00 \\
Japan & 9.00 & 9.00 & 8.00 \\
Brazil & 9.00 & 8.50 & 8.00 \\
Canada & 15.00 & 15.00 & 11.00 \\
Sweden & 17.00 & 15.00 & 11.00 \\
England & 21.00 & 21.00 & 21.00 \\
Norway & 34.00 & 28.00 & 26.00 \\
Australia & 51.00 & 51.00 & 41.00 \\
China PR & 61.00 & 66.00 & 51.00 \\
Spain & 67.00 & 66.00 & 41.00 \\
Netherlands & 81.00 & 76.00 & 51.00 \\
South Korea & 81.00 & 76.00 & 67.00 \\
Switzerland & 101.00 & 86.00 & 67.00 \\
New Zealand & 151.00 & 151.00 & 101.00 \\
Nigeria & 201.00 & 201.00 & 151.00 \\
Colombia & 251.00 & 251.00 & 151.00 \\
Mexico & 251.00 & 251.00 & 126.00 \\
Ecuador & 501.00 & 501.00 & 251.00 \\
Cameroon & 501.00 & 501.00 & 301.00 \\
Ivory Coast & 501.00 & 501.00 & 251.00 \\
Costa Rica & 1001.00 & 1001.00 & 251.00 \\
Thailand & 5001.00 & 5001.00 & 401.00 \\
\hline
\end{tabular}
\end{table}
\section{A combined ranking-based random forest}\label{sec:methods}
In this section, we propose to use a hybrid random forest approach that combines the
information from all three types of data bases introduced above. The proposed method combines
a random forest for the covariate data with both the abilities estimated on the historic match results
as used by the Poisson ranking methods and the abilities obtained from the bookmaker consensus approach.
Before introducing the proposed hybrid method, we first separately present the
basic ideas of the three model components.
\subsection{Random forests}\label{subsec:forest}
Random forests, originally proposed by \citet{Breiman:2001}, are an
aggregation of a (large) number of classification or regression trees (CARTs).
CARTs \citep{BreiFrieOls:84} repeatedly partition the predictor space mostly using
binary splits. The goal of the partitioning process is to find partitions such that the
respective response values are very homogeneous within a partition but very
heterogeneous between partitions. CARTs can be used both for metric response
(regression trees) and for nominal/ordinal responses (classification trees).
For prediction, all response values within a partition are aggregated either
by averaging (in regression trees) or simply by counting and using majority vote (in classification trees).
In this work, we use trees (and, accordingly, random forests) for the prediction of
the number of goals a team scores in a match of a FIFA World Cup.
As already mentioned in the Introduction, random forests are the aggregation of a large number
$B$ (e.g., $B=5000$) of trees, grown on $B$ bootstrap samples from the original data set.
Combining many trees has the advantage
that the resulting predictions inherit the feature of unbiasedness from the single trees
while reducing the variance of the predictions. For a short introduction to random forests
and how they can specifically
be used for soccer data, see \citet{GroEtAl:WM2018b}.
In \texttt{R} \citep{RDev:2018}, two slightly different variants of regression forests are
available: the classical random forest algorithm proposed by \citet{Breiman:2001}
from the \texttt{R}-package \texttt{ranger} \citep{ranger}, and a modification implemented
in the function \texttt{cforest} from the \texttt{party} package\footnote{Here, the single trees are
constructed following the principle of conditional inference trees as proposed in
\citet{Hotetal:2006}. The main advantage of these conditional inference trees is
that they avoid selection bias if covariates have different scales,
e.g., numerical vs. categorical with many categories (see, for example,
\citealp{StrEtAl07}, and \citealp{Strobl-etal:2008}, for details). Conditional
forests share the feature of conditional inference trees of avoiding biased variable selection.}.
In \citet{SchauGroll2018} and \citet{GroEtAl:WM2018b}, the latter package
turned out to be superior and will be used in the following.
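To illustrate the general idea (not the exact fitting procedure, which uses the \texttt{cforest} function of the \texttt{party} package in \texttt{R}), the following sketch regresses the number of goals on the covariate differences with an off-the-shelf random forest; the file name and column names are hypothetical and merely mimic the layout of Table~\ref{data2}.
\begin{verbatim}
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training file with one row per team and match,
# in the layout of the covariate-difference table above.
train = pd.read_csv("worldcup_2011_2015.csv")
features = ["Groupstage", "PoisAbil", "OddsAbil", "Age"]  # plus remaining covariates

# Stand-in for party::cforest: a plain random forest with B = 5000 trees.
forest = RandomForestRegressor(n_estimators=5000, random_state=1)
forest.fit(train[features], train["Goals"])

# Expected number of goals for new matches arranged in the same format.
expected_goals = forest.predict(train[features].head(2))
\end{verbatim}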
\subsection{Poisson ranking methods}\label{subsec:ranking}
In this section we describe how (based on historic match data, see Section~\ref{sec:historic}) Poisson models can be used to obtain rankings that reflect a team's current ability. We will restrict our attention to the best-performing model according to the comparison achieved in \cite{LeyWieEet2018}, namely the bivariate Poisson model. The main idea consists in assigning a strength parameter to every team and in estimating those parameters over a period of $M$ matches via weighted maximum likelihood based on time depreciation.
The time decay function is defined as follows: a match played $x_m$ days back gets a weight of
\begin{equation*}\label{smoother}
w_{time,m}(x_m) = \left(\frac{1}{2}\right)^{\frac{x_m}{\mbox{\small Half period}}},
\end{equation*}
meaning that, for instance, a match played \emph{Half period} days ago only contributes half as much as a match played today. We stress that the \emph{Half period} refers to calendar days, not match days. In the present case we use a Half period of 500 days, chosen by an optimization procedure that determined which Half period leads to the best predictions for women's soccer matches in terms of the average Rank Probability Score (RPS; \citealp{Gneitingetal:2007}).
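In code, this weighting scheme simply reads:
\begin{verbatim}
def match_weight(days_back, half_period=500):
    """Weight of a match played `days_back` calendar days before the tournament."""
    return 0.5 ** (days_back / half_period)

# A match played exactly one Half period (500 days) ago gets weight 0.5,
# i.e., it contributes half as much as a match played today.
\end{verbatim}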
The bivariate Poisson ranking model is based on a proposal from \cite{KarNtz:2003} and can be described as follows. If we have $M$ matches featuring a total of $n$ teams, we denote by $Y_{ijm}$ the random variable \textit{number of goals scored by team $i$ against team $j$ ($i,j\in \{1,...,n\}$) in match $m$} (where $m \in \{1,...,M\}$). The joint probability function of the home and away score is then given by the bivariate Poisson probability mass function,
\begin{eqnarray*}
{\rm P}(Y_{ijm}=z, Y_{jim}=y) &=& \frac{\lambda_{ijm}^z \lambda_{jim}^y}{z!y!} \exp(-(\lambda_{ijm}+\lambda_{jim}+\lambda_{C}))\cdot\nonumber\\
&&\sum_{k=0}^{\min(z,y)} \binom{z}{k} \binom{y}{k}k!\left(\frac{\lambda_{C}}{\lambda_{ijm}\lambda_{jim}}\right)^k,
\end{eqnarray*}
where $\lambda_{C}$ is a covariance parameter assumed to be constant over all matches and $\lambda_{ijm}$ is the expected number of goals for team $i$ against team $j$ in match $m$, which we model as
\begin{eqnarray}
\label{independentpoisson}\log(\lambda_{ijm})&=&\beta_0 + (r_{i}-r_{j})+h\cdot \mathbb{I}(\mbox{team $i$ playing at home})\,,
\end{eqnarray}
where $\beta_0$ is a common intercept and $r_i$ and $r_j$ are the strength parameters of team~$i$ and team~$j$, respectively. Since the ratings are unique up to addition by a constant, we add the constraint that the sum of the ratings has to equal zero. The last term $h$ represents the home effect and is only added if team~$i$ plays at home. Note that we have the Independent Poisson model if $\lambda_C=0$. The overall (weighted) likelihood function then reads
\begin{equation*}
L = \prod_{m=1}^{M}\left({\rm P}(Y_{ijm}=y_{ijm}, Y_{jim}={y_{jim}})\right)^{w_{time,m}},
\end{equation*}
where $y_{ijm}$ and $y_{jim}$ stand for the actual number of goals scored by teams $i$ and $j$ in match $m$. The values of the strength parameters $r_1,\ldots,r_n$, which allow ranking the different teams, are computed numerically as maximum likelihood estimates on the basis of historic match data as described in Section~\ref{sec:historic}. These parameters also allow us to predict future match outcomes via formula~\eqref{independentpoisson}.
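As an illustration of the weighted maximum likelihood fit, the following sketch covers the special case $\lambda_C=0$ (the independent Poisson model); the bivariate case additionally requires the covariance term of the probability mass function above. The data layout and the optimizer choice are assumptions of this sketch.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

def neg_log_likelihood(params, matches, n_teams):
    # params = [beta0, h, r_1, ..., r_{n-1}]; the last rating follows from sum(r) = 0.
    beta0, home = params[0], params[1]
    r = np.append(params[2:], -np.sum(params[2:]))
    nll = 0.0
    # matches: tuples (i, j, goals_i, goals_j, i_at_home in {0,1}, time_weight),
    # with the home team (if any) listed first.
    for i, j, goals_i, goals_j, i_at_home, w in matches:
        lam_ij = np.exp(beta0 + r[i] - r[j] + home * i_at_home)
        lam_ji = np.exp(beta0 + r[j] - r[i])
        nll -= w * (poisson.logpmf(goals_i, lam_ij) + poisson.logpmf(goals_j, lam_ji))
    return nll

# fit = minimize(neg_log_likelihood, x0=np.zeros(1 + n_teams),
#                args=(matches, n_teams), method="BFGS")
\end{verbatim}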
\subsection{The bookmaker consensus ranking model}\label{subsec:consensus}
Prior to the tournament on 2019-06-03 we obtained long-term winning odds from 18~online bookmakers.
However, before these odds can be transformed to winning probabilities, the stake has to be accounted for
and the profit margin of the bookmaker (better known as the ``overround'') has to be removed
\citep[for further details see][]{ref:Henery:1999, ref:Forrest+Goddard+Simmons:2005}.
Here, it is assumed that the quoted odds are derived from the underlying ``true'' odds as:
$\mbox{\it quoted odds} = \mbox{\it odds} \cdot \delta + 1$,
where $+ 1$ is the stake (which is to be paid back to the bookmakers' customers in case they win)
and $\delta < 1$ is the proportion of the bets that is actually paid out by the bookmakers.
The overround is the remaining proportion $1 - \delta$ and the main basis of the bookmakers'
profits (see also \citealp{ref:Wikipedia:2019} and the links therein).
Assuming that each bookmaker's $\delta$ is constant across the various teams in the tournament
\citep[see][for all details]{Leit:2010a}, we obtain overrounds for all
bookmakers with a median value of 24.8\%.
To aggregate the overround-adjusted odds across the 18~bookmakers, we transform them to
the log-odds (or logit) scale for averaging \citep[as in][]{Leit:2010a}.
The bookmaker consensus is computed as the mean winning log-odds for each team across bookmakers
and then transformed back to the winning probability scale.
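A sketch of these two steps, assuming decimal quoted odds and that, for each bookmaker, $\delta$ is chosen such that the adjusted winning probabilities sum to one (our reading of the adjustment described above):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def adjusted_probabilities(quoted):
    # quoted: decimal winning odds of one bookmaker for all 24 teams.
    # Solve quoted = odds*delta + 1 for delta such that sum of 1/(1+odds) equals one.
    implied = lambda delta: 1.0 / (1.0 + (quoted - 1.0) / delta)
    delta = brentq(lambda d: implied(d).sum() - 1.0, 1e-6, 1.0)
    return implied(delta)

def bookmaker_consensus(quoted_matrix):
    # quoted_matrix: (teams x bookmakers) array of decimal odds.
    probs = np.column_stack([adjusted_probabilities(q) for q in quoted_matrix.T])
    logit = np.log(probs / (1.0 - probs))       # to the log-odds scale
    consensus = logit.mean(axis=1)              # average across bookmakers
    return 1.0 / (1.0 + np.exp(-consensus))     # back to the probability scale
\end{verbatim}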
In a second step the bookmakers' odds are employed to infer the contenders' relative abilities
(or strengths). To do so, an ``inverse'' tournament simulation based on team-specific
abilities is used. The idea is the following:
\begin{enumerate}
\item If team abilities are available, pairwise winning probabilities can be derived
for each possible match using the classical \cite{ref:Bradley+Terry:1952} model. This model is
similar to the Elo rating \citep{ref:Elo:2008}, popular in sports, and computes the
probability that a Team~$A$ beats a Team~$B$ by their associated abilities (or strengths):
\[
\mathrm{Pr}(A \mbox{ beats } B) = \frac{\mathit{ability}_A}{\mathit{ability}_A + \mathit{ability}_B}.
\]
\item Given these pairwise winning probabilities, the whole tournament can be easily
simulated to see which team proceeds to which stage in the tournament and which
team finally wins.
\item Such a tournament simulation can then be run sufficiently often (here 1,000,000 times)
to obtain relative frequencies for each team winning the tournament.
\end{enumerate}
Here, we use the iterative approach of \cite{Leit:2010a}
to find team abilities so that the resulting simulated winning probabilities
(from 1,000,000 runs) closely match the bookmaker consensus probabilities. This makes it
possible to strip the effects of the tournament draw (with weaker/easier and stronger/more difficult
groups), yielding a log-ability measure (on the log-odds scale) for each team.
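The forward direction of this construction (from abilities to simulated winning frequencies) can be sketched as follows; the single-elimination bracket below is a simplification of the real tournament structure (groups plus knockout stage), and the random seed and run count are arbitrary. The inverse step then iteratively adjusts the abilities until the simulated frequencies match the consensus probabilities.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def a_beats_b(a, b, ability):
    # Bradley-Terry winning probability.
    return rng.random() < ability[a] / (ability[a] + ability[b])

def simulate_knockout(teams, ability):
    assert (len(teams) & (len(teams) - 1)) == 0, "bracket size must be a power of two"
    while len(teams) > 1:
        teams = [a if a_beats_b(a, b, ability) else b
                 for a, b in zip(teams[0::2], teams[1::2])]
    return teams[0]

def winning_frequencies(ability, n_runs=100_000):
    wins = np.zeros(len(ability))
    for _ in range(n_runs):
        bracket = list(rng.permutation(len(ability)))
        wins[simulate_knockout(bracket, ability)] += 1
    return wins / n_runs
\end{verbatim}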
\subsection{The combined ranking-based random forest}
In order to link the information provided by the covariate data, the historic match data
and the bookmakers' odds, we now combine the random forest approach from
Section~\ref{subsec:forest} and the ranking methods from Section~\ref{subsec:ranking}
and Section~\ref{subsec:consensus}. We propose to use both ranking approaches
to generate two new (highly informative) covariates that can be incorporated into the random forest model.
For that purpose, for each World Cup we estimate the team abilities $r_i, i=1,\ldots,24$, of all $24$
participating teams shortly before the start of the respective tournament. For example,
to obtain ability estimates for the 24 teams that participated in the World Cup 2011,
the historic match data for a certain time period preceding the World Cup 2011
(we chose to use 8 years, weighted by the described time
depreciation effect) is used. This procedure gives us the estimates $\hat r_i$ as an additional covariate
covering the current strength for all teams participating in a certain World Cup. Actually, this variable
appears to be somewhat similar to the Elo ranking, but turns out to be much
more informative, see Section~\ref{sec:fitforest}.
Moreover, based on the winning odds provided by the bookmakers,
we also calculate the additional abilities $s_i, i=1,\ldots,24$,
of all $24$ participating teams shortly before the start of the respective tournament
corresponding to the bookmaker consensus model. The corresponding estimates
$\hat s_i$ again serve as another additional covariate. Also this variable turns out to be more important than the Elo ranking, see again Section~\ref{sec:fitforest}.
The newly generated variables can be added to the covariate data based on
previous World Cups and a random forest can be fitted to these data. Based on
this random forest, new matches (e.g., matches from an upcoming World Cup) can be predicted.
To predict a new observation, its covariate values are dropped down each of the $B$
regression trees, resulting in $B$ distinct predictions. The average of those is then used as a
point estimate of the expected number of goals, conditional on the covariate values.
In order to be able to use these point estimates for the prediction of the
outcome of single matches or a whole tournament, we follow \citet{GroEtAl:WM2018b}
and treat the predicted expected value for the number of goals as
an estimate for the intensity $\lambda$ of a Poisson distribution $Po(\lambda)$.
This way we can randomly draw results for single matches and compute
probabilities for the match outcomes \textit{win}, \textit{draw} and \textit{loss} by
using two independent Poisson distributions (conditional on the covariates) for both scores.
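Given the two predicted intensities for a match, this step can be written as follows (truncating the Poisson distributions at a sufficiently large number of goals):
\begin{verbatim}
import numpy as np
from scipy.stats import poisson

def outcome_probabilities(lambda_a, lambda_b, max_goals=20):
    # Win/draw/loss probabilities for team A under two independent Poisson scores.
    goals = np.arange(max_goals + 1)
    joint = np.outer(poisson.pmf(goals, lambda_a), poisson.pmf(goals, lambda_b))
    win = np.tril(joint, -1).sum()     # A scores strictly more than B
    draw = np.trace(joint)
    loss = np.triu(joint, 1).sum()     # B scores strictly more than A
    return win, draw, loss

# Drawing one concrete result instead (as used in the tournament simulations):
# score_a, score_b = np.random.poisson(lambda_a), np.random.poisson(lambda_b)
\end{verbatim}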
\section{Modeling the FIFA Women's World Cup 2019}\label{sec:prediction}
We now fit the proposed combined ranking-based random forest model
to the data of the World Cups 2011 and 2015.
Next, we calculate the Poisson ranking ability parameters
based on historic match data over the 8 years preceding the World Cup 2019 as well as the bookmaker consensus abilities based on
the winning odds from 18 different bookmakers.
Based on these ability estimates, the fitted random forest will be used to
simulate the FIFA Women's World Cup 2019 tournament
100,000 times to determine winning probabilities for all 24 participating teams.
\subsection{Fitting the combined ranking-based random forest to the data of the World Cups 2011 and 2015}\label{sec:fitforest}
We fit the hybrid random forest approach with $B=5000$ single trees to the
complete data set covering the two World Cups 2011 and 2015.
The best way to understand the role of the single predictor variables in a random forest
is the so-called variable importance, see \citet{Breiman:2001}. Typically, the variable
importance of a predictor is measured by permuting each of the predictors
separately in the out-of-bag observations of each tree. Out-of-bag observations
are observations which are not part of the respective subsample or bootstrap
sample that is used to fit a tree. Permuting a variable means that within the
variable each value is randomly assigned to a location within the vector.
If, for example, \emph{Age} is permuted, the average age of the German team
in 2011 could be assigned to the US team in 2015.
When permuting variables randomly, they lose their information with respect
to the response variable (if they have any). Then, one measures the loss of prediction
accuracy compared to the case where the variable is not permuted.
Permuting variables with a high importance will lead to a higher loss of
prediction accuracy than permuting variables with low importance.
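A stripped-down version of this computation for a single predictor is sketched below; in practice the forest implementation performs it per tree on that tree's own out-of-bag observations, whereas here a generic fitted model and a pre-assembled out-of-bag array are assumed.
\begin{verbatim}
import numpy as np

def permutation_importance(model, X_oob, y_oob, column, rng, n_repeats=20):
    # Increase in mean squared error after randomly permuting one predictor column.
    baseline = np.mean((model.predict(X_oob) - y_oob) ** 2)
    increases = []
    for _ in range(n_repeats):
        X_perm = X_oob.copy()
        X_perm[:, column] = rng.permutation(X_perm[:, column])
        increases.append(np.mean((model.predict(X_perm) - y_oob) ** 2) - baseline)
    return float(np.mean(increases))
\end{verbatim}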
Figure~\ref{var_imp} shows bar plots of the variable importance values for all variables in the
hybrid random forest applied to the data of the World Cups 2011 and 2015.
\begin{figure}[!ht]
\centering
\includegraphics[width=.9\textwidth]{var_imp.pdf}
\caption{Bar plot displaying the variable importance in the hybrid random forest applied to FIFA World Cup data.}
\label{var_imp}
\end{figure}
Interestingly, the Poisson abilities are by far the most important predictor
in the random forest and carry clearly more information than all other predictors.
But also the abilities from the bookmaker consensus approach seem to be
slightly more informative compared to the \emph{FIFA rank}.
Even though the \emph{FIFA rank}, the teams' average \emph{age} or the number of \emph{CL~players}
also contain some information concerning the current strengths of the teams,
it is definitely worth the effort to estimate such abilities in separate models.
For a more detailed comparison of the team abilities and the \emph{FIFA rank},
see Table~\ref{tab_rank}.
\begin{table}[!h]
\small
\caption{\label{tab_rank} Ranking of the participants of the FIFA World Cup 2019
according to estimated bookmaker consensus abilities (left; in logs), Poisson abilities (center; in logs) and FIFA ranking (right).}\vspace{0.4cm}
\centering
\begin{tabular}{l|ll|ll|ll}
\multicolumn{1}{c}{}&\multicolumn{2}{c}{\textbf{bookmaker consensus}} &\multicolumn{2}{c}{\textbf{Poisson}} &\multicolumn{2}{c}{\textbf{FIFA}}\\
\multicolumn{1}{c}{}&\multicolumn{2}{c}{\textbf{abilities}} &\multicolumn{2}{c}{\textbf{abilities}} &\multicolumn{2}{c}{\textbf{ranking}}\\
\toprule
1 & \includegraphics[width=0.4cm]{FRA.png} & France & \includegraphics[width=0.4cm]{USA.png} & United States & \includegraphics[width=0.4cm]{USA.png} & United States \\
2 & \includegraphics[width=0.4cm]{USA.png} & United States & \includegraphics[width=0.4cm]{GER.png} & Germany & \includegraphics[width=0.4cm]{GER.png} & Germany \\
3 & \includegraphics[width=0.4cm]{GER.png} & Germany & \includegraphics[width=0.4cm]{FRA.png} & France & \includegraphics[width=0.4cm]{ENG.png} & England \\
4 & \includegraphics[width=0.4cm]{ENG.png} & England & \includegraphics[width=0.4cm]{NED.png} & Netherlands & \includegraphics[width=0.4cm]{FRA.png} & France \\
5 & \includegraphics[width=0.4cm]{NED.png} & Netherlands & \includegraphics[width=0.4cm]{ENG.png} & England & \includegraphics[width=0.4cm]{CAN.png} & Canada \\
6 & \includegraphics[width=0.4cm]{JPN.png} & Japan & \includegraphics[width=0.4cm]{ESP.png} & Spain & \includegraphics[width=0.4cm]{AUS.png} & Australia \\
7 & \includegraphics[width=0.4cm]{AUS.png} & Australia & \includegraphics[width=0.4cm]{CAN.png} & Canada & \includegraphics[width=0.4cm]{JPN.png} & Japan \\
8 & \includegraphics[width=0.4cm]{ESP.png} & Spain & \includegraphics[width=0.4cm]{SWE.png} & Sweden & \includegraphics[width=0.4cm]{NED.png} & Netherlands \\
9 & \includegraphics[width=0.4cm]{BRA.png} & Brazil & \includegraphics[width=0.4cm]{NOR.png} & Norway & \includegraphics[width=0.4cm]{SWE.png} & Sweden \\
10 & \includegraphics[width=0.4cm]{SWE.png} & Sweden & \includegraphics[width=0.4cm]{JPN.png} & Japan & \includegraphics[width=0.4cm]{BRA.png} & Brazil \\
11 & \includegraphics[width=0.4cm]{CAN.png} & Canada & \includegraphics[width=0.4cm]{BRA.png} & Brazil & \includegraphics[width=0.4cm]{NOR.png} & Norway \\
12 & \includegraphics[width=0.4cm]{NOR.png} & Norway & \includegraphics[width=0.4cm]{AUS.png} & Australia & \includegraphics[width=0.4cm]{ESP.png} & Spain \\
13 & \includegraphics[width=0.4cm]{CHN.png} & China PR & \includegraphics[width=0.4cm]{ITA.png} & Italy & \includegraphics[width=0.4cm]{KOR.png} & South Korea \\
14 & \includegraphics[width=0.4cm]{ITA.png} & Italy & \includegraphics[width=0.4cm]{CHN.png} & China PR & \includegraphics[width=0.4cm]{ITA.png} & Italy \\
15 & \includegraphics[width=0.4cm]{KOR.png} & South Korea & \includegraphics[width=0.4cm]{KOR.png} & South Korea & \includegraphics[width=0.4cm]{CHN.png} & China PR \\
16 & \includegraphics[width=0.4cm]{NZL.png} & New Zealand & \includegraphics[width=0.4cm]{SCO.png} & Scotland & \includegraphics[width=0.4cm]{NZL.png} & New Zealand \\
17 & \includegraphics[width=0.4cm]{SCO.png} & Scotland & \includegraphics[width=0.4cm]{NZL.png} & New Zealand & \includegraphics[width=0.4cm]{SCO.png} & Scotland \\
18 & \includegraphics[width=0.4cm]{CHI.png} & Chile & \includegraphics[width=0.4cm]{NGA.png} & Nigeria & \includegraphics[width=0.4cm]{THA.png} & Thailand \\
19 & \includegraphics[width=0.4cm]{ARG.png} & Argentina & \includegraphics[width=0.4cm]{CHI.png} & Chile & \includegraphics[width=0.4cm]{ARG.png} & Argentina \\
20 & \includegraphics[width=0.4cm]{RSA.png} & South Africa & \includegraphics[width=0.4cm]{RSA.png} & South Africa & \includegraphics[width=0.4cm]{NGA.png} & Nigeria \\
21 & \includegraphics[width=0.4cm]{NGA.png} & Nigeria & \includegraphics[width=0.4cm]{THA.png} & Thailand & \includegraphics[width=0.4cm]{CHI.png} & Chile \\
22 & \includegraphics[width=0.4cm]{CMR.png} & Cameroon & \includegraphics[width=0.4cm]{JAM.png} & Jamaica & \includegraphics[width=0.4cm]{CMR.png} & Cameroon \\
23 & \includegraphics[width=0.4cm]{THA.png} & Thailand & \includegraphics[width=0.4cm]{CMR.png} & Cameroon & \includegraphics[width=0.4cm]{RSA.png} & South Africa \\
24 & \includegraphics[width=0.4cm]{JAM.png} & Jamaica & \includegraphics[width=0.4cm]{ARG.png} & Argentina & \includegraphics[width=0.4cm]{JAM.png} & Jamaica \\
\end{tabular}
\end{table}
\subsection{Probabilities for FIFA World Cup 2019 Winner} \label{sec:simul}
In this section, the hybrid random forest is applied to (new) data for the
World Cup 2019 in France (in advance of the tournament) to predict winning probabilities
for all teams and to predict the tournament course.
The Poisson abilities were estimated by a bivariate Poisson model with a half period of 500 days.
All matches of the 167 national teams played between 2011-06-01 and 2019-06-01 are
used for the estimation, which results in a total of 3418 matches.
All further predictor variables are taken as the latest values shortly before the World
Cup (and using the finally announced squads of 23 players for all nations).
The bookmaker consensus abilities are based on the average odds of 18 bookmakers.
For each match in the World Cup 2019, the hybrid random forest can
be used to predict an expected number of goals for both teams. Given the expected
number of goals, a real result is drawn by assuming two (conditionally) independent
Poisson distributions for both scores. Based on these results, all 36 matches from the
group stage can be simulated and final group standings can be calculated. Due to
the fact that real results are simulated, we can precisely follow the official FIFA rules when
determining the final group standings\footnote{The final group standings are determined by (1) the number of
points, (2) the goal difference and (3) the number of scored goals.
If several teams coincide with respect to all of these three criteria, a
separate table is calculated based on the matches between the coinciding
teams only. Here, again the final standing of the teams is
determined following criteria (1)--(3). If still no distinct decision can
be taken, the decision is induced by lot.\label{fifa:rules}}. This enables us to
determine the matches in the round-of-sixteen and we can continue by
simulating the knockout stage. In the case of draws in the knockout stage,
we simulate extra-time by a second simulated result. However, here we multiply
the expected number of goals by the factor 0.33 to account for the shorter time
to score (30 min instead of 90 min). In the case of a further draw in extra-time
we simulate the penalty shootout by a (virtual) coin flip.
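For a single knockout match, these rules translate directly into code (the team labels are arbitrary and the random seed is only set for reproducibility):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2019)

def simulate_knockout_match(lambda_a, lambda_b):
    # Regular time, then extra time with intensities scaled by 0.33,
    # then a (virtual) coin flip for the penalty shootout.
    goals_a, goals_b = rng.poisson(lambda_a), rng.poisson(lambda_b)
    if goals_a != goals_b:
        return "A" if goals_a > goals_b else "B"
    extra_a, extra_b = rng.poisson(0.33 * lambda_a), rng.poisson(0.33 * lambda_b)
    if extra_a != extra_b:
        return "A" if extra_a > extra_b else "B"
    return "A" if rng.random() < 0.5 else "B"
\end{verbatim}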
Following this strategy, a whole tournament run can be simulated, which we repeat 100,000
times. Based on these simulations, for each of the 24 participating
teams probabilities to reach the single knockout stages and,
finally, to win the tournament are obtained. These are summarized
in Table~\ref{winner_probs} together with the (average) winning probabilities
based on 18 different bookmakers for comparison.
\begin{table}[!h]
\small
\caption{\label{winner_probs}Estimated probabilities (in \%) for reaching the
different stages in the FIFA World Cup 2019 for all 24 teams based on 100,000
simulation runs of the FIFA World Cup together with (average) winning probabilities
based on the odds of 18 bookmakers.}\vspace{0.4cm}
\centering
\input{Winner_probs}
\end{table}
We can see that, according to our hybrid random forest model,
USA is the favored team with a predicted winning probability of $28.1\%$
followed by France, England and Germany. Overall, this result seems in line with the probabilities
from the bookmakers, as we can see in the last column. However, while the bookmakers slightly favor France,
the random forest model predicts a clear advantage for USA.
Beside the probabilities of becoming world champion, Table~\ref{winner_probs} provides some
further interesting insights also for the single stages within the tournament. For example, it is interesting
to see that the two favored teams USA and France have similar chances to at least reach the
round-of-sixteen ($98.4\%$ and $95.9\%$, respectively), while the probabilities to at least reach
the quarter finals differ significantly. While USA has a probability of $75.5\%$ to reach at least the quarter finals
and of $53.4\%$ to reach at least the semi finals, France only achieves respective probabilities of
$66.8\%$ and $40.7\%$. Obviously, in contrast to USA, France has
a rather high chance to meet a strong opponent in the round-of-sixteen and the quarter finals.
\section{Concluding remarks}\label{sec:conclusion}
In this work, we proposed a hybrid modeling approach for the scores of
international soccer matches which combines random forests with two different ranking methods,
a Poisson ranking method and abilities based on bookmakers' odds.
While the random forest is principally based on the competing teams' covariate information,
the latter two components provide ability parameters, which serve as adequate
estimates of the current team strengths as well as of the information contained in the bookmakers' odds.
In order to combine the methods, the Poisson ranking method needs to be repeatedly applied
to historical match data preceding each World Cup from the training data. This way, for each
World Cup in the training data and each participating team current ability estimates are obtained.
Similarly, the bookmaker consensus abilities are obtained by inverse tournament simulation
based on the aggregated winning odds from several online bookmakers.
These two ability estimates can be added as additional covariates to the set of covariates used
in the random forest procedure.
Additionally, based on the estimates of the combined ranking-based random forest
on the training data, we repeatedly simulated the FIFA Women's World Cup 2019 100,000 times.
According to these simulations, the defending champion USA is the top
favorite for winning the title, followed by France, England and Germany.
Furthermore, survival probabilities for all teams and at all tournament stages are
provided.
\subsection*{Acknowledgment}
We thank Jonas Heiner for his tremendous effort in helping us to collect
the covariate data.
\bibliographystyle{chicago}
\section{Introduction}
Soccer is an extremely popular and profitable, multi-billion dollar business
around the world. Recently, several aspects regarding the sport and
associated businesses have been the subject of investigation by the
scientific community, including physicists who have devoted some work and
time to describe statistics related to soccer. In the literature about
soccer models, one can find applications of complex networks~\cite{Onody2004}
and fits with generalized functions~\cite{Renio2000}; however, these studies often have
only one focus: goal distribution (see, e.g., \cite{Bitner2007, Bitner2009, Skinera2009}).
Outside the soccer literature, it is important to mention other interesting studies
which do not necessarily focus on the scores of the games, such as models that
investigate properties of patterns emerging from failure/success processes in sports.
In the case of basketball, it has been suggested~\cite{yaari2011} that the
``hot hand" phenomenon (the belief that during a particular period a player's
performance is significantly better than expected on the basis of a player's
overall record), a definitively non-random pattern, can be modeled by a
sequence of random independent trials. Returning to soccer, some
authors~\cite{Kranjec2010} have devoted attention to the influence of the
perceptual-motor bias associated with reading direction in foul judgment by
referees.
However, it is interesting to notice that there is a void in the literature:
few studies have been carried out under the game theoretic approach
of considering the outcome of a tournament from a simple dynamics among the
competing teams. In other words, in looking at the statistics that emerge
from this complex system called soccer, one can ask if the properties of the
distribution of final tournament classification points can be seen as an
emerging property of a soccer tournament dynamics established by simple
rules among the different competing teams, or how these classification point
distributions emerge from a soccer tournament by considering all
\textquotedblleft combats" among the teams. Here, we propose a model that
combines previous studies concerning goal distribution~\cite{Skinera2009}
and a game theoretic approach to football tournaments that produces
realistic final tournament scores and standings.
In this paper, we explore the statistics of standing points at the end of
tournaments disputed according to the ``Double Round Robin System" (DRRS)\footnote[1]{http://en.wikipedia.org/wiki/Round-robin\_tournament}, in which the team with
the most tournament points at the end of the season is crowned the champion,
since many soccer tournament tables around the world are based on this
well-known system. In general, 20 teams take part in the first tier
tournament, such as ``Serie A'' in Italy, the English ``Premier League'',
the Spanish ``La Liga'' and the Brazilian ``Brasileir\~{a}o'' (from 2003
onwards) soccer tournaments. During the course of a season, each team plays
every other team twice: the ``home'' and ``away'' games. Moreover, the points
awarded in each match follow the 3-1-0 points system: teams receive three
points for a win and one point for a draw; no points are awarded for a loss.
The Serie A Italian soccer tournament, or simply the ``Calcio", has been
played since 1898, but only from 1929 was it disputed in its current format
and system. Its main champions have been Juventus, winner of the league 27 times, and
Milan and Internazionale which won the league 18 times each. The Spanish
``La Liga" also started in 1929, and over its history, the tournament has
been widely dominated by only two teams: Real Madrid and Barcelona.
In Brazil, the national tournament, popularly known as ``Brasileir\~{a}o",
was first organized in a modern format in 1971. In 2010 the Brazilian Soccer
Confederation (CBF) recognized as national champions the winners of smaller
national tournaments such as the ``Ta\c{c}a Brasil" (played from 1959 to
1968) and another tournament known as ``Roberto Gomes Pedrosa" (played from
1967 to 1970). However, only in 2003 did the Brazilian League start being
disputed via the DRRS. In all previous editions of the tournament the league
table was based on the method of preliminaries, typically used in Tennis
tournaments, which will not be considered in this paper. In the 10 editions
played under the DRRS, the Brazilian tournament has already been won by 6
different football clubs: Cruzeiro, Santos, S\~{a}o Paulo, Corinthians,
Flamengo, and Fluminense.
The statistics as well as the fluctuations associated with the standings and
scores of teams in tournaments with 20 teams playing under the DRRS can be
very interesting. Moreover, if we are able to reproduce such statistics via
a simple automaton considering the teams as ``agents" which evolve according
to definite ``rules" based on their previous performances and conditions,
one could use this information when preparing or building up a team before a
competition. Thus, models (e.g. automata) of games in a tournament, whose
results are defined by the evolving characteristics of the teams, could
provide important knowledge. Therefore, by exploring the conditions under
which the standing and scores of tournaments can be mimicked by a model, we
propose a simple, but very illustrative, evolutionary non-Markovian process.
It is known that many events can alter the performance of teams during a
season besides their initial strengths, such as the hiring of a new player,
renewed motivation due to a change in coach, key player injuries, trading of
players, among others. For the sake of simplicity, we consider that the
teams in the model initially have the same chance of winning the games and that the
combination of events that can lead to an improvement of a team will be
modeled solely by increasing the probability of a team winning future games
after a victory. Similarly, a loss should negatively affect their future winning
probabilities.
Our main goal is to verify if the Brazilian Soccer tournament has final
standing scores with the same statistical properties that emerge from our
simple model, and to check whether the properties of the Brazilian
tournament differ from other leagues and, if so, the reasons for that
behavior. In the first part of the paper we calibrate our model by using
constant draw probabilities introduced ad hoc, based on data from real
tournaments. In the second part, we have used draw probabilities that emerge
from the model dynamics, being dependent on the teams' \textquotedblleft
abilities". Both situations are able to reproduce real tournament data.
The advantage of the second approach is the independence of extra
parameters, i.e., the first one uses pre-calculated rates from previous
statistics. In addition, we analyze distortions of our model under hypotheses of
inflated tournaments. Finally, we show a
transition from single to double peaked histograms of final standing
scores, which occurs when we analyze a small league and large tournaments.
However, it is possible to obtain a scaling for different
tournaments with different sizes.
\section{A first Model: ad-hoc draw probabilities}
In our model, each team starts with a potential $\varphi _{i}(0)=\varphi_0$, where
$i=1,\ldots,n$ indexes the teams. Each team plays once
with the other $n-1$ teams in each half of the tournament;
a team $A$ plays with $B$ in the first half of the tournament and $B$ plays
with $A$ in the second, i.e. the same game occurs twice in the tournament
and there is no distinction between home and away matches (the
\textquotedblleft home court advantage\textquotedblright\ could be inserted
in the potential of the teams). In a game between team $i$ and team
$j$, the probability that $i$ beats $j$ is given by
\begin{equation}
\Pr (i\succ j)=\frac{\varphi _{i}}{(\varphi _{i}+\varphi _{j})}.
\label{prob_win}
\end{equation}
The number of games in the tournament is $N=n(n-1)$ and in each half of the
tournament, $n-1$ rounds of $n/2$ games are played. In each round, a
matching is performed over the teams by a simple algorithm that considers
all circular permutations to generate the games. We give an illustration for
$n=6$ teams, starting with the configuration:
\begin{equation*}
\begin{array}{lll}
1 & 2 & 3 \\
4 & 5 & 6
\end{array}
\end{equation*}
This configuration implies that in the first round, team 1 plays team 4, 2
plays 5 and team 3 plays team 6. To generate the second round, we keep team 1 fixed in its position
and we rotate the other teams clockwise:
\begin{equation*}
\begin{array}{lll}
1 & 4 & 2 \\
5 & 6 & 3
\end{array}
\end{equation*}
Now, team 1 plays team 5, team 4 plays 6 and team 2 plays 3. After $n-1=5$
rounds, the system arrives at the last distinct configuration and all teams have confronted every
other only once. We repeat the same process to simulate the second half of the tournament.
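This scheduling rule (the standard circle method) can be implemented in a few lines; the sketch below assumes an even number of teams and reproduces the two configurations shown above for $n=6$.
\begin{verbatim}
def round_robin_rounds(teams):
    # One half of the tournament: teams[0] stays fixed, the others rotate clockwise.
    k = len(teams) // 2                      # teams per row (even n assumed)
    top, bottom = list(teams[:k]), list(teams[k:])
    rounds = []
    for _ in range(len(teams) - 1):
        rounds.append(list(zip(top, bottom)))   # pair the two rows column-wise
        top, bottom = [top[0], bottom[0]] + top[1:-1], bottom[1:] + [top[-1]]
    return rounds

# round_robin_rounds([1, 2, 3, 4, 5, 6]) starts with
# [(1, 4), (2, 5), (3, 6)] and then [(1, 5), (4, 6), (2, 3)], as in the text.
\end{verbatim}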
In our model, the outcome of each match is a draw with probability $r_{draw}$
and one team will beat the other with probability $(1-r_{draw})$; the
winning team is decided by the probabilities defined by Eq.(\ref{prob_win}).
After each match, we increase $\varphi_i $ by one unit if team $i$ wins,
decrease $\varphi_i $ by one unit if team $i$ loses and $\varphi_i \geq 1$, and
leave it unchanged in the case of a draw. Here, we used $r_{draw}=\allowbreak 0.26$, which is the average draw probability in actual tournaments
around the world. Actually, we observe that $r_{draw}$ ranges from $0.24$
(Spanish La Liga) to $0.28$ (Italian Calcio); see table~\ref{main_table}.
Besides this, the team is awarded points according to the 3-1-0 scheme. In
each new match, the updated potentials are considered and the second half of
the tournament begins with the conditions acquired by the teams in the first
half. The team evolution dynamics is briefly described by the following
algorithm
\begin{equation*}
\begin{tabular}{ll}
\hline\hline
\label{algorithm} & \textbf{Main Algorithm} \\ \hline\hline
1 & If ($rand[0,1]<r_{draw}$) then \\
2 & \ \ \ \ \ $p_{i}=p_{i}+1$ and $p_{j}=p_{j}+1;$ \\
3 & else \\
4 & \ \ \ if ($rand[0,1]<\frac{\varphi _{i}}{(\varphi _{i}+\varphi _{j})}$) then \\
5 & \ \ \ \ \ \ \ \ \ $p_{i}=p_{i}+3;$ \ \ $\varphi _{i}=\varphi _{i}+1;$ \ \ $\varphi _{j}=\varphi _{j}-1;$ \\
6 & \ \ \ else \\
7 & \ \ \ \ \ \ \ \ \ $p_{j}=p_{j}+3;$ \ \ $\varphi _{i}=\varphi _{i}-1;$ \ \ $\varphi _{j}=\varphi _{j}+1;$ \\
8 & \ \ \ endif \\
9 & Endif \\ \hline\hline
\end{tabular}
\end{equation*}
Here, it is important to notice that the algorithm works under the
constraint $\varphi _{j}\geq 1$, for every $j$.
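A runnable version of one match under these rules (3-1-0 points, unit increments of the potentials, and the constraint $\varphi_j\geq 1$, here enforced by skipping the decrement whenever it would push a potential below one) reads:
\begin{verbatim}
import random

def play_match(i, j, potential, points, r_draw=0.26):
    # One match between teams i and j: update points (3-1-0) and potentials.
    if random.random() < r_draw:
        points[i] += 1
        points[j] += 1
        return
    if random.random() < potential[i] / (potential[i] + potential[j]):
        winner, loser = i, j
    else:
        winner, loser = j, i
    points[winner] += 3
    potential[winner] += 1
    if potential[loser] > 1:          # keep every potential >= 1
        potential[loser] -= 1
\end{verbatim}
A full tournament is then obtained by looping this update over the rounds produced by the scheduling sketch above, once for each half of the season.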
It is important to mention that the arbitrary choice of increments equal to
one unit is irrelevant, since it is possible to alter the relative change in
potential by assigning it different starting values. For example, if team $A$
is matched against team $B$ in a certain round, we can denote by $N_{A}$ and $N_{B}$
the difference between the number of wins and losses up to that round ($N_{A}$ and $N_{B}$ can
be negative or positive integers) for each team. We can then write $\varphi _{A}=\varphi _{0}+N_{A}$ and
$\varphi _{B}=\varphi _{0}+N_{B}$, so our model works with unitary
increments/decrements, i.e., $\Delta \varphi =1$. As can be observed, for arbitrary $\Delta
\varphi $ we have invariance of the probability
\begin{equation}
\begin{array}{lll}
\Pr (A\succ B) & = & \dfrac{\varphi _{A}}{(\varphi _{A}+\varphi _{B})} \\
& & \\
& = & \dfrac{\varphi _{0}\Delta \varphi +N_{A}\Delta \varphi }{\varphi _{0}\Delta \varphi +N_{A}\Delta \varphi +\varphi _{0}\Delta \varphi +N_{B}\Delta \varphi } \\
& & \\
& = & \dfrac{\widehat{\varphi }_{0}+N_{A}\Delta \varphi }{\widehat{\varphi }_{0}+N_{A}\Delta \varphi +\widehat{\varphi }_{0}+N_{B}\Delta \varphi } = \dfrac{\widehat{\varphi }_{A}}{\widehat{\varphi }_{A}+\widehat{\varphi }_{B}},
\end{array}
\label{invariance}
\end{equation}
where $\widehat{\varphi }_{A}=\widehat{\varphi }_{0}+N_{A}\Delta \varphi $
and $\widehat{\varphi }_{B}=\widehat{\varphi }_{0}+N_{B}\Delta \varphi $, with $\widehat{\varphi }_{0}=\varphi _{0}\Delta \varphi $.
This simple calculation shows that we can start from an arbitrary
potential $\widehat{\varphi }_{0}$ for the teams and obtain exactly the same results if we perform
increments according to $\Delta \varphi $. In this case, our main algorithm must be changed to increment/decrement by
$\Delta \varphi $ instead of 1, and the model depends on one parameter only, i.e., $\varphi_0/\Delta \varphi$.
\section{Results Part I: Exploring the first model -- Calibrating parameters}
Before comparing our model with real data from tournaments played under the
DRRS, it is interesting to study some of its statistical properties. Given
$n$ teams, one run of the algorithm will generate a final classification
score for each team. For instance, starting with $n=20$ teams with the same
potential $\varphi_{0}=30$, a possible final classification score generated
by our algorithm in increasing order is [23, 28, 39, 41, 44, 45, 47, 48, 49,
53, 54, 57, 60, 61, 62, 62, 64, 64, 65, 72]. To obtain significant
information from the model, it is necessary to average these data over
different random number sequences. To that end, we compute histograms of the final
score distributions over $n_{run}=100$ independent tournaments, for a varying
number of teams.
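Reusing the two sketches above, a full double-round-robin season and the ensemble of $n_{run}=100$ runs could be simulated along the following lines (a hedged illustration; the function and variable names are ours, not those of the original FORTRAN code):
\begin{verbatim}
def simulate_tournament(n, phi0, r_draw=0.26):
    """One double round robin: every pairing of the schedule is played
    twice, carrying potentials and points from the first half into the
    second one. Returns the sorted final classification scores."""
    phi = [phi0] * (n + 1)       # 1-based indexing; index 0 is unused
    points = [0] * (n + 1)
    schedule = round_robin_schedule(n)
    for _ in range(2):           # the two halves of the tournament
        for rnd in schedule:
            for i, j in rnd:
                play_match(phi, points, i, j, r_draw)
    return sorted(points[1:])

scores = [simulate_tournament(20, 30) for _ in range(100)]
\end{verbatim}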
\begin{figure}[h]
\begin{center}
\includegraphics[width=6.0in]{scaling_level_2_new.eps}
\end{center}
\caption{ Figure (a): \textbf{The score histogram rescaled by the number of
teams in the large fluctuation regime tournament for} $\protect\varphi_{0}=2$.
The histogram was generated from an average over $n_{run}=100$
different tournaments simulated by our automaton. We simulated tournaments
with $n=20,40,80,160,320$, and $640$ different teams. Figure (b): \textbf{The
accumulated frequency of classification scores}: number of teams
with score smaller than a determined score divided by the number of teams. }
\label{figure_scaling_2}
\end{figure}
In Fig.~\ref{figure_scaling_2} (a), we display the relative frequency of
scores as a function of the rescaled score, considering all teams initially
with $\varphi_{0}=2$, for varying tournament sizes $n$. Under this regime of
low $\varphi_0$, the changes in potential according to the algorithm generate
large fluctuations in the winning/losing probabilities and a double peak
pattern is observed in the histograms.
For a study of size scaling, we consider our histogram as a function
of the variable $\frac{score}{n}$, since the larger the tournament, the larger
the team scores (number of points). This double peak shows that our
dynamics leads to two distinct groups: one that disputes the leadership and
another that fights against relegation to lower tiers. In Fig.~\ref{figure_scaling_2} (b), we plot the cumulative frequency as a function of $\frac{score}{n}$
(that essentially counts how many teams have scores smaller than, or equal to a
given score). We can observe an interesting behavior: extra inflection points
make the concavity change sign, and the scores show non-gaussian behavior,
independently of the size of the tournament. Although the distribution is
clearly non-gaussian, because of the double peak and the ``S''-shaped
cumulative frequency, the Kolmogorov-Smirnov (KS) and Shapiro-Wilk (SW) tests
(references and routine codes for these tests are found in
\cite{recipes2007}) were performed to quantify the departure from
gaussianity. An important distinction between the two methods is that the KS
test \cite{Garpman1978} can be applied to test other distributions as well,
whereas the SW test is designed specifically for normality testing.
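As an illustration, the two tests can be reproduced with standard routines, for instance from SciPy instead of the Numerical Recipes codes cited above (a sketch under that assumption; note that using the sample mean and standard deviation in the KS test makes its p-value only approximate):
\begin{verbatim}
import numpy as np
from scipy import stats

final_scores = np.concatenate(scores)   # e.g. the simulated scores above

sw_stat, sw_p = stats.shapiro(final_scores)          # normality only
ks_stat, ks_p = stats.kstest(final_scores, 'norm',   # any distribution
                             args=(final_scores.mean(),
                                   final_scores.std(ddof=1)))
print(sw_p, ks_p)   # p < 0.05 rejects gaussianity at the 5% level
\end{verbatim}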
\begin{figure}[h]
\begin{center}
\includegraphics[width=6.0in]{scaling_level_30_new.eps}
\end{center}
\caption{Figure (a): \textbf{The histogram of scores rescaled by the number
of teams}: in the small fluctuation regime tournament for $\protect\varphi_{0}=30\ $,
under the same conditions of Figure~\protect\ref{figure_scaling_2}.
Figure (b): \textbf{The accumulated frequency for $\protect\varphi_{0}=30$}, for the
same conditions of Figure~\protect\ref{figure_scaling_2}.}
\label{figure_scaling_30}
\end{figure}
By repeating the experiment for $\varphi_{0}=30$, a transition from a single
peak to a double peak can be observed starting from $n\approx40$, as shown in
Fig. \ref{figure_scaling_30} (a). Under this condition, wins and losses cause
only small changes in the winning/losing probabilities, simulating a tournament
under the ``adiabatic'' regime.
We observe that this interesting behavior is reflected in the curves of
cumulative frequencies, which change from a single to a double ``S'' shape in Fig.
\ref{figure_scaling_30} (b). It is interesting to verify whether this
tournament model is able to mimic the score statistics of real tournaments
and, if so, under what conditions. To answer this question, we need to
explore real tournament statistics. In Table~\ref{main_table}, we show the
compiled data of the last 6 editions of important soccer tournaments around
the world: the Italian, Spanish, and Brazilian leagues. We collected data on the scores of
the champion (maximum) and last-placed (minimum) teams. We average these
statistics over all studied editions and analyze the Gaussian behavior of
the score data for each edition separately (20 scores) and grouped (120 scores),
using two methods: Shapiro-Wilk and Kolmogorov-Smirnov at a
significance level of 5\%. The draw average per team was also computed,
which shows that $r_{draw}\approx\frac{10}{38} \approx 0.26$, corroborating the
input used in our previous algorithm.
\begin{table*}[th]
\caption{ \textbf{Compiled data of important tournaments around the world:
Italian, Spanish, and Brazilian Leagues}}
\label{main_table}
\begin{tabular}{llllllll}
\hline\hline
& \textbf{2006} & \textbf{2007} & \textbf{2008} & \textbf{2009} & \textbf{2010} & \textbf{2011} & \textbf{all} \\ \hline\hline
\textbf{Italian* (Calcio)} & & & & & & & \\ \hline
minimum & 35 & 26 & 30 & 30 & 29 & 24 & 29(2) \\
maximum & 86 & 97 & 85 & 84 & 82 & 82 & 86(2) \\
Kolmogorov-Smirnov & no & yes & yes & yes & yes & yes & no \\
Shapiro-Wilk & no & no & yes & yes & yes & yes & no \\
draws (average per team) & 12.5 & 11.4 & 11.2 & 9.5 & 10.2 & 9.7 & 10.8(5)
\\ \hline\hline
\textbf{Spanish (La Liga)} & & & & & & & \\ \hline
minimum & 24 & 28 & 26 & 33 & 34 & 30 & 29(2) \\
maximum & 82 & 76 & 85 & 87 & 99 & 96 & 87(4) \\
Kolmogorov-Smirnov & yes & yes & yes & yes & yes & yes & no \\
Shapiro-Wilk & yes & yes & yes & yes & no & no & no \\
draws (average per team) & 10.5 & 9.8 & 8.7 & 8.3 & 9.5 & 7.9 & 9.1(4) \\
\hline\hline
\textbf{Brazilian (Brasileir\~{a}o)} & & & & & & & \\ \hline
minimum & 28 & 17 & 35 & 31 & 28 & 31 & 28(2) \\
maximum & 78 & 77 & 75 & 67 & 71 & 71 & 73(2) \\
Kolmogorov-Smirnov & yes & yes & yes & yes & yes & yes & yes \\
Shapiro-Wilk & yes & no & yes & yes & yes & yes & yes \\
draws (average per team) & 9.7 & 9 & 9.6 & 10.2 & 11.8 & 10.5 & 10.1(4) \\
\hline\hline
\end{tabular}
\par
\begin{flushleft}
* The year 2006 (which corresponds to season 2005/2006) was replaced by
2004/2005 for the Calcio, since cases of corruption among referees led to
changes in team scores, with points being deducted from some teams and
assigned to others. Here, ``yes'' denotes a positive normality test and ``no''
denotes the opposite, at a significance level of 5\%.
\end{flushleft}
\end{table*}
Some observations about this table are useful. The traditional European
tournaments, based on the DRRS, have non-Gaussian traces, as opposed to the
Brazilian league, a tournament that adopted this system only recently. This
fact deserves some analysis: in Brazil, over the last 6 editions,
(compiled data are presented in Table \ref{main_table}) 4 different football clubs
have won the league. If we consider all 10 disputed editions, we have 6 different
champions, which shows the great diversity of this competition. The Brazilian
League seems to be considerably more random when compared to the European
tournaments. A similarity among teams suggests that favorites are not always
crowned champions and many factors
and small fluctuations can be decisive in the determination of the champion.
This may also indicate that the Brazilian tournament has an abundant and
homogeneous pool of players, unlike the Italian tournament, in which the
traditional teams are able to hire the best players, have well-managed
youth teams, or even sign the ones who play for the Italian national team.
Consider for example Real Madrid and Barcelona in Spain: they govern the
tournament by signing the best players, even from youth teams from abroad (as
is the case with the World's Player of the Year, Lionel Messi who joined
Barcelona at age 13 from Argentina). It is not uncommon for a player who has stood
out in the Brazilian or other Latin American championships to be hired to play in
Europe for the next season, further contributing to the lack of continuity from one
season to the next and to the ``randomization'' of the teams.
In Brazil, there is not a very large financial or economic gap among teams
and although favorites are frequently pointed out by sports pundits before
the beginning of the tournament, they are typically not able to pick the
winners beforehand. In fact, many dark horses, not initially pointed out as
favorites, end up winning the league title. This suggests that, in Brazil,
the champions emerge from very noisy scenarios, as opposed to other
tournaments that only confirm the power of a (favorite) team. One could add
to that the existence of a half-season-long local (or state-based only)
tournament, which makes the predictions widely reported in the press not
particularly trustworthy or reliable. Therefore, it is interesting to check our
model by studying its statistical properties, changing parameters and then
comparing the model with real data.
We perform an analysis of our model considering different initial parameters
$\varphi _{0}=2,10$ and $30$ and the different tournaments (see plot (a) in
Fig. \ref{results_cumulative}). We have used $r_{draw}=0.26$ in our
simulations. It is possible to observe that the model fits the Brazilian
soccer very well for $\varphi _{0}=30$ and $\Delta \varphi =1$. It is
important to mention that extreme values (minimal and maximal) are
reproduced with very good agreement. For example, plot (b) in Fig. \ref{results_cumulative} shows that minimal and maximal values obtained by our
model (full squares and circles respectively, in black) are very similar to
the ones obtained from the six editions of the Brazilian tournament (open
squares and circles, in blue). We also plot continuous lines that represent
the average values obtained in each case. This shows
that our model and its fluctuations capture the nuances and emerging
statistical properties of the Brazilian tournament which, however, seems not
to be the case of Calcio and La Liga. Plot (a) of Fig. \ref{results_cumulative} shows that the cumulative frequency of these two
tournaments are very similar to one another and that no value of the
parameter $\varphi _{0}$ (many others were tested)
is capable of reproducing their data.
\begin{figure}[h]
\begin{center}
\includegraphics[width=3.5in]{cumulative_all_new.eps}\includegraphics[width=3.5in]{maximum_minimum_new.eps}
\end{center}
\caption{Figure (a):\textbf{ A comparison between results produced
by our model, using different initial parameters $\protect\varphi_{0}=2,10$
and $30$ and different tournaments}. We have used $r_{draw}=0.26$.
Results were obtained considering 6 runs of our artificial tournament. We
can observe that the model fits the Brazilian league (black continuous curve
obtained from 6 editions of the Brazilian league) precisely for $\protect\varphi_{0}=30$.
On the other hand, Calcio and La Liga are not reproduced by our model indicating clear
differences between such tournaments and the Brasileir\~{a}o. Figure (b):
\textbf{This figure shows that minimal and maximal values obtained by our model}
(full squares and circles respectively, depicted in black) are very similar to the ones
obtained in the six editions of the Brazilian League (open
squares and circles, in blue). The continuous line corresponds to the
average values obtained in each case for a comparison.}
\label{results_cumulative}
\end{figure}
A question that may quickly come to the reader's mind is:
are we modeling something that is entirely random and non-evolutionary,
i.e., could we use a simpler model? The answer, fortunately, is ``not
really''. To understand this, let us consider a completely random and
non-evolutionary model (the probabilities do not change with time), in which a
team wins, loses, or draws with the same probability: 1/3.
\begin{figure}[h]
\begin{center}
\includegraphics[width=3.5in]{cumulative_Brazil_data_vs_random_new.eps}
\end{center}
\caption{\textbf{Comparison of our model (evolutionary) and a totally random
(static) model}: We can observe that the static model does not reproduce the
lower and upper values as well as the shape of the Brazilian League which,
on the other hand, is very well fitted by our model. }
\label{evolutionary_vs_non_evolutionary}
\end{figure}
A comparison of the best fit of our model (evolutionary) with the totally
random (static) model, under the exact same conditions of 20 teams under the
DRRS, is shown in Fig. \ref{evolutionary_vs_non_evolutionary}. We observe
that the static model does not reproduce the lower and upper values as well
as the shape of cumulative frequency of the Brasileir\~{a}o which, on the other hand, is very well
fitted by our model.
\section{Second model: Draw probabilities emerging from the model itself}
Previous authors (see for example \cite{Bitner2007} and \cite{Skinera2009})
claim that in a match between two teams $A$ and $B$, the probability that
the result is ($n_{A}$, $n_{B}$), where $n_{i}$ is the number of goals
scored by team $i$, can be approximated by a Poisson distribution
\begin{equation}
\begin{array}{lll}
\Pr \left[ (n_{A},n_{B})|(\phi _{A},\phi _{B})\right] & = & \Pr (n_{A},\phi _{A})\cdot \Pr (n_{B},\phi _{B}) \\
& & \\
& = & \dfrac{\phi _{A}^{n_{A}}}{n_{A}!}e^{-\phi _{A}}\cdot \dfrac{\phi _{B}^{n_{B}}}{n_{B}!}e^{-\phi _{B}}
\end{array}
\label{poisson}
\end{equation}
where $\phi _{A}$ and $\phi _{B}$, the average number of goals in a game,
are taken as the abilities of the teams.
\begin{figure}[h]
\begin{center}
\includegraphics[width=3in]{fig_m_infinity_new.eps}
\end{center}
\caption{The probability $r_{draw}$ as a function of $\protect\phi_{A}$ and $\protect\phi_{B}$.}
\label{plot_m_infinity}
\end{figure}
It is very interesting to work with models that have as few free parameters
as possible; the imposition of an ad-hoc draw probability in our previous
version of the model can therefore be seen as a shortcoming. It is possible
to overcome this problem, and at the same time maintain the same model
properties, by using Eq.~\ref{poisson} as a means to calculate $r_{draw}$ in
the previously defined algorithm. Given two teams, with potentials
$\varphi_{A}$ and $\varphi_{B}$, we can calculate $r_{draw}$ by making the
direct identification of the abilities with our concept of potential, i.e.,
$\phi_{A}=\varphi_{A}$ and $\phi_{B}=\varphi_{B}$, so that
\begin{equation*}
\begin{array}{lll}
r_{draw} & = & \Pr\left[ (n_{A}=n_{B})|(\phi_{A},\phi_{B})\right] \\
& & \\
& = & {\displaystyle\sum\limits_{n=0}^{\infty}} \frac{\phi_{A}^{n}}{n!}e^{-\phi_{A}}\cdot\frac{\phi_{B}^{n}}{n!}e^{-\phi_{B}} \\
& & \\
& = & {\displaystyle\sum\limits_{n=0}^{\infty}} \frac{(\phi_{A}\phi_{B})^{n}}{n!^{2}}e^{-(\phi_{A}+\phi_{B})},
\end{array}
\end{equation*}
leaving our model with only one free parameter.
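As a side note, this infinite sum can be written in closed form as $e^{-(\phi_{A}+\phi_{B})}I_{0}(2\sqrt{\phi_{A}\phi_{B}})$, where $I_{0}$ is the modified Bessel function of the first kind, which gives a convenient way to evaluate it numerically. A small sketch (using SciPy, which is our choice of library, not part of the original implementation):
\begin{verbatim}
import numpy as np
from scipy.special import i0e   # i0e(x) = exp(-x) * I0(x)

def r_draw_poisson(phi_a, phi_b):
    """Draw probability from the full (untruncated) Poisson sum."""
    x = 2.0 * np.sqrt(phi_a * phi_b)
    return np.exp(x - (phi_a + phi_b)) * i0e(x)

# r_draw_poisson(1.57, 1.57) is roughly 0.24, close to the empirical 0.26
\end{verbatim}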
The first important point is that the draw probability is now independent of
ad-hoc information obtained from tournaments, arising instead as a property of the
teams themselves, i.e., their ability to score goals. A plot of $r_{draw}$ as a function of $\phi _{A}$ and $\phi
_{B}$ is shown in Fig.~\ref{plot_m_infinity}. However, it is important to
mention that this definition must be adapted if $\varphi $ is not exactly the
average number of goals of the team per match ($\phi $): since the number of
goals in a match is finite, extending the sum to infinity can have drastic
consequences on the draw probabilities. It should be noted that the
potential of the teams (which may be rescaled) represents the abilities of
the teams, but can be very different from the average number of goals
scored by the teams in a given match. In this case, a solution to this problem
is to consider a truncated Poisson function
\begin{equation*}
f^{trunc}(n,\phi )=Z(\phi ,m)^{-1}\frac{\phi ^{n}e^{-\phi }}{n!},
\end{equation*}
where
\begin{equation*}
Z(\phi ,m)={\displaystyle\sum\limits_{j=0}^{m}}\frac{\phi ^{j}e^{-\phi }}{j!},
\end{equation*}
with $m$ being the appropriate cutoff for the modeling, which must be suitably
adjusted. Therefore, $r_{draw}$ is now re-written as
\begin{equation}
r_{draw}=\left[ Z(\phi _{A},m)\,Z(\phi _{B},m)\right] ^{-1}{\displaystyle\sum\limits_{n=0}^{m}}\frac{(\phi _{A}\phi _{B})^{n}}{n!^{2}}e^{-(\phi _{A}+\phi _{B})}\text{.} \label{new_rdraw}
\end{equation}
This is a solution, but $m$ must be adjusted according to the initial potential
$\varphi _{0}$ if we use $\Delta \varphi =1$.
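A direct numerical sketch of Eq.~(\ref{new_rdraw}) (our own illustration, not the original code) is:
\begin{verbatim}
import math

def r_draw_truncated(phi_a, phi_b, m):
    """Draw probability when each team scores at most m goals, i.e. both
    marginals follow the truncated Poisson distribution f^trunc."""
    z_a = sum(phi_a**j * math.exp(-phi_a) / math.factorial(j)
              for j in range(m + 1))
    z_b = sum(phi_b**j * math.exp(-phi_b) / math.factorial(j)
              for j in range(m + 1))
    s = sum((phi_a * phi_b)**n * math.exp(-(phi_a + phi_b))
            / math.factorial(n)**2 for n in range(m + 1))
    return s / (z_a * z_b)
\end{verbatim}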
However, it is also possible to solve this problem by suitably scaling the potential to be
the average number of goals in a match. Therefore, if we start the simulations with the potential
representing the average number of goals of a real tournament, $\widehat{\varphi }_{0}=\lambda =\varphi _{0}\Delta \varphi $,
then the increment must be given by $\Delta \varphi =\lambda /\varphi _{0}$, so that the win/lose probabilities are kept fixed, according to
Eq.~\ref{invariance}. In this case, $m\rightarrow \infty $ presents the best fits and gives the correct
draw probabilities $r_{draw}$, making the model again more
suitable, since $m$ is not an experimental parameter.
In the next section, we will present our results based on this new approach
for the calculation of $r_{draw}$ and we show that real data are also reproduced by both methods
presented in this section.
\section{Results Part II: variable draw probabilities}
We perform new simulations considering our previously described algorithm,
but allowing for variable draw probabilities. As before, we organize the
teams via the DRRS, starting with fixed potentials and take averages over
many runs of the model. Our first analysis was to reproduce the final score
of the Brazilian tournament tuning different values of $m$.
\begin{figure}[h]
\begin{center}
\includegraphics[width=2.0in]{fig_m=10_drawing_new.eps}
\includegraphics[width=2.0in]{fig_m=20_drawing_new.eps}
\includegraphics[width=2.0in]{fig_m=30_drawing_new.eps}
\end{center}
\caption{Plots of $r_{draw}$ given by equation \protect\ref{new_rdraw} for
different values of $m$.}
\label{different_m}
\end{figure}
Figure \ref{different_m} shows the surfaces corresponding to $r_{draw}$
calculated by equation~\ref{new_rdraw}. We can see that higher $m$ values
result in higher draw probabilities for low potentials. Figure \ref{cumulative_different_m} shows the cumulative distribution of simulated
final scores from our artificial tournament generated by the model
considering the variable $r_{draw}$ given by equation \ref{new_rdraw}. Three
different values of $m$ (10, 20 and 30) were tested. We can see that the best
value to fit the real data extracted from the same 6 Brazilian soccer
tournaments (continuous curve) is $m=20$. All teams started the simulations
with $\varphi _{0}=\phi _{0}=30$, as calibrated previously in Results Part I.
\begin{figure}[h]
\begin{center}
\includegraphics[width=3.5in]{fig_m_cumulative_new.eps}
\end{center}
\caption{Cumulative distribution of simulated tournament generated by the
model considering the variable $r_{draw}$ given by equation \protect\ref{new_rdraw}. We show that the best value to fit the real data extracted
from the same 6 Brazilian soccer tournaments (continuous curve) is $m=20$. }
\label{cumulative_different_m}
\end{figure}
Since $m$ is not an accessible parameter of tournaments, we can start from
$\widehat{\varphi }_{0}=1.57$ as the initial potential of the teams, which corresponds to the
average number of goals scored by a team in a match of the Brazilian tournament studied, and adjust
$\Delta \varphi =1.57/30$ in our algorithm. In this case, an
excellent fit (see figure \ref{standardt_poisson}) is obtained, considering the regular Poisson
distribution ($m\rightarrow \infty $). Naturally, the
number 30 follows from our initial calibration of the model (when we fixed
$r_{draw}=0.26$, and $\varphi _{0}=30$ led to an excellent fit, as we
previously observed).
\begin{figure}[h]
\begin{center}
\includegraphics[width=3.5in]{standard_poisson_new.eps}
\end{center}
\caption{Cumulative distribution of the simulated tournament generated by the model,
considering $r_{draw}$ calculated from the standard Poisson distribution
($m\rightarrow \infty $) (black balls), averaged over 6 repetitions. The continuous
curve shows the 6 editions of the Brazilian soccer tournament.}
\label{standardt_poisson}
\end{figure}
As can be seen in the figures, we can obtain good fits with this new version
of the model, which is a little more complicated than the previous version with
constant draw probabilities, but it uses only information inherent in the
model itself.
Finally, to test some scaling properties of the model, we reproduce the same plot of
figure \ref{figure_scaling_30} using
$r_{draw}$ according to Eq.~\ref{new_rdraw}, which is shown in plot (a) in
figure \ref{scaling}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=3.5in]{no_fixed_draw_no_scaling_new.eps}\includegraphics[width=3.5in]{no_fixed_draw_scaling_new.eps}
\end{center}
\caption{\textbf{Plot (a)}: Histograms of final scores generated with
$n_{run}=100$ different simulated tournaments using variable $r_{draw}$. The
figure is similar to figure \protect\ref{figure_scaling_30} since the same
$\protect\varphi_{0}=30$ was used. \textbf{Plot (b)}: Scaling corresponding
to plot (a).}
\label{scaling}
\end{figure}
This figure shows histograms of final scores generated by $n_{run}=100$
different simulated tournaments using variable probabilities $r_{draw}$. We can observe that
the transition from one to two peaks is entirely
due to the imposition that a team in our model has a minimal potential
$\varphi_i \geq 1$. This effect can be overcome if we scale $\varphi_{0}$ with the system size,
and it is possible to collapse the curves
by re-writing the scores as standard normal variables, i.e.,
\begin{equation*}
score(n)=s(n)\rightarrow s^{\ast}(n)=\frac{s(n)-\left\langle s(n)\right\rangle }{\left\langle (\Delta s(n))^{2}\right\rangle }.
\end{equation*}
Thus, if $H(s^{\ast}(n),\varphi_{0},n)$ denotes the histogram of normalized
scores, we have the scaling relation $H(s^{\ast}(n_{1}),\varphi_{0},n_{1})=H(s^{\ast}(bn_{1}),b\varphi_{0},bn_{1})$. Plot (b) in
figure \ref{scaling} shows this scaling. We take the logarithms of the
histogram to show the collapse more explicitly. The small inset plot is
taken without the logarithm. We can see that different tournaments can
present the same properties as long as the teams' potentials are rescaled.
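A short sketch of the normalization used for the collapse (our own illustration; the text divides by the second moment $\langle(\Delta s)^{2}\rangle$, whereas a conventional standard score would divide by its square root):
\begin{verbatim}
import numpy as np

def standardized_scores(scores):
    """Rescale the final scores of one simulated tournament to the
    variable s* used for the histogram collapse."""
    s = np.asarray(scores, dtype=float)
    fluct = ((s - s.mean())**2).mean()   # <(Delta s)^2>, as in the text
    return (s - s.mean()) / fluct        # use np.sqrt(fluct) for a z-score
\end{verbatim}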
\section*{Summary and Conclusions}
In this paper, we have explored a new model that reproduces in detail the
final classification score (standings) of the teams in the Brazilian Soccer
tournament. The Brazilian tournament, as opposed to other tournaments such
as the Italian and Spanish Leagues, has some peculiarities and seems to
display scores that emerge from a dynamics that preserves its Gaussian
traces. This can be justified by several reasons: Brazilian tournaments have
many distinct champions and the competition is not dominated by a few teams.
Favorite teams frequently perform badly, and there is an inexhaustible
source of new players, making the tournament more balanced and very
susceptible to small fluctuations. Our model also seems to be a good
laboratory to study fluctuations that may happen in large tournaments. More
specifically the model presents a transition from a one to a two peaked
distribution of the final scores (standings) histograms that correspond to
disputes near the champion's region and another closer to the region of the
last placed team. Moreover, we also presented results relative to scaling of
histograms of final scores and showed that tournaments based on our model
for different sizes collapse on the same curve when we consider normal standard
deviations for final scores and a linear scaling for potentials.
Here, it is important to mention that after the present contribution
was completed, we were alerted to the existence of a similar model with more parameters
proposed to study statistical properties of tournaments~\cite{ribeiro2010}. However, our contribution is very different,
because in that study, the matches are generated under the mean field approximation regime based on a
Markovian random walk. In such an approximation, therefore, the teams do not evolve in time.
\section*{About data extraction for the validation of the model}
Table \ref{main_table} shows the data from real tournaments used to compare
with the results produced by our model, as illustrated in Figs.~\ref{results_cumulative} and \ref{evolutionary_vs_non_evolutionary}. The data
(available publicly at http://www.wikipedia.org/) is based on records from
the Italian, Spanish, and Brazilian tournaments during the 2005/2006 -
2010/2011 seasons. For the Italian Calcio, the year 2006 (which corresponds
to season 2005/2006) was replaced by 2004/2005, since cases of corruption
and a referee scandal in 2005/2006 have supposedly changed the scores of
teams, and points were reduced from some teams and awarded to others. To
obtain the data from our model, we implemented a simple algorithm in the
FORTRAN language which computes the possible games according to the DRRS
system and evolves the potential and points of teams producing a final
classification score, or even a large number of final classification scores.
This is used to plot Figures \ref{figure_scaling_2} and \ref{figure_scaling_30}, which explore the details of the model. All other
figures of the paper represent a comparison of the data extracted from
Wikipedia and those produced by our model.
\section*{Acknowledgments}
The authors thank CNPq (The Brazilian National Research Council) for its
financial support.
\section{Introduction}
\label{sec:introduction}
Football is one of the most popular sports worldwide and European or World Championships, especially the finals, are among the most watched sporting events. The Euro 2016 Final was watched by more than 20 million people in France \cite{variety2016soccer}, and the
Germany vs. France semifinal was watched by almost 30 million people in Germany \cite{variety2016soccer}. But, what about Hungary?
According to the MTVA (Media Services and Support Trust Fund), that operates the television channel M4~Sport, the first Hungarian match was watched by about 1.734 million people, the second by about 1.976 million and the third group match by about 2.318 million people\footnote{According to the Hungarian Central Statistical Office (\acrshort{ksh}), the population of Hungary was about 9.83 million in 2016 \cite{ksh22.1.1.1}.}. With these ratings, the M4~Sport, turned out to be the most watched television channel in Hungary, during those days \cite{hiradohu2016csoportgyoztes}.
The whole participation of the Hungarian national football team was beyond expectations and raised interest even among those who generally do not follow football matches.
But, is it possible to measure/correlate this interest, with a mobile phone network?
Mobile phones can function as sensors that detect the whereabouts and movement of their carrier. In this day and age, practically everyone has a mobile phone, which makes large-scale analyses possible. With enough data, the general mobility customs and reactions to events can also be studied.
The first step is to prepare the data and select the appropriate individuals for the study.
Filtering the subscribers of the \acrshort{cdr} data sets is always a crucial step. Not just to eliminate the inactive users: a subscriber, who only appears a few times in a data set, cannot be used for mobility analysis, but the abnormally active subscribers can also bias the result. Especially if their location does not change, as \acrshort{cdr} data may not only contain records for cell phones, that are carried by people.
Csáji et al. took into account subscribers who had at least 10 activity during the observation period (15 months) \cite{csaji2013exploring}.
Xu et al. chose to use those subscribers, who had at least one activity record at least half of the days during the observation period \cite{xu2018human}. Pappalardo et al. discarded the subscribers who had only one location, and the individuals have at least half as many calls as hours are in the data set. Furthermore, the abnormally active (more than \num{300} calls per day) \acrshort{sim} cards are excluded \cite{pappalardo2015returners}. In \cite{pinter2021evaluating}, we selected the \acrshort{sim} cards, that have activity at least 20 days (out of 30),
the daily mean activity number is at least 40 on workdays and at least 20 on weekends, but not more than \num{1000}. The upper limit is especially important to remove \acrshort{sim} cards, that possibly operate in mobile broadband modems, for example.
Filtering by activity is not necessarily sufficient to keep only individuals in the data set. Type Allocation Codes (\acrshort{tac}), on the other hand, can determine the type of the device and the exact model of a cell phone.
After the right subscribers have been selected, it is common to determine the home and work locations \cite{vanhoof2018assessing,mamei2019evaluating,pappalardo2021evaluation}, then between these two crucial locations, the commuting trends can be identified.
The commuting is studied between cities \cite{lee2018urban,zagatti2018trip,mamei2019evaluating,barbosa2020uncovering} or within a city \cite{diao2016inferring,jiang2017activity,fiadino2017call,fan2018estimation,ni2018spatial,ghahramani2018mobile,ghahramani2018extracting,pinter2021evaluating}.
Apart from commuting and connectivity analysis, \acrshort{cdr} processing is often used \cite{traag2011social,xavier2012analyzing,mamei2016estimating,furletti2017discovering,marques2018understanding,pinter2019activity,rotman2020using,hiir2020impact} for large social event detection.
When thousands of people are in the same place at the same time, they generate a significant `anomaly' in the data, whereas small groups usually do not stand out from the `noise'. This is especially true when the passive, transparent communication between the mobile phone device and the cell is not included in the data, but only the active communication (voice calls, text messages and data transfer) is recorded.
In \cite{pinter2019activity} and \cite{rotman2020using}, mass protests are analyzed via mobile phone network data.
In \cite{traag2011social,mamei2016estimating,xavier2012analyzing} and \cite{hiir2020impact}, the authors examined the location of stadiums, where the football matches took place. Traag et al. \cite{traag2011social} and Hiir et al. \cite{hiir2020impact} also found that the mobile phone activity of the attendees decreased significantly. In \cite{traag2011social}, z-score is also used to express the activity deviation during the social event from the average. Xavier et al. compared the reported number of attendees of these events with the detected ones. Furletti et al. also analyzed sociopolitical events, football matches and concerts, in Rome \cite{furletti2017discovering}.
This paper focuses on football matches, that however, took place in a remote country (France), and the fans' activity are studied in Budapest.
Mobility indicators, such as Radius of Gyration or Entropy, are often calculated \cite{pappalardo2015returners,xu2018human} to describe and classify the subscribers' mobility customs. Furthermore, using mobility to infer about Social Economic Status (\acrshort{ses}) is a current direction of mobility analysis \cite{xu2018human,cottineau2019mobile,barbosa2020uncovering,pinter2021evaluating}.
Cottineau et al. \cite{cottineau2019mobile} explored the relationship between mobile phone data and traditional socioeconomic information from the national census in French cities.
Barbosa et al. found significant differences in the average travel distance between the low and high income groups in Brazil \cite{barbosa2020uncovering}. Xu et al. \cite{xu2018human} found opposite travel tendencies in mobility of Singapore and Boston. In our previous work \cite{pinter2021evaluating}, we showed that the real estate price of the home and work locations characterize the mobility and validated our results with census data.
In this paper, the price and the age of the subscribers' mobile phones are proposed as a source of the socioeconomic indicator.
While Blumenstock et al. used the call history as a factor of socioeconomic status \cite{blumenstock2015predicting},
Sultan et al. \cite{sultan2015mobile} applied mobile phone prices as socioeconomic indicator and identified areas where more expensive phones appear more often, however, only manually collected market prices were used.
Mobile phone network data is also used to analyze the human mobility during COVID-19 pandemic and the effectiveness of the restrictions.
Willberg et al. identified a significant decrease of the population presence in the largest cities of Finland after the lockdown compared to a usual week \cite{willberg2021escaping}.
Bushman et al. analyzed the compliance to social distancing in the US using mobile phone data \cite{bushman2020effectiveness}. Gao et al. found negative correlation in stay-at-home distancing and COVID-19 increase rate \cite{gao2020association}.
Still, these analyses might not be common enough. Oliver et al. asked: `Why is the use of mobile phone data not widespread, or a standard, in tackling epidemics?' \cite{oliver2020mobile}. This, however, is not within the scope of this paper.
In this study, we analyzed the mobile phone network activity before, during and after the matches of the Hungarian national football team. The Call Detail Records (\acrshort{cdr}), analyzed in this study, have been recorded in Budapest; however, the matches took place in France. We present another example of social sensing, using \acrshort{cdr}s, in an indirect and a direct way. Indirectly, as the mobile phone activity of the sport fans, residing in Budapest, is studied during matches played in France. Directly, as the spontaneous festival on the streets of Budapest after the third match, and the welcome event at the Heroes' Square are presented from a data perspective.
The Call Detail Records are filtered by the Type Allocation Codes (\acrshort{tac}) to remove those Subscriber Identity Module (\acrshort{sim}) cards, that do not operate in mobile phones, thus not used by actual people. The price and age of the cell phones are also analyzed in contrast of the subscribers' age and mobility customs.
The contributions of this paper are summarized briefly as follows:
\begin{enumerate}
\item Fusing \acrshort{cdr} data set with mobile phone prices and release dates.
\item Filtering out \acrshort{sim} cards, that do not operate in mobile phones.
\item Demonstrating connection between the phone price and the mobility customs.
\item Proposing mobile phone price as a \acrshort{ses} indicator.
\item Attendees of the large social events are compared to the rest of the subscribers based on their mobility and \acrshort{ses}.
\end{enumerate}
The rest of this paper is organized as follows. The utilized data is described in Section~\ref{sec:materials}, then, in Section~\ref{sec:methodology}, the applied methodology is summarized, and in Section~\ref{sec:results}, the results of this study are introduced. Finally, in Section~\ref{sec:conclusions}, the findings of the paper are summarized and concluded.
\section{Materials}
\label{sec:materials}
Vodafone Hungary, one of the three mobile phone operators providing services in Hungary, provided anonymized \acrshort{cdr} data for this study. The observation area was Budapest, capital of Hungary and its agglomeration, and the observation period is one month (June 2016). In 2016 Q2, the nationwide market share of Vodafone Hungary was 25.3\% \cite{nmhh_mobile_market_report}. This data set contains \num{2291246932} records from \num{2063005} unique \acrshort{sim} cards, and does not specify the type of the activity (voice calls, text messages or data transfers).
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{figures/sim_activity}
\caption{\acrshort{sim} cards categorized by the number of activity records.
The \acrshort{sim} cards with more than 1000 activity records (26.98\% of the \acrshort{sim} cards) provide the majority (91.31\%) of the activity.}
\label{fig:vod201606_sim_activity}
\end{figure}
Figure~\ref{fig:vod201606_sim_activity}, shows the activity distribution between the activity categories of the \acrshort{sim} cards. The dominance of the last category, \acrshort{sim} cards with more than 1000 activity records, is even more significant: this almost 27\% of the \acrshort{sim} cards produces more than 91\% of the activity.
Figure~\ref{fig:vod201606_activity_by_days}, shows the \acrshort{sim} card distribution by the number of active days. Only 34.59\% of the \acrshort{sim} cards have activity on at least 21 different days.
There were \num{241824} \acrshort{sim} cards (11.72\%) that appear on at least two days, but whose first and last activities are not more than seven days apart. This may indicate the presence of tourists; high tourism is usual during this part of the year.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{figures/sim_activity_by_days}
\caption{\acrshort{sim} card distribution by the number of active days.}
\label{fig:vod201606_activity_by_days}
\end{figure}
The obtained data was in a `wide' format, and contained a
\acrshort{sim} ID, a timestamp, cell ID, the base station (site) coordinates in \acrshort{wgs84} projection, the subscriber (age, sex) and subscription details (consumer/business and prepaid/postpaid) and the Type Allocation Code (\acrshort{tac}) of the device.
The \acrshort{tac} is the first 8 digits of the International Mobile Equipment Identity (\acrshort{imei}) number, allocated by the GSM Association and uniquely identifies the mobile phone model.
The Type Allocation Codes are provided for every record, because a subscriber can change their device at any time. Naturally, most of the subscribers (\num{95.71}\%) use only one device during the whole observation period, but there are some subscribers, maybe mobile phone repair shops, who use multiple devices (see Figure~\ref{fig:num_of_diff_tac}).
As a part of the data cleaning, the wide format has been normalized. The CDR table contains only the \acrshort{sim} ID, the timestamp and the cell ID. A table is formed from the subscriber and the subscription details, and another table to track the device changes of the subscriber.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{figures/daily_activity}
\caption{Number of daily activity records, during two weeks of June 2016. The matches of the Hungarian national football team took place from June 14 to June 26.}
\label{fig:vod201606_daily_activity}
\end{figure}
While the subscription details are available for every \acrshort{sim} card, the subscriber information is missing in slightly more than 40\% of the cases, presumably because of the subscribers' preferences of personal data usability.
Figure~\ref{fig:age_histogram}, shows the age distribution of the subscribers, whose data is available (\num{58.65}\%), in respect of the subscription type. Note that, this may not represent the age distribution of the population, not even the customers of Vodafone Hungary, as one is allowed to have multiple subscription and the actual user of the phone may differ from the owner of the subscription. Nevertheless, it is still clear that among the elderly people, the prepaid subscriptions are more popular.
Figure~\ref{fig:vod201606_daily_activity}, shows the number of daily activity records during the second half of the month. Weekends (brown bars) show significantly less activity, hence the activity during the matches is compared to the weekday or weekend average, according to the day of the match.
Although the data contains cell IDs, only the base station locations are known, where the cell antennas are located.
As a base station usually serves multiple cells, the cells have been merged by the serving base stations. After the merge, 665 locations (sites) remained with known geographic locations. To estimate the covered area of these sites, Voronoi tessellation has been performed on the locations. This is a common practice \cite{pappalardo2016analytical,csaji2013exploring,vanhoof2018comparing,candia2008uncovering,novovic2020uncovering,trasarti2015discovering} for \acrshort{cdr} processing.
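A minimal sketch of this step, assuming SciPy's Qhull-based implementation and placeholder site coordinates (the real coordinates come from the cleaned site table), is:
\begin{verbatim}
import numpy as np
from scipy.spatial import Voronoi

# placeholder (longitude, latitude) pairs, one per base station site
site_coords = np.array([[19.040, 47.498], [19.062, 47.510],
                        [19.026, 47.481], [19.091, 47.530],
                        [18.998, 47.455]])
vor = Voronoi(site_coords)
# vor.point_region, vor.regions and vor.vertices describe the polygon
# approximating the coverage area of each site; in practice the open
# polygons are clipped to the boundary of the observation area
\end{verbatim}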
\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/num_of_diff_tac}
\caption{Number of used phones.}
\label{fig:num_of_diff_tac}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/age_histogram}
\caption{Subscribers' age distribution.}
\label{fig:age_histogram}
\end{subfigure}
\caption{The number of different \acrshort{tac}s used by the subscribers, and the subscriber' age distribution in respect of the subscription type.}
\label{fig:subscriber_age_device_num}
\end{figure}
\subsection{Resolving Type Allocation Codes}
The socioeconomic status \acrshort{ses} of the members in the celebrating crowd have been intended to characterize by the mobile device they use. The preliminary assumption was that the price of the mobile phone represents the \acrshort{ses} of a person.
According to our knowledge, there is no publicly available \acrshort{tac} database to resolve the \acrshort{tac}s to manufacturer and model, although some vendors (e.g., Apple, Nokia) publishes the \acrshort{tac}s of their products.
The exact model of the phone is required to know how recent and expensive a mobile phone is. Even this is not enough to determine how much the cell phone actually cost the subscriber, as they could have bought it on sale or at a discount via the operator in exchange for signing an x-year contract. Still, the consumer price should indicate the order of magnitude of the phone price.
The dataset of \acrshort{tac}s provided by ``51Degrees'' has been used, representing the model information with three columns: `HardwareVendor', `HardwareFamily' and `HardwareModel'. The company mostly deals with smartphones that can browse the web, so feature phones and other GSM-capable devices are usually not covered by the data set. Release date and inflated price columns are also included, but these are usually not known, making the data unsuitable to use on its own.
Although it cannot be separated by type, the \acrshort{cdr} data contains not only call and text message records, but data transfer as well. Furthermore, some \acrshort{sim} cards do not operate in phones, but in other -- often immobile -- devices like a 3G router or a modem. 51Degrees managed to annotate several \acrshort{tac}s as modems or other non-phone devices. This was extended by manual search on the most frequent \acrshort{tac}s.
There were \num{324793} \acrshort{sim} cards that uses only one device during the observation period and operates in a non-phone device.
\subsection{Fusing Databases}
For a more extensive mobile phone price database, a scraped GSMArena database\cite{mohit_gsmarena} has been used. GSMArena\footnote{\url{https://www.gsmarena.com/}} has a large and respectable database, that is also used in other studies\cite{reddi2018two,zehtab2021multimodal}. The concatenation of the brand and model fields of the GSMArena database could serve as an identifier for the database fusion. 51Degrees stores the hardware vendor, family and model, where the hardware family often contains a marketing name (e.g., [Apple, iPhone 7, A1778]). As these fields are not always properly distinguished, the concatenation of the three fields may contain duplications (e.g., [Microsoft, Nokia Lumia 820, Lumia 820]).
So, for the 51Degrees records, three identifiers are built using the concatenation of fields (i) vendor + family, (ii) vendor + model and (iii) vendor + family + model, and all the three versions are matched against the GSMArena records.
Another step of the data cleaning is to correct the name changes. For example, BlackBerries were manufactured by RIM (e.g., [RIM, BlackBerry Bold 9700, RCM71UW]), but later, the company name was changed to BlackBerry and the database records are not always consistent in this matter. The same situation occurs due to the Nokia acquisition by Microsoft.
To match these composite identifiers, simple string equality cannot be used due to writing differences, so fuzzy string matching is applied using the FuzzyWuzzy Python package, which uses Levenshtein Distance to calculate the differences between strings. This method is applied for all the three identifiers from the 51Degrees data set and the duplicated matches (e.g., when the family and the model is the same) were removed.
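The matching step could look roughly like the following sketch (the illustrative records and the 90-point acceptance threshold are our assumptions, not values taken from the original pipeline):
\begin{verbatim}
from fuzzywuzzy import fuzz, process

gsmarena_ids = ["Apple iPhone 7", "Microsoft Lumia 820",
                "Samsung Galaxy S7"]                 # brand + model
record_51d = {"vendor": "Apple", "family": "iPhone 7", "model": "A1778"}

candidates = [                                       # the three identifiers
    f"{record_51d['vendor']} {record_51d['family']}",
    f"{record_51d['vendor']} {record_51d['model']}",
    f"{record_51d['vendor']} {record_51d['family']} {record_51d['model']}",
]

matches = set()
for cand in candidates:
    best, score = process.extractOne(cand, gsmarena_ids,
                                     scorer=fuzz.token_sort_ratio)
    if score >= 90:               # assumed acceptance threshold
        matches.add(best)         # the set collapses duplicated matches
\end{verbatim}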
Mapping the GSMArena database to the 51Degrees adds phone price and release date information to the \acrshort{tac}s, that can merged with the \acrshort{cdr}s.
From the GSMArena data, two indicators have been extracted: (i) price of the phone (in EUR), and (ii) the relative age of the phone (in months). The phone price was left intact without taking into consideration the depreciation, and the relative age of the phone is calculated as the difference of the date of the \acrshort{cdr} data set (June 2016) and the release date of the phone.
\section{Methodology}
\label{sec:methodology}
The framework, introduced in our earlier work\cite{pinter2021evaluating}, has been applied to process the mobile phone network data. The \acrshort{cdr}s are normalized, cleaned and the mobility metrics (Section \ref{sec:mobility_metrics}) are determined for every subscriber. The records can be filtered spatially and temporally, both of these filtering is applied for this work. Additionally, a group of \acrshort{sim} cards can be selected from the activity records.
Only temporal filtering is applied to visualize the activity trends during the football matches. Figures \ref{fig:aut_hun_timeseries}, \ref{fig:isl_hun_timeseries}, \ref{fig:hun_prt_timeseries}, \ref{fig:post_match_festival_timeseries}, \ref{fig:hun_prt_activity_fan_activity} and \ref{fig:hun_bel_timeseries}), illustrate the activity of the subscribers in the whole observation area during the matches, including the two hours before and after the matches.
For the celebration after the Hungary vs. Portugal match, spatial and temporal filtering is applied to select the area of interest (Budapest downtown) in the given time interval.
To determine the activity levels for the map, Figure~\ref{fig:post_match_festival}, the match-day activity, the average weekdays activity (without the match-day) and the Z-scores
\footnote{The standard score (or z-score) is defined as ${z = \frac{x-\mu}{\sigma}}$, where $\mu$ is the mean and $\sigma$ is the standard deviation.}
are determined for the sites of the area of interest (downtown), in the selected time interval (20:15--20:20). We observed that the standard deviation would be higher, without removing the target-day activity from the reference average, consequently the Z-score would be lower and the relative differences less consistent.
The histogram of the Z-scores was generated for the selected sites (Figure~\ref{fig:zscore_hist}) to determine the activity categories. A zero value means that the activity level equals the average, but a wider interval (between $-2$ and $2$) is considered average to allow some variation.
Sites with a Z-score between $2$ and $8$ are considered to have high activity during the given time interval. There are sites with either low (below $-2$) or very high activity (over $8$).
The same method is applied for the map of Figure~\ref{fig:heroes_square_welcoming}, but as the area of interest and the event differs, the thresholds are not the same (see Section~\ref{sec:homecoming}).
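A compact sketch of this categorization (our own illustration, assuming the per-site activity counts of the selected interval are already aggregated into arrays; the thresholds shown are the ones used for the downtown case):
\begin{verbatim}
import numpy as np
import pandas as pd

def categorize_sites(match_day, ref_mean, ref_std):
    """Z-score of the match-day activity of every site against the
    reference weekday average (match day excluded), binned into the
    activity categories used for the maps."""
    z = (match_day - ref_mean) / ref_std
    labels = ['low', 'average', 'high', 'very high']
    cat = pd.cut(z, bins=[-np.inf, -2, 2, 8, np.inf], labels=labels)
    return cat, z
\end{verbatim}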
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{figures/downtown_cells_zscore_hist}
\caption{Z-score distribution of the downtown sites, with the activity level thresholds at $-2$, $2$ and $8$, using the same colors as in Figure~\ref{fig:post_match_festival}.}
\label{fig:zscore_hist}
\end{figure}
The groups of football fans are formed from the subscribers based only on the activity during the Hungary vs. Portugal match. The owners of those \acrshort{sim} cards (excluding the ones operating in non-phone devices) that were active after at least two goals are considered active football fans. The properties of these subscribers, including the age, mobility metrics, phone age and price are compared to the rest of the subscribers (Figure \ref{fig:phone_age_and_price_of_subscribers}).
\subsection{Mobility Metrics}
\label{sec:mobility_metrics}
The metrics of Radius of Gyration and Entropy have been used to characterize human mobility. These indicators are determined for every subscriber, omitting those \acrshort{sim} cards that operate in non-phone devices. In this study, locations are represented by the base stations.
The Radius of Gyration \cite{gonzalez2008understanding} is the radius of a circle, where an individual (represented by a \acrshort{sim} card) can usually be found.
It was originally defined in Equation~(\ref{eq:gyration}), where $L$ is the set of locations visited by the individual, $r_{cm}$ is the center of mass of these locations, $n_i$ is the number of visits at the $i$-th location, $r_i$ is the position of the $i$-th location and $N$ is the total number of visits.
\begin{equation}
\label{eq:gyration}
r_g = \sqrt{\frac{1}{N} \sum_{i \in L}{n_i (r_i - r_{cm})^2}}
\end{equation}
The entropy characterizes the diversity of the visited locations of an individual's movements, defined as Equation~(\ref{eq:entropy}), where $L$ is the set of locations visited by an individual, $l$ represents a single location, $p(l)$ is the probability of an individual being active at a location $l$ and $N$ is the total number of activities of an individual \cite{pappalardo2016analytical,cottineau2019mobile}.
\begin{equation}
\label{eq:entropy}
e = - \frac{\sum_{l \in L}{p(l) \log p(l)}}{\log N}
\end{equation}
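The two indicators can be computed per subscriber along the following lines (a sketch; it assumes the base station coordinates have already been projected to a metric coordinate system, so that distances are in meters):
\begin{verbatim}
import numpy as np

def radius_of_gyration(points, visits):
    """points: (x, y) of every visited base station (metric projection),
    visits: number of activity records at each of them."""
    points = np.asarray(points, dtype=float)
    visits = np.asarray(visits, dtype=float)
    n_total = visits.sum()
    r_cm = (points * visits[:, None]).sum(axis=0) / n_total
    sq_dist = ((points - r_cm)**2).sum(axis=1)
    return np.sqrt((visits * sq_dist).sum() / n_total)

def entropy(visits):
    """Normalized location entropy of one subscriber (requires more
    than one activity record, otherwise the denominator vanishes)."""
    visits = np.asarray(visits, dtype=float)
    p = visits / visits.sum()
    return -(p * np.log(p)).sum() / np.log(visits.sum())
\end{verbatim}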
\subsection{Socioeconomic Status}
\label{sec:ses}
In our earlier work \cite{pinter2021evaluating}, the real estate price of the subscribers' home locations were used to describe the socioeconomic status.
In this study, the \acrshort{cdr}s are enriched by phone prices and the phone price is assumed to apply as a socioeconomic indicator. To demonstrate the applicability of the mobile phone price as a socioeconomic indicator, it was examined in respect of the mobility indicators, applying Principal Component Analysis (\acrshort{pca}).
The \acrshort{sim} cards are aggregated by the subscriber age categories (5-year steps between 20 and 80) and the phone price categories (100 EUR steps to 700 EUR), the Radius of Gyration and Entropy categories. For the Radius of Gyration, 0.5 km distance ranges are used between 0.5 and 20 km, and the Entropy values are divided into twelve bins between \num{0.05} and \num{1.00}.
The structure of the data used for the Principal Component Analysis defined as follows.
A table has been generated where, every row consists of 40 columns, representing 40 Radius of Gyration bins between 0.5 and 20 km and 20 columns representing 20 Entropy bins, between \num{0.05} and \num{1.00}. The subscribers, belonging to each bin are counted, and the cardinality have been normalized by metrics to be able to compare them.
The rows are not explicitly labeled by these categories, i.e., the subscriber age and the phone price descriptor columns are not provided to the \acrshort{pca} algorithm.
The same table is constructed using weekend/holiday metrics and its rows are appended after the weekdays ones.
When the \acrshort{pca} is applied, the 60-dimension vector is reduced to two dimensions based on the mobility customs, where the bins are weighted by the number of subscribers. The cumulative variance of the two best components is about 61\% (see Figure~\ref{fig:age_pp_pca_var}). The bins, representing the two new dimensions (PC1 and PC2) are plotted (see Figure~\ref{fig:age_pp_pca}) and the markers are colored by the phone price, marker sizes indicate the subscriber age category, using larger markers for younger subscribers.
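The dimensionality reduction itself is standard; the following is a sketch with scikit-learn (our choice of library) on a placeholder table that only mimics the described shape:
\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA

# one row per (age, phone price, subscription, weekday/weekend) group;
# 40 Radius of Gyration bins followed by 20 Entropy bins, normalized.
# A random placeholder is used here instead of the real table.
table = np.random.rand(200, 60)

pca = PCA(n_components=2)
pc = pca.fit_transform(table)                 # PC1 and PC2 per group
print(pca.explained_variance_ratio_.sum())    # ~0.61 for the real data
\end{verbatim}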
\section{Results and Discussion}
\label{sec:results}
As Figure~\ref{fig:age_pp_pca} shows, the markers are clustered by color, in other words, by the phone price, which is proportional to PC1, but inversely proportional to PC2.
Within each phone price group, the younger subscribers (larger markers) are closer to the origin, indicating that the mobility customs of the younger subscribers differ from those of the elders, although this difference is smaller within the higher price categories.
This finding coincides with \cite{fernando2018predicting}, where Fernando et al. found correlation between subscribers' age and mobility metrics.
To give context to Figure~\ref{fig:age_pp_pca}, Figure~\ref{fig:pp_hist}, shows the phone price distribution: most of the phones are within the 50--200 EUR range. Note that, there are only a few phones over 550 EUR, but the owners of those have significantly different mobility patterns.
Figure~\ref{fig:age_pp_pca} does not only show that the phone price forms clusters, but also reveals the effect of the subscription type to the mobility. Within the phone price categories, except the highest with only a very few subscribers, the postpaid groups are usually closer to the origin.
Prepaid subscriptions are usually for those, who do not use their mobile phone extensively, and it seems that people with a prepaid subscription have similar mobility customs as people with less expensive phones but postpaid subscription. That is most notable at (-6, 2) and (-5, -1).
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{figures/age_pp_st_pca}
\caption{Scatter plot of the 2-component \acrshort{pca}. Marker sizes indicate subscriber age category,
the color represents the phone price category and the subscription type (Prepaid/Postpaid) is distinguished by the marker type.}
\label{fig:age_pp_pca}
\end{figure}
\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/pp_hist}
\caption{Phone price distribution.}
\label{fig:pp_hist}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/age_pp_st_pca_var}
\caption{The Pareto histogram for the \acrshort{pca}.}
\label{fig:age_pp_pca_var}
\end{subfigure}
\caption{Phone price distribution and the Pareto histogram for the 60 components of the Principal Component Analysis.}
\label{fig:pp_dist_and_pca}
\end{figure}
Sultan et al. identified areas in Jhelum, Pakistan, where more expensive phones appear more often \cite{sultan2015mobile}. Using the same method, Budapest and its agglomeration were evaluated: the average phone price of the activity records was determined for every site.
The ground truth is that the real estate prices are higher on the Buda side (west of the river Danube) of Budapest and in the downtown \cite{pinter2021evaluating}, and this tendency can clearly be seen in Figure~\ref{fig:avg_phone_price_map}. The airport area has a significantly higher average than its surroundings, which is not surprising.
The spatial tendencies of the mobile phone price, along with the result of the \acrshort{pca} (Figure~\ref{fig:age_pp_pca}), clearly demonstrate the expressiveness of the phone price as a socioeconomic indicator.
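
The per-site averaging behind Figure~\ref{fig:avg_phone_price_map} amounts to a simple aggregation of the activity records; in the illustrative sketch below the record layout and column names are assumptions, not the actual data model.
\begin{verbatim}
import pandas as pd

# toy activity records; the column names are assumptions for illustration
records = pd.DataFrame({
    "site_id": [1, 1, 2, 2, 2, 3],
    "price":   [120.0, 180.0, 90.0, None, 450.0, 60.0],  # EUR, None=unknown
})
avg_price_per_site = (records.dropna(subset=["price"])
                             .groupby("site_id")["price"].mean())
print(avg_price_per_site)   # site-level averages behind the choropleth map
\end{verbatim}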
\begin{figure}[ht]
\centering
\includegraphics[width=.85\linewidth]{figures/avg_phone_price_map}
\caption{Average price (in EUR) of the mobile phones that generated the activity records at each site, during the whole observation period (June 2016).}
\label{fig:avg_phone_price_map}
\end{figure}
The rest of this section examines the results in the chronological order of the Hungarian Euro 2016 matches.
\subsection{Austria vs. Hungary}
The first match, against Austria (Figure~\ref{fig:aut_hun_timeseries}), started at 18:00 on Tuesday, June 14, 2016. Before the match, the activity level was significantly higher than the weekday average, and it decreased until half-time. During the second half, the activity level dropped to the average, which indicates that more people started to follow the match. Right after the Hungarian goals, two significant peaks can be observed in the activity, indicating increased attention and massive usage of mobile devices during the match.
As the data source cannot distinguish the mobile phone activities by type, it cannot be examined what kind of activity caused the peaks. It is assumed that the activity was mostly data transfer or text messages, not phone calls. It does not seem realistic to call someone during the match just because of a goal, but sending a line of text via one of the popular instant messaging services is very plausible.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{figures/aut_hun_20160614_16-22}
\caption{Mobile phone activity during and after the Austria--Hungary Euro 2016 match, in comparison with the average activity of the weekdays.}
\label{fig:aut_hun_timeseries}
\end{figure}
\subsection{Iceland vs. Hungary}
The match against Iceland was played on Saturday, June 18, 2016. Figure~\ref{fig:isl_hun_timeseries} shows the mobile phone activity levels before, during and after the match. As the weekend activity is generally lower (see Figure~\ref{fig:vod201606_daily_activity}), the average of the weekends is used as a reference. The match began at 18:00, and from that point the activity level was significantly below the average, except for the half-time break and, again, the peak after the Hungarian goal.
Interestingly, the Icelandic goal did not result in such a significant peak; only a very moderate one can be seen in the time series.
Traag et al. \cite{traag2011social} also found an activity drop during a game, but in that case the area of the stadium where the match was played was analyzed, and there was no peak during the match.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{figures/isl_hun_20160618_16-22}
\caption{Mobile phone activity during and after the Iceland--Hungary Euro 2016 match, in comparison with the average activity of the weekends.}
\label{fig:isl_hun_timeseries}
\end{figure}
\subsection{Hungary vs. Portugal}
On Wednesday, June 22, 2016, in the third match of the group stage of the 2016 UEFA European Football Championship, Hungary played a draw with Portugal.
Both teams scored three goals, and with this result Hungary won their group and qualified for the knockout phase.
During the match, the mobile phone activity dropped below the average, but the goals against Portugal resulted in significant peaks, especially the first one (see Figure~\ref{fig:hun_prt_timeseries}). On the other hand, the Portuguese equalizer goals did not leave a significant mark on the activity. In the second half, the teams scored four goals in a relatively short time period, but only the Hungarian goals resulted in peaks.
This observation suggests that the football fans had a notable influence on the mobile network traffic.
After the match, the activity level was over the average, which might reflect the spontaneous festival in downtown Budapest. According to the \acrshort{mti} (Hungarian news agency), thousands of people celebrated in the streets, starting from the fan zones, mainly from Erzsébet square, Margaret Island and Szabadság square (denoted in Figure~\ref{fig:post_match_festival}), in the direction of the Budapest Nyugati railway station. The Grand Boulevard was completely occupied by the celebrating crowd, and public transportation was disrupted along the affected lines.
This social event is comparable to mass protests from a mobile phone network perspective. In an earlier work \cite{pinter2018analysis}, we analyzed the mobile phone activity along the route of a mass protest. The activity of the cells was significantly high when the protesters passed through the given cell. In this case, however, the affected area was smaller, and the sites along the Grand Boulevard were busy at the same time after the game.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{figures/hun_prt_20160622_16-22}
\caption{Mobile phone activity during and after the Hungary--Portugal Euro 2016 match, in comparison with the average activity of the weekdays.}
\label{fig:hun_prt_timeseries}
\end{figure}
The activities of the sites (multiple cells aggregated by base station) in downtown Budapest are illustrated in Figure~\ref{fig:post_match_festival_timeseries}.
The highlighted site covers mostly Szabadság square (for the location, see Figure~\ref{fig:post_match_festival} a), where one of the main fan zones was set up with a big screen. The activity curve closely follows the trend of the whole data set (see Figure~\ref{fig:hun_prt_timeseries}): there is high activity before the match, during half-time and, for a short period, after the match. During the match, the activity decreased, except for four less significant peaks around the goals.
In the highlighted site of Figure~\ref{fig:post_match_festival_timeseries}, almost 7 thousand \acrshort{sim} cards were detected between 17:00 and 20:00. The data shows that 53.57\% of the subscribers were between 20 and 50 years old, while 33.49\% had no age data.
After the match, there is a significant increase in the activity of some other sites. These sites are (mostly) around the Grand Boulevard, where the fans marched and celebrated the advancement of the national football team to the knockout phase.
Figure~\ref{fig:post_match_festival} shows the spatial distribution of this social event, using Voronoi polygons generated around the base station locations.
The polygons are colored by the mobile phone network activity increase at 20:15, compared to the average weekday activity. For the comparison, the standard score
was determined for every base station with a 5-minute temporal aggregation. Darker colors indicate a higher activity surplus in an area.
The figure also denotes the three main fan zones in the area and the routes of the fans by arrows, and the affected streets are emphasized.
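
A minimal sketch of this standard score computation is given below; the 5-minute counts and the weekday baseline are generated synthetically here, and the series layout is an assumption rather than the actual data model.
\begin{verbatim}
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# toy 5-minute activity counts of one site on match day, and the weekday
# baseline (mean/std of the same 5-minute slots); real data differs
slots = pd.date_range("2016-06-22 16:00", "2016-06-22 22:00", freq="5min")
match_day = pd.Series(rng.poisson(120, len(slots)), index=slots)
baseline_mean = pd.Series(100.0, index=slots)
baseline_std = pd.Series(10.0, index=slots)

# standard score of every 5-minute bin; each Voronoi polygon of the map
# is colored by this value at 20:15
z = (match_day - baseline_mean) / baseline_std
print(z.loc["2016-06-22 20:15"])
\end{verbatim}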
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{figures/post_match_festival_timeseries}
\caption{Site activities, in Budapest downtown, on the day of the Hungary vs. Portugal football match (June 22, 2016). The highlighted site covers mostly the Szabadság Square, where one of the main fan zones was set up.}
\label{fig:post_match_festival_timeseries}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{figures/post_match_festival}
\caption{After the Hungary vs. Portugal football match, the fans, delirious with joy, filled the streets. The arrows show their route from the main fan zones to and along the Grand Boulevard. Voronoi polygons colored by the mobile phone network activity at the peak of the event, at 20:15.}
\label{fig:post_match_festival}
\end{figure}
\subsubsection*{Who is responsible for the peaks?}
There were three Hungarian goals during the match, hence there were three peaks, starting at 18:18, 19:02 and 19:18, each with a fall-time of about 5 minutes. To answer this question, the \acrshort{sim} cards that were active during at least two of the peaks were selected. Selecting the \acrshort{sim} cards that were active during any of the peaks would also include many subscribers who cannot be considered football fans; requiring activity during all three peaks, on the other hand, would be too restrictive.
Figure~\ref{fig:hun_prt_activity_of_fans} presents the activity of the selected \num{44646} \acrshort{sim} cards, whose owners may be considered football fans.
Removing these \acrshort{sim} cards from the data set should result in an activity curve without peaks that is, at the same time, similar in tendency to the average activity. However, as Figure~\ref{fig:hun_prt_activity_without_fans} shows, the activity still drops during the match. Therefore, the `football fan' category should be divided into `active' and `passive' fans, from the mobile phone network perspective. Active fans are assumed to express their joy via the mobile phone network (presumably accessing social media) and cause the peaks. It seems that the passive fans ceased their other activities and watched the game, which caused some lack of activity compared to the average.
By removing the active fans from the observed set of \acrshort{sim} cards, the activity level decreased in general (Figure~\ref{fig:hun_prt_activity_without_fans}). This is not surprising: as these people reacted to the goals, they must be frequent users of the mobile phone network. There are also some negative peaks, indicating that the selection is not perfect.
Is there any difference between the active fans and the other subscribers regarding the phone age and price? Figure~\ref{fig:phone_age_of_subscribers} shows the relative age of the phones with respect to the subscribers' behavior after the goals. No significant difference can be observed between the active fans and the other subscribers: the median relative phone age is about two years, and there are some much older (nearly ten-year-old) phones in use. It should be noted that the older devices tend to be used by elderly people. The price of the phones shows the opposite tendency: the younger subscribers own more expensive phones (Figure~\ref{fig:phone_price_of_subscribers}).
Naturally, not all of these \num{169089} \acrshort{sim} cards (excluding the ones operating non-phone devices) generated activity after every goal. \num{83352} devices were active within 5 minutes after the first goal, \num{70603} after the second and \num{68882} after the third; \num{44646} devices were active after at least two of the goals, and only \num{9102} after all three.
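
The selection rule can be sketched as follows; the record layout is a toy assumption, but the logic (activity within 5 minutes after at least two of the three goals) follows the description above.
\begin{verbatim}
import pandas as pd

# toy call detail records: SIM identifier and timestamp (assumed schema)
cdr = pd.DataFrame({
    "sim": [1, 1, 2, 3, 3, 3, 4],
    "ts": pd.to_datetime([
        "2016-06-22 18:19", "2016-06-22 19:03", "2016-06-22 18:20",
        "2016-06-22 18:19", "2016-06-22 19:04", "2016-06-22 19:20",
        "2016-06-22 17:00"]),
})
goals = pd.to_datetime(["2016-06-22 18:18", "2016-06-22 19:02",
                        "2016-06-22 19:18"])
window = pd.Timedelta(minutes=5)

# SIM cards active within 5 minutes after each goal
active_after = [set(cdr.loc[(cdr.ts >= g) & (cdr.ts < g + window), "sim"])
                for g in goals]
# 'active fans': SIM cards reacting to at least two of the three goals
fans = {s for s in set.union(*active_after)
        if sum(s in a for a in active_after) >= 2}
print(fans)   # {1, 3} for the toy records above
\end{verbatim}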
\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/hun_prt_activity_of_fans}
\caption{Activity of fans.}
\label{fig:hun_prt_activity_of_fans}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/hun_prt_activity_without_fans}
\caption{Activity without the fans.}
\label{fig:hun_prt_activity_without_fans}
\end{subfigure}
\caption{Mobile phone network activity of the \acrshort{sim} cards (fans) that had activity right after at least two of the Hungarian goals, and the mobile phone activity of the other \acrshort{sim} cards.}
\label{fig:hun_prt_activity_fan_activity}
\end{figure}
Why would they use the mobile phone network to access social media? If they had been at home, they would have used the wired connection, via Wi-Fi in the case of mobile devices. In Hungary, \num{79.2}\% of the households had a wired internet connection, according to the \acrshort{ksh}~\cite{ksh12.8.1.9}, and this ratio could be even higher in Budapest. However, if they were at the fan zones, for example in Szabadság Square, using the mobile network is much more plausible.
\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/phone_age_of_subscribers}
\caption{Relative age of the phones.}
\label{fig:phone_age_of_subscribers}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/phone_price_of_subscribers}
\caption{Price of the phones.}
\label{fig:phone_price_of_subscribers}
\end{subfigure}
\caption{Mobile phone relative age and price distributions in different age categories, comparing the fans, who had activity right after at least two of the Hungarian goals, with the rest of the \acrshort{sim} cards.}
\label{fig:phone_age_and_price_of_subscribers}
\end{figure}
As Figure~\ref{fig:phone_age_and_price_of_subscribers} shows, there is no significant difference in the phone age between the active football fans and the rest of the subscribers. The medians are almost the same within the young adult and the middle-age categories, but elders tend to use older devices, especially those who did not react to the goals.
The active football fans' median phone price is 180 EUR, in contrast to the 160 EUR median of the rest of the subscribers. However, the older subscribers tend to use less expensive phones. This tendency is also present among the football fans, but it is stronger within the other group.
Figure~\ref{fig:gyration_and_entropy_of_subscribers} illustrates the mobility metrics in the different age categories, also comparing the football fans and the rest of the subscribers. The Radius of Gyration median is almost the same in all the age categories and groups. The Entropy medians show a notable difference between the two groups, but do not change much across the age categories.
This means that the mobility customs of the football fans, who use the mobile phone network more actively, are similar regardless of the subscribers' age.
\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/gyration_of_subscribers}
\caption{Radius of Gyration of the subscribers.}
\label{fig:gyration_of_subscribers}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/entropy_of_subscribers}
\caption{Entropy of the subscribers.}
\label{fig:entropy_of_subscribers}
\end{subfigure}
\caption{Radius of Gyration and Entropy distributions in different age categories, comparing the fans, who had activity right after at least two of the Hungarian goals, with the rest of the \acrshort{sim} cards.}
\label{fig:gyration_and_entropy_of_subscribers}
\end{figure}
\subsection{Hungary vs. Belgium}
On Sunday, June 26, 2016, Hungary played the fourth and last Euro 2016 match, against Belgium. Figure~\ref{fig:hun_bel_timeseries} shows the mobile phone network activity before, during and after the match.
During the match, the activity level was below the weekend average.
The activity after the match was slightly higher than average, since the match ended late on Sunday, when the activity average is usually very low. This activity surplus may only indicate that the fans were simply leaving the fan zones and going home.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{figures/hun_bel_20160626_16-22}
\caption{Mobile phone activity during and after the Hungary--Belgium Euro 2016 match, in comparison with the average activity of the weekends.}
\label{fig:hun_bel_timeseries}
\end{figure}
\subsection{Homecoming}
\label{sec:homecoming}
The Hungarian national football team returned to Budapest on June 27, 2016. A welcome event was held at Heroes' Square, where the football fans could greet the national team. According to the M4 Sport television channel, approximately 20 thousand people attended the event \cite{hiradohu2016tizezrek}.
Between 18:00 and 19:30, there were \num{4246} unique \acrshort{sim} cards (excluding the non-phone ones) active in the site that covers Heroes' Square; \num{3425} of them are known to operate smartphones, based on the operating system column of the GSMArena data set.
The cells of this base station cover a larger area, so not all of these subscribers necessarily attended the event; on the other hand, attendees were not required to use their mobile phones during the event. Supposing that the mobile phone operator preferences among the attendees corresponded to the nationwide trends in 2016, there could even have been about 17 thousand people present, as the data provider had a \num{25.3}\% market share \cite{nmhh_mobile_market_report}.
Figure~\ref{fig:heroes_square_welcoming} shows a part of District 6 and the City Park with Heroes' Square, where the Voronoi polygons of the area are colored according to the Z-score values to indicate the mobile phone activity in the area at 18:35. The activity is considered low below $-1$, average between $-1$ and $1$, high between $1$ and $2.5$, and very high above $2.5$.
Figure~\ref{fig:heroes_square_welcoming_time_series} shows the mobile phone network activity (upper) and the Z-score (bottom) of the site covering Heroes' Square. It is clear that during the event the activity was significantly higher than the weekday average, and the Z-score values also reflect this.
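
The categorization used for the map coloring can be expressed as a small helper function; the thresholds are the ones listed above.
\begin{verbatim}
def activity_level(z):
    # thresholds used for the map coloring
    if z < -1:
        return "low"
    if z <= 1:
        return "average"
    if z <= 2.5:
        return "high"
    return "very high"

print([activity_level(z) for z in (-1.3, 0.2, 1.8, 4.0)])
# ['low', 'average', 'high', 'very high']
\end{verbatim}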
\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/heroes_square_welcoming_time_series}
\caption{Activity and Z-score of the site at Heroes' Square.}
\label{fig:heroes_square_welcoming_time_series}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.475\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/heroes_square_welcoming}
\caption{Spatial view of the activity at 18:35.}
\label{fig:heroes_square_welcoming}
\end{subfigure}
\caption{Mobile phone network activity at Heroes' Square and its neighborhood, during the welcoming event of the Hungarian national football team.}
\label{fig:welcoming}
\end{figure}
\subsection{Limitations}
We associated the subscribers' \acrshort{ses} with the release price of their cell phones; however, they did not necessarily buy their phones at that price. Many people buy their phone on sale or at a discount from the operator in exchange for signing a multi-year contract.
Also, subscribers can change their phones at any time. We have taken into consideration only those subscribers who used a single device during the observation period, or had a dominant device that generated most of their activity records.
We fused three data sets to exclude the non-phone \acrshort{sim} cards, but the device identification is not complete. There remain devices whose model is unknown, and phones whose release date and price are unknown. It is not possible to determine the \acrshort{ses} of these subscribers with the proposed solution.
\subsection{Future Work}
Although the current solution for selecting the football fans' \acrshort{sim} cards, in other words, the \acrshort{sim} cards that caused the peaks, gives a reasonable result, it could be improved by analyzing the activity during the whole observation period, for example by applying a machine learning technique.
Extending the list of the non-phone \acrshort{tac}s could also help to refine the results, and combining the mobile phone prices with the real estate prices of the home location would most certainly enhance the socioeconomic characterization.
The relative age of the cell phone might be used as a weight for the phone price when applied as a \acrshort{ses} indicator to distinguish between the phone price categories, as an expensive but older phone is not worth as much as a newer one with the same price.
\section{Conclusions}
\label{sec:conclusions}
In this study, we demonstrated that the mobile phone network activity closely follows the football fans' behavior, even if the matches are played in another country. This analysis focused on the people who followed the matches on TV (at home) or on big screens at the fan zones, but not in the stadium where the matches were actually played.
The mobile phone network data and the mobile phone specification database were applied to characterize the \acrshort{ses} of the football fans. The data fusion allowed us to remove from the examination a considerable number of \acrshort{sim} cards that certainly operate in devices other than mobile phones. Although there are some still unidentified \acrshort{tac}s in the data set, this way the activity records involved in this study are significantly more likely to have been generated by an actual person during the events.
The time series of mobile network traffic clearly show that the activity was below the average during the matches, indicating that many people followed their team. This observation coincides with other studies \cite{traag2011social, mamei2016estimating, xavier2012analyzing, hiir2020impact}, where the activity of the cells at the stadium was analyzed.
We also demonstrated that a remote football match can have a notable effect on the mobile phone network.
Moreover, the joy felt after the Hungarian goals is clearly manifested in the data as sudden activity peaks.
The \acrshort{cdr} data is certainly capable of social sensing.
The spontaneous festival after the Hungary vs. Portugal match and the welcoming event at Heroes' Square are direct applications of social sensing and are comparable to mass protests from a data perspective. During the events, the mobile phone network activity was significantly higher than the average in the affected areas.
The price of the mobile phone proved to be an expressive socioeconomic indicator. It is capable not only of clustering the areas of a city, but also of distinguishing the subscribers by their mobility customs. On the other hand, it does not seem to affect the interest in football.
\vspace{6pt}
\section*{Author Contributions}
Conceptualization, G.P; methodology, G.P.; software, G.P.; validation, G.P.; formal analysis, G.P.; investigation, G.P.; resources, G.P. and I.F.; data curation, G.P.; writing---original draft preparation, G.P.; writing---review and editing, G.P. and I.F.; visualization, G.P.; supervision, I.F.; project administration, I.F.; funding acquisition, I.F. All authors have read and agreed to the published version of the manuscript.
\section*{Funding}
This research was supported by the project 2019-1.3.1-KK-2019-00007 and by the Eötvös Loránd Research Network Secretariat under grant agreement no. ELKH KÖ-40/2020.
\section*{Acknowledgments}
The authors would like to thank Vodafone Hungary and 51Degrees for providing the Call Detail Records and the Type Allocation Code database for this study.
For plotting the maps, OpenStreetMap data was used, which is copyrighted by the OpenStreetMap contributors and licensed under the Open Data Commons Open Database License (ODbL).
\section*{Conflicts of Interest}
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
\printglossary[title=Abbreviations, toctitle=Abbreviations, nogroupskip=true]
\printbibliography
\end{document}
| {
"attr-fineweb-edu": 1.986328,
"attr-cc_en_topic": 0,
"domain": "arxiv"
} |
BkiUcRTxK7Ehm9ic_p5k | \section{Introduction}
\label{sec:intro}
Sports streaming websites are very popular, with many services like TennisTV and WatchESPN offering full game replays on demand. Millions of users use these services for entertainment, education and other purposes. However, tennis matches are usually very long, often running into hours, and it is very hard to infer playing styles and patterns of players without investing hundreds of hours of viewing time. Thus, it is cumbersome to find ``useful'' parts. Streaming websites provide the video as-is, i.e., it is only possible to access the video stream sequentially. However, in the case of sports and other event-rich video streams, a useful extension is to provide random access (like accessing an array) grounded in events along with sequential access, so that features like skipping to the next event, filtering events, etc. can be provided.
In this paper, we focus on constructing a point wise index of a tennis match and thus providing random access to the match. We propose a method to segment the match into a set of rallies and then automatically extract the scorecard and the scores. Using tennis domain knowledge, we construct a novel algorithm to refine the extracted scores. We then demonstrate the utility of the automatically constructed index by building an interface to quickly and effortlessly retrieve and view the relevant point, game and set segments, along with providing human-accessible tags.
There are multiple challenges in this scenario. The tennis match videos are recorded from multiple camera angles and edited to have different kinds of shots, to capture various emotions and drama along with the game play. With respect to extracting scores, the scoreboard is never at a fixed position or in a specific format, and the score digits are not constrained by font, size, or color.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\linewidth]{images/motivation_image.jpg}
\end{center}
\caption{We aim to provide random access to tennis match videos and construct a point wise index of a tennis match so that a user can access, jump and skip ``points'', ``games'' and ``sets''.}
\label{fig:motivatingfigure}
\end{figure}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1\linewidth]{images/pipeline_image.jpg}
\end{center}
\caption{Our approach is illustrated in this figure. We start by temporally segmenting out the rallies, extracting the scoreboard and then recognizing the scores where we use contextual and domain knowledge to refine the recognized scores.}
\label{fig:pipeline}
\end{figure*}
The major contributions of this paper are,
\begin{enumerate}
\item An effective score recognition algorithm using domain knowledge which can be adapted for different games. Here, we do our experiments on tennis videos by using the tennis domain knowledge.
\item We propose a score based indexing system, to navigate and retrieve segments from large volumes of video data with considerable ease.
\item Our method also enables many applications of indexing; we demonstrate one such application, human-accessible event tagging.
\end{enumerate}
Section \ref{sec:relatedwork} discusses advances and related work in the literature. Section \ref{sec:approach} forms the core of the paper, describing our approach. Lastly, Section \ref{sec:results} provides a brief background of tennis and a high-level description of our dataset(s), and describes the implementation details and the experiments we performed along with the obtained results.
\section{Related Work}
\label{sec:relatedwork}
\textbf{Sports Understanding:} Using domain specific cues, several researchers have previously worked on improving sports understanding (specially tennis), with strides made in video summarization and automatically generating highlights~\cite{hanjalic2003generic, ghanem2012context, huang2009intelligent}, generating descriptions~\cite{sukhwani2015tennisvid2text} and automatically segmenting coarse temporal scenes~\cite{zhang2007personalized}, annotating players~\cite{yan2014automatic, mentzelopoulos2013active} and tracking the ball~\cite{yan2007all, zhou2015tennis}.
~\textbf{Sports Video Indexing and Applications:} Xu et al.~\cite{xu2008novel} and Miyamori et al.~\cite{miyamori2000video} focus on semantic annotations exploiting tennis domain knowledge to build retrieval systems based on positions and actions. Sukhwani et al.~\cite{sukhwani2016frame} proposed a dictionary learning method for frame-level fine-grained annotations of a given video clip, but their annotations are also computed at the level of actions, useful in the context of computing player statistics. Kolekar et al.~\cite{kolekar2015bayesian} use audio features to detect events in soccer scenes and generate highlights. Liu et al.~\cite{liu2009framework} perform multimodal analysis to generate tennis video highlights, while Connaghan et al.~\cite{connaghan2011game} attempt to segment a game into points, games and sets; however, they perform no score keeping and use multiple cameras for the task. These methods do not attempt to robustly index point level information to enable retrieval from the point of view of a viewer. Our work differs from all of these as we attempt to annotate point level information for a match.
~\textbf{Scorecard and Score Extraction:} Liao et al.~\cite{liao2015research} focus only on detecting the scorecard, while Miao et al.~\cite{miao2007real} focus on both detection and extraction of scores; however, their algorithm is specific to basketball. Tesseract~\cite{smith2007overview} is the commonly used \textsc{ocr} pipeline for detecting text in images and documents with a plain background. Convolutional Recurrent Neural Network (\textsc{crnn})~\cite{shi2016end} is applicable for performing end-to-end scene text recognition, while Textspot~\cite{gupta2016synthetic} introduces a Fully Convolutional Regression Network (\textsc{fcrn}) which performs end-to-end scene text detection and, for recognition, uses the intermediary stage of the pipeline based on the lexicon-encoding CNN from Jaderberg et al.~\cite{jaderberg2014synthetic}.
\section{Approach}
\label{sec:approach}
Our goal is to automatically create an index for tennis videos. We begin by describing a method to automatically segment rallies. Then we detect and localize the scorecard in each of these rallies and recognize the text to abstract out the game score state to annotate the video with the accessible tags. An overview of our pipeline can be seen in Fig.~\ref{fig:pipeline}.
\subsection{Rally Segmentation}
Our method of segmenting out rallies stems from the observation that in \textsc{btv}s, the camera is overhead only when the rally is in play and nowhere else. The background is mostly static after the serve begins, and remains the same till a player wins a point. \textsc{hog} features are appropriate in such a scenario, so we extract frames from the Rally Dataset, downscale them, and extract \textsc{hog} features. We then learn a $\chi$-squared kernel SVM to label a frame either as a rally frame or a non-rally frame. Finally, we use this learned classifier to label each frame of the \textsc{btv} as part of a rally or otherwise, and smooth this sequence using a Kalman filter to remove false positives/negatives and obtain the segmented rallies.
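
A sketch of this pipeline is given below, using scikit-image and scikit-learn as stand-ins for the actual implementation and assuming RGB input frames; the exact HOG parameters are assumptions, and the Kalman smoothing of the predicted label sequence is replaced by a simple median filter for brevity.
\begin{verbatim}
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import SVC
from sklearn.metrics.pairwise import chi2_kernel
from scipy.signal import medfilt

def frame_features(frame):
    # downscale, convert to grayscale and extract HOG features
    # (HOG features are non-negative, so a chi-squared kernel applies)
    gray = rgb2gray(resize(frame, (90, 160)))
    return hog(gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

def train_rally_classifier(frames, labels, C=0.05):
    X = np.array([frame_features(f) for f in frames])
    clf = SVC(C=C, kernel=chi2_kernel)  # callable chi-squared kernel
    clf.fit(X, labels)                  # labels: 1 = rally, 0 = non-rally
    return clf

def segment_rallies(clf, frames):
    X = np.array([frame_features(f) for f in frames])
    pred = clf.predict(X).astype(float)
    # stand-in smoother for the Kalman filter used in the paper
    return medfilt(pred, kernel_size=9) > 0.5
\end{verbatim}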
\begin{figure*}
\begin{center}
\includegraphics[width=1\linewidth]{images/some_montage.jpg}
\end{center}
\caption{(a) depicts some of the extracted scorecards from different matches of our dataset. As one can see, the detected scorecards are of different sizes and formats, and the differences across tournaments are noticeable. We have also included some of our failure cases: (v) and (vi) have extra regions that have been detected. (b) depicts the tennis point automaton that can be constructed from the tennis scoring system, which is used to refine our extracted scores.}
\label{fig:prelimmontage}
\end{figure*}
\subsection{Scorecard Extraction}
We utilize the observation that the scorecard position is stationary in a rally, while the camera pans and moves around to cover the game. However, the scorecard may disappear and is not necessarily of the same size across the game as opposed to the assumptions in~\cite{miao2007real}. So, to overcome these issues, we extract the scorecard independently from each rally segment instead of assuming a single scorecard template.
We adapt the method described in~\cite{liao2015research}. We start by finding the gradient of each frame (say $I_{x}(i, j, t)$) using the Sobel filter, and then calculate the normalized temporal sum for each frame as $I_{norm}(i, j, n) = \frac{1}{n} \sum_{t = 1}^{n} I_{x} (i, j, t)$. Then, we subtract $I_{x}$ and $I_{norm}$ to obtain the temporally correlated regions $I_{g}$. Further, we binarize the image using the following equation,
\begin{equation}
I_{r}(i,j,t) = (1 - \frac{I_{x}(i,j,t)}{max_{t,i,j}(I_{x})})I_{norm}(i,j,t)
\end{equation}
Empirically, the scorecard is found in one of the corners of the frame, so we identify the four regions of size $(h/5, w/2)$ in the corners as the regions to search for the scorecard; here $w$ and $h$ are the width and height of the frame, respectively. We identify the coarse scorecard region by selecting the region with the maximum number of white pixels in $I_{r}(i,j,t)$ summed over time. After we have identified the coarse region, we apply morphological operators to remove small aberrations and fit a rectangle which encloses the scorecard area. Our qualitative results can be seen in Fig.~\ref{fig:prelimmontage} {(a)}.
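
The main steps can be sketched as follows; a simple global threshold stands in for the binarization and the morphological clean-up, and only the coarse corner selection is shown.
\begin{verbatim}
import numpy as np
from scipy import ndimage

def scorecard_region(frames):
    """frames: (T, H, W) grayscale rally frames; returns the corner
    slices (rows, cols) most likely to contain the scorecard."""
    T, H, W = frames.shape
    # per-frame horizontal Sobel gradient I_x and its temporal mean I_norm
    I_x = np.abs(np.array([ndimage.sobel(f.astype(float), axis=1)
                           for f in frames]))
    I_norm = I_x.mean(axis=0)
    # temporally correlated regions, following Eq. (1)
    I_r = (1.0 - I_x / I_x.max()) * I_norm
    white = (I_r > I_r.mean()).sum(axis=0)  # white-pixel counts over time
    h, w = H // 5, W // 2                   # candidate region size (h/5, w/2)
    corners = {"NW": (slice(0, h), slice(0, w)),
               "NE": (slice(0, h), slice(W - w, W)),
               "SW": (slice(H - h, H), slice(0, w)),
               "SE": (slice(H - h, H), slice(W - w, W))}
    best = max(corners, key=lambda c: white[corners[c]].sum())
    return corners[best]
\end{verbatim}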
\subsection{Score Recognition}
Traditional \textsc{ocr}-based methods like Tesseract~\cite{smith2007overview} can recognize text printed on a clear background, but need the image to be preprocessed if the background is textured and shaded or the contrast of the text fragments varies widely. However, with the advent of deep learning based \textsc{ocr} and scene text detection methods, a more general approach can be formulated.
To recognize scores, we experiment with three different methods: Tesseract, \textsc{crnn} and Textspot. Textspot builds on \textsc{fcrn}~\cite{gupta2016synthetic}, an end-to-end text detection network which constructs a field of predictors, where each predictor is responsible for detecting a word if the word centre falls within the corresponding cell, akin to the \textsc{yolo} network architecture. The recognition is performed by the intermediary stage of the pipeline, based on the lexicon-encoding \textsc{cnn} from Jaderberg et al.~\cite{jaderberg2014synthetic}. \textsc{crnn}~\cite{shi2016end} is a scene text recognition network which treats the image as a sequence of strips. It uses a \textsc{cnn} as a feature extractor to obtain feature maps and construct a sequence of feature vectors. The sequence is fed into a bi-directional \textsc{lstm} to obtain label sequence probabilities, and \textsc{ctc} loss is employed to obtain the labels. We adapt these methods and compare the score recognition baselines in Section~\ref{sec:results}.
\subsection{Score Refinement}
To further refine the recognized scores, we use knowledge of the tennis scoring system. As in any structured game, score keeping in tennis is governed by a set of rules and thus can be modeled as a finite automaton. Tennis in particular can be modeled by 3 automata, one each for tracking the point, game and set score (see Fig.~\ref{fig:prelimmontage} (b)). Also, the vocabularies for the point, game and set scores are restricted, so we find errors by checking whether the recognized value belongs to the vocabulary or not. For instance, the vocabulary for a point score is restricted to $\{ 0, 15, 30, 40, AD \}$.
Let $J = (game_{1}, set_{1}, point_{1}, game_{2}, set_{2}, point_{2})$ be the score state, where game, set and point have the same meanings as in tennis. Firstly, we exploit the fact that the game and set scores usually remain constant in a temporal window, and thus replace errors with the mode of the values in the window (with exceptions for a score change within the window).
Consider the tennis scoring automaton $T$, whose score states and transition function are constructed using the tennis scoring rules. We define a function $nextStates(s)$, which returns all possible next game states, and likewise $previousStates(s)$, which provides the set of originating states for the current state $s$. For instance, from Fig.~\ref{fig:prelimmontage} (b), if we assume that we are at the state $s = (0, 0, 30, 0, 0, 30)$ (referred to as 30 all in the figure), the function $previousStates(s)$ will return $\{ (0, 0, 30, 0, 0, 15), (0, 0, 15, 0, 0, 30) \}$ and $nextStates(s)$ would return $\{ (0, 0, 40, 0, 0, 30), (0, 0, 30, 0, 0, 40) \}$.
Assuming that the set of scores is $S = \{ s_{1}, s_{2} ... s_{n} \}$, and that $s_{i}$ is erroneous (using vocabulary constraints), we compute the set $P = nextStates(s_{i-1}) \cap previousStates(s_{i+1})$, then we find the corrected score using,
\begin{equation}
s'_{i} = \argmax_{p \in P} \frac{1}{|J|} \sum_{j \in J} \delta(s_{i}(j), p_{i}(j))
\end{equation}
where $J$ is the set of components of the score state and $\delta$ is the Kronecker delta function. This equation is only needed if there is more than one possible score. It should be noted that this method is extensible to any game which follows a structured scoring system like tennis.
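
A sketch of this refinement is given below; the automaton functions $nextStates(s)$ and $previousStates(s)$ and the vocabulary check are assumed to be given, and score states are represented as tuples following the definition of $J$.
\begin{verbatim}
def refine_scores(scores, in_vocab, next_states, previous_states):
    """scores: list of recognized score states (tuples following J);
    in_vocab(s) checks vocabulary membership; next_states/previous_states
    implement the tennis scoring automaton."""
    out = list(scores)
    for i in range(1, len(out) - 1):
        if in_vocab(out[i]):
            continue
        candidates = set(next_states(out[i - 1])) & \
                     set(previous_states(out[i + 1]))
        if not candidates:
            continue
        # keep the candidate agreeing with the recognized state on the
        # largest number of fields, as in Eq. (2)
        out[i] = max(candidates,
                     key=lambda p: sum(a == b for a, b in zip(out[i], p)))
    return out
\end{verbatim}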
\section{Experiments and Results}
\label{sec:results}
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\linewidth]{images/interface_image.jpg}
\end{center}
\caption{The developed interface supports the indexing and retrieval of a match as a point, game and set.}
\label{fig:coolimage}
\end{figure}
\subsection{Dataset}
A tennis match is divided into sets, each set is divided into games, and each game has a certain number of points or rallies. We restrict ourselves to ``singles'' matches and work with broadcast tennis video (\textsc{btv}) recordings at 720p for 10 matches. For all our experiments, 5 matches are taken from the French Open 2017 and the remaining matches from the London Olympics 2012. For performing rally segmentation, we created a ``Rally Dataset'' by manually annotating 2 matches into rally and non-rally segments. The training and test set images are derived by dividing all the images in a 50-50 split. For evaluating score extraction, we further annotated 4 matches with the score of each segment, using the automatically segmented rallies from the described algorithm. Altogether, we have annotated 1011 rallies to create the ``Match Scores Dataset''.
\subsection{Rally Segmentation}
For learning the rally segmentation classifier, we extracted every 10th frame from the Rally Dataset and cross-validated using a 0.8 split to find the optimal values of the hyper-parameters $C$ and the period of the $\chi$-squared kernel. The optimal value of $C$ is $0.05$ and the period of the $\chi$-squared kernel \textsc{svm} is found to be $3$.
The mean $F1$ score on the test set for the task was found to be $97.46\%$; the precision for the non-rally segments was $98.94\%$ and for the rally segments $95.41\%$.
\subsection{Score Recognition}
\begin{table}
\begin{center}
\caption{Averaged Edit Distance for score recognition (Lower is better)}
\label{tab:algochooser}
\begin{tabular}{|l|l|l|l|l|}
\hline
Match & Textspot & \textsc{crnn} & Tesseract-P \\
\hline\hline
Match 1 (186 rallies) & 0.2070 & 0.4272 & 0.2612 \\
Match 2 (218 rallies) & 0.2178 & 0.4476 & 0.3780 \\
\hline
\end{tabular}
\end{center}
\end{table}
For employing Tesseract, we carefully preprocess the scorecard image and threshold it manually; such a preprocessing step needs to be manually defined for each tournament. To train the \textsc{crnn}, which is constrained to recognize words as sequences, we divided the scorecard horizontally into two parts. For employing Textspot, we do not train the network and use the model trained on the ``SynthText in the Wild'' dataset, as~\cite{gupta2016synthetic} report state-of-the-art performance on standard benchmarks; however, we post-process the text detection boxes and sort them to extract the scores. We used edit distance instead of the usual text recognition metrics because the ``spaces'' between scores (in the recognized string) are relevant in our case. For instance, \textsc{crnn} removes repetitions of numbers, which causes a decrease in accuracy. Table~\ref{tab:algochooser} presents our experimental results on a subset of the matches; as we can see, Textspot performed the best, and thus we use it as our baseline for the next set of experiments.
\subsection{Score Refinement}
It is important to reiterate that our aim is not to recognize the text in the scorecard, but rather to capture the game score state. To evaluate our results, we formulate a new metric, which takes the computed game state $C_{i}$ and the actual game state $G_{i}$ as input, and computes the following (for a set of rallies, say $R$),
\begin{equation}
AC(R) = \sum_{i \in R} \frac{1}{|J|} \sum_{j \in J} \delta(C_{i}(j), G_{i}(j))
\end{equation}
where $J$ and $\delta$ are as defined earlier.
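
For clarity, the metric can be computed as in the sketch below (normalized by the number of rallies, as the reported percentages suggest); the toy states follow the $(game_{1}, set_{1}, point_{1}, game_{2}, set_{2}, point_{2})$ convention.
\begin{verbatim}
def score_accuracy(computed, actual):
    # fraction of agreeing score-state fields, averaged over the rallies
    per_rally = [sum(c == g for c, g in zip(C, G)) / len(G)
                 for C, G in zip(computed, actual)]
    return sum(per_rally) / len(per_rally)

# toy states: (game1, set1, point1, game2, set2, point2)
C = [(0, 0, 30, 0, 0, 15), (0, 0, 30, 0, 0, 30)]
G = [(0, 0, 30, 0, 0, 15), (0, 0, 40, 0, 0, 30)]
print(score_accuracy(C, G))   # 0.9166... (11 of 12 fields agree)
\end{verbatim}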
\begin{table}
\caption{Averaged Score Accuracy AC(R) for our method and the defined baseline, \textsc{fcrn} (Higher is better)}
\label{tab:accresults}
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
Match & Textspot & Ours \\
\hline\hline
Match 1 (186 rallies) & 79.30\% & 91.66\% \\
Match 2 (218 rallies) & 77.90\% & 80.58\% \\
Match 3 (201 rallies) & 92.45\% & 95.19\% \\
Match 4 (194 rallies) & 85.22\% & 92.18\% \\
\hline
\end{tabular}
\end{center}
\end{table}
As can be seen from Table~\ref{tab:accresults}, our refinement algorithm shows a consistent improvement in the averaged score accuracy across matches over the best performing baseline method, Textspot~\cite{gupta2016synthetic}. However, as is apparent, the performance of our method depends on the performance of the baseline score recognition, which is possibly the reason for the relatively meager improvement in score accuracy on the second match.
\subsection{Event Tagging}
\begin{table}
\begin{center}
\caption{Averaged Accuracy score of automatic event tagging (in percentage)}
\label{tab:accesstags}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|}
\hline
\multirow{2}{*}{} &
\multicolumn{2}{c}{Match 1} &
\multicolumn{2}{c}{Match 2} &
\multicolumn{2}{c}{Match 3} &
\multicolumn{2}{c|}{Match 4} \\
\hline
& Textspot & Ours & Textspot & Ours & Textspot & Ours & Textspot & Ours \\
\hline\hline
Fault & 66.66 & 70.83 & 52.24 & 56.71 & 87.87 & 90.90 & 84.44 & 84.44 \\
Deuce & 100.0 & 100.0 & 73.68 & 78.94 & 100.0 & 100.0 & 94.73 & 94.73 \\
Advantage & 100.0 & 100.0 & 77.77 & 77.77 & 100.0 & 100.0 & 95.65 & 95.65 \\
\hline
Overall & 75.00 & 79.41 & 60.58 & 64.43 & 92.45 & 94.33 & 89.65 & 89.65 \\
\hline
\end{tabular}
\end{center}
\end{table}
Further, we automatically tagged common tennis events of importance to viewers, such as ``fault'', ``deuce'' and ``advantage'', using simple rules which combine the definitions of these tennis terms with our extracted scores. We compare the accuracy with and without score refinement and observe an improvement corresponding to the improvement in score accuracy. The accuracy for each tag per match (the matches are the same as in Table~\ref{tab:accresults}) can be seen in Table~\ref{tab:accesstags}.
\section{Conclusion}
We have presented an approach to create a tennis match index based on recognizing rallies and scores, supporting random access to ``points'' (Fig.~\ref{fig:coolimage}) tagged with common tennis events. Further extensions to this work are numerous, such as providing point-based semantic search and performing tennis player analytics using videos instead of expensive sensor-based technologies.
| {
"attr-fineweb-edu": 1.927734,
"attr-cc_en_topic": 0,
"domain": "arxiv"
} |
BkiUdX_xK7Tt6AlxD7-o | \section{Introduction}
The classical Beardwood-Halton-Hammersley theorem \cite{BHH} (see also Steele \cite{S}) concerns the minimum cost Traveling Salesperson Tour through $n$ random points in Euclidean space. In particular, it guarantees the existence of an absolute (though still unknown) constant $\beta_d$ such that if $x_1,x_2\dots,$ is a random sequence of points in the $d$-dimensional cube $[0,1]^d$, the length $T(\gx 1)$ of a minimum tour through $x_1,\dots,x_n$ satisfies
\begin{equation}
\label{e.bhh}
T(\gx 1)\sim \beta_d n^{\frac{d-1}{d}}\ a.s.
\end{equation}
The present paper is concerned still with the problem of traveling among random points in Euclidean space. In our case, however, we suppose that only a (random) subset of the pairs of points are joined by traversable connections, independent of the geometry of the point set.
In particular, we study random embeddings of the Erd\H{o}s-R\'enyi-Gilbert random graph $G_{n,p}$ into the $d$-dimensional cube $[0,1]^d$. We let ${\mathcal X}_n$ denote a random embedding of $[n]=\{1,\dots,n\}$ into $[0,1]^d$, where each vertex $i\in [n]$ is mapped (independently) to a random point $X_i\in [0,1]^d$, and we denote by $G_{\cX,p}$ the random graph whose vertex set is ${\mathcal X}_n$ and whose pairs of vertices are joined by edges each with independent probability $p$. Edges are weighted by the Euclidean distance between their points, and we are interested in the total edge-weight required to travel about the graph.
This model has received much less attention than the standard model of a random geometric graph, defined as the intersection graph of unit balls with random centers $X_i,i\in[n]$, see Penrose \cite{P}. We are only aware of the papers by Mehrabian \cite{M11} and Mehrabian and Wormald \cite{MR} who studied the {\em stretch factor} of $G_{\cX,p}$. In particular, let $\de x y$ denote the Euclidean distance between vertices $x,y$, and $\dx x y$ denote their distance in $G_{\cX,p}$.
They showed (considering the case $d=2$) that unless $p$ is close to 1, the stretch factor
\[
\sup_{x,y\in G_{\cX,p}} \frac{\dx x y}{\de x y}
\]
tends to $\infty$ with $n$.
As a counterpoint to this, our first result shows a very different phenomenon when we pay attention to additive rather than multiplicative errors. In particular, for $p\gg \frac{\log^d n}{n}$, the distance between a typical pair of vertices is arbitrarily close to their Euclidean distance, while for $p\ll \frac{\log^d n}{n}$, the distance between a typical pair of vertices in $\xdn$ is arbitrarily large (Figure \ref{f.paths}).
\begin{theorem}
\label{t.expected} Let $\omega=\omega(n)\to \infty$. We have:
\begin{enumerate}[(a)]
\item For $p\leq \frac 1 {\omega^d(\log\log n)^{2d}} \frac{\log^dn}{n}$ and fixed $u,v$,
$$\dx u v\geq \frac{\omega}{8de^d}
\qquad\text{a.a.s.}\footnote{A sequence of events $\mathcal{E}_n$ occurs {\em asymptotically almost surely} (a.a.s.) if $\lim_{n\to\infty}\Pr(\neg\mathcal{E}_n)=0$.}$$
\item
For $p\geq \frac{\omega\log^d n}{n}$, we have a.a.s. that uniformly for all vertices $u,v$,
\[
\dx u v =\de u v+o(1).
\]
\end{enumerate}
\end{theorem}
\begin{figure}
\hspace{\stretch{1}}\includegraphics[width=.24\linewidth]{geo-30-10.pdf}
\includegraphics[width=.24\linewidth]{geo-30-25.pdf}
\includegraphics[width=.24\linewidth]{geo-30-50.pdf}
\includegraphics[width=.24\linewidth]{geo-30-200.pdf}\hspace{\stretch{1}}
\caption{\label{f.paths}Paths in an instance of ${\mathcal X}_{n,p}$ for $d=2$, $n=2^{30}$, and $p=\tfrac{10}{n},\tfrac{25}{n},\tfrac{50}{n},$ and $\tfrac{200}{n}$, respectively. In each case, the path drawn is the shortest route between the vertices $x$ and $y$ which are closest to the SW and NE corners of the square. (See Q.~\ref{pathgeom}, Section \ref{Qs}.)}
\end{figure}
Theorem \ref{t.expected} means that, even for $p$ quite small, it is not that much more expensive to travel from one vertex of $G_{\cX,p}$ to another than it is to travel directly between them in the plane. On the other hand, there is a dramatic dependence on $p$ if the goal is to travel among \emph{all} points. Let $T(G_{\cX,p})$ denote the length of a minimum length tour in $G_{\cX,p}$ hitting every vertex exactly once, i.e. a Traveling Salesperson tour.
\begin{theorem}\label{worst}
There exists a sufficiently large constant $K>0$ such that for all $p=p(n)$ such that $p\geq \frac{K\log n}{n}$, $d\geq 2$, we have that
\beq{Tgxp}
T(G_{\cX,p})=\Theta\bfrac{n^{\frac{d-1}{d}}}{p^{1/d}}\qquad a.a.s.
\end{equation}
\end{theorem}
(Recall that $f(n)=\Theta(g(n))$ means that $f(n)$ is bounded between positive constant multiples of $g(n)$ for sufficiently large $n$.) As the threshold for $G_{n,p}$ to be Hamiltonian is at $p=\frac{\log n +\log \log n+\omega(n)}{n}$, this theorem covers nearly the entire range for $p$ for which a TSP tour exists a.a.s.
Finally, we extend the asymptotically tight BHH theorem to the case of $G_{\cX,p}$ for any constant $p$. To formulate an ``almost surely'' statement, we let $\gxnp$ denote a random graph on a random embedding of $\mathbb{N}$ into $[0,1]^d$, where each pair $\{i,j\}$ is present as an edge with independent probability $p$, and consider $G_{\cX,p}$ as the restriction of $\gxnp$ to the first $n$ vertices $\{1,\dots,n\}$.
\begin{theorem}\label{tsp}
If $d\geq2$ and $p>0$ is constant, then there exists $\beta^d_p>0$ such that
\[
T(G_{\cX,p})\sim \beta^d_pn^{\frac{d-1} d} \qquad a.s.
\]
\end{theorem}
Karp's algorithm \cite{K} for finding an approximate tour through ${\mathcal X}_n$ extends to the case of $G_{\cX,p}$ with $p$ constant as well:
\begin{theorem}\label{t.alg}
For fixed $d\geq2$ and constant $p$, there is an algorithm that a.s. finds a tour in $G_{\cX,p}$ of value $(1+o(1))\beta^d_pn^{(d-1)/d}$ in polynomial time, for all $n\in \mathbb{N}$.
\end{theorem}
\section{Traveling between pairs}
In this section, we prove Theorem \ref{t.expected}. Let $\nu_d$ denote the volume of a $d$-dimensional unit ball; recall that $\nu_d$ is bounded ($\nu_d\leq \nu_5<6$ for all $d$).
\begin{proof}[Proof of Theorem \ref{t.expected}(a)]
Let
$\varepsilon=\frac{1}{\log\log n}$ and let ${\mathcal A}_k$ be the event that there exists a path of $k\geq k_0=\frac{\log n}{2d\log\log n}$ edges from $u$ to $v$ that uses $\leq \varepsilon k$ edges of Euclidean length at least $\ell_1= \frac{\omega(\log\log n)^2}{4e^d\log n}$. Then
\begin{align}
\Pr(\exists k:{\mathcal A}_k)&\leq \sum_{k\geq k_0}(k-1)!\binom{n}{k-1}p^k \binom{k}{(1-\varepsilon)k}\brac{\nu_d\bfrac{\omega(\log\log n)^2}{4e^d\log n}^d}^{(1-\varepsilon)k}\label{eq1}\\
&\leq \frac{1}{n}\sum_{k\geq k_0}\brac{\frac{\nu_d\log^{d\varepsilon}n}{(4e^d)^{d(1-\varepsilon)}}\cdot \bfrac{e}{\varepsilon}^\varepsilon}^k =o(1).\nonumber
\end{align}
{\bf Explanation of \eqref{eq1}:}
Choose the $k-1$ interior vertices of the possible path and order them in $(k-1)!\binom{n}{k-1}$ ways as $(u_1,u_2,\ldots,u_{k-1})$. Then $p^k$ is the probability that the edges exist in $G_{n,p}$. Now choose the short edges $e_i=(u_{i-1},u_i),i\in I$ in $\binom{k}{(1-\varepsilon)k}$ ways and bound the probability that these edges are short by $\brac{\nu_d\bfrac{\omega(\log\log n)^2}{4e^d\log n}^d}^{(1-\varepsilon)k}$ viz.~the probability that $u_i$ is mapped to the ball of radius $\ell_1$, center $u_{i-1}$ for $i\in I$.
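
For completeness, the simplification behind the second line can be traced as follows. Using $(k-1)!\binom{n}{k-1}\le n^{k-1}$, $\binom{k}{(1-\varepsilon)k}=\binom{k}{\varepsilon k}\le \bfrac{e}{\varepsilon}^{\varepsilon k}$ and the assumption $np\le \frac{\log^dn}{\omega^d(\log\log n)^{2d}}$, the $k$-th summand in \eqref{eq1} is at most
$$\frac{1}{n}\brac{\brac{\frac{\log^dn}{\omega^d(\log\log n)^{2d}}}^{\varepsilon}\cdot \frac{\nu_d^{1-\varepsilon}}{(4e^d)^{d(1-\varepsilon)}}\cdot\bfrac{e}{\varepsilon}^{\varepsilon}}^k.$$
With $\varepsilon=\frac{1}{\log\log n}$ we have $\brac{\frac{\log^dn}{\omega^d(\log\log n)^{2d}}}^{\varepsilon}\leq\log^{d\varepsilon}n=e^d$, $\nu_d^{1-\varepsilon}=(1+o(1))\nu_d$ and $\bfrac{e}{\varepsilon}^{\varepsilon}=1+o(1)$, so the bracketed quantity is at most $(1+o(1))\frac{\nu_de^d}{(4e^d)^{d}}<1$ for $d\ge 2$, and summing the resulting geometric series over $k\geq k_0$ gives the stated $o(1)$ bound.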
Now a.a.s. the shortest path in $G_{n,p}$ from $u$ to $v$ requires at least $k_0$ edges: Indeed the expected number of paths of length at most $k_0$ from $u$ to $v$ can be bounded by
$$\sum_{k=1}^{k_0}(k-1)!\binom{n}{k-1}p^k\leq \frac{1}{n}\sum_{k=1}^{k_0}\bfrac{\log^dn}{\omega^d(\log\log n)^{2d}}^k=o(1).$$
So a.a.s.
$$dist(u,v)\geq \varepsilon k_0\ell_1=\frac{\varepsilon\log n}{2d\log\log n}\cdot \frac{\omega(\log\log n)^2}{4e^d\log n}=\frac{\omega}{8de^d}.$$
\end{proof}
\begin{proof}[Proof of Theorem \ref{t.expected}(b)]
Fix some small $\gamma>0$. We begin by considering the case of vertices $u,v$ at distance $\de u v\geq \gamma$. Letting $\delta=\frac{1}{\log n}$, there is a constant $C$ such that, for sufficiently large $n$ relative to $\gamma$, we can find a set ${\mathcal B}$ of $\geq \frac{2C}{\delta}$ disjoint balls of radius $\delta$ centered on the line from $u$ to $v$, such that $\frac{C}{\delta}$ of the balls are closer to $u$ than $v$, and $\frac{C}{\delta}$ balls are closer to $v$ than $u$ (Figure \ref{f.path}). Denote these two families of $\frac C {\delta}$ balls by $\mathcal{F}_{u,v}$ and $\mathcal{F}_{v,u}$.
\begin{figure}[t]
\begin{center}
\begin{pdfpic}
\psset{unit=6cm,dotsize=5pt}
\begin{pspicture}(0,0)(1,1)
\psframe(0,0)(1,1)
\rput{40}(.2,.2){
\rput{-40}(-.02,.03){$u$}
\psdot(0,0)
\rput{-40}(.82,.03){$v$}
\psdot(.8,0)
\rput{-40}(.2,.1){$\mathcal{F}_{u,v}$}
\rput{-40}(.6,-.1){$\mathcal{F}_{v,u}$}
\pscircle(.1,0){.05}
\pscircle(.2,0){.05}
\pscircle(.3,0){.05}
\pscircle(.5,0){.05}
\pscircle(.6,0){.05}
\pscircle(.7,0){.05}
\psline[linestyle=dashed](.4,.06)(.4,-.06)
{\psset{dotsize=3pt}
\psdot(.18,.01)
\psdot(.198,.022)
\psdot(.222,-.02)
\psdot(.199,-.01)
\psdot(.48,-.01)
\psdot(.498,-.022)
\psdot(.522,.02)
\psdot(.499,.01)
}
\psline(0,0)(.18,.01)(.198,.022)(.222,-.02)(.199,-.01)(.48,-.01)(.498,-.022)(.522,.02)(.499,.01)(.8,0)
}
\end{pspicture}
\end{pdfpic}
\end{center}
\caption{\label{f.path}Finding a short path.}
\end{figure}
Given a ball $B\in \mathcal{F}_{\{u,v\}}=\mathcal{F}_{u,v}\cup \mathcal{F}_{v,u}$, the induced subgraph $G_B$ on vertices of ${\mathcal X}$ lying in $B$ is a copy of $G_{N,p}$, where $N=N(B)$ is the number of vertices lying in $B$. Let
$${\mathcal S}_B\text{ be the event that }N(B)\in \left[\frac{N_0}{2},2N_0\right]\text{ where }N_0=\nu_d\delta^d n.$$
The Chernoff bounds imply that for $B\in \mathcal{F}_{\{u,v\}}$,
\beq{f0}
\Pr\brac{\neg{\mathcal S}_B}\leq e^{-\Omega(n\delta^d)}=e^{-n^{1-o(1)}}.
\end{equation}
This gives us that a.a.s. ${\mathcal S}_B$ holds for all pairs $u,v\in {\mathcal X}$ and all ${\mathcal B}$:
\begin{enumerate}[(A)]
\item All subgraphs $G_B$ for $B\in \mathcal{F}_{\{u,v\}}$ have a giant component $X_B$, containing at least $N_0/3$ vertices.\\
Indeed, the expected average degree in $G_B$ is $Np=\Omega(\omega)\to \infty$ and at this value the giant component contains almost all of $B$ a.a.s. In particular, on the event ${\mathcal S}_B$, we have
\[
\Pr(\exists B:|X_B|\leq N_0/3)\leq ne^{-\Omega(N_0)}\leq ne^{-\Omega(\delta^dn)}=o(1).
\]
\item \label{p.between} There is an edge between $X_B$ and $X_{B'}$ for all $B,B'\in \mathcal{F}_{\{u,v\}}.$ \\
Indeed, the probability that there is no edge between $X_B,X_{B'}$, given (A), is at most\\
\[
(1-p)^{N_0^2/9}\leq e^{-\Omega(\delta^{2d}n^2p)}\leq e^{-n^{1-o(1)}}.
\]
This can be inflated by $n^2\cdot (C\log n)^2$ to account for all pairs $u,v$ and all pairs $B,B'$.
\item \label{p.xdiam} For each $B\in \mathcal{F}_{\{u,v\}}$, the graph diameter $\mathrm{diam}(X_B)$ (the maximum number of edges in any shortest path in $X_B$) satisfies
\[
\Pr\brac{\mathrm{diam}(X_B)>\frac{100\log N}{\log Np}}\leq n^{-3}.
\]
This can be inflated by $n^2\cdot (2C\log n)$ to account for pairs $u,v$ and the choice of $B\in \mathcal{F}_{\{u,v\}}$. Fernholz and Ramachandran \cite{FR} and Riordan and Wormald \cite{RW} gave tight estimates for the diameter of the giant component, but we need this cruder estimate with a lower probability of being exceeded. We will prove this later in Lemma \ref{dimlem}.
\end{enumerate}
Part \eqref{p.xdiam} implies that with high probability, for any $u,v$ at distance $\geq \gamma$ and all $B\in \mathcal{F}_{\{u,v\}}$ and vertices $x,y\in X_B$,
\begin{equation}
\label{e.bounce}
\dx x y\leq 100\delta\times \frac{\log N}{\log Np}\leq
\frac {100} {\log n} \frac{\log n-d(\log \omega+\log\log n)+O(1)}{\log \omega - O(1)}=o(1).
\end{equation}
As the giant components $X_B$ ($B\in \mathcal{F}_{u,v}$) contain in total at least
$\frac{C}{\delta}\cdot\frac{N_0}{3}=\frac{C\nu_d\delta^{d-1}n}{3}$ vertices, the probability that $u$ has no neighbor in these giant components is at most
\[
(1-p)^{\frac{C\nu_d\delta^{d-1}n}{3}}\leq e^{-\frac{C\nu_d\delta^{d-1}np}{3}}\leq n^{-\omega C\nu_d/3}.
\]
In particular, the probability is small after multiplication by $n^2$, and thus a.a.s., for all pairs $u,v\in X_{n,p}$, $u$ has a neighbor in $X_B$ for some $B\in \mathcal{F}_{u,v}$ and $v$ has a neighbor in $X_{B'}$ for some $B'\in \mathcal{F}_{v,u}$. Now by part \eqref{p.between} and equation \eqref{e.bounce}, we can find a path
\beq{f6}
u,w_0,w_1,\dots,w_s,z_t,z_{t-1},\dots,z_1,z_0,v
\end{equation}
from $u$ to $v$ where the $w_i$'s are all in some $X_B$ for $B\in \mathcal{F}_{u,v}$ and the total Euclidean length of the path $w_0,\dots,w_s$ tends to zero with $n$, and the $z_i$'s are all in some $X_{B'}$ for some $B'\in \mathcal{F}_{v,u}$, and the total Euclidean length of the path $z_0,\dots,z_t$ tends to zero with $n$. Meanwhile, the Euclidean segments corresponding to the three edges $u,w_0$, $w_s,z_t$, and $z_0,v$ lie within $\delta$ of disjoint segments of the line segment from $u$ to $v$, and thus have total length $\leq \de u v + 6\delta,$ giving
\begin{equation}
\label{e.farcase}
\dx u v\leq \de u v + 6\delta+o(1)=\de u v+o(1).
\end{equation}
We must also handle vertices $u,v$ with $\de u v<\gamma$. We have that
\beq{f4}
\Pr(\exists v,B:v\text{ is not adjacent to }B)\leq n^2(1-p)^{N_0p/3}
\end{equation}
A fortiori, a.a.s. all vertices $u,v$ are adjacent to some vertex in any ball of radius $\gamma$. In particular, we can find $w\sim u$ within distance $\tfrac 5 2 \gamma$ of $u$, $z\sim v$ within distance $\tfrac 5 2 \gamma$ of $v$, such that
\[
\gamma \leq \de w z \leq 5\gamma,
\]
implying via \eqref{e.farcase} that
\begin{equation}
\dx u v \leq 6\gamma +6\delta.
\end{equation}
In particular, $\dx u v-\de u v$ is bounded by a constant which can be made arbitrarily small by making $n$ large.
\end{proof}
We complete the proof of Theorem \ref{t.expected} by proving
\begin{lemma}\label{dimlem}
Suppose that $Np=\omega\to\infty$, $\omega=O(\log N)$, and let $K$ denote the unique giant component of size $N-o(N)$ in $G_{N,p}$, which q.s.\footnote{A sequence of events $\mathcal{E}_n$ occurs {\em quite surely} (q.s.) if $\Pr(\neg\mathcal{E}_n)=O(n^{-\omega(1)})$.} exists. Then for $L$ large,
$$\Pr\brac{\mathrm{diam}(K)\geq \frac{L\log N}{\log Np}}\leq O(N^{-L/20}).$$
\end{lemma}
\begin{proof}
Let ${\mathcal B}(k)$ be the event that there exists a set $S$ of $k$ vertices in $G_{N,p}$ that induces a connected subgraph and in which more than half of the vertices have less than $\omega/2$ neighbors outside $S$. Also, let ${\mathcal B}(k_1,k_2)=\bigcup_{k=k_1}^{k_2}{\mathcal B}(k)$. Then for $k=o(N)$ we have
\begin{align}
\Pr({\mathcal B}(k))&\leq \binom{N}{k}p^{k-1}k^{k-2}2^k\brac{\sum_{i=0}^{\omega/2}\binom{N-k}{i}p^i(1-p)^{N-k-i}}^{k/2}\label{sp}\\
&\leq p^{-1}(2e\omega e^{-\omega/3})^k\leq Ne^{-k\omega/4}.\label{sp1}
\end{align}
{\bf Explanation of \eqref{sp}:} $\binom{N}{k}$ bounds the number of choices for $S$. We then choose a spanning tree $T$ for $S$ in $k^{k-2}$ ways. We multiply by $p^{k-1}$, the probability that $T$ exists. We then choose half the vertices $X$ of $S$ in at most $2^k$ ways and then multiply by the probability that each $x\in X$ has at most $\omega/2$ neighbors in $[N]\setminus S$.
If $\kappa=\kappa(L)=\frac{L\log N}{\log Np}$ then \eqref{sp1} implies that $\Pr({\mathcal B}(\kappa))\leq N^{1-L/10}$.
Next let $\mathcal{D}(k)=\mathcal{D}_N(k)$ be the event that there exists a set $S$ of size $k$ for which the number of edges $e(S)$ contained in $S$ satisfies $e(S)\geq 2k$. Then,
$$\Pr(\mathcal{D}(k))\leq \binom{N}{k}\binom{\binom{k}{2}}{2k}p^{2k}\leq \brac{\frac{Ne}{k}\cdot\bfrac{ke\omega}{2N}^2}^k= \bfrac{ke^3\omega^2}{2N}^k.$$
Since $\omega=O(\log N)$ we have that q.s.
\beq{sp2}
\not\exists k\in [\kappa(1),N^{3/4}]\text{ such that $\mathcal{D}(k)$ occurs}.
\end{equation}
Suppose then that ${\mathcal B}(k_1,k_2)\cup \mathcal{D}(k_1,k_2)$ does not occur, where $k_1=\kappa(L/4)$ and $k_2=N^{3/4}$. Fix a pair of vertices $v,w$ and first do a breadth first search (BFS) from $v\in K$ and create sets $S_0,S_1,\dots,S_{k_1}$ where $S_i$ is the set of vertices at distance $i$ from $v$. We continue this construction unless we find that for some $i$, we have $w\in S_i$. Failing this, we must have $S_{k_1}\neq\emptyset$ and $|S_{\leq k_1}|\geq k_1$ where $S_{\leq t}=\bigcup_{i=0}^tS_i$ for $t\geq 0$. We continue this construction for $t\geq k_1$ and we see that $k_1\leq |S_{\leq t}|\leq N^{2/3}$ implies that $|S_{t+1}|\geq \omega|S_t|/4$. This is because only vertices in $S_t$ have neighbors outside $S_{\leq t}$ and we have assumed that ${\mathcal B}(|S_{\leq t}|)$ does not occur and because of \eqref{sp2}. Thus if $|S_{t+1}|<\omega|S_t|/4$ then $S_{\leq t+1}$ has at most $\omega N^{2/3}/4$ vertices and more than $\omega N^{2/3}/2$ edges.
Thus if $L$ is large, then we find that there exists $t\leq k_1+\kappa(3/4)$ such that $|S_t|\geq N^{2/3}$. Now apply the same argument for BFS from $w$ to create sets $T_0,T_1,\ldots,T_s$, where either we reach $v$ or find that $|T_s|\geq N^{2/3}$ where $s\leq k_1+\kappa(3/4)$. At this point the edges between $S_t$ and $T_s$ are unconditioned and the probability there is no $S_t:T_s$ edge is at most $(1-p)^{N^{4/3}}=O(e^{-\Omega(N^{1/3})})$.
\end{proof}
\section{Traveling among all vertices}\label{TSP}
Our first aim is to prove Theorem \ref{tsp}; this will be accomplished in Section \ref{pgtsp}, below. In fact, we will prove the following general statement, which will also be useful in the proof of Theorem \ref{worst}:
\begin{theorem}\label{gtsp}
Let $\yd{1}\subset [0,1]^d$ denote a set of points chosen from any fixed distribution, such that the cardinality $Y=|\yd{1}|$ satisfies $\expect(Y)=\mu>0$ and
$\Pr(Y\geq k)\leq C\rho^k$ for all $k$, for some $C>0,\rho<1$. Let $\ydt$ denote a random set of points in $[0,t]^d$ obtained from $t^d$ independent copies $\yd{1}+x$ $(x\in \{0,\cdots,t-1\}^d).$
If $p>0$ is constant, $d\geq 2$, and $\gyp$ denotes the random graph on $\ydt$ with independent edge probabilities $p$, then $\exists \beta>0$ (depending on $p$ and the process generating $\yd 1$) such that
\begin{enumerate}[(i)]
\item $T(\gyp)\sim \beta t^d$ a.a.s., and
\item $T(\gyp)\leq \beta t^d+o(t^d)$ q.s.\footnote{In this context $O(n^{-\omega(1)})$ is replaced by $O(t^{-\omega(1)})$.}
\end{enumerate}
\end{theorem}
The restriction $\Pr\left(|\yd{1}|\geq k\right)\leq C\rho^k$ simply ensures that we have exponential tail bounds on the number of points in a large number of independent copies of $\yd{1}$:
\begin{observation}\label{o.Ychernoff}
For the total number $T_n$ of points in $n$ independent copies of $\yd{1}$, we have
\begin{equation}\label{e.Ychernoff}
\pushQED{\qed}
\Pr(|T_n-\mu n|>\delta \mu n)<e^{-A_\rho \delta^2 \mu^2 n}.\qedhere
\popQED
\end{equation}
\end{observation}
This is a straightforward consequence, but we do not have a reference and so we give a sketch proof in the appendix.
Note that the conditions on the distribution of $\ydt$ are satisfied for a Poisson cloud of intensity 1, and it is via this case that we will derive Theorem \ref{tsp}. Other examples for which these conditions hold include the case where $\ydt$ is simply a suitable grid of points, or is a random subset of a suitable grid of points in $[0,t]^d$, and we will make use of this latter case of Theorem \ref{gtsp} in our proof of Theorem \ref{worst}.
Our proof is by induction on $d$. For technical reasons (see also Question \ref{q3} of Section \ref{Qs}) Theorems \ref{gtsp} and \ref{tsp} are given just for $d\geq 2$, and before beginning with the induction, we must carry out a separate argument to bound the length of the tour in 1 dimension.
\subsection{Bounding the expected tour length in 1 dimension}
\label{s.d1}
We begin with the following simple lemma.
\begin{lemma}\label{permutations}
Let $\sigma$ be a permutation of $[n]$, and let $\ell(\sigma)$ be $\sum_{i=1}^{n-1} |\sigma_{i+1}-\sigma_i|$. Then
\beq{invo}
\ell(\sigma)<\sigma_n+3\cdot\mathrm{inv}(\sigma),
\end{equation}
where $\mathrm{inv}(\sigma)$ is the number of inversions in $\sigma$.
\end{lemma}
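For instance, for $n=4$ and $\sigma=(2,4,1,3)$ we have $\ell(\sigma)=2+3+2=7$ and $\mathrm{inv}(\sigma)=3$ (the inverted pairs being $(2,1)$, $(4,1)$ and $(4,3)$), so the bound \eqref{invo} reads $7<3+3\cdot 3=12$.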
\begin{proof}
We prove this by induction on $n$. It is trivially true for $n=1$ since in this case $\ell(\sigma)=0$. Assume now that $n>1$, and given a permutation $\sigma$ of $[n]$, consider permutation $\sigma'$ of $[n-1]$ obtained by truncation:
\[
\sigma'_{i}=\begin{cases}\sigma_i &\mbox{ if }\sigma_i<\sigma_n\\
\sigma_i-1 &\mbox{ if }\sigma_i> \sigma_n
\end{cases}
\]
We have by induction that
\beq{induct}
\ell(\sigma')\leq \sigma'_{n-1}+3\cdot\mathrm{inv}(\sigma').
\end{equation}
Now observe that
\begin{align*}
\ell(\sigma)&=\ell(\sigma')+|\sigma_n-\sigma_{n-1}|+|\left\{i|\sigma_i<\sigma_n<\sigma_{i+1} \mbox{ OR } \sigma_i>\sigma_n>\sigma_{i+1}\right\}|\\
&\leq \ell(\sigma')+|\sigma_n-\sigma_{n-1}|+\mathrm{inv}(\sigma)-\mathrm{inv}(\sigma'),
\end{align*}
and, recalling that $\mathrm{inv}(\sigma)=\mathrm{inv}(\sigma^{-1})$,
\[
\mathrm{inv}(\sigma)-\mathrm{inv}(\sigma')=n-\sigma_n.
\]
Since $\sigma'_{n-1}\leq \sigma_{n-1}$, \eqref{induct} gives that
\begin{align*}
\ell(\sigma)&\leq\sigma_{n-1}+3\cdot\mathrm{inv}(\sigma') +|\sigma_n-\sigma_{n-1}|+\mathrm{inv}(\sigma)-\mathrm{inv}(\sigma')\\
&=\sigma_{n-1}+\mathrm{inv}(\sigma')+2(\mathrm{inv}(\sigma)-n+\sigma_n)+|\sigma_n-\sigma_{n-1}|+\mathrm{inv}(\sigma)-\mathrm{inv}(\sigma')\\
&=\sigma_{n-1}+3\cdot\mathrm{inv}(\sigma)-2n+2\sigma_n+|\sigma_n-\sigma_{n-1}|\\
&=\sigma_n+3\cdot\mathrm{inv}(\sigma)-(2n-\sigma_{n-1}-\sigma_n-|\sigma_n-\sigma_{n-1}|)\\
&\leq \sigma_n+3\cdot\mathrm{inv}(\sigma).\qedhere
\end{align*}
\end{proof}
For the 1-dimensional case of Theorem \ref{tsp}, we have, roughly speaking, a 1-dimensional string of points joined by some random edges. Lemma \ref{permutations} allows us to prove the following lemma, which begins to approximate this situation.
\begin{lemma}\label{basic}
Consider the random graph $G=G_{n,p}$ on the vertex set $[n]$ with constant $p$, where each edge $\{i,j\}\in E(G)$ is given length $|i-j|\in \mathbb{N}$. Let $Z$ denote the minimum length of a Hamilton cycle in $G$ starting at vertex 1, assuming one exists. If no such cycle exists let $Z=n^2$. Then there exists a constant $A_p$ such that
$$\expect(Z)\leq A_p n\text{ and }Z\leq \frac{2A_pn}{p},\ q.s.$$
\end{lemma}
\begin{proof}
We first write $G=G_1\cup G_2\cup G_3$ where the $G_i$ are independent copies of $G_{n,p_1}$, where $1-p=(1-p_1)^3$. We will first construct a long path in $G_1$ via the following algorithm: We start with $v_1=1$. Then for $j\geq 1$ we let
$$\phi(j)=\min_{k\in [n]}\set{k:k\notin\set{v_1,v_2,\ldots,v_j}\text{ and }v_j\sim k}$$
and let $v_{j+1}=\phi(j)$ i.e. we move from $v_j$ to the lowest indexed $k$ that has not been previously visited. We repeat this until we reach $j_0$ such that $\phi(j_0)$ is undefined. This defines a path $P_1$ of length $\Lambda_1=\sum_{j=1}^{j_0-1}|v_{j+1}-v_j|$. It is convenient to extend the sequence $v_1,\ldots,v_{j_0}$ by $v_{j_0+1},\ldots,v_{n}$ where the latter is $[n]\setminus\{v_1,\ldots,v_{j_0}\}$ in increasing order. Now think of $v_1,v_2,\ldots,v_{n}$ as a permutation of $[n]$. Then Lemma \ref{permutations} implies that the length $\Lambda_1$ of the initial part corresponding to the path is at most $\ell(v)<n+3\cdot\mathrm{inv}(v)$.
Observe that $\Pr(j_0\leq n-k)\leq n(1-p_1)^k$. This is because at $j_0$ we find that $v_{j_0}$ has no neighbors in the set of unvisited vertices and the existence of such edges is unconditioned at this point. So,
\beq{b1}
j_0\leq n-\frac{\log^2n}{p_1}\ q.s.
\end{equation}
Now let $\alpha_j=|\set{i<j:v_i>v_j}|,j=1,2,\ldots,n$ so that $\mathrm{inv}(v)=\alpha_1+\alpha_2+\cdots+\alpha_n$.
Let $L_j=\max\set{v_i:1\leq i\leq j}$. Then if $i<j$ and $v_i>v_j$ we must have $j\leq v_j<v_i\leq L_j$. So,
\beq{Lj1}
\alpha_j\leq \Delta_j=L_j-j.
\end{equation}
Furthermore, we will need
\beq{Lj2}
|v_{i+1}-v_i|\leq |v_{i+1}-(i+1)|+|v_i-i|+1\leq \Delta_{i+1}+\Delta_i+1\qquad\text{ for }1\leq i<j_0.
\end{equation}
It is important therefore to analyze the sequence $\Delta_j,1\leq j\leq j_0$. We observe that
\beq{Lj5}
\Pr(L_{j+1}=L_j+u)\ \ \begin{cases}=1-(1-p_1)^{\Delta_j}&u=0,\\=p_1(1-p_1)^{\Delta_j+u-1}&u>0.\end{cases}
\end{equation}
Furthermore, these probabilities hold regardless of previous edge exposures. This is because edges incident with $v_j$ and vertices not on $P_1$ have not been exposed.
It will follow from \eqref{Lj5} that
\begin{align}
&\Delta_j\leq \frac{\log^2n}{p_1},\,\forall j,\ q.s.\label{Lj4}\\
&\expect\brac{\sum_{j=1}^{j_0}\Delta_j}\leq \frac{n}{p_1}\label{Lj3}.\\
&\sum_{j=1}^{j_0}\Delta_j\leq \frac{2n}{p_1},\ q.s.\label{Lj3a}
\end{align}
We will prove \eqref{Lj4}, \eqref{Lj3}, \eqref{Lj3a} momentarily, but first let us use them to finish the proof of the lemma.
It follows from Lemma \ref{permutations}, \eqref{Lj1} and \eqref{Lj3} that
$$\expect\Lambda_1\leq A_1n,$$
where $A_1=1+\frac{3}{p_1}$.
It remains to show that there is a Hamilton cycle of length not much greater than $\Lambda_1$.
Let $J=\set{v_{j_0+1},\ldots,v_n}$. We will use the edges of $G_2$ to insert $J$ into the path $P_1$. Let $v_j\in J$. Assume that $v_j\geq n/2$, the argument for $v_j<n/2$ is similar. We examine $k=v_j-1,v_j-2,\ldots$ in turn until we find a $k$ such that (i) $(v_j,v_j-k)\in E(G_2)$, $v_j-k=v_\ell\notin J$ and (ii) $(v_j,v_{\ell-1})\in E(G_2)$. We will find such a $k$ q.s. after examining at most $\log^2n$ possibilities. Using \eqref{Lj2} and \eqref{Lj4} we see that replacing the edge $(v_{\ell-1},v_\ell)$ by a path $v_{\ell-1},v_j,v_\ell$ q.s. incorporates $v_j$ into our path at a cost of at most $O\brac{\log^2n+\frac{\log^2n}{p_1}}$ and \eqref{b1} implies that there is room to insert all vertices in $J$ in this way, without using the same $v_\ell$ more than once. This gives us a Hamilton path $x_1,x_2,\dots,x_n$ in $G_1\cup G_2$ q.s. and the total added cost over the cost of $P_1$ is q.s. $O(\log^4n)$. There is only an exponentially small probability that we cannot find $G_3$-edges $\{x_1,x_{j+1}\}$, $\{x_j,x_n\}$ which now give us a Hamilton cycle; since the maximum value of $Z$ is just $n^2$, this gives $\expect(Z)\leq A_p n$, as desired.
{\bf Proof of \eqref{Lj4}:}
First of all we note from \eqref{Lj5} that
$$\Pr\brac{\exists j: L_{j+1}\geq L_j+\frac{\log^2n}{4p_1}}\leq n(1-p_1)^{\log^2n/4p_1}\leq ne^{-\log^2n/4}.$$
So if there exists $j$ with $\Delta_j\geq \frac{\log^2n}{p_1}$ then q.s. there must be $k$ such that $\Delta_k\in\interval{\frac{\log^2n}{2p_1}}{\frac{3\log^2n}{4p_1}}$. But then \eqref{Lj5} implies that with probability $1-O(e^{-\log^2n/2})$, $L_{k+r}=L_k$ for $r\leq n$ and this completes the proof of \eqref{Lj4}.
{\bf Proof of \eqref{Lj3}, \eqref{Lj3a}:} It follows from \eqref{Lj5} that the sum in \eqref{Lj3} is bounded by the sum of $n$ independent geometric random variables with success probability $p_1$. This gives both the bound on expectation and the q.s. bound.
\end{proof}
We have:
\begin{corollary}\label{cor1}
Suppose that we replace the length of edge $(i,j)$ in Lemma \ref{basic} by $\xi_{i}+\cdots+\xi_{j-1}$ where $\xi_1,\xi_2,\ldots,\xi_n$ are random variables with mean bounded above by $\mu$ and exponential tails. If $\xi_1,\ldots,\xi_n$ are independent of $G_{n,p}$ then $\expect(Z)\leq \frac{A_p\mu n}{p}$.
\end{corollary}
\begin{proof}
The bound on the expectation follows directly from Lemma \ref{basic} and the linearity of expectation.
\end{proof}
Let us observe now that we get an upper bound $\expect(T(\gy{1}{t}{p}))\leq A_p t$ on the length of a tour in 1 dimension. We have
\[
\expect(T(\gy{1}{t}{p}))=\sum_{n=0}^{\infty}\Pr(|\gy{1}{t}{p}|=n)\expect\left(T(\gy{1}{t}{p})\middle| |\gy{1}{t}{p}|=n\right).
\]
When conditioning on $|\gy{1}{t}{p}|=n$, we let $p_1<p_2<\cdots<p_n$ be the points of $\gy{1}{t}{p}$ in $[0,t]$. We choose $k\in \{0,\ldots,n-1\}$ uniformly at random and let $\xi_i=||p_{k+i+1}-p_{k+i}||$, where the indices of the $p_j$ are evaluated modulo $n$. We now have $\mu(\xi_i)\leq \frac{2t}{n}$ for all $i$, and Corollary \ref{cor1} gives that
\[
\expect\left(T(\gy{1}{t}{p})\middle| |\gy{1}{t}{p}|=n\right)\leq \frac{A_pn}{p}\cdot \frac{2t}{n}=O(t),
\]
and thus
\begin{equation}
\label{d1bound}
\expect\left(T(\gy{1}{t}{p})\right)\leq A_p t.
\end{equation}
\subsection{The asymptotic tour length}
\label{pgtsp}
Our proof of Theorem \ref{gtsp} will use recursion, by dividing the $[t]^d$ cube into smaller parts. However, since our divisions of the cube must not cross boundaries of the elemental regions $\yd 1$, we cannot restrict ourselves to subdivisions into perfect cubes (in general, the integer $t$ may not have the divisors we like).
To this end, if $L=T_1\times T_2\times \cdots \times T_d$ where each $T_i$ is either $[0,t]$ or $[0,t-1]$, we say $L$ is a $d$-dimensional \emph{near-cube} with sidelengths in $\{t-1,t\}$. For $0\leq d'\leq d$, we define the canonical example $L_d^{d'}:=[0,t]^{d'}\times [0,t-1]^{d-d'}$ for notational convenience, and let
\[
\Phi_p^{d,d'}(t)=\expect\left(T(\gyp\cap L_d^{d'})\right).
\]
so that
$$\Phi_p^{d}(t):=\Phi_p^{d,d}(t)=\Phi_p^{d,0}(t+1).$$
In the unlikely event that $\gyp\cap L_d^{d'}$ is not Hamiltonian, we take $T(\gyp\cap L_d^{d'})= t^{d+1}\sqrt d$, for technical reasons.
Our first goal is an asymptotic formula for $\Phi$:
\begin{lemma}\label{l.Fasym}
There exists $\beta_p>0$ such that
\[\Phi^{d,d'}_p(t)\sim \beta_p t^d.\]
\end{lemma}
The proof is by induction on $d\geq 2$. We prove the base case $d=2$ along with the general case. We begin with a technical lemma.
\begin{lemma}\label{eq10}
There is a constant $F_{p,d}>0$ such that
\beq{Apd}
\Phi^{d,d'}_p(t)\leq \Phi^{d,d'-1}_p(t)+F_{p,d}t^{d-1}
\end{equation}
for all $t$ sufficiently large. In particular, there is a constant $A_{p,d}>0$ such that
\beq{eq100}
\Phi^d_p(t+h)\leq \Phi^d_p(t)+A_{p,d} h t^{d-1}
\end{equation}
for sufficiently large $t$ and $1\leq h\leq t$.
\end{lemma}
\begin{proof}
We let $S$ denote the subgraph of $\gy d {t} p\cap L_d^{d'}$ induced by the difference $L_d^{d'}\setminus L_d^{d'-1}$.
By ignoring the $d'$th coordinate, we obtain the $(d-1)$ dimensional set $\pi(S)$, for which induction on $d$ (or line \eqref{d1bound} if $d=2$) bounds the expected tour length $\expect(T(\pi(S)))=\Phi^{d-1,d'-1}_p(t)$,
and so
\[
\Phi^{d-1,d'-1}_p(t)\leq D_{p,d-1} t^{d-1}
\]
for some constant $D_{p,d-1}$, for sufficiently large $t$.
We have that
\[
\expect(T(S))\leq \expect(T(\pi(S)))+d^{1/2}\expect(|V(S)|)\leq D_{p,d-1}t^{d-1}+d^{1/2}t^{d-1}.
\]
The first inequality stems from the fact that the points in $L_d^{d'}\setminus L_d^{d'-1}$ have a $d'$ coordinate in $[t-1,t]$.
Now if $\gy d {t} p\cap L_d^{d'-1}$ and $S$ are both Hamiltonian, then we have
\beq{decompose}
T(\gy d {t} p\cap L_d^{d'})\leq T(\gy d {t} p\cap L_d^{d'-1})+T(S)+O_d(t)
\end{equation}
which gives us the Lemma, by linearity of expectation. We have \eqref{decompose} because we can patch together the minimum cost Hamilton cycle $H$ in $\gy d {t} p\cap L_d^{d'-1}$ and the minimum cost path $P$ in $S$ as follows: Let $u_1,v_1$ be the endpoints of $P$. If there is an edge $u,v$ of $H$ such that $(u_1,u)$ and $(v_1,v)$ are edges in $\gy{d}{t}{p}$ then we can create a cycle $H_1$ through $\gy d {t} p\cap L_d^{d'-1}\cup P$ at an extra cost of at most $2d^{1/2}t$. The probability there is no such edge is at most $(1-p^2)^{t/2}$, which is negligible given the maximum value of $T(\gy d {t} p\cap L_d^{d'})$.
On the other hand, the probability that either of $\gy d {t} p\cap L_d^{d'-1}$ or $S$ is not Hamiltonian is exponentially small in $t$, which is again negligible given the maximum value of $T(\gy d {t} p\cap L_d^{d'})$.
\end{proof}
Our argument is an adaptation of that in Beardwood, Halton and Hammersley \cite{BHH} or Steele \cite{S}, with modifications to address difficulties introduced by the random set of available edges. First we introduce the concept of a decomposition into near-cubes. (Allowing near-cube decompositions is necessary for the end of the proof, beginning with Lemma \ref{cruder}).
We say that a partition of $L_d^{d'}$ into $m^d$ near-cubes $S_\alpha$ with sidelengths in $\{u,u+1\}$ indexed by $\alpha\in [m]^d$ is a \emph{decomposition} if for each $1\leq b\leq d$, there is an integer $M_b$ such that, letting
\[
f_b(a)=\begin{cases}
a\cdot u \mbox{ if } a<M_b\\
a\cdot u+(a-M_b) \mbox{ if } a\geq M_b
\end{cases}
\]
we have that
\[
S_\alpha=[f_1(\alpha_1-1),f_1(\alpha_1)]\times [f_2(\alpha_2-1),f_2(\alpha_2)]\times \cdots \times [f_d(\alpha_d-1),f_d(\alpha_d)].
\]
Observe that so long as $u< t^{1/2}$, $L_d^{d'}$ always has a decomposition into near-cubes with sidelengths in $\{u,u+1\}$.
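For instance, if $t=10$, $m=3$ and $u=3$ then taking $M_b=m(u+1)-t=2$ gives $f_b(0)=0$, $f_b(1)=3$, $f_b(2)=6$ and $f_b(3)=10$, so a coordinate of length $t=10$ is split into intervals of lengths $3,3,4$; a coordinate of length $t-1=9$ instead uses $M_b=3$ and is split into intervals of lengths $3,3,3$.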
First we note that tours in not-too-small near-cubes of a decomposition can be pasted together into a large tour at a reasonable cost:
\begin{lemma}
\label{paste}
Fix $\delta>0$, and suppose $t=mu$ for $u=t^\gamma$ for $\delta<\gamma\leq 1$ ($m,u\in \mathbb{Z}$), and suppose $S_\alpha$ $(\alpha\in [m]^d)$ is a decomposition of $L_d^{d'}.$ We let ${\mathcal Y}^{d,\alpha}_{t,p}:={\mathcal Y}^d_{t,p}\cap S_\alpha$. We have
\[
T(\gyp\cap L_d^{d'})\leq \sum_{\alpha\in [m]^d}T(\gypa)+4m^du\sqrt d\qquad\mbox{with probability at least}\quad 1-e^{-\Omega(u^d p)}.
\]
\end{lemma}
\begin{proof}
Let ${\mathcal B},{\mathcal C}$ denote the events
\begin{align*}
{\mathcal B}&=\set{\exists \alpha:{\mathcal Y}^{d,\alpha}_{t,p}\text{ is not Hamiltonian}}\\
{\mathcal C}&=\set{\exists \alpha:\left||{\mathcal Y}^{d,\alpha}_{t}|-u^d\right |\geq \delta u^d},
\end{align*}
and let $\mathcal{E}={\mathcal B}\cup{\mathcal C}$.
Now $\Pr({\mathcal B})\leq m^de^{-\Omega(u^dp)}$ and, by Observation \ref{o.Ychernoff}, $\Pr({\mathcal C})\leq m^de^{-\Omega(u^d)}$ and so $\Pr(\mathcal{E})\leq e^{-\Omega(u^dp)}$. Assume therefore that $\neg\mathcal{E}$ holds. Each subcube $S_\alpha$ will contain a minimum length tour $H_\alpha$. We now order the subcubes $\{S_\alpha\}$ as $T_1,\ldots,T_{m^d}$, such that for $S_\alpha=T_i$ and $S_\beta=T_{i+1}$, we always have that the Hamming distance between $\alpha$ and $\beta$ is 1. Our goal is to inductively assemble a tour through the subcubes $T_1,T_2,\dots,T_j$ from the smaller tours $H_\alpha$ with a small number of additions and deletions of edges.
Assume inductively that for some $1\leq j<m^d$ we have added and deleted edges and found a single cycle $C_j$ through the points in $T_1,\ldots,T_j$ in such a way that (i) the added edges have total length at most $4\sqrt d ju$ and (ii) we delete one edge from $\tau(T_1)$, $\tau(T_j)$ and two edges from each $\tau(T_i),2\leq i\leq j-1$. To add the points of $T_{j+1}$ to create $C_{j+1}$ we delete one edge $(u,v)$ of $\tau(T_j)\cap C_j$ and one edge $(x,y)$ of $\tau(T_{j+1})$ such that both edges $\{u,x\},\{v,y\}$ are in the edge set of $\gyp$. Such a pair of edges will satisfy (i) and (ii) and the probability we cannot find such a pair is at most $(1-p^2)^{(u^d/2-1)u^d/2}$. Thus with probability at least $1-e^{-\Omega(u^d p)}$ we build the cycle $C_{m^d}$ with a total length of added edges $\leq 4\sqrt d m^d u$.
\end{proof}
Linearity of expectation (and the polynomial upper bound $t^{d+1}\sqrt d$ on $T(\gyp)$) now gives a short-range recursive bound on $\Phi^d_p(t)$ when $t$ factors reasonably well:
\begin{lemma}\label{lem1} For all large $u$ and $1\leq m\leq u^{10}$ $(m,u\in \mathbb{N})$,
$$\Phi^d_p(mu)\leq m^d(\Phi^d_p(u)+B_{d}u)$$
for some constant $B_{d}.$\qed
\end{lemma}
\noindent Note that here we are using a decomposition of $[mu]^d$ into $m^d$ subcubes with sidelength $u$; near-cubes are not required.
To get an asymptotic expression for $\Phi^d_p(t)$ we now let
$$\beta=\liminf_t\frac{\Phi^d_p(t)}{t^d}.$$
Choose $u_0$ large and such that
$$\frac{\Phi^d_p(u_0)}{u_0^d}\leq \beta+\varepsilon$$
and then define the sequence $u_k,k\geq -1$ by $u_{-1}=u_0$ and $u_{k+1}=u_k^{10}$ for $k\geq 0$.
Assume inductively that for some $i\geq 0$ we have
\begin{equation}\label{e.inductively}
\frac{\Phi^d_p(u_i)}{u_i^d}\leq\beta+\varepsilon+\sum_{j=-1}^{i-2}\left(\frac{A_{p,d}}{u_j}+\frac{B_{p,d}}{u_j^{d-1}}\right).
\end{equation}
This is true for $i=0$, and then for $i\geq 0$ and $0\leq u\leq u_i$ and $d\leq m\in [u_{i-1},u_{i+1}]$ we have
\begin{align}
\frac{\Phi^d_p(mu_i+u)}{(mu_i+u)^d}&\leq \frac{\Phi^d_p(mu_i)+A_{p,d}u(mu_i)^{d-1}}{(mu_i)^d}\nonumber\\
&\leq \frac{m^d(\Phi^d_p(u_i)+B_{p,d}u_i)+A_{p,d}u(mu_i)^{d-1}}{(mu_i)^d}\nonumber\\
&\leq \beta+\varepsilon+\sum_{j=-1}^{i-2}\left(\frac{A_{p,d}}{u_j}+\frac{B_{p,d}}{u_j^{d-1}}\right) +\frac{B_{p,d}}{u_i^{d-1}}+\frac{A_{p,d}}{m}\nonumber\\
&\leq \beta+\varepsilon+\sum_{j=-1}^{i-1}\left(\frac{A_{p,d}}{u_j}+ \frac{B_{p,d}}{u_j^{d-1}}\right).\label{zxc}
\end{align}
Putting $m=u_{i+1}/u_i$ and $u=0$ into \eqref{zxc} completes the induction. We deduce from \eqref{e.inductively} and \eqref{zxc} that for $i\geq 0$ we have
\beq{zxcv}
\frac{\Phi^d_p(t)}{t^d}\leq \beta+\varepsilon+\sum_{j=-1}^{\infty}\brac{\frac{A_{p,d}}{u_j}+\frac{B_{p,d}}{u_j^{d-1}}}\leq \beta+2\varepsilon\qquad\text{ for } t\in J_i=[u_{i-1}u_i,u_i(u_{i+1}+1)]
\end{equation}
Now $\bigcup_{i=0}^\infty J_i=[u_0^2,\infty)$ and since $\varepsilon$ is arbitrary, we deduce that
\beq{beta}
\beta=\lim_{t\to\infty}\frac{\Phi^d_p(t)}{t^d},
\end{equation}
We can conclude that
\[
\Phi^d_p(t)\sim \beta t^d,
\]
which, together with Lemma \ref{eq10}, completes the proof of Lemma \ref{l.Fasym}, once we show that $\beta>0$ in \eqref{beta}. To this end, we let $\rho$ denote $\Pr(|\yd 1|\geq 1)$, so that $\expect(|\ydt|)\geq\rho t^d$. We say $x\in \{0,\dots,t-1\}^d$ is \emph{occupied} if there is a point in the copy $\yd 1+x$. Observing that a unit cube $[0,1]^d+x$ $(x\in \{0,\dots,t-1\}^d)$ is at distance at least 1 from all but $3^d-1$ other cubes $[0,1]^d+y$, we certainly have that the minimum tour length through $\ydt$ is at least $\frac{{\mathcal O}}{3^d-1}$, where ${\mathcal O}$ is the number of occupied $x$. Linearity of expectation now gives that $\beta\geq\rho/(3^d-1)>0$, completing the proof of Lemma \ref{l.Fasym}.
\bigskip
Before continuing, we prove the following much cruder version of Part (ii) of Theorem \ref{gtsp}:
\begin{lemma}\label{cruder}
For any fixed $\varepsilon>0$, $T(\gyp)\leq t^{d+\varepsilon}$ q.s.
\end{lemma}
\begin{proof}
We let $m=\flr{t^{1-\varepsilon/2}}$, $u=\flr{t/m}$, and let $\{\gypta\}$ be a decomposition of $\gyp$ into $m^d$ near-cubes with sidelengths in $\{u,u+1\}$. We have that q.s. each $\gypta$ has (i) $\approx u^d$ points, and (ii) a Hamilton cycle $H_\alpha$. We can therefore q.s. bound all $T(\gypta)$ by $d u \cdot u^d$, and Lemma \ref{paste} gives that q.s. $T(\gy d t p)\leq 4dut^d+4m^du\sqrt{d}.$
\end{proof}
To prove Theorem \ref{gtsp}, we now consider a decomposition $\{S_\alpha\}$ ($\alpha\in [m]^d$) of $\ydt$ into $m^d$ near-cubes of side-lengths in $\{u,u+1\}$, for $\gamma=1-\frac \varepsilon 2$, $m=\flr{t^\gamma},$ and $u=\flr{t/m}$.
Lemma \ref{l.Fasym} gives that
$$\expect T(\gypa)\sim \beta_p u^d \sim \beta_p t^{(1-\gamma)d}.$$
Let
\[
{\mathcal S}_\gamma(\gyp)=\sum_{\alpha\in [m]^d}\min\set{T(\gypa),2dt^{(1-\gamma)(d+\varepsilon)}}.
\]
Note that ${\mathcal S}_\gamma(\gyp)$ is the sum of $t^{\gamma d}$ independent, identically distributed, bounded random variables.
Applying Hoeffding's theorem we see that for any $t$, we have
$$\Pr(|{\mathcal S}_\gamma(\gy d t p)-m^d \expect(T(\gy d {u} p))|\geq T)\leq
2\exp\left(-\frac{2T^2}{4m^dd^2t^{2(1-\gamma)(d+\varepsilon)}}\right).$$
Putting $T=t^{d-\varepsilon}$ for small $\varepsilon$, we see that
\beq{eq3}
{\mathcal S}_\gamma(\gy d t p)=\beta_p t^{d}+o(t^d)\qquad q.s.
\end{equation}
Now, since q.s. $T(\gypa)\leq 2dt^{(1-\gamma)(d+\varepsilon)}$ for all $\alpha$ by Lemma \ref{cruder}, we have that q.s.
${\mathcal S}_\gamma({\mathcal Y}^{d}_{t,p})=\sum_{\alpha}T(\gypa)$,
so that Lemma \ref{paste} implies that
\beq{f1}
T(\gy d t p)\leq {\mathcal S}_\gamma(\gy d t p)+\delta_2\text{ where }\delta_2=o(t^d)\qquad q.s.
\end{equation}
It follows from \eqref{eq3} and \eqref{f1} and the fact that $\Pr(|\ydt |=t^d)=\Omega(t^{-d/2})$ that
\beq{f2}
T(\gyp)\leq\beta_p t^d+o(t^{d})\qquad q.s.
\end{equation}
which proves part (ii) of Theorem \ref{gtsp}.
Of course, we have from Lemma \ref{l.Fasym} that
\begin{equation}
\expect(T(\gyp))= \beta^d_p t^d+\delta_1\text{ where }\delta_1=o(t^d),
\end{equation}
and we show next that this together with \eqref{f1} implies part (i) of Theorem \ref{gtsp}, that:
\beq{f3}
T=T(\gyp)=\beta_p t^d +o(t^d)\qquad a.a.s.
\end{equation}
We choose $0\leq\delta_3=o(t^{d})$ such that $0\leq\delta_2,|\delta_1|=o(\delta_3)$. Let $I=[\beta t^{d}-\delta_3,\beta t^{d}+\delta_2]$. Then we have
\begin{multline*}
\beta t^{d}+\delta_1=\expect(T(\gyp)\mid T(\gyp)\geq \beta t^{d}+\delta_2)\Pr(T(\gyp)\geq \beta t^{d}+\delta_2)\\
+\expect(T(\gyp)\mid T(\gyp)\in I)\Pr(T(\gyp)\in I)+\\
\expect(T(\gyp)\mid T(\gyp)\leq \beta t^{d}-\delta_3)\Pr(T(\gyp)\leq \beta t^{d}-\delta_3).
\end{multline*}
Now $\varepsilon_1=\expect(T(\gyp)\mid T(\gyp)\geq \beta t^{d}+\delta_2)\Pr(T(\gyp)\geq \beta t^{d}+\delta_2)=O(t^{-\omega(1)})$ since $T(\gyp)$ is q.s. bounded by a polynomial in $t$
and $\Pr(T(\gyp)\geq \beta t^{d}+\delta_2)=O(t^{-\omega(1)})$.
So, if $\lambda=\Pr(T(\gyp)\in I)$ then we have
$$\beta t^{d}+\delta_1\leq \varepsilon_1+(\beta t^{d}+\delta_2)\lambda+(\beta t^{d}-\delta_3)(1-\lambda)$$
or
$$\lambda\geq \frac{\delta_1-\varepsilon_1+\delta_3}{\delta_2+\delta_3}=1-o(1),$$
and this proves \eqref{f3} completing the proof of Theorem \ref{gtsp}.\qed
To derive Theorem \ref{tsp}, we now let $\gwp$ be the graph on the set of points in $[0,t]^d$ which is the result of a Poisson process of intensity 1. Our task is now to control the variance of $T(\gwp)$. Here we follow Steele's argument \cite{S} with only small modifications.
Let $\mathcal{E}_t$ denote the event that
\[
T(\gw d {2t} p)\leq \sum_{\alpha\in [2]^d} T(\gwpa) +2^{d+2}t\sqrt d.
\]
Observe that Lemma \ref{paste} implies that
\begin{equation}\label{pneg}
\Pr(\neg\mathcal{E}_t)\leq e^{-\Omega(t^d p)}.
\end{equation}
We define the random variable $\lambda(t)=T(\gw d {t} p)+10\sqrt d t,$ and let $\lambda_i$ denote independent copies. Conditioning on $\mathcal{E}_t$, we have
\begin{equation}\label{ideq}
\lambda_0 (2t)\leq \sum_{i=1}^{2^d}\lambda_i(t)-4\sqrt d t \leq \sum_{i=1}^{2^d}\lambda_i(t).
\end{equation}
In particular, \eqref{pneg} implies that there is enough room that, letting $\Upsilon(t)=\expect(\lambda(t))$ and $\Psi(t)=\expect(\lambda(t)^2)$, we have for sufficiently large $t$ that
\[
\Psi(2t)\leq 2^d\Psi(t)+2^d(2^d-1)\Upsilon^2(t)
\]
and for
\[
{\mathcal V}(t):={\bf Var}(T(\gw d {t} p))=\Psi(t)-\Upsilon(t)^2,
\]
we have
\[
\frac{{\mathcal V}(2t)}{(2t)^{2d}}-\frac{1}{2^d}\frac{{\mathcal V}(t)}{(t)^{2d}}\leq \frac{\Upsilon^2(t)}{t^{2d}}-\frac{\Upsilon^2(2t)}{(2t)^{2d}}.
\]
Now replacing $t$ by $2^kt$ and summing over $k=0,\dots,M-1$ gives
\[
\sum_{k=1}^M\frac{{\mathcal V}(2^kt)}{(2^kt)^{2d}}-
\frac 1 {2^d} \sum_{k=0}^{M-1} \frac{{\mathcal V}(2^kt)}{(2^kt)^{2d}}\leq
\frac{\Upsilon^2(t)}{t^{2d}}-\frac{\Upsilon^2(2^M t)}{(2^M t)^{2d}}\leq \frac{\Upsilon^2(t)}{t^{2d}}
\]
and so, solving for the first sum, we find
\begin{equation}
\label{varsum}
\sum_{k=1}^M\frac{{\mathcal V}(2^kt)}{(2^kt)^{2d}}\leq
\brac{1-\frac{1}{2^d}}^{-1}\left(\frac{{\mathcal V}(t)}{t^{2d}}+\frac{\Upsilon^2(t)}{t^{2d}}\right)<\infty.
\end{equation}
Still following Steele, we let $N(t)$ be the Poisson counting process on $[0,\infty).$ We fix a random embedding ${\mathcal U}$ of $\mathbb{N}$ in $[0,1]^d$ as $u_1,u_2,\dots$ and a random graph ${\mathcal U}_{p}$ where each edge is included with independent probability $p$. We let ${\mathcal U}_{n,p}$ denote the restriction of this graph to the first $n$ natural numbers. In particular, note that ${\mathcal U}_{N(t^d),p}$ is equivalent to ${\mathcal W}_{t,p}$, scaled from $[0,t]^d$ to $[0,1]^d$. Thus, applying Chebyshev's inequality to \eqref{varsum} gives that
\begin{equation}
\sum_{k=0}^\infty\Pr\left(\left|\frac{t2^k T({\mathcal U}_{N((t2^k)^d),p})}{(t2^k)^d}-\beta^d_p\right|>\varepsilon\right)<\infty
\end{equation}
and so for $t>0$ that
\begin{equation}\label{2powlim}
\lim_{k\to \infty}\frac{T({\mathcal U}_{N((t2^k)^d),p})}{(t2^k)^{d-1}}=\beta\qquad a.s.
\end{equation}
Now choosing some large integer $\ell$, we have that \eqref{2powlim} holds simultaneously for all the (finitely many) integers $t\in S_\ell=[2^\ell,2^{\ell+1})$; and for any sufficiently large $r\in \mathbb{R}$, we have that $r\in [2^kt,2^k(t+1))$ for some $t\in S_\ell$ and some integer $k$.
Unlike the classical case $p=1$, in our setting, we do not have monotonicity of $T({\mathcal U}_{n,p})$ in $n$. Nevertheless, we show a kind of continuity of the tour length $T({\mathcal U}_{n,p})$:
\begin{lemma}\label{sandwich}
For all $\varepsilon>0$, $\exists \delta>0$ such that for all $0\leq k<\delta n$, we have
\begin{equation}
\label{cont}
T({\mathcal U}_{n+k,p})<T({\mathcal U}_{n,p})+\varepsilon n^{\frac{d-1}{d}},\qquad q.s.
\end{equation}
\end{lemma}
\begin{proof}
We consider cases according to the size of $k$.
\noindent\textbf{Case 1}: $k\leq n^{\frac 1 3}$.\\
Note that we have $T({\mathcal U}_{n+1,p})<T({\mathcal U}_{n,p})+\sqrt d$ q.s., since we can q.s. find an edge in the minimum tour through ${\mathcal U}_{n,p}$ whose endpoints are both adjacent to vertex $n+1$. Applying this inequality $n^{\frac 1 3}$ times now gives \eqref{cont}.
\noindent \textbf{Case 2:} $k> n^{\frac 1 3}$. \\
In this case the restriction $\mathcal{R}$ of ${\mathcal U}_{n+k,p}$ to $\{n+1,\dots,n+k\}$ is q.s. (with respect to $n$) Hamiltonian \cite{BFF}. In particular, by Theorem \ref{gtsp}, we can q.s. find a tour $T$ through $\mathcal{R}$ of length $\leq 2\beta^d_p k^{\frac{d-1}d}$. Finally, there are, q.s., edges $\{x,y\}$ and $\{w,z\}$ on the minimum tours through ${\mathcal U}_{n,p}$ and $\mathcal{R}$, respectively, such that $x\sim w$ and $y\sim z$ in ${\mathcal U}_{n+k,p}$, giving a tour of length
\[
T({\mathcal U}_{n+k,p})\leq T({\mathcal U}_{n,p})+ 2\beta^d_p k^{\frac{d-1}d}+4\sqrt d.\qedhere
\]
\end{proof}
Applying Lemma \ref{sandwich} and the fact that $N((1+\delta)r^d)<(1+2\delta)N(r^d)$ q.s. (with respect to $r$) gives that for some $\varepsilon_\ell>0$ which can be made arbitrarily small by increasing $\ell$, we have q.s.
\[
T({\mathcal U}_{N(((t+1)2^k)^d),p})-\varepsilon_\ell r^{d-1}
<
T({\mathcal U}_{N(r^d),p})
<
T({\mathcal U}_{N((t2^k)^d),p})+\varepsilon_\ell (t2^k)^{d-1},
\]
and so dividing by $r^{d-1}$ and taking limits we find that a.s.
\[
(\beta-\varepsilon_\ell)(1+\tfrac{1}{2^\ell})^{d-1}
\leq
\liminf_{r\to \infty} \frac{T({\mathcal U}_{N(r^d)})}{r^{d-1}}
\leq
\limsup_{r\to \infty} \frac{T({\mathcal U}_{N(r^d)})}{r^{d-1}}
\leq
\frac{\beta+\varepsilon_\ell}{(1+\frac{1}{2^\ell})^{d-1}}.
\]
Since $\ell$ may be arbitrarily large, we find that
\[
\lim_{r\to \infty} \frac{T({\mathcal U}_{N(r^d)})}{r^{d-1}}=\beta.
\]
Now the elementary renewal theorem guarantees that
\[
N^{-1}(n)\sim n,\qquad a.s.
\]
So we have a.s.
\[
\lim_{n\to \infty} \frac{T({\mathcal U}_{n,p})}{n^{\frac{d-1}{d}}}= \lim_{n\to \infty} \frac{T({\mathcal U}_{N(N^{-1}(n)),p})}{(N^{-1}(n))^{\frac{d-1}{d}}}\frac {(N^{-1}(n))^{\frac{d-1}{d}}}{n^{\frac{d-1}{d}}}=\beta\cdot 1=\beta.
\]
\subsection{The case $p(n)\to 0$}\label{TSPworst}
We will in fact show that \eqref{Tgxp} holds q.s. for $np\geq \omega\log n$, for some $\omega\to\infty$. That we also get the statement of Theorem \ref{worst} can be seen by following the proof carefully, but this also follows as a consequence directly from the appendix in Johansson, Kahn and Vu \cite{JKV}.
We first show that q.s.
\beq{W1}
T(G_{\cX,p})=\Omega(n^{(d-1)/d}/p^{1/d}).
\end{equation}
Let $Y_1$ denote the number of vertices whose closest $G_{n,p}$-neighbor is at distance at least $\frac{1}{(np)^{1/d}}$. Observe first that if $r=1/(np)^{1/d}$ then with probability $\geq \brac{1-\nu_d r^dp}^{n-1}\approx e^{-\nu_d}$, there are no $G_{n,p}$-neighbors within distance $1/(np)^{1/d}$ of any fixed $v\in G_{\cX,p}$. Thus $\expect(Y_1)\geq ne^{-\nu_d}/2$ and one can use the Azuma-Hoeffding inequality to show that $Y_1$ is concentrated around its mean. Thus q.s. $T(G_{\cX,p})\geq n^{(d-1)/d}e^{-\nu_d}/4p^{1/d}$, proving \eqref{W1}.
We will for convenience prove
\begin{theorem}\label{convenience}
Let $\ydt\subset [0,t]^d$ denote the set of points given by a Poisson process of intensity one in $[0,t]^d$, where $t=n^{1/d}$. Then there exists a constant $\gamma_p^d$ such that
$$T(\gyp)\leq \gamma_p^d \frac{t^d}{p^{1/d}}\qquad q.s.$$
\end{theorem}
\begin{proof}
We consider independent copies of ${\mathcal Y}^d_{t,p_i},\,i=0,1,\ldots,k+1$. We will let $p_0=p_1=p/3$ and $p_i=p_1/2^{i-1},i=2,\ldots,k=\log_2t$ and define $p_{k+1}$ so that $1-p=\prod_{j=0}^{k+1}(1-p_j)$. Observe that with this choice, we have that $\gyp$ decomposes as $\gyp=\bigcup_{i=0}^{k+1}G_i$, where the $G_i$ are spanning subgraphs given by independent instances of ${\mathcal Y}^d_{t,p_i}$.
We continue by constructing a large cycle, using only the edges of $G_1$. We choose $\varepsilon$ small and then choose $K$ sufficiently large for subsequent claims. In preparation for an inductive argument we let $t_1=t$, $T_1=t_1^d$, $m_1=\flr{(T_1p_1/K)^{1/d}}$ and consider the partition $\Delta_1=\{S_\alpha\}$ ($\alpha\in [m_1]^d$) of $[0,t]^d$ into $m_1^d$ subcubes of side length $u=t_1/m_1$. (Note that $t$ will not change throughout the induction). Now each $S_{\alpha}$ contains $\approx K/p_1$ vertices in expectation, and so it has at least $(1-\varepsilon)K/p_1$ vertices with probability $1-e^{-\Omega(K/p_1)}=1-o(1)$. Let $\alpha$ be {\em heavy} if $S_\alpha$ has at least this many vertices, and {\em light} otherwise. Let $\Gamma_\alpha$ be the subgraph of $G_{1}$ induced by $S_\alpha$. If $\alpha$ is heavy then for any $\varepsilon>0$ we can, if $K$ is sufficiently large, find with probability at least $1-e^{-\Omega(K/p_1)}=1-o(1)$ a cycle $C_\alpha$ in $\Gamma_\alpha$ containing at least $(1-\varepsilon)^2K/p_1$ vertices. This is because when $\alpha$ is heavy, $\Gamma_\alpha$ has expected average degree at least $(1-\varepsilon)K$. We say that a heavy $\alpha$ is {\em typical} if $\Gamma_\alpha$ contains a cycle with $(1-\varepsilon)|S_\alpha\cap\ydt|$ edges; otherwise it is {\em atypical}.
We now let $N$ denote the set of vertices in $\bigcup C_\alpha$, where the union is taken over all typical heavy $\alpha$. Our aim is to use Theorem \ref{gtsp}(ii) to prove that we can q.s.~merge the vertices $N$ into a single cycle $C_1$, without too much extra cost, and using only the edges of $G_1$. Letting $q_\alpha=\Pr(S_\alpha\text{ is typical})\geq 1-\varepsilon$, we make each typical heavy $\alpha$ \emph{available} for this round with independent probability $\frac{1-\varepsilon}{q_\alpha}$, so that the probability that any given $\alpha$ is available is exactly $1-\varepsilon$. (This is of course {\em rejection sampling}.) Now we can let $Y=\yd 1$ in Theorem \ref{gtsp} be a process which places a single point at the center of $[0,1]^d$ with probability $1-\varepsilon$, or produces an empty set with probability $\varepsilon$. Let now $Y_\alpha$ ($\alpha\in [m_1]^d$) be the independent copies of $Y$, which together give a copy of ${\mathcal Y}^d_{m_1}$. Given two cycles $C_1,C_2$ in a graph $G$ we say that edges $u_i=(x_i,y_i)\in C_i,i=1,2$ are a {\em patchable pair} if $f_x=(x_1,x_2)$ and $f_y=(y_1,y_2)$ are also edges of $G$. Given $x\in Y_\alpha, y\in Y_{\beta}$, we let $x\sim y$ whenever there exist \emph{two disjoint} patchable pairs $\sigma_{\alpha,\beta}$ between $C_\alpha, C_\beta$. Observe that an edge between two vertices of this process is then present with probability
\[
q_{\alpha,\beta}\geq\Pr(Bin(K^2/100p_1^2,p_1^2)\geq 2)\geq 1-\varepsilon.
\]
In particular, this graph contains a copy of ${\mathcal Y}^d_{m_1,1-\varepsilon}$, for which Theorem \ref{gtsp}(ii) gives that q.s. we have a tour of length $\leq B_1 m_1^d$ for some constant $B_1$; in particular, there is a path $P=(\alpha_1,\alpha_2,\dots,\alpha_M)$ through the available $\alpha$ with at most this length. Using $P$, we now merge its cycles $C_{\alpha_i},i=1,2,\ldots,M$ into a single cycle.
Suppose now that we have merged $C_{\alpha_1},C_{\alpha_2},\ldots,C_{\alpha_j}$ into a single cycle $C_j$ and have used one choice from $\sigma_{\alpha_{j-1},\alpha_j}$ to patch $C_{\alpha_j}$ into $C_{j-1}$. We initially had two choices for patching $C_{\alpha_{j+1}}$ into $C_{\alpha_j}$, one may be lost, but one at least will be available. Thus we can q.s. use $G_1$ to create a cycle $H_1$ from $C_{\alpha_1},C_{\alpha_2},\ldots,C_{\alpha_M}$ by adding only patchable pairs of edges, giving a total length of at most
\beq{cy1}
2T_1\times \frac{2t_1d^{1/2}}{m_1}+B_1m_1^d\times \frac{2t_1d^{1/2}}{m_1}\leq \frac{3T_1d^{1/2}}{p_1^{1/d}}.
\end{equation}
The first term in \eqref{cy1} is a bound on the total length of the cycles $C_\alpha$ where $\alpha$ is available, assuming that $|\gyp|\leq 2t^d$. The second smaller term is the q.s. cost of patching these cycles into $H_1$.
Having constructed $H_1$, we will consider coarser and coarser subdivisions $\mathcal{D}_i$ of $[0,t]^d$ into $m_i^d$ subcubes, and argue inductively that we can q.s. construct, for each $1\leq i\leq \ell$ for suitable $\ell$, vertex disjoint cycles $H_1,H_2,\ldots,H_\ell$ satisfying:
\begin{enumerate}
\item \label{P.leftover} $T_i\leq 3\varepsilon T_{i-1}$ for $i\geq 2$, where $T_j=t^d-\sum_{i=1}^{j-1}|H_i|$,
\item \label{P.ind} the set of points in the $\alpha$th subcube in the decomposition $\mathcal{D}_i$ occupied by vertices which fail to participate in $H_i$ is given by a process which occurs independently in each subcube in $\mathcal{D}_i$, and
\item \label{P.length} the total length of each $H_i$ is at most $\frac{3T_id^{1/2}}{p_i^{1/d}}$.
\end{enumerate}
Note that $H_1$, above, satisfies these conditions for $\ell=1$.
Assume inductively that we have constructed such a sequence $H_1,H_2,\ldots,H_{j-1}$ $(j\geq 2)$. We will now use the $G_j$ edges to construct another cycle $H_j$. Suppose now that the set ${\mathcal T}_j$ of points that are not in $\bigcup_{i=1}^{j-1}H_i$ satisfies $T_{j}=|{\mathcal T}_j|\geq t^{d-1}/\log t$. We let $m_j=(T_{j}p_j/K)^{1/d}$ and $t_j=T_j^{1/d}$. The expected number of points in a subcube will be $K/p_j$ but we have not exercised any control over its distribution. For $j\geq 2$, we let $\alpha\in [m_j]^d$ be heavy if $S_\alpha$ contains at least $\varepsilon K/p_j$ points. Now we want $K$ to be large enough so that $\varepsilon K$ is large and that a heavy subcube has a cycle of size $(1-\varepsilon)|{\mathcal T}_j\cap S_\alpha|$ with probability at least $1-\varepsilon$, in which case, again, it is \emph{typical}. We define $\Gamma_j$ as the set of typical heavy pairs $\{\alpha,\beta\}$ for which there are at least two disjoint patchable pairs between the corresponding large cycles. Applying the argument above with $T_j,t_j,m_j,\Gamma_j$ replacing $T_1,t_1,m_1,\Gamma_1$ (note that \ref{P.ind}, above, ensures that Theorem \ref{gtsp} applies) we can q.s. find a cycle $H_j$ with at least $(1-3\varepsilon)T_j$ vertices and length at most $\frac{3T_jd^{1/2}}{p_j^{1/d}}$, giving induction hypothesis part \ref{P.length}. Part \ref{P.leftover} is satisfied since the light subcubes only contribute an $\varepsilon$ fraction of points to ${\mathcal T}_j$, and we q.s. take a $(1-\varepsilon)$ fraction of the heavy subcubes. Finally, Part \ref{P.ind} is satisfied since participation in $H_j$ is determined exclusively by the set of adjacency relations in $G_j\cap {\mathcal T}_j$, which is independent of the positions of the vertices.
Thus we are guaranteed a sequence $H_1,H_2,\ldots, H_\ell$ as above, such that $T_{\ell+1}<t^{d-1}/\log t$. The total length of $H_1,H_2,\ldots,H_\ell$ is at most
\beq{almost}
\sum_{i=1}^\ell \frac{3T_id^{1/2}}{p_i^{1/d}}\leq \frac{3^{1+1/d}t^d}{p^{1/d}}\sum_{i=1}^\infty 3^i\cdot 2^{i/d} \varepsilon^{i-1}=O\bfrac{t^d}{p^{1/d}}.
\end{equation}
We can now use $G_0$ to finish the proof. It will be convenient to write $G_0=\bigcup_{i=1}^3A_i$ where $A_i,i=1,2,3$ are independent copies of ${\mathcal Y}^d_{t,q}$ where $1-p_0=(1-q)^3$. Also, let $R=\set{x_1,x_2,\ldots,x_r}=\gyp\setminus\bigcup_{i=1}^\ell H_i$.
We first create a Hamilton path containing all vertices, only using the edges of $A_1\cup A_2$ and the extension-rotation algorithm introduced by P\'osa \cite{Po}. We begin by deleting an arbitrary edge from $H_1$ to create a path $P_1$. Suppose inductively that we have found a path $P_j$ through $Y_j=H_1\cup \cdots\cup H_{\rho_j}\cup X_{j}$ where $X_{j}\subseteq R$ at an added cost of $O(jt)$. We let $V_j$ denote the vertices of $P_j$ and promise that $V_{\ell+r}=\gyp$. We also note that $|V_j|\geq |V_1|=\Omega(t^d)$ for $j\geq 1$.
At each stage of our process to create $P_{j+1}$ we will construct a collection $\mathcal{Q}=\set{Q_1,Q_2,\ldots,Q_r}$ of paths through $V_j$. Let $Z_\mathcal{Q}$ denote the set of endpoints of the paths in $\mathcal{Q}$. {\em Round} $j$ of the process starts with $P_j$ and is finished when we have constructed $P_{j+1}$.
If at any point in round $j$ we find a path $Q$ in $\mathcal{Q}$ with an endpoint $x$ that is an $A_2$-neighbor of a vertex $y\notin V_{j}$ then we will make a {\em simple extension}
and proceed to the next round. If $y\in H_i$ then we delete one of the edges in $H_i$ incident with $y$ to create a path $Q'$ and then use the edge $(x,y)$ to concatenate $Q,Q'$ to make $P_{j+1}$. If $y\in R$ then $P_{j+1}=Q+y$.
If $Q=(v_1,v_2,\ldots,v_s)\in \mathcal{Q}$ and $(v_s,v_1)\in A_1$ then we can take any $y\notin V_j$ and with probability at least $1-(1-q)^s=1-O(t^{-\omega(1)})$ find an edge $(y,v_i)\in A_2$. If there is a cycle $H_i$ with $H_i\cap V_j=\emptyset$ then we choose $y\in H_i$ and delete one edge of $H_i$ incident with $y$ to create a path $Q'$ and then we can take $P_{j+1}=(Q',v_i,v_{i-1},\ldots,v_{i+1})$ and proceed to the next round. Failing this, we choose any $y\in R\setminus V_j$ and let $P_{j+1}=(y,v_i,v_{i-1},\ldots,v_{i+1})$ and proceed to the next round. Note that this is the first time we will have examined the $A_2$ edges incident with $y$. We call this a {\em cycle extension}.
Suppose now that $Q=(v_1,v_2,\ldots,v_s)\in \mathcal{Q}$ and $(v_s,v_i)\in A_1$ where $1<i<s-1$. The path $Q'=(v_1,\ldots,v_i,v_s,v_{s-1},\ldots,v_{i+1})$ is said to be obtained by a rotation. $v_1$ is the {\em fixed} endpoint. We partition $\mathcal{Q}=\mathcal{Q}_0\cup \mathcal{Q}_1\cup\cdots\cup \mathcal{Q}_{k_0},k_0=\log t$ where $\mathcal{Q}_0=\set{P_j}$ and $\mathcal{Q}_i$ is the set of paths that are obtainable from $P_j$ by exactly $i$ rotations with fixed endpoint $v_1$. We let $N_i$ denote the set of endpoints of the paths in $\mathcal{Q}_i$, other than $v_1$, and let $\nu_i=|N_i|$ and let $N_\mathcal{Q}=\bigcup_iN_i$. We will prove that q.s.
\beq{posa}
|\nu_i|\leq \frac{1}{100q}\text{ implies that }|\nu_{i+1}|\geq \frac{|\nu_i|t^dq}{300}.
\end{equation}
It follows from this that q.s. we either end the round through a simple or cycle extension or arrive at a point where the paths in $\mathcal{Q}$ have $\Omega(t^d)$ distinct endpoints. We can take an arbitrary $y\notin V_j$ and find an $A_2$ neighbor of $y$ among $N_\mathcal{Q}$. The probability we cannot find a neighbor is at most $(1-q)^{\Omega(t^d)}=O(t^{-\omega(1)})$. Once we prove \eqref{posa} we will have shown that we can create a Hamilton path through $\gyp$ from $H_1,H_2,\ldots,H_\ell,R$ at an extra cost of $O(d^{1/2}(t\ell+t^{d-1}/\log t\times \log t\times t))=O(t^d)$. We will not have used any $A_3$ edges to do this. The second $\log t$ factor comes from the fact that each path is obtained by at most $k_0$ rotations and each rotation adds one new edge.
{\bf Proof of \eqref{posa}:} We first prove that in the graph induced by $A_1$ we have
\beq{posa1}
|S|\leq \frac{1}{100q}\text{ implies that }|N_{A_1}(S)|\geq \frac{|S|t^{d}q}{100}.
\end{equation}
Here $N_{A_1}(S)$ is the set of vertices not in $S$ that have at least one $A_1$-neighbor in $S$.
Indeed, if $s_0=\frac{1}{100q}=o(n)$ then
\begin{align*}
\Pr(\exists S)&\leq \sum_{s=1}^{s_0}\binom{t^d}{s}\Pr\brac{Bin(t^d-s,1-(1-q)^s)\leq \frac{s t^{d}q}{100}}\\
&\leq \sum_{s=1}^{s_0}\binom{t^d}{s}\Pr\brac{Bin\brac{t^d-s,\frac{sq}{2}}\leq \frac{s t^{d}q}{100}}\\
&\leq \sum_{s=1}^{s_0}\brac{\frac{t^de}{s}\cdot e^{-\Omega(t^{d}q)}}^s\\
&=O(t^{-\omega(1)}).
\end{align*}
Now \eqref{posa} holds for $i=0$ because q.s. each vertex in $\gyp$ is incident with at least $t^{d}q/2$ $A_1$ edges. Given \eqref{posa} for $0,1,\ldots,i-1$ we see that $\nu_1+\cdots+\nu_{i-1}=o(\nu_i)$. In which case \eqref{posa1} implies that
$$\nu_{i+1}\geq \frac{|N_{A_1}(N_i)|-(\nu_0+\cdots+\nu_{i-1})}{2}\geq \frac{t^{d}q\nu_i}{300}$$
completing an inductive proof of \eqref{posa}.
Let $P^*$ be the Hamilton path created above. We now use rotations with $v_1$ fixed via the edges of $A_3$ to create $\Omega(t^d)$ Hamilton paths with distinct endpoints. We then see that q.s. one of these endpoints is an $A_3$-neighbor of $v_1$ and so we get a tour at an additional cost of $O(d^{1/2}t)$.
This completes the proof of Theorem \ref{convenience}.
\end{proof}
The upper bound in Theorem \ref{worst} follows as before by (i) replacing $\gyp$ by $G_{\cX,p}^d$, allowable because our upper bound holds q.s. and $\Pr(|\gyp|=t^d)=\Omega(t^{-d/2})$ and then (ii) scaling by $n^{-1/d}$ so that we have points in $[0,1]^d$.
\section{An algorithm}
To find an approximation to a minimum length tour in $G_{\cX,p}$, we can use a simple version of Karp's algorithm \cite{K}. We let $m=(n/K\nu_d\log n)^{1/d}$ for some constant $K>0$ and partition $[0,1]^d$ into $m^d$ subcubes of side $1/m$, as in Lemma \ref{paste}. The number of points in each subcube is distributed as the binomial $B(n,q)$ where $q=K\nu_d\log n/n$ and so we have a.a.s.~that every subcube contains $(1+o(1))K\nu_d\log n$ points, assuming $K$ is large enough. The probability that there is no Hamilton cycle in the subgraph induced by $S_{\alpha}$ is $O(e^{-Knqp/2})$ and so a.a.s. every subcube induces a Hamiltonian subgraph. Using the dynamic programming algorithm of Held and Karp \cite{HK} we solve the TSP in each subcube in time $O(\sigma^22^\sigma)\leq n^{O(K)}$, where $\sigma=\sigma_{\alpha}=|S_\alpha\cap {\mathcal X}_{n}|$. Having done this, we can with probability of failure bounded by $m^d(1-p^2)^{(K\log n)^2}$ patch all of these cycles into a tour at an extra $O(m^{d-1})=o(n^{\frac{d-1}{d}})$ cost. The running time of this step is $O(m^d\log^2n)$ and so the algorithm is polynomial time overall. The cost of the tour is bounded q.s.~as in Lemma \ref{paste}. This completes the proof of Theorem \ref{t.alg}.
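For concreteness, the following is a minimal Python sketch of the per-subcube step (a Held--Karp bitmask dynamic program restricted to the random edge set), written for $d=2$ with arbitrary toy parameters; it is illustrative only, and the final patching of the per-cell cycles through neighboring subcubes, described above, is omitted.
\begin{verbatim}
import math
import random
from collections import defaultdict
from itertools import combinations

def held_karp_cycle(coords, allowed):
    """Minimum-length Hamilton cycle through coords, using only the vertex
    pairs in `allowed` (a set of frozensets of indices).  Bitmask dynamic
    program, O(s^2 2^s).  Returns (length, tour), or (inf, None) if the
    restricted graph on this cell has no Hamilton cycle."""
    s = len(coords)
    if s == 1:
        return 0.0, [0]
    if s == 2:                       # degenerate cell: use the single edge twice
        ok = frozenset((0, 1)) in allowed
        return (2 * math.dist(*coords), [0, 1]) if ok else (math.inf, None)
    def w(i, j):
        return math.dist(coords[i], coords[j]) if frozenset((i, j)) in allowed else math.inf
    # dp[(mask, j)] = (cost, pred): cheapest path from 0 covering `mask`, ending at j
    dp = {(1 | (1 << j), j): (w(0, j), 0) for j in range(1, s)}
    for size in range(2, s):
        for subset in combinations(range(1, s), size):
            mask = 1
            for b in subset:
                mask |= 1 << b
            for j in subset:
                prev, best = mask ^ (1 << j), (math.inf, None)
                for k in subset:
                    if k != j and (prev, k) in dp:
                        c = dp[(prev, k)][0] + w(k, j)
                        if c < best[0]:
                            best = (c, k)
                if best[1] is not None:
                    dp[(mask, j)] = best
    full = (1 << s) - 1
    closing = [(dp[(full, j)][0] + w(j, 0), j) for j in range(1, s) if (full, j) in dp]
    length, end = min(closing, default=(math.inf, None))
    if end is None or length == math.inf:
        return math.inf, None
    tour, mask, j = [], full, end
    while j != 0:                    # walk predecessors back to vertex 0
        tour.append(j)
        mask, j = mask ^ (1 << j), dp[(mask, j)][1]
    return length, [0] + tour[::-1]

def grid_cells(points, m):
    """Assign each point of [0,1]^2 to one of the m x m subcells."""
    cells = defaultdict(list)
    for i, (x, y) in enumerate(points):
        cells[(min(int(x * m), m - 1), min(int(y * m), m - 1))].append(i)
    return cells

# Toy run: n uniform points, each pair an edge independently with probability p.
random.seed(1)
n, p, m = 60, 0.7, 3
pts = [(random.random(), random.random()) for _ in range(n)]
E = {frozenset(e) for e in combinations(range(n), 2) if random.random() < p}
for cell, idx in grid_cells(pts, m).items():
    sub = [pts[i] for i in idx]
    sub_E = {frozenset((idx.index(a), idx.index(b)))
             for a, b in combinations(idx, 2) if frozenset((a, b)) in E}
    length, tour = held_karp_cycle(sub, sub_E)
    print(cell, len(idx), None if tour is None else round(length, 3))
\end{verbatim}
Since each subcube a.a.s. holds only $O(\log n)$ points, the exponential dependence on the cell size is affordable and the overall running time remains polynomial.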
\section{Further questions}
\label{Qs}
Theorem \ref{t.expected} shows that there is a definite qualitative change in the diameter of $G_{\cX,p}$ at around $p=\frac{\log^dn}{n}$, but our methods leave a $(\log\log n)^{2d}$ size gap for the thresholds.
\begin{q}
What is the precise threshold for there to be distances in $G_{\cX,p}$ which tend to $\infty$? What is the precise threshold for distance in $G_{\cX,p}$ to be arbitrarily close to Euclidean distance? What is the behavior of the intermediate regime?
\end{q}
\noindent One could also analyze the geometry of the geodesics in $G_{\cX,p}$ (Figure \ref{f.paths}). For example:
\begin{q}
\label{pathgeom}
Let $\ell$ be the length of a random edge on the geodesic between fixed points at constant distance in $G_{\cX,p}$.
What is the distribution of $\ell$?
\end{q}
\smallskip
Improving Theorem \ref{worst} to give an asymptotic formula for $T(G_{\cX,p})$ is another obvious target. It may seem unreasonable to claim such a formula for all (say, decreasing) functions $p$; in particular, in this case, the constant in the asymptotic formula would necessarily be universal. The following, however, seems reasonable:
\begin{conjecture} If $p=\frac 1 {n^\alpha}$ for some constant $0<\alpha<1$ then there exists a constant $\beta^d_\alpha$ such that a.a.s. $T(G_{\cX,p})\sim \beta^d_\alpha\frac{n^{\frac{d-1}{d}}}{p^{1/d}}$.
\end{conjecture}
We note that $T(\gx 1)$ is known to be remarkably well-concentrated around its mean; see, for example, the sharp deviation result of Rhee and Talagrand \cite{RT}.
\begin{q}
How concentrated is the random variable $T(G_{\cX,p})$?
\end{q}
The case of where $p=o(1)$ may be particularly interesting.
\bigskip Even for the case $p=1$ covered by the BHH theorem, the constant $\beta_1^d$ $(d\geq 2)$ from Theorem \ref{gtsp} is not known. Unlike the case of $p=1$, the 1-dimensional case is not trivial for our model. In particular, we have proved Theorems \ref{tsp} and \ref{worst} only for $d\geq 2$. We have ignored the case $d=1$ not because we consider the technical problems insurmountable, but because we hope that it may be possible to prove a stronger result for $d=1$, at least for the case of constant $p$.
\begin{q}\label{q3}
Determine an explicit constant $\beta^1_p$ as a function of (constant) $p$ such that for $d=1$,
\[
\lim_{n\to \infty} T(G_{\cX,p})=\beta^1_pn.
\]
\end{q}
Our basic motivation has been to understand the constraint imposed on travel among random points by the restriction to a set of traversable edges which is chosen randomly, independently of the geometry of the underlying point-set. While the Erd\H{o}s-R\'enyi-Gilbert model is the prototypical example of a random graph, other models such as the Barab\'asi-Albert preferential attachment graph have received wide attention in recent years, due to properties (in particular, the distribution of degrees) they share with real-world networks. In particular, if the random graph one is traveling within is the flight-route map for an airline, the following questions may be the most relevant:
\begin{q}\label{pa}
If the preferential attachment graph is embedded randomly in the unit square (hypercube), what is the expected diameter? What is the expected size of a minimum-length spanning tree?
\end{q}
Similarly, one could examine a combination of geometry and randomness in determining connections in the embedded graph. Our methods already give something in this direction. In particular, we can define $\gxpr$ as the intersection of the graph $G_{\cX,p}$ with the random geometric graph on the vertex set ${\mathcal X}_n$, where a pair of points are joined by an edge if they are at distance $\leq r$. Following our proof of Theorem \ref{tsp}, one finds that
\begin{theorem}\label{geo}
If $d\geq2$, $p>0$ is constant, and $r=r(n)\geq n^{\varepsilon-1/d}$ for some $\varepsilon>0$, then
\[
T(\gxpr)\sim \beta^d_pn^{\frac{d-1} d} \qquad a.a.s.
\]
\end{theorem}
Of course, the ideas behind Question \ref{pa} and Theorem \ref{geo} could be considered together; note that Flaxman, Frieze and Vera \cite{FFV} considered a geometric version of a preferential attachment graph.
The proof of Theorem \ref{t.alg} is relatively painless. We are reminded that Arora \cite{Ar} and Mitchell \cite{Mit} have described more sophisticated polynomial time algorithms that are asymptotically optimal even with the worst-case placing of the points. It would be interesting to see whether these algorithms can handle the random loss of edges.
\begin{q}
Do the methods of Arora and Mitchell allow efficient approximation of the tour length through $G_{\cX,p}$, when the embedding ${\mathcal X}_n$ is \emph{arbitrary}?
\end{q}
\section{Introduction}\label{sec:introduction}
In recent years, interest in analyzing team sport videos has increased significantly in academia and
industry~\citep{r1, r2, r3, r4, r5, r6, r7}.
This is important for sports broadcasters and teams to understand key events in the game and
extract useful information from the videos. Use cases include identifying participating players, tracking player movement
for game statistics, measuring health and safety indicators, and automatically placing graphic overlays.
For broadcasters and teams that don't have the leeway or the capital to install hardware sensors in player wearables,
a Computer Vision (CV) based solution is the only viable option to automatically understand and generate insights
from games or practice videos. One important task in all sports CV applications is identifying players, specifically
identifying players with their jersey numbers. This task is challenging due to distortion and deformation of player
jerseys based on the player posture, movement and camera angle, rarity of labelled datasets, low-quality videos,
small image size in zoomed out videos, and warped display caused by the player movement.
(see Figures~\ref{fig:wideshot} and~\ref{fig:playerposture}). \par
Current approaches for jersey number identification consist of two steps: collecting and annotating large
datasets~\citep{r5, r7}, and training large and complex models~\citep{r5, r6, r7}. These approaches include either sequential training of
multiple computer vision models or training one large model, solving for 2 objectives: identifying the jersey number
location (through custom object detection models or training a custom human pose estimation model) and classifying
the jersey number~\citep{r4, r5, r6, r7}. These approaches are tedious, time-consuming, and cost-prohibitive, making them
intractable for many sports organizations. \par
In this paper we present a novel approach to detect jersey numbers in a small dataset consisting of practice video
footage from the Seattle Seahawks team. We use a three-step approach to number detection that leverages pretrained
models and novel synthetic datasets. We first identify and crop players in a video frame using a person detection model.
We then utilize a human pose estimation model for localizing jerseys on the detected players using the torso key-points,
obviating the need for annotating bounding boxes for number locations. This results in images that are less than
20x25 px with a high imbalance in jersey numbers (see Figure~\ref{fig:playerposture}). Finally, we test two different learning approaches
for model training - multi-class and multi-label - each yielding an accuracy of 88\%, with an ensemble accuracy of
89\% to identify jersey numbers from cropped player torsos. \par
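A minimal sketch of the pose-based cropping step described above is given below. It assumes a COCO-style 17-keypoint pose output (shoulders and hips at indices 5, 6, 11 and 12), and the confidence threshold and padding values are illustrative choices rather than tuned settings.
\begin{verbatim}
import numpy as np

# COCO-style 17-keypoint indices (assumed pose-model output format)
L_SHOULDER, R_SHOULDER, L_HIP, R_HIP = 5, 6, 11, 12

def crop_torso(player_img, keypoints, min_conf=0.3, pad=0.1):
    """Crop the jersey region from a detected-player crop using the four
    torso keypoints.  `player_img` is an HxWx3 array and `keypoints` an
    array of (x, y, confidence) rows in player-crop coordinates.  Returns
    the torso crop, or None if a torso keypoint is missing or uncertain."""
    torso = keypoints[[L_SHOULDER, R_SHOULDER, L_HIP, R_HIP]]
    if (torso[:, 2] < min_conf).any():
        return None
    x0, y0 = torso[:, 0].min(), torso[:, 1].min()
    x1, y1 = torso[:, 0].max(), torso[:, 1].max()
    dx, dy = pad * (x1 - x0), pad * (y1 - y0)   # small margin so digits are not clipped
    h, w = player_img.shape[:2]
    xa, xb = max(0, int(x0 - dx)), min(w, int(x1 + dx))
    ya, yb = max(0, int(y0 - dy)), min(h, int(y1 + dy))
    if xb <= xa or yb <= ya:
        return None
    return player_img[ya:yb, xa:xb]
\end{verbatim}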
Additionally, to compensate for the low number of examples in some of the jersey numbers, we propose two novel
synthetic dataset generators — Simple2D and Complex2D. The Simple2D generator creates two-digit number images from
different combinations of fonts and background colors to mimic those of the Seattle Seahawks jerseys. The Complex2D
generator superimposes the Simple2D numbers on random COCO dataset~\citep{r8} images to add more complexity to the background
and make the model training robust. By pretraining our two CNNs on these synthetic datasets, we observe a 9\% increase
in accuracy on the ensemble models pre-trained with synthetic data compared to the baseline models trained with the
only the Seattle Seahawks numbers. Furthermore, we observe better generalization with low data. \par
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figures/wideshot.png}
\caption{Example frames from the practice videos demonstrating the challenges to identify jersey numbers in zoomed out videos.}\label{fig:wideshot}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figures/playerposture.png}
\caption{Cropped players examples showing the player posture, movement and camera angle challenges to
identify jersey numbers.}\label{fig:playerposture}
\end{figure}
\section{Related work}\label{sec:related-work}
\subsection{Synthetic Data Generation}\label{subsec:rw-synthetic-data-generation}
CNN algorithms, that are commonly used in most CV tasks, require large datasets to learn patterns in images.
Collecting and annotating large datasets is a manual, costly and time-consuming task. Several new approaches
including Active Learning~\citep{r9}, Zero or Few-shot learning~\citep{r10} and Synthetic data generation~\citep{r11} have emerged in
recent years to tackle complexities in obtaining a large annotated dataset. Our work focuses primarily on the use
of synthetically generated data. This idea dates back to the 1990's~\citep{r12} and is an active field of research that
alleviates the cost and efforts needed to obtain and manually label real-world data. Nowadays, models (pre)trained
on synthetic datasets have a broad range of utility including feature matching~\citep{r13} autonomous driving~\citep{r14}, robotics
indoor and aerial navigation~\citep{r15}, scene segmentation~\citep{r16} and anonymized image generation in healthcare~\citep{r17}.
The approaches broadly adopt the following process: pre-train with synthetic data before training on real-world
scenes~\citep{r13, r18}, generate composites of synthetic data and real images to create a new one that contains the desired
representation~\citep{r19} or generate realistic datasets using simulation engines like Unity~\citep{r20} or generative models
like GANs~\citep{r21, r22}. There are limitations to each of these regimes but one of the most common pitfalls is
performance deterioration in real-world datasets. Models trained only on synthetic datasets do not generalize to
real-world data; this phenomenon is called "domain shift"~\citep{r21}.
\par
In order to reduce the need for annotating large dataset as well as account for the size and imbalance of the
real-world data, we generated two double-digit synthetic datasets - Simple2D and Complex2D with different levels
of complexity as described in Section~\ref{subsubsec:syn-data-gen}. This helps to circumvent the domain shift when only synthetic data is
used and improves generalization on real-world data for fine-tuning.
\subsection{Number Identification}\label{subsec:rw-number-identification}
Automatic number identification in sports video has evolved from classical computer vision techniques including
feature extraction using contrast adjustment, edge detection of numbers~\citep{r1, r2, r3} to deep learning-based architectures
that use CNNs for classification~\citep{r4, r5, r6, r7}. A fundamental problem in number identification in sports is the
jersey number distortion due to erratic and continuous player movement. The spatial transformer-based approach
introduced in~\citep{r5} tries to localize and better position the number, so that the classifier has a better chance of
an accurate prediction. The faster-RCNN with pose estimation guidance mechanism~\citep{r6} combines the detection,
classification and key-point estimation tasks in one large network to correct region proposals, reducing the
number of false negative predictions. This approach needed careful labeling of the player bounding-boxes and four
human body key-points, shoulder (right, left), hip (right, left), in addition to the numbers. It also made use of
high-resolution number images (512 px). This approach yields 92\% accuracy for jersey number recognition as a whole
and 94\% on the digit-wise number recognition task. However, getting the right conditions for it i.e., label the
dataset for the three tasks, acquiring high resolution images and training a large model might be challenging for
real-world cases. Furthermore, a lack of standardization and availability of public (commercial use) datasets,
makes it difficult to obtain a benchmark for the number identification task.
\section{Approach}\label{sec:approach}
\subsection{Task Definition}\label{subsec:task-definition}
We define a jersey number as the one or two-digit number printed on the back of a player's shirt. The jersey number is
used to identify and distinguish players and one number is associated with exactly one player. Our solution takes
cropped images of player's torsos as input and attempts to classify the jersey number into 101 classes
(0-99 for actual numbers and 100 for unrecognizable images/ jerseys with no numbers).
\subsection{American Football Dataset}\label{subsec:american-football-dataset}
The data used for this work consisted of a collection of 6 practice videos from different angles for training and
additional 4 for testing from the Seattle Seahawks archives. Half of the videos were from the endzone perspective,
that is, the scoring zone between the end line and the goal line. The other half were from the sideline perspective,
the boundary line that separates the play area from the sides. Both cameras were placed on a high altitude to get a
panoramic view for the play and capture the majority of the actions taken by the players. A pitfall for collecting
data using this camera angle is that the size of a player is less than 10\% of the image size when the players are
far away from the camera. In addition, the sideline view has restricted visibility of jersey numbers compared to
end-zone (see Figure~\ref{fig:perspectives}). The videos were recorded in 1280x720 resolution and we sampled frames
from each video at 1, 5 and 10 frames per second (fps) rates. We noticed that images sampled at 5 fps sufficiently
captured all the jersey numbers in a play and we decided to use the same sampling rate throughout our solution.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figures/perspectives.png}
\caption{Examples of frames obtained from the two different angles from the training videos. Left, is the endzone
view of the players. Right is the sideline view which offers better visibility into jersey numbers. Within a play,
we can find players, observers with/without football jerseys.}\label{fig:perspectives}
\end{figure}
\subsubsection{Jersey number localization}\label{subsec:jersey-number-localization}
To mitigate the need for annotating player location, jersey number bounding boxes and consequently training person and
jersey number detection models, we utilized pretrained models for person detection and pose estimation to localize the
jersey number region. This approach prevents the model from learning spurious correlations with irrelevant features like player
background, helmets or clothing items and confines the learning to the region of interest.
For the number localization we first use a pretrained person detector, Centernet~\citep{r23} model (ResNet50 backbone), to
detect and crop players from an image. Instead of training a custom human key-point estimation head~\citep{r6}, we use a
pretrained, pose estimation model, AlphaPose (https://gitee.com/marcy/AlphaPose, with ResNet101 backbone), to identify
four torso key-points
(left and right - hips and shoulders) on the cropped player images from the person detection step (see Figure~\ref{fig:models}).
We use the four key-points to create a bounding box around jersey numbers. To accommodate inaccuracies in key-point
prediction and localization due to complex human poses, we increased the size of torso keypoint area by expanding the
coordinates 60\% outward to better capture jersey numbers. The torso area is then cropped and used as the input for
the number prediction models discussed in Section~\ref{subsubsec:syn-data-gen}. In previous works, the use of high-resolution images of
players and jersey numbers is very common. However, the American football dataset we used was captured from a bird's
eye view, where jersey numbers were smaller than 32x32 px. In fact, the average size of the torso crops is 20x25 px, with
the actual jersey number covering an even smaller portion of this area (see Figure~\ref{fig:datasize}).
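The following sketch illustrates how a jersey-number crop can be derived from the four torso key-points with the 60\% outward expansion described above. It is only a minimal illustration; the key-point naming and the dictionary format of the pose output are assumptions and not the actual AlphaPose interface.
\begin{verbatim}
# Hypothetical sketch: derive a jersey-number crop from four torso key-points.
from typing import Dict, Tuple

def torso_crop_box(keypoints: Dict[str, Tuple[float, float]],
                   img_w: int, img_h: int,
                   expand: float = 0.6) -> Tuple[int, int, int, int]:
    """Return (x_min, y_min, x_max, y_max) around the torso region."""
    pts = [keypoints[k] for k in
           ("left_shoulder", "right_shoulder", "left_hip", "right_hip")]
    xs, ys = zip(*pts)
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    # Expand the box outward by 60% of its width/height to compensate
    # for inaccurate key-points and complex poses.
    dx, dy = expand * (x_max - x_min), expand * (y_max - y_min)
    x_min, x_max = x_min - dx / 2, x_max + dx / 2
    y_min, y_max = y_min - dy / 2, y_max + dy / 2
    # Clip the crop to the image bounds.
    return (max(0, int(x_min)), max(0, int(y_min)),
            min(img_w, int(x_max)), min(img_h, int(y_max)))
\end{verbatim}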
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figures/datasize.png}
\caption{Distribution of the sizes from person and torso bounding boxes. Note how the great majority of torso sizes is less than 32x32 px.}\label{fig:datasize}
\end{figure}
After player detection and jersey number localization, we generated 9,000 candidate images for number detection.
We labelled the images with Amazon SageMaker GroundTruth and noticed that 6,000 images contained non-players
(trainers, referees, watchers); the pose estimation model for jersey number localization simply identifies human
body key-points and doesn't differentiate between players and non-players. 3,000 labelled images with severe
imbalance (see Figure~\ref{fig:datadistro}) were usable for the training.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figures/datadistro.png}
\caption{Distribution of the jersey number labels in training set. Number 3 has 500+ images while numbers 43, 63, 69 and 93 have 10 images or less.}\label{fig:datadistro}
\end{figure}
\subsubsection{Synthetic Data Generation}\label{subsubsec:syn-data-gen}
Typically, a licensed (SVHN~\citep{r25}) or a large custom dataset is used for (pre)training number recognition models.
Since there are no standardized public datasets with permissive licenses, we created two 2-digit synthetic datasets
to pretrain our models. We investigated 2-digit MNIST~\citep{r26}, however it did not have pixel color and font variations
needed for jersey detection and performed poorly in our tests. Hence, we generated two different synthetic datasets:
one with simple two-digit (Simple2D) numbers whose fonts and backgrounds are similar to the football dataset, and another with 2-digit
synthetic numbers superimposed on COCO~\citep{r8} dataset images (Complex2D) to account for variations in the number background.
The Simple2D dataset was generated by randomly selecting a number from a uniform distribution of 0 to 9 and randomly
scaling it. Color backgrounds (Red, Navy Blue, Green, Red, Yellow, White) and a special font (Freshman) that resembled
the team jerseys were used to generate these numbers (see Figure~\ref{fig:datasize}). One Light, five Medium and five Hard augmentations
(see Table~\ref{tab:data-aug}) were used on each digit to be later permuted and concatenated to obtain 4000 images (100 x 100 px) of
each 2-digit number, from 00 to 99. At the end this dataset consisted of a total of 400,000 images.
Since the real-world images had more complicated background, textures and lighting conditions, we decided to
synthetically generate another dataset (see Figure~\ref{fig:synthetic}) to increase the robustness and generalization of our pretrained
model. The Complex2D dataset was designed to increase background noise by superimposing numbers from Simple2D on
random real-world images from the COCO dataset~\citep{r8}. We generated a total of 400,000 images (4000 per class) with
noisy backgrounds.
Our algorithm is explained in more details in Algorithms~\ref{alg:number-generation}, \ref{alg:simple2d} and \ref{alg:complex2d}.
\begin{table}[h]
\caption{data augmentations}\label{tab:data-aug}
\centering
\begin{tabular}{p{.1\linewidth} p{0.85\linewidth}}
\toprule
Name & Augmentations \\
\midrule
Light & Gaussian Noise, Optical distortion \\
Medium & Light + Grid distortion \\
Hard & Medium + Shuffling RGB channels, Random Shift-Scale-Rotation \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figures/synthetic.png}
\caption{Synthetic data generation with Simple2D and Complex2D. Simple2D dataset was generated by creating numbers
in football dataset jersey colors and fonts. Several augmentations (Table~\ref{tab:data-aug}) were applied on these numbers to get
Simple2D dataset. The numbers from this dataset were randomly sampled and randomly placed on COCO dataset images
to form Complex2D dataset}\label{fig:synthetic}
\end{figure}
\begin{algorithm}[hbt!]
\caption{Number generation}\label{alg:number-generation}
\ForAll{n in 0-9}{
select a jersey background and font color with a probability of U(1,n) = number of combinations\;
choose a font size with a probability of U(a,b) if a, b are scaled factors of image size \;
paste single number with chosen font and background color and size \;
}
\end{algorithm}
\begin{algorithm}[hbt!]
\caption{Simple2D}\label{alg:simple2d}
\ForAll{n in 0-99}{
\ForAll{background colors}{
generate 1000 images\;
\eIf{single digit}{
perform light, medium and hard augmentations\;
scale image to 100x100 px\;
}{
perform light, medium and hard augmentations on each digit\;
concatenate digits \;
scale image to 100x100 px\;
}
}
}
randomly sample 4000 images per number across all color combinations \;
\end{algorithm}
\begin{algorithm}[hbt!]
\caption{Complex2D}\label{alg:complex2d}
\ForAll{n in 0-99}{
select a random image from COCO dataset\;
select a random jersey number image\;
super-impose jersey number at a random position in the COCO image\;
rescale image to 100x100 px\;
continue until 4000 images per number are obtained\;
}
\end{algorithm}
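As a complement to Algorithm~\ref{alg:complex2d}, the following minimal sketch shows the superimposition step of the Complex2D generator using the Pillow library. The directory layout and file extensions are assumptions for illustration only.
\begin{verbatim}
# Minimal sketch of the Complex2D step: superimpose a Simple2D number crop
# at a random position on a random COCO image (paths are assumptions).
import random
from pathlib import Path
from PIL import Image

def make_complex2d(number_dir: Path, coco_dir: Path,
                   out_size: int = 100) -> Image.Image:
    number_img = Image.open(random.choice(list(number_dir.glob("*.png"))))
    background = Image.open(random.choice(list(coco_dir.glob("*.jpg")))).convert("RGB")
    # Paste the number at a random position that keeps it fully inside the image.
    max_x = max(0, background.width - number_img.width)
    max_y = max(0, background.height - number_img.height)
    pos = (random.randint(0, max_x), random.randint(0, max_y))
    background.paste(number_img, pos)
    # Rescale the composite to the common 100x100 px input resolution.
    return background.resize((out_size, out_size))
\end{verbatim}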
\subsubsection{Jersey number detection}\label{subsubsec:jersey-n-detection}
After the number localization step above, two models were sequentially pretrained with the synthetic datasets
(Simple2D to Complex2D) and fine-tuned with the real-world football dataset (see Figure~\ref{fig:models}). The idea of training a
model with increasingly difficult samples is called curriculum learning. This technique has empirically shown accuracy
increase and faster convergence~\citep{r27, r28}. One of the challenges of implementing curriculum learning is manually ranking
difficulty in the training set~\citep{r27}. In our case, the synthetic data was generated explicitly in this manner
(simple to complex) and our training regime adopted this order, thus, bypassing this challenge.
Both models used a ResNet50~\citep{r29} architecture with deep residual connections, as backbone and a final layer predicting
classes (jersey numbers). The first model was a multi-class image classifier to detect two-digit number with a total
of 101 different classes (numbers from 0 - 99 plus an unrecognizable class). The second model was a multi-class
multi-label classifier with 21 classes to detect single digits (10 digits for each side- right, left numbers, plus an
unrecognizable class).
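A possible construction of the two classifier variants is sketched below, assuming a PyTorch/torchvision implementation; the training configuration is omitted. Both networks share a ResNet50 backbone and differ only in the final layer.
\begin{verbatim}
# Sketch of the two classifier variants (assumed PyTorch/torchvision setup).
import torch.nn as nn
from torchvision.models import resnet50

def build_multiclass_model(num_classes: int = 101) -> nn.Module:
    """One node per two-digit number (0-99) plus an 'unrecognizable' class."""
    model = resnet50(pretrained=True)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

def build_multilabel_model(num_classes: int = 21) -> nn.Module:
    """10 left-digit nodes + 10 right-digit nodes + 1 'unrecognizable' node."""
    model = resnet50(pretrained=True)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model
\end{verbatim}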
We define the i-th input feature $X_i$ (cropped image of a player) with the label $y_i$ (0-99 for actual numbers and 100 for
unrecognizable). Our multi-class model was optimized with the following loss function:
\[ L_{mc} = \sum_{i} {L_i} = - \sum_{i} {y_i \log \hat{y}_{mc} (X_i)} \]
where $y_i$ is the true label and $\hat{y}_{mc}$ is calculated as a softmax over scores computed by the multi-class
model as follows:
\[ \hat{y}_{mc} (X_i) = \sigma (\vec{Z}) \]
\[ \sigma (\vec{Z})_k = \frac {e^{Z_k}} {\sum_{j=0}^{100} e^{Z_j}} \]
where $\vec{Z} = (z_0, \ldots, z_{100})$ is the output of the last layer of the multi-class model given $X_i$.
For the multi-label model, the loss function is defined as:
\[ L_{ml} = \sum_{i} {L_i} = - \sum_{i} {y_i \log \hat{y}_{ml} (X_i)} \]
where $y_i$ is the true label and $\hat{y}_{ml}$ is calculated as a sigmoid over scores computed by the multi-label model as follows:
\[ \hat{y}_{ml} (X_i) = \frac {1} {1 + e^{-\vec{Z}}} \]
where $\vec{Z}$ is the output of the last layer of the multi-label model given $X_i$.
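Assuming a PyTorch implementation, the two objectives can be realized with the standard cross-entropy and binary cross-entropy losses as sketched below; the tensor shapes are stated in the comments and the multi-hot target encoding of the multi-label model is an assumption that follows the 21-node description above.
\begin{verbatim}
# Hedged sketch of the two training objectives (assumed PyTorch setup).
import torch
import torch.nn as nn

ce_loss = nn.CrossEntropyLoss()      # applies log-softmax internally
bce_loss = nn.BCEWithLogitsLoss()    # applies the sigmoid internally

def multiclass_loss(logits: torch.Tensor,
                    number_labels: torch.Tensor) -> torch.Tensor:
    # logits: (batch, 101); number_labels: (batch,) with values in 0..100
    return ce_loss(logits, number_labels)

def multilabel_loss(logits: torch.Tensor,
                    digit_targets: torch.Tensor) -> torch.Tensor:
    # logits, digit_targets: (batch, 21); targets are multi-hot vectors,
    # e.g. jersey "73" activates the left-digit-7 and right-digit-3 nodes.
    return bce_loss(logits, digit_targets)
\end{verbatim}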
Both models were trained until convergence and the model from the epoch with the best performance was selected. We
explored the combination of the two models to provide the final decision and we explain our results in Section~\ref{sec:exp-results}.
Our original idea was that the multi-label model would augment performance of the multi-class model and address
generalization issues with unseen/ low data availability for certain numbers. For example, if 83, 74 were present in
the training set but not 73, the right and left side of prediction nodes for 3 and 7 would have been activated in the
train set for all numbers starting and ending with 7 or 3 and hence the multi-label model would have enough samples
to predict 73.
We considered training a custom object detection model to identify single-digit numbers. However, due to additional
cost and time associated with labeling bounding boxes, image quality and small size of localized jersey numbers
(approximately 20 x 25 px), we chose the image classification approach.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figures/models.png}
\caption{Overview of the approach for extracting data, training and generating jersey number predictions.
a) describes the high-level football dataset processing pipeline - identify person in video, pass each person image
through pose estimation model to identify torso region and crop them. b) shows the sequential pretraining of
multi-class/label models with synthetic number datasets - Simple2D and Complex2D as well as fine-tuning on football
dataset. c) represents the inference pipeline that uses data pipeline from a) to crop jersey numbers and perform
predictions using the multi-class/label models from b).}\label{fig:models}
\end{figure}
\section{Experimental Results}\label{sec:exp-results}
We trained the ResNet50 multi-class(number-detection) and multi-label(digit-detection) jersey number classifiers on
the football dataset to establish baseline performance without the synthetic data. For the multi-class model, we
took the number with highest softmax score as the prediction. For the multi-label model, we applied a threshold of
0.5 to both right and left predicted classes to get the output. Eventually we computed the final prediction from the
output of the two models.
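Since the exact combination rule of the two model outputs is not detailed here, the following sketch shows one plausible inference procedure: the multi-class prediction is used when it is confident and the thresholded digit nodes of the multi-label model serve as a fallback. The node layout of the multi-label output is an assumption.
\begin{verbatim}
# One plausible combination of the two model outputs (not the exact rule).
import torch

def predict_number(mc_logits: torch.Tensor, ml_logits: torch.Tensor,
                   mc_threshold: float = 0.5) -> int:
    mc_probs = torch.softmax(mc_logits, dim=-1)   # (101,)
    ml_probs = torch.sigmoid(ml_logits)           # (21,) assumed layout:
                                                  # 0-9 left, 10-19 right, 20 none
    mc_pred = int(mc_probs.argmax())
    if mc_probs[mc_pred] >= mc_threshold:
        return mc_pred                            # 0-99 number or 100
    if ml_probs[20] >= 0.5:
        return 100                                # unrecognizable
    left = int(ml_probs[:10].argmax()) if bool((ml_probs[:10] >= 0.5).any()) else None
    right = int(ml_probs[10:20].argmax()) if bool((ml_probs[10:20] >= 0.5).any()) else None
    if left is not None and right is not None:
        return 10 * left + right
    if right is not None:
        return right                              # single-digit jersey number
    if left is not None:
        return left
    return 100                                    # nothing confident enough
\end{verbatim}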
The baseline model accuracy was 80\% for both models. We experimented with various input image sizes and found optimal
accuracy at 224x224 px for the multi-class and 100x100 px for the multi-label model. Our dataset presented a high
imbalance across several numbers where 24\% of the numbers have less than 100 samples and only 5\% reach the 400-sample
mark (See Figure~\ref{fig:perspectives}). Hence, we duplicated data points for each number to have 400 images in the training set when
needed. Our training pipeline dynamically applies image augmentation so that no image is seen twice by the models,
even when the base image is the same. We also upsample our test-set images to maintain 20 images per number.
After having our baselines, we investigated the effects of pre-training with the generated synthetic data on our model
performance. Pre-training on the Simple2D dataset and fine-tuning on the football dataset, resulted in a performance
improvement of 2\% over the baseline (82\%), for both, multi-class and multi-label models. However, pre-training on
the Complex2D dataset and fine-tuning on the football dataset, resulted in 3\% improvement on the multi-class model
and 8\% on the multi-label model. By pre-training on both Simple2D and Complex2D, we achieved 8.8\% and 6\% improvement
above the baseline in multi-class and multi-label models respectively.
The best multi-label model (Complex2D + Football dataset) had positive accuracy improvements on 74 classes, no change
in accuracy in 19 classes, negative change in accuracy in 8 classes (drop by 10\%). The best multi-class model
(Simple2D + Complex2D + Football dataset) had positive accuracy improvements on 63 classes, no change in accuracy in
21 classes, negative change in accuracy in 17 classes (drop by 7\%). In order to validate the hypothesis
(Section~\ref{subsubsec:jersey-n-detection}) that the multi-label model could perform better on numbers with
fewer images, we compare its
results with the best multi-class model on numbers with fewer than 50 images in the training set. We notice an average increase
in accuracy of 18.5\% for the multi-class model and 20\% for the multi-label model before and after training on synthetic data,
for these numbers. Despite the larger gains in accuracy shown by the multi-label model, the absolute accuracy scores for these
numbers were better for the multi-class model, 81\% compared to 78\% for the multi-label model.
\begin{figure}
\centering
\includegraphics[width=.35\textwidth]{figures/player1.png}
\includegraphics[width=.3\textwidth]{figures/player2.png}
\caption{Images where multi-label predicted class 100. The multi-label model is not sure of the number class when
the input image has very low resolution.}\label{fig:mc100}
\end{figure}
By analyzing the confusion matrix of the model predictions, we learned that the best multi-label
model produces false predictions in 2 major scenarios (see Figure~\ref{fig:mc100}): predicting one digit rather than both digits,
and predicting class 100 for low-resolution and hard-to-recognize digits. In other words, the multi-label model is
more likely to predict one digit number and non-number classes when challenged with new data. The multi-class model,
however, has relatively spread-out false predictions (see Figure~\ref{fig:ml100}). Major areas of error for this model are:
predicting one digit rather than both digits, and mistaking single digits for two digits or unrecognizable class.
\begin{figure}
\centering
\includegraphics[width=.3\textwidth]{figures/player3.png}
\includegraphics[width=.3\textwidth]{figures/player4.png}
\caption{Image where multi-class predicted class 100. Confusion for the multi-class model arises when the
numbers are rotated or occluded.}\label{fig:ml100}
\end{figure}
Examining the performance of the two models independently we noticed that predictions agree in 84.4\% of the test
cases, suggesting that despite the different objectives (multi-class vs multi-label) there is a robust learning of
the number representations. Furthermore, we notice an additional improvement of 0.4\% with the two-model ensemble.
Table~\ref{tab:results} presents our results.
\begin{table}[h]
\caption{A comparison of model performance under different conditions with confidence threshold of 0.5}\label{tab:results}
\centering
\begin{tabular}{p{.5\linewidth} p{0.1\linewidth}p{0.1\linewidth}p{0.1\linewidth}}
\toprule
Experiment & Multi-class & Multi-label & Ensemble \\
\midrule
\multicolumn{4}{c}{Without synthetic data} \\
Football dataset & 0.8064 & 0.8 & \\
Best (Multi-class + Multi-label) & & & 0.8028 \\
\\
\multicolumn{4}{c}{With synthetic data pre-training} \\
Simple2D + Football dataset & 0.8282 & 0.82 & \\
Complex2D + Football dataset & 0.8306 & 0.88 & \\
Simple2D + Complex2D + Football dataset & 0.8886 & 0.86 & \\
Best (Multi-class + Multi-label) & & & 0.8931 \\
\bottomrule
\end{tabular}
\end{table}
\section{Limitations}\label{sec:limitations}
The work presented in this paper shows that the number identification task can be simplified by leveraging synthetic
datasets. We were able to obtain a good performance that is comparable with previous works~\citep{r1, r2, r4} requiring no
change in the data collection pipeline. Despite these findings, we recognize this approach has some limitations which
we describe in this section.
We were able to achieve 89\% accuracy for our test dataset regardless of the challenging nature of jersey number
identification in a low-data regime. This performance is on par with some of the most recent works~\citep{r7}. However,
the lack of a benchmark dataset for this task and the unavailability of reference implementations are a major barrier
to comparing performance across methods. The only alternative is to label large amounts of high-quality data
and retrain the available solutions in-house, which requires substantial computational resources and man-hours
and is not an option for every institution.
In our jersey detection models, we used ResNet50 as a base model, because it proved to be effective for this task.
Bigger and more sophisticated models might provide better accuracy and recall but an exhaustive search is necessary
for each of the components of the solution to determine an optimal cost-benefit tradeoff. We recognize that more
investigation is needed here to determine such an optimum.
In our solution we chose a three-model pipeline approach versus a one-pass prediction model. Our approach comes with
a few limitations including cascading inaccuracies from one model to the next and increase in latency. However, our
choice was justified by ease of implementation, maintenance and portability to other domains. Even with this cascading
effect, our solution proves to have a good performance in our highly imbalanced, limited dataset.
\section{Future Work}\label{sec:future-work}
Our approach to increase performance can be broadly classified into two categories: improving data quality and quantity
or experimenting with different models.
\subsection{Data quality and quantity}\label{subsec:data-quality-and-quantity}
We observed no improvement in model accuracy by increasing the number of duplicated samples or the number of image
augmentations. The confidence of the predictions directly correlated with the quality and resolution of the jersey
number crop (input image). In future work, we plan to experiment with various image quality enhancement methods in
classical CV and deep learning domains to observe whether they improve performance. Another path that can be considered is
to refine our synthetic data generation pipeline to produce images that are closer to the real-world dataset.
\subsection{Different model strategies}\label{subsec:different-model-strategies}
Our current method requires minimal labeling effort. However, by collecting more images of reasonable quality and quantity
we plan to test object detection-based models. One way to improve frame level accuracy would be to track detected
jersey numbers across both side-line and end-zone views so that in situations where numbers are partially visible
or player pose is complex, we would be able to obtain predictions with continuity. Tracking players in team sports
like football is still a major challenge in the sports CV domain and we will evaluate its utility in our future work.
\section{Conclusion}\label{sec:conclusion}
This paper presented a new solution for low-data regime jersey detection with two-stage novel synthetic data generation
techniques, pose estimation for jersey number localization and CNN ensemble learning to detect jersey numbers.
Data augmentations during training and the use of large synthetic dataset provided enough variations for the model
to generalize well and learn numbers. Our solution is easy to implement, requires minimal labeling, curation,
supervision, and can be customized for various sports jersey fonts, colors and backgrounds. Our framework improves
the accuracy of number detection task by 9\% and can be easily extended to similar tasks across various Sports
communities as well as industries with similar use cases. Furthermore, our solution did not require the modification
of the data capturing or processing pipeline that is already in place, making it convenient and flexible.
Additionally, it introduces a novel data synthesis technique that can boost custom solution performance in a wide
array of sports. We hope this solution enables the Sport Analytics community to rapidly automate video understanding solutions.
\vskip 0.2in
\section{Introduction}
\label{chp:intro}
The choice of the right tactic in a football match can have a decisive influence on the result and thus has a great impact on the success of professional football clubs~\cite{Rein2016}. According to \citet{garganta2009trends} and \citet{fradua2013designing}, the tactic describes how a team manages space, time and individual actions during a game. To select an appropriate tactic, detailed analyses are necessary to reveal and eventually exploit insights of the opposite team's behavior and patterns. These decisions are usually left to domain experts such as the coaching, analysts and scouting staff who observe and analyze entire football matches in order to prepare the next match. However, this process is very time-consuming which has limited its application in the past~\cite{james2006role,Rein2016}. For this reason, as the amount of available game performance data is steadily increasing the demand for automated analysis tools to support the scouting process is rapidly growing.
Early approaches are mainly based on the interpretation of match statistics such as the distribution of the ball possession as well as shot, pass and tackle variables with the general aim to predict successful teams~\cite{oberstone2009differentiating, lago2010performance, jankovic2011influence, jones2004possession, redwood2012impact, pelechrinis2016sportsnetrank, cintia2015network}.
But, these statistics discard most contextual information as they are usually calculated across extended game periods like halftimes, whole matches or seasons. Therefore, these measures are not able to capture the increasing complexity of tactic in modern football and lack explanatory power in terms of prediction variables for game success~\cite{Memmert2017}. The development of advanced tracking technologies~\cite{baysal2015sentioscope, liu2009automatic} from a range of companies~\cite{optasports, stats} has opened up new opportunities through the availability of accurate positional data for the players and the ball. These data
allow to apply automated approaches to analyze different tactical aspects~\cite{Memmert2017, stein2017bring, gyarmati2015automatic}. Referring to \citet{Rein2016}, tactics can be distinguished into team, group and individual tactics.
One important aspect with respect to team tactics is the team formation~\cite{Bialkowski2014IdentifyingTeamStyle,Rein2016}. The team formation describes the spatial arrangement of the players on the pitch
by dividing it into tactical groups~(e.g., defenders, midfielders, and attackers).
However, team sports are in general highly complex and dynamic since players are constantly switching positions and roles throughout a match. Consequently, \citet{Bialkowski2014IdentifyingTeamStyle, Bialkowski2016, Bialkowski2014Large-ScaleAnalysis} considered formation detection as a role assignment problem where each player is assigned to the most probable distinct role at each time instant of the match.
First automatic approaches for formation classification assumed that the team formation is stable over a match half and thus focus on the classification of formation for whole matches or halftimes~\cite{Bialkowski2014IdentifyingTeamStyle,Bialkowski2014Large-ScaleAnalysis}. More recently, spatio-temporal methods were introduced that aim to detect formation changes during the match~\cite{Bialkowski2016, Machado2017, Wu2019ForVizor}, but either evaluation was not performed on single match situations or, in case of \emph{ForVizor}~\cite{Wu2019ForVizor}, case studies were utilized to evaluate the visual analytics system itself.
\begin{figure*}[t]
\centering
\includegraphics[width=0.92\linewidth]{graphics/workflow.pdf}
\caption{Workflow of the proposed system. The positional data of the a football match are pre-processed~(Section~\ref{sec:input_preprocessing}) to create a series of match situations from ball gain to loss and vice versa. Subsequently, a visual formation summary~(VFS) is generated according to Section~\ref{sec:graph_repr} and compared to a set of ground-truth formation templates for classification~(Section~\ref{sec:classify_formation}).}
\label{fig:overview}
\end{figure*}
Moreover, previous work~\cite{Bialkowski2016, Wu2019ForVizor} mainly relied on clustering algorithms to automatically find the most prominent formations during a match to measure temporal differences in individual situations. Therefore, it is not possible to directly predict a numerical scheme such as 4-4-2 and the resulting clusters require supervision for classification. In contrast, \citet{Machado2017} developed a visual football match analysis tool where formations are classified by a k-means clustering approach using the coordinates of the players themselves and assigning them to one of three tactical groups~(defender, midfielder, attacker). Although this approach is able to directly predict formations, the clustering is solely based on one-dimensional player coordinates with respect to the field length~(from goal to goal) and formations are restricted to three tactical groups.
In this paper, we present an analytics system that automatically generates a visualization summary of the formation in single match situations. Thereby, a match situation is defined from gaining to losing ball possession. Based on the visual representation, we present a novel classification approach that calculates the similarity~(of a single match formation) to ground-truth templates for twelve different popular tactical formations~(some examples, e.g., 4-4-2, 4-2-3-1 are shown in Figure~\ref{fig:overview}). The quality of the visualization has been evaluated by twelve domain experts that have also provided ground-truth annotations for numerical schemes of tactical formations. A detailed analysis of the results is provided to evaluate the usefulness of the visualization of formations in single match situations. The inter-coder agreement of the experts is measured and provides first insights about the applicability of numerical schemes to describe team formations.
It turns out that one main issue is that some tactical formations only differ in the interpretation of some roles. To address this issue, we propose a novel metric to measure the quality of the formation classification with respect to the similarity to ground-truth formation templates provided by domain experts. To the best of our knowledge, this is the first work that provides a solution and detailed analysis of formation classification for single match situations.
The remainder of the work is organized as follows. Section~\ref{chp:rw} reviews related work in sports analytics with focus on formation detection in football games. Our system to create a visual formation summary and to classify the formation in single match situations is presented in Section~\ref{chp:analytics_system}. In Section~\ref{chp:experiments} the experimental results based on expert annotations are discussed in detail. Section~\ref{chp:summary} summarizes the paper and outlines potential areas of future research.
\section{Related Work}
\label{chp:rw}
Analytics in football or sports in general is a broad field that has recently attracted more attention mostly due to the availability of positional data commonly captured by pre-installed tracking devices in stadiums~\cite{baysal2015sentioscope, liu2009automatic} or provided by companies such as \citet{optasports} or \citet{stats}.
Since a more general overview goes beyond the scope of this work, we refer to the review of~\citet{Rein2016}, that covers various aspects and challenges of automated content analysis in football. One fundamental research area is the tactical analysis of football data. We therefore focus on related work that has been introduced to find general tactical patterns as well as to explicitly classify and visualize team formations.
Many approaches have been suggested that aim to cluster and consequently find prominent movement patterns of a team~\cite{Wei2013LargeScaleAnalsysisofFormations,Gudmundsson2014,Wang2015DiscerningTacticalPatterns,Gudmundsson2019}. In this context, sketch-based (video)~retrieval systems were introduced~\cite{Kabary2013SportSenseOriginal,sha2016chalkboarding,Probst2018SportSenseUI} that allow users to draw spatio-temporal queries on a virtual pitch to directly retrieve similar game situations.
While these approaches mainly reveal individual or group tactics, another important factor with significant impact on performance is team formation. However, due to the nature of team sports, players constantly change roles and thus make formation classification a complex task. Based on hockey games, \citet{Lucey2013} have shown that a role-based representation of the formation is superior compared to a representation that is solely based on the coordinates of player identities. Subsequently, \citet{Bialkowski2014Large-ScaleAnalysis,Bialkowski2014IdentifyingTeamStyle} have introduced a role-assignment algorithm and define the formation as a set of role-aligned player positions. However, they assume that the dominant formation is stable within a match half and this coarse temporal granularity is not sufficient to describe the complex and varying tactical formations in modern football.
To solve this issue, Perl et al. presented a number of formation analysis tools~\cite{Grunz2009AnalysisAndSimulationOfActions,Perl2011NetBasedGameAnalysis,Grunz2012TacticalPatternRecognition} and used the neural network~\emph{DyCoN}~\cite{Perl2004DyCon} to determine the distribution and sequential changes in the team formation, respectively. In addition, \citet{Bialkowski2016} have extended their previous systems~\cite{Bialkowski2014Large-ScaleAnalysis,Bialkowski2014IdentifyingTeamStyle} and utilized the role-assignment algorithm to discover with-in match variations using two methods as a proof-of-concept: (1)~clustering of role-aligned player positions and~(2) calculating the distance of each frame to the mean formation of the half time.
\citet{Wu2019ForVizor} proposed a visual analytics system called \emph{ForVizor}, that distinguishes between offensive and defensive formations. Based on the role-assignment algorithm of~\citet{Bialkowski2014Large-ScaleAnalysis,Bialkowski2014IdentifyingTeamStyle} they subsequently visualize formation changes between different match periods.
But the aforementioned systems rely on the detection of the most prominent formations, e.g., by using a clustering algorithm or the average formation of the halftime in order to detect temporal changes in the formation. Therefore, their approach cannot automatically predict a numerical tactical scheme such as 4-4-2 for short match situations.
Alternatively, \citet{Machado2017} developed a match analysis tool
and applied k-means clustering to the one-dimensional $y$~player positions with respect to the field length~(from goal to goal). Each player is then assigned to one of three tactical groups to create a rough but well-known numeric representation. However, this approach completely neglects the $x$-coordinates of the players for classification.
\section{Team Formation Classification}
\label{chp:analytics_system}
As the discussion of related work reveals, previous approaches for team formation analysis focused on entire half-times or matches. In contrast, we present a novel classification approach that can be applied to single match segments of arbitrary length and conduct an in-depth expert evaluation for individual match scenes.
In addition, such an evaluation by domain experts in terms of analyzing individual match situations with respect to the formation played has not been conducted yet. Evaluation was either focused on long-term formations~\cite{Bialkowski2014Large-ScaleAnalysis,Bialkowski2014IdentifyingTeamStyle,Bialkowski2016} or placed more emphasis on the evaluation of the tools themselves~\cite{Machado2017,Wu2019ForVizor}.
Our proposed system to explore football matches with respect to the team formation is introduced in this section. The definition of a team formation is provided in Section~\ref{sec:definition_formation}. The required input data as well as pre-processing methods are explained in Section~\ref{sec:input_preprocessing}. Based on this input information we propose a methodology to create a visual formation summary~(Section~\ref{sec:graph_repr}) that serves to classify~(Section~\ref{sec:classify_formation}) the team formation played in single situations of a football match. An overview is illustrated in Figure~\ref{fig:overview}.
\subsection{Definition of Team Formation}
\label{sec:definition_formation}
In general, the team formation describes the spatial arrangement of players within a team. Assuming that all ten players~(except the goalkeeper) are on the pitch, it is defined as a set of ten distinct roles~$F=\{r_1, \ldots, r_{10}\}$ that are represented by their two-dimensional position~$r \in \mathbb{R}^2$ on the football field. For simplification, these roles are often assigned to tactical groups like defenders, midfielders (defense and offensive) and attackers in order to generate a numeric representation.
These numerical schemes, e.g. 4-2-3-1 and 4-4-2, define the tactical formation of the team and are denoted as~$F_n$ in the following.
\subsection{Input Data and Pre-processing}
\label{sec:input_preprocessing}
Our system relies on two-dimensional location information of each player at each discrete timeframe~$f$. We use the normalized coordinates with respect to the width~$x \in [0.0, 0.7]$ and length~\mbox{$y \in [0.0, 1.0]$} of the football pitch and preserve the aspect ratio of the field. For unification, the direction of play of the observed team is always considered from bottom to top and the position data of the players are converted accordingly.
Since the aim of this work is to automatically detect formations in individual game situations, the match is first divided into temporal segments.
In this context, we require information about which team possesses the ball at each timeframe~$f$ and define a game segment~$S=\{f_i, \dots , f_{i+m}\}$ containing $m$~timeframes from gaining to losing the ball or vice versa.
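A minimal sketch of this segmentation step is given below, assuming per-frame possession labels and a frame rate of $25$~fps; the label format is an implementation assumption.
\begin{verbatim}
# Minimal sketch of the possession-based segmentation (label format assumed).
from typing import List, Tuple

def possession_segments(possession: List[str], fps: int = 25,
                        min_seconds: float = 5.0) -> List[Tuple[int, int, str]]:
    """Return (start_frame, end_frame, team) segments from ball gain to loss."""
    segments, start = [], 0
    for f in range(1, len(possession) + 1):
        if f == len(possession) or possession[f] != possession[start]:
            if (f - start) / fps >= min_seconds:   # keep segments of >= 5 s
                segments.append((start, f - 1, possession[start]))
            start = f
    return segments
\end{verbatim}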
\subsection{Visual Formation Summary}
\label{sec:graph_repr}
To classify the formation, first a \emph{visual formation summary}~(VFS) from the two-dimensional position data of all frames in a game segment~$S$ is generated. Regardless of how the players of the observed team have moved on the pitch, e.g., while defending most players are located in the own half, the formation in terms of the distance between the players within a team remains the same. Therefore, we subtract the team center that is defined as the mean of all individual player positions at each timeframe for normalization.
As stated in Section~\ref{sec:definition_formation}, the formation is defined by a set of roles. Theoretically, each player can be considered to act in one role and represented by its mean position during a match segment. However, as mentioned by previous work~\cite{Bialkowski2014Large-ScaleAnalysis, Bialkowski2016}, players can potentially switch roles and this approach would not accurately reflect the tactical formation. For this reason, we employ the role-assignment algorithm of \citet{Bialkowski2014Large-ScaleAnalysis, Bialkowski2016} to detect and compensate for role changes, with the modification that only one iteration is applied. More than one iteration did not have a noticeable influence on the result in our experiments, presumably due to the length of the game sequences.
Finally, the mean position~$\overline{r}$ for each role during the observed match segment is utilized to define the formation~$F = \{\overline{r}_1, \dots, \overline{r}_{10}\}$ and to derive the visual formation summary.
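The following simplified sketch summarizes the VFS computation: the team center is subtracted per frame, the roles are initialized with the mean player positions, and a single assignment pass with the Hungarian algorithm aligns players to roles before averaging. It only approximates the role-assignment procedure of \citet{Bialkowski2014Large-ScaleAnalysis} and is not a faithful re-implementation.
\begin{verbatim}
# Simplified, one-pass approximation of the VFS computation.
import numpy as np
from scipy.optimize import linear_sum_assignment

def visual_formation_summary(positions: np.ndarray) -> np.ndarray:
    """positions: (frames, 10, 2) coordinates of one team; returns (10, 2) role means."""
    centred = positions - positions.mean(axis=1, keepdims=True)  # remove team centre
    roles = centred.mean(axis=0)                                 # initial role positions
    aligned = np.empty_like(centred)
    for t, frame in enumerate(centred):                          # one assignment pass
        cost = ((frame[:, None, :] - roles[None, :, :]) ** 2).sum(-1)
        player_idx, role_idx = linear_sum_assignment(cost)
        aligned[t, role_idx] = frame[player_idx]
    return aligned.mean(axis=0)                                  # mean position per role
\end{verbatim}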
\subsection{Classification of Numerical Schemes}
\label{sec:classify_formation}
The formation~$F$ with compensated role changes according to the previous section serves as input to classify each game situation into a common numerical tactical schema like 4-4-2. We propose a novel classification approach that measures the similarity of the extracted formation~$F$ to a pre-defined set of $t$~popular football \mbox{formations~$T=\{\hat{F}_1, \dotsc, \hat{F}_t\}$}. The expected player coordinates
are provided by domain experts as explained in Section~\ref{sec:templates}.
In order to enable a comparison between two formations, it is necessary to normalize the positional data of each role~$\tilde{r}$ by the minimum and maximum $x$ and $y$~coordinate within a formation~$F$:
\begin{equation}
\tilde{r} = \frac{\overline{r} - \min(F)}{\max(F) - \min(F)} \qquad \forall \, \overline{r} \in F \, .
\end{equation}
The formula provides also a normalization of the relative distances of the players and therefore allows for a comparison of formations with different compactness.
Subsequently, a similarity matrix~\mbox{$M(F_1, F_2) \in \mathbb{R}^{10\times10}$} is calculated. Since we use idealized templates for formation classification, the squared Euclidean distance is applied in this context, because it penalizes smaller distances between different roles less severely. Each entry~$m_{i,j}$ is then calculated based on the positional coordinates of each role~$\tilde{r}_i = (\tilde{x}_i, \tilde{y}_i)$ in formation~$F_1$ and each role~$\tilde{r}_j = (\tilde{x}_j, \tilde{y}_j)$ in formation~$F_2$ according to the following equation:
\begin{equation}
\label{eq:sim_matrix}
m_{i,j} = \max{\left(1 - \frac{||\tilde{r}_i - \tilde{r}_j||_2^2}{\delta}, 0\right)} \qquad \forall \, \tilde{r}_i \in F_1, \, \tilde{r}_j \in F_2 \, .
\end{equation}
The normalization factor~$\delta$ serves as a tolerance radius. Under the assumption that a football pitch can be divided into three horizontal~(left, center, right) and three vertical groups~(defender, midfielder, attacker), a normalization factor~$\delta = \nicefrac{1}{3}$ means that the similarity of wingers to central players as well as, e.g., of attackers to midfielders would already become zero. In our opinion, this fits well to the task of formation classification. Note that we only allow similarity values in the range~$m_{i,j} \in [0, 1]$.
To calculate the similarity of two formations, each role in formation~$F_1$ has to be assigned to its optimal counterpart in formation~$F_2$.
With the constraint that each role can only be assigned once and the overall goal to maximize the similarity
this results in a linear sum assignment problem that can be solved via the \emph{Hungarian algorithm}~\cite{Kuhn1955Ass} whose solution corresponds to:
\begin{equation}
\label{eq:hungarian}
m^*_{i, j} = \begin{cases}
m_{i,j} & \text{, if } \tilde{r}_i \in F_1 \text{ is assigned to } \tilde{r}_j \in F_2, \\
0 & \text{otherwise.}
\end{cases}
\end{equation}
Finally, the formation similarity~FSIM($F_1, F_2$) of the compared formations is defined as the sum of all elements in the similarity matrix~$M^*(F_1, F_2)$ normalized with respect to the number of assigned roles~(in this case ten).
To classify the derived formation~$F$ according to Section~\ref{sec:graph_repr} of a given match segment into the numerical schema~$F_n$, we calculate the similarities to a set of pre-defined templates~$T=\{\hat{F}_1, \dotsc, \hat{F}_t\}$ that contain idealized role positions for $t$~popular football formations. This allows us to generate a ranking of the most probable numerical formation played in an individual match situation. For the final classification, the template formation with the highest similarity is selected as defined in Equation~\eqref{eq:codebook_selection}:
\begin{equation}
\label{eq:codebook_selection}
F_n^*= \underset{ \hat{F} \in T}{\argmax}\left(\text{FSIM}(F,\hat{F})\right) \,
\end{equation}
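Equations~\eqref{eq:sim_matrix}--\eqref{eq:codebook_selection} translate into a few lines of NumPy/SciPy, as sketched below; the per-axis min-max normalization and the template dictionary layout are implementation assumptions.
\begin{verbatim}
# Sketch of the similarity computation and template matching.
import numpy as np
from scipy.optimize import linear_sum_assignment

def normalise(F: np.ndarray) -> np.ndarray:
    """Min-max normalise the ten role positions of a formation (10, 2)."""
    return (F - F.min(axis=0)) / (F.max(axis=0) - F.min(axis=0))

def fsim(F1: np.ndarray, F2: np.ndarray, delta: float = 1 / 3) -> float:
    """Formation similarity via optimal role assignment (Hungarian algorithm)."""
    r1, r2 = normalise(F1), normalise(F2)
    d2 = ((r1[:, None, :] - r2[None, :, :]) ** 2).sum(-1)  # squared distances
    M = np.clip(1.0 - d2 / delta, 0.0, None)               # entries in [0, 1]
    rows, cols = linear_sum_assignment(M, maximize=True)   # optimal assignment
    return M[rows, cols].mean()                            # normalised by #roles

def classify_formation(F: np.ndarray, templates: dict) -> str:
    """Return the numerical scheme (e.g. '4-4-2') with the highest FSIM.
    templates: {'4-4-2': [array of shape (10, 2), ...], ...} with one or
    more idealized variants per formation."""
    return max(templates,
               key=lambda name: max(fsim(F, t) for t in templates[name]))
\end{verbatim}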
\section{Experimental Results}
\label{chp:experiments}
In this section, the experimental setup with details on the dataset characteristics and the annotation study with domain experts are presented~(Section~\ref{sec:setup}). Furthermore, the templates with idealized role position for twelve popular football formation are presented in Section~\ref{sec:templates}. The evaluation metrics including a novel similarity measurement based on ground truth formations are introduced in Section~\ref{sec:metrics}. In-depth analysis of the quality of the ground-truth annotations as well as the evaluation of the visual formation summary~(VFS) are presented in Section~\ref{sec:expert_study} and Section~\ref{sec:eval_graphic}. Finally, the results of the automatic formation classification to reveal team tactics are discussed in Section~\ref{sec:eval_cls}.
\subsection{Setup}
\label{sec:setup}
\subsubsection{Dataset:}
The dataset contains four matches that took place in the 2011/2012 \emph{first league season} (omitted for double blind review). The positional data were captured by a \emph{VisTrack} device with a temporal resolution of $25$~frames per second, which also
provides further information such as events~(corners, free kicks, etc.), actions~(passes, shots, etc.), ball possession, and game status~(running or interrupted).
We use the additional events and game status to further clean the data as explained in the following.
First, we only consider frames with game status \emph{running} to remove all interruptions. Based on the information about ball possession, the matches are temporally segmented according to Section~\ref{sec:input_preprocessing}. In this context, we discard all match segments that are shorter than five seconds since they most likely do not contain any valuable tactical information. Since standard situations such as set pieces break up tactical formations, we remove all frames five seconds after throw-ins and ten seconds after free kicks, corners, and penalties. The recognition of tactical patterns in these kinds of situations should be analyzed independently since they show different characteristics, which is beyond the scope of the current work.
For a more detailed analysis we furthermore distinguish between short~($5\,s \leq t < 10\,s$), medium~($10\,s \leq t < 20\,s$) and long match situations~($t \geq 20\,s$) and consider situations of own and opposing team's ball possession independently in the experiments.
\subsubsection{Annotation Study:}
\input{tab_dataset-stats.tex}
The annotation study was split into two parts to (1)~provide ground-truth annotations of the tactical formation for a given match situation and (2)~evaluate the respective visual formation summary~(VFS). Twelve domain experts~(professional game analysts) from an \emph{Institute of Sports Sciences}~(omitted for double blind review) were available in both parts, who had a total time of 60~minutes to annotate 100~situations. The experts received two sets that each contained $50$~scenes that were evenly sampled from a single half of a football match. The sets contained $25$~scenes for both own and opposing team's ball possession. The scenes were watched in chronological order to simulate a real analysts process and allow the annotator to benefit from contextual knowledge of previous situations. To assess the quality of the annotations given by the experts, eight sets of 50~match situation~(total of 400 annotations) were assigned to two experts each. Detailed statistics are presented in Table~\ref{tab:dataset_stats}. Note that $71$~formation annotations and $307$~ratings for the VFS are missing due to the time constraint.
\textbf{Ground-truth Formation Annotations:} At first the annotator was asked to watch the two-dimensional match animation based on positional data of all players~(players of the opposing team were displayed opaque for reference) and the ball of a given match segment. The situation could either be explored by playing the scene automatically in real-time or manually using a slider. To help finding tactical groups, specific players could be marked by different colors. The analytics tool is shown in Figure~\ref{fig:gui}.
\begin{figure}[t]
\centering
\includegraphics[width=1.00\linewidth]{graphics/anno_gui4.png}
\caption{Analytics tool for formation detection. The two-dimensional animation of the selected match situation~(middle) is shown on the left and the resulting visual formation summary of the scene is shown on the right side.}
\label{fig:gui}
\end{figure}
Finally, the annotator was able to select one of twelve different formations as well as the options \emph{other} and \emph{undefined}~(as listed in Figure~\ref{fig:annotation_stats}) if a formation could not be assigned unambiguously. Besides the annotation of the tactical scheme such as 4-4-2, the annotator had to rate how well the formation could be determined
by choosing one of the following options: \emph{entirely ambiguous, ambiguous, clear, very clear}. The statistics of annotated match situations is illustrated in Figure~\ref{fig:annotation_stats}.
\begin{figure}[t]
\centering
\includegraphics[width=1.00\linewidth,trim={0.0cm 0.00cm 0.2cm 0.0cm},clip]{graphics/formation_distribution/formation-distribution.pdf}
\caption{Distribution of annotated tactical formations and the degree of clarity~(emphasized in different colors).}
\label{fig:annotation_stats}
\end{figure}
\textbf{Rating of the Visual Formation Summary:} After providing the ground-truth formation, the annotator had to evaluate the VFS of the observed scene. In this context, he was allowed to select one of the three following options: \emph{bad, neutral, good}.
\input{tab_anno-agree.tex}
\subsection{Template Formations}
\label{sec:templates}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{graphics/template-formations-all.pdf}
\caption{Templates with idealized role positions for twelve popular football formations created by domain experts. If a formation is ambiguous, several variations are provided.}
\label{fig:templates}
\end{figure}
As explained in Section~\ref{sec:classify_formation}, the classification of the played formation is performed by a comparison to a pre-defined set of templates for different tactical formations.
Our domain experts were asked to provide these ground-truth templates to the best of their knowledge. But some formations like 4-4-2 contain some variations and are not completely unambiguous. Hence, multiple templates for one formation should be created. For classification we have calculated the similarity of the visual formation summary to all variations of a single formation, and used the maximum as value for the formation similarity~FSIM. The templates created for all twelve formations used in the experiments are visualized in Figure~\ref{fig:templates}.
\subsection{Evaluation Metrics}
\label{sec:metrics}
In order to assess the quality of the visual formation summary~(VFS) of a given match sequence, we propose to calculate the formation similarity~FSIM to the template of the annotated ground-truth formation as an evaluation metric. The accuracy is calculated to measure the classification performance of the system. Since the dataset contains a large bias towards some formations, we report micro-accuracy alongside macro-accuracy as it allows us to study system performance while considering each class to be equally important. However, as already mentioned, some formations such as 3-5-2 and 5-3-2 show very similar spatial characteristics and only differ in the subjective interpretation of some player roles. Hence, the VFS is compared to all available template formations. This allows us to generate a ranking with respect to the formation similarities~FSIM and additionally report the top-$k$ accuracy. Note that some match situations were analyzed by two experts and their annotations can differ. We assume that both annotations are valid and use the annotated formation which has a higher similarity to the VFS as reference.
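The top-$k$ evaluation can be sketched as follows, reusing the FSIM routine from the sketch in Section~\ref{sec:classify_formation}; the scene dictionary layout is an assumption for illustration.
\begin{verbatim}
# Sketch of the top-k accuracy: a scene counts as correct if an annotated
# formation appears among the k templates most similar to its VFS.
# Reuses fsim() and the templates dictionary from the earlier sketch.
from typing import Dict, List

def topk_accuracy(scenes: List[Dict], templates: dict, k: int = 3) -> float:
    hits = 0
    for scene in scenes:   # scene: {"vfs": (10, 2) array, "labels": ["4-4-2", ...]}
        ranking = sorted(
            templates,
            key=lambda name: max(fsim(scene["vfs"], t) for t in templates[name]),
            reverse=True)[:k]
        if any(label in ranking for label in scene["labels"]):
            hits += 1
    return hits / len(scenes)
\end{verbatim}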
\subsection{Analysis of the Expert Study}
\label{sec:expert_study}
In our experiments, we only consider match situations where a formation was \emph{clearly} or \emph{very clearly} recognizable for at least one expert. This results in a total of 472 unique situations of which 207 were annotated by two experts for classification and 450 ratings for the visual formation summary.
Referring to Figure~\ref{fig:annotation_stats}, the analysis of the annotated match situations has shown a huge bias towards some popular formations such as 4-4-2, 4-2-3-1 and 4-3-3. These results were more or less expected, since e.g. the 4-4-2 is generally widely accepted and therefore used more frequently to describe a formation compared to a 4-2-4, which however has very similar spatial properties as shown in Figure~\ref{fig:templates}. More surprisingly, the majority of annotations were rated at least \emph{clearly} recognizable by the experts, despite the short length of single match situations. In this context, the defensive formations were annotated with a larger confidence than offensive formations and have shown less variance~(mainly 4-4-2). This effect can be explained by the increased freedom of the players during the attack to make creative plays, which are very important for scoring goals in modern football~\cite{kempe2018good}.
In order to assess the quality of the provided annotations of the played formations, the inter-coder agreement is measured. In this context, we only consider match situations in which at least one expert has clearly or very clearly identified the formation. The results are reported in Table~\ref{tab:study-agree}. In particular, the agreement in terms of \emph{Krippendorff's}~$\alpha$ and the top-1 accuracy are noticeably lower than expected. Overall, annotations for defensive formations show significantly more correlation than annotations for offensive formations. As already mentioned, this is mainly due to the freedom and creativity in attacking situations, which lead to more fluid formations.
However, we believe that the annotations from domain experts still show correlations, but that the complexity and subjectivity of the task lead to different conclusions. As stated above, team formations sometimes only differ in the interpretation of very specific roles. In addition, it is possible that (1)~formations are not symmetric and (2)~different formations are played within a single game situation, e.g., when multiple offensive game patterns are performed during an attack. Therefore, we propose to calculate the formation similarity~FSIM between the templates of the annotated formations to obtain an alternative measure of the inter-coder agreement. This also enables us to determine the top-$k$ accuracy, since a ranking of the most similar formations can be generated. The similarity values of all tactical formations are visualized in Figure~\ref{fig:template_similarities}.
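The following Python fragment sketches these agreement measures for doubly-annotated situations; the exact bookkeeping is simplified and all identifiers are illustrative.
\begin{verbatim}
# Sketch of the proposed agreement measures: the FSIM between the
# templates of the two annotated formations is averaged, and a top-k
# agreement checks whether the second label is among the k formations
# most similar to the first, according to the template similarities.
def template_agreement(label_pairs, template_sim, k=3):
    """label_pairs: list of (label_a, label_b) expert annotations.
    template_sim: dict of dicts, template_sim[a][b] = FSIM of templates."""
    mean_fsim = (sum(template_sim[a][b] for a, b in label_pairs)
                 / len(label_pairs))
    top_k_hits = 0
    for a, b in label_pairs:
        ranking = sorted(template_sim[a], key=template_sim[a].get,
                         reverse=True)
        top_k_hits += (a == b) or (b in ranking[:k])
    return mean_fsim, top_k_hits / len(label_pairs)
\end{verbatim}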
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth,trim={0.25cm 0.45cm 0.1cm 0.4cm},clip]{graphics/similarity_matrix.pdf}
\caption{Formation similarities~(FSIM) of the templates with idealized player positions~(Figure~\ref{fig:templates}) of twelve popular football formations. Formations with the same amount of defenders show very high similarities~(emphasized with red borders).}
\label{fig:template_similarities}
\end{figure}
It is clearly visible that, with respect to the inter-coder agreement, the top-3 accuracy is significantly better than the top-1 accuracy~(Table~\ref{tab:study-agree}). In addition, the formation similarities~FSIM between the annotated formations are comparatively high, especially if the values from Figure~\ref{fig:template_similarities} are taken into account. From our point of view, this indicates that the annotations of experts do indeed show a high correlation, at least if the recognition of formations is treated as a multi-label task where more than one answer can be considered correct.
\subsection{Evaluation of the VFS}
\label{sec:eval_graphic}
In the second part of the annotation study, the domain experts were asked to rate the usefulness of the extracted visual formation summary~(VFS) of a team in a given match situation. In addition, we quantified the similarity of the VFS to the template of the annotated formation. The results are reported in Table~\ref{tab:study-eval-graphic}.
\input{tab_eval-graphic.tex}
Overall, the VFSs were mostly rated positively, and only in $16\%$ of the situations did the annotator not see a correlation to the two-dimensional schematic visual representation. This demonstrates that the VFS indeed provides a good overview in the majority of cases. In particular, in situations where the opponent of the observed team possesses the ball, the VFS can quickly give insights into the tactical defensive formation and therefore simplifies the analysts' work.
The same conclusions can be drawn with respect to the obtained formation similarity of the extracted VFS to the templates of the annotated formation. Similarities around 0.75 and 0.80 are achieved in the two cases of own and opposing team ball possession, respectively. Although these values are lower than the template similarities in Figure~\ref{fig:template_similarities}, we believe that the results indicate a satisfactory system quality. The template similarities are calculated based on idealized role positions that can be partially shared between two templates. Therefore, these values are expected to be higher, since the VFSs of real football situations show more variation.
\subsection{Evaluation of the Formation Classification}
\label{sec:eval_cls}
\quad\textbf{Baselines:} As discussed in the related work section, previous work~\cite{Bialkowski2016, Wu2019ForVizor} applies clustering approaches to find the most prominent formations in a match in order to measure formation changes. These approaches are not capable of automatically classifying the formation
and are therefore not suitable for comparison. For this reason, we can only compare our proposed classification approach to \citet{Machado2017}'s system. However, their solution relies on a k-means clustering of y-coordinates and can only predict a pre-defined number of tactical groups, in this case $k=3$. In addition, it can predict unrealistic formations such as 2-7-1. This is a systematic drawback, and the expert annotations shown in Figure~\ref{fig:annotation_stats} indicate that such \emph{other} formations are labelled very rarely.
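For reference, our reading of this baseline, as applied to a visual formation summary, can be sketched as follows; this is not the original implementation, and the identifiers as well as the orientation of the coordinate system are our own assumptions.
\begin{verbatim}
# Simplified sketch of the k-means baseline applied to a VFS: the ten
# outfield positions are clustered along the y-axis (assumed to be the
# goal-to-goal direction) into k=3 tactical groups, whose sizes yield a
# formation string.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_formation(vfs, k=3):
    """vfs: (10, 2) array of outfield positions; column 1 = y-coordinate."""
    y = np.asarray(vfs)[:, 1].reshape(-1, 1)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(y)
    order = np.argsort([y[labels == c].mean() for c in range(k)])
    sizes = [int(np.sum(labels == c)) for c in order]  # defence first
    return "-".join(str(s) for s in sizes)             # e.g. "4-3-3"
\end{verbatim}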
\textbf{Comparison to baseline approaches:} To enable a comparison, we reduce the number of groups in the annotated formations as well as predictions from four to three by assigning the most similar formation with three tactical groups according to Figure~\ref{fig:template_similarities}. The annotated 4-1-4-1 and 4-2-3-1 formations become a 4-5-1 and the 4-3-2-1 is converted to a 4-3-3, yielding a total number of nine different classes. \citet{Machado2017}'s classification approach based on the clustering of y-coordinates is also applied to the visual formation summaries and thus to the same input data as our system. In addition, we investigate the impact of the role assignment algorithm. The results are reported in Table~\ref{tab:eval-sota}.
\input{tab_eval_sota}
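The label reduction used for this comparison can be expressed as a simple mapping; the snippet below only restates the assignment described above, and the identifiers are illustrative.
\begin{verbatim}
# Mapping of four-group formations to their most similar three-group
# counterpart, as used for the baseline comparison.
THREE_GROUP_MAP = {"4-1-4-1": "4-5-1",
                   "4-2-3-1": "4-5-1",
                   "4-3-2-1": "4-3-3"}

def to_three_groups(formation):
    return THREE_GROUP_MAP.get(formation, formation)
\end{verbatim}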
The results clearly show that our classification approach is superior to \citet{Machado2017}'s approach. Furthermore, we could confirm that the role assignment algorithm improves team formation classification.
\textbf{Analysis and discussion of the results:} Although the results are improved compared to previous baselines, the top-1 accuracy in particular is rather low. To analyze possible problems in more detail, quantitative as well as qualitative results are provided in Table~\ref{tab:eval_cls} and Figure~\ref{fig:qual_results}, respectively.
\input{tab_eval_classification.tex}
\begin{figure*}[t]
\centering
\parbox{.61\linewidth}{
\includegraphics[width=1.00\linewidth]{graphics/qualitative_results.pdf}
}
\hfill
\parbox{.37\linewidth}{
\includegraphics[width=1.0\linewidth,trim={0.25cm 0.45cm 0.1cm 0.4cm},clip]{graphics/results-classification-mk.pdf}
}
\caption{Qualitative results~(left) of the proposed analytics system for six individual scenes with the respective top-3 tactical group assignment~(emphasized in different colors) after comparing with the templates in Figure~\ref{fig:templates} as well as the confusion matrix~(right) for the predictions of all 472 match situations in percent.}
\label{fig:qual_results}
\end{figure*}
The quantitative results again show that the task of formation classification becomes easier for defensive situations and for situations of increasing length. The results for the setting \emph{all situations} are much better than random guessing and also improve significantly when considering the top-$k$ most similar formations, particularly for $k>2$. Referring to the qualitative results in Figure~\ref{fig:qual_results}, this can mainly be explained by the problems described below.
As previously stated, formations can be very similar, and their classification often depends on the interpretation of very specific roles. In particular, wing backs in formations with four defenders move up the pitch to get involved in the attack or to pro-actively defend in pressing situations. This is clearly visible in scenes~1, 4 and 6. While the domain expert in scene~1 considers them midfielders, they are mostly perceived as defenders in similar situations.
At the same time, the defensive midfielder can drop back to form a three-man defensive line with the two center backs. For this reason, e.g., a 4-4-2 or 4-3-3 is often classified as a 3-4-3 or 3-5-2 by our system, as depicted in the confusion matrix~(Figure~\ref{fig:qual_results}, right). Similarly, offensive wingers or attacking midfielders can be interpreted either as midfielders or as strikers. Referring to Figure~\ref{fig:annotation_stats}, the experts lean towards more popular formations such as 4-4-2 during their annotations, while the visual formation summary suggests a 4-2-4 instead with respect to the pre-defined templates~(scene~5).
Admittedly, the experts classified the formation based on the two-dimensional graphical animation representation and, in addition, had context from preceding situations, which can influence the rating and allow for other conclusions. However, in many cases the annotators even rated the VFS as \emph{good}, which shows that the mistakes are often connected to the subjective interpretation of roles rather than to the similarity to idealized templates.
Overall, the analysis suggests that formation classification should be considered a multi-label or fuzzy classification task, where more than one answer can be correct. For this reason, we believe that a visual formation summary often provides valuable insights into the tactical formations. The similarities to templates of popular formations, however, can help to monitor tactical changes as well as to retrieve situations that show specific formations.
\section{Conclusions and Future Work}
\label{chp:summary}
In this paper, we have presented an analytics approach for the visualization and classification of tactical formations in single situations of football matches. A novel classification approach has been proposed that employs a set of ground-truth templates containing idealized player positions for twelve popular team formations.
A detailed analysis of an expert annotation study was conducted to provide results for defensive and offensive formation classification in match situations of various lengths. The study has clearly demonstrated the complexity of the task, particularly for offensive formations, since even annotations from domain experts differ due to the subjectivity in interpreting roles of similar formation schemes such as 4-2-3-1 and 4-3-3.
For this reason, we have suggested a novel measure to quantify the results for formation classification and visualization based on the similarity to pre-defined formation templates.
The results demonstrated that our visual formation summary already provides valuable information and is capable of summarizing individual scenes in football matches. In addition, we have shown the superiority of our classification approach compared to the current state of the art.
In the future, we plan to extend the current analytics system with other valuable tactical indicators such as the variance and movements of the players. Our current approach explicitly aimed for a solution that does not require any training data for classification. However, due to the increasing amount of position data, whether synthetic or real, deep learning approaches could become applicable to find more sophisticated solutions for the classification of team formations.
\section{Introduction}
\label{s:intro}
A mixed doubles tournament is a set of games or matches between two teams,
where each team consists of one male and one female player, as in mixed doubles tennis.
We are concerned here with the situation in which the teams are not fixed,
but vary throughout the tournament, unlike, say, the usual arrangement in a bridge
tournament, where the same two players form a team in every match they play.
Also we impose round robin properties on the tournament structure. These properties specify
the number of times players oppose, and the number of times players of the opposite sex partner.
The best-known type of a mixed doubles tournament in which partners are not fixed is the
spouse-avoiding mixed doubles round robin tournament.
A spouse-avoiding mixed doubles round robin tournament, \\ SAMDRR$(n)$, is a schedule of
mixed doubles games for $n$ husband and wife couples. The tournament is structured so that
spouses never play in a match together as partners or opponents. However, every man
and woman who are not spouses are partners exactly once and opponents exactly once,
and every pair of players of the same sex are opponents exactly once.
Brayton, Coppersmith, and Hoffman~\cite{BCH1, BCH2} defined these tournaments in 1973 and
showed that a SAMDRR$(n)$ exists for all $n$ except $2$, $3,$ and $6$.
A SAMDRR$(n)$ is resolvable if the games can be arranged in rounds so that: if $n$ is even, each player plays in every round; and if $n$ is odd, each player
except one husband and wife plays in every round. It follows that for $n$ odd, every player has exactly one bye, i.e., a round they sit out.
The total number of games is $n(n-1)/2$.
The existence of a resolvable SAMDRR$(n)$ is equivalent to the existence of a self-orthogonal latin square of
order $n$ with a symmetric orthogonal mate (SOLSSOM) (see \cite{IA1, FIN}).
Recently, a new class of mixed doubles tournaments with round robin properties has been introduced and studied by
Berman and Smith~\cite{BS1, BS2}. They are called strict Mitchell mixed doubles round robin tournaments (strict MMDRR) and
were motivated by an article of Anderson~\cite{IA2} who describes a problem of Mitchell~\cite{M} from
the late nineteenth century.
\begin{Definition}
\label{strict} A strict Mitchell mixed doubles round robin tournament \\
(strict MMDRR$(n)$) is a schedule of mixed doubles games for $n$ men and $n$
women in which every man and woman partner exactly once and oppose exactly once.
Every pair of players of the same sex oppose at least once.
\end{Definition}
Note first that since every player appears in $n$ games, every player must oppose one player of the same sex exactly twice.
Also, the number of games in a strict MMDRR$(n)$ is ${n^2}/2$. It follows that this tournament structure can be considered only
when $n$ is even. Berman and Smith~\cite{BS1} give examples of strict MMDRR$(n)$ for
$n = 2, 4, 6, 10, 14$, prove a product theorem, and show strict MMDRR$(n)$ exist for $n = 16k$ for $k\ge1$ and
$n=16k+4$ for $k\ge3$.
In this paper we introduce a new type of tournament called a complete mixed doubles round robin tournament that generalizes
both SAMDRRs and strict MMDRRs.
\begin{Definition}\label{CCMDRR} A complete mixed doubles round robin tournament \\
(CMDRR$(n, k)$) is a schedule of mixed doubles games for $n$ men and $n$
women of which $k$ men and $k$ women are spouses. Spouses never play in a match together as partners or opponents. However, every man
and woman who are not spouses are partners exactly once and opponents exactly once. Each player
who has a spouse opposes every same sex player exactly once. Each player who does not have a spouse opposes
some other same sex player who does not have a spouse exactly twice and opposes all other same sex players exactly once.
\end{Definition}
By definition every CMDRR$(n, 0)$ is a strict MMDRR$(n)$ and every CMDRR$(n, n)$ is a SAMDRR$(n)$.
For odd $n$, CMDRR$(n, 1)$ is the closest that it is possible to come to the non-existent strict MMDRR$(n)$. The number of games in a
CMDRR$(n, k)$ is $({n^2}-k)/2$. Players who do not have spouses are paired by repeated opposition so $n-k$ must be even.
We represent a CMDRR$(n, k)$ as a square matrix of order $n$ with males as row indices and females as column indices.
The entry in position (M$i$, F$j$) is the pair (M$x$, F$y$) if and only if the game
M$i$, F$j$ v M$x$, F$y$ is in the tournament. Each game contributes two entries, i.e., the entry in position (M$x$, F$y$) is the pair (M$i$, F$j$).
If the CMDRR is a SAMDRR then this representation is different than the standard representation. In the standard representation, a SAMDRR$(n)$ corresponds
to a SOLS$(n)$ with males as both row and column indices and females as entries. There is a game M$i$, F$x$ v M$j$, F$y$ if and only if the entry in position
($i$, $j$) is $x$ and the entry in position ($j$, $i$) is $y$. This standard representation cannot be used for a CMDRR because of repeated opposition of
same sex players.
\begin{Example}\label{C31} A CMDRR$(3, 1)$
\\[.05in]
\begin{tabular}{c|ccc}
& F1 & F2 & F3 \\
\cline{1-4}
M1 & & M2F3 & M3F2 \\
M2 & M3F3 & M3F1 & M1F2 \\
M3 & M2F2 & M1F3 & M2F1 \\
\end{tabular}
\end{Example}
From row $1$ we see that M$1$ opposes M$2$, M$3$, F$3$, F$2$, and partners F$2$, F$3$.
From row $2$ we see that M$2$ opposes M$3$ twice and M$1$ once, opposes F$3$, F$1$, F$2$, and partners F$1$, F$2$, F$3$.
From row $3$ we see that M$3$ opposes M$2$ twice and M$1$ once, opposes F$2$, F$3$, F$1$, and partners F$1$, F$2$, F$3$.
From the columns we see similar information about each female. The hole at position (M$1$, F$1$) indicates
that these two players are spouses. Thus, it is easy to check that the conditions for a CMDRR$(3, 1)$
are satisfied. In future examples we will suppress the row and column headers and also the M and F in each entry.
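These conditions can also be verified mechanically from the matrix representation. The following Python sketch is purely illustrative; it assumes that spouses share the same index (holes on the diagonal), as in all examples of this paper, and that the matrix is given as a dict of dicts with 1-based indices.
\begin{verbatim}
# Sketch of a checker for the CMDRR(n, k) conditions: games[i][j] =
# (x, y) encodes the game MiFj v MxFy, and games[i][j] = None marks a
# spouse pair.
from collections import Counter

def check_cmdrr(games, n, k):
    spouses = {i for i in range(1, n + 1) if games[i][i] is None}
    assert len(spouses) == k and (n - k) % 2 == 0
    mixed, males, females = Counter(), Counter(), Counter()
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            entry = games[i][j]
            if entry is None:
                assert i == j and i in spouses
                continue
            x, y = entry
            assert games[x][y] == (i, j)          # both cells, same game
            mixed[(i, y)] += 1                    # Mi opposes Fy
            if i < x:
                males[frozenset((i, x))] += 1     # count each game once
            if j < y:
                females[frozenset((j, y))] += 1
    for i in range(1, n + 1):                     # opposite-sex conditions
        for j in range(1, n + 1):
            expected = 0 if (i == j and i in spouses) else 1
            assert mixed[(i, j)] == expected
    for counts in (males, females):               # same-sex conditions
        assert len(counts) == n * (n - 1) // 2    # every pair opposes
        assert all(c in (1, 2) for c in counts.values())
        doubles = [p for p, c in counts.items() if c == 2]
        paired = sorted(q for p in doubles for q in p)
        assert paired == sorted(set(range(1, n + 1)) - spouses)
    return True
\end{verbatim}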
We next discuss resolvability for CMDRR$(n, k)$. The games must be partitioned into rounds so that each player plays in at most one game per round.
We will call a round \textit{full} if it involves all players if $n$ is even,
and all but $2$ players if $n$ is odd. A round that is not full is called \textit{short}. A CMDRR$(n, k)$ is called \textit{fully resolvable} if the
games can be partitioned into rounds with at most one short round.
The round structure is specified by a matrix of order $n$, with entries from the set \{1, \ldots, r\}, where $r$ is the number of rounds,
and each entry appears at most one time in each row and column. The entry in cell $(i, j)$ is the round in which the game partnering M$i$ and F$j$ is played
for non-spouses, or empty for spouses.
\begin{Example}\label{C31b} Resolution for the CMDRR$(3, 1)$ of Example~\ref{C31} into $4$ full rounds.
\\[.05in]
\begin{tabular}{|ccc|}
\cline{1-3}
& 1 & 2 \\
3 & 4 & 1 \\
4 & 2 & 3 \\
\cline{1-3}
\end{tabular}
\hspace{0.5 in}
\begin{tabular}{ccc}
round & game & byes \\
\cline{1-3}
1 & M1F2 v M2F3 & M3, F1 \\
2 & M1F3 v M3F2 & M2, F1 \\
3 & M2F1 v M3F3 & M1, F2 \\
4 & M2F2 v M3F1 & M1, F3 \\
\cline{1-3}
\end{tabular}
\end{Example}
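Such a round-structure matrix can also be checked mechanically; the Python sketch below assumes the same dict-of-dicts representation with 1-based indices and is only illustrative.
\begin{verbatim}
# Sketch of a check for a round-structure matrix: rounds[i][j] is the
# round of the game partnering Mi and Fj (None for spouses).  A player
# appears at most once per round iff no round number repeats within a
# row or a column, and both cells of a game must carry the same round.
def check_resolution(games, rounds, n):
    for i in range(1, n + 1):
        row = [rounds[i][j] for j in range(1, n + 1)
               if rounds[i][j] is not None]
        col = [rounds[j][i] for j in range(1, n + 1)
               if rounds[j][i] is not None]
        assert len(row) == len(set(row)) and len(col) == len(set(col))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if games[i][j] is not None:
                x, y = games[i][j]
                assert rounds[i][j] == rounds[x][y]  # same game, same round
    return True
\end{verbatim}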
Unfortunately, full resolvability is usually hard or impossible to come by. Alternatively, we settle for a partition of the games into all short rounds, all but one of equal length. Notice that every non-spouse player will have the same number of byes (say $b$), and every spouse player will have $b+1$ byes. Ideally, each of the equal-length short rounds should have the greatest possible number of players.
\section{Examples}
In this section we give examples of CMDRR$(n, k)$ for $n \le 8$, and also of a CMDRR$(9, 3)$ and a CMDRR$(10, 2)$. Examples of
SAMDRR$(n)$ can be found in~\cite{IA1}. Most of the examples were found using an Embarcadero Delphi XE program, available from the second author.
The program fixes the partnerships and then exchanges them between games in a tabu search algorithm that seeks to optimize the
opposition pairs incidence matrix (see~\cite{Mor}). The examples will be used in the next section as the basis of our recursive construction.
A more extensive list of examples is available from the authors.
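To give an impression of the search, the Python fragment below sketches the underlying idea only: the partnerships are fixed, games are formed by pairing partnerships, and a move exchanges the opponents of two games. The actual program is written in Delphi and uses a tabu memory, which is omitted here; the cost shown covers only the same-sex opposition incidence together with an obvious validity penalty, and all identifiers are illustrative.
\begin{verbatim}
# Schematic local-search sketch (not the original Delphi program).
import random
from collections import Counter

def cost(games, n):
    males, females, penalty = Counter(), Counter(), 0
    for (m1, f1), (m2, f2) in games:
        if m1 == m2 or f1 == f2:       # a player cannot oppose themself
            penalty += 100
            continue
        males[frozenset((m1, m2))] += 1
        females[frozenset((f1, f2))] += 1
    for counts in (males, females):
        for a in range(1, n + 1):
            for b in range(a + 1, n + 1):
                seen = counts[frozenset((a, b))]
                penalty += max(0, 1 - seen) + max(0, seen - 2)
    return penalty

def search(partnerships, n, iterations=200000):
    """partnerships: list of (male, female) partner pairs (fixed)."""
    random.shuffle(partnerships)
    games = [(partnerships[2 * i], partnerships[2 * i + 1])
             for i in range(len(partnerships) // 2)]
    best = cost(games, n)
    for _ in range(iterations):
        if best == 0:
            break
        a, b = random.sample(range(len(games)), 2)
        trial = list(games)
        trial[a] = (games[a][0], games[b][1])   # exchange opponents
        trial[b] = (games[b][0], games[a][1])
        c = cost(trial, n)
        if c <= best:
            games, best = trial, c
    return games, best
\end{verbatim}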
The strict MMDRR$(2)$, CMDRR$(3, 1)$, and SAMDRR$(n)$ for $n=4,5,7,$ and $8$ are fully resolvable. It is easy to check by hand that the strict MMDRR$(4)$ and the CMDRR$(5,1)$ are not fully resolvable. A computer search shows that the other examples are not fully resolvable.
A non-trivial resolution into short rounds is given when known.
In the next section we will give general results on resolvability.
\subsection{Tournaments with 4 players}
A SAMDRR$(2)$ does not exist.
\begin{Example}\label{C20} A strict MMDRR$(2)$ with repeat oppositions M$1$M$2$ and \\ F$1$F$2$.
\\[.05in]
\begin{tabular}{|cc|}
\cline{1-2}
22 & 21 \\
12 & 11 \\
\cline{1-2}
\end{tabular}
\end{Example}
\subsection{Tournaments with 6 players}
A CMDRR$(3, 1)$ was given in Example~\ref{C31}. A SAMDRR$(3)$ does not exist.
\subsection{Tournaments with 8 players}
A SAMDRR$(4)$ exists. A CMDRR$(4, 2)$ does not exist.
\begin{Example}\label{C40} A strict MMDRR$(4)$ with repeat oppositions M$1$M$2$, M$3$M$4$, \\ F$1$F$2$, and F$3$F$4$.
\\[.05in]
\begin{tabular}{|cccc|}
\cline{1-4}
24 & 41 & 22 & 33 \\
32 & 13 & 44 & 11 \\
43 & 21 & 14 & 42 \\
12 & 34 & 31 & 23 \\
\cline{1-4}
\end{tabular}
\end{Example}
\subsection{Tournaments with 10 players}
A SAMDRR$(5)$ exists.
\begin{Example}\label{C51} A CMDRR$(5, 1)$ with spouse pair M$1$F$1$ and repeat oppositions
M$2$M$3$, M$4$M$5$, F$2$F$3$, and F$4$F$5$.
\\[.05in]
\begin{tabular}{|ccccc|}
\cline{1-5}
& 55 & 22 & 43 & 34 \\
54 & 13 & 32 & 35 & 41 \\
42 & 23 & 51 & 15 & 24 \\
25 & 31 & 14 & 52 & 53 \\
33 & 44 & 45 & 21 & 12 \\
\cline{1-5}
\end{tabular}
\end{Example}
\begin{Conjecture} A CMDRR$(5, 3)$ does not exist.
\end{Conjecture}
\subsection{Tournaments with 12 players}
A SAMDRR$(6)$ does not exist.
\newpage
\begin{Example}\label{C60} A strict MMDRR$(6)$ with repeat oppositions M$1$M$2$, M$3$M$4$, \\ M$5$M$6$,
F$1$F$2$, F$3$F$4$, and F$5$F$6$.
\\[.05in]
\begin{tabular}{|cccccc|}
\cline{1-6}
55 & 63 & 24 & 42 & 26 & 31 \\
44 & 36 & 61 & 13 & 52 & 15 \\
16 & 41 & 45 & 53 & 64 & 22 \\
32 & 14 & 56 & 21 & 33 & 65 \\
62 & 25 & 34 & 66 & 11 & 43 \\
23 & 51 & 12 & 35 & 46 & 54 \\
\cline{1-6}
\end{tabular}
\\[.05in]
Resolution with short rounds $1$-$9$. Each round has $2$ games and each player has $3$ byes.
\\[.05in]
\begin{tabular}{|cccccc|}
\cline{1-6}
8 & 1 & 6 & 7 & 5 & 3 \\
1 & 8 & 7 & 6 & 3 & 5 \\
3 & 6 & 9 & 2 & 4 & 8 \\
6 & 7 & 4 & 1 & 9 & 2 \\
5 & 3 & 2 & 9 & 8 & 4 \\
7 & 5 & 1 & 4 & 2 & 9 \\
\cline{1-6}
\end{tabular}
\end{Example}
\begin{Conjecture} A CMDRR$(6, 2)$ does not exist.
\end{Conjecture}
\begin{Example}\label{C64} A CMDRR$(6, 4)$ with spouse pairs M$1$F$1$, M$2$F$2$, M$3$F$3$, and M$4$F$4$,
and repeat oppositions M$5$M$6$ and F$5$F$6$.
\\[.05in]
\begin{tabular}{|cccccc|}
\cline{1-6}
& 34 & 52 & 66 & 43 & 25 \\
54 & & 41 & 35 & 16 & 63 \\
46 & 61 & & 12 & 24 & 55 \\
23 & 56 & 15 & & 62 & 31 \\
65 & 13 & 64 & 21 & 36 & 42 \\
32 & 45 & 26 & 53 & 51 & 14 \\
\cline{1-6}
\end{tabular}
\end{Example}
\subsection{Tournaments with 14 players}
A SAMDRR$(7)$ exists.
\begin{Example}\label{C71} A CMDRR$(7, 1)$ with spouse pair M$1$F$1$ and repeat oppositions
M$2$M$3$, M$4$M$5$, M$6$M$7$, F$2$F$3$, F$4$F$5$, and F$6$F$7$.
\\[.05in]
\begin{tabular}{|ccccccc|}
\cline{1-7}
& 75 & 32 & 43 & 64 & 27 & 56 \\
53 & 31 & 65 & 37 & 74 & 42 & 16 \\
22 & 13 & 77 & 66 & 41 & 55 & 24 \\
35 & 26 & 14 & 51 & 57 & 73 & 62 \\
44 & 63 & 21 & 72 & 36 & 17 & 45 \\
76 & 47 & 52 & 15 & 23 & 34 & 71 \\
67 & 54 & 46 & 25 & 12 & 61 & 33 \\
\cline{1-7}
\end{tabular}
\\[.05in]
\newpage\noindent
Resolution with short rounds $1-12$. Each round has $2$ games, each player
except M$1$ and F$1$ has $5$ byes, and each of M$1$ and F$1$ have $6$ byes.
\\[.05in]
\begin{tabular}{|ccccccc|}
\cline{1-7}
& 8 & 9 & 2 & 5 & 6 & 3 \\
7 & 2 & 3 & 12 & 1 & 10 & 4 \\
2 & 9 & 10 & 8 & 11 & 4 & 12 \\
11 & 10 & 2 & 6 & 9 & 5 & 7 \\
6 & 1 & 7 & 11 & 4 & 3 & 9 \\
12 & 7 & 1 & 5 & 3 & 8 & 4 \\
4 & 11 & 5 & 1 & 8 & 12 & 10 \\
\cline{1-7}
\end{tabular}
\end{Example}
\begin{Example}\label{C73} A CMDRR$(7, 3)$ with spouse pairs M$1$F$1$, M$2$F$2$, and M$3$F$3$,
and repeat oppositions M$4$M$5$, M$6$M$7$, F$4$F$5$, and F$6$F$7$.
\\[.05in]
\begin{tabular}{|ccccccc|}
\cline{1-7}
& 55 & 62 & 26 & 43 & 37 & 74 \\
57 & & 41 & 73 & 36 & 14 & 65 \\
64 & 47 & & 52 & 71 & 25 & 16 \\
23 & 76 & 15 & 67 & 54 & 51 & 32 \\
46 & 34 & 77 & 45 & 12 & 63 & 21 \\
72 & 13 & 56 & 31 & 27 & 75 & 44 \\
35 & 61 & 24 & 17 & 66 & 42 & 53 \\
\cline{1-7}
\end{tabular}
\end{Example}
\begin{Example}\label{C75} A CMDRR$(7, 5)$ with spouse pairs M$1$F$1$, M$2$F$2$, M$3$F$3$,
M$4$F$4$, and M$5$F$5$, and repeat oppositions M$6$M$7$ and F$6$F$7$.
\\[.05in]
\begin{tabular}{|ccccccc|}
\cline{1-7}
& 54 & 42 & 35 & 66 & 27 & 73 \\
45 & & 61 & 53 & 37 & 74 & 16 \\
52 & 76 & & 67 & 14 & 41 & 25 \\
36 & 13 & 75 & & 21 & 57 & 62 \\
77 & 31 & 24 & 12 & & 63 & 46 \\
23 & 47 & 56 & 71 & 72 & 15 & 34 \\
64 & 65 & 17 & 26 & 43 & 32 & 51 \\
\cline{1-7}
\end{tabular}
\end{Example}
\subsection{Tournaments with 16 players}
A SAMDRR$(8)$ exists.
\begin{Example}\label{C80} A strict MMDRR$(8)$ with repeat oppositions
M$1$M$2$, \\ M$3$M$4$, M$5$M$6$, M$7$M$8$, F$1$F$2$, F$3$F$4$, F$5$F$6$, and F$7$F$8$.
\\[.05in]
\begin{tabular}{|cccccccc|}
\cline{1-8}
84 & 45 & 52 & 26 & 67 & 73 & 38 & 21 \\
18 & 37 & 65 & 42 & 76 & 14 & 51 & 83 \\
63 & 71 & 44 & 85 & 56 & 48 & 22 & 17 \\
55 & 24 & 87 & 33 & 12 & 61 & 78 & 36 \\
27 & 13 & 74 & 68 & 41 & 35 & 86 & 62 \\
46 & 58 & 31 & 77 & 23 & 82 & 15 & 54 \\
32 & 81 & 16 & 53 & 88 & 25 & 64 & 47 \\
72 & 66 & 28 & 11 & 34 & 57 & 43 & 75\\
\cline{1-8}
\end{tabular}
\\[.05in]
Resolution with short rounds $1-10$, each with $3$ games, and short round $11$ with $2$ games.
Each player has $3$ byes.
\\[.05in]
\begin{tabular}{|cccccccc|}
\cline{1-8}
7 & 9 & 8 & 11 & 3 & 10 & 4 & 2 \\
2 & 10 & 1 & 6 & 7 & 11 & 5 & 9 \\
6 & 1 & 5 & 8 & 2 & 3 & 10 & 4 \\
11 & 6 & 2 & 5 & 9 & 4 & 8 & 3 \\
5 & 8 & 4 & 10 & 11 & 2 & 1 & 7 \\
4 & 7 & 6 & 1 & 9 & 5 & 3 & 10 \\
1 & 3 & 10 & 4 & 6 & 7 & 9 & 8 \\
3 & 5 & 9 & 7 & 8 & 1 & 2 & 6\\
\cline{1-8}
\end{tabular}
\end{Example}
\begin{Example}\label{C82} A CMDRR$(8, 2)$ with spouse pairs M$1$F$1$ and M$2$F$2$, and repeat oppositions
M$3$M$8$, M$4$M$5$, M$6$M$7$, F$3$F$7$, F$4$F$8$, and F$5$F$6$.
\\[.05in]
\begin{tabular}{|cccccccc|}
\cline{1-8}
& 65 & 87 & 38 & 74 & 23 & 56 & 42 \\
75 & & 16 & 47 & 58 & 64 & 81 & 33 \\
46 & 83 & 28 & 51 & 67 & 85 & 72 & 14 \\
62 & 18 & 55 & 73 & 86 & 31 & 24 & 57 \\
34 & 76 & 61 & 82 & 43 & 17 & 48 & 25 \\
53 & 41 & 77 & 26 & 12 & 78 & 35 & 84 \\
88 & 37 & 44 & 15 & 21 & 52 & 63 & 66 \\
27 & 54 & 32 & 68 & 36 & 45 & 13 & 71\\
\cline{1-8}
\end{tabular}
\end{Example}
\begin{Example}\label{C84} A CMDRR$(8, 4)$ with spouse pairs M$1$F$1$, M$2$F$2$,
M$3$F$3$ and M$4$F$4$ and repeat oppositions
M$5$M$7$, M$6$M$8$, F$5$F$7$, and F$6$F$8$.
\\[.05in]
\begin{tabular}{|cccccccc|}
\cline{1-8}
& 58 & 42 & 83 & 66 & 34 & 25 & 77 \\
73 & & 55 & 61 & 17 & 48 & 36 & 84 \\
88 & 64 & & 16 & 72 & 27 & 51 & 45 \\
52 & 13 & 67 & & 38 & 71 & 85 & 26 \\
37 & 41 & 86 & 75 & 23 & 68 & 74 & 12 \\
24 & 87 & 78 & 32 & 81 & 15 & 43 & 56 \\
46 & 35 & 21 & 57 & 54 & 82 & 18 & 63 \\
65 & 76 & 14 & 28 & 47 & 53 & 62 & 31\\
\cline{1-8}
\end{tabular}
\end{Example}
\begin{Example}\label{C86} A CMDRR$(8, 6)$ with spouse pairs M$1$F$1$, M$2$F$2$,
M$3$F$3$, M$4$F$4$, M$5$F$5$, and M$6$F$6$, and repeat oppositions
M$7$M$8$ and F$7$F$8$.
\\[.05in]
\begin{tabular}{|cccccccc|}
\cline{1-8}
& 64 & 56 & 38 & 47 & 72 & 83 & 25 \\
43 & & 74 & 86 & 18 & 35 & 61 & 57 \\
68 & 75 & & 51 & 26 & 87 & 42 & 14 \\
76 & 37 & 21 & & 63 & 58 & 15 & 82 \\
34 & 81 & 62 & 77 & & 13 & 28 & 46 \\
27 & 53 & 45 & 12 & 84 & & 78 & 31 \\
85 & 16 & 88 & 23 & 32 & 41 & 54 & 67 \\
52 & 48 & 17 & 65 & 71 & 24 & 36 & 73\\
\cline{1-8}
\end{tabular}
\end{Example}
\subsection{Tournaments with more than 16 players}
\begin{Example}\label{C93} A CMDRR$(9, 3)$ with spouse pairs M$1$F$1$, M$2$F$2$, and\\
M$3$F$3$, and repeat oppositions M$4$M$5$, F$4$F$5$, M$6$M$7$, F$6$F$7$, M$8$M$9$, and F$8$F$9$.
\\[.05in]
\begin{tabular}{|ccccccccc|}
\cline{1-9}
& 47 & 29 & 66 & 53 & 82 & 38 & 75 & 94 \\
77 & & 68 & 35 & 46 & 91 & 54 & 89 & 13 \\
95 & 61 & & 52 & 24 & 78 & 49 & 17 & 86 \\
63 & 58 & 56 & 81 & 74 & 25 & 12 & 99 & 37 \\
88 & 34 & 15 & 27 & 69 & 43 & 96 & 42 & 71 \\
32 & 79 & 41 & 98 & 87 & 14 & 76 & 23 & 55 \\
59 & 93 & 84 & 45 & 18 & 67 & 21 & 36 & 62 \\
44 & 16 & 97 & 73 & 92 & 39 & 65 & 51 & 28 \\
26 & 85 & 72 & 19 & 31 & 57 & 83 & 64 & 48 \\
\cline{1-9}
\end{tabular}
\\[.05in]
Resolution with short rounds $1-13$, each with $3$ games.
\\[.05in]
\begin{tabular}{|ccccccccc|}
\cline{1-9}
& 3 & 4 & 9 & 2 & 12 & 1 & 11 & 13 \\
6 & & 3 & 12 & 5 & 2 & 7 & 10 & 4 \\
10 & 5 & & 11 & 12 & 8 & 2 & 1 & 6 \\
7 & 13 & 1 & 8 & 4 & 5 & 3 & 12 & 2 \\
9 & 11 & 2 & 7 & 8 & 1 & 4 & 13 & 3 \\
5 & 1 & 7 & 6 & 13 & 9 & 10 & 3 & 8 \\
3 & 9 & 5 & 4 & 11 & 10 & 6 & 8 & 1 \\
8 & 12 & 11 & 5 & 7 & 6 & 13 & 9 & 10 \\
2 & 7 & 9 & 13 & 10 & 4 & 11 & 6 & 12 \\
\cline{1-9}
\end{tabular}
\end{Example}
\begin{Example}\label{C102} A CMDRR$(10, 2)$ with spouse pairs M$1$F$1$ and M$2$F$2$,
and repeat oppositions M$3$M$4$, F$3$F$4$, M$5$M$6$, F$5$F$6$, M$7$M$8$, F$7$F$8$, M$9$M$0$, and F$9$F$0$.
\\[.05in]
\begin{tabular}{|cccccccccc|}
\cline{1-10}
& 40 & 74 & 92 & 66 & 85 & 29 & 37 & 53 & 08 \\
48 & & 56 & 39 & 00 & 71 & 84 & 95 & 17 & 63 \\
90 & 05 & 82 & 51 & 73 & 67 & 18 & 49 & 24 & 46 \\
69 & 86 & 04 & 75 & 57 & 30 & 93 & 21 & 38 & 12 \\
34 & 68 & 19 & 60 & 81 & 23 & 45 & 76 & 02 & 97 \\
03 & 77 & 20 & 88 & 99 & 15 & 36 & 52 & 41 & 54 \\
26 & 91 & 35 & 13 & 44 & 58 & 62 & 07 & 80 & 89 \\
55 & 33 & 98 & 27 & 16 & 42 & 01 & 64 & 70 & 79 \\
72 & 14 & 47 & 06 & 28 & 09 & 50 & 83 & 65 & 31 \\
87 & 59 & 61 & 43 & 32 & 94 & 78 & 10 & 96 & 25 \\
\cline{1-10}
\end{tabular}
\\[.05in]
\newpage\noindent
Resolution with short rounds $1-16$, each with $3$ games, and short round $17$ with $1$ game.
\\[.05in]
\begin{tabular}{|cccccccccc|}
\cline{1-10}
& 7 & 2 & 5 & 15 & 3 & 9 & 6 & 1 & 16 \\
14 & & 16 & 3 & 6 & 12 & 13 & 2 & 9 & 8 \\
4 & 11 & 12 & 15 & 7 & 14 & 6 & 13 & 3 & 1 \\
11 & 6 & 4 & 16 & 10 & 1 & 3 & 14 & 13 & 7 \\
15 & 4 & 1 & 9 & 8 & 16 & 10 & 5 & 2 & 11 \\
13 & 1 & 8 & 10 & 12 & 15 & 14 & 4 & 11 & 9 \\
12 & 10 & 7 & 2 & 16 & 5 & 1 & 15 & 17 & 14 \\
8 & 12 & 9 & 13 & 3 & 6 & 5 & 10 & 14 & 17 \\
10 & 5 & 3 & 8 & 2 & 7 & 11 & 9 & 12 & 4 \\
5 & 2 & 13 & 4 & 11 & 8 & 15 & 16 & 7 & 6 \\
\cline{1-10}
\end{tabular}
\end{Example}
\section{Recursive Construction}
In this section we present a recursive construction using holey SOLS and use it to show the existence of
CMDRR$(n,k)$ for all allowed values of $n$ and $k$, apart from $4$ exceptions and $31$ possible exceptions.
We show that a fully resolvable CMDRR$(2n, 0)$ exists for all $n \ge 5$
and a fully resolvable CMDRR$(3n, n)$ exists for all $n \ge 5$ and $n$ odd.
For completeness we include the definition of a holey SOLS (see~\cite{FIN}).
\begin{Definition}\label{HSOLS}
A holey SOLS (or frame SOLS) is a self-orthogonal latin square of order $n$ with $n_i$ missing sub-SOLS (holes)
of order $h_i$ ($1 \le i \le k$), which are disjoint and spanning (that is $\sum_{1 \le i \le k}n_ih_i = n$).
It is denoted by HSOLS$(h_1^{n_1} \ldots h_k^{n_k})$ where $h_1^{n_1} \ldots h_k^{n_k}$ is the type of the HSOLS.
\end{Definition}
Suppose an HSOLS exists and CMDRR exist for each hole size. Then we can fill in the holes with the CMDRRs to get a
new CMDRR. The details follow. For convenience we will assume that a CMDRR$(1, 1)$ exists with spouse pair M$1$F$1$ and no games.
\begin{Theorem}\label{cons}
Suppose an HSOLS$(n)$ of type $h_1^{n_1} \ldots h_k^{n_k}$ exists and for each $h_i$ there exist
CMDRR$(h_i, m_{i1}), \ldots, $ CMDRR$(h_i, m_{i{n_i}})$. Then there exists a CMDRR$(n, s)$ where
$s = \sum_{1 \le i \le k}\sum_{1 \le j \le n_i}m_{ij}$.
\end{Theorem}
\begin{Proof}
By possibly relabeling players we can assume that the HSOLS is block diagonal. By construction, spouse pairs will always have the form M$i$F$i$.
Use the standard SOLS representation for a SAMDRR to identify games for all entries of the HSOLS that are not in a hole.
Thus every entry $(i,j)$ not in a hole will
contribute the game M$i$F$(i,j)$ v M$j$F$(j,i)$. By definition of an HSOLS these games satisfy the conditions that every pair of opposite sex players
have partnered and opposed at most once, every pair of same sex players have opposed at most once, and no spouses have played in a game together.
Each missing sub-SOLS (hole) corresponds to a set $S$ of consecutive integers which are the indices and also the missing entries. Use a translation of an appropriately sized
CMDRR and identify games using the representation introduced for CMDRRs. Assume the translation is by $t$.
Then a non-empty entry $(i,j)$ of the CMDRR will contribute the game M$i'$F$j'$ v M$x'$F$y'$ where
the $(i,j)$ entry of the CMDRR is $(x,y)$ and each primed symbol is the corresponding unprimed symbol plus $t$. By definition of an HSOLS, the players in a hole will
not be involved in any other common games.
Taking all the games identified by the two different processes described above will produce the required CMDRR.
As every pair of players is either in a hole or not in any hole, the conditions for a CMDRR are met. \qed
\end{Proof}
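The construction can be summarized in a few lines of Python; the sketch below is purely illustrative, and the data-structure conventions (dicts of dicts with 1-based indices, holes given as (offset, size, filler) triples, games as pairs of (male, female) pairs) are our own.
\begin{verbatim}
# Sketch of the construction in the proof: games outside the holes come
# from the standard SOLS representation (cells (i, j) and (j, i) give
# the game MiF(i,j) v MjF(j,i)), and each hole is filled with a
# translated copy of a smaller CMDRR in the matrix representation.
def fill_hsols(L, n, holes):
    games = []
    for i in range(1, n + 1):                 # games outside the holes
        for j in range(i + 1, n + 1):
            if L[i][j] is not None:
                games.append(((i, L[i][j]), (j, L[j][i])))
    for t, size, hole in holes:               # fill each hole, shifted by t
        for i in range(1, size + 1):
            for j in range(1, size + 1):
                if hole[i][j] is not None:
                    x, y = hole[i][j]
                    if (i, j) < (x, y):       # each game is stored twice
                        games.append(((i + t, j + t), (x + t, y + t)))
    return games
\end{verbatim}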
\begin{Example}\label{C11} A CMDRR$(11, 7)$ can be constructed from the \\ HSOLS$(1^63^12^1)$ given below (see~\cite{FIN, Z}). Use the
CMDRR$(3, 1)$ of Example~\ref{C31} to fill the hole of size three and the strict MMDRR$(2)$ of Example~\ref{C20} to fill the hole of size two.
The spouse pairs are M$1$F$1, \ldots, $M$7$F$7$. The repeat oppositions are M$8$M$9$, F$8$F$9$, M$10$M$11$, and F$10$F$11$.
\\[.05in]
\begin{tabular}{|ccccccccccc|}
\cline{1-11}
. & 7 & 5 & 2 & 10 & 9 & 4 & 11 & 3 & 6 & 8 \\
6 & . & 8 & 7 & 3 & 10 & 1 & 5 & 11 & 4 & 9 \\
9 & 4 & . & 10 & 8 & 1 & 11 & 2 & 6 & 5 & 7 \\
5 & 11 & 9 & . & 7 & 2 & 6 & 10 & 1 & 8 & 3 \\
7 & 6 & 11 & 3 & . & 8 & 2 & 4 & 10 & 9 & 1 \\
11 & 8 & 4 & 9 & 1 & . & 10 & 3 & 5 & 7 & 2 \\
3 & 10 & 6 & 1 & 11 & 5 & . & . & . & 2 & 4 \\
4 & 1 & 10 & 6 & 2 & 11 & . & . & . & 3 & 5 \\
10 & 5 & 2 & 11 & 4 & 3 & . & . & . & 1 & 6 \\
8 & 9 & 7 & 5 & 6 & 4 & 3 & 1 & 2 & . & . \\
2 & 3 & 1 & 8 & 9 & 7 & 5 & 6 & 4 & . & . \\
\cline{1-11}
\end{tabular}
\end{Example}
In order to create CMDRR using Theorem~\ref{cons} we need a supply of HSOLS. The following theorems (see~\cite{FIN, HZ, XZZ}) give just the
supply we need.
\begin{Theorem}\label{HSOLSanb1}
For $n \ge 4$ and $a \ge 2$, an HSOLS$(a^nb^1)$ exists if $0 \le b \le a(n-1)/2$ with
possible exceptions for $n \in \{6, 14, 18, 22\}$ and $b = a(n-1)/2$.
\end{Theorem}
\begin{Definition}
An incomplete SOLS is a self-orthogonal latin square of order $n$ missing a sub-SOLS of
order $k$, denoted by ISOLS$(n,k)$. An ISOLS$(n,k)$ is equivalent to an HSOLS$(1^{n-k}k^1)$. (see~\cite{FIN})
\end{Definition}
\begin{Theorem}\label{ISOLS}
There exists an ISOLS$(n,k)$ for all values of $n$ and $k$ satisfying $n \ge 3k+1$,
except for $(n,k) = (6,1), (8,2)$ and possibly excepting $n = 3k+2$ and $k \in \{6,8,10\}$.
\end{Theorem}
We can now show that CMDRR$(n,k)$ exist for all but a finite number of possible exceptions.
\begin{Theorem}\label{ge32}
There exists a CMDRR$(n,k)$ for each $n \ge 32, k \le n, $ and $n-k$ even.
\end{Theorem}
\begin{Proof}
By Theorem~\ref{HSOLSanb1} an HSOLS$(a^nb^1)$ exists for $a=8, n \ge 4, $ and $0 \le b \le 7$.
Each hole of size $8$ can be filled with any one of the CMDRR$(8,0)$ (Example~\ref{C80}), CMDRR$(8,2)$ (Example~\ref{C82}),
CMDRR$(8,4)$ \\ (Example~\ref{C84}), CMDRR$(8,6)$ (Example~\ref{C86}), or a SAMDRR$(8)$. For $2 \le b \le 7$,
the hole of size $b$ can be filled with one of the CMDRR$(b,i)$ given in Examples~\ref{C20}--~\ref{C75} or
the SAMDRR$(b)$ for $b \ne 2,3,6$. By direct observation and using Theorem~\ref{HSOLSanb1} we have the stated
CMDRRs, except for CMDRR$(n,n)$ when $n \ge 32$ and $n \equiv 2, 3,$ or $6 \pmod 8$. But we know that SAMDRR$(n)$ exist for these sizes. \qed
\end{Proof}
Finally we can use Theorems~\ref{HSOLSanb1} and~\ref{ISOLS} to handle special cases for $n < 32$ and give our main result.
\begin{Theorem}
There exists a CMDRR$(n,k)$ for each $n \ge 2, k \le n, $ and \\ $n-k$ even, except for $(n,k) = (2,2),$ $(3,3),$ $(4,2),$ $(6,6)$
and possibly excepting the following $31$ values: $(n,k) = (5,3),$ $(6,2),$
$(12,2),$ $(12,6),$ $(12,8),$ $(13,3),$ $(13,7),$ $\!(14,2),$ $(14,6),$ $(15,3),$ $(15,7),$ $(15,9),$ $(16,2),$
$\!(16, 10),$ $\!(17,3),$ $\!(17,7),$ $\!(17,11),$ $\!(18,2),$ $(18,10),$ $(19,3),$ $(19,11),$ $(20,2),$
$(20,14),$ $(21,3),$ $(21, 11),$ $(22,2),$ $(22,14),$ $(23,15),$ $(24,2),$ $(24,14),$
$(25,15).$
\end{Theorem}
\begin{Proof}
By Theorem~\ref{ge32}, a CMDRR$(n,k)$ exists for $n \ge 32$. The table below shows a construction that can be used for each value
of $n < 32$ and compatible $k$, except for the values listed above.
\\[.05in]
\begin{tabular}{lll}
n & k & construction \\ \cline{1-3}
& & \\
9 & 1 & HSOLS$(2^41^1)$ \\
9 & 3 & Example \ref{C93} \\
9 & 5 & HSOLS$(2^21^5)$ Lemma 2.2 of~\cite{XZZ} \\
9 & 7 & ISOLS$(9,2)$ \\
10 & 0 & HSOLS$(2^5)$ \\
10 & 2 & Example \ref{C102} \\
10 & 4 & HSOLS$(2^31^4)$ Lemma 2.2 of~\cite{XZZ} \\
10 & 6 & HSOLS$(2^21^6)$ Lemma 2.2 of~\cite{XZZ} \\
10 & 8 & ISOLS$(10,2)$ \\
11 & 1 & HSOLS$(2^51^1)$ \\
11 & 3 & HSOLS$(2^41^3)$ Lemma 2.2 of~\cite{XZZ} \\
11 & 5 & HSOLS$(2^31^5)$ Lemma 2.2 of~\cite{XZZ} \\
11 & 7 & HSOLS$(1^63^12^1)$ Example \ref{C11} \\
11 & 9 & ISOLS$(11,2)$ \\
12 & 0 & HSOLS$(2^6)$ \\
12 & 4 & HSOLS$(3^4)$ \\
12 & 10 & ISOLS$(12,2)$ \\
13 & 1 & HSOLS$(2^53^1)$ \\
13 & 5 & HSOLS$(3^41^1)$ \\
13 & 9 & ISOLS$(13,4)$ \\
13 & 11 & ISOLS$(13,2)$ \\
14 & 0 & HSOLS$(2^7)$ \\
14 & 4 & HSOLS$(3^42^1)$ \\
14 & 8 & HSOLS$(1^84^12^1)$~\cite{ABZZ} \\
14 & 10,14 & ISOLS$(14,4)$ \\
14 & 12 & ISOLS$(14,2)$ \\
15 & 1 & HSOLS$(2^63^1)$ \\
15 & 5 & HSOLS$(3^5)$ \\
15 & 11 & ISOLS$(15,4)$ \\
15 & 13 & ISOLS$(15,2)$ \\
16 & 0,4,8,12,16 & HSOLS$(4^4)$ \\
16 & 6 & HSOLS$(3^51^1)$ \\
16 & 14 & ISOLS$(16,2)$ \\
17 & 1,5,9,13,17 & HSOLS$(4^41^1)$ \\
17 & 15 & ISOLS$(17,2)$ \\
18 & 0,4,8,12,16 & HSOLS$(4^42^1)$ \\
18 & 6 & HSOLS$(3^6)$ \\
18 & 14,18 & ISOLS$(18,4)$ \\
19 & 1,5,9,13,17 & HSOLS$(4^43^1)$ \\
19 & 7 & HSOLS$(3^61^1)$ \\
19 & 15,19 & ISOLS$(19,4)$ \\
20 & 0,4,8,12,16,20 & HSOLS$(4^5)$ \\
20 & 6,10 & HSOLS$(3^55^1)$ \\
20 & 18 & ISOLS$(20,2)$ \\
\end{tabular}
\newpage
\begin{tabular}{lll}
21 & 1,5,9,13,17,21 & HSOLS$(4^51^1)$ \\
21 & 7 & HSOLS$(3^7)$ \\
21 & 15,19 & ISOLS$(21,6)$ \\
22 & 0,4,8,12,16,20 & HSOLS$(4^52^1)$ \\
22 & 6,10 & HSOLS$(3^64^1)$ \\
22 & 16,18,20,22 & ISOLS$(22,7)$ \\
23 & 1,5,9,13,17,21 & HSOLS$(4^53^1)$ \\
23 & 3 & HSOLS$(2^87^1)$ \\
23 & 7,11 & HSOLS$(3^65^1)$ \\
23 & 17,19,21,23 & ISOLS$(23,7)$ \\
24 & 0,4,8,12,16 & HSOLS$(6^4)$ \\
24 & 6,10 & HSOLS$(3^66^1)$ \\
24 & 18,20,22,24 & ISOLS$(24,7)$ \\
25 & 1,3,5,7,9,11,13 & HSOLS$(6^41^1)$ \\
25 & 17,19,21,23,25 & ISOLS$(25,8)$ \\
26 & 0,4,8,12,16,20,24 & HSOLS$(4^56^1)$ \\
26 & 2 & HSOLS$(2^98^1)$ \\
26 & 6,10,14,18,22,26 & HSOLS$(5^51^1)$ \\
27 & 1,3,5,7,9,11,13,15,17,19,21,23,25,27 & HSOLS$(4^57^1)$ \\
28 & 0,2 & HSOLS$(2^{10}8^1)$ \\
28 & 4,6,8,10,12,14,16,18,20,22,24,26,28 & HSOLS$(7^4)$ \\
29 & 1,3 & HSOLS$(2^{11}7^1)$ \\
29 & 5,7,9,11,13,15,17,19,21,23,25,27,29 & HSOLS$(7^41^1)$ \\
30 & 0,2 & HSOLS$(2^{11}8^1)$ \\
30 & 4,6,8,10,12,14,16,18,20,22,24,26,28 & HSOLS$(7^42^1)$ \\
31 & 1,3 & HSOLS$(2^{12}7^1)$ \\
31 & 5,7,9,11,13,15,17,19,21,23,25,27,29 & HSOLS$(7^43^1)$ \\
\end{tabular}
\\[.05in]
\qed
\end{Proof}
Resolvability of a CMDRR is more difficult to ensure.
By filling holes in an HSOLSSOM we can construct resolvable CMDRR. For completeness we include the definition of a holey SOLSSOM (see~\cite{FIN}).
\begin{Definition}\label{HSOLSSOM}
A holey SOLSSOM (or frame SOLSSOM) is a holey self-orthogonal latin square S of order $n$ and type $h_1^{n_1} \ldots h_k^{n_k}$,
together with a symmetric partitioned latin square M of order $n$ and type $h_1^{n_1} \ldots h_k^{n_k}$, satisfying the property
that when superimposed, the ordered pairs are exactly those pairs of symbols that are from different holes. A holey SOLSSOM with this
structure is denoted by HSOLSSOM$(h_1^{n_1} \ldots h_k^{n_k})$, where $h_1^{n_1} \ldots h_k^{n_k}$ is the type of the HSOLSSOM.
\end{Definition}
Using the next Theorem~\cite{FIN, GA} we can construct resolvable CMDRR.
\begin{Theorem}
An HSOLSSOM$(2^n)$ exists for all $n \ge 5$ and an \\ HSOLSSOM$(3^n)$ exists for all odd values of $n$ with $n \ge 5$.
\end{Theorem}
\begin{Theorem}
A fully resolvable strict MMDRR$(2n)$ exists for all $n \ge 5$.
\end{Theorem}
\begin{Proof}
Begin with an HSOLSSOM$(2^n)$ and convert this to a resolvable mixed doubles round robin with $2n$ rounds of play.
Pairs of rounds will be missing all four of the players from one of the $n$ holes. Simply fill these holes with
a strict MMDRR$(2)$ constructed on the corresponding four players, thus completing the $2n$ rounds. \qed
\end{Proof}
\begin{Example}\label{x} Lemma 2.1.2 of Bennett and Zhu~\cite{BZ} gives both an example of a holey Steiner pentagon system (HSPS) of type $2^6$
and also its equivalent HSOLSSOM$(2^6)$. We rearrange the latter to make the holes block diagonal.
\\[.05in]
\begin{tabular}{|cccccccccccc|}
\cline{1-12}
. & . & 6 & 10 & 12 & 11 & 5 & 3 & 4 & 8 & 9 & 7 \\
. & . & 10 & 11 & 8 & 9 & 4 & 6 & 5 & 12 & 7 & 3 \\
12 & 8 & . & . & 9 & 7 & 10 & 1 & 11 & 6 & 5 & 2 \\
6 & 5 & . & . & 7 & 12 & 2 & 9 & 1 & 11 & 8 & 10 \\
10 & 4 & 12 & 9 & . & . & 1 & 11 & 2 & 7 & 3 & 8 \\
4 & 11 & 1 & 8 & . & . & 12 & 2 & 7 & 3 & 10 & 9 \\
9 & 10 & 11 & 12 & 4 & 3 & . & . & 6 & 5 & 2 & 1 \\
11 & 3 & 9 & 6 & 2 & 10 & . & . & 12 & 1 & 4 & 5 \\
7 & 12 & 8 & 5 & 3 & 2 & 11 & 4 & . & . & 1 & 6 \\
5 & 7 & 2 & 1 & 11 & 8 & 3 & 12 & . & . & 6 & 4 \\
8 & 6 & 7 & 2 & 10 & 1 & 9 & 5 & 3 & 4 & . & . \\
3 & 9 & 5 & 7 & 1 & 4 & 6 & 10 & 8 & 2 & . & . \\
\cline{1-12}
\end{tabular}
\\[.05in]
\begin{tabular}{|cccccccccccc|}
\cline{1-12}
. & . & 7 & 11 & 8 & 10 & 4 & 9 & 5 & 12 & 3 & 6 \\
. & . & 6 & 8 & 11 & 7 & 12 & 10 & 3 & 4 & 9 & 5 \\
7 & 6 & . & . & 2 & 12 & 5 & 11 & 1 & 8 & 10 & 9 \\
11 & 8 & . & . & 1 & 9 & 10 & 12 & 7 & 6 & 5 & 2 \\
8 & 11 & 2 & 1 & . & . & 9 & 4 & 12 & 3 & 7 & 10 \\
10 & 7 & 12 & 9 & . & . & 1 & 3 & 11 & 2 & 4 & 8 \\
4 & 12 & 5 & 10 & 9 & 1 & . & . & 2 & 11 & 6 & 3 \\
9 & 10 & 11 & 12 & 4 & 3 & . & . & 6 & 5 & 2 & 1 \\
5 & 3 & 1 & 7 & 12 & 11 & 2 & 6 & . & . & 8 & 4 \\
12 & 4 & 8 & 6 & 3 & 2 & 11 & 5 & . & . & 1 & 7 \\
3 & 9 & 10 & 5 & 7 & 4 & 6 & 2 & 8 & 1 & . & . \\
6 & 5 & 9 & 2 & 10 & 8 & 3 & 1 & 4 & 7 & . & . \\
\cline{1-12}
\end{tabular}
\\[.05in]
\newpage\noindent
The HSOLSSOM is converted to a mixed doubles tournament and filled to produce a fully resolvable strict MMDRR$(12)$ with $12$ rounds.
\\[.05in]
{\footnotesize
\begin{tabular}{cccccccc}
R1 & M01 F01 v M02 F02 & M03 F11 v M09 F08 & M04 F07 v M05 F09 &\\& M06 F12 v M07 F03 & M08 F05 v M12 F10 & M10 F06 v M11 F04 \\
R2 & M01 F02 v M02 F01 & M03 F09 v M05 F12 & M04 F10 v M12 F07 &\\& M06 F03 v M10 F08 & M07 F06 v M09 F11 & M08 F04 v M11 F05 \\
R3 & M01 F09 v M11 F08 & M02 F05 v M09 F12 & M03 F03 v M04 F04 &\\& M05 F07 v M10 F11 & M06 F02 v M08 F10 & M07 F01 v M12 F06 \\
R4 & M01 F05 v M07 F09 & M02 F12 v M10 F07 & M03 F04 v M04 F03 &\\& M05 F11 v M08 F02 & M06 F10 v M11 F01 & M09 F06 v M12 F08 \\
R5 & M01 F04 v M09 F07 & M02 F03 v M12 F09 & M03 F10 v M07 F11 &\\& M04 F08 v M11 F02 & M05 F05 v M06 F06 & M08 F01 v M10 F12 \\
R6 & M01 F07 v M12 F03 & M02 F10 v M03 F08 & M04 F11 v M10 F01 &\\& M05 F06 v M06 F05 & M07 F02 v M11 F09 & M08 F12 v M09 F04 \\
R7 & M01 F06 v M03 F12 & M02 F09 v M06 F11 & M04 F01 v M09 F05 &\\& M05 F03 v M11 F10 & M07 F07 v M08 F08 & M10 F04 v M12 F02 \\
R8 & M01 F12 v M05 F10 & M02 F11 v M04 F05 & M03 F06 v M10 F02 &\\& M06 F09 v M12 F04 & M07 F08 v M08 F07 & M09 F01 v M11 F03 \\
R9 & M01 F03 v M08 F11 & M02 F07 v M11 F06 & M03 F02 v M12 F05 &\\& M04 F12 v M06 F08 & M05 F01 v M07 F04 & M09 F09 v M10 F10 \\
R10 & M01 F11 v M06 F04 & M02 F06 v M08 F03 & M03 F05 v M11 F07 &\\& M04 F02 v M07 F12 & M05 F08 v M12 F01 & M09 F10 v M10 F09 \\
R11 & M01 F10 v M04 F06 & M02 F08 v M05 F04 & M03 F01 v M08 F09 &\\& M06 F07 v M09 F02 & M07 F05 v M10 F03 & M11 F11 v M12 F12 \\
R12 & M01 F08 v M10 F05 & M02 F04 v M07 F10 & M03 F07 v M06 F01 &\\& M04 F09 v M08 F06 & M05 F02 v M09 F03 & M11 F12 v M12 F11 \\
\end{tabular}}
\end{Example}
\begin{Theorem}
A fully resolvable CMDRR$(3n, n)$ exists for all $n \ge 5$ and $n$ odd.
\end{Theorem}
\begin{Proof}
Begin with an HSOLSSOM$(3^n)$ and convert to a resolved mixed doubles round robin tournament. Fill each hole with a
CMDRR$(3,1)$ on the corresponding six players, noting that each CMDRR$(3,1)$ contributes one spouse pair to
the final schedule. Three of the four games from each CMDRR$(3,1)$ are added to the three rounds that
lack the six players from the hole. Collect together the fourth game from each CMDRR$(3,1)$ into one additional
round. This gives a tournament with $3n+1$ rounds of play. The first $3n$ full rounds will all have
$(3n-1)/2$ games and two byes, and the additional short round will have $n$ games and $2n$ byes. Over the course
of the tournament, each spouse pair player will receive exactly $2$ byes while the non-spouse pair players will
receive exactly $1$ bye.\qed
\end{Proof}
\newpage
\begin{Example}\label{y} Lemma 2.2 of Abel et al.~\cite{ABZ} gives an example of a HSPS of type $3^5$
which is equivalent to the HSOLSSOM$(3^5)$ below.
\\[.05in]
{\footnotesize
\begin{tabular}{|ccccccccccccccc|}
\cline{1-15}
. & . & . & 12 & 9 & 13 & 14 & 6 & 11 & 7 & 5 & 15 & 8 & 10 & 4 \\
. & . & . & 14 & 10 & 7 & 12 & 15 & 4 & 13 & 8 & 6 & 5 & 9 & 11 \\
. & . & . & 8 & 15 & 11 & 5 & 10 & 13 & 4 & 14 & 9 & 12 & 6 & 7 \\
7 & 11 & 13 & . & . & . & 15 & 12 & 2 & 3 & 9 & 14 & 10 & 8 & 1 \\
14 & 8 & 12 & . & . & . & 3 & 13 & 10 & 15 & 1 & 7 & 2 & 11 & 9 \\
10 & 15 & 9 & . & . & . & 11 & 1 & 14 & 8 & 13 & 2 & 7 & 3 & 12 \\
4 & 13 & 11 & 10 & 14 & 2 & . & . & . & 1 & 15 & 5 & 6 & 12 & 3 \\
12 & 5 & 14 & 3 & 11 & 15 & . & . & . & 6 & 2 & 13 & 1 & 4 & 10 \\
15 & 10 & 6 & 13 & 1 & 12 & . & . & . & 14 & 4 & 3 & 11 & 2 & 5 \\
6 & 9 & 15 & 7 & 2 & 14 & 13 & 3 & 5 & . & . & . & 4 & 1 & 8 \\
13 & 4 & 7 & 15 & 8 & 3 & 6 & 14 & 1 & . & . & . & 9 & 5 & 2 \\
8 & 14 & 5 & 1 & 13 & 9 & 2 & 4 & 15 & . & . & . & 3 & 7 & 6 \\
11 & 7 & 4 & 9 & 12 & 1 & 10 & 5 & 3 & 2 & 6 & 8 & . & . & . \\
5 & 12 & 8 & 2 & 7 & 10 & 1 & 11 & 6 & 9 & 3 & 4 & . & . & . \\
9 & 6 & 10 & 11 & 3 & 8 & 4 & 2 & 12 & 5 & 7 & 1 & . & . & . \\
\cline{1-15}
\end{tabular}
\\[.05in]
\begin{tabular}{|ccccccccccccccc|}
\cline{1-15}
. & . & . & 14 & 10 & 7 & 12 & 15 & 4 & 13 & 8 & 6 & 5 & 9 & 11 \\
. & . & . & 8 & 15 & 11 & 5 & 10 & 13 & 4 & 14 & 9 & 12 & 6 & 7 \\
. & . & . & 12 & 9 & 13 & 14 & 6 & 11 & 7 & 5 & 15 & 8 & 10 & 4 \\
14 & 8 & 12 & . & . & . & 3 & 13 & 10 & 15 & 1 & 7 & 2 & 11 & 9 \\
10 & 15 & 9 & . & . & . & 11 & 1 & 14 & 8 & 13 & 2 & 7 & 3 & 12 \\
7 & 11 & 13 & . & . & . & 15 & 12 & 2 & 3 & 9 & 14 & 10 & 8 & 1 \\
12 & 5 & 14 & 3 & 11 & 15 & . & . & . & 6 & 2 & 13 & 1 & 4 & 10 \\
15 & 10 & 6 & 13 & 1 & 12 & . & . & . & 14 & 4 & 3 & 11 & 2 & 5 \\
4 & 13 & 11 & 10 & 14 & 2 & . & . & . & 1 & 15 & 5 & 6 & 12 & 3 \\
13 & 4 & 7 & 15 & 8 & 3 & 6 & 14 & 1 & . & . & . & 9 & 5 & 2 \\
8 & 14 & 5 & 1 & 13 & 9 & 2 & 4 & 15 & . & . & . & 3 & 7 & 6 \\
6 & 9 & 15 & 7 & 2 & 14 & 13 & 3 & 5 & . & . & . & 4 & 1 & 8 \\
5 & 12 & 8 & 2 & 7 & 10 & 1 & 11 & 6 & 9 & 3 & 4 & . & . & . \\
9 & 6 & 10 & 11 & 3 & 8 & 4 & 2 & 12 & 5 & 7 & 1 & . & . & . \\
11 & 7 & 4 & 9 & 12 & 1 & 10 & 5 & 3 & 2 & 6 & 8 & . & . & . \\
\cline{1-15}
\end{tabular}}
\\[.05in]
\newpage\noindent
The HSOLSSOM is converted to a mixed doubles tournament and filled to produce a CMDRR$(15,5)$ with $15$ full rounds and $1$ short round.
The spouse pairs are M$1$F$1$, M$4$F$4$, M$7$F$7$, M$10$F$10$, and M$13$F$13$.
\\[.05in]
{\footnotesize
\begin{tabular}{ccccccccc}
R1 & M01 F02 v M02 F03 & M04 F09 v M11 F15 & M05 F13 v M08 F11 &\\& M06 F12 v M15 F08 & M07 F06 v M13 F10 & M09 F14 v M10 F05 &\\& M12 F07 v M14 F04 \\
R2 & M01 F03 v M03 F02 & M04 F10 v M13 F09 & M05 F07 v M12 F13 &\\& M06 F14 v M09 F12 & M07 F15 v M11 F06 & M08 F04 v M14 F11 &\\& M10 F08 v M15 F05 \\
R3 & M02 F01 v M03 F03 & M04 F15 v M07 F10 & M05 F11 v M14 F07 &\\& M06 F08 v M10 F14 & M08 F13 v M12 F04 & M09 F05 v M15 F12 &\\& M11 F09 v M13 F06 \\
R4 & M01 F11 v M09 F15 & M02 F13 v M10 F09 & M03 F07 v M15 F10 &\\& M04 F05 v M05 F06 & M07 F12 v M14 F01 & M08 F02 v M11 F14 &\\& M12 F03 v M13 F08 \\
R5 & M01 F08 v M13 F11 & M02 F12 v M07 F13 & M03 F14 v M11 F07 &\\& M04 F06 v M06 F05 & M08 F10 v M15 F02 & M09 F03 v M12 F15 &\\& M10 F01 v M14 F09 \\
R6 & M01 F15 v M12 F08 & M02 F09 v M14 F12 & M03 F10 v M08 F14 &\\& M05 F04 v M06 F06 & M07 F01 v M10 F13 & M09 F11 v M13 F03 &\\& M11 F02 v M15 F07 \\
R7 & M01 F13 v M06 F10 & M02 F11 v M15 F06 & M03 F04 v M10 F15 &\\& M04 F14 v M12 F01 & M05 F02 v M13 F12 & M07 F08 v M08 F09 &\\& M11 F05 v M14 F03 \\
R8 & M01 F05 v M11 F13 & M02 F14 v M04 F11 & M03 F12 v M13 F04 &\\& M05 F15 v M10 F02 & M06 F03 v M14 F10 & M07 F09 v M09 F08 &\\& M12 F06 v M15 F01 \\
R9 & M01 F10 v M14 F05 & M02 F06 v M12 F14 & M03 F15 v M05 F12 &\\& M04 F01 v M15 F11 & M06 F13 v M11 F03 & M08 F07 v M09 F09 &\\& M10 F04 v M13 F02 \\
R10 & M01 F09 v M05 F14 & M02 F15 v M08 F05 & M03 F06 v M14 F08 &\\& M04 F02 v M09 F13 & M06 F07 v M13 F01 & M07 F03 v M15 F04 &\\& M10 F11 v M11 F12 \\
R11 & M01 F04 v M15 F09 & M02 F07 v M06 F15 & M03 F13 v M09 F06 &\\& M04 F08 v M14 F02 & M05 F03 v M07 F14 & M08 F01 v M13 F05 &\\& M10 F12 v M12 F11 \\
R12 & M01 F14 v M07 F04 & M02 F05 v M13 F07 & M03 F08 v M04 F13 &\\& M05 F09 v M15 F03 & M06 F01 v M08 F15 & M09 F02 v M14 F06 &\\& M11 F10 v M12 F12 \\
R13 & M01 F07 v M10 F06 & M02 F04 v M09 F10 & M03 F11 v M06 F09 &\\& M04 F12 v M08 F03 & M05 F01 v M11 F08 & M07 F05 v M12 F02 &\\& M13 F14 v M14 F15 \\
R14 & M01 F12 v M04 F07 & M02 F08 v M11 F04 & M03 F05 v M07 F11 &\\& M05 F10 v M09 F01 & M06 F02 v M12 F09 & M08 F06 v M10 F03 &\\& M13 F15 v M15 F14 \\
R15 & M01 F06 v M08 F12 & M02 F10 v M05 F08 & M03 F09 v M12 F05 &\\& M04 F03 v M10 F07 & M06 F11 v M07 F02 & M09 F04 v M11 F01 &\\& M14 F13 v M15 F15 \\
R16 & M02 F02 v M03 F01 & M05 F05 v M06 F04 & M08 F08 v M09 F07 &\\& M11 F11 v M12 F10 & M14 F14 v M15 F13 \\
\end{tabular}}
\end{Example}
\newpage
\begin{Example}\label{C166} Abel et al.~\cite{ABZZ} gives an example of an HSOLS$(3^51^1)$ \\ which is shown in
block diagonal form below.
\\[.05in]
{\scriptsize
\begin{tabular}{|cccccccccccccccc|}
\cline{1-16}
.$\!$ & .$\!$ & .$\!$ & 10$\!$ & 14$\!$ & 11$\!$ & 5$\!$ & 12$\!$ & 13$\!$ & 15$\!$ & 7$\!$ & 8$\!$ & 6$\!$ & 16$\!$ & 4$\!$ & 9 \\
.$\!$ & .$\!$ & .$\!$ & 12$\!$ & 13$\!$ & 14$\!$ & 10$\!$ & 16$\!$ & 4$\!$ & 9$\!$ & 15$\!$ & 5$\!$ & 11$\!$ & 6$\!$ & 8$\!$ & 7 \\
.$\!$ & .$\!$ & .$\!$ & 13$\!$ & 15$\!$ & 10$\!$ & 4$\!$ & 11$\!$ & 12$\!$ & 14$\!$ & 6 $\!$ & 9 $\!$ & 7 $\!$ & 5 $\!$ & 16$\!$ & 8 \\
9 $\!$ & 16$\!$ & 11$\!$ & . $\!$ & . $\!$ & . $\!$ & 14$\!$ & 3 $\!$ & 2 $\!$ & 13$\!$ & 8 $\!$ & 15$\!$ & 10$\!$ & 1 $\!$ & 7 $\!$ & 12 \\
16 $\!$ & 8 $\!$ & 7 $\!$ & . $\!$ & . $\!$ & . $\!$ & 15$\!$ & 14$\!$ & 1 $\!$ & 3 $\!$ & 9 $\!$ & 13$\!$ & 12$\!$ & 2 $\!$ & 11$\!$ & 10 \\
7 $\!$ & 12$\!$ & 15$\!$ & . $\!$ & . $\!$ & . $\!$ & 1 $\!$ & 13$\!$ & 3 $\!$ & 8 $\!$ & 14$\!$ & 16$\!$ & 9 $\!$ & 10$\!$ & 2 $\!$ & 11 \\
14 $\!$ & 6 $\!$ & 10$\!$ & 2 $\!$ &11 $\!$ & 13$\!$ & . $\!$ & . $\!$ & . $\!$ & 4 $\!$ & 5 $\!$ & 3 $\!$ & 16$\!$ & 12$\!$ & 1 $\!$ & 15 \\
5 $\!$ & 11$\!$ & 14$\!$ & 15$\!$ & 10$\!$ & 16$\!$ & . $\!$ & . $\!$ & . $\!$ & 6 $\!$ & 1 $\!$ & 4 $\!$ & 2 $\!$ & 3 $\!$ & 12$\!$ & 13 \\
6 $\!$ & 15$\!$ & 13$\!$ & 11$\!$ & 16$\!$ & 12$\!$ & . $\!$ & . $\!$ & . $\!$ & 5 $\!$ & 2 $\!$ & 1 $\!$ & 3 $\!$ & 4 $\!$ & 10$\!$ & 14 \\
8 $\!$ & 14$\!$ & 4 $\!$ & 3 $\!$ & 7 $\!$ & 2 $\!$ & 13$\!$ & 15$\!$ & 16$\!$ & . $\!$ & . $\!$ & . $\!$ & 5 $\!$ & 9 $\!$ & 6 $\!$ & 1 \\
13 $\!$ & 5 $\!$ & 16$\!$ & 14$\!$ & 1 $\!$ & 7 $\!$ & 3 $\!$ & 6 $\!$ & 15$\!$ & . $\!$ & . $\!$ & . $\!$ & 4 $\!$ & 8 $\!$ & 9 $\!$ & 2 \\
4 $\!$ & 13$\!$ & 6 $\!$ & 1 $\!$ & 9 $\!$ & 15$\!$ & 16$\!$ & 2 $\!$ & 14$\!$ & . $\!$ & . $\!$ & . $\!$ & 8 $\!$ & 7 $\!$ & 5 $\!$ & 3 \\
11 $\!$ & 9 $\!$ & 12$\!$ & 7 $\!$ & 8 $\!$ & 3 $\!$ & 2 $\!$ & 5 $\!$ & 10$\!$ & 1 $\!$ & 16$\!$ & 6 $\!$ & . $\!$ & . $\!$ & . $\!$ & 4 \\
10 $\!$ & 7 $\!$ & 9 $\!$ & 8 $\!$ & 12$\!$ & 1 $\!$ & 6 $\!$ & 4 $\!$ & 11$\!$ & 16$\!$ & 3 $\!$ & 2 $\!$ & . $\!$ & . $\!$ & . $\!$ & 5 \\
12 $\!$ & 10$\!$ & 8 $\!$ & 16$\!$ & 3 $\!$ & 9 $\!$ & 11$\!$ & 1 $\!$ & 5 $\!$ & 2 $\!$ & 4 $\!$ & 7 $\!$ & . $\!$ & . $\!$ & . $\!$ & 6 \\
15 $\!$ & 4 $\!$ & 5 $\!$ & 9 $\!$ & 2 $\!$ & 8 $\!$ & 12$\!$ & 10$\!$ & 6 $\!$ & 7 $\!$ & 13$\!$ & 14$\!$ & 1 $\!$ & 11$\!$ & 3 $\!$ & . \\
\cline{1-16}
\end{tabular}}
\\[.05in]
A CMDRR$(16,6)$ can be derived from this and can be played in 25 short rounds of 5 games each (by computer search).
The spouse pairs are M$1$F$1$, M$4$F$4$, M$7$F$7$, M$10$F$10$, M$13$F$13$, and M$16$F$16$.
\\[.05in]
{\footnotesize
\begin{tabular}{ccccccc}
R1 & M02 F11 v M13 F09 & M04 F08 v M11 F14 & M05 F05 v M06 F04 &\\& M07 F15 v M16 F12 & M10 F06 v M15 F02 \\
R2 & M03 F04 v M07 F10 & M05 F12 v M13 F08 & M06 F16 v M12 F15 &\\& M08 F01 v M11 F06 & M14 F14 v M15 F13 \\
R3 & M01 F07 v M11 F13 & M02 F08 v M15 F10 & M03 F14 v M10 F04 &\\& M04 F02 v M09 F11 & M07 F12 v M14 F06 \\
R4 & M01 F09 v M16 F15 & M04 F13 v M10 F03 & M05 F02 v M14 F12 &\\& M06 F14 v M11 F07 & M09 F10 v M15 F05 \\
R5 & M02 F04 v M09 F15 & M03 F11 v M08 F14 & M06 F01 v M07 F13 &\\& M10 F09 v M14 F16 & M11 F10 v M12 F12 \\
R6 & M02 F10 v M07 F06 & M04 F03 v M08 F15 & M05 F01 v M09 F16 &\\& M10 F11 v M11 F12 & M12 F05 v M15 F07 \\
R7 & M01 F13 v M09 F06 & M02 F09 v M10 F14 & M03 F16 v M15 F08 &\\& M07 F05 v M11 F03 & M08 F04 v M12 F02 \\
R8 & M01 F02 v M02 F03 & M03 F07 v M13 F12 & M04 F06 v M06 F05 &\\& M08 F13 v M16 F10 & M09 F01 v M12 F14 \\
R9 & M04 F15 v M12 F01 & M07 F04 v M10 F13 & M08 F02 v M13 F05 &\\& M09 F14 v M16 F06 & M11 F08 v M14 F03 \\
R10 & M05 F14 v M08 F10 & M06 F02 v M15 F09 & M07 F03 v M12 F16 &\\& M09 F04 v M14 F11 & M10 F05 v M13 F01 \\
R11 & M02 F05 v M12 F13 & M03 F15 v M05 F07 & M06 F10 v M14 F01 &\\& M11 F04 v M13 F16 & M15 F06 v M16 F03 \\
R12 & M02 F14 v M06 F12 & M03 F09 v M12 F06 & M04 F01 v M14 F08 &\\& M05 F03 v M10 F07 & M11 F02 v M16 F13 \\
R13 & M01 F08 v M12 F04 & M05 F09 v M11 F01 & M06 F03 v M09 F12 &\\& M07 F16 v M13 F02 & M08 F06 v M10 F15 \\
R14 & M01 F11 v M06 F07 & M02 F02 v M03 F01 & M08 F03 v M14 F04 &\\& M09 F05 v M10 F16 & M13 F15 v M15 F14 \\
\end{tabular}
\\[.05in]
\begin{tabular}{ccccccc}
R15 & M01 F04 v M15 F12 & M03 F08 v M16 F05 & M08 F07 v M09 F09 &\\& M11 F11 v M12 F10 & M13 F14 v M14 F15 \\
R16 & M01 F10 v M04 F09 & M03 F12 v M09 F13 & M05 F15 v M07 F11 &\\& M10 F01 v M16 F07 & M12 F08 v M13 F06 \\
R17 & M01 F15 v M10 F08 & M02 F06 v M14 F07 & M05 F10 v M16 F02 &\\& M06 F13 v M08 F16 & M11 F09 v M15 F04 \\
R18 & M01 F14 v M05 F16 & M04 F10 v M13 F07 & M07 F08 v M08 F09 &\\& M09 F02 v M11 F15 & M14 F05 v M16 F11 \\
R19 & M01 F16 v M14 F10 & M02 F15 v M11 F05 & M03 F13 v M04 F11 &\\& M05 F04 v M06 F06 & M12 F03 v M16 F14 \\
R20 & M01 F05 v M07 F14 & M02 F13 v M05 F08 & M04 F07 v M15 F16 &\\& M06 F09 v M13 F03 & M10 F12 v M12 F11 \\
R21 & M03 F06 v M11 F16 & M04 F12 v M16 F09 & M07 F01 v M15 F11 &\\& M09 F03 v M13 F10 & M12 F07 v M14 F02 \\
R22 & M01 F03 v M03 F02 & M02 F16 v M08 F11 & M04 F05 v M05 F06 &\\& M07 F09 v M09 F08 & M14 F13 v M15 F15 \\
R23 & M01 F06 v M13 F11 & M02 F07 v M16 F04 & M03 F05 v M14 F09 &\\& M06 F08 v M10 F02 & M08 F12 v M15 F01 \\
R24 & M01 F12 v M08 F05 & M02 F01 v M03 F03 & M04 F14 v M07 F02 &\\& M05 F13 v M12 F09 & M06 F11 v M16 F08 \\
R25 & M02 F12 v M04 F16 & M03 F10 v M06 F15 & M05 F11 v M15 F03 &\\& M08 F08 v M09 F07 & M13 F04 v M16 F01 \\
\end{tabular}}
\end{Example}
\section{Product Theorem}
We next present a product construction for CMDRR. While this does not expand the spectrum given in
Section 3, it does provide an alternative construction that does not rely on HSOLS.
\begin{Theorem}\label{product}
If there exists a CMDRR$(n,k)$, a SAMDRR$(m)$, and two mutually orthogonal latin squares, MOLS, of order $n$,
then there exists a CMDRR$(mn,mk)$.
\end{Theorem}
\begin{Proof}
Let M$(i)$ and F$(i)$, with $i=1, \ldots, n$, denote the players of the CMDRR$(n,k)$, and
let M$'(j)$ and F$'(j)$, with $j=1, \ldots, m$, denote the players of the SAMDRR$(m)$. As usual
assume, without loss of generality, that spouses have the same index.
We will construct a CMDRR$(mn,mk)$ on new players M$(i,j)$ and F$(i,j)$, with $i=1, \ldots, n$,
and $j=1, \ldots, m$.
For each game M$(w)$F$(x)$ v M$(y)$F$(z)$ of the CMDRR$(n,k)$, add to the new CMDRR$(mn,mk)$ the $m$ games
M$(w,j)$F$(x,j)$ v M$(y,j)$F$(z,j)$, with $j=1, \ldots, m$. Call these type $1$ games. There are $m(n^2-k)/2$
of these games.
Let L$_1$ and L$_2$ be the two MOLS of order $n$.
For each game M$'(w)$F$'(x)$ v M$'(y)$F$'(z)$ of the SAMDRR$(m)$, add to the new CMDRR$(mn,mk)$ the $n^2$ games
M$(i_1,w)$F$(i_2,x)$ v M$(i_3,y)$F$(i_4,z)$, with $i_1,i_2=1, \ldots, n$, and
$i_3=$L$_1(i_1,i_2)$, and $i_4=$L$_2(i_1,i_2)$. Note that all of $w,x,y,$ and $z$ are distinct. Call these type $2$ games.
There are $n^2(m^2-m)/2$ of these games. So the total number of type 1 and type 2 games
is $m(n^2-k)/2 + n^2(m^2-m)/2 = ((mn)^2 - mk)/2$, the number of games expected for a CMDRR$(mn,mk)$.
We now check that the conditions for a CMDRR$(mn,mk)$ are met for opposite sex players. If M$(i)$ and F$(i)$ are spouses in the
CMDRR$(n,k)$, then for each $j=1, \ldots, m$, the players M$(i,j)$ and F$(i,j)$ satisfy the condition for spouses in the CMDRR$(mn,mk)$,
because each pair never occurs in a type $1$ or type $2$ game as partners or opponents. Thus there are at least $mk$
spouse pairs. Consider any other pair M$(i_1,w)$F$(i_2,x)$ that are not one of these spouse pairs. If $w=x$ then by
construction the players partner once and oppose once in type 1 games. If $w\ne x$ then M$'(w)$ and F$'(x)$ partner
and oppose exactly once in the SAMDRR$(m)$ and by definition of MOLS, M$(i_1,w)$ and F$(i_2,x)$
partner and oppose exactly once in the CMDRR$(mn,mk)$. We conclude that there are exactly $mk$ spouse pairs and that every male
and female who are not spouses are partners exactly once and opponents exactly once.
We now check that the conditions for a CMDRR$(mn,mk)$ are met for same sex players. Consider players
M$(i_1,w)$ and M$(i_3,y)$. If $w=y$ then by construction they oppose at least once
in a type 1 game. If $w\ne y$ then again by construction they oppose exactly once in a type 2 game. The condition for
female players is analogous. So same sex players oppose at least once. The total number of games is correct so we conclude
that each player who does not have a spouse opposes
some other same sex player who does not have a spouse exactly twice and opposes all other same sex players exactly once. \qed
\end{Proof}
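For illustration only, the construction in the proof can be phrased as a short procedure. The sketch below is not part of the original construction's presentation; it assumes the component designs are given as lists of games $(w,x,y,z)$, meaning M$(w)$F$(x)$ v M$(y)$F$(z)$, and that the two MOLS are supplied as $0$-indexed arrays.
\begin{verbatim}
def product_cmdrr(cmdrr_games, n, samdrr_games, m, L1, L2):
    # Returns the games of the CMDRR(m*n, m*k) on players indexed (i, j).
    games = []
    # Type 1 games: one copy of the CMDRR(n, k) for every index j.
    for (w, x, y, z) in cmdrr_games:
        for j in range(m):
            games.append(((w, j), (x, j), (y, j), (z, j)))
    # Type 2 games: expand every SAMDRR(m) game via the two MOLS of order n.
    for (w, x, y, z) in samdrr_games:
        for i1 in range(n):
            for i2 in range(n):
                i3, i4 = L1[i1][i2], L2[i1][i2]
                games.append(((i1, w), (i2, x), (i3, y), (i4, z)))
    return games
\end{verbatim}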
\section{Introduction}
Sports analytics is a fast-growing research field with a strong focus on data-driven performance analysis of professional athletes and teams. Soccer, and many other team-sports, have recently benefited from the availability of high-frequency tracking data of both player and ball locations, facilitating the development of fine-grained spatiotemporal performance metrics \cite{rein2016big}.
One of the main goals of performance analysis is to answer specific questions from soccer coaches, but to do so we require models to be robust enough to capture the nuances of a complex sport, and be highly interpretable so findings can be communicated effectively. In other words, we need models to be both accurate and also translatable to soccer coaches in visual terms.\\
The majority of existing research in soccer analytics has focused on analyzing the impact of either on-ball events, such as goals, shots, and passes,
or the effects of players' movements and match dynamics \cite{gudmundsson2017spatio}. Most modeling approaches share one or more common issues, such as: heavy use of handcrafted features, little visual interpretability, and coarse representations that ignore meaningful spatial relationships. We still lack a comprehensive approach that can learn from lower-level input,
exploit spatial relationships on any location, and provide accurate predictions of observed and unobserved events at any location on the field.\\
The main contributions of our work are the following:
\begin{itemize}
\item We present a novel application of deep convolutional neural networks that allows calculating full probability surfaces for developing fine-grained analysis of game situations in soccer. This approach offers a new way of providing coaches with rich information in a visual format that might be easier to present to players than the usual numerical statistics.
\item We show how this architecture can ingest a flexible structure of layers of spatiotemporal data, and how it can be easily adapted to provide practical solutions for challenging problems such as the estimation of pass probability, pass selection likelihood and pass expected value surfaces.
\item We present three novel practical applications derived from pass probability surfaces, such as the identification of optimal passing locations, the prediction of optimal positioning for improving pass probability, and the prediction of team-level passing tendencies.
\end{itemize}
The presented approach successfully addresses the challenging problem of estimating full probability surfaces from single-location labels, which corresponds to an extreme case of weakly-supervised learning.
\section{Related Work}
From an applied standpoint, our work is related to several other approaches aimed at estimating pass probabilities and other performance metrics derived from spatiotemporal data in soccer. Regarding the technical approach, we leverage recent findings on weakly-supervised learning problems and the application of fully convolutional neural networks for image segmentation.
\paragraph{Soccer analytics} Pass probability estimation has been approached in several ways. A physics-based model of the time it takes each player to reach and control the ball has been used to derive pass probabilities on top of tracking data \cite{spearman2017physics}. Other approaches include the use of dominant regions to determine which player is most likely to control the ball after a pass \cite{gudmundsson2017spatio} or using a carefully selected set of handcrafted features to build linear prediction models \cite{power2017not}. The related problem of pass selection has been approached by applying
convolutional neural networks that predict the likelihood of passing to a specific player on the attacking team\cite{hubavcek2018deep}. The estimation of pass value has been approached either by the expert-guided development of algorithmic rules \cite{cakmak2018computational}, the application of standard machine learning algorithms on a set of handcrafted features \cite{power2017not},
or problem-specific deep learning models with dense layers and single output prediction \cite{fernandezdecomposing}. While some of the related work has estimated probability surfaces by inference on a set of discrete pass destination locations \cite{spearman2017physics,fernandezdecomposing}, none has
yet approached the learning of probability surfaces directly.
\paragraph{Fully convolutional networks and weakly-supervised learning} Fully convolutional networks have been extensively applied to semantic image segmentation, specifically for the pixel-labeling problem to successfully detect broad pixel areas associated with objects in images. The approach most related to our work builds a hierarchy of features at different sampling levels that are merged to provide segmentation regions that preserve both fine and coarse details \cite{long2015fully}. From a learning perspective, image segmentation has been approached as either supervised \cite{long2015fully}, weakly-supervised \cite{pathak2015constrained}, and semi-supervised learning problems \cite{papandreou2015weakly}.
Commonly, available labels are associated with many other pixels in the original image. However, in our case, labels are only associated with a single location in the desired probability map, transforming our learning problem into an unusual case of weakly-supervised learning.
\section{A Deep Model for Interpretable Analysis in Soccer}
\begin{figure}[h!]
\centering
\includegraphics[width=0.99\linewidth]{pass_probability_cnn_architecture_4.png}
\caption{SoccerMap architecture for a coarse soccer field representation of $104\times 68$ and 13 input channels.}
\label{fig:architecture}
\end{figure}
We build our architecture on top of tracking data extracted from videos of professional soccer matches, consisting of the 2D-location of players and the ball at 10 frames per second, along with manually tagged passes. At every frame we take a snapshot of the tracking data and create
a representation of a game situation consisting of a $l \times h \times c$ matrix, where $c$ channels of low-level information are mapped to a $l \times h$ coarse spatial representation of the locations on the field.
We seek an architecture that can learn both finer features at locations close to a possible passing destination and features considering information on a greater spatial scale. For passes, local features might be associated with the likelihood of nearby team-mates and opponents reaching the destination location, and with information about local spatial pressure. On the other hand, higher-scale features might consider player density and the interceptability of the ball along its path from the origin location. Finally, we seek to estimate this passing probability for any other location on the $l \times h$ spatial extent of the field.\\
This game state representation is processed by the deep neural network architecture presented in Figure \ref{fig:architecture}. The network creates a feature hierarchy by learning convolutions at $1x$, $1/2x$, and $1/4x$ scales while preserving the receptive field of the filters. Predictions are produced at each of these scales, and then upsampled nonlinearly and merged through fusion layers. A sigmoid activation layer is applied to the final prediction to produce pass probability estimations at every location, preserving the original input scale. During training, the single-location prediction associated with the destination of a sample pass is selected to compute the log-loss that is backpropagated to adjust the network weights.
\subsection{The Reasoning Behind the Choice of Layers}
The network incorporates different types of layers: max-pooling, linear, rectified linear unit (ReLu) and sigmoid activation layers, and 2D-convolutional filters (conv2d) for feature extraction, prediction, upsampling and fusion. In this section we present a detailed explanation of the reasoning behind the choice of layers and the design of the architecture.
\paragraph{Convolutions for feature extraction}
At each of the $1x$, $1/2x$, and $1/4x$ scales two layers of conv2d filters with a $5\times 5$ receptive field and stride of $1$ are applied, each one followed by a ReLu activation function layer, in order to extract spatial features at every scale. In order to keep the same
dimensions after the convolutions we apply symmetric padding to the input matrix of the convolutional layer. We chose symmetric-padding to avoid border-image artifacts that can hinder the predicting ability and visual representation of the model.
\paragraph{Fully convolutional network}
There are several conceptual and practical reasons for considering convolutional neural networks (convnets) for this problem. Convolutional filters are designed to recognize the relationships between nearby pixels, producing features that are spatially aware. Convnets have been proven successful in data sources with a Euclidean structure,
such as images and videos, so a 2D-mapping of soccer field location-based information can be expected to be an ideal data structure for learning essential features. Also, these features are expected to be non-trivial and complex. Convnets have been proven to learn what are sometimes more powerful visual features than handcrafted ones, even given large receptive fields and weak label training \cite{long2014convnets}. Regarding the architecture design, we are interested in learning the full $l \times h$ mapping of
passing probabilities covering the extent of a soccer field, for which fully convolutional layers are more appropriate than classical neural networks built for classification: the dense prediction layers of such networks are replaced by $1\times 1$ convolution layers so that a prediction is produced at every location.
\paragraph{Pooling and upsampling}
The network applies downsampling twice through max-pooling layers to obtain the $1/2x$ and $1/4x$ representations. Since activation field size is kept constant after every downsampling step, the network can learn filters of a wider spatial extent, leading to the detection of coarse details.
We learn non-linear upsampling functions at every upsampling step by first applying a
$2x$ nearest neighbor upsampling and then two layers of convolutional filters. The first convolutional layer consists of $32$ filters with a $3 \times 3$ activation field and stride $1$, followed by a ReLu activation layer. The second layer consists of a single filter with a $3 \times 3$ activation field and stride $1$, followed by a linear activation layer. This upsampling strategy has been shown to provide smoother outputs and to avoid the artifacts that are usually found when applying transposed convolutions \cite{odena2016deconvolution}.
\paragraph{Prediction and fusion layers}
Prediction layers consist of a stack of two convolutional layers, the first with $32$ $1\times 1$ convolutional filters followed by a ReLu activation layer, and the second with one $1\times 1$ convolutional filter followed by a linear activation layer.
Instead of reducing the output to a single prediction value, we keep the spatial dimensions at each step and
use $1\times 1$ convolutions to produce predictions at each location. The stack learns a non-linear prediction on top of the output of convolutional layers.
To merge the outputs at different scales, we concatenate the pair of matrices and pass them through a convolutional layer of one $1\times 1$ filter.
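As an illustration of how these pieces fit together, the following is a minimal Keras-style sketch of the architecture rather than the exact implementation used for the experiments: the number of filters in the feature-extraction blocks is not fully specified above and is therefore a placeholder, and zero padding is used instead of symmetric padding for brevity.
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers

def feature_block(x):
    # Two 5x5 convolutions with ReLu (filter counts are placeholders).
    x = layers.Conv2D(32, 5, padding='same', activation='relu')(x)
    return layers.Conv2D(64, 5, padding='same', activation='relu')(x)

def prediction_head(x):
    # 1x1 convolutions producing one prediction per location.
    x = layers.Conv2D(32, 1, activation='relu')(x)
    return layers.Conv2D(1, 1, activation='linear')(x)

def upsample(x):
    # Learned non-linear upsampling: 2x nearest neighbour + two convolutions.
    x = layers.UpSampling2D(2, interpolation='nearest')(x)
    x = layers.Conv2D(32, 3, padding='same', activation='relu')(x)
    return layers.Conv2D(1, 3, padding='same', activation='linear')(x)

def fuse(a, b):
    # Concatenate two prediction maps and merge them with a 1x1 convolution.
    return layers.Conv2D(1, 1)(layers.Concatenate()([a, b]))

def build_soccermap(l=104, h=68, c=13):
    x = layers.Input(shape=(l, h, c))
    f1 = feature_block(x)                            # 1x scale
    f2 = feature_block(layers.MaxPooling2D(2)(f1))   # 1/2x scale
    f4 = feature_block(layers.MaxPooling2D(2)(f2))   # 1/4x scale
    p2 = fuse(prediction_head(f2), upsample(prediction_head(f4)))
    p1 = fuse(prediction_head(f1), upsample(p2))
    out = layers.Activation('sigmoid')(p1)           # pass probability surface
    return tf.keras.Model(x, out)
\end{verbatim}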
\subsection{Learning from Single-Location Labels}
We seek a model that can produce accurate predictions of the pass probability to every location on an $l \times h$ coarse representation of a soccer field, given an $l \times h \times c$ representation of the game situation at the time a pass is made.
During training, we only have access to the manually labeled location of the pass destination and a binary label for the outcome of the pass.
\begin{definition}[SoccerMap]
\label{theorem:SoccerMap}
Let $X = \{x | x \in \mathbb{R}^{l \times h \times c}\}$ be the set of possible game state representations at any given time, where $l,h \in \mathbb{N}_1$ are the height and length of a coarse representation of soccer field, and $c \in {\mathbb{N}_1}$ the number of data channels, a SoccerMap is a function $f(x;\theta), f: \mathbb{R}^{l\times h \times c} \to \mathbb{R}^{l \times h}_{[0,1]}$, where $f$ produces a pass probability map, and $\theta$ are the network parameters.
\end{definition}
\begin{definition}[Target-Location Loss]
\label{def:target_location_loss}
Let $\sigma(x) = \frac{e^x}{e^x+1}$ denote the sigmoid function, let $y_k \in \{0,1\}$ be the outcome of a pass made at time $t(x_k)$ for a game state $x_k$, let $d_k$ be the destination location of pass $k$, let $f$ be a SoccerMap with parameters $\theta$, and let $logloss$ denote the log-loss function. We define the target-location loss as
$$L(f(x_k;\theta),y_k,d_k)= logloss(f(x_k;\theta)_{d_k}, y_k)$$
\end{definition}
We approach the training of the model as a weakly-supervised learning task, where the ground truth labels only correspond to a single location in the full mapping matrix that needs to be learned.
The target-location loss presented in Definition \ref{def:target_location_loss} essentially shrinks the output of a SoccerMap $f$ to a single prediction value by selecting the prediction value at the destination of the pass, and then computes the log-loss between this single prediction and the ground-truth outcome value.
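For concreteness, a minimal TensorFlow-style sketch of this loss is shown below; the tensor shapes and function names are illustrative assumptions rather than the exact implementation.
\begin{verbatim}
import tensorflow as tf

def target_location_loss(surface, dest_rc, outcome):
    # surface: (batch, l, h, 1) sigmoid output of the network
    # dest_rc: (batch, 2) integer destination cells (row, col)
    # outcome: (batch,) binary pass outcome
    pred = tf.gather_nd(tf.squeeze(surface, axis=-1), dest_rc, batch_dims=1)
    pred = tf.clip_by_value(pred, 1e-7, 1.0 - 1e-7)
    y = tf.cast(outcome, tf.float32)
    logloss = -(y * tf.math.log(pred) + (1.0 - y) * tf.math.log(1.0 - pred))
    return tf.reduce_mean(logloss)
\end{verbatim}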
\subsection{Spatial and Contextual Channels from Tracking Data}
\label{sec:channels}
Our architecture is designed to be built on top of two familiar sources of data for sports analytics: tracking data and event data. Tracking data consists of the location of the players and the ball at a high frequency-rate. Event-data corresponds to manually labeled observed events, such as passes, shots, and goals, including the location and time of each event.
We normalize the locations of the players and the ball so that the team in possession of the ball always attacks from left to right, thus standardizing the representation of the game situation. On top of players' and ball locations, we derive low-level input channels, including spatial (location and velocity) and contextual information (ball and goal distance). Channels are represented by matrices of size $(104,68)$, where each cell approximately represents $1m^2$ on a typical soccer field.
\begin{definition}[Tracking-Data Snapshot]
Let $Z_p(t),Z_d(t),Z_b(t), Z_g(t) \in \{z | z \in \mathbb{R}^{l \times h}\}$ be the locations of the attacking team players, the location of the defending team players, the location of the ball, and the location of the opponent goal, respectively, at time $t$, then a tracking-data snapshot at time $t$ is defined as the 4-tuple $Z(t)=(Z_p(t),Z_d(t),Z_b(t),Z_g(t))$.
\end{definition}
In order to create a game state representation $X(t) \in X$ as described in Definition \ref{theorem:SoccerMap} we produce 13 different channels on top of each tracking-data snapshot $Z$ where a pass has been observed, which constitute the game-state representation for the pass probability model.
Each channel corresponds to either a sparse or dense matrix of size $(h,l)$, according to the chosen dimensions for the coarse field representation. The game-state representation is composed of the following channels:
\begin{itemize}
\item Six sparse matrices with the location, and the two components of the velocity vector for the players in both the attacking team and the defending team, respectively.
\item Two dense matrices where every location contains the distance to the ball and goal location.
\item Two dense matrices containing the sine and cosine of the angle between every location to the goal and the ball location, and one dense matrix containing the angle in radians to the goal location.
\item Two sparse matrices containing the sine and cosine of the angle between the velocity vector of the ball carrier and each of the teammates in the attacking team.
\end{itemize}
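As a rough illustration of how such channels can be produced from a tracking-data snapshot, the sketch below builds a few of them with NumPy; it assumes player coordinates have already been normalized and scaled to grid units, and it is not the exact preprocessing code used for the experiments.
\begin{verbatim}
import numpy as np

L, H = 104, 68  # coarse grid, roughly one square metre per cell

def location_velocity_channels(players_xy, players_vxy):
    # Three sparse channels for one team: location, x-velocity, y-velocity.
    loc, vx, vy = np.zeros((L, H)), np.zeros((L, H)), np.zeros((L, H))
    for (x, y), (dx, dy) in zip(players_xy, players_vxy):
        i, j = int(x), int(y)   # coordinates assumed to be in grid units
        loc[i, j] = 1.0
        vx[i, j] = dx
        vy[i, j] = dy
    return loc, vx, vy

def distance_channel(target_xy):
    # Dense channel with the distance from every cell to a target location
    # (used for the distance to the ball and to the goal).
    ii, jj = np.meshgrid(np.arange(L), np.arange(H), indexing='ij')
    return np.sqrt((ii - target_xy[0]) ** 2 + (jj - target_xy[1]) ** 2)
\end{verbatim}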
\section{Experiments and Results}
\subsection{Dataset}
We use tracking-data, and event-data from 740 English Premier League matches from the 2013/2014 and 2014/2015 season, provided by \emph{STATS LLC}. Each match contains the $(x,y)$ location for every player, and the ball sampled at $10$Hz. The event-data provides the location, time, player, team and outcome for 433,295 passes. From this data, we extract the channels described in Section \ref{sec:channels} for a coarse $(104,68)$ representation of a soccer field to obtain a dataset of size $433295 \times 104 \times 68 \times 13
$. There are 344,957 successful passes and 88,338 missed passes.
\subsection{Benchmark Models}
\label{sec:benchmark}
We compare our results against a series of benchmark models of increasing levels of complexity. We define a baseline model \emph{Naive} that for every pass outputs the known average pass completion in the full dataset ($80\%$) following a similar definition in \cite{power2017not}.
We build two additional models \emph{Logistic Net} and \emph{Dense2 Net} based on a set of handcrafted features built on top of tracking-data. Logistic Net is a network with a single sigmoid unit, and Dense2 Net is a neural network with two dense layers followed by ReLu activations and a sigmoid output unit.
\paragraph{Handcrafted features} We build a set of spatial features on top of tracking-data based on location and motion information on players and the ball that is similar to most of the features calculated in previous work on pass probability estimation \cite{power2017not,spearman2017physics,gudmundsson2017spatio}. We define the following set of handcrafted features from each pass: origin and destination location, pass distance, attacking and defending team influence at both origin and destination, angle to goal at origin and destination, and the maximum value of opponent influence in a straight line between origin and destination. The team's spatial influence values are calculated following the model presented in \cite{fernandez2018wide}.
\subsection{Experimental Framework}
In this section, we describe the experimental framework for testing the performance of the proposed architecture for the pass success probability estimation problem.
\paragraph{Training, validation, and test set} We randomly selected matches from both available seasons and split them into a training, validation, and test set with a $60:20:20$ distribution. We applied a stratified split, so the successful/missed pass class ratio remains the same across datasets. The validation set is used for model selection during a grid-search process.
The test set is left as hold-out data, and results are reported on performance for this dataset. For the benchmark models, datasets are built by extracting the features described in Section \ref{sec:benchmark}, and an identical split is performed. Features are standardized column-wise by subtracting the mean value and dividing by the standard deviation.
\paragraph{Optimization}
Both the SoccerMap network and the baseline models are trained using adaptive moment estimation (Adam). Model selection is achieved through grid-search on learning rates of $10^{-3}$, $10^{-4}$ and $10^{-5}$, and batch sizes of $1$, $16$ and $32$, while $\beta_1,\beta_2$ are set to $0.9$ and $0.999$, respectively. We use early stopping with a minimum delta rate of $0.001$. Optimization is computed on a single Tesla M60 GPU and using Tensorflow 1.5.0. During the optimization, the negative log-loss is minimized.
\paragraph{Metrics}
Let $N$ be the number of examples in the dataset, $y$ the ground-truth labels for pass events and $\hat{y}$ the model predictions. We report the negative log-loss $$\mathcal{L}(\hat{y},y) = - \frac{1}{N} \sum_i \Big( y_i \cdot \log(\hat{y_i}) + (1-y_i) \cdot \log(1-\hat{y_i}) \Big).$$ In order to validate the model calibration we use a variation of the expected calibration error (ECE) presented in \cite{guo2017calibration} which computes the expected difference between accuracy and confidence of the model on a finite set of samples split into $K$ bins of size $1/K$, according to the predicted confidence or probability for every sample.
Since our model is not designed for classification, we use the count of the number of examples of the positive class rather than accuracy for $ECE$.
Let $B_k$ be a bin where $k \in [1,K]$; then $$ECE = \sum_{k=1}^{K} \frac{|B_k|}{N} \left| \bigg(\frac{1}{|B_k|} \sum_{i \in B_k}1(y_i=1)\bigg) -
\bigg(\frac{1}{|B_k|} \sum_{i \in B_k} \hat{y}_i \bigg) \right|. $$
A perfectly calibrated model will have an ECE value of $0$. Additionally, we provide a calibration reliability plot \cite{guo2017calibration} showing the mean confidence for every bin $B_k$.
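The ECE variant above can be computed directly from the model outputs; a small NumPy sketch (illustrative, not the evaluation code used for the reported numbers) is given below, where the inputs are arrays of binary labels and predicted probabilities.
\begin{verbatim}
import numpy as np

def expected_calibration_error(y_true, y_prob, n_bins=10):
    # Bin samples by predicted probability and compare, per bin, the
    # fraction of positive examples with the mean predicted probability.
    bins = np.minimum((y_prob * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for k in range(n_bins):
        mask = bins == k
        if mask.any():
            frac_pos = y_true[mask].mean()
            mean_conf = y_prob[mask].mean()
            ece += mask.mean() * abs(frac_pos - mean_conf)
    return ece
\end{verbatim}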
\subsection{Results}
Table \ref{table:results} presents the results for the benchmark models and SoccerMap for the pass probability dataset. We can observe that SoccerMap achieves remarkably lower error than the other models and produces a calibrated estimation of pass probability. Despite the considerably large number
of parameters in SoccerMap, the inference time for a single sample is low enough to produce a real-time estimation for frame rates below 200Hz. Figure \ref{fig:calibration_plot} presents a calibration reliability plot for each of the models. Both Logistic Net and SoccerMap produce well-calibrated estimations of pass probabilities; however, SoccerMap is considerably more precise, as shown by the difference in log-loss between the two.
\begin{table}[h!]
\caption{Results for the benchmark models and SoccerMap for the pass probability dataset.}
\label{table:results}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Model & Log-loss & ECE & Inference time & Number of parameters\\
\hline
Naive & $0.5451$ & $-$ & $-$ & $0$\\
Logistic Net & $0.384$ & \bm{$0.0210$} & $0.00199$s & $11$\\
Dense2 Net & $0.349$ & $0.0640$ & $0.00231$s & $231$\\
SoccerMap & \bm{$0.217$} & \bm{$0.0225$} & $0.00457$s & $401,259$\\
\hline
\end{tabular}
\end{table}
\begin{figure}[h!]
\centering
\includegraphics[width=0.90\linewidth]{calibration_plot}
\caption{A calibration reliability plot, where the X-axis presents the mean predicted value for samples in each of 10 bins, and the Y-axis the fraction of samples in each bin containing positive examples.}
\label{fig:calibration_plot}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.90\linewidth]{pass_prob2}
\caption{Pass probability surface for a given game situation. Yellow and blue circles represent players' locations on the attacking and defending team, respectively, and the arrows represent the velocity vector for each player. The white circle represents the ball location.}
\label{fig:video_and_2d_surfaces}
\end{figure}
Figure \ref{fig:video_and_2d_surfaces} presents the predicted pass probability surface for a specific game situation during a professional soccer match. We observe that the model can capture both fine-grained information, such as the influence of defending and attacking players on nearby locations and coarse information such as the probability of reaching more extensive spatial areas depending on the distance to the ball and the proximity of players. We can also observe that the model considers the player's speed for predicting probabilities of passing to not-yet occupied spaces, a critical aspect of practical soccer analysis.
\subsubsection{Ablation Study}
We performed an ablation study to evaluate whether the different components of the proposed architecture improve its performance on the pass probability estimation problem, by testing the performance of different variations of the architecture.
Table \ref{table:ablation} presents the log-loss obtained on different configurations of the architecture with the following components: skip-connections (SC), non-linear up-sampling (UP), fusion layer (FL), non-linear prediction layer (NLP), and the number of layers of convolutional filters by sampling layer (NF). We can observe there are two configurations with similar log-loss: the SoccerMap and SoccerMap-UP configurations. While the removal of the non-linear upsampling slightly increases the performance, it produces visual artifacts that are less eye-pleasing when inspecting the surfaces. Given that the surfaces are intended to be used by soccer coaches in practice, SoccerMap provides a better option for practical purposes.
\begin{table}[h!]
\caption{Ablation study for subsets of components of the SoccerMap architecture.}
\label{table:ablation}
\centering
\begin{tabular}{|p{3.2cm}|p{1.2cm}|p{1.2cm}|p{1.2cm}|p{1.2cm}|p{1.2cm}|p{1.6cm}|}
\hline
Architecture & SC & UP & FL & NLP & NF & Log-loss \\
\hline
\textbf{SoccerMap} & YES & YES & YES & YES & 2 & \textbf{0.217} \\
SoccerMap-NLP & YES & YES & YES & NO & 2 & 0.245 \\
SoccerMap-FL & YES & YES & NO & YES & 2 & 0.221 \\
SoccerMap-FL-NLP & YES & YES & NO & NO & 2 & 0.292 \\
\textbf{SoccerMap-UP} & YES & NO & YES & YES & 2 & \textbf{0.216} \\
SoccerMap-UP-FL & YES & NO & NO & YES & 2 & 0.220 \\
SoccerMap-UP-NLP & YES & NO & YES & NO & 2 & 0.225 \\
SoccerMap-UP-FL-NLP & YES & NO & NO & NO & 2 & 0.235 \\
Single Layer CNN-D4 & NO & YES & YES & YES & 2 & 0.256 \\
Single Layer CNN-D8 & NO & YES & YES & YES & 4 & 0.228 \\
\hline
\end{tabular}
\end{table}
\section{Practical Applications}
In this section, we present a series of novel practical applications that make use of the full probability surface for evaluating potential passing actions and assessing players' passing and positional skills.
\subsection{Adapting SoccerMap for the Estimation of Pass Selection Likelihood and Pass Value.}\label{sec:pass_select}
One of the main advantages of this architecture is that it can be easily adapted to other challenging problems associated with the estimation of pass-related surfaces, such as the estimation of pass selection and pass value.
\paragraph{Pass selection model} An interesting and unsolved problem in soccer is the estimation of the likelihood of a pass being made towards every other location on the field, rather than to specific player locations.
We achieve this by directly modifying the sigmoid activation layer of the original architecture by a softmax activation layer, which ensures that the sum of probabilities on the output surface adds up to $1$. For this case, instead of pass success, we use a sparse matrix as a target output and set the destination location of the pass in that matrix to $1$.
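In code, this adaptation only touches the output layer and the target encoding. The snippet below is a rough sketch that reuses the Keras-style layers from the architecture sketch above; the names and shapes are assumptions rather than the exact implementation.
\begin{verbatim}
from tensorflow.keras import layers

def selection_head(p1, l=104, h=68):
    # p1: (batch, l, h, 1) fused prediction map from the network.
    # Replace the sigmoid with a softmax over all l*h cells so that the
    # output surface sums to one; the training target is a sparse one-hot
    # matrix with a 1 at the observed pass destination.
    logits = layers.Reshape((l * h,))(p1)
    probs = layers.Softmax()(logits)
    return layers.Reshape((l, h, 1))(probs)
\end{verbatim}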
\paragraph{Pass value model} While a given pass might have a low probability of success, the expected value of that pass could be higher than a different passing option with higher probability, thus in some cases, the former could be preferable. We can directly adapt SoccerMap to estimate a pass value surface by modifying the target value and the loss function to be used. For this case, we use as an outcome the expected goals value \cite{eggels2016expected} of the last event in possession of any given pass, which can be positive or negative depending on whether the attacking or defending team had the last action in that possession.
Figure \ref{fig:selection_value} presents the surfaces for pass selection and pass value models derived from this architecture. With these surfaces, we can provide direct visual guidance to coaches to understand the value of the positioning of its team, the potential value gains of off-ball actions, and a team's likely passes in any given situation.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{surface_comparison}
\caption{On the left column, the pass selection surface for a given game-state, presented on a logarithmic scale. On the right column, the pass value surface for the same game-state, with the color scale constrained to a [-0.2,0.2] range.}
\label{fig:selection_value}
\end{figure}
\subsection{Assessing Optimal Passing and Location}
While soccer analytics have long-focused on using pass probabilities to evaluate a player's passing skills based on observed pass accuracy \cite{power2017not,spearman2017physics}, there are still two main challenging problems that remain unattended: the identification of optimal passing locations and optimal off-ball positioning for improving pass probability.
\subsubsection{Visual Assessment of Optimal Passing}
Given a game-state, where a player is in possession of the ball, we define the optimal and sub-optimal pass destinations as the locations near the teammates that provide a higher pass probability than the current location of the corresponding teammate. To obtain the optimal passing locations we first calculate the pass probability surface of a given game-state and then evaluate the probability of every possible location in a $5 \times 5$ grid around the expected teammate location in the next second, based on the current velocity. The location within that grid with the highest probability difference relative to the current player's location is set as the optimal passing location. Additionally, a set of sub-optimal passing locations is obtained by identifying locations with a positive probability difference that are at least 5 meters away from previous sub-optimal locations. In the left column of Figure \ref{fig:optimal_location} we present in red circles the set of best passing locations for each of the possession team players for a given game state. This kind of visualization provides a coach with the ability to perform a direct visual inspection of passing options and allows her to provide direct feedback to players about specific game situations, improving the coach's effective communication options.
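A simplified sketch of this grid search over a precomputed probability surface is shown below; boundary checks and the velocity-based projection of the teammate's location are omitted, and all names are illustrative.
\begin{verbatim}
def best_passing_location(surface, teammate_rc, current_prob, radius=2):
    # Search the 5x5 grid of cells around a teammate's expected location
    # for the largest pass-probability gain over the teammate's current cell.
    r0, c0 = teammate_rc
    best_gain, best_rc = 0.0, teammate_rc
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            gain = surface[r0 + dr, c0 + dc] - current_prob
            if gain > best_gain:
                best_gain, best_rc = gain, (r0 + dr, c0 + dc)
    return best_rc, best_gain
\end{verbatim}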
\subsubsection{Visual Assessment of Optimal Positioning}
Following a similar idea, we can leverage pass probabilities surfaces to detect the best possible location a player could occupy to increase the probability of receiving a pass directly. To obtain the optimal location for each player, we recalculate the pass probability surface of the same game situation but translating the location of the player (one player at a time) to any other possible location in the $5 \times 5$ grid. We analogously obtain the optimal locations, as described before. In the right column of Figure \ref{fig:optimal_location} we observe in green circles the expected pass probability added if the player would have been placed in that location instead. Again, this tool can be handy for coaches to instruct players on how to improve their off-ball game.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{application1}
\caption{In the left column, we present a game-state where red circles represent the optimal passing location for each teammate, and the expected pass probability. In the right column, the green circles represent the optimal positioning of players increasing the expected pass probability if the players were placed in those locations at that time.}
\label{fig:optimal_location}
\end{figure}
\subsubsection{Assessing Passing Skill}
We propose a new metric \textit{pass completion added (PPA)} to evaluate the quality of a player's selection of the passing destination location. For each observed pass, we calculate the difference between the probability of the optimal pass and the probability of the selected pass. This metric is formally defined in Equation \ref{eq:ppa2}, where $S$ and $M$ are the sets of successful and missed passes, respectively, $\hat{y}$ is the optimal pass probability, and $y$ is the selected pass probability.
Intuitively, a player's reward is discounted if the selected pass was not optimal. In the case of an unsuccessful pass, the player is only penalized in proportion to the probability difference with the optimal location, rewarding the player's pass selection.
\begin{equation}
\label{eq:ppa2}
PPA = \sum_{s=1}^{S}(1-\hat{y}^s)\big(1-(\hat{y}^s-y^s)\big) - \sum_{m=1}^{M} \hat{y}^m(\hat{y}^m-y^m)
\end{equation}
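Equation \ref{eq:ppa2} translates directly into a per-pass accumulation; the following is an illustrative sketch assuming each pass is summarized by its optimal probability, selected probability, and outcome.
\begin{verbatim}
def pass_completion_added(passes):
    # passes: iterable of (optimal_prob, selected_prob, successful) tuples.
    ppa = 0.0
    for y_opt, y_sel, successful in passes:
        if successful:
            ppa += (1.0 - y_opt) * (1.0 - (y_opt - y_sel))
        else:
            ppa -= y_opt * (y_opt - y_sel)
    return ppa
\end{verbatim}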
In Table \ref{table:ppa_table} we present the best ten players in pass completion added for the 2014-2015 season of the English Premier League, where the cumulative $PPA$ of a player is normalized by 90 minutes played. The table includes the estimated player price in 2014, provided by \url{www.transfermarkt.com}. We can observe that the list contains a set of the best players in recent times in this league, including creative midfielders such as Oezil, Silva, Hazard and Fabregas, deep creative wingers such as Navas and Valencia, and Rosicky, a historical player.
\begin{table}[h!]
\caption{Ranking of the best ten players in pass completion added for the season 2014-2015 of the English Premier League.}
\label{table:ppa_table}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Team & Player Name & PPA/90m & Age in 2014 & Player price (2014) \\ \hline
Arsenal & Mesut Oezil & 0.0578 & 24 & \euro 45M\\
Manchester City & David Silva & 0.0549 & 28 & \euro 40M\\
Chelsea & Eden Hazard & 0.0529 & 23 & \euro 48M\\
Manchester United & Antonio Valencia & 0.0502 & 29 & \euro 13M\\
Arsenal & Tomas Rosicky & 0.0500 & 33 & \euro 2M\\
Chelsea & Cesc Fabregas & 0.0484 & 27 & \euro 40M\\
Arsenal & Santi Cazorla & 0.0470 & 29 & \euro 30M\\
Manchester City & Jesus Navas & 0.0469 & 28 & \euro 20M\\
Manchester City & Yaya Toure & 0.0466 & 30 & \euro 30M\\
Manchester City & Samir Nasri & 0.0447 & 26 & \euro 22M \\
\hline
\end{tabular}
\end{table}
\subsection{Team-Based Passing Selection Tendencies}
The pass selection adaptation of SoccerMap, presented in Section \ref{sec:pass_select}, provides a fine-grained evaluation of the passing likelihood in different situations. However, pass selection is likely to vary according to a team's playing style and the specific game situation. While a league-wide model might be useful for grasping the expected behavior of a typical team in the league, a soccer coach will be more interested in understanding the fine-grained details that separate one team from the other. Once we train a SoccerMap network to obtain this league-wide model, we can fine-tune the network with passes from each team to grasp team-specific behavior. In this application example, we trained the pass selection model with passes from all the teams in the English Premier League season 2014-2015. Afterward, we retrained the initial model with passes from two teams with different playing styles: Liverpool and Burnley. \\
In Figure \ref{fig:team_pass_selection} we compare the pass selection tendencies between Liverpool (left column) and Burnley (right column). On the top left corner of both columns, we show a 2D plot with the difference between the league mean passing selection heatmap, and each team's mean passing selection heatmap, when the ball is within the green circle area. We can observe that Liverpool tends to play short passes, while Burnley has a higher tendency of playing long balls to the forwards or opening on the sides. However, this kind of information would not escape from the soccer coach's intuition, so we require a more fine-grained analysis of each team's tendencies in specific situations. In the two plots of Figure \ref{fig:team_pass_selection} we show over each players' location the percentage increase in passing likelihood compared with the league's mean value. In this situation, we can observe that when a left central defender has the ball during a buildup, Liverpool will tend to play short passes to the closest open player, while Burnley has a considerably higher tendency to play long balls to the forwards, especially if forwards are starting a run behind the defender's backs, such as in this case. Through a straightforward fine-tuning of the SoccerMap-based model, we can provide detailed information to the coach for analyzing specific game situations.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{likelihood_comparison}
\caption{A game-state representation of a real game situation in soccer. Above each player (circles) we present the added percentage difference of pass likelihood in that given situation in comparison with the league for two teams: Liverpool (left column) and Burnley (right column). The heatmaps in both top left corners of each column represent the mean difference in pass selection likelihood with the league, when the ball is located within the green circle.}
\label{fig:team_pass_selection}
\end{figure}
\section{Discussion and Future Work}
The estimation of full probability surfaces provides a new dimension for soccer analytics. The presented architecture allows generating visual tools to help
coaches perform fine-tuned analysis of opponents and own-team performance, derived from low-level spatiotemporal soccer data. We show how this network can be easily adapted to many other challenging related problems in soccer, such as the estimation of pass selection likelihood and pass value, and that it can perform remarkably well at estimating the probability of observed passes. By merging features extracted at different sampling levels, the network can extract both fine and coarse details, thereby managing to make sense of the complex spatial dynamics of soccer. We have also presented several novel practical applications in soccer analytics, such as evaluating optimal passing,
evaluating optimal positioning, and identifying context-specific and team-level passing tendencies. This framework of analysis derived from spatiotemporal data could also be applied directly in many other team sports, where the visual representation of complex information can bring the coach and the data analyst closer.
\section{Introduction}
Sports analytics is a fast-growing research field with a strong focus on data-driven performance analysis of professional athletes and teams. Soccer, and many other team-sports, have recently benefited from the availability of high-frequency tracking data of both player and ball locations, facilitating the development of fine-grained spatiotemporal performance metrics \cite{rein2016big}.
One of the main goals of performance analysis is to answer specific questions from soccer coaches, but to do so we require models to be robust enough to capture the nuances of a complex sport, and be highly interpretable so findings can be communicated effectively. In other words, we need models to be both accurate and also translatable to soccer coaches in visual terms.\\
The majority of existing research in soccer analytics has focused on analyzing the impact of either on-ball events, such as goals, shots, and passes,
or the effects of players' movements and match dynamics \cite{gudmundsson2017spatio}. Most modeling approaches share one or more common issues, such as: heavy use of handcrafted features, little visual interpretability, and coarse representations that ignore meaningful spatial relationships. We still lack a comprehensive approach that can learn from lower-level input,
exploit spatial relationships on any location, and provide accurate predictions of observed and unobserved events at any location on the field.\\
The main contributions of our work are the following:
\begin{itemize}
\item We present a novel application of deep convolutional neural networks that allows calculating full probability surfaces for developing fine-grained analysis of game situations in soccer. This approach offers a new way of providing coaches with rich information in a visual format that might be easier to be presented to players than the usual numerical statistics.
\item We show how this architecture can ingest a flexible structure of layers of spatiotemporal data, and how it can be easily adapted to provide practical solutions for challenging problems such as the estimation of pass probability, pass selection likelihood and pass expected value surfaces.
\item We present three novel practical applications derived from pass probability surfaces, such as the identification of optimal passing locations, the prediction of optimal positioning for improving pass probability, and the prediction of team-level passing tendencies.
\end{itemize}
The presented approach successfully addresses the challenging problem of estimating full probability surfaces from single-location labels, which corresponds to an extreme case of weakly-supervised learning.
\section{Related Work}
From an applied standpoint, our work is related to several other approaches aimed at estimating pass probabilities and other performance metrics derived from spatiotemporal data in soccer. Regarding the technical approach, we leverage recent findings on weakly-supervised learning problems and the application of fully convolutional neural networks for image segmentation.
\paragraph{Soccer analytics} Pass probability estimation has been approached in several ways. A physics-based model of the time it takes each player to reach and control the ball has been used to derive pass probabilities on top of tracking data \cite{spearman2017physics}. Other approaches include the use of dominant regions to determine which player is most likely to control the ball after a pass \cite{gudmundsson2017spatio} or using a carefully selected set of handcrafted features to build linear prediction models \cite{power2017not}. The related problem of pass selection has been approached by applying
convolutional neural networks that predict the likelihood of passing to a specific player on the attacking team\cite{hubavcek2018deep}. The estimation of pass value has been approached either by the expert-guided development of algorithmic rules \cite{cakmak2018computational}, the application of standard machine learning algorithms on a set of handcrafted features \cite{power2017not},
or problem-specific deep learning models with dense layers and single output prediction \cite{fernandezdecomposing}. While some of the related work has estimated probability surfaces by inference on a set of discrete pass destination locations \cite{spearman2017physics,fernandezdecomposing}, none has
yet approached the learning of probability surfaces directly.
\paragraph{Fully convolutional networks and weakly-supervised learning} Fully convolutional networks have been extensively applied to semantic image segmentation, specifically for the pixel-labeling problem to successfully detect broad pixel areas associated with objects in images. The approach most related to our work builds a hierarchy of features at different sampling levels that are merged to provide segmentation regions that preserve both fine and coarse details \cite{long2015fully}. From a learning perspective, image segmentation has been approached as either supervised \cite{long2015fully}, weakly-supervised \cite{pathak2015constrained}, and semi-supervised learning problems \cite{papandreou2015weakly}.
Commonly, available labels are associated with many other pixels in the original image. However, in our case, labels are only associated with a single location in the desired probability map, transforming our learning problem into an unusual case of weakly-supervised learning.
\section{A Deep Model for Interpretable Analysis in Soccer}
\begin{figure}[h!]
\centering
\includegraphics[width=0.99\linewidth]{pass_probability_cnn_architecture_4.png}
\caption{SoccerMap architecture for a coarse soccer field representation of $104\times 68$ and 13 input channels.}
\label{fig:architecture}
\end{figure}
We build our architecture on top of tracking data extracted from videos of professional soccer matches, consisting of the 2D-location of players and the ball at 10 frames per second, along with manually tagged passes. At every frame we take a snapshot of the tracking data and create
a representation of a game situation consisting of a $l \times h \times c$ matrix, where $c$ channels of low-level information are mapped to a $l \times h$ coarse spatial representation of the locations on the field.
We seek an architecture that can learn both finer features at locations close to a possible passing destination and features considering information on a greater spatial scale. For passes, local features might be associated with the likelihood of nearby team-mates and opponents reaching the destination location and information about local spatial pressure. On the other hand, higher scale features might consider player's density and interceptability of the ball in its path from the location of origin. Finally, we seek to estimate this passing probability to any other location on the $l \times h$ spatial extent of the field.\\
This game state representation is processed by the deep neural network architecture presented in Figure \ref{fig:architecture}. The network creates a feature hierarchy by learning convolutions at $1x$, $1/2x$, and $1/4x$ scales while preserving the receptive field of the filters. Predictions are produced at each of these scales, and then upsampled nonlinearly and merged through fusion layers. A sigmoid activation layer is applied to the latest prediction to produce pass probability estimations at every location, preserving the original input scale. During training, a single-location prediction,
associated with the destination of a sample pass is selected to compute the log-loss that is backpropagated to adjust the network weights.
\subsection{The Reasoning Behind the Choice of Layers}
The network incorporates different types of layers: max-pooling, linear, rectified linear unit (ReLu) and sigmoid activation layers, and 2D-convolutional filters (conv2d) for feature extraction, prediction, upsampling and fusion. In this section we present a detailed explanation of the reasoning behind the choice of layers and the design of the architecture.
\paragraph{Convolutions for feature extraction}
At each of the $1x$, $1/2x$, and $1/4x$ scales two layers of conv2d filters with a $5\times 5$ receptive field and stride of $1$ are applied, each one followed by a ReLu activation function layer, in order to extract spatial features at every scale. In order to keep the same
dimensions after the convolutions we apply symmetric padding to the input matrix of the convolutional layer. We chose symmetric-padding to avoid border-image artifacts that can hinder the predicting ability and visual representation of the model.
\paragraph{Fully convolutional network}
There are several conceptual and practical reasons for considering convolutional neural networks (convnets) for this problem. Convolutional filters are designed to recognize the relationships between nearby pixels, producing features that are spatially aware. Convnets have been proven successful in data sources with a Euclidean structure,
such as images and videos, so a 2D-mapping of soccer field location-based information can be expected to be an ideal data structure for learning essential features. Also, these features are expected to be non-trivial and complex. Convnets have been proven to learn what are sometimes more powerful visual features than handcrafted ones, even given large receptive fields and weak label training \cite{long2014convnets}. Regarding the architecture design, we are interested in learning the full $l \times h$ mapping of
passing probabilities covering the extent of a soccer field, for which fully convolutional layers are more appropriate than classical neural networks built for classification when changing dense prediction layers for 1x1 convolution layers.
\paragraph{Pooling and upsampling}
The network applies downsampling twice through max-pooling layers to obtain the $1/2x$ and $1/4x$ representations. Since activation field size is kept constant after every downsampling step, the network can learn filters of a wider spatial extent, leading to the detection of coarse details.
We learn non-linear upsampling functions at every upsampling step by first applying a
$2x$ nearest neighbor upsampling and then two layers of convolutional filters. The first convolutional layer consists of $32$ filters with a $3 \times 3$ activation field and stride $1$, followed by a ReLu activation layer. The second layer consists of $1$ layer with a $3 \times 3$ activation field and stride $1$, followed by a linear activation layer. This upsampling strategy has been shown to provide smoother outputs and to avoid artifacts that can be usually found in the application transposed convolutions \cite{odena2016deconvolution}.
\paragraph{Prediction and fusion layers}
Prediction layers consist of a stack of two convolutional layers, the first with $32$ $1\times 1$ convolutional filters followed by
an ReLu activation layer, and the second consists of one $1\times 1$ convolutional filter followed by a linear activation layer.
Instead of reducing the output to a single prediction value, we keep the spatial dimensions at each step and
use $1\times 1$ convolutions to produce predictions at each location. The stack learns a non-linear prediction on top of the output of convolutional layers.
To merge the outputs at different scales, we concatenate the pair of matrices and pass them through a convolutional layer of one $1\times 1$ filter.
\subsection{Learning from Single-Location Labels}
We seek a model that can produce accurate predictions of the pass probability to every location on a $l \times h$ coarsed representation of a soccer field, given a $l \times h \times c$ representation of the game situation at the time a pass is made.
In training, we only count with the manually labeled location of the pass destination and a binary label of the outcome of the pass.
\begin{definition}[SoccerMap]
\label{theorem:SoccerMap}
Let $X = \{x | x \in \mathbb{R}^{l \times h \times c}\}$ be the set of possible game state representations at any given time, where $l,h \in \mathbb{N}_1$ are the height and length of a coarse representation of soccer field, and $c \in {\mathbb{N}_1}$ the number of data channels, a SoccerMap is a function $f(x;\theta), f: \mathbb{R}^{l\times h \times c} \to \mathbb{R}^{l \times h}_{[0,1]}$, where $f$ produces a pass probability map, and $\theta$ are the network parameters.
\end{definition}
\begin{definition}[Target-Location Loss]
\label{def:target_location_loss}
Given the sigmoid function $\sigma(x) = \frac{e^x}{e^x+1}$ and let $y_k \in \{0,1\}$ be the outcome of a pass at time $t(x_k)$, for a game state $x_k$, $d_k$ the destination location of the pass $k$, $f$ a SoccerMap with parameters $\theta$, and $logloss$ the log-loss function, we define the target-location loss as
$$L(f(x_k;\theta),y_k,d_k)= logloss(f(x_k;\theta)_{d_k}, y_k)$$
\end{definition}
We approach the training of the model as a weakly-supervised learning task, where the ground truth labels only correspond to a single location in the full mapping matrix that needs to be learned.
The target-location loss presented in Definition \ref{def:target_location_loss} essentially shrinks the output of a SoccerMap $f$ to a single prediction value by selecting the prediction value at the destination of the pass, and then computes the log-loss between this single prediction and the ground-truth outcome value.
\subsection{Spatial and Contextual Channels from Tracking Data}
\label{sec:channels}
Our architecture is designed to be built on top of two familiar sources of data for sports analytics: tracking data and event data. Tracking data consists of the location of the players and the ball at a high frequency-rate. Event-data corresponds to manually labeled observed events, such as passes, shots, and goals, including the location and time of each event.
We normalize the players' location and the ball to ensure the team in possession of the ball attacks from left to right, thus standardizing the representation of the game situation. On top of players' and ball locations, we derive low-level input channels, including spatial (location and velocity) and contextual information (ball and goal distance). Channels are represented by matrices of $(104,68)$ where each cell approximately represents $1m^2$ in a typical soccer field.
\begin{definition}[Tracking-Data Snapshot]
Let $Z_p(t),Z_d(t),Z_b(t), Z_g(t) \in \{z | z \in \mathbb{R}^{l \times h}\}$ be the locations of the attacking team players, the location of the defending team players, the location of the ball, and the location of the opponent goal, respectively, at time $t$, then a tracking-data snapshot at time $t$ is defined as the 4-tuple $Z(t)=(Z_p(t),Z_d(t),Z_b(t),Z_g(t))$.
\end{definition}
In order to create a game state representation $X(t) \in X$ as described in Definition \ref{theorem:SoccerMap} we produce 13 different channels on top of each tracking-data snapshot $Z$ where a pass has been observed, which constitute the game-state representation for the pass probability model.
Each channel corresponds to either a sparse or dense matrix of size $(h,l)$, according to the chosen dimensions for the coarse field representation. The game-state representation is composed of the following channels:
\begin{itemize}
\item Six sparse matrices with the location, and the two components of the velocity vector for the players in both the attacking team and the defending team, respectively.
\item Two dense matrices where every location contains the distance to the ball and goal location.
\item Two dense matrices containing the sine and cosine of the angle between every location to the goal and the ball location, and one dense matrix containing the angle in radians to the goal location.
\item Two sparse matrices containing the sine and cosine of the angle between the velocity vector of the ball carrier and each of the teammates in the attacking team.
\end{itemize}
\section{Experiments and Results}
\subsection{Dataset}
We use tracking-data, and event-data from 740 English Premier League matches from the 2013/2014 and 2014/2015 season, provided by \emph{STATS LLC}. Each match contains the $(x,y)$ location for every player, and the ball sampled at $10$Hz. The event-data provides the location, time, player, team and outcome for 433,295 passes. From this data, we extract the channels described in Section \ref{sec:channels} for a coarse $(104,68)$ representation of a soccer field to obtain a dataset of size $433295 \times 104 \times 68 \times 13
$. There are 344,957 successful passes and 88,338 missed passes.
\subsection{Benchmark Models}
\label{sec:benchmark}
We compare our results against a series of benchmark models of increasing levels of complexity. We define a baseline model \emph{Naive} that for every pass outputs the known average pass completion in the full dataset ($80\%$) following a similar definition in \cite{power2017not}.
We build two additional models \emph{Logistic Net} and \emph{Dense2 Net} based on a set of handcrafted features built on top of tracking-data. Logistic Net is a network with a single sigmoid unit, and {Dense2 Net is a neural network with two dense layers followed by ReLu activations and a sigmoid output unit.
\paragraph{Handcrafted features} We build a set of spatial features on top of tracking data, based on location and motion information of the players and the ball, similar to most of the features calculated in previous work on pass probability estimation \cite{power2017not,spearman2017physics,gudmundsson2017spatio}. We define the following set of handcrafted features for each pass: origin and destination location, pass distance, attacking and defending team influence at both origin and destination, angle to goal at origin and destination, and the maximum value of opponent influence in a straight line between origin and destination. The teams' spatial influence values are calculated following the model presented in \cite{fernandez2018wide}.
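
For reference, a minimal sketch of the two feature-based baselines is given below, written with the Keras API for brevity; it is not the original implementation, and the feature count and hidden-layer widths are illustrative assumptions.
\begin{verbatim}
import tensorflow as tf

N_FEATURES = 10  # illustrative; with a bias term the logistic model
                 # then has 11 parameters, as reported in the results table

logistic_net = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation="sigmoid", input_shape=(N_FEATURES,))
])

dense2_net = tf.keras.Sequential([  # hidden widths are illustrative
    tf.keras.layers.Dense(16, activation="relu", input_shape=(N_FEATURES,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
\end{verbatim}
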
\subsection{Experimental Framework}
In this section, we describe the experimental framework for testing the performance of the proposed architecture for the pass success probability estimation problem.
\paragraph{Training, validation, and test set} We randomly selected matches from both available seasons and split them into a training, validation, and test set with a $60:20:20$ distribution. We applied a stratified split, so the successful/missed pass class ratio remains the same across datasets. The validation set is used for model selection during a grid-search process.
The test set is left as hold-out data, and results are reported on performance for this dataset. For the benchmark models, datasets are built by extracting the features described in Section \ref{sec:benchmark}, and an identical split is performed. Features are standardized column-wise by subtracting the mean value and dividing by the standard deviation.
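
A sketch of such a stratified split and column-wise standardization is shown below; note that the actual split is performed at the match level, while this illustration, based on dummy data and scikit-learn, splits individual examples.
\begin{verbatim}
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X = np.random.rand(1000, 10)           # dummy handcrafted pass features
y = np.random.binomial(1, 0.8, 1000)   # 1 = completed pass, 0 = missed

X_tr, X_rest, y_tr, y_rest = train_test_split(
    X, y, train_size=0.6, stratify=y, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(
    X_rest, y_rest, train_size=0.5, stratify=y_rest, random_state=0)

scaler = StandardScaler().fit(X_tr)    # column-wise standardization
X_tr, X_val, X_te = map(scaler.transform, (X_tr, X_val, X_te))
\end{verbatim}
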
\paragraph{Optimization}
Both the SoccerMap network and the baseline models are trained using adaptive moment estimation (Adam). Model selection is achieved through grid-search on learning rates of $10^{-3}$, $10^{-4}$ and $10^{-5}$, and batch sizes of $1$, $16$ and $32$, while $\beta_1,\beta_2$ are set to $0.9$ and $0.999$, respectively. We use early stopping with a minimum delta rate of $0.001$. Optimization is computed on a single Tesla M60 GPU and using Tensorflow 1.5.0. During the optimization, the negative log-loss is minimized.
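
The following sketch illustrates this grid-search setup using the Keras API; the small convolutional network and the dummy data are placeholders standing in for SoccerMap and the real game-state tensors, so the snippet should be read as an illustration of the optimization procedure rather than as the actual training code.
\begin{verbatim}
import itertools
import numpy as np
import tensorflow as tf

# Dummy data and a small placeholder network standing in for SoccerMap.
X_tr = np.random.rand(64, 104, 68, 13).astype("float32")
y_tr = np.random.binomial(1, 0.8, 64).astype("float32")
X_val, y_val = X_tr[:16], y_tr[:16]

def build_model():
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(8, 5, padding="same", activation="relu",
                               input_shape=(104, 68, 13)),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

best = None
for lr, batch in itertools.product([1e-3, 1e-4, 1e-5], [1, 16, 32]):
    model = build_model()
    model.compile(optimizer=tf.keras.optimizers.Adam(lr, beta_1=0.9,
                                                     beta_2=0.999),
                  loss="binary_crossentropy")   # negative log-loss
    stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss",
                                            min_delta=0.001)
    hist = model.fit(X_tr, y_tr, batch_size=batch, epochs=5, verbose=0,
                     validation_data=(X_val, y_val), callbacks=[stop])
    score = min(hist.history["val_loss"])
    if best is None or score < best[0]:
        best = (score, lr, batch)
\end{verbatim}
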
\paragraph{Metrics}
Let $N$ be the number of examples in the dataset, $y$ the ground-truth labels for pass events and $\hat{y}$ the model predictions. We report the negative log-loss $$\mathcal{L}(\hat{y},y) = - \frac{1}{N} \sum_i \left[ y_i \cdot \log(\hat{y}_i) + (1-y_i) \cdot \log(1-\hat{y}_i)\right].$$ In order to validate the model calibration we use a variation of the expected calibration error (ECE) presented in \cite{guo2017calibration} which computes the expected difference between accuracy and confidence of the model on a finite set of samples split into $K$ bins of size $1/K$, according to the predicted confidence or probability for every sample.
Since our model is not designed for classification, we use the fraction of examples belonging to the positive class rather than accuracy for the $ECE$.
Let $B_k$ be a bin where $k \in [1,K]$ then $$ECE = \sum_{k=1}^{K} \frac{|B_k|}{N} \left| \bigg(\frac{1}{|B_k|} \sum_{i \in B_k}\mathbf{1}(y_i=1)\bigg) -
\bigg(\frac{1}{|B_k|} \sum_{i \in B_k} \hat{y}_i \bigg) \right|. $$
A perfectly calibrated model will have an ECE value of $0$. Additionally, we provide a calibration reliability plot \cite{guo2017calibration} showing the mean confidence for every bin $B_k$.
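
For concreteness, the two reported metrics can be computed as in the following illustrative sketch, which follows the definitions above.
\begin{verbatim}
import numpy as np

def log_loss(y, y_hat, eps=1e-12):
    y_hat = np.clip(y_hat, eps, 1 - eps)
    return -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

def ece(y, y_hat, k=10):
    bins = np.minimum((y_hat * k).astype(int), k - 1)  # bin index per sample
    err = 0.0
    for b in range(k):
        mask = bins == b
        if mask.any():
            # |fraction of positives - mean predicted probability|, weighted
            # by the relative bin size |B_k| / N.
            err += mask.mean() * abs(y[mask].mean() - y_hat[mask].mean())
    return err
\end{verbatim}
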
\subsection{Results}
Table \ref{table:results} presents the results for the benchmark models and SoccerMap on the pass probability dataset. We can observe that SoccerMap achieves a considerably lower error than the other models and produces a calibrated estimation of pass probability. Despite the considerably large number of parameters in SoccerMap, the inference time for a single sample is low enough to produce real-time estimations at frame rates below 200Hz. Figure \ref{fig:calibration_plot} presents a calibration reliability plot for each of the models. Both Logistic Net and SoccerMap produce well-calibrated estimations of pass probabilities; however, SoccerMap is considerably more precise, as shown by the difference in log-loss between the two.
\begin{table}[h!]
\caption{Results for the benchmark models and SoccerMap for the pass probability dataset.}
\label{table:results}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Model & Log-loss & ECE & Inference time & Number of parameters\\
\hline
Naive & $0.5451$ & $-$ & $-$ & $0$\\
Logistic Net & $0.384$ & $\bm{0.0210}$ & $0.00199$s & $11$\\
Dense2 Net & $0.349$ & $0.0640$ & $0.00231$s & $231$\\
SoccerMap & $\bm{0.217}$ & $\bm{0.0225}$ & $0.00457$s & $401,259$\\
\hline
\end{tabular}
\end{table}
\begin{figure}[h!]
\centering
\includegraphics[width=0.90\linewidth]{calibration_plot}
\caption{A calibration reliability plot, where the X-axis presents the mean predicted value for samples in each of 10 bins, and the Y-axis the fraction of samples in each bin containing positive examples.}
\label{fig:calibration_plot}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.90\linewidth]{pass_prob2}
\caption{Pass probability surface for a given game situation. Yellow and blue circles represent players' locations on the attacking and defending team, respectively, and the arrows represent the velocity vector for each player. The white circle represents the ball location.}
\label{fig:video_and_2d_surfaces}
\end{figure}
Figure \ref{fig:video_and_2d_surfaces} presents the predicted pass probability surface for a specific game situation during a professional soccer match. We observe that the model can capture both fine-grained information, such as the influence of defending and attacking players on nearby locations, and coarse information, such as the probability of reaching more extensive spatial areas depending on the distance to the ball and the proximity of players. We can also observe that the model considers the players' speed when predicting the probability of passing into not-yet-occupied space, a critical aspect of practical soccer analysis.
\subsubsection{Ablation Study}
We performed an ablation study to evaluate whether the different components of the proposed architecture improve its performance on the pass probability estimation problem, by testing the performance of different variations of the architecture.
Table \ref{table:ablation} presents the log-loss obtained for different configurations of the architecture with the following components: skip-connections (SC), non-linear up-sampling (UP), fusion layer (FL), non-linear prediction layer (NLP), and the number of convolutional filter layers per sampling layer (NF). We can observe that two configurations achieve a similar log-loss: the SoccerMap and SoccerMap-UP configurations. While removing the non-linear up-sampling slightly improves the performance, it produces visual artifacts that make the resulting surfaces less pleasing to inspect. Given that the surfaces are intended to be used by soccer coaches in practice, SoccerMap provides a better option for practical purposes.
\begin{table}[h!]
\caption{Ablation study for subsets of components of the SoccerMap architecture.}
\label{table:ablation}
\centering
\begin{tabular}{|p{3.2cm}|p{1.2cm}|p{1.2cm}|p{1.2cm}|p{1.2cm}|p{1.2cm}|p{1.6cm}|}
\hline
Architecture & SC & UP & FL & NLP & NF & Log-loss \\
\hline
\textbf{SoccerMap} & YES & YES & YES & YES & 2 & \textbf{0.217} \\
SoccerMap-NLP & YES & YES & YES & NO & 2 & 0.245 \\
SoccerMap-FL & YES & YES & NO & YES & 2 & 0.221 \\
SoccerMap-FL-NLP & YES & YES & NO & NO & 2 & 0.292 \\
\textbf{SoccerMap-UP} & YES & NO & YES & YES & 2 & \textbf{0.216} \\
SoccerMap-UP-FL & YES & NO & NO & YES & 2 & 0.220 \\
SoccerMap-UP-NLP & YES & NO & YES & NO & 2 & 0.225 \\
SoccerMap-UP-FL-NLP & YES & NO & NO & NO & 2 & 0.235 \\
Single Layer CNN-D4 & NO & YES & YES & YES & 2 & 0.256 \\
Single Layer CNN-D8 & NO & YES & YES & YES & 4 & 0.228 \\
\hline
\end{tabular}
\end{table}
\section{Practical Applications}
In this section, we present a series of novel practical applications that make use of the full probability surface for evaluating potential passing actions and assessing players' passing and positional skills.
\subsection{Adapting SoccerMap for the Estimation of Pass Selection Likelihood and Pass Value.}\label{sec:pass_select}
One of the main advantages of this architecture is that it can be easily adapted to other challenging problems associated with the estimation of pass-related surfaces, such as the estimation of pass selection and pass value.
\paragraph{Pass selection model} An interesting and unsolved problem in soccer is the estimation of the likelihood of a pass being made towards every other location on the field, rather than to specific player locations.
We achieve this by replacing the sigmoid activation layer of the original architecture with a softmax activation layer, which ensures that the probabilities on the output surface sum to $1$. For this case, instead of pass success, we use a sparse matrix as the target output and set the destination location of the pass in that matrix to $1$.
\paragraph{Pass value model} While a given pass might have a low probability of success, the expected value of that pass could be higher than that of a different passing option with a higher probability; thus, in some cases, the former could be preferable. We can directly adapt SoccerMap to estimate a pass value surface by modifying the target value and the loss function to be used. For this case, we use as an outcome the expected goals value \cite{eggels2016expected} of the last event in possession of any given pass, which can be positive or negative depending on whether the attacking or defending team had the last action in that possession.
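
The following sketch illustrates how the three output heads discussed above differ, using the Keras API; the single convolutional layer is a stand-in for the actual SoccerMap feature extractor, so only the output activations and targets should be read as faithful to the text.
\begin{verbatim}
import tensorflow as tf

inputs = tf.keras.Input(shape=(104, 68, 13))
features = tf.keras.layers.Conv2D(32, 5, padding="same",
                                  activation="relu")(inputs)  # stand-in backbone
logits = tf.keras.layers.Conv2D(1, 1, padding="same")(features)  # one value/cell

# (a) Pass success: per-cell sigmoid (the original output head).
p_success = tf.keras.layers.Activation("sigmoid")(logits)

# (b) Pass selection: softmax over all cells so the surface sums to one; the
#     target is a one-hot (104 x 68) matrix marking the observed destination.
flat = tf.keras.layers.Reshape((104 * 68,))(logits)
p_selection = tf.keras.layers.Reshape((104, 68, 1))(
    tf.keras.layers.Softmax()(flat))

# (c) Pass value: linear per-cell output regressed (e.g. squared error)
#     against the expected-goals outcome of the possession.
p_value = logits

model = tf.keras.Model(inputs, [p_success, p_selection, p_value])
\end{verbatim}
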
Figure \ref{fig:selection_value} presents the surfaces for the pass selection and pass value models derived from this architecture. With these surfaces, we can provide direct visual guidance to coaches to understand the value of their team's positioning, the potential value gains of off-ball actions, and a team's likely passes in any given situation.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{surface_comparison}
\caption{On the left column, the pass selection surface for a given game-state, presented on a logarithmic scale. On the right column, the pass value surface for the same game-state, with the color scale constrained to a [-0.2,0.2] range.}
\label{fig:selection_value}
\end{figure}
\subsection{Assessing Optimal Passing and Location}
While soccer analytics has long focused on using pass probabilities to evaluate a player's passing skills based on observed pass accuracy \cite{power2017not,spearman2017physics}, two main challenging problems remain unaddressed: the identification of optimal passing locations and of optimal off-ball positioning for improving pass probability.
\subsubsection{Visual Assessment of Optimal Passing}
Given a game-state where a player is in possession of the ball, we define the optimal and sub-optimal pass destinations as the locations near the teammates that provide a higher pass probability than the current location of the corresponding teammate. To obtain the optimal passing locations, we first calculate the pass probability surface of a given game-state and then evaluate the probability of every location in a $5 \times 5$ grid around the expected teammate location in the next second, based on the current velocity. The location within that grid with the highest probability difference with respect to the current teammate's location is set as the optimal passing location. Additionally, a set of sub-optimal passing locations is obtained by identifying locations with a positive probability difference that are at least 5 meters away from previously identified sub-optimal locations. In the left column of Figure \ref{fig:optimal_location} we present, as red circles, the set of best passing locations for each of the possession team's players for a given game state. This kind of visualization allows a coach to perform a direct visual inspection of passing options and to provide direct feedback to players about specific game situations, improving the coach's effective communication options.
\subsubsection{Visual Assessment of Optimal Positioning}
Following a similar idea, we can leverage pass probability surfaces to detect the best possible location a player could occupy to increase the probability of receiving a pass directly. To obtain the optimal location for each player, we recalculate the pass probability surface of the same game situation, translating the location of one player at a time to every other location in the $5 \times 5$ grid. We obtain the optimal locations analogously, as described before. In the right column of Figure \ref{fig:optimal_location} the green circles show the expected pass probability added had the player been placed in that location instead. Again, this tool can be handy for coaches to instruct players on how to improve their off-ball game.
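
A sketch of the grid search used for the optimal passing location is given below; the surface, the one-second projection horizon and the variable names are illustrative assumptions, and the analogous positional search simply repeats this probe over recomputed surfaces.
\begin{verbatim}
import numpy as np

def optimal_pass_location(surface, teammate_xy, teammate_v, horizon=1.0):
    """surface: (104, 68) pass probability surface for the current game-state."""
    expected = np.asarray(teammate_xy) + horizon * np.asarray(teammate_v)
    cx, cy = int(expected[0]), int(expected[1])
    x0 = int(np.clip(teammate_xy[0], 0, 103))
    y0 = int(np.clip(teammate_xy[1], 0, 67))
    current = surface[x0, y0]        # probability at the teammate's location
    best, best_gain = (cx, cy), -np.inf
    for dx in range(-2, 3):          # probe the 5 x 5 grid of cells
        for dy in range(-2, 3):
            i = int(np.clip(cx + dx, 0, 103))
            j = int(np.clip(cy + dy, 0, 67))
            gain = surface[i, j] - current
            if gain > best_gain:
                best, best_gain = (i, j), gain
    return best, best_gain           # best cell and its probability gain
\end{verbatim}
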
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{application1}
\caption{In the left column, we present a game-state where red circles represent the optimal passing location for each teammate, and the expected pass probability. In the right column, the green circles represent the optimal positioning of players increasing the expected pass probability if the players were placed in those locations at that time.}
\label{fig:optimal_location}
\end{figure}
\subsubsection{Assessing Passing Skill}
We propose a new metric, \textit{pass completion added (PPA)}, to evaluate the quality of a player's selection of the passing destination location. For each observed pass, we calculate the difference between the probability of the optimal pass and the probability of the selected pass. This metric is formally defined in Equation \ref{eq:ppa2}, where $S$ and $M$ are the sets of successful and missed passes, respectively, $\hat{y}$ is the optimal pass probability, and $y$ is the selected pass probability.
Intuitively, a player's reward is discounted if the selected pass was not optimal. In the case of an unsuccessful pass, the player is only penalized in proportion to the probability difference with the optimal location, rewarding the player's pass selection.
\begin{equation}
\label{eq:ppa2}
PPA = \sum_{s \in S}(1-\hat{y}^s)\left(1-(\hat{y}^s-y^s)\right) - \sum_{m \in M} \hat{y}^m(\hat{y}^m-y^m)
\end{equation}
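
A direct computation of this metric from per-pass optimal and selected probabilities could look as follows (illustrative code, with hypothetical input lists).
\begin{verbatim}
def ppa(successful, missed):
    """successful / missed: lists of (y_opt, y_sel) pairs for one player's
    completed and missed passes, where y_opt is the optimal pass probability
    and y_sel the probability of the selected pass."""
    reward = sum((1 - y_opt) * (1 - (y_opt - y_sel))
                 for y_opt, y_sel in successful)
    penalty = sum(y_opt * (y_opt - y_sel) for y_opt, y_sel in missed)
    return reward - penalty
\end{verbatim}
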
In Table \ref{table:ppa_table} we present the best ten players in pass completion added for the 2014-2015 season of the English Premier League, where the cumulative $PPA$ of a player is normalized per 90 minutes played. The table includes the estimated player price in 2014, provided by \url{www.transfermarkt.com}. We can observe that the list contains a set of the best players in this league in recent times, including creative midfielders such as Oezil, Silva, Hazard and Fabregas, deep creative wingers such as Navas and Valencia, and the veteran Rosicky.
\begin{table}[h!]
\caption{Ranking of the best ten players in pass completion added for the season 2014-2015 of the English Premier League.}
\label{table:ppa_table}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Team & Player Name & PPA/90m & Age in 2014 & Player price (2014) \\ \hline
Arsenal & Mesut Oezil & 0.0578 & 24 & \euro 45M\\
Manchester City & David Silva & 0.0549 & 28 & \euro 40M\\
Chelsea & Eden Hazard & 0.0529 & 23 & \euro 48M\\
Manchester United & Antonio Valencia & 0.0502 & 29 & \euro 13M\\
Arsenal & Tomas Rosicky & 0.0500 & 33 & \euro 2M\\
Chelsea & Cesc Fabregas & 0.0484 & 27 & \euro 40M\\
Arsenal & Santi Cazorla & 0.0470 & 29 & \euro 30M\\
Manchester City & Jesus Navas & 0.0469 & 28 & \euro 20M\\
Manchester City & Yaya Toure & 0.0466 & 30 & \euro 30M\\
Manchester City & Samir Nasri & 0.0447 & 26 & \euro 22M \\
\hline
\end{tabular}
\end{table}
\subsection{Team-Based Passing Selection Tendencies}
The pass selection adaptation of SoccerMap, presented in Section \ref{sec:pass_select}, provides a fine-grained evaluation of the passing likelihood in different situations. However, passing selection is likely to vary according to a team's playing style and the specific game situation. While a league-wide model might be useful for grasping the expected behavior of a typical team in the league, a soccer coach will be more interested in understanding the fine-grained details that separate one team from the other. Once we train a SoccerMap network to obtain this league-wide model, we can fine-tune the network with passes from each team to capture team-specific behavior. In this application example, we trained the pass selection model with passes from all the teams in the English Premier League 2014-2015 season. Afterward, we retrained the initial model with passes from two teams with different playing styles: Liverpool and Burnley.
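
A minimal sketch of this fine-tuning step is shown below, using the Keras API; the small untrained network and the random team passes are placeholders for the league-wide SoccerMap model and a team's actual pass data.
\begin{verbatim}
import numpy as np
import tensorflow as tf

# Untrained placeholder standing in for the league-wide pass-selection model.
league_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(4, 5, padding="same", activation="relu",
                           input_shape=(104, 68, 13)),
    tf.keras.layers.Conv2D(1, 1, padding="same"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Softmax(),
])

# Dummy stand-ins for one team's game states and pass-destination targets.
team_X = np.random.rand(32, 104, 68, 13).astype("float32")
team_y = tf.keras.utils.to_categorical(
    np.random.randint(0, 104 * 68, 32), 104 * 68)

# Fine-tune a copy of the league model on the team's passes only.
team_model = tf.keras.models.clone_model(league_model)
team_model.set_weights(league_model.get_weights())
team_model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),  # small step size
                   loss="categorical_crossentropy")
team_model.fit(team_X, team_y, batch_size=16, epochs=2, verbose=0)
\end{verbatim}
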
In Figure \ref{fig:team_pass_selection} we compare the pass selection tendencies of Liverpool (left column) and Burnley (right column). In the top left corner of each column, we show a 2D plot with the difference between the league's mean passing selection heatmap and each team's mean passing selection heatmap when the ball is within the green circle area. We can observe that Liverpool tends to play short passes, while Burnley has a higher tendency to play long balls to the forwards or to open up on the sides. However, this kind of information would not escape the soccer coach's intuition, so we require a more fine-grained analysis of each team's tendencies in specific situations. In the two plots of Figure \ref{fig:team_pass_selection} we show, over each player's location, the percentage increase in passing likelihood compared with the league's mean value. In this situation, we can observe that when a left central defender has the ball during a buildup, Liverpool will tend to play short passes to the closest open player, while Burnley has a considerably higher tendency to play long balls to the forwards, especially if the forwards are starting a run behind the defenders' backs, as in this case. Through a straightforward fine-tuning of the SoccerMap-based model, we can provide detailed information to the coach for analyzing specific game situations.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{likelihood_comparison}
\caption{A game-state representation of a real game situation in soccer. Above each player (circles) we present the added percentage difference of pass likelihood in that given situation in comparison with the league for two teams: Liverpool (left column) and Burnley (right column). The heatmaps in both top left corners of each column represent the mean difference in pass selection likelihood with the league, when the ball is located within the green circle.}
\label{fig:team_pass_selection}
\end{figure}
\section{Discussion and Future Work}
The estimation of full probability surfaces provides a new dimension for soccer analytics. The presented architecture allows generating visual tools to help
coaches perform fine-tuned analysis of opponents and own-team performance, derived from low-level spatiotemporal soccer data. We show how this network can be easily adapted to many other challenging related problems in soccer, such as the estimation of pass selection likelihood and pass value, and that it can perform remarkably well at estimating the probability of observed passes. By merging features extracted at different sampling levels, the network can extract both fine and coarse details, thereby managing to make sense of the complex spatial dynamics of soccer. We have also presented several novel practical applications in soccer analytics, such as evaluating optimal passing,
evaluating optimal positioning, and identifying context-specific and team-level passing tendencies. This framework of analysis derived from spatiotemporal data could also be applied directly in many other team sports, where the visual representation of complex information can bring the coach and the data analyst closer.
\section{Introduction}\label{sect:intro}
In their final game of the group phase at the 2006 Olympic ice-hockey
tournament, a surprisingly lethargic Swedish team lost $3-0$ to Slovakia. The
result meant they finished third in their group, when a win would have
guaranteed at worst a second placed finish. As the top four teams in each of
the two groups qualified for the quarter-finals, Sweden remained in the
tournament after this abject performance, but with a lower seeding for the
playoffs. However, everything turned out well in the end as they crushed both
their quarter- and semi-final opponents ($6-2$ against Switzerland and $7-3$
against the Czech Republic respectively), before lifting the gold after a
narrow $3-2$ win over Finland in the final.
The Slovakia match has gained notoriety because of persistent rumours
that Sweden threw the game in order to avoid ending up in the same half of the
playoff draw as Canada and Russia, the two traditional giants of ice-hockey.
Indeed, in an interview in 2011, Peter Forsberg, one of Sweden's top stars,
seemed to admit as much{\footnote{See
\texttt{www.expressen.se/sport/hockey/tre-kronor/forsberg-slovakien-var-en-laggmatch}}},
though controversy remains about the proper interpretation of his words.
Whatever the truth in this regard, it certainly seems as though Sweden were
better off having lost the game.
\par Instances like this in high-profile sports
tournaments, where a competitor is accused of deliberately losing a game, are
rare and tend to attract a lot of attention when they occur.
This could be considered surprising given that deliberate underperformance
in sport is nothing unusual. For example, quite often a team will decide to
rest their best players or give less than $100 \%$ effort when faced with an
ostensibly weaker opponent, having calculated that the risk in so doing is
outweighed by future potential benefits. Note that this could occur even in a
single-elimination knockout tournament, with a team deciding to trade an
elevated risk of an early exit for higher probability of success later on. Of
course, in such a tournament it can never be in a team's interest to actually
\emph{lose}. However, many tournaments, including Olympic ice-hockey, are
based on the template of two phases, the first being a round-robin event
(everyone meets everyone) which serves to rank the teams, and thereby provide
a seeding for the second, knockout phase{\footnote{In 2006, the Olympic ice
hockey tournament employed a minor modification of this template. There
were $12$
teams. In the first phase, they were divided into two groups of six, each
group playing round-robin. The top four teams in each group qualified for
the knockout phase. The latter employed standard seeding (c.f. Figure
\ref{fig:knockout}), but with the extra condition that teams from the same
group could not meet in the quarter-finals. This kind of modification of the
basic two-phase template, where the teams are first divided into smaller
groups, is very common since it greatly reduces the total number of matches
that need to be played.}}. Teams are incentivized to
perform well in the first phase in two ways: $(1)$ often, only higher
ranking teams qualify for the second phase, and $(2)$ standard seeding (c.f.
Figure \ref{fig:knockout}) aims to place high ranking teams far apart in the
game tree, with higher ranking teams placed closer to lower ranking ones, meaning
that a high rank generally gives you an easier starting position.
\par
The example of Sweden in 2006 illustrates the following phenomenon of
two-phase tournaments. Since a weaker
team always has a non-zero probability of beating a stronger one in a single
match, a motivation to throw a game in the first phase can arise when it seems
like the ranking of one's potential knockout-phase opponents does not reflect
their actual relative strengths. Sweden's loss to Slovakia meant they faced
Switzerland instead of Canada in the quarter-final and most observers would
probably have agreed that this was an easier matchup, despite Switzerland
having finished second and Canada third in their group (Switzerland also beat
Canada $2-0$ in their group match).
The above phenomenon is easy to understand and raises the fascinating question
of why instances of game-throwing seem to be relatively rare. We don't
explore that (at least partly psycho-social) question further in this paper.
However, \emph{even if} game-throwing is rare, it is still certainly a
weakness of this tournament format that situations can arise where a team is
given the choice between either pretending to be worse than they are, or
playing \emph{honestly} at the cost of possibly decreasing their chances of
winning the tournament.
In a 2000 paper \cite{Sch}, Allan Schwenk studied the question of how to best
seed a knockout tournament from a mathematical point of view. One, perhaps
counter-intuitive, observation made in that paper is that standard seeding
does not necessarily benefit a higher-ranking players, even when the ranking
of its potential opponents \emph{accurately} reflects their relative strengths.
Consider a matchplay tournament with $n$ competitors, or ``players'' as
we shall
henceforth call them, even though the competitors may be teams. In Schwenk's
mathematical model, the players are numbered $1$ through $n$ and
there are fixed probabilities $p_{ij} \in [0,\,1]$ such
that, whenever
players $i$ and $j$ meet, the probability that $i$ wins is $p_{ij}$.
Draws are not allowed, thus $p_{ij} + p_{ji} = 1$. Suppose
we impose the conditions
\par (i) $p_{ij} \geq 1/2$ whenever $i < j$,
\par (ii) $p_{ik} \geq p_{jk}$ whenever $i<j$ and $k\not\in \{i, j\}$.
\\
Thus, for any $i < j$, $i$ wins against $j$ with probability at least $1/2$,
and for any other player $k$, $i$ has at least as high a probability of
beating $k$ as $j$ does.
It then seems incontestable to
assert that player $i$ is at least as good as player $j$ whenever
$i < j$. Indeed, if we imposed strict inequalities in (i) and (ii) we would
have an unambiguous ranking of the players: $i$ is better than $j$ if and
only if $i < j$. This is a very natural model to work with.
It is summarized by a
so-called \emph{doubly monotonic} $n \times n$
matrix $M = (p_{ij})$, whose entries equal
$\frac{1}{2}$ along the main diagonal, are non-decreasing from left to
right along each row, non-increasing from top to bottom along each
column and satisfy $p_{ij} + p_{ji} = 1$ for all $i,\,j$. We shall refer to
the model as the \emph{doubly monotonic model (DMM) of tournaments}. It is
the model employed throughout the rest of the paper.
\par In \cite{Sch}, Schwenk gave an example of an $8 \times 8$ doubly
monotonic matrix such that, if the standard seeding method (illustrated in
Figure \ref{fig:knockout}) were employed
for a single-elimination tournament, then player $2$ would have a higher
probability of winning than player $1$.
\begin{figure*}[ht!]
\includegraphics[width=.8\textwidth]{knockout2.eps}
\caption{The standard seeding for a single-elimination knockout tournament
with $2^3 = 8$ players. In general, if there are $2^n$ players and the
higher ranked player wins every match then, in the $i$th round,
$1 \leq i \leq n$, the pairings will be
$\{j+1,\,2^{n+1-i}-j\}$, $0 \leq j < 2^{n-i}$.}
\label{fig:knockout}
\end{figure*}
As an evident corollary, assuming the
same mathematical model one can concoct situations in two-phase tournaments
of the kind considered above in which it is a player's interest to lose a
game in the first phase even when, say, in every other match played to that
point, the better team has won.
Many tournaments consist of only a single phase, either
round-robin{\footnote{or, more commonly, a \emph{league} format, where each
pair meet twice.}} or single-elimination. As opposed to the
aforementioned two-phase format, here it is not hard to see that it can
never be in a team's interest to lose a game. Indeed, this is
clear for the single-elimination format, as one loss means you're out of the
tournament. In the round-robin format, losing one game, all else being
equal, only decreases your own total score while increasing the
score of some other team. As Schwenk showed,
the single-elimination option, with standard seeding, may still not be fair,
in the sense of always giving a higher winning probability to a better
player. The obvious way around this is to randomize the draw. Schwenk
proposed a method called
\emph{Cohort Randomized Seeding}{\footnote{It is easy to see that the
standard method cannot result in a player from a lower cohort, as that
term is defined by Schwenk, having a higher probability of winning the
tournament than one in a higher cohort.}}, which seeks to respect the
economic incentives
behind the standard method{\footnote{The standard format
ensures the romance of
``David vs. Goliath'' matchups in the early rounds, plus the likelihood of
the later rounds featuring contests between the top stars, when public
interest is at its highest. Schwenk used the
term \emph{delayed confrontation} for the desire to keep the top ranked
players apart in the early rounds.}} while introducing just enough
randomization to ensure that this basic criterion for fairness is satisfied.
According to Schwenk himself, in email correspondence with us, no major
sports competition has yet adopted his proposal{\footnote{On the other hand, uniformly random draws are commonly employed. An example is the English FA Cup, from the round-of-$64$ onwards.}}.
Even tournaments where it is never beneficial to lose a match often include another source
of unfairness, in that players may face quite different schedules, for
reasons of geography, tradition and so on. For example, qualifying for the
soccer World Cup is organized by continent, an arrangement that effectively
punishes European teams. The host nation automatically qualifies
for the finals
and is given a top seeding in the group phase, thus giving it an unfair
advantage over everyone else. In the spirit of fair competition, one would
ideally wish for a tournament not to give certain players any special
treatment from the outset, and only break this symmetry after seeing how the
teams perform within the confines of the tournament. Note that a
single-elimination tournament with standard seeding is an example of such
``asymmetric scheduling'', unless the previous performances upon which the
seeding is founded are considered part of the
tournament.
\\ \par The above considerations lead us to the question on which this paper
is based. Suppose the rules of a tournament ensure both
\par - \emph{honesty}, meaning it is impossible for a situation to arise
where it is in a player's interest to lose a game, and
\par - \emph{symmetry}, meaning that the rules treat all the players equally.
In particular, the rules should not depend on the identity of the players, or the order in which they entered the tournament. \\
\par Must it follow that the tournament is \emph{fair}, in the sense that a better
player always has at least as high a probability of winning the tournament as
a worse one? Having defined our terms precisely we will show below that the
answer, perhaps surprisingly, is no. Already for three players, we will
provide simple examples of tournaments which are symmetric and honest, but
not fair. The question of ``how unfair'' a symmetric and honest tournament
can be seems to be non-trivial for any $n\geq 3$ number of players.
For $n=3$ we solve this problem exactly, and for $n\geq 4$ we formulate a
general conjecture. The rest of the paper is organized as follows:
\begin{itemize}
\item
Section \ref{sect:defi} provides rigorous definitions. We will define what
we mean by a
\emph{(matchplay) tournament} and what it
means for a tournament to be either \emph{symmetric}, \emph{honest} or
\emph{fair}. The DMM is assumed throughout.
\item
Sections
\ref{sect:threeplayer} and \ref{sect:nplayer} are the heart of the
paper. In the former, we consider
$3$-player tournaments and describe what appear to be the simplest possible
examples of tournaments which are symmetric and honest, but not fair.
Theorem \ref{thm:threeplayer} gives a precise characterization of
those
probability vectors $(x_1, \, x_2, \, x_3) \in \mathbb{R}^3$ which can arise
as the vectors of win-probabilities for the players in a symmetric and
honest tournament{\footnote{These results may remind some readers of the
notion of a \emph{truel} and of the known fact that, in a truel, being
a better shot does not guarantee a higher probability of winning (that
is, of surviving). See \texttt{https://en.wikipedia.org/wiki/Truel}.
Despite the analogy, we're not aware of any deeper connection between
our results and those for truels, nor between their respective
generalizations to more than three ``players''.}}; here
fairness would mean $x_1 \geq x_2 \geq x_3$.
\item
In Section \ref{sect:nplayer} we extend these ideas to a general
method for constructing symmetric, honest and unfair $n$-player
tournaments. We introduce a family of $n$-vertex digraphs
and an associated convex polytope $\mathcal{A}^{*}_{n}$
of probability vectors
in $\mathbb{R}^n$ and show that every
interior point of this polytope arises as the vector of
win-probabilities of some symmetric
and honest $n$-player tournament. The polytope $\mathcal{A}^{*}_{n}$
includes all probability vectors satisfying
$x_1 \geq x_2 \geq \dots \geq x_n$, but is shown to have a total of
$\frac{3^{n-1} + 1}{2}$ corners, thus yielding a plethora
of examples of symmetric and honest, but unfair tournaments. Indeed, we
conjecture (Conjecture \ref{conj:nplayer}) that the
vector of win-probabilities of \emph{any} symmetric and honest
$n$-player tournament lies in $\mathcal{A}^{*}_{n}$.
\item
Section \ref{sect:frugal} considers the notion of a \emph{frugal}
tournament, namely one which always begins by picking one player uniformly
at random to take no further part in it (though he may still win).
The tournaments constructed in Sections
\ref{sect:threeplayer} and \ref{sect:nplayer} have this
property, and the main result of Section \ref{sect:frugal} is, in
essence, that frugal tournaments provide no counterexamples to
Conjecture \ref{conj:nplayer}.
\item
Section \ref{sect:maps} introduces the notion of a \emph{tournament map}, which is a natural way to view tournaments as continuous functions. We describe its relation to the regular tournament concept. Using this, we show (Corollary \ref{cor:andersnormalform}) that any symmetric and honest tournament can be approximated arbitrarily well by one of the form described in the section. We further provide three applications. \par
- The first is to \emph{strictly} honest tournaments, which means, informally, that a player should always be strictly better off in winning a match than in losing it. We show that any symmetric and honest tournament can be approximated arbitrarily well by a strictly honest one.
\par
- The second application is to \emph{tournaments with rounds}. For simplicity, we assume in the rest of the article that matches in a tournament are played one-at-a-time, something which is often not true in reality. Extending the notion of
honesty to tournaments with rounds provides some technical challenges, which are discussed here.
\par
- The final application is to
prove that the possible vectors of win-probablities for symmetric and
honest $n$-player tournaments form a finite union of convex polytopes
in $\mathbb{R}^n$, minus some boundary points.
This provides, in particular,
some further evidence in support of Conjecture \ref{conj:nplayer}.
\item
In Section \ref{sect:futile}, we consider the
concept of a \emph{futile}
tournament, one in which a player's probability of
finally winning is never affected by whether they win or lose a given
match. We prove that, in a symmetric and futile $n$-player
tournament, everyone has probability $1/n$ of winning. This is exactly as
one would expect, but it doesn't seem to be a completely trivial task to
prove it.
\item
Finally,
Section \ref{sect:final} casts a critical eye on the various
concepts introduced in the paper, and mentions some further possibilities
for future work.
\end{itemize}
\section{Formal Definitions}\label{sect:defi}
The word \emph{tournament} has many different meanings. In graph theory, it
refers to a directed graph where, for every pair of vertices $i$ and $j$,
there is an arc going either from $i$ to $j$ or from $j$ to $i$. In more
common language, a \emph{matchplay} refers to a competition
between a (usually
relatively large) number of competitors/players/teams in which a winner is
determined depending on the outcome of a number of individual matches, each
match involving exactly two competitors. We concern ourselves
exclusively with matchplay tournaments{\footnote{Athletics, golf, cycling,
skiing etc. are examples of sports in which competitions traditionally
take a different form, basically
``all-against-all''.}}.
Even with this restriction, the word ``tournament'' itself can be used to
refer to: a \emph{reoccurring
competition} with a fixed name and fixed format, such as the Wimbledon
Lawn Tennis Championships, a \emph{specific instance} of a (potentially
reoccurring)
competition, such as the 2014 Fifa World Cup, or a
specific \emph{set of rules} by which such a competition is structured,
such as ``single-elimination knock-out with randomized seeding'',
``single round-robin with randomized scheduling'',
etc. We will here use tournament in this last sense.
More precisely, we consider an \emph{$n$-player tournament} as a set of
rules for how to arrange matches between $n$ \emph{players}, represented
by numbers from $1$ to $n$. The decision on which players should meet each
other in the next match may depend on the results from earlier matches as
well as additional randomness (coin flips etc.). Eventually, the
tournament should announce one of the players as the winner. We assume that:
\begin{enumerate}
\item A match is played between an (unordered) pair of players $\{i, j\}$. The
outcome of said match can either be \emph{$i$ won}, or \emph{$j$ won}. In
particular, no draws are allowed, and no more information is given back to
the tournament regarding e.g. how close the match was, number of goals scored
etc.
\item Matches are played sequentially one-at-a-time. In practice, many
tournaments consist of ``rounds'' of simultaneous matches. We'll make some
further remarks on this restriction in Subsection \ref{subsect:rounds}.
\item There is a bound on the number of matches that can be played in a
specific tournament. So, for example, for three players we would not allow
``iteration of round-robin until someone beats the other two''. Instead,
we'd require the tournament to break a potential three-way tie at some
point, e.g. by randomly selecting a winner.
\end{enumerate}
Formally, we may think of a tournament as a randomized algorithm which is
given access to a function \texttt{PlayMatch} that takes as input an unordered
pair of numbers between $1$ and $n$ and returns one of the numbers.
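
The following Python sketch (ours, for illustration only) makes this abstraction concrete for a simple single-elimination tournament with a randomized bracket; players are numbered $0$ to $n-1$ here, and the \texttt{PlayMatch} oracle is modelled by the function \texttt{play\_match} together with an explicit matrix of match probabilities.
\begin{verbatim}
import random

def play_match(pair, P):
    """Oracle for match outcomes: i beats j with probability P[i][j]."""
    i, j = tuple(pair)
    return i if random.random() < P[i][j] else j

def single_elimination(n, P):
    """A symmetric and honest n-player tournament (players 0, ..., n-1)."""
    players = list(range(n))
    random.shuffle(players)          # random bracket gives symmetry
    while len(players) > 1:
        nxt = []
        if len(players) % 2 == 1:    # odd number left: one player gets a bye
            nxt.append(players.pop())
        for a, b in zip(players[::2], players[1::2]):
            nxt.append(play_match({a, b}, P))
        players = nxt
    return players[0]
\end{verbatim}
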
In order to analyze our tournaments, we will need a way to model the outcomes
of individual matches. As mentioned in the introduction, we will here
employ the same simple model as Schwenk \cite{Sch}. For each pair of players
$i$ and $j$, we assume that there is some unchanging probability $p_{ij}$ that
$i$ wins in a match between them. Thus, $p_{ij} + p_{ji} = 1$ by (1) above.
We set $p_{ii} = \frac{1}{2}$ and denote the set of all
possible $n \times n$ matrices by
$$\mathcal{M}_n = \{P\in [0, 1]^{n\times n} : P+P^T=\mathbf{1} \},$$
where $\mathbf{1}$ denotes the all ones matrix. We say that
$P=(p_{ij})_{i,j\in[n]}\in\mathcal{M}_n$ is \emph{doubly monotonic} if $p_{ij}$ is
decreasing in $i$ and increasing in $j$. We denote
$$\mathcal{D}_n = \{P \in \mathcal{M}_n : P\text{ is doubly monotonic}\}.$$
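
As a small illustration, the following sketch checks membership in $\mathcal{M}_n$ and $\mathcal{D}_n$ for a given matrix; the tolerance parameter and the example matrix (the specialization used in the next section) are our own choices.
\begin{verbatim}
import numpy as np

def in_M(P, tol=1e-9):
    P = np.asarray(P, dtype=float)
    return (np.all(P >= -tol) and np.all(P <= 1 + tol)
            and np.allclose(P + P.T, np.ones_like(P)))

def in_D(P, tol=1e-9):
    P = np.asarray(P, dtype=float)
    rows_ok = np.all(np.diff(P, axis=1) >= -tol)  # non-decreasing in j
    cols_ok = np.all(np.diff(P, axis=0) <= tol)   # non-increasing in i
    return in_M(P, tol) and rows_ok and cols_ok

# The 3-player specialization used in the next section:
# p12 = p23 = 1/2 and p13 = 1.
P = [[0.5, 0.5, 1.0],
     [0.5, 0.5, 0.5],
     [0.0, 0.5, 0.5]]
assert in_M(P) and in_D(P)
\end{verbatim}
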
We will refer to a pair $\bm{\mathcal{T}}=(\bm{T}, P)$ consisting
of an $n$-player
tournament $\bm{T}$ and a matrix $P\in\mathcal{M}_n$ as a \emph{specialization of
$\bm{T}$}. Note that any such specialization defines a random process where
alternatingly two players are chosen according to $\bm{T}$ to play a match,
and the winner of the match is chosen according to $P$. For a given
specialization $\bm{\mathcal{T}}$ of a tournament, we let $\pi_k$ denote the
probability for player $k$ to win the tournament, and define the \emph{win
vector} $\bm{wv}(\bm{\mathcal{T}})=(\pi_1, \dots, \pi_n)$. For a fixed
tournament $\bm{T}$ it will sometimes be useful to consider these
probabilities as functions of the matrix $P$, and we will hence write
$\pi_k(P)$ and
$\bm{wv}(P)$ to denote the corresponding probabilities in the specialization
$(\bm{T}, P)$.
We are now ready to formally define the notions
of symmetry, honesty and fairness.
\\
\\
{\sc Symmetry:} Let $\bm{T}$ be an $n$-player tournament. For any
permutation $\sigma\in\mathcal{S}_n$ and any $P\in \mathcal{M}_n$, we
define $Q=(q_{ij})\in\mathcal{M}_n$ by $q_{\sigma(i)\sigma(j)} = p_{ij}$ for
all $i, j \in [n]$. That is, $Q$ is the matrix one obtains from $P$
after renaming each player $i\mapsto \sigma(i)$. We say that $\bm{T}$
is \emph{symmetric} if, for any
$P\in \mathcal{M}_n$, $\sigma\in\mathcal{S}_n$ and any $i\in[n]$, we have
$\pi_i(P)=\pi_{\sigma(i)}(Q).$
This definition is meant to capture the intuition that the rules
``are the same for everyone''. Note that any tournament can be turned
into a symmetric one by first randomizing the order of the players.
\\
\\
{\sc Honesty:} Suppose that a tournament $\bm{T}$ is in a state where
$r\geq 0$ matches have already been played, and it just announced a
pair of players $\{i, j\}$ to meet in match $r+1$. Let $\pi_i^+(P)$
denote the probability that $i$ wins the tournament conditioned on the
current state and on $i$ being the winner of match
$r+1$, assuming the outcome of any subsequent match is decided according
to $P\in\mathcal{M}_n$. Similarly, let $\pi_i^-(P)$ denote the
probability that $i$ wins the tournament given that $i$ is the loser
of match $r+1$. We say that $\bm{T}$ is \emph{honest} if, for any
possible such state of $\bm{T}$ and any $P\in\mathcal{M}_n$, we have
$\pi_i^+(P)\geq \pi_i^-(P)$.
\par
The tournament is said to be \emph{strictly honest} if in addition,
for all $P\in\mathcal{M}^{o}_{n}$, the above inequality is strict, and
all pairs of players have a positive probability to meet at least once
during the tournament. Here $\mathcal{M}^{o}_{n}$ denotes the set of
matrices $(p_{ij}) \in \mathcal{M}_n$ such that
$p_{ij} \not\in \{0,\,1\}$. It makes sense to exclude these
boundary elements
since, if $p_{ij} = 0$ for every $j \neq i$, then player
$i$ cannot affect his destiny at all. For instance, it seems natural
to consider a single-elimination tournament as strictly honest, but
in order for winning to be strictly better than losing, each player
must retain a positive probability of winning the tournament whenever
he wins a match.
\par To summarize, in an honest tournament a player can never be put in
a strictly better-off position by throwing a game. In a strictly
honest tournament, a player who throws a game is always put in a
strictly worse-off position.
\begin{remark}\label{rem:state}
We note that the ``state of a
tournament'' may contain more information than what the players can
deduce from the matches played so far. For instance, the two-player
tournament that plays one match and chooses the winner with
probability
$0.9$ and the loser with probability $0.1$ is honest if the decision
of whether to choose the winner or loser is made \emph{after} the
match. However, if the decision is made \emph{beforehand}, then with
probability $0.1$ we would have $\pi_1^+=\pi_2^+=0$
and $\pi_1^-=\pi_2^-=1$. Hence, in this case the tournament is not
honest.
\end{remark} $\;$ \\
{\sc Fairness:} Let $\bm{T}$ be an $n$-player tournament. We say
that $\bm{T}$ is \emph{fair} if
$\pi_1(P)\geq \pi_2(P) \geq \dots \geq \pi_n(P)$ for all
$P\in\mathcal{D}_n$.
\\
\par The main purpose of the next two sections is to show that there exist
symmetric and honest tournaments which are nevertheless unfair.
\section{Three-player tournaments}\label{sect:threeplayer}
It is intuitively clear, though non-trivial to show, that every $2$-player symmetric
and honest tournament is fair - see Proposition \ref{prop:twoplayer} below.
Already for three players, this breaks down however. Let $N \geq 2$ and
consider the following two tournaments:
\\
\\
{\sc Tournament $\bm{T}_1 = \bm{T}_{1, N}$}: The rules are as follows:
\par \emph{Step 1:} Choose one of the three players uniformly at random.
Let $i$ denote the chosen player and $j, \, k$ denote the remaining
players.
\par
\emph{Step 2:} Let $j$ and $k$ play $N$ matches.
\par - If one of them, let's say $j$, wins at least $\frac{3N}{4}$
matches, then the winner of the tournament is chosen by tossing a fair
coin between $j$ and $i$.
\par - Otherwise, the winner of the tournament is chosen by tossing a
fair coin between $j$ and $k$.
\\
\\
{\sc Tournament $\bm{T}_2 = \bm{T}_{2,\,N}$}: The rules are as follows:
\par \emph{Step 1:} Choose one of the three players uniformly at random.
Let $i$ denote the chosen player and $j, \, k$ denote the remaining
players.
\par
\emph{Step 2:} Let $j$ and $k$ play $N$ matches.
\par - If one of them wins at least $\frac{3N}{4}$ matches, then he is
declared the winner of the tournament.
\par - Otherwise, $i$ is declared the winner of the tournament.
\\
\par It is easy to see that both $\bm{T}_1$ and $\bm{T}_2$ are symmetric
and honest (though not strictly honest), for any $N$. Now let
$p_{12} = p_{23} = \frac{1}{2}$ and $p_{13} = 1$, so that the matrix
$P = (p_{ij})$ is doubly monotonic, and let's analyze the
corresponding specializations $\bm{\mathcal{T}}_1$, $\bm{\mathcal{T}}_2$
of
each tournament as $N \rightarrow \infty$.
\par \emph{Case 1:} Player $1$ is chosen in Step 1. In Step 2,
by the law of large numbers, neither
$2$ nor $3$ will win at least $\frac{3N}{4}$ matches, asymptotically
almost surely (a.a.s.). Hence, each of $2$ and $3$ wins
$\bm{\mathcal{T}}_1$ with probability tending to $\frac{1}{2}$, while
$1$ a.a.s. wins $\bm{\mathcal{T}}_2$.
\par \emph{Case 2:} Player $2$ is chosen in Step 1. In Step 2, player
$1$ will win all $N$ matches. Hence, each of $1$ and $2$ wins
$\bm{\mathcal{T}}_1$ with probability $\frac{1}{2}$, while $1$ wins
$\bm{\mathcal{T}}_2$.
\par
\emph{Case 3:} Player $3$ is chosen in Step 1. In Step 2, neither $1$
nor $2$ will win at least $\frac{3N}{4}$ matches, a.a.s.. Hence, each of
$1$ and $2$ wins $\bm{\mathcal{T}}_1$ with probability tending to
$\frac{1}{2}$, while $3$ a.a.s. wins $\bm{\mathcal{T}}_2$.
\par Hence, as $N \rightarrow \infty$, we find that
\begin{equation}\label{eq:unfairnodes}
\bm{wv}(\bm{\mathcal{T}_1}) \rightarrow \left( \frac{1}{3}, \, \frac{1}{2}, \, \frac{1}{6} \right) \;\;\; {\hbox{and}} \;\;\; \bm{wv}(\bm{\mathcal{T}_2}) \rightarrow \left( \frac{2}{3}, \, 0, \, \frac{1}{3} \right).
\end{equation}
Indeed, we get unfair specializations already for $N = 2$, in which case
the dichotomy in Step 2 is simply whether or not a player wins both
matches. One may check that, for $N = 2$,
\begin{equation*}\label{eq:unfairex}
\bm{wv}(\bm{\mathcal{T}_1}) = \left( \frac{3}{8}, \, \frac{5}{12}, \, \frac{5}{24} \right) \;\;\; {\hbox{and}} \;\;\; \bm{wv}(\bm{\mathcal{T}_2}) = \left( \frac{7}{12}, \, \frac{1}{6}, \, \frac{1}{4} \right).
\end{equation*}
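
As a sanity check, the following Monte Carlo sketch (ours, not part of the argument) estimates the win vectors of $\bm{T}_{1,2}$ and $\bm{T}_{2,2}$ under this specialization; the trial count is arbitrary, and the estimates should approach the exact values displayed above.
\begin{verbatim}
import random
from collections import Counter

# Match probabilities p12 = p23 = 1/2 and p13 = 1 (doubly monotonic).
P = {(1, 2): 0.5, (2, 1): 0.5, (2, 3): 0.5, (3, 2): 0.5,
     (1, 3): 1.0, (3, 1): 0.0}

def play(i, j):
    return i if random.random() < P[(i, j)] else j

def T1(N):
    i = random.choice([1, 2, 3])
    j, k = [p for p in (1, 2, 3) if p != i]
    w = sum(play(j, k) == j for _ in range(N))   # j's wins out of N
    if w >= 0.75 * N:
        return random.choice([j, i])
    if N - w >= 0.75 * N:
        return random.choice([k, i])
    return random.choice([j, k])

def T2(N):
    i = random.choice([1, 2, 3])
    j, k = [p for p in (1, 2, 3) if p != i]
    w = sum(play(j, k) == j for _ in range(N))
    if w >= 0.75 * N:
        return j
    if N - w >= 0.75 * N:
        return k
    return i

def win_vector(tournament, N, trials=200000):
    wins = Counter(tournament(N) for _ in range(trials))
    return [wins[p] / trials for p in (1, 2, 3)]

print(win_vector(T1, 2))   # approx. (3/8, 5/12, 5/24)
print(win_vector(T2, 2))   # approx. (7/12, 1/6, 1/4)
\end{verbatim}
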
We can think of $\bm{\mathcal{T}}_1$ as trying to give an advantage to
player $2$ over player $1$, and $\bm{\mathcal{T}}_2$ trying to give an
advantage to player $3$ over player $2$. It is natural to ask if it is
possible to improve the tournaments in this regard. Indeed the difference
in winning probabilities for players $1$ and $2$ in $\bm{\mathcal{T}}_1$
is only $\frac12-\frac13=\frac16$, and similarly the winning
probabilities for
players $3$ and $2$ in $\bm{\mathcal{T}}_2$
only differ by $\frac13$. In particular, is it
possible to modify $\bm{\mathcal{T}}_1$ such that $\pi_1$ goes below
$\frac13$ or such that $\pi_2$ goes above $\frac12$? Is it possible to
modify $\bm{\mathcal{T}}_2$ such that $\pi_3$ goes above $\frac13$? The
answer to both of these questions turns out to be ``no'', as we
will show below. In fact, these two tournaments are, in a sense, the
two unique maximally unfair symmetric and honest $3$-player tournaments.
We begin with two lemmas central to the study of symmetric and honest
tournaments for an arbitrary number of players.
\begin{lemma}\label{lem:symmetry}
Let $\bm{T}$ be a symmetric $n$-player tournament. If $p_{ik} = p_{jk}$
for all $k = 1,\,\dots,\,n$, then $\pi_i = \pi_j$.
\end{lemma}
\begin{proof}
Follows immediately from the definition of symmetry by taking $\sigma$
to be the permutation that swaps $i$ and $j$.
\end{proof}
\begin{lemma}\label{lem:honesty}
Let $\bm{T}$ be an honest $n$-player tournament and let
$P=(p_{ij})_{i,j\in[n]} \in\mathcal{M}_n$. Then, for any
$k \neq l$, $\pi_k=\pi_k(P)$ is increasing in $p_{kl}$.
\end{lemma}
As the proof of this lemma is a bit technical, we will delay this until the
end of the section.
In applying Lemma \ref{lem:honesty},
it is useful to introduce some terminology. We will use the terms
\emph{buff} and \emph{nerf} to refer to the act of increasing,
respectively decreasing, one player's match-winning probabilities while leaving
the probabilities between any other pair of players constant{\footnote{These
terms will be familiar to computer gamers.}}.
\begin{proposition}\label{prop:upperbd}
Let $n\geq 2$ and let $\bm{T}$ be a symmetric and honest $n$-player
tournament. For any $P\in\mathcal{D}_n$ and any $i>1$ we have
$\pi_i(P)\leq\frac12$.
\end{proposition}
\begin{proof}
Given $P\in\mathcal{D}_n$, we modify this to the matrix $P'$ by buffing player
$i$ to be equal to player $1$, that is, we put $p'_{i1}=\frac{1}{2}$ and for
any $j\not\in \{1,\,i\}$, $p'_{ij}=p_{1j}$. By Lemma \ref{lem:honesty},
$\pi_i(P)\leq \pi_i(P')$. But by Lemma \ref{lem:symmetry},
$\pi_1(P') = \pi_i(P')$. As the winning probabilities over all players
should sum to $1$, this means that $\pi_i(P')$ can be at most $\frac{1}{2}$.
\end{proof}
\begin{proposition}\label{prop:twoplayer} Every symmetric and honest
$2$-player tournament is fair. Moreover, for any $p\in [\frac12, 1]$, there
is a specialization of an honest and symmetric $2$-player tournament where
$\pi_1=p$ and $\pi_2=1-p$.
\end{proposition}
\begin{proof}
By Proposition \ref{prop:upperbd}, any doubly monotonic specialization of such a
tournament satisfies $\pi_2\leq\frac12$ and thereby $\pi_1\geq\pi_2$. On the
other hand, for any $p \in \left[ \frac{1}{2}, \, 1 \right]$, if $p_{12} = p$
and the tournament consists of a single match, then $\pi_1 = p$.
\end{proof}
\begin{proposition}\label{prop:threeplayer} Let $\bm{T}$ be a symmetric and
honest $3$-player tournament. Then, for any $P\in\mathcal{D}_3$,
$\pi_1\geq\frac13$, $\pi_2\leq\frac12$ and $\pi_3\leq\frac13$.
\end{proposition}
\begin{proof}
The second inequality was already shown in Proposition \ref{prop:upperbd}.
Let us consider the bound for player $1$. Given $P$ we construct a matrix $P'$
by {nerfing} player $1$ such that he becomes identical to player $2$. That is,
we let $p'_{12}=\frac12$ and $p'_{13}=p_{23}$. This reduces the winning
probability of player $1$, i.e. $\pi_1(P')\leq \pi_1(P)$, and by symmetry
$\pi_1(P')=\pi_2(P')$. We now claim that this common probability for players
$1$ and $2$ is at least $\frac13$. To see this, suppose we construct $P''$ from
$P'$ by {buffing} player $3$ to become identical to players $1$ and $2$, i.e.
$p''_{ij}=\frac12$ for all $i, j$. On the one hand, this increases the winning
probability of player $3$, i.e. $\pi_3(P'')\geq\pi_3(P')$, but on the other
hand, by symmetry we now have $\pi_1(P'')=\pi_2(P'')=\pi_3(P'')=\frac13$.
Hence, $\pi_3(P')\leq\frac13$ and hence $\pi_1(P')=\pi_2(P')\geq\frac13$, as
desired.
The bound for player $3$ can be shown analogously. We first buff player $3$ to
make him identical to player $2$, and then nerf $1$ to become identical to the
other two players.
\end{proof}
For each $n \in \mathbb{N}$, let $\mathcal{P}_n$ denote the convex polytope of
$n$-dimensional probability vectors, i.e.:
\begin{equation*}
\mathcal{P}_n = \{ (x_1, \, \dots, \, x_n) \in \mathbb{R}^n : x_i \geq 0 \, \forall \, i \; {\hbox{and}} \; \sum_{i=1}^{n} x_i = 1 \}.
\end{equation*}
Let $\mathcal{F}_n \subset \mathcal{P}_n$ be the closed, convex subset
\begin{equation*}
\mathcal{F}_n = \{ (x_1, \, \dots, \, x_n) \in \mathcal{P}_n : x_1 \geq x_2 \geq \dots \geq x_n \}.
\end{equation*}
We call $\mathcal{F}_n$ the $n$-dimensional \emph{fair} set.
A vector $\bm{x} = (x_1, \, \dots, \, x_n) \in \mathcal{P}_n$ will be
said to be \emph{achievable} if there is a matrix $P \in \mathcal{D}_n$ and a symmetric, honest $n$-player tournament
$\bm{T}$ such that $\bm{wv}(\bm{T}, P) = \bm{x}$.
We denote by $\mathcal{A}_n$ the closure of the set of achievable vectors in $\mathcal{P}_n$. Note that Proposition \ref{prop:twoplayer} says
that $\mathcal{A}_2 = \mathcal{F}_2$, whereas we already know from
(\ref{eq:unfairnodes}) that $\mathcal{A}_3 \neq \mathcal{F}_3$.
\begin{figure}
\includegraphics[scale=.8]{polytop_2.eps}
\caption{\label{fig:polytop_2} Illustration of the set $\mathcal{A}_3$, the closure of the set of achievable win vectors in symmetric and honest $3$-player tournaments. The set $\mathcal{P}_3$ is illustrated by the triangle on the right with corners (top), (bottom left), (bottom right) corresponding to the win vectors $(1, 0, 0)$, $(0, 1, 0)$ and $(0, 0, 1)$ respectively. The fair set $\mathcal{F}_3$ is the triangle with corners $V_3=(1, 0, 0), V_4=(\frac12, \frac12, 0)$ and $V_5 =(\frac13, \frac13, \frac13)$. The dotted lines show the three inequalities $\pi_1 \geq \frac13$ (horizontal), $\pi_2\leq \frac12$ (down right diagonal) and $\pi_3 \leq \frac13$ (up right diagonal), as shown in Proposition \ref{prop:threeplayer}. This means that all achievable win vectors are contained in the remaining set, i.e. the convex pentagon with corners $V_3, V_4, V_5$ together with the unfair points $V_1=(\frac13, \frac12, \frac16)$ and $V_2=(\frac23, 0, \frac13)$. We show in Theorem \ref{thm:threeplayer} that every point in this set, except possibly some points on the boundary, is achievable. Thus $\mathcal{A}_3$ is equal to this pentagon.
}\end{figure}
The following result summarizes our findings for symmetric and honest
$3$-player tournaments. This is illustrated in Figure \ref{fig:polytop_2}.
\begin{theorem}\label{thm:threeplayer}
$\mathcal{A}_{3} = \left\{ (x_1, \, x_2, \, x_3) \in \mathcal{P}_3 : x_1 \geq \frac{1}{3}, \, x_2 \leq \frac{1}{2}, \, x_3 \leq \frac{1}{3} \right\}$.
\end{theorem}
\begin{proof}
Denote the above set by $\mathcal{S}$. By Proposition \ref{prop:threeplayer},
we know that $\mathcal{A}_{3} \subseteq \mathcal{S}$, so it only remains to
prove that $\mathcal{S} \subseteq \mathcal{A}_{3}$. We start with two
observations:
\begin{itemize}
\item $\mathcal{S}$ is a convex polygon with five vertices:
\begin{equation*}\label{eq:vertices}
V_1 = \left( \frac{1}{3}, \, \frac{1}{2}, \, \frac{1}{6} \right), \;\; V_2 = \left( \frac{2}{3}, \, 0, \, \frac{1}{3} \right), \;\; V_3 = (1,\,0,\,0), \;\; V_4 = \left( \frac{1}{2}, \, \frac{1}{2}, \, 0 \right), \;\; V_5 = \left( \frac{1}{3}, \, \frac{1}{3}, \, \frac{1}{3} \right).
\end{equation*}
\item Suppose $\bm{\mathcal{T}}^0$, $\bm{\mathcal{T}}^1$ are specializations
of symmetric and honest $n$-player tournaments $\bm{T}^0$, $\bm{T}^1$
respectively, and with the same matrix $P \in \mathcal{M}_n$. For
$p \in [0,\,1]$ we let $\bm{T}^p$ denote the tournament: ``With probability
$p$ play $\bm{T}^0$ and with probability $1-p$ play $\bm{T}^1$''. Clearly,
$\bm{T}^p$ is also symmetric and honest for any $p$ and, if
$\bm{\mathcal{T}}^p$ is its specialization for the matrix $P$, then
$\bm{wv}(\bm{\mathcal{T}}^p) = p \cdot \bm{wv}(\bm{\mathcal{T}}^0) + (1-p) \cdot \bm{wv}(\bm{\mathcal{T}}^1)$.
\end{itemize}
\par It follows from these observations
that, in order to prove that $\mathcal{S} \subseteq \mathcal{A}_3$, it
suffices to construct, for each $i = 1,\,\dots,\,5$, a sequence
$\bm{T}_{i,\,N}$ of symmetric and honest tournaments
such that $\bm{wv}(\bm{\mathcal{T}}_{i,\,N}) \rightarrow V_i$ as
$N \rightarrow \infty$, where $\bm{\mathcal{T}}_{i,\,N}$ is the specialization of
$\bm{T}_{i,\,N}$ by the unique matrix $P = (p_{ij}) \in \mathcal{D}_{3}$
satisfying $p_{12} = p_{23} = \frac{1}{2}$, $p_{13} = 1$.
\par Indeed, we've already constructed appropriate sequences for
$i = 1,\,2$, by (\ref{eq:unfairnodes}), so it remains to take care of
$i = 3,\,4,\,5$.
\\
\\
{\sc Tournament $\bm{T}_{3,\,N}$}: Play $N$ iterations of round-robin. Choose
the winner uniformly at random from among the players with the maximum
number of wins.
\par It is clear that $\bm{T}_{3,\,N}$ is symmetric and honest and
that $\bm{wv}(\bm{\mathcal{T}}_{3,\,N}) \rightarrow V_3$ as
$N \rightarrow \infty$.
\\
\\
{\sc Tournament $\bm{T}_{4,\,N}$}: Play $N$ iterations of round-robin.
Choose a player uniformly at random from among those with the minimum
number of wins. Flip a coin to determine the winner among the two
remaining players.
\par It is clear that $\bm{T}_{4,\,N}$ is symmetric and honest and that
$\bm{wv}(\bm{\mathcal{T}}_{4,\,N}) \rightarrow V_4$ as
$N \rightarrow \infty$.
\\
\\
{\sc Tournament $\bm{T}_{5}$:} Just choose the winner uniformly at
random. Obviously $\bm{wv}(\bm{\mathcal{T}}_5) = V_5$ and the
tournament is symmetric and honest.
\end{proof}
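\par As an informal sanity check on the construction above (and nothing more: it plays no role in the proof), the following Python sketch estimates the win vector of $\bm{T}_{4,\,N}$ by simulation at the matrix $P$ with $p_{12} = p_{23} = \frac12$, $p_{13} = 1$. The players are $0$-indexed and the values of $N$ and the number of trials are arbitrary choices of ours.
\begin{verbatim}
# Monte Carlo estimate of wv(T_{4,N}) with p12 = p23 = 1/2, p13 = 1.
# Players are 0-indexed; sample sizes arbitrary; expected limit (1/2, 1/2, 0).
import random

P = {(0, 1): 0.5, (0, 2): 1.0, (1, 2): 0.5}   # prob. that the smaller index wins

def one_run(N):
    wins = [0, 0, 0]
    for _ in range(N):                         # N iterations of round-robin
        for (i, j), p in P.items():
            if random.random() < p:
                wins[i] += 1
            else:
                wins[j] += 1
    worst = min(wins)
    eliminated = random.choice([k for k in range(3) if wins[k] == worst])
    return random.choice([k for k in range(3) if k != eliminated])  # fair coin

def estimate(N=50, trials=20000):
    counts = [0, 0, 0]
    for _ in range(trials):
        counts[one_run(N)] += 1
    return [c / trials for c in counts]

print(estimate())                              # roughly [0.5, 0.5, 0.0]
\end{verbatim}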
To conclude this section, we finally give the proof of Lemma \ref{lem:honesty}.
\begin{proof}[Proof of Lemma \ref{lem:honesty}.]
Fix $k, l\in[n]$ and $\delta>0$. Consider two matrices
$P=(p_{ij}), P'=(p'_{ij}) \in \mathcal{M}_n$ such that $p'_{kl}=p_{kl}+\delta$,
$p'_{lk}=p_{lk}-\delta$ and $p'_{ij}=p_{ij}$ whenever $\{i, j\}\neq \{k, l\}$.
The proof will involve
interpolating between the specializations $(\bm{T}, P)$ and $(\bm{T}, P')$
by a sequence of what we'll call ``tournaments-on-steroids''.
For a given $r\geq 0$ we imagine that we play the tournament $\bm{T}$ where,
in the first $r$ matches, winning probabilities are determined by $P'$, and
after that according to $P$. The idea is that, at the beginning of the
tournament, we give player $k$ a performance enhancing drug that only works
against $l$, and only lasts for the duration of $r$ matches (regardless of
whether he plays in those matches or not).
With some slight abuse of terminology, we will consider these as
specializations
of $\bm{T}$, and denote them by $\bm{\mathcal{T}}^r$, and the corresponding
winning probability of a player $i\in [n]$ by $\pi_i^r$. Clearly
$\bm{\mathcal{T}}^0 = (\bm{T}, P)$, and taking $m$ equal to the maximum
number of matches played in $\bm{T}$, it follows that
$\bm{\mathcal{T}}^m = (\bm{T}, P')$. Hence, it suffices to show that
$\pi_k^r$ is increasing in $r$.
Suppose we run the specializations $\bm{\mathcal{T}}^r$ and
$\bm{\mathcal{T}}^{r+1}$ until either $\bm{T}$ chooses a pair of players to
meet each other in match $r+1$, or a winner is determined before this happens.
As both specializations evolve according to the same probability distribution
up until this point, we may assume that both specializations have behaved
identically so far. The only way the winning probability for player $k$ can
differ in the two specializations from this point onwards
is if match $r+1$ is between
players $k$ and $l$. Assuming this is the case, let $\pi_k^+$ denote the
probability that $k$ wins the tournament conditioned on him winning the
current
match and assuming all future matches are determined according to $P$, that
is, according to the specialization $(\bm{T}, P)$. Similarly $\pi_k^-$ denotes
the probability that he wins conditioned on him losing the match. This means
that the winning probability for $k$ is
$p_{kl}\cdot\pi_k^+ + p_{lk}\cdot\pi_k^-$ in $\bm{\mathcal{T}}^{r}$ and
$p'_{kl}\cdot\pi_k^+ + p'_{lk}\cdot\pi_k^-$ in $\bm{\mathcal{T}}^{r+1}$. But by
honesty, $\pi_k^+\geq \pi_k^-$, from which it is easy to check that the winning
probability is at least as high in $\bm{\mathcal{T}}^{r+1}$ as in
$\bm{\mathcal{T}}^{r}$. We see
that, for any possibility until match $r+1$ is played, the probability for
$k$ to win in $\bm{\mathcal{T}}^{r+1}$ is at least as high as in
$\bm{\mathcal{T}}^r$. Hence $\pi_k^{r+1}\geq \pi_k^r$, as desired.
\end{proof}
\begin{remark}\label{rem:unboundedcouple}
{\bf (i)} The above proof still works without assuming a bound on the number
of matches in $\bm{T}$. The only difference will be that $(\bm{T}, P')$ is
now the limit of $\bm{\mathcal{T}}^r$ as $r\rightarrow\infty$.
{\bf (ii)} If $\bm{T}$ is strictly honest, one can see that
$\pi_k^{r+1}>\pi_k^r$ for any $P \in \mathcal{M}^{o}_{n}$ and any $r$ such that
there is a positive probability that match $r+1$ is between players $k$ and
$l$. Hence, $\pi_k(P)$ is strictly increasing in $p_{kl}$ in this case.
\end{remark}
\section{$n$-Player Tournaments}\label{sect:nplayer}
Already for $n=4$, it appears to be a hard problem to determine which win
vectors are achievable. The aim of this section is to present partial results
in this direction. As we saw in the previous section, $\mathcal{A}_3$ can be completely
characterized by the minimum and maximum win probability each player can
attain. Thus, a natural starting point to analyze $\mathcal{A}_n$ for
$n\geq 4$ is to try to generalize this. For each $i \in [n]$, let
\begin{eqnarray*}
\Pi^{i,\,n} := \max \{ x_i : (x_1,\,\dots,\,x_n) \in \mathcal{A}_n \}, \\
\Pi_{i,\,n} := \min \{ x_i : (x_1,\,\dots,\,x_n) \in \mathcal{A}_n \}.
\end{eqnarray*}
In other words, $\Pi^{i,\,n}$ (resp. $\Pi_{i,\,n}$) is the least upper bound
(resp. greatest lower bound) for the win probability for player $i$, taken over
all doubly monotonic specializations of all symmetric and honest $n$-player
tournaments.
It is not too hard to construct a sequence of doubly monotonic specializations of
symmetric and honest tournaments such that $\pi_1\rightarrow 1$. Thus we have
$\Pi^{1,\,n}=1$ and $\Pi_{i,\,n}=0$ for all $i>1$. Moreover, by Proposition
\ref{prop:upperbd}, $\Pi^{i,\,n}\leq \frac12$ for all $i>1$. We can extract a
little more information by using the technique of
``buffing and nerfing a player'' which was used in Propositions
\ref{prop:upperbd} and \ref{prop:threeplayer}.
\begin{proposition}\label{prop:generalbuffnerf}
(i) For every $n \in \mathbb{N}$, $\Pi^{i,\,n}$ is a decreasing function
of $i$.
\par (ii) $\Pi^{3,\,4} \leq \frac{3}{8}$.
\par (iii) $\Pi_{1,\,4} \geq \frac{1}{6}$.
\end{proposition}
\begin{proof}
\emph{(i)} Suppose, on the contrary, that $\Pi^{i+1,\,n} > \Pi^{i,\,n}$, for
some $n \geq 2$ and $1 \leq i < n$. Then there must exist some symmetric and
honest $n$-player tournament $\bm{T}$ and some matrix
$P \in \mathcal{D}_n$ such that $\pi_{i+1}(P) > \Pi^{i,\,n}$. Now buff player
$i+1$ until he is indistinguishable from $i$ (according to the same
kind of procedure as in the proof of Proposition \ref{prop:upperbd}).
Let $P'$ be the resulting matrix. By symmetry and honesty we then have
$\Pi^{i,\,n} \geq \pi_{i}(P') =
\pi_{i+1}(P') \geq \pi_{i+1}(P) > \Pi^{i+1,\,n}$, a contradiction.
\par \emph{(ii)} Let $\bm{T}$ be any symmetric and honest
$4$-player tournament and let $P \in \mathcal{D}_4$. Perform the following
three modifications of the
specialization:
\par \emph{Step 1:} Buff player $3$ until he is indistinguishable from $2$.
\par \emph{Step 2:} Nerf player $1$ until he
is indistinguishable from $2$ and $3$.
\par \emph{Step 3:} Buff player $4$ until he is indistinguishable from
$1,\,2$ and $3$.
\\
Let $P', P''$ and $P'''$ be the
corresponding matrices at the end of Steps $1, 2$ and $3$ respectively. By
Lemmas \ref{lem:symmetry} and
\ref{lem:honesty}, we first have
\begin{equation}\label{eq:step1}
\pi_{3}(P') \geq \pi_3(P), \;\;\;\;\;\; \pi_{2}(P') = \pi_{3}(P').
\end{equation}
The latter equality implies, in particular, that
\begin{equation}\label{eq:step11}
\pi_{1}(P') \leq 1-2\pi_{3}(P').
\end{equation}
A second application of Lemmas \ref{lem:symmetry} and \ref{lem:honesty}
implies that
\begin{equation}\label{eq:step2}
\pi_{1}(P'') \leq \pi_{1}(P'), \;\;\;\;\;\; \pi_{1}(P'')=\pi_{2}(P'')=\pi_{3}(P'').
\end{equation}
A third application yields
\begin{equation}
\pi_{4}(P''') \geq \pi_4(P''), \;\;\;\;\;\; \pi_{1}(P''')=\pi_{2}(P''')=\pi_{3}(P''')=\pi_{4}(P''') = \frac{1}{4}.
\end{equation}
Putting all this together, we have
\begin{eqnarray*}
1 = 3\pi_{1}(P'') + \pi_{4}(P'') \leq 3(1-2\pi_{3}(P')) +
\frac{1}{4} \Rightarrow \pi_{3}(P') \leq \frac{3}{8} \Rightarrow \pi_3(P) \leq \frac{3}{8}.
\end{eqnarray*}
\par \emph{(iii)} As before, let $\bm{T}$ be any symmetric and honest
$4$-player tournament and let $P \in \mathcal{D}_4$. We must show that
$\pi_1(P) \geq \frac{1}{6}$. Perform the following two modifications of the
specialization:
\par \emph{Step 1:} Nerf player $1$ until he is indistinguishable from
$2$.
\par \emph{Step 2:} Buff player $3$ until he
is indistinguishable from $1$ and $2$.
\\
Let $P', P''$ be the
corresponding matrices at the end of Steps $1$ and $2$ respectively. Twice
applying Lemmas \ref{lem:symmetry} and
\ref{lem:honesty} we get
\begin{eqnarray}
\pi_1(P') \leq \pi_1(P), \;\;\;\;\;\; \pi_{1}(P') = \pi_{2}(P'), \label{eq:stepp1} \\
\pi_3(P'') \geq \pi_{3}(P'), \;\;\;\;\;\; \pi_1(P'') = \pi_2(P'') = \pi_3(P''). \label{eq:stepp2}
\end{eqnarray}
From (\ref{eq:stepp2}) we deduce that $\pi_3(P') \leq \frac{1}{3}$. By a
similar argument, where in Step $2$ one instead buffs $4$ to the level of
$1$ and $2$, one shows that $\pi_4(P') \leq \frac{1}{3}$. Then, with the
help of (\ref{eq:stepp1}), we have
\begin{eqnarray*}
1 = \pi_1(P') + \pi_2(P') + \pi_3(P') + \pi_4(P') \leq 2\pi_1(P') + 2\cdot \frac{1}{3} \Rightarrow \pi_1(P') \geq \frac{1}{6} \Rightarrow \pi_1(P) \geq \frac{1}{6}.
\end{eqnarray*}
\end{proof}
We next present a way to construct many symmetric and honest but unfair
tournaments. For each $n \in \mathbb{N}$, let $\mathcal{G}_n$ denote the
family of labelled
digraphs (loops and multiple arcs allowed) on the vertex set
$\{1,\,2,\,\dots,\,n\}$ whose set of arcs satisfies the following conditions:
\par \emph{Rule 1:} There are exactly two arcs going out from each vertex.
\par \emph{Rule 2:} Every arc $(i,\,j)$ satisfies $j \leq i$.
\par \emph{Rule 3:} If $(i, \, j_1)$ and $(i, \, j_2)$ are the two outgoing
arcs from $i$, then $j_1 = j_2 \Rightarrow j_1 = 1$ or $j_1 = i$. In other
words, if the two arcs have the same destination, then either they are both
loops or the destination is vertex $1$.
\\
\par To each digraph $G \in \mathcal{G}_n$ we associate a vector
$v(G) = (v_1,\,\dots,\,v_n) \in \mathcal{P}_n$ according to the rule
\begin{equation}\label{eq:graphvector1}
v_i = \frac{{\hbox{indeg}}_{G}(i)}{2n}.
\end{equation}
Note that since, by Rule 1, each vertex has outdegree $2$, we can also write
this formula as
\begin{equation}\label{eq:graphvector2}
v_i = \frac{1}{n} + \frac{{\hbox{indeg}}_{G}(i) - {\hbox{outdeg}}_{G}(i)}{2n}.
\end{equation}
In what follows, each vector $v(G)$ will be interpreted as the win vector of a
certain symmetric and honest tournament. According to (\ref{eq:graphvector2}),
the arcs of $G$ instruct us how to ``redistribute'' win probabilities amongst
the players, starting from the uniform distribution, where each arc
``carries with it'' $\frac{1}{2n}$ of probability.
\\
\par Let $\mathcal{A}^{*}_{n}$ denote the convex hull of all vectors
$v(G)$, $G \in \mathcal{G}_n$. It is easy to see that $\mathcal{A}^{*}_{1}$ is
the single point $(1)$: the only digraph in $\mathcal{G}_1$ consists of the
single vertex $1$ with two loops. For $n \geq 2$, the number of digraphs in
$\mathcal{G}_n$ is $\prod_{i=2}^{n} \left( 2 + \binom{i}{2} \right)$
since, for each $i \geq 2$, the possibilities for the two outgoing arcs from
vertex $i$ are:
\par - send both to $i$ ($1$ possibility),
\par - send both to $1$ ($1$ possibility),
\par - send them to distinct $j_1, \, j_2 \in \{1,\,\dots,\,i\}$
($\binom{i}{2}$ possibilities).
\\
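\par The following Python sketch is purely illustrative: it enumerates $\mathcal{G}_n$ for small $n$ directly from Rules 1--3, computes the associated vectors $v(G)$, and confirms the count just given. The function names and the encoding of digraphs as arc lists are our own choices.
\begin{verbatim}
# Enumerate G_n from Rules 1-3, compute v(G), and check the count
# prod_{i=2}^n (2 + C(i,2)).  Vertices are 1-indexed, as in the text.
from itertools import combinations, product
from math import comb

def out_choices(i):
    # admissible destination pairs for the two arcs out of vertex i
    choices = [(i, i), (1, 1)]                         # Rule 3: repeats only here
    choices += list(combinations(range(1, i + 1), 2))  # distinct destinations <= i
    return list(dict.fromkeys(choices))                # for i = 1 all options coincide

def family(n):
    for picks in product(*(out_choices(i) for i in range(1, n + 1))):
        yield [(i, j) for i, pair in enumerate(picks, start=1) for j in pair]

def v(arcs, n):
    indeg = [0] * (n + 1)
    for (_, j) in arcs:
        indeg[j] += 1
    return tuple(indeg[i] / (2 * n) for i in range(1, n + 1))

for n in range(1, 5):
    graphs = list(family(n))
    expected = 1
    for i in range(2, n + 1):
        expected *= 2 + comb(i, 2)
    print(n, len(graphs), expected, len({v(a, n) for a in graphs}))
\end{verbatim}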
The number of corners in the convex polytope $\mathcal{A}^{*}_{n}$ is, however,
much less than this. For a digraph $G$ to correspond to a corner of
$\mathcal{A}^{*}_{n}$, there must exist some vector
$\bm{a} = (a_1,\,\dots,\,a_n) \in \mathbb{R}^n$ such that $v(G)$ is the
unique maximizer, in
$v(\mathcal{G}_n)$, of the sum $\sum_{i=1}^{n} a_i v_i(G)$. We can assume that the
coefficients $a_i$ are distinct numbers. For a given
vector $\bm{a}$, a digraph which maximizes the sum is determined by the
following procedure: List the components of $\bm{a}$ in decreasing order, say
$a_{i_1} > a_{i_2} > \dots > a_{i_n}$. Now draw as many arcs as possible first to
$i_1$, then to $i_2$ and so on, all the while respecting Rules 1,2,3 above.
\par We see that the resulting digraph depends only on the ordering of the
components of $\bm{a}$, not on their exact values. In other words, there is a
well-defined map $f: \mathcal{S}_n \rightarrow \mathcal{P}_n$ from permutations
of $\{1,\,\dots,\,n\}$ to corners of $\mathcal{A}^{*}_{n}$,
$f(\sigma) = v(G_{\sigma})$, where, for $\sigma = (\sigma_1,\,\dots,\,\sigma_n) \in \mathcal{S}_n$, the digraph $G_{\sigma}$ is given by the procedure:
\par ``Draw as many arcs as possible first to vertex $\sigma_1$, then to
$\sigma_2$ and so on, all the while respecting Rules 1, 2, 3''.
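\par For concreteness, here is a small Python sketch (again illustrative only) of this greedy construction of $G_{\sigma}$; running it over all $\sigma \in \mathcal{S}_n$ and collecting the distinct indegree sequences gives the number of corners of $\mathcal{A}^{*}_{n}$, in agreement with Proposition \ref{prop:corners} below.
\begin{verbatim}
# Build G_sigma greedily and count the distinct corners v(G_sigma).
# Vertices are 1-indexed; a corner is recorded by its indegree sequence,
# which determines v(G) via v_i = indeg(i)/(2n).
from itertools import permutations

def G_sigma(sigma, n):
    capacity = {i: 2 for i in range(1, n + 1)}     # Rule 1: out-degree 2
    arcs = []
    for t in sigma:                                # targets in the order of sigma
        for i in range(t, n + 1):                  # Rule 2: only i >= t may point to t
            if capacity[i] == 0:
                continue
            k = capacity[i] if t in (1, i) else 1  # Rule 3: repeats only to 1 or i
            arcs += [(i, t)] * k
            capacity[i] -= k
    return arcs

def indegrees(arcs, n):
    indeg = [0] * (n + 1)
    for (_, j) in arcs:
        indeg[j] += 1
    return tuple(indeg[1:])

for n in range(1, 6):
    corners = {indegrees(G_sigma(s, n), n)
               for s in permutations(range(1, n + 1))}
    print(n, len(corners), (3 ** (n - 1) + 1) // 2)   # the two counts agree
\end{verbatim}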
\begin{table}[ht!]
\begin{center}
\begin{tabular}{|c|c|c|} \hline
$\sigma$ & $G_{\sigma}$ & $v(G_{\sigma})$ \\ \hline \hline
$(1,\,2)$ & \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=4cm,
thick,main node/.style={circle,draw,font=\Large\bfseries}]
\node[main node] (1) {1};
\node[main node] (2) [right of=1] {2};
\path
(1) edge [loop above] node {} (1)
edge [loop below] node {} (1)
(2) edge [bend right] node {} (1)
edge [bend left] node {} (1);
\end{tikzpicture} & $(1,\,0)$ \\ \hline
$(2,\,1)$ & \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=4cm,
thick,main node/.style={circle,draw,font=\Large\bfseries}]
\node[main node] (1) {1};
\node[main node] (2) [right of=1] {2};
\path
(1) edge [loop above] node {} (1)
edge [loop below] node {} (1)
(2) edge [loop above] node {} (2)
edge [loop below] node {} (2);
\end{tikzpicture} & $\left( \frac{1}{2}, \, \frac{1}{2} \right)$ \\ \hline \hline
$(1,\,2,\,3) \; {\hbox{or}} \; (1,\,3,\,2)$ & \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=2cm,
thick,main node/.style={circle,draw,font=\Large\bfseries}]
\node[main node] (1) {1};
\node[main node] (2) [below left of=1] {2};
\node[main node] (3) [below right of=1] {3};
\draw
(1) to [out=150, in=120, looseness=8] (1);
\draw
(1) to [out=30, in=60, looseness=8] (1);
\path
(2) edge [bend left] node {} (1)
edge [bend right] node {} (1)
(3) edge [bend left] node {} (1)
edge [bend right] node {} (1);
\end{tikzpicture} & $(1,\,0,\,0)$ \\ \hline
$(2,\,1,\,3)$ & \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=2cm,
thick,main node/.style={circle,draw,font=\Large\bfseries}]
\node[main node] (1) {1};
\node[main node] (2) [below left of=1] {2};
\node[main node] (3) [below right of=1] {3};
\draw
(1) to [out=150, in=120, looseness=8] (1);
\draw
(1) to [out=30, in=60, looseness=8] (1);
\draw
(2) to [out=120, in=150, looseness=8] (2);
\draw
(2) to [out=240, in=210, looseness=8] (2);
\path
(3) edge node {} (1)
edge node {} (2);
\end{tikzpicture} & $\left( \frac{1}{2}, \frac{1}{2}, \, 0 \right)$ \\ \hline
$(2,\,3,\,1)$ & \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=2cm,
thick,main node/.style={circle,draw,font=\Large\bfseries}]
\node[main node] (1) {1};
\node[main node] (2) [below left of=1] {2};
\node[main node] (3) [below right of=1] {3};
\draw
(1) to [out=150, in=120, looseness=8] (1);
\draw
(1) to [out=30, in=60, looseness=8] (1);
\draw
(2) to [out=120, in=150, looseness=8] (2);
\draw
(2) to [out=240, in=210, looseness=8] (2);
\path
(3) edge [loop above] node {} (3)
edge node {} (2);
\end{tikzpicture} & $\left( \frac{1}{3}, \, \frac{1}{2}, \, \frac{1}{6} \right)$ \\ \hline
$(3,\,1,\,2)$ & \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=2cm,
thick,main node/.style={circle,draw,font=\Large\bfseries}]
\node[main node] (1) {1};
\node[main node] (2) [below left of=1] {2};
\node[main node] (3) [below right of=1] {3};
\draw
(1) to [out=150, in=120, looseness=8] (1);
\draw
(1) to [out=30, in=60, looseness=8] (1);
\draw
(3) to [out=60, in=30, looseness=8] (3);
\draw
(3) to [out=300, in=330, looseness=8] (3);
\path
(2) edge [bend left] node {} (1)
edge [bend right] node {} (1);
\end{tikzpicture} & $\left( \frac{2}{3}, \, 0, \, \frac{1}{3} \right)$ \\ \hline
$(3,\,2,\,1)$ & \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=2cm,
thick,main node/.style={circle,draw,font=\Large\bfseries}]
\node[main node] (1) {1};
\node[main node] (2) [below left of=1] {2};
\node[main node] (3) [below right of=1] {3};
\draw
(1) to [out=150, in=120, looseness=8] (1);
\draw
(1) to [out=30, in=60, looseness=8] (1);
\draw
(2) to [out=120, in=150, looseness=8] (2);
\draw
(2) to [out=240, in=210, looseness=8] (2);
\draw
(3) to [out=60, in=30, looseness=8] (3);
\draw
(3) to [out=300, in=330, looseness=8] (3);
\end{tikzpicture} & $\left( \frac{1}{3}, \, \frac{1}{3}, \, \frac{1}{3} \right)$ \\ \hline
\end{tabular}
\end{center}
\vspace{0.3cm}
\caption{All $\sigma \in \mathcal{S}_n$, $G_{\sigma} \in \mathcal{G}_n$ and
corners $v(G_{\sigma})$ of $\mathcal{A}^{*}_{n}$, for $n = 2, \, 3$.}
\label{tab:corners}
\end{table}
Table \ref{tab:corners} shows how this works for $n=2$ and $n=3$. The map
$f$ is not injective for any $n \geq 3$ and the exact number of corners in
$\mathcal{A}^{*}_n$ is computed in Proposition \ref{prop:corners} below. For the
time being, the crucial takeaway from Table \ref{tab:corners} is that
$\mathcal{A}^{*}_{2} = \mathcal{A}_2$ and $\mathcal{A}^{*}_3 = \mathcal{A}_3$.
Recall also that $\mathcal{A}^{*}_1 = \mathcal{A}_1 = \{ (1)\}$.
\par We are ready to formulate
\begin{conjecture}\label{conj:nplayer}
$\mathcal{A}^{*}_n = \mathcal{A}_n$, for every $n \in \mathbb{N}$.
\end{conjecture}
Our main result in this section is
\begin{theorem}\label{thm:nplayer}
$\mathcal{A}^{*}_n \subseteq \mathcal{A}_n$, for every $n \in \mathbb{N}$.
\end{theorem}
\begin{proof}
We've already observed that $\mathcal{A}^{*}_{n} = \mathcal{A}_n$ for
$n=1, 2, 3$. We divide the remainder of the proof into two cases.
\\
\\
{\sc Case I:} $n \geq 5$. Since we can form a
``convex combination of tournaments'' (see the proof of Theorem
\ref{thm:threeplayer}), it suffices to find, for any fixed
$P \in \mathcal{D}_n$ and for each $G \in \mathcal{G}_n$, a
sequence $\bm{T}_{G, \, N}$ of symmetric and honest tournaments such that
$\bm{wv}((\bm{T}_{G,\,N}, \, P)) \rightarrow v(G)$ as
$N \rightarrow \infty$.
\par Let $P = (p_{ij})$ be any doubly monotonic matrix such that
$p_{ij}\neq p_{kl}$ unless either $i=k, \, j=l$ or $i=j, \, k=l$.
The matrix $P$ is
henceforth fixed. Let
\begin{equation}\label{eq:eps}
\varepsilon_1 := \min_{i \neq j} {\hbox{$| p_{ij} - \frac{1}{2}|$}},
\;\;\; \varepsilon_2 := \min_{\stackrel{i \neq j, \, k \neq l,}{\{i,\,j\} \neq \{k,\,l\}}} |p_{ij} - p_{kl}|, \;\;\;
\varepsilon := \frac{1}{2} \min \{ \varepsilon_1, \, \varepsilon_2 \}.
\end{equation}
In other words, $\varepsilon$ is half the minimum difference between two
distinct numbers appearing in the matrix $P$.
\par For $N \in \mathbb{N}$ and $G \in \mathcal{G}_n$, the rules of the
tournament $\bm{T}_{G,\,N}$ are as follows. We remark that the matrix $P$
here is a fixed parameter as part of the rules and does not (necessarily)
have anything to do with the specialization. In due course we will, however,
also have reason to consider the specialization $(\bm{T}_{G, \, N}, \, P)$.
\\
\\
\emph{Step 1:} Present the matrix $P$ to each of the players.
\\
\\
\emph{Step 2:} Choose one of the players uniformly at random. This player
takes no further part in the tournament.
\\
\\
\emph{Step 3:} The remaining $n-1$ players play $N$ iterations of
round-robin.
Once all the matches are finished, each remaining player performs a sequence
of tasks{\footnote{One can instead imagine that there is a ``referee'' who performs all these tasks, since they are part of the rules for the tournament. We think it is intuitively easier to understand the idea, however, in terms of each player performing his own calculations. Note that Step 1 can be removed from the description of the rules if we formulate them in terms of a central referee.}}
which is a little technical to describe. Informally, he tries to establish
the identities of the other $n-2$ remainers, as elements from $[n]$, by
checking the results of all the matches not involving himself and comparing
with the given matrix $P$. More formally, he does the following:
\par (a) He makes an arbitrary list
$(t_1,\,t_2,\,\dots,\,t_{n-2})$ of the other $n-2$
remainers and computes the elements $q_{ij}$ of an $(n-2) \times (n-2)$
matrix such that $q_{ij}$ is the fraction of the matches between $t_i$ and
$t_j$ which were won by $t_i$.
\par (b) He tries to find a subset $\{u_1,\,\dots,\,u_{n-2}\} \subset [n]$
such that, for all $1 \leq i < j \leq n-2$,
\begin{equation}\label{eq:success}
|q_{ij} - p_{u_i, \, u_j}| < \varepsilon.
\end{equation}
Note that, by (\ref{eq:eps}), he can find at most one such
$(n-2) \times (n-2)$ submatrix of $P$. If he does so, we say that he
\emph{succeeds} in Step 3.
\\
\\
\emph{Step 4:} For each player that succeeds in Step 3, do the following:
\par (a) Let $i < j \in [n]$ be the numbers of the
two rows and columns in $P$ which are excluded from the submatrix he
identified in Step 3.
\par (b) For each $l \in [n] \backslash \{i,\,j\}$, compute the fraction
$r_l$ of matches which he won against the player whom he identified in
Step 3 with row $l$ of the matrix $P$.
\par (c) If $r_l > p_{il} - \varepsilon$ for every $l$, then assign this
player a ``token'' of weight $\frac{n_{ji}}{2}$, where $n_{ji}$ is the number
of arcs from $j$ to $i$ in the digraph $G$.
\\
\\
\emph{Step 5:} Assign to the player eliminated in Step 2 a token of weight
$1 - s$, where $s$ is the sum of the weights of the tokens distributed in
Step 4. The winner of the tournament is now chosen at random, weighted in
accordance with the distribution of tokens.
\\
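\par To make Steps 3 and 4 concrete, here is a minimal Python sketch of the bookkeeping carried out by (or on behalf of) one remaining player, with $\varepsilon$ as in (\ref{eq:eps}). Everything is $0$-indexed, the data layout (the empirical matrix \texttt{q} indexed by position, the dictionary returned by \texttt{identify}, and so on) is our own choice, and the snippet is meant only to illustrate the rules, not to be used in the proofs.
\begin{verbatim}
# Sketch of Steps 3-4 for a single remaining player ("me").  0-indexed:
# the paper's player 1 is index 0.  P is the n x n matrix from the rules,
# eps the epsilon fixed in the rules, others the list of the n-2 other
# remainers, q[a][b] the fraction of matches others[a] won against others[b].
from itertools import permutations

def identify(q, others, P, eps):
    """Step 3: match q with an (n-2)x(n-2) submatrix of P, within eps.
    By the choice of eps at most one assignment can pass the test."""
    m = len(others)
    for u in permutations(range(len(P)), m):        # candidate identities
        if all(abs(q[a][b] - P[u[a]][u[b]]) < eps
               for a in range(m) for b in range(m) if a != b):
            return dict(zip(others, u))
    return None                                     # the player does not succeed

def token_weight(my_results, ident, P, eps, G_arcs):
    """Step 4: my_results[t] is my winning fraction against remainer t,
    ident my identification from Step 3, G_arcs the arc list of G.
    Returns the weight of my token, or None if I get no token."""
    i, j = sorted(set(range(len(P))) - set(ident.values()))  # excluded rows
    if any(my_results[t] <= P[i][l] - eps for t, l in ident.items()):
        return None
    n_ji = sum(1 for arc in G_arcs if arc == (j, i))         # arcs j -> i in G
    return n_ji / 2
\end{verbatim}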
\par What needs to be proven now is that the tournament $\bm{T}_{G,\,N}$ is
always well-defined, that is, it can never happen that the total weight of
the tokens distributed in Step 4 exceeds one. Supposing for the moment that
this is so, it is clear that the tournament is symmetric and honest, and it
is also easy to see that
$\bm{wv}((\bm{T}_{G,\,N}, \, P)) \rightarrow v(G)$ as
$N \rightarrow \infty$. For if the relative strengths of the $n$ players
are, \emph{in fact}, given by the matrix $P$ then, as
$N \rightarrow \infty$, with high probability everyone not eliminated in
Step 2 will succeed with identifying an $(n-2) \times (n-2)$ submatrix of
$P$ in Step 3, namely the submatrix corresponding to the \emph{actual}
rankings of these $n-2$ remainers, and will then have performed well enough
to be assigned a token in Step 4(c) if and only if their actual ranking is
higher than that of the player eliminated in Step 2 (note that the weight
of the token they are assigned will still be zero if there is no
corresponding arc in the digraph $G$).
\\
\par So it remains to prove that the total weight of all tokens assigned in
Step 4(c) can never exceed one. If at most one player is assigned a token
of non-zero weight then we're fine, because of Rule 1 in the definition of
the family $\mathcal{G}_n$. Suppose at least two players are assigned
tokens of non-zero weight. Let $A,\,B,\,C,\,D,\,\dots$ denote all the
players not eliminated in Step 2 (these are just letters, not
numbers) and suppose $A$ and $B$ are assigned non-zero-weight tokens. Since
each of $A$ and $B$ can see the results of all matches involving
$C,\,D,\,\dots$, they will identify these with the same $n-3$ elements of
$[n]$ in Step 3. Note that here we have used the fact that $n \geq 5$. Let
$\mathcal{S} \subset [n]$ be this $(n-3)$-element subset. This
leaves three indices $i < j < k \in [n] \backslash \mathcal{S}$.
We have four options to consider:
\\
\\
\emph{Option 1:} At least one of $A$ and $B$ identifies the other as $k$.
We show this can't happen. Suppose $A$ identifies $B$ as $k$. Then $B$
must have performed at about the level expected of $k$ against each of
$C,\,D,\,\dots$. More precisely, for any
$l \in \mathcal{S}$,
\begin{equation}\label{eq:option1a}
|r^{B}_{l} - p_{kl}| < \varepsilon.
\end{equation}
On the other hand,
the rules of Step 4 imply that, for $B$ to receive a token, he must have
performed at least at the level expected of $j$ against each of
$C,\,D,\,\dots$ (and, indeed, at the level expected of $i$ in the case that
he failed to identify $A$ as $i$). Precisely, for each $l \in \mathcal{S}$,
\begin{equation}\label{eq:option1b}
r^{B}_{l} > p_{jl} - \varepsilon.
\end{equation}
But (\ref{eq:option1a}) and (\ref{eq:option1b}) contradict (\ref{eq:eps}).
\\
\\
\emph{Option 2:} $A$ and $B$ identify one another as $j$. We show that this
can't happen either. Suppose otherwise. Since $A$ gets a token, it must
pass the test $r^{A}_{j} > p_{ij} - \varepsilon$. Similarly
$r^{B}_{j} > p_{ij} - \varepsilon$. But $r^{A}_{j} + r^{B}_{j} = 1$, since
each of $A$ and $B$ is here computing the fraction of matches it won
against the other. This implies that
$p_{ij} < \frac{1}{2} + \varepsilon$, which contradicts (\ref{eq:eps}).
\\
\\
\emph{Option 3:} Each of $A$ and $B$ identifies the other as $i$. Then the
weight of the token assigned to each is $\frac{n_{kj}}{2}$. But $j > 1$ so
$n_{kj} \leq 1$, by Rule 3 for the family $\mathcal{G}_n$. Hence it suffices
to prove that no other player receives a token. Suppose $C$ receives a
token. $C$ sees the results of matches involving either $A$ or $B$ and any
of $D,\,\dots$. Since $A$ and $B$ have already identified one another as
$i$, then $C$ must make the same identification for each, by (\ref{eq:eps}).
In other words, $C$ cannot distinguish $A$ from $B$, a contradiction.
\\
\\
\emph{Option 4:} $A$ and $B$ identify one another as $i$ and $j$, in some
order. Since both get non-zero-weight tokens, there must, by Rules 1-3, be
exactly one arc in $G$ from $k$ to each of $i$ and $j$. So the sum of the
weights assigned to $A$ and $B$ equals one, and there is no arc in $G$ from
$k$ to any vertex other than $i$ and $j$. It now suffices to show that no
other player $C$ receives a positive weight token. The only way $C$ can
succeed in Step 3 is if it also identifies $A$ and $B$ as $i$ and $j$, and
if there is some $l \neq k$ such that it identifies
$\{C, \, Z\} = \{k, \, l\}$, where $Z$ is the player eliminated in Step 2.
Both $A$ and $B$ must in turn have identified $C$ as $l$. If $k < l$ this
means that $C$ cannot have played sufficiently well to obtain a token in
Step 4(c). If $l < k$ then even if $C$ gets a token it will have weight
zero, since there is no arc in $G$ from $k$ to $l$.
\\
\\
{\sc Case II:} $n=4$. We use the same tournaments $\bm{T}_{G,\,N}$ as in
{\sc Case I}, but in order to ensure their well-definedness we require,
in addition to (\ref{eq:eps}), the following conditions on the
$4 \times 4$ doubly monotonic matrix $P = (p_{ij})$:
\begin{equation}\label{eq:eps4}
p_{14} > p_{24} > p_{34} > p_{13} > p_{12} > p_{23}.
\end{equation}
Intuitively, player $4$ is useless, while the gap between $1$ and $2$ is
greater than that between $2$ and $3$. To prove well-definedness, it
suffices to establish the following two claims:
\\
\\
\emph{Claim 1:} If some player receives a token of weight one, then no
other player receives a token of positive weight.
\\
\\
\emph{Claim 2:} It is impossible for three players to receive positive
weight tokens.
\\
\par Let $D$ denote the player eliminated in Step 2 and $A,\,B,\,C$ the
three remainers.
\\
\\
\emph{Proof of Claim 1.} Suppose $A$ receives a token of weight one. The
rules for $\mathcal{G}_n$ imply that $A$ must identify himself as $1$ and
there are two arcs in $G$ from $j$ to $1$, where $j$ is the identity
which $A$ assigns to $D$. We consider two cases.
\par Case (a): $j = 4$. Suppose, by way of contradiction, that $B$ also
receives a positive weight token. In order to obtain a token at all, $B$
cannot have identified himself as $1$, because he has lost more than half
his matches against $A$. Hence there is no arc in $G$ from $4$ to
whomever $B$ identifies himself as, so $B$ cannot have identified $D$ as
$4$. Since $A$ also beat $C$, it must be the case that $B$ identifies
$C = 4$, $A=1$, $B=2$, $D=3$. But for $B$ to receive a token, he must
then have won at least $p_{24} - \varepsilon$ of his matches against $C$.
This contradicts $A$'s identification $\{B, \, C\} = \{2, \, 3\}$, since
the latter
would mean that the fraction of matches $B$ won against $C$ was at most
$p_{23} + \varepsilon$.
\par Case (b): $j \in \{2,\,3\}$. $A$ must have identified some remainer
as $4$, say $C$, and then won at least a fraction $p_{14} - \varepsilon$
of their matches. $C$'s performance against $A$ is then so poor that he cannot
possibly receive a token. Moreover, $B$ observes this and hence must also
identify $A = 1$, $C = 4$. So if $B$ receives a token, he will have
agreed with $A$ on the identities of all four players. But then his token
cannot have positive weight, since there are no more arcs emanating from
$j$.
\\
\\
\emph{Proof of Claim 2.} Suppose each of $A, \, B, \, C$ receives a
token. Since $p_{34} > p_{13}$ by (\ref{eq:eps4}), each must identify
$D = 4$. This is because a remainer whom another remainer has identified as $4$
has performed too poorly to satisfy the condition for receiving a token. We
consider three cases.
\par Case (a): Someone, say $A$, identifies themselves as $1$. Then,
without loss of generality, they identify $B = 2$, $C = 3$. Since $A$
gets a positive weight token, he must at least have won more than half
of his matches against both $B$ and $C$. Hence, neither $B$ nor $C$ can
self-identify as $1$ and get a token. Since there are at most two arcs
emerging from $4$, $B$ and $C$ must identify themselves as the same
number, one of $2$ and $3$. But $B$ observes the matches between $A$ and
$C$ and, since $A$ got a token, he won at least a fraction
$p_{13} - \varepsilon$ of these. Thus $B$ must self-identify as $2$, hence
so does $C$. But $C$ observes the matches between $A$ and $B$, of which
$A$ won a majority, hence $C$ must identify $A$ as $1$. But then $C$
cannot get a token, since he lost at least a fraction
$p_{13} - \varepsilon$ of his matches against $A$.
\par Case (b): Nobody self-identifies as $1$, and someone lost at least
half of their matches against each of the other two. WLOG, let $A$ be
this ``loser''. The only way $A$ can get a token is if he self-identifies
as $3$. WLOG, he identifies $B = 1$, $C = 2$. To get a token he must have
won at least a fraction $p_{32} - \varepsilon$ of his matches against $C$.
But $C$ beat $A$ and $B$ didn't self-identify as $1$, hence $B$ must have
identified $C=1$, which means that $C$ won at least a fraction
$p_{12} - \varepsilon$ of his matches against $A$. This contradicts
(\ref{eq:eps}), given the additional assumption that
$p_{12} > p_{23}$ in (\ref{eq:eps4}).
\par Case (c): Nobody self-identifies as $1$, and everyone beat someone
else. Without loss of generality, $A$ beat $B$, who beat $C$ who beat
$A$. First suppose someone, say $A$, self-identifies as $2$. Then he
must identify $B = 1$, $C=3$. But then he would have to have beaten $C$
to get a token, a contradiction.
\par So, finally, we have the possibility that each of
$A,\,B,\,C$ self-identifies as $3$. Thus each identifies the other two as
$1$ and $2$, which means that in each pairwise contest, the fraction of
matches won by the winner lies in the interval
$(p_{12} - \varepsilon, \, p_{12} + \varepsilon)$.
Let $r_{CA}$ denote the fraction of matches won by $C$ against $A$.
Since $C$ beat $A$, the previous analysis implies that
$r_{CA} > p_{12} - \varepsilon$. But
$A$ identifies himself as $3$ and $B$ beat $C$, so he
must identify $C$ as $2$. Since $A$ gets a token, we must have
$r_{CA} < p_{23} + \varepsilon$. But these two inequalities for
$r_{CA}$ contradict (\ref{eq:eps4}) and (\ref{eq:eps}).
\end{proof}
\begin{corollary}\label{cor:unfairn}
$\mathcal{F}_n$ is a proper subset of $\mathcal{A}_n$, for all $n \geq 3$.
\end{corollary}
\begin{proof}
It is easy to see that $\mathcal{F}_n$ is a proper subset of
$\mathcal{A}^{*}_{n}$, for each $n \geq 3$. Then apply Theorem
\ref{thm:nplayer}.
\end{proof}
Given the preceding results, we now return to the consideration of the
maximum and minimum winning probabilities, $\Pi^{i,\,n}$ and $\Pi_{i,\,n}$
respectively, attainable by each player $i$. If we want to minimize the first
coordinate in a vector $v(G)$, there should be no arc pointing to $1$ from any
$j> 1$, and just the two loops from $1$ to itself. In that case,
$v_1 (G) = \frac{1}{n}$. For $i \geq 2$, in order to maximize the $i$:th
coordinate of $v(G)$, it is clear that the digraph $G$ should
\par - have one arc from $j$ to $i$, for each $j = i+1,\,\dots,\,n$,
\par - have two loops $(i,\,i)$,
\par - hence, have no arc from $i$ to $k$, for any $k < i$.
\\
For such $G$ we'll have $v_i (G) = \frac{{\hbox{indeg}}_{G}(i)}{2n} =
\frac{n-i+2}{2n} = \frac{1}{2} - \frac{i-2}{2n}$.
Hence, by Theorem
\ref{thm:nplayer}, we have
\begin{equation}
\Pi_{1,\,n} \leq \frac{1}{n}; \;\;\;\;\;\; \Pi^{i,\,n} \geq \frac{1}{2} - \frac{i-2}{2n}, \; i = 2, \dots, n.
\end{equation}
If Conjecture \ref{conj:nplayer} were true, we'd have equality everywhere.
Note that, by Proposition \ref{prop:upperbd}, we do indeed have the equality
$\Pi^{2,\,n} = \frac{1}{2}$, and by Proposition \ref{prop:generalbuffnerf},
$\Pi^{3,\,4} = \frac{3}{8}$. Other than these, we cannot prove equality in any of
the remaining cases for any $n \geq 4$. In particular, for every
$n \geq 4$ it remains open whether $\Pi_{1,\,n} = \Pi^{n,\,n} = \frac{1}{n}$.
Next, we determine the exact number of corners in
$\mathcal{A}^{*}_{n}$:
\begin{proposition}\label{prop:corners}
There are $\frac{3^{n-1} + 1}{2}$ corners in the convex
polytope $\mathcal{A}^{*}_{n}$.
\end{proposition}
\begin{proof}
We must determine the number of elements in the range of the
function $f : \mathcal{S}_n \rightarrow \mathcal{P}_n$ defined earlier.
We begin by noting that, in the encoding $f(\sigma)=v(G_\sigma)$,
we may not need to know
the entire permutation $\sigma$ in order to construct $G_\sigma$. In
particular, it suffices to know the subsequence $\sigma'$ of all vertices
that get assigned incoming arcs. We note that a vertex $i$ has no incoming
arcs in $G_\sigma$ if and only if it is either preceded by two
lower-numbered vertices or preceded by the vertex $1$. Therefore, any such
subsequence
$\sigma'$ is a sequence of distinct elements in $[n]$ that $(i)$ ends with a
$1$ and $(ii)$ for any $i$, at most one of
$\sigma'_1, \sigma'_2, \dots, \sigma'_{i-1}$ is smaller than $\sigma'_i$.
Conversely, any sequence $\sigma'$ that satisfies $(i)$ and $(ii)$ can
be extended to a permutation $\sigma$, without affecting which vertices get
incoming arcs, by putting the missing numbers after
the '$1$'. Hence the
possible subsequences $\sigma'$ are characterized by $(i)$ and $(ii)$.
We claim that the map
$\sigma'\mapsto v(G_{\sigma'})$ is injective.
Let $\sigma'$, $\sigma''$ be two distinct such sequences and pick $k$ such
that $\sigma'_1 = \sigma''_1, \dots, \sigma'_{k-1} = \sigma''_{k-1}$
and $\sigma'_{k} \neq \sigma''_{k}$, say $\sigma'_{k} < \sigma''_{k}$. To
prove injectivity it suffices, by (\ref{eq:graphvector1}),
to show that the vertex $\sigma''_{k}$
has higher indegree in $G_{\sigma''}$
than in $G_{\sigma'}$. We consider two cases:
\par Case 1: $\sigma''_{k}$ does not appear at all in the subsequence
$\sigma'$. Then, simply by how these subsequences were defined,
$\sigma''_{k}$ has indegree zero in $G_{\sigma'}$ and strictly positive
indegree in $G_{\sigma''}$.
\par Case 2: $\sigma''_{k} = \sigma'_{l}$ for some $l > k$. Since
$\sigma'_k < \sigma''_k$, property
$(ii)$ applied to $\sigma'$ implies that
$\sigma'_j = \sigma''_j > \sigma''_k$ for
every $j = 1,\,\dots,\,k-1$. Hence, in $G_{\sigma''}$, the vertex $\sigma''_k$
will retain both of its loops, whereas in $G_{\sigma'}$ there will be one arc
from $\sigma''_k$ to $\sigma'_k$. Moreover, since $\sigma'$ and
$\sigma''$ agree before the appearance of $\sigma''_k$, which then appears
first in $\sigma''$, if $v \in [n]$ is any vertex that sends an arc to
$\sigma''_k$ in $G_{\sigma'}$, then it will send at least as many arcs
to $\sigma''_k$ in $G_{\sigma''}$. Hence, the total indegree of $\sigma''_k$ will
be strictly higher in $G_{\sigma''}$ than in $G_{\sigma'}$, as desired.
It remains to count the number of sequences $\sigma'$ that satisfy properties
$(i)$ and $(ii)$. Denote this by $a_n$. Given such a sequence of elements in
$[n-1]$, we construct a sequence in $[n]$ by either $(1)$ doing nothing, $(2)$
placing $n$ first in the sequence, or $(3)$ inserting $n$ between the first
and second element (this is possible for all sequences except the one just
consisting of a '$1$'). Note that $n$ can never appear in the third position or later, as it would then be preceded by at least two smaller elements, violating $(ii)$; hence every valid sequence on $[n]$ arises from exactly one of these three operations. Thus for any $n\geq 2$, we have $a_n = 3a_{n-1}-1$. It
is easy to check that $a_1=1$ and thus it follows by induction that
$a_n= \frac{3^{n-1} + 1}{2}$ as desired.
\end{proof}
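\par As a cross-check on this count (illustrative only, and not needed for the proof), one can enumerate the sequences satisfying $(i)$ and $(ii)$ by brute force and compare with the closed form; the following Python sketch does this for small $n$.
\begin{verbatim}
# Brute-force count of the sequences satisfying (i) and (ii),
# compared with (3^(n-1) + 1)/2.
from itertools import permutations

def valid(seq):
    if seq[-1] != 1:                                     # property (i)
        return False
    return all(sum(1 for y in seq[:i] if y < x) <= 1     # property (ii)
               for i, x in enumerate(seq))

for n in range(1, 7):
    count = sum(1 for r in range(1, n + 1)
                  for seq in permutations(range(1, n + 1), r) if valid(seq))
    print(n, count, (3 ** (n - 1) + 1) // 2)
\end{verbatim}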
We close this section by posing a natural question which arises from the
previous discussion, but which remains unknown to us:
\begin{question}\label{quest:boundary}
For each $n \geq 3$, which boundary points of $\mathcal{A}^{*}_{n}$ are
achievable?
\end{question}
\section{Frugal tournaments}\label{sect:frugal}
A central idea of the unfair tournaments presented in Sections
\ref{sect:threeplayer} and \ref{sect:nplayer} is to first
choose one player uniformly at random to exclude from participation. This
player won't take part in any matches, though he might still win the
tournament. Let us call a
tournament with this property \emph{frugal}, as the organizers won't have
to pay the attendance costs for one of the players. In the proof of Theorem
\ref{thm:nplayer}, we constructed symmetric, honest and frugal tournaments
whose win vector can attain any interior point in $\mathcal{A}_n^*$ for any
$n\geq 4$. We will now show that,
under the restriction that the tournament is frugal, nothing outside of
$\mathcal{A}^{*}_{n}$ can be achieved.
\begin{theorem}\label{thm:frugal}
Let $\bm{T}$ be a symmetric, honest and frugal $n$-player tournament for any
$n\geq 2$. Then for any $P\in\mathcal{D}_n$,
$\bm{wv}((\bm{T},\,P))\in\mathcal{A}_n^*$.
\end{theorem}
\begin{corollary} The closure of the set of
all achievable win vectors for all symmetric,
honest and frugal $n$-player tournaments equals $\mathcal{A}_n^*$.
\end{corollary}
\begin{proof} This follows immediately from Theorems \ref{thm:frugal} and
\ref{thm:nplayer}.
\end{proof}
In order to prove Theorem \ref{thm:frugal}, we need a new formulation of
$\mathcal{A}_n^*$. We say that a matrix $M\in\mathbb{R}^{n\times n}$ is a
\emph{fractional arc flow} if
\begin{align}
m_{ij} \geq 0&\text{ for all }i \geq j,\label{eq:af1}\\
m_{ij} = 0&\text{ for all }i<j,\label{eq:af2}\\
m_{ij} \leq \frac{1}{2}&\text{ for all }j\notin\{1,\,i\},\label{eq:af3}\\
\sum_{j=1}^n m_{ij} = 1&\text{ for all }i\in [n].\label{eq:af4}
\end{align}
\begin{lemma}\label{lem:faf}
For any fractional arc flow $M$, define $v(M)\in\mathbb{R}^n$ by
$v_j(M)=\frac1n \sum_{i=1}^n m_{ij}$. Then $v(M)\in \mathcal{A}_n^*$.
\end{lemma}
\begin{proof}
Let $A$ be the set of vectors $v$ that can be obtained from fractional arc
flows in this way. Clearly, $A$ is a convex
polytope in $\mathbb{R}^n$. Thus it is
uniquely defined by the values of $\max_{v\in A} u\cdot v$ for all
$u \in \mathbb{R}^n$. For a given $u\in\mathbb{R}^n$, it is easy to optimize
the
corresponding fractional arc flow. Namely, initially all vertices are given
a flow of $1$. Go through the indices $j\in [n]$ in the order of decreasing
$u_j$, with ties broken arbitrarily, and try to send as much remaining flow as
possible from all $i\geq j$ to $j$. By (\ref{eq:af3}),
we see that any such optimal $v$ is
given by $v(G)$ for some $G\in \mathcal{G}_n$. From the
discussion in the paragraph preceding Conjecture \ref{conj:nplayer}, it is
easy to see that the
vector $v(G)$ is also the optimal vector in the maximization problem
$\max_{v\in\mathcal{A}_n^*} u\cdot v$. Hence $A=\mathcal{A}_n^*$ as desired.
\end{proof}
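\par The greedy step in this proof is easy to make explicit. The following Python sketch (illustrative, and with our own naming) builds the fractional arc flow produced by the procedure just described for a given weight vector $u$, and returns the corresponding vector $v(M)$.
\begin{verbatim}
# Greedy fractional arc flow for a weight vector u (players 1-indexed).
# As argued in the proof, the resulting v maximizes u . v over all
# fractional arc flows, and coincides with v(G) for some G in G_n.
def greedy_flow(u):
    n = len(u)
    remaining = {i: 1.0 for i in range(1, n + 1)}   # flow left to send from i
    m = {i: {j: 0.0 for j in range(1, n + 1)} for i in range(1, n + 1)}
    for j in sorted(range(1, n + 1), key=lambda j: -u[j - 1]):
        for i in range(j, n + 1):                   # only i >= j may send to j
            cap = remaining[i] if j in (1, i) else min(remaining[i], 0.5)
            m[i][j] += cap
            remaining[i] -= cap
        # every vertex eventually occurs as j = i, so nothing is left over
    v = [sum(m[i][j] for i in range(1, n + 1)) / n for j in range(1, n + 1)]
    return m, v

print(greedy_flow([0.0, 1.0, 0.5, 0.2])[1])   # -> [0.25, 0.5, 0.25, 0.0]
\end{verbatim}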
\begin{proof}[Proof of Theorem \ref{thm:frugal}]
For each $i\in[n]$, let $\bm{T}^i$ denote the modified version of this
tournament that always excludes player $i$. By possibly precomposing
$\bm{T}$ with a random permutation of the players, we may assume that the
rules of $\bm{T}^i$ do not depend on $(a)$ which player $i$ was excluded, and
$(b)$ the order of the remaining players $[n]\setminus \{i\}$.
Let $\pi_j^i(P)$ denote the winning probability for player $j$ in the
specialization $(\bm{T}^i, P)$. Then
$\pi_j(P) = \frac{1}{n}\sum_{i=1}^n\pi_j^i(P)$. As $\bm{T}$ is honest, it
follows directly from the definition of honesty that also $\bm{T}^i$ is
honest, hence $\pi_j^i(P)$ is increasing in $p_{jk}$ for any $k\neq j$.
Moreover, if two players $i$ and $j$ are identical for a given
$P\in\mathcal{M}_n$ in the sense that $p_{ik}=p_{jk}$ for all $k\in[n]$, then
by $(a)$ and $(b)$, $$\pi^i_j(P)=\pi^j_i(P)$$ and
$$\pi^k_i(P)=\pi^k_j(P)\text{ for any }k\neq i, j.$$
Using the same argument as in Proposition \ref{prop:upperbd} it follows that,
for any $P \in \mathcal{D}_n$,
$$\pi^i_j(P)\leq \frac{1}{2}\text{ unless either (i) $j=1$, (ii) $i=j$, or (iii) $i=1$ and $j=2$.}$$
Moreover, for any $P\in\mathcal{D}_n$ and $i<j$, let $P'$ be the matrix
obtained by buffing player $j$ to be identical to player $i$. Then, by
honesty, $\pi^i_j(P) \leq \pi^i_j(P')$, by $(a)$, $\pi^i_j(P')=\pi^j_i(P')$,
and as $\pi^j_i(\cdot)$ does not depend on the skill of player
$j$, $\pi^j_i(P')=\pi^j_i(P)$. Thus
$$\pi^i_j(P) \leq \pi^j_i(P)\text{ for any $i<j$ and $P\in\mathcal{D}_n$.}$$
The idea now is that, for a given $P\in\mathcal{D}_n$, we can interpret the
probabilities $\pi^i_j(P)$ in terms of a
fractional arc flow. For any $i, j\in[n]$
we define $m'_{ij} = \pi^i_j(P)$. Then $\pi_j(P) = \frac1n \sum_{i=1}^n m'_{ij}$.
Now, this does not necessarily define an arc flow as $m'_{ij}$ might be
positive
even if $i<j$, and we might have $m'_{12}>\frac{1}{2}$ (which is really just a
special case of the former). However, as $m'_{ij}\leq m'_{ji}$ whenever $i<j$,
we can cancel out these ``backwards flows'' by, whenever $m'_{ij}=x>0$ for
$i<j$, reducing $m'_{ij}$ and $m'_{ji}$ and increasing $m'_{ii}$ and $m'_{jj}$,
all by $x$. Let $(m_{ij})$ be the resulting matrix. Then this is a fractional arc flow.
As the cancelling does not change the net influx to each vertex, we have
$\pi_j(P) = \frac1n \sum_{i=1}^n m_{ij}$. Hence the theorem follows by Lemma
\ref{lem:faf}.
\end{proof}
\section{Tournament maps}\label{sect:maps}
As we have seen earlier in the article, an $n$-player tournament $\bm{T}$ induces a map $P\mapsto\bm{wv}((\bm{T},\,P))$ from $\mathcal{M}_n$ to the set $\mathcal{P}_n$ of probability distributions on
$[n]$. The aim of this section is to see how honest and symmetric tournaments
can be characterized in terms of these maps.
We define an $n$-player \emph{tournament map} as any continuous function $f$
from $\mathcal{M}_n$ to $\mathcal{P}_n$. For any $M\in \mathcal{M}_n$
we denote $f(M) = (f_1(M),\dots,\,f_n(M))$.
Similarly to tournaments, we define:
\\
\\
{\sc Symmetry:} For any permutation $\sigma\in\mathcal{S}_n$ and any
$P\in \mathcal{M}_n$, we define $Q=(q_{ij})\in\mathcal{M}_n$ by
$q_{\sigma(i)\sigma(j)} = p_{ij}$ for all $i, j \in [n]$. We say that a tournament
map $f$ is \emph{symmetric} if, for any
$P\in \mathcal{M}_n$, $\sigma\in\mathcal{S}_n$ and any $i\in[n]$, we have
$f_i(P)=f_{\sigma(i)}(Q).$
\\
\\
{\sc Honesty:} A tournament map $f$ is (strictly) \emph{honest} if for any two
distinct $i, j\in [n]$ we have that $f_i(P)$ is (strictly) increasing in
$p_{ij}$.
\\
\\
Using these definitions it follows that the tournament map $f_{{\bm{T}}}$
induced by a
tournament $\bm{T}$ inherits the properties of $\bm{T}$.
\begin{lemma}\label{lemma:symesymhonehon}
The tournament map induced by any symmetric tournament is symmetric. The
tournament map induced by any honest tournament is honest.
\end{lemma}
\begin{proof} The first statement is the definition of a symmetric tournament.
The second statement follows from Lemma \ref{lem:honesty}.
\end{proof}
We now want to show a converse to this lemma. Here we have to be a bit careful
though. Consider for instance the $2$-player tournament map
$$f_1(P) := \frac{1}{2}+\sin(p_{12}-\frac{1}{2}), \;\;\;\;
f_2(P) := \frac{1}{2}-\sin(p_{12}-\frac{1}{2}).$$
This can be shown to be symmetric and honest, but as $f_1$ and $f_2$ are not
polynomials in the entries of $P$, this map cannot be induced by any tournament
whatsoever. On the other hand, for any tournament map $f$, we can construct a
tournament $\bm{T}_f$ whose win vector approximates $f$ arbitrarily well.
\begin{definition}\label{def:Tf}
Let $f$ be an $n$-player tournament map and let $N$ be a (large) positive
integer. We let $\bm{T}_f=\bm{T}_{f,N}$ denote the tournament defined as
follows: \begin{itemize} \item Play $N$ iterations of round-robin.
\item Let $\hat{p}_{ij}$ denote the fraction of matches that $i$ won against $j$, and let $\hat{P}\in\mathcal{M}_n$ be the corresponding matrix.
\item Randomly elect a tournament winner from the distribution given by $f(\hat{P})$.
\end{itemize}
\end{definition}
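\par A minimal Python sketch of $\bm{T}_{f,\,N}$ is given below (illustrative only). The particular map used here is just a placeholder, a normalized row-sum map of the same form as the map $h$ in Subsection \ref{subsect:strict} below; any tournament map $f$ could be substituted, and all names and parameter values are our own.
\begin{verbatim}
# Sketch of T_{f,N}: play N round-robins, form the empirical matrix P_hat,
# and draw the winner from f(P_hat).  Players are 0-indexed.  The map h
# below is a placeholder (a normalized row sum, which is symmetric and
# strictly honest); any tournament map can be plugged in instead.
import random

def h(M):
    n = len(M)
    scores = [sum(M[i][j] for j in range(n) if j != i) for i in range(n)]
    return [s / sum(scores) for s in scores]       # sum(scores) = n(n-1)/2

def play_T_f(f, P, N):
    n = len(P)
    wins = [[0] * n for _ in range(n)]
    for _ in range(N):                             # N iterations of round-robin
        for i in range(n):
            for j in range(i + 1, n):
                if random.random() < P[i][j]:
                    wins[i][j] += 1
                else:
                    wins[j][i] += 1
    P_hat = [[0.5 if i == j else wins[i][j] / N for j in range(n)]
             for i in range(n)]
    return random.choices(range(n), weights=f(P_hat))[0]

P = [[0.5, 0.6, 0.7], [0.4, 0.5, 0.6], [0.3, 0.4, 0.5]]
freq = [0, 0, 0]
for _ in range(5000):
    freq[play_T_f(h, P, N=200)] += 1
print([c / 5000 for c in freq])    # approx h(P) = (0.433, 0.333, 0.233)
\end{verbatim}
Since the placeholder map is affine in the entries of its argument and $\mathbb{E}\hat{P} = P$ entrywise, in this particular case $\pi_i(P) = \mathbb{E}h_i(\hat{P}) = h_i(P)$ exactly; for a general $f$ one only gets the approximation guaranteed by Proposition \ref{prop:tournamentmaptournament} below.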
\begin{proposition}\label{prop:tournamentmaptournament}
Let $f$ be an $n$-player tournament map. For any $\varepsilon>0$ there exists
an $N_0$ such that for $N\geq N_0$, the tournament $\bm{T}_f$ satisfies
$\abs{\pi_i(P)-f_i(P)} < \varepsilon$ for all $P\in \mathcal{M}_n$ and all
$i\in [n]$. Moreover $\bm{T}_f$ is symmetric if $f$ is symmetric, and (strictly) honest
if $f$ is (strictly) honest.
\end{proposition}
\begin{proof}
It is easy to see that this tournament is symmetric if $f$ is so, and
likewise for honesty. It only remains to show that the win vector is
sufficiently close to $f(P)$ for all $P\in\mathcal{M}_n$. First, note that
$\pi_i(P) = \mathbb{E}f_i(\hat{P})$. Hence, by Jensen's inequality,
$$\abs{\pi_i(P)-f_i(P)} \leq \mathbb{E}\abs{f_i(\hat{P})-f_i(P)}.$$ As $f$ is
continuous and $\mathcal{M}_n$ is compact, $f$ is uniformly continuous.
Hence, given $\varepsilon>0$, there exists a
$\delta>0$ such that, for any $P$,
$\abs{f_i(\hat{P})-f_i(P)} < \varepsilon/2$ whenever
$\|\hat{P}-P\|_\infty < \delta$. Choosing $N_0$ sufficiently large, we can
ensure that $\mathbb{P}(\|\hat{P}-P\|_\infty \geq \delta) < \varepsilon/2$,
by the Law of Large Numbers.
As, trivially, $\abs{f_i(\hat{P})-f_i(P)} \leq 1$, it follows that
$$\mathbb{E}\abs{f_i(\hat{P})-f_i(P)} < \varepsilon/2 \cdot \mathbb{P}(\|\hat{P}-P\|_\infty < \delta) + 1\cdot \mathbb{P}(\|\hat{P}-P\|_\infty \geq \delta) \leq \varepsilon/2 + \varepsilon/2 = \varepsilon.$$
\end{proof}
For a given $\varepsilon>0$, we say that two $n$-player tournaments $\bm{T}_1$ and $\bm{T}_2$ are \emph{$\varepsilon$-close} if, for any $i\in[n]$ and $P\in\mathcal{M}_n$, we have $\abs{\pi_i(\bm{T}_1, P)-\pi_i(\bm{T}_2, P)} < \varepsilon$. A nice implication of the above results is that Definition \ref{def:Tf} provides an almost general construction of symmetric, honest tournaments in the following sense.
\begin{corollary}\label{cor:andersnormalform}
Any symmetric and honest tournament $\bm{T}$ is $\varepsilon$-close to a tournament $\bm{T}_f$ for a symmetric and honest tournament map $f$. As a consequence any such $\bm{T}$ is $\varepsilon$-close to a symmetric and honest tournament where
\begin{itemize}
\item the match schedule is fixed,
\item each pair of players meet the same number of times,
\item the tournament satisfies a stronger form of honesty, namely, given
the outcomes of all other matches in the tournament, both past and future, it is never better to lose the current match than to win it.
\end{itemize}
\end{corollary}
\begin{proof}
Let $f$ be the induced tournament map of $\bm{T}$. Then $f$ is symmetric and honest by Lemma \ref{lemma:symesymhonehon}, and by Proposition \ref{prop:tournamentmaptournament}, $\bm{T}_f=\bm{T}_{f, N}$ is $\varepsilon$-close to $\bm{T}$ for $N$ sufficiently large. It is clear that $\bm{T}_f$ has the claimed properties.
\end{proof}
Let $A_n$ denote the set of all vectors $f(P)$ attained by symmetric
and honest $n$-player tournament maps $f$ at doubly monotonic
matrices $P \in \mathcal{D}_n$.
\begin{corollary}\label{cor:weakaAn}
$\bar{A}_n = \mathcal{A}_n$, where $\bar{A}_n$ denotes the closure of
$A_n$.
\end{corollary}
\begin{proof}
If $\bm{T}$ is a symmetric and honest $n$-player tournament then, by
Lemma \ref{lemma:symesymhonehon}, the
tournament map $f_{\bm{T}}$ induced by $\bm{T}$ is also
symmetric and honest. For any $P \in \mathcal{M}_n$, we have
$\bm{wv}(\bm{T}, \, P) = f_{\bm{T}}(P)$. It follows that
$\mathcal{A}_n \subseteq \bar{A}_n$. Conversely, for any symmetric and
honest tournament map $f$ and any doubly monotonic matrix
$P\in\mathcal{D}_n$, we know by Proposition
\ref{prop:tournamentmaptournament} that there exist symmetric and honest
tournaments $\bm{T}_f$, whose win vector at $P$ approximates $f(P)$
arbitrarily
well. Hence $\mathcal{A}_n$ is dense in $\bar{A}_n$. As both sets are closed,
they must be equal.
\end{proof}
It turns out that $A_n$ is a closed set, hence $A_n = \mathcal{A}_n$,
a fact which will be established in
Subsection \ref{subsect:polytope} below. Before that, we consider two other
applications of the above material.
\subsection{Strictly honest tournaments}\label{subsect:strict} As has been remarked earlier in the article, the constructions of symmetric and honest tournaments presented in Sections \ref{sect:threeplayer} and \ref{sect:nplayer} are generally \emph{not} strictly honest. Since, in practice, honestly attempting to win a match typically requires a greater expenditure of effort than not trying, it is natural to require that a tournament should be strictly honest so as to guarantee a strictly positive payoff for winning. We will now show how the proof of Corollary \ref{cor:andersnormalform} can be modified so that the tournament $\bm{T}_f$ is also strictly honest. Hence, any symmetric and honest tournament can be approximated arbitrarily well by symmetric and strictly honest ones.
Given $\bm{T}$, let $g = g_{\bm{T}}$ be the induced tournament map and let $h$ be
any symmetric and strictly honest tournament map whatsoever, for instance
$$h_i(M) := \frac{1}{{n \choose 2}} \sum_{j\neq i} m_{ij}.$$ Then
$f=(1-\frac{\varepsilon}{2})g + \frac{\varepsilon}{2} h$ is a symmetric and
strictly honest tournament map such that, for any $P \in \mathcal{M}_n$,
$$||f(P)-g(P)||_{\infty} \leq \frac{\varepsilon}{2} ||g(P)-h(P)||_{\infty} \leq \frac{\varepsilon}{2}.$$
By Proposition \ref{prop:tournamentmaptournament}, we know that choosing $N$
sufficiently large ensures that, for any $P \in \mathcal{M}_n$,
$||\bm{wv}((\bm{T}_{f, \, N}, \, P)) - f(P)||_{\infty} < \frac{\varepsilon}{2}$. Hence $\bm{T}_{f, N}$ is $\varepsilon$-close to $\bm{T}$. On the other hand, as $f$ is strictly honest, so is $\bm{T}_{f, N}$, as desired.
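\par In code, the mixing step is a single convex combination; the sketch below (with our own naming, and with $g$ left abstract) is included only to make the construction of $f$ explicit.
\begin{verbatim}
# f = (1 - eps/2) g + (eps/2) h, where g is any tournament map (e.g. the
# induced map of T, left abstract here) and h is the strictly honest map above.
def h(M):
    n = len(M)
    pairs = n * (n - 1) / 2
    return [sum(M[i][j] for j in range(n) if j != i) / pairs for i in range(n)]

def mix(g, h, eps):
    def f(M):
        gM, hM = g(M), h(M)
        return [(1 - eps / 2) * a + (eps / 2) * b for a, b in zip(gM, hM)]
    return f
\end{verbatim}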
\subsection{Tournaments with rounds}\label{subsect:rounds}
In our definition of ``tournament'' we required that matches be played
one-at-a-time. Many real-world tournaments consist of
``rounds'' of matches, where matches in the same round are
in principle meant to be played simultaneously. In
practice, things usually get even more complicated, with each
round being further subdivided into non-temporally overlapping
segments, for reasons usually having to do with TV viewing. Our formal
definition of tournament is easily extended to accommodate this much complexity:
simply replace ``matches'' by ``rounds of matches'', where each player plays
at most one match per round. In defining honesty, it then makes sense
to condition both on the results from earlier rounds and
on the pairings for the current round.
\par If $\bm{T}$ is such a ``tournament with rounds'', then
there is a canonical associated tournament without rounds
$\bm{T}^{\prime}$, obtained by ordering the matches within each round uniformly
at random. It is easy to see that \par (a) $\bm{T}$ symmetric $\Leftrightarrow$
$\bm{T}^{\prime}$ symmetric,
\par (b) $\bm{T}^{\prime}$ (strictly) honest $\Rightarrow$ $\bm{T}$ (strictly)
honest.
\\
\par The reverse implication in (b) does not always hold, a
phenomenon which will be familiar to sports fans{\footnote{For example, many
professional European football leagues currently require that, in the final
round of the season, all matches kick off at the same time. The same rule
applies to the final round of group matches in major international
tournaments such as the World Cup and European Championships and was
introduced after the so-called ``Disgrace of Gij\'{o}n'': \texttt{https://en.wikipedia.org/wiki/Disgrace$\underline{\;}$of$\underline{\;}$Gijon}}}. A toy
counterexample with four players is presented below.
\par Nevertheless, a tournament with rounds also induces a tournament map and, using the same proof idea as Lemma \ref{lem:honesty}, one can show that the induced tournament
map of any symmetric and honest tournament with rounds is symmetric and honest.
Hence, by Corollary \ref{cor:weakaAn}, any win vector that can be attained
by a symmetric and honest tournament with rounds for a doubly monotonic matrix
is contained in $\mathcal{A}_n$. In fact, for any $\varepsilon>0$, Proposition
\ref{prop:tournamentmaptournament} implies that any symmetric and honest
tournament with rounds is $\varepsilon$-close to a regular (i.e. one without
rounds) symmetric and honest tournament $\bm{T}_f$.
\\
\\
{\sc Example 6.2.1.} Consider the following tournament with rounds $\bm{T}$:
\\
\\
\emph{Step 0:} Pair off the players uniformly at random. Say the pairs are
$\{i,\,j\}$ and $\{k,\,l\}$.
\\
\emph{Round 1:} Play matches $\{i,\,j\}$ and $\{k,\,l\}$.
\\
\emph{Round 2:} Play the same matches.
\\
\emph{Step 3:} Toss a fair coin. The winner of the tournament is determined as
follows:
\par If heads, then
\par \hspace{0.5cm} - if $k$ and $l$ won one match each, the loser of the
first match between $i$ and $j$ wins the tournament
\par \hspace{0.5cm} - otherwise, the winner of the first $\{i,\,j\}$ match
wins the tournament.
\par If tails, then same rule except that we interchange the roles of the
pairs $\{i,\,j\} \leftrightarrow \{k,\,l\}$.
\\
\par It is clear that $\bm{T}$
is symmetric and honest (though not strictly honest, since what one does in
Round 2 has no effect on one's own probability of winning the tournament).
Without loss of generality, take player $i$. If he loses in Round $1$,
then he wins the tournament with probability $p_{kl}(1-p_{kl})$. If he wins in
Round 1, then he wins the tournament with probability
$\frac{1}{2} (p_{kl}^{2} + (1-p_{kl})^2)$. The latter expression is at least as
large for any $p_{kl}$, and strictly larger if $p_{kl} \neq \frac{1}{2}$. However, consider
any instance of $\bm{T}^{\prime}$. Without loss of generality, $i$ and $j$ play
first in Round 1. Suppose $p_{ij} > \frac{1}{2}$ and $j$ wins this match. Then
each of $k$ and $l$ would be strictly better off if they lost their first
match.
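\par For concreteness, take $p_{kl} = p_{ij} = 0.7$: in $\bm{T}$, losing in Round 1 gives player $i$ a winning probability of $0.7 \cdot 0.3 = 0.21$, while winning gives $\frac{1}{2}(0.7^2 + 0.3^2) = 0.29$; in the instance of $\bm{T}^{\prime}$ just described, once $j$ has won the first match, a player in the other pair wins the tournament with probability $\frac{1}{2}(1 - p_{ij}) = 0.15$ if he wins his first match, but with probability $\frac{1}{2} p_{ij} = 0.35$ if he loses it.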
\subsection{$A_n = \mathcal{A}_n$ is a finite union of convex polytopes}\label{subsect:polytope}
We already know that $\mathcal{A}_n$ is a convex polytope for $n = 1, 2, 3$ and, if
Conjecture \ref{conj:nplayer} holds, then this is true in general.
In this subsection, we extend the ideas of tournament maps to show that $\mathcal{A}_n$ is a finite union of convex polytopes. We will here take \emph{convex polytope} to mean a set in $\mathbb{R}^n$ for some $n$ that can be obtained as the convex hull of a finite number of points. Equivalently, it is a bounded region of $\mathbb{R}^n$ described by a finite number of non-strict linear inequalities. In particular, a convex polytope is always a closed set. As a corollary, we show the stronger version of Corollary \ref{cor:weakaAn} that $A_n = \mathcal{A}_n$. In particular, for any $n\geq 1$, this gives the alternative characterization
\begin{equation}
\mathcal{A}_n = \{f(P) : f\text{ is a symmetric and honest $n$-player tournament map}, \, P\in\mathcal{D}_n\}
\end{equation}
of the closure of the set of achievable win vectors.
For any $P\in\mathcal{M}_n$, we define
\begin{equation}
A_n(P) = \{f(P) : f\text{ is a symmetric and honest $n$-player tournament map}\}.
\end{equation}
By definition,
\begin{equation}\label{eq:ulgyAn}
A_n = \bigcup_{P \in \mathcal{D}_n} {A}_n (P)
\end{equation}
and so, by Corollary \ref{cor:weakaAn},
\begin{equation}\label{eq:infiniteP}
\mathcal{A}_n = \bar{A}_n = \overline{\bigcup_{P\in\mathcal{D}_n} {A}_n(P)}.
\end{equation}
Our strategy will consist of two main steps. First, we show that it
suffices to take the union in \eqref{eq:ulgyAn} and therefore also in \eqref{eq:infiniteP} over a finite number of
$P\in\mathcal{D}_n$. Second, for any such $P$ we give a discretization argument
that shows that ${A}_n(P)$ is a convex polytope. As then $A_n$ is a finite union of closed sets, it is closed. Hence $\mathcal{A}_n=A_n$ (without closure).
Let us begin with the first step. For any two matrices $P, Q\in \mathcal{M}_n$,
we say that $P$ and $Q$ are \emph{isomorphic} if
$p_{ij}<p_{kl} \Leftrightarrow q_{ij}<q_{kl}$. As there are only a finite number
of ways to order $n^2$ elements, the number of isomorphism classes is clearly
finite.
\begin{proposition}
If $P$ and $Q$ are isomorphic, then ${A}_n(P) = {A}_n(Q)$.
\end{proposition}
\begin{proof}
Let $B=\{p_{ij} : i, j\in[n]\}$ and $C=\{q_{ij} : i, j\in[n]\}$. As the entries
of $P$ and $Q$ are ordered in the same way, the sets $B$ and $C$ contain
the same number of elements. Moreover, as each set contains $\frac{1}{2}$ and
is invariant under the map $x\mapsto 1-x$, each contains an odd number of
elements. Let us enumerate these by $b_0 < b_1 < \dots < b_{2k}$ and
$c_0 < c_1 < \dots < c_{2k}$. Then $b_k=c_k=\frac{1}{2}$ and
$b_i+b_{2k-i}=c_i+c_{2k-i}=1$ for all $i$. We define
$\varphi:[0, 1]\rightarrow[0, 1]$ to be the unique piecewise-linear
function satisfying $\varphi(0)=0$, $\varphi(b_i)=c_i$ for
all $0\leq i \leq 2k$, $\varphi(1)=1$.
It follows that $\varphi$ is a continuous
increasing function such that $\varphi(1-x) = 1-\varphi(x)$ for all
$x\in[0, 1]$. Hence, by letting $\varphi$ act on $P\in\mathcal{M}_n$
coordinate-wise, we can consider $\varphi$ as an increasing map from
$\mathcal{M}_n$ to itself such that $\varphi(P)=Q$.
Now, for any symmetric and honest tournament map $f$, it follows that
$f \circ \varphi$ and $f \circ \varphi^{-1}$ are also symmetric and honest
tournament maps. Moreover $f(Q)=(f\circ\varphi)(P)$ and
$f(P)=(f\circ\varphi^{-1})(Q)$. Hence the same win vectors are achievable for
$P$ and $Q$, as desired.
\end{proof}
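\par For illustration only (this plays no role in the formal argument), the rescaling map $\varphi$ constructed in the proof is easy to realize numerically by piecewise-linear interpolation through the matched break points; in the following Python sketch the function and variable names are ours.
\begin{verbatim}
import numpy as np

def rescaling_map(P, Q):
    """Increasing piecewise-linear phi with phi(0)=0, phi(1)=1 and
    phi(b_i)=c_i, where b_i, c_i are the sorted entry values of P, Q."""
    b = np.unique(np.concatenate(([0.0, 1.0], np.ravel(P))))
    c = np.unique(np.concatenate(([0.0, 1.0], np.ravel(Q))))
    assert len(b) == len(c)              # matching numbers of break points
    return lambda x: np.interp(x, b, c)

# Two isomorphic 2x2 examples (entries ordered in the same way):
P = np.array([[0.5, 0.8], [0.2, 0.5]])
Q = np.array([[0.5, 0.6], [0.4, 0.5]])
phi = rescaling_map(P, Q)
assert np.allclose(phi(P), Q)            # phi maps P to Q entry-wise
\end{verbatim}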
As for the second step, we want to show that for any fixed $P\in\mathcal{M}_n$, ${A}_n(P)$ is a convex polytope. Given $P$, we define $B_P$ as the set consisting
of $0, 1$ and all values
$p_{ij}$ for $i, j\in[n]$. We define $\mathcal{M}_n(P)$ as the set of all
matrices $Q\in\mathcal{M}_n$ such that $q_{ij}\in B_P$ for all $i, j\in[n]$,
and define a $P$-discrete tournament map as a function from
$\mathcal{M}_n(P)$ to
$\mathcal{P}_n$. We define symmetry and honesty in the same way as for regular
tournament maps. Let ${A}_n'(P)$ be the set of all vectors $f(P)$ for
$P$-discrete, symmetric and honest $n$-player tournament maps.
\begin{proposition}
For any $P\in\mathcal{M}_n$, ${A}'_n(P)$ is a convex polytope.
\end{proposition}
\begin{proof}
As $\mathcal{M}_n(P)$ is a finite set, we can represent any $P$-discrete
$n$-player tournament map as a vector in a finite-dimensional (more precisely
$\left(\abs{\mathcal{M}_n(P)}\times n\right)$-dimensional) space. The
conditions that the map is symmetric and honest can be expressed as a finite
number of linear equalities and non-strict inequalities to be
satisfied by this vector. The set of such vectors is also clearly bounded, as it is contained in $\mathcal{P}_n^{\abs{\mathcal{M}_n(P)}}$, which is a bounded set. Hence, the set of $P$-discrete, symmetric and
honest $n$-player tournament maps forms
a convex polytope. Evaluating a tournament map at $P$ can be
interpreted as a projection of the corresponding vector, hence
${A}_n'(P)$ is a
linear projection of a convex polytope, which means that it must be a
convex polytope itself.
\end{proof}
\begin{proposition}
For any $P\in\mathcal{M}_n$, ${A}_n(P)={A}_n'(P)$.
\end{proposition}
\begin{proof}
As the restriction of any symmetric and honest tournament map $f$ to
$\mathcal{M}_n(P)$ is a symmetric and honest $P$-discrete tournament map, it
follows that ${A}_n(P)\subseteq {A}'_n(P)$. To prove that
${A}'_n(P)\subseteq {A}_n(P)$, it suffices to show that any
symmetric and honest $P$-discrete tournament map $f$ can be extended to a
symmetric and honest (non-discrete) tournament map $g$.
Given $Q\in\mathcal{M}_n$, we construct a random matrix
$\mathbf{R}\in\mathcal{M}_n(P)$ as follows: for each pair of players
$\{i, j\}$, if $q_{ij}$, and thereby also $q_{ji}$, are contained in $B_P$, let
$\mathbf{r}_{ij}=q_{ij}$ and $\mathbf{r}_{ji}=q_{ji}$.
Otherwise, write
$q_{ij} = pa_k+(1-p)a_{k+1}$ for $p\in(0, 1)$ where $a_k, a_{k+1}$ denote
consecutive elements in $B_P$ and, independently for each such pair of
players, put $\mathbf{r}_{ij}=a_k$, $\mathbf{r}_{ji}=1-a_k$ with probability
$p$, and
$\mathbf{r}_{ij}=a_{k+1}$, $\mathbf{r}_{ji}=1-a_{k+1}$ with probability $1-p$.
We define $g(Q) = \mathbb{E}f(\mathbf{R})$. This construction is clearly
continuous and symmetric, and a simple coupling argument shows that $g_i(Q)$
is increasing in $q_{ij}$, thus $g$ is honest. Moreover, by construction
$g(P)=f(P)$. Hence ${A}'_n(P)\subseteq {A}_n(P)$, as desired.
\end{proof}
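\par The randomized rounding used in this proof is, entry by entry, just an unbiased rounding to the two adjacent grid values. The Python sketch below is purely illustrative (names are ours); in the actual construction one additionally sets $\mathbf{r}_{ji}=1-\mathbf{r}_{ij}$ and rounds independently for each pair of players.
\begin{verbatim}
import numpy as np

def round_to_grid(q, grid, rng=np.random.default_rng()):
    """Round q to one of the two adjacent grid values so that the
    expected outcome equals q."""
    grid = np.sort(np.asarray(grid))
    if np.any(grid == q):
        return float(q)
    k = np.searchsorted(grid, q) - 1   # grid[k] < q < grid[k+1]
    a, b = grid[k], grid[k + 1]
    p = (b - q) / (b - a)              # q = p*a + (1-p)*b
    return float(a if rng.random() < p else b)
\end{verbatim}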
\section{Futile Tournaments}\label{sect:futile}
Recall the notations $\pi_{i}^{+}$, $\pi_{i}^{-}$ in the definition of honest
tournaments in Section \ref{sect:defi}. In words, they were the probabilities
of $i$ winning the tournament, conditioned on whether $i$ won or lost a given
match and given the results of earlier matches and knowledge of the rules of
the tournament. Honesty was the criterion that $\pi_{i}^{+} \geq \pi_{i}^{-}$
should always hold. We now consider a very special case:
\begin{definition} With notation as above, a tournament is said to be
\emph{futile} if $\pi_{i}^{+} = \pi_{i}^{-}$ always holds.
\end{definition}
One natural way to try to reconcile the (arguably) paradoxical fact that
symmetric and honest tournaments can benefit a worse player over a better
one is to imagine the winning probability of player $i$ to be divided into
two contributions. First, the result of matches where player $i$ is involved,
where, by honesty, a higher ranked player should always be better off.
Second, the result of matches where $i$ is not involved, where there
is no immediate reason a player with low rank could not benefit the most.
Following this intuition, it would make sense to expect the most unfair
symmetric and honest tournaments to be ones without the first contribution,
that is, symmetric and futile tournaments. However, as the following result
shows, this is not the case.
\begin{proposition}\label{prop:futile}
If $\bm{T}$ is a symmetric and futile $n$-player tournament, then
$\pi_1 = \dots = \pi_n = \frac{1}{n}$ in any specialization.
\end{proposition}
\begin{proof}
Since $\bm{T}$ is futile, it is honest and hence, by
Lemma \ref{lem:honesty}, $\pi_i$ is increasing in $p_{ij}$ at every point
of $\mathcal{M}_{n}$, for any $i \neq j$. But
consider the tournament $\bm{T}^c$ which has the same rules as
$\bm{T}$, but where we reverse the result of every match. Clearly
this will also be futile, hence honest, and corresponds to
a change of variables $p_{uv} \mapsto 1 - p_{uv} (= p_{vu})$. Hence,
for $i \neq j$ it
also holds that $\pi_i$ is decreasing in $p_{ij}$ at every point of
$\mathcal{M}_{n}$ and so $\pi_i$ does not depend on $p_{ij}$ for any
$i\neq j$.
Given a matrix $P\in\mathcal{M}_n$, we say that a player $i>1$ is a
\emph{clone} of player $1$ if $p_{1j}=p_{ij}$ for all $j\in [n]$. Clearly,
$\pi_1=\pi_2=\dots =\pi_n = \frac{1}{n}$ for any matrix $P$ with $n-1$
clones of player $1$. We show by induction that the same equality holds
for any number of clones.
Assume $\pi_1=\pi_2=\dots =\pi_n = \frac{1}{n}$ whenever
$P\in\mathcal{M}_n$
contains $k\geq 1$ clones of player $1$. Let $P\in\mathcal{M}_n$ be a
matrix that contains $k-1$ such clones. For any player $i> 1$ that is not
a clone, we can make it into one by modifying the entries in the
$i$-th row and column of $P$ appropriately. By
futility, $\pi_i$ does not depend on these entries, but by the induction
hypothesis, $i$ gets winning probability $\frac{1}{n}$ after the
modification. Hence any $i>1$ that is not a clone of player $1$ has
winning probability $\frac{1}{n}$. By symmetry, any clone must have the
same winning probability as player $1$, which means that these also must
have winning probability $\frac{1}{n}$.
\end{proof}
\section{Final Remarks}\label{sect:final}
In this paper we have taken a well-established mathematical model for
tournaments - whose key ingredient is the assumption of fixed probabilities
$p_{ij}$ for player $i$ beating $j$ in a single match - and introduced and
rigorously defined three new concepts: symmetry, honesty and fairness. Our
main insight is that it is possible for a tournament to be symmetric and
(strictly) honest, yet unfair. We would like to finish here with some remarks
on the concepts themselves.
\par Symmetry seems to us a rather uncontroversial idea. It is of course true
that, in practice, many tournaments have special arrangements which break
symmetry in a myriad of ways. However, if one wishes to develop some
general mathematical theory,
it seems like a natural restriction to impose at the beginning.
\par Turning to honesty, the fact that ``it takes effort to try and actually win
a match'' suggests that it would be more realistic to demand that the
differences $\pi_{i}^{+} - \pi_{i}^{-}$ are bounded away from zero somehow. The
same fact indicates that a more realistic model should incorporate the
possibility of there being intrinsic value for a player in trying
to minimize the total number of matches he expects to play in the tournament.
This basically involves abandoning the assumption that the $p_{ij}$ are
constant. Incorporating ``effort expended'' into our framework is therefore
clearly a non-trivial task, which we leave for future investigation.
\par Thirdly, we turn to fairness.
Various alternative notions of ``fairness'' can already be
gleaned from the existing literature. Basically, however, there are two
opposite directions from which one might criticize our definition:
\begin{itemize}
\item On the one hand, one might say we are too restrictive in only
concentrating on the probabilities of actually winning the tournament.
In practice, many tournaments end up with a partial ordering of the
participants (though usually with a single maximal element), and rewards
in the form of money, ranking points etc. are distributed according to one's
position in this ordering. Hence, instead of defining fairness in terms
of winning probability, one could do so in terms of expected depth
in the final partial ordering, or some other proxy for expected reward. This
is another possibility for future work.
\item At the other end of the spectrum, one could suggest that a fair
tournament should not just give the best player the highest probability of
winning, but that this probability should be close to one. There are a
number of important papers in the literature which take this point of view,
see for example \cite{Ben}, \cite{Bra} and \cite{Fei}. These authors
are concerned with a different kind of question than us, namely how efficient
(in terms of expected total number of matches played) can one make
the tournament while ensuring that the best player wins with high
probability? There are elegant, rigorous results for the special case of
the model in which $p_{ij} = p$ for all $i < j$, and some fixed
$p \in (0, \, 1]$. Moreover, as the papers \cite{Bra} and \cite{Fei} show,
this kind of question has applications far beyond the world of
sports tournaments. In this regard, see also \cite{BT}, where the focus
is more on efficiently producing a correct ranking of \emph{all}
participants with high probability.
\end{itemize}
$\;$ \par
Since our main result is a ``negative'' one, it seems
reasonable to ask
whether there is something stronger than honesty, but still a natural
condition, which if imposed on a tournament ensures fairness, in the sense
we defined it. Of course, Schwenk's paper already gives \emph{some} kind of
positive answer: the simplest way to ensure honesty is by having
single-elimination and his method of Cohort Randomized Seeding (CRS) introduces
just sufficient randomness to ensure fairness. Note that,
since a partial seeding remains, his tournaments are not symmetric. Our
question is whether there is a natural condition which encompasses a
significantly wider range of symmetric tournaments.
\\
\par An alternative viewpoint is to ask for ``more realistic'' examples
of tournaments which are symmetric and honest but unfair. It may be
surprising at first glance that the tournaments $\bm{T}_1$ and
$\bm{T}_2$ in Section \ref{sect:threeplayer}
are indeed unfair, but it is probably not going out on a limb to guess that
no major sports competition is ever likely to adopt those formats. This
is even more the case with the tournaments in Section \ref{sect:nplayer}, which
have the feeling of being ``rigged'' to achieve just the desired outcome.
\par As noted in Section \ref{sect:intro}, there are at least two commonly
occurring examples of symmetric and honest (and fair) tournaments:
\par - round-robin, with ties broken uniformly at random,
\par - single-elimination with uniformly randomized seeding.\\
\noindent On the other hand, the popular two-phase format of first playing round-robin in order to rank the players for a knock-out tournament using standard seeding is symmetric but not necessarily honest (or fair). Here it is worth noting that a two-phase tournament consisting of round-robin followed by CRS single-elimination, while symmetric, need not be honest either. Suppose we have $2^k$ players, for some large $k$, all but one of whom are clones (see the proof of Proposition \ref{prop:futile}), while the last player is much worse than the clones. Suppose that, before the last round of matches in the round-robin phase, the poor player has defied the odds and won all of his $2^k - 2$ matches to date, while nobody else has won significantly more than $2^{k-1}$ matches. In that case, the poor player is guaranteed to be in the highest cohort, so it is in the interest of every clone to end up in as low a cohort as possible, as this will increase their chances of meeting the poor player in the second phase, i.e., of having at least one easy match in that phase. In particular, it will be in the interest of a clone to lose their last round-robin match.
\par If we employ uniform randomization in the knockout phase, then the round-robin phase serves no purpose whatsoever. We do not know if there is any other randomization procedure for single-elimination which, combined with round-robin, still yields a symmetric and honest tournament.
\\
\par These observations suggest that finding ``realistic'' examples of symmetric and honest, but unfair tournaments may not be easy. Then again, sports tournaments, or even \emph{tournaments} as defined in this paper, represent a very narrow class of what are usually called ``games''. As mentioned in Section \ref{sect:intro}, a \emph{truel} could be considered as another type of game which is symmetric and honest, yet unfair (in particular, it is possible to define those terms precisely in that context). As a final speculation, we can ask whether the ``real world'' provides any examples of phenomena analogous to those considered in this paper? A social scientist might use a term like ``equal treatment'' instead of ``symmetry'', so we are asking whether the real world provides examples of situations where participants are treated equally, there is no incentive for anyone to cheat, and yet the outcome is unfair (on average).
\vspace*{1cm}
\section*{Acknowledgements}
We thank Jan Lennartsson, Allan Schwenk and Johan W\"{a}stlund for helpful
discussions.
\vspace*{1cm}
Typical vacation rentals (VRs) are individually owned apartments or houses offering self-catering hospitality services on a temporary basis. They include a wide spectrum of property types such as professionally managed complexes, farm stays, apart-hotels, bed and breakfasts, etc. offering a broad variety of options alternative to the accommodation service provided by hotels. During the last years, VRs became very popular on Online Travel Platforms. One characteristic that distinguishes them from hotels is that the large majority lacks a star rating. Official institutions classify hotels in the well known 1-5 stars rating scale. This is a globally established system that customers know and understand, helping both the demand and the supply by creating realistic expectations about the quality of the service. It is also a very useful tool to navigate a large supply of accommodations alternatives through filtering and comparisons, which helps to better match demand with supply \cite{bernardi2019150} (see Figure \ref{fig:tiles}). Booking.com is an Online Travel Platform that offers both hotels and vacation rentals, which implies that when users apply star rating filters, the large majority of vacation rentals are immediately removed from the results list, which puts VRs at a clear disadvantage compared to hotels and hides potentially relevant options from the guests. This context motivates the need for quality ratings for vacation rentals that are comparable to hotels stars. One approach to consider is to classify VRs by expert assessment, but this is not a scalable solution since the experts need to actually visit each property listed in the platform. Remote classification suffers from high subjectivity and would produce ratings not comparable to hotels stars. In view of this, an automated VR rating process becomes an appealing solution. Accommodation quality assessment has many challenges, as described for the hotels case in \cite{vine1981hotel}. In our specific case, we focus on automated explainable vacation rental quality rating, which poses the following extra challenges:
\begin{itemize}
\item \textit{Lack of Ground Truth}: The amount of officially rated vacation rentals is very small. This is discussed in Section \ref{sec:solution}.
\item \textit{User-facing Explanations}: as the system is replacing a typically human task, explanations are critical to generate trust with the customers. This poses many challenges discussed in \cite{arrieta2020explainable} and \cite{gunning2017explainable}. Furthermore, explanations have business purposes like helping property managers maintain and improve the quality of their vacation rental quality. This is discussed in Section \ref{sec:modeling}.
\item \textit{Hotels compatibility}: since hotels and vacation rentals are listed on the same platform, we want to make sure they are comparable. Specifically, the automated quality rating system must mimic the human task of visiting accommodation and assessing the provided quality. This is discussed in Section \ref{sec:solution}.
\item \textit{User generated content}: the input for a specific rating is a description of the property generated by (typically non-professional) property managers. In some cases, these descriptions are incomplete and contain mistakes, making both labeling and explanations even more challenging. This is discussed in Section \ref{sec:mono}.
\end{itemize}
All of these challenges are addressed by our solution. Our main contributions are:
\begin{itemize}
\item The description of a machine learning system capable of producing global and explainable vacation rental quality ratings
\item Comparison of methods and techniques to address the mentioned issues
\item A set of large scale online controlled experiments that independently show the effectiveness and business impact of:
\begin{itemize}
\item Machine Learned generated VR Quality Ratings on both guests and property manager sides
\item Explanations to property managers
\item Suggestions for property managers to improve the rating of their property
\end{itemize}
\end{itemize}
To the best of our knowledge, no prior work focuses on automated accommodation quality ratings. The closest work we aware of is related to predicting guest ratings in hotels \cite{leal2017prediction}, but this is very different from our setting since guest ratings are not comparable to hotels star ratings because they are based on guests experiences as opposed to expert assessment, and more importantly, they depend on the subjective expectations of each guest introducing too much variance in the rating distribution for one property. The paper is organized as follows: Section \ref{sec:problem} formalizes the problem, Section \ref{sec:solution} describes our approach to generate labeled data, Section \ref{sec:modeling} discusses our explanations aware modeling approaches, Section \ref{sec:explainability} dives in to our method to explain VR ratings, Section \ref{sec:suggestions} describes an actionable advice generation process, Section \ref{sec:experiments} presents online controlled experiments conducted in Booking.com, a leading OTP with millions of daily users and vacation rentals and Section \ref{sec:conclusion} presents our conclusions.
\begin{figure*}
\includegraphics[width=0.7\linewidth]{tiles.png}
\caption{Booking.com Search Results Page. In blue, highlighted tools relying on Quality Rating}
\label{fig:tiles}
\end{figure*}
\section{Problem Definition}
\label{sec:problem}
Our objective is to construct a system capable of producing a quality rating given a description of a vacation rental property. The property description is a set of attributes such as facilities, amenities, size, number of rooms, etc., and the rating is an integer ranging from 1 to 5. The model must be capable of explaining the assigned ratings; more specifically, it must be able to explain what separates a given rating from worse ratings and to suggest what needs to be added to reach the next level. It must be global, which means that it must be able to rate all VR property types in all countries ($\sim$200). The rating system must be compatible with the hotels star rating system, which means that it should be as close as possible to an objective process where an expert physically visits and assesses the property. Finally, since property descriptions and properties themselves are updated with new facilities and services, the model must be able to update ratings, explanations and suggestions as soon as new property details are available.
\section{Collaborative Labeling}
\label{sec:solution}
Defining the problem as a mapping from VR descriptions to ratings makes Supervised Learning a natural approach to solve it, but unfortunately, we don't have enough labeled VRs. However, we have the following data sets at our disposal:
\begin{itemize}[topsep=2pt]
\item Rated hotels: Global set of hotels with their star ratings (more than 500000)
\item Rated vacation rentals: vacation rentals with official ratings from one specific country and VR type (about 40000)
\item Unrated vacation rentals: Global set of unrated vacation rentals (more than 2 million)
\end{itemize}
The set of rated hotels is large enough to train and test standard supervised learning algorithms. The set of rated vacation rentals is only suitable for validation since it is small and only from one country and for one VR type. In this section we describe two labeling approaches and compare them using the rated vacation rentals and rated hotels sets. Since the ratings are ordered and their distribution is far from uniform (see Table \ref{tab:classdist}), we use the \textit{macro-averaged Mean Absolute Error}\cite{baccianella2009evaluation} (MAMAE), which computes the Mean Absolute Error per class and averages over the classes:
\begin{equation}
MAMAE = \frac{1}{c}\sum_{j=1}^{c}\frac{1}{|T_j|}\sum_{x\in T_j } |\hat{y}(x) - j |
\label{eq:mamae}
\end{equation} where $c$ is the number of classes (5 in our case), $T_j$ is the set of instances with true class $j$ and $\hat{y}(x)$ is the predicted class for property description $x$. We also report weighted F1, a typical classification metric that ignores class order. In order to benchmark these labeling schemes, we trained multinomial classifiers using gradient boosting \cite{friedman2002stochastic}, specifically Gradient Boosted Trees (GBT) with all the available features (about 400 including facilities/amenities, size, number of rooms, services, etc, which due to commercial sensitivity cannot be disclosed).
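\par For reference, Equation \ref{eq:mamae} is straightforward to implement; the following Python sketch (function and argument names are ours) assumes integer labels and that every class occurs in the evaluation set.
\begin{verbatim}
import numpy as np

def mamae(y_true, y_pred, classes=(1, 2, 3, 4, 5)):
    """Macro-averaged MAE: mean absolute error computed per true class,
    then averaged over the classes."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    per_class = [np.mean(np.abs(y_pred[y_true == j] - j)) for j in classes]
    return float(np.mean(per_class))
\end{verbatim}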
\begin{figure*}
\includegraphics[width=0.8\linewidth]{graph}
\caption{Collaborative Labeling Process}
\label{fig:labelprop}
\end{figure*}
\begin{table}
\caption{Rating distributions of different data sets}
\small
\centering
\begin{tabular}{@{}lrrrrr@{}}
\toprule
& Rated Hotels & Officially Rated VRs & Collaborative Labels VRs\\ \midrule
Class 1 & 5\%&0.5\%&0.2\%\\
Class 2 & 18\%&3\% &2.7\% \\
Class 3 & 45\%&80\%&66.9\% \\
Class 4 & 24\%&16\% &29\%\\
Class 5 &7\%&0.5\% &1.2\%\\ \bottomrule
\end{tabular}
\label{tab:classdist}
\end{table}
\par The first approach consists of training a model on hotels using the star rating as a label. This approach works well for hotels (MAMAE 0.411); however, when evaluated on the rated vacation rentals set, the performance is poor (MAMAE 1.01 on the full rated vacation rentals set). This is expected since the average hotel room is different from a vacation rental: typical VRs are equipped with self-catering facilities (kitchen, dishwasher, washing machine, etc.) which are not present in most hotels. This makes it very hard for a model to generalize from hotels to vacation rentals.
\par In a second approach we apply a technique inspired by Label Propagation \cite{zhur2002learning} where we propagate hotel ratings to unrated vacation rentals. We construct a graph where vertices are unlabeled vacation rentals and labeled hotels. Each edge can only connect one hotel vertex $h$ with one vacation rental vertex $v$, and it is weighted by the number of stays in $h$ made by all guests who also stayed in $v$. We then construct a distribution over the star ratings based on the weights of all the edges of $v$. Finally, the label is the mode of such distribution (see Figure \ref{fig:labelprop}). The main underlying assumption is that guests choose hotels and vacation rentals with similar quality of service. For example, if a user stayed at six different hotels, and most of them are 4-stars, if she then stays in a vacation rental $v$, we expect $v$ to be comparable to a 4-stars hotel. Effectively, user data is telling us how to transfer hotel star ratings to vacation rentals, hence, we name this technique \textit{Collaborative Labeling}. We validate the assumption by using the collaborative labels as predictions of the known hotel star ratings. We found performance comparable to training a model with star ratings indicating that the collaborative labels contain information about the true known hotel stars. Furthermore, we trained a model with hotels data using collaborative labels and compared against training with the true ratings and found very similar performance indicating that training a model with collaborative labels is almost as good as using the true labels. Finally, we used the collaborative labels as predictions of vacation rentals ratings and evaluated with the small set of vacation rentals for which we do have an official rating. Performance is good, indicating that the collaborative labels also contain a lot of information about the true ratings of vacation rentals. All results are summarized in Table \ref{tab:labels}, showing that Collaborative Labeling is a sound technique to generate ground truth to train supervised machine learning models.
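\par To make the Collaborative Labeling step concrete, the schematic pandas sketch below derives a label per vacation rental from co-stay data. The input tables and column names (\texttt{vr\_stays} with \texttt{guest\_id} and \texttt{vr\_id}; \texttt{hotel\_stays} with \texttt{guest\_id} and \texttt{stars}) are illustrative placeholders rather than our production schema, and ties in the mode are broken arbitrarily.
\begin{verbatim}
import pandas as pd

# Guests per vacation rental, and one row per hotel stay.
vr_guests = vr_stays[["guest_id", "vr_id"]].drop_duplicates()
co_stays = vr_guests.merge(hotel_stays[["guest_id", "stars"]],
                           on="guest_id")
# Edge weights, aggregated per star rating for each vacation rental.
weights = (co_stays.groupby(["vr_id", "stars"])
                   .size()
                   .reset_index(name="weight"))
# Collaborative label = mode of the induced star-rating distribution.
labels = (weights.sort_values("weight", ascending=False)
                 .drop_duplicates("vr_id")
                 .set_index("vr_id")["stars"])
\end{verbatim}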
\begin{table}
\caption{Performance of different Labeling Schemes}
\centering
\small
\begin{tabularx}{\linewidth} {
>{\raggedright\arraybackslash\hsize=0.45\hsize}X
>{\raggedleft\arraybackslash\hsize=0.16\hsize}X
>{\raggedleft\arraybackslash\hsize=0.21\hsize}X
>{\raggedleft\arraybackslash\hsize=0.18\hsize}X}
\toprule
& Trained on True Stars &Col. Labels as Predictions& Trained on Col. Labels\\ \midrule
MAMAE Hotels Val. Set & 0.411&0.525&0.588\\
MAMAE Labeled VRs & 1.01 &0.84&0.962\\ \midrule
Weighted F1 Hotels Val. Set&0.810&0.822&0.726\\
Weighted F1 Labeled VRs&0.662 &0.883&0.735\\
\bottomrule
\end{tabularx}
\label{tab:labels}
\end{table}
\section{Explanation Aware Modeling}
\label{sec:modeling}
Through Collaborative Labeling we obtained a set containing more than 1 million labeled Vacation Rentals. We use this data to apply supervised learning techniques and rate all the remaining VRs, as well as new VRs, as they become part of the platform. But due to the explainability requirements, we have to make sure our models are capable of producing robust and consistent explanations, able to determine which characteristics are the main drivers of a specific rating \textit{with respect to the adjacent ratings} (as opposed to all other ratings). In other words, we want to explain to partners what makes a 3-stars property stand out from 2-stars properties, other 3-stars properties, and what is missing to reach 4-stars. Furthermore, which type of model is used to label properties has strong implications on the algorithms used to generate explanations like computation requirements and explanations semantics. We consider all these aspects while solving the prediction problem in order to guarantee accurate, scalable, and explainable models satisfying the established requirements.
\subsection{Baselines}
\label{sec:baselines}
\par As a trivial baseline, we consider the most frequent rating (mode-classifier), which has about 75\% accuracy. Linear regression, as a white-box model, is a natural approach to get an interpretable model \cite{tibshirani1996regression}, but we saw poor results (worse than the mode-classifier). Although linear regression is able to capture the order in the classes, it predicts the expected value, which needs to be discretized to match the possible labels. Such a discretization process is non-trivial; we experimented with various thresholding techniques, but performance was always below the baseline level.
\par Another natural approach is Ordinal Regression, where ratings are still considered discrete, but ordered. We applied Logistic Ordinal Regression\cite{harrell2015regression} which separates classes with parallel decision boundaries and it is also straightforward to explain \cite{bender1997ordinal}. We did see an improvement on MAMAE, indicating that class ordering is helping, but still worse than baseline on F1 and Accuracy. We hypothesize that both Linear and Ordinal Logistic Regression struggle to find linear decision boundaries in the sparse and mostly binary feature space. Therefore, with the aim of learning non linear decision boundaries, we turned to Multinomial Classification with Gradient Boosted Trees (GBT), for which efficient explanation generation algorithms exist \cite{lundberg2020local2global}, which showed much better performance on all Accuracy, F1, and MAMAE metrics, except for MAMAE on the Labeled VR Set, which suggests there is room for improvement by introducing ordered labels. Results are summarized in Table \ref{tab:ordmult}.
\begin{table}
\caption{Baselines performance}
\small
\begin{center}
\noindent
\begin{tabularx}{\linewidth}{
>{\raggedright\arraybackslash\hsize=0.49\hsize}X
>{\raggedleft\arraybackslash\hsize=0.13\hsize}X
>{\raggedleft\arraybackslash\hsize=0.175\hsize}X
>{\raggedleft\arraybackslash\hsize=0.19\hsize}X}
\toprule
&Mode-classifier&Logistic Ordinal Regression&Multinomial GBT\\
\midrule
Accuracy, VR Validation Set&0.691&0.647&0.731\\
Accuracy, Labeled VRs&0.743&0.706&0.778\\
\midrule
Weighted F1, VR Validation Set&0.565&0.473&0.682\\
Weighted F1, Labeled VRs&0.634&0.511&0.735\\
\midrule
MAMAE, VR Validation Set&1.2&1.08&0.877\\
MAMAE, Labeled VRs&1.2&0.959&0.962\\
\bottomrule
\end{tabularx}
\end{center}
\label{tab:ordmult}
\end{table}
\subsection{Ordinal Regression Reduced to Binary Classification}
\label{sec:reduction}
\par Multinomial GBT still considers labels as discrete variables ignoring their structure. But more importantly, the semantics of explanations generated from a multinomial classifier does not match the requirements. Specifically, explanations from a multiclass classifier highlight why a property is rated with a specific rating vs. all others, which could lead to scenarios where a 2-stars classification is explained through the lack of a \textit{spa wellness center}, a much higher class facility. Although this explanation is correct, it is not very useful since it is unlikely that a 2-stars property can add a \textit{spa wellness center}, and it does not help to understand what makes this property better than all 1-star properties, which would help partners, for example, to better maintain facilities such as \textit{streaming services} or a \textit{garden}. Because of this, we want to introduce information about the order of the labels. Following \cite{frank2001simple} we apply a reduction from Ordinal Regression to a set of \textit{ordered} binary classifiers: four binary classifiers are constructed where classifier $k \in \{1,2,3,4\}$ estimates $Pr(y_i>k)$, where $y_i$ is the true class of example $i$. The original training set is replicated 4 times, once for each classifier: the binary label of example $i$ with multiclass label $y_i$ in classifier $k$ is positive if $y_i > k$ and negative otherwise.
This reduction approach allows us to work with binary classifiers (which are particularly well suited for explainability), and at the same time, class-order information is kept allowing us to generate explanations with the required semantics.
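\par As a minimal illustration of the reduction (variable names are ours), the four binary target vectors are simply:
\begin{verbatim}
import numpy as np

def ordinal_binary_targets(y, n_classes=5):
    """Binary targets for the reduction: target k encodes the event
    y > k, for k = 1, ..., n_classes - 1."""
    y = np.asarray(y)
    return {k: (y > k).astype(int) for k in range(1, n_classes)}
\end{verbatim}
Classifier $k$ is then fitted on the full feature matrix against the target $y>k$.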
\par At inference time the authors in \cite{frank2001simple} propose an analytical method to estimate the probability of an unlabeled example belonging to each class and then outputs the class that maximizes those probabilities. This approach implicitly assumes that the base binary classifiers are calibrated. Furthermore, in order to produce consistent explanations, we want to enforce consistent labeling, which means that if a property receives a 4-stars rating, it should also receive a 3, 2 and 1-star ratings. Formally, this means that if base classifier $k \in [1, 4]$ assigns a positive label to example $i$, then all predictions by classifiers $k^{\prime}<k$ must also be positive.
To address these two issues we propose a different inference algorithm that guarantees consistent labeling and does not rely on calibrated base classifiers. The procedure runs through all the binary classifiers in class order incrementing the number of stars until a classifier outputs a negative prediction (see Algorithm \ref{inference}). One important consequence of this algorithm is that it allows us to identify what we define as the \textit{Responsible Classifier}, which is the classifier \textit{before} the first classifier making a negative classification. This classifier encodes the information about why a property is not labeled with a lower rating. Formally, the responsible classifier is computed by a function $r$ that takes a class in $[1,5]$ and outputs a classifier index in $[1,4]$ as given by the following equation:
\begin{equation}
\label{eq:resp}
r(c) =
\left\{
\begin{array}{ll}
1 & c<3 \\
c-1 & c \geq 3
\end{array}
\right.
\end{equation}
Concretely, for a property with predicted label $\hat{y}$ we can compute explanations (why is the property classified as $\hat{y}$ vs $\hat{y}-1$, Section \ref{sec:explainability}) using classifier $r(\hat{y})$ and suggestions (what is missing to reach $\hat{y}+1$, Section \ref{sec:suggestions}) based on classifier $r(\hat{y}+1)$. The semantics of these explanations match the requirements.
\begin{algorithm}
\caption{Class Rating Procedure}\label{inference}
\begin{algorithmic}
\State $X:$ property features
\State $\theta:$ 4-dimensional vector of thresholds
\Procedure{ConsistentLabeling}{$X,\theta$}
\State $\hat{y} \gets 1 $
\Comment{All properties get 1 star}
\For {$k \in [1, 4]$}
\State $\triangleright Pr(y>k | X)$ estimated with classifier $k$ (see Section \ref{sec:modeling})
\If{$Pr(y>k | X) \geq \theta_k$ \textbf{and} $\hat{y}=k$}
\State $\hat{y} \gets k +1 $
\Comment Increment stars
\EndIf
\EndFor
\State \Return $\hat{y}$
\EndProcedure
\end{algorithmic}
\end{algorithm}
After relaxing the calibration requirement, we can use any base binary classifier. Again, considering explainability requirements, we use Logistic Regression as a baseline and found good results compared to the mode-classifier but worse than Multinomial GBT. Therefore we also used GBT as the base binary classifier in the reduction, for which scalable explanation algorithms exist (see Section \ref{sec:explainability}) and found much better results compared to Multinomial GBT in all metrics and all data sets. This suggests that the combination of non linear decision boundaries with ordinal labels is an effective technique to capture the structure of our problem. Table \ref{tab:base} summarizes these findings.
\subsection{Monotonicity Constraints}
\label{sec:mono}
Since the property descriptions are user-generated, they tend to be noisy. One example is under-reported facilities in high classes: 5-star villas won't list \textit{hairdryer} as an amenity because it is obvious for them to provide it. This leads some obviously positive facilities or amenities to contribute negatively towards a higher rating (examples: barbecue and children's crib). To avoid this, we introduce monotonicity constraints \cite{10.1145/568574.568577}, which enforce positive contributions in all base classifiers. This allows us to encode domain knowledge as a mechanism to make our models more robust to noise in the property descriptions. We found that by introducing these constraints the model improved all metrics in all sets (see the last column in Table \ref{tab:base}). These constraints are also crucial to produce robust and consistent explanations: the lack of a facility cannot explain why a property is rated as 3 stars as opposed to 2; monotonicity prevents this scenario.
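\par As a purely illustrative aside, per-feature monotonicity constraints of this kind are available off the shelf in gradient boosting libraries; for instance, with LightGBM one could write the following, where \texttt{feature\_names}, \texttt{BINARY\_FACILITIES}, \texttt{X\_train} and \texttt{y\_gt\_k} (the binary target ``rating $> k$'') are placeholder names and the sketch does not describe our production setup.
\begin{verbatim}
import lightgbm as lgb

# +1 forces a non-decreasing contribution for each binary facility;
# 0 leaves the remaining features unconstrained.
constraints = [1 if name in BINARY_FACILITIES else 0
               for name in feature_names]
clf = lgb.LGBMClassifier(objective="binary",
                         monotone_constraints=constraints)
clf.fit(X_train, y_gt_k)
\end{verbatim}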
\par As a summary of our modeling considerations, we obtained the best performance with Ordinal Regression reduced to Binary Classification with Gradient Boosted Trees with monotonicity constraints. Equally important, this model allows us to generate robust, consistent, and scalable explanations, as described in the following sections.
\begin{table}
\caption{Ordinal Regression by reduction to Binary Classification with different base classifiers}
\small
\centering
\begin{tabularx}{\linewidth}{
>{\raggedright\arraybackslash\hsize=0.49\hsize}X
>{\raggedleft\arraybackslash\hsize=0.18\hsize}X
>{\raggedleft\arraybackslash\hsize=0.15\hsize}X
>{\raggedleft\arraybackslash\hsize=0.17\hsize}X}
\toprule
& Logistic Regression& Gradient Boosting Trees & GBT + monotonicity\\ \midrule
Weighted F1, VR Validation Set &0.666 &0.729&0.732 \\
Weighted F1, Labeled VRs& 0.737&0.773 &0.776 \\
\midrule
MAMAE, VR Validation Set &0.839&0.633& 0.6\\
MAMAE Labeled VRs&0.956 &0.922 & 0.899\\
\bottomrule
\end{tabularx}
\label{tab:base}
\end{table}
\section{Generating Explanations}
\label{sec:explainability}
\begin{figure}
\includegraphics[width=\linewidth]{bhexpl.png}
\caption{Explaining Quality Rating in Booking.com}
\label{fig:expl}
\end{figure}
Explainability is a crucial part of our solution. We have to be able to explain to every property manager, why their property obtains a specific rating. Furthermore, as model authors, we have to be able to justify our model decisions to business stakeholders. Local and Global Interpretability play these roles respectively. According to \cite{DBLP:journals/corr/abs-1902-03501} global interpretability means \textit{"understanding the entirety of a trained model including all decision paths"}, and local interpretability, \textit{"the goal of understanding the results of a trained model on a specific input and small deviations from that input"}. Our best model is based on Gradient Boosted Trees, which is a black-box model and can't be explained directly, but since we introduced monotonicity constraints, global interpretability is possible as described in \cite{10.1145/568574.568577}. To achieve local interpretability, we applied SHAP (SHapley Additive exPlanations) by \cite{lundberg2017unified}, a framework for Machine Learning interpretability based on Shapley values \cite{shapley1953value}. In particular, TreeShap which reduced the Shapley values computation from exponential to polynomial time for tree based model \cite{lundberg2020local2global}. The semantics of the SHAP values is: given the current set of feature values, the contribution of a feature value to the difference between the actual prediction and the mean prediction is the estimated Shapley value. With this, we can provide local interpretability per property and improve global interpretability per binomial classifier using Shapley values aggregations. Since we use a reduction to several binary classifiers, we first identify the base classifier responsible for the predicted rating using Equation \ref{eq:resp} and then calculate SHAP values $\Theta$ for classifier $r(\hat{y}_i)$ (see Algorithm \ref{explanations}). The list of attributes is ranked by SHAP values and presented to the property manager of that specific property.
Table \ref{tab:explanations} shows some examples of explanations computed with this algorithm.
\begin{table*}
\caption{Explanations examples, parenthesis indicates negative score}
\centering
\begin{tabularx}{\textwidth} {
>{\raggedright\arraybackslash\hsize=0.3\hsize}X
>{\raggedright\arraybackslash\hsize=0.7\hsize}X}
\toprule
Predicted class & Important features based on Shapley values \\
\midrule
Class 1, $Pr(>1)=0.27$& balcony, hair dryer, garden, (size, shared bathroom, no wardrobe closet)\\
Class 3, $Pr(>2)=0.98$, $Pr(>3)=0.14$& dishwasher, electric kettle, cable and satellite channels, hair dryer, (non feather pillows)\\
Class 5, $Pr(>4)=0.54$&swimming pool, daily maid service, safe deposit box, spa wellness center, (street parking)\\ \bottomrule
\end{tabularx}
\label{tab:explanations}
\end{table*}
\begin{algorithm}
\caption{Explanations generation procedure}\label{explanations}
\begin{algorithmic}
\State $X$: Binary features of a property
\State $\hat{y}:$ Assigned rating
\Procedure{ComputeExplanations}{$X$, $\hat{y}$}
\State $\triangleright$ Identify the responsible model $r(\hat{y})$ as defined by Eq. \ref{eq:resp}
\State $cl \gets r(\hat{y})$
\State $\triangleright$ internal model state required to invoke TREESHAP\_PATH
\State $\{v, a, b, t, r, d\} \gets Unpack(cl)$
\State $\triangleright$ See Algorithm 2 in \cite{lundberg2020local2global}
\State $w \gets TREESHAP\_PATH(X, \{v, a, b, t, r, d\})$
\State \Return $w$
\EndProcedure
\end{algorithmic}
\end{algorithm}
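\par A compact sketch of this procedure using the \texttt{shap} package is given below. Here \texttt{classifiers} is assumed to be a mapping from the index $k\in\{1,\dots,4\}$ to the fitted tree-based base classifier of Section \ref{sec:reduction}, the helper names are ours, and the handling of list-valued outputs reflects that some tree backends return one array of SHAP values per class.
\begin{verbatim}
import numpy as np
import shap

def explain_rating(x, y_hat, classifiers, feature_names):
    """Rank the features of a single property x by their SHAP value
    in the responsible classifier r(y_hat)."""
    responsible = classifiers[1 if y_hat < 3 else y_hat - 1]
    sv = shap.TreeExplainer(responsible).shap_values(x.reshape(1, -1))
    if isinstance(sv, list):     # one array per class: keep positive class
        sv = sv[-1]
    sv = np.asarray(sv).reshape(-1)
    order = np.argsort(-sv)      # most positive contributions first
    return [(feature_names[i], float(sv[i])) for i in order]
\end{verbatim}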
\section{Generating Suggestions}
\label{sec:suggestions}
\begin{figure}
\includegraphics[width=\linewidth]{suggestions.png}
\caption{Suggestions for a 3-stars Vacation Rental}
\label{fig:suggestions}
\end{figure}
We are also interested in explaining what is missing to reach the next rating level (e.g. add barbecue, crib, coffee machine, etc.). These explanations work as suggestions to improve the quality of a property. Two requirements must be satisfied:
\begin{enumerate}
\item Adding the recommended feature must increase the probability of getting a higher class
\item Adding the recommended feature must not increase the probability of getting a lower class
\end{enumerate}
To this end, we only consider binary facilities (e.g. \textit{has barbecue}) and ignore non-binary ones such as \textit{the number of rooms} or \textit{size}.
\par The procedure works as follows: for a given property with currently assigned rating $\hat{y}$, for each eligible facility that the property currently lacks (i.e., with feature value $0$), we estimate the increment $w$ in the probability of belonging to the next class $\hat{y} + 1$ when the corresponding feature is set to positive:
\begin{equation}
\label{eq:inc}
w = Pr(y>\hat{y} | X^{j=1}) - Pr(y>\hat{y} | X)
\end{equation}
Where $X$ is the current feature vector containing all eligible binary features and $X^{j=1}$ is the same vector with feature $j$ flipped from $0$ to $1$. These probabilities are estimated using the responsible classifier for the next class $r(\hat{y}+1)$ when $\hat{y}<5$ and $r(\hat{y})$ when $\hat{y}=5$ (see Section \ref{sec:modeling}). Due to monotonicity constraints, such increment can only be greater than or equal to zero, guaranteeing both requirements are satisfied. Facilities are ranked by the increment in descending order and suggested to property managers. A more formal description of this procedure is presented in Algorithm \ref{suggestions}. Figure \ref{fig:suggestions} shows an example of the recommendations presented to end users.
\begin{algorithm}
\caption{Suggestion generation procedure}\label{suggestions}
\begin{algorithmic}
\State $X$: Binary features of a property eligible for suggestions
\State $\hat{y}:$ Assigned rating
\Procedure{ComputeSuggestions}{$X$, $\hat{y}$}
\State $S \gets \emptyset$
\Comment{$S$: List of suggestions to be returned}
\For{$X_j \in X$}
\If{$X_j = 0$}
\State $X' \gets X$
\Comment{Copy full feature vector}
\State $X'_j \gets 1$
\Comment{Flip current feature}
\State $\triangleright$ Computed with classifier $r(\hat{y}+1)$, see Eq. \ref{eq:resp}
\State $w \gets Pr(y>\hat{y} | X') - Pr(y>\hat{y} | X)$
\Comment{See Eq. \ref{eq:inc}}
\If{$w > 0$}
\State $S \gets S \cup \{(X_j, w)\}$
\Comment{Add to suggestions list}
\EndIf
\EndIf
\EndFor
\State \Return $S$
\EndProcedure
\end{algorithmic}
\end{algorithm}
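\par A Python rendering of Algorithm \ref{suggestions} is sketched below, again with \texttt{classifiers} as a mapping from base-classifier index to a fitted model exposing \texttt{predict\_proba}; these names are ours and the sketch is illustrative only.
\begin{verbatim}
import numpy as np

def compute_suggestions(x, y_hat, classifiers, binary_feature_idx):
    """Rank absent binary facilities by the increase w in Pr(y > y_hat)
    obtained by adding them."""
    c = min(y_hat + 1, 5)                    # next class, capped at 5
    cl = classifiers[1 if c < 3 else c - 1]  # responsible classifier r(c)
    base = cl.predict_proba(x.reshape(1, -1))[0, 1]
    suggestions = []
    for j in binary_feature_idx:
        if x[j] == 0:                        # facility currently absent
            x_flip = x.copy()
            x_flip[j] = 1
            w = cl.predict_proba(x_flip.reshape(1, -1))[0, 1] - base
            if w > 0:
                suggestions.append((j, float(w)))
    return sorted(suggestions, key=lambda t: -t[1])
\end{verbatim}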
\section{Experiments}
\label{sec:experiments}
We validated our system by conducting Online Controlled Experiments in Booking.com, one of the largest Online Travel Platforms in the world. In our case, we have to be extra careful with the experiments since we cannot simply change the quality rating of a property for the purpose of experimentation. Therefore, the purpose of our experiments is not to benchmark different algorithms, for which we rely on offline experiments, but to test hypotheses about the efficacy of our system with respect to user behaviour represented by both guests and property managers. We describe four experiments that we consider relevant for this paper. All hypotheses are tested at the 90\% significance level; only statistically significant results are provided, together with 90\% confidence intervals on pre-registered metrics.
\begin{table}[!ht]
\caption{Experiments 1 and 2, results with 90\% CIs. All statistical significant with p-value $<$ 0.001}
\centering
\begin{tabular}{lrr}
\toprule
Metric Uplift (\%)& Experiment 1 & Experiment 2\\
\midrule
Class Filter Usage & 0.17\% ±0.05\% & 6.5\% ±0.14\% \\
CTR after Filtering by Class & 1.58\% ±0.25\% & 1.78\% ±0.45\% \\
Property Type Filter Usage & 1.05\% ±0.16\% & 4.31\% ±0.12\% \\
CTR a. Filter by Property Type & 1.43\% ±0.30\% & 0.37\% ±0.31\%\\
Rated VR CTR & 0.41\% ±0.05\% & 0.52\% ±0.05\% \\
Rated VR Conversion & 1.35\% ±0.46\% & 1.45\% ±0.54\%\\
Customer Service Tickets & No effect & No effect \\
\bottomrule
\end{tabular}
\label{tab:exps12}
\end{table}
\begin{table}[!ht]
\caption{Experiment 3: Explanations. Results with 90\% CIs. All statistical significant with p-value $<$ 0.001}
\centering
\begin{tabular}{lr}
\toprule
Metric Uplift (\%)& Experiment 3\\
\midrule
Amenities Added & 0.17\% ±0.05\%\\
Room Added & 1.42\% ±0.51\% \\
Room Edited & 1.21\% ±0.11\% \\
Customer Service Tickets & No Effect \\
\bottomrule
\end{tabular}
\label{tab:exps}
\end{table}
\begin{table}[!ht]
\caption{Experiment 4: Suggestions. Results with 90\% CIs. All statistical significant with p-value $<$ 0.001}
\centering
\begin{tabular}{lr}
\toprule
Metric Uplift (\%)& Experiment 4\\
\midrule
Visit Amenities & 15.86\% ±3.39\%\\
Amenities Changed & 19.35\% ±4.75\% \\
Customer Service Tickets & No Effect \\
\bottomrule
\end{tabular}
\label{tab:suggestions}
\end{table}
\paragraph{Experiment 1} Hypothesis: "Machine Learned VR Quality Ratings help \textit{guests} to find suitable Vacation Rentals." In this experiment all eligible properties received a class rating according to the Multinomial GBT model (Section \ref{sec:baselines}). Visitors of the Booking.com website were randomly uniformly split into two groups: the control group with users exposed to the normal experience where Vacation Rentals do not have any rating (although all eligible VRs do have a label assigned, the user interface ignores them keeping everything exactly as if no ratings were available) and the treatment group where users are exposed to the machine generated quality ratings by displaying them in the search results page, but also by making them available for filtering, sorting, etc. (see Figure \ref{fig:tiles}). In the spirit of transparency, the automatically generated star-ratings are displayed with a different icon and referred to as \textit{tiles} and a generic explanation is displayed to users \textit{on-hover} stating that the tiles are indeed generated automatically.
Experiment run-time was more than 4 weeks and impacted $\sim$100M guests and more than 500k vacation rentals.
Results show that indeed the funnel is much more efficient since users can find more VRs (for example by filtering by class or property type) producing higher Click-through and Conversion rates. At the same time, we saw no effect on Hotels Conversion Rate, which shows that there is no cannibalization, likely because our system is improving the conversion on users that are interested only in VRs. Finally, we found no effect on Customer Service Tickets. Results are summarized in Table \ref{tab:exps12}.
\paragraph{Experiment 2} Hypothesis: "Monotonous Ordinal Regression helps guests to find suitable Vacation Rentals." In this experiment we evaluated our best performing model according to offline evaluation criteria (Section \ref{sec:mono}). It is identical to Experiment 1 but tested on a separate group of properties (no rating was changed, we only added more ratings). Run time was 2 weeks and impacted ~100M guests and more than 200k Vacation Rentals. Again, we can see the funnel improving significantly which we interpret as evidence of this model being effective. Furthermore, most effects are larger compared to Experiment 1, suggesting this Monotonous Ordinal Regression is better than Multinomial GBT. We want to remark that explanations and suggestions that meet the established requirements are only computable based on this model, therefore, we consider it superior. Experiment results are summarized in Table \ref{tab:exps12}.
\paragraph{Experiment 3} Hypothesis: "Explanations are clear and help partners understanding how the rating was assigned." In this experiment we evaluated explanations based on Shapley Values from our best performing model according to offline evaluation criteria. In this case we split all vacation rentals for which a machine generated rating exists into two groups (as opposed to website visitors) and change the Property Management Interface. For the properties in the control group the machine generated ratings are displayed, but no explanation is available. For the properties in the treatment group, explanations are displayed as depicted in Figure \ref{fig:expl}. Run-time was 4 weeks and impacted ~200k Vacation Rentals. If the hypothesis is correct, we expect partners to improve the description of their property, otherwise, a raise in Customer Service Tickets would be observed, due to complaints about the received rating and/or explanation. Furthermore, we conducted an online survey on partners asking \textit{"Do you find the information regarding your Quality Rating and Highlights helpful?"}
We found no effect on Customer Service Tickets (by non-inferiority test), and found conclusive positive results on several metrics related to property details submission (see Table \ref{tab:exps}). The survey showed positive results with 72.3\% positive answers. From this data, we conclude that explanations are relevant and effective, supporting the hypothesis.
\paragraph{Experiment 4} Hypothesis: "Suggestions are relevant and clear; partners will visit facilities/amenities and update them."
In this experiment, we evaluated suggestions by adding the "Recommended additions" section (described in Figure \ref{fig:suggestions}) to the property management page. The setup is the same as Experiment 3, run-time was 4 weeks and impacted ~30k vacation rentals (random sample from all eligible properties). Similarly to the previous experiment, if the hypothesis is incorrect, we expect a raise in Customer Service Tickets due to complaints about the irrelevant suggestions. If the hypothesis is correct, we expected partners to visit facilities/amenities sections of their property more often and more changes in the facilities and amenities.
We found no effect on Customer Service Tickets (by non-inferiority test), and found conclusive positive results on several metrics related to property details submission (see Table \ref{tab:suggestions}). From these results, we conclude that the suggestions are indeed helping partners to improve their properties or their descriptions.
\section{Conclusion}
\label{sec:conclusion}
\begin{figure*}
\includegraphics[width=0.7\linewidth]{bhqr_components.png}
\caption{From models to Users. Highlighted classifier is the responsible classifier described in Section \ref{sec:reduction}}.
\label{fig:comp}
\end{figure*}
In this work we presented a Quality Rating System for Vacation Rentals based on Machine Learning. Several challenges were addressed, and technical details discussed, with rationale about design choices and trade-offs. Our solution hinges on three main points: Collaborative Labeling, which allowed us to transfer hotels Star Ratings to Vacation Rentals; Ordinal Regression by reduction to Binary GBTs with Monotonicity constraints, which successfully captures the order in the classes allowing both accurate and explainable predictions with correct semantics; and SHAP, which allowed us to compute consistent explanations. The effectiveness of the system was thoroughly validated through massive Online Controlled Experiments conducted in Booking.com, one of the top Online Travel Platforms in the world with millions of daily users and millions of Vacation Rentals, showing strong evidence of significant benefits for all the relevant parties:
\begin{itemize}
\item Guests: it is easier for them to find accommodation fitting their needs and preferences.
\item Partners: they improve their commercial health through better market targeting, better visibility and through insights about how to improve and maintain the provided quality.
\item Online Travel Platform: it benefits from more engaged users and partners and more transactions.
\end{itemize}
Figure \ref{fig:comp} illustrates the relationship between the applied techniques and the parties involved in the platform.
\par We believe that integrating user facing explanations into the modeling process was fundamental to find a good balance between accuracy and explainability. How to improve this trade-off, maybe through white-box models, is an interesting and promising direction for future work.
\balance
\begin{acks}
This work is the result of the contributions of a large group of professionals. Authors want to thank Ioannis Kangas for sharing ideas and initial prototypes around Collaborative Labeling, Thomas Bosman for his significant contributions to the Ordinal Regression Reduction, Ahmed Khaili for his contributions around Feature Generation, Roberto Pagano and Yan Romanikhin who collaborated with the implementation and Dennis Bohle for his insights about explainability.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
For some fixed number $k\geq 1$, suppose that $2k-1$ friends are planning to go on a vacation trip, staying all together in a vacation rental. They are looking at a large number of options, say $n$ options, that are listed on major vacation rental websites. In order for them to agree on booking a particular vacation rental $A$, it would be desirable that there is no other vacation rental $B$ which a majority of the friends prefers over $A$ (as otherwise it would make more sense to rather book option $B$ instead of $A$ if the majority of the group prefers that). In other words, for the group to decide to book some vacation rental $A$, there should be no vacation rental $B$ such that at least $k$ out of the $2k-1$ friends like $B$ better than $A$. What is the probability that the group of friends can indeed find a vacation rental $A$ with this property?
More formally, let $\S$ be the set of the $n$ vacation rental options, i.e.\ the set of the $n$ alternatives that the group is considering. Each of the $2k-1$ friends can be considered to be a ``voter'' that has a particular preference ranking of the alternatives in $\S$. Let $P_{\S}$ be the set of all $n!$ possible rankings of $\S$ (i.e.\ the set of permutations of $\S$), and let $\sigma_1,\dots,\sigma_{2k-1}\in P_{\S}$ be the rankings of the $2k-1$ voters.
We remark that in this paper, we always assume the set $\S$ of alternatives to be finite, albeit potentially very large. The setting where the set of alternatives is an infinite set of points forming continuous topological space has been studied for example by Plott \cite{plott}, McKelvey \cite{mckelvey-1,mckelvey-2}, and Schofield \cite{schofield}.
An alternative $A\in \S$ is called a \emph{Condorcet winner} if for every other alternative $B\in \S$ there are at least $k$ indices $i\in \{1,\dots,2k-1\}$ such that $A$ is ranked higher than $B$ in $\sigma_i$ (meaning that at least $k$ of the $2k-1$ voters prefer $A$ over $B$). In other words, $A$ is a Condorcet winner if it wins against every other alternative $B$ in majority voting. It is easy to see that there can be at most one Condorcet winner in $\S$ (indeed, two different alternatives $A,B\in \S$ cannot both be Condorcet winners, since only one of $A$ and $B$ wins in majority voting between $A$ and $B$). The notion of a Condorcet winner is named after Nicolas de Condorcet, who noted in 1785 \cite{condorcet} that already for $n=3$ alternatives it can happen that no such Condorcet winner exists. The observation that there does not always exist a Condorcet winner among the given alternatives is perhaps somewhat surprising, and is commonly referred to as \emph{Condorcet's paradox}. There is an extensive body of research about Condorcet winners; see Gehrlein's book on this topic \cite{gehrlein-book} for an overview.
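To make the definition concrete, the following short Python sketch (ours, purely illustrative and not part of the mathematical development; all function names are our own) checks whether a given alternative is a Condorcet winner with respect to a list of rankings, and the example at the end reproduces Condorcet's paradox for three voters and $n=3$ alternatives.
\begin{verbatim}
# Minimal illustration (ours): check the definition of a Condorcet winner.
# Rankings are lists ordered from most preferred to least preferred.

def is_condorcet_winner(a, rankings):
    """True if alternative a beats every other alternative in majority voting."""
    k = (len(rankings) + 1) // 2                  # number of voters is 2k-1
    positions = [{alt: pos for pos, alt in enumerate(r)} for r in rankings]
    for b in rankings[0]:
        if b == a:
            continue
        prefer_a = sum(1 for p in positions if p[a] < p[b])
        if prefer_a < k:                          # fewer than k voters prefer a over b
            return False
    return True

def condorcet_winner(rankings):
    """Return the (unique) Condorcet winner, or None if there is none."""
    for a in rankings[0]:
        if is_condorcet_winner(a, rankings):
            return a
    return None

# Condorcet's paradox: three voters, three alternatives, no Condorcet winner.
paradox = [["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]
assert condorcet_winner(paradox) is None
\end{verbatim}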
Often, the Condorcet winner problem is considered in the setting of elections, where typically there are many voters who vote on a small number of candidates (see for example \cite{bell, guilbaud, mossel, niemi-weisberg}). However, as in our example above concerning the group of friends looking for a vacation rental, there are also many settings where one has a small number of voters and many alternatives. In this paper, we will be particularly interested in the setting of having $2k-1$ voters for fixed $k\geq 1$ and a large number $n$ of alternatives.
Our question above concerning the probability of the group of friends finding a vacation rental can now be rephrased as asking about the probability to have a Condorcet winner in $\S$ with respect to (random) rankings $\sigma_1,\dots,\sigma_{2k-1}\in P_{\S}$. Of course, this depends on the random model for choosing $\sigma_1,\dots,\sigma_{2k-1}\in P_{\S}$.
A simple and natural model is to choose $\sigma_1,\dots,\sigma_{2k-1}\in P_{\S}$ independently and uniformly at random among all rankings in $P_\S$ (i.e.\ among all possible $n!$ rankings of the $n$ alternatives). In our example above, this would mean that there is no bias between the different vacation rental options, and so for each of the friends any of the $n!$ possible rankings of the $n$ options is equally likely (and the rankings of the $2k-1$ friends are all independent). In the literature, this is called an \emph{impartial culture}, and the problem of determining the probability of having a Condorcet winner in this setting has been studied since 1968. Specifically, Garman and Kamien \cite{garman-kamien} as well as DeMeyer and Plott \cite{demeyer-plott} calculated the probability of having a Condorcet winner for various small values of $n$ and $k$ and conjectured that this probability tends to zero as $n\to \infty$ for a fixed number $2k-1$ of voters (further calculations for small values of $n$ and $k$ can be found in \cite{gehrlein-fishburn}). This conjecture was proved by May \cite{may}, who also asymptotically determined the probability of a Condorcet winner for $3$ voters (i.e.\ for $k=2$) and a large number of alternatives. However, for $k>2$ even the order of magnitude of the probability of having a Condorcet winner was not known. Our first result asymptotically determines this probability.
\begin{theorem}\label{thm-impartial-culture}
For any fixed number $k\geq 1$ and a (large) set $\S$ of $n$ alternatives, let us consider $2k-1$ voters with independent and uniformly random rankings $\sigma_1,\dots,\sigma_{2k-1}\in P_{\S}$. Then the probability that there is a Condorcet winner is
\[C_k\cdot n^{-(k-1)/k}+O_k\left(\frac{(\ln n)^{1/k}}{n}\right),\]
where $0<C_k<\infty$ is given by the $(2k-1)$-fold integral
\begin{equation}\label{eq-def-C-k-integral}
C_k=\int_0^\infty \dots \int_0^\infty \exp(-\sigma_k(x_1,\dots,x_{2k-1}))\diff x_1 \dots \diff x_{2k-1}.
\end{equation}
\end{theorem}
Here, $\sigma_k(x_1,\dots,x_{2k-1})=\sum_{I\subseteq \{1,\dots,2k-1\}, |I|=k} \prod_{i\in I} x_i$ denotes the usual $k$-th elementary symmetric polynomial, i.e.\ the sum of the products of any $k$ out of the $2k-1$ variables $x_1,\dots,x_{2k-1}$.
In particular, this means that for a fixed number $2k-1$ of voters and a large number of alternatives, in an impartial culture the probability that a Condorcet winner exists is of the form $(C_k+o(1))\cdot n^{-(k-1)/k}$. We remark that for $k=2$ (i.e.\ for $3$ voters) this was already proved by May \cite{may} with $C_2=\pi^{3/2}/2\approx 2.78$, but our error term $O_k((\ln n)^{1/k}/n)$ in Theorem \ref{thm-impartial-culture} is stronger than in May's result. For $k>2$, it was previously not even known that the probability is on the order of $n^{-(k-1)/k}$.
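To illustrate this scaling, the following Monte Carlo sketch (ours, not used in any proof) estimates the probability of a Condorcet winner in an impartial culture; uniformly random rankings are generated by ranking i.i.d.\ uniform scores, which is also the coupling used in Section \ref{sect-proof-thm-impartial-culture}. For $k=2$ the output can be compared with the asymptotic value $C_2\cdot n^{-1/2}\approx 2.78\cdot n^{-1/2}$.
\begin{verbatim}
# Monte Carlo sketch (ours): probability of a Condorcet winner in an impartial culture.
# Each voter scores the n alternatives by i.i.d. Uniform[0,1] values; a smaller score
# means more preferred, which yields independent uniformly random rankings.
import numpy as np

rng = np.random.default_rng(0)

def has_condorcet_winner(scores):
    m, n = scores.shape                               # m = 2k-1 voters, n alternatives
    k = (m + 1) // 2
    for a in range(n):
        wins = (scores[:, [a]] < scores).sum(axis=0)  # voters preferring a over each b
        wins[a] = m                                   # ignore the comparison of a with itself
        if (wins >= k).all():
            return True
    return False

def estimate(n, k, trials=2000):
    return np.mean([has_condorcet_winner(rng.random((2 * k - 1, n)))
                    for _ in range(trials)])

for n in (10, 40, 160):
    print(n, estimate(n, k=2), 2.78 * n ** (-0.5))    # compare with C_2 * n^{-1/2}
\end{verbatim}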
In real life settings, it is often not the case that all possible rankings of the $n$ alternatives are equally likely. In our example above of the friends that are considering options for their vacation, there is likely some correlation between how much a person likes different options. For example, if someone likes to go to the beach, they are likely to rank the vacation rental options on beaches fairly high. In contrast, someone who likes hiking in the mountains is likely to rank the options in the mountains higher. More generally, vacation rental options that are similar in some way (e.g.\ being on the beach or being in the mountains) are more likely to be close to each other in someone's ranking. It therefore makes sense to also consider other, non-uniform probability distributions on the set $P_\S$ of all rankings of the $n$ alternatives.
So let us consider a probability distribution $\pi$ on the set $P_\S$ of all $n!$ rankings of $\S$. This probability distribution corresponds to how likely it is for a person to have a specific ranking of the $n$ alternatives in $\S$. In the literature, such a probability distribution is often referred to as a \emph{culture}. In an impartial culture, as in Theorem \ref{thm-impartial-culture}, the probability distribution $\pi$ is uniform on $P_\S$ (i.e.\ it assigns probability $1/n!$ to each ranking in $P_\S$), but now we allow any probability distribution (which can take into account that similar alternatives are likely to be ranked close to each other). Note that it is in particular possible for $\pi$ to assign probability zero to some rankings in $P_\S$ (if there are rankings that cannot reasonably occur).
Now, consider rankings $\sigma_1,\dots,\sigma_{2k-1}$ that are chosen independently according to the probability distribution $\pi$ on $P_\S$. One may again ask what the probability of having a Condorcet winner is in this setting. Obviously, this depends on the probability distribution $\pi$.
It is easy to see that the probability of a Condorcet winner can be as large as $1$ (i.e.\ there may always exist a Condorcet winner). For example, if $\pi$ is a probability distribution that assigns a particular ranking of $\S$ probability $1$ (and all other rankings probability $0$), then $\sigma_1,\dots,\sigma_{2k-1}$ are always equal to this particular ranking and so the highest-ranked alternative in this ranking is clearly a Condorcet winner. In other words, if the probability distribution $\pi$ is concentrated on just a single ranking in $P_\S$, there is always a Condorcet winner.
One might expect that the lowest possible probability of having a Condorcet winner occurs in the opposite case, where $\pi$ is uniformly distributed among all rankings in $P_\S$ (i.e.\ in the setting of an impartial culture, as in Theorem \ref{thm-impartial-culture}). However, perhaps surprisingly, this is not true. The probability of a Condorcet winner can in fact be much smaller than in Theorem \ref{thm-impartial-culture}. Specifically, consider the probability distribution $\pi^*$ defined as follows. Let $\S=\{A_1,\dots,A_n\}$ and consider the $n$ ``cyclic-looking'' rankings of the form $(A_i, A_{i+1},\dots,A_n,A_1,\dots,A_{i-1})$ for $i=1,\dots,n$. Now let $\pi^*$ assign probability $1/n$ to each of these $n$ rankings (and probability $0$ to all other rankings). Then it turns out (see below) that for a large number $n$ of alternatives, the probability of a Condorcet winner is only on the order of $n^{-(k-1)}$ (which is much smaller than $n^{-(k-1)/k}$ as in Theorem \ref{thm-impartial-culture}).
A similar phenomenon was observed by Garman and Kamien \cite[p.~315]{garman-kamien} in the setting of $n=3$ alternatives and a large number of voters. They \cite[p.~314]{garman-kamien} suggested to classify a probability distribution $\pi$ on $P_\S$ as ``similar'' if (for a large number of voters) the probability of having a Condorcet winner is bigger than for the uniform distribution (i.e.\ for the impartial culture), and to classify $\pi$ as ``antagonistic'' if the probability of having a Condorcet winner is smaller than for the uniform distribution. However, while it is clear that the maximum possible probability of having a Condorcet winner is $1$, the actual minimum possible probability has not been determined.
Answering this question, our next theorem determines the minimum possible probability of having a Condorcet winner for $2k-1$ voters and $n$ alternatives for any positive integers $n$ and $k$. This minimum possible probability of having a Condorcet winner is attained by the probability distribution $\pi^*$ described above. In other words, in the setting of a large number of alternatives discussed above, this probability distribution $\pi^*$ not only yields a much smaller probability of a Condorcet winner than the uniform distribution, but actually gives the smallest possible probability among all probability distributions on $P_\S$.
\begin{theorem}\label{thm-minimum-probability}
For some $n\geq 1$, let $\S=\{A_1,\dots,A_n\}$ be a set of $n$ alternatives. Then for any $k\geq 1$ and any probability distribution $\pi$ on the set $P_\S$ of all rankings of $\S$, the probability that there is a Condorcet winner when $2k-1$ voters independently choose rankings $\sigma_1,\dots,\sigma_{2k-1}\in P_\S$ according to the probability distribution $\pi$ is at least
\begin{equation}\label{eq-term-minimum-probability}
n^{-(2k-2)}\cdot \sum_{\ell=0}^{k-1}\binom{2k-1}{\ell}(n-1)^\ell.
\end{equation}
Furthermore, the probability of having a Condorcet winner is equal to the term in (\ref{eq-term-minimum-probability}) for the probability distribution $\pi^*$ on $P_\S$ defined by taking each of the rankings $(A_i, A_{i+1},\dots,A_n,A_1,\dots,A_{i-1})$ for $i=1,\dots,n$ with probability $1/n$, and all other rankings with probability $0$.
\end{theorem}
We stress again that Theorem \ref{thm-minimum-probability} applies to any positive integers $n$ and $k$, and does not require an assumption of the number $n$ of alternatives being large compared to $k$. Buckley \cite[p.\ 113]{buckley} proved the special case of $n=3$ and $k=2$ of Theorem \ref{thm-minimum-probability} in 1975 (i.e.\ in the case of three voters deciding between three alternatives).
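For illustration (this is not needed for the proofs), the following sketch (ours) evaluates the expression (\ref{eq-term-minimum-probability}) and compares it with a direct simulation of the cyclic probability distribution $\pi^*$ from the theorem; the two printed values should agree up to sampling error.
\begin{verbatim}
# Sketch (ours): evaluate the bound of the theorem and simulate the cyclic culture pi*.
from math import comb
import numpy as np

rng = np.random.default_rng(1)

def min_probability(n, k):
    """n^{-(2k-2)} * sum_{l=0}^{k-1} C(2k-1, l) * (n-1)^l."""
    return sum(comb(2 * k - 1, l) * (n - 1) ** l for l in range(k)) / n ** (2 * k - 2)

def has_condorcet_winner(positions):
    """positions[i, a] = rank of alternative a for voter i (0 = most preferred)."""
    m, n = positions.shape
    k = (m + 1) // 2
    for a in range(n):
        wins = (positions[:, [a]] < positions).sum(axis=0)
        wins[a] = m
        if (wins >= k).all():
            return True
    return False

def simulate_cyclic(n, k, trials=20_000):
    hits = 0
    for _ in range(trials):
        starts = rng.integers(n, size=2 * k - 1)                    # each voter picks a shift
        positions = (np.arange(n)[None, :] - starts[:, None]) % n   # cyclic ranking positions
        hits += has_condorcet_winner(positions)
    return hits / trials

n, k = 5, 2
print(min_probability(n, k), simulate_cyclic(n, k))   # exact value here is 13/25 = 0.52
\end{verbatim}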
The term (\ref{eq-term-minimum-probability}) that gives the precise answer for the minimum possible probability of having a Condorcet winner cannot be simplified for general $n$ and $k$ (there is unfortunately no way to write this sum in a closed form). However, there are ways to express this term asymptotically if $n$ is large with respect to $k$ or vice versa.
If, as in the setting discussed earlier, the number $2k-1$ of voters is fixed, and the number of alternatives is larger (as in the example of the $2k-1$ friends looking for a vacation rental), then the minimum possible probability as determined by Theorem \ref{thm-minimum-probability} has the form
\[n^{-(2k-2)}\cdot \binom{2k-1}{k-1}(n-1)^{k-1}+O_k(n^{-k})=\binom{2k-1}{k}\cdot n^{-(k-1)}+O_k(n^{-k}).\]
We remark that for large $n$ this probability is much smaller than the probability of having a Condorcet winner for the uniform probability distribution on $P_\S$ (i.e.\ for an impartial culture). Indeed, the latter probability is asymptotically equal to $C_k \cdot n^{-(k-1)/k}$ by Theorem \ref{thm-impartial-culture}.
On the other hand, if there is a fixed number $n\geq 3$ of alternatives and there are $2k-1$ voters for $k\to\infty$, then the minimum possible probability as determined by Theorem \ref{thm-minimum-probability} has the form
\[n^{-2k}\cdot \binom{2k-1}{k-1}\cdot (n-1)^{k}\cdot \exp(O_n(1))=\exp\left(-\ln\left(\frac{n^2}{4(n-1)}\right)\cdot k+O_n(\log k)\right).\]
So for fixed $n\geq 3$, this probability decays exponentially with $k$. Again, for the uniform probability distribution (i.e.\ for an impartial culture) the probability of having a Condorcet winner is much larger. In fact, for every fixed number $n\geq 3$ of alternatives, the latter probability converges to some real number strictly between $0$ and $1$ as $k\to\infty$ (see e.g.\ \cite[p. 320]{niemi-weisberg}). Note that for $n\in \{1,2\}$ alternatives, the probability of having a Condorcet winner is always equal to $1$ for any probability distribution on $P_\S$.
These results are in sharp contrast to Gehrlein's claim \cite{gehrlein-2002} that the probability of having a Condorcet winner is minimal for an impartial culture (i.e.\ for the uniform distribution $\pi$). More precisely, Gehrlein \cite[p.\ 197]{gehrlein-2002} argues that an impartial culture (as well as other models of ``balanced preferences'' that are described in his paper) gives a lower bound for the probability of a Condorcet winner in more general situations. He \cite[p.\ 177]{gehrlein-2002} also writes ``In general, intuition suggests that we would be most likely to observe this paradox on the pairwise majority rule relations when voters' preferences are closest to being balanced between all pairs of candidates'' and, with regards to this statement, ``it does seem to be a generally valid claim''. However, as shown above, for the probability distribution $\pi^*$ defined in Theorem \ref{thm-minimum-probability}, a Condorcet winner exists with much smaller probability than in an impartial culture (i.e.\ for the uniform probability distribution on $P_\S$). This disproves Gehrlein's claim.
In fact, this probability distribution $\pi^*$ achieves the actual minimum possible probability of having a Condorcet winner. Interestingly, while the distribution $\pi^*$ is balanced (or more precisely, symmetric) between all of the candidates, it is not ``balanced between all pairs of candidates'' as Gehrlein \cite[p.\ 177]{gehrlein-2002} suggested. For example, looking at alternatives $A_1$ and $A_2$, a voter whose ranking $\sigma$ is chosen according to the probability distribution $\pi^*$ prefers $A_1$ over $A_2$ with probability $(n-1)/n$. Thus, for $n\geq 2$ the probability distribution $\pi^*$ is far from balanced when looking at the pair of alternatives $A_1$ and $A_2$. Still, as shown in Theorem \ref{thm-minimum-probability}, the probability distribution $\pi^*$ minimizes the probability of a Condorcet winner among all possible probability distributions on $P_\S$. This again strongly disproves Gehrlein's claim \cite[p.\ 177]{gehrlein-2002}.
We remark that Tsetlin et al.\ \cite{tsetlin-et-al} studied the problem of minimizing the probability of a Condorcet winner in the setting of three candidates under the additional assumption that the probability distribution $\pi$ induces a transitive weak majority preference relationship (see the assumption of \cite[Theorem 3]{tsetlin-et-al}). Under this additional restriction, they proved that the impartial culture minimizes the probability of a Condorcet winner for three alternatives, and they conjectured the same to be true for more than three alternatives.
In conclusion, depending on the probability distribution $\pi$, the probability of having a Condorcet winner can be anywhere between $1$ and the probability in (\ref{eq-term-minimum-probability}). If there is no bias or correlation between the $n$ alternatives, i.e.\ in the setting of an impartial culture, then the probability is asymptotically equal to $C_k\cdot n^{-(k-1)/k}$ with $C_k$ as in Theorem \ref{thm-impartial-culture}. This probability is in some sense neither close to the upper bound $1$ nor to the lower bound in (\ref{eq-term-minimum-probability}). Coming back to our example from the beginning, one should hope that the relevant probability distribution $\pi$ leads to a much higher probability of a Condorcet winner, since it would be frustrating for the group of friends to be able to agree on a vacation rental option only with probability tending to zero for large $n$. This is indeed plausible: most likely, some of the vacation rental options are inherently better than others, and so one should expect the probability distribution $\pi$ to be fairly biased towards these ``better'' alternatives. In this light, it is not surprising that in practice a group of friends is usually able to find a vacation rental that suits their needs.
\textit{Notation and Organization.} For a positive integer $m$, we abbreviate the set $\{1,\dots,m\}$ by $[m]$ (which is a common notation). We use standard asymptotic $O$-notation for fixed $k\geq 1$ and $n\to \infty$. In order to emphasize the possible dependence on the fixed number $k$, we write, for example, $O_k(1/n)$ for a term which is bounded in absolute value by a term of the form $D_k \cdot (1/n)$ for some constant $D_k>0$ depending on $k$.
For $\ell\leq m$, and variables $x_1,\dots,x_m\in \mathbb{R}$, the $\ell$-th elementary symmetric polynomial of $x_1,\dots,x_m$ is defined to be
\[\sigma_\ell(x_1,\dots,x_{m})=\sum_{\substack{I\subseteq [m]\\ |I|=\ell}} \,\prod_{i\in I} x_i,\]
i.e.\ it is the sum of the products of any $\ell$ out of the $m$ variables $x_1,\dots,x_{m}$. Note that this is a homogeneous polynomial of degree $\ell$, and, furthermore, for non-negative $x_1,\dots,x_m$, this polynomial $\sigma_\ell(x_1,\dots,x_{m})$ is always non-negative.
We will prove Theorem \ref{thm-impartial-culture} in Section \ref{sect-proof-thm-impartial-culture}, postponing the proofs of some lemmas to Section \ref{sect-lemmas}. Afterwards, we prove Theorem \ref{thm-minimum-probability} in Section \ref{sect-proof-minimum-probability}.
\textit{Acknowledgements.} The author would like to thank Asaf Ferber, Matthew Kwan and Elchanan Mossel for helpful discussions.
\section{The probability of a Condorcet winner in an impartial culture}
\label{sect-proof-thm-impartial-culture}
In this section, we prove Theorem \ref{thm-impartial-culture}, which concerns the probability of having a Condorcet winner in an impartial culture for $2k-1$ voters and a large number of alternatives.
First, note that the theorem is trivially true for $k=1$. Indeed, for $k=1$, there is only one voter and therefore a Condorcet winner exists with probability $1$. On the other hand,
\[C_1=\int_0^\infty \exp(-\sigma_1(x_1))\diff x_1 =\int_0^\infty \exp(-x_1)\diff x_1 =1\]
and therefore we have $C_1\cdot n^{(1-1)/1}=1$, as desired.
So let us from now on fix $k\geq 2$. Since we are proving an asymptotic statement, we may assume that $n$ is sufficiently large (with respect to $k$). In particular, we can assume that $2(\ln n)/(n-1)\leq 1/3$.
We have a set $\S=\{A_1,\dots,A_n\}$ of $n$ alternatives, and the $2k-1$ voters choose rankings $\sigma_1,\dots,\sigma_{2k-1}$ independently and uniformly at random from the set $P_\S$ of all $n!$ rankings of $\S$. We can model the random choice of these rankings as follows.
Let us assume that each voter $i$ for $i=1,\dots,2k-1$ picks $n$ random real numbers $x_i^{(1)},\dots,x_i^{(n)}$ independently uniformly at random from the interval $[0,1]$ (and this happens independently for all voters). Then with probability $1$ these $n$ points $x_i^{(1)},\dots,x_i^{(n)}$ are distinct. Let the ranking $\sigma_i$ of $\S=\{A_1,\dots,A_n\}$ be obtained by recording the order of the points $x_i^{(1)},\dots,x_i^{(n)}$ in the interval $[0,1]$, meaning that voter $i$ ranks the alternative $A_\ell$ first (i.e.\ highest) for which the number $x_i^{(\ell)}$ is the smallest among $x_i^{(1)},\dots,x_i^{(n)}$ (and then the alternative $A_{\ell'}$ second for which the $x_i^{(\ell')}$ is the second-smallest number and so on). This way, we indeed obtain a uniformly random ranking $\sigma_i\in P_\S$ for voter $i$, and these rankings are independent for all the voters $i=1,\dots,2k-1$.
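The following short snippet (ours, purely a sanity check of this coupling) illustrates the construction for $n=3$: ranking i.i.d.\ uniform scores produces each of the $3!=6$ orderings with approximately equal frequency.
\begin{verbatim}
# Sanity check (ours): ranking i.i.d. Uniform[0,1] scores gives a uniformly random ranking.
import collections
import numpy as np

rng = np.random.default_rng(2)
n, trials = 3, 60_000
counts = collections.Counter(tuple(map(int, np.argsort(rng.random(n))))
                             for _ in range(trials))
for perm, c in sorted(counts.items()):
    print(perm, round(c / trials, 3))   # each of the 3! = 6 frequencies should be near 1/6
\end{verbatim}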
Note that since there is always at most one Condorcet winner, the total probability that a Condorcet winner exists is the sum of the probabilities that $A_\ell$ is a Condorcet winner over all $\ell=1,\dots,n$. In other words,
\[\operatorname{\mathbb{P}}(\text{Condorcet winner exists})=\sum_{\ell=1}^{n}\operatorname{\mathbb{P}}(A_\ell\text{ is Condorcet winner}).\]
Note that since the rankings $\sigma_i$ are independent uniformly random in $P_\S$, each alternative $A_\ell$ for $\ell=1,\dots,n$ is equally likely to be a Condorcet winner. Hence the summands on the right-hand side of the previous equation are equal for all $\ell=1,\dots,n$ and we obtain that
\[\operatorname{\mathbb{P}}(\text{Condorcet winner exists})=\sum_{\ell=1}^{n}\operatorname{\mathbb{P}}(A_\ell\text{ is Condorcet winner})=n\cdot \operatorname{\mathbb{P}}(A_1\text{ is Condorcet winner}).\]
Hence, in order to prove Theorem \ref{thm-impartial-culture}, it suffices to show that
\begin{equation}\label{eq-term-prob-one-alternative}
\operatorname{\mathbb{P}}(A_1\text{ is Condorcet winner}) = C_k\cdot n^{-(2k-1)/k}+O_k\left(\frac{(\ln n)^{1/k}}{n^2}\right),
\end{equation}
where $C_k$ is given by (\ref{eq-def-C-k-integral}) and that the integral in (\ref{eq-def-C-k-integral}) is indeed finite (it is clear that the integral is positive, but it could a priori be infinite). The finiteness of the integral follows from the first part of the following more general lemma.
\begin{lemma}\label{lem-integral-finite} For any positive integers $\ell<m$, we have
\[\int_0^\infty \dots \int_0^\infty \exp(-\sigma_\ell(x_1,\dots,x_{m}))\diff x_1 \dots \diff x_{m}\leq (m!)^2.\]
Furthermore, defining $0<C_{\ell,m}<\infty$ to be the value of this integral, then in the case of $\ell\geq 2$ we have
\[\int_0^a \dots \int_0^a \exp(-\sigma_\ell(x_1,\dots,x_{m}))\diff x_1 \dots \diff x_{m}\geq C_{\ell,m} - m\cdot (m-1) \cdot ((m-1)!)^2 \cdot a^{-(m-\ell)/(\ell-1)}\]
for every $a\geq 1$.
\end{lemma}
We postpone the proof of Lemma \ref{lem-integral-finite} to Section \ref{sect-lemmas}. Applying Lemma \ref{lem-integral-finite} to $\ell=k$ and $m=2k-1$ (noting that $k<2k-1$ by our assumption that $k\geq 2$) implies that the integral in (\ref{eq-def-C-k-integral}) is at most $((2k-1)!)^2$, and so in particular it is finite. So we have $0<C_k<\infty$ and it now suffices to prove (\ref{eq-term-prob-one-alternative}).
Recall that alternative $A_1$ is a Condorcet winner if and only if for each $\ell=2,\dots,n$ there are at least $k$ voters $i\in [2k-1]$ such that voter $i$ ranks $A_1$ before $A_\ell$. In other words, for each $\ell=2,\dots,n$ there must be at least $k$ different indices $i\in [2k-1]$ such that $x_i^{(1)}\leq x_i^{(\ell)}$.
Remembering that all $x_i^{(\ell)}$ for $i=1,\dots,2k-1$ and $\ell=1,\dots, n$ are independent uniformly random real numbers in the interval $[0,1]$, we can imagine that we first sample the random numbers $x_1^{(1)},\dots,x_{2k-1}^{(1)}$ (i.e.\ the numbers corresponding to alternative $A_1$). For simplicity, let us write $x_i=x_i^{(1)}$ for $i=1,\dots,2k-1$. Then $A_1$ is a Condorcet winner if and only if for each $\ell=2,\dots,n$ the random numbers $x_1^{(\ell)},\dots,x_{2k-1}^{(\ell)}$ satisfy the condition
\begin{equation}\label{eq-condition-for-each-ell}
x_i\leq x_i^{(\ell)}\text{ for at least }k\text{ indices }i\in [2k-1].
\end{equation}
Given the values of $x_1,\dots,x_{2k-1}\in [0,1]$, let $Q(x_1,\dots,x_{2k-1})$ be the probability that for independent uniformly random variables $y_1,\dots,y_{2k-1}$ in the interval $[0,1]$ we have $x_i\leq y_i$ for at most $k-1$ indices $i\in [2k-1]$. Then each $\ell=2,\dots,n$ satisfies condition (\ref{eq-condition-for-each-ell}) with probability precisely $1-Q(x_1,\dots,x_{2k-1})$ and these events are independent for all $\ell=2,\dots,n$ (conditioning on the values of $x_1,\dots,x_{2k-1}$). Hence, conditioning on the values of $x_1,\dots,x_{2k-1}$, the probability of $A_1$ being a Condorcet winner is $(1-Q(x_1,\dots,x_{2k-1}))^{n-1}$. Thus, we obtain
\begin{equation}\label{eq-expression-with-Q}
\operatorname{\mathbb{P}}(A_1\text{ is Condorcet winner}) = \int_0^1 \dots \int_0^1 \left(1-Q(x_1,\dots,x_{2k-1})\right)^{n-1}\diff x_1 \dots \diff x_{2k-1}.
\end{equation}
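Before estimating $Q$, here is a quick numerical sanity check (ours, not part of the argument): the probability $Q(x_1,\dots,x_{2k-1})$ is approximated by sampling directly from its definition and compared with $\sigma_k(x_1,\dots,x_{2k-1})$, anticipating the bounds in the lemma below.
\begin{verbatim}
# Quick check (ours): Q is the probability that x_i <= y_i holds for at most k-1 of the
# 2k-1 indices, where y_1, ..., y_{2k-1} are i.i.d. Uniform[0,1]; compare with sigma_k(x).
from itertools import combinations
from math import prod
import numpy as np

rng = np.random.default_rng(3)

def sigma(k, x):
    """k-th elementary symmetric polynomial of the entries of x."""
    return sum(prod(x[i] for i in I) for I in combinations(range(len(x)), k))

def Q_monte_carlo(k, x, trials=200_000):
    y = rng.random((trials, len(x)))
    # x_i <= y_i for at most k-1 of the indices
    return np.mean((np.asarray(x) <= y).sum(axis=1) <= k - 1)

k = 2
x = [0.05, 0.10, 0.02]
print(Q_monte_carlo(k, x), sigma(k, x))   # the estimate should be close to sigma_2 = 0.008
\end{verbatim}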
The following lemma gives some helpful estimates for $Q(x_1,\dots,x_{2k-1})$ in terms of $\sigma_k(x_1,\dots,x_{2k-1})$.
\begin{lemma}\label{lemma-bounds-Q} For real numbers $x_1,\dots,x_{2k-1}\in [0,1]$, we have
\[2^{-2k+1}\cdot \sigma_k(x_1,\dots,x_{2k-1})\leq Q(x_1,\dots,x_{2k-1})\leq \sigma_k(x_1,\dots,x_{2k-1})\]
and furthermore
\[Q(x_1,\dots,x_{2k-1})\geq \sigma_k(x_1,\dots,x_{2k-1})-2^{4k-2}\cdot (\sigma_k(x_1,\dots,x_{2k-1}))^{(k+1)/k}.\]
\end{lemma}
We postpone the proof of Lemma \ref{lemma-bounds-Q} to Section \ref{sect-lemmas}. The proof relies on Bonferroni's inequalities in probability theory, as well as on Newton's inequality for elementary symmetric functions. For our argument, we will use the following corollary of Lemma \ref{lemma-bounds-Q}.
\begin{corollary}\label{coro-Q-close-sigma-k}
If $x_1,\dots,x_{2k-1}\in [0,1]$ are real numbers with $Q(x_1,\dots,x_{2k-1})\leq 2 (\ln n)/(n-1)$, then we have
\[Q(x_1,\dots,x_{2k-1})\geq \left(1-2^{4k}\cdot \left(\frac{\ln n}{n-1}\right)^{1/k}\right)\cdot \sigma_k(x_1,\dots,x_{2k-1}).\]
\end{corollary}
Recall that by Lemma \ref{lemma-bounds-Q}, we always have $Q(x_1,\dots,x_{2k-1})\leq \sigma_k(x_1,\dots,x_{2k-1})$ if $x_1,\dots,x_{2k-1}\in [0,1]$. This means that under the assumptions in Corollary \ref{coro-Q-close-sigma-k}, the value of $Q(x_1,\dots,x_{2k-1})$ is actually fairly close to $\sigma_k(x_1,\dots,x_{2k-1})$.
\begin{proof}[Proof of Corollary \ref{coro-Q-close-sigma-k} assuming Lemma \ref{lemma-bounds-Q}]
When combining the assumption $Q(x_1,\dots,x_{2k-1})\leq 2 (\ln n)/(n-1)$ of the corollary with the first inequality in Lemma \ref{lemma-bounds-Q}, we obtain
\[\sigma_k(x_1,\dots,x_{2k-1})\leq 2^{2k-1}\cdot Q(x_1,\dots,x_{2k-1})\leq 2^{2k-1}\cdot 2 \cdot \frac{\ln n}{n-1} =4^k\cdot \frac{\ln n}{n-1}.\]
Hence
\[\sigma_k(x_1,\dots,x_{2k-1})^{(k+1)/k}=\sigma_k(x_1,\dots,x_{2k-1})^{1/k}\cdot \sigma_k(x_1,\dots,x_{2k-1})\leq 4\cdot \left(\frac{\ln n}{n-1}\right)^{1/k}\cdot \sigma_k(x_1,\dots,x_{2k-1}),\]
and from the second part of Lemma \ref{lemma-bounds-Q} we obtain
\[Q(x_1,\dots,x_{2k-1})\geq \sigma_k(x_1,\dots,x_{2k-1})-2^{4k-2} \sigma_k(x_1,\dots,x_{2k-1})^{(k+1)/k} \geq \left(1-2^{4k} \left(\frac{\ln n}{n-1}\right)^{1/k}\right) \sigma_k(x_1,\dots,x_{2k-1}),\]
as desired.
\end{proof}
In order to estimate the integral in (\ref{eq-expression-with-Q}), we also need the following lemma, which states, roughly speaking, that for $0\leq t\leq 1/3$, the function $1-t$ is fairly close to $e^{-t}$. The lemma follows relatively easily from Taylor's theorem, and we postpone the proof details to Section \ref{sect-lemmas}.
\begin{lemma}\label{lem-taylor}
For $0\leq t\leq 1/3$, we have
\[e^{-t-t^2}\leq 1-t\leq e^{-t}.\]
\end{lemma}
In order to show (\ref{eq-term-prob-one-alternative}), let us start by rewriting (\ref{eq-expression-with-Q}) by splitting up the $(2k-1)$-fold integral on the right-hand side into two domains. Let
\[D=\{(x_1,\dots,x_{2k-1})\in [0,1]^{2k-1}\mid Q(x_1,\dots,x_{2k-1})\leq 2 (\ln n)/(n-1)\}\]
be the domain of those $(x_1,\dots,x_{2k-1})$ in $[0,1]^{2k-1}$ for which we have $Q(x_1,\dots,x_{2k-1})\leq 2 (\ln n)/(n-1)$. Note that we can apply Corollary \ref{coro-Q-close-sigma-k} to any $(x_1,\dots,x_{2k-1})\in D$. Furthermore, note that for any $(x_1,\dots,x_{2k-1})\in [0,1]^{2k-1}\setminus D$, by Lemma \ref{lem-taylor} we have
\begin{equation}\label{eq-outside-D}
\left(1-Q(x_1,\dots,x_{2k-1})\right)^{n-1}\leq \left(1-2\cdot \frac{\ln n}{n-1}\right)^{n-1}\leq \left(\exp\left(-\frac{2\ln n}{n-1}\right)\right)^{n-1}=n^{-2}.
\end{equation}
From (\ref{eq-expression-with-Q}), we now obtain
\begin{align}
\operatorname{\mathbb{P}}(A_1\text{ is Condorcet winner}) &= \int_0^1 \dots \int_0^1 \left(1-Q(x_1,\dots,x_{2k-1})\right)^{n-1}\diff x_1 \dots \diff x_{2k-1}\notag\\
&= \int_{[0,1]^{2k-1}} \left(1-Q(x_1,\dots,x_{2k-1})\right)^{n-1} \diff^{2k-1} (x_1,\dots,x_{2k-1})\notag\\
&=\int_{D} \left(1-Q(x_1,\dots,x_{2k-1})\right)^{n-1} \diff^{2k-1} (x_1,\dots,x_{2k-1})\notag\\
&\quad\quad+\int_{[0,1]^{2k-1}\setminus D} \left(1-Q(x_1,\dots,x_{2k-1})\right)^{n-1} \diff^{2k-1} (x_1,\dots,x_{2k-1})\label{eq-sum-two-integrals}
\end{align}
Let us now show upper and lower bounds for the sum in (\ref{eq-sum-two-integrals}). First, as an upper bound we have by (\ref{eq-outside-D}) and Lemma \ref{lem-taylor} (noting that $Q(x_1,\dots,x_{2k-1})\leq 2(\ln n)/(n-1)\leq 1/3$ for all $(x_1,\dots,x_{2k-1})\in D$)
\begin{align}
\operatorname{\mathbb{P}}(A_1\text{ is Condorcet winner})&\leq \int_{D} \left(1-Q(x_1,\dots,x_{2k-1})\right)^{n-1} \diff^{2k-1} (x_1,\dots,x_{2k-1})+\operatorname{Vol}([0,1]^{2k-1}\setminus D)\cdot n^{-2}\notag\\
&\leq \int_{D} \left(\exp(-Q(x_1,\dots,x_{2k-1}))\right)^{n-1} \diff^{2k-1} (x_1,\dots,x_{2k-1})+ n^{-2}\notag\\
&= \int_{D} \exp(-Q(x_1,\dots,x_{2k-1})\cdot (n-1)) \diff^{2k-1} (x_1,\dots,x_{2k-1})+ n^{-2}.\label{eq-ineq-step-upper-bound}
\end{align}
Note that for $(x_1,\dots,x_{2k-1})\in D$ we can apply Corollary \ref{coro-Q-close-sigma-k} and obtain
\begin{align*}
Q(x_1,\dots,x_{2k-1})\cdot (n-1)&\geq \left(1-2^{4k}\cdot \left(\frac{\ln n}{n-1}\right)^{1/k}\right)\cdot \sigma_k(x_1,\dots,x_{2k-1})\cdot (n-1)\\
&=\left((n-1)-2^{4k}(\ln n)^{1/k}(n-1)^{(k-1)/k}\right)\cdot \sigma_k(x_1,\dots,x_{2k-1})\\
&\geq \left(n-2^{4k+1}(\ln n)^{1/k}n^{(k-1)/k}\right)\cdot \sigma_k(x_1,\dots,x_{2k-1}).
\end{align*}
Combining this with (\ref{eq-ineq-step-upper-bound}) yields
\begin{align}
&\operatorname{\mathbb{P}}(A_1\text{ is Condorcet winner})\notag\\
&\quad \leq \int_{D} \exp\left(-\left(n-2^{4k+1}(\ln n)^{1/k}n^{(k-1)/k}\right)\sigma_k(x_1,\dots,x_{2k-1})\right) \diff^{2k-1} (x_1,\dots,x_{2k-1})+ n^{-2}\notag\\
&\quad\leq \int_0^\infty \dots \int_0^\infty \exp\left(-\left(n-2^{4k+1}(\ln n)^{1/k}n^{(k-1)/k}\right)\sigma_k(x_1,\dots,x_{2k-1})\right) \diff x_1 \dots \diff x_{2k-1}+ n^{-2}\notag\\
&\quad= \left(n-2^{4k+1}(\ln n)^{1/k}n^{(k-1)/k}\right)^{-(2k-1)/k}\int_0^\infty \dots \int_0^\infty \exp(-\sigma_k(z_1,\dots,z_{2k-1})) \diff z_1 \dots \diff z_{2k-1}+ n^{-2}\notag\\
&\quad= \left(1+O_k\left(\frac{(\ln n)^{1/k}}{n^{1/k}}\right)\right)\cdot n^{-(2k-1)/k}\cdot C_k+ n^{-2}=C_k\cdot n^{-(2k-1)/k}+O_k\left(\frac{(\ln n)^{1/k}}{n^2}\right),\label{eq-ineq-upper-bound-done}
\end{align}
where the first equality sign is obtained by substituting $z_i=(n-2^{4k+1}(\ln n)^{1/k}n^{(k-1)/k})^{1/k}\cdot x_i$ for $i=1,\dots,2k-1$ (recalling that $\sigma_k$ is a homogeneous polynomial of degree $k$). This finishes the proof of the upper bound in (\ref{eq-term-prob-one-alternative}).
For the lower bound, let us return to (\ref{eq-sum-two-integrals}), and apply Lemma \ref{lem-taylor} to the first integral (recall that $Q(x_1,\dots,x_{2k-1})\leq 2 (\ln n)/(n-1)\leq 1/3$ for all $(x_1,\dots,x_{2k-1})\in D$). This gives
\begin{align}
\operatorname{\mathbb{P}}(A_1\text{ is Condorcet winner}) &\geq\int_{D} \left(1-Q(x_1,\dots,x_{2k-1})\right)^{n-1} \diff^{2k-1} (x_1,\dots,x_{2k-1})\notag\\
&\geq\int_{D} \exp\left(-(Q(x_1,\dots,x_{2k-1})+Q(x_1,\dots,x_{2k-1})^2)\cdot (n-1)\right) \diff^{2k-1} (x_1,\dots,x_{2k-1})\notag\\
&\geq\int_{D} \exp\left(-Q(x_1,\dots,x_{2k-1})\cdot \left(1+2 \frac{\ln n}{n-1}\right)\cdot (n-1)\right) \diff^{2k-1} (x_1,\dots,x_{2k-1})\notag\\
&\geq\int_{D} \exp\left(-Q(x_1,\dots,x_{2k-1})\cdot (n+2\ln n)\right) \diff^{2k-1} (x_1,\dots,x_{2k-1})\notag\\
&=\int_{D} \exp\left(-Q(x_1,\dots,x_{2k-1})\cdot f(n)\right) \diff^{2k-1} (x_1,\dots,x_{2k-1}),\label{eq-step-loer-bound}
\end{align}
using the short-hand notation $f(n)=n+2\ln n$. The integral in (\ref{eq-step-loer-bound}) is very close to the analogous integral taken over $[0,1]^{2k-1}$ instead of over $D$. Indeed, recall that for any $(x_1,\dots,x_{2k-1})\in [0,1]^{2k-1}\setminus D$ we have $Q(x_1,\dots,x_{2k-1})\cdot f(n)\geq 2 (\ln n)/(n-1)\cdot (n+2\ln n)\geq 2\ln n$ and therefore
\[\int_{[0,1]^{2k-1}\setminus D} \exp\left(-Q(x_1,\dots,x_{2k-1})\cdot f(n)\right) \diff^{2k-1} (x_1,\dots,x_{2k-1})\leq \int_{[0,1]^{2k-1}\setminus D} n^{-2} \diff^{2k-1} (x_1,\dots,x_{2k-1})\leq n^{-2}.\]
Thus, (\ref{eq-step-loer-bound}) implies
\begin{align*}
\operatorname{\mathbb{P}}(A_1\text{ is Condorcet winner})&\geq\int_{[0,1]^{2k-1}} \exp\left(-Q(x_1,\dots,x_{2k-1})\cdot f(n)\right) \diff^{2k-1} (x_1,\dots,x_{2k-1}) - n^{-2}\\
&\geq\int_0^1 \dots \int_0^1\exp\left(-\sigma_k(x_1,\dots,x_{2k-1})\cdot f(n)\right) \diff x_1\dots \diff x_{2k-1} - n^{-2},
\end{align*}
where for the second inequality we used the upper bound on $Q(x_1,\dots,x_{2k-1})$ in Lemma \ref{lemma-bounds-Q}. Let us now use the substitution $z_i=f(n)^{1/k}\cdot x_i$ for $i=1,\dots,2k-1$ (again recalling that $\sigma_k$ is a homogeneous polynomial of degree $k$). This yields
\begin{align*}
\operatorname{\mathbb{P}}(A_1\text{ is Condorcet winner})&\geq f(n)^{-(2k-1)/k}\int_0^{f(n)^{1/k}} \dots \int_0^{f(n)^{1/k}}\exp\left(-\sigma_k(z_1,\dots,z_{2k-1})\right) \diff z_1\dots \diff z_{2k-1} - n^{-2}\\
&\geq f(n)^{-(2k-1)/k} \cdot \left(C_k - ((2k)!)^2\cdot f(n)^{-1/k}\right) - n^{-2}\\
&= \left(1-O_k\left(\frac{\ln n}{n}\right)\right)\cdot n^{-(2k-1)/k} \cdot\left(C_k - O_k(n^{-1/k})\right) - n^{-2}\\
&= C_k\cdot n^{-(2k-1)/k}-O_k(n^{-2}),
\end{align*}
where the second inequality follows from the second part of Lemma \ref{lem-integral-finite} applied to $a=f(n)^{1/k}$ with $\ell=k$ and $m=2k-1$ (then $C_{k,2k-1}$ is simply $C_k$ and $(m-\ell)/(\ell-1)=(k-1)/(k-1)=1$), and where in the last step we used that $k\geq 2$. This gives the desired lower bound in (\ref{eq-term-prob-one-alternative}).
\section{The minimum possible probability of a Condorcet winner}
\label{sect-proof-minimum-probability}
In this section, we prove Theorem \ref{thm-minimum-probability}, which determines the minimum possible probability of having a Condorcet winner when $2k-1$ voters choose independent rankings of a given set of $n$ alternatives according to some probability distribution.
For an integer $k\geq 1$ and $x\in \mathbb{R}$, let us define
\[p_k(x)=\sum_{\ell=0}^{k-1}\binom{2k-1}{\ell}\cdot x^{2k-1-\ell}\cdot (1-x)^{\ell}.\]
Note that for $x\in [0,1]$, the value $p_k(x)$ can be interpreted as follows: Consider a biased coin which shows heads with probability $x$ and tails with probability $1-x$. Then $p_k(x)$ is precisely the probability that, when throwing this coin $2k-1$ times, we have at most $k-1$ tails (indeed, each summand in the sum above is the probability of having exactly $\ell$ tails). Equivalently, $p_k(x)$ is the probability that among $2k-1$ throws we have at least $k$ heads.
Also note that we can express the term in (\ref{eq-term-minimum-probability}) in Theorem \ref{thm-minimum-probability} as
\begin{equation}\label{eq-connect-term-to function}
n^{-(2k-2)}\cdot \sum_{\ell=0}^{k-1}\binom{2k-1}{\ell}(n-1)^\ell = n\cdot \sum_{\ell=0}^{k-1}\binom{2k-1}{\ell}\left(\frac{1}{n}\right)^{2k-1-\ell}\left(\frac{n-1}{n}\right)^{\ell}=n\cdot p_k(1/n).
\end{equation}
In other words, in order to prove Theorem \ref{thm-minimum-probability}, we need to show that for every probability distribution $\pi$ as in the theorem statement, the probability of having a Condorcet winner is at least $n\cdot p_k(1/n)$, and that furthermore the probability is exactly $n\cdot p_k(1/n)$ for the specific probability distribution $\pi^*$ defined in the theorem statement.
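As a quick numerical check (ours, not needed for the argument), the following snippet implements $p_k$ as a binomial tail probability and verifies the identity (\ref{eq-connect-term-to function}) for specific values of $n$ and $k$.
\begin{verbatim}
# Quick check (ours): p_k(x) as a binomial tail probability, and the identity above.
from math import comb

def p(k, x):
    """Probability of at least k heads in 2k-1 tosses of a coin with head probability x."""
    return sum(comb(2 * k - 1, l) * x ** (2 * k - 1 - l) * (1 - x) ** l for l in range(k))

def bound(n, k):
    """The expression in the theorem: n^{-(2k-2)} * sum_{l<k} C(2k-1, l) * (n-1)^l."""
    return sum(comb(2 * k - 1, l) * (n - 1) ** l for l in range(k)) / n ** (2 * k - 2)

n, k = 7, 3
assert abs(n * p(k, 1 / n) - bound(n, k)) < 1e-12
print(n * p(k, 1 / n))   # = 391/2401, about 0.1628
\end{verbatim}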
Our proof of Theorem \ref{thm-minimum-probability} will crucially rely on the following property of the function $p_k(x)$: For variables $x_1,\dots,x_n\in [0,1]$ consider the optimization problem of minimizing the sum $p_k(x_1)+\dots+p_k(x_n)$ under the constraint that $x_1+\dots+x_n=1$. When taking $x_1=\dots=x_n=1/n$, we obtain $p_k(x_1)+\dots+p_k(x_n)=n\cdot p_k(1/n)$. The following proposition states that this value $n\cdot p_k(1/n)$ is actually optimal, i.e.\ it is the minimum possible value of $p_k(x_1)+\dots+p_k(x_n)$ for any $x_1,\dots,x_n\in [0,1]$ with $x_1+\dots+x_n=1$. The proof of this optimization statement is given later in Section \ref{subsect-propo-optimization} (the proof is somewhat involved and requires several other lemmas that are stated and proved in Section \ref{subsect-propo-optimization}).
\begin{proposition}\label{propo-optimization}
For every $k\geq 1$ and any real numbers $x_1,\dots,x_n\in [0,1]$ with $x_1+\dots+x_n=1$ we have $p_k(x_1)+\dots+p_k(x_n)\geq n\cdot p_k(1/n)$.
\end{proposition}
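The proposition can also be probed empirically; the following sketch (ours) samples random points from the simplex and confirms that the sum $p_k(x_1)+\dots+p_k(x_n)$ never falls below $n\cdot p_k(1/n)$.
\begin{verbatim}
# Empirical check (ours): for random points of the simplex, p_k(x_1)+...+p_k(x_n)
# never falls below n * p_k(1/n), in line with the proposition.
from math import comb
import numpy as np

rng = np.random.default_rng(4)

def p(k, x):
    return sum(comb(2 * k - 1, l) * x ** (2 * k - 1 - l) * (1 - x) ** l for l in range(k))

n, k = 6, 3
baseline = n * p(k, 1 / n)
worst = min(sum(p(k, xi) for xi in rng.dirichlet(np.ones(n))) for _ in range(10_000))
print(worst, ">=", baseline)
\end{verbatim}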
We are now ready to prove Theorem \ref{thm-minimum-probability}. We will separately prove the two parts of the theorem (the first part asserting a bound for the probability of having a Condorcet winner for any probability distribution $\pi$, and the second part asserting equality for the specific probability distribution $\pi^*$).
Using (\ref{eq-connect-term-to function}), we can restate the first part of Theorem \ref{thm-minimum-probability} as follows.
\begin{proposition}\label{prop-minimum-probability-1}
Let $n\geq 1$, let $\S=\{A_1,\dots,A_n\}$ be a set of $n$ alternatives, and let $\pi$ be a probability distribution on the set $P_\S$ of all rankings of $\S$. Then for any $k\geq 1$, the probability that there is a Condorcet winner when $2k-1$ voters independently choose rankings $\sigma_1,\dots,\sigma_{2k-1}\in P_\S$ according to the probability distribution $\pi$ is at least $n\cdot p_k(1/n)$.
\end{proposition}
\begin{proof}
For each $i=1,\dots,n$, let us define $x_i$ to be the probability that a random ranking of $\mathcal{S}$ chosen according to the probability distribution $\pi$ has $A_i$ as its top-ranked alternative. Then we have $x_1,\dots,x_n\in [0,1]$ and $x_1+\dots+x_n=1$. Hence, by Proposition \ref{propo-optimization} we have $p_k(x_1)+\dots+p_k(x_n)\geq n\cdot p_k(1/n)$.
For any outcomes of the rankings $\sigma_1,\dots,\sigma_{2k-1}\in P_\S$ there is automatically a Condorcet winner if for some $i=1,\dots,n$ at least $k$ of the $2k-1$ rankings $\sigma_1,\dots,\sigma_{2k-1}$ have alternative $A_i$ as their top-ranked alternative (since $A_i$ is automatically a Condorcet winner in this case). We claim that for each $i=1,\dots,n$, this happens with probability exactly $p_k(x_i)$.
Indeed, fix some $i\in \{1,\dots,n\}$. For each of the $2k-1$ random rankings $\sigma_1,\dots,\sigma_{2k-1}$, the probability of $A_i$ being the top-ranked alternative is precisely $x_i$ (by definition of $x_i$). Hence the probability that at least $k$ of the $2k-1$ rankings $\sigma_1,\dots,\sigma_{2k-1}$ have alternative $A_i$ as their top-ranked alternative is the same as the probability that a biased coin showing heads with probability $x_i$ turns up heads at least $k$ times among $2k-1$ throws. This probability is precisely $p_k(x_i)$.
Thus, for each $i=1,\dots,n$ it happens with probability $p_k(x_i)$ that at least $k$ of the $2k-1$ rankings $\sigma_1,\dots,\sigma_{2k-1}$ have alternative $A_i$ as their top-ranked alternative. Furthermore, whenever this happens, alternative $A_i$ is automatically a Condorcet winner.
Hence, for each $i=1,\dots,n$, alternative $A_i$ is a Condorcet winner with probability at least $p_k(x_i)$. As there is always at most one Condorcet winner, the total probability of having a Condorcet winner is therefore at least
\[p_k(x_1)+\dots+p_k(x_n)\geq n\cdot p_k(1/n),\]
as desired.
\end{proof}
It remains to prove the second part of Theorem \ref{thm-minimum-probability}. Again using (\ref{eq-connect-term-to function}), we can restate this remaining part as follows.
\begin{proposition}\label{prop-minimum-probability-2}
Let $n\geq 1$, and let $\S=\{A_1,\dots,A_n\}$ be a set of $n$ alternatives. Define $\pi^*$ to be the probability distribution on $P_\S$ given by taking each of the rankings $(A_i, A_{i+1},\dots,A_n,A_1,\dots,A_{i-1})$ for $i=1,\dots,n$ with probability $1/n$, and all other rankings with probability $0$. Then for any $k\geq 1$, the probability that there is a Condorcet winner when $2k-1$ voters independently choose rankings $\sigma_1,\dots,\sigma_{2k-1}\in P_\S$ according to the probability distribution $\pi^*$ is equal to $n\cdot p_k(1/n)$.
\end{proposition}
\begin{proof}
We claim that for any outcomes of the rankings $\sigma_1,\dots,\sigma_{2k-1}$ chosen according to the probability distribution $\pi^*$, alternative $A_j$ is a Condorcet winner if and only if at least $k$ of the $2k-1$ rankings $\sigma_1,\dots,\sigma_{2k-1}$ are equal to $(A_j, A_{j+1},\dots,A_n,A_1,\dots,A_{j-1})$. Indeed, if at least $k$ of the $2k-1$ rankings are $(A_j, A_{j+1},\dots,A_n,A_1,\dots,A_{j-1})$, then alternative $A_j$ is the first-ranked alternative for at least $k$ of the $2k-1$ voters and so $A_j$ must be a Condorcet winner. On the other hand, if some alternative $A_j$ is a Condorcet winner for some outcome of the rankings $\sigma_1,\dots,\sigma_{2k-1}$, then at least $k$ of the $2k-1$ voters must rank alternative $A_j$ higher than alternative $A_{j-1}$ (with indices taken cyclically, so that $A_0=A_n$). However, the only ranking of the form $(A_i, A_{i+1},\dots,A_n,A_1,\dots,A_{i-1})$ for $i=1,\dots,n$ where alternative $A_j$ is ranked higher than alternative $A_{j-1}$ is the ranking $(A_j, A_{j+1},\dots,A_n,A_1,\dots,A_{j-1})$. Hence at least $k$ of the $2k-1$ rankings $\sigma_1,\dots,\sigma_{2k-1}$ must be the ranking $(A_j, A_{j+1},\dots,A_n,A_1,\dots,A_{j-1})$.
We have shown that for $\sigma_1,\dots,\sigma_{2k-1}$ chosen according to the probability distribution $\pi^*$, any alternative $A_j$ is a Condorcet winner if and only if at least $k$ of the $2k-1$ rankings $\sigma_1,\dots,\sigma_{2k-1}$ are equal to $(A_j, A_{j+1},\dots,A_n,A_1,\dots,A_{j-1})$. For each $j=1,\dots,n$, we claim that the probability that this happens is precisely $p_k(1/n)$. Recall that each of the rankings $\sigma_1,\dots,\sigma_{2k-1}$ equals $(A_j, A_{j+1},\dots,A_n,A_1,\dots,A_{j-1})$ with probability $1/n$. Hence the probability that at least $k$ of the $2k-1$ rankings $\sigma_1,\dots,\sigma_{2k-1}$ are $(A_j, A_{j+1},\dots,A_n,A_1,\dots,A_{j-1})$ is the same as the probability of having at least $k$ heads among $2k-1$ throws of a biased coin that shows heads with probability $1/n$. This latter probability is precisely $p_k(1/n)$.
So we have shown that for each $j=1,\dots,n$, alternative $A_j$ is a Condorcet winner with probability exactly $p_k(1/n)$. Since there is always at most one Condorcet winner, we can conclude that the probability of having a Condorcet winner equals $n\cdot p_k(1/n)$.
\end{proof}
\section{Proofs of technical lemmas}
\label{sect-lemmas}
It remains to prove Lemmas \ref{lem-integral-finite}, \ref{lemma-bounds-Q} and \ref{lem-taylor}, as well as Proposition \ref{propo-optimization}. We will prove Lemmas \ref{lemma-bounds-Q} and \ref{lem-taylor} in the first subsection, and we will give the (more complicated) proof of Lemma \ref{lem-integral-finite} in the second subsection. The proof of Proposition \ref{propo-optimization} can be found in the last subsection.
\subsection{Proof of Lemmas \ref{lemma-bounds-Q} and \ref{lem-taylor}}
\begin{proof}[Proof of Lemma \ref{lemma-bounds-Q}]
Recall that we defined $Q(x_1,\dots,x_{2k-1})$ to be the probability that independent uniformly random variables $y_1,\dots,y_{2k-1}\in [0,1]$ satisfy the condition that $x_i\leq y_i$ for at most $k-1$ indices $i\in [2k-1]$. Note that this condition is equivalent to saying that $y_i<x_i$ for at least $k$ indices $i\in [2k-1]$. Hence we can equivalently define $Q(x_1,\dots,x_{2k-1})$ to be the probability that independent uniformly random variables $y_1,\dots,y_{2k-1}\in [0,1]$ satisfy the condition that $y_i<x_i$ for at least $k$ indices $i\in [2k-1]$.
For a subset $I\subseteq [2k-1]$ of size $|I|=k$, let us define $\mathcal{E}_I$ to be the event that we have $y_i<x_i$ for all $i\in I$. Then $Q(x_1,\dots,x_{2k-1})$ is the probability that at least one of the events $\mathcal{E}_I$ for some subset $I\subseteq [2k-1]$ of size $|I|=k$ holds. In other words
\[Q(x_1,\dots,x_{2k-1})=\operatorname{\mathbb{P}}\left[\bigcup_{I}\mathcal{E}_I \right],\]
where the union is taken over all subsets $I\subseteq [2k-1]$ of size $|I|=k$.
For each such subset $I$ we have $\operatorname{\mathbb{P}}[\mathcal{E}_I]=\prod_{i\in I}x_i$, since for each $i\in I$ we have $y_i<x_i$ with probability $x_i$ and this happens independently for all $i\in I$. Thus, by the union bound we obtain
\[Q(x_1,\dots,x_{2k-1})=\operatorname{\mathbb{P}}\left[\bigcup_{I}\mathcal{E}_I \right]\leq \sum_{\substack{I\subseteq [2k-1]\\ |I|=k}}\operatorname{\mathbb{P}}[\mathcal{E}_I]=\sum_{\substack{I\subseteq [2k-1]\\ |I|=k}}\,\prod_{i\in I}x_i=\sigma_k(x_1,\dots,x_{2k-1}).\]
This proves the upper bound in the first part of Lemma \ref{lemma-bounds-Q}.
For the lower bound, note that $\sigma_k(x_1,\dots,x_{2k-1})=\sum_{I\subseteq [2k-1],\, |I|=k}\prod_{i\in I}x_i$ is a sum of $\binom{2k-1}{k}\leq 2^{2k-1}$ summands. Hence at least one of these summands must be at least $2^{-2k+1}\cdot \sigma_k(x_1,\dots,x_{2k-1})$. In other words, there exists a subset $J\subseteq [2k-1]$ of size $|J|=k$ such that $\prod_{i\in J}x_i\geq 2^{-2k+1}\cdot \sigma_k(x_1,\dots,x_{2k-1})$. Hence
\[Q(x_1,\dots,x_{2k-1})=\operatorname{\mathbb{P}}\left[\bigcup_{I}\mathcal{E}_I \right]\geq \operatorname{\mathbb{P}}[\mathcal{E}_J]=\prod_{i\in J}x_i\geq 2^{-2k+1}\cdot \sigma_k(x_1,\dots,x_{2k-1}).\]
This finishes the proof of the first part of the lemma.
For the second part of the lemma, we use that by Bonferroni's inequalities we have
\[Q(x_1,\dots,x_{2k-1})=\operatorname{\mathbb{P}}\left[\bigcup_{I}\mathcal{E}_I \right]\geq \sum_I \operatorname{\mathbb{P}}[\mathcal{E}_I]- \sum_{I, I'} \operatorname{\mathbb{P}}[\mathcal{E}_I\cap \mathcal{E}_{I'}]=\sigma_k(x_1,\dots,x_{2k-1})-\sum_{I, I'} \operatorname{\mathbb{P}}[\mathcal{E}_I\cap \mathcal{E}_{I'}],\]
where the last sum is over all choices of two distinct subsets $I,I'\subseteq [2k-1]$ with $|I|=|I'|=k$. Note that for any choice of two such subsets, the event $\mathcal{E}_I\cap \mathcal{E}_{I'}$ happens if and only if $y_i<x_i$ for all $i\in I\cup I'$ and the probability for this to occur is precisely $\prod_{i\in I\cup I'}x_i$ (here, we again used that the different variables $y_i$ are independent). Thus,
\[Q(x_1,\dots,x_{2k-1})\geq \sigma_k(x_1,\dots,x_{2k-1})-\sum_{I, I'}\prod_{i\in I\cup I'}x_i,\]
where the sum is again over all choices of two distinct subsets $I,I'\subseteq [2k-1]$ with $|I|=|I'|=k$. Note that for any two such subsets we have $|I\cup I'|\geq k+1$. Hence for any two such $I,I'$ we can choose a subset $J(I,I')\subseteq [2k-1]$ of size $|J(I,I')|=k+1$ with $J(I,I')\subseteq I\cup I'$. Then, as $x_i\in [0,1]$ for all $i$, we have
\[Q(x_1,\dots,x_{2k-1})\geq \sigma_k(x_1,\dots,x_{2k-1})-\sum_{I, I'}\prod_{i\in I\cup I'}x_i\geq \sigma_k(x_1,\dots,x_{2k-1})-\sum_{I, I'}\prod_{i\in J(I,I')}x_i.\]
Note that the sum on the right-hand side has fewer than $\binom{2k-1}{k}^2\leq 2^{4k-2}$ summands. Therefore each $J\subseteq [2k-1]$ with $|J|=k+1$ occurs as $J(I,I')$ at most $2^{4k-2}$ times, and we obtain
\[Q(x_1,\dots,x_{2k-1})\geq \sigma_k(x_1,\dots,x_{2k-1})-2^{4k-2}\cdot \sum_{\substack{J\subseteq [2k-1]\\ |J|=k+1}}\,\prod_{i\in J}x_i=\sigma_k(x_1,\dots,x_{2k-1})-2^{4k-2}\cdot \sigma_{k+1}(x_1,\dots,x_{2k-1}).\]
Finally, by Maclaurin's inequality for elementary symmetric polynomials we have
\[\frac{\sigma_{k+1}(x_1,\dots,x_{2k-1})}{\binom{2k-1}{k+1}}\leq \left(\frac{\sigma_{k}(x_1,\dots,x_{2k-1})}{\binom{2k-1}{k}}\right)^{(k+1)/k}\]
and therefore, as $\binom{2k-1}{k+1}\leq \binom{2k-1}{k}$, we obtain $\sigma_{k+1}(x_1,\dots,x_{2k-1})\leq \sigma_{k}(x_1,\dots,x_{2k-1})^{(k+1)/k}$. Thus,
\[Q(x_1,\dots,x_{2k-1})\geq \sigma_k(x_1,\dots,x_{2k-1})-2^{4k-2}\cdot\sigma_{k}(x_1,\dots,x_{2k-1})^{(k+1)/k},\]
as desired.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem-taylor}] For every $y<0$, by Taylor's theorem (with Lagrange remainder term) there is some $\xi$ in the interval $[y,0]$ such that
\[e^{y}=1+y+\frac{e^\xi}{2}\cdot y^2.\]
Using $0\leq e^\xi\leq 1$, we can conclude that
\[1+y\leq e^{y}\leq 1+y+\frac{1}{2}\cdot y^2\]
for all $y<0$. Now, let $0\leq t\leq 1/3$. Setting $y=-t$, we obtain $e^{-t}\geq 1+(-t)=1-t$, establishing the second inequality in Lemma \ref{lem-taylor}. For the first inequality, taking $y=-t-t^2$ gives
\[e^{-t-t^2}\leq 1-t-t^2+\frac{1}{2}\cdot (t+t^2)^2=1-t-\frac{1}{2}\cdot t^2+t^3+\frac{1}{2}\cdot t^4\le 1-t-\frac{1}{2}\cdot t^2+\frac{3}{2}\cdot t^3\leq 1-t,\]
as desired.
\end{proof}
\subsection{Proof of Lemma \ref{lem-integral-finite}}
We will use the following easy lemma in the proof of Lemma \ref{lem-integral-finite}.
\begin{lemma}\label{lemma-preparation-integral-finite}
For any positive integers $\ell<m$, and any $a\geq 1$, we have
\begin{multline*}
\int_0^\infty \dots \int_0^\infty \exp(-\sigma_\ell(x_1,\dots,x_{m}))\diff x_1 \dots \diff x_{m}\leq\\
\int_0^a \dots \int_0^a \exp(-\sigma_\ell(x_1,\dots,x_{m}))\diff x_1 \dots \diff x_{m}+m\cdot \int_a^\infty \int_0^\infty \dots \int_0^\infty \exp(-\sigma_\ell(x_1,\dots,x_{m}))\diff x_1 \dots \diff x_{m}
\end{multline*}
\end{lemma}
\begin{proof}
For any $a\geq 1$, and any $i=1,\dots, m$, let us define the domain
\[D_i^{(a)}=\{(x_1,\dots,x_{m})\in [0,\infty)^{m} \mid x_i\geq a\}.\]
It is not hard to see that $[0,\infty)^{m}$ is covered by the union of $[0,a]^{m}$ and the sets $D_i^{(a)}$ for $i=1,\dots,m$. Thus, for any $a\geq 1$ we can conclude
\begin{align*}
&\int_0^\infty \dots \int_0^\infty \exp(-\sigma_\ell(x_1,\dots,x_{m}))\diff x_1 \dots \diff x_{m}\\
&\quad \leq \int_0^a \dots \int_0^a \exp(-\sigma_\ell(x_1,\dots,x_{m}))\diff x_1 \dots \diff x_{m}+\sum_{i=1}^{m} \int_{D_i^{(a)}} \exp(-\sigma_\ell(x_1,\dots,x_{m})) \diff^{m} (x_1,\dots,x_{m})\\
&\quad= \int_0^a \dots \int_0^a \exp(-\sigma_\ell(x_1,\dots,x_{m}))\diff x_1 \dots \diff x_{m}+m\cdot \int_{D_m^{(a)}} \exp(-\sigma_\ell(x_1,\dots,x_{m})) \diff^{m} (x_1,\dots,x_{m})\\
&\quad=\int_0^a \dots \int_0^a \exp(-\sigma_\ell(x_1,\dots,x_{m}))\diff x_1 \dots \diff x_{m}+m\cdot \int_a^\infty \int_0^\infty \dots \int_0^\infty \exp(-\sigma_\ell(x_1,\dots,x_{m}))\diff x_1 \dots \diff x_{m},
\end{align*}
where in the second step we used that by symmetry of the function $\exp(-\sigma_\ell(x_1,\dots,x_m))$ its integral has the same value on each of the domains $D_i^{(a)}$ for $i=1,\dots,m$.
\end{proof}
We can now prove Lemma \ref{lem-integral-finite} by induction on $\ell$.
\begin{proof}[Proof of Lemma \ref{lem-integral-finite}]
First, consider the case $\ell=1$. Then
\begin{multline*}
\int_0^\infty \dots \int_0^\infty \exp(-\sigma_1(x_1,\dots,x_{m}))\diff x_1 \dots \diff x_{m}=\int_0^\infty \dots \int_0^\infty \exp(-x_1-\dots-x_m)\diff x_1 \dots \diff x_{m}\\
=\left(\int_0^\infty e^{-x_1}\diff x_1\right)\dotsm \left(\int_0^\infty e^{-x_m}\diff x_m\right)=\left(\int_0^\infty e^{-x}\diff x\right)^m=1^m=1\leq (m!)^2,
\end{multline*}
so Lemma \ref{lem-integral-finite} holds for $\ell=1$ (note that the second part of the lemma statement is only for the case of $\ell\geq 2$).
Now, let us assume that $\ell\geq 2$ and that we already proved Lemma \ref{lem-integral-finite} for $\ell-1$. We claim that in order to prove Lemma \ref{lem-integral-finite} for $\ell$, it suffices to show that
\begin{equation}\label{eq-sufficient-induction-integral}
\int_a^\infty \int_0^\infty \dots \int_0^\infty \exp(-\sigma_\ell(x_1,\dots,x_{m}))\diff x_1 \dots \diff x_{m}\leq (m-1)\cdot ((m-1)!)^2 \cdot a^{-(m-\ell)/(\ell-1)}
\end{equation}
for any integer $m>\ell$ and any $a\geq 1$. Indeed, the inequality in the second part of Lemma \ref{lem-integral-finite} follows directly by combining (\ref{eq-sufficient-induction-integral}) with Lemma \ref{lemma-preparation-integral-finite}. Furthermore, combining (\ref{eq-sufficient-induction-integral}) with Lemma \ref{lemma-preparation-integral-finite} in the special case of $a=1$ gives
\begin{align*}
&\int_0^\infty \dots \int_0^\infty \exp(-\sigma_\ell(x_1,\dots,x_{m}))\diff x_1 \dots \diff x_{m}\\
&\quad\leq \int_0^1 \dots \int_0^1 \exp(-\sigma_\ell(x_1,\dots,x_{m}))\diff x_1 \dots \diff x_{m}+m\cdot \int_1^\infty \int_0^\infty \dots \int_0^\infty \exp(-\sigma_\ell(x_1,\dots,x_{m}))\diff x_1 \dots \diff x_{m}\\
&\quad\leq \int_0^1 \dots \int_0^1 1\diff x_1 \dots \diff x_{m}+m\cdot (m-1)\cdot ((m-1)!)^2 \cdot 1^{-(m-\ell)/(\ell-1)}\\
&\quad =1+m\cdot (m-1)\cdot ((m-1)!)^2\leq m\cdot ((m-1)!)^2+m\cdot (m-1)\cdot ((m-1)!)^2= (m!)^2,
\end{align*}
which proves the first part of Lemma \ref{lem-integral-finite} for $\ell$. So it only remains to show (\ref{eq-sufficient-induction-integral}). Note that for any non-negative $x_1,\dots,x_m$ we have $\sigma_\ell(x_1,\dots,x_{m})\geq\sigma_{\ell-1}(x_1,\dots,x_{m-1})\cdot x_m$ (indeed, the right-hand side consists of precisely those terms of $\sigma_\ell(x_1,\dots,x_{m})$ that contain $x_m$). Hence
\begin{align*}
&\int_a^\infty \int_0^\infty \dots \int_0^\infty \exp(-\sigma_\ell(x_1,\dots,x_{m}))\diff x_1 \dots \diff x_{m}\\
&\quad=\int_a^\infty \left(\int_0^\infty \dots \int_0^\infty \exp(-\sigma_\ell(x_1,\dots,x_{m}))\diff x_1 \dots \diff x_{m-1}\right)\diff x_{m}\\
&\quad\leq \int_a^\infty \left(\int_0^\infty \dots \int_0^\infty \exp(-\sigma_{\ell-1}(x_1,\dots,x_{m-1})\cdot x_m)\diff x_1 \dots \diff x_{m-1}\right)\diff x_{m}\\
&\quad= \int_a^\infty \left(x_m^{-(m-1)/(\ell-1)}\cdot \int_0^\infty \dots \int_0^\infty \exp(-\sigma_{\ell-1}(z_1,\dots,z_{m-1}))\diff z_1 \dots \diff z_{m-1}\right)\diff x_{m}\\
&\quad\leq \int_a^\infty x_m^{-\frac{m-1}{\ell-1}}\cdot ((m-1)!)^2\diff x_{m}=((m-1)!)^2\cdot \frac{\ell-1}{m-\ell}\cdot a^{-\frac{m-\ell}{\ell-1}} \leq (\ell-1)\cdot ((m-1)!)^2 \cdot a^{-\frac{m-\ell}{\ell-1}},
\end{align*}
where in the third step we considered the substitution $z_i=x_m^{1/(\ell-1)}x_i$ for $i=1,\dots,m-1$ (and used that $\sigma_{\ell-1}(x_1,\dots,x_{m-1})$ is a homogeneous polynomial of degree $\ell-1$), and in the fourth step we used the induction hypothesis for $\ell-1$ (noting that $m-1>\ell-1$). Since $\ell<m$, this in particular proves (\ref{eq-sufficient-induction-integral}).
\end{proof}
\subsection{Proof of Proposition \ref{propo-optimization}}
\label{subsect-propo-optimization}
We start by proving some lemmas about properties of the function $p_k(x)$, which will be used in our proof of Proposition \ref{propo-optimization}.
\begin{lemma}\label{lemma-function-anti-symmetry}
For every $k\geq 1$ and $x\in [0,1]$ we have $p_k(x)+p_k(1-x)=1$.
\end{lemma}
\begin{proof}
We have
\begin{align*}
p_k(x)+p_k(1-x)&=\sum_{\ell=0}^{k-1}\binom{2k-1}{\ell}\cdot x^{2k-1-\ell}\cdot (1-x)^{\ell}+\sum_{\ell=0}^{k-1}\binom{2k-1}{\ell}\cdot (1-x)^{2k-1-\ell}\cdot x^{\ell}\\
&=\sum_{\ell=0}^{k-1}\binom{2k-1}{\ell}\cdot x^{2k-1-\ell}\cdot (1-x)^{\ell}+\sum_{\ell=k}^{2k-1}\binom{2k-1}{2k-1-\ell}\cdot (1-x)^{\ell}\cdot x^{2k-1-\ell}\\
&=\sum_{\ell=0}^{2k-1}\binom{2k-1}{\ell}\cdot x^{2k-1-\ell}\cdot (1-x)^{\ell}\\
&=\left(x+(1-x)\right)^{2k-1}=1,
\end{align*}
where in the second-last step we used the binomial theorem.
\end{proof}
\begin{lemma}\label{lemma-function-derivative}
For $k\geq 1$ and $x\in [0,1]$, the derivative of the function $p_k(x)$ is
\[p_k'(x)=\frac{(2k-1)!}{(k-1)!^2}\cdot x^{k-1}(1-x)^{k-1}.\]
\end{lemma}
\begin{proof}
We have
\begin{align*}
p_k'(x)&=\sum_{\ell=1}^{k-1}\binom{2k-1}{\ell}\cdot \left((2k-1-\ell)x^{2k-2-\ell}(1-x)^{\ell}-\ell x^{2k-1-\ell}(1-x)^{\ell-1}\right)+\binom{2k-1}{0}\cdot (2k-1)x^{2k-2}\\
&=\sum_{\ell=0}^{k-1}\binom{2k-1}{\ell}\cdot (2k-1-\ell)x^{2k-2-\ell}(1-x)^{\ell}-\sum_{\ell=0}^{k-2}\binom{2k-1}{\ell+1}\cdot (\ell+1)x^{2k-2-\ell}(1-x)^{\ell}\\
&=\sum_{\ell=0}^{k-1}\left(\binom{2k-1}{\ell}(2k-1-\ell)-\binom{2k-1}{\ell+1}(\ell+1)\right)x^{2k-2-\ell}(1-x)^{\ell}+\binom{2k-1}{k-1}\cdot kx^{k-1}(1-x)^{k-1}\\
&=\frac{(2k-1)!}{(k-1)!^2}\cdot x^{k-1}(1-x)^{k-1},
\end{align*}
where in the second-last step we used that
\[\binom{2k-1}{\ell}(2k-1-\ell)-\binom{2k-1}{\ell+1}(\ell+1)=\frac{(2k-1)!}{\ell!\cdot (2k-2-\ell)!}-\frac{(2k-1)!}{\ell!\cdot (2k-2-\ell)!}=0\]
for all $\ell=0,\dots,k-1$.
\end{proof}
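For any concrete value of $k$, the formula for $p_k'(x)$ can also be verified symbolically; the following one-off check (ours, purely supplementary) uses sympy with $k=3$.
\begin{verbatim}
# Symbolic spot check (ours) of the derivative formula, for k = 3, using sympy.
import sympy as sp

x = sp.symbols('x')
k = 3
p = sum(sp.binomial(2 * k - 1, l) * x ** (2 * k - 1 - l) * (1 - x) ** l for l in range(k))
claimed = sp.factorial(2 * k - 1) / sp.factorial(k - 1) ** 2 * x ** (k - 1) * (1 - x) ** (k - 1)
assert sp.expand(sp.diff(p, x) - claimed) == 0
\end{verbatim}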
\begin{lemma}\label{lemma-function-convex}
For every $k\geq 1$, the function $p_k(x)$ is convex on the interval $[0,1/2]$.
\end{lemma}
\begin{proof}
Note that the function $x(1-x)=x-x^2=1/4-((1/2)-x)^2$ is non-negative and monotonically increasing on the interval $[0,1/2]$. Hence the function $x^{k-1}(1-x)^{k-1}$ is monotonically non-decreasing on $[0,1/2]$. By Lemma \ref{lemma-function-derivative}, this means that the derivative $p_k'(x)$ is monotonically non-decreasing on the interval $[0,1/2]$. Hence $p_k(x)$ is convex on this interval.
\end{proof}
\begin{lemma}\label{lemma-less-than-1}
For every $k\geq 1$ and every integer $n\geq 1$, we have $n\cdot p_k(1/n)\leq 1$.
\end{lemma}
\begin{proof}
For $n=1$, we have $1\cdot p_k(1/1)=p_k(1)= \binom{2k-1}{0}=1$, so the desired inequality is true.
For $n\geq 2$, note that $1/n\in [0,1/2]$. By Lemma \ref{lemma-function-convex} the function $p_k(x)$ is convex on the interval $[0,1/2]$ and by applying Jensen's inequality we obtain
\[p_k(1/n)=p_k\left(\frac{2}{n}\cdot \frac{1}{2}+\left(1-\frac{2}{n}\right)\cdot 0\right)\leq \frac{2}{n}\cdot p_k(1/2)+\left(1-\frac{2}{n}\right)\cdot p_k(0)=\frac{2}{n}\cdot \frac{1}{2}+\left(1-\frac{2}{n}\right)\cdot 0=\frac{1}{n}.\]
Here, we used that $p_k(1/2)=1/2$ by Lemma \ref{lemma-function-anti-symmetry} applied to $x=1/2$ and that $p_k(0)=0$. Hence $n\cdot p_k(1/n)\leq 1$, as desired.
\end{proof}
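As a quick numerical illustration (not part of the argument), the following Python snippet checks Lemmas \ref{lemma-function-anti-symmetry}, \ref{lemma-function-derivative} and \ref{lemma-less-than-1} for a few small values of $k$; the helper functions and tolerances are our own.
\begin{verbatim}
from math import comb, factorial

def p(k, x):
    # p_k(x) = sum_{l=0}^{k-1} C(2k-1,l) x^(2k-1-l) (1-x)^l
    return sum(comb(2*k - 1, l) * x**(2*k - 1 - l) * (1 - x)**l
               for l in range(k))

def p_prime(k, x):
    # closed form from the Lemma: (2k-1)!/((k-1)!)^2 * (x(1-x))^(k-1)
    return factorial(2*k - 1) / factorial(k - 1)**2 * (x * (1 - x))**(k - 1)

for k in (1, 2, 3, 5):
    for x in (0.1, 0.25, 0.5, 0.9):
        assert abs(p(k, x) + p(k, 1 - x) - 1) < 1e-12      # anti-symmetry
        h = 1e-6                                           # finite-difference check
        assert abs((p(k, x + h) - p(k, x - h)) / (2*h) - p_prime(k, x)) < 1e-4
    for n in range(1, 20):
        assert n * p(k, 1.0 / n) <= 1 + 1e-12              # n * p_k(1/n) <= 1
print("all checks passed")
\end{verbatim}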
Let us now prove Proposition \ref{propo-optimization}.
\begin{proof}[Proof of Proposition \ref{propo-optimization}]
First, let us consider the case $k=1$. Then $p_k(x)=x$ for all $x\in [0,1]$, and so for any real numbers $x_1,\dots,x_n\in [0,1]$ with $x_1+\dots+x_n=1$ we clearly have $p_k(x_1)+\dots+p_k(x_n) =x_1+\dots+x_n=1=n\cdot p_k(1/n)$. So let us from now on assume that $k\geq 2$.
Recall that we need to show that $p_k(x_1)+\dots+p_k(x_n)\geq n\cdot p_k(1/n)$ for any real numbers $x_1,\dots,x_n\in [0,1]$ with $x_1+\dots+x_n=1$. We may thus assume that $x_1,\dots,x_n\in [0,1]$ are chosen to minimize $p_k(x_1)+\dots+p_k(x_n)$ under the constraint $x_1+\dots+x_n=1$.
First, consider the case that we have $x_1,\dots,x_n\in [0,1/2]$. Then, as the function $p_k(x)$ is convex on $[0,1/2]$ by Lemma \ref{lemma-function-convex}, Jensen's inequality implies
\[p_k(x_1)+\dots+p_k(x_n)\geq n\cdot p_k((x_1+\dots+x_n)/n)=n\cdot p_k(1/n),\]
as desired.
Next, let us consider the case that we have $x_i=1$ for some $i\in \{1,\dots,n\}$. Then the remaining variables $x_1,\dots,x_{i-1},x_{i+1},\dots,x_n$ must all be zero, and we have
\[p_k(x_1)+\dots+p_k(x_n)=p_k(1)+(n-1)\cdot p_k(0)=1+(n-1)\cdot 0=1\geq n\cdot p_k(1/n),\]
where the last inequality is by Lemma \ref{lemma-less-than-1}.
So it remains to consider the case that we have $x_i\in (1/2,1)$ for some $i\in \{1,\dots,n\}$. Without loss of generality, let us assume that $x_n\in (1/2,1)$. Now, since $x_1+\dots +x_n=1$, at least one of $x_1,\dots,x_{n-1}$ must be positive. Again without loss of generality let us assume that $x_1>0$, then $0<x_1<x_1+x_n\leq 1$. Now, for $t\in [0,x_1+x_n]$ consider the function $g(t)=p_k(t)+p_k(x_2)+\dots+p_k(x_{n-1})+p_k(x_1+x_n-t)$. As $t+x_2+\dots+x_{n-1}+(x_1+x_n-t)=x_1+\dots+x_n=1$, by the choice of $x_1,\dots,x_n$ to minimize $p_k(x_1)+\dots+p_k(x_n)$, we must have
\[g(t)=p_k(t)+p_k(x_2)+\dots+p_k(x_{n-1})+p_k(x_1+x_n-t)\geq p_k(x_1)+\dots+p_k(x_n)=g(x_1)\]
for all $t\in [0,x_1+x_n]$. In other words, the function $g(t)$ has a minimum at $t=x_1$. Hence, using that $0<x_1<x_1+x_n$, the derivative $g'(t)=p_k'(t)-p_k'(x_1+x_n-t)$ of $g(t)$ at $t=x_1$ must satisfy $g'(x_1)=0$. From Lemma \ref{lemma-function-derivative}, we obtain
\[0=g'(x_1)=p_k'(x_1)-p_k'(x_1+x_n-x_1)=p_k'(x_1)-p_k'(x_n)=\frac{(2k-1)!}{(k-1)!^2}\cdot \left(x_1^{k-1}(1-x_1)^{k-1}-x_n^{k-1}(1-x_n)^{k-1}\right).\]
Hence $x_1^{k-1}(1-x_1)^{k-1}=x_n^{k-1}(1-x_n)^{k-1}$ and, as $k\geq 2$, this implies that $x_1(1-x_1)=x_n(1-x_n)$. Therefore
\[(x_1-x_n)\cdot (x_1+x_n-1)=x_n(1-x_n)-x_1(1-x_1)=0,\]
and consequently we must have $x_1=x_n$ or $x_1+x_n=1$. However, as $x_n\in (1/2,1)$, the former equation is impossible (as otherwise $x_1+\dots+x_n\geq x_1+x_n=2x_n>1$). Thus, we can conclude that $x_1+x_n=1$ and therefore $x_2=\dots=x_{n-1}=0$. Now,
\[p_k(x_1)+\dots+p_k(x_n)=p_k(x_1)+p_k(x_n)+(n-2)\cdot p_k(0)=p_k(x_1)+p_k(1-x_1)+(n-2)\cdot 0=1\geq n\cdot p_k(1/n),\]
where in the second-last step we used Lemma \ref{lemma-function-anti-symmetry} and in the last step Lemma \ref{lemma-less-than-1}. This finishes the proof of Proposition \ref{propo-optimization}.
\end{proof}
\section{Introduction}
In sports, as in many other industries and research fields, data
analysis has become an essential ingredient of management. Sports
teams, traditionally run by people with experience playing and/or
coaching, now rely heavily on statistical models to measure player
ability and inform strategy decisions \citep{lewis2004moneyball,
oliver2004basketball}. Over the years, the quantity, scope, and
sophistication of these models has expanded, reflecting new data
sources, methodological developments, and increasing interest in the
field of sports analytics. Despite their inherent promise, new
developments in sports analytics have created a clutter of
metrics. For example, there are at least three different calculations
of the WAR (``Wins Above Replacement'') metric in baseball
\citep{baumer2015openwar}, all of which have the same hypothetical
estimand. In general, any individual making a management, coaching, or
gambling decision has potentially dozens of metrics at his/her
disposal, but finding the right metrics to support a given decision
can be daunting. We seek to ameliorate this problem by proposing a set
of ``meta-metrics'' that describe which metrics provide the most
unique and reliable information for decision-makers. Our methods are
simple to implement and applicable to any sport so they should be of
broad interest to the sports analytics community.
The core idea of our work is that quantifying sources of
variability---and how these sources are related across metrics, players, and time---is
essential for understanding how sports metrics can be used. In this
paper, we consider three different sources of variation, which we
classify differently depending on the use-case. These are 1) intrinsic
player skill, 2) context, e.g. influence of teammates, and 3) chance, i.e. sampling
variability. Each of these sources can vary across seasons and
between players. We consider each player metric to be composed of a
combination of these sources of variation (Figure \ref{fig:cartoon}),
and in this paper we discuss several diagnostics that can be used to
assess how well certain metrics are able to measure, control for, and
average across these sources of variation, depending on what is
required by the decision-maker.
The primary purpose of constructing our meta-metrics is to categorize the
sources of variation in the data as \emph{signal} and \emph{noise}. The
signal corresponds to variation that is the key input into a decision
process, e.g., a player's ability to operate in a given system,
whereas the \emph{noise} is variation that we choose not to explain
either because of complexity or lack of information (e.g., complex
team interactions or minuscule variations in a player's release
between shots). When relevant we condition on
observed contextual information (e.g. player position) to create more reliable
and interpretable signals.
For a metric to be useful for a particular decision, its treatment of
variation needs to match up with the decision that is being made. For
example, consider two distinct tasks in which metrics are often
deployed -- attribution, where we wish to credit a portion of a team's
success to a given player for, e.g., year-end awards, and acquisition,
where we wish to assess whether a player should be added to or
retained on a team. The classification of signal and noise in these
decision tasks is very different. For attribution, we do not care
whether a player can repeat their performance in another season (or
arguably even how much of their performance was due to chance),
whereas repeatability is a central question in player acquisition.
That is, chance and team context are still relevant signals when
making an attribution decision, but are sources of noise for an
acquisition decision.
While we can isolate some player-wise, season-wise, and team-wise
variation by subsetting the data, all measurements that we take are
confounded with chance. Further ``skills'' are abstract concepts
that are often collapsed together. With this in mind, we define three
meta-metrics that can be used to answer the following questions of
player performance metrics:
\begin{itemize}
\item \textbf{Discrimination}: Does the metric reliably differentiate between players?
\item \textbf{Stability}: Does the metric measure a quantity which is stable over time?
\item \textbf{Independence}: Does the metric provide new information?
\end{itemize}
Our discrimination meta-metric quantifies how useful a metric is for
distinguishing between players within a given season, whereas our
stability meta-metric measures how much a metric varies season to
season due to changes in context and player skill after removing
chance variation. The independence meta-metric quantifies how much
information in one metric is already captured by a set of other
metrics. Our meta-metrics are based on ideas which have a long
history in statistics (e.g., analysis of variance) and psychometrics
(e.g., Cronbach's alpha) \citep{fisher1925, cronbach1951coefficient,
kuder1937theory} but have not received widespread treatment in
sports. The limited work quantifying the reliability of metrics in
sports mostly appears in blogs \citep{hockeyStabilize, threeStabilize,
mvpchance} and our hope is to formalize and generalize some of the
ideas discussed in these these articles. We start, in Section
\ref{sec:methods} by motivating and defining three meta-metrics and
discuss how to estimate them in Section \ref{sec:inference}. Section
\ref{sec:results} demonstrates the application of these meta-metrics
to player performance in National Basketball Association (NBA) and
National Hockey League (NHL). Lastly, in Section \ref{sec:adjust} we
discuss building new metrics and adjusting existing ones in order to
improve their meta-analytic properties.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.65\textwidth]{ArxivFigs/cartoon}
\caption{Sources of variation in end-of-season metrics. Player
metrics confound different aspects of intrinsic player style or
ability, team effects and chance (e.g. sampling variability). We
visualize metrics amongst multiple players across seasons in a
3-dimensional array (right). Here, we illustrate two hypothetical metrics, one in
red and another purple. Variation in the color's tone on the front
face corresponds to observed between-player variability in a single season
and variation on the right face corresponds to variability in the
metric for one player over time. Team-wise and chance variation also play a
role in determining the variation in color tone.}
\label{fig:cartoon}
\end{figure}
\section{Defining Meta-metrics}
\label{sec:methods}
Throughout this paper, we write the 3-dimensional array of players,
seasons and metrics as $X$, with $X_{spm}$ the value of metric $m$
for player $p$ from season $s$ (see Figure \ref{fig:cartoon}). Our
meta-metrics are all R-squared style statistics and can be understood
as functions of the (co)variances along the three dimensions of $X$.
As a useful example, consider a model for a metric $m$ that varies over time $s$
and between players $p$ is a linear mixed effects model:
\begin{align}
\label{eqn:mixed}
X_{spm} &= \mu_m + Z_{sm} + Z_{pm} + Z_{spm} + \epsilon_{spm},
\end{align}
where
\begin{align*}
Z_{sm} & \sim [0, \sigtx{SM}] \\
Z_{pm} & \sim [0, \sigtx{PM}] \\
Z_{spm} & \sim [0, \sigtx{SPM}] \\
\epsilon_{spm} & \sim [0, \tautx{M}],
\end{align*}
and $[\mu, \sigma^2]$ represents a distribution with mean $\mu$ and variance $\sigma^2$.
The terms $Z_{*}$ can be thought of as random effects, while
$\epsilon_{spm}$ represents individual player-season variation in a metric---for instance, binomial variation in made shot percentage given a finite sample size. $Z_{spm}$ and
$\epsilon_{spm}$ are distinguished by assuming that for an infinitely long
season, a player's metric would have no such variability, thus
$\epsilon_{spm} = 0$. Note that we can recognize
$\sigtx{PM} + \sigtx{SPM} + \tautx{M}$ as the within-season,
between-player variance; $\sigtx{SM} + \sigtx{SPM} + \tautx{M}$ as the
within-player, between-season variance; and of course,
$\sigtx{SM} + \sigtx{PM} + \sigtx{SPM} + \tautx{M}$ as the total
(between player-season) variance. Both the discrimination and stability meta-metrics defined in this section can be expressed as ratios involving these quantities, along with the sampling variance $\tautx{M}$.
The linear mixed effects model \eqref{eqn:mixed} may be a reasonable choice for some metrics and, due to its simplicity, provides a convenient example to illustrate our meta-metrics. However, an exchangeable, additive model is not appropriate for many of the metrics
we consider. A major practical challenge in our analysis is that all
of the metrics have unique distributions with distinct support---percentages are constrained to the unit interval, while many per game
or per season statistics are discrete and strictly positive. Other
advanced metrics like ``plus-minus'' or ``value over replacement''
(VORP) in basketball are continuous real-valued metrics which can be
negative or positive.
To define meta-metrics with full generality, consider the random variable $X$, which is a single entry $X_{spm}$ chosen randomly from $X$. Randomness in $X$ thus occurs both from sampling the indexes $S, P$, and $M$ of $X$, as well as intrinsic variability in $X_{spm}$ due to finite season lengths. We will then use the notational shorthand
\begin{align*}
E_{spm}[X] &= E[X | S = s, P=p, M = m] \\
V_{spm}[X] &= Var[X | S=s, P=p, M=m]
\end{align*}
and analogously for $E_{sm}[X], V_{sm}[X], E_{m}[X]$, etc. For example, $E_{sm}[V_{spm}[X]]$ is the average over all players of the intrinsic variability in $X_{spm}$ for metric $m$ during season $s$, or $\sum_p Var[X_{spm}] / N_{sm}$, where $N_{sm}$ is the number of entries of $X_{s\cdot m}$.
\subsection{Discrimination}
\label{sec:disc}
For a metric measuring player ability to be applicable, it must be a
useful tool for discriminating between different players. Implicit in
this is that most of the variability between players reflects true
variation in player ability and not chance variation or noise from small sample sizes. As a useful
baseline for discrimination, we compare the average intrinsic
variability of a metric to the total between player variation in this metric. A
similar approach which partially inspired this metric was used to
compare how reliably one could differentiate MVP candidates in Major
League Baseball \citep{mvpchance}.
To characterize the discriminative power of a metric, we need to
quantify the fraction of total between player variance that is due to
chance and the fraction that is due to signal. By the law of total variance, this can be decomposed as
$$ V_{sm}[X] = E_{sm}[V_{spm}[X]] + V_{sm}[E_{spm}[X]].$$
Here, $V_{sm}[X]$ corresponds to the total variation in
metric $m$ between players in season $s$, whereas $E_{sm}[V_{spm}[X]]$ is the
average (across players) sampling variability for metric $m$ in season $s$. With this decomposition in mind, we
define the discriminative power of a metric $m$ in season $s$ as
\begin{equation}
\label{eqn:disc}
\text{(Discrimination)} \hspace{1cm} \mathcal{D}_{sm} = 1- \frac{E_{sm}[V_{spm}[X]]}{V_{sm}[X]}.
\end{equation}
Intuitively, this describes the fraction (between 0 and 1) of between-player variance in
$m$ (in season $s$) due to true differences in player ability. Discrimination meta-metrics for different seasons can be combined as $\mathcal{D}_m = E_m[\mathcal{D}_{sm}]$.
It is helpful to understand the discrimination estimand for the linear mixed effects
model defined in Equation \ref{eqn:mixed}. When this model
holds, $E_{sm}[V_{spm}[X]] = \tautx{M}$, and
$V_{sm}[X] = \sigtx{PM} + \sigtx{SPM} + \tautx{M}$, the between-player variance (equal for all seasons $s$). Thus, the discrimination meta-metric under the linear mixed effects model is simply
\begin{align}
\mathcal{D}_m &= 1 - \frac{\tautx{M}}{\sigtx{PM} + \sigtx{SPM} + \tautx{M}} \label{eqn:disc_mixed} \\
& = \frac{\sigtx{PM} + \sigtx{SPM}}{\sigtx{PM} + \sigtx{SPM} + \tautx{M}}. \nonumber
\end{align}
\subsection{Stability}
\label{sec:stab}
In addition to discrimination, which is a meta-metric that describes
variation within a single season, it is important to understand how
much an individual player's metric varies from season to season. The notion
of stability is particularly important in sports management
when making decisions about future acquisitions. For a stable
metric, we have more confidence that this year's performance will be
predictive of next year's performance. A metric can be unstable if it
is particularly context dependent (e.g. the player's performance
varies significantly depending on who their teammates are) or if
a players' intrinsic skill set tends to change year to year (e.g. through
offseason practice or injury).
Consequently, we define stability as a meta-metric that describes how much we
expect a single player metric to vary over time after removing chance
variability. This metric specifically targets the sensitivity of
a metric to change in context or intrinsic player skill over time.
Mathematically, we define \emph{stability} as:
%
\begin{equation}
\label{eqn:stab}
\text{(Stability)} \hspace{1cm} \mathcal{S}_m = 1 - \frac{E_m[V_{pm}[X] - V_{spm}[X]]}{V_m[X] - E_m[V_{spm}[X]]},
\end{equation}
with $0 \leq \mathcal{S}_m \leq 1$ (see Appendix for proof). Here,
$V_{pm}[X]$ is the between-season variability in metric $m$ for player
$p$; thus, the numerator in \eqref{eqn:stab} averages the
between-season variability in metric $m$, minus sampling variance,
over all players. The denominator is the total variation for metric
$m$ minus sampling variance. Again, this metric can be easily
understood under the assumption of an exchangeable linear model
(Equation \ref{eqn:mixed}):
\begin{align}
\mathcal{S}_m &= 1 - \frac{\sigtx{SM} + \sigtx{SPM} + \tautx{M} - \tautx{M}}{\sigtx{PM} + \sigtx{SM} + \sigtx{SPM} + \tautx{M} - \tautx{M}} \label{eqn:stab_mixed} \\
& = \frac{\sigtx{PM}}{\sigtx{PM} + \sigtx{SM} + \sigtx{SPM}}. \nonumber
\end{align}
This estimand reflects the fraction of total variance (with sampling
variability removed) that is due to within-player changes over time.
If the within player variance is as large as the total variance, then
$\mathcal{S}_m = 0$ whereas if a metric is constant over time, then
$\mathcal{S}_m=1$.
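To make these estimands concrete, the following simulation sketch (our own illustration, with arbitrary variance components, and using the true sampling variance $\tautx{M}$ in place of a bootstrap estimate) draws data from the exchangeable model of Equation \ref{eqn:mixed} and checks that plug-in variance ratios recover Equations \ref{eqn:disc_mixed} and \ref{eqn:stab_mixed}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
S, P = 20, 500                                      # seasons, players (arbitrary)
sig_SM, sig_PM, sig_SPM, tau = 0.5, 2.0, 1.0, 1.5   # assumed variance components

Z_s = rng.normal(0, np.sqrt(sig_SM), size=(S, 1))
Z_p = rng.normal(0, np.sqrt(sig_PM), size=(1, P))
Z_sp = rng.normal(0, np.sqrt(sig_SPM), size=(S, P))
eps = rng.normal(0, np.sqrt(tau), size=(S, P))
X = Z_s + Z_p + Z_sp + eps                          # metric values X[s, p]

# discrimination: 1 - sampling variance / between-player variance within a season
D_hat = 1 - tau / np.mean([X[s].var() for s in range(S)])
D_true = (sig_PM + sig_SPM) / (sig_PM + sig_SPM + tau)

# stability: 1 - (within-player variance - tau) / (total variance - tau)
S_hat = 1 - (np.mean(X.var(axis=0)) - tau) / (X.var() - tau)
S_true = sig_PM / (sig_PM + sig_SM + sig_SPM)

print(round(D_hat, 3), round(D_true, 3))            # approximately equal
print(round(S_hat, 3), round(S_true, 3))            # approximately equal
\end{verbatim}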
\subsection{Independence}
\label{sec:ind}
When multiple metrics measure similar aspects of a player's ability,
we should not treat these metrics as independent pieces of information. This is especially
important for decision makers in sports management who use these
metrics to inform decisions. Accurate assessments of
player ability can only be achieved by appropriately synthesizing
the available information. As such, we present a method for quantifying the
dependencies between metrics that can help decision makers
make sense of the growing number of data summaries.
For some advanced metrics we know their exact formula in terms of
basic box score statistics, but this is not always the case. For
instance, it is much more challenging to assess the relationships
between new and complex model based NBA metrics like adjusted plus
minus \citep{adjustedpm}, EPV-Added \citep{cervone2014multiresolution}
and counterpoints \citep{franks2015counterpoints}, which are
model-based metrics that incorporate both game-log and player tracking
data. Most importantly, as illustrated in Figure \ref{fig:cartoon},
even basic box score statistics that are not functionally related will
be correlated if they measure similar aspects of intrinsic player
skill (e.g., blocks and rebounds in basketball are highly correlated
due to their association with height).
As such, we present a general approach for expressing
dependencies among an arbitrary set of metrics measuring multiple
players' styles and abilities across multiple seasons. Specifically, we
propose a Gaussian copula model in which the dependencies between metrics
are expressed with a latent multivariate normal distribution. Assuming we have $M$ metrics of interest, let $Z_{sp}$ be an $M$-vector of metrics for player $p$ during season $s$, and
\begin{align}
\label{eqn:copula}
Z_{sp} & \stackrel{iid}{\sim} \text{MVN}(0, C)\\
X_{spm} &= F_m^{-1}[\Phi(Z_{spm})],
\end{align}
\noindent where $C$ is a $M \times M$ correlation matrix, and
$F_m^{-1}$ is the inverse of the CDF for metric $m$. We define
independence score of a metric $m$ given a condition set of other metrics,
$\mathcal{M}$, as
\begin{equation}
\label{eqn:ind}
\mathcal{I}_{m\mathcal{M}} = \frac{Var \left[ Z_{spm} \mid \{Z_{spq} : q \in \mathcal{M} \} \right]}{Var[Z_{spm}]} = C_{m,m} - C_{m,\mathcal{M}}C_{\mathcal{M},\mathcal{M}}^{-1}C_{\mathcal{M},m}.
\end{equation}
For the latent variables $Z$, this corresponds to one minus the
R-squared for the regression of $Z_m$ on the latent variables $Z_q$ with $q$ in
$\mathcal{M}$. Metrics for which $\mathcal{I}_{m\mathcal{M}}$ is
small (e.g. for which the R-squared is large) provide little new
information relative to the information in the set of metrics
$\mathcal{M}$. In contrast, when $\mathcal{I}_{m\mathcal{M}}$ is
large, the metric is nearly independent from the information contained
in $\mathcal{M}$. Note that
$\mathcal{I}_{m\mathcal{M}} = 1$ implies that metric $m$ is
independent from all metrics in $\mathcal{M}$.
We also run a principal component analysis (PCA) on $C$ to evaluate
the amount of independent information in a set of metrics. If
$U\Lambda U^T$ is the eigendecomposition of $C$, with
$\Lambda = \text{diag}(\lambda_1, ... \lambda_M)$ the diagonal matrix
of eigenvalues, then we can interpret
$\mathcal{F}_k = \frac{\sum_1^k \lambda_i}{\sum_1^M \lambda_i}$ as the
fraction of total variance explained by the first $k$ principal
components \citep{Mardia1980}. When $\mathcal{F}_k$ is large for
small $k$ then there is significant redundancy in the set of metrics,
and thus dimension reduction is possible.
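As an illustration of Equation \ref{eqn:ind} and of $\mathcal{F}_k$, the sketch below (our own, applied to a made-up $4\times 4$ latent correlation matrix) computes an independence score via the Schur complement and the cumulative fraction of variance explained from the eigenvalues of $C$.
\begin{verbatim}
import numpy as np

def independence_score(C, m, cond):
    """I_{m,M}: conditional variance of latent metric m given the set cond,
    i.e. the Schur complement C_mm - C_{m,M} C_{M,M}^{-1} C_{M,m}."""
    cond = list(cond)
    if not cond:
        return C[m, m]
    C_mM = C[m, cond]
    C_MM = C[np.ix_(cond, cond)]
    return C[m, m] - C_mM @ np.linalg.solve(C_MM, C_mM)

def frac_var_explained(C, k):
    """F_k: fraction of total variance captured by the top-k eigenvalues."""
    lam = np.sort(np.linalg.eigvalsh(C))[::-1]
    return lam[:k].sum() / lam.sum()

# toy 4-metric latent correlation matrix (hypothetical values)
C = np.array([[1.0, 0.8, 0.3, 0.1],
              [0.8, 1.0, 0.2, 0.0],
              [0.3, 0.2, 1.0, 0.5],
              [0.1, 0.0, 0.5, 1.0]])
print(independence_score(C, 0, [1, 2, 3]))  # share of metric 0 not explained by the rest
print(frac_var_explained(C, 2))             # fraction explained by 2 components
\end{verbatim}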
\section{Inference}
\label{sec:inference}
In order to calculate discrimination $\mathcal{D}_m$ and stability
$\mathcal{S}_m$, we need estimates of $V_{spm}[X]$, $V_{sm}[X]$,
$V_{pm}[X]$ and $V_m[X]$. Rather than establish a parametric model
for each metric (e.g. the linear mixed effects model
\eqref{eqn:mixed}), we use nonparametric methods to estimate
reliability. Specifically, to estimate the sampling distribution of
$X$ within each season (e.g., $Var[X_{spm}]$, or equivalently
$V_{spm}[X]$, for all $s$, $p$, $m$), we use the bootstrap
\citep{efron1986bootstrap}. For each team, we resample (with
replacement) every game played in a season and reconstruct
end-of-season metrics for each player. We use the sample variance
of these resampled metrics, $\text{BV}[X_{spm}]$, to estimate the
intrinsic variation in each player-season metric $X_{spm}$. We
estimate $V_{sm}[X]$, $V_{pm}[X]$ and $V_m[X]$ using sample moments.
Thus, assuming $P$ players, our estimator for discrimination is simply
$$\hat{\mathcal{D}}_{sm} = 1 -
\frac{\frac{1}{P}\sum^P_{p=1}\text{BV}[X_{spm}]}{\frac{1}{P}\sum^P_{p=1}
(X_{spm}-\bar{X}_{s\cdot m})^2}$$
\noindent where $\bar{X}_{s\cdot m}$ is the average of metric $m$ over
the players in season $s$. Similarly, the stability estimator for a
metric $m$ is
$$\hat{\mathcal{S}}_m = 1 - \frac{\frac{1}{P}\sum^P_{p=1}\frac{1}{S_p}\sum^{S_p}_{s=1}\left[(X_{spm} -
\bar{X}_{\cdot pm})^2 -
\text{BV}[X_{spm}]\right]}{\frac{1}{P}\sum^P_{p=1}
\frac{1}{S_p}\sum^{S_p}_{s=1}\left[(X_{spm} - \bar{X}_{\cdot\cdot m})^2
- \text{BV}[X_{spm}]\right]}$$
\noindent where $\bar{X}_{\cdot pm}$ is the mean of metric $m$ for
player $p$ over all seasons, $\bar{X}_{\cdot \cdot m}$ is the total
mean over all player-seasons, and $S_p$ is the number of seasons
played by player $p$.
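A minimal implementation sketch of these plug-in estimators is given below. It assumes that, for a single metric, the end-of-season values $X[s,p]$ and the bootstrap variances $\text{BV}[s,p]$ obtained by resampling games have already been computed, and that every player appears in every season (missing player-seasons are ignored for simplicity); the array and function names are our own.
\begin{verbatim}
import numpy as np

def discrimination(X, BV):
    # average over seasons of: 1 - mean bootstrap variance / between-player variance
    D_s = [1 - BV[s].mean() / X[s].var() for s in range(X.shape[0])]
    return float(np.mean(D_s))

def stability(X, BV):
    # 1 - (avg within-player variance - sampling var) / (total variance - sampling var)
    within = np.mean(X.var(axis=0) - BV.mean(axis=0))   # per player, then averaged
    total = np.mean((X - X.mean()) ** 2 - BV)
    return float(1 - within / total)
\end{verbatim}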
All independence meta-metrics are defined as a function of the latent
correlation matrix $C$ from the copula model presented in Equation
\ref{eqn:copula}. To estimate $C$, we use the semi-parametric
rank-likelihood approach developed by \citet{hoff}. This method is
appealing because we eschew the need to directly estimate the marginal
density of the metrics, $F_m$. We fit the model using the R package
\textit{sbgcop} \citep{sbgcop}. Using this software, we can model the
dependencies for both continuous and discrete valued metrics with
missing values.
In Section \ref{sec:results}, we use $\mathcal{I}_{m\mathcal{M}}$ to
generate ``independence curves'' for different metrics as a function
of the number of statistics in the conditioning set, $\mathcal{M}$.
To create these curves, we use a greedy approach: for each metric $m$
we first estimate the independence score $\mathcal I_{m\mathcal{M}}$
(Equation \ref{eqn:ind}) conditional on the full set of available
metrics $\mathcal{M}$, and then iteratively remove metrics that lead
to the largest increase in independence score (See Algorithm
\ref{indAlg}).
\begin{algorithm}
\centering
\baselineskip=12pt
\caption{Create independence curves for metric $m$}\label{indAlg}
\begin{algorithmic}[1]
\State $\text{IC}_m \gets \text{Vector}(|\mathcal{M}|)$
\State $\mathcal{M}^* \gets \mathcal{M}$
\For{$i = |\mathcal{M}| \text{ to } 1$}
\State $\mathcal{I}_{max} \gets 0$
\State $m_{max} \gets \text{NA}$
\For{$\tilde{m} \in\mathcal{M}^*$}
\State $\mathcal{G} \gets \mathcal{M}^* \setminus \{\tilde{m}\}$
\If{$\mathcal{I}_{m\mathcal{G}} > \mathcal{I}_{max}$}
\State $\mathcal{I}_{max} \gets \mathcal{I}_{m\mathcal{G}}$
\State $m_{max} \gets \tilde{m}$
\EndIf
\EndFor
\State $\mathcal{M}^* \gets \mathcal{M}^* \setminus \{m_{max}\}$
\State $\text{IC}_m[i] \gets \mathcal{I}_{m\mathcal{M}^*}$
\EndFor
\State \Return $\text{IC}_m$
\end{algorithmic}
\end{algorithm}
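A close Python analogue of Algorithm \ref{indAlg} (with the indexing slightly simplified, and reusing the Schur-complement form of Equation \ref{eqn:ind}; all names are ours) is sketched below.
\begin{verbatim}
import numpy as np

def ind_score(C, m, cond):
    # I_{m,cond} = C_mm - C_{m,cond} C_{cond,cond}^{-1} C_{cond,m}
    if not cond:
        return C[m, m]
    c = C[m, cond]
    return C[m, m] - c @ np.linalg.solve(C[np.ix_(cond, cond)], c)

def independence_curve(C, m):
    """Greedy curve for metric m: repeatedly drop the conditioning metric whose
    removal increases the independence score the most, recording the score
    against the size of the remaining conditioning set."""
    remaining = [q for q in range(C.shape[0]) if q != m]
    curve = [(len(remaining), ind_score(C, m, remaining))]
    while remaining:
        drop = max(remaining,
                   key=lambda q: ind_score(C, m, [r for r in remaining if r != q]))
        remaining.remove(drop)
        curve.append((len(remaining), ind_score(C, m, remaining)))
    return curve
\end{verbatim}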
\section{Results}
\label{sec:results}
To demonstrate the utility of our meta-metrics, we analyze metrics
from both basketball (NBA) and hockey (NHL), including both
traditional and ``advanced'' (model-derived) metrics. We gathered data on 70 NBA metrics from all
players and seasons from the year 2000 onwards \citep{bballref}. We also
gather 40 NHL metrics recorded from the year 2000 onwards
\citep{nhlref}. Where appropriate, we normalized metrics by minutes
played or possessions played to ameliorate the impact of anomalous events in our data range, such as injuries and work stoppages; this approach sacrifices no generality, since minutes/possessions can also be treated as metrics. In the appendix we provide a glossary of all of the
metrics evaluated in this paper.
\subsection{Analysis of NBA Metrics}
\label{sec:nba}
In Figure \ref{fig:nbaRel} we plot the stability and
discrimination meta-metrics for many of the NBA metrics available on
\url{basketball-reference.com}. For basic box score statistics,
discrimination and stability scores match intuition. Metrics like
rebounds, blocks and assists, which are strong indicators of player
position, are highly discriminative and stable because of the
relatively large between player variance. As another example, free
throw percentage is a relatively non-discriminative statistic
within-season but very stable over time. This makes sense because
free throw shooting requires little athleticism (e.g., does not
change with age or health) and is isolated from larger team strategy and personnel (e.g., teammates do not have an
effect on a player's free throw ability).
Our results also highlight the distinction between pure rate
statistics (e.g., per-game or per-minute metrics) and those that
incorporate total playing time. Metrics based on total minutes played
are highly discriminative but less stable, whereas per-minute or
per-game metrics are less discriminative but more stable. One reason
for this is that injuries affect total minutes or games played in a
season, but generally have less effect on per-game or per-minute
metrics. This is an important observation when comparing the most
reliable metrics since it is more meaningful to compare metrics of a
similar type (rate-based vs total).
WS/48, ORtg, DRtg and BPM metrics are rate-based metrics whereas WS
and VORP based metrics incorporate total minutes played
\citep{bballref}. WS and VORP are more reliable than the rate based
statistics primarily because MP significantly increases their
reliability, \emph{not} because there is stronger signal about player
ability. Rate based metrics are more relevant for estimating player
skill whereas total metrics are more relevant for identifying overall
end of season contributions (e.g. for deciding the MVP). Since these
classes of metrics serve different purposes, in general they should not be
compared directly. Our results show moderately improved stability and discriminative
power of the BPM-based metrics over other rate-based metrics like
WS/48, ORTg and DRtg. Similarly, we can see that for the omnibus
metrics which incorporate total minutes played, VORP is more reliable
in both dimensions than total WS.
Perhaps the most striking result is the unreliability of empirical three point
percentage. It is both the least stable and least discriminative of
the metrics that we evaluate. Amazingly, over 50\% of the variation
in three point percentage between players in a given season is due to chance. This is
likely because differences between shooters' true three point shooting
percentage tend to be very small, and as such, chance variation tends
to be the dominant source of variation. Moreover, contextual
variation like a team's ability to create open shots for a player
affect the stability of three point percentage.
\begin{figure}[!htb]
\centering
\includegraphics[width=.8\textwidth]{ArxivFigs/stabdisc_shrink} \\
\caption{Discrimination and stability score estimates for an ensemble
of metrics and box score statistics in the NBA. Raw three point
percentage is the least discriminative and stable of the metrics we
study; empirical Bayes estimates of three point ability (``3P\%
EB'', Section \ref{sec:adjust}) improve both stability and discrimination . Metrics like
rebounds, blocks and assists are strong indicators of player
position and for this reason are highly discriminative and stable.
Per-minute or per-game statistics are generally more stable but less
discriminative.}
\label{fig:nbaRel}
\end{figure}
Finally, we use independence meta-metrics to explore the dependencies
between available NBA metrics. In Figure \ref{fig:rsq-nba} we plot
the independence curves described in Section \ref{sec:inference}. Of
the metrics that we examine, steals (STL) appear to provide some of
the most unique information. This is evidenced by the fact that
$\mathcal{I}^{STL}_{\mathcal{M}} \approx 0.40$, meaning that only
60\% of the variation in steals across player-seasons is explainable
by the other 69 metrics. Moreover, the independence score estimate
increases quickly as we reduce the size of the conditioning set, which
highlights the relative lack of metrics that measure skills that
correlate with steals. While the independence curves for defensive
metrics are concave, the independence curves for the omnibus metrics
measuring overall skill are roughly linear. Because the omnibus
metrics are typically functions of many of the other metrics, they are
partially correlated with many of the metrics in the conditioning set.
\begin{figure}
\centering
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{ArxivFigs/Independence-1.png} \\
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{ArxivFigs/Independence-2.png} \\
\end{subfigure}
\caption{Independence score estimates as a function of the size of the
conditioning set, for overall skill metrics (left) and defensive
metrics (right). The curves look more linear for the overall skill
metrics, which suggest that they reflect information contained
in nearly all existing metrics. The first principal component from
the five-by-five sub-correlation matrix consisting of the overall
skill metrics, explains 73\% of the variation. Defensive metrics
have independence curves that are more concave. This highlights
the fact that defensive metrics are correlated with a smaller
set of metrics. The first principal component from the five-by-five
sub-correlation matrix consisting of these defensive metrics,
explains only 51\% of the variation and the second explains only
73\%. }
\label{fig:rsq-nba}
\end{figure}
Not surprisingly, there is a significant amount of redundancy across
available metrics. Principal component analysis (PCA) on the full
correlation matrix $C$ suggests that we can explain over 75\% of the
dependencies in the data using only the first 15 out of 65 principal
components, i.e., $\mathcal{F}_{15} \approx 0.75$. Meanwhile, PCA of the sub-matrix
$C_{\mathcal{M}_o,\mathcal{M}_o}$ where
$\mathcal{M}_o = \{\text{WS, VORP, PER, BPM, PTS}\}$ yields
$\mathcal{F}_1 = 0.75$, that is, the first component explains 75\% of
the variation in these five metrics. This means that much of the
information in these 5 metrics can be compressed into a single metric
that reflects the same latent attributes of player skill. In
contrast, for the defensive metrics presented in Figure
\ref{fig:rsq-nba},
$\mathcal{M}_d = \{\text{DBPM, STL, BLK, DWS, DRtg}\}$, PCA indicated
that the first component explains only 51\% of the variation. Adding
a second principal component increases the total variance explained to
73\%. In Figure \ref{fig:varExp-nba} we plot the cumulative variance
explained, $\mathcal{F}_k$ as a function of the number of components
$k$ for all metrics $\mathcal{M}$ and the subsets $\mathcal{M}_o$ and
$\mathcal{M}_d$.
\subsection{Analysis of NHL Metrics}
\label{sec:nhl}
NHL analytics is a much younger field than NBA analytics, and as a
consequence there are fewer available metrics to analyze. In Figure
\ref{fig:hockeyRel} we plot the estimated discrimination and stability scores for
many of the hockey metrics available on \url{hockey-reference.com}.
Again, we find that metrics like hits (HIT), blocks (BLK) and shots
(S) which are strong indicators for player type are the most
discriminative and stable because of the large between-player
variance.
Our results can be used to inform several debates in the NHL analytics community.
For example, our results highlight the low discrimination of
plus-minus (``+/-'') in hockey, which can be explained by the relative
paucity of goals scored per game. For this reason, NHL analysts
typically focus more on shot attempts (including shots on goal, missed
shots and blocked shots). In this context, it is often debated whether it is better to use
Corsi- or Fenwick-based statistics \citep{CorsiVsFenwick}. Fenwick-based statistics
incorporate shots and misses whereas Corsi-based statistics additionally
incorporate blocked shots. Our results indicate that with the
addition of blocks, Corsi metrics (e.g. ``CF\% rel'' and
``CF\%'') are both more reliable and stable than the Fenwick metrics.
In Figure \ref{fig:hockeyRsq} we plot the estimated independence scores as a
function of the number of statistics in the conditional set for five
different metrics. Like steals in the NBA, we found that takeaways
(TK) provide the most unique information relative to the other 39
metrics. Here, $\mathcal{I}^{TK}_{\mathcal{M}} = 0.73$, meaning that
all other metrics together only explain 27\% of the total variance in
takeaways, which is consistent with the dearth of defensive metrics in
the NHL. dZS\% is an example of a metric that is highly correlated
with only one other metric in the set of metrics we study, but
poorly predicted by the others. This metric is almost perfectly
predicted by its counterpart oZS\% and hence
$\mathcal{I}^{dZS}_{\mathcal{M}} \approx 0$ when
$oZS\% \in \mathcal{M}$ and significantly larger otherwise. This is
clear from the large uptick in the independence score of dZS\% after removing
oZS\% from $\mathcal{M}$.
Once again, the analysis of the dependencies among metrics reveals
significant redundancy in information across NHL metrics. We can
explain over 90\% of the variation in the data using only 15 out of 40
principal components, that is $\mathcal{F}_{15} = 0.90$ (Figure
\ref{fig:varExp-nhl}). Figure \ref{fig:nhlClust} illustrates a
hierarchical clustering of these metrics based on these dependencies.
\begin{figure}[!htb]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{ArxivFigs/hockeyReliability}
\caption{}
\label{fig:hockeyRel}
\end{subfigure}
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{ArxivFigs/hockey-Rsquared}
\caption{}
\label{fig:hockeyRsq}
\end{subfigure}
\caption{Left) Discrimination and stability scores for many NHL
metrics. Corsi-based statistics are slightly more reliable than
Fenwick statistics. Plus/minus is non-discriminative in hockey
because of the paucity of goals scored in a typical game. Right) Fraction
of variance explained (R-squared) for each metric by a set of other
metrics in our sample. Only 27\% of the total variance in takeaways (TK) is
explained by all other NHL metrics. }
\label{fig:hockeyMeta}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.8\textwidth]{ArxivFigs/hockeyDendro_color}
\caption{Hierarchical clustering of NHL metrics based on the
correlation matrix, $C$. Clustered metrics have larger absolute
correlations but can be positively or negatively associated. The
metrics that have large loadings on the two different principal
component (Figure \ref{fig:nhlPCmetrics}) are highlighted in red and
blue.}
\label{fig:nhlClust}
\end{figure}
\section{Constructing Novel Metrics}
\label{sec:adjust}
In addition to providing useful benchmarks on the quality of different
metrics, the meta-metrics can motivate the design of new and improved
metrics or be used to justify the superiority of new metrics over
traditional ones. Here we provide two examples in which novel metrics
improve upon existing metrics in at least one of the meta-metrics. In
the first example, we use a hierarchical model to shrink empirical
estimates of three point ability in basketball. We demonstrate that
this model-based estimate is both more stable and discriminative than
the simple percentage metric. In the second example, we propose
a method for creating a set of new metrics that are all mutually
independent.
\subsection{Shrinkage Estimators}
Model-based adjustments of common box score statistics can reduce
sampling variability and thus lead to improvements in discrimination
and stability. In Section \ref{sec:nba}, we showed how three point
percentage was one of the least discriminative and stable metrics in
basketball and thus an improved estimator of three point making
ability is warranted. We define three point ability using the notation
introduced in Section \ref{sec:methods} as $E_{sp(3P\%)}[X]$ , i.e. the expected three
point percentage for player $p$ in season $s$, and propose a
model-based estimate of this quantity that is both more stable and
discriminative than the observed percentage.
For this model, we assume an independent hierarchical Bernoulli model
for the three point ability of each player:
\begin{align}
\nonumber X^{\text{3P\%}}_{sp} &= \frac{z_{sp}}{n_{sp}}\\
\nonumber z_{sp} &\overset{iid}{\sim} \text{Bin}(n_{sp}, \pi_{sp})\\
\nonumber \pi_{sp} &\overset{iid}{\sim} Beta(r_p \pi^0_p, r_p (1-\pi^0_p))
\end{align}
\noindent where $X^{3P\%}_{sp}$ is the observed three point percentage
of player $p$ in season $s$, $\pi_{sp}=E_{sp(3P\%)}[X]$ is the
estimand of interest, $n_{sp}$ is the number of attempts,
$\pi^0_p = E_{p(3P\%)}[X]$ is the career average for player $p$, and
$\pi^0_p(1-\pi^0_p)/(r_p+1)$ is the variance in $\pi_{sp}$ over time. We
use the R package \textit{gbp} for empirical Bayes inference of
$\pi_{sp}$ and $r_p$, which controls the amount of shrinkage
\citep{Kelly}. In Figure \ref{fig:bayes} we plot the original and
shrunken estimates for LeBron James' three point ability over his
career.
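The inference above relies on the \textit{gbp} package; purely as an illustration of the shrinkage idea, the sketch below uses a crude method-of-moments beta-binomial fit for a single player rather than the full procedure of \citet{Kelly}. All names and the example numbers are hypothetical.
\begin{verbatim}
import numpy as np

def shrink_three_point(makes, attempts):
    """Season-level shrinkage estimates of three point ability for one player."""
    makes, attempts = np.asarray(makes, float), np.asarray(attempts, float)
    pi0 = makes.sum() / attempts.sum()             # plug-in for the career average
    rates = makes / attempts
    # crude method-of-moments estimate of the Beta "sample size" r:
    between = rates.var()                          # observed spread of season rates
    within = np.mean(pi0 * (1 - pi0) / attempts)   # binomial noise at the career rate
    signal = max(between - within, 1e-6)           # rough variance of true ability
    r = max(pi0 * (1 - pi0) / signal - 1, 1.0)     # from Var(Beta) = pi0(1-pi0)/(r+1)
    # Beta(r*pi0, r*(1-pi0)) prior + Binomial likelihood => posterior means
    return (makes + r * pi0) / (attempts + r)

# hypothetical seasons: shrunken season rates move toward the career average
print(shrink_three_point([20, 70, 50, 90], [80, 150, 160, 190]))
\end{verbatim}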
We can compute discrimination and stability estimates for the
estimated three point ability derived from this model using the same
approach outlined in Section \ref{sec:inference}. Although the
empirical Bayes' procedure yields probability intervals for all
estimates, we can still compute the frequentist variability using the
bootstrap (e.g. see \citet{efron2015frequentist}). In Figure
\ref{fig:nbaRel} we highlight the comparison between observed three
point percentage and the empirical Bayes estimate in red. Observed
three point percentage is an unbiased estimate of three point ability
but is highly unreliable. The Bayes estimate is biased for all players,
but theory suggests that the estimates have lower mean squared error
due to a reduction in variance \citep{efron1975data}. The improved
stability and discrimination of the empirical Bayes estimate is
consistent with this fact.
\begin{figure}
\centering
\includegraphics[width=.8\textwidth]{ArxivFigs/lbj_threes}
\caption{Three point percentages for LeBron James by
season, and shrunken estimates using the empirical Bayes model
proposed by \citet{Kelly}. Shrinking three point percentage to a
player's career average improves stability and discrimination.}
\label{fig:bayes}
\end{figure}
\subsection{Principal Component Metrics}
The dependency model proposed in Section \ref{sec:ind} provides a
natural way to derive new metrics that describe orthogonal aspects of
player ability. In particular, the eigendecomposition of the latent
correlation matrix, $C$, (Equation \ref{eqn:copula}) can be used to
develop a (smaller) set of new metrics, which, by construction, are
mutually independent and explain much of the variation in the original
set. If the latent normal variables $Z$ defined in Equation
\ref{eqn:copula} were known, then we could compute the principal
components of the latent matrix $Z$ to derive a new set of orthogonal metrics.
The principal components are defined as $W = Z U$ where $U$ is the
matrix of eigenvectors of $C$. Then, by definition,
$W \sim \text{MVN}(0, I)$ and thus
$W_k \independent W_j\ \forall\ k\neq j$. For the independence score
defined in Section \ref{sec:ind}, this means that
$\mathcal{I}_{k, \mathcal{M}^W_{-k}} = 1$ for all $k$, where
$\mathcal{M}^W_{-k}$ is the set of all metrics $W_j$, $j\neq k$. We
estimate $Z$ by normalizing $X$, that is
$\hat{Z}_{spm} = \Phi^{-1}(\hat{F}_m(X_{spm}))$ where $\hat{F}_m$ is
the empirical CDF of $X_m$. Our estimate of the principal components
of the latent matrix $Z$ is then simply
$\hat{W}_{sp} = \hat{Z}_{sp}U$.
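A bare-bones version of this construction (our own sketch, which substitutes a simple normal-scores transform and the sample correlation of $\hat{Z}$ for the full semi-parametric copula fit) is:
\begin{verbatim}
import numpy as np
from scipy.stats import norm, rankdata

def latent_pca_metrics(X):
    """X: (player-seasons x metrics) array. Returns orthogonal PCA-based metrics W."""
    n, M = X.shape
    Z = norm.ppf(rankdata(X, axis=0) / (n + 1))   # normal-scores transform, by column
    C = np.corrcoef(Z, rowvar=False)              # stand-in for the copula correlation
    eigvals, U = np.linalg.eigh(C)
    U = U[:, np.argsort(eigvals)[::-1]]           # order components by eigenvalue
    return Z @ U                                  # W_hat = Z_hat U
\end{verbatim}
Sorting player-seasons by each column of the returned array then yields the per-component rankings shown in the tables below.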
We present results based on these new PCA-based metrics for both NBA
and NHL statistics. In Figure \ref{fig:nbaRank} we list three
PCA-based metrics for the NBA and the corresponding original NBA
metrics which load most heavily onto them. We also rank the top ten
players across seasons according to $\hat{W}_{sp}$ and visualize the
scores for each of these three PCA-based metrics for four different
players in the 2014-2015 season. Here, the fact that LeBron James
ranks highly in each of these three independent metrics is indicative
of his versatility. Although the meaning of these metrics can be
harder to determine, they can provide a useful aggregation of
high-dimensional measurements of player skill that facilitate fairer
comparisons of players.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.9\textwidth]{ArxivFigs/starsPlot}
\qquad
\raisebox{1.25\height}{
\begin{minipage}{.3\linewidth}
\renewcommand{\baselinestretch}{1}
\footnotesize
\centering
\begin{tabular}{ |c|c|c|}
\hline
\multicolumn{3}{|c|}{\textbf{\color{myblue}{``Efficient
Shooters'' (PC1)}}}\\
\hline
\multicolumn{3}{|p{0.75\textwidth}|}{\color{myblue}{ FG\%, PER, WS, \%FG 2P, 2P\%, BPM, TS\%}}\\
\hline
Rank & Player & Year \\
\hline
1 & Dwight Howard & 2010 \\
2 & Dwight Howard & 2009 \\
3 & Dwight Howard & 2008 \\
4 & Shaquille O'Neal & 2000 \\
5 & Shaquille O'Neal & 2004 \\
6 & Dwight Howard & 2007 \\
7 & DeAndre Jordan & 2014 \\
8 & Amar'e Stoudemire & 2007 \\
9 & Shaquille O'Neal & 2003 \\
10 & Tim Duncan & 2006 \\
\hline
\end{tabular}
\end{minipage}%
\hspace{.5cm}
\begin{minipage}{.3\linewidth}
\renewcommand{\baselinestretch}{1}
\footnotesize
\centering
\begin{tabular}{ |c|c|c|}
\hline
\multicolumn{3}{|c|}{\color{myred}{\textbf{``Shooters, Assisters'' (PC2)}}}\\
\hline
\multicolumn{3}{|p{0.75\textwidth}|}{\color{myred}{OBPM, 3PA, AST\%, \%FGA 3P, Avg
Shot Dist, PGA}} \\
\hline
Rank & Player & Year \\
\hline
1 & Stephen Curry & 2014 \\
2 & Stephen Curry & 2013 \\
3 & Steve Nash & 2006 \\
4 & Chris Paul & 2014 \\
5 & Steve Nash & 2008 \\
6 & Chris Paul & 2007 \\
7 & Damon Jones & 2004 \\
8 & Steve Nash & 2009 \\
9 & Stephen Curry & 2012 \\
10 & LeBron James & 2009 \\
\hline
\end{tabular}
\end{minipage}%
\hspace{.5cm}
\begin{minipage}{.3\linewidth}
\renewcommand{\baselinestretch}{1}
\footnotesize
\centering
\begin{tabular}{ |c|c|c|}
\hline
\multicolumn{3}{|c|}{\textbf{``High Usage'' (PC3)}}\\
\hline
\multicolumn{3}{|p{0.75\textwidth}|}{USG, 2PA, FGA, LostBall, FTA, SfDrawn, PTS, And1}\\
\hline
Rank & Player & Year \\
\hline
1 & Allen Iverson & 2006 \\
2 & Cory Higgins & 2011 \\
3 & Kobe Bryant & 2014 \\
4 & Allen Iverson & 2003 \\
5 & Russell Westbrook & 2014 \\
6 & Tony Wroten & 2013 \\
7 & Tony Wroten & 2014 \\
8 & Allen Iverson & 2004 \\
9 & Jermaine O'Neal & 2004 \\
10 & Allen Iverson & 2005 \\
\hline
\end{tabular}
\end{minipage}
}
\caption{First three principal components of $C$. The tables indicate
the metrics that predominantly load on the components. Each
component generally corresponds to interpretable aspects of player
style and ability. The table includes the highest ranking players
across all seasons for each component. The top row depicts
principal component scores for four players in the 2014-2015
season. LeBron James ranks highly among all 3 independent
components. }
\label{fig:nbaRank}
\end{figure}
In Figure \ref{fig:nhlPCmetrics} we provide two PCA-based metrics for NHL
statistics. We again list the metrics that have the highest
loadings on two principal component along with the top ten players (in
any season) by component. The first principal component largely
reflects variation in offensive skill and easily picks up many of the
offensive greats, including Ovechkin and Crosby. For comparison, we
include another component, which corresponds to valuable defensive
players who make little offensive contribution. This component loads
positively on defensive point shares (DPS) and blocks (BLK), but
negatively on shots and goals (S, G).
\begin{figure}[ht!]
\centering
\begin{minipage}[t]{.4\textwidth}
\renewcommand{\baselinestretch}{1}
\vspace{.5in}
\footnotesize
\begin{tabular}{ |c|c|c|}
\hline
\multicolumn{3}{|c|}{``Offensive skill''}\\
\hline
\multicolumn{3}{|p{0.5\textwidth}|}{\textcolor{red}{PTS, OPS, GC, PS, TGF, G,
A, EV, PGF, TSA}}\\
\hline
Rank & Player & Year \\
\hline
1 & Alex Ovechkin & 2010 \\
2 & Sidney Crosby & 2009 \\
3 & Alexander Semin & 2008 \\
4 & Daniel Sedin & 2000 \\
5 & Evgeni Malkin & 2011 \\
6 & Daniel Sedin & 2010 \\
7 & Alex Ovechkin & 2007 \\
8 & Alex Ovechkin & 2008 \\
9 & Sidney Crosby & 2012 \\
10 & Marian Hossa & 2008 \\
\hline
\end{tabular}
\label{fig:nhlPC1}
\end{minipage}
\begin{minipage}[t]{.4\textwidth}
\renewcommand{\baselinestretch}{1}
\vspace{.5in}
\footnotesize
\begin{tabular}{ |c|c|c|}
\hline
\multicolumn{3}{|c|}{``Valuable defenders ''}\\
\hline
\multicolumn{3}{|p{0.5\textwidth}|}{\textcolor{blue}{ATOI,
DPS, BLK,\linebreak -S, -TSA, -G, -FA, -CF}}\\
\hline
Rank & Player & Year \\
\hline
1 & Nicklas Lidstrom & 2008 \\
2 & Ryan Suter & 2014 \\
3 & Toby Enstrom & 2009 \\
4 & Josh Gorges & 2012 \\
5 & Toni Lydman & 2011 \\
6 & Toby Enstrom & 2008 \\
7 & Chris Pronger & 2010 \\
8 & Paul Martin & 2008 \\
9 & Niclas Havelid & 2008 \\
10 & Andy Greene & 2015 \\
\hline
\end{tabular}
\label{fig:nhlPC2}
\end{minipage}
\caption{Player rankings
based on two principal components. The first PC is associated with
offensive ability. The fact that this is the \emph{first} component
implies that a disproportionate fraction of the currently available
hockey metrics measure aspects of offensive ability. The other included
component reflects valuable defensive players (large positive
loadings for defensive point shares and blocks) but players that
make few offensive contributions (negative loadings for goals and
shots attempted). The metrics that load onto these components are
highlighted in the dendrogram of NHL metrics (Figure \ref{fig:nhlClust}). }
\label{fig:nhlPCmetrics}
\end{figure}
\section{Discussion}
Uncertainty quantification, a hallmark of statistical sciences, has so
far been under-appreciated in sports analytics. Our work demonstrates
the importance of understanding sources of variation and provides a
method to quantify how different metrics reflect this variation. Specifically, we
explore three different ``meta-metrics'' for evaluating the
reliability of metrics in any sport: discrimination, stability and
independence.
Our results show that we can use meta-metrics to characterize the
most discriminative and stable summaries amongst a set of omnibus metrics (like win shares, BPM
and PER for the NBA), which can in turn help decision-makers identify
the metrics that are most relevant for a given task. Meta-metrics
can also be used as a benchmark for evaluating the improvement of new
estimators. For instance, in the case of three point percentage, we
demonstrate that an estimate based on a simple hierarchical model can
improve the stability \emph{and} discrimination of standard boxscore
statistics.
In this paper, we focused on reliability and dependence of metrics for
\emph{all players in the league} but the meta-metrics can easily be
recalculated for relevant subsets of players. This is important
because, as shown, in this context the most reliable metrics are often
the metrics which distinguish between player types (e.g., blocks and
rebounds in basketball). This may be irrelevant when making decisions
involving a specific group of players (e.g., which NBA center to
acquire). When using metrics to evaluate players of a certain type,
we should compute the meta-metrics conditional on this player type.
For instance, there is less variation in the number of blocks and
rebounds by NBA centers, and as such, these metrics are less
discriminative and stable than they are for the league as a whole.
Moreover, the dependence between blocks and rebounds is largely driven
by height, and thus the conditional dependence between blocks and
rebounds given height is much smaller. Thus, it is important that the
meta-metrics are always interpreted in the context of the appropriate
group of players. In light of this point, it is notable that the
meta-metrics that we present in this paper are stated in terms of
expectations and variances, so that estimation of conditional meta-metrics
simply requires replacing marginal expectations and variances with
their conditional counterparts.
Another important consideration is that our meta-metrics only measure
the internal quality of a metric. The meta-metrics are not designed
to provide any information about how relevant the metrics are for the
sport of interest. For instance, although we identified Corsi-based
metrics as more discriminative and stable than the related
Fenwick-based metrics, it is still possible that Fenwick metrics are
more predictive of team performance. As a more extreme example, an
athlete's birthplace zip code would be perfectly discriminative,
stable and independent from all other metrics, but is clearly
irrelevant for determining a player's value to the team. This
suggests that in practice coaches and analysts should consider a
fourth meta-metric: ``relevance''. Relevance could simply be a
qualitative description of the metric's meaning or it could be a
quantitative summary of the causal or predictive relationship between
the metric and an outcome of interest, like wins or revenue generated.
Nevertheless, the methods presented here provide a useful
characterization of the reliability of existing metrics. We believe
that future iterations of the meta-metrics outlined in this paper can
become a standard analytical tool that will improve the decisions made
and information gleaned from new and old metrics alike.
\clearpage
\bibliographystyle{natbib}
\section{Introduction}
In association football (henceforth, football), key game events such as shots and goals are very rare, in stark contrast to other team sports. By comparison, passes are two orders of magnitude more abundant than goals. It stands to reason that, in order to get a comprehensive statistical summary of a football game, passes should be one of the main focal points. However, not much attention has been paid to passing distributions, and media attention to passes is normally limited to their total number, sometimes together with the \emph{passing accuracy}. A similar lack of attention is paid to the analysis of possession, oftentimes limited to a single percentage value.
In \cite{Reep1968} Reep and Benjamin analyze the distribution of the lengths of passing sequences in association football and compare it to Poisson and negative binomial distributions. More recent works suggest that the distribution of lengths of passing sequences in modern football is better explained by Benford's law instead. However, all these works take a top-down approach in which a model is arbitrarily chosen and then fitted to the distribution, without any explanation of why football games should be described by that model.
In this work, we propose to take a bottom-up approach instead. Inspired by our previous work on passing networks (cf. \cite{Lopez2013}), we model a team's game by a finite state automaton with states \textbf{Recovery}, \textbf{Possession}, \textbf{Ball lost}, \textbf{Shot taken}, whose evolution matrix is obtained from historical game data. The resulting Markovian system provides a model for possession after any given number of steps, as well as estimates of the probability of keeping possession or of it ending in either of the two possible outcomes: lost ball or shot taken.
By choosing adequate fitting data, we compare the possession models for different teams and show how they vary across different leagues. We then compare the resulting model with the ones previously used in the literature (Poisson, NBD, Pareto) and with the actual possession data in order to find the best fit. The obvious advantage of our model is that it explains the resulting distribution as the natural limit of a Markov process.
\section{Determining possession lengths}
Since football is such a fluid game, it is not always immediately clear which sequences of events constitute a possession. For our analysis, we consider the simplest possible notion of possession as any sequence of consecutive game events in which the ball stays in play and under control of the same team. As such, we will consider that a possession starts the first time a team makes a deliberate action on the ball, and ends any time the ball leaves the pitch or there is a foul (regardless of who gets to put the ball in play afterwards), any time there is a deliberate on-ball action by the opposing team, such as a pass interception, a tackle or a clearance (but not counting unsuccessful ball touches which don't interrupt the game flow), or any time the team takes a shot, regardless of whether the outcome is a goal, a miss out of play, hitting the woodwork, or a goalkeeper save.
One could introduce a further level of sophistication by distinguishing between clear passes and `\textit{divided balls}', such as passes to a general area where player of both teams dispute the ball (for instance in an aerial duel). For the sake of simplicity, we will not take this into consideration.
It is also worth noting that when measuring possession length we shall consider all on-ball actions, and not just passes. In particular, dribbles, won aerial duels, and self-passes will all be considered as valid actions when accounting for a single possession.
\begin{figure}[ht]
\includegraphics[scale=0.28]{EPLPassSeqsDists}
\caption{Classical top-down models}\label{fig:EPL0}
\end{figure}
In \cite{Reep1968} it is suggested that length of passing sequences can be approximated by Poisson or Negative Binomial distributions. Our tests suggest this is no longer the case for the average EPL team in the 2012/2013 season: Figure \ref{fig:EPL0} shows how the best fitting Poisson and NBD grossly underestimate the share of long possessions. For the sake of comparison, we have also included the fitting of a Pareto distribution, which displays the opposite effect (underestimating short possessions and overestimating longer ones). For our data, all three classic distributions fail to meet the asymptotic behavior of the observed data.
This can be partly explained by our different notion of possession length, since Reep and Benjamin only consider the total number of passes in a possession, but the different definition does not tell us the whole story. According to Reep and Benjamin's data, there are only 17 instances, measured over 54 games in 1957-58 and 1961-62, of sequences involving 9 passes or more (out of around 30000 possessions). Even if we use their more restrictive notion of possession, there are many more instances of long possessions nowadays. A possible explanation might be the generally admitted fact that football playing style has evolved over the years, leaning towards a more and more technical style which favors longer possessions, with many teams consistently playing possessions of 20 passes or even more.
In any case, it is quite evident that in order to accurately describe possessions in current professional football we need a different type of model which allows for a longer tail. One possible candidate would be a power-law distribution, such as Pareto's (also plotted in Figure \ref{fig:EPL0} for comparison). As we plot longer parts of the tail, however, it will become apparent that the Pareto distribution does not display the correct type of asymptotic behavior to describe our data.
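For concreteness, the following short Python sketch illustrates the kind of top-down fit plotted in Figure \ref{fig:EPL0}. The possession lengths used here are a small invented sample rather than our EPL data, and the moment-based negative binomial fit is only one of several possible fitting choices.
\begin{verbatim}
# Illustrative top-down fits (Poisson and negative binomial) to a toy
# sample of possession lengths; this is NOT the EPL dataset of the paper.
import numpy as np
from scipy import stats

lengths = np.array([1, 1, 2, 3, 3, 4, 5, 5, 6, 8, 9, 12, 15, 21])

lam = lengths.mean()                      # Poisson MLE is the sample mean

mean, var = lengths.mean(), lengths.var(ddof=1)
p = mean / var                            # method of moments (needs var > mean)
n = mean * p / (1 - p)

xs = np.arange(1, lengths.max() + 1)
print(stats.poisson.pmf(xs, lam)[-1])     # both tails decay far too fast
print(stats.nbinom.pmf(xs, n, p)[-1])
\end{verbatim}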
\section{Markovian possession models}
A typical possession in a football game starts with a ball recovery, which can be achieved in an active or passive manner. Ball recovery is followed by a sequence of ball movements that will eventually conclude with a loss of possession. This loss of possession can be either intentional, due to an attempt to score a goal, or unintentional, due to an error, an interception/tackle, or an infringement of the rules.
The different possession stages can be modeled using a nondeterministic finite state automaton, with initial state indicating the start of a possession, final states indicating the end of the possession, and some number of other states to account for intermediate stages of the possession, as well as appropriate probabilities for the state transitions.
The freedom in the choice of intermediate states allows for a wealth of different Markov models with varying degrees of complexity. One of the simplest such possible schemes is to consider only one intermediate state ``\textbf{Keep}'' to indicate continued possession, and two different final states ``\textbf{Loss}'' and ``\textbf{Shot}'' indicating whether the possession ends in a voluntary manner (by taking an attempt to score a goal) or in an involuntary manner. This simple model is schematized by the following diagram:
\begin{figure}[ht]
\begin{tikzpicture}[scale=0.8,transform shape, >=stealth']
\Vertex[x=0, y=0]{Recovery}
\Vertex[x=4, y=0]{Keep}
\Vertex[x=8, y=1]{Loss}
\Vertex[x=8,y=-1]{Shot}
\tikzset{EdgeStyle/.style={->}}
\Edge[style= dashed](Recovery)(Keep)
\Edge[label = $p_k$, style = loop](Keep)(Keep)
\Edge[label = $p_l$](Keep)(Loss)
\Edge[label = $p_s$](Keep)(Shot)
\end{tikzpicture}
\caption{Simple Markov model}
\end{figure}
A slight variation allows us to track divided balls (though using this model would require us to change the definition of possession that we described above):
\begin{figure}[ht]
\begin{tikzpicture}[scale=0.8,transform shape, >=stealth']
\Vertex[x=0, y=0]{Recovery}
\Vertex[x=4, y=1]{Keep}
\Vertex[x=8, y=-1]{Loss}
\Vertex[x=8,y=1]{Shot}
\Vertex[x=4, y=-2]{Divided}
\tikzset{EdgeStyle/.style={->}}
\Edge[style=dashed, label=$p_{r,k}$](Recovery)(Keep)
\Edge[style=dashed, label=$p_{r,d}$](Recovery)(Divided)
\Edge[label=$p_k$, style=loop](Keep)(Keep)
\Edge[label=$p_l$](Keep)(Loss)
\Edge[label=$p_s$](Keep)(Shot)
\Edge[label=$p_{k,d}$, style=bend left](Keep)(Divided)
\Edge[label=$p_{d,k}$, style=bend left](Divided)(Keep)
\Edge[label=$p_{d,l}$, style=bend right](Divided)(Loss)
\end{tikzpicture}
\caption{Markov model with divided balls}
\end{figure}
In an alternative model, one might be interested in the interactions among players in different positions (loops have been removed from the graph to avoid cluttering). This model is sketched in Figure \ref{dia:positional}.
\begin{figure}[ht]
\begin{tikzpicture}[scale=0.8,transform shape, >=stealth']
\Vertex[x=3, y=0]{Rec}
\Vertex[x=0, y=0]{GK}
\Vertex[x=3, y=3]{Def}
\Vertex[x=3, y=-3]{Mid}
\Vertex[x=6, y=0]{Fwd}
\Vertex[x=8, y=1]{Loss}
\Vertex[x=8,y=-1]{Shot}
\tikzset{EdgeStyle/.style={->}}
\Edge[style=dashed](Rec)(GK)
\Edge[style=dashed](Rec)(Mid)
\Edge[style=dashed](Rec)(Def)
\Edge[style=dashed](Rec)(Fwd)
\Edge[style={bend left = 15}](GK)(Def)
\Edge[style={bend left = 15}](GK)(Mid)
\Edge[style={bend left = 15}](GK)(Fwd)
\Edge[style={bend left = 15}](Def)(GK)
\Edge[style={bend left = 15}](Def)(Mid)
\Edge[style={bend left = 15}](Def)(Fwd)
\Edge[style={bend left = 15}](Mid)(GK)
\Edge[style={bend left = 15}](Mid)(Def)
\Edge[style={bend left = 15}](Mid)(Fwd)
\Edge[style={bend left = 15}](Fwd)(GK)
\Edge[style={bend left = 15}](Fwd)(Mid)
\Edge[style={bend left = 15}](Fwd)(Def)
\Edge[style={bend left = 15}](GK)(Loss)
\Edge[style={bend right = 15}](GK)(Shot)
\Edge[style={bend left = 15}](Def)(Loss)
\Edge[style={bend left = 15}](Def)(Shot)
\Edge[style={bend right = 15}](Mid)(Loss)
\Edge[style={bend right = 15}](Mid)(Shot)
\Edge[style={bend left = 15}](Fwd)(Loss)
\Edge[style={bend right = 15}](Fwd)(Shot)
\end{tikzpicture}
\caption{Positional Markov model}
\label{dia:positional}
\end{figure}
Once we have chosen the set of states that configure our Markov model, its behavior is then described by the \emph{transition matrix} $A=(a_{i,j})$, where entry $a_{i,j}$ represents the probability of transitioning from state $i$ to state $j$; similarly, given an initial state $i$, the probability of the game being at state $j$ after $r$ steps is given by the $(i,j)$ entry of $A^r$, that is, the $j$-th coordinate of $e_i^{\top} A^r$, where $e_i = (0, \dotsc, 0, 1, 0, \dotsc, 0)$ is the vector with coordinate 1 in position $i$ and $0$ everywhere else.
More complex Markov models allow for a richer representation of game states, but since transition probabilities must (in general) be determined heuristically, they will be harder to fine-tune.
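As an illustration (and not part of the fitting procedure of the next section), the simplest model described above can be written down in a few lines of Python; the transition probabilities below are made-up values, not fitted coefficients.
\begin{verbatim}
# Keep/Loss/Shot chain: row-stochastic transition matrix and the state
# distribution after r steps.  p_k, p_l, p_s are illustrative values only.
import numpy as np

p_k, p_l, p_s = 0.78, 0.20, 0.02           # p_k + p_l + p_s = 1
A = np.array([[p_k, p_l, p_s],             # from Keep
              [0.0, 1.0, 0.0],             # Loss is absorbing
              [0.0, 0.0, 1.0]])            # Shot is absorbing

e_keep = np.array([1.0, 0.0, 0.0])         # possession starts in Keep
for r in (1, 5, 10, 20):
    print(r, e_keep @ np.linalg.matrix_power(A, r))
\end{verbatim}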
\section{Model fitting}
As a proof of concept, we will perform a detailed fitting of the simplest Markov model described in the previous section. This first approximation model works under the very simplistic assumption that the transition probabilities between two game states remain constant through the entire sequence of events. Transition probabilities $p_k, p_l, p_s$ (which must satisfy $p_k + p_s + p_l = 1$) are, respectively, the probability of keeping possession, losing the ball, or taking a shot.
Since the transition matrix for this system is extremely simple, this model can be solved analytically, yielding the probability distribution given by
\begin{equation}
P(\{X = x\}) = (1-p_k)p_k^{x-1} \simeq Ce^{-\lambda x},
\end{equation}
(where $\lambda = -\log p_k$ and $C$ is a normalization constant) suggesting that the distribution of lengths follows a pattern asymptotically equivalent to an exponential distribution.
\begin{figure}[ht]
\includegraphics[scale=0.28]{Markov2EPL}
\caption{Power law and Markov models}\label{fig:EPL1}
\end{figure}
As Figure \ref{fig:EPL1} shows, the simple Markov model provides a good fit for the general trend, with the correct asymptotic behavior, but it tends to over-estimate the occurrence of sequences of 3 to 8 actions, as well as severely underestimating the number of sequences consisting of a single action. For individual Premier League teams, Pearson's goodness of fit test yields a $p$--value higher than $0.99$ in all cases.
The simple Markov model can be easily improved by weakening the constraint that the transition probability be constant. A simple way of doing this is to add an accumulation factor $b$ (with $0<b<1$) and modify the probability distribution to
\begin{equation}
P(\{X = x\}) = C(e^{-\lambda x} + b^x),
\end{equation}
since $b^x$ goes to 0 as $x$ increases, the added factor does not modify the asymptotic behavior of the resulting probability distribution, but it allows us to correct the errors in the probabilities for short possessions. It is worth noting that whilst this adjusted model (also shown in Figure \ref{fig:EPL1}) does not strictly come from a Markov process (as the transition probability is no longer constant), the additional factor $b$ can be interpreted as a (negative) self-affirmation feedback, incorporated into the model in a similar way to the one used in \cite{Bittner2007} for goal distributions. A possible interpretation of this factor is the added difficulty of completing passes as a team moves the ball closer to their opponents' box. A higher value of the coefficient $b$ means that a team is more likely to sustain its passing accuracy over the course of a long possession.
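A minimal sketch of how such a two-parameter correction can be fitted in practice is given below, using scipy's curve_fit; the "observed" frequencies are synthetic placeholders, and the real fit was of course performed on the EPL possession data.
\begin{verbatim}
# Fitting P(X = x) = C (exp(-lambda x) + b^x) to empirical frequencies.
# The observations below are synthetic; replace them with real counts.
import numpy as np
from scipy.optimize import curve_fit

xs = np.arange(1, 31)
counts = 5000 * (0.22 * np.exp(-0.25 * xs) + 0.65 ** xs)
freqs = counts / counts.sum()

def adjusted(x, C, lam, b):
    return C * (np.exp(-lam * x) + b ** x)

(C_hat, lam_hat, b_hat), _ = curve_fit(adjusted, xs, freqs,
                                       p0=(0.2, 0.25, 0.6),
                                       bounds=([0, 0, 0], [np.inf, np.inf, 1]))
print(C_hat, np.exp(-lam_hat), b_hat)      # exp(-lambda) plays the role of p_k
\end{verbatim}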
Besides the distribution of length of possessions, the Markov model allows us to study the probability of any state of the model after a given number of steps. In particular, one can look at the `\textbf{Shot}' state, obtaining a model for the probability that a shot will have taken place within a given number of actions. Once again, the system can be solved analytically, yielding
\begin{equation}
P(\{\text{\textbf{Shot}} | X\leq x \}) = \sum_{i=0}^{x-1} p_k^{i}p_s = p_s \frac{1-p_k^x}{1-p_k},
\end{equation}
where this probability should be understood as the chance that a team will take a shot within $x$ consecutive ball touches. Figure \ref{fig:shotchance} shows once again how the model yields a good approximation to the observed data, fitting comfortably within the error bars. The $p$--value for Pearson's goodness of fit test is again greater than $0.99$.
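The closed formula above is straightforward to evaluate; the short snippet below does so for a few values of $x$, using illustrative values of $p_k$ and $p_s$ rather than fitted coefficients.
\begin{verbatim}
# P(Shot | X <= x) = p_s (1 - p_k**x) / (1 - p_k); p_k and p_s are
# illustrative values only, not fitted coefficients.
def shot_within(x, p_k=0.78, p_s=0.017):
    return p_s * (1 - p_k ** x) / (1 - p_k)

print([round(shot_within(x), 4) for x in (1, 5, 10, 20)])
\end{verbatim}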
\begin{figure}[ht]
\includegraphics[scale=0.28]{ShotChance}
\caption{Probability of shots for the Markov model}\label{fig:shotchance}
\end{figure}
The fitted coefficients for all the Premier League teams in the 2012-2013 season are listed in Table \ref{tbl:markovfitting} (teams are sorted by final league position). There is a remarkable correlation between higher values of the `\emph{keep probability}' $p_k$ and what is generally considered `\emph{nice gameplay}', as well as between higher values of the shot probability $p_s$ and teams considered to have a more aggressive football style.
Traditionally, possession is considered a bad indicator of performance, but our numbers suggest that this need not be the case when it is analyzed in a more sophisticated manner. Apart from a few outliers, such as under-performers Swansea and Wigan Athletic, and over-performing Stoke, most teams' fitted probabilities correlate fairly well with their league table positions.
\begin{table}[t]
\begin{tabular}{rlccc}
\toprule
& Team & $p_k$ & $p_s$ & $b$ \\
\midrule
1 & Manchester United & 0.794 & 0.017 & 0.685 \\
2 & Manchester City & 0.794 & 0.016 & 0.683 \\
3 & Chelsea & 0.785 & 0.018 & 0.663 \\
4 & Arsenal & \textbf{0.797} & 0.015 & \textbf{0.690} \\
5 & Tottenham Hotspur & 0.782 & 0.016 & 0.653 \\
6 & Everton & 0.772 & 0.017 & 0.631 \\
7 & Liverpool & 0.788 & \textbf{0.019} & 0.672 \\
8 & West Bromwich Albion & 0.771 & 0.018 & 0.630 \\
9 & Swansea City & 0.794 & 0.018 & 0.684 \\
10 & West Ham United & 0.756 & 0.018 & 0.589 \\
11 & Norwich City & 0.764 & 0.014 & 0.601 \\
12 & Fulham & 0.783 & 0.018 & 0.657 \\
13 & Stoke City & 0.752 & 0.015 & 0.575 \\
14 & Southampton & 0.772 & 0.015 & 0.630 \\
15 & Aston Villa & 0.771 & 0.016 & 0.619 \\
16 & Newcastle United & 0.771 & \textbf{0.019} & 0.624 \\
17 & Sunderland & 0.763 & 0.017 & 0.601 \\
18 & Wigan Athletic & 0.783 & 0.017 & 0.660 \\
19 & Reading & 0.749 & 0.016 & 0.558 \\
20 & Queens Park Rangers & 0.767 & 0.016 & 0.618 \\
\bottomrule
\end{tabular}
\caption{EPL Markov model fitting parameters}
\label{tbl:markovfitting}
\end{table}
\section{Conclusions and future work}
We have shown how Markov processes provide a bottom-up approach to the problem of determining the probability distribution of possession-related game states, as well as their outcomes. The bottom-up approach has the obvious advantage of providing a conceptual explanation for the resulting probability distribution, but beyond that we have shown that even a very simple Markov model yields better approximations than the classical top-down approaches.
It is worth noting that even though we focused on the particular case of association football (and concretely the English Premier League), the type of analysis we have carried out is of a very general nature and can easily be performed for many team sports which follow similar possession patterns, such as basketball, hockey, handball, or water polo, to name a few.
Besides the study of probability distributions, sufficiently granular Markov processes can be used to carry out game simulations. In theory one might want to consider a full Markov model containing a state for every single player, as in the passing networks described in \cite{Lopez2013}, and use it as the basis of an agent-based model in order to forecast game outcomes, although finding all the transition probabilities in that extreme situation would admittedly be very hard.
\section{Data and analysis implementation details}
Our analysis uses data for all the English Premier League games in the 2012-2013 season (380 games). Raw data for game events was provided by Opta. Data munging, model fitting, analysis, and chart plotting were performed using IPython \cite{ipython} and the python scientific stack \cite{numpy, matplotlib}.
\section{Introduction}
Association football (simply referred to as \emph{football} in what follows) is arguably the most popular sport in the world.
Traditionally, plenty of attention has been devoted to goals and their distribution as the main focus of football statistics. However, shots remain a rare occurrence in football games, to a much larger extent than in other team sports.
Long possessions and a paucity of scoring opportunities are defining features of football games. Passes, on the other hand, are two orders of magnitude more frequent than goals, and therefore constitute a much more appropriate event to look at when trying to describe the elusive quality of `\emph{playing style}'. Some studies on passing have been performed, either at the level of passing sequence distributions (cf.\ \cite{Hughes2005, LopezUNa, Reep1968}), by studying passing networks \cite{Duch2010, Lopez2013, Cotta2011}, or from a dynamic perspective by studying game flow \cite{Brillinger2007} or passing \emph{flow motifs} at the team level \cite{GyarmatiUN}, where passing flow motifs (developed following \cite{Milo2002}) were satisfactorily shown by Gyarmati, Kwak and Rodríguez to set the passing style of football teams apart from randomized networks.
In the present work we elaborate on \cite{GyarmatiUN} by extending the flow motif analysis to the player level. We start by breaking down all possible 3-pass motifs into the different variations resulting from labelling a distinguished node in the motif, resulting in a total of 15 different 3-pass motifs at the player level (stemming from the 5 motifs for teams). For each player in our dataset, and each game they participate in, we compute the number of times each pattern occurs. The resulting 15-dimensional distribution is used as a fingerprint of the player's style, which characterizes what type of involvement the player has with his teammates.
The resulting feature vectors are then used to provide a notion of similarity between football players, giving us a quantifiable measure of how \emph{close} the playing styles of any two arbitrary players are. This is done in two different ways: first by performing a clustering analysis (with automatic cluster detection) on the feature vectors, which allows us to identify 37 separate groups of similar players, and secondly by defining a distance function (based on the z-scores of the mean motif features), which is consequently used to construct a similarity score.
As an illustrative example, we perform a detailed analysis of all the defined quantities for Xavi Hernández, captain of FC Barcelona who just left the team after many years in which he has been considered the flagship of the famous \emph{tiki-taka} style both for his club and for the Spanish national team. Using our data-based \emph{style fingerprint} we try to address the pressing question: which player could possibly replace the best passer in the world?
\section{Methodology}
The basis of our analysis is the study of passing subsequences. The passing style of a team is partially encoded, from a static point of view, in the passing network (cf. \cite{Lopez2013}). A more dynamical approach is taken in \cite{GyarmatiUN}, where passing subsequences are classified (at the team level) through ``flow motifs'' of the passing network.
Inspired by the work on flow motifs for teams, we carry out a similar analysis at the player level. We focus on studying flow motifs corresponding to sequences of three consecutive passes. Passing motifs are not concerned with the names of the players involved in a sequence of passes, but rather with the structure of the sequence itself. From a team's point of view, there are five possible variations: ABAB, ABAC, ABCA, ABCB, and ABCD (where each letter represents a different player within the sequence).
\begin{figure}[ht]
\begin{tikzpicture}[scale=0.8,transform shape, >=stealth']
\Vertex[x=0, y=0]{A}
\Vertex[x=2, y=0]{B}
\tikzset{EdgeStyle/.style={->}}
\Edge[style={bend left = 15}](A)(B)
\Edge(B)(A)
\Edge[style={bend right = 15}](A)(B)
\end{tikzpicture} \quad
\begin{tikzpicture}[scale=0.8,transform shape, >=stealth']
\Vertex[x=0, y=0]{A}
\Vertex[x=1, y=1]{B}
\Vertex[x=2, y=0]{C}
\tikzset{EdgeStyle/.style={->}}
\Edge[style={bend left = 15}](A)(B)
\Edge[style={bend left = 15}](B)(A)
\Edge[style={bend right = 15}](A)(C)
\end{tikzpicture} \quad
\begin{tikzpicture}[scale=0.8,transform shape, >=stealth']
\Vertex[x=0, y=0]{A}
\Vertex[x=0.75, y=1]{B}
\Vertex[x=1.5, y=0]{C}
\tikzset{EdgeStyle/.style={->}}
\Edge[style={bend left = 15}](A)(B)
\Edge[style={bend left = 15}](B)(C)
\Edge[style={bend left = 15}](C)(A)
\end{tikzpicture}
\\[10pt]
\centering
\begin{tikzpicture}[scale=0.8,transform shape, >=stealth']
\Vertex[x=0, y=0]{A}
\Vertex[x=1.5, y=0]{B}
\Vertex[x=3, y=0]{C}
\tikzset{EdgeStyle/.style={->}}
\Edge(A)(B)
\Edge[style={bend left = 15}](B)(C)
\Edge[style={bend left = 15}](C)(B)
\end{tikzpicture} \qquad
\begin{tikzpicture}[scale=0.8,transform shape, >=stealth']
\Vertex[x=0, y=0]{A}
\Vertex[x=1, y=0]{B}
\Vertex[x=2, y=0]{C}
\Vertex[x=3,y=0]{D}
\tikzset{EdgeStyle/.style={->}}
\Edge(A)(B)
\Edge(B)(C)
\Edge(C)(D)
\end{tikzpicture}
\caption{The five team flow motifs}
\end{figure}
The situation is different when looking at flow motifs from a specific player's point of view, as that player needs to be singled out within each passing sequence. Allowing for variation of a single player's relative position within a passing sequence, the total number of motifs increases to fifteen. These patterns can all be obtained by swapping the position of player A with each of the other players (and relabelling if necessary) in each of the five motifs for teams. Adopting the convention that our singled-out player is always denoted by the letter `A', the resulting motifs can be labelled as follows (the basic team motif shown in bold letters):
\begin{table}[h]
\centering
\begin{tabular}{c}
\textbf{ABAB}, BABA \\
\textbf{ABAC}, BABC, BCBA \\
\textbf{ABCA}, BACB, BCAB \\
\textbf{ABCB}, BACA, BCAC \\
\textbf{ABCD}, BACD, BCAD, BCDA
\end{tabular}
\end{table}
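The relabelling just described is purely mechanical: fix the distinguished player, call him `A', and name the remaining players `B', `C', `D' in order of first appearance. A short Python sketch (function and variable names are ours, chosen purely for illustration) recovers the fifteen player-level motifs from the five team motifs:
\begin{verbatim}
# Expand the 5 team motifs into the 15 player-level motifs by singling out
# each player in turn and relabelling (distinguished player -> 'A').
def motif_label(slots, focus):
    mapping, other, labels = {focus: "A"}, iter("BCD"), []
    for p in slots:
        if p not in mapping:
            mapping[p] = next(other)
        labels.append(mapping[p])
    return "".join(labels)

team_motifs = ["ABAB", "ABAC", "ABCA", "ABCB", "ABCD"]
player_motifs = sorted({motif_label(m, player)
                        for m in team_motifs for player in set(m)})
print(len(player_motifs), player_motifs)        # 15 motifs, as listed above
\end{verbatim}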
When tracking passing sequences, we will consider only possessions consisting of uninterrupted consecutive events during which the ball is kept under control by the same team. As such, we will consider that a possession ends any time the game gets interrupted or an action does not have a clear passing target. In particular, we will consider that possessions get interrupted by fouls, by the ball going out of play, whenever there is a ``divided ball'' (e.g.\ an aerial duel), by clearances, interceptions, passes towards an open space without a clear target, or by shots, regardless of who gets to keep the ball afterwards. The motivation for this choice is that we are trying to keep track of game style through controlled, conscious actions. It is worth noting that here we are using a different methodology from the one in \cite{GyarmatiUN} (where passes are considered to belong to the same sequence if they are separated by less than five seconds).
Our analysed data consists of all English Premier League games over the last five seasons (comprising a total of 1900 games and 1402195 passes), all Spanish Liga games over the last three seasons (1140 games and 792829 passes), and the last season of Champions League data (124 games and 105993 passes). To reduce the impact of outliers, we have limited our study to players that have participated in at least 19 games (half a season). In particular, this means that only players playing in the English and Spanish leagues are tracked in our analysis. Unfortunately, at the time of writing we do not have at our disposal enough data about the other big European leagues to make the study more comprehensive.
The resulting dataset contains a total of 1296 players. For each of the analyzed players, we compute the average number of occurrences of each of the fifteen passing motifs listed above, and use the results as the feature vector describing the player's style. For the parts of the analysis that require making different types of subsequences comparable, we replace the feature vector by the corresponding z-scores (where for each feature the mean and standard deviation are computed over all the players included in the study).
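As a sketch of that computation (reusing the motif_label helper from the previous sketch; the possession format and player names below are invented for illustration and do not reflect the Opta event format), every window of three chained passes contributes one motif count to each of the players involved:
\begin{verbatim}
# Count player-level motifs over possessions given as lists of
# (passer, receiver) pairs; per-game averages and z-scores follow.
import numpy as np
from collections import Counter

def count_player_motifs(possessions):
    counts = Counter()                          # keys: (player, motif)
    for passes in possessions:
        for i in range(len(passes) - 2):
            w = passes[i:i + 3]                 # three consecutive passes
            slots = [w[0][0], w[0][1], w[1][1], w[2][1]]
            for player in set(slots):
                counts[(player, motif_label(slots, player))] += 1
    return counts

toy = [[("xavi", "iniesta"), ("iniesta", "xavi"), ("xavi", "messi")]]
print(count_player_motifs(toy))                 # e.g. ('xavi', 'ABAC'): 1

def zscores(X):                                 # rows: players, cols: 15 motifs
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / X.std(axis=0)
\end{verbatim}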
Our analysis uses raw data for game events provided by Opta.
Data munging, model fitting, analysis, and chart plotting were performed using IPython \cite{ipython} and the python scientific stack \cite{numpy, matplotlib}.
\section{Analysis and results}
\subsection*{Summary statistics and motifs distributions}
A summary analysis of the passing motifs is shown in Table \ref{table:summary}. Perhaps unsurprisingly,
the maximum value for almost every single motif is reached by a player from FC Barcelona, the only exception being Yaya Touré.\footnote{Touré \textbf{did} play for FC Barcelona; however, our dataset only contains games in which he played for Manchester City. Conversely, we only have data for Thiago Alcántara as a Barcelona player, as our dataset does not include the German Bundesliga.} Figure \ref{fig:motif_dists} shows the frequency distributions of player values for every kind of motif, and the relative position of Xavi within those distributions.
\begin{table}[h]
\begin{tabular}{lrrrl}
\toprule
\textbf{Motif} & Mean & Std & Max & Player \\
\midrule
\textbf{ABAB} & 0.33 & 0.31 & 3.56 & Dani Alves \\
\textbf{ABAC} & 1.52 & 1.30 & 8.71 & Thiago Alcántara \\
\textbf{ABCA} & 0.90 & 0.73 & 5.99 & Xavi \\
\textbf{ABCB} & 1.53 & 1.08 & 7.69 & Sergio Busquets \\
\textbf{ABCD} & 6.03 & 3.62 & 25.53 & Jordi Alba \\
\textbf{BABA} & 0.33 & 0.29 & 2.72 & Lionel Messi \\
\textbf{BABC} & 1.53 & 1.07 & 7.33 & Xavi \\
\textbf{BACA} & 1.51 & 1.28 & 8.94 & Thiago Alcántara \\
\textbf{BACB} & 0.91 & 0.59 & 3.79 & Xavi \\
\textbf{BACD} & 6.01 & 4.17 & 27.21 & Xavi \\
\textbf{BCAB} & 0.91 & 0.58 & 3.93 & Yaya Touré \\
\textbf{BCAC} & 1.52 & 1.08 & 6.83 & Jordi Alba \\
\textbf{BCAD} & 6.00 & 4.11 & 28.89 & Xavi \\
\textbf{BCBA} & 1.53 & 1.03 & 8.29 & Sergio Busquets \\
\textbf{BCDA} & 6.01 & 3.47 & 23.64 & Dani Alves \\
\bottomrule
\end{tabular}
\caption{Motif average values and players with highest values}
\label{table:summary}
\end{table}
We can see how Xavi dominates the passing, being the player featuring the highest numbers in five out of the fifteen motifs. Table \ref{table:xavi_vals} shows all the values and z-scores for Xavi. It is indeed remarkable that he manages to be consistently over four standard deviations away from the average passing patterns, and particularly striking is his astonishing z-score of 6.95 in the \textbf{ABCA} motif, which corresponds to being the starting and finishing node of a triangulation. To put this number in context, if we were talking about random daily events, one would expect to observe such a strong deviation from the average approximately once every billion years!\footnote{From a very rigorous point of view, actual passing patterns are neither random nor normally distributed. Statistical technicalities notwithstanding, Xavi's z-scores are truly off the charts!}
\begin{table}[h]
\centering
\begin{tabular}{lrr}
\toprule
\textbf{Motif} & Value & z-score \\
\midrule
\textbf{ABAB} & 1.57 & 3.97 \\
\textbf{ABAC} & 8.67 & 5.49 \\
\textbf{ABCA} & 5.99 & 6.95 \\
\textbf{ABCB} & 7.12 & 5.19 \\
\textbf{ABCD} & 21.44 & 4.26 \\
\textbf{BABA} & 1.71 & 4.71 \\
\textbf{BABC} & 7.33 & 5.41 \\
\textbf{BACA} & 8.58 & 5.51 \\
\textbf{BACB} & 3.79 & 4.88 \\
\textbf{BACD} & 27.21 & 5.08 \\
\textbf{BCAB} & 3.27 & 4.06 \\
\textbf{BCAC} & 6.78 & 4.86 \\
\textbf{BCAD} & 28.89 & 5.57 \\
\textbf{BCBA} & 7.08 & 5.40 \\
\textbf{BCDA} & 23.03 & 4.90 \\
\bottomrule
\end{tabular}
\caption{Motif values and z-scores for Xavi}
\label{table:xavi_vals}
\end{table}
\subsection*{Clustering and PCA}
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{pca_plot_hist}
\caption{Principal Component Analysis, with labels for small AP clusters}
\label{fig:pca_plot_hists}
\end{figure}
Using the passing motifs means as feature vectors, we performed some clustering analysis on our player set.
The Affinity Propagation method with a damping coefficient of 0.9 yields a total of 37 clusters with varying numbers of players, listed in Table \ref{table:ap_clusters}, where a representative player for every cluster is also given. The explicit composition of each of the clusters of size smaller than 10 is shown in Table \ref{table:small_clusters}.
Once again we can observe how the passing style of Xavi is different enough from everyone else's that he gets assigned to a cluster of his own!
Figure \ref{fig:pca_plot_hists} shows the players' feature vectors, plotted using the first two components of a Principal Component Analysis (after applying a whitening transformation to eliminate correlation). The principal components' coefficients, together with their explained variance ratios, are listed in Table \ref{table:pca}. Looking at Figure \ref{fig:pca_plot_hists}, one can think of the first principal component (PC 1) as a measurement of overall involvement in the game, whereas the second principal component (PC 2) separates players depending on their positional involvement: high positive values highlight players playing on the wings and with a strong attacking involvement, while smaller values relate to a more purely defensive involvement. Special mention in this respect goes to Dani Alves and Jordi Alba, who, in spite of playing as fullbacks, display a passing distribution more similar to that of forwards than to that of other fullbacks. The plot also shows how Xavi has the highest value for overall involvement and a balanced split between offensive and defensive passing patterns.
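The clustering and the projection can be reproduced, at least in outline, with scikit-learn; in the sketch below the feature matrix X is a random placeholder standing in for the (players $\times$ 15) matrix of motif means.
\begin{verbatim}
# Affinity propagation (damping 0.9) and whitened 2-component PCA on the
# motif-mean feature vectors.  X is a random placeholder matrix here.
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.decomposition import PCA

X = np.random.default_rng(0).gamma(2.0, 1.0, size=(300, 15))

ap = AffinityPropagation(damping=0.9).fit(X)
print("clusters:", len(ap.cluster_centers_indices_))

pca = PCA(n_components=2, whiten=True).fit(X)
coords = pca.transform(X)                       # (PC 1, PC 2) per player
print("explained variance ratios:", pca.explained_variance_ratio_)
\end{verbatim}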
\begin{table}[h]
\centering
\begin{tabular}{lrr}
\toprule
{} & \textbf{PC 1} & \textbf{PC 2} \\
\midrule
\textbf{ABAB} & 0.030 & 0.065 \\
\textbf{ABAC} & 0.153 & -0.019 \\
\textbf{ABCA} & 0.084 & -0.031 \\
\textbf{ABCB} & 0.127 & -0.091 \\
\textbf{ABCD} & 0.437 & 0.150 \\
\textbf{BABA} & 0.027 & 0.051 \\
\textbf{BABC} & 0.114 & 0.257 \\
\textbf{BACA} & 0.150 & -0.040 \\
\textbf{BACB} & 0.070 & 0.043 \\
\textbf{BACD} & 0.514 & -0.451 \\
\textbf{BCAB} & 0.064 & 0.086 \\
\textbf{BCAC} & 0.107 & 0.323 \\
\textbf{BCAD} & 0.511 & -0.310 \\
\textbf{BCBA} & 0.123 & -0.062 \\
\textbf{BCDA} & 0.406 & 0.690 \\
\midrule
\textbf{Explained variance} & 0.917 & 0.046 \\
\bottomrule
\end{tabular}
\caption{Principal Components and their explained variance}
\label{table:pca}
\end{table}
\subsection*{Player distance and similarity}
\label{sub:player_distance_and_similarity}
Our feature vector can be used in order to define a measure of similarity between players.
Given a player $i$, let $\mathbf{v}_i$ denote the vector of z-scores in passing motifs for player $i$.
Our definition of \emph{distance} between two players $i$ and $j$ is simply the Euclidean distance between the corresponding (z-scores) feature vectors:
\[
d(i, j) := \left\| \mathbf{v}_i - \mathbf{v}_j \right\|_2 = \sqrt{\sum_{m\in\text{motifs}} (v_{i,m} - v_{j,m})^2}
\]
This distance can be used as a measure of similarity between players, allowing us to establish how closely related are the passing patterns of any two given players. In more concrete terms, the coefficient of similarity is defined by
\[
s(i, j) := \frac{1}{1 + d(i, j)}.
\]
This similarity score is always between 0 and 1, with 1 meaning that two players display an identical passing pattern.
The reason for choosing z-scores rather than raw values is to allow for a better comparison between different passing motifs, as using raw values would yield a distance dominated by the four motifs derived from \textbf{ABCD}, which show up with a frequency one order of magnitude higher than any other pattern's. Table \ref{table:dists} shows a summary of the average and minimum distances for all the players in our dataset, showing that for an average player we can reasonably expect to find another one at a distance of $0.826 \pm 0.5$.
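In code, the distance and similarity defined above amount to a few numpy lines; in the sketch below Z denotes the (players $\times$ 15) matrix of z-scores, and the toy matrix standing in for it is random.
\begin{verbatim}
# Pairwise z-score distance, similarity score, and closest peer.
import numpy as np

Z = np.random.default_rng(1).normal(size=(5, 15))    # placeholder z-scores

def distance(i, j):
    return float(np.linalg.norm(Z[i] - Z[j]))

def similarity(i, j):
    return 1.0 / (1.0 + distance(i, j))

def closest_peer(i):
    d = np.linalg.norm(Z - Z[i], axis=1)
    d[i] = np.inf                                    # exclude the player himself
    return int(d.argmin()), float(d.min())

print(similarity(0, 1), closest_peer(0))
\end{verbatim}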
\begin{table}[ht]
\centering
\begin{tabular}{lrr}
\toprule
{} & Mean & Closest \\
\midrule
Avg value & 4.471 & 0.826 \\
Std deviation & 1.800 & 0.500 \\
Min value & 3.188 & 0.178 \\
Max value & 19.960 & 5.134 \\
\bottomrule
\end{tabular}
\caption{Average and closest player distances.}
\label{table:dists}
\end{table}
An immediate application of this is to find out, for a given player, who his \emph{closest} peer is, that is, the player displaying the most similar passing pattern. Table \ref{table:player_min_dists} shows the minimum distances for the bottom 10 players (the ones with the smallest minimum distance, hence easier to replace) and the top 10 players (the ones with the highest minimum distance, thus harder to replace). Once again, we can see how the top 10 is dominated by FC Barcelona players.
\begin{table}[ht]
\centering
\begin{tabular}{lr|lr}
\toprule
Player & Closest & Player & Closest \\
\midrule
R Boakye & 0.18 & A Rangel & 3.08 \\
Tuncay & 0.18 & Neymar & 3.26 \\
J Arizmendi & 0.23 & Y Touré & 3.92 \\
J Roberts & 0.23 & T Alcántara & 3.92 \\
S Fletcher & 0.23 & A Iniesta & 4.27 \\
F Borini & 0.23 & J Alba & 4.48 \\
G Toquero & 0.24 & D Alves & 4.48 \\
Babá & 0.24 & Xavi & 4.49 \\
J Walters & 0.25 & L Messi & 5.09 \\
C Austin & 0.25 & S Busquets & 5.13 \\
\bottomrule
\end{tabular}
\caption{Players minimum distance (bottom 10 and top 10)}
\label{table:player_min_dists}
\end{table}
Note that in some cases the closest peer for a player happens to play for the same team, as is the case for Jordi Alba, whose closest peer is Dani Alves. We decided against restricting the closest-peer search to players from other teams, as it would make the analysis overly complicated due to the constant movement of players between teams.
The previous table shows that Xavi is amongst the hardest players to find a close replacement for. Table \ref{table:xavi} shows the 20 players closest to Xavi. Among those, no one has a similarity score higher than 18.2\%, and only ten players have a score higher than 10\%.
\begin{table}[h]
\centering
\begin{tabular}{lrr}
\toprule
{Player} & Distance & Similarity (\%) \\
\midrule
Yaya Touré & 4.495 & 18.199 \\
Thiago Alcántara & 5.835 & 14.631 \\
Sergio Busquets & 6.494 & 13.345 \\
Andrés Iniesta & 7.038 & 12.441 \\
Cesc Fàbregas & 7.377 & 11.938 \\
Jordi Alba & 7.396 & 11.910 \\
Toni Kroos & 7.853 & 11.296 \\
Mikel Arteta & 8.257 & 10.802 \\
Michael Carrick & 8.505 & 10.521 \\
Santiago Cazorla & 8.515 & 10.509 \\
Daley Blind & 9.154 & 9.849 \\
Paul Scholes & 9.240 & 9.765 \\
Gerard Piqué & 9.524 & 9.502 \\
David Silva & 9.640 & 9.398 \\
Marcos Rojo & 9.671 & 9.371 \\
Angel Rangel & 9.675 & 9.368 \\
Samir Nasri & 9.683 & 9.360 \\
Leon Britton & 9.797 & 9.261 \\
Aaron Ramsey & 9.821 & 9.241 \\
Martín Montoya & 9.846 & 9.220 \\
\bottomrule
\end{tabular}
\caption{Distances and similarity scores of the 20 players closest to Xavi.}
\label{table:xavi}
\end{table}
\section{Conclusions and future work}
\label{sec:conclusions}
We have shown how the flow motif analysis can be extended from teams to players. Although there is an added level of complexity arising from the increased number of different motifs, the resulting data do a good job of classifying and discriminating players. Clustering analysis provides a reasonable grouping of players with similar characteristics, and the similarity score provides a quantifiable measure of how similar any two players are. We believe these tools can be useful for scouting and for early talent detection if implemented properly.
For future work, we plan to expand our dataset to cover all the major European leagues over a longer time span. A larger dataset would allow us to measure changes in style over a player's career, and perhaps to isolate a \emph{team factor} that would let us estimate what a player's style would be if he were to switch teams. Another interesting thing to explore would be the density of each of the passing motifs according to pitch coordinates.
Coming back to our motivating question: who can replace Xavi at Barcelona? Amongst the ten players showing a similarity score higher than 10\%, three are already at Barcelona (Busquets, Iniesta and Jordi Alba), and another three used to play there but left (Touré, Alcántara and Fàbregas). Arteta, Carrick and Cazorla are all in their thirties, ruling them out as a long-term replacement, and Toni Kroos plays for Barcelona arch-rivals Real Madrid, making a move quite complicated (although not impossible, as current Barcelona manager Luis Enrique knows very well). The only choices for Barcelona thus seem to be either to bring back Alcántara or Fàbregas, or to convert Iniesta to play further away from the opposition box. A bolder move would be to sign the Dutch rising star Daley Blind (who used to play as a fullback, but has been tried as a midfielder over the last season in Van Gaal's Manchester United), hoping that the youngster could rise to the challenge.
Xavi's passing pattern stands out in every single metric we have used in our analysis. Isolated in his own cluster, and very far away from any other player, Xavi Hernández appears, according to all our data, to be literally one of a kind.
\begin{table}[h]
\centering
\begin{tabular}{lr}
\toprule
Representative Player & Cluster size \\
\midrule
Xavi & 1 \\
Dani Alves & 2 \\
Thiago Alcántara & 4 \\
David Silva & 4 \\
Gerard Piqué & 5 \\
Bacary Sagna & 6 \\
Isco & 8 \\
Chico & 10 \\
Mahamadou Diarra & 12 \\
Jonny Evans & 12 \\
Jordan Henderson & 15 \\
Andreu Fontás & 17 \\
Christian Eriksen & 17 \\
Hugo Mallo & 18 \\
Victor Wanyama & 19 \\
César Azpilicueta & 19 \\
Alberto Moreno & 20 \\
Gareth Bale & 20 \\
Fran Rico & 30 \\
David de Gea & 36 \\
Antolin Alcaraz & 36 \\
Phil Jagielka & 39 \\
Sebastian Larsson & 40 \\
Liam Ridgewell & 41 \\
Emmerson Boyce & 44 \\
Nyom & 46 \\
John Ruddy & 48 \\
Adam Johnson & 52 \\
Richmond Boakye & 57 \\
Chechu Dorado & 61 \\
Manuel Iturra & 62 \\
Loukas Vyntra & 62 \\
Kevin Gameiro & 72 \\
Borja & 73 \\
Rubén García & 85 \\
Gabriel Agbonlahor & 90 \\
Steven Fletcher & 113 \\
\bottomrule
\end{tabular}
\caption{Affinity propagation cluster sizes and representative players}
\label{table:ap_clusters}
\end{table}
\begin{table*}[h]
\centering
\begin{tabular}{ll}
\toprule
Size & Players \\
\midrule
1 & Xavi \\ \midrule
2 & Dani Alves, Jordi Alba \\ \midrule
4 & \pbox{15cm}{David Silva, Lionel Messi, \\
Samir Nasri, Santiago Cazorla} \\ \midrule
4 & \pbox{15cm}{Andrés Iniesta, Cesc Fàbregas, \\
Thiago Alcántara, Yaya Touré} \\ \midrule
5 & \pbox{15cm}{Daley Blind, Gerard Piqué, Javier Mascherano, \\
Sergio Busquets, Toni Kroos} \\ \midrule
6 & \pbox{15cm}{Adriano, Angel Rangel, Bacary Sagna, \\
Gaël Clichy, Marcelo, Martín Montoya} \\ \midrule
8 & \pbox{15cm}{Emre Can, Isco, James Rodríguez, Juan Mata, \\
Maicon, Mesut Özil, Michael Ballack, Ryan Mason} \\ \midrule
10 & \pbox{15cm}{Ashley Williams, Carles Puyol, Chico, Marc Bartra, Marcos Rojo, \\
Michael Carrick, Mikel Arteta, Nemanja Matic, Paul Scholes, Sergio Ramos} \\ \midrule
12 & \pbox{15cm}{Dejan Lovren, Garry Monk, John Terry, Jonny Evans, \\
Ki Sung-yueng, Matija Nastasic, Michael Essien, Morgan Schneiderlin, \\
Nabil Bentaleb, Per Mertesacker, Roberto Trashorras, Vincent Kompany} \\ \midrule
12 & \pbox{15cm}{Aaron Ramsey, Alexandre Song, Fernandinho, Gareth Barry, \\
Jerome Boateng, Jonathan de Guzmán, Leon Britton, Luka Modric, \\
Mahamadou Diarra, Mamadou Sakho, Steven Gerrard, Xabi Alonso} \\ \midrule
15 & \pbox{15cm}{Ander Herrera, Eric Dier, Frank Lampard, Ivan Rakitic, \\
Jamie O'Hara, Jordan Henderson, Michael Krohn-Dehli, Rafael van der Vaart, \\
Rafinha, Sascha Riether, Scott Parker, Seydou Keita, \\
Steven Davis, Vassiriki Abou Diaby, Wayne Rooney} \\
\bottomrule
\end{tabular}
\caption{Affinity propagation clustering: Composition of small clusters}
\label{table:small_clusters}
\end{table*}
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{pass_frequencies}
\caption{Passing motif distributions}
\label{fig:motif_dists}
\end{figure*}
\clearpage
\section{Introduction}\label{sec_intro}
Traditionally, billiards have been investigated from the point of
view of Ergodic Theory. That is, the properties that have been
studied, were the statistical properties with respect to the natural
invariant measure equivalent to the Lebesgue measure. However, it is
equally important to investigate the limit behavior of \emph{all}
trajectories and not only of \emph{almost all} of them. In
particular, periodic trajectories (which are of zero measure) are of
great interest. A widely used method in this context is to observe
that billiards in convex domains are twist maps (see, e.g.\
\cite{KH}, Section~9.2), so well developed rotation theory for twist
maps (see, e.g.\ \cite{KH}, Section~9.3) applies to them.
Rotation Theory has been recently developed further and its scope
has been significantly widened (see, e.g., Chapter~6 of \cite{ALM}
for a brief overview). This opens possibilities of its application
to new classes of billiards. In the general Rotation Theory one
considers a dynamical system together with an \emph{observable},
that is a function on the phase space, with values in a vector
space. Then one takes limits of ergodic averages of the observable
along longer and longer pieces of trajectories. The \emph{rotation
set} obtained in such a way contains all averages of the observable
along periodic orbits, and, by the Birkhoff Ergodic Theorem,
integrals of the observable with respect to all ergodic invariant
probability measures. With the natural choice of an observable,
information about the appropriate rotation set allows one to
describe the behavior of the trajectories of the system (see examples in
Chapter~6 of \cite{ALM}). Exact definitions are given later in the
paper.
We have to stress again that we consider \emph{all} trajectories.
Indeed, restricting attention to one ergodic measure would result in
seeing only one rotation vector. However, rotation vectors of other
points, non-typical for the measure, will be missing. Thus, the
approach to the billiards should be from the point of view of
Topological Dynamics, instead of Ergodic Theory, even though we do
consider various invariant measures for which rotation vector can be
computed. Observe that the rotation set for a suitably chosen
observable is a useful characteristic of the dynamical system.
In the simplest case, the observable is the increment in one step
(for discrete systems) or the derivative (for systems with
continuous time) of another function, called \emph{displacement}.
When the displacement is chosen in a natural way, the results on the
rotation set are especially interesting. In this paper we consider
two similar classes of billiards, and the observables which we use
for them are exactly of that type. One system consists of billiards
on an $m$-dimensional torus with one small convex obstacle which we
lift to the universal covering of the torus (that is, to the
Euclidean space) and consider the natural displacement there.
Those models constitute the rigorous mathematical formulation of the
so-called Lorentz gas dynamics with a periodic configuration of
obstacles. They are especially important for physicists doing
research in the foundations of nonequilibrium dynamics, since the
Lorentz gas serves as a good paradigm for nonequilibrium stationary
states, see the nice survey \cite{Dettmann}.
The other system consists of billiards in a square with one small convex
obstacle close to the center of the square; here we measure average
rotation around a chosen obstacle using the argument as the
displacement.
We treat both billiards as flows. This is caused by the fact that in
the lifting (or unfolding for a billiard in a square) we may have
infinite horizon, especially if the obstacle is small. In other
words, there are infinite trajectories without reflections, so when
considering billiards as maps, we would have to divide by zero.
Although this is not so bad by itself (infinity exists), we lose
compactness and cannot apply nice general machinery of the rotation
theory (see, e.g., \cite{Z}).
Note that in the case of a billiard in a square we have to deal with
trajectories that reflect from the vertices of the square. We can
think about such reflection as two infinitesimally close
reflections from two adjacent sides. Then it is clear that our
trajectory simply comes back along the same line on which it arrived at the
vertex, and that this does not destroy the continuity of the flow.
The ideas, methods, and results in both cases, the torus and the
square, are very similar. However, there are some important
differences, and, in spite of its two-dimensionali\-ty, in general the
square case is more complicated. Therefore we decided to treat the
torus case first (in Sections~\ref{sec_prelim}, \ref{sec_rotset}
and~\ref{sec_arlarge}) and then, when we describe the square case
(in Section~\ref{sec_square}), we describe the differences from the
torus case, without repeating the whole proofs. We believe that this
type of exposition is simpler for a reader than the one that treats
both cases simultaneously or the one that produces complicated
abstract theorems that are then applied in both cases. In
Section~\ref{sec_conn} we get additionally some general results,
applicable also to other situations.
Let us describe shortly the main results of the paper. The exact
definitions will be given later. Let us only note that the
\emph{admissible rotation set} is a subset of the full rotation set,
about which we can prove much stronger results than about the full
rotation set. Also, a \emph{small} obstacle does not mean
``arbitrarily small'' one. We derive various estimates of the size
of the admissible rotation set. In the torus case the estimates that
are independent of the dimension are non-trivial because of the
behavior of the geometry of $\mathbb{R}^m$ as $m\to\infty$. In both cases we
show that the admissible rotation set approximates better and better
the full rotation set when the size of the obstacle diminishes.
We prove that in both cases, the torus and the square, if the
obstacle is small, then the admissible rotation set is convex,
rotation vectors of periodic orbits are dense in it, and if $u$ is a
vector from its interior, then there exists a trajectory with
rotation vector $u$ (and even an ergodic invariant measure, for
which the integral of the velocity is equal to $u$, so that $u$ is
the rotation vector of almost every trajectory). The full rotation
set is connected, and in the case of the square, is equal to the
interval $[-\sqrt{2}/4,\sqrt{2}/4]$.
We conjecture that the full rotation set shares the strong
properties of the admissible rotation set.
\section{Preliminary results - torus}\label{sec_prelim}
Let us consider a billiard on the $m$-dimensional torus
$\mathbb{T}^m=\mathbb{R}^m/\mathbb{Z}^m$ ($m\ge 2$) with one strictly convex (that is, it is
convex and its boundary does not contain any straight line segment)
obstacle $O$ with a smooth boundary.
We do not specify explicitly how large the obstacle is, but let us
think about it as a rather small one. When we lift the whole picture
to $\mathbb{R}^m$ then we get a family of obstacles $O_{\mathbf{k}}$, where ${\mathbf{k}}\in\mathbb{Z}^m$
and $O_{\mathbf{k}}$ is $O_{\mathbf{0}}$ translated by the vector ${\mathbf{k}}$.
When we speak about a
\emph{trajectory}, we mean a positive (one-sided) billiard
trajectory, unless we explicitly say that it is a full (two-sided)
one. However, we may mean a trajectory in the phase space, in the
configuration space (on the torus), or in the lifting or unfolding
(the Euclidean space). It will be usually clear from the context,
which case we consider.
We will say that the obstacle $O_{\mathbf{k}}$ is \emph{between} $O_{\mathbf{i}}$ and $O_{\mathbf{j}}$
if it intersects the convex hull of $O_{\mathbf{i}}\cup O_{\mathbf{j}}$ and ${\mathbf{k}}\ne{\mathbf{i}},{\mathbf{j}}$. For
a trajectory $P$, beginning on a boundary of $O$,
its \emph{type} is a sequence $({\mathbf{k}}_n)_{n=0}^\infty$ of elements of
$\mathbb{Z}^m$ if the continuous lifting of $P$ to $\mathbb{R}^m$ that starts at the
boundary of $O_{{\mathbf{k}}_0}$ reflects consecutively from $O_{{\mathbf{k}}_n}$,
$n=1,2,\dots$. In order to make the type unique for a given $P$, we
will additionally assume that ${\mathbf{k}}_0={\mathbf{0}}$. Note that except for the case
when $P$ at its initial point is tangent to $O$, there are
infinitely many reflections, so the type of $P$ is well defined. This
follows from the following lemma. In it, we do not count tangency as a
reflection.
\begin{lemma}\label{infref}
If a trajectory has one reflection then it has infinitely
many reflections.
\end{lemma}
\begin{proof}
Suppose that a trajectory has a reflection, but there are only
finitely many of them. Then we can start the trajectory from the last
reflection. Its $\omega$-limit set is an affine subtorus of $\mathbb{T}^m$ and
the whole (positive) trajectory is contained in this subtorus (and it
is dense there). Since
we started with a reflection, this subtorus intersects the interior of
the obstacle. Since the trajectory is dense in this subtorus, we get a
contradiction.
\end{proof}
Of course, there may be trajectories without any reflections. In
particular, if the obstacle is contained in a ball of radius less than
$1/2$, there are such trajectories in the direction of the basic unit
vectors.
Sometimes we will speak about the \emph{type} of a piece of a
trajectory; then it is a finite sequence. We will also use the term
\emph{itinerary}.
We will call a sequence $({\mathbf{k}}_n)_{n=0}^\infty$ of elements of $\mathbb{Z}^m$
\emph{admissible} if
\begin{enumerate}
\item\label{adm1} ${\mathbf{k}}_0={\mathbf{0}}$,
\item\label{adm2} for every $n$ we have ${\mathbf{k}}_{n+1}\ne {\mathbf{k}}_n$,
\item\label{adm3} for every $n$ there is no obstacle between $O_{{\mathbf{k}}_n}$ and
$O_{{\mathbf{k}}_{n+1}}$,
\item\label{adm4} for every $n$ the obstacle $O_{{\mathbf{k}}_{n+1}}$ is not between
$O_{{\mathbf{k}}_n}$ and $O_{{\mathbf{k}}_{n+2}}$.
\end{enumerate}
\begin{theorem}\label{thmadm}
For any admissible sequence $({\mathbf{k}}_n)_{n=0}^\infty$ there is a trajectory
with type $({\mathbf{k}}_n)_{n=0}^\infty$. If additionally there is ${\mathbf{p}}\in\mathbb{Z}^m$
and a positive integer $q$ such that ${\mathbf{k}}_{n+q}={\mathbf{k}}_n+{\mathbf{p}}$ for every $n$
then this trajectory can be chosen periodic of discrete period $q$
(that is, after $q$ reflections we come back to the starting point in
the phase space). Similarly, for any admissible sequence
$({\mathbf{k}}_n)_{n=-\infty}^\infty$ there is a trajectory with type
$({\mathbf{k}}_n)_{n=-\infty}^\infty$.
\end{theorem}
\begin{proof}
Fix $n$. For every sequence $A=({\mathbf{x}}_i)_{i=0}^n$ such that ${\mathbf{x}}_i$ belongs
to the boundary of $O_{{\mathbf{k}}_i}$ for $i=0,1,\dots,n$, let $\Gamma(A)$ be
the curve obtained by joining consecutive points ${\mathbf{x}}_i$ by straight
segments (such a curve may intersect interiors of some obstacles).
Since the Cartesian product of the boundaries of $O_{{\mathbf{k}}_i}$
is compact and the length of $\Gamma(A)$ depends continuously on $A$,
there is an $A$ for which this length is minimal.
We claim that in such a case $\Gamma(A)$ is a piece of a
trajectory. By (\ref{adm3}), the segment $I_i$ joining ${\mathbf{x}}_i$ with
${\mathbf{x}}_{i+1}$ cannot intersect any obstacle except $O_{{\mathbf{k}}_i}$ and
$O_{{\mathbf{k}}_{i+1}}$. If it intersects $O_{{\mathbf{k}}_i}$ at more than one point, it
intersects its boundary at ${\mathbf{x}}_i$ and at another point ${\mathbf{y}}$. Then
replacing ${\mathbf{x}}_i$ by $y$ will make $\Gamma(A)$ shorter, a contradiction.
This argument does not work only if ${\mathbf{x}}_{i-1}$, ${\mathbf{x}}_i$ and ${\mathbf{x}}_{i+1}$ are
collinear and ${\mathbf{x}}_i$ lies between ${\mathbf{x}}_{i-1}$ and ${\mathbf{x}}_{i+1}$. However,
such situation is excluded by (\ref{adm4}). This proves that $I_i$
does not intersect $O_{{\mathbf{k}}_i}$ at more than one point. Similarly, it
does not intersect $O_{{\mathbf{k}}_{i+1}}$ at more than one point. Now the known
property of curves with minimal lengths guarantees that at every
${\mathbf{x}}_i$, $i=1,2,\dots,n-1$, the incidence and reflection angles are
equal. This proves our claim. For a two-sided sequence
$({\mathbf{k}}_n)_{n=-\infty}^\infty$ the argument is very similar.
Now we make this construction for every $n$ and get a sequence
$(A_n)_{n=1}^\infty$ of pieces of trajectories. We note their
initial points in the phase space (points and directions) and choose a
convergent subsequence of those. Then the trajectory of this limit
point in the phase space will have the prescribed type.
If there is ${\mathbf{p}}\in\mathbb{Z}^m$ and a positive integer $q$ such that
${\mathbf{k}}_{n+q}={\mathbf{k}}_n+{\mathbf{p}}$ for every $n$, then we consider only the sequence
$A=({\mathbf{x}}_i)_{i=0}^{q-1}$ and repeat the first part of the above proof
adding the segment joining ${\mathbf{x}}_{q-1}$ with ${\mathbf{x}}_0+{\mathbf{p}}$ to $\Gamma(A)$.
\end{proof}
Note that by Corollary~1.2 of \cite{Ch82}, if the obstacle is strictly
convex then a periodic orbit from the above theorem is unique.
The next lemma essentially expresses the fact that any billiard flow
with convex obstacles lacks focal points. It follows from the corollary
after Lemma~2 of \cite{St89}. The types of trajectory pieces about which we
speak in this lemma are not necessarily admissible.
\begin{lemma}\label{unique}
For a given finite sequence $B=({\mathbf{k}}_n)_{n=0}^s$ of elements of $\mathbb{Z}^m$
and points ${\mathbf{x}}_0,{\mathbf{x}}_s$ on the boundaries of $O_{{\mathbf{k}}_0}$ and $O_{{\mathbf{k}}_s}$
respectively, there is at most one trajectory piece of type $B$
starting at ${\mathbf{x}}_0$ and ending at ${\mathbf{x}}_s$. The same remains true if we
allow the first segment of the trajectory piece to cross $O_{{\mathbf{k}}_0}$ and
the last one to cross $O_{{\mathbf{k}}_s}$ (as in Figure~\ref{cross}).
\end{lemma}
\begin{figure}\refstepcounter{figure}\label{cross}\addtocounter{figure}{-1}
\begin{center}
\includegraphics[width=2truein]{cross}
\caption{A trajectory piece crossing the first and last obstacles}
\end{center}
\end{figure}
\begin{corollary}\label{shortest}
If the trajectory piece from Lemma~\ref{unique} exists and has
admissible type, then it is the
shortest path of type $B$ starting at ${\mathbf{x}}_0$ and ending at ${\mathbf{x}}_s$.
\end{corollary}
\begin{proof}
Similarly as in the proof of Theorem~\ref{thmadm}, the shortest path
of type $B$ from ${\mathbf{x}}_0$ to ${\mathbf{x}}_s$ is a trajectory piece (here we allow
the first segment of the trajectory piece to cross $O_{{\mathbf{k}}_0}$ and the
last one to cross $O_{{\mathbf{k}}_s}$). By Lemma~\ref{unique}, it is equal to
the trajectory piece from that lemma.
\end{proof}
Of course this trajectory piece depends on ${\mathbf{x}}_0$ and ${\mathbf{x}}_s$. However,
its length depends on those two points only up to an additive
constant. Denote by $c$ the diameter of $O$.
\begin{lemma}\label{const}
For every admissible finite sequence $B=({\mathbf{k}}_n)_{n=0}^s$ of elements of $\mathbb{Z}^m$ the
lengths of trajectory pieces of type $B$ (even if we allow them to
cross $O_{{\mathbf{k}}_0}$ and $O_{{\mathbf{k}}_s}$) differ by at most $2c$.
The displacements along those trajectory pieces also differ by at most
$2c$.
\end{lemma}
\begin{proof}
Let $\Gamma$ and $\Gamma'$ be two such trajectory pieces, joining
${\mathbf{x}}_0$ with ${\mathbf{x}}_s$ and ${\mathbf{y}}_0$ with ${\mathbf{y}}_s$ respectively, where ${\mathbf{x}}_0,{\mathbf{y}}_0$
belong to the boundary of $O_{{\mathbf{k}}_0}$ and ${\mathbf{x}}_s,{\mathbf{y}}_s$ belong to the
boundary of $O_{{\mathbf{k}}_s}$. Replace the first segment of $\Gamma$ by adding
to it the segment joining ${\mathbf{x}}_0$ with ${\mathbf{y}}_0$, and do similarly with the
last segment of $\Gamma$. Then we get a path joining ${\mathbf{y}}_0$ with ${\mathbf{y}}_s$
of type $B$. By Corollary~\ref{shortest}, its length is not smaller
than the length of $\Gamma'$. On the other hand, its length is not
larger than the length of $\Gamma$ plus $2c$. Performing the same
construction with the roles of $\Gamma$ and $\Gamma'$ reversed, we
conclude that the difference of the lengths of those two paths is not
larger than $2c$.
The second statement of the lemma is obvious.
\end{proof}
One can look at the definition of an admissible sequence in the
following way. Instead of a sequence $({\mathbf{k}}_n)_{n=0}^\infty$ of elements
of $\mathbb{Z}^m$ we consider the sequence $({\mathbf{l}}_n)_{n=1}^\infty$, where
${\mathbf{l}}_n={\mathbf{k}}_n-{\mathbf{k}}_{n-1}$. Since ${\mathbf{k}}_0={\mathbf{0}}$, knowing
$({\mathbf{l}}_n)_{n=1}^\infty$ we can recover $({\mathbf{k}}_n)_{n=0}^\infty$. Now,
condition (\ref{adm3}) can be restated as no obstacle between
$O_{\mathbf{0}}$ and $O_{{\mathbf{l}}_n}$, and condition (\ref{adm4}) as the obstacle
$O_{{\mathbf{l}}_n}$ not between $O_{\mathbf{0}}$ and $O_{{\mathbf{l}}_n+{\mathbf{l}}_{n+1}}$. Let
$G$ be the directed graph whose vertices are those
${\mathbf{j}}\in\mathbb{Z}^m\setminus\{{\mathbf{0}}\}$ for which there is no obstacle between
$O_{\mathbf{0}}$ and $O_{\mathbf{j}}$, and there is an edge (arrow) from ${\mathbf{j}}$ to ${\mathbf{i}}$ if
and only if $O_{\mathbf{j}}$ is not between $O_{\mathbf{0}}$ and $O_{{\mathbf{j}}+{\mathbf{i}}}$. Then every
sequence $({\mathbf{l}}_n)_{n=1}^\infty$ obtained from an admissible sequence
is a one-sided infinite path in $G$, and vice versa, each one-sided
infinite path in $G$ is a sequence $({\mathbf{l}}_n)_{n=1}^\infty$ obtained
from an admissible sequence. Hence, we can speak about paths
\emph{corresponding} to admissible sequences and admissible sequences
\emph{corresponding} to paths.
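Although nothing in the argument below depends on it, the graph $G$ is easy to list explicitly in simple situations. For instance, in the planar case $m=2$ with a round obstacle of radius $r<\sqrt{2}/4$ centered at each lattice point, ``$O_{\mathbf{k}}$ between $O_{\mathbf{i}}$ and $O_{\mathbf{j}}$'' reduces to a point-to-segment distance test (the convex hull of two disks of radius $r$ is the set of points within distance $r$ of the segment joining their centers), so $G$ can be computed by brute force over a bounded window of lattice points, as in the following illustrative sketch (the radius and window size are arbitrary choices of ours).
\begin{verbatim}
# Illustration only: list the graph G for m = 2 and a disk obstacle of
# radius r < sqrt(2)/4 at each lattice point, over a finite window.
import itertools
import numpy as np

r = 0.2

def seg_dist(p, a, b):                 # distance from point p to segment [a, b]
    p, a, b = np.array(p, float), np.array(a, float), np.array(b, float)
    den = float(np.dot(b - a, b - a))
    t = 0.0 if den == 0 else min(max(np.dot(p - a, b - a) / den, 0.0), 1.0)
    return float(np.linalg.norm(p - (a + t * (b - a))))

def between(k, i, j):                  # True iff O_k meets conv(O_i u O_j), k != i, j
    return k != i and k != j and seg_dist(k, i, j) <= 2 * r

window = [v for v in itertools.product(range(-3, 4), repeat=2) if v != (0, 0)]
vertices = [k for k in window
            if not any(between(l, (0, 0), k) for l in window)]
edges = [(k, j) for k in vertices for j in vertices
         if not between(k, (0, 0), (k[0] + j[0], k[1] + j[1]))]
print(len(vertices), "vertices;", len(edges), "edges")
\end{verbatim}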
\begin{lemma}\label{fingr}
The set of vertices of $G$ is finite.
\end{lemma}
\begin{proof}
Fix an interior point ${\mathbf{x}}$ of $O_{\mathbf{0}}$. By Lemma~\ref{infref}, any ray
beginning at ${\mathbf{x}}$ intersects the interior of some $O_{\mathbf{k}}$ with ${\mathbf{k}}\ne{\mathbf{0}}$.
Let $V_{\mathbf{k}}$ be the set of directions (points of the unit sphere) for
which the corresponding ray intersects the interior of $O_{\mathbf{k}}$. This set
is open, so we get an open cover of a compact unit sphere. It has a
finite subcover, so there exists a constant $M>0$ such that every ray
from ${\mathbf{x}}$ of length $M$ intersects the interior of some $O_{\mathbf{k}}$ with
${\mathbf{k}}\ne{\mathbf{0}}$. This proves that the set of vertices of $G$ is finite.
\end{proof}
Note that in $G$ there is never an edge from a vertex to itself.
Moreover, there is a kind of symmetry in $G$. Namely, if ${\mathbf{k}}$ is a
vertex then $-{\mathbf{k}}$ is a vertex; there is an edge from ${\mathbf{k}}$ to $-{\mathbf{k}}$; and
if there is an edge from ${\mathbf{k}}$ to ${\mathbf{j}}$ then there is an edge from $-{\mathbf{j}}$ to
$-{\mathbf{k}}$.
The following lemma establishes another symmetry in $G$.
\begin{lemma}\label{st2}
If ${\mathbf{k}},{\mathbf{j}}\in\mathbb{Z}^m$ and $O_{\mathbf{k}}$ is between $O_{\mathbf{0}}$ and $O_{{\mathbf{k}}+{\mathbf{j}}}$, then
$O_{\mathbf{j}}$ is also between $O_{\mathbf{0}}$ and $O_{{\mathbf{k}}+{\mathbf{j}}}$. Thus, if there is an
edge in $G$ from ${\mathbf{k}}$ to ${\mathbf{j}}$ then there is an edge from ${\mathbf{j}}$ to ${\mathbf{k}}$.
\end{lemma}
\begin{proof}
The map $f({\mathbf{x}})={\mathbf{k}}+{\mathbf{j}}-{\mathbf{x}}$ is an isometry of $\mathbb{R}^m$ mapping $\mathbb{Z}^m$ onto itself, and
$f(O_{\mathbf{0}})=O_{{\mathbf{k}}+{\mathbf{j}}}$, $f(O_{{\mathbf{k}}+{\mathbf{j}}})=O_{\mathbf{0}}$, $f(O_{\mathbf{k}})=O_{\mathbf{j}}$. This proves
the first statement of the lemma. The second statement follows from
the first one and from the definition of edges in $G$.
\end{proof}
We will say that the obstacle $O$ is \emph{small} if it is contained
in a closed ball of radius smaller than $\sqrt{2}/4$. To simplify the
notation, in the rest of the paper, whenever the obstacle is small, we
will be using the lifting to $\mathbb{R}^m$ such that the centers of the balls
of radii smaller than $\sqrt{2}/4$ containing the obstacles will be at
the points of $\mathbb{Z}^m$.
Denote by $U$
the set of unit vectors from $\mathbb{Z}^m$ (that is, the vectors with one
component equal to $\pm 1$ and all remaining components equal to $0$), and by $A_m$ the set
$\{-1,\,0,\,1\}^m\setminus\{{\mathbf{0}}\}$ (we attach the subscript $m$ to $A$,
since this set will sometimes be used when we consider all dimensions
at once). In particular, $U\subset A_m$.
\begin{lemma}\label{scalar}
Let $O$ be small. If ${\mathbf{k}},{\mathbf{l}}\in\mathbb{Z}^m\setminus\{{\mathbf{0}}\}$ and $\langle
{\mathbf{k}},{\mathbf{l}}\rangle\le 0$, then $O_{\mathbf{k}}$ is not between $O_{\mathbf{0}}$ and
$O_{{\mathbf{k}}+{\mathbf{l}}}$. In particular, if ${\mathbf{k}}$ and ${\mathbf{l}}$ are vertices of $G$ and
$\langle{\mathbf{k}},{\mathbf{l}}\rangle\le 0$, then there are edges in $G$ from ${\mathbf{k}}$ to
${\mathbf{l}}$ and from ${\mathbf{l}}$ to ${\mathbf{k}}$.
\end{lemma}
\begin{proof}
We will use elementary geometry. Consider the triangle with vertices
$A={\mathbf{0}}$, $B={\mathbf{k}}$ and $C={\mathbf{k}}+{\mathbf{l}}$. The angle at the vertex $B$ is at
most $\pi/2$, and the lengths of the sides $AB$ and $BC$ are at least
1. We need to construct a straight line which separates the plane $P$
in which the triangle $ABC$ lies into two half-planes with the first
one containing the open disk of radius $\sqrt{2}/4$ centered at $B$
and the second one containing such disks centered at $A$ and $C$. Then
the hyperplane of dimension $m-1$ through this line and perpendicular
to the plane $P$ will separate $O_{\mathbf{k}}$ from $O_{\mathbf{0}}$ and $O_{{\mathbf{k}}+{\mathbf{l}}}$.
This will prove that there is an edge in $G$ from ${\mathbf{k}}$ to ${\mathbf{l}}$. By
Lemma~\ref{st2}, there will be also an edge in $G$ from ${\mathbf{l}}$ to ${\mathbf{k}}$.
Let $D$ and $E$ be the points on the sides $BA$ and $BC$ respectively,
whose distance from $B$ is $1/2$ and let $L$ be the straight line
through $D$ and $E$. Since the angle at the vertex $B$ is at most
$\pi/2$, the distance of $B$ from $L$ is at least $\sqrt{2}/4$. Since
$|AD|\ge|BD|$ and $|CE|\ge|BE|$, the distances of $A$ and $C$ from $L$
are at least as large as the distance of $B$ from $L$. This completes
the proof.
\end{proof}
\begin{lemma}\label{am}
For a billiard on a torus with a small obstacle, all elements of $A_m$
are vertices of $G$.
\end{lemma}
\begin{proof}
Let ${\mathbf{u}}\in A_m$ and ${\mathbf{v}}\in\mathbb{Z}^m\setminus\{{\mathbf{0}},{\mathbf{u}}\}$. If
${\mathbf{v}}=(v_1,v_2,\dots,v_m)$ then $|\langle{\mathbf{v}},{\mathbf{u}}\rangle|\le\sum_{i=1}^m
|v_i|\le\|{\mathbf{v}}\|^2$, so $\langle{\mathbf{v}},{\mathbf{u}}-{\mathbf{v}}\rangle\le 0$. Therefore, by
Lemma~\ref{scalar}, $O_{\mathbf{v}}$ is not between $O_{\mathbf{0}}$ and $O_{\mathbf{u}}$. This proves
that ${\mathbf{u}}$ is a vertex of $G$.
\end{proof}
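For instance, for $m=3$, ${\mathbf{u}}=(1,1,1)$ and ${\mathbf{v}}=(2,1,0)$ the above estimate reads $|\langle{\mathbf{v}},{\mathbf{u}}\rangle|=3\le 3\le 5=\|{\mathbf{v}}\|^2$, so indeed $\langle{\mathbf{v}},{\mathbf{u}}-{\mathbf{v}}\rangle=3-5\le 0$.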
\begin{lemma}\label{small}
Assume that $O$ is small. Then $G$ is connected, and for any two
vertices ${\mathbf{k}},{\mathbf{l}}$ of $G$ there is a path of length at most $3$ from
${\mathbf{k}}$ to ${\mathbf{l}}$ in $G$, via elements of $U$.
\end{lemma}
\begin{proof}
By Lemma~\ref{am}, the set of vertices of $G$ contains $U$. Let
${\mathbf{k}},{\mathbf{l}}$ be vertices of $G$. Then ${\mathbf{k}},{\mathbf{l}}\ne{\mathbf{0}}$, so there exist
elements ${\mathbf{u}},{\mathbf{v}}$ of $U$ such that $\langle{\mathbf{k}},{\mathbf{u}}\rangle\le 0$ and
$\langle{\mathbf{l}},{\mathbf{v}}\rangle\le 0$. By Lemma~\ref{scalar}, there are edges
in $G$ from ${\mathbf{k}}$ to ${\mathbf{u}}$ and from ${\mathbf{v}}$ to ${\mathbf{l}}$. If ${\mathbf{u}}={\mathbf{v}}$ then ${\mathbf{k}}{\mathbf{u}}{\mathbf{l}}$
is a path of length 2 from ${\mathbf{k}}$ to ${\mathbf{l}}$. If ${\mathbf{u}}\ne{\mathbf{v}}$ then $\langle
{\mathbf{u}},{\mathbf{v}}\rangle=0$, so by Lemma~\ref{scalar} there is an edge from ${\mathbf{u}}$ to
${\mathbf{v}}$. Then ${\mathbf{k}}{\mathbf{u}}{\mathbf{v}}{\mathbf{l}}$ is a path of length 3 from ${\mathbf{k}}$ to ${\mathbf{l}}$.
\end{proof}
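For instance, if for $m=2$ the vectors ${\mathbf{k}}=(2,1)$ and ${\mathbf{l}}=(1,-3)$ are vertices of $G$, we can take ${\mathbf{u}}=(-1,0)$ and ${\mathbf{v}}=(0,1)$: then $\langle{\mathbf{k}},{\mathbf{u}}\rangle=-2\le 0$, $\langle{\mathbf{l}},{\mathbf{v}}\rangle=-3\le 0$ and $\langle{\mathbf{u}},{\mathbf{v}}\rangle=0$, so ${\mathbf{k}}{\mathbf{u}}{\mathbf{v}}{\mathbf{l}}$ is a path of length $3$ from ${\mathbf{k}}$ to ${\mathbf{l}}$ in $G$.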
\section{Rotation set - torus}\label{sec_rotset}
Now we have enough information in order to start investigating the
\emph{rotation set} $R$ of our billiard. It consists of limits of the sequences
$(({\mathbf{y}}_n-{\mathbf{x}}_n)/t_n)_{n=1}^\infty$, where there is a trajectory piece in
the lifting from
${\mathbf{x}}_n$ to ${\mathbf{y}}_n$ of length $t_n$, and $t_n$ goes to infinity. Since we
have much larger control of pieces of trajectories of admissible type,
we introduce also the \emph{admissible rotation set} $AR$, where in
the definition we consider only such pieces. Clearly, the admissible
rotation set is contained in the rotation set. By the definition, both
sets are closed. It is also clear that they are contained in the
closed unit ball in $\mathbb{R}^m$, centered at the origin. Due to the
time-reversibility, both sets $R$ and $AR$ are centrally symmetric
with respect to the origin.
For a given point $p$ in the phase space let us consider the
trajectory $t\mapsto T(t)$ in $\mathbb{R}^m$ starting at $p$. We
can ask whether the limit of $(T(t)-T(0))/t$, as $t$ goes to infinity,
exists. If it does, we will call it the \emph{rotation vector} of $p$.
Clearly, it is the same for every point in the phase space of the full
trajectory of $p$, so we can speak of the \emph{rotation vector of a
trajectory}. In particular, every periodic orbit has a rotation vector,
and it is equal to $(T(s)-T(0))/s$, where $s$ is the period of the
orbit.
Note that if we use the discrete time (the number of reflections)
rather than continuous time, we would get all good properties of the
admissible rotation set from the description of the admissible
sequences via the graph $G$ and the results of \cite{Z}. Since we are
using continuous time, the situation is more complicated.
Nevertheless, Lemma~\ref{const} allows us to get similar results. For
a trajectory piece $T$ we will denote by $|T|$ its length and by
${\mathbf{d}}(T)$ its displacement.
\begin{theorem}\label{convex}
The admissible rotation set of a billiard on a torus with a small
obstacle is convex.
\end{theorem}
\begin{proof}
Fix vectors ${\mathbf{u}},{\mathbf{v}}\in AR$ and a number $t\in(0,1)$. We want to show that
the vector $t{\mathbf{u}}+(1-t){\mathbf{v}}$ belongs to $AR$. Fix $\varepsilon>0$. By the
definition, there are finite admissible sequences $A,B$ and trajectory pieces
$T,S$ of type $A,B$ respectively, such that
\begin{equation}\label{eq2}
\left\|\frac{{\mathbf{d}}(T)}{|T|}-{\mathbf{u}}\right\|<\varepsilon \textrm{\ \ and\ \ } \left\|\frac{{\mathbf{d}}(S)}
{|S|}-{\mathbf{v}}\right\|<\varepsilon.
\end{equation}
Both $A,B$ can be represented as finite paths in the graph $G$. By
Lemma~\ref{small}, there are admissible sequences $C_1,C_2,C_3$
represented in $G$ as paths of length at most 3, via elements of $U$,
such that the concatenations of the form
$$D=AC_1AC_1\dots AC_1AC_2BC_3BC_3\dots BC_3B$$
are admissible. There exists a trajectory piece $Q$ of type $D$. We
will estimate its displacement and length.
Assume that in $D$ the block $A$ appears $p$ times and the block $B$
appears $q-p$ times. Let ${\mathbf{d}}_A,{\mathbf{d}}_B$ be the total displacements due to
the blocks $A,B$ respectively. We get
$$\|{\mathbf{d}}_A-p{\mathbf{d}}(T)\|\le 2pc \textrm{\ \ and\ \ } \|{\mathbf{d}}_B-(q-p){\mathbf{d}}(S)\|\le 2(q-p)c.$$
The displacement due to each of the blocks $C_1,C_2,C_3$ is at most of
norm $2+2c$, so the total displacement due to all those blocks is
at most of norm $q(2+2c)$. If we replace all displacements by the
trajectory lengths, we get the same estimates (we use here
Lemma~\ref{const}). Thus we get the following estimates:
$$\|{\mathbf{d}}(Q)-{\mathbf{\alpha}}\|\le 4qc+2q \textrm{\ \ and\ \ }
\big||Q|-\beta\big|\le 4qc+2q,$$
where
$${\mathbf{\alpha}}=p{\mathbf{d}}(T)+(q-p){\mathbf{d}}(S) \textrm{\ \ and\ \ } \beta=p|T|+(q-p)|S|.$$
Therefore
\begin{eqnarray}\label{eq3}
\left\|\frac{{\mathbf{d}}(Q)}{|Q|}-\frac{{\mathbf{\alpha}}}{\beta}\right\|
&\le&\left\|\frac{{\mathbf{d}}(Q)}{|Q|}-\frac{{\mathbf{\alpha}}}{|Q|}\right\|
+\left\|\frac{{\mathbf{\alpha}}}{|Q|}-\frac{{\mathbf{\alpha}}}{\beta}\right\|\nonumber\\
&\le&\frac{4qc+2q}{|Q|}+\|{\mathbf{\alpha}}\|\frac{4qc+2q}{|Q|\beta}\\
&=&(4c+2)\frac{q}{|Q|}\left(1+\frac{\|{\mathbf{\alpha}}\|}{\beta}\right).
\nonumber
\end{eqnarray}
Set $s=p|T|/\beta$. Then $1-s=(q-p)|S|/\beta$, so
\begin{equation}\label{eq4}
\frac{{\mathbf{\alpha}}}{\beta}=s\frac{{\mathbf{d}}(T)}{|T|}+(1-s)\frac{{\mathbf{d}}(S)}{|S|}.
\end{equation}
By (\ref{eq2}), we get
$$\frac{\|{\mathbf{\alpha}}\|}{\beta}\le\max(\|{\mathbf{u}}\|,\|{\mathbf{v}}\|)+\varepsilon.$$
Moreover,
$$\frac{|Q|}{q}\ge\frac{\beta}{q}-(4c+2)\ge\min(|T|,|S|)-
(4c+2).$$
Therefore if $|T|$ and $|S|$ are sufficiently large (we may assume
this), the right-hand side of (\ref{eq3}) is less than $\varepsilon$.
Together with (\ref{eq4}), we get
$$\left\|\frac{{\mathbf{d}}(Q)}{|Q|}-\left(s\frac{{\mathbf{d}}(T)}{|T|}+(1-s)\frac{{\mathbf{d}}(S)}
{|S|}\right)\right\|<\varepsilon.$$
By this inequality and (\ref{eq2}), it remains to show that by the
right choice of $p,q$ we can approximate $t$ by $s$ with an arbitrary
accuracy.
We can write $s=f(x)$, where $x=p/q$ and
$$f(x)=\frac{|T|x}{|T|x+|S|(1-x)}.$$
The function $f$ is continuous on $[0,1]$, takes value 0 at 0 and
value 1 at 1. Therefore the image of the set of rational numbers from
$(0,1)$ is dense in $[0,1]$. This completes the proof.
\end{proof}
\begin{theorem}\label{perdense}
For a billiard on a torus with a small obstacle, rotation vectors of
periodic orbits of admissible type are dense in the admissible
rotation set.
\end{theorem}
\begin{proof}
Fix a vector ${\mathbf{u}}\in AR$ and $\varepsilon>0$. We want to find a periodic orbit
of admissible type whose rotation vector is in the $\varepsilon$-neighborhood
of ${\mathbf{u}}$. By the definition, there is an admissible sequence $A$ and a
trajectory piece $T$ of type $A$ such that
\begin{equation}\label{eq5}
\left\|\frac{{\mathbf{d}}(T)}{|T|}-{\mathbf{u}}\right\|<\frac{\varepsilon}{2}.
\end{equation}
Moreover, we can assume that $|T|$ is as large as we need. As in the
proof of Theorem~\ref{convex}, we treat $A$ as a path in the graph $G$
and find an admissible sequence $C$ represented in $G$ as a path of
length at most 3, via elements of $U$, such
that the periodic concatenation $D=ACACAC\dots$ is admissible. There
exists a periodic orbit of type $D$. Let $Q$ be its piece
corresponding to the itinerary $AC$. We will estimate its displacement and length.
Similarly as in the proof of Theorem~\ref{convex}, we get
$$\|{\mathbf{d}}(Q)-{\mathbf{d}}(T)\|\le 4c+2 \textrm{\ \ and\ \ } \big||Q|-|T|\big|\le 4c+2.$$
Therefore
\begin{eqnarray*}
\left\|\frac{{\mathbf{d}}(Q)}{|Q|}-\frac{{\mathbf{d}}(T)}{|T|}\right\|
&\le&\left\|\frac{{\mathbf{d}}(Q)}{|Q|}-\frac{{\mathbf{d}}(T)}{|Q|}\right\|
+\left\|\frac{{\mathbf{d}}(T)}{|Q|}-\frac{{\mathbf{d}}(T)}{|T|}\right\|\\
&\le&\frac{4c+2}{|T|-(4c+2)}+\frac{\|{\mathbf{d}}(T)\|}{|T|}\cdot
\frac{4c+2}{|T|-(4c+2)}.
\end{eqnarray*}
If $|T|$ is sufficiently large then the right-hand side of this
inequality is smaller than $\varepsilon/2$. Together with (\ref{eq5}) we get
$$\left\|\frac{{\mathbf{d}}(Q)}{|Q|}-{\mathbf{u}}\right\|<\varepsilon.$$
This completes the proof.
\end{proof}
We will refer to closed paths in $G$ as \emph{loops}.
\begin{remark}\label{vertex}
It is clear that in the above theorem we can additionally require that
the corresponding loop in the graph $G$ passes through a given vertex.
\end{remark}
To get more results, we need a generalization of a lemma from
\cite{MZ} to higher dimensions.
\begin{lemma}\label{geom}
Assume that ${\mathbf{0}}\in\mathbb{R}^m$ lies in the interior of the convex hull of
a set of $m+1$ vectors ${\mathbf{v}}_0,{\mathbf{v}}_1,\dots,{\mathbf{v}}_m$. For every $K>0$,
if $L$ is large enough then the following property holds. If ${\mathbf{x}}\in\mathbb{R}^m$ and
$\|{\mathbf{x}}\|\le L$ then there exist $i\in\{0,1,\dots,m\}$ and a positive
integer $n$ such that $\|{\mathbf{x}}+n{\mathbf{v}}_i\|\le L-K$. Moreover, $\|{\mathbf{x}}+j{\mathbf{v}}_i\|\le L$
for $j=1,2,\dots,n-1$.
\end{lemma}
\begin{proof}
Let us fix $K>0$. We will consider only $L$ such that $L>K$. Set
$M=\max_i\|{\mathbf{v}}_i\|$.
For each ${\mathbf{x}}\in\mathbb{R}^m$ with $\|{\mathbf{x}}\|=1$ let $f({\mathbf{x}})$ be the minimum of
$\|{\mathbf{x}}+t{\mathbf{v}}_i\|$ over $i=0,1,\dots,m$ and $t\ge 0$. By the assumption,
$f({\mathbf{x}})<1$. Clearly $f$ is continuous, and therefore there is $\varepsilon>0$
such that $f({\mathbf{x}})\le 1-\varepsilon$ for every ${\mathbf{x}}$ with $\|{\mathbf{x}}\|=1$. Thus, for every ${\mathbf{y}}\in\mathbb{R}^m$
there exist $i\in\{0,1,\dots,m\}$ and $s\ge 0$ such that
$\|{\mathbf{y}}+s{\mathbf{v}}_i\|\le(1-\varepsilon)\|{\mathbf{y}}\|$. Let $n$ be the smallest integer larger
than $s$. Then $n>0$, and if $L\ge(M+K)/\varepsilon$ and $\|{\mathbf{y}}\|\le L$ then
$\|{\mathbf{y}}+n{\mathbf{v}}_i\|\le(1-\varepsilon)L+M\le L-K$.
The last statement of the lemma follows from the convexity of the
balls in $\mathbb{R}^m$.
\end{proof}
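To illustrate the statement, consider the simplest case $m=1$, ${\mathbf{v}}_0=-1$, ${\mathbf{v}}_1=1$ and $K=1$; then any $L\ge 2$ works. Indeed, given $x$ with $|x|\le L$, choose the vector ${\mathbf{v}}_i$ pointing from $x$ towards the origin (either one if $x=0$) and $n=\max(\lceil|x|\rceil,1)$; then $|x+n{\mathbf{v}}_i|\le 1\le L-K$, while $|x+j{\mathbf{v}}_i|\le|x|\le L$ for $j=1,\dots,n-1$.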
Now we can follow the methods of \cite{MZ} and \cite{Z}. We assume
that our billiard has a small obstacle. For a full trajectory $T$ we will
denote by $T(t)$ the point to which we get after time $t$.
\begin{lemma}\label{follow}
If ${\mathbf{u}}$ is a vector from the interior of $AR$, then there exists a
full trajectory $T$ of admissible type and a constant $M$ such that
\begin{equation}\label{eq6}
\|T(t)-T(0)-t{\mathbf{u}}\|\le M
\end{equation}
for all $t\in\mathbb{R}$.
\end{lemma}
\begin{proof}
Let us think first of positive $t$'s.
Since ${\mathbf{u}}$ is in the interior of $AR$, one can choose $m+1$ vectors
${\mathbf{w}}_0,{\mathbf{w}}_1,\dots,{\mathbf{w}}_m\in AR$ such that ${\mathbf{u}}$ is in the interior of the
convex hull of those vectors. Moreover, by Theorem~\ref{perdense} and
Remark~\ref{vertex} we may assume that ${\mathbf{w}}_i$ are rotation vectors of
periodic orbits $P_i$ of admissible type, corresponding to loops $A_i$ in $G$
passing through a common vertex $V$. We can also consider those loops
as finite paths, ending at $V$ and starting at the next vertex in the
loop. Set
$${\mathbf{v}}_i={\mathbf{d}}(P_i)-|P_i|{\mathbf{u}}=|P_i|{\mathbf{w}}_i-|P_i|{\mathbf{u}}=|P_i|({\mathbf{w}}_i-{\mathbf{u}}).$$
Since ${\mathbf{u}}$ is in the interior of the convex hull of the vectors ${\mathbf{w}}_i$,
we get that ${\mathbf{0}}$ is in the interior of the convex hull of the
vectors ${\mathbf{w}}_i-{\mathbf{u}}$, and therefore ${\mathbf{0}}$ is in the interior of the
convex hull of the vectors ${\mathbf{v}}_i$.
We will construct our trajectory, or rather a corresponding path in
the graph $G$, by induction, using Lemma~\ref{geom}. Then we get a
corresponding trajectory of admissible type by Theorem~\ref{thmadm}. We start with
the empty sequence, that corresponds to the trajectory piece
consisting of one point. Then, when a path $B_j$ in $G$
(corresponding to a trajectory piece $Q_j$) is constructed, and it
ends at $V$, we look at the vector ${\mathbf{x}}={\mathbf{d}}(Q_j)-|Q_j|{\mathbf{u}}$ and choose ${\mathbf{v}}_i$
and $n$ according to Lemma~\ref{geom}. We append $B_j$ by adding $n$
repetitions of $A_i$ (corresponding to a trajectory piece that we can
call $nP_i$) and obtain $B_{j+1}$ (corresponding to a trajectory piece
$Q_{j+1}$). To do all this, we have to specify the constant $K$ that is used in
Lemma~\ref{geom} and prove that if $\|{\mathbf{x}}\|\le L$ then also
$\big\|{\mathbf{d}}(Q_{j+1})-|Q_{j+1}|{\mathbf{u}}\big\|\le L$.
Let us analyze the situation. When we concatenate $Q_j$ and
$A_i\dots A_i$ ($n$ times) to get $Q_{j+1}$, by Lemma~\ref{const}
we have
$$\|\big({\mathbf{d}}(Q_{j+1})-|Q_{j+1}|{\mathbf{u}}\big)-\big({\mathbf{d}}(Q_j)-|Q_j|{\mathbf{u}}\big)-\big({\mathbf{d}}(nP_i)-|nP_i|{\mathbf{u}}\big)\|\le
4c(1+\|{\mathbf{u}}\|).$$
Moreover,
$${\mathbf{d}}(nP_i)-|nP_i|{\mathbf{u}}=n({\mathbf{d}}(P_i)-|P_i|{\mathbf{u}})=n{\mathbf{v}}_i.$$
Therefore in Lemma~\ref{geom} we have to take $K=4c(1+\|{\mathbf{u}}\|)$ and then we can
make the induction step.
In such a way we obtain an infinite path $B$ in $G$. By
Theorem~\ref{thmadm}, there exists a billiard trajectory $T$ of type
$B$.
Note that we did not complete the proof yet, because we got
(\ref{eq6}) (with $M=L$) only for a sequence of times $t=|Q_j|$. We
can do better using the last statement of Lemma~\ref{geom}. This shows that
(\ref{eq6}) with $M=L+K$ holds for a sequence of times $t$ with the
difference of two consecutive terms of this sequence not exceeding
$s=\max(|P_0|,|P_1|,\dots,|P_m|)+4c$. Every time $t'$ can be written
as $t+r$ with $t$ being a term of the above sequence (so that
(\ref{eq6}) holds with $M=L+K$) and $r\in[0,s)$. Then
$$\|T(t')-T(0)-t'{\mathbf{u}}\|\le L+K+\|T(t+r)-T(t)\|+r\|{\mathbf{u}}\|.$$
Thus, (\ref{eq6}) holds for all times with $M=L+K+s+s\|{\mathbf{u}}\|$.
The same can be done for negative $t$'s, so we get a full (two-sided)
path, and consequently a full trajectory.
\end{proof}
Now we are ready to prove the next important theorem. Remember that
our phase space is a factor of a compact connected subset of the unit
tangent bundle over the torus.
\begin{theorem}\label{cominv}
For a billiard on a torus with a small obstacle, if ${\mathbf{u}}$ is a vector
from the interior of $AR$, then there exists a compact
invariant subset $Y$ of the phase space, such that every trajectory
from $Y$ has admissible type and rotation vector ${\mathbf{u}}$.
\end{theorem}
\begin{proof}
Let $Y$ be the closure of the trajectory $T$ from Lemma~\ref{follow},
taken in the phase space. If $S$ is a trajectory obtained from $T$ by
starting it at time $s$ (that is, $S(t)=T(s+t)$) then by
Lemma~\ref{follow} we get $\|S(t)-S(0)-t{\mathbf{u}}\|\le 2M$ for all $t$.
By continuity of the flow, this property extends to every trajectory
$S$ from $Y$. This proves that every trajectory from $Y$ has rotation
vector ${\mathbf{u}}$.
Since a trajectory of admissible type has no tangencies to the
obstacle (by condition (\ref{adm4}) of the definition of admissible
sequences), each finite piece of a trajectory from $Y$ has
admissible type. Therefore every trajectory from $Y$ has admissible
type.
\end{proof}
\begin{remark}\label{minimal}
The set $Y$ above can be chosen minimal, and therefore the trajectory
from Lemma~\ref{follow} can be chosen recurrent.
\end{remark}
As a trivial corollary to Theorem~\ref{cominv} we get the following.
\begin{corollary}\label{pointwise}
For a billiard on a torus with a small obstacle, if ${\mathbf{u}}$ is a vector
from the interior of $AR$, then there exists a trajectory of
admissible type with rotation vector ${\mathbf{u}}$.
\end{corollary}
We also get another corollary, which follows from the existence of an
ergodic measure on $Y$.
\begin{corollary}\label{ergodic}
For a billiard on a torus with a small obstacle, if ${\mathbf{u}}$ is a vector
from the interior of $AR$, then there exists an ergodic invariant
probability measure in the phase space, for which the integral of the
velocity is equal to ${\mathbf{u}}$ and almost every trajectory is of
admissible type.
\end{corollary}
This corollary is stronger than Corollary~\ref{pointwise}, because
from it and from the Ergodic Theorem it follows that almost every
point has rotation vector ${\mathbf{u}}$. The details of the necessary formalism
are described in Section~\ref{sec_conn}. Of course, in our particular
case both results are corollaries to Theorem~\ref{cominv}, so we know
anyway that all points of $Y$ have rotation vector ${\mathbf{u}}$.
\section{Admissible rotation set is large}\label{sec_arlarge}
In this section we will investigate how large the admissible rotation
set $AR$ is. This of course depends on the size of the obstacle and
the dimension of the space. We will measure the size of $AR$ by the
radius of the largest ball centered at the origin and contained in $AR$.
We will start with the estimates that depend on the dimension $m$ of
the space but not on the size of the obstacle (provided it is small in
our meaning). In order to do this, recall that by Lemma~\ref{am} all elements
of the set $A_m=\{-1,\,0,\,1\}^m\setminus\{{\mathbf{0}}\}$ are vertices of $G$.
\begin{lemma}\label{amar}
If ${\mathbf{k}}\in A_m$ then $(\sqrt{2}/2)({\mathbf{k}}/\|{\mathbf{k}}\|)\in AR$.
\end{lemma}
\begin{proof}
If ${\mathbf{k}}\in U$ then there is a vector ${\mathbf{l}}\in U$ orthogonal to ${\mathbf{k}}$.
Vectors ${\mathbf{k}}+{\mathbf{l}}$ and ${\mathbf{k}}-{\mathbf{l}}$ belong to $A_m$ and one can easily check
that there are edges from ${\mathbf{k}}+{\mathbf{l}}$ to ${\mathbf{k}}-{\mathbf{l}}$ and from ${\mathbf{k}}-{\mathbf{l}}$ to
${\mathbf{k}}+{\mathbf{l}}$ in $G$. The periodic path $({\mathbf{k}}+{\mathbf{l}})({\mathbf{k}}-{\mathbf{l}})({\mathbf{k}}+{\mathbf{l}})({\mathbf{k}}-{\mathbf{l}})
\dots$ in $G$ gives us a periodic orbit $P$ of the billiard. The
displacement along $P$ is $2{\mathbf{k}}$ and the period of $P$ is smaller than
$\|{\mathbf{k}}+{\mathbf{l}}\|+\|{\mathbf{k}}-{\mathbf{l}}\|=2\sqrt{2}$, so the rotation vector of $P$ is
$t{\mathbf{k}}$, where $t>\sqrt{2}/2$. Since ${\mathbf{0}}\in AR$ and $AR$ is convex, we
get $(\sqrt{2}/2)({\mathbf{k}}/\|{\mathbf{k}}\|)\in AR$.
Assume now that ${\mathbf{k}}\in A_m$ and $\|{\mathbf{k}}\|>1$. Then ${\mathbf{k}}={\mathbf{l}}+{\mathbf{u}}$ for some
${\mathbf{l}}\in A_m$ and ${\mathbf{u}}\in U$ such that ${\mathbf{u}}$ is orthogonal to ${\mathbf{l}}$. By
Lemma~\ref{am}, ${\mathbf{l}}$ is a vertex of $G$. By Lemma~\ref{scalar}
there are edges in $G$ from ${\mathbf{l}}$ to ${\mathbf{u}}$ and from ${\mathbf{u}}$ to ${\mathbf{l}}$.
Similarly as before, we get a periodic orbit of the billiard
(corresponding to the periodic path ${\mathbf{l}}{\mathbf{u}}{\mathbf{l}}{\mathbf{u}}\dots$) with the
displacement ${\mathbf{k}}$ and period less than $\|{\mathbf{l}}\|+\|{\mathbf{u}}\|=\sqrt{\|{\mathbf{k}}\|^2-1}
+1$, so
$$\frac{\|{\mathbf{k}}\|}{\sqrt{\|{\mathbf{k}}\|^2-1}+1}\cdot\frac{{\mathbf{k}}}{\|{\mathbf{k}}\|}\in AR.$$
Since $\|{\mathbf{k}}\|/(\sqrt{\|{\mathbf{k}}\|^2-1}+1)\ge\sqrt{2}/2$, the vector
$(\sqrt{2}/2)({\mathbf{k}}/\|{\mathbf{k}}\|)$ also belongs to $AR$.
\end{proof}
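For instance, for $m=2$ and ${\mathbf{k}}=(1,1)$ the above construction uses ${\mathbf{l}}=(1,0)$ and ${\mathbf{u}}=(0,1)$: the corresponding periodic orbit has displacement $(1,1)$ per period and period smaller than $2$, so its rotation vector has the direction of $(1,1)$ and norm larger than $\sqrt{2}/2$.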
By the results of \cite{Nandor}, the convex hull of $A_m$ contains the
closed ball centered at ${\mathbf{0}}$ with radius $2/\sqrt{\ln m+5}$. From
this and Lemma~\ref{amar} we get immediately the following result.
\begin{theorem}\label{large}
For a billiard on a torus with a small obstacle, the set $AR$ contains
the closed ball centered at ${\mathbf{0}}$ with radius $\sqrt{2/(\ln m+5)}$.
\end{theorem}
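Numerically, the radius $\sqrt{2/(\ln m+5)}$ is approximately $0.59$ for $m=2$, $0.57$ for $m=3$, and still about $0.46$ for $m=100$; it tends to $0$ as $m\to\infty$, but only at a logarithmic rate.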
Now we proceed to the estimates that are independent of the dimension
$m$. This is not as simple as it seems. As we saw above, a
straightforward attempt that takes into account only those vectors of
$\mathbb{Z}^m$ for which we can show explicitly that they are vertices of $G$,
gives estimates that go to $0$ as $m\to\infty$. By the results of
\cite{Nandor}, those estimates cannot be significantly improved.
Therefore we have to use another method.
Let us assume first that $O_{\mathbf{0}}$ is the ball centered at ${\mathbf{0}}$ of
radius $r<\sqrt{2}/4$. We start with a simple lemma.
\begin{lemma}\label{st1}
Assume that $O_{\mathbf{l}}$ is between $O_{\mathbf{0}}$ and $O_{\mathbf{k}}$ and let $\theta$
be the angle between the vectors ${\mathbf{k}}$ and ${\mathbf{l}}$. Then
\begin{equation}\label{eqst1}
\langle{\mathbf{k}},{\mathbf{l}}\rangle^2\ge\|{\mathbf{k}}\|^2\left(\|{\mathbf{l}}\|^2-4r^2\right)>
\|{\mathbf{k}}\|^2\left(\|{\mathbf{l}}\|^2-\frac{1}{2}\right)
\end{equation}
and
\begin{equation}\label{eqst2}
\sin\theta\le 2r/\|{\mathbf{l}}\|.
\end{equation}
\end{lemma}
\begin{proof}
If $O_{\mathbf{l}}$ is between $O_{\mathbf{0}}$ and $O_{\mathbf{k}}$ then there is a line
parallel to the vector ${\mathbf{k}}$, whose distances from ${\mathbf{0}}$ and ${\mathbf{l}}$
are at most $r$. Therefore the distance of ${\mathbf{l}}$ from the line
through ${\mathbf{0}}$ and ${\mathbf{k}}$ is at most $2r$. The orthogonal projection
of ${\mathbf{l}}$ to this line is $(\langle{\mathbf{k}},{\mathbf{l}}\rangle/\|{\mathbf{k}}\|^2){\mathbf{k}}$, so
$$\left\|{\mathbf{l}}-\frac{\langle{\mathbf{k}},{\mathbf{l}}\rangle}{\|{\mathbf{k}}\|^2}{\mathbf{k}}\right\|^2\le 4r^2.$$
The left-hand side of this inequality is equal to
$$\|{\mathbf{l}}\|^2-\frac{\langle{\mathbf{k}},{\mathbf{l}}\rangle^2}{\|{\mathbf{k}}\|^2}$$
and $4r^2<1/2$, so (\ref{eqst1}) holds.
By (\ref{eqst1}), we have
$$\sin^2\theta=1-\cos^2\theta=1-\frac{\langle{\mathbf{k}},{\mathbf{l}}\rangle^2}
{\|{\mathbf{k}}\|^2\|{\mathbf{l}}\|^2}\le 1-\frac{\|{\mathbf{k}}\|^2(\|{\mathbf{l}}\|^2-4r^2)}
{\|{\mathbf{k}}\|^2\|{\mathbf{l}}\|^2}=\frac{4r^2}{\|{\mathbf{l}}\|^2},$$
so (\ref{eqst2}) holds.
\end{proof}
Clearly, the angle $\theta$ above is acute.
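For instance, if $r=1/10$ and $\|{\mathbf{l}}\|=1$, then (\ref{eqst2}) gives $\sin\theta\le 1/5$, that is, $\theta\le\arcsin(1/5)\approx 0.2$.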
The estimate in the next lemma requires extensive use of the fact that
the vectors that we are considering have integer components.
\begin{lemma}\label{st3}
Assume that $O_{\mathbf{l}}$ is between $O_{\mathbf{0}}$ and $O_{\mathbf{k}}$ and
$$\langle{\mathbf{k}},{\mathbf{l}}\rangle\le\langle{\mathbf{k}},{\mathbf{k}}-{\mathbf{l}}\rangle.$$
Then $\|{\mathbf{l}}\|\le\|{\mathbf{k}}\|/2$.
\end{lemma}
\begin{proof}
By Lemma~\ref{st1}, (\ref{eqst1}) holds. Since $\langle{\mathbf{k}},{\mathbf{l}}\rangle\le
\langle{\mathbf{k}},{\mathbf{k}}-{\mathbf{l}}\rangle$, we get $\langle{\mathbf{k}},{\mathbf{l}}\rangle\le\|{\mathbf{k}}\|^2-
\langle{\mathbf{k}},{\mathbf{l}}\rangle$, so
\begin{equation}\label{eqst3}
2\langle{\mathbf{k}},{\mathbf{l}}\rangle\le\|{\mathbf{k}}\|^2.
\end{equation}
By (\ref{eqst1}) and (\ref{eqst3}) we get
\begin{equation}\label{eqst4}
\|{\mathbf{k}}\|^2>4\|{\mathbf{l}}\|^2-2.
\end{equation}
If $\|{\mathbf{l}}\|>\|{\mathbf{k}}\|/2$, then $\|{\mathbf{k}}\|^2<4\|{\mathbf{l}}\|^2$. Together with
(\ref{eqst4}), since $\|{\mathbf{k}}\|^2$ and $\|{\mathbf{l}}\|^2$ are integers, we get
\begin{equation}\label{eqst5}
\|{\mathbf{k}}\|^2=4\|{\mathbf{l}}\|^2-1.
\end{equation}
From (\ref{eqst3}) and (\ref{eqst5}) we get $\langle{\mathbf{k}},{\mathbf{l}}\rangle\le
2\|{\mathbf{l}}\|^2-1/2$. Hence, since $\langle{\mathbf{k}},{\mathbf{l}}\rangle$ is also an
integer, we get $\langle{\mathbf{k}},{\mathbf{l}}\rangle\le 2\|{\mathbf{l}}\|^2-1$. From this,
(\ref{eqst1}) and (\ref{eqst5}) we get
$$(2\|{\mathbf{l}}\|^2-1)^2>(4\|{\mathbf{l}}\|^2-1)\left(\|{\mathbf{l}}\|^2-\frac{1}{2}\right)
=\left(2\|{\mathbf{l}}\|^2-\frac{1}{2}\right)(2\|{\mathbf{l}}\|^2-1),$$
a contradiction.
\end{proof}
Let us think about standing at the origin and looking at the sky,
where vertices of $G$ are stars. Are there big parts of the sky
without a single star? Observe that as the dimension $m$
of the space grows, the angles between the integer vectors tend to
become larger. For instance, the angle between the vectors
$(1,0,\dots,0)$ and $(1,\dots,1)$ is of order $\pi/2-1/\sqrt{m}$.
Thus, any given acute angle can be considered relatively small if $m$
is sufficiently large.
Let $\alpha$ be a positive angle. We will say that a set
$A\subset\mathbb{Z}^m\setminus\{{\mathbf{0}}\}$ is \emph{$\alpha$-dense in the sky}
if for every ${\mathbf{v}}\in\mathbb{R}^m\setminus\{{\mathbf{0}}\}$ there is ${\mathbf{u}}\in A$ such that
the angle between the vectors ${\mathbf{v}}$ and ${\mathbf{u}}$ is at most $\alpha$. Set
$$\eta(r)=\sum_{n=0}^\infty\arcsin\frac{r}{2^{n-1}}.$$
\begin{proposition}\label{st5}
The set of vertices of $G$ is $\eta(r)$-dense in the sky.
\end{proposition}
\begin{proof}
Fix a vector ${\mathbf{v}}\in\mathbb{R}^m\setminus\{{\mathbf{0}}\}$ and $\varepsilon>0$.
There exists ${\mathbf{k}}_0\in\mathbb{Z}^m\setminus\{{\mathbf{0}}\}$ such that
the angle between ${\mathbf{v}}$ and ${\mathbf{k}}_0$ is less than $\varepsilon$. Then we define by
induction a finite sequence $({\mathbf{k}}_1,{\mathbf{k}}_2,\dots,{\mathbf{k}}_n)$ of elements of
$\mathbb{Z}^m\setminus\{{\mathbf{0}}\}$ such that $O_{{\mathbf{k}}_{i+1}}$ is between $O_{\mathbf{0}}$
and $O_{{\mathbf{k}}_i}$ and $\|{\mathbf{k}}_{i+1}\|\le\|{\mathbf{k}}_i\|/2$. This is possible by
Lemmas~\ref{st2} and~\ref{st3}. Since $\|{\mathbf{k}}_i\|\ge 1$ for each $i$,
this procedure has to terminate at some ${\mathbf{k}}_n$. Then there is no
obstacle between $O_{\mathbf{0}}$ and $O_{{\mathbf{k}}_n}$, so ${\mathbf{k}}_n$ is a vertex of $G$.
By Lemma~\ref{st1}, the angle between ${\mathbf{k}}_i$ and ${\mathbf{k}}_{i+1}$ is at most
$\arcsin(2r/\|{\mathbf{k}}_{i+1}\|)$. By our construction, we have $\|{\mathbf{k}}_n\|\ge
1=2^0$, $\|{\mathbf{k}}_{n-1}\|\ge 2^1$, $\|{\mathbf{k}}_{n-2}\|\ge 2^2$, etc. Therefore the
angle between ${\mathbf{k}}_n$ and ${\mathbf{k}}_0$ is smaller than $\eta(r)$. Hence, the
angle between ${\mathbf{v}}$ and ${\mathbf{k}}_n$ is smaller than $\eta(r)+\varepsilon$. Since
$\varepsilon$ was arbitrary, this angle is at most $\eta(r)$.
\end{proof}
\begin{remark}\label{st6}
Proposition~\ref{st5} was proved under the assumption that $O_{\mathbf{0}}$
is the ball centered at ${\mathbf{0}}$ of radius $r<\sqrt{2}/4$. However,
making an obstacle smaller results in preservation or even enlargement
of $G$. Moreover, we have a freedom in the lifting where to put the
origin. Therefore, Proposition~\ref{st5} remains true under a weaker
assumption, that $O$ is contained in a closed ball of radius
$r<\sqrt{2}/4$.
\end{remark}
Let us investigate the properties of $\eta(r)$.
\begin{lemma}\label{st7}
The function $\eta$ is continuous and increasing on $(0,\sqrt{2}/4]$.
Moreover,\break $\eta(r)<\sqrt{2}\,\,\pi r$. In particular,
$\eta(\sqrt{2}/4)<\pi/2$ and
$$\lim_{r\to 0}\eta(r)=0.$$
\end{lemma}
\begin{proof}
Assume that $0<r\le\sqrt{2}/4$. Then all numbers whose arcus sine we
are taking are from the interval $(0,\sqrt{2}/2]$, so clearly $\eta$
is continuous and increasing. We also have the estimate
$$\frac{x}{\arcsin x}\ge\frac{\sqrt{2}/2}{\pi/4}=
\frac{2\sqrt{2}}{\pi}$$
for $x\in(0,\sqrt{2}/2]$.
Moreover, the equality holds only if $x=\sqrt{2}/2$. Thus
$$\eta(r)<\sum_{n=0}^\infty\frac{\pi r}{2^n\,\sqrt{2}}=\sqrt{2}\,\,\pi r.$$
Therefore $\lim_{r\to 0}\eta(r)=0$ and
$$\eta(\sqrt{2}/4)<\sqrt{2}\,\,\pi\frac{\sqrt{2}}{4}=\frac{\pi}{2}.$$
\end{proof}
Now we assume only that $O$ is small. In the next lemma we obtain two
estimates of the length of $t{\mathbf{k}}\in AR$ if ${\mathbf{k}}$ is a vertex of $G$. One of
those estimates will be useful for all vertices of $G$, the other one
for those with large norm. The main idea of the proof is similar as in
the proof of Lemma~\ref{amar}.
\begin{lemma}\label{st8}
If ${\mathbf{k}}$ is a vertex of $G$ then the vectors $(1-\sqrt{2}/2)({\mathbf{k}}/\|{\mathbf{k}}\|)$
and $\big((\|{\mathbf{k}}\|-1)/(\|{\mathbf{k}}\|+1)\big)({\mathbf{k}}/\|{\mathbf{k}}\|)$ belong to $AR$.
\end{lemma}
\begin{proof}
Let ${\mathbf{k}}=(x_1,x_2,\dots,x_m)$ be a vertex of $G$. Let $s$ be the number
of non-zero components of ${\mathbf{k}}$. Then we may assume that $x_i\ne 0$ if
$i\le s$ and $x_i=0$ if $i>s$. If $s=1$ then the statement of the
lemma follows from Lemma~\ref{amar}.
Assume now that $s>1$. Then for every $i\le s$ there is a vector
${\mathbf{v}}_i\in U$ with only the $i$-th component non-zero and $\langle
{\mathbf{v}}_i,{\mathbf{k}}\rangle<0$. By Lemma~\ref{scalar} there are edges in $G$ from ${\mathbf{k}}$
to ${\mathbf{v}}_i$ and from ${\mathbf{v}}_i$ to ${\mathbf{k}}$, so the periodic path ${\mathbf{k}}{\mathbf{v}}_i{\mathbf{k}}{\mathbf{v}}_i\dots$
in $G$ gives us a periodic orbit $P_i$ of the billiard. The
displacement along $P_i$ is ${\mathbf{k}}+{\mathbf{v}}_i$ and the period of $P_i$ is smaller
than $\|{\mathbf{k}}\|+\|{\mathbf{v}}_i\|=\|{\mathbf{k}}\|+1$, so the rotation vector of $P_i$ is
$t_i({\mathbf{k}}+{\mathbf{v}}_i)$ with $t_i>1/(\|{\mathbf{k}}\|+1)$. Therefore the vector
$({\mathbf{k}}+{\mathbf{v}}_i)/(\|{\mathbf{k}}\|+1)$ belongs to $AR$.
Since the vectors ${\mathbf{v}}_i$ form an orthonormal basis of $\mathbb{R}^s$, we have
$${\mathbf{k}}=\sum_{i=1}^s\langle {\mathbf{v}}_i,{\mathbf{k}}\rangle {\mathbf{v}}_i.$$
Set
$$a=\sum_{i=1}^s\langle {\mathbf{v}}_i,{\mathbf{k}}\rangle \textrm{\ \ and\ \ } a_i=\frac{\langle
{\mathbf{v}}_i,{\mathbf{k}}\rangle}{a}.$$
Then the vector
$${\mathbf{u}}=\sum_{i=1}^s a_i\frac{{\mathbf{k}}+{\mathbf{v}}_i}{\|{\mathbf{k}}\|+1}$$
is a convex combination of elements of $AR$, so ${\mathbf{u}}\in AR$. We have
$${\mathbf{u}}=\frac{{\mathbf{k}}}{\|{\mathbf{k}}\|+1}\sum_{i=1}^s a_i+\frac{1}{a(\|{\mathbf{k}}\|+1)}\sum_{i=1}^s
\langle {\mathbf{v}}_i,{\mathbf{k}}\rangle {\mathbf{v}}_i=\frac{{\mathbf{k}}}{\|{\mathbf{k}}\|+1}\left(1+\frac{1}{a}\right).$$
For each $i$ we have $\langle {\mathbf{v}}_i,{\mathbf{k}}\rangle\le -1$, so $a\le -s$, and
therefore $1+1/a\ge(s-1)/s$. Moreover,
$$\frac{\|{\mathbf{k}}\|}{\|{\mathbf{k}}\|+1}\ge\frac{\sqrt{s}}{\sqrt{s}+1}.$$
Since $s\ge 2$, we get
$$\frac{s-1}{s}\cdot\frac{\sqrt{s}}{\sqrt{s}+1}=\frac{\sqrt{s}-1}
{\sqrt{s}}\ge\frac{\sqrt{2}-1}{\sqrt{2}}=1-\frac{\sqrt{2}}{2},$$
so the vector ${\mathbf{u}}$ has the direction of ${\mathbf{k}}$ and length at least
$1-\sqrt{2}/2$.
To get the other estimate of the length of ${\mathbf{u}}$, note that $\langle
{\mathbf{v}}_i,{\mathbf{k}}\rangle=-|x_i|$, so
$$a=-\sum_{i=1}^s|x_i|\le-\|{\mathbf{k}}\|,$$
and hence
$$\|{\mathbf{u}}\|\ge\frac{\|{\mathbf{k}}\|}{\|{\mathbf{k}}\|+1}\cdot\left(1-\frac{1}{\|{\mathbf{k}}\|}\right)
=\frac{\|{\mathbf{k}}\|-1}{\|{\mathbf{k}}\|+1}.$$
\end{proof}
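For instance, if for $m=2$ the vector ${\mathbf{k}}=(2,1)$ is a vertex of $G$, then $s=2$, ${\mathbf{v}}_1=(-1,0)$, ${\mathbf{v}}_2=(0,-1)$, $a=-3$ and ${\mathbf{u}}=\frac{2}{3}\cdot\frac{(2,1)}{\sqrt{5}+1}$, whose norm $\frac{2\sqrt{5}}{3(\sqrt{5}+1)}\approx 0.46$ is indeed larger than both $1-\sqrt{2}/2\approx 0.29$ and $(\sqrt{5}-1)/(\sqrt{5}+1)\approx 0.38$.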
\begin{lemma}\label{st9}
Let $A\subset\mathbb{R}^m$ be a finite set, $\alpha$-dense in the sky for
some $\alpha<\pi/2$. Assume that every vector of $A$ has norm $c$.
Then the convex hull of $A$ contains a ball of radius $c\cos\alpha$,
centered at ${\mathbf{0}}$.
\end{lemma}
\begin{proof}
Let $K$ be the convex hull of $A$. Then $K$ is a convex polytope with
vertices from $A$. Since $A$ is $\alpha$-dense in the sky and
$\alpha<\pi/2$, $K$ is non-degenerate and ${\mathbf{0}}$ belongs to its
interior. Let $s$ be the radius of the largest ball centered at
${\mathbf{0}}$ and contained in $K$. This ball is tangent to some face of $K$
at a point ${\mathbf{v}}$. Then the whole $K$ is contained in the half-space
$\{{\mathbf{u}}:\langle {\mathbf{u}},{\mathbf{v}}\rangle\le\|{\mathbf{v}}\|^2\}$. In particular, for every ${\mathbf{u}}\in
A$ we have $\langle {\mathbf{u}},{\mathbf{v}}\rangle\le\|{\mathbf{v}}\|^2$. Since $A$ is $\alpha$-dense
in the sky, there is ${\mathbf{u}}\in A$ such that the angle between ${\mathbf{v}}$ and ${\mathbf{u}}$
is at most $\alpha$. Therefore
$$\|{\mathbf{v}}\|^2\ge\langle {\mathbf{u}},{\mathbf{v}}\rangle\ge\|{\mathbf{u}}\|\|{\mathbf{v}}\|\cos\alpha=
c\|{\mathbf{v}}\|\cos\alpha,$$
so $s=\|{\mathbf{v}}\|\ge c\cos\alpha$.
\end{proof}
Now we can get the first, explicit, estimate of the radius of the
largest ball contained in $AR$.
\begin{theorem}\label{st10}
For a billiard on a torus with a small obstacle, assume that $O$ is
contained in a closed ball of radius
$r<\sqrt{2}/4$. Then the admissible rotation set contains the closed ball of radius
$(1-\sqrt{2}/2)\cos\eta(r)$ centered at ${\mathbf{0}}$.
\end{theorem}
\begin{proof}
Set $H=\{(1-\sqrt{2}/2)({\mathbf{k}}/\|{\mathbf{k}}\|):{\mathbf{k}}$ is a vertex of $G\}$ and let $K$ be the convex
hull of $H$. By Lemma~\ref{st8} and by the convexity of $AR$, we have
$K\subset AR$. By Proposition~\ref{st5}, $H$ is $\eta(r)$-dense in the
sky. Thus, by Lemma~\ref{st9}, $K$ contains the closed ball of radius
$(1-\sqrt{2}/2)\cos\eta(r)$ centered at ${\mathbf{0}}$.
\end{proof}
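For example, if $O$ is contained in a closed ball of radius $r=1/10$, then by Lemma~\ref{st7} we have $\eta(1/10)<\sqrt{2}\,\pi/10\approx 0.445$, so $\cos\eta(1/10)>\cos(0.445)$ and the admissible rotation set contains the closed ball of radius $(1-\sqrt{2}/2)\cos(0.445)\approx 0.26$ centered at ${\mathbf{0}}$.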
The second estimate is better for small $r$ (uniformly in $m$), but does not give an
explicit formula for the radius of the ball contained in $AR$. We
first need two lemmas.
\begin{lemma}\label{st11}
For every integer $N>1$ there exists an angle $\beta(N)>0$
(independent of $m$), such that if ${\mathbf{u}},{\mathbf{v}}\in\mathbb{Z}^m\setminus{\mathbf{0}}$ are
vectors of norm less than $N$ and the angle $\theta$ between ${\mathbf{u}}$ and
${\mathbf{v}}$ is positive, then $\theta\ge\beta(N)$.
\end{lemma}
\begin{proof}
Under our assumptions, each of the vectors ${\mathbf{u}},{\mathbf{v}}$ has less than $N^2$
non-zero components. Therefore the angle between ${\mathbf{u}}$ and ${\mathbf{v}}$ is the
same as the angle between some vectors ${\mathbf{u}}',{\mathbf{v}}'\in\mathbb{Z}^{2N^2-2}$ of the same
norms as ${\mathbf{u}},{\mathbf{v}}$. However, there are only finitely many vectors in
$\mathbb{Z}^{2N^2-2}\setminus\{{\mathbf{0}}\}$ of norm less than $N$, so the lemma holds
with $\beta(N)$ equal to the smallest positive angle between such
vectors.
\end{proof}
\begin{lemma}\label{st12}
Let $A\subset\mathbb{Z}^m\setminus{\mathbf{0}}$ be a finite set, $\alpha$-dense in the sky for some
$\alpha<\beta(N)/2$, where $\beta(N)$ is as in Lemma~\ref{st11}.
Then the set of those elements of $A$ which have norm at least $N$ is
$2\alpha$-dense in the sky.
\end{lemma}
\begin{proof}
Let $B$ be the set of vectors of $\mathbb{R}^m\setminus\{{\mathbf{0}}\}$ whose
angular distance from some vector of $A$ of norm at least $N$ is
$2\alpha$ or less.
Let ${\mathbf{v}}\in\mathbb{R}^m$ be a non-zero vector. Assume that the angles between
${\mathbf{v}}$ and all vectors of $A$ are non-zero. By the assumptions, there
exists ${\mathbf{u}}\in A$ such that the angle $\theta$ between ${\mathbf{v}}$ and ${\mathbf{u}}$ is at
most $\alpha$. If $\|{\mathbf{u}}\|\ge N$ then ${\mathbf{v}}\in B$. Suppose that $\|{\mathbf{u}}\|<N$.
Choose $\varepsilon>0$ such that $\varepsilon<\beta(N)-2\alpha$ and $\varepsilon<\theta$.
We draw a great circle in the sky through ${\mathbf{u}}$ and ${\mathbf{v}}$ and go along it
from ${\mathbf{u}}$ through ${\mathbf{v}}$ and beyond it to some ${\mathbf{v}}'$ so that the angle
between ${\mathbf{u}}$ and ${\mathbf{v}}'$ is $\alpha+\varepsilon$. Now, there exists ${\mathbf{u}}'\in A$
such that the angle between ${\mathbf{v}}'$ and ${\mathbf{u}}'$ is at most $\alpha$. Then
the angle between ${\mathbf{u}}$ and ${\mathbf{u}}'$ is at least $\varepsilon$ and at most
$2\alpha+\varepsilon<\beta(N)$, so $\|{\mathbf{u}}'\|\ge N$. The angle between ${\mathbf{v}}$ and
${\mathbf{u}}'$ is at most $2\alpha+\varepsilon-\theta<2\alpha$, so again, ${\mathbf{v}}\in B$.
In such a way we have shown that $B$ is dense in
$\mathbb{R}^m\setminus\{{\mathbf{0}}\}$. It is also clearly closed in
$\mathbb{R}^m\setminus\{{\mathbf{0}}\}$, so it is equal to $\mathbb{R}^m\setminus\{{\mathbf{0}}\}$.
\end{proof}
Let $\beta(N)$ be as in Lemma~\ref{st11}, and let $N(r)$ be the
maximal $N$ such that $\eta(r)<\beta(N)/2$. This definition is
correct for sufficiently small $r$, since clearly $\beta(N)\to 0$ as
$N\to\infty$, and by Lemma~\ref{st7}, $\eta(r)\to 0$ as $r\to 0$. It
follows that $N(r)\to\infty$ as $r\to 0$.
\begin{theorem}\label{st13}
For a billiard on a torus with a small obstacle, assume that $O$ is
contained in a closed ball of radius $r<\sqrt{2}/4$
for $r$ so small that $N(r)$ is defined. Then the admissible rotation set contains
the closed ball of radius $\big((N(r)-1)/(N(r)+1)\big)\cos(2\eta(r))$
centered at ${\mathbf{0}}$.
\end{theorem}
\begin{proof}
Since $\eta(r)<\beta(N(r))/2$, by
Proposition~\ref{st5}, Remark~\ref{st6} and Lemma~\ref{st12}, the set
of those vertices of $G$ that have norm at least $N(r)$ is
$2\eta(r)$-dense in the sky. By Lemma~\ref{st8}, if ${\mathbf{k}}$ is such a
vertex, $\big((\|{\mathbf{k}}\|-1)/(\|{\mathbf{k}}\|+1)\big)({\mathbf{k}}/\|{\mathbf{k}}\|)\in AR$. Since
${\mathbf{0}}\in AR$ and $AR$ is convex, also $\big((N(r)-1)/(N(r)+1)\big)({\mathbf{k}}/\|{\mathbf{k}}\|)
\in AR$. Then by Lemma~\ref{st9}, the closed ball of radius
$\big((N(r)-1)/(N(r)+1)\big)\cos(2\eta(r))$ centered at ${\mathbf{0}}$ is
contained in $AR$.
\end{proof}
\begin{corollary}\label{st14}
The radius of the largest ball centered at ${\mathbf{0}}$ contained in the
admissible rotation set
goes to $1$ uniformly in $m$ as the diameter of the obstacle goes to
$0$.
\end{corollary}
We conclude this section with a result showing that even though $AR$
may be large, it is still smaller than $R$.
\begin{theorem}\label{st15}
For a billiard on a torus with a small obstacle, the admissible
rotation set is contained in the open unit ball. In particular, $AR\ne
R$.
\end{theorem}
\begin{proof}
Since the graph $G$ is finite, there exist positive constants
$c_1<c_2$ such that for every trajectory piece of admissible type the
distance between two consecutive reflections is contained in
$[c_1,c_2]$. Moreover, from the definition of an edge in $G$ and from
the compactness of the obstacle it follows that there is an angle
$\alpha>0$ such that the direction of a trajectory piece of admissible
type changes by at least $\alpha$ at each reflection. Consider the
triangle with two sides of length $c_1$ and $c_2$ and the angle
$\pi-\alpha$ between them. Let $a$ be the ratio between the length of
the third side and $c_1+c_2$, that is,
$$a=\frac{\sqrt{c_1^2+c_2^2+2c_1c_2\cos\alpha}}{c_1+c_2}.$$
This ratio is less than 1 and it decreases when $\alpha$ or $c_1/c_2$
grows. Therefore, if consecutive reflections for a trajectory piece of
admissible type are at times $t_1,t_2,t_3$, then the displacement
between the first and the third reflections divided by $t_3-t_1$ is at
most $a$. Thus, every vector from $AR$ has length at most $a$.
Clearly, the vector $(1,0,\dots,0)$ belongs to $R$, and thus $AR\ne
R$.
\end{proof}
\section{Billiard in the square}\label{sec_square}
Now we consider a billiard in the square $S=[-1/2,1/2]^2$ with one
convex obstacle $O$ with a smooth boundary. The lifting to $\mathbb{R}^2$,
considered in Section~\ref{sec_prelim}, is replaced in this case by the
unfolding to $\mathbb{R}^2$. That is, we cover $\mathbb{R}^2$ by the copies of $S$
obtained by consecutive symmetries with respect to the lines $x=n+1/2$
and $y=n+1/2$, $n\in\mathbb{Z}$. Thus, the square $S_{\mathbf{k}}=S+{\mathbf{k}}$ (${\mathbf{k}}\in\mathbb{Z}^2$) with
the obstacle $O_{\mathbf{k}}$ in it is the square $S$ with $O$, translated by
${\mathbf{k}}$, with perhaps an additional symmetry applied. If ${\mathbf{k}}=(p,q)$, then,
if both $p,q$ are even, there is no additional symmetry; if $p$ is
even and $q$ odd, we apply symmetry with respect to the line $y=q$; if
$p$ is odd and $q$ is even, we apply symmetry with respect to the line
$x=p$; and if both $p,q$ are odd, we apply central symmetry with
respect to the point $(p,q)$. In this model, trajectories in
$S$ with obstacle $O$ unfold to trajectories in $\mathbb{R}^2$ with
obstacles $O_{\mathbf{k}}$, ${\mathbf{k}}\in\mathbb{Z}^2$.
The situation in $\mathbb{R}^2$ is now the same as in the case of the torus
billiard, except that, as we mentioned above, the obstacles are not
necessarily the translations of $O_{\mathbf{0}}$, and, of course, the
observable whose averages we take to get the rotation set is
completely different. Let us trace which definitions, results and
proofs of Sections~\ref{sec_prelim}, \ref{sec_rotset} and
\ref{sec_arlarge} remain the same, and which need modifications.
The definitions of \emph{between} and \emph{type} remain the same.
Lemma~\ref{infref} is still valid, but in its proof we have to look at
the trajectory on the torus $\mathbb{R}^2/(2\mathbb{Z})^2$ rather than $\mathbb{R}^2/\mathbb{Z}^2$.
Then the definition of an \emph{admissible sequence} remains the same.
The first part of Theorem~\ref{thmadm} and its proof remains the same,
but in the proof of the part about periodic trajectories we have to be
careful. The point ${\mathbf{x}}_0+{\mathbf{p}}$ from the last paragraph of the proof has to
be replaced by a point that after folding (the operation reverse to the
unfolding) becomes ${\mathbf{x}}_0$. This gives us a periodic orbit in the
unfolding that projects (folds) to a periodic orbit in the square.
Moreover, there may be a slight difference between the square case and
the torus case if we want to determine the least discrete period of
this orbit (where in the square case, in analogy to the torus case, we
count only reflections from the obstacle). In the torus case it is
clearly the same as the least period of the type. In the square case
this is not necessarily so. For instance, if the obstacle is a disk
centered at the origin, the orbit that goes vertically from the
highest point of the disk, reflects from the upper side of the square
and returns to the highest point of the disk, has discrete period 1 in
the above sense. However, its type is periodic of period 2 and in the
unfolding it has period 2. Fortunately, such things are irrelevant for
the rest of our results.
Lemma~\ref{unique}, Corollary~\ref{shortest} and their proofs remain
the same as in the torus case. The same can be said about the part of
Lemma~\ref{const} that refers to the lengths of trajectory pieces.
The definition of the graph $G$ has to be modified. This is due to the
fact that the conditions~(\ref{adm3}) and~(\ref{adm4}) cannot be restated in
the same way as in the torus case, because now not only translations,
but also symmetries are involved (the obstacle need not be symmetric,
and the unfolding process involves symmetries about vertical and
horizontal lines). In order to eliminate symmetries, we
enlarge the number of vertices of $G$ four times. Instead of
$O_{\mathbf{0}}$, we look at $\{O_{\mathbf{0}},O_{(1,0)},O_{(0,1)},O_{(1,1)}\}$. For
every ${\mathbf{k}}=(p,q)\in\mathbb{Z}^2$ there is $\zeta({\mathbf{k}})\in Q$, where
$Q=\{(0,0),(1,0),(0,1),(1,1)\}$, such that ${\mathbf{k}}-\zeta({\mathbf{k}})$ has both
components even. Then condition (\ref{adm3}) can be restated as no
obstacle between $O_{\zeta({\mathbf{k}}_{n-1})}$ and $O_{{\mathbf{l}}_n+\zeta({\mathbf{k}}_{n-1})}$, and
condition (\ref{adm4}) as the obstacle $O_{{\mathbf{l}}_n+\zeta({\mathbf{k}}_{n-1})}$ not
between $O_{\zeta({\mathbf{k}}_{n-1})}$ and $O_{{\mathbf{l}}_n+{\mathbf{l}}_{n+1}+\zeta({\mathbf{k}}_{n-1})}$.
Therefore we can take as the vertices of $G$ the pairs $({\mathbf{i}},{\mathbf{j}})$, where
${\mathbf{i}}\in Q$, ${\mathbf{j}}\in\mathbb{Z}^2$, ${\mathbf{i}}\ne {\mathbf{j}}$, and there is no obstacle between $O_{\mathbf{i}}$
and $O_{\mathbf{j}}$. There is an edge in $G$ from $({\mathbf{i}},{\mathbf{j}})$ to $({\mathbf{i}}',{\mathbf{j}}')$ if and
only if $O_{\mathbf{j}}$ is not between $O_{\mathbf{i}}$ and $O_{{\mathbf{j}}+{\mathbf{j}}'-{\mathbf{i}}'}$ and $\zeta({\mathbf{j}})={\mathbf{i}}'$. Then,
similarly as in the torus case, there is a one-to-one correspondence
between admissible sequences and one-sided infinite paths in $G$,
starting at vertices $({\mathbf{0}},{\mathbf{j}})$.
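For instance, $\zeta((3,-2))=(1,0)$ and $\zeta((2,1))=(0,1)$, since $(3,-2)-(1,0)$ and $(2,1)-(0,1)$ have both components even; in particular, the path corresponding to an admissible sequence starts at a vertex of the form $({\mathbf{0}},{\mathbf{j}})$ because ${\mathbf{k}}_0={\mathbf{0}}$ and $\zeta({\mathbf{0}})={\mathbf{0}}$.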
This restriction on the starting point of a path in $G$ creates some
complication, but for every ${\mathbf{i}}\in Q$ there is a similar correspondence
between admissible sequences and one-sided infinite paths in $G$,
starting at vertices $({\mathbf{i}},{\mathbf{j}})$. Therefore, if we want to glue finite
paths, we may choose an appropriate ${\mathbf{i}}$.
Lemma~\ref{fingr} and its proof remain unchanged. The definition of a
\emph{small obstacle} has to be modified slightly. This is due to the
fact, that while a torus is homogeneous, so all positions of an
obstacle are equivalent, this is not the case for a square. An
obstacle placed close to a side of the square will produce a pattern
of obstacles in the unfolding which is difficult to control. Therefore
we will say that the obstacle $O$ is \emph{small} if it is contained
in a closed ball of radius smaller than $\sqrt{2}/4$, centered at
${\mathbf{0}}$. With this definition, Lemmas~\ref{scalar} and~\ref{small} and their proofs remain
unchanged.
We arrived at a point where the situation is completely different than
for the torus case, namely, we have to define the displacement
function. Once we do it, the definitions of the rotation set,
admissible rotation set and rotation vector remain the same, except
that we will call a rotation vector (since it belongs to $\mathbb{R}$) a
\emph{rotation number}.
Since we have to count how many times the trajectory rotates around
the obstacle, the simplest way is to choose a point ${\mathbf{z}}$ in the
interior of $O$ and set $\phi({\mathbf{x}})=\arg({\mathbf{x}}-{\mathbf{z}})/(2\pi)$, where arg is the
complex argument (here we identify $\mathbb{R}^2$ with $\mathbb{C}$). It is not
important that the argument is multivalued, since we are interested
only in its increment along curves. For any closed curve $\Gamma$
avoiding the interior of $O$, the increment of $\phi$ is equal to the
winding number of $\Gamma$ with respect to ${\mathbf{z}}$. Since the whole
interior of $O$ lies in the same component of $\mathbb{C}\setminus\Gamma$,
this number does not depend on the choice of ${\mathbf{z}}$. If $\Gamma$ is not
closed, we can extend it to a closed one, while changing the increment
of $\phi$ by less than 1. Therefore changing ${\mathbf{z}}$ will amount to the
change of the increment of $\phi$ by less than 2. When computing the
rotation numbers, we divide the increment of $\phi$ by the length of
the trajectory piece, and this length goes to infinity. Therefore in
the limit a different choice of ${\mathbf{z}}$ will give the same result. This
proves that the rotation set we get is independent of the choice of
${\mathbf{z}}$.
The proofs of the results of Section~\ref{sec_rotset} rely on the
second part of Lemma~\ref{const}, which we have not discussed yet. The
possibility of the trajectory pieces crossing the obstacles, mentioned
in that lemma, was necessary only for the proof of its first part, so
we do not have to worry about it now. However, we have to make an
additional assumption that $B$ is admissible. This creates no problem
either, because this is how we use it later. Since we have changed
many details here, it makes sense to state exactly what we will be proving.
\begin{lemma}\label{new_const}
For every finite admissible sequence $B=({\mathbf{k}}_n)_{n=0}^s$ of elements of
$\mathbb{Z}^2$ the increments of $\phi$ along trajectory pieces of type $B$
differ by less than $2$.
\end{lemma}
\begin{proof}
Note that the ``folding'' map $\pi:\overline{\mathbb{R}^2\setminus\bigcup_{{\mathbf{k}}
\in\mathbb{Z}^2}O_{\mathbf{k}}}\to\overline{S\setminus O}$ is continuous. Therefore if
curves $\Gamma$ and $\gamma$ in $\overline{\mathbb{R}^2\setminus\bigcup_{{\mathbf{k}}
\in\mathbb{Z}^2}O_{\mathbf{k}}}$ with the common beginning and common end are homotopic
then $\pi(\Gamma)$ and $\pi(\gamma)$ are homotopic, so the increments
of $\phi$ along them are the same. If $\Gamma$ and $\Gamma'$ are two
trajectory pieces as in the statement of the lemma, then $\Gamma'$ can
be extended to $\gamma$ with the same beginning and end as $\Gamma$,
with the change of the increment of $\phi$ along its projection by
$\pi$ less than 2 (1 at the beginning, 1 at the end). Thus, it
suffices to show that this can be done in such a way that $\Gamma$ and
$\gamma$ are homotopic.
Therefore we have to analyze what may be the reasons for $\Gamma$ and
$\gamma$ not to be homotopic. Extending $\Gamma'$ to $\gamma$ can be
done in the right way, so this leaves two possibly bad things that we
have to exclude. The first is that when going from $O_{{\mathbf{k}}_i}$ to
$O_{{\mathbf{k}}_{i+2}}$ via $O_{{\mathbf{k}}_{i+1}}$, we pass with $\Gamma$ on one side of
$O_{{\mathbf{k}}_{i+1}}$ and with $\Gamma'$ on the other side of $O_{{\mathbf{k}}_{i+1}}$.
The second one is that when going from $O_{{\mathbf{k}}_i}$ to $O_{{\mathbf{k}}_{i+1}}$, we
pass with $\Gamma$ on one side of some $O_{\mathbf{j}}$, and with $\Gamma'$ on
the other side of $O_{\mathbf{j}}$. However, the first possibility contradicts
condition (\ref{adm4}) from the definition of admissibility and the
second one contradicts condition (\ref{adm3}). This completes the
proof.
\end{proof}
With Lemma~\ref{new_const} replacing Lemma~\ref{const}, the rest of
the results of Section~\ref{sec_rotset} (except the last two theorems and
the corollary) and their proofs remain the same
(with obvious minor modifications, for instance due to the fact that
constants in Lemmas~\ref{const} and~\ref{new_const} are different).
Let us state the main theorems we get in this way.
\begin{theorem}\label{new_convex}
The admissible rotation set of a billiard in a square with a small
obstacle is convex, and consequently, it is a closed interval
symmetric with respect to $0$.
\end{theorem}
\begin{theorem}\label{new_perdense}
For a billiard in a square with a small obstacle, rotation numbers of
periodic orbits of admissible type are dense in the admissible
rotation set.
\end{theorem}
\begin{theorem}\label{new_cominv}
For a billiard in a square with a small obstacle, if $u$ is a number
from the interior of $AR$, then there exists a compact, forward
invariant subset $Y$ of the phase space, such that every trajectory
from $Y$ has admissible type and rotation number $u$.
\end{theorem}
\begin{corollary}\label{new_pointwise}
For a billiard in a square with a small obstacle, if $u$ is a number
from the interior of $AR$, then there exists a trajectory of
admissible type with rotation number $u$.
\end{corollary}
\begin{corollary}\label{new_ergodic}
For a billiard in a square with a small obstacle, if $u$ is a number
from the interior of $AR$, then there exists an ergodic invariant
probability measure in the phase space, for which the integral of the
displacement is equal to $u$ and almost every trajectory is of
admissible type.
\end{corollary}
Since the billiard in the square is defined only in dimension 2, most
of the results of Section~\ref{sec_arlarge} do not have counterparts
here. However, we can investigate what happens to $AR$ as the size of
the obstacle decreases. Moreover, here the position of the obstacle
matters, so the size of the obstacle should be measured by the radius
of the smallest ball centered in the origin that contains it. The
following theorem should be considered in the context of
Theorem~\ref{wholer} that states that the full rotation set, $R$, is
equal to $[-\sqrt{2}/4,\sqrt{2}/4]$.
\begin{theorem}\label{new_epsilon}
For every $\varepsilon>0$ there exists $\delta>0$ such that the set $AR$
contains the interval $[-\sqrt{2}/4+\varepsilon,\sqrt{2}/4-\varepsilon]$
whenever the obstacle is contained in a disk centered
at the origin with diameter less than $\delta$.
\end{theorem}
\begin{proof}
Let us estimate the rotation number of the curve $V$ that
in the unfolding is a straight line segment from the origin to the
point $(2n+1,2n)$. We are counting the displacement as the rotation
around the center of the square (as always, in multiples of
$2\pi$). In particular, the displacement for $V$ makes sense, since at
its initial and terminal pieces the argument is constant.
Compare $V$ to the curve $V'$ that in the unfolding is a
straight line segment from $(0,-1/2)$ to $(2n+1,2n+1/2)$. In the
square, $V'$ goes from the lower side to the right one, to the upper
one, to the left one, to the lower one, etc., and it reflects from
each side at its midpoint. Moreover, the distances of the endpoints of
$V$ from the corresponding endpoints of $V'$ are $1/2$. Therefore,
when we deform linearly $V$ to $V'$, we do not cross any point of
$\mathbb{Z}^2$. This means that the difference of the displacements along
those trajectories in the square is less than 2 (this difference may
occur because they end at different points). The displacement along
$V'$ is $n+1/2$, so the displacement along $V$ is between $n-3/2$ and
$n+5/2$.
The length of $V$ is between the length of $V'$ minus 1 and the length
of $V'$, that is, between $(2n+1)\sqrt{2}-1$ and $(2n+1)\sqrt{2}$.
Therefore the rotation number of $V$ behaves like $n/(2n\sqrt{2})$, so it goes
to $\sqrt{2}/4$ as $n$ goes to infinity.
If we fix $\varepsilon>0$ then there is $n$ such that the rotation number of
$V$ is larger than $\sqrt{2}/4-\varepsilon/4$. Then we can choose $\delta>0$
such that if the obstacle is contained in a disk centered at the
origin with diameter less than $\delta$ then $(2n+1,2n)$ is a vertex of
the graph $G$ and any trajectory piece $T_n$ that in the unfolding is a
straight line segment from a point of $O_{\mathbf{0}}$ to a point of
$O_{(2n+1,2n)}$ has rotation number differing from the rotation number
of $V$ by less than $\varepsilon/4$ (when we deform linearly $V$ to get this
trajectory piece, we do not cross any point of $\mathbb{Z}^2$). Hence, this
rotation number is larger than $\sqrt{2}/4-\varepsilon/2$.
Now we construct a periodic orbit of admissible type with the rotation
number differing from the rotation number of $T_n$ by less than
$\varepsilon/2$. By Lemma~\ref{small}, there is a loop $A_n$ in $G$, passing
through ${\mathbf{v}}_n=(2n+1,2n)$ and at most 2 other vertices, both from $U$.
As $n$ goes to infinity, clearly the ratios of displacements and
of lengths of $T_n$ and $A_n$ go to 1. Therefore the ratio of their
rotation numbers also goes to 1, and if $n$ is large enough, the
difference between them will be smaller than $\varepsilon/2$.
This gives us $\delta$ such that if the obstacle is
contained in a disk centered at the origin with diameter less than
$\delta$, then the set $AR$ contains a number $v>\sqrt{2}/4-\varepsilon$.
Since $AR$ is symmetric with respect to 0, it contains also the number
$-v$, and since it is connected, it contains the interval
$[-\sqrt{2}/4+\varepsilon,\sqrt{2}/4-\varepsilon]$.
\end{proof}
Theorem~\ref{st15} also has its counterpart for the billiard in the
square.
\begin{theorem}\label{new_st15}
For a billiard in a square with a small obstacle, the admissible
rotation set is contained in the open interval
$(-\sqrt{2}/4,\sqrt{2}/4)$. In particular, $AR\ne R$.
\end{theorem}
Since the proof of this theorem utilizes a construction introduced in
the proof of Theorem~\ref{thm47}, we postpone it until the end of
Section~\ref{sec_conn}.
\section{Results on the full rotation set}\label{sec_conn}
In this section we will prove several results on the full rotation set
$R$ in both cases, not only about the admissible rotation set. Some of
the proofs apply to a much more general situation than billiards, and
then we will work under fairly general assumptions.
Let $X$ be a compact metric space and let $\Phi$ be a \emph{continuous
semiflow} on $X$. That is, $\Phi:[0,\infty)\times X\to X$ is a
continuous map such that $\Phi(0,x)=x$ and $\Phi(s+t,x)=
\Phi(t,\Phi(s,x))$ for every $x\in X$, $s,t\in[0,\infty)$. We will
often write $\Phi^t(x)$ instead of $\Phi(t,x)$. Let $\xi$ be an
\emph{time-Lipschitz continuous observable cocycle} for $(X,\Phi)$
with values in $\mathbb{R}^m$, that is, a continuous function
$\xi:[0,\infty)\times X\to\mathbb{R}^m$ such that
$\xi(s+t,x)=\xi(s,\Phi^t(x))+\xi(t,x)$ and $\|\xi(t,x)\|\le Lt$ for
some constant $L$ independent of $t$ and $x$.
The \emph{rotation set} $R$ of $(X,\Phi,\xi)$ is the set of all limits
$$\lim_{n\to\infty}\frac{\xi(t_n,x_n)}{t_n},\text{\rm\ \ where\ \ }
\lim_{n\to\infty}t_n=\infty.$$
By the definition, $R$ is closed, and is contained in the closed ball
in $\mathbb{R}^m$, centered at the origin, of radius $L$. In particular, $R$
is compact.
\begin{theorem}\label{conn}
The rotation set $R$ of a continuous semiflow $\Phi$ on a connected
space $X$ with a time-Lipschitz continuous observable cocycle $\xi$ is
connected.
\end{theorem}
\begin{proof}
Set
$$\psi(t,x)=\frac{\xi(t,x)}{t}$$
for $t>0$ and $x\in X$. Then the function $\psi$ is continuous on the
space $(0,\infty)\times X$. For $n\ge 1$, set $K_n=\psi([n,\infty)
\times X)$. With this notation, we have
$$R=\bigcap_{n=1}^\infty\overline{K_n}.$$
The set $[n,\infty)\times X$ is connected, so $K_n$ is connected, so
$\overline{K_n}$ is connected. Moreover, $K_n$ is contained in the
closed ball in $\mathbb{R}^m$, centered at the origin, of radius $L$.
Therefore $\overline{K_n}$ is compact. Thus, $(\overline{K_n})_{n=1}
^\infty$ is a descending sequence of compact connected sets, and so
its intersection $R$ is also connected.
\end{proof}
In the case of a billiard on a torus that we are considering, the
phase space $X$
is the product of the torus minus the interior of the obstacles with
the unit sphere in $\mathbb{R}^m$ (velocities). At the boundaries of the
obstacles we glue together the pre-collision and post-collision
velocity vectors. This space is compact, connected, and our
semiflow (even a flow, since we can move backwards in time, too) is
continuous. The observable cocycle is the displacement function.
Clearly, it is time-Lipschitz with the constant $L=1$ and continuous.
Thus, Theorem~\ref{conn} applies, and the rotation set $R$ is
connected.
A similar situation occurs in the square. Here there is one more
complication, due to the fact that there are trajectories passing
through vertices. The gluing rule at a vertex $q$ of the square is
that the phase points $(q,v)$ and $(q,-v)$ must be identified for all
relevant velocities $v$. Then the flow is also continuous in this
case, so Theorem~\ref{conn} also applies. It means that the rotation
set $R$ is a closed interval, symmetric with respect to 0.
When we work with invariant measures, we have to use a slightly
different formalism. Namely, the observable cocycle $\xi$ has to be
the integral of the \emph{observable function} $\zeta$ along an orbit
piece. That is, $\zeta:X\to\mathbb{R}^m$ is a bounded Borel function,
integrable along the orbits, and
$$\xi(t,x)=\int_0^t \zeta(\Phi^s(x))\;ds.$$
Assume that $\Phi$ is a continuous flow. Then, if $\mu$ is a
probability measure, invariant and ergodic with respect to $\Phi$,
then by the Ergodic Theorem, for $\mu$-almost every point $x\in X$ the
\emph{rotation vector}
$$\lim_{t\to\infty}\frac{\xi(t,x)}{t}$$
of $x$ exists and is equal to $\int_X\zeta(x)\;d\mu(x)$.
Problems may arise if we want to use weak-* convergence of measures.
If $\zeta$ is continuous and $\mu_n$ weak-* converge to $\mu$ then
the integrals of $\zeta$ with respect to $\mu_n$ converge to the
integral of $\zeta$ with respect to $\mu$. However, for a general
$\zeta$ this is not true. Note that in the cases of billiards that we
are considering, $\zeta$ is the velocity vector. It has a
discontinuity at every point where a reflection occurs (formally
speaking, it is not even well defined at those points; for definiteness
we may define it there in any way so that it remains bounded and
Borel). However, it is
well known that the convergence of integrals still holds if the set of
discontinuity points of $\zeta$ has $\mu$-measure zero (see, for
example, \cite{bauer}, Theorem~7.7.10, page 234).
Let us call an observable \emph{almost continuous} if the set of its
discontinuity points has measure zero for every $\Phi$-invariant
probability measure. By what we said above, the following lemma holds.
\begin{lemma}\label{alcont}
If probability measures $\mu_n$ weak-* converge to a $\Phi$-invar\-iant
probability measure $\mu$ and $\zeta$ is almost continuous then
$$\lim_{n\to\infty}\int_X\zeta(x)\;d\mu_n(x)=\int_X\zeta(x)\;d\mu(x).$$
\end{lemma}
We have to show that this lemma is relevant for billiards.
\begin{lemma}\label{acobs}
Let $(\Phi,X)$ be a billiard flow in the phase space. Then the
velocity observable function $\zeta$ is almost continuous.
\end{lemma}
\begin{proof}
The only discontinuity points of $\zeta$ are on the
boundary of the region $\Omega$ in which we consider the billiard.
Take a small piece $Y$ of this set. Then for small $t\ge 0$ the sets
$\Phi(t,Y)$ are pairwise disjoint (if $y_1\in\Phi(t_1,Y)$ and
$y_2\in\Phi(t_2,Y)$ with $t_1\ne t_2$ have the same velocity, then their
positions in $\Omega$ are different). However, by the invariance of
$\mu$, their measures are the same. Since the parameter $t$ varies in
an uncountable set, those measures must be 0.
\end{proof}
The following theorem is an analogue of Theorem~2.4 from \cite{MZ2}.
Its proof is basically the same as there (except that here we deal
with a flow and a discontinuous observable), so we will omit some
details. A point of a convex set $A$ is an \emph{extreme} point of $A$
if it is not an interior point of any straight line segment contained
in $A$.
\begin{theorem}\label{extreme}
Let $(\Phi,X)$ be a continuous flow and let $\zeta:X\to\mathbb{R}^m$ be an almost
continuous observable function. Let $R$ be the rotation set of
$(\Phi,X,\zeta)$. Then for any extreme vector ${\mathbf{u}}$ of the convex hull
of $R$ there is a $\Phi$-invariant ergodic probability measure $\mu$
such that $\int_X\zeta(x)\;d\mu(x)={\mathbf{u}}$.
\end{theorem}
\begin{proof}
There is a sequence of trajectory pieces such that the average
displacements on those pieces converge to ${\mathbf{u}}$. We can find a
subsequence of this sequence such that the measures equidistributed on
those pieces weak-* converge to some probability measure $\nu$. This
measure is automatically invariant. Therefore, by Lemma~\ref{alcont},
we can pass to the limit with the integrals of $\zeta$, and we get
$\int_X\zeta(x)\;d\nu(x)={\mathbf{u}}$. We decompose $\nu$ into ergodic
components, and since ${\mathbf{u}}$ is an extreme point of the convex hull of
$R$, for almost all ergodic components $\mu$ of $\nu$ we have
$\int_X\zeta(x)\;d\mu(x)={\mathbf{u}}$.
\end{proof}
By Lemma~\ref{acobs}, we can apply the above theorem to our billiards.
In particular, by the Ergodic Theorem, for any extreme vector ${\mathbf{u}}$ of
the convex hull of $R$ there is a point with the rotation vector ${\mathbf{u}}$.
Now we will take a closer look at billiards on the torus $\mathbb{T}^m$ with one
obstacle (not necessarily small). We know that its rotation set $R$ is
contained in the unit ball. It turns out that although (by
Corollary~\ref{st14}) $R$ can fill up almost the whole unit ball,
still it cannot reach the unit sphere $S^{m-1}$ on a big set. Let us
start with the following theorem.
\begin{theorem}\label{straight}
For a billiard on the torus $\mathbb{T}^m$ with one obstacle, if ${\mathbf{u}}$ is a
rotation vector of norm $1$, then there is a full trajectory in $\mathbb{R}^m$
which is a straight line of direction ${\mathbf{u}}$.
\end{theorem}
\begin{proof}
Clearly, such ${\mathbf{u}}$ is an extreme point of the convex hull of $R$.
By Theorem~\ref{extreme}, there is an ergodic measure $\mu$ such
that the integral of the velocity with respect to $\mu$ is ${\mathbf{u}}$.
Thus, the support of $\mu$ is contained in the set of points of the
phase space for which the velocity component is ${\mathbf{u}}$. Take $t$
which is smaller than the distance between any two obstacles in the
lifting. Then $\mu$-almost all full trajectories of the billiard
have direction ${\mathbf{u}}$ at all times $k t$, for any integer $k$. Such
a trajectory has to be a straight line with direction ${\mathbf{u}}$.
\end{proof}
The following lemma has been proved in \cite{Sz} as Lemma~A.2.2.
\begin{lemma}\label{szasz}
For every dimension $m>1$ and every number $r>0$ there are finitely
many nonzero vectors ${\mathbf{x}}_1,{\mathbf{x}}_2,\dots,{\mathbf{x}}_k\in\mathbb{Z}^m$ such that whenever a
straight line $L$ in $\mathbb{R}^m$ is at distance at least $r$ from
$\mathbb{Z}^m$, then $L$ is orthogonal to at least one of the vectors ${\mathbf{x}}_i$.
In other words, $L$ is parallel to the orthocomplement (lattice)
hyperplane $H_i=({\mathbf{x}}_i)^\perp$.
\end{lemma}
As an immediate consequence of Theorem~\ref{straight} and
Lemma~\ref{szasz}, we get the following result.
\begin{theorem}\label{sphere}
For a billiard on the torus $\mathbb{T}^m$ with one obstacle, the intersection
$R\cap S^{m-1}$ is contained in the union of finitely many great
hyperspheres of $S^{m-1}$. The hyperplanes defining these
great hyperspheres can be taken as in Lemma~\ref{szasz}.
\end{theorem}
For the billiard in the square with one obstacle, we can determine the
full rotation set much more precisely than in the torus case. In the theorem
below we do not need to assume that the obstacle is small or even
convex. However, we assume that it is contained in the interior of the
square and that its boundary is smooth.
\begin{theorem}\label{thm47}
For a billiard with one obstacle in a square, the rotation set is
contained in the interval $[-\sqrt{2}/4,\sqrt{2}/4]$.
\end{theorem}
\begin{proof}
Let $Y$ be the square minus the obstacle. In order to measure the
displacement along a trajectory piece $T$, we have to trace how its
lifting to the universal covering space of $Y$ behaves. Since $Y$ is
homeomorphic to an annulus, this universal covering has a natural
structure of a strip in the plane. Without any loss of generality, we
may assume that the displacement along $T$ is positive.
We divide $Y$ into 4
regions, as in Figure~\ref{fig47}. The line dividing regions 1
and 2 is a segment of the lowest horizontal straight line such that
the whole obstacle is below it; this segment has only its left
endpoint belonging to the obstacle. The other three dividing lines are
chosen in the same way after turning the whole picture by 90, 180 and
270 degrees. Note that it does not matter much here to which
region the points of the division lines belong, so this is a partition
modulo its boundaries.
\begin{figure}\refstepcounter{figure}\label{fig47}\addtocounter{figure}{-1}
\begin{center}
\includegraphics[width=2.5truein]{fig47}
\caption{Four regions}
\end{center}
\end{figure}
In the universal covering of $Y$, our four regions lift to
infinitely many copies, ordered as
\begin{equation}\label{order}
\dots,1_{-1},2_{-1},3_{-1},4_{-1},1_0,2_0,3_0,4_0,1_1,2_1,3_1,4_1,
\dots
\end{equation}
Here the main number shows to which region in $Y$ our region in the
lifting projects, and the subscript indicates the branch of the
argument (as in the definition of the displacement) that we are using.
Thus, if in the lifting the trajectory goes from, say, $2_0$ to
$1_{23}$, then the displacement is 22, up to an additive constant.
This constant does not depend on the trajectory piece we consider, so
it disappears when we pass to the limit to determine the rotation set.
Since we assumed that the displacement along $T$ is positive, in
general the trajectory moves in the order given in (\ref{order}),
although of course it can go back and forth. Look at some region,
say, $1_n$, which is (in the universal covering) between the region
where $T$ begins and the region where $T$ ends. After $T$ leaves
$1_n$ for good, it can bounce between the left and right sides
several times. As this happens, the $y$-coordinate on the trajectory
must grow, so at some point the trajectory will hit the upper side of
the square for the first time after it leaves $1_n$ for good (unless
it ends before it does this). We denote the time of this collision by
$t(1_n)$. Then we use analogous notation for the time of the first hit
of the left side after leaving $2_n$ for good, etc.
After the trajectory hits the top side at $t(1_n)$, it moves to the
left or right (we mean the horizontal component of the velocity). It
cannot move vertically, because then it would return to the region
$1_n$. If it is moving to the left, it is still in the region $2_n$,
or it just left it, but did not hit the left side of the square yet.
Therefore $t(1_n)<t(2_n)$. If it is moving to the right, it is in, or
it will return to, the region $2_n$. Therefore also in this case
$t(1_n)<t(2_n)$.
In such a way we get an increasing sequence of times when the
trajectory $T$ hits the consecutive sides of the square (in the
lifting). By joining those consecutive reflection points by segments,
we get a piecewise linear curve $\gamma$, which is not longer than $T$,
but the displacement along $\gamma$ differs from the displacement
along $T$ at most by a constant independent of the choice of $T$. This
curve $\gamma$ goes from the right side of the square to the upper
one, to the left one, to the lower one, to the right one, etc. This is
exactly the same behavior that is displayed by the trajectory $\Gamma$
in the square without an obstacle, that starts at the midpoint of the
right side and goes in the direction of $(-1,1)$. Therefore $\gamma$
and $\Gamma$ pass through the same squares in the unfolding. We
terminate $\Gamma$ in that square in the unfolding in which $\gamma$
ends. Since in the unfolding $\Gamma$ is a segment of a straight line,
it is shorter than $\gamma$, again up to a constant independent of
$\gamma$, and those two curves have the same displacement (as always,
up to a constant). This shows that the rotation number of the
trajectory piece $T$ is not larger than for the curve $\Gamma$, plus a
constant that goes to 0 as the length of $T$ goes to infinity. Since
the rotation number of $\Gamma$ (let us think now about the infinite
trajectory) is $\sqrt{2}/4$, the limit in the definition of the
rotation set cannot exceed this number.
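In fact, the value $\sqrt{2}/4$ can be computed directly: between
consecutive returns to the right side, $\Gamma$ reflects once from the
midpoint of each of the four sides, making one full turn around the
center (displacement 1) while traveling the distance $2\sqrt{2}$, so its
rotation number is $1/(2\sqrt{2})=\sqrt{2}/4$.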
\end{proof}
The two trajectories in the square without an obstacle, described in
the last paragraph of the above proof, are also trajectories of any
square billiard with one small obstacle. Therefore in this case the
rotation set $R$ contains the interval $[-\sqrt{2}/4,\sqrt{2}/4]$. In
such a way we get the following result.
\begin{theorem}\label{wholer}
For a billiard with one small obstacle in a square, the rotation set
is equal to the interval $[-\sqrt{2}/4,\sqrt{2}/4]$.
\end{theorem}
Now we present the proof of Theorem~\ref{new_st15} that has been
postponed until now.
\begin{proof}[Proof of Theorem~\ref{new_st15}]
Let us use the construction from the proof of Theorem~\ref{thm47} and look
at a long trajectory piece $T$ of admissible type. We get an
increasing sequence of times when the trajectory $T$ hits the
consecutive sides of the square (in the lifting). Construct a
partial unfolding of $T$, passing to a neighboring square only at
those times. In such a way we get a piecewise linear curve $T'$ which
sometimes reflects from the sides of the unfolded square, and
sometimes goes through them. Its length is the same as the length of
$T$. Moreover, the curve $\gamma$, constructed in the proof of
Theorem~\ref{thm47} starts and terminates in the same squares as $T'$
and as we know from that proof, the displacements along $\gamma$ and
$T$ differ at most by a constant independent of $T$. It is also clear
that the same holds if we replace $\gamma$ by the segment $\Gamma$
(also from the proof of Theorem~\ref{thm47}). Thus, up to a constant
that goes to 0 as the length of $T$ goes to infinity, the rotation
number of $T$ is $\sqrt{2}/4$ multiplied by the length of $\Gamma$ and
divided by the length of $T'$.
For the same reasons as in the proof of Theorem~\ref{st15}, there exist
positive constants $c_1<c_2$ and $\alpha>0$, such that for every
trajectory piece of admissible type the distance between two
consecutive reflections from obstacles is contained in $[c_1,c_2]$, and the direction
of a trajectory piece of admissible type changes by at least $\alpha$
at each reflection. However, we have to take into account that the
direction of $T'$ can change also at the reflections from the
boundaries of the squares, and then we do not know how the angle changes.
Thus, either immediately before or immediately after each reflection from
an obstacle there must be a piece of $T'$ where the direction differs from
the direction of $\Gamma$ by at least $\alpha/2$. Such a piece has length
larger than or equal to the distance from the obstacle to the boundary
of the square, which is at least $c_1'=1/2-\sqrt{2}/4$. Those pieces are
alternating with the pieces of $T'$ of length at most $c_2$ each, where we
do not know what happens with the direction.
This means that as we follow $T'$ (except on an
initial piece of length bounded by $c_2$),
we move in a direction that differs from the direction of $\Gamma$
by at least $\alpha/2$ for at least a fraction $c_1'/(c_1'+c_2)$ of the time.
Therefore, in the limit as the length of $T$ goes to infinity, the
length of $\Gamma$ divided by the length of $T'$ is not larger than
$$b=\frac{c_2}{c_1'+c_2}+\frac{c_1'}{c_1'+c_2}\cos\frac{\alpha}{2}.$$
This proves that $AR\subset [-b\sqrt{2}/4,b\sqrt{2}/4]$. Since $b<1$,
this completes the proof.
\end{proof}
This is a preprint of an article that appears in the Journal of Quantitative Analysis in Sports, Volume 8, Issue 3. The final version is available from \url{http://dx.doi.org/10.1515/1559-0410.1471}
\section{Introduction}
The highest level of collegiate football, the Football Bowl Subdivision (FBS, historically Division I-A), is undergoing a redesign of its postseason structure. Currently, the FBS is unique in that its season does not end with a tournament to decide a champion. After the regular season, the top performing teams are invited to one of several ``bowl games.'' Through 1997, the bowl assignments were strictly a function of conference membership (with special rules for non-conference teams), meaning the top two ranked teams often played in different bowl games. After the bowl games, the national champion was decided by the Associated Press and the Coaches polls. The championship was split when the polls disagreed. To prevent this, starting in 1998 the Bowl Championship Series (BCS) began to seed the Fiesta, Orange, Rose, and Sugar Bowls with certain conference champions and other highly ranked teams. The ranking procedures used by the BCS have changed since 1998, but have always been a weighted average of two polls and an average of several mathematical models, referred to as ``computer rankings''. The computer rankings were originally allowed to use margin of victory, but in an attempt to prevent teams from running up scores to inflate their ratings, only win/loss information may now be used (ties are necessarily settled in overtime). This restriction to a binary response complicates the ranking process. Analyzing the binary game outcomes requires modeling assumptions and decisions that are not needed when modeling continuous outcomes, such as margin of victory. We explore the sensitivity of team rankings to these assumptions.
\citet{stern04} provide a detailed history of college football rankings as well as descriptions of the models currently employed by the BCS. An additional literature review is provided by \citet{west08}. \citet{stern04} discuss popular controversies surrounding the BCS system through the beginning of the 2004 season, and \citet{stern06} calls for a boycott of the BCS by quantitative analysts. Due in part to widespread criticism (and perhaps in larger part to potential television revenue), the current BCS structure will be revised to include a four team playoff beginning with the 2014 season. The tournament will be seeded by a selection committee, which may choose to factor mathematical rankings into their decision. A similar selection committee uses the Ratings Percentage Index (RPI) and proprietary models of Jeff Sagarin to help seed the NCAA Men's Division I Basketball Championship \citep{west06}.
There are several mathematical models available for ranking teams. One approach for continuous outcomes is to model a rating $\eta_i$ for each team $i$, and compute the predicted win margin of team $i$ over team $j$ via a function of the difference $\eta_i-\eta_j$. Such models require the margin of victory from past games. \citet{harville77} and \citet{harville03} develop such a model for continuous responses, and provide methods for limiting the utility of running up the score beyond a threshold $C$ by either truncating the win margins at $C$, or by scaling the margins that exceed $C$ via a hazard function. \citet{gill09} examine differences in the rankings resulting from treating the team effects as fixed or as random, as well as the modifications proposed by \citet{harville03}.
Despite the existence of models that minimize the advantage of running up the score, such as those proposed by \citet{harville03}, the BCS has decided to use only win/loss information in the computer rankings. In order to model the probability that team $i$ defeats team $j$, the Thurstone-Mosteller model \citep{thurstone,mosteller} calculates $\Phi(\eta_i-\eta_j)$, where $\Phi$ is the cumulative distribution function of the standard normal distribution. This is similar to the Bradley-Terry model \citep{bradley} used by \citet{keener}. However, all of these models encounter an infinite likelihood if any teams have a perfect record, due to quasi-complete separation of the data \citep{allison}. \citet{mease} proposes a penalized likelihood approach which circumvents the difficulty associated with the presence of undefeated or winless teams. In essence, the fixed effect model proposed by \citet{mease} becomes a random effects model with his introduction of a penalty function. Modeling the team ratings $\eta_i$ with random instead of fixed effects avoids the problem of complete or quasi-complete separation. In this case, the empirical best linear unbiased predictors (EBLUPs) of the random effects are sorted to form the team rankings.
The penalized likelihood used by \citet{mease} requires a choice of a penalty function. \citet{annis} express concern that this subjective choice may influence the rankings produced by the model. The model proposed by \citet{mease} may be seen as a special case of the generalized linear mixed model (GLMM) we propose: it arises as one particular approximation of the marginal likelihood of our GLMM. Furthermore, \citet{mease} uses a probit link, and mentions the possibility of using a logit link as an alternative, noting that the choice between the two ``did not affect the resulting rankings substantially for the football seasons considered.''
The choice of link function is often minimized in discussions of generalized linear mixed models, and the choice of integral approximation depends on computational feasibility as determined by the structure of the random effects. It is important to note that, even if the true parameter values were known, the EBLUPs in a GLMM depend on the chosen integral approximation. Finally, it is well known that maximum likelihood (ML) estimates for variance components are subject to a downward bias, and that restricted maximum likelihood (REML) estimation procedures correct for this bias \citep{1977}. In this paper we explore how these modeling choices affect the team rankings.
We present a GLMM for the ranking of college football teams using only win/loss information. Our model structure is the same as the one proposed by \citet{mease}, except we account for the teams via random instead of fixed effects. Furthermore, we consider treating the FBS and FCS (Football Championship Subdivision, formerly Division I-AA) divisions as different populations, whereas \citet{mease} consolidates all FCS teams into a single effect that is regarded as member of the FBS population. We show that our GLMM is a generalization of the penalized likelihood proposed by \citet{mease}, and explore the sensitivity of the rankings from our model to the choices of
\begin{enumerate}
\item link function
\item integral approximation used for the marginal likelihood of our GLMM
\item distribution of the random team effects
\item consolidating FCS teams into a single ``team'' vs. treating FCS as a separate population
\item ML vs. REML
\end{enumerate}
The results from our analysis show that the changes in team ratings resulting from varying these assumptions are small relative to the standard errors associated with each team's rating. However, these changes result in a reordering of the team rankings that may lead to practically significant differences in the bowl selection process. There is limited information in the binary win/loss outcome of each game, and it is not surprising that changing properties of the model may lead to teams with similar records swapping ranks. For some of the assumptions listed above, especially points 1 and 3, there may not be a clear set of best choices. However, these choices may affect the ordering of the team rankings.
We present the model in Section \ref{sec:model} and discuss the parameter estimation in Section \ref{sec:estimation}. In Section \ref{sec:application}, we compare the rankings resulting from our model under different assumptions for the seasons 2008-2011. In each year, we use data through the end of the conference championships, excluding the outcomes of the bowl games. Thus our examples use the same data that were used to compile the final BCS rankings in each year, which were used as a basis for the bowl invitations.
\section{The Model}\label{sec:model}
\cite{mease} models team ratings with fixed effects $\boldsymbol{\theta}=(\theta_1,\ldots,\theta_{p+1})$ via the likelihood
\begin{align}
l(\boldsymbol{\theta})=&\prod_{(i,j)\in S}[\Phi(\theta_i-\theta_j)]^{n_{ij}}\label{m1}\\
&\times\prod_{i=1}^p\Phi(\theta_i)\Phi(-\theta_i)\label{m2}\\
&\times \Phi(\theta_{p+1})\Phi(-\theta_{p+1}) \times \prod_{(i,j)\in S^*}[\Phi(\theta_i-\theta_j)]^{n_{ij}}\label{m3}
\end{align}
where $S$ is the set of all ordered pairs $(i,j)$ for which team $i$ defeated team $j$, and both teams belong to the FBS. $p$ is the number of FBS teams, and $\theta_{p+1}$ is a single effect that is used to represent every FCS team. $S^*$ is defined in the same way as $S$, except one of the teams in each pair is from the FBS and the other is from the FCS. \citet{mease} refers to Equations (\ref{m1})-(\ref{m3}) as Parts 1, 2, and 3, respectively. Part 1 models the probability of the outcome of each game using the team ratings, and implicitly considers ``strength of schedule.'' The second part is a penalty function that allows the model to be estimated in the presence of undefeated or winless teams: using Part 1 alone leads to an infinite likelihood in these cases. The third and final part models games between FBS and FCS teams using a single team effect to represent all FCS teams.
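For concreteness, the log of the likelihood (\ref{m1})--(\ref{m3}) is easy to evaluate for a given ratings vector. The R sketch below is only an illustration; the data frames \texttt{fbs} and \texttt{fcs} are hypothetical encodings of the sets $S$ and $S^*$, with integer winner/loser columns indexing $\boldsymbol{\theta}$ and with every FCS opponent indexed as $p+1$.
\begin{verbatim}
# Log of the penalized likelihood in Parts 1-3 for a ratings vector theta of
# length p + 1 (theta[p + 1] is the single consolidated FCS effect).
mease_loglik <- function(theta, fbs, fcs, p) {
  part1 <- sum(pnorm(theta[fbs$winner] - theta[fbs$loser], log.p = TRUE))
  part2 <- sum(pnorm(theta[1:p], log.p = TRUE) +
               pnorm(-theta[1:p], log.p = TRUE))
  part3 <- pnorm(theta[p + 1], log.p = TRUE) +
           pnorm(-theta[p + 1], log.p = TRUE) +
           sum(pnorm(theta[fcs$winner] - theta[fcs$loser], log.p = TRUE))
  part1 + part2 + part3
}
\end{verbatim}
Maximizing this function over $\boldsymbol{\theta}$ (for example with \texttt{optim}) should reproduce the fixed-effects ratings of \citet{mease}; repeated matchups simply contribute one row per game, which accounts for the exponents $n_{ij}$.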
We propose modeling the team ratings with random instead of fixed effects, and show that the model proposed by \citet{mease} is actually a special case of our random effects model. Treating the teams as random effects requires the specification of a distribution for those effects, as well as the choice of an integral approximation for the resulting generalized linear mixed model. In addition, we propose another method of modeling games between FBS and FCS teams. Besides modeling all FCS teams as a single effect and ignoring FCS games that were not played against an FBS opponent, we model FCS teams as a separate population. Using this approach, we include all of the games played between two FCS teams, but ignore FCS games played against lower-divisional opponents. This introduces two more modeling assumptions. 1) Instead of ignoring FCS games against lower divisional opponents, those opponents could be treated as a single effect, using the same approach as \citet{mease}. Although we do not take this approach, this could protect against the possibility of a successful FCS team being overrated due to an ignored loss against a Division II team. 2) When modeling separate FBS and FCS populations, we may either assume that the populations share a common variance, or we may model a different variance for each population. In Section \ref{sec:application}, we model both pooled and separate variances and compare the resulting differences.
\subsection{Separate FCS Population with Pooled Variance}\label{ssec:fe.p.1}
We first present our model including separate FBS and FCS populations, assuming that the FBS and FCS distributions share a common variance. In Section \ref{ssec:fe.p.2} we describe the changes necessary for modeling different FBS and FCS effect variances, and in Section \ref{ssec:fe.p.0} we treat FCS opponents of FBS teams as a single team in the FBS population.
Our model considers outcomes of FBS and FCS games in a given season. Let $r_i$ be a binary indicator for the outcome of the $i$-th game for $i=1,\ldots,n$, taking the value 1 with a home team win and 0 with a visiting team win. For neutral site games, designate a home team arbitrarily.
We model the rating of the $j$-th team for $j=1,\ldots,p+q$ with a random effect $\eta_j$ assuming $\eta_j\sim N(0,\sigma^2_t)$. $p$ and $q$ represent the number of included FBS and FCS teams, respectively. We assume that the distributions of FBS and FCS ratings share a common variance $\sigma^2_t$, but that the distributions have different means. We account for the difference in means between the two divisions by including the fixed effect $\beta$ in the model. The coefficient $X_i$ for $\beta$ takes the value 1 if the $i$-th game involves an FCS team visiting an FBS team, and 0 otherwise (FBS teams do not travel to play FCS teams). We will refer to $\beta$ as the ``FCS effect.''
Using the threshold model of \citet{mcculloch94}, we assume that the game outcomes are determined by a latent continuous variable $y_i=X_i\beta+\boldsymbol{Z}_i\boldsymbol{\eta}+\epsilon_i$, which may be interpreted as the margin of victory for the home team, but only the binary outcome $r_i=I_{\{y_i>0\}}$ is observed. We thus model the probability $\pi_i$ of a home team win in the $i$-th game as
\[\pi_i=P(X_i\beta+\boldsymbol{Z}_i\boldsymbol{\eta}+\epsilon_i>0) \]
where $\beta$ is the FCS effect, $\boldsymbol{\eta}=(\eta_1,\ldots,\eta_{p+q})\sim N(0,\sigma^2_t\bds{I})$ contains the random effects representing the team ratings, $\bds{\epsilon}=(\epsilon_1,\ldots,\epsilon_n)\sim N(0,\bds{I})$, and $cov(\bds{\eta},\bds{\epsilon})=0$. The assumed distribution of $\epsilon$ determines the link function of the resulting GLMM. Assuming that $\bds{\epsilon} \sim N(0,\bds{I})$ leads to a probit link,
\begin{align*}
r_i|\boldsymbol{\eta}&\sim \te{Bin}(1,\pi_i)\\
\Phi^{-1}(\pi_i)&=X_i\beta+\boldsymbol{Z}_i\boldsymbol{\eta}
\end{align*}
By contrast, assuming that the $\epsilon_i$ are independent and follow a logistic distribution leads to a logit link.
\begin{align*}
r_i|\boldsymbol{\eta}&\sim \te{Bin}(1,\pi_i)\\
\te{logit}(\pi_i)&=X_i\beta+\boldsymbol{Z}_i\boldsymbol{\eta}
\end{align*}
where $\te{logit}(\pi)=\log(\pi/(1-\pi))$.
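As a small illustration of the two links, the implied probability of a home win for hypothetical ratings $\eta_{\rm home}=0.9$ and $\eta_{\rm vis}=0.4$ (with $X_i=0$) can be computed directly in R; the numbers are illustrative only.
\begin{verbatim}
eta_home <- 0.9; eta_vis <- 0.4   # hypothetical team ratings
pnorm(eta_home - eta_vis)         # probit link: P(home win) is about 0.69
plogis(eta_home - eta_vis)        # logit link:  P(home win) is about 0.62
\end{verbatim}
Section \ref{ssec:link} examines how far such differences between the links propagate into the rankings.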
The design matrix $\boldsymbol{Z}$ for the random effects contains rows $\boldsymbol{Z}_i$ that indicate which teams competed in game $i$. If team $k$ visits team $l$ in game $i$, then $\boldsymbol{Z}_i$ is a vector of zeros with a $1$ in position $l$ and a $-1$ in position $k$. This is a multi-membership design \citep{browne01} since each game belongs to multiple levels of the same random effect. As a result, $\boldsymbol{Z}$ does not have a patterned structure and may not be factored as it could be with nested designs. \citet{mease} uses the same design matrix, albeit using fixed effects for the teams. The multi-membership $\boldsymbol{Z}$ matrix is implied by the structure of the product over the set $S$ in Equation (\ref{m1}).
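A sparse construction of $\boldsymbol{Z}$ is straightforward. The sketch below assumes a data frame \texttt{games} with integer columns \texttt{home} and \texttt{visitor} indexing the teams; these column names are hypothetical and do not correspond to the raw NCAA files.
\begin{verbatim}
library(Matrix)
# One row per game: +1 in the home team's column, -1 in the visitor's column.
build_Z <- function(games, n_teams) {
  n <- nrow(games)
  sparseMatrix(i = c(1:n, 1:n),
               j = c(games$home, games$visitor),
               x = c(rep(1, n), rep(-1, n)),
               dims = c(n, n_teams))
}
\end{verbatim}
Storing $\boldsymbol{Z}$ in sparse form matters little for a single football season, but it becomes important in the larger multi-membership applications mentioned in Section \ref{sec:estimationsas}.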
We choose not to include a home field effect. We are concerned about the potential impact of nonrandom scheduling between teams on the estimate of home field advantage. Only three of the 120 FBS college football teams (Notre Dame, UCLA, and Southern California) have refrained from scheduling matches against FCS teams to lighten their schedules. These games are accounted for with the FCS effect, but there is yet another concern about the estimation of a home field effect. The more competitive programs tend to play more home games than other FBS programs. These additional home games are often against low-level FBS teams who do not posses the bargaining leverage to request a home-and-home series. As a result, the ``home field advantage'' may appear to be significant simply because of the tendency of larger schools to schedule a number of easier home games. Others have advocated including a home field effect, including \citet{harville77}, \citet{harville03}, \citet{mease}, and \citet{wang}.
Other fixed effects may easily be included in the model, but the factors should not contain any levels that act as perfect predictors for the outcomes. For example, the inclusion of the FCS fixed effect will lead to an infinite likelihood due to quasi-complete separation \citep{allison} in the event that every FBS vs. FCS game results in an FBS win. In 2008, FCS teams won only 2 of 86 games against FBS teams. In the event that FBS teams were to win all such games in a year, the FCS effect and teams may be removed from the model. Historically, some of the BCS computer rankings always ignored these inter-division matches, until fifth-ranked Michigan lost to FCS Appalachian State after paying the team \$400,000 for a one-off home game in 2007.
The likelihood for our model with a probit link is
\begin{equation}\label{eq:probit}
L(\beta,\sigma^2_t)=\idotsint \prod_{i=1}^n \left[\Phi\left(\left(-1\right)^{1-r_i}\left[X_i\beta+\boldsymbol{Z}_i\boldsymbol{\eta}\right]\right)\right] f(\boldsymbol{\eta}) \mathrm{d} \boldsymbol{\eta}
\end{equation}
where $f(\boldsymbol{\eta})$ is the density of $\boldsymbol{\eta}$. We assume $\boldsymbol{\eta}\sim N_{p+q}(0,\sigma^2_t \bds{I})$. Using a logit link,
\begin{equation}\label{eq:logit}
L(\beta,\sigma^2_t)=\idotsint \prod_{i=1}^n \left(\frac{e^{X_i\beta+\boldsymbol{Z}_i\boldsymbol{\eta}}}{1+e^{X_i\beta+\boldsymbol{Z}_i\boldsymbol{\eta}}}\right)^{r_i} \left(\frac{1}{1+e^{X_i\beta+\boldsymbol{Z}_i\boldsymbol{\eta}}}\right)^{1-r_i}f(\boldsymbol{\eta}) \mathrm{d} \boldsymbol{\eta}.
\end{equation}
The model likelihood functions in Equations (\ref{eq:probit}) and (\ref{eq:logit}) contain intractable integrals because the random effects enter the model through a nonlinear link function. Furthermore, the ($p+q$)-dimensional integral in each equation may not be factored as a product of one-dimensional integrals. Such a factorization occurs in longitudinal models involving nested random effects. However, the multi-membership random effects structure of our model results in a likelihood that may not be factored.
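Although the integral in (\ref{eq:probit}) has no closed form, it can be approximated by brute force for small examples. The sketch below (not used in our estimation) is a naive Monte Carlo check that simply averages the conditional probit likelihood over simulated team effects; it is included only to make the structure of the marginal likelihood explicit.
\begin{verbatim}
# Naive Monte Carlo approximation of the marginal probit likelihood:
# average the conditional likelihood over draws of eta ~ N(0, sigma2_t * I).
marg_lik_mc <- function(sigma2_t, beta, X, Z, r, n_draws = 5000) {
  nteam <- ncol(Z)
  vals <- replicate(n_draws, {
    eta <- rnorm(nteam, sd = sqrt(sigma2_t))
    lp  <- X * beta + as.vector(Z %*% eta)
    exp(sum(dbinom(r, 1, pnorm(lp), log = TRUE)))
  })
  mean(vals)
}
\end{verbatim}
Note that the product of several hundred game probabilities underflows quickly, so this check is only practical for small subsets of games; the approximations discussed in Section \ref{sec:estimation} are needed for the full data.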
\subsection{Separate FCS Population with Unique Variance}\label{ssec:fe.p.2}
The model assuming separate FBS and FCS populations uses the same setup as described in Section \ref{ssec:fe.p.1}, except that the populations are allowed to have different variances. The likelihood function for this model with a probit link is
\begin{equation*}
L(\beta,\sigma^2_1,\sigma^2_2)=\idotsint \prod_{i=1}^n \left[\Phi\left(\left(-1\right)^{1-r_i}\left[X_i\beta+\boldsymbol{Z}_i\boldsymbol{\eta}\right]\right)\right] f(\boldsymbol{\eta}) \mathrm{d} \boldsymbol{\eta}
\end{equation*}
where $f(\boldsymbol{\eta})$ is the density of $\boldsymbol{\eta}=(\boldsymbol{\eta}_1,\boldsymbol{\eta}_2)$, with $\boldsymbol{\eta}_1$ and $\boldsymbol{\eta}_2$ containing the FBS and FCS team effects, respectively. We assume $\boldsymbol{\eta}_1\sim N_p(0,\sigma^2_1 \bds{I})$, $\boldsymbol{\eta}_2\sim N_q(0,\sigma^2_2 \bds{I})$, and that $cov(\boldsymbol{\eta}_1,\boldsymbol{\eta}_2)=0$.
Using a logit link,
\begin{equation*}
L(\beta,\sigma^2_1,\sigma^2_2)=\idotsint \prod_{i=1}^n \left(\frac{e^{X_i\beta+\boldsymbol{Z}_i\boldsymbol{\eta}}}{1+e^{X_i\beta+\boldsymbol{Z}_i\boldsymbol{\eta}}}\right)^{r_i} \left(\frac{1}{1+e^{X_i\beta+\boldsymbol{Z}_i\boldsymbol{\eta}}}\right)^{1-r_i}f(\boldsymbol{\eta}) \mathrm{d} \boldsymbol{\eta}.
\end{equation*}
\subsection{Single Population}\label{ssec:fe.p.0}
The model consolidating FCS teams into a single ``team'' in the FBS population also uses a similar setup as described in Section \ref{ssec:fe.p.1}. However, this model does not require the FCS fixed effect, and discards information about games played between pairs of FCS teams. The likelihood function using a probit link is
\begin{equation}\label{eq:ourmodel}
L(\sigma^2_t)=\idotsint \prod_{i=1}^n \left[\Phi\left(\left(-1\right)^{1-r_i}\boldsymbol{Z}_i\boldsymbol{\eta}\right)\right] f(\boldsymbol{\eta}) \mathrm{d} \boldsymbol{\eta}
\end{equation}
where $f(\boldsymbol{\eta})$ is the density of $\boldsymbol{\eta}$. We assume $\boldsymbol{\eta}\sim N_{p+1}(0,\sigma^2_t \bds{I})$, where $\eta_{p+1}$ is consolidated FCS team-effect.
Using a logit link,
\begin{equation*}
L(\sigma^2_t)=\idotsint \prod_{i=1}^n \left(\frac{e^{\boldsymbol{Z}_i\boldsymbol{\eta}}}{1+e^{\boldsymbol{Z}_i\boldsymbol{\eta}}}\right)^{r_i} \left(\frac{1}{1+e^{\boldsymbol{Z}_i\boldsymbol{\eta}}}\right)^{1-r_i}f(\boldsymbol{\eta}) \mathrm{d} \boldsymbol{\eta}.
\end{equation*}
The model proposed by \citet{mease} results from applying a particular integral approximation to Model (\ref{eq:ourmodel}). Following an illustration by \citet{demidenko}, the penalized likelihood used by \citet{mease} may be derived from our model (\ref{eq:ourmodel}) via the Laplace approximation \citep{evans95}. Letting
\[h(\boldsymbol{\eta})=\log\left[\prod_{i=1}^n \left[\Phi\left(\left(-1\right)^{1-r_i}\boldsymbol{Z}_i\boldsymbol{\eta}\right)\right] f(\boldsymbol{\eta})\right],\]
the Laplace approximation yields
\begin{equation} \label{eq:approx}
L(\sigma^2_t)\approx (2\pi)^{n/2}e^{h(\boldsymbol{\eta}^*)}\left|\left. -\frac{\partial^2 h}{\partial \boldsymbol{\eta}\partial \boldsymbol{\eta}^{\prime}}\right|_{\boldsymbol{\eta}=\boldsymbol{\eta}^*}\right|^{-1/2}
\end{equation}
where $\boldsymbol{\eta}^*$ is the mode of $h(\boldsymbol{\eta})$. Further assuming that the determinant in Equation (\ref{eq:approx}) varies slowly with $\boldsymbol{\eta}$ yields the quasi-likelihood \citep{breslow93}
\begin{equation}\label{eq:approx2}
L(\boldsymbol{\eta})\approx \prod_{i=1}^n \left[\Phi\left(\left(-1\right)^{1-r_i}\boldsymbol{Z}_i\boldsymbol{\eta}\right)\right] f(\boldsymbol{\eta})
\end{equation}
If the random effects $\boldsymbol{\eta}$ are assumed to be distributed so that
\begin{equation}\label{eq:penalty}
f(\boldsymbol{\eta})\propto\prod_{j=1}^{p+1}\Phi(\eta_j)\Phi(-\eta_j),
\end{equation}
then Equation (\ref{eq:approx2}) yields the likelihood presented by \citet{mease} in Equations (\ref{m1}-\ref{m3}). Thus the model of \citet{mease} may be viewed as the PQL approximation to our probit model (\ref{eq:ourmodel}), where the random effects are assumed to have the density specified in Equation (\ref{eq:penalty}) rather than the normal density that we specified.
\section{Parameter Estimation}\label{sec:estimation}
Section \ref{ssec:fe.p.0} demonstrates that the model likelihood of \citet{mease} is the PQL approximation to our model (\ref{eq:ourmodel}) under a certain non-normal distribution of the random effects, $\boldsymbol{\eta}$. PQL is based on the Laplace approximation, but makes one further approximation, and produces biased parameter estimates \citep{breslow95}. By contrast, the Laplace approximation produces consistent estimators since the dimension of our integrals is equal to the number of teams and does not increase with the sample size \citep{shun95}. In Section \ref{sec:estimationsas} we demonstrate how our model may be fit in SAS using a PQL approximation, and in Section \ref{sec:estimationem} we explain how we use an EM algorithm with both a first-order and a fully exponential Laplace approximation to obtain the rankings. Code for fitting the model proposed by \citet{mease} is available from \citet{measeweb}.
\subsection{Fitting the Model in SAS}\label{sec:estimationsas}
Specifying the multi-membership random effects structure in SAS is possible in PROC GLIMMIX via the MULTIMEMBER option of the EFFECT statement. However, GLIMMIX does not take into account the fact that $\boldsymbol{Z}$ is sparse. This is not an issue for the football data sets used here, but becomes problematic in other settings with larger data sets. The EM algorithm we use in Section \ref{sec:estimationem} provides computational advantages in the estimation of multi-membership models
\citep{karlem}.
The default estimation method of PROC GLIMMIX is a doubly-iterative pseudo-likelihood method by which the link function is linearized and a linear mixed model is fit at each iteration. This method is equivalent to PQL when the scale parameter is set to 1, which is the default setting for the Bernoulli distribution in PROC GLIMMIX \citep{wolfinger93, sasbook}. Example SAS code appears in Appendix \ref{sec:sascode}.
\subsection{Fitting the Model with an EM Algorithm}
\label{sec:estimationem}
The EM algorithm \citep{dempster77,embook} is often used for maximum likelihood estimation in the presence of missing data. It may be used for the estimation of mixed models by treating the random effects as missing observations \citep{laird82}, although an integral approximation is necessary when the random effects enter the model through a nonlinear link function, such as is the case with our model. Note that the high dimension of the integral renders quadrature methods infeasible.
The use of a fully exponential Laplace approximation \citep{tierney89} with an EM algorithm for the estimation of generalized linear mixed models was first proposed by \citet{steele}. \citet{riz09} use this method to estimate the parameters of a joint model for a continuous longitudinal process and a time-to-dropout measurement. \citet{karlphd} applies this approach to a multi-membership joint model. We will give a brief overview of the EM estimation procedure, and refer to \citet{karlphd} for further details, as well as to \citet{riz09} for similar calculations made in the setting of nested random effects.
As an alternative to the PQL approximation, we fit the model using an EM algorithm with custom-written code in R \citep{R} with both a first order Laplace (LA) and a fully exponential Laplace (FE) approximation in the E-step. The Laplace approximations are more accurate than PQL, but are more computationally demanding, requiring the calculation of higher-order derivatives than PQL. The first order Laplace approximation requires the first two derivatives of the integrand. Calculation of the fully exponential Laplace approximation for the conditional mean of the random effects requires the third derivative, and calculation of the conditional variance requires the fourth derivative. The approximation is complicated by the multi-membership random effects structure.
We outline the EM procedure for only one of the models, and note that the calculations are similar for the other models. To estimate the parameters $\boldsymbol{\Psi}=(\beta,\sigma_t^2)$ of the model in Equation (\ref{eq:probit}), we use the equations derived by \citet{karlphd}, ignoring the longitudinal process in the joint model he analyzed. Given initial values for the parameters and the random effects, the EM algorithm alternates between an expectation (E) step and a maximization (M) step. At iteration (k + 1), the E step calculates the conditional expectation of the log-likelihood $\log f(\boldsymbol{r}, \boldsymbol{\eta})$,
\begin{equation*}
Q(\boldsymbol{\Psi}; \boldsymbol{\Psi}^{(k)}) = \idotsint {\left\{\log f\left(\boldsymbol{r}|\boldsymbol{\eta}; \boldsymbol{\Psi}\right) + \log f\left(\boldsymbol{\eta}; \boldsymbol{\Psi}\right)\right\} f(\boldsymbol{\eta}|\boldsymbol{r}; \boldsymbol{\Psi}^{(k)})} \mathrm{d}\boldsymbol{\eta},
\end{equation*}
given the vector of game outcomes, $\boldsymbol{r}$, and parameter estimates $\boldsymbol{\Psi}^{(k)}$ obtained in the k-th iteration. For this calculation, it is sufficient to find the conditional mean $\widetilde{\boldsymbol{\eta}}=\te{E}[\boldsymbol{\eta}|\boldsymbol{r};\boldsymbol{\Psi}^{(k)}]$ and the conditional variance $\widetilde{\boldsymbol{v}}=\te{var}[\boldsymbol{\eta}|\boldsymbol{r};\boldsymbol{\Psi}^{(k)}]$ of the random effects. The M-step then maximizes $Q(\boldsymbol{\Psi}; \boldsymbol{\Psi}^{(k)})$
with respect to $\boldsymbol{\Psi}$, resulting in the updated parameter vector $\boldsymbol{\Psi}^{(k + 1)}$.
The expressions for $\widetilde{\boldsymbol{\eta}}$ and $\widetilde{\boldsymbol{v}}$ involve intractable integrals, necessitating the use of the Laplace approximations. The gradient and inverse Hessian of the joint distribution $f(\boldsymbol{r},\boldsymbol{\eta})$ (with respect to $\boldsymbol{\eta}$) furnish the first order Laplace approximations to $\widetilde{\boldsymbol{\eta}}$ and $\widetilde{\boldsymbol{v}}$, respectively, and the fully exponential Laplace approximations require computationally expensive corrections to these values. Upon convergence, $\widetilde{\boldsymbol{\eta}}$ serves as the vector of team ratings, demonstrating that the choice of integral approximation affects the ratings even if the model parameters $\boldsymbol{\Psi}$ are known.
The M-step update for $\widehat{\beta}$ from the probit model may be obtained by setting the score function
\begin{align}
S(\widehat{\beta})
=&\sum_{i=1}^n X_i\idotsint{\left(-1\right)^{1-r_{i}}\frac{\phi\left[\left(-1\right)^{1-r_{i}}\left(X_i\widehat{\beta}+\boldsymbol{Z}_{i}
\boldsymbol{\eta}\right)\right]}{\Phi\left[\left(-1\right)^{1-r_{i}}\left(X_{i}\widehat{\beta}+\boldsymbol{Z}_{i}
\boldsymbol{\eta}\right)\right]}}f(\boldsymbol{\eta}|\boldsymbol{r})\mathrm{d} \boldsymbol{\eta} \label{eq:mbeta}
\end{align}
equal to 0, where $\phi$ is the density function of the standard normal distribution. The equation may be solved via Newton-Raphson, using a central difference approximation to $S(\widehat{\beta})$ in order to obtain the necessary Hessian. Fortunately, there is a closed form M-step update for $\sigma^2_t$, namely
\[\widehat{\sigma}^2_t=\frac{1}{p+q}\te{trace}(\widetilde{\boldsymbol{v}}+\widetilde{\boldsymbol{\eta}}\widetilde{\boldsymbol{\eta}}^{\prime}). \]
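To make the E- and M-steps concrete, the following R sketch implements the EM iteration with a first-order Laplace E-step for the single-population probit model (\ref{eq:ourmodel}), which has no fixed effect $\beta$. It is an illustration only: it relies on \texttt{optim} for the mode and a numerical Hessian for the curvature, and it omits the fully exponential corrections described above.
\begin{verbatim}
# EM with a first-order Laplace E-step for the single-population probit model.
# Z: multi-membership design matrix (rows with +1/-1); r: binary outcomes
# (1 = home win).  Returns approximate team ratings and an estimate of sigma2_t.
em_laplace <- function(Z, r, sigma2 = 0.5, n_iter = 25) {
  nteam <- ncol(Z)
  eta   <- rep(0, nteam)
  sgn   <- ifelse(r == 1, 1, -1)
  for (k in 1:n_iter) {
    # E-step: mode and curvature of log f(r, eta) (first-order Laplace)
    neg_log_joint <- function(e) {
      lp <- sgn * as.vector(Z %*% e)
      -sum(pnorm(lp, log.p = TRUE)) + sum(e^2) / (2 * sigma2)
    }
    fit <- optim(eta, neg_log_joint, method = "BFGS", hessian = TRUE)
    eta <- fit$par                    # approximate conditional mean of eta
    v   <- solve(fit$hessian)         # approximate conditional covariance
    # M-step: closed-form update for the team-effect variance
    sigma2 <- (sum(diag(v)) + sum(eta^2)) / nteam
  }
  list(ratings = eta, sigma2_t = sigma2)
}
\end{verbatim}
The fully exponential version adjusts both the conditional mean and the conditional variance using the third and fourth derivatives of the integrand, which is what distinguishes the LA and FE results reported in Section \ref{sec:application}.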
\section{Application}\label{sec:application}
We obtain the game outcomes for the 2008-2011 seasons from the NCAA website \citep{ncaaweb}. The data require additional processing, since outcomes are recorded by teams, resulting in duplicate observations for games between two teams within the same division. Some of the neutral site games were duplicated while others were not. We combine the FBS and FCS files, remove all games labeled ``away'', remove games between FCS and lower division teams, purge redundant neutral site games, add an indicator for games played between FBS and FCS schools, and remove the records of games played after the production of the final BCS rankings in each year. The processed data are available from \citet{karlweb}. See Table \ref{tab:data} in Appendix \ref{sec:sascode} for the first observations of our 2008 data set.
The team ratings and rankings for the 2008-2011 seasons appear in tables and scatter plots in Appendix \ref{sec:tables}. The scatter plots are printed with reference lines with slope 1 and intercept 0. The rankings use the game outcomes through the end of the conference championships in each year: the bowl outcomes are not included. This allows us to compare the rankings from our model to the final BCS rankings used to choose the BCS teams. The BCS rankings are included as a reference, and not as a standard that we expect our model to match. Two-thirds of the weight of the BCS ranking is given to polls, and the voters are allowed to consider more than each team's win/loss record.
Under the current BCS configuration, the top 16 teams of the BCS rankings are relevant due to the rules involving eligibility for selection in the non-championship BCS bowls. We do not list all of the rules of the selection process, but instead point out a few of the highlights. The top two teams in the BCS rankings are selected to play in the national championship game. The remaining 8 BCS slots are filled with the winners of certain conferences (e.g. Big East), regardless of their ranking. For example, after the 2010 season, \#7 ranked Oklahoma was paired in the Fiesta Bowl with an (8-4) unranked Connecticut team.
There are special rules for non-BCS teams, including a rule that the highest-ranked winner of a non-BCS conference will receive a berth if it is either ranked in the top 12 or in the top 16 and higher than at least one BCS conference winner. Under certain conditions, any team finishing in the top four is guaranteed a berth. The complete list of rules is available from the \citet{bcs}. In short, permutations in the rankings of teams outside of the top two can affect the selection of teams for participation in the BCS. The value of these berths is substantial. Each conference, and thus each school, receives an extra payout for each additional team that is awarded a BCS berth. The head coaches of these teams benefit as well, due to their contracts: Les Miles (LSU) received a \$200,000 bonus for reaching a BCS game in 2012 and would have received an extra \$100,000 for winning the national title game. In addition, a national title win for Miles would have activated a clause in his contract giving him a \$5.7 million raise over the remaining 6 years of his contract. Instead, Nick Saban (Alabama) received a \$400,000 bonus for defeating LSU \citep{bloomberg}.
In the following subsections, we describe the changes in rankings that result from varying several modeling assumptions. For convenience, we will introduce a three-part notation to indicate the method of integral approximation (PQL, LA for Laplace, FE for fully exponential Laplace), the link function (P for probit, L for logit), and the way in which FCS teams are handled (0 for consolidating them into a single effect as in Section \ref{ssec:fe.p.0}, 1 for modeling separate FBS and FCS populations with a pooled variance as in Section \ref{ssec:fe.p.1}, and 2 for modeling separate populations with separate variances as in Section \ref{ssec:fe.p.2}). For example, PQL.P.0 denotes the PQL approximation to the probit model that consolidates FCS teams into a single ``team.''
\subsection{The Link Function}\label{ssec:link}
To explore sensitivity to the choice of link function, we compare the rankings from FE.P.0 and FE.L.0, which appear in Figures \ref{plot:2008_d}, \ref{plot:2008_c}, \ref{plot:2009_d}, \ref{plot:2009_c}, \ref{plot:2010_d}, \ref{plot:2010_c}, \ref{plot:2011_d}, and \ref{plot:2011_c} in Appendix \ref{sec:tables}. The reference lines in the scatter plots for the continuous ratings from these two models are not meaningful since the kurtosis of the logistic distribution is greater than that of the normal distribution. We would expect the rankings using the fully exponential Laplace approximation to be the most sensitive to the choice of link function, since the approximation makes use of four derivatives of the complete-data likelihood, and thus the link function.
In 2008 these two models agree on ranks 1-16, though 17-20 are scrambled by one or two positions. In 2009, 13-14 and 17-18 each swap positions. In 2010, 7 and 9 swap, and 17-20 each differ by one position. However, in 2011, FE.P.0 ranks LSU, Alabama, and Oklahoma St.~as the top three teams, respectively, while FE.L.0 picks LSU, Oklahoma St., Alabama. Figure \ref{plot:2011_c} shows that Alabama and Oklahoma St.~have nearly identical ratings in each of the models. The change in link function leads to slight changes in the team ratings, prompting shifts in the rankings. These changes are small relative to the standard errors of the ratings, as can be seen from the caterpillar plot in Figure \ref{plot:2008_ratings}, which displays the 2008 FBS team ratings from FE.P.0 along with their associated $95\%$ prediction intervals.
\begin{figure}
\caption{FE.P.0 Ratings with Standard Errors for 2008 Season.}
\label{plot:2008_ratings}
\centering
\includegraphics[scale=.4]{2008_ratings}
\end{figure}
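As a quick overall summary of the agreement between two sets of rankings (complementary to the individual changes discussed here), one could also compute a rank correlation such as Kendall's $\tau$; for example, with hypothetical vectors holding the FE.P.0 and FE.L.0 ranks of the same teams:
\begin{verbatim}
# rank_probit and rank_logit hold the FE.P.0 and FE.L.0 ranks of the same teams
cor(rank_probit, rank_logit, method = "kendall")
\end{verbatim}
Such a summary is not used here, since for the bowl selection process the identity of the teams that change positions matters more than the overall level of agreement.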
In the context of teacher evaluation, \citet{draper95} urges caution when using the EBLUP
rankings because individual ratings may have large standard errors. Given the high stakes involved in the BCS rankings and the way the models are used by the BCS, small changes in the ratings may be practically significant, even if they are not statistically significant. The BCS uses the discretized rankings from each model, not the continuous ratings. The plots in Appendix \ref{sec:tables} show how uneven the spacing between team ratings can be. Teams with similar ratings are more likely to swap positions with changes in the modeling assumptions.
\subsection{The Integral Approximation}\label{ssec:integral}
The tables in Appendix \ref{sec:tables} contain the team rankings and ratings, as well as the parameter estimates for $\sigma^2_t$ from models PQL.P.0, LA.P.0, and FE.P.0. The downward bias in the PQL estimates of $\sigma^2_t$ described by \citet{breslow95} is clearly present. The fully exponential Laplace approximation is more accurate than the first order Laplace, which is in turn better than PQL. It is interesting to see in the tables that the changes in a team's ranking from PQL to LA to FE are monotonic. That is, for 2008-2011, if a team is ranked X by PQL.P.0 and Y by FE.P.0, its ranking under LA.P.0 is somewhere between X and Y, inclusive.
Two of the more interesting changes in ranking seen across the models for these data are in 2009 and 2011, where the change in integral approximation alters which team is voted second, and thus one of the two teams to play for the national championship. In 2009, FE.P.0 lists Alabama and Cincinnati, while PQL.P.0 and LA.P.0 select Alabama and Texas. In 2011, PQL.P.0 and LA.P.0 rank Oklahoma St.~2, while FE.P.0 ranks Alabama 2. Of course, these teams only moved one position between second and third in these rankings, but the exchange is remarkable in the sense that it arises solely from using a fully exponential (asymptotically analogous to a second-order) instead of a first-order Laplace approximation. At least in these years, our answer to the question, ``Who do you think should play in the BCS championship game?''~depends on the response to the question, ``To what order would you prefer to extend your Laplace approximation?''
Changing from PQL.P.0 to FE.P.0 in 2008 moves Texas Tech from 6 to 4, Boise St.~from 5 to 6, Oklahoma St.~from 16 to 14, plus seven other changes of one position. Besides the Alabama/Cincinnati swap in 2009, four other teams change rank: Oregon St.~moves from 23 to 20, Arizona from 21 to 19, and LSU swaps 12 for 11 with Virginia Tech. In 2010, Stanford improves from 5 to 4, displacing Oklahoma. Wisconsin moves from 8 to 6, Ohio St.~from 6 to 7, Arkansas from 10 to 8, Michigan St.~from 7 to 9, and Boise St.~from 9 to 10. Finally, in addition to the Alabama/Oklahoma St.~switch, the change to the fully exponential model in 2011 lifts Arkansas from 8 to 6. Baylor moves from 16 to 14, Oklahoma from 14 to 12, Michigan from 13 to 15, Georgia from 17 to 16, and Wisconsin from 15 to 17. A couple of other teams change by a single rank.
It is difficult to tell whether there is a pattern in the set of teams affected by the choice of FE.P.0 over PQL.P.0 in each year. SEC teams tend to benefit from the change. It is possible that the increased random team-effect variance estimated by the fully exponential model implicitly places a greater emphasis on strength of schedule, since the larger the variance of the team effect, the greater the estimated disparity between FBS teams. We consider this point further in Section \ref{ssec:dist}.
\subsection{Distribution(s) of the Random Team-Effects}\label{ssec:dist}
\citet{annis} express concern about potential sensitivity of team rankings to the penalty function chosen by \citet{mease}. In Section \ref{ssec:fe.p.0}, we demonstrated that the penalty function (\ref{m2}) corresponds to the distribution of the random team-effects. Figure \ref{plot:dist_comp} compares the normal distribution to the distribution implied by the penalty function of \citet{mease}. Figure \ref{plot:dist_comp2} compares the rankings from PQL.P.0 and the model by \citet{mease}. We summarize the changes to the top five teams when moving from PQL.P.0 to \citet{mease}. In 2008, Texas Tech moves from 5 to 4 and Florida moves from 4 to 5. In 2009, Texas moves from 2 to 3 and Cincinnati moves from 3 to 2. In 2010, Stanford moves from 5 to 4 and Oklahoma moves from 4 to 5. There are no changes in the top 5 in 2011, but there are several changes farther down in the top 20.
The distribution (\ref{m2}) used by \citet{mease} is fixed and does not depend upon estimated parameters. It is very similar to a $N(\bds{0},0.815*\bds{I})$ distribution. In this sense, Mease's model may be approximated using PQL.P.0 by restricting $\sigma^2_t=0.815$. To address the point made by \citet{annis}, we fit PQL.P.0 using different fixed values of $\sigma^2_t$. At one extreme, consider the case $\sigma^2_t=0$. This implies that all of the teams are of equal strength, that the chance of a team winning any given game is 50\%, and that the teams should be ranked by their number of wins minus their number of losses. To explore this possibility, we restricted $\sigma^2_t=0.0001$ in our R program (which requires a positive value of $\sigma^2_t$) and found roughly what we expected in the left column of Table \ref{tab:fake}. Notice how Arkansas St.~made it up to 11 under this scenario. For the other extreme, we restricted $\sigma^2_t=100$. Notice how 12 of the top 15 teams are from either the SEC or Big XII, including 7-5 Auburn and Texas, and 6-6 Texas A\&M. Note that Arkansas' two losses were to LSU and Alabama, and that Alabama's only loss was to LSU.
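As a quick sanity check of the $\sigma^2_t \rightarrow 0$ limit, the ordering it implies can be reproduced in a few lines of Python (our illustration, using a handful of made-up game results rather than the full 2011 schedule):
\begin{verbatim}
from collections import defaultdict

def rank_by_record(games):
    # rank teams by wins minus losses -- the ordering implied by sigma^2_t -> 0
    # 'games' is a list of (winner, loser) pairs (toy data, not a real schedule)
    diff = defaultdict(int)
    for winner, loser in games:
        diff[winner] += 1
        diff[loser] -= 1
    return sorted(diff, key=diff.get, reverse=True)

print(rank_by_record([("LSU", "Alabama"), ("LSU", "Arkansas"),
                      ("Alabama", "Arkansas")]))
# ['LSU', 'Alabama', 'Arkansas']
\end{verbatim}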
Our model provides an advantage over that of \citet{mease} by estimating $\sigma^2_t$ rather than fixing it at an arbitrary value. This discussion also provides at least a partial explanation for the changes in ranking due to changes of integral approximation. The downward bias in the PQL estimates of $\sigma^2_t$ \citep{breslow95} has the same effect as modifying the assumed distribution of the random effects. Across the years, the PQL.P.0 estimates of $\sigma^2_t$ tend to be around $0.55$, while the FE.P.0 estimates tend to be closer to $0.8$. This seems to explain why Mease's model tends to agree more closely with FE.P.0 instead of PQL.P.0, despite relying on a PQL approximation.
\begin{table}[htbp]
\centering
\caption{Rankings under extreme values of $\sigma^2_t$.}
\begin{tabular}{rlllrrlll}
\addlinespace
\toprule
&\multicolumn{2}{c}{$\sigma^2_t=0.0001$}&\phantom{abc}&&\multicolumn{2}{c}{$\sigma^2_t=100$}\\
\cmidrule{1-4}
\cmidrule{6-9}
Rank & Team & W & L & & Rank & Team & W & L \\
\midrule
1 & LSU & 13 & 0 & & 1 & LSU & 13 & 0 \\
2 & Alabama & 11 & 1 & & 2 & Alabama & 11 & 1 \\
3 & Boise St. & 11 & 1 & & 3 & Arkansas & 10 & 2 \\
4 & Houston & 12 & 1 & & 4 & Oklahoma St. & 11 & 1 \\
5 & Oklahoma St. & 11 & 1 & & 5 & Kansas St. & 10 & 2 \\
6 & Stanford & 11 & 1 & & 6 & South Carolina & 10 & 2 \\
7 & Oregon & 11 & 2 & & 7 & Baylor & 9 & 3 \\
8 & Southern Miss. & 11 & 2 & & 8 & Oklahoma & 9 & 3 \\
9 & Virginia Tech & 11 & 2 & & 9 & Stanford & 11 & 1 \\
10 & Arkansas & 10 & 2 & & 10 & Oregon & 11 & 2 \\
11 & Arkansas St. & 10 & 2 & & 11 & Georgia & 10 & 3 \\
12 & Clemson & 10 & 3 & & 12 & Boise St. & 11 & 1 \\
13 & Georgia & 10 & 3 & & 13 & Auburn & 7 & 5 \\
14 & Kansas St. & 10 & 2 & & 14 & Texas & 7 & 5 \\
15 & Michigan & 10 & 2 & & 15 & Texas A\&M & 6 & 6 \\
\bottomrule
\end{tabular}%
\label{tab:fake}%
\end{table}%
\begin{figure}
\caption{The random effects distribution (dashed) implicitly assumed by \citet{mease} and the $N(0,0.57)$ distribution from PQL.P.0 in 2009 (solid). }
\label{plot:dist_comp}
\centering
\includegraphics[scale=.4]{dist_comp}
\end{figure}
\begin{figure}
\caption{Comparison of rankings resulting from normally distributed random effects (PQL.P.0) to those obtained under the distribution assumed by \citet{mease}}
\label{plot:dist_comp2}
\centering
\includegraphics[scale=.35]{dist_08}
\includegraphics[scale=.35]{dist_09}
\includegraphics[scale=.35]{dist_10}
\includegraphics[scale=.35]{dist_11}
\end{figure}
\subsection{Modeling FCS Teams}
We consider three different approaches to handling FBS games against FCS teams, using a fully exponential approximation with a probit link. FE.P.0 uses the same approach as \citet{mease}, discarding all FCS games that did not involve an FBS opponent and consolidating all FCS teams into a single ``team'' in the population of FBS teams. FE.P.1 models a separate FCS population, using a pooled estimate of the FBS and FCS population variances. Finally, FE.P.2 models separate populations, estimating a different variance for each population. The results appear in the tables in Appendix \ref{sec:tables}, and are also compared to Mease's model. The reference lines with slope 1 and intercept 0 in the scatter-plots in Appendix \ref{sec:tables} illustrate the difference in estimated team-effect variances between the models. For example, the plot in position [1,2] in the matrix of plots in Figure \ref{plot:2008_c_2} shows the ratings falling below the reference line. This indicates that, in 2008, the variance of the FBS team-effects from FE.P.2 is less than that of FE.P.1.
Table \ref{tab:sigmas} shows the estimated pooled variance ($\sigma^2_t$) from FE.P.1 and the FBS variance ($\sigma^2_1$) and FCS variance ($\sigma^2_2$) from FE.P.2. Figure \ref{plot:2011_distributions} plots the distributions of the team ratings from FE.P.1 in 2011. This plot illustrates the difference in means for the two populations, as well as the overlap between the distributions. This difference, obtained via the estimate $\hat{\beta}=2.03$, indicates that the probability of a randomly selected FBS team defeating a randomly selected FCS team in 2011 is $\Phi(2.03)\approx 0.979$. FE.P.0 ranks Alabama 2 in 2011, while FE.P.1 and FE.P.2 pick Oklahoma St. In 2008, FE.P.2 ranks Florida 4, while FE.P.1 and FE.P.0 rank Texas Tech 4. There are other changes lower in the rankings as well.
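The quoted probability is simply the standard normal CDF evaluated at the estimated offset, which can be verified in one line (our check):
\begin{verbatim}
from scipy.stats import norm

beta_hat = 2.03                      # estimated FBS-vs-FCS offset, FE.P.1, 2011
print(round(norm.cdf(beta_hat), 3))  # 0.979
\end{verbatim}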
FE.P.2 provides greater flexibility than FE.P.1 by estimating separate variance components. Likewise, FE.P.2 makes use of more information than FE.P.0, by considering the outcome of games between pairs of FCS teams as well as modeling the FBS games against specific FCS teams rather than a generic FCS ``team.'' We prefer FE.P.2; however, the approach used by FE.P.0 is reasonable as well. As we have seen in previous sections, the choice between two reasonable assumptions may lead to different rankings.
\begin{figure}
\caption{Distributions of team ratings from the 2011 FE.P.1 model. The dashed line corresponds to FCS teams, the solid line to FBS teams.}
\label{plot:2011_distributions}
\centering
\includegraphics[scale=.4]{2011_ratings}
\end{figure}
\begin{table}[htbp]
\centering
\caption{Estimates for $\sigma^2_t$ from FE.P.1, and $\sigma^2_1$ and $\sigma^2_2$ from FE.P.2}
\begin{tabular}{rrrrr}
\addlinespace
\toprule
Year & $\sigma^2_t$& & $\sigma^2_1$&$\sigma^2_2$ \\
\midrule
2008 & 0.75 &&0.65& 0.87\\
2009 & 0.82 && 0.82& 0.82 \\
2010 & 0.71 &&0.80&0.60 \\
2011 & 0.63 &&0.70&0.55 \\
\bottomrule
\end{tabular}%
\label{tab:sigmas}%
\end{table}%
\subsection{ML vs. REML}
For now, we will only consider the sensitivity to the choice of ML versus REML estimation when using PQL in SAS. Under the model PQL.P.1, none of the top 16 teams from 2008-2011 differ between the ML and the REML rankings (not shown). This is not surprising since our model includes only one fixed effect, the FCS effect, and around 240 different levels of the random effect, corresponding to the FBS and FCS teams. Unlike a one-way random effects model, it is not clear what the expected downward bias in the ML estimates of $\sigma^2_t$ should be in this multi-membership random effects setting. However, we found the estimates from the two methods to be nearly identical. We include estimates from the two methods in Table \ref{tab:reml} so that the difference between the ML and REML estimates may be compared to the differences resulting from different integral approximations. Of course, the difference between ML and REML estimates would grow if additional fixed effects were added to the model.
\begin{table}[htbp]
\centering
\caption{Estimates for $\sigma^2_t$ from PQL.P.1}
\begin{tabular}{rrr}
\addlinespace
\toprule
Year & ML & REML \\
\midrule
2008 & 0.4763 & 0.4768 \\
2009 & 0.5077 & 0.5087 \\
2010 & 0.4405 & 0.4415 \\
2011 & 0.4206 & 0.4216 \\
\bottomrule
\end{tabular}%
\label{tab:reml}%
\end{table}%
\section{Conclusion}
We have proposed a generalization of the model developed by \citet{mease} and tested our model's rankings for sensitivity to several modeling choices. Ideally, this type of sensitivity analysis should be performed whenever a generalized linear mixed model is used. The sensitivity will likely depend on the random effects structure of different models. The downward bias in PQL estimates of variance components has been well documented \citep{breslow95}, but not as much attention has been given to differences in EBLUP orderings resulting from using different orders of integral approximation.
\citet{harville03} discusses seven criteria for an appropriate ranking-model: accuracy, appropriateness, impartiality, unobtrusiveness, nondisruptiveness, verifiability, and comprehensibility. For practical purposes, the choice between Mease's model and our model FE.P.2 represents a trade-off between comprehensibility and accuracy. The computational effort required to obtain the rankings from FE.P.2 is much greater than that required for Mease's model. Furthermore, the lack of a closed-form objective function obfuscates the relationship between the data and the rankings. However, by modeling a separate FCS population and using a fully exponential Laplace approximation, FE.P.2 makes use of additional data and provides the capacity to accurately estimate the FBS population variance, whereas Mease relies on a fixed population variance.
The sensitivity of the EBLUPs to different methods of approximating the marginal likelihood may be of interest in other settings, including in the use of value-added models for teacher assessment. Value-added models evaluate teachers based on the performance of their students. When the measures of student performance are categorical \citep{broatch10}, the analysis of the sensitivity to the choice of approximation of the marginal likelihood that we discuss in this paper may be relevant.
Without changing the mean or random effects structures, our rankings shifted with different choices of modeling assumptions. The resulting changes in team ratings are small relative to the standard errors of the ratings, but could have implications for which teams are assigned to which bowls. Bowls are assigned based on the point estimates of the team rankings: an undefeated, third ranked team would probably take little consolation in being told that their rating is not significantly lower than those of the top two teams.
The large confidence intervals associated with the team ratings suggest that the sensitivity of the rankings to the modeling assumptions is due at
least in part to the limited information available to the model:
around 12 binary outcomes on 240 or so subjects. Given this limited
information, it seems unreasonable to expect these models to be
capable of identifying the two best teams in a division. The limited
accuracy of the models due to their restriction to the use of binary
game outcomes is one of the reasons that lead Stern (2006) to call for
a boycott of the BCS by quantitative analysts. Models for binary game
outcomes may provide a rough guide for the classification of teams, but
the fickle nature of their rankings should be kept in mind.
\section{Introduction}
Tour recommendation and planning are challenging problems for many tourists, due to constraints on time and locality; additionally, they may not be familiar with the city or country~\cite{brilhante-ipm15,chen-cikm16,gionis-wsdm14}.
Most visitors follow guide books/websites to plan their daily itineraries or use recommendation systems that suggest places-of-interest (POIs) based on popularity~\cite{lim2019tour}. However, these suggestions are not optimized in terms of time feasibility, locality, and users' preferences~\cite{he2017category,lim2019tour}.
In recent years, the Transformer model has become the state-of-the-art solution for many NLP tasks. Compared to other architectures, such as \emph{Recurrent Neural Networks}~(\textsc{Rnn}) and~\textsc{Lstm}s, a Transformer-based model processes the \emph{entire input sequence} at once.
Additionally, the \emph{attention mechanism} provides \emph{context} for any position in the input sequence, allowing for more \emph{parallelism} while preserving prediction quality; hence, less time is required for training and optimization~\cite{attention_2017}.
In this paper, we propose~\textsc{PoiBert}, a Transformer-based word-embedding model to recommend a sequence of~{\textsc{Poi}s}~as an itinerary based on historical data, taking into consideration the locations of, and traveling times between, these~{\textsc{Poi}s}. Figure~\ref{system}~shows the overall workflow of itinerary prediction with the~\textsc{PoiBert}~model.
\begin{figure*}[h]
\centering
\includegraphics[width=17.0cm]{BERT_system}
\caption{
Overall system diagram of~\textsc{PoiBert}~using geo-tagged photos.
Step~(I) Given a city of interest, a set of photos with known user-IDs, timestamps and geo-tags are collected from Flickr database. Identify the {{{\textsc{Poi}}}-ID} of each photo by its geo-tag information and metadata~\cite{geotag_2015}.
Step~(II) Sort the photos by timestamps and user-IDs to reconstruct users' trajectories to form sequences of $({{\textsc{Poi}}}-IDs , timestamps)$ tuples.
Step~(III) Training of the~\textsc{PoiBert}~model using the trajectories from Step~(II) and Algorithm~\ref{alg:mlm_data_generation}; details in Section~\ref{bert_algo}.
Step~(IV) Prediction of tour itineraries using a $(source,dest.)$~{\textsc{Poi}}~tuple.
}
\label{system}
\end{figure*}
We compare our proposed methods with other sequence prediction algorithms and show that our algorithms achieve average $F_1$-scores of up to 59.2\% in our experiments. In this paper, we make the following contributions:
\begin{itemize}
\item We model our Tour Recommendation problem as a \emph{sequential recommendation problem} in reinforcement learning:
to recommend the subsequent~{\textsc{Poi}s}~(\emph{items})~in a user's travel schedule, given a set of trajectories in the form of user--{\textsc{Poi}}~interaction tuples~(\emph{check-in records})~\cite{ijcai2019_seqrec}.
The solution to this problem is a reinforcement learning algorithm that is flexible across different environments~(i.e., cities).
\item We propose two approaches to solving the tour recommendation problem, namely:~(1)~\textsc{PoiLstm}~-~A Long Short-Term Memory framework, and, (2)~\textsc{PoiBert}~- a Transformer-based mechanism.
These two models take users' trajectories as inputs and process them as long sequences of \emph{user-item} interactions for our~recommendation~algorithms.
\item We use the Bootstrapping method in statistics to estimate the duration of visits with \emph{confidence intervals} using a method of \emph{random sampling}. More \emph{accurate} estimation~(with confidence intervals) of~{\textsc{Poi}}~duration also results in a realistic and compact scheduling of itineraries.
\item We have conducted thorough experiments comparing our proposed solutions against state-of-the-art sequence prediction methods and classic algorithms~(the \textsc{Spmf}~data mining library~\cite{spmf2017}). Experimental results show that our solution outperforms the other baseline algorithms.
\item Additionally, our proposed solution has the advantage of adapting to different scenarios~(cities/datasets) without modification. In particular, we recorded a performance increase of as much as 19.9\% in our \emph{Delhi} dataset, measured in terms of average $F_1$-scores.
\end{itemize}
The remainder of this paper is organized as follows:
In Section~\ref{section_related_work} we give background on tour recommendation and discuss state-of-the-art approaches to the itinerary prediction problem.
In Section~\ref{section_formulation} we formally define the problem and the notation used in our solution.
In Section~\ref{section_experiments} we describe our experimental framework and the baseline algorithms used to evaluate our solution. Finally, we summarize the paper and discuss future extensions in~Section~\ref{section_conclusion}.
\section{Related Work}
\label{section_related_work}
\subsection{Tour Recommendation}
\label{RELATED_TOUR_RECOMENDATION}
Tour planning is an essential, but tedious task for tourists visiting an unfamiliar city.
Many visitors get recommendations from guide books or websites to plan their daily itineraries; this can be time-consuming and sub-optimal.
Next~{\textsc{Poi}}~prediction~\cite{he2017category,zhao2020go} and tour planning~\cite{sohrabi2020greedy,lim2019tour} are two related problems: next-location prediction and recommendation aim to identify the next~{\textsc{Poi}}~that is most likely to be visited based on historical trajectories.
Personalized tour recommendation has been proposed with the use of photos and their meta-information such as timestamps and GPS-locations provided by Location-based Social Networks~(LBSN). Tour itinerary can be generated based on \emph{user interests} from his/her visit history. Previous works focus on recommending \emph{popular} POIs in terms of posted photos with geo-tags~\cite{geotag_2015,de2010automatic,de2010constructing,halder2022poi}. Other works utilized geo-tagged photos posted in LBSN to determine~{\textsc{Poi}}~related information for making various types of tour recommendation~\cite{lim2018personalized,cai2018itinerary,kurashima2013travel,sun2017tour,geotagged_traj_2018,halder2022efficient}.
Furthermore, tour itinerary recommendation has the challenge of planning a \emph{connected} itinerary of~{\textsc{Poi}s}~that appeals to the users' interest preferences, without requiring users to take unnecessary routes or spend extra time and distance. At the same time, there is a need to satisfy tourists' temporal and spatial constraints, such as limited time and budget.
\subsection{Sequence Prediction}
\label{RELATED_SEQ_PREDICTION}
Sequence prediction is a well-studied machine learning task; it involves predicting the next symbol(s) or word based on the previously observed sequence of symbols. Sequence prediction can be applied to solve the tour recommendation problem by treating~{\textsc{Poi}s}~as the words of the input.
Sequence prediction is widely used in the areas of time-series forecasting and product recommendation. Unlike many other prediction tasks, the order of the sequence is important for obtaining an accurate result; examples include predicting stock prices~\cite{lstm_stock_2017}.
Existing solutions to Sequence Prediction include word-embedding by considering~{\textsc{Poi}}-to-{\textsc{Poi}}~similarity using techniques such as Word2Vec, GloVe and FastText~\cite{Ayyadevara2018,bojanowski2017enriching,pennington2014glove,word2vec_rec_2020}.
Many recommendation systems for planning tours consider broad~{\textsc{Poi}}~categories, but some of their recommendations do not align well with travelers' preferences and the locational constraints of their trips.
Some recommendation systems dynamically propose routes by taking into consideration all possible solutions generated by different agent systems~\cite{Liu-ECMLPKDD20}.
Personalized tour planning recommendation has also been proposed using {{\textsc{Poi}}}-embedding methods, which provide a finer representation of~{\textsc{Poi}}~categories~\cite{Lim2016PersTourAP}.
Recently, advances in Machine Learning (ML) and Artificial Intelligence (AI) algorithms allow for more advanced representation of sequential data, particularly in the area of Natural Language Processing.
\subsection{{\textsc{Lstm}}~models}
\label{RELATED_LSTM}
First proposed in 1997, the \emph{Long Short-Term Memory} is an \textsc{Rnn} architecture capable of learning long-term dependencies in input sequences~\cite{lstm_1997}.
A {\textsc{Lstm}}~network consists of memory blocks, or \emph{cells}, which are capable of storing information known as \emph{states}.
During the training phase of {\textsc{Lstm}}, two \emph{states} are transferred to~(or from, respectively) the next~(prior, respectively) cell, known as the \emph{cell state} and the \emph{hidden state}.
The memory blocks of {\textsc{Lstm}}~are used as the \emph{memory}, and the flow and removal of information is controlled through three \emph{gates}:
\begin{enumerate}
\item \emph{Input Gate: }
it is responsible for the addition of information to the \emph{cell state}. The gate applies the \emph{sigmoid} function to the input to determine which information is to be added to the cell state.
\item \emph{Forget Gate:}
this gate determines a subset of information to be \emph{removed} from the cell state; information of less importance is removed by applying a \emph{filter} function~\cite{lstm_forgetgate}.
\item \emph{Output Gate:}
The output gate organizes the information to be output to other {\textsc{Lstm}}~cells. The basic implementation of~{\textsc{Lstm}}~applies the $tanh$~function to the cell state and the $sigmoid$ function for filtering of information. The output of this gate is subsequently fed as input to the next state.
\end{enumerate}
The input layer of the {\textsc{Lstm}}~network takes as input a vector of \emph{fixed length} and outputs a vector of fixed length. In an extension of~{\textsc{Lstm}}, the Encoder-Decoder~{\textsc{Lstm}}~has two additional components compared to the basic {\textsc{Lstm}}~network: the \emph{encoder} and the \emph{decoder}.
The \emph{encoder} of the model extracts a \emph{fixed-length vector representation} from a variable-length input sentence.
Experiments suggest that the encoder-decoder~{\textsc{Lstm}}~model performs well on~\emph{short sentences} without \emph{unknown} words.
The performance of the {{\textsc{Lstm}}}~method was shown to \emph{degrade} with increasing input length~\cite{lstm_properties_2014, seq2seq2014}.
\subsection{Transformer models}
The Transformer is a learning model designed to process sequential input data. It adopts the mechanism of \emph{self-attention} and is used primarily in~{{\textsc{Nlp}}}~and Computer Vision~\cite{attention_2017}.
Bidirectional Encoder Representations from Transformers ({\textsc{Bert}}) is a transformer-based machine learning technique developed by Google~\cite{bert} for \emph{language understanding} tasks. {\textsc{Bert}}~models have become the state-of-the-art baseline in~{{\textsc{Nlp}}}~experiments.
{\textsc{Bert}}~is trained using 1) Masked-Language Modeling ({\textsc{Mlm}}) and 2) Next Sentence Prediction~({{\textsc{Nsp}}}), and has found applications beyond its original language tasks. Moreover,~{\textsc{Bert}}~has been shown to achieve high accuracy in \emph{classification} tasks such as sentiment analysis~\cite{HAN2021225}.
\section{Problem Formulation and Algorithms}
\label{section_formulation}
In this section, we start with the definition of tour recommendation problem and a list of notations used in Table~{\ref{tbl:notations}}.
~
Given a set of travelers, $U$, visiting a city with $|P|$ points-of-interest, we represent the travel history of a traveler, $u \in U$, as a sequence of $(poi,time)$~tuples,~$S_h = [ (p_1,t_1),(p_2,t_2),\ldots,(p_k,t_k)]$, where $k$ is the number of check-ins or photos posted to the LBSN, $p_i \in~{\textsc{Poi}s}$, and ${t_i}$ is the timestamp of the corresponding photo.
Given also a starting~{\textsc{Poi}}~${s_0} \in {{\textsc{Poi}s}}$ together with all the photos taken at~${s_0}$, the problem addressed in this paper is to recommend a sequence of~{\textsc{Poi}s}~which travelers are \emph{likely} to visit, based on the past trajectories in the collected dataset, using the~\emph{Transformer} model.
We first propose ``\textsc{PoiLstm}'', an {{\textsc{Lstm}}}~model that encodes users' trajectories with consideration of the travelers' locations and distances traveled to recommend a tour itinerary with estimated duration.
We also propose ``\textsc{PoiBert}'', an algorithm for prediction of itinerary based on the {\textsc{Mlm}}~algorithm in~{\textsc{Bert}}, discussed in~Section~\ref{bert_algo}.
\begin{table}
\centering
\caption{Notations used in this paper}
\scalebox{1.1}{
\begin{tabular}{
||c|l||}
\hline
Notation & Description \\
\hline
$T$ & Time budget of recommended trajectory\\
\hline
$u_i$ & Identifier of the user ID~$i$ \\
\hline
$c_i$ & Category~label~(or Theme) of POI-$p_i$, e.g. Museum, \\
~ & Park, Sports,... \\
\hline
$p_i$ & Identifier of the POI ID~$i$ \\
\hline
$p^u_j$ & Identifier of the POI ID~$j$ in Step-$j$ of $u$'s trajectory \\
\hline
$v^{u}_{i}$ & Activity of user-$u$ in step-${i}$ in her/his trajectory \\
\hline
$tryj_u$ & sequence of check-ins from user-$u$ as a trajectory, \\
~ & i.e. $\{ v^{u}_{1}..v^{u}_{k} \}$ \\
\hline
$\oplus$ & Concatenation operation \\
\hline
~ & Sample distributions \\
$X$ & $X = \{x_1,x_2,..\}$ \\
\hline
~ & Empirical distributions \\
$F^{*}$ & $F^{*} = \{x^{*}_1,x^{*}_2,..\}$ \\
\hline
$B$ & Number of sampling iterations in Bootstrapping \\
\hline
$\alpha$ & Significance level in Bootstrapping \\
\hline
\end{tabular}
}
\label{tbl:notations}
\end{table}
\subsection {\textsc{PoiLstm} - Itinerary Prediction Algorithm using {\textsc{Lstm}}}
\label{lstm_algo}
We model the itinerary prediction problem as a prediction in an Encoder-Decoder~{\textsc{Lstm}}.
Each input to the \textsc{PoiLstm}~network is a vector representing a user's transition from one~{\textsc{Poi}}~to the next~{\textsc{Poi}}~(with embedded details such as the \emph{time} and \emph{distance} traveled).
During the training phase of \textsc{PoiLstm}, each~{\textsc{Poi}}~in a trajectory is passed to the input layer of the {\textsc{Lstm}}~network as an encoded vector, one at a time, using the encoder function. This process is repeated for all~{\textsc{Poi}s}~in all trajectories in the training dataset, as outlined in Algorithm~\ref{alg:PoiLstm}.
When the {\textsc{Lstm}}~network has been trained for a sufficient number of steps~(\emph{epochs}), the output of the {\textsc{Lstm}}~network is a prediction of the next~{\textsc{Poi}}~(as a one-hot embedding) and its estimated duration (in hours, as a floating-point value).
A {\textsc{Poi}}~itinerary can then be predicted by repeatedly decoding the output of \textsc{PoiLstm}, iteratively feeding the previously predicted trajectory information back in as an \emph{encoded vector}.
The function $time(i,j)$ returns the time spent from $v_i$ to $v_j$ and $dist(i,j)$ returns the distance the user($u$) traveled from step-$i$ through step-$j$. Additionally, $ p^{u}_{t_{k-2}} $ , $p^{u}_{t_{k-1}} $ and $p^{u}_{t_{k}}$ are represented as \emph{onehot embedding}~\cite{onehot}.
\begin{algorithm}
\caption{Prediction model in \textsc{PoiLstm}}
\label{alg:PoiLstm}
\label{lstm_encode_function}
\begin{algorithmic}[1]
\REQUIRE $ v^{u}, TimeLimit$ : time budget\\
\STATE \textbf{Set} Activation Function: \textbf{Softmax} \\
\STATE \textbf{Set} Optimizer: \textbf{RMSprop} \\
\STATE \textbf{Let} $i=1$, $T=0$, $seq=\{\}$ \\
\STATE \textbf{SubFunction:} $LSTM\_encode\_seq( v^u_{t_k} ) =$ \\
\STATE $ ~~~~~ { time(k-1,k) } \oplus { time(1,k) } ~\oplus $ \\
\STATE $ ~~~~~{ dist(k-1,k) }~ \oplus~dist(1,k)~\oplus $
\STATE $~~~~~ \
p^{u}_{t_{k-2}} \oplus \
p^{u}_{t_{k-1}} \oplus \
p^{u}_{t_{k}}$
\REPEAT
\STATE ~~$x^{(t)}_i \gets LSTM\_encode\_seq( v_{p^{u}=p_i} ) $\footnote{refer to \ref{lstm_encode_function}.} \\
\STATE ~~compute $a^{(t)}$ and $h^{(t)}$\\
\STATE ~~compute $o^{(t)}$ \\
\STATE ~~\textbf{Let} $(p_{i},t_{i}) \gets decode( o^{(t)} )$
\STATE ~~$seq \gets seq \oplus p_{i}$
\STATE ~~$T \gets T+t_{i}$
\STATE ~~$i \gets i+1$
\UNTIL{ $ T \ge TimeLimit$ }
\RETURN $seq$ \\
\end{algorithmic}
\end{algorithm}
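For concreteness, a simplified realization of this model in PyTorch is sketched below. This is our illustration of the idea rather than the exact \textsc{PoiLstm} implementation; the feature dimension, hidden size, and the two output heads (next-{\textsc{Poi}}~logits and visit duration) are assumptions consistent with Algorithm~\ref{alg:PoiLstm}, which uses a softmax output and the RMSprop optimizer.
\begin{verbatim}
import torch
import torch.nn as nn

class PoiLSTMSketch(nn.Module):
    # Minimal encoder-style LSTM for next-POI prediction (a sketch).
    def __init__(self, n_pois, feat_dim, hidden_dim=128):
        super().__init__()
        # each step encodes time/distance features plus one-hot POI context
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.next_poi = nn.Linear(hidden_dim, n_pois)  # softmax over POIs
        self.duration = nn.Linear(hidden_dim, 1)       # visit duration (hours)

    def forward(self, x):                  # x: (batch, seq_len, feat_dim)
        out, _ = self.lstm(x)
        h_last = out[:, -1, :]             # hidden state of the last step
        return self.next_poi(h_last), self.duration(h_last)

# toy usage: 2 trajectories, 5 steps each, 40-dimensional encoded steps
model = PoiLSTMSketch(n_pois=30, feat_dim=40)
logits, dur = model(torch.randn(2, 5, 40))
print(logits.shape, dur.shape)   # torch.Size([2, 30]) torch.Size([2, 1])
\end{verbatim}
In practice the model would be trained with a cross-entropy loss over the next~{\textsc{Poi}}~and a regression loss over the duration, using RMSprop as in Algorithm~\ref{alg:PoiLstm}.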
\subsection{\textsc{PoiBert}~-~a~{\textsc{Bert}}~model for~{\textsc{Poi}}~Itinerary Prediction} \label{bert_algo}
Generally, a {\textsc{Bert}}~model uses a \emph{self-attention} mechanism that is able to learn the general trend of a language; the trained model can then be used for downstream tasks, such as \emph{language translation} and \emph{question answering}~\cite{bert_question_answering}. When used in practice, a pre-trained {\textsc{Bert}}~model can significantly improve the results in prediction based on a number of benchmarks.
To perform an itinerary prediction in our \textsc{PoiBert}~model, we pass in a set of \emph{sentences} consisting of~{\textsc{Poi}s}~and relevant information to predict the next~{\textsc{Poi}}~which is most likely to occur using the {\textsc{Mlm}}~prediction model.
\begin{paragraph}{Training of \textsc{PoiBert}~Model}
We propose a novel~\textsc{PoiBert}~Model in the space of~{\textsc{Poi}s}~and users' itineraries.
The original implementation of {\textsc{Bert}}~trains the~{\textsc{Mlm}}~by masking 15\% of the words.
The \textsc{PoiBert}~prediction algorithm predicts the \emph{masked}~{\textsc{Poi}}~(word) based on the context provided by the other, unmasked \emph{words}~(representing~{\textsc{Poi}s}~or~{\textsc{Poi}}~categories).
We use Algorithm~\ref{alg:mlm_data_generation} to translate users' trajectories into sentences of~{\textsc{Poi}s}~(\emph{words}), which are subsequently used to train the~\textsc{PoiBert}~model for the itinerary prediction task.
Algorithm~\ref{alg:mlm_data_generation} outlines the function that transforms users' trajectories into sentences of words representing~{\textsc{Poi}s}~or categories of~{\textsc{Poi}s}~for~\textsc{PoiBert}~training.
The time complexity of the function is $O(N K^2)$, where~$N$~is the total number of~{\textsc{Poi}s}~in the dataset and~$K$ represents the maximum number of~{\textsc{Poi}s}~in any trajectory.
\begin{figure*}[t]
\label{fig:mlm_training_themes}
\centering
\includegraphics[trim=48cm 31cm 0cm 0cm,clip,width=15.5cm]{BERT_system}
\caption{Itinerary prediction algorithm of \textsc{PoiBert}~model: In each iteration of the system, a new destination~(i.e.~{\textsc{Poi}}) is predicted by solving the~{\textsc{Mlm}}~prediction task; the predicted~{\textsc{Poi}}~is then inserted to the itinerary.
The prediction loop stops when all~{\textsc{Poi}s}~are visited, or when the time constraint is satisfied.}
\end{figure*}
\begin{algorithm}
\caption{Training Data Generation~for~\textsc{PoiBert}}
\label{alg:mlm_data_generation}
\begin{algorithmic}[1]
\REQUIRE $ tryj_u , \forall u \in Users $ \\
\FORALL{ $u \in users$ }
\FORALL{ $tryj\_seq \in tryj_u$}
\STATE \textbf{Let} $n \leftarrow | tryj\_seq | $ \\
\STATE \textbf{Let} $\{p_1..p_n\} \leftarrow poi\_id(tryj\_seq)$ \\
\STATE \textbf{Let} $\{c_1..c_n\} \leftarrow theme(tryj\_seq)$ \\
\STATE // where the functions $poi\_id(...)$ and $theme(...)$ \\
\STATE // ~return $POI\_id$~(and $theme$, resp.) projections. \\
\STATE \textbf{Output:} $\forall 1 \le i < j \le n$,\\
\STATE ~~~~~~~~~~~``$ \{ c_i,p_i,..,p_{j-1},c_{j-1} \} \rightarrow p_j$''
\ENDFOR
\ENDFOR
\end{algorithmic}
\end{algorithm}
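A compact Python rendering of this generation step is sketched below; it is our illustration, and the textual encoding of a~{\textsc{Poi}}~as \texttt{poi\_<id>} and of its category as \texttt{theme\_<name>} is an assumption about the tokenization rather than the exact format used in our implementation.
\begin{verbatim}
def mlm_training_pairs(trajectory, poi_theme):
    # Enumerate (context sentence, target POI) pairs from one trajectory,
    # in the spirit of Algorithm 2 (a sketch, not the exact implementation).
    pairs = []
    n = len(trajectory)
    for i in range(n):
        for j in range(i + 1, n):
            context = []
            for p in trajectory[i:j]:
                context += ["theme_" + str(poi_theme[p]), "poi_" + str(p)]
            pairs.append((" ".join(context), "poi_" + str(trajectory[j])))
    return pairs

# toy example with hypothetical POI ids and themes
themes = {1: "Museum", 5: "Park", 9: "Sport"}
print(mlm_training_pairs([1, 5, 9], themes))
\end{verbatim}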
\end{paragraph}
\begin{paragraph}{Itinerary Prediction}
Given an initial POI, $p_1$, and an ending POI, $p_k$, from the traveler's specification, we propose an algorithm to predict the sequence of~{\textsc{Poi}s}~which travelers are most \emph{likely} to visit, as an ordered list, based on the historical trajectories recorded in the dataset.
The \textsc{PoiBert}~algorithm is inspired by the~{\textsc{Mlm}}~training process of {\textsc{Bert}}, where the prediction algorithm identifies the \emph{masked} words based on the context of a sentence.
As outlined in Algorithm~\ref{alg:PoiBert}, the algorithm \emph{searches} for the next relevant~{\textsc{Poi}}~between the initial~{\textsc{Poi}}~and the destination~{\textsc{Poi}}, and inserts it into the predicted itinerary.
\begin{algorithm}
\caption{Itinerary Prediction Algorithm in \textsc{PoiBert}}
\label{alg:PoiBert}
\begin{algorithmic}[1]
\REQUIRE $p_{1},p_{k}$: starting/ending~{\textsc{Poi}s} \\
~~~~~~~$TimeLimit$: time budget of itinerary
\STATE \textbf{Let} $seq \gets \{ p_{1},p_{k} \}$ \\
\REPEAT
\STATE \textbf{forall}~ {$j \in \{2..|seq|-1\}$} \\
{
\STATE ~~~\textbf{Let} $query_j \gets \{ p_{1},c_{1},p_{j-1},c_{j-1}, \texttt{{[MASK]}},$\\
~~~~~~~~~~~~~~~~~~~~~~~$p_{j},c_{j},...,p_{k},c_{k} \}$ \\
}
\STATE $seq \gets \textbf{ArgMax}_{j\in\{2..|seq|-1\}}
( \textbf{\textit{Unmask}}(query_j)) $
\UNTIL{ $ \displaystyle \sum_{poi \in seq}{duration( poi )} \ge TimeLimit$ }
\RETURN $seq$
\end{algorithmic}
\end{algorithm}
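Using the HuggingFace \texttt{fill-mask} pipeline, the prediction loop of Algorithm~\ref{alg:PoiBert} can be sketched as follows. This is our illustration: the checkpoint path is a hypothetical placeholder for a {\textsc{Bert}}~model fine-tuned on the {\textsc{Poi}}~sentences of Algorithm~\ref{alg:mlm_data_generation}, and it assumes each~{\textsc{Poi}}~is represented by a single vocabulary token.
\begin{verbatim}
from transformers import pipeline

# hypothetical checkpoint fine-tuned on POI "sentences"
unmask = pipeline("fill-mask", model="./poibert-checkpoint")

def predict_itinerary(p_start, p_end, durations, time_limit, max_len=15):
    # Greedy itinerary expansion in the spirit of Algorithm 3 (a sketch).
    # `durations` maps POI tokens to estimated visit times (e.g. bootstrap means).
    seq = [p_start, p_end]
    while (sum(durations.get(p, 0.0) for p in seq) < time_limit
           and len(seq) < max_len):
        best = None                              # (score, position, poi)
        for j in range(1, len(seq)):             # candidate insertion slots
            query = " ".join(seq[:j] + ["[MASK]"] + seq[j:])
            for cand in unmask(query):
                tok = cand["token_str"].strip()
                if tok not in seq and (best is None or cand["score"] > best[0]):
                    best = (cand["score"], j, tok)
        if best is None:                         # no unseen POI was proposed
            break
        seq.insert(best[1], best[2])
    return seq
\end{verbatim}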
\end{paragraph}
\subsection{Estimation of duration of visits}
Getting a realistic estimate of the duration of visits to the predicted~{\textsc{Poi}s}~is crucial in our solution. Any over-estimation (or under-estimation) of the duration at the predicted~{\textsc{Poi}s}~will affect the time-sensitive trajectories output by the algorithm, and hence the recall and precision scores. In this section, we estimate the duration of visits using a statistical method,~\emph{bootstrapping}, by calculating \emph{confidence intervals} of the durations observed in the trajectories~\cite{2010statistics}.
Due to the high \emph{variance} of visit durations at the~{\textsc{Poi}s}, it is not practical to estimate the duration by merely averaging all visitors' durations at each~{\textsc{Poi}}.
We note that bootstrapping does not assume the input data to follow any particular statistical distribution.
It treats the original \emph{samples} as the \emph{real population} and draws random samples from the data. Bootstrapping then creates \emph{additional} samples by \emph{randomly re-sampling with replacement}, so that a better estimate of the population can be made. This method is used to estimate the duration of visits at the~{\textsc{Poi}s}, given that there are fewer samples at some less popular~{\textsc{Poi}s}. Algorithm~\ref{alg:bootstrapping} outlines the steps of obtaining the 90\% confidence~intervals of the duration of
visit to a {\textsc{Poi}}-$i$, $\forall i \in~{\textsc{Poi}s}$.
\begin{algorithm}
\caption{Estimate Duration of Visit to {\textsc{Poi}}~}
\label{alg:bootstrapping}
\begin{algorithmic}[1]
\REQUIRE $ poi\_id \in {\textsc{Poi}s}$ \\
\REQUIRE confidence~level~$\alpha$
\REQUIRE number of replicates~$B$
\REQUIRE $ Tryj_u , \forall u \in Users $ \\
\STATE \textbf{SubFunc.} $getSamples(poi\_id)$:
\FORALL{ $u \in users$ }
\FORALL{ $tryj\_seq \in tryj_u$}
\FORALL{ $p \in tryj\_seq $}
\STATE \textbf{Output} activities if $p == poi\_id $ \\
\ENDFOR
\ENDFOR
\ENDFOR~
\STATE ~\\
\STATE \textbf{Let}~$X \gets getSamples(poi\_id) $\\
\STATE \textbf{Sample} $x^*_1, x^*_2,..x^*_n$ \text{with replacement from sample}~$X$.\\
\textbf{Repeat}~$B$ iterations\footnote{$B$ is a large number for bootstrapping to be efficient, we use $B=10000$ for all experiments}.
\STATE \textbf{Let}~$F^*$ be the \emph{empirical distribution}
\STATE Calculate $(100-\alpha)$\% \emph{confidence intervals} for~$F^*$
\end{algorithmic}
\end{algorithm}
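The percentile-bootstrap computation itself is only a few lines of Python; the sketch below is our illustration of the procedure in Algorithm~\ref{alg:bootstrapping} (the example durations are made up).
\begin{verbatim}
import numpy as np

def bootstrap_duration_ci(durations, B=10000, alpha=10, seed=0):
    # Percentile bootstrap CI for the mean visit duration at one POI (a sketch).
    rng = np.random.default_rng(seed)
    durations = np.asarray(durations, dtype=float)
    boot_means = np.array([
        rng.choice(durations, size=durations.size, replace=True).mean()
        for _ in range(B)
    ])
    lo, hi = np.percentile(boot_means, [alpha / 2, 100 - alpha / 2])
    return durations.mean(), (lo, hi)

# toy example: observed visit durations (in hours) at one POI
print(bootstrap_duration_ci([0.5, 1.2, 0.8, 2.5, 0.3, 1.0, 4.0, 0.7]))
\end{verbatim}
With $\alpha=10$ this yields the 90\% confidence interval used in our experiments.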
\section{Experiments and Results}
\label{section_experiments}
We use a dataset of photos uploaded to the Flickr platform, which consists of trajectories of 5,654 users from 7 different cities, tagged with meta-information, such as the date and GPS location. Using this data-set, one can construct the travel trajectories by sorting the photos by time, mapping the photos to the~{\textsc{Poi}s}~as identified by the GPS location, resulting in the users' trajectories as a sequence of time sensitive {\textsc{Poi}}-IDs.
\begin{table*}[h]
\centering
\caption{Description of Data-sets}
\scalebox{1.30}{
\begin{tabular}{lccccccccc }
\hline\hline
City & {Budapest} & {Delhi} & {Edinburgh} & {Glasgow} & {Osaka} & {Perth} & {Toronto}
\\
\hline\hline
{No. of~{\textsc{Poi}s}} & 39 & 26 & 29 & 29 & 28 & 25 & 30 \\
\hline
No. of Users & 935 & 279 & 1454 & 601 & 450 & 159 & 1395 \\
\hline
No. of Trajectories & 2361 & 489 & 5028 & 2227 & 1115 & 716 & 605 \\
.. for training~~~~ & 277 & 13 & 267 & 28 & 12 & 14 & 95 \\
.. for evaluation & 70 & 4 & 67 & 7 & 3 & 4 & 74 \\
\hline
No. of check-ins~~~~ & 18513 & 3993 & 33944 & 11434 & 7747 & 3643 & 39419 \\% & 34515 \\
.. for training~~~ & 7593 & 367 & 6575 & 600 & 381 & 418 & 2371 \\% & 6640 \\
.. for evaluation & 1915 & 215 & 1921 & 223 & 40 & 68 & 540 \\% & 2536 \\
\hline
{Avg.~{\textsc{Poi}s}~per~trajectory} & 5.78 & 4.69 & 5.27 & 4.71 & 4.50 & 4.93 & 4.93 \\% & 5.271 \\
\hline
{Avg.~check-ins~per~{\textsc{Poi}}~~} & 4.74 & 6.02 & 4.68 & 4.55 & 7.06 & 6.06 & 5.07 \\% & 5.338 \\
\hline\hline
\end{tabular}
}
\label{fig:datasets}
\end{table*}
\begin{table*}[!th]
\centering
\caption{Average~{$F_1$} / Recall / Precision scores of \textsc{PoiBert}~prediction algorithm~(\%) }
\scalebox{1.32}{
\begin{tabular}{ cccccccccccc}
\hline\hline
epochs & ~ & {Budapest} & {Delhi} & {Edinburgh} & {Glasgow} & {Osaka} & {Perth} & {Toronto} \\
\hline\hline
\triplet{~}{1}{~~} &
\triplet{\it Recall}{$F_1$}{\it Precision} &
\triplet{65.511}{\bf 49.974}{46.563} & \triplet{71.250}{58.485}{50.119} & \triplet{64.262}{\textbf{54.371}}{53.223} & \triplet{78.265}{\textit{\underline{59.417}}}{52.234} &
\triplet{46.667}{52.382}{61.111} & \triplet{77.500}{60.242}{54.924} & \triplet{73.427}{\textbf{55.929}}{52.559} \\
\hline
\triplet{~}{3}{~~} &
\triplet{\it Recall}{$F_1$}{\it Precision} &
\triplet{63.498}{\textit{\underline{48.533}}}{45.632} & \triplet{87.500}{58.485}{55.357} & \triplet{64.165}{\textbf{54.371}}{52.950} & \triplet{81.020}{59.381}{52.531} &
\triplet{55.000}{52.381}{75.556} & \triplet{77.500}{60.242}{60.417} & \triplet{73.427}{\textbf{55.929}}{50.666} \\
\hline
\triplet{~}{5}{~~} &
\triplet{\it Recall}{$F_1$}{\it Precision} &
\triplet{60.455}{47.448}{45.238} & \triplet{76.250}{58.333}{47.619} & \triplet{61.965}{52.748}{52.694} & \triplet{81.020}{\textbf{60.752}}{53.296} &
\triplet{54.999}{63.420}{75.556} & \triplet{77.500}{\textit{\underline{61.994}}}{61.174} & \triplet{74.468}{52.973}{52.618} \\
\hline
\triplet{~}{7}{~~} &
\triplet{\it Recall}{$F_1$}{\it Precision} &
\triplet{63.094}{\it 48.731}{46.014} &
\triplet{76.250}{58.333}{47.619} &
\triplet{61.710}{52.229}{51.909} &
\triplet{70.306}{54.949}{48.044} &
\triplet{55.000}{63.420}{75.556} &
\triplet{77.500}{60.242}{60.417} &
\triplet{71.790}{52.256}{52.856} \\
\hline
\triplet{~}{10}{~~} &
\triplet{\it Recall}{$F_1$}{\it Precision} &
\triplet{61.323}{47.542}{45.425} &
\triplet{76.250}{58.333}{47.619} &
\triplet{62.148}{53.397}{53.145} &
\triplet{76.735}{51.042}{47.086} &
\triplet{61.667}{\textit{\underline{71.753}}}{86.667} &
\triplet{72.500}{\bf 64.286}{52.083} &
\triplet{64.865}{52.744}{52.825} \\
\hline
\triplet{~}{15}{~~} &
\triplet{\it Recall}{$F_1$}{\it Precision} &
\triplet{60.717}{46.884}{44.510} &
\triplet{76.250}{58.333}{47.619} &
\triplet{62.507}{\textit{\underline{53.556}}}{53.206} &
\triplet{66.225}{51.471}{45.899} &
\triplet{53.333}{60.714}{72.222} &
\triplet{72.500}{55.777}{53.750} &
\triplet{67.782}{54.589}{54.592} \\
\hline
\triplet{~}{20}{~~} &
\triplet{\it Recall}{$F_1$}{\it Precision} &
\triplet{62.870}{48.228}{45.517} &
\triplet{76.250}{57.051}{46.230} &
\triplet{60.855}{51.865}{51.064} &
\triplet{78.980}{56.724}{48.566} &
\triplet{70.000}{\bf 74.817}{84.127} &
\triplet{66.250}{56.047}{54.464} &
\triplet{64.320}{50.288}{49.533} \\
\hline
\triplet{~}{30}{~~} &
\triplet{\it Recall}{$F_1$}{\it Precision} &
\triplet{60.469}{47.081}{45.167} &
\triplet{76.250}{58.333}{47.619} &
\triplet{61.611}{51.806}{53.215} &
\triplet{73.367}{51.752}{46.315} &
\triplet{53.333}{62.229}{75.556} &
\triplet{66.250}{59.077}{56.548} &
\triplet{63.273}{52.542}{52.212} \\
\hline
\triplet{~}{40}{~~} &
\triplet{\it Recall}{$F_1$}{\it Precision} &
\triplet{59.210}{45.675}{43.258} &
\triplet{82.500}{63.333}{51.786} &
\triplet{60.991}{51.494}{51.813} &
\triplet{69.796}{51.696}{44.490} &
\triplet{53.333}{62.229}{75.556} &
\triplet{61.250}{52.411}{52.381} &
\triplet{63.442}{50.514}{51.548} \\
\hline
\triplet{~}{50}{~~} &
\triplet{\it Recall}{$F_1$}{\it Precision} &
\triplet{60.673}{46.686}{44.280} &
\triplet{82.500}{\textit{\underline{64.848}}}{54.167} &
\triplet{60.141}{51.465}{50.924} &
\triplet{75.408}{53.457}{47.973} &
\triplet{53.333}{62.229}{75.556} &
\triplet{60.000}{54.708}{50.947} &
\triplet{63.863}{52.506}{51.301} \\
\hline
\triplet{~}{60}{~~} &
\triplet{\it Recall}{$F_1$}{\it Precision} &
\triplet{60.453}{47.186}{45.030} &
\triplet{88.75}{\textbf{69.848}}{57.738} &
\triplet{61.445}{52.240}{51.566} &
\triplet{66.224}{49.128}{43.900} &
\triplet{53.333}{62.229}{75.556} &
\triplet{66.25}{54.708}{55.159} &
\triplet{66.182}{51.777}{51.935} \\
\hline\hline
&
\end{tabular}
}
\label{table:f1scores}
\end{table*}
\subsection{Datasets}
We use the Flickr datasets prepared for our evaluation of algorithms~\cite{Lim2016PersTourAP}.
In total, there are close to 120K photos, or check-in records, from 4701 users in seven popular cities.
Table~\ref{fig:datasets} describes more details about each dataset and information about the trajectories of these cities.
\begin{paragraph}{Training and Test Set}
Our data-sets are split into Training and Testing data-sets. Firstly, we organize photos by the Trajectory-IDs, then these trajectories are sorted according to their \emph{last~check-in times} (in ascending order).
To obtain the Training dataset, the first 80\% of Trajectories~(based on their photos) are set aside as \emph{Training Data}. The remaining data is used as the \emph{Testing Data}. This segregation of Training and Test data avoids the problem of having a trajectory covering over both Training and Testing Data sets.
\end{paragraph}
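The chronological split can be expressed compactly with pandas, as sketched below (our illustration; the column names \texttt{trajectory\_id} and \texttt{timestamp} are assumptions about how the check-in records are stored).
\begin{verbatim}
import pandas as pd

def split_by_last_checkin(photos, train_frac=0.8):
    # Chronological 80/20 split of trajectories by their last check-in time.
    last_seen = photos.groupby("trajectory_id")["timestamp"].max().sort_values()
    cutoff = int(len(last_seen) * train_frac)
    train_ids = set(last_seen.index[:cutoff])
    is_train = photos["trajectory_id"].isin(train_ids)
    return photos[is_train], photos[~is_train]
\end{verbatim}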
\begin{table*}[!th]
\centering
\caption{
Average $F_1$ scores for different Sequence Prediction Algorithms~(\%) }
\scalebox{1.3}{
\begin{tabular}{
p{1.8cm}ccccccccc} \hline
Algorithm & {Budapest} & {Delhi} & {Edinburgh} & {Glasgow} & {Osaka} & {Perth} & {Toronto} \\
\hline
\textsc{Cpt} & 45.331 & 58.238 & 44.732 & 51.234 & 45.238 & 58.569 & 46.816 \\
\textsc{Cpt+} & 43.472 & 42.511 & 44.543 & 48.701 & 37.719 & 58.570 & 37.719 \\
\textsc{Dg} & 44.917 & 50.260 & 44.867 & 50.579 & 43.333 & 49.936 & 43.333 \\
\textsc{Lz78} & 43.447 & 49.412 & 44.105 & 45.438 & 40.00 & 51.582 & 40.00 \\
PPM & 44.574 & 50.258 & 44.848 & 50.579 & 45.556 & 54.481 & 45.556 \\
\textsc{Tfag} & 43.686 & 60.694 & 43.105 & 48.237 & 45.556 & 48.711 & 45.555 \\
\textsc{Bwt}-SuBSeq & 37.883 & 43.333 & 39.082 & 48.322 & 42.857 & 36.320 & 33.145 \\
\textsc{Seq2Seq} & 36.970 & 43.864 & \textit{\underline{52.768}} & \textit{\underline{62.132}} & 57.937 & 54.911 & \textit{\underline{52.870}} \\
\textbf{\textsc{PoiLstm}}* & \textbf{53.591} & \textit{\underline{68.750}} & 41.282 & 61.147 & \textit{\underline{60.350}} & \textit{\underline{60.229}} & 50.759 \\
\textbf{\textsc{PoiBert}}* & \textit{\underline{49.974}} & \textbf{69.848} & \textbf{54.471} & \textbf{62.771} & \textbf{71.753} & \textbf{61.075} & \textbf{55.929} \\
\hline
\end{tabular}
}
\label{table:all_algo}
\end{table*}
\begin{table*}[!th]
\centering
\caption{
Average Number of~{\textsc{Poi}s}~in the \textsc{PoiBert}~Predicted Itineraries vs. Actual Trajectories }
\scalebox{1.4}{
\begin{tabular}{
p{2.3cm}ccccccccc}
\hline
Epochs & {Budapest} & {Delhi} & {Edinburgh} & {Glasgow} & {Osaka} & {Perth} & {Toronto} \\
\hline
\textsl{Actual Trajectories} & \textsl{6.243} & \textsl{4.750} & \textsl{5.955} & \textsl{5.000} & \textsl{5.000} & \textsl{5.250} & \textsl{5.458} \\
1 & 9.786 & 6.000 & 7.881 & 7.429 & 4.000 & 6.750 & 7.583 \\
3 & 9.814 & 6.750 & 7.582 & 7.857 & 3.667 & 7.500 & 12.042 \\
5 & 9.514 & 6.750 & 7.507 & 7.714 & 3.667 & 7.500 & 11.250 \\
7 & 9.729 & 6.750 & 7.881 & 7.286 & 3.667 & 7.500 & 10.917 \\
10 & 9.671 & 6.750 & 7.571 & 7.571 & 3.667 & 7.500 & 7.458 \\
15 & 9.871 & 6.750 & 7.806 & 7.000 & 4.000 & 7.000 & 7.583 \\
20 & 9.914 & 7.000 & 7.791 & 7.857 & 4.333 & 6.500 & 8.042 \\
30 & 9.757 & 6.750 & 7.672 & 6.857 & 3.667 & 5.750 & 7.250 \\
40 & 9.771 & 6.750 & 7.836 & 7.429 & 3.667 & 6.250 & 7.500 \\
50 & 9.871 & 6.500 & 7.821 & 8.000 & 3.667 & 6.250 & 7.708 \\
60 & 9.600 & 4.333 & 7.940 & 6.857 & 3.667 & 5.500 & 7.875 \\ \hline
\end{tabular}
}
\label{table:average_pois}
\end{table*}
\subsection{Performance of Algorithms}
\label{accuracy}
Experiments were conducted for each city in the dataset. We regard all users' trajectories~(with at least 3~{\textsc{Poi}s}) in the training set as sequences of~{\textsc{Poi}s}~(the \emph{corpus}).
To compare the performance of our models, we trained different sequence prediction~models using different hyper-parameters.
We then used the Test set to evaluate the accuracy of the trained models:
for each trajectory in the testing set (known as the \emph{history-list}), we treat the \emph{first} (respectively \emph{last})~{\textsc{Poi}}~as the \emph{source} (respectively \emph{destination})~{\textsc{Poi}}~and try to predict the \emph{intermediate}~{\textsc{Poi}s}~of the trajectory, within the time budget of the \emph{history-list}.
We evaluated the effectiveness of \textsc{PoiBert}~and \textsc{PoiLstm}~prediction algorithms in terms of {$F_1$},
precision~($T_{p}$)~and recall~($T_{r}$) scores of the predicted~{{\textsc{Poi}s}}~against the actual trajectories, as below:
\noindent Let $S_p$ be the predicted sequence of~{\textsc{Poi}s}~from the algorithm and $S_h$ be the actual sequence from the trajectories; we evaluate our algorithms based on the following measures (a small computational sketch is given after the list):
\begin{itemize}
\item $T_{r}(S_h,S_p) = \frac{|S_h \cap S_p|}{|S_h|}$
\item $T_{p}(S_h,S_p) = \frac{|S_h \cap S_p|}{|S_p|}$
\item $F_1\_score(S_h,S_p) = \frac{2 T_{r}(\bullet) T_{p}(\bullet)}
{T_{r}(\bullet) + T_{p}(\bullet)}$
\end{itemize}
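These set-overlap measures can be computed as in the following sketch (ours), where \texttt{actual} and \texttt{predicted} are the lists of~{\textsc{Poi}}~identifiers in $S_h$ and $S_p$, respectively.
\begin{verbatim}
def itinerary_scores(actual, predicted):
    # recall, precision and F1 based on the overlap of visited POIs
    overlap = len(set(actual) & set(predicted))
    recall = overlap / len(set(actual)) if actual else 0.0
    precision = overlap / len(set(predicted)) if predicted else 0.0
    f1 = (2 * recall * precision / (recall + precision)
          if (recall + precision) else 0.0)
    return recall, precision, f1

print(itinerary_scores([1, 5, 9, 3], [1, 9, 3, 7]))  # (0.75, 0.75, 0.75)
\end{verbatim}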
\subsection{Baseline Algorithms}
Our proposed models are compared with other sequence prediction algorithms as baseline algorithms:
\begin{itemize}
\item \textsc{Spmf} algorithms - this package consists of data mining algorithms including: \emph{CPT}~\cite{CPT2013}, \emph{CPT+}~\cite{CPTplus2015}, \emph{TDAG}~\cite{DG1996}, \emph{First-order and All-k-Order }
\emph{Markov Chains}~\cite{PPM1984,AKOM1999}. Our experiments predict an itinerary by \emph{repeatedly} asking for the next \emph{token}~(representing the next~{\textsc{Poi}}~to visit) while the time limit is not exhausted.
\item \textsc{SuBSeq} : the algorithm uses a \emph{Succinct Wavelet Tree} structure to maintain a list of training sequences for sequence~prediction~\cite{succinctBWT_2019}.
\item \textsc{Seq2Seq} : this model adopts a multilayered {{\textsc{Lstm}}} to map the input sequence to a vector with a fixed size or dimension~\cite{seq2seq2014}. The prediction is facilitated by another deep {{\textsc{Lstm}}} to decode the target sequence. The default prediction model of \textsc{Seq2Seq} is to output a \emph{sentence} of words which may consist of duplicated words. We modified the prediction model to return a series of \emph{unique}~POIs~instead.
\end{itemize}
Some baseline algorithms only predict one~\emph{token} or~{\textsc{Poi}}; in that case, we \emph{iteratively} predict more tokens until the time limit of the itinerary is reached. For the purpose of algorithm evaluation, all experiments with the baseline algorithms are conducted in the same setting as in Section~\ref{accuracy}, sharing the same training and testing data.
\subsection{Experimental Results}
We evaluated the effectiveness of our proposed algorithms on different cities.
We constructed the travel histories by chronologically sorting the photos, which resulted in the users' trajectories. These trajectories are then regarded as \emph{sentences} for inputs to our proposed training models with different hyper-parameters.
Results are summarized by comparing the accuracy of the predicted itineraries~(i.e. Recall / Precision / {$F_1$} scores,) as shown in Table~{\ref{table:f1scores}}.
In Table~\ref{table:all_algo}, we also compare the performance~of \textsc{PoiLstm}~and~\textsc{PoiBert}~against 8 baseline algorithms.
Overall, the experimental results show that our~\textsc{PoiLstm}~and~\textsc{PoiBert}~itinerary prediction algorithms achieve significant accuracy in itinerary prediction tasks; the proposed~\textsc{PoiBert}~prediction algorithm is scalable and adaptable to parallel environments.
Our experiments also show that the {\textsc{PoiBert}}~prediction algorithm achieves {$F_1$~scores} of at {\textit{least}} 47\% across all cities and different parameter settings.
In particular, we recorded an average of $74.8\%$ in our \emph{{Osaka}} dataset; experiments in~\emph{{Delhi}} also show an \emph{increase} of 19.99\%~(from 58.238\% up to 69.848\%) in~$F_1$ score.
In Table~\ref{table:average_pois}, we compare the number of~{\textsc{Poi}s}~in users' trajectories and their predicted itineraries by \textsc{PoiBert}. \textsc{PoiBert}~is able to recommend more \emph{relevant}, and \emph{compact} trajectories relative to the actual trajectories, while not compromising the quality of the recommendation model.
\section{Conclusion}
\label{section_conclusion}
In this paper, we study the problem of tour itinerary recommendation, which aims to identify users' preferences on~{\textsc{Poi}s}~and make appropriate recommendations of itineraries under time constraints.
To solve this problem, we propose~\textsc{PoiBert}~ that builds upon the highly successful~{\textsc{Bert}}~ model with the novel adaptation of a language model to this itinerary recommendation task, along with an iterative approach to generating~{\textsc{Poi}s}.
Our iterative \textsc{PoiBert}~prediction algorithm can reliably uncover a user's preference in a tour by only using a pair of initial and destination~{\textsc{Poi}s}.
Our experiments show the effectiveness of our proposed algorithm for predicting relevant~{\textsc{Poi}s}~in terms of {\textsl{$F_1$}}-scores.
In our experiments on 7~cities, our \textsc{PoiBert}~algorithm \emph{outperforms} 8~baseline algorithms measured in average~$F_1$-scores. Future work includes further adaptation and more in-depth evaluation of other language models for this itinerary recommendation task, and creating a \textsl{HuggingFace} interface module for~\textsc{PoiBert}~\cite{huggingface2019}.
\section*{Acknowledgment}
\noindent{\small This research is funded in part by the Singapore University of Technology and Design under grant SRG-ISTD-2018-140.\\
The computational work was partially performed on resources of the National Super-Computing Centre, Singapore.}
\bibliographystyle{IEEEtran}
\balance
\section{Introduction}
Quantitative performance analysis in sports has become mainstream in the last decade. The focus of the analyses is shifting towards more sport-specific metrics due to novel technologies. These systems measure the movements of the players and the events happening during training sessions and games. This allows for a more detailed evaluation of professional athletes, with implications for areas such as opponent scouting, planning of training sessions, or player scouting.
Previous works that analyze soccer-related logs focus on the game-related performance of the players and teams. The vast majority of these methodologies concentrate on descriptive statistics that capture some part of the players' strategy. For example, in the case of soccer, the average numbers of shots, goals, fouls, and passes are derived both for the teams and for the players~\cite{anderson2013numbers,duch2010quantifying}. Other works identify and analyze the outcome of the strategies that teams apply~\cite{pena2012network,narizuka2013statistical,lucey2013assessing,gyarmati2014,gyarmati2015,Wang2015,luceyquality}. However, the physical performance of players, and in particular their movements, have not received detailed attention yet.
It is challenging to get access to datasets related to the physical performance of soccer players. The teams consider such information highly confidential, especially if it covers in-game performance. Despite the fact that numerous teams have deployed player tracking systems in their stadiums, datasets of this nature are not available for research or for public usage. It is nearly impossible to obtain quantitative information on the physical performance of all the teams of a competition. Hence, most of the analysis and evaluation of the players' performance does not contain much information on the physical aspect of the game, creating a blind spot in performance analysis.
We propose a novel method to solve this issue by deriving the movement characteristics of soccer players. We use event-based datasets from data provider companies covering 50+ soccer leagues, allowing us to analyze the movement profiles of potentially tens of thousands of players without any major investment. Our methodology does not require an expensive, dedicated player tracking system deployed in the stadium. Instead, if the game is broadcast, our methodology can be used. As a consequence, our technique does not require the consent of the involved teams, yet it can provide insights on the physical performance of many players in different teams.
The main contribution of our work is threefold:
\begin{enumerate}
\item we propose a methodology to extract the movement characteristics of the players,
\item we compute the similarity of the players and as such identify potential candidates who may be able to replace a given player,
\item we quantify the uniqueness and consistency of players in terms of their in-game movements.
\end{enumerate}
To the best of our knowledge, our study is the first of its kind that focuses on the movements of soccer players at scale, while it derives novel, actionable insights for the soccer industry from event-based datasets.
\section{The Challenge of Profiling Movements}
As we noted already, it is not straightforward how to quantify the movements of soccer players at scale. The core of this problem lies in the properties of the potential datasets that may be used for the process. There exist three main data acquisition methodologies applied in the soccer industry: event-based, tracking, and wearable sensors. We briefly describe each of them focusing on their properties relevant to the analysis of the players' movements.
First, event-based datasets annotate the most important, ball related events of a game. The method involves human operators who code the games based on a corresponding video feed of the game. Although data providers apply quality assurance techniques\footnote{\eg, multiple operators annotate the game and the final data feed is a result of majority voting.}, this technique is prone to human errors. Despite of this, the datasets are widely used in the media to enhance the fan experience during the game. On the other hand, the data feed is near real-time and the data production does not need any dedicated system to be deployed at the stadiums.
Second, tracking datasets contain fine-grain details on the movement of players and of the ball throughout the game. This data is generated based on video feeds of dedicated, precisely positioned cameras. Optical tracking algorithms extract the trajectories from the video; however, there are scenarios (\eg, collision of players) where the supervision of human operators are needed. A recent study revealed that discrepancies exist among different tracking systems, \eg, the trajectories of a player may be of within several meters~\cite{IanGrahamTalkFCBSymp}. A major drawback of this technique is that it involves the deployment of a system in the stadium. As such, the consent of the home team is mandatory for such data collection. Anyone who intends to analyze the movements of the players of a competition should get the consent of all teams (and usually of the league too). This is a major obstacle for physical performance analysis at scale.
Third, wearable sensor devices collect detailed datasets on the movement of the players~\cite{gpsports,statsports,catapult}. The sensors of these devices capture the movement, acceleration, and rotation of the players, among others. The in-game application of this technique was authorized by a recent decision of FIFA, the governing body of international soccer~\cite{FIFASensor}. As of July 2015, players are allowed to wear sensors during official games. However, recent research studies report discrepancies related to the precision and consistency of these devices, and such data should be used with caution~\cite{buchheit2014integrating}. There is a more crucial practical issue with this technique: the dataset holds information only on the players of a single team, while details on the ball and the opponent players are missing. This prevents any comparison of players from multiple teams, as well as tactical analysis of players.
This review of the available data acquisition techniques reveals the difficulty of any study focusing on the quantitative evaluation of the players' movements at scale. As we show in this paper, event-based datasets can be used to address this problem and provide insights on player movements. We introduce our methodology in the next section.
\section{Methodology}
In this section, we introduce our methodology used to extract the movements of players and then to create their movement characteristics. Our final goal is to quantify the similarities of players based on their movement characteristics, \ie, the movements they apply during a season. We use an event-based dataset throughout our analyses that we describe next.
\subsection{Dataset}
We use an event-based dataset generated by Opta~\cite{opta} covering the 2012/13 season of La Liga (\ie, the first division soccer league of Spain). The dataset contains all major events of a soccer game including passes, shots, dribbles, and tackles. For example, the dataset has more than 300,000 passes and nearly 10,000 shots.
The dataset contains the time and location of these events along with the identity of the involved players.
Hence, it is possible to derive a coarse-grained time series of the movements of the players~\cite{Gyarmati2015Porto}. We note that the precision of the time annotation of the dataset is one second.
\subsection{Movement vector extraction}
We describe each movement as a vector of seven elements: $(x_1,y_1,x_2,y_2,T,s,b)$, where the movement starts at time $T$ at location $(x_1,y_1)$, ends at location $(x_2,y_2)$, with speed $s$, while $b$ denotes ball possession (\ie, whether the player had the ball or not). In total, we derive 660,848 movement vectors of 542 players for the analyzed season. The players have diverse movements over the season---both in terms of their numbers and properties: the mean number of movements per player is 1,219 (and can be as high as 4,998), while the mean length of the movements is 19.4 meters (up to 100 meters). We illustrate the derived movements of three players in Figure~\ref{fig:movement_vectors} for a single game. The figures show the area where the players tend to move and also reveal their role in the team. For example, Xavi was active mainly in the middle of the field and had some high intensity movements (arrows with red color). Messi leaned towards the right side of the field and penetrated the box of the opponent five times (arrows pointing into the box). Cristiano Ronaldo, on the other hand, was moving on the left side, and his movements covered larger distances. This is the first step of our methodology: extracting the movement vectors of the players. We note that the event-based dataset we use is sparse in terms of the position of the players, \ie, the physical location of a player is only recorded when the player was involved in some ball-related event\footnote{This is a consequence of the data acquisition process: the games are annotated based on the television broadcast that focuses on the ball all the time.}. As such, the elapsed time between two events of a player can be as low as a couple of seconds, but it can also reach several minutes.
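The extraction of movement vectors from consecutive events can be illustrated with a short script. The sketch below is a simplified, hypothetical version of this step: it assumes that the time-ordered events of one player are already available as tuples of timestamp, position (in meters), and a ball-possession flag, which is not the exact layout of the provider feed.
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class MovementVector:
    x1: float
    y1: float
    x2: float
    y2: float
    T: float   # start time of the movement [s]
    s: float   # average speed [m/s]
    b: bool    # ball possession during the movement

def extract_movements(events):
    """events: time-ordered list of (timestamp, x, y, has_ball) tuples
    of a single player, positions in meters, timestamps in seconds."""
    movements = []
    for (t1, x1, y1, b1), (t2, x2, y2, _) in zip(events, events[1:]):
        dt = t2 - t1
        if dt <= 0:
            continue  # the time annotation has one-second precision
        dist = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        movements.append(
            MovementVector(x1, y1, x2, y2, T=t1, s=dist / dt, b=b1))
    return movements
\end{verbatim}
In practice, the ball-possession flag of a movement is derived from the types of its starting and ending events, as discussed later; the sketch simply carries over a per-event flag.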
\begin{figure}[tb]
\centering
\subfigure[Xavi]{\adjincludegraphics[clip=true, trim=1cm 2.5cm 1cm 3cm,width=5.4cm]{figs/pre_shot_movements_player_5816_per_game.eps}\label{fig:movement_vectors_xavi}}
\subfigure[Messi]{\adjincludegraphics[clip=true, trim=1cm 2.5cm 1cm 3cm,width=5.4cm]{figs/pre_shot_movements_player_19054_per_game.eps}}
\subfigure[Ronaldo]{\adjincludegraphics[clip=true, trim=1cm 2.5cm 1cm 3cm,width=5.4cm]{figs/pre_shot_movements_player_14937_per_game.eps}}
\caption{Players' movement vectors in a game derived from an event-based dataset. The color of the arrows corresponds to the speed of the movements (green---slow, red---fast). The teams are attacking from left to right.}
\label{fig:movement_vectors}
\end{figure}
During the creation of the movement vectors, we take into account that the sizes of soccer fields are not necessarily identical. An interesting property of the rules of soccer is that the dimensions of the field are not fixed: there is some room to design a soccer pitch even in the case of international matches. According to the first law of the game, the length of the pitch shall be between 100 and 110 meters, while the width shall be between 64 and 75 meters~\cite{FIFArules}. There is an ongoing standardization effort, and most of the newly constructed stadiums have a pitch with a size of 105x68m~\cite{uefa_pitch}. Spain is no exception in this regard: the dimensions of Elche's stadium are 108x70m, while the field is 100x65m in the case of Rayo Vallecano~\cite{spain_stadium_sizes}.
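A simple way to compensate for the differing pitch dimensions is to rescale all coordinates to a common reference pitch before extracting the vectors. The snippet below is an illustrative sketch only; it assumes that the provider reports positions as percentages of the pitch length and width, and it uses 105x68m as the reference size.
\begin{verbatim}
REF_LENGTH, REF_WIDTH = 105.0, 68.0  # reference pitch size in meters

def to_reference_pitch(x_pct, y_pct):
    """Map a position given in percent of the pitch (0-100) to meters
    on a common 105x68 m reference pitch."""
    return x_pct / 100.0 * REF_LENGTH, y_pct / 100.0 * REF_WIDTH
\end{verbatim}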
Another data preparation technique we use is related to handling the passes in the dataset. For passes, we have a complete datapoint for the initiator of the pass (\ie, timestamp and location); however, at the receiving end, the dataset does not contain a timestamp. To overcome this issue, and to increase the richness of the extracted time series, we use the timestamp of the previous event, \ie, the initiation of the pass. This is the best estimation method, as described in~\cite{Gyarmati2015Porto}.
\subsection{Movement characteristics construction}
Our goal is to derive movement characteristics that enable us to compare the players' performance and to analyze the stability of a player's role and fitness across a season. Players have a diverse number of movement vectors; to handle this, we apply the following methodology. First, we derive the most relevant $K$ movement vectors of the competition using all the vectors of all the players. We determine these features using the mini-batch $K$-means clustering algorithm~\cite{sculley2010web}: the centroid movement represents the vectors belonging to a specific cluster. We apply this method instead of creating a grid for the locations in order to have smooth, balanced clusters (instead of a high skew among the number of members in the clusters, as in the grid scheme). Throughout this paper, we use $K=200$. Second, for each movement of a player, we determine the cluster it belongs to, \ie, we compute the most similar feature vector. In the Appendix, we show examples of the coverage of some of the feature vectors, \ie, which movement vectors belong to a given feature vector (Figure~\ref{fig:feature_vector_spread}). Third, we aggregate the number of times a player applied each feature vector, which creates a frequency vector of the features. Finally, we normalize the frequencies with the total number of movements a player has. As a result of the normalization, we have the movement characteristic of a player. As an example, we present the top 50 movement directions of Messi in Figure~\ref{fig:movement_characteristics_all}. The figure reveals that Messi tends to have short movements in the final third of the pitch, while his mid/long range movements start from the right side of the field.
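The construction of the movement characteristics can be sketched as follows, using the mini-batch $K$-means implementation of scikit-learn. The snippet assumes the movement vectors of all players are stacked into a NumPy array; it is meant to illustrate the procedure, not to reproduce the exact configuration used in this paper.
\begin{verbatim}
import numpy as np
from sklearn.cluster import MiniBatchKMeans

K = 200  # number of feature (centroid) movements

def build_characteristics(all_vectors, player_vectors):
    """all_vectors: (N, d) array with the movement vectors of every player,
    player_vectors: dict mapping player id -> (n_i, d) array of vectors."""
    kmeans = MiniBatchKMeans(n_clusters=K, random_state=0).fit(all_vectors)
    characteristics = {}
    for player, vectors in player_vectors.items():
        labels = kmeans.predict(vectors)                 # closest feature vector
        counts = np.bincount(labels, minlength=K)        # frequency of each feature
        characteristics[player] = counts / counts.sum()  # normalized characteristic
    return kmeans.cluster_centers_, characteristics
\end{verbatim}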
We are able to focus on specific movements of the players by applying filters on the initial set of movement vectors. One such filter is ball possession: how a player moves while having the ball is a crucial insight. To focus on this, we determine all the combinations of events where a player has the ball (\ie, both the starting and ending events of the movement should involve ball possession). Events like recovering the ball, intercepting the ball, and pass reception mark the beginning of movements where the player has possession of the ball, \ie, the player is moving together with the ball. On the contrary, if the first event of a movement is making a pass, the player does not have the ball for the given movement vector. After the filtering, we construct the feature vectors and the characteristic vectors of the players. We present the most important with-ball movement traits of Messi in Figure~\ref{fig:movement_characteristics_ball}. There are six major routes Messi takes when he has the ball (shown with thick arrows). All of these movements are located centrally at the beginning of the final third.
Another aspect is the speed of the movements: fast movements are generally collocated with important events of the game. We apply a threshold on the speed of the movements, \ie, the movements ought to be at least 14km/h. This is in line with the categories widely used in the soccer industry~\cite{Valter2006, Bangsbo1991}. Figure~\ref{fig:movement_characteristics_run} presents the high-speed movements of Messi: not only are his favorite routes in midfield revealed, but also the tendencies in how he approaches and enters the box of the opponent.
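Both filters can be expressed as simple predicates over the movement vectors before the characteristics are rebuilt. The sketch below assumes the representation introduced earlier, with speeds stored in meters per second.
\begin{verbatim}
def with_ball(movements):
    """Keep only movements during which the player had the ball."""
    return [m for m in movements if m.b]

def high_speed(movements, threshold_kmh=14.0):
    """Keep only movements of at least `threshold_kmh` (speed is in m/s)."""
    return [m for m in movements if m.s * 3.6 >= threshold_kmh]
\end{verbatim}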
\begin{figure}[tb]
\centering
\subfigure[All movements]{\adjincludegraphics[clip=true, trim=1cm 2.5cm 1cm 3cm,width=5.4cm]{figs/movements_player_19054_size_1675_top20.eps}\label{fig:movement_characteristics_all}}
\subfigure[Movements with ball]{\adjincludegraphics[clip=true, trim=1cm 2.5cm 1cm 3cm,width=5.4cm]{figs/movements_player_19054_size_1428_with_ball_top20.eps}\label{fig:movement_characteristics_ball}}
\subfigure[High speed movements ($\geq 14km/h$)]{\adjincludegraphics[clip=true, trim=1cm 2.5cm 1cm 3cm,width=5.4cm]{figs/movements_player_19054_size_467_with_run_top20.eps}\label{fig:movement_characteristics_run}}
\caption{The 50 most important movement directions of Messi throughout the season. The boldness of the arrows corresponds to how frequently the movement was used. Depending on the set of movements we consider, the methodology reveals diverse insights on the player's physical performance.}
\label{fig:movement_characteristics}
\end{figure}
\subsection{Uniqueness and consistency}
One of the main applications of the movement characteristics is finding similar players who may be able to replace a given player. We identify candidates to replace some players later in this paper; some similarities may be surprising at first sight. However, this is not the only insight we can gain from the profiles. The movement characteristics of the players enable us to quantify two additional decisive qualities of the players: uniqueness and consistency. For this, we use the cosine similarity to measure the distance between two players. Uniqueness measures how hard it is to find a player that has similar movements. We use the movement characteristics derived over the entire season, and for each player we determine the $M$ most similar ones ($M=5$ in our evaluations). This is done by identifying the players with the smallest distance from the particular player (\ie, players with minimal cosine distances). Let $d_{ij}=D(c_i,c_j)$ denote the cosine distance between players $i$ and $j$, where $c_i$ denotes the movement characteristic of player $i$. We compute the uniqueness of player $i$ as:
\begin{equation}
U_i = \sum_{j=1}^{M}d_{ij}
\end{equation}
The range of this metric is $(0,M)$; the higher this value is, the more unique the player. The uniqueness metric can be generalized to game-specific movements: in this case the distances are measured between the movement characteristics of individual games, not of the entire season.
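Given the characteristic vectors, the uniqueness score is straightforward to compute. The following sketch uses SciPy's cosine distance and assumes the characteristics are stored in a dictionary keyed by player id; it is an illustration of the definition above, not the exact implementation used in our experiments.
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import cosine

def uniqueness(characteristics, player, M=5):
    """Sum of the cosine distances to the M most similar players."""
    distances = sorted(cosine(characteristics[player], c)
                       for p, c in characteristics.items() if p != player)
    return float(np.sum(distances[:M]))
\end{verbatim}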
It is preferable if a player applies movements that are hard to reproduce. However, it is equally important to have consistent performance throughout the season. We evaluate this using the game-wise movement characteristics of the players. Let $t=1,\dots,N$ denote the games a player was involved in, and let $c_i^k$ denote the movement characteristic of player $i$ for game $k$. The consistency of player $i$ for game $k$ is defined as the average pairwise distance of its movement characteristics:
\begin{equation}
C_i^k = \frac{1}{N}\sum_{t=1}^{N}D(c_i^k,c_i^t)
\end{equation}
The range of consistency is $(0,1)$. If the consistency metric $C$ is small, the player applies similar movements across the whole season, \ie, the same kind of movements can be expected throughout the season.
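The consistency of a player can be computed analogously from the game-wise characteristics. The sketch below assumes they are given as a list of vectors, one per game, and mirrors the definition above (the distance of a game to itself is zero and is included in the average).
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import cosine

def consistency(game_characteristics, k):
    """Average cosine distance of game k to all games of the player."""
    c_k = game_characteristics[k]
    return float(np.mean([cosine(c_k, c_t) for c_t in game_characteristics]))
\end{verbatim}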
\section{Empirical Results and Insights}
We next apply the proposed methodology on the presented dataset covering the events of the 2012/13 season of the Spanish first division soccer league. We first focus on identifying similar players, afterwards we study the uniqueness and consistency of the players. Finally, we highlight an additional area in which the methodology is able to derive new insights: movements related to creating chances.
\subsection{Similarity}
We determine the similarity of the players based on their movement characteristics derived using all the movements the players had during the season. In Table~\ref{tab:most_similar_players} we focus on Messi, Cristiano Ronaldo, and Xavi by presenting their five most similar counterparts. The list of similar players may be considered a shortlist of candidates who are potentially able to replace the given player---at least based on their in-game movements. The table contains the distance of the players from each other. We also show the market value of the players in the table as a reference. The market values of the players are as of the end of the season (\ie, June 2013), based on the estimations of Transfermarkt~\cite{transfermarkt}. Some of the results are straightforward, like the case of Messi and Saviola, or Xavi and Thiago. However, it is interesting to see the list of Cristiano Ronaldo. Our methodology reveals that the most similar player to Cristiano Ronaldo was Ruben Castro (of Real Betis). We illustrate the similarity of the movements of Ronaldo and Castro in Figure~\ref{fig:player_similarity}. The figure shows the feature vectors the players used and to what extent. There is a remarkable similarity between the players despite the fact that Castro is not as highly rated as Ronaldo, and there is a huge discrepancy in their market values: 100M compared to 4.5M. This example highlights the most important benefit of our scheme: we are able to identify players who have the same kind of movements as their more famous counterparts, but for only a fraction of the price.
\begin{table}[tb]
\scriptsize
\centering
\begin{tabular}{clcc}
\toprule
\# similar & player & distance & market value (\euro) \\
\midrule
\midrule
& Lionel Messi - FC Barcelona & & 120M \\
\midrule
1 & Javier Saviola - Málaga & 0.155 & 3M \\
2 & Radamel Falcao - Atlético Madrid & 0.158 & 60M \\
3 & Diego Buonanotte - Granada CF & 0.174 & 2M \\
4 & Obafemi Martins - Levante & 0.180 & 3.5M \\
5 & Enrique De Lucas - Celta de Vigo & 0.193 & 0.5M \\
\midrule
\midrule
& Cristiano Ronaldo - Real Madrid & & 100M \\
\midrule
1 & Rubén Castro - Real Betis & 0.079 & 4.5M \\
2 & Antoine Griezmann - Real Sociedad & 0.089 & 15M \\
3 & Helder Postiga - Real Zaragoza & 0.125 & 5M \\
4 & Jorge Molina - Real Betis & 0.127 & 3.5M \\
5 & Jonathan Viera Ramos - Valencia & 0.132 & 3M \\
\midrule
\midrule
& Xavi Hernández - FC Barcelona & & 15M \\
\midrule
1 & Thiago Alcántara - FC Barcelona & 0.069 & 22M \\
2 & Sami Khedira - Real Madrid & 0.109 & 22M \\
3 & Luka Modric - Real Madrid & 0.113 & 35M \\
4 & Ignacio Insa - Celta de Vigo & 0.116 & 0.9M \\
5 & Daniel Parejo - Valencia & 0.122 & 10M \\
\bottomrule
\end{tabular}
\caption{The top five most similar players and their market values in case of Messi, Cristiano Ronaldo, and Xavi.}
\label{tab:most_similar_players}
\end{table}
\begin{figure}[tb]
\centering
\subfigure[Cristiano Ronaldo (Real Madrid)]{\adjincludegraphics[clip=true, trim=1cm 2.5cm 1cm 3cm,width=7.5cm]{figs/movements_player_14937_size_2835_top20.eps}}
\subfigure[Ruben Castro (Real Betis)]{\adjincludegraphics[clip=true, trim=1cm 2.5cm 1cm 3cm,width=7.5cm]{figs/movements_player_20680_size_1707_top20.eps}}
\caption{Top-50 movement features of Cristiano Ronaldo and his most similar counterpart, Ruben Castro. There is a remarkable similarity between the two players, while having two orders of magnitude difference in their market values.}
\label{fig:player_similarity}
\end{figure}
\subsection{Uniqueness}
We next focus on the uniqueness of the players, \ie, how hard it is to find a player who is able to execute the same in-game movements throughout a season. For uniqueness, we only consider players who had at least $500$ movements across the season, to avoid discrepancies due to players with limited participation in the season. We show the ten most unique players of the competition in Table~\ref{tab:uniqueness}. As a reference, we also include the number of movements the players had in the season. The majority of these players are defenders who played significant portions of the season on different sides of the field. For example, Adriano of FC Barcelona played as both left and right back in the season. Messi is considered to be a unique player; this is reflected in our results too, with his eighth place in the uniqueness list. We illustrate the most significant feature vectors of these players in the Appendix (Figure~\ref{fig:movement_characteristics_top10}).
\begin{table}[b]
\scriptsize
\centering
\begin{tabular}{lrr}
\toprule
Player & Uniqueness & \#movements \\
\midrule
Adriano Correia & 1.246 & 2067 \\
Martin Montoya & 1.021 & 1485 \\
Franco Vazquez & 0.978 & 659 \\
Daniel Larsson & 0.974 & 738 \\
Oier Sanjurjo Mate & 0.921 & 2209 \\
Juan Torres Ruiz & 0.892 & 743 \\
Sergio Ramos & 0.876 & 2957 \\
Lionel Messi & 0.860 & 3809 \\
Ruben Garcia Santos & 0.848 & 1148 \\
Enrique De Lucas & 0.842 & 611 \\
\bottomrule
\end{tabular}
\caption{The ten most unique players of the competition. The results reveal that it is hard to replace players who are able to play in multiple positions (\eg, different sides of the field).}
\label{tab:uniqueness}
\end{table}
\subsection{Consistency}
We take one step further and next focus on the consistency of the movements. The managers of the teams prefer players who are consistent. If the performance is consistent, the player will deliver the expected movements. On the other hand, it is hard to count on a player whose movements fluctuate heavily. Before analyzing the trends in the league, we first present an example in Figure~\ref{fig:xavi_consistency}, where we show the consistency of Xavi's movements across the season. The horizontal axis denotes the identifier of the game he was involved in, while the vertical axis represents the game-wise consistency. It is remarkable how consistent Xavi is across the majority of the season. The two outliers at the beginning and at the end of the season are games where he was partially on the bench. We show the movement vectors of games 10, 12, and 23 in Figure~\ref{fig:xavi_details}. The detailed figures highlight similar trajectories. Xavi had numerous lateral movements in the middle of the field, some high intensity movements towards the box of the opponent, and he was responsible for taking the corner kicks of his team. This was Xavi's role in the game against Real Madrid as well, as shown earlier in Figure~\ref{fig:movement_vectors_xavi}. In the third game (\ie, game 23), Xavi moved similarly; however, he was not on the pitch for the entire game, resulting in a slightly elevated consistency value.
\begin{figure}[tb]
\centering
\includegraphics[width=8cm]{figs/self_similarity_p_5816_per_game_2.eps}
\caption{The consistency of Xavi throughout the season. His movements were similar and consistent for the majority of the season, the two outliers are games where he was a substitute.}
\label{fig:xavi_consistency}
\end{figure}
\begin{figure}[tb]
\centering
\subfigure[Game 10]{\adjincludegraphics[clip=true, trim=1cm 2.5cm 1cm 3cm,width=5.4cm]{figs/pre_shot_movements_player_456422_per_game.eps}}
\subfigure[Game 12]{\adjincludegraphics[clip=true, trim=1cm 2.5cm 1cm 3cm,width=5.4cm]{figs/pre_shot_movements_player_456441_per_game.eps}}
\subfigure[Game 23]{\adjincludegraphics[clip=true, trim=1cm 2.5cm 1cm 3cm,width=5.4cm]{figs/pre_shot_movements_player_456571_per_game.eps}}
\caption{All the movement vectors of Xavi in three distinct games. The trends of the movements are similar regardless of the game.}
\label{fig:xavi_details}
\end{figure}
Finally, we analyze the players of the league in terms of their uniqueness and consistency (Figure~\ref{fig:uniqueness}). Defenders are consistent in general, but their movements are not particularly distinctive. There is no clear difference between midfielders and attackers. There is a clear relation between the two properties: higher uniqueness comes at the price of consistency. The dataset contains three outliers; interestingly, all of them are players of FC Barcelona. In terms of the defenders, Adriano has an extremely unique behavior, \ie, a high distance from the most similar players; however, his performance is not consistent (consistency of 0.75). Iniesta moves with high consistency (\ie, a value of 0.40) and his movements are fairly unique (0.78). Messi is the outlier among the attackers, with high uniqueness and high consistency (0.86 and 0.30, respectively). Messi's movement profile is in stark contrast with that of Cristiano Ronaldo, who seems to be just an average player in terms of the uniqueness and the consistency of his movements (0.55 and 0.51, respectively).
\begin{figure}[tb]
\centering
\includegraphics[width=12cm]{figs/player_consistency_selfsim_per_game_ann.eps}
\caption{The uniqueness and the consistency of the players in the league. The color of the dots denotes the position of the players. The outliers of the positions are Adriano, Iniesta, and Messi, respectively.}
\label{fig:uniqueness}
\end{figure}
\subsection{Creating chances}
We are able to extract all in-game movements of the players using our methodology. Hence, we may focus on the movements of the players related to specific events of the game. Potentially, this opens up a new line of research related to the analysis of movements creating scoring opportunities at scale. Here we only highlight the potential of this area by showing two examples.
Figure~\ref{fig:pre_shot_movements_player} reveals a player's movements before taking a shot. These are the areas from which this player---namely Messi---is able to create chances. There are three main routes to arrive in the final third of the pitch and shoot: running straight on the left side of the box, moving diagonally towards the edge of the box from the middle of the field, and starting from the right side taking a diagonal route towards the box. We note that a considerable portion of these movements are high intensity runs.
Most of the time, chances are not created by a single player but rather are the result of a series of carefully orchestrated movements of the whole team. We are able to capture this as well with our method, as shown in Figure~\ref{fig:pre_shot_movements_team}. The example presents the movements of a team up to 20 seconds before taking a shot. The different colors represent different players of the team. The figure shows intense movements of five players, all of them moving straight towards the opponent's box. In this case, the goal of the players was to press the defensive line of the opponent rather than to open up spaces.
These insights are just preliminary steps towards a complete, at-scale understanding of the relation between player movements and scoring opportunities. A thorough analysis of these phenomena is left for future work.
\begin{figure}[tb]
\centering
\subfigure[Messi's movements before taking a shot (colors denote the speed of the movements)]{\adjincludegraphics[clip=true, trim=1cm 2.5cm 1cm 3cm,width=7.5cm]{figs/pre_shot_movements_player_19054_per_gameAAA.eps}\label{fig:pre_shot_movements_player}}
\subfigure[Movement of a team precluding a shot (colors denote different players of the team)]{\adjincludegraphics[width=7.5cm,clip=true, trim=1cm 2.5cm 1cm 3cm]{figs/pre_shot_movements_team_103946277_per_game.eps}\label{fig:pre_shot_movements_team}}
\caption{Additional application of the methodology: insights on creating chances}
\label{fig:pre_shot_movements}
\end{figure}
\section{Conclusion}
We analyzed quantitatively the in-game movements of soccer players throughout an entire season. Our methodology reveals detailed insights on the in-game strategy of the teams and on the role and performance of the players within a team. We identify similarities among players and potential candidates for replacing a given player. The results provide valuable inputs for player and opponent scouting by revealing and quantifying the movements of the players to an extent never seen before: both in terms of the number of covered teams and players.
\bibliographystyle{plainnat}
\label{sec:Introduction}
Electronic sports (esports) are a relatively new and exciting multidisciplinary field of study. \cite{Reitman2019}
There are multiple groups of stakeholders involved in the business of esports. \cite{Scholz2019}
The application of analytics to sports aims to optimize training and competition performance. New training methods are derived from an ever-increasing pool of data and research aimed at generating actionable insights. \citep{Pustisek2019,Giblin2016,Baerg2017,Chen2021,Rajsp2020,Kos2018} Rule changes in sports come at varying time intervals and frequently with unpredictable effects on their dynamics. It is especially relevant to share esports data to assess changes in game design and their impact on professional players, as such changes can occur more rapidly due to the (yet) relatively unstructured nature of esports competition. \cite{ElNasr2013,Su2021}
Advancements in Artificial Intelligence (AI) and Machine Learning (ML) have shown that Reinforcement Learning (RL) agents are capable of outmatching human players in many different types of games. \cite{Vinyals2019,Jaderberg2019,Silver2018,Berner2019}
Psychological research on neuroplasticity has also shown the great potential of video games to induce structural brain adaptation as a result of experience. \cite{Kowalczyk2018} Further, previous studies have shown that playing video games can enhance cognitive functioning in a wide range of domains, including perceptual, attentional, and spatial ability. \cite{Green2003,Green2012} Data obtained from esports titles -- including those gathered from high-level tournament performance -- may provide a path to improving the quality and reproducibility of research in this field, especially in contrast to the more variable data that is collected in laboratories and in less competitive settings. A lower technical overhead and more data available for modeling could assist further research in these areas. \cite{Alfonso2017,Ghasemaghaei2019,Zuiderwijk2019}
The sparsity and methodological diversity of research on this topic remain roadblocks in the study of how video games can affect mental functioning. Some scholars have recommended further research on esports as a potential path forward. \cite{Reitman2019}
Despite the digital nature of esports -- which are their greatest asset with respect to data gathering --
there seems to be a lack of high-quality pre-processed data published for scientific and practical use. The goal of our work is to mitigate this issue by publishing datasets containing StarCraft II replays and pre-processed data from esports events, classified as "Premiere" and "Major" by Liquipedia in the timeframe
from 2016 until 2022. \cite{URLLiquipedia2010}
\section{Related Work}
\label{sec:RelatedWork}
While reviewing StarCraft II related sources, we found publicly available datasets from 2013, ``SkillCraft1'' \cite{BlairDataset2013}, and from 2017, ``MSC'' \cite{Huikai2017}. These datasets are related to video games and in that regard could be classified as ``gaming'' datasets. Therefore, it is not clear what percentage of the games included within such datasets contain actively competing esports players, and at what levels of skill. Using the SkillCraft1 dataset, the authors distinguished the levels of players based on the data. They proposed a new feature in the form of the Perception-Action Cycle (PAC), which was calculated from the game data. This research can be viewed as a first step toward developing new training methods and analytical depth in electronic sports. It provided vital information describing different levels of gameplay and optimization in competitive settings. \cite{Thompson2013}
Related publications focused on in-game player performance analyses and psychological, technical, mechanical or physiological indices. These studies were conducted with use of various video games such as: Overwatch \cite{Braun2017,Glass2020}, League of Legends \cite{Blom2019,Ani2019,Aung2018,Maymin2021,Lee2022}, Dota 2 \cite{Gourdeau2020,Hodge2017,Hodge2019,Cavadenti2016,Pedrassoli2020}, StarCraft \cite{SanchezRuiz2017,Stanescu2021}, StarCraft 2 \cite{Helmke2014,Lee2021,Chan2021,Cavadenti2015}, Heroes of the Storm \cite{Gourdeau2020}, Rocket League \cite{Mathonat2020}, and Counter-Strike: Global Offensive \cite{Khromov2019,Koposov2020,Smerdov2019,Xenopoulos2022,Aditya2021}, among others \cite{Galli2011}. In some cases a comparison between professional and recreational players was conducted.
Most studies did not provide data as a part of their publication. In other publications, the authors used
replays that were provided by the game publishers or were publicly available online, which are unsuitable
for immediate data modeling tasks without prior pre-processing. The researchers used raw files in MPQ (SC2Replay) format with their custom code when dealing with StarCraft II. \cite{URLBlizzardS2ClientProto,Xiangjun2020} Other studies solved technical problems that are apparent when working with esports data and different sensing technologies, including visualization, but with no publication of data. \cite{Bednarek2017,Feitosa2015,Afonso2019,Stepanov2019,Korotin2019}
Some researchers have attempted to measure tiredness in an undisclosed game via electroencephalography (EEG) \cite{Melentev2020}, and player burnout in esports using a multimodal dataset that consisted of EEG, electromyography (EMG), galvanic skin response (GSR), heart rate (HR), eye tracking, and other physiological measures \cite{Smerdov2021}.
\section{Material and Methods}
\label{sec:MaterialAndMethods}
\subsection{Dataset Sources and Properties}
\label{sec:DatasetSources}
The files used in the presented information extraction process were publicly available due to a StarCraft
II community effort. Tournament organizers for events classified as "Premiere" and "Major" made the replays available immediately after the tournament to share the information with the broader StarCraft II community for research, manual analysis, and in-game improvement. Sources include Liquipedia, Spawning Tool, Reddit, Twitter, and tournament organizer websites. All replay packs required to construct the dataset were searched and downloaded manually from the public domain. The critical properties of the presented dataset are as follows:
\begin{itemize}
\item To secure the availability of the raw replays for further research and extraction, the SC2ReSet: StarCraft II Esport Replaypack Set was created. \cite{BialeckiSC2ReSet2021}
\item The replays were processed under the licenses provided by the game publisher: End User License Agreement (EULA), and "\nameref{sec:AILicense}" which is available in the \autoref{sec:AILicense} supplementary material.
\item Our dataset was created by using open-source tools that were published with separate digital object identifiers (doi) minted for each of the repositories. These tools are indexed on Zenodo. \cite{Bialecki2021InfoExtractor,Bialecki2021MapExtractor,Bialecki2021Preparator}
\item We have made available a PyTorch \cite{PyTorch2019} and PyTorch Lightning \cite{PyTorch_Lightning_2019} API for accessing our dataset and performing various analyses. Our API is accessible in the form of a GitHub repository, which is available on Zenodo with a separate doi. All of the instructions for accessing the data and specific field documentation are published there. \cite{BialeckiDatasetAPI}
\item The presented dataset is currently the largest that is publicly available, and contains information from prestigious StarCraft II tournaments.
\item The dataset can be processed under CC BY NC 4.0 to comply with Blizzard EULA and the aforementioned \nameref{sec:AILicense}.
\end{itemize}
\subsection{Dataset Pre-Processing}
\label{sec:DatasetPreProcessing}
Dataset pre-processing required the use of a custom toolset. Initially, the Python programming language was used to process the directory structure which held additional tournament stage information. We include this information in the dataset in a separate file for each tournament, effectively mapping the initial directory structure onto the resulting unique hashed filenames. Moreover, a custom tool for downloading the maps was used; only the maps that were used within the replays were downloaded. \cite{Bialecki2021Preparator} Finally, to ensure proper translation to English map names in the final data structures, a custom C++ tool implementation was used. Information extraction was performed on map files that contained all necessary localization data. \cite{Bialecki2021MapExtractor}
\subsection{Data Processing}
\label{sec:DataProcessing}
Custom software was implemented in the Go programming language (Golang) and built upon authorized and public repositories endorsed by the game publisher. \cite{URLS2Prot2017} The tool was used to perform information extraction from files in MPQ format with the SC2Replay extension. Information extraction was performed for each pre-processed directory that corresponded to a single tournament. Depending on the use case, different processing approaches were possible by providing command line arguments. \cite{Bialecki2021InfoExtractor}
\subsection{Data Parsing and Integrity}
\label{sec:DataParsingAndIntegrity}
The parsing capabilities of the tooling were defined with a Golang high-level parser API available on GitHub. \cite{URLS2Prot2017} After initial data-structures were obtained, the next step checked the integrity
of the data. This was accomplished by comparing information available across different duplicate data structures that corresponded to: the number of players, map name, length of the player list, game version, and Blizzard map boolean (signifying whether a map was published by Blizzard). If a replay parser or custom integrity check failed, the replay was omitted.
\subsection{Data Filtering and Restructuring}
\label{sec:DataFilteringAndRestructuring}
Filtering for different game modes was omitted as the collected replay files were a part of esports tournament matches. Most often, StarCraft II tournament matches are played in the form of one-versus-one player combat. Therefore, it was assumed that filtering for the number of players was not required at this step. Custom data structures were created and populated at this stage. This allowed for more control over the processing, summary generation, and final output. Merging data structures containing duplicate information was performed where applicable.
\subsection{Summarization and JSON Output to zip archive}
\label{sec:Summarization}
Replay summarization was required in order to provide information that can be accessed without unpacking the dataset. Finally, the data was converted from Golang data structures into the JavaScript Object Notation (JSON) format and compressed into a zip archive.
\subsection{Dataset Loading}
Interacting with the dataset is possible via PyTorch \cite{PyTorch2019} and PyTorch Lightning \cite{PyTorch_Lightning_2019} abstractions. Our implementation exposes a few key features:
\begin{enumerate*}[label=(\arabic*)]
\item Automatic dataset downloading and extraction from Zenodo archive;
\item Custom validators that filter or verify the integrity of the dataset;
\item Ability of our abstractions to load and use any other dataset that was pre-processed using our toolset.
\end{enumerate*}
The required disk space to successfully download and extract our dataset is approximately 170 gigabytes.
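A typical loading workflow follows the standard PyTorch pattern. In the sketch below, the module, class, and argument names are placeholders for illustration only; the actual API and field documentation are described in the repository accompanying the dataset. \cite{BialeckiDatasetAPI}
\begin{verbatim}
from torch.utils.data import DataLoader
# Hypothetical import; the real module and class names are documented
# in the dataset API repository.
from sc2egset_api import SC2EGSetDataset  # placeholder name

dataset = SC2EGSetDataset(root="./data", download=True)  # illustrative arguments
loader = DataLoader(dataset, batch_size=32, shuffle=True)

for replay_batch in loader:
    ...  # e.g., feed per-player summary statistics to a model
\end{verbatim}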
\section{Dataset Description}
\label{sec:Results}
The collected dataset consisted of 55 tournaments. Within the available tournaments, 18309 matches were processed. The final processing yielded 17895 files. While inspecting the processed data, we observed three
major game versions. Each tournament in the dataset was saved with an accompanying JSON file that contains descriptive statistics such as:
\begin{enumerate*}[label=(\arabic*)]
\item Game version histogram,
\item Dates at which the observed matches were played,
\item Server information,
\item Picked race information,
\item Match length,
\item Detected spawned units,
\item Race picked versus game time histogram.
\end{enumerate*}
\autoref{fig:SC2TimeHistogram} depicts the distribution of match times that were observed.
\begin{figure}[H]
\includegraphics[width=\linewidth]{SC2EGSet_PlayerDistribution.pdf}
\caption{Distribution of player races and race matchup information.}
\label{fig:SC2PlayerDistribution}
\end{figure}
The oldest observed tournament was IEM 10 Taipei, which was played in 2016. The most recent observed tournament was IEM Katowice, which finished on 2022.02.27. The game contains different "races" that differ in the mechanics required for the gameplay. \autoref{fig:SC2RaceTimeHistogram} shows visible differences in the distribution of match time for players that picked different races.
\begin{figure}[H]
\includegraphics[width=\linewidth]{SC2EGSet_Races_MatchLength.pdf}
\caption{Match time distribution split by races: Terran (blue), Protoss (yellow), and Zerg (purple).}
\label{fig:SC2RaceTimeHistogram}
\end{figure}
\begin{wrapfigure}{R}{0pt}
\centering
\includegraphics[width=0.5\textwidth]{SC2EGSet_APM_Distribution.pdf}
\caption{Actions per minute (APM)\\by player race.}
\label{fig:SC2TimeHistogram}
\end{wrapfigure}
The published data resulting from our work is distributed under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license and is available in a widely recognized scientific repository - Zenodo.
\section{Tasks and Experiments}
\label{sec:Tasks_Experiments}
\subsection{Game Style Analysis}
Game style analysis can be treated as a task to be solved via supervised or self-supervised methods. Using algorithms such as Uniform Manifold Approximation and Projection (UMAP) \cite{McInnes2018} or t-Distributed Stochastic Neighbor Embedding (t-SNE) \cite{Laurens2008} on the data that we provide could uncover interesting insights, depending on the direction of the analysis. Such game style analysis could be investigated using sequence analysis methods or per-game statistics.
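As an illustration of such an analysis, per-game statistics of players could be embedded into two dimensions and inspected for clusters of play styles. The sketch below assumes a feature matrix has already been extracted from the dataset summaries; a random placeholder is used so the snippet runs standalone.
\begin{verbatim}
import numpy as np
import umap  # umap-learn package
from sklearn.preprocessing import StandardScaler

# Placeholder feature matrix: per-game statistics of players
# (in practice, extracted from the dataset summaries).
X = np.random.rand(1000, 20)
X_scaled = StandardScaler().fit_transform(X)
embedding = umap.UMAP(n_neighbors=15, min_dist=0.1).fit_transform(X_scaled)
# `embedding` can then be plotted and colored by race, player, or outcome.
\end{verbatim}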
\subsection{Combat Encounter Analysis}
Combat analysis as a task can be researched using AI, ML, and classic algorithms in various esports. \cite{Uriarte2018} There were some related works that analyzed unit encounters in StarCraft II. \cite{Lee2021}
Although our pre-processed dataset cannot be used to directly reproduce combat encounter analyses, we provide raw replays published as SC2ReSet. \cite{BialeckiSC2ReSet2021}
\subsection{Winner prediction and Player Performance Evaluation}
\label{sec:WinnerPredictionExperiment}
Within \autoref{sec:RelatedWork} we have referenced multiple articles that dealt with player performance evaluation. These works performed data mining tasks on game engine generated replays and other sources of player related information.
Experiments regarding winner prediction can uncover interesting information about the optimal strategy of
play. Prior analyses in this task with a small sample of esports players have shown the importance of some key indicators. The proposed dataset can help with the reproduction and facilitation of various claims, some of which are based on anecdotal evidence. \cite{Bialecki2021Determinants} The sample analysis below describes a basic attempt at predicting match outcome using only data related to player economy to demonstrate the potential for gleaning insights from replay data.
\paragraph{Data Preparation}
Matches were initially filtered to only include those exceeding or equaling a length of 9 minutes, which is approximately the 25th percentile of match length values. Next, a set of features was generated from the available economy-related indicators. Additional features were generated by combining mineral and vespene indicators into summed resource indicators. Match data were then aggregated by averaging across match time for each player, resulting in 22,230 samples of averaged match data (from 11,115 unique matches). Standard deviations were computed in addition to averaged values where applicable. Further features were then generated by computing ratios of resources killed/resources lost for army, economy and technology contexts,
along with a ratio of food made to food used. As a final step, prior to feature standardization, each feature was filtered for outliers (replacing with median) that exceeded an upper limit of 3 standard deviations from the feature mean.
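The preparation steps can be reproduced with standard pandas operations. The sketch below is simplified, uses illustrative column names, and assumes a long-format dataframe with one row per player and timestamp plus numeric economy indicator columns.
\begin{verbatim}
import pandas as pd

def prepare(df, min_length_s=540):
    """df: one row per (match_id, player_id, game_time_s) plus numeric
    economy indicator columns; column names are illustrative."""
    # keep matches of at least 9 minutes (~25th percentile of match length)
    long_enough = (df.groupby("match_id")["game_time_s"]
                     .transform("max") >= min_length_s)
    df = df[long_enough].copy()
    # example of a combined (summed) resource indicator
    df["resourcesKilledArmy"] = (df["mineralsKilledArmy"]
                                 + df["vespeneKilledArmy"])
    # average (and spread of) each indicator over match time per player
    agg = df.groupby(["match_id", "player_id"]).agg(["mean", "std"])
    agg.columns = ["_".join(col) for col in agg.columns]
    # replace outliers beyond 3 SD from the mean with the median
    for col in agg.columns:
        upper = agg[col].mean() + 3 * agg[col].std()
        agg.loc[agg[col] > upper, col] = agg[col].median()
    return agg.reset_index()
\end{verbatim}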
\paragraph{Feature Selection}
The feature count was reduced by first computing point biserial correlations between features and match outcome, selecting for features with a statistically significant (\(\alpha\) = .001) coefficient value exceeding that of \(\pm\) .050. Next, a matrix of correlations was computed for the remaining features and redundant features were removed. 17 features remained after this process, of which 8 were basic features (mineralsLostArmy, mineralsKilledArmy, mineralsLostEconomy, mineralsKilledEconomy, and the SD for each).
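The selection procedure can be sketched with SciPy and pandas as follows, where \texttt{features} is a dataframe of standardized features and \texttt{outcome} a binary array of match results. The redundancy threshold used below is an assumed value for illustration; the other thresholds follow the description above.
\begin{verbatim}
from scipy.stats import pointbiserialr

def select_features(features, outcome, r_min=0.05, alpha=0.001,
                    redundancy_r=0.9):
    # 1) keep features significantly correlated with the outcome
    kept = []
    for col in features.columns:
        r, p = pointbiserialr(outcome, features[col])
        if p < alpha and abs(r) > r_min:
            kept.append(col)
    # 2) drop redundant features that are highly inter-correlated
    corr = features[kept].corr().abs()
    selected = []
    for col in kept:
        if all(corr.loc[col, s] < redundancy_r for s in selected):
            selected.append(col)
    return selected
\end{verbatim}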
\paragraph{Modelling}
Three algorithms were chosen for comparative purposes: Logistic Regression (sklearn.linear\_model.LogisticRegression), Support Vector Machine (sklearn.svm.SVC) \cite{scikit-learn,sklearnAPI}, and Extreme Gradient Boosting (xgboost.XGBClassifier) \cite{Chen2016}. Each algorithm was initiated with settings aimed at binary classification and with typical starting hyperparameters. A 5-fold cross validation procedure was implemented across the models. Label counts were equalized to the minimal label count prior to generating the
data folds, resulting in 10,744 samples of ``Win'' and ``Loss'' labels each. Accordingly, model performance was measured using accuracy. Computation was performed on a standard desktop-class PC without additional resources.
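The cross-validation setup can be reproduced along the following lines, using the hyperparameters listed in \autoref{table:ClassificationPerformance}. This is a sketch only: class balancing and feature preparation are assumed to have been done beforehand, and random placeholders stand in for the prepared data.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

# X, y: balanced feature matrix and binary outcome labels prepared as above;
# random placeholders are used here so the sketch runs standalone.
X = np.random.rand(1000, 17)
y = np.random.randint(0, 2, size=1000)

models = {
    "SVM (RBF)": SVC(kernel="rbf", C=10, gamma="auto"),
    "XGBoost": XGBClassifier(booster="gbtree", eta=0.2, max_depth=5),
    "Logistic Regression": LogisticRegression(C=10, penalty="l2"),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.4f} +/- {scores.std():.4f}")
\end{verbatim}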
\paragraph{Results}
As the results indicate (see \autoref{table:ClassificationPerformance}), good outcome prediction can be achieved from economic indicators only, even without exhaustive optimization of each model's hyperparameters. This is perhaps slightly surprising, as while economy in a match is generally critical to the outcome, such data do not capture the nuances of skillful resource usage. The SVM and XGBoost models displayed similar performance, with the logistic classifier lagging slightly behind. Feature importances were taken from
an XGBoost model (with identical hyperparameters) that was applied to the entire dataset for illustrative
purposes. \autoref{fig:SC2FeatureImportance} below depicts the top five features by importance. It is interesting to note that importance was more heavily centered around mineral-related features than those tied
to vespene, which is likely tied to how mineral and vespene needs are distributed across unit/building/technology costs. Further feature investigation is required to verify this tendency.
\begin{table}[H]
\centering
\caption{Classification models and their performance metrics.}
\label{table:ClassificationPerformance}
\begin{tabular}{llll}
\hline
Classifier & \multicolumn{1}{c}{Accuracy} & \multicolumn{1}{c}{SD} & \multicolumn{1}{c}{Hyperparameters} \\ \hline
Support Vector Machine - RBF & 0.8488 & 0.0075 & kernel='rbf', C=10, gamma='auto' \\
XGBoost & 0.8397 & 0.0064 & Booster='gbtree', eta=0.2, max\_depth=5 \\
Logistic Regression & 0.8118 & 0.0057 & C=10, penalty='l2' \\ \hline
\end{tabular}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\linewidth]{SC2EGSet_FeatureImportance.png}
\caption{Percentages of feature importances based on XGBoost fit to all data.}
\label{fig:SC2FeatureImportance}
\end{figure}
These models were also used to illustrate outcome prediction over match time, as can be seen in \autoref{fig:SC2ClassifierComparison}. It should be noted that these time series results are not based on any form of data aggregation, and as such only basic economic features could be used for classification (18 features in total).
\begin{figure}[H]
\centering
\includegraphics[width=0.75\linewidth]{SC2EGSet_Classifier_Comparison.pdf}
\caption{Accuracy comparison of applied classification models.}
\label{fig:SC2ClassifierComparison}
\end{figure}
Each timepoint contains the average accuracy for 5-fold cross validation, with a minimum match length of 9 minutes and a maximum match length of approx. 15 minutes. All three algorithms provided similar performance over time, although this may be an effect of the minimal hyperparameter optimization that was performed. Further, it is interesting to note that all three algorithms reach a classification performance asymptote at approx. the same match time (\textasciitilde550 seconds), which may indicate that this is where economic indicators begin to lose their predictive power and (presumably) other factors such as army size, composition, and their application become the primary determinants. The code for our experiments is available at \cite{PawelExperiments}.
\section{Limitations}
\label{sec:Limitations}
We acknowledge that our work is not without limitations. The design and implementation of our dataset do not consider the ability to obtain StarCraft II data through game-engine simulation at a much higher resolution. Because of this, the extracted dataset cannot reflect exact unit positioning.
Replays in their original MPQ (SC2Replay) format contain all necessary information to recreate a game using game-engine API. Therefore, we plan to continue our research and provide more datasets that will expand
the scientific possibilities within gaming and esports. Further, it should be noted that the experiments described here are more illustrative than investigative in nature, and could be greatly expanded upon in future work.
We recommend further research to use SC2ReSet \cite{BialeckiSC2ReSet2021} to compute game-engine simulated information.
We also do not provide simulation observation data that allows more detailed spatiotemporal information to be extracted at a higher computational cost. Moreover, it is worth noting that the dataset completeness was dependent on which tournament organizers and tournament administrators decided to publish replay packs.
\section{Discussion}
\label{sec:Discussion}
Future authors may want to filter out replays that ended prematurely due to unknown reasons. Our dataset may contain replays that are undesirable for esports research. We have decided against the deletion of replays to preserve the initial distributions of data. Additionally, as filtering was omitted (besides that performed for the purposes of the described experiments), there is a risk that the dataset contains matches
that were a part of the tournament itself but did not count towards the tournament standings. Due to the timeframe of the tournaments and game version changes, despite our best efforts, some information might be
missing or corrupted and is subject to further processing and research.
Our dataset is the largest publicly available pre-processed esports dataset. Moreover, in preparing the data, we defined and published the software used for the data extraction process and other tasks. Future research on StarCraft II may be built upon these tools and our dataset. \cite{Bialecki2021Preparator,Bialecki2021MapExtractor,Bialecki2021InfoExtractor}
The dataset may also serve to increase knowledge regarding the in-game behavior of players, i.e. the relationship between the variables and overall strategies used by the players at high levels of advancement. Such information can be used in comparisons to non-gamers or intermediate players in the process of studying the relationship between game proficiency, cognitive functioning, and brain structure. \cite{Jakubowska2021}
Moreover, a report done in the area of clinical medicine highlighted the lack of compliance of many authors with their data availability statement (DAS). It is clear that publishing the data and tools required for modeling is a key component of ensuring reproducible scientific work. \cite{Gabelica2022}
Other noteworthy applications of the dataset include comparing gameplay styles, action sequence classification, and their relation to victory. To that end, we encourage using different statistical methods and Machine Learning (ML) algorithms, including supervised and self-supervised approaches.
\section*{Acknowledgements}
We would like to acknowledge various contributions by the members of the technical and research community, with special thanks to: Timo Ewalds (DeepMind, Google), Anthony Martin (Sc2ReplayStats), and András Belicza for assisting with our technical questions.
Moreover, we extend our thanks to the StarCraft II esports community for sharing their experiences, playing together, and discussing key aspects of the gameplay in various esports. We extend our thanks especially to: Mikołaj ``Elazer'' Ogonowski, Konrad ``Angry'' Pieszak, Mateusz ``Gerald'' Budziak, Igor ``Indy'' Kaczmarek, Adam ``Ziomek'' Skorynko, Jakub ``Trifax'' Kosior, Michał ``PAPI'' Królikowski, and Damian ``Damrud'' Rudnicki.
\section*{Declarations}
\subsection*{Authors' contributions}
\begin{itemize}
\item Conceptualization: Andrzej Białecki;
\item Supervision: Andrzej Białecki, Jan Gajewski;
\item Methodology: Andrzej Białecki, Natalia Jakubowska, Paweł Dobrowolski, Piotr Białecki, Leszek Krupiński;
\item Formal Analysis: Andrzej Białecki, Natalia Jakubowska, Paweł Dobrowolski;
\item Investigation: Andrzej Białecki, Natalia Jakubowska, Paweł Dobrowolski, Piotr Białecki, Robert Białecki;
\item Writing - original draft: Andrzej Białecki;
\item Writing - review and editing: Andrzej Białecki, Paweł Dobrowolski, Robert Białecki,\\ Jan Gajewski;
\item Data curation: Andrzej Białecki, Andrzej Szczap;
\item Technical Oversight: Piotr Białecki;
\item Software: Andrzej Białecki, Leszek Krupiński;
\item Technical Documentation: Andrzej Szczap
\end{itemize}
\subsection*{Funding}
This publication was self-funded.
\subsection*{Conflicts of interest/Competing interests}
Authors declare no conflict of interest.
\subsection*{Availability of data and material}
Extracted data is published as a dataset in a scientific repository. \cite{BialeckiEGSetDataset,BialeckiSC2ReSet2021}
\subsection*{Code Availability}
The code used for data extraction is available as open source implementations published by the authors. \cite{Bialecki2021Preparator,Bialecki2021MapExtractor,Bialecki2021InfoExtractor}
\section{Introduction}
{\renewcommand{\arraystretch}{1.1}
\begin{table}[h!]
\centering
\begin{small}
\begin{tabular}{|p{7.5cm}|}
\hline\textbf{DOCUMENT}: While Richie Benaud rose from the suburbs to captain Australia, he will be remembered forever for his mastery of commentating. \textcolor{orange}{The champion leg spinner turned cricket commentating into an art form, earning him the title of 'the Voice of Cricket.'} \textcolor{blue}{His \textbf{commentary} was understated, measured and often extremely funny, and were perfectly timed}. Scroll down for video. \textcolor{red}{84-year-old cricket commentator Richie Benaud has passed away after a battle with skin cancer} . His sayings from the hundreds of Test and One Day cricket matches he commentated on across the world were often what fans remembered from important moments. \textcolor{purple}{His signature one liners soon dropped to a simple word. 'Marvellous...' will forever be linked to the cricket legend}. On commentating, Benaud said: 'My mantra is - put your brain into gear and if you can add to what's on the screen then do it, otherwise shut up.' He once described the scene on the field: 'From our broadcast box you can't see any grass at all, it is simply a carpet of humanity.' On captaincy, and he was one of the best Test captains Australia ever had, Benaud was modest: 'The hallmark of a great captain is the ability to win the toss, at the right time.' The former leg spinner turned cricket commentating into an art form, giving him the title 'the Voice of Cricket'. But he cautioned that description with: 'Captaincy is 90 per cent luck and 10 per cent skill. But don't try it without that 10 per cent.' [...] \\ \hline
\textbf{GOLD SUMMARY}: \textcolor{red}{Cricket commentator Richie Benaud has passed away after cancer battle} . \textcolor{blue}{The 84-year-old will be remembered for his mastery of commentating} . \textcolor{orange}{The former leg spinner earned himself the title of the 'Voice of Cricket'}. \textcolor{purple}{His trademark line was 'Marvellous'.} \\ \hline
\textbf{PEGASUS}: \textcolor{orange}{The champion leg spinner turned cricket commentating into an art form, earning him the title of 'the Voice of Cricket'.} \textcolor{blue}{His commentary was understated, measured and often extremely funny, and were perfectly timed.} \\ \hline
\textbf{Our model}: \textcolor{red}{84-year-old cricket commentator Richie Benaud has passed away after a battle with skin cancer}. \textcolor{orange}{The champion leg spinner earned the title of 'the Voice of Cricket'}. \textcolor{blue}{His commentary was understated, measured and often extremely funny}. \textcolor{purple}{His trademark word, 'Marvellous...' will forever be linked to the cricket legend.} \\ \hline
\end{tabular}
\end{small}
\caption{An example of summarization outputs.}
\label{tab:example}
\end{table}}
Automatic text summarization involves both text understanding and text generation. In general, there are two main approaches to performing this task. Extractive systems \citep{liu2019fine, narayan2020stepwise, zhang2019hibert, jia2020neural} highlight salient words or sentences from the source text and form the final summary by concatenating them. On the other hand, abstractive methods \citep{see2017get, zhang2020pegasus, zou2020pre} switch among generating new words, choosing phrases from the source document, and rephrasing them. Abstractive summarization, which is the focus of this paper, is usually more advanced and closer to human-like interpretation.
\begin{figure*}[h]
\includegraphics[width=0.95\textwidth]{figures/self_attention.pdf}
\vspace{-2mm}
\caption{Self-attention weights of ``\emph{commentary}'' in the PEGASUS model.}
\label{fig:self_attention}
\end{figure*}
Recently, abstractive summarization studies \citep{lewis2019bart, zhang2020pegasus, chen2020multi} have been dominated by Transformer-based architectures \citep{vaswani2017attention}. Despite good performance on large-scale datasets, Transformer-based summarization models have been shown to favor encoding short-range dependencies \citep{zhang2020pegasus}, i.e., whenever a word from the input is generated in the summary, the model tends to continue generating the nearby words due to their high attention scores with respect to the previously generated word. As such, if the main content of the document is out of reach of the generated word, the final summary can miss that key information. For example, in Table \ref{tab:example}, PEGASUS, a state-of-the-art Transformer-based model, failed to capture one key piece of information in the document, i.e., ``\textit{84-year-old cricket commentator Richie Benaud has passed away after a battle with skin cancer}''. To understand this phenomenon, we visualize the attention scores in the model during the generation process. As shown in Figure \ref{fig:self_attention}, when the model generates ``\textit{commentary}'', the main subject of the blue sentence, it tends to point to and generate nearby words such as ``\emph{his}'', ``\emph{understated}'', ``\emph{funny}'', etc. due to their high attention scores, while words further away such as ``\emph{Richie}'', ``\emph{Benaud}'', ``\emph{pass}'', and ``\emph{away}'' receive little weight. Consequently, although PEGASUS generates a grammatically correct summary, the summary lacks the key content describing the death of ``\emph{Richie Benaud}''.
To avoid missing key points in summarization, one solution is to furnish the models with global semantics obtained from probabilistic topic models such as LDA \citep{narayan2018don} or Poisson factor analysis \citep{wang2020friendly}, or from inner hidden states \citep{liu2019topic}. Nevertheless, traditional topic models have been shown to scale poorly to large datasets \citep{hoffman2013stochastic, rezende2015variational} and to have limited capability of describing documents \citep{ding2018coherence}.
To overcome the above problems, we propose a novel method that integrates a neural topic model into the summarization architecture. Specifically, we utilize the posterior distribution learned by the neural topic model as an approximation of the global semantics of the document, providing a signal that helps the summarization model better understand the overall document. However, one critical question arises: how can we bring the neural topic model's approximate posterior closer to the true posterior, since a better approximation has been shown to improve variational inference \citep{rezende2015variational}? To this end, we propose a method that adapts normalizing flows to the neural topic model to obtain a better approximation of the true posterior and integrate it into the summarization model. Flow mechanisms that better approximate the true posterior have been shown to improve variational inference \cite{rezende2015variational} as well as downstream tasks such as image synthesis \cite{kingma2016improved}. However, to the best of our knowledge, no prior study investigates the benefit of the flow mechanism for abstractive summarization.
On the other hand, even though rich global semantics is beneficial, recent studies show that a redundant amount of global semantics may harm the hidden representations by introducing detrimental noise into the model \citep{tenney2019you, li2020incorporating}. Therefore, we propose a novel contextualized gating mechanism to control the flow of global semantics and maintain the important information in the hidden states of the main summarization model.
The contributions of our paper can be summarized as follows:
\begin{itemize}[leftmargin=*]
\setlength \itemsep{-0.2em}
\item We propose a novel architecture which takes the global semantics into consideration when performing abstractive summarization.
\item To this end, we introduce a neural topic model empowered with a normalizing flow to enrich the global semantics, together with a contextualized gating mechanism to better control the effect of global semantics on the hidden representations.
\item We conduct extensive experiments and outperform other state-of-the-art summarization models on five benchmark datasets, i.e., CNN/DailyMail, XSum, Reddit TIFU, PubMed, and arXiv, while generating summaries that are favored by human judges and producing human-interpretable topics.
\end{itemize}
\section{Related Work}
\subsection{Transformer-based Text Summarization}
Transformer \cite{vaswani2017attention} and its variants have demonstrated high efficiency in text summarization. \cite{liu2019text} first use pretrained BERT encoders to perform extractive summarization. \cite{zhong2020extractive} propose using a Siamese BERT to score summaries extracted from the source document, exploring the rich semantic space onto which those summaries are projected. \cite{narayan2020stepwise} combine HiBERT and structured transformers to extract the document incrementally to form the final summary.
For abstractive approaches, \cite{zhang2020pegasus} develop a pretraining scheme well-suited for abstractive summarization. Other frameworks uniting language understanding and text generation, such as BART \cite{lewis2019bart}, UniLM \cite{dong2019unified}, T5 \cite{raffel2019exploring}, MASS \cite{song2019mass}, and the model of \cite{tuan2020capturing}, provide further standards for future work. Unified systems such as BottomUp \cite{gehrmann2018bottom} first extract salient phrases and then generate the summary based upon the extracted content. \cite{subramanian2019extractive} further improve this approach by using a Transformer language model as the decoder.
\subsection{Topic-aware Summarization Models}
Various works integrate the global semantics of topic models into sequential information. One method is to attend topic vectors over the hidden states, choosing only entries with strong document-level representations \cite{zheng2020topic}. \cite{wang2020friendly} design three modules that incorporate topic information into the attention heads, provide topical embeddings, and form document-related representations. Other works integrate topical information into convolutional models \cite{narayan2018don, wang2018reinforced}. \cite{ailem2019topic} condition their pointer-generator on both the input document and the latent vector. \cite{fu2020document} study how to effectively assist deep-learning summarization frameworks with external global information. Arguing that each paragraph in the document possesses a separate subtopic, they propose to merge topic information hierarchically with the dense word embeddings.
Unfortunately, limited effort has been devoted to controlling the effect of global semantics on the contextualized representations and to enriching the global semantics for better summarization performance.
\begin{figure*}[h]
\centering
\includegraphics[width=0.9\textwidth]{figures/model.pdf}
\caption{Our overall architecture}
\label{fig:model}
\end{figure*}
\section{Methodology}
The overall architecture of our approach is given in Figure \ref{fig:model}. It comprises a topic-oriented encoder, a topic-oriented decoder, and a flow-based neural topic model.
Formally, given a document as input, we process it into a sequence of tokens $X = \{x_i\}$ and a bag-of-words (BoW) representation $\textbf{x}_{bow}$. $X$ is taken as the input for the text summarization module, while $\textbf{x}_{bow}$ serves as the input for the neural topic model.
\subsection{Flow-based Neural Topic Model}
The architecture of the neural topic model (NTM) takes inspiration from \cite{miao2017discovering} and is based on the variational autoencoder \cite{kingma2013auto}. In this work, we adapt the normalizing flow to the neural topic model to better capture the global semantic patterns of the document.
\noindent \textbf{BoW Encoder.} The input $\textbf{x}_{bow}$ is first encoded into a latent variable $\mathbf{z}$ by a topic encoder. Each input is passed through the encoder to obtain the variational mean $\mu$ and standard deviation $\sigma$
\vspace{-2mm}
\begin{equation}
\pi = f_{MLP}(\textbf{x}_{bow}), \quad \mu = l_1(\pi), \quad \log \sigma = l_2(\pi)
\end{equation}
where $f_{MLP}$ is a non-linear transformation with a $\tanh$ activation function, and $l_1$ and $l_2$ are two linear transformations with bias (distinct from the flow transformations $f_1, \ldots, f_K$ defined below). To obtain the topic distribution, we draw the latent variable $\textbf{z} \sim \mathcal{N}(\mu, \sigma^2)$.
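For concreteness, a minimal PyTorch-style sketch of this BoW encoder is given below; the layer sizes and variable names are illustrative assumptions rather than our exact configuration.
\begin{verbatim}
import torch
import torch.nn as nn

class BoWEncoder(nn.Module):
    """Maps a bag-of-words vector to the mean and log-std of the variational posterior."""
    def __init__(self, vocab_size, hidden_dim, latent_dim):
        super().__init__()
        self.f_mlp = nn.Sequential(nn.Linear(vocab_size, hidden_dim), nn.Tanh())
        self.f_mu = nn.Linear(hidden_dim, latent_dim)        # linear map with bias
        self.f_logsigma = nn.Linear(hidden_dim, latent_dim)  # linear map with bias

    def forward(self, x_bow):
        pi = self.f_mlp(x_bow)
        mu, log_sigma = self.f_mu(pi), self.f_logsigma(pi)
        # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I).
        z = mu + torch.exp(log_sigma) * torch.randn_like(mu)
        return z, mu, log_sigma
\end{verbatim}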
\noindent \textbf{Flow.} Different from conventional neural topic models, a flow is applied to map the latent vector to a more complex distribution. Formally, a flow is a chain of transformations $f_1, f_2, \ldots, f_K$ which are all invertible and whose Jacobian determinants are easy to compute.
\vspace{-4mm}
\begin{equation}
\mathbf{z}_K = f_K \circ \cdots \circ f_2 \circ f_1 (\mathbf{z})
\end{equation}
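A minimal sketch of the flow is shown below, assuming planar transformations \cite{rezende2015variational} as the flow family; this choice, as well as the module names, is an illustrative assumption, and any invertible transformation with a tractable Jacobian could be substituted. Each layer returns the transformed variable together with the log-determinant of its Jacobian, which is needed by the training objective defined later.
\begin{verbatim}
import torch
import torch.nn as nn

class PlanarFlow(nn.Module):
    """One planar transformation f(z) = z + u * tanh(w^T z + b)."""
    def __init__(self, latent_dim):
        super().__init__()
        self.u = nn.Parameter(torch.randn(latent_dim) * 0.01)
        self.w = nn.Parameter(torch.randn(latent_dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(1))
        # Note: the constraint ensuring invertibility (u^T w >= -1) is omitted for brevity.

    def forward(self, z):
        wz_b = z @ self.w + self.b                               # (batch,)
        f_z = z + self.u * torch.tanh(wz_b).unsqueeze(-1)        # transformed latent
        psi = (1 - torch.tanh(wz_b) ** 2).unsqueeze(-1) * self.w
        log_det = torch.log(torch.abs(1 + psi @ self.u) + 1e-8)  # log |det Jacobian|
        return f_z, log_det

class FlowChain(nn.Module):
    """Applies K flows z_0 -> z_K and accumulates the log-determinants."""
    def __init__(self, latent_dim, K):
        super().__init__()
        self.flows = nn.ModuleList([PlanarFlow(latent_dim) for _ in range(K)])

    def forward(self, z):
        log_det_sum = torch.zeros(z.shape[0], device=z.device)
        for flow in self.flows:
            z, log_det = flow(z)
            log_det_sum = log_det_sum + log_det
        return z, log_det_sum
\end{verbatim}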
\noindent \textbf{BoW Decoder.} Given the new topic vector, the BoW decoder reconstructs the original input $\textbf{x}_{bow}$ by generating $\textbf{x}'_{bow}$. We take the following procedure to simulate the reconstruction of $\textbf{x}_{bow}$
\begin{itemize}
\item Topic mixture $\theta = \text{softmax}(f_{\theta}(\textbf{z}_K))$
\item For each word $w \in \textbf{x}_{bow}$, draw $w \sim \text{softmax}(f_{\phi}(\theta))$
\end{itemize}
where $f_*(\cdot)$ is a ReLU-activated non-linear transformation. The weight matrix of $f_{\phi}$ is chosen as the topic-word distribution $(\phi_1, \phi_2, \ldots, \phi_T)$, where $T$ is the number of topics. We proceed to employ the topic mixture $\theta$ to guide the text summarization process.
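A matching sketch of the BoW decoder is given below; treating the weight of the final linear layer as the topic-word matrix follows the description above, while the exact parameterization is an assumption on our part.
\begin{verbatim}
import torch.nn as nn
import torch.nn.functional as F

class BoWDecoder(nn.Module):
    """Reconstructs the bag-of-words input from the transformed latent variable z_K."""
    def __init__(self, latent_dim, num_topics, vocab_size):
        super().__init__()
        self.f_theta = nn.Sequential(nn.Linear(latent_dim, num_topics), nn.ReLU())
        # The weight of f_phi plays the role of the topic-word matrix (phi_1, ..., phi_T).
        self.f_phi = nn.Linear(num_topics, vocab_size, bias=False)

    def forward(self, z_K):
        theta = F.softmax(self.f_theta(z_K), dim=-1)             # topic mixture
        log_word_probs = F.log_softmax(self.f_phi(theta), dim=-1)
        return theta, log_word_probs
\end{verbatim}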
\subsection{Neural Topic Modeling for Transformer}
The text summarization model is given a source document $X = \{x_i\}_{i=1}^{N}$, and its task is to predict the target summary $Y = \{y_j\}_{j=1}^{M}$. In this setting, the document $X$ has $N$ tokens and the summary $Y$ has $M$ tokens ($M < N$).
Our model inherits the Transformer-based architecture. In particular, it consists of an encoder and a decoder. The encoder learns the context of the source text, and the decoder then predicts the target summary by learning the context of the generated tokens and attending over the encoder hidden states. In our case, we make both the encoder and the decoder conditioned on the latent topic yielded by the neural topic model.
\noindent \textbf{Topic-oriented Encoder} We add the special token "\emph{CLS}" to the beginning of the input. At each iteration, the encoder outputs a localized representation $H = \{\textbf{h}_i\}_{i=1}^{N}$ for each token in the source document $X$
\vspace{-2mm}
\begin{equation}
\textbf{h}_{CLS}, \textbf{h}_1, ..., \textbf{h}_N = \text{Encoder}(x_{CLS}, x_1, ..., x_N)
\end{equation}
The encoder thus captures the relationships among the input tokens, i.e., the context in which each token appears. We relate the context of each word to the main topic of the document by modulating the $i$-th hidden state $\textbf{h}_i$
\vspace{-2mm}
\begin{equation}
\textbf{h}'_i = g(\textbf{h}_i, \theta)
\end{equation}
where $g$ is a function used to introduce the global semantics into the hidden representations, which we discuss later as the contextualized gating mechanism in Section \ref{sec:contextualized_gating}.
\noindent \textbf{Topic-oriented Decoder} We also make "\emph{CLS}" the first input of the decoder. The decoder bridges the summary $Y$ and document $X$, creating target hidden states $S = \{\textbf{s}_j\}_{j=1}^{M}$ aligned with the source text. Because of the uni-directionality of the text summarization task, the decoder must work in a left-to-right fashion
\begin{equation}
\textbf{s}_j = \text{Decoder}(y_{CLS}, y_1,y_2, ..., y_{j-1}, \{\textbf{h}'_i\}_{i=1}^{N})
\end{equation}
Similar to the Encoder, we seek to inject the semantics of the topic model into the output hidden state.
\begin{equation}
\textbf{s}'_{j} = g(\{\textbf{h}'_i\}_{i=1}^{N}, \textbf{s}_{j}, \theta)
\end{equation}
\subsection{Contextualized Gating Mechanism}
\label{sec:contextualized_gating}
Because a certain amount of semantic meaning, whether local or global, is already embedded in the contextualized representations, it is reasonable to append only as much topical information to the computed hidden states as is needed to maximize its benefit. We adapt the gating mechanism of \cite{cho2014properties} to achieve this goal. In our contextualized gating mechanism, we approximate the necessary amount of global semantics based on the obtained hidden states.
\noindent \textbf{Encoder Gating} For the encoder, we take the hidden representation of "\emph{CLS}" token to control the amount of additional global information
\begin{equation}
\lambda^E = \text{Sigmoid}(W^E \textbf{h}_{CLS} + b^E)
\end{equation}
where $W^E \in \mathbb{R}^{d \times d}$, and $d$ is the dimension of the hidden representation. We form the topic-aware hidden state by merging it with the topic mixture and mapping it onto a topical space
\begin{gather}
\textbf{u}_i = [\textbf{h}_i, \theta] \\
\textbf{c}_i = f_{enc\_topic}(\textbf{u}_i)
\end{gather}
where $f_{enc\_topic}$ is a non-linear transformation. The topic-oriented encoder hidden state of every token is the fusion of the topic-aware and the original representation.
\begin{equation}
\textbf{h}'_i = \lambda^E \textbf{c}_i + (1-\lambda^E) \textbf{h}_i
\end{equation}
\noindent \textbf{Decoder Gating} The amount of topic mixture used for the decoder is controlled by both encoder and decoder hidden state
\begin{equation}
\lambda^D = \text{Sigmoid}(W_1^D \textbf{h}_{CLS} + W_2^D \textbf{s}_{CLS} + b^D) \\
\end{equation}
where $W_1^D \in \mathbb{R}^{d \times d}$ and $W_2^D \in \mathbb{R}^{d \times d}$. This switching probability is used to modulate the decoder hidden state, which follows the same computation as the encoder gating.
\begin{gather}
\textbf{t}_j = [\textbf{s}_j, \theta] \\
\textbf{e}_j = f_{dec\_topic}(\textbf{t}_j) \\
\textbf{s}'_j = \lambda^D \textbf{e}_j + (1-\lambda^D) \textbf{s}_j
\end{gather}
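A sketch of the encoder-side gating is given below; the decoder-side gating is analogous, with the gate computed from both CLS representations. Dimensions and module names are illustrative assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class EncoderGating(nn.Module):
    """Fuses each encoder hidden state with the topic mixture, gated by the CLS state."""
    def __init__(self, hidden_dim, num_topics):
        super().__init__()
        self.gate = nn.Linear(hidden_dim, hidden_dim)                  # W^E and b^E
        self.f_enc_topic = nn.Sequential(
            nn.Linear(hidden_dim + num_topics, hidden_dim), nn.ReLU()) # topical projection

    def forward(self, h, h_cls, theta):
        # h: (batch, seq_len, dim); h_cls: (batch, dim); theta: (batch, num_topics)
        lam = torch.sigmoid(self.gate(h_cls)).unsqueeze(1)             # gate lambda^E
        u = torch.cat([h, theta.unsqueeze(1).expand(-1, h.size(1), -1)], dim=-1)
        c = self.f_enc_topic(u)                                        # topic-aware states
        return lam * c + (1 - lam) * h                                 # gated fusion
\end{verbatim}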
\subsection{Training Objective}
Our framework supports end-to-end learning of neural topic modeling and text summarization. In this section, we formally define the objective functions for the two modules.
For our neural topic model, the objective function is derived from the evidence lower bound \cite{blei2017variational}. We apply the change-of-variables formula of the normalizing flow, which determines the distribution of the latent variable at the end of the flow, to the loss of the neural topic model
\vspace{-3mm}
\begin{equation}
\begin{split}
&\mathcal{L}_{\text{NTM}} \\
&= \log p(\mathbf{x,z}) - \log q(\mathbf{z}|\mathbf{x}) \\
&= -\log q(\mathbf{z}_0) + \sum_{i=1}^{K} \log \left| \det \frac{\partial f_i}{\partial \mathbf{z}_{i-1}} \right| \\
& +\log p(\mathbf{x}|\mathbf{z}_K) + \log p(\mathbf{z}_K)
\end{split}
\end{equation}
where $p(\mathbf{z}_K)$ denotes the prior density evaluated at the transformed latent variable $\mathbf{z}_K$; $p(\mathbf{x}|\mathbf{z}_K)$ stands for the likelihood of the document; and $q(\mathbf{z}_0)$ denotes the initial approximate posterior before the flow. A detailed derivation is available in the Appendix.
For text summarization, we minimize the cross-entropy loss
\begin{equation}
\mathcal{L}_{\text{sum}} = -\sum_{j=1}^{M} \log p(y_j|\{x_i\}_{i=1}^{N}, y_{<j})
\end{equation}
where $N$ and $M$ are the length of the document $X$ and summary $Y$, respectively.
The entire framework is trained with the linear combination of two loss functions $\mathcal{L}_{\text{sum}}$ and $\mathcal{L}_{\text{NTM}}$
\begin{equation}
\mathcal{L} = \mathcal{L}_{\text{sum}} + \lambda \mathcal{L}_{\text{NTM}}
\label{eq:loss}
\end{equation}
where $\lambda$ is the hyperparameter balancing the effect of neural topic model on the training process.
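As a concrete illustration, the combined objective can be assembled as sketched below. The arguments of the topic-model loss are placeholders for the terms appearing in the ELBO above, and the sign convention (negating the ELBO so that both terms are minimized) is our assumption.
\begin{verbatim}
import torch.nn.functional as F

def ntm_loss(log_q0, sum_log_det, log_likelihood, log_prior):
    """Single-sample estimate of the flow-based ELBO, returned as a quantity to minimize."""
    elbo = -log_q0 + sum_log_det + log_likelihood + log_prior
    return -elbo.mean()

def total_loss(summary_logits, summary_targets, ntm_terms, lam=0.75):
    """Combined objective L = L_sum + lambda * L_NTM."""
    l_sum = F.cross_entropy(summary_logits.view(-1, summary_logits.size(-1)),
                            summary_targets.view(-1))
    return l_sum + lam * ntm_loss(*ntm_terms)
\end{verbatim}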
\section{Experimental Setup}
\subsection{Datasets}
We evaluate our proposed method on five benchmark datasets: CNN/DailyMail (CNN/DM) \cite{hermann2015teaching}, XSum \cite{narayan2018don}, Reddit TIFU \cite{kim2018abstractive}, arXiv, and PubMed \cite{cohan2018discourse}. The datasets possess various styles and varying lengths.\\
\noindent \textbf{CNN/DM} is constructed by collecting news articles written by CNN and DailyMail journalists. For each article, its highlights are chosen as the summary. We use the non-anonymized version and follow the conventional training/validation/test split in \cite{hermann2015teaching}. \\
\noindent \textbf{XSum} comprises 226,711 news articles, each of which is paired with a one-sentence summary. Our preprocessing and training/validation/test split are analogous to \cite{narayan2018don}. \\
\noindent \textbf{Reddit TIFU} includes 120K informal posts from the online discussion forum Reddit, which strictly follow the rule of writing an expressive "TL;DR" summary. In this work, the long subset of the dataset is used for evaluation. \\
\noindent \textbf{arXiv, PubMed} are two long-document datasets of scientific publications. For each document, the abstract is chosen to be the summary. \\
\noindent We present the statistics of datasets in Table \ref{tab:datasets}.
{\renewcommand{\arraystretch}{1.05}
\begin{table}[h]
\centering
\begin{small}
\begin{tabular}{c|c|c|c|c|c}
\hline
\textbf{Dataset} & \textbf{Train} & \textbf{Val} & \textbf{Test} & $l_D$ & $l_S$ \\ \hline
CNN/DM & 287113 & 13368 & 11490 & 781 & 56 \\
XSum & 204045 & 11332 & 11334 & 431 & 23 \\
Reddit TIFU & 33711 & 4214 & 4214 & 433 & 23 \\
arXiv & 203037 & 6436 & 6440 & 4938 & 220 \\
PubMed & 119924 & 6633 & 6658 & 3016 & 203 \\
\hline
\end{tabular}
\end{small}
\caption{Description of the evaluation datasets. $l_D$ and $l_S$ stand for the average length of documents and summaries, respectively}
\label{tab:datasets}
\end{table}}
\subsection{Implementation Details}
\textbf{Neural Topic Model} Given a dataset, we first pretrain the flow-based topic model so that it captures the prior context of the downstream documents. We experimented with different choices of the topic number $T \in \{50, 100, 150, 200\}$ and the number of invertible transformations applied in the flow of the neural topic model $K \in \{1, 4, 16\}$ on the CNN/DailyMail dataset.
\noindent \textbf{Summarization Module} We use the pretrained checkpoint open-sourced by \cite{zhang2020pegasus}, integrate it with the flow-based neural topic model, and jointly finetune on the downstream datasets. Following \cite{zhang2020pegasus}, at test time beam search is conducted with a beam size of 8; the top-3 checkpoints are selected based on their loss on the validation set, and we average their results on the test set. More detailed settings can be found in the Appendix.
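For reference, decoding with a beam size of 8 from a public PEGASUS checkpoint can be sketched with the HuggingFace transformers library as follows; the checkpoint name and generation arguments shown here are illustrative assumptions and do not correspond exactly to our finetuned models.
\begin{verbatim}
from transformers import PegasusTokenizer, PegasusForConditionalGeneration

model_name = "google/pegasus-cnn_dailymail"   # illustrative public checkpoint
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

document = "New York state authorities have issued a health alert ..."  # truncated example
inputs = tokenizer(document, truncation=True, return_tensors="pt")
summary_ids = model.generate(**inputs, num_beams=8, max_length=128, early_stopping=True)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
\end{verbatim}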
\subsection{Comparisons}
As baselines, we compare our proposed architecture against a wide variety of previous studies:
\vspace{-1mm}
\begin{itemize}[leftmargin=*]
\setlength \itemsep{-0.2em}
\item \textbf{PTGEN} \cite{see2017get}: a pointer-generator baseline that allows switching between generating words from the vocabulary and copying words from the source.
\item \textbf{PTGEN+Cov} \cite{see2017get}: a pointer-generator baseline with coverage mechanism.
\item \textbf{DRM} \cite{paulus2017deep}: a deep reinforced model which handles the coverage mechanism by using intra-attention among decoder tokens.
\item \textbf{DiscAttn} \cite{cohan2018discourse}: a Seq2Seq model which targets the long-document summarization.
\item \textbf{BertSUM} \cite{liu2019text}: a BERT-based baseline whose finetuning strategy is designed to address the discrepancy between the encoder and the decoder.
\item \textbf{ExtSum-LG+RdLoss} \cite{xiao2020systematically}: a Transformer-based model with a training scheme to explicitly reduce redundancy.
\item \textbf{MatchSUM} \cite{zhong2020extractive}: a baseline that makes use of Siamese BERT to score among candidate summaries.
\item \textbf{BottomUp} \cite{gehrmann2018bottom}: a baseline using an extractive-abstractive approach: it first extracts salient phrases and then performs abstractive summarization based on the extracted content.
\item \textbf{TLM-I+E} \cite{subramanian2019extractive}: a baseline that improves on \cite{gehrmann2018bottom} by utilizing a Transformer language model as the decoder.
\item \textbf{BART} \cite{lewis2019bart}: a baseline which is pretrained with denoising tasks.
\item \textbf{PEGASUS} \cite{zhang2020pegasus}: a Transformer-based model with pretraining procedure comprised of two tasks: masked sentences prediction and masked language modeling.
\item \textbf{BertSUM + TA} \cite{wang2020friendly}: a BertSUM model equipped with the topic assistant inspired by the Poisson Factor Analysis topic model.
\item \textbf{BART + TA} \cite{wang2020friendly}: a BART version with a plugged-in topic assistant inspired by the Poisson Factor Analysis topic model.
\item \textbf{VHTM} \cite{fu2020document}: a baseline which takes hierarchical structure of the source text into account and considers each section as a subtopic.
\end{itemize}
\section{Experimental Results}
{\renewcommand{\arraystretch}{1.05}
\begin{table}[t]
\centering
\begin{small}
\begin{tabular}{c|c|c|c}
\hline
\textbf{Model} & \textbf{R1} & \textbf{R2} & \textbf{RL} \\ \hline
PTGEN & 36.44 & 15.66 & 33.42 \\
PTGEN + Cov & 39.56 & 17.28 & 36.38 \\
DRM & 41.16 & 15.75 & 39.08 \\ \hline
BertSUM & 43.85 & 20.34 & 39.90 \\
MatchSUM & 44.41 & 20.86 & 40.55 \\
BottomUp & 41.22 & 18.68 & 38.34 \\
BART & 44.16 & 21.28 & 40.90 \\
PEGASUS & 44.17 & 21.47 & 41.11 \\ \hline
VHTM & 40.57 & 18.05 & 37.18 \\
BertSUM + TA & 43.06 & 20.58 & 39.67 \\
BART + TA & 44.47 & 21.39 & 41.32 \\ \hline
Our Model & \textbf{44.52} & \textbf{21.95} & \textbf{41.39} \\
\hline
\end{tabular}
\end{small}
\caption{Results in text summarization on CNN/DailyMail}
\label{tab:text_summ_cnn_dailymail}
\end{table}}
{\renewcommand{\arraystretch}{1.05}
\begin{table}[t]
\centering
\begin{small}
\begin{tabular}{c|c|c|c}
\hline
\textbf{Model} & \textbf{R1} & \textbf{R2} & \textbf{RL} \\ \hline
PTGEN & 29.70 & 9.21 & 23.24 \\
PTGEN + Cov & 28.10 & 8.02 & 21.72 \\ \hline
BertSUM & 38.81 & 16.50 & 31.27 \\
MatchSUM & 24.86 & 4.66 & 18.41 \\
BART & 45.14 & 22.27 & 37.25 \\
PEGASUS & 47.21 & 24.56 & 39.25 \\ \hline
BertSUM + TA & 39.77 & 17.39 & 32.39 \\
BART + TA & 45.76 & 22.68 & 38.03 \\ \hline
Our Model & \textbf{49.57} & \textbf{25.08} & \textbf{41.81} \\
\hline
\end{tabular}
\end{small}
\caption{Results in text summarization on XSum}
\label{tab:text_summ_xsum}
\end{table}}
{\renewcommand{\arraystretch}{1.05}
\begin{table}[h]
\centering
\begin{small}
\begin{tabular}{c|c|c|c}
\hline
\textbf{Model} & \textbf{R1} & \textbf{R2} & \textbf{RL} \\ \hline
BART & 24.19 & 8.12 & 21.31 \\
MatchSUM & 25.09 & 6.17 & 20.13 \\
PEGASUS & 26.63 & 9.01 & 21.60 \\ \hline
Our Model & \textbf{27.96} & \textbf{9.43} & \textbf{23.08} \\
\hline
\end{tabular}
\end{small}
\caption{Results in text summarization on Reddit TIFU}
\label{tab:text_summ_reddit_tifu}
\end{table}}
{\renewcommand{\arraystretch}{1.05}
\begin{table}[h]
\centering
\begin{small}
\begin{tabular}{c|c|c|c}
\hline
\textbf{Model} & \textbf{R1} & \textbf{R2} & \textbf{RL} \\ \hline
PTGEN + Cov & 32.06 & 9.04 & 25.16 \\
DiscAttn & 35.80 & 11.05 & 31.80 \\ \hline
ExtSum-LG+RdLoss & 44.01 & 17.79 & 39.09 \\
TLM-I+E & 41.62 & 14.69 & 38.03 \\
PEGASUS & 43.82 & 16.74 & 39.15 \\ \hline
Our Model & \textbf{44.53} & \textbf{19.22} & \textbf{40.61} \\
\hline
\end{tabular}
\end{small}
\caption{Results in text summarization on arXiv dataset}
\label{tab:text_summ_arXiv}
\end{table}}
{\renewcommand{\arraystretch}{1.05}
\begin{table}[h]
\centering
\begin{small}
\begin{tabular}{c|c|c|c}
\hline
\textbf{Model} & \textbf{R1} & \textbf{R2} & \textbf{RL} \\ \hline
PTGEN + Cov & 31.55 & 8.52 & 27.38 \\
DiscAttn & 38.93 & 15.37 & 35.21 \\ \hline
MatchSUM & 41.21 & 14.91 & 20.13 \\
ExtSum-LG+RdLoss & 45.30 & 20.42 & 40.95 \\
Sent-CLF & 42.13 & 16.27 & 39.21 \\
PEGASUS & 44.29 & 19.19 & 40.42 \\ \hline
Our Model & \textbf{45.99} & \textbf{20.49} & \textbf{41.25} \\
\hline
\end{tabular}
\end{small}
\caption{Results in text summarization on PubMed}
\label{tab:text_summ_pubmed}
\end{table}}
\subsection{Automatic Evaluation} \label{subsec:text_summ_result}
We use the automatic ROUGE metrics \cite{lin2004rouge}. In Tables \ref{tab:text_summ_cnn_dailymail}, \ref{tab:text_summ_xsum}, \ref{tab:text_summ_reddit_tifu}, \ref{tab:text_summ_arXiv}, and \ref{tab:text_summ_pubmed}, we report unigram overlap (ROUGE-1) and bigram overlap (ROUGE-2) to assess informativeness, and the longest common subsequence (ROUGE-L) to assess the fluency of the generated summaries. Our model outperforms prior work on all five standard datasets.
For CNN/DailyMail, we achieve an absolute improvement of 0.35 in ROUGE-1, 0.48 in ROUGE-2, and 0.28 in ROUGE-L over PEGASUS. Furthermore, our model outperforms the previous topic-aware model BART + TA by 0.6 points in ROUGE-2. This shows that our method can generate summaries that include important content from the document.
On the XSum dataset, which is more abstractive than CNN/DailyMail \cite{bommasani2020intrinsic}, our gain is even more pronounced: compared with BART + TA, we achieve an absolute improvement of 3.8 in ROUGE-1, 2.4 in ROUGE-2, and 3.8 in ROUGE-L.
For Reddit TIFU, in which most of the source texts and the target summaries are informal, our model outperforms PEGASUS by 1.3 in ROUGE-1, 0.4 in ROUGE-2, and 1.5 in ROUGE-L. These results show that global semantics is capable of helping the model generate better target summaries.
For the arXiv and PubMed datasets, we also achieve improvements over the PEGASUS baseline, which is designed specifically for abstractive summarization. In particular, on arXiv we gain an increase of 0.71 in ROUGE-1, 2.48 in ROUGE-2, and 1.46 in ROUGE-L; on PubMed the increase is 1.7 in ROUGE-1, 1.3 in ROUGE-2, and 0.83 in ROUGE-L.
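The ROUGE scores reported above can be reproduced with standard tooling; the sketch below uses the rouge-score package and averages F1 scores over the test set, which is a simplifying assumption (official evaluation setups may aggregate differently).
\begin{verbatim}
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

def average_rouge(references, predictions):
    """Averages ROUGE F1 scores over paired reference/prediction lists."""
    totals = {"rouge1": 0.0, "rouge2": 0.0, "rougeL": 0.0}
    for ref, pred in zip(references, predictions):
        scores = scorer.score(ref, pred)
        for key in totals:
            totals[key] += scores[key].fmeasure
    return {key: 100 * value / len(predictions) for key, value in totals.items()}
\end{verbatim}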
\subsection{Human Evaluation}
{\renewcommand{\arraystretch}{1.05}
\begin{table}[h]
\centering
\begin{small}
\begin{tabular}{c|c|c}
\hline
\textbf{Model} & \textbf{Preference Score} & \textbf{QA score } \\ \hline
BART & -0.286 & 24.59 \\
PEGASUS & -0.257 & 26.53 \\
Our Model & 0.250 & 46.94 \\ \hline
Gold Summary & \textbf{0.536} & \textbf{93.88} \\
\hline
\end{tabular}
\end{small}
\caption{Human evaluation}
\label{tab:human_eval}
\end{table}}
Since automatic metrics do not fully reveal the true quality of a model, we conduct a human evaluation for further assessment. To this end, we design two tests to elicit human judgements in two ways.
In the first experiment, we presented summaries of PEGASUS \cite{zhang2020pegasus}, BART \cite{lewis2019bart}, our model, and the gold summary, then asked four professional English speakers to rate the summaries from worst to best in terms of informativeness, faithfulness, topic coherence, and fluency. We randomly sampled 100 summaries from 100 documents of CNN/DailyMail test set. The score of a system is equal to the percentage of times it was selected as the best minus the percentage of times it was chosen as the worst.
In the second experiment, we applied the question answering (QA) paradigm. For each document, we created two independent questions that emphasize the key information of the text. Participants read each system's summary and answered those questions as well as they could. The score of a system is the percentage of questions that participants answered correctly.
Ten professional English speakers were asked to participate in the two assessments. The results in Table \ref{tab:human_eval} show that our generated summaries are favored by human judges and are more likely to retain the important content of the original text than other systems' summaries.
The Fleiss' Kappa scores and overall agreement percentages of the two human evaluation experiments are reported in Table \ref{tab:human_agreement}. As shown in the table, the measures demonstrate good inter-annotator agreement.
{\renewcommand{\arraystretch}{1.05}
\begin{table}[H]
\centering
\begin{small}
\begin{tabular}{c|c|c}
\hline
\textbf{Test} & \textbf{Fleiss' Kappa} & \textbf{Overall Agreement} \\ \hline
Preference & 0.61 & 70.45\% \\
QA & 0.77 & 81.13\% \\
\hline
\end{tabular}
\end{small}
\caption{Fleiss' Kappa and Overall Agreement percentage of each human evaluation test. Higher score indicates better agreement.}
\label{tab:human_agreement}
\end{table}}
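The agreement statistics above can be computed with standard tooling, for example with statsmodels as sketched below; the toy rating matrix is fabricated purely for illustration and is not our actual annotation data.
\begin{verbatim}
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows are rated items, columns are annotators, values are category labels.
ratings = np.array([[0, 0, 1],
                    [2, 2, 2],
                    [1, 0, 1]])               # toy example only
table, _ = aggregate_raters(ratings)           # per-item counts of each category
print(fleiss_kappa(table, method="fleiss"))
\end{verbatim}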
\subsection{Flow-based neural topic model with other Transformer-based model}
{\renewcommand{\arraystretch}{1.05}
\begin{table}[H]
\centering
\begin{small}
\begin{tabular}{p{3.5cm}|c|c|c}
\hline
\textbf{Model} & \textbf{R1} & \textbf{R2} & \textbf{RL} \\ \hline
BART & 44.16 & 21.28 & 40.90 \\
BART + Flow-based NTM + Gating & \textbf{44.89} & \textbf{21.74} & \textbf{41.48} \\ \hline
\end{tabular}
\end{small}
\caption{Results when applying the flow-based neural topic model and contextualized gating to BART \cite{lewis2019bart} on the CNN/DailyMail dataset}
\label{tab:plug_in_cnn}
\end{table}}
{\renewcommand{\arraystretch}{1.05}
\begin{table}[H]
\centering
\begin{small}
\begin{tabular}{p{3.5cm}|c|c|c}
\hline
\textbf{Model} & \textbf{R1} & \textbf{R2} & \textbf{RL} \\ \hline
BART & 45.14 & 22.27 & 37.25 \\
BART + Flow-based NTM + Gating & \textbf{46.86} & \textbf{23.74} & \textbf{38.49} \\ \hline
\end{tabular}
\end{small}
\caption{Results when applying the flow-based neural topic model and contextualized gating to BART \cite{lewis2019bart} on the XSum dataset}
\label{tab:plug_in_xsum}
\end{table}}
{\renewcommand{\arraystretch}{1.05}
\begin{table}[H]
\centering
\begin{small}
\begin{tabular}{p{3.5cm}|c|c|c}
\hline
\textbf{Model} & \textbf{R1} & \textbf{R2} & \textbf{RL} \\ \hline
BART & 43.92 & 16.36 & 39.16 \\
BART + Flow-based NTM + Gating & \textbf{47.78} & \textbf{18.28} & \textbf{41.47} \\ \hline
\end{tabular}
\end{small}
\caption{Results when applying the flow-based neural topic model and contextualized gating to BART \cite{lewis2019bart} on the arXiv dataset}
\label{tab:plug_in_arxiv}
\end{table}}
To study the effect of our topic-oriented module on other abstractive Transformer-based models, we integrate our flow-based neural topic model and contextualized gating into BART \cite{lewis2019bart}. In particular, we continue to finetune on the CNN/DailyMail, XSum, and arXiv datasets, starting from the pretrained checkpoint. As can be seen in Tables \ref{tab:plug_in_cnn}, \ref{tab:plug_in_xsum}, and \ref{tab:plug_in_arxiv}, our topic-oriented module improves performance, showing its general effectiveness on other Transformer-based architectures.
\subsection{Analysis on Neural Topic Model and Traditional Topic Model}
To substantiate our hypothesis that the neural topic model enhances summarization performance on large-scale datasets, we conducted experiments that combine the Transformer-based summarization module with traditional topic models, i.e., Latent Dirichlet Allocation (LDA) and Poisson Factor Analysis (PFA), on CNN/DailyMail and PubMed. We report the results in Table \ref{tab:topic_model_cnn} and Table \ref{tab:topic_model_pubmed}. As can be seen, neural topic models, particularly our proposed model, significantly outperform traditional topic models for abstractive summarization.
{\renewcommand{\arraystretch}{1.05}
\begin{table}[H]
\centering
\begin{small}
\begin{tabular}{p{3.5cm}|c|c|c}
\hline
\textbf{Model} & \textbf{R1} & \textbf{R2} & \textbf{RL} \\ \hline
PEGASUS + LDA + Gating & 44.17 & 21.47 & 41.11 \\
PEGASUS + PFA + Gating & 44.18 & 21.53 & 41.14 \\
PEGASUS + VAE + Gating & 44.33 & 21.71 & 41.27 \\ \hline
Our Model & \textbf{44.52} & \textbf{21.95} & \textbf{41.39} \\
\hline
\end{tabular}
\end{small}
\caption{Results when adapting various topic models on CNN/DailyMail dataset}
\label{tab:topic_model_cnn}
\end{table}}
{\renewcommand{\arraystretch}{1.05}
\begin{table}[H]
\centering
\begin{small}
\begin{tabular}{p{3.5cm}|c|c|c}
\hline
\textbf{Model} & \textbf{R1} & \textbf{R2} & \textbf{RL} \\ \hline
PEGASUS + LDA + Gating & 44.36 & 19.24 & 40.53 \\
PEGASUS + PFA + Gating & 44.41 & 19.19 & 40.55 \\
PEGASUS + VAE + Gating & 45.46 & 19.84 & 40.89 \\ \hline
Our Model & \textbf{45.99} & \textbf{20.49} & \textbf{41.25} \\
\hline
\end{tabular}
\end{small}
\caption{Results when adapting various topic models on PubMed dataset}
\label{tab:topic_model_pubmed}
\end{table}}
\subsection{Latent Topic Analysis}
{\renewcommand{\arraystretch}{1.05}
\begin{table}[H]
\centering
\begin{small}
\begin{tabular}{c|c|c}
\hline
\textbf{Datasets} & \textbf{CNN/DailyMail} & \textbf{XSum} \\ \hline
LDA & 35.03 & 26.22 \\
LSA & 41.64 & 27.69 \\
VAE-based NTM & 52.60 & 52.94 \\ \hline
Our Model & \textbf{53.25} & \textbf{53.09} \\
\hline
\end{tabular}
\end{small}
\caption{$C_V$ topic coherence score on benchmark datasets. Higher scores mean more coherent topics}
\label{tab:cv_coherence_score}
\end{table}}
Section \ref{subsec:text_summ_result} shows that the latent vector is useful for text summarization. Here we study whether jointly training with the summarization module helps the topic model produce human-interpretable topics.
\noindent \textbf{Coherence Score Comparison} We evaluate the topic models with the automatic $C_V$ coherence measure. Following \cite{zeng2018topic, wang2020friendly}, we pick the top 10 words from each topic and average the $C_V$ scores of all topics. The results are reported on two summarization datasets, CNN/DailyMail and XSum. For comparison, we take LDA and LSA as traditional baselines, as they are notable and well-known for human interpretability. For both baselines, we run 1,000 iterations to ensure convergence. As Table \ref{tab:cv_coherence_score} shows, our model outperforms traditional topic models, which implies that jointly training the neural topic model and text summarization creates human-understandable topics.
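The $C_V$ scores can be computed with gensim's CoherenceModel, as sketched below; the tokenized reference corpus and variable names are placeholders.
\begin{verbatim}
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel

def cv_coherence(topics_top_words, tokenized_docs):
    """topics_top_words: list of lists with the top-10 words of each topic."""
    dictionary = Dictionary(tokenized_docs)
    cm = CoherenceModel(topics=topics_top_words, texts=tokenized_docs,
                        dictionary=dictionary, coherence="c_v", topn=10)
    return cm.get_coherence()
\end{verbatim}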
\noindent \textbf{Sample Topics} To further assess the quality of the topics learned by our system, we extract some sample words (Table \ref{tab:sample_topics}) indicating the context around "\emph{liverpool chelsea}" discovered by the model trained on the CNN/DailyMail dataset. As can be seen, the topics produced by traditional topic models such as LSA and LDA contain some off-topic words. Conversely, our neural topic model trained jointly with the text summarization module produces a topic that looks more coherent. In particular, its words refer to a context involving the teams competing in the English football championship, such as "\emph{arsenal}" and "\emph{tottenham}", and related factors, for instance, "\emph{balls}", "\emph{prize}", and "\emph{winning}".
{\renewcommand{\arraystretch}{1.05}
\begin{table}[H]
\centering
\hspace{-4mm}
\begin{small}
\begin{tabular}{p{1.6cm}|p{5.7cm}}
\hline
LDA & \textcolor{red}{father} liverpool \textcolor{red}{son} chelsea called group night \textcolor{red}{child} west cup \\
\hline
LSA & chelsea beat half winner jose mourinho \textcolor{red}{table} \textcolor{red}{place} happy \textcolor{red}{lake} \\ \hline
VAE-based NTM & liverpool \textcolor{red}{salmon} manchester england everton newcastle bale premiership fabregas clasico \\ \hline
Our Model & liverpool cup leagues chelsea balls night tottenham prize winning arsenal \\
\hline
\end{tabular}
\end{small}
\caption{Top 10 words for the topic related to "\emph{liverpool chelsea}". Red words highlight non-topic words.}
\label{tab:sample_topics}
\end{table}}
\subsection{Ablation Study}
{\renewcommand{\arraystretch}{1.05}
\begin{table}[H]
\centering
\begin{small}
\begin{tabular}{p{4cm}|c|c|c}
\hline
\textbf{Model} & \textbf{R1} & \textbf{R2} & \textbf{RL} \\ \hline
Our Model (with Flow-based NTM and Gating) & \textbf{49.57} & \textbf{25.08} & \textbf{41.81} \\
- with VAE-based NTM and Gating & 48.13 & 23.91 & 40.68 \\
- with Flow-based NTM & 46.83 & 23.89 & 39.78 \\
- with VAE-based NTM & 46.30 & 23.59 & 39.05 \\
\hline
\end{tabular}
\end{small}
\caption{Ablation Study on XSum test set}
\label{tab:ablation}
\end{table}}
In this section, we proceed to study the impact that (1) The integration of normalizing flow and (2) The contextualized gating mechanism have on the text summarization performance.
\noindent \textbf{Impact of the contextualized gating mechanism} Plainly incorporating the global semantics into the model without gating substantially reduces the performance gain. As shown in Table \ref{tab:ablation}, the ROUGE-1 score decreases by more than 2 points compared with the models in which we apply contextualized gating. We hypothesize that in numerous cases the effect of global semantics overwhelms the benefits of the contextualized representations.
\noindent \textbf{Impact of integrating normalizing flow} In this ablation, we remove the normalizing flow from the neural topic model. As shown in Table \ref{tab:ablation}, without the normalizing flow, the improvement brought by the latent vector is reduced: by 1.44 ROUGE-1 points when contextualized gating is used and by 0.53 ROUGE-1 points in the non-gating case. We hypothesize that the plain neural topic model does not provide global semantics as expressive as those of the flow-based neural topic model.
\subsection{Case Studies}
Table \ref{tab:example} shows a case study of the summaries produced by PEGASUS and our model. While PEGASUS misses the key information related to the death of "\emph{Richie Benaud}", our model successfully includes it in the final summary. This shows the effectiveness of our model in capturing key information in the document, thanks to the contributions of the neural topic model and the gating mechanism. Remarkably, our model is also able to rephrase "\emph{signature one liners}" as "\emph{trademark word}" when describing \emph{Richie Benaud}'s famous quote, rather than simply copying the words in the original document. More case studies can be found in the Appendix.
\section{Conclusion}
In this paper, we propose a method that utilizes global semantics for the text summarization task. In particular, we enrich the global semantics so that it describes the documents more expressively. Moreover, we find that maintaining the information in the original contextualized representations is also beneficial for summarization performance. Our model outperforms other state-of-the-art models on five benchmark datasets.
\appendix
\section{Summary examples}
\label{app:summ_examples}
We present some summary examples in this section.
{\renewcommand{\arraystretch}{1.2}
\begin{table}[h!]
\centering
\begin{small}
\begin{tabular}{|p{15cm}|}
\hline
\textbf{DOCUMENT}: \textcolor{red}{New York (CNN) New York state authorities have issued a health alert following a dramatic spike in hospital visits for synthetic marijuana-related emergencies. Gov. Andrew Cuomo said Friday that more than 160 patients in nine days have been rushed to hospitals across the state for adverse reactions to synthetic cannabinoid}, known as "spice" or "K2." \textcolor{cyan}{"Spice" and other similar synthetic drugs are often marketed as legal plant material coated with chemicals that are supposed to mimic the effects of marijuana}, according to a statement from the governor's office. "Since the exact compounds contained in synthetic cannabinoid products change so frequently, it's often impossible for users to know exactly what they are putting in their body," acting New York State Health Commissioner Dr. Howard Zucker said. \textcolor{ForestGreen}{Symptoms after use have a wide range of severity, from confusion, drowsiness and headaches to increased heart rate, seizures and loss of consciousness}, according to the New York State Department of Health. \textcolor{pink}{Synthetic marijuana is popular among teens because it is marketed as incense or natural products to "mask its true purpose"}, the health department statement said. \textcolor{orange}{"Young people may be fooled into thinking that these substances are safe because they are sold over the counter or are in colorful packaging}, but they are not made for human consumption," \textcolor{blue}{New York Alcohol and Substance Abuse Service s Commissioner Arlene Gonzalez Sanchez said. "They are dangerous and can have significant, long-term effects on the brain." The recent surge is not isolated in New York; other states across the country have noticed similar trends. Alabama Department of Public Health issued a statement last week acknowledging a rise of synthetic marijuana usage and said there had been 98 overdoses suspected to be linked with "spice" in the previous month. Mobile County alone has seen seven cases in 2015 so far, more than the entire previous year, the statement said. Mississippi health officials are also concerned that synthetic marijuana is on the rise.} [...] \\ \hline
\textbf{GOLD SUMMARY}: \textcolor{red}{New York reports 160 hospitalizations related to synthetic marijuana . Gov. Andrew Cuomo issued a health alert .} \\ \hline
\textbf{PEGASUS}: \textcolor{pink}{Synthetic marijuana is popular among teens because it is marketed as incense or natural products to "mask its true purpose".} \textcolor{cyan}{"Spice" and other similar synthetic drugs are often marketed as legal plant material coated with chemicals that are supposed to mimic the effects of marijuana.} \textcolor{ForestGreen}{Symptoms after use have a wide range of severity, from confusion, drowsiness and headaches to increased heart rate, seizures and loss of consciousness.} \\ \hline
\textbf{BART}: \textcolor{cyan}{"Spice" and other similar synthetic drugs are often marketed as legal plant material.} \textcolor{ForestGreen}{Symptoms after use have a wide range of severity, from confusion, drowsiness and headaches to increased heart rate, seizures and loss of consciousness.} \textcolor{orange}{"Young people may be fooled into thinking that these substances are safe because they are sold over the counter or are in colorful packaging," official says.} \\ \hline
\textbf{Our model}: \textcolor{red}{New York Gov. Andrew Cuomo says 160 people have been hospitalized in nine days}. \textcolor{ForestGreen}{Symptoms after use include confusion, drowsiness, headaches, increased heart rate, seizures and loss of consciousness}. \textcolor{blue}{Health officials are concerned that synthetic marijuana is on the rise.} \\ \hline
\end{tabular}
\end{small}
\caption{A generated summary example.}
\label{tab:example_1}
\end{table}}
{\renewcommand{\arraystretch}{1.2}
\begin{table}[h!]
\centering
\begin{small}
\begin{tabular}{|p{15cm}|}
\hline
\textbf{DOCUMENT}: (CNN)Panic. Tears. Fear. All those feelings and more permeated cities, villages and camps around Nepal on Saturday, \textcolor{blue}{after a massive 7.8 magnitude earthquake struck around midday.} \textcolor{ForestGreen}{Hours later, after a wave of relentless aftershocks, many people still were too scared to go back inside any buildings.} Others crowded around rubble, including men and women racing to rescue those trapped. And then there are the hundreds already confirmed dead, not to mention the hundreds more who suffered injuries. \textcolor{pink}{Below are some accounts from witnesses in the mountainous Asian nation, in their own words.} Fast Facts: Earthquakes . Anderson, an American who was in Nepal for trekking and meditation, was in his hotel room when the quake struck. "I went outside five minutes after the major tremors stopped. I went to a parking lot nearby for one hour or so, then walked down the main road," he said. He took a series of photos on the main road between Thamal and Durbar Squares, that he shared via CNN iReport. Kumar posted a photo of people in his neighborhood sheltering in a makeshift tent after the quake. He sent updates via Twitter about what he was seeing in the Lalitpur District of Kathmandu. "It's getting dark, no power and no water supply in Lalitpur area, but people are helping each other with food and other items . "Almost everyone staying outside home...Hard time for small kids and older people . "People are very worried and are planning to stay out on the street overnight, but they lack sufficient food and water." \textcolor{red}{Joshi is a UNICEF communication officer who was on the ground at the time of the quake. "The shake was like nothing I have experienced in my 57 years. It was strong and it shook for a long time."} \textcolor{orange}{Old monuments and temples fell, Joshi wrote of his experience. There were fears that other buildings would collapse.} "When I went out in the evening, I saw many people preparing to camp out in the main open parade ground in the middle of the street. Relatives were crying in the main government hospital where the dead were being lined up in front of the hospital building. "My family is traumatised. We are 5 generations living under one roof -- from a 100 year old grandmother to my 16 month old granddaughter. Strong aftershocks are keeping most of us up!" "Some of the historical sites are completely devastated. "Most of the people -- a lot of the people -- are walking through the city. They're confused and scared. A lot of people are crying. "They're out with their pets and their families and a lot of locals are volunteering in rescue operations. "In several parts of Kathmandu, a lot of people seem trapped under the rubble. Locals are trying to rescue these people because they can still hear them." Are you in Nepal or have loved ones affected? Please share with us if you are in a safe place. "We are scared and waiting for the tremors to end. We are all sitting outside because there is more news of another quake. "There is no power and families are listening to the FM radio inside their cars. News of multiple building collapses. "I've seen many cracked walls and roads and buildings. "The Dharahara was packed with people a while ago. There are police everywhere trying to move rubble to make space on the roads for ambulances. Everyone is very scared. " "I see many cracked buildings and people are panicked and all running down to the streets. 
\textcolor{cyan}{"The main landmark in Kathmandu is a spire, Dharahara, and it has fallen down, it is about 140 feet high in the center city.} "Another aftershock is hitting now, it is really strong. "Airplanes are circling now overhead and helicopters are flying and not clear if the airport is open. We hear it is damaged." How are earthquakes measured? \textcolor{cyan}{"Many historic buildings have collapsed in the city. "In all my years I have never seen such a big earthquake here.} [...] \\ \hline
\textbf{GOLD SUMMARY}: \textcolor{blue}{Massive 7.8 magnitude earthquake has struck Nepal near its capital, Kathmandu .} As the death toll rises, witnesses describe devastation and panic . \\ \hline
\textbf{PEGASUS}: \textcolor{ForestGreen}{After hours of relentless aftershocks, many people still were too scared to go back inside any buildings.} \textcolor{pink}{Below are some accounts from witnesses in the mountainous Asian nation, in their own words.} \textcolor{orange}{Old monuments and temples fell, and there were fears that other buildings would collapse.} \\ \hline
\textbf{BART}: Hundreds of people are dead and injured in Nepal on Saturday. \textcolor{ForestGreen}{Aftershocks and aftershocks have been felt in Kathmandu, Nepal's capital city.} \textcolor{red}{"The shake was like nothing I have experienced in my 57 years. It was strong and it shook for a long time"} \\ \hline
\textbf{Our model}: \textcolor{red}{'"The shake was like nothing I have experienced in my 57 years," a UNICEF worker says.} \textcolor{blue}{The 7.8-magnitude earthquake struck around midday Saturday.} \textcolor{ForestGreen}{Many people were too scared to go back inside any buildings.} \textcolor{cyan}{Many buildings in Kathmandu collapsed, including a 140-foot spire.'} \\ \hline
\end{tabular}
\end{small}
\caption{Another generated summary example}
\label{tab:example_2}
\end{table}}
\newpage
\section{Loss of Flow-based Neural Topic Model}
\label{app:loss_proof}
The loss of the neural topic model is the evidence lower bound
\begin{equation}
\mathcal{L}_{\text{NTM}} = \log p(\mathbf{x,z}) - \log q(\mathbf{z}|\mathbf{x})
\label{eq:ntm_loss}
\end{equation}
Let $f_1, f_2, \ldots, f_K$ be a chain of invertible transformations whose Jacobian determinants are easy to compute. The change-of-variables formula for transforming $\mathbf{z}_i$ to $\mathbf{z}_{i+1}$ gives
\begin{equation}
q(\mathbf{z}_{i+1}) = q(\mathbf{z}_i) \left| \det \frac{\partial f_{i+1}}{\partial \mathbf{z}_{i}} \right|^{-1}
\end{equation}
Sequentially applying for $K$ transformations, we have
\begin{equation}
q(\mathbf{z}_{K}) = q(\mathbf{z}_{0}) \prod _{i=1}^{K} \left| \det \frac{\partial f_{i}}{\partial \mathbf{z}_{i-1}} \right|^{-1}
\end{equation}
or equivalently,
\begin{equation}
\log q(\mathbf{z}_{K}) = \log q(\mathbf{z}_{0}) - \sum _{i=1}^{K} \log \left| \det \frac{\partial f_{i}}{\partial \mathbf{z}_{i-1}} \right|
\label{eq:flow}
\end{equation}
Plugging formula (\ref{eq:flow}) into equation (\ref{eq:ntm_loss}), we obtain
\begin{equation}
\begin{split}
& \mathcal{L}_{\text{NTM}} = \log p(\mathbf{x,z}) - \log q(\mathbf{z}|\mathbf{x}) = \log p(\mathbf{x},\mathbf{z}_K) - \log q(\mathbf{z}_K|\mathbf{x}) \\
&= -\log q(\mathbf{z}_0) + \sum_{i=1}^{K} \log \left| \det \frac{\partial f_i}{\partial \mathbf{z}_{i-1}} \right| + \log p(\mathbf{x}|\mathbf{z}_K) + \log p(\mathbf{z}_K)
\end{split}
\end{equation}
This yields the neural topic model component of our training objective.
\section{Implementation Details}
\label{app:impl_details}
\textbf{Flow-based Neural Topic Model} Following \cite{wang2020friendly}, we remove stopwords from the BoW representations during preprocessing. We experiment with the number of topics in $\{50, 100, 150, 200\}$ and the number of invertible transformations in the flow-based neural topic model (the flow length) in $\{1, 4, 16\}$ on the CNN/DailyMail dataset. The results (reported as R1/R2/RL scores) are shown in Table \ref{tab:flow_topic_experiment}. It can be seen that a simple posterior distribution in the neural topic model is not sufficient to describe the large-scale dataset, while a highly complex one can slightly hurt performance. Similarly, it is necessary to set an adequate number of topics. We use a flow length of 4 and a topic number of 100 for all other datasets. \\
\begin{table}[h]
\centering
\begin{normalsize}
\begin{tabular}{c|c|c|c}
\hline
\textbf{Topic Num./Flow length} & 1 & 4 & 16 \\ \hline
50 & 44.19/21.49/41.28 & 44.48/21.54/41.36 & 44.23/21.51/41.29 \\ \hline
100 & 44.30/21.78/41.37 & \textbf{44.52/21.95/41.39} & 44.40/21.87/41.38 \\ \hline
150 & 44.25/21.70/41.34 & 44.44/21.86/41.27 & 44.30/21.80/41.21 \\ \hline
200 & 44.24/21.61/41.23 & 44.35/21.75/41.22 & 44.24/21.69/41.20 \\ \hline
\end{tabular}
\end{normalsize}
\caption{Comparisons on the number of topics and flow length on CNN/DailyMail dataset}
\label{tab:flow_topic_experiment}
\end{table}
We pretrain our flow-based neural topic model on the five downstream datasets CNN/DailyMail, XSum, Reddit TIFU, arXiv, and PubMed with batch sizes of 256, 256, 256, 320, and 320, respectively. All versions are trained with the Adadelta optimizer and a learning rate of $0.01$. \\
\noindent \textbf{Topic-oriented Transformer-based summarization model}. We do not change any settings from the original papers of PEGASUS \cite{zhang2020pegasus} and BART \cite{lewis2019bart}. In particular, we finetune all models on 16 NVIDIA A100 GPUs with a batch size of $256$ and the Adam optimizer with a learning rate of $10^{-4}$. For the objective function in Equation \ref{eq:loss}, we experimented with $\lambda \in \{0.5, 0.6, 0.75, 0.9\}$ and found that $\lambda = 0.75$ gives the best performance for all datasets. Models are evaluated and checkpoints are saved after every epoch. During training, we keep track of the three best checkpoints in terms of loss on the validation set. For decoding, we run beam search with a beam size of $8$ and report the best result among the three validated checkpoints.
"attr-fineweb-edu": 1.767578,
"attr-cc_en_topic": 0,
"domain": "arxiv"
} |
\section{Introduction}
Quantifying the impact of injury on player performance in professional sports is important for both managers and players themselves. Increasingly, players are valued and compensated in a manner driven by quantitative metrics of past performance, but injuries have the potential to disrupt the continuity between past and future performance \citep{begley2018, conte2016, frangiamore2018, wasserman2015}. How should a player's expected future value to a team be adjusted in the event of injury? One way to quantify impact in this setting is as the difference between the value of a performance metric the player would have achieved in the absence of injury and the value of the same metric achieved after a given injury. Unfortunately, even post hoc, at most one of these two counterfactual outcomes is observed. This phenomenon, often known as the fundamental problem of causal inference \citep{holland1980causal}, is also present in impact evaluation settings across social science and medicine.
One approach to impact evaluation is matching, in which individuals experiencing a treatment or condition of interest (in our case, injuries) are paired to otherwise similar control individuals who did not experience the treatment. Assuming that paired individuals are sufficiently similar on observed attributes and that no important unobserved attributes confound the comparison, the difference in outcomes approximates the impact of treatment for individuals in the pair \citep{stuart2010matching}. When controls are plentiful, each treated unit may be matched to multiple controls, forming matched sets instead of matched pairs. In Section \ref{sec:Baseball} we use this strategy to evaluate the impact of minor injury on batting performance in Major League Baseball (MLB), comparing on-base percentage between players who recently spent a short period of time on their team's injured list (IL) and otherwise similar players who did not go on the IL.
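As a stylized illustration of this idea, the sketch below pairs each treated unit with its $k$ nearest controls on a covariate distance and averages the within-set outcome differences; this generic nearest-neighbor estimator is not the GroupMatch procedure discussed below, and all variable names are placeholders.
\begin{verbatim}
import numpy as np

def matched_att(X_treated, y_treated, X_control, y_control, k=3):
    """Estimates the average effect on the treated from 1:k nearest-neighbor matches."""
    effects = []
    for x_t, y_t in zip(X_treated, y_treated):
        dists = np.linalg.norm(X_control - x_t, axis=1)   # covariate distance to controls
        nearest = np.argsort(dists)[:k]                   # indices of the k closest controls
        effects.append(y_t - y_control[nearest].mean())   # treated minus matched-control mean
    return float(np.mean(effects))
\end{verbatim}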
In contrast to common matched studies, the treatment in our setting is not given to all individuals at the same time. Instead, each player is observed repeatedly in games throughout the MLB season, and each treated player experiences injury at a different point in time. The longitudinal structure of the data and rolling nature of entry into treatment create complications for pairing injured players to uninjured controls. Specifically, each injured player has a date of entry onto the IL, and a similarly well-defined follow-up date at which performance is assessed based on a certain period elapsing after return from the IL. However, dates of treatment (and hence follow-up) are not defined in the data for control individuals, and the matching process involves not only selecting which control individuals will be paired to injured players but at which point in time the control unit will be measured. Two primary strategies exist for aligning control individuals in time, or equivalently selecting ``pseudo-treatment'' dates at which counterfactual outcomes for controls will be considered. The most common strategy is to compare players at the same point in time; for instance, if a treated player experiences injury on August 1st, then he is compared to control units only as they appear on August 1st. This approach is conceptually straightforward, although it requires specialized software as described in \citet{witman2019}; however, it implicitly prioritizes calendar time itself as the most important dimension of similarity between units, and in settings where time is not an especially important confounder it has the effect of arbitrarily limiting the pool of potential control comparisons.
The second approach is to allow comparisons between a treated unit at one calendar time and a control unit at a potentially different calendar time. For instance, a player's attributes and to-date performance in the MLB season at his time of injury on August 1 may more closely resemble the attributes and to-date performance of an uninjured player on July 1 than they do those of any other player in his August 1 state, and under the second approach the first player at August 1 could be matched to the second at July 1. The resulting flexibility has the potential to greatly improve similarity between matched units on measured variables besides time.
The recent GroupMatch algorithm \citep{pimentel2020} constructs matches optimally across time in this manner. Our investigation focuses on this second method of matching.
In introducing the GroupMatch framework, \citet{pimentel2020} grappled with several challenges. Whenever multiple controls are paired to a single treated unit, the presence of multiple copies of the same control individual necessitates a constraint to ensure that a treated unit is not simply paired to multiple slightly different copies of the same control. Two possible constraints were suggested, but methods for inference were presented under only one of them, in which multiple copies of a control individual are forbidden from appearing in the matched design. Furthermore, the guarantees given for causal effect estimation using GroupMatch designs rely heavily on a strong assumption that time itself is not a confounder.
In what follows, we address these challenges and provide additional tools to enhance GroupMatch. First, we introduce a new type of constraint on repeated use of control information within a GroupMatch design. This constraint has computational, analytical, and statistical advantages over existing constraints in many common settings. Secondly, we introduce a new block-bootstrap-based method for inference that applies to any GroupMatch design, motivated by related work on inference for cross-sectional matching designs by \citet{otsu2017}. Finally, we introduce a falsification test to partially check the assumption of time agnosticism underpinning GroupMatch's validity, empowering investigators to extract evidence from the data about this key assumption prior to matching.
We prove the validity of our bootstrap method under the most relevant set of constraints on reuse of controls, and we demonstrate the effectiveness of both the placebo test and the bootstrap inference approach through simulations and an analysis of MLB injury data. In particular, the bootstrap method shows similar performance to linear-regression-based approaches to inference often applied in similar settings, while making much weaker assumptions.
The paper is organized as follows.
Section~\ref{sec:Background} presents the basic statistical framework and reviews the GroupMatch framework, inference approaches for matching designs, and other related literature.
In Section~\ref{sec:Inference} we introduce a new constraint for use of controls in GroupMatch designs, leading to a new design called GroupMatch with instance replacement.
Section~\ref{sec:weightedBootstrap} presents a block bootstrap inference approach for GroupMatch.
We apply our block bootstrap inference to a simulation study in Section~\ref{sec:Simulations}.
In Section~\ref{sec:timepointAgnosticism} we present a falsification test for the assumption that time is not a confounder.
In Section~\ref{sec:Baseball} we revisit our baseball example and evaluate whether short-term injury impacts short-term MLB performance.
We conclude with a discussion in Section~\ref{sec:Conclusion}.
\section{Preliminaries}
\label{sec:Background}
\subsection{Matching in longitudinal settings and GroupMatch}
Matching methods attempt to estimate average causal effects
by grouping each treated unit with one or more otherwise similar controls and using paired individuals to approximate the missing potential outcomes. A number of other authors have considered matching in datasets containing repeated measures for the same individuals over time. Some focus on the case in which only a single time of treatment is present, and the primary challenge is deciding how to construct matching distances from pre-treatment repeated measures and assess outcomes using post-treatment repeated measures. For example, in \citet{haviland2008}, the authors choose as a treatment the act of joining a gang at age 14, the age requirement ensuring that there is a single potential time of treatment for all individuals. The situation is more complex when individuals opt in to treatment at different times as in \citet{li2001balanced, lu2005, witman2019}; and \citet{imai2020}. These authors address the problem by matching each treated unit to the version of the control unit present in the data at the time of treatment. For example, in \citet{imai2020}'s reanalysis of data from \citet{acemoglu2019democracy} on the impact of democratization on economic growth, countries undergoing democratizing political reforms are matched to similar control countries not undergoing such reforms in the same year. Although this method is logical whenever strong time trends are present, in other cases it may overemphasize similarity on time at the expense of other variables. For example, \citet{bohl2010longitudinal} study the impact of serious falls on subsequent healthcare expenditures for elderly adults using patient data from a large health system. While patients who fall could be matched to patients who appear similar based on recent health history on the calendar date of the fall, the degree of similarity in health histories is probably much more important than the similarity of the exact date at which each patient is measured.
GroupMatch is a new framework for matching in longitudinal settings with rolling entry into treatment that relaxes the assumption that treated units must be matched to control units at the same time \citep{pimentel2020}.
The relaxation of this assumption yields higher quality matches on other variables of interest. We focus on GroupMatch designs in what follows.
In brief, GroupMatch designs are solutions to a discrete optimization problem that constructs matched sets, each with the same number of control units, with maximum overall similarity on pre-treatment attributes between a treated unit at the time of treatment and controls, choosing freely among different possible pseudo-treatment times for controls. While GroupMatch does not require units in the same matched set to be aligned at identical timepoints, it does impose constraints on how often control information can be reused within the match. \citet{pimentel2020} outline two possible specific forms for this constraint. In GroupMatch without replacement (a setup referred to by the original authors as Design A), each control individual may contribute at most one version of itself to any matched set. In GroupMatch with trajectory replacement (Design B in the original paper), multiple copies-in-time of a control individual may appear in the match, but no individual copy may be used twice and no two copies of the same individual may appear in the same matched set. For further discussion of these constraints, see Section \ref{sec:Inference}.
\subsection{Sampling framework}
\label{subsec:sampling}
We observe $N$ subjects. For each subject $i$ in the study, we observe repeated measures $(Y_{i,t}, Z_{i,t}, \mathbf{X}_{i,t})$ for timepoints $t = 1, \ldots, T$, where $Y_{i,t}$ is an outcome of interest, $Z_{i,t}$ is equal to the number of timepoints since subject $i$ entered treatment (inclusive of $t$) or equal to zero if $i$ has not yet been treated, and $\mathbf{X}_{i,t}$ is a vector of covariates. We denote the collection of repeated measures for each subject $i$ as the trajectory $O_i$.
For convenience, we also define $T_i$ as the time $t$ for which $Z_{i,t} = 1$ (or $\infty$ otherwise) and $D_i$ as an indicator for $T_i < \infty$.
We specify a burn-in period of length $L-1$ during which no individuals are treated (or allow treatment at $t=1$ by setting $L = 1$).
Let $Y_{i, t}(z)$ (with $z \leq \max\{t-L+1,0\}$) be the potential outcome for unit $i$ at time $t$ if it had been enrolled in treatment for $z$ timepoints.
We will focus on assessing outcomes at a fixed follow-up period after treatment; for simplicity of exposition, we focus on the case in which the length of this follow-up period is zero, so that outcomes are observed immediately following treatment at the same timepoint. Although in principle there are up to $t-L+2$ potential outcomes $Y_{i,t}(z)$ for each $i$ and $t \geq L$ by choosing different $z$-values, we will restrict attention to the two potential outcomes $Y_{i,t}(1)$ and $Y_{i,t}(0)$. These potential outcomes allow us to
define the finite sample average effect of the treatment on the treated (ATT), denoted by $\Delta$, where $N_1 = \sum_{i=1}^N D_i$ is the number of treated subjects:
\begin{align*}
\Delta & = \frac{1}{N_1} \sum_{i = 1}^N \sum_{t = 1}^T 1\{ t = T_i \} [Y_{i, t}(1) - Y_{i, t}(0)] \\
& = \frac{1}{N_1} \sum_{i = 1}^N D_i \left[ Y_{i, t = T_i }(1) - Y_{i, t = T_i }(0) \right]
\end{align*}
Finally, we consider the data-generating process. We assume that trajectories $O_i$ are sampled independently from some infinite population, although we do not assume independence of observations within the same trajectory. Defining expectation $E(\cdot)$ with respect to sampling from this population, we may now also define the population version of the ATT as $\Delta_{pop} = E(\Delta)$. For future convenience, we also introduce a concise notation for conditional expectation (again, over the sampling distribution) of potential outcomes given no treatment through time $t$ and the covariates observed in the previous $L$ timepoints:
\begin{equation*}
\mu_z^t(\mathbf{X}) = E[Y_{i,t}(z)|\{X_{i, t'}\}_{t' = t - L + 1}^{t' = t } = \mathbf{X}, T_i > t]
\end{equation*}
Throughout the paper we abuse notation slightly by writing $\mu_0(\mathbf{X}_{i,t})$ to indicate conditional expectation given the $L$ lagged values of $\mathbf{X}_i$ directly preceding time $t$.
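To fix ideas, the following sketch (in Python; the array names and toy values are our own and are not part of any analysis in this paper, and timepoints are indexed from zero rather than one) shows one way to store the observed data so that $T_i$, $D_i$, $Z_{i,t}$, and the $L$ lagged covariate vectors can be read off directly.
\begin{verbatim}
import numpy as np

# Illustrative layout only: N trajectories, T timepoints, p covariates.
# Y[i, t]    : outcome for subject i at time t
# X[i, t, :] : covariate vector for subject i at time t
# Z[i, t]    : timepoints since treatment began (0 if not yet treated)
N, T, p, L = 5, 6, 3, 2
rng = np.random.default_rng(0)
Y = rng.normal(size=(N, T))
X = rng.normal(size=(N, T, p))
T_treat = np.array([3, np.inf, 4, np.inf, 5])   # T_i (np.inf if never treated)
D = np.isfinite(T_treat).astype(int)            # D_i
Z = np.zeros((N, T), dtype=int)
for i in range(N):
    if D[i]:
        t0 = int(T_treat[i])
        Z[i, t0:] = np.arange(1, T - t0 + 1)    # Z_{i,t} counts periods since entry

def lagged_covariates(i, t):
    """Stack X_{i,t-L+1}, ..., X_{i,t} into a single vector (requires t >= L-1)."""
    return X[i, t - L + 1:t + 1, :].ravel()
\end{verbatim}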
\subsection{Identification assumptions}
\label{subsec:identification}
\citet{pimentel2020} studied the following difference-in-means estimator in GroupMatch designs where each treated unit is matched to $C$ control observations. $M_{it, jt'}$ is an indicator for whether subject $i$ at time $t$ has been matched to subject $j$ at time $t'$:
\begin{align*}
\hat{\Delta} = \frac{1}{N_1} \sum_{i = 1}^N D_i [Y_{i, t = T_i} - \frac{1}{C} \sum_{j = 1}^N \sum_{t' = 1}^T M_{i T_i, jt'}Y_{j, t'}]
\end{align*}
\citet{pimentel2020} show that this estimator is unbiased for the population ATT under the following conditions:
\begin{enumerate}
\item Exact matching:
Matched units share identical values for covariates in the $L$ timepoints preceding treatment.
\item $L$-ignorability:
Conditional on the covariate history over the previous $L$ timepoints and any treatment enrollment in or prior to baseline, an individual's potential outcome at a given time is independent of the individual's treatment status. Formally,
$$ D_i \perp \!\!\! \perp Y_{i, t}(0) | Z_{i, t }, \{X_{i, s}\}_{s=t-L + 1}^{t} \text{, } \forall i.$$
\item Timepoint agnosticism: mean potential outcomes under control do not differ for any instances with identical covariate histories at different timepoints. Formally, for any set of $L$ covariate values $\mathbf{X}$,
$$ \mu_0^t(\mathbf{X}) = \mu_0^{t'}(\mathbf{X}) = \mu_0(\mathbf{X}) \text{ for any } 1 \leq t, t' \leq T. $$
For simplicity of notation we will drop the $t$ superscript when discussing the conditional expectation $\mu_0(\mathbf{X})$ for the sequel, with the exception of Section \ref{sec:timepointAgnosticism} where we temporarily consider failures of this assumption.
\item Covariate $L$-exogeneity: future covariates do not encode information about the potential outcome at time $t$ given covariates and treatment status over the previous $L$ timepoints. Formally,
$$ (X_{i, 1}, ..., X_{i, T}) \perp \!\!\! \perp Y_{i, t}(0) | (Z_{i, t}, \{X_{i, s}\}_{s=t-L+1}^{t}) \text{, } \forall i.$$
\item Overlap condition: given that a unit is not yet treated at time $t-1 \geq L$, the probability of entering treatment at the next time point is neither 0 nor 1 for any choice of covariates over the $L$ timepoints at and preceding $t$.
$$ 0 < P(T_i = t \mid T_i > t-1, X_{i, t}, \ldots, X_{i, t - L+1}) < 1
\quad \quad \forall t > L $$
While this assumption is not stated explicitly in \citet{pimentel2020}, we note that the authors rely implicitly on an overlap assumption of this type in the proof of their main result.
\end{enumerate}
Assumption 1 is no longer needed for asymptotic identification of the population ATT if we modify the estimator by adding in a bias correction term. As in \citet{otsu2017} and \citet{abadie2011}, we first estimate the conditional mean function $\mu_0(\mathbf{X})$ of the potential outcomes and use this outcome regression to adjust each matched pair for residual differences in covariates not addressed by matching.
As outlined in \citet{abadie2011}, bias correction leads to asymptotic consistency under regularity conditions on the potential outcome mean estimator $\widehat{\mu}_0(\cdot)$ (for further discussion of regularity assumptions on $\widehat{\mu}_0(\cdot)$ see the proof of Theorem \ref{thm:validBlockBootstrap} in the Appendix). Many authors have also documented benefits from adjusting matched designs using outcome models \citep{rubin1979using, antonelli2018doubly}. The specific form of our bias-corrected estimator is as follows:
\begin{align*}
\hat{\Delta}_{adj} & = \frac{1}{N_1} \sum_{i = 1}^N D_i [(Y_{i, t = T_i} - \hat{\mu}_0(\mathbf{X}_{i, T_i})) - \frac{1}{C} \sum_{j = 1}^N \sum_{t' = 1}^T M_{iT_i, jt'} (Y_{j, t'} - \hat{\mu}_0(\mathbf{X}_{j, t'}))]
\end{align*}
Exact matching is rarely possible in practice when datasets are large and covariates are continuous, and in light of this we focus primarily on the estimator $\widehat{\Delta}_{adj}$ in what follows.
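To make the estimator concrete, here is a minimal sketch (our own illustrative code, not the implementation used later in the paper) that assembles $\widehat{\Delta}_{adj}$ from a fitted matched design; replacing \texttt{mu0\_hat} by a function returning zero recovers the unadjusted difference-in-means estimator $\hat{\Delta}$.
\begin{verbatim}
def att_bias_corrected(Y, treated, matches, mu0_hat, C):
    """Bias-corrected matching estimator Delta_adj.

    Y        : array with Y[i, t]
    treated  : list of (i, T_i) pairs, one per treated unit
    matches  : matches[k] is the list of C control instances (j, t') matched
               to the k-th treated unit
    mu0_hat  : fitted control-outcome regression, evaluated at the covariate
               history of instance (i, t) via mu0_hat(i, t)
    """
    total = 0.0
    for k, (i, Ti) in enumerate(treated):
        treated_resid = Y[i, Ti] - mu0_hat(i, Ti)
        control_resid = sum(Y[j, tp] - mu0_hat(j, tp)
                            for (j, tp) in matches[k]) / C
        total += treated_resid - control_resid
    return total / len(treated)
\end{verbatim}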
\section{GroupMatch with instance replacement}
\label{sec:Inference}
Before discussing our method for inference in general GroupMatch designs, we introduce a new type of GroupMatch design. \citet{pimentel2020} described two different types of designs produced by GroupMatch denoted Problems A and B, designs we refer to as GroupMatch without replacement and GroupMatch with trajectory replacement respectively.
\begin{enumerate}
\item \textbf{GroupMatch without replacement}: each control unit can be matched to at most one treated unit. This means that if a treated unit is matched to an instance of a control unit, no other treated unit can match to (any instance) of that control unit.
\item \textbf{GroupMatch with trajectory replacement}: each control \textit{instance} can be matched to at most one treated unit. Each treated unit can match to no more than one instance from the same control trajectory. However, different treated units can match to different instances of the same control trajectory, so a single control trajectory can contribute multiple distinct instances to the design.
\end{enumerate}
As our chosen names for these designs suggest, their relative costs and benefits are similar to those of matching without and with replacement in cross-sectional settings. As discussed by \citet{hansen2004full}, matching without replacement (in which each control may appear in at most one matched set) leads to slightly less similar matches compared to matching with replacement (in which controls can reappear in many matched sets), since when two treated units share the same nearest control only one of them can use it. On the other hand, matching without replacement frequently leads to estimators with lower variance than those from matching with replacement, since in matching with replacement an individual control unit may appear in many matched sets and the resulting large weight on a single observation makes the estimator more sensitive to random fluctuations in its response. Thus one aspect of choosing between these designs is a choice about how to strike a bias-variance tradeoff. The other important aspect distinguishing these designs is that randomization inference, which is based on permuting treatment assignments in each matched set independently of others, generally requires matching without replacement.
These same dynamics play out with slightly more complexity in comparing GroupMatch without replacement and GroupMatch with trajectory replacement. In particular, GroupMatch without replacement ensures that responses in distinct matched sets are statistically independent (under a model in which trajectories are sampled independently), allowing for randomization inference, and ensures that the total weight on observations from any one control trajectory can sum only to $1/C$, ensuring that the estimator's variance cannot be too highly inflated by a single trajectory with large weight. On the other hand, GroupMatch with trajectory replacement leads to higher-quality matches and reduced bias in matched pairs.
We suggest a third GroupMatch design which leans even further towards expanding the potential control pool and reducing bias.
\begin{enumerate}
\setcounter{enumi}{2}
\item \textbf{GroupMatch with instance replacement}: Each treated unit can match to no more than one instance from the same control unit, but control instances can be matched to more than one treated unit.
\end{enumerate}
GroupMatch with instance replacement is identical to GroupMatch without replacement except that it also allows repetition of individual instances within the matched design as well as multiple non-identical instances from the same trajectory. As such, it is guaranteed to produce matches of at least as high quality as either of the other two designs, but it may lead to higher-variance estimators since individual instances may receive weights larger than $1/C$.
Figure~\ref{fig:scenarios} illustrates these three GroupMatch methods with a toy example that matches injured baseball players to non-injured players based on on-base percentage (OBP).
\begin{figure}
\centering
\includegraphics[scale = .5]{scenarios.png}
\caption{Toy example illustrating the three GroupMatch matching methods. Two injured baseball players (T1 and T2) are matched 1-1 to non-injured baseball players (C1a/b and C2a/b) based on player on-base percentage (OBP). Each non-injured player has two pseudo-injury times. Each instance in trajectory 1 is more similar to both treated units than either instance in trajectory 2, and instance C1a is more similar to both treated units than instance C1b. Under GroupMatch without replacement, T2 must match to an instance in Trajectory 2 because at most one instance from Trajectory 1 can participate in the match. Under GroupMatch with trajectory replacement, T2 can match to C1b but not to C1a, since multiple control instances can be chosen from the same trajectory as long as they are distinct. Under GroupMatch with instance replacement, both T1 and T2 are able to match to C1a. However, if each treated instance were matched to two control instances instead of one, GroupMatch with instance replacement would still forbid either T1 or T2 from matching to a second instance in Trajectory 1 in addition to C1a.}
\label{fig:scenarios}
\end{figure}
In practice we view GroupMatch with instance replacement as a more attractive approach than GroupMatch with trajectory replacement almost without exception. One reason is that while the true variance of estimators from GroupMatch with instance replacement may often exceed that of estimators from GroupMatch with trajectory replacement by a small amount, our recommended approach for \emph{estimating} the variance and conducting inference is not able to capture this difference. As we describe in Section \ref{sec:weightedBootstrap}, in the absence of a specific parametric model for correlations within a trajectory, inference proceeds in a conservative manner by assuming arbitrarily high correlations within a trajectory (much like the clustered standard error adjustment in linear regression). Since the variance advantage for GroupMatch with trajectory replacement arises only when correlations between instances within a trajectory are lower than one, the estimation strategy is not able to take advantage of it. This disconnect means that GroupMatch with trajectory replacement will not generally lead to narrower empirical confidence intervals even though it is known to be less variable in reality, much as the variance gains associated with paired randomized trials relative to less-finely-stratified randomized trials may fail to translate into improved variance estimates \citep{imbens2011experimental}.
A second important advantage of GroupMatch with instance replacement is its computational and analytical tractability relative to the other GroupMatch designs. We note that one easy way to implement GroupMatch with instance replacement as a network flow optimization problem is to remove a set of constraints in \citet{pimentel2020}'s Network B (specifically the upper capacity on the directed edges connected to the sink node), and in Sections \ref{sec:Simulations} and \ref{sec:Baseball} we use this implementation for its convenient leveraging of the existing \texttt{groupmatch} package in R. However, much more computationally efficient algorithms are also possible. Crucially, the removal of the constraint forbidding instance replacement means that matches can be calculated for each treated instance without any reference to the choices made for other treated units; the $C$ best matches for a given treated unit are simply the $C$ nearest neighbor instances such that no two such control instances within the matched set come from the same trajectory. In principle, this allows for complete parallelization of the matching routine. On the analytical side, this aspect of the design makes it possible to characterize the matching algorithm as a generalized form of nearest neighbor matching, a strategy we adopt in the proof of Theorem \ref{thm:validBlockBootstrap} to leverage proof techniques used by \citet{abadie2006} for cross-sectional nearest neighbor matching. In light of these considerations, we focus primarily on GroupMatch with instance replacement in what follows, although the methods derived appear to perform well empirically for other GroupMatch designs too.
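The nearest-neighbor characterization above can be summarized in a few lines of code. The sketch below is illustrative only (it is not the network-flow implementation in the \texttt{groupmatch} package, and the function and argument names are ours): for each treated unit it scans control instances from closest to farthest and keeps the first $C$ that come from distinct trajectories, so each treated unit can be processed independently and in parallel.
\begin{verbatim}
import numpy as np

def match_one_treated(x_treated, control_X, control_traj, C):
    """C nearest control instances, at most one per control trajectory.

    x_treated    : covariate-history vector of the treated instance
    control_X    : (m, d) array of covariate-history vectors for control instances
    control_traj : length-m array of trajectory labels for the control instances
    """
    dist = np.linalg.norm(control_X - x_treated, axis=1)
    chosen, used_traj = [], set()
    for idx in np.argsort(dist):                 # closest to farthest
        if control_traj[idx] not in used_traj:   # one instance per trajectory
            chosen.append(idx)
            used_traj.add(control_traj[idx])
            if len(chosen) == C:
                break
    return chosen
\end{verbatim}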
\section{Block Bootstrap Inference}
\label{sec:weightedBootstrap}
\subsection{Inference methods for matched designs}
Broadly speaking, there are two schools of thought in conducting inference for matched designs. One approach, spearheaded by \citet{abadie2006, abadie2008, abadie2011, abadie2012}, relies on viewing the raw treated and control data as samples from an infinite population and on demonstrating that estimators based on matched designs (which in this framework are considered to be random variables, as functions of random data) are asymptotically normal. Inferences are based on the asymptotic distributions of matched estimators.
A second approach, described in detail in \citet{rosenbaum2002covariance, rosenbaum2002observational} and \citet{fogarty2020studentized}, adopts the perspective of randomization inference in controlled experiments; one conditions on the structure of the match and the potential outcomes and considers the null distribution of a test statistic over all possible values of the treatment vector by permuting values of treatment within matched sets. This framework offers strong finite sample guarantees without assumptions on outcome variables for testing sharp null hypotheses, and asymptotic guarantees for testing weak null hypotheses. In this case the asymptotics are over a sequence of successively larger finite populations \citep{li2017general}. Well-studied methods of sensitivity analysis are also available. One way to understand the link between the first and second approaches to inference is to view the latter as a conditional version of the first; indeed \citet[\S 3]{rosenbaum2002observational} derives distributions of treatment vectors used to construct randomization tests by assuming a sampling model as in the first approach and conditioning on the matched design produced from the random data.
As described in \citet{pimentel2020}, while standard methods of inference may be applied to GroupMatch without replacement, in which control individuals contribute at most one unit to any part of the match, none have been adequately developed for GroupMatch with trajectory replacement, in which distinct matched sets may contain different versions of the same control individual. For randomization inference, the barrier appears to be quite fundamental, because permutations of treatment within one matched set can no longer be considered independently for different matched sets. In GroupMatch with trajectory replacement, a treated unit receives treatment at one time and appears in the match only once; if treatment is permuted among members of a matched set so that a former control now attains treatment status, what is to be done about other versions of this control unit that are present in distinct matched sets? We note that similar issues arise when contemplating randomization inference for cross-sectional matching designs with replacement, and we are aware of no solutions for randomization inference even in this relatively less complex case.
The problems with applying sampling-based inference to GroupMatch designs with trajectory replacement are quite distinct. Here the primary issue relates to the unknown correlation structure for repeated measures from a single control individual. The literature on matching with replacement provides estimators for pairs that are fully independent \citep{abadie2012} and for cases in which a single observation appears identically in multiple pairs \citep{abadie2006}, but not for the intermediate case of GroupMatch with trajectory replacement where distinct but correlated observations appear in distinct matched sets.
In what follows we develop a sampling-based inference method appropriate for GroupMatch with trajectory replacement by generalizing a recent proposal of \citet{otsu2017} for valid sampling-based inference of cross-sectional matched studies using the bootstrap. Although the bootstrap often works well for matched designs without replacement \citep{austin2014use}, na\"{i}ve applications of the bootstrap in matched designs with replacement have been shown to produce incorrect inferences as a consequence of the failure of certain regularity conditions \citep{abadie2008}. Intuitively, if matching is performed after bootstrapping the original data, multiple copies of a treated unit will necessarily all match to the same control unit, creating a clumping effect not present in the original data. However, \citet{otsu2017} arrived at an asymptotically valid block bootstrap inference method for matching by bootstrapping weighted and bias-corrected functions of the original observations \emph{after} matching rather than repeatedly matching from scratch in new bootstrap samples.
While \citet{otsu2017} focus their analysis on cross-sectional studies, it has been conjectured elsewhere that a similar bootstrap approach, applied to entire trajectories of repeated measures in a form of the block bootstrap, provides valid inference for certain matched longitudinal designs \citep{imai2020}. In this section, we formalize this idea and demonstrate its applicability specifically to GroupMatch designs.
\subsection{Block Bootstrap}
In order to conduct inference under GroupMatch with trajectory or instance replacement we propose a weighted block bootstrap approach. We rearrange the GroupMatch ATT estimator from Section~\ref{sec:Background} as follows, letting $K_M(i, t)$ be the number of times the instance at trajectory $i$ and time $t$ is used as a match.
\begin{align*}
\hat{\Delta}_{adj} & = \frac{1}{N_1} \sum_{i = 1}^N D_i [(Y_{i, T_i} - \hat{\mu}_0(\mathbf{X}_{i, T_i})) - \frac{1}{C} \sum_{j = 1}^N \sum_{t' = 1}^T M_{iT_i, jt'} (Y_{j, t'} - \hat{\mu}_0(\mathbf{X}_{j, t'}))] \\
& = \frac{1}{N_1} \sum_{i = 1}^N \left[ D_i (Y_{i, T_i} - \hat{\mu}_0(\mathbf{X}_{i, T_i})) - (1 - D_i) \sum_{t = 1}^T \frac{K_M(i, t)}{C} (Y_{i, t} - \hat{\mu}_0(\mathbf{X}_{i, t})) \right] \\
& = \frac{1}{N_1} \sum_{i = 1}^N \hat{\Delta}_i
\end{align*}
Because different instances of the same control unit are correlated, we resample information at the \emph{trajectory} level rather than the instance level. Specifically we resample the $\widehat{\Delta}_i$; note that since these quantities are functions of the $K_M(i,t)$ weights in the original match, we do not repeat the matching process within bootstrap samples. In particular, we proceed as follows:
\begin{enumerate}
\item Fit an outcome regression $\widehat{\mu}_0(\cdot)$ for outcomes based on covariates in the previous $L$ timepoints using only control trajectories.
\item Match treated instances to control instances using a GroupMatch design with instance replacement. Calculate matching weights $K_M(i,t)$ equal to the number of times the instance at time $t$ in trajectory $i$ appears in the matched design.
\item Calculate the bias-corrected ATT estimator $\widehat{\Delta}_{adj}$.
\item Repeat $B$ times:
\begin{enumerate}
\item Randomly sample $N$ elements $\widehat{\Delta}^*_i$ with replacement from $\{\widehat{\Delta}_1, \ldots, \widehat{\Delta}_N\}$.
\item Calculate the bootstrap bias-corrected ATT estimator $\widehat{\Delta}_{adj}^*$ for this sample of trajectories as follows:
\[
\widehat{\Delta}^*_{adj} = \frac{1}{N_1} \sum_{i = 1}^N \widehat{\Delta}^*_i
\]
\end{enumerate}
\item Construct a $(1 - \alpha)$ confidence interval based on the $\alpha / 2$ and $1 - \alpha / 2$ percentiles of the $\widehat{\Delta}_{adj}^*$ values calculated from the bootstrap samples.
\end{enumerate}
This method is essentially a block-bootstrap procedure, very similar to the method proposed in \citet{imai2020}.
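A minimal sketch of steps 4--5, assuming the per-trajectory contributions $\widehat{\Delta}_i$ have already been computed (variable names are our own; the matching and bias-correction steps are not repeated inside the bootstrap loop):
\begin{verbatim}
import numpy as np

def block_bootstrap_ci(delta_i, N1, B=2000, alpha=0.05, seed=0):
    """Percentile confidence interval from the trajectory-level bootstrap.

    delta_i : length-N array of contributions Delta_i (a control trajectory
              that is never used as a match contributes zero)
    N1      : number of treated units
    """
    rng = np.random.default_rng(seed)
    N = len(delta_i)
    boot = np.empty(B)
    for b in range(B):
        resampled = rng.choice(delta_i, size=N, replace=True)  # whole trajectories
        boot[b] = resampled.sum() / N1                         # bootstrap Delta*_adj
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])
\end{verbatim}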
Our main result below shows the asymptotic validity of this approach.
First we outline several assumptions needed to prove this result, in addition to Assumptions 2-5 in Section \ref{subsec:identification}. We summarize these assumptions verbally here, deferring formal mathematical statements to the appendix. First, we require the covariates $X_i$ to be continuous with compact and convex support and a density both bounded and bounded away from zero. Secondly, we require that the conditional mean functions
are smooth in $\mathbf{X}$, with bounded fourth moments. In addition, we require that conditional variances of the treated potential outcomes and conditional variances of nontrivial linear combinations of control potential outcomes from the same trajectory are smooth and bounded away from zero. We also require that conditional fourth moments of potential outcomes under treatment and linear combinations of potential outcomes under control are uniformly bounded in the support of the covariates. Finally, we make additional assumptions related to the conditional outcome mean estimator $\widehat{\mu}_0(\cdot)$, specifically that the $kL$th derivatives of the true conditional mean functions $\mu_1^t(\cdot)$ and $\mu_0(\cdot)$ exist and have finite suprema, and that $\widehat{\mu}_0(\cdot)$ converges to $\mu_0(\cdot)$ at a sufficiently fast rate.
To state the theorem, we also define
$$
\sqrt{N_1} U^* = \frac{1}{N_1}\sum_{i=1}^N\left(\widehat{\Delta}_i^* - \widehat{\Delta}_{adj} \right)
$$
\begin{theorem}
\label{thm:validBlockBootstrap}
Under Assumptions M, W, and R presented in the Appendix,
$$
\sup_r \left| \Pr \{ \sqrt{N_1} U^* \leq r \mid (\mathbf{Y}, \mathbf{D}, \mathbf{X}) \} - \Pr\{ \sqrt{N_1}(\hat{\Delta}_{adj} - \Delta) \leq r\} \right| \xrightarrow{p} 0
$$
as $N \rightarrow \infty$ with the number of matched controls per treated unit, $C$, held fixed.
\end{theorem}
\begin{remark}
While we focus on the nonparametric bootstrap, the result holds for a wide variety of other bootstrap approaches including the wild bootstrap and the Bayesian bootstrap. For required conditions on the bootstrap algorithm see the proof in the Appendix.
\end{remark}
We note that Assumptions M and R are modeled closely on those of \citet{abadie2006} and later \citet{otsu2017}, and that the proof technique we adopt is very similar to the one used for the main result in \citet{otsu2017}. Briefly,
$U^*$ is decomposed into three terms which correspond to deviations of the potential outcome variables around their conditional means, approximation errors for $\widehat{\mu}_0(\mathbf{X})$ terms as estimates of $\mu_0(\mathbf{X})$ terms, and deviations of conditional average treatment effects $\mu_1^t(\mathbf{X}) - \mu_0(\mathbf{X})$ around the population ATT $\Delta$. Regularity conditions from Assumption M ensure that the conditional average treatment effects converge quickly to the population ATT, and Assumption R, combined with Assumption-M-reliant bounds on the largest nearest-neighbor discrepancies in $\mathbf{X}$ vectors due originally to \citet{abadie2006} and adapted to our GroupMatch with instance replacement design, show that the deviation between $\widehat{\mu}_0(\cdot)$ and $\mu_0(\cdot)$ disappears at a fast rate. Finally, a central limit theorem applies to the deviations of the potential outcomes, producing the desired results. For full details, see the Appendix.
\subsection{Difference-in-Differences Estimator}
\label{sec:did}
Our block bootstrap inference approach is similar to that of \citet{imai2020}; however, the ATT estimator they consider is a difference-in-differences estimator.
We can easily extend our setup and results to apply to the difference-in-differences estimator.
\begin{align*}
\hat{\Delta}_{DiD} & = \frac{1}{N_1} \sum_{i = 1}^N D_i [((Y_{i, T_i} - \hat{\mu}_0(\mathbf{X}_{i, T_i})) - (Y_{i, T_i - 1} - \hat{\mu}_0(\mathbf{X}_{i, T_i - 1}))) - \\
& \frac{1}{C} \sum_{j = 1}^N \sum_{t' = 1}^T M_{iT_i, jt'} ((Y_{j, t'} - \hat{\mu}_0(\mathbf{X}_{j, t'})) - (Y_{j, t' - 1} - \hat{\mu}_0(\mathbf{X}_{j, t' - 1})) )] \\
& = \frac{1}{N_1} \sum_{i = 1}^N \left[ D_i ((Y_{i, T_i} - \hat{\mu}_0(\mathbf{X}_{i, T_i})) - (Y_{i, T_i - 1} - \hat{\mu}_0(\mathbf{X}_{i, T_i - 1}))) \right. - \\
& \left. (1 - D_i) \sum_{t = 1}^T \frac{K_M(i, t)}{C} ((Y_{i, t} - \hat{\mu}_0(\mathbf{X}_{i, t})) - (Y_{i, t - 1} - \hat{\mu}_0(\mathbf{X}_{i, t - 1}))) \right] \\
& = \frac{1}{N_1} \sum_{i = 1}^N \hat{\Delta}^{DiD}_{i}
\end{align*}
Note that this estimator requires $L$ lags to be measured at time $T_i-1$, which requires a burn-in period of length $L$ rather than length $L-1$. \citet{imai2020} assume exact matching, which eliminates the need for a bias correction term, $\hat{\mu}_0(\mathbf{X}_{i, t})$, and simplifies the proof of Theorem~\ref{thm:validBlockBootstrap}.
As described in the previous section, valid inference is possible if we resample the $\hat{\Delta}^{DiD}_{i}$. Our proof of Theorem~\ref{thm:validBlockBootstrap} presented in the appendix requires mild modification to work for this difference-in-differences estimator. In particular, the variance estimators include additional covariance terms. For more details, see Appendix~\ref{app:Thm}.
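For concreteness, a sketch of the per-trajectory contribution $\hat{\Delta}^{DiD}_{i}$ (illustrative code with our own names; the residuals $Y_{i,t} - \hat{\mu}_0(\mathbf{X}_{i,t})$ are assumed precomputed, and the bootstrap function sketched earlier can then be applied to these contributions unchanged):
\begin{verbatim}
def did_contribution(resid, i, D, T_treat, K_M, C):
    """Per-trajectory contribution to the difference-in-differences estimator.

    resid[i, t] : Y[i, t] minus the fitted control regression at instance (i, t)
    K_M[i, t]   : number of times instance (i, t) is used as a match
    """
    if D[i]:                             # treated: residual change around T_i
        t = int(T_treat[i])
        return resid[i, t] - resid[i, t - 1]
    T = resid.shape[1]                   # control: weighted residual changes
    return -sum(K_M[i, t] / C * (resid[i, t] - resid[i, t - 1])
                for t in range(1, T))
\end{verbatim}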
\section{Simulations}
\label{sec:Simulations}
We now explore the performance of weighted block bootstrap inference via simulation. In particular, we investigate the coverage and length of its confidence intervals relative to those obtained from standard parametric inference for the weighted linear model (WLS), with and without cluster-robust standard error adjustment for controls from the same trajectory. We compare to WLS with and without cluster-robust adjustment because, to the best of our understanding, this is what is used in practice.
\subsection{Data Generation}
We generate eight covariates, four of them constant across time for each individual $i$ (i.e., they take on the same value at every timepoint):
\begin{align*}
X_{1, i}, X_{3, i}, X_{4, i} \sim N(0, 1) \\
X_{2, i} \sim N(0, 1) \text{ for control units} \\
X_{2, i} \sim N(0.25, 1) \text{ for treated units}
\end{align*}
Additionally, for treated units:
\begin{align*}
X_5, X_7, X_8 \sim N(0, 1) \\
X_6 \sim N(0.5, 1)
\end{align*}
Four of the covariates are time-varying for control units.
In particular, for each control unit, three instances are generated from a random walk process to correlate their values across time. Covariate $j = 5, 6, 7, 8$ values for an instance $t$ in a trajectory $i$ are generated in the following way:
\begin{align*}
X_{j, i1} & \sim N(0, 1) \\
X_{j, i2} & = X_{j, i1} + \epsilon_{i1} \\
X_{j, i3} & = X_{j, i2} + \epsilon_{i2} \\
\epsilon_{i1}, \epsilon_{i2} & \sim N(0, 0.5^2)
\end{align*}
Our outcome is defined as follows, fixing $a_L = \log(1.25)$, $a_M = \log(2)$, $a_H = \log(4)$, and $a_{VH} = \log(10)$ and drawing the $\epsilon_{it}$ terms independently from a standard normal distribution.
\begin{align*}
Y_{it} & = a_L\sum_{j=1}^4X_{j, it} + a_{VH}X_{5, it} + a_M(X_{6, it}+X_{8, it}) + a_H(X_{7, it}) + \Delta D_i + \epsilon_{it}
\end{align*}
The outcome for a unit is thus correlated across time as it is generated from some time-varying covariates.
Each simulation consists of 400 treated and 600 control individuals.
We consider 1:2 matching.
The true treatment effect, $\Delta$, is set at 0.25.
We consider three different ways of generating the continuous outcome variable.
First, we generate the outcome based on a linear model of all the covariates with independent error terms (as described above).
Second, we add correlation to the error terms, so that the error terms $\epsilon_{it}$ for a given trajectory $i$ are generated from a multivariate normal distribution with mean 0 and a covariance matrix with off-diagonal entries of 0.8.
Finally, in addition to the correlated error terms, we square the $X_{2, it}$ term of the model, so it is no longer linear.
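The sketch below generates data in the spirit of the first (linear, independent-error) setting; it is not the authors' simulation code, and details the text leaves open (for example, whether the random-walk increments are shared across the four time-varying covariates) are resolved here by drawing them independently.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n_treat, n_ctrl, delta = 400, 600, 0.25
aL, aM, aH, aVH = np.log(1.25), np.log(2), np.log(4), np.log(10)

def outcome(X, treated):
    lin = (aL * X[..., 0:4].sum(-1) + aVH * X[..., 4]
           + aM * (X[..., 5] + X[..., 7]) + aH * X[..., 6])
    return lin + delta * treated + rng.normal(size=lin.shape)

# Treated units: a single instance each, with X2 and X6 shifted upward.
Xt = rng.normal(size=(n_treat, 8))
Xt[:, 1] += 0.25
Xt[:, 5] += 0.5
Yt = outcome(Xt, treated=1)

# Control units: three instances each; X5-X8 follow a random walk over instances.
Xc = np.repeat(rng.normal(size=(n_ctrl, 1, 8)), 3, axis=1)
steps = rng.normal(scale=0.5, size=(n_ctrl, 2, 4))
Xc[:, 1:, 4:] += np.cumsum(steps, axis=1)
Yc = outcome(Xc, treated=0)              # shape (n_ctrl, 3)
\end{verbatim}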
We compare the weighted bias-corrected bootstrap approach outlined in Section \ref{sec:weightedBootstrap} to the confidence intervals obtained from weighted least squares (WLS) regression and WLS with clustered standard errors.
We choose to compare to WLS because this is commonly recommended in matching literature \citep{ho2007, stuart2011}.
However, \citet{abadie2021} pointed out that standard errors from regression may be incorrect due to dependencies among outcomes of matched units, and identified matching with replacement as a setting in which these dependencies are particularly difficult to correct for.
Our simulation results suggest that these difficulties carry over into the case of repeated measures.
Additionally, in running our simulations we noticed that standard functions in R used to compute WLS with matching weights, such as \texttt{lm} and \texttt{Zelig} (which calls \texttt{lm}), compute biased standard error estimates in most settings.
See Appendix~\ref{app:WLS} for details.
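For readers who prefer Python, a hedged sketch of the comparison fits using \texttt{statsmodels} rather than \texttt{lm} (the design matrix, weight, and cluster variables are placeholders, and this is not the code used for the reported simulations). A common convention is to give treated instances weight one and control instances weight $K_M(i,t)/C$, with the confidence interval for the treatment coefficient taken from each fit.
\begin{verbatim}
import statsmodels.api as sm

def wls_fits(y, X, weights, clusters):
    """WLS with and without trajectory-level cluster-robust standard errors."""
    design = sm.add_constant(X)          # X should include the treatment indicator
    plain = sm.WLS(y, design, weights=weights).fit()
    clustered = sm.WLS(y, design, weights=weights).fit(
        cov_type="cluster", cov_kwds={"groups": clusters})
    return plain, clustered
\end{verbatim}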
\subsection{Results}
Tables~\ref{tab:simCov} and \ref{tab:simCIlen} show the coverage and average 95\% confidence interval (CI) length, respectively, of WLS, WLS cluster, and bootstrap bias-corrected methods for each of our three simulation settings under 10,000 simulations.
As misspecification increases, the bootstrap method is substantially more robust (although under substantial misspecification the bias-corrected method also fails to achieve nominal coverage). In settings where strong scientific knowledge about the exact form of the outcome model is absent, the bootstrap approach appears more reliable than its chief competitors.
\begin{table}[!ht]
\centering
\begin{tabular}{lrrr}
\hline
Coverage & WLS & WLS Cluster & Bootstrap Bias Corrected \\ \hline
Linear DGP & 93.2\% & 94.8\% & 94.8\% \\ \hline
Linear DGP, Correlated Errors & 89.4\% & 91.5\% & 94.5\% \\ \hline
Nonlinear DGP, Correlated Errors & 83.4\% & 86.0\% & 89.8\% \\ \hline
\end{tabular}
\caption{Coverage of the WLS, WLS cluster and bootstrap bias corrected methods of inference for our three simulation set-ups.}
\label{tab:simCov}
\end{table}
\begin{table}[!ht]
\centering
\begin{tabular}{lrrr}
\hline
Average CI Length & WLS & WLS Cluster & Bootstrap Bias Corrected \\ \hline
Linear DGP & 0.25 & 0.27 & 0.27 \\ \hline
Linear DGP, Correlated Errors & 0.25 & 0.27 & 0.30 \\ \hline
Nonlinear DGP, Correlated Errors & 0.26 & 0.28 & 0.31 \\ \hline
\end{tabular}
\caption{Average 95\% confidence interval length for the WLS, WLS cluster and bootstrap bias corrected methods of inference for our three simulation set-ups.}
\label{tab:simCIlen}
\end{table}
\section{Testing for Timepoint Agnosticism}
\label{sec:timepointAgnosticism}
The key advantage of GroupMatch \citep{pimentel2020} relative to other matching techniques designed for rolling enrollment settings (e.g., \citet{witman2019}, \citet{imai2020}, and \citet{lu2005}) is its ability to consider and optimize over matches between units at different timepoints, which leads to higher quality matches on lagged covariates.
This advantage comes with a price in additional assumptions, notably the assumption of timepoint agnosticism.
Timepoint agnosticism means that mean potential outcomes under control for any two individual timepoints in the data should be identical; in particular, this rules out time trends of any kind in the outcome model that cannot be explained by covariates in the prior $L$ timepoints.
While in many applications scientific intuition about the data generating process suggests this assumption may be reasonable, it is essential that we consider any information contained in the observed data about whether it holds in a particular case. Accordingly, we present a falsification test for time agnosticism. Falsification tests are tests ``for treatment effects in places where the analyst knows they should not exist,'' \citep{keele2015}
and are useful in a variety of settings in observational studies \citep{rosenbaum1999}. In particular, our test is designed to detect violations of time agnosticism, or ``treatment effects of time,'' when they should be absent; rejections indicate settings in which GroupMatch is not advisable and other rolling enrollment matching techniques that do not rely on timepoint agnosticism are likely more suitable. While failure to reject may not constitute proof positive of time agnosticism's validity, it rules out gross violations, thereby limiting the potential for bias.
To test the timepoint agnosticism assumption we propose \emph{control-control time matching}: matching control units at different timepoints and testing if the average difference in outcomes between the two timepoint groups, conditional on relevant covariates, is significantly different from zero using a permutation test. Specifically, restricting attention to trajectories $i$ from the control group, we select two timepoints $t_0$ and $t_1$ and match each instance
at one timepoint to one at the other timepoint
using the GroupMatch optimization routine, based on similarity of covariate histories over the previous $L$ timepoints.
We note that since this match compares instances at two fixed time points, it is not strictly necessary for GroupMatch to be applied, and any optimal method of matching without replacement should suffice. One practical issue arises: GroupMatch and related matching routines expect one group to be designated ``treated,'' all members of which are generally retained in the match, and the other ``control,'' some members of which will be included, but both matching groups are controls in this case. We label whichever of the two groups has fewer instances as treated; without loss of generality, we will assume there are fewer instances at time $t_1$ and use these instances as the reference group to be retained.
We now define a test statistic for the falsification test, by close analogy to our ATT estimator in Section \ref{subsec:identification}. Let $N_c$ be the total number of control units and let $N_{t_1}$ be the number of control instances at time $t_1$. Let $\hat{\mu}^{t_0}_0$ be a bias correction model fit on our new control group (i.e., control instances at time $t_0$). In addition, let $D'_i = 1$ if unit $i$ is present at time $t_1$. We define the test statistic as follows:
$$
\hat{\Delta}_{cc} = \frac{1}{N_{t_1}} \sum_{i = 1}^{N_{c}} D'_i ( (Y_{i, t = t_1} - \hat{\mu}_0^{t_0}(\mathbf{X}_{i, t = t_1})) - \sum_{j = 1}^{N_c} M_{it_1, jt_0} (Y_{j, t = t_0} - \hat{\mu}_0^{t_0}(\mathbf{X}_{j, t = t_0}) ))
$$
To conduct inference, we use a permutation test to test the following null hypothesis, where $E_0^{t_1}\left\{\cdot\right\}$ indicates expectation over the distribution of the covariates in control instances at time $t_1$.
\[
E_0^{t_1}\left\{\mu_0^{t_0}(\mathbf{X}) \right\} = E_0^{t_1}\left\{\mu_0^{t_1}(\mathbf{X}) \right\}
\]
In words, this null hypothesis says that, accounting for differences in the covariate distribution at times 0 and 1, the difference in the average outcomes of control instances at the two timepoints is zero.
The test amounts to considering the tail probability of the distribution of the following test statistic $\widehat{\Delta}_{perm}$ under many draws of the random vector $R$, where $R = (R_1, ..., R_{N_c})$ and $R_i$ are independent Rademacher random variables:
$$
\widehat{\Delta}_{perm} = \frac{1}{N_{t_1}} \sum_{i = 1}^{N_c} R_i D'_i ( (Y_{i, t = t_1} - \hat{\mu}_0^{t_0}(\mathbf{X}_{i, t = t_1})) - \sum_{j = 1}^{N_c} M_{it_1, jt_0} (Y_{j, t = t_0} - \hat{\mu}_0^{t_0}(\mathbf{X}_{j, t = t_0})))
$$
\noindent This permutation test is identical in implementation to the standard test of a sharp null hypothesis that outcomes are unchanged by group assignment for matched designs without replacement, discussed in detail in \citet[\S 2-\S 3]{rosenbaum2002observational}.
In steps:
\begin{enumerate}
\item Randomly partition the set of control trajectories into two groups. Label control instances from the first group of trajectories at timepoint $t_1$ the new ``treated'' units, and control instances from the second group of trajectories at timepoint $t_0$ the new ``control'' units.
\item Fit a bias correction model on the new control units.
\item Match the new treated units to the new control units and calculate the test statistic.
\item Repeat $B$ times:
\begin{enumerate}
\item For each matched pair randomly and independently switch which unit is labelled the treated and which is labelled the control unit with probability 0.5 (with probability 0.5 do not switch the treated/control labels). This amounts to multiplying the treatment effect by -1 for that pair with probability 0.5.
\item Calculate the test statistic for this randomization.
\end{enumerate}
\item Calculate the proportion of permutations that result in an absolute value of the test statistic greater than or equal to the absolute value of the observed value calculated in step 3. This is the $P$-value.
\item If the $P$-value is smaller than a predefined significance level $\alpha$, reject the null hypothesis of no difference between groups, indicating the presence of systematic variation of outcomes with time given covariates.
\end{enumerate}
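A sketch of the permutation step (steps 4--5 above), assuming the bias-corrected matched-pair differences have already been formed (our own illustrative code):
\begin{verbatim}
import numpy as np

def falsification_test(pair_diffs, B=5000, seed=0):
    """Rademacher permutation test for the control-control time match.

    pair_diffs : one bias-corrected difference per matched pair, i.e. the
                 bracketed terms whose average is Delta_cc
    Returns the two-sided P-value.
    """
    rng = np.random.default_rng(seed)
    observed = pair_diffs.mean()
    signs = rng.choice([-1.0, 1.0], size=(B, len(pair_diffs)))  # flip labels w.p. 1/2
    perm_stats = (signs * pair_diffs).mean(axis=1)
    return np.mean(np.abs(perm_stats) >= abs(observed))
\end{verbatim}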
We do not allow the same unit to appear in both the new control and treated group, because this would lead to dependence across matches.
We employ 1-1 matching because we are testing a weak null hypothesis using a permutation test.
As \citet{wu2020} demonstrate, issues can arise when using a randomization test to test a weak null hypothesis when treatment and control sample sizes are not equal.
However, 1-1 matching allows us to avoid this issue by balancing the treatment and control sample sizes.
We recommend the use of caliper matching to ensure high quality matches, especially in the case where all control units are present at both timepoints.
Note that it is important to permute treatment after matching (indeed, conditional on the matched pairs chosen) in order to preserve the covariate distribution of the treated and control units. If we permute treatment and then perform matching we risk changing the covariate distribution of the treated units under permutation. This could become an issue especially if some covariates are correlated with time, but the outcome is not. Then permuting before matching could cause an effect to appear as a result of destroying the original treated covariate distribution.
We choose to use a permutation test here rather than the bootstrap because the data split and 1:1 matching ratio ensure matches are independent under the original sampling model, making for a tractable permutation distribution. If desired, the bootstrap approach of Section \ref{sec:weightedBootstrap} could be applied instead, and we expect that results would be similar given the fundamental similarity between bootstrap and randomization inference where both are viable \citep{romano1989bootstrap}.
A key consideration for this test is which timepoints to choose as $t_0$ and $t_1$. The choice of timepoint comparison depends largely on what a plausible time trend would be for the problem at hand. For example, if one suspects a linear time trend, it makes sense to look at the first and last timepoints. If the trend is linear, this test should have high power to detect a problem in moderate to large samples.
If one is uncertain about the specific shape of the time trend that is most likely to occur and wants to test for all possible trends, we recommend testing each sequential pair of timepoints (i.e., timepoints 1 and 2, 2 and 3, 3 and 4, and so on) and combining the tests via a nonparametric combination of tests \citep{pesarin}.
\subsection{Simulations}
We illustrate this method via simulation.
We generate a dataset with 4 covariates and 1000 control units, each with 2 instances occurring at times $t_0$ and $t_1$.
Two of the covariates vary with time, and two are uniform across time:
\begin{align*}
X_{1, i}, X_{2, i} & \sim N(0, 1) \\
X_{3, i, t_0}, X_{4, i, t_0} & \sim N(0, 1) \\
X_{3, i, t_1} & = X_{3, i, t_0} + \epsilon_i \\
X_{4, i, t_1} & = X_{4, i, t_0} + \epsilon_i \\
\epsilon_i & \sim N(0, 0.5^2)
\end{align*}
The outcome variable is a linear combination of the four covariates, a time trend controlled by parameter $\gamma$, and an error term ($\epsilon_{it} \sim N(0, 1)$):
$$
Y_{it} = \log(4) (X_{1, it} + X_{4, it}) + \log(10) (X_{3, it} + X_{4, it}) + 1\{ t = t_1 \} \gamma + \epsilon_{it}
$$
We generate data for the setting $\gamma = 0$, which does not include a time-varying component, $1000$ times.
On each dataset, we perform the test for timepoint agnosticism outlined in this section, with 1-1 matching and bias correction.
Our simulated data only contains two timepoints and the sample size is balanced so we choose the first timepoint as $t_0$, our new control group, and the second timepoint as $t_1$, our new treated group.
In 0.049 of the simulations the $P$-value is less than 0.05, indicating that the type I error rate is controlled at the nominal level.
Next, we add a time trend by including an additional term, $\gamma$, at the second timepoint.
For $\gamma = 0.1$ (a time trend of 0.1), the $P$-value is less than 0.05 in 0.327 of the 1000 simulations.
For $\gamma = 0.25$ (a time trend of 0.25), the $P$-value is less than 0.05 in 0.981 of the 1000 simulations.
Table~\ref{tab:simCC} summarizes these results.
Figure~\ref{fig:controlSim} shows two simulated datasets with $\gamma = 0.1$.
The test for timepoint agnosticism detects the trend in one of the two datasets. Overall, the simulations show that the test is not a panacea for issues with time agnosticism, failing to detect small violations more often than not. However, it still adds substantial value to the analysis pipeline, detecting with a very high success rate moderate violations of time agnosticism that are not especially obvious to the eye in visualization plots.
\begin{table}[!ht]
\centering
\begin{tabular}{lrrr}
\hline
Time Trend, $\gamma$ & 0 & 0.1 & 0.25 \\ \hline
Proportion $P$-values $< 0.05$ & 0.049 & 0.327 & 0.981 \\ \hline
\end{tabular}
\caption{Summary of timepoint agnosticism simulation results. Proportion of 1000 simulations where the $P$-value from the test is less than 0.05, for time trends, $\gamma = 0, 0.1, 0.25$.}
\label{tab:simCC}
\end{table}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.4\textwidth]{control10detect.png}
\includegraphics[width=0.4\textwidth]{control10nodetect.png}
\caption{Simulated datasets with time trend $\gamma = 0.1$. The figures show outcome data for a dataset where the timepoint agnosticism test detects the time trend ($P = 0.04$) and one where it does not detect the time trend ($P = 0.37$), respectively.}
\label{fig:controlSim}
\end{figure}
\section{Application: Baseball Injuries}
\label{sec:Baseball}
A large body of literature evaluates major league baseball players' performance \citep{baumer2008}, and a different body of literature analyzes injury trends and impact of injuries in athletics \citep{conte2016}.
The intersection of these two research areas is relatively small.
In particular, there have been relatively few studies evaluating the impact of injury on position players' hitting performance.
Studies that have evaluated the impact of injury on batters have generally been focused on specific injury types, and have not found strong evidence that injury is associated with a decline in performance \citep{begley2018, frangiamore2018, wasserman2015}.
We study the impact of short-term injury on hitting performance in observational data from major league baseball during 2013-2017,
by using GroupMatch to match baseball players injured at different times to similar players at other points in the season who were not injured.
We evaluate whether players see a decline in offensive performance immediately after their return from injury.
In contrast to other studies, we pool across injury types to see if there is a more general effect of short term injury on hitter performance.
\subsection{Data and Methodology}
We use publicly available MLB player data from Retrosheet.org and injury data scraped from ProSportsTransactions.com for the years 2013-2017.
Our dataset is composed of player height, weight and age, quantities that remain constant over a single season of play, as well as on-base percentage (OBP) and plate appearances (PAs) at different points in the season, and dates of short-term injuries, in which the player's team designated him for a 7-10 day stay on the team's official injured list, for each year.
OBP is a common measure of batter performance and is approximately equal to the number of times a player makes it on base divided by their number of plate appearances.\footnote{OBP = (Hits + Walks + Hit By Pitch) / (At Bats + Walks + Hit by Pitch + Sacrifice Flies)}
Each plate appearance is a batter's complete turn batting.
For each non-injured player, we generate three pseudo-injury dates evenly spaced over their number of PAs.
In each season, we match injured players to four non-injured players.
Matches were formed using GroupMatch with instance replacement, matching on age, weight, height, number of times previously injured, recent performance as measured by OBP over the previous 100 PAs, and performance over the entire previous year as measured by end-of-year OBP after James-Stein shrinkage.\footnote{See \url{https://chris-said.io/2017/05/03/empirical-bayes-for-multiple-sample-sizes/} for discussion on how to apply James-Stein to data with multiple sample sizes.}
We choose to shrink the OBP using James-Stein instead of using raw OBP to reduce the variability for players that had a relatively small number of PAs the previous season \citep{efron1975}.
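As background on the shrinkage step (this is the textbook equal-variance form of the estimator, not necessarily the exact multiple-sample-size variant described in the footnoted reference), the James-Stein rule shrinks each player's observed end-of-year OBP $y_i$ toward the league mean $\bar{y}$:
$$
\hat{\theta}_i^{JS} = \bar{y} + \left(1 - \frac{(k-3)\,\hat{\sigma}^2}{\sum_{j=1}^{k}(y_j - \bar{y})^2}\right)(y_i - \bar{y}),
$$
where $k$ is the number of players and $\hat{\sigma}^2$ estimates the sampling variance of an individual OBP. In the unequal-sample-size extension, players with fewer plate appearances have larger sampling variance and are therefore shrunk more aggressively toward the mean.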
Table~\ref{tab:prebalTab} shows the balance for each of the covariates prior to matching and Table~\ref{tab:balTab} shows the balance after matching. For each covariate, matching shrinks the standardized difference between the treated and control means.
\begin{table}[!ht]
\centering
\begin{tabular}{llll}
\hline
& Treated Mean & Control Mean & Standardized Difference \\ \hline
Height & 73.7 & 73.1 & 0.26 \\
Weight & 213 & 209 & 0.24 \\
2016 OBP (JS Shrunk) & .324 & .328 & -0.09 \\
Lag OBP & .336 & .341 & -0.07 \\
Birth Year & 1988 & 1988 & -0.08 \\
Number Previous Injuries & 2.73 & 1.91 & 0.30 \\ \hline
\end{tabular}
\caption{Balance table for the MLB injury analysis before matching.}
\label{tab:prebalTab}
\end{table}
\begin{table}[!ht]
\centering
\begin{tabular}{llll}
\hline
& Treated Mean & Control Mean & Standardized Difference \\ \hline
Height & 73.7 & 73.4 & 0.14 \\
Weight & 213 & 212 & 0.07 \\
2016 OBP (JS Shrunk) & .324 & .323 & 0.02 \\
Lag OBP & .336 & .338 & -0.02 \\
Birth Year & 1988 & 1988 & -0.06 \\
Number Previous Injuries & 2.73 & 2.16 & 0.21 \\ \hline
\end{tabular}
\caption{Balance table for MLB injury analysis after matching each injured player to four non-injured players.}
\label{tab:balTab}
\end{table}
We compare the results for bias-corrected block bootstrap inference, OLS and OLS with clustered standard errors.
\subsection{Results}
The ATT estimates are positive (0.010), but the 95\% confidence intervals cover zero for all methods, indicating that there is not strong evidence that short-term injury impacts batter performance.
We present the results for 2017 in Figure~\ref{fig:injPlot2017}.
Results from 2013 - 2016 were substantively the same.
Pooling the matched data across years and then applying the block bootstrap method also results in the same substantive conclusions.
The data pass the timepoint agnosticism test, comparing the first and last pseudo-injury dates.
\begin{figure}[!ht]
\centering
\includegraphics[scale = 0.2]{injPlot_block.png}
\caption{Plot comparing block bootstrap, OLS and Cluster OLS inference methods for the ATT in our 2017 baseball injury example.}
\label{fig:injPlot2017}
\end{figure}
\section{Discussion}
\label{sec:Conclusion}
The introduction of matching with instance replacement, a method for block bootstrap inference, and a test for timepoint agnosticism provide substantial new capability for the existing GroupMatch framework.
We now discuss a number of limitations and opportunities for improvement.
Our proof of the block bootstrap approach assumes the use of GroupMatch with instance replacement. The large-sample properties of matched-pair discrepancies are substantially easier to analyze mathematically in this setting than GroupMatch with trajectory replacement or GroupMatch without replacement, designs in which different treated units may compete for the same control units, and the technical argument must be altered to account for this complexity. However, \citet{abadie2012} successfully characterized similar large-sample properties in cross-sectional settings for matching without replacement. While beyond the scope of our work here, we believe it is likely that this approach could provide an avenue for extending Theorem~\ref{thm:validBlockBootstrap} to cover the other two GroupMatch designs. Empirically, we have found that the block bootstrap performs well when matches are calculated using any of the three GroupMatch designs.
Setting aside the technical barriers associated with extending the theory to GroupMatch without replacement, our new approach provides a competitor method to the existing randomization inference framework described by \citet{pimentel2020} available for GroupMatch without replacement. The randomization inference framework offers the advantage of finite sample validity and freedom from making assumptions about the sampling distribution of the response variables; on the other hand, the block bootstrap method avoids the need to assume a sharp null hypothesis. In general these same considerations arise in choosing between sampling-based inference and randomization-based inference for a cross-sectional matched study, although such choices have received surprisingly little direct and practical attention in the literature thus far.
The falsification test proposed in Section~\ref{sec:timepointAgnosticism} is subject to several common criticisms levied at falsification tests, particularly their ineffectiveness in settings with low power. One possible approach is to reconfigure the test to assume violation of time agnosticism as a null hypothesis and seek evidence in the data to reject it; \citet{hartman2018equivalence} recommend a similar change for falsification tests used to assess covariate balance. However, even in the absence of such a change the test may prove useful in concert with a sensitivity analysis. Sensitivity analysis, already widely studied in causal inference as a way to assess the role of ignorability assumptions, places a nonzero bound on the degree of violation of an assumption and reinterprets the study's results under this bound, often repeating the process for larger and larger values of the bound to gain insight. Such a procedure, which focuses primarily on assessing the impact of small or bounded violations of an assumption, naturally complements our falsification test, which (as shown in our simulations) can successfully rule out large violations but is more equivocal about minor violations.
Unfortunately no sensitivity analysis appropriate for block bootstrap inference has yet been developed, either for time agnosticism or other strong assumptions such as ignorability. The many existing methods for sensitivity analysis (developed primarily with ignorability assumptions in mind) are unsatisfying in our framework for a variety of different reasons: some rely on randomization inference \citep{rosenbaum2002observational}, others focus on weighting methods rather than matching \citep{zhao2019sensitivity, soriano2021interpretable}, and others are limited to specific outcome measures \citep{ding2016sensitivity} or specific test statistics \citep{cinelli2020making}. We view the development of compelling sensitivity analysis approaches to be an especially important methodological objective for matching under rolling enrollment.
\bibliographystyle{asa}
\section{Introduction}
The "Statistical Curse of the Second Half Rank" problem \cite{0} stems from real life considerations leading to rather complex combinatorics. One is primarly concerned with
rank expectations in sailing boats regattas, bearing in mind that the issue discussed here is quite general and can apply to rank expectations of students taking exams, or other types of similar endeavors as well.
\begin{figure}[htbp]
\begin{center}
\epsfxsize=8cm
\centerline{\epsfbox{SPI.pdf}}
\end{center}
\caption{Just before the start of a race at the {\it Spi Ouest-France}: a big number of identical boats is going to cross a virtual starting line in a few seconds.}
\end{figure}
Consider as a typical example
the {\it Spi Ouest-France} regatta which takes place each year during 4 days at Easter in La Trinit\'e-sur-Mer, Brittany (France).
It involves a "large" number $n_b$ of identical boats, say $n_b= 90$,
running a "large" number $n_r$ of races, say $n_r= 10 $ (that is to say $2,3$ races per day, weather permitting, see Fig.~1).
In each race each boat gets a rank $ 1\le {\rm rank}\le 90$ with the condition that there are
no ex aequo.
Once the last race is over, to determine the final rank of a boat and thus the winner of the regatta one proceeds as follows:
1) one adds each boat's rank in each race $\rightarrow$ its score $n_t$:
here $n_b=90, n_r=10$ so that $10\le n_t\le 900$
\noindent $n_t=10$ is the lowest possible score $\to$ the boat was always ranked $1^{\rm rst}$
\noindent $n_t=900$ is the highest possible score $\to$ the boat was always ranked $90^{\;\rm th}$
\noindent $n_t=10\times (1+90)/2\simeq 450$ the middle score $\to$ the boat was on average ranked $45^{\;\rm th}$
2) one orders the boats according to their score $\rightarrow$ their final rank
\noindent the boat with the lowest score $\to$ $1^{\rm rst}$ (the winner)
\noindent the boat with next to the lowest score $\to$ $2^{\rm nd}$
\noindent etc...
\begin{figure}[htbp]
\begin{center}
\epsfxsize=16cm
\centerline{\epsfbox{spi2009.pdf}}
\end{center}
\caption{Starting from the left the first column gives the final rank of the boat, the second column its score, the third column its "improved" score, the fourth column its name, the fifth column the name of its skipper, and the next $10$ columns its ranks in each of the $10$ races.}
\end{figure}
What is the "Statistical Curse of the Second Half Rank"?
In the {\it Spi Ouest-France} 2009 results sheet (see Fig.~2 for boats with final rank between $60^{\rm th}$ and $84^{\rm th}$), consider for example the boat with final rank $70^{\rm th}$:
its ranks in the $10$ races are $51$, $67$, $76$, $66$, $55$, $39$, $67$, $59$, $66$, $54$ so that its score is $ n_t=600$. The crew of this boat might naively expect that since
its mean rank is ${600/ 10}=60$ and so
it has been on average ranked $60^{\rm th}$, its final rank should be around $60^{\rm th}$.
No way, the boat ends\footnote{This pattern is even more pronounced if one notes in Fig.~2 that the rank in the third race $(76)$ is in parentheses: this is because each boat's worst rank is removed from the final counting. It is as if there were
only $9$ races with, for the boat considered here, ranks $51, 67, 66, 55, 39, 67, 59, 66, 54$, "improved" score $n_t=524$ and mean rank ${524/ 9}\simeq 58$. So, on average, the boat is ranked $58^{\rm th}$ even though it ends up being $70^{\rm th}$.} up being $70^{\rm th}$.
This "curse" phenomenon is quite general and one would like to understand its origin and be able to evaluate it.
A qualitative explanation is simple \cite{0}: in a given race
given the rank of the boat considered above,
assume that the ranks of the other boats are random variables with a uniform distribution.
The random rank assumption is reasonable if, bearing in mind that all boats are identical, the crews can also be considered as more or less equally worthy, which is certainly at least partially the case.
Since there are no ex aequo it means that the
ranks of the other boats, in the first race, are a random permutation of $(1, 2, 3,\ldots, 50, 52,\ldots, 90)$,
in the second race, a random permutation of $(1, 2, 3,\ldots, 66, 68,\ldots, 90)$, etc.
Each race is obviously independent from the others, so that
the scores are the sums of $10$ independent random variables.
But $10$ is already a large number in probability calculus so that
the Central Limit Theorem applies. It follows that the scores are random variables with a gaussian probability density centered around the middle score $\simeq 450$.
A gaussian distribution implies a lot of boats with scores packed around the middle score.
Since the score $600$ of the boat considered here is larger than the middle score $450$, this packing implies that its final rank is pushed upward from its mean rank: {\bf this is the statistical "curse"}. On the contrary if the boat's score had been lower than the middle score, its final rank would have been pushed downward from its mean rank : {\bf its crew would have enjoyed a statistical "blessing"}.
Let us
rewrite things more precisely by asking,
given the score $n_t$ of the boat considered among the $n_b$ boats,
what is the probability distribution $P_{n_t}(m)$ for its final rank to be $m\in[1,n_b]$?
A complication arises as soon as $n_r\ge 3$:
$P_{n_t}(m)$ does not only depend on the score $n_t$ but also on the ranks of the boat in each race.
For example for $n_r= 3$, take $n_b=3$ and the score $n_t=6$,
it is easy to check by complete enumeration that
$P_{6=2+2+2}(m)\ne P_{6=1+2+3}(m) $. The distributions are of course similar but slightly differ.
To avoid this complication let us from now on
consider $n_b$ boats whose ranks in each race are given by a random permutation of $(1,2,3,\dots,n_b)$
$\oplus$ an additional "virtual" boat only specified by its score $n_t$ and ask
the question again: given the score $n_t$ of this virtual boat what is the probability distribution $P_{n_t}(m)$ for its final rank to be $m\in[1,n_b+1]$ ?
This problem is almost the same\footnote{In the $2$-race case it is in fact the same problem.} yet a little bit simpler since, by construction, it does not have the complication discussed above.
In a given race $k$ call $n_{i,k}$ the rank of the boat $ i$ with $1\le i\le n_b$ and $1\le k\le n_r$.
There are no ex aequo in a given race: the $n_{i,k}$'s are a random permutation of $(1,2,3,\dots,n_b)$ so that they are correlated random variables with
$${\rm sum\; rule}\quad \quad{\sum_{i=1}^{n_b}n_{i,k}}=1+2+3+\ldots+n_b={n_b(1+n_b)\over 2}$$
$${\rm mean} \quad \quad\langle n_{i,k}\rangle={1+n_b\over 2}$$
$${\rm fluctuations}\quad \quad \langle n_{i,k}n_{j,k}\rangle-\langle n_{i,k}\rangle\langle n_{j,k}\rangle={1+n_b\over 12}(n_b\delta_{i,j}-1)$$
\noindent Now, the score $ n_i$ of boat $i$ is defined as $ n_i\equiv \sum_{k=1}^{n_r}n_{i,k}$, the middle score being $n_r(1+n_b)/ 2 $.
In the large $n_r$ limit the Central Limit Theorem applies -here for correlated random variables-
to yield the scores joint density probability distribution
$$f(n_1,\dots,n_{n_b})=$$
$$
\sqrt{2\pi \lambda n_b } \left(\sqrt{1\over 2\pi\lambda}\right)^{n_b}\delta\left(\sum_{i=1}^{n_b} (n_i-n_r{1+n_b\over 2})\right)\exp\left[
-{1\over 2\lambda}\sum_{i=1}^{n_b}(n_i-n_r{1+n_b\over 2})^2\right]$$
with $\lambda=n_r {n_b(1+n_b)/ 12}$.
Now consider the virtual boat with score $n_{t}$:
$P_{n_t}(m)$ is
the probability for $m-1$ boats among the $n_b$'s to have a score $n_i<n_{t}$ and for the other $n_b-m+1$ boats to have a score $n_i\ge n_{t}$
$$
P_{n_{t}}(m)={n_b\choose m-1}\int_{-\infty}^{n_{t}}dn_1 \dots dn_{m-1}
\int_{n_{t}}^{\infty}dn_{m} \dots dn_{n_b}f(n_1,\dots ,n_{n_b})$$
\noindent Let us take the large number of boats limit:
a saddle point approximation finally \cite{0} gives
$\langle m\rangle$ as the cumulative probability distribution of a normal
variable
$$
{\langle m\rangle}= \frac{n_b}{\sqrt{2\pi\lambda}} \int_{-\infty}^{\bar{n}_{t}} \exp\left[ -\frac{n^2}{2\lambda}
\right] dn
$$
where
$$\bar{n}_{t}= n_{t}-n_r\frac{(1+n_b)}{2} $$
and $ n_r\le n_t\le n_r n_b\rightarrow - n_r{n_b\over 2}\le\bar{n}_{t}\le n_r{n_b\over 2}$. In Fig.~3 a plot of ${\langle m\rangle/n_b}$ is displayed in the case
$n_r=30$, $n_b=200$. The curse and blessing effects are clearly visible in the sharp increase around the middle score, whereas a naive expectation would suggest a linear increase.
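Curves of this type are easy to reproduce numerically. The short sketch below (a hypothetical helper, not the code behind the numerics shown here) draws $n_r$ independent random permutations for the $n_b$ boats, adds the virtual boat with a prescribed score $n_t$, and estimates its mean final rank:
\begin{verbatim}
# Monte Carlo estimate of the mean final rank <m> of a virtual boat with
# score n_t; the other n_b boats get independent random permutations of
# (1, ..., n_b) in each of the n_r races.
import random

def mean_final_rank(n_t, n_b=200, n_r=30, n_trials=200, seed=0):
    rng = random.Random(seed)
    ranks = []
    for _ in range(n_trials):
        scores = [0] * n_b
        for _ in range(n_r):
            perm = list(range(1, n_b + 1))
            rng.shuffle(perm)
            for i, r in enumerate(perm):
                scores[i] += r
        # final rank of the virtual boat = 1 + number of strictly better boats
        ranks.append(1 + sum(1 for s in scores if s < n_t))
    return sum(ranks) / n_trials
\end{verbatim}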
\begin{figure}[htbp]
\begin{center}
\epsfxsize=8cm
\centerline{\epsfbox{figure1.pdf}}
\end{center}
\caption{$\langle r\rangle = {\displaystyle \langle m\rangle\over\displaystyle n_b}$, $n_r=30$,
$n_b=200$,
middle score $3000$,
dots = numerics}
\end{figure}
The variance can be obtained along similar lines, with a manifest damping due to the correlation effects.
\iffalse
The variance is as well given by
$$ {(\Delta m)^2\over n_b }= \frac{1}{\sqrt{2\pi\lambda}} \int_{-\infty}^{\bar{n}_{t}} \exp\left[ -\frac{n^2}{2\lambda}
\right] dn\frac{1}{\sqrt{2\pi\lambda}} \int_{-\infty}^{-\bar{n}_{t}} \exp\left[ -\frac{n^2}{2\lambda}
\right] dn -{1\over {2\pi}}\exp\left[-{\bar{n}_{t}^2\over \lambda}\right]$$
\noindent where a damping due to the correlations is clearly visible.
\begin{figure}[htbp]
\begin{center}
\epsfxsize=8cm
\centerline{\epsfbox{standardbis.pdf}}
\caption{$n_b=200$,
$n_r=30$,
middle score $3000$, dots = numerics.}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\epsfxsize=8cm
\centerline{\epsfbox{naivebis.pdf}}
\caption{correlations effects}
\end{center}
\end{figure}
\fi
So far we have dealt with the "curse" which is a large number of races and boats effect.
Let us now turn to the combinatorics of a small number of races $n_r=2,3,\ldots$ for a given number of boats $n_b=1,2,\ldots$.
The simplest situation is the $2$-race case $n_r=2$, which happens to be solvable (it can be viewed as a solvable "$2$-body" problem), with an exact solution for $P_{n_t}(m)$.
To obtain in this simple situation the probability distribution one proceeds as follows (see Fig.~4):
1) one represents the possible ranks configurations of any boat among the $n_b$'s in the two races by points on a $n_b\times n_b$ square lattice
(in the $3$-race case one would have a cubic lattice, etc):
since there are no ex aequo there is exactly $1$ point per line and per column, so there are $n_b!$ such configurations
(in the $n_r$ races case one would have $(n_b!)^{n_r-1}$ such configurations)
2) one enumerates all the configurations with $m-1$ points below the diagonal $n_t$:
this is the number of configurations with final rank $m$ for the virtual boat\footnote{Contrary to the no ex aequo rule in a given race, boats of course have equal final ranks if they have the same score. }.
\begin{figure}[htbp]
\begin{center}
\epsfxsize=8cm
\centerline{\epsfbox{desbois1.pdf}}
\end{center}
\caption{ The $2$-race case: a $m=3$ configuration for $n_b=6$ and $n_t=6$. The dashed line is the diagonal $n_t=6$.}
\end{figure}
The problem has been narrowed down to a combinatorial enumeration which is doable \cite{0}:
one finds for $2\le n_t\le 1+n_b$
$$
P_{n_t}(m)=(1+n_b)\sum_{k=0}^{m-1}(-1)^k(1+n_b-n_t+m-k)^{n_t-1}\frac{(n_b-n_t+m-k)!}{
k! (1+n_b-k)! (m-k-1)! }
$$
\noindent and for $ n_t=n_b+1+i\in[n_b+2, 2n_b+1]$ with $i=1,2,\ldots, n_b$, by symmetry, $P_{n_b+1+i}(m)=P_{n_b+2-i}(n_b +2-m)$.
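The formula is straightforward to evaluate; the sketch below (a hypothetical helper using exact rational arithmetic) computes $P_{n_t}(m)$ and can be checked against a brute-force enumeration over the $n_b!$ configurations for small $n_b$:
\begin{verbatim}
# Exact 2-race distribution P_{n_t}(m), valid for 2 <= n_t <= 1 + n_b
# and m in [1, n_b + 1]; exact rational arithmetic.
from fractions import Fraction
from math import factorial

def P(n_t, m, n_b):
    total = Fraction(0)
    for k in range(m):
        base = 1 + n_b - n_t + m - k
        term = Fraction((-1) ** k * base ** (n_t - 1)
                        * factorial(n_b - n_t + m - k),
                        factorial(k) * factorial(1 + n_b - k)
                        * factorial(m - k - 1))
        total += term
    return (1 + n_b) * total

# sanity check: sum(P(n_t, m, n_b) for m in range(1, n_b + 2)) == 1
\end{verbatim}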
\vspace{0.4cm}
So far no particular numbers, be they Eulerian or Stirling, have occurred.
Let us concentrate on the middle score $n_t=2{(1+n_b)/ 2}=1+n_b$ to get
$$ P_{n_t=1+n_b}(m)=(1+n_b)\sum_{k=0}^{m-1} (-1)^k \frac{(m-k)^{n_b}}{ k! (1+n_b-k)! }
$$
\iffalse
\begin{figure}[htbp]
\begin{center}
\epsfxsize=8cm
\centerline{\epsfbox{math1.pdf}}
\end{center}
\end{figure}
\fi
Let us tabulate $ P_{n_t=1+n_b}(m)$, with $m\in[1,n_b+1]$, for $n_b=1,2,...7$:
$$ \{1,0\}$$
$$ {1\over 2!}\{1,1,0\}$$
$$ {1\over 3!}\{1,4,1,0\}$$
$$ {1\over 4!}\{1,11,11,1,0\}$$
$$ {1\over 5!}\{1,26,66,26,1,0\}$$
$$ {1\over 6!}\{1,57,302,302,57,1,0\}$$
$$ {1\over 7!}\{1,120,1191,2416,1191,120,1,0\}$$
The numbers between brackets happen to be known as the ${\rm Eulerian}(n_b,k)$ numbers with $k=m-1\in[0,n_b-1]$ (here one has dropped the trivial $0$'s obtained for $m=n_b+1$ i.e. $k=n_b$). An Eulerian number (see Fig.~5) is
the number of permutations of the numbers 1 to n in which exactly m elements are greater than the previous element (permutations with m "ascents") as illustrated in Fig.~6.
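For small $n$ the Eulerian rows can be generated by brute force directly from this definition; a short sketch (hypothetical helper name) is:
\begin{verbatim}
# Brute-force Eulerian numbers: count permutations of (1, ..., n) by their
# number of ascents (positions i with a(i) < a(i+1)).
from itertools import permutations

def eulerian_row(n):
    row = [0] * n
    for perm in permutations(range(1, n + 1)):
        ascents = sum(1 for i in range(n - 1) if perm[i] < perm[i + 1])
        row[ascents] += 1
    return row

# eulerian_row(4) -> [1, 11, 11, 1]
\end{verbatim}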
\begin{figure}[htbp]
\begin{center}
\epsfxsize=8cm
\centerline{\epsfbox{Eulerian.pdf}}
\end{center}
\caption{The Eulerian numbers.}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\epsfxsize=8cm
\centerline{\epsfbox{Euler.pdf}}
\end{center}
\caption{ Ascents for $n=1,2,3$. For $n=4$, the permutation $(1, 4, 2, 3)$ has $ m=2$ ascents.}
\end{figure}
\iffalse
\begin{figure}[htbp]
\begin{center}
\epsfxsize=8cm
\centerline{\epsfbox{math2.pdf}}
\end{center}
\end{figure}
\fi
Their generating function is
$$ g(x,y)=\frac{e^x-e^{x y}}{e^{x y}-e^x y}$$
with a series expansion $x\simeq 0$
$$ g(x,y)\simeq x+\frac{1}{2!} x^2 (y+1)+\frac{1}{3!} x^3 \left(y^2+4
y+1\right)+\frac{1}{4!} x^4 \left(y^3+11 y^2+11
y+1\right)$$
$$+\frac{1}{5!} x^5 \left(y^4+26 y^3+66 y^2+26
y+1\right)+\frac{1}{6!} x^6 \left(y^5+57 y^4+302 y^3+302
y^2+57 y+1\right)+O\left(x^7\right)$$
It is not difficult to realize that all the information in $P_{n_t}(m)$ is contained in $P_{n_t=1+n_b}(m)$ that is to say in the generating function $g(x,y)$. Rephrased more precisely, $P_{n_t=n_b+1}(m)$ for $n_b=1,2,\ldots$ is generated by $y g(x,y)$ with the $x$ exponent being the number of boats $n_b$ and, for a given $n_b$, the $y$ exponent being the rank $m\in[1,n_b+1]$. Similarly
$P_{n_t=n_b}(m)$ for $n_b=2,3,\ldots$ is generated by
$$(y-1) \int_0^x g(\text{z},y) \, d\text{z}+g(x,y)-x =\frac{e^x-e^{x y}}{e^{x y}-e^x y}+\frac{(y-1) \left(\log
(1-y)-\log \left(e^{x y}-e^x y\right)\right)}{y}-x$$
$$=x^2 y+\frac{1}{3} x^3 y (y+2)+\frac{1}{12} x^4 y \left(y^2+7
y+4\right)+\frac{1}{60} x^5 y \left(y^3+18 y^2+33
y+8\right)$$
$$+\frac{1}{360} x^6 y \left(y^4+41 y^3+171
y^2+131 y+16\right)+O\left(x^7\right) $$
This procedure can be repeated for $n_t=n_b-1, n_b-2, \ldots$ with expressions involving double, triple, $\ldots$ integrals of $g(x,y)$.
Why Eulerian numbers should play a role here
can be explained from scratch by a simple combinatorial argument\footnote{I thank S. Wagner for drawing my attention to this explanation.}: let us use the notations that the boat which came up $i^{\rm th}$ in the first race had rank $a(i)$ in the second race: $(a(1),a(2),\ldots, a(i),\ldots, a(n_b))$ is a random permutation of $(1, 2,\ldots, i,\ldots, n_b)$. In the case of interest where the score of the virtual boat is the middle score $n_b+1$, the boat that came $i^{\rm th}$ in the first race beats the virtual boat if $a(i)\le n_b-i$, that is to say if it is better than $i^{\rm th}$, counting from the bottom, in the second race. Therefore, the counting problem of $P_{n_t=n_b+1}(m)$ is equivalent to counting so-called excedances: an excedance in a permutation $(a(1),a(2),\ldots, a(i),\ldots, a(n_b))$ is an element such that $a(i) > i$. The number of permutations with precisely $m$ excedances is known to be an Eulerian number (thus excedances are what is called an Eulerian statistic, see for example \cite{wag} p.~23). As an illustration look at the case $n_b=3$: the six permutations of $(1,2,3)$ are $(1,2,3)$, $(1,3,2)$, $(2,1,3)$, $(2,3,1)$, $(3,1,2)$, $(3,2,1)$. The numbers of excedances (here defined as $a(i)\le n_b-i$) are respectively $1,1,2,1,1,0$ which indeed yields the ${\rm Eulerian}(3,k)$ numbers
$$1,4,1$$
for $k=0,1,2$ ($1$ permutation with 0 excedance, $4$ permutations with $1$ excedance, $1$ permutation with $2$ excedances)
appearing in $P_{n_t=1+n_b}(m)$, $m\in[1,n_b+1]$, $n_b=3$.
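The same check can be automated for small $n_b$; the sketch below (hypothetical helper) counts, for each permutation, the positions with $a(i)\le n_b-i$ and recovers the Eulerian row with the trailing zero:
\begin{verbatim}
# Counting permutations of (1, ..., n_b) by the number of positions with
# a(i) <= n_b - i reproduces the Eulerian row (plus a trailing zero).
from itertools import permutations

def beats_count_row(n_b):
    row = [0] * (n_b + 1)
    for perm in permutations(range(1, n_b + 1)):
        beats = sum(1 for i, a in enumerate(perm, start=1) if a <= n_b - i)
        row[beats] += 1
    return row

# beats_count_row(3) -> [1, 4, 1, 0]
\end{verbatim}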
There is still another way to look at the problem, in terms of
Stirling numbers of the second kind, here defined as
$$ n_{n_t}(i)=\frac{1}{(-i+{n_t}-1)!}\sum _{j=1}^{{n_t}-1} j^{{n_t}-2}
(-1)^{-i-j+{n_t}}
\binom{-i+{n_t}-1}{j-1}$$
Stirling numbers count in how many ways the numbers $(1,2,\ldots,n_t-1)$ can be partitioned into a given number of non-empty groups: for example for $n_t=5$
\hspace{2.4cm}$\to 1$ way to split the numbers $(1,2,3,4)$ into $4$ groups
$(1),(2),(3),(4)$
\hspace{2.4cm}$\to 6$ ways to split the numbers $(1,2,3,4)$ into $3$ groups
$(1),(2),(3,4);(1),(3),(2,4);(1),(4),(2,3);(2),(3),(1,4);(2),(4),(1,3);(3),(4),(1,2)$
\hspace{2.4cm}$\to 7$ ways to split the numbers $(1,2,3,4)$ into $2$ groups
$(1),(2,3,4);(2),(1,3,4);(3),(1,2,4);(4),(1,2,3);(1,2),(3,4);(1,3),(2,4);(1,4),(2,3)$
\hspace{2.4cm}$\to 1$ way to split the numbers $(1,2,3,4)$ into $1$ group
$(1,2,3,4)$
\noindent so that one obtains
$${1,6,7,1}$$
The probability distribution $P_{n_t}(m)$ can indeed be
rewritten \cite{duduche} as
$$P_{n_t}(m)={1\over n_b!}\sum_{i=m}^{n_t-1}(-1)^{i+m}{n_{n_t}(i)}(1+n_b-i)!\bigg({i-1\atop m-1}\bigg)$$
Why Stirling numbers should play a role here
arises \cite{duduche}
from graph counting considerations
on the $n_b\times n_b$ lattice when one now includes all the points below the diagonal.
As an example let us still consider the case $n_t=5$: below the diagonal $n_t=5$ there are $6$ points $a,b,c,d,e,f$ labelled by their lattice coordinates $a=(1,1),\; b=(1,2),\; c=(1,3),\; d=(2,1),\; e=(2,2),\; f=(3,1)$. One draws a graph according to the no ex aequo exclusion rule: starting, say, from the point $a$, one links it to another point if it obeys the no ex aequo exclusion rule with respect to $a$, that is to say if its coordinates are not $(1,1)$. In our example there is only one such point $e$. Then one can link the point $e$ to the points $c$ and $f$, which can also be linked together. Finally one can link $c$ to $d$ and $f$ to $b$ with also a link between $d$ and $b$. The numbers ${n_{n_t=5}(i+1)}$ count in the graph just obtained the number of subgraphs with either $i=1$ point (this is the number of points $6$), $i=2$ points linked (there are 7 such cases), $i=3$ points fully linked (there is 1 such case), $i=4$ points fully linked (there is no such case), $\ldots$, a counting which finally gives the Stirling-like numbers
$$1,6,7,1$$
where the $1$ on the left is by convention the number of subgraphs with $i=0$ point.
It still remains to be shown why this subgraph counting is indeed equivalent to the Stirling counting. To do this one has simply to notice that the former is encapsulated in a recurrence relation obtained by partitioning the $n_{n_t+1}(i+1)$ lattice counting as:
either there is no point on the diagonal $n_t$ $\to n_{n_t}(i+1)\left({n_t-1\atop 0}\right)$,
or there is $1$ point on the diagonal $n_t$ $\to n_{n_t-1}(i)\left({n_t-1\atop 1}\right)$,
or there are $2$ points on the diagonal $n_t$ $\to n_{n_t-2}(i-1)\left({n_t-1\atop 2}\right)$,
and so on.
\noindent It follows that the numbers $ n_{n_t+1}(i+1)$ have to obey the recurrence relation
$$ n_{n_t+1}(i+1)=\sum_{k'=0}^{i} n_{n_t-k'}(i+1-k')\left({n_t-1\atop k'}\right) $$
\noindent valid for $i\in[0,n_t-2]$, bearing in mind that when $i=n_t-1$ trivially $ n_{n_t+1}(n_t)=1$.
But this recurrence relation can be mapped on a more standard recurrence relation for the Stirling numbers of the second kind:
if one sets $k=n_t-i\in[2,n_t]$ and defines now ${\rm Stirling}(n_t,k)\equiv n_{n_t+1}(i+1)$ one obtains for the ${\rm Stirling}(n_t,k)$'s
$$ {\rm Stirling}(n_t,k)=\sum_{k'=0}^{n_t-k} {\rm Stirling}(n_t-k'-1,k-1)\left({n_t-1\atop k'}\right) $$
which via $k''=n_t-k'\in[k,n_t]$ rewrites as
$$ {\rm Stirling}(n_t,k)=\sum_{k''=k}^{n_t} {\rm Stirling}(k''-1,k-1)\left({n_t-1\atop k''-1}\right) $$
that is to say finally
$$ {\rm Stirling}(n_t+1,k+1)=\sum_{k''=k}^{n_t} {\rm Stirling}(k'',k)\left({n_t\atop k''}\right) $$
\noindent This therefore establishes that the ${\rm Stirling}(n_t,k)$'s, i.e. the ${n_{n_t}(i)}$'s, are indeed Stirling numbers of the second kind.
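The recurrence derived above is also a convenient way to generate these numbers in practice; a minimal sketch (hypothetical helper) is:
\begin{verbatim}
# Stirling numbers of the second kind via the recurrence derived above:
# S(n+1, k+1) = sum_{j=k}^{n} C(n, j) S(j, k), with S(0, 0) = 1.
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def stirling2(n, k):
    if n == 0 and k == 0:
        return 1
    if k == 0 or k > n:
        return 0
    return sum(comb(n - 1, j) * stirling2(j, k - 1) for j in range(k - 1, n))

# [stirling2(4, k) for k in range(1, 5)] -> [1, 7, 6, 1]
# (partitions of four elements into 1, 2, 3, 4 groups)
\end{verbatim}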
It is of course not a surprise that both Eulerian and Stirling numbers play a role here, since there is a correspondence\footnote{For Eulerian, Stirling, and other well-known numbers in combinatorics see for example \cite{wag}.} between them
$$ {\rm Eulerian}(n_b,k)=\sum _{j=1}^{k+1}(-1)^{k-j+1}\left({n_b-j\atop n_b-k-1}\right)j!\;{\rm Stirling}(n_b,j)$$
with $k\in[0,n_b-1]$.
Why does one rewrite $P_{n_t}(m)$ in terms of Stirling numbers in the 2-race case? Because this rewriting can be generalized \cite{duduche} to the $n_r$-race case. The generalisation is formal since one does not know what the $n_r$-dependent "generalized Stirling" numbers which control the probability distribution\footnote{The lattice becomes larger with the number of races: as said above, when $n_r=3$ one has a cubic lattice. The corresponding graph structure becomes more and more involved, and so does the counting of the associated fully connected subgraphs.} are. Again this is like moving from a solvable "$2$-body" problem to a so far unsolvable "$n_r$-body" problem.
Still it remains quite fascinating that well-known numbers in combinatorics, such as Eulerian and Stirling numbers, should be at the heart of the understanding of rank expectations in regattas, at least in the 2-race case.
{\bf Acknowledgements:} I would like to take the occasion of Leonid Pastur $75^{\rm th}$ birthday celebration to tell him all my friendship and admiration. I acknowledge useful conversations with S. Wagner. My thanks also to Dhriti Bhatt's youthful energy and her willingness to look at this problem again in the summer 2012.
\vspace{1cm}
\section{Introduction}
\label{sec:intro}
One of the decisions that a basketball coach has to make constantly is what lineup to play in order to maximize the probability of outperforming the opponent's lineup currently on the court.
This lineup evaluation problem has been traditionally addressed through player and lineup ratings based on (adjusted) plus/minus-like approaches.
In this work, we propose a different approach that is based on network science methods.
In particular, we first define the matchup network:
\begin{mydefinition}{Matchup Network}{theoexample}
The matchup network $\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{W})$, is a weighted directed network where nodes represent lineups. An edge ${e}_{i,j} \in \mathcal{E}$ points from node $i\in \mathcal{V}$ to node $j\in \mathcal{V}$ iff lineup $j$ has outperformed lineup $i$. The edge weight $w_{{e}_{i,j}}$ is equal to the performance margin of the corresponding matchup.
\end{mydefinition}
Using this network we then obtain a network embedding, which projects the network nodes on a latent space $\mathcal{X}$.
For our purposes we adopt the {\bf node2vec} \cite{grover2016node2vec} framework for learning the latent space.
Simply put the embedding learns a set of features $\mathbf{x}_{{u}}$ for node {{u}}.
These features are then utilized to build a logistic regression model for the probability of lineup ${\lambda}_i$ outperforming lineup ${\lambda}_j$, $\Pr[{\lambda}_i \succ {\lambda}_j | \mathbf{x}_{{\lambda}_i},\mathbf{x}_{{\lambda}_j}]$.
Figure \ref{fig:ballnet} visually captures {{\tt LinNet}}.
\begin{figure}[h]
\includegraphics[scale=0.4]{plots/ballnet}
\caption{{\bf The {{\tt LinNet}} lineup evaluation method}}
\label{fig:ballnet}
\end{figure}
Our evaluations indicate that {{\tt LinNet}} is able to predict the outcome of a lineup matchup correctly with 67\% accuracy, while the probabilities are well-calibrated with a Brier score of 0.19.
Furthermore, the probability validation curve of {{\tt LinNet}} is statistically indistinguishable from the $y=x$ line and hence, the logistic regression model captures accurately the lineup's matchup probabilities.
In comparison, we evaluate two baseline methods; (i) a PageRank-based ranking using the same matchup lineup networks, and (ii) a model based on the adjusted plus/minus of the players consisting each lineup.
These two methods have accuracy ranging between 52-58\%.
The rest of the paper is organized as following.
In Section \ref{sec:materials} we present in detail the operations of {{\tt LinNet}} as well as the datasets we used.
Section \ref{sec:analysis} presents our results, while Section \ref{sec:discussion} discusses the implications and limitations of our work.
\section{Materials and Methods}
\label{sec:materials}
In this section we will present in detail (a) the design of {{\tt LinNet}}, (b) the baseline methods for comparison, and (c) the datasets we used for our evaluations.
\subsection{{{\tt LinNet}}}
\label{sec:linnet}
The first step of {{\tt LinNet}} is defining the matchup network $\mathcal{G}$.
There is flexibility in choosing the performance margin that one can use for the edge weights.
In the current implementation of {{\tt LinNet}}, the weights of $\mathcal{G}$ correspond to the point margin per minute for the two lineups.
Once the network is obtained the next step is to learn the network embedding.
As our network embedding mechanism we will utilize the approach proposed by Grover and Leskovec \cite{grover2016node2vec}, namely, node2vec.
node2vec utilizes (2$^{nd}$ order) random walks on the network in order to learn the latent features of the nodes, i.e., a function $f : \mathcal{V} \rightarrow \Re^d$, where $d$ is the dimensionality of the latent space.
Starting from node ${u}$ in the network and following the random walk strategy $R$ the network neighborhood $N_R({u})$ of ${u}$ is defined.
Then node2vec learns the network embedding $f$ by solving the following optimization problem:
\begin{equation}
\max_{f} \sum_{{u} \in \mathcal{V}} \log(\Pr[N_R({u})|f({u})])
\label{eq:opt}
\end{equation}
Simply put, the network embedding maximizes the log-likelihood of observing a network neighborhood $N_R({u})$ for node ${u}$ conditioned on the network embedding $f$.
The random walk strategy is defined by two parameters, $p$ and $q$, that offer a balance between a purely breadth-first search walk and a purely depth-first search walk.
In particular, the random walk strategy of node2vec includes a bias term $\alpha$ controlled by parameters p and q.
Assuming that a random walk is on node ${u}$ (coming from node $v$), the unnormalized transition probability $\pi_{{u} x} = \alpha_{pq}(v,x)\cdot w_{{u} x}$.
With $d_{v x}$ being the shortest path distance between the previous node $v$ and the candidate node $x$, the bias term reads:
\[
\alpha_{pq}(v,x) =
\begin{cases}
1/p &,~ \text{if}~ d_{v x}=0\\
1 &,~\text{if}~ d_{v x}=1 \\
1/q &,~ \text{if}~ d_{v x} = 2 \\
\end{cases}
\]
As alluded to above parameters $p$ and $q$ control the type of network neighborhood $N_R({u})$ we obtain.
Different sampling strategies will provide different embeddings.
For example, if we are interested in having set of nodes that are tightly connected in the original network close to each other in the latent space, $p$ and $q$ need to be picked in such a way that allows for ``local'' sampling.
In our application we are more interested in identifying structurally equivalent nodes, i.e., nodes that are similar because their connections in the network are similar (not necessarily close to each other with respect to network distance).
This requires a sampling strategy that allows for the network neighborhood of a node to include nodes that are further away as well.
Given this objective and the recommendations by Grover and Leskovec \cite{grover2016node2vec} we choose $q=3$ and $p=0.5$ for our evaluations.
Furthermore, we generate 3,000 walks for each network, of 3,500 hops each.
Finally, we choose as our latent space dimensionality, $d = 128$.
Increasing the dimensionality of the space improves the quality of the embedding as one might have expected, however, our experiments indicate that increasing further the dimensionality beyond $d=128$ we operate with diminishing returns (with regards to computational cost and improvement in performance).
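For concreteness, the biased sampling step of the walk can be sketched as follows (a simplified illustration assuming a plain dictionary-of-weighted-neighbors representation; production implementations typically precompute alias tables, and the first step of a walk, which has no previous node, is usually sampled uniformly):
\begin{verbatim}
# One step of the 2nd-order biased random walk: the walk sits on node u,
# having arrived from node prev; neighbors maps a node to {neighbor: weight}.
import random

def next_node(prev, u, neighbors, p=0.5, q=3.0, rng=random):
    candidates, weights = [], []
    for x, w in neighbors[u].items():
        if x == prev:                      # d(prev, x) = 0
            alpha = 1.0 / p
        elif x in neighbors[prev]:         # d(prev, x) = 1
            alpha = 1.0
        else:                              # d(prev, x) = 2
            alpha = 1.0 / q
        candidates.append(x)
        weights.append(alpha * w)
    return rng.choices(candidates, weights=weights, k=1)[0]
\end{verbatim}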
Once the latent space $\mathcal{X}$ is obtained, we can build a logistic regression model for the probability of lineup ${\lambda}_i$ outperforming ${\lambda}_j$.
In particular, we use the Bradley-Terry model.
The Bradley-Terry model is a method for ordering a given set of items based on their characteristics and understanding the impact of these characteristics on the ranking.
In our case the set of items are the lineups and the output of the model for items $i$ and $j$ provides us essentially with the probability of lineup ${\lambda}_i$ outperforming ${\lambda}_j$.
In particular,
the Bradley-Terry model is described by \cite{opac-b1127929}:
\begin{equation}
\Pr({\lambda}_i \succ {\lambda}_j | \pi_i,~\pi_j)=\dfrac{e^{\pi_i-\pi_j}}{1+e^{\pi_i-\pi_j}}
\end{equation}
where $\pi_i$ is the {\em ability} of team $i$.
Given a set of lineup-specific explanatory variables $\mathbf{z}_i$, the difference in the ability of lineups ${\lambda}_i$ and ${\lambda}_j$ can be expressed as:
\begin{equation}
\sum_{r=1}^k \alpha_r (z_{ir}-z_{jr}) + U
\end{equation}
where $U\sim N(0,\sigma^2)$.
The Bradley-Terry model is then a generalized linear model that can be used to predict the probability of team $i$ winning team $j$.
In our case, the explanatory variables are the latent space features learned for each lineup, $\mathbf{x}_{{\lambda}_i}$.
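In practice this amounts to a logistic regression on latent-feature differences; a minimal sketch (numpy and scikit-learn assumed; the data layout and variable names are hypothetical) is:
\begin{verbatim}
# Bradley-Terry as logistic regression on latent-feature differences.
# x[i] is the (numpy) embedding of lineup i; pairs is a list of
# (i, j, label) with label = 1 if lineup i outperformed lineup j.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_bradley_terry(x, pairs):
    X = np.array([x[i] - x[j] for i, j, _ in pairs])
    y = np.array([label for _, _, label in pairs])
    model = LogisticRegression(fit_intercept=False, max_iter=1000)
    model.fit(X, y)
    return model

def prob_i_beats_j(model, x, i, j):
    # probability that lineup i outperforms lineup j
    return model.predict_proba((x[i] - x[j]).reshape(1, -1))[0, 1]
\end{verbatim}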
{\bf Previously Unseen Lineups: }
One of the challenges (both in out-of-sample evaluations and in a real-world setting) is how to treat lineups that we have not seen before and for which, hence, we do not have a latent space representation.
In the current design of {{\tt LinNet}} we take the following simple approach.
In particular, for each lineup ${\lambda}_i$ of team $\mathcal{T}$ we define the similarity in the players' space ${\sigma}_{{\lambda}_i,{\lambda}_j}$ of ${\lambda}_i$ with ${\lambda}_j \in \mathcal{L_{\mathcal{T}}}$, as the number of common players between the two lineups.
It is evident that the similarity value ranges from 0 to 4.
One might expect that lineups with high overlap in the players' space, should also reside closely in the embedding space.
In order to get a feeling of whether this is true or not, we calculated for every team and season the correlation of the similarity of two lineups in the players' space (i.e., ${\sigma}_{{\lambda}_i,{\lambda}_j}$) with the distance between the corresponding latent features (i.e., ${\tt dist}({\mathbf x}_i,{\mathbf x}_j)$).
As we can see from Figure \ref{fig:cor}, all teams exhibit negative correlations (all significant at the 0.001 level), which means that the more players two lineups have in common, the closer they are projected in the embedding space.
Of course, the levels of correlation are moderate at best since, the embedding space is obtained by considering the performance of each lineup, and two lineups that differ by only one player might still perform completely differently on the court.
With this in mind, once we obtain the lineup similarity values, we can assign the latent feature vector for the previously unseen lineup ${\lambda}_i$ as a weighted average of the lineups in $\mathcal{L_{\mathcal{T}}}$:
\begin{equation}
{\mathbf x}_{{\lambda}_i} = \dfrac{\displaystyle \sum_{{\lambda}_j \in \mathcal{L_{\mathcal{T}}}} {\sigma}_{{\lambda}_i,{\lambda}_j} \cdot {\mathbf x}_j}{\displaystyle \sum_{{\lambda}_j \in \mathcal{L_{\mathcal{T}}}} {\sigma}_{{\lambda}_i,{\lambda}_j} }
\end{equation}
It is evident that this is simply a heuristic that is currently implemented in {{\tt LinNet}}.
One could think of other ways to approximate the latent space features of a lineup not seen before.
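The heuristic itself is a few lines of code; a sketch (hypothetical data layout, numpy assumed) is:
\begin{verbatim}
# Latent features for a previously unseen lineup: a similarity-weighted
# average over the team's already-embedded lineups, where similarity is the
# number of shared players. `seen` maps a frozenset of 5 players to its
# (numpy) feature vector.
import numpy as np

def features_for_new_lineup(new_players, seen):
    new_players = frozenset(new_players)
    num = np.zeros(next(iter(seen.values())).shape)
    den = 0.0
    for players, feats in seen.items():
        sim = len(new_players & players)   # 0 to 4 shared players
        num += sim * feats
        den += sim
    return num / den if den > 0 else num
\end{verbatim}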
\begin{figure}[h]
\includegraphics[scale=0.4]{plots/cor-team}
\caption{{\bf Lineups with higher overlap in terms of players exhibit smaller distance in the latent embedding space $\mathcal{X}$}}
\label{fig:cor}
\end{figure}
\subsection{Baselines}
\label{sec:baselines}
For comparison purposes we have also evaluated two baseline approaches for predicting lineup matchup performance.
The first one is based on network ranking that operates directly on the matchup network (i.e., without involving any embedding of the network), while the second one is based on the adjusted plus/minus rating of the players that belong to the lineup.
{\bf Network Ranking: }
In our prior work we have shown that ranking teams through a win-loss network, achieves better matchup prediction accuracy as compared to the win-loss percentage \cite{sportsnetrank}.
Therefore, we follow a similar approach using the lineup matchup network and ranking lineups based on their PageRank score.
The PageRank of $\mathcal{G}$ is given by:
\begin{equation}
\bm{r} = D(D-\alpha A)^{-1}\bm{1}
\label{eq:pr}
\end{equation}
\noindent where $A$ is the adjacency matrix of $\mathcal{G}$, $\alpha$ is a parameter (a typical value of which is 0.85) and $D$ is a diagonal matrix where $d_{ii} = \max(1,k_{i,out})$, with $k_{i,out}$ being the out-degree of node $i$.
Using the PageRank score differential $\Delta r_{ij}=r_{{\lambda}_i}-r_{{\lambda}_j}$ as our independent variable we build a logistic regression model for the probability: $\Pr({\lambda}_i \succ {\lambda}_j | \Delta r_{ij})$.
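For completeness, the PageRank scores can be obtained by a direct transcription of the expression above; the sketch below (numpy assumed, dense adjacency matrix, unweighted out-degrees) is illustrative rather than the implementation used here:
\begin{verbatim}
# PageRank scores from r = D (D - alpha * A)^{-1} 1, with
# D_ii = max(1, out-degree of node i); A is the adjacency matrix.
import numpy as np

def pagerank_scores(A, alpha=0.85):
    k_out = (A > 0).sum(axis=1)                  # out-degree of each node
    D = np.diag(np.maximum(1.0, k_out).astype(float))
    ones = np.ones(A.shape[0])
    return D @ np.linalg.solve(D - alpha * A, ones)
\end{verbatim}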
{\bf Adjusted plus/minus (APM): }
The APM statistic of a player is a modern NBA statistic and, for many people, the best single statistic we currently have for rating players.
It captures the additional points that the player is expected to add with his presence in a lineup of otherwise league-average players matching up against a lineup of league-average players.
APM captures the impact of a player beyond pure scoring.
For instance, a player might impact the game by performing good screens that lead to open shots, something not captured by current box score statistics.
The other benefit of APM is that it controls for the rest of the players in the lineups.
More specifically the APM for a player is calculated through a regression model.
Let us consider that lineup ${\lambda}_i$ has played against ${\lambda}_j$, and has outscored the latter by $y$ points per 48 minutes.
$y$ is the dependent variable of the model, while the independent variable is a binary vector $\mathbf{p}$, each element of which represents a player.
All elements of $\mathbf{p}$ are 0 except for the players in the lineups.
Assuming ${\lambda}_i$ is the home lineup\footnote{If this information is not available - e.g., because the input data include the total time the lineups matched up over multiple games - W.L.O.G. we can consider the home lineup to be the one with lower ID number. This is in fact the setting we have in our dataset.}, $p_n = 1,~\forall p_n\in {\lambda}_i$, while for the visiting lineup, $p_n = -1,~\forall p_n \in {\lambda}_j$.
Then these data are used to train a regression model:
\begin{equation}
y = \mathbf{a}^T\cdot \mathbf{p}
\label{eq:apm}
\end{equation}
where $\mathbf{a}$ is the vector of regression coefficients.
Once obtaining this vector, the APM for player $p_n$ is simply $a_{p_n}$.
The rating of lineup ${\lambda}_i$, $\rho_{{\lambda}_i}$ is then the average APM of its players:
\begin{equation}
\rho_{{\lambda}_i} = \dfrac{1}{5}\sum_{p_n \in {\lambda}_i} a_{p_n}
\label{eq:lapm}
\end{equation}
Using the lineup rating differential $\Delta \rho_{ij} = \rho_{{\lambda}_i} - \rho_{{\lambda}_j}$ as our independent variable we again build a logistic regression model for the probability: $\Pr({\lambda}_i \succ {\lambda}_j | \Delta \rho_{ij})$.
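The APM regression can be sketched as follows (hypothetical data layout; plain least squares shown, while in practice ridge regularization is often preferred for APM-style regressions):
\begin{verbatim}
# Sketch of the APM regression: each matchup contributes one row of the
# design matrix with +1 for the 'home' lineup's players, -1 for the other
# lineup's players, and the point margin per 48 minutes as the response.
# Entry j of the solution vector is player j's APM.
import numpy as np

def adjusted_plus_minus(matchups, n_players):
    X, y = [], []
    for home_players, away_players, margin_per_48 in matchups:
        row = np.zeros(n_players)
        row[list(home_players)] = 1.0
        row[list(away_players)] = -1.0
        X.append(row)
        y.append(margin_per_48)
    a, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    return a
\end{verbatim}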
\subsection{Datasets}
\label{data}
In order to evaluate {{\tt LinNet}} we used lineup data during the 5 NBA seasons between 2007-08 and 2011-12.
This dataset includes aggregate information for all the lineup matchups for each of the 5 seasons.
In particular, for each pair of lineups (e.g., ${\lambda}_i$, ${\lambda}_j$) that matched up on the court we obtain the following information:
\begin{enumerate}
\item Total time of matchup
\item Total point differential
\item Players of ${\lambda}_i$
\item Players of ${\lambda}_j$
\end{enumerate}
We used these data in order to obtain both the matchup network as well as to calculate the APM for every player in each season.
\section{Analysis and Results}
\label{sec:analysis}
We now turn our attention in evaluating {{\tt LinNet}}.
Our focus is on evaluating the accuracy of {{\tt LinNet}} in predicting future lineup matchups, as well as the calibration of the inferred probabilities.
For every season, we build each model using 80\% of the matchups and we evaluate them on the remaining 20\% of the matchups (which might also include lineups not seen before).
Our evaluation metrics are (i) prediction accuracy, (ii) Brier score and (iii) the probability calibration curves.
Figure \ref{fig:accuracy} presents the accuracy performance of the various methods.
As we can see {{\tt LinNet}} outperforms both the PageRank and APM systems over all 5 seasons examined.
{{\tt LinNet}}'s accuracy is 67\%, while APM exhibits a 55\% average accuracy and PageRank a 53\% accuracy.
\begin{figure}[h]
\includegraphics[scale=0.4]{plots/linnet-accuracy}
\caption{{\bf {{\tt LinNet}} outperforms in accuracy baseline methods over all 5 seasons examined}}
\label{fig:accuracy}
\end{figure}
However, equally as important for the quality of the model is the calibration of the output probabilities.
We begin by first computing the Brier score \cite{brier1950verification} for each model and dataset.
In the case of a binary probabilistic prediction the Brier score is calculated as:
\begin{equation}
\beta = \dfrac{1}{N}\sum_{i=1}^N (\pi_i-y_i)^2
\label{eq:brier}
\end{equation}
where $N$ is the number of observations, $\pi_i$ is the probability assigned to instance $i$ being equal to 1 and $y_i$ is the actual (binary) value of instance $i$.
The Brier score takes values between 0 and 1 and evaluates the calibration of these probabilities, that is, the level of confidence they provide (e.g., a 0.9 probability is {\em better} calibrated compared to a 0.55 probability when the ground truth is label 1).
The lower the value of $\beta$ the better the model performs in terms of calibrated predictions.
Our model exhibits an average Brier score $\beta$ of 0.19, while both PageRank and APM models have a worse Brier score.
Typically the Brier score of a model is compared to a baseline value $\beta_{base}$ obtained from a {\em climatology} model \cite{mason2004using}.
A climatology model assigns the same probability to every observation, which is equal to the fraction of positive labels in the whole dataset.
Therefore, in our case the climatology model assigns a probability of 0.5 to each observation.
As alluded to above we do not have information about home and visiting lineup so our model estimates the probability of the lineup with the smaller ID outperforming the one with the larger ID.
Given that the lineup ID has no impact on this probability the climatology model probability is 0.5.
The Brier score for this reference model is $\beta_{base}=0.25$, which is of lower quality as compared to {{\tt LinNet}} and also slightly worse than our baselines.
\begin{figure}[h]
\includegraphics[scale=0.4]{plots/brier}
\caption{{\bf {{\tt LinNet}} exhibits better calibrated probabilities as compared to the baselines (smaller Brier score translates to better calibration)}}
\label{fig:brier}
\end{figure}
As alluded to above we have picked a dimensionality for the embedding of $d=128$.
However, we have experimented with different embedding dimensionality values and our results are presented in Figure \ref{fig:accuracy-d}.
As we can see, low dimensionality does not provide any benefit over the baselines, while increasing further the dimensionality (above 128) exhibits diminishing returns.
\begin{figure}[h]
\includegraphics[scale=0.4]{plots/accuracy-d}
\caption{{\bf The choice of $d=128$ for the embedding dimensionality of {{\tt LinNet}} provides a good tradeoff between accuracy and (computational) complexity}}
\label{fig:accuracy-d}
\end{figure}
Finally, we evaluate the accuracy of the probability output of {{\tt LinNet}} by deriving the probability validation curves (for $d=128$).
In order to evaluate this we would ideally want to have every matchup played several times.
If the favorite lineup were given a 75\% probability of outperforming the opposing lineup, then if the matchup was played 100 times we would expect the favorite to outperform in 75 of them.
However, this is not realistic and hence, in order to evaluate the accuracy of the probabilities we will use all the games in our dataset.
In particular, if the predicted probabilities were accurate, then, considering all the matchups where the favorite was predicted to win with a probability of x\%, the favorite should have outperformed the opponent in x\% of these matchups.
Given the continuous nature of the probabilities we quantize them into groups that cover a 5\% probability range.
Fig \ref{fig:calibration} presents the predicted win probability for the reference lineup (i.e., the lineup with the smaller ID) on the x-axis, while the y-axis presents how many of these matchups this reference lineup won.
Furthermore, the size of the points represents the number of instances in each situation.
As we can see the validation curve is very close to the $y=x$ line, which practically means that the predicted probabilities capture fairly well the actual matchup probabilities.
In particular, the linear fit has an intercept of 0.1 and a slope of 0.85.
\begin{figure}[h]
\includegraphics[scale=0.4]{plots/calibration}
\caption{{\bf The {{\tt LinNet}} probability validation curve is very close to the $y=x$ line, translating to fairly accurate probability estimations}}
\label{fig:calibration}
\end{figure}
{\bf Season Win-Loss Percentage and Lineup Performance: }
How well can lineup {\em ratings} obtained from {{\tt LinNet}} explain the win-loss record of a team?
One should expect a correlation between {{\tt LinNet}} lineup ratings and the record of a team, which, as we will see, is indeed the case.
However, this correlation is also not expected to be perfect, since it relies also on coaching decisions as well as availability of the lineups (e.g., a lineup can be unavailable due to injuries).
In order to examine this we focus on lineups that played for a total of more than a game (i.e., 48 minutes) during the season.
Then with $p_{{\lambda}_i}$ being the average probability of lineup ${\lambda}_i$ (of team $\tau$) outperforming each of the opponent's lineups (i.e., $p_{{\lambda}_i} = \dfrac{\sum_{{\lambda}_j \in \mathcal{L}\setminus\mathcal{L}_{\tau}} \Pr({\lambda}_i \succ {\lambda}_j)}{|\mathcal{L}\setminus\mathcal{L}_{\tau}|}$, where $\mathcal{L}_{\tau}$ is the set of all lineups of team $\tau$ and $\mathcal{L}$ is the set of all league lineups), the {{\tt LinNet}} team rating of team $\tau$ is:
\begin{equation}
{r}(\tau) = \dfrac{\displaystyle\sum_{{\lambda}_i \in \mathcal{L}_{\tau}} \gamma_i\cdot p_{{\lambda}_i}}{\displaystyle\sum_{{\lambda}_i \in \mathcal{L}_{\tau}} \gamma_i}
\label{eq:team-rating}
\end{equation}
where $\gamma_i$ is the total time lineup ${\lambda}_i$ has been on the court over the whole season.
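The aggregation itself is simple; a sketch (hypothetical data layout, numpy assumed) is:
\begin{verbatim}
# Team rating: minutes-weighted average, over the team's lineups, of each
# lineup's mean probability of outperforming the opponents' lineups.
# probs[i][j] = P(lineup i beats lineup j); minutes[i] = time on court.
import numpy as np

def team_rating(team_lineups, other_lineups, minutes, probs):
    num = den = 0.0
    for i in team_lineups:
        p_i = np.mean([probs[i][j] for j in other_lineups])
        num += minutes[i] * p_i
        den += minutes[i]
    return num / den
\end{verbatim}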
Our results are presented in Figure \ref{fig:lineup-wl}.
The linear regression fit has a statistically significant slope (p-value $<$ 0.001), which indicates a statistically meaningful relationship.
However, as we can see there are outliers in this relationship, such as the 2008-09 Cavaliers and the 2011-12 Nets.
The linear relationship explains 27\% of the variability at the win-loss records of the teams.
This might be either because teams do not choose (for various reasons) their best lineup to match up with the opponent, or because the time that a lineup is on the court is important for its performance (we discuss this in the following section), something that {{\tt LinNet}} currently does not account for.
Overall, the correlation coefficient between the {{\tt LinNet}} team rating and the win-loss record is 0.53 (p-value $<$ 0.0001).
\begin{figure}[h]
\includegraphics[scale=0.4]{plots/linnet-rating-wl}
\caption{{\bf {{\tt LinNet}} team ratings (minutes-weighted lineup win probabilities) plotted against the teams' season win-loss records; the positive trend is statistically significant}}
\label{fig:lineup-wl}
\end{figure}
\section{Discussion}
\label{sec:discussion}
In this work we presented {{\tt LinNet}}, a network embedding approach for evaluating lineups.
Our evaluations indicate that the probability output from {{\tt LinNet}} is well calibrated and more accurate than traditional lineup evaluation methods.
However, there are still some open issues with the design of {{\tt LinNet}}.
More specifically, a matchup between specific lineups might last only for a few minutes (or even just a couple of possessions).
This creates a reliability issue with any predictions one tries to perform with similar information.
Even though we adjust the performance margin on a per minute basis, it is not clear that a lineup can keep up its performance over a larger time span.
Furthermore, currently for lineups we have not seen before we use as its latent features a weighted average of already seen lineups of the team, weighted based on their similarity in the players' space.
However, there are other approaches that one might use for this task that could potentially provide even better results.
For example, a regression (similar to the adjusted plus/minus) can be used to infer the latent features based on the players in the lineup.
This is something that we plan to explore in the future.
\small
\chapter{Acknowledgements}
\fancyhead[LO]{\emph{Acknowledgements}}
\fancyhead[RE]{\emph{Acknowledgements}}
\selectlanguage{english}

If writing a physics thesis in English is already an arduous task in itself, trying to render the acknowledgements in English is an impossible undertaking, at least for a root-mean-square Veronese like me. What follows is therefore a long ramble in what comes closest to my mother tongue, in which I hope not to forget anyone, even though I know that will be inevitable; please bear with me.

I begin by thanking my family, mum Graziella and dad Franco first of all. Even though I did everything I could to make myself odious and unbearable, especially in periods of deadlines and submissions, they always supported and encouraged me, pushing me to go on. Always ready to listen to me, they kept asking ``How are things in Trento? And your studies?''. And even though my smoky explanations about hyperons and neutron stars left them with even more confused ideas than before, every time they came back to ask about my work, just to have at least a vague idea of what I was doing to bring home those four euros of the PhD scholarship. I thank Simone and Sara: maybe it is the distance, maybe it is that once past the ``-ty'' threshold one even starts to mature (ahahah), maybe it is snowboarding or downhill, but in the last few years we have grown much closer again, even ending up on holiday in Dublin together! Although I am no longer a beardless kid (no wait, I still am), it has been valuable to have the support, the advice and the complicity of my older brother who, even if he will never admit it publicly, I know loves me. And so, thank you! Equal thanks are deserved by my grandparents, uncles, aunts and cousins from Caprino and its surroundings, and by the geographically more ``distant'' ones in Verona, who never stopped believing in me. Because for the books, the travel and the stays abroad there is MasterCard, but knowing that back in the village I am advertised with expressions like ``Varda che me neòdo l'è 'n sciensiato!'' (``Look, my grandson is a scientist!'') is priceless!

Academically speaking, I cannot fail to be grateful to Francesco, who has followed me during these three years of PhD (and even earlier, during my master's degree) with explanations, discussions, technical advice or even just chats, especially in these last few months, which have been quite demanding from every point of view. Despite his thousand commitments and trips, he has always been a reference point. Add Stefano, without whose help I would probably have had to find a job as a street sweeper back in Verona. Apart from the thousand physics questions and the discussions about that blessed code, I thank him for the truckload of advice in general, for the hospitality, the terrible jokes, the (perhaps too many) beers and the super-professional games of billiards... Together with him I must thank the great Serena, who, let's be honest, is the one who wears the trousers in that family, and having said that I have said it all! Thanks also to the two little rascals, who made me laugh a lot while I was a guest at the Gandolfi house (and what a house!). Going back a bit in time, I certainly have to thank the good Paolo ``Ormoni'', who introduced me to AFDMC and laid the foundations of what would later become my project on ``strange'' systems. Without him I do not think I could ever have faced that code, or Bash in general. To close the academic part, I thank all the LISCers, who contributed to creating an intellectually stimulating working environment, and all those with whom I had the chance to talk about physics: Kevin, Steve, Bob, Ben, Abhi, the two Alessandros and my office mates, who however deserve a paragraph of their own. Ah yes, I certainly cannot forget Micaela's infinite patience: she has little to do with physics, but when it comes to bureaucracy and organization she is unbeatable.

Let us then come to the friendship department: here I could go on far too long, but I have chosen to restrain myself a bit, dividing the sample into two geographical subsets, the Trentino one (in a broad sense) and the more historical Veronese one, following a somewhat random path (an occupational hazard).

Let us start with the complete and uncontrollable degeneration of my office, from the irreplaceable (and I mean it $\heartsuit$) Roberto to the ``shallissimo'' Alessandro, from the ``Swiiiiss'' Elia to the ``miserable'' Paolo (to our great amusement, in a perpetual fight for the title of omega male). And the by now saintly Giorgia, who recently installed a series of filters to shut out our impertinent voices. Let us not forget those who first colonized the open space at LISC: the ever-singing Emmanuel, the already mentioned Paolo ``Ormoni'' and the legendary Enrico (who, as soon as he reads these lines, will start replaying one of the Apple product parodies non-stop). Add the FBK colleagues naturalized at LISC, such as Mostarda, Amadori and Fossati with the formidable Saini in tow (I deliberately used surnames to tease you a bit), the ``passing-through'' characters like Marco and those adopted from other universities like Alessandro (Lovato). The latter (excellent) physicist deserves a special thank you (besides an already planned dinner in Chicago) for the extreme hospitality and support he has given me (and which I hope he will keep giving me) overseas, and not only on physics matters. In truth, each of the people mentioned here would deserve a tailor-made thank you, but it is not easy (and probably not even appropriate) to put it all down on these pages. Those who have been particularly close to me already know that I am grateful to them for everything; not many words are needed...

Outside the office things get more complicated, because the number of people to thank grows considerably. So a warm thank you to Giuseppe, Paolo and Chiara, Alessia, Nicolò, Irena and Nicolò, Sergio, Mattia, Roberta, Nicola, Cinzia, Giada, Marco, Giovanni, Sebastiano, Fernando, Eleonora, Letizia, Nikolina, David, Eleonora, Federica, Beatrice, Marta, Fata, and at this point I am forced to resort to a diplomatic \emph{et~al.}; no hard feelings. With some of these people I shared a home, with others I went out to party, others were and are ``simply'' friends, but all of them contributed in some way to the fantastic moments I spent during these three years. Being the author of this work, I reserve the right to thank Marianna and Gemma separately: although there would be many things to say on the matter, I will limit myself to a simple but deep ``thank you!''. For the same reason as in the previous sentence, I extend a thank you, in time and in space, also to Francesco and to Bazza, who have nothing to do with my PhD but who were load-bearing elements of my long Trentino adventure, and the tail (in probabilistic terms) of their influence can still be felt.

In the Veronese lands it is mandatory to mention all the historical and less historical friends, whom lately I have had the chance to see more often because, be it the fashion of the moment or some contagious virus, everyone here is getting married! So thanks to Andrea, Marco and Jessica, Alice and Francesco, Davide and Elisa, Roberta and Alberto, Matteo, Erika, Letizia, Daniela, Mirko, Silvia and all the others with whom I drank $n$ beers (with $n$ often too large) around Caprino and its surroundings. I am particularly grateful to the ENGINEEEEEER Giacomo and to the lovely Giulia: lately there has been little chance to meet up often, but the evenings spent in your company will always bring a smile to my face. Finally, and certainly not in order of importance, I must thank with all my heart Alessandra (and with her the whole family), who for many years was at my side supporting me, putting up with me, encouraging me, making me angry and making me laugh at the same time; fate (or whoever/whatever acts on its behalf) decided that our roads should take two different directions, but nothing and nobody will ever erase all the beautiful and good things there have been. So thank you!

Here we are, then, at the end of my ramble. It only remains for me to thank all those things that, while inanimate, made me suffer but at the same time thrilled me more than a little, among which Gnuplot, \LaTeX\ and Bash scripts deserve a place of honour. I close (this time for real) by thanking this crazy 2013, which brought me immense satisfactions and just as many sufferings, but which, with its load of great (sometimes too great) novelties, surprised me and pushed me to react with courage, making me feel truly alive...
\vspace{1cm}
\begin{quotation}
\emph{Better to add life to your days than days to your life.}
\flushright{Rita Levi Montalcini}
\end{quotation}
\newpage
\phantom{Empty page}
\chapter{AFDMC wave functions}
\label{app:Wave}
\section{Derivatives of the wave function: CM corrections}
\label{app:CM}
As seen in \S~\ref{subsec:Wave}, for finite systems the single particle orbitals must be referred to the CM of the system: $\bm r_p\rightarrow\bm r_p-\bm r_{CM}$.
Each derivative with respect to nucleon or hyperon coordinates has thus to be calculated including CM corrections. Let us call $\bm\rho_i$ the relative coordinates and $\bm r_i$ the absolute ones for nucleons, and $\bm\rho_\lambda$, $\bm r_\lambda$ the analogues for the hyperons. Then
\begin{align}
\bm\rho_i=\bm r_i-\bm\rho_{CM} \qquad \bm\rho_\lambda=\bm r_\lambda-\bm\rho_{CM} \;,
\end{align}
with
\begin{align}
\bm\rho_{CM}=\frac{1}{M}\left(m_N\sum_k \bm r_k+m_\Lambda\sum_\nu\bm r_\nu\right) \qquad M=\mathcal N_N\,m_N+\mathcal N_\Lambda\,m_\Lambda\;.
\end{align}
To simplify the notation, in the following we will use $r_p$ instead of $\bm r_p$. The equations for the first derivatives are valid for each Cartesian component of the position vectors, while in the relations for the second derivatives implicit sums over the Cartesian components are understood.
Consider a function of the relative nucleon and hyperon coordinates:
\begin{align}
f(\rho_N,\rho_\Lambda)\equiv f(\rho_1,\ldots,\rho_{\mathcal N_N},\rho_1,\ldots,\rho_{\mathcal N_\Lambda})\;,
\end{align}
In order to calculate the derivatives of $f(\rho_N,\rho_\Lambda)$ with respect to $r_p$, we need to change variables. Recalling that all the coordinates (of nucleons and hyperons) are now connected through the CM, we have
\begin{align}
\frac{\partial}{\partial r_i}f(\rho_N,\rho_\Lambda)
&=\sum_j\frac{\partial\rho_j}{\partial r_i}\frac{\partial}{\partial\rho_j}f(\rho_N,\rho_\Lambda)
+\sum_\mu\frac{\partial\rho_\mu}{\partial r_i}\frac{\partial}{\partial\rho_\mu}f(\rho_N,\rho_\Lambda)\;,\\[0.2em]
\frac{\partial}{\partial r_\lambda}f(\rho_N,\rho_\Lambda)
&=\sum_\mu \frac{\partial\rho_\mu}{\partial r_\lambda}\frac{\partial}{\partial \rho_\mu}f(\rho_N,\rho_\Lambda)
+\sum_j \frac{\partial\rho_j}{\partial r_\lambda}\frac{\partial}{\partial \rho_j}f(\rho_N,\rho_\Lambda)\;,
\end{align}
where
\begin{align}
\frac{\partial\rho_j}{\partial r_i}=\delta_{ij}-\frac{m_N}{M}\,,\quad\;
\frac{\partial\rho_\mu}{\partial r_i}=-\frac{m_N}{M}\,, \quad\;
\frac{\partial\rho_\mu}{\partial r_\lambda}=\delta_{\lambda\mu}-\frac{m_\Lambda}{M}\,, \quad\;
\frac{\partial\rho_j}{\partial r_\lambda}=-\frac{m_\Lambda}{M} \;.
\end{align}
The CM corrected first derivatives then take the form:
\begin{align}
\frac{\partial}{\partial r_i}f(\rho_N,\rho_\Lambda)
&=\left[\frac{\partial}{\partial\rho_i}-\frac{m_N}{M}\left(\sum_j\frac{\partial}{\partial\rho_j}
+\sum_\mu\frac{\partial}{\partial\rho_\mu}\right)\right]f(\rho_N,\rho_\Lambda) \;,\label{eq:d_CM_N} \\[0.2em]
\frac{\partial}{\partial r_\lambda}f(\rho_N,\rho_\Lambda)&=\left[\frac{\partial}{\partial\rho_\lambda}
-\frac{m_\Lambda}{M}\left(\sum_j\frac{\partial}{\partial\rho_j}
+\sum_\mu\frac{\partial}{\partial\rho_\mu}\right)\right]f(\rho_N,\rho_\Lambda) \;.\label{eq:d_CM_L}
\end{align}
For the second derivatives we have:
\begin{align}
\frac{\partial^2}{\partial r_i^2}f(\rho_N,\rho_\Lambda)
&=\left[\frac{\partial^2}{\partial\rho_i^2}-2\frac{m_N}{M}\left(\sum_j\frac{\partial^2}{\partial\rho_i\partial\rho_j}
+\sum_\mu\frac{\partial^2}{\partial\rho_i\partial\rho_\mu}\right)\right.\nonumber \\[0.2em]
&+\left.\frac{m_N^2}{M^2}\left(\sum_{jk}\frac{\partial^2}{\partial\rho_j\partial\rho_k}
+\sum_{\mu\nu}\frac{\partial^2}{\partial\rho_\mu\partial\rho_\nu}
+2\sum_{j\mu}\frac{\partial^2}{\partial\rho_j\partial\rho_\mu}\right)\right]f(\rho_N,\rho_\Lambda) \;, \label{eq:dd_CM_N}\\[0.5em]
\frac{\partial^2}{\partial r_\lambda^2}f(\rho_N,\rho_\Lambda)
&=\left[\frac{\partial^2}{\partial\rho_\lambda^2}-2\frac{m_\Lambda}{M}\left(\sum_\mu\frac{\partial^2}{\partial\rho_\lambda\partial\rho_\mu}
+\sum_j\frac{\partial^2}{\partial\rho_\lambda\partial\rho_j}\right)\right.\nonumber \\[0.2em]
&\left.+\frac{m_\Lambda^2}{M^2}\left(\sum_{\mu\nu}\frac{\partial^2}{\partial\rho_\mu\partial\rho_\nu}
+\sum_{jk}\frac{\partial^2}{\partial\rho_j\partial\rho_k}
+2\sum_{\mu j}\frac{\partial^2}{\partial\rho_\mu\partial\rho_j}\right)\right]f(\rho_N,\rho_\Lambda) \;.\label{eq:dd_CM_L}
\end{align}
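The chain rules above are easy to get wrong when implemented. As a simple sanity check (not part of the actual AFDMC code), the first derivative relation of Eq.~(\ref{eq:d_CM_N}) can be verified numerically on an arbitrary scalar function of the relative coordinates; in the following Python sketch the masses, the particle numbers and the test function are purely illustrative choices:
\begin{verbatim}
import numpy as np

# Toy check of the CM-corrected first derivative, Eq. (d_CM_N):
# d/dr_i f(rho) = [d/drho_i - (m_N/M) sum_p d/drho_p] f(rho).
# Masses, particle numbers and the test function are illustrative only.

m_N, m_L = 938.9, 1115.7            # MeV
NN, NL = 4, 1                       # nucleons and hyperons
M = NN * m_N + NL * m_L

rng = np.random.default_rng(0)
r = rng.normal(size=(NN + NL, 3))   # absolute coordinates (nucleons first)
masses = np.array([m_N] * NN + [m_L] * NL)

def relative(r):
    r_cm = (masses[:, None] * r).sum(axis=0) / M
    return r - r_cm                 # rho_p = r_p - r_CM

def f(rho):
    # arbitrary smooth scalar function of the relative coordinates
    return np.sum(np.exp(-0.1 * np.sum(rho**2, axis=1)))

def grad_f_wrt_rho(rho, eps=1e-6):
    # finite-difference gradient with respect to the relative coordinates
    g = np.zeros_like(rho)
    for p in range(rho.shape[0]):
        for a in range(3):
            d = np.zeros_like(rho); d[p, a] = eps
            g[p, a] = (f(rho + d) - f(rho - d)) / (2 * eps)
    return g

rho = relative(r)
g_rho = grad_f_wrt_rho(rho)

# chain rule with CM correction for nucleon i (Eq. d_CM_N)
i = 1
cm_corrected = g_rho[i] - (m_N / M) * g_rho.sum(axis=0)

# direct finite difference with respect to the absolute coordinate r_i
eps = 1e-6
direct = np.zeros(3)
for a in range(3):
    d = np.zeros_like(r); d[i, a] = eps
    direct[a] = (f(relative(r + d)) - f(relative(r - d))) / (2 * eps)

print(np.allclose(cm_corrected, direct))   # True
\end{verbatim}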
Consider now the hypernuclear wave function of Eq.~(\ref{eq:Psi_T}) and assume the compact notation:
\begin{align}
\psi_T&=\prod_{\lambda i}f_c^{\Lambda N}(r_{\lambda i})\,\psi_T^N(R_N,S_N)\,\psi_T^\Lambda(R_\Lambda,S_\Lambda)\;,\nonumber\\[0.4em]
&=\prod_{\lambda i}f_c^{\Lambda N}(r_{\lambda i})\prod_{i<j}f_c^{NN}(r_{ij})\prod_{\lambda<\mu}f_c^{\Lambda\Lambda}(r_{\lambda\mu})
\det\Bigl\{\varphi_\epsilon^N(\bm r_i,s_i)\Bigr\}\det\Bigl\{\varphi_\epsilon^\Lambda(\bm r_\lambda,s_\lambda)\Bigr\}\;,\nonumber\\[0.4em]
&=J_{\Lambda N}\,J_{NN}\,J_{\Lambda\Lambda}\,\text{det}_N\,\text{det}_\Lambda\;.
\end{align}
The trial wave function is written in the single particle representation, so it should be possible to factorize the calculation of the derivatives over its components. However, when relative coordinates with respect to the CM are used, the antisymmetric part of the wave function $\text{det}_N\,\text{det}_\Lambda$ has to be treated as a function of both nucleon and hyperon coordinates, like the function $f(\rho_N,\rho_\Lambda)$ used above. The Jastrow correlation functions, instead, being functions of interparticle distances, are not affected by the CM corrections. It is then straightforward to obtain the derivatives with respect to the nucleon and hyperon coordinates by calculating the local derivatives:
\begin{align}
\frac{\partial_p\psi_T}{\psi_T}=\frac{\frac{\partial}{\partial R_p}\psi_T}{\psi_T}\quad\quad\text{with}\quad p=N,\Lambda\;,
\end{align}
which are of particular interest in the AFDMC code for the calculation of the drift velocity of Eq.~(\ref{eq:drift}) and the local energy of Eq.~(\ref{eq:E_L}).
The first local derivatives read
\begin{align}
\frac{\partial_N\psi_T}{\psi_T}
&=\frac{\partial_N J_{NN}}{J_{NN}}
+\frac{\partial_N J_{\Lambda N}}{J_{\Lambda N}}
+\frac{\partial_N\left(\text{det}_N\text{det}_\Lambda\right)}{\text{det}_N\text{det}_\Lambda}\;,\\[1.0em]
\frac{\partial_\Lambda\psi_T}{\psi_T}
&=\frac{\partial_\Lambda J_{\Lambda\Lambda}}{J_{\Lambda\Lambda}}
+\frac{\partial_\Lambda J_{\Lambda N}}{J_{\Lambda N}}
+\frac{\partial_\Lambda\left(\text{det}_N\text{det}_\Lambda\right)}{\text{det}_N\text{det}_\Lambda}\;,
\end{align}
while the second local derivatives take the form
\begin{align}
\frac{\partial_N^2\psi_T}{\psi_T}&=\frac{\partial_N^2 J_{NN}}{J_{NN}}
+\frac{\partial_N^2 J_{\Lambda N}}{J_{\Lambda N}}
+\frac{\partial_N^2\left(\text{det}_N\text{det}_\Lambda\right)}{\text{det}_N\text{det}_\Lambda}
+2\frac{\partial_N J_{NN}}{J_{NN}}\frac{\partial_N J_{\Lambda N}}{J_{\Lambda N}}\nonumber\\[0.5em]
&\quad\,+2\frac{\partial_N J_{NN}}{J_{NN}}\frac{\partial_N\left(\text{det}_N\text{det}_\Lambda\right)}{\text{det}_N\text{det}_\Lambda}
+2\frac{\partial_N J_{\Lambda N}}{J_{\Lambda N}}\frac{\partial_N\left(\text{det}_N\text{det}_\Lambda\right)}{\text{det}_N\text{det}_\Lambda}\;,\\[1.0em]
\frac{\partial_\Lambda^2\psi_T}{\psi_T}&=\frac{\partial_\Lambda^2 J_{\Lambda\Lambda}}{J_{\Lambda\Lambda}}
+\frac{\partial_\Lambda^2 J_{\Lambda N}}{J_{\Lambda N}}
+\frac{\partial_\Lambda^2\left(\text{det}_N\text{det}_\Lambda\right)}{\text{det}_N\text{det}_\Lambda}
+2\frac{\partial_\Lambda J_{\Lambda\Lambda}}{J_{\Lambda\Lambda}}\frac{\partial_\Lambda J_{\Lambda N}}{J_{\Lambda N}}\nonumber\\[0.5em]
&\quad\,+2\frac{\partial_\Lambda J_{\Lambda\Lambda}}{J_{\Lambda\Lambda}}\frac{\partial_\Lambda\left(\text{det}_N\text{det}_\Lambda\right)}{\text{det}_N\text{det}_\Lambda}
+2\frac{\partial_\Lambda J_{\Lambda N}}{J_{\Lambda N}}\frac{\partial_\Lambda\left(\text{det}_N\text{det}_\Lambda\right)}{\text{det}_N\text{det}_\Lambda}\;.
\end{align}
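The expansion above is nothing but the product rule for a product of three factors (two Jastrow factors and the product of the two determinants) divided by $\psi_T$ itself. A quick symbolic check of the identity, with generic one-dimensional functions standing in for the Jastrow factors and the determinants (a minimal illustration, not the AFDMC implementation), reads:
\begin{verbatim}
import sympy as sp

# Symbolic check of the second local derivative of psi = J1 * J2 * D,
# with J1, J2, D generic functions of the same coordinate x.
x = sp.symbols('x')
J1, J2, D = sp.Function('J1')(x), sp.Function('J2')(x), sp.Function('D')(x)
psi = J1 * J2 * D

lhs = sp.diff(psi, x, 2) / psi
rhs = (sp.diff(J1, x, 2)/J1 + sp.diff(J2, x, 2)/J2 + sp.diff(D, x, 2)/D
       + 2*sp.diff(J1, x)/J1 * sp.diff(J2, x)/J2
       + 2*sp.diff(J1, x)/J1 * sp.diff(D, x)/D
       + 2*sp.diff(J2, x)/J2 * sp.diff(D, x)/D)

print(sp.simplify(lhs - rhs))   # prints 0
\end{verbatim}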
The derivatives of the Jastrow correlation functions require only standard calculations, while for the derivatives of the Slater determinant (SD) we need to include the CM corrections as in Eqs.~(\ref{eq:d_CM_N}), (\ref{eq:d_CM_L}), (\ref{eq:dd_CM_N}) and (\ref{eq:dd_CM_L}). Moreover, the derivative of a SD is typically rather expensive computationally, and the above relations involve many terms, including mixed derivatives. An efficient way to deal with the derivatives of a SD is described in the next section.
\newpage
\section{Derivatives of a Slater determinant}
\label{app:d_SD}
Consider a Slater determinant $|A|$ and define $A_{ij}=f_i(j)$, so that $\partial_j A_{ij}=f'_i(j)$. Let $^i B$ be the matrix equal to $A$ but with the column $i$ replaced by the derivatives of the orbitals: $^i B_{ki}=f'_k(i)$ and $^i B_{kj}=f_k(j)$ for $j\neq i$. Consider then the trivial identity
\begin{align}
|Q|=|Q|\sum_i Q_{ij} Q_{ji}^{-1}=\sum_i Q_{ij}(Q_{ji}^{-1}|Q|)\;,
\end{align}
and the following relation
\begin{align}
Q_{ji}^{-1}|Q|=(-1)^{i+j}|Q^{(ij)}| \;,
\end{align}
where the minor $Q^{(ij)}$ is, by definition, $j$-independent. The first derivative of a SD takes the form
\begin{align}
\partial_j|A|=|A|\sum_i A_{ji}^{-1}(\partial_j A_{ij})=|A|\sum_i A_{ji}^{-1}f'_i(j) \;,
\label{eq:d_SD}
\end{align}
and the second derivative reads:
\begin{align}
\partial_j^2|A|=|A|\sum_i A_{ji}^{-1}(\partial^2_j A_{ij})=|A|\sum_i A_{ji}^{-1}f''_i(j) \;.
\label{eq:d2_SD_i}
\end{align}
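Eqs.~(\ref{eq:d_SD}) and (\ref{eq:d2_SD_i}) can be verified numerically on a small toy determinant. In the following Python sketch the one-dimensional orbitals $f_i(x)=x^i\exp(-x^2/2)$ and the coordinates are arbitrary illustrative choices (not the AFDMC single particle orbitals); only the first derivative check is shown:
\begin{verbatim}
import numpy as np

# Numerical check of Eq. (d_SD): d_j|A| = |A| sum_i A^{-1}_{ji} f'_i(j),
# using a small toy Slater matrix built from one-dimensional orbitals
# f_i(x) = x^i exp(-x^2/2). Orbitals and coordinates are illustrative only.

n = 4
rng = np.random.default_rng(1)
x = rng.normal(size=n)                       # "particle" coordinates

def orb(i, xj):
    return xj**i * np.exp(-0.5 * xj**2)

def dorb(i, xj):                             # analytic derivative of orb
    poly = i * xj**(i - 1) if i > 0 else 0.0
    return (poly - xj**(i + 1)) * np.exp(-0.5 * xj**2)

def slater(x):
    # A_ij = f_i(x_j)
    return np.array([[orb(i, xj) for xj in x] for i in range(n)])

A = slater(x)
detA, Ainv = np.linalg.det(A), np.linalg.inv(A)

j = 2
analytic = detA * sum(Ainv[j, i] * dorb(i, x[j]) for i in range(n))

eps = 1e-6
xp, xm = x.copy(), x.copy()
xp[j] += eps
xm[j] -= eps
numeric = (np.linalg.det(slater(xp)) - np.linalg.det(slater(xm))) / (2 * eps)

print(np.isclose(analytic, numeric))         # True
\end{verbatim}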
An efficient way to compute the second mixed derivative of a SD $\partial_j\partial_i|A|$ is to write the first derivative as $|^j B|=\partial_j|A|$, i.e.
\begin{align}
|^j B|=|A|\sum_i A_{ji}^{-1}f'_i(j) \;.
\label{eq:i_B}
\end{align}
Using the relation (\ref{eq:d_SD}) for $|^i B|$, we can write
\begin{align}
\partial_j\partial_i|A|=\partial_j|^i B|=|^i B| \sum_k(^i B)_{jk}^{-1}(\partial_j{^i B_{kj}}) \;.
\end{align}
Choosing $j\neq i$ we have that $(\partial_j{^i B_{kj}})=(\partial_j A_{kj})=f'_k(j)$ and, using (\ref{eq:i_B}), it is possible to rewrite the previous equation as:
\begin{align}
\partial_j\partial_i|A|=|A|\left(\sum_k(^i B)_{jk}^{-1}f'_k(j)\right)\left(\sum_k A_{ik}^{-1}f'_k(i)\right) \;.
\end{align}
Consider now the Sherman-Morrison formula
\begin{align}
(A+\bm u\,\bm v^T)^{-1}= A^{-1}-\frac{( A^{-1}\bm u\,\bm v^T A^{-1})}{1+\bm v^T A^{-1}\bm u}\;,
\end{align}
with $\bm u,\bm v$ vectors. If we choose $(A+\bm u\,\bm v^T)={^i B}$, i.e.
\begin{align}
u_k=f'_k(i)-f_k(i) \qquad
\left\{\begin{array}{ll}
v_k=0 & k\neq i \\
v_k=1 & k=i
\end{array} \right.
\end{align}
we can use the Sherman-Morrison relation to compute $(^i B)^{-1}$:
\begin{align}
(^i B)_{jk}^{-1}= A_{jk}^{-1}- A_{ik}^{-1}\frac{\displaystyle\left(\sum_k A_{jk}^{-1}f'_k(i)\right)
-\left(\sum_k A_{jk}^{-1}f_k(i)\right)}{\displaystyle1+\left(\sum_k A_{ik}^{-1}f'_k(i)\right)
-\left(\sum_k A_{ik}^{-1}f_k(i)\right)} \;.
\end{align}
Recalling that $f_k(i)=A_{ki}$ and assuming $j\neq i$ we have
\begin{align}
(^i B)_{jk}^{-1}= A_{jk}^{-1}- A_{ik}^{-1}\frac{\displaystyle\left(\sum_k A_{jk}^{-1}f'_k(i)\right)
-\cancelto{0}{\left(\sum_k A_{jk}^{-1} A_{ki}\right)}}{\displaystyle\cancel{1}+\left(\sum_k A_{ik}^{-1}f'_k(i)\right)
-\cancel{\left(\sum_k A_{ik}^{-1} A_{ki}\right)}} \;.
\end{align}
Finally the second mixed derivative ($j\neq i$) of a SD results:
\begin{align}
\partial_j\partial_i|A|&=|A|\!\left\{\left[\sum_k A_{ik}^{-1}f'_k(i)\right]\!\!\left[\sum_k A_{jk}^{-1}f'_k(j)\right]\!-\!\left[\sum_k A_{ik}^{-1}f'_k(j)\right]\!\!\left[\sum_k A_{jk}^{-1}f'_k(i)\right]\right\} \,.
\label{eq:d2_SD_ij}
\end{align}
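The structure of Eq.~(\ref{eq:d2_SD_ij}) can also be checked numerically against a central finite difference. The following self-contained Python sketch uses the same kind of toy one-dimensional orbitals as above (again an illustrative choice only):
\begin{verbatim}
import numpy as np

# Numerical check of the mixed derivative Eq. (d2_SD_ij), j != i,
# with toy one-dimensional orbitals f_i(x) = x^i exp(-x^2/2).

n = 4
rng = np.random.default_rng(2)
x = rng.normal(size=n)

def orb(i, xj):
    return xj**i * np.exp(-0.5 * xj**2)

def dorb(i, xj):
    poly = i * xj**(i - 1) if i > 0 else 0.0
    return (poly - xj**(i + 1)) * np.exp(-0.5 * xj**2)

def slater(x):
    return np.array([[orb(i, xj) for xj in x] for i in range(n)])

i, j = 0, 3
A = slater(x)
detA, Ainv = np.linalg.det(A), np.linalg.inv(A)

def s(a, b):                       # sum_k A^{-1}_{ak} f'_k(x_b)
    return sum(Ainv[a, k] * dorb(k, x[b]) for k in range(n))

analytic = detA * (s(i, i) * s(j, j) - s(i, j) * s(j, i))

eps = 1e-4                         # central mixed finite difference
def det_at(di, dj):
    y = x.copy()
    y[i] += di
    y[j] += dj
    return np.linalg.det(slater(y))

numeric = (det_at(eps, eps) - det_at(eps, -eps)
           - det_at(-eps, eps) + det_at(-eps, -eps)) / (4.0 * eps**2)

print(np.isclose(analytic, numeric, rtol=1e-4))   # True
\end{verbatim}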
Eqs.~(\ref{eq:d_SD}), (\ref{eq:d2_SD_i}) and (\ref{eq:d2_SD_ij}) are used to calculate the derivatives, with all the CM corrections, of the Slater determinant $f(\rho_N,\rho_\Lambda)=\text{det}_N\text{det}_\Lambda$. The derivation of these equations is actually valid for any single particle operator $\mathcal O_j$. Eqs.~(\ref{eq:d_SD}), (\ref{eq:d2_SD_i}) and (\ref{eq:d2_SD_ij}) can thus be used to describe the linear or quadratic action of a single particle operator on a SD, which can be expressed as a local operator:
\begin{align}
\frac{\mathcal O_j|A|}{|A|}&=\sum_i A_{ji}^{-1}(\mathcal O_j A_{ij}) \;,\\[0.5em]
\frac{\mathcal O_j^2|A|}{|A|}&=\sum_i A_{ji}^{-1}(\mathcal O^2_j A_{ij})\;,\\[0.5em]
\frac{\mathcal O_j\mathcal O_i|A|}{|A|}&=\left\{\left[\sum_k A_{ik}^{-1}(\mathcal O_i A_{ki})\right]\!\!\left[\sum_k A_{jk}^{-1}(\mathcal O_j A_{kj})\right]\right.\nonumber\\[0.2em]
&\hspace{0.32cm}-\left.\left[\sum_k A_{ik}^{-1}(\mathcal O_j A_{kj})\right]\!\!\left[\sum_k A_{jk}^{-1}(\mathcal O_i A_{ki})\right]\right\} \;.
\end{align}
For example, considering the spin term of Eq.~(\ref{eq:V_NN_SD}) we have:
\begin{align}
\frac{\sigma_{i\alpha}\,\sigma_{j\beta}|A|}{|A|}&=\left\{\left[\sum_k A_{jk}^{-1}\sigma_{j\beta} A_{kj} \right]\!\!\left[\sum_k A_{ik}^{-1}\sigma_{i\alpha} A_{ki}\right]\right.\nonumber\\[0.2em]
&\hspace{0.32cm}-\left.\left[\sum_k A_{jk}^{-1}\sigma_{i\alpha} A_{ki} \right]\!\!\left[\sum_k A_{ik}^{-1}\sigma_{j\beta} A_{kj} \right]\right\}\;,
\end{align}
where $|A|$ could be again the SD $\text{det}_N\text{det}_\Lambda$ of the trial wave function.
\chapter{$\Lambda N$ space exchange potential}
\label{app:Px}
As proposed by Armani in his Ph.D. thesis~\cite{Armani:2011_thesis}, the inclusion of the $\mathcal P_x$ operator in the AFDMC propagator could possibly be realized by a mathematical extension of the isospin of nucleons
\begin{align}
\left(\begin{array}{c} p \\ n \end{array}\right)\otimes\Bigl(\Lambda\Bigr)\quad\longrightarrow\quad\left(\begin{array}{c} p \\ n \\ \Lambda \end{array}\right)\;,
\end{align}
such that in the wave function hyperon and nucleon states can be mixed, referring now to indistinguishable particles. An antisymmetric wave function with respect to particle exchange must be an eigenstate of the pair exchange operator $\mathcal P_{pair}$ with eigenvalue $-1$:
\begin{align}
-1=\mathcal P_{pair}=\mathcal P_x\,\mathcal P_\sigma\,\mathcal P_\tau \quad\Rightarrow\quad \mathcal P_x=-\mathcal P_\sigma\,\mathcal P_\tau \;,
\end{align}
where $\mathcal P_x$ exchanges the coordinates of the pair, $\mathcal P_\sigma$ the spins and $\mathcal P_\tau$ the extended isospins:
\begin{align}
\mathcal P_\sigma(i\longleftrightarrow j)&=\frac{1}{2}\left(1+\sum_{\alpha=1}^3\sigma_{i\alpha}\,\sigma_{j\alpha}\right)\;,\\[0.5em]
\mathcal P_\tau(i\longleftrightarrow j)&=\frac{1}{2}\left(\frac{2}{3}+\sum_{\alpha=1}^8\lambda_{i\alpha}\,\lambda_{j\alpha}\right)\;.
\end{align}
The particle indices $i$ and $j$ run over both nucleons and hyperons, and the $\lambda_{i\alpha}$ are the eight Gell-Mann matrices. $\mathcal P_x$ now takes a form (squares of operators) suitable for the implementation in the AFDMC propagator. The technical difficulty of such an approach is that the structure of the code needs to be deeply modified. The hypernuclear wave function has to be written as a single Slater determinant including nucleon and hyperon states, matched with the new 3-component isospinors and the 2-component spinors, i.e.\ a global 6-component vector. All the potential operators must be represented as $6\times 6$ matrices, and those acting on nucleons and hyperons separately must be projected onto the correct extended isospin states:
\begin{align}
\mathcal P_N &=\frac{2+\sqrt{3}\,\lambda_8}{3}=\left(\begin{array}{ccc}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 0
\end{array}\right)\;,\\[0.5em]
\mathcal P_\Lambda &=\frac{1-\sqrt{3}\,\lambda_8}{3}=\left(\begin{array}{ccc}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 1
\end{array}\right)\;.
\end{align}
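As a quick consistency check of the two projectors, note that with $\lambda_8=\text{diag}(1,1,-2)/\sqrt{3}$ the combinations above reduce exactly to $\text{diag}(1,1,0)$ and $\text{diag}(0,0,1)$, i.e.\ idempotent projectors summing to the identity. A minimal numerical verification (purely illustrative) is:
\begin{verbatim}
import numpy as np

# Sanity check of the extended-isospin projectors built from lambda_8.
lam8 = np.diag([1.0, 1.0, -2.0]) / np.sqrt(3.0)
P_N  = (2.0 * np.eye(3) + np.sqrt(3.0) * lam8) / 3.0
P_L  = (1.0 * np.eye(3) - np.sqrt(3.0) * lam8) / 3.0

assert np.allclose(P_N, np.diag([1.0, 1.0, 0.0]))
assert np.allclose(P_L, np.diag([0.0, 0.0, 1.0]))
assert np.allclose(P_N @ P_N, P_N) and np.allclose(P_L @ P_L, P_L)
assert np.allclose(P_N + P_L, np.eye(3))
print("projector identities verified")
\end{verbatim}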
In addition, due to the non negligible mass difference between nucleons and hyperons, the kinetic energy operator must also be split for states with different mass:
\begin{align}
\e^{-d\tau\frac{\hbar^2}{2}\sum_i\mathcal O_{m_i}\nabla_i^2 }\quad\quad\text{with}\quad
\mathcal O_{m_i}=\left(\begin{array}{ccc}
1/m_N & 0 & 0 \\
0 & 1/m_N & 0 \\
0 & 0 & 1/m_\Lambda
\end{array}\right)\;.
\end{align}
Finally, it is not even clear whether all the operators of the two- and three-body hyperon-nucleon interaction would still be written in a form suitable for the application of the Hubbard-Stratonovich transformation. For pure neutron systems this approach might simply reduce to an analog of the nucleonic case. The extended spin-isospin vector would have four components and all the operators would be represented as $4\times 4$ matrices coupled with the $\mathcal P_N$ and $\mathcal P_\Lambda$ projectors on the reduced space. The $\mathcal O_{m_i}$ operator would have just two diagonal elements, with the mass of the neutron and that of the hyperon. Although this purely mathematical approach could be applied, many questions arise from the physical point of view. By considering an extended isospin vector, states with different strangeness (0 for nucleons and $-1$ for the $\Lambda$~particle) would mix during the imaginary time evolution. This violates the conservation of strangeness, which should instead be respected by the strong interaction. The picture becomes even less clear if we consider the $\Lambda\Lambda$ interaction of Eq.~(\ref{eq:V_LL}), because strangeness would be distributed among all the particles while the potential is explicitly developed for hyperon-hyperon pairs. Thus, for the phenomenological interactions introduced in Chapter~\ref{chap:hamiltonians}, this mathematical approach is not feasible and it has not been investigated in this work.
\chapter{Strangeness in nuclear systems}
\label{chap:strangeness}
\fancyhead[LO]{\emph{\nouppercase{\rightmark}}}
\fancyhead[RE]{\emph{\nouppercase{\leftmark}}}
\hypersetup{linkcolor=blue}
\renewcommand{\thefigure}{\arabic{chapter}.\arabic{figure}}
Hyperons are baryons containing one or more strange quarks. They have masses larger than nucleons and lifetimes characteristic of the weak decay. The $\Lambda$ and $\Omega$ hyperons are isospin singlets, the $\Sigma$s form an isospin triplet and the $\Xi$ particles an isospin doublet. In Tab.~\ref{tab:hyperons} we report the list of hyperons (excluding resonances and unnatural parity states~\cite{Beringer:2012}), together with their main properties. The isospin doublet of nucleons is also shown for comparison.
\renewcommand{\arraystretch}{1.4}
\begin{table}[hb]
\begin{center}
\begin{tabular*}{\linewidth}{@{\hspace{0.5em}\extracolsep{\fill}}lccc S[table-format=4.7] S[table-format=1.8] l@{\extracolsep{\fill}\hspace{0.5em}}}
\toprule
\toprule
Baryon & qqq & $S$ & $I$ & {$m$~[\rm{MeV}]} & {$\tau$~[$10^{-10}$~s]} & Decay mode \\
\midrule
\hspace{1.3em}$p$ & uud & \multirow{2}{*}{$0$} & \multirow{2}{*}{$\displaystyle\frac{1}{2}$} & 938.27205(2) & {$\sim10^{32}$~y} & many \\
\hspace{1.3em}$n$ & udd & & & 939.56538(2) & {808(1)~s} & $p\,e\,\bar\nu_e$ \\[0.8em]
\hspace{1.3em}$\Lambda$ & uds & $-1$ & 0 & 1115.683(6) & 2.63(2) & $p\,\pi^-, n\,\pi^0$ \\[0.8em]
\hspace{1.3em}$\Sigma^+$ & uus & \multirow{3}{*}{$-1$} & \multirow{3}{*}{1} & 1189.37(7) & 0.802(3) & $p\,\pi^0, n\,\pi^+$ \\
\hspace{1.3em}$\Sigma^0$ & uds & & & 1192.64(2) & 7.4(7)$\hspace{-1cm}\times10^{-10}$ & $\Lambda\,\gamma$ \\
\hspace{1.3em}$\Sigma^-$ & dds & & & 1197.45(3) & 1.48(1) & $n\,\pi^-$ \\[0.8em]
\hspace{1.3em}$\Xi^0$ & uss & \multirow{2}{*}{$-2$} & \multirow{2}{*}{$\displaystyle\frac{1}{2}$} & 1314.9(2) & 2.90(9) & $\Lambda\,\pi^0$ \\
\hspace{1.3em}$\Xi^-$ & dss & & & 1321.71(7) & 1.64(2) & $\Lambda\,\pi^-$ \\[0.8em]
\hspace{1.3em}$\Omega^-$ & sss & $-3$ & 0 & 1672.5(3) & 0.82(1) & $\Lambda\,K^-, \Xi^0\,\pi^-, \Xi^-\,\pi^0$ \\
\bottomrule
\bottomrule
\end{tabular*}
\caption[Nucleon and hyperon properties]
{Nucleon and hyperon properties: quark components, strangeness, isospin, mass, mean life and principal decay
modes~\cite{Beringer:2012}.}
\label{tab:hyperons}
\end{center}
\end{table}
\renewcommand{\arraystretch}{1.0}
In the non strange nuclear sector a large amount of information is available for nucleon-nucleon scattering. The Nijmegen $NN$ scattering database~\cite{Bergervoet:1990,Stoks:1993} includes 1787~$pp$ and 2514~$np$ data in the range $0\div350$~MeV. Due to the instability of hyperons in the vacuum and the impossibility of collecting hyperon-neutron and hyperon-hyperon scattering data, the available information in the strange nuclear sector is instead very limited. Although many events have been reported both in the low and high energy regimes~\cite{Gibson:1995}, the standard set employed in modern hyperon-nucleon interactions (see for example Ref.~\cite{Schulze:2013}) comprises 35 selected $\Lambda p$ low energy scattering data~\cite{deSwart:1971} and some $\Lambda N$ and $\Sigma N$ data at higher energies~\cite{Kadyk:1971}. In addition there are the recently measured $\Sigma^+ p$ cross sections of the KEK-PS E289 experiment~\cite{Ahn:2005}, for a total of 52 $YN$ scattering data.
The very limited experimental possibilities of exploring hyperon-nucleon and hyperon-hyperon interactions in elementary scattering experiments make the detailed study of hypernuclei essential to understand the physics of the strange sector. In the following, we present a summary of the available experimental data on hypernuclei. This information is the key ingredient in the development of realistic hyperon-nucleon and hyperon-hyperon interactions, as described in the next chapters. The theoretical evidence for the appearance of hyperons in the core of a NS and the problem of the hyperon puzzle will then be discussed, following the results of many-body calculations for the available models of hypermatter.
\section{Hyperons in finite nuclei}
\label{sec:hyp}
In high-energy nuclear reactions strange hadrons are produced abundantly, and they are strongly involved in the reaction process. When hyperons are captured by nuclei, hypernuclei are formed, which live long compared with typical nuclear reaction times. Extensive efforts have been devoted to the study of hypernuclei. Among the many strange nuclear systems, the single $\Lambda$~hypernucleus is the most investigated one~\cite{Hashimoto:2006}.
The history of experimental hypernuclear research (see Refs.~\cite{Davis:2005,Dalitz:2005,Hashimoto:2006} for a complete review) celebrates this year its sixtieth anniversary: the discovery of hypernuclei by Danysz and Pniewski was published in 1953~\cite{Danysz:1953}. Their first event was an example of $^3_\Lambda$H decaying via
\begin{align}
^3_\Lambda\text{H}\longrightarrow\,^3\text{He}+\pi^-\;,
\end{align}
confirming that the bound particle was a $\Lambda$~hyperon. The event was observed in an emulsion stack as a consequence of nuclear multifragmentation induced by cosmic rays. This first evidence opened the study of light $\Lambda$~hypernuclei ($A<16$) by emulsion experiments, at the beginning by means of cosmic ray observations and then through proton and pion beams, although the production rates were low and the background was large. In the early 1970s, the advent of kaon beams at CERN, and later at Brookhaven National Laboratory (BNL), opened the possibility of spectroscopic studies of hypernuclei, including excited states, by means of the $(K^-,\pi^-)$ reaction (see Fig.~\ref{fig:reactions}). A third stage, which featured the use of the $(\pi^+,K^+)$ reaction, began in the mid 1980s, first at the Alternating Gradient Synchrotron (AGS) of BNL and then at the proton synchrotron (PS) of the High Energy Accelerator Organization (KEK) in Japan. Here, the superconducting kaon spectrometer (SKS) played a key role in exploring $\Lambda$~hypernuclear spectroscopy by the $(\pi^+,K^+)$ reaction. $\gamma$-ray spectroscopy reached unprecedented resolution through the use of a germanium detector array, the Hyperball, while the high quality and high intensity electron beams available at the Thomas Jefferson National Accelerator Facility (JLab) permitted the first successful $(e,e' K^+)$ hypernuclear spectroscopy measurement (a historical review of hypernuclear spectroscopy with electron beams can be found in Ref.~\cite{Nakamura:2013_HYP2012}; a detailed analysis of $\Lambda$~hypernuclear spectroscopy is reported in Ref.~\cite{Hashimoto:2006}).
\begin{figure}[htb]
\centering
\includegraphics[width=0.73\linewidth]{Hyp_reactions.pdf}
\caption[Strangeness producing reactions]{Schematic presentation of three strangeness producing reactions used in the study of $\Lambda$~hypernuclei.}
\label{fig:reactions}
\end{figure}
With the development of new facilities, like the Japanese J-PARC (Proton Accelerator Research Complex), other reaction channels for the production of neutron rich $\Lambda$~hypernuclei became available. The candidates are the single charge exchange (SCX) reactions $(K^-,\pi^0)$ and $(\pi^-,K^0)$, and the double charge exchange (DCX) reactions $(\pi^-,K^+)$ and $(K^-,\pi^+)$. Fig.~\ref{fig:exp_reactions} nicely illustrates the complementarity of the various production mechanisms and thus the need to study hypernuclei with different reactions. Moreover, during the last 20 years of research, great progress has been made in the investigation of multifragmentation reactions associated with heavy ion collisions (see for instance~\cite{Ogul:2011} and references therein). This gives the opportunity to apply the same reactions to the production of hypernuclei too~\cite{Botvina:2007,Topor:2010}. On the other hand, it was noticed that the absorption of hyperons in the spectator regions of peripheral relativistic ion collisions is a promising way to produce hypernuclei~\cite{Wakai:1988,Gaitanos:2009}. Also, central collisions of relativistic heavy ions can lead to the production of light hypernuclei~\cite{Steinheimer:2012}. Recent experiments have confirmed observations of hypernuclei in such reactions, in both peripheral~\cite{Saito:2012,Botvina:2012} and central collisions~\cite{STAR:2010}.
\begin{figure}[htb]
\centering
\includegraphics[width=\linewidth]{Exp_reactions.pdf}
\caption[$\Lambda$~hypernuclei accessible via different experimental reactions]
{$\Lambda$~hypernuclei accessible by experiments for different production channels.
The boundaries at the neutron and proton rich side mark the predicted drip lines by a nuclear mass formula extended to strange nuclei.
Figure taken from Ref.~\cite{Pochodzalla:2011}.}
\label{fig:exp_reactions}
\end{figure}
At the time of writing, many laboratories are making extensive efforts in the study of $\Lambda$~hypernuclei. The state of the art, together with future prospects, can be found in Refs.~\cite{Tamura:2012,Tamura:2013_HYP2012,TakahashiT:2013_HYP2012} for the J-PARC facility and in Ref.~\cite{Lea:2013_HYP2012} for the ALICE (A Large Ion Collider Experiment) experiment at the LHC. Ref.~\cite{Garibaldi:2013_HYP2012} reports the status of the JLab Hall A program. In Ref.~\cite{Esser:2013_HYP2012} future prospects for the PANDA (antiProton ANnihilation at DArmstadt) project at FAIR (Facility for Antiproton and Ion Research) and for the hypernuclear experiments using the KAOS spectrometer at MAMI (Mainz Microtron) can be found. The latest results from the FINUDA (FIsica NUcleare a DA$\Phi$NE) collaboration at DA$\Phi$NE, Italy, are reported in Ref.~\cite{Feliciello:2013}. Recent interest has also been focused on the $S=-2$ sector with the study of double $\Lambda$~hypernuclei~\cite{Harada:2013_HYP2012} and on the $S=-3$ sector with the search for $\Omega$~hypernuclei~\cite{TakahashiH:2013_HYP2012}.
So far, there is no evidence for $\Lambda p$ or $^3_\Lambda$He bound states. Only very recently possible evidence for the three-body system $\Lambda nn$ has been reported~\cite{Rappold:2013_PRC(R)}. The first well established weakly bound system is $^3_\Lambda$H, with a hyperon separation energy $B_\Lambda$ (the energy difference between the $A-1$ nucleus and the $A$ hypernucleus, $A$ being the total number of baryons) of $0.13(5)$~MeV~\cite{Juric:1973}. Besides the very old experimental results~\cite{Juric:1973,Cantwell:1974,Prowse:1966}, several measurements of single $\Lambda$~hypernuclei have become available in recent years through the many techniques described above~\cite{Pile:1991,Hasegawa:1996,Yuan:2006,Cusanno:2009,Agnello:2010,Agnello:2012_H6L,Nakamura:2013,Feliciello:2013}. An updated determination of the lifetime of $_\Lambda^3$H and $_\Lambda^4$H has been recently reported~\cite{Rappold:2013}, and new proposals for the search of exotic $\Lambda$~hypernuclei are constantly discussed (see for example the search for $_\Lambda^9$He~\cite{Agnello:2012}). One of the results of this investigation is the compilation of the $\Lambda$~hypernuclear chart reported in Fig.~\ref{fig:hyperchart}. Despite the extensive experimental studies in the $S=-1$ strangeness sector, the available information on hypernuclei is still far from the abundance of data in the non strange sector.
\begin{figure}[!b]
\centering
\includegraphics[width=\linewidth]{Hyperchart.pdf}
\caption[$\Lambda$~hypernuclear chart]{$\Lambda$~hypernuclear chart presented at the
\href{http://icc.ub.edu/congress/HYP2012/}{XI International Conference on Hypernuclear and Strange Particle Physics (HYP2012)}, October 2012, Spain.
The figure has been updated from Ref.~\cite{Hashimoto:2006}.}
\label{fig:hyperchart}
\end{figure}
It is interesting to observe that, as $A$ increases, there is an orderly increase of $B_\Lambda$ with the number of particles, of the order of 1~MeV/nucleon (see Tab.~\ref{tab:BL} or the experimental references mentioned above).
Several stable hypernuclei with unstable cores appear, as for example $^6_\Lambda$He, $^8_\Lambda$He, $^7_\Lambda$Be and $^9_\Lambda$Be. These observations indicate that the presence of a $\Lambda$~particle inside a nucleus has a glue-like effect, increasing the binding energy and the stability of the system. This should be reflected in an attractive behavior of the $\Lambda$-nucleon interaction, at least in the low density regime typical of hypernuclei.
For $\Sigma$~hypernuclei the situation is quite different. Up to now, despite extensive searches, only one bound $\Sigma$~hypernucleus, $^4_\Sigma$He, has been detected~\cite{Nagae:1998}. The analysis of experimental data suggests a dominant $\Sigma$-nucleus repulsion inside the nuclear surface and a weak attraction outside the nucleus. In the case of $\Xi$~hypernuclei, although there are no definitive data for any $\Xi$~hypernucleus at present, several experimental results suggest that $\Xi$-nucleus interactions are weakly attractive~\cite{Khaustov:2000}. No experimental indication exists for $\Omega$~hypernuclei. It is a challenge to naturally explain the net attraction in the $\Lambda$- and $\Xi$-nucleus potentials and, at the same time, the dominant repulsion in the $\Sigma$-nucleus potential.
In addition to single hyperon nuclei, the binding energies of a few double $\Lambda$~hypernuclei ($^{\;\;\,6}_{\Lambda\Lambda}$He~\cite{Takahashi:2001,Nakazawa:2010,Ahn:2013}, $^{\;10}_{\Lambda\Lambda}$Be and $^{\;12}_{\Lambda\Lambda}$Be~\cite{Danysz:1963,Nakazawa:2010}, $^{\;13}_{\Lambda\Lambda}$B~\cite{Nakazawa:2010}) have been measured. The indication is that of a weakly attractive $\Lambda\Lambda$ interaction, which reinforces the glue-like role of $\Lambda$~hyperons inside nuclei.
From the presented picture it is clear that experimental hypernuclear physics has become a very active field of research. However, there is still a lack of information, even in the most investigated sector of $\Lambda$~hypernuclei. Due to the technical difficulties in performing scattering experiments involving hyperons and nucleons, the present main goal is the extension of the $\Lambda$~hypernuclear chart towards the proton and neutron drip lines and to heavier systems. Parallel studies on $\Sigma$, $\Xi$ and double~$\Lambda$~hypernuclei have been and will be pursued in order to complete the picture. This will hopefully provide the information necessary for the development of realistic hyperon-nucleon and hyperon-hyperon interactions.
\section{Hyperons in neutron stars}
\label{sec:ns}
The matter in the outer core of a NS is supposed to be composed of a degenerate gas of neutrons, protons, electrons and muons, the $npe\mu$ matter, in $\beta$~equilibrium. Given the energy density
\begin{align}
\mathcal E(\rho_n,\rho_p,\rho_e,\rho_\mu)=\mathcal E_N(\rho_n,\rho_p)+\mathcal E_e(\rho_e)+\mathcal E_\mu(\rho_\mu) \;,
\label{eq:E_npemu}
\end{align}
where $\mathcal E_N$ is the nucleon contribution, the equilibrium condition at a given baryon density $\rho_b$ corresponds to the minimum of $\mathcal E$ under the constraints
\begin{subequations}
\begin{align}
\mbox{fixed baryon density:\qquad} & \rho_n+\rho_p-\rho_b=0 \;, \\[0.5em]
\mbox{electrical neutrality:\qquad} & \rho_e+\rho_\mu-\rho_p=0 \;.
\end{align}
\label{eq:eq_constraints}
\end{subequations}
The result is the set of conditions
\begin{subequations}
\label{eq:chem_pot}
\begin{align}
\mu_n&=\mu_p+\mu_e \;,\\[0.5em]
\mu_\mu&=\mu_e \;,
\end{align}
\end{subequations}
where $\mu_j=\partial\mathcal E/\partial\rho_j$ with $j=n,p,e,\mu$ are the chemical potentials. These relations express the equilibrium with respect to the weak interaction processes
\begin{align}
\begin{array}{rclrcl}
n & \longrightarrow & p+e+\bar\nu_e \;, \qquad\quad & p+e & \longrightarrow & n+\nu_e \;,\\[0.5em]
n & \longrightarrow & p+\mu+\bar\nu_\mu \;,\qquad\quad & p+\mu & \longrightarrow & n+\nu_\mu \;.
\end{array}
\label{eq:npemu_eq}
\end{align}
(Neutrinos do not affect the thermodynamics of the matter, so their chemical potential is set to zero.) Eqs.~(\ref{eq:chem_pot}), supplemented by the constraints (\ref{eq:eq_constraints}), form a closed system which determines the equilibrium composition of the $npe\mu$ matter. Once the equilibrium composition is known, the energy and the pressure as functions of the baryon density can be derived and thus the EoS is obtained.
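To make the procedure concrete, the following Python sketch solves the equilibrium conditions (\ref{eq:chem_pot}) together with the constraints (\ref{eq:eq_constraints}) for a schematic model: the nucleon contribution enters only through an assumed symmetry energy $E_{sym}(\rho)=32\,(\rho/\rho_0)^{0.6}$~MeV, so that $\mu_n-\mu_p\simeq4E_{sym}(\rho)(1-2x_p)$, and the leptons are treated as free relativistic Fermi gases. All parameter values are illustrative and do not correspond to the interactions used in this work:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Toy beta-equilibrium composition of npe(mu) matter.
# Schematic symmetry energy and free lepton gases; illustrative only.

HBARC = 197.327      # MeV fm
M_MU  = 105.658      # MeV
RHO0  = 0.16         # fm^-3

def esym(rho):                       # schematic symmetry energy [MeV]
    return 32.0 * (rho / RHO0) ** 0.6

def rho_lepton(mu, mass=0.0):        # free Fermi gas density, 0 below threshold
    if mu <= mass:
        return 0.0
    kf = np.sqrt(mu**2 - mass**2)    # Fermi momentum in MeV
    return (kf / HBARC) ** 3 / (3.0 * np.pi**2)

def charge_balance(xp, rho_b):
    mu_e   = 4.0 * esym(rho_b) * (1.0 - 2.0 * xp)   # = mu_n - mu_p
    rho_e  = rho_lepton(mu_e)                       # mu_mu = mu_e
    rho_mu = rho_lepton(mu_e, M_MU)
    return rho_e + rho_mu - xp * rho_b              # = 0 at neutrality

for rho_b in [0.5 * RHO0, RHO0, 2.0 * RHO0, 3.0 * RHO0]:
    xp = brentq(charge_balance, 1e-6, 0.5, args=(rho_b,))
    print(f"rho_b = {rho_b/RHO0:3.1f} rho0 : proton fraction x_p = {xp:.3f}")
\end{verbatim}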
Given the EoS, the structure of a non rotating NS can be fully determined by solving the Tolman-Oppenheimer-Volkoff (TOV) equations~\cite{Oppenheimer:1939,Lattimer:2004}
\begin{subequations}
\begin{align}
\frac{dP(r)}{dr}&=-G\frac{\Bigl[\mathcal E(r)+P(r)\Bigr]\Bigl[m(r)+4\pi r^3 P(r)\Bigr]}{r^2\Bigl[1-\frac{2Gm(r)}{r}\Bigr]}\;,\label{eq:TOV_1} \\[0.5em]
\frac{dm(r)}{dr}&=4\pi r^2 \mathcal E(r) \;,\label{eq:TOV_2}
\end{align}
\label{eq:TOV}
\end{subequations}
which describe the hydrostatic equilibrium of a static spherically symmetric star. $\mathcal E(r)$ and $P(r)$ are the energy density and the pressure of the matter, $m(r)$ is the gravitational mass enclosed within a radius $r$, and $G$ is the gravitational constant. In the stellar interior $P>0$ and $dP/dr<0$. The condition $P(R)=0$ fixes the stellar radius $R$. Outside the star, for $r>R$, we have $P=0$ and $\mathcal E=0$, and Eq.~(\ref{eq:TOV_2}) thus gives $m(r>R)=M=const$, which is the total gravitational mass. Starting from a central energy density $\mathcal E_c=\mathcal E(r=0)$ and using the above conditions, the TOV equations can be numerically solved and the mass-radius relation $M=M(R)$ is obtained. It can be shown~\cite{Haensel:2006} that the relativistic corrections to the Newtonian law $dP(r)/dr=-Gm\mathcal E(r)/r^2$ included in Eq.~(\ref{eq:TOV_1}) impose an upper bound on the $M(R)$ relation, i.e.\ there exists a maximum mass for a NS in hydrostatic equilibrium. It is important to note that, given the EoS, the mass-radius relation is uniquely determined. Any modification made to the EoS will lead to a change in the $M(R)$ curve and thus in the allowed maximum mass.
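As an illustration of the procedure only (the EoS below is a simple $\Gamma=2$ polytrope in geometrized units, not one of the microscopic EoS discussed in this work), a minimal TOV integrator can be sketched as follows; for each central density it returns one point of the $M(R)$ curve:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Minimal TOV integrator in geometrized units (G = c = 1, lengths in km).
# Toy polytropic EoS: P = K * rho^Gamma, eps = rho + P/(Gamma - 1).
K, Gamma = 100.0, 2.0          # polytropic constant (km^2) and index
MSUN_KM  = 1.4766              # solar mass in km

def eps_of_P(P):
    if P <= 0.0:
        return 0.0
    rho = (P / K) ** (1.0 / Gamma)      # rest-mass density
    return rho + P / (Gamma - 1.0)      # total energy density

def tov_rhs(r, y):
    P, m = y
    eps  = eps_of_P(P)
    dPdr = -(eps + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
    dmdr = 4.0 * np.pi * r**2 * eps
    return [dPdr, dmdr]

def surface(r, y):             # stop the integration where P vanishes
    return y[0] - 1e-12
surface.terminal = True

def mass_radius(rho_c):
    P_c = K * rho_c**Gamma
    r0  = 1e-6                 # start slightly off-centre to avoid r = 0
    m0  = 4.0 / 3.0 * np.pi * r0**3 * eps_of_P(P_c)
    sol = solve_ivp(tov_rhs, (r0, 100.0), [P_c, m0], events=surface,
                    rtol=1e-8, atol=1e-14, max_step=0.1)
    return sol.t[-1], sol.y[1, -1] / MSUN_KM    # R [km], M [Msun]

for rho_c in [0.8e-3, 1.28e-3, 2.0e-3]:         # central densities (km^-2)
    R, M = mass_radius(rho_c)
    print(f"rho_c = {rho_c:.2e} km^-2  ->  R = {R:5.2f} km,  M = {M:4.2f} Msun")
\end{verbatim}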
For $\rho_b\gtrsim2\rho_0$, the inner core is thought to have the same $npe\mu$ composition as the outer core. However, since at high densities the nucleon gas will be highly degenerate, hyperons with energies lower than a threshold value become stable, because the nucleon arising from their decay cannot find a place in phase space in accordance with the Pauli principle~\cite{Ambartsumyan:1960}. Thus, beyond a density threshold we have to take into account the contribution of hyperons to the $\beta$~equilibrium. Eq.~(\ref{eq:E_npemu}) becomes a function of $\rho_b$ (baryons: nucleons and hyperons) and $\rho_l$ (leptons: electrons and muons). Given the baryon density and imposing the electrical neutrality condition, the equilibrium equations now read:
\begin{subequations}
\label{eq:chem_pot_Y}
\begin{align}
Q_b=-1\,: && \mu_{b^-}&=\mu_n+\mu_e && \Rightarrow && \mu_{\Omega^-}&&\hspace{-0.7cm}=\mu_{\Xi^-} =\mu_{\Sigma^-}=\mu_n+\mu_e \;, \\[0.5em]
Q_b=\phantom{+}0\,: && \mu_{b^0}&=\mu_n && \Rightarrow && \mu_{\Xi^0} &&\hspace{-0.7cm}=\mu_{\Sigma^0}=\mu_\Lambda =\mu_n \;, \\[0.5em]
Q_b=+1\,: && \mu_{b^+}&=\mu_n-\mu_e && \Rightarrow && \mu_{\Sigma^+}&&\hspace{-0.7cm}=\mu_p =\mu_n-\mu_e \;,
\end{align}
\end{subequations}
where $Q_b$ is the electric charge of a baryon. As soon as the neutron chemical potential becomes sufficiently large, energetic neutrons can decay via weak strangeness nonconserving reactions into $\Lambda$~hyperons, leading to a $\Lambda$ Fermi sea.
We can derive the hyperons threshold densities $\rho_Y$ by calculating the minimum increase of the energy of the matter produced by adding a single strange particle at a fixed pressure. This can be done by considering the energy of the matter with an admixture of given hyperons and by calculating numerically the limit of the derivative
\begin{align}
\lim_{\rho_Y\rightarrow0}\,\frac{\partial\mathcal E}{\partial\rho_Y}\bigg|_{eq}\!=\mu_Y^0\;.
\end{align}
Consider for example the lightest $\Lambda$~hyperon. As long as $\mu_\Lambda^0>\mu_n$, the strange baryon cannot survive because the system will lower its energy via an exothermic reaction $\Lambda+N\longrightarrow n+N$. However, $\mu_n$ increases with growing $\rho_b$ and the functions $\mu_\Lambda^0(\rho_b)$ and $\mu_n^0(\rho_b)$ intersect at some $\rho_b=\rho_\Lambda^{th}$ (the left panel in Fig.~\ref{fig:chemicalpot}). For $\rho_b>\rho_\Lambda^{th}$ the $\Lambda$~hyperons become stable in dense matter because their decay is blocked by the Pauli principle.
\begin{figure}[htb]
\centering
\includegraphics[width=0.9\linewidth]{Chemical_pot.pdf}
\caption[Hyperon and nucleon chemical potentials]
{Threshold chemical potentials of neutral hyperons and neutron (left panel), and of negatively charged hyperons and the sum
$\mu_n+\mu_e$ (right panel) versus baryon density. Vertical dotted lines mark the thresholds for the creation of new hyperons. Dashed lines show
the minimum chemical potential $\mu_Y^0$ of unstable hyperons before the thresholds. Figure taken from Ref.~\cite{Haensel:2006}.}
\label{fig:chemicalpot}
\end{figure}
Although the $\Lambda$~particle is the lightest among hyperons, one expects the $\Sigma^-$ to appear via
\begin{align}
n+e^-\longrightarrow\Sigma^-+\nu_e
\end{align}
at densities lower than the $\Lambda$~threshold, even though the $\Sigma^-$ is more massive. This is because the negatively charged hyperons appear in the ground state of matter when their masses equal $\mu_n+\mu_e$, while the neutral hyperon $\Lambda$ appears when its mass equals $\mu_n$. Since the electron chemical potential in matter is typically larger (ultrarelativistic degenerate electrons have $\mu_e\sim E_{F_e}\sim \hbar c\,(3\pi^2\rho_e)^{1/3}>120~\text{MeV~for~}\rho_e\sim5\%\rho_0$) than the mass difference $m_{\Sigma^-}-m_\Lambda=81.76~\mbox{MeV}$, the $\Sigma^-$ will appear at lower densities. However, in typical neutron matter calculations with the inclusion of strange degrees of freedom, only $\Lambda$, $\Sigma^0$ and $\Xi^0$ hyperons are taken into account due to charge conservation.
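The numbers quoted above are easy to reproduce (a short illustrative check, assuming $\rho_0=0.16$~fm$^{-3}$):
\begin{verbatim}
import numpy as np

# Electron chemical potential of an ultrarelativistic degenerate electron
# gas at rho_e ~ 5% rho_0, to compare with m_Sigma- - m_Lambda ~ 81.8 MeV.
HBARC = 197.327                      # MeV fm
RHO0  = 0.16                         # fm^-3 (assumed saturation density)
rho_e = 0.05 * RHO0
mu_e  = HBARC * (3.0 * np.pi**2 * rho_e) ** (1.0 / 3.0)
print(f"mu_e = {mu_e:.0f} MeV")      # ~122 MeV, indeed larger than 81.8 MeV
\end{verbatim}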
The formation of hyperons softens the EoS because high energy neutrons are replaced by more massive low energy hyperons, which can be accommodated in lower momentum states. There is thus a decrease in the kinetic energy that produces a lower pressure. The softening of the EoS of the inner core of a NS induced by the presence of hyperons is a generic effect. However, its magnitude is strongly model dependent.
Calculations based on the extension to the hyperonic sector of the Hartree-Fock (HF)~\cite{Dapo:2010,Massot:2012} and Brueckner-Hartree-Fock (BHF)~\cite{Schulze:2011,Vidana:2011} methods all agree that the appearance of hyperons around $2\div3\rho_0$ leads to a strong softening of the EoS. Consequently, the expected maximum mass is largely reduced, as shown for instance in Fig.~\ref{fig:Schulze2011} and Fig.~\ref{fig:Massot2012}. The addition of the hyperon-nucleon force to the pure nucleonic Hamiltonian lowers the maximum mass by an amount between $0.4M_\odot$ and more than $1M_\odot$. From the pure nucleonic case of $M_{\max}>1.8M_\odot$, the limit for hypernuclear matter is thus reduced to the range $1.4M_\odot<M_{\max}<1.6M_\odot$. These results, although compatible with the canonical limit of $1.4\div1.5M_\odot$, are not consistent with the recent observations of $2M_\odot$ millisecond pulsars~\cite{Demorest:2010,Antoniadis:2013}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.7\linewidth]{Schulze2011.pdf}
\caption[Neutron star mass-radius relation: Schulze 2011]
{Mass-radius and mass-central density relations for different NS EoS obtained in Brueckner-Hartree-Fock calculations of hypernuclear matter.
V18+TBF and V18+UIX' refer to purely nuclear matter EoS built starting from two- and three-body nucleon-nucleon potentials (see \S~\ref{sec:nuc_int}).
The other curves are obtained adding two different hyperon-nucleon forces among the Nijmegen models to the previous nucleonic EoS.
For more details see the original paper~\cite{Schulze:2011}.}
\label{fig:Schulze2011}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=0.8\linewidth]{Massot2012.pdf}
\caption[Neutron star mass-radius relation: Massot 2012]
{Neutron star mass as a function of the circumferential radius. QMC700 and MC\emph{i}-H(F)/N refer to EoS based on quark-meson coupling model and chiral model
in the Hartee(Fock) approximation without hyperons. In the MC\emph{i}-H(F)/NY models also hyperons are taken into account.
The canonical maximum mass limit of $\sim1.45M_\odot$ and the mass of the two heavy millisecond pulsars
PSR J1903+0327 ($1.67(2)M_\odot$) and PSR J1614-2230 ($1.97(4)M_\odot$) are shown.
Details on the potentials and method adopted can be found in Ref.~\cite{Massot:2012}.}
\label{fig:Massot2012}
\end{figure}
It is interesting to note that the hyperonic $M_{\max}$ depends only weakly on the details of the employed nucleon-nucleon interaction, and even less on the hypernuclear forces. In Ref.~\cite{Dapo:2010} the interaction used for the nuclear sector is an analytic parametrization fitted to the energy of symmetric matter obtained from variational calculations with the Argonne V18 nucleon-nucleon interaction (see \S~\ref{sec:nuc_int}) including three-body forces and relativistic boost corrections. Refs.~\cite{Schulze:2011} and \cite{Vidana:2011} adopted the bare $NN$ Argonne V18 supplemented with explicit three-nucleon forces or phenomenological density-dependent contact terms that account for the effect of nucleonic and hyperonic three-body interactions. The hypernuclear forces employed in these works belong to the class of Nijmegen potentials (see \S~\ref{chap:hamiltonians}). Finally, in Ref.~\cite{Massot:2012} chiral Lagrangian and quark-meson coupling models of hyperon matter have been employed. Despite the differences in the potentials used in the strange and non strange sectors, the outcomes of these works give the same qualitative and almost quantitative picture of the reduction of $M_{\max}$ due to the inclusion of strange baryons. Therefore, the (B)HF results seem to be rather robust, and many doubts thus arise about the real appearance of hyperons in the inner core of NSs.
Other approaches, such as relativistic Hartree-Fock~\cite{Miyatsu:2012,Miyatsu:2013,Gupta:2013}, standard, density-dependent and nonlinear Relativistic Mean Field models
~\cite{Bednarek:2012,Weissenborn:2012,Tsubakihara:2013_HYP2012,Jiang:2012,Mallick:2013} and Relativistic Density Functional Theory with density-dependent couplings~\cite{Colucci:2013}, indicate much weaker effects as a consequence of the presence of strange baryons in the core of NSs, as shown for example in Fig.~\ref{fig:Miyatsu2012} and Fig.~\ref{fig:Bednarek2012}. In all these works it was possible to find a description of hypernuclear matter, within the models analyzed, that produces a stiff EoS supporting a $2M_\odot$ neutron star. The same conclusion has been reported in Ref.~\cite{Bonanno:2012}, where the EoS of matter including hyperons and deconfined quark matter has been constructed on the basis of a relativistic mean-field nuclear functional at low densities and an effective Nambu-Jona-Lasinio model of quark matter. The results of this class of calculations seem to reconcile the onset of hyperons in the inner core of a NS with the observed masses of order $2M_\odot$.
\begin{figure}[htb]
\centering
\includegraphics[width=\linewidth]{Miyatsu2012.pdf}
\caption[Neutron star mass-radius relation: Miyatsu 2012]
{Neutron star mass-radius relations in Hartree (left panel) and Hartree-Fock (right panel) calculations.
CQMC, QMC and QHD+NL denote the chiral quark-meson coupling, quark-meson coupling and non linear quantum hadrodynamics employed potentials, with (npY) and without hyperons (np).
For details see Ref.~\cite{Miyatsu:2012}.}
\label{fig:Miyatsu2012}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[width=0.9\linewidth]{Bednarek2012.pdf}
\caption[Neutron star mass-radius relation: Bednarek 2012]
{Stellar mass versus circumferential radius in non linear relativistic mean field model. The purely nucleon case is denoted with N, the nucleon+hyperon case with NH.
In the inset, the effect of rotation at $f=317$~Hz on the mass-radius relation near $M_{\max}$.
The dashed region refers to the mass of the pulsar PSR J1614-2230. All the details are reported in Ref.~\cite{Bednarek:2012}.}
\label{fig:Bednarek2012}
\end{figure}
This inconsistency among different calculations, and between the theoretical results and the observational constraints, is at present still an open question. For example, given the theoretical evidence for the appearance of hyperons in the inner core of a NS, the results of all available (B)HF calculations seem to be in contradiction with the picture drawn by the relativistic mean field models. On the one hand, there might be uncontrolled approximations in the methods used to solve the many-body Hamiltonian. On the other hand, the employed hypernuclear interactions might not be accurate enough to describe the physics of the infinite nuclear medium with strange degrees of freedom. For instance, as reported in Refs.~\cite{Vidana:2013_HYP2012,Tsubakihara:2013_HYP2012}, one of the possible solutions to improve the hyperon-nucleon interactions might be the inclusion of explicit three-body forces in the models. These should involve one or more hyperons (i.e., hyperon-nucleon-nucleon, hyperon-hyperon-nucleon or hyperon-hyperon-hyperon interactions) and they could eventually provide the additional repulsion needed to make the EoS stiffer and, therefore, the maximum mass compatible with the current observational limits. On the grounds of this observation, we decided to revisit the problem, focusing on a systematic construction of a realistic, though phenomenological, hyperon-nucleon interaction with explicit two- and three-body components (\S~\ref{chap:hamiltonians}) by means of Quantum Monte Carlo calculations (\S~\ref{chap:method}).
\newpage
\phantom{Empty page}
\chapter{Hamiltonians}
\label{chap:hamiltonians}
The properties of nuclear systems arise from the interactions between the individual constituents. In order to understand these properties, the starting point is the determination of the Hamiltonian to be used in the description of such systems. In principle the nuclear Hamiltonian should be directly derived from Quantum Chromodynamics (QCD). Many efforts have been made in recent years~\cite{Savage:2012,Beane:2012,Beane:2013}, but this goal is still far from being achieved.
The problem with such a derivation is that QCD is non perturbative in the low-temperature regime characteristic of nuclear physics, which makes direct solutions very difficult. Moving from the real theory to effective models, the structure of a nuclear Hamiltonian can be determined phenomenologically and then fitted to exactly reproduce the properties of few-nucleon systems. In this picture, the degrees of freedom are the baryons, which are considered as non relativistic point-like particles interacting by means of phenomenological potentials. These potentials describe both the short and the long range interactions, typically via one-boson and two-meson exchanges~\cite{Carlson:1998}. In more detail, different two-body phenomenological forms have been proposed and fitted to the nucleon-nucleon ($NN$) scattering data of the Nijmegen database~\cite{Bergervoet:1990,Stoks:1993} with a $\chi^2/N_{data}\simeq 1$. The most widely used are the Nijmegen models~\cite{Stoks:1994}, the Argonne models~\cite{Wiringa:1995,Wiringa:2002} and the CD-Bonn~\cite{Machleidt:1996}. Although reproducing the $NN$ scattering data, all these two-nucleon interactions underestimate the triton binding energy, suggesting that the contribution of a three-nucleon ($NNN$) interaction~(TNI) is essential to reproduce the physics of nuclei. The TNI is mainly attributed to the possibility of nucleon excitation into a $\Delta$ resonance, and it can be written in terms of different effective three-nucleon interactions, which have been fitted to light nuclei~\cite{Carlson:1981,Pieper:2001} and to the saturation properties of nuclear matter~\cite{Carlson:1983}. The TNIs typically depend on the choice of the two-body $NN$ potential~\cite{Wiringa:1983}, but the final result with the total Hamiltonian should be independent of this choice.
A different approach to the problem is the realization that low-energy QCD is equivalent to an Effective Field Theory (EFT) which allows for a perturbative expansion known as chiral perturbation theory. In recent years, modern nucleon-nucleon interactions directly derived from Chiral Effective Field Theory ($\chi$-EFT) have been proposed, at next-to-next-to-next-to-leading order~(N$^3$LO) in the chiral expansion~\cite{Entem:2003,Epelbaum:2005} and recently at optimized next-to-next-to-leading order~(N$^2$LO)~\cite{Ekstrom:2013} (see Ref.~\cite{Machleidt:2011} for a complete review).
All these potentials are able to reproduce the Nijmegen phase shifts with $\chi^2/N_{data}\simeq1$. TNIs enter naturally at N$^2$LO in this scheme, and they again play a pivotal role in nuclear structure calculations~\cite{Hammer:2013}. The contributions of TNIs at N$^3$LO have also been worked out~\cite{Ishikawa:2007,Bernard:2008,Bernard:2011}. The $\chi$-EFT interactions are typically developed in momentum space, preventing their straightforward application within the Quantum Monte Carlo~(QMC) framework. However, a local version of the $\chi$-EFT potentials in coordinate space up to N$^2$LO has been very recently proposed and employed in QMC calculations~\cite{Gezerlis:2013}.
Nuclear phenomenological Hamiltonians have been widely used to study finite and infinite nuclear systems within different approaches. From now on, we will focus on the Argonne $NN$ potentials and the corresponding TNIs, the Urbana~IX~(UIX)~\cite{Carlson:1983} and the modern Illinois~(ILx)~\cite{Pieper:2001} forms. These potentials have been used to study nuclei, neutron drops, neutron and nuclear matter in Quantum Monte Carlo (QMC) calculations, such as Variational Monte Carlo (VMC)~\cite{Wiringa:1991,Wiringa:1992,Pieper:1992}, Green Function Monte Carlo (GFMC)~\cite{Pudliner:1997,Wiringa:2002,Pieper:2004,Pieper:2005,Schiavilla:2007,Pieper:2008,Lovato:2013,Gandolfi:2011} and Auxiliary Field Diffusion Monte Carlo (AFDMC)~\cite{Sarsa:2003,Pederiva:2004,Gandolfi:2006,Gandolfi:2007,Gandolfi:2011,Gandolfi:2007_SNM,Gandolfi:2009}. The same bare interactions have also been employed in the Fermi Hyper-Netted Chain~(FHNC) approach~\cite{AriasdeSaavedra:2007,Armani:2011}, both for nuclei and nuclear matter. With a projection of the interaction onto the model space, these Hamiltonians are used in Effective Interaction Hyperspherical Harmonics (EIHH)~\cite{Barnea:2001,Barnea:2004} and Non-Symmetrized Hyperspherical Harmonics (NSHH)~\cite{Deflorian:2013} calculations. Finally, the same potentials can also be used in Brueckner Hartree Fock~(BHF)~\cite{Li:2006}, Shell-Model (SM)~\cite{Coraggio:2009}, No-Core-Shell-Model (NCSM)~\cite{Navratil:2009} and Coupled Cluster (CC)~\cite{Hagen:2010} calculations by means of appropriate techniques to handle the short-range repulsion of the nucleon-nucleon force, such as the Brueckner $G$-matrix approach~\cite{Brueckner:1955,Bethe:2006}, the $V_{low-k}$ reduction~\cite{Bogner:2001,Bogner:2002,Bogner:2003}, the Unitary Correlation Operator Method~(UCOM)~\cite{Feldmeier:1998} or the Similarity Renormalization Group (SRG) evolution~\cite{Bogner:2007,Jurgenson:2011}.
The list of methods that can handle in a successful way the Argonne+TNIs potentials demonstrates the versatility and reliability of this class of phenomenological nuclear Hamiltonians.
Moving from the non-strange nuclear sector, where nucleons are the only baryonic degrees of freedom, to the strange nuclear sector, where hyperons also enter the game, the picture becomes much less clear. There exists only a very limited amount of scattering data from which one could construct high-quality hyperon-nucleon ($YN$) potentials. Data on hypernuclear binding energies and hyperon separation energies are rather scarce and can only partially complete the scheme.
After the pioneering work reported in Ref.~\cite{Dalitz:1972}, several models have been proposed to describe the $YN$ interaction. The most widely used are the Nijmegen soft-core models (like NSC89 and NSC97x)~\cite{Nagels:1977,Nagels:1979,Maessen:1989,Rijken:1999,Stoks:1999,Halderson:1999,Halderson:2000} and the J\"ulich potential (J04)~\cite{Holzenkamp:1989,Reuber:1994,Haidenbauer:2005}. A recent review of these interactions, together with Hartree-Fock (HF) calculations, has been published by \DH{}apo~\emph{et al.} in Ref.~\cite{Dapo:2008}. In the same framework, extended soft-core Nijmegen potentials for strangeness $S=-2$ have also been developed~\cite{Rijken:2006_I,Rijken:2006_II}. Very recently, the extended soft-core 08 (ESC08) model has been completed, which represents the first unified theoretical framework involving the hyperon-nucleon, hyperon-hyperon ($YY$) and also nucleon-nucleon sectors~\cite{Schulze:2013}. This class of interactions has been used in different calculations for hypernuclei~\cite{Hao:1993,HjorthJensen:1996,Vidana:1998,Vidana:2001,Nogga:2002,Dapo:2008,Schulze:2013} and hypermatter~\cite{Dapo:2008,Dapo:2010,Schulze:2011,Vidana:2011} within different methods, but the existing data do not constrain the potentials sufficiently. For example, six different parameterizations of the Nijmegen $YN$ potentials fit the scattering data equally well but produce very different scattering lengths, as reported for instance in Ref.~\cite{Rijken:1999}. In addition, these potentials are not found to yield the correct spectrum of hypernuclear binding energies. For example, the study~\cite{Nogga:2002} of $^4_\Lambda$H and $^4_\Lambda$He based on Nijmegen models does not predict all the experimental separation energies. Similar conclusions for single- and double-$\Lambda$~hypernuclei have also been drawn in a study employing a different many-body technique~\cite{Vidana:2001}. Even the most recent ESC08 model produces some overbinding of single-$\Lambda$~hypernuclei and a weakly repulsive incremental $\Lambda\Lambda$~energy~\cite{Schulze:2013}, not consistent with the observed weak $\Lambda\Lambda$ attraction in $^{\;\;\,6}_{\Lambda\Lambda}$He.
In analogy with the nucleon-nucleon sector, a $\chi$-EFT approach for the hyperon-nucleon interaction has been also developed. The first attempt was proposed by Polinder and collaborators in 2006~\cite{Polinder:2006}, resulting in a leading order (LO) expansion. Only recently the picture has been improved going to next-to-leading order~(NLO)~\cite{Haidenbauer:2013_HYP2012,Nogga:2013_HYP2012,Haidenbauer:2013}. The $YN$ $\chi$-EFT model is still far away from the theoretical accuracy obtained in the non-strange sector, but it is any case good enough to describe the limited available $YN$ scattering~data.
As an alternative, a cluster model with phenomenological interactions has been proposed by Hiyama and collaborators to study light hypernuclei ~\cite{Hiyama:1997,Hiyama:2001,Hiyama:2002,Hiyama:2009,Hiyama:2010,Hiyama:2013}.
Interesting results on $\Lambda$~hypernuclei have also been obtained within a $\Lambda$-nucleus potential model, in which the need of a functional with a more than linear density dependence was shown, suggesting the importance of a many-body interaction~\cite{Millener:1988}. While studying $s$-shell hypernuclei, the $\Lambda N\rightarrow\Sigma N$ coupling as a three-body $\Lambda NN$ force has been investigated by many authors~\cite{Nogga:2002,Akaishi:2000,Hiyama:2001_conv,Nemura:2002}. Having strong tensor dependence it is found to play an important role, comparable to the TNI effect in non-strange nuclei.
Finally, starting in the 1980s, a class of Argonne-like interactions has been developed by Bodmer, Usmani and Carlson on the grounds of quantum Monte Carlo calculations to describe the $\Lambda$-nucleon force. These phenomenological interactions are written in coordinates space and they include two- and three-body hyperon-nucleon components, mainly coming from two-pion exchange processes and shorter range effects. They have been used in different forms mostly in variational Monte Carlo calculations for single $\Lambda$~hypernuclei ($^3_\Lambda$H~\cite{Bodmer:1988,Shoeb:1999}, $^4_\Lambda$H and $^4_\Lambda$He~\cite{Bodmer:1985,Bodmer:1988,Shoeb:1999,Sinha:2002}, $^5_\Lambda$He~\cite{Bodmer:1988,Shoeb:1999,Usmani:1995_3B,Usmani:1999,Sinha:2002,Usmani:2003,Usmani:2006,Usmani:2008}, $^9_\Lambda$Be~\cite{Bodmer:1984,Shoeb:1998}, $^{13}_{~\Lambda}$C~\cite{Bodmer:1984}, $^{17}_{~\Lambda}$O~\cite{Usmani:1995,Usmani:1995_3B}), double $\Lambda$~hypernuclei ($^{\;\;\,4}_{\Lambda\Lambda}$H, $^{\;\;\,5}_{\Lambda\Lambda}$H, $^{\;\;\,5}_{\Lambda\Lambda}$He~\cite{Shoeb:2004} and $^{\;\;\,6}_{\Lambda\Lambda}$He~\cite{Shoeb:2004,Usmani:2004,Usmani:2006_He6LL}) and in the framework of correlated basis function theory for $\Lambda$~hypernuclei~\cite{AriasdeSaavedra:2001}, typically in connection with the Argonne $NN$ potential.
Within the phenomenological interaction scheme, a generic nuclear system including nucleons and hyperons can be described by the non-relativistic phenomenological Hamiltonian
\begin{equation}
H=H_{N}+H_{Y}+H_{YN}\;,\label{eq:H}
\end{equation}
where $H_N$ and $H_Y$ are the pure nucleonic and hyperonic Hamiltonians and $H_{YN}$ represents the interaction Hamiltonian connecting the two distinguishable types of baryon:
\begin{align}
H_{N} &=-\frac{\hbar^2}{2m_N}\sum_{i}\nabla_i^2\;+\sum_{i<j}v_{ij}\;\,+\sum_{i<j<k}v_{ijk}\;\;\,+\,\ldots\;,\label{eq:H_N}\\[0.5em]
H_{Y} &=-\frac{\hbar^2}{2m_\Lambda}\sum_{\lambda}\nabla_\lambda^2\;+\sum_{\lambda<\mu}v_{\lambda\mu}\,+\sum_{\lambda<\mu<\nu}v_{\lambda\mu\nu}\;+\,\ldots\;,\label{eq:H_Y}\\[0.5em]
H_{YN}&=\sum_{\lambda i}v_{\lambda i}\,+\sum_{\lambda,i<j}v_{\lambda ij}\,+\sum_{\lambda<\mu,i}v_{\lambda\mu i}\,+\,\ldots\;.\label{eq:H_YN}
\end{align}
In this context, $A$ is the total number of baryons, $A=\mathcal N_N+\mathcal N_Y$. Latin indices $i,j,k=1,\ldots,\mathcal N_N$ label nucleons and Greek symbols $\lambda,\mu,\nu=1,\ldots,\mathcal N_Y$ are used for the hyperons. The Hamiltonians (\ref{eq:H_N}) and (\ref{eq:H_Y}) contain the kinetic energy operators and the two- and three-body interactions for nucleons and hyperons separately. In principle they could include higher order many-body forces, which are however expected to be less important. The Hamiltonian (\ref{eq:H_YN}) describes the interaction between nucleons and hyperons, and it involves two-body ($YN$) and three-body ($YNN$ and $YYN$) forces. At present there is no evidence for higher order terms in the hyperon-nucleon sector.
As reported in the previous chapter, experimental data are mainly available for $\Lambda p$ scattering and $\Lambda$~hypernuclei, and present experimental efforts are still mostly concentrated on the study of the $S=-1$ hypernuclear sector. Information on heavier hyperon-nucleon scattering and on $\Sigma$ or more exotic hypernuclei is very limited. For these reasons, from now on we will focus on the phenomenological interactions involving just the $\Lambda$~hyperon. We adopt the class of Argonne-like $\Lambda$-nucleon interactions for the strange sector and the nucleon-nucleon Argonne force with the corresponding TNIs (UIX and ILx) for the non-strange sector. An effective $\Lambda\Lambda$ interaction has also been employed.
\section{Interactions: nucleons}
\label{sec:nuc_int}
We report the details of the $NN$ Argonne potential~\cite{Wiringa:1995,Wiringa:2002} and of the corresponding TNIs, the Urbana~IX~(UIX)~\cite{Carlson:1983} and the Illinois~(ILx)~\cite{Pieper:2001} forces. These interactions are written in coordinate space and they include components of different range coming from meson (mostly pion) exchange and from phenomenological higher order contributions.
\subsection{Two-body $NN$ potential}
\label{subsec:AV18}
The nucleon-nucleon potential Argonne~V18~(AV18)~\cite{Wiringa:1995} contains a complete electromagnetic (EM) interaction and a strong interaction part, which is written as the sum of a long-range component $v_{ij}^\pi$ due to one-pion exchange~(OPE) and a phenomenological intermediate- and short-range part $v_{ij}^R$:
\begin{equation}
v_{ij}=v_{ij}^\pi+v_{ij}^R \;.
\end{equation}
Ignoring isospin breaking terms, the long-range OPE is given by
\begin{align}
v_{ij}^\pi=\frac{f_{\pi NN}^2}{4\pi}\frac{m_\pi}{3}\,X_{ij}\,\bm\tau_i\cdot\bm\tau_j \;,
\label{eq:OPE}
\end{align}
where $\tfrac{f_{\pi NN}^2}{4\pi}=0.075$ is the pion-nucleon coupling constant~\cite{Stoks:1993_pi} and
\begin{align}
X_{ij}=Y_\pi(r_{ij})\,\bm\sigma_i\cdot\bm\sigma_j+T_\pi(r_{ij})\,S_{ij} \;.
\label{eq:X_ij}
\end{align}
$\bm \sigma_i$ and $\bm \tau_i$ are Pauli matrices acting on the spin or isospin of nucleons and $S_{ij}$ is the tensor operator
\begin{align}
S_{ij}=3\left(\bm\sigma_i\cdot\hat{\bm r}_{ij}\right)\left(\bm\sigma_j\cdot\hat{\bm r}_{ij}\right)-\bm\sigma_i\cdot\bm\sigma_j \;.
\label{eq:S_ij}
\end{align}
The pion radial functions associated with the spin-spin (Yukawa potential) and tensor (OPE tensor potential) parts are
\begin{align}
Y_\pi(r)&=\frac{\e^{-\mu_\pi r}}{\mu_\pi r}\xi_Y(r) \;, \label{eq:Y_pi} \\[0.5em]
T_\pi(r)&=\left[1+\frac{3}{\mu_\pi r}+\frac{3}{(\mu_\pi r)^2}\right]\frac{\e^{-\mu_\pi r}}{\mu_\pi r}\xi_T(r) \;, \label{eq:T_pi}
\end{align}
where $\mu_\pi$ is the average of the pion masses, expressed as an inverse length (in units where $c=1$ it is the inverse of the pion Compton wavelength)
\begin{align}
\mu_\pi=\frac{m_\pi}{\hbar}=\frac{1}{\hbar}\frac{m_{\pi^0}+2\,m_{\pi^\pm}}{3} \quad\quad \frac{1}{\mu_\pi}\simeq1.4~\text{fm} \;,
\label{eq:m_pi}
\end{align}
and $\xi_Y(r)$ and $\xi_T(r)$ are the short-range cutoff functions defined by
\begin{align}
\xi_Y(r)=\xi_T^{1/2}(r)=1-\e^{-cr^2} \quad\quad c=2.1~\text{fm}^{-2}\;.
\end{align}
It is important to note that since $T_\pi(r)\gg Y_\pi(r)$ in the relevant region $r\lesssim 2$~fm, the OPE is dominated by its tensor part.
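For illustration, the regularized radial functions of Eqs.~(\ref{eq:Y_pi}) and (\ref{eq:T_pi}) can be evaluated with the following minimal Python sketch. It only assumes $\mu_\pi\simeq0.7$~fm$^{-1}$ (i.e. $1/\mu_\pi\simeq1.4$~fm) and the cutoff parameter $c=2.1$~fm$^{-2}$ quoted above, and is meant solely to make the dominance of the tensor function at $r\lesssim2$~fm explicit; it is not part of any of the published AV18 codes.
\begin{verbatim}
import numpy as np

MU_PI = 0.7   # average pion mass as an inverse length, about (1.4 fm)^-1
C_CUT = 2.1   # short-range cutoff parameter in fm^-2

def xi_Y(r):
    """Short-range cutoff for the Yukawa function; xi_T = xi_Y^2."""
    return 1.0 - np.exp(-C_CUT * r**2)

def Y_pi(r):
    """Regularized Yukawa (spin-spin) radial function."""
    x = MU_PI * r
    return np.exp(-x) / x * xi_Y(r)

def T_pi(r):
    """Regularized OPE tensor radial function."""
    x = MU_PI * r
    return (1.0 + 3.0/x + 3.0/x**2) * np.exp(-x) / x * xi_Y(r)**2

r = np.linspace(0.2, 3.0, 15)
print(np.column_stack((r, Y_pi(r), T_pi(r))))   # T_pi >> Y_pi for r < 2 fm
\end{verbatim}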
The remaining intermediate- and short-range part of the potential is expressed as a sum of central, $L^2$, tensor, spin-orbit and quadratic spin-orbit terms (respectively labelled as $c$, $l2$, $t$, $ls$, $ls2$) in different $S$, $T$ and $T_z$ states:
\begin{align}
\!v_{NN}^R=v_{NN}^c(r)+v_{NN}^{l2}(r)\bm L^2+v_{NN}^t(r)S_{12}+v_{NN}^{ls}(r)\bm L\!\cdot\!\bm S+v_{NN}^{ls2}(r)(\bm L\!\cdot\!\bm S)^2\;,
\end{align}
with the radial functions $v_{NN}^k(r)$ written in the general form
\begin{align}
v_{NN}^k(r)=I_{NN}^k\,T_\pi^2(r)+\bigg[P_{NN}^k + (\mu_\pi r)\,Q_{NN}^k + (\mu_\pi r)^2\,R_{NN}^k \bigg]\, W(r) \;,
\end{align}
where the $T_\pi^2(r)$ has the range of a two-pion exchange~(TPE) force and $W(r)$ is a Woods-Saxon function which provides the short-range core:
\begin{align}
W(r)=\Bigl(1+\e^{\frac{r-\bar r}{a}}\Bigr)^{-1} \quad\quad \bar r=0.5~\text{fm},\quad a=0.2~\text{fm}\;.
\end{align}
By imposing a regularization condition at the origin, it is possible to reduce the number of free parameters by one for each $v_{NN}^k(r)$.
All the parameters in the $\xi(r)$ short-range cutoff functions, as well as the other phenomenological constants, are fitted to the Nijmegen $NN$ scattering data~\cite{Bergervoet:1990,Stoks:1993}.
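A sketch of the corresponding radial form, with the Woods-Saxon core of the quoted parameters, is given below; the constants $I$, $P$, $Q$, $R$ are channel dependent and fitted in the original work, so the values used here are placeholders chosen only to show the shape of the parametrization, not the AV18 fit values.
\begin{verbatim}
import numpy as np

MU_PI, C_CUT = 0.7, 2.1    # fm^-1, fm^-2 (as above)
R_BAR, A_WS = 0.5, 0.2     # Woods-Saxon parameters in fm

def T_pi(r):
    x = MU_PI * r
    return (1 + 3/x + 3/x**2) * np.exp(-x)/x * (1 - np.exp(-C_CUT*r**2))**2

def W(r):
    """Woods-Saxon function providing the short-range core."""
    return 1.0 / (1.0 + np.exp((r - R_BAR) / A_WS))

def v_k(r, I, P, Q, R):
    """Generic intermediate/short-range radial function of the potential.
    I, P, Q, R are channel-dependent fit constants (placeholders here)."""
    x = MU_PI * r
    return I * T_pi(r)**2 + (P + x*Q + x**2*R) * W(r)

# made-up constants, for the shape only (not the published fit values)
print(v_k(np.array([0.5, 1.0, 2.0]), I=-5.0, P=1000.0, Q=0.0, R=500.0))
\end{verbatim}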
The two-body nucleon potential described above can be projected from $S$, $T$, $T_z$ states into an operator format with 18 terms
\begin{align}
v_{ij}=\sum_{p=1,18}v_p(r_{ij})\,\mathcal O_{ij}^{\,p} \;.\label{eq:v_ij_Op}
\end{align}
The first 14 operators are charge independent and they are the ones included in the Argonne V14 potential~(AV14):
\begin{align}
\mathcal O_{ij}^{\,p=1,8} &=\Bigl\{1,\bm\sigma_i\cdot\bm\sigma_j,S_{ij},\bm L_{ij}\cdot\bm S_{ij}\Bigr\}\otimes\Bigl\{1,\bm\tau_i\cdot\bm\tau_j\Bigr\} \;,\\[0.5em]
\mathcal O_{ij}^{\,p=9,14}&=\Bigl\{\bm L_{ij}^2,\bm L_{ij}^2\;\bm\sigma_i\cdot\bm\sigma_j,\left(\bm L_{ij}\cdot\bm S_{ij}\right)^2\Bigr\}
\otimes\Bigl\{1,\bm\tau_i\cdot\bm\tau_j\Bigr\} \;.
\end{align}
The first eight terms give the largest contribution to the $NN$ interaction and they are the standard ones required to fit $S$- and $P$-wave data in both triplet and singlet isospin states. The first six of them come from the long-range part of OPE, while the last two depend on the velocity of the nucleons and give the spin-orbit contribution. In the above expressions, $\bm L_{ij}$ is the relative angular momentum of the pair~$ij$
\begin{align}
\bm L_{ij}=\frac{1}{2i}({\bf r}_i-{\bf r}_j)\times(\bm\nabla_i-\bm\nabla_j) \;,\label{eq:LS_ij1}
\end{align}
and $\bm S_{ij}$ the total spin of the pair
\begin{align}
\bm S_{ij}=\frac{1}{2}(\bm\sigma_i+\bm\sigma_j) \;.\label{eq:LS_ij2}
\end{align}
Operators from 9 to 14 are included to better describe the Nijmegen higher partial wave phase shifts and the splitting of states with different $J$ values. However, the contribution of these operators is small compared to the total potential energy.
The last four operators of the AV18 potential account for the charge symmetry breaking effect, mainly due to the different masses of charged and neutral pions, and they are given by
\begin{align}
\mathcal O_{ij}^{\,p=15,18}=\Bigl\{T_{ij},(\bm\sigma_i\cdot\bm\sigma_j)\,T_{ij},S_{ij}\,T_{ij},\tau_i^z+\tau_j^z\Bigr\} \;,
\end{align}
where $T_{ij}$ is the isotensor operator defined in analogy with $S_{ij}$ as
\begin{align}
T_{ij}=3\,\tau_i^z\tau_j^z-\bm\tau_i\cdot\bm\tau_j \;.
\end{align}
The contribution to the total energy given by these four operators is however rather small.
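The operator structure itself is easy to visualize numerically. The following self-contained Python sketch (purely illustrative, not related to the published codes) builds $\bm\sigma_i\cdot\bm\sigma_j$ and the tensor operator $S_{ij}$ as $4\times4$ matrices on the two-nucleon spin space and checks two well known properties: $\bm\sigma_i\cdot\bm\sigma_j$ has eigenvalue $-3$ in the singlet and $+1$ in the triplet, and the tensor operator is traceless.
\begin{verbatim}
import numpy as np

# Pauli matrices
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)
SIG = [SX, SY, SZ]
ID2 = np.eye(2, dtype=complex)

def sigma_dot_sigma():
    """sigma_i . sigma_j on the 4-dimensional two-nucleon spin space."""
    return sum(np.kron(s, s) for s in SIG)

def S_ij(rhat):
    """Tensor operator 3(sigma_i.rhat)(sigma_j.rhat) - sigma_i.sigma_j."""
    s1 = sum(rhat[a] * np.kron(SIG[a], ID2) for a in range(3))
    s2 = sum(rhat[a] * np.kron(ID2, SIG[a]) for a in range(3))
    return 3 * s1 @ s2 - sigma_dot_sigma()

print(np.linalg.eigvalsh(sigma_dot_sigma().real))   # [-3, 1, 1, 1]
print(np.trace(S_ij([0.0, 0.0, 1.0])).real)         # 0
\end{verbatim}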
In QMC calculations, reduced versions of the original AV18 potential are often employed. The most used one is the Argonne V8'~(AV8')~\cite{Wiringa:2002}, which contains only the first eight operators and is not a simple truncation of AV18 but rather a reprojection, which preserves the isoscalar part in all $S$ and $P$ partial waves as well as in the $^3D_1$ wave and its coupling to $^3S_1$. AV8' is about $0.2\div0.3$~MeV per nucleon more attractive than Argonne V18 in light nuclei~\cite{Pieper:2001,Wiringa:2002,Wiringa:2002_url}, but its contribution is very similar to AV18 in neutron drops, where the difference is about 0.06~MeV per neutron~\cite{Pieper:2001}. Other common choices are the Argonne V6'~(AV6') and V4'~(AV4') potentials~\cite{Wiringa:2002}. AV6' is obtained by deleting the spin-orbit terms from AV8' and adjusting the potential to preserve the deuteron binding. The spin-orbit terms do not contribute to the $S$-wave and $^1P_1$ channels of $NN$ scattering and are the smallest contributors to the energy of $^4$He~\cite{Kamada:2001}, but they are important in differentiating between the $^3P_{0,1,2}$ channels. The AV4' potential eliminates the tensor terms. As a result, the $^1S_0$ and $^1P_1$ potentials are unaffected, but the coupling between the $^3S_1$ and $^3D_1$ channels is gone and the $^3P_{0,1,2}$ channels deteriorate further. The Fortran code for the AV18 and AVn' potentials is available at the webpage~\cite{Wiringa:1994}.
\subsection{Three-body $NNN$ potential}
\label{subsec:UIX-ILx}
The Urbana IX three-body force was originally proposed in combination with the Argonne AV18 and AV8'~\cite{Carlson:1983}. Although it slightly underbinds the energy of light nuclei, it has been extensively used to study the equation of state of nuclear and neutron matter~\cite{Akmal:1998,Sarsa:2003,Li:2008,Gandolfi:2009,Gandolfi:2009_gap,Gandolfi:2012}. The Illinois forces~\cite{Pieper:2001}, the most recent of which is the Illinois-7~(IL7)~\cite{Pieper:2008_AIP}, have been introduced to improve the description of both ground and excited states of light nuclei, showing excellent accuracy~\cite{Pieper:2001,Pieper:2005}, but they produce an unphysical overbinding in pure neutron systems~\cite{Maris:2013}.
The three-body Illinois potential consists of two- and three-pion exchange and a phenomenological short-range component (the UIX force does not include the three-pion rings):
\begin{align}
V_{ijk}=V_{ijk}^{2\pi}+V_{ijk}^{3\pi}+V_{ijk}^R \;.
\label{eq:V_ijk}
\end{align}
The two-pion term, as shown in Fig.~\ref{fig:NNN_2pi}, contains $P$- and $S$-wave $\pi N$ scattering terms (respectively in Fig.~\ref{fig:NNN_2pi_p} and Fig.~\ref{fig:NNN_2pi_s}):
\begin{align}
V_{ijk}^{2\pi}=V_{ijk}^{2\pi,P}+V_{ijk}^{2\pi,S}\;.
\end{align}
\begin{figure}[ht]
\centering
\subfigure[\label{fig:NNN_2pi_p}]{\includegraphics[height=3.7cm]{NNN_2pi_pw.pdf}}
\goodgap\goodgap\goodgap\goodgap\goodgap
\subfigure[\label{fig:NNN_2pi_s}]{\includegraphics[height=3.7cm]{NNN_2pi_sw.pdf}}
\caption[Two-pion exchange processes in the $NNN$ force]
{Two-pion exchange processes in the $NNN$ force. \ref{fig:NNN_2pi_p} is the Fujita-Miyazawa $P$-wave term and \ref{fig:NNN_2pi_s} the Tucson-Melbourne $S$-wave term.}
\label{fig:NNN_2pi}
\end{figure}
The $P$-wave component, originally introduced by Fujita and Miyazawa~\cite{Fujita:1957}, describes an intermediate excited $\Delta$ resonance produced by the exchange of two pions between nucleons $i$-$j$ and $j$-$k$, as shown in Fig.~\ref{fig:NNN_2pi_p}, and it can be written as
\begin{align}
V_{ijk}^{2\pi,P}=A_{2\pi}^P\,\mathcal O_{ijk}^{2\pi,P}\;,
\end{align}
where
\begin{subequations}
\begin{align}
A_{2\pi}^P &=-\frac{2}{81}\frac{f_{\pi NN}^2}{4\pi}\frac{f_{\pi\Delta N}^2}{4\pi}\frac{m_\pi^2}{m_\Delta-m_N} \;,\\[0.5em]
\mathcal O_{ijk}^{2\pi,P} &=\sum_{cyclic}\left(\phantom{\frac{1}{4}}\!\!\!\!\Bigl\{X_{ij},X_{jk}\Bigr\}
\Bigl\{\bm\tau_i\cdot\bm\tau_j,\bm\tau_j\cdot\bm\tau_k\Bigr\}
+\frac{1}{4}\Bigl[X_{ij},X_{jk}\Bigr]\Bigl[\bm\tau_i\cdot\bm\tau_j,\bm\tau_j\cdot\bm\tau_k\Bigr]\right) \;,
\end{align}
\label{eq:V_NNN_2pi_P}
\end{subequations}
and the $X_{ij}$ operator is the same as in Eq.~(\ref{eq:X_ij}). The constant $A_{2\pi}^P$ is fitted to reproduce the ground state of light nuclei and the properties of nuclear matter.
The $P$-wave TPE term is the longest-ranged nuclear $NNN$ contribution and it is attractive in all nuclei and in nuclear matter. However, it is very small or even slightly repulsive in pure neutron systems.
The $S$-wave component of TPE three-nucleon force is a simplified form of the original Tucson-Melbourne model~\cite{Coon:1979}, and it involves the $\pi N$ scattering in the $S$-wave as shown in Fig.~\ref{fig:NNN_2pi_s}. It has the following form:
\begin{align}
V_{ijk}^{2\pi,S}=A_{2\pi}^S\,\mathcal O_{ijk}^{2\pi,S}\;,
\end{align}
where
\begin{subequations}
\begin{align}
A_{2\pi}^{S} & = \left(\frac{f_{\pi NN}}{4\pi}\right)^2 a' m_\pi^2 \;, \\[0.5em]
\mathcal O_{ijk}^{2\pi,S} & = \sum_{cyclic}Z_\pi(r_{ij})Z_\pi(r_{jk})\,\bm\sigma_i\cdot\hat{\bm r}_{ij}\,\bm\sigma_k\cdot\hat{\bm r}_{kj}\,\bm\tau_i\cdot\bm\tau_k \;,
\end{align}
\label{eq:V_NNN_2pi_S}
\end{subequations}
and the $Z_\pi(r)$ function is defined as
\begin{align}
Z_\pi(r)=\frac{\mu_\pi r}{3}\Bigl[Y_\pi(r)-T_\pi(r)\Bigr] \;.
\label{eq:Z_pi}
\end{align}
The $S$-wave TPE term is required by chiral perturbation theory but in practice its contribution is only 3\%--4\% of $V_{ijk}^{2\pi,P}$ in light nuclei.
The three-pion term (Fig.~\ref{fig:NNN_3pi}) was introduced in the Illinois potentials. It consists of the subset of three-pion rings that contain only one $\Delta$ mass in the energy denominators.
\begin{figure}[h]
\centering
\subfigure[\label{fig:NNN_3pi_1}]{\includegraphics[height=3.7cm]{NNN_3pi_1.pdf}}
\goodgap\goodgap\goodgap
\subfigure[\label{fig:NNN_3pi_2}]{\includegraphics[height=3.7cm]{NNN_3pi_2.pdf}}
\caption[Three-pion exchange processes in the $NNN$ force]
{Three-pion exchange processes in the $NNN$ force.}
\label{fig:NNN_3pi}
\end{figure}
As discussed in Ref.~\cite{Pieper:2001}, these diagrams result in a large number of terms, the most important of which are the ones independent of cyclic permutations of $ijk$:
\begin{align}
V_{ijk}^{3\pi}=A_{3\pi}\,\mathcal O_{ijk}^{3\pi}\;,
\end{align}
where
\begin{subequations}
\begin{align}
A_{3\pi} & = \left(\frac{f^2_{\pi NN}}{4\pi}\frac{m_\pi}{3}\right)^3\frac{f^2_{\pi N\Delta}}{f^2_{\pi NN}}\frac{1}{(m_\Delta-m_N)^2} \;,\\[0.5em]
\mathcal O_{ijk}^{3\pi} &\simeq\frac{50}{3} S_{ijk}^\tau\,S_{ijk}^\sigma+\frac{26}{3} A_{ijk}^\tau\,A_{ijk}^\sigma \;.
\end{align}
\end{subequations}
The letters $S$ and $A$ denote operators that are symmetric and antisymmetric under the exchange of $j$ with $k$. Superscripts $\tau$ and $\sigma$ label operators containing isospin and spin-space parts, respectively. The isospin operators are
\begin{subequations}
\begin{align}
S_{ijk}^\tau &=2+\frac{2}{3}\left(\bm\tau_i\cdot\bm\tau_j+\bm\tau_j\cdot\bm\tau_k+\bm\tau_k\cdot\bm\tau_i\right)=4\,P_{T=3/2}\;,\\[0.5em]
A_{ijk}^\tau &=\frac{1}{3}\,i\,\bm\tau_i\cdot\bm\tau_j\times\bm\tau_k=-\frac{1}{6}\Bigl[\bm\tau_i\cdot\bm\tau_j,\bm\tau_j\cdot\bm\tau_k\Bigr]\;,
\end{align}
\end{subequations}
where $S_{ijk}^\tau$ is a projector onto isospin 3/2 triples and $A_{ijk}^\tau$ has the same isospin structure as the commutator part of $V_{ijk}^{2\pi,P}$.
The spin-space operators have many terms and they are listed in the Appendix of Ref.~\cite{Pieper:2001}. An important aspect of this structure is that there is a significant attractive term which acts only in $T=3/2$ triples, so the net effect of $V_{ijk}^{3\pi}$ is slight repulsion in $S$-shell nuclei and larger attraction in $P$-shell nuclei. However, in most light nuclei the contribution of this term is rather small, $\langle V_{ijk}^{3\pi}\rangle\lesssim 0.1 \langle V_{ijk}^{2\pi}\rangle$.
The last term of Eq.~(\ref{eq:V_ijk}) was introduced to compensate for the overbinding in nuclei and the too large equilibrium density of nuclear matter produced by the previous operators. It is strictly phenomenological, purely central and repulsive, and it describes the modification of the contribution of the TPE $\Delta$-box diagrams to $v_{ij}$ due to the presence of the third nucleon $k$ (Fig.~\ref{fig:NNN_2pi_d}).
It takes the form:
\begin{align}
V_{ijk}^R=A_R\,\mathcal O^R_{ijk}=A_R\sum_{cyclic}T_\pi^2(r_{ij})\,T_\pi^2(r_{jk}) \;,
\end{align}
where $T_\pi(r)$ is the OPE tensor potential defined in Eq.~(\ref{eq:T_pi}).
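Since this term is purely central, it is straightforward to evaluate; the sketch below (illustrative only) computes the cyclic sum for one triplet of positions, with a placeholder strength $A_R$ rather than a fitted Urbana/Illinois value.
\begin{verbatim}
import numpy as np

MU_PI, C_CUT = 0.7, 2.1   # fm^-1, fm^-2

def T_pi(r):
    x = MU_PI * r
    return (1 + 3/x + 3/x**2) * np.exp(-x)/x * (1 - np.exp(-C_CUT*r**2))**2

def V_R(r_i, r_j, r_k, A_R=0.005):
    """Central repulsive TNI term: A_R * sum_cyclic T_pi^2(r_ij) T_pi^2(r_jk).
    A_R is a placeholder strength in MeV, not the fitted value."""
    total = 0.0
    for a, b, c in [(r_i, r_j, r_k), (r_j, r_k, r_i), (r_k, r_i, r_j)]:
        total += T_pi(np.linalg.norm(a - b))**2 * T_pi(np.linalg.norm(b - c))**2
    return A_R * total

triplet = [np.array(p) for p in ([0., 0., 0.], [1.2, 0., 0.], [0., 1.5, 0.])]
print(V_R(*triplet))
\end{verbatim}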
\begin{figure}[h]
\centering
\includegraphics[height=3.7cm]{NNN_2pi_d.pdf}
\caption[Short-range contribution in the $NNN$ force]{Repulsive short-range contribution included in the $NNN$ force.}
\label{fig:NNN_2pi_d}
\end{figure}
Finally, the Illinois TNI can be written as a sum of four different terms (for Urbana~IX the three-pion term is absent):
\begin{align}
V_{ijk}=A_{2\pi}^P\,\mathcal O^{2\pi,P}_{ijk}+A_{2\pi}^{S}\,\mathcal O^{2\pi,S}_{ijk}+A_{3\pi}\,\mathcal O^{3\pi}_{ijk}+A_R\,\mathcal O^R_{ijk} \;.
\end{align}
\section{Interactions: hyperons and nucleons}
\label{sec:hyper_int}
We present a detailed description of the $\Lambda N$ and $\Lambda NN$ interactions as developed by Bodmer, Usmani and Carlson following the scheme of the Argonne potentials~\cite{Bodmer:1984,Bodmer:1985,Bodmer:1988,Usmani:1995,Usmani:1995_3B,Shoeb:1998,Usmani:1999,Sinha:2002,Usmani:2003,Usmani:2006,Usmani:2008}. The interaction is written in coordinate space and it includes two- and three-body hyperon-nucleon components with an explicit hard-core repulsion between baryons and a charge symmetry breaking term. We also introduce an effective $\Lambda\Lambda$ interaction mainly used in variational~\cite{Usmani:2004,Usmani:2006_He6LL} and cluster model~\cite{Hiyama:1997,Hiyama:2002} calculations for double $\Lambda$~hypernuclei.
\subsection{Two-body $\Lambda N$ potential}
\label{subsec:LN}
\subsubsection{$\Lambda N$ charge symmetric potential}
\label{subsec:LN_sym}
The $\Lambda$~particle has isospin $I=0$, so there is no OPE term, since the strong $\Lambda\Lambda\pi$ vertex is forbidden by isospin conservation. The $\Lambda$~hyperon can thus exchange a pion only through a $\Lambda\pi\Sigma$ vertex. The lowest order $\Lambda N$ coupling must therefore involve the exchange of two pions, with the formation of a virtual $\Sigma$ hyperon, as illustrated in Figs.~\ref{fig:LN_2pi} and \ref{fig:LN_2pi_2}. The TPE interaction is of intermediate range with respect to the long-range part of the $NN$ force. One-meson exchange processes can occur only through the exchange of a $K,K^*$ kaon pair, which exchanges the strangeness between the two baryons, as shown in Fig.~\ref{fig:LN_K}. The $K,K^*$ potential is short-range and contributes to the space-exchange and $\Lambda N$ tensor potentials. The latter is expected to be quite weak because the $K$ and $K^*$ tensor contributions have opposite sign~\cite{Shinmura:1984}.
\begin{figure}[h]
\centering
\subfigure[\label{fig:LN_2pi}]{\includegraphics[height=3.7cm]{LN_2pi.pdf}}
\goodgap\goodgap\goodgap
\subfigure[\label{fig:LN_2pi_2}]{\includegraphics[height=3.7cm]{LN_2pi_2.pdf}}
\goodgap\goodgap\goodgap
\subfigure[\label{fig:LN_K}]{\includegraphics[height=3.7cm]{LN_K.pdf}}
\caption[Meson exchange processes in the $\Lambda N$ force]
{Meson exchange processes in the $\Lambda N$ force. \ref{fig:LN_2pi} and \ref{fig:LN_2pi_2} are the TPE diagrams. \ref{fig:LN_K} represents the kaon exchange channel.}
\label{fig:LN}
\end{figure}
The $\Lambda N$ interaction has been modeled with an Urbana-type potential~\cite{Lagaris:1981} with spin-spin and space-exchange components and a TPE tail which is consistent with the available $\Lambda p$ scattering data below the $\Sigma$ threshold:
\begin{align}
v_{\lambda i}=v_0(r_{\lambda i})(1-\varepsilon+\varepsilon\,\mathcal P_x)+\frac{1}{4}v_\sigma T^2_\pi(r_{\lambda i})\,{\bm\sigma}_\lambda\cdot{\bm\sigma}_i \;,
\label{eq:V_LN}
\end{align}
where
\begin{align}
v_0(r_{\lambda i})=v_c(r_{\lambda i})-\bar v\,T^2_\pi(r_{\lambda i})\;.
\end{align}
Here,
\begin{align}
v_c(r)=W_c \Bigl(1+\e^{\frac{r-\bar r}{a}}\Bigr)^{-1}
\end{align}
is a Woods-Saxon repulsive potential introduced, similarly to the Argonne $NN$ interaction, in order to include all the short-range contributions, and $T_\pi(r)$ is the regularized OPE tensor operator defined in Eq.~(\ref{eq:T_pi}). The term $\bar v\,T^2_\pi(r_{\lambda i})$ corresponds to a TPE mechanism due to OPE transition potentials
$\left(\Lambda N\leftrightarrow\Sigma N,\Sigma\Delta\right)$ dominated by their tensor components. The terms $\bar v=(v_s+3v_t)/4$ and $v_\sigma=v_s-v_t$ are the spin-average and spin-dependent strengths, where $v_s$ and $v_t$ denote the singlet- and triplet-state strengths, respectively; low-energy $\Lambda p$ scattering is well fitted with $\bar v=6.15(5)$~MeV. $\mathcal P_x$ is the $\Lambda N$ space-exchange operator and $\varepsilon$ the corresponding exchange parameter, which is rather poorly determined from the $\Lambda p$ forward-backward asymmetry to be $\varepsilon\simeq0.1\div0.38$. All the parameters defining the $\Lambda N$ potential can be found in Tab.~\ref{tab:parLN+LNN}.
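A minimal numerical sketch of Eq.~(\ref{eq:V_LN}) with the parameters of Tab.~\ref{tab:parLN+LNN} is reported below. The expectation values of the space-exchange operator and of $\bm\sigma_\lambda\cdot\bm\sigma_i$ are inputs here (a unit exchange expectation and the spin-singlet value $-3$ are used only as an example), $\varepsilon=0.3$ is taken inside the quoted range and $\mu_\pi\simeq0.7$~fm$^{-1}$; the sketch is illustrative and not taken from the variational codes cited above.
\begin{verbatim}
import numpy as np

W_C, R_BAR, A_WS = 2137.0, 0.5, 0.2     # MeV, fm, fm
V_BAR, V_SIG, C_CUT = 6.15, 0.24, 2.0   # MeV, MeV, fm^-2
EPS = 0.3                               # exchange parameter, within 0.1-0.38
MU_PI = 0.7                             # fm^-1

def T_pi(r):
    x = MU_PI * r
    return (1 + 3/x + 3/x**2) * np.exp(-x)/x * (1 - np.exp(-C_CUT*r**2))**2

def v_LN(r, p_x=1.0, sig_dot_sig=-3.0):
    """Lambda-N potential for given expectation values of the space-exchange
    operator P_x and of sigma_lambda . sigma_N (spin-singlet value here)."""
    v0 = W_C / (1 + np.exp((r - R_BAR)/A_WS)) - V_BAR * T_pi(r)**2
    return v0 * (1 - EPS + EPS*p_x) + 0.25 * V_SIG * T_pi(r)**2 * sig_dot_sig

for r in (0.5, 1.0, 1.5, 2.0):
    print(r, v_LN(r))
\end{verbatim}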
\subsubsection{$\Lambda N$ charge symmetry breaking potential}
\label{subsubsec:LN_CSB}
The $\Lambda$-nucleon interaction should distinguish between the nucleon isospin channels $\Lambda p$ and $\Lambda n$.
The mirror pair of hypernuclei $^4_\Lambda$H and $^4_\Lambda$He is the main source of information about the charge symmetry breaking (CSB) $\Lambda N$ interaction. The experimental data for $A=4$ $\Lambda$~hypernuclei~\cite{Juric:1973} indeed show a clear difference in the $\Lambda$~separation energies for the $(0^+)$ ground state
\begin{subequations}
\begin{align}
B_\Lambda\left(^4_\Lambda\text{H}\right)&=2.04(4)~\text{MeV}\;,\\[0.5em]
B_\Lambda\left(^4_\Lambda\text{He}\right)&=2.39(3)~\text{MeV}\;,
\end{align}
\end{subequations}
and for the $(1^+)$ excited state
\begin{subequations}
\begin{align}
B_\Lambda^*\left(^4_\Lambda\text{H}\right)&=1.00(6)~\text{MeV}\;,\\[0.5em]
B_\Lambda^*\left(^4_\Lambda\text{He}\right)&=1.24(6)~\text{MeV}\;.
\end{align}
\end{subequations}
The differences in the hyperon separation energies are:
\begin{subequations}
\begin{align}
\Delta B_\Lambda&=0.35(6)~\text{MeV}\;,\\[0.5em]
\Delta B_\Lambda^*&=0.24(6)~\text{MeV}\;.
\end{align}
\end{subequations}
However, the experimental values of $\Delta B_\Lambda$ must be corrected for the contribution $\Delta B_c$ due to the Coulomb interaction in order to isolate the part to be attributed to CSB effects. By means of a variational calculation, Bodmer and Usmani~\cite{Bodmer:1985} estimated the Coulomb contribution to be rather small
\begin{subequations}
\begin{align}
|\Delta B_c|&=0.05(2)~\text{MeV}\;,\\[0.5em]
|\Delta B_c^*|&=0.025(15)~\text{MeV}\;,
\end{align}
\end{subequations}
and they were able to reproduce the differences in the $\Lambda$ separation energies by means of a phenomenological spin dependent CSB potential. It was found that the CSB interaction is effectively spin independent and can be simply expressed (as subsequently reported in Ref.~\cite{Usmani:1999}) by
\begin{align}
v_{\lambda i}^{CSB}=C_\tau\,T_\pi^2\left(r_{\lambda i}\right)\tau_i^z \quad\quad C_\tau=-0.050(5)~\text{MeV}\;.
\label{eq:V_CSB}
\end{align}
Since $C_\tau$ is negative, the $\Lambda p$ channel becomes attractive while the $\Lambda n$ channel is repulsive, consistently with the experimental results for $^4_\Lambda$H and $^4_\Lambda$He. The contribution of CSB is expected to be very small in symmetric hypernuclei (if Coulomb is neglected) but could have a significant effect in hypernuclei with a neutron (or proton) excess.
\subsection{Three-body $\Lambda NN$ potential}
\label{subsubsec:LNN}
The $\Lambda N$ force obtained by fitting the $\Lambda p$ scattering data does not provide a good account of the experimental binding energies, just as happens for nuclei when only the bare $NN$ interaction is used. A three-body $\Lambda NN$ force is required in this scheme to resolve the overbinding. The $\Lambda NN$ potential is of the same TPE order as the $\Lambda N$ force and it includes diagrams involving two nucleons and one hyperon, as reported in Fig.~\ref{fig:LNN}.
\begin{figure}[ht]
\centering
\subfigure[\label{fig:LNN_pw}]{\includegraphics[height=3.2cm]{LNN_pw.pdf}}
\goodgap\goodgap\goodgap
\subfigure[\label{fig:LNN_sw}]{\includegraphics[height=3.2cm]{LNN_sw.pdf}}
\goodgap\goodgap\goodgap
\subfigure[\label{fig:LNN_d}]{\includegraphics[height=3.2cm]{LNN_d.pdf}}
\caption[Two-pion exchange processes in the $\Lambda NN$ force]
{Two-pion exchange processes in three-body $\Lambda NN$ force. \ref{fig:LNN_pw} and \ref{fig:LNN_sw} are, respectively, the $P$- and $S$-wave TPE contributions. \ref{fig:LNN_d} is the phenomenological dispersive term.}
\label{fig:LNN}
\end{figure}
The diagrams in Fig.~\ref{fig:LNN_pw} and Fig.~\ref{fig:LNN_sw} correspond respectively to the $P$-wave and $S$-wave TPE
\begin{align}
v^{2\pi}_{\lambda ij}=v^{2\pi,P}_{\lambda ij}+v^{2\pi,S}_{\lambda ij}\;,\label{eq:V_LNN_2pi}
\end{align}
that can be written in the following form:
\begin{align}
v_{\lambda ij}^{2\pi,P}&=\widetilde C_P\,\mathcal O_{\lambda ij}^{2\pi,P}
&&\hspace{-1.4cm}=-\frac{C_P}{6}\Bigl\{X_{i\lambda}\,,X_{\lambda j}\Bigr\}\,{\bm\tau}_{i}\cdot{\bm\tau}_{j}\;,\label{eq:V_LNN_2pi_P} \\[0.5em]
v_{\lambda ij}^{2\pi,S}&=C_S\,\mathcal O_{\lambda ij}^{2\pi,S}
&&\hspace{-1.4cm}=C_S\,Z_\pi\left(r_{\lambda i}\right)Z_\pi\left(r_{\lambda j}\right)\,{\bm\sigma}_{i}\cdot\hat{\bm r}_{i\lambda}\,
{\bm\sigma}_{j}\cdot\hat{\bm r}_{j\lambda}\,{\bm\tau}_{i}\cdot{\bm\tau}_{j}\;.\label{eq:V_LNN_2pi_S}
\end{align}
The structure of $v_{\lambda ij}^{2\pi}$ is very close to that of the Fujita-Miyazawa $P$-wave term and of the Tucson-Melbourne $S$-wave term of the nuclear $V_{ijk}^{2\pi}$ (see Eqs.~(\ref{eq:V_NNN_2pi_P}) and (\ref{eq:V_NNN_2pi_S})). In the hypernuclear sector, however, there are simplifications because only two nucleons at a time enter the picture, so there are no cyclic summations, and the $\Lambda$~particle has isospin zero, so no $\bm\tau_\lambda$ operator is involved. As reported in Ref.~\cite{Pieper:2001}, the strength of $V_{ijk}^{2\pi,S}$ is $\left|A_{2\pi}^S\right|\simeq0.8$~MeV, although in other references it is assumed to have a value of 1.0~MeV. Comparing the Tucson-Melbourne $NNN$ model with Eq.~(\ref{eq:V_LNN_2pi_S}) for the $\Lambda NN$ potential, one may write an identical structure for both the $S$-wave $\Lambda NN$ and $NNN$ potentials as follows:
\begin{align}
C_S\,\mathcal O_{\lambda ij}^{2\pi,S}=A_{2\pi}^{S}\,\mathcal O_{ijk}^{2\pi,S}\;.
\end{align}
This directly relates $C_S$ in the strange sector to $A_{2\pi}^S$ in the non-strange sector. Since the $\Sigma$-$\Lambda$ mass difference is small compared to the $\Delta$-$N$ mass difference, the TPE $\Lambda NN$ potential in the $S=-1$ sector is stronger than the $NNN$ potential in the non-strange $S=0$ sector, implying a larger strength for the $\Lambda NN$ term. The value of $C_S$ is therefore expected to exceed 1.0~MeV, and it is taken to be 1.5~MeV~\cite{Usmani:2008}. However, the $S$-wave component is expected to be quite weak, at least in spin-zero core hypernuclei, and indeed it has been neglected in variational calculations for $^{17}_{~\Lambda}$O and $^5_\Lambda$He~\cite{Usmani:1995,Usmani:2003,Usmani:2004}.
The last diagram (Fig.~\ref{fig:LNN_d}) represents the dispersive contribution associated with the medium modifications of the intermediate state potentials for the $\Sigma$, $N$, $\Delta$ due to the presence of the second nucleon. This term describes all the short-range contributions and it is expected to be repulsive due to the suppression mechanism associated with the $\Lambda N$-$\Sigma N$ coupling~\cite{Bodmer:1971,Rozynek:1979}. The interaction of the intermediate states $\Sigma$, $N$, $\Delta$ with a nucleon of the medium will be predominantly through a TPE potential, proportional to $T_\pi^2(r)$, with an explicit spin dependence (negligible for spin-zero core hypernuclei):
\begin{align}
v_{\lambda ij}^D=W_D\,\mathcal O_{\lambda ij}^D=W_D\,T_{\pi}^{2}\left(r_{\lambda i}\right)T^{2}_{\pi}\left(r_{\lambda j}\right)
\!\!\bigg[1+\frac{1}{6}{\bm\sigma}_\lambda\!\cdot\!\left({\bm\sigma}_{i}+{\bm\sigma}_{j}\right)\bigg]\;.\label{eq:V_LNN_D}
\end{align}
The radial functions $T_\pi(r)$ and $Z_\pi(r)$ are the same as in the nuclear potential, see Eq.~(\ref{eq:T_pi}) and Eq.~(\ref{eq:Z_pi}).
The operator $X_{\lambda i}$ is the same as in Eq.~(\ref{eq:X_ij}), with the first nucleon replaced by the $\Lambda$~particle.
It is important to note that the three-body $\Lambda NN$ interaction has been investigated in variational calculations for $_\Lambda^5$He~\cite{Usmani:1995_3B,Usmani:2003,Usmani:2008}, $_{\Lambda\Lambda}^{\;\;\,6}$He~\cite{Usmani:2004,Usmani:2006_He6LL} and $_{~\Lambda}^{17}$O~\cite{Usmani:1995,Usmani:1995_3B}, resulting in a range of values for the $C_P$ and $W_D$ parameters (see Tab.~\ref{tab:parLN+LNN}) that give a good description of the properties of the studied hypernuclei. A unique set of parameters that reproduces all the available experimental energies for single (and double) $\Lambda$~hypernuclei has not yet been established.
A second crucial observation is that, differently from the nucleon sector, both the two- and three-body lambda-nucleon interactions appear at the same TPE order. In addition, the mass difference between the $\Lambda$~particle and its excitation $\Sigma$ is much smaller than the mass difference between the nucleon and the $\Delta$ resonance. Thus, the $\Lambda NN$ interaction cannot be neglected in this framework; it is a key ingredient, in addition to the $\Lambda N$ force, for any consistent theoretical calculation involving $\Lambda$~hyperons.
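As an example of how these three-body terms are evaluated in practice, the sketch below computes the radial part of the dispersive contribution of Eq.~(\ref{eq:V_LNN_D}) for one $\Lambda NN$ triplet, with the spin bracket set to its spin-saturated value of one and a value of $W_D$ chosen inside the variational range of Tab.~\ref{tab:parLN+LNN}; it is an illustration, not the production implementation.
\begin{verbatim}
import numpy as np

MU_PI, C_CUT = 0.7, 2.0   # fm^-1, fm^-2
W_D = 0.02                # MeV, inside the variational range of the table

def T_pi(r):
    x = MU_PI * r
    return (1 + 3/x + 3/x**2) * np.exp(-x)/x * (1 - np.exp(-C_CUT*r**2))**2

def v_D(r_lam, r_i, r_j):
    """Dispersive Lambda-NN term with the spin bracket set to 1
    (spin-saturated nucleon pair)."""
    r_li = np.linalg.norm(r_lam - r_i)
    r_lj = np.linalg.norm(r_lam - r_j)
    return W_D * T_pi(r_li)**2 * T_pi(r_lj)**2

lam = np.zeros(3)
n1, n2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.3, 0.0])
print(v_D(lam, n1, n2))
\end{verbatim}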
\renewcommand{\arraystretch}{1.4}
\begin{table}[!hb]
\centering
\begin{tabular*}{\linewidth}{@{\hspace{5.0em}\extracolsep{\fill}}ccc@{\extracolsep{\fill}\hspace{5.0em}}}
\toprule
\toprule
Constant & Value & Unit \\
\midrule
$W_c$ & $2137$ & MeV \\
$\bar r$ & $0.5$ & fm \\
$a$ & $0.2$ & fm \\
$v_s$ & $6.33, 6.28$ & MeV \\
$v_t$ & $6.09, 6.04$ & MeV \\
$\bar v$ & $6.15(5)$ & MeV \\
$v_\sigma$ & $0.24$ & MeV \\
$c$ & $2.0$ & fm$^{-2}$ \\
$\varepsilon$ & $0.1\div0.38$ & --- \\
$C_\tau$ & $-0.050(5)$ & MeV \\
$C_P$ & $0.5\div2.5$ & MeV \\
$C_S$ & $\simeq 1.5$ & MeV \\
$W_D$ & $0.002\div0.058$ & MeV \\
\bottomrule
\bottomrule
\end{tabular*}
\caption[Parameters of the $\Lambda N$ and $\Lambda NN$ interaction]
{Parameters of the $\Lambda N$ and $\Lambda NN$ interaction~(see~\cite{Usmani:2008} and references therein).
For $C_P$ and $W_D$ the variational allowed range is shown. The value of the charge symmetry breaking parameter $C_\tau$ is from Ref.~\cite{Usmani:1999}.\\}
\label{tab:parLN+LNN}
\end{table}
\renewcommand{\arraystretch}{1.0}
\subsection{Two-body $\Lambda\Lambda$ potential}
\label{subsec:LL}
Since $\Lambda\Lambda$ scattering data cannot be collected, experimental information about the $\Lambda\Lambda$ interaction can be obtained only from the $\Lambda\Lambda$~separation energy of the observed double $\Lambda$~hypernuclei: $^{\;\;\,6}_{\Lambda\Lambda}$He~\cite{Takahashi:2001,Nakazawa:2010,Ahn:2013}, $^{\;13}_{\Lambda\Lambda}$B~\cite{Nakazawa:2010} and the $^{\;10}_{\Lambda\Lambda}$Be isotopes ($A=10\div12$)~\cite{Danysz:1963,Nakazawa:2010}. Evidence for the production of $_{\Lambda\Lambda}^{\;\;\,4}$H has been reported in Ref.~\cite{Ahn:2001}, but no information about the $\Lambda\Lambda$~separation energy was found.
On the other hand, there is a theoretical indication for the one-boson exchange (OBE) part of the $\Lambda\Lambda$ interaction coming from the $SU(3)$ invariance of the coupling constants, but the $\Lambda\Lambda$ force is still far from being settled.
In the following, we adopt the scheme used in the three- and four-body cluster models for double $\Lambda$~hypernuclei~\cite{Hiyama:1997,Hiyama:2002}, which was
also used in Faddeev-Yakubovsky calculations for light double $\Lambda$~hypernuclei~\cite{Filikhin:2002} and in variational calculations on $^{\;\;\,4}_{\Lambda\Lambda}$H~\cite{Shoeb:2004,Shoeb:2005}, $^{\;\;\,5}_{\Lambda\Lambda}$H and $^{\;\;\,5}_{\Lambda\Lambda}$He~\cite{Shoeb:2004,Shoeb:2007} and $^{\;\;\,6}_{\Lambda\Lambda}$He~\cite{Usmani:2004,Usmani:2006_He6LL,Shoeb:2004,Shoeb:2007}, with different parametrizations.
The employed OBE-simulating $\Lambda\Lambda$ effective interaction is a low-energy phase equivalent Nijmegen interaction represented by a sum of three Gaussians:
\begin{align}
&v_{\lambda\mu}=\sum_{k=1}^{3}\left(v_0^{(k)}+v_\sigma^{(k)}\,{\bm\sigma}_\lambda\cdot{\bm\sigma}_\mu\right)\e^{-\mu^{(k)}r_{\lambda\mu}^2}\;. \label{eq:V_LL}
\end{align}
The most recent parametrization of the potential (see Tab.~\ref{tab:parLL}) was fitted in order to simulate the $\Lambda\Lambda$ sector of the Nijmegen F (NF) interaction~\cite{Nagels:1979,Maessen:1989,Rijken:1999}. The NF is the simplest among the Nijmegen models with a scalar nonet, which seems to be more appropriate than the versions including only a scalar singlet in order to reproduce the weak binding energy indicated by the NAGARA event~\cite{Takahashi:2001}. The components $k=1,2$ of the above Gaussian potential are determined so as to simulate the $\Lambda\Lambda$ sector of NF, while the strength of the $k=3$ component is adjusted so as to reproduce the NAGARA experimental double $\Lambda$~separation energy of $^{\;\;\,6}_{\Lambda\Lambda}$He, $7.25\pm 0.19^{+0.18}_{-0.11}$~MeV. In 2010, Nakazawa reported a new, more precise determination of $B_{\Lambda\Lambda}=6.93\pm0.16$~MeV for $^{\;\;\,6}_{\Lambda\Lambda}$He~\cite{Nakazawa:2010}, obtained via the $\Xi^-$ hyperon capture at rest reaction in a hybrid emulsion. This value has been recently revised to $B_{\Lambda\Lambda}=6.91\pm0.16$~MeV by the E373 (KEK-PS) Collaboration~\cite{Ahn:2013}. We are not aware of a refitting of the $\Lambda\Lambda$ Gaussian potential to these more recent experimental results, which are in any case compatible with the NAGARA event. We therefore consider the original parametrization of Ref.~\cite{Hiyama:2002}.
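The three-Gaussian form of Eq.~(\ref{eq:V_LL}) is simple to evaluate with the parameters of Tab.~\ref{tab:parLL}; in the sketch below the two $\Lambda$'s are taken in the $^1S_0$ channel, where $\langle\bm\sigma_\lambda\cdot\bm\sigma_\mu\rangle=-3$, as an example.
\begin{verbatim}
import numpy as np

# size parameters in fm^-2, strengths in MeV (values of the table)
MU = np.array([0.555, 1.656, 8.163])
V0 = np.array([-10.67, -93.51, 4884.0])
VS = np.array([0.0966, 16.08, 915.8])

def v_LL(r, sig_dot_sig=-3.0):
    """Three-Gaussian Lambda-Lambda potential; two Lambdas in the 1S0
    channel have <sigma.sigma> = -3."""
    return np.sum((V0 + VS * sig_dot_sig) * np.exp(-MU * r**2))

for r in (0.5, 1.0, 1.5, 2.0):
    print(r, v_LL(r))
\end{verbatim}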
\renewcommand{\arraystretch}{1.4}
\begin{table}[!b]
\centering
\begin{tabular*}{\linewidth}{@{\hspace{4.0em}\extracolsep{\fill}}cccc@{\extracolsep{\fill}\hspace{4.0em}}}
\toprule
\toprule
$\mu^{(k)}$ & $0.555$ & $1.656$ & $8.163$ \\
\midrule
$v_0^{(k)}$ & $-10.67$ & $-93.51$ & $4884$ \\
$v_\sigma^{(k)}$ & $0.0966$ & $16.08$ & $915.8$ \\
\bottomrule
\bottomrule
\end{tabular*}
\caption[Parameters of the $\Lambda\Lambda$ interaction]
{Parameters of the $\Lambda\Lambda$ interaction.
The size parameters $\mu^{(k)}$ are in fm$^{-2}$ and the strengths $v_0^{(k)}$ and $v_\sigma^{(k)}$ are in MeV~\cite{Hiyama:2002}.\\}
\label{tab:parLL}
\end{table}
\renewcommand{\arraystretch}{1.0}
\chapter{Method}
\label{chap:method}
In nuclear physics, many-body calculations are used to understand nuclear systems in the non-relativistic regime. When one is interested in low-energy phenomena, a nucleus (or an extended nucleonic system) can be described as a collection of particles interacting via a potential that depends on positions, momenta, spin and isospin. The properties of the system can then be determined by solving a many-body Schr\"odinger equation. Such calculations give access, for example, to binding energies, excitation spectra, densities, reactions and many other aspects of nuclei. The equation of state, masses, radii and other properties of astrophysical objects are obtained by describing them as an infinite nuclear medium.
The two main problems related to microscopic few- and many-body calculations in nuclear physics are the determination of the Hamiltonian and the method used to accurately solve the Schr\"odinger equation. In the previous chapter, we have already seen how to build a realistic nuclear Hamiltonian, including strange degrees of freedom. In the following we will focus on the methodological part, presenting a class of quantum Monte Carlo algorithms: the Diffusion Monte Carlo (DMC) and, in more detail, the Auxiliary Field Diffusion Monte Carlo (AFDMC). Such methods are based on evolving a trial wave function in imaginary time to yield the ground state of the system. The DMC method sums explicitly over spin and isospin states and can use very sophisticated wave functions, but it is limited to small systems. In the AFDMC, in addition to the coordinates, the spin and isospin degrees of freedom are also sampled. It can thus treat larger systems, but there are some limitations on the trial wave function and on the nuclear potentials that can be handled.
Strangeness can be included in AFDMC calculations by adding hyperons to the standard nucleons. The interaction between hyperons and nucleons presented in the previous chapter is written in a suitable form to be treated within this algorithm. By extending the AFDMC nuclear wave function to the hyperonic sector, it is possible to study both hypernuclei and hypermatter. A new QMC approach to strange physics is thus now available.
\section{Diffusion Monte Carlo}
\label{sec:DMC}
The Diffusion Monte Carlo method~\cite{Mitas:1998,Pieper:2008,Kalos:2008,Lipparini:2008} projects the ground-state out of a stationary trial wave function $|\psi_T\rangle$ not orthogonal to the true ground state. Consider the many-body time dependent Schr\"odinger equation with its formal solution
\begin{align}
i\hbar\frac{\partial}{\partial t}|\psi(t)\rangle=(H-E_T)|\psi(t)\rangle\quad\Rightarrow\quad|\psi(t+dt)\rangle=\e^{-\frac{i}{\hbar}(H-E_T)dt}|\psi(t)\rangle\;,
\end{align}
and let us move to imaginary time $\tau=it/\hbar$\footnote{With this definition $\tau$ has the dimensions of an inverse energy.}:
\begin{align}
-\frac{\partial}{\partial\tau}|\psi(\tau)\rangle=(H-E_T)|\psi(\tau)\rangle\quad\Rightarrow\quad|\psi(\tau+d\tau)\rangle=\e^{-(H-E_T)d\tau}|\psi(\tau)\rangle\;.\label{eq:psi_tau}
\end{align}
The initial state $|\psi(0)\rangle=|\psi_T\rangle$ is the same for both the real and the imaginary time Schr\"odinger equations, and we can expand it on a complete orthonormal set of eigenvectors $|\varphi_n\rangle$ of the Hamiltonian $H$:
\begin{align}
|\psi_T\rangle=\sum_{n=0}^\infty c_n|\varphi_n\rangle \;.
\end{align}
Supposing that $|\psi_T\rangle$ is not orthogonal to the true ground state, i.e.~$c_0\ne0$, and that at least the ground state is non-degenerate, i.e.~$E_n\ge E_{n-1}>E_0$, where $E_n$ are the eigenvalues of $H$ associated with $|\varphi_n\rangle$, the imaginary time evolution of $|\psi_T\rangle$ is given by
\begin{align}
|\psi(\tau)\rangle&=\sum_{n=0}^\infty c_n \e^{-(E_n-E_T)\tau}|\varphi_n\rangle \;,\nonumber\\[0.2em]
&=c_0\e^{-(E_0-E_T)\tau}|\varphi_0\rangle+\sum_{n=1}^\infty c_n \e^{-(E_n-E_T)\tau}|\varphi_n\rangle \;.\label{eq:psi_0}
\end{align}
If the energy offset $E_T$ is the exact ground state energy $E_0$, in the limit $\tau\rightarrow\infty$ the components of Eq.~(\ref{eq:psi_0}) for $n>0$ vanish and we are left with
\begin{align}
\lim_{\tau\rightarrow\infty}|\psi(\tau)\rangle=c_0|\varphi_0\rangle\;.\label{eq:tau_limit}
\end{align}
Starting from a generic initial trial wave function $|\psi_T\rangle$ not orthogonal to the ground state, and adjusting the energy offset $E_T$ to be as close as possible to $E_0$, in the limit of infinite imaginary time, one can project out the exact ground state $c_0|\varphi_0\rangle$ giving access to the lowest energy properties of the system.
Consider the imaginary time propagation of Eq.~(\ref{eq:psi_tau}) and insert a completeness relation over the orthonormal basis $|R'\rangle$, where $R$ represents a configuration $\{\bm r_1,\ldots,\bm r_\mathcal N\}$ of the $\mathcal N$ particle system with all its degrees of freedom:
\begin{align}
|\psi(\tau+d\tau)\rangle&=\e^{-(H-E_T)d\tau}|\psi(\tau)\rangle\;,\nonumber\\[0.2em]
&=\int dR'\e^{-(H-E_T)d\tau}|R'\rangle\langle R'|\psi(\tau)\rangle\;.
\end{align}
Projecting on the coordinates $\langle R|$ leads to
\begin{align}
\langle R|\psi(\tau+d\tau)\rangle=\int dR'\,\langle R|\e^{-(H-E_T)d\tau}|R'\rangle\langle R'|\psi(\tau)\rangle\;,\label{eq:psi_tau_dtau}
\end{align}
where $\langle R|\e^{-(H-E_T)d\tau}|R'\rangle=G(R,R',d\tau)$ is the Green's function of the operator $(H-E_T)+\frac{\partial}{\partial\tau}$. Recalling that
$\langle R|\psi(\tau)\rangle=\psi(R,\tau)$, we can write Eq.~(\ref{eq:psi_tau}) as
\begin{align}
-\frac{\partial}{\partial\tau}\psi(R,\tau)&=(H-E_T)\psi(R,\tau)\;,\label{eq:psi_R_tau}\\[0.5em]
\psi(R,\tau+d\tau)&=\int dR'\,G(R,R',d\tau)\,\psi(R',\tau)\;.\label{eq:G}
\end{align}
If we consider a non-interacting many-body system, i.e. the Hamiltonian is given by the pure kinetic term
\begin{align}
H_0=T=-\frac{\hbar^2}{2m}\sum_{i=1}^{\mathcal N}\nabla_i^2\;,
\end{align}
the Schr\"odinger equation~(\ref{eq:psi_R_tau}) becomes a $3\mathcal N$-dimensional diffusion equation. By writing the Green's function of Eq.~(\ref{eq:G}) in momentum space by means of the Fourier transform, it is possible to show that $G_0$ is a Gaussian with variance proportional to $\tau$
\begin{align}
G_0(R,R',d\tau)=\left(\frac{1}{4\pi Dd\tau}\right)^{\frac{3\mathcal N}{2}}\!\e^{-\frac{(R-R')^2}{4Dd\tau}}\;,\label{eq:G0}
\end{align}
where $D=\hbar^2/2m$ is the diffusion constant of a set of particles in Brownian motion with dynamics governed by random collisions. This interpretation can be implemented by representing the wave function $\psi(R,\tau)$ by a set of discrete sampling points, called \emph{walkers}
\begin{align}
\psi(R,\tau)=\sum_k\delta(R-R_k)\;,
\end{align}
and evolving this discrete distribution for an imaginary time $d\tau$ by means of Eq.~(\ref{eq:G}):
\begin{align}
\psi(R,\tau+d\tau)=\sum_k G_0(R,R_k,d\tau)\;.
\end{align}
The result is a set of Gaussians that in the infinite imaginary time limit represents a distribution of walkers according to the lowest state of the Hamiltonian, that can be used to calculate the ground state properties of the system.
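The free-particle propagation of Eq.~(\ref{eq:G0}) amounts to adding a Gaussian random displacement to every coordinate of every walker; a minimal Python sketch of this step (illustrative only, in arbitrary units) is the following.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def diffuse(walkers, D, dtau):
    """One free-propagation step: each coordinate is moved by a Gaussian
    random displacement with variance 2*D*dtau."""
    sigma = np.sqrt(2.0 * D * dtau)
    return walkers + rng.normal(scale=sigma, size=walkers.shape)

# 100 walkers, 4 particles in 3 dimensions
walkers = np.zeros((100, 4, 3))
for _ in range(50):
    walkers = diffuse(walkers, D=1.0, dtau=0.01)
print(walkers.std())   # spreads as sqrt(2*D*tau)
\end{verbatim}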
Let us now consider the full Hamiltonian, where the interaction is described by a central potential in coordinate space:
\begin{align}
H=T+V=-\frac{\hbar^2}{2m}\sum_{i=1}^{\mathcal N}\nabla_i^2+V(R)\;.
\end{align}
Because $T$ and $V$ in general do not commute, it is not possible to directly split the propagator into a kinetic and a potential part
\begin{align}
\e^{-(H-E_T)d\tau}\ne\e^{-Td\tau}\e^{-(V-E_T)d\tau}\;,
\end{align}
and thus the analytic solution of the Green's function $\langle R|\e^{-(T+V-E_T)d\tau}|R'\rangle$ is not known in most cases.
However, by means of the Trotter-Suzuki formula to order $d\tau^3$
\begin{align}
\e^{-(A+B)d\tau}=\e^{-A\frac{d\tau}{2}}\e^{-B d\tau}\e^{-A\frac{d\tau}{2}}\,+\ord\left(d\tau^3\right)\;,\label{eq:Trotter_3}
\end{align}
which is an improvement of the standard
\begin{align}
\e^{-(A+B)d\tau}=\e^{-Ad\tau}\e^{-B d\tau}\,+\ord\left(d\tau^2\right)\;,\label{eq:Trotter_2}
\end{align}
in the limit of small imaginary time step $d\tau$ it is possible to write an approximate solution for $\psi(R,\tau+d\tau)$:
\begin{align}
\psi(R,\tau+d\tau)&\simeq\int dR'\langle R|\e^{-V\frac{d\tau}{2}}\e^{-T d\tau}\e^{-V\frac{d\tau}{2}}\e^{E_Td\tau}|R'\rangle\,\psi(R',\tau)\;,\nonumber\\[0.2em]
&\simeq\int dR'\underbrace{\langle R|\e^{-T d\tau}|R'\rangle}_{G_0(R,R',d\tau)}
\underbrace{\phantom{\langle}\!\!\e^{-\left(\frac{V(R)+V(R')}{2}-E_T\right)d\tau}}_{G_V(R,R',d\tau)}\psi(R',\tau)\;,\nonumber\\[0.2em]
&\simeq\left(\frac{1}{4\pi Dd\tau}\right)^{\frac{3\mathcal N}{2}}\!\!\int dR'\e^{-\frac{(R-R')^2}{4Dd\tau}}\e^{-\left(\frac{V(R)+V(R')}{2}-E_T\right)d\tau} \psi(R',\tau)\;,\label{eq:psi_propag}
\end{align}
which is the same as Eq.~(\ref{eq:G}) with the full Green's function given by
\begin{align}
G(R,R',d\tau)\simeq G_0(R,R',d\tau)\,G_V(R,R',d\tau)\;.\label{eq:G0-GV}
\end{align}
According to the interacting Hamiltonian, the propagation of $\psi(R,\tau)$ for $d\tau\rightarrow0$ is thus described by the integral~(\ref{eq:psi_propag}) and the long imaginary time evolution necessary to project out the ground state component of the wave function is realized by iteration until convergence is reached.
The steps of this process, which constitute the Diffusion Monte Carlo algorithm, can be summarized as follows (a minimal numerical sketch of the resulting loop is reported after the list):
\begin{enumerate}
\item\label{item:DMC1} An initial distribution of walkers $w_i$ with $i=1,\ldots,\mathcal N_w$ is sampled from the trial wave function $\langle R|\psi_T\rangle=\psi_T(R)$ and the starting trial energy $E_T$ is chosen (for instance from a variational calculation or close to the expected result).
\item\label{item:DMC2} The spatial degrees of freedom are propagated for a small imaginary time step $d\tau$ with probability density $G_0(R,R',d\tau)$, i.e.~the coordinates of the walkers are diffused by means of a Brownian motion
\begin{align}
R=R'+\xi\;,\label{eq:Brown}
\end{align}
where $\xi$ is a stochastic variable distributed according to a Gaussian probability density with $\sigma^2=2Dd\tau$ and zero average.
\item\label{item:DMC3} For each walker, a weight
\begin{align}
\omega_i=G_V(R,R',d\tau)=\e^{-\left(\frac{V(R)+V(R')}{2}-E_T\right)d\tau}\;,\label{eq:w}
\end{align}
is assigned. The estimator contributions (kinetic energy, potential energy, root mean square radii, densities, \ldots) are evaluated on the imaginary time propagated configurations, weighting the results according to~$\omega_i$.
\item\label{item:DMC4} The \emph{branching} process is applied to the propagated walkers. $\omega_i$ represents the probability for a configuration to multiply at the next step, according to the normalization. This process is realized by generating from each $w_i$ a number of walker copies
\begin{align}
n_i=[\omega_i+\eta_i]\;,
\end{align}
where $\eta_i$ is a random number uniformly distributed in the interval $[0,1]$ and $[x]$ denotes the integer part of $x$. In such a way, depending on the potential $V(R)$ and the trial energy $E_T$, some configurations will disappear and some others will replicate, resulting in an evolving walker population, which is now made of $\widetilde{\mathcal N}_w=\sum_{i=1}^{\mathcal N_w}n_i$ walkers. A simple way to control the fluctuations of the walker population is to multiply the weight $\omega_i$ by a factor $\mathcal N_w/\widetilde{\mathcal N}_w$, thus adjusting the branching process at each time step. This solution is not efficient if the potential diverges: the corrections applied at run time could generate many copies from just a few good parent walkers, so that the population would be stabilized but not correctly represented. A better sampling technique is described in \S~\ref{subsec:Imp_Samp}.
\item\label{item:DMC5} Iterate from \ref{item:DMC2} to \ref{item:DMC4} as long as necessary until convergence is reached, i.e.~for a $\tau$ large enough to approach the infinite imaginary time limit of Eq.~(\ref{eq:tau_limit}). In this limit, the configurations $\{R\}$ are distributed according to the lowest energy state $\psi_0(R,\tau)$. We can therefore compute the ground state expectation values of observables that commute with the Hamiltonian
\begin{align}
\!\!\langle\mathcal O\rangle=\frac{\langle\psi_0|\mathcal O|\psi_0\rangle}{\langle\psi_0|\psi_0\rangle}
=\!\lim_{\tau\rightarrow\infty}\!\frac{\langle\psi_T|\mathcal O|\psi(\tau)\rangle}{\langle\psi_T|\psi(\tau)\rangle}=\!\lim_{\tau\rightarrow\infty}\int\!\!dR\frac{\langle\psi_T|\mathcal O|R\rangle\psi(R,\tau)}{\psi_T(R)\psi(R,\tau)}\;,\label{eq:mixed_int}
\end{align}
by means of
\begin{align}
\langle\mathcal O\rangle=\frac{\sum_{\{R\}}\langle R|\mathcal O|\psi_T\rangle}{\sum_{\{R\}}\langle R|\psi_T\rangle}
=\frac{\sum_{\{R\}}\mathcal O\psi_T(R)}{\sum_{\{R\}}\psi_T(R)}\;.\label{eq:mixed}
\end{align}
Statistical error bars on expectation values are then estimated by means of block averages and the analysis of auto-correlations on data blocks.
The direct calculation of the expectation value~(\ref{eq:mixed}) gives an exact result only when $\mathcal O$ is the Hamiltonian $H$ or commutes with $H$; otherwise only ``mixed'' matrix elements $\langle\mathcal O\rangle_m\ne \langle\mathcal O\rangle$ can be obtained. Among the different methods to calculate expectation values of operators that do not commute with $H$, the extrapolation method~\cite{Pieper:2008} is the most widely used. Following this method, one obtains a better approximation to the ``pure'' (exact) value by means of a linear extrapolation
\begin{align}
\langle\mathcal O\rangle_p\simeq2\,\frac{\langle\psi_0|\mathcal O|\psi_T\rangle}{\langle\psi_0|\psi_T\rangle}-\frac{\langle\psi_T|\mathcal O|\psi_T\rangle}{\langle\psi_T|\psi_T\rangle}=2\,\langle\mathcal O\rangle_m-\langle\mathcal O\rangle_v\;,\label{eq:pure1}
\end{align}
or, if the operator $\mathcal O$ is positive definite, by means of
\begin{align}
\langle\mathcal O\rangle_p&\simeq\frac{\left(\frac{\langle\psi_0|\mathcal O|\psi_T\rangle}{\langle\psi_0|\psi_T\rangle}\right)^2}{\frac{\langle\psi_T|\mathcal O|\psi_T\rangle}{\langle\psi_T|\psi_T\rangle}}=\frac{\langle\mathcal O\rangle_m^2}{\langle\mathcal O\rangle_v}\;,\label{eq:pure2}
\end{align}
where $\langle\mathcal O\rangle_v$ is the variational estimator. The accuracy of the extrapolation method is closely related to the trial wave function used in the variational calculation and to the accuracy of the DMC sampling technique.
\end{enumerate}
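The sketch announced above is reported here: a self-contained Python toy implementation of the unguided loop (diffusion, branching and a simple population-control recipe) for a single particle in the harmonic potential $V(x)=x^2/2$ with $\hbar=m=1$, whose exact ground state energy is $0.5$. It is meant only to make the algorithm concrete; it is not the production scheme used in nuclear QMC codes.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def dmc_harmonic(n_target=2000, dtau=0.01, n_steps=5000):
    """Basic (unguided) DMC for V(x) = x^2/2, hbar = m = 1.
    The growth estimator E_T converges towards E_0 = 0.5, up to
    time-step and population-control bias."""
    D = 0.5                                   # hbar^2 / (2m)
    x = rng.normal(size=n_target)             # initial walker distribution
    E_T = 0.6                                 # initial trial energy
    history = []
    for _ in range(n_steps):
        # diffusion step
        x_new = x + rng.normal(scale=np.sqrt(2*D*dtau), size=x.size)
        # branching weight with the symmetrized potential
        V_sym = 0.5 * (0.5*x**2 + 0.5*x_new**2)
        w = np.exp(-(V_sym - E_T) * dtau)
        # stochastic branching: int(w + eta) copies of each walker
        n_copies = (w + rng.uniform(size=w.size)).astype(int)
        x = np.repeat(x_new, n_copies)
        # simple population control through the trial energy
        E_T -= 0.1 * np.log(x.size / n_target)
        history.append(E_T)
    return np.mean(history[n_steps//2:])

print(dmc_harmonic())   # ~0.5
\end{verbatim}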
For a many-body system, if no constraint is imposed, $H$ has both symmetric and antisymmetric eigenstates with respect to particle exchange. It can be proven~\cite{Courant:1953} that the lowest energy solution, and hence the state projected by the imaginary time propagation, is always symmetric. Moreover, in the DMC algorithm the walker distribution is sampled through the wave function, which must be positive definite in the whole configuration space for the probabilistic interpretation to be applicable. The projection algorithm described above thus applies to Bosonic systems only. The extension to Fermionic systems is reported in \S~\ref{subsec:Sign}.
\subsection{Importance Sampling}
\label{subsec:Imp_Samp}
As discussed in the previous section, the basic version of the DMC algorithm is rather inefficient because the weight term of Eq.~(\ref{eq:w}) can suffer from very large fluctuations. Indeed, because the Brownian diffusive process ignores the shape of the potential, there is nothing that prevents two particles from moving very close to each other, even in the presence of a hard-core repulsive potential.
The \emph{importance sampling} technique~\cite{Mitas:1998,Kalos:2008,Lipparini:2008} mitigates this problem by using an appropriate importance function $\psi_I(R)$ (which is often, but not
necessarily, the same $\psi_T(R)$ used for the projection) to guide the diffusive process. The idea is to multiply Eq.~(\ref{eq:G}) by $\psi_I(R)$
\begin{align}
\psi_I(R)\psi(R,\tau+d\tau)=\int dR'\,G(R,R',d\tau)\frac{\psi_I(R)}{\psi_I(R')}\psi_I(R')\psi(R',\tau)\;,
\end{align}
and define a new propagator
\begin{align}
\widetilde G(R,R',d\tau)=G(R,R',d\tau)\frac{\psi_I(R)}{\psi_I(R')}\;,\label{eq:ratio}
\end{align}
and a new function
\begin{align}
f(R,\tau)=\psi_I(R)\psi(R,\tau)\;,
\end{align}
such that
\begin{align}
f(R,\tau+d\tau)=\int dR'\,\widetilde G(R,R',d\tau)\,f(R',\tau)\;.\label{eq:f}
\end{align}
$f(R,\tau)$ represents the new probability density from which to sample the walker distribution. If $\psi_I(R)$ is suitably chosen, for example to be small in the region where the potential presents a hard core, then $f(R,\tau)$ contains more information than the original $\psi(R,\tau)$, being correlated with the potential by construction; there is thus an improvement in the quality of the DMC sampling and a reduction of the fluctuations of the weight~(\ref{eq:w}).
By inserting the new propagator $\widetilde G(R,R',d\tau)$ in Eq.~(\ref{eq:psi_propag}) and expanding near $R'$, it is possible to show (see for instance Ref.~\cite{Lipparini:2008}) that the integration gives an additional drift term in $G_0(R,R',d\tau)$
\begin{align}
G_0(R,R',d\tau)\rightarrow\widetilde G_0(R,R',d\tau)=\left(\frac{1}{4\pi Dd\tau}\right)^{\frac{3\mathcal N}{2}}\e^{-\frac{(R-R'-v_d(R') D d\tau)^2}{4Dd\tau}}\;,\label{eq:G_IS}
\end{align}
where
\begin{align}
\bm v_d(R)=2\frac{\bm\nabla\psi_I(R)}{\psi_I(R)}\;,\label{eq:drift}
\end{align}
is a $3\mathcal N$ dimensional \emph{drift velocity} that guides the free diffusion. The branching factor of Eq.~(\ref{eq:w}) becomes
\begin{align}
\omega_i\rightarrow\widetilde\omega_i=\e^{-\left(\frac{E_L(R)+E_L(R')}{2}-E_T\right)d\tau}\;,\label{eq:w_IS}
\end{align}
where the potential energy is replaced by the \emph{local energy}
\begin{align}
E_L(R)=\frac{H\psi_I(R)}{\psi_I(R)}\;.\label{eq:E_L}
\end{align}
If the importance function is sufficiently accurate, the local energy remains close to the ground-state energy throughout the imaginary time evolution and the population of walkers is not subject to large fluctuations.
Going back to the imaginary time dependent Schr\"odinger equation, it is possible to show (details can be found in Ref.~\cite{Lipparini:2008}) that by multiplying Eq.~(\ref{eq:psi_R_tau}) by $\psi_I(R)$ we obtain a non-homogeneous Fokker-Planck equation for $f(R,\tau)$
\begin{align}
\!\!-\frac{\partial}{\partial\tau}f(R,\tau)=&-\frac{\hbar^2}{2m}\nabla^2 f(R,\tau)+\frac{\hbar^2}{2m}\bm\nabla\!\cdot\!\Bigl[\bm v_d(R)f(R,\tau)\Bigr]+E_L(R)f(R,\tau)\;,
\end{align}
for which the corresponding Green's function is given by the two terms of Eqs.~(\ref{eq:G_IS}) and (\ref{eq:w_IS}).
The DMC algorithm including the importance sampling procedure is still the one described in \S~\ref{sec:DMC}, where now the coordinates of the walkers are diffused by the Brownian motion and guided by the drift velocity
\begin{align}
R=R'+\xi+\bm v_d D d\tau\;,
\end{align}
and the branching process is given by the local energy through the weight (\ref{eq:w_IS}). The expectation values are still calculated by means of Eq.~(\ref{eq:mixed}) but now the sampling function $\psi(R,\tau)$ is replaced by $f(R,\tau)$.
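To make the guided move explicit, the following toy Python sketch applies Eqs.~(\ref{eq:G_IS}), (\ref{eq:drift}), (\ref{eq:w_IS}) and (\ref{eq:E_L}) to a single particle in the harmonic potential $V(x)=x^2/2$ with the Gaussian importance function $\psi_I(x)=\e^{-\alpha x^2/2}$ ($\hbar=m=1$); both the potential and $\psi_I$ are illustrative choices. For $\alpha=1$ the importance function is exact, the local energy is constantly $0.5$ and the branching weights carry no fluctuations, which is precisely the variance-reduction mechanism described above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

ALPHA = 1.0    # importance-function parameter, psi_I(x) = exp(-ALPHA x^2 / 2)
D = 0.5        # hbar^2 / (2m)

def drift(x):
    """v_d = 2 psi_I'(x) / psi_I(x) = -2 * ALPHA * x."""
    return -2.0 * ALPHA * x

def local_energy(x):
    """E_L = (H psi_I)/psi_I for the Gaussian importance function."""
    return 0.5*ALPHA - 0.5*ALPHA**2*x**2 + 0.5*x**2

def guided_step(x, dtau, E_T=0.5):
    """Drifted Gaussian move plus local-energy branching weight."""
    x_new = (x + drift(x)*D*dtau
             + rng.normal(scale=np.sqrt(2*D*dtau), size=x.size))
    w = np.exp(-(0.5*(local_energy(x) + local_energy(x_new)) - E_T) * dtau)
    return x_new, w

x = rng.normal(size=1000)
x, w = guided_step(x, dtau=0.01)
print(w.mean(), w.std())   # for ALPHA = 1: E_L = 0.5 exactly, zero spread
\end{verbatim}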
\subsection{Sign Problem}
\label{subsec:Sign}
As discussed in \S~\ref{subsec:Imp_Samp}, the standard DMC algorithm applies to positive definite wave functions and the result of the imaginary time projection is a nodeless function. The ground state of a Fermionic system is instead described by an antisymmetric wave function, which cannot be interpreted as a probability distribution. Moreover, the search for an antisymmetric ground state $|\psi_0^A\rangle$ corresponds to the search for an excited state of the many-body Hamiltonian, with eigenvalue
\begin{align}
E_0^A>E_0^S\;,\label{eq:E_0^A}
\end{align}
where $E_0^S$ and $E_0^A$ are the ground state energies of the Bosonic and the Fermionic system.
If no constraint is imposed, the Hamiltonian has both eigenstates that are symmetric and antisymmetric with respect to particle exchange. We can thus rewrite Eq.~(\ref{eq:psi_0}) by separating Bosonic and Fermionic components:
\begin{align}
|\psi(\tau)\rangle=\sum_{n=0}^\infty c_n^S \e^{-(E_n^S-E_T)\tau}|\varphi_n^S\rangle+\sum_{n=0}^\infty c_n^A \e^{-(E_n^A-E_T)\tau}|\varphi_n^A\rangle \;.
\end{align}
If we want to naively apply the standard DMC algorithm to project out the Fermionic ground state, we need to propagate the trial wave function for long imaginary time taking $E_0^A$ as energy reference. If the Fermionic ground state is not degenerate, i.e. $E_n^A\ge E_{n-1}^A>E_0^A$, in the limit $\tau\rightarrow\infty$ we have
\begin{align}
\lim_{\tau\rightarrow\infty}|\psi(\tau)\rangle=\lim_{\tau\rightarrow\infty}\sum_n c_n^S \e^{-(E_n^S-E_0^A)\tau}|\varphi_n\rangle+c_0^A |\varphi_0^A\rangle \;,
\end{align}
where the Bosonic part, at least its $E_0^S$ term, diverges because of condition~(\ref{eq:E_0^A}). However, the exponentially growing component along the symmetric ground state does not affect the expectation value of the Hamiltonian. Indeed, in the evaluation of the integral~(\ref{eq:mixed_int}) on an antisymmetric trial wave function $\psi_T^A(R)$, the symmetric components of $\psi(R,\tau)$ vanish by orthogonality, and in the limit of infinite imaginary time the energy converges to the exact eigenvalue $E_0^A$. However, the orthogonality cancellation of the Bosonic terms does not apply to the calculation of the DMC variance of the antisymmetric energy expectation value $\langle E_0^A\rangle$
\begin{align}
\sigma^2_{E_0^A}=\left|\langle H\rangle_{\psi_T^A}^2-\langle H^2\rangle_{\psi_T^A}\right| \;,
\end{align}
where the second term diverges. We are thus left with an exact eigenvalue affected by an exponentially growing statistical error: the signal-to-noise ratio decays exponentially. This is the well known \emph{sign problem} and it represents the main obstacle to the straightforward application of the DMC algorithm to Fermion systems.
In order to extend the DMC method to systems described by antisymmetric wave functions, it is possible to artificially split the configuration space in regions where the trial wave function does not change sign. The multi dimensional surface where the trial wave function vanishes, denoted as \emph{nodal surface}, can be used to constrain the diffusion of the walkers: whenever a walker crosses the nodal surface it is dropped from the calculation. In such a way only the configurations that diffuse according to region of the wave function with definite sign are taken into account. The problem reduces thus to a standard DMC in the subsets of the configuration space delimited by the nodal surface. This approximate algorithm is called \emph{fixed-node}~\cite{Ceperley:1991,Mitas:1998,Mitas:2006} and it can be proven that it always provides an upper bound to the true Fermionic ground state.
The sign problem appears for both real and complex antisymmetric wave functions. The latter is the case of nuclear Hamiltonians. As proposed by Zhang \emph{et al.}~\cite{Zhang:1995,Zhang:1997,Zhang:2003}, the \emph{constrained path} approximation can be used to deal with the sign problem for complex wave functions.
The general idea is to constrain the path of the walkers to regions where the real part of the overlap with the wave function is positive. If we consider a complex importance function $\psi_I(R)$, the drift term in Eq.~(\ref{eq:G_IS}) must be real in order to keep the coordinate space of the system real. A suitable choice for the drift velocity is thus:
\begin{align}
\bm v_d(R)=2\frac{\bm\nabla\re\left[\psi_I(R)\right]}{\re\left[\psi_I(R)\right]}\;.
\end{align}
Consistently, a way to eliminate the decay of the signal-to-noise ratio consists in requiring that the real part of the overlap of each walker with the importance function keeps the same sign
\begin{align}
\frac{\re\left[\psi_I(R)\right]}{\re\left[\psi_I(R')\right]}>0\;,
\end{align}
where $R$ and $R'$ denote the coordinates of the system after and before the diffusion of a time step. When this condition is violated, i.e. when the overlap between the importance function and the walker after a diffusive step changes sign, the walker is dropped. In this scheme, the ground state expectation value of an observable $\mathcal O$ (Eq.~(\ref{eq:mixed})) is given by
\begin{align}
\langle\mathcal O\rangle=\frac{\sum_{\{R\}}\mathcal O\re\left[\psi_T(R)\right]}{\sum_{\{R\}}\re\left[\psi_T(R)\right]}\;.
\end{align}
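A minimal Python sketch of this constraint, written for an arbitrary complex toy importance function (chosen purely for illustration), is the following:
\begin{verbatim}
import numpy as np

# Constrained-path test: keep a walker only if the real part of its overlap
# with the importance function does not change sign during the diffusive step.
def psi_I(x):                                    # arbitrary complex toy function
    return np.exp(-0.5 * x**2 + 0.3j * x)

rng = np.random.default_rng(1)
old = rng.normal(size=2000)                      # configurations R'
new = old + rng.normal(scale=0.1, size=2000)     # diffused configurations R

keep = psi_I(new).real / psi_I(old).real > 0.0   # constrained-path condition
print(f"kept {np.count_nonzero(keep)} of {old.size} walkers")
\end{verbatim}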
Another approach to deal with the complex sign problem is the \emph{fixed phase} approximation, originally introduced for systems whose Hamiltonian contains a magnetic field~\cite{Ortiz:1993}. Let us write a complex wave function as
\begin{align}
\psi(R)=\left|\psi(R)\right|\e^{i\phi(R)} \;,\label{eq:mod_phase}
\end{align}
where $\phi(R)$ is the phase of $\psi(R)$, and rewrite the drift velocity as
\begin{align}
\bm v_d(R)=2\frac{\bm\nabla\left|\psi_I(R)\right|}{\left|\psi_I(R)\right|}=2\re\left[\frac{\bm\nabla\psi_I(R)}{\psi_I(R)}\right]\;.
\end{align}
With this choice, the weight for the branching process becomes
\begin{align}
\widetilde\omega_i&=\exp\Bigg\{-\Bigg[\frac{1}{2}\Bigg(-\frac{\hbar^2}{2m}\frac{\nabla^2|\psi_I(R)|}{|\psi_I(R)|}+\frac{V\psi_I(R)}{\psi_I(R)}\nonumber\\[0.2em]
&\quad-\frac{\hbar^2}{2m}\frac{\nabla^2|\psi_I(R')|}{|\psi_I(R')|}+\frac{V\psi_I(R')}{\psi_I(R')}\Bigg)-E_T\Bigg]d\tau\Bigg\}
\times\frac{\left|\psi_I(R')\right|}{\left|\psi_I(R)\right|}\frac{\psi_I(R)}{\psi_I(R')}\;,\label{eq:w_PF}
\end{align}
which is the usual importance sampling factor as in Eq.~(\ref{eq:w_IS}) multiplied by an additional factor that corrects for the particular choice of the drift. Using Eq.~(\ref{eq:mod_phase}), the last term of the previous equation can be rewritten as
\begin{align}
\frac{\left|\psi_I(R')\right|}{\left|\psi_I(R)\right|}\frac{\psi_I(R)}{\psi_I(R')}=\e^{i[\phi_I(R)-\phi_I(R')]} \;.
\end{align}
The so-called ``fixed phase'' approximation is then realized by constraining the walkers to have the same phase as the importance function $\psi_I(R)$. It can be applied by keeping the real part of the last expression. In order to preserve the normalization, one has to consider an additional term in the Green's function due to the phase, which must be added to the weight:
\begin{align}
\exp\Bigg[-\frac{\hbar^2}{2m}\Bigl(\bm\nabla\phi(R)\Bigr)^2d\tau\Bigg]\;.
\end{align}
This factor can be included directly in $\widetilde\omega_i$ considering the following relation:
\begin{align}
\re\left[\frac{\nabla^2\psi_I(R)}{\psi_I(R)}\right]=\frac{\nabla^2\left|\psi_I(R)\right|}{\left|\psi_I(R)\right|}-\Bigl(\bm\nabla\phi(R)\Bigr)^2\;.
\end{align}
Thus, by keeping the real part of the kinetic energy in Eq.~(\ref{eq:w_PF}), the additional weight term given by the fixed phase approximation is automatically included. The calculation of expectation values is given now by
\begin{align}
\langle\mathcal O\rangle=\sum_{\{R\}}\re\left[\frac{\mathcal O\psi_T(R)}{\psi_T(R)}\right]\;,
\end{align}
i.e. by the evaluation of the real part of a local operator. This is of particular interest for the technical implementation of the DMC algorithm. As we will see in \S~\ref{subsec:Wave}, when dealing with Fermions the wave function can be written as a Slater determinant of single particle states. It can be shown (see Appendix~\ref{app:d_SD}) that the evaluation of local operators acting on Slater determinants can be efficiently implemented by means of the inverse matrix of the determinant. The fixed phase approximation thus allows one to deal with the Fermion sign problem and also provides a natural scheme to implement the DMC method. Moreover, the above derivation can be extended to operators other than the kinetic energy. For example, when dealing with nuclear Hamiltonians like~(\ref{eq:H_N}), spin and isospin expectation values can be evaluated by taking the real part of local spin and isospin operators calculated on the Slater determinant. This is actually the standard way to treat the spin-isospin dependent components of the nuclear Hamiltonian in the Auxiliary Field Diffusion Monte Carlo (see \S~\ref{sec:AFDMC}).
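The relation between the real part of the local kinetic energy, the modulus and the phase used above can be verified numerically. The following Python sketch uses finite differences on a one-dimensional toy complex wave function (an arbitrary choice, made only for this illustration):
\begin{verbatim}
import numpy as np

# Check of Re[psi''/psi] = |psi|''/|psi| - (phi')^2 for psi = |psi| e^{i phi},
# using a 1D toy wave function and simple finite differences.
x = np.linspace(-3.0, 3.0, 4001)
h = x[1] - x[0]
phi = np.sin(x)                                   # arbitrary phase
psi = np.exp(-0.5 * x**2) * np.exp(1j * phi)

def d2(f):                                        # second derivative
    return np.gradient(np.gradient(f, h), h)

lhs = (d2(psi) / psi).real
rhs = d2(np.abs(psi)) / np.abs(psi) - np.gradient(phi, h)**2
print(np.max(np.abs(lhs - rhs)[100:-100]))        # ~0 up to discretization error
\end{verbatim}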
The constrained path and the fixed phase prescriptions are both approximations introduced to deal with the sign problem for complex wave functions. In principle they should yield similar results if the importance function is close enough to the real ground state of the system. Accurate $\psi_I(R)$ are thus needed. An additional important observation is that the DMC algorithm with the constrained path approximation does not necessarily provide an upper bound in the calculation of the energy~\cite{Carlson:1999,Wiringa:2002_CP}. Moreover, it has not been proven that the fixed phase approximation gives an upper bound to the real energy. Thus, the extension of the DMC algorithm to Fermion systems described by complex wave functions does not obey the Rayleigh-Ritz variational principle. Further details on the fixed node, constrained path and fixed phase approximations can be found in the original papers; an exhaustive discussion is reported in the Ph.D. thesis of Armani~\cite{Armani:2011_thesis}.
\subsection{Spin-isospin degrees of freedom}
\label{subsec:Spin}
If we want to study a nuclear many-body system described by the Hamiltonian~(\ref{eq:H_N}), we need to include also the spin-isospin degrees of freedom in the picture. In order to simplify the notation, in what follows $A$ will refer to the number of nucleons. Starting from \S~\ref{subsec:Wave} we will restore the convention $A=\mathcal N_N+\mathcal N_\Lambda$. The typical trial many-body wave function used in DMC calculations for nuclear systems takes the form~\cite{Pieper:2008,Wiringa:2002_CP}
\begin{align}
|\psi_T\rangle=\mathcal S\left[ \prod_{i<j}\left(1+U_{ij}+\sum_k U_{ijk}\right)\right]\prod_{i<j}f_c(r_{ij})|\Phi_A\rangle\;,\label{eq:GFMC_psiT}
\end{align}
where $f_c(r_{ij})$ is a central (mostly short-range repulsion) correlation, $U_{ij}$ are non-commuting two-body correlations induced by $v_{ij}$ (which typically take the same form as Eq.~(\ref{eq:v_ij_Op}) for $p=2,\ldots,6$) and $U_{ijk}$ is a simplified three-body correlation from $v_{ijk}$. $|\Phi_A\rangle$ is the one-body part of the trial wave function; it determines the quantum numbers of the states and it is fully antisymmetric. The central correlation is symmetric with respect to particle exchange and the symmetrization operator $\mathcal S$ acts on the operatorial correlation part of $|\psi_T\rangle$ in order to make the complete trial wave function antisymmetric. The best trial wave function from which (\ref{eq:GFMC_psiT}) is derived includes also spin-orbit and the full three-body correlations and is used in VMC calculations. See Refs.~\cite{Arriaga:1995,Wiringa:2002_CP}.
Given $A$ nucleons ($Z$ protons, $A-Z$ neutrons), the trial wave function is a complex vector in spin-isospin space with dimension $\mathcal N_S\times\mathcal N_T$, where $\mathcal N_S$ is the number of spin states and $\mathcal N_T$ the number of isospin states:
\begin{align}
\mathcal N_S=2^A\quad\quad\quad\mathcal N_T=\left(\begin{array}{c} A\\ Z \end{array}\right)=\frac{A!}{Z!(A-Z)!}\;.
\end{align}
For example, the wave function of an $A=3$ system has 8 spin components and, considering the physical systems for $Z=1$ $\left(^3\text{H}\right)$ or $Z=2$ $\left(^3\text{He}\right)$, 3 isospin states, thus a spin-isospin structure with 24 entries. Using the notation of Ref.~\cite{Pieper:2008}, we can write the spin part of an $A=3$ wave function as a complex 8-vector (ignore antisymmetrization)
\begin{align}
|\Phi_{A=3}\rangle=\left(
\begin{array}{c}
a_{\uparrow\uparrow\uparrow}\\
a_{\uparrow\uparrow\downarrow}\\
a_{\uparrow\downarrow\uparrow}\\
a_{\uparrow\downarrow\downarrow}\\
a_{\downarrow\uparrow\uparrow}\\
a_{\downarrow\uparrow\downarrow}\\
a_{\downarrow\downarrow\uparrow}\\
a_{\downarrow\downarrow\downarrow}
\end{array}
\right)\quad\quad\text{with}\quad a_{\uparrow\downarrow\uparrow}=\langle\uparrow\downarrow\uparrow|\Phi_{A=3}\rangle\;.\label{eq:wave_GFMC}
\end{align}
The potentials ($v_{ij}$, $v_{ijk}$) and correlations ($U_{ij}$, $U_{ijk}$) involve repeated operations on $|\psi_T\rangle$ but the many-body spin-isospin space is closed under the action of the operators contained in the Hamiltonian. As an example, consider the term $\bm\sigma_i\cdot\bm\sigma_j$:
\begin{align}
\bm\sigma_i\cdot\bm\sigma_j&=2\left(\sigma_i^+\sigma_j^-+\sigma_i^-\sigma_j^+\right)+\sigma_i^z\sigma_j^z\;,\nonumber\\[0.2em]
&=2\,\mathcal P_{ij}^\sigma-1\;,\nonumber\\[0.2em]
&=\left(\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & -1 & 2 & 0 \\
0 & 2 & -1 & 0 \\
0 & 0 & 0 & 1
\end{array}\right)
\quad\text{acting on}\quad
\left(\begin{array}{c}
\uparrow\uparrow \\
\uparrow\downarrow \\
\downarrow\uparrow \\
\downarrow\downarrow
\end{array}\right)\;.
\end{align}
The operator $\mathcal P_{ij}^\sigma$ exchanges the spins of particles $i$ and $j$, so the operator $\bm\sigma_i\cdot\bm\sigma_j$ does not mix different isospin components and acts on different, non-contiguous, 4-element blocks of $|\Phi_{A=3}\rangle$. For $i=2$ and $j=3$ we have for example:
\begin{align}
\bm\sigma_2\cdot\bm\sigma_3\,|\Phi_{A=3}\rangle=\left(
\begin{array}{c}
a_{\uparrow\uparrow\uparrow}\\
2a_{\uparrow\downarrow\uparrow}-a_{\uparrow\uparrow\downarrow}\\
2a_{\uparrow\uparrow\downarrow}-a_{\uparrow\downarrow\uparrow}\\
a_{\uparrow\downarrow\downarrow}\\
a_{\downarrow\uparrow\uparrow}\\
2a_{\downarrow\downarrow\uparrow}-a_{\downarrow\uparrow\downarrow}\\
2a_{\downarrow\uparrow\downarrow}-a_{\downarrow\downarrow\uparrow}\\
a_{\downarrow\downarrow\downarrow}
\end{array}
\right)\;.\label{eq:s2s3psi}
\end{align}
The action of pair operators on $|\psi_T\rangle$, which is the most computationally expensive part, thus reduces to a sparse matrix of (non-contiguous) $4\times4$ blocks in the $A$-body problem.
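The block structure of Eq.~(\ref{eq:s2s3psi}) can be reproduced with a few lines of Python, building $\bm\sigma_2\cdot\bm\sigma_3$ as a Kronecker product on the $2^3$-dimensional spin space (the amplitudes are random numbers, used only to check the action of the operator):
\begin{verbatim}
import numpy as np

# sigma_2 . sigma_3 on the 8-dimensional spin space of three particles.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def kron3(a, b, c):
    return np.kron(a, np.kron(b, c))

s2_dot_s3 = sum(kron3(I2, s, s) for s in (sx, sy, sz))

rng = np.random.default_rng(2)
phi = rng.normal(size=8) + 1j * rng.normal(size=8)   # amplitudes a_{s1 s2 s3}
out = s2_dot_s3 @ phi

# component |up down up> (index 2) becomes 2 a_{up up down} - a_{up down up}
print(np.allclose(out[2], 2.0 * phi[1] - phi[2]))    # True
\end{verbatim}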
In the Green Function Monte Carlo, which slightly differs from the DMC in the way the propagator is treated, each of the $2^A\frac{A!}{Z!(A-Z)!}$ spin-isospin configurations undergoes the imaginary time evolution of Eq.~(\ref{eq:G}). The propagation now acts on the components $a_\alpha$, where $\alpha$ is the spin-isospin index,
\begin{align}
a_\alpha(R,\tau+d\tau)=\sum_\beta\int dR'\,G_{\alpha\beta}(R,R',d\tau)\,a_\beta(R',\tau)\;,\label{eq:a_GFMC}
\end{align}
where the Green's function is a matrix function of $R$ and $R'$ in spin-isospin space, defined as
\begin{align}
G_{\alpha\beta}(R,R',d\tau)=\langle R,\alpha|\e^{-(H-E_T)d\tau}|R',\beta\rangle\;.\label{eq:G_GFMC}
\end{align}
Due to the factorial growth in the number of components of the wave function, GFMC cannot deal with systems having a large number of nucleons, like medium-heavy nuclei or nuclear matter.
Standard GFMC calculations are indeed limited up to 12 nucleons~\cite{Pieper:2005,Lusk:2010,Lovato:2013} or 16 neutrons~\cite{Gandolfi:2011}.
\section{Auxiliary Field Diffusion Monte Carlo}
\label{sec:AFDMC}
The AFDMC algorithm was originally introduced by Schmidt and Fantoni~\cite{Schmidt:1999} in order to deal in an efficient way with spin-dependent Hamiltonians. Many details on the AFDMC method can be found in Refs.~\cite{Sarsa:2003,Pederiva:2004,Gandolfi:2007_thesis,Gandolfi:2006,Gandolfi:2007,Gandolfi:2009,Armani:2011_thesis,Lovato:2012_thesis,Lipparini:2008}. The main idea is to move from the many particle wave function of the DMC or GFMC to a single particle wave function. In this representation, going back to the example of the previous section, the spin part of an $A=3$ wave function becomes a tensor product of 3 single particle spin states (ignore antisymmetrization):
\begin{align}
|\Phi_{A=3}\rangle&=
\left(\begin{array}{c}
a_{1\uparrow}\\
a_{1\downarrow}
\end{array}\right)_1\!\!\otimes\!
\left(\begin{array}{c}
a_{2\uparrow}\\
a_{2\downarrow}
\end{array}\right)_2\!\!\otimes\!
\left(\begin{array}{c}
a_{3\uparrow}\\
a_{3\downarrow}
\end{array}\right)_3\quad\text{with}\quad a_{k\uparrow}=\,_{_k\,}\!\langle\uparrow|\Phi_{A=3}\rangle\;.\label{eq:psi_SP}
\end{align}
Taking also into account the isospin degrees of freedom, each single particle state becomes a complex 4-vector and the total number of entries for $|\Phi_{A=3}\rangle$ is thus 12, half the number of the full DMC function of Eq.~(\ref{eq:wave_GFMC}). In the general case, the dimension of the multicomponent vector describing a system with $A$ nucleons scales as $4A$. In this picture, the computational cost for the evaluation of the wave function is thus drastically reduced compared to the DMC-GFMC method when the number of particles becomes large.
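The gain can be quantified with a short Python sketch that compares the size of the full spin-isospin vector with the number of entries of the product of single particle spinors, and builds the $A=3$ product state explicitly (the amplitudes are placeholders):
\begin{verbatim}
import numpy as np
from math import comb

# Size of the spin-isospin representation: full vector vs single particle spinors.
for A, Z in [(3, 1), (12, 6), (16, 8), (40, 20)]:
    full = 2**A * comb(A, Z)   # 2^A spin states times A!/(Z!(A-Z)!) isospin states
    print(f"A = {A:3d}   full vector: {full:12d}   AFDMC entries: {4 * A}")

# For A = 3 (spin only) the product state is a Kronecker product of the spinors
s1, s2, s3 = np.array([1.0, 0.0]), np.array([0.6, 0.8]), np.array([0.0, 1.0])
phi = np.kron(s1, np.kron(s2, s3))   # 8 components, ordered as in Eq. (wave_GFMC)
print(phi)
\end{verbatim}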
The problem of the single particle representation is that it is not closed with respect to the application of quadratic spin (isospin) operators. As done in the previous section (Eq.~(\ref{eq:s2s3psi})), consider the operator $\bm\sigma_2\cdot\bm\sigma_3=2\,\mathcal P_{23}^\sigma-1$ acting on $|\Phi_{A=3}\rangle$:
\begin{align}
\bm\sigma_2\cdot\bm\sigma_3\,|\Phi_{A=3}\rangle&=
2\left(\begin{array}{c}
a_{1\uparrow}\\
a_{1\downarrow}
\end{array}\right)_1\!\!\otimes\!
\left(\begin{array}{c}
a_{3\uparrow}\\
a_{3\downarrow}
\end{array}\right)_2\!\!\otimes\!
\left(\begin{array}{c}
a_{2\uparrow}\\
a_{2\downarrow}
\end{array}\right)_3\nonumber\\[0.2em]
&\phantom{=}-\left(\begin{array}{c}
a_{1\uparrow}\\
a_{1\downarrow}
\end{array}\right)_1\!\!\otimes\!
\left(\begin{array}{c}
a_{2\uparrow}\\
a_{2\downarrow}
\end{array}\right)_2\!\!\otimes\!
\left(\begin{array}{c}
a_{3\uparrow}\\
a_{3\downarrow}
\end{array}\right)_3\;.
\end{align}
There is no way to express the result as a single particle wave function of the form~(\ref{eq:psi_SP}). At each time step, the straightforward application of the DMC algorithm generates a sum of single particle wave functions. The number of these functions grows very quickly during the imaginary time evolution, destroying the gain in computational time obtained using a smaller multicomponent trial wave function.
In order to keep the single particle wave function representation and overcome this problem, the AFDMC makes use of the Hubbard-Stratonovich transformation
\begin{align}
\e^{-\frac{1}{2}\lambda\mathcal O^2 }=\frac{1}{\sqrt{2\pi}}\int\!dx \e^{-\frac{x^2}{2}+\sqrt{-\lambda}\,x\mathcal O}\;,\label{eq:HS}
\end{align}
to linearize the quadratic dependence on the spin-isospin operators by adding the integration over a new variable $x$ called \emph{auxiliary field}. It is indeed possible to show that the single particle wave function is closed with respect to the application of a propagator containing at most linear spin-isospin operators:
\begin{align}
\e^{-\mathcal O_j d\tau}|\Phi_A\rangle
&=\e^{-\mathcal O_j d\tau}\bigotimes_i\left(\begin{array}{c} a_{i\uparrow} \\ a_{i\downarrow} \end{array}\right)_i\;,\nonumber \\[0.2em]
&=\left(\begin{array}{c} a_{1\uparrow} \\ a_{1\downarrow} \end{array}\right)_1
\!\otimes\cdots\otimes\,\e^{-\mathcal O_j d\tau}\left(\begin{array}{c} a_{j\uparrow} \\ a_{j\downarrow} \end{array}\right)_j
\!\otimes\cdots\otimes \left(\begin{array}{c} a_{A\uparrow} \\ a_{A\downarrow} \end{array}\right)_A\;,\quad\quad\nonumber\\[0.2em]
&=\left(\begin{array}{c} a_{1\uparrow} \\ a_{1\downarrow} \end{array}\right)_1
\!\otimes\cdots\otimes \left(\begin{array}{c} \widetilde a_{j\uparrow} \\ \widetilde a_{j\downarrow} \end{array}\right)_j
\!\otimes\cdots\otimes \left(\begin{array}{c} a_{A\uparrow} \\ a_{A\downarrow} \end{array}\right)_A\;,
\end{align}
where, working on 2-component spinors, $\mathcal O_j$ can be a $2\times2$ spin or isospin matrix. If we are dealing with the full 4-component spinor, $\mathcal O_j$ can be an extended $4\times4$ spin, isospin or isospin$\,\otimes\,$spin matrix. To get this result we have used the fact that the operator $\mathcal O_j$ is the representation in the $A$-body tensor product space of a one-body operator:
\begin{align}
\mathcal O_j\equiv\mathbb{I}_1\otimes\cdots\otimes\mathcal O_j\otimes\cdots\otimes\mathbb{I}_A\;.\label{eq:O_j}
\end{align}
Limiting the study to quadratic spin-isospin operators and making use of the Hubbard-Stratonovich transformation, it is thus possible to keep the single particle wave function representation throughout the imaginary time evolution. This results in a reduced computational time for the propagation of the wave function compared to GFMC, which allows us to simulate larger systems, from medium-heavy nuclei to infinite matter. In the following we will see in detail how the AFDMC works for the Argonne V6 like potentials (\S~\ref{subsec:Prop_AV6}), and how it is possible to include also spin-orbit (\S~\ref{subsec:Prop_LS}) and three-body (\S~\ref{subsec:Prop_TNI}) terms for neutron systems. Finally the extension of AFDMC to hypernuclear systems (\S~\ref{subsec:Prop_YN}) is presented.
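Before moving on, note that the Hubbard-Stratonovich identity of Eq.~(\ref{eq:HS}) can be checked numerically. The following Python sketch integrates the right hand side on a grid for a scalar operator $\mathcal O$ (matrices work in the same way through their eigenvalues; the values of $\lambda$ and $\mathcal O$ are arbitrary):
\begin{verbatim}
import numpy as np

# Check of exp(-lam O^2/2) = (2 pi)^(-1/2) Int dx exp(-x^2/2 + sqrt(-lam) x O)
x = np.linspace(-40.0, 40.0, 200001)
dx = x[1] - x[0]
for lam, O in [(-0.7, 1.3), (0.5, 0.9)]:        # lam < 0 and lam > 0 cases
    integrand = np.exp(-0.5 * x**2 + np.sqrt(complex(-lam)) * x * O)
    rhs = integrand.sum() * dx / np.sqrt(2.0 * np.pi)
    lhs = np.exp(-0.5 * lam * O**2)
    print(lhs, rhs)                             # rhs ~ lhs, imaginary part ~ 0
\end{verbatim}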
\subsection{Propagator for nucleons: \texorpdfstring{$\bm\sigma$}{$\sigma$}, \texorpdfstring{$\bm\sigma\cdot\bm\tau$}{$\sigma\cdot\tau$} and \texorpdfstring{$\bm\tau$}{$\tau$} terms}
\label{subsec:Prop_AV6}
Consider the first six components of the Argonne $NN$ potential of Eq.~(\ref{eq:v_ij_Op}). They can be conveniently rewritten as a sum of a spin-isospin independent and a spin-isospin dependent term
\begin{align}
V_{NN}=\sum_{i<j}\sum_{p=1,6}v_p(r_{ij})\,\mathcal O_{ij}^{\,p}=V_{NN}^{SI}+V_{NN}^{SD} \;,
\end{align}
where
\begin{align}
V_{NN}^{SI}&=\sum_{i<j}v_1(r_{ij})\;,\label{eq:V_NN_SI}
\end{align}
and
\begin{align}
V_{NN}^{SD}&=\frac{1}{2}\sum_{i\ne j}\sum_{\alpha\beta}\sigma_{i\alpha}\,A_{i\alpha,j\beta}^{[\sigma]}\,\sigma_{j\beta}
+\frac{1}{2}\sum_{i\ne j}\sum_{\alpha\beta\gamma}\tau_{i\gamma}\,\sigma_{i\alpha}\,A_{i\alpha,j\beta}^{[\sigma\tau]}\,\sigma_{j\beta}\,\tau_{j\gamma}\;\nonumber\\[0.2em]
&\,+\frac{1}{2}\sum_{i\ne j}\sum_\gamma\tau_{i\gamma}\,A_{ij}^{[\tau]}\,\tau_{j\gamma}\;.\label{eq:V_NN_SD}
\end{align}
The $3A\times 3A$ matrices $A^{[\sigma]}$, $A^{[\sigma\tau]}$ and the $A\times A$ matrix $A^{[\tau]}$ are real and symmetric under Cartesian component interchange $\alpha\leftrightarrow\beta$, under particle exchange $i\leftrightarrow j$ and fully symmetric with respect to the exchange $i\alpha\leftrightarrow j\beta$. They have zero diagonal (no self interaction) and contain proper combinations of the components of AV6 (Latin indices are used for the nucleons, Greek ones refer to the Cartesian components of the operators):
\begin{align}
A_{ij}^{[\tau]}&=v_2\left(r_{ij}\right)\;,\nonumber\\[0.5em]
A_{i\alpha,j\beta}^{[\sigma]}&=v_3\left(r_{ij}\right)\delta_{\alpha\beta}
+v_5\left(r_{ij}\right)\left(3\,\hat r_{ij}^\alpha\,\hat r_{ij}^\beta-\delta_{\alpha\beta}\right)\;,\label{eq:A_NN}\\[0.5em]
A_{i\alpha,j\beta}^{[\sigma\tau]}&=v_4\left(r_{ij}\right)\delta_{\alpha\beta}
+v_6\left(r_{ij}\right)\left(3\,\hat r_{ij}^\alpha\,\hat r_{ij}^\beta-\delta_{\alpha\beta}\right)\;,\nonumber
\end{align}
that come from the decomposition of the operators in Cartesian coordinates:
\begin{align}
\bm\sigma_i\cdot\bm\sigma_j&=\sum_{\alpha\beta}\sigma_{i\alpha}\,\sigma_{j\beta}\,\delta_{\alpha\beta}\;,\label{eq:sigma_dec}\\[0.2em]
S_{ij}&=\sum_{\alpha\beta}\left(3\,\sigma_{i\alpha}\,\hat r_{ij}^\alpha\,\sigma_{j\beta}\,\hat r_{ij}^\beta-\sigma_{i\alpha}\,\sigma_{j\beta}\,\delta_{\alpha\beta}\right)\;.\label{eq:Sij_dec}
\end{align}
Being real and symmetric, the $A$ matrices have real eigenvalues and real orthogonal eigenstates
\begin{align}
\sum_{j\beta} A_{i\alpha,j\beta}^{[\sigma]}\,\psi_{n,j\beta}^{[\sigma]}&=\lambda_n^{[\sigma]}\,\psi_{n,i\alpha}^{[\sigma]}\;,\nonumber\\[0.5em]
\sum_{j\beta} A_{i\alpha,j\beta}^{[\sigma\tau]}\,\psi_{n,j\beta}^{[\sigma\tau]}&=\lambda_n^{[\sigma\tau]}\,\psi_{n,i\alpha}^{[\sigma\tau]}\;,\\[0.5em]
\sum_{j} A_{ij}^{[\tau]}\,\psi_{n,j}^{[\tau]}&=\lambda_n^{[\tau]}\,\psi_{n,i}^{[\tau]}\;.\nonumber
\end{align}
Let us expand $\sigma_{i\alpha}$ on the complete set of eigenvectors $\psi_{n,i\alpha}^{[\sigma]}$ of the matrix $A_{i\alpha,j\beta}^{[\sigma]}$~:
\begin{align}
\sigma_{i\alpha}=\sum_{n}c_n^{[\sigma]}\,\psi_{n,i\alpha}^{[\sigma]}=\sum_{n}\left(\sum_{j\beta}\psi_{n,j\beta}^{[\sigma]}\,\sigma_{j\beta}\right)\psi_{n,i\alpha}^{[\sigma]}\;,
\label{eq:sigma_ia}
\end{align}
where we have used the orthogonality condition
\begin{align}
\sum_{i\alpha}\psi_{n,i\alpha}^{[\mathcal O]}\,\psi_{m,i\alpha}^{[\mathcal O]}=\delta_{nm}\;.
\end{align}
Using Eq.~(\ref{eq:sigma_ia}) we can recast the first term of Eq.~(\ref{eq:V_NN_SD}) in the following form:
\begin{align}
&\frac{1}{2}\sum_{i\alpha,j\beta}\sigma_{i\alpha}\,A_{i\alpha,j\beta}^{[\sigma]}\,\sigma_{j\beta}=\nonumber\\[0.2em]
&\,=\frac{1}{2}\sum_{i\alpha,j\beta}\left\{
\left[\sum_{n}\left(\sum_{k\gamma}\sigma_{k\gamma}\,\psi_{n,k\gamma}^{[\sigma]}\right)\psi_{n,i\alpha}^{[\sigma]}\right]
A_{i\alpha,j\beta}^{[\sigma]}
\left[\sum_{m}\left(\sum_{k\gamma}\sigma_{k\gamma}\,\psi_{m,k\gamma}^{[\sigma]}\right)\psi_{m,j\beta}^{[\sigma]}\right]
\right\}\;,\nonumber\\[0.2em]
&\,=\frac{1}{2}\sum_{i\alpha}\left\{
\left[\sum_{n}\left(\sum_{k\gamma}\sigma_{k\gamma}\,\psi_{n,k\gamma}^{[\sigma]}\right)\psi_{n,i\alpha}^{[\sigma]}\right]
\left[\sum_{m}\lambda_{m}^{[\sigma]}\left(\sum_{k\gamma}\sigma_{k\gamma}\,\psi_{m,k\gamma}^{[\sigma]}\right)\psi_{m,i\alpha}^{[\sigma]}\right]
\right\}\;,\nonumber\\[0.2em]
&\,=\frac{1}{2}\sum_{n}\left(\sum_{k\gamma}\sigma_{k\gamma}\,\psi_{n,k\gamma}^{[\sigma]}\right)^2\!\lambda_n^{[\sigma]}\;.
\end{align}
A similar derivation can be written for the terms $\tau_{i\gamma}\,\sigma_{i\alpha}$ and $\tau_{i\gamma}$, and we can define a new set of operators expressed in terms of the eigenvectors of the matrices~$A$:
\begin{align}
\mathcal O_n^{[\sigma]}&=\sum_{j\beta}\sigma_{j\beta}\,\psi_{n,j\beta}^{[\sigma]}\;,\nonumber\\[0.5em]
\mathcal O_{n,\alpha}^{[\sigma\tau]}&=\sum_{j\beta}\tau_{j\alpha}\,\sigma_{j\beta}\,\psi_{n,j\beta}^{[\sigma\tau]}\;,\label{eq:O_n}\\[0.5em]
\mathcal O_{n,\alpha}^{[\tau]}&=\sum_{j}\tau_{j\alpha}\,\psi_{n,j}^{[\tau]}\;.\nonumber
\end{align}
The spin dependent part of the $NN$ interaction can be thus expressed as follows:
\begin{align}
\!\!\!\!V_{NN}^{SD}=\frac{1}{2}\sum_{n=1}^{3A}\lambda_n^{[\sigma]}\!\left(\mathcal O_n^{[\sigma]}\right)^2
\!+\frac{1}{2}\sum_{n=1}^{3A}\sum_{\alpha=1}^3\lambda_n^{[\sigma\tau]}\!\left(\mathcal O_{n\alpha}^{[\sigma\tau]}\right)^2
\!+\frac{1}{2}\sum_{n=1}^A\sum_{\alpha=1}^3\lambda_n^{[\tau]}\!\left(\mathcal O_{n\alpha}^{[\tau]}\right)^2 \,.\!\label{eq:V_NN_SD_On}
\end{align}
$V_{NN}^{SD}$ is now written in a suitable form for the application of the Hubbard-Stratonovich transformation of Eq.~(\ref{eq:HS}). The propagator $\e^{-V_{NN}^{SD}\,d\tau}$ can be finally recast as:
\begin{align}
\e^{-\frac{1}{2}\sum_n\lambda_n(\mathcal O_n)^2 d\tau}&=\prod_n\e^{-\frac{1}{2}\lambda_n(\mathcal O_n)^2 d\tau}\,+\ord\left(d\tau^2\right)\;,\nonumber\\[0.2em]
&\simeq\prod_n\frac{1}{\sqrt{2\pi}}\int\!dx_n\e^{\frac{-x_n^2}{2}+\sqrt{-\lambda_n d\tau}\,x_n\mathcal O_n}\;,\label{eq:HS_applied}
\end{align}
where we have used the compact notation $\mathcal O_n$ to denote the $3A$ operators $\mathcal O_n^{[\sigma]}$, the $9A$ operators $\mathcal O_{n,\alpha}^{[\sigma\tau]}$ and the $3A$ operators $\mathcal O_{n,\alpha}^{[\tau]}$, including the summation over the coordinate index $\alpha$ where needed. The first step of the above equation comes from the fact that in general the operators $\mathcal O_n$ do not commute, so, due to Eq.~(\ref{eq:Trotter_2}), the equality holds only up to terms of order $d\tau^2$.
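The construction of the matrices $A$ and of their eigenvectors, which is the step that defines the operators $\mathcal O_n$, can be sketched in Python as follows (the radial functions are toy replacements for the Argonne ones and the positions are random; the final check is the spectral decomposition that turns $V_{NN}^{SD}$ into the sum of squares of Eq.~(\ref{eq:V_NN_SD_On})):
\begin{verbatim}
import numpy as np

# Build a 3A x 3A matrix with the structure of A^[sigma] in Eq. (A_NN) and
# verify its spectral decomposition A = sum_n lambda_n psi_n psi_n^T.
A_part = 5
rng = np.random.default_rng(3)
pos = rng.uniform(0.0, 4.0, size=(A_part, 3))
v3 = lambda r: np.exp(-r)            # toy radial functions (not the Argonne ones)
v5 = lambda r: np.exp(-2.0 * r)

Asig = np.zeros((3 * A_part, 3 * A_part))
for i in range(A_part):
    for j in range(A_part):
        if i == j:
            continue                 # zero diagonal: no self interaction
        rij = pos[i] - pos[j]
        r = np.linalg.norm(rij)
        rhat = rij / r
        Asig[3*i:3*i+3, 3*j:3*j+3] = (v3(r) * np.eye(3)
            + v5(r) * (3.0 * np.outer(rhat, rhat) - np.eye(3)))

lam, psi = np.linalg.eigh(Asig)      # real eigenvalues, orthogonal eigenvectors
rebuilt = sum(l * np.outer(v, v) for l, v in zip(lam, psi.T))
print(np.allclose(rebuilt, Asig))    # True
\end{verbatim}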
The standard DMC imaginary time propagation of Eq.~(\ref{eq:G}) needs to be extended to the spin-isospin space, as done in the GFMC algorithm via the projection of Eqs.~(\ref{eq:a_GFMC}) and (\ref{eq:G_GFMC}). In the AFDMC method, spin-isospin coordinates $\{S\}$ are added to the spacial coordinates $\{R\}$, defining a set of walkers which represents the single-particle wave function to be evolved in imaginary time:
\begin{align}
\psi(R,S,\tau+d\tau)=\int dR'dS'\,G(R,S,R',S',d\tau)\,\psi(R',S',\tau)\;.
\end{align}
Including the integration over the Hubbard-Stratonovich auxiliary fields, the Auxiliary Field DMC Green's function reads (recall Eqs.~(\ref{eq:psi_propag}) and (\ref{eq:G0-GV})):
\begin{align}
G(R,S,R',S',d\tau)&=\langle R,S|\e^{-(T+V-E_T)d\tau}|R',S'\rangle\;,\nonumber\\[0.2em]
&\simeq\left(\frac{1}{4\pi Dd\tau}\right)^{\frac{3\mathcal N}{2}}\!\e^{-\frac{(R-R')^2}{4Dd\tau}}
\e^{-\left(\frac{V_{NN}^{SI}(R)+V_{NN}^{SI}(R')}{2}-E_T\right)d\tau}\times\nonumber\\[0.2em]
&\quad\,\times\langle S|\prod_{n=1}^{15A}\frac{1}{\sqrt{2\pi}}\int\!dx_n\e^{\frac{-x_n^2}{2}+\sqrt{-\lambda_n d\tau}\,x_n\mathcal O_n}|S'\rangle\;.\label{eq:G_AFDMC}
\end{align}
Each operator $\mathcal O_n$ involves the sum over the particle index $j$, as shown in Eq.~(\ref{eq:O_n}). However, in the $A$-body tensor product space, each $j$ sub-operator is a one-body operator acting on a different single particle spin-isospin state, as in Eq.~(\ref{eq:O_j}). Therefore the $j$-dependent terms commute and we can represent the exponential of the sum over $j$ as a tensor product of exponentials. The result is that the propagation of a spin-isospin state $|S'\rangle$ turns into a product of independent rotations, one for each spin-isospin state. Considering just a spin wave function we have for example:
\begin{align}
&\e^{\sqrt{-\lambda_n d\tau}\,x_n\mathcal O_n^{[\sigma]}}|S'\rangle=\nonumber\\[0.2em]
&\,=\e^{\sqrt{-\lambda_n d\tau}\,x_n\sum_{\beta}\sigma_{1\beta}\,\psi_{n,1\beta}^{[\sigma]}}
\left(\begin{array}{c} a_{1\uparrow} \\ a_{1\downarrow} \end{array}\right)_1\!\otimes\cdots\otimes
\e^{\sqrt{-\lambda_n d\tau}\,x_n\sum_{\beta}\sigma_{A\beta}\,\psi_{n,A\beta}^{[\sigma]}}
\left(\begin{array}{c} a_{A\uparrow} \\ a_{A\downarrow} \end{array}\right)_A\;,\nonumber\\[0.2em]
&\,=\left(\begin{array}{c} \widetilde a_{1\uparrow} \\ \widetilde a_{1\downarrow} \end{array}\right)_1\!\otimes\cdots\otimes
\left(\begin{array}{c} \widetilde a_{A\uparrow} \\ \widetilde a_{A\downarrow} \end{array}\right)_A\;.\label{eq:eO_S}
\end{align}
We can thus propagate spin-isospin dependent wave functions remaining inside the space of single particle states.
The new coefficients $\widetilde a_{j\uparrow}$ and $\widetilde a_{j\downarrow}$ are calculated at each time step for each $\mathcal O_n$ operator. For neutron systems, i.e. for two-component spinors for which only the operator $\mathcal O_n^{[\sigma]}$ is active, there exists an explicit expression for these coefficients. Consider the Landau relations
\begin{align}
\e^{i\,\vec b\cdot\vec\sigma}&=\cos(|\vec b|)+i\sin(|\vec b|)\frac{\vec b\cdot\vec\sigma}{|\vec b|}\;,\label{eq:Landau_1}\\[0.4em]
\e^{\vec b\cdot\vec\sigma}&=\cosh(|\vec b|)+\sinh(|\vec b|)\frac{\vec b\cdot\vec\sigma}{|\vec b|}\;,\label{eq:Landau_2}
\end{align}
and identify the $\vec b$ vector with
\begin{align}
\vec b=\sqrt{|\lambda_n| d\tau}\,x_n\vec\psi_{n,j}^{\,[\sigma]}\quad\quad\quad b_\beta=\sqrt{|\lambda_n| d\tau}\,x_n\psi_{n,j\beta}^{\,[\sigma]}\;.
\end{align}
The following expressions for the coefficients of the rotated spinors can then be written, distinguishing the case $\lambda_n<0$ (Eq.~(\ref{eq:a_lambda1})) from the case $\lambda_n>0$ (Eq.~(\ref{eq:a_lambda2})):
\begin{align}
&\begin{array}{l}
\widetilde a_{j\uparrow}\!=\!\!\Biggl[\cosh(|\vec b|)+\sinh(|\vec b|)\sgn\!\left(x_n\right)\frac{\psi_{n,jz}^{[\sigma]}}{|\vec \psi_{n,j}^{\,[\sigma]}|}\Biggr]a_{j\uparrow}\!
+\sinh(|\vec b|)\sgn\!\left(x_n\right)\!\!\Biggl[\frac{\psi_{n,jx}^{[\sigma]}-i\,\psi_{n,jy}^{[\sigma]}}{|\vec \psi_{n,j}^{\,[\sigma]}|}\Biggr]a_{j\downarrow}\;,\!\!\\[1.3em]
\widetilde a_{j\downarrow}\!=\!\!\Biggl[\cosh(|\vec b|)-\sinh(|\vec b|)\sgn\!\left(x_n\right)\frac{\psi_{n,jz}^{[\sigma]}}{|\vec \psi_{n,j}^{\,[\sigma]}|}\Biggr]a_{j\downarrow}\!
+\sinh(|\vec b|)\sgn\!\left(x_n\right)\!\!\Biggl[\frac{\psi_{n,jx}^{[\sigma]}+i\,\psi_{n,jy}^{[\sigma]}}{|\vec \psi_{n,j}^{\,[\sigma]}|}\Biggr]a_{j\uparrow}\;,\!\!
\end{array}\label{eq:a_lambda1}\\[0.3em]
&\begin{array}{l}
\widetilde a_{j\uparrow}\!=\!\!\Biggl[\cos(|\vec b|)+i\,\sin(|\vec b|)\sgn\!\left(x_n\right)\frac{\psi_{n,jz}^{[\sigma]}}{|\vec \psi_{n,j}^{\,[\sigma]}|}\Biggr]a_{j\uparrow}\!
+\sin(|\vec b|)\sgn\!\left(x_n\right)\!\!\Biggl[\frac{i\,\psi_{n,jx}^{[\sigma]}+\psi_{n,jy}^{[\sigma]}}{|\vec \psi_{n,j}^{\,[\sigma]}|}\Biggr]a_{j\downarrow}\;,\!\!\\[1.3em]
\widetilde a_{j\downarrow}\!=\!\!\Biggl[\cos(|\vec b|)+i\,\sin(|\vec b|)\sgn\!\left(x_n\right)\frac{\psi_{n,jz}^{[\sigma]}}{|\vec \psi_{n,j}^{\,[\sigma]}|}\Biggr]a_{j\downarrow}\!
+\sin(|\vec b|)\sgn\!\left(x_n\right)\!\!\Biggl[\frac{i\,\psi_{n,jx}^{[\sigma]}-\psi_{n,jy}^{[\sigma]}}{|\vec \psi_{n,j}^{\,[\sigma]}|}\Biggr]a_{j\uparrow}\;.\!\!
\end{array}\label{eq:a_lambda2}
\end{align}
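For real $\vec b$ (the case $\lambda_n<0$), the rotation of a two-component spinor through the Landau relation can be checked against the matrix exponential built from the eigendecomposition, as in the following Python sketch (the values of $\vec b$ and of the spinor are random toy numbers):
\begin{verbatim}
import numpy as np

# exp(b . sigma) |s> via the Landau relation vs the eigendecomposition (b real).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(4)
b = rng.normal(size=3)        # plays the role of sqrt(|lambda| dtau) x_n psi_n,j
s = rng.normal(size=2) + 1j * rng.normal(size=2)

bs = b[0] * sx + b[1] * sy + b[2] * sz
bn = np.linalg.norm(b)

rot_landau = (np.cosh(bn) * np.eye(2) + np.sinh(bn) * bs / bn) @ s

w, V = np.linalg.eigh(bs)     # b . sigma is Hermitian for real b
rot_eig = (V @ np.diag(np.exp(w)) @ V.conj().T) @ s

print(np.allclose(rot_landau, rot_eig))   # True
\end{verbatim}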
When we are dealing with the full 4-dimensional single particle spinors, the four coefficients $\widetilde a$ do not have an explicit expression. The exponential of the $\mathcal O_n$ operators acting on the spinors is calculated via a diagonalization procedure. Consider the general $4\times4$ rotation matrix $B_j$, with eigenvectors $\Psi_{m,j}\ne0$ and eigenvalues $\mu_{m,j}$:
\begin{align}
B_j\,\Psi_{m,j}=\mu_{m,j}\,\Psi_{m,j}\quad\quad m=1,\ldots,4\;.
\end{align}
Collecting the eigenvectors into the $4\times4$ matrix $\vec\Psi_j$ and the eigenvalues into the 4-dimensional vector $\vec\mu_j$, so that $\vec\Psi_j^{-1}B_j\,\vec\Psi_j=\text{diag}\left(\vec\mu_j\right)$, it is possible to write the action of $\e^{B_j}$ on a 4-dimensional single particle spinor $|S'\rangle_j$ as follows:
\begin{align}
\e^{B_j}|S'\rangle_j&=\vec\Psi_j\vec\Psi_j^{-1}\e^{B_j}\vec\Psi_j\vec\Psi_j^{-1}|S'\rangle_j\;,\nonumber\\[0.2em]
&=\vec\Psi_j\e^{\vec\Psi_j^{-1}B_j\,\vec\Psi_j}\vec\Psi_j^{-1}|S'\rangle_j\;,\nonumber\\[0.2em]
&=\vec\Psi_j\e^{\text{diag}\left(\vec\mu_j\right)}\vec\Psi_j^{-1}|S'\rangle_j\;,\nonumber\\[0.2em]
&=\vec\Psi_j\,\text{diag}\left(\e^{\vec\mu_j}\right)\vec\Psi_j^{-1}|S'\rangle_j\;.
\end{align}
Each component of the rotated spinor $|\widetilde S'\rangle_j$ is thus derived from the eigenvectors and eigenvalues of the rotation matrix $B_j$, which is built starting from the $\mathcal O_n$ operators. Moving from neutrons to nucleons, i.e. adding the isospin degrees of freedom to the system, the computational time spent to rotate each single particle spin-isospin state during the propagation is increased by the time needed to diagonalize the $4\times 4$ Hubbard-Stratonovich rotation matrices. However, as $A$ becomes large, the total time for the propagation of the wave function is dominated by the diagonalization of the potential matrices. Since the cost of this operation goes as the cube of the number of matrix rows (columns), the AFDMC computational time is proportional to $A^3$, which grows much more slowly than the $A!$ scaling of GFMC.
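A minimal Python sketch of the diagonalization procedure, with a random toy matrix in place of the actual Hubbard-Stratonovich rotation matrix, is the following:
\begin{verbatim}
import numpy as np
from math import factorial

# exp(B) |s> through exp(B) = Psi diag(exp(mu)) Psi^{-1} for a generic 4x4 matrix.
rng = np.random.default_rng(5)
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))   # toy rotation matrix
s = rng.normal(size=4) + 1j * rng.normal(size=4)             # 4-component spinor

mu, Psi = np.linalg.eig(B)
expB = Psi @ np.diag(np.exp(mu)) @ np.linalg.inv(Psi)
print(expB @ s)                                              # rotated spinor

# cross-check against a truncated power series of exp(B)
series = sum(np.linalg.matrix_power(B, k) / factorial(k) for k in range(40))
print(np.allclose(expB, series))                             # True
\end{verbatim}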
In addition to the diagonalization of the AV6 potential matrices and of the spinor rotation matrices, we have to deal with the evaluation of the integral over the auxiliary fields $x_n$. The easiest way, in the spirit of Monte Carlo, is to sample the auxiliary fields from the Gaussian of Eq.~(\ref{eq:G_AFDMC}), which is interpreted as a probability distribution. The sampled values are then used to determine the action of the operators on the spin-isospin part of the wave function as described above. The integral over all the spin-isospin rotations induced by the auxiliary fields eventually recovers the action of the quadratic spin-isospin operators on a trial wave function containing all the possible good spin-isospin states, like the GFMC one.
In this scheme, the integration over the auxiliary fields is performed jointly with the integration over the coordinates. This generally leads to a large variance, since the integral of Eq.~(\ref{eq:G_AFDMC}) should indeed be evaluated for each sampled position and not simply estimated ``on the fly''. A more refined algorithm, in which for each sampled configuration the integral over $x_n$ is calculated by sampling more than one auxiliary variable, has been tested. The energy values at convergence are the same for both approaches. However, in the latter case the variance is much reduced, although the computational time for each move is increased due to the iteration over the newly sampled auxiliary points.
As done in the DMC method (see \S~\ref{subsec:Imp_Samp}), we can introduce an importance function to guide the diffusion in the coordinate space also in the AFDMC algorithm. The drift term (\ref{eq:drift}) is added to the $R-R'$ Gaussian distribution of Eq.~(\ref{eq:G_AFDMC}) and the branching weight $\widetilde\omega_i$ is given by the local energy as in Eq.~(\ref{eq:w_IS}). The idea of importance sampling can also be applied to guide the rotation of the spin-isospin states in the Hubbard-Stratonovich transformation. This can be done by properly shifting the Gaussian over the auxiliary fields of Eq.~(\ref{eq:G_AFDMC}) by means of a drift term~$\bar x_n$:
\begin{align}
\e^{-\frac{x_n^2}{2}+\sqrt{-\lambda_n d\tau}\,x_n\mathcal O_n}=
\e^{-\frac{(x_n-\bar x_n)^2}{2}}\e^{\sqrt{-\lambda_n d\tau}\,x_n\mathcal O_n}\e^{-\bar x_n\left(x_n-\frac{\bar x_n}{2}\right)} \;,\label{eq:HS_impsamp}
\end{align}
where
\begin{align}
\bar x_n=\re\left[\sqrt{-\lambda_n d\tau}\langle\mathcal O_n\rangle_m\right]\;,
\end{align}
and $\langle\mathcal O_n\rangle_m$ is the mixed expectation value of $\mathcal O_n$ (Eq.~(\ref{eq:mixed})) calculated on the old spin-isospin configurations. The mixed estimator is introduced in order to guide the rotations by maximizing the overlap between the walker and the trial function, which is not generally peaked around $x_n=0$.
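The effect of the shift is illustrated by the following Python sketch, which checks that sampling the auxiliary field from the shifted Gaussian and including the counter term of Eq.~(\ref{eq:HS_impsamp}) leaves the averages unchanged (the drift value and the test function are arbitrary):
\begin{verbatim}
import numpy as np

# Shifted sampling of one auxiliary field with the counter-term weight
# exp(-xbar (x - xbar/2)): reweighted averages match the unshifted ones.
rng = np.random.default_rng(6)
xbar = 0.8                                  # toy drift of the auxiliary field
h = lambda x: np.exp(0.3 * x)               # arbitrary test observable

x0 = rng.normal(size=2_000_000)             # sampled from exp(-x^2/2)
x1 = rng.normal(loc=xbar, size=2_000_000)   # sampled from exp(-(x-xbar)^2/2)
w = np.exp(-xbar * (x1 - 0.5 * xbar))       # counter-term weight

print(h(x0).mean(), (w * h(x1)).mean())     # equal up to Monte Carlo noise
\end{verbatim}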
The last factor of Eq.~(\ref{eq:HS_impsamp}) can be interpreted as an additional weight term that has to be included in the total weight. By combining diffusion, rotation and all the additional factors we can derive two different algorithms.
\begin{itemize}
\item[\emph{v1}]\hypertarget{method:PsiT}{} In the first one, the ratio between the importance functions in the new and old configurations (see Eq.~(\ref{eq:ratio})) is kept explicit. However, the drifted Gaussian $\widetilde G_0(R,R',d\tau)$ of Eq.~(\ref{eq:G_IS}) is used for the diffusion in the coordinate space and the shifted Gaussian of Eq.~(\ref{eq:HS_impsamp}) for the sampling of the auxiliary fields. The weight for the branching process $\omega_i$ defined in Eq.~(\ref{eq:w}) then takes an overall factor
\begin{align}
\frac{\langle\psi_I|RS\rangle}{\langle\psi_I|R'S'\rangle}
\e^{-\frac{d(R')\left[d(R')+2(R-R')\right]}{4Dd\tau}}
\prod_n\e^{-\bar x_n\left(x_n-\frac{\bar x_n}{2}\right)}\;,
\end{align}
due to the counter terms coming from the coordinate drift $d(R)=\bm v_d(R)D d\tau$ added in the original $G_0(R,R',d\tau)$ and from the auxiliary field shift~$\bar x_n$.
\item[\emph{v2}]\hypertarget{method:Elocal}{} The second algorithm corresponds to the local energy scheme described in \S~\ref{subsec:Imp_Samp}. Again the coordinates are diffused via the drifted Gaussian $\widetilde G_0(R,R',d\tau)$ of Eq.~(\ref{eq:G_IS}) and the auxiliary fields are sampled from the shifted Gaussian of Eq.~(\ref{eq:HS_impsamp}). The branching weight $\widetilde\omega_i$ is instead given by the local energy as in Eq.~(\ref{eq:w_IS}). The counter terms related to $\bar x_n$ are automatically included in the weight because the local energy $E_L(R,S)=\frac{H\psi_I(R,S)}{\psi_I(R,S)}$ now takes contributions from all the spin-isospin operators of the full potential $V_{NN}$. Actually, the term $\e^{-\bar x_n x_n}$ vanishes during the auxiliary field integration because $x_n$ can take positive and negative values. The term $\frac{\bar x_n^2}{2}$ is nothing but the $-\frac{1}{2}\lambda_n \langle\mathcal O_n\rangle_m^2 d\tau$ contribution already included in the weight via $E_L(R,S)$.
\end{itemize}
Given the same choice for the drift term, which depends, for example, on the constraint applied to deal with the sign problem, the two algorithms are equivalent and should sample the same Green's function.
In both versions, the steps that constitute the AFDMC algorithm are almost the same as in the DMC case, reported in \S~\ref{sec:DMC}. The starting point is the initial distribution of walkers, step~\ref{item:DMC1}. In step~\ref{item:DMC2} the diffusion of the coordinates is performed including the drift factor. Now also the spin-isospin degrees of freedom are propagated, by means of the Hubbard-Stratonovich rotations and the integral over the auxiliary fields. As in step~\ref{item:DMC3}, a weight is assigned to each walker, choosing one of the two equivalent solutions proposed above (explicit $\psi_I$~ratio or local energy). Both propagation and weight depend on the prescription adopted in order to keep the sign problem under control. Usually the fixed phase approximation (see \S~\ref{subsec:Sign}) is applied with the evaluation of local operators. The branching process then follows the DMC version described in step~\ref{item:DMC4} and the procedure is iterated in the same way, with the computation of expectation values at convergence.
\subsection{Propagator for neutrons: spin-orbit terms}
\label{subsec:Prop_LS}
In the previous section we have seen how to deal in an efficient way with a propagator containing the first six components of the Argonne two-body potential. The next terms in Eq.~(\ref{eq:v_ij_Op}) are the spin-orbit contributions for $p=7,8$. Although an attempt to treat the spin-orbit terms for nucleon systems has been reported by Armani in his Ph.D. thesis~\cite{Armani:2011_thesis} (together with a possible $\bm L_{ij}^2$ inclusion for $p=9$), at present the $\bm L_{ij}\cdot\bm S_{ij}$ operator is consistently employed in the AFDMC algorithm only for neutron systems. No other terms of the $NN$ interaction are included in the full propagator, neither for nucleons nor for neutrons, although a perturbative treatment of the remaining terms of AV18 is also possible~\cite{Pieper:2008}. The full derivation of the neutron spin-orbit propagator is reported in Ref.~\cite{Sarsa:2003}. Here we just want to sketch the idea behind the treatment of this non-local term, for which the corresponding Green's function is not trivial to derive.
Consider the spin-orbit potential for neutrons:
\begin{align}
v_{ij}^{LS}=v_{LS}(r_{ij})\,\bm L_{ij}\cdot\bm S_{ij}=v_{LS}(r_{ij})\left(\bm L\cdot\bm S\right)_{ij}\;,
\end{align}
where
\begin{align}
v_{LS}(r_{ij})=v_7(r_{ij})+v_8(r_{ij})\;,
\end{align}
and $\bm L_{ij}$ and $\bm S_{ij}$ are defined respectively by Eqs.~(\ref{eq:LS_ij1}) and (\ref{eq:LS_ij2}). As reported in Ref.~\cite{Pieper:1998}, one way to evaluate the propagator for $\bm L\cdot\bm S$ is to consider the expansion at first order in $d\tau$
\begin{align}
\e^{-v_{LS}(r_{ij})\left(\bm L\cdot\bm S\right)_{ij}d\tau}\simeq\left[1-v_{LS}(r_{ij})\left(\bm L\cdot\bm S\right)_{ij}d\tau\right]\;,\label{eq:e_SP}
\end{align}
acting on the free propagator $G_0(R,R',d\tau)$ of Eq.~(\ref{eq:G0}). The derivative terms of the above expression give
\begin{align}
\left(\bm\nabla_i-\bm\nabla_j\right)G_0(R,R',d\tau)=-\frac{1}{2Dd\tau}\left(\Delta\bm r_i-\Delta\bm r_j\right)G_0(R,R',d\tau)\;,
\end{align}
where $\Delta\bm r_i=\bm r_i-\bm r'_i$. We can then write:
\begin{align}
&\left(\bm L\cdot\bm S\right)_{ij}G_0(R,R',d\tau)=\nonumber\\[0.2em]
&\quad\quad=-\frac{1}{4i}\frac{1}{2Dd\tau}\left(\bm r_i-\bm r_j\right)\times\left(\Delta\bm r_i-\Delta\bm r_j\right)\cdot\left(\bm\sigma_i+\bm\sigma_j\right)G_0(R,R',d\tau)\;,\nonumber\\[0.4em]
&\quad\quad=-\frac{1}{4i}\frac{1}{2Dd\tau}\left(\bm\Sigma_{ij}\times\bm r_{ij}\right)\cdot\left(\Delta\bm r_i-\Delta\bm r_j\right)G_0(R,R',d\tau)\;,
\end{align}
where $\bm\Sigma_{ij}=\bm\sigma_i+\bm\sigma_j$ and $\bm r_{ij}=\bm r_i-\bm r_j$, and the relation $\bm a\cdot\left(\bm b\times\bm c\right)=\bm c\cdot\left(\bm a\times\bm b\right)$ has been used.
By inserting the last expression in Eq.~(\ref{eq:e_SP}) and re-exponentiating, including also the omitted sum over particle indices $i$ and $j$, the following propagator is obtained:
\begin{align}
\mathcal P_{LS}\simeq\e^{\sum_{i\ne j}\frac{v_{LS}(r_{ij})}{8iD}\left(\bm\Sigma_{ij}\times\bm r_{ij}\right)\cdot\left(\Delta\bm r_i-\Delta\bm r_j\right)}\;.
\end{align}
The effect of $\mathcal P_{LS}$ can be studied starting from the formal solution
\begin{align}
\psi(R,S,\tau+d\tau)\stackrel{LS}{\simeq}\int dR'dS'\, G_0(R,R',d\tau)\,\mathcal P_{LS}\,\psi(R',S',\tau)\;,
\end{align}
and expanding the propagator to the second order and the wave function $\psi(R',S',\tau)$ to the first order in $R-R'$. It is possible to show (see Ref.~\cite{Sarsa:2003} for the details) that the spin-orbit contribution of the propagator takes a simple form, but two- and three-body extra corrections appear. However, in the case of neutrons these additional terms contain quadratic spin operators and so they can be handled by the Hubbard-Stratonovich transformation and the rotations over new auxiliary fields.
\subsection{Propagator for neutrons: three-body terms}
\label{subsec:Prop_TNI}
As reported in \S~\ref{subsec:UIX-ILx}, the Illinois (Urbana IX) TNI can be written as a sum of four different terms:
\begin{align}
V_{ijk}=A_{2\pi}^P\,\mathcal O^{2\pi,P}_{ijk}+A_{2\pi}^{S}\,\mathcal O^{2\pi,S}_{ijk}+A_{3\pi}\,\mathcal O^{3\pi}_{ijk}+A_R\,\mathcal O^R_{ijk} \;.
\end{align}
For neutron systems, being $\bm\tau_i\cdot\bm\tau_j=1$, the operator structure simplifies in such a way that $V_{ijk}$ can be recast as a sum of two-body terms only~\cite{Sarsa:2003,Pederiva:2004}. We can therefore handle also the TNI in the AFDMC propagator by means of the Hubbard-Stratonovich transformation. Let us analyze how each term of the above relation can be conveniently rewritten for neutron systems.
\begin{itemize}
\item $\mathcal O^{2\pi,P}_{ijk}$~\emph{term}. The $P$-wave 2$\pi$ exchange term (and also the 3$\pi$ exchange one) of Eq.~(\ref{eq:V_NNN_2pi_P}) includes the OPE operator $X_{ij}$, defined in Eq.~(\ref{eq:X_ij}). $X_{ij}$ involves the $\bm\sigma_i\cdot\bm\sigma_j$ and the $S_{ij}$ operators that can be decomposed via Eqs.~(\ref{eq:sigma_dec}) and (\ref{eq:Sij_dec}) in order to define a $3A\times 3A$ matrix $X_{i\alpha,j\beta}$ analogous to the $A_{i\alpha,j\beta}^{[\sigma]}$ of Eq.~(\ref{eq:A_NN}), where $v_3(r_{ij})\!\rightarrow Y_\pi(r_{ij})$ and $v_5(r_{ij})\!\rightarrow T_\pi(r_{ij})$. The OPE operator can be thus expressed as
\begin{align}
X_{ij}=\sum_{\alpha\beta}\sigma_{i\alpha}\,X_{i\alpha,j\beta}\,\sigma_{j\beta}\;,
\end{align}
where the matrix $X_{i\alpha,j\beta}$ is real with zero diagonal and has the same symmetries of $A_{i\alpha,j\beta}^{[\sigma]}$. The commutator over the $\bm\tau_i$ operators vanishes, while the anticommutator gives simply a factor 2. Recalling that $X_{ij}=X_{ji}$ we can derive the following relation:
\begin{align}
\sum_{i<j<k}\mathcal O^{2\pi,P}_{ijk}
&=\frac{1}{3!}\sum_{i\ne j\ne k}\sum_{cyclic}2\phantom{\frac{1}{4}}\!\!\!\!\Bigl\{X_{ij},X_{jk}\Bigr\}\;,\nonumber\\[0.2em]
&=2\sum_{i\ne j\ne k}X_{ik}X_{kj}\;,\nonumber\\[-0.4em]
&=2\sum_{i\ne j}\sum_{\alpha\beta}\sigma_{i\alpha}\Biggl(\sum_{k\gamma} X_{i\alpha,k\gamma}\,X_{k\gamma,j\beta}\Biggr)\sigma_{j\beta}\;,\nonumber\\[0.2em]
&=2\sum_{i\ne j}\sum_{\alpha\beta}\sigma_{i\alpha}\,X^2_{i\alpha,j\beta}\,\sigma_{j\beta}\;.
\end{align}
\item $\mathcal O^{2\pi,S}_{ijk}$~\emph{term}. In the $S$-wave TPE term the isospin operators do not contribute and we are left with
\begin{align}
\sum_{i<j<k}\mathcal O_{ijk}^{2\pi,S}
&=\frac{1}{3!}\sum_{i\ne j\ne k}\sum_{cyclic}Z_\pi(r_{ij})Z_\pi(r_{jk})\,\bm\sigma_i\cdot\hat{\bm r}_{ij}\,\bm\sigma_k\cdot\hat{\bm r}_{kj}\;,\nonumber\\[0.2em]
&=\frac{1}{2}\sum_{i\ne j}\sum_{\alpha\beta}\sigma_{i\alpha}\Biggl[\sum_k Z_\pi(r_{ik})\,\hat r_{ik}^\alpha\,Z_\pi(r_{jk})\,\hat r_{jk}^\beta \Biggr]\sigma_{j\beta}\;,\nonumber\\[0.2em]
&=\frac{1}{2}\sum_{i\ne j}\sum_{\alpha\beta}\sigma_{i\alpha}\,Z_{i\alpha,j\beta}\,\sigma_{j\beta}\;.
\end{align}
\item $\mathcal O^{3\pi}_{ijk}$~\emph{term}. The 3$\pi$ exchange term, even with the isospin reduction for neutrons, still keeps a very complicated operator structure. As reported in Ref.~\cite{Pederiva:2004}, this factor can be conveniently written as the sum of a spin independent and a spin dependent component
\begin{align}
\sum_{i<j<k}\mathcal O_{ijk}^{3\pi}=V_c^{3\pi}+V_\sigma^{3\pi}\;,
\end{align}
with
\begin{align}
V_c^{3\pi}&=\frac{400}{18}\sum_{i\ne j}\sum_{\alpha\beta}X_{i\alpha,j\beta}^2\,X_{i\alpha,j\beta}\;,\\[0.5em]
V_\sigma^{3\pi}&=\frac{200}{54}\sum_{i\ne j}\sum_{\alpha\beta}\sigma_{i\alpha}\Biggl(\sum_{\gamma\delta\mu\nu}
X_{i\gamma,j\mu}^2\,X_{i\delta,j\nu}\,\varepsilon_{\alpha\gamma\delta}\,\varepsilon_{\beta\mu\nu}\Biggr)\sigma_{j\beta}\;,\nonumber\\[0.2em]
&=\frac{200}{54}\sum_{i\ne j}\sum_{\alpha\beta}\sigma_{i\alpha}\,W_{i\alpha,j\beta}\,\sigma_{j\beta}\;,
\end{align}
where $\varepsilon_{\alpha\beta\gamma}$ is the full antisymmetric tensor.
\item $\mathcal O^R_{ijk}$~\emph{term}. The last, spin independent, term can be recast as a two-body operator as follows
\begin{align}
\sum_{i<j<k}\mathcal O^R_{ijk}=G_0^R+\frac{1}{2}\sum_i\left(G_i^R\right)^2\;,
\end{align}
with
\begin{align}
G_0^R&=-\sum_{i<j}T_\pi^4(r_{ij})\;,\\[0.5em]
G_i^R&=\sum_{k\ne i}T_\pi^2(r_{ik})\;.
\end{align}
\end{itemize}
Finally, for neutron systems we can still write the spin dependent part of the $NN$ potential in the form of Eq.~(\ref{eq:V_NN_SD}), with the inclusion of TNI contributions:
\begin{align}
V_{NN}^{SD}&=\frac{1}{2}\sum_{i\ne j}\sum_{\alpha\beta}\sigma_{i\alpha}\,A_{i\alpha,j\beta}^{[\sigma]}\,\sigma_{j\beta} \;,
\end{align}
where now
\begin{align}
A_{i\alpha,j\beta}^{[\sigma]}\longrightarrow
A_{i\alpha,j\beta}^{[\sigma]}+2A_{2\pi}^P\,X^2_{i\alpha,j\beta}+\frac{1}{2}A_{2\pi}^{S}\,Z_{i\alpha,j\beta}+\frac{200}{54}\,A_{3\pi}\,W_{i\alpha,j\beta}\;.
\end{align}
The central term of the two-body potential of Eq.~(\ref{eq:V_NN_SI}) also keeps contributions from the TNI 3$\pi$ exchange term and from the phenomenological term, and it now reads:
\begin{align}
V_{NN}^{SI}\longrightarrow V_{NN}^{SI}+A_{3\pi}V_c^{3\pi}+A_R\left[G_0^R+\frac{1}{2}\sum_i\left(G_i^R\right)^2\right]\;.
\end{align}
\subsection{Wave functions}
\label{subsec:Wave}
In this section the trial wave functions used in AFDMC calculations for nuclear and hypernuclear systems will be presented, distinguishing between the case of finite and infinite systems. Restoring the convention of Chapter~\ref{chap:hamiltonians}, which is commonly used in the literature for hypernuclear systems, $A$ will refer to the total number of baryons, $\mathcal N_N$ nucleons plus $\mathcal N_\Lambda$ lambda particles. Latin indices will be used for the nucleons, Greek $\lambda$, $\mu$ and $\nu$ indices for the lambda particles. Finally, the first letters of the Greek alphabet ($\alpha,\beta,\gamma,\delta,\ldots$) used as indices will refer to the Cartesian components of the operators.
\subsubsection{Non strange finite and infinite systems}
\label{subsubsubsec:Wave_non_strange}
As already sketched in \S~\ref{sec:AFDMC}, the AFDMC wave function is written in the single particle state representation. The trial wave function for nuclear systems, which is used both as projection and importance function $|\psi_I\rangle\equiv|\psi_T\rangle$, is assumed to be of the form~\cite{Gandolfi:2007,Gandolfi:2009}
\begin{align}
\psi_T^N(R_N,S_N)=\prod_{i<j}f_c^{NN}(r_{ij})\,\Phi_N(R_N,S_N)\;,\label{eq:psi_N}
\end{align}
where $R_N=\{\bm r_1,\ldots,\bm r_{\mathcal N_N}\}$ are the Cartesian coordinates and $S_N=\{s_1,\ldots,s_{\mathcal N_N}\}$ the spin-isospin coordinates, represented as complex 4- or 2-component vectors:
\begin{align}
\text{nucleons:}&\quad s_i=\left(\begin{array}{c}
a_i \\ b_i \\ c_i \\ d_i
\end{array}\right)_i
\!=a_i|p\uparrow\rangle_i+b_i|p\downarrow\rangle_i+c_i|n\uparrow\rangle_i+d_i|n\downarrow\rangle_i \;,\\[0.5em]
\text{neutrons:}&\quad s_i=\left(\begin{array}{c}
a_i \\ b_i
\end{array}\right)_i
\!=a_i|n\uparrow\rangle_i+b_i|n\downarrow\rangle_i\;,
\end{align}
with $\left\{|p\uparrow\rangle,|p\downarrow\rangle,|n\uparrow\rangle,|n\downarrow\rangle\right\}$ the proton-neutron-up-down basis.
The function $f_c^{NN}(r)$ is a symmetric and spin independent Jastrow correlation function, solution of the Schr\"odinger-like equation for $f_c^{NN}(r<d)$
\begin{align}
-\frac{\hbar^2}{2\mu_{NN}}\nabla^2 f_c^{NN}(r)+\eta\,v_c^{NN}(r)f_c^{NN}(r)=\xi f_c^{NN}(r)\;,\label{eq:Jastrow}
\end{align}
where $v_c^{NN}(r)$ is the spin independent part of the two-body $NN$ interaction, $\mu_{NN}=m_N/2$ is the reduced mass of the nucleon pair, and $\eta$ and the healing distance $d$ are variational parameters. For distances $r\ge d$ we impose $f_c^{NN}(r)=1$. The role of the Jastrow function is to include the short-range correlations in the trial wave function. In the AFDMC algorithm the effect is simply a reduction of the overlap between pairs of particles, which reduces the energy variance. Since there is no change in the phase of the wave function, the $f_c^{NN}$ function does not influence the computed energy value in the long imaginary time projection.
The antisymmetric part $\Phi_N(R_N,S_N)$ of the trial wave function depends on the system to be studied (finite or infinite). As already seen, it is generally built starting from single particle states $\varphi_\epsilon^N(\bm r_i,s_i)$, where $\epsilon$ is the set of quantum numbers describing the state and $\bm r_i$, $s_i$ the single particle nucleon coordinates. The antisymmetry property is then realized by taking the Slater determinant of the $\varphi_\epsilon^N$:
\begin{align}
\Phi_N(R_N,S_N)=\mathcal A\Bigg[\prod_{i=1}^{\mathcal N_N}\varphi_\epsilon^N(\bm r_i,s_i)\Bigg]=\det\Bigl\{\varphi_\epsilon^N(\bm r_i,s_i)\Bigr\}\;.\label{eq:Phi_N}
\end{align}
For nuclei and neutron drops~\cite{Gandolfi:2007} a good set of quantum numbers is given by $\epsilon=\{n,j,m_j\}$. The single particle states are written as:
\begin{align}
\varphi_\epsilon^N(\bm r_i,s_i)=R_{n,j}^N(r_i)\Bigl[Y_{l,m_l}^N(\Omega)\,\chi_{s,m_s}^N(s_i)\Bigr]_{j,m_j}\;,
\end{align}
where $R_{n,j}^N$ is a radial function, $Y_{l,m_l}^N$ the spherical harmonics depending on the solid angle $\Omega$ and $\chi_{s,m_s}^N$ the spinors in the proton-neutron-up-down basis. The angular functions are coupled to the spinors using the Clebsch-Gordan coefficients to obtain orbitals in the $\{n,j,m_j\}$ basis according to the usual shell model classification of the nuclear single particle spectrum. For finite systems, in order to make the wave function translationally invariant, the single particle orbitals have to be defined with respect to the center of mass (CM) of the system. We have thus:
\begin{align}
\varphi_\epsilon^N(\bm r_i,s_i)\longrightarrow\varphi_\epsilon^N(\bm r_i-\bm r_{CM},s_i)\quad\quad\text{with}\quad \bm r_{CM}=\frac{1}{\mathcal N_N}\sum_{i=1}^{\mathcal N_N} \bm r_i\;.
\end{align}
In order to deal with the shifted coordinates, we need to correct all the first and second derivatives of the trial wave function with respect to $\bm r_i$. The derivation of such corrections is reported in Appendix~\ref{app:Wave}. The choice of the radial functions $R_{n,j}^N$ depends on the system studied and, typically, solutions of the self-consistent Hartree-Fock problem with Skyrme interactions are adopted. For nuclei the Skyrme effective interactions of Ref.~\cite{Bai:1997} are commonly used. For neutron drops, the Skyrme SKM force of Ref.~\cite{Pethick:1995} has been considered.
An additional aspect to take care of when dealing with finite systems is the symmetry of the wave function. Because the AFDMC projects out the lowest energy state not orthogonal to the starting trial wave function, it is possible to study a state with a given symmetry by imposing on the trial wave function the total angular momentum $J$ experimentally observed. This can be achieved by taking a sum over a different set of determinants
\begin{align}
\det\Bigl\{\varphi_\epsilon^N(\bm r_i,s_i)\Bigr\}\longrightarrow\left[\sum_\kappa c_\kappa\,\text{det}_\kappa\Bigl\{\varphi_{\epsilon_\kappa}^N(\bm r_i,s_i)\Bigr\}\right]_{J,M_J}\;,
\end{align}
where the $c_\kappa$ coefficients are determined in order to have the eigenstate of total angular momentum $J=j_1+\ldots+j_{\mathcal N_N}$.
For nuclear and neutron matter~\cite{Gandolfi:2009}, the antisymmetric part of the wave function is given by the ground state of the Fermi gas, built from a set of plane waves. The infinite uniform system at a given density is simulated with $\mathcal N_N$ nucleons in a cubic box of volume $L^3$ replicated throughout space by means of periodic boundary conditions (PBC):
\begin{align}
\varphi_\epsilon^N(\bm r_1+L\hat{\bm r},\bm r_2,\ldots,s_i)=\varphi_\epsilon^N(\bm r_1,\bm r_2,\ldots,s_i)\;.\label{eq:PBC}
\end{align}
Working in a discrete space, the momentum vectors are quantized and can be expressed as
\begin{align}
\bm k_\epsilon=\frac{2\pi}{L}\left(n_x,n_y,n_z\right)_\epsilon\;,\label{eq:k_vec}
\end{align}
where $\epsilon$ labels the quantum state and $n_x$, $n_y$ and $n_z$ are integer numbers labelling the momentum shell. The single particle states are then given by
\begin{align}
\varphi_\epsilon^N(\bm r_i,s_i)=\e^{-i\bm k_\epsilon\cdot\bm r_i}\chi_{s,m_s}^N(s_i)\;.
\end{align}
In order to meet the requirement of homogeneity and isotropy, the shell structure of the system must be closed. The total number of Fermions in a particular spin-isospin configuration that can be correctly simulated in a box corresponds to the closure of one of the $\left(n_x,n_y,n_z\right)_\epsilon$ shells. The list of the first closure numbers is
\begin{align}
\mathcal N_c=1,7,19,27,33,57,81,93\ldots\;.\label{eq:n_c}
\end{align}
Given a closure number $\mathcal N_c^N$, we can thus simulate an infinite system by means of a periodic box with $2\,\mathcal N_c^N$ neutrons (spin up and down) or $4\,\mathcal N_c^N$ nucleons (spin and isospin up and down). Although the use of PBC should reduce the finite-size effects, in general there are still sizable errors in the kinetic energy arising from shell effects in filling the plane wave orbitals, even at the closed shell filling in momentum space. However, in the thermodynamical limit $\mathcal N_c^N\rightarrow\infty$, exact results should be obtained. For symmetric nuclear matter~(SNM), 28, 76 and 108 nucleons have been used~\cite{Gandolfi:2007_SNM}, resulting in comparable results for the energy per particle at a given density. In the case of pure neutron matter~(PNM), finite-size effects are much more evident~\cite{Gandolfi:2009} and the thermodynamical limit is not reached monotonically. Typically, PNM is simulated using 66 neutrons, which was found to give the kinetic energy closest to that of the Fermi gas among the values of $\mathcal N_c^N$ corresponding to feasible computational times.
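The closure numbers of Eq.~(\ref{eq:n_c}) can be generated by filling the quantized momentum vectors of Eq.~(\ref{eq:k_vec}) shell by shell, as in the following short Python sketch:
\begin{verbatim}
import numpy as np
from itertools import product

# Fill the (nx, ny, nz) momentum shells in order of increasing n^2 and print
# the cumulative number of states at each closed shell.
nmax = 4                                   # large enough for the first shells
vecs = np.array(list(product(range(-nmax, nmax + 1), repeat=3)))
n2 = np.sum(vecs**2, axis=1)

closures, total = [], 0
for shell in sorted(set(n2.tolist())):
    total += int(np.count_nonzero(n2 == shell))
    closures.append(total)
print(closures[:8])                        # [1, 7, 19, 27, 33, 57, 81, 93]
\end{verbatim}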
As reported in Ref.~\cite{Lin:2001}, twist-averaged boundary conditions (TABC) can be imposed on the trial wave function to reduce the finite-size effects. One can allow particles to pick up a phase $\theta$ when they wrap around the periodic boundaries:
\begin{align}
\varphi_\epsilon^N(\bm r_1+L\hat{\bm r},\bm r_2,\ldots,s_i)=\e^{i\theta}\varphi_\epsilon^N(\bm r_1,\bm r_2,\ldots,s_i)\;.\label{eq:TABC}
\end{align}
The boundary condition $\theta=0$ corresponds to PBC, $\theta\ne 0$ to TABC. It has been shown that if the twist phase is integrated over, the finite-size effects are substantially reduced. TABC have been used in PNM calculations~\cite{Gandolfi:2009}, showing a small discrepancy in the energy per particle for 38, 45, 66 and 80 neutrons at fixed density. A remarkable result is that the PNM energy for 66 neutrons using PBC is very close to the result extrapolated with TABC, thereby validating the standard AFDMC calculation for 66 particles. Compared to PBC, employing TABC requires more computational time, and they have not been used in this work.
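The effect of the twist can be illustrated on the free Fermi gas alone. In the following schematic Python example (a sketch under the assumption of non-interacting particles, not an AFDMC calculation), the single particle energies are built from $\bm k=(2\pi\bm n+\bm\theta)/L$ and the lowest levels are filled; averaging the result over random twists removes most of the shell effects present at fixed $\theta=0$.
\begin{verbatim}
import numpy as np

def free_gas_kinetic(n_part, twist, n_max=4):
    """Kinetic energy per particle of n_part free spin-1/2 fermions in a
    periodic box, in units of hbar^2/(2 m L^2), for a given twist vector."""
    ns = np.array([[nx, ny, nz]
                   for nx in range(-n_max, n_max + 1)
                   for ny in range(-n_max, n_max + 1)
                   for nz in range(-n_max, n_max + 1)], dtype=float)
    eps = np.sort(np.sum((2.0 * np.pi * ns + twist)**2, axis=1))
    return np.repeat(eps, 2)[:n_part].mean()    # two spin states per k-vector

rng = np.random.default_rng(0)
e_pbc = free_gas_kinetic(66, np.zeros(3))                        # theta = 0
e_tabc = np.mean([free_gas_kinetic(66, rng.uniform(-np.pi, np.pi, 3))
                  for _ in range(200)])                          # twist average
\end{verbatim}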
\subsubsection{Strange finite and infinite systems}
\label{subsubsubsec:Wave_strange}
The $\Lambda$~hyperon, having isospin zero, does not belong to the isospin doublet of the nucleons. Referring to hypernuclear systems, we can therefore treat the additional strange baryons as distinguishable particles and write a trial wave function of the form
\begin{align}
\psi_T(R,S)=\prod_{\lambda i}f_c^{\Lambda N}(r_{\lambda i})\,\psi_T^N(R_N,S_N)\,\psi_T^\Lambda(R_\Lambda,S_\Lambda)\;,\label{eq:Psi_T}
\end{align}
where $R=\{\bm r_1,\ldots,\bm r_{\mathcal N_N},\bm r_1,\ldots,\bm r_{\mathcal N_\Lambda}\}$ and $S=\{s_1,\ldots,s_{\mathcal N_N},s_1,\ldots,s_{\mathcal N_\Lambda}\}$ refer to the coordinates of all the baryons, $\psi_T^N(R_N,S_N)$ is the nucleon wave function of Eq.~(\ref{eq:psi_N}), and $\psi_T^\Lambda(R_\Lambda,S_\Lambda)$ is the lambda wave function, which takes the same structure as the nucleon one:
\begin{align}
\psi_T^\Lambda(R_\Lambda,S_\Lambda)=\prod_{\lambda<\mu}f_c^{\Lambda\Lambda}(r_{\lambda\mu})\,\Phi_\Lambda(R_\Lambda,S_\Lambda)\;,\label{eq:psi_L}
\end{align}
with
\begin{align}
\Phi_\Lambda(R_\Lambda,S_\Lambda)=\mathcal A\Bigg[\prod_{\lambda=1}^{\mathcal N_\Lambda}\varphi_\epsilon^\Lambda(\bm r_\lambda,s_\lambda)\Bigg]=\det\Bigl\{\varphi_\epsilon^\Lambda(\bm r_\lambda,s_\lambda)\Bigr\}\;.
\end{align}
$R_\Lambda=\{\bm r_1,\ldots,\bm r_{\mathcal N_\Lambda}\}$ are the hyperon Cartesian coordinates and $S_\Lambda=\{s_1,\ldots,s_{\mathcal N_\Lambda}\}$ the hyperon spin coordinates, represented by the two-component spinor in the lambda-up-down basis:
\begin{align}
s_\lambda=\left(\begin{array}{c}
u_\lambda \\ v_\lambda
\end{array}\right)_\lambda
\!=u_\lambda|\Lambda\uparrow\rangle_\lambda+v_\lambda|\Lambda\downarrow\rangle_\lambda\;.
\end{align}
The $\Lambda\Lambda$ Jastrow correlation function $f_c^{\Lambda\Lambda}(r)$ is calculated by means of Eq.~(\ref{eq:Jastrow}) for the hyperon-hyperon pair using the central channel of the $\Lambda\Lambda$ potential of Eq.~(\ref{eq:V_LL}). Eq.~(\ref{eq:Jastrow}) is also used to calculate the hyperon-nucleon correlation function $f_c^{\Lambda N}(r)$ of the hypernuclear wave function~(\ref{eq:Psi_T}) by considering the pure central term of the $\Lambda N$ potential of Eq.~(\ref{eq:V_LN}) and using the reduced mass
\begin{align}
\mu_{\Lambda N}=\frac{m_\Lambda\,m_N}{m_\Lambda+m_N}\;.
\end{align}
For $\Lambda$~hypernuclei (and $\Lambda$~neutron drops) the hyperon single particle states take the same structure as the nuclear case, and they read:
\begin{align}
\varphi_\epsilon^\Lambda(\bm r_\lambda,s_\lambda)=R_{n,j}^\Lambda(r_\lambda)\Bigl[Y_{l,m_l}^\Lambda(\Omega)\,\chi_{s,m_s}^\Lambda(s_\lambda)\Bigr]_{j,m_j}\;.\label{eq:varphi_L}
\end{align}
Although the AFDMC code for hypernuclei is set up for an arbitrary number of hyperons, we focused on single and double $\Lambda$~hypernuclei. Having just two hyperons to deal with, only one radial function $R_{n,j}^\Lambda$ is needed. Since the mass difference between the neutron and the $\Lambda$~particle is small, we used the neutron $1s_{1/2}$ radial function for the hyperon as well.
When dealing with finite systems, the coordinates of all the baryons must be referred to the CM, which is now determined by particles with different masses. Nucleon and hyperon single particle orbitals are thus defined as:
\begin{align}
\begin{array}{rcl}
\varphi_\epsilon^N(\bm r_i,s_i)\!\!\!&\longrightarrow&\!\!\varphi_\epsilon^N(\bm r_i-\bm r_{CM},s_i)\\[0.5em]
\varphi_\epsilon^\Lambda(\bm r_\lambda,s_\lambda)\!\!\!&\longrightarrow&\!\!\varphi_\epsilon^\Lambda(\bm r_\lambda-\bm r_{CM},s_\lambda)
\end{array}
\end{align}
where
\begin{align}
\bm r_{CM}=\frac{1}{M}\left(m_N\sum_{i=1}^{\mathcal N_N}\bm r_i+m_\Lambda\sum_{\lambda=1}^{\mathcal N_\Lambda}\bm r_\lambda\right)
\quad\text{with}\quad M=\mathcal N_N\,m_N+\mathcal N_\Lambda\,m_\Lambda\;.
\end{align}
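For illustration, the mass-weighted CM shift can be written in a few lines of Python (a sketch with approximate baryon masses in MeV, not the actual AFDMC implementation):
\begin{verbatim}
import numpy as np

def shift_to_cm(r_N, r_L, m_N=938.9, m_L=1115.7):
    """Refer nucleon and hyperon coordinates (arrays of shape (N,3))
    to the common mass-weighted center of mass."""
    M = len(r_N) * m_N + len(r_L) * m_L
    r_cm = (m_N * r_N.sum(axis=0) + m_L * r_L.sum(axis=0)) / M
    return r_N - r_cm, r_L - r_cm
\end{verbatim}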
As in the nuclear case, the use of relative coordinates introduces corrections in the calculation of the derivatives of the trial wave function. For hypernuclei such corrections, and in general the evaluation of the derivatives, are more involved than for nuclei, because we have to deal with two sets of spatial coordinates ($R_N$ and $R_\Lambda$) and the Jastrow function $f_c^{\Lambda N}$ depends on both. The derivatives of the trial wave function including CM corrections are reported in Appendix~\ref{app:Wave}.
For $\Lambda$~neutron (nuclear) matter the antisymmetric part of the hyperon wave function is given by the ground state of the Fermi gas, as for the nucleons. We thus deal with two Slater determinants of plane waves with $\bm k_\epsilon$~vectors quantized in the same $L^3$ box (see Eq.~(\ref{eq:k_vec})). The size of the simulation box, and thus the quantization of the $\bm k_\epsilon$ vectors, is fixed by the total baryon number density
\begin{align}
\rho_b=\frac{\mathcal N_b}{L^3}=\frac{\mathcal N_N+\mathcal N_\Lambda}{L^3}=\rho_N+\rho_\Lambda\;,\label{eq:rho_b}
\end{align}
and by the number of nucleons and lambda particles. The hyperon single particle states then read
\begin{align}
\varphi_\epsilon^\Lambda(\bm r_\lambda,s_\lambda)=\e^{-i\bm k_\epsilon\cdot\bm r_\lambda}\chi_{s,m_s}^\Lambda(s_\lambda)\;,
\end{align}
where the $\bm k_\epsilon$ structure derived from $\rho_b$ is used also for the nuclear part. The requirements of homogeneity and isotropy discussed in the previous section still apply, and so the lambda plane waves have to close their own momentum shell structure. Given the list of closure numbers~(\ref{eq:n_c}), we can add $2\,\mathcal N_c^\Lambda$ hyperons (spin up and down) to the $2\,\mathcal N_c^N$ neutrons or $4\,\mathcal N_c^N$ nucleons in the periodic box.
The wave functions described so far are appropriate only if we treat nucleons and hyperons as distinct particles. In this scheme it is not possible to include the $\Lambda N$ exchange term of Eq.~(\ref{eq:V_LN}) directly in the propagator, because it mixes hyperon and nucleon states. The complete treatment of this term would require a drastic change of the AFDMC code and/or a different kind of nuclear-hypernuclear interaction, as briefly discussed in Appendix~\ref{app:Px}. A perturbative analysis of the $v_0(r_{\lambda i})\,\varepsilon\,\mathcal P_x$ term is however possible and is reported in the next section.
\subsection{Propagator for hypernuclear systems}
\label{subsec:Prop_YN}
Consider a many-body system composed of nucleons and hyperons, interacting via the full Hamiltonian~(\ref{eq:H}) and described by the trial wave function~(\ref{eq:Psi_T}). Suppose we switch off the spin-isospin interactions in all the channels and keep only the central terms:
\begin{align}
H=T_N+T_\Lambda+V_{NN}^c+V_{\Lambda\Lambda}^c+V_{\Lambda N}^c\;,
\end{align}
where the central contributions from the three-body interactions are also included. Neglecting the spin-isospin structure of the trial wave function, we can follow the idea of the standard DMC described in \S~\ref{sec:DMC} and write the analog of Eq.~(\ref{eq:psi_propag}):
\begin{align}
&\psi(R_N,R_\Lambda,\tau+d\tau)\simeq\int dR'_N\,dR'_\Lambda\langle R_N,R_\Lambda|
\e^{-\left(V_{NN}^c+V_{\Lambda\Lambda}^c+V_{\Lambda N}^c\right)\frac{d\tau}{2}}
\e^{-T_N d\tau}\e^{-T_\Lambda d\tau}\times\nonumber\\[0.5em]
&\hspace{0.5cm}\times\e^{-\left(V_{NN}^c+V_{\Lambda\Lambda}^c+V_{\Lambda N}^c\right)\frac{d\tau}{2}}\e^{E_Td\tau}
|R'_N,R'_\Lambda\rangle\,\psi(R'_N,R'_\Lambda,\tau)\;,\nonumber\\[0.8em]
&\simeq\int dR'_N\,dR'_\Lambda\underbrace{\langle R_N|\e^{-T_N d\tau}|R'_N\rangle}_{G_0^N(R_N,R'_N,d\tau)}
\underbrace{\langle R_\Lambda|\e^{-T_\Lambda d\tau}|R'_\Lambda\rangle}_{G_0^\Lambda(R_\Lambda,R'_\Lambda,d\tau)}\times\nonumber\\[0.5em]
&\hspace{0.5cm}\times\underbrace{\phantom{\langle}\!\!
\e^{-\left(\widetilde V_{NN}^c(R_N,R'_N)+\widetilde V_{\Lambda\Lambda}^c(R_\Lambda,R'_\Lambda)+\widetilde V_{\Lambda N}^c(R_N,R_\Lambda,R'_N,R'_\Lambda)-E_T\right)d\tau}}
_{G_V(R_N,R_\Lambda,R'_N,R'_\Lambda,d\tau)}\psi(R'_N,R'_\Lambda,\tau)\;,\nonumber\\[0.8em]
&\simeq\left(\frac{1}{4\pi D_N d\tau}\right)^{\frac{3\mathcal N_N}{2}}\!\!\left(\frac{1}{4\pi D_\Lambda d\tau}\right)^{\frac{3\mathcal N_\Lambda}{2}}\!\!
\int dR'_N\,dR'_\Lambda\e^{-\frac{(R_N-R'_N)^2}{4D_N d\tau}}\e^{-\frac{(R_\Lambda-R'_\Lambda)^2}{4D_\Lambda d\tau}}\times\nonumber\\[0.5em]
&\hspace{0.5cm}\times\e^{-\left(\widetilde V_{NN}^c(R_N,R'_N)+\widetilde V_{\Lambda\Lambda}^c(R_\Lambda,R'_\Lambda)
+\widetilde V_{\Lambda N}^c(R_N,R_\Lambda,R'_N,R'_\Lambda)-E_T\right)d\tau}\psi(R'_N,R'_\Lambda,\tau)\;,
\label{eq:psi_hyp_propag}
\end{align}
where
\begin{align}
\widetilde V_{NN}^c(R_N,R'_N)&=\frac{1}{2}\Bigl[V_{NN}^c(R_N)+V_{NN}^c(R'_N)\Bigr]\;,\nonumber\\[0.5em]
\widetilde V_{\Lambda\Lambda}^c(R_\Lambda,R'_\Lambda)&=\frac{1}{2}\Bigl[V_{\Lambda\Lambda}^c(R_\Lambda)+V_{\Lambda\Lambda}^c(R'_\Lambda)\Bigr]\;,\label{eq:V_tilde}\\[0.5em]
\widetilde V_{\Lambda N}^c(R_N,R_\Lambda,R'_N,R'_\Lambda)&=\frac{1}{2}\Bigl[V_{\Lambda N}^c(R_N,R_\Lambda)+V_{\Lambda N}^c(R'_N,R'_\Lambda)\Bigr]\;,\nonumber
\end{align}
and $D_N=\hbar^2/2m_N$ and $D_\Lambda=\hbar^2/2m_\Lambda$ are the diffusion constants of the Brownian motion of nucleons and lambda particles.
The evolution in imaginary time is thus performed in the same way as in the standard DMC algorithm. A set of walkers, which now contains nucleon and hyperon coordinates, is diffused according to $G_0^N(R_N,R'_N,d\tau)$ and $G_0^\Lambda(R_\Lambda,R'_\Lambda,d\tau)$. A weight $\omega_i=G_V(R_N,R_\Lambda,R'_N,R'_\Lambda,d\tau)$ is assigned to each walker and is used for the estimator contributions and the branching process. We can also apply the importance function technique, which results in the inclusion of a drift term in the diffusion of each type of baryon and the use of the local energy in the branching weight. The drift velocities take the same form as in Eq.~(\ref{eq:drift}), but now the derivatives are calculated with respect to nucleon or hyperon coordinates, including all the possible CM (for finite systems) and Jastrow corrections, as reported in Appendix~\ref{app:Wave}.
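A minimal sketch of such a central-potential step for a mixed walker is given below (Python, no importance sampling, approximate baryon masses in MeV; the function \texttt{v\_central} is a placeholder for the sum of the central $NN$, $\Lambda\Lambda$ and $\Lambda N$ potentials).
\begin{verbatim}
import numpy as np

HBARC = 197.327                    # MeV fm
M_N, M_L = 938.9, 1115.7           # approximate nucleon and Lambda masses (MeV)
D_N = HBARC**2 / (2.0 * M_N)       # diffusion constants (MeV fm^2)
D_L = HBARC**2 / (2.0 * M_L)

def central_dmc_step(r_N, r_L, dtau, v_central, E_T, rng):
    """Diffuse nucleon and hyperon coordinates with their own diffusion
    constants and return the new walker with its branching weight."""
    r_N_new = r_N + rng.normal(scale=np.sqrt(2.0 * D_N * dtau), size=r_N.shape)
    r_L_new = r_L + rng.normal(scale=np.sqrt(2.0 * D_L * dtau), size=r_L.shape)
    # symmetrized central potential, as in the definition of the tilde potentials
    v_sym = 0.5 * (v_central(r_N, r_L) + v_central(r_N_new, r_L_new))
    weight = np.exp(-(v_sym - E_T) * dtau)
    return r_N_new, r_L_new, weight
\end{verbatim}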
Let us now reintroduce the spin-isospin structure in the wave function and consider the spin-isospin dependent interactions. For the nuclear part, we can still deal with AV6 like potentials for nucleon systems by means of the Hubbard-Stratonovich transformation, as discussed in \S~\ref{subsec:Prop_AV6}. In the case of pure neutron systems, we can also include spin-orbit and three-body contributions, as reported in \S~\ref{subsec:Prop_LS} and \S~\ref{subsec:Prop_TNI}. In the following we discuss how to deal with the spin-isospin dependent part of the hypernuclear potentials, in both the two- and three-body channels.
\subsubsection{Two-body terms}
\label{subsubsec:Prop_LN}
Consider the full two-body $\Lambda N$ interaction described in the previous chapter:
\begin{align}
V_{\Lambda N}&=\sum_{\lambda i}\left(v_{\lambda i}+v_{\lambda i}^{CSB}\right)\;,\nonumber\\[0.5em]
&=\sum_{\lambda i}v_0(r_{\lambda i})(1-\varepsilon)+\sum_{\lambda i}v_0(r_{\lambda i})\,\varepsilon\,\mathcal P_x
+\sum_{\lambda i}\frac{1}{4}v_\sigma T^2_\pi(r_{\lambda i})\,\bm\sigma_\lambda\cdot\bm\sigma_i\nonumber \\[0.2em]
&\quad+\sum_{\lambda i}C_\tau\,T_\pi^2\left(r_{\lambda i}\right)\tau_i^z\nonumber\;,\\[0.5em]
&=\sum_{\lambda i}v_0(r_{\lambda i})(1-\varepsilon)+\sum_{\lambda i}B_{\lambda i}^{[\mathcal P_x]}\,\mathcal P_x
+\sum_{\lambda i}\sum_\alpha\sigma_{\lambda\alpha}\,B_{\lambda i}^{[\sigma]}\,\sigma_{i\alpha}+\sum_i B_i^{[\tau]}\,\tau_i^z\;,\label{eq:V_LN_prop}
\end{align}
where
\begin{align}
B_{\lambda i}^{[\mathcal P_x]}=v_0(r_{\lambda i})\,\varepsilon\quad\quad
B_{\lambda i}^{[\sigma]}=\frac{1}{4}v_\sigma T^2_\pi(r_{\lambda i})\quad\quad
B_i^{[\tau]}=\sum_\lambda C_\tau\,T_\pi^2\left(r_{\lambda i}\right)\;.
\end{align}
The first term of Eq.~(\ref{eq:V_LN_prop}) is simply a spin independent factor and can be included in the $V_{\Lambda N}^c$ contribution of Eq.~(\ref{eq:psi_hyp_propag}). The remaining terms involve operators acting on nucleons and hyperons and need to be discussed separately.
\begin{itemize}
\item $\bm\sigma_\lambda\cdot\bm\sigma_i$~\emph{term}. The quadratic spin-spin term of the $\Lambda N$ interaction is written in the same form as the nucleon-nucleon one of Eq.~(\ref{eq:V_NN_SD}). However, in general the matrix $B_{\lambda i}^{[\sigma]}$ is not a square matrix ($\dim B_{\lambda i}^{[\sigma]}=\mathcal N_\Lambda\times\mathcal N_N$), so we cannot follow the derivation of \S~\ref{subsec:Prop_AV6}. Recalling that we are working with single particle wave functions and that each spin-isospin operator is the representation in the $A$-body tensor product space of a one-body operator, as in Eq.~(\ref{eq:O_j}), we can write
\begin{align}
\!\!\!\!\sum_\alpha\sigma_{\lambda\alpha}\otimes\sigma_{i\alpha}=\frac{1}{2}\sum_\alpha\left[\left(\sigma_{\lambda\alpha}\oplus\sigma_{i\alpha}\right)^2
-\left(\sigma_{\lambda\alpha}\otimes\mathbb I_{i\alpha}\right)^2-\left(\mathbb I_{\lambda\alpha}\otimes\sigma_{i\alpha}\right)^2\right]\;.\!
\end{align}
The square of the Pauli matrices in the last two terms gives the identity with respect to the single particle state $\lambda$ or $i$, so that they can simply be written as a spin independent contribution
\begin{align}
\sum_\alpha\sigma_{\lambda\alpha}\otimes\sigma_{i\alpha}=-3+\frac{1}{2}\sum_\alpha\left(\mathcal O_{\lambda i,\alpha}^{[\sigma_{\Lambda N}]}\right)^2\;,
\end{align}
where we have defined a new spin-spin operator
\begin{align}
\mathcal O_{\lambda i,\alpha}^{[\sigma_{\Lambda N}]}=\sigma_{\lambda\alpha}\oplus\sigma_{i\alpha}\;.\label{eq:O_LN}
\end{align}
We can now write the $\bm\sigma_\lambda\cdot\bm\sigma_i$ term as follows
\begin{align}
V_{\Lambda N}^{\sigma\sigma}&=\sum_{\lambda i}\sum_\alpha\sigma_{\lambda\alpha}\,B_{\lambda i}^{[\sigma]}\,\sigma_{i\alpha}\;,\nonumber\\[0.2em]
&=-3\sum_{\lambda i}B_{\lambda i}^{[\sigma]}+\frac{1}{2}\sum_{\lambda i}\sum_\alpha B_{\lambda i}^{[\sigma]}\left(\mathcal O_{\lambda i,\alpha}^{[\sigma_{\Lambda N}]}\right)^2
\end{align}
The first term is a central factor that can be included in $V_{\Lambda N}^c$. The second term is written in the same way as the spin-isospin dependent part of the nuclear interaction of Eq.~(\ref{eq:V_NN_SD_On}). With a slight abuse of notation, $n=\{\lambda, i\}=1,\ldots,\mathcal N_N\,\mathcal N_\Lambda$, the spin dependent part of the propagator for the $\bm\sigma_\lambda\cdot\bm\sigma_i$ term takes a form suitable for the application of the Hubbard-Stratonovich transformation:
\begin{align}
\e^{-\sum_{\lambda i}\sum_\alpha\sigma_{\lambda\alpha}\,B_{\lambda i}^{[\sigma]}\,\sigma_{i\alpha}\,d\tau}
&=\e^{-3\sum_n B_n^{[\sigma]}
-\frac{1}{2}\sum_{n\alpha} B_n^{[\sigma]}\left(\mathcal O_{n,\alpha}^{[\sigma_{\Lambda N}]}\right)^2d\tau}\;,\nonumber\\[0.2em]
&=\e^{-V_{\Lambda N}^{c\,[\sigma]}}\prod_{n\alpha}\e^{-\frac{1}{2}B_n^{[\sigma]}\left(\mathcal O_{n\alpha}^{[\sigma_{\Lambda N}]}\right)^2d\tau}\,+\ord\!\left(d\tau^2\right)\;,\nonumber\\[0.2em]
&\simeq\e^{-V_{\Lambda N}^{c\,[\sigma]}}\prod_{n\alpha}\frac{1}{\sqrt{2\pi}}\int\!dx_{n\alpha}\e^{\frac{-x_{n\alpha}^2}{2}+\sqrt{-B_n^{[\sigma]} d\tau}\,x_{n\alpha}\mathcal O_{n\alpha}^{[\sigma_{\Lambda N}]}}\;.
\end{align}
Recalling Eq.~(\ref{eq:G_AFDMC}), we can write the hyperon spin dependent part of the AFDMC propagator for hypernuclear systems as
\begin{align}
\langle S_N S_\Lambda|\prod_{n\alpha=1}^{3\mathcal N_N\mathcal N_\Lambda}\!\frac{1}{\sqrt{2\pi}}\int\!dx_{n\alpha}\e^{\frac{-x_{n\alpha}^2}{2}+\sqrt{-B_n^{[\sigma]} d\tau}\,x_{n\alpha}\mathcal O_{n\alpha}^{[\sigma_{\Lambda N}]}}|S'_N S'_\Lambda\rangle\;.
\end{align}
By the definition of Eq.~(\ref{eq:O_LN}), it follows that the action of the operator $\mathcal O_{n\alpha}^{[\sigma_{\Lambda N}]}$ on the spinor $|S'_N,S'_\Lambda\rangle$ factorizes into a $\sigma_{i\alpha}$ rotation for the nucleon spinor $|S_N\rangle$ and a $\sigma_{\lambda\alpha}$ rotation for the $\Lambda$ spinor $|S_\Lambda\rangle$, coupled by the same coefficient $\sqrt{-B_n^{[\sigma]} d\tau}\,x_{n\alpha}$ (a minimal sketch of this coupled rotation is given after this list).
\item $\tau_i^z$~\emph{term}. As already seen in \S~\ref{sec:AFDMC}, the single particle wave function is closed with respect to the application of a propagator containing linear spin-isospin operators. The action of the CSB potential corresponds to the propagator
\begin{align}
\e^{-\sum_iB_i^{[\tau]}\,\tau_i^z\,d\tau}=\prod_i\e^{-B_i^{[\tau]}\,\tau_i^z\,d\tau}\,+\ord\left(d\tau^2\right)\;,
\end{align}
that, acting on the trial wave function, simply produces a rotation of the nucleon spinors, as in Eq.~(\ref{eq:eO_S}). Since the CSB term is already linear in $\tau_i^z$, there is no need for a Hubbard-Stratonovich transformation. The $\tau_i^z$ rotations can be applied, after the integration over the auxiliary fields, to the spinors modified by the Hubbard-Stratonovich rotations. In the $\psi_I$~ratio AFDMC algorithm~(\hyperlink{method:PsiT}{\emph{v1}}) there are no additional terms in the weight coming from the CSB rotations. If we use the local energy version of the algorithm~(\hyperlink{method:Elocal}{\emph{v2}}), we need to subtract the CSB contribution from $E_L(R)$ (Eq.~(\ref{eq:E_L})), because there are no counter terms coming from the importance sampling on the auxiliary fields. Note that in neutron systems $\tau_i^z$ simply gives a factor $-1$, so the CSB term becomes a positive central contribution ($C_\tau<0$) to be added to $V_{\Lambda N}^c$.
\item $\mathcal P_x$~\emph{term}. As discussed in \S~\ref{subsubsubsec:Wave_strange}, the structure of our trial wave function for hypernuclear systems prevents the straightforward inclusion of the $\Lambda N$ space exchange operator in the AFDMC propagator. We can try to treat this contribution perturbatively: $\mathcal P_x$ is not included in the propagator, but its effect is calculated as a standard estimator on the propagated wave function. The action of $\mathcal P_x$ is to exchange the position of one nucleon and one hyperon, thereby modifying the CM of the whole system because of the mass difference between the baryons. To compute this potential contribution we thus have to sum over all the hyperon-nucleon pairs. For each exchanged pair, all the positions are referred to the new CM and the wave function is evaluated and accumulated. Then the particles are moved back to their original positions and a new pair is processed. At the end of the sum the contribution $\sum_{\lambda i}\mathcal P_x\,\psi$ is obtained. As reported in Refs.~\cite{Shoeb:1998,Usmani:2006,Usmani:2006_He6LL,Usmani:2008}, the $\Lambda N$ space exchange operator induces strong correlations, and thus a perturbative approach might not be appropriate. A possible non-perturbative extension of the AFDMC code for the space exchange operator is outlined in Appendix~\ref{app:Px}.
\end{itemize}
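As anticipated in the discussion of the $\bm\sigma_\lambda\cdot\bm\sigma_i$ term, the rotation generated by $\mathcal O_{n\alpha}^{[\sigma_{\Lambda N}]}=\sigma_{\lambda\alpha}\oplus\sigma_{i\alpha}$ factorizes into two one-body rotations sharing the same auxiliary-field coefficient. The following Python sketch illustrates this point for two-component (spin only) spinors; the actual AFDMC nucleon spinors are four-component spin-isospin objects, so this is only a schematic example.
\begin{verbatim}
import numpy as np

SIGMA = [np.array([[0, 1], [1, 0]], dtype=complex),      # sigma_x
         np.array([[0, -1j], [1j, 0]], dtype=complex),   # sigma_y
         np.array([[1, 0], [0, -1]], dtype=complex)]     # sigma_z

def rotate(spinor, alpha, c):
    """exp(c*sigma_alpha) acting on a 2-component spinor,
    using exp(c*sigma) = cosh(c) + sinh(c)*sigma (sigma^2 = 1)."""
    return np.cosh(c) * spinor + np.sinh(c) * (SIGMA[alpha] @ spinor)

def coupled_LN_rotation(s_N, s_L, i, lam, alpha, B, dtau, x):
    """Rotation generated by sigma_{lam,alpha} (+) sigma_{i,alpha}:
    nucleon i and hyperon lam are rotated with the same coefficient."""
    c = np.sqrt(-B * dtau + 0j) * x
    s_N, s_L = s_N.copy(), s_L.copy()
    s_N[i] = rotate(s_N[i], alpha, c)
    s_L[lam] = rotate(s_L[lam], alpha, c)
    return s_N, s_L
\end{verbatim}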
In the two-body hypernuclear sector a $\Lambda\Lambda$ interaction is also employed, as reported in \S~\ref{subsec:LL}. The potential described in Eq.~(\ref{eq:V_LL}) can be recast as
\begin{align}
V_{\Lambda\Lambda}&=\sum_{\lambda<\mu}\sum_{k=1}^{3}\left(v_0^{(k)}+v_\sigma^{(k)}\,{\bm\sigma}_\lambda\cdot{\bm\sigma}_\mu\right)\e^{-\mu^{(k)}r_{\lambda\mu}^2}\;,\nonumber\\[0.2em]
&=\sum_{\lambda<\mu}\sum_{k=1}^{3}v_0^{(k)}\e^{-\mu^{(k)}r_{\lambda\mu}^2}+\frac{1}{2}\sum_{\lambda\ne\mu}\sum_{\alpha}\sigma_{\lambda\alpha}\,C_{\lambda\mu}^{[\sigma]}\,\sigma_{\mu\alpha}\;,
\end{align}
where
\begin{align}
C_{\lambda\mu}^{[\sigma]}=\sum_{k=1}^{3}v_\sigma^{(k)}\e^{-\mu^{(k)}r_{\lambda\mu}^2}\;.
\end{align}
The first term of $V_{\Lambda\Lambda}$ is a pure central factor to be included in $V_{\Lambda\Lambda}^c$, while the second part has exactly the same form as the isospin component of Eq.~(\ref{eq:V_NN_SD}). We can thus diagonalize the $C$ matrix and define a new operator $\mathcal O_{n,\alpha}^{[\sigma_\Lambda]}$ starting from the eigenvectors $\psi_{n,\lambda}^{[\sigma_\Lambda]}$:
\begin{align}
\mathcal O_{n,\alpha}^{[\sigma_\Lambda]}&=\sum_{\lambda}\sigma_{\lambda\alpha}\,\psi_{n,\lambda}^{[\sigma_\Lambda]}\;.
\end{align}
In this way, the spin dependent part of the hyperon-hyperon interaction becomes
\begin{align}
V_{\Lambda\Lambda}^{SD}=\frac{1}{2}\sum_{n=1}^{\mathcal N_\Lambda}\sum_{\alpha=1}^3\lambda_n^{[\sigma_\Lambda]}\!\left(\mathcal O_{n\alpha}^{[\sigma_\Lambda]}\right)^2 \;,
\end{align}
and we can apply the Hubbard-Stratonovich transformation to linearize the quadratic dependence on $\mathcal O_{n\alpha}^{[\sigma_\Lambda]}$, introducing the integration over $3\,\mathcal N_\Lambda$ new auxiliary fields and the corresponding $|S'_\Lambda\rangle$ rotations.
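A minimal Python sketch of this construction is shown below (assumed inputs: the hyperon positions and the Gaussian parameters $v_\sigma^{(k)}$ and $\mu^{(k)}$ of Eq.~(\ref{eq:V_LL})); the eigenvectors of the symmetric matrix $C$ define the linearized operators and its eigenvalues enter the Hubbard-Stratonovich coefficients.
\begin{verbatim}
import numpy as np

def lambda_lambda_C_matrix(r_L, v_sigma, mu):
    """C[l,m] = sum_k v_sigma[k] * exp(-mu[k] * r_lm^2), with zero diagonal."""
    n = len(r_L)
    C = np.zeros((n, n))
    for l in range(n):
        for m in range(n):
            if l != m:
                r2 = np.sum((r_L[l] - r_L[m])**2)
                C[l, m] = np.sum(v_sigma * np.exp(-mu * r2))
    return C

def linearized_LL_operators(C):
    """Eigenvalues lambda_n and eigenvectors psi_n defining O_n."""
    eigvals, eigvecs = np.linalg.eigh(C)
    return eigvals, eigvecs.T      # row n: coefficients psi_{n,lambda}
\end{verbatim}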
Finally, using the diagonalization of the potential matrices and the derivation reported in this section, the spin-isospin dependent part of the nuclear and hypernuclear two-body potentials (excluding the spin-orbit term for simplicity) reads:
\begin{align}
V_{NN}^{SD}+V_{\Lambda N}^{SD}&=
\frac{1}{2}\sum_{n=1}^{3\mathcal N_N}\lambda_n^{[\sigma]}\!\left(\mathcal O_n^{[\sigma]}\right)^2
\!+\frac{1}{2}\sum_{n=1}^{3\mathcal N_N}\sum_{\alpha=1}^3\lambda_n^{[\sigma\tau]}\!\left(\mathcal O_{n\alpha}^{[\sigma\tau]}\right)^2
\!+\frac{1}{2}\sum_{n=1}^{\mathcal N_N}\sum_{\alpha=1}^3\lambda_n^{[\tau]}\!\left(\mathcal O_{n\alpha}^{[\tau]}\right)^2\nonumber\\[0.4em]
&\,+\frac{1}{2}\sum_{n=1}^{\mathcal N_\Lambda}\sum_{\alpha=1}^3\lambda_n^{[\sigma_\Lambda]}\!\left(\mathcal O_{n\alpha}^{[\sigma_\Lambda]}\right)^2
\!+\frac{1}{2}\sum_{n=1}^{\mathcal N_N\mathcal N_\Lambda}\!\sum_{\alpha=1}^3 B_n^{[\sigma]}\!\left(\mathcal O_{n\alpha}^{[\sigma_{\Lambda N}]}\right)^2
\!+\sum_{i=1}^{\mathcal N_N} B_i^{[\tau]}\,\tau_i^z\;.\label{eq:SD_Op}
\end{align}
Using a compact notation, the AFDMC propagator for hypernuclear systems of Eq.~(\ref{eq:psi_hyp_propag}) with the inclusion of spin-isospin degrees of freedom becomes:
\begin{align}
&\hspace{-0.5em}G(R,S,R',S',d\tau)=\langle R,S|\e^{-(T_N+T_\Lambda+V_{NN}+V_{\Lambda\Lambda}+V_{\Lambda N}-E_T)d\tau}|R',S'\rangle\;,\nonumber\\[0.8em]
&\hspace{0.4em}\simeq\left(\frac{1}{4\pi D_N d\tau}\right)^{\frac{3\mathcal N_N}{2}}\!\!\left(\frac{1}{4\pi D_\Lambda d\tau}\right)^{\frac{3\mathcal N_\Lambda}{2}}\!\!
\e^{-\frac{(R_N-R'_N)^2}{4D_N d\tau}}\e^{-\frac{(R_\Lambda-R'_\Lambda)^2}{4D_\Lambda d\tau}}\times\nonumber\\[0.7em]
&\hspace{0.6cm}\times\e^{-\left(\widetilde V_{NN}^c(R_N,R'_N)+\widetilde V_{\Lambda\Lambda}^c(R_\Lambda,R'_\Lambda)
+\widetilde V_{\Lambda N}^c(R_N,R_\Lambda,R'_N,R'_\Lambda)-E_T\right)d\tau}\nonumber\\[0.2em]
&\hspace{0.6cm}\times\langle S_N,S_\Lambda|\prod_{i=1}^{\mathcal N_N}\e^{-B_i^{[\tau]}\,\tau_i^z\,d\tau}
\prod_{n=1}^{\mathcal M}\frac{1}{\sqrt{2\pi}}\int\!dx_n\e^{\frac{-x_n^2}{2}+\sqrt{-\lambda_n d\tau}\,x_n\mathcal O_n}|S'_N,S'_\Lambda\rangle\;,\label{eq:Prop_full}
\end{align}
where $|R,S\rangle\equiv|R_N,R_\Lambda,S_N,S_\Lambda\rangle$ is the state containing all the coordinates of the baryons, and $\widetilde V_{NN}^c$, $\widetilde V_{\Lambda\Lambda}^c$ and $\widetilde V_{\Lambda N}^c$, defined in Eqs.~(\ref{eq:V_tilde}), contain all the possible central factors. Formally, $\mathcal M=15\,\mathcal N_N+3\,\mathcal N_\Lambda+3\,\mathcal N_N\mathcal N_\Lambda$ and $\mathcal O_n$ stands for the various operators of Eq.~(\ref{eq:SD_Op}), which act differently on the spinors $|S'_N,S'_\Lambda\rangle$. The $\mathcal O_n^{[\sigma]}$, $\mathcal O_{n\alpha}^{[\sigma\tau]}$ and $\mathcal O_{n\alpha}^{[\tau]}$ act on the nucleon spinor $|S'_N\rangle$. The $\mathcal O_{n\alpha}^{[\sigma_\Lambda]}$ rotates the lambda spinor $|S'_\Lambda\rangle$. $\mathcal O_{n\alpha}^{[\sigma_{\Lambda N}]}$ acts on both baryon spinors, with separate rotations for nucleons and hyperons coupled by the same coefficient $(-B_n^{[\sigma]} d\tau)^{1/2}\,x_n$ (recall Eq.~(\ref{eq:O_LN})). The algorithm then follows the nuclear version (\S~\ref{subsec:Prop_AV6}), with the sampling of the nucleon and hyperon coordinates and of the auxiliary fields, one for each linearized operator. The application of the propagator of Eq.~(\ref{eq:Prop_full}) has the effect of rotating the spinors of the baryons. The weight for each walker is then calculated starting from the central part of the interaction, with possible counter terms coming from the importance sampling on the spatial coordinates and on the auxiliary fields (algorithm~\hyperlink{method:PsiT}{\emph{v1}}), or by means of the local energy (algorithm~\hyperlink{method:Elocal}{\emph{v2}}). The fixed phase approximation, the branching process and the expectation values are the same as discussed in \S~\ref{sec:DMC}.
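Schematically, the spin-isospin part of the propagation then reduces to a loop over the linearized operators, sampling one Gaussian auxiliary field for each of them and applying the corresponding rotation to the walker spinors. The following Python fragment is only an illustrative skeleton: the objects \texttt{walker} and \texttt{operators} (a list of eigenvalue/rotation pairs) are assumptions of this sketch, not part of the actual code.
\begin{verbatim}
import numpy as np

def spin_isospin_step(walker, operators, dtau, rng):
    """Apply exp(sqrt(-lambda_n*dtau)*x_n*O_n) for every linearized operator,
    sampling one Gaussian auxiliary field x_n per operator."""
    for lam_n, apply_rotation in operators:
        x_n = rng.normal()
        c = np.sqrt(-lam_n * dtau + 0j) * x_n
        apply_rotation(walker, c)     # rotates the relevant baryon spinors
    return walker
\end{verbatim}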
\subsubsection{Three-body terms}
\label{subsubsec:Prop_LNN}
We have already shown in \S~\ref{subsec:Prop_TNI} that for neutron systems the three-body nucleon force can be recast as a sum of two-body terms only. In the case of the three-body $\Lambda NN$ interaction it is possible to verify that the same reduction applies to both nucleon and neutron systems. Let us consider the full potential of Eqs.~(\ref{eq:V_LNN_2pi}) and~(\ref{eq:V_LNN_D})
\begin{align}
V_{\Lambda NN}=\sum_{\lambda,i<j}\left(v_{\lambda ij}^{2\pi,P}+v_{\lambda ij}^{2\pi,S}+v_{\lambda ij}^{D}\right)\;,
\end{align}
and adopt the notation:
\begin{align}
T_{\lambda i}=T_{\pi}(r_{\lambda i})\quad\quad
Y_{\lambda i}=Y_{\pi}(r_{\lambda i})\quad\quad
Q_{\lambda i}=Y_{\lambda i}-T_{\lambda i}\;.
\end{align}
By expanding the operators over the Cartesian components as done in Eqs.~(\ref{eq:sigma_dec}) and (\ref{eq:Sij_dec}), it is possible to derive the following relations:
\begin{align}
V_{\Lambda NN}^{2\pi,S}&=\frac{1}{2}\sum_{i\ne j}\sum_{\alpha\beta\gamma}\tau_{i\gamma}\,\sigma_{i\alpha}\left(-\frac{C_P}{3}\sum_\lambda\sum_\delta\Theta_{\lambda i}^{\alpha\delta}\,\Theta_{\lambda j}^{\beta\delta}\right)\sigma_{j\beta}\,\tau_{j\gamma}\;,\\[0.5em]
V_{\Lambda NN}^{2\pi,P}&=\frac{1}{2}\sum_{i\ne j}\sum_{\alpha\beta\gamma}\tau_{i\gamma}\,\sigma_{i\alpha}\,\Xi_{i\alpha,j\beta}\,\sigma_{j\beta}\,\tau_{j\gamma}\;,\\[0.5em]
V_{\Lambda NN}^{D}&=W_D\!\sum_{\lambda,i<j}T^2_{\lambda i}\,T^2_{\lambda j}+\frac{1}{2}\sum_{\lambda i}\sum_{\alpha}\sigma_{\lambda\alpha}\,D_{\lambda i}^{[\sigma]}\,\sigma_{i\alpha}\;,
\end{align}
where
\begin{align}
\Theta_{\lambda i}^{\alpha\beta}&=Q_{\lambda i}\,\delta^{\alpha\beta}+3\,T_{\lambda i}\hat r_{\lambda i}^\alpha\,\hat r_{\lambda i}^\beta\;,\label{eq:Theta}\\[0.5em]
\Xi_{i\alpha,j\beta}&=\frac{1}{9}C_S\,\mu_\pi^2 \sum_\lambda\,Q_{i\lambda}\,Q_{\lambda j}\,|r_{i\lambda}||r_{j\lambda}|\,\hat r_{i\lambda}^\alpha\,\hat r_{j\lambda}^\beta\;,\label{eq:Xi}\\[0.5em]
D_{\lambda i}^{[\sigma]}&=\frac{1}{3} W_D\!\sum_{j,j\ne i} T^2_{\lambda i}\,T^2_{\lambda j}\;.
\end{align}
By combining Eqs.~(\ref{eq:Theta}) and~(\ref{eq:Xi}), the TPE term of the three-body hyperon-nucleon interaction can be recast as:
\begin{align}
V_{\Lambda NN}^{2\pi}=\frac{1}{2}\sum_{i\ne j}\sum_{\alpha\beta\gamma}\tau_{i\gamma}\,\sigma_{i\alpha}\,D_{i\alpha,j\beta}^{[\sigma\tau]}\,\sigma_{j\beta}\,\tau_{j\gamma}\;,
\end{align}
where
\begin{align}
D_{i\alpha,j\beta}^{[\sigma\tau]}&=\sum_\lambda\Bigg\{\!-\frac{1}{3} C_P Q_{\lambda i}Q_{\lambda j}\delta_{\alpha\beta}
-C_P Q_{\lambda i}T_{\lambda j}\,\hat r_{j\lambda}^{\,\alpha}\,\hat r_{j\lambda}^{\,\beta}
-C_P Q_{\lambda j}T_{\lambda i}\,\hat r_{i\lambda}^{\,\alpha}\,\hat r_{i\lambda}^{\,\beta}\nonumber\\[0.2em]
&\quad+\left[-3\,C_P T_{\lambda i}T_{\lambda j } \left({\sum_\delta}\,\hat r_{i\lambda}^{\,\delta}\,\hat r_{j\lambda}^{\,\delta}\right)
+\frac{1}{9} C_S \mu_\pi^2 Q_{\lambda i} Q_{\lambda j}\,|r_{i\lambda}||r_{j\lambda}|\right]\,\hat r_{i\lambda}^{\,\alpha}\,\hat r_{j\lambda}^{\,\beta}\Bigg\}\;.
\end{align}
Finally, the $\Lambda NN$ interaction takes the following form:
\begin{align}
V_{\Lambda NN}&=W_D\!\sum_{\lambda,i<j}T^2_{\lambda i}\,T^2_{\lambda j}
+\frac{1}{2}\sum_{\lambda i}\sum_{\alpha}\sigma_{\lambda\alpha}\,D_{\lambda i}^{[\sigma]}\,\sigma_{i\alpha}\nonumber\\[0.2em]
&\quad\,+\frac{1}{2}\sum_{i\ne j}\sum_{\alpha\beta\gamma}\tau_{i\gamma}\,\sigma_{i\alpha}\,D_{i\alpha,j\beta}^{[\sigma\tau]}\,\sigma_{j\beta}\,\tau_{j\gamma}\;.
\label{eq:V_LNN_prop}
\end{align}
The first term is a pure central factor that can be included in $V_{\Lambda N}^c$. The second term is analogous to the $\bm\sigma_\lambda\cdot\bm\sigma_i$ term~(\ref{eq:V_LN_prop}) of the two-body hyperon-nucleon interaction. The last term acts only on the spin-isospin of the two nucleons $i$ and $j$ and has the same structure as the nuclear $\bm\sigma\cdot\bm\tau$ contribution described by the matrix $A_{i\alpha,j\beta}^{[\sigma\tau]}$. The three-body hyperon-nucleon interaction is thus written as a sum of two-body operators only, of the same form as the ones already discussed for the $NN$ and $\Lambda N$ potentials. We can therefore include these contributions as well in the AFDMC propagator of Eq.~(\ref{eq:Prop_full}) by simply redefining the following matrices:
\begin{align}
B_{\lambda i}^{[\sigma]}&\longrightarrow B_{\lambda i}^{[\sigma]}+D_{\lambda i}^{[\sigma]}\;,\\[0.5em]
A_{i\alpha,j\beta}^{[\sigma\tau]}&\longrightarrow A_{i\alpha,j\beta}^{[\sigma\tau]}+D_{i\alpha,j\beta}^{[\sigma\tau]}\;.
\end{align}
The algorithm then follows the steps already discussed in the previous section. Note that in the case of pure neutron systems, the last term of Eq.~(\ref{eq:V_LNN_prop}) simply reduces to a $\bm\sigma_i\cdot\bm\sigma_j$ contribution, which is included in the propagator by redefining the nuclear matrix $A_{i\alpha,j\beta}^{[\sigma]}$.
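As an illustration of how these redefinitions can be assembled in practice, the following Python sketch builds the central $W_D$ term and the matrix $D_{\lambda i}^{[\sigma]}$ from the matrix of $T_\pi^2(r_{\lambda i})$ values (a schematic example, not the AFDMC implementation):
\begin{verbatim}
import numpy as np

def lnn_dispersive_terms(T2, W_D):
    """T2[l, i] = T_pi(r_{li})^2. Returns the central contribution
    W_D * sum_{l, i<j} T2[l,i]*T2[l,j] and the spin matrix
    D[l,i] = (W_D/3) * sum_{j != i} T2[l,i]*T2[l,j]."""
    s = T2.sum(axis=1, keepdims=True)               # sum_j T2[l,j]
    central = W_D * 0.5 * ((s**2).sum() - (T2**2).sum())
    D_sigma = (W_D / 3.0) * T2 * (s - T2)           # excludes the j == i term
    return central, D_sigma
\end{verbatim}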
With the AFDMC method extended to the hypernuclear sector, we can study finite and infinite lambda-nucleon and lambda-neutron systems. In the former case we can treat Hamiltonians that include the full hyperon-nucleon, hyperon-nucleon-nucleon and hyperon-hyperon interactions of Chapter~\ref{chap:hamiltonians}, but we are limited to the Argonne V6 like potentials for the nuclear sector. It has been shown, however, that this approach gives good results for finite nuclei~\cite{Gandolfi:2007} and nuclear matter~\cite{Gandolfi:2007_SNM,Gandolfi:2010}. In the latter case, instead, we can also add the nucleon spin-orbit contribution, i.e. AV8 like potentials, and the three-neutron force. The neutron version of the AFDMC code has been extensively and successfully applied to study the energy differences of oxygen~\cite{Gandolfi:2006} and calcium~\cite{Gandolfi:2008} isotopes, the properties of neutron drops~\cite{Pederiva:2004,Gandolfi:2011,Maris:2013} and the properties of neutron matter in connection with astrophysical observables~\cite{Sarsa:2003,Gandolfi:2009,Gandolfi:2009_gap,Gandolfi:2012}. Very recently, the AFDMC algorithm has also been used to perform calculations for neutron matter using chiral effective field theory interactions~\cite{Gezerlis:2013}.
\renewcommand{\arraystretch}{1.0}
\chapter{Results: finite systems}
\label{chap:results_finite}
This chapter reports on the analysis of finite systems, nuclei and hypernuclei. For single $\Lambda$~hypernuclei a direct comparison of energy calculations with experimental results is given for the $\Lambda$~separation energy, defined as:
\begin{align}
B_{\Lambda}\left(\,^A_\Lambda\text{Z}\,\right)=E\left(\,^{A-1}\text{Z}\,\right)-E\left(\,^A_\Lambda\text{Z}\,\right)\;,\label{eq:B_L}
\end{align}
where, using the notation of the previous chapters, $^A_\Lambda\text{Z}$ refers to the hypernucleus and $^{A-1}\text{Z}$ to the corresponding nucleus. $E$ is the binding energy of the system, i.e. the expectation value of the Hamiltonian on the ground state wave function
\begin{align}
E(\kappa)=\frac{\langle\psi_{0,\kappa}|H_\kappa|\psi_{0,\kappa}\rangle}{\langle\psi_{0,\kappa}|\psi_{0,\kappa}\rangle}\;,\quad\quad\kappa=\text{nuc},\text{hyp}\;,
\end{align}
which we can compute by means of the AFDMC method. In the case of double $\Lambda$~hypernuclei, the experimental observables we can access are the double $\Lambda$~separation energy
\begin{align}
&B_{\Lambda\Lambda}\left(\,^{~\,A}_{\Lambda\Lambda}\text{Z}\,\right)=E\left(\,^{A-2}\text{Z}\,\right)-E\left(\,^{~\,A}_{\Lambda\Lambda}\text{Z}\,\right)\;,\label{eq:B_LL}
\end{align}
and the incremental $\Lambda\Lambda$~energy
\begin{align}
&\Delta B_{\Lambda\Lambda}\left(\,^{~\,A}_{\Lambda\Lambda}\text{Z}\,\right)=B_{\Lambda\Lambda}\left(\,^{~\,A}_{\Lambda\Lambda}\text{Z}\,\right)
-2 B_\Lambda\left(\,^{A-1}_{\quad\;\Lambda}\text{Z}\,\right)\;.\label{eq:dB_LL}
\end{align}
The calculation of these quantities thus proceeds through the computation of the binding energies of both strange and non-strange systems. Moreover, it is interesting to compare other observables among the systems with strangeness $0$, $-1$ and $-2$, such as the single particle densities. By looking at the densities in the original nucleus and in the one modified by the addition of the lambda particles, information about the hyperon-nucleon interaction can be deduced.
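The post-processing of the AFDMC binding energies into the quantities of Eqs.~(\ref{eq:B_L}), (\ref{eq:B_LL}) and (\ref{eq:dB_LL}) is straightforward; a minimal Python sketch is (all energies in MeV, with the $A-2$ core nucleus taken as the common reference):
\begin{verbatim}
def separation_energies(E_core, E_single, E_double):
    """E_core:   E(A-2 core nucleus)
       E_single: E(single-Lambda hypernucleus, A-1 baryons)
       E_double: E(double-Lambda hypernucleus, A baryons)"""
    B_L = E_core - E_single              # Lambda separation energy
    B_LL = E_core - E_double             # double-Lambda separation energy
    dB_LL = B_LL - 2.0 * B_L             # incremental Lambda-Lambda energy
    return B_L, B_LL, dB_LL
\end{verbatim}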
As reported in Ref.~\cite{Gandolfi:2007}, the ground state energies of $^4$He, $^8$He, $^{16}$O and $^{40}$Ca have been computed using the AV6' potential (\S~\ref{subsec:AV18}). Due to the limitations of the potential used, the results cannot reproduce the experimental energies and all the nuclei turn out to be less bound than expected. However, given the same simplified interaction, the published AFDMC energies are close to the GFMC results, where available. AFDMC has also been used to compute the energy differences between oxygen~\cite{Gandolfi:2006} and calcium~\cite{Gandolfi:2008} isotopes, by studying the external neutrons with respect to a nuclear core obtained from Hartree-Fock calculations with Skyrme forces. In this case the results are close to the experimental ones.
The idea behind the AFDMC analysis of $\Lambda$~hypernuclei follows, in some sense, the one adopted in the study of oxygen and calcium isotopes through the analysis of energy differences. The two-body nucleon interaction employed is limited to the first six operators of AV18. However, if we use the same potential for the nucleus and for the core of the corresponding hypernucleus, and take the difference between the binding energies of the two systems, the uncertainties in the $NN$ interaction largely cancel out. We shall see that this assumption, already used in other works~\cite{Dalitz:1972,Bodmer:1988}, is indeed consistent with our results, thereby confirming that the specific choice of the nucleon Hamiltonian does not significantly affect the results for $B_\Lambda$. On the grounds of this observation, we can focus on the interaction between hyperons and nucleons, performing QMC simulations with microscopic interactions over a wide mass range.
\section{Nuclei}
\label{sec:nuc}
Let us start with the AFDMC study of finite nuclei. In the previous chapter we have seen that two versions of the AFDMC algorithm, which should give the same results, are available (\hyperlink{method:PsiT}{\emph{v1}} and \hyperlink{method:Elocal}{\emph{v2}}). Before including the strange degrees of freedom, we decided to test the stability and accuracy of the two algorithms, within the fixed phase approximation, by performing some test simulations on $^4$He. The result of $-27.13(10)$~MeV for the AV6' potential reported in Ref.~\cite{Gandolfi:2007} was obtained employing the algorithm~\hyperlink{method:Elocal}{\emph{v2}}, using single particle Skyrme orbitals and a particular choice of the parameters for the solution of the Jastrow correlation equation (see \S~\ref{subsubsubsec:Wave_non_strange}). In order to check the AFDMC projection process, we tried to modify the starting trial wave function:
\begin{itemize}
\item we changed the healing distance $d$ and the quencher parameter $\eta$ for the Jastrow function $f_c^{NN}$;
\item we used a different set of radial functions, labelled HF-B1~\cite{Co:2011_comm}, coming from Hartree-Fock calculations for the effective interaction B1 of Ref.~\cite{Brink:1967}. The B1 is a phenomenological two-body nucleon-nucleon potential fitted to reproduce the binding energies and densities of various light nuclei and of nuclear matter in the HF approximation.
\end{itemize}
Although a central correlation function should not affect the computed energy value, in version \hyperlink{method:Elocal}{\emph{v2}} of the algorithm an unpleasant dependence on $f_c^{NN}$ was found, in particular as a function of the quencher parameter $\eta$. This dependence is present for both the AV4' and the AV6' potentials, regardless of the choice of the single particle orbitals. The time step extrapolation ($d\tau\rightarrow 0$) of the energy does not solve the issue: energy differences of more than 1~MeV remain among different setups of the trial wave function. By varying the parameter $\eta$ from zero (no Jastrow at all) to one (full central channel of the $NN$ potential used in the solution of Eq.~(\ref{eq:Jastrow})), the energies increase almost linearly. For example, in the case of AV4' with the Skyrme orbital functions, the energy of $^4$He goes from $-31.3(2)$~MeV for $\eta=0$ to $-27.2(2)$~MeV for $\eta=1$. The same effect is found for the HF-B1 orbitals, with energies going from $-32.5(2)$~MeV to $-28.4(2)$~MeV. The inclusion of the pure central Jastrow thus introduces strong biases in the evaluation of the total energy. Moreover, there is also a dependence on the choice of the single particle orbitals, as shown by the results for AV4'. The same conclusions follow for the AV6' potential.
On the grounds of these observations, we moved from the AFDMC local energy scheme to the importance function ratio scheme (version \hyperlink{method:PsiT}{\emph{v1}}), with no importance sampling on the auxiliary fields. In this case the bias introduced by the Jastrow correlation function is still present, but reduced to $0.3\div0.4$~MeV for AV4' and $0.1\div0.2$~MeV for AV6'. In spite of the improvement with respect to the previous case, we decided to remove this source of uncertainty from the trial wave function and proceed with the test of the \hyperlink{method:PsiT}{\emph{v1}} algorithm with no Jastrow. It has to be mentioned that a new sampling procedure, for both coordinates and auxiliary fields, capable of reducing the dependence on central correlations is being studied.
As shown in Figs.~\ref{fig:E_He4_V4p} and \ref{fig:E_He4_V6p}, the \hyperlink{method:PsiT}{\emph{v1}} extrapolated energies obtained using different single particle orbitals are consistent within the Monte Carlo statistical errors, both for the AV4' and the AV6' potentials.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{E_He4_V4p.pdf}
\caption[Binding energies: $E$ vs. $d\tau$ for $^4$He, Argonne V4']
{Binding energy of $^4$He as a function of the Monte Carlo imaginary time step. Results are obtained using the AV4' $NN$ potential.
Red dots are the AFDMC results for the Skyrme radial orbitals. Blue triangles the ones for the HF-B1 orbitals.
For comparison, the GFMC result of Ref.~\cite{Wiringa:2002_url}, corrected by the Coulomb contribution (see text for details), is reported with the green band.}
\label{fig:E_He4_V4p}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{E_He4_V6p.pdf}
\caption[Binding energies: $E$ vs. $d\tau$ for $^4$He, Argonne V6']
{Binding energy of $^4$He as a function of the Monte Carlo imaginary time step. Results are obtained using the AV6' $NN$ potential.
As in Fig.~\ref{fig:E_He4_V4p}, red dots refer to the AFDMC results for the Skyrme radial functions and blue triangles to the HF-B1 orbitals.
The green arrow points to the GFMC result.}
\label{fig:E_He4_V6p}
\end{figure}
For $\mathcal N_N=4$ we can compare the AFDMC results with the GFMC ones. In our calculations the Coulomb interaction is not included. A precise VMC estimate of the Coulomb expectation value for $^4$He, which should be representative also of the GFMC one, is $0.785(2)$~MeV~\cite{Wiringa:2012_comm}. The AFDMC values of $-32.67(8)$~MeV (Skyrme) and $-32.7(1)$~MeV (HF-B1) for $^4$He with AV4' are thus very close to the GFMC value of $-32.11(2)$~MeV of Ref.~\cite{Wiringa:2002_url} for the same potential, once the Coulomb contribution is subtracted. Our results are still $\sim0.1\div0.2$~MeV above the GFMC one, most likely due to the removal of the sign problem constraint applied at the end of the GFMC runs (release node procedure~\cite{Ceperly:1980}).
Although the AFDMC and GFMC energies for $^4$He described by the AV4' potential are consistent, a clear problem appears when using the AV6' interaction (Fig.~\ref{fig:E_He4_V6p}). With the two sets of radial functions the energies are $-19.59(8)$~MeV (Skyrme) and $-19.53(13)$~MeV (HF-B1), so the AFDMC actually projects out the same ground state. However, the GFMC estimate is $-26.15(2)$~MeV minus the Coulomb contribution. This large difference in the energies cannot be attributed to the GFMC release node procedure. The difference between AV4' and AV6' is the inclusion of the tensor term $S_{ij}$ of Eq.~(\ref{eq:S_ij}). The Hamiltonian then moves from real to complex, and this might result in a phase problem during the imaginary time propagation. There might be some issue with the fixed phase approximation or with the too poor trial wave function (or both), which does not include operatorial correlations. This is still an open question, but many ideas are being tested. Owing to the lack of control on the AFDMC simulations for the AV6' potential, from now on we limit the study to AV4'. As we shall see, this choice does not affect the results for energy differences such as the hyperon separation energy, which is the main observable of this study for finite systems.
In order to complete the check of the accuracy of the algorithm \hyperlink{method:PsiT}{\emph{v1}} for $^4$He, we performed simulations using the Minnesota potential of Ref.~\cite{Thompson:1977}. This two-nucleon interaction has the same operator structure as AV4' but much softer cores. Our AFDMC result for the energy is $-30.69(7)$~MeV. It has to be compared with the $-29.937$~MeV ($-30.722$~MeV with the Coulomb subtraction) obtained with the Stochastic Variational Method (SVM)~\cite{Varga:1995}, which has been proven to give results consistent with the GFMC algorithm for $^4$He~\cite{Kamada:2001}. The agreement of the results is remarkable.
Moreover, we tested the consistency of the \hyperlink{method:PsiT}{\emph{v1}} algorithm for the AV4' potential by studying the deuteron, tritium and oxygen nuclei.
\begin{itemize}
\item The AFDMC binding energy for $^2$H is $-2.22(5)$~MeV, in agreement with the experimental $-2.225$~MeV. The result is significant because, although the Argonne V4' was exactly fitted in order to reproduce the deuteron energy, our starting trial wave function is just a Slater determinant of single particle orbitals, with no correlations.
\item The result for $^3$H is $-8.74(4)$~MeV, close to the GFMC $-8.99(1)$~MeV of Ref.~\cite{Wiringa:2002_url}. As for $^4$He, the small difference in the energies is probably due to the release node procedure in GFMC. Without the Coulomb contribution, we obtained the same energy $-8.75(4)$~MeV also for $^3$He. In AV4' there are no charge symmetry breaking terms. Therefore, this result can be seen as a consistency test on the correct treatment of the spin-isospin operators acting on the wave function during the Hubbard-Stratonovich rotations.
\item For $^{16}$O we found energies of $-176.8(5)$~MeV for the Skyrme orbitals and $-174.3(8)$~MeV for the HF-B1 radial functions. The energy difference is of order 1\% even for a medium mass nucleus, so the projection mechanism works accurately regardless of the starting trial function. GFMC results are limited to 12 nucleons~\cite{Pieper:2005,Lusk:2010,Lovato:2013}, so we cannot compare the two methods for $\mathcal N_N=16$. The binding energy cannot be compared with the experimental data because of the poor Hamiltonian employed. However, the AFDMC results are consistent with the overbinding predicted by the available GFMC energies for AV4'~\cite{Wiringa:2002_url}, and the nucleus turns out to be stable against break-up into alpha particles, as expected.
\end{itemize}
On the grounds of the results of these consistency checks, in the present work we adopt the version \hyperlink{method:PsiT}{\emph{v1}} of the AFDMC algorithm employing the nuclear potential AV4' for both nuclei and hypernuclei.
\renewcommand{\arraystretch}{1.4}
\begin{table}[ht]
\centering
\begin{tabular*}{\linewidth}{@{\hspace{1.0em}\extracolsep{\fill}}lccccc@{\extracolsep{\fill}\hspace{1.0em}}}
\toprule
\toprule
System & $E_{\text{AFDMC}}$ & $E_{\text{GFMC}}$ & $E_{\text{exp}}$ & $E_{\text{AFDMC}}/\mathcal N_N$ & $E_{\text{exp}}/\mathcal N_N$\\
\midrule
\hspace{0.6em}$^2$H & -2.22(5) & --- & -2.225 & -1.11 & -1.11 \\
\hspace{0.6em}$^3$H & -8.74(4) & -8.99(1) & -8.482 & -2.91 & -2.83 \\
\hspace{0.6em}$^3$He & -8.75(4) & --- & -7.718 & -2.92 & -2.57 \\
\hspace{0.6em}$^4$He & -32.67(8) & -32.90(3) & -28.296 & -8.17 & -7.07 \\
\hspace{0.6em}$^5$He & -27.96(13) & -31.26(4) & -27.406 & -5.59 & -5.48 \\
\hspace{0.6em}$^6$He & -29.87(14) & -33.00(5) & -29.271 & -4.98 & -4.88 \\
\hspace{0.3em}$^{12}$C & -77.31(25)* & --- & -92.162 & -6.44 & -7.68 \\
\hspace{0.3em}$^{15}$O & -144.9(4) & --- & -111.955 & -9.66 & -7.46 \\
\hspace{0.3em}$^{16}$O & -176.8(5) & --- & -127.619 & -11.05 & -7.98 \\
\hspace{0.3em}$^{17}$O & -177.0(6) & --- & -131.762 & -10.41 & -7.75 \\
\hspace{0.3em}$^{40}$Ca & -597(3) & --- & -342.052 & -14.93 & -8.55 \\
\hspace{0.3em}$^{48}$Ca & -645(3) & --- & -416.001 & -13.44 & -8.67 \\
\hspace{0.3em}$^{90}$Zr & -1457(6) & --- & -783.899 & -16.19 & -8.71 \\
\bottomrule
\bottomrule
\end{tabular*}
\caption[Binding energies: nuclei, $2\le A-1\le 90$]
{Binding energies (in MeV) for different nuclei. AFDMC and GFMC results are obtained using the AV4' $NN$ potential.
The GFMC data are from Ref.~\cite{Wiringa:2002_url} corrected by the Coulomb contribution (see text for details).
In the fourth column the experimental results are from Ref.~\cite{Zagrebaev:1999}. Errors are less than 0.1~keV.
In the last two columns the calculated and experimental binding energies per particle. For the note * on $^{12}$C see the text.}
\label{tab:E_nuc}
\end{table}
\renewcommand{\arraystretch}{1.0}
As reported in Tab.~\ref{tab:E_nuc}, the resulting absolute binding energies using AV4' are not comparable with the experimental ones, as expected, due to the lack of information about the nucleon interaction in the Hamiltonian. With the increase of the number of particles, the simulated nuclei become more and more bound, up to the limiting case of $^{90}$Zr, for which the estimated binding energy is almost twice the experimental one. Looking at the results for the helium isotopes, we can see that for $\mathcal N_N=3$ and $4$ the energies are compatible with the GFMC calculations once the Coulomb contribution is removed. For $^5$He and $^6$He, instead, we obtained discrepancies between the results of the two methods. However, this is expected: when moving to open shell systems, such as $^5$He and $^6$He with one or two neutrons outside the first $s$ shell, the structure of the wave function becomes more complicated and the results depend more strongly on the employed $\psi_T$. For example, in the case of $^6$He, in order to have total angular momentum zero, the two external neutrons can occupy the $1p_{3/2}$ or the $1p_{1/2}$ orbitals of the nuclear shell model classification. By using just one of the two $p$~shells, one gets the unphysical result $E(^5\text{He})<E(^6\text{He})$. The reported binding energy has instead been obtained by considering the linear combination of the Slater determinants giving $J=0$
\begin{align}
\Phi(R_N,S_N)=(1-c)\det\Bigl\{\varphi_\epsilon^N(\bm r_i,s_i)\Bigr\}_{1p_{3/2}}\!\!+c\,\det\Bigl\{\varphi_\epsilon^N(\bm r_i,s_i)\Bigr\}_{1p_{1/2}}\;,\label{eq:He6_mix}
\end{align}
and minimizing the energy with respect to the mixing parameter $c$, as shown in Fig.~\ref{fig:E_He6}. However, the final result is still far from the GFMC value. This is a clear indication that better wave functions are needed for open shell systems. A confirmation of this is the unphysical result obtained for the $^{12}$C nucleus (marked in Tab.~\ref{tab:E_nuc} with *), which is even less bound than expected in spite of the employed AV4' potential, and thus turns out to be unstable against break-up into alpha particles. Indeed, in the case of $\mathcal N_N=12$, the 8 neutrons and protons added to the alpha core have simply been placed in the $1p_{3/2}$ shell, without any linear combination of the other possible configurations giving zero total angular momentum. This result will nevertheless be useful for the hyperon separation energy estimate: we shall see in the next section that, regardless of the total binding energies, by using the same nucleon potential to describe nuclei and the cores of hypernuclei, the obtained hyperon separation energy is in any case realistic.
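The minimization over the mixing parameter is performed by scanning $c$ and interpolating the computed energies; a schematic Python sketch is shown below (the AFDMC energies \texttt{E\_vals} at the scanned values \texttt{c\_vals} are assumed as input).
\begin{verbatim}
import numpy as np

def optimal_mixing(c_vals, E_vals):
    """Quadratic fit E(c) = a*c^2 + b*c + e0 to the scanned energies;
    returns the mixing parameter at the minimum and the fitted energy."""
    a, b, e0 = np.polyfit(c_vals, E_vals, 2)
    c_min = -b / (2.0 * a)
    return c_min, np.polyval([a, b, e0], c_min)
\end{verbatim}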
A final comment concerns a technical detail in the computation of AFDMC observables. As shown in Figs.~\ref{fig:E_He4_V4p} and \ref{fig:E_He4_V6p}, the extrapolation of the energy values in the limit $d\tau\rightarrow0$ is linear. This is consistent with the application of the Trotter-Suzuki formula of Eq.~(\ref{eq:Trotter_2}) in the Hubbard-Stratonovich transformation~(\ref{eq:HS_applied}), which is thus correct at order $\sqrt{d\tau}^{\,2}$. Focusing on the AV4' case, for $^4$He the time step extrapolation is almost flat: the differences between the final results and the energies computed at large $d\tau$ are less than $0.5\%$ and almost within the statistical errors of the Monte Carlo run. The situation changes dramatically with the increase of the particle number. For $\mathcal N_N=16$ this difference is around $2\%$. For 40 and 48 particles, the large time step values and the extrapolated ones differ by $6\%$ and $8.5\%$, respectively. Therefore, the binding energies must always be carefully studied by varying the time step of the AFDMC run. The same behavior has been found for observables other than the total energy (single particle densities and radii). Each result reported in this chapter has thus been obtained by means of a computationally expensive procedure of imaginary time extrapolation.
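In practice the $d\tau\rightarrow0$ value is obtained from a weighted linear fit of the energies computed at several time steps; a minimal Python sketch of this extrapolation is:
\begin{verbatim}
import numpy as np

def extrapolate_dtau(dtau, E, sigma):
    """Weighted linear fit E(dtau) = E0 + a*dtau; returns E0 and its error.
    dtau, E, sigma: arrays of time steps, energies and statistical errors."""
    dtau, E = np.asarray(dtau, float), np.asarray(E, float)
    w = 1.0 / np.asarray(sigma, float)**2
    A = np.column_stack([np.ones_like(dtau), dtau])
    cov = np.linalg.inv(A.T @ (w[:, None] * A))
    E0, slope = cov @ (A.T @ (w * E))
    return E0, np.sqrt(cov[0, 0])
\end{verbatim}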
\begin{figure}[!hb]
\centering
\includegraphics[width=\linewidth]{E_He6.pdf}
\caption[Binding energies: $E$ vs. mixing parameter for $^6$He]
{$^6$He binding energy as a function of the mixing parameter $c$ of Eq.~(\ref{eq:He6_mix}).
The arrows point to the results for the pure $1p_{3/2}$ ($-27.65(8)$~MeV)
and $1p_{1/2}$ ($-25.98(8)$~MeV)
configurations used for the two external neutrons.
The green line is the GFMC result of Ref.~\cite{Wiringa:2002_url} corrected by the VMC Coulomb expectation contribution
$0.776(2)$~MeV~\cite{Wiringa:2012_comm}.}
\label{fig:E_He6}
\end{figure}
\section{Single $\Lambda$~hypernuclei}
\label{sec:l_hyp}
When a single $\Lambda$~particle is added to a core nucleus, the wave function of Eq.~(\ref{eq:Psi_T}) is given by
\begin{align}
\psi_T(R,S)=\prod_{i}f_c^{\Lambda N}(r_{\Lambda i})\,\psi_T^N(R_N,S_N)\,\varphi_\epsilon^\Lambda(\bm r_\Lambda,s_\Lambda)\;.\label{eq:psi_singleL}
\end{align}
The structure of the nucleon trial wave function is the same as in Eq.~(\ref{eq:psi_N}), used in the AFDMC calculations for nuclei. The hyperon Slater determinant is simply replaced by the single particle state $\varphi_\epsilon^\Lambda$ of Eq.~(\ref{eq:varphi_L}), assumed to be the neutron $1s_{1/2}$ radial function, as described in the previous chapter. In order to be consistent with the calculations for nuclei, we neglected the Jastrow $\Lambda N$ correlation function, which was found to produce a similar but smaller bias on the total energy. As radial functions we used the same Skyrme set employed in the calculations for the nuclei of Tab.~\ref{tab:E_nuc}.
The $\Lambda$ separation energies defined in Eq.~(\ref{eq:B_L}) are calculated by taking the difference between the nuclear binding energies presented in the previous section and the AFDMC energies for hypernuclei, obtained with the same nucleon potential. By looking at energy differences, we studied the contribution of the $\Lambda N$ and $\Lambda NN$ terms defined in Chapter~\ref{chap:hamiltonians}. By comparing the AFDMC results with the expected hyperon separation energies, information about the hyperon-nucleon interaction is deduced. Some qualitative properties have also been obtained by studying the nucleon and hyperon single particle densities and the root mean square radii.
\subsection{Hyperon separation energies}
\label{subsec:E_l}
We begin the study of $\Lambda$~hypernuclei with the analysis of closed shell hypernuclei, in particular $^5_\Lambda$He and $^{17}_{~\Lambda}$O. We have seen in the previous section that the AFDMC algorithm is most accurate in describing closed shell nuclei: the results for $^4$He and $^{16}$O with the AV4' potential are indeed consistent and under control. This gives us the possibility to realistically describe the hyperon separation energy for such systems and to deduce some general properties of the employed hyperon-nucleon force.
The first step of this study was the inclusion in the Hamiltonian of the $NN$ AV4' interaction and of the two-body $\Lambda N$ charge symmetric potential of Eq.~(\ref{eq:V_LN}). The employed parameters $\bar v$ and $v_\sigma$ are reported in Tab.~\ref{tab:parLN+LNN}. The exchange parameter $\varepsilon$ has initially been set to zero due to the impossibility of including the space exchange operator directly in the AFDMC propagator (see \S~\ref{subsubsubsec:Wave_strange}). As reported in Tab.~\ref{tab:BL_He5L-O17L_I}, the AV4' $\Lambda$~separation energy for $^5_\Lambda$He is more than twice the expected value. For the heavier $^{17}_{~\Lambda}$O the discrepancy is even larger. This is actually an expected result. As first pointed out by Dalitz~\cite{Dalitz:1972}, $\Lambda N$ potentials parameterized to account for the low-energy $\Lambda N$ scattering data and the binding energy of the $A=3,4$ hypernuclei overbind $^5_\Lambda$He by $2\div3$~MeV; that is, the calculated $A=5$ $\Lambda$~separation energy is about a factor of 2 too large. This fact is usually referred to as the \emph{$A=5$~anomaly}. With only a $\Lambda N$ potential fitted to $\Lambda p$ scattering, the heavier hypernuclei then turn out to be strongly overbound.
\renewcommand{\arraystretch}{1.4}
\begin{table}[t]
\centering
\begin{tabular*}{\linewidth}{@{\hspace{1.0em}\extracolsep{\fill}}lcccc@{\extracolsep{\fill}\hspace{1.0em}}}
\toprule
\toprule
\multirow{2}{*}{$NN$ potential} & \multicolumn{2}{c}{$^5_\Lambda$He} & \multicolumn{2}{c}{$^{17}_{~\Lambda}$O} \\
\cmidrule(l){2-3}\cmidrule(l){4-5}
& \hspace{1em}$V_{\Lambda N}$ & $V_{\Lambda N}$+$V_{\Lambda NN}$ & \hspace{1em}$V_{\Lambda N}$ & $V_{\Lambda N}$+$V_{\Lambda NN}$ \\
\midrule
Argonne V4'\ & \hspace{1em} 7.1(1) & 5.1(1) & \hspace{1em} 43(1) & 19(1) \\
Argonne V6'\ & \hspace{1em} 6.3(1) & 5.2(1) & \hspace{1em} 34(1) & 21(1) \\
Minnesota\ & \hspace{1em} 7.4(1) & 5.2(1) & \hspace{1em} 50(1) & 17(2) \\
\midrule
Expt. & \multicolumn{2}{c}{3.12(2)} & \multicolumn{2}{c}{13.0(4)} \\
\bottomrule
\bottomrule
\end{tabular*}
\caption[$\Lambda$~separation energies: $\Lambda N+\Lambda NN$ set (I) for $^5_\Lambda$He and $^{17}_{~\Lambda}$O]
{$\Lambda$~separation energies (in MeV) for $^5_\Lambda$He and $^{17}_{~\Lambda}$O obtained using different nucleon potentials (AV4', AV6', Minnesota)
and different hyperon-nucleon interaction (two-body alone and two-body plus three-body,
set of parameters~(\hyperlink{par_I}{I}))~\cite{Lonardoni:2013_PRC(R)}.
In the last line the experimental $B_\Lambda$ for $^5_\Lambda$He is from Ref.~\cite{Juric:1973}.
Since no experimental data for $^{17}_{~\Lambda}$O exists, the reference separation energy is the semiempirical value reported in Ref.~\cite{Usmani:1995}.}
\label{tab:BL_He5L-O17L_I}
\end{table}
\renewcommand{\arraystretch}{1.0}
As suggested by Dalitz himself~\cite{Dalitz:1972} and later by Bodmer and Usmani~\cite{Bodmer:1988}, the inclusion of a $\Lambda$-nucleon-nucleon potential may solve the overbinding problem. This is indeed the case, as reported for instance in Refs.~\cite{Usmani:1995,Sinha:2002,Usmani:2008}. Therefore, in our AFDMC calculations we included the three-body $\Lambda NN$ interaction developed by Bodmer, Usmani and Carlson and described in \S~\ref{subsubsec:LNN}. Among the parametrizations available from different VMC studies of light hypernuclei, the set of parameters for the $\Lambda NN$ potential was originally taken from Ref.~\cite{Usmani:1995_3B}, being the choice that makes the variational $B_\Lambda$ for $_\Lambda^5$He and $^{17}_{~\Lambda}$O compatible with the expected results. It reads:
\begin{equation*}
\hypertarget{par_I}{(\text{I})}\phantom{I}\quad
\left\{
\begin{array}{rcll}
C_P&\!=\!&0.60& \!\text{MeV}\\
C_S&\!=\!&0.00& \!\text{MeV}\\
W_D&\!=\!&0.015&\!\text{MeV}
\end{array}
\right.
\end{equation*}
The inclusion of the $\Lambda NN$ force reduces the overbinding and thus the hyperon separation energies, as reported in Tab.~\ref{tab:BL_He5L-O17L_I}. Although the results are still not compatible with the experimental ones, the gain in energy due to the inclusion of the three-body hypernuclear force is considerable.
It has to be pointed out that this result might in principle depend on the particular choice of the $NN$ interaction used to describe both nucleus and hypernucleus. One of the main mechanisms that could generate such a dependence is the different environment experienced by the hyperon in the hypernucleus, because of the different nucleon densities and correlations generated by each $NN$ potential. To discuss this possible dependence, we performed calculations with different $NN$ interactions having very different saturation properties. As can be seen from Tab.~\ref{tab:BL_He5L-O17L_I}, for $^5_\Lambda$He the extrapolated $B_\Lambda$ values obtained with the two-body $\Lambda N$ interaction alone differ by about 10\%, well outside the statistical errors. In contrast, the inclusion of the three-body $\Lambda NN$ force gives a similar $\Lambda$~binding energy independently of the choice of the $NN$ force. On the grounds of this observation, we feel confident that the use of AV4', for which AFDMC calculations for nuclei are under control, will in any case return realistic estimates of $B_\Lambda$ for larger masses when the $\Lambda NN$ interaction is included. We checked this assumption by performing simulations for $^{17}_{~\Lambda}$O, where the discrepancy among the $\Lambda$~separation energies computed using the different $NN$ interactions and the full $\Lambda N$+$\Lambda NN$ force is less than a few per cent (last column of Tab.~\ref{tab:BL_He5L-O17L_I}). The various $NN$ forces considered here are quite different. AV6' includes a tensor force, while AV4' and Minnesota have a similar, simpler operator structure but very different intermediate- and short-range correlations. The fact that the effect of the inclusion of the $\Lambda NN$ force does not depend too much on the nuclear Hamiltonian is quite remarkable, because the different $NN$ forces produce quite different saturation points for the nuclear matter EoS, suggesting that our results are rather robust.
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{BL-A_cshell.pdf}
\caption[$\Lambda$ separation energy vs. $A$: closed shell hypernuclei]
{$\Lambda$ separation energy as a function of $A$ for closed shell hypernuclei, adapted from Ref.~\cite{Lonardoni:2013_PRC(R)}.
Solid green dots~(dashed curve) are the available $B_\Lambda$ experimental or semiempirical values. Empty red dots~(upper banded curve) refer to
the AFDMC results for the two-body $\Lambda N$ interaction alone. Empty blue diamonds~(middle banded curve) are the results with the inclusion also of the
three-body hyperon-nucleon force in the parametrization (\hyperlink{par_I}{I}).}
\label{fig:BL-A_cshell}
\end{figure}
For $^5_\Lambda$He, the hyperon separation energy is reduced by a factor $\sim1.4$ when the $\Lambda NN$ force with the set of parameters~(\hyperlink{par_I}{I}) is included. For $^{17}_{~\Lambda}$O the variation is around $40\div50\%$. In order to check the effect of the three-body force with increasing particle number, we performed simulations for the next heavier closed or semi-closed shell $\Lambda$~hypernuclei, $^{41}_{~\Lambda}$Ca and $^{91}_{~\Lambda}$Zr. The $\Lambda$~separation energies for all the studied closed shell hypernuclei are shown in Fig.~\ref{fig:BL-A_cshell}. While the results for lighter hypernuclei might be inconclusive in terms of the physical consistency of the $\Lambda NN$ contribution to the hyperon binding energy in AFDMC calculations, the computations for $^{41}_{~\Lambda}$Ca and $^{91}_{~\Lambda}$Zr reveal a completely different picture. The saturation binding energy provided by the $\Lambda N$ force alone is completely unrealistic, while the inclusion of the $\Lambda NN$ force gives results that are much closer to the experimental behavior. Therefore, the $\Lambda$-nucleon-nucleon force gives a very important repulsive contribution towards a realistic description of the saturation of medium-heavy hypernuclei~\cite{Lonardoni:2013_PRC(R)}. However, with the given parametrization, only a qualitative agreement with the expected separation energies is obtained. A refitting procedure for the three-body hyperon-nucleon interaction might thus improve the quality of the results.
As already discussed in \S~\ref{subsubsec:LNN}, the $C_S$ parameter can be estimated by comparing the $S$-wave term of the $\Lambda NN$ force with the Tucson-Melbourne component of the $NNN$ interaction. We take the suggested $C_S=1.50$~MeV value~\cite{Usmani:2008}, in order to reduce the number of fitting parameters. This choice is justified because the $S$-wave component of the three-body $\Lambda NN$ interaction is sub-leading. We indeed verified that a change in the $C_S$ value yields a variation of the total energy within statistical error bars, and definitely much smaller than the variation in energy due to a change of the $W_D$ parameter.
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{Wd-Cp_3D.pdf}
\caption[$\Lambda$~separation energy for $^5_\Lambda$He vs. $W_D-C_P$: 3D plot]
{$\Lambda$~separation energy for $^5_\Lambda$He as a function of strengths $W_D$ and $C_P$ of the three-body $\Lambda NN$
interaction~\cite{Lonardoni:2013_PRC}. The red grid represents the experimental $B_\Lambda=3.12(2)$~MeV~\cite{Juric:1973}.
The dashed yellow curve is the intersection between the expected result and the $B_\Lambda$ surface in the $W_D-C_P$ parameter space.
Statistical error bars on AFDMC results (solid black dots) are of the order of $0.10\div0.15$~MeV.}
\label{fig:Wd-Cp_3D}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{Wd-Cp_2D.pdf}
\caption[$\Lambda$~separation energy for $^5_\Lambda$He vs. $W_D-C_P$: 2D plot]
{Projection of Fig.~\ref{fig:Wd-Cp_3D} on the $W_D-C_P$ plane~\cite{Lonardoni:2013_PRC}. Error bars come from a realistic conservative estimate of
the uncertainty in the determination of the parameters due to the statistical errors of the Monte Carlo calculations.
Blue and green dashed, long-dashed and dot-dashed lines (lower curves) are the variational results of Ref.~\cite{Usmani:2008} for different
$\varepsilon$ and $\bar v$ (two-body $\Lambda N$ potential). The dashed box corresponds to the parameter domain of Fig.~\ref{fig:Wd-Cp_3D}.
Black empty dots and the red band (upper curve) are the projected intersection describing the possible sets of parameters reproducing the
experimental~$B_\Lambda$.}
\label{fig:Wd-Cp_2D}
\end{figure}
In Fig.~\ref{fig:Wd-Cp_3D} we report the systematic study of the $\Lambda$~separation energy of $_\Lambda^5$He as a function of both $W_D$ and $C_P$. Solid black dots are the AFDMC results. The red grid represents the experimental $B_\Lambda=3.12(2)$~MeV~\cite{Juric:1973}. The dashed yellow curve follows the set of parameters reproducing the expected $\Lambda$~separation energy. The same curve is also reported in Fig.~\ref{fig:Wd-Cp_2D} (red banded curve with black empty dots and error bars), which is a projection of Fig.~\ref{fig:Wd-Cp_3D} on the $W_D-C_P$ plane. The dashed box represents the $W_D$ and $C_P$ domain of the previous picture. For comparison, the variational results of Ref.~\cite{Usmani:2008} are also reported. Green curves are the results for $\bar v=6.15$~MeV and $v_\sigma=0.24$~MeV, blue ones for $\bar v=6.10$~MeV and $v_\sigma=0.24$~MeV. Dashed, long-dashed and dot-dashed lines correspond respectively to $\varepsilon=0.1$, $0.2$ and $0.3$. In our calculations we have not considered different combinations of the parameters of the two-body $\Lambda N$ interaction, focusing instead on the three-body part. We have thus kept $\bar v$ and $v_\sigma$ fixed to the values of the green curves of Fig.~\ref{fig:Wd-Cp_2D}, which are the same reported in Tab.~\ref{tab:parLN+LNN}. Moreover, we have set $\varepsilon=0$ for all the hypernuclei studied due to the impossibility of exactly including the $\mathcal P_x$ exchange operator in the propagator. A perturbative analysis of the effect of the $v_0(r)\varepsilon(\mathcal P_x-1)$ term on the hyperon separation energy is reported in~\S~\ref{subsubsec:Px}.
As can be seen from Fig.~\ref{fig:Wd-Cp_3D}, $B_\Lambda$ increases significantly with $C_P$, while it decreases with $W_D$. This result is consistent with the attractive nature of $V_{\Lambda NN}^{2\pi,P}$ and the repulsive effect induced by $V_{\Lambda NN}^D$. It is also in agreement with all the variational estimates on $^5_\Lambda$He (see for instance Refs.~\cite{Sinha:2002,Usmani:2008}). Starting from the analysis of the results in the $W_D-C_P$ space for $_\Lambda^5$He, we performed simulations for the next closed shell hypernucleus $^{17}_{~\Lambda}$O. Using the parameters in the red band of Fig.~\ref{fig:Wd-Cp_2D} we identified a parametrization able to reproduce the experimental $B_\Lambda$ for both $_\Lambda^5$He and $^{17}_{~\Lambda}$O at the same time within the AFDMC framework:
\begin{equation*}
\hypertarget{par_II}{(\text{II})}\quad
\left\{
\begin{array}{rcll}
C_P&\!=\!&1.00& \!\text{MeV}\\
C_S&\!=\!&1.50& \!\text{MeV}\\
W_D&\!=\!&0.035&\!\text{MeV}
\end{array}
\right.
\end{equation*}
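In practice, extracting such a parametrization from Fig.~\ref{fig:Wd-Cp_2D} amounts to scanning a grid of $(W_D,C_P)$ values and interpolating, for each $W_D$, the $C_P$ at which the computed $B_\Lambda$ crosses the experimental value. The following is only a minimal sketch of this procedure: the grid is generated from a toy surface chosen purely for illustration, with arbitrary coefficients, and is not AFDMC data.
\begin{verbatim}
# Minimal sketch of the W_D-C_P scan: for each W_D, interpolate the C_P
# value at which B_Lambda crosses the experimental 3.12 MeV.
# The grid below comes from a toy surface, NOT from AFDMC runs.
import numpy as np

B_EXP = 3.12                                     # MeV
W_D = np.array([0.02, 0.03, 0.04, 0.05, 0.06])   # MeV
C_P = np.linspace(0.0, 3.0, 13)                  # MeV

# Toy surface: B_Lambda grows with C_P (attractive 2pi P-wave term) and
# decreases with W_D (repulsive dispersive term).
B = 3.12 + 1.5 * (C_P[None, :] - 1.0) - 100.0 * (W_D[:, None] - 0.035)

curve = [(w, float(np.interp(B_EXP, B[i], C_P))) for i, w in enumerate(W_D)]
for w, cp in curve:
    print(f"W_D = {w:.3f} MeV  ->  C_P ~ {cp:.2f} MeV")
\end{verbatim}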
Given the set (\hyperlink{par_II}{II}), the $\Lambda$~separation energy of the closed shell hypernuclei reported in Fig.~\ref{fig:BL-A_cshell} has been re-calculated. We have seen that $B_\Lambda$ is sensitive neither to the details of the $NN$ interaction, nor to the total binding energies of nuclei and hypernuclei, as verified by the good results in Tab.~\ref{tab:BL_He5L-O17L_I} even for the problematic case of AV6' (see \S~\ref{sec:nuc}). On the grounds of this observation, we also simulated open shell hypernuclei, using the two-body $\Lambda N$ potential together with the $\Lambda NN$ force in both the set~(\hyperlink{par_I}{I}) and set~(\hyperlink{par_II}{II}) parametrizations. The binding energies for these systems might not be accurate, as in the case of the corresponding nuclei. The hyperon separation energy is nevertheless expected to be realistic. All the results obtained so far in the mass range $3 \leq A \leq 91$ are summarized in Fig.~\ref{fig:BL-A} and Fig.~\ref{fig:BL-A23}.
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{BL-A.pdf}
\caption[$\Lambda$ separation energy vs. $A$]
{$\Lambda$ separation energy as a function of $A$.
Solid green dots~(dashed curve) are the available $B_\Lambda$ experimental or semiempirical values. Empty red dots~(upper banded curve) refer to
the AFDMC results for the two-body $\Lambda N$ interaction alone. Empty blue diamonds~(middle banded curve) and empty black triangles~(lower banded curve)
are the results with the inclusion also of the three-body hyperon-nucleon force, respectively for the parametrizations (\hyperlink{par_I}{I}) and
(\hyperlink{par_II}{II}).}
\label{fig:BL-A}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{BL-A23.pdf}
\caption[$\Lambda$~separation energy vs. $A^{-2/3}$]
{$\Lambda$~separation energy as a function of $A^{-2/3}$, adapted from Ref.~\cite{Lonardoni:2013_PRC}.
The key is the same as in Fig.~\ref{fig:BL-A}.}
\label{fig:BL-A23}
\end{figure}
We report $B_\Lambda$ as a function of $A$ and of $A^{-2/3}$, the latter being an approximation of the $A$ dependence of the kinetic term of the Hamiltonian. Solid green dots are the available experimental data, empty symbols the AFDMC results. The red curve is obtained using only the two-body hyperon-nucleon interaction in addition to the nuclear AV4' potential. The blue curve refers to the results for the same systems when the three-body $\Lambda NN$ interaction with the old set of parameters~(\hyperlink{par_I}{I}) is also included. The black lower curve shows the results obtained by including the three-body hyperon-nucleon interaction described by the new parametrization~(\hyperlink{par_II}{II}). A detailed comparison between numerical and experimental results for the hyperon separation energy is given in Tab.~\ref{tab:BL}.
\renewcommand{\arraystretch}{1.4}
\begin{table}[!htb]
\centering
\begin{tabular*}{\linewidth}{@{\hspace{3.0em}\extracolsep{\fill}}lccc@{\extracolsep{\fill}\hspace{3.0em}}}
\toprule
\toprule
System & $E$ & $B_\Lambda$ & Expt. $B_\Lambda$ \\
\midrule
\hspace{0.6em}$^3_\Lambda$H & -1.00(14) & -1.22(15) & 0.13(5) \hspace{1.2em}\cite{Juric:1973} \\
\hspace{0.6em}$^4_\Lambda$H & -9.69(8) & 0.95(9) & 2.04(4) \hspace{1.2em}\cite{Juric:1973} \\
\hspace{0.6em}$^4_\Lambda$He & -9.97(8) & 1.22(9) & 2.39(3) \hspace{1.2em}\cite{Juric:1973} \\
\hspace{0.6em}$^5_\Lambda$He & -35.89(12) & 3.22(14) & 3.12(2) \hspace{1.2em}\cite{Juric:1973} \\
\hspace{0.6em}$^6_\Lambda$He & -32.72(15) & 4.76(20) & 4.25(10) \hspace{0.7em}\cite{Juric:1973} \\
\hspace{0.6em}$^7_\Lambda$He & -35.82(15) & 5.95(25) & 5.68(28) \hspace{0.7em}\cite{Nakamura:2013} \\
\hspace{0.3em}$^{13}_{~\Lambda}$C & -88.5(26)* & 11.2(4) & 11.69(12) \hspace{0.2em}\cite{Cantwell:1974} \\
\hspace{0.3em}$^{16}_{~\Lambda}$O & -157.5(6) & 12.6(7) & 12.50(35) \hspace{0.2em}\cite{Pile:1991} \\
\hspace{0.3em}$^{17}_{~\Lambda}$O & -189.2(4) & 12.4(6) & 13.0(4) \hspace{0.8em}\cite{Usmani:1995} \\
\hspace{0.3em}$^{18}_{~\Lambda}$O & -189.7(6) & 12.7(9) & \hspace{-3.0em}--- \\
\hspace{0.3em}$^{41}_{~\Lambda}$Ca & -616(3) & 19(4) & 19.24(0) \hspace{0.3em}\cite{Bodmer:1988} \\
\hspace{0.3em}$^{49}_{~\Lambda}$Ca & -665(4) & 20(5) & \hspace{-3.0em}--- \\
\hspace{0.3em}$^{91}_{~\Lambda}$Zr & -1478(7) & 21(9) & 23.33(0) \hspace{0.3em}\cite{Bodmer:1988} \\
\bottomrule
\bottomrule
\end{tabular*}
\caption[$\Lambda$~separation energies: $\Lambda$~hypernuclei, $3\le A\le 91$]
{Binding energies and $\Lambda$~separation energies (in MeV) obtained using the two-body plus three-body hyperon-nucleon interaction with the
set of parameters~(\hyperlink{par_II}{II})~\cite{Lonardoni:2013_PRC}. The results already include the CSB contribution.
The effect is evident only for light systems, as discussed in the next section.
In the last column the expected $B_\Lambda$ values are reported.
Since no experimental data for $A=17,41,91$ are available, the reference separation energies are semiempirical values.}
\label{tab:BL}
\end{table}
\renewcommand{\arraystretch}{1.0}
From Fig.~\ref{fig:BL-A} and Fig.~\ref{fig:BL-A23} we can see that the new parametrization for the three-body hyperon-nucleon interaction correctly reproduces the experimental saturation property of the $\Lambda$~separation energy. All the separation energies for $A\ge 5$ are compatible with, or very close to, the expected results, where available, as reported in Tab.~\ref{tab:BL}. Since for $^{18}_{~\Lambda}$O and $^{49}_{~\Lambda}$Ca no experimental data have been found, the values of $12.7(9)$~MeV and $20(5)$~MeV are AFDMC predictions, which follow the general trend of the experimental curve. Although for $A\ge 41$ the Monte Carlo statistical error bars become rather large, the extrapolation of the $\Lambda$~binding energy for $A\rightarrow\infty$ points to the correct region for the expected value $D_\Lambda\sim30$~MeV of $s_\Lambda$ states in nuclear matter.
The analysis of the total hypernuclear binding energies for $A\ge 5$ shows the same problems already discussed in the case of nuclei (\S~\ref{sec:nuc}). For instance, the binding energy of $^{13}_{~\Lambda}$C is unphysical, as is the energy of the core nucleus $^{12}$C. However, the energy difference is consistent with the expected result. Moreover, for the core wave function of $^7_\Lambda$He we have used the same mixing parameter adopted in the description of $^6$He (see Eq.~(\ref{eq:He6_mix}) and Fig.~\ref{fig:E_He6}), in order to have at least the correct ordering in the hypernuclear energy spectrum. However, the same hyperon separation energy is found by simply using the $1p_{3/2}$ shell for the outer neutrons in both the strange and non-strange nucleus. Our working hypothesis regarding the computation of the hyperon separation energy is thus correct, at least for medium-heavy hypernuclei.
For $A<5$ our results are more than 1~MeV away from the experimental data. For $^3_\Lambda$H, the $\Lambda$~separation energy is even negative, meaning that the hypernucleus is less bound than the corresponding nucleus $^2$H. We can ascribe this discrepancy to the lack of accuracy of our wave function for few-body systems. Since the $\Lambda$~hyperon does not suffer from Pauli blocking by the other nucleons, it can penetrate into the nuclear interior and form deeply bound hypernuclear states. For heavy systems the $\Lambda$~particle can be seen as an impurity that does not drastically alter the nuclear wave function. Therefore, the trial wave function of Eq.~(\ref{eq:psi_singleL}), with the single particle state $\varphi_\epsilon^\Lambda$ described by the $1s_{1/2}$ neutron orbital, is accurate enough as a starting point for the imaginary time propagation. For very light hypernuclei, for which the first nucleonic $s$ shell is not closed, this might not be the case. In order to have a correct projection onto the ground state, the single particle orbitals of both nucleons and lambda might need to be changed when the hyperon is added to the nucleus. Moreover, in very light hypernuclei the neglected nucleon-nucleon and hyperon-nucleon correlations might give non-negligible contributions to the $\Lambda$~binding energy. A study of these systems within a few-body method or with a different projection algorithm, like the GFMC, might solve this issue.
\subsubsection{Effect of the charge symmetry breaking term}
\label{subsubsec:CSB}
The effect of the CSB potential has been studied for the $A=4$ mirror hypernuclei. As reported in Tab.~\ref{tab:CSB_A4}, without the CSB term there is no difference in the $\Lambda$ binding energy of $^4_\Lambda$H and $^4_\Lambda$He. When the CSB term is active, a splitting appears due to the different behavior of the $\Lambda p$ and $\Lambda n$ channels. The magnitude of the difference $\Delta B_\Lambda^{CSB}$ is independent of the parameters of the three-body $\Lambda NN$ interaction and is compatible with the experimental result~\cite{Juric:1973}, although the $\Lambda$~separation energies themselves are not accurate.
\renewcommand{\arraystretch}{1.4}
\begin{table}[!ht]
\centering
\begin{tabular*}{\linewidth}{@{\hspace{2.0em}\extracolsep{\fill}}ccccc@{\extracolsep{\fill}\hspace{2.0em}}}
\toprule
\toprule
Parameters & System & $B_\Lambda^{sym}$ & $B_\Lambda^{CSB}$ & {$\Delta B_\Lambda^{CSB}$} \\
\midrule
\multirow{2}{*}{Set (\hyperlink{par_I}{I})}
& $^4_\Lambda$H\hspace{0.4em} & 1.97(11) & 1.89(9) & \multirow{2}{*}{0.24(12)} \\
& $^4_\Lambda$He & 2.02(10) & 2.13(8) & \\[0.8em]
\multirow{2}{*}{Set (\hyperlink{par_II}{II})}
& $^4_\Lambda$H\hspace{0.4em} & 1.07(8) & 0.95(9) & \multirow{2}{*}{0.27(13)} \\
& $^4_\Lambda$He & 1.07(9) & 1.22(9) & \\
\midrule
\multirow{2}{*}{Expt.~\cite{Juric:1973}} & $^4_\Lambda$H\hspace{0.4em} & {---} & 2.04(4) & \multirow{2}{*}{0.35(5)\phantom{0}} \\
& $^4_\Lambda$He & {---} & 2.39(3) & \\
\bottomrule
\bottomrule
\end{tabular*}
\caption[$\Lambda$~separation energies: $A=4$ mirror hypernuclei]
{$\Lambda$~separation energies (in MeV) for the $A=4$ mirror $\Lambda$~hypernuclei with (fourth column) and without (third column)
the inclusion of the charge symmetry breaking term~\cite{Lonardoni:2013_PRC}. In the last column the difference in the separation energy induced by
the CSB interaction. The first and second rows refer to different sets of parameters for the $\Lambda NN$ interaction, while the last row reports the
experimental result.}
\label{tab:CSB_A4}
\end{table}
\renewcommand{\arraystretch}{1.0}
\renewcommand{\arraystretch}{1.4}
\begin{table}[!htb]
\centering
\begin{tabular*}{\linewidth}{@{\hspace{3.0em}\extracolsep{\fill}}ccccc@{\extracolsep{\fill}\hspace{3.0em}}}
\toprule
\toprule
System & $p$ & $n$ & $\Delta_{np}$ & $\Delta B_\Lambda$ \\
\midrule
\hspace{-0.5em}$^4_\Lambda$H & 1 & 2 & $+1$ & $-0.12(8)$ \\[0.5em]
$^4_\Lambda$He & 2 & 1 & $-1$ & $+0.15(9)$ \\
$^5_\Lambda$He & 2 & 2 & \hspace{0.78em}$0$ & $+0.02(9)$ \\
$^6_\Lambda$He & 2 & 3 & $+1$ & $-0.06(8)$ \\
$^7_\Lambda$He & 2 & 4 & $+2$ & $-0.18(8)$ \\[0.5em]
\hspace{-0.7em}$^{16}_{~\Lambda}$O & 8 & 7 & $-1$ & $+0.27(35)$ \\
\hspace{-0.7em}$^{17}_{~\Lambda}$O & 8 & 8 & \hspace{0.78em}$0$ & $+0.15(35)$ \\
\hspace{-0.7em}$^{18}_{~\Lambda}$O & 8 & 9 & $+1$ & $-0.74(49)$ \\
\bottomrule
\bottomrule
\end{tabular*}
\caption[$\Lambda$ separation energies: effect of the CSB potential]
{Difference (in MeV) in the hyperon separation energies induced by the CSB term (Eq.~(\ref{eq:V_CSB})) for different
hypernuclei~\cite{Lonardoni:2013_PRC}. The fourth column reports the difference between the number of neutrons and protons.
Results are obtained with the full two- plus three-body (set (\hyperlink{par_II}{II})) hyperon-nucleon interaction.
In order to reduce the errors, $\Delta B_\Lambda$ has been calculated by taking the difference between total hypernuclear binding energies,
instead of the hyperon separation energies.}
\label{tab:CSB}
\end{table}
\renewcommand{\arraystretch}{1.0}
The same CSB potential of Eq.~(\ref{eq:V_CSB}) has been included in the study of hypernuclei for $A>4$. In Tab.~\ref{tab:CSB} the difference in the hyperon separation energies $\Delta B_\Lambda=B_\Lambda^{CSB}-B_\Lambda^{sym}$ is reported for different hypernuclei up to $A=18$. The fourth column shows the difference between the number of neutrons and protons $\Delta_{np}=\mathcal N_n-\mathcal N_p$. For the symmetric hypernuclei $^5_\Lambda$He and $^{17}_{~\Lambda}$O the CSB interaction has no effect, since this difference is zero. In the systems with neutron excess ($\Delta_{np}>0$), the effect of the CSB term is to decrease the hyperon separation energy compared to the charge symmetric case. When $\Delta_{np}$ becomes negative, $\Delta B_\Lambda>0$ due to the attraction induced by the CSB potential in the $\Lambda p$ channel, which produces more bound hypernuclei. Since $\Delta_{np}$ is small, these effects are in any case rather small, and they become almost negligible compared to the statistical errors on $B_\Lambda$ when the number of baryons becomes large enough ($A>16$). However, in the case of $\Lambda$~neutron matter, the CSB term might have a relevant effect for a large enough $\Lambda$~fraction.
\subsubsection{Effect of the hyperon-nucleon space-exchange term}
\label{subsubsec:Px}
As already mentioned in the previous chapter, the inclusion of the $\Lambda N$ space exchange operator of Eq.~(\ref{eq:V_LN}) in the AFDMC propagator is not yet possible. In \S~\ref{subsubsec:Prop_LN} we presented a possible perturbative approach for the treatment of such a term. In Tab.~\ref{tab:Px} we report the results of this analysis.
All the results for $^{41}_{~\Lambda}$Ca are consistent within the statistical errors. On the contrary, for lighter systems the $\Lambda$~separation energy seems rather sensitive to the value of the exchange parameter $\varepsilon$. For larger values of $\varepsilon$, $B_\Lambda$ generally increases. This trend is opposite to what is found for instance in Ref.~\cite{Usmani:2006}. We recall that only the computation of the Hamiltonian expectation value by means of Eq.~(\ref{eq:mixed}) gives exact results. For other operators, like the space exchange $\mathcal P_x$, the pure estimators have to be calculated with the extrapolation method via the two relations (\ref{eq:pure1}) or (\ref{eq:pure2}). The variational estimate $\langle\mathcal P_x\rangle_v$ is thus needed. In the mentioned reference, the importance of space exchange correlations for variational estimates is discussed. Since these correlations are neglected in this work, our perturbative treatment of the $\mathcal P_x$ contribution might not be accurate. Moreover, the importance of space exchange correlations might invalidate the perturbative approach itself. An effective but more consistent treatment of this term could consist in a slight change in the strength of the central $\Lambda N$ potential. However, due to the very limited information about the space exchange parameter and its effect on single $\Lambda$~hypernuclei heavier than $^5_\Lambda$He, this approach has not been considered in the present work. Recent calculations of many hadron systems within an EFT treatment at NLO for the full $SU(3)$ hadronic spectrum have indeed confirmed that exchange terms are sub-leading~\cite{Haidenbauer:2013}.
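To make the extrapolation procedure concrete, the following is a minimal sketch of how a pure estimator can be obtained from the mixed and variational ones. We assume here the standard linear and ratio forms, which are what Eqs.~(\ref{eq:pure1}) and (\ref{eq:pure2}) usually denote; the numbers are purely illustrative and are not thesis data.
\begin{verbatim}
# Minimal sketch of the extrapolation method for pure estimators, assuming
# the standard linear and ratio forms (illustrative numbers only).
def pure_linear(o_mixed, o_var):
    # <O>_pure ~ 2 <O>_mixed - <O>_var
    return 2.0 * o_mixed - o_var

def pure_ratio(o_mixed, o_var):
    # <O>_pure ~ <O>_mixed^2 / <O>_var  (positive definite if O_var > 0)
    return o_mixed**2 / o_var

# Hypothetical mixed (DMC) and variational estimates of <Px>:
px_mixed, px_var = 0.82, 0.78
print(pure_linear(px_mixed, px_var), pure_ratio(px_mixed, px_var))
\end{verbatim}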
\renewcommand{\arraystretch}{1.4}
\begin{table}[!h]
\centering
\begin{tabular*}{\linewidth}{@{\hspace{3.0em}\extracolsep{\fill}}cccc@{\extracolsep{\fill}\hspace{3.0em}}}
\toprule
\toprule
System & $\varepsilon=0.0$ & $\varepsilon=0.1$ & $\varepsilon=0.3$ \\
\midrule
\hspace{0.7em}$^5_\Lambda$He & 3.22(14) & 3.89(15) & 4.67(25) \\
$^{17}_{~\Lambda}$O & 12.4(6) & 12.9(9) & 14.0(9) \\
\hspace{0.3em}$^{41}_{~\Lambda}$Ca & 19(4) & 21(5) & 25(7) \\
\bottomrule
\bottomrule
\end{tabular*}
\caption[$\Lambda$ separation energies: effect of the $\Lambda N$ exchange potential]
{Variation of the $\Lambda$ separation energy as a consequence of the exchange potential $v_0(r)\varepsilon(\mathcal P_x-1)$
in the $\Lambda N$ interaction of Eq.~(\ref{eq:V_LN}). The contribution of $\mathcal P_x$ is treated perturbatively
for different values of the parameter $\varepsilon$. The interaction used is the full AV4'+$\Lambda N$+$\Lambda NN$
set~(\hyperlink{par_II}{II}). Results are expressed in MeV.}
\label{tab:Px}
\end{table}
\renewcommand{\arraystretch}{1.0}
\subsection{Single particle densities and radii}
\label{subsec:dens_l}
Single particle densities can be easily computed in Monte Carlo calculations by considering the expectation value of the density operator
\begin{align}
\hat\rho_\kappa(r)=\sum_{i}\delta(r-r_i)\quad\quad \kappa=N,\Lambda\;,
\end{align}
where $i$ is the single particle index running over nucleons for $\rho_N=\langle\hat\rho_N\rangle$ or hyperons for $\rho_\Lambda=\langle\hat\rho_\Lambda\rangle$. The normalization is given by
\begin{align}
\int dr 4\pi r^2 \rho_\kappa(r)=1\;.
\end{align}
Root mean square radii $\langle r_\kappa^2\rangle^{1/2}$ are simply calculated starting from the Cartesian coordinates of nucleons and hyperons. A consistency check between AFDMC densities and radii is then taken by verifying the relation
\begin{align}
\langle r_\kappa^2\rangle=\int dr 4\pi r^4 \rho_\kappa(r)\;.\label{eq:r2}
\end{align}
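As an illustration of how these quantities are evaluated in practice, the following minimal sketch builds $\rho_\kappa(r)$ as a radial histogram from a set of sampled single particle distances and checks both the normalization and the relation~(\ref{eq:r2}); the Gaussian samples stand in for actual walker coordinates and are not AFDMC output.
\begin{verbatim}
# Minimal sketch (not the thesis code): radial density from sampled
# distances, with checks of the normalization and of Eq. (r2).
import numpy as np

def radial_density(r_samples, r_max, nbins=50):
    counts, edges = np.histogram(r_samples, bins=nbins, range=(0.0, r_max))
    dr = edges[1] - edges[0]
    r_mid = 0.5 * (edges[:-1] + edges[1:])
    shell_vol = 4.0 * np.pi * r_mid**2 * dr
    rho = counts / (len(r_samples) * shell_vol)   # per particle
    return r_mid, rho, dr

# Illustrative samples (in a real run these are walker coordinates).
rng = np.random.default_rng(0)
r = np.linalg.norm(rng.normal(scale=1.5, size=(100_000, 3)), axis=1)

r_mid, rho, dr = radial_density(r, r_max=8.0)
norm = np.sum(4.0 * np.pi * r_mid**2 * rho * dr)           # should be ~1
r2_from_rho = np.sum(4.0 * np.pi * r_mid**4 * rho * dr)    # Eq. (r2)
r2_direct = np.mean(r**2)                                  # from coordinates
print(norm, np.sqrt(r2_from_rho), np.sqrt(r2_direct))
\end{verbatim}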
Before reporting the results, we recall that also for densities and radii the AFDMC calculation only provides mixed estimators. The pure estimators are thus approximated by using Eq.~(\ref{eq:pure1}) or Eq.~(\ref{eq:pure2}). The two relations should lead to consistent results. This is the case for the nucleon and hyperon radii. In computing the densities, instead, the low statistics for $r\rightarrow0$ generates differences between the two approaches. For nucleons these discrepancies are almost within the statistical errors. For hyperons, the much reduced statistics (1 over $A-1$ for single $\Lambda$~hypernuclei) and the fact that the $\Lambda$~density is typically not peaked at $r=0$ create some uncertainties at small $r$, in particular for the first estimator. We therefore chose to adopt the pure estimator of Eq.~(\ref{eq:pure2}) in order to have at least a positive definite estimate. Finally, it has to be pointed out that the pure extrapolated results are sensitive to the quality of the variational wave function and to the accuracy of the projection sampling technique. Although we successfully tested the AFDMC propagation, we are limited in the choice of the VMC wave function. In order to be consistent with the mixed estimators coming from AFDMC calculations, we considered the same trial wave functions also for the variational runs. This might introduce some biases in the evaluation of the pure estimators. Therefore, the results presented in the following have to be considered as a qualitative study of the general effect of the hypernuclear forces on the nucleon and hyperon distributions.
In Fig.~\ref{fig:rho_He5L} we report the results for the single particle densities for $^4$He and $^5_\Lambda$He. The green curves are the densities of nucleons in the nucleus, while the red and blue curves are, respectively, the density of nucleons and of the lambda particle in the hypernucleus. In the left panel the results are obtained using AV4' for the nuclear part and the two-body $\Lambda N$ interaction alone for the hypernuclear component. In the right panel the densities are calculated with the full two- plus three-body (set (\hyperlink{par_II}{II})) hyperon-nucleon interaction.
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{rho_He5L.pdf}
\caption[Single particle densities: $N$ and $\Lambda$ in $^4$He and $^5_\Lambda$He]
{Single particle densities for nucleons in $^4$He [green, upper banded curve] and for nucleons [red, middle banded curve] and the lambda particle
[blue, lower banded curve] in $^5_\Lambda$He~\cite{Lonardoni:2013_PRC}. In the left panel the results for the two-body $\Lambda N$ interaction alone.
In the right panel the results with the inclusion also of the three-body hyperon-nucleon force in the parametrization~(\hyperlink{par_II}{II}).
The AV4' potential has been used for the nuclear core.}
\label{fig:rho_He5L}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[width=\linewidth]{L_rho.pdf}
\caption[Single particle densities: $\Lambda$ in hypernuclei for $3\le A\le 91$]
{Single particle densities for the $\Lambda$~particle in different hypernuclei~\cite{Lonardoni:2013_PRC}. Top panel reports the results for the
two-body $\Lambda N$ interaction alone. Bottom panel shows the results when the three-body hyperon-nucleon interaction with the set of
parameters~(\hyperlink{par_II}{II}) is also included. The nuclear core is described by the AV4' potential.}
\label{fig:L_rho}
\end{figure}
The addition of the $\Lambda$~particle to the nuclear core of $^4$He has the effect of slightly reducing the nucleon density at the center. The $\Lambda$~particle tries to localize close to $r=0$, thereby enlarging the nucleon distribution. When the three-body $\Lambda NN$ interaction is turned on (right panel of Fig.~\ref{fig:rho_He5L}), the repulsion moves the nucleons to larger distances, but the main effect is that the hyperon is pushed away from the center of the system. As can be seen from Fig.~\ref{fig:L_rho}, this effect is much more evident for large $A$. When the hypernucleus is described by the $\Lambda N$ interaction alone, the $\Lambda$~particle is localized near the center, in the range $r<2$~fm (left panel of Fig.~\ref{fig:L_rho}). The inclusion of the three-body $\Lambda NN$ potential forces the hyperon to move away from the center, into a region that roughly corresponds to the nucleon skin (see Tab.~\ref{tab:radii}). Although these densities strongly depend on the nuclear interaction, by using the AV6' potential we found the same qualitative effects on the $\Lambda$~particle, confirming the importance of the three-body hyperon-nucleon interaction and its repulsive nature. Due to the limitations discussed above and to the use of oversimplified nucleon-nucleon interactions, the comparison with the available VMC density profiles~\cite{Usmani:2003,Usmani:1995} is difficult.
\renewcommand{\arraystretch}{1.4}
\begin{table}[ht]
\centering
\begin{tabular*}{\linewidth}{@{\hspace{2.0em}\extracolsep{\fill}}ccccc@{\extracolsep{\fill}\hspace{2.0em}}}
\toprule
\toprule
\multirow{2}{*}{System} & \multicolumn{2}{c}{nucleus} & \multicolumn{2}{c}{hypernucleus} \\
\cmidrule(l){2-3}\cmidrule(l){4-5}
& \hspace{1.3em}$r_N^{\text{exp}}$ & $r_N$ & \hspace{1.3em}$r_N$ & $r_\Lambda$ \\
\midrule
$^2$H~~-~~$^3_\Lambda$H & \hspace{1.3em}2.142 & 1.48(8) & \hspace{1.3em}1.9(1) & 2.00(16) \\
$^3$H~~-~~$^4_\Lambda$H & \hspace{1.3em}1.759 & 1.5(1) & \hspace{1.3em}1.77(9) & 2.12(15) \\
$^3$He~~-~~$^4_\Lambda$He & \hspace{1.3em}1.966 & 1.5(1) & \hspace{1.3em}1.77(9) & 2.10(14) \\
$^4$He~~-~~$^5_\Lambda$He & \hspace{1.3em}1.676 & 1.57(9) & \hspace{1.3em}1.58(7) & 2.2(2) \\
$^5$He~~-~~$^6_\Lambda$He & \hspace{1.1em} --- & 2.02(16) & \hspace{1.3em}2.16(17) & 2.43(17) \\
$^6$He~~-~~$^7_\Lambda$He & \hspace{1.3em}2.065 & 2.3(2) & \hspace{1.3em}2.4(2) & 2.5(2) \\
$^{15}$O~~-~~$^{16}_{~\Lambda}$O & \hspace{1.1em} --- & 2.20(12) & \hspace{1.3em}2.3(1) & 3.2(3) \\
$^{16}$O~~-~~$^{17}_{~\Lambda}$O & \hspace{1.3em}2.699 & 2.16(12) & \hspace{1.3em}2.23(11) & 3.3(3) \\
$^{17}$O~~-~~$^{18}_{~\Lambda}$O & \hspace{1.3em}2.693 & 2.26(13) & \hspace{1.3em}2.32(14) & 3.3(3) \\
$^{40}$Ca~~-~~$^{41}_{~\Lambda}$Ca & \hspace{1.3em}3.478 & 2.8(2) & \hspace{1.3em}2.8(2) & 4.2(5) \\
$^{48}$Ca~~-~~$^{49}_{~\Lambda}$Ca & \hspace{1.3em}3.477 & 3.1(2) & \hspace{1.3em}3.1(2) & 4.3(5) \\
\bottomrule
\bottomrule
\end{tabular*}
\caption[Nucleon and hyperon radii in hypernuclei for $3\le A\le49$]
{Nucleon and hyperon root mean square radii (in fm) for nuclei and corresponding $\Lambda$~hypernuclei. The employed nucleon-nucleon potential is AV4'.
For the strange sector we used the full two- plus three-body hyperon-nucleon force in the parametrization~(\hyperlink{par_II}{II}).
The experimental nuclear charge radii are from Ref.~\cite{Angeli:2013}. Errors are on the fourth digit.}
\label{tab:radii}
\end{table}
\renewcommand{\arraystretch}{1.0}
In Tab.~\ref{tab:radii} we report the nucleon and hyperon root mean square radii for nuclei and hypernuclei. The experimental nuclear charge radii are reported as a reference. The AFDMC $r_N$, which do not distinguish between protons and neutrons, are typically smaller than the corresponding experimental results. This can be understood as a consequence of the employed AV4' $NN$ interaction, which overbinds nuclei. The main qualitative information is that the hyperon radii are systematically larger than the nucleon ones, as expected by looking at the single particle densities. Starting from $A=5$, the nucleon radii in the nucleus and in the corresponding hypernucleus do not change, despite the differences in the nucleon densities for $r\rightarrow0$. This is due to the small contribution to the integral (\ref{eq:r2}) given by the density for $r$ close to zero. For the hypernuclei with $A<5$, AFDMC calculations predict larger $r_N$ when the hyperon is added to the core nucleus. This is inconsistent with the results of Ref.~\cite{Sinha:2002}, where a shrinking of the core nuclei due to the presence of the $\Lambda$~particle in $A\le 5$ hypernuclei is found. We need to emphasize once more that the results presented in this section are most likely strongly dependent on the employed nucleon-nucleon potential. For instance, the shrinkage of hypernuclei has been investigated experimentally by $\gamma$-ray spectroscopy~\cite{Hashimoto:2006,Tanida:2001}. In the experiment of Ref.~\cite{Tanida:2001}, by looking at the electric quadrupole transition probability from the excited $5/2^+$ state to the ground state in $^7_\Lambda$Li, a $19\%$ shrinkage of the intercluster distance was inferred, assuming the two-body cluster structure core+deuteron. Therefore, the AFDMC study of densities and radii, unlike the analysis of $\Lambda$~separation energies, cannot lead to accurate results at this level. It has to be considered as a first exploratory attempt to extract hypernuclear structure information from Diffusion Monte Carlo simulations.
\section{Double $\Lambda$~hypernuclei}
\label{sec:ll_hyp}
In the single particle wave function representation, two $\Lambda$~particles with antiparallel spin can be added to a core nucleus filling the first hyperon $s$ shell, assumed to be the neutron $1s_{1/2}$ Skyrme radial function as in the case of single $\Lambda$~hypernuclei. The complete hypernuclear wave function is given by Eq.~(\ref{eq:Psi_T}), where the nucleon trial wave function is the same as that used in the AFDMC calculations for nuclei, and in this case the hyperon Slater determinant is also employed. Since the effect on the total energy introduced by a $\Lambda\Lambda$ correlation function is found to be negligible, and for consistency with the calculations for nuclei and single $\Lambda$~hypernuclei, we neglected the central Jastrow correlations.
The double $\Lambda$~separation energy and the incremental $\Lambda\Lambda$~energy of Eqs.~(\ref{eq:B_LL}) and (\ref{eq:dB_LL}) are calculated starting from the energy of the nucleus and of the corresponding single and double $\Lambda$~hypernuclei, all described by the same $NN$ AV4' potential. Due to the difficulties in treating open shell nuclei and the limited amount of data about double $\Lambda$~hypernuclei, we performed the AFDMC study only for the lightest $\Lambda\Lambda$~hypernucleus for which experimental energy information is available, $^{\;\;\,6}_{\Lambda\Lambda}$He.
\subsection{Hyperon separation energies}
\label{subsec:E_ll}
In Tab.~\ref{tab:BLL} we report the total binding energies for $^4$He, $^5_\Lambda$He and $^{\;\;\,6}_{\Lambda\Lambda}$He in the second column, the single or double hyperon separation energies in the third, and the incremental binding energy in the last column. The value of $B_{\Lambda\Lambda}$ confirms the weakly attractive nature of the $\Lambda\Lambda$ interaction~\cite{Hiyama:2002,Nagels:1979,Maessen:1989,Rijken:1999}. Starting from $^4$He and simply adding two hyperons, each with $B_\Lambda=3.22(14)$~MeV, the energy of $^{\;\;\,6}_{\Lambda\Lambda}$He would be $1.0\div1.5$~MeV less bound than the actual AFDMC result. Therefore the $\Lambda\Lambda$ potential of Eq.~(\ref{eq:V_LL}) induces a net attraction between hyperons, at least at this density.
\renewcommand{\arraystretch}{1.4}
\begin{table}[!ht]
\centering
\begin{tabular*}{\linewidth}{@{\hspace{3.0em}\extracolsep{\fill}}cccc@{\extracolsep{\fill}\hspace{3.0em}}}
\toprule
\toprule
System & {$E$} & {$B_{\Lambda(\Lambda)}$} & $\Delta B_{\Lambda\Lambda}$ \\
\midrule
\hspace{0.7em}$^4$He & -32.67(8) & --- & --- \\
\hspace{0.6em}$^5_\Lambda$He & -35.89(12) & 3.22(14) & --- \\
$^{\;\;\,6}_{\Lambda\Lambda}$He & -40.6(3) & 7.9(3) & 1.5(4) \\
\midrule
$^{\;\;\,6}_{\Lambda\Lambda}$He & {Expt.~\cite{Takahashi:2001}} & {$7.25\pm 0.19^{+0.18}_{-0.11}$} & {$1.01\pm 0.20^{+0.18}_{-0.11}$} \\
\bottomrule
\bottomrule
\end{tabular*}
\caption[$\Lambda$~separation energies: $^{\;\;\,6}_{\Lambda\Lambda}$He]
{Comparison between $^4$He and the corresponding single and double $\Lambda$~hypernuclei~\cite{Lonardoni:2013_PRC}. In the second column the total
binding energies are reported. The third column shows the single or double $\Lambda$~separation energies. In the last column the incremental binding energy
$\Delta B_{\Lambda\Lambda}$ is reported. All the results are obtained using the complete two- plus three-body
(set~(\hyperlink{par_II}{II})) hyperon-nucleon interaction with the addition of the $\Lambda\Lambda$ force of Eq.~(\ref{eq:V_LL}).
The results are expressed~in~MeV.}
\label{tab:BLL}
\end{table}
\renewcommand{\arraystretch}{1.0}
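As a simple cross-check of the entries of Tab.~\ref{tab:BLL}, the separation energies and their errors can be recovered from the total energies, assuming the standard definitions of $B_{\Lambda\Lambda}$ as the energy difference between $^4$He and $^{\;\;\,6}_{\Lambda\Lambda}$He and $\Delta B_{\Lambda\Lambda}=B_{\Lambda\Lambda}-2B_\Lambda$ (with which the tabulated values are consistent), and treating the three AFDMC energies as statistically independent.
\begin{verbatim}
# Error propagation for the separation energies of Tab. (tab:BLL),
# treating the three AFDMC total energies as independent.
import math

E_He4,   s_He4   = -32.67, 0.08     # 4He
E_He5L,  s_He5L  = -35.89, 0.12     # 5_Lambda He
E_He6LL, s_He6LL = -40.6,  0.3      # 6_LambdaLambda He

B_L   = E_He4 - E_He5L              # 3.22 MeV
B_LL  = E_He4 - E_He6LL             # 7.9  MeV
dB_LL = B_LL - 2.0 * B_L            # ~1.5 MeV

# dB_LL = 2 E(5_L He) - E(4He) - E(6_LL He), hence:
s_dB = math.sqrt((2 * s_He5L)**2 + s_He4**2 + s_He6LL**2)   # ~0.4 MeV
print(round(B_L, 2), round(B_LL, 2), round(dB_LL, 2), round(s_dB, 2))
\end{verbatim}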
Our $B_{\Lambda\Lambda}$ and $\Delta B_{\Lambda\Lambda}$ are very close to the values to which the potential was originally fitted within the cluster model. The latest data $B_{\Lambda\Lambda}=6.91(0.16)$~MeV and $\Delta B_{\Lambda\Lambda}=0.67(0.17)$~MeV of Ref.~\cite{Ahn:2013} suggest a weaker attractive force between the two hyperons. A refit of the interaction of the form proposed in Eq.~(\ref{eq:V_LL}) would be required. It would be interesting to study more double $\Lambda$~hypernuclei within the AFDMC framework with the proposed $\Lambda N$, $\Lambda NN$ and $\Lambda\Lambda$ interactions. Some experimental data are available in the range $A=7\div13$, but there are uncertainties in the identification of the produced double $\Lambda$~hypernuclei, resulting in inconsistencies regarding the sign of the $\Lambda\Lambda$ interaction~\cite{Dover:1991,Yamamoto:1991}. An ab-initio analysis of these systems might put some constraints on the hyperon-hyperon force, which at present is still poorly known, and give information on its density dependence. The inclusion of the $\Lambda\Lambda N$ force would also be important.
\subsection{Single particle densities and radii}
\label{subsec:dens_ll}
For the sake of completeness, we also report the results for the single particle densities (Fig.~\ref{fig:rho_He6LL}) and the root mean square radii (Tab.~\ref{tab:radii_He6LL}) of the double $\Lambda$~hypernucleus $^{\;\;\,6}_{\Lambda\Lambda}$He. By looking at the density profiles, when a second hyperon is added to $^5_\Lambda$He the nucleon density at the center is further reduced. The hyperon density, instead, seems to move slightly toward $r=0$, consistently with the weakly attractive behavior of the employed $\Lambda\Lambda$ interaction. However, the nucleon and hyperon radii are almost the same as in $^5_\Lambda$He. These conclusions are thus rather speculative, particularly recalling the discussion on single particle densities of \S~\ref{subsec:dens_l}.
\renewcommand{\arraystretch}{1.4}
\begin{table}[!hb]
\centering
\begin{tabular*}{\linewidth}{@{\hspace{5.0em}\extracolsep{\fill}}ccc@{\extracolsep{\fill}\hspace{5.0em}}}
\toprule
\toprule
System & $r_N$ & $r_\Lambda$ \\
\midrule
\hspace{0.7em}$^4$He & 1.57(9) & --- \\
\hspace{0.6em}$^5_\Lambda$He & 1.58(7) & 2.2(2) \\
$^{\;\;\,6}_{\Lambda\Lambda}$He & 1.7(2) & 2.3(2) \\
\bottomrule
\bottomrule
\end{tabular*}
\caption[Nucleon and hyperon radii for $^{\;\;\,6}_{\Lambda\Lambda}$He]
{Nucleon and hyperon root mean square radii (in fm) for $^4$He and the corresponding single and double $\Lambda$~hypernuclei.
The employed interactions are the $NN$ AV4' plus the full two- and three-body hyperon-nucleon force (set~(\hyperlink{par_II}{II})).}
\label{tab:radii_He6LL}
\end{table}
\renewcommand{\arraystretch}{1.0}
\begin{figure}[p]
\centering
\includegraphics[width=\linewidth]{rho_He6LL.pdf}
\caption[Single particle densities: $N$ and $\Lambda$ in $^4$He, $^5_\Lambda$He and $^{\;\;\,6}_{\Lambda\Lambda}$He]
{Single particle densities for nucleons in $^4$He [green banded curve], $^5_\Lambda$He [red banded curve] and
$^{\;\;\,6}_{\Lambda\Lambda}$He [light blue banded curve],
and for the $\Lambda$~particle in $^5_\Lambda$He [blue banded curve] and $^{\;\;\,6}_{\Lambda\Lambda}$He [brown banded curve].
The results are obtained using the AV4' potential for nucleons and the two- plus three-body hyperon-nucleon force~(\hyperlink{par_II}{II}).
In the case of $^{\;\;\,6}_{\Lambda\Lambda}$He, the $\Lambda\Lambda$ interaction of Eq.~(\ref{eq:V_LL}) is also employed.}
\label{fig:rho_He6LL}
\end{figure}
\newpage
\phantom{Empty page}
\chapter{Results: infinite systems}
\label{chap:results_infinite}
Neutron matter has been extensively investigated in previous works using the Auxiliary Field DMC algorithm. The EoS at zero temperature has been derived in both the constrained path~\cite{Sarsa:2003} and the fixed phase~\cite{Gandolfi:2009} approximations. In the low density regime, the $^{1\!}S_0$ superfluid energy gap has also been studied~\cite{Gandolfi:2009_gap}. In the high density regime, the connections between three-body forces, the nuclear symmetry energy and the neutron star maximum mass are extensively discussed in Refs.~\cite{Gandolfi:2012,Gandolfi:2013}.
In this chapter we will review some details of the AFDMC simulations for pure neutron matter (PNM), which will be useful to extend the calculations to include strange degrees of freedom. We will then focus on hyperon neutron matter (YNM), first testing the AFDMC algorithm extended to the strange sector in connection with the developed hyperon-nucleon interactions. Starting from the derivation of the threshold density for the appearance of $\Lambda$~hyperons, a first attempt to construct a realistic EoS for YNM will be presented. The corresponding limit for the maximum mass will finally be discussed.
\section{Neutron matter}
\label{sec:nmatt_eos}
As already described in Chapter~\ref{chap:method}, due to the simplification of the potentials for neutron-only systems, PNM can be investigated by means of AFDMC calculations using the Argonne V8' two-body potential and including three-body forces. The contribution of the terms of the Argonne potential beyond spin-orbit is usually very small in nuclei and in low density nuclear and neutron matter, and may become significant only at very large densities~\cite{Gandolfi:2009}. The predicted maximum masses of a NS for the two Argonne potentials are very close and both below $1.8M_\odot$, as a consequence of the softness of the corresponding EoS~\cite{Akmal:1998,Gandolfi:2012}. Since the present observational limit for $M_{\max}$ is around $2M_\odot$~\cite{Demorest:2010,Antoniadis:2013}, three-neutron forces must be repulsive at high densities. As reported in Ref.~\cite{Maris:2013}, the Illinois~7 TNI is attractive and produces an EoS that is too soft. The Urbana~IX interaction instead provides a strong repulsive contribution to the total energy. The inclusion of the UIX force in addition to the two-body AV8' interaction in AFDMC calculations for PNM generates a rather stiff EoS. The predicted maximum mass is around $2.4M_\odot$~\cite{Gandolfi:2012}, in agreement with the result coming from the AV18+UIX calculation of Akmal, Pandharipande and Ravenhall~\cite{Akmal:1998}. It follows that the AFDMC method applied to the AV8'+UIX nuclear Hamiltonian is a valuable tool for the investigation of neutron matter properties and neutron star observables. This is the starting point for the study of $\Lambda$~neutron matter.
All the AFDMC results for PNM have been obtained using the version \hyperlink{method:Elocal}{\emph{v2}} of the algorithm. Simulations are typically performed at a fixed imaginary time step $d\tau=2\cdot10^{-5}~\text{MeV}^{-1}$, which should be small enough to provide a good approximation of the extrapolated result~\cite{Sarsa:2003}. The wave function of Eq.~(\ref{eq:psi_N}) includes a Jastrow correlation function among neutrons and a Slater determinant of plane waves coupled with two-component spinors. For infinite neutron systems, AFDMC calculations do not depend on the Jastrow functions. Moreover, changing the algorithm to version \hyperlink{method:PsiT}{\emph{v1}} alters the results by less than 1\%. This is because the employed trial wave function is already a good approximation of the real ground state wave function. In addition, the interaction is simpler than in the case of finite nucleon systems, due to the absence of the $\bm\tau_i\cdot\bm\tau_j$ contributions.
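As an illustration of the time step check, one can compute the energy at a few values of $d\tau$ and extrapolate to $d\tau\rightarrow0$. The sketch below assumes, purely for illustration, a leading linear bias in $d\tau$; the energies are hypothetical and are not thesis data.
\begin{verbatim}
# Minimal sketch of a time-step extrapolation, assuming a leading linear
# bias in d_tau. Energies are hypothetical, for illustration only.
import numpy as np

d_tau = np.array([8e-5, 4e-5, 2e-5, 1e-5])     # MeV^-1
E     = np.array([14.62, 14.54, 14.50, 14.48]) # MeV per neutron

slope, e0 = np.polyfit(d_tau, E, 1)
print(f"extrapolated E(d_tau -> 0) ~ {e0:.2f} MeV, fixed-step E = {E[2]:.2f} MeV")
\end{verbatim}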
In Chapter~\ref{chap:method} we have seen that finite size effects appear because of the dependence of the Fermi gas kinetic energy on the number of particles. The kinetic energy oscillations of $\mathcal N_F$ free fermions imply that the energy of $\mathcal N_F=38$ is lower than that of either $\mathcal N_F=14$ or $\mathcal N_F=66$. This is reflected in the energy of PNM for different numbers of neutrons with PBC (Eq.~(\ref{eq:PBC})). At each density it follows that $E(38)<E(14)<E(66)$~\cite{Gandolfi:2009}. However, as already discussed in~\S~\ref{subsec:Wave}, the results for 66 neutrons are remarkably close to the extrapolated TABC energy. A total of 66 neutrons is thus the typical number of particles employed in AFDMC calculations for PNM.
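These shell oscillations can be made explicit by comparing the kinetic energy per particle of $\mathcal N_F$ free fermions in a periodic box with its thermodynamic limit. The following minimal sketch (not thesis code) reproduces the pattern quoted above for 14, 38 and 66 particles.
\begin{verbatim}
# Kinetic energy per particle of N free spin-1/2 fermions in a periodic box,
# divided by the thermodynamic-limit value (3/5 E_F). Illustrates the shell
# oscillations for the closed shells N = 14, 38, 66.
import itertools
import numpy as np

def kinetic_ratio(n_particles, nmax=6):
    n2 = sorted(i*i + j*j + k*k
                for i, j, k in itertools.product(range(-nmax, nmax + 1), repeat=3))
    occ = np.repeat(n2, 2)[:n_particles]         # 2 spin states per k-vector
    e_box = (2.0 * np.pi)**2 * occ.sum() / n_particles
    e_tl = 0.6 * (3.0 * np.pi**2 * n_particles)**(2.0 / 3.0)
    return e_box / e_tl                          # hbar^2/(2 m L^2) cancels

for n in (14, 38, 66):
    print(n, round(kinetic_ratio(n), 3))         # ~1.01, ~0.96, ~1.00
\end{verbatim}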
Finite size effects can also arise from the potential, in particular at high density, depending on the range of the interaction. Monte Carlo calculations are generally performed in a finite periodic box of size $L$, and all inter-particle distances are truncated within the sphere of radius $L/2$. Usually, tail corrections due to this truncation are estimated by integrating the two-body interaction from $L/2$ up to infinity. However, this is possible only for spin independent terms. As originally reported in Ref.~\cite{Sarsa:2003}, in order to correctly treat all the tail corrections to the potential, it is possible to include the contributions given by the cells neighboring the simulation box. Each two-body contribution to the potential is given~by
\begin{align}
v_p(r)\equiv v_p(|x,y,z|)\longrightarrow\sum_{i_x,i_y,i_z}v_p\Bigl(\big|(x+i_xL)\hat x+(y+i_yL)\hat y+(z+i_zL)\hat z\big|\Bigr)\;,
\end{align}
where $v_p(r)$ are the potential functions of Eq.~(\ref{eq:v_ij_Op}) and $i_x,i_y,i_z$ are $0,\pm1,\pm2,\ldots$ depending on the number of boxes considered. The inclusion of the first 26 additional neighbor cells, which corresponds to $i_x,i_y,i_z$ taking the values $-1$, $0$ and $1$, is enough to extend the calculation to inter-particle distances larger than the range of the potential~\cite{Sarsa:2003,Gandolfi:2009}. Finite-size corrections due to three-body forces can be included in the same way as for the nucleon-nucleon interaction, although their contribution is very small compared to the potential energy. Their effect is appreciable only for a small number of particles and at large density, i.e., if the size of the simulation box is small. We will see that these corrections are actually non-negligible for the correct computation of energy differences in $\Lambda$~neutron matter. By looking at the results reported in the mentioned references, for PNM we can estimate that the finite-size errors in AFDMC calculations, due to both kinetic and potential energies, do not exceed 2\% of the asymptotic value of the energy calculated by using TABC.
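A minimal sketch of this image summation, for a generic radial pair function $v_p(r)$ (the Yukawa-like function below is only a placeholder), reads:
\begin{verbatim}
# Sum a radial pair potential over the central cell and the 26 neighboring
# image cells (i_x, i_y, i_z = -1, 0, 1), as in the equation above.
import itertools
import numpy as np

def pair_potential_with_images(v_p, dx, dy, dz, L, n_images=1):
    total = 0.0
    for ix, iy, iz in itertools.product(range(-n_images, n_images + 1), repeat=3):
        r = np.sqrt((dx + ix*L)**2 + (dy + iy*L)**2 + (dz + iz*L)**2)
        total += v_p(r)
    return total

# Placeholder short-ranged function, for illustration only.
v = lambda r: np.exp(-0.7 * r) / r
print(pair_potential_with_images(v, 1.0, 2.0, -0.5, L=6.0))
\end{verbatim}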
It was found~\cite{Gandolfi:2009,Gandolfi:2012} that the EoS of PNM can be accurately parametrized using the following polytrope functional form:
\begin{align}
E(\rho_n)=a\left(\frac{\rho_n}{\rho_0}\right)^\alpha+b\left(\frac{\rho_n}{\rho_0}\right)^\beta\;,\label{eq:poly}
\end{align}
where $E(\rho_n)$ is the energy per neutron as a function of the neutron density $\rho_n$, and the parameters $a$, $\alpha$, $b$, and $\beta$ are obtained by fitting the QMC results. $\rho_0=0.16~\text{fm}^{-3}$ is the nuclear saturation density. AFDMC energies per particle as a function of the neutron density, together with the fitted parameters for both AV8' and the full AV8'+UIX Hamiltonians, are reported in Tab.~\ref{tab:E_nmatt}. The plots of the EoS are shown in the next section, Fig.~\ref{fig:eos_Lfrac}.
\renewcommand{\arraystretch}{1.0}
\begin{table}[!ht]
\centering
\begin{tabular*}{\linewidth}{@{\hspace{4.0em}\extracolsep{\fill}}ccc@{\extracolsep{\fill}\hspace{4.0em}}}
\toprule
\toprule
$\rho_n$ & AV8' & AV8'+UIX \\
\midrule
0.08 & 9.47(1) & 10.49(1) \\
0.16 & 14.47(2) & 19.10(2) \\
0.24 & 19.98(3) & 31.85(3) \\
0.32 & 26.45(3) & 49.86(5) \\
0.40 & 34.06(5) & 74.19(5) \\
0.48 & 42.99(8) & 105.9(1) \\
0.56 & --- & 145.3(1) \\
0.60 & 58.24(8) & 168.1(2) \\
0.70 & 73.3(1) & --- \\
\midrule
$\begin{aligned}
&\phantom{a=2.04(7)} \\
&\text{polytrope} \\
&\text{parameters} \\
&\phantom{\beta=0.47(1)}
\end{aligned}$ &
$\begin{aligned}
a&=2.04(7) \\
\alpha&=2.15(2) \\
b&=12.47(47) \\
\beta&=0.47(1)
\end{aligned}$ &
$\begin{aligned}
a&=5.66(3) \\
\alpha&=2.44(1) \\
b&=13.47(3) \\
\beta&=0.51(1)
\end{aligned}$ \\
\bottomrule
\bottomrule
\end{tabular*}
\caption[Energy per particle: neutron matter]
{Energy per particle in neutron matter for selected densities~\cite{Maris:2013,Gandolfi:2013}.
$a$, $\alpha$, $b$ and $\beta$ are the fitted polytrope coefficients of Eq.~(\ref{eq:poly}).}
\label{tab:E_nmatt}
\end{table}
\renewcommand{\arraystretch}{1.0}
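For reference, the polytrope fit of Eq.~(\ref{eq:poly}) can be reproduced from the tabulated AV8'+UIX energies with a few lines of code. The sketch below uses an unweighted least-squares fit, so the resulting parameters may differ slightly from those quoted in Tab.~\ref{tab:E_nmatt}.
\begin{verbatim}
# Unweighted fit of E(rho) = a (rho/rho0)^alpha + b (rho/rho0)^beta
# to the AV8'+UIX energies of Tab. (E_nmatt).
import numpy as np
from scipy.optimize import curve_fit

rho0 = 0.16  # fm^-3
rho = np.array([0.08, 0.16, 0.24, 0.32, 0.40, 0.48, 0.56, 0.60])
E   = np.array([10.49, 19.10, 31.85, 49.86, 74.19, 105.9, 145.3, 168.1])

def polytrope(r, a, alpha, b, beta):
    x = r / rho0
    return a * x**alpha + b * x**beta

popt, pcov = curve_fit(polytrope, rho, E, p0=(5.0, 2.4, 13.0, 0.5))
print(dict(zip(("a", "alpha", "b", "beta"), np.round(popt, 2))))
\end{verbatim}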
\section{$\Lambda$~neutron matter}
\label{sec:Lnmatt_eos_xl}
The study of $\Lambda$~neutron matter follows straightforwardly from PNM calculations with the extension of the wave function (Eq.~(\ref{eq:Psi_T})) and the inclusion of the strange part of the Hamiltonian (Eqs.~(\ref{eq:H_Y}) and (\ref{eq:H_YN})), in analogy with the simulations for finite strange systems. In addition to the Slater determinant of plane waves for neutrons, there is now the determinant for the $\Lambda$~particles. Both sets of plane waves have quantized $\bm k_\epsilon$ vectors given by Eq.~(\ref{eq:k_vec}), and each type of baryon fills its own momentum shell. As discussed in \S~\ref{subsubsubsec:Wave_non_strange}, the requirement of homogeneity and isotropy implies the closure of the momentum shell structure, both for neutrons and hyperons. The consequence is that in AFDMC calculations we are limited in the possible choices for the $\Lambda$~fraction, defined as
\begin{align}
x_\Lambda=\frac{\rho_\Lambda}{\rho_b}=\frac{\mathcal N_\Lambda}{\mathcal N_n+\mathcal N_\Lambda}\;,
\end{align}
where $\rho_\Lambda$ is the hyperon density and $\rho_b$ the total baryon density of Eq.~(\ref{eq:rho_b}). Employing the TABC (Eq.~(\ref{eq:TABC})) would allow one to consider numbers of particles corresponding to open shells, providing more freedom in the choice of $x_\Lambda$. However, this approach has not been investigated in this work.
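For reference, the closed-shell particle numbers allowed by the quantized $\bm k_\epsilon$ vectors and a few of the resulting accessible $\Lambda$~fractions can be enumerated directly; the sketch below is illustrative only.
\begin{verbatim}
# Closed-shell particle numbers for plane waves with k = (2 pi / L) n and
# 2 spin states per k-vector, and a few of the resulting Lambda fractions.
import itertools
from collections import Counter

nmax = 3
shells = Counter(i*i + j*j + k*k
                 for i, j, k in itertools.product(range(-nmax, nmax + 1), repeat=3))

closed, total = [], 0
for n2 in sorted(shells):
    total += 2 * shells[n2]
    closed.append(total)
print(closed[:5])                 # [2, 14, 38, 54, 66]

n_n = 66
for n_l in (1, 2, 14):
    print(n_l, round(n_l / (n_n + n_l), 4))   # 0.0149, 0.0294, 0.175
\end{verbatim}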
As soon as the hyperons appear in the bulk of neutrons, i.e. above a $\Lambda$~threshold density $\rho_\Lambda^{th}$, the EoS becomes a function of both baryon density and $\Lambda$~fraction, which are connected by the equilibrium condition $\mu_\Lambda=\mu_n$ (see \S~\ref{sec:ns}). The $\Lambda$~threshold density and the function $x_\Lambda(\rho_b)$ are key ingredients to understand the high density properties of hypermatter and thus to predict the maximum mass. We will start the discussion with the test analysis of $\Lambda$~neutron matter at a fixed $\Lambda$~fraction. We will then move to the realistic case of variable $x_\Lambda$.
\subsection{Test: fixed $\Lambda$ fraction}
\label{subsec:x_lambda}
In order to test the feasibility of AFDMC calculations for hypermatter, we considered the limiting case of a small $\Lambda$~fraction, so as to treat the hyperon as a small perturbation in the neutron medium. We filled the simulation box with 66 neutrons and just one $\Lambda$~particle, i.e. $x_\Lambda=0.0149$. Although the first momentum shell for the strange baryons is not completely filled (for $\mathcal N_c=1$ the occupation number is 2, spin up and spin down $\Lambda$~particles), the requirement of homogeneity and isotropy is still satisfied. The first $\bm k_\epsilon$ vector, indeed, is $\frac{2\pi}{L}(0,0,0)$ and thus the corresponding plane wave is just a constant, giving no contribution to the kinetic energy. In order to keep the $\Lambda$~fraction small we are allowed to use one or two hyperons in the box (the next closed shell is at 14 particles) and, possibly, to change the number of neutrons, as we will see. With just one $\Lambda$~hyperon there is no need to include the $\Lambda\Lambda$ interaction. The closest hyperon will be in the next neighboring cell, at distances larger than the range of the hyperon-hyperon force, at least for densities that are not extremely high. Therefore, we proceeded with the inclusion of the AV8'+UIX potentials for neutrons, adding the $\Lambda N$+$\Lambda NN$ interactions in both parametrizations~(\hyperlink{par_I}{I})~and~(\hyperlink{par_II}{II}).
In Tab.~\ref{tab:E_Lnmatt} we report the energy as a function of the baryon density for different combinations of the employed potentials. The parameters of the polytrope function of Eq.~(\ref{eq:poly}) that fits the AFDMC results are also shown. The plot of the fits, for both PNM and YNM are reported in Fig.~\ref{fig:eos_Lfrac}.
By looking at the dashed lines, corresponding to calculations without the neutron TNI, the softness of the PNM EoS (green) discussed in the previous section is evident. The addition of the two-body hyperon-nucleon interaction (blue) implies, as expected (see \S~\ref{sec:ns}), a further reduction of the energy per particle, even for the small and constant $\Lambda$~fraction. The inclusion of the three-body $\Lambda NN$ interaction (red), instead, makes the EoS stiffer at high density, even stiffer than the PNM one for the set of parameters~(\hyperlink{par_II}{II}). This result is rather interesting, because it means that the hyperon-nucleon force used has a strong repulsive component that remains effective also at densities larger than the nuclear saturation density, where the interaction was originally fitted to medium-heavy hypernuclei.
When the Urbana~IX TNI is employed (solid lines), the PNM EoS (green) becomes stiff. As in the previous case, the inclusion of the two-body $\Lambda N$ interaction softens the EoS (blue), although the effect is not dramatic for the small $x_\Lambda$ considered. The three-body hyperon-nucleon force gives a repulsive contribution to the total energy (red). The effect is more evident for the parametrization~(\hyperlink{par_II}{II}), for which the PNM and YNM EoS are almost on top of each other. The small constant fraction of hyperons in the neutron medium induces very small modifications in the energy per particle. This is due to the repulsive contribution of the $\Lambda NN$ interaction still active at high densities.
These results do not describe the realistic EoS of $\Lambda$~neutron matter, because they are computed at a fixed $\Lambda$~fraction for each baryon density. However, the high-density part of the curves gives some indication about the behavior of the hyperon-nucleon interaction in the infinite medium. The fundamental observation is that the $\Lambda NN$ force is repulsive, confirming our expectations. By varying the $\Lambda$~fraction, for example considering two hyperons over 66 neutrons, the qualitative picture drawn in Fig.~\ref{fig:eos_Lfrac} remains the same, but a small, reasonable increase in the softening of the EoS is found. This is consistent with the theoretical prediction related to the appearance of strange baryons in NS matter and gives us the possibility to quantitatively predict the magnitude of the softening within a Quantum Monte Carlo framework.
\renewcommand{\arraystretch}{1.0}
\begin{table}[p]
\centering
\begin{tabular*}{\linewidth}{@{\hspace{1.5em}\extracolsep{\fill}}cccc@{\extracolsep{\fill}\hspace{1.5em}}}
\toprule
\toprule
\multirow{2}{*}{$\rho_b$} & AV8' & AV8' & AV8' \\
& $\Lambda N$ & $\Lambda N$+$\Lambda NN$~(\hyperlink{par_I}{I}) & $\Lambda N$+$\Lambda NN$~(\hyperlink{par_II}{II}) \\
\midrule
0.08 & 8.71(1) & 8.84(1) & 8.92(1) \\
0.16 & 13.11(3) & 13.44(2) & 13.76(1) \\
0.24 & 17.96(2) & 18.71(2) & 19.31(3) \\
0.32 & 23.81(4) & 25.02(4) & 26.09(3) \\
0.40 & 30.72(4) & 32.75(6) & 34.20(6) \\
0.48 & 38.84(6) & 42.03(6) & 43.99(4) \\
0.56 & 48.37(7) & 52.30(8) & 55.18(8) \\
0.60 & 53.24(7) & 57.9(1) & 61.42(7) \\
0.70 & 67.1(1) & 74.0(1) & 78.7(1) \\
0.80 & 83.1(1) & 91.7(1) & 98.0(1) \\
\midrule
$\begin{aligned}
&\phantom{a=2.54(13)} \\
&\text{polytrope} \\
&\text{parameters} \\
&\phantom{\beta=0.38(2)}
\end{aligned}$ &
$\begin{aligned}
a&=2.54(13) \\
\alpha&=2.00(3) \\
b&=10.52(15) \\
\beta&=0.38(2)
\end{aligned}$ &
$\begin{aligned}
a&=2.80(13) \\
\alpha&=2.02(3) \\
b&=10.60(16) \\
\beta&=0.38(2)
\end{aligned}$ &
$\begin{aligned}
a&=2.75(9) \\
\alpha&=2.07(2) \\
b&=10.98(11) \\
\beta&=0.41(2)
\end{aligned}$ \\
\bottomrule
\bottomrule\\
\toprule
\toprule
\multirow{2}{*}{$\rho_b$} & AV8'+UIX & AV8'+UIX & AV8'+UIX \\
& $\Lambda N$ & $\Lambda N$+$\Lambda NN$~(\hyperlink{par_I}{I}) & $\Lambda N$+$\Lambda NN$~(\hyperlink{par_II}{II}) \\
\midrule
0.08 & 9.72(2) & 9.77(1) & 9.87(1) \\
0.16 & 17.53(2) & 17.88(2) & 18.16(1) \\
0.24 & 29.29(5) & 29.93(2) & 30.57(2) \\
0.32 & 46.17(7) & 47.38(5) & 48.55(4) \\
0.40 & 68.86(8) & 71.08(7) & 72.87(7) \\
0.48 & 98.71(8) & 101.7(1) & 104.68(9) \\
0.56 & 135.9(1) & 140.19(9) & 144.(1) \\
0.60 & 157.0(1) & 162.3(1) & 167.0(1) \\
\midrule
$\begin{aligned}
&\phantom{a=5.48(12)} \\
&\text{polytrope} \\
&\text{parameters} \\
&\phantom{\beta=0.47(1)}
\end{aligned}$ &
$\begin{aligned}
a&=5.48(12) \\
\alpha&=2.42(1) \\
b&=12.06(14) \\
\beta&=0.47(1)
\end{aligned}$ &
$\begin{aligned}
a&=5.55(5) \\
\alpha&=2.44(1) \\
b&=12.32(6) \\
\beta&=0.49(1)
\end{aligned}$ &
$\begin{aligned}
a&=5.76(7) \\
\alpha&=2.43(1) \\
b&=12.39(8) \\
\beta&=0.49(1)
\end{aligned}$ \\
\bottomrule
\bottomrule
\end{tabular*}
\caption[Energy per particle: $\Lambda$~neutron matter]
{Energy per particle in $\Lambda$~neutron matter as a function of the baryon density. The $\Lambda$ fraction is fixed at $x_\Lambda=0.0149$.
Different columns correspond to different nucleon-nucleon and hyperon-nucleon potentials.
$a$, $\alpha$, $b$ and $\beta$ are the fitted polytrope coefficients (Eq.~(\ref{eq:poly})).
The curves are reported in Fig.~\ref{fig:eos_Lfrac}.}
\label{tab:E_Lnmatt}
\end{table}
\renewcommand{\arraystretch}{1.0}
\begin{figure}[p]
\centering
\includegraphics[width=\linewidth]{eos_V8p+UIX+LN+LNN_parI+II.pdf}
\caption[Energy per particle vs. baryon density at fixed $\Lambda$ fraction]
{Energy per particle as a function of the baryon density for $\Lambda$~neutron matter at fixed $\Lambda$~fraction $x_\Lambda=0.0149$.
Green curves refer to the PNM EoS, blue and red to the YNM EoS with the inclusion of the two-body and of the two- plus three-body hyperon-nucleon forces, respectively.
In the upper panel the results are for the $\Lambda NN$ parametrization~(\hyperlink{par_I}{I}). In the lower panel the set~(\hyperlink{par_II}{II})
has been used. Dashed lines are obtained using the AV8' nucleon-nucleon potential. Solid lines represent the results with the inclusion of
the $NNN$ Urbana~IX potential.}
\label{fig:eos_Lfrac}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[width=\linewidth]{gofr_r16_V8p+UIX+LN+LNN.pdf}
\caption[Pair correlation functions at fixed $\Lambda$ fraction: $\rho_b=0.16~\text{fm}^{-3}$]
{$nn$ (dashed lines) and $\Lambda n$ (solid lines) pair correlation functions in $\Lambda$~neutron matter
for $\rho_b=0.16~\text{fm}^{-3}$ and $x_\Lambda=0.0149$.
The nucleon-nucleon potential is AV8'+UIX. In the upper panel only the two-body hyperon-nucleon potential has been used.
In the lower panel the three-body $\Lambda NN$ force in the parametrization~(\hyperlink{par_II}{II}) has also been considered.
The subscript $u$ ($d$) refers to the neutron or lambda spin up (down).}
\label{fig:gofr_r0.16_V8p+UIX+LN(LNN)}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[width=\linewidth]{gofr_r40_V8p+UIX+LN+LNN.pdf}
\caption[Pair correlation functions at fixed $\Lambda$ fraction: $\rho_b=0.40~\text{fm}^{-3}$]
{Same as Fig.~\ref{fig:gofr_r0.16_V8p+UIX+LN(LNN)}, but for the baryon density $\rho_b=0.40~\text{fm}^{-3}$.}
\label{fig:gofr_r0.40_V8p+UIX+LN(LNN)}
\end{figure}
Before moving to the derivation of the $\Lambda$~threshold density and the hypermatter EoS, let us analyze the pair correlation functions calculated for $\Lambda$~neutron matter at fixed $\Lambda$~fraction $x_\Lambda=0.0149$. Figs.~\ref{fig:gofr_r0.16_V8p+UIX+LN(LNN)} and \ref{fig:gofr_r0.40_V8p+UIX+LN(LNN)} report the neutron-neutron and lambda-neutron pair correlation functions $g(r)$ for two baryon densities, $\rho_b=\rho_0$ and $\rho_b=0.40~\text{fm}^{-3}$. Dashed lines refer to $g_{nn}(r)$ in the central (black), spin-singlet (light blue) and spin-triplet (brown) channels. Solid lines refer to $g_{\Lambda n}(r)$ in the central (blue), $\Lambda$~spin~up - $n$~spin~down (red) and $\Lambda$~spin~up - $n$~spin~up (green) channels, respectively. The upper panels display the results obtained using the two-body $\Lambda N$ interaction only. In the lower panels the three-body $\Lambda NN$ force in the parametrization~(\hyperlink{par_II}{II}) is also included.
The main information we can extract from these plots is the non-negligible effect on inter-particle distances due to the inclusion of the three-body $\Lambda NN$ force. Without a TNI among hyperons and neutrons, the central $\Lambda n$ correlation function presents a maximum around $1.0\div1.2$~fm, depending on the density. This is a consequence of the attractive $\Lambda N$ force, which tends to create a shell of neutrons surrounding the hyperon impurity. The effect is also visible at high density, although reduced. When the $\Lambda NN$ force is considered, the shell effect disappears and $g_{\Lambda n}(r)$ resembles the neutron-neutron one, particularly at high density. The inclusion of the repulsive three-body force prevents the clustering of $\Lambda$~particles in favor of a more homogeneous lambda-neutron medium. The use of a $\Lambda n$ central correlation has the sole effect of reducing the value of $g_{\Lambda n}(r)$ at the origin, moving the central functions closer to the PNM ones. For the small $\Lambda$~fraction considered here, the neutron-neutron correlation functions are not sensitive to the presence of the hyperon. Indeed, similar results are obtained for PNM.
It is interesting to observe the projections of the pair correlation functions onto the spin channels. For neutrons, the Pauli principle tends to suppress close pairs of particles with parallel spins. For the $\Lambda$-$n$ pair there is, in principle, no Pauli effect, because the two particles belong to different isospin spaces. However, the employed hyperon-nucleon interaction involves a $\bm\sigma_\lambda\cdot\bm\sigma_i$ contribution (recall Eqs.~(\ref{eq:V_LN}) and (\ref{eq:V_LNN_D})). This contribution is almost negligible in the case of the $\Lambda N$ potential alone (upper panels of Figs.~\ref{fig:gofr_r0.16_V8p+UIX+LN(LNN)} and \ref{fig:gofr_r0.40_V8p+UIX+LN(LNN)}). It has instead a sizable effect in the dominant three-body force, for which the $\Lambda$~spin~up - $n$~spin~down channel separates from the $\Lambda$~spin~up - $n$~spin~up one, revealing a (weak) net repulsion between parallel spin configurations. The same effect is found when the $\Lambda$ spin is reversed.
\subsection{$\Lambda$~threshold density and the equation of state}
\label{subsec:Lnmatt_eos}
In order to address the problem of $\Lambda$~neutron matter, we make use of a formal analogy with the two-component Fermi gas framework used in the analysis of asymmetric nuclear matter. When protons are added to the bulk of neutrons, the energy per baryon can be expressed in terms of the isospin asymmetry
\begin{align}
\delta_I=\frac{\rho_n-\rho_p}{\rho_n+\rho_p}=1-2x_p\quad\quad x_p=\frac{\rho_p}{\rho_b}\;,
\end{align}
as a sum of even powers of $\delta_I$
\begin{align}
E_{pn}(\rho_b,x_p)=E_{pn}(\rho_b,1/2)+S_{pn}^{(2)}(\rho_b)(1-2x_p)^2+S_{pn}^{(4)}(\rho_b)(1-2x_p)^4+\ldots\;,\label{eq:E_pn}
\end{align}
where $x_p$ is the proton fraction and $S_{pn}^{(2i)}(\rho_b)$ with $i=1,2,\ldots$ are the nuclear symmetry energies. Typically, higher order corrections for $i>1$ are ignored. The nuclear symmetry energy $S_{pn}(\rho_b)\equiv S_{pn}^{(2)}(\rho_b)$ is then defined as the difference between the energy per baryon of PNM $E_{\text{PNM}}(\rho_b)=E_{pn}(\rho_b,0)$ and the energy per baryon of symmetric nuclear matter~(SNM) $E_{\text{SNM}}(\rho_b)=E_{pn}(\rho_b,1/2)$.
$E_{pn}(\rho_b,x_p)$ can be rewritten in terms of the PNM energy:
\begin{align}
E_{pn}(\rho_b,x_p)&=E_{\text{SNM}}(\rho_b)+S_{pn}(\rho_b)\Bigl(1-2x_p\Bigr)^2\;,\nonumber\\[0.2em]
&=E_{\text{SNM}}(\rho_b)+\Bigl[E_{\text{PNM}}(\rho_b)-E_{\text{SNM}}(\rho_b)\Bigr]\Bigl(1-2x_p\Bigr)^2\;,\nonumber\\[0.2em]
&=E_{\text{PNM}}(\rho_b)+S_{pn}(\rho_b)\Bigl(-4x_p+4x_p^2\Bigr)\;.\label{eq:E_asym}
\end{align}
In AFDMC calculations the Coulomb interaction is typically neglected. The difference between PNM and asymmetric nuclear matter is thus related to the isospin dependent terms of the nucleon-nucleon interactions. The effect of these components of the potential is parametrized by means of a function of the proton fraction and a function of the baryon density.
We can draw an analogy between asymmetric nuclear matter and hypermatter by replacing the protons with $\Lambda$~particles. In this case the deviation from the PNM case is characterized by the ``strangeness asymmetry''
\begin{align}
\delta_S=\frac{\rho_n-\rho_\Lambda}{\rho_n+\rho_\Lambda}=1-2x_\Lambda\;,
\end{align}
and the effect on the energy per particle is related to the hyperon-nucleon interactions and the difference in mass between neutron and $\Lambda$.
In the case of $\Lambda$~neutron matter, the analog of Eq.~(\ref{eq:E_pn}) should also contain odd powers of $\delta_S$. Such contributions are negligible for asymmetric nuclear matter due to the smallness of the charge symmetry breaking in the $NN$ interaction. Since $\Lambda$~particles are distinguishable from neutrons, there is no theoretical argument for neglecting the linear term in $(1-2x_\Lambda)$. Nevertheless, we can try to express the energy per particle of $\Lambda$~neutron matter as an expansion in the $\Lambda$~fraction, by introducing a ``hyperon symmetry energy'' $S_{\Lambda n}(\rho_b)$ such that
\begin{align}
E_{\Lambda n}(\rho_b,x_\Lambda)=E_{\text{PNM}}(\rho_b)+S_{\Lambda n}(\rho_b)\Bigl(-x_\Lambda+x_\Lambda^2\Bigr)\;.\label{eq:E_YNM}
\end{align}
The expression for the energy difference directly follows from Eq.~(\ref{eq:E_YNM}):
\begin{align}
\Delta E_{\Lambda n}(\rho_b,x_\Lambda)=E_{\Lambda n}(\rho_b,x_\Lambda)-E_{\text{PNM}}(\rho_b)=S_{\Lambda n}(\rho_b)\Bigl(-x_\Lambda+x_\Lambda^2\Bigr)\;.
\label{eq:DeltaE}
\end{align}
The idea is then to perform simulations for different $\Lambda$~fractions in order to fit the hyperon symmetry energy $S_{\Lambda n}(\rho_b)$. The main problem of this procedure is the limited set of hyperon fractions we can consider. In order to keep $x_\Lambda$ small we can use up to two $\Lambda$~particles in the first momentum shell and vary the number of neutrons from 66 down to 14, as reported in Tab.~\ref{tab:L_frac}. In fact, moving to the next $\Lambda$ shell implies a total of 14 strange baryons and a correspondingly large number of neutrons, which is computationally demanding. Moreover, we cannot neglect the $\Lambda\Lambda$ interaction for 14 hyperons in a box, even at low density. The inclusion of the hyperon-hyperon force would introduce additional uncertainties in the calculation and has not been taken into account at this stage.
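As a quick, purely illustrative check, the $\Lambda$~fractions listed in Tab.~\ref{tab:L_frac} simply follow from $x_\Lambda=\mathcal N_\Lambda/(\mathcal N_n+\mathcal N_\Lambda)$:
\begin{verbatim}
# Lambda fractions x_L = N_L/(N_n + N_L) for the combinations of Tab. (L_frac)
for N_n, N_L in [(66, 1), (54, 1), (38, 1), (66, 2),
                 (54, 2), (38, 2), (14, 1)]:
    print(N_n, N_L, round(N_L/(N_n + N_L), 4))
\end{verbatim}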
\renewcommand{\arraystretch}{1.4}
\begin{table}[!ht]
\centering
\begin{tabular*}{\linewidth}{@{\hspace{4.0em}\extracolsep{\fill}}ccccc@{\extracolsep{\fill}\hspace{4.0em}}}
\toprule
\toprule
$\mathcal N_n$ & $\mathcal N_\Lambda$ & $\mathcal N_b$ & $x_\Lambda$ & $x_\Lambda~\%$ \\
\midrule
66 & 0 & 66 & 0.0000 & 0.0\% \\
66 & 1 & 67 & 0.0149 & 1.5\% \\
54 & 1 & 55 & 0.0182 & 1.8\% \\
38 & 1 & 39 & 0.0256 & 2.6\% \\
66 & 2 & 68 & 0.0294 & 2.9\% \\
54 & 2 & 56 & 0.0357 & 3.6\% \\
38 & 2 & 40 & 0.0500 & 5.0\% \\
14 & 1 & 15 & 0.0667 & 6.7\% \\
\bottomrule
\bottomrule
\end{tabular*}
\caption[Baryon number and $\Lambda$~fraction]
{Neutron, lambda and total baryon number with the corresponding $\Lambda$~fraction for $\Lambda$~matter calculations.}
\label{tab:L_frac}
\end{table}
\renewcommand{\arraystretch}{1.0}
Because of finite size effects, we have to be careful when calculating the difference $\Delta E_{\Lambda n}$. Since the $\Lambda$ fraction is small, we can assume that these effects on the total energy are mainly due to the neutrons. By taking the difference between the YNM and PNM energies for the same number of neutrons, the finite size effects should therefore cancel out. We can see the problem from a different, equivalent point of view. The starting point is the energy of PNM obtained with 66 neutrons in the box. If we consider the $\Lambda$~matter described by $66n+1\Lambda$ or $66n+2\Lambda$, there is no problem in evaluating $\Delta E_{\Lambda n}$. When moving to a different $\Lambda$ fraction, the number of neutrons $\mathcal M$ in the strange matter has to be changed. In order to take care of the modified neutron shell, a reasonable approach is to correct the YNM energy by the difference between the PNM ``core'' energies computed with 66 and $\mathcal M$ neutrons:
\begin{align}
E_{\Lambda n}^{corr}(\rho_b,x_\Lambda)&=E_{\Lambda n}^{\mathcal M}(\rho_b,x_\Lambda)
+\Bigl[E_{\text{PNM}}^{66}(\rho_b)-E_{\text{PNM}}^{\mathcal M}(\rho_b)\Bigr]\nonumber\\[0.5em]
&=E_{\text{PNM}}^{66}(\rho_b)+S_{\Lambda n}(\rho_b)\Bigl(-x_\Lambda+x_\Lambda^2\Bigr)\;.
\end{align}
In this way we obtain
\begin{align}
\Delta E_{\Lambda n}(\rho_b,x_\Lambda)=S_{\Lambda n}(\rho_b)\Bigl(-x_\Lambda+x_\Lambda^2\Bigr)
=E_{\Lambda n}^{\mathcal M}(\rho_b,x_\Lambda)-E_{\text{PNM}}^{\mathcal M}(\rho_b)\;,
\end{align}
that exactly corresponds to the result of Eq.~(\ref{eq:DeltaE}).
We verified that the energy oscillations for different numbers of particles keep the same ordering and relative magnitude around the value for 66 neutrons when the density is changed. This is true, however, only when finite size effects due to the truncation of the interaction are also considered. The effect of the tail corrections to the potential is indeed severe, because it depends on both the number of particles and the density, becoming worse for few particles and at high densities. In order to control these effects, we performed simulations for PNM and YNM with different numbers of neutrons, including tail corrections for the $NN$ potential and also for the $NNN$, $\Lambda N$ and $\Lambda NN$ forces, which are all at the same TPE order and thus have a similar interaction range. The result is that, once all the finite size effects are correctly taken into account, the $\Delta E_{\Lambda n}$ values for different densities and numbers of particles, and thus hyperon fractions, can indeed be compared.
The result of this analysis is reported in Fig.~\ref{fig:E_x}. The values of the difference $\Delta E_{\Lambda n}$ are shown as a function of the $\Lambda$~fraction for different baryon densities up to $\rho_b=0.40~\text{fm}^{-3}$. As expected, the energy difference is almost linear in $x_\Lambda$, at least in the range of $\Lambda$~fractions that it has been possible to investigate. For $x_\Lambda=0.0294,0.0357,0.05$ two hyperons are involved in the calculation. For these cases, we also tried to include the hyperon-hyperon interaction in addition to the AV8'+UIX+$\Lambda N$+$\Lambda NN$ potentials. The $\Lambda\Lambda$ contribution is negligible up to $\rho_b\sim2.5\rho_0$, where some very small effects are found, although compatible with the previous results within the statistical error bars. For densities higher than $\rho_b=0.40~\text{fm}^{-3}$, finite size effects become harder to correct. Although the distribution of energy values generally follows the trend of the lower density data, the approximations used to compute $\Delta E_{\Lambda n}$ might not be accurate enough. A more refined procedure to reduce the dependence on the shell closure, for example involving twist-averaged boundary conditions, is possibly needed.
\begin{figure}[!hbt]
\centering
\includegraphics[width=\linewidth]{E_x.pdf}
\caption[YNM and PNM energy difference vs. $\Lambda$ fraction]
{YNM and PNM energy difference as a function of the $\Lambda$~fraction for different baryon densities.
The employed potential is the full AV8'+UIX+$\Lambda N$+$\Lambda NN$ parametrization~(\hyperlink{par_II}{II}).
Dashed lines correspond to the quadratic fit $\Delta E_{\Lambda n}(x_\Lambda)=S_{\Lambda n}(-x_\Lambda+x_\Lambda^2)$.
In the range of $\Lambda$ fraction shown, $\Delta E_{\Lambda n}$ is essentially given by the linear term in $x_\Lambda$.}
\label{fig:E_x}
\end{figure}
We used the quadratic function $\Delta E_{\Lambda n}(x_\Lambda)=S_{\Lambda n}(-x_\Lambda+x_\Lambda^2)$ to fit the $\Delta E_{\Lambda n}$ values of Fig.~\ref{fig:E_x}. The coefficient $S_{\Lambda n}$ obtained for each density is then plotted as a function of the baryon density in Fig.~\ref{fig:S_rho}.
In the case of asymmetric nuclear matter, close to the saturation density the nuclear symmetry energy is parametrized by a linear function of the density~\cite{Gandolfi:2012}. The data in Fig.~\ref{fig:S_rho} indeed display a linear behavior for $\rho_b\sim\rho_0$, but the trend deviates at larger densities. We can therefore fit the $S_{\Lambda n}$ points including the second-order term in the expansion in $\rho_b-\rho_0$:
\begin{align}
S_{\Lambda n}(\rho_b)=S_{\Lambda n}^{(0)}+S_{\Lambda n}^{(1)}\left(\frac{\rho_b-\rho_0}{\rho_0}\right)
+S_{\Lambda n}^{(2)}\left(\frac{\rho_b-\rho_0}{\rho_0}\right)^2\;. \label{eq:S_rho}
\end{align}
The results are shown in Fig.~\ref{fig:S_rho} with the dashed line. The three parameters of the $S_{\Lambda n}(\rho_b)$ function are reported in Tab.~\ref{tab:S_rho}.
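As an illustration of this two-step fitting procedure (with made-up input numbers, not the actual AFDMC data or analysis code), a minimal Python sketch could read:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

rho0 = 0.16                                        # fm^-3

# step 1: at a given density, fit Delta E(x) = S_Ln * (-x + x^2).
# The numbers below are illustrative only; the real inputs are the
# AFDMC points of Fig. (E_x).
x  = np.array([0.0149, 0.0182, 0.0256, 0.0294, 0.0357, 0.0500, 0.0667])
dE = np.array([-1.7, -2.0, -2.8, -3.2, -3.8, -5.3, -7.0])      # MeV
quad = lambda x, S: S*(-x + x**2)
S_val, S_cov = curve_fit(quad, x, dE)
print(S_val[0], np.sqrt(S_cov[0, 0]))

# step 2: fit Eq. (S_rho) to the S_Ln values collected at each density
rho   = np.array([0.08, 0.16, 0.24, 0.32, 0.40])               # fm^-3
S_pts = np.array([39.9, 65.6, 86.3, 101.8, 112.3])             # MeV, illustrative
f = lambda r, s0, s1, s2: (s0 + s1*(r - rho0)/rho0
                           + s2*((r - rho0)/rho0)**2)
pars, cov = curve_fit(f, rho, S_pts)
print(pars)                                        # cf. Tab. (S_rho)
\end{verbatim}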
\renewcommand{\arraystretch}{1.4}
\begin{table}[!hb]
\centering
\begin{tabular*}{\linewidth}{@{\hspace{5.0em}\extracolsep{\fill}}ccc@{\extracolsep{\fill}\hspace{5.0em}}}
\toprule
\toprule
$S_{\Lambda n}^{(0)}$ & $S_{\Lambda n}^{(1)}$ & $S_{\Lambda n}^{(2)}$ \\
\midrule
65.6(3) & 46.4(1.6) & -10.2(1.3) \\
\bottomrule
\bottomrule
\end{tabular*}
\caption[Coefficients of the hyperon symmetry energy fit]
{Coefficients (in MeV) of the hyperon symmetry energy function of Eq.~(\ref{eq:S_rho}).
The parameters are obtained by fitting Eq.~(\ref{eq:S_rho}) to the $S_{\Lambda n}$ values extracted from the quadratic fits of the $\Delta E_{\Lambda n}$ results reported in Fig.~\ref{fig:E_x}.}
\label{tab:S_rho}
\end{table}
\renewcommand{\arraystretch}{1.0}
\begin{figure}[!ht]
\centering
\includegraphics[width=\linewidth]{S_rho.pdf}
\caption[Hyperon symmetry energy vs. baryon density]
{Hyperon symmetry energy as a function of the baryon density. Red dots are the points obtained by the quadratic fit
$\Delta E_{\Lambda n}(x_\Lambda)=S_{\Lambda n}(-x_\Lambda+x_\Lambda^2)$ on the data of Fig.~\ref{fig:E_x}.
The dashed line is the $S_{\Lambda n}(\rho_b)$ fitted curve of Eq.~(\ref{eq:S_rho}).}
\label{fig:S_rho}
\end{figure}
After fitting the hyperon symmetry energy we have a complete parametrization of the EoS of $\Lambda$~neutron matter, depending on both the baryon density and the $\Lambda$~fraction (Eq.~(\ref{eq:E_YNM})). For $x_\Lambda=0$ the relation reduces to the EoS of PNM, parametrized by the polytrope of Eq.~(\ref{eq:poly}), whose coefficients are reported in Tab.~\ref{tab:E_nmatt}. For $x_\Lambda>0$ the presence of hyperons modifies the PNM EoS through the hyperon symmetry energy and the quadratic term in $x_\Lambda$. The derivation of $S_{\Lambda n}$ has been performed for small $x_\Lambda$ ($\sim10\%$), corresponding to a baryon density up to $\sim3\rho_0$. However, this should be enough to derive at least the $\Lambda$~threshold density by imposing the chemical potential equilibrium condition $\mu_\Lambda=\mu_n$.
Let us start by defining the energy density $\mathcal E$ of $\Lambda$~neutron matter as
\begin{align}
\mathcal E_{\Lambda n}(\rho_b,x_\Lambda)&=\rho_b E_{\Lambda n}(\rho_b,x_\Lambda)+\rho_n m_n+\rho_\Lambda m_\Lambda\;,\nonumber\\[0.2em]
&=\rho_b\Bigl[E_{\Lambda n}(\rho_b,x_\Lambda)+m_n+x_\Lambda\Delta m \Bigr]\;,
\end{align}
where
\begin{align}
\rho_n=(1-x_\Lambda)\rho_b\quad\quad\quad\rho_\Lambda=x_\Lambda \rho_b\;,\label{eq:rho_nl}
\end{align}
and $\Delta m=m_\Lambda-m_n$. For $x_\Lambda=0$ the relation corresponds to the PNM case. The chemical potential is generally defined as the derivative of the energy density with respect to the number density, evaluated at fixed volume:
\begin{align}
\mu=\frac{\partial\mathcal E}{\partial\rho}\Bigg|_V\;.\label{eq:mu}
\end{align}
In AFDMC calculations, because of the requirement of the momentum shell closure, the number of particles has to be fixed. The density is increased by changing the volume, i.e. reducing the size of the simulation box. Therefore, Eq.~(\ref{eq:mu}) must include a volume correction of the form
\begin{align}
\mu=\frac{\partial\mathcal E}{\partial\rho}+\rho\frac{\partial E}{\partial\rho}\;.
\end{align}
Our chemical potentials are thus given by
\begin{align}
\mu_\kappa(\rho_b,x_\Lambda)=\frac{\partial\mathcal E_{\Lambda n}(\rho_b,x_\Lambda)}{\partial\rho_\kappa}
+\rho_\kappa\frac{\partial E_{\Lambda n}(\rho_b,x_\Lambda)}{\partial\rho_\kappa}\;,
\end{align}
where $\kappa=n,\Lambda$ and the derivatives of the energy per particle and energy density must be calculated with respect to $\rho_b$ and $x_\Lambda$:
\begin{align}
\frac{\partial\mathcal F_{\Lambda n}(\rho_b,x_\Lambda)}{\partial\rho_\kappa}
&=\frac{\partial\mathcal F_{\Lambda n}(\rho_b,x_\Lambda)}{\partial\rho_b}\frac{\partial\rho_b}{\partial\rho_\kappa}
+\frac{\partial\mathcal F_{\Lambda n}(\rho_b,x_\Lambda)}{\partial x_\Lambda}\frac{\partial x_\Lambda}{\partial\rho_\kappa}\;.
\end{align}
Recalling Eq.~(\ref{eq:rho_nl}) we have
\begin{align}
\frac{\partial\rho_b}{\partial\rho_n}=1 \quad\quad \frac{\partial\rho_b}{\partial\rho_\Lambda}=1 \quad\quad
\frac{\partial x_\Lambda}{\partial\rho_n}=-\frac{x_\Lambda}{\rho_b} \quad\quad \frac{\partial x_\Lambda}{\partial\rho_\Lambda}=\frac{1-x_\Lambda}{\rho_b}\;,
\end{align}
and thus the neutron and lambda chemical potentials take the form:
\begin{align}
\mu_n(\rho_b,x_\Lambda)&=\frac{\partial\mathcal E_{\Lambda n}}{\partial\rho_b}
-\frac{x_\Lambda}{\rho_b}\frac{\partial\mathcal E_{\Lambda n}}{\partial x_\Lambda}
+(1-x_\Lambda)\rho_b\frac{\partial E_{\Lambda n}}{\partial\rho_b}
-x_\Lambda(1-x_\Lambda)\frac{\partial E_{\Lambda n}}{\partial x_\Lambda}\;,\\[0.5em]
\mu_\Lambda(\rho_b,x_\Lambda)&=\frac{\partial\mathcal E_{\Lambda n}}{\partial\rho_b}
+\frac{1-x_\Lambda}{\rho_b}\frac{\partial\mathcal E_{\Lambda n}}{\partial x_\Lambda}
+x_\Lambda\rho_b\frac{\partial E_{\Lambda n}}{\partial\rho_b}
+x_\Lambda(1-x_\Lambda)\frac{\partial E_{\Lambda n}}{\partial x_\Lambda}\;.
\end{align}
The two surfaces $\mu_n$ and $\mu_\Lambda$ in the $(\rho_b,x_\Lambda)$ space cross each other along the curve $x_\Lambda(\rho_b)$ reported in Fig.~\ref{fig:x_rho}. This curve describes the equilibrium condition $\mu_\Lambda=\mu_n$. It thus defines the $\Lambda$~threshold density through $x_\Lambda(\rho_\Lambda^{th})=0$ and provides the equilibrium $\Lambda$~fraction at each density. For the given parametrization of the hyperon symmetry energy, the threshold density lies around $1.9\rho_0$, consistent with the theoretical indications about the onset of strange baryons in the core of a NS. Once the $\Lambda$~particles appear, the hyperon fraction rapidly increases due to the decrease of energy and pressure that favors the $n\rightarrow\Lambda$ transition (see \S~\ref{sec:ns}). However, there is a saturation effect induced by the repulsive nature of the hyperon-nucleon interaction, which slows down the production of $\Lambda$~particles at higher densities.
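A minimal numerical sketch of this construction is reported below, only as an illustration: the chemical potentials are built from Eq.~(\ref{eq:E_YNM}) with the $S_{\Lambda n}$ coefficients of Tab.~\ref{tab:S_rho}, the derivatives are taken numerically, and the polytrope coefficients are placeholders standing in for the PNM values of Tab.~\ref{tab:E_nmatt}.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

rho0, m_n, m_L = 0.16, 939.565, 1115.683   # fm^-3, MeV, MeV
dm = m_L - m_n

# PNM polytrope of Eq. (poly): E_PNM = a u^alpha + b u^beta, u = rho/rho0.
# Placeholder coefficients: replace with the PNM values of Tab. (E_nmatt).
a, alpha, b, beta = 5.48, 2.42, 12.06, 0.47

def E_pnm(rho):
    u = rho/rho0
    return a*u**alpha + b*u**beta

def S_ln(rho):                       # hyperon symmetry energy, Eq. (S_rho)
    u = (rho - rho0)/rho0
    return 65.6 + 46.4*u - 10.2*u**2

def E_yn(rho, x):                    # energy per particle, Eq. (E_YNM)
    return E_pnm(rho) + S_ln(rho)*(-x + x**2)

def eps(rho, x):                     # energy density [MeV fm^-3]
    return rho*(E_yn(rho, x) + m_n + x*dm)

def d(f, rho, x, wrt, h=1.0e-6):     # central finite differences
    if wrt == 'rho':
        return (f(rho + h, x) - f(rho - h, x))/(2*h)
    return (f(rho, x + h) - f(rho, x - h))/(2*h)

def mu_n(rho, x):                    # neutron chemical potential
    return (d(eps, rho, x, 'rho') - x/rho*d(eps, rho, x, 'x')
            + (1 - x)*rho*d(E_yn, rho, x, 'rho')
            - x*(1 - x)*d(E_yn, rho, x, 'x'))

def mu_L(rho, x):                    # Lambda chemical potential
    return (d(eps, rho, x, 'rho') + (1 - x)/rho*d(eps, rho, x, 'x')
            + x*rho*d(E_yn, rho, x, 'rho')
            + x*(1 - x)*d(E_yn, rho, x, 'x'))

# threshold density: mu_L = mu_n at vanishing Lambda fraction
rho_th = brentq(lambda r: mu_L(r, 0.0) - mu_n(r, 0.0), 0.2, 0.5)
print(rho_th/rho0)                   # compare with the ~1.9 rho0 quoted above

# equilibrium Lambda fraction x_Lambda(rho_b) above the threshold
for rho in np.linspace(rho_th + 0.02, 0.48, 6):
    x_eq = brentq(lambda x, r=rho: mu_L(r, x) - mu_n(r, x), 1e-6, 0.45)
    print(round(rho, 3), round(x_eq, 3))
\end{verbatim}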
\begin{figure}[!ht]
\centering
\includegraphics[width=\linewidth]{x_rho.pdf}
\caption[$x_\Lambda(\rho_b)$ function and $\Lambda$~threshold density]
{$\Lambda$~fraction as a function of the baryon density. The curve describes the equilibrium condition $\mu_\Lambda=\mu_n$.
The red line is the result for the quadratic fit on the $\Delta E_{\Lambda n}$ data of Fig.~\ref{fig:E_x}.
The blue dotted vertical line indicates the $\Lambda$~threshold density $\rho_\Lambda^{th}$ such that $x_\Lambda(\rho_\Lambda^{th})=0$.}
\label{fig:x_rho}
\end{figure}
By using the $\Lambda$~threshold density $\rho_\Lambda^{th}$ and the equilibrium $\Lambda$~fraction $x_\Lambda(\rho_b)$ in Eq.~(\ref{eq:E_YNM}), we can finally address the $\Lambda$~neutron matter EoS. The result is reported in Fig.~\ref{fig:eos_real}. The green dashed line is the PNM EoS for AV8', the green solid line the one for AV8'+UIX. The red curve is instead the YNM EoS obtained from the AV8'+UIX+$\Lambda N$+$\Lambda NN$~(\hyperlink{par_II}{II}) potentials. At the threshold density there is a strong softening of the EoS induced by the rapid production of hyperons. However, the EoS soon becomes almost as stiff as the PNM one, due to hyperon saturation and to the repulsion among hyperons and neutrons. At $\rho_b=\rho_\Lambda^{th}$ there is a phase transition between PNM and YNM. For densities close to the threshold density the pressure becomes negative. This is an unphysical finite size effect due to the small number of particles considered in the simulations, which is not large enough for the correct description of a phase transition; in the thermodynamic limit the effect should disappear. We could mitigate it by using a Maxwell construction between the PNM and the YNM EoS. The details of the density dependence of the energy per baryon at the hyperon threshold are, however, not relevant for the derivation of the maximum mass.
The derived model for the EoS of $\Lambda$~neutron matter should be a good approximation up to $\rho_b\sim3\rho_0$. The behavior of the energy per baryon beyond this limit depends on densities and $\Lambda$~fractions to which we do not have controlled access with the present AFDMC calculations. Moreover, for $\rho_b>0.6~\text{fm}^{-3}$, $\Sigma^0$ hyperons could also be formed, as shown in Fig.~\ref{fig:chemicalpot}, and the behavior of the energy curve would then be different. However, there are already strong indications for a weak softening of the EoS induced by the presence of hyperons in the neutron bulk when the hyperon-nucleon potentials employed for hypernuclei are used.
\begin{figure}[!hb]
\centering
\includegraphics[width=\linewidth]{eos_V8p+UIX+LN+LNN_parII_real.pdf}
\caption[YNM equation of state]
{Equation of state for $\Lambda$~neutron matter. Green solid (dashed) curves refer to the PNM EoS calculated with the AV8'+UIX (AV8') potential.
The red line is the EoS for YNM corresponding to the quadratic fit of the $\Delta E_{\Lambda n}$ data of Fig.~\ref{fig:E_x}.
The employed hyperon-nucleon potential is the full two- plus three-body in the parametrization~(\hyperlink{par_II}{II}).
The $\Lambda$~threshold density is displayed with the blue dotted vertical line.}
\label{fig:eos_real}
\end{figure}
\subsection{Mass-radius relation and the maximum mass}
\label{subsec:Lnmatt_Mmax}
In Chapter~\ref{chap:strangeness} we have seen that, given the EoS, the mass-radius relation and the predicted maximum mass are uniquely determined. The $M(R)$ curves are the solutions of the TOV equations~(\ref{eq:TOV}), which involve the energy density $\mathcal E$ and the pressure $P$. For YNM the energy density is given by Eq.~(\ref{eq:E_YNM}), supplemented by the hyperon threshold density and the $x_\Lambda(\rho_b)$ curve. For the pressure we can simply use the relation
\begin{align}
P_{\Lambda n}(\rho_b,x_\Lambda)=\rho_b^2\frac{\partial E_{\Lambda n}(\rho_b,x_\Lambda)}{\partial\rho_b}\;,
\end{align}
where the additional term due to the density dependence of the $\Lambda$ fraction vanishes once the equilibrium condition $\mu_\Lambda=\mu_n$ is imposed.
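Given the energy density and the pressure along the equilibrium curve, the integration of the TOV equations is standard. The following sketch, reported only as an illustration, integrates them with an off-the-shelf ODE solver; the energy density--pressure table used here is a toy placeholder and not the YNM EoS of Fig.~\ref{fig:eos_real}.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import interp1d

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30        # SI units

# eps(P): energy density [J/m^3] versus pressure [Pa], to be tabulated
# from the YNM EoS; the power law below is only a toy placeholder
P_tab   = np.logspace(31.0, 36.0, 200)
eps_tab = 5.0e17*c**2*(P_tab/1.0e34)**0.6
log_eps = interp1d(np.log(P_tab), np.log(eps_tab), fill_value='extrapolate')
eps = lambda P: np.exp(log_eps(np.log(P)))

def tov(r, y):                                   # TOV structure equations
    P, m = y
    e = eps(P)
    dPdr = -G*(e + P)/c**2*(m + 4.0*np.pi*r**3*P/c**2) \
           / (r*(r - 2.0*G*m/c**2))
    dmdr = 4.0*np.pi*r**2*e/c**2
    return [dPdr, dmdr]

def star(P_c):
    """Integrate from the centre until the pressure (almost) vanishes."""
    surface = lambda r, y: y[0] - 1.0e-10*P_c
    surface.terminal = True
    sol = solve_ivp(tov, [1.0, 100.0e3], [P_c, 0.0],
                    events=surface, max_step=100.0, rtol=1e-8)
    return sol.t[-1]/1.0e3, sol.y[1, -1]/Msun    # radius [km], mass [Msun]

for Pc in np.logspace(33.5, 35.5, 9):            # scan of central pressures
    R_km, M = star(Pc)
    print(round(R_km, 1), round(M, 2))
\end{verbatim}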
Fig.~\ref{fig:mofr} reports the $M(R)$ curves obtained by solving the TOV equations for the EoS of Fig.~\ref{fig:eos_real}. The green curves are the PNM relations for AV8' (dashed) and AV8'+UIX (solid). The red one is the result for the $\Lambda$~neutron matter described by the full nucleon-nucleon and hyperon-nucleon interaction in the parametrization~(\hyperlink{par_II}{II}). The shaded area corresponds to the region excluded by the causality condition~\cite{Steiner:2010}
\begin{align}
M\lesssim \beta\frac{c^2}{G} R \quad\quad\quad \beta=\frac{1}{2.94}\;,
\end{align}
where $G$ is the gravitational constant and $c$ the speed of light. The curves obtained with the inclusion of the TNI partially enter the forbidden region. This is due to the behavior of our EoS, which becomes superluminal when evaluated at very high densities. A matching to the maximally stiff EoS given by the condition $P<1/3\,\mathcal E$ would be needed. However, we estimate the effect on the maximum mass to be rather small, not changing the general picture.
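As a quick numerical illustration of the causality bound (not part of the analysis), evaluating it at $R=12.5$~km gives roughly $2.9M_\odot$:
\begin{verbatim}
G, c, Msun = 6.674e-11, 2.998e8, 1.989e30   # SI units
R = 12.5e3                                  # m
print(c**2*R/(2.94*G*Msun))                 # ~2.9 solar masses
\end{verbatim}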
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{mofr.pdf}
\caption[YNM mass-radius relation]
{Mass-radius relation for $\Lambda$~neutron matter. Green solid (dashed) curves refer to the PNM calculation with the AV8'+UIX (AV8') potential.
The red line is the result for YNM corresponding to the quadratic fit of the $\Delta E_{\Lambda n}$ data of Fig.~\ref{fig:E_x}.
The light blue and brown bands correspond to the masses of the millisecond pulsars PSR J1614-2230 ($1.97(4)M_\odot$)~\cite{Demorest:2010} and
PSR J0348+0432 ($2.01(4)M_\odot$)~\cite{Antoniadis:2013}. The gray shaded region is the part of the plot excluded by causality.}
\label{fig:mofr}
\end{figure}
The maximum mass for PNM obtained using the Argonne~V8' and Urbana~IX potentials is reduced from $\sim2.45M_\odot$ to $\sim2.40M_\odot$ by the inclusion of $\Lambda$~hyperons. This small reduction follows from the stiffness of the YNM EoS at densities larger than $\rho_b\sim3\rho_0$, up to which our model gives a good description of the strange system. However, even limiting the construction of the $M(R)$ relation to the range of validity of the employed YNM model, the mass of the star already reaches $\sim1.81M_\odot$ around $R=12.5$~km, and $\sim1.98M_\odot$ if we extend the range up to $\rho_b=0.55~\text{fm}^{-3}$. These values are larger than the maximum masses predicted for hypermatter by all (B)HF calculations (see \S~\ref{sec:ns}).
Regardless of the details of the actual behavior of the EoS for $\rho_b>3\rho_0$, we can speculate that a maximum mass of $2M_\odot$ can be supported by $\Lambda$~neutron matter described by means of the realistic AV8'+UIX potentials plus the two- and three-body hyperon-nucleon interactions developed here. The key ingredient of this picture is the inclusion of the repulsive $\Lambda NN$ force, which has been proven to give a fundamental contribution to the realistic description of $\Lambda$~hypernuclei. Although very preliminary, our first AFDMC calculations for hypermatter suggest that a $2M_\odot$ neutron star including hyperons can actually exist.
\begin{figure}[p]
\centering
\includegraphics[width=\linewidth]{mofrho.pdf}
\caption[YNM mass-central density relation]
{Stellar mass versus central density for $\Lambda$~neutron matter. The key is the same as in Fig.~\ref{fig:mofr}.
The vertical blue dotted line represents the maximum central density for the stability of the star when TNI forces are considered.}
\label{fig:mofrho}
\end{figure}
The solution of the TOV equations provides additional information on the central density $\rho_c$ of the star. The behavior of the star mass as a function of the central density determines the stability condition of the NS through the relation $dM(\rho_c)/d\rho_c>0$. For non-rotating neutron stars, configurations that violate this condition are unstable and will collapse into black holes~\cite{Haensel:2006}. As can be seen from Fig.~\ref{fig:mofrho}, where the mass-central density relation is reported, the maximum mass also determines the maximum central density for stable NSs. Within our model, $\rho_c^{\max}$ is around $5.7\rho_0$ for both PNM and YNM when the three-nucleon force is considered in the calculation. Given that the inter-particle distance scales as $\rho_c^{-1/3}$, we can estimate that at $\rho_c^{\max}$ the baryons are not extremely packed: the baryon-baryon distances are of the order of a few fermi, comparable to the range of the hard core of the nucleon-nucleon and hyperon-nucleon interactions considered. Therefore, in this framework there is no evidence for the appearance of exotic phases such as quark matter. Our YNM EoS is stiff enough to realistically describe the infinite medium supporting a $2M_\odot$ NS without requiring additional degrees of freedom for the inner core.
\newpage
\phantom{Empty page}
\chapter{Conclusions}
\label{chap:conclusion}
\fancyhead[LO]{\emph{Conclusions}}
In this work the recent developments in Quantum Monte Carlo calculations for nuclear systems including strange degrees of freedom have been reported. The Auxiliary Field Diffusion Monte Carlo algorithm has been extended to the strange sector by the inclusion of the lightest among the hyperons, the $\Lambda$~particle. This gave us the chance to perform detailed calculations for $\Lambda$~hypernuclei, providing a microscopic framework for the study of the hyperon-nucleon interaction in connection with the available experimental information. The extension of the method to strange neutron matter laid the basis for the first Diffusion Monte Carlo analysis of the hypernuclear medium, with the derivation of neutron star observables of great astrophysical interest.
The main outcome of the study of $\Lambda$~hypernuclei is that, within the employed phenomenological model for hyperon-nucleon forces, the inclusion of a three-body $\Lambda NN$ interaction is fundamental to reproduce the ground state physics of medium-heavy hypernuclei, in particular the observed saturation property of the hyperon binding energy. By accurately refitting the three-body hyperon-nucleon interaction, we obtain a substantial agreement with the experimental separation energies, which are strongly overestimated when a bare $\Lambda N$ interaction is used. The result is of particular interest because, with the employed algorithm, heavy hypernuclei up to 91 particles have been investigated within the same theoretical framework, providing a realistic description able to reproduce the extrapolation of the hyperon binding energy to the infinite medium. By employing an effective hyperon-hyperon interaction, first steps in the study of $S=-2$ $\Lambda$~hypernuclei have also been taken. The interest in these systems is motivated by the controversial results coming from both theoretical and experimental studies.
Preliminary AFDMC results on hypermatter indicate that the hyperon-nucleon interaction fitted on finite strange nuclei leads to a stiff equation of state for the strange infinite medium. Within our model, $\Lambda$~particles start to appear in the neutron bulk at around twice the saturation density, consistently with different theoretical predictions. However, the predicted softening of the equation of state does not seem to be dramatic, owing to the strongly repulsive nature of the employed three-body hyperon-nucleon force. This fact helps to understand how the necessary appearance of hyperons at some value of the nucleon density in the inner core of a neutron star might eventually be compatible with the observed neutron star masses of order $2M_\odot$.
Both the works on hypernuclei and on hypermatter represent the first Diffusion Monte Carlo studies of finite and infinite strange nuclear systems, and are thus subject to further improvements. The algorithm for (hyper)nuclei should be refined in order to become less dependent on the starting trial wave function, which should also include correlations beyond the purely central ones. Together with an accurate treatment of the tensor (and spin-orbit) potential terms and, possibly, with the inclusion of the density-dependent nucleon-nucleon interaction developed in the framework of correlated basis functions~\cite{Lovato:2011}, the algorithm might become a powerful tool for the precise investigation not only of energy differences but also of other structural ground state properties such as densities and radii. From the methodological point of view, the algorithm for infinite strange systems could benefit from the inclusion of twist-averaged boundary conditions, which would allow for a more refined study of the equation of state of the hypernuclear medium and thus of the maximum mass.
It would be interesting to perform benchmark calculations with the employed hyperon-nucleon force by means of few-body methods. This would reduce the uncertainties on the fitted interaction, providing more insight into the structure of the phenomenological potential for light hypernuclei. On the other hand, by projecting the three-body interaction onto the triplet and singlet isospin channels, it would be possible to fit the experimental data for large hypernuclei in order to better capture the features of the interaction that are relevant for neutron star physics, without significantly changing the compatibility of the results with the lighter strange nuclei. This could definitively establish a stiff equation of state for hyperon neutron matter supporting a $2M_\odot$ star.
In the same context, the study of asymmetric nuclear matter with the inclusion of hyperon degrees of freedom would be very welcome. At present this project has not yet started, and the goal is far from being achieved. However, this is one of the most promising directions for describing the properties of stellar matter at high densities by means of accurate microscopic calculations with realistic interactions.
The very recent indication of a bound $\Lambda nn$ three-body system~\cite{Rappold:2013_PRC(R)} might motivate the AFDMC investigation of hyper neutron drops. Weakly bound systems are typically not easily accessible by means of the standard AFDMC method for finite systems. The study of neutron systems confined by an external potential with the inclusion of one or more hyperons could give fundamental information about the hyperon-neutron and hyperon-hyperon interactions, in connection with the experimental evidence of light neutron-rich hypernuclei, such as $^6_\Lambda$H~\cite{Agnello:2012_H6L}, or with the theoretical speculation on exotic neutron systems, such as a bound $\Lambda\Lambda nn$ system.
\chapter{Introduction}
\label{chap:introduction}
\fancyhead[LO]{\emph{Introduction}}
\fancyhead[RE]{\emph{Introduction}}
\hypersetup{linkcolor=blue}
\renewcommand{\thefigure}{\emph{i}.\arabic{figure}}
Neutron stars (NS) are among the densest objects in the Universe, with central densities several times larger than the nuclear saturation density $\rho_0=0.16~\text{fm}^{-3}$. As soon as the density significantly exceeds this value, the structure and composition of the NS core become uncertain.
Moving from the surface towards the interior of the star, the stellar matter undergoes a number of transitions, as sketched in Fig.~\ref{fig:NS_structure}. From electrons and neutron-rich ions in the outer envelopes, the composition is expected to change to $npe\mu$ matter in the outer core, a degenerate gas of neutrons, protons, electrons and muons. At densities larger than $\sim2\rho_0$ the $npe\mu$ assumption can become invalid due to the appearance of new hadronic degrees of freedom or exotic phases.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.65\linewidth]{NS_Structure.pdf}
\caption[Neutron star structure]{Schematic structure of a neutron star. Stellar parameters strongly depend on the equation of state of the core.
Figure taken from Ref.~\cite{Haensel:2006}.}
\label{fig:NS_structure}
\end{center}
\end{figure}
In the pioneering work of 1960~\cite{Ambartsumyan:1960}, Ambartsumyan and Saakyan reported the first theoretical evidence of hyperons in the core of a NS.
Contrary to terrestrial conditions, where hyperons are unstable and decay into nucleons through the weak interaction, the equilibrium conditions in a NS can make the inverse process happen. At densities of the order $2\div3\rho_0$, the nucleon chemical potential is large enough to make the conversion of nucleons into hyperons energetically favorable. This conversion reduces the Fermi pressure exerted by the baryons, and makes the equation of state (EoS) softer. As a consequence, the maximum mass of the star is typically reduced.~\nocite{Shapiro:1983}
Nowadays many different approaches to hyperonic matter are available, but there is no general agreement among the predicted results for the EoS and the maximum mass of a NS including hyperons. Some classes of methods extended to the hyperonic sector predict that the appearance of hyperons at around $2\div3\rho_0$ leads to a strong softening of the EoS and consequently to a large reduction of the maximum mass. Other approaches, instead, indicate much weaker effects as a consequence of the presence of strange baryons in the core of the star.
The situation has recently become even more controversial as a result of the latest astrophysical observations. Until 2010, the value of $1.4\div1.5M_\odot$ for the maximum mass of a NS, inferred from precise neutron star mass determinations~\cite{Thorsett:1999}, was considered the canonical limit. The first neutron star matter calculations with the inclusion of hyperons seemed to agree with this value better than the purely nucleonic EoS, which predicts relatively large maximum masses ($1.8\div2.4M_\odot$)~\cite{Akmal:1998}. The recent measurements of the unusually high masses of the millisecond pulsars PSR J1614-2230 ($1.97(4)M_\odot$)~\cite{Demorest:2010} and PSR J0348+0432 ($2.01(4)M_\odot$)~\cite{Antoniadis:2013} rule out almost all of these results, making the appearance of strange baryons in high-density matter uncertain. In the last three years new models compatible with the recent observations have been proposed, but many inconsistencies still remain. The solution of this problem, known as the \emph{hyperon puzzle}, is still far from being understood.
The difficulty in correctly describing the effect of strange baryons in the nuclear medium is that one needs a precise solution of a many-body problem for a very dense system with strong and complicated interactions that are often poorly known.
The determination of a realistic interaction among hyperons and nucleons capable of reconciling the terrestrial measurements on hypernuclei with the NS observations is still an open question. The amount of data available for nucleon-nucleon scattering and binding energies is enough to build satisfactory models of nuclear forces, either purely phenomenological or built on the basis of an effective field theory. The same approaches have been used to derive potentials for the hyperon-nucleon and hyperon-hyperon interactions, but the accuracy of these models is far from that of their non-strange counterparts. The main reason for this is the lack of experimental information, due to the impossibility of collecting hyperon-neutron and hyperon-hyperon scattering data. This implies that interaction models must be fitted mostly on binding energies (and possibly excitations) of hypernuclei. In recent years several measurements of the energies of hypernuclei have become available. These can be used to validate or to constrain the hyperon-nucleon interactions within the framework of many-body systems. The ultimate goal is then to constrain these forces by reproducing at best the experimental energies of hypernuclei, from light systems made of a few particles up to heavier systems.
The method used to accurately solve the many-body Schr\"odinger equation represents the second part of the problem. Accurate calculations are indeed limited to very few nucleons. The exact Faddeev-Yakubovsky equation approach has been applied to systems of up to four particles~\cite{Glockle:1993}. Few-nucleon systems can be accurately described by means of techniques based on shell-model calculations like the No-Core Shell Model~\cite{Navratil:2009}, on the Hyperspherical Harmonics approach~\cite{Barnea:2001,Bacca:2002,Bacca:2004,Barnea:2004,Deflorian:2013}, or on Quantum Monte Carlo methods, like the Variational Monte Carlo~\cite{Wiringa:1991,Wiringa:1992} or the Green Function Monte Carlo~\cite{Pieper:2005,Lusk:2010,Lovato:2013,Gandolfi:2011}. These methods have been proven to solve the nuclear Schr\"odinger equation in good agreement with the Faddeev-Yakubovsky method~\cite{Kamada:2001}. For heavier nuclei, Correlated Basis Function theory~\cite{AriasdeSaavedra:2007}, Cluster Variational Monte Carlo~\cite{Pieper:1990,Pieper:1992} and the Coupled Cluster Expansion~\cite{Heisenberg:1999,Hagen:2010} are typically adopted. In addition, the class of methods which includes the Brueckner-Goldstone~\cite{Day:1967} and the Hartree-Fock~\cite{Vautherin:1972} algorithms is widely used, also for nuclear matter calculations. The drawback of these many-body methods is that they modify the original Hamiltonian to a more manageable form, often introducing uncontrolled approximations in the algorithm. In the absence of an exact method for solving the many-body Schr\"odinger equation for a large number of nucleons, the derivation of model interactions and their applicability in different regimes are subject to an unpleasant degree of arbitrariness.
In this work we address the problem of the hyperon-nucleon interaction from a Quantum Monte Carlo point of view. We discuss the application of the Auxiliary Field Diffusion Monte Carlo (AFDMC) algorithm to study a non-relativistic Hamiltonian based on a phenomenological hyperon-nucleon interaction with explicit two- and three-body components. The method was originally developed for nuclear systems~\cite{Schmidt:1999} and it has been successfully applied to the study of nuclei~\cite{Gandolfi:2006,Gandolfi:2007,Gandolfi:2008}, neutron drops~\cite{Pederiva:2004,Gandolfi:2011,Maris:2013}, nuclear matter~\cite{Gandolfi:2007_SNM,Gandolfi:2010} and neutron matter~\cite{Sarsa:2003,Gandolfi:2009,Gandolfi:2009_gap,Gandolfi:2012}. We have extended this ab-initio algorithm in order to include the lightest of the strange baryons, the $\Lambda$~particle. By studying the ground state properties of single and double $\Lambda$~hypernuclei, information about the employed microscopic hyperon-nucleon interaction is deduced.
The main outcome of the study on finite strange systems is that only the inclusion of explicit $\Lambda NN$ terms provides the necessary repulsion to realistically describe the separation energy of a $\Lambda$~hyperon in hypernuclei of intermediate masses~\cite{Lonardoni:2013_PRC(R),Lonardoni:2013_HYP2012,Lonardoni:2013_PRC}. The analysis of single-particle densities confirms the importance of the inclusion of the $\Lambda NN$ contribution. On the grounds of this observation, the three-body hyperon-nucleon interaction has been studied in detail. By refitting the coefficients in the potential, it has been possible to reproduce at the same time the available experimental data accessible with AFDMC calculations in a medium-heavy mass range~\cite{Lonardoni:2013_PRC}. Other details of the hypernuclear force, like the charge symmetry breaking contribution and the effect of a $\Lambda\Lambda$ interaction, have been successfully analyzed. The AFDMC study of $\Lambda$~hypernuclei thus results in a realistic phenomenological hyperon-nucleon interaction that accurately describes the ground state physics of medium-heavy hypernuclei.
The large repulsive contribution induced by the three-body $\Lambda NN$ term makes it very clear that the lack of an accurate Hamiltonian might be responsible for the unrealistic predictions of the EoS, which would tend to rule out the appearance of strange baryons in high-density matter. We speculate that the application of the hyperon-nucleon interaction developed here to the study of the homogeneous medium would lead to a stiffer EoS for $\Lambda$~neutron matter. This might eventually reconcile the physically expected onset of hyperons in the inner core of a NS with the observed masses of order $2M_\odot$.
First steps in this direction have been taken. The study of $\Lambda$~neutron matter at fixed $\Lambda$ fraction shows that the repulsive nature of the three-body hyperon-nucleon interaction is still active and relevant at densities larger than the saturation density. The density threshold for the appearance of $\Lambda$~hyperons has then been derived and the EoS has been computed. Very preliminary results suggest a rather stiff EoS even in the presence of hyperons, implying a maximum mass above the observational limit. The study of hypermatter is still work in progress.
\vspace{2cm}
\hypersetup{linkcolor=black}
\noindent The present work is organized as follows:
\begin{description}
\item[Chapter~\ref{chap:strangeness}:] a general overview of strangeness in nuclear systems, from hypernuclei to neutron stars, is reported,
with reference to terrestrial experiments and astronomical observations.
\item[Chapter~\ref{chap:hamiltonians}:] a description of nuclear and hypernuclear non-relativistic Hamiltonians is presented, with particular attention to the
hyperon-nucleon sector in the two- and three-body channels.
\item[Chapter~\ref{chap:method}:] the Auxiliary Field Diffusion Monte Carlo method is discussed in its original form for nuclear systems and in the newly
developed version with the inclusion of strange degrees of freedom, both for finite and infinite systems.
\item[Chapter~\ref{chap:results_finite}:] the analysis and set up of a realistic hyperon-nucleon interaction are reported in
connection with the AFDMC results for the hyperon separation energy. Qualitative information is also deduced from single-particle densities and root mean square radii
for single and double $\Lambda$~hypernuclei.
\item[Chapter~\ref{chap:results_infinite}:] using the interaction developed for finite strange systems, first Quantum Monte Carlo calculations on
$\Lambda$~neutron matter are presented and the implications of the obtained results for the properties of neutron stars are explored.
\item[Chapter~\ref{chap:conclusion}:] the achievements of this work are finally summarized and future perspectives are discussed.
\end{description}
\newpage
\phantom{Empty page}
Observe the {\em coarse} annotation provided by commonly-used action recognition datasets such as~\cite{kinetics400,HMDB51,ucf101},
where the same action label was assigned to a given complex video action sequence (\eg, \textit{Play Soccer}, \textit{Play Baseball}) typically lasting 10 seconds or 300 frames, thus introducing a lot of ambiguities during training as two or more action categories may contain the same \textbf{atomic action}
(\eg, \textit{Run} is one of the atomic actions for both \textit{Play Soccer} and \textit{Play Baseball}).
\begin{figure*}[ht!]
\begin{center}
\begin{overpic}[width=\linewidth]{images/teaser.png}
\put (2, 22) {Sports/Athletics}
\put (51.5, 24.5) {Playing Musical Instruments}
\put (51.5, 11) {Daily Actions}
\put (2.5, 15.5) {\rotatebox{90}{ \scriptsize{Soccer}}}
\put (2.5, 6.5) {\rotatebox{90}{ \scriptsize{Baseball}}}
\put ( 4, 13.5) {\footnotesize{Run (Dribble)}}
\put (17, 13.5) {\footnotesize{Throw In}}
\put (29, 13.5) {\footnotesize{Shoot}}
\put (41, 13.5) {\footnotesize{Save}}
\put ( 7.5, 4.7) {\footnotesize{Run}}
\put (18.5, 4.7) {\footnotesize{Pitch}}
\put (29, 4.7) {\footnotesize{Swing}}
\put (38, 4.7) {\footnotesize{Catch Flyball}}
\put (54, 16) {\footnotesize{Grand Piano}}
\put (67, 16) {\footnotesize{Cello}}
\put (79, 16) {\footnotesize{Gong}}
\put (88.5, 16) {\footnotesize{Recorder}}
\put (55, 2.8) {\footnotesize{Applaud}}
\put (65, 2.8) {\footnotesize{Waist Bow}}
\put (77, 2.8) {\footnotesize{Fist Bump}}
\put (89, 2.8) {\footnotesize{Salute}}
\end{overpic}
\end{center}
\vspace{-1.5em}
\caption{
HAA500 is a fine-grained atomic action dataset, with fine-level action annotations (\eg, \textit{Soccer-Dribble}, \textit{Soccer-Throw In}) compared to the traditional composite action annotations (\eg, \textit{Soccer}, \textit{Baseball}).
HAA500 is comparable to existing coarse-grained atomic action datasets, where we have distinctions (\eg, \textit{Soccer-Throw In}, \textit{Baseball-Pitch}) within an atomic action (\eg, \textit{Throw Something}) when the action difference is visible. The figure above displays sample videos from three different areas of HAA500. Observe that each video contains one or a few dominant human figures performing the pertinent action.
}
\vspace{-1em}
\label{fig:teaser}
\end{figure*}
Recently, atomic action datasets~\cite{ava_speech,goyal2017something,AVA,ava_speaker,finegym} have been introduced in an attempt to resolve the aforementioned issue. Google's AVA actions dataset~\cite{AVA} provides dense annotations of 80 atomic visual actions in 430 fifteen-minute video clips where actions are localized in space and time. AVA spoken activity dataset~\cite{ava_speaker} contains temporally labeled face tracks in videos, where each face instance is labeled as speaking or not, and whether the speech is audible. Something-Something dataset~\cite{goyal2017something} contains clips of humans performing pre-defined basic actions with daily objects.
However, some of their actions are still coarse and can be further split into atomic classes with significantly different motion gestures. \Eg, AVA~\cite{AVA} and Something-Something~\cite{goyal2017something} contain \textit{Play Musical Instrument} and \textit{Throw Something} as a class, respectively, where the former should be further divided into sub-classes such as \textit{Play Piano} and \textit{Play Cello}, and the latter into \textit{Soccer Throw In} and \textit{Pitch Baseball}, \etc, because each of these atomic actions has significantly different gestures. Encompassing different visual postures into a single class poses an almost insurmountable challenge for a deep neural network to properly learn the pertinent atomic action, which probably explains the prevailing low performance on AVA~\cite{AVA} even with the most state-of-the-art architecture, ACAR-Net (mAP: 38.30\%)~\cite{acarnet}, despite the dataset only having 80 classes.
The other problem with existing action recognition video datasets is that their training examples contain actions irrelevant to the target action.
Video datasets typically have fixed clip lengths, allowing unrelated video frames to be easily included during the data collection stage. The Kinetics 400 dataset~\cite{kinetics400}, with a fixed 10-second clip length, contains a lot of irrelevant actions, \eg, showing the audience before the main \textit{violin playing}, or a person taking a long run before \textit{kicking} the ball.
Another problem is a too limited or too broad field of view, where a video only exhibits a part of a human interacting with an object~\cite{goyal2017something}, or a single video contains multiple human figures performing different actions~\cite{AVA,kinetics400,zhao2019hacs}.
Recently, FineGym~\cite{finegym} has been introduced to address the aforementioned limitations by proposing fine-grained action annotations, \eg, \textit{Balance Beam-Dismount-Salto Forward Tucked}. However, due to the expensive data collection process, it only contains 4 events with atomic action annotations (\textit{Balance Beam}, \textit{Floor Exercise}, \textit{Uneven Bars}, and \textit{Vault-Women}), and its clips were extracted from professional gymnasium videos of athletic or competitive events.
In this paper, we contribute the Human-centric Atomic Action dataset (\textbf{HAA500}), which has been constructed from carefully curated videos with a high average of 69.7\% detectable joints, where a dominant human figure is present to perform the labeled action. The curated videos have been annotated with fine-grained labels to avoid ambiguity, with dense per-frame action labeling and no unrelated frames included during either collection or annotation.
HAA500 contains a wide variety of atomic actions, ranging from athletic atomic action~(\textit{Figure Skating - Ina Bauer}) to daily atomic action~(\textit{Eating a Burger}).
HAA500 is also highly scalable, where adding a class takes only 20--60 minutes.
The clips are class-balanced and contain clear visual signals with little occlusion. As opposed to ``in-the-wild" atomic action datasets, our ``cultivated" clean, class-balanced dataset provides an effective alternative to advance research in atomic visual action recognition and thus video understanding.
Our extensive cross-dataset experiments validate that precise annotation of fine-grained classes leads to preferable properties compared to datasets that are orders of magnitude larger in size.
Figure~\ref{fig:teaser} shows example atomic actions collected.
\vspace{-0.05in}
\setlength{\tabcolsep}{4pt}
\begin{table}
\begin{center}
{\small
\begin{tabular}{c|c|c|c}
\hline
Dataset & Videos & Actions & Atomic \\
\hline
KTH~\cite{DBLP:conf/icpr/SchuldtLC04} & 600 & 6 & \checkmark \\
Weizmann~\cite{DBLP:conf/iccv/BlankGSIB05} & 90 & 10 & \checkmark \\
UCF Sports~\cite{DBLP:conf/cvpr/RodriguezAS08} & 150 & 10 & \\
Hollywood-2~\cite{DBLP:conf/cvpr/MarszalekLS09} & 1,707 & 12 & \\
HMDB51~\cite{HMDB51} & 7,000 & 51 & \\
UCF101~\cite{ucf101} & 13,320 & 101 & \\
DALY~\cite{DBLP:journals/corr/WeinzaepfelMS16} & 510 & 10 & \\
AVA~\cite{AVA} & 387,000 & 80 & \checkmark \\
Kinetics 700~\cite{kinetics700} & 650,317 & 700 & \\
HACS~\cite{zhao2019hacs} & 1,550,000 & 200 & \checkmark \\
Moments in Time~\cite{momentsintime} & 1,000,000 & 339 & \checkmark\\
FineGym~\cite{finegym} & 32,687 & 530 & \checkmark\\
\textbf{HAA500} & \textbf{10,000} & \textbf{500} & \checkmark \\ \hline
\end{tabular}
}
\end{center}
\caption{Summary of representative action recognition datasets.}
\vspace{-1em}
\label{table:Action_datasets}
\end{table}
\section{Related Works}
Table~\ref{table:Action_datasets} summarizes representative action recognition datasets.
\subsection{Action Recognition Dataset}
\paragraph{Composite Action Dataset}
Representative action recognition datasets, such as HMDB51~\cite{HMDB51}, UCF101~\cite{ucf101}, Hollywood-2~\cite{DBLP:conf/cvpr/MarszalekLS09}, ActivityNet~\cite{activitynet}, and Kinetics~\cite{kinetics700,kinetics400}, consist of short clips which are manually trimmed to capture a single action. These datasets are ideally suited for training fully supervised, whole-clip video classifiers. A few datasets used in action recognition research, such as MSR Actions~\cite{DBLP:conf/cvpr/YuanLW09}, UCF Sports~\cite{DBLP:conf/cvpr/RodriguezAS08}, and JHMDB~\cite{DBLP:conf/iccv/JhuangGZSB13}, provide spatio-temporal annotations in each frame for short videos, but they only contain a few actions.
Beyond trimming videos to a shorter length, recent extensions such as UCF101~\cite{ucf101}, DALY~\cite{DBLP:journals/corr/WeinzaepfelMS16}, and Hollywood2Tubes~\cite{DBLP:conf/eccv/MettesGS16} evaluate spatio-temporal localization in untrimmed videos, resulting in a performance drop due to the more difficult nature of the task.
One common issue with these aforementioned datasets is that they are annotated with composite action classes (\eg, \textit{Playing Tennis}), thus different human action gestures (\eg, \textit{Backhand Swing}, \textit{Forehand Swing}) are annotated under a single class. Another issue is that they tend to be captured with a wide field-of-view and thus include multiple human figures (\eg, tennis player, referee, audience) performing different actions in a single frame, which inevitably introduces confusion into action analysis and recognition.
\vspace{-1em}
\paragraph{Atomic Action Dataset}
To model finer-level events, the AVA dataset~\cite{AVA} was introduced to provide person-centric spatio-temporal annotations on atomic actions similar to some of the earlier works~\cite{DBLP:conf/iccv/BlankGSIB05,gaidon2013temporal,DBLP:conf/icpr/SchuldtLC04}.
Other specialized datasets such as Moments in Time~\cite{momentsintime}, HACS~\cite{zhao2019hacs}, Something-Something~\cite{goyal2017something}, and Charades-Ego~\cite{sigurdsson2016hollywood} provide classes for atomic actions, but none of them is human-centric: some of the video frames are ego-centric and only show part of a human body (\eg, a hand), or show no human action at all. Existing atomic action datasets~\cite{AVA,momentsintime} also tend to define atomicity by English linguistics. \Eg, in Moments in Time~\cite{momentsintime}, \textit{Open} is annotated on video clips of a tulip opening, an eye opening, a person opening a door, or a person opening a package, which are fundamentally different actions sharing only the verb~\textit{open} and could thus be divided further.
\vspace{-1em}
\paragraph{Fine-Grained Action Dataset}
Fine-grained action datasets try to solve ambiguous temporal annotation problems that were discussed in \cite{alwassel2018diagnosing,moltisanti2017trespassing}.
These datasets (\eg, \cite{epickitchens,jigsaws,breakfast,diving48,MPIICooking2,finegym}) use systematic action labeling to annotate fine-grained labels on a small domain of actions. Breakfast~\cite{breakfast}, MPII Cooking 2~\cite{MPIICooking2}, and EPIC-KITCHENS~\cite{epickitchens} offer fine-grained actions for cooking and preparing dishes, \eg, \textit{Twist Milk Bottle Cap}~\cite{breakfast}.
JIGSAWS~\cite{jigsaws}, Diving48~\cite{diving48}, and FineGym~\cite{finegym} offer fine-grained action datasets for surgery, diving, and gymnastics, respectively. While existing fine-grained action datasets are well suited for benchmarks, due to their low variety and the narrow domain of their classes, they cannot easily be extended to general-purpose action recognition.
\vspace{0.1in}
Our HAA500 dataset differs from all of the aforementioned datasets as we provide a wide variety of 500 fine-grained atomic human action classes in various domains, where videos in each class only exhibit the relevant human atomic actions.
\vspace{-0.3em}
\subsection{Action Recognition Architectures}
\vspace{-0.3em}
\setlength{\tabcolsep}{4pt}
\begin{table}[t]
\begin{center}
{\small
\begin{tabular}{r|c|c|c|c}
\hline
& \multicolumn{2}{c|}{Kinetics 400~\cite{kinetics400}}
& \multicolumn{2}{c}{Something V1~\cite{goyal2017something}}\\
Models & Top-1 & Top-5 & Top-1 & Top-5 \\
\hline
TSN (R-50)~\cite{TSN} & 70.6\% & 89.2\% & 20.5\% & 47.5\% \\
2-Stream I3D~\cite{i3d} & 71.6\% & 90.0\% & 41.6\% & 72.2\% \\
TSM (R-50)~\cite{TSM} & 74.1\% & 91.2\% & 47.3\% & 76.2\% \\
TPN (TSM)~\cite{TPN} & 78.9\% & 93.9\% & 50.2\% & 75.8\% \\
\hline
\hline Skeleton-based
& \multicolumn{2}{c|}{Kinetics 400~\cite{kinetics400}}
& \multicolumn{2}{c}{NTU-RGB+D~\cite{nturgbd}}\\
Models & Top-1 & Top-5 & X-Sub & X-View \\
\hline
Deep LSTM~\cite{nturgbd} & 16.4\% & 35.3\% & 62.9\% & 70.3\% \\
ST-GCN~\cite{stgcn} & 30.7\% & 52.8\% & 81.5\% & 88.3\% \\
\hline
\end{tabular}
}
\end{center}
\vspace{-0.5em}
\caption{Performance of previous works on Kinetics 400~\cite{kinetics400}, Something-Something~\cite{goyal2017something}, and NTU-RGB+D~\cite{nturgbd} dataset.
We evaluate on both cross-subject (X-Sub) and cross-view (X-View) benchmarks for NTU-RGB+D.
For a fair comparison, in this paper we use~\cite{kinetics400} rather than~\cite{kinetics700}, as representative action recognition models still use~\cite{kinetics400} for pre-training or benchmarking at the time of writing.
\vspace{-1em}
\label{table:action_recognition_models}
}
\end{table}
Current action recognition architectures can be categorized into two major approaches: 2D-CNN and 3D-CNN. 2D-CNN-based models~\cite{LRCN,feichtenhofer2016convolutional,TSM,DBLP:conf/nips/SimonyanZ14,TSN,TRN} utilize image-based 2D-CNNs on single frames whose features are then aggregated to predict the action. While some methods (\eg, \cite{LRCN}) use RNN modules for temporal aggregation over visual features, TSN~\cite{TSN} shows that simple average pooling can be an effective method for temporal aggregation. To incorporate temporal information into 2D-CNNs, two-stream structures~\cite{feichtenhofer2016convolutional,DBLP:conf/nips/SimonyanZ14} have been proposed that use RGB frames and optical flow as separate inputs to convolutional networks.
3D-CNNs~\cite{i3d,slowfast,stm} take a more natural approach by applying spatio-temporal filters to the image frames. Inspired by~\cite{DBLP:conf/nips/SimonyanZ14}, the two-stream inflated 3D-CNN (I3D)~\cite{i3d} incorporates the two-stream structure into a 3D-CNN. SlowFast~\cite{slowfast} improves on I3D by showing that accuracy increases when 3D kernels are used only in the later layers of the model. A different approach is adopted in TPN~\cite{TPN}, where a high-level temporal pyramid network is designed that can use either a 2D-CNN or a 3D-CNN as its backbone. Some models~\cite{ke2017new,kim2017interpretable,stgcn} use alternative information to predict video action. Specifically, ST-GCN~\cite{stgcn} uses a graph convolutional network to predict the video action from pose estimation. However, these pose-based models do not demonstrate better performance than RGB-frame-based models.
Table~\ref{table:action_recognition_models} tabulates the performance of representative action recognition models on video action datasets, where 2D-skeleton-based models~\cite{nturgbd,stgcn} show considerably lower accuracy on Kinetics 400~\cite{kinetics400}.
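To make the segment-level aggregation idea concrete, the following PyTorch sketch illustrates TSN-style consensus by averaging per-frame logits from a shared 2D backbone; the ResNet-18 backbone, input size, and class count are placeholder assumptions, not any specific published configuration.
{\small
\begin{verbatim}
import torch
import torch.nn as nn
import torchvision.models as models

class TSNStyleAveraging(nn.Module):
    """Toy TSN-style classifier: run a shared 2D CNN on each sampled
    frame and average the per-frame logits (consensus by mean)."""
    def __init__(self, num_classes=500):
        super().__init__()
        self.backbone = models.resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features,
                                     num_classes)

    def forward(self, clip):            # clip: (B, T, 3, H, W)
        b, t, c, h, w = clip.shape
        logits = self.backbone(clip.reshape(b * t, c, h, w))  # (B*T, C)
        return logits.reshape(b, t, -1).mean(dim=1)           # (B, C)

# Example: 2 clips, 8 sampled frames each.
scores = TSNStyleAveraging()(torch.randn(2, 8, 3, 224, 224))
\end{verbatim}
}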
\vspace{-0.5em}
\section{HAA500}
\subsection{Data Collection}
\vspace{-0.5em}
The annotation of HAA500 consists of two stages: vocabulary collection and video clip selection.
While the bottom-up approach of annotating action labels on selected long videos has often been used in atomic/fine-grained action datasets~\cite{AVA,finegym},
we aim to build a clean and fine-grained dataset for atomic action recognition; thus the video clips are collected based on pre-defined atomic actions following a top-down approach.
\begin{figure*}[t]
\begin{center}
\minipage[t]{0.49\linewidth}
\begin{center}
\begin{overpic}[width=\linewidth]{images/noise1.PNG}
\put (2,0) {\scriptsize{0:0.00}}
\put (30,0) {\scriptsize{Dribbling}}
\put (68,0) {\scriptsize{0:8.00}}
\put (78,0) {\scriptsize{Shooting}}
\put (90,0) {\scriptsize{0:10.00}}
\end{overpic}
(a) Kinetics 400 - Shooting Basketball
\begin{overpic}[width=\linewidth]{images/noise2.PNG}
\put (2,0) {\scriptsize{0:0.00}}
\put (29,0) {\scriptsize{Singing}}
\put (68,0) {\scriptsize{0:8.00}}
\put (77,0) {\scriptsize{Audience}}
\put (90,0) {\scriptsize{0:10.00}}
\end{overpic}
(b) Kinetics 400 - Singing
\end{center}
\endminipage\hfill
\minipage[t]{0.49\linewidth}
\begin{center}
\begin{overpic}[width=\linewidth]{images/noise3.PNG}
\put (2,0) {\scriptsize{0:0.00}}
\put (44,0) {\scriptsize{Long Jump}}
\put (90,0) {\scriptsize{0:3.00}}
\end{overpic}
\vspace{0.2em}
(c) HACS - Long Jump
\begin{overpic}[width=\linewidth]{images/noise4.PNG}
\put (2,0) {\scriptsize{0:0.00}}
\put (90,0) {\scriptsize{0:3.20}}
\end{overpic}
(d) HAA500 - Uneven Bars: Land
\end{center}
\endminipage\hfill
\vspace{0.5em}
\caption{Different types of label noise in action recognition datasets.
\textbf{(a)}: Kinetics 400 has a fixed video length of 10 seconds which cannot accurately annotate quick actions like \textit{Shooting Basketball} where the irrelevant action of dribbling the ball is included in the clip.
\textbf{(b)}: A camera cut can be seen, showing unrelated frames (audience) after the main action.
\textbf{(c)}: By not having frame-accurate clipping, the clip starts with the person-of-interest in midair, who quickly disappears after a few frames, leaving the rest of the video clip without any person in action.
\textbf{(d)}: Our HAA500 accurately annotates the full motion of \textit{Uneven Bars - Land} without any irrelevant frames. All the videos in the class start with the exact frame an athlete puts the hand off the bar, to the exact frame when he/she finishes the landing pose. }
\label{fig:comparison_noise}
\end{center}
\vspace{-2em}
\end{figure*}
\vspace{-1em}
\subsubsection{Vocabulary Collection}
\vspace{-0.5em}
To make the dataset as clean as possible and useful for recognizing fine-grained atomic actions, we narrowed down the scope of our super-classes to 4 areas: \textit{Sport/Athletics}, \textit{Playing Musical Instruments}, \textit{Games and Hobbies}, and \textit{Daily Actions}, where future extension beyond the existing classes is feasible.
We select action labels such that the variations within a class are typically indistinguishable. For example, instead of \textit{Hand Whistling}, we have \textit{Whistling with One Hand} and \textit{Whistling with Two Hands}, as the variation between the two is large and distinguishable.
Our vocabulary collection methodology makes the dataset hierarchical, where atomic actions may be combined to form a composite action, \eg, \textit{Whistling} or \textit{Playing Soccer}.
Consequently, HAA500 contains 500 atomic action classes, where 212 are \textit{Sport/Athletics}, 51 are \textit{Playing Musical Instruments}, 82 are \textit{Games and Hobbies}, and 155 are \textit{Daily Actions}.
\begin{table}[t]
{\small
\setlength\extrarowheight{3pt}
\begin{center}
{\small
\begin{tabular}{|c c c c c|}
\Cline{1-5}
\Thickvrulel{action} & clips & mean length & duration & \Thickvruler{frames} \\ \Cline{1-5}
500 & 10,000 & 2.12s & 21,207s & 591K \\ \hline
\addlinespace
\Cline{0-3}
\Thickvrulel{no. of people} & 1 & 2 & \Thickvruler{$>$2} \\ \Cline{0-3}
& 8,309 & 859 & \multicolumn{1}{c|}{832} \\ \cline{0-3}
\addlinespace
\Cline{0-2}
\Thickvrulel{moving camera} & O & \Thickvruler{X} \\ \Cline{0-2}
& 2,373 & \multicolumn{1}{c|}{7,627} \\ \cline{0-2}
\end{tabular}
}
\end{center}
}
\caption{Summary of HAA500.}
\label{table:dataset_summary}
\vspace{-0.2in}
\end{table}
\vspace{0em}
\subsubsection{Video Clip Selection}
To ensure our dataset is clean and class-balanced, all the video clips are collected from YouTube, with the majority having a resolution of at least 720p and each class of atomic action containing 16 training clips.
We manually select clips with apparent human-centric actions, where the person-of-interest is the only dominant person in the frame, at the center, with their body clearly visible.
To increase diversity among the video clips and avoid unwanted bias, all the clips were collected from different YouTube videos with different environment settings, so that the action recognition task cannot be trivially reduced to identifying the corresponding backgrounds. Clips are trimmed in a frame-accurate manner to cover the desired actions while ensuring that every video clip has compatible actions within its class (\eg, every video in the class \textit{Salute} starts on the exact frame where the person is standing still before moving the arm, and ends when the hand reaches the eyebrow). Refer to Figure~\ref{fig:teaser} again for examples of the selected videos.
\vspace{-1em}
\subsubsection{Statistics}
Table~\ref{table:dataset_summary} summarizes the HAA500 statistics. HAA500 includes 500 atomic action classes where each class contains 20 clips, with an average length of 2.12 seconds.
Each clip was annotated with meta-information which contains the following two fields: the number of dominant people in the video and the camera movement.
\vspace{-1em}
\subsubsection{Training/Validation/Test Sets}
Since all clips are mutually exclusive (each comes from a different source video), every clip appears in only one split. The 10,000 clips are split 16:1:3 per class, resulting in 8,000 training, 500 validation, and 1,500 test clips.
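As a rough illustration only (not the released split tool), the following sketch performs the per-class 16:1:3 split described above; the clip file names and random shuffling are hypothetical, whereas the actual split additionally reserves the ``standard" clip for validation as described in the supplementary material.
{\small
\begin{verbatim}
import random
from collections import defaultdict

def split_haa500(clips_by_class, seed=0):
    """Split 20 clips per class into 16 train / 1 val / 3 test."""
    rng = random.Random(seed)
    splits = defaultdict(list)
    for action, clips in clips_by_class.items():
        assert len(clips) == 20, f"{action} should have 20 clips"
        clips = clips[:]          # copy before shuffling
        rng.shuffle(clips)
        splits["train"] += [(action, c) for c in clips[:16]]
        splits["val"]   += [(action, c) for c in clips[16:17]]
        splits["test"]  += [(action, c) for c in clips[17:]]
    return splits

# Example with hypothetical clip identifiers.
clips = {"soccer_dribble":
         [f"soccer_dribble_{i:03d}.mp4" for i in range(20)]}
print({k: len(v) for k, v in split_haa500(clips).items()})
\end{verbatim}
}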
\begin{figure*}[t]
\begin{center}
\begin{overpic}[width=\linewidth]{images/human_centric.png}
\put (0.4,-0.1) {\rotatebox{90}{ {\fontsize{6}{6}\selectfont Kinetics 400}}}
\put (50.4,2.1) {\rotatebox{90}{ {\fontsize{6}{6}\selectfont AVA}}}
\put (0.4, 9) {\rotatebox{90}{ {\fontsize{6}{6}\selectfont Something}}}
\put (50.4, 10.4) {\rotatebox{90}{ {\fontsize{6}{6}\selectfont HACS}}}
\end{overpic}
\caption{The video clips in AVA, HACS, and Kinetics 400 contain multiple human figures with different actions in the same frame. Something-Something focuses on the target object and barely shows any human body parts. In contrast, all video clips in HAA500
are carefully curated so that each video shows either a single person or the person-of-interest as the most dominant figure in a given frame.}
\label{fig:comparison_human_centric}
\end{center}
\vspace{-0.3in}
\end{figure*}
\setlength{\tabcolsep}{4pt}
\begin{table}[t]
\begin{center}
{\small
\begin{tabular}{c|c|c|c}
\hline
Dataset & Clip Length & Irr. Actions & Camera Cuts \\ \hline
UCF101~\cite{ucf101} & Varies & & \\
HMDB51~\cite{HMDB51} & Varies & & \checkmark \\
AVA~\cite{AVA} & 1 second & \checkmark & \checkmark \\
HACS~\cite{zhao2019hacs} & 2 seconds & \checkmark & \\
Kinetics~\cite{kinetics400} & 10 seconds & \checkmark & \checkmark \\
M.i.T.~\cite{momentsintime} & 3 seconds & & \\
\textbf{HAA500} & Just Right & & \\ \hline
\end{tabular}
}
\end{center}
\caption{Clip length and irrelevant frames of video action datasets.}
\vspace{-1em}
\label{table:comparison_sampling_rate}
\end{table}
\setlength{\tabcolsep}{1.4pt}
\subsection{Properties and Comparison}
\subsubsection{Clean Labels for Every Frame}
Most video datasets~\cite{AVA,kinetics400,ucf101} exhibit strong label noise, due to the difficulty of collecting clean video action datasets. Some~\cite{kinetics400,HMDB51,ucf101} often focus on the ``scene" of the video clip, neglecting the human ``action", thus including irrelevant actions or frames with visible camera cuts in the clip. Also, video action datasets~\cite{AVA,kinetics400,momentsintime,zhao2019hacs} have fixed-length video clips, so irrelevant frames are inevitable for shorter actions. Our properly trimmed video collection guarantees a clean label for every frame.
Table~\ref{table:comparison_sampling_rate} tabulates clip lengths and label noises of video action datasets.
Figure~\ref{fig:comparison_noise} shows examples of label noises. As HAA500 is constructed with accurate temporal annotation in mind, we are almost free from any adverse effects due to these noises.
\vspace{-0.5em}
\subsubsection{Human-Centric}
One potential problem in action recognition is that the neural network may predict by trivially comparing the background scene in the video, or by detecting key elements in a frame (\eg, a basketball to detect \textit{Playing Basketball}), rather than recognizing the pertinent human gesture, in which case action recognition offers no performance improvement over scene/object recognition. The other problem stems from video action datasets where videos captured with a wide field-of-view contain multiple people in a single frame~\cite{AVA,kinetics400,zhao2019hacs}, while videos captured with a narrow field-of-view exhibit only a small portion of the body in interaction with the pertinent object~\cite{goyal2017something,momentsintime}.
In~\cite{AVA}, attempts were made to overcome this issue through spatial annotation of each individual in a given frame. This introduces another problem of action localization, thus further complicating the already difficult recognition task. Figure~\ref{fig:comparison_human_centric} illustrates example frames of different video action datasets.
HAA500 contributes a curated dataset where human joints can be clearly detected in any given frame, thus allowing a model to benefit from learning human movements rather than just performing scene recognition. As tabulated in Table~\ref{table:comparison_joint}, HAA500 has a high ratio of detectable joints~\cite{alphapose} of 69.7\%, well above other representative action datasets.
\begin{table}
\begin{center}
{\small
\begin{tabular}{r|c}
\hline
Dataset & Detectable Joints \\ \hline
Kinetics 400~\cite{kinetics400} & 41.0\% \\ \hline
UCF101~\cite{ucf101} & 37.8\% \\ \hline
HMDB51~\cite{HMDB51} & 41.8\% \\ \hline
FineGym~\cite{finegym} & 44.7\% \\ \hline
\textbf{HAA500} & \textbf{69.7\%} \\ \hline
\end{tabular}
}
\end{center}
\caption{Detectable joints of video action datasets. We use AlphaPose~\cite{alphapose} to detect the largest person in the frame, and count the number of joints with a score higher than $0.5$. }
\label{table:comparison_joint}
\vspace{-1.5em}
\end{table}
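For concreteness, the following sketch approximates how the detectable-joint ratio in Table~\ref{table:comparison_joint} could be computed from AlphaPose-style output (17 keypoints with confidence scores); the field names and output layout are assumptions about the pose estimator's JSON format, not the exact evaluation script.
{\small
\begin{verbatim}
def detectable_joint_ratio(pose_results, score_thresh=0.5):
    """pose_results: per-frame list of detected people, each a dict
    with 'keypoints' as a flat [x, y, score] * 17 list and a 'box'
    as (x, y, w, h). Returns the fraction of joints of the largest
    person whose score exceeds the threshold, averaged over frames."""
    ratios = []
    for people in pose_results:
        if not people:
            ratios.append(0.0)
            continue
        # Pick the largest person in the frame by bounding-box area.
        largest = max(people, key=lambda p: p["box"][2] * p["box"][3])
        scores = largest["keypoints"][2::3]     # every third value
        ratios.append(sum(s > score_thresh for s in scores)
                      / len(scores))
    return sum(ratios) / max(len(ratios), 1)
\end{verbatim}
}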
\begin{table*}[hbt!]
\begin{center}
\minipage[t]{0.347 \linewidth}
{\small
\begin{tabular}{c|c|c|c}
\hline
\multicolumn{2}{c|}{} &
\multicolumn{2}{c}{500 Atomic} \\
\hline
\multicolumn{2}{c|}{Model} & Top-1 & Top-3 \\
\hline
\multirow{4}{*}{I3D~\cite{i3d}} & RGB & 33.53\% & 53.00\% \\
& Flow & 34.73\% & 52.40\% \\
& Pose & 35.73\% & 54.07\% \\
& Three-Stream & 49.87\% & 66.60\% \\
\hline
\multirow{4}{*}{SlowFast~\cite{slowfast}} & RGB & 25.07\% & 44.07\% \\
& Flow & 22.87\% & 36.93\% \\
& Pose & 28.33\% & 45.20\% \\
& Three-Stream & 39.93\% & 56.00\% \\
\hline
\multirow{3}{*}{TSN~\cite{TSN}} & RGB & 55.33\% & 75.00\% \\
& Flow & 49.13\% & 66.60\% \\
& Two-Stream & 64.40\% & 80.13\% \\
\hline
\multirow{1}{*}{TPN~\cite{TPN}} & RGB & 50.53\% & 68.13\% \\
\hline
\multirow{1}{*}{ST-GCN~\cite{stgcn}} & Pose & 29.67\% & 47.13\% \\
\hline
\end{tabular}
}
\endminipage
~~~~~
\minipage[t]{0.205 \linewidth}
{\small
\begin{tabular}{c|c}
\hline
\multicolumn{1}{c|}{Inst.} &
\multicolumn{1}{c}{Inst. with Atomic} \\
\hline
Top-1 & Top-1 \\
\hline
70.59\% & \bf{71.90}\% \\
73.20\% & \bf{77.79}\% \\
69.28\% & \bf{71.90}\% \\
81.70\% & 82.35\% \\
\hline
40.52\% & \bf{50.98}\% \\
71.90\% & 71.90 \% \\
64.71\% & \bf{66.01}\% \\
67.97\% & \bf{73.86}\% \\
\hline
\bf{86.93}\% & 84.31\% \\
79.08\% & \bf{86.27}\% \\
89.54\% & 90.20\% \\
\hline
73.20\% & \bf{75.82}\% \\
\hline
67.32\% & 67.97\% \\
\hline
\end{tabular}
}
\endminipage
~~~~~
\minipage[t]{0.205 \linewidth}
{\small
\begin{tabular}{c|c}
\hline
\multicolumn{1}{c|}{Sport} &
\multicolumn{1}{c}{Sport with Atomic} \\
\hline
Top-1 & Top-1 \\
\hline
47.48\% & \bf{53.93}\% \\
51.42\% & \bf{54.40}\% \\
54.87\% & 55.03 \% \\
68.55\% & \bf{69.81}\% \\
\hline
42.92\% & \bf{44.18}\% \\
44.81\% & \bf{45.91}\% \\
42.45\% & \bf{50.00}\% \\
59.91\% & \bf{62.89}\% \\
\hline
72.64\% & 72.48\% \\
\bf{69.97}\% & 68.24\% \\
\bf{81.13}\% & 78.93\% \\
\hline
61.64\% & \bf{64.15}\% \\
\hline
40.25\% & \bf{43.87}\% \\
\hline
\end{tabular}
}
\endminipage
\end{center}
\caption{ \textbf{Left}: HAA500 trained over different models. \textbf{Right}: Composite action classification accuracy of different models when they are trained with/without atomic action classification. Numbers are bolded when the difference is larger than 1\%. }
\vspace{-0.2in}
\label{table:Experiments}
\end{table*}
\vspace{-1em}
\subsubsection{Atomic}
\begin{figure}[!t]
\vspace{-0.5em}
\begin{center}
\includegraphics[width=\linewidth]{images/atomic.png}
\caption{Coarse-grained atomic action datasets label different actions under a single English action verb. HAA500 (Bottom) has fine-grained classes where the action ambiguities are eliminated as much as possible.}
\label{fig:comparison_atomicity}
\end{center}
\vspace{-2em}
\end{figure}
Existing atomic action datasets such as \cite{ava_speech,AVA,momentsintime} are limited by English linguistics, where actions are decomposed only down to the level of action verbs (\eg, walk, throw, pull, \etc). Such a classification does not fully eliminate the aforementioned problems of composite action datasets. Figure~\ref{fig:comparison_atomicity} shows cases from different atomic action datasets where a single action class contains fundamentally different actions.
In contrast, our fine-grained atomic actions contain only a single type of action under each class, \eg, \textit{Baseball - Pitch}, \textit{Yoga - Tree}, \textit{Hopscotch - Spin}, \etc
\vspace{-1em}
\subsubsection{Scalability}
\vspace{-0.5em}
Requiring only 20 video annotations, or around 600 frames, per class to characterize a human-centric atomic action curated as described above, our class-balanced dataset is highly scalable compared to other representative datasets requiring annotation of hundreds or even thousands of videos. In practice, annotating a class takes around 20--60 minutes, including searching the Internet for videos of the expected quality. The detailed annotation procedure is available in the supplementary material.
\section{Empirical Studies}
We study HAA500 over multiple aspects using widely used action recognition models.
The left of Table~\ref{table:Experiments} shows the performance of the respective models when they are trained on HAA500.
For a fair comparison between different models and training datasets, all the experiments have been performed using the hyperparameters given by the original authors, without ImageNet~\cite{imagenet} pre-training.
For pose-based models other than ST-GCN~\cite{stgcn}, we use three-channel pose joint heatmaps~\cite{alphapose} as input. RGB, Flow~\cite{flownet2}, and Pose~\cite{alphapose} all show relatively similar performance on HAA500, with none showing clearly superior performance over the others. Given that pose heatmaps carry far less information than RGB frames or optical flow, we attribute the competitive pose-based performance to the easily detectable joints in HAA500.
Furthermore, we study the benefits of atomic action annotation on video recognition, as well as the importance of human-centric characteristics of HAA500.
In this paper, we use I3D-RGB~\cite{i3d} with 32 frames for all of our experiments unless otherwise specified. We use AlphaPose~\cite{alphapose} for the models that require human pose estimation.
\subsection{Visualization}
To study atomic action recognition, we train the RGB-I3D model on HAA500, extract embedding vectors from the second-to-last layer, and plot them using truncated SVD and t-SNE. As shown in Figure~\ref{fig:visualization}, the embedding vectors closely mirror the natural hierarchy of human actions. On the left of the figure, we see a clear distinction between classes in \textit{Playing Sports} and classes in \textit{Playing Musical Instruments}. Specifically, in sports, we see similar super-classes, \textit{Snowboarding} and \textit{Skiing}, occupying nearby embedding space, while \textit{Basketball}, \textit{Balance Beam (Gymnastics)}, and \textit{Figure Skating} occupy their own distinct spaces.
We observe super-class clustering of composite actions even though only the atomic action labels were used to train the model.
This visualization hints at the benefit of fine-grained atomic action labeling for composite action classification tasks.
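A minimal sketch of this visualization pipeline is given below, assuming 1024-dimensional clip embeddings have already been extracted from the second-to-last layer; the SVD dimensionality and t-SNE settings are illustrative choices rather than the exact configuration used for Figure~\ref{fig:visualization}.
{\small
\begin{verbatim}
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.manifold import TSNE

def embed_2d(features, svd_dim=50, seed=0):
    """features: (N, 1024) array of clip embeddings taken from the
    second-to-last layer of the trained RGB-I3D model."""
    reduced = TruncatedSVD(n_components=svd_dim,
                           random_state=seed).fit_transform(features)
    return TSNE(n_components=2, random_state=seed,
                init="pca").fit_transform(reduced)

# Example with random stand-in features for 100 clips.
xy = embed_2d(np.random.rand(100, 1024))
\end{verbatim}
}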
\vspace{-0.25em}
\subsection{Atomic Action}
\vspace{-0.25em}
\begin{figure}[t!]
\begin{center}
\minipage[t]{0.49 \linewidth}
\includegraphics[width=\linewidth]{images/vis1.png}
\endminipage
\minipage[t]{0.49 \linewidth}
\includegraphics[width=\linewidth]{images/vis2.png}
\endminipage
\end{center}
\vspace{-1em}
\caption{Visualization of HAA500. We extract 1024-vectors from the second last layer of RGB-I3D and plot them using t-SNE.}
\label{fig:visualization}
\vspace{-1.3em}
\end{figure}
We have previously discussed that modern action recognition datasets introduce ambiguities where two or more composite actions share the same atomic actions, while a single composite action class may contain multiple distinguishable actions (\eg, the composite action \textit{Playing Soccer} includes \textit{Soccer-Dribble}, \textit{Soccer-Throw}, \etc). HAA500 addresses this issue by providing fine-grained atomic action labels that distinguish similar atomic actions across different composite actions.
To study the benefits of atomic action labels, specifically how they help composite action classification for ambiguous classes, we selected two areas from HAA500, \textit{Sports/Athletics} and \textit{Playing Musical Instruments}, in which composite actions exhibit strong ambiguities with other actions in the area. We compare models trained with two different types of labels: 1) only composite labels, and 2) atomic + composite labels, and then evaluate the performance on composite action classification. Results are tabulated on the right of Table~\ref{table:Experiments}. The accuracy of the models trained with only composite labels is listed under the \textit{Inst.} and \textit{Sport} columns, and the accuracy of composite action classification trained jointly with atomic action classification is listed in the other columns.
We observe improvements in composite action classification when atomic action classification is incorporated. The fine-grained action decomposition in HAA500 enables the models to resolve ambiguities among similar atomic actions and helps them learn the subtle differences in atomic actions across different composite actions.
This demonstrates the importance of proper labeling of fine-grained atomic actions, which can increase the performance of composite action classification without changing the model architecture or the training set.
\subsection{Human-Centric}
HAA500 is designed to contain action clips with a high percentage of detectable human figures.
To study the importance of human pose in fine-grained atomic action recognition, we compare the performance on HAA500 and FineGym when both RGB frames and pose estimation are given as input. For pose estimation, we obtain the 17 joint heatmaps from AlphaPose~\cite{alphapose} and merge them into 3 channels: head, upper body, and lower body.
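The following sketch illustrates one plausible way to merge 17 COCO-style joint heatmaps into the three body-part channels described above; the exact joint-to-group assignment and the use of a per-pixel maximum are assumptions, not the released preprocessing code.
{\small
\begin{verbatim}
import numpy as np

# Assumed grouping of the 17 COCO keypoints:
# 0-4 face (head), 5-10 shoulders/elbows/wrists (upper body),
# 11-16 hips/knees/ankles (lower body).
HEAD  = [0, 1, 2, 3, 4]
UPPER = [5, 6, 7, 8, 9, 10]
LOWER = [11, 12, 13, 14, 15, 16]

def merge_joint_heatmaps(heatmaps):
    """heatmaps: (17, H, W) per-joint heatmaps -> (3, H, W) channels
    (head, upper body, lower body), each the per-pixel maximum."""
    return np.stack([heatmaps[HEAD].max(axis=0),
                     heatmaps[UPPER].max(axis=0),
                     heatmaps[LOWER].max(axis=0)])

three_channel = merge_joint_heatmaps(np.random.rand(17, 64, 64))
\end{verbatim}
}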
\begin{table}[t]
{\small
\begin{center}
\begin{tabular}{l |c |c || c }
\hline
& RGB & Pose & RGB + Pose \\
\hline
HAA500 & 33.53\% & 35.73\% & 42.80\% \\
\:\:\: Sport & 38.52\% & 47.33\% & 50.94\% \\
\:\:\: Instrument & 30.72\% & 24.18\% & 32.03\% \\
\:\:\: Hobbies & 31.30\% & 26.42\% & 35.37\% \\
\:\:\: Daily & 28.82\% & 28.60\% & 39.14\% \\
Gym288~\cite{finegym} & 76.11\% & 65.16\% & 77.31\% \\
\hline
\end{tabular}
\end{center}}
\caption{Atomic action classification accuracy when both RGB image and pose estimation are given as an input. We also show performance when they are trained separately for comparison.}
\label{table:human-pose}
\vspace{-0.5em}
\end{table}
Table~\ref{table:human-pose} tabulates the results. In three out of the four areas of HAA500, I3D-RGB shows better performance than I3D-Pose, due to the larger amount of information given to the model. I3D-Pose shows the highest performance on \textit{Sports/Athletics}, which features vibrant and distinctive actions, while it fails to show comparable performance in the \textit{Playing Musical Instruments} area, where predicting the atomic action from only 17 joints is quite challenging. Nonetheless, our experiments show a performance boost when both pose estimation and RGB frames are fed to the atomic action classification model, indicating the importance of human action cues in HAA500 classification. For FineGym - Gym288, the rapid athletic movements result in blurred frames, so the human pose is not easily recognizable, which accounts for the relatively insignificant improvement when pose is used.
\section{Observations}
We present notable characteristics observed from HAA500 with our cross-dataset experiments.
\vspace{-1em}
\paragraph{Effects of Fine-Tuning over HAA500}
\begin{table}[t]
{\small
\begin{center}
\begin{tabular}{l|c|c|c}
\hline
& UCF101~\cite{ucf101} & ActNet 100~\cite{activitynet} & HMDB51~\cite{HMDB51} \\
\textbf{Pre-trained} & Top-1 & Top-1 & Top-1 \\
\hline
None & 58.87\% & 43.54\% & 28.56\% \\
AVA~\cite{AVA} & 48.54\% & 30.51\% & 25.28\% \\
Gym288~\cite{finegym} & \textbf{69.94\%} & 43.79\% & 36.24\% \\
UCF101~\cite{ucf101} & - & 42.94\% & 32.37\% \\
ActNet 100~\cite{activitynet} & 57.52\% & - & 28.63\% \\
HMDB51~\cite{HMDB51} & 53.36\% & 39.33\% & - \\
\hline
HAA500 & 68.70\% & \textbf{47.75\%} & \textbf{40.45\%}\\
\:\:\:Relaxed & 62.24\% & 38.30\% & 33.29\% \\
\hline
\end{tabular}
\end{center}}
\vspace{-0.5em}
\caption{Fine-tuning performance on I3D.}
\label{table:transfer}
\vspace{-0.8em}
\end{table}
Here, we test how to exploit the curated HAA500 dataset to detect actions in ``in-the-wild" action datasets. We pre-train I3D-RGB~\cite{i3d} using HAA500 or other video action datasets~\cite{activitynet,AVA,HMDB51,finegym,ucf101} and freeze all the layers except for the last three for feature extraction. We then fine-tune the last three layers on ``in-the-wild" composite action datasets~\cite{activitynet,HMDB51,ucf101}.
Table~\ref{table:transfer} tabulates the fine-tuning results.
Our dataset is carefully curated to have a high variety of backgrounds and people while having consistent actions within each class. Despite being considerably smaller and more ``human-centric" than other action recognition datasets, HAA500's cleanness and high variety make it easily transferable to different tasks and datasets.
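A minimal PyTorch-style sketch of this fine-tuning protocol is given below, freezing everything except the last few top-level modules; how ``the last three layers" map onto module boundaries depends on the specific I3D implementation and is an assumption here.
{\small
\begin{verbatim}
import torch.nn as nn

def freeze_for_finetuning(model: nn.Module, trainable_last_n=3):
    """Freeze all parameters, then unfreeze the last few top-level
    children so only those layers are fine-tuned on the target
    dataset. Returns the trainable parameters for the optimizer."""
    for p in model.parameters():
        p.requires_grad = False
    children = list(model.children())
    for module in children[-trainable_last_n:]:
        for p in module.parameters():
            p.requires_grad = True
    return [p for p in model.parameters() if p.requires_grad]

# Usage (hypothetical): pass the returned list to the optimizer,
# e.g. torch.optim.SGD(freeze_for_finetuning(i3d), lr=0.01)
\end{verbatim}
}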
\vspace{-1em}
\paragraph{Effects of Scale Normalization}
HAA500 has high diversity in human positions across the video collection. Here, we choose one area of HAA500, \textit{Playing Musical Instruments}, to investigate the effect of human-figure normalization on classification accuracy. We manually annotated the bounding box of the person-of-interest in each frame and cropped the frames so that the model focuses on the human action. In Table~\ref{table:normalize}, we test models that were trained to detect either the composite actions or both composite and atomic actions.
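A minimal sketch of this person-of-interest normalization follows, assuming the manually annotated box is given in (x1, y1, x2, y2) pixel coordinates; the crop-and-resize choice is illustrative rather than the exact preprocessing used.
{\small
\begin{verbatim}
import cv2

def normalize_person(frame, box, out_size=(224, 224)):
    """frame: HxWx3 image; box: (x1, y1, x2, y2) manually annotated
    person-of-interest bounding box. Crop and resize so the person
    occupies most of the network input."""
    x1, y1, x2, y2 = [int(v) for v in box]
    crop = frame[max(y1, 0):y2, max(x1, 0):x2]
    return cv2.resize(crop, out_size)
\end{verbatim}
}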
\begin{table}[t]
{\small
\begin{center}
\begin{tabular}{ l |c |c | c | c }
\hline
& \multicolumn{2}{c|}{Original} & \multicolumn{2}{c}{Normalized} \\
& Composite & Both & Composite & Both \\
\hline
I3D-RGB & 66.01\% & 56.86\% & \textbf{75.82\%} & \textbf{77.12\%} \\
I3D-Flow & 73.20\% & \textbf{77.78\%} & \textbf{75.16\%} & 74.51\% \\
2-Stream & 77.78\% & 80.39\% & \textbf{83.01\%} & 80.39\% \\
\hline
\end{tabular}
\end{center}}
\vspace{-0.1in}
\caption{Accuracy improvements on person-of-interest normalization. Numbers are composite action classification accuracy.}
\label{table:normalize}
\vspace{-0.2in}
\end{table}
While HAA500 is highly human-centric with the person-of-interest as the most dominant figure in the frame, action classification on the normalized frames still shows considerable improvement, whether trained on atomic action annotations or composite action annotations. This indicates the importance of spatial annotation for action recognition.
\vspace{-0.5em}
\paragraph{Effects of Object Detection}
In most video action datasets, non-human objects act as a strong bias toward certain classes (\eg, a basketball in \textit{Playing Basketball}).
When highly diverse actions (\eg, \textit{Shooting a Basketball}, \textit{Dribbling a Basketball}, \etc) are annotated under a single class, straightforward deep-learning models tend to suffer from this bias and learn to detect the easiest common factor (the basketball) among the video clips, rather than ``seeing" the pertinent human action. A poorly designed video action dataset thus encourages the action classification model to trivially degenerate into an object detection model.
In HAA500, every video clip in the same class contains compatible actions, making the common factor the ``action", while objects are regarded as ``ambiguities" spread among different classes (\eg, a basketball appears in both \textit{Shooting a Basketball} and \textit{Dribbling a Basketball}). To test the influence of ``objects" in HAA500, we design an experiment similar to the human-pose study in Table~\ref{table:human-pose}, but using object detection heatmaps instead. Here we use Fast RCNN~\cite{fastrcnn} trained on the COCO~\cite{coco} dataset to generate the object heatmaps. Among the 80 detectable objects in COCO, we select 42 objects in 5 categories (sports equipment, food, animals, cutlery, and vehicles) to draw a 5-channel heatmap. As in Table~\ref{table:human-pose}, the heatmap channels are appended to the RGB channels as input.
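The following sketch illustrates one plausible way to render detections into the 5-channel object heatmap described above; the class-to-category mapping shown is hypothetical, and the box-filling scheme is an assumption rather than the exact heatmap generation used in the experiments.
{\small
\begin{verbatim}
import numpy as np

# Hypothetical mapping from COCO class names to the 5 category
# groups (sports equipment, food, animals, cutlery, vehicles).
CATEGORY_OF = {"sports ball": 0, "banana": 1, "dog": 2,
               "fork": 3, "car": 4}

def object_heatmap(detections, hw, score_thresh=0.5):
    """detections: list of (class_name, score, (x1, y1, x2, y2)).
    Returns a (5, H, W) map with detected boxes filled per category,
    keeping the maximum score where boxes overlap."""
    h, w = hw
    heat = np.zeros((5, h, w), dtype=np.float32)
    for name, score, (x1, y1, x2, y2) in detections:
        if score < score_thresh or name not in CATEGORY_OF:
            continue
        c = CATEGORY_OF[name]
        region = heat[c, int(y1):int(y2), int(x1):int(x2)]
        heat[c, int(y1):int(y2), int(x1):int(x2)] = \
            np.maximum(region, score)
    return heat
\end{verbatim}
}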
\begin{table}[t]
{\small
\begin{center}
\begin{tabular}{ l |c |c }
\hline
& RGB & + Object\\
\hline
HAA500 & 33.53\% & 33.73\% \\
\:\: Sport & 38.52\% & 38.68\% \\
\:\: Instrument & 30.72\% & 30.07\% \\
\:\: HAA-COCO & 34.26\% & 34.26\% \\
\hline
UCF101 & 57.65\% & 60.19\% \\
\hline
\end{tabular}
\end{center}}
\vspace{-0.1in}
\caption{Accuracy of I3D when trained with object heatmaps. HAA-COCO denotes the 147 classes of HAA500 expected to contain the selected detectable objects.}
\vspace{-0.1in}
\label{table:object_detection}
\end{table}
Table~\ref{table:object_detection} tabulates the negligible effect of objects on atomic action classification in HAA500, including for the classes expected to contain the selected objects (HAA-COCO), while UCF101 shows improvements when object heatmaps are used as a visual cue.
Given the negligible effect of object heatmaps,
we believe that fine-grained annotation of actions can effectively eliminate unwanted ambiguities or biases (``objects"), while in UCF101 (a composite action dataset) ``objects" can still affect action prediction.
\vspace{-1.2em}
\paragraph{Effects of Dense Temporal Sampling}
The top of Table~\ref{table:action_oriented} tabulates the performance of HAA500 and other datasets as the number of frames used during training and testing varies. The bottom of Table~\ref{table:action_oriented} tabulates the performance with varying strides and a window size of 32 frames, except for AVA which we test with 16 frames. Top-1 accuracies on action recognition are shown, except for AVA which reports mIOU due to the multi-label nature of the dataset.
As expected, most datasets show the best performance when 32 frames are fed to the model. AVA shows a drop in performance due to the irrelevant frames (\eg, action changes, camera cuts, \etc) included in the wider window. While all the datasets show reasonable accuracy when the model only uses a single frame (\ie, when the problem is reduced to ``scene recognition"), both HAA500 and Gym288 show a significant drop compared to their accuracy with 32 frames. While identical backgrounds partly explain the gap for Gym288, for HAA500 this shows that temporal action movements are crucial for detecting atomic actions and cannot be trivially captured by a simple scene-detection model.
We also see that the density of the temporal window is another important factor in atomic action classification. Both HAA500 and Gym288, which are fine-grained action datasets, show larger performance drops when the frames are sampled with strides of 2 or more, reflecting the importance of dense sampling for short temporal action movements in fine-grained action classification.
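A minimal sketch of the frame-index sampling used in this comparison follows, assuming the last frame is replicated (clamped) when the strided window runs past the end of a short clip; the exact sampling code is an assumption.
{\small
\begin{verbatim}
def sample_indices(num_frames, window=32, stride=1):
    """Return `window` frame indices spaced by `stride`, clamped to
    the clip length (the last frame is replicated for short clips)."""
    return [min(i * stride, num_frames - 1) for i in range(window)]

# Examples on a 60-frame clip: dense window vs. stride-4 sampling.
dense   = sample_indices(60, window=32, stride=1)
strided = sample_indices(60, window=32, stride=4)
\end{verbatim}
}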
\begin{table}[t]
{\small
\begin{center}
\begin{tabular}{c|c|c|c|c}
\hline
\# of frames & HAA500 & UCF101~\cite{ucf101} & AVA~\cite{AVA} & Gym288~\cite{finegym} \\
\hline
1 & 19.93\% & 45.57\% & 33.57\% & 39.77\%\\
2 & 23.27\% & 47.26\% & 39.42\% & 44.68\%\\
4 & 24.40\% & 49.30\% & 39.48\% & 51.22\%\\
8 & 24.07\% & 49.80\% & 42.38\% & 59.64\%\\
16 & 28.20\% & 52.31\% & 43.11\% & 69.25\% \\
32 & 33.53\% & 57.65\% & 29.88\% & 76.11\% \\
\hline
stride 2 & 27.47\% & 57.23\% & 41.49\% & 68.68\%\\
stride 4 & 23.87\% & 52.29\% & 40.52\% & 60.76\%\\
stride 8 & 18.47\% & 47.95\% & 38.45\% & 39.31\%\\
\hline
\end{tabular}
\end{center}}
\vspace{-0.1in}
\caption{Performance comparison of I3D-RGB over the number of frames and strides, where in the latter a window size of 32 frames is used, except for AVA which we test with 16 frames.}
\label{table:action_oriented}
\vspace{-0in}
\end{table}
\vspace{-1em}
\paragraph{Quality versus Quantity}
\begin{table}[t]
{\small
\begin{center}
\begin{tabular}{l |c |c}
\hline
& HAA500 & Relaxed \\
\hline
Overall & \textbf{33.53\%} & 22.80\% \\
\:\:\: Sport & \textbf{38.52\%} & 25.47\% \\
\:\:\: Instrument & \textbf{30.72\%} & 28.10\% \\
\:\:\: Hobbies & \textbf{31.30\%} & 20.33\% \\
\:\:\: Daily & \textbf{28.82\%} & 18.71\% \\
\hline
\end{tabular}
\end{center}
\vspace{-0.1in}
\caption{Action classification accuracy of original HAA500 and the relaxed version.}
\label{table:dirty_basic}}
\vspace{-0.20in}
\end{table}
To study the importance of our precise temporal annotation against the size of a dataset, we modify HAA500 by relaxing the temporal annotation requirement, \ie, we take longer clips than the original annotation.
The resulting relaxed-HAA500 consists of 4,400K labeled frames, a significant increase from the original HAA500 with 591K frames.
Table~\ref{table:dirty_basic} tabulates the performance comparison between the original and the relaxed version of HAA500 on the original HAA500 test set.
We observe a performance drop in all areas, with a significant drop in \textit{Playing Sports}, where accurate temporal annotation benefits the most. The performance drop in the \textit{Playing Musical Instruments} area is less significant, as the start/finish of an action is only vaguely defined in these classes.
We also test the fine-tuning performance of relaxed-HAA500, where the bottom-most row of Table~\ref{table:transfer} tabulates the performance drop when relaxed-HAA500 is used for pre-training. Both experiments show the importance of accurate temporal labeling over the size of a dataset.
\vspace{-0.5em}
\section{Conclusion}
\vspace{-0.5em}
This paper introduces HAA500, a new human action dataset with fine-grained atomic action labels and human-centric clip annotations, where the videos are carefully selected such that the relevant human poses are apparent and detectable. With carefully curated action videos, HAA500 does not suffer from irrelevant frames; video clips only exhibit the annotated action. With a small number of clips per class, HAA500 is highly scalable to include more action classes. We have demonstrated the efficacy of HAA500, showing that action recognition can greatly benefit from our clean, highly diversified, class-balanced, fine-grained atomic action dataset, which is human-centric with a high percentage of detectable poses. On top of HAA500, we have also empirically investigated several important factors that can affect the performance of action recognition. We hope HAA500 and our findings will facilitate new advances in video action recognition.
{\small
\bibliographystyle{ieee_fullname}
\section{Video Collection Procedure}
To guarantee a clean dataset with no label noise, we adopt a strict video collection methodology for every class, detailed below.
\begin{enumerate}[itemsep=-2mm]
\item We assign a single annotator for a single class. This is to assure that the same rule applies to every video in a class.
\item The action class is classified as either a continuous action or a discrete action. A discrete action is one with a single distinguishable action sequence (\eg, \textit{Baseball-Swing}, \textit{Yoga-Bridge}, etc.); a continuous action is one without (\eg, \textit{Running}, \textit{Playing Violin}, etc.).
\begin{enumerate}
\item If it is discrete, make an internal rule that defines the action (e.g., \textit{Jumping Jack} starts and ends when the person is standing still, and the video clip contains only a single jump; \textit{Push-up} starts and ends when the person is at the highest point and should contain only a single push-up). Every video should follow the internal rule so that every action in the class has compatible motion.
\item If it is continuous, we take video clips of appropriate length.
\end{enumerate}
\item Here are rules that the annotator has to follow.
\begin{itemize}
\item The 20 videos should be unique to each other, with varied people and varied backgrounds.
\item The person in action should be the dominant person in the frame. If there are people of non-interest, they should not be performing any action.
\item Camera cuts should not exist.
\item Every video should include a large portion of the human body.
\item It is fine to have action variation that does not influence the semantics of the action (\eg, a person can sit or stand in \textit{Whistling with One Hand} as long as the whistling motion exists).
\item The 20 videos are split into train/val/test sets as 16/1/3. The validation set contains the ``standard" body action of the class, and the 3 videos in the test set should be diverse.
\end{itemize}
\item Two or more reviewers other than the annotator review the videos to check for any mistakes.
\end{enumerate}
\section{Experiment Detail}
In this section, we explain some of the experiment details of our paper.
\paragraph{Variable Length of a Video}
For the models in~\cite{i3d,slowfast,TSN,TPN}, we randomly select 32 adjacent frames of a video during training. If the video is shorter than 32 frames, we replicate the last frame to match the size. During testing, we replicate the last frame so the length becomes a multiple of 32, and the video is then divided into mini-clips of size 32. The prediction scores of the mini-clips are averaged to obtain the final prediction. In Table 11, where we train with fewer frames, we zero-pad on both ends to size 16. For ST-GCN~\cite{stgcn}, we follow the same procedure as the original paper, where the video is either truncated or replicated to match a length of 300.
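A minimal sketch of this test-time procedure is given below, assuming frames are stored as a (T, 3, H, W) tensor; the model call is a placeholder, and a real I3D would expect a (B, C, T, H, W) layout.
{\small
\begin{verbatim}
import torch

def predict_variable_length(model, clip, window=32):
    """clip: (T, 3, H, W) tensor of frames. Replicate the last frame
    so T becomes a multiple of `window`, split into mini-clips, and
    average their prediction scores."""
    t = clip.shape[0]
    pad = (-t) % window
    if pad:
        clip = torch.cat([clip, clip[-1:].repeat(pad, 1, 1, 1)], dim=0)
    mini_clips = clip.reshape(-1, window, *clip.shape[1:])
    with torch.no_grad():
        # NOTE: placeholder call; a real I3D expects (B, C, T, H, W),
        # so each mini-clip would need a permute before the forward.
        scores = torch.stack([model(mc.unsqueeze(0))
                              for mc in mini_clips])
    return scores.mean(dim=0)
\end{verbatim}
}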
\paragraph{Implementation}
In all of our experiments, we use PyTorch as our deep learning framework. We use the official code of each model when available. While we use the same hyperparameters that the authors used for their models, for a fair comparison we do not pre-train the models before training.
\begin{figure*}[t!]
\begin{center}
\vspace{-1em}
\input{suppmat_figure0}
\end{center}
\caption{Video samples of different classes.}
\label{fig:supp_0}
\end{figure*}
\begin{figure*}[t!]
\begin{center}
\vspace{-1em}
\input{suppmat_figure1}
\end{center}
\caption{HAA500 contains diverse videos per action class.}
\label{fig:supp_2}
\end{figure*}
\begin{figure*}[t!]
\begin{center}
\vspace{-1em}
\input{suppmat_figure2}
\end{center}
\caption{
Six sample frames of different videos. Frames are evenly spaced, and the first and last sample frames are the first and last frames of the video. In discrete action classes (\textit{Long Jump - Jump}, \textit{Push Up}, \textit{Soccer - Shoot}), every video in the class shows a single motion. For action classes where it is hard to define a single motion (\ie, continuous actions, \eg, \textit{Play Violin}), videos are cut to an appropriate length.
}
\label{fig:supp_3}
\end{figure*}
\begin{figure*}[t!]
\begin{center}
\vspace{-1em}
\includegraphics[width=\linewidth]{images/sport_graph.png}
\end{center}
\caption{Hierarchy of action classes in Sports/Athletics area.}
\label{fig:supp_1}
\end{figure*}
\section{List of Classes in HAA500}
Here, we list classes of HAA500 in each area.
\paragraph{Sports/Athletics}
\begin{enumerate}[itemsep=-2mm]
\item Abseiling
\item Archery
\item Backflip
\item Backward Roll
\item Badminton Overswing
\item Badminton Serve
\item Badminton Underswing
\item Balance Beam Flip
\item Balance Beam Jump
\item Balance Beam Rotate
\item Balance Beam Spin
\item Balance Beam Walk
\item Base Jumping
\item Baseball Baseball Swing
\item Baseball Bunt
\item Baseball Pitch
\item Baseball Run
\item Basketball Dribble
\item Basketball Dunk
\item Basketball Hookshot
\item Basketball Jabstep
\item Basketball Layup
\item Basketball Pass
\item Basketball Shoot
\item Battle-Rope Jumping-Jack
\item Battle-Rope Power-Slam
\item Battle-Rope Rainbow
\item Battle-Rope Russian-Twist
\item Battle-Rope Sideplank
\item Battle-Rope Snake
\item Battle-Rope Wave
\item Bench Dip
\item Bike Fall
\item Billiard Hit
\item Bmx Jump
\item Bmx Ride
\item Bowling
\item Bowls Throw
\item Breakdancing Flare
\item Breakdancing Flip
\item Breakdancing Rotate
\item Breakdancing Support
\item Burpee
\item Canoeing Slalom
\item Canoeing Spring
\item Catch Catcher
\item Catch Flyball
\item Catch Groundball
\item Climb Pole Climb
\item Climbing Icecliff
\item Climbing Rock
\item Climbing Rope Climb
\item Cross Country Ski Slide
\item Cross Country Ski Walk
\item Crossbow Shoot
\item Curling Follow
\item Curling Push
\item Curling Sweep
\item Dart Throw
\item Dips
\item Discus Throw
\item Diving Jump
\item Diving Rotate
\item Diving Sneak
\item Equestrian Dressage
\item Equestrian Jump
\item Equestrian Run
\item Figure Skate I Spin
\item Figure Skate Backward
\item Figure Skate Bielman Spin
\item Figure Skate Camel Spin
\item Figure Skate Donut Spin
\item Figure Skate Forward
\item Figure Skate Hydroblading
\item Figure Skate Inabauer
\item Figure Skate Jump Spin
\item Figure Skate Scratch Spin
\item Figure Skate Sit Spin
\item Floor Rotate
\item Floor Spin
\item Football Catch
\item Football Run
\item Football Throw
\item Forward Fold
\item Forward Jump
\item Forward Roll
\item Frisbee Catch
\item Frisbee Throw
\item Golf Part
\item Golf Swing
\item Grass Skiing
\item Gym Lift
\item Gym Lunges
\item Gym Plank
\item Gym Pull
\item Gym Push
\item Gym Ride
\item Gym Run
\item Gym Squat
\item Hammer Throw
\item Headstand
\item High Jump Jump
\item High Jump Run
\item High Knees
\item Horizontal Bar Flip
\item Horizontal Bar Jump
\item Horizontal Bar Land
\item Horizontal Bar Spin
\item Hula Hoop
\item Hurdle Jump
\item Javelin Run
\item Javelin Throw
\item Jetski
\item Jump Rope Jump
\item Jumping Jack Jump
\item Kayaking
\item Leg Hold Back
\item Leg Hold Flip
\item Leg Hold Front
\item Long Jump Jump
\item Long Jump Run
\item Luge
\item Paragliding
\item Petanque Throw
\item Pole Vault Jump
\item Pole Vault Run
\item Pull Ups
\item Punching Sandbag
\item Punching Speed Bag
\item Push Up
\item Quadruped Hip-Extension
\item Racewalk Walk
\item Ride Bike
\item Ride Horse
\item Ride Motercycle
\item Ride Scooter
\item Ride Unicycle
\item Roller Skating Backward
\item Roller Skating Forward
\item Rowing Boat
\item Running In Place Run
\item Scuba Diving
\item Shotput Throw
\item Side Lunge
\item Sit Up
\item Skateboard Forward
\item Skateboard Grind
\item Skateboard Jump
\item Skeleton
\item Ski Backflip
\item Ski Cork
\item Ski Frontflip
\item Ski Jump Land
\item Ski Jump Mid-Air
\item Ski Jump Slide
\item Skydiving
\item Snorkeling
\item Snowboard Jump
\item Snowboard Slide
\item Snowboarding Forward
\item Soccer Dribble
\item Soccer Header
\item Soccer Save
\item Soccer Shoot
\item Soccer Throw
\item Softball Pitch
\item Speedskating Forward
\item Split Leap
\item Sprint Kneel
\item Sprint Run
\item Sprint Start
\item Star Jumping Jump
\item Surfing
\item Swimming Backstroke
\item Swimming Breast Stroke
\item Swimming Butterfly Stroke
\item Swimming Freestyle
\item Taekwondo High Block
\item Taekwondo Kick
\item Taekwondo Low Block
\item Taekwondo Middle Block
\item Taekwondo Punch
\item Tennis Backhand
\item Tennis Forehand
\item Tennis Serve
\item Tire Pull
\item Tire Sled
\item Trapeze Interacting
\item Trapeze Single
\item Triple Jump Jump
\item Triple Jump Run
\item Uneven Bar Cross
\item Uneven Bar Flip
\item Uneven Bar Jump
\item Uneven Bar Land
\item Uneven Bar Spin
\item Volleyball Overhand
\item Volleyball Pass
\item Volleyball Set
\item Volleyball Underhand
\item Water Skiing
\item Weight Lifting Hang
\item Weight Lifting Overhead
\item Weight Lifting Stand
\item Windsurfing
\item Workout Chest-Pull
\item Workout Crunch
\item Yoga Bridge
\item Yoga Cat
\item Yoga Firefly
\item Yoga Tree
\item Yoga Updog
\end{enumerate}
\paragraph{Daily Actions}
\begin{enumerate}[itemsep=-2mm]
\setcounter{enumi}{212}
\item Add New Car Tire
\item Adjusting Glasses
\item ALS Icebucket Challenge
\item Answering Questions
\item Applauding
\item Applying Cream
\item Arm Wave
\item Bandaging
\item Bending Back
\item Blowdrying Hair
\item Blowing Balloon
\item Blowing Glass
\item Blowing Gum
\item Blowing Kisses
\item Blowing Leaf
\item Blowing Nose
\item Bowing Fullbody
\item Bowing Waist
\item Brushing Floor
\item Brushing Hair
\item Brushing Teeth
\item Burping
\item Calfrope Catch
\item Calfrope Rope
\item Calfrope Subdue
\item Carrying With Head
\item Cartwheeling
\item Cast Net
\item Chainsaw Log
\item Chainsaw Tree
\item Chalkboard
\item Chewing Gum
\item Chopping Meat
\item Chopping Wood
\item Cleaning Mirror
\item Cleaning Mopping
\item Cleaning Sweeping
\item Cleaning Vacumming
\item Cleaning Windows
\item Clear Snow Off Car
\item Climb Ladder
\item Climb Stair
\item Climbing Tree
\item Closing Door
\item CPR
\item Crawling
\item Cross Body Shoulder Stretch
\item Cutting Onion
\item Dabbing
\item Dog Highfive
\item Dog Walking
\item Drinking With Cup
\item Drinking With Straw
\item Eat Apple
\item Eat Burger
\item Eat Spagetti
\item Eating Hotdogs
\item Eating Ice Cream
\item Eating Oyster
\item Face Slapping
\item Falling Off Chair
\item Fire Extinguisher
\item Fist Bump
\item Flamethrower
\item Folding Blanket
\item Folding Clothes
\item Gas Pumping To Car
\item Guitar Smashing
\item Hailing Taxi
\item Haircut Scissor
\item Hammering Nail
\item Hand In Hand
\item Hand-Drill Firemaking Blow
\item Hand-Drill Firemaking Drill With Bow
\item Hand-Drill Firemaking Drill With Hand
\item Handsaw
\item Handshake Dog
\item Hanging Clothes
\item Headbang
\item Heimlich Maneuver
\item High Five
\item Hold Baby
\item Hold Baby With Wrap
\item Hookah
\item Hugging Animal
\item Hugging Human
\item Ironing Clothes
\item Jack Up Car
\item Kick Open Door
\item Kiss
\item Leaf Blowing
\item Milking
\item Neck Side Pull Stretch
\item Opening Door
\item Pancake Flip
\item Peeling Banana
\item Pizza Dough Toss
\item Plunging Toilet
\item Pottery Wheel
\item Pouring Wine
\item Push Car
\item Push Wheelchair
\item Push Wheelchair Alone
\item Putting Scarf On
\item Read Newspaper
\item Reading Book
\item Remove Car Tire
\item Rescue Breathing
\item Riding Camel
\item Riding Elephant
\item Riding Mechanical Bull
\item Riding Mule
\item Riding Ostrich
\item Riding Zebra
\item Rolling Snow
\item Salute
\item Screw Car Tire
\item Setup Tent
\item Shake Cocktail
\item Shaking Head
\item Shaving Beard
\item Shoe Shining
\item Shoveling Snow
\item Sledgehammer Strike Down
\item Smoking Exhale
\item Smoking Inhale
\item Spitting On Face
\item Spraying Wall
\item Sticking Tongue Out
\item Stomping Grapes
\item Styling Hair
\item Swinging Axe On A Tree
\item Talking Megaphone
\item Talking On Phone
\item Throwing Bouquet
\item Using Inhaler
\item Using Lawn Mower
\item Using Lawn Mower Riding Type
\item Using Metal Detector
\item Using Scythe
\item Using Spinning Wheel
\item Using String Trimmer
\item Using Typewriter
\item Walking With Crutches
\item Walking With Walker
\item Wall Paint Brush
\item Wall Paint Roller
\item Washing Clothes
\item Washing Dishes
\item Watering Plants
\item Wear Face Mask
\item Wear Helmet
\item Whipping
\item Writing On Blackboard
\item Yawning
\end{enumerate}
\paragraph{Musical Instruments}
\begin{enumerate}[itemsep=-2mm]
\setcounter{enumi}{367}
\item Accordian
\item Bagpipes
\item Bangu
\item Banjo
\item Bass Drum
\item Bowsaw
\item Cajon Drum
\item Castanet
\item Cello
\item Clarinet
\item Conga Drum
\item Cornett
\item Cymbals
\item Doublebass
\item Erhu
\item Gong
\item Grandpiano
\item Guitar
\item Handpan
\item Harp
\item Hulusi
\item Jazzdrum
\item Leaf-Flute
\item Lute
\item Maracas
\item Melodic
\item Noseflute
\item Ocarina
\item Otamatone
\item Panpipe
\item Piccolo
\item Recorder
\item Sanxian
\item Saxophone
\item Serpeng
\item Sheng
\item Sitar
\item Snare Drum
\item Sunoa
\item Taiko Drum
\item Tambourine
\item Thereminvox
\item Timpani
\item Triangle
\item Trombone
\item Trumpet
\item Ukulele
\item Viola
\item Violin
\item Xylophone
\item Yangqin
\end{enumerate}
\paragraph{Games and Hobbies}
\begin{enumerate}[itemsep=-2mm]
\setcounter{enumi}{418}
\item Air Drumming
\item Air Guitar
\item Air Hockey
\item Alligator Wrestling
\item Archaeological Excavation
\item Arm Wrestling
\item Atlatl Throw
\item Axe Throwing
\item Balloon Animal
\item Beer Pong Throw
\item Belly Dancing
\item Blow Gun
\item Building Snowman
\item Card Throw
\item Conducting
\item Decorating Snowman
\item Dice Shuffle Reveal
\item Dice Stack Shuffle
\item DJ
\item Draw Handgun
\item Face-Changing Opera
\item Fire Breathing
\item Fire Dancing Circulating
\item Fish-Hunting Hold
\item Fish-Hunting Pull
\item Flipping Bottle
\item Floss Dance
\item Flying Kite
\item Ganggangsullae
\item Gangnam Style Dance
\item Grass Skating
\item Guitar Flip
\item Hopscotch Pickup
\item Hopscotch Skip
\item Hopscotch Spin
\item Ice Scuplting
\item Juggling Balls
\item Kick Jianzi
\item Knitting
\item Marble Scuplting
\item Moonwalk
\item Piggyback Ride
\item Play Diabolo
\item Play Kendama
\item Play Yoyo
\item Playing Nunchucks
\item Playing Rubiks Cube
\item Playing Seesaw
\item Playing Swing
\item Rock Balancing
\item Rock Paper Scissors
\item Running On Four
\item Sack Race
\item Sand Sculpting
\item Segway
\item Shoot Dance
\item Shooting Handgun
\item Shooting Shotgun
\item Shuffle Dance
\item Sling
\item Slingshot
\item Snow Angel
\item Speed Stack
\item Spinning Basketball
\item Spinning Book
\item Spinning Plate
\item Stone Skipping
\item Sword Swallowing
\item Taichi Fan
\item Taking Photo Camera
\item Taking Selfie
\item Tap Dancing
\item Three Legged Race
\item Throw Boomerang
\item Throw Paper-Plane
\item Tight-Rope Walking
\item Trampoline
\item Tug Of War
\item Underarm Turn
\item Walking On Stilts
\item Whistle One Hand
\item Whistle Two Hands
\end{enumerate}
\section{Composite Classes}
We list how the \textit{Musical Instruments} and \textit{Sports/Athletics} classes are combined to form composite actions, giving the indices of the classes belonging to each composite action.
\subsection{Sports/Athletics}
\begin{enumerate}[itemsep=-2mm]
\setcounter{enumi}{0}
\item 49
\item 79, 80
\item 99
\item 65, 66, 67
\item 2
\item 178, 179, 180, 181, 182
\item 120, 121
\item 39, 40, 41, 42
\item 114
\item 140
\item 111, 112
\item 25, 26, 27, 28, 29, 30, 31
\item 60
\item 56, 57, 58
\item 156
\item 144
\item 59
\item 150, 151, 152
\item 168
\item 167
\item 102, 103
\item 145
\item 81, 82, 83
\item 92
\item 128, 129
\item 50, 51
\item 53, 54
\item 138, 139
\item 43
\item 174, 175, 176, 177
\item 183, 184, 185
\item 201
\item 8, 9, 10, 11, 12
\item 142
\item 149
\item 1
\item 32
\item 62, 63, 64
\item 141
\item 109
\item 104
\item 122
\item 110
\item 38
\item 100
\item 157
\item 37
\item 197, 198, 199, 200
\item 116
\item 153, 154, 155
\item 84
\item 131
\item 127
\item 18, 19, 20, 21, 22, 23, 24
\item 117, 118, 119
\item 186, 187
\item 160
\item 169, 170, 171
\item 158, 159
\item 206, 207
\item 13
\item 172
\item 133, 134, 135, 136, 137
\item 123
\item 124
\item 205
\item 5, 6, 7
\item 86
\item 208, 209, 210, 211, 212
\item 113
\item 202, 203, 204
\item 166
\item 105, 106, 107, 108
\item 192, 193, 194, 195, 196
\item 125, 126
\item 61
\item 173
\item 143
\item 85
\item 188, 189
\item 130
\item 101
\item 55
\item 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78
\item 52
\item 115
\item 91
\item 146, 147, 148
\item 87, 88
\item 44, 45
\item 89, 90
\item 3
\item 190, 191
\item 4
\item 35, 36
\item 34
\item 33
\item 14, 15, 16, 17, 46, 47, 48
\item 93, 94, 95, 96, 97, 98
\item 132
\item 161, 162, 163, 164, 165
\end{enumerate}
\subsection{Musical Instruments}
\begin{enumerate}[itemsep=-2mm]
\setcounter{enumi}{0}
\item 369, 377, 379, 388, 390, 394, 395, 397, 398, 401, 402, 403, 406, 412, 413,399
\item 371, 373, 376, 381, 382, 385, 387, 391, 396, 400, 404, 409, 414, 415, 416
\item 370, 372, 374, 375, 378, 380, 383, 386, 389, 392, 405, 407, 408, 410, 411, 417, 418
\item 368, 384, 393
\end{enumerate}
\section{HAA-COCO}
Here we list the classes in HAA-COCO.
\begin{itemize}
\item 1, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 32, 33, 34, 35, 36, 37, 45, 46, 47, 58, 60, 80, 81, 82, 86, 87, 88, 89, 99, 108, 110, 111, 115, 124, 125, 127, 128, 132, 133, 134, 135, 136, 139, 142, 160, 161, 162, 163, 164, 165, 182, 183, 184, 196, 197, 198, 199, 201, 202, 203, 212, 214, 235, 236, 237, 245, 246, 248, 249, 250, 251, 252, 263, 264, 265, 266, 267, 268, 276, 277, 278, 289, 298, 299, 305, 307, 311, 312, 313, 314, 316, 317, 318, 321, 325, 328, 330, 336, 337, 339, 345, 357, 358, 359, 360, 361, 367, 375, 376, 379, 381, 383, 384, 386, 388, 398, 400, 409, 410, 411, 412, 413, 414, 415, 427, 431, 434, 435, 443, 454, 480, 481, 487, 488
\end{itemize}
\section{Sample Videos}
Figure~\ref{fig:supp_0} shows the first frame of a video in different classes. Figure~\ref{fig:supp_2} lists diverse videos per class.
\section{Hierarchy}
Figure~\ref{fig:supp_1} shows the hierarchy of action classes in \textit{Sports/Athletics} area where the actions are grouped together with other actions in the same sports category.
\section{Introduction}
\label{intro}
This manuscript focuses on feature definition for the problem of predicting the winner of NBA matches. It shows how, for this classification problem, a careful definition of single features to be used in machine learning techniques, and in Deep Learning \citep{Goodfellow2016, Chollet2018} in particular, can produce predictions of higher quality than models built on top of \emph{box-score} statistics such as, for instance, Oliver's Four Factors \citep{Oliver2004,Kubatko2007}.
\\To this end, two features directly quantifying the strength of the teams involved in a match have been selected:
\begin{enumerate}
\item The Elo (from the name of its creator) rating system \citep{Elo1978}, originally defined for rating chess players and today widely used in several domains.
\item The difference between the relative frequencies of victories of the two teams involved in a match (see for instance \citealp{Migliorati2020cms}, where it is named \(diff\)).
\end{enumerate}
and used as covariates to build models that are compared, in terms of quality of fit, with models built using Oliver's Four Factors \citep{Oliver2004, Kubatko2007}, the well-known set of statistics synthesizing the \emph{box-score} information considered fundamental for winning a match.
The dataset includes all the NBA regular season matches from 2004-2005 to 2019-2020 (until 11/03/2020, when the NBA was suspended for some weeks due to Covid-19). For each observation, feature values have been calculated ex ante, i.e. considering only information about prior matches, and taking into account:
\begin{itemize}
\item both historical (averaging over all past games) and dynamic (averaging over a limited number of prior matches) feature definitions.
\item Regression to the mean \citep{Galton1889}, i.e. the tendency of extreme values to move closer to the average in repeated experiments; this concept is particularly important in the NBA championship, where at the end of each season there is an explicit attempt to re-balance team strength (the draft mechanism).
\item The court factor, which is so important in the NBA \citep{Harville1994, Jones2007}. Besides the classical feature calculation, which considers all past matches regardless of the court where they have been played, two supplementary statistics have been calculated for each feature, taking into account only information from home matches or only from away matches, respectively. In this way we obtain not only values for the Elo, \(diff\) and Four Factors regressors, but also for home and away Elo, home and away \(diff\), and home and away Four Factors.
\end{itemize}
Deep Learning is a specific subfield of machine learning based on neural networks composed of (possibly) several layers (this is the reason why the approach is called deep; the name does not refer to a deeper level of data understanding).
\\The models used in this work have been developed in the Deep Learning ecosystem available in R, using the {\fontfamily{pcr}\selectfont Keras} package \citep{Allaire2021} via RStudio \citep{RStudioTeam2021}. The nets, calibrated to produce models with good prediction quality, keep the two hyperparameters (i.e. the number of layers and the number of units per layer) small in size, a natural consequence of the small number of features.
This manuscript's original contributions are related to:
\begin{enumerate}
\item the comparison of single-feature and box-score based models, showing that the latter have lower prediction quality (a possible sign that box-score approaches are close to their limit in outcome prediction).
\item the use of very simple neural networks for a complex classification problem, requiring reduced computational power and producing results comparable, for this specific problem, to those produced by more complex network architectures.
\item the construction of two new variants of the features under consideration, calculated using either only data from home matches or only data from away matches, respectively.
\end{enumerate}
The manuscript is organized as follows: Section \ref{s:d} reviews the literature on basketball predictions via machine learning, Section \ref{s:fd} formalizes the definition of the selected features, Section \ref{s:dataset} describes the dataset and how the features have been characterized; Section \ref{sec:ann:mm} summarizes the Deep Learning approach; Section \ref{sec:ann:res} reports the outcome prediction results produced by Deep Learning, and Section \ref{sec:ann:con} proposes some conclusions and ideas for future enhancements.
\section{Literature review: basketball predictions via machine learning}
\label{s:d}
For several years now, data analytics has played a fundamental role in sport analysis and management: in the last decades, publications on statistics in sport have multiplied, and a data-based approach has been adopted in every professional sport \citep{Alamar2013,Albert2017}, facing different kinds of problems. \\Analyses and applications of statistics to sport include performance measurement \citep{Mackenzie2013, Page2007, Passos2016, Sandri2020, Zuccolotto2017a, Zuccolotto2019}, injury prevention (see \citealp{VanEetvelde2021} for a review), optimal game strategies \citep{Zuccolotto2020}, match preparation \citep{Migliorati2020, Miller2015, Thabtah2019}, players' selection \citep{Lewis2003} and, of course, outcome forecasting \citep{Bunker2019, Wunderlich2020}.
\\Indeed, it was with the application of the data-driven approach described in \citet{Lewis2003}, centered on the selection of players for the Oakland Athletics baseball team, that analytics in sport actually entered its maturity phase.
\\Data mining was then quickly adopted and adapted in all professional sports, such as baseball (perhaps the sport with the longest history in analytics, with dedicated reports starting in 1977) \citep{Koseler2017}, hockey (see \citealp{Swartz2017} for a review), American football \citep{Baker2015, Silver2014}, football \citep{Carpita2015, Carpita2020, Sarmento2014} and, of course, basketball.
\\Milestones of this analytics-based approach in basketball are the pioneering works \citep{Kubatko2007, Oliver2004}, where Oliver's famous ``Four Factors to success'' were introduced as four indexes condensing a large amount of information. Since then, a huge number of analyses have applied data mining to basketball data (see, for example, \citealp{Bianchi2017, Groll2018, Metulini2018, Sandri2020, Zuccolotto2017a, Zuccolotto2017b, Zuccolotto2019, Zuccolotto2020}).
Considering the large interest in sport betting and its increasing volume, it is easy to understand why the number of attempts at predicting game results is continuously increasing, see for instance \citealp{Bunker2019, Hubacek2019}.
\\Machine learning techniques for outcome prediction have been widely applied \citep{Haghighat2013}, covering all professional sports, from horse racing \citep{Davoodi2010} to hockey \citep{Gu2016} and from American football \citep{Beal2020, David2011, Kahn2003, Purucker1996} to football \citep{Carpita2019, Min2008, Tax2015}, just to give some examples. Basketball, of course, has also been investigated from this perspective.
\\In \citet{Loeffelholz2009} the authors worked on a dataset of 650 NBA games and used several kinds of ANN (Artificial Neural Networks, \citealp{zhang2000}) for outcome prediction, correctly predicting the winning team 74.33 percent of the time (on average), higher than the experts' claimed percentage of 68.67.
\\In \citet{Miljkovic2010} it is reported that, among several machine learning algorithms, the best results in both predicting the outcome and calculating the final match spread were produced by the Naïve Bayes approach. The authors used 778 NBA games of season 2009-2010, considering 141 features as input, and reported an accuracy of 67\%.
\\In \citet{Cao2012} data of 5 NBA seasons were analyzed using ANN, Support Vector Machine \citep{Cortes1995}, Naïve Bayes and logistic regression, with the latter approach producing the best prediction accuracy (about 70\%) for the classification problem of predicting the winner of a game.
\\In a similar way, in \citet{Beckler2013} the authors used Linear Regression, Support Vector Machines, Logistic Regression and ANN for NBA outcome prediction, using a dataset including seasons from 1991-1992 to 1996-1997 and reporting an accuracy of 73\%.
\\In \citet{Cheng2016} authors applied the principle of Maximum Entropy \citep{Jaynes1957} to predict NBA playoff outcomes for seasons from 2007–08 to 2014–15, using box score information as features, reporting an accuracy of 74.4\%.
\\Finally, several betting sites offer NBA outcome predictions. As an example, \citet{Teamranking2021} proposes predictions about NBA match winners using 4 approaches, built on the basis of several sources (historical data, breaking news and trends). For the 2017-2018 regular season the maximum accuracy is 74.3\%, obtained using decision trees on data from the games of March.
\\Many works and many results, which are difficult to compare because they refer to very different datasets. In this work we prepare a single dataset and calculate on it all the features needed for our goal: to show that models using single features have a quality of fit greater than that of models based on box-score statistics in general, and on the Four Factors in particular, where a large number of independent variables is often used.
\section{Features' definition}
\label{s:fd}
\subsection{The Elo rating}
\label{sss:ann:fd:elo}
The Elo rating system \citep{Elo1978} was originally defined for calculating the strength of players in zero-sum games (i.e. games where a player gains exactly what its opponent loses) such as chess, the game for which the system was created by Arpad Elo. With some adjustments, the Elo rating system has been applied to many sports, mainly for tournament rating predictions: football \citep{Eetvelde2019, Hvattum2010, Leitner2010, WFER2021}, tennis \citep{Angelini2021}, Australian football \citep{Ryall2010}, ice hockey \citep{WFER2021}, American football \citep{Silver2014} and, of course, NBA basketball \citep{Silver2015,Silver2020}.
\\ Each player is assigned a rating consisting of a single number: new players receive an initial default rating (which may differ across organizations), and the difference between the ratings of the two opponents in a match is used to establish the probability of the match result. After every match, the winning player gains a certain number of points (and the defeated player loses the same number) depending on the pre-match rating difference; moreover, the system is built in an ``asymmetric'' way: the gain for a victory of the higher-rated player is smaller than the gain for a victory of the lower-rated player.
\\More formally: if before a match Player1 has a rating \(R1\) and Player 2 has a rating \(R2\), the probability for Player 1 of winning the match (event \(p1w\)) is modeled as a logistic curve as follows:
\begin{equation}\label{elo.eq1}
P(p1w)=\frac{1}{1+10^{\frac{-(R1-R2)}{400}}}
\end{equation}
and the probability of victory for Player 2 (event \(p2w\)) is modeled as:
\begin{equation}\label{elo.eq2}
P(p2w)=\frac{1}{1+10^{\frac{-(R2-R1)}{400}}}
\end{equation}
\\ where the value 400 is historically used in Elo (Fig. \ref{fig.elo} shows the impact of that parameter on the slope of the sigmoid curve). Probabilities are 0.5 if Player1 and Player2 share the same rating before the match, and in general \(P(p1w)+P(p2w)=1\).
\begin{figure}
\includegraphics[width=0.7\textwidth]{f1_elo_curves.JPG}
\caption[Elo logistic curves]{Elo logistic curves on the basis of different logistic parameter value (i.e. the denominator of the exponent in equations \ref{elo.eq1} and \ref{elo.eq2}); 400 is the default in chess.}
\label{fig.elo}
\end{figure}
Let \(S\) be the result of a match: for games without the possibility of draws (as basketball is) \(S\) is 1 for a Player1 victory and 0 for a Player2 victory\footnote{For sports such as football, where a draw is also admitted, \(S\) can assume the three possible values 1, 0.5 and 0.}.
After the match, the ratings of the 2 players will be updated as follows:
\begin{equation}
\label{elo.eq3}
R1' = R1 + K*(S - P(p1w))
\end{equation}
\begin{equation}
\label{elo.eq4}
R2' = R2 + K*((1-S) - P(p2w))
\end{equation}
where \(K\) is a parameter governing how strongly a single result affects the rating update: a small \(K\) means that ratings remain almost stable, while a high \(K\) means that each result has a strong impact on the rating change. Note that in Equation \ref{elo.eq4} the score of Player2 is \(1-S\), since \(S\) is defined from Player1's point of view.
An example: if the ex-ante ratings are 1500 for Player1 and 1400 for Player2, and Player1 won the match, the updated ratings will be:
\begin{itemize}
\item for \(K\)=5, Player1 =1502 and Player 2=1398
\item for \(K\)=50, Player1 = 1518 and Player2 =1382
\end{itemize}
Vice versa, if Player2 (i.e. the underdog) wins the match, larger rating variations are produced:
\begin{itemize}
\item for \(K=5\), Player1 =1497 and Player 2=1403
\item for \(K=50\), Player1 = 1468 and Player2 =1432
\end{itemize}
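As an illustration, the following minimal R sketch implements Equations \ref{elo.eq1}--\ref{elo.eq4} and reproduces the numbers of the example above; the function and variable names are our own and do not belong to any package.
\begin{verbatim}
# Expected score and rating update of the basic Elo system
elo_expected <- function(r1, r2, logistic = 400) {
  1 / (1 + 10^(-(r1 - r2) / logistic))
}
elo_update <- function(r1, r2, s, k = 5, logistic = 400) {
  p1 <- elo_expected(r1, r2, logistic)
  p2 <- elo_expected(r2, r1, logistic)
  # s = 1 if Player1 wins, 0 if Player2 wins
  c(r1 + k * (s - p1), r2 + k * ((1 - s) - p2))
}
elo_update(1500, 1400, s = 1, k = 5)   # approx. 1502, 1398
elo_update(1500, 1400, s = 0, k = 50)  # approx. 1468, 1432
\end{verbatim}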
In the chess world the logistic parameter is set equal to 400 so that, following the original suggestion by Elo, \(P(p1w)\) is approximately 0.75 and \(P(p2w)\) approximately 0.25 when the difference in ratings between the two players is equal to 200. Moreover, in the Elo system the initial rating is not important (only the difference between the ratings of the two players matters) as long as no situations introduce inflation or deflation into the system; this is the case for our dataset, because the group of NBA teams is closed.
The difference in Elo ratings between the two teams facing each other in a match will be the first feature used in the present study: for the initial ratings we follow \citet{Silver2015} and use 1300; moreover, the classical value of 400 for the logistic parameter is maintained.
\subsection{The difference in relative victory frequencies}
\label{ss:diff}
A second feature directly quantifying the strength of the opposing teams is the difference between their relative victory frequencies. Named \emph{diff} in \citet{Migliorati2020cms}, it can be formally defined as follows:
\begin{equation}
\label{eq:diff}
diff=\frac{won\_matches_{ht}}{played\_matches_{ht}} - \frac{won\_matches_{at}}{played\_matches_{at}}
\end{equation}
where the subscript \(ht\) means home team, and the subscript \(at\) means away team.
\\ The \(diff\) statistic ranges from -1 to 1, where the value 1 means that the home team is by far the stronger of the two (it won all its past games, with a relative frequency of victories equal to 1, while the away team never won: relative frequency of victories equal to 0) and, vice versa, the value -1 means that the away team is the stronger one. So, \emph{diff} is a clear and concise way of expressing the difference in class between the two teams, providing an analytical definition for a classic rule of thumb often used in naive fan predictions (the team that won more in the (recent) past is the favorite).
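A minimal R sketch of Equation \ref{eq:diff}, with counts taken from the matches preceding the one to be predicted (the function name is our own):
\begin{verbatim}
diff_feature <- function(won_ht, played_ht, won_at, played_at) {
  won_ht / played_ht - won_at / played_at
}
# e.g. home team with 8 wins in 10 games, away team with 3 wins in 10 games
diff_feature(won_ht = 8, played_ht = 10, won_at = 3, played_at = 10)  # 0.5
\end{verbatim}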
\subsection{The Oliver's Four Factors}
\label{ss:4f}
The milestones of the analytics-based approach in basketball are the pioneering works \citep{Oliver2004, Kubatko2007}, where the famous ``Four Factors to success'' were introduced in an attempt to understand how basketball teams win games. The Four Factors are a set of derived statistics, built on top of classical \emph{box-score} statistics and considered fundamental for winning a match. They are defined starting from the concept of \emph{possession}, i.e. the number of times a team gains control of the ball during a match, and summarize the attitude of a team with respect to shooting, turnovers, rebounds and free throws, as in Equations \ref{4F:sh}, \ref{4F:tu}, \ref{4F:re} and \ref{4F:fr} (refer to Table \ref{var_acr} for the meaning of the variables):
\begin{enumerate}
\item Shooting, measured by effective Field Goals percentage:
\begin{equation}
\label{4F:sh}
eFG\% = (P2M + 1.5*P3M) / (P2A + P3A)
\end{equation}
\item Turnover ratio, the number of turnovers (i.e. loss of ball) per possession:
\begin{equation}
\label{4F:tu}
TO\_ratio = TOV / POSS
\end{equation}
\item Rebounds, defined by offensive rebounding percentage:
\begin{equation}
\label{4F:re}
OREB\% = OREB / (OREB + DREB)
\end{equation}
where, in the denominator, the team offensive rebounds and the opponent team defensive rebounds are considered, respectively.
\item Free throws rate:
\begin{equation}
\label{4F:fr}
FT\_rate=FTM / (P2A + P3A)
\end{equation}
\end{enumerate}
\begin{table}[h]
\caption{Variables' acronym meaning}
\label{var_acr}
\begin{tabular}{ll}
\hline\noalign{\smallskip}
Acronym & Meaning \\
\noalign{\smallskip}\hline\noalign{\smallskip}
P2A & 2-point field goals attempted \\
P3A & 3-point field goals attempted \\
FTA & free throws attempted \\
P2M & 2-point field goals made \\
P3M & 3-point field goals made \\
FTM & free throws made \\
OREB & offensive rebounds \\
DREB & defensive rebounds \\
TOV & turnovers \\
POSS & possessions \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
For each match, the Four Factors for both the home team (marked with $ht$ in the following) and the away team ($at$) can be computed, leading in effect to eight factors. In this work the Four Factors have been calculated using the R package {\fontfamily{pcr}\selectfont BasketballAnalyzeR} \citep{Sandri2018,Sandri2020b}.
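The following R sketch shows how the Four Factors of Equations \ref{4F:sh}--\ref{4F:fr} could be computed directly from box-score columns named as in Table \ref{var_acr}; the column and function names are our assumptions, while the actual computation in this work relies on {\fontfamily{pcr}\selectfont BasketballAnalyzeR} as stated above.
\begin{verbatim}
# bs: box-score totals of the team, opp: totals of the opponent
four_factors <- function(bs, opp) {
  data.frame(
    eFG     = (bs$P2M + 1.5 * bs$P3M) / (bs$P2A + bs$P3A),
    TOratio = bs$TOV / bs$POSS,
    OREBpct = bs$OREB / (bs$OREB + opp$DREB),
    FTrate  = bs$FTM / (bs$P2A + bs$P3A)
  )
}
\end{verbatim}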
\section{The dataset}
\label{s:dataset}
The dataset\footnote{The basketball dataset used in this work has been obtained on the basis of play-by-play data kindly provided by BigDataBall (www.bigdataball.com), a data provider that leverages computer vision technologies to enrich and extend sports data sets with a number of unique metrics: since its establishment, BigDataBall has supported many academic studies as a reliable source of validated and verified statistics for the NBA and several other sports.} includes NBA matches from season 2004-2005 to season 2019-2020 (until 11/03/2020, when the NBA was suspended for a period due to Covid-19). During the 16 seasons taken into account, some franchises have undergone changes, so the total number of different team names in the dataset would be 34. For our analyses we adopted the most recent name (i.e. Brooklyn Nets, New Orleans Pelicans and Oklahoma City Thunder) for each franchise affected by changes in the considered period, reducing the number of teams in the dataset to the canonical 30.
\\Only regular seasons have been included in the dataset, discarding playoff matches, trying to privilege uniformity of data; usually, each single season is seen as a uniform period where teams are perceived as homogeneous entities. The assumption is that in a single season there is continuity for a team, at least in its fundamental aspects, whereas changes occur between a season and the following one.
\\Actually, the situation seems to be different: it is true that teams can change heavily from one season to another, but several changes also occur during a single season. These changes can affect not only rosters (think of new free agents' contracts, new multi-year contracts, ten-day contracts for facing injuries, player exchanges, ...), but can involve coaches, managers and referees, too. Sources such as \citet{Marusek2021} and \citet{Sports2021} confirm this fact: it is easy to verify that there is a huge number of transactions not only between seasons, but also during a season, invalidating the view of teams as homogeneous entities in that period. During season 2018-19, for instance, there were about 400 off-season signings, but also about 300 in-season signings (particularly when the playoffs are approaching and the qualified teams need to prepare for them).
So, our choice is to include in the dataset only matches selected on the basis of a homogeneous regulation framework. In sport, rules drive strategies (think of the differences induced by a championship without relegation, such as the NBA, with respect to a championship with relegation, as football championships normally are) and tactics (think of the football offside rule, or of the NBA zone defense, prohibited until the 2001–2002 season), and it seems fair to consider them in the dataset definition.
\\NBA playoff rules are very different from regular season rules, and in the perspective we are sketching (depending of course on the analysis goals) we preferred to avoid including both playoff and regular season games in the same dataset. Instead, despite some existing differences, regular seasons' rules starting from season 2004-2005 are reasonably uniform, and their matches can be fairly included in a single dataset. Of course, the same kind of analysis made in this work about regular seasons could be replicated for playoff games which, in turn, share a not so different frame of rules (in effect, it could be interesting to verify if results found for regular seasons dataset change or not when considering the playoff games dataset. This job is left to future analyses).
\\In the dataset the features based on Elo, \(diff\) and the Four Factors have been calculated ex ante, i.e. considering only information from prior matches, to make them suitable for outcome prediction, taking into account:
\begin{itemize}
\item the periodicity, considering both the historical perspective (averaging over all prior games) and the dynamic perspective (averaging over a subset of prior matches). Moreover, the mechanism of regression to the mean \citep{Galton1889} has been implemented for the historical features; it seems particularly suitable for the NBA \citep{Silver2015}, where at the end of each season there is an attempt to re-balance team strength.
\item the court where matches have been played (called \emph{the court issue} in the following): besides the features usually calculated considering all matches, two new statistics based only on home or only on away data, respectively, will be calculated, too.
\end{itemize}
The whole dataset initially included 19,138 observations, one for each match. The feature calculation introduced some \emph{Not Available} (NA) values, indicating either the absence of a value or the impossibility of calculating it\footnote{For instance, when the average depth is set to \(n\) and less than \(n\) observations are available. In particular, the first season is affected by this issue.}. Rows containing NA values have been discarded, leaving about 18,000 observations (depending on the values used to calculate the features, for instance the depth in the dynamic approach).
\\ The dataset has been split in the classical way into training and testing subsets; the training feature values have been standardized, and the testing feature values have been transformed using the means and standard deviations computed when standardizing the corresponding training features.
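A minimal sketch of this step, assuming the features are stored in the numeric matrices x\_train and x\_test:
\begin{verbatim}
mu    <- apply(x_train, 2, mean)
sigma <- apply(x_train, 2, sd)
# test data are scaled with the training means and standard deviations
x_train <- scale(x_train, center = mu, scale = sigma)
x_test  <- scale(x_test,  center = mu, scale = sigma)
\end{verbatim}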
\\As a summary, Table \ref{tab:feat-kind} reports the ways adopted in features calculation, taking into account both periodicity (historical VS dynamic) and the court issue.
\begin{table}
\small
\caption[Features variants]{Features variants calculated for Elo, \(diff\), Four Factors.}
\label{tab:feat-kind}
\begin{tabular}{lll}
\hline\noalign{\smallskip}
periodicity & court issue \\
\noalign{\smallskip}\hline\noalign{\smallskip}
historical & not considered \\
historical & considered \\
dynamic & not considered \\
dynamic & considered \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
In the remainder of the current Section, some details about the feature calculation are provided.
\subsection{Periodicity}
\label{ssec:fc.1}
The features used as covariates in models for predicting the outcome of a match have been calculated both from an historical (considering all the prior matches included in the dataset) and dynamic perspective (averaging on a subset of prior matches). Under the so-called historical perspective, if we have to predict the outcome of the game \(g\), a generic feature \(f(t,g)\) for a team \(t\), with \(t\) ranging from \(1\) to the number of teams \(T\), will be calculated as in Equation \ref{fc.eqh}:
\begin{equation}
\label{fc.eqh}
f_{(t,g)}=\frac{\sum_{i=1}^{g-1} f_{(t,i)}}{g-1}
\end{equation}
Under the dynamic perspective, instead, \(f(t,g)\) will be calculated as in Equation \ref{fc.eqd}:
\begin{equation}
\label{fc.eqd}
f_{(t,g)}=\frac{\sum_{i=g-d}^{g-1} f_{(t,i)}}{d}
\end{equation}
where \(d\) is the depth, i.e. the number of prior games to be considered in calculating the average of \(f\). The best\footnote{Some experiments using exponential smoothing of past matches instead of a fixed-depth rolling mean have been implemented, but the fitted models show no improvement in quality of fit with respect to the approach used in the analysis.} value of \(d\) is the one producing the model with the highest prediction quality.
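A minimal R sketch of the two variants, for a generic feature of one team stored in chronological order in the vector f\_values (names are our own):
\begin{verbatim}
# Historical: mean over all prior games
historical_feature <- function(f_values, g) {
  if (g <= 1) return(NA)
  mean(f_values[1:(g - 1)])
}
# Dynamic: mean over the d prior games; NA if fewer than d are available
dynamic_feature <- function(f_values, g, d) {
  if (g <= d) return(NA)
  mean(f_values[(g - d):(g - 1)])
}
\end{verbatim}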
\subsubsection{Historical approach: regression to mean}
\label{sssec:fc.rtm}
When computing the historical \(f(t,g)\) (Equation \ref{fc.eqh}), regression to the mean is used: at the start of each season, the feature values are re-initialized, taking into account a percentage of their past average.
\\Let us consider the last \(N\) regular seasons \(s_1,..,s_N\), each one composed of \(m_k\) matches, where \(k\) ranges over the seasons from 1 to \(N\); moreover, let us consider the generic feature \(f\) of the team \(t\), denoted as \(f_{t}\), with \(t\) ranging over the teams from \(1\) to \(T\).
\\ The value of the generic feature \(f_{t}\) for team \(t\) at the first match of the new regular season \(s_{N+1}\), denoted as \(f_{t,s_{N+1}^1}\), is calculated as in Equation \ref{fc.eq1}, adding a proportion \(1-P\) of the value of the feature after the last match of the previous season, i.e. \(f_{t,s_{N}^{m_N}}\), to a proportion \(P\) of the average of the feature values calculated over all past matches of all teams:
\begin{equation}
\label{fc.eq1}
f_{t,s_{N+1}^1}=f_{t,s_{N}^{m_N}}*(1-P)+\frac{\sum_{j=1}^T\sum_{k=1}^N\sum_{z=1}^{m_k} f_{j,s_k^z}}{T*\sum_{z=1}^{N} m_z}*P
\end{equation}
where \(P\) is the proportion of regression to mean to be considered.
A regression proportion equal to 0 reduces equation \ref{fc.eq1} to equation \ref{fc.eq1.2}
\begin{equation}
\label{fc.eq1.2}
f_{t,s_{N+1}^1}=f_{t,s_{N}^{m_N}}
\end{equation}
where the first value of the feature for the new season of a team is equal to the last value of the feature of the previous season for that team; this means having continuity across seasons, without any regression to the mean: like a single, long season covering the entire period 2004-2020.\\
At the opposite extreme, when the regression proportion \(P\) is equal to 1, Equation \ref{fc.eq1} reduces to Equation \ref{fc.eq1.3}
\begin{equation}
\label{fc.eq1.3}
f_{t,s_{N+1}^1}=\frac{\sum_{j=1}^T\sum_{k=1}^N\sum_{z=1}^{m_k} f_{j,s_k^z}}{T*\sum_{z=1}^{N} m_z}
\end{equation}
meaning that the starting feature values for each season are the same for every team, equal to the mean of all past values; this is a complete regression to the mean for all teams.
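A one-line R sketch of the re-initialization in Equation \ref{fc.eq1}, with illustrative values (0.72 as end-of-season value, 0.50 as grand mean, \(P=0.2\)):
\begin{verbatim}
season_start_value <- function(last_value, grand_mean, P) {
  last_value * (1 - P) + grand_mean * P
}
season_start_value(last_value = 0.72, grand_mean = 0.50, P = 0.2)  # 0.676
\end{verbatim}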
\\The mechanism of regression to the mean is suitable not only from a statistical point of view \citep{Galton1889}, because extreme values tend to move closer to the mean in new observations, but it also seems particularly suitable for the NBA \citep{Silver2015}, where the draft mechanism is adopted: at the end of each season, the worst classified teams have precedence in selecting new players, while the best classified teams are the last to choose. The draft mechanism is not perfect, but it ensures a certain balancing among teams, see Figure \ref{fig:fc1}, where the number of playoff accesses for each team is used as a measure of the efficiency of the draft mechanism. For a good balancing, each team should have the same number of playoff accesses (about eight in the considered period), with a density curve (depicted in light blue) different from zero only around that value. Instead, there are teams with only one or two accesses, and other teams with 15 accesses. In this work we verified (see Section \ref{sec:ann:res}) that regression to the mean is fundamental for obtaining good prediction quality for models using historical features as regressors.
\begin{figure}
\includegraphics[width=0.7\textwidth]{f2_playoff.jpg}
\caption[Efficiency of NBA re-balancing mechanism]{Efficiency of NBA draft re-balancing mechanism. On the vertical axis the number of playoff accesses bars and the density function (line) are depicted for seasons from 2004-2005 to 2019-2020.}
\label{fig:fc1}
\end{figure}
\subsection{The home factor}
\label{ssec:fc.2}
In the NBA, the court factor plays an important role \citep{Harville1994,Jones2007}. The analysis of this aspect on the dataset considered here confirms it: on average, home teams win 59.27\% of the matches; a summary per season is reported in Table \ref{tab:ha.tab}, where the percentage of home victories is always greater than 57\% (apart from the season 2019-2020, for which data are limited to 20/03/20) and in several seasons exceeds 60\%.
\begin{table}
\small
\caption[Percentage of home victories per regular season]{Percentage of home victories per regular season (season 2019-2020 is limited to 20/03/20)}
\label{tab:ha.tab}
\begin{tabular}{lclc}
\hline\noalign{\smallskip}
season & home victories \% & season & home victories \% \\
\noalign{\smallskip}\hline\noalign{\smallskip}
2004-2005 & 60.43 & 2012-2013 & 61.19 \\
2005-2006 & 60.46 &2013-2014 & 58.05 \\
2006-2007 & 59.15 &2014-2015 & 57.48 \\
2007-2008 & 60.18 &2015-2016 & 58.86 \\
2008-2009 & 60.73 &2016-2017 & 58.37 \\
2009-2010 & 59.40 &2017-2018 & 57.89 \\
2010-2011 & 60.33 &2018-2019 & 59.27 \\
2011-2012 & 58.59 &2019-2020 & 55.10 \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
This information should be considered in the feature calculation, as described in the next sections.
\subsubsection{Modifying Elo calculation to account for the home advantage}
\label{ssec:fc.h.elo}
The Elo definition is usually modified to take into account the home court advantage (chess does not care about the home factor, but many other sports, and basketball among them as demonstrated above, should): if home victories are more frequent, the impact on the rating update of a home team victory should be decreased. The classic approach consists in adding a penalization parameter to the exponent, as in Formulas \ref{elo.eq1new} and \ref{elo.eq2new}; in this way, a home victory produces a smaller effect on the rating updates, balancing the home court factor:
\begin{equation}\label{elo.eq1new}
P(p1w)=\frac{1}{1+10^{\frac{-(R1-R2+HA)}{400}}}
\end{equation}
\begin{equation}\label{elo.eq2new}
P(p2w)=\frac{1}{1+10^{\frac{-(R2-R1-HA)}{400}}}
\end{equation}
This penalization parameter must be carefully quantified because, as shown in Table \ref{tab:ha_values}, it can play an important role in the Elo rating update\footnote{In \citet{Silver2015} the home-court advantage is quantified as 100 Elo points.}. Table \ref{tab:ha_values} contains some examples of rating updates for a match involving two teams with the same Elo rating (1300) before their match. Depending on the value of the penalization parameter (column Home\_adv), the probabilities of victory for the home team (column \(P(p1w)\)) and for the away team (column \(P(p2w)\)) obtained from Equations \ref{elo.eq1new} and \ref{elo.eq2new} change and consequently, on the basis of the result of the match (column result), the Elo ratings after the match (columns newElo(p1) and newElo(p2), respectively) can change considerably, too; a short numerical check is reported after the table.
\begin{table}
\small
\caption[Impact of home advantage penalization parameter]{Examples of the impact of home advantage penalization parameter on ratings update for two teams with same Elo rating (1300) before their match. Column Home\_adv contains the home advantage, columns P(p1w) and P(p2w) the probability of victory for home and away team, respectively, Column result contains 1 in case of victory of home team and 0 otherwise, Columns newElo contain the update of Elo ratings for player 1 (p1) and player 2 (p2), respectively}
\label{tab:ha_values}
\begin{tabular}{llllll}
\hline\noalign{\smallskip}
Home\_adv & \(P(p1w)\) & \(P(p2w)\) & result & newElo(p1) & newElo(p2)\\
\noalign{\smallskip}\hline\noalign{\smallskip}
0 & 0.50 & 0.50 & 1 & 1315.00 & 1285.00 \\
50 & 0.57 & 0.43 & 1 & 1312.86 & 1287.14\\
100 & 0.64 & 0.36 & 1 & 1310.80 & 1289.20\\
150 & 0.70 & 0.30 & 1 & 1308.90 & 1291.10 \\
\hline
0 & 0.50 & 0.50 & 0 & 1285.00 & 1315.00\\
50 & 0.57 & 0.43 & 0& 1282.86 & 1317.14\\
100 & 0.64 & 0.36 & 0 & 1280.80 & 1319.20\\
150 & 0.70 & 0.30 & 0 & 1278.90 & 1321.10\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
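The rows of Table \ref{tab:ha_values} can be checked with a few lines of R; the value \(K=30\) is an assumption, inferred from the reported updates rather than stated explicitly.
\begin{verbatim}
expected_home <- function(r1, r2, ha) 1 / (1 + 10^(-(r1 - r2 + ha) / 400))
k <- 30
for (ha in c(0, 50, 100, 150)) {
  p1 <- expected_home(1300, 1300, ha)
  cat(ha, round(p1, 2), round(1300 + k * (1 - p1), 2), "\n")
}
\end{verbatim}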
\\The Elo formula can be further generalized as in Equations \ref{elo.eq1new2} and \ref{elo.eq2new2}: for each team, two parameters \(\alpha_{adv}\) and \(\beta_{dis}\) (bonus and penalization, respectively) are added to the numerator of the exponent, and the values assigned to them represent the sums of all advantages and disadvantages for that team:
\begin{equation}\label{elo.eq1new2}
P(p1w)=\frac{1}{1+10^{\frac{-(R1-R2+\alpha_{1_{adv}}-\beta_{1_{dis}})}{400}}}
\end{equation}
\begin{equation}\label{elo.eq2new2}
P(p2w)=\frac{1}{1+10^{\frac{-(R2-R1+\alpha_{2_{adv}}-\beta_{2_{dis}})}{400}}}
\end{equation}
In this way, the Elo equations can take into account not only the home court factor, but also other factors, to be properly quantified, potentially offering some additional information: for instance player injuries (as a disadvantage for new injuries, or a possible advantage when a top player returns to play, see \citealp{Marusek2021,Hopkins2021} for good data sources), logistics (disadvantages due to travel or court altitude, \citealp{Silver2020a}), the number of days between consecutive matches \citep{Manner2016}, or referees \citep{Price2009, Deutscher2015}.
\\In this work only the home advantage has been considered (using the {\fontfamily{pcr}\selectfont Elo} R package \citep{Heinzen2020} for the Elo rating calculation), leaving the management of other information to future work.
\subsubsection{The court issue}
\label{sssec:fc.h.stats}
Typically, the statistics we are taking into account are calculated considering all matches, without reference to the court where they are played. In fact, the performance of a team can be very different in matches played at home with respect to matches played away. As an example, Table \ref{tab:ha.tab2} reports a few observations about some Detroit Pistons (DET) matches (beginning of season 2004-2005). For each match the home team, the away team and the result (1 means victory for the home team) are specified. Moreover, the relative victory frequency (named \(ratio\)) is calculated, both as usual, considering all matches (column DET ratio), and differentiating the statistic on the basis of the court (column DET h ratio for the ratio calculated considering only the matches played at home, and column DET a ratio for the ratio calculated considering only the matches played away). After ten matches, the value of the ratio calculated considering all past matches is equal to 0.5 (i.e. Detroit wins one game out of two); but looking at the ratios calculated with the home/away data separation, we have some additional information: the team is really strong at home, with a home ratio equal to 0.8 (they won four out of five games played at home), but weak when playing away, with an away ratio equal to only 0.2 (they won only one game out of the five played away).
\\This approach seems to be promising, and it has been adopted in this work: besides the features usually calculated considering all matches, two new statistics based on the court issue will be calculated, too (a minimal sketch of this computation is given after Table \ref{tab:ha.tab2}).
\begin{table}
\small
\caption[Example of features calculation considering the court factor]{Example of feature calculation considering the court factor for some Detroit Pistons match results. For each match the home team, the away team and the result (where 1 means home team victory) are reported. Moreover, the relative victory frequency is reported, both considering all matches (column DET ratio) and building two separate statistics considering only home and only away matches, respectively (columns DET h ratio and DET a ratio)}
\label{tab:ha.tab2}
\begin{tabular}{llllll}
\hline\noalign{\smallskip}
home team & away team & result & DET ratio & DET h ratio & DET a ratio\\
\noalign{\smallskip}\hline\noalign{\smallskip}
DET & HOU & 1 & 1.00 & 1.00 & \\
TOR & DET & 1 & 0.50 & & 0.00 \\
DET & PHI & 1 & 0.67 & 1.00 & \\
LAC & DET & 0 & 0.75 & & 0.50 \\
DEN &DET & 1 & 0.60 & &0.33 \\
UTA & DET & 1 & 0.50 & & 0.25\\
DET & MIN & 1 & 0.57 & 1.00 & \\
DET & IND & 0 & 0.50 & 0.75 & \\
DET & CHA & 1 & 0.56 & 0.80 & \\
CHA & DET & 1 & 0.50 & & 0.20\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
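A minimal R sketch of the court-issue statistics of Table \ref{tab:ha.tab2}, for a results data frame with columns home, away and result (1 = home win); the column and function names are our assumptions.
\begin{verbatim}
team_ratios <- function(games, team) {
  at_home <- games$home == team
  at_away <- games$away == team
  won <- (at_home & games$result == 1) | (at_away & games$result == 0)
  c(overall = mean(won[at_home | at_away]),
    home    = mean(won[at_home]),
    away    = mean(won[at_away]))
}
# For the ten DET games above this returns 0.5, 0.8 and 0.2
\end{verbatim}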
\section{Methods and Models: Deep Learning}
\label{sec:ann:mm}
\subsection{Building Deep Learning models}
\label{s:5:ss:2}
All the models described in this work share the same Deep Learning sequential structure:
\begin{itemize}
\item one first input layer, with a number of input units corresponding to the number of features to be considered in building the model (1 for Elo and \(diff\), 8 for Four Factors (4 for each team))
\item one final output layer, with 1 output unit corresponding to the two possible results of a NBA match
\item a stack of several intermediate hidden sequential layers, connecting the input and output layers. Each hidden layer contains several elaboration units to work on data received from the prior layer before sending them to the following layer.
\\Data transformation in each layer is performed by an \emph{activation function}: a function, typically non-linear, used to transform the weighted sum of the layer inputs into outputs; in our models all layers use the classic Rectified Linear Activation (\emph{relu}) \citep{Goodfellow2016}, apart from the output layer, which has a \emph{sigmoid} activation function (the most suitable for a two-class classification problem).
\end{itemize}
This traversal process is repeated several times in both directions, with an optimizer updating the weight values via a backpropagation algorithm, driven by a loss function to be minimized. In our ANNs (a minimal sketch in R is given after the list below):
\begin{itemize}
\item Adam \citep{Kingma2014} is the optimizer.
\item Binary\_crossentropy is the loss function to be minimized.
\item Accuracy is the metric to be used to verify the behavior of the net.
\end{itemize}
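A minimal sketch of such a network in the {\fontfamily{pcr}\selectfont Keras} R interface follows; the number of units, the dropout rate, the regularization value and the exact placement of the dropout layer are illustrative assumptions, not the exact configuration used for the reported results.
\begin{verbatim}
library(keras)
n_features <- 1   # 1 for Elo or diff, 8 for the Four Factors
model <- keras_model_sequential() %>%
  layer_dense(units = 16, activation = "relu",
              input_shape = c(n_features),
              kernel_regularizer = regularizer_l2(0.001)) %>%
  layer_dropout(rate = 0.2) %>%
  layer_dense(units = 1, activation = "sigmoid")
model %>% compile(optimizer = "adam",
                  loss = "binary_crossentropy",
                  metrics = c("accuracy"))
\end{verbatim}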
Like all other machine learning mechanisms, ANN models can be affected by overfitting (i.e. the model is excessively tailored to the training data and unable to generalize when applied to a different dataset); to verify and tune the behavior of the net before its application to a test dataset, the {\fontfamily{pcr}\selectfont Keras} fit function allows reserving a percentage of the training data (excluded from training, in our case 20\%) for validation purposes. Figure \ref{fig:k_overfitting}, with the number of epochs (an epoch consists of one full training cycle on the training set) on the \(x\) axis and a measure of the loss function on the \(y\) axis, shows an example of the overfitting problem, detectable by comparing the loss functions for the training (in blue) and validation (in green) data: the loss on the training data decreases as the number of epochs increases, as expected, while the loss on the validation data does not.
\begin{figure}
\includegraphics[width=1\textwidth]{f3_overfitting.png}
\caption[Deep Learning model on NBA dataset: overfitting]{Deep Learning model on NBA dataset: example of overfitting (number of epochs on x-axis and measure of loss function on y-axis)}
\label{fig:k_overfitting}
\end{figure}
\\To reduce overfitting, dropout layers (i.e. layers that randomly set some of the incoming units to 0, forcing the network to explore less usual paths) and regularization parameters (i.e. penalties introduced on the weight matrices to let the network generalize better) can be employed (see Figure \ref{fig:k_no_over}); the corresponding fit call is sketched after Figure \ref{fig:k_no_over}.
\begin{figure}
\includegraphics[width=1\textwidth]{f4_no_overfit.png}
\caption[Deep Learning model on NBA dataset: reducing overfitting]{Deep Learning model on NBA dataset: usage of dropout layers and regularization parameters help in reducing overfitting}
\label{fig:k_no_over}
\end{figure}
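A sketch of the training call, with the 20\% validation split used to monitor overfitting; epochs and batch size are illustrative assumptions, while x\_train and y\_train are the standardized features and the 0/1 outcome of the home team.
\begin{verbatim}
history <- model %>% fit(
  x_train, y_train,
  epochs = 100,
  batch_size = 128,
  validation_split = 0.2,   # 20% of training data held out for validation
  verbose = 0
)
plot(history)   # training vs validation curves, as in the figures above
\end{verbatim}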
The nets, calibrated to produce models with good prediction quality, are built keeping the two hyperparameters (i.e. the number of layers and the number of units for each layer) small in size, a natural consequence of the choice of using a small number of features.
Results show that the prediction quality for our classification problem on the NBA dataset is almost the same (see Figures \ref{ann:roc_simple} and \ref{ann:roc_complex}) using simple nets (with 1 input layer, 1 structural hidden layer, 2 dropout layers and 1 output layer; AUC is 0.717, accuracy 0.6721 with a threshold of 0.489) and more complex networks (with ten structural hidden layers and many more units for each layer; AUC is 0.716, accuracy 0.6716 with a threshold of 0.432); consequently, the net with the simplest structure has been chosen.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{f5_k_keras_ROC.jpeg}
\caption[ROC curve for a simple Deep Learning model]{ROC curve for simple (just 1 hidden layers) Deep Learning model on NBA dataset}
\label{ann:roc_simple}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{f6_k_complex_ROC.jpeg}
\caption[ROC curve for a more complex Deep Learning model]{ROC curve for more complex (ten hidden layers with more units) Deep Learning model on NBA dataset}
\label{ann:roc_complex}
\end{figure}
\section{Results}
\label{sec:ann:res}
The results reported in this section have been obtained using a v-fold cross-validation with \(v\)=4 (unless otherwise specified): for each validation, 75\% of observations are randomly selected for training, and 25\% for testing\footnote{A different approach based on time (using the first 14 regular seasons for training and the last 2 regular seasons for testing) was tried, too, producing similar results in terms of prediction quality.}.
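For each test split, AUC and accuracy at the ROC-optimal threshold can be computed as in the following sketch, which uses the {\fontfamily{pcr}\selectfont pROC} package (our choice for illustration; the manuscript does not state which tool was used):
\begin{verbatim}
library(pROC)
prob    <- as.numeric(model %>% predict(x_test))
roc_obj <- roc(response = y_test, predictor = prob)
auc(roc_obj)                                       # area under the ROC curve
thr <- coords(roc_obj, "best", ret = "threshold")$threshold[1]
acc <- mean(as.integer(prob > thr) == y_test)      # accuracy at that threshold
\end{verbatim}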
\subsection{Using Elo features}
\label{ssec:res1}
Fit quality for historical Elo based models depends on the three parameters used in calculating the feature: the percentage of regression to the mean (see Subsection \ref{sssec:fc.rtm}) and the two values of home advantage (see Equations \ref{elo.eq1new} and \ref{elo.eq2new}) and \(K\) (see Equations \ref{elo.eq3} and \ref{elo.eq4}) used in the Elo rating calculation. To identify which parameter values produce the best quality, several models have been fitted, cycling over the possible values of the parameters above, both considering and not considering the court issue.
\\Prediction quality for models based on dynamic Elo depends instead on the depth used in averaging, as expressed in Equation \ref{fc.eqd}.
Execution results for models based on Elo variants are reported in Table \ref{tab_elo}.
The quality of predictions for models built using historical Elo without considering the court issue is the best one, with an AUC equal to 0.7117 and an accuracy equal to 0.6721 (using a threshold equal to 0.5047). These values have been obtained using a regression to mean percentage \(P\%\) equal to 20, a home advantage parameter equal to 40, and \(K\) equal to 30.
\\Among the models built using dynamic Elo, the model not considering the court issue, obtained with a depth equal to two, is the best one: its AUC is equal to 0.7117 and its accuracy equal to 0.6736 (using a threshold equal to 0.5049), the best among the models we built in this work. The model built using dynamic Elo considering the court issue, obtained with a depth equal to three, has instead an AUC equal to 0.7103 and an accuracy equal to 0.6705 (using a threshold equal to 0.5148).
\begin{table}
\small
\caption{\small Best quality of predictions for models based on Elo. For each variant, the best AUC measure, the corresponding threshold and the accuracy measure are reported, together with parameters' values used in Elo calculation}
\label{tab_elo}
\begin{tabular}{llllll}
\hline\noalign{\smallskip}
periodicity & court issue & AUC & threshold & accuracy & regression to mean P\% \\
\noalign{\smallskip}\hline\noalign{\smallskip}
historical & not considered & 0.7117 & 0.5047 & 0.6721 & 20 \\
historical & considered & 0.7001 & 0.5058 & 0.6650 & 60 \\
\noalign{\smallskip}\hline
periodicity & court issue & AUC & threshold & accuracy & depth \\
\noalign{\smallskip}\hline\noalign{\smallskip}
dynamic & not considered & 0.7117 & 0.5049 & 0.6736 & 2\\
dynamic & considered & 0.7103 & 0.5148 & 0.6705 & 3\\
\end{tabular}
\end{table}
\subsubsection{Best accuracy model for Elo}
\label{sssec:res2.3}
The best Elo based model uses the dynamic Elo feature (depth equal to 2) without considering the court issue; its AUC, calculated considering a single execution and plotted in Figure \ref{fig.auc_elo}, is equal to 0.7117, the highest among our models.
\begin{figure}[h]
\includegraphics[width=0.45\textwidth]{f7_auc_elo.png}
\caption[AUC for dynamic Elo]{\small AUC for dynamic Elo (single execution). The figure reports the AUC value, together with the optimal threshold 0.466.}
\label{fig.auc_elo}
\end{figure}
Predictions of this model for single seasons have the accuracies reported in Table \ref{tab:seas}: season 2014-2015 shows the best accuracy (0.7053), while the worst accuracies are observed for season 2019-20 (0.6333, only partially played) and season 2008-2009 (0.6420).
\subsection{Using \(diff\) features}
\label{ssec:res2}
In the historical \(diff\) approach, model fit quality depends on the regression to mean percentage, as specified in Equation \ref{fc.eq1}. As a consequence, in order to identify the model with the best prediction quality, all possible values of this parameter have been tried, as reported in Figure \ref{fig.rm_diff}, where it can be seen that accuracy is really low (about 0.615) when regression to the mean is not applied.
\begin{figure}
\includegraphics[width=0.7\textwidth]{f8_diff_rm_acc.JPG}
\caption[Historical diff: accuracy VS regression to mean \%]{\small Historical diff: accuracy VS regression to mean percentage.}
\label{fig.rm_diff}
\end{figure}
Results reported in Table \ref{tab_diff} show that the quality of predictions of the model built using historical \(diff\) without considering the court issue is the best one, with an AUC equal to 0.6925 and an accuracy equal to 0.6626 (using a threshold equal to 0.5236). For the models built using dynamic \(diff\), the quality of predictions not considering the court issue is the best one (using a really high depth equal to 50), with an AUC equal to 0.7020 and an accuracy equal to 0.663 (threshold equal to 0.5255).
\begin{table}
\small
\caption{\small Best quality of predictions for models based on \(diff\). For each variant, the best AUC measure, the corresponding threshold and the accuracy measure are reported, together with parameters' values used for calculation}
\label{tab_diff}
\begin{tabular}{llllll}
\hline\noalign{\smallskip}
periodicity & court issue & AUC & threshold & accuracy & regression to mean P\% \\
\noalign{\smallskip}\hline\noalign{\smallskip}
historical & not considered & 0.6925 & 0.5236 & 0.6626 & 90\\
historical & considered & 0.6775 & 0.4788 & 0.6572 & 78\\
\noalign{\smallskip}\hline
periodicity & court issue & AUC & threshold & accuracy & depth \\
\noalign{\smallskip}\hline\noalign{\smallskip}
dynamic & not considered & 0.7020 & 0.5255 & 0.663 & 50\\
dynamic & considered & 0.6944 & 0.5057 & 0.6586 & 27\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\subsubsection{Best accuracy model for \(diff\)}
\label{sssec:res2.4}
The best model built on top of the \(diff\) feature is the one using the dynamic \(diff\) feature without considering the court issue, with an averaging depth equal to 50; its AUC (calculated considering a single execution) is plotted in Figure \ref{fig.auc_diff}. Predictions of that model for single seasons have the accuracies reported in Table \ref{tab:seas} (where the first season, 2004-2005, has been excluded because it counts only 35 test observations due to NA omission in the dataset preparation). Season 2010-2011 shows the best accuracy (0.7034), while the worst accuracies are observed for season 2019-20 (0.6222, only partially played) and season 2016-2017 (0.6230).
\begin{figure}
\includegraphics[width=0.5\textwidth]{f9_auc_diff.png}
\caption[AUC for dynamic \(diff\)]{\small AUC for dynamic \(diff\) (single execution). The figure reports the AUC value, together with the threshold (and its coordinates) to be considered in calculating accuracy}
\label{fig.auc_diff}
\end{figure}
\subsection{Using Four Factors}
\label{ssec:res3}
As for the other features, prediction quality for historical Four Factors based models depends on the percentage of regression to the mean employed in carrying data from one season to the following one; for dynamic models, instead, prediction quality depends on the depth employed for calculating the rolling mean, as expressed in Equation \ref{fc.eqd}.
Also in this case (as in the corresponding situation for \(diff\)), the prediction quality of the historical approach without regression to the mean is markedly lower (accuracy about 0.61), as reported in Figure \ref{4f_h_prm}. Higher accuracy (0.6427) is found for the model without home/away data separation (with a regression to mean percentage of 78\%), against 0.6371 for the model fitted considering home/away data separation (with a regression to mean percentage of 74\%).
\begin{figure}
\includegraphics[width=0.7\textwidth]{f10_4f_h_prm.jpeg}
\caption[Historical Four Factors: accuracy VS regression to mean \%]{\small Historical Four Factors: accuracy VS regression to mean percentage.}
\label{4f_h_prm}
\end{figure}
Table \ref{tab_4f_h} reports some results: the model built on historical Four Factors without considering the court issue is the best one, with an AUC equal to 0.6655 and an accuracy equal to 0.6400 (threshold equal to 0.5334). Among the dynamic features, the two models are equivalent in terms of quality of fit, slightly lower than that of the historical model.
\begin{table}
\small
\caption{\small Best quality of predictions for models based on Four Factors. For each variant, the best AUC, the corresponding threshold and the accuracy measure are reported, together with the parameter's value used for calculation}
\label{tab_4f_h}
\begin{tabular}{llllll}
\hline\noalign{\smallskip}
periodicity & court issue & AUC & threshold & accuracy & regression to mean P\% \\
\noalign{\smallskip}\hline\noalign{\smallskip}
historical & not considered & 0.6655 & 0.5334 & 0.6400 & 78\\
historical & considered & 0.6527 & 0.4968 & 0.6347 & 74\\
\noalign{\smallskip}\hline
periodicity & court issue & AUC & threshold & accuracy & depth \\
\noalign{\smallskip}\hline\noalign{\smallskip}
dynamic & not considered & 0.6495 & 0.4934 & 0.6371 & 42\\
dynamic & considered & 0.6492 & 0.5091 & 0.6372 & 36\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\subsubsection{Best accuracy model for Four Factors}
\label{sssec:res3.3}
Best predictions are generated using the model based on historical Four Factors without considering the court issue, with a regression to mean percentage P\% of 78; its AUC (calculated considering a single execution) is plotted in Figure \ref{fig.auc_4f}.
\begin{figure}
\includegraphics[width=0.5\textwidth]{f11_auc_4F.png}
\caption[AUC for historical Four Factors]{\small AUC for historical Four Factors (single execution). The figure reports the AUC value, together with the optimal threshold 0.585}
\label{fig.auc_4f}
\end{figure}
Predictions split by season (the first season, 2004-2005, has been excluded because it counts only 35 observations due to NA management in the dataset preparation) produce the accuracies reported in Table \ref{tab:seas}, with regular season 2014-2015 showing the best accuracy (0.6788) and season 2005-06 the worst (0.6175).
\begin{table}
\small\caption[prediction accuracy per season]{\small prediction accuracy per single season of the best model for each feature; first season 2004-2005 has been excluded because not meaningful for some features}
\label{tab:seas}
\begin{tabular}{lccc}
\hline\noalign{\smallskip}
season & best elo accuracy & best \(diff\) accuracy & best Four Factors accuracy\\
\noalign{\smallskip}\hline\noalign{\smallskip}
2005-2006 & 0.6772 & 0.6772 & 0.6175\\
2006-2007 & 0.6885 & 0.6689 & 0.6230\\
2007-2008 & 0.6753 & 0.6786 & 0.6494\\
2008-2009 & 0.6420 & 0.6636 & 0.6235\\
2009-2010 & 0.7028 & 0.6935 & 0.6502\\
2010-2011 & 0.7003 & 0.7034 & 0.6177\\
2011-2012 & 0.6545 & 0.6364 & 0.6455 \\
2012-2013 & 0.6677 & 0.6553 & 0.6522\\
2013-2014 & 0.6829 & 0.6934 & 0.6760\\
2014-2015 & 0.7053 & 0.6623 & 0.6788\\
2015-2016 & 0.6768 & 0.6734 & 0.6566 \\
2016-2017 & 0.6459 & 0.6230 & 0.6590\\
2017-2018 & 0.6633 & 0.6498 & 0.6532 \\
2018-2019 & 0.6964 & 0.6568 & 0.6667\\
2019-2020 & 0.6333 & 0.6222 & 0.6370 \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\section{Conclusions}
\label{sec:ann:con}
In this contribution we showed how appropriately defined statistics can profitably be used as single features in fitting models for outcome predictions on a basketball dataset including 16 NBA regular seasons from 2004-2005 to 2019-2020.
\\The quality of these models is better than that of models fitted using the Four Factors, a synthesis of \emph{box-score} statistics, and comparable to results reported in the literature (with an accuracy of about 0.67-0.70).
\\The best prediction quality for a model considering the whole period has been produced using a single dynamic Elo feature (not considering the court issue), with an averaging depth equal to two (i.e. only the Elo ratings of the prior two matches are considered in the feature calculation). For this model, the AUC is equal to 0.7117 and the accuracy (using a threshold equal to 0.5049) is equal to 0.6736 (the same AUC as the model built using historical Elo, but higher accuracy).
\\Comparing the accuracy of predictions on single seasons for the three models producing the best results, dynamic Elo produces the best prediction in 9 seasons, dynamic \(diff\) in 5 and Four Factors in 2. The best accuracy for a single season is 0.7053, for season 2014-2015.
\\In general, the quality of models built using \(diff\)-based features is close to that of models built using Elo; this is an expected result if we take into account that both features express a direct measure of the strength of a team. Instead, the quality of models based on Four Factors is clearly the lowest among the three approaches, suggesting that approaches based on \emph{box-score} statistics may be close to their limit in outcome prediction quality.
\\Results suggest that the court issue approach to feature definition, separating the data used in feature calculation on the basis of the court, produces predictions comparable in quality to those of models based on the usual single feature (not considering the court), while offering more interpretation details. Regression to mean plays a relevant role in prediction quality, which could probably be improved by considering:
\begin{enumerate}
\item a better management of season changes in the dynamic feature definition. At the beginning of this work, regression to mean was conceived just for historical features, assuming that only a small number of prior matches would have to be considered in the dynamic feature definition. Instead, results show that in some cases, as reported in Section \ref{sec:ann:res}, the best quality in dynamic models is obtained with a not so small depth (50 and 27, 36 and 42). In these cases, the rolling window can easily span two seasons, and regression to mean can play a role. It has been prototypically implemented also for the \(diff\) feature in its dynamic form, and first results confirm that the quality of fit is slightly improved by this strategy; this aspect will be investigated in depth in future work.
\item Analysis and integration of other kinds of information:
\begin{enumerate}
\item injuries, logistics and referees, as proposed by several authors (see Paragraph \ref{ssec:fc.h.elo})
\item social networks (as proposed in \citealp{Miller2015}: today, sources like Facebook (founded in 2004) and Twitter (founded in 2006) are old enough to offer information about several past years), breaking news, betting sites, market exchanges
\item players' characteristics and performances, both as single and with respect to other teammates (see \citealp{Zuccolotto2020} for a review).
\end{enumerate}
\end{enumerate}
Regarding Deep Learning: for this specific classification problem, we obtained good accuracy and limited overfitting by keeping both hyperparameters of the net (layers, units) small, heading towards so-called \emph{shallow} learning: more complicated nets do not seem to offer great advantages in terms of prediction quality.
\\A few last words about {\fontfamily{pcr}\selectfont Keras}, the library we used to build Deep Learning models in R. Our activities have been greatly simplified by this package, which offers a high-level abstraction over neural nets and allows one to focus just on the relevant model aspects. At the moment, there is no complete default explanation mechanism associated with the library, but much research is ongoing to offer explanation facilities (see for example \citealp{Maksymiuk2020}, \citealp{Brandon2020} or \citealp{Molnar2018}), and it is easy to guess that this limitation will soon be overcome.
\begin{acknowledgement}
I would like to thank Prof. Marica Manisera for the suggestions and the advice she gave me during this work.
\end{acknowledgement}
\bibliographystyle{apalike}
\section*{Acknowledgements}
Most of all, I want to thank Prof. Silva and Prof. Dietrich for their support. It was a great experience to work on the Legotracker project, and throughout its progress I always got great advice and insights from all other team members and my supervisors in particular. Furthermore, Estelle Aflalo and Alex Arakelian have contributed a lot with their work on pose estimation and bat recognition - it was fun and fruitful teamwork. Last but not least, many thanks to Jannis for proofreading and for your support.
\section{Introduction}
In recent years, the world of professional sports has been shaken by the amount of data that is being generated about players, teams and games.
Sports data is now a commodity, and everyone from fans to teams is trying to figure out the competitive advantages that can arise from properly wielding and interpreting that data. Baseball organizations, maybe more than any others, have always been data-savvy. Each morning, baseball teams ``receive data bundles from the league that contains play-by-play files from the previous night's major- and minor-league games''\footnote{\url{https://www.cbssports.com/mlb/news/the-surprising-places-mlb-teams-get-their-information-from-in-the-post-moneyball-era/}}.
The Major League Baseball (MLB) itself makes part of this data publicly available since 2008 in the Gameday server, as a collection of XML documents with data as provided by the PITCHf/x system \cite{pitchfx}. PITCHf/x offers details about the ball trajectory for each pitch, and sets the industry standard since 2010.
MLB Advanced Media (MLBAM), the digital arm of MLB, is the main provider of baseball data which includes information about players, the game and the teams. Since the start of the Statcast\footnote{\url{http://m.mlb.com/statcast/leaderboard\#exit-velo,p,2017}} project in 2015, MLBAM also captures the position of players, the ball and high-level game events with an unprecedented level of detail.
Although the tracking data provided by Statcast allows the analysis of the players' actions at a great level of detail, it also displays the limitations of similar tracking systems --
each player is only represented by a 2D coordinate on the field, and huge amounts of data (approximately 7 terabytes of uncompressed data per game) need to be transmitted and stored. The limited amount of data available for each player, combined with the cost of the installed infrastructure, results in an expensive system that is not capable of answering some of the interesting questions of the baseball community. This characteristic can also be observed in other tracking systems, usually organized around a large infrastructure that supports the optical/radar/radio-frequency tracking of 2D player coordinates.
We believe these shortcomings can be addressed with new machine learning approaches, in particular considering recent advances in object detection, action recognition and human pose estimation. Such tools can be used to augment the amount of information that is made available for coaches, players and fans. For example, detailed player profiles comprising information about speed, reaction times and tactics can help coaches assess a player's value. On the other hand, motion models can give the players themselves more insights into their movements and help them to improve their performance. Eventually, machine learning might yield insights into the components that make a pitch or a hit successful.
In this work, we therefore propose a new system and processing pipeline that captures the baseball game at a high level of detail. In particular, we show how the players' movements can be extracted from videos, classified, and further analyzed together with the ball trajectory to compute interesting statistics. The framework operates solely on video material, incorporating and combining state-of-the-art AI techniques to extract as much information as possible from this source. Thereby, we both improve the accuracy of statistics that have been available in previous systems and extend the amount of information available. We demonstrate the relevance of the proposed system by tracking the actions of the most important contest in a baseball game, the pitcher-batter duel. The pitchers use a variety of pitch types (their repertoire) and tricks to gain a competitive advantage over the batter, who does his best to figure out the ball trajectory and react in time for a hit. In this contest, any insight is an invaluable piece of information. As depicted in Fig. \ref{fig:reconstruction}, our system is able to capture and reconstruct the interaction between pitcher and batter extensively. We provide detailed information about their body movements as well as high-level descriptions of strategies and game events.
The main steps of our system may be summarized as:
\begin{description}{}
\item[1.] The tracking of the stance of the players, where the stance is represented by eighteen body key points (or joints) for each frame;
\item[2.] The processing of the joint trajectories to classify player actions;
\item[3.] The tracking of the ball and bat;
\item[4.] The processing of both joint trajectories and the detection of fast moving objects to find key events in the game (e.g. start of the play).
\end{description}
We consider as our main contribution the design of a fully self-contained system for baseball game reconstruction. Here we present the framework in the following steps: First, in chapter \ref{terminology} we provide relevant background information regarding baseball analysis, previously implemented systems and related work on pose estimation and object detection. We then provide an overview of the system in chapter \ref{overview}, where we shortly describe each module, while a detailed explanation of each single method can be found in appendix \ref{joint_tracking}-\ref{object_detecion_methods}. Furthermore, in chapter \ref{results} the performance of all parts of the framework is evaluated on data capturing the battle between pitcher and batter. After the integration of units in the larger system is outlined in chapter \ref{integration_legotracker} together with details on implementation and performance, we discuss the current results and possible directions of further research in chapter \ref{discussion}.
\section{Background} \label{terminology}
\subsection{Problem statement}
Game reconstruction requires extracting as much information as possible from videos. The more detailed the players' movement is recorded, the more analysts can conclude about the success of certain motion patterns and tactics. On the other hand, play diagrams are constructed representing when and where an important event happened in the game \cite{ono2018baseball}. Coaches and fans use such tools to evaluate game mechanics. Thus, various systems have been installed that aim to capture the data that later serves as input to the statistical analysis. Sports data tracking systems usually rely on a subset of three different kinds of input: optical (video data), wearable sensors and radar (or depth cameras). A classic example of an optical motion tracking system is PITCHf/x \cite{pitchfx}, which computes the ball trajectory from three video cameras in Major League Baseball (MLB) venues. The information about ball spin and speed provided by PITCHf/x can already be used to cluster and classify pitch types, as shown by \citet{Pane2013}.
\paragraph{Statcast}
However, the PITCHf/x system was replaced with the development of the Statcast system\footnote{\url{http://m.mlb.com/statcast/leaderboard\#exit-velo,p,2017}}. Since 2015, it has been installed in all major league venues and captures significantly more information than before. With a combination of optical cameras and speed radars, all players on the field and the ball are tracked. The goal of the radar is to enable a more precise description of the ball trajectory, including spin rate and velocity, and to gain information about player speed and interaction with the ball (e.g. the moment of ball release). The system also uses manual input by operators to tag extra events and assess data consistency. The output of the StatCast system, i.e. discrete player and ball positions over time, can be used for game reconstruction to visually explore plays, for example in the framework of \citet{dietrich14}.
Even though Statcast improved game reconstruction a lot, we believe that it is still possible to gain more insights. Most importantly, the players are only tracked as points on the field, not providing any information about the movement of single body parts. Tracking itself is hard, and in other sports it is usually based on wearable sensors such as GPS and RFID tags \cite{Winter2016, Lee2012, Ride2012, Wisbey2010}. For American football, \citet{Foing2010} describe a system that analyzes RFID sensor data to yield player analyses in three stages. But even if all players in baseball were equipped with a sensor as well, no detailed information about the body motion of a single player during a pitch or swing would be gained. This would require sensors on arms and legs.
\subsection{Baseball game terminology}
Before discussing related work from computer vision that can help to fill the gap, some terminology of baseball must be established. Since we focus on the interaction between the pitcher and the batter, only related terminology will be explained here. For a detailed explanation of baseball rules, please refer to \citet{baseballrules}. A baseball game is divided into nine \textit{innings}, which all consist of a certain number of \textit{plays} (called at-bats). A \textit{play} starts with the \textit{pitcher} of team A throwing (\textit{pitching}) the ball from his position at the \textit{pitcher's mound} to the \textit{batter} of team B. If the ball passes through a defined area between the batter's knees and hips, it is either a \textit{strike} or the batter can \textit{swing} the \textit{bat} and the ball is \textit{hit into play}, which means that all \textit{runners} of team B are allowed to move from \textit{base} to base, until they reach \textit{home plate}. They need to reach a base before the pitcher's team has passed the ball to a certain position.
\subsection{Motion analysis}\label{motion_analysis}
In the process of a play as described above, a detailed motion analysis can be beneficial in several respects: Firstly, during the pitch itself, different \textit{pitching positions} and \textit{pitch types} can be distinguished. Regarding the delivery, pitchers (starters) usually start facing the plate, and make a sideways step to get into a proper position (\textit{Windup}). When there are runners on bases, they pitch from the \textit{Stretch}, which is a technique to shorten the motion and give the runners less time to steal bases. Detecting runners is not sufficient to determine the pitching position though, as some pitchers vary the position unexpectedly. For analysts trying to grasp the behaviour of a player, it is therefore crucial to know how often and when the player used a Windup/a Stretch. Furthermore, pitchers can throw the ball in different ways to make it harder for the batter to hit. These \textit{pitch types} differ in the pitcher's hand grip when holding the ball, affecting ball velocity and spin. Similar to the position, a comprehensive tracking system should infer this information directly from the video data.
Secondly, not only the classification of specific movements (e.g. pitch type classification) is relevant for analysis, but also the overall body motion. In the long run it might be possible to construct a 3D model of the motion, which makes the motion of different players comparable and might yield insights into what makes a player successful. Thus, we aim to break down the representation of a player's motion into just a few coordinates describing the displacement of important body parts. More specifically, we believe that computer vision methods for pose estimation must be incorporated in a modern system for baseball analysis.
\paragraph{Pose estimation}
Human pose estimation involves finding a set of coordinates depicting the location of the body parts (or joints) of each person in the image. Performing such inference on each frame of the baseball videos, a player's motion is described as the trajectories of their body parts over time. Instead of the 2D position of the player on the field, we get a low dimensional description of the displacement of certain body key points comprising legs, arms, hips and shoulders.
The large majority of frameworks operate on single images. In general, approaches can be grouped by whether they use videos or single images as input, and whether they handle multiple people in a top-down (first a person detector is used, then pose estimation is applied) or bottom-up (joint coordinates are detected and then assigned to people) fashion. According to the COCO (Common Objects in Context) keypoint challenge 2017, a CNN by \citet{chen} is state-of-the-art in human pose estimation on 2D images, followed by similar approaches (\cite{fang2017}, \cite{Papandreou} and \cite{pose_estimation}). \citet{RNN_pose}, on the other hand, use a Recurrent Neural Network (RNN).
The use of videos instead of single images may help to reduce the uncertainty of the pose estimation, since the temporal consistency provides additional information for the detection of keypoints. \citet{chained_pose} employ a recurrent connection between separate CNNs operating on single images. In the literature, however, a different kind of recurrent architecture, so-called Long Short-Term Memory (LSTM) cells, is usually used for sequential data. Regarding pose estimation, for example \citet{lstm_pose} directly use an LSTM on video input.
Recently, 3D pose estimation has also been developed, and different methods were discussed by \citet{3Devaluation}, including \cite{Yasin2016,Li2015,Li2014, Kostrikov2014}. Using CNNs, \citet{vnect} achieve remarkable results with a model trained on a data set from the Max-Planck-Institute \cite{MPI}. The model is only applicable to a single person though.
In the first version of our proposed framework we employ the model by \citet{pose_estimation}, because for a long time it was by far the fastest one, performing 2D pose estimation at a rate of 8.8 frames per second (fps) for images of size 368-by-654. Furthermore, its running time is independent of the number of people in the image, which is important for sports videos with audience in the background. Note however that, given the modular fashion in which the system is described, this approach can be replaced by new state-of-the-art methods at any time.
\paragraph{Action recognition}
While pose estimation is valuable on its own for game reconstruction, another goal is to classify motion based on the pose data, for example to distinguish pitch types as explained above. Most previous work on action recognition, such as \cite{Masurelle} or \cite{Wang}, requires depth cameras or other aids though. On the other hand, a lot of work is available on classifying actions from videos, but as videos require a larger computational effort, it is preferable to work on the processed pose data instead. To our knowledge, only in \cite{basketball} is such an approach implemented, i.e. machine learning techniques are applied to the 2D pose estimation keypoints of the players. There, the authors predict whether a throw in basketball resulted in a miss or a score.
\subsection{Object detection}\label{object_detection_background}
While the Statcast system is able to estimate ball speed with high precision, we aim to avoid the large infrastructure of radar systems and instead incorporate object detection methods for video input for tracking ball and bat. In addition, a new piece of information that will be valuable for analysts is the position of the glove of the catcher, and more importantly, the movement of the bat during the swing.
Fortunately, baseball bats and gloves are included in the popular COCO data set, which is often used to train Artificial Neural Networks (ANNs), so work on detecting these baseball related objects is available. For example, \citet{Ren} follow up previous work (\cite{Girshick2014,Girshick2015}) to improve a successful approach called Faster R-CNN. Objects are located and labeled by first extracting regions and then predicting the probability of appearance for the object in this region. Another dataset with baseball bats called HICO-DET is used by \citet{Chao}, who train CNNs to recognize human object interactions including baseball bats.
However, our experiments showed that these object detection methods are not able to detect blurred balls and bats during the swing. Thus, conventional approaches for object tracking are additionally required. Approaches for motion detection are compared in \cite{Wu}, evaluating the work by \citet{zhong,Hare2011} and \citet{Kalal2012} as the most successful ones. However, most of their data did not include images with objects of high velocity, which usually appear in single images only as blurred semi-transparent streaks. Therefore, \citet{Rozumnyi2017} have recently developed a new data set, calling these kinds of images ``fast moving objects'' (FMO). To track FMOs, the authors propose a three-stage system based on difference images. Since the baseball moves very fast, this approach is used as a building block.
\subsection{Game events}
\begin{figure}[t]
\begin{subfigure}{0.15\textwidth}
\includegraphics[width=\textwidth]{figures/pitcher_raise_leg.png}
\caption{\label{fig:events1}}
\label{fig:pitcher_raise_leg}
\end{subfigure}
\hfill
\begin{subfigure}{0.15\textwidth}
\includegraphics[width=\textwidth]{figures/pitcher_release.png}
\caption{\label{fig:events2}}
\label{fig:pitcher_release}
\end{subfigure}
\hfill
\begin{subfigure}{0.15\textwidth}
\includegraphics[width=\textwidth]{figures/batter_raise_foot.png}
\caption{\label{fig:events3}}
\label{fig:batter_raise_foot}
\end{subfigure}
\hfill
\begin{subfigure}{0.15\textwidth}
\includegraphics[width=\textwidth]{figures/batter_foot_down.jpg}
\caption{\label{fig:events4}}
\label{fig:batter_foot_down}
\end{subfigure}
\hfill
\begin{subfigure}{0.15\textwidth}
\includegraphics[width=\textwidth]{figures/batter_impact.png}
\caption{\label{fig:events5}}
\label{fig:batter_impact}
\end{subfigure}
\hfill
\begin{subfigure}{0.15\textwidth}
\includegraphics[width=\textwidth]{figures/batter_starts_run.png}
\caption{\label{fig:events6}}
\label{fig:batter_starts_run}
\end{subfigure}
\caption{Interesting events during one play: (\subref{fig:events1}) Pitcher raises his leg (first move), puts the leg back on the ground and releases the ball (\subref{fig:events2}). Meanwhile, the batter lifts his foot slightly (\subref{fig:events3}), and starts swinging (\subref{fig:events4}). If the ball is hit (\subref{fig:events5}) into play, the batter starts to run (\subref{fig:events6}).} \label{fig:events}
\end{figure}
Finally, the detection of additional game events may help the analysis of the \textit{play}, especially the ones that give us more information about the actions of the pitcher and the batter. We start with determining the moment the play starts, which is called \textit{pitcher's first movement} here. Since the first movement is not well-defined, we focus on finding the moment the pitcher raises his leg (Fig.~\ref{fig:events1}). Further on, the important part of the pitcher's motion ends with the \textit{ball release} (Fig.~\ref{fig:events2}). From then on we track the ball and estimate its speed.
On the plate, the batter starts to move slightly before \textit{ball release}, when he shortly lifts his foot and starts the swing (Fig.~\ref{fig:events3}--Fig.~\ref{fig:events5}). The movement of the batter may even give us hints about the \textit{play outcome}, i.e. whether the ball was \textit{hit into play}. Last, the moment the batter starts to run (his first step towards 1st base, shown in Fig.~\ref{fig:events6}) can be assessed for reaction time purposes.
To summarize, game reconstruction involves information of several domains, including human pose estimation, action recognition and object tracking. For each part, we extract and process information from video input alone, employing several computer vision methods taken and adapted from the research that was mentioned above.
\section{Framework for baseball game reconstruction} \label{overview}
We propose a system that is similar to Statcast in scope, but based on video input alone. Additionally, we aim to automatize tasks that have previously required user input, and provide new information that has not yet been available. While in this contribution we mainly focus on describing the analysis framework necessary to realize this goal, it is important to understand the setup that we assume our software will be running on. Specifically, the hardware is planned out as a system of distributed blocks, thus called the ''Legotracker''. The lego blocks, each containing a camera and a processing unit, are spread across the field, in order to acquire high quality videos from different viewpoints. On the blocks, parts of the data processing pipeline can be executed locally. For example, if several blocks run a computer vision algorithm to register an event (e.g. time point of ball release), the information is valuable for synchronization purposes. In general, blocks communicate with each other through a monitoring system, and send their processed data to a shared database, where more time-consuming analysis can be executed later. This way, the system requires less infrastructure and provides more detailed information about individual players due to the proximity of one of the distributed cameras.
The goal of the overall system is to extract (1) movement of players, (2) time points of events and (3) information about the ball and bat trajectory from video sources (Fig. \ref{fig:system_overview}). The components of the proposed framework are shown in Fig. \ref{fig:processing_units}, which is explained from bottom to top in the following. For more details on methods refer to the corresponding part of the appendix indicated for each component.
\begin{figure}[ht]
\begin{subfigure}{0.55\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/system_overview.png}
\caption{Video input is processed to yield three kinds of output: Player movement, ball and bat trajectories, and the time point of important game events.}
\label{fig:system_overview}
\end{subfigure}
\hspace{0.03\textwidth}
\begin{subfigure}{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/game_analysis_system.png}
\caption{State-of-the-art method in pose estimation and object detection are combined, adapted and extended to provide all information necessary for baseball game reconstruction.}
\label{fig:game_analysis_system}
\end{subfigure}
\caption{Overview of the processing pipeline}
\label{fig:processing_units}
\end{figure}
As a main component of the system, we incorporate a pre-trained model for multi-person real-time pose estimation proposed by \citet{pose_estimation}. It yields the coordinates of body joints of all persons in the frame (cf. appendix \ref{roi}). The resulting time series data can further serve as input for movement classification models as well as for event detection (e.g. to determine when the pitcher raises his leg to initiate the pitch). First, however, the target person must be distinguished from other persons in the frame, and the trajectories are imputed and smoothed with low-pass filtering (cf. appendix \ref{localization}). On the clean data, i.e. imputed single-person trajectories, deep learning techniques can be applied to classify the movement into certain categories. In our implementation this module is a 1D-CNN that we call MC-CNN. Whilst the network can be trained generically to classify any body joint trajectories of any player, it is demonstrated on three important tasks here: Regarding the pitcher's motion, MC-CNN is trained to predict pitch type and pitching position, while the batter's trajectories are used to determine the play outcome, i.e. whether the batter starts to run. For details on methodology and model architecture see appendix \ref{mccnn}.
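As a rough illustration of this classification step, the following Python sketch builds a small 1D convolutional classifier over joint trajectories using Keras; the input shape, layer sizes and number of classes are illustrative assumptions and do not correspond to the actual MC-CNN architecture described in the appendix.
\begin{verbatim}
from tensorflow.keras import layers, models

NUM_FRAMES = 160     # frames per play (assumption)
NUM_CHANNELS = 24    # 12 joints x 2 coordinates, facial keypoints dropped
NUM_CLASSES = 10     # e.g. the ten pitch types

def build_joint_trajectory_classifier():
    # Minimal 1D-CNN over smoothed joint trajectories (illustrative only).
    model = models.Sequential([
        layers.Conv1D(64, kernel_size=5, activation="relu",
                      input_shape=(NUM_FRAMES, NUM_CHANNELS)),
        layers.MaxPooling1D(2),
        layers.Conv1D(128, kernel_size=5, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
\end{verbatim}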
For the other two main components of the system (event detection and object detection) we developed a difference image based approach for fast moving object (FMO) detection inspired by \citet{Rozumnyi2017}. The proposed method FMO-C thresholds difference images to output a set of ''candidates'' in each frame, indicating areas where motion occurred (cf. appendix \ref{fmoc}).
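A minimal version of such a difference-image candidate detector is sketched below in Python with OpenCV; the threshold value, the dilation step and the area filter are assumptions for illustration, not the exact FMO-C procedure from the appendix.
\begin{verbatim}
import cv2

def motion_candidates(prev_frame, curr_frame, thresh=25, min_area=10):
    # Return bounding boxes of regions that changed between two frames
    # taken k frames apart; a larger gap makes the detector respond to
    # slower motion.
    g1 = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(g1, g2)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)   # close small gaps
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
\end{verbatim}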
The timing of the game events is obtained from FMO-C, from pose estimation, or from both combined. This is possible because some events can be described by the displacement of certain body parts and/or the motion of an object. Firstly, the pitcher's first movement can be viewed as the first series of consecutive frames in which motion is detected at the pitcher's leg (cf. appendix \ref{first_move}). Similarly, the batter's first step is detected as a significant increase of motion close to his legs. Last, the time point of the pitch release (when the ball leaves the pitcher's hand) can in principle be determined by analyzing when the pitcher's arm is moving the fastest. However, due to the poor quality of the available data, the arm is too blurry during the pitch, so pose estimation often fails. A more reliable way is therefore to track the ball itself and thereby infer when it must have been released.
This leads to the last module of the framework dealing with ball and bat tracking. The ball is detected as a certain pattern of motion candidates which are the output of the FMO-C method. A metric is constructed that decides when the candidates in three consecutive frames are likely to correspond to a ball trajectory. In particular, it can be assumed that the trajectory is rather a straight line and the size of the ball does not vary much. For details on computation and parameters see appendix \ref{gbcv}.
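The idea of accepting candidates only when they look like a ball in flight can be illustrated with a simple geometric test over three consecutive frames; the thresholds and the candidate representation below are hypothetical, the actual metric is defined in the appendix.
\begin{verbatim}
import numpy as np

def plausible_ball_triplet(c1, c2, c3, max_angle_deg=15.0,
                           max_step_ratio=2.0, max_size_ratio=1.5):
    # Each candidate is (x, y, radius). The three centres should be nearly
    # collinear, the displacement roughly constant and the sizes similar.
    p1, p2, p3 = (np.array(c[:2], dtype=float) for c in (c1, c2, c3))
    v1, v2 = p2 - p1, p3 - p2
    n1, n2 = np.linalg.norm(v1), np.linalg.norm(v2)
    if n1 == 0 or n2 == 0:
        return False
    cos_a = np.clip(np.dot(v1, v2) / (n1 * n2), -1.0, 1.0)
    angle = np.degrees(np.arccos(cos_a))
    step_ratio = max(n1, n2) / min(n1, n2)
    sizes = [c1[2], c2[2], c3[2]]
    size_ratio = max(sizes) / max(min(sizes), 1e-6)
    return (angle < max_angle_deg and step_ratio < max_step_ratio
            and size_ratio < max_size_ratio)
\end{verbatim}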
Of course, so far the pipeline only yields the 2D trajectory on images. To reconstruct the 3D trajectory and estimate speed, the outputs of two synchronized cameras are compared.
Finally, FMO-C is complemented by an object detection model for bat tracking. Again we incorporate state-of-the-art methods, namely a two stage CNN for object detection called Faster R-CNN \cite{Ren}. Since baseball bats are included in the COCO dataset, a pre-trained model reliably detects the bat when it is not in motion. During the swing though, the bat becomes very blurry as well, and only FMO-C can detect it. The true motion candidate is found by comparing the candidates' location to the bat position in the previous frame, or to the last detection of Faster R-CNN when it has just started moving (cf. appendix \ref{bat}).
Our main contribution is the construction of the overall framework and processing pipeline. While all single components consist of adapted and combined previous work, the work here describes how they are plugged together to yield a system suitable for the specific situation in baseball, considering the available data and the large variety of analysis tasks.
\section{Results}\label{results}
\subsection{Data for training and testing}
\begin{figure}[ht]
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[height=3cm]{figures/center_field.png}
\caption{center-field camera}
\label{fig:center_field}
\end{subfigure}
\hfill
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[height=3cm]{figures/side_view.png}
\caption{side-view camera}
\label{fig:side_view}
\end{subfigure}
\caption[Videos of the MLBAM auditing tool (data available through a collaboration with NYU)]{Input videos are non-synchronized videos from two different viewpoints}
\label{cameraviewpoints}
\end{figure}
Since the Legotracker hardware is not used in practice so far, the dataset used here is quite different from the data we plan to acquire in the end. We consider the tests on this data as a stress test for our methods, because the Legotracker data will probably be of better quality due to closer cameras, more available viewpoints per person and better temporal resolution. Since the Statcast system is based on radar, of course the video quality is not as important.
We tested the system with data captured from two viewpoints: one in the center field (Fig.~\ref{fig:center_field}), focusing on the home plate, and one in a high side view of the infield (Fig.~\ref{fig:side_view}). However, it is not trivial to combine both views as the cameras are not synchronized. One video does not comprise a whole game, but corresponds only to one play, including all action between pitcher and batter. The center field videos are cut to 6.5 seconds length (around 165 frames, as the frame rate is always 30fps), roughly aligned as the ball release always occurs around frame 90. The side view videos are less aligned and often longer (up to 300 frames). In general though, the start of the action and the time points of other events vary widely. Showing games with more than 200 different pitchers and more than 300 batters, the dataset is very diverse. In addition to the videos, metadata was available providing the initial position of the target players, the pitch type, the pitching position, the play outcome, the ball release frame index and the pitcher's first movement frame index.
\subsection{Joint tracking}\label{mc}
In the domain of baseball analysis, it is important to note the difference between joint detection and joint tracking. While the system requires real-time continuous output, most frameworks including OpenPose \cite{pose_estimation} do inference frame by frame. This leads to two main issues: Firstly, the output coordinates do not transition smoothly, but sometimes jump between two consecutive frames. Secondly, a person is not identified by appearance, so the coordinates of a person in one frame are not associated with the position in the previous frame. Thus, we first describe the experiments conducted on joint tracking, involving joint detection, player localization and filtering of the trajectories.
\paragraph{Pose estimation}
As already mentioned, for frame-by-frame detection of body parts we apply a pre-trained model for pose estimation \cite{pose_estimation}, yielding 2D coordinates of 18 joints for all detected individuals (Fig. \ref{fig:localization_problems}). For real time performance, the images are down sampled to 368-by-654. In first tests, we observed strong artifacts, which are caused by upsampling of the pose estimation outputs. Consequently it was preferable to feed only the important part of the image to the pose estimation network, thereby making the input smaller in general, leading to less down- and upsampling. To achieve this goal, a dynamic region of interest (ROI) around the target player is computed (for details see \ref{roi}). Only very small artifacts remain and completely disappear with low-pass filtering.
Unfortunately there is no ground truth data for pose estimation in the available baseball data, impeding a quantitative assessment of the applied methods. From manual observation it can be concluded though that the model generalizes very well, even to the new domain of sports players and positions. Only in extreme poses, for example when the pitcher raises his leg very high, does the network fail to associate the body parts correctly. On blurry images as in Fig. \ref{fig:localization_problems} the output is not reliable and can fail to distinguish overlapping people. On single frames this might even be hard for a human observer though.
As a more quantitative performance measure one can compute the ratio of missing values. In the output, the set of coordinates is zero if a joint could not be detected. This occurs most often for facial key points, wrists and elbows: In more than 60\% of the frames, eyes or nose are missing. The wrist can not be detected in 28\% of the frames and elbows in around 10\% of the frames on average. While for our purposes facial key points can be discarded, elbow and wrists are important for swing and pitch analysis. Problematically, these gaps occur mostly in crucial moments, for example during ball release because the arm moves so fast that it appears blurry on the frame. These problems indicate that for the final Legotracker system, it might be necessary to replace the pose estimation module with a different approach, for example one that is also using temporal information instead of single frames. On the other hand, cameras with better temporal resolution \cite{gallego2019event}, adapted exposure time and closer viewpoint might be sufficient to handle such problems as well.
\paragraph{Player localization}\label{localization_results}
Although only a ROI around the target player is fed to the model, there might still be multiple people detected, for example the referee and catcher standing directly behind the batter. Also, the number and order of detected people in the pose estimation output is not constant (compare colours in Fig. \ref{fig:localize_a} and Fig. \ref{fig:localize_b}), and overlapping people are sometimes mixed up (Fig. \ref{fig:localize_c}). We propose to take the intersection over union of bounding boxes around the joints of a player, because the results were more stable than for example comparing the distances of joint coordinates directly. The full processing procedure is described in section \ref{localization}.
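The matching step itself reduces to computing the intersection over union between the bounding box of the previously tracked player and the bounding boxes of all persons detected in the current frame, and keeping the best match. The following Python sketch shows the idea; the function names and the minimum-overlap threshold are illustrative assumptions.
\begin{verbatim}
import numpy as np

def joints_to_bbox(joints):
    # Bounding box (x1, y1, x2, y2) around all detected (non-zero) joints.
    pts = np.asarray([j for j in joints if j[0] != 0 or j[1] != 0])
    return (pts[:, 0].min(), pts[:, 1].min(),
            pts[:, 0].max(), pts[:, 1].max())

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def pick_target_person(prev_joints, detections, min_iou=0.1):
    # Return the detection that best overlaps the previously tracked player.
    prev_box = joints_to_bbox(prev_joints)
    scores = [iou(prev_box, joints_to_bbox(d)) for d in detections]
    best = int(np.argmax(scores))
    return detections[best] if scores[best] >= min_iou else None
\end{verbatim}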
For the pitcher, the approach works for all videos: if the pose estimation network picked up people in the audience, the algorithm correctly decided to use the pitcher's joints instead of theirs. Regarding the batter, in approximately 10\% of the videos in which he starts to run (after a successful hit), at some point he is confused with the referee standing behind him. Analyzing these results, we inferred that pose estimation is too inaccurate to track the batter correctly in these situations. As shown in Fig.~\ref{fig:localize_c}, pose estimation returns a set of coordinates for one person, where a few joints belong to the batter and a few to the referee (green dots). Often, the batter is not detected/separated from other people for up to 20 consecutive frames. As a result, once the batter is detected correctly again, the tracking procedure can no longer determine which detected person corresponds to the target person. The problem is thus due to the pose estimation rather than to the localization algorithm. A straightforward solution would be, for example, to apply a more reliable person detection algorithm on top that does not suffer from problems related to single joint detection.
\begin{figure}[t]
\centering
\begin{subfigure}{0.31\textwidth}
\includegraphics[width=\textwidth]{figures/localization_problems_a.png}
\caption{\label{fig:localize_a}}
\end{subfigure}
\hfill
\begin{subfigure}{0.31\textwidth}
\includegraphics[width=\textwidth]{figures/localization_problems_b.png}
\caption{\label{fig:localize_b}}
\end{subfigure}
\hfill
\begin{subfigure}{0.31\textwidth}
\includegraphics[width=\textwidth]{figures/localization_problems_c.png}
\caption{\label{fig:localize_c}}
\end{subfigure}
\caption{Challenges in tracking the target player: The output of the pose estimation model can be quite different even in consecutive frames. For example, from (\subref{fig:localize_a}) to (\subref{fig:localize_b}) the order of detected people changes (person index in output list corresponds to colour) and a new person (the referee) is detected correctly. Furthermore, the output is not very accurate on blurry frames, and people are sometimes mixed up as shown in (\subref{fig:localize_c}).}
\label{fig:localization_problems}
\end{figure}
\paragraph{Interpolation and filtering}\label{filtering_results}
In the current state of the framework, missing values in the pose estimation output are filled by simple linear interpolation, since other methods are unstable when larger gaps occur in the sequences. Deep learning techniques to predict the next value in the sequence were also explored, but while the prediction for one frame is very accurate, performance decreases significantly when joints are not detected for several frames, which is quite common.
\begin{figure}[ht]
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\textwidth]{figures/lowpass.png}
\caption{Butter lowpass filtering}
\label{fig:lowpass}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\textwidth]{figures/bspline.png}
\caption{B-spline fitting}
\label{fig:bspline}
\end{subfigure}
\caption[Self-created figure]{The output of the full pose-tracking processing pipeline is pictured, which is a time series of coordinates for each joint of one player (in this example the batter). Here, for the sake of simplicity only the X coordinate is plotted. One can see that the whole body is moving to the left (decreasing X coordinate for all joints). The strong motion of arms and legs right before that correspond to the swing. One can compare the output of different signal filtering methods, namely Butter lowpass filtering (\subref{fig:lowpass}) and B-spline curve fitting (\subref{fig:bspline}). The outputs are very similar, except for slightly different peak amplitudes, e.g. lowpass filtering yields a higher peak for the right ankle.}
\label{fig:smoothing}
\end{figure}
Finally, small variations between frames cause high-frequency noise. Our experiments showed that a Butterworth low-pass filter outperformed other approaches such as convolving with a Gaussian, a Blackman or a Hamming window, or applying a one-dimensional Kalman filter. It is worth reporting, however, that another method performed very well and can actually apply imputation and filtering jointly: B-spline curve fitting fits a polynomial to the available data (which may contain gaps). The method is well-suited for the available data, because the player's joints transition smoothly over time, yielding a polynomial-like curve. Fig. \ref{fig:bspline} shows that the output of B-spline fitting (here only the x-coordinate is plotted for simplification) is very similar to interpolation plus low-pass filtering (Fig. \ref{fig:lowpass}), with only a few peaks showing different magnitudes (see for example the right ankle). A further investigation, plotting B-spline and low-pass outputs for multiple videos, concluded that both methods are about equally good: sometimes interpolation also interpolates across artifacts which B-spline ignores, but on the other hand B-spline fitting sometimes ignores correct peaks in the curve (a hand moving up quickly is wrongly regarded as an artifact). Since both methods seem similarly appropriate, linear interpolation and low-pass filtering were chosen for time efficiency reasons.
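For reference, the chosen combination (linear interpolation of missing joints followed by a Butterworth low-pass filter) can be reproduced with a few lines of Python using SciPy; the filter order and cutoff frequency below are illustrative and not necessarily the parameters used in our experiments.
\begin{verbatim}
import numpy as np
from scipy.signal import butter, filtfilt

def smooth_trajectory(series, cutoff_hz=3.0, fs=30.0, order=3):
    # series: one joint coordinate over frames, sampled at fs frames/second;
    # missing detections are reported as 0 by the pose estimation model.
    x = np.asarray(series, dtype=float)
    x[x == 0] = np.nan
    idx = np.arange(len(x))
    good = ~np.isnan(x)
    x = np.interp(idx, idx[good], x[good])      # linear interpolation of gaps
    b, a = butter(order, cutoff_hz / (0.5 * fs), btype="low")
    return filtfilt(b, a, x)                    # zero-phase low-pass filter
\end{verbatim}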
To summarize, the processing pipeline presented here, starting with raw videos, yields a time series of joint coordinates for each target player separately. In the following, this time series will be referred to as the ``joint trajectories'' of one player. In other words, joint trajectories comprise the 2D coordinates of 12 joints (excluding facial key points) of the target person over time (for each frame), which are already interpolated and low-pass filtered.
\subsection{Movement classification}
Joint trajectories can be further processed for multiple tasks, including game simulations, but also to classify movements into distinct classes. As mentioned above, we have developed a deep learning approach where the network is trained in a supervised fashion with joint trajectories as the input and the output class being compared to ground-truth labels available from Statcast. Statcast acquires these class labels by manual logging, so any automatic classification is an improvement. The proposed model architecture called MC-CNN is described in detail in section \ref{mccnn} and depicted in Fig. \ref{fig:net_architecture}. The results presented below refer to experiments with smoothed joint trajectories of pitcher and batter of 8245 videos recorded from center field. This camera is used because it is much closer to the players than the side-view camera, leading to a more accurate pose estimation.
Whilst MC-CNN can be used on other players and even on other sports, here we demonstrate its performance on three important tasks: Inferring the pitching position, the pitch type and the movement of the batter. All accuracies were obtained using ten fold cross validation. We also calculated the average accuracy per class, here called balanced accuracy (BA), because often the number of instances per class varies a lot.
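Balanced accuracy, as used here, is simply the average of the per-class accuracies; a minimal Python sketch is shown below (with scikit-learn, the equivalent metric is \texttt{balanced\_accuracy\_score}).
\begin{verbatim}
import numpy as np

def balanced_accuracy(y_true, y_pred):
    # Mean of per-class accuracies (recall), robust to class imbalance.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.unique(y_true)
    per_class = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(per_class))
\end{verbatim}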
\paragraph{Pitching Position}
Firstly, we want to solve a simple binary classification problem, namely the pitching position. In section \ref{motion_analysis}, it was explained what the pitching position refers to and why it is relevant for game analysis. As mentioned in that context, Windup and Stretch differ with respect to the speed of the motion and the leg position, so the classes should be clearly distinguishable from joint trajectories. Accordingly, MC-CNN achieves an accuracy of 97.1\% on average when predicting the pitching position on test data. The balanced accuracy is also 97.0\%, so the approach works equally well for Windup and Stretch. On top of that, a qualitative analysis of errors showed that some of the ``misclassifications'' were actually mislabelled in the dataset.
\paragraph{Pitch type}
Depending on the analysis system, a different number of pitch types is distinguished. In the available metadata there are ten types.\footnote{The metadata contains the following pitch types: Fastball (4-seam), Fastball (2-seam), Fastball (Cut), Fastball (Split-finger), Sinker, Curveball, Slider, Knuckle curve, Knuckleball, Changeup.} While previous work has quite successfully predicted the pitch type from the ball spin and speed, we were interested in whether the pitch type is also visible in the general movement, i.e. the joint trajectories. It is arguable whether this is possible, since not even experts can distinguish between all pitch types without observing the hand grip and the ball. In addition to the problem of different classes corresponding to the same trajectory pattern, some classes have a very high intra-class variability, i.e. the pitch type is executed differently depending on the player.
\begin{wrapfigure}{R}{10cm}
\begin{center}
\includegraphics[width=10cm]{figures/confusion_matrix.png}
\caption{The normalized confusion matrix can inform about which pitch types are hardest to distinguish. Rows add up to 1, such that colors indicate the relative confusion with each other. On the diagonal the colour corresponds to the accuracy of predicting this particular pitch type correctly.}
\label{fig:confusion}
\end{center}
\end{wrapfigure}
As expected, our tests could only partly confirm the hypothesis that the pitch type can be classified from the general body motion. Trained on all ten pitch types, the network achieved 55.8\% accuracy (59.8\% BA). It is easy to see which pitch types are rather similar by analyzing the confusion matrix depicted in Fig. \ref{fig:confusion}. To make sure that the confusion matrix is not due to the inner workings of one particular neural network model, several models were trained and their confusion matrices compared. They appeared to be very consistent, even when training with different network architectures (CNN vs RNN) and on distinct subsets of the data. It is thus very likely that the confusion matrix depicted in Fig. \ref{fig:confusion}, which is the average of six models trained separately and tested on different data, really shows the similarity of the pitch type classes. One can observe for example that Fastball 2-seam and Fastball 4-seam are sometimes mixed up, which makes sense because they mostly differ in the speed and motion of the ball.
For further analysis we varied the training data with respect to the number of classes and variability of players. Interestingly, filtering for Pitching Position, i.e. only taking the videos as input where the pitching position was a Windup (or a Stretch), did not have any effect on accuracy although the position could account for high variability of the joint trajectories.
Furthermore, we varied the number of players and the number of classes.
Firstly, in order to find out whether errors are due to differences between players (intra-class differences), we trained the network again taking into account only the five players (starters) for which the most samples were available (2519 videos). This dataset contains seven pitch classes, which correspond to the pitch types thrown by these pitchers. Training MC-CNN on this simplified dataset, the accuracy is 65.1\%.
Of course, the improvement might also be due to the reduced number of classes. Consequently, the next step was to control for inter-class variability by restricting the number of classes. The task was simplified by sorting the classes into three superclasses, namely Fastballs, Curveballs and Breaking Balls. The best accuracy, 80.2\%, is achieved in this condition when additionally restricting the data to the five players.
Considering that some pitch types are not distinguishable even for the human observer, this is a very promising result proving the informativeness of joint trajectories. However, as experts can only recognize the pitch type with further input, it might be impossible to reach an accuracy that is large enough for a reliable automatic labeling. Therefore, in future work it will be investigated how additional information can be taken into account. For example, feeding the network with joint trajectories together with the speed of the ball (which is already part of our pipeline) might yield much better results.
\paragraph{Play outcome}
A third task suited for testing MC-CNN was the play outcome, a variable with three possible values: \textit{No swing} (the batter does not try to hit the ball), a \textit{swing but no run} (the swing resulted in a foul or a strike) and a \textit{run}. Further distinction is not possible solely from the batter's joint trajectories. These three classes are already relevant though, e.g. to control the cameras and for further processing, such as activating ball tracking if the ball was hit. MC-CNN achieves 97.9\% accuracy (94.7\% BA) on this three-class action recognition task. Note also that some misclassifications occur only when localization fails and picks up the wrong person, as explained in section \ref{localization_results}. In those videos, the Swing and the Run classes are confused.
In general, for each of these three classification problems we believe some of the errors are due to an unstable pose estimation rather than to the network itself, and hope performance can be improved in an applied system using closer and better quality cameras.
\subsection{Event detection}\label{event_detection}
When evaluating a baseball game, analysts are also interested in important game events to explore the course of a play. For example, these events can be used in systems such as developed in \cite{dietrich2014baseball4d, lage2016statcast, ono2018baseball} to visualize the game timeline. In the Legotracker system, information about the time point of events can also be used to automate camera operation and synchronization. For example, the moment the pitcher starts to move can be seen as the start of a play, meaning that the camera needs to start saving the video, which is later sent to a database storing each play individually. Another possible application is measuring reaction times using the difference between ball release frame and the frame the batter starts running.
Some events are directly visible in the joint trajectories, while others can be detected much better by incorporating a motion detector. As mentioned in section \ref{overview}, we suggest letting a difference-image approach fill this role, accounting for the high speed of the ball, which rules out many other motion detection methods such as optical flow. In this section, the performance of the proposed framework on event detection is evaluated.
\subsubsection{Batter's movement}\label{battermovement}
Primarily, the motivation to determine the time points of events in the batter's movement is that they enable computing reaction times. Interesting events thus include the moment the batter puts his foot down to perform a swing, and the frame in which he starts running, the ``first step''. To our knowledge no other system provides such information, so tests had to be conducted on data that we labelled manually ourselves. We took 150 videos in which the batter starts to run and applied simple gradient thresholding on the joint trajectories to get preliminary results. After manually correcting the videos for which gradient thresholding showed poor performance, a dataset of 150 videos was available for training and testing. In section \ref{battermovement} we explain how we augmented the dataset, trained an LSTM on the joint trajectories in order to learn the time point of the batter's first step, and further refined the gradient approach to output the moment the batter raises his leg.
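The gradient-thresholding heuristic used to create these preliminary labels can be sketched as follows in Python; the threshold, the use of the ankle x-coordinate and the required number of consecutive frames are illustrative assumptions.
\begin{verbatim}
import numpy as np

def first_step_frame(ankle_x, grad_thresh=2.0, min_consecutive=3):
    # ankle_x: mean x coordinate of both ankles per frame (already
    # interpolated and low-pass filtered). The first step is taken to be
    # the first frame at which the horizontal displacement stays above
    # the threshold for several consecutive frames.
    grad = np.abs(np.gradient(ankle_x))
    run = 0
    for frame, above in enumerate(grad > grad_thresh):
        run = run + 1 if above else 0
        if run >= min_consecutive:
            return frame - min_consecutive + 1
    return None   # no sustained movement detected
\end{verbatim}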
During training, test accuracy of predicting the exact frame index of the event was around 25\% (with quite high variance), but in more than 90\% of the data the error margin was less than 3 frames (0.1 seconds). Testing on a separate set of 21 videos, the model achieved a mean square error of 3.43 frames compared to the gradient labels, which is sufficient considering the imprecise definition of a ''first step''. Further, assuming the time of the first step is known, the time frame for the leg lifting can be restricted to a certain range of frames. For 80\% of the test data, it was sufficient to take the maximum of the y coordinate of ankles and knees in this time window as an approximate. In the other 20\% the prediction was slightly late. In further work it might be interesting to explore other approaches, such as training a separate CNN or LSTM on the task of recognizing the leg lifting, or taking a similar approach as for the pitcher's first movement described in the following. In general, other approaches must be explored here, which is a project on its own since inference on the batter's first movement involves labelling data extensively. As the gradient labels are far from perfect themselves, the presented results are rather a proof of concept, demonstrating how such events can be inferred given the body joint trajectories.
\subsubsection{Pitcher's first movement}
Labels for the pitcher's first movement are available, but very unreliable. The definition of this event in the metadata is not clear: in some videos the pitcher has not started to move at all in the ostensible ``frame of first movement'', while in some others his leg is already set back on the ground right before pushing the ball forward. Consequently, we decided not to compare our outputs to the ground truth labels directly, but to use another metric: we believe it is more informative to compare the results for 275 center field videos to the ball release frame, thereby indirectly validating the moment of first movement. The release frame is a suitable reference point because the movement of the pitcher is of relatively constant speed, i.e. the time period between first movement and ball release should not vary much. In contrast to the labels for the pitcher's first movement, the available labels for the release frame are very reliable because the videos are roughly aligned at that point. This is apparent in Fig. \ref{fig:release_boxplot}, where it is shown that for all videos recording the pitch, the release frame is always around frame 93.
The proposed system recognizes the pitcher's first movement based on motion detection close to the pitcher's leg. As explained in detail in section \ref{first_move}, in our approach a first movement is detected when the difference image method FMO-C finds motion candidates close to the pitcher's leg. Consequently, for this task both joint trajectories and fast moving object candidates (FMO-C) are combined. To run the motion detection method, a hyperparameter $k$ must be set which controls speed sensitivity (cf. \ref{fmoc}). Essentially, $k$ defines the frame rate at which difference images are constructed, such that a higher $k$ means a lower frame rate and thus larger differences between frames. Tests were run selecting every $k^{th}$ frame with $k\in [2,5]$, i.e. at frame rates of 15, 10, 7.5 and 6 fps. In other words, the motion in a time period of 0.07, 0.1, 0.13 and 0.17 seconds respectively is observed. For more details and other hyperparameters see section \ref{first_move}.
\begin{figure}[ht]
\centering
\begin{subfigure}{0.18\textwidth}
\centering
\includegraphics[height=9cm]{figures/release_frame_boxplots.png}
\caption{Ball release}
\label{fig:release_boxplot}
\end{subfigure}
\hfill
\begin{subfigure}{0.78\textwidth}
\centering
\includegraphics[height=9cm]{figures/first_movement_boxplots.png}
\caption{Pitcher's first movement}
\label{fig:first_boxplot}
\end{subfigure}
\caption[Self-created figure]{The distribution of the ball release frame index in a sample of videos of 160 frames length is compared to the distribution of the pitcher's first movement frame index. While the release frame is very constant as shown in (\subref{fig:release_boxplot}), Statcast labels for the first movement in the left column of (\subref{fig:first_boxplot}) vary widely and there are many outliers with unreasonably early motion detection. In comparison, our method FMO-C in different configurations (using every $k$-th frame for different speed sensitivities) shows lower variance, especially using $k=2$ and $k=3$. Best performance is achieved by refining the output, taking into account trajectory maxima in a defined range.}
\label{fig:boxplots}
\end{figure}
In Fig.~\ref{fig:first_boxplot} it is visible how the mean first movement frame index shifts when varying $k$. The higher $k$, the lower is the artificial frame rate, so between frames there are more changes and thus the difference image indicates more motion. Therefore, $k=1$ is only responsive to faster motion and finds the first movement too late, while $k=5$ is too sensitive and leads to many outliers detecting motion at the very beginning of the play. When setting $k=2$ or $k=3$ the approach seems to have the right speed sensitivity to detect a moving leg, because the variance is lower and corresponds to the more realistic assumption that the time between first movement and ball release does not vary much. Highest reliability is achieved by ``refining'' the output of $k=2$, $k=3$ or $k=4$, i.e. selecting the highest point of the leg in a range of ten frames around the predicted frame. With $k=3$ the approach seems to perform best, because for $k=2$ there are some outliers that are too late, which is worse than outliers that are too early when operating the cameras based on this event.
One could of course argue that the variance might be due to variance in the data itself. However, plotting the output frame predicted with our method (with speed sensitivity $k=3$, refined) for 275 plays, the pose of the pitcher is indeed very consistent. Note that the results comprise both Windups and Stretches, so the method works even if the leg is not raised high (in the Stretch position). In contrast, a qualitative analysis of Statcast labels shows large variation, as sometimes the pitcher stands still, while in other cases the leg is already lifted.
\subsubsection{Ball release frame}\label{pitcherrelease}
In the current state of the framework, the ball release frame is determined by detecting the ball, which is done with the FMO-C method and our GBCV algorithm for ball tracking (cf. \ref{gbcv}). Once the ball is recognized, the time of release can be approximated by its speed and distance to the pitcher.
To evaluate ball tracking, and thus also for concluding about the release frame from the ball appearance, we use videos taken from the side-view camera (Fig.~\ref{fig:side_view}). The reason for this is simply that the ball is hardly visible in front of the heterogeneous audience in the background of the center-field videos, while the side-view videos are filmed from a viewpoint high above the field, so the background is just grass. The set of candidates outputted by FMO-C is thus more reliable, and the ball detection algorithm we developed, GBCV (cf. \ref{gbcv}), is more accurate.
Unfortunately, taking side view videos raises the problem that labels are not available in the metadata. Since the side view camera is not synchronized with the center field camera, the available release frame labels are of no use. Therefore, predictions for the release frame by this approach can only be evaluated qualitatively. Looking at the results for hundreds of videos, in approximately 95\% of them the ball was tracked correctly, and once the ball is detected, determining the release frame is straightforward. Considering especially that the ball is hardly visible to the human eye in this video quality, the accuracy is quite remarkable. Examples can be seen in Fig.~\ref{fig:all_release}.
\begin{figure}[ht]
\centering
\includegraphics[height=14cm]{figures/all_release.png}
\caption[Videos of the MLBAM auditing tool (data available through a collaboration with NYU)]{Examples for output frames that are labeled as the release frames with our FMO-C and GBCV approach. One can see that the ball is hardly visible and still the output corresponds to the release frame in all cases, indicating that our ball detection algorithm can distinguish the ball from other moving objects such as the hand.}
\label{fig:all_release}
\end{figure}
\subsection{Object tracking}\label{objects}
\subsubsection{Ball trajectory and speed} \label{ball}
The task of detecting the ball is challenging, as a baseball is rather small and, more importantly, very fast, with an average speed of 92 miles per hour for fastballs. In single frames, its appearance is only a blurred grey streak. While tracking the ball is problematic in itself because many well-known methods hardly work on this data (cf. \ref{object_detection_background}), the main challenge is detecting the ball in the first place and in particular distinguishing it from other moving parts in the image, most of all the pitcher's arm.
Due to the speed, the performance of usually well-performing object detection models is very poor. When testing a model of Faster R-CNN \cite{Ren} pre-trained on the COCO data set, which includes baseballs, the ball was recognized at all in less than 10\% of the videos. As already mentioned, our difference-image approach together with GBCV outperforms the other implemented methods by far, detecting the ball in 95\% of the videos. No ground truth data was available for the ball trajectory, but one can also interpret the following results of speed estimation as a proof of success of the method.
\subsubsection{Ball speed}
The ball speed was approximated from the 3D trajectory, which in turn was constructed as the intersection of the predicted ball trajectory and a vertical plane from pitcher to batter. Obviously it would be much better to have two or more cameras filming the pitch from different viewpoints and reconstructing the 3D coordinates by combining the output trajectories, but in the available data there were no two cameras with aligned frames. Thus, the missing depth information in the 2D coordinates of the ball must be computed in another way. Since the pitcher throws the ball in a straight line towards the batter, we assume that the ball is somewhere in the vertical plane containing pitcher and batter, and approximate the 3D coordinates as the intersection with this plane.
The results of speed estimation were evaluated for 331 side-view videos cut to twenty frames, all starting at the ball release. The ball was detected correctly in all videos, with the exception of one video where the hand of the pitcher was detected as the ball for a few frames. Fig. \ref{fig:ball_histogram} depicts the error distribution for both available (asynchronous) cameras. Compared to the labels from MLBAM, our calculation had a mean absolute error of 2.53 mph, however systematically underestimating the speed: Subtracting our speed from the ground truth yields an average of 2.27 mph, with a standard deviation of 1.61 mph. A second camera from the other side showed the opposite behaviour, overestimating the speed by 2.5 mph on average. Considering these results, we believe that the error is a consequence of the 3D approximation. It seems that the vertical plane must be shifted towards the second camera. We expect the accuracy to increase once the pitch is shot by synchronized cameras.
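The following Python sketch illustrates the plane-intersection idea under simplifying assumptions introduced purely for illustration (a calibrated camera with known intrinsics and a pitcher--batter plane that is roughly fronto-parallel to the side-view camera); the calibration values are hypothetical and the actual implementation may differ in detail.
\begin{verbatim}
import numpy as np

# Hypothetical calibration values; in practice these come from the camera
# setup and the field geometry. Camera coordinates: origin at the camera,
# z axis along the optical axis.
K = np.array([[1400., 0., 960.],
              [0., 1400., 540.],
              [0., 0., 1.]])          # camera intrinsics (pixels)

# For a side-view camera the vertical pitcher-batter plane is roughly
# fronto-parallel, here modelled as z = plane_z (metres from the camera).
plane_z = 30.0

def pixel_to_3d(u, v):
    """Back-project a pixel onto the pitcher-batter plane."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.])   # viewing ray, ray[2] == 1
    t = plane_z / ray[2]                            # ray-plane intersection
    return t * ray

def ball_speed_mph(pixel_track, fps=30.0):
    """Approximate the speed from consecutive 2D ball detections."""
    pts = np.array([pixel_to_3d(u, v) for (u, v) in pixel_track])
    step = np.linalg.norm(np.diff(pts, axis=0), axis=1)   # metres per frame
    return float(np.mean(step)) * fps * 2.23694           # m/s to mph
\end{verbatim}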
\begin{figure}[ht]
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/ball_histogram.png}
\caption{Mean error for camera a}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/ball_histogram_camera_b.png}
\caption{Mean error for camera b}
\end{subfigure}
\caption[Self-created figure]{Error of speed approximation: The speed is systematically underestimated for camera a, and overestimated for camera b. This might be due to the lack of synchronized cameras, leading to an imprecise approximation of the 3D trajectory.}
\label{fig:ball_histogram}
\end{figure}
\subsubsection{Bat and glove AABB}
Detecting bat and glove is the only task for which we directly used high quality videos, because otherwise the bat is hardly visible. Twenty such videos were available. The glove was detected by Faster R-CNN alone in 62\% of the frames, which is sufficient because the catcher is not moving much and missing frames can be interpolated.
For bat detection we tested the corresponding module in our framework, a combination of Faster R-CNN and FMO-C (cf. \ref{bat}), taking into account only the frames during the swing (manually selected, around 56 frames per video depending on swing duration). Faster R-CNN detects the bat in 22.3\% of the frames, which are mostly in the beginning, yielding a suitable starting point for FMO-C. Of the remaining frames, FMO-C detected 57.3\%, yielding an overall detection rate of 66.8\% of the frames. This is sufficient in all tested videos to approximate 2D tip and base trajectories during the swing. In future work we aim at closing the gaps, for example with an ellipse fitting approach.
\section{Integration of units in the Legotracker system}\label{integration_legotracker}
\subsection{Implementation}
The system is implemented in Python, using the OpenCV library for video input processing, and Tensorflow for training NNs\footnote{\url{https://github.com/NinaWie/pitch_type}}.
For pose estimation we take a model pre-trained on the COCO data set, with a test script in Pytorch.\footnote{\url{https://github.com/tensorboy/pytorch\_Realtime\_Multi-Person\_Pose\_Estimation}}
To detect the baseball bat, we use an implementation of Faster R-CNN available on Github\footnote{\url{https://github.com/rbgirshick/py-faster-rcnn}} in a demo version trained on the COCO data set with Caffe.
\subsection{Integration} \label{integration}
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{figures/integration.png}
\caption[Self-created figure with videos from \url{https://www.youtube.com/channel/UCNFOPbg-VfvJpf7J7x9n4nA} (accessed 29.05.18)]{The sequential processing pipeline of a video is depicted. Pose estimation and FMO-C are applied on each frame, and the output is used for ball and bat detection directly during the video recording. Faster R-CNN and FMO-C outputs together infer the bat trajectory. The complete set of joint trajectories is filtered in the end and serves as input for the units that cannot be executed online, namely MC-CNN and the detection of the batter's events.}
\label{fig:integration}
\end{figure}
While in the current state of work all presented methods are implemented and tested separately, in the final system they will be integrated and run in parallel. The units can be divided into the ones that directly run while the play is recorded, and the analysis parts that are executed after the course of one play. In the Legotracker setup, each lego block would directly process the video to yield the joint trajectories for their observed target player, but then after the play it would send the data to the database, where further inference can be done (e.g. pitch type classification). For 3D modelling of the pose, several cameras have to observe the same person from different viewpoints.
In Fig.~\ref{fig:integration}, the sequence of processing units and their respective computational effort is visualized. At this point of the project, only pose estimation and Faster R-CNN are too slow to process a video at 30 fps. Summing up the whole processing pipeline, pose estimation is applied on each frame while recording the video, and the target player is directly localized. At the same time, the motion detector FMO-C operates on each triple of consecutive frames (thereby lagging behind by two frames). When the processing unit filming the pitcher reports that the play was initialized with the pitcher's first movement, the frame index is reported to the monitoring system, saving the information as the start of the play.
Once the processing unit registers the ball (when the GBCV algorithm outputs that a motion pattern corresponds to a ball trajectory), the ball tracking and speed estimation units are activated. On the other hand, the unit closest to the batter starts preparing for bat tracking. Specifically, the Faster R-CNN model is executed just often enough (inference is too slow to process every frame) to yield a start position for bat tracking. FMO-C together with the position of the wrist can then determine the bat trajectory with real time performance.
The only modules that cannot run online, by which we mean directly drawing inference on the current frame or just a few frames later, are the analysis of the batter's events and movement classification in general. MC-CNN as well as the LSTM trained for the batter's first step require a complete set of joint trajectories of one play (a sequence of around 160 frames in the available Statcast data). While neither piece of information is crucial for the system to work, unlike for example the pitcher's first movement that initializes further processing steps, it would of course be preferable to provide this information immediately as well. For game analysis purposes, it is certainly sufficient to classify the pitch type after the game and store it in a database, but it might be a motivation for future work that with real time inference, the name of the pitch type could be displayed on the screen straight away, making the information available to the TV audience.
\subsection{Performance}
While on the lowest scale, at a resolution of 368-by-224, pose estimation itself requires on average 0.5 s/frame on a Nvidia Tesla K80 GPU, player localization is insignificant with only 0.0005 s/frame. Interpolation and filtering also take only 0.00085 s/frame, but are not applied in real time anyway because multiple frames are required. Thus, in the final implementation of the Legotracker, the pose estimation module, which is by far the slowest component of pose tracking, should be replaced by better performing models from recent research in this field. Applying MC-CNN for movement classification only takes 0.0156 s without a GPU (on a 1.8 GHz dual-core processor) for one data point (one play).
Meanwhile, FMO-C operates on grey scale images in parallel to pose estimation. On a Tesla K80 GPU and a video with a resolution of 1080-by-1920, the time effort is 0.12 s per frame, including FMO-C, GBCV for ball detection and finding the pitcher's first movement. GBCV requires 74\% of the time effort, because a graph of candidates is built and a confidence value must be computed at each iteration. On smaller images of size 540-by-960, the time effort is reduced significantly to 0.033 s/frame. This means that 30 frames per second can be processed.
\section{Discussion}\label{discussion}
The presented framework describes a possible processing pipeline for a system that captures the baseball game in unprecedented detail. We believe we have demonstrated that, in principle, it is possible to acquire all necessary information just from videos, without the need for manual user input. Nevertheless, the current version is of course far from perfect, and work needs to be put into each of the modules in the framework. In general, there are four major challenges that are related to the nature of the data and task: real-time inference, limited functionality, generalization and 3D approximation.
Firstly, most processing should run online and provide inference straight away. As outlined in \ref{integration}, most parts of the proposed framework fulfill this criterion. The side effect is that more conventional methods had to be implemented instead of deep learning models, because they are often faster and much easier to apply online. For example in bat detection, Faster R-CNN would be too slow (leaving aside that it does not work on blurry images anyway). Employing more deep learning models would definitely make parts of the system more robust and more generalizable. Specifically, pattern recognition with deep learning would enable us to transfer the methods to new videos more easily, without the need to adapt hyperparameters. The main drawback of the conventional methods implemented here is that they require hyperparameters to be tuned with respect to the type of videos. For example, GBCV requires information about the ball's size in the frame. We therefore also considered training a CNN to recognize the ball in single frames, which would be more robust. Training a CNN might be possible because the ball's appearance is quite constant as a blurry white streak, and once trained, the model would probably be easier to apply to new videos than GBCV. Leaving aside the lack of labeled training data though, the computational effort would most probably be too high to run online on the processing units. In the future the problem might vanish as the computational power of GPUs is still rising, but it seems that currently a game reconstruction framework must rather be a mixture of conventional computer vision and modern deep learning.
Secondly, it will be challenging to achieve sufficient accuracy in all tasks with video input alone. A major step in the pipeline is to infer the player's pose in each frame, because all further analysis is much faster on the resulting joint trajectories than on videos. In particular, action recognition (first movement, pitch type) becomes feasible when the data is only a time series of 12 joints instead of high dimensional video data. However, a lot of information is of course lost in this conversion. The experiments on classifying pitching position and play outcome from joint trajectories show that the information is sufficient for some tasks, but for others such as the pitch type the accuracy is too low. For the latter, it is questionable whether classification can be achieved from videos of the movement in general. Sometimes the pitcher even tries to trick the batter and pretends to perform a different pitch type, so distinguishing the pitch type is almost impossible even for an expert. Other systems solve the task taking into account the spin rate of the ball, which will hardly be available from video data. As already mentioned, better quality of the videos and closer views could help a lot, in particular when the motion of the wrist can be observed more closely.
Third, we do not claim that the modules for which neural network models were trained could be applied to new video data straight away. The reason is that the dataset available for training was not diverse enough, most importantly comprising videos from just two viewpoints. It is obvious that projecting the video to a set of 2D coordinates makes all inference dependent on the viewpoint of the video. In the current version it is not possible to use a pre-trained model of MC-CNN on new baseball videos to infer the pitch type, as the shape of the joint trajectories changes completely with the viewpoint. This leads to another major point in which the Legotracker system must be extended: a proper system requires 3D coordinates. While two synchronized cameras can be merged to compute a 3D ball trajectory and thereby the ball speed, it is more difficult to do the same on pose estimation data during fast movement with much occlusion. Either this method of camera synchronization or a model for 3D pose estimation will be crucial to make the Legotracker applicable on a large scale. The long-term goal is thus to develop the software such that once the lego blocks and cameras are installed at predefined positions in a new stadium, all components can be executed directly, without requiring further training and tuning of parameters.
\section{Conclusion}
Baseball has already been revolutionized by statistics, but ultimately, stats should be like a third eye for a coach, even analyzing the motion of individuals in detail. The new tracking system called Legotracker is a step towards this goal, using state-of-the-art computer vision techniques to automatically recognize movement, speed and strategy from videos, aiming at full game reconstruction. Our contribution is firstly a framework to incorporate pose estimation in baseball analysis, extracting joint trajectories over time. The results of movement classification on joint trajectories with our proposed model MC-CNN can automate the logging of high-level information of the game, for example denoting strategies such as the pitching position. Furthermore, a fast moving object detector FMO-C and the classifier voting approach GBCV make it possible to reconstruct 3D ball trajectories just from videos. Finally, we achieve a higher reliability in detecting game events than previous systems, combining pose, motion and object detection.
In future work, the presented methods can not only be extended to other players in baseball, but also to other sports or action recognition in general. The sports domain is well-suited for action recognition applications, because the number of possible actions is restricted in contrast to real world problems, and a lot of data is available. On the other hand, it should be explored how the presented methods are applicable to completely different tasks. For example, the framework for movement classification might be applicable to video surveillance data. FMO-C on the other hand could be interesting for autonomous driving, where it is extremely important to recognize fast moving objects in real time.
With respect to baseball analysis, most of the proposed methods already exceed the accuracy of current systems or add information that was not provided before. We believe that the accuracy can prospectively be improved substantially with videos of higher quality. In addition, the modular approach allows components of the system to be replaced with state-of-the-art methods. For example, the performance of pose estimation models has improved significantly in recent research work. With further advances in computer vision, even extending the output domain to 3D and thereby also full game reconstruction will finally be possible, such that the experience of sports will be changed significantly.
\section{Joint tracking for classification of motion}\label{joint_tracking}
\subsection{Pose estimation}\label{roi}
Instead of inputting the full frame to the pose estimation model, we use a region of interest (ROI) computed from the person's position in the previous frame. Another option would be to employ a person detection method on top, but this would require additional computational effort. Instead, we use the last output of the pose estimation to compute the ROI for the next frame. Assuming the target person is already localized in frame $f^t$, the set of 2D coordinates of the defined eighteen body joints (here also including facial keypoints) defines the ROI for the next frame $f^{t+1}$. In detail, the padded axis-aligned bounding box (AABB) enclosing the pose estimation output of the previous frame is taken as the ROI (assuming that the person does not move much between two consecutive frames). Let $X$ be the set of x coordinates of the target player's joints detected in frame $t$, and let $Y$ be the set of y coordinates ($|X|=|Y|=18$). Then the ROI in frame $f^{t+1}$ is defined by the rectangle spanned by the points $p_{1}$ and $p_{2}$, with
\begin{align}
p_{1} = (\min(X), \min(Y))^{T} - a,\ \ \ p_{2} = (\max(X), \max(Y))^{T} + a .
\end{align}
$a \in \mathbb R ^2 $ is a vector defining the padding of the bounding box, which is necessary to account for movement from $f^{t}$ to $f^{t+1}$. Any missing value in $X$ or $Y$ is replaced with the last available coordinate of the respective joint. Otherwise, the bounding box might suddenly shrink and parts of the body would be outside the ROI.
Regarding the first frame, pose estimation is applied to the whole frame, and the position of the target player must be provided beforehand to select one of the detected persons in frame $f^{0}$. The start position of the target person is often known in baseball, for example the pitcher starts in the center of the pitcher's mound.
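A minimal Python sketch of this ROI update is given below; the padding handling and the clipping to the image boundaries are illustrative assumptions, not the exact values used in our implementation.
\begin{verbatim}
import numpy as np

def roi_from_previous_pose(joints_xy, pad, frame_shape):
    """Padded AABB around the previous pose, used as ROI for the next frame.

    joints_xy:   (18, 2) array of joint coordinates from frame t, with
                 missing joints already replaced by their last known value.
    pad:         (pad_x, pad_y) accounting for movement between frames.
    frame_shape: (height, width) of the video frames.
    """
    h, w = frame_shape
    p1 = joints_xy.min(axis=0) - np.asarray(pad)
    p2 = joints_xy.max(axis=0) + np.asarray(pad)
    # Clip the ROI to the image boundaries.
    x1, y1 = np.maximum(p1, 0).astype(int)
    x2, y2 = np.minimum(p2, [w - 1, h - 1]).astype(int)
    return x1, y1, x2, y2
\end{verbatim}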
\subsection{Player localization}\label{localization}
Even in the ROI there might still be other people detected besides the target player. Thus, in each frame the target player must be found in a list of detected people, given his position in the previous frame. Instead of using the joints directly, which might be noisy and contain outliers in some frames, we define the position of a detected person as the bounding box around his most stable joints. If $n$ people were detected in frame $f^t$, let $p_{j}^{t}$ be the vector of joint coordinates for person $j$, $j \in [1, n]$. The bounding boxes enclosing each person's joints are then defined as $B(p_{j}^{t})$. We further define the similarity $Sim_{p_{j}^{t},\ \hat{p}^{t-1}}$ of a detected person $p_{j}^{t}$ to the target person in the previous frame $\hat{p}^{t-1}$ as the intersection over union (IoU) of the bounding boxes around their joints:
\begin{align}
Sim_{p_{j}^{t},\ \hat{p}^{t-1}} = \lvert B(p_{j}^{t}) \cap B(\hat{p}^{t-1}) \rvert\ /\ \lvert B(p_{j}^{t}) \cup B(\hat{p}^{t-1}) \rvert \;
\end{align}
In other words, the overlap of each detected person with the target person in the previous frame is taken as a measure of similarity.
The new target person $\hat{p}^{t}$ is thus the one with the highest IoU with the previous target person $\hat{p}^{t-1}$, if its IoU overcomes a certain threshold $\theta_{\text{min\_IoU}}$. Otherwise, the joint coordinates are set to missing values (zeros) for this frame.
The main advantage over other approaches is that the threshold $\theta_{\text{min\_IoU}}$ is independent of resolution and camera distance, because the IoU is always between zero and one ($Sim_{p_{j}^{t},\ \hat{p}^{t-1}} \in [0,1]$). In contrast, consider for example the approach of simply selecting the person $j$ that minimizes the absolute pixel-distance $\Vert p_{j}^{t} - \hat{p}^{t-1} \Vert$ to the target in the previous frame. Then if a person is not detected at all in one frame, the second closest person would be picked up instead, or a hard threshold must be set, defining the absolute pixel distance that the joints of the target are allowed to move between frames. Our approach is more robust in general, since outliers of the joints usually do not affect the bounding box much, and in addition it allows us to set a threshold that generalizes to all kinds of videos, because it is independent of absolute pixel distances.
Furthermore, an upper bound threshold $\theta_{\text{max\_IoU}}$ can be set to account for cases where the pose estimation network mixes up two people: we set all joint values of a frame to zero (missing value) if $Sim_{p_{j}^{t},\ \hat{p}^{t-1}} > \theta_{\text{max\_IoU}}$ holds for more than one person. In the frame shown in Fig.~\ref{fig:localize_c} the bounding boxes of both persons overlap a lot with the target (assuming it was detected correctly in the previous frame), so the frame is skipped in the hope that a better distinction is possible in one of the subsequent frames.
In the final version we set $\theta_{\text{min\_IoU}}=0.1$ and $\theta_{\text{max\_IoU}}=0.5$, and only took hips, shoulders, knees and ankles into account to compute the bounding box, because these joints are the most stable ones. The output, namely a time series of joint coordinates for one target player, is imputed with simple linear interpolation and smoothed with low-pass filtering (cf. \ref{filtering_results}), yielding what we call joint trajectories.
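The following Python sketch summarizes the localization step; the stable-joint selection and the exact handling of missing detections are simplified here for illustration.
\begin{verbatim}
import numpy as np

def joint_bbox(joints_xy):
    """AABB (x1, y1, x2, y2) around the stable joints of one detected person."""
    xs, ys = joints_xy[:, 0], joints_xy[:, 1]
    return xs.min(), ys.min(), xs.max(), ys.max()

def iou(a, b):
    """Intersection over union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def localize_target(detections, prev_target, min_iou=0.1, max_iou=0.5):
    """Pick the detection matching the previous target, or None (missing frame).

    detections:  list of (num_joints, 2) arrays, one per detected person.
    prev_target: (num_joints, 2) array of the target in the previous frame.
    """
    prev_box = joint_bbox(prev_target)
    scores = [iou(joint_bbox(d), prev_box) for d in detections]
    if not scores:
        return None
    # Skip the frame if more than one person overlaps strongly with the target.
    if sum(s > max_iou for s in scores) > 1:
        return None
    best = int(np.argmax(scores))
    return detections[best] if scores[best] >= min_iou else None
\end{verbatim}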
\subsection{Movement classification with MC-CNN}\label{mccnn}
\begin{wrapfigure}{R}{0.46\textwidth}
\centering
\includegraphics[width=0.46\textwidth]{figures/conv_net.png}
\caption{Architecture of MC-CNN}
\label{fig:net_architecture}
\end{wrapfigure}
We propose a 1D CNN to classify certain motion into discrete classes. The network, in the following called MC-CNN, receives normalized joint trajectories of one player as input and outputs a vector indicating the probability for each possible class. In contrast to inference on video data, processing time is reduced significantly when feeding joint trajectories to a deep learning model. The $x$ and $y$ coordinate of each joint are treated as independent channels, such that the 1D convolutions are applied on 24 channels (12 joints $\times$ 2 coordinates), each containing the time series of one coordinate.
As depicted in Fig. \ref{fig:net_architecture}, the architecture that performed best consists of two convolutional layers, both with 128 filters and kernel sizes 5 and 9 respectively, followed by two fully connected layers. The first fully connected layer comprises 128 neurons, while the number of neurons in the second one corresponds to the number of classes, since classes are represented by one-hot encoded vectors. ReLU activation is used for non-linearity in all layers except for the last one, where a softmax function is applied. The network is trained with the Adam optimizer minimizing a cross-entropy loss with a learning rate of 0.0005. The network was trained for 2000 epochs, although convergence seems to be reached after around 200 epochs.
Furthermore, we balance the batches (of size 40) such that the number of examples per class in a batch is constant. For example, for pitch type classification with 10 pitch type classes, this corresponds to 4 samples per class in each batch. Balancing leads to a higher accuracy as the network does not overfit as much on the classes that appear most often in the data. Last, the time series data is normalized independently for each channel, such that the time series values of each coordinate of each joint have mean zero and unit variance.
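A minimal \texttt{tf.keras} sketch of the described architecture is shown below. It follows the layer sizes and training settings stated above; details such as padding, weight initialization and the batch balancing logic are omitted and may differ from the original implementation.
\begin{verbatim}
import tensorflow as tf

def build_mc_cnn(num_frames, num_classes, num_channels=24):
    """1D CNN on joint trajectories (12 joints x 2 coordinates = 24 channels)."""
    model = tf.keras.Sequential([
        tf.keras.layers.Conv1D(128, kernel_size=5, activation="relu",
                               input_shape=(num_frames, num_channels)),
        tf.keras.layers.Conv1D(128, kernel_size=9, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Example: pitch type classification on plays of 160 frames with 10 classes.
model = build_mc_cnn(num_frames=160, num_classes=10)
\end{verbatim}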
\section{Fast Moving Object Candidate detection (FMO-C)}\label{fmoc}
Inspired by the work of \citet{Rozumnyi2017}, we developed a method that detects objects of high velocity, called FMO-C. Similarly to \citet{Rozumnyi2017}, FMO-C operates on three consecutive frames, thresholding their difference images and searching for connected components. Our variations from the original approach are that 1) we allow for different speed sensitivities by taking only every $k$-th frame into account, and 2) we compensate for camera jitter.
At each time point, the input is a set of three frames in grey scale. However, these frames are not necessarily consecutive. Let $f^{1}, ..., f^{n}$ be all frames of a play. Firstly, for each frame $f^{t}$, the three possible difference images $d$ between $f^{t-k}, f^{t}$ and $f^{t+k}\ (k<t \leq n-k)$ are computed as
\begin{align}
d^{i,\ j} = \theta\left(\lvert f^{i} - f^{j}\rvert\right),
\end{align}
where $\theta(\cdot)$ denotes pixel-wise thresholding of the difference image. Thereby, $k \in \mathbb N,\ k>0$ is used to control the speed sensitivity, because selecting only every $k^{th}$ frame affects the difference images: the higher $k$, the smaller is the artificial frame rate, and the larger is the difference between frames. Thus, the higher $k$, the more motion is detected. $k$ can then be set with respect to the task. For example, $k$ should be smaller for ball detection than for the motion of the pitcher's leg, because for small $k$, only very fast motion is detected, and the ball with its high velocity is recognized easily.
The reason for taking three pictures into account is that a difference image between two frames picks up both the previous position and the new position of a moving object. With three images, the previous location can be excluded by logically combining the difference images. Formally,
\begin{align}
\mathbb{D}^t = d^{t-k,\ t} \cap d^{t,\ t+k} \cap \neg d^{t-k,\ t+k}.
\end{align}
The result is just one difference image $\mathbb{D}^t$, containing only the appearance of motion in the target frame $t$ as shown in Fig.~\ref{fig:fmo1}.
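For illustration, a minimal OpenCV/NumPy sketch of this combination is given below; the threshold value is an assumption and the frames are expected as grey-scale \texttt{uint8} arrays.
\begin{verbatim}
import cv2
import numpy as np

def motion_mask(frames_gray, t, k=2, thresh=20):
    """Combined three-frame difference image D^t for grey-scale frames."""
    d = lambda i, j: cv2.absdiff(frames_gray[i], frames_gray[j]) > thresh
    d1 = d(t - k, t)        # motion between f^{t-k} and f^t
    d2 = d(t, t + k)        # motion between f^t and f^{t+k}
    d3 = d(t - k, t + k)    # motion between f^{t-k} and f^{t+k}
    # Keep only pixels that correspond to motion in the target frame t.
    return d1 & d2 & ~d3
\end{verbatim}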
\begin{figure}[ht]
\centering
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=\textwidth]{figures/thresholded_difference.png}
\caption{Thresholded difference}
\label{fig:fmo1}
\end{subfigure}
\hfill
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=\textwidth]{figures/shakiness_removed.png}
\caption{Jitter removed}
\label{fig:fmo2}
\end{subfigure}
\hfill
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=\textwidth]{figures/connected_components.png}
\caption{Connected components}
\label{fig:fmo3}
\end{subfigure}
\caption{FMO-C: Firstly, a simple difference image is thresholded (\subref{fig:fmo1}). To account for jitter, accumulated previous movement is removed (\subref{fig:fmo2}). In the end, only connected components of a certain minimum area are selected as candidates, which are marked red in (\subref{fig:fmo3}). This leads to a set of candidates of fast moving objects for each frame.} \label{fig:fmo_detecton}
\end{figure}
Furthermore, the method should be robust to slight motions of the camera. Although the cameras are fixed, vibrations of the stadium might cause noise in the difference images. Here we can make use of the fact that an unstable camera moves randomly around one location, whereas the objects we are interested in move in one direction over $m$ consecutive frames. Thus all points in $\mathbb{D}$ that were also detected in one of the last $m$ frames can be excluded:
\begin{align}
\mathbb{F}^i = \mathbb{D}^i - \bigcup_{n\in [i-m, ..., i-1]} \mathbb{D}^n
\end{align}
Nevertheless there are still many artifacts left, especially many single pixels. Therefore, as in \cite{Rozumnyi2017}, a threshold $\theta_{\text{conn}}$ is introduced, defining the minimum area that a moving object must cover. $\theta_{\text{conn}}$ can be set dependent on image resolution, camera distance and optical zoom. In our implementation we apply the OpenCV function \texttt{connectedComponentsWithStats} on $\mathbb{F}^i$ to compute the AABB and area of each moving object, and filter out all components with fewer pixels than $\theta_{\text{conn}}$ (see Fig. \ref{fig:fmo3}). The output is a set of patches (``motion candidates'') as in Fig.~\ref{fig:fmo3}, each covering a certain minimum area.
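The jitter compensation and the connected-component filtering can be sketched as follows; the minimum area is a placeholder value and the masks are assumed to be boolean arrays as produced by the difference-image step above.
\begin{verbatim}
import cv2
import numpy as np

def motion_candidates(D_t, previous_masks, min_area=10):
    """Remove jitter and return AABBs of sufficiently large moving components.

    D_t:            boolean motion mask of the current frame.
    previous_masks: list of the motion masks of the last m frames.
    """
    if previous_masks:
        # Jitter compensation: drop pixels that were already active recently.
        F_t = D_t & ~np.any(np.stack(previous_masks), axis=0)
    else:
        F_t = D_t
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(
        F_t.astype(np.uint8), connectivity=8)
    candidates = []
    for i in range(1, n):                       # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            x, y, w, h = stats[i, :4]           # AABB of the component
            candidates.append((x, y, w, h, tuple(centroids[i])))
    return candidates
\end{verbatim}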
\section{Event detection}
\subsection{Batter's first step}\label{batter_first_methods}
Given the video of a play and the set of joint trajectories of the batter, our goal is to output the index of the frame in which the batter takes the first step. The notion of a first step is not clearly defined, as it is sometimes hard to distinguish between the end of the swing and the first step. Because of this, FMO-C is not applicable: the motion candidates during the swing do not differ from the ones appearing when the batter starts to run towards first base. Furthermore, training an ANN on images or joint trajectories is not possible straight away, because no ground truth labels are available so far. Thus, rather basic methods are applied to the joint trajectories in the first place, namely gradient-based methods. Thresholding the gradient of the $x$ coordinates is most informative because the batter moves to one side when starting to run. For most of the videos, reasonable results were achieved: manually inspecting the outputs, the result seemed to deviate from the ground truth moment only by around four frames. However, for the videos in which the gradient did not exceed the threshold, the output was either a far outlier or missing entirely. A slight improvement was achieved by iteratively lowering the threshold until a frame is found, but the results are still highly dependent on the video material. To avoid such a hard threshold, and to make the method generalize better, we used the outputs of the gradient approach as training data for an ANN. Firstly, we manually corrected the videos that were mislabeled by the gradient approach. For the input we only selected the frames plausible as a first step, assuming that the ball release frame is known. Specifically, we only input a window of 40 frames to the LSTM, starting 10 frames after the (estimated) release frame $f^{r}$. We chose this window of $[r+10, r+50]$ because we observed that the first step occurs on average 30 frames after ball release, with a variance of around 5 frames. Spanning the window by twenty frames in each direction accounts sufficiently for errors in the measurement of the release frame, or for outliers of the first step frame index.
This yields a dataset of joint trajectories of 40 frames length each, annotated with the first-step frame index. Furthermore, we artificially augmented the data by shifting the frames in time: the window of 40 frames was randomly placed $k$ times for each data point, such that the first step frame was uniformly distributed between 0 and 40. Finally, we flipped the $x$ coordinate for each data point (doubling the amount of data), such that a left-to-right movement is turned into a right-to-left movement. On the resulting dataset, best performance is achieved by an LSTM of four cells with 128 hidden units each, followed by one fully connected layer. The output is a number $y \in [0, 1]$ that can be transformed in the following way to yield the frame index $s$ of the frame depicting the first step:
\begin{align}
s = 40 y + r + 10\ .
\end{align}
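One way to realize the described network in \texttt{tf.keras} is sketched below; the loss function and optimizer settings are assumptions, as are the exact input dimensions.
\begin{verbatim}
import tensorflow as tf

def build_first_step_lstm(window=40, num_channels=24):
    """Stacked LSTM regressing the first-step position within the input window."""
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(128, return_sequences=True,
                             input_shape=(window, num_channels)),
        tf.keras.layers.LSTM(128, return_sequences=True),
        tf.keras.layers.LSTM(128, return_sequences=True),
        tf.keras.layers.LSTM(128),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # y in [0, 1]
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

def to_frame_index(y, release_frame):
    """Map the normalized output back to an absolute frame index s."""
    return int(round(40 * float(y) + release_frame + 10))
\end{verbatim}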
\subsection{Raise of the batter's leg}
The other relevant part of the batter's movement is the moment he lifts his front leg and puts it back on the ground. The lifting of the leg initiates the swing and thus usually occurs slightly before ball release. Similarly to the first step, the time frame for this event is very restricted when the time points of other events are known. Therefore it is sufficient to determine the leg-raise frame in a window of frames, for example from twenty frames before ball release $r$ to ten frames before the first step $s$. In this period we define the leg-raise frame simply as the frame where the batter's ankle and knee joints are highest:
\begin{align}
l = \argmin{t \in [r-20,\ s-10]}y^{t},
\end{align}
where $y^{t}$ is the mean $y$ coordinate of both ankles and knees at frame $f^t$. Minimum instead of maximum is taken because $y$ is zero at points on top of the frame. In addition to finding frame index $l$ where the leg is highest, it is relevant for analysis purposes to infer an event slightly later, when the foot is put back to the ground. In order to find this moment, firstly a reference point for the foot position on the ground is required. The baseline position $m$ of the leg, i.e. the average position before lifting it, can be computed as:
\begin{align}
m = \dfrac{1}{l-10} \sum\limits_{n=1}^{l-10} y^{n}
\end{align}
The frame $g$ when the foot is put back on the ground is then the frame, among the $w$ frames following $f^l$, where the leg position is closest to $m$:
\begin{align}
g = \argmin{t \in [l,\ l+w]} \lvert y^{t}-m \rvert
\end{align}
The range $w$ depends on the frame rate, but usually around 10--20 frames should be sufficient because it does not take longer to set the foot back down.
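A compact NumPy sketch of these two computations is given below; the window length and the exact handling of the baseline are illustrative choices.
\begin{verbatim}
import numpy as np

def leg_events(leg_y, release_frame, first_step, window=15):
    """Leg-raise frame l and foot-down frame g from the mean leg height.

    leg_y: 1D array with the mean y coordinate of both ankles and knees per
           frame (y grows downwards, so a raised leg has a smaller value).
    """
    lo, hi = release_frame - 20, first_step - 10
    l = lo + int(np.argmin(leg_y[lo:hi]))          # highest leg position
    m = float(np.mean(leg_y[: max(l - 10, 1)]))    # baseline before lifting
    seg = leg_y[l : l + window]
    g = l + int(np.argmin(np.abs(seg - m)))        # foot back on the ground
    return l, g
\end{verbatim}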
\subsection{Pitcher's first movement}\label{first_move}
The moment the pitcher starts to move can be seen as the start of a play, and it is often taken as the reference time for the computation of several statistics. In order to capture the full movement of the pitcher, we define the ``pitcher's first movement'' as the moment the leg is raised. To find the corresponding frame, we combine candidates of FMO-C with pose estimation. The idea is that when the pitcher starts to move, a motion detector should find candidates at the pitcher's leg in several consecutive frames.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{figures/first_move_candidates.png}
\caption{An example sequence of frames with motion candidates close to the leg is shown. In frame 56, the pitcher moves slightly but the frame is isolated, while frame 62 would be labeled as the pitcher's first movement in case all requirements are fulfilled.}
\label{fig:first_move_candidates}
\end{figure}
We use the set of left and right ankle coordinates $a_{l}, a_{r} \in A^{i} $ and knees $k_{l}, k_{r} \in K^{i} $ at each frame $f^{i}$, evaluating their distance to the motion candidates $c \in C^{i}$ detected in $f^{i}$, where $\forall (v \in A \cup K \cup C) : \vec{v} \in \mathbb R ^2 $. For a single frame, we say that it is likely to be part of the first-movement frame sequence if a motion candidate $c$ is close to the ankles or knees, whereby ``closeness'' is defined as a fraction of the distance between ankles and knees. Formally, the condition can be written as
\begin{equation}
\exists \vec{u} \in \{a_{l}, a_{r}, k_{l}, k_{r}\}\ \exists \vec{c} \in C^{i} : \| \vec{u} - \vec{c} \| < \dfrac{1}{2} b \sum_{j \in \{l,r\}} \| a_{j} - k_{j} \|.
\label{equation_pitcherfirst}
\end{equation}
The right side of the condition in \autoref{equation_pitcherfirst} defines the required closeness to the ankles/knees. As mentioned above, the radius itself is defined by the distance between ankles and knees in order to construct a threshold that is independent of video resolution and distance from the player. The radius is scaled by a parameter $b$ that can be set based on the video data quality and the accuracy of the pose estimation. Here, for our low quality videos recorded from a larger distance, pose estimation is quite inaccurate, so we set $b=1$ such that the radius is simply the mean distance between knees and ankles.
The first-movement frame is then the beginning of a set of frames for which \autoref{equation_pitcherfirst} holds, where the set of frames is restricted in two ways: The sequence containing this set must comprise at least $\theta_{min\_length}$ frames, and the first and the last frame (where the condition is fulfilled) must be less than $\theta_{max\_apart}$ apart. The first threshold ensures that a minor leg motion long before the actual first movement is not picked up, while $\theta_{max\_apart}$ makes sure that the real first movement is detected even if there are gaps (no detection) of less than $\theta_{max\_apart} - \theta_{min\_length}$ frames inbetween. In the example depicted in Fig.~\ref{fig:first_move_candidates} one can see that in frame 59 some motion is detected already, but only from 62 onwards the movement really starts. In our experiments, we set $\theta_{\text{min\_length}}$ to 5 and $\theta_{\text{max\_apart}}$ to 10, so since $65-59<10$, the criteria are fulfilled and the method would output frame 59 as the first-movement frame.
In the final step we refine this output to achieve an even more consistent definition of the first movement. For this purpose, the curvature of the joint trajectory can be taken into account. In detail, ``refining'' refers to taking the output of the algorithm explained above and selecting a more stable point from a window of frames around the previously predicted frame. A simple approach is selecting the highest position of the leg (mean of ankles and knees) in a certain range $p$ around the previously predicted frame index $n$, formally
\begin{align}
h = \argmin{t \in [n-p,\ n+p]}\ \frac{1}{4}\sum_{v \in A^{t} \cup K^{t}} v_2\ \ \ \ \ \ (v_2\ :=\ \text{y coordinate of joint } v).
\label{equationRange}
\end{align}
Consequently, $f^{h}$ is the moment the leg is highest, which is a sharper definition of the first movement and quantifies it more precisely.
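The following sketch combines the detection condition and the refinement step in Python; the array shapes, the run/gap logic and the refinement range are simplifications of the procedure described above.
\begin{verbatim}
import numpy as np

def first_movement(ankles, knees, candidates, b=1.0,
                   min_length=5, max_apart=10, refine_range=5):
    """Frame index of the pitcher's first movement.

    ankles, knees: arrays of shape (T, 2, 2), left/right joint per frame.
    candidates:    list of length T with candidate centroids (x, y) per frame.
    """
    T = len(candidates)
    hit = np.zeros(T, dtype=bool)
    for t in range(T):
        radius = b * np.mean(np.linalg.norm(ankles[t] - knees[t], axis=1))
        joints = np.vstack([ankles[t], knees[t]])
        for c in candidates[t]:
            if np.min(np.linalg.norm(joints - np.asarray(c), axis=1)) < radius:
                hit[t] = True
                break
    frames = np.flatnonzero(hit)
    # Look for a run of detections satisfying the length and gap criteria.
    for i in range(len(frames) - min_length + 1):
        if frames[i + min_length - 1] - frames[i] < max_apart:
            n = frames[i]
            # Refinement: highest leg position around the detected frame.
            lo, hi = max(n - refine_range, 0), min(n + refine_range + 1, T)
            leg_y = [np.mean(np.vstack([ankles[t], knees[t]])[:, 1])
                     for t in range(lo, hi)]
            return lo + int(np.argmin(leg_y))
    return None
\end{verbatim}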
\section{Object detection}\label{object_detecion_methods}
\subsection{Ball detection}\label{gbcv}
The main challenge in ball detection is distinguishing it from other moving objects with similar appearance. Most of all, the hand of the pitcher becomes almost as blurry and greyish as the ball itself at the moment of release. As a possible solution, we propose a graph based voting of weak classifiers considering several features of the ball trajectory.
The algorithm explained in the following operates on the output of FMO-C (which is a set of motion candidates per frame). Firstly, each candidate is represented by a node in a directed acyclic graph. Specifically, the graph is a tree where each level in the tree corresponds to a frame. Let $n^{t}_{j},\ j\in[1..n]$ be the j-th motion candidate detected in frame $f^t$. Then $n^{t}_{j}$ is a child of some node $n^{t-1}_{k}$ of the level above (the frame before) iff the corresponding candidates are more than $\theta_{dist}$ pixels apart. This is based on the assumption that the ball travels with a certain minimum speed. Many candidates can be excluded straight away if their speed is too low, and thus computational effort can be reduced. The threshold $\theta_{dist}$ can be deduced from the minimum speed of the ball in a pitch, the frame rate, the distance from the camera and the resolution. For the experiments here it was set to 10 pixels. Consequently, a node has no children if there was no candidate detected in the next frame or all candidates were too close.
In the resulting tree, each path of length $\geq 3$ is a possible ball trajectory. To distinguish the ball from the large set of other paths in the graph, we define a confidence value $C$ as a combination of several attributes of the ball trajectory. A possible choice for such attributes are the slopes and distances between consecutive motion candidates, because they stay approximately constant only if the candidates correspond to the ball, assuming a high frame rate and a relatively high speed. In other words, we take three consecutive frames, compute slope and distance between the candidates of the first and the second one, and the same for the second and third one, and measure how similar they are. Formally, let $s(i,j)$ be the slope of two connected nodes (candidates) $i$ and $j$, and $d(i,j)$ the distance between them. Then a triple of three connected nodes, i.e. a node $n^{t-2}$ with a child candidate detected in $f^{t-1}$ and a grandchild in $f^{t}$, is classified as a ball if the confidence $C$ is sufficiently high. $C$ combines and weights the defined attributes, and only if $C$ exceeds a threshold $\theta_{\text{confidence}}$, the triple of motion candidates is recognized as a ball.
In our implementation, the confidence value C is defined as:
\begin{equation}
\begin{split}
C([n^{t-2},\ n^{t-1},\ n^{t}]) =\ & a_{1}\ S_{\text{slopes}}(s(n^{t-2},\ n^{t-1}),\ s(n^{t-1},\ n^{t}))\ +\\
& a_{2}\ S_{\text{distances}}(d(n^{t-2},\ n^{t-1}),\ d(n^{t-1},\ n^{t})).
\end{split}
\end{equation}
$S_{\text{slopes}}$ is a measure for the similarity of two slopes, and $S_{\text{distances}}$ a measure to compare two distances. In addition, the attributes can be weighted with a vector $\vec{a}$.
To construct the similarity measures $S$, some requirements should be fulfilled: Ideally, the two similarity measures should be comparable, such that they take on the same range of values. Secondly, the confidence value and thus the similarity measures should be independent of the data properties (e.g. resolution and distance of the camera). To account for both, we define the slope $s$ as a normalized complex vector, because if the distances in x and y direction were simply divided, the slope for a vector $\vec{v}$ would be the same as the slope for the vector in the opposite direction $-\vec{v}$. This case must be considered because FMO-C might for example first detect motion at the leg, then the arm in the next frame and then again the leg. Both the distances and the slopes between this triple of nodes in the graph would be the same, but defining the slope as a complex vector, their values differ. The difference between two slopes is thus the distance of two normalized complex vectors. The similarity is then inversely related to the distance of the respective complex vectors of each slope:
\begin{align}
S_{\text{slopes}}(s_{1}, s_{2}) = 1\ -\ (\frac{1}{2}\ \Vert s_{1}\ -\ s_{2}\Vert)
\end{align}
The formula for $S_{\text{slopes}}$ is thereby constructed to yield one for equal slopes, and zero for vectors in the opposite direction.
A similarly standardized value should define the similarity of distances. Furthermore, we want to ensure that only the relative distance is considered (independent of pixel values), so instead the ratio of distances is considered:
\begin{align}
S_{\text{distances}}(d_{1}, d_{2}) = \min (\frac{d_{1}}{d_{2}},\frac{d_{2}}{d_{1}})
\end{align}
The minimum of both ratios of distances again yields an output between 0 and 1, and also makes $S_{\text{distances}}$ symmetric. To sum up, the confidence value is defined in a way that different measures of similarity can be combined flexibly and the ranges of output values are similar. Depending on the video material, other attributes can be incorporated, for example the area of the bounding box enclosing a candidate or the average colour of the image patch of a candidate, which is supposedly white or greyish. Also, similarly to the threshold for the pitcher's first movement, a minimum sequence length can be set to avoid false positives, such that three nodes are not sufficient. An example of a triple of FMO candidates that is recognized as a ball by $C$ is shown in Fig.~\ref{fig:trajectory1}.
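For illustration, the two similarity measures and the resulting confidence of one triple can be computed as follows; the equal weighting is an example choice.
\begin{verbatim}
import numpy as np

def slope(p, q):
    """Direction between two candidates as a normalized complex number."""
    v = complex(q[0] - p[0], q[1] - p[1])
    return v / abs(v)

def s_slopes(s1, s2):
    return 1.0 - 0.5 * abs(s1 - s2)      # 1 for equal, 0 for opposite slopes

def s_distances(d1, d2):
    return min(d1 / d2, d2 / d1)         # in (0, 1], symmetric

def confidence(p0, p1, p2, a=(0.5, 0.5)):
    """Confidence that three connected candidates form a ball trajectory."""
    d1 = np.linalg.norm(np.subtract(p1, p0))
    d2 = np.linalg.norm(np.subtract(p2, p1))
    return (a[0] * s_slopes(slope(p0, p1), slope(p1, p2))
            + a[1] * s_distances(d1, d2))

# A straight, evenly spaced triple yields a confidence close to 1.
print(confidence((0, 0), (10, 2), (20, 4)))
\end{verbatim}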
Gaps can occur if the ball is not detected by FMO-C in one or more frames. If later again three ball candidates are found, the average slopes and distances of the two separate trajectories can be compared in the same fashion as before, and the trajectories are merged if $C$ is sufficiently high. This is illustrated in Fig.~\ref{fig:trajectory2}.
\begin{figure}[ht]
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/ball_full_trajectory.png}
\caption{\label{fig:trajectory1}}
\end{subfigure}
\hfill
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/ball_detected.png}
\caption{\label{fig:trajectory2}}
\end{subfigure}
\caption[Self-created figure, with videos from the public MLBAM database at \url{http://ze-video.mlb.com/video/mlbam/2016/10/01/umpeval/video/449253/} (accessed 13.05.18)]{In (\subref{fig:trajectory1}) the ball is detected, i.e. the slopes and distances of the three ball candidates were sufficiently similar. Then in one frame the ball is not detected, but from the next frame onwards GBCV registers a new triple corresponding to a ball trajectory. By comparing slopes and distances of the new triple to the previous trajectory, the trajectories can be merged. The ball is tracked until it reaches the batter (\subref{fig:trajectory2}).}
\label{fig:ball_trajectory}
\end{figure}
\subsection{Bat and glove AABB}\label{bat}
A successful approach for locating objects in images is Faster R-CNN \cite{Ren}, which can be trained on the COCO data set to recognize bat and glove. Testing a pre-trained model on our videos, we observed that glove detection is sufficient for our purposes, but the bat was often not detected in the crucial moment of the swing itself. This might be due to the fact that images of blurred bats are hardly represented in the databases. We therefore propose a combination of Faster R-CNN and the FMO-C approach. Once the bat starts to move and is not detected anymore by Faster R-CNN, FMO-C takes over. Formally, from the set of candidates $C^{t}$ in frame $f^t$, the baseball bat can be found by simply taking the detection with the shortest Euclidean distance to the previous bat detection $\beta^{t-1}$. To avoid unreliable candidates that are too far away from the previous detection, a threshold $\theta_{\text{max\_dist}}$ is used:
\begin{align}
\beta^{t} =
\begin{cases}
\argmin{c_{k} \in C^{t}}\ \|(c_{k}-\beta^{t-1})\|,& \text{if } \underset{c_{k} \in C^{t}}\min\ \|(c_{k}-\beta^{t-1})\|\ <\ \theta_{max\_dist}\\
\text{missing}, & \text{otherwise}
\end{cases}
\end{align}
So if $\beta^{t-1}$ is given (detected either by the Faster R-CNN or by FMO-C), each motion candidate in frame $t$ is compared to $\beta^{t-1}$ and set as the new bat position if it is sufficiently close. In Fig.~\ref{fig:fmo_bat}, outputs of FMO-C detection are shown. Both Faster R-CNN and FMO-C yield the AABB around the bat.
However, the orientation of the bat in this bounding box is necessary for a more detailed description of the bat trajectory and for speed estimation. Specifically, the aim is to recover the positions of tip and base of the bat separately. This can be achieved by taking into account the wrist coordinates available from pose estimation, assuming that the corner of the AABB which is closest to the wrists is the base of the bat, and the diagonally opposite corner is the tip. In Fig.~\ref{fig:wrist_tip_bat} the wrist is coloured green, leading to the location of tip and base of the bat (blue).
Finally, with this combination of the Faster R-CNN, FMO detection and pose estimation, the 2D trajectory for tip and base of the bat can be estimated for the full length of the swing.
Further research needs to be done to turn this into a 3D trajectory to compute speed.
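A short sketch of the tracking and the tip/base assignment is given below; the distance threshold is a placeholder and candidates are assumed to be AABBs as returned by the FMO-C step.
\begin{verbatim}
import numpy as np

def track_bat(prev_bat, candidates, max_dist=80):
    """Select the motion candidate closest to the previous bat position."""
    if not candidates:
        return None
    centers = np.array([[x + w / 2.0, y + h / 2.0]
                        for (x, y, w, h) in candidates])
    dists = np.linalg.norm(centers - np.asarray(prev_bat), axis=1)
    best = int(np.argmin(dists))
    return candidates[best] if dists[best] < max_dist else None

def tip_and_base(bat_box, wrist):
    """Assign tip and base: the AABB corner closest to the wrist is the base."""
    x, y, w, h = bat_box
    corners = np.array([[x, y], [x + w, y], [x, y + h], [x + w, y + h]])
    base_idx = int(np.argmin(np.linalg.norm(corners - np.asarray(wrist), axis=1)))
    return corners[3 - base_idx], corners[base_idx]   # (tip, base)
\end{verbatim}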
\begin{figure}[ht]
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[height = 3cm]{figures/fmo_bat.png}
\caption{The FMO-C algorithm outputs a bounding box for each motion candidate.} \label{fig:fmo_bat}
\end{subfigure}
\hspace*{0.1\textwidth}
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[height = 3cm]{figures/wrist_tip_bat.png}
\caption{The wrist position coloured green is used to differentiate between tip and base of the bat.} \label{fig:wrist_tip_bat}
\end{subfigure}
\caption{Bat detection during the swing: First, an object detection method is employed to detect the bat as a reference point. Then, FMO detection is applied during the swing (see \subref{fig:fmo_bat}), and the candidate closest to the previous bat detection is selected. Finally the position of the wrist enables us to distinguish tip and base of the bat (\subref{fig:wrist_tip_bat}).}\label{fig:swing}
\end{figure}
"attr-fineweb-edu": 1.549805,
"attr-cc_en_topic": 0,
"domain": "arxiv"
} |
BkiUbgU5qWTD6heGqvWd | \section{Introduction}
A group of Greek tourists is vacationing on the island of Lipari and they find
out that the latest release of their favourite playwright is playing
at the local theatre (see Figure~\ref{lipari-fig}), {\em Ecclesiazusae} by Aristophanes, a big winner at last year's (391 BC)
Festival of Dionysus. Seating at the theatre is open (i.e., the seats are chosen by the audience members as
they enter). The question arises as to whether they will be able to find seats. As it turns out
this depends upon just how courteous the other theatregoers are that night.
Consider a theatre with $m$ rows containing $n$ seats each.
Theatregoers enter the theatre along aisles, choose a row, and enter it from one of its ends, wishing
to occupy a seat. They
select their seat in the row uniformly and
independently at random among the empty ones.
The rows of seats are narrow and
if an already sitting theatregoer is not willing to get up
then s(he) blocks passage to the selected seat and the incoming theatregoer
is forced to select a seat among unoccupied seats
between the row entrance and the theatregoer who refuses to budge.
Thus, the selection and overall occupancy of seats depends on
the courtesy of sitting theatregoers, i.e., their
willingness to get up so as to create free space
that will allow other theatregoers go by.
An impolite theatregoer, i.e., one that never gets up from
a position s(he) already occupies, is referred to as a {\em selfish} theatregoer.
Polite theatregoers (those that will get up to let someone pass) are
referred to as {\em courteous}.
On a given evening we expect some fraction of the audience to be selfish
and the remainder to be courteous. We say a set of theatregoers is
{\em $p$-courteous}
if each individual in the set is courteous with probability $p$ and
selfish with probability $1-p$.
We assume that the status of a
theatregoer (i.e., selfish or courteous)
is independent of the other theatregoers and it
remains the same throughout the
occupancy of the row. Furthermore, theatregoers select a vacant seat uniformly
at random. They enter a row from one end
and inquire (``Excuse me''), if necessary, whether an already
sitting theatregoer is courteous enough to let him/her go by and occupy
the seat selected. If a selfish theatregoer is encountered, a seat is selected
at random among the available unoccupied ones, should any exist.
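Before analyzing this process, it may help to see it spelled out operationally. The following Python snippet (ours, purely illustrative) simulates the single-entrance version of the process just described and estimates the expected occupancy by Monte Carlo; it picks directly among the still-reachable empty seats, which is equivalent to the re-selection rule above, and can serve as a sanity check for the closed-form results derived in the following sections.
\begin{verbatim}
import random, statistics
# Monte Carlo sketch of the single-entrance row process described above.
# Seats are numbered 1..n from the entrance; each arriving theatregoer is
# courteous with probability p, and a selfish occupant blocks every seat
# further from the entrance than his/her own.
def occupy_row(n, p, rng):
    courteous = {}                      # occupied seat -> courteous flag
    while True:
        block = min((s for s, c in courteous.items() if not c), default=n + 1)
        reachable = [s for s in range(1, n + 1)
                     if s not in courteous and s < block]
        if not reachable:               # all remaining empty seats are blocked
            return len(courteous)
        seat = rng.choice(reachable)    # uniform over still-reachable seats
        courteous[seat] = rng.random() < p
def expected_occupancy(n, p, trials=20000, seed=0):
    rng = random.Random(seed)
    return statistics.mean(occupy_row(n, p, rng) for _ in range(trials))
\end{verbatim}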
We are interested in the following question:
\begin{quote}
What is the expected number of seats
occupied by theatregoers when all
new seats are blocked,
as a function of the total
number of seats and the theatregoers' probability $p$ of being courteous?
\end{quote}
We first study the problem on
a single row with either one entrance or two.
For the case $p=1$ it is easy to see that the row will be fully occupied when
the process finishes. We show that for $p=0$ (i.e., all theatregoers are selfish)
the expected number of occupied seats is only $2 \ln n + O(1)$
for a row with
two entrances. Surprisingly, for any fixed $p<1$ we show that this is only
improved by essentially a constant factor of $\frac{1}{1-p}$.
Some may
argue that the assumption of choosing seats uniformly at random is somewhat
unrealistic. People choose their seats for a number of reasons (sight lines, privacy, etc.)
which may result in a nonuniform occupancy pattern. A natural tendency would be to
choose seats closer to the centre of the theatre to achieve better viewing. We
attempt to model this with seat choices made via the geometric distribution
with a strong bias towards the centre seat for the central section of the theatre
and for the aisle seat for sections on the sides of the theatre. The results
here are more extreme, in that for $p$ constant, we expect only a constant number
of seats to be occupied when there is a bias towards the entrance of a row while we expect
at least half the row to be filled when the bias is away from the entrance.
In a further
attempt to make the model more realistic we consider the Zipf distribution on the seat choices, as this
distribution often arises when considering the cumulative decisions of a
group of humans (though not necessarily Greeks)\cite{zipf}. We show that under this distribution
when theatregoers
are biased towards the entrance to a row, the number of occupied seats is
$\Theta(\ln \ln n)$ while if the bias is towards the centre of the row the number
is $\Theta(\ln^2 n)$.
If we assume that theatregoers proceed to another row if their initial choice
is blocked it is easy to
use our results for single rows with one and two entrances to derive bounds
on the total number of seats occupied in a theatre with multiple rows and aisles.
\subsection{Related work}
Motivation for seating arrangement problems
comes from polymer chemistry and statistical physics in
\cite{flory1939intramolecular,olson1978markov} (see also \cite{strogatz2012joy}[Chapter 19]
for a related discussion).
In particular, the number and size of random independent sets
on grids (and other graphs) is of great interest in
statistical physics for analyzing {\it hard} particles in lattices
satisfying the exclusion rule, i.e., if a vertex of a lattice is
occupied by a particle its neighbors must be vacant, and
have been studied
extensively both in statistical physics and combinatorics
\cite{baxter,hard1,hard2,calkin-wilf,finch}.
Related to this is the ``unfriendly seating'' arrangement problem
which
was posed by
Freedman and Shepp \cite{freedman}:
Assume there are $n$ seats in a row at a
luncheonette and people sit down one at a time at random.
Given that
they are unfriendly and never sit next to one another,
what is the expected number
of persons to sit down, assuming no moving is allowed?
The resulting density has been studied in
\cite{freedman,friedman,mackenzie} for a $1 \times n$ lattice and
in \cite{georgiou2009random} for the $2\times n$ and
other lattices.
See also \cite{kk} for a related application to privacy.
Another related problem considers the following natural
process for generating a maximal independent set of a
graph~\cite{mitzenmacher}.
Randomly choose a node and place it in the independent set.
Remove the node and all its neighbors
from the graph. Repeat this process until no nodes remain.
It is of interest to analyze the expected size of the
resulting maximal independent set. For investigations on a similar
process for generating maximal matchings
the reader is referred to \cite{aronson1995randomized,dyer1991randomized}.
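For illustration, the random process just described can be written down in a few lines of Python (a sketch under the assumption that the graph is given as a dictionary mapping each node to the set of its neighbours):
\begin{verbatim}
import random
def random_greedy_mis(graph, rng=random):
    # repeatedly pick a remaining node at random, add it to the independent
    # set, and remove it together with all of its neighbours
    remaining = set(graph)
    independent = set()
    while remaining:
        v = rng.choice(sorted(remaining))   # nodes assumed sortable
        independent.add(v)
        remaining -= {v} | graph[v]
    return independent
\end{verbatim}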
\subsection{Outline and results of the paper}
We consider the above problem for the case of
a row that has one entrance and the case with two entrances. We
develop closed form formulas, or almost tight bounds up to multiplicative constants, for the expected
number of occupied seats in a row for any given $n$ and $p$.
First we study
the simpler problem
for selfish theatregoers, i.e., $p=1$,
in Section~\ref{selfish:sec}. In
Section~\ref{courteous:sec}, we consider $p$-courteous theatregoers.
In these sections, the placement of theatregoers obeys the
uniform distribution.
Section~\ref{geo:sec} considers what happens with $p$-courteous theatregoers under the
geometric distribution.
In Section~\ref{zipf:sec} we look at theatregoers whose
placement obeys the Zipf distribution.
And in Section~\ref{theater:sec} we show how the previous results may be extended
to theater arrangements with multiple rows and aisles.
Finally, in Section~\ref{other:sec} we conclude by proposing several open problems
and directions for further research. Details of any missing proofs
can be found in the Appendix.
\section{Selfish Theatregoers}
\label{selfish:sec}
In this section we consider the occupancy problem for a row of seats
arranged next to each other in a line.
First we consider theater occupancy with
selfish theatregoers in that a theatregoer occupying a
seat never gets up to allow another theatregoer to go by.
We consider two types of rows, either
open on one side or open on both sides.
Although the results presented here are easily derived from those
in Section~\ref{courteous:sec} for the $p$-courteous case,
our purpose here is to introduce the methodology in
a rather simple theatregoer model.
Consider an arrangement of $n$ seats in a row
(depicted in Figure~\ref{fig:th1} as squares).
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=8cm]{th1.pdf}
\end{center}
\caption{An arrangement of seats; theatregoers may enter only from the left
and the numbering of the seats is $1$ to $n$ from left to right.}
\label{fig:th1}
\end{figure}
Theatregoers enter in sequence one after
the other and may enter the arrangement only from the left.
A theatregoer occupies a seat at random with the uniform distribution
and if selfish (s)he
blocks passage
to her/his right. What is the expected number
of occupied seats?
\begin{theorem}[Row with only one entrance]
\label{thm1}
The expected number of occupied seats by selfish theatregoers
in an arrangement of $n$ seats
in a row with single entrance is equal to $H_n$, the $n$th harmonic number.
\end{theorem}
\begin{proof} ({\bf Theorem~\ref{thm1}})
Let $E_n$ be the expected number of theatregoers occupying seats
in a row of $n$ seats.
Observe that $E_0=0, E_1 =1$ and that the following recurrence is valid
for all $n \geq 1$.
\begin{eqnarray}
E_n &=& \label{maineq1}
1 + \frac{1}{n} \sum_{k=1}^{n} E_{k-1}
= 1 + \frac{1}{n} \sum_{k=1}^{n-1} E_k.
\end{eqnarray}
The explanation for this equation is as follows. A theatregoer
may occupy any one of the seats from $1$ to $n$. If it
occupies seat number $k$ then seats numbered $k+1$ to $n$
are blocked while only seats numbered $1$ to $k-1$ may be
occupied by new theatregoers.
It is not difficult to solve this recurrence. Write down
both recurrences for $E_n$ and $E_{n-1}$.
\begin{equation*}
nE_n
=
n + \sum_{k=1}^{n-1} E_k
\mbox{ and }
(n-1)E_{n-1}
= n-1 + \sum_{k=1}^{n-2} E_k.
\end{equation*}
Subtracting these two identities we see that
$nE_n - (n-1) E_{n-1} = 1 + E_{n-1}$.
Therefore $E_n = \frac{1}{n} + E_{n-1}$.
This proves Theorem~\ref{thm1}.
\hfill\rule{2mm}{2mm}
\end{proof}
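As a quick numerical sanity check (ours, using exact rational arithmetic), the recurrence~\eqref{maineq1} indeed reproduces the harmonic numbers:
\begin{verbatim}
from fractions import Fraction
def selfish_one_entrance(n_max):
    E = [Fraction(0)]                       # E_0 = 0
    for n in range(1, n_max + 1):
        E.append(1 + Fraction(sum(E), n))   # E_n = 1 + (1/n) sum_{k<n} E_k
    return E
E = selfish_one_entrance(10)
H = [Fraction(0)]
for n in range(1, 11):
    H.append(H[-1] + Fraction(1, n))        # harmonic numbers
assert E == H
\end{verbatim}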
Now consider an arrangement of $n$ seats
(depicted in Figure~\ref{fig:th2})
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=10cm]{th2.pdf}
\end{center}
\caption{An arrangement of $n$ seats; theatregoers may enter either from the right
or from the left.}
\label{fig:th2}
\end{figure}
with two entrances
such that
theatregoers may enter only from either right or left.
In what follows, we invoke several times the approximate size of
the harmonic number $H_n$
which can be expressed as follows
$$
H_n = \ln n + \gamma + \frac{1}{2n} + o\left(\frac{1}{n}\right),
$$
where $\gamma$ is Euler's constant~\cite{knuth}.
\begin{theorem}[Row with two entrances]
\label{thm1two}
The expected number of occupied seats by selfish theatregoers
in an arrangement of $n$ seats
in a row with two entrances is $2 \ln n$,
asymptotically in $n$.
\end{theorem}
\begin{proof}({\bf Theorem~\ref{thm1two}})
Let $F_n$ be the expected number of occupied seats
in a line with two entrances and $n$ seats.
Further, let $E_n$ be the expected number of theatregoers occupying seats
in a line with a single entrance and $n$ seats, which is
the function defined
in the proof Theorem~\ref{thm1}.
Observe that
\begin{eqnarray}
F_n &=& \label{maineq2}
1 + \frac{1}{n} \sum_{k=1}^n (E_{k-1} + E_{n-k})
\end{eqnarray}
The explanation for this is as follows.
The first theatregoer may occupy any position
$k$ in the row of $n$ seats.
Being selfish, entry is possible only
from one side of the row, i.e., the next seat that
can be occupied is numbered either
from $1$ to $k-1$ or from $k+1$ to $n$.
It follows from Theorem~\ref{thm1} and using the standard approximation
for the harmonic number (see~\cite{knuth}) that
\begin{eqnarray*}
F_n &=&
1 + \frac{1}{n} \sum_{k=1}^n (H_{k-1} + H_{n-k})
=
1 + \frac{2}{n} \sum_{k=1}^n H_{k-1}
=
2 \ln n + O(1),
\end{eqnarray*}
which proves Theorem~\ref{thm1two}.
\hfill\rule{2mm}{2mm}
\end{proof}
\section{Courteous Theatregoers}
\label{courteous:sec}
Now consider the case where theatregoers are
courteous with probability $p$ and
selfish with
probability $1-p$.
We assume that the probabilistic behaviour of
the theatregoers is independent of each other and
it is set at the start and remains the same throughout
the occupancy of the row of seats. Analysis
of the occupancy will be done separately
for rows of seats with one and two entrances
(see Figures~\ref{fig:th1}~and~\ref{fig:th2}). Again,
seat choices are made uniformly at random.
Observe that
for $p=1$ no theatregoer is selfish and therefore all
seats in a row of seats will be occupied.
Also, since
the case $p=0$ whereby all theatregoers are selfish
was analyzed in the last section, we can assume without
loss of generality
that $0 < p < 1$.
\begin{theorem}[Row with only one entrance]
\label{thm01p}
Assume $0 < p < 1$ is given.
The
expected number $E_n$ of occupied seats in an arrangement of $n$ seats
in a row having only one entrance at an endpoint
with $p$-courteous
theatregoers is given by the expression
\begin{equation}
\label{pach1}
E_n = \sum_{k=1}^n \frac{1-p^k}{k(1-p)},
\end{equation}
for $n \geq 1$.
In particular, for fixed $p$, $E_n$ is $\frac{H_n + \ln (1-p)}{1-p}$,
asymptotically in $n$.
\end{theorem}
\begin{proof} ({\bf Theorem~\ref{thm01p}})
Consider an arrangement of $n$ seats (depicted in Figure~\ref{fig:th1} as squares).
Let $E_n$ denote the expected number of occupied
positions in an arrangement of $n$ seats
with single entrance
at an endpoint
and $p$-courteous
theatregoers.
With this definition in mind we obtain the following
recurrence
\begin{eqnarray}
E_n
&=& \label{rec01}
1 + p E_{n-1}
+ \frac{1-p}{n} \sum_{k=1}^n E_{k-1}
\end{eqnarray}
where the initial condition $E_0 =0$ holds.
Justification for this recurrence is as follows.
Recall that we have a line
with single entrance on the left. Observe that with probability $1-p$
the theatregoer is selfish and if (s)he occupies
position $k$ then theatregoers arriving later
can only occupy a position in the interval $[1, k-1]$
with single entrance at $1$.
On the other hand, with probability $p$ the theatregoer is courteous
in which case the next person
arriving sees $n-1$ available seats as
far as (s)he is concerned; where the first person
sat doesn't matter and what remains is a problem of size $n-1$.
This
yields the desired recurrence.
To simplify, multiply Recurrence~\eqref{rec01} by $n$
and combine similar terms to derive
\begin{eqnarray}
nE_n
&=& \notag
n + (np+1-p)E_{n-1} + (1-p) \sum_{k=1}^{n-2} E_k .
\end{eqnarray}
A similar equation is obtained when we replace $n$ with $n-1$
\begin{eqnarray}
(n-1)E_{n-1}
&=& \notag
n-1 + ((n-1)p+1-p)E_{n-2} + (1-p) \sum_{k=1}^{n-3} E_k .
\end{eqnarray}
If we subtract these last two equations
we derive
$
nE_n -(n-1)E_{n-1}=
1 + (np+1-p)E_{n-1} - ((n-1)p+1-p)E_{n-2}
+(1-p) E_{n-2} .
$
After collecting similar terms, it follows that
$nE_n = 1+ (n(1+p) -p) E_{n-1} -(n-1)p E_{n-2}$.
Dividing both sides of the last equation
by $n$ we obtain the following recurrence
\begin{eqnarray}
E_n
&=& \notag
\frac{1}{n} +
\left( 1 + p - \frac{p}{n} \right) E_{n-1}
-\left(1-\frac{1}{n} \right)pE_{n-2},
\end{eqnarray}
where it follows easily from the occupancy conditions that
$E_0 = 0, E_1 = 1, E_2 = \frac{3}{2} + \frac{p}{2}$.
Finally,
if we define $D_n := E_n - E_{n-1}$, substitute
in the last formula and collect similar terms
we conclude that
\begin{eqnarray}
D_n
&=& \label{rec02}
\frac{1}{n} +
\left(1-\frac{1}{n} \right)p D_{n-1},
\end{eqnarray}
where $D_1 = 1$. The solution of Recurrence~\eqref{rec02}
is easily shown to be $D_n = \frac{1-p^n}{n(1-p)}$ for $p<1$.
By telescoping we have the identity $E_n = \sum_{k=1}^n D_k $. The proof of the theorem is complete once we observe that $\sum_{k=1}^\infty p^k/k = - \ln (1-p)$.
\hfill\rule{2mm}{2mm}
\end{proof}
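The closed form~\eqref{pach1} can also be checked numerically against the recurrence~\eqref{rec01}; the following short Python verification (ours, using exact rational arithmetic) confirms the agreement for small $n$:
\begin{verbatim}
from fractions import Fraction
def courteous_one_entrance(n_max, p):
    E = [Fraction(0)]
    for n in range(1, n_max + 1):
        # E_n = 1 + p*E_{n-1} + ((1-p)/n) * sum_{k=1}^{n} E_{k-1}
        E.append(1 + p * E[n - 1] + (1 - p) * Fraction(sum(E), n))
    return E
p = Fraction(1, 3)
E = courteous_one_entrance(12, p)
for n in range(1, 13):
    closed = sum((1 - p**k) / (Fraction(k) * (1 - p)) for k in range(1, n + 1))
    assert closed == E[n]
\end{verbatim}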
\begin{comment}
As an immediate corollary,
using the expansion (in the variable $p$) of the logarithmic
function $\ln (1-p)$ in a Taylor series
we derive the following result.
\begin{corollary}
For firxed $0 <p<1$,
the
expected number $E_n$ of occupied seats in an arrangement of $n$ seats
in a row having only one entrance
with probabilistically $p$-courteous
theatregoers is given by the expression
\begin{equation}
\label{pach1a}
\frac{H_n - \ln (1-p)}{1-p},
\end{equation}
asymptotically in $n$.
\hfill\rule{2mm}{2mm}
\end{corollary}
\end{comment}
\begin{theorem}[Row with two entrances]
\label{thm02p}
Assume $0 < p < 1$ is given.
The
expected number $F_n$ of occupied seats in an arrangement of $n$ seats
in a row having two entrances at the endpoints
with probabilistically $p$-courteous
theatregoers is given by the expression
\begin{equation}
\label{pach2}
F_n =
-\frac{1-p^n}{1-p} + 2 \sum_{k=1}^n \frac{1-p^k}{k(1-p)},
\end{equation}
for $n \geq 1$.
In particular, for fixed $p$, $F_n$ is $-\frac{1}{1-p} + 2\frac{H_n + \ln (1-p)}{1-p},$
asymptotically in $n$.
\end{theorem}
\begin{proof} ({\bf Theorem~\ref{thm02p}})
Now consider an arrangement of $n$ seats
(depicted in Figure~\ref{fig:th2}).
For fixed $p$,
let $F_n$ denote the expected number of occupied
positions in an arrangement of $n$ seats
with two entrances one at each endpoint
and probabilistically $p$-courteous
theatregoers.
Let $E_n$ denote the expected number of occupied
positions in an arrangement of $n$ seats
with single entrance
and probabilistically $p$-courteous
theatregoers (defined in Theorem~\ref{thm01p}).
With this definition in mind we obtain the following
recurrence
\begin{eqnarray}
F_n
&=& \label{rec002}
1 + p F_{n-1}
+ \frac{1-p}{n} \sum_{k=1}^n (E_{n-k} + E_{k-1})
\end{eqnarray}
where the initial conditions $E_0 = F_0 =0$ hold.
Justification for this recurrence is as follows.
We have a line
with both entrances at the endpoints open.
Observe that with probability $1-p$
the theatregoer is selfish and if it occupies
position $k$ then theatregoers arriving later
can occupy positions in $[1, k-1] \cup [k+1, n]$
such that in the interval $[1, k-1]$
a single entrance is open at $1$ and
in the interval $[k+1, n]$ a single
entrance is open at $n$.
On the
other hand, like in the single entrance case,
with probability $p$ the theatregoer is courteous in which case the next person
arriving sees $n - 1$
available seats as far as (s)he is concerned; where the first
person sat doesn't matter.
This yields the desired recurrence.
Using Equation~\eqref{rec01},
it is clear that Equation~\eqref{rec002} can be simplified to
\begin{eqnarray}
F_n
&=& \notag
1 + p F_{n-1}
+ \frac{2(1-p)}{n} \sum_{k=1}^n E_{k-1}
= \notag
1 + p F_{n-1}
+ 2(E_n -1 -p E_{n-1}),
\end{eqnarray}
which yields
\begin{eqnarray}
F_n -1 - p F_{n-1}
&=&
\label{rec0023}
2(E_n -1 -p E_{n-1})
\end{eqnarray}
Finally if we define $\Delta_n := F_n -2E_n$ then Equation~\eqref{rec0023}
gives rise to the following recurrence
\begin{equation}
\label{rec0024}
\Delta_n = -1 + p \Delta_{n-1},
\end{equation}
with initial condition $\Delta_1 = F_1 - 2E_1 = - 1$.
Solving Recurrence~\eqref{rec0024} we conclude that
$\Delta_n = -\frac{1-p^n}{1-p}$, for $p<1$, and $\Delta_n =-n$,
otherwise. Therefore,
$
F_n = \Delta_n + 2E_n ,
$
from which we derive the desired Formula~\eqref{pach2}.
Using the expansion of $\ln (1-p)$ in a Taylor series (in the variable $p$)
we get the claimed expression for fixed $p$
and conclude the proof
of the theorem.
\hfill\rule{2mm}{2mm}
\end{proof}
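Analogously to the single-entrance case, the closed form~\eqref{pach2} can be verified numerically against the recurrence~\eqref{rec002} (again an informal check of ours, in exact arithmetic):
\begin{verbatim}
from fractions import Fraction
def courteous_two_entrances(n_max, p):
    E, F = [Fraction(0)], [Fraction(0)]
    for n in range(1, n_max + 1):
        E.append(1 + p * E[n - 1] + (1 - p) * Fraction(sum(E), n))
        F.append(1 + p * F[n - 1] + (1 - p) *
                 Fraction(sum(E[k - 1] + E[n - k] for k in range(1, n + 1)), n))
    return F
p = Fraction(2, 5)
F = courteous_two_entrances(12, p)
for n in range(1, 13):
    closed = (-(1 - p**n) / (1 - p)
              + 2 * sum((1 - p**k) / (Fraction(k) * (1 - p))
                        for k in range(1, n + 1)))
    assert closed == F[n]
\end{verbatim}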
\begin{comment}
As an immediate corollary,
using the expansion of $\ln (1-p)$ in a Taylor series (in the variable $p$)
we derive the following result.
\begin{corollary}
For firxed $0 <p<1$,
the
expected number $F_n$ of occupied seats in an arrangement of $n$ seats
in a row having two entrances
with probabilistically $p$-courteous
theatregoers is given by the expression
\begin{equation}
\label{pach2a}
-\frac{1}{1-p} + 2\frac{H_n - \ln (1-p)}{1-p},
\end{equation}
asymptotically in $n$.
\hfill\rule{2mm}{2mm}
\end{corollary}
\end{comment}
\section{Geometric Distribution}
\label{geo:sec}
In the sections above the theatregoers were more or less oblivious
to the seat they selected in that they chose their
seat independently at random with the uniform distribution. A
more realistic assumption might be that theatregoers prefer to be
seated as close to the centre of the action as possible. For a row
in the centre of the theatre, this suggests that there would be
a bias towards the centre seat (or two centre seats in the case of an even
length row) which is nicely modelled by a row with one entrance ending
at the middle of the row
where the probability of choosing a seat is biased towards the centre seat (which
we consider to be a barrier, i.e., people never go past the centre if they enter
on a given side of a two sided row).
For a row towards the edge of the theatre this would imply that
theatregoers prefer to chose their seats as close to the aisle, i.e.,
as close to the entrance, as possible. This is nicely modelled by
a row with one entrance with a bias towards the entrance.
As usual, we consider a row with one entrance with $n$ seats
(depicted in Figure~\ref{fig:th1} as squares)
numbered $1, 2, \ldots n$ from left to right.
We
refer to a distribution modelling the first case, with bias away from the entrance, as a
distribution with a {\em right} bias, while in the second case, with bias towards
the entrance, as distribution with a {\em left} bias. (We only consider cases where
the bias is monotonic in one direction though one could consider more
complicated distributions if for example there are obstructions part of the way
along the row.)
A very strong bias towards the centre might be modelled by the geometric distribution.
For the case of a left biased distribution theatregoers will occupy seat $k$ with probability $\frac{1}{2^k}$
for $k=1, \ldots, n-1$ and with probability $\frac{1}{2^{n-1}}$ for $k=n$.
For the case of a right biased distribution theatregoers will occupy seat
$k$ with probability $\frac{1}{2^{n+1-k}}$ for $k= 2, \ldots, n$ and with probability $\frac{1}{2^{n-1}}$ for $k=1$.
We examine the occupancy of a one-entrance row under each of these distributions assuming
a $p$-courteous audience.
\begin{theorem}[Left bias]
\label{thm1geol}
The expected number of occupied seats by $p$-courteous
theatregoers in an arrangement of $n$ seats
in a row with single entrance is
\begin{equation}
\label{geo1:eq}
\sum_{l=1}^n \prod_{k=1}^{l-1} \left( p + \frac{1-p}{2^{k}} \right)
\end{equation}
In particular, the value $T_p$ of ~\eqref{geo1:eq} as $n\rightarrow \infty$, satisfies
$$
\frac{1.6396 -0.6425 p}{1-p}
\leq T_p \leq
\frac{1.7096 -0.6425 p}{1-p}
$$
for all $p<1$.
\end{theorem}
\begin{proof} ({\bf Theorem~\ref{thm1geol}})
In the geometric distribution with left bias a theatergoer
occupies seat numbered $k$ with probability
$2^{-k}$, for $k \leq n-1$ and seat numbered $n$
with probability $2^{-(n-1)}$.
The seat occupancy
recurrence for courteous theatergoers is the following
\begin{eqnarray}
L_n &=& \label{lgeo1:eq}
1 + p L_{n-1} + (1-p) \sum_{k=1}^{n-1} 2^{-k}L_{k-1} + (1-p)2^{-(n-1)}L_{n-1}
\end{eqnarray}
with initial condition $L_0=0, L_1 = 1$.
To solve this recurrence we consider the expression for $L_{n-1}$
\begin{eqnarray}
L_{n-1} &=& \label{lgeo2:eq}
1 + p L_{n-2} + (1-p) \sum_{k=1}^{n-2} 2^{-k}L_{k-1} + (1-p)2^{-(n-2)}L_{n-2}
\end{eqnarray}
Subtracting Equation~\eqref{lgeo2:eq} from Equation~\eqref{lgeo1:eq}
and using the notation $\Delta_k := L_k - L_{k-1}$ we see that
\begin{eqnarray}
\Delta_n &=& \notag
\left( p + \frac{1-p}{2^{n-1}} \right) \Delta_{n-1} ,
\end{eqnarray}
for $n \geq 2$.
It follows that
\begin{eqnarray}
\Delta_n &=& \notag
\prod_{k=1}^{n-1} \left( p + \frac{1-p}{2^{k}} \right) ,
\end{eqnarray}
which proves Identity~\eqref{geo1:eq}.
The previous identity implies that
$\Delta_n \leq \left(\frac{1+p}{2}\right)^{n-1}$ and therefore
we can get easily an upper bound on the magnitude of
$L_n$. Indeed,
\begin{eqnarray}
L_n &=& \notag
\sum_{k=1}^n \Delta_k
\leq
\sum_{k=1}^{n} \left( \frac{1+p}{2} \right)^{k-1}
\leq
\frac{2}{1-p} ,
\end{eqnarray}
for $p<1$. Similarly, one can easily show a lower bound of $1/(1-p)$. Next we focus on showing the much tighter bounds we have already promised.
Our goal is to provide good estimates of $T_p = \sum_{l=1}^\infty \prod_{k=1}^{l-1} \left( p + \frac{1-p}{2^{k}} \right)$. Although there seems to be no easy closed formula that describes $T_p$, the same quantity can be numerically evaluated for every fixed value of $p<1$ using any mathematical software that performs symbolic calculations. In particular we can draw $T_p$ for all non negative values of $p<1$.
One strategy to approximate $T_p$ to a good precision would be to compute enough points $(p,T_p)$, and then find an interpolating polynomial. Since we know $T_p$ is unbounded as $p\rightarrow 1^-$, it seems more convenient to find interpolating points $(p,(1-p)T_p)$ instead (after all, we know that $1/(1-p) \leq T_p \leq 2/(1-p)$). Adding at the end a sufficient error constant term, we can find polynomials that actually bound from below and above expression $(1-p)T_p$.
It turns out that just a few interpolating points are enough to provide a good enough estimate. In that direction, we define polynomial
$$
g(p) :=1.6746 -0.6425 p
$$
which we would like to show that approximates $(1-p)T_p$ sufficiently well.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=8cm]{interpolating.pdf}
\caption{The graph of $g(p) - (1-p)T_p$ together with the bounds $\pm0.035$.}
\label{fig: interpolation}
\end{center}
\end{figure}
To that end, we can draw $g(p) - (1-p)T_p$, see Figure~\ref{fig: interpolation}, and verify that indeed $\left| g(p) - (1-p)T_p \right|\leq 0.035$ as promised.
\hfill\rule{2mm}{2mm}
\end{proof}
We leave it as an open problem to determine the exact asymptotics of expression~\eqref{geo1:eq} above, as a function of $p$. As a sanity check, we can find (using any mathematical software that performs symbolic calculations) the limit of ~\eqref{geo1:eq} as $n\rightarrow \infty$ when $p=0$, which turns out to be approximately $1.64163$.
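For instance, the truncated sum can be evaluated directly; the small Python script below (ours) prints numerical estimates of $T_p$ next to the bounds stated in Theorem~\ref{thm1geol}.
\begin{verbatim}
def T(p, terms=2000):
    # truncated evaluation of sum_{l>=1} prod_{k=1}^{l-1} (p + (1-p)/2^k)
    total, prod = 0.0, 1.0
    for k in range(1, terms + 1):
        total += prod               # prod = prod_{j=1}^{k-1}(p + (1-p)/2^j)
        prod *= p + (1 - p) / 2.0 ** k
    return total
for p in [0.0, 0.25, 0.5, 0.75, 0.9]:
    lo = (1.6396 - 0.6425 * p) / (1 - p)
    hi = (1.7096 - 0.6425 * p) / (1 - p)
    print(f"p={p:4.2f}  lower={lo:8.4f}  T_p~{T(p):8.4f}  upper={hi:8.4f}")
\end{verbatim}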
\begin{theorem}[Right bias]
\label{thm1geor}
The expected number of occupied seats by $p$-courteous
theatregoers in an arrangement of $n$ seats
in a row with single entrance is at least $\frac{n+1}{2}$,
for any $p$. Moreover, this bound is attained for $p=0$.
\end{theorem}
\begin{proof} ({\bf Theorem~\ref{thm1geor}})
In the geometric distribution with right bias a theatregoer
occupies seat numbered $k$ with probability
$2^{k-n-1}$, for $k \geq 2$ and seat numbered $1$
with probability $2^{-(n-1)}$.
The seat occupancy
recurrence for courteous theatergoers is the following
\begin{eqnarray}
R_n &=& \label{rgeo1:eq}
1 + p R_{n-1} + (1-p) \sum_{k=2}^{n} 2^{k-n-1}R_{k-1} .
\end{eqnarray}
with initial condition $R_0=0, R_1 = 1$.
To solve this recurrence, we
consider as usual the equation for $R_{n-1}$
\begin{eqnarray}
R_{n-1} &=& \label{rgeo2:eq}
1 + p R_{n-2} + (1-p) \sum_{k=2}^{n-1} 2^{k-n}R_{k-1} .
\end{eqnarray}
Subtracting Equation~\eqref{rgeo2:eq} from Equation~\eqref{rgeo1:eq}
and using the notation $\Delta_k := R_k - R_{k-1}$ we see that
\begin{eqnarray}
\Delta_n &=& \label{rgeo3:eq}
p \Delta_{n-1} + (1-p) \sum_{k=1}^{n-2} \frac{1}{2^k} \Delta_{n-k} + \frac{1-p}{2^{n-1}},
\end{eqnarray}
for $n \geq 2$. We claim that we
can use Equation~\eqref{rgeo3:eq} to prove that
$\Delta_k \geq \frac{1}{2}$ by induction on $k$, for all
$k \geq 1$.
Observe $\Delta_1 = 1$.
Assume the claim is valid for all $1 \leq k \leq n-1$. Then we see that
\begin{eqnarray}
\Delta_n
&\geq& \notag
\frac{p}{2} +
\frac{1-p}{2} \left( 1- \frac{1}{2^{n-2}} \right) + \frac{1-p}{2^{n-1}}
=
\frac{1}{2},
\end{eqnarray}
which proves the claim.
It is easy to see that this same proof can be used to
show that $\Delta_n=\frac1 2$, for all $n \geq 2$, in the
case $p=0$. This proves the theorem.
\hfill\rule{2mm}{2mm}
\end{proof}
\section{Zipf Distribution}
\label{zipf:sec}
We now study the case where theatregoers
select their seat
using an arguably more natural distribution, namely, the Zipf distribution \cite{zipf}.
As before, throughout the presentation we
consider an arrangement of $n$ seats
(depicted in Figure~\ref{fig:th1} as squares)
numbered $1$ to $n$ from left to right with
one entrance starting from seat $1$.
Theatregoers enter sequentially
and may enter the row only from the single entrance.
There are two occupancy possibilities: {\em Zipf with left bias} and
{\em Zipf with right bias}.
In Zipf with left bias (respectively, right) a
theatregoer will occupy
seat $k$ at random with probability $\frac{1}{kH_n}$
(respectively, $\frac{1}{(n+1-k)H_n}$)
and a selfish theatregoer
blocks passage
to her/his right, i.e., all positions in
$[k+1,n]$.
In the sequel we look at a row with a single
entrance. The case of a row with two entrances
may be analyzed in a similar manner.
First we analyze the Zipf distribution
with left bias for selfish theatregoers.
\begin{theorem}[Selfish with left bias]
\label{thm1zipfl}
The expected number of occupied seats by selfish theatregoers
in an arrangement of $n$ seats
in a row with single entrance is equal to $\ln \ln n$, asymptotically in $n$.
\end{theorem}
\begin{proof} ({\bf Theorem~\ref{thm1zipfl}})
Let $L_n$ be the expected number of theatregoers occupying seats
in a row of $n$ seats.
Observe that $L_0=0, L_1 =1$ and that the following recurrence is valid
for all $n \geq 1$.
\begin{eqnarray}
L_n &=& \label{maineq1z}
1 + \frac{1}{H_n} \sum_{k=1}^{n} \frac{1}{k} L_{k-1}.
\end{eqnarray}
The explanation for this equation is as follows. A theatregoer
may occupy any one of the seats from $1$ to $n$. If it
occupies seat number $k$ then seats numbered $k+1$ to $n$
are blocked while only seats numbered $1$ to $k-1$ may be
occupied by new theatregoers.
It is not difficult to solve this recurrence. Write down
both recurrences for $L_n$ and $L_{n-1}$.
\begin{eqnarray*}
H_n L_n
&=&
H_n + \sum_{k=1}^{n} \frac{1}{k} L_{k-1}
\mbox{ and }
H_{n-1} L_{n-1}
= H_{n-1} + \sum_{k=1}^{n-1} \frac{1}{k} L_{k-1}.
\end{eqnarray*}
Subtracting these last two identities we see that
$$
H_nL_n - H_{n-1} L_{n-1} = H_n - H_{n-1} + \frac{1}{n} L_{n-1}
= \frac{1}{n} + \frac{1}{n} L_{n-1}
$$
Therefore $H_n L_n = \frac{1}{n} + H_n L_{n-1}$.
Consequently,
$
L_n = \frac{1}{nH_n} + L_{n-1}.
$
From the last equation we see that
$$
L_n = \sum_{k=1}^n \frac{1}{kH_k} \approx \int_2^n \frac{dx}{x \ln x} = \ln \ln n .
$$
This yields easily Theorem~\ref{thm1zipfl}.
\hfill\rule{2mm}{2mm}
\end{proof}
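The exact solution $L_n=\sum_{k=1}^n \frac{1}{kH_k}$ is easy to evaluate; the following snippet (ours) compares it with $\ln \ln n$ for a few values of $n$ (the two differ by an additive constant):
\begin{verbatim}
import math
def zipf_left_selfish(n):
    # L_n = sum_{k=1}^{n} 1/(k*H_k), computed incrementally
    H, L = 0.0, 0.0
    for k in range(1, n + 1):
        H += 1.0 / k
        L += 1.0 / (k * H)
    return L
for n in [10**2, 10**4, 10**6]:
    print(n, zipf_left_selfish(n), math.log(math.log(n)))
\end{verbatim}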
Next we consider selfish theatregoers choosing their seats according to
the Zipf distribution
with right bias. As it turns out, the analysis of the resulting recurrence is
more difficult than the previous cases. First we need the following technical lemma whose proof can be found in the Appendix.
\begin{lemma}\label{lem: both bounds on summation}
For every $\epsilon>0$, there exists $n_0$ big enough such that
$$
\left| \frac{\pi^2}6 - \sum_{k=1}^{n-1}\frac{H_n-H_k}{n-k} \right| \leq \epsilon, \quad \forall n\geq n_0
$$
In particular, for all $n \geq 40$ we have
$$
1.408 \leq \sum_{k=1}^{n-1}\frac{H_n-H_k}{n-k} \leq 1.86.
$$
\end{lemma}
Next we use Lemma~\ref{lem: both bounds on summation} to conclude that
\begin{lemma}
\label{costis2}
The solution of the recurrence relation
$$R_n = 1 + \frac1{H_n}\sum_{k=1}^{n-1}\frac{1}{n-k}R_k$$
with initial condition $R_1=1$ satisfies
\begin{equation}\label{equa: both bounds}
\frac{100}{383} H^2_n \leq R_n \leq \frac{5}{7} H^2_n.
\end{equation}
\end{lemma}
\begin{proof} ({\bf Lemma~\ref{costis2}})
It is easy to check numerically that for $n_0=40$ we have
$$
\frac{R_{n_0}}{H^2_{n_0}} \approx 0.430593
$$
and indeed $\frac{100}{383} \leq 0.430593 \leq \frac{5}{7}$.
Hence, the promised bounds follow inductively on $n\geq n_0$, once we prove that for the constants $c'=\frac57$ and $c''=\frac{100}{383}$, and for all $n\geq n_0$, we have
\begin{align*}
1 + \frac {c'}{H_n}\sum_{k=1}^{n-1}\frac{1}{n-k}H^2_k
&\leq {c'} H^2_n \\
1 + \frac{c''}{H_n}\sum_{k=1}^{n-1}\frac{1}{n-k}H^2_k
&\geq c'' H^2_n \\
\end{align*}
To save repetitions in calculations, let $\Box \in \{\leq, \geq\}$ and $c \in \{c',c''\}$, and observe that
\begin{align}
1 + \frac {c}{H_n}\sum_{k=1}^{n-1}\frac{1}{n-k}H^2_k ~~\Box~~ c H^2_n
~~~&\Leftrightarrow ~~~
\frac{H_n}{c} ~~\Box~~ H^3_n - \sum_{k=1}^{n-1}\frac{1}{n-k}H^2_k \notag \\
~~~&\Leftrightarrow ~~~
\frac{H_n}{c} ~~\Box~~ \sum_{k=1}^{n-1}\frac{H^2_n-H^2_k}{n-k} + \frac{H_n^2}{n} \tag{Since $\sum_{k=0}^{n-1}\frac1{n-k}=H_n$}\\
~~~&\Leftrightarrow ~~~
\frac{1}{c} - \frac{H_n}n ~~\Box~~ \frac1{H_n}\sum_{k=1}^{n-1}\frac{H^2_n-H^2_k}{n-k} \notag \\
~~~&\Leftrightarrow ~~~
\frac{1}{c} - \frac{H_n}n ~~\Box~~ \frac1{H_n}\sum_{k=1}^{n-1}\frac{(H_n+H_k)(H_n-H_k)}{n-k} \label{equa: aimed inequalities} \end{align}
For proving the upper bound of~\eqref{equa: both bounds}, we use $\Box="\leq"$ (note that the direction is inversed). We focus on expression \eqref{equa: aimed inequalities} which we need to show that is satisfied for the given constant. In that direction we have
$$
\frac1{H_n}\sum_{k=1}^{n-1}\frac{(H_n+H_k)(H_n-H_k)}{n-k}
\geq
\sum_{k=1}^{n-1}\frac{H_n-H_k}{n-k}
\stackrel{(Lemma~\ref{lem: both bounds on summation})}{\geq} 1.408 \geq \frac75-\frac{H_n}n
$$
Hence, \eqref{equa: aimed inequalities} is indeed satisfied for $c=\frac57$, establishing the upper bound of~\eqref{equa: both bounds}.
Now for the lower bound of~\eqref{equa: both bounds}, we take $\Box="\geq"$ , and we have
$$
\frac1{H_n}\sum_{k=1}^{n-1}\frac{(H_n+H_k)(H_n-H_k)}{n-k}
\leq
2 \sum_{k=1}^{n-1}\frac{H_n-H_k}{n-k}
\stackrel{(Lemma~\ref{lem: both bounds on summation})}{\leq} 3.72 \leq \frac{383}{100}-\frac{H_n}n
$$
for $n\geq 40$. Hence $c''=\frac{100}{383}$, again as promised.
\hfill\rule{2mm}{2mm}
\end{proof}
Note that Lemma~\ref{costis2}
implies that $\lim_{n\rightarrow \infty} R_n/\ln^2 n =c$,
for some constant $c \in [0.261, 0.72]$. This is actually the constant hidden in the $\Theta$-notation of Theorem~\ref{thm1zipfr}. We leave it as an open problem to determine exactly the constant $c$. Something worthwhile noticing is that our arguments cannot narrow down the interval of that constant to anything better than $[3/\pi^2, 6/\pi^2]$.
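As an indication of where this constant might lie, the recurrence in Lemma~\ref{costis2} can be evaluated directly; the short numerical experiment below (ours) prints $R_n/H_n^2$ for increasing $n$.
\begin{verbatim}
def zipf_right_selfish(n_max):
    # R_n = 1 + (1/H_n) * sum_{k=1}^{n-1} R_k/(n-k), with R_1 = 1
    R, H = [0.0], [0.0]
    for n in range(1, n_max + 1):
        H.append(H[-1] + 1.0 / n)
        R.append(1 + sum(R[k] / (n - k) for k in range(1, n)) / H[n])
    return R, H
R, H = zipf_right_selfish(3000)
for n in [100, 1000, 3000]:
    print(n, R[n] / H[n] ** 2)   # should fall within the interval above
\end{verbatim}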
\begin{theorem}[Selfish with right bias]
\label{thm1zipfr}
The expected number of occupied seats by selfish theatregoers
in an arrangement of $n$ seats
in a row with single entrance is $\Theta( \ln^2 n)$,
asymptotically in $n$.
\end{theorem}
\begin{proof} ({\bf Theorem~\ref{thm1zipfr}})
Let $R_n$ be the expected number of theatregoers occupying seats
in a row of $n$ seats,
when seating is biased to the right,
Observe that $R_0=0, R_1 =1$ and that the following recurrence is valid
for all $n \geq 1$.
\begin{eqnarray}
R_n
&=& \label{maineq1zz}
1 + \frac{1}{H_n} \sum_{k=2}^{n} \frac{1}{n+1-k} R_{k-1}
=
1 + \frac{1}{H_n} \sum_{k=1}^{n-1} \frac{1}{n-k} R_{k}.
\end{eqnarray}
The justification for the recurrence is the same as in the case of the left bias with
the probability changed to reflect the right bias. The theorem now follows immediately
from Lemma~\ref{costis2}.
\hfill\rule{2mm}{2mm}
\end{proof}
\begin{theorem}[Courteous with left bias]
\label{thm2zipfl}
The expected number of occupied seats by $p$-courteous theatregoers
in an arrangement of $n$ seats
in a row with single entrance is equal to
\begin{eqnarray}
L_n
&=& \label{maineq5zipf}
\ln \ln n +
\sum_{l=1}^n
\sum_{k=1}^l p^k
\left( 1 - h_l \right)
\left( 1 - h_{l-1} \right)
\cdots
\left( 1 - h_{l-k+1} \right)
h_{l-k}
\end{eqnarray}
asymptotically in $n$, where $h_0 := 0$ and
$h_k := \frac{1}{kH_k}$, for $k \geq 1$. In particular, for constant
$0< p < 1$ we have that $L_n = \Theta (\frac{\ln\ln n}{1-p})$.
\end{theorem}
\begin{proof} ({\bf Theorem~\ref{thm2zipfl}})
We obtain easily the following recurrence
\begin{eqnarray}
L_n &=& \label{maineq2zipf}
1 + p L_{n-1} + \frac{1-p}{H_n} \sum_{k=1}^{n} \frac{1}{k} L_{k-1}.
\end{eqnarray}
Write the recurrence for $L_{n-1}$:
\begin{eqnarray}
L_{n-1} &=& \notag
1 + p L_{n-2} + \frac{1-p}{H_{n-1}} \sum_{k=1}^{n-1} \frac{1}{k} L_{k-1}.
\end{eqnarray}
Multiply these last two recurrences by $H_n, H_{n-1}$ respectively to get
\begin{eqnarray}
H_n L_n
&=& \notag
H_n + p H_n L_{n-1} + (1-p) \sum_{k=1}^{n} \frac{1}{k} L_{k-1} \\
H_{n-1} L_{n-1}
&=& \notag
H_{n-1} + p H_{n-1} L_{n-2} + (1-p) \sum_{k=1}^{n-1} \frac{1}{k} L_{k-1}
\end{eqnarray}
Now subtract the second equation from the first and after collecting
similar terms and simplifications
we get
\begin{eqnarray}
L_n
&=& \notag
\frac{1}{nH_n} + \left( 1+p - \frac{p}{nH_n} \right) L_{n-1}
-p \frac{H_{n-1}}{H_n} L_{n-2} ,
\end{eqnarray}
with initial conditions $L_0 =0, L_1 = 1$.
In turn, if we set $\Delta_n := L_n - L_{n-1}$ then we
derive the following recurrence for $\Delta_n$.
\begin{eqnarray}
\Delta_n
&=& \label{maineq3zipf}
\frac{1}{nH_n} + p \left( 1 - \frac{1}{nH_n} \right) \Delta_{n-1},
\end{eqnarray}
with initial condition $\Delta_1 =1$.
Recurrence~\eqref{maineq3zipf} gives rise
to the following expression for $\Delta_n$
\begin{eqnarray}
\Delta_n
&=& \label{maineq4zipf}
h_n +
\sum_{k=1}^n p^k
\left( 1 - h_n \right)
\left( 1 - h_{n-1} \right)
\cdots
\left( 1 - h_{n-k+1} \right)
h_{n-k},
\end{eqnarray}
where $h_0 := 0$ and $h_k := \frac{1}{kH_k}$.
This completes the proof of Identity~\eqref{maineq5zipf}.
Next we prove the bounds on $L_n$.
First of all observe that the following inequality holds
\begin{equation}
\label{hn:eq}
2 h_n \leq h_{n/2} \leq 3 h_n .
\end{equation}
Next we estimate the sum
in the righthand side of Equation~\eqref{maineq4zipf}.
To this end we split the sum
into two parts: one part, say $S_1$, in the range from $1$ to $n/2$ and
the second part, say $S_2$, from $n/2 +1$ to $n$.
Observe that
\begin{eqnarray}
S_2
&=& \notag
\sum_{k\geq n/2+1} p^k \left( 1 - h_n \right) \left( 1 - h_{n-1} \right) \cdots \left( 1 - h_{n-k+1} \right) h_{n-k}\\
&\leq& \notag
\sum_{k\geq n/2+1} p^k \leq p^{n/2+1} \frac{1}{1-p} ,
\end{eqnarray}
which is small, asymptotically in $n$, for $p<1$ constant.
Now consider the sum $S_1$.
\begin{eqnarray}
S_1
&=& \notag
\sum_{k=1}^{n/2} p^k \left( 1 - h_n \right) \left( 1 - h_{n-1} \right) \cdots \left( 1 - h_{n-k+1} \right) h_{n-k}\\
&\leq& \notag
h_{n/2} \sum_{k=1}^{n/2} p^k \leq 3 h_n \frac{p}{1-p} \mbox{ (Using Inequality~\eqref{hn:eq})}
\end{eqnarray}
and
\begin{eqnarray}
S_1
&\geq& \notag
h_n \sum_{k=1}^{n/2} p^k \left( 1 - h_n \right) \left( 1 - h_{n-1} \right) \cdots \left( 1 - h_{n-k+1} \right) \\
&\approx& \notag
h_n \sum_{k=1}^{n/2} p^k e^{-(h_n +h_{n-1}+ \cdots + h_{n-k+1})} \mbox{ (since $1-x \approx e^{-x}$)} \\
&\approx& \notag
h_n \sum_{k=1}^{n/2} p^k e^{- \ln \left( \frac{\ln n}{\ln (n/2)} \right)}
\approx
h_n \sum_{k=1}^{n/2} p^k
\approx
c h_n \frac{p}{1-p},
\end{eqnarray}
for some constant $c>0$.
Combining the last two inequalities it is easy to derive tight
bounds for $\Delta_n$ and also for $L_n$, since
$L_n = \sum_{k=1}^n \Delta_k$.
This completes the proof of Theorem~\ref{thm2zipfl}.
\hfill\rule{2mm}{2mm}
\end{proof}
\begin{comment}
Next we give the recurrence
for the Zipf distribution
with right bias.
\begin{theorem}[Zipf with right bias]
\label{thm2zipfr}
The expected number of occupied seats by $p$-courteous theatregoers
in an arrangement of $n$ seats
in a row with single entrance is equal to $\fbox{?????}$, asymptotically in $n$.
\end{theorem}
\begin{proof}
Let $R_n$ be the expected number of theatregoers occupying seats
in a row of $n$ seats,
when seating is biased to the right,
Observe that $R_0=0, R_1 =1$ and that the following recurrence is valid
for all $n \geq 1$.
\begin{eqnarray}
R_n
&=& \notag
1 + p R_{n-1} + \frac{1-p}{H_n} \sum_{k=1}^{n} \frac{1}{n+1-k} R_{k-1}\\
&=& \label{maineq2zzipf}
1 + p R_{n-1} + \frac{1-p}{H_n} \sum_{k=1}^{n-1} \frac{1}{n-k} R_{k}
\end{eqnarray}
It turns out that
for this case the recurrence
is harder to solve.
\hfill\rule{2mm}{2mm}
\end{proof}
\end{comment}
\begin{theorem}[Courteous with right bias]
\label{thm2zipfr}
The expected number $R_n(p)$ of occupied seats by $p$-courteous theatregoers
in an arrangement of $n$ seats
in a row with single entrance, and for all constants $0 \leq p<1$ satisfies
$$R_n(p) = \Omega \left( \frac{H_n^2}{1 - 0.944p} \right)
\mbox{ and } R_n(p) = O\left( \frac{H_n^2}{1-p} \right)$$
asymptotically in $n$.
\end{theorem}
\begin{proof} ({\bf Theorem~\ref{thm2zipfr}})
Let $R_n(p)$ be the expected number of theatregoers occupying seats
in a row of $n$ seats,
when seating is biased to the right.
Observe that $R_0(p)=0, R_1(p) =1$ and that the following recurrence is valid
for all $n \geq 1$.
\begin{eqnarray}
R_n(p)
&=& \notag
1 + p R_{n-1}(p) + \frac{1-p}{H_n} \sum_{k=1}^{n} \frac{1}{n+1-k} R_{k-1}(p)\\
&=& \label{maineq2zzipf}
1 + p R_{n-1}(p) + \frac{1-p}{H_n} \sum_{k=1}^{n-1} \frac{1}{n-k} R_{k}(p)
\end{eqnarray}
Before proving the theorem we proceed with the following lemma.
\begin{lemma}\label{lem: bounds from previous recurrences}
Let $R_n=R_n(0), R_n(p)$ be the solutions to the recurrence relations \eqref{maineq1zz}, \eqref{maineq2zzipf}, respectively. Then for every $0\leq p <1$ and all constants $c_1, c_2 >0$ with $c_1<4$, we have
if
$
\left( \forall n \geq 40, ~c_1 H_n^2 \leq R_n \leq c_2 H_n^2 \right)
$
then
$$
\left( \forall n \geq 40, ~\frac{4c_1/9}{1-(1-0.214c_1)p} H_n^2 \leq R_n(p) \leq \frac{c_2}{1-p} H_n^2 \right)
$$
assuming that the bounds for $R_n$ hold for $n=40$.\footnote{Constant $c_1$ is scaled by 4/9 only to satisfy a precondition in a subsequent theorem.}
\end{lemma}
\begin{proof} ({\bf Lemma~\ref{lem: bounds from previous recurrences}})
The proof is by induction on $n$, and the base case $n=40$ is straightforward.
For the inductive step, suppose that the bounds for $R_n(p)$ are true for all integers up to $n-1$, and fix some $\Box \in \{\geq, \leq\}$ corresponding to the bounding constants $c \in \{c_1, c_2\}$ and $x \in \left\{\frac{4c_1/9}{1-(1-0.214c_1)p},\frac{c_2}{1-p}\right\}$ respectively.
Then we have
\begin{align}
R_n(p)
&
= 1+pR_{n-1}(p)+\frac{1-p}{H_n} \sum_{k=1}^{n-1} \frac{R_k(p)}{n-k} \tag{definition of $R_n$} \\
&
\Box~~ 1+p x H_{n-1}^2+\frac{(1-p)x}{H_n} \sum_{k=1}^{n-1} \frac{H_k^2}{n-k} \tag{Inductive Hypothesis} \\
&
\Box~~ 1+p x H_{n-1}^2+(1-p)x \left( H_n^2 - \frac1c \right) \tag{Preconditions} \\
&
\Box~~
x \left( pH_{n-1}^2 + (1-p) H_n^2 \right)
+ 1 - \frac{(1-p)x}c \label{equa: conclusion to bound}
\end{align}
Now consider $\Box ="\geq"$, and observe that
\begin{align}
R_n(p)
&\stackrel{\eqref{equa: conclusion to bound}}{\geq}
x \left( pH_{n-1}^2 + (1-p) H_n^2 \right)
+ 1 - \frac{(1-p)x}{c_1} \notag \\
& = x H_n^2 + xp(H_{n-1}^2 - H_n^2) + 1 - \frac{(1-p)x}{c_1} \notag \\
& = x H_n^2 + xp(H_{n-1} - H_n)(H_{n-1} + H_n) + 1 - \frac{(1-p)x}{c_1} \notag \\
& = x H_n^2 - xp \frac{H_{n-1} + H_n}{n} + 1 - \frac{(1-p)x}{c_1} \notag \\
& \geq x H_n^2 - 2xp \frac{H_n}{n} + 1 - \frac{(1-p)x}{c_1} \notag \\
& \geq x H_n^2 - 0.214 xp + 1 - \frac{(1-p)x}{c_1} \tag{Since $\frac{H_n}{n} < 0.106964$, for $n\geq 40$ } \\
& \geq x H_n^2 \tag{$x=\frac{4c_1/9}{1-(1-0.214c_1)p} \leq \frac{c_1}{1-(1-0.214c_1)p}$}
\end{align}
Finally, we consider $\Box ="\leq"$, and we have
\begin{align}
R_n(p)
&\stackrel{\eqref{equa: conclusion to bound}}{\leq}
x \left( pH_{n-1}^2 + (1-p) H_n^2 \right)
+ 1 - \frac{(1-p)x}{c_2} \notag \\
& \leq x H_n^2 + 1 - \frac{(1-p)x}{c_2} \notag \\
& = x H_n^2 \tag{$x=\frac{c_2}{1-p}$}
\end{align}
This completes the proof of Lemma~\ref{lem: bounds from previous recurrences}.
\hfill\rule{2mm}{2mm}
\end{proof}
Now we proceed with the main proof of Theorem~\ref{thm2zipfr}.
Recall that by Lemma~\ref{costis2} we have $\frac{100}{383} H_n^2 \leq R_n \leq \frac57 H_n^2$ for all $n \geq 40$, where $R_n$ is the solution to the recurrence \eqref{maineq1zz}. But then, according to Lemma~\ref{lem: bounds from previous recurrences}, it suffices to verify that for all $0 \leq p < 1$, both bounds below hold true
$$
~\frac{4c_1/9}{1-(1-0.214c_1)p} H_{40}^2 \leq R_{40}(p) \leq \frac{c_2}{1-p} H_{40}^2
$$
where $c_1=100/383$ and $c_2=5/7$. In other words, it suffices to verify that
$$
\frac{2.13}{1-0.945 p}
\leq R_{40}(p)
\leq
\frac{13}{1-p}, ~~~\forall~ 0 \leq p < 1.
$$
Expression $R_{40}(p)$ is a polynomial in $p$ of degree 39, which can be computed explicitly from recurrence~\eqref{maineq2zzipf}.
\begin{align*}
R_{40}(p)= &
3.70962710339202 \times 10^{-7} p^{39}
+3.0614726926339265\times 10^{-6} p^{38}
+0.0000139932 p^{37}\\
&+0.0000467865 p^{36}
+0.000127738 p^{35}+0.000301798 p^{34}+0.000639203 p^{33} \\
&+0.00124237 p^{32}+0.0022527 p^{31}+0.00385706 p^{30}
+0.00629362 p^{29}
+0.00985709 p^{28}\\
&+0.0149033 p^{27}
+0.0218533 p^{26}
+0.0311969 p^{25}
+0.0434963 p^{24}
+0.0593899 p^{23}\\
&+0.0795964 p^{22}
+0.104921 p^{21}
+0.136261 p^{20}
+0.174618 p^{19}
+0.221108 p^{18}
+0.27698 p^{17} \\
&+0.343639 p^{16}
+0.422678 p^{15}
+0.51592 p^{14}
+0.625477 p^{13}
+0.753831 p^{12}
+0.903948 p^{11}\\
&+1.07944 p^{10}
+1.28482 p^9
+1.52585 p^8+1.81016 p^7+2.14819 p^6+2.55498 p^5+3.05352 p^4 \\
&+3.68202 p^3+4.51248 p^2
+5.7117 p+7.8824.
\end{align*}
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=8cm]{sandwich.pdf}
\caption{The black solid line represents polynomial $R_{40}(p)$, the red dot-dashed line represents $\frac{4c_1/9}{1-(1-0.214c_1)p} H_{40}^2$, while the blue dotted line represents $\frac{c_2}{1-p} H_{40}^2$.}
\label{fig: sandwich R40}
\end{center}
\end{figure}
Then we can draw $R_{40}(p)$ to verify that it is indeed sandwiched between $\frac{2.13}{1-0.945 p}$ and $\frac{13}{1-p}$, for all $0 \leq p < 1$, as Figure~\ref{fig: sandwich R40} confirms.
Note that $\frac{13}{1-p}$ is unbounded as $p\rightarrow 1$, and hence its value exceeds $R_{40}(1)$ for $p$ large enough, here approximately for $p\geq 0.7$.
This completes the proof of Theorem~\ref{thm2zipfr}. \hfill\rule{2mm}{2mm}
\end{proof}
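Instead of manipulating the degree-$39$ polynomial symbolically, one can also spot-check the sandwich bounds numerically; the short script below (ours) evaluates $R_{40}(p)$ from recurrence~\eqref{maineq2zzipf} at a few values of $p$ and prints it next to the two bounds.
\begin{verbatim}
def zipf_right_courteous(n_max, p):
    # R_n(p) = 1 + p*R_{n-1}(p) + ((1-p)/H_n) * sum_{k=1}^{n-1} R_k(p)/(n-k)
    R, H = [0.0], [0.0]
    for n in range(1, n_max + 1):
        H.append(H[-1] + 1.0 / n)
        R.append(1 + p * R[n - 1]
                 + (1 - p) * sum(R[k] / (n - k) for k in range(1, n)) / H[n])
    return R[n_max]
for p in [0.0, 0.3, 0.6, 0.9, 0.99]:
    r40 = zipf_right_courteous(40, p)
    print(p, 2.13 / (1 - 0.945 * p), r40, 13 / (1 - p))
\end{verbatim}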
\section{The Occupancy of a Theater}
\label{theater:sec}
Given the previous results it is now easy to analyze the occupancy of a theater.
A typical theater consists of an array of rows separated by aisles.
This
naturally divides each row into sections which either have one entrance (e.g.,
when the row section ends with a wall) or two entrances.
For example
in Figure \ref{lipari-fig} we see the Greek theatre on Lipari consisting of twelve
rows each divided into two one entrance sections and three two entrance sections.
In a sequential arrival model of
theatregoers, we assume that a theatergoer chooses a row and an entrance to the row by some arbitrary strategy. If she finds the row blocked at the entrance, then she moves on to the other entrance or another row. Then, the resulting occupancy of the theater will be equal to the sum of the number of occupied seats in each row of each
section.
These values depend only on the length of the section. This
provides us with a method of estimating the total occupancy of the theatre.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=8cm]{lipari.pdf}
\end{center}
\caption{The Greek theatre on Lipari Island.}
\label{lipari-fig}
\end{figure}
For example, for the Lipari theatre if each row section seats $n$ theatregoers then
we get the following:
\begin{corollary}
Consider a
theater having twelve rows with three aisles where each
section contains $n$ seats. For fixed $0 <p<1$, the
expected number of occupied seats
assuming $p$-courteous
theatregoers is given by the expression
\begin{equation}
\label{pach3a}
-\frac{36}{1-p} + 96\frac{H_n + \ln (1-p)}{1-p},
\end{equation}
asymptotically in $n$.
\hfill\rule{2mm}{2mm}
\end{corollary}
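As a worked example (ours), the expression above can be compared with the exact per-section formulas: the total is $12(2E_n+3F_n)$, and already for moderate $n$ the two agree closely.
\begin{verbatim}
import math
def E(n, p):   # single-entrance section, exact closed form
    return sum((1 - p**k) / (k * (1 - p)) for k in range(1, n + 1))
def F(n, p):   # two-entrance section, exact closed form
    return -(1 - p**n) / (1 - p) + 2 * E(n, p)
def lipari_total(n, p):
    return 12 * (2 * E(n, p) + 3 * F(n, p))
n, p = 16, 0.5
H = sum(1.0 / k for k in range(1, n + 1))
print(lipari_total(n, p),
      -36 / (1 - p) + 96 * (H + math.log(1 - p)) / (1 - p))
\end{verbatim}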
\section{Conclusions and Open Problems}
\label{other:sec}
There are several interesting open problems worth investigating
for a variety of models reflecting alternative and/or changing
behaviour of the
theatregoers, as well as their behaviour as a group.
Also problems arising from
the structure (or topology) of the theatre are interesting.
In this section we
propose several open problems and directions for further research.
While we considered the uniform, geometric and Zipf distributions above,
a natural extension of the theatregoer model is to arbitrary distributions
with the probability that a theatregoer selects seat numbered
$k$ is $p_k$.
For example, theatregoers may prefer seats that are neither too close to nor too far from the stage.
These situations might introduce a bias
that depends on the two dimensions of the position selected.
It would be interesting to compare the results obtained to
the actual observed occupancy distribution of a real open seating theatre
such as movie theatres in North America.
Another model results when the courtesy of a theatregoer
depends on the position selected, e.g., the further away from
an entrance the theatregoer is seated the less likely (s)he is
to get up.
Another interesting question arises when theatregoers
not only occupy seats for themselves but also
need to reserve seats for their friends in a group.
Similarly, the courtesy of the theatregoers may now
depend on the number of people in a group, e.g.,
the more people in a group the less likely for all
theatregoers to get up to let somebody else go by.
Another possibility is to
consider the courteous theatregoers problem in an
arbitrary graph $G= (V, E)$. Here, the seats
are vertices of the graph. Theatregoers
occupy vertices of the graph while new incoming theatregoers
occupy vacant vertices when available and may request
sitting theatregoers to get up so as to allow them passage
to a free seat. Further, the set of nodes of the graph
is partitioned into a set of rows or paths of seats and a set
of ``entrances'' to the graph.
Note that in this more general case there could be
alternative paths to a seat.
In general graphs, algorithmic questions arise, such as
designing an algorithm that maximizes the
percentage of occupied seats given that all theatregoers
are selfish.
\bibliographystyle{plain}
\section{Introduction}
\label{sec:intro}
Several practical scenarios involve choosing a winner from a given set of candidates based on pairwise comparisons, perhaps most prominently sports competitions.
A popular format for organizing such competitions is a \emph{knockout tournament}, also known as a \emph{single-elimination} or \emph{sudden death tournament}, wherein the players are matched according to an initial bracket and play proceeds until the tournament winner is determined.
When there are $n$ participating players, a knockout tournament requires arranging only $n-1$ matches, thereby making it an attractive choice among organizers in comparison to a round-robin tournament, for which $\Theta(n^2)$ matches are necessary.
Moreover, the fact that each player is eliminated after a single loss adds a layer of excitement and ensures that no match is meaningless in a knockout tournament.
As efficient and as exciting as knockout tournaments are, they have a clear drawback in that their winner can depend heavily on the chosen bracket.
A significant line of work in computational social choice has therefore investigated the problem of when it is possible for the organizers to fix a bracket to help their preferred player win the tournament, given the knowledge of which player would win in any pairwise matchup \citep{VuAlSh09,Vassilevskawilliams10,StantonVa11,StantonVa11-2,ChatterjeeIbTk16,RamanujanSz17,AzizGaMa18,GuptaRoSa18,GuptaRoSa18-2,GuptaRoSa19,ManurangsiSu21}.\footnote{For an overview of this line of work, we refer to the surveys by \citet{Vassilevskawilliams16} and \citet{Suksompong21}.}
This problem is known as the \emph{tournament fixing problem (TFP)}.
While TFP is NP-complete \citep{AzizGaMa18}, several structural conditions have been shown to guarantee that a certain player can win a knockout tournament under some bracket, which can be computed efficiently.
For example, \citet{Vassilevskawilliams10} proved that if a player $x$ is a \emph{king}---meaning that for any other player~$y$ who beats $x$, there exists another player $z$ such that $x$ beats $z$ and $z$ beats $y$---and $x$ beats at least $n/2$ other players, then $x$ can win a knockout tournament.
Moreover, a number of papers have shown that fixing knockout tournaments is usually easy when the pairwise match outcomes are drawn from probability distributions \citep{Vassilevskawilliams10,StantonVa11,KimSuVa17,ManurangsiSu21}.
While previous results on TFP have shed light on the manipulability of knockout tournaments, they hinge upon a pivotal assumption that the organizers can choose an arbitrary bracket for their tournament.
In reality, the choice of bracket is often much more constrained.
In particular, many real-world tournaments assign \emph{seeds} to a subset of players in order to prevent highly-rated players from having to play each other too early in the tournament.
For instance, in ATP tennis tournaments with $32$ players, eight players are designated as seeds, and the bracket must be chosen so that the top two seeds cannot meet until the final, the top four seeds cannot meet until the semifinals, and all eight seeds cannot meet until the quarterfinals \citep[p.~139]{ATPTour22}.
As such, algorithms that do not take seeds into account may fail to generate a valid bracket for the competition.
Do prior results in this line of work continue to hold in the presence of seed constraints, or are knockout tournaments more difficult to manipulate in light of these constraints?
\subsection{Our Results}
Following most real-world tournaments, we assume that the knockout tournament is balanced, and that both the number of players $n$ and the number of seeds are powers of two.
We begin in \Cref{sec:structural} by examining structural conditions from the non-seeded setting.
Besides the ``kings who beat at least $n/2$ players'' condition that we already mentioned, another basic condition that suffices for guaranteeing that a player can win a knockout tournament is the ``superking'' condition \citep{Vassilevskawilliams10}.\footnote{See the definition in \Cref{sec:prelim}.}
We show that both of these conditions are no longer sufficient in the seeded setting.
Specifically, for any number of seeds, a king who is not one of the top two seeds may not be able to win a knockout tournament even if it can beat all other players except one.
Likewise, if there are at least four seeds, then a king who is assigned one of the top two seeds and beats all but one player may still be unable to win.
On the positive side, when there are only two seeds, a seeded king who beats at least $n/2+1$ players is guaranteed to be a winner under some bracket, and the bound $n/2 + 1$ is tight.
We also prove similar results for superkings: with two seeds, a superking can always win under some bracket, but as soon as there are at least four seeds, it may no longer be able to win a knockout tournament even if it is one of the top four seeds.
We therefore introduce a stronger condition of ``ultraking'' and show that an ultraking can win a knockout tournament for any number of seeds.
In \Cref{sec:probabilistic}, we investigate the problem of fixing knockout tournaments from a probabilistic perspective.
In particular, we assume that the pairwise outcomes are determined according to the so-called \emph{generalized random model}, where player~$i$ beats player $j$ with probability $p_{ij}$, and these probabilities may vary across different pairs $i,j$ but are always at least a given parameter $p$.
Our prior work has shown that in the non-seeded setting, as long as $p = \Omega(\log n/n)$, all players are likely to be knockout winners \citep{ManurangsiSu21}.
We strengthen that result by showing that the same holds even in the seeded setting, regardless of the number of seeds.
Combined with our findings in \Cref{sec:structural}, this strengthened result shows that even though the presence of seed constraints makes it more difficult to fix tournaments in the worst case, most of the time it does not render manipulation infeasible.
Finally, in \Cref{sec:complexity}, we address the complexity of TFP in the seeded setting.
By reducing from TFP in the non-seeded setting, we show that the problem is NP-complete, both when the number of seeds is an arbitrary constant and when it is $n/2$ (i.e., the highest possible).
On the other hand, we provide an algorithm that solves TFP in $2^n \cdot n^{O(1)}$ time, thereby generalizing a result of \citet{KimVa15} from the setting without seeds.
\section{Preliminaries}
\label{sec:prelim}
As is common in this line of work, we assume that the knockout tournament is balanced and played among $n = 2^r$ players for some positive integer $r$.
In the \emph{seeded setting}, there is a parameter $2\le s\le n$, which we also assume (following the vast majority of real-world tournaments) to be a power of two.
Among the $n$ players, $s$ of them are assigned seeds $1,2,\dots,s$.
The bracket must be set up in such a way that seeds $1$ and $2$ cannot play each other before the final, seeds $1$ through $4$ cannot play each other before the semifinals, seeds $1$ through $8$ cannot play each other before the quarterfinals, and so on.
A bracket satisfying these seed constraints is said to be \emph{valid}.
Observe that $s = n$ is equivalent to $s = n/2$, so we assume henceforth that $2\le s\le n/2$.
Notice also that larger values of $s$ only add extra constraints compared to smaller values of $s$.
We refer to the setting without seeds typically studied in previous work as the \emph{non-seeded setting}.
Unless stated otherwise, our results are for the seeded setting.
The \emph{tournament fixing problem (TFP)} asks whether there exists a valid bracket that makes our desired winner $x$ win the tournament; if the answer is positive, we refer to $x$ as a \emph{knockout winner}.
We are given a \emph{tournament graph} $T=(V,E)$, which indicates the winner of any pairwise matchup.
The vertices in~$V$ correspond to the $n$ players---we will refer to vertices and players interchangeably---and there is a directed edge in $E$ from player~$x$ to player~$y$ whenever $x$ would beat $y$ if they were to play against each other.
We use $x\succ y$ to denote an edge from $x$ to $y$.
We extend this notation to sets of vertices: for $V_1,V_2\subseteq V$, we write $V_1\succ V_2$ to mean that $x\succ y$ for all $x\in V_1$ and $y\in V_2$, and $V_1\succ y$ to mean that $x\succ y$ for all $x\in V_1$.
The \emph{outdegree} of a vertex $x$ is the number of players whom $x$ beats in $T$.
A player $x$ is said to be a \emph{king} if for any other player $y$ who beats $x$, there exists another player~$z$ who loses to $x$ but beats $y$.
Equivalently, $x$ is a king if it can reach every other player via a path of length at most two in $T$.
A player~$x$ is said to be a \emph{superking} if for any other player $y$ who beats~$x$, there exist at least $\log n$ players who lose to $x$ but beat~$y$.\footnote{Throughout the paper, $\log$ refers to the logarithm with base $2$.}
It is clear from the definitions that every superking is also a king.
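As a small aid to the reader, the following Python sketch (our own illustrative code, not part of any algorithm in this paper) checks the king and superking conditions for a given player, taking the tournament graph as a boolean matrix \texttt{beats} with \texttt{beats[i][j]} true whenever $i$ beats $j$.
\begin{verbatim}
import math

def intermediary_counts(beats, x):
    """For each y that beats x, count the players z with x -> z -> y."""
    n = len(beats)
    return {y: sum(1 for z in range(n) if beats[x][z] and beats[z][y])
            for y in range(n) if y != x and beats[y][x]}

def is_king(beats, x):
    return all(c >= 1 for c in intermediary_counts(beats, x).values())

def is_superking(beats, x):
    return all(c >= math.log2(len(beats))
               for c in intermediary_counts(beats, x).values())

# Example: 0 beats 1 and 2, player 3 beats 0, and 1, 2 both beat 3.
beats = [[False, True,  True,  False],
         [False, False, True,  True ],
         [False, False, False, True ],
         [True,  False, False, False]]
print(is_king(beats, 0), is_superking(beats, 0))   # True True
\end{verbatim}
Replacing the threshold $\log n$ by other values yields checks for the stronger conditions considered later in this section.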
When constructing a bracket, we will sometimes do so iteratively one round at a time, starting with the first round.
In each round, we pair up the players who remain in that round.
During this process, it can happen that a seeded player is beaten by an unseeded player, or that a stronger-seeded player (i.e., a player with a lower seed number) is beaten by a weaker-seeded player (i.e., a player with a higher seed number).
In such cases, we will think of the (stronger) seed as being ``transferred'' from the losing player to the winning player, since the constraints on the losing player's seed apply to the winning player in subsequent rounds.
We stress that the concept of a transfer is merely for the sake of constructing a bracket and does not imply that the seeds are actually transferred between players when the actual tournament takes place.
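For very small instances, TFP can also be solved by brute force directly from these definitions. The sketch below (again our own illustrative code) enumerates all assignments of players to bracket positions, keeps the valid ones, and simulates the knockout tournament; it is practical only for $n\le 8$, but it is convenient for sanity-checking the small constructions appearing later in the paper. Here \texttt{seed\_of} maps each seeded player to its seed number.
\begin{verbatim}
from itertools import permutations

def is_valid_bracket(bracket, seed_of, s):
    """Seed constraints: for each power of two 2 <= l <= s, no two of
    seeds 1..l may lie in the same block of n/l consecutive positions."""
    n = len(bracket)
    l = 2
    while l <= s:
        size = n // l
        for start in range(0, n, size):
            if sum(1 for p in bracket[start:start + size]
                   if seed_of.get(p, n + 1) <= l) > 1:
                return False
        l *= 2
    return True

def bracket_winner(bracket, beats):
    players = list(bracket)
    while len(players) > 1:
        players = [a if beats[a][b] else b
                   for a, b in zip(players[0::2], players[1::2])]
    return players[0]

def is_knockout_winner(x, beats, seed_of, s):
    return any(bracket_winner(br, beats) == x
               for br in permutations(range(len(beats)))
               if is_valid_bracket(br, seed_of, s))
\end{verbatim}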
\section{Structural Results}
\label{sec:structural}
In this section, we examine the extent to which structural guarantees from the non-seeded setting continue to hold in the seeded setting.
\citet{Vassilevskawilliams10} proved that, without seeds, a king with outdegree at least $n/2$ is a knockout winner,\footnote{We reiterate here that we use the term ``knockout winner'' to refer to a player who can win a knockout tournament \emph{under some bracket}.} and the same holds for a superking.
As we will see, these conditions are largely insufficient for guaranteeing that a player can win in the presence of seeds.
We assume throughout this section that our desired winner~$x$ is a king.
Denote by $A$ the set of players whom $x$ beats, and $B = V\setminus(A\cup\{x\})$ the set of players who beat $x$.
A structure that will be used multiple times in this section is a ``special tournament graph'', defined as follows.
\begin{definition}
\label{def:special}
A tournament graph $T$ is said to be a \emph{special tournament graph} if there exists $y\in A$ such that $y\succ B$ and $B\succ (A\setminus\{y\})$.
The edges within each of $A$ and $B$ can be oriented arbitrarily.
An illustration is shown in \Cref{fig:special-tournament}.
\end{definition}
\begin{figure}[!ht]
\centering
\begin{tikzpicture}[scale=0.8]
\draw[fill=black] (6,8) circle [radius=0.1];
\draw[fill=black] (4.2,6.2) circle [radius=0.1];
\draw[fill=black] (6.8,6.2) circle [radius=0.1];
\draw[fill=black] (7.8,6.2) circle [radius=0.1];
\draw[fill=black] (5,6.2) circle [radius=0.02];
\draw[fill=black] (5.5,6.2) circle [radius=0.02];
\draw[fill=black] (6,6.2) circle [radius=0.02];
\draw (6,6.2) ellipse (2.5cm and 0.7cm);
\draw[fill=black] (3.7,4) circle [radius=0.1];
\draw[fill=black] (4.7,4) circle [radius=0.1];
\draw[fill=black] (7.3,4) circle [radius=0.1];
\draw[fill=black] (8.3,4) circle [radius=0.1];
\draw[fill=black] (5.5,4) circle [radius=0.02];
\draw[fill=black] (6,4) circle [radius=0.02];
\draw[fill=black] (6.5,4) circle [radius=0.02];
\draw (6,4) ellipse (3.2cm and 0.7cm);
\draw[->] (6,8) to (6,6.9);
\draw[->] (2.8,4) to[bend left=50] (5.9,8);
\draw[->] (7.8,6.2) to (7.8,4.6);
\draw[->] (6.8,4.7) to (6.8,6.1);
\draw[->] (4.2,4.6) to (4.2,6.1);
\node at (9,6.2) {$A$};
\node at (9.7,4) {$B$};
\node at (6,8.3) {$x$};
\node at (8.1,6.2) {$y$};
\end{tikzpicture}
\caption{Illustration of a special tournament graph}
\label{fig:special-tournament}
\end{figure}
When we refer to a special tournament graph, we will use the notation $x,y,A,B$ as in \Cref{fig:special-tournament}.
The results of this section are summarized in \Cref{table:summary}.
\renewcommand{\arraystretch}{1.3}
\begin{table*}[!ht]
\centering
\begin{tabular}{| c | c |}
\hline
\textbf{Conditions} & \textbf{Knockout winner guarantee} \\ \hline
Any $n\ge 4$, any $s$, not one of the top two seeds, outdegree $n-2$ & No (\Cref{prop:king-not-seeded}) \\ \hline
$s=2$, any $n$, seeded, outdegree $ \ge n/2 + 1$ & Yes (\Cref{thm:king-seeded-positive}) \\ \hline
$s=2$, any $n\ge 4$, seeded, outdegree $n/2$ & No (\Cref{thm:king-seeded-negative}) \\ \hline
Any $4\le s\le n/2$, one of the top two seeds, outdegree $n-2$ & No (\Cref{prop:king-seeded-negative}) \\ \hline
$s=2$, any $n$, superking & Yes (\Cref{thm:superking-positive}) \\ \hline
Any $4\le s\le n/2$, one of the top four seeds, superking & No (\Cref{prop:superking-negative}) \\ \hline
Any $n$ and $s$, ultraking & Yes (\Cref{thm:ultraking-positive}) \\
\hline
\end{tabular}
\caption{Summary of our results in \Cref{sec:structural} on whether each set of conditions is sufficient to guarantee that a king can win a knockout tournament under some bracket.}
\label{table:summary}
\end{table*}
\subsection{Kings of High Outdegree}
We begin by observing that in the seeded setting, a king~$x$ may not be able to win a knockout tournament even if it beats $n-2$ other players.
This draws a sharp contrast to the non-seeded setting, where beating $n/2$ other players already suffices \citep{Vassilevskawilliams10}.
Note that the bound $n-2$ is also tight: if $x$ beats $n-1$ players, then it beats every player and can trivially win.
\begin{proposition}
\label{prop:king-not-seeded}
For any $n\ge 4$ and any $s$, a king with outdegree $n-2$ who is not one of the top two seeds is not necessarily a knockout winner.
\end{proposition}
\begin{proof}
Consider a special tournament graph with $|A| = n-2$, and let $z$ be the unique player in $B$.
Assume that $y$ and $z$ are the top two seeds.
Since $y$ is the only player who can beat $z$, and these two players cannot meet until the final, $z$ makes the final in every valid bracket.
Thus, even if the king $x$ makes the final, it will be beaten by $z$ there, which means that $x$ is not a knockout winner.
\end{proof}
The construction in the proof above relies on the assumption that $x$ is not one of the top two seeds.
We show next that if there are only two seeds and $x$ is one of them, then it can win a knockout tournament as long as it has outdegree at least $n/2 + 1$.
This means that the seed constraint does not make manipulation much harder in this case.
To establish this result, we employ a similar algorithm as that of \citet{Vassilevskawilliams10} for the ``kings who beat at least $n/2$ players'' guarantee, but our analysis is more involved due to the seed constraint.
\begin{theorem}
\label{thm:king-seeded-positive}
For $s = 2$ and any $n$, a seeded king $x$ with outdegree at least $n/2 + 1$ is always a knockout winner.
Moreover, there exists a polynomial-time algorithm that computes a valid winning bracket for $x$.
\end{theorem}
\begin{proof}
The case $n=2$ holds vacuously, so assume that $n\ge 4$.
Since $s = 2$, the only constraint is that the two seeds cannot meet before the final.
We will simultaneously prove the following two statements by induction on $n$:
\begin{enumerate}[(a)]
\item A seeded king $x$ with outdegree at least $n/2 + 1$ is always a knockout winner.
\item For a special tournament graph with seeded king $x$ and $|A| = n/2$, if the other seed belongs to $B\cup\{y\}$, then $x$ is a knockout winner.
\end{enumerate}
Note that even though only statement (a) is needed for this theorem, a proof by induction using statement (a) alone does not work, and we also need statement (b) in order for the induction to go through.
We first handle the base case $n = 4$.
Statement (a) holds trivially because $x$ beats all other players.
For statement (b), denote by $z$ the unique player in $B$ and by $w$ the player in~$A$ besides $y$.
By the assumption of the statement, $w$ is not a seed.
We let $x$ play $w$ and $y$ play $z$ in the first round, so that $x$ beats $y$ in the final and wins the tournament.
We proceed to the inductive step.
Assume that the statements hold for $n/2\ge 4$; we will prove both of them for~$n$ at the same time.
Consider the following algorithm.
In the first round, we find a maximum matching $M$ from $A$ to $B$ (in the underlying directed graph between $A$ and $B$) and pair up players according to $M$.
Note that the matching $M$ is always nonempty, and let $A_M$ and $B_M$ be the players from $A$ and $B$ in the matching, respectively.
Among the remaining players, we match $x$ with an unseeded player in $A$, match players within $A$ arbitrarily, match players within $B$ arbitrarily, and finally, if necessary, match the leftover player in $B$ with the leftover player in $A$.
Denote by $A'\subseteq A$ and $B'\subseteq B$ the players in $A$ and $B$ who remain after this round, respectively.
We consider three cases.
\begin{itemize}
\item \underline{Case 1}: $|A| \ge n/2 + 2$.
We have $|B| \le n/2 - 3$, so after matching players according to $M$, there are still at least five players left in $A$.
This means that we can match $x$ with an unseeded player in $A$.
Since $M$ has size at least~$1$ and we match one player in $A$ with $x$, $|A'| \ge n/4 + 1$.
Moreover, every player $z\in B\setminus B_M$ beats every player in $A\setminus A_M$ (otherwise the matching can be enlarged), so since $x$ is a king, $z$ must lose to at least one player in $A_M$.
This implies that $x$ is still a king in the remainder tournament.
By the inductive hypothesis for statement~(a), there exists a winning bracket for $x$ in the remainder tournament, and therefore $x$ can also win the original tournament.
\item \underline{Case 2}: $|A| = n/2 + 1$.
We have $|B| = n/2 - 2$, so by a similar reasoning as in Case~1, we can match $x$ with an unseeded player in $A$.
If $M$ has size at least $2$, then $|A'| \ge n/4 + 1$ and we are done as in Case~1.
Assume therefore that $M$ has size $1$, which means that the tournament graph is a special tournament graph, with a player $y\in A$ beating all players in $B$.
In this special case, we add an extra constraint to the algorithm: if the seed besides $x$ belongs to $A\setminus \{y\}$, then we leave this seed to be paired with a player in $B$ in the final step of the algorithm, so that the seed is transferred to $B$ for the next round (since $|A|$ is odd, the final step of the algorithm takes place).
This ensures that in the remainder tournament, which also has a special tournament graph, the seed other than $x$ belongs to $B'\cup\{y\}$.
Since $|A'| = n/4$, we are done by the inductive hypothesis for statement~(b).
\item \underline{Case 3}: $|A| = n/2$, the tournament graph is a special tournament graph, and the seed besides $x$ belongs to $B\cup\{y\}$.
In this case, the maximum matching~$M$ has size $1$.
After applying the algorithm for the first round, the remainder tournament still has a special tournament graph, $|A'| = n/4$, and the seed other than $x$ belongs to $B'\cup\{y\}$.
Hence, we are done by the inductive hypothesis for statement~(b).
\end{itemize}
The three cases together complete the induction.
Since constructing each round of the bracket takes polynomial time and the number of rounds is logarithmic, the overall algorithm runs in polynomial time.
\end{proof}
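The key computational step in each round of the above procedure is a maximum matching from $A$ to $B$. For completeness, here is a minimal self-contained sketch of this step (Kuhn's augmenting-path algorithm; the code and names are ours and serve only as an illustration).
\begin{verbatim}
def max_matching(A, B, beats):
    """Maximum matching from A to B, where a in A may be matched with
    b in B only if a beats b.  Returns a dict a -> b."""
    match_of_b = {}                       # b -> a currently matched with b

    def augment(a, visited):
        for b in B:
            if beats[a][b] and b not in visited:
                visited.add(b)
                if b not in match_of_b or augment(match_of_b[b], visited):
                    match_of_b[b] = a
                    return True
        return False

    for a in A:
        augment(a, set())
    return {a: b for b, a in match_of_b.items()}

# toy example: 0 beats 2, and 1 beats both 2 and 3
beats = [[False] * 4 for _ in range(4)]
beats[0][2] = beats[1][2] = beats[1][3] = True
print(max_matching([0, 1], [2, 3], beats))        # {0: 2, 1: 3}
\end{verbatim}
The sets $A_M$ and $B_M$ are then read off from the returned matching, and the remaining first-round pairs are formed exactly as described in the proof.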
Interestingly, our next result establishes the tightness of the bound $n/2 + 1$ in \Cref{thm:king-seeded-positive}---this provides a separation from the $n/2$ bound in the non-seeded setting.
To this end, we will need the following lemma.
The case $|A| = n/2 - 1$ of the lemma was proven as Claim~2 in the work of \citet{Vassilevskawilliams10}, and the same proof applies to the more general condition $|A| \le n/2-1$ that we state below.
\begin{lemma}[\citep{Vassilevskawilliams10}]
\label{lem:special-negative}
In the non-seeded setting, for a special tournament graph with king $x$ and $|A| \le n/2 - 1$, $x$ is not a knockout winner.
\end{lemma}
\begin{theorem}
\label{thm:king-seeded-negative}
For $s = 2$ and any $n\ge 4$, a seeded king with outdegree $n/2$ is not necessarily a knockout winner.
\end{theorem}
\begin{proof}
Consider a special tournament graph with $|A| = n/2$, and assume that the other seed $w\ne x$ belongs to $A\setminus \{y\}$.
We prove by induction on $n$ that $x$ is not a knockout winner for this tournament graph.
For the base case $n = 4$, denote by $z$ the unique player in $B$.
Since $x$ cannot play $w$ in the first round, it must play $y$ in order to progress to the final.
However, since $z$ beats $w$, $x$ will play $z$ in the final and lose.
We proceed to the inductive step.
Assume that the statement holds for $n/2\ge 4$; we will prove it for $n$.
In order for $x$ to have a chance of winning the tournament, we must match it with a player from $A$.
This player must be different from~$y$; otherwise no player outside $B$ can eliminate players in $B$.
If we match $y$ with a player in $B$ and pair up the remaining $n/2 - 2$ players in $B$, then $n/4 - 1$ players from~$B$ proceed to the next round, the tournament graph is still a special tournament graph, and the seeds are still $x$ and a player in $A\setminus\{y\}$, so we may apply the inductive hypothesis.
Otherwise, at least $n/4$ players from $B$ proceed to the next round.
If $y$ does not proceed, no player outside $B$ can eliminate players in $B$ and $x$ cannot win, so we may assume that $y$ proceeds.
In this case, the tournament graph is again a special tournament graph.
Since $|B| \ge n/4$, we have $|A| \le n/4 - 1$, so \Cref{lem:special-negative} implies that $x$ is not a knockout winner.
This completes the induction and therefore the proof.
\end{proof}
When $s \ge 4$, we know from \Cref{prop:king-not-seeded} that if the king~$x$ is not one of the top two seeds, then it may not be able to win even if its outdegree is $n-2$.
A similar construction shows that the same also holds when $x$ \emph{is} one of the top two seeds, which means that manipulation with $s \ge 4$ is more difficult than with $s = 2$.
\begin{proposition}
\label{prop:king-seeded-negative}
For any $4\le s\le n/2$, a king $x$ with outdegree $n-2$ is not necessarily a knockout winner even if $x$ is one of the top two seeds.
\end{proposition}
\begin{proof}
We use the construction in the proof of \Cref{prop:king-not-seeded}, but with the extra specification that $y$ loses to all other players in $A$.
The top two seeds are $x$ and $z$, and $y$ is the third seed.
Since $n\ge 8$, $y$ cannot be matched with $z$ in the first round, so it gets eliminated as it cannot beat any other player.
But then no player can eliminate $z$, which means that $x$ cannot win the tournament.
\end{proof}
\subsection{Superkings}
Next, we consider the superking condition---recall that a superking is always a knockout winner in the non-seeded setting \citep{Vassilevskawilliams10}.
We prove that if there are only two seeds, then a superking can win the tournament regardless of whether it is seeded.
This stands in contrast to \Cref{prop:king-not-seeded}, which shows that a king may not be able to win for any $s$ even if it has outdegree $n-2$.
\begin{theorem}
\label{thm:superking-positive}
For $s = 2$ and any $n$, a superking $x$ is always a knockout winner.
Moreover, there exists a polynomial-time algorithm that computes a valid winning bracket for $x$.
\end{theorem}
\begin{proof}
The case $n = 2$ is trivial, so assume that $n\ge 4$.
First, recall the superking algorithm of \citet{Vassilevskawilliams10} in the non-seeded setting: Match the superking $x$ with an arbitrary player $w\in A$, find a maximum matching $M$ from $A\setminus\{w\}$ to $B$ and pair up players according to $M$, and match the remaining players arbitrarily.
It can be shown that these first-round matchups ensure that $x$ remains a superking in the next round, so we can proceed recursively.
To demonstrate that this algorithm can be applied in the seeded setting, it suffices to show that in each round before the final, we can avoid pairing the two seeds.\footnote{As mentioned in \Cref{sec:prelim}, seeds can be ``transferred'' as the tournament proceeds; this does not affect our argument.}
If $x$ is one of the seeds, then since the superking condition implies that $|A| \ge 2$ in every round before the final, we can match $x$ with an unseeded player $w\in A$ in the first step of the algorithm.
Assume now that $x$ is not one of the seeds.
If at least one seed is in $A$, we can let $x$ play against that seed.
Suppose therefore that both seeds are in $B$.
If at least one of them is used in $M$, we are done.
Otherwise, we try to avoid pairing the two seeds when matching the remaining players.
The only problematic case is when the maximum matching has already exhausted $A$ and left exactly the two seeds in $B$.
In this case, since each of the two seeds loses to at least $\log n \ge 2$ players in $A$ by the superking condition, we can modify one pair in $M$ to involve one of the seeds, and we are again done.
It is clear that this algorithm can be implemented in polynomial time.
\end{proof}
On the other hand, when there are at least four seeds, the superking condition is no longer sufficient.
\begin{proposition}
\label{prop:superking-negative}
For any $4\le s\le n/2$, a superking is not necessarily a knockout winner even if it is one of the top four seeds.
\end{proposition}
\begin{proof}
Consider a tournament graph in which the superking~$x$ beats exactly $\log n$ players in $A$, all of whom in turn beat the remaining $n-1-\log n$ players in $B$.
Notice that $|A| = \log n \ge 3$, and assume that $x$ and three players in $A$ are the top four seeds.
In the first $\log n - 2$ rounds of the tournament (i.e., before the semifinals), $x$ cannot play these three seeds, so it can only play the other $\log n - 3$ players whom it beats.
As a result, $x$ cannot progress to the semifinals.
\end{proof}
In light of \Cref{prop:superking-negative}, we introduce the following strengthening of a superking, where we replace the parameter $\log n$ in its definition by $n/2$.
\begin{definition}
\label{def:ultraking}
A player $x$ is said to be an \emph{ultraking} if for any other player $y$ who beats $x$, there exist at least $n/2$ players who lose to $x$ but beat $y$.
\end{definition}
Our next theorem shows that the ultraking condition is sufficient to guarantee that a player can win a knockout tournament in the seeded setting, regardless of the number of seeds.
\begin{theorem}
\label{thm:ultraking-positive}
For any $n$ and $s$, an ultraking $x$ is always a knockout winner.
Moreover, there exists a polynomial-time algorithm that computes a valid winning bracket for $x$.
\end{theorem}
\begin{proof}
The case $n=2$ is trivial, so assume that $n\ge 4$.
Since larger values of $s$ only add extra constraints, it suffices to consider $s = n/2$, i.e., half of the players are seeded and the other half are unseeded.
Denote by $k$ the number of seeded players in $B$; there are at most $n/2 - k$ seeded players in $A$.
Since each player in~$B$ loses to at least $n/2$ players in $A$, it loses to at least $k$ unseeded players in $A$.
This means that we can greedily match each seeded player in $B$ with an unseeded player in $A$ to whom it loses.
Analogously, we can also match each unseeded player in $B$ with a seeded player in~$A$ to whom it loses.
The remaining matches (between players in $A$ and the ultraking $x$) can be chosen arbitrarily so that each seeded player is matched to an unseeded player---this is possible since there are an equal number of seeded and unseeded players.
This ensures that after the first round, only $x$ and players in $A$ remain.
Hence, no matter which (valid) bracket we choose from the second round onward, $x$ is the tournament winner.
It is clear that this algorithm can be implemented in polynomial time.
\end{proof}
We remark that \Cref{thm:ultraking-positive} would no longer hold if the parameter $n/2$ in \Cref{def:ultraking} were reduced to $n/2 - 1$.
Indeed, if $s = n/2$ and the ultraking $x$ is seeded and beats the other $n/2 - 1$ seeded players in $A$, all of whom beat the $n/2$ unseeded players in $B$, then $x$ cannot even win its first-round match in any valid bracket.
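The first-round construction in the proof of \Cref{thm:ultraking-positive} is simple enough to write down directly. The following sketch (our own code; the greedy choices are guaranteed to succeed by the counting argument in the proof) outputs the first-round pairs for $s = n/2$, each consisting of one seeded and one unseeded player and each won by $x$ or by a player of $A$.
\begin{verbatim}
def ultraking_first_round(x, beats, seeded):
    """First-round pairs for an ultraking x when s = n/2; 'seeded' is the
    set of seeded players.  Every player of B is paired with (and loses to)
    a player of A of the opposite seeding status, so only x and players of
    A survive the first round."""
    n = len(beats)
    A = [p for p in range(n) if p != x and beats[x][p]]
    B = [p for p in range(n) if p != x and beats[p][x]]
    free_A, pairs = set(A), []
    for b in B:
        want_seeded = b not in seeded     # opponent's seeding status
        a = next(a for a in sorted(free_A)
                 if beats[a][b] and (a in seeded) == want_seeded)
        free_A.remove(a)
        pairs.append((a, b))
    rest = [x] + sorted(free_A)           # pair the rest seeded-vs-unseeded
    pairs.extend(zip([p for p in rest if p in seeded],
                     [p for p in rest if p not in seeded]))
    return pairs

# Example with n = 8: x = 0 beats A = {1,2,3,4}, everyone in A beats
# B = {5,6,7}; outcomes inside A and inside B are irrelevant here.
n = 8
beats = [[False] * n for _ in range(n)]
for a in (1, 2, 3, 4):
    beats[0][a] = True
    for b in (5, 6, 7):
        beats[a][b] = True
for b in (5, 6, 7):
    beats[b][0] = True
print(ultraking_first_round(0, beats, seeded={0, 1, 5, 6}))
\end{verbatim}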
\section{Probabilistic Results}
\label{sec:probabilistic}
In this section, we investigate the problem of fixing tournaments from a probabilistic perspective.
We work with the so-called \emph{generalized random model} \citep{KimSuVa17,SaileSu20}.
In this model, we are given a parameter $p \in [0, 1/2]$, and every pair of distinct $i,j\in\{1,\dots,n\}$ is assigned a real number $p_{ij} \in [p, 1 - p]$, where $p_{ij} = 1 - p_{ji}$ for all $i\ne j$.
Assuming that the players are $x_1,\dots,x_n$, the tournament graph~$T$ is then generated as follows: for each pair $i \ne j$, player~$x_i$ beats player~$x_j$ with probability $p_{ij}$.
Our main result of this section is stated below.
Following standard terminology, ``with high probability'' means that the probability converges to $1$ as $n\rightarrow\infty$.
\begin{theorem} \label{thm:random-main}
Let $s = n/2$ and $p \geq 160\ln n/n$.
With high probability, all players are knockout winners, and a winning bracket for each player can be computed in polynomial time.
\end{theorem}
As mentioned in \Cref{sec:prelim}, $s = n/2$ gives rise to the most restrictive set of constraints, so \Cref{thm:random-main} implies the same result for all other values of $s$.
The theorem also strengthens our previous result in the non-seeded setting \citep{ManurangsiSu21}.\footnote{Modulo the constant factor, which is twice as large as our previous one.}
Moreover, it entails similar results for random models that are special cases of the generalized random model, including the ``Condorcet random model'' and the ``uniform random model''.\footnote{See, e.g., our prior work \citep{ManurangsiSu21} for the definitions. We remark here that the parameter $p$ is present in the Condorcet random model but not in the uniform random model.}
The bound $p = \Omega(\log n/n)$ is tight even in the non-seeded setting: if $p = o(\log n/n)$ and $p_{ij} = p$ for all $i > j$, then the weakest player is expected to beat fewer than $\log n$ players, which is insufficient because a knockout tournament consists of $\log n$ rounds.
Before we proceed to the proof of \Cref{thm:random-main}, we provide here a brief sketch.
For ease of exposition, we assume that our desired winner $x$ is unseeded.
To begin with, we use our prior result from the non-seeded setting \citep{ManurangsiSu21} to find a winning bracket~$B$ for $x$ among the $n/2$ unseeded players.
We then extend this bracket into a full bracket of size $n$ so that after the first round is played, the bracket becomes $B$.
To achieve this, we assign seeds $1$ and $2$ to the two halves in the first step, and then seeds $2^{k-1} + 1, \dots, 2^{k}$ in each step $k \geq 2$.
For each $2\le k\le \log n - 1$, we have the freedom of assigning the $2^{k - 1}$ seeds to any ``section'' of the bracket containing $2^{\log n-k}$ players with the property that the section has not been assigned any seeded player thus far (there are $2^k - 2^{k-1} = 2^{k - 1}$ such sections).
Since we want the bracket to become $B$ after the first round, we must also ensure that each seeded player loses in the first round.
To this end, we create a bipartite graph between the seeds $2^{k-1} + 1, \dots, 2^{k}$ and the unassigned sections, where an edge is present if the seed loses against at least one unseeded player in that section.
We then find a perfect matching in this graph and assign these seeds accordingly.
The main technical aspect of the proof lies in showing that such a perfect matching is likely to exist.
For this, we need to extend the classic result of~\cite{ErdosRe64} on the existence of perfect matchings in random graphs to a larger parameter regime.
\subsection{Preliminaries}
\subsubsection{Existence of Perfect Matching in Random Graphs}
Recall that the \emph{Erd{\H{o}}s-R{\'{e}}nyi random bipartite graph distribution} $\mathcal{G}(m, m, q)$ (where $m$ is a positive integer and $q$ a real number in $[0, 1]$) is the distribution of balanced bipartite graphs with $m$ vertices on each side such that for each pair of left and right vertices, there is an edge between them with probability~$q$ independently of other pairs.
Our proof requires the high-probability existence of a perfect matching in $\mathcal{G}(m, m, q)$, stated below. This statement is a slight generalization of the original result by~\citet{ErdosRe64}; our version provides a sharper bound in the regime where $q = 1 - \delta$ is close to $1$ compared to Erd\H{o}s and R\'{e}nyi's version.
This sharpened bound will be needed for our proof to go through.
\begin{proposition}
\label{prop:er-matching}
Let $G$ be a bipartite graph sampled from the Erd{\H{o}}s-R{\'{e}}nyi random bipartite graph distribution $\mathcal{G}(m, m, 1 - \delta)$. If $\delta^{m/8} \leq 1/m$, then $G$ contains a perfect matching with probability at least $1 - \delta^{m/4}$.
\end{proposition}
Following previous work (e.g.,~\citep{SudakovVu08,ManurangsiSu20}), we prove \Cref{prop:er-matching} by bounding the probability that the graph fails to satisfy the condition of Hall's Marriage Theorem, which we recall next.
Denote by $N_G(S)$ the set of vertices adjacent to at least one vertex in $S$.
\begin{proposition}[Hall's Marriage Theorem] \label{prop:hall}
Let $G = (A, B, E)$ be any bipartite graph such that $|A| = |B|$. If $|N_G(S)| \geq |S|$ for every subset $S \subseteq A$, then $G$ has a perfect matching.
\end{proposition}
\begin{proof}[Proof of \Cref{prop:er-matching}]
Let $G = (A, B, E)$ be a graph drawn from $\mathcal{G}(m, m, q)$.
From \Cref{prop:hall}, the probability that it does \emph{not} contain a perfect matching can be written as
\begin{align*}
&\Pr[\exists S \subseteq A, |N_G(S)| < |S|] \\
&= \Pr[\exists S \subseteq A, T \subseteq B, |T| = |S| - 1, N_G(S) \subseteq T] \\
&\leq \sum_{S \subseteq A} \sum_{T \subseteq B \atop |T| = |S| - 1} \Pr[N_G(S) \subseteq T],
\end{align*}
where the inequality follows from the union bound.
Next, observe that $N_G(S) \subseteq T$ if and only if there is no edge from $S$ to any of the vertices in $B \setminus T$.
The latter happens with probability $\delta^{|S| \cdot |B \setminus T|} = \delta^{|S|(m + 1 - |S|)}$.
Plugging this back into the bound above, we have that the probability that a perfect matching does not exist is at most
\begin{align*}
&\sum_{S \subseteq A} \sum_{T \subseteq B \atop |T| = |S| - 1} \delta^{|S|(m + 1 - |S|)} \\
&= \sum_{i=1}^m \sum_{S \subseteq A \atop |S| = i} \sum_{T \subseteq B \atop |T| = i - 1} \delta^{i(m + 1 - i)} \\
&= \sum_{i=1}^m \binom{m}{i} \binom{m}{i - 1} \cdot \delta^{i(m + 1 - i)} \\
&\leq \sum_{i=1}^m m^{\min\{i, m - i\}} \cdot m^{\min\{i - 1, m + 1 - i\}} \cdot \delta^{i(m + 1 - i)} \\
&\leq \sum_{i=1}^m m^{2\cdot\min\{i, m + 1 - i\}} \cdot \delta^{i(m + 1 - i)} \\
&= \sum_{i=1}^m m^{2\cdot \min\{i, m + 1 - i\}}\cdot \delta^{\min\{i, m + 1 - i\} \cdot \max\{i, m + 1 - i\}} \\
&= \sum_{i=1}^m \left(m^2 \cdot \delta^{\max\{i, m + 1 - i\}}\right)^{\min\{i, m + 1 - i\}}.
\end{align*}
Now, from our assumption on $\delta$, we have $m^2 \leq (1/\delta)^{m/4} \leq (1/\delta)^{\max\{i, m + 1 - i\}/2}$. Therefore, the above term is at most
\begin{align*}
\sum_{i=1}^m \delta^{\max\{i, m + 1 - i\} \cdot \min\{i, m + 1 - i\} / 2}
&= \sum_{i=1}^m \delta^{i(m + 1 - i)/2} \\
&\leq m \cdot \delta^{m/2} \\
&\leq \delta^{m/4},
\end{align*}
where the last inequality once again follows from our assumption on $\delta$.
\end{proof}
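The bound in \Cref{prop:er-matching} is easy to probe empirically. The following Monte Carlo sketch (ours, for illustration only) samples graphs from $\mathcal{G}(m, m, 1-\delta)$ and reports the fraction with no perfect matching; for parameters satisfying the hypothesis, the observed fraction should be consistent with the bound $\delta^{m/4}$.
\begin{verbatim}
import random

def has_perfect_matching(adj, m):
    """Kuhn's algorithm; adj[u] lists the right-side neighbours of u."""
    match = {}
    def augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if v not in match or augment(match[v], seen):
                    match[v] = u
                    return True
        return False
    return all(augment(u, set()) for u in range(m))

def matching_failure_rate(m, delta, trials=2000, seed=0):
    """Empirical probability that G(m, m, 1 - delta) has no perfect matching."""
    rng = random.Random(seed)
    fails = sum(1 for _ in range(trials)
                if not has_perfect_matching(
                    [[v for v in range(m) if rng.random() > delta]
                     for _ in range(m)], m))
    return fails / trials

# m = 16 and delta = 0.25 satisfy delta**(m/8) <= 1/m, and the proposition
# bounds the failure probability by delta**(m/4) = 0.25**4 < 0.004
print(matching_failure_rate(16, 0.25))
\end{verbatim}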
In our proof of \Cref{thm:random-main}, we will need to apply \Cref{prop:er-matching} repeatedly to a specific setting of parameters.
As such, it will be more convenient to state the following instantiation of the value of $\delta$ in \Cref{prop:er-matching}.
\begin{corollary}
\label{cor:er-matching-customized}
Let $G$ be a bipartite graph sampled from the Erd{\H{o}}s-R{\'{e}}nyi random bipartite graph distribution $\mathcal{G}(m, m, 1 - \delta)$. Let $n \geq m$ be a positive integer and let $p \in [0, 1]$ be such that $p \geq 128\ln n/n$. If $\delta \leq (1 - p)^{n/(16m)}$, then $G$ contains a perfect matching with probability at least $1 - 1/n^2$.
\end{corollary}
\begin{proof}
First, note that
\begin{align*}
\delta^{m/8} \leq (1 - p)^{n/128} \leq e^{-pn/128} \leq 1/n \leq 1/m,
\end{align*}
where for the second inequality we use the well-known estimate $1+x\le e^x$, which holds for all real numbers $x$.
Therefore, we may apply \Cref{prop:er-matching}, which implies that $G$ contains a perfect matching with probability at least
\begin{align*}
1 - \delta^{m/4} \geq 1 - (1 - p)^{n/64} \geq 1 - e^{-pn/64} \geq 1 - 1/n^2,
\end{align*}
as desired.
\end{proof}
\subsubsection{Fixing Random Tournaments in the Non-Seeded Setting}
As mentioned in the proof sketch, we will also use our prior result from the non-seeded setting, which is stated below.
\begin{lemma}[\citep{ManurangsiSu21}] \label{thm:non-seed-ms}
In the non-seeded setting, if $p \geq 80\ln n / n$, then for each player, with probability $1 - o(1/n)$, the player is a knockout winner and a winning bracket for the player can be found in polynomial time.\footnote{The bound $1-o(1/n)$ was derived in the proof of this result; see Theorem~4.3 in the extended version of our previous paper \citep{ManurangsiSu21}.}
\end{lemma}
To prove \Cref{thm:random-main} when the desired winner is unseeded, it suffices to apply \Cref{thm:non-seed-ms} among the unseeded players as the first step. However, for the case where the desired winner is seeded, we need a slightly modified version of \Cref{thm:non-seed-ms} that considers a tournament with $n + 1$ players instead and picks out one player $y$ who loses to the desired winner $x$ (to be played with $x$ in the first round in the proof of \Cref{thm:random-main}). This modified lemma is stated below.
Its proof is nearly identical to that of~\Cref{thm:non-seed-ms}, so we only provide a proof sketch here.
\begin{lemma} \label{lem:non-seed-mod}
Suppose that we consider a tournament graph on $n + 1$ players generated under the generalized random model with $p \geq 80\ln n / n$. Let $x$ be any player. Then, with probability $1 - o(1/n)$, there exists a player $y$ with $x\succ y$ such that $x$ is a knockout winner in the $n$-player (non-seeded) tournament that results from removing $y$. Furthermore, $y$ and a winning bracket for $x$ can be found in polynomial time.
\end{lemma}
\begin{proof}[Proof Sketch]
The proof follows the same outline as our previous one \citep{ManurangsiSu21}. Specifically, let $Z$ denote the set of players with expected outdegree at least $0.45n + 2$ before the tournament graph is generated (as opposed to $0.45n + 1$ in our previous proof). Our previous argument implies that with probability $1 - o(1/n)$, $x$ beats at least $\log n + 1$ players in $Z$. Let $z_0, \dots, z_{\log n}$ be such players. Set $y = z_0$ and consider the rest of the tournament. We may then follow our previous proof, which shows that with probability $1 - o(1/n)$, there exists a bracket such that $x$ gets to play $z_i$ in round $i$, therefore ensuring that $x$ wins the knockout tournament.
\end{proof}
\subsection{Main Algorithm: Proof of \Cref{thm:random-main}}
We now have all the ingredients to prove \Cref{thm:random-main}.
For $0\le k\le \log n$, we define a \emph{level-$k$ section} of a bracket of size~$n$ to consist of the positions $(w - 1) \cdot 2^{\log n-k} + 1, \dots, w \cdot 2^{\log n - k}$ in the bracket for some $w \in \{1, \dots, 2^k\}$. For example, a level-$0$ section refers to the entire bracket, a level-$1$ section refers to half of the bracket, and so on.
Recall that there are $n/2$ seeds.
\begin{proof}[Proof of \Cref{thm:random-main}]
Consider any player $x$. We will show that with probability $1 - o(1/n)$, we can efficiently find a winning bracket for $x$. Taking a union bound over all players~$x$ establishes the theorem.
We consider two cases based on whether $x$ is seeded.
\paragraph{Case I: $x$ is unseeded.}
Our algorithm works as follows:
\begin{itemize}
\item First, use the algorithm from \Cref{thm:non-seed-ms} to find a winning bracket $B$ for $x$ in the tournament of size $n/2$ consisting of all unseeded players. If such a bracket cannot be found, fail.
\item We will extend the bracket $B$ to the entire tournament of size $n$ as follows. Start with a bracket $B'$ of size $n$ where position $i$ is empty if $i$ is odd and is assigned the player in position $i/2$ of $B$ if $i$ is even.
\item For $k = 1, \dots, \log n - 1$:
\begin{itemize}
\item If $k = 1$, let $T_k$ consist of seeds $1$ and $2$. Otherwise, let $T_k$ consist of seeds $2^{k - 1} + 1, \dots, 2^k$.
\item Let $B^k_1, \dots, B^k_{|T_k|}$ denote the level-$k$ sections of $B'$ such that no player with seed number smaller than $2^k$ has been assigned.
\item Let $G_k = (T_k, \{B^k_1, \dots, B^k_{|T_k|}\}, E)$ be the bipartite graph such that there is an edge between $u \in T_k$ and $B^k_i$ if and only if $u$ loses to an unseeded player in $B^k_i$.
\item Find a perfect matching in $G_k$. If no perfect matching exists, fail. Otherwise, if $u \in T_k$ is matched to $B^k_i$ in the perfect matching, then let $u$ play an unseeded player it loses to in $B^k_i$ in the first round.
\end{itemize}
\end{itemize}
Clearly, the algorithm runs in polynomial time and, if it succeeds, $x$ is the winner. Therefore, it suffices to show that the algorithm fails with probability $o(1/n)$.
Consider $G_k$.
Notice that there are $n/2^{k + 1}$ unseeded players in each $B^k_i$.
Thus, for any $u \in T_k$, the probability that it loses to at least one player in $B^k_i$ is at least $1 - (1 - p)^{n/2^{k + 1}}$.
Therefore, we may apply\footnote{Note that while the probability that each edge in $G_k$ exists is at least (and may not be exactly) $1 - \delta$, we can still apply \Cref{cor:er-matching-customized} because increasing the probability of an edge only increases the probability that a perfect matching exists.} \Cref{cor:er-matching-customized} with $\delta = (1-p)^{n/2^{k+1}}$ and $m = |T_k| \geq 2^{k-1}$, which ensures that a matching exists with probability at least $1 - 1/n^2$.
Taking a union bound over all $k$, the probability that all of $G_1, \dots, G_{\log n-1}$ admit a perfect matching is at least $1 - \log n / n^2 = 1 - o(1/n)$.
Moreover, since $p \ge 160\ln n/n \ge 80\ln (n/2)/(n/2)$, \Cref{thm:non-seed-ms} ensures that the first step of the algorithm fails with probability at most $o(1/n)$. Therefore, the entire algorithm succeeds with probability at least $1 - o(1/n)$, as desired.
\paragraph{Case II: $x$ is seeded.} This case is very similar to Case~I except that we need to use \Cref{lem:non-seed-mod} in order to ensure that we can pair $x$ with $y$ in the first round, and we need to be slightly more careful in the subsequent steps as $x$ and $y$ are already matched. More formally, our algorithm in this case works as follows:
\begin{itemize}
\item First, use the algorithm from \Cref{lem:non-seed-mod} to find an unseeded player~$y$ with $x\succ y$ and a winning bracket $B$ for $x$ in the tournament of size $n/2 + 1$ consisting of all unseeded players and $x$.
If such a bracket cannot be found, fail.
\item We will extend the bracket $B$ to the entire tournament of size $n$ as follows.
Start with a bracket $B'$ of size $n$ where position $i$ is empty if $i$ is odd and is assigned the player in position $i/2$ of $B$ if $i$ is even.
Furthermore, let $x$ play $y$ in the first round of $B'$.
\item For $k = 1, \dots, \log n - 1$:
\begin{itemize}
\item If $k = 1$, let $T_k$ consist of seeds $1$ and $2$. Otherwise, let $T_k$ consist of seeds $2^{k-1} + 1, \dots, 2^k$. Then, let $T'_k = T_k \setminus \{x\}$.
\item Let $B^k_1, \dots, B^k_{|T'_k|}$ denote the level-$k$ sections of $B'$ such that no player with seed number smaller than $2^k$ has been assigned.
For each $i = 1,\dots, |T'_k|$, if $x$ is not in $B^k_i$, let $U^k_i$ denote the set of unseeded players in the section $B^k_i$.
Otherwise, if $x$ is in $B^k_i$, let $U^k_i$ denote the set of unseeded players in the section $B^k_i$ that are \emph{not} in the same level-$(k + 1)$ section as~$x$.
\item Let $G_k = (T_k', \{B^k_1, \dots, B^k_{|T_k'|}\}, E)$ be the bipartite graph such that there is an edge between $u \in T'_k$ and $B^k_i$ if and only if $u$ loses to a player in $U^k_i$.
\item Find a perfect matching in $G_k$. If no perfect matching exists, fail. Otherwise, if $u \in T_k'$ is matched to $B^k_i$ in the perfect matching, then let $u$ play a player it loses to in $U^k_i$ in the first round.
\end{itemize}
\end{itemize}
Similarly to Case~I, the algorithm runs in polynomial time and, if it succeeds, $x$ is the winner.
To show that the algorithm fails with probability $o(1/n)$, consider $G_k$. Notice that $|U^k_i| \geq n/2^{k + 2}$. Thus, for any $u \in T'_k$, the probability that it loses to at least one player in $U^k_i$ is at least $1 - (1 - p)^{n/2^{k + 2}}$. Applying \Cref{cor:er-matching-customized} with $\delta = (1-p)^{n/2^{k+2}}$ and $m = |T'_k| \geq 2^{k-2}$ ensures that the matching exists with probability at least $1 - 1/n^2$.
Taking a union bound over all $k$, the probability that all of $G_1, \dots, G_{\log n-1}$ admit a perfect matching is at least $1 - \log n / n^2 = 1 - o(1/n)$.
Moreover, since $p \ge 160\ln n/n \ge 80\ln (n/2)/(n/2)$, \Cref{lem:non-seed-mod} ensures that the first step of the algorithm fails with probability at most $o(1/n)$.
Hence, the entire algorithm succeeds with probability at least $1 - o(1/n)$, as desired.
\end{proof}
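To make the extension step concrete, the sketch below (our own illustrative code, corresponding to Case~I; all names are ours) takes a winning bracket of the $n/2$ unseeded players, a map from seed numbers to players, and the tournament graph, and places the seeds level by level via perfect matchings as in the algorithm above. Whether the required matchings exist is exactly what the probabilistic analysis guarantees; the sketch simply reports failure otherwise. Case~II differs only in that $x$ is paired with $y$ beforehand and the sets $U^k_i$ exclude the level-$(k+1)$ section containing~$x$.
\begin{verbatim}
def perfect_matching(left, adj):
    """Kuhn's algorithm; adj[u] is the set of right vertices joined to u.
    Returns a dict saturating 'left', or None if none exists."""
    match = {}                                  # right vertex -> left vertex
    def augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if v not in match or augment(match[v], seen):
                    match[v] = u
                    return True
        return False
    for u in left:
        if not augment(u, set()):
            return None
    return {u: v for v, u in match.items()}

def extend_bracket(unseeded_bracket, seed_player, beats):
    """Extend a winning bracket of the n/2 unseeded players to a full valid
    bracket of size n in which every seeded player loses its first-round
    match (Case I).  seed_player[u] is the player holding seed number u."""
    n = 2 * len(unseeded_bracket)
    full = [None] * n
    for t, q in enumerate(unseeded_bracket):    # paper: position 2t+2 of B'
        full[2 * t + 1] = q
    seeded_pos = set()
    k = 1
    while 2 ** k < n:
        seeds = [1, 2] if k == 1 else list(range(2 ** (k - 1) + 1, 2 ** k + 1))
        size = n // 2 ** k
        sections = [range(w * size, (w + 1) * size) for w in range(2 ** k)]
        eligible = [sec for sec in sections
                    if not any(pos in seeded_pos for pos in sec)]
        # seed u may go to a section containing an unseeded player beating it
        adj = {u: {i for i, sec in enumerate(eligible)
                   if any(full[pos] is not None
                          and beats[full[pos]][seed_player[u]] for pos in sec)}
               for u in seeds}
        matching = perfect_matching(seeds, adj)
        if matching is None:
            raise ValueError("no perfect matching at level %d" % k)
        for u, i in matching.items():
            opp = next(pos for pos in eligible[i] if full[pos] is not None
                       and beats[full[pos]][seed_player[u]])
            full[opp - 1] = seed_player[u]      # empty slot next to opponent
            seeded_pos.add(opp - 1)
        k += 1
    return full
\end{verbatim}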
\section{Computational Complexity}
\label{sec:complexity}
In this section, we turn our attention to the complexity of computing a valid winning bracket for our desired winner.
Recall that this problem is NP-complete in the non-seeded setting \citep{AzizGaMa18}.
We show that the intractability continues to hold in the seeded setting.
\begin{theorem}
\label{thm:hardness-constant}
For any constant number of seeds $s$, TFP is NP-complete.
\end{theorem}
\begin{proof}
The problem belongs to NP since a valid winning bracket for our desired winner $x$ can be verified in polynomial time, so we focus on the hardness.
We reduce from TFP in the non-seeded setting.
For ease of presentation, we will first show the reduction for $s = 4$ and later explain how to extend it to any constant value of $s$.
Let $I$ be an instance of non-seeded TFP with a set $V$ of $n$ players, one of whom is our desired winner $x$.
We create sets $V_1$ and $V_2$ of players with the following properties:
\begin{itemize}
\item $|V_1| = n$, and $x_1\in V_1$ beats all other players in $V_1$.
\item $|V_2| = 2n$, and $x_2\in V_2$ beats all other players in $V_2$.
\item All players in $V_1$ beat all players in $V$, with the exception that $x$ beats $x_1$.
\item All players in $V_2$ beat all players in $V$, with the exception that $x$ beats $x_2$.
\item All remaining outcomes can be chosen arbitrarily.
\end{itemize}
The new instance $I'$ consists of $n + n + 2n = 4n$ players.
The top two seeds are $x$ and $x_2$, and the other two seeds are $x_1$ and an arbitrary player $y\in V_2$.
This completes the description of our reduction; an illustration is shown in \Cref{fig:reduction-constant}.
Note that the reduction takes polynomial time.
\begin{figure}[!ht]
\centering
\begin{tikzpicture}[scale=0.8]
\draw (3,5) ellipse (0.8cm and 1.6cm);
\draw[fill=black] (3,6) circle [radius=0.1];
\node at (2.7,6) {$x$};
\node at (3,2.9) {$V$};
\draw (6,5) ellipse (0.8cm and 1.6cm);
\draw[fill=black] (6,6) circle [radius=0.1];
\node at (6.35,6) {$x_1$};
\draw (6,4.5) ellipse (0.5cm and 0.8cm);
\draw[->] (6,6) to (6,5.3);
\node at (6,2.87) {$V_1$};
\draw (9,5.8) ellipse (0.95cm and 2.4cm);
\draw[fill=black] (9,7.6) circle [radius=0.1];
\node at (9.35,7.6) {$x_2$};
\draw (9,5.3) ellipse (0.65cm and 1.6cm);
\draw[->] (9,7.6) to (9,6.9);
\draw[fill=black] (9,5.3) circle [radius=0.1];
\node at (9,4.9) {$y$};
\node at (9,2.87) {$V_2$};
\draw[->] (3,6) to (5.9,6);
\draw[->] (3,6) to (8.93,7.58);
\draw[->,dashed] (5.2,4.5) to (3.75,4.5);
\draw[->,dashed] (8.5,7.85) to (3.3,6.5);
\end{tikzpicture}
\caption{Illustration of the reduction for \Cref{thm:hardness-constant} when $s = 4$}
\label{fig:reduction-constant}
\end{figure}
We claim that $x$ can win in the original instance $I$ if and only if it can win in the new instance $I'$.
($\Rightarrow$) Suppose that $x$ can win in $I$.
To construct a winning bracket for $x$ in $I'$, we use the winning bracket for $x$ in $I$ as one quarter, put the players from $V_1$ in the other quarter in the same half as $x$, and put the players from $V_2$ in the opposite half to $x$ in such a way that $x_2$ and $y$ are in different quarters.
Notice that the constructed bracket satisfies the seed constraints.
In the resulting tournament, $x$ progresses to the semifinals by winning its bracket for $I$, $x_1$ progresses to the semifinals by winning its quarter, and $x_2$ progresses to the final by winning its half.
Hence, $x$ beats $x_1$ in the semifinals and beats $x_2$ in the final, and therefore wins the tournament.
($\Leftarrow$) Suppose that $x$ can win in $I'$, and consider its winning bracket.
We claim that $x$'s quarter must contain exactly the players from $V$.
Indeed, it cannot contain $x_1$ or $x_2$ due to seed constraints, and if it contains players from $(V_1\cup V_2)\setminus\{x_1,x_2\}$, then since all of these players beat all players in $V$, the winner of $x$'s quarter will be from $(V_1\cup V_2)\setminus\{x_1,x_2\}$, and in particular not $x$.
Hence, the bracket of $x$'s quarter is a winning bracket for $x$ in $I$.
This completes the NP-hardness proof for $s = 4$.
To extend it to any constant number of seeds $s=2^t$ (including $s = 2$), we use a similar construction.
Starting with a non-seeded TFP instance $I$ of $n$ players, we create sets $V_1,V_2,\dots,V_t$ of size $n,2n,\dots,2^{t-1}n$.
For each $1\le i\le t$, $V_i$ contains a player $x_i$ who beats all other players in $V_i$, and all players in $V_i$ beat all players in $V$ except that $x$ beats $x_i$.
The new instance $I'$ consists of $2^tn$ players.
The seeds are assigned as follows:
\begin{itemize}
\item The top two seeds are $x$ and $x_t$.
\item The next two seeds are $x_{t-1}$ and a player in $V_t$.
\item The next four seeds are $x_{t-2}$, a player in $V_{t-1}$, and two players in $V_t$.
\item $\dots$
\item The last $2^{t-1}$ seeds are $x_1$, a player in $V_2$, two players in $V_3$, four players in $V_4$, $\dots$, and $2^{t-2}$ players in $V_t$.
\end{itemize}
If $x$ can win in $I$, then we can construct a bracket in $I'$ so that after winning its subtournament from $I$, $x$ beats $x_1,x_2,\dots,x_t$ in the last $t$ rounds to win the tournament for~$I'$.
Conversely, if $x$ can win in $I'$, a similar argument as in the case $s=4$ shows that the section of the bracket containing $x$'s first $\log n$ rounds must contain exactly the players from $V$; this yields a winning bracket for $x$ in $I$.
Finally, since $s$ is constant, the reduction takes polynomial time.
\end{proof}
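The reduction is straightforward to implement. The sketch below (illustrative code of ours) builds the instance $I'$ for $s = 4$ from a given non-seeded instance, resolving the arbitrary outcomes in the simplest possible way (the lower index wins).
\begin{verbatim}
def build_seeded_instance(beats, x):
    """Instance I' with s = 4 from the proof, for a non-seeded instance
    (beats, x) on n players.  Players 0..n-1 form V, n..2n-1 form V_1
    (head x1 = n), 2n..4n-1 form V_2 (head x2 = 2n).  Returns the new
    tournament matrix and the four seeded players in seed order."""
    n = len(beats)
    N = 4 * n
    x1, x2, y = n, 2 * n, 2 * n + 1
    new = [[False] * N for _ in range(N)]
    for i in range(n):                    # copy the original instance on V
        for j in range(n):
            new[i][j] = beats[i][j]
    for i in range(n, N):
        for j in range(n):
            new[i][j] = True              # V_1 and V_2 beat V ...
        for j in range(n, N):
            if i == j:
                continue
            if (i < 2 * n) == (j < 2 * n):     # within V_1 or within V_2
                new[i][j] = i in (x1, x2) or (j not in (x1, x2) and i < j)
            else:                              # V_1 versus V_2: arbitrary
                new[i][j] = i < 2 * n
    new[x][x1] = new[x][x2] = True        # ... except that x beats x1 and x2
    new[x1][x] = new[x2][x] = False
    return new, [x, x2, x1, y]            # seeds 1, 2, 3, 4
\end{verbatim}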
Next, we prove that the problem remains intractable when the number of seeds is the maximum possible.
\begin{theorem}
\label{thm:hardness-n-2}
For $s = n/2$, TFP is NP-complete.
\end{theorem}
\begin{proof}
As with \Cref{thm:hardness-constant}, we only need to show the NP-hardness.
We again reduce from TFP in the non-seeded setting.
Let $I$ be an instance of non-seeded TFP with a set $V$ of $n$ players, one of whom is our desired winner $x$.
We create a set $V'$ of $n$ additional players who all lose to the $n$ original players.
The outcomes between players in $V'$ can be chosen arbitrarily.
The new instance $I'$ consists of $2n$ players, and the $n$ new players are the $n$ seeds.
It suffices to show that $x$ can win in $I$ if and only if it can win in $I'$.
($\Rightarrow$) Suppose that $x$ can win in $I$.
In the first round for $I'$, we will pair each player from $V$ with a player from $V'$; this ensures that only players from $V$ remain from the second round onward.
We position the players from $V$ so that the bracket from the second round onward is a winning bracket for $x$ in~$I$.
Moreover, we position the players from $V'$ so that the seed constraints are satisfied.
Hence, $x$ wins the resulting tournament for $I'$.
($\Leftarrow$) Suppose that $x$ can win in $I'$, and consider its winning bracket.
By the seed constraints, every first-round match must be between a player from $V$ and one from $V'$; hence, only players from $V$ progress to the second round.
It follows that the bracket from the second round onward is a winning bracket for $x$ in~$I$.
\end{proof}
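This reduction is even simpler to implement than the previous one; a sketch (again ours) is given below.
\begin{verbatim}
def build_max_seeds_instance(beats, x):
    """Instance I' with s = n/2 from the proof: add n new players who lose
    to every original player (outcomes among the new players are arbitrary;
    here the lower index wins).  The new players are the n seeds, in any
    order."""
    n = len(beats)
    N = 2 * n
    new = [[False] * N for _ in range(N)]
    for i in range(n):
        for j in range(n):
            new[i][j] = beats[i][j]       # original outcomes on V
        for j in range(n, N):
            new[i][j] = True              # V beats V'
    for i in range(n, N):
        for j in range(i + 1, N):
            new[i][j] = True              # arbitrary outcomes within V'
    return new, list(range(n, N)), x      # (tournament, seeds, desired winner)
\end{verbatim}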
In the non-seeded setting, the fastest known algorithm for TFP is due to~\citet{KimVa15} and runs in $2^n \cdot n^{O(1)}$ time.
Not only can the algorithm solve TFP, but it can also count the number of brackets that result in each player becoming the tournament winner. Here we provide an algorithm with similar guarantees in the seeded setting.
\begin{theorem} \label{thm:alg}
For any $n$ and $s$, there exists a $2^n \cdot n^{O(1)}$-time algorithm that outputs, for each player, the number of valid brackets such that the player wins the tournament.
\end{theorem}
To prove this theorem, we need the following lemma of~\citet{KimVa15}, whose proof relies on techniques due to~\citet{BjorklundHuKa07}.
For any positive integer $t$, we use $[t]$ to denote the set $\{1,2,\dots,t\}$.
\begin{lemma}[\citep{KimVa15}] \label{lem:fast-subset-convolution}
Let $i \le \log n$ be a positive integer and $f, g$ be integer-valued functions on subsets of $[n]$ of size $2^{i - 1}$. Let $h$ be a function on subsets of $[n]$ of size $2^i$ defined by
\begin{align*}
h(S) = \sum_{T \subseteq S \atop |T| = 2^{i - 1}} f(T) \cdot g(S\setminus T).
\end{align*}
If each entry of $f$ and $g$ can be accessed in ${O(1)}$ time, then we can compute $h(S)$ for all $S$ of size $2^i$ in time $2^n \cdot n^{O(1)}$.
\end{lemma}
\begin{proof}[Proof of \Cref{thm:alg}]
Let $[n]$ be the set of players.
For $i = 0, \dots, \log n$ and $j \in [n]$, let us define the function $f_i^j$ on subsets $S\subseteq[n]$ of size $2^i$ as follows:
\begin{itemize}
\item For $i = 0$, $f_0^j(S) = 1$ if $S = \{j\}$, and 0 otherwise.
\item For $i \ge 1$, $f_i^j(S) = 0$ if, for some power-of-two $\ell$ such that $\ell \leq s$ and $\ell \geq 2^{\log n - i}$, we have $|S \cap [\ell]| \ne \ell / 2^{\log n - i}$. Otherwise, let
\begin{align*}
f_i^j(S) = \sum_{k \in [n] \atop j \succ k} \sum_{T \subseteq S \atop |T| = 2^{i - 1}} f_{i - 1}^j(T) \cdot f_{i - 1}^k(S\setminus T).
\end{align*}
\end{itemize}
Intuitively, $f_i^j(S)$ captures the number of valid brackets for a subtournament with set of players $S$ (of size $2^i$) such that $j$ wins.
The condition $|S \cap [\ell]| = \ell / 2^{\log n - i}$ ensures that this subtournament contains exactly one of the top $2^{\log n-i}$ seeds, two of the top $2^{\log n-i+1}$ seeds, and so on.
Hence, $f_{\log n}^x([n])$ is exactly the number of valid brackets under which $x$ wins. Therefore, it suffices for us to show how to compute the values of these functions in time $2^n \cdot n^{O(1)}$. Our algorithm is described below. (Note that when we let $f^j_i(S)$ be some number, we actually mean storing each $f^j_i$ as an array and filling in the entry of the array corresponding to $S$.)
\begin{itemize}
\item For $i = 0$, let $f_0^j(\{k\}) = 1$ if $k = j$, and $0$ otherwise.
\item For $i = 1, \dots, \log n$:
\begin{itemize}
\item For $j \in [n]$:
\begin{itemize}
\item Start off with $f_i^j(S) = 0$ for all sets $S$ of size $2^i$.
\item For each $k \in [n]$ such that $j \succ k$:
\begin{itemize}
\item Use \Cref{lem:fast-subset-convolution} to compute $h$ with $f = f_{i - 1}^j$ and $g = f_{i - 1}^k$.
\item For every set $S$ of size $2^i$, increase $f_i^j(S)$ by $h(S)$.
\end{itemize}
\item For each set $S$ of size $2^i$, check whether $|S \cap [\ell]| = \ell / 2^{\log n - i}$ for every power-of-two $\ell$ such that $\ell \leq s$ and $\ell \geq 2^{\log n - i}$; if this fails, set $f_i^j(S) = 0$.
\end{itemize}
\end{itemize}
\end{itemize}
The number of pairs $(i,j)$ is $n\log n$, and the iteration of the for-loop corresponding to each pair $(i,j)$ runs in $2^n \cdot n^{O(1)}$ time by \Cref{lem:fast-subset-convolution}.
It follows that the entire algorithm runs in $2^n \cdot n^{O(1)}$ time, as desired.
\end{proof}
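For small instances, the recursion for $f_i^j$ can also be evaluated directly, without the fast subset convolution of \Cref{lem:fast-subset-convolution}; the sketch below (our own code, exponentially slower than the bound in \Cref{thm:alg} but handy for checking examples) implements the recursion verbatim with memoisation, under the convention that players $0,\dots,s-1$ hold seeds $1,\dots,s$.
\begin{verbatim}
from functools import lru_cache
from itertools import combinations
from math import log2

def count_winning_brackets(beats, s):
    """Evaluate f_i^j by direct memoised recursion (no subset convolution;
    exponential in n, so for small instances only).  Players 0..s-1 hold
    seeds 1..s.  Returns {player: f_{log n}^player([n])}."""
    n = len(beats)

    def seed_ok(S, i):
        l = n // 2 ** i                   # smallest relevant power of two
        while l <= s:
            if len(S & frozenset(range(l))) != l * 2 ** i // n:
                return False
            l *= 2
        return True

    @lru_cache(maxsize=None)
    def f(S, j):                          # S is a frozenset of size 2**i
        if len(S) == 1:
            return 1 if S == frozenset({j}) else 0
        if not seed_ok(S, int(log2(len(S)))):
            return 0
        total = 0
        for rest in combinations(sorted(S - {j}), len(S) // 2 - 1):
            T = frozenset(rest) | {j}
            U = S - T
            left = f(T, j)
            if left:
                total += left * sum(f(U, k) for k in U if beats[j][k])
        return total

    return {j: f(frozenset(range(n)), j) for j in range(n)}

# n = 4: player 0 beats everyone, 1 beats 2 and 3, 2 beats 3; s = 2
beats = [[False, True, True, True], [False, False, True, True],
         [False, False, False, True], [False, False, False, False]]
print(count_winning_brackets(beats, s=2))   # {0: 2, 1: 0, 2: 0, 3: 0}
\end{verbatim}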
\section{Conclusion and Future Work}
\label{sec:conclusion}
In this paper, we have investigated the problem of fixing a knockout tournament in the ubiquitous setting where a subset of the players are designated as seeds.
Our results exhibit both similarities and differences in comparison to the setting without seeds.
On the one hand, the decision problem of whether a certain player can be made a tournament winner remains computationally hard, and manipulation is still feasible in the average case.
On the other hand, a number of structural conditions that guarantee that a player can win in the non-seeded setting cease to do so in the seeded setting.\footnote{Our negative results in \Cref{sec:structural} also apply to the conditions put forth by \citet[Thm.~2.1]{KimSuVa17}, since these conditions generalize both the superking and the ``king of high outdegree'' conditions.}
While the seed constraints that we studied in this work are both common and natural, some real-world tournaments employ more restrictive versions of constraints.
For example, in World Snooker Championships---which are contested among $32$ players, $16$ of whom are seeds---the bracket is set up to ensure that if the top four seeds all make it to the semifinals, the first seed will play against the fourth seed and the second against the third; analogous conditions apply to the quarterfinals as well as the round of $16$.\footnote{See wikipedia.org/wiki/2021\_World\_Snooker\_Championship.}
With these constraints, the only freedom in choosing the bracket is in pairing seeded players with unseeded players in the first round.
Another notable example is the seeding used in Grand Slam tennis tournaments, which involve $128$ players and $32$ seeds.
While the constraints for the semifinals and quarterfinals are the same as those that we studied, additional constraints are imposed in the rounds of $16$ and $32$.
For instance, in the round of $16$, the bracket must be set up so that seeds $9$--$12$ fall in different ``eighths'' from seeds $1$--$4$ \citep[p.~26]{Grandslam21}.
Studying the extent to which manipulation is still possible in such tournaments is an interesting avenue for future research.
\section*{Acknowledgments}
This work was partially supported by an NUS Start-up Grant.
We would like to thank the anonymous reviewers for their valuable comments.
\bibliographystyle{named}
\section{Introduction}
We discuss a round robin tournament scheduling problem played in two divisions, with the objective to maximise the number of common fixtures between two clubs playing against each other in the same round in the two separate divisions. The first division has teams from $2n$ clubs, and is played in a double round robin in which the draw for the second round robin is identical to the first. The second division has teams from two additional clubs, and is played as a single round robin during the first $2n+1$ rounds of the first division. We say that two clubs have a \emph{common fixture} if their division one and two teams both play each other in the same round, and show that for $n\geq2$ the maximum possible number of common fixtures is $2n^2 - 3n + 4$. Our construction achieving this bound is based on a bipyramidal one-factorisation of the complete graph $K_{2n}$.
This problem was motivated by a scheduling problem in the Manawat\=u Rugby Union's first and second division tournaments in New Zealand in 2011. In that case there were ten clubs with a team in both divisions, and an additional two clubs with teams in the second division only. The Manawat\=u Rugby Union contacted the second author to request help in designing a schedule to maximise the number of common fixtures. A near optimal schedule was found by the second author and implemented by the rugby union. We solve the problem for any number of clubs in the first division, with two additional clubs in the second division.
\subsection{Organisation}
The paper is organised as follows.
In Section~\ref{sec:theproblem} we give a precise statement of our problem, reformulate it in graph-theoretic terms, and state our main theorem. In Section~\ref{sec:upperbound} we establish the upper bound given in our theorem, and in Section~\ref{sec:construction} we construct draws achieving this bound. This is done in two parts: we first handle the case $n=2$ separately in Section~\ref{sec:n=2}, and then give a general construction for $n\geq 3$ in Section~\ref{sec:general}. We conclude the paper in Section~\ref{sec:homeaway} by considering an oriented version of the problem, representing home and away status, and show that the draws can be chosen to be balanced.
\subsection{Related work}
In theory and application it is often desirable to construct a sports schedule subject to additional constraints or objectives. Many such problems have been investigated, such as:
Victoria Golf Association scheduling two divisions to avoid clashes~\cite{bib:Beecham&Hurley}; scheduling $n$ teams from each of two geographic locations so that games between teams from the same location take place on weekdays, and games between teams from different locations take place at weekends~\cite{bib:dewerra1980};
shared facilities in an Australian Basketball Association~\cite{bib:deWerra&Jacot-Descombes&Masson};
scheduling a round robin tennis tournament under availability constraints on courts and players~\cite{bib:DellaCroce&Tadei&Asioli};
minimising ``carry-over effects'', where teams $x$ and $y$ both play team $z$ immediately after playing team $w$~\cite{bib:anderson1997};
avoiding consecutive away fixtures in the Czech National Basketball League~\cite{bib:Froncek};
minimising waiting times in tournaments played on a single court~\cite{bib:knust2008};
scheduling to avoid teams playing consecutive games against teams from the same ``strength group''~\cite{bib:briskorn2009,bib:briskorn-kunst2010};
minimising breaks (consecutive home or away games)~\cite{bib:Hof}; a travelling tournament problem, where it is desirable to have a number of consecutive away games (on tour) applied to a Japanese baseball league~\cite{bib:Hoshino}.
See Wallis~\cite[Chapter 5]{bib:Wallis} or Kendall et al's comprehensive survey article~\cite{bib:Kendall} and the references therein for further discussion and examples.
In problems involving teams that share facilities (for example, teams belonging to the same club but playing in different divisions, as we consider here) it is common to apply the constraint that such teams cannot have a home game in the same round (see for example~\cite{bib:Beecham&Hurley,bib:deWerra&Jacot-Descombes&Masson} and~\cite[p. 35]{bib:Wallis}). This reflects the common situation where it may be physically impossible to conduct two games at the same time at the same venue. In this paper we drop this constraint, and instead seek to maximise the number of games between teams from the same two clubs, played in the same round and at the same venue. This might for example allow the club's teams to share transport, reducing the costs associated with travel. The scheduling difficulty in the problem considered here arises from the fact that not all clubs have a team in both divisions.
\section{Problem statement}
\label{sec:theproblem}
\subsection{Setting}
Our interest in this paper is in \emph{round robin tournaments}: tournaments in which every team or competitor taking part in the competition plays against every other team or competitor exactly once (a [\emph{single}] \emph{round robin}) or twice (a \emph{double round robin}). For simplicity we will use the term \emph{team} throughout (that is, we allow teams consisting of one player only), since the number of players in a team plays no role in our discussion. We assume that the round robin tournament takes place as a series of \emph{rounds}, in which each team plays exactly one match against another team. To handle the case where there is an odd number of teams we follow common practice by introducing a phantom team; when a team is scheduled to play against the phantom team they have a bye in that round. Thus in what follows we will always assume that there is an even number of teams.
We will regard the teams as belonging to \emph{clubs}, and will assume that each club may enter at most one team in a given tournament. However, a club may have more than one team (for example, an ``A'' and a ``B'' team, a junior and a senior team, or a men's and a women's team) that take part in different tournaments. We will refer to each tournament and associated set of participating teams as a \emph{division}.
In some sports or tournaments one of the two teams taking part in a given match may be in a distinguished position. This is the case for example where one team plays first, or where matches take place at the facilities belonging to one of the two teams, with the team playing at their own facilities being the ``home'' team, and the team travelling to the other's facilities being the ``away'' team. For simplicity we will use the terms \emph{home} and \emph{away} throughout to specify this distinction.
The \emph{draw} for a tournament specifies which matches take place in each round. In some sports home and away are decided by lot, whereas in others the designation must be specified as part of the draw. In such cases it is desirable that every team has nearly equal numbers of home games, and we will say that a draw is \emph{balanced} if the numbers of home games of any two teams differ by at most one. (Note that in a single round robin with $2n$ teams it is impossible for all teams to have the same number of home games, because each team plays $2n-1$ games.) We will use the term \emph{fixture} to refer to a match scheduled to take place in a particular round, with, if applicable, a designation of home and away teams.
\subsection{Formulation}
Let $n$ be a positive integer.
We consider a competition played in two divisions among $2n+2$ clubs labelled $0,1, \dots, 2n+1$. We suppose that
\begin{enumerate}
\renewcommand{\theenumi}{(C\arabic{enumi})}
\renewcommand{\labelenumi}{(C\arabic{enumi})}
\item
clubs $0, 1, \ldots, 2n-1$ have a team competing in each division;
\item
clubs $2n$ and $2n+1$ have teams competing in division two only;
\item\label{double.condition}
division one is played as a double round robin, in which the draws for rounds $r$ and $r+(2n-1)$ are identical for $r=1,\ldots,2n-1$, but with (if applicable) home and away reversed;
\item\label{coincide.condition}
division two is played as a single round robin, coinciding with the first $2n+1$ rounds of division one.
\end{enumerate}
We will say that clubs $x$ and $y$ have a \textbf{common fixture} in round $r$ if their division one and two teams both play each other in round $r$. When home and away are specified as part of the draw we additionally require that the same club should be the home team in both divisions.
It is clear that there are circumstances in which common fixtures might be desirable. For example, they might allow a club's division one and two teams to share transport, and they might allow the club's supporters to attend both the division one and two games. This motivates our main problem:
\begin{mainproblem}
Construct round robin draws maximising the total number of common fixtures among clubs $0,1,\ldots,2n-1$.
\end{mainproblem}
Our construction yields the following result:
\begin{theorem}\label{thm:main}
Let $n$ be a positive integer. Then the maximum possible number of common fixtures is $1$ if $n=1$, and $c(n)=2n^2-3n+4$ if $n\geq 2$. Moreover, if home and away are specified the draws can be chosen to be balanced in all three round robins (division two, and both round robins of division one).
\end{theorem}
When $n=1$ there are only two teams in division one, and four teams in division two. It is immediate that there can be at most one common fixture, and any draw for division two in which the clubs belonging to division one play each other in round one or two realises this. We therefore restrict attention to $n\geq2$ throughout the rest of the paper.
\begin{remark}\label{rem:singleroundrobin}
We have assumed that division one is played as a double round robin, and division two as a single round robin, because that is the form in which the problem was presented to us by a sports organisation in 2011.
If division one is played only as a single round robin, then our work shows that for $n\geq2$ the maximum possible number of common fixtures is $c(n)-2=2n^2-3n+2$.
\end{remark}
\begin{remark}
Our sequence $c(n)$ is a translate by 1 of sequence
\href{http://oeis.org/A236257}{A236257} in the Online Encyclopedia of Integer Sequences (OEIS), published electronically at \href{http://oeis.org}{\nolinkurl{http://oeis.org}}. This sequence is defined by $a(n) = 2n^2 - 7n + 9=c(n-1)$, and relates to sums of $n$-gonal numbers.
The sequence $c(n)-2$ is a translate by 1 of sequence
\href{http://oeis.org/A084849}{A084849} in the OEIS, defined by $a(n) = 2n^2+n+1=c(n+1)-2$. This sequence counts the number of ways to place two non-attacking bishops on a $2\times (n+1)$ board.
\end{remark}
\subsection{Reformulation in graph-theoretic terms}
To formulate the problem in graph-theoretic terms we follow standard practice and represent each team by a vertex, and a match between teams $x$ and $y$ by the edge $\{x,y\}$. In a round robin tournament with $2m$ teams each round then corresponds to a perfect matching or \emph{one-factor} of the complete graph $K_{2m}$, and the round robin draw to an ordered one-factorisation of $K_{2m}$ (Gelling and Odeh~\cite{bib:Gelling&Odeh}, de Werra~\cite{bib:dewerra1980}). Recall that these terms are defined as follows.
\begin{definition}
A \emph{one-factor} or \emph{perfect matching} of $G=(V,E)$ is a subgraph $\bar{G}=(V,\bar{E})$ of $G$ in which the edges $\bar{E} \subseteq E$ have the following properties:
\begin{enumerate}
\item Every vertex $v\in V$ is incident on an edge $e \in \bar{E}$.
\item No two edges $e$ and $e'$ in $\bar{E}$ have any vertex in common.
\end{enumerate}
As a consequence every vertex $v \in V$ has degree one in $\bar{G}$.
A \emph{one-factorisation} of $G$ is a set of one-factors $\{\bar{G}_i=(V,\bar{E}_i) \mid i=1,\ldots,k\}$ with the properties:
\begin{enumerate}
\item
$\bar{E}_i \cap \bar{E}_j = \emptyset, \; i \ne j$.
\item
$\bigcup\limits_{i=1}^k \bar{E}_i = E$.
\end{enumerate}
Clearly, a necessary condition for $G$ to have a one-factorisation into $k$ one-factors is that $G$ is regular of degree $k$. In particular, for $G=K_{2m}$ any one-factorisation must have $k=2m-1$ one-factors.
\end{definition}
Any one-factorisation can be thought of as an edge colouring of the
given graph, and in the case of the complete graph $K_{2m}$, a one-factorisation is equivalent to a minimum edge colouring.
In what follows we will be interested in the cases $m=n$ and $m=n+1$.
We will use the languages of one-factorisations and edge colourings interchangeably.
Note that a one-factorisation or minimum edge colouring does not in itself impose an order on the one-factors. If an order is fixed, we will say that the one-factorisation is \emph{ordered}, or that we have an ordered one-factorisation.
Turning now to the problem, suppose that the complete graph $K_{2n}=(\mathcal{V}_1,\mathcal{E}_1)$ has vertex set $\mathcal{V}_1=\{0,1,\ldots,2n-1\}$, and
$K_{2n+2}=(\mathcal{V}_2,\mathcal{E}_2)$ has vertex set $\mathcal{V}_2=\{0,1,\ldots,2n+1\}$. Then the round robin draw in division one rounds 1 to $2n-1$ may be represented by an edge colouring
\[
\col{1} : \mathcal{E}_1 \to \{1, 2, \ldots, {2n-1}\},
\]
where the edges coloured $r$ represent the draw in round $r$. By condition~\ref{double.condition} the draw in rounds $2n$ to $4n-2$ is then given by the colouring
\[
\hcol : {\mathcal{E}_1} \to \{2n, 2n+1, \ldots,{4n-2}\}
\]
defined by $\hcol(e)=\col{1}(e)+(2n-1)$, and by condition~\ref{coincide.condition} the draw in division 2 may be represented by a colouring
\[
\col{2} : \mathcal{E}_2 \to \{1, 2, \ldots ,{2n+1}\}.
\]
Clubs $x$ and $y$ therefore have a common fixture in round $r$ if and only if
\[
\col{2}(\{x,y\}) = r \in \{\col{1}(\{x,y\}),\hcol(\{x,y\})\};
\]
since $\hcol(\{x,y\})=\col{1}(\{x,y\})+(2n-1)$ this may be expressed concisely as
\[
\col{2}(\{x,y\}) = r \equiv \col{1}(\{x,y\}) \bmod (2n-1).
\]
Our problem may then be stated as follows:
\begin{mainproblemdash}
Let
$K_{2n}=(\mathcal{V}_1,\mathcal{E}_1)$ have vertex set $\mathcal{V}_1=\{0,1,\ldots,2n-1\}$, and let
$K_{2n+2}=(\mathcal{V}_2,\mathcal{E}_2)$ have vertex set $\mathcal{V}_2=\{0,1,\ldots,2n+1\}$.
Construct proper edge colourings
\begin{align*}
\col{1} &: \mathcal{E}_1 \to \{1, 2, \ldots, {2n-1}\}, \\
\col{2} &: \mathcal{E}_2 \to \{1, 2, \ldots ,{2n+1}\},
\end{align*}
of $K_{2n}$ and $K_{2n+2}$, respectively, maximising the number of edges $\{x,y\}\in \mathcal{E}_1$ such that
\begin{equation}\label{common-double.eq}
\col{2}(\{x,y\}) \equiv \col{1}(\{x,y\}) \bmod (2n-1).
\end{equation}
\end{mainproblemdash}
\begin{remark}
When division one is played as a single round robin then the condition of equation~\eqref{common-double.eq} for clubs $x$ and $y$ to have a common fixture becomes simply
\[
\col{2}(\{x,y\}) = \col{1}(\{x,y\}).
\]
\end{remark}
\begin{remark}
When applicable we will orient the edges to indicate the home and away status of a game, with the edges pointing from the home team to the away team. In that case we additionally require that identically coloured edges have the same orientation. We address home and away status in Section~\ref{sec:homeaway}.
\end{remark}
\section{The upper bound}
\label{sec:upperbound}
In this section we show that $c(n)=2n^2 - 3n + 4$ is an upper bound on the number of common fixtures. Recall that we assume $n\geq2$ throughout.
Division one involves $2n$ teams, so in each round there are exactly $n$ games. Thus in each round there can be at most $n$ common fixtures. However, in Lemma~\ref{lem:specialround} we show that there is at most one round in which this can occur. We then show in Lemma~\ref{lem:RR2} that condition~\ref{double.condition} constrains the total number of common fixtures that can occur in rounds 1 and $2n$ to at most $n$, and similarly in rounds $2$ and $2n+1$. Combining these conditions gives $c(n)$ as an upper bound.
\begin{lemma}
\label{lem:specialround}
There is at most one round in which there are $n$ common fixtures. For every other round there are at most $n-1$ common fixtures.
\end{lemma}
\begin{proof}
In each round of division two there are $n+1$ games. In exactly one round the teams from the additional clubs $2n$ and $2n+1$ play each other, leaving $(n+1) - 1 = n$ games between the $2n$ clubs common to both divisions in which it is possible to have a common fixture.
In every other round the clubs $2n$ and $2n+1$ each play a club that is common to both divisions. This leaves $(n+1)-2 = n-1$ games between clubs common to both divisions in which it is possible to have a common fixture.
\end{proof}
Recall from condition~\ref{double.condition} that the draws for the first and second round robins in division one are identical. This constrains the total number of common fixtures between the pairs of identical rounds in the two round robins of the first division.
\begin{lemma}
\label{lem:RR2}
In total there are at most $n$ common fixtures in rounds $1$ and $2n$. Similarly, in total there are at most $n$ common fixtures in rounds $2$ and $2n+1$.
\end{lemma}
\begin{proof}
Rounds $1$ and $2n$ correspond to the first round of the first round robin in division one, and the first round of the second round robin in division one. Since the fixtures in these rounds are identical (disregarding the home and away status), and each fixture occurs once only in division two, there are at most $n$ distinct fixtures and therefore at most $n$ common fixtures in total between the two rounds.
By an identical argument, rounds 2 and $2n+1$ have in total at most $n$ common fixtures also.
\end{proof}
\begin{corollary}
\label{cor:bound}
The number of common fixtures is at most $c(n)=2n^2-3n+4$. For this to be possible the game between teams $2n$ and $2n+1$ in division two must take place in one of rounds $3$ to $2n-1$.
\end{corollary}
\begin{proof}
Let $f_r$ be the number of common fixtures in round $r$, $1\le r \le 2n+1$. We want to bound the total number of common fixtures, which is $\sum_{r=1}^{2n+1} f_r$.
Suppose that the game between teams $2n$ and $2n+1$ occurs in round $q$. Then, by Lemmas~\ref{lem:specialround} and~\ref{lem:RR2} we have:
\begin{align*}
f_{q} & \le n, \\
f_r & \le n-1, & r &\ne q, \\
f_r + f_{2n-1+r} & \le n, & r &\in \{ 1, 2 \}.
\end{align*}
If $q \in \{ 1, 2, 2n, 2n+1\}$
then \begin{align*}
\sum_{r=1}^{2n+1} f_r & = f_1 + f_2 + f_{2n} + f_{2n+1} + \sum_{r=3}^{2n-1} f_r \\
& \le 2n + (2n-3)(n-1) \\
& = 2n^2 - 3n + 3.
\end{align*}
Otherwise, we have $q \in \{ 3, 4, \ldots, 2n-1\}$ and
\begin{align*}
\sum_{r=1}^{2n+1} f_r & = f_1 + f_2 + f_{2n} + f_{2n+1} + \sum_{r=3}^{2n-1} f_r \\
& \le 2n + (2n-4)(n-1) + n\\
& = 2n^2 - 3n + 4.
\end{align*}
In either case we have $\sum_{r=1}^{2n+1} f_r \le 2n^2 - 3n + 4$, with equality possible only when $q \in \{ 3,4, \ldots , 2n-1\}. $
\end{proof}
\begin{remark}\label{rem:lowerbound}
When division one is played as a single round robin, the above argument shows that the number of common fixtures is at most
\[
n+(2n-2)(n-1) = 2n^2 -3n +2 = c(n)-2.
\]
\end{remark}
\section{The construction}
\label{sec:construction}
In this section we construct one-factorisations
$\mathcal{F}^1 = \{F_r^1 \mid 1\leq r\leq 2n-1\}$ of $K_{2n}$ and $\mathcal{F}^2 = \{F_r^2 \mid 1\leq r\leq 2n+1\}$ of $K_{2n+2}$ realising the upper bound $c(n)$ of Corollary~\ref{cor:bound}. Here each one-factor $F_r^d$ represents the draw in round $r$ of division $d$. In the general case $n\geq 3$ our construction
uses a \emph{factor-1-rotational}~\cite{bib:Mendelsohn&Rosa} one-factorisation of $K_{2n}$, also known as a
\emph{bipyramidal}~\cite{bib:Mazzuoccolo&Rinaldi} one-factorisation. This construction does not apply when $n=2$, so we first handle this case separately in Section~\ref{sec:n=2}, before giving our general construction in Section~\ref{sec:general}. We conclude by discussing home and away status for $n\geq 3$ in Section~\ref{sec:homeaway}.
\subsection{The case $n=2$}
\label{sec:n=2}
When $n=2$ we define the required one-factorisations
$\mathcal{F}^1 = \{F_r^1 \mid 1\leq r\leq 3\}$ of $K_{4}$ and
$\mathcal{F}^2 = \{F_r^2 \mid 1\leq r\leq 5\}$ of $K_{6}$ as follows:
\begin{align*}
F_1^1 &=\{(0,1),\mathbf{(2,3)}\}, & F_1^2 &= \{\mathbf{(2,3)},(4,0),(5,1)\}, \\
F_2^1 &=\{\mathbf{(2,0)},(3,1)\}, & F_2^2 &= \{\mathbf{(2,0)},(3,5),(4,1)\}, \\
F_3^1 &=\{\mathbf{(0,3)},\mathbf{(1,2)}\},
& F_3^2 &= \{\mathbf{(0,3)},\mathbf{(1,2)},(4,5)\}, \\
&& F_4^2 &= \{\mathbf{(1,0)},(3,4),(5,2)\}, \\
&& F_5^2 &= \{(0,5),\mathbf{(1,3)},(2,4)\}.
\end{align*}
The draw is also shown graphically in Figure~\ref{fig:n=2}. The common fixtures are indicated in bold, and we see that there are a total of $c(2)=2\cdot2^2-3\cdot2+4=6$ of them. Moreover, with the edges oriented as given we see that pairs of edges corresponding to common fixtures are identically oriented, and that every vertex in division one has outdegree either 1 or 2, and every vertex in division two has outdegree either 2 or 3. Thus, all three round robin draws are balanced, and together with Corollary~\ref{cor:bound} this establishes Theorem~\ref{thm:main} in the case $n=2$.
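The count of common fixtures above is also easy to re-check mechanically. The following minimal Python sketch (illustrative only; it simply re-encodes the draws listed above as ordered (home, away) pairs) recovers the total of $c(2)=6$, matching division two rounds 4 and 5 against the reversed draws of division one rounds 1 and 2:
\begin{verbatim}
div1 = {1: [(0, 1), (2, 3)], 2: [(2, 0), (3, 1)], 3: [(0, 3), (1, 2)]}
div2 = {1: [(2, 3), (4, 0), (5, 1)], 2: [(2, 0), (3, 5), (4, 1)],
        3: [(0, 3), (1, 2), (4, 5)], 4: [(1, 0), (3, 4), (5, 2)],
        5: [(0, 5), (1, 3), (2, 4)]}

common = 0
for r, games in div2.items():
    # division one round for division-two round r: round r for r <= 3,
    # otherwise round r - 3 with home and away reversed
    d1 = div1[r] if r <= 3 else [(a, h) for (h, a) in div1[r - 3]]
    common += len(set(games) & set(d1))

print(common)  # -> 6 = c(2)
\end{verbatim}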
\begin{figure}
\begin{tikzpicture}[vertex/.style={circle,draw,fill=black!20},>=stealth]
\node (1-0) at (90:2) [vertex] {0};
\node (1-1) at (150:2) [vertex] {1};
\node (1-2) at (210:2) [vertex] {2};
\node (1-3) at (270:2) [vertex] {3};
\node at (0,-3) {(a)};
\begin{scope}[xshift = 100]
\node (1_0) at (90:2) [vertex] {0};
\node (1_1) at (150:2) [vertex] {1};
\node (1_2) at (210:2) [vertex] {2};
\node (1_3) at (270:2) [vertex] {3};
\node at (0,-3) {(b)};
\end{scope}
\begin{scope}[xshift = 200]
\node (2-0) at (90:2) [vertex] {0};
\node (2-1) at (150:2) [vertex] {1};
\node (2-2) at (210:2) [vertex] {2};
\node (2-3) at (270:2) [vertex] {3};
\node (2-4) at (330:2) [vertex] {4};
\node (2-5) at (30:2) [vertex] {5};
\node at (0,-3) {(c)};
\end{scope}
\begin{scope}[red,->,thick]
\draw (1-0) to (1-1);
\draw [ultra thick] (1-2) to (1-3);
\draw [ultra thick] (2-2) to (2-3);
\draw (2-4) to (2-0);
\draw (2-5) to (2-1);
\end{scope}
\begin{scope}[blue,->,thick,dashed]
\draw [ultra thick] (1-2) to (1-0);
\draw (1-3) to (1-1);
\draw [ultra thick] (2-2) to (2-0);
\draw (2-3) to (2-5);
\draw (2-4) to (2-1);
\end{scope}
\begin{scope}[green,->,thick,densely dotted]
\draw [ultra thick] (1-0) to (1-3);
\draw [ultra thick] (1-1) to (1-2);
\draw [ultra thick] (2-0) to (2-3);
\draw [ultra thick] (2-1) to (2-2);
\draw (2-4) to (2-5);
\end{scope}
\begin{scope}[magenta,->,thick,dash pattern = on 4pt off 1pt on 1pt off 1pt]
\draw [ultra thick] (1_1) to (1_0);
\draw (1_3) to (1_2);
\draw [ultra thick] (2-1) to (2-0);
\draw (2-3) to (2-4);
\draw (2-5) to (2-2);
\end{scope}
\begin{scope}[cyan,->,thick,dash pattern = on 4pt off 1pt on 1pt off 1pt on 1pt off 1pt]
\draw (1_0) to (1_2);
\draw [ultra thick] (1_1) to (1_3);
\draw [ultra thick] (2-1) to (2-3);
\draw (2-2) to (2-4);
\draw (2-0) to (2-5);
\end{scope}
\begin{scope}[black,<-,thick,dash pattern = on 10pt off 2 pt ]
\draw (1_0) to (1_3);
\draw (1_1) to (1_2);
\end{scope}
\end{tikzpicture}
\caption{The draws for $n=2$, with common fixtures denoted by thicker edges. (a) The draw in division one rounds 1--3, with the rounds denoted by red solid edges; blue dashed edges; and green dotted edges, respectively. (b) The draw in division one rounds 4--6, with the rounds denoted by magenta dash-dotted edges; cyan dash-dot-dotted edges; and black dashed edges, respectively. (c) The draw in division two, with the rounds denoted as above.}
\label{fig:n=2}
\end{figure}
\subsection{The general case $n\geq 3$}
\label{sec:general}
\subsubsection{Overview}
\label{sec:overview}
In the general case $n\geq 3$ our draw in division one is based on a class of one-factorisations of $K_{2n}$ known as \emph{factor-1-rotational}~\cite{bib:Mendelsohn&Rosa} or \emph{bipyramidal}~\cite{bib:Mazzuoccolo&Rinaldi}. Such a one-factorisation is obtained by first constructing a single one-factor, known as a \emph{starter}. Two of the vertices are then held fixed, while the remaining vertices are permuted according to the sharply transitive action of a group $G$ of order $2n-2$. In our case we use the cyclic group of order $2n-2$. This produces $2n-2$ one-factors, and by careful choice of the initial one-factor and group action these are all disjoint, and are completed to a one-factorisation by the addition of a final one-factor, that is fixed by the action of $G$ and consists of the remaining edges.
In order to achieve the close agreement required between the division one and two draws we exploit the symmetry of the division one draw in constructing the draw for division two.
We begin by modifying the starter one-factor of $K_{2n}$, by replacing one of its edges with a pair of edges joining its endpoints to the two additional vertices. This gives us $n-1$ common fixtures in round 1. We then translate this one-factor by the action of $G$, to obtain $n-1$ common fixtures in each of rounds 2 to $2n-2$ as well. The draw for division one round $2n-1$ is described by the fixed one-factor of $K_{2n}$, and adding the edge between the two additional teams to this gives us a round in which there are $n$ common fixtures. It then remains to organise the remaining
edges --- those removed from the cyclically permuted one-factors, as well as the remaining edges between the fixed vertices --- into two more rounds, in such a way that we pick up an additional common fixture in each. We will ensure that this is possible by choosing the edge removed from the starter so that its orbit forms a cycle of length $2n-2$ in the graph, and consequently has a one-factorisation.
\subsubsection{The construction}
In order to describe the construction, it will be convenient to denote the vertices $2n-2$ and $2n-1$ by $\pm\infty$, and the vertices $2n$ and $2n+1$ by $\pm i\infty$. Then we may unambiguously define the permutation $\sigma$ of
$\mathcal{V}_2=\{0,\ldots,2n-3\}\cup\{\pm\infty\}\cup\{\pm i\infty\}$ by $\sigma(x)=x+1$, where addition is done modulo $2n-2$ for $0\leq x\leq 2n-3$, and $x+k=x$ for $x\in\{\pm\infty,\pm i\infty\}$, $k\in\field{Z}$. The group $G=\langle\sigma\rangle$ is cyclic of order $2n-2$, and acts sharply transitively on the vertices $\{0,1,\ldots,2n-3\}$.
We begin by constructing the one-factors $F_1^1$ and $F_1^2$ in Lemma~\ref{lem:round1}. The cases $n=7$ and $n=8$ are illustrated in Figures~\ref{fig:F1 for n=7} and~\ref{fig:F1 for n=8}, respectively.
\begin{lemma}
\label{lem:round1}
Define
\begin{align*}
s &= \begin{cases}
\frac{n-4}{2} & \text{$n$ even,}\\
\frac{n-3}{2} & \text{$n$ odd,}
\end{cases} &
t &= n-2-s =
\begin{cases}
s+2=\frac{n}{2} & \text{$n$ even,}\\
s+1=\frac{n-1}{2} & \text{$n$ odd,}
\end{cases} \\
u &= \begin{cases}
\frac{3n-6}{2} & \text{$n$ even,}\\
\frac{3n-7}{2} & \text{$n$ odd,}
\end{cases} &
v &= 3n-5-u =
\begin{cases}
u+1=\frac{3n-4}{2} & \text{$n$ even,}\\
u+2=\frac{3n-3}{2} & \text{$n$ odd,}
\end{cases}
\end{align*}
and let
\begin{align*}
E_1 &= \bigl\{\{x,y\}:x+y=n-2,0\leq x\leq s\bigr\} \\
&= \bigl\{ \{0,n-2 \}, \{1,n-3\}, \ldots ,\{ s,t \}\bigr\}, \\
E_2 &= \bigl\{\{x,y\}:x+y=3n-5,n-1\leq x\leq u\bigr\} \\
&= \bigl\{ \{n-1,2n-4 \}, \{n,2n-5\}, \ldots, \{ u,v \}\bigr\}, \\
E_3 &= \begin{cases}
\bigl\{\{\frac{n-2}{2},-\infty\},\{2n-3,\infty\}\bigr\}, &\text{$n$ even,} \\
\bigl\{\{\frac{3n-5}{2},-\infty\},\{2n-3,\infty\}\bigr\}, &\text{$n$ odd,}
\end{cases} \\
E_4 &= \begin{cases}
\bigl\{\{u,-i\infty\},\{v,i\infty\}\bigr\}, &\text{$n$ even,} \\
\bigl\{\{s,-i\infty\},\{t,i\infty\}\bigr\}, &\text{$n$ odd.}
\end{cases}
\end{align*}
Then
\[
F_1^1=E_1 \cup E_2 \cup E_3
\]
is a one-factor of $K_{2n}$,
and
\[
F_1^2 = \begin{cases}
(F_1^1\cup E_4)-\bigl\{\{u,v\}\bigr\}, & \text{$n$ even,} \\
(F_1^1\cup E_4)-\bigl\{\{s,t\}\bigr\}, & \text{$n$ odd}
\end{cases}
\]
is a one-factor of $K_{2n+2}$. Moreover, $F_1^1$ and $F_1^2$ have precisely $n-1$ edges in common.
\end{lemma}
\begin{figure}
\begin{tikzpicture}[vertex/.style={circle,draw,fill=black!20,minimum size = 5mm,inner sep=0pt},>=stealth,thick]
\newlength{\radius}
\setlength{\radius}{3cm}
\foreach \x in {0,1,...,11}
\node (\x-1) at (30*\x:\radius) [vertex] {$\scriptstyle\x$};
\node (-inf-1) at (180:0.4*\radius) [vertex] {$\scriptscriptstyle -\infty$};
\node (inf-1) at (0:0.4*\radius) [vertex] {$\scriptstyle\infty$};
\begin{scope}[xshift=2.5*\radius]
\foreach \x in {0,1,...,11}
\node (\x-2) at (30*\x:\radius) [vertex] {$\scriptstyle\x$};
\node (-inf-2) at (180:0.4*\radius) [vertex] {$\scriptscriptstyle -\infty$};
\node (inf-2) at (0:0.4*\radius) [vertex] {$\scriptstyle\infty$};
\node (iinf) at (90:0.4*\radius) [vertex] {$\scriptstyle i\infty$};
\node (-iinf) at (270:0.4*\radius) [vertex] {$\scriptscriptstyle -i\infty$};
\end{scope}
\draw (0-1) -- (5-1);
\draw (1-1) -- (4-1);
\draw (2-1) -- (3-1);
\draw (6-1) -- (10-1);
\draw (7-1) -- (9-1);
\draw (8-1) -- (-inf-1);
\draw (11-1) -- (inf-1);
\draw (0-2) -- (5-2);
\draw (1-2) -- (4-2);
\draw (2-2) -- (-iinf);
\draw (3-2) -- (iinf);
\draw (6-2) -- (10-2);
\draw (7-2) -- (9-2);
\draw (8-2) -- (-inf-2);
\draw (11-2) -- (inf-2);
\end{tikzpicture}
\caption{The one-factors $F_1^1$ (left) and $F_1^2$ (right) in the case $n=7$.}
\label{fig:F1 for n=7}
\end{figure}
\begin{proof}
It is easy to check that each of the $2n$ vertices $x\in \mathcal{V}_1=\{0,1,\ldots,2n-3\}\cup\{\pm\infty\}$ belongs to precisely one edge in the union $E_1 \cup E_2 \cup E_3$. For $0\leq x\leq n-2$ the edge containing $x$ belongs to $E_1$, unless $n$ is even and $x=\frac{n-2}{2}$, in which case it belongs to $E_3$. For $n-1\leq x\leq 2n-4$ the edge belongs to $E_2$, unless $n$ is odd and $x=\frac{3n-5}2$, in which case it belongs to $E_3$; and for $x=2n-3$ and $x\in\{\pm\infty\}$ the edge belongs to $E_3$.
When $n$ is even we obtain $F_1^2$ from $F_1^1$ by deleting the edge $\{u,v\}$ and adding the edges $\{u,-i\infty\}$ and $\{v,i\infty\}$; while when $n$ is odd we obtain $F_1^2$ from $F_1^1$ by deleting the edge $\{s,t\}$ and adding the edges $\{s,-i\infty\}$ and $\{t,i\infty\}$. Thus each vertex of $K_{2n+2}$ belongs to precisely one edge of $F_1^2$ also. Since $F_1^1$ contains precisely $n$ edges it follows moreover that $|F_1^1\cap F_1^2|=n-1$, as claimed.
\end{proof}
The one-factor $F_1^1$ is the starter discussed above in Section~\ref{sec:overview}.
From each one-factor $F_1^d$, $d=1,2$, we now construct $2n-2$ one-factors $F_r^d$, $1 \le r \le 2n-2$, by permuting the vertices $\{0, 1, \dots, 2n-3\}$ according to the permutation $\sigma = (0,1,\ldots,2n-3)$. For $1\leq r\leq 2n-2$ and $d=1,2$ we define
\[
F_r^d = \sigma^{r-1}(F_1^d) = \bigl\{\{\sigma^{r-1}(x),\sigma^{r-1}(y)\} \mid \{x,y\}\in F_1^d\bigr\}.
\]
Then each $F_r^d$ is necessarily a one-factor of $K_{2n}$ or $K_{2n+2}$, since it is obtained from the one-factor $F_1^d$ by an automorphism of the graph. This gives us a total of $2n-2$ one-factors for each graph, whereas a one-factorisation of $K_{2n}$ requires a total of $2n-1$, and a one-factorisation of $K_{2n+2}$ requires a total of $2n+1$. To construct a $(2n-1)$th one-factor for each graph we set
\begin{align*}
F_{2n-1}^1 &= \bigl\{ \{x,x+n-1\} \mid 0 \le x \le n-2 \bigr\} \cup\bigl\{\{-\infty,\infty\}\bigr\}, \\
&= \bigl\{\{0,n-1\},\{1,n\},\ldots,\{n-2,2n-3\},\{-\infty,\infty\}\bigr\}, \\
F_{2n-1}^2 &= F_{2n-1}^1\cup\bigl\{\{-i\infty,i\infty\}\bigr\}.
\end{align*}
These sets of edges are easily seen to meet each vertex of $K_{2n}$ and $K_{2n+2}$, respectively, exactly once. Moreover they have precisely $n$ edges in common, namely all $n$ edges of $F_{2n-1}^1$.
\begin{figure}
\begin{tikzpicture}[vertex/.style={circle,draw,fill=black!20,minimum size = 5mm,inner sep=0pt},>=stealth,thick]
\setlength{\radius}{3cm}
\foreach \x in {0,1,...,13}
\node (\x-1) at (360*\x/14:\radius) [vertex] {$\scriptstyle\x$};
\node (-inf-1) at (180:0.4*\radius) [vertex] {$\scriptscriptstyle -\infty$};
\node (inf-1) at (0:0.4*\radius) [vertex] {$\scriptstyle\infty$};
\begin{scope}[xshift=2.5*\radius]
\foreach \x in {0,1,...,13}
\node (\x-2) at (360*\x/14:\radius) [vertex] {$\scriptstyle\x$};
\node (-inf-2) at (180:0.35*\radius) [vertex] {$\scriptscriptstyle -\infty$};
\node (inf-2) at (0:0.35*\radius) [vertex] {$\scriptstyle\infty$};
\node (iinf) at (90:0.35*\radius) [vertex] {$\scriptstyle i\infty$};
\node (-iinf) at (270:0.35*\radius) [vertex] {$\scriptscriptstyle -i\infty$};
\end{scope}
\draw (0-1) -- (6-1);
\draw (1-1) -- (5-1);
\draw (2-1) -- (4-1);
\draw (7-1) -- (12-1);
\draw (8-1) -- (11-1);
\draw (9-1) -- (10-1);
\draw (3-1) -- (-inf-1);
\draw (13-1) -- (inf-1);
\draw (0-2) -- (6-2);
\draw (1-2) -- (5-2);
\draw (2-2) -- (4-2);
\draw (7-2) -- (12-2);
\draw (8-2) -- (11-2);
\draw (3-2) -- (-inf-2);
\draw (13-2) -- (inf-2);
\draw (9-2) -- (-iinf);
\draw (10-2) -- (iinf);
\end{tikzpicture}
\caption{The one-factors $F_1^1$ (left) and $F_1^2$ (right) in the case $n=8$.}
\label{fig:F1 for n=8}
\end{figure}
In order to construct the final two one-factors for $K_{2n+2}$ we must proceed carefully, to make sure we pick up an extra common fixture in each of rounds $2n$ and $2n+1$. The key point is to ensure that the edge $\{s,t\}$ or $\{u,v\}$ that was removed from $F_1^1$ when constructing $F_1^2$ is placed in $F_{2n}^2$. This may be done as follows. When $n$ is even we set
\begin{align*}
T_1 &= \bigl\{\sigma^{2j}(\{u,v\}) \mid 0\leq j\leq n-2\bigl\}, \\
T_2 &= \sigma(T_1) \\
&= \bigl\{\sigma^{2j+1}(\{u,v\}) \mid 0\leq j\leq n-2\bigl\},
\end{align*}
and when $n$ is odd we set
\begin{align*}
T_1 &= \bigl\{\sigma^{2j}(\{s,t\}) \mid 0\leq j\leq n-2\bigl\}, \\
T_2 &= \sigma(T_1) \\
&= \bigl\{\sigma^{2j+1}(\{s,t\}) \mid 0\leq j\leq n-2\bigl\}.
\end{align*}
Since $v-u=1$ when $n$ is even, and $t-s=1$ when $n$ is odd, in all cases the sets $T_1$ and $T_2$ are the sets
\begin{align*}
T_{\mathrm{even}} &= \bigl\{\{2k,2k+1\} \mid 0\leq k\leq n-2\bigr\}, \\
T_{\mathrm{odd}} &= \bigl\{\{2k-1,2k\} \mid 0\leq k\leq n-2\bigr\}
\end{align*}
in some order. Just which is which depends on the value of $n$ modulo 4:
\begin{enumerate}
\item For $n = 4\ell$, the vertex $u=\frac{3n-6}{2} = \frac{12\ell-6}{2} = 6\ell-3$ is odd, so $T_1=T_{\mathrm{odd}}$;
\item
for $n = 4\ell + 1$, the vertex $s=\frac{n-3}{2} = \frac{4\ell-2}{2} = 2\ell-1$ is odd, so
$T_1 = T_{\mathrm{odd}}$;
\item
for $n = 4\ell+2$, the vertex $u=\frac{3n-6}{2} = \frac{12\ell}{2} = 6\ell$ is even, so
$T_1=T_{\mathrm{even}}$; and
\item
for $n = 4\ell+3$, the vertex $s=\frac{n-3}{2} = \frac{4\ell}{2} = 2\ell$ is even, so
$T_1=T_{\mathrm{even}}$.
\end{enumerate}
To complete $T_1$ and $T_2$ to one-factors of $K_{2n+2}$ we set
\begin{align*}
F_{2n}^2 &= T_1\cup\bigl\{\{-\infty,-i\infty\},\{\infty,i\infty\}\bigr\}, \\
F_{2n+1}^2 &= T_2\cup\bigl\{\{-\infty,i\infty\},\{\infty,-i\infty\}\bigr\}.
\end{align*}
Let
\begin{align*}
\mathcal{F}^1 & = \{F_r^1 \mid 1\leq r\leq 2n-1\}, &
\mathcal{F}^2 & = \{F_r^2 \mid 1\leq r\leq 2n+1\}.
\end{align*}
We now claim:
\begin{theorem}
\label{lem:onefactorisation}
The set $\mathcal{F}^1$ is a one-factorisation of $K_{2n}$, and the set $\mathcal{F}^2$ is a one-factorisation of $K_{2n+2}$. Together these one-factorisations realise the upper bound of Corollary~\ref{cor:bound}.
\end{theorem}
\begin{proof}
We begin by understanding the orbits of the cyclic group $G=\langle\sigma\rangle$ of order $2n-2$ acting on the edges of $K_{2n}$ and $K_{2n+2}$.
For $\delta=1,\ldots,n-1$ let
\[
O_\delta = \bigl\{\{x,x+\delta\} \mid 0\leq x\leq 2n-3\bigr\},
\]
where addition is carried out modulo $2n-2$, and for $\alpha\in\{\pm\infty,\pm i\infty\}$ let
\begin{align*}
O_\alpha = \bigl\{\{x,\alpha\} \mid 0\leq x\leq 2n-3\bigr\}.
\end{align*}
Finally, let also
\[
E^G = \bigl\{\{\alpha,\beta\} \mid \alpha,\beta\in\{\pm\infty,\pm i\infty\},\alpha\neq\beta\bigr\}.
\]
Then it is easily seen that each set $O_1,\ldots,O_{n-1},O_{-\infty},O_\infty,O_{-i\infty},O_{i\infty}$ is an orbit of $G$ acting on the edges of $K_{2n+2}$, and that $E^G$ is the fixed point set of this action. The orbits $O_\delta$ for $1\leq \delta\leq n-2$ and $O_\alpha$ for $\alpha\in\{\pm\infty,\pm i\infty\}$ have order $2n-2$, while the orbit $O_{n-1}$ has order $n-1$. This gives us a total of
\begin{itemize}
\item
$n+2$ orbits of size $2n-2$ in $K_{2n+2}$, of which $n$ lie in $K_{2n}$;
\item
one orbit of size $n-1$ in $K_{2n+2}$, which also lies in $K_{2n}$;
\item
six orbits of size 1 in $K_{2n+2}$, of which precisely one lies in $K_{2n}$.
\end{itemize}
Together these account for all
$(n+2)(2n-2)+(n-1)+6=2n^2+3n+1=(n+1)(2n+1)$ edges of $K_{2n+2}$, and all
$n(2n-2)+(n-1)+1=n(2n-1)$ edges of $K_{2n}$.
Beginning with $\mathcal{F}^1$, observe that in $E_1\subseteq F_1^1$ the differences between the vertices in each edge are
\begin{align*}
(n-2)-0 &= n-2,\\
(n-3)-1 &= n - 4,\\
&\;\;\vdots\\
t-s &= \begin{cases}
\frac{n}{2}-\frac{n-4}{2}=2, & \text{$n$ even}\\
\frac{n-1}{2}-\frac{n-3}{2}=1, &\text{$n$ odd},
\end{cases}
\end{align*}
while in $E_2\subseteq F_1^1$ the differences are given by
\begin{align*}
(2n-4)-(n-1) &= n-3,\\
(2n-5) - (n) &= n - 5,\\
&\;\; \vdots\\
v-u &= \begin{cases}
\frac{3n-4}{2}-\frac{3n-6}{2}=1, & \text{$n$ even} \\
\frac{3n-3}{2}-\frac{3n-7}{2}=2, & \text{$n$ odd}.
\end{cases}
\end{align*}
Together these differences are distinct and take all values from $1$ to $n-2$.
Consequently, $E_1\cup E_2$ contains precisely one edge from each orbit $O_1,\ldots,O_{n-2}$. In addition, the set $E_3$ contains precisely one edge from each orbit $O_{\pm\infty}$, and so in total $F_1^1$ contains precisely one representative from each of the $n$ orbits of size $2n-2$ lying in $K_{2n}$. Note also that $F_{2n-1}^1$ consists of $O_{n-1}$, together with the sole fixed edge $\{-\infty,\infty\}$ lying in $K_{2n}$. Since $F_r^1=\sigma^{r-1}(F_1^1)$ for $1\leq r\leq 2n-2$ it immediately follows that the $F_r^1$ are all disjoint, and together account for every edge of $K_{2n}$. The set $\mathcal{F}^1 = \{F_r^1 \mid 1\leq r\leq 2n-1\}$ therefore forms a one-factorisation of $K_{2n}$.
Turning now to $\mathcal{F}^2$, the one-factor $F_1^2$ is obtained from $F_1^1$ by deleting whichever edge $\{s,t\}$ or $\{u,v\}$ belongs to $O_1$, and replacing it with an edge from each of $O_{-i\infty}$ and $O_{i\infty}$. Consequently $F_1^2$ contains precisely one representative of $n+1$ of the orbits of size $2n-2$, namely $O_2,\ldots,O_{n-2}$ and $O_\alpha$ for $\alpha\in\{\pm\infty,\pm i\infty\}$. We again have $F_r^2=\sigma^{r-1}(F_1^2)$ for $1\leq r\leq 2n-2$, so it immediately follows that the $F_r^2$ are disjoint for $1\leq r\leq 2n-2$, with union $O_2\cup\cdots\cup O_{n-2}\cup O_{-\infty}\cup O_\infty\cup O_{-i\infty}\cup O_{i\infty}$. It is now easily checked that the remaining one-factors $F_{2n-1}^2,F_{2n}^2,F_{2n+1}^2$ are disjoint with union $O_1\cup O_{n-1}\cup E^G$, and the claim that $\mathcal{F}^2$ is a one-factorisation of $K_{2n+2}$ follows.
We now count the common fixtures. By Lemma~\ref{lem:round1} we obtain $n-1$ common fixtures in round 1, and this gives us $n-1$ common fixtures in each round $r$ for $1\leq r\leq 2n-2$, since the draws for these rounds are obtained from those in round 1 by translation by $\sigma^{r-1}$. As observed above, $|F_{2n-1}^1\cap F_{2n-1}^2|=n$, so we obtain $n$ common fixtures in round $2n-1$. By the choice of $T_1$ we obtain a further common fixture in round $2n$, and since $T_2=\sigma(T_1)$ and $F_2^1=\sigma(F_1^1)$, this gives another common fixture in round $2n+1$ also. Summing, we obtain the upper bound of Corollary~\ref{cor:bound}, as claimed.
\end{proof}
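Although not needed for the proof, the construction is straightforward to verify computationally for small $n$. The following Python sketch (an illustrative re-encoding in which the four fixed vertices are represented by string labels, and home and away status is disregarded; cf. Section~\ref{sec:homeaway}) builds $\mathcal{F}^1$ and $\mathcal{F}^2$ as above and checks that they are one-factorisations realising $c(n)$ common fixtures:
\begin{verbatim}
def build(n):
    """The one-factorisations F^1 of K_{2n} and F^2 of K_{2n+2}, for n >= 3."""
    MINF, INF, MIINF, IINF = "-oo", "+oo", "-ioo", "+ioo"

    def sig(x, k=1):              # sigma^k; the four fixed vertices stay put
        return (x + k) % (2 * n - 2) if isinstance(x, int) else x

    def shift(F, k):              # apply sigma^k to every edge of a one-factor
        return {frozenset(sig(x, k) for x in e) for e in F}

    if n % 2 == 0:
        s, u = (n - 4) // 2, (3 * n - 6) // 2
        E3 = {frozenset({(n - 2) // 2, MINF}), frozenset({2 * n - 3, INF})}
    else:
        s, u = (n - 3) // 2, (3 * n - 7) // 2
        E3 = {frozenset({(3 * n - 5) // 2, MINF}), frozenset({2 * n - 3, INF})}
    t, v = n - 2 - s, 3 * n - 5 - u
    E1 = {frozenset({x, n - 2 - x}) for x in range(s + 1)}
    E2 = {frozenset({x, 3 * n - 5 - x}) for x in range(n - 1, u + 1)}
    removed = frozenset({u, v}) if n % 2 == 0 else frozenset({s, t})
    E4 = ({frozenset({u, MIINF}), frozenset({v, IINF})} if n % 2 == 0
          else {frozenset({s, MIINF}), frozenset({t, IINF})})

    F1_1 = E1 | E2 | E3                               # the starter F_1^1
    F1_2 = (F1_1 | E4) - {removed}
    F1 = [shift(F1_1, r) for r in range(2 * n - 2)]   # rounds 1, ..., 2n-2
    F2 = [shift(F1_2, r) for r in range(2 * n - 2)]

    fixed = ({frozenset({x, x + n - 1}) for x in range(n - 1)}
             | {frozenset({MINF, INF})})
    F1.append(fixed)                                  # round 2n-1
    F2.append(fixed | {frozenset({MIINF, IINF})})

    T1 = {frozenset(sig(x, 2 * j) for x in removed) for j in range(n - 1)}
    T2 = {frozenset(sig(x, 1) for x in e) for e in T1}
    F2.append(T1 | {frozenset({MINF, MIINF}), frozenset({INF, IINF})})  # 2n
    F2.append(T2 | {frozenset({MINF, IINF}), frozenset({INF, MIINF})})  # 2n+1
    return F1, F2

def common_fixtures(n):
    F1, F2 = build(n)
    V1 = set(range(2 * n - 2)) | {"-oo", "+oo"}
    V2 = V1 | {"-ioo", "+ioo"}
    matching = lambda F, V: len(F) == len(V) // 2 and set().union(*F) == V
    assert all(matching(F, V1) for F in F1) and all(matching(F, V2) for F in F2)
    assert len(set().union(*F1)) == n * (2 * n - 1)        # every edge of K_{2n}
    assert len(set().union(*F2)) == (n + 1) * (2 * n + 1)  # every edge of K_{2n+2}
    # division-two round r is compared with division-one round r mod (2n-1)
    return sum(len(F2[r] & F1[r % (2 * n - 1)]) for r in range(2 * n + 1))

for n in range(3, 10):
    assert common_fixtures(n) == 2 * n * n - 3 * n + 4     # = c(n)
print("all checks passed")
\end{verbatim}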
\begin{remark}
In the above construction only two of the common fixtures occur in the final two rounds of division two. Thus, if division one is played as a single round robin only, then our construction achieves a total of $c(n)-2$ common fixtures. Combined with the lower bound of Remark~\ref{rem:lowerbound}, this proves the claim of Remark~\ref{rem:singleroundrobin} that $c(n)-2$ is the maximum possible number of common fixtures in this case.
\end{remark}
\begin{remark}
Our general construction described in this section does not apply when $n=2$, because for $n=2$ the only orbits of order $2n-2=2$ are $O_{\pm\infty}$ and $O_{\pm i\infty}$. In particular, for $n=2$ the orbit $O_1=O_{n-1}$ has order $n-1=1$ rather than $2n-2=2$.
\end{remark}
\subsection {Home and away status}
\label{sec:homeaway}
To complete the proof of Theorem~\ref{thm:main} it remains to show that the draws for all three round robins can be chosen to be balanced for $n\geq 3$, subject to the condition that the same club be designated the home team in both divisions in any common fixture. This amounts to orienting the edges of $K_{2n}$ and $K_{2n+2}$ in such a way that the indegree of each vertex differs from its outdegree by exactly one, and any edge corresponding to a common fixture is identically oriented in both graphs.
To achieve this we orient the edges of $K_{2n+2}$ belonging to each orbit of the action of $G$ as follows:
\begin{align*}
O_\delta &= \{(x,x+\delta) \mid 0\leq x\leq 2n-3\} & &\text{for $1\leq \delta\leq n-2$}, \\
O_{n-1} &= \{(x,x+(n-1)) \mid 0\leq x\leq n-2\}, \\
O_\alpha &= \bigcup_{k=0}^{n-2}\{(2k,\alpha),(\alpha,2k+1)\} &
& \text{for $\alpha=\infty,i\infty$}, \\
O_\alpha &= \bigcup_{k=0}^{n-2}\{(2k+1,\alpha),(\alpha,2k)\} &
& \text{for $\alpha=-\infty,-i\infty$},
\end{align*}
and
\[
E^G = \{(-\infty,\infty),(-\infty,-i\infty),(\infty,i\infty),(-i\infty,\infty),(-i\infty,i\infty),(i\infty,-\infty)\}.
\]
For $1\leq \delta\leq n-2$ each orbit $O_\delta$ is a disjoint union of cycles of length at least 3. Using this fact it is easily checked that orienting the edges as above achieves balance for the draw in division two, with the vertices belonging to $\{0,1,\ldots,n-2,-\infty,-i\infty\}$ having indegree $n$ and outdegree $n+1$, and the vertices belonging to $\{n-1,n,\ldots,2n-3,\infty,i\infty\}$ having indegree $n+1$ and outdegree $n$.
As a first step towards achieving balance in division one we regard $K_{2n}$ as a subgraph of $K_{2n+2}$, and give each edge of $K_{2n}$ the orientation it receives as an edge of $K_{2n+2}$. The resulting draw is balanced, and edges corresponding to common fixtures in rounds $1$ to $2n-1$ are identically oriented. However, the two edges corresponding to common fixtures in rounds $2n$ and $2n+1$ are oppositely oriented, because the orientations of the edges of $K_{2n}$ are reversed in rounds $2n$ to $4n-2$. But this is easily remedied, because these edges both belong to $O_1$, and are the only edges in this orbit that occur in common fixtures. Thus we may achieve our goal by simply reversing the orientation in $K_{2n}$ of all edges belonging to $O_1$, which has no effect on the balance. This completes the proof of Theorem~\ref{thm:main}.
\begin{remark}
When division one is played as a single round robin only, the final step of reversing the orientation of $O_1$ is unnecessary, and we may achieve balance in both divisions one and two by simply orienting $K_{2n}$ as a subgraph of $K_{2n+2}$.
\end{remark}
\section{Introduction}
\label{sec:intro}
Sports betting systems generally consist of two essential components, (i) predictive models, generating probabilistic estimates for the given match outcomes, and (ii) bankroll management strategy, optimizing the expected progression of wealth in time. In this work, we focus solely on the latter.
While much of the available research on betting systems is centered around the predictive modeling part, often completely neglecting the need for betting portfolio optimization, we show that, given a predictive model, the betting strategy has a major influence on the final measures of profit. Consequently, a worse model with a better strategy can easily outperform a better model with a worse strategy.
Lacking a deeper understanding of the investment part of the problem, practitioners often resort to trivial practices such as various forms of flat betting. We show that these are inferior, not just theoretically but also from a practical perspective, to the formal strategies. There are two basic streams of research in the formal approaches, stemming from information theory and economics, respectively. The first, and the most widespread, is the Kelly criterion, also known as the geometric mean policy, maximizing the expected long-term growth of wealth. The second is the approach of Markowitz's Modern portfolio theory, balancing the criteria of expected profit and variance as a measure of risk.
While mathematically sound, the formal strategies are based on unrealistic assumptions. The most limiting assumption in their application to sports betting is the knowledge of the true probabilities of individual match outcomes. Other complications of the problem include the multiplicity of outcomes and parallel matches. We investigate the existing modifications of the formal strategies proposed to address the problems occurring in practice, and evaluate them experimentally in three different sports domains -- horse racing, basketball, and football.
The paper is structured as follows. In Section~\ref{sec:definitions} we define the concept of a betting strategy and the dimensions of the underlying optimization problem. In Section~\ref{sec:related} we review the related work touching different facets of risk and bankroll management in betting. In Section~\ref{sec:strategies} we formally introduce the two core strategies of Kelly and Markowitz. The modifications of the core strategies proposed to manage the extra risk occurring in practice are then introduced in Section~\ref{sec:risk}. Finally we experimentally evaluate the strategies in practical scenarios in Section~\ref{sec:experiments} and conclude the paper in Section~\ref{sec:conclusion}.
\section{Problem Definition}
\label{sec:definitions}
At its core, sports betting is a simple stochastic game where the player $p$ repeatedly allocates a distribution of \textit{fractions} ($f_i \in [0,1],~\sum_{i}f_i \leq 1$) of her current bankroll $W \in \mathbb{R}$ at time $t \in \mathbb{N}$ over the possible stochastic results $r_i \in \mathrm{R}$ of a match, coming from a distribution $P_r(r_i)$ over the domain $\mathrm{R}$ of the random variable $R$, describing all the possible outcomes of the given match at time step $t$. Each of the possible match outcomes $r_i$ is then associated with so-called \textit{odds} ($o_i \in \mathbb{R}_{\geq 1}$) by the bookmaker $b: r_i \mapsto o_i$. Should a particular outcome $i$ be realized, i.e. $\mathrm{R}=r_i$, a payoff $o_i \cdot f_i \cdot W$ from the associated odds and fraction is to be received by the player $p$. In the opposite case, the player loses the allocated portion $f_i \cdot W$ of her bankroll to the bookmaker $b$.
Each of the particular outcomes $r_i$ is binary in nature, and the potential net profit $w_i$ from allocation on the $i$-th outcome is thus
\begin{equation}
w_i =
\left\{
\begin{array}{lll}
o_i \cdot f_i \cdot W - f_i \cdot W ~~& \mbox{with prob. $P_r(r_i)$} &\mbox{(if $\mathrm{R}=r_i$ is realized)} \\
- f_i \cdot W ~~& \mbox{with prob. $1-P_r(r_i)$} &\mbox{(if $\mathrm{R} \neq r_i$)}
\end{array}
\right.
\end{equation}
giving an expectation
\begin{equation}
\EX_{P_r}[w_i] = P_r(r_i) \cdot (o_i f_i W - f_i W) + (1-P_r(r_i)) \cdot (- f_i W)
\end{equation}
Clearly, the profits of the bettor and bookmaker are directly opposite and, assuming a closed system of bettors and bookmakers, this is thus a zero-sum game. The goal of both the player $p$ and the bookmaker $b$ is to maximize their long-term profits $W_{t \to \infty}$ as measured by their respective utilities (Section~\ref{sec:strategies}). Given the stochastic nature of the game, the natural desideratum of the player is to allocate the fractions $\bm{f} = f_1, \dots, f_n$ so as to target a high total expected profit $\mathrm{W}$.
\begin{equation}
\EX_{P_r}[\mathrm{W}] = \EX_{P_r} \bigg[\sum_i w_i \bigg] = \sum_i \EX_{P_r} [w_i]
\end{equation}
Note that in this work, we assume the two players to take on the asymmetric roles of market maker $b$ and market taker $p$, where the bookmaker $b$ always starts by laying out the odds $\bm{o} = [o_1, \dots, o_n]$ for the possible match results $\bm{r} = [r_1, \dots, r_n]$ first, to which the player $p$ reacts with her best policy for allocation $p : r_i \mapsto f_i$ of her current wealth $W_t$. In contrast to e.g. the betting exchange setting, in this work we consider solely strategies for the role of the market taker $p$, which is the most common setup for bettors in practice.
\subsection{Betting Strategy}
\label{sec:def:strategy}
A player's betting strategy for a game with $n$ outcomes is a function $f$ mapping a set of probabilistic estimates $\hat{\bm{p}} = \hat{p_1}, \dots,\hat{p_n}$ and bookmaker's odds $\bm{o} = o_1, \dots, o_n$ onto a set of fractions $\bm{f} = f_1, \dots, f_n$ of the current wealth $W_t$ to be waged on each of the game outcomes $\bm{r} = r_1, \dots, r_n$.
\begin{align}
f &: (\hat{\bm{p}}, \bm{o}) \mapsto \bm{f}
\end{align}
Typically, the estimated distribution vector $\hat{\bm{p}}$ comes from a probabilistic model $P_p$ of the player and is similar to, yet different from, the true probability distribution $P_p = \hat{P_r},~P_p \neq P_r$.
The vector of the waged fractions $\bm{f}$ is then often referred to as the \textit{portfolio} over individual ``assets'' $f_i$ (Section~\ref{sec:MPT})
\begin{equation}
\bm{f} =
\begin{bmatrix}
f_1, \dots, f_n
\end{bmatrix}
\end{equation}
where $f_i$ indicates the portion of the wealth $W_t$ allocated to the $i$-th outcome.
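To make the interface concrete, the following minimal Python sketch (purely illustrative; the function name, the fixed total stake and the example numbers are our own choices, not part of any particular system) shows a trivial strategy of the above form: it spreads a fixed total fraction of the bankroll uniformly over the outcomes whose expected value is positive under the player's estimates.
\begin{verbatim}
import numpy as np

def value_betting_strategy(p_hat, odds, total_fraction=0.1):
    """f : (p_hat, o) -> f, with the fractions summing to at most 1."""
    p_hat, odds = np.asarray(p_hat, float), np.asarray(odds, float)
    value = p_hat * odds - 1.0          # expected net profit of a unit wager
    fractions = np.zeros_like(p_hat)
    positive = value > 0.0
    if positive.any():                  # split the stake over "value" outcomes
        fractions[positive] = total_fraction / positive.sum()
    return fractions

# e.g. value_betting_strategy([0.55, 0.45], [1.46, 2.71]) -> array([0. , 0.1])
\end{verbatim}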
\subsection{Fixed Odds}
\label{sec:def:odds}
We further assume a so-called fixed-odds betting setup which, as opposed to e.g. the parimutuel setting~\citep{hausch2008efficiency}, always offers a known odds distribution $\bm{o}$ in advance of the game for the player's strategy $f$ to calculate with.
In its most basic form, we can demonstrate the given setting on a simple coin tossing game as follows.
\begin{example}
\label{ex:coin1}
Assume a fair coin tossing game with two, equally probable, outcomes $\mathrm{R} =\{Heads, Tails\}$
\begin{equation}
\underset{r_i \in \mathrm{R}}{P_r(r_i)} =
\left\{
\begin{array}{ll}
0.5 & \mbox{for the coin falling } r_1 = \textit{Heads} \\
0.5 & \mbox{for the coin falling } r_2 = \textit{Tails}
\end{array}
\right.
\end{equation}
The odds by the bookmaker $b$ could then be set up e.g. as follows
\begin{equation}
\underset{r_i \in \mathrm{R}}{b(r_i)} =
\left\{
\begin{array}{ll}
o_1 = 1.9 & \mbox{for the coin falling } r_1 = \textit{Heads} \\
o_2 = 1.9 & \mbox{for the coin falling } r_2 = \textit{Tails}
\end{array}
\right.
\end{equation}
Let the bettor allocate a fixed wager, such as \$1, on the outcome $r_1=Heads$.
She then receives an extra $w_1 = (1.9 - 1) \cdot 1$ profit if the associated outcome $r_1=Heads$ is realized, or loses the placed wager \$1 otherwise.
It is easy to see that this particular game is generally disadvantageous for the bettor, and there exists no strategy for her to make long-term profits, since the expected profit for each outcome of the game is simply negative.
\begin{equation}
\EX[w_1] = \EX[w_2] = 0.5 \cdot (1.9 - 1) \cdot 1 + 0.5 \cdot (-1) = -0.05
\end{equation}
This is caused by the fact that the odds are \textit{unbiased} and \textit{subfair}. This means that their inverse values $P_b : r_i \mapsto \frac{1}{o_i}$ are proportional to the true probability distribution over the game outcomes, but they do not form a probability distribution as they do not sum up to $1$.
\begin{equation}
\sum_i{P_b(r_i)} = \frac{1}{o_1} + \frac{1}{o_2} \approx 1.05
\end{equation}
\end{example}
In general, for a game with $k$ outcomes, we can theoretically recognize $3$ distinct settings~\citep{cover2012elements} of the odds as follows.
\begin{equation}
odds :
\left\{
\begin{array}{ll}
fair :&\sum_{i=1}^k\frac{1}{o_i} = 1.0 \\
subfair :&\sum_{i=1}^k\frac{1}{o_i} > 1.0 \\
superfair :&\sum_{i=1}^k\frac{1}{o_i} < 1.0
\end{array}
\right.
\end{equation}
where the \textit{subfair} odds are typically the only setting for a bookmaker to be able to generate profits. We will further limit ourselves to this setting as it is the only valid setup working in practice.
The value of
\begin{equation}
margin = \frac{\sum_{i=1}^k\frac{1}{o_i} -1 }{\sum_{i=1}^k\frac{1}{o_i}}
\end{equation}
is then called the bookmaker's margin\footnote{Often wrongly calculated as simply the remainder $\sum_{i=1}^k\frac{1}{o_i} -1$.} (also known as ``vigorish'', ``house edge'', ``cut'' etc.), and represents the negative expected value of the game given that the probabilities $P_b$ implied by the odds are unbiased estimates of the true outcome probabilities $P_r$. Note that this is a typical game setting operated in the gambling industry, such as in various casino games, where there is no space for long-term profitable strategies. However, we note that the situation in sports betting is principally different.
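As a concrete instance of the margin formula above, the subfair odds $o_1=o_2=1.9$ of Example~\ref{ex:coin1} give
\begin{equation}
margin = \frac{\frac{1}{1.9}+\frac{1}{1.9}-1}{\frac{1}{1.9}+\frac{1}{1.9}} = 0.05
\end{equation}
which in this symmetric case matches the $0.05$ expected loss of a unit bet computed in Example~\ref{ex:coin1}.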
\subsection{Biased Estimates}
\label{sec:def:estimates}
In Example~\ref{ex:coin1} with a fair coin, both the bookmaker and the bettor knew the true outcome probability distribution (i.e. $P_r(r_1=H)=0.5 ;\ P_r(r_2=T)=0.5$). This setting is very elegant from a mathematical perspective, since one can calculate exact expected values of profits and consequently derive optimal betting strategies (Section~\ref{sec:strategies}).
Such mathematically optimal strategies can be theoretically applied in artificial environments with handcrafted generators of randomness (e.g. the casinos). However, in the context of sports betting, and other practical settings such as stock market investing, this is generally impossible.
In this experimental review we thus focus on scenarios where the probability estimates of both the bookmaker $P_b$ and the player $P_p$ are biased w.r.t. the real outcome probability distribution $P_r$.
Let us consider an extension of the coin tossing game from Example~\ref{ex:coin1} to demonstrate the properties of such a setting.
\begin{example}
Consider a \textit{biased} coin tossing game where the coin bias is \textit{unknown} to both the bookmaker and the player. Let us set up the bias such that
\begin{equation}
\underset{r_i \in {\mathrm{R}}}{P_r(r_i)} =
\left\{
\begin{array}{ll}
0.6 & \mbox{for } r_1 = \textit{H} \\
0.4 & \mbox{for } r_2 = \textit{T}
\end{array}
\right.
\end{equation}
Let us further assume that the player $p$ has a probabilistic model of the game, producing biased estimates $P_p = \hat{P_r}$
\begin{equation}
\underset{r_i \in {\mathrm{R}}}{P_p(r_i)} =
\left\{
\begin{array}{ll}
0.55 & \mbox{for } r_1 = \textit{H} \\
0.45 & \mbox{for } r_2 = \textit{T}
\end{array}
\right.
\end{equation}
Finally, assume the bookmaker is also biased with his estimates $P_b = \hat{P_r}, P_b \neq P_p$, according to which he sets up the odds distribution $\bm{o}$, lowered by a margin\footnote{In practice, the distribution of margin would not be simply uniform as in the example, but the bookmaker typically applies more sophisticated distortion of the odds to secure even higher statistical advantage.} $m=0.05$
\begin{equation}
\underset{r_i \in {\mathrm{R}}}{P_b(r_i)} =
\left\{
\begin{array}{ll}
0.65 & \mbox{for } r_1 = \textit{H} \\
0.35 & \mbox{for } r_2 = \textit{T}
\end{array}
\right.
\underset{r_i \in {\mathrm{R}}}{b(r_i)} =
\left\{
\begin{array}{ll}
\frac{1}{0.65} \cdot (1-{0.05}) \approx 1.46 & \mbox{for } r_1 = \textit{H} \\
\frac{1}{0.35} \cdot (1-{0.05}) \approx 2.71 & \mbox{for } r_2 = \textit{T}
\end{array}
\right.
\end{equation}
Note that while the odds are still subfair, the bookmaker's bias w.r.t. $P_r$ now creates space for exploitation, since the true expected values are no longer purely negative.
\begin{equation}
\begin{array}{llll}
\EX_{P_r}[w_1] &=& P_r(r_1) \cdot b(r_1) -1 \approx -0.124 & \text{ for~~ } \mathrm{R}=r_1=H\\
\EX_{P_r}[w_2] &=& P_r(r_2) \cdot b(r_2) -1 \approx 0.084 & \text{ for~~ } \mathrm{R}=r_2=T
\end{array}
\end{equation}
i.e. the punter could make long-term profits if betting appropriate amounts on the $r_2=T$ outcome. However, not knowing the true probabilities $P_r$, the player's calculation of expected values will now be biased, too
\begin{equation}
\begin{array}{lll}
\EX_{P_p}[w_1] &=& P_p(r_1) \cdot b(r_1) -1 \approx -0.197\\
\EX_{P_p}[w_2] &=& P_p(r_2) \cdot b(r_2) -1 \approx 0.22
\end{array}
\end{equation}
nevertheless, even though the expected values calculated by the punter w.r.t. her $P_p$ estimate are wrong, in this particular setting she correctly identified the positive expected value of the $r_2=T$ outcome and could theoretically make a profit with an appropriate strategy modification (Section~\ref{sec:risk}).
\end{example}
Generally, $P_p = \hat{P}$ and $P_b = \hat{P}$ are always going to be somewhat biased w.r.t. $P_r$, as well as w.r.t. each other, since $P_p \neq P_b$ (as long as the player does not simply copy the bookmaker's odds). The individual biases can be captured by statistical measures, such as the Kullback-Leibler, or better yet Jensen-Shannon, divergences~\citep{cover2012elements}, and the probabilistic setting of a particular match can then be understood as a triplet of probability distributions over the outcomes, as depicted in Figure~\ref{fig:triangle}.
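As a concrete illustration of such measures, the following short Python sketch (illustrative only; the use of base-2 logarithms and the particular input distributions, taken from the example above, are our own choices) computes the pairwise Jensen-Shannon divergences of the triplet $P_r$, $P_p$, $P_b$:
\begin{verbatim}
import numpy as np

def kl(p, q):                       # Kullback-Leibler divergence (in bits)
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log2(p / q)))

def js(p, q):                       # Jensen-Shannon divergence (in bits)
    m = (np.asarray(p, float) + np.asarray(q, float)) / 2.0
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p_r = [0.60, 0.40]                  # true distribution of the biased coin
p_p = [0.55, 0.45]                  # player's estimate
p_b = [0.65, 0.35]                  # bookmaker's estimate (before the margin)

print(js(p_p, p_r), js(p_b, p_r), js(p_p, p_b))
\end{verbatim}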
\begin{figure}[t]
\centering
\input{fig/triangle.tex}
\caption{A typical sports betting setting for a game with $n$ outcomes, displaying bookmaker's probabilistic estimates $P_b$ and player's estimates $P_p$, both distanced from the true distribution $P_r$ and from each other.}
\label{fig:triangle}
\end{figure}
\subsection{Multiplicity of Outcomes}
\label{sec:def:outcomes}
So far we have assumed a binary coin tossing game with two possible outcomes. Let us now generalize to an $n$-outcome game, such as throwing a die. This represents most real situations in sports betting, such as the $\mathrm{R} = \{Win,Draw,Loss\}$ outcomes in soccer, or betting on the winner of a horse race with $n$ horses (Section~\ref{sec:datasets}). Moreover, one can potentially assume that the individual game outcomes are no longer exclusive, such as betting on the first $j$ horses, or ``over'' $j$ goals in soccer for multiple different values of $j$.
To make the game representation more compact in such situations, a generic matrix~$\bm{O}$ representation has been proposed~\citep{busseti2016risk}, where the columns of $\bm{O}$ represent the possible outcome assets, and rows represent the possible game results, i.e. joint realizations of all the outcomes. Each individual element in $\bm{O}$ then represents particular odds for each outcome realization.
Additionally, we include an artificial risk-free ``cash'' asset $\bm{c}$, which allows the player to put money aside safely. This also allows us to model situations where leaving money aside costs a small fraction of wealth in every turn (caused by e.g. inflation), or where the wealth can be increased by some interest rate (e.g. in a savings account).
The betting strategy $f$ (Section~\ref{sec:def:strategy}) can now thus always allocate the full amount of current wealth $W$ among $n$ available outcome assets, $n - 1$ of which are risky, stochastic assets, and 1 being the added risk-free cash asset as
\begin{equation}
f : (\bm{p}^k, \bm{O}_k^n) \mapsto \bm{f}^n \text{~~~where~~~} \sum_i{f_i}=1
\end{equation}
where $k$ is the number of possible worlds, i.e. the number of possible joint outcome realizations, in our probabilistic game.
Odds for each outcome asset in each of the $k$ world realizations with the respective probabilities $\bm{p} = p_1, p_2, ..., p_k$ can thus be fully specified in the columns $\bm{o_i}$ as
\begin{align}
\bm{O} =
\begin{bmatrix}
\bm{o_1} & \bm{o_2} & ... & \bm{o_{n-1}} & \bm{c}
\end{bmatrix}
~,~\text{where}~~
\bm{o_i} =
\begin{bmatrix}
o_{i, 1} \\
o_{i, 2} \\
... \\
o_{i, k}
\end{bmatrix}
~,~
\bm{c} =
\begin{bmatrix}
1 \\
1 \\
... \\
1
\end{bmatrix}
\end{align}
\begin{example}
Consider a football game, where we assume $3$ outcomes as $\mathrm{R} = \{W, D, L\}$, forming the $3$ asset vectors $\bm{o_w}, \bm{o_d}, \bm{o_l}$, where the bookmaker sets the odds to $o_w, o_d, o_l$, respectively. The odds matrix $\bm{O}$, including the constant cash asset $\bm{c}$, then looks as follows.
\begin{equation}
\bm{O} =
\begin{bmatrix}
\bm{o_w} & \bm{o_d} & \bm{o_l} & \bm{c}
\end{bmatrix}
~~\text{where~}~~
\bm{o_w} =
\begin{bmatrix}
o_w \\
0 \\
0
\end{bmatrix}
,~
\bm{o_d} =
\begin{bmatrix}
0 \\
o_d \\
0
\end{bmatrix}
,~
\bm{o_l} =
\begin{bmatrix}
0 \\
0 \\
o_l
\end{bmatrix}
,~
\bm{c} =
\begin{bmatrix}
1 \\
1 \\
1
\end{bmatrix}
\end{equation}
\end{example}
To simplify notation in further sections, we will also define a modified odds matrix $\bm{\rho}$ corresponding to excess odds, i.e. removing the return amount of the placed wager itself and keeping only the net profit $\mathrm{W}$ (Section~\ref{sec:definitions}), as
\begin{equation}
\bm{\rho} = \bm{O} - \bm{1}
\end{equation}
Note that in the example scenario the outcomes were exclusive, and the ``one-hot'' risky asset vectors reflect their exclusive binary nature, which considerably simplifies the computation of optimal strategies (Section~\ref{sec:strategies}).
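As a small illustration of this representation, the following Python sketch (with purely illustrative odds, probabilities and portfolio, not taken from any dataset) builds the matrix $\bm{O}$ of the football example above, the excess odds $\bm{\rho}$, and the expected net profit of a portfolio $\bm{f}$ under estimated world probabilities $\bm{p}$:
\begin{verbatim}
import numpy as np

# rows: the k = 3 possible results (W, D, L); columns: assets (W, D, L, cash)
o_w, o_d, o_l = 2.1, 3.4, 3.6                # illustrative decimal odds
O = np.array([[o_w, 0.0, 0.0, 1.0],
              [0.0, o_d, 0.0, 1.0],
              [0.0, 0.0, o_l, 1.0]])
rho = O - 1.0                                # excess odds: net profit per unit waged

p = np.array([0.45, 0.30, 0.25])             # estimated world probabilities
f = np.array([0.10, 0.00, 0.05, 0.85])       # an example portfolio, summing to 1

expected_net_profit = p @ rho @ f            # expected net profit per unit of wealth
print(expected_net_profit)
\end{verbatim}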
In this review, we generally assume individual matches with exclusive outcomes\footnote{Note that the exclusiveness of outcomes does not hold in the further scenarios with parallel games.} but varying outcome multiplicities (Section~\ref{sec:datasets}) to experimentally assess the properties of the strategies w.r.t. this dimension of the problem.
\subsubsection{Parallel Games}
\label{sec:def:parallel}
To further complicate the game, approaching the real betting setting even more closely, we can consider multiple dice being thrown in parallel, each associated with a particular set of outcomes and odds. Naturally, this reflects the reality of multiple games being open for betting at the same time. In popular sports, such as soccer, it is not uncommon to have dozens of games available on the market simultaneously.
While we can surely consider each of the games separately, such a simplification can lead to sub-optimal results. Although calculating with the true parallel nature of the opportunities can be computationally demanding for some of the strategies (Section~\ref{sec:quadraticapprox}), it should allow the player to alleviate risk by diversifying over a wider portfolio at each time step of the wealth progression.
In this review, we consider both the sequential and parallel scenarios to emulate realistic scenarios and evaluate the respective advantages (Section~\ref{sec:experiments}).
\subsection{Betting Dynamics}
\label{sec:def:dynamics}
The betting dynamic represents the investment behavior of the bettor w.r.t. her bankroll $W$ in time $t$, which has a major impact on the progression of wealth. There are two basic cases of bankroll management to be considered, (i) additive and (ii) multiplicative~\citep{peters2016evaluating, peters2011optimal}.
\subsubsection{Additive dynamic}
The additive dynamic corresponds to a simple fixed unit-based investment, where the bettor's wagers are decoupled from her current bankroll $W_t$. To illustrate the setting, we can imagine that the bettor receives a fixed unit (e.g. \$1) amount of money from an external source at regular time intervals $\delta t$ (such as a salary), which she repeatedly invests into the stochastic game of betting, and accumulates (additively) the prospective returns $w_t \cdot 1$ from the unit investment in the separately held bankroll $W_t$.
Her wealth progression in time $t$ can hence be seen as
\begin{equation}
W_t = w_t \cdot 1 + W_{t - \delta t}
\end{equation}
\subsubsection{Multiplicative dynamic}
\label{sec:multiplicative}
In the multiplicative scenario, the bettor continuously \textit{reinvests} the current wealth $W_t$ accumulated from the previous betting investments, without any external source of profit. Hence her progression of wealth in time $t$ can be seen as
\begin{equation}
W_t = w_t \cdot W_{t - \delta t}
\end{equation}
The multiplicative dynamic plays an important role in the Kelly criterion (Section~\ref{sec:kelly}), where the mathematical optimality of the strategy is derived exactly from the repeated play of the same game in the multiplicative setting.
As the comparison of the two approaches appears problematic, due to the external source of profit in the additive scenario, we will further consider only the multiplicative reinvestment setting, which is also more realistic and sound for independent evaluation.
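The contrast between the two dynamics can be illustrated with a small simulation. In the following Python sketch (purely illustrative, with a hypothetical i.i.d. per-step multiplier $w_t$ of the wagered amount), the additive bettor accumulates the returns of repeated unit stakes in a separately held bankroll, while the multiplicative bettor reinvests her entire current wealth at every step.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
T = 1000
# hypothetical gross per-step multipliers of the wagered amount
w = rng.choice([1.5, 0.6], size=T)

W_add, W_mult = 1.0, 1.0
for w_t in w:
    W_add = w_t * 1.0 + W_add    # additive: unit stake from an external source
    W_mult = w_t * W_mult        # multiplicative: full reinvestment
print(W_add, W_mult)
\end{verbatim}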
\section{Related works}
\label{sec:related}
The two most notable approaches to allocation of wealth across presented stochastic assets, i.e. match outcomes in sport betting, were introduced by (i)~\cite{markowitz1952portfolio} with his revolutionary concept of balancing return and risk of a portfolio, and by (ii)~\cite{kellyold} with a criterion to maximize the long-term growth in a scenario where the same game is being played repeatedly.
Following the Kelly criterion, the process of betting is closely connected to information theory~\citep{kelly2011new}. Additional mathematical properties were also explored in~\citep{latane2011criteria} and~\citep{breiman1961optimal, thorp2008kelly}. From the economic perspective, Kelly's approach is often explained through the use of a logarithmic utility function, which was famously first introduced by Daniel Bernoulli in~\citep{bernoulli2011exposition}, where he pointed out that people do not make their decisions according to the absolute payoff, but w.r.t. the logarithm thereof. While not necessarily incorrect, this phenomenological explanation of the choice of the logarithmic utility function nevertheless appeared somewhat arbitrary.
In \citep{peters2011time} a different view on the Kelly criterion was proposed, where the author criticized the established evaluation of betting using the expected value of a portfolio, as it is based on the unrealistic idea of ``simultaneous'' evaluation of the, often exclusive, outcomes. Instead of measuring the mean of a statistical ensemble of possible outcomes, the author proposed to focus on what happens to a single player as the same game is repeated in time, following the notion of ergodicity in dynamic systems~\citep{peters2019ergodicity}. The logarithmic transformation then emerges as the correct ergodic transformation of the dynamics of the game in the classical reinvestment setting~\citep{peters2016evaluating}, providing a well-founded explanation for the observed phenomenon.
Given its mathematically elegant yet somewhat unrealistic setting, the Kelly strategy has also often been criticised~\citep{samuelson1971fallacy, samuelson2011we, maclean2010good, samuelson1975lifetime}.
The strategies of Markowitz and Kelly have been re-explored by researchers in a number of different application scenarios and many useful modifications have been proposed since. Generally, Markowitz's approach has traditionally dominated the world of quantitative finance, while Kelly's approach has been more prominent in the sports betting industry. In~\citep{smoczynski2010explicit}, a closed-form solution for the use of the Kelly strategy when betting on horse racing was explored. Another practical extension for betting on multiple simultaneous games was discussed in a number of works~\citep{whitrow2007algorithms, grant2008optimal, buchen2012comparison}, where approximations for large bet aggregations were proposed.
An important stream of research is formed by works investigating extensions of the Kelly strategy towards the realistic setting of parameter uncertainty, such as~\citep{baker2013optimal}. A practical way to address the problem are the so-called fractional Kelly strategies, the properties of which have been investigated in great detail in the works of \citep{maclean2011medium} and \citep{maclean1992growth}. Interesting modifications with similar aims are the Bayesian extensions of the Kelly strategy proposed in \citep{browne1996portfolio, balka2017kelly, chu2018modified}. Similarly, approaches based on probabilistic risk constraints for limiting the probability of a ``drawdown'' were discussed in \citep{busseti2016risk} and \citep{mulvey2011dynamic}. Finally, limiting the worst-case probabilistic scenario using the framework of distributionally robust optimization was explored in \citep{sun2018distributional} for the Kelly strategy and in \citep{blanchet2018distributionally} for Markowitz's strategy, respectively.
\section{Betting Strategies}
\label{sec:strategies}
In the existing literature, the betting strategies range from simple informal techniques, such as flat betting, to the formal approaches, represented mainly by Markowitz's Modern portfolio theory~\citep{markowitz1952portfolio} and the Kelly criterion~\citep{kelly2011new}, coming from the economic and information-theoretic views of the problem, respectively.
\subsection{Informal Strategies}
\label{sec:strat:informal}
In sports betting practice, most of the punters' focus is on the search for outcomes with a positive expected value (``value bets''), while the importance of the subsequent investment strategy has often been neglected. Consequently, rather than formal strategies, one can encounter simplistic heuristics such as~\citep{hubacek2017thesis}:
\begin{itemize}
\item Bet a fixed fraction on favorable odds.
\item Bet a fixed fraction on the opportunity with maximum expected value.
\item Bet a fraction equal to the absolute discrepancy between player's and bookmaker's estimates.
\item Bet a fraction equal to the relative discrepancy between player's and bookmaker's estimates.
\item Bet a fraction equal to the estimated probability of winning.
\end{itemize}
Lacking any formal foundation, these approaches have been shown to be generally inferior to the formal strategies, both theoretically and in practice~\citep{hubacek2017thesis}. For completeness, we chose to re-validate these reports by including the two previously best-performing informal strategies, (i) betting a fraction w.r.t. the maximal discrepancy (``AbsDisc'') and (ii) betting an optimal fraction on the maximal expected value (``MaxEvFrac''), in our experiments (Section~\ref{sec:experiments}).
\subsection{Modern Portfolio Theory}
\label{sec:MPT}
Modern Portfolio Theory (MPT) is a standard economic view of the problem based on the idea of expected value of the profit, possibly transformed by a utility function reflecting the user's particular preferences. The general idea behind MPT is that a portfolio $\bm{f^1}$, i.e. a vector of assets $\bm{f} = f_1, \dots, f_n$, is superior to $\bm{f^2}$, if its corresponding expected profit (Section~\ref{sec:definitions}) is at least as great
\begin{equation}
\EX[\bm{\rho} \cdot \bm{f^1}] \geq \EX[\bm{\rho} \cdot \bm{f^2}]
\end{equation}
and a given risk measure $risk : \mathbb{R}^n \to \mathbb{R}$ of the portfolio, w.r.t. the given odds, is no greater
\begin{equation}
risk(\bm{f^1}|\bm{\rho}) \leq risk(\bm{f^2}|\bm{\rho})
\end{equation}
This creates a partial ordering on the set of all possible portfolios. Taking the portfolios that no other portfolio is superior to gives us the set of ``efficient portfolios'' $\Theta$~\citep{markowitz1952portfolio}. In simple terms, we trade off the expected profit against the risk by maximizing the following
\begin{equation}
\underset{\bm{f} \in \mathbb{R}^n}{\text{maximize}} ~(\EX[\bm{\rho} \cdot \bm{f}] - \gamma \cdot risk(\bm{f}|\bm{\rho}))
\end{equation}
where $\gamma$ is a hyperparameter reflecting the user's preference for risk.
In the most common setup, the $risk$ of a portfolio $\bm{f}$ is measured through the expected total variance of its profit $Var[\bm{\rho} \cdot \bm{f}] = \bm{f}^T\Sigma \bm{f}$, based on the given covariance matrix $\bm{\Sigma}_n^n$ of net profit of the individual assets. Note that in the case of independent outcomes (Section~\ref{sec:def:outcomes}), this reduces to a diagonal matrix with variance of each binary asset profit, corresponding to the result $r_i$, following from the given odds $o_i$ and the underlying Bernoulli distribution as
$\Sigma(i,i) = \hat{P_r}(r_i) \cdot (1-\hat{P_r}(r_i)) \cdot \rho_{i,i}^2$.
MPT can generally thus be expressed as the following maximization problem
\begin{equation}
\label{eq:MPT}
\begin{aligned}
& \underset{\bm{f} \in \mathbb{R}^n}{\text{maximize}}~
& & \EX[\bm{\rho}\cdot\bm{f}] - \gamma \cdot \bm{f}^T\Sigma \bm{f}\\
& \text{subject to}
& & \sum_{i=1}^{n} f_i = 1, \; f_i \geq 0
\end{aligned}
\end{equation}
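For illustration, the optimization problem in Equation~\ref{eq:MPT} translates almost verbatim into a convex optimization framework such as cvxpy. The following minimal Python sketch uses hypothetical inputs (expected excess returns \texttt{mu}, covariance \texttt{Sigma}, and risk preference \texttt{gamma}) and is not the exact code used in our experiments.
\begin{verbatim}
import cvxpy as cp
import numpy as np

# hypothetical inputs for a 3-outcome match plus a cash asset
mu = np.array([0.05, -0.02, 0.01, 0.0])    # E[rho] per asset
Sigma = np.diag([0.9, 0.8, 1.1, 0.0])      # covariance of net profits
gamma = 0.5                                # risk-aversion preference

f = cp.Variable(4)
objective = cp.Maximize(mu @ f - gamma * cp.quad_form(f, Sigma))
constraints = [cp.sum(f) == 1, f >= 0]
cp.Problem(objective, constraints).solve()
print(f.value)
\end{verbatim}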
Apart from the variance $Var[\bm{w}]$ of the potential net returns $\bm{w} = \bm{\rho} \cdot \bm{f}$, different risk measures have been proposed~\citep{markowitz1952portfolio}, such as standard deviation $\sigma(\bm{w}) = \sqrt{Var[\bm{w}]}$ and coefficient of variation $CV(\bm{w}) = \frac{\sigma(\bm{w})}{\EX[\bm{w}]}$. Generally, there is no agreed upon measure of risk and the choice is thus left to the user.
The MPT approach is often criticized for the disputable choice of risk, which can be perceived as a formal weakness of the approach~\citep{peters2016evaluating}, since in many domains the risk is not easy to define. Moreover, the direct maximization of expected profit can be misleading in games, where the distribution of potential profits is highly skewed, i.e. where the mean profit is very different from the median. This situation naturally occurs in the multiplicative dynamics setting, where maximization of expected value may lead to undesirable outcomes~\citep{peters2016evaluating}.
\subsubsection{Maximum Sharpe Strategy}
\label{sec:MaxSharpe}
Apart from the choice of the risk measure, the inherent degree of freedom in MPT is how to select a particular portfolio from the efficient frontier $\Theta$ (based on the choice of $\gamma$). Perhaps the most popular way to avoid the dilemma is to select the point on the Pareto front with the highest expected profit w.r.t. the risk taken. For the risk measure of $\sigma(\bm{w})$, this is known as the ``Sharpe ratio'', generally defined as
\begin{equation}
\frac{\EX[\bm{w}] - r_f}{\sigma(\bm{w})}
\end{equation}
where $\EX[\bm{w}]$ is the expected return of the portfolio, $\sigma(\bm{w})$ is the standard deviation of the return, and $r_f$ is a ``risk-free rate''. Since there is no risk-free investment in sports betting, we can neglect it and reformulate the optimization problem as
\begin{equation}
\begin{aligned}
& \underset{\bm{f} \in \mathbb{R}^n}{\text{maximize}}
& & \frac{\EX[\bm{\rho} \cdot \bm{f}]} {\sqrt{\bm{f}^{T}\bm{\Sigma}\bm{f}}} \\
& \text{subject to}
& & \sum_{i=1}^{n} f_i = 1, f_i \geq 0
\end{aligned}
\end{equation}
the solution of which we will further refer to as the ``MSharpe'' strategy.
The variance-based choices of risk have often been criticized as they penalize excess losses as well as excess returns, which is obviously undesirable. Moreover, the calculation of the MaxSharpe solution is also quite sensitive to errors in the probabilistic estimates, and can often be biased towards extreme solutions, requiring some additional form of control\footnote{E.g. a strategy with no wagers placed would have zero variance, resulting in an infinite Sharpe ratio.}. Nevertheless, it remains a very popular investment practice, which we include in our experiments.
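Since the Sharpe ratio itself is not a concave objective, one standard way to obtain the solution is to minimize the portfolio variance with the expected excess return normalized to one, and rescale the result back onto the simplex. The following Python sketch illustrates this reformulation on hypothetical inputs, restricted to the risky assets and assuming that at least one asset has a positive expected excess return; it is not the exact code used in our experiments.
\begin{verbatim}
import cvxpy as cp
import numpy as np

mu = np.array([0.05, -0.02, 0.01])     # hypothetical E[rho] of risky assets
Sigma = np.diag([0.9, 0.8, 1.1])       # hypothetical covariance matrix

# auxiliary problem: minimal variance at unit expected excess return
y = cp.Variable(3)
prob = cp.Problem(cp.Minimize(cp.quad_form(y, Sigma)),
                  [mu @ y == 1, y >= 0])
prob.solve()

f = y.value / y.value.sum()            # rescale back to a unit portfolio
print(f)
\end{verbatim}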
\subsection{Kelly Criterion}
\label{sec:kelly}
The Kelly criterion~\citep{kelly2011new, thorp2008kelly} is based on the idea of expected multiplicative growth in the reinvestment setting (Section~\ref{sec:multiplicative}), so that a portfolio $\bm{f}$ is chosen such that the long-term value of the resulting, continuously reinvested, wealth $W_t$ is maximal (in an infinite horizon of $t$). Note that in this scenario we assume that the same portfolio is going to be presented at each time step. Due to its multiplicative nature, it is also known as the geometric mean policy, emphasizing the contrast to the arithmetic mean approaches based on the expected value.
The two can however be looked at similarly with the use of a logarithmic ``utility function'', transforming the geometric into the arithmetic mean, and the multiplicative into the additive setting, respectively. The problem can then be again expressed by the standard means of maximizing the expected value as
\begin{equation*}
\begin{aligned}
& \underset{\bm{f} \in \mathbb{R}^n}{\text{maximize}}
& & \EX[\log(\bm{O} \cdot \bm{f})]\\
& \text{subject to}
& & \sum_{i=1}^{n} f_i = 1, \; f_i \geq 0
\end{aligned}
\end{equation*}
Note that, in contrast to MPT, there is no explicit term for risk here, as the notion of risk is inherently encompassed in the growth-based view of the wealth progression, i.e. the long-term value of a portfolio that is too risky will be smaller than that of a portfolio with the right risk balance (and similarly for portfolios that are too conservative).
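The above is a concave maximization problem and can be passed to a convex solver directly. A minimal Python sketch with hypothetical inputs (the odds matrix \texttt{O} includes the unit cash column as in Section~\ref{sec:definitions}) could look as follows; it is meant purely as an illustration.
\begin{verbatim}
import cvxpy as cp
import numpy as np

p = np.array([0.45, 0.27, 0.28])        # player's outcome estimates
O = np.array([[2.1, 0.0, 0.0, 1.0],     # hypothetical odds matrix,
              [0.0, 3.4, 0.0, 1.0],     # last column = cash asset
              [0.0, 0.0, 3.9, 1.0]])

f = cp.Variable(4)
growth = p @ cp.log(O @ f)              # E[log(O f)]
prob = cp.Problem(cp.Maximize(growth),
                  [cp.sum(f) == 1, f >= 0])
prob.solve()
print(f.value)
\end{verbatim}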
The calculated portfolio is then provably optimal, i.e. it accumulates more wealth than any other portfolio chosen by any other strategy in the limit of $t$. However, this strong result only holds under considerably unrealistic assumptions~\citep{kelly2011new, thorp2008kelly, peters2016evaluating}. Similarly to MPT, we assume to know the true probability distribution of game outcomes, and additionally we assume that:
\begin{enumerate}
\item we are repeatedly presented with the same games.
\item we play for an infinite amount of time.
\end{enumerate}
Despite the fact that these conditions are impossible to meet in practice, the Kelly strategy is very popular, and its various modifications (Section~\ref{sec:risk}) are prevalent among bettors in practice.
\subsubsection{Quadratic Approximation}
\label{sec:quadraticapprox}
Exact numerical calculation of the Kelly strategy is often time-consuming, especially when numerous runs through a large dataset of games are necessary. A practical approach to this issue has been proposed~\citep{busseti2016risk} based on a quadratic approximation of Kelly's logarithmic utility using the Taylor series expansion. Let us first recall the following.
\begin{equation}
\log(1+x) = x - \frac{x^{2}}{2} + \dots
\end{equation}
Next, following~\citep{busseti2016risk}, we make an assumption for the Taylor approximation that our net profits are not too far from zero $\bm{\rho}\cdot{\bm{f}} \approx \bm{0}$ and express the logarithmic part of the Kelly criterion as follows~\citep{busseti2016risk}.
\begin{equation}
\log(\bm{O} \cdot \bm{f}) = \log(1 + \bm{\rho} \cdot \bm{f})
\end{equation}
allowing us to proceed with the Taylor expansion as
\begin{equation}
\log(1 + \bm{\rho} \cdot \bm{f}) = \bm{\rho} \cdot \bm{f} - \frac{(\bm{\rho} \cdot \bm{f})^{2}}{2} + ...
\end{equation}
Now taking only the first two terms from the series we transform the expectation of logarithm into a new problem definition as follows
\begin{equation}
\begin{aligned}
& \underset{\bm{f} \in \mathbb{R}^n}{\text{maximize}}
& & \EX[\bm{\rho} \cdot \bm{f} - \frac{(\bm{\rho} \cdot \bm{f})^{2}}{2}] \\
& \text{subject to}
& & \sum_{i=1}^{n} f_i = 1, \; f_i \geq 0
\end{aligned}
\end{equation}
We will further refer to this strategy as the ``Quadratic Kelly''.
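The quadratic surrogate is equally straightforward to express in a convex framework; a minimal Python sketch with hypothetical inputs (\texttt{rho} being the excess odds matrix $\bm{O} - \bm{1}$) follows, again as an illustration rather than the exact experimental code.
\begin{verbatim}
import cvxpy as cp
import numpy as np

p = np.array([0.45, 0.27, 0.28])        # player's outcome estimates
O = np.array([[2.1, 0.0, 0.0, 1.0],     # hypothetical odds matrix
              [0.0, 3.4, 0.0, 1.0],
              [0.0, 0.0, 3.9, 1.0]])
rho = O - 1.0                           # excess odds

f = cp.Variable(4)
ret = rho @ f                           # per-outcome net profit
objective = cp.Maximize(p @ ret - 0.5 * (p @ cp.square(ret)))
prob = cp.Problem(objective, [cp.sum(f) == 1, f >= 0])
prob.solve()
print(f.value)
\end{verbatim}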
Note that, interestingly, the problem can now be rewritten to
\begin{equation}
\begin{aligned}
& \underset{\bm{f} \in \mathbb{R}^n}{\text{maximize}}
& & \EX[\bm{\rho} \cdot \bm{f}] - \frac{1}{2}\EX[\bm{f}^T (\bm{\rho} \cdot \bm{\rho}^T) \bm{f}] \\
\end{aligned}
\end{equation}
corresponding to the original MPT formulation from Equation~\ref{eq:MPT} for the particular user choice of $\gamma=\frac{1}{2}$.
It follows from the fact that the geometric mean is approximately the arithmetic mean minus $\frac{1}{2}$ of the variance~\citep{markowitz1952portfolio}, providing further insight into the connection between the two popular strategies of Kelly and Markowitz.
\section{Risk Management Practices}
\label{sec:risk}
The core issue with the mathematical strategies is that their calculations are carried out as if the true probability distribution over the outcomes was known. Moreover, they are often sensitive to even small errors in the estimates. Here we review simple remedies that have been proposed on top of the original strategies to manage the extra risk stemming from the underlying errors, as well as more sophisticated techniques incorporating the uncertainty of the estimates directly into the computation of the strategies.
\subsection{Maximum bet limit}
\label{sec:limit}
Constraining the maximal wager to a fixed value $m$ is the most trivial risk-avoiding technique one can encounter, which is probably also why it is the most prevalent one in practice. Moreover, the maximum bet limit often comes from the side of the bookmaker, too, constraining the risk he undertakes w.r.t. each bettor. We thus include this empirical method in our portfolio of strategies to see whether capping the invested amount at a fixed threshold might actually improve the overall wealth progression of the existing strategies if properly tuned.
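Applied on top of any computed portfolio, the limit simply caps each risky fraction at $m$ and leaves the cut-off part in the cash asset, as in the following illustrative Python sketch.
\begin{verbatim}
import numpy as np

def cap_fractions(f_risky, m):
    """Cap each risky fraction at m; the cut-off remainder stays in cash."""
    capped = np.minimum(f_risky, m)
    cash = 1.0 - capped.sum()
    return np.append(capped, cash)
\end{verbatim}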
\subsection{Fractional Approaches}
\label{sec:fractional}
Fractioning is an example of a simple heuristic that is nevertheless very efficient in practice.
The main idea behind any ``fractional approach'' is to bet only a fraction $\omega$ of the calculated portfolio, and leave the rest of $1-\omega$ in the cash asset for security. We define such a trade-off index $\omega$ for a portfolio as
\begin{equation}
\bm{f}_\omega = \omega \bm{f}_{1..n-1} + (1-\omega) \bm{f}_n
\end{equation}
where $\bm{f}_{1..n-1}$ corresponds to the risky part of the portfolio with stochastic assets and $\bm{f}_n$ is the cash asset, as introduced in Section~\ref{sec:def:outcomes}.
The fractional approach is mostly used with the Kelly strategy~\citep{maclean2011growth, thorp2011understanding}, where for $\omega = 0.5$ it is famously referred to as ``half-Kelly'' by practitioners; nevertheless, the choice of $\omega$ should depend on the actual distributions and the bettor's preference for risk. The same idea of taking only a fraction of the calculated portfolio can generally be applied to any strategy, including MPT, and it is overall useful whenever our estimates are erroneous.
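In code, the fractional modification is a simple post-processing step applicable to any computed portfolio; an illustrative Python sketch (with the cash asset stored as the last component, as in Section~\ref{sec:def:outcomes}) follows.
\begin{verbatim}
import numpy as np

def fractional_portfolio(f, omega):
    """Scale the risky part of the portfolio by omega, keep the rest in cash."""
    f_frac = np.array(f, dtype=float)
    f_frac[:-1] *= omega                  # risky assets f_1..f_{n-1}
    f_frac[-1] = 1.0 - f_frac[:-1].sum()  # cash asset f_n
    return f_frac

# e.g. "half-Kelly": fractional_portfolio(kelly_portfolio, 0.5)
\end{verbatim}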
\subsection{Drawdown Constraint}
\label{sec:drawdown}
A drawdown represents a more involved technique that actually modifies the original optimization problem.
The idea of drawdown is to incorporate a special probabilistic constraint into the Kelly strategy so as to push the solution away from the more risky region near the ruin boundary. The choice of the boundary is then left to the user's preference as an input parameter into the optimization problem. The probabilistic boundary is expressed as the following constraint
\begin{equation}
P(W_t^{min} < \alpha) \leq \beta
\end{equation}
expressing that the probability of our wealth falling below $\alpha$ can be at most $\beta$.
For the Kelly criterion, following the calculations from~\citep{busseti2016risk}, the constraint is approximately satisfied if the following condition holds
\begin{equation}
\EX[(\bm{O} \cdot \bm{f})^{-\lambda}] \leq 1 \hspace{5pt} \text{where} \hspace{5pt} \lambda = \log(\beta) / \log(\alpha)
\end{equation}
which we can rewrite as
\begin{equation}
\log(\sum_{i=1}^{n} p_i \cdot (\bm{o_i}\cdot f_i)^{-\lambda}) \leq \log(1)
\end{equation}
which can be further simplified~\citep{busseti2016risk} into the following constraint
\begin{equation}
\log(\sum_{i=1}^{n} \exp(\log(p_i \cdot (\bm{o_i}\cdot f_i)^{-\lambda}))) \leq 0
\end{equation}
which we can finally use in a convex optimization program.
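In a convex optimization framework, the constraint can be expressed with a log-sum-exp atom on top of the Kelly objective. The following Python sketch, loosely following the formulation of~\citep{busseti2016risk}, uses hypothetical inputs and is meant only as an illustration.
\begin{verbatim}
import cvxpy as cp
import numpy as np

p = np.array([0.45, 0.27, 0.28])           # player's outcome estimates
O = np.array([[2.1, 0.0, 0.0, 1.0],        # hypothetical odds matrix
              [0.0, 3.4, 0.0, 1.0],
              [0.0, 0.0, 3.9, 1.0]])
alpha, beta = 0.7, 0.1                     # P(W_min < alpha) <= beta
lam = np.log(beta) / np.log(alpha)

f = cp.Variable(4)
log_wealth = cp.log(O @ f)                 # per-outcome log wealth multiplier
drawdown = cp.log_sum_exp(np.log(p) - lam * log_wealth) <= 0
prob = cp.Problem(cp.Maximize(p @ log_wealth),
                  [cp.sum(f) == 1, f >= 0, drawdown])
prob.solve()
print(f.value)
\end{verbatim}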
\subsection{Distributionally Robust Optimization}
\label{sec:dro}
Distributionally robust optimization (DRO) can be understood as a stochastic game between a player and nature, where nature picks a distribution $P_r$ from some predefined ambiguity set of distributions $\bm{\Pi}$ so as to inflict maximum damage to the player's utility. This fits the setting of sports betting against a fixed-odds bookmaker quite naturally, since, given the opposing utilities of the two, the bookmaker (nature) sets up the odds so as to minimize the player's chances for profit.
Generally, DRO is a paradigm for decision making under uncertainty where:
\begin{enumerate}
\item The uncertain problem inputs are governed by a distribution that is itself subject to uncertainty.
\item The distribution is then assumed to belong to an ambiguity set $\bm{\Pi}$.
\item The ambiguity set contains all distributions that are compatible with the player's prior information.
\end{enumerate}
Being aware of the uncertainty in her own estimates $P_p = \hat{P_r}$, the player now modifies the optimization problem to account for the worst possible scenario within the given ambiguity set $\Pi$.
\begin{equation*}
\begin{aligned}
& \underset{\bm{f} \in \mathbb{R}^n}{\text{maximize}}
& & \underset{\bm{p} \in \bm{\Pi}}{\text{min}} \sum_{i=1}^{n} {p_i} \cdot \log(\bm{O_i} \cdot \bm{f})\\
& \text{subject to}
& & \sum_{i=1}^{n} f_i = 1, \; f_i \geq 0
\end{aligned}
\end{equation*}
The ambiguity set $\bm{\Pi}$ can be defined in a number of ways. In~\citep{sun2018distributional}, multiple definitions are explored in connection to the Kelly strategy, such as Polyhedral, Ellipsoidal, or Divergence based. In this review we further narrow our focus to the polyhedral ambiguity set, referred to as the ``box'' uncertainty set, which can be defined as
\begin{equation}
\bm{\Pi} = \{p_i \hspace{3pt} | \hspace{3pt} |p_i - P_p(r_i)| \leq \eta \cdot P_p(r_i),~\sum_{i=1}^{n} p_i = 1, p_i \geq 0\}
\end{equation}
i.e. constraining each probability $p_i$ to differ by at most a fraction $\eta$ from the player's nominal estimate $P_p(r_i)$ of the probability of result $\mathrm{R}=r_i$.
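For a fixed portfolio, the worst-case distribution within the box ambiguity set is itself a small linear program, which illustrates the adversarial interpretation above. The following Python sketch (hypothetical inputs; the full robust optimization over $\bm{f}$ wraps this inner problem, e.g. via the dual reformulations discussed in~\citep{sun2018distributional}) computes such a worst case.
\begin{verbatim}
import cvxpy as cp
import numpy as np

p_hat = np.array([0.45, 0.27, 0.28])       # player's nominal estimates
O = np.array([[2.1, 0.0, 0.0, 1.0],        # hypothetical odds matrix
              [0.0, 3.4, 0.0, 1.0],
              [0.0, 0.0, 3.9, 1.0]])
f = np.array([0.2, 0.05, 0.05, 0.7])       # some fixed candidate portfolio
eta = 0.1                                  # box width of the ambiguity set

log_wealth = np.log(O @ f)
p = cp.Variable(3)
worst_case = cp.Problem(
    cp.Minimize(p @ log_wealth),
    [cp.sum(p) == 1, p >= 0,
     cp.abs(p - p_hat) <= eta * p_hat])
worst_case.solve()
print(p.value, worst_case.value)
\end{verbatim}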
\section{Experiments}
\label{sec:experiments}
The main purpose of this review is to assess the performance of the individual strategies (Section~\ref{sec:strategies}) and their risk modifications (Section~\ref{sec:risk}) in various realistic settings (Section~\ref{sec:definitions}) on real data.
We recall the used strategies, describe the datasets, evaluation protocol, and discuss the conducted experiments with their results.
The strategies for the experiments were chosen to represent the diverse portfolio of approaches occurring in practice, with the goal of providing an unbiased statistical assessment of their performance limits. The particular strategies chosen, with their respective hyperparameters, are specified in Table~\ref{tab:strategies}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
\textbf{Strategy} & Description & {Hyperparameters}\\
\hline
AbsDisc & absolute discrepancy bet (Section~\ref{sec:strat:informal}) & None \\
\hline
MaxEvFrac & max. EV outcome with fractioning (Section~\ref{sec:strat:informal}) & $\omega \in [0,1]$ \\
\hline
Kelly & original Kelly strategy (Section~\ref{sec:kelly}) & None \\
\hline
MSharpe & original max. Sharpe ratio (Section~\ref{sec:MaxSharpe}) & None \\
\hline
KellyFrac & Kelly strategy with fractioning (Section~\ref{sec:fractional}) & $\omega \in [0,1]$ \\
\hline
MSharpeFrac & max. Sharpe with fractioning & $\omega \in [0,1]$ \\
\hline
KellyFracMax & Kelly with fractioning and limiting (Section~\ref{sec:limit}) & $\omega \in [0,1]$, $m \in [0,1]$. \\
\hline
MSharpeFracMax & max. Sharpe with fractioning and limiting & $\omega \in [0,1]$, $m \in [0,1]$. \\
\hline
KellyDD & Kelly with the drawdown constraint (Section~\ref{sec:drawdown}) & $\alpha$, $\beta \in [0,1]$ \\
\hline
KellyDR & Kelly with distributionally robust optimization & $\eta \in [0,1]$. \\
\hline
\end{tabular}
\end{center}
\caption{Evaluated strategies and their hyperparameters}
\label{tab:strategies}
\end{table}
\subsection{Datasets}
\label{sec:datasets}
We collected 3 datasets of different properties from 3 different sports - horse racing, basketball, and football, each containing a significant number of ``matches'' for statistical evaluation. Each of the datasets is further accompanied with realistic models' predictions tuned specifically for each domain. Since our focus here is purely on the betting strategies, we do not elaborate on the models in detail beyond their predictive performances, which naturally influence the performance of the strategies, too.
For each of the datasets, we present the following key properties.
\begin{itemize}
\item $size$ - Dataset size (i.e. number of matches).
\item $acc_b$ - Accuracy of the bookmaker $b$.
\item $acc_p$ - Accuracy of the player $p$ (i.e. the predictive model).
\item $n$ - Number of possible match outcomes ($n=|R|$).
\item $odds$ - Range of the offered odds.
\item $margin$ - Average margin present in the odds.
\item $A_{KL}$ - Kullback-Leibler advantage of the player.
\end{itemize}
The $A_{KL}$ is a statistical measure of the difference between the predictive performances (cross-entropies) of the player and the bookmaker, respectively. The metric was chosen as it plays a key role in the performance of the original Kelly strategy, where the growth of profit can be proved directly proportional to the KL advantage~\citep{cover2012elements}.
\subsubsection{Horse Racing}
\label{sec:horses}
The data for horse racing were collected from the Korean horse racing market (KRA) and provide $2700$ races. The target market of the dataset is the ``win pool'', representing betting on the horse winning the race. The schedule and participation of individual horses in the races varies considerably. Moreover, there is a varying total number of horses, and thus outcomes $n$, in each race, creating an interesting challenge for the strategies. We thus treat each race as a completely independent investment opportunity and optimize the strategies accordingly. The model used was a form of conditional logistic regression over various features of the horses. The particular dataset properties are specified in Table~\ref{tab:horses}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c|c|c|}
\hline
\textit{size} & $acc_p$ & $acc_b$ & $n$ & $odds$ & $margin$ & $A_{KL}$\\
\hline
$2700$ & $0.512$ & $0.503$ & $\in [6, 16]$ & $\in [1.0, 931.3]$ & $0.2$ & $\approx 0.0022$ \\
\hline
\end{tabular}
\end{center}
\caption{Horse racing dataset properties}
\label{tab:horses}
\end{table}
The specifics of the horse racing dataset come mainly from the fact that it actually originates from a parimutuel market, meaning that the wagers are put into a shared pool from which a certain portion is removed as a profit for the house (margin). Nevertheless, we convert it into the discussed fixed-odds setting by assuming the last available state of the money pool to determine the possible payoffs/odds~\citep{hausch2008efficiency}. As a result, the ``bookmaker's'' estimate in this case is made up entirely of public opinion, and is noticeably less accurate. On the one hand, this provides space for statistical models to gain a predictive KL advantage; on the other hand, the margin is also considerably higher.
\subsubsection{Basketball}
\label{sec:basket}
The next domain we selected is basketball, where we collected box score data from matches in the US National Basketball Association (NBA). The dataset consists of $16000$ games ranging from the year $2000$ to $2015$. The NBA league has a regular schedule of matches, where each team plays repeatedly with every other team in so-called ``rounds''. To emulate the market setting in a realistic fashion, we assume rounds as groups of $10$ scheduled matches to repeatedly appear on the market in parallel (Section~\ref{sec:def:parallel}).
The target market here was the ``money-line'', i.e. betting on the winner of each match. The specifics of the data then come from the fact that there are only 2 outcomes in the game, directly corresponding to the most basic coin-tossing setup of the problem (Section~\ref{sec:definitions}).
The model used was a convolutional neural network based on detailed statistics of the individual players and teams~\citep{hubavcek2019exploiting}. The odds then come from the closing line of the Pinnacle~\footnote{https://www.pinnacle.com/} bookmaker. Notice that in this case the model is not as accurate as the bookmaker, and is thus in a general KL-disadvantage. The particular dataset properties are specified in Table~\ref{tab:basket}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c|c| }
\hline
\textit{size} & $acc_p$ & $acc_b$ & $n$ & $margin$ & $odds$ & $A_{KL}$\\
\hline
$16000$ & $0.68$ & $0.7$ & $2$ & $0.038$ & $\in [1.01, 41]$ & $\approx -0.0146$\\
\hline
\end{tabular}
\end{center}
\caption{Basketball dataset properties}
\label{tab:basket}
\end{table}
\subsubsection{Football}
\label{sec:football}
The football dataset consists of $32000$ matches collected from various leagues all over the world. The schedule in each football league is similar in spirit to that of the NBA, and so we again assume the market setting with $10$ parallel games (Section~\ref{sec:def:parallel}). The target market was again money-line betting. The outcomes in football include a draw, resulting in a moderate $n=3$ setting. Interestingly, the original dataset~\citep{dubitzky2019} contained merely the historical results of the matches, and the model has thus been built purely from score-derived features. Particularly, the model was a form of a gradient-boosted trees learner, winning the 2017's Soccer prediction challenge~\citep{dubitzky2019}. The odds were again provided by Pinnacle, but this time we took the more favorable opening line. Despite varying over different leagues, the overall margin is slightly lower than in basketball, and the model is at a slightly smaller, yet still considerable, KL disadvantage. The particular dataset properties are specified in Table~\ref{tab:football}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c|c| }
\hline
\textit{size} & $acc_p$ & $acc_b$ & $n$ & $margin$ & $odds$ & $A_{KL}$\\
\hline
$32000$ & $0.523$ & $0.537$ & $3$ & $0.03$ & $\in [1.03, 66]$ & $\approx -0.013$\\
\hline
\end{tabular}
\end{center}
\caption{Football dataset properties}
\label{tab:football}
\end{table}
\subsection{Evaluation Protocol}
\label{sec:ex:protocol}
The models providing the probabilistic estimates were trained following the natural order of the matches in time, so that all of their estimates are actual future predictions, i.e. out-of-sample test outputs for matches unseen in the training phase.
For the actual optimization problems of the individual strategies, we have chosen to work with cvxpy~\citep{cvxpy} as the main optimization framework. For each strategy, we first solved the given problem using the Embedded Conic Solver (ECOS)~\citep{domahidi2013ecos}, and should a numerical problem arise, we proceeded to solve it using the Splitting Conic Solver (SCS)~\citep{o2016scs}.
Since many of the chosen strategies (Table~\ref{tab:strategies}) contain hyperparameters to be set, we additionally tuned each of them for the best possible performance via grid search. The individual hyperparameter ranges for the grid search can be found in Table~\ref{tab:strategies}.
To provide unbiased estimates of their actual performance in practice, we also followed a strict evaluation protocol for each of the strategies. This means that we have (i) split each dataset into training and testing subsets, (ii) found the best hyperparameter setting on the training subset, and (iii) evaluated the fixed setting on the test subset.
To make the output profit measures (Section~\ref{sec:metrics}) more robust, both the training and testing parts are evaluated by generating $1000$ separate ``runs'' through each subset, where the sequence of games is randomly reshuffled and $10\%$ of the games are randomly removed each time (the split between train and test always remains respected). We hence evaluate the properties of each strategy on $1000$ separate wealth investment trajectories through previously unseen games.
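The generation of the evaluation runs can be sketched as follows (illustrative only; \texttt{games} is assumed to be the time-ordered list of games in the respective subset).
\begin{verbatim}
import numpy as np

def generate_runs(games, n_runs=1000, drop=0.1, seed=0):
    """Yield reshuffled copies of the game sequence with 10% of games removed."""
    rng = np.random.default_rng(seed)
    games = np.asarray(games)
    for _ in range(n_runs):
        keep = rng.permutation(len(games))[: int(len(games) * (1 - drop))]
        yield games[keep]
\end{verbatim}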
\subsubsection{Hyperparameter Selection}
\label{sec:hyperpar}
To choose the best possible strategy setting on the train set, we looked for hyperparameters with the following criteria
\begin{equation*}
\begin{aligned}
& {\text{maximize}}
& & median(\bm{W_{f}}) \\
& \text{subject to}
& & Q_{5} > 0.9
\end{aligned}
\end{equation*}
i.e. we always chose the setting that reached the maximum median final wealth, given that no more than $5\%$ of the wealth trajectories ended below $90\%$ of the initial bankroll. Hyperparameter settings that did not meet the required criterion were simply removed from consideration. While the presented hyperparameter selection criteria might seem somewhat arbitrary and could be debated, our aim was to follow the natural desiderata of wealth progression for bettors in practice. That is, to mainly prevent the occurrence of ruin (``survival first''), and only then maximize the potential profits for the typical (median) bettor.
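In code, the selection criterion amounts to a simple filter over the simulated training trajectories, as in the following illustrative Python sketch (assuming final wealths normalized to an initial bankroll of $1$).
\begin{verbatim}
import numpy as np

def select_hyperparameters(results):
    """results: dict mapping hyperparameter setting -> array of final wealths."""
    best, best_median = None, -np.inf
    for setting, final_wealth in results.items():
        if np.quantile(final_wealth, 0.05) <= 0.9:  # survival constraint Q_5 > 0.9
            continue
        med = np.median(final_wealth)
        if med > best_median:
            best, best_median = setting, med
    return best
\end{verbatim}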
\subsubsection{Evaluation Metrics}
\label{sec:metrics}
For the actual final evaluation of the strategies on the test set, we chose a range of diverse metrics to provide more insights into the properties of the individual strategies and game settings. The metrics are as follows
\begin{itemize}
\item $median(W_f)$ - median final wealth position.
\item $mean(W_f)$ - mean final wealth position.
\item $min(W_i)$ - lowest wealth position.
\item $max(W_i)$ - maximal wealth position.
\item $sigma(W_f)$ - standard deviation of final wealth positions.
\item $ruin$ \% - ruin percentage of wealth trajectories
\end{itemize}
for which we define a $ruin$ situation as falling below $0.01\%$ of the initial bank $W_0$ at least once during the entire investment period. Note that, as opposed to the original definition of ruin in the Kelly strategy~\citep{kellyold}, we have chosen a small \textit{non-zero} threshold, since in practice there is always some small amount of money below which it becomes effectively impossible to place a minimal bet, a constraint often present in the market.
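The reported metrics can be computed from the set of simulated wealth trajectories as follows (illustrative Python sketch; \texttt{trajectories} is assumed to be an array of shape runs $\times$ time steps, normalized to $W_0 = 1$).
\begin{verbatim}
import numpy as np

def evaluate(trajectories, ruin_threshold=1e-4):
    """Summarize a set of wealth trajectories (one row per run)."""
    W_f = trajectories[:, -1]                   # final wealth per run
    return {
        "median(W_f)": np.median(W_f),
        "mean(W_f)": np.mean(W_f),
        "min(W_i)": trajectories.min(),
        "max(W_i)": trajectories.max(),
        "sigma(W_f)": np.std(W_f),
        "ruin %": 100.0 * np.mean((trajectories < ruin_threshold).any(axis=1)),
    }
\end{verbatim}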
\subsection{Results}
\label{sec:results}
Finally, we present the performances (Section~\ref{sec:metrics}) of the individual strategies (Table~\ref{tab:strategies}) over each of the datasets (Section~\ref{sec:datasets}). Apart from the evaluation metrics in the final state of wealth progression $W_{f}$, we present the summarized wealth progression trajectories of a selected ``best'' strategy with maximal median final wealth for each of the datasets, to demonstrate the overall bankroll dynamics. The evaluation metrics for the basketball, football, and horse racing datasets are presented in Table~\ref{experiments:metrics:basketball}, Table~\ref{experiments:metrics:football}, and Table~\ref{experiments:metrics:horses}, respectively. The wealth progression trajectories of the best strategies are then displayed in Figure~\ref{fig:basket}, Figure~\ref{fig:football}, and Figure~\ref{fig:horses}, respectively.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c|c|c|}
\hline
\textit{\textbf{strategy}} & $median(W_f)$ & $mean(W_f)$ & $min(W_i)$ & $max(W_i)$ & $sigma(W_f)$ & $ruin$ \%\\
\hline
AbsDisc & 0.0019 & 0.03 & 4e-08 & 27.1 & 0.04 & 85.2 \\
\hline
MaxEvFrac & 0.86 & 2.13 & 2e-09 & 711 & 4.7 & 36.1 \\
\hline
\hline
Kelly & 4.11 & 15.6 & 7e-05 & 2167.8 & 59.8 & 0.6 \\
\hline
MSharpe & 3.92 & 17.8 & 9e-06 & 2231.1 & 48.3 & 12.1 \\
\hline
KellyFrac & 3.39 & 14.2 & 0.003 & 213.2 & 32.1 & 0 \\
\hline
MSharpeFrac & 3.28 & 16.9 & 8e-05 & 253.3 & 26.5 & 0.2 \\
\hline
KellyFracMax & 3.49 & 13.8 & 0.0057 & 168.1 & 29.3 & 0 \\
\hline
MSharpeFracMax & 3.41 & 15.2 & 0.0065 & 194.3 & 25.4 & 0 \\
\hline
KellyDD & 3.3 & 13.7 & 0.009 & 112.4 & 22.4 & 0 \\
\hline
KellyDR & 2.97 & 4.1 & 0.08 & 77.3 & 7.2 & 0 \\
\hline
\end{tabular}
\end{center}
\caption{Final wealth statistics of the strategies in the horse racing scenario (Section~\ref{sec:horses}).}
\label{experiments:metrics:horses}
\end{table}
\begin{figure}[h!]
\centering
\includegraphics[width=0.85\textwidth]{fig/horse_box_reinvest.pdf}
\caption{Wealth progression of the KellyFracMax strategy in the horse racing scenario (Section~\ref{sec:horses}).}
\label{fig:horses}
\end{figure}
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c|c|c|}
\hline
\textit{\textbf{strategy}} & $median(W_f)$ & $mean(W_f)$ & $min(W_i)$ & $max(W_i)$ & $sigma(W_f)$ & $ruin$ \%\\
\hline
Kelly & 9.1e-6 & 1.8e-05 & 1.9e-20 & 3312.2 & 1.7e-05 & 100 \\
\hline
MSharpe & 1.3e-06 & 5.1e-05 & 4.1e-21 & 2911 & 9.7e-06 & 100 \\
\hline
KellyFrac & 2.4 & 2.7 & 0.11 & 24.1 & 1.34 & 0 \\
\hline
MSharpeFrac & 1.24 & 1.97 & 0.002 & 19.6 & 0.85 & 0 \\
\hline
KellyFracMax & 2.3 & 2.5 & 0.13 & 20.9 & 1.27 & 0 \\
\hline
MSharpeFracMax & 1.2 & 1.7 & 0.008 & 12.1 & 0.56 & 0 \\
\hline
KellyDD & 2.21 & 2.9 & 0.14 & 29.1 & 1.3 & 0 \\
\hline
KellyDR & 1.39 & 1.46 & 0.23 & 10.9 & 0.45 & 0 \\
\hline
\end{tabular}
\end{center}
\caption{Final wealth statistics of the strategies in the basketball scenario (Section~\ref{sec:basket}).}
\label{experiments:metrics:basketball}
\end{table}
\begin{figure}[h!]
\centering
\includegraphics[width=0.85\textwidth]{fig/basket_reinvest_fractional.pdf}
\caption{Wealth progression of the KellyFrac strategy in the basketball scenario (Section~\ref{sec:basket}).}
\label{fig:basket}
\end{figure}
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c|c|c|}
\hline
\textit{\textbf{strategy}} & $median(W_f)$ & $mean(W_f)$ & $min(W_i)$ & $max(W_i)$ & $sigma(W_f)$ & $ruin$ \%\\
\hline
Kelly & 2.3e-09 & 5.2e-08 & 1.6e-21 & 5844.2 & 2.7e-07 & 100 \\
\hline
MSharpe & 1.8e-10 & 3.0e-07 & 5.9e-27 & 2617 & 4.2e-07 & 100 \\
\hline
KellyFrac & 10.05 & 11.8 & 0.03 & 182 & 9.7 & 0 \\
\hline
MSharpeFrac & 9.9 & 13.6 & 0.016 & 211 & 9.1 & 0 \\
\hline
KellyFracMax & 10.03 & 11.2 & 0.007 & 144 & 9.2 & 0 \\
\hline
MSharpeFracMax & 10.1 & 13.1 & 0.005 & 193 & 8.7 & 0 \\
\hline
KellyDD & 10.25 & 12.4 & 0.09 & 122 & 9.3 & 0 \\
\hline
KellyDR & 6.2 & 7.3 & 0.28 & 27.7 & 5.6 & 0 \\
\hline
\end{tabular}
\end{center}
\caption{Final wealth statistics of the strategies in the football scenario (Section~\ref{sec:football}).}
\label{experiments:metrics:football}
\end{table}
\begin{figure}[h!]
\centering
\includegraphics[width=0.85\textwidth]{fig/football_reinvest.pdf}
\caption{Wealth progression of the KellyDD strategy in the football scenario (Section~\ref{sec:football}).}
\label{fig:football}
\end{figure}
\vspace{-1cm}
Firstly, the results of our experiments confirm that the regularly used informal betting strategies (Section~\ref{sec:strat:informal}) are clearly inferior to all the formal strategies, in agreement with previous reports~\citep{hubavcek2019exploiting}. Moreover, they often lead to ruin even in situations where the model has a statistical advantage, as reported for the horse racing dataset in Table~\ref{experiments:metrics:horses}, which is why we do not include them in the further comparisons.
As expected, the formal strategies based on Modern Portfolio Theory (MPT) (Section~\ref{sec:MPT}) and the Kelly criterion (Section~\ref{sec:kelly}) performed reasonably in the setting with the statistical advantage $A_{KL}$ of having a more precise model. However, since they are based on unrealistic mathematical assumptions, their actual risk profile might be unexpected in practice. Using any of the proposed practices for additional risk management (Section~\ref{sec:risk}) generally led to a considerably lower volatility while keeping the wealth progression of a typical (both mean and median) bettor reasonably high. Also, following the mathematical properties of the pure form of both strategies, they both lead to certain ruin in scenarios without the statistical $A_{KL}$ advantage of the model, which is exhibited in practice, too (Table~\ref{experiments:metrics:basketball}, Table~\ref{experiments:metrics:football}).
On the other hand, a smart strategy modification can generate profits even in the statistically disadvantageous scenarios as measured by the $A_{KL}$. Naturally, this does not however hold universally and particular properties of the underlying models must be considered, too, since there are surely disadvantageous scenarios where no strategy can make profits by any means (Example~\ref{ex:coin1}).
The insights from the experiments regarding the discord between the approaches of MPT and Kelly roughly follow the intuitions behind the individual strategies. That is, the strategies based on the Kelly criterion (Section~\ref{sec:kelly}) result in a generally higher \textit{median} final wealth, while the strategies based on MPT (Section~\ref{sec:MPT}) result in a generally higher \textit{mean} final wealth, corresponding to their underlying expected value-based motivation. Interestingly, in the football dataset (Table~\ref{experiments:metrics:football}) the mean final wealth performance of MPT is slightly lower than that of the Kelly-based strategies. However, we should note that the hyperparameter selection criteria (Section~\ref{sec:hyperpar}) can also be considered slightly biased in favor of the Kelly approaches.
From a practical perspective, the drawdown modification of the Kelly criterion (Section~\ref{sec:drawdown}) performed very similarly to the much less sophisticated fractional approach (Section~\ref{sec:fractional}), further supporting its popular use in practice. While the distributionally robust modification of Kelly (Section~\ref{sec:dro}) achieved generally the lowest final wealth scores, it was also the overall most stable strategy with the highest minimal final wealth. This is in complete accordance with its pessimistic underlying setting optimizing for the worst-case scenario, which might be appealing to highly risk-averse bettors.
\section{Conclusions}
\label{sec:conclusion}
In this experimental review, we investigated the two most prominent streams of betting investment strategies based on the views of the Modern Portfolio Theory and the Kelly criterion, together with a number of their popular modifications aimed at additional risk management in practice, where their original underlying mathematical assumptions do not hold. We tested the strategies on 3 large datasets from 3 different sport domains of horse racing, basketball, and football, following a strictly unified evaluation protocol to provide unbiased estimates of performance of each method while tuning their degrees of freedom.
The results of our experiments suggest the superiority of the formal mathematical approaches over the informal heuristics often used in practice; however, the experiments also revealed the weaknesses of the formal strategies stemming from their unrealistic mathematical assumptions, particularly the knowledge of the true probability distribution over the match outcomes, calling for additional risk management practices. The final modifications of the strategies suggest that reasonable trade-offs in wealth progression can be found with an appropriate risk-control technique, even in scenarios with somewhat imprecise predictive models.
\label{sect:bib}
\printbibliography
\end{document}
\section*{Response Letter}
\subsection*{Reviewer: 1 Comments to the Author}
\subsubsection*{* Global evaluation}
\textit{The paper is a comprehensive review of some betting strategies based on the Modern portfolio theory and the Kelly criterion. The paper is globally well written and the distinct betting strategies are properly introduced. The methods' review is detailed, and the experimental part has been carefully conducted and described.
Though, I find the Introduction quite lacking of references: I would invite the authors to massively extend it, by using/recycling and extending some parts actually contained in Section 3.
Moreover, the Conclusion section is in my opinion too shortly outlined: as a possible suggestion, the authors could try to state which ones among the formal strategies (methods in Table 5,6, and 7) could be satisfactorily adopted and under which circumstances one or another method could be favorable. In a way, the authors could provide a sort of general and practical guideline to the bettors interested in horse racing, football or basketball, by remarking some of the arguments raised in Section 6.3.}
\begin{itemize}
\item I find the Introduction quite lacking of references: I would invite the authors to massively extend it, by using/recycling and extending some parts actually contained in Section 3.
\blue{We have significantly extended the related works Section \ref{sec:related} with both papers referred by the reviewers and additional related works. We have however kept the specific related work in the respective section, while keeping the introduction on a general note.}
\item The conclusion Section is in my opinion too shortly outlined: as a possible suggestion, the authors could try to state which ones among the formal strategies (methods in Table 5,6, and 7) could be satisfactorily adopted and under which circumstances one or another method could be favorable. In a way, the authors could provide a sort of general and practical guideline to the bettors interested in horse racing, football or basketball, by remarking some of the arguments raised in Section 6.3. \blue{The conclusion Section \ref{sec:conclusion} now includes suggestions and guidelines on what methods are preferable under which circumstances.}
\end{itemize}
\subsubsection*{* Some minor edits}
\begin{itemize}
\item Introduction, lines 29-32: although predictive models are not the core of this paper, I would suggest to include and cite at least some works who attempted to propose betting strategies starting from a predictive model. A short (not exhaustive) list of papers is here provided:
\begin{itemize}
\item Dixon and Coles, 1997. Modelling association football scores and inefficiencies in the football betting market.
\item Rue and Salvesen, 2000. Prediction and retrospective analysis of soccer matches in a league.
\item Groll and Abedieh, 2013. Spain retains its title and sets a new record–generalized linear mixed models on European football championships.
\item Egidi, Pauli and Torelli, 2018. Combining historical data and bookmakers' odds in modelling football scores.
\end{itemize}
\blue{Related works Section \ref{sec:related} has been extended with prediction models, the referred and additional related papers have been included.}
\item Page 1, line 37: ``known'' in place of ``know'' \blue{Corrected.}
\item Page 2, line 16: you claim that ``each result is binary in nature'', but this sentence is confusing in my opinion. In the paper, you report many examples in which the result is not binary.
\blue{We added a clarification note -
``Note this concerns an individual $r_i$ and not $|\mathrm{R}|$, i.e. a match can have many possible outcomes $r_i$, but each $r_i$ has a binary realization, resulting exclusively in either win or loss of the bet associated with it.''}
\item Page 3, line 25: after ``Heads'', punctuation is missing. \blue{Corrected.}
\item Page 9, line 25: maybe ``trade-off''? \blue{Corrected.}
\item Page 10, line 37: ``known'' in place of ``know''. \blue{Corrected.}
\item Page 14, line 44: $acc_p$ in place of $acc_b$. \blue{Corrected.}
\item Tables 2, 3, and 4: what is $m-acc$? And $b-acc$ is the same as $acc_b$ listed at page 14? \blue{Yes, it is the same. It has been corrected with a unified notation.}
\end{itemize}
\subsection*{Reviewer: 2 Comments to the Author}
\textit{I very much enjoyed reading the paper. It is certainly of interest to anyone working in sports betting. The authors have identified an area that needs discussing and present results of their experiments using different strategies for betting on different sports.
\\\\
I have some comments and suggestions (below), but overall, I think the paper requires only minor revisions before being ready for publication.}
\begin{itemize}
\item Is the level of mathematical rigour given in Section 2 needed? This is a judgement call, but it is a little heavy going on terminology that isn't used later in the paper. \blue{We have removed parts that are not that relevant for the paper (e.g. the cases of the bookmaker's margin).}
\item p2, line 27: is it true that bookmakers are maximizing long-term profits? Is it possible they are balancing the books and basically making the over-round off the bettors? Or is this one and the same thing? \\
\blue{Yes, making money from the over-round is not in contradiction with maximizing their long-term profits. But with predictions better than that of an average tipster, they can make more money than just from the over-round. And they need good predictions to lay out the opening odds anyway. Moreover, purely reactive balancing can only work on markets with very high liquidity/volume of bets, and could be quite risky/exploitable otherwise.}
\item p2, line 40: maybe mention betting exchanges as the less common setup. \blue{Done.}
\item p2, line 45: is it a little cumbersome to have used $f$ for the fraction bet above, and now be using it for the function? \blue{Corrected -- the function is now denoted by $g$ and $\bm{f}$ stands exclusively for the fraction vector/portfolio.}
\item p2, line 52: why is $\hat{p}$ necessarily different from the true probability? \blue{We have moderated and clarified these statements in the paper. The estimates can be perfect in scenarios with artificial randomness generators (Section~\ref{sec:def:estimates}), but in the domain of sports betting we consider, the true probability is never known, and so this case is of purely theoretical interest.}
\item p3, line 32: why do you need to express the inverse values like this? Is it not simpler to just write $\frac{1}{o_i}$? \blue{We made corrections to clarify that we meant to specify the whole distribution $P_b$, which is later used in the equation below.}
\item p3, equation 2.11: typo - $r_j$ should be $o_j$ I think. \blue{You are right, corrected.}
\item p4, line 28: why are the estimates biased? They can be unbiased surely. \\
\blue{We have moderated the claims (this follows the same reasoning as for the perfect estimates 3 bullets above) -- since in sports betting the true probability distribution is principally unknown, the unbiased case is of purely theoretical interest. If necessary, one can also simply imagine that the bias is zero. The particular bias of the player here is simply part of the example assumptions.}
\item p10, line 34: should you reference the original Kelly paper. \blue{Corrected.}
\item p10, line 37: ``know'' should be ``known''. \blue{Corrected.}
\item p11, lines 14-16: I don't think the reader is ever given an indication of how unrealistic these assumptions are. Further, the experimental results, don't reveal how much these assumptions contribute to the lessening of the expected performance of the betting strategies. I think these points (and the impact of the assumptions) could be discussed in the conclusions of the paper. \blue{Knowing the true probability distribution is extremely unrealistic, as discussed above (and in the paper). Consequently in the experiments, the vanilla formal strategies often lead to ruin, as opposed to maximal profit. We extended the conclusion with discussion to further clarify this.}
\item p12, line 12: missing ``out'' after ``carried''. \blue{Corrected.}
\item p14, third bullet point should be ``$acc\_p$''. \blue{Corrected.}
\item p17, line 43: the tables are labelled in an odd order, and the figures are all 6.3. \blue{Apologies, corrected.}
\item p18, table 5: can the betting strategies be given more intuitive names. Even a description would help the reader. I found myself having to look at the previous table to get the descriptions. \blue{Unfortunately, there is not enough space in the table for a more detailed description. However, we tried our best in the naming and at least expanded the abbreviations -- the \textit{KellyDD} strategy has been renamed to \textit{KellyDrawdown} and \textit{KellyDR} to \textit{KellyRobust}.}
\item p20, line 53: ``degrees of freedom'' – can/should it be ``hyperparameters'' since ``degrees of freedom'' are not mentioned anywhere. \blue{Corrected.}
\end{itemize}
\subsection*{Guest Editor Comments to the Author:}
\textit{Both referees are positive for this work. Please revise your manuscript according to their comments and suggestions. Regarding Section 2, I would personally prefer to leave the details. May be trimming it a little bit might be the optimal solution.} \blue{Slightly trimmed.}
\subsection*{Editor comments}
\begin{enumerate}
\item Please use English spelling variations throughout. \blue{Corrected.}
\item Also, for continuity, consider adding citations to related work that has been published in this journal e.g.
\begin{enumerate}
\item Markowitz portfolio theory for soccer spread betting, Alistair D. Fitt (2009)
\item Kelly's fractional staking updated for betting exchanges, Edmund Noon, William J. Knottenbelt, Daniel Kuhn (2013)
\item Using statistics to detect match fixing in sport, David Forrest, Ian G McHale (2019)
\item Uses and limitations of mathematics in sport, John Haigh (2009)
\end{enumerate}
\blue{The referred papers have been reviewed and mostly added with additional related papers into the related works Section~\ref{sec:related}.}
\end{enumerate}
\newpage
\title{Optimal sports betting strategies in practice: an experimental review}
\author{}
\maketitle
\begin{abstract}
{We investigate the most prominent streams of approaches to the problem of sports betting investment based on the Modern portfolio theory and the Kelly criterion. We define the problem settings, the formal strategies, and review their modifications for additional risk control stemming from \rev{their} unrealistic mathematical assumptions that are not met in betting practice. We test the resulting methods following a unified evaluation protocol in 3 different sport\rev{s} domains of horse racing, basketball and football. The results are generally in \rev{favour} of the formal approaches while suggesting for the individual benefits of the additional risk control practices together with their trade-offs.}
{sports betting, betting strategies, risk management, bankroll management}
\end{abstract}
\section{Introduction}
\label{sec:intro}
Sports betting systems generally consist of two essential components \rev{--} (i) predictive models, generating probabilistic estimates for the given match outcomes, and (ii) bankroll management strateg\rev{ies}, optimizing the expected progression of wealth in time. In this work, we focus solely on the latter.
While much of the available research on betting systems is \rev{centred} around the predictive \rev{modelling} part, often completely neglecting the need for betting portfolio optimization, we show that, given a predictive model, the betting strategy has a major influence on the final measures of profit. Consequently, a worse model with a better strategy can easily outperform a better model with a worse strategy.
Lacking a deeper understanding of the investment part of the problem, practitioners often resort to trivial practices such as various forms of flat betting. We show that these are inferior to the formal strategies, not just theoretically but also from \rev{a} practical perspective. There are two basic streams of research in the formal approaches, stemming from information theory and economics, respectively. The first, and the most widespread, is the Kelly criterion\rev{~\citep{kelly1956new}}, also known as the geometric mean policy, maximizing the expected long-term growth of wealth. The second is the approach of Markowitz's Modern portfolio theory\rev{~\citep{markowitz1952portfolio}}, balancing the criteria of expected profit and \rev{profit} variance as a measure of risk.
While mathematically sound, the formal strategies are based on unrealistic assumptions. The most limiting assumption in their application to sports betting is the knowledge of true probabilities of individual match outcomes. Other complications of the problem include \rev{the} multiplicity of outcomes and parallel matches. We investigate the existing modifications of the formal strategies proposed to address the problems occurring in practice and evaluate them experimentally in 3 different sport\rev{s} domains - horse racing, basketball, and football.
The paper is structured as follows. In Section~\ref{sec:definitions} we define the concept of a betting strategy and the dimensions of the underlying optimization problem. In Section~\ref{sec:related} we review the related work touching different facets of risk and bankroll management in betting. In Section~\ref{sec:strategies} we formally introduce the two core strategies of Kelly and Markowitz. The modifications of the core strategies proposed to manage the extra risk occurring in practice are then introduced in Section~\ref{sec:risk}. Finally, we experimentally evaluate the strategies in practical scenarios in Section~\ref{sec:experiments} and conclude the paper in Section~\ref{sec:conclusion}.
\section{Problem Definition}
\label{sec:definitions}
In its core, sports betting is a simple stochastic game where the player $p$ repeatedly allocates a distribution of \textit{fractions} ($f_i \in [0,1],~\sum_{i}f_i \leq 1$) of her current bankroll $W \in \mathbb{R}$ at time $t \in \mathbb{N}$ over possible stochastic results $r_i \in \mathrm{R}$ of a match, coming from a distribution $P_r(r_i)$ over the domain $\mathrm{R}$ of the random variable $R$, describing all the possible outcomes of the given match at time step $t$. Each of the possible match outcomes $r_i$ is then associated with \rev{so-called} \textit{odds} ($o_i \in \mathbb{R}_{\geq 1}$) by the bookmaker $b: r_i \mapsto o_i$. Should a particular outcome $i$ be realized \rev{(}${R}=r_i$\rev{)}, a payoff $o_i \cdot f_i \cdot W$ from the associated odds and fraction is to be received by the player $p$. In the opposite case, the player loses the allocated portion $f_i \cdot W$ of her bankroll to the bookmaker $b$.
Each of the particular \rev{betting} outcomes $r_i$ is \rev{thus} binary\footnote{\rev{Note this concerns an individual $r_i$ and not $|\mathrm{R}|$, i.e. a match can have many possible outcomes $r_i$, but each $r_i$ has a binary realization, resulting exclusively in either win or loss of the bet associated with it.}} in nature, and the potential net profit $w_i$ from allocation on the $i$-th outcome is thus
\begin{equation}
w_i =
\left\{
\begin{array}{lll}
o_i \cdot f_i \cdot W - f_i \cdot W ~~& \mbox{with prob. $P_r(r_i)$} &\mbox{(if $\mathrm{R}=r_i$ is realized)} \\
- f_i \cdot W ~~& \mbox{with prob. $1-P_r(r_i)$} &\mbox{(if $\mathrm{R} \neq r_i$)}
\end{array}
\right.
\end{equation}
giving an expectation
\begin{equation}
\EX_{P_r}[w_i] = P_r(r_i) \cdot (o_i f_i W - f_i W) + (1-P_r(r_i)) \cdot (- f_i W)
\end{equation}
Clearly, the profits of the bettor and bookmaker are directly opposite and, assuming a closed system of bettors and bookmakers, this is \del{thus} a zero-sum game. The goal of both the player $p$ and the bookmaker $b$ is to maximize their long-term profits $W_{t \to \infty}$ as measured by their respective utilities (Section~\ref{sec:strategies}). Given the stochastic nature of the game, the natural desideratum of the player is to allocate the fractions $\bm{f} = f_1, \dots, f_n$ so as to target a high total expect\rev{ation of profit} $\mathrm{W}$
\begin{equation}
\EX_{P_r}[\mathrm{W}] = \EX_{P_r} \bigg[\sum_i w_i \bigg] = \sum_i \EX_{P_r} [w_i]
\end{equation}
Note that, in this work, we assume the two players to take on the asymmetric roles of market maker $b$ and market taker $p$, where the bookmaker $b$ always starts by laying out the odds $\bm{o} = [o_1, \dots, o_n]$ for the possible match results $\bm{r} = [r_1, \dots, r_n]$ first, to which the player $p$ then reacts with her best allocation policy $r_i \mapsto f_i$ over her current wealth $W_t$. In contrast to e.g. the \rev{less common} betting exchange setting, in this work we assume solely the strategies for the role of the market taker $p$, which is the most common setup for bettors in practice.
\subsection{Betting Strategy}
\label{sec:def:strategy}
A player's betting strategy for a game with $n$ outcomes is a \rev{function $g$} mapping a set of probabilistic estimates $\hat{\bm{p}} = \hat{p_1}, \dots,\hat{p_n}$ and bookmaker's odds $\bm{o} = o_1, \dots, o_n$ onto a set of fractions $\bm{f} = f_1, \dots, f_n$ of the current wealth $W_t$ to be waged \del{on each of} \rev{over} the game outcomes $\bm{r} = r_1, \dots, r_n$
\rev{
\begin{align}
g &: (\hat{\bm{p}}, \bm{o}) \mapsto \bm{f}
\end{align}
}
Typically, the estimated distribution vector $\hat{\bm{p}}$ comes from a probabilistic model $P_p$ of the player and is similar to, yet \rev{most likely} different from, the \rev{(unknown)} true probability distribution $P_p = \hat{P_r},~P_p \neq P_r$ \rev{(Section \ref{sec:def:estimates})}.
The vector of the waged fractions $\bm{f}$ is then often referred to as the \textit{portfolio} over individual ``assets'' $i$ (Section~\ref{sec:MPT})
\begin{equation}
\bm{f} =
\begin{bmatrix}
f_1, \dots, f_n
\end{bmatrix}
\end{equation}
where $f_i$ indicates the portion of wealth $W_t$ allocated to the $i$-th outcome.
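For concreteness, the following minimal Python sketch (our own illustration, not tied to any particular implementation) shows the generic interface of a strategy $g$, here instantiated with a trivial placeholder allocation:

\begin{verbatim}
import numpy as np

def strategy_uniform(p_hat, odds):
    """A betting strategy g: maps probability estimates and odds
    to a portfolio f of wealth fractions summing to (at most) one."""
    n = len(p_hat)
    return np.full(n, 1.0 / n)  # trivial placeholder allocation
\end{verbatim}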
\subsection{Fixed Odds}
\label{sec:def:odds}
We further assume a \rev{so-called} fixed-odds betting setup which, as opposed to e.g. parimutuel setting~\citep{hausch2008efficiency}, always offers known odds distribution $\bm{o}$ in advance of the game for the player's strategy \rev{$g$} to calculate with.
In its most basic form, we can demonstrate the given setting on a simple \rev{coin-tossing} game as follows.
\begin{example}
\label{ex:coin1}
Assume a fair \rev{coin-tossing} game with two, equally probable, outcomes $\mathrm{R} =\{Heads, Tails\}$
\begin{equation}
\underset{r_i \in \mathrm{R}}{P_r(r_i)} =
\left\{
\begin{array}{ll}
0.5 & \mbox{for the coin falling } r_1 = \textit{Heads} \\
0.5 & \mbox{for the coin falling } r_2 = \textit{Tails}
\end{array}
\right.
\end{equation}
The odds by the bookmaker $b$ could then be set up e.g. as follows
\begin{equation}
\underset{r_i \in \mathrm{R}}{b(r_i)} =
\left\{
\begin{array}{ll}
o_1 = 1.9 & \mbox{for the coin falling } r_1 = \textit{Heads} \\
o_2 = 1.9 & \mbox{for the coin falling } r_2 = \textit{Tails}
\end{array}
\right.
\end{equation}
Let the bettor allocate a fixed wager, such as \$1, on the $r_1=Heads$.
She then receives an extra $w_1 = (1.9 - 1) * 1$ profit if the associated outcome $r_1=Heads$ is realized, or loses the placed wager \$1 otherwise.
It is easy to see that this particular game is generally disadvantageous for the bettor, and there exists no strategy for her to make long-term profits, since the expected profit for each outcome of the game is simply negative:
\begin{equation}
\EX[w_1] = \EX[w_2] = 0.5 \cdot (1.9 - 1) \cdot 1 + 0.5 \cdot (-1) = -0.05
\end{equation}
This \del{is caused by} \rev{follows directly from} the fact that the odds are \textit{unbiased} and \textit{subfair}. This means that \rev{the distribution of their inverse values $P_b : r_i \mapsto \frac{1}{o_i}$ is} proportional to the true probability distribution over the game outcomes, but \del{they do} \rev{it does} not form a \textit{probability} distribution as \rev{the values} do not sum up to $1$:
\begin{equation}
\sum_i{P_b(r_i)} = \frac{1}{o_1} + \frac{1}{o_2} \approx 1.05
\end{equation}
\end{example}
Out of the three settings~\citep{cover2012elements}: \textit{fair, subfair, superfair}, the \textit{subfair} odds are typically the only setting for a bookmaker to be able to generate profits. We will further limit ourselves to this setting as it is the only valid setup working in practice.
The value of
\begin{equation}
margin = \frac{\sum_{j=1}^K\frac{1}{o_j} -1 }{\sum_{j=1}^K\frac{1}{o_j}}
\end{equation}
is then called the bookmaker's margin\footnote{Often wrongly calculated as simply the remainder \rev{over $1$ as $\sum_{j=1}^K\frac{1}{o_j} -1$}} (also known as ``vigorish'', ``house edge'', ``cut'' etc.), and represents the negative expected value of the game given that the probabilities $P_b$ implied by the odds are unbiased estimates of the true outcome probabilities $P_r$. Note that this is a typical game setting operated in the gambling industry, such as in various casino games, where there is no space for long-term profitable strategies. However, we note that the situation in sports betting is principally different.
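For illustration, the margin implied by a vector of decimal odds can be computed directly from the formula above; a minimal Python sketch:

\begin{verbatim}
def bookmaker_margin(odds):
    """Margin implied by a list of decimal odds (see the formula above)."""
    inv_sum = sum(1.0 / o for o in odds)
    return (inv_sum - 1.0) / inv_sum

print(bookmaker_margin([1.9, 1.9]))  # coin-tossing example: ~0.05
\end{verbatim}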
\subsection{Biased Estimates}
\label{sec:def:estimates}
In Example~\ref{ex:coin1} with a fair coin, both the bookmaker and the bettor knew the true outcome probability distribution (i.e. $P_r(r_1=H)=0.5 ;\ P_r(r_2=T)=0.5$). This setting is very elegant from a mathematical perspective, since one can calculate the exact expected values of profits and consequently derive optimal betting strategies (Section~\ref{sec:strategies}).
Such mathematically optimal strategies can be theoretically applied in artificial environments with handcrafted generators of randomness (e.g. the casinos). However, in the context of sports betting, and other practical settings such as stock market investing, this is generally impossible.
In this experimental review, we thus focus on the scenarios where the probability estimates of both the bookmaker $P_b$ and the player $P_p$ are biased w.r.t. the real outcome probability distribution $P_r$.
Let us consider an extension of the \rev{coin-tossing} game from Example~\ref{ex:coin1} to demonstrate properties of such \rev{a} setting.
\begin{example}
Consider a \textit{biased} \rev{coin-tossing} game where the coin bias is \textit{unknown} to both the bookmaker and the player. Let us \rev{set up} the bias such that
\begin{equation}
\underset{r_i \in {\mathrm{R}}}{P_r(r_i)} =
\left\{
\begin{array}{ll}
0.6 & \mbox{for } r_1 = \textit{H} \\
0.4 & \mbox{for } r_2 = \textit{T}
\end{array}
\right.
\end{equation}
Let us further assume that the player $p$ has a probabilistic model of the game, producing biased estimates $P_p = \hat{P_r}$ as
\begin{equation}
\underset{r_i \in {\mathrm{R}}}{P_p(r_i)} =
\left\{
\begin{array}{ll}
0.55 & \mbox{for } r_1 = \textit{H} \\
0.45 & \mbox{for } r_2 = \textit{T}
\end{array}
\right.
\end{equation}
Finally, assume the bookmaker is also biased with his estimates $P_b = \hat{P_r}, P_b \neq P_p$, according to which he sets up the odds distribution $\bm{o}$, lowered by a margin\footnote{In practice, the distribution of margin would not be simply uniform as in the example, but the bookmaker typically applies more sophisticated distortion of the odds to secure even higher statistical advantage.} $m=0.05$
\begin{equation}
\underset{r_i \in {\mathrm{R}}}{P_b(r_i)} =
\left\{
\begin{array}{ll}
0.65 & \mbox{for } r_1 = \textit{H} \\
0.35 & \mbox{for } r_2 = \textit{T}
\end{array}
\right.
\underset{r_i \in {\mathrm{R}}}{b(r_i)} =
\left\{
\begin{array}{ll}
\frac{1}{0.65} \cdot (1-{0.05}) \approx 1.46 & \mbox{for } r_1 = \textit{H} \\
\frac{1}{0.35} \cdot (1-{0.05}) \approx 2.71 & \mbox{for } r_2 = \textit{T}
\end{array}
\right.
\end{equation}
Note that while the odds are still subfair, the bookmaker's bias w.r.t. $P_r$ now creates space for exploitation, since the true expected values are no longer purely negative.
\begin{equation}
\begin{array}{llll}
\EX_{P_r}[w_1] &=& P_r(r_1) \cdot b(r_1) -1 \approx -0.124 & \text{ for~~ } \mathrm{R}=r_1=H\\
\EX_{P_r}[w_2] &=& P_r(r_2) \cdot b(r_2) -1 \approx 0.084 & \text{ for~~ } \mathrm{R}=r_2=T
\end{array}
\end{equation}
i.e. the punter could make long-term profits if betting appropriate amounts on the $r_2=T$ outcome. However, not knowing the true probabilities $P_r$, the player's calculation of expected values will now be biased, too
\begin{equation}
\begin{array}{lll}
\EX_{P_p}[w_1] &=& P_p(r_1) \cdot b(r_1) -1 \approx -0.197\\
\EX_{P_p}[w_2] &=& P_p(r_2) \cdot b(r_2) -1 \approx 0.22
\end{array}
\end{equation}
Nevertheless, despite the expected values calculated by the punter w.r.t. her $P_p$ estimate \del{are} \rev{being} wrong, in this particular setting she correctly identified the positive expected value in the $r_2=T$ outcome and could theoretically make a profit with an appropriate strategy modification (Section~\ref{sec:risk}).
\end{example}
Generally, $P_p = \hat{P_r}$ and $P_b = \hat{P_r}^{'}$ are \del{always} going to be somewhat biased w.r.t. $P_r$ as well as w.r.t. each other \del{since $P_p \neq P_b$} \rev{(i.e. $P_p \neq P_b$,} as long as \rev{the} player does not simply copy from the bookmaker). The individual biases can be captured by statistical measures, such as the Kullback-Leibler, or better yet Jensen-Shannon, divergences~\citep{cover2012elements}, and the probabilistic setting of each game for a particular match can then be understood as a triplet of probability distributions over the outcomes, as depicted in Figure~\ref{fig:triangle}.
\begin{figure}[t]
\input{triangle.tex}
\centering
\caption{A typical sports betting setting for a game with $n$ outcomes, displaying bookmaker's probabilistic estimates $P_b$ and player's estimates $P_p$, both distanced from the true distribution $P_r$ and from each other.}
\label{fig:triangle}
\end{figure}
\subsection{Multiplicity of Outcomes}
\label{sec:def:outcomes}
So far we have assumed a binary \rev{coin-tossing} game of two possible outcomes. Let us now generalize into an $n$ outcome game, such as throwing a die. This represents most real situations in sports betting, such as the $\mathrm{R} = \{Win,Draw,Loss\}$ outcomes in soccer, or betting on the winner of a horse race with $n$ horses (Section~\ref{sec:datasets}). Moreover, one can potentially assume that the individual game outcomes are no longer exclusive, such as betting on the first $j$ horses, or ``over'' $j$ goals in soccer for multiple different values of $j$.
To make the game representation more compact in such situations, a generic matrix~$\bm{O}$ representation has been proposed~\citep{busseti2016risk}, where the columns of $\bm{O}$ represent the possible outcome assets, and rows represent the possible game results, i.e. joint realizations of all the outcomes. Each individual element in $\bm{O}$ then represents particular odds for each outcome realization.
Additionally, we include an artificial risk-free ``cash'' asset $\bm{c}$, which allows the player to put money aside safely. This also allows us to model situations where leaving money aside costs \rev{a} small fraction of wealth in every turn (caused \rev{e.g.} by inflation), or where the wealth can be increased by some interest rate (e.g. in a savings account).
The betting strategy \rev{$g$} (Section~\ref{sec:def:strategy}) can now thus always allocate the full amount of current wealth $W$ among $n$ available outcome assets, $n - 1$ of which are risky, stochastic assets, and 1 being the added risk-free cash asset as
\begin{equation}
g : (\bm{p}^k, \bm{O}_k^n) \mapsto \bm{f}^n \text{~~~where~~~} \sum_i{f_i}=1
\end{equation}
where $k$ is the number of possible worlds, i.e. there are $k$ possible joint outcome realizations, in our probabilistic game.
Odds for each outcome asset in each of the $k$ world realizations with the respective probabilities $\bm{p} = p_1, p_2, ..., p_k$ can thus be fully specified in the columns $\bm{o_i}$ as
\begin{align}
\bm{O} =
\begin{bmatrix}
\bm{o_1} & \bm{o_2} & ... & \bm{o_{n-1}} & \bm{c}
\end{bmatrix}
~,~\text{where}~~
\bm{o_i} =
\begin{bmatrix}
o_{i, 1} \\
o_{i, 2} \\
... \\
o_{i, k}
\end{bmatrix}
~,~
\bm{c} =
\begin{bmatrix}
1 \\
1 \\
... \\
1
\end{bmatrix}
\end{align}
\begin{example}
Consider a football game, where we assume $3$ outcomes as $\mathrm{R} = \{W, D, L\}$, forming the $3$ asset vectors $\bm{o_w}, \bm{o_d}, \bm{o_l}$, where the bookmaker sets the odds to $o_w, o_d, o_l$, respectively. The odds matrix $\bm{O}$, including the constant cash asset $\bm{c}$, then looks as follows.
\begin{equation}
\bm{O} =
\begin{bmatrix}
\bm{o_w} & \bm{o_d} & \bm{o_l} & \bm{c}
\end{bmatrix}
~~\text{where~}~~
\bm{o_w} =
\begin{bmatrix}
o_w \\
0 \\
0
\end{bmatrix}
,~
\bm{o_d} =
\begin{bmatrix}
0 \\
o_d \\
0
\end{bmatrix}
,~
\bm{o_l} =
\begin{bmatrix}
0 \\
0 \\
o_l
\end{bmatrix}
,~
\bm{c} =
\begin{bmatrix}
1 \\
1 \\
1
\end{bmatrix}
\end{equation}
\end{example}
To simplify notation in further sections, we will also define a modified odds matrix $\bm{\rho}$ corresponding to excess odds, i.e. removing the return amount of the placed wager itself, resulting \rev{in} net profit $\mathrm{W}$ (Section~\ref{sec:definitions}), as
\begin{equation}
\bm{\rho} = \bm{O} - \bm{1}
\end{equation}
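For illustration, the odds matrix of the football example, including the cash asset, and the corresponding excess odds can be assembled as follows (a Python sketch with arbitrarily chosen example odds):

\begin{verbatim}
import numpy as np

o_w, o_d, o_l = 2.1, 3.4, 3.5   # example decimal odds (assumed values)

# rows = possible game results (W, D, L), columns = assets (W, D, L, cash)
O = np.array([[o_w, 0.0, 0.0, 1.0],
              [0.0, o_d, 0.0, 1.0],
              [0.0, 0.0, o_l, 1.0]])
rho = O - 1.0                    # excess odds, i.e. net profit per unit bet
\end{verbatim}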
Note that in the example scenario the outcomes were exclusive, and the ``one-hot'' risky asset vectors reflect their exclusive \del{binary} nature, which considerably simplifies the computation of optimal strategies (Section~\ref{sec:strategies}).
In this review, we generally assume individual matches with exclusive outcomes\footnote{Note that the exclusiveness of outcomes does not hold in the further scenarios with parallel games.} but varying outcome multiplicities (Section~\ref{sec:datasets}) to experimentally assess the properties of the strategies w.r.t. this dimension of the problem.
\subsubsection{Parallel Games}
\label{sec:def:parallel}
To further complicate the game, approaching the real betting setting even more closely, we can consider multiple dice being thrown in parallel, each associated with a particular set of outcomes and odds. Naturally, this reflects the reality of multiple games being open for betting at the same time. In popular sports, such as soccer, it is not uncommon to have dozens of games available on the market simultaneously.
While we can surely consider each of the games separately, such a simplification can lead to sub-optimal results. Although calculating with the true parallel nature of the opportunities can be computationally demanding for some of the strategies (Section~\ref{sec:quadraticapprox}), it should allow to alleviate the risk by diversifying over a wider portfolio at each time step of the wealth progression.
In this review, we consider both the sequential and parallel scenarios to emulate realistic scenarios and evaluate the respective advantages (Section~\ref{sec:experiments}).
\subsection{Betting Dynamics}
\label{sec:def:dynamics}
The betting dynamic represents the investment \rev{behaviour} of the bettor w.r.t. her bankroll $W$ in time $t$, which has a major impact on the progression of wealth. There are two basic cases of bankroll management to be considered \rev{--} (i) additive and (ii) multiplicative~\citep{peters2016evaluating, peters2011optimal}.
\subsubsection{Additive dynamic}
Additive dynamic corresponds to a simple fixed unit-based investment, where the bettor's wagers are decoupled from her current bankroll $W_t$. To illustrate the setting, we can imagine that the bettor receives a fixed unit (e.g. \$1) amount of money from an external source at regular time intervals $\delta t$ (such as a salary), which she repeatedly invests into the stochastic game of betting, and accumulates (additively) the prospective returns $w_t \cdot 1$ from the unit investment in the, separately held, bankroll $W_t$.
Her wealth progression in time $t$ can hence be seen as
\begin{equation}
W_t = w_t \cdot 1 + W_{t - \delta t}
\end{equation}
\subsubsection{Multiplicative dynamic}
\label{sec:multiplicative}
In the multiplicative scenario, the bettor continuously \textit{reinvests} the current wealth $W_t$ accumulated from the previous betting investments, without any external source of profit. Hence her progression of wealth in time $t$ can be seen as
\begin{equation}
W_t = w_t \cdot W_{t - \delta t}
\end{equation}
The multiplicative dynamics plays an important role in the Kelly criterion (Section~\ref{sec:kelly}), where the mathematical optimality of the strategy is derived exactly from \rev{a} repeated play of the same game in the multiplicative setting.
As the comparison of the two approaches appears problematic, due to the external source of profit in the additive scenario, we will further consider only the multiplicative reinvestment setting, which is also more realistic and sound for \rev{an} independent evaluation.
\section{Related works}
\label{sec:related}
The two most notable approaches to allocation of wealth across presented stochastic assets, i.e. match outcomes in sport\rev{s} betting, were introduced by (i)~\cite{markowitz1952portfolio}, with his revolutionary concept of balancing return and risk of a portfolio, and by (ii)~\cite{kellyold}, with a criterion to maximize the long-term growth in a scenario where the same game is being played repeatedly.
Following the Kelly criterion, the process of betting is closely connected to information theory~\citep{kelly1956new}. \rev{\cite{bell1988game} discuss a game-theoretical optimality of Kelly portfolios, and a generalization of the Kelly strategy to maximize the proportion of wealth relative to the total wealth among a population is discussed in~\citep{lo2018growth}.} Additional mathematical properties were also explored in~\citep{latane2011criteria} and~\citep{breiman1961optimal, thorp2008kelly}. From the economic perspective, Kelly's approach is often explained through the use of a logarithmic utility function, which was famously first introduced by Daniel Bernoulli in~\citep{bernoulli2011exposition}, where he pointed out that people do not make their decisions according to the absolute payoff, but w.r.t. the logarithm thereof. \rev{In~\citep{luenberger2011preference} the authors suggest that, assuming long-term goals, the logarithmic utility function is the only sensible choice for a utility function.} While not necessarily incorrect, the phenomenological explanation of the choice of the logarithmic utility function seem\rev{s} somewhat arbitrary, however.
In \citep{peters2011time} a different view on the Kelly criterion was proposed, where the author criticized the established evaluation of betting using the expected value of a portfolio, as it is based on the unrealistic idea of ``simultaneous'' evaluation of the, often exclusive, outcomes. Instead of measuring \rev{the} mean of a statistical ensemble of possible outcomes, the author proposed to focus on what happens to a single player as the same game is repeated in time, following the notion of ergodicity in dynamic systems~\citep{peters2019ergodicity}. The logarithmic transformation then emerges as the correct ergodic transformation of dynamics of the game in the classical reinvestment setting~\citep{peters2016evaluating}, providing a well-founded explanation for the observed phenomenon.
Given the mathematically elegant yet somewhat unrealistic setting, the Kelly strategy has also been often criticised in many works~\citep{samuelson1971fallacy, samuelson2011we, maclean2010good, samuelson1975lifetime}.
\subsection{Extensions of the formal strategies}
\label{sec:related:extensions}
The strategies of Markowitz and Kelly have been re-explored by researchers in a number of different application scenarios and many useful modifications have been proposed since. Generally, Markowitz's approach has traditionally dominated the world of quantitative finance, while Kelly's approach has been more prominent in the sports betting industry. In~\citep{smoczynski2010explicit}, a closed-form solution for the use of the Kelly strategy when betting on horse racing was explored. Another practical extension for betting on multiple simultaneous games was discussed in a number of works~\citep{whitrow2007algorithms, grant2008optimal, buchen2012comparison}, where \rev{various} approximations for large bet aggregations were proposed.
\rev{
Modification of the Kelly strategy for betting exchanges is discussed in~\citep{noon2013kelly}, where adjustments for both back and lay bets are presented. Additionally, the effect of commission and maximum bet constraint on the resulting growth rate is discussed. The Kelly problem is examined for spread betting in~\citep{chapman2007kelly} and in \citep{haigh2000kelly}, where several counterintuitive effects are discussed when using the Kelly strategy for spread betting. Markowitz's modern portfolio theory for soccer spread betting is then discussed in~\citep{fitt2009markowitz}.
}
Another important stream of research consists of works investigating extensions of the Kelly strategy towards the realistic setting of parameter uncertainty, such as~\citep{baker2013optimal}. A practical method to address the problem is the use of \rev{so-called} fractional Kelly strategies, the properties of which have been investigated in great detail in the works of~\citep{maclean2011medium} and \citep{maclean1992growth}. \rev{\cite{peterson2017kelly} presents a decoupled Kelly strategy combined with an additional risk measure. \cite{kan2007optimal} introduced an optimal portfolio choice under parameter uncertainty for the modern portfolio theory (MPT).}
Interesting modifications with similar aims are Bayesian extensions of the Kelly strategy proposed in \citep{browne1996portfolio, balka2017kelly, chu2018modified}. Similarly, approaches based on probabilistic risk constraints for limiting the probability of a ``drawdown'' were discussed in \citep{busseti2016risk} and \citep{mulvey2011dynamic}. Finally, limiting the \rev{worst-case} probabilistic scenario using the framework of distributionally robust optimization was explored in \citep{sun2018distributional} and in \citep{blanchet2018distributionally} for the Markowitz's strategy, respectively.
\subsection{Predictive modelling}
\label{sec:related:model}
\rev{
Since we consider predictive sports modelling a separate problem, we only briefly review some papers on the topic, with an extra focus on models related to those used for experiments in this paper.
}
\rev{
A traditional stream of research in predictive sports analytics consists of score-based models built on various explicit statistical assumptions. The football prediction model introduced by~\cite{maher1982} builds on the assumption that in a football match the goals are Poisson-distributed and those of the home team are independent of those of the away team. The author also introduced the notion of teams' attacking and defensive strengths and how to use them for forecasting match results. In~\citep{dixon1997}, Maher's model is further extended and shown to make a profit when combined with a simple betting strategy. The authors also used exponential time weighting to discount the effects of past results, while in~\citep{maher1982} the strength of the team is considered to be time-invariant. In~\citep{rue2000}, the authors used a Brownian motion to bind together the strength parameters of the teams in consecutive rounds. The model is then used for betting with a variant of the MPT strategy. \cite{egidi2018combining} presents a hierarchical Bayesian Poisson model with the scoring rates of the teams being represented by convex combinations of parameters estimated from historical data and betting odds. In \citep{groll2013spain} the authors analyze the explanatory power of bookmakers' odds using a pairwise generalized linear mixed Poisson model.}
\rev{
Another modern approach for match outcome predictions are non-parametric and feature-based machine learning models.
\cite{Haghighat2013} provides a review of machine learning techniques used in outcome predictions of sports events while pointing out some common problems and misconceptions.
In the horse racing domain, a popular logit-based model, combining both ``fundamental features'' and ``odds-derived'' features into a single prediction system, was presented by~\cite{benter2008computer}. This model was also a strong inspiration for the horse racing model evaluated in this paper.
In the domain of soccer, a recent review~\citep{hubacek2019score} discusses a diversity of the common approaches. Notable examples include models from the 2017 Soccer Prediction Challenge~\citep{dubitzky2019}. The winning model from the challenge utilized a boosted tree learner based on an ensemble of score-derived features and simpler ranking and statistical models~\citep{hubacek2019}. This model was also directly used for the soccer betting experiments reported in this paper.
In predictive basketball modelling, it is common to use detailed box-score statistics that are available for the high exposure leagues. Based on diverse features, \cite{Miljkovic2010} evaluated their model on the NBA, while \cite{Ivankovic2010} used a neural network to predict match outcomes in the League of Serbia. An advanced convolutional neural architecture was then learned over the, so far, largest set of basketball games in~\citep{hubavcek2019exploiting}. We again directly utilize this basketball model in this paper.
}
\section{Betting Strategies}
\label{sec:strategies}
In the existing literature, the betting strategies range from simple informal techniques, such as flat betting, to the formal approaches, represented mainly by Markowitz's Modern portfolio theory~\citep{markowitz1952portfolio} and the Kelly criterion~\citep{kelly1956new}, coming from economic and information-theoretic views of the problem, respectively.
\subsection{Informal Strategies}
\label{sec:strat:informal}
In sports betting practice, most of the punters' focus is put on the search for outcomes with positive expected value (``value bets''), while the importance of the subsequent investment strategy has often been neglected. Consequently, rather than formal strategies, one can encounter simplistic heuristics such as~\citep{hubacek2017thesis}:
\begin{itemize}
\item Bet a fixed fraction on favourable odds.
\item Bet a fixed fraction on the opportunity with maximal expected value.
\item Bet a fraction equal to the absolute discrepancy between player's and bookmaker's estimates.
\item Bet a fraction equal to the relative discrepancy between player's and bookmaker's estimates.
\item Bet a fraction equal to the estimated probability of winning.
\end{itemize}
Lacking any formal foundation, these approaches have been shown to be generally inferior to the formal strategies, both theoretically and in practice~\citep{hubacek2017thesis}. For completeness, we chose to re-validate the reports by selecting the previously best performing informal strategies of (i) betting a fraction w.r.t. the maximal discrepancy (``AbsDisc'') and (ii) betting an optimal fraction on the maximal expected value (``MaxEvFrac'') in our experiments (Section~\ref{sec:experiments}).
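For illustration only, one possible reading of the AbsDisc heuristic is sketched below; the exact parameterization used in practice may differ, and the function name is ours.

\begin{verbatim}
import numpy as np

def abs_disc_bet(p_hat, odds):
    """One possible reading of the AbsDisc heuristic (illustration only):
    bet on the outcome with the largest discrepancy between the player's
    estimate and the bookmaker's implied probability, staking a fraction
    equal to that discrepancy (never negative)."""
    disc = np.asarray(p_hat) - 1.0 / np.asarray(odds)
    i = int(np.argmax(disc))
    f = np.zeros(len(odds))
    f[i] = max(disc[i], 0.0)
    return f
\end{verbatim}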
\subsection{Modern Portfolio Theory}
\label{sec:MPT}
Modern Portfolio Theory (MPT) is a standard economic view of the problem based on the idea of the expected value of the profit, possibly transformed by a utility function reflecting the user's particular preferences. The general idea behind MPT is that a portfolio $\bm{f^1}$, i.e. a vector of assets $\bm{f} = f_1, \dots, f_n$, is superior to $\bm{f^2}$, if its corresponding expected profit (Section~\ref{sec:definitions}) is at least as great
\begin{equation}
\EX[\bm{\rho} \cdot \bm{f^1}] \geq \EX[\bm{\rho} \cdot \bm{f^2}]
\end{equation}
and a given risk measure $risk : \mathbb{R}^n \to \mathbb{R}$ of the portfolio, w.r.t. the given odds, is no greater
\begin{equation}
risk(\bm{f^1}|\bm{\rho}) \leq risk(\bm{f^2}|\bm{\rho})
\end{equation}
This creates a partial ordering on the set of all possible portfolios. Taking the portfolios that no other portfolio is superior to gives us \rev{a} set of ``efficient portfolios'' $\Theta$~\citep{markowitz1952portfolio}. In simple terms, we trade off the expected profit against the risk by maximizing the following
\begin{equation}
\underset{\bm{f} \in \mathbb{R}^n}{\text{maximize}} ~(\EX[\bm{\rho} \cdot \bm{f}] - \gamma \cdot risk(\bm{f}|\bm{\rho}))
\end{equation}
where $\gamma$ is a hyperparameter reflecting the user's preference for risk.
In the most common setup, the $risk$ of a portfolio $\bm{f}$ is measured through the expected total variance of its profit $Var[\bm{\rho} \cdot \bm{f}] = \bm{f}^T\Sigma \bm{f}$, based on the given covariance matrix $\bm{\Sigma}_n^n$ of net profit of the individual assets. Note that in the case of independent outcomes (Section~\ref{sec:def:outcomes}), this reduces to a diagonal matrix with \rev{the} variance of each binary asset\rev{'s} profit, corresponding to the result $r_i$, following from the given odds $o_i$ and the underlying Bernoulli distribution as
$\Sigma(i,i) = \hat{P_r}(r_i) \cdot (1-\hat{P_r}(r_i)) \cdot \rho_{i,i}^2$.
MPT can generally thus be expressed as the following maximization problem
\begin{equation}
\label{eq:MPT}
\begin{aligned}
& \underset{\bm{f} \in \mathbb{R}^n}{\text{maximize}}~
& & \EX[\bm{\rho}\cdot\bm{f}] - \gamma \cdot \bm{f}^T\Sigma \bm{f}\\
& \text{subject to}
& & \sum_{i=1}^{n} f_i = 1, \; f_i \geq 0
\end{aligned}
\end{equation}
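A minimal sketch of the above program in the cvxpy framework (used for the experiments in Section~\ref{sec:ex:protocol}) is given below; it assumes the expected profits and the covariance matrix have already been computed from $\hat{\bm{p}}$ and $\bm{\rho}$, and is an illustration rather than the exact implementation used in our experiments.

\begin{verbatim}
import cvxpy as cp

def mpt_portfolio(mu, Sigma, gamma=1.0):
    """Mean-variance (MPT) portfolio.
    mu    : expected net profit of each asset, e.g. rho.T @ p_hat
    Sigma : covariance matrix of the asset profits
    gamma : risk-aversion hyperparameter"""
    f = cp.Variable(len(mu), nonneg=True)
    objective = cp.Maximize(mu @ f - gamma * cp.quad_form(f, Sigma))
    cp.Problem(objective, [cp.sum(f) == 1]).solve()
    return f.value
\end{verbatim}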
Apart from the variance $Var[\bm{w}]$ of the potential net returns $\bm{w} = \bm{\rho} \cdot \bm{f}$, different risk measures have been proposed~\citep{markowitz1952portfolio}, such as standard deviation $\sigma(\bm{w}) = \sqrt{Var[\bm{w}]}$ and coefficient of variation $CV(\bm{w}) = \frac{\sigma(\bm{w})}{\EX[\bm{w}]}$. Generally, there is no \rev{agreed-upon} measure of risk and the choice is thus left to the user.
The MPT approach is often criticized for the disputable choice of risk, which can be perceived as a formal weakness of the approach~\citep{peters2016evaluating}, since in many domains the risk is not easy to define. Moreover, the direct maximization of expected profit can be misleading in games, where the distribution of potential profits is highly skewed, i.e. where the mean profit is very different from the median. This situation naturally occurs in the multiplicative dynamics setting, where maximization of expected value may lead to undesirable outcomes~\citep{peters2016evaluating}.
\subsubsection{Maximum Sharpe Strategy}
\label{sec:MaxSharpe}
Apart from the choice of the risk measure, the inherent degree of freedom in MPT is how to select a particular portfolio from the efficient frontier $\Theta$ (based on the choice of $\gamma$). Perhaps the most popular way to avoid the dilemma is to select the spot on the Pareto front with the highest expected profit w.r.t. the risk. For the risk measure of $\sigma(\bm{w})$, this is known as the ``Sharpe ratio'', generally defined as
\begin{equation}
\frac{\EX[\bm{w}] - r_f}{\sigma(\bm{w})}
\end{equation}
where $\EX[\bm{w}]$ is the expected return of the portfolio, $\sigma(\bm{w})$ is the standard deviation of the return, and $r_f$ is a ``risk-free rate''. Since there is no risk-free investment in sports betting, we can neglect it and reformulate the optimization problem as
\begin{equation}
\begin{aligned}
& \underset{\bm{f} \in \mathbb{R}^n}{\text{maximize}}
& & \frac{\EX[\bm{\rho} \cdot \bm{f}]} {\sqrt{\bm{f}^{T}\bm{\Sigma}\bm{f}}} \\
& \text{subject to}
& & \sum_{i=1}^{n} f_i = 1, f_i \geq 0
\end{aligned}
\end{equation}
the solution of which we will further refer to as the ``MSharpe'' strategy.
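Since the ratio objective does not directly fit a standard convex program, one simple way to obtain the MSharpe portfolio is a general-purpose numerical solver over the probability simplex; a sketch (our own illustration, not the exact code used in the experiments):

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def max_sharpe_portfolio(mu, Sigma):
    """Numerical sketch of MSharpe: minimize the negative Sharpe ratio."""
    n = len(mu)
    def neg_sharpe(f):
        return -(mu @ f) / np.sqrt(f @ Sigma @ f + 1e-12)  # avoid 0/0
    res = minimize(neg_sharpe, np.full(n, 1.0 / n),
                   bounds=[(0.0, 1.0)] * n,
                   constraints=[{"type": "eq",
                                 "fun": lambda f: np.sum(f) - 1.0}],
                   method="SLSQP")
    return res.x
\end{verbatim}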
The variance-based choices of risk have often been criticized as they penalize excess losses as well as excess returns, which is obviously undesirable. Moreover, the calculation of the MaxSharpe solution is also quite sensitive to errors in the probabilistic estimates, and can often be biased towards extreme solutions, requiring some additional form of control\footnote{E.g. a strategy with no wagers placed would have zero variance resulting in an infinite Sharpe ratio.}. Nevertheless\rev{,} it remains a very popular investment practice, which we include in our experiments.
\subsection{Kelly Criterion}
\label{sec:kelly}
The Kelly criterion\rev{~\citep{kelly1956new, thorp2008kelly}} is based on the idea of expected multiplicative growth in the reinvestment setting (Section~\ref{sec:multiplicative}), so that a portfolio $\bm{f}$ is chosen such that the long-term value of the resulting, continuously reinvested, wealth $W_t$ is maximal (in an infinite horizon of $t$). Note that in this scenario we assume that the same portfolio is going to be presented at each time step. For its multiplicative nature, it is also known as the geometric mean policy, emphasizing the contrast to the arithmetic mean approaches based on the expected value.
The two can, however, be looked at similarly with the use of a logarithmic ``utility function'', transforming the geometric into the arithmetic mean, and the multiplicative into the additive setting, respectively. The problem can then be again expressed by the standard means of maximizing the expected value as
\begin{equation*}
\begin{aligned}
& \underset{\bm{f} \in \mathbb{R}^n}{\text{maximize}}
& & \EX[\log(\bm{O} \cdot \bm{f})]\\
& \text{subject to}
& & \sum_{i=1}^{n} f_i = 1, \; f_i \geq 0
\end{aligned}
\end{equation*}
Note that, in contrast to MPT, there is no explicit term for risk here, as the notion of risk is inherently encompassed in the growth-based view of the wealth progression, i.e. the long-term value of a portfolio that is too risky will be smaller than that of a portfolio with the right risk balance (and similarly for portfolios that are too conservative).
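The program is concave and can be solved directly; a minimal cvxpy sketch, assuming the odds matrix $\bm{O}$ (rows as outcome realizations, columns as assets, including cash) and the estimated probabilities $\hat{\bm{p}}$, with names of our own choosing:

\begin{verbatim}
import cvxpy as cp

def kelly_portfolio(p_hat, O):
    """Kelly-optimal portfolio: maximize the expected log-growth."""
    k, n = O.shape
    f = cp.Variable(n, nonneg=True)
    growth = p_hat @ cp.log(O @ f)   # expected log of the wealth multiplier
    cp.Problem(cp.Maximize(growth), [cp.sum(f) == 1]).solve()
    return f.value
\end{verbatim}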
The calculated portfolio is then provably optimal, i.e. it accumulates more wealth than any other portfolio chosen by any other strategy in the limit of $t$. However, this strong result only holds under considerably unrealistic assumptions~\citep{kelly1956new, thorp2008kelly, peters2016evaluating}. Similarly to MPT, we assume that we know the true probability distribution of game outcomes, and additionally we assume that:
\begin{enumerate}
\item we are repeatedly presented with the same games.
\item we play for an infinite amount of time.
\end{enumerate}
Despite the fact that these conditions are impossible to meet in practice, the Kelly strategy is very popular, and its various modifications (Section~\ref{sec:risk}) are prevalent among bettors in practice.
\subsubsection{Quadratic Approximation}
\label{sec:quadraticapprox}
Exact numerical calculation of the Kelly strategy is often \rev{time-consuming}, especially when numerous runs through a large dataset of games are necessary. A practical approach to this issue has been proposed~\citep{busseti2016risk} based on a quadratic approximation of the Kelly's logarithmic utility using the Taylor series expansion. Let us first recall the following.
\begin{equation}
\log(1+x) = x - \frac{x^{2}}{2} + \dots
\end{equation}
Next, following~\citep{busseti2016risk}, we make an assumption for the Taylor approximation that our net profits are not too far from zero $\bm{\rho}\cdot{\bm{f}} \approx \bm{0}$ and express the logarithmic part of the Kelly criterion as follows~\citep{busseti2016risk}.
\begin{equation}
\log(\bm{O} \cdot \bm{f}) = \log(1 + \bm{\rho} \cdot \bm{f})
\end{equation}
allowing us to proceed with the Taylor expansion as
\begin{equation}
\log(1 + \bm{\rho} \cdot \bm{f}) = \bm{\rho} \cdot \bm{f} - \frac{(\bm{\rho} \cdot \bm{f})^{2}}{2} + ...
\end{equation}
Now taking only the first two terms from the series we transform the expectation of logarithm into a new problem definition as follows
\begin{equation}
\begin{aligned}
& \underset{\bm{f} \in \mathbb{R}^n}{\text{maximize}}
& & \EX[\bm{\rho} \cdot \bm{f} - \frac{(\bm{\rho} \cdot \bm{f})^{2}}{2}] \\
& \text{subject to}
& & \sum_{i=1}^{n} f_i = 1, ~f_i \geq 0
\end{aligned}
\end{equation}
We will further refer to this strategy as the ``Quadratic Kelly''.
Note that, interestingly, the problem can now be rewritten to
\begin{equation}
\begin{aligned}
& \underset{\bm{f} \in \mathbb{R}^n}{\text{maximize}}
& & \EX[\bm{\rho} \cdot \bm{f}] - \frac{1}{2}\EX[\bm{f}^T (\bm{\rho} \cdot \bm{\rho}^T) \bm{f}] \\
\end{aligned}
\end{equation}
corresponding to the original MPT formulation from Equation~\ref{eq:MPT} for the particular user choice of $\gamma=\frac{1}{2}$.
It follows from the fact that the geometric mean is approximately the arithmetic mean minus $\frac{1}{2}$ of variance~\citep{markowitz1952portfolio}, providing further insight into \rev{the} connection of the two popular strategies of Kelly and Markowitz, respectively.
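A corresponding cvxpy sketch of the quadratic surrogate, under the same assumptions and naming conventions as the Kelly sketch above:

\begin{verbatim}
import cvxpy as cp

def quadratic_kelly_portfolio(p_hat, rho):
    """Quadratic (second-order Taylor) approximation of the Kelly objective.
    rho is the excess odds matrix O - 1."""
    k, n = rho.shape
    f = cp.Variable(n, nonneg=True)
    r = rho @ f                      # net profit in each outcome realization
    objective = cp.Maximize(p_hat @ r - 0.5 * p_hat @ cp.square(r))
    cp.Problem(objective, [cp.sum(f) == 1]).solve()
    return f.value
\end{verbatim}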
\section{Risk Management Practices}
\label{sec:risk}
The core issue with the mathematical strategies is that their calculations are carried out as if the true probability distribution over the outcomes was known. Moreover\rev{,} they are often sensitive to even \rev{the slightest} error in the estimates. Here we review simple remedies that have been proposed on top of the original strategies to manage the extra risk stemming from the underlying errors, as well as more sophisticated techniques incorporating the uncertainty of estimates directly into \del{the} computation of \rev{the} strategies.
\subsection{Maximum bet limit}
\label{sec:limit}
Constraining the maximal wager to a fixed value $m$ is probably the most trivial risk-avoiding technique one can encounter, which is also why it is the most prevalent one in practice. Moreover, the maximum bet limit often comes from the side of the bookmaker, too, constraining the risk he undertakes w.r.t. each bettor. We thus include this empirical method in our portfolio to see if saturating the invested amount by a fixed threshold might actually improve the overall wealth progression of the existing strategies if properly tuned.
\subsection{Fractional Approaches}
\label{sec:fractional}
Fractioning is an example of a simple heuristic that is nevertheless very efficient in practice.
The main idea behind any ``fractional approach'' is to bet only a fraction $\omega$ of the calculated portfolio and leave the rest of $1-\omega$ in the cash asset for security. We define such a trade-off index $\omega$ for a portfolio as
\begin{equation}
\bm{f}_\omega = \omega \bm{f}_{1..n-1} + (1-\omega) \bm{f}_n
\end{equation}
where $\bm{f}_{1..n-1}$ corresponds to the risky part of the portfolio with stochastic assets and $\bm{f}_n$ is the cash asset, as introduced in Section~\ref{sec:def:outcomes}.
The fractional approach is mostly used with the Kelly strategy~\citep{maclean2011growth, thorp2011understanding}, where for $\omega = 0.5$ it is famously referred to as ``half-kelly'' by practitioners. \rev{Nevertheless,} the choice of $\omega$ should depend on the actual distributions and preferences for risk. The same idea of taking only a fraction of the calculated portfolio can generally be applied to any strategy, including MPT, and it is overall useful whenever our estimates are erroneous.
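Applying the fraction $\omega$ to an already computed portfolio is straightforward; a short sketch where the last asset is the cash asset and the total allocation is kept at one:

\begin{verbatim}
import numpy as np

def fractional_portfolio(f, omega):
    """Scale the risky part of the portfolio by omega; the remainder
    goes to the cash asset (assumed to be the last element of f)."""
    f_frac = omega * np.asarray(f, dtype=float)
    f_frac[-1] += 1.0 - omega
    return f_frac
\end{verbatim}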
\subsection{Drawdown Constraint}
\label{sec:drawdown}
A drawdown represents a more involved technique that actually modifies the original optimization problem.
The idea of drawdown is to incorporate a special probabilistic constraint into the Kelly strategy so as to push the solution away from the more risky region near the ruin boundary. The choice of the boundary is then left to the user's preference as an input parameter into the optimization problem. The probabilistic boundary is expressed as the following constraint
\begin{equation}
P(W_t^{min} < \alpha) \leq \beta
\end{equation}
expressing that the probability of our wealth falling below $\alpha$ can be at most $\beta$.
For the Kelly criterion, following the calculations from~\citep{busseti2016risk}, the constraint is approximately satisfied if the following condition holds
\begin{equation}
\EX[(\bm{O} \cdot \bm{f})^{-\lambda}] \leq 1 \hspace{5pt} \text{where} \hspace{5pt} \lambda = \log(\beta) / \log(\alpha)
\end{equation}
This we can rewrite as
\begin{equation}
\log(\sum_{i=1}^{n} p_i \cdot (\bm{o_i}\cdot f_i)^{-\lambda}) \leq \log(1)
\end{equation}
which can be further simplified~\citep{busseti2016risk} into the following constraint
\begin{equation}
\log(\sum_{i=1}^{n} \exp(\log(p_i \cdot (\bm{o_i}\cdot f_i)^{-\lambda}))) \leq 0
\end{equation}
which we can finally use in a convex optimization program.
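In cvxpy, the final constraint can be expressed with the log-sum-exp atom; a minimal sketch of the resulting drawdown-constrained Kelly program (an illustration, not the exact code used in our experiments):

\begin{verbatim}
import numpy as np
import cvxpy as cp

def kelly_drawdown_portfolio(p_hat, O, alpha, beta):
    """Kelly portfolio with the probabilistic drawdown constraint."""
    lam = np.log(beta) / np.log(alpha)   # lambda = log(beta) / log(alpha)
    k, n = O.shape
    f = cp.Variable(n, nonneg=True)
    growth = p_hat @ cp.log(O @ f)
    drawdown = cp.log_sum_exp(np.log(p_hat) - lam * cp.log(O @ f)) <= 0
    cp.Problem(cp.Maximize(growth), [cp.sum(f) == 1, drawdown]).solve()
    return f.value
\end{verbatim}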
\subsection{Distributionally Robust Optimization}
\label{sec:dro}
Distributionally robust optimization (DRO) can be understood as a stochastic game between a player and nature, where nature picks a distribution $P_r$ from some predefined ambiguity set of distributions $\bm{\Pi}$ so as to inflict maximum damage to the player's utility. This fits the setting of sports betting against a fixed-odds bookmaker quite naturally, where, given the opposing utilities of both, the bookmaker (nature) sets up the odds so as to minimize the player's chances for profit.
Generally, DRO is \rev{a} paradigm for decision making under uncertainty where:
\begin{enumerate}
\item The uncertain problem inputs are governed by a distribution that is itself subject to uncertainty.
\item The distribution is then assumed to belong to an ambiguity set $\bm{\Pi}$.
\item The ambiguity set contains all distributions that are compatible with the player's prior information.
\end{enumerate}
Being aware of the uncertainty in her own estimates $P_p = \hat{P_r}$, the player now modifies the optimization problem to account for the worst possible scenario within the given ambiguity set $\Pi$.
\begin{equation*}
\begin{aligned}
& \underset{\bm{f} \in \mathbb{R}^n}{\text{maximize}}
& & \underset{\bm{p} \in \bm{\Pi}}{\min} \sum_{i=1}^{n} {p_i} \cdot \log(\bm{O_i} \cdot \bm{f})\\
& \text{subject to}
& & \sum_{i=1}^{n} f_i = 1, \; f_i \geq 0
\end{aligned}
\end{equation*}
The ambiguity set $\bm{\Pi}$ can be defined in a number of ways. In~\citep{sun2018distributional}, multiple definitions are explored in connection to the Kelly strategy, such as Polyhedral, Ellipsoidal, or Divergence based. In this review\rev{,} we further narrow our focus to the polyhedral ambiguity set, referred to as the ``box'' uncertainty set, which can be defined as
\begin{equation}
\bm{\Pi} = \{p_i \hspace{3pt} | \hspace{3pt} |p_i - P_p(r_i)| \leq \eta \cdot P_p(r_i),~\sum_{i=1}^{n} p_i = 1, p_i \geq 0\}
\end{equation}
i.e. constraining each probability $p_i$ to differ by up to a factor of $\eta$ from the nominal player's estimate $P_p(r_i)$ of the probability of result $\mathrm{R}=r_i$.
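Solving the full max-min program requires dualizing the inner minimization; however, for a fixed candidate portfolio the worst case over the box set is just a small linear program, which the following sketch (our own illustration) evaluates:

\begin{verbatim}
import numpy as np
import cvxpy as cp

def worst_case_log_growth(f, p_hat, O, eta):
    """Worst-case expected log-growth of a fixed portfolio f over the
    box ambiguity set around the nominal estimate p_hat.
    Assumes O @ f > 0, e.g. some wealth is kept in the cash asset."""
    log_growth = np.log(O @ f)                 # constants once f is fixed
    p = cp.Variable(len(p_hat), nonneg=True)
    box = [cp.abs(p - p_hat) <= eta * p_hat, cp.sum(p) == 1]
    problem = cp.Problem(cp.Minimize(p @ log_growth), box)
    problem.solve()
    return problem.value, p.value
\end{verbatim}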
\section{Experiments}
\label{sec:experiments}
The main purpose of this review is to assess \rev{the} performance of the individual strategies (Section~\ref{sec:strategies}) and their risk modifications (Section~\ref{sec:risk}) in various realistic settings (Section~\ref{sec:definitions}) on real data.
We recall the used strategies, describe the datasets, evaluation protocol, and discuss the conducted experiments with their results.
The strategies for the experiments were chosen with the aim to represent the diverse portfolio of approaches occurring in practice, with the goal to provide an unbiased statistical assessment of their performance limits. The particular strategies chosen with their respective hyper-parameters are specified in Table~\ref{tab:strategies}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
\textbf{Strategy} & Description & {Hyperparameters}\\
\hline
AbsDisc & absolute discrepancy bet (Section~\ref{sec:strat:informal}) & None \\
\hline
MaxEvFrac & max. EV outcome with fractioning (Section~\ref{sec:strat:informal}) & $\omega \in [0,1]$ \\
\hline
Kelly & original Kelly strategy (Section~\ref{sec:kelly}) & None \\
\hline
MSharpe & original max. Sharpe ratio (Section~\ref{sec:MaxSharpe}) & None \\
\hline
KellyFrac & Kelly strategy with fractioning (Section~\ref{sec:fractional}) & $\omega \in [0,1]$ \\
\hline
MSharpeFrac & max. Sharpe with fractioning & $\omega \in [0,1]$ \\
\hline
KellyFracMax & Kelly with fractioning and limiting (Section~\ref{sec:limit}) & $\omega \in [0,1]$, $m \in [0,1]$. \\
\hline
MSharpeFracMax & max. Sharpe with fractioning and limiting & $\omega \in [0,1]$, $m \in [0,1]$. \\
\hline
KellyDrawdown & Kelly with the drawdown constraint (Section~\ref{sec:drawdown}) & $\alpha$, $\beta \in [0,1]$ \\
\hline
KellyRobust & Kelly with distributionally robust optimization (Section~\ref{sec:dro}) & $\eta \in [0,1]$. \\
\hline
\end{tabular}
\end{center}
\caption{Evaluated strategies and their hyperparameters}
\label{tab:strategies}
\end{table}
\subsection{Datasets}
\label{sec:datasets}
We collected 3 datasets of different properties from 3 different sports -- horse racing, basketball, and football, each containing a significant number of ``matches'' \rev{(races and games)} for statistical evaluation. Each of the datasets is further accompanied with realistic models' predictions tuned specifically for each domain. Since our focus here is purely on the betting strategies, we do not elaborate on the models in detail beyond their predictive performances, which naturally influence the performance of the strategies, too.
For each of the datasets, we present the following key properties.
\begin{itemize}
\item $size$ - Dataset size (i.e. \rev{the} number of matches).
\item $acc_b$ - Accuracy of the bookmaker $b$.
\item $acc_p$ - Accuracy of the player $p$ (i.e. the predictive model).
\item $n$ - Number of possible match outcomes ($n=|R|$).
\item $odds$ - Range of the offered odds.
\item $margin$ - Average margin present in the odds.
\item $A_{KL}$ - Kullback-Leibler advantage of the player.
\end{itemize}
The $A_{KL}$ is a statistical measure of \rev{the} difference of the predictive performances (\rev{cross-entropy}) of the player and the bookmaker, respectively. The metric was chosen as it plays a key role in \rev{the} performance of the original Kelly strategy, where the growth of profit can be proved directly proportional to the KL advantage~\citep{cover2012elements}.
\subsubsection{Horse Racing}
\label{sec:horses}
The data for horse racing were collected from the Korean horse racing market (KRA) and provide $2700$ races. The target market of the dataset is the ``win pool'', representing betting for the horse winning the race. The schedule and participation of individual horses in the races vary considerably. Moreover, there is a varying total number of horses, and thus outcomes $n$, in each race, creating \rev{an} interesting challenge for the strategies. We thus assume each race as a completely independent investment opportunity and optimize the strategies accordingly. The model used was a form of conditional logistic regression over various features of the horses \rev{(Section~\ref{sec:related:model})}. The particular dataset properties are specified in Table~\ref{tab:horses}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c|c|c|}
\hline
\textit{size} & \textit{$acc_p$} & \textit{$acc_b$} & $n$ & $odds$ & $margin$ &$A_{KL}$\\
\hline
$2700$ & $0.512$ & $0.503$ & $\in [6, 16]$ & $\in [1.0, 931.3]$ & $0.2$ & $\approx 0.0022$ \\
\hline
\end{tabular}
\end{center}
\caption{Horse racing dataset properties}
\label{tab:horses}
\end{table}
The specifics of the horse racing dataset come mainly from the fact that it actually originates from a parimutuel market, meaning that the wagers are put into a shared pool from which a certain portion is removed as a profit for the house (margin). Nevertheless\rev{,} we convert it into the discussed fixed-odds setting by assuming the last available state of the money pool to get the possible payoffs/odds~\citep{hausch2008efficiency}. As a result, the ``bookmaker's'' estimate in this case is made up entirely of public opinion, and is noticeably less accurate. On the one hand, this provides space for statistical models to gain a predictive KL-advantage; on the other hand, the margin is also considerably higher.
\subsubsection{Basketball}
\label{sec:basket}
The next domain we selected is basketball, where we collected box-score data from matches in the US National Basketball Association (NBA). The dataset consists of $16000$ games ranging from the year $2000$ to $2015$. The NBA league has a regular schedule of the matches, where each team plays repeatedly with every other team in \rev{so-called} ``rounds''. To emulate the market setting in a realistic fashion, we assume rounds as groups of $10$ scheduled matches to repeatedly appear on the market in parallel (Section~\ref{sec:def:parallel}).
The target market here was the ``money-line'', i.e. betting on the winner of each match. The specifics of the data then comes from the fact that there are only 2 outcomes in the game, directly corresponding to the most basic \rev{coin-tossing} setup of the problem (Section~\ref{sec:definitions}).
The model used was a convolutional neural network based on detailed statistics of the individual players and teams~\citep{hubavcek2019exploiting}. The odds then come from the closing line of the Pinnacle\footnote{https://www.pinnacle.com/} bookmaker. Notice that in this case the model is not as accurate as the bookmaker, and is thus at a general KL-disadvantage. The particular dataset properties are specified in Table~\ref{tab:basket}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c|c| }
\hline
\textit{size} &\textit{$acc_p$} & \textit{$acc_b$} & $n$ & $margin$ & $odds$ & $A_{KL}$\\
\hline
$16000$ & $0.68$ & $0.7$ & $2$ & $0.038$ & $\in [1.01, 41]$ & $\approx -0.0146$\\
\hline
\end{tabular}
\end{center}
\caption{Basketball dataset properties}
\label{tab:basket}
\end{table}
\subsubsection{Football}
\label{sec:football}
The football dataset consists of $32000$ matches collected from various leagues all over the world. The schedule in each football league is similar in spirit to that of \rev{the} NBA, and so we again assume the market setting with $10$ parallel games (Section~\ref{sec:def:parallel}). The target market was again money-line betting. The outcomes in football include a draw, resulting \rev{in} a moderate $n=3$ setting. Interestingly, the original dataset~\citep{dubitzky2019} contained merely the historical results of the matches, and the model has thus been built purely from score-derived features. Particularly, the model was a form of gradient-boosted trees learner, winning the 2017's Soccer prediction challenge~\citep{dubitzky2019}. The odds were again provided by \rev{Pinnacle but, this time, we} took the more \rev{favourable} opening line. Despite varying over different leagues, the overall margin is slightly lower than in basketball, and the model is at a slightly lower, yet still considerable, KL disadvantage. The particular dataset properties are specified in Table~\ref{tab:football}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c|c| }
\hline
\textit{size} &\textit{$acc_p$} & \textit{$acc_b$} & $n$ & $margin$ & $odds$ & $A_{KL}$\\
\hline
$32000$ & $0.523$ & $0.537$ & $3$ & $0.03$ & $\in [1.03, 66]$ & $\approx -0.013$\\
\hline
\end{tabular}
\end{center}
\caption{Football dataset properties}
\label{tab:football}
\end{table}
\subsection{Evaluation Protocol}
\label{sec:ex:protocol}
The models providing the probabilistic estimates were trained following the natural order of the matches in time, so that all of their estimates are actual future predictions, i.e. out-of-sample test outputs for matches unseen in the training phase.
For the actual optimization problems of the individual strategies, we have chosen to work with cvxpy~\citep{cvxpy} as the main optimization framework. For each strategy, we first solved the given problem using the Embedded Conic Solver (ECOS)~\citep{domahidi2013ecos}, and should a numerical problem arise\rev{,} we proceeded to solve the problem using the Splitting Conic Solver (SCS)~\citep{o2016scs}.
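The solver fallback amounts to a few lines; a sketch:

\begin{verbatim}
import cvxpy as cp
from cvxpy.error import SolverError

def solve_with_fallback(problem):
    """Try the ECOS solver first; fall back to SCS on numerical failure."""
    try:
        problem.solve(solver=cp.ECOS)
    except SolverError:
        problem.solve(solver=cp.SCS)
    return problem
\end{verbatim}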
Since many of the chosen strategies (Table~\ref{tab:strategies}) contain hyperparameters to be set, we additionally tuned each of them for the best possible performance via grid search. The individual hyperparameter ranges for the grid search can be found in Table~\ref{tab:strategies}.
To provide an unbiased \rev{estimate} of their actual performance in practice, we also followed a strict evaluation protocol for each of the strategies. This means that we have (i) split each dataset into training and testing subsets, (ii) found the best hyperparameter setting on the training subset, and (iii) evaluated the fixed setting on the test subset.
To make the output profit measures (Section~\ref{sec:metrics}) more robust, both the training and the testing are evaluated by generating $1000$ separate ``runs'' through each subset, where the sequence of games is randomly reshuffled and $10\%$ of games are randomly removed each time (the split between train and test always remains respected). We hence evaluate the properties of each strategy on $1000$ separate wealth investment trajectories through previously unseen games.
\subsubsection{Hyperparameter Selection}
\label{sec:hyperpar}
To choose the best possible strategy setting on the train set, we looked for hyperparameters with the following criteria
\begin{equation*}
\begin{aligned}
& {\text{maximize}}
& & median(\bm{W_{f}}) \\
& \text{subject to}
& & Q_{5} > 0.9
\end{aligned}
\end{equation*}
i.e. we always chose the setting that reached the maximum median final wealth, given that the $5$th percentile of the final wealth $Q_{5}$ stayed above $0.9$, i.e. that no more than $5\%$ of the wealth trajectories ended below $90\%$ of the initial bankroll. Hyperparameter settings that did not meet the required criterion were simply removed from consideration. While the presented hyperparameter selection criteria might seem somewhat arbitrary and could be argued, our aim was to follow the natural desiderata of wealth progression for bettors in practice. That is to mainly prevent the occurrence of ruin (``survival first''), and then maximize the potential profits for the typical (median) bettor.
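Under this reading of the criterion, scoring a single hyperparameter setting reduces to two order statistics over the final wealths of its training runs; a sketch (names and layout are ours):

\begin{verbatim}
import numpy as np

def score_setting(final_wealths, q5_threshold=0.9):
    """Median final wealth of one hyperparameter setting, or None
    if its 5th percentile falls below the required threshold."""
    W_f = np.asarray(final_wealths)
    if np.percentile(W_f, 5) <= q5_threshold:
        return None                  # setting violates the constraint
    return np.median(W_f)
\end{verbatim}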
\subsubsection{Evaluation Metrics}
\label{sec:metrics}
For the actual final evaluation of the strategies on the test set, we chose a range of diverse metrics to provide more insights into the properties of the individual strategies and game settings. The metrics are as follows
\begin{itemize}
\item $median(W_f)$ - median final wealth position.
\item $mean(W_f)$ - mean final wealth position.
\item $min(W_i)$ - lowest wealth position.
\item $max(W_i)$ - maximal wealth position.
\item $sigma(W_f)$ - standard deviation of \rev{the} final wealth positions.
\item $ruin$ \% - ruin percentage of wealth trajectories
\end{itemize}
for which we define a $ruin$ situation as falling below $0.01\%$ of the initial bank $W_0$ at least once during the entire investment period. Note that as opposed to the original definition of ruin in the Kelly strategy~\citep{kellyold}, we have chosen a small \textit{non-zero} threshold, since in practice there is always some small amount of money below which one is effectively unable to place \rev{the} minimal bet, a constraint often present in the market.
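Given a matrix of simulated wealth trajectories (runs $\times$ time steps, with $W_0 = 1$), the metrics above can be computed e.g. as in the following illustrative Python sketch:
\begin{verbatim}
import numpy as np

def summarize(trajectories, w0=1.0, ruin_level=1e-4):
    """Final-wealth statistics; ruin = falling below 0.01% of W_0
    at least once during the investment period."""
    W = np.asarray(trajectories, dtype=float)
    w_final = W[:, -1]
    ruined = W.min(axis=1) < ruin_level * w0
    return {
        "median(W_f)": np.median(w_final),
        "mean(W_f)":   np.mean(w_final),
        "min(W_i)":    W.min(),
        "max(W_i)":    W.max(),
        "sigma(W_f)":  np.std(w_final),
        "ruin %":      100.0 * np.mean(ruined),
    }
\end{verbatim}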
\subsection{Results}
\label{sec:results}
Finally\rev{,} we present the performances (Section~\ref{sec:metrics}) of the individual strategies (Section~\ref{sec:experiments}) over each of the datasets (Section~\ref{sec:datasets}). Apart from the evaluation metrics computed on the final state of the wealth progression $W_{f}$, we present the summarized wealth progression trajectories of the selected ``best'' strategy, i.e. the one with the maximal median final wealth, for each of the datasets, to demonstrate the overall bankroll dynamics. \rev{The evaluation metrics for the horse racing, basketball, and football datasets are presented in Table~\ref{experiments:metrics:horses}, Table~\ref{experiments:metrics:basketball}, and Table~\ref{experiments:metrics:football}, respectively. The wealth progression trajectories for the best strategies are then displayed in Figure~\ref{fig:horses}, Figure~\ref{fig:basket} and Figure~\ref{fig:football}, respectively.}
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c|c|}
\hline
\textit{\textbf{strategy}} & $median(W_f)$ & $mean(W_f)$ & $min(W_i)$ & $max(W_i)$ & $sigma(W_f)$ & $ruin$ \%\\
\hline
AbsDisc & 0.0019 & 0.03 & 4e-08 & 27.1 & 0.04 & 85.2 \\
\hline
MaxEvFrac & 0.86 & 2.13 & 2e-09 & 711 & 4.7 & 36.1 \\
\hline
\hline
Kelly & 4.11 & 15.6 & 7e-05 & 2167.8 & 59.8 & 0.6 \\
\hline
MSharpe & 3.92 & 17.8 & 9e-06 & 2231.1 & 48.3 & 12.1 \\
\hline
KellyFrac & 3.39 & 14.2 & 0.003 & 213.2 & 32.1 & 0 \\
\hline
MSharpeFrac & 3.28 & 16.9 & 8e-05 & 253.3 & 26.5 & 0.2 \\
\hline
KellyFracMax & 3.49 & 13.8 & 0.0057 & 168.1 & 29.3 & 0 \\
\hline
MSharpeFracMax & 3.41 & 15.2 & 0.0065 & 194.3 & 25.4 & 0 \\
\hline
KellyDrawdown & 3.3 & 13.7 & 0.009 & 112.4 & 22.4 & 0 \\
\hline
KellyRobust & 2.97 & 4.1 & 0.08 & 77.3 & 7.2 & 0 \\
\hline
\end{tabular}
\end{center}
\caption{Final wealth statistics of the strategies in the horse racing scenario (Section~\ref{sec:horses}).}
\label{experiments:metrics:horses}
\end{table}
\begin{figure}[h!]
\includegraphics[width=0.85\textwidth]{horse_box_reinvest.eps}
\centering
\caption{Wealth progression of the KellyFracMax strategy in the horse racing scenario (Section~\ref{sec:horses}).}
\label{fig:horses}
\end{figure}
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c|c|}
\hline
\textit{\textbf{strategy}} & $median(W_f)$ & $mean(W_f)$ & $min(W_i)$ & $max(W_i)$ & $sigma(W_f)$ & $ruin$ \%\\
\hline
Kelly & 9.1e-6 & 1.8e-05 & 1.9e-20 & 3312.2 & 1.7e-05 & 100 \\
\hline
MSharpe & 1.3e-06 & 5.1e-05 & 4.1e-21 & 2911 & 9.7e-06 & 100 \\
\hline
KellyFrac & 2.4 & 2.7 & 0.11 & 24.1 & 1.34 & 0 \\
\hline
MSharpeFrac & 1.24 & 1.97 & 0.002 & 19.6 & 0.85 & 0 \\
\hline
KellyFracMax & 2.3 & 2.5 & 0.13 & 20.9 & 1.27 & 0 \\
\hline
MSharpeFracMax & 1.2 & 1.7 & 0.008 & 12.1 & 0.56 & 0 \\
\hline
KellyDrawdown & 2.21 & 2.9 & 0.14 & 29.1 & 1.3 & 0 \\
\hline
KellyRobust & 1.39 & 1.46 & 0.23 & 10.9 & 0.45 & 0 \\
\hline
\end{tabular}
\end{center}
\caption{Final wealth statistics of the strategies in the basketball scenario (Section~\ref{sec:basket}).}
\label{experiments:metrics:basketball}
\end{table}
\begin{figure}[h!]
\includegraphics[width=0.85\textwidth]{basket_reinvest_fractional.eps}
\centering
\caption{Wealth progression of the KellyFrac strategy in the basketball scenario (Section~\ref{sec:basket}).}
\label{fig:basket}
\end{figure}
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c|c|}
\hline
\textit{\textbf{strategy}} & $median(W_f)$ & $mean(W_f)$ & $min(W_i)$ & $max(W_i)$ & $sigma(W_f)$ & $ruin$ \%\\
\hline
Kelly & 2.3e-09 & 5.2e-08 & 1.6e-21 & 5844.2 & 2.7e-07 & 100 \\
\hline
MSharpe & 1.8e-10 & 3.0e-07 & 5.9e-27 & 2617 & 4.2e-07 & 100 \\
\hline
KellyFrac & 10.05 & 11.8 & 0.03 & 182 & 9.7 & 0 \\
\hline
MSharpeFrac & 9.9 & 13.6 & 0.016 & 211 & 9.1 & 0 \\
\hline
KellyFracMax & 10.03 & 11.2 & 0.007 & 144 & 9.2 & 0 \\
\hline
MSharpeFracMax & 10.1 & 13.1 & 0.005 & 193 & 8.7 & 0 \\
\hline
KellyDrawdown & 10.25 & 12.4 & 0.09 & 122 & 9.3 & 0 \\
\hline
KellyRobust & 6.2 & 7.3 & 0.28 & 27.7 & 5.6 & 0 \\
\hline
\end{tabular}
\end{center}
\caption{Final wealth statistics of the strategies in the football scenario (Section~\ref{sec:football}).}
\label{experiments:metrics:football}
\end{table}
\begin{figure}[h!]
\includegraphics[width=0.85\textwidth]{football_reinvest.eps}
\centering
\caption{Wealth progression of the KellyDrawdown strategy in the football scenario (Section~\ref{sec:football}).}
\label{fig:football}
\end{figure}
\vspace{-1cm}
Firstly, the results of our experiments confirm that the regularly used informal betting strategies (Section~\ref{sec:strat:informal}) are clearly inferior to all the formal strategies, in agreement with the previous reports~\citep{hubavcek2019exploiting}. Moreover, they often lead to ruin even in \rev{a} situation with a statistical model advantage, as reported for the horse racing dataset in Table~\ref{experiments:metrics:horses}, which is why we do not include them in the remaining comparisons.
As expected, the formal strategies based on the Modern Portfolio Theory (MPT) (Section~\ref{sec:MPT}) and the Kelly criterion (Section~\ref{sec:kelly}) performed reasonably in the setting with \rev{a} statistical advantage $A_{KL}$ of having a more precise model. However, since they are based on unrealistic mathematical assumptions, their actual risk profile might be unexpected in practice. Using any of the proposed practices for additional risk management (Section~\ref{sec:risk}) generally led to a considerably lower volatility while keeping the wealth progression of a typical (both mean and median) bettor reasonably high. Also, following from the mathematical properties of their pure forms, both strategies lead to certain ruin in scenarios without the statistical $A_{KL}$ advantage of the model, which is exhibited in practice, too (Table~\ref{experiments:metrics:basketball}, Table~\ref{experiments:metrics:football}).
On the other hand, a smart strategy modification can generate profits even in the statistically disadvantageous scenarios, as measured by the $A_{KL}$. Naturally, this does not hold universally and particular properties of the underlying models must be considered, too, since there are surely disadvantageous scenarios where no strategy can make profits by any means (Example~\ref{ex:coin1}).
The insights from the experiments regarding the discord between the approaches of MPT and Kelly roughly follow the intuitions behind the individual strategies. That is, the strategies based on the Kelly criterion (Section~\ref{sec:kelly}) result in a generally higher \textit{median} final wealth, while the strategies based on the MPT (Section~\ref{sec:MPT}) result in a generally higher \textit{mean} final wealth, corresponding to their underlying expected value-based motivation. Interestingly, in the football dataset (Table~\ref{experiments:metrics:football}) the mean final wealth performance of MPT is slightly lower than that of the Kelly-based strategies. However, we should note that the hyperparameter selection criteria (Section~\ref{sec:hyperpar}) can also be considered slightly biased in \rev{favour} of the Kelly approaches.
From a practical perspective, the drawdown modification of the Kelly criterion (Section~\ref{sec:drawdown}) seemed to perform very similarly to the much less sophisticated fractional approach (Section~\ref{sec:fractional}), further supporting the latter's popular use in practice. While the distributionally robust modification of Kelly (Section~\ref{sec:dro}) achieved generally the lowest final wealth scores, it was also the overall most stable strategy with the highest minimal final wealth. This is in complete accordance with its pessimistic underlying setting optimizing for the worst-case scenario, which might be appealing to highly risk-averse bettors.
\section{Conclusions}
\label{sec:conclusion}
In this experimental review, we investigated the two most prominent streams of betting investment strategies based on the views of the Modern Portfolio Theory and the Kelly criterion, together with a number of their popular modifications aimed at additional risk management in practice, where their original underlying mathematical assumptions do not hold. We tested the strategies on 3 large datasets from 3 different sport\rev{s} domains of horse racing, basketball, and football, following a strictly unified evaluation protocol to provide unbiased estimates of \rev{the} performance of each method while tuning their \rev{hyperparameters}.
The results of our experiments suggest \rev{the} superiority of the formal mathematical approaches over the informal heuristics, which are often used in practice. However\rev{,} the experiments also revealed the weaknesses of the formal strategies, stemming from their unrealistic mathematical assumptions, particularly the knowledge of the true probability distribution over the \rev{match} outcomes.
\rev{
Consequently, when used in their plain original form, the formal strategies, i.e. the maximum Sharpe and Kelly, proved infeasible in almost all practical scenarios with uncertain probability estimates. In particular, the theoretically optimal strategies often led to ruin instead of maximal profit, calling for additional risk management practices.
}
\rev{The results of the subsequent modifications of the optimal strategies then suggested that reasonable trade-offs in wealth progression can be found in actual betting practice with the appropriate techniques, even in scenarios with worse model predictions than that of the bookmaker.}
\rev{Based on the experiments, we conclude that, for common practical purposes, the most suitable option out of the strategies reviewed seems to be the fractional Kelly, given that the fraction hyperparameter has been properly tuned to reflect the amount of uncertainty in each particular problem setting. The approach achieved the best, or close to the best, performance as evaluated by the chosen metrics in most of our experiments while being comparatively simpler than the other strategies. Our findings thus further support its common use in betting practice. The other common practice of setting a maximum bet limit was inconclusive as it improved the overall results in some domains (Table~\ref{experiments:metrics:horses}) while decreasing the profits in others (Table~\ref{experiments:metrics:basketball}).}
\rev{The distributionally robust Kelly strategy then proved to be the safest in all of the experiments, and can thus be suggested to extremely risk-averse practitioners. The second safest option was then to incorporate the drawdown constraint, which also proved quite efficient in trading off security for profit.}
\section*{Response Letter}
\subsection*{Reviewer: 1 Comments to the Author}
\subsubsection*{* Global evaluation}
\textit{The paper is a comprehensive review of some betting strategies based on the Modern portfolio theory and the Kelly criterion. The paper is globally well written and the distinct betting strategies are properly introduced. The methods' review is detailed, and the experimental part has been carefully conducted and described.
Though, I find the Introduction quite lacking of references: I would invite the authors to massively extend it, by using/recycling and extending some parts actually contained in Section 3.
Moreover, the Conclusion section is in my opinion too shortly outlined: as a possible suggestion, the authors could try to state which ones among the formal strategies (methods in Table 5,6, and 7) could be satisfactorily adopted and under which circumstances one or another method could be favorable. In a way, the authors could provide a sort of general and practical guideline to the bettors interested in horse racing, football or basketball, by remarking some of the arguments raised in Section 6.3.}
\begin{itemize}
\item I find the Introduction quite lacking of references: I would invite the authors to massively extend it, by using/recycling and extending some parts actually contained in Section 3.
\blue{We have significantly extended the related works Section \ref{sec:related} with both papers referred by the reviewers and additional related works. We have however kept the specific related work in the respective section, while keeping the introduction on a general note.}
\item The conclusion Section is in my opinion too shortly outlined: as a possible suggestion, the authors could try to state which ones among the formal strategies (methods in Table 5,6, and 7) could be satisfactorily adopted and under which circumstances one or another method could be favorable. In a way, the authors could provide a sort of general and practical guideline to the bettors interested in horse racing, football or basketball, by remarking some of the arguments raised in Section 6.3. \blue{The conclusion Section \ref{sec:conclusion} now includes suggestions and guidelines on what methods are preferable under which circumstances.}
\end{itemize}
\subsubsection*{* Some minor edits}
\begin{itemize}
\item Introduction, lines 29-32: although predictive models are not the core of this paper, I would suggest to include and cite at least some works who attempted to propose betting strategies starting from a predictive model. A short (not exhaustive) list of papers is here provided:
\begin{itemize}
\item Dixon and Coles, 1997. Modelling association football scores and inefficiencies in the football betting market.
\item Rue and Salvesen, 2000. Prediction and retrospective analysis of soccer matches in a league.
\item Groll and Abedieh, 2013. Spain retains its title and sets a new record–generalized linear mixed models on European football championships.
\item Egidi, Pauli and Torelli, 2018. Combining historical data and bookmakers' odds in modelling football scores.
\end{itemize}
\blue{Related works Section \ref{sec:related} has been extended with prediction models, the referred and additional related papers have been included.}
\item Page 1, line 37: ``known'' in place of ``know'' \blue{Corrected.}
\item Page 2, line 16: you claim that ``each result is binary in nature'', but this sentence is confusing in my opinion. In the paper, you report many examples in which the result is not binary.
\blue{We added a clarification note -
``Note this concerns an individual $r_i$ and not $|\mathrm{R}|$, i.e. a match can have many possible outcomes $r_i$, but each $r_i$ has a binary realization, resulting exclusively in either win or loss of the bet associated with it.''}
\item Page 3, line 25: after ``Heads'', punctuation is missing. \blue{Corrected.}
\item Page 9, line 25: maybe ``trade-off''? \blue{Corrected.}
\item Page 10, line 37: ``known'' in place of ``know''. \blue{Corrected.}
\item Page 14, line 44: $acc_p$ in place of $acc_b$. \blue{Corrected.}
\item Tables 2, 3, and 4: what is $m-acc$? And $b-acc$ is the same as $acc_b$ listed at page 14? \blue{Yes, it is the same. It has been corrected with a unified notation.}
\end{itemize}
\subsection*{Reviewer: 2 Comments to the Author}
\textit{I very much enjoyed reading the paper. It is certainly of interest to anyone working in sports betting. The authors have identified an area that needs discussing and present results of their experiments using different strategies for betting on different sports.
\\\\
I have some comments and suggestions (below), but overall, I think the paper requires only minor revisions before being ready for publication.}
\begin{itemize}
\item Is the level of mathematical rigour given in Section 2 needed? This is a judgement call, but it is a little heavy going on terminology that isn't used later in the paper. \blue{We have removed parts that are not that relevant for the paper (e.g. the cases of the bookmaker's margin).}
\item p2, line 27: is it true that bookmakers are maximizing long-term profits? Is it possible they are balancing the books and basically making the over-round off the bettors? Or is this one and the same thing? \\
\blue{Yes, making money from the over-round is not in contradiction with maximizing their long-term profits. But with predictions better than that of an average tipster, they can make more money than just from the over-round. And they need good predictions to lay out the opening odds anyway. Moreover, purely reactive balancing can only work on markets with very high liquidity/volume of bets, and could be quite risky/exploitable otherwise.}
\item p2, line 40: maybe mention betting exchanges as the less common setup. \blue{Done.}
\item p2, line 45: is it a little cumbersome to have used $f$ for the fraction bet above, and now be using it for the function? \blue{Corrected -- the function is now denoted by $g$ and $\bm{f}$ stands exclusively for the fraction vector/portfolio.}
\item p2, line 52: why is $\hat{p}$ necessarily different from the true probability? \blue{We have moderated and clarified these statements in the paper. The estimates can be perfect in scenarios with artificial randomness generators (Section~\ref{sec:def:estimates}), but in the domain of sports betting we consider, the true probability is never known, and so this case is of purely theoretical interest.}
\item p3, line 32: why do you need to express the inverse values like this? Is it not simpler to just write $\frac{1}{o_i}$? \blue{We made corrections to clarify that we meant to specify the whole distribution $P_b$, which is later used in the equation below.}
\item p3, equation 2.11: typo - $r_j$ should be $o_j$ I think. \blue{You are right, corrected.}
\item p4, line 28: why are the estimates biased? They can be unbiased surely. \\
\blue{We have moderated the claims (this follows the same reasoning as for the perfect estimates 3 bullets above) -- since in sports betting the true probability distribution is principally unknown, the unbiased case is of purely theoretical interest. If necessary, one can also simply imagine that the bias is zero. The particular bias of the player here is simply part of the example assumptions.}
\item p10, line 34: should you reference the original Kelly paper. \blue{Corrected.}
\item p10, line 37: ``know'' should be ``known''. \blue{Corrected.}
\item p11, lines 14-16: I don't think the reader is ever given an indication of how unrealistic these assumptions are. Further, the experimental results, don't reveal how much these assumptions contribute to the lessening of the expected performance of the betting strategies. I think these points (and the impact of the assumptions) could be discussed in the conclusions of the paper. \blue{Knowing the true probability distribution is extremely unrealistic, as discussed above (and in the paper). Consequently in the experiments, the vanilla formal strategies often lead to ruin, as opposed to maximal profit. We extended the conclusion with discussion to further clarify this.}
\item p12, line 12: missing ``out'' after ``carried''. \blue{Corrected.}
\item p14, third bullet point should be ``$acc\_p$''. \blue{Corrected.}
\item p17, line 43: the tables are labelled in an odd order, and the figures are all 6.3. \blue{Apologies, corrected.}
\item p18, table 5: can the betting strategies be given more intuitive names. Even a description would help the reader. I found myself having to look at the previous table to get the descriptions. \blue{Unfortunately, there is not enough space in the table for a more detailed description. However, we tried our best in the naming and at least expanded the abbreviations -- the \textit{KellyDD} strategy has been renamed to \textit{KellyDrawdown} and \textit{KellyDR} to \textit{KellyRobust}.}
\item p20, line 53: ``degrees of freedom'' – can/should it be ``hyperparameters'' since ``degrees of freedom'' are not mentioned anywhere. \blue{Corrected.}
\end{itemize}
\subsection*{Guest Editor Comments to the Author:}
\textit{Both referees are positive for this work. Please revise your manuscript according to their comments and suggestions. Regarding Section 2, I would personally prefer to leave the details. May be trimming it a little bit might be the optimal solution.} \blue{Slightly trimmed.}
\subsection*{Editor comments}
\begin{enumerate}
\item Please use English spelling variations throughout. \blue{Corrected.}
\item Also, for continuity, consider adding citations to related work that has been published in this journal e.g.
\begin{enumerate}
\item Markowitz portfolio theory for soccer spread betting, Alistair D. Fitt (2009)
\item Kelly's fractional staking updated for betting exchanges, Edmund Noon, William J. Knottenbelt, Daniel Kuhn (2013)
\item Using statistics to detect match fixing in sport, David Forrest, Ian G McHale (2019)
\item Uses and limitations of mathematics in sport, John Haigh (2009)
\end{enumerate}
\blue{The referred papers have been reviewed and mostly added with additional related papers into the related works Section~\ref{sec:related}.}
\end{enumerate}
\newpage
\title{Optimal sports betting strategies in practice: an experimental review}
\maketitle
\author{}
\begin{abstract}
{We investigate the most prominent streams of approaches to the problem of sports betting investment based on the Modern portfolio theory and the Kelly criterion. We define the problem settings, the formal strategies, and review their modifications for additional risk control stemming from \rev{their} unrealistic mathematical assumptions that are not met in betting practice. We test the resulting methods following a unified evaluation protocol in 3 different sport\rev{s} domains of horse racing, basketball and football. The results are generally in \rev{favour} of the formal approaches while suggesting the individual benefits of the additional risk control practices together with their trade-offs.}
{sports betting, betting strategies, risk management, bankroll management}
\end{abstract}
\section{Introduction}
\label{sec:intro}
Sports betting systems generally consist of two essential components \rev{--} (i) predictive models, generating probabilistic estimates for the given match outcomes, and (ii) bankroll management strateg\rev{ies}, optimizing the expected progression of wealth in time. In this work, we focus solely on the latter.
While much of the available research on betting systems is \rev{centred} around the predictive \rev{modelling} part, often completely neglecting the need for betting portfolio optimization, we show that, given a predictive model, the betting strategy has a major influence on the final measures of profit. Consequently, a worse model with a better strategy can easily outperform a better model with a worse strategy.
Lacking a deeper understanding of the investment part of the problem, practitioners often resort to trivial practices such as various forms of flat betting. We show that these are inferior to the formal strategies, not just theoretically but also from \rev{a} practical perspective. There are two basic streams of research in the formal approaches, stemming from information theory and economics, respectively. The first, and the most widespread, is the Kelly criterion\rev{~\citep{kelly1956new}}, also known as the geometric mean policy, maximizing the expected long-term growth of wealth. The second is the approach of Markowitz's Modern portfolio theory\rev{~\citep{markowitz1952portfolio}}, balancing the criteria of expected profit and \rev{profit} variance as a measure of risk.
While mathematically sound, the formal strategies are based on unrealistic assumptions. The most limiting assumption in their application to sports betting is the knowledge of true probabilities of individual match outcomes. Other complications of the problem include \rev{the} multiplicity of outcomes and parallel matches. We investigate the existing modifications of the formal strategies proposed to address the problems occurring in practice and evaluate them experimentally in 3 different sport\rev{s} domains - horse racing, basketball, and football.
The paper is structured as follows. In Section~\ref{sec:definitions} we define the concept of a betting strategy and the dimensions of the underlying optimization problem. In Section~\ref{sec:related} we review the related work touching different facets of risk and bankroll management in betting. In Section~\ref{sec:strategies} we formally introduce the two core strategies of Kelly and Markowitz. The modifications of the core strategies proposed to manage the extra risk occurring in practice are then introduced in Section~\ref{sec:risk}. Finally, we experimentally evaluate the strategies in practical scenarios in Section~\ref{sec:experiments} and conclude the paper in Section~\ref{sec:conclusion}.
\section{Problem Definition}
\label{sec:definitions}
In its core, sports betting is a simple stochastic game where the player $p$ repeatedly allocates a distribution of \textit{fractions} ($f_i \in [0,1],~\sum_{i}f_i \leq 1$) of her current bankroll $W \in \mathbb{R}$ at time $t \in \mathbb{N}$ over possible stochastic results $r_i \in \mathrm{R}$ of a match, coming from a distribution $P_r(r_i)$ over the domain $\mathrm{R}$ of the random variable $R$, describing all the possible outcomes of the given match at time step $t$. Each of the possible match outcomes $r_i$ is then associated with \rev{so-called} \textit{odds} ($o_i \in \mathbb{R}_{\geq 1}$) by the bookmaker $b: r_i \mapsto o_i$. Should a particular outcome $i$ be realized \rev{(}${R}=r_i$\rev{)}, a payoff $o_i \cdot f_i \cdot W$ from the associated odds and fraction is to be received by the player $p$. In the opposite case, the player loses the allocated portion $f_i \cdot W$ of her bankroll to the bookmaker $b$.
Each of the particular \rev{betting} outcomes $r_i$ is \rev{thus} binary\footnote{\rev{Note this concerns an individual $r_i$ and not $|\mathrm{R}|$, i.e. a match can have many possible outcomes $r_i$, but each $r_i$ has a binary realization, resulting exclusively in either win or loss of the bet associated with it.}} in nature, and the potential net profit $w_i$ from allocation on the $i$-th outcome is thus
\begin{equation}
w_i =
\left\{
\begin{array}{lll}
o_i \cdot f_i \cdot W - f_i \cdot W ~~& \mbox{with prob. $P_r(r_i)$} &\mbox{(if $\mathrm{R}=r_i$ is realized)} \\
- f_i \cdot W ~~& \mbox{with prob. $1-P_r(r_i)$} &\mbox{(if $\mathrm{R} \neq r_i$)}
\end{array}
\right.
\end{equation}
giving an expectation
\begin{equation}
\EX_{P_r}[w_i] = P_r(r_i) \cdot (o_i f_i W - f_i W) + (1-P_r(r_i)) \cdot (- f_i W)
\end{equation}
Clearly, the profits of the bettor and bookmaker are directly opposite and, assuming a closed system of bettors and bookmakers, this is \del{thus} a zero-sum game. The goal of both the player $p$ and the bookmaker $b$ is to maximize their long-term profits $W_{t \to \infty}$ as measured by their respective utilities (Section~\ref{sec:strategies}). Given the stochastic nature of the game, the natural desideratum of the player is to allocate the fractions $\bm{f} = f_1, \dots, f_n$ so as to target a high total expect\rev{ation of profit} $\mathrm{W}$
\begin{equation}
\EX_{P_r}[\mathrm{W}] = \EX_{P_r} \bigg[\sum_i w_i \bigg] = \sum_i \EX_{P_r} [w_i]
\end{equation}
Note that, in this work, we assume the two players to take on the asymmetric roles of market maker $b$ and market taker $p$, where the bookmaker $b$ always starts by laying out the odds $\bm{o} = [o_1, \dots, o_n]$ for the possible match results $\bm{r} = [r_1, \dots, r_n]$ first, consequently to which the player $p$ reacts with his best policy for allocation $p : r_i \mapsto f_i$ of her current wealth $W_t$. In contrast to e.g. the, \rev{less common}, betting exchange setting, in this work we assume solely the strategies for the role of the market taker $p$, which is the most common setup for bettors in practice.
\subsection{Betting Strategy}
\label{sec:def:strategy}
A player's betting strategy for a game with $n$ outcomes is a \rev{function $g$} mapping a set of probabilistic estimates $\hat{\bm{p}} = \hat{p_1}, \dots, \hat{p_n}$ and bookmaker's odds $\bm{o} = o_1, \dots, o_n$ onto a set of fractions $\bm{f} = f_1, \dots, f_n$ of the current wealth $W_t$ to be waged \del{on each of} \rev{over} the game outcomes $\bm{r} = r_1, \dots, r_n$
\rev{
\begin{align}
g &: (\hat{\bm{p}}, \bm{o}) \mapsto \bm{f}
\end{align}
}
Typically, the estimated distribution vector $\hat{\bm{p}}$ comes from a probabilistic model $P_p$ of the player and is similar to, yet \rev{most likely} different from, the \rev{(unknown)} true probability distribution $P_p = \hat{P_r},~P_p \neq P_r$ \rev{(Section \ref{sec:def:estimates})}.
The vector of the waged fractions $\bm{f}$ is then often referred to as the \textit{portfolio} over individual ``assets'' $i$ (Section~\ref{sec:MPT})
\begin{equation}
\bm{f} =
\begin{bmatrix}
f_1, \dots, f_n
\end{bmatrix}
\end{equation}
where $f_i$ indicates the portion of wealth $W_t$ allocated to $i$-th outcome.
\subsection{Fixed Odds}
\label{sec:def:odds}
We further assume a \rev{so-called} fixed-odds betting setup which, as opposed to e.g. parimutuel setting~\citep{hausch2008efficiency}, always offers known odds distribution $\bm{o}$ in advance of the game for the player's strategy \rev{$g$} to calculate with.
In its most basic form, we can demonstrate the given setting on a simple \rev{coin-tossing} game as follows.
\begin{example}
\label{ex:coin1}
Assume a fair \rev{coin-tossing} game with two, equally probable, outcomes $\mathrm{R} =\{Heads, Tails\}$
\begin{equation}
\underset{r_i \in \mathrm{R}}{P_r(r_i)} =
\left\{
\begin{array}{ll}
0.5 & \mbox{for the coin falling } r_1 = \textit{Heads} \\
0.5 & \mbox{for the coin falling } r_2 = \textit{Tails}
\end{array}
\right.
\end{equation}
The odds by the bookmaker $b$ could then be set up e.g. as follows
\begin{equation}
\underset{r_i \in \mathrm{R}}{b(r_i)} =
\left\{
\begin{array}{ll}
o_1 = 1.9 & \mbox{for the coin falling } r_1 = \textit{Heads} \\
o_2 = 1.9 & \mbox{for the coin falling } r_2 = \textit{Tails}
\end{array}
\right.
\end{equation}
Let the bettor allocate a fixed wager, such as \$1, on the $r_1=Heads$.
She then receives an extra $w_1 = (1.9 - 1) \cdot 1$ profit if the associated outcome $r_1=Heads$ is realized, or loses the placed wager of \$1 otherwise.
It is easy to see that this particular game is generally disadvantageous for the bettor, and there exists no strategy for her to make long-term profits, since the expected profit for each outcome of the game is simply negative:
\begin{equation}
\EX[w_1] = \EX[w_2] = 0.5 \cdot (1.9 - 1) \cdot 1 + 0.5 \cdot (-1) = -0.05
\end{equation}
This \del{is caused by} \rev{follows directly from} the fact that the odds are \textit{unbiased} and \textit{subfair}. This means that \rev{the distribution of their inverse values $P_b : r_i \mapsto \frac{1}{o_i}$ is} proportional to the true probability distribution over the game outcomes, but \del{they do} \rev{it does} not form a \textit{probability} distribution as \rev{the values} do not sum up to $1$:
\begin{equation}
\sum_i{P_b(r_i)} = \frac{1}{o_1} + \frac{1}{o_2} \approx 1.05
\end{equation}
\end{example}
\del{In general, for a game with $k$ outcomes, we can theoretically recognize $3$ distinct settings of the odds as follows...[equations removed]}
Out of the three settings~\citep{cover2012elements}: \textit{fair, subfair, superfair},
the \textit{subfair} odds are typically the only setting for a bookmaker to be able to generate profits. We will further limit ourselves to this setting as it is the only valid setup working in practice.
The value of
\rev{
\begin{equation}
margin = \frac{\sum_{j=1}^K\frac{1}{o_j} -1 }{\sum_{j=1}^K\frac{1}{o_j}}
\end{equation}}
is then called the bookmaker's margin\footnote{Often wrongly calculated as simply the remainder \rev{over $1$ as $\sum_{j=1}^K\frac{1}{o_j} -1$}} (also known as ``vigorish'', ``house edge'', ``cut'' etc.), and represents the negative expected value of the game, given that the probabilities $P_b$ implied from the odds are unbiased estimates of the true outcome probabilities $P_r$. Note that this is a typical game setting operated in the gambling industry, such as in various casino games, where there is no space for long-term profitable strategies. However, we note that the situation in sports betting is principally different.
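For instance, plugging the odds from Example~\ref{ex:coin1} ($o_1 = o_2 = 1.9$) into the definition above gives
\begin{equation*}
margin = \frac{\frac{1}{1.9} + \frac{1}{1.9} - 1}{\frac{1}{1.9} + \frac{1}{1.9}} \approx \frac{0.053}{1.053} \approx 0.05
\end{equation*}
i.e. a $5\%$ margin, whereas the simple remainder-based calculation from the footnote would report $\approx 0.053$.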
\subsection{Biased Estimates}
\label{sec:def:estimates}
In Example~\ref{ex:coin1} with a fair coin, both the bookmaker and bettor knew the true outcome probability distribution (i.e. $P_r(r_1=H)=0.5 ;\ P_r(r_2=T)=0.5$). This setting is very elegant from mathematical perspective, since one can calculate exact expected values of profits and consequently derive optimal betting strategies (Section~\ref{sec:strategies}).
Such mathematically optimal strategies can be theoretically applied in artificial environments with handcrafted generators of randomness (e.g. the casinos). However, in the context of sports betting, and other practical settings such as stock market investing, this is generally impossible.
In this experimental review, we thus focus on the scenarios, where the probability estimates of both the bookmaker $P_b$ and the player $P_p$ are biased w.r.t. the real outcome probability distribution $P_r$.
Let us consider an extension of the \rev{coin-tossing} game from Example~\ref{ex:coin1} to demonstrate properties of such \rev{a} setting.
\begin{example}
Consider a \textit{biased} \rev{coin-tossing} game where the coin bias is \textit{unknown} to both the bookmaker and the player. Let us \rev{set-up} the bias such that
\begin{equation}
\underset{r_i \in {\mathrm{R}}}{P_r(r_i)} =
\left\{
\begin{array}{ll}
0.6 & \mbox{for } r_1 = \textit{H} \\
0.4 & \mbox{for } r_2 = \textit{T}
\end{array}
\right.
\end{equation}
Let us further assume that the player $p$ has a probabilistic model of the game, producing biased estimates $P_p = \hat{P_r}$ as
\begin{equation}
\underset{r_i \in {\mathrm{R}}}{P_p(r_i)} =
\left\{
\begin{array}{ll}
0.55 & \mbox{for } r_1 = \textit{H} \\
0.45 & \mbox{for } r_2 = \textit{T}
\end{array}
\right.
\end{equation}
Finally, assume the bookmaker is also biased with his estimates $P_b = \hat{P_r}, P_b \neq P_p$, according to which he sets up the odds distribution $\bm{o}$, lowered by a margin\footnote{In practice, the distribution of margin would not be simply uniform as in the example, but the bookmaker typically applies more sophisticated distortion of the odds to secure even higher statistical advantage.} $m=0.05$
\begin{equation}
\underset{r_i \in {\mathrm{R}}}{P_b(r_i)} =
\left\{
\begin{array}{ll}
0.65 & \mbox{for } r_1 = \textit{H} \\
0.35 & \mbox{for } r_2 = \textit{T}
\end{array}
\right.
\underset{r_i \in {\mathrm{R}}}{b(r_i)} =
\left\{
\begin{array}{ll}
\frac{1}{0.65} \cdot (1-{0.05}) \approx 1.46 & \mbox{for } r_1 = \textit{H} \\
\frac{1}{0.35} \cdot (1-{0.05}) \approx 2.71 & \mbox{for } r_2 = \textit{T}
\end{array}
\right.
\end{equation}
Note that while the odds are still subfair, the bookmaker's bias w.r.t. $P_r$ now creates space for exploitation, since the true expected values are no longer purely negative.
\begin{equation}
\begin{array}{llll}
\EX_{P_r}[w_1] &=& P_r(r_1) \cdot b(r_1) -1 \approx -0.124 & \text{ for~~ } \mathrm{R}=r_1=H\\
\EX_{P_r}[w_2] &=& P_r(r_2) \cdot b(r_2) -1 \approx 0.084 & \text{ for~~ } \mathrm{R}=r_2=T
\end{array}
\end{equation}
i.e. the punter could make long-term profits if betting appropriate amounts on the $r_2=T$ outcome. However, not knowing the true probabilities $P_r$, the player's calculation of expected values will now be biased, too
\begin{equation}
\begin{array}{lll}
\EX_{P_p}[w_1] &=& P_p(r_1) \cdot b(r_1) -1 \approx -0.197\\
\EX_{P_p}[w_2] &=& P_p(r_2) \cdot b(r_2) -1 \approx 0.22
\end{array}
\end{equation}
nevertheless, despite the expected values calculated by the punter w.r.t. her $P_p$ estimate \del{are} \rev{being} wrong, in this particular setting, she correctly identified the positive expected value in the $r_2=T$ outcome and could theoretically make a profit with an appropriate strategy modification (Section~\ref{sec:risk}).
\end{example}
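The quantities in the example above are straightforward to reproduce; a small illustrative check in Python (variable names are ours) could look as follows:
\begin{verbatim}
import numpy as np

p_true   = np.array([0.6, 0.4])    # unknown true coin bias P_r
p_player = np.array([0.55, 0.45])  # player's biased estimate P_p
p_book   = np.array([0.65, 0.35])  # bookmaker's biased estimate P_b
margin   = 0.05

odds      = (1.0 / p_book) * (1.0 - margin)  # ~[1.46, 2.71]
ev_true   = p_true   * odds - 1.0            # ~[-0.12,  0.08]
ev_player = p_player * odds - 1.0            # ~[-0.20,  0.22]
# both correctly identify the positive expected value on Tails
\end{verbatim}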
Generally, $P_p = \hat{P_r}$ and $P_b = \hat{P_r}^{'}$ are \del{always} going to be somewhat biased w.r.t. $P_r$ as well as w.r.t. each other \del{since $P_p \neq P_b$} \rev{(i.e. $P_p \neq P_b$,} as long as \rev{the} player does not simply copy from the bookmaker). The individual biases can be captured by statistical measures, such as the Kullback-Leibler, or better yet Jensen-Shannon, divergences~\citep{cover2012elements}, and the probabilistic setting of each game for a particular match can then be understood as a triplet of probability distributions over the outcomes, as depicted in Figure~\ref{fig:triangle}.
\begin{figure}[t]
\label{fig:triangle}
\input{triangle.tex}
\centering
\caption{A typical sports betting setting for a game with $n$ outcomes, displaying bookmaker's probabilistic estimates $P_b$ and player's estimates $P_p$, both distanced from the true distribution $P_r$ and from each other.}
\end{figure}
\subsection{Multiplicity of Outcomes}
\label{sec:def:outcomes}
So far we have assumed a binary \rev{coin-tossing} game of two possible outcomes. Let us now generalize into an $n$ outcome game, such as throwing a die. This represents most real situations in sports betting, such as the $\mathrm{R} = \{Win,Draw,Loss\}$ outcomes in soccer, or betting on the winner of a horse race with $n$ horses (Section~\ref{sec:datasets}). Moreover, one can potentially assume that the individual game outcomes are no longer exclusive, such as betting on the first $j$ horses, or ``over'' $j$ goals in soccer for multiple different values of $j$.
To make the game representation more compact in such situations, a generic matrix~$\bm{O}$ representation has been proposed~\citep{busseti2016risk}, where the columns of $\bm{O}$ represent the possible outcome assets, and rows represent the possible game results, i.e. joint realizations of all the outcomes. Each individual element in $\bm{O}$ then represents particular odds for each outcome realization.
Additionally, we include an artificial risk-free ``cash'' asset $\bm{c}$, which allows the player to put money aside safely. This also allows us to model situations where leaving money aside costs \rev{a} small fraction of wealth in every turn (caused \rev{e.g.} by inflation), or the possibility to increase the wealth by some interest rate (e.g. in a savings account).
The betting strategy \rev{$g$} (Section~\ref{sec:def:strategy}) can now thus always allocate the full amount of current wealth $W$ among $n$ available outcome assets, $n - 1$ of which are risky, stochastic assets, and 1 being the added risk-free cash asset as
\begin{equation}
g : (\bm{p}^k, \bm{O}_k^n) \mapsto \bm{f}^n \text{~~~where~~~} \sum_i{f_i}=1
\end{equation}
where $k$ is the number of possible worlds, i.e. there are $k$ possible joint outcome realizations, in our probabilistic game.
Odds for each outcome asset in each of the $k$ world realizations with the respective probabilities $\bm{p} = p_1, p_2, ..., p_k$ can thus be fully specified in the columns $\bm{o_i}$ as
\begin{align}
\bm{O} =
\begin{bmatrix}
\bm{o_1} & \bm{o_2} & ... & \bm{o_{n-1}} & \bm{c}
\end{bmatrix}
~,~\text{where}~~
\bm{o_i} =
\begin{bmatrix}
o_{i, 1} \\
o_{i, 2} \\
... \\
o_{i, k}
\end{bmatrix}
~,~
\bm{c} =
\begin{bmatrix}
1 \\
1 \\
... \\
1
\end{bmatrix}
\end{align}
\begin{example}
Consider a football game, where we assume $3$ outcomes as $\mathrm{R} = \{W, D, L\}$, forming the $3$ asset vectors $\bm{o_w}, \bm{o_d}, \bm{o_l}$, where the bookmaker sets the odds to $o_w, o_d, o_l$, respectively. The odds matrix $\bm{O}$, including the constant cash asset $\bm{c}$, then looks as follows.
\begin{equation}
\bm{O} =
\begin{bmatrix}
\bm{o_w} & \bm{o_d} & \bm{o_l} & \bm{c}
\end{bmatrix}
~~\text{where~}~~
\bm{o_w} =
\begin{bmatrix}
o_w \\
0 \\
0
\end{bmatrix}
,~
\bm{o_d} =
\begin{bmatrix}
0 \\
o_d \\
0
\end{bmatrix}
,~
\bm{o_l} =
\begin{bmatrix}
0 \\
0 \\
o_l
\end{bmatrix}
,~
\bm{c} =
\begin{bmatrix}
1 \\
1 \\
1
\end{bmatrix}
\end{equation}
\end{example}
To simplify notation in further sections, we will also define a modified odds matrix $\bm{\rho}$ corresponding to excess odds, i.e. removing the return amount of the placed wager itself, resulting \rev{in} net profit $\mathrm{W}$ (Section~\ref{sec:definitions}), as
\begin{equation}
\bm{\rho} = \bm{O} - \bm{1}
\end{equation}
Note that in the example scenario the outcomes were exclusive, and the ``one-hot'' risky asset vectors reflect their exclusive \del{binary} nature, which considerably simplifies the computation of optimal strategies (Section~\ref{sec:strategies}).
In this review, we generally assume individual matches with exclusive outcomes\footnote{Note that the exclusiveness of outcomes does not hold in the further scenarios with parallel games.} but varying outcome multiplicities (Section~\ref{sec:datasets}) to experimentally assess the properties of the strategies w.r.t. this dimension of the problem.
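To make the representation concrete, the odds matrix from the football example and the corresponding excess odds can be constructed e.g. as follows (a minimal sketch; the particular odds values are hypothetical):
\begin{verbatim}
import numpy as np

o_w, o_d, o_l = 3.1, 3.4, 2.3  # hypothetical win/draw/loss odds

O = np.array([                  # rows = results, columns = assets (incl. cash)
    [o_w, 0.0, 0.0, 1.0],
    [0.0, o_d, 0.0, 1.0],
    [0.0, 0.0, o_l, 1.0],
])
rho = O - 1.0                   # excess odds, i.e. net profit multipliers
\end{verbatim}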
\subsubsection{Parallel Games}
\label{sec:def:parallel}
To further complicate the game, approaching the real betting setting even more closely, we can consider multiple dice being thrown in parallel, each associated with a particular set of outcomes and odds. Naturally, this reflects the reality of multiple games being open for betting at the same time. In popular sports, such as soccer, it is not uncommon to have dozens of games available on the market simultaneously.
While we can surely consider each of the games separately, such a simplification can lead to sub-optimal results. Although calculating with the true parallel nature of the opportunities can be computationally demanding for some of the strategies (Section~\ref{sec:quadraticapprox}), it should allow the bettor to alleviate the risk by diversifying over a wider portfolio at each time step of the wealth progression.
In this review, we consider both the sequential and parallel scenarios to emulate realistic scenarios and evaluate the respective advantages (Section~\ref{sec:experiments}).
\subsection{Betting Dynamics}
\label{sec:def:dynamics}
The betting dynamic represents the investment \rev{behaviour} of the bettor w.r.t. her bankroll $W$ in time $t$, which has a major impact on the progression of wealth. There are two basic cases of bankroll management to be considered \rev{--} (i) additive and (ii) multiplicative~\citep{peters2016evaluating, peters2011optimal}.
\subsubsection{Additive dynamic}
Additive dynamic corresponds to a simple fixed unit-based investment, where the bettor's wagers are decoupled from her current bankroll $W_t$. To illustrate the setting, we can imagine that the bettor receives a fixed unit (e.g. \$1) amount of money from an external source at regular time intervals $\delta t$ (such as a salary), which she repeatedly invests into the stochastic game of betting, and accumulates (additively) the prospective returns $w_t \cdot 1$ from the unit investment in the, separately held, bankroll $W_t$.
Her wealth progression in time $t$ can hence be seen as
\begin{equation}
W_t = w_t \cdot 1 + W_{t - \delta t}
\end{equation}
\subsubsection{Multiplicative dynamic}
\label{sec:multiplicative}
In the multiplicative scenario, the bettor continuously \textit{reinvests} the current wealth $W_t$ accumulated from the previous betting investments, without any external source of profit. Hence her progression of wealth in time $t$ can be seen as
\begin{equation}
W_t = w_t \cdot W_{t - \delta t}
\end{equation}
The multiplicative dynamics plays an important role in the Kelly criterion (Section~\ref{sec:kelly}), where the mathematical optimality of the strategy is derived exactly from \rev{a} repeated play of the same game in the multiplicative setting.
As the comparison of the two approaches appears problematic, due to the external source of profit in the additive scenario, we will further consider only the multiplicative reinvestment setting, which is also more realistic and sound for \rev{an} independent evaluation.
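The difference between the two dynamics can be illustrated with a short Python sketch, where, for simplicity, the per-step outcome of the investment is summarized by a single gross return factor (all names are illustrative):
\begin{verbatim}
import numpy as np

def wealth_progression(g, w0=1.0, multiplicative=True):
    """g[t] is the overall factor by which the invested amount
    is multiplied at step t."""
    g = np.asarray(g, dtype=float)
    if multiplicative:              # W_t = w_t * W_{t-1}, full reinvestment
        return w0 * np.cumprod(g)
    return w0 + np.cumsum(g - 1.0)  # W_t = W_{t-1} + (w_t - 1) * 1, unit stakes
\end{verbatim}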
\section{Related works}
\label{sec:related}
The two most notable approaches to allocation of wealth across presented stochastic assets, i.e. match outcomes in sport\rev{s} betting, were introduced by (i)~\cite{markowitz1952portfolio}, with his revolutionary concept of balancing return and risk of a portfolio, and by (ii)~\cite{kellyold}, with a criterion to maximize the long-term growth in a scenario where the same game is being played repeatedly.
Following the Kelly criterion, the process of betting is closely connected to information theory~\citep{kelly1956new}. \rev{\cite{bell1988game} discuss a game-theoretical optimality of Kelly portfolios, and a generalization of the Kelly strategy to maximize the proportion of wealth relative to the total wealth among the population is discussed in~\citep{lo2018growth}.} Additional mathematical properties were also explored in~\citep{latane2011criteria} and~\citep{breiman1961optimal, thorp2008kelly}. From the economical perspective, Kelly's approach is often explained through the use of a logarithmic utility function, which was famously first introduced by Daniel Bernoulli in~\citep{bernoulli2011exposition}, where he pointed out that people do not make their decisions according to the absolute payoff, but w.r.t. the logarithm thereof. \rev{In~\citep{luenberger2011preference} the authors suggest that, assuming long-term goals, the logarithmic utility function is the only sensible choice for a utility function.} While not necessarily incorrect, the phenomenological explanation of the choice of the logarithmic utility function seem\rev{s} somewhat arbitrary, however.
In \citep{peters2011time} a different view on the Kelly criterion was proposed, where the author criticized the established evaluation of betting using the expected value of a portfolio, as it is based on the unrealistic idea of ``simultaneous'' evaluation of the, often exclusive, outcomes. Instead of measuring \rev{the} mean of a statistical ensemble of possible outcomes, the author proposed to focus on what happens to a single player as the same game is repeated in time, following the notion of ergodicity in dynamic systems~\citep{peters2019ergodicity}. The logarithmic transformation then emerges as the correct ergodic transformation of dynamics of the game in the classical reinvestment setting~\citep{peters2016evaluating}, providing a well-founded explanation for the observed phenomenon.
Given the mathematically elegant yet somewhat unrealistic setting, the Kelly strategy has also been often criticised in many works~\citep{samuelson1971fallacy, samuelson2011we, maclean2010good, samuelson1975lifetime}.
\subsection{Extensions of the formal strategies}
\label{sec:related:extensions}
The strategies of Markowitz and Kelly have been re-explored by researchers in a number of different application scenarios and many useful modifications have been proposed since. Generally, Markowitz's approach has traditionally dominated the world of quantitative finance, while Kelly's approach has been more prominent in the sports betting industry. In~\citep{smoczynski2010explicit}, a closed-form solution for the use of the Kelly strategy when betting on horse racing was explored. Another practical extension for betting on multiple simultaneous games was discussed in a number of works~\citep{whitrow2007algorithms, grant2008optimal, buchen2012comparison}, where \rev{various} approximations for large bet aggregations were proposed.
\rev{
A modification of the Kelly strategy for betting exchanges is discussed in~\citep{noon2013kelly}, where adjustments for both back and lay bets are presented. Additionally, the effects of the commission and of the maximum bet constraint on the resulting growth rate are discussed. The Kelly problem is examined for spread betting in~\citep{chapman2007kelly} and in \citep{haigh2000kelly}, where several counterintuitive effects of using the Kelly strategy for spread betting are discussed. Markowitz's modern portfolio theory for soccer spread betting is then discussed in~\citep{fitt2009markowitz}.
}
Another important stream of research comprises works investigating extensions of the Kelly strategy towards the realistic setting of parameter uncertainty, such as~\citep{baker2013optimal}. Practical methods to address the problem are the \rev{so-called} fractional Kelly strategies, the properties of which have been investigated in great detail in the works of~\citep{maclean2011medium} and \citep{maclean1992growth}. \rev{\cite{peterson2017kelly} presents a decoupled Kelly strategy combined with an additional risk measure, and \cite{kan2007optimal} introduced an optimal portfolio choice under parameter uncertainty for the modern portfolio theory (MPT).}
Interesting modifications with similar aims are Bayesian extensions of the Kelly strategy proposed in \citep{browne1996portfolio, balka2017kelly, chu2018modified}. Similarly, approaches based on probabilistic risk constraints for limiting the probability of a ``drawdown'' were discussed in \citep{busseti2016risk} and \citep{mulvey2011dynamic}. Finally, limiting the \rev{worst-case} probabilistic scenario using the framework of distributionally robust optimization was explored in \citep{sun2018distributional} and in \citep{blanchet2018distributionally} for the Markowitz's strategy, respectively.
\subsection{Predictive modelling}
\label{sec:related:model}
\rev{
Since we consider predictive sports modelling a separate problem, we only briefly review some papers on the topic, with an extra focus on models related to those used for experiments in this paper.
}
\rev{
A traditional stream of research in predictive sports analytics consists of score-based models built on various explicit statistical assumptions. The football prediction model introduced by~\cite{maher1982} builds on the assumption that in a football match the goals are Poisson-distributed and that those of the home team are independent of those of the away team. The author also introduced the notion of teams' attacking and defensive strengths and showed how to use them for forecasting the match results. In~\citep{dixon1997}, Maher's model is further extended and shown to make a profit when combined with a simple betting strategy. The authors also used exponential time weighting to discount the effects of past results, while in~\citep{maher1982} the strength of a team is considered to be time-invariant. In~\citep{rue2000}, the authors used a Brownian motion to bind together the strength parameters of the teams in consecutive rounds. The model is then used for betting with a variant of the MPT strategy. \cite{egidi2018combining} presents a hierarchical Bayesian Poisson model with the scoring rates of the teams represented by convex combinations of parameters estimated from historical data and betting odds. In \citep{groll2013spain} the authors analyze the explanatory power of bookmakers' odds using a pairwise generalized linear mixed Poisson model.}
\rev{
Another modern approach for match outcome predictions are non-parametric and feature-based machine learning models.
\cite{Haghighat2013} provides a review of machine learning techniques used in outcome predictions of sports events while pointing out some common problems and misconceptions.
In the horse racing domain, a popular logit-based model, combining both ``fundamental features'' and ``odds-derived'' features into a single prediction system, was presented by~\cite{benter2008computer}. This model was also a strong inspiration for the horse racing model evaluated in this paper.
In the domain of soccer, a recent review~\citep{hubacek2019score} discusses a diversity of the common approaches. Notable examples include models from the 2017 Soccer Prediction Challenge~\citep{dubitzky2019}. The winning model from the challenge utilized a boosted tree learner based on an ensemble of score-derived features and simpler ranking and statistical models~\citep{hubacek2019}. This model was also directly used for the soccer betting experiments reported in this paper.
In predictive basketball modelling, it is common to use the detailed box-score statistics that are available for the high-exposure leagues. Based on diverse features, \cite{Miljkovic2010} evaluated their model on the NBA, while \cite{Ivankovic2010} used a neural network to predict match outcomes in the League of Serbia. An advanced convolutional neural architecture was then learned over the, so far, biggest set of basketball games in~\citep{hubavcek2019exploiting}. We again directly utilize this basketball model in this paper.
}
\section{Betting Strategies}
\label{sec:strategies}
In the existing literature, the betting strategies range from simple informal techniques, such as flat betting, to the formal approaches, represented mainly by the Markowitz's Modern portfolio theory~\citep{markowitz1952portfolio} and the Kelly criterion~\citep{kelly1956new}, coming from an economical and information-theoretic views of the problem, respectively.
\subsection{Informal Strategies}
\label{sec:strat:informal}
In sports betting practice, most of the focus among punters is put on the search for outcomes with positive expected value (``value bets''), while the importance of the subsequent investment strategy has often been neglected. Consequently, rather than formal strategies, one mostly encounters simplistic heuristics such as~\citep{hubacek2017thesis}:
\begin{itemize}
\item Bet a fixed fraction on favourable odds.
\item Bet a fixed fraction on the opportunity with maximal expected value.
\item Bet a fraction equal to the absolute discrepancy between player's and bookmaker's estimates.
\item Bet a fraction equal to the relative discrepancy between player's and bookmaker's estimates.
\item Bet a fraction equal to the estimated probability of winning.
\end{itemize}
Lacking any formal foundation, these approaches have been shown to be generally inferior to the formal strategies, both theoretically and in practice~\citep{hubacek2017thesis}. For completeness, we chose to re-validate the reports by selecting the previously best performing informal strategies of (i) betting a fraction w.r.t. the maximal discrepancy (``AbsDisc'') and (ii) betting the optimal fraction on the maximal expected value (``MaxEvFrac'') in our experiments (Section~\ref{sec:experiments}).
\subsection{Modern Portfolio Theory}
\label{sec:MPT}
Modern Portfolio Theory (MPT) is a standard economic view of the problem based on the idea of the expected value of the profit, possibly transformed by a utility function reflecting the user's particular preferences. The general idea behind MPT is that a portfolio $\bm{f^1}$, i.e. a vector of assets $\bm{f} = f_1, \dots, f_n$, is superior to $\bm{f^2}$, if its corresponding expected profit (Section~\ref{sec:definitions}) is at least as great
\begin{equation}
\EX[\bm{\rho} \cdot \bm{f^1}] \geq \EX[\bm{\rho} \cdot \bm{f^2}]
\end{equation}
and a given risk measure $risk : \mathbb{R}^n \to \mathbb{R}$ of the portfolio, w.r.t. the given odds, is no greater
\begin{equation}
risk(\bm{f^1}|\bm{\rho}) \leq risk(\bm{f^2}|\bm{\rho})
\end{equation}
This creates a partial ordering on the set of all possible portfolios. Taking the portfolios that no other portfolio is superior to gives us \rev{a} set of ``efficient portfolios'' $\Theta$~\citep{markowitz1952portfolio}. In simple terms, we trade off the expected profit against the risk by maximizing the following
\begin{equation}
\underset{\bm{f} \in \mathbb{R}^n}{\text{maximize}} ~(\EX[\bm{\rho} \cdot \bm{f}] - \gamma \cdot risk(\bm{f}|\bm{\rho}))
\end{equation}
where $\gamma$ is a hyperparameter reflecting the user's preference for risk.
In the most common setup, the $risk$ of a portfolio $\bm{f}$ is measured through the expected total variance of its profit $Var[\bm{\rho} \cdot \bm{f}] = \bm{f}^T\Sigma \bm{f}$, based on the given covariance matrix $\bm{\Sigma}_n^n$ of net profit of the individual assets. Note that in the case of independent outcomes (Section~\ref{sec:def:outcomes}), this reduces to a diagonal matrix with \rev{the} variance of each binary asset\rev{'s} profit, corresponding to the result $r_i$, following from the given odds $o_i$ and the underlying Bernoulli distribution as
$\Sigma(i,i) = \hat{P_r}(r_i) \cdot (1-\hat{P_r}(r_i)) \cdot \rho_{i,i}^2$.
MPT can generally thus be expressed as the following maximization problem
\begin{equation}
\label{eq:MPT}
\begin{aligned}
& \underset{\bm{f} \in \mathbb{R}^n}{\text{maximize}}~
& & \EX[\bm{\rho}\cdot\bm{f}] - \gamma \cdot \bm{f}^T\Sigma \bm{f}\\
& \text{subject to}
& & \sum_{i=1}^{n} f_i = 1, \; f_i \geq 0
\end{aligned}
\end{equation}
Apart from the variance $Var[\bm{w}]$ of the potential net returns $\bm{w} = \bm{\rho} \cdot \bm{f}$, different risk measures have been proposed~\citep{markowitz1952portfolio}, such as standard deviation $\sigma(\bm{w}) = \sqrt{Var[\bm{w}]}$ and coefficient of variation $CV(\bm{w}) = \frac{\sigma(\bm{w})}{\EX[\bm{w}]}$. Generally, there is no \rev{agreed-upon} measure of risk and the choice is thus left to the user.
The MPT approach is often criticized for the disputable choice of risk, which can be perceived as a formal weakness of the approach~\citep{peters2016evaluating}, since in many domains the risk is not easy to define. Moreover, the direct maximization of expected profit can be misleading in games, where the distribution of potential profits is highly skewed, i.e. where the mean profit is very different from the median. This situation naturally occurs in the multiplicative dynamics setting, where maximization of expected value may lead to undesirable outcomes~\citep{peters2016evaluating}.
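For illustration, the problem in Equation~\ref{eq:MPT} can be solved directly with an off-the-shelf convex solver. The following minimal Python sketch uses the cvxpy library on a single match with mutually exclusive outcomes plus a cash asset; the probabilities, odds and the value of $\gamma$ are illustrative assumptions rather than values used in our experiments.
\begin{verbatim}
import cvxpy as cp
import numpy as np

p = np.array([0.5, 0.3, 0.2])      # player's outcome probability estimates
o = np.array([2.1, 3.4, 5.0])      # offered decimal odds
k, n = len(p), len(p) + 1          # outcomes, assets incl. the cash asset

# net-profit matrix rho: entry (i, j) = profit of a unit bet on asset j
# if result r_i occurs; the last column is the cash asset (zero profit)
rho = -np.ones((k, n))
rho[:, -1] = 0.0
rho[np.arange(k), np.arange(k)] = o - 1.0

gamma = 1.0                        # risk-aversion hyperparameter
f = cp.Variable(n, nonneg=True)
profit = rho @ f                   # profit under each possible result
expected = p @ profit
variance = cp.sum(cp.multiply(p, cp.square(profit - expected)))  # f^T Sigma f

problem = cp.Problem(cp.Maximize(expected - gamma * variance),
                     [cp.sum(f) == 1])
problem.solve()
print(f.value)
\end{verbatim}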
\subsubsection{Maximum Sharpe Strategy}
\label{sec:MaxSharpe}
Apart from the choice of the risk measure, the inherent degree of freedom in MPT is how to select a particular portfolio from the efficient frontier $\Theta$ (based on the choice of $\gamma$). Perhaps the most popular way to avoid the dilemma is to select the spot on the Pareto front with the highest expected profit w.r.t. the risk. For the risk measure of $\sigma(\bm{w})$, this is known as the ``Sharpe ratio'', generally defined as
\begin{equation}
\frac{\EX[\bm{w}] - r_f}{\sigma(\bm{w})}
\end{equation}
where $\EX[\bm{w}]$ is the expected return of the portfolio, $\sigma(\bm{w})$ is the standard deviation of the return, and $r_f$ is a ``risk-free rate''. Since there is no risk-free investment in sports betting, we can neglect it and reformulate the optimization problem as
\begin{equation}
\begin{aligned}
& \underset{\bm{f} \in \mathbb{R}^n}{\text{maximize}}
& & \frac{\EX[\bm{\rho} \cdot \bm{f}]} {\sqrt{\bm{f}^{T}\bm{\Sigma}\bm{f}}} \\
& \text{subject to}
& & \sum_{i=1}^{n} f_i = 1, f_i \geq 0
\end{aligned}
\end{equation}
the solution of which we will further refer to as the ``MSharpe'' strategy.
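Since the ratio objective is not directly expressible in a disciplined convex form, the following minimal Python sketch simply applies a generic local solver (SLSQP from scipy) to it on illustrative single-match inputs. It is meant only to demonstrate the formulation, not the exact solution procedure used in our experiments; a small epsilon guards the denominator against the zero-variance case mentioned below.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

p = np.array([0.5, 0.3, 0.2])              # illustrative probability estimates
o = np.array([2.1, 3.4, 5.0])              # illustrative decimal odds
k, n = len(p), len(p) + 1

rho = -np.ones((k, n)); rho[:, -1] = 0.0   # net-profit matrix, last column = cash
rho[np.arange(k), np.arange(k)] = o - 1.0

def neg_sharpe(f):
    profit = rho @ f
    mean = p @ profit
    var = p @ (profit - mean) ** 2
    return -mean / np.sqrt(var + 1e-12)

cons = ({'type': 'eq', 'fun': lambda f: np.sum(f) - 1.0},)
res = minimize(neg_sharpe, np.full(n, 1.0 / n), bounds=[(0.0, 1.0)] * n,
               constraints=cons, method='SLSQP')
print(res.x)
\end{verbatim}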
The variance-based choices of risk have often been criticized as they penalize excess losses as well as excess returns, which is obviously undesirable. Moreover, the calculation of the MSharpe solution is also quite sensitive to errors in the probabilistic estimates, and can often be biased towards extreme solutions, requiring some additional form of control\footnote{E.g. a strategy with no wagers placed would have zero variance, resulting in an infinite Sharpe ratio.}. Nevertheless\rev{,} it remains a very popular investment practice, which we include in our experiments.
\subsection{Kelly Criterion}
\label{sec:kelly}
The Kelly criterion\rev{~\citep{kelly1956new, thorp2008kelly}} is based on the idea of expected multiplicative growth in the reinvestment setting (Section~\ref{sec:multiplicative}), so that a portfolio $\bm{f}$ is chosen such that the long-term value of the resulting, continuously reinvested, wealth $W_t$ is maximal (in an infinite horizon of $t$). Note that in this scenario we assume that the same portfolio is going to be presented at each time step. Due to its multiplicative nature, it is also known as the geometric mean policy, emphasizing the contrast to the arithmetic mean approaches based on the expected value.
The two can, however, be looked at similarly with the use of a logarithmic ``utility function'', transforming the geometric into the arithmetic mean, and the multiplicative into the additive setting, respectively. The problem can then be again expressed by the standard means of maximizing the expected value as
\begin{equation*}
\begin{aligned}
& \underset{\bm{f} \in \mathbb{R}^n}{\text{maximize}}
& & \EX[\log(\bm{O} \cdot \bm{f})]\\
& \text{subject to}
& & \sum_{i=1}^{n} f_i = 1, \; f_i \geq 0
\end{aligned}
\end{equation*}
Note that, in contrast to MPT, there is no explicit term for risk here, as the notion of risk is inherently encompassed in the growth-based view of the wealth progression, i.e. the long-term value of a portfolio that is too risky will be smaller than that of a portfolio with the right risk balance (and similarly for portfolios that are too conservative).
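For illustration, the above problem is a concave maximization that can be handled directly by an off-the-shelf convex solver. The minimal Python sketch below uses cvxpy and assumes a single match with mutually exclusive outcomes plus a cash asset; the probabilities and odds are purely illustrative.
\begin{verbatim}
import cvxpy as cp
import numpy as np

p = np.array([0.5, 0.3, 0.2])         # assumed outcome probability estimates
o = np.array([2.1, 3.4, 5.0])         # decimal odds
k, n = len(p), len(p) + 1

# payoff matrix O: total return of a unit stake on asset j if result r_i occurs
O = np.zeros((k, n))
O[:, -1] = 1.0                        # the cash asset is returned in full
O[np.arange(k), np.arange(k)] = o

f = cp.Variable(n, nonneg=True)
growth = cp.sum(cp.multiply(p, cp.log(O @ f)))    # E[log(O . f)]
problem = cp.Problem(cp.Maximize(growth), [cp.sum(f) == 1])
problem.solve()
print(f.value)
\end{verbatim}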
The calculated portfolio is then provably optimal, i.e. it accumulates more wealth than any other portfolio chosen by any other strategy in the limit of $t$. However, this strong result only holds under considerably unrealistic assumptions~\citep{kelly1956new, thorp2008kelly, peters2016evaluating}. Similarly to MPT, we assume that the true probability distribution of the game outcomes is known, and additionally we assume that:
\begin{enumerate}
\item we are repeatedly presented with the same games.
\item we play for an infinite amount of time.
\end{enumerate}
Despite the fact that these conditions are impossible to meet in practice, the Kelly strategy is very popular, and its various modifications (Section~\ref{sec:risk}) are prevalent among bettors in practice.
\subsubsection{Quadratic Approximation}
\label{sec:quadraticapprox}
Exact numerical calculation of the Kelly strategy is often \rev{time-consuming}, especially when numerous runs through a large dataset of games are necessary. A practical approach to this issue has been proposed~\citep{busseti2016risk} based on a quadratic approximation of the Kelly's logarithmic utility using the Taylor series expansion. Let us first recall the following.
\begin{equation}
\log(1+x) = x - \frac{x^{2}}{2} + \dots
\end{equation}
Next, following~\citep{busseti2016risk}, we make an assumption for the Taylor approximation that our net profits are not too far from zero $\bm{\rho}\cdot{\bm{f}} \approx \bm{0}$ and express the logarithmic part of the Kelly criterion as follows~\citep{busseti2016risk}.
\begin{equation}
\log(\bm{O} \cdot \bm{f}) = \log(1 + \bm{\rho} \cdot \bm{f})
\end{equation}
allowing us to proceed with the Taylor expansion as
\begin{equation}
\log(1 + \bm{\rho} \cdot \bm{f}) = \bm{\rho} \cdot \bm{f} - \frac{(\bm{\rho} \cdot \bm{f})^{2}}{2} + ...
\end{equation}
Now, taking only the first two terms of the series, we transform the expectation of the logarithm into a new problem definition as follows
\begin{equation}
\begin{aligned}
& \underset{\bm{f} \in \mathbb{R}^n}{\text{maximize}}
& & \EX[\bm{\rho} \cdot \bm{f} - \frac{(\bm{\rho} \cdot \bm{f})^{2}}{2}] \\
& \text{subject to}
& & \sum_{i=1}^{n} f_i = 1, ~f_i \geq 0
\end{aligned}
\end{equation}
We will further refer to this strategy as the ``Quadratic Kelly''.
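The resulting problem is a simple quadratic program. A minimal cvxpy sketch, using the same kind of illustrative single-match inputs as above, differs from the full Kelly sketch only in the objective.
\begin{verbatim}
import cvxpy as cp
import numpy as np

p = np.array([0.5, 0.3, 0.2])              # illustrative probability estimates
o = np.array([2.1, 3.4, 5.0])              # illustrative decimal odds
k, n = len(p), len(p) + 1
rho = -np.ones((k, n)); rho[:, -1] = 0.0   # net-profit matrix, last column = cash
rho[np.arange(k), np.arange(k)] = o - 1.0

f = cp.Variable(n, nonneg=True)
profit = rho @ f
objective = p @ profit - 0.5 * cp.sum(cp.multiply(p, cp.square(profit)))
problem = cp.Problem(cp.Maximize(objective), [cp.sum(f) == 1])
problem.solve()
print(f.value)
\end{verbatim}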
Note that, interestingly, the problem can now be rewritten to
\begin{equation}
\begin{aligned}
& \underset{\bm{f} \in \mathbb{R}^n}{\text{maximize}}
& & \EX[\bm{\rho} \cdot \bm{f}] - \frac{1}{2}\EX[\bm{f}^T (\bm{\rho} \cdot \bm{\rho}^T) \bm{f}] \\
\end{aligned}
\end{equation}
corresponding to the original MPT formulation from Equation~\ref{eq:MPT} for the particular user choice of $\gamma=\frac{1}{2}$.
This correspondence follows from the fact that the geometric mean is approximately the arithmetic mean minus $\frac{1}{2}$ of the variance~\citep{markowitz1952portfolio}, providing further insight into \rev{the} connection between the two popular strategies of Kelly and Markowitz, respectively.
\section{Risk Management Practices}
\label{sec:risk}
The core issue with the mathematical strategies is that their calculations are carried out as if the true probability distribution over the outcomes was known. Moreover\rev{,} they are often sensitive to even \rev{the slightest} error in the estimates. Here we review simple remedies that have been proposed on top of the original strategies to manage the extra risk stemming from the underlying errors, as well as more sophisticated techniques incorporating the uncertainty of estimates directly into \del{the} computation of \rev{the} strategies.
\subsection{Maximum bet limit}
\label{sec:limit}
Constraining the maximal wager to a fixed value $m$ is probably the most trivial risk-avoiding technique one can encounter, which is likely also why it is the most prevalent one in practice. Moreover, a maximum bet limit is often imposed from the side of the bookmaker, too, constraining the risk they undertake w.r.t. each bettor. We thus include this empirical method in our portfolio of strategies to see whether capping the invested amount at a fixed threshold might actually improve the overall wealth progression of the existing strategies if properly tuned.
\subsection{Fractional Approaches}
\label{sec:fractional}
Fractioning is an example of a simple heuristic that is nevertheless very efficient in practice.
The main idea behind any ``fractional approach'' is to bet only a fraction $\omega$ of the calculated portfolio and leave the rest of $1-\omega$ in the cash asset for security. We define such a trade-off index $\omega$ for a portfolio as
\begin{equation}
\bm{f}_\omega = \omega \bm{f}_{1..n-1} + (1-\omega) \bm{f}_n
\end{equation}
where $\bm{f}_{1..n-1}$ corresponds to the risky part of the portfolio with stochastic assets and $\bm{f}_n$ is the cash asset, as introduced in Section~\ref{sec:def:outcomes}.
The fractional approach is mostly used with the Kelly strategy~\citep{maclean2011growth, thorp2011understanding}, where for $\omega = 0.5$ it is famously referred to as ``half Kelly'' by practitioners. \rev{Nevertheless,} the choice of $\omega$ should depend on the actual distributions and preferences for risk. The same idea of taking only a fraction of the calculated portfolio can generally be applied to any strategy, including MPT, and it is overall useful whenever our estimates are erroneous.
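One consistent implementation of such a fractional portfolio, scaling the risky part by $\omega$ and moving the remainder into the cash asset (assumed to be the last entry), is sketched below; it can be applied to the output of any of the strategies above, e.g. with $\omega=0.5$ for a ``half Kelly'' variant.
\begin{verbatim}
import numpy as np

def fractional(f, omega):
    """Scale the risky fractions by omega; the cash asset absorbs the rest."""
    g = omega * np.asarray(f, dtype=float)
    g[-1] = 1.0 - g[:-1].sum()
    return g

# e.g. f_half = fractional(f_kelly, omega=0.5) for a previously computed f_kelly
\end{verbatim}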
\subsection{Drawdown Constraint}
\label{sec:drawdown}
The drawdown constraint represents a more involved technique that actually modifies the original optimization problem.
The idea of drawdown is to incorporate a special probabilistic constraint into the Kelly strategy so as to push the solution away from the more risky region near the ruin boundary. The choice of the boundary is then left to the user's preference as an input parameter into the optimization problem. The probabilistic boundary is expressed as the following constraint
\begin{equation}
P(W_t^{min} < \alpha) \leq \beta
\end{equation}
expressing that the probability of our wealth falling below $\alpha$ can be at most $\beta$.
For the Kelly criterion, following the calculations from~\citep{busseti2016risk}, the constraint is approximately satisfied if the following condition holds
\begin{equation}
\EX[(\bm{O} \cdot \bm{f})^{-\lambda}] \leq 1 \hspace{5pt} \text{where} \hspace{5pt} \lambda = \log(\beta) / \log(\alpha)
\end{equation}
which we can rewrite as
\begin{equation}
\log(\sum_{i=1}^{n} p_i \cdot (\bm{o_i}\cdot f_i)^{-\lambda}) \leq \log(1)
\end{equation}
which can be further simplified~\citep{busseti2016risk} into the following constraint
\begin{equation}
\log(\sum_{i=1}^{n} \exp(\log(p_i \cdot (\bm{o_i}\cdot f_i)^{-\lambda}))) \leq 0
\end{equation}
which we can finally use in a convex optimization program.
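For illustration, this final form maps directly onto the log-sum-exp atom of a convex modelling library such as cvxpy, so the drawdown-constrained Kelly problem can be sketched as follows; the values of $\alpha$, $\beta$ and the single-match inputs are again purely illustrative assumptions.
\begin{verbatim}
import cvxpy as cp
import numpy as np

p = np.array([0.5, 0.3, 0.2])             # illustrative probability estimates
o = np.array([2.1, 3.4, 5.0])             # illustrative decimal odds
k, n = len(p), len(p) + 1
O = np.zeros((k, n)); O[:, -1] = 1.0      # payoff matrix, last column = cash
O[np.arange(k), np.arange(k)] = o

alpha, beta = 0.7, 0.1                    # P(W_min < alpha) <= beta
lam = np.log(beta) / np.log(alpha)

f = cp.Variable(n, nonneg=True)
log_wealth = cp.log(O @ f)
growth = cp.sum(cp.multiply(p, log_wealth))
drawdown = cp.log_sum_exp(-lam * log_wealth + np.log(p)) <= 0

problem = cp.Problem(cp.Maximize(growth), [cp.sum(f) == 1, drawdown])
problem.solve()
print(f.value)
\end{verbatim}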
\subsection{Distributionally Robust Optimization}
\label{sec:dro}
Distributionally robust optimization (DRO) can be understood as a stochastic game between a player and nature, where nature picks a distribution $P_r$ from some predefined ambiguity set of distributions $\bm{\Pi}$ so as to inflict maximum damage to the player's utility. This fits the setting of sports betting against a fixed-odds bookmaker quite naturally since, given the opposing utilities of the two, the bookmaker (nature) sets up the odds so as to minimize the player's chances for profit.
Generally, DRO is \rev{a} paradigm for decision making under uncertainty where:
\begin{enumerate}
\item The uncertain problem inputs are governed by a distribution that is itself subject to uncertainty.
\item The distribution is then assumed to belong to an ambiguity set $\bm{\Pi}$.
\item The ambiguity set contains all distributions that are compatible with the player's prior information.
\end{enumerate}
Being aware of the uncertainty in her own estimates $P_p = \hat{P_r}$, the player now modifies the optimization problem to account for the worst possible scenario within the given ambiguity set $\Pi$.
\begin{equation*}
\begin{aligned}
& \underset{\bm{f} \in \mathbb{R}^n}{\text{maximize}}
& & \underset{\bm{p} \in \bm{\Pi}}{\min} \sum_{i=1}^{n} p_i \cdot \log(\bm{O_i} \cdot \bm{f})\\
& \text{subject to}
& & \sum_{i=1}^{n} f_i = 1, \; f_i \geq 0
\end{aligned}
\end{equation*}
The ambiguity set $\bm{\Pi}$ can be defined in a number of ways. In~\citep{sun2018distributional}, multiple definitions are explored in connection to the Kelly strategy, such as Polyhedral, Ellipsoidal, or Divergence based. In this review\rev{,} we further narrow our focus to the polyhedral ambiguity set, referred to as the ``box'' uncertainty set, which can be defined as
\begin{equation}
\bm{\Pi} = \{p_i \hspace{3pt} | \hspace{3pt} |p_i - P_p(r_i)| \leq \eta \cdot P_p(r_i),~\sum_{i=1}^{n} p_i = 1, p_i \geq 0\}
\end{equation}
i.e. constraining each probability $p_i$ to deviate from the player's nominal estimate $P_p(r_i)$ of the probability of result $\mathrm{R}=r_i$ by at most a fraction $\eta$ of that estimate.
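To illustrate the box ambiguity set, the following Python sketch computes nature's worst-case distribution for one fixed candidate portfolio by solving the inner minimization as a small linear program with scipy. It only evaluates the inner problem for a given $\bm{f}$; it is not the full robust optimization procedure used for the KellyRobust strategy, and all inputs are illustrative.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

p_hat = np.array([0.5, 0.3, 0.2])          # player's nominal estimates P_p
o = np.array([2.1, 3.4, 5.0])              # illustrative decimal odds
eta = 0.1                                  # box radius

f = np.array([0.20, 0.10, 0.05, 0.65])     # fixed candidate portfolio (last = cash)
O = np.zeros((3, 4)); O[:, -1] = 1.0       # payoff matrix
O[np.arange(3), np.arange(3)] = o

c = np.log(O @ f)                          # nature minimises sum_i p_i * log(O_i . f)
bounds = [((1 - eta) * q, (1 + eta) * q) for q in p_hat]
res = linprog(c, A_eq=np.ones((1, 3)), b_eq=[1.0], bounds=bounds, method='highs')
print(res.x)                               # worst-case distribution within the box
\end{verbatim}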
\section{Experiments}
\label{sec:experiments}
The main purpose of this review is to assess \rev{the} performance of the individual strategies (Section~\ref{sec:strategies}) and their risk modifications (Section~\ref{sec:risk}) in various realistic settings (Section~\ref{sec:definitions}) on real data.
We recall the strategies used, describe the datasets and the evaluation protocol, and discuss the conducted experiments together with their results.
The strategies for the experiments were chosen with the aim of representing the diverse portfolio of approaches occurring in practice, and with the goal of providing an unbiased statistical assessment of their performance limits. The particular strategies chosen, with their respective hyperparameters, are specified in Table~\ref{tab:strategies}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
\textbf{Strategy} & Description & {Hyperparameters}\\
\hline
AbsDisc & absolute discrepancy bet (Section~\ref{sec:strat:informal}) & None \\
\hline
MaxEvFrac & max. EV outcome with fractioning (Section~\ref{sec:strat:informal}) & $\omega \in [0,1]$ \\
\hline
Kelly & original Kelly strategy (Section~\ref{sec:kelly}) & None \\
\hline
MSharpe & original max. Sharpe ratio (Section~\ref{sec:MaxSharpe}) & None \\
\hline
KellyFrac & Kelly strategy with fractioning (Section~\ref{sec:fractional}) & $\omega \in [0,1]$ \\
\hline
MSharpeFrac & max. Sharpe with fractioning & $\omega \in [0,1]$ \\
\hline
KellyFracMax & Kelly with fractioning and limiting (Section~\ref{sec:limit}) & $\omega \in [0,1]$, $m \in [0,1]$ \\
\hline
MSharpeFracMax & max. Sharpe with fractioning and limiting & $\omega \in [0,1]$, $m \in [0,1]$ \\
\hline
KellyDrawdown & Kelly with the drawdown constraint (Section~\ref{sec:drawdown}) & $\alpha$, $\beta \in [0,1]$ \\
\hline
KellyRobust & Kelly with distributionally robust optimization & $\eta \in [0,1]$ \\
\hline
\end{tabular}
\end{center}
\caption{Evaluated strategies and their hyperparameters}
\label{tab:strategies}
\end{table}
\subsection{Datasets}
\label{sec:datasets}
We collected 3 datasets of different properties from 3 different sports (horse racing, basketball, and football), each containing a significant number of ``matches'' \rev{(races and games)} for statistical evaluation. Each of the datasets is further accompanied by realistic model predictions tuned specifically for each domain. Since our focus here is purely on the betting strategies, we do not elaborate on the models in detail beyond their predictive performances, which naturally influence the performance of the strategies, too.
For each of the datasets, we present the following key properties.
\begin{itemize}
\item $size$ - Dataset size (i.e. \rev{the} number of matches).
\item $acc_b$ - Accuracy of the bookmaker $b$.
\item $acc_p$ - Accuracy of the player $p$ (i.e. the predictive model).
\item $n$ - Number of possible match outcomes ($n=|R|$).
\item $odds$ - Range of the offered odds.
\item $margin$ - Average margin present in the odds.
\item $A_{KL}$ - Kullback-Leibler advantage of the player.
\end{itemize}
The $A_{KL}$ is a statistical measure of \rev{the} difference of the predictive performances (\rev{cross-entropy}) of the player and the bookmaker, respectively. The metric was chosen as it plays a key role in \rev{the} performance of the original Kelly strategy, where the growth of profit can be proved directly proportional to the KL advantage~\citep{cover2012elements}.
\subsubsection{Horse Racing}
\label{sec:horses}
The data for horse racing were collected from the Korean horse racing market (KRA) and provide $2700$ races. The target market of the dataset is the ``win pool'', representing betting on the horse winning the race. The schedule and participation of individual horses in the races varies considerably. Moreover, there is a varying total number of horses, and thus outcomes $n$, in each race, creating \rev{an} interesting challenge for the strategies. We thus treat each race as a completely independent investment opportunity and optimize the strategies accordingly. The model used was a form of conditional logistic regression over various features of the horses \rev{(Section~\ref{sec:related:model})}. The particular dataset properties are specified in Table~\ref{tab:horses}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c|c|c|}
\hline
\textit{size} & \textit{$acc_p$} & \textit{$acc_b$} & $n$ & $odds$ & $margin$ &$A_{KL}$\\
\hline
$2700$ & $0.512$ & $0.503$ & $\in [6, 16]$ & $\in [1.0, 931.3]$ & $0.2$ & $\approx 0.0022$ \\
\hline
\end{tabular}
\end{center}
\caption{Horse racing dataset properties}
\label{tab:horses}
\end{table}
The specifics of the horse racing dataset come mainly from the fact that it actually originates from a parimutuel market, meaning that the wagers are put into a shared pool from which a certain portion is removed as a profit for the house (margin). Nevertheless\rev{,} we convert it into the discussed fixed-odds setting by assuming the last available state of the money pool to get the possible payoffs/odds~\citep{hausch2008efficiency}. As a result, the ``bookmaker's'' estimate in this case is derived entirely from public opinion, and is noticeably less accurate. On the one hand, this provides space for statistical models to gain a predictive KL advantage; on the other hand, the margin is also considerably higher.
\subsubsection{Basketball}
\label{sec:basket}
The next domain we selected is basketball, where we collected box score data from matches in the US National Basketball Association (NBA). The dataset consists of $16000$ games ranging from the year $2000$ to $2015$. The NBA league has a regular schedule of the matches, where each team plays repeatedly with every other team in \rev{so-called} ``rounds''. To emulate the market setting in a realistic fashion, we assume rounds as groups of $10$ scheduled matches to repeatedly appear on the market in parallel (Section~\ref{sec:def:parallel}).
The target market here was the ``money-line'', i.e. betting on the winner of each match. The specifics of the data then come from the fact that there are only 2 outcomes in the game, directly corresponding to the most basic \rev{coin-tossing} setup of the problem (Section~\ref{sec:definitions}).
The model used was a convolutional neural network based on detailed statistics of the individual players and teams~\citep{hubavcek2019exploiting}. The odds then come from the closing line of the Pinnacle\footnote{https://www.pinnacle.com/} bookmaker. Notice that in this case the model is not as accurate as the bookmaker, and is thus at a general KL disadvantage. The particular dataset properties are specified in Table~\ref{tab:basket}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c|c| }
\hline
\textit{size} &\textit{$acc_p$} & \textit{$acc_b$} & $n$ & $margin$ & $odds$ & $A_{KL}$\\
\hline
$16000$ & $0.68$ & $0.7$ & $2$ & $0.038$ & $\in [1.01, 41]$ & $\approx -0.0146$\\
\hline
\end{tabular}
\end{center}
\caption{Basketball dataset properties}
\label{tab:basket}
\end{table}
\subsubsection{Football}
\label{sec:football}
The football dataset consists of $32000$ matches collected from various leagues all over the world. The schedule in each football league is similar in spirit to that of \rev{the} NBA, and so we again assume the market setting with $10$ parallel games (Section~\ref{sec:def:parallel}). The target market was again money-line betting. The outcomes in football include a draw, resulting \rev{in} a moderate $n=3$ setting. Interestingly, the original dataset~\citep{dubitzky2019} contained merely the historical results of the matches, and the model has thus been built purely from score-derived features. Particularly, the model was a form of gradient-boosted trees learner, winning the 2017's Soccer prediction challenge~\citep{dubitzky2019}. The odds were again provided by \rev{Pinnacle but, this time, we} took the more \rev{favourable} opening line. Despite varying over different leagues, the overall margin is slightly lower than in basketball, and the model is at a slightly lower, yet still considerable, KL disadvantage. The particular dataset properties are specified in Table~\ref{tab:football}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c|c| }
\hline
\textit{size} &\textit{$acc_p$} & \textit{$acc_b$} & $n$ & $margin$ & $odds$ & $A_{KL}$\\
\hline
$32000$ & $0.523$ & $0.537$ & $3$ & $0.03$ & $\in [1.03, 66]$ & $\approx -0.013$\\
\hline
\end{tabular}
\end{center}
\caption{Football dataset properties}
\label{tab:football}
\end{table}
\subsection{Evaluation Protocol}
\label{sec:ex:protocol}
The models providing the probabilistic estimates were trained following the natural order of the matches in time, so that all of their estimates are actual future predictions, i.e. out-of-sample test outputs for matches unseen in the training phase.
For the actual optimization problems of the individual strategies, we have chosen to work with cvxpy~\citep{cvxpy} as the main optimization framework. For each strategy, we first solved the given problem using the Embedded Conic Solver (ECOS)~\citep{domahidi2013ecos}, and should a numerical problem arise\rev{,} we proceeded with solving the problem using the Splitting Conic Solver (SCS)~\citep{o2016scs}.
Since many of the chosen strategies (Table~\ref{tab:strategies}) contain hyperparameters, we additionally tuned each of them for the best possible performance via grid search. The individual hyperparameter ranges for the grid search can be found in Table~\ref{tab:strategies}.
To provide an unbiased \rev{estimate} of their actual performance in practice, we also followed a strict evaluation protocol for each of the strategies. This means that we have (i) split each dataset into training and testing subsets, (ii) found the best hyperparameter setting on the training subset, and (iii) evaluated the fixed setting on the test subset.
To make the output profit measures (Section~\ref{sec:metrics}) more robust, both the training and the testing are evaluated by generating $1000$ separate ``runs'' through each subset, where the sequence of games is randomly reshuffled and $10\%$ of the games are randomly removed each time (the split between train and test always remains respected). We hence evaluate the properties of each strategy on $1000$ separate wealth investment trajectories through previously unseen games.
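The generation of such runs can be sketched in Python as follows, where \texttt{matches} stands for the ordered list of betting opportunities of one (train or test) subset; the structure is hypothetical and serves only to illustrate the reshuffling and the $10\%$ removal.
\begin{verbatim}
import numpy as np

def generate_runs(matches, n_runs=1000, drop=0.1, seed=0):
    """Reshuffle the match sequence and drop a random 10% for each run."""
    rng = np.random.default_rng(seed)
    runs = []
    for _ in range(n_runs):
        idx = rng.permutation(len(matches))
        keep = int(round((1.0 - drop) * len(matches)))
        runs.append([matches[i] for i in idx[:keep]])
    return runs
\end{verbatim}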
\subsubsection{Hyperparameter Selection}
\label{sec:hyperpar}
To choose the best possible strategy setting on the training set, we searched for hyperparameters according to the following criterion
\begin{equation*}
\begin{aligned}
& {\text{maximize}}
& & median(\bm{W_{f}}) \\
& \text{subject to}
& & Q_{5} > 0.9
\end{aligned}
\end{equation*}
i.e. we always chose the setting that reached the maximum median final wealth, subject to the condition that no more than $5\%$ of the wealth trajectories ended below $90\%$ of the initial bankroll. Hyperparameter settings that did not meet the required criterion were simply removed from consideration. While the presented hyperparameter selection criterion might seem somewhat arbitrary and could be argued with, our aim was to follow the natural desiderata of wealth progression for bettors in practice. That is, to primarily prevent the occurrence of ruin (``survival first''), and only then maximize the potential profits for the typical (median) bettor.
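The selection rule can be sketched as follows, where \texttt{results} is assumed to map each hyperparameter setting to the array of final wealth values obtained over the training runs (a hypothetical structure used only for illustration).
\begin{verbatim}
import numpy as np

def select_setting(results):
    """Pick the setting with maximal median final wealth s.t. Q_5 > 0.9."""
    feasible = {s: w for s, w in results.items() if np.quantile(w, 0.05) > 0.9}
    if not feasible:
        return None
    return max(feasible, key=lambda s: np.median(feasible[s]))
\end{verbatim}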
\subsubsection{Evaluation Metrics}
\label{sec:metrics}
For the actual final evaluation of the strategies on the test set, we chose a range of diverse metrics to provide more insights into the properties of the individual strategies and game settings. The metrics are as follows
\begin{itemize}
\item $median(W_f)$ - median final wealth position.
\item $mean(W_f)$ - mean final wealth position.
\item $min(W_i)$ - lowest wealth position.
\item $max(W_i)$ - maximal wealth position.
\item $sigma(W_f)$ - standard deviation of \rev{the} final wealth positions.
\item $ruin$ \% - ruin percentage of wealth trajectories
\end{itemize}
for which we define a $ruin$ situation as falling below $0.01\%$ of the initial bank $W_0$ at least once during the entire investment period. Note that, as opposed to the original definition of ruin in the Kelly strategy~\citep{kellyold}, we have chosen a small \textit{non-zero} threshold, since in practice a sufficiently low bankroll effectively prevents placing the minimal allowed bet, a constraint often present in the market.
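The computation of these metrics over a set of wealth trajectories can be sketched as follows, assuming each trajectory is an array of wealth positions starting at $W_0 = 1$; the function name and structure are illustrative only.
\begin{verbatim}
import numpy as np

def evaluate(trajectories, w0=1.0, ruin_level=1e-4):
    """Summarise final wealth statistics and the ruin percentage."""
    final = np.array([t[-1] for t in trajectories])
    return {
        'median(W_f)': np.median(final),
        'mean(W_f)': np.mean(final),
        'min(W_i)': min(np.min(t) for t in trajectories),
        'max(W_i)': max(np.max(t) for t in trajectories),
        'sigma(W_f)': np.std(final),
        'ruin %': 100.0 * np.mean([np.min(t) < ruin_level * w0
                                   for t in trajectories]),
    }
\end{verbatim}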
\subsection{Results}
\label{sec:results}
Finally\rev{,} we present performances (Section~\ref{sec:metrics}) of the individual strategies (Section~\ref{sec:experiments}) over each of the datasets (Section~\ref{sec:datasets}). Apart from the evaluation metrics in the final state of wealth progression $W_{f}$, we present the summarized wealth progression trajectories for a selected ``best'' strategy with maximal median final wealth for each of the datasets, to demonstrate the overall bankroll dynamics. \rev{The evaluation metrics for horse racing, basketball, and football datasets are presented in Table~\ref{experiments:metrics:horses}, Table~\ref{experiments:metrics:basketball}, and Table~\ref{experiments:metrics:football}, respectively. The wealth progression trajectories for the best strategies are then displayed in
Figure~\ref{fig:horses}, Figure~\ref{fig:basket} and Figure~\ref{fig:football}, respectively.}
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c|c|c|}
\hline
\textit{\textbf{strategy}} & $median(W_f)$ & $mean(W_f)$ & $min(W_i)$ & $max(W_i)$ & $sigma(W_f)$ & $ruin$ \%\\
\hline
AbsDisc & 0.0019 & 0.03 & 4e-08 & 27.1 & 0.04 & 85.2 \\
\hline
MaxEvFrac & 0.86 & 2.13 & 2e-09 & 711 & 4.7 & 36.1 \\
\hline
\hline
Kelly & 4.11 & 15.6 & 7e-05 & 2167.8 & 59.8 & 0.6 \\
\hline
MSharpe & 3.92 & 17.8 & 9e-06 & 2231.1 & 48.3 & 12.1 \\
\hline
KellyFrac & 3.39 & 14.2 & 0.003 & 213.2 & 32.1 & 0 \\
\hline
MSharpeFrac & 3.28 & 16.9 & 8e-05 & 253.3 & 26.5 & 0.2 \\
\hline
KellyFracMax & 3.49 & 13.8 & 0.0057 & 168.1 & 29.3 & 0 \\
\hline
MSharpeFracMax & 3.41 & 15.2 & 0.0065 & 194.3 & 25.4 & 0 \\
\hline
KellyDrawdown & 3.3 & 13.7 & 0.009 & 112.4 & 22.4 & 0 \\
\hline
KellyRobust & 2.97 & 4.1 & 0.08 & 77.3 & 7.2 & 0 \\
\hline
\end{tabular}
\end{center}
\caption{Final wealth statistics of the strategies in the horse racing scenario (Section~\ref{sec:horses}).}
\label{experiments:metrics:horses}
\end{table}
\begin{figure}[h!]
\includegraphics[width=0.85\textwidth]{horse_box_reinvest.eps}
\centering
\caption{Wealth progression of the KellyFracMax strategy in the horse racing scenario (Section~\ref{sec:horses}).}
\label{fig:horses}
\end{figure}
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c|c|c|}
\hline
\textit{\textbf{strategy}} & $median(W_f)$ & $mean(W_f)$ & $min(W_i)$ & $max(W_i)$ & $sigma(W_f)$ & $ruin$ \%\\
\hline
Kelly & 9.1e-6 & 1.8e-05 & 1.9e-20 & 3312.2 & 1.7e-05 & 100 \\
\hline
MSharpe & 1.3e-06 & 5.1e-05 & 4.1e-21 & 2911 & 9.7e-06 & 100 \\
\hline
KellyFrac & 2.4 & 2.7 & 0.11 & 24.1 & 1.34 & 0 \\
\hline
MSharpeFrac & 1.24 & 1.97 & 0.002 & 19.6 & 0.85 & 0 \\
\hline
KellyFracMax & 2.3 & 2.5 & 0.13 & 20.9 & 1.27 & 0 \\
\hline
MSharpeFracMax & 1.2 & 1.7 & 0.008 & 12.1 & 0.56 & 0 \\
\hline
KellyDrawdown & 2.21 & 2.9 & 0.14 & 29.1 & 1.3 & 0 \\
\hline
KellyRobust & 1.39 & 1.46 & 0.23 & 10.9 & 0.45 & 0 \\
\hline
\end{tabular}
\end{center}
\caption{Final wealth statistics of the strategies in the basketball scenario (Section~\ref{sec:basket}).}
\label{experiments:metrics:basketball}
\end{table}
\begin{figure}[h!]
\includegraphics[width=0.85\textwidth]{basket_reinvest_fractional.eps}
\centering
\caption{Wealth progression of the KellyFrac strategy in the basketball scenario (Section~\ref{sec:basket}).}
\label{fig:basket}
\end{figure}
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c|c|c|}
\hline
\textit{\textbf{strategy}} & $median(W_f)$ & $mean(W_f)$ & $min(W_i)$ & $max(W_i)$ & $sigma(W_f)$ & $ruin$ \%\\
\hline
Kelly & 2.3e-09 & 5.2e-08 & 1.6e-21 & 5844.2 & 2.7e-07 & 100 \\
\hline
MSharpe & 1.8e-10 & 3.0e-07 & 5.9e-27 & 2617 & 4.2e-07 & 100 \\
\hline
KellyFrac & 10.05 & 11.8 & 0.03 & 182 & 9.7 & 0 \\
\hline
MSharpeFrac & 9.9 & 13.6 & 0.016 & 211 & 9.1 & 0 \\
\hline
KellyFracMax & 10.03 & 11.2 & 0.007 & 144 & 9.2 & 0 \\
\hline
MSharpeFracMax & 10.1 & 13.1 & 0.005 & 193 & 8.7 & 0 \\
\hline
KellyDrawdown & 10.25 & 12.4 & 0.09 & 122 & 9.3 & 0 \\
\hline
KellyRobust & 6.2 & 7.3 & 0.28 & 27.7 & 5.6 & 0 \\
\hline
\end{tabular}
\end{center}
\caption{Final wealth statistics of the strategies in the football scenario (Section~\ref{sec:football}).}
\label{experiments:metrics:football}
\end{table}
\begin{figure}[h!]
\includegraphics[width=0.85\textwidth]{football_reinvest.eps}
\centering
\caption{Wealth progression of the KellyDrawdown strategy in the football scenario (Section~\ref{sec:football}).}
\label{fig:football}
\end{figure}
\vspace{-1cm}
Firstly, the results of our experiments confirm that the, regularly used, informal betting strategies (Section~\ref{sec:strat:informal}) are clearly inferior to all the formal strategies, in agreement with the previous reports~\citep{hubavcek2019exploiting}. Moreover, they often lead to ruin even in \rev{a} situation with statistical model advantage, as reported for the horse racing dataset in Table~\ref{tab:horses}, for which we decided not to include them further.
As expected, the formal strategies based on Modern Portfolio Theory (MPT) (Section~\ref{eq:MPT}) and Kelly Criterion (Section~\ref{sec:kelly}) performed reasonably in the setting with \rev{a} statistical advantage $A_{KL}$ of having a more precise model. However, since they are based on unrealistic mathematical assumptions, their actual risk profile might be unexpected in practice. Using any of the proposed practices for additional risk management (Section~\ref{sec:risk}) generally led to a considerably lower volatility while keeping the wealth progression of a typical (both mean and median) bettor reasonably high. Also, following the mathematical properties of the pure form of both the strategies, they both lead to a certain ruin in scenarios without statistical $A_{KL}$ advantage of the model, which is exhibited in practice, too (Table~\ref{tab:basket}, Table~\ref{tab:football}).
On the other hand, a smart strategy modification can generate profits even in the statistically disadvantageous scenarios, as measured by the $A_{KL}$. Naturally, this does not hold universally and particular properties of the underlying models must be considered, too, since there are surely disadvantageous scenarios where no strategy can make profits by any means (Example~\ref{ex:coin1}).
The insights from the experiments regarding the discord between the approaches of MPT and Kelly roughly follow the intuitions behind the individual strategies. That is that the strategies based on the Kelly criterion (Section~\ref{sec:kelly}) result in a generally higher \textit{median} final wealth, while strategies based on the MPT (Section~\ref{sec:MPT}) result in a generally higher \textit{mean} final wealth, corresponding to the underlying expected value-based motivation. Interestingly, in the football dataset (Table~\ref{tab:football}) the mean final wealth performance of MPT is slightly lower than that of the Kelly-based strategies. However, we should note that the hyperparameter selection criteria (Section~\ref{sec:hyperpar}) can also be considered slightly biased in \rev{favour} of the Kelly approaches.
From a practical perspective, the drawdown modification of the Kelly criterion (Section~\ref{sec:drawdown}) seemed to perform very similarly to the, much less sophisticated, fractional approach (Section~\ref{sec:fractional}), further supporting its popular use in practice. While the distributionally robust modification of Kelly (Section~\ref{sec:dro}) achieved generally lowest final wealth scores, it was also the overall most stable strategy with the highest minimal final wealth. This is in complete accordance with its pessimistic underlying setting optimizing for the worst case scenario, which might be appealing to highly risk-averse bettors.
\section{Conclusions}
\label{sec:conclusion}
In this experimental review, we investigated the two most prominent streams of betting investment strategies based on the views of the Modern Portfolio Theory and the Kelly criterion, together with a number of their popular modifications aimed at additional risk management in practice, where their original underlying mathematical assumptions do not hold. We tested the strategies on 3 large datasets from 3 different sport\rev{s} domains of horse racing, basketball, and football, following a strictly unified evaluation protocol to provide unbiased estimates of \rev{the} performance of each method while tuning their \rev{hyperparameters}.
The results of our experiments suggest \rev{the} superiority of the formal mathematical approaches over the informal heuristics, which are often used in practice, however\rev{,} the experiments also revealed their weaknesses stemming from the unrealistic mathematical assumptions, particularly the knowledge of the true probability distribution over the \rev{match} outcomes.
\rev{
Consequently, when used in their plain original form, the formal strategies, i.e. the maximum Sharpe and Kelly, proved infeasible in almost all practical scenarios with uncertain probability estimates. Particularly, the theoretically optimal strategies often led to ruin instead of maximal profit, calling for the additional risk management practices.
}
\rev{The results of the subsequent modifications of the optimal strategies then suggested that reasonable trade-offs in wealth progression can be found in actual betting practice with the appropriate techniques, even in scenarios with worse model predictions than that of the bookmaker.}
\rev{Based on the experiments, we conclude that, for common practical purposes, the most suitable option out of the strategies reviewed seems to be the fractional Kelly, given that the fraction hyperparameter has been properly tuned to reflect the amount of uncertainty in each particular problem setting. The approach achieved the best, or close to the best, performance as evaluated by the chosen metrics in most of our experiments while being comparatively simpler than the other strategies. Our findings thus further support its common use in betting practice. The other common practice of setting a maximum bet limit was inconclusive as it improved the overall results in some domains (Table~\ref{experiments:metrics:horses}) while decreasing the profits in others (Table~\ref{experiments:metrics:basketball}).}
\rev{The distributionally robust Kelly strategy then proved to be the safest in all of the experiments, and can thus be suggested to extremely risk-averse practitioners. The second safest strategy was then to incorporate the drawdown constraint, which also proved quite efficient in trading off security for profit.}
\section*{Response Letter}
\subsection*{Reviewer: 1 Comments to the Author}
\subsubsection*{* Global evaluation}
\textit{The paper is a comprehensive review of some betting strategies based on the Modern portfolio theory and the Kelly criterion. The paper is globally well written and the distinct betting strategies are properly introduced. The methods' review is detailed, and the experimental part has been carefully conducted and described.
Though, I find the Introduction quite lacking of references: I would invite the authors to massively extend it, by using/recycling and extending some parts actually contained in Section 3.
Moreover, the Conclusion section is in my opinion too shortly outlined: as a possible suggestion, the authors could try to state which ones among the formal strategies (methods in Table 5,6, and 7) could be satisfactorily adopted and under which circumstances one or another method could be favorable. In a way, the authors could provide a sort of general and practical guideline to the bettors interested in horse racing, football or basketball, by remarking some of the arguments raised in Section 6.3.}
\begin{itemize}
\item I find the Introduction quite lacking of references: I would invite the authors to massively extend it, by using/recycling and extending some parts actually contained in Section 3.
\blue{We have significantly extended the related works Section \ref{sec:related} with both papers referred by the reviewers and additional related works. We have however kept the specific related work in the respective section, while keeping the introduction on a general note.}
\item The conclusion Section is in my opinion too shortly outlined: as a possible suggestion, the authors could try to state which ones among the formal strategies (methods in Table 5,6, and 7) could be satisfactorily adopted and under which circumstances one or another method could be favorable. In a way, the authors could provide a sort of general and practical guideline to the bettors interested in horse racing, football or basketball, by remarking some of the arguments raised in Section 6.3. \blue{The conclusion Section \ref{sec:conclusion} now includes suggestions and guidelines on what methods are preferable under which circumstances.}
\end{itemize}
\subsubsection*{* Some minor edits}
\begin{itemize}
\item Introduction, lines 29-32: although predictive models are not the core of this paper, I would suggest to include and cite at least some works who attempted to propose betting strategies starting from a predictive model. A short (not exhaustive) list of papers is here provided:
\begin{itemize}
\item Dixon and Coles, 1997. Modelling association football scores and inefficiencies in the football betting market.
\item Rue and Salvesen, 2000. Prediction and retrospective analysis of soccer matches in a league.
\item Groll and Abedieh, 2013. Spain retains its title and sets a new record–generalized linear mixed models on European football championships.
\item Egidi, Pauli and Torelli, 2018. Combining historical data and bookmakers' odds in modelling football scores.
\end{itemize}
\blue{Related works Section \ref{sec:related} has been extended with prediction models, the referred and additional related papers have been included.}
\item Page 1, line 37: ``known'' in place of ``know'' \blue{Corrected.}
\item Page 2, line 16: you claim that ``each result is binary in nature'', but this sentence is confusing in my opinion. In the paper, you report many examples in which the result is not binary.
\blue{We added a clarification note -
``Note this concerns an individual $r_i$ and not $|\mathrm{R}|$, i.e. a match can have many possible outcomes $r_i$, but each $r_i$ has a binary realization, resulting exclusively in either win or loss of the bet associated with it.''}
\item Page 3, line 25: after ``Heads'', punctuation is missing. \blue{Corrected.}
\item Page 9, line 25: maybe ``trade-off''? \blue{Corrected.}
\item Page 10, line 37: ``known'' in place of ``know''. \blue{Corrected.}
\item Page 14, line 44: $acc_p$ in place of $acc_b$. \blue{Corrected.}
\item Tables 2, 3, and 4: what is $m-acc$? And $b-acc$ is the same as $acc_b$ listed at page 14? \blue{Yes, it is the same. It has been corrected with a unified notation.}
\end{itemize}
\subsection*{Reviewer: 2 Comments to the Author}
\textit{I very much enjoyed reading the paper. It is certainly of interest to anyone working in sports betting. The authors have identified an area that needs discussing and present results of their experiments using different strategies for betting on different sports.
\\\\
I have some comments and suggestions (below), but overall, I think the paper requires only minor revisions before being ready for publication.}
\begin{itemize}
\item Is the level of mathematical rigour given in Section 2 needed? This is a judgement call, but it is a little heavy going on terminology that isn't used later in the paper. \blue{We have removed parts that are not that relevant for the paper (e.g. the cases of the bookmaker's margin).}
\item p2, line 27: is it true that bookmakers are maximizing long-term profits? Is it possible they are balancing the books and basically making the over-round off the bettors? Or is this one and the same thing? \\
\blue{Yes, making money from the over-round is not in contradiction with maximizing their long-term profits. But with predictions better than that of an average tipster, they can make more money than just from the over-round. And they need good predictions to lay out the opening odds anyway. Moreover, purely reactive balancing can only work on markets with very high liquidity/volume of bets, and could be quite risky/exploitable otherwise.}
\item p2, line 40: maybe mention betting exchanges as the less common setup. \blue{Done.}
\item p2, line 45: is it a little cumbersome to have used $f$ for the fraction bet above, and now be using it for the function? \blue{Corrected -- the function is now denoted by $g$ and $\bm{f}$ stands exclusively for the fraction vector/portfolio.}
\item p2, line 52: why is $\hat{p}$ necessarily different from the true probability? \blue{We have moderated and clarified these statements in the paper. The estimates can be perfect in scenarios with artificial randomness generators (Section~\ref{sec:def:estimates}), but in the domain of sports betting we consider, the true probability is never known, and so this case is of purely theoretical interest.}
\item p3, line 32: why do you need to express the inverse values like this? Is it not simpler to just write $\frac{1}{o_i}$? \blue{We made corrections to clarify that we meant to specify the whole distribution $P_b$, which is later used in the equation below.}
\item p3, equation 2.11: typo - $r_j$ should be $o_j$ I think. \blue{You are right, corrected.}
\item p4, line 28: why are the estimates biased? They can be unbiased surely. \\
\blue{We have moderated the claims (this follows the same reasoning as for the perfect estimates 3 bullets above) -- since in sports betting the true probability distribution is principally unknown, the unbiased case is of purely theoretical interest. If necessary, one can also simply imagine that the bias is zero. The particular bias of the player here is simply part of the example assumptions.}
\item p10, line 34: should you reference the original Kelly paper. \blue{Corrected.}
\item p10, line 37: ``know'' should be ``known''. \blue{Corrected.}
\item p11, lines 14-16: I don't think the reader is ever given an indication of how unrealistic these assumptions are. Further, the experimental results, don't reveal how much these assumptions contribute to the lessening of the expected performance of the betting strategies. I think these points (and the impact of the assumptions) could be discussed in the conclusions of the paper. \blue{Knowing the true probability distribution is extremely unrealistic, as discussed above (and in the paper). Consequently in the experiments, the vanilla formal strategies often lead to ruin, as opposed to maximal profit. We extended the conclusion with discussion to further clarify this.}
\item p12, line 12: missing ``out'' after ``carried''. \blue{Corrected.}
\item p14, third bullet point should be ``$acc\_p$''. \blue{Corrected.}
\item p17, line 43: the tables are labelled in an odd order, and the figures are all 6.3. \blue{Apologies, corrected.}
\item p18, table 5: can the betting strategies be given more intuitive names. Even a description would help the reader. I found myself having to look at the previous table to get the descriptions. \blue{Unfortunately, there is not enough space in the table for a more detailed description. However, we tried our best in the naming and at least expanded the abbreviations -- the \textit{KellyDD} strategy has been renamed to \textit{KellyDrawdown} and \textit{KellyDR} to \textit{KellyRobust}.}
\item p20, line 53: ``degrees of freedom'' – can/should it be ``hyperparameters'' since ``degrees of freedom'' are not mentioned anywhere. \blue{Corrected.}
\end{itemize}
\subsection*{Guest Editor Comments to the Author:}
\textit{Both referees are positive for this work. Please revise your manuscript according to their comments and suggestions. Regarding Section 2, I would personally prefer to leave the details. May be trimming it a little bit might be the optimal solution.} \blue{Slightly trimmed.}
\subsection*{Editor comments}
\begin{enumerate}
\item Please use English spelling variations throughout. \blue{Corrected.}
\item Also, for continuity, consider adding citations to related work that has been published in this journal e.g.
\begin{enumerate}
\item Markowitz portfolio theory for soccer spread betting, Alistair D. Fitt (2009)
\item Kelly's fractional staking updated for betting exchanges, Edmund Noon, William J. Knottenbelt, Daniel Kuhn (2013)
\item Using statistics to detect match fixing in sport, David Forrest, Ian G McHale (2019)
\item Uses and limitations of mathematics in sport, John Haigh (2009)
\end{enumerate}
\blue{The referred papers have been reviewed and mostly added with additional related papers into the related works Section~\ref{sec:related}.}
\end{enumerate}
\newpage
\end{document}
\section{Introduction}
\label{sec;introduction}
Consider a self-adjoint second-order elliptic partial differential
equation (PDE) of the form
\begin{equation}
-\nabla\cdot(k(x,y)\nabla v)=f \quad \hbox{for}\ (x,y)\in\Omega,
\label{atv10}
\end{equation}
equipped with suitable boundary conditions. Here $k$ is a uniformly positive
and smooth function and $f$ is a given function in $L^2(\Omega)$. It is
well known that a straightforward discretization using the finite-element
method leads to a discrete system of the form
\begin{equation}
A_h v_h=f_h,
\label{atv11}
\end{equation}
where $h$ denotes the mesh parameter. In a similar manner, we can discretize
(\ref{atv10}) in the case of $k\equiv1$, corresponding to the
Poisson equation, and obtain a linear system of the form
\begin{equation}
L_h w_h = f_h.
\label{atv12}
\end{equation}
It is also known that, if $L_h^{-1}$ is used as a preconditioner for
$A_h$ in the process of solving (\ref{atv11}) numerically, then the
condition number of the preconditioned operator,
formally denoted by $L_h^{-1} A_h$, is bounded by
\begin{equation*}
\kappa \leq \frac{\sup_{(x,y)\in\Omega}k(x,y)}
{\inf_{(x,y)\in\Omega}k(x,y)}.
\end{equation*}
Furthermore, it is known that the number of conjugate gradient (CG)
iterations needed to solve the problem is $\mathcal{O} (\sqrt{\kappa})$,
and thus the number of iterations is independent of the mesh parameter
$h$ \citep[see, e.g.,][]{BAxe94}.
Even though the spectral condition number $\kappa$ of
$L_h^{-1} A_h$ is bounded independently of $h$, many CG iterations
may still be needed if the variations in $k$ are large. For
coefficients with jump discontinuities this problem has partly been
successfully solved by introducing various preconditioners. In such
cases the methods and analyses are commonly based on some sort of
clustering effect of the eigenvalues of the preconditioned operator
\citep[see][]{Cai99,Gra99}.
The aim of the present paper is to study this problem for equations
involving continuous coefficients with large variations. We will do
this by providing insight into the spectrum of the preconditioned
operator $L_h^{-1} A_h$.
Our main results can roughly be described as follows. Let
$\lambda_h=(\lambda_1,\lambda_2,\ldots)$ denote a vector
containing all of the eigenvalues of $L_h^{-1} A_h$ sorted in a
non-decreasing manner. These eigenvalues can be computed by solving
the generalized eigenvalue problem
\begin{equation}
A_h u_h = \lambda L_h u_h.
\label{atv121}
\end{equation}
Similarly, we let $\mu_h = (\mu_1,\mu_2,\ldots)$
denote a vector containing the values of the coefficient function
$k$ evaluated at the mesh points and sorted in a non-decreasing
manner. We will present numerical experiments indicating that
\begin{equation*}
\max_j|\lambda_j-\mu_j|= \mathcal{O}(h).
\end{equation*}
In the continuous case the generalized eigenvalue problem is to
find a non-trivial eigenfunction $u$ and an eigenvalue
$\lambda$ such that
\begin{equation}
\nabla\cdot(k(x,y)\nabla u)=\lambda\nabla^{2}u
\quad \hbox{for}\ (x,y)\in \Omega
\label{atv20}
\end{equation}
for a suitable set of boundary conditions. We will prove rigorously,
both for the associated operators defined on Sobolev spaces and
in terms of distribution theory, that all of the values of the coefficient
function $k$ are contained in the spectrum of the preconditioned
operator $L^{-1}A$.
The implication of this is that we obtain a deeper understanding of
the convergence properties of the CG method since the
convergence is completely determined by the spectrum of the
preconditioned operator
\citep[see, e.g.,][]{BStoe93}.
More precisely, if the coefficient $k$ is continuous and has large
variations then no clustering effect of the eigenvalues of
$L_h^{-1}A_h$ will occur---indicating that efficient methods for
such problems must be based on some other kind of property of the
involved equation. This means that optimal preconditioners are well
suited to obtaining $h$-independent bounds on the number of CG
iterations, but such methods will still generate a large number of
CG iterations for continuous coefficients $k$ that have large
variations. As mentioned above, the case of a piecewise constant
$k$ with large variations has been successfully solved.
For a general introduction to preconditioners and iterative methods
we refer to \citet{BAxe94}, \citet{Cha94} and \citet{BHac94}.
The use of fast solvers for the Laplace equation as
preconditioners for the CG method has been used for many years.
For a thorough discussion on how the eigenvalues of the operator
influence the convergence properties of the CG method we refer to
\citet{Axe86,Axe86b}.
Numerical schemes for equations of the form (\ref{atv10}) with
large jumps in the coefficients were studied by, for example,
\citet{Bra91}, \citet{Dry94b}, \citet{Dry94} and
\citet{Bebendorf03}.
The convergence properties of the CG method are also heavily
influenced by both the right-hand side $f$ of (\ref{atv10}) and the
start vector of the iteration. These issues were thoroughly
analysed by several scientists (see
\citet{Naiman97}, \citet{Cai99}, \citet{Naiman00} and
\citet{Beckermann01,Beckermann02} for further information).
To shed some light onto the generalized eigenvalue problem
(\ref{atv20}), a series of numerical experiments is presented in
Section \ref{sec;preliminary}. In Section \ref{sec;Sobolev} we
study the continuous eigenvalue problem for operators defined on
appropriate Sobolev spaces, and prove that, for any given point
$(x_0,y_0)\in\Omega$,
the value $k(x_0,y_0)$ of the coefficient function at this point
is an eigenvalue for the preconditioned operator, provided that $k$ is
continuous at $(x_0,y_0)$. Furthermore, Section \ref{sec;distribution}
contains an analysis of this problem in terms of distribution
theory, where also explicit formulas for the eigenfunctions are
presented. Some conclusions are discussed in Section{~\ref{sec5}}.
\section{Numerical experiments}
\label{sec;preliminary}
The purpose of this section is to present some numerical experiments
that indicate the properties of the preconditioned operator
$L_h^{-1}A_h$. These examples provide motivation for the rigorous
analysis presented in Sections \ref{sec;Sobolev} and
\ref{sec;distribution} below.
Consider the equation
\begin{equation}
-\nabla\cdot(k(x,y)\nabla v)=f \quad \hbox{for}\ (x,y)\in\Omega
=(0,1)\times(0,1),
\label{atv30}
\end{equation}
with boundary condition
\begin{equation*}
\frac{\partial v}{\partial n}=0 \quad \hbox{at}\ \partial \Omega,
\end{equation*}
where $n$ denotes a normal vector for the boundary of $\Omega$, and
we assume that $f$ satisfies the solvability condition
\begin{equation*}
\int_{\Omega} f \, \mathrm{d}x = 0
\end{equation*}
(see, for example, \citet{BMar86} for further details).
We discretize this problem in a standard manner using linear elements
on a uniform mesh
\citep[see, e.g.,][]{BLan99}.
The discretization leads to a linear system of the form
\begin{equation}
A_h v_h = f_h.
\label{atv31}
\end{equation}
We also discretize the problem in the case of $k\equiv1$, and
obtain a linear system of the form
\begin{equation}
L_h w_h = f_h.
\label{atv32}
\end{equation}
Our aim is to compute the eigenvalues of the preconditioned operator
$L_h^{-1} A_h$, i.e., we want to find the eigenvalues of the generalized
eigenvalue problem
\begin{equation}
A_h u_h = \lambda L_h u_h.
\label{atv33}
\end{equation}
This is done by generating the matrices $A_h$ and $L_h$ using Diffpack
\citep[see][]{BLan99}.
The matrices\footnote{It should be noted that, due to the boundary
conditions, both matrices are singular. If a solution to the
boundary-value problem is to be computed then it is common to
add the following additional constraint:
\begin{equation*}
\int_{\Omega}v \, \mathrm{d}x=0.
\end{equation*}}
are stored and entered into Matlab, which
is used to solve the problem (\ref{atv33}).
In our numerical experiments we use three versions of the coefficient
function $k$:
\begin{align*}
k_1(x,y) & = 1+x+y,\\
k_2(x,y) & = 2+\sin(2\pi\, \mathrm{e}^x \cos(yx)), \\
k_3(x,y) & = 1+50\, \mathrm{e}^{-50[(x-0.5)^2+(y-0.5)^2]}.
\end{align*}
As above, we let $\lambda_h = (\lambda_1,\lambda_2,\ldots)$
denote a vector containing all of the eigenvalues of
$L_h^{-1}A_h$, i.e., of (\ref{atv33}), sorted in a non-decreasing
manner. Similarly, we let $\mu_h = (\mu_1,\mu_2,\ldots)$ denote
a vector containing the values of the coefficient function $k$
evaluated at the mesh points and sorted in a non-decreasing manner.
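To give a concrete impression of this comparison, the following simplified Python sketch replaces the finite-element discretization used here (assembled with Diffpack and analysed in Matlab) by a standard five-point finite-difference approximation with homogeneous Dirichlet boundary conditions. It assembles $A_h$ and $L_h$ for $k=k_1$, solves the generalized eigenvalue problem and compares the sorted eigenvalues with the sorted values of $k_1$ at the interior grid points. It is only an illustrative stand-in under these modified assumptions, not the setup used for the experiments reported below.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def assemble(k_fun, N):
    """Five-point stencil for -div(k grad v) on an N x N interior grid."""
    h = 1.0 / (N + 1)
    x = np.linspace(h, 1.0 - h, N)
    idx = lambda i, j: i * N + j
    A = np.zeros((N * N, N * N))
    for i in range(N):
        for j in range(N):
            for (di, dj) in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
                # coefficient evaluated at the midpoint of the adjacent edge
                km = k_fun(x[i] + 0.5 * di * h, x[j] + 0.5 * dj * h)
                A[idx(i, j), idx(i, j)] += km
                if 0 <= i + di < N and 0 <= j + dj < N:
                    A[idx(i, j), idx(i + di, j + dj)] -= km
    return A / h**2, x

k1 = lambda x, y: 1.0 + x + y
N = 19                                       # corresponds to h = 0.05
A, x = assemble(k1, N)
L, _ = assemble(lambda x, y: 1.0, N)

lam = np.sort(eigh(A, L, eigvals_only=True))         # eigenvalues of L_h^{-1} A_h
mu = np.sort([k1(xi, yj) for xi in x for yj in x])   # k_1 at the grid points
print(np.max(np.abs(lam - mu)))
\end{verbatim}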
In Figs \ref{fig:1} and \ref{fig:2} we plot the
eigenvalues $\lambda_h$ and the function values $\mu_h$ in the
case of $k=k_1$ using $h=0.2$ and $h=0.05$, i.e., we have $n=36$ and
$n=441$ unknowns. We note that the graphs are quite similar. Similar
plots for $k=k_2$ and $k=k_3$ are presented in Figs
\ref{fig:3}--\ref{fig:6}.
\begin{figure}[t!]
\centering\includegraphics[scale=1.04]{fig1.eps}
\caption{The sorted array of eigenvalues
(dashed--dotted line) and of the coefficient function $k$ evaluated
at the grid points (solid line). In this case $k=k_1$ and the
mesh size is $h=0.2$.}
\label{fig:1}
\end{figure}
\begin{figure}[!t]
\centering\includegraphics[scale=1.04]{fig2.eps}
\caption{The sorted array of eigenvalues
(dashed--dotted line) and of the coefficient function $k$ evaluated
at the grid points (solid line). In this case $k=k_1$ and the
mesh size is $h=0.05$.}
\label{fig:2}
\end{figure}
\begin{figure}[!t]
\centering\includegraphics[scale=1.04]{fig3.eps}
\caption{The sorted array of eigenvalues
(dashed--dotted line) and of the coefficient function $k$ evaluated
at the grid points (solid line). In this case $k=k_2$ and the
mesh size is $h=0.2$.}
\label{fig:3}
\end{figure}
\begin{figure}[!t]
\centering\includegraphics[scale=1.04]{fig4.eps}
\caption{The sorted array of eigenvalues
(dashed--dotted line) and of the coefficient function $k$ evaluated
at the grid points (solid line). In this case $k=k_2$ and the
mesh size is $h=0.05$.}
\label{fig:4}
\end{figure}
\begin{figure}[!t]
\centering\includegraphics[scale=1.04]{fig5.eps}
\caption{The sorted array of eigenvalues
(dashed--dotted line) and of the coefficient function $k$ evaluated
at the grid points (solid line). In this case $k=k_3$ and the
mesh size is $h=0.1$.}
\label{fig:5}
\end{figure}
\begin{figure}[!t]
\centering\includegraphics[scale=1.04]{fig6.eps}
\caption{The sorted array of eigenvalues
(dashed--dotted line) and of the coefficient function $k$ evaluated
at the grid points (solid line). In this case $k=k_3$ and the
mesh size is $h=0.05$.}
\label{fig:6}
\end{figure}
The numerical results are summarized in Tables
\ref{table1}--\ref{table3}, where we show the behaviour of
$\max_j|\lambda_j-\mu_j|$ for various values of $h$. Clearly,
these tables indicate that $\mu_h$ is an $\mathcal{O}(h)$-approximation of
$\lambda_h$. The rate of convergence is computed by comparing the
results obtained for two successive values of $h$ and by assuming that
the difference $\max_j|\lambda_j-\mu_j|$ has the form
$c h^{\alpha}$, where $c$ is a constant and $\alpha$ is the rate.
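Explicitly, writing $e(h) = \max_j|\lambda_j-\mu_j|$ for the error obtained with
mesh width $h$, the rate for two successive mesh widths $h_1 > h_2$ is
\begin{equation*}
\alpha = \frac{\log \bigl( e(h_1)/e(h_2) \bigr)}{\log (h_1/h_2)};
\end{equation*}
for instance, the first two rows of Table \ref{table1} give
$\alpha = \log(0.1333/0.0667)/\log(0.2/0.1) \approx 0.9989$.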
(Ideally, for $k=k_3$ further experiments on finer meshes, i.e., with
$h<0.0125$, should be performed. However, the Matlab algorithms
for computing the eigenvalues of $L_h^{-1}A_h$ are very CPU and memory
demanding, and such investigations were therefore not possible, within
reasonable time limits, on our computers.)
We have not succeeded in showing rigorously that $\mu_h$ defines an
$\mathcal{O}(h)$-approximation of $\lambda_h$---this is, as far as we know,
still an open problem. However, in the continuous case, for operators
defined on appropriate Sobolev spaces, we have proven that
$k(x_0,y_0)$ is an eigenvalue for the preconditioned operator,
provided that $k$ is continuous at $(x_0,y_0)$. This issue is
treated in detail in Section \ref{sec;Sobolev}.
\begin{table}[t!]
\tblcaption{The numerical results obtained for the
generalized eigenvalue problem {\rm(\ref{atv33})} with
$k(x,y) = k_1(x,y) = 1+x+y$. In this table, as
well as in Tables {\rm\ref{table2}} and {\rm\ref{table3}},
$\lambda_j$, $\mu_j$, $h$ and $n$ represent the eigenvalue,
the value of the coefficient function evaluated at the mesh
point, the mesh width and the number of unknowns in the
discrete eigenvalue problem, respectively}
{%
\begin{tabular}{@{}ccccc@{}}
\tblhead{$h$ & $n$ & $\max_j|\lambda_j-\mu_j|$ &
$\max_j|\lambda_j -\mu_j| / h$ & Rate}
0.2\phzzz & \phzz36 & 0.1333 & 0.6667 & -- \\
0.1\phzzz & \phz121 & 0.0667 & 0.6667 & 0.9989 \\
0.05\phzz & \phz441 & 0.0333 & 0.6667 & 1.0022 \\
0.025\phz & 1681 & 0.0167 & 0.6667 & 0.9957 \\
0.0125& 6561 & 0.0083 & 0.6667 & 1.0087
\lastline
\end{tabular}
}
\label{table1}
\end{table}
\begin{table}[t!]
\tblcaption{The numerical results obtained for the
generalized eigenvalue problem {\rm(\ref{atv33})} with
$k(x,y) = k_2(x,y) = 2+\sin(2\pi\,\mathrm{e}^x \cos(yx))$}
{%
\begin{tabular}{@{}ccccc@{}}
\tblhead{$h$ & $n$ & $\max_{j}|\lambda_{j}-\mu_{j}|$ &
$\max_j|\lambda_j -\mu_j| / h$ & Rate}
0.2\phzzz & \phzz36 & 0.3847 & 1.9235 & -- \\
0.1\phzzz & \phz121 & 0.2009 & 2.0090 & 0.9373 \\
0.05\phzz & \phz441 & 0.1234 & 2.4672 & 0.7031 \\
0.025\phz & 1681 & 0.0633 & 2.5325 & 0.9631 \\
0.0125 & 6561 & 0.0317 & 2.5363 & 0.9977
\lastline
\end{tabular}
}
\label{table2}
\end{table}
\clearpage
\begin{table}[!t]
\tblcaption{The numerical results obtained for the
generalized eigenvalue problem {\rm(\ref{atv33})} with
$k(x,y) = k_3(x,y) = 1+50\,\mathrm{e}^{-50[(x-0.5)^2+(y-0.5)^2]}$}
{%
\begin{tabular}{@{}ccccc@{}}
\tblhead{$h$ & $n$ & $\max_j|\lambda_j-\mu_j|$ &
$\max_j|\lambda_j - \mu_j| / h$ & Rate}
0.2\phzzz & \phzz36 & 5.5235 & 27.6175 & -- \\
0.1\phzzz & \phz121 & 5.4286 & 54.2855 & 0.025\phz \\
0.05\phzz & \phz441 & 3.6823 & 73.6464 & 0.56\phzz \\
0.025\phz & 1681 & 2.3656 & 94.6229 & 0.6384 \\
0.0167 & 3721 & 1.1833 & 71.0001 & 1.7169 \\
0.0125 & 6561 & 0.8723 & 69.7820 & 1.0526
\lastline
\end{tabular}
}
\label{table3}
\end{table}
\section{Operators defined on Sobolev spaces}
\label{sec;Sobolev}
In this section we will study some properties of a generalized
eigenvalue problem of the following form: find a number
$\lambda$ and a function $u$ such that
\begin{equation}
\nabla \cdot (k \nabla u) = \lambda\Delta u
\quad \hbox{in}\ \Omega.
\label{A0}
\end{equation}
Our aim is to analyse this equation in terms of operators defined on
Sobolev spaces.
To this end, let us consider a prototypical elliptic boundary-value
problem of the form
\begin{equation}
\begin{aligned}
-\nabla \cdot (k \nabla v) &= f \quad \hbox{in}\ \Omega, \\[3pt]
v &= 0 \quad \hbox{on}\ \partial \Omega,
\end{aligned}
\label{A1}
\end{equation}
where $\Omega$, with boundary $\partial \Omega$, is some open Lipschitz
domain contained in a Euclidean space $\mathbb{R}^n$. In addition, $k$
is assumed to be a uniformly positive and bounded function defined on $\Omega$, i.e.,
\begin{gather}
k \in L^{\infty}(\Omega),
\label{A2} \\[3pt]
0<b \leq k(x) \leq B \quad \hbox{for all}\ x \in \Omega ,
\label{A3}
\end{gather}
where $b$ and $B$ are given positive constants.
Throughout this section we will, for ease of notation,
consider elliptic PDEs with homogeneous Dirichlet boundary conditions
(cf. (\ref{A1})). However, it is not difficult to modify the arguments
presented below to also cover cases involving Neumann conditions.
\subsection{Notation}
\label{sec3.1}
Let $H^1_0(\Omega)$, with inner product $(\cdot,\cdot)_1$ and norm
$\Vert {\cdot}\Vert_{H^1(\Omega)}$, denote the classical Sobolev
space of functions defined on $\Omega$ with zero trace at
$\partial \Omega$. According to the Riesz representation theorem,
there exist linear operators $A,L \in \mathcal{L} (H^1_0(\Omega))$
such that
\begin{align}
(A \varphi, \psi)_1 & = \int_\Omega \nabla \psi \cdot
(k \nabla \varphi) \, \mathrm{d}x
\quad \hbox{for all}\ \psi, \varphi \in H^1_0(\Omega) ,
\label{B0} \\[4pt]
(L \varphi, \psi)_1 & = \int_\Omega \nabla \psi \cdot
\nabla \varphi \, \mathrm{d}x
\quad \hbox{for all}\ \psi, \varphi \in H^1_0(\Omega) .
\label{B1}
\end{align}
With this notation at hand, the weak form of (\ref{A1}) may be written
in the following form: find $v \in H^1_0(\Omega)$ such that
\begin{equation*}
(A v, \psi)_1 = \int_{\Omega} f \psi \, \mathrm{d}x
\quad \hbox{for all}\ \psi \in H^1_0(\Omega).
\end{equation*}
Furthermore, $L$ represents the (`weak') Laplacian defined on $H^1_0(\Omega)$.
\subsection{The analysis}
\label{sec3.2}
Inspired by our findings in Section \ref{sec;preliminary}, we will
now analyse the spectrum\footnote{Here
$I:\ H^1_0(\Omega) \rightarrow H^1_0(\Omega)$ denotes the identity
operator, i.e.,
\begin{equation*}
I \psi = \psi \quad \hbox{for all}\ \psi \in H^1_0(\Omega).
\end{equation*}}
\begin{equation*}
\mathrm{sp}(L^{-1}A) = \{\lambda \in \mathbb{C};\
(\lambda I-L^{-1}A) \hbox{ is not invertible} \}
\end{equation*}
of the operator
\begin{equation*}
L^{-1}A:\ H^1_0(\Omega) \rightarrow H^1_0(\Omega).
\end{equation*}
For the sake of convenience, let $K$ denote the set of points in
$\Omega$ at which the coefficient function $k$ is continuous, i.e.,
\begin{equation}
K=\{ x \in \Omega;\ k \hbox{ is continuous at}\ x \}.
\label{new1}
\end{equation}
Our main result can now be formulated as follows.
\begin{theorem}
\label{theorem1}
Let $A$ and $L$ be the operators defined in (\ref{B0})
and (\ref{B1}).
\begin{NumberedListAlpha}
\item
If (\ref{A2}) and (\ref{A3}) hold then
\begin{equation*}
k(x) \in \mathrm{sp}(L^{-1}A) \quad \hbox{for all}\ x \in K,
\end{equation*}
where $K$ is the set of points defined in (\ref{new1}).
\item
In particular, if (\ref{A3}) holds and $k$ is continuous
throughout $\Omega$, i.e., $k \in C(\Omega)$, then
\begin{equation*}
k(x) \in \mathrm{sp}(L^{-1}A) \quad \hbox{for all}\ x \in \Omega.
\end{equation*}
\end{NumberedListAlpha}
\end{theorem}
\begin{proof}
Let $\tilde x \in K$ be arbitrary and assume that $\tilde x$ is
such that
$\tilde \lambda = k(\tilde x) \not\in \mathrm{sp}(L^{-1}A)$.
Clearly, there exists a set of functions
$\{ v_r \}_{r \in \mathbb{R}_{+}}$
satisfying\footnote{Note that no limit of $v_r$ as $r \rightarrow 0$
is needed in this proof. Only the existence of a set of functions
satisfying (\ref{C0.01}) and (\ref{C0.1}) is required.}
\begin{gather}
\mathrm{supp}(v_r) \subset \tilde x + U_r,
\label{C0.01}\\[3pt]
\parallel\! v_r \!\parallel_{H^1(\Omega)} = 1 ,
\label{C0.1}
\end{gather}
where
\begin{equation*}
U_r = \{ {\bf z} \in \mathbb{R}^n;\ |{\bf z}| \leq r \}.
\end{equation*}
Let
\begin{equation}
u_r = (\tilde \lambda I-L^{-1}A) v_r
\quad \hbox{for}\ r \in \mathbb{R}_+.
\label{C1}
\end{equation}
Since $\tilde \lambda \not \in \mathrm{sp}(L^{-1}A)$, it
follows that $(\tilde \lambda I-L^{-1}A)$ is invertible, and we
find that
\begin{equation}
\parallel\! v_r \!\parallel_{H^1(\Omega)}
=
\parallel\! (\tilde \lambda I-L^{-1}A)^{-1} u_{r}
\!\parallel_{H^1(\Omega)}
\leq \| (\tilde \lambda I-L^{-1}A)^{-1} \|
\parallel\! u_r \!\parallel_{H^1(\Omega)} .
\label{C2}
\end{equation}
By (\ref{C1}), we have
\begin{equation*}
u_r = \tilde \lambda I v_r - L^{-1} A v_r
\end{equation*}
or
\begin{equation*}
L u_r = \tilde \lambda L v_r - A v_r .
\end{equation*}
Hence it follows that
\begin{equation*}
(L u_r,u_r)_1 = \tilde \lambda (L v_r,u_r)_1 - (A v_r,u_r)_1 ,
\end{equation*}
and then, from the definition of $L$ and $A$ and
by the fact that $\mathrm{supp} (v_r) \subset \tilde x + U_r$,
we find that
\begin{equation*}
\int_{\Omega} \nabla u_r \cdot \nabla u_r \, \mathrm{d}x =
\tilde \lambda \int_{\tilde x + U_r} \nabla u_r \cdot
\nabla v_r \, \mathrm{d}x -
\int_{\tilde x + U_r} \nabla u_r \cdot (k \nabla v_r) \, \mathrm{d}x
\end{equation*}
or
\begin{equation*}
\int_{\Omega} |\nabla u_{r}|^2 \, \mathrm{d}x =
\int_{\tilde x + U_r} \nabla u_r \cdot ([\tilde \lambda-k]
\nabla v_r) \, \mathrm{d}x .
\end{equation*}
Next, by the Cauchy--Schwarz inequality, we have
\begin{equation*}
\int_{\Omega} |\nabla u_{r}|^{2} \, \mathrm{d}x \leq
\left(\int_{\Omega} |\nabla u_{r}|^{2} \, \mathrm{d}x \right)^{1/2}
\left(\int_{\tilde x + U_r} (\tilde \lambda-k)^{2}
|\nabla v_{r}|^2 \, \mathrm{d}x \right)^{1/2} ,
\end{equation*}
and, consequently,
\begin{equation}
\left( \int_{\Omega} |\nabla u_r|^{2} \,
\mathrm{d}x \right)^{1/2} \leq
\mathrm{ess} \sup_{x \in \tilde x + U_r} |\tilde \lambda - k(x)|
\parallel\! v_r \!\parallel_{H^1(\Omega)}
= \mathrm{ess} \sup_{x \in \tilde x + U_r} |k(\tilde x) - k(x)| ,
\label{C3}
\end{equation}
where the last equality follows from (\ref{C0.1}).
Since $k$ is continuous at $\tilde x$, it follows that
\begin{equation*}
\lim_{r \rightarrow 0} \mathrm{ess}
\sup_{x \in \tilde x + U_r} |k(\tilde x) - k(x)| = 0.
\end{equation*}
From (\ref{C3}) and Poincar\'e's inequality we thus conclude that
there exists a constant $r^\ast \in \mathbb{R}_+$ such that
\begin{equation}
\parallel\! u_r \!\parallel_{H^1(\Omega)} <
\frac{1}{2 \| (\tilde \lambda I-L^{-1}A)^{-1} \|}
\quad \hbox{for all}\ r \in (0,r^\ast).
\label{new2}
\end{equation}
Finally, (\ref{C2}) and (\ref{new2}) imply that
\begin{equation*}
\parallel\! v_r \!\parallel_{H^1(\Omega)} < \frac{1}{2}
\quad \hbox{for all}\ r \in (0,r^\ast),
\end{equation*}
which is a contradiction to (\ref{C0.1}). Hence we conclude that
$k(\tilde x)$ must satisfy $k(\tilde x) \in \mathrm{sp}(L^{-1}A)$.
This completes the proof of part (a) of the theorem. Part (b) is a
trivial consequence of part (a).
\end{proof}
If $k$ is continuous, we thus conclude that the range of $k$ is indeed
contained in the spectrum of $L^{-1}A$, which is in agreement with the
results of our numerical experiments. Moreover, for
problems involving discontinuous coefficient functions, i.e.,
$k \notin C(\Omega)$, we can still conclude that $k(x)$ is an eigenvalue
for the preconditioned operator $L^{-1}A$ at every point $x$ at which
$k$ is continuous.
As mentioned in Section \ref{sec;introduction}, for elliptic equations
with coefficients with large jump discontinuities, high-quality
preconditioners can sometimes be constructed due to some clustering
effect of the eigenvalues. In the case of continuous coefficients our
investigations indicate that such an effect is not likely to occur. In
particular, if the inverse Laplacian is used as a preconditioner then
the eigenvalues will not cluster.
The proof of Theorem \ref{theorem1} is not constructive: it does not provide
formulas for the eigenfunctions. To shed further light
onto the generalized eigenvalue problem (\ref{A0}) we will now consider
it from a distributional point of view.
\section{Generalized eigenfunctions and eigenvalues}
\label{sec;distribution}
As mentioned above, our aim is to study the eigenvalue problem
\begin{equation}
\nabla \cdot (k \nabla u) = \lambda \Delta u
\label{P0}
\end{equation}
in terms of distribution theory. More precisely, we will not only
prove that $\lambda=k(x,y)$, for all $(x,y) \in \Omega$, are
eigenvalues but also present explicit formulas for the associated
generalized eigenfunctions. As in Section \ref{sec;Sobolev}, we will
assume that $\Omega$ is an open Lipschitz domain.
\subsection{Preliminaries and notation}
\label{sec4.1}
In our analysis of this eigenvalue problem the classical mollifier
functions
\citep[see, e.g.,][]{BMar86}
will play an important role. They are defined in terms of a non-negative and
symmetric function $\omega \in C^{\infty}(\mathbb{R})$
satisfying\footnote{The standard example of such a function is
\begin{equation*}
\omega(x)=
\begin{cases}
c \exp{((x^2-1)^{-1})} & \hbox{for}\ x \in (-1,1),\\[3pt]
0 & \hbox{for}\ |x| \geq 1 ,
\end{cases}
\end{equation*}
where $c^{-1}=\int_{-1}^1 \exp{((x^2-1)^{-1})} \, \mathrm{d}x$.}
\begin{gather*}
\int_{\mathbb{R}} \omega(x) \, \mathrm{d}x =1, \\[2pt]
\omega(x) \equiv 0 \quad \hbox{for}\ |x| \geq 1.
\end{gather*}
A family $\{ \omega_\epsilon \}_{\epsilon \in \mathbb{R}_+}$
of mollifier functions is now defined by setting
\begin{equation*}
\omega_\epsilon(x) = \frac{1}{\epsilon} \omega
\left( \frac{x}{\epsilon} \right).
\end{equation*}
Clearly, these functions possess the following properties:
\begin{gather}
\omega_\epsilon \in C^\infty, \quad \omega_\epsilon \geq 0,
\label{P1} \\[3pt]
\omega_\epsilon(x) = \omega_\epsilon(-x),
\label{P2} \\[3pt]
\int_{\mathbb{R}} \omega_\epsilon(x) \, \mathrm{d}x =1,
\label{P3} \\[3pt]
\omega_\epsilon(x) \equiv 0 \quad \hbox{for}\ |x| \geq \epsilon,
\label{P4} \\[3pt]
\omega_\epsilon(x) \leq \frac{M}{\epsilon} \quad \hbox{and} \quad
|\omega'_\epsilon(x)| \leq \frac{M}{\epsilon^2},
\label{P5}
\end{gather}
where $M$ is a positive constant that is independent of $\epsilon$.
Next we define a family
$\{ H^\epsilon \}_{\epsilon \in \mathbb{R}_+}$
of approximate Heaviside functions
\begin{equation*}
H^\epsilon(x) = \int_{-\infty}^x \omega_\epsilon(y) \, \mathrm{d}y.
\end{equation*}
Note that
\begin{gather}
0 \leq H^\epsilon(x) \leq 1\quad \hbox{for all}\ x \in \mathbb{R},
\label{P6}\\[3pt]
H^\epsilon(x) \equiv 0 \quad \hbox{for}\ x \leq -\epsilon,
\qquad H^\epsilon(x) \equiv 1 \quad \hbox{for}\ x \geq \epsilon,
\label{P6.1}\\[3pt]
(H^\epsilon)'(x) = \omega_\epsilon(x).
\label{P7}
\end{gather}
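The following short Python sketch (our illustration only) constructs the
standard mollifier from the footnote together with the derived approximate
Heaviside function and checks the properties (\ref{P3}) and (\ref{P6.1})
numerically.
\begin{verbatim}
import math
from scipy.integrate import quad

# Standard mollifier from the footnote and the derived approximate Heaviside
# function; a numerical sanity check of (P3) and (P6.1). Illustration only.
def omega(x):
    # unnormalised bump function, identically zero outside (-1, 1)
    return math.exp(1.0 / (x * x - 1.0)) if abs(x) < 1.0 else 0.0

c = 1.0 / quad(omega, -1.0, 1.0)[0]     # normalising constant from the footnote

def omega_eps(x, eps):
    return (c / eps) * omega(x / eps)   # omega_eps(x) = omega(x / eps) / eps

def H_eps(x, eps):
    # approximate Heaviside: integral of omega_eps over (-infinity, x]
    return 0.0 if x <= -eps else quad(lambda y: omega_eps(y, eps), -eps, x)[0]

eps = 0.1
print(quad(lambda y: omega_eps(y, eps), -eps, eps)[0])  # ~1.0, property (P3)
print(H_eps(-2 * eps, eps), H_eps(2 * eps, eps))        # 0.0 and ~1.0, (P6.1)
\end{verbatim}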
We have not been able to characterize the (generalized)
eigenfunctions and eigenvalues satisfying (\ref{P0}) for
all smooth coefficient functions $k$. However, we have been able
to do so for a fairly large class of coefficient functions that
we now describe. To this end, we define the following family $Q$
of smooth and uniformly positive functions defined on
$\overline \Omega$:
\begin{equation*}
Q = \{k \in C^\infty (\overline \Omega );\ \exists\, m \in
\mathbb{R}_+ \hbox{ such that } m \leq k(x,y)
\hbox{ for all } (x,y) \in \overline \Omega \} .
\end{equation*}
It turns out that the generalized eigenfunctions satisfying (\ref{P0})
are characterized by the regions in the domain $\Omega$ on which
the coefficient function $k$ is constant, i.e., by the contour curves
of $k$. Therefore, for each $k \in Q$ and $(x_0,y_0) \in \Omega$, we
introduce the set
\begin{equation}
S(k,(x_0,y_0))= \{(x,y) \in \Omega ;\ k(x,y)=k(x_0,y_0)\}.
\label{P7.5}
\end{equation}
With this notation at hand, we are ready to define the set $K$ of
coefficient functions for which we are able to provide a
detailed mathematical analysis of the eigenvalue problem
(\ref{P0}) as follows:
\begin{equation}
\begin{aligned}
K & = \{ k \in Q ; \hbox{ for all } (x,y) \in \Omega,
|S(k,(x,y))|=0 \hbox{ or } S(k,(x,y)) \\
& \qquad \hbox{ contains at least one open and connected subset }
G \hbox{ with } |G| > 0 \}.
\end{aligned}
\label{P8}
\end{equation}
Here $|S(k,(x,y))|$ and $|G|$ denote the measures of the respective
sets. Roughly speaking, $K$ consists of those smooth functions that
are `well behaved' in a measure-theoretical sense.
The need for these assumptions on the coefficient function will
become apparent in the analysis below.
Clearly, the distributional form of the eigenvalue problem (\ref{P0})
can be written in the following form: find a number $\lambda$ and
a (possibly generalized) function $u$ such that
\begin{equation}
\int_\Omega k \nabla u \cdot \nabla \phi \, \mathrm{d} \Omega =
\lambda \int_\Omega \nabla u \cdot \nabla \phi \, \mathrm{d} \Omega
\quad \hbox{for all}\ \phi \in C^\infty_0(\Omega) ,
\label{P9}
\end{equation}
where $C^\infty_0(\Omega)$ denotes the set of test functions, i.e., the
set\footnote{$C^{\infty}_0(\Omega)=\{\psi \in C^{\infty}(\Omega);\
\exists\,\hbox{a compact set }K \subset \Omega \hbox{ such that }
\{x;\ \psi(x) \neq 0 \} \subset K \}$.}
of smooth functions with compact support in $\Omega$.
This means that
\begin{equation}
\int_\Omega (k-\lambda) [u_x \phi_x + u_y \phi_y] \, \mathrm{d} \Omega
= 0 \quad \hbox{for all}\ \phi \in C^\infty_0(\Omega) .
\label{P10}
\end{equation}
In the analysis below we will use the form (\ref{P10}) of the
generalized eigenvalue problem. The analysis is divided, by certain
properties of the coefficient function $k$, into three different cases.
\subsection{The analysis}
\label{sec4.2}
\subsubsection{Case I}
\label{sec4.2.1}
Let $k \in K$ and assume that $(x_0,y_0) \in \Omega$ is a point such that
\begin{gather}
k_x(x_0,y_0) \neq 0 \quad \hbox{or} \quad k_y(x_0,y_0) \neq 0,
\label{P10.1}\\[3pt]
|S(k,(x_0,y_0))| =0.
\label{P10.2}
\end{gather}
We will now study the sequence of functions
\begin{equation*}
H^{\epsilon}(k(x,y)-k_0)
\end{equation*}
for $\epsilon > 0$, where $k_0=k(x_0,y_0)$. More precisely, we will show
that $k_0$ is an eigenvalue with associated eigenfunction $H(k(x,y)-k_0)$,
where $H$ denotes the Heaviside function
\begin{equation}
H(z) =
\begin{cases}
0 & \hbox{for } z \leq 0, \\[2pt]
1 & \hbox{for } z > 0.
\end{cases}
\label{P10.3}
\end{equation}
Motivated by the discussion of the numerical experiments above
and the form (\ref{P10}) of the generalized eigenvalue problem, we
will, for an arbitrary test function $\phi \in C^\infty_0(\Omega)$,
study the integral
\begin{align}
I_\epsilon &= \int_\Omega (k-k_0) [H^\epsilon_x(k-k_0) \phi_x
+ H^\epsilon_y(k-k_0) \phi_y] \, \mathrm{d} \Omega \notag\\[3pt]
&= \int_\Omega (k-k_0) [k_x \omega_\epsilon (k-k_0) \phi_x +
k_y \omega_\epsilon (k-k_0) \phi_y] \, \mathrm{d} \Omega.
\label{P11}
\end{align}
If we apply the property (\ref{P4}) of the mollifier and define
\begin{equation}
S_{\epsilon}= \{(x,y) \in \Omega;\ |k(x,y)-k(x_0,y_0)| < \epsilon \}
\label{P12}
\end{equation}
then we find that
\begin{equation*}
I_\epsilon = \int_{S_\epsilon}
(k-k_0) [k_x \omega_\epsilon(k-k_0) \phi_x +
k_y \omega_\epsilon (k-k_0) \phi_y] \, \mathrm{d} \Omega.
\end{equation*}
Next, since $k_x, k_y, \phi_x, \phi_y \in L^{\infty}(\Omega)$,
property (\ref{P5}) of the mollifier function $\omega_\epsilon$ and
the triangle inequality imply the existence of a positive constant
$c_1$, independent of $\epsilon$, such that
\begin{align*}
|I_\epsilon| & \leq \int_{S_\epsilon}
|k-k_0| [|k_x \omega_\epsilon(k-k_0) \phi_x| +
|k_y \omega_\epsilon(k-k_0) \phi_y|] \, \mathrm{d} \Omega \\[3pt]
& \leq \int_{S_\epsilon} |k-k_0| [
\parallel\! k_x \!\parallel_\infty
\parallel\! \phi_x \!\parallel_\infty
M \epsilon^{-1} +
\parallel\! k_y \!\parallel_\infty
\parallel\! \phi_y \!\parallel_\infty
M \epsilon^{-1}] \, \mathrm{d} \Omega \\[3pt]
& \leq c_1 \int_{S_{\epsilon}}
|k-k_0| \epsilon^{-1} \, \mathrm{d} \Omega.
\end{align*}
However, on the set $S_\epsilon$ (see (\ref{P12}))
we have $|k-k_0| < \epsilon$ and we therefore conclude that
\begin{equation}
|I_\epsilon| \leq c_1 \int_{S_\epsilon} 1 \, \mathrm{d} \Omega =
c_1 |S_\epsilon|,
\label{P13}
\end{equation}
where $|S_\epsilon|$ denotes the measure of $S_\epsilon$.
Recall the definitions (\ref{P7.5}) and (\ref{P12}) of $S(k,(x_0,y_0))$
and $S_\epsilon$, respectively. Clearly,
\begin{equation*}
\bigcap_{\epsilon > 0} S_\epsilon = S(k,(x_0,y_0)),
\end{equation*}
and it is therefore natural to ask if the measure of $S_\epsilon$
converges toward the measure of $S(k,(x_0,y_0))$ as
$\epsilon \rightarrow 0$. This question is treated in detail in
Appendix A and the answer to it is affirmative
(see Lemma \ref{lemma1}).
Thus assumption (\ref{P10.2}) implies that
\begin{equation*}
\lim_{\epsilon \rightarrow 0} |S_\epsilon| =0,
\end{equation*}
and we conclude that
\begin{equation}
\lim_{\epsilon \rightarrow 0} |I_\epsilon| =0.
\label{P14}
\end{equation}
Having established that the integral defined in (\ref{P11}) tends
toward zero as $\epsilon \rightarrow 0$, we must now check whether
or not the sequence of functions
$\{ H^{\epsilon}(k-k_0) \}_{\epsilon \in \mathbb{R}_+}$ has a well-defined,
in the distributional sense, limit as $\epsilon \rightarrow 0$. More
precisely, we will show, as expected, that $H(k-k_0)$ is the limit.
To this end, consider for an arbitrary test function
$\phi \in C^{\infty}_0$ the integral
\begin{align*}
D_\epsilon &= \int_\Omega H^\epsilon (k-k_0) \phi \, \mathrm{d} \Omega
- \int_\Omega H(k-k_0) \phi \, \mathrm{d} \Omega \\[3pt]
&= \int_\Omega [H^\epsilon (k-k_0)- H(k-k_0)] \phi \, \mathrm{d} \Omega.
\end{align*}
From the property (\ref{P6.1}) of the approximate Heaviside function
$H^\epsilon$ we find that
\begin{equation*}
H^\epsilon (k-k_0) = H(k-k_0) \quad \hbox{for all}\ (x,y)
\hbox{ such that }|k(x,y)-k_0| \geq \epsilon.
\end{equation*}
Hence
\begin{equation*}
D_\epsilon = \int_{S_\epsilon}
[H^\epsilon (k-k_0)- H(k-k_0)] \phi \, \mathrm{d} \Omega,
\end{equation*}
and then the property (\ref{P6}) and the definition (\ref{P10.3}) of
the Heaviside function imply that
\begin{equation*}
|D_\epsilon| \leq
\parallel\! \phi \!\parallel_\infty
\int_{S_{\epsilon}} \, \mathrm{d} \Omega =
\parallel\! \phi \!\parallel_\infty |S_\epsilon| .
\end{equation*}
As discussed above, $|S_\epsilon| \rightarrow 0$ as
$\epsilon \rightarrow 0$, and we conclude that
\begin{equation*}
\lim_{\epsilon \rightarrow 0}H^\epsilon (k-k_0) = H(k-k_0)
\end{equation*}
in the distributional sense. From standard theory for the derivatives of
distributions
\citep[see][]{BGri81}
it follows that
\begin{equation*}
\lim_{\epsilon \rightarrow 0}H^\epsilon_x (k-k_0)
= H_x(k-k_0),
\quad \lim_{\epsilon \rightarrow 0}H^\epsilon_y (k-k_0)
= H_y(k-k_0).
\end{equation*}
Finally, by combining these convergence properties of the approximate
Heaviside functions and (\ref{P11}) and (\ref{P14}) we find that
\begin{equation*}
\int_\Omega k \nabla H(k-k_0) \cdot \nabla \phi \, \mathrm{d} \Omega =
k_0 \int_\Omega \nabla H(k-k_0) \cdot \nabla \phi \, \mathrm{d} \Omega
\quad \hbox{for all}\ \phi \in C^\infty_0 .
\end{equation*}
Note that (\ref{P10.1}) ensures that $H(k-k_0) \neq 0$, and thus,
if $k$ satisfies (\ref{P10.1}) and (\ref{P10.2}) then
$H(k(x,y)-k_0)$ is an eigenfunction, in the distributional sense, with
associated eigenvalue $k_0=k(x_0,y_0)$ for the generalized eigenvalue
problem (\ref{P0}).
The next question is, of course, what happens if either assumption
(\ref{P10.1}) or (\ref{P10.2}) fails to hold? This is the topic of
Sections \ref{sec4.2.2} and \ref{sec4.2.3}.
\subsubsection{Case II}
\label{sec4.2.2}
Let $k \in K$ and assume that $(x_0,y_0) \in \Omega$ is a point such that
\begin{equation}
k_x(x_0,y_0) = 0 \quad \hbox{and} \quad k_y(x_0,y_0) = 0.
\label{Pc1}
\end{equation}
In this case we will show that the Dirac delta function is an
eigenfunction, in the distributional sense, for the generalized
eigenvalue problem (\ref{P9}).
To this end, let $\delta$ denote the delta distribution associated
with the point $(x_0,y_0)$, i.e., $\delta$ denotes the linear functional
\begin{equation*}
\delta:\ C^{\infty}_0 \rightarrow \mathbb{R}
\end{equation*}
such that the action of applying $\delta$ to $\phi \in C^\infty_0$ is
given by
\begin{equation}
\langle \delta,\phi \rangle = \phi(x_0,y_0).
\label{Pc1.1}
\end{equation}
Note that we use the notation
\begin{equation*}
\langle \delta,\phi \rangle = \int_\Omega \delta \phi\,
\mathrm{d} \Omega = \phi(x_0,y_0).
\end{equation*}
Recall the form (\ref{P9}) of the eigenvalue problem. Let
$\phi \in C^\infty_0$ be an arbitrary test function and consider
the integral
\begin{equation*}
I_1 = \int_\Omega k \nabla \delta \cdot \nabla \phi\, \mathrm{d} \Omega =
\int_\Omega ( k \delta_x \phi_x + k \delta_y \phi_y ) \,
\mathrm{d} \Omega.
\end{equation*}
Now integration by parts implies that
\begin{align*}
I_1 &= - \int_\Omega (\delta [k_x \phi_x+k \phi_{xx}]
+ \delta [k_y \phi_y+k \phi_{yy}]) \, \mathrm{d} \Omega \\[3pt]
&= - [k_x(x_0,y_0) \phi_x(x_0,y_0)+k(x_0,y_0) \phi_{xx}(x_0,y_0)
+ k_y(x_0,y_0) \phi_y(x_0,y_0)+k(x_0,y_0) \phi_{yy}(x_0,y_0)] \\[3pt]
&= - [k(x_0,y_0) \phi_{xx}(x_0,y_0)+k(x_0,y_0) \phi_{yy}(x_0,y_0)],
\end{align*}
where the last equality follows from assumption (\ref{Pc1}).
On the other hand, by inserting $\lambda=k_0=k(x_0,y_0)$ and $u=\delta$
in the right-hand side of (\ref{P9}) we find that
\begin{align*}
I_2 &= k_0 \int_\Omega \nabla \delta \cdot \nabla \phi \,
\mathrm{d} \Omega
= k_0 \int_\Omega (\delta_x \phi_x + \delta_y \phi_y) \,
\mathrm{d} \Omega \\[3pt]
&= - k_0 \int_\Omega (\delta \phi_{xx} + \delta \phi_{yy}) \,
\mathrm{d} \Omega\\[3pt]
&= - k(x_0,y_0) [\phi_{xx}(x_0,y_0)+ \phi_{yy}(x_0,y_0)].
\end{align*}
Thus $I_1=I_2$ and it follows that the $\delta$-distribution associated
with the point $(x_0,y_0)$ is an `eigenfunction' with eigenvalue
$k(x_0,y_0)$, provided that (\ref{Pc1}) holds.
\subsubsection{Case III}
\label{sec4.2.3}
Let $k \in K$ and assume that $(x_0,y_0) \in \Omega$ is a point
such that
\begin{equation}
|S(k,(x_0,y_0))| > 0
\label{Pc2}
\end{equation}
(cf. (\ref{P7.5}) and (\ref{P8})).
This is, in fact, the simplest case. According to the definition
of $K$ in (\ref{P8}), $S(k,(x_0,y_0))$ contains an open and connected subset
$G$ with strictly positive measure, i.e., $|G|>0$. This ensures the
existence of nonzero functions whose support is contained in $G$.
Let $u$ be such a function, i.e., we assume that
\begin{equation}
\mathrm{supp}(u) \subset G.
\label{Pc3}
\end{equation}
Since $k(x,y) = k(x_0,y_0)$ on $G$, it follows that
\begin{align*}
\int_\Omega k \nabla u \cdot \nabla \phi\, \mathrm{d} \Omega &=
k(x_0,y_0) \int_G \nabla u \cdot \nabla \phi\,
\mathrm{d} \Omega \\[3pt]
&= k(x_0,y_0) \int_\Omega \nabla u \cdot \nabla \phi \,
\mathrm{d} \Omega,
\end{align*}
which should be compared with (\ref{P9}). Hence we conclude
that in this case every nonzero function satisfying (\ref{Pc3})
is a generalized eigenfunction with associated eigenvalue
$k(x_0,y_0)$.
\begin{theorem}
Let $k$ be a coefficient function in the set $K$ defined in (\ref{P8}).
For every $(x_0,y_0) \in \Omega$ there exists a generalized function
$u$ such that $\lambda=k(x_0,y_0)$ and $u$ forms an
eigenvalue--eigenfunction pair for the generalized eigenvalue
problem (\ref{P0}).
Furthermore, the following statements hold.
\begin{BulletedList}
\item
If conditions (\ref{P10.1}) and (\ref{P10.2}) hold then
\begin{equation*}
u=H(k-k_0)
\end{equation*}
is an eigenfunction with associated eigenvalue
$\lambda = k_0 = k(x_0,y_0)$. Here $H$ denotes the Heaviside
function (see (\ref{P10.3})).
\item
If condition (\ref{Pc1}) holds then
\begin{equation*}
u=\delta_{(x_0,y_0)}
\end{equation*}
is a generalized eigenfunction with associated eigenvalue
$\lambda = k_0 = k(x_0,y_0)$. Here $\delta_{(x_0,y_0)}$ is
the Dirac delta distribution associated with the point
$(x_0,y_0)$ (see (\ref{Pc1.1})).
\item
If condition (\ref{Pc2}) holds then any function $u$ satisfying
(\ref{Pc3}), where $G$ is the set defined in (\ref{P8}), is a solution
of the generalized eigenvalue problem (\ref{P0}) with associated
eigenvalue $\lambda = k_0 = k(x_0,y_0)$.
\end{BulletedList}
\end{theorem}
\section{Conclusions}
\label{sec5}
In this paper we have analysed the eigenvalues and eigenfunctions of
second-order elliptic PDEs preconditioned by the inverse of the
Laplacian. We have shown by numerical experiments and mathematical
analysis that there is a strong relation between the spectrum of
the preconditioned operator and the range of the coefficient
function $k$, provided that $k$ is smooth and satisfies certain
measure-theoretical properties.
More precisely, in the discrete case the spectrum seems to be
accurately approximated by the values of the coefficient function
evaluated at the mesh points. Furthermore, we have proven, both
for the associated operators defined on Sobolev spaces and in terms
of generalized functions, that the range of $k$ is contained in
the spectrum of the preconditioned operator.
The purpose of this paper has been to obtain a deeper understanding
of the convergence properties of the CG method applied to
second-order elliptic equations. For problems with large jump
discontinuities in the coefficients the success of efficient
preconditioners is commonly based on some sort of clustering effect
of the eigenvalues. The present work shows that such an approach might
be very difficult to apply to problems involving continuous
coefficients with large variations. In particular, if the inverse
Laplacian is applied as a preconditioner then such a clustering effect
will not occur.
\section*{Acknowledgements}
We would like to thank Kent-Andre Mardal for helping us with the
implementation of the C++ software used in this work. Furthermore, we
are very grateful to the referees for their most interesting comments
and suggestions.
\input{refs}
\clearpage
\section*{Response Letter}
\title{Optimal sports betting strategies in practice:\\ an experimental review \vspace{-1cm}}
\maketitle
\author{ \centering {\sc Matej Uhr\'{i}n, Gustav \v{S}ourek, Ond\v{r}ej Hub\'{a}\v{c}ek, Filip \v{Z}elezn\'{y}}\\[2pt]
\centering \textit{Department of Computer Science}\\[2pt]
\centering \textit{Czech Technical University in Prague}\\[16pt]
}
\begin{abstract}
{We investigate the most popular approaches to the problem of sports betting investment based on modern portfolio theory and the Kelly criterion. We define the problem setting, the formal investment strategies, and review their common modifications used in practice. The underlying purpose of the reviewed modifications is to mitigate the additional risk stemming from the unrealistic mathematical assumptions of the formal strategies. We test the resulting methods using a unified evaluation protocol for three sports: horse racing, basketball and soccer. The results show the practical necessity of the additional risk-control methods and demonstrate their individual benefits. Particularly, an adaptive variant of the popular ``fractional Kelly'' method is a very suitable choice across a wide range of settings.}
{sports betting, betting strategies, risk management, bankroll management}
\end{abstract}
\section{Introduction}
\label{sec:intro}
Sports betting systems generally consist of two essential components \rev{--} (i) predictive models, generating probabilistic estimates for the given match outcomes, and (ii) bankroll management strateg\rev{ies}, optimizing the expected progression of wealth in time. In this work, we focus solely on the latter.
While much of the available research on betting systems is \rev{centred} around the predictive \rev{modelling} part, often completely neglecting the need for betting portfolio optimization, we show that, given a predictive model, the betting strategy has a major influence on the final measures of profit. Consequently, a worse model with a better strategy can easily outperform a better model with a worse strategy.
Lacking a deeper understanding of the investment part of the problem, practitioners often resort to trivial practices such as various forms of flat betting. We show that these are inferior to the formal strategies, not just theoretically but also from \rev{a} practical perspective. There are two basic streams of research in the formal approaches, stemming from information theory and economics, respectively. The first, and the most widespread, is the Kelly criterion\rev{~\citep{kelly1956new}}, also known as the geometric mean policy, maximizing the expected long-term growth of wealth. The second is the approach of Markowitz's Modern portfolio theory\rev{~\citep{markowitz1952portfolio}}, balancing the criteria of expected profit and \rev{profit} variance as a measure of risk.
While mathematically sound, the formal strategies are based on unrealistic assumptions. The most limiting assumption in their application to sports betting is the knowledge of true probabilities of individual match outcomes. Other complications of the problem include \rev{the} multiplicity of outcomes and parallel matches. We investigate the existing modifications of the formal strategies proposed to address the problems occurring in practice and evaluate them experimentally in 3 different sport\rev{s} domains -- horse racing, basketball, and football.
The paper is structured as follows. In Section~\ref{sec:definitions} we define the concept of a betting strategy and the dimensions of the underlying optimization problem. In Section~\ref{sec:related} we review the related work touching different facets of risk and bankroll management in betting. In Section~\ref{sec:strategies} we formally introduce the two core strategies of Kelly and Markowitz. The modifications of the core strategies proposed to manage the extra risk occurring in practice are then introduced in Section~\ref{sec:risk}. Finally, we experimentally evaluate the strategies in practical scenarios in Section~\ref{sec:experiments} and conclude the paper in Section~\ref{sec:conclusion}.
\section{Problem Definition}
\label{sec:definitions}
At its core, sports betting is a simple stochastic game where the player $p$ repeatedly allocates a distribution of \textit{fractions} ($f_i \in [0,1],~\sum_{i}f_i \leq 1$) of her current bankroll $W \in \mathbb{R}$ at time $t \in \mathbb{N}$ over possible stochastic results $r_i \in \mathrm{R}$ of a match, coming from a distribution $P_r(r_i)$ over the domain $\mathrm{R}$ of the random variable $R$, describing all the possible outcomes of the given match at time step $t$. Each of the possible match outcomes $r_i$ is then associated with \rev{so-called} \textit{odds} ($o_i \in \mathbb{R}_{\geq 1}$) by the bookmaker $b: r_i \mapsto o_i$. Should a particular outcome $i$ be realized \rev{(}${R}=r_i$\rev{)}, a payoff $o_i \cdot f_i \cdot W$ from the associated odds and fraction is to be received by the player $p$. In the opposite case, the player loses the allocated portion $f_i \cdot W$ of her bankroll to the bookmaker $b$.
Each of the particular \rev{betting} outcomes $r_i$ is \rev{thus} binary\footnote{\rev{Note this concerns an individual $r_i$ and not $|\mathrm{R}|$, i.e. a match can have many possible outcomes $r_i$, but each $r_i$ has a binary realization, resulting exclusively in either win or loss of the bet associated with it.}} in nature, and the potential net profit $w_i$ from allocation on the $i$-th outcome is thus
\begin{equation}
w_i =
\left\{
\begin{array}{lll}
o_i \cdot f_i \cdot W - f_i \cdot W ~~& \mbox{with prob. $P_r(r_i)$} &\mbox{(if $\mathrm{R}=r_i$ is realized)} \\
- f_i \cdot W ~~& \mbox{with prob. $1-P_r(r_i)$} &\mbox{(if $\mathrm{R} \neq r_i$)}
\end{array}
\right.
\end{equation}
giving an expectation
\begin{equation}
\EX_{P_r}[w_i] = P_r(r_i) \cdot (o_i f_i W - f_i W) + (1-P_r(r_i)) \cdot (- f_i W)
\end{equation}
Clearly, the profits of the bettor and bookmaker are directly opposite and, assuming a closed system of bettors and bookmakers, this is a zero-sum game. The goal of both the player $p$ and the bookmaker $b$ is to maximize their long-term profits $W_{t \to \infty}$ as measured by their respective utilities (Section~\ref{sec:strategies}). Given the stochastic nature of the game, the natural desideratum of the player is to allocate the fractions $\bm{f} = f_1, \dots, f_n$ so as to target a high total expect\rev{ation of profit} $\mathrm{W}$
\begin{equation}
\EX_{P_r}[\mathrm{W}] = \EX_{P_r} \bigg[\sum_i w_i \bigg] = \sum_i \EX_{P_r} [w_i]
\end{equation}
Note that, in this work, we assume the two players to take on the asymmetric roles of market maker $b$ and market taker $p$, where the bookmaker $b$ always starts by laying out the odds $\bm{o} = [o_1, \dots, o_n]$ for the possible match results $\bm{r} = [r_1, \dots, r_n]$ first, to which the player $p$ subsequently reacts with her best policy for allocation $p : r_i \mapsto f_i$ of her current wealth $W_t$. In contrast to, e.g., the \rev{less common} betting exchange setting, we consider solely strategies for the role of the market taker $p$, which is the most common setup for bettors in practice.
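To make the payoff structure concrete, the following minimal Python sketch (our illustration; the function names are ours) computes the net profit $w_i$ and its expectation for a single outcome.
\begin{verbatim}
# Net profit w_i of wagering a fraction f_i of bankroll W on outcome r_i with
# odds o_i, and its expectation under the outcome probability p_i (see the
# Problem Definition section). Function names are ours; illustration only.
def net_profit(o_i, f_i, W, won):
    return (o_i * f_i * W - f_i * W) if won else (-f_i * W)

def expected_profit(p_i, o_i, f_i, W):
    return p_i * (o_i * f_i * W - f_i * W) + (1.0 - p_i) * (-f_i * W)

# a single bet: 10% of a unit bankroll at odds 1.9 on an outcome of probability 0.55
print(expected_profit(p_i=0.55, o_i=1.9, f_i=0.1, W=1.0))  # 0.55*0.09 - 0.45*0.1 = 0.0045
\end{verbatim}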
\subsection{Betting Strategy}
\label{sec:def:strategy}
A player's betting strategy for a game with $n$ outcomes is a \rev{function $g$} mapping a set of probabilistic estimates $\hat{\bm{p}} = \hat{p_1}, \dots,\hat{p_n}$ and bookmaker's odds $\bm{o} = o_1, \dots, o_n$ onto a set of fractions $\bm{f} = f_1, \dots, f_n$ of the current wealth $W_t$ to be waged \rev{over} the game outcomes $\bm{r} = r_1, \dots, r_n$
\rev{
\begin{align}
g &: (\hat{\bm{p}}, \bm{o}) \mapsto \bm{f}
\end{align}
}
Typically, the estimated distribution vector $\hat{\bm{p}}$ comes from a probabilistic model $P_p$ of the player and is similar to, yet \rev{most likely} different from, the \rev{(unknown)} true probability distribution $P_r$, i.e. $P_p = \hat{P_r},~P_p \neq P_r$ \rev{(Section \ref{sec:def:estimates})}.
The vector of the waged fractions $\bm{f}$ is then often referred to as the \textit{portfolio} over individual ``assets'' $i$ (Section~\ref{sec:MPT})
\begin{equation}
\bm{f} =
\begin{bmatrix}
f_1, \dots, f_n
\end{bmatrix}
\end{equation}
where $f_i$ indicates the portion of wealth $W_t$ allocated to $i$-th outcome.
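In code, such a strategy can be viewed simply as a function from the pair $(\hat{\bm{p}}, \bm{o})$ to a fraction vector $\bm{f}$; the following Python sketch (an illustration with our own naming, not one of the strategies evaluated in this paper) shows the interface together with a naive flat strategy.
\begin{verbatim}
from typing import Callable, Sequence

# A betting strategy g maps estimated outcome probabilities p_hat and the
# bookmaker's odds o onto bankroll fractions f with sum(f) <= 1.
# The type alias and the example strategy below are ours, for illustration only.
Strategy = Callable[[Sequence[float], Sequence[float]], Sequence[float]]

def flat_positive_ev(p_hat: Sequence[float], o: Sequence[float]) -> Sequence[float]:
    """Naive flat strategy: wager 1% of the bankroll on every outcome with a
    positive estimated expected value p_hat[i] * o[i] - 1 (no risk control)."""
    return [0.01 if p * odds - 1.0 > 0.0 else 0.0 for p, odds in zip(p_hat, o)]

# odds and estimates taken from the biased-coin example further below
print(flat_positive_ev([0.55, 0.45], [1.46, 2.71]))   # [0.0, 0.01]
\end{verbatim}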
\subsection{Fixed Odds}
\label{sec:def:odds}
We further assume a \rev{so-called} fixed-odds betting setup which, as opposed to e.g. parimutuel setting~\citep{hausch2008efficiency}, always offers known odds distribution $\bm{o}$ in advance of the game for the player's strategy \rev{$g$} to calculate with.
In its most basic form, we can demonstrate the given setting on a simple \rev{coin-tossing} game as follows.
\begin{example}
\label{ex:coin1}
Assume a fair \rev{coin-tossing} game with two, equally probable, outcomes $\mathrm{R} =\{Heads, Tails\}$
\begin{equation}
\underset{r_i \in \mathrm{R}}{P_r(r_i)} =
\left\{
\begin{array}{ll}
0.5 & \mbox{for the coin falling } r_1 = \textit{Heads} \\
0.5 & \mbox{for the coin falling } r_2 = \textit{Tails}
\end{array}
\right.
\end{equation}
The odds by the bookmaker $b$ could then be set up e.g. as follows
\begin{equation}
\underset{r_i \in \mathrm{R}}{b(r_i)} =
\left\{
\begin{array}{ll}
o_1 = 1.9 & \mbox{for the coin falling } r_1 = \textit{Heads} \\
o_2 = 1.9 & \mbox{for the coin falling } r_2 = \textit{Tails}
\end{array}
\right.
\end{equation}
Let the bettor allocate a fixed wager, such as \$1, on the outcome $r_1=Heads$.
She then receives an extra $w_1 = (1.9 - 1) \cdot 1$ profit if the associated outcome $r_1=Heads$ is realized, or loses the placed wager of \$1 otherwise.
It is easy to see that this particular game is generally disadvantageous for the bettor, and there exists no strategy for her to make long-term profits, since the expected profit for each outcome of the game is simply negative:
\begin{equation}
\EX[w_1] = \EX[w_2] = 0.5 \cdot (1.9 - 1) \cdot 1 + 0.5 \cdot (-1) = -0.05
\end{equation}
This \rev{follows directly from} the fact that the odds are \textit{unbiased} and \textit{subfair}. This means that \rev{the distribution of their inverse values $P_b : r_i \mapsto \frac{1}{o_i}$ is} proportional to the true probability distribution over the game outcomes, but \rev{it does} not form a \textit{probability} distribution as \rev{the values} do not sum up to $1$:
\begin{equation}
\sum_i{P_b(r_i)} = \frac{1}{o_1} + \frac{1}{o_2} \approx 1.05
\end{equation}
\end{example}
Out of the three settings~\citep{cover2012elements}: \textit{fair, subfair, superfair},
the \textit{subfair} odds are typically the only setting in which a bookmaker is able to generate profits. We will further limit ourselves to this setting, as it is the only setup viable in practice.
The value of
\rev{
\begin{equation}
margin = \frac{\sum_{j=1}^K\frac{1}{o_j} -1 }{\sum_{j=1}^K\frac{1}{o_j}}
\end{equation}}
is then called the bookmaker's margin\footnote{Often wrongly calculated as simply the remainder \rev{over $1$ as $\sum_{j=1}^K\frac{1}{o_j} -1$}} (also known as ``vigorish'', ``house edge'', ``cut'' etc.), and represents the negative expected value of the game, given that the probabilities $P_b$ implied by the odds are unbiased estimates of the true outcome probabilities $P_r$. Note that this is the typical game setting operated in the gambling industry, such as in various casino games, where there is no room for long-term profitable strategies. However, we note that the situation in sports betting is principally different.
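For the odds from Example~\ref{ex:coin1}, for instance, the margin evaluates to
\begin{equation*}
margin = \frac{\frac{1}{1.9}+\frac{1}{1.9}-1}{\frac{1}{1.9}+\frac{1}{1.9}} = \frac{0.0526}{1.0526} \approx 0.05,
\end{equation*}
which is exactly the negative expected value per unit staked computed in Example~\ref{ex:coin1}.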
\subsection{Biased Estimates}
\label{sec:def:estimates}
In Example~\ref{ex:coin1} with a fair coin, both the bookmaker and bettor knew the true outcome probability distribution (i.e. $P_r(r_1=H)=0.5 ;\ P_r(r_2=T)=0.5$). This setting is very elegant from mathematical perspective, since one can calculate exact expected values of profits and consequently derive optimal betting strategies (Section~\ref{sec:strategies}).
Such mathematically optimal strategies can be theoretically applied in artificial environments with handcrafted generators of randomness (e.g. the casinos). However, in the context of sports betting, and other practical settings such as stock market investing, this is generally impossible.
In this experimental review, we thus focus on the scenarios, where the probability estimates of both the bookmaker $P_b$ and the player $P_p$ are biased w.r.t. the real outcome probability distribution $P_r$.
Let us consider an extension of the \rev{coin-tossing} game from Example~\ref{ex:coin1} to demonstrate properties of such \rev{a} setting.
\begin{example}
Consider a \textit{biased} \rev{coin-tossing} game where the coin bias is \textit{unknown} to both the bookmaker and the player. Let us \rev{set-up} the bias such that
\begin{equation}
\underset{r_i \in {\mathrm{R}}}{P_r(r_i)} =
\left\{
\begin{array}{ll}
0.6 & \mbox{for } r_1 = \textit{H} \\
0.4 & \mbox{for } r_2 = \textit{T}
\end{array}
\right.
\end{equation}
Let us further assume that the player $p$ has a probabilistic model of the game, producing biased estimates $P_p = \hat{P_r}$ as
\begin{equation}
\underset{r_i \in {\mathrm{R}}}{P_p(r_i)} =
\left\{
\begin{array}{ll}
0.55 & \mbox{for } r_1 = \textit{H} \\
0.45 & \mbox{for } r_2 = \textit{T}
\end{array}
\right.
\end{equation}
Finally, assume the bookmaker is also biased with his estimates $P_b = \hat{P_r}, P_b \neq P_p$, according to which he sets up the odds distribution $\bm{o}$, lowered by a margin\footnote{In practice, the distribution of margin would not be simply uniform as in the example, but the bookmaker typically applies more sophisticated distortion of the odds to secure even higher statistical advantage.} $m=0.05$
\begin{equation}
\underset{r_i \in {\mathrm{R}}}{P_b(r_i)} =
\left\{
\begin{array}{ll}
0.65 & \mbox{for } r_1 = \textit{H} \\
0.35 & \mbox{for } r_2 = \textit{T}
\end{array}
\right.
\underset{r_i \in {\mathrm{R}}}{b(r_i)} =
\left\{
\begin{array}{ll}
\frac{1}{0.65} \cdot (1-{0.05}) \approx 1.46 & \mbox{for } r_1 = \textit{H} \\
\frac{1}{0.35} \cdot (1-{0.05}) \approx 2.71 & \mbox{for } r_2 = \textit{T}
\end{array}
\right.
\end{equation}
Note that while the odds are still subfair, the bookmaker's bias w.r.t. $P_r$ now creates space for exploitation, since the true expected values are no longer purely negative.
\begin{equation}
\begin{array}{llll}
\EX_{P_r}[w_1] &=& P_r(r_1) \cdot b(r_1) -1 \approx -0.124 & \text{ for~~ } \mathrm{R}=r_1=H\\
\EX_{P_r}[w_2] &=& P_r(r_2) \cdot b(r_2) -1 \approx 0.084 & \text{ for~~ } \mathrm{R}=r_2=T
\end{array}
\end{equation}
i.e. the punter could make long-term profits if betting appropriate amounts on the $r_2=T$ outcome. However, not knowing the true probabilities $P_r$, the player's calculation of expected values will now be biased, too
\begin{equation}
\begin{array}{lll}
\EX_{P_p}[w_1] &=& P_p(r_1) \cdot b(r_1) -1 \approx -0.197\\
\EX_{P_p}[w_2] &=& P_p(r_2) \cdot b(r_2) -1 \approx 0.22
\end{array}
\end{equation}
nevertheless, despite the expected values calculated by the punter w.r.t. her $P_p$ estimate \rev{being} wrong, in this particular setting, she correctly identified the positive expected value in the $r_2=T$ outcome and could theoretically make a profit with an appropriate strategy modification (Section~\ref{sec:risk}).
\end{example}
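The arithmetic of this example can be reproduced with the following short Python sketch (our illustration only); the printed values differ from those above in the last digit because the odds are not rounded here.
\begin{verbatim}
# Expected unit-stake profits in the biased coin-tossing example (illustration).
p_r = {'H': 0.60, 'T': 0.40}        # true (unknown) outcome probabilities
p_p = {'H': 0.55, 'T': 0.45}        # player's biased estimates
odds = {'H': 1 / 0.65 * 0.95,       # bookmaker's odds with margin m = 0.05
        'T': 1 / 0.35 * 0.95}

for r in ('H', 'T'):
    ev_true = p_r[r] * odds[r] - 1.0    # true expected value per unit staked
    ev_player = p_p[r] * odds[r] - 1.0  # player's (biased) estimate of it
    print(r, round(odds[r], 2), round(ev_true, 3), round(ev_player, 3))
# prints: H 1.46 -0.123 -0.196
#         T 2.71 0.086 0.221
\end{verbatim}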
Generally, $P_p = \hat{P_r}$ and $P_b = \hat{P_r}^{'}$ are going to be somewhat biased w.r.t. $P_r$ as well as w.r.t. each other \rev{(i.e. $P_p \neq P_b$,} as long as \rev{the} player does not simply copy from the bookmaker). The individual biases can be captured by statistical measures, such as the Kullback--Leibler or, better yet, Jensen--Shannon divergences~\citep{cover2012elements}, and the probabilistic setting of each game for a particular match can then be understood as a triplet of probability distributions over the outcomes, as depicted in Figure~\ref{fig:triangle}.
\begin{figure}[t]
\centering
\includegraphics{triangle.eps}
\caption{A typical sports betting setting for a game with $n$ outcomes, displaying bookmaker's probabilistic estimates $P_b$ and player's estimates $P_p$, both distanced from the true distribution $P_r$ and from each other.}
\label{fig:triangle}
\end{figure}
\subsection{Multiplicity of Outcomes}
\label{sec:def:outcomes}
So far we have assumed a binary \rev{coin-tossing} game of two possible outcomes. Let us now generalize into an $n$ outcome game, such as throwing a die. This represents most real situations in sports betting, such as the $\mathrm{R} = \{Win,Draw,Loss\}$ outcomes in soccer, or betting on the winner of a horse race with $n$ horses (Section~\ref{sec:datasets}). Moreover, one can potentially assume that the individual game outcomes are no longer exclusive, such as betting on the first $j$ horses, or ``over'' $j$ goals in soccer for multiple different values of $j$.
To make the game representation more compact in such situations, a generic matrix~$\bm{O}$ representation has been proposed~\citep{busseti2016risk}, where the columns of $\bm{O}$ represent the possible outcome assets, and rows represent the possible game results, i.e. joint realizations of all the outcomes. Each individual element in $\bm{O}$ then represents particular odds for each outcome realization.
Additionally, we include an artificial risk-free ``cash'' asset $\bm{c}$, which allows the player to put money aside safely. This also makes it possible to model situations where leaving money aside costs \rev{a} small fraction of wealth in every turn (caused \rev{e.g.} by inflation), or where the wealth put aside increases by some interest rate (e.g. in a savings account).
The betting strategy \rev{$g$} (Section~\ref{sec:def:strategy}) can thus now always allocate the full amount of the current wealth $W$ among the $n$ available outcome assets, $n - 1$ of which are risky, stochastic assets, while the remaining one is the added risk-free cash asset, as
\begin{equation}
g : (\bm{p}^k, \bm{O}_k^n) \mapsto \bm{f}^n \text{~~~where~~~} \sum_i{f_i}=1
\end{equation}
where $k$ is the number of possible worlds, i.e. there are $k$ possible joint outcome realizations in our probabilistic game.
Odds for each outcome asset in each of the $k$ world realizations with the respective probabilities $\bm{p} = p_1, p_2, ..., p_k$ can thus be fully specified in the columns $\bm{o_i}$ as
\begin{align}
\bm{O} =
\begin{bmatrix}
\bm{o_1} & \bm{o_2} & ... & \bm{o_{n-1}} & \bm{c}
\end{bmatrix}
~,~\text{where}~~
\bm{o_i} =
\begin{bmatrix}
o_{i, 1} \\
o_{i, 2} \\
... \\
o_{i, k}
\end{bmatrix}
~,~
\bm{c} =
\begin{bmatrix}
1 \\
1 \\
... \\
1
\end{bmatrix}
\end{align}
\begin{example}
Consider a football game, where we assume $3$ outcomes as $\mathrm{R} = \{W, D, L\}$, forming the $3$ asset vectors $\bm{o_w}, \bm{o_d}, \bm{o_l}$, where the bookmaker sets the odds to $o_w, o_d, o_l$, respectively. The odds matrix $\bm{O}$, including the constant cash asset $\bm{c}$, then looks as follows.
\begin{equation}
\bm{O} =
\begin{bmatrix}
\bm{o_w} & \bm{o_d} & \bm{o_l} & \bm{c}
\end{bmatrix}
~~\text{where~}~~
\bm{o_w} =
\begin{bmatrix}
o_w \\
0 \\
0
\end{bmatrix}
,~
\bm{o_d} =
\begin{bmatrix}
0 \\
o_d \\
0
\end{bmatrix}
,~
\bm{o_l} =
\begin{bmatrix}
0 \\
0 \\
o_l
\end{bmatrix}
,~
\bm{c} =
\begin{bmatrix}
1 \\
1 \\
1
\end{bmatrix}
\end{equation}
\end{example}
To simplify notation in further sections, we will also define a modified odds matrix $\bm{\rho}$ corresponding to excess odds, i.e. removing the return amount of the placed wager itself, resulting \rev{in} net profit $\mathrm{W}$ (Section~\ref{sec:definitions}), as
\begin{equation}
\bm{\rho} = \bm{O} - \bm{1}
\end{equation}
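As a concrete illustration (our own sketch with made-up odds, not data from the experiments), the odds matrix $\bm{O}$ and the excess odds matrix $\bm{\rho}$ for the football example can be constructed and applied to a portfolio as follows.
\begin{verbatim}
import numpy as np

# Odds matrix O for the exclusive-outcome football example; the numerical odds
# (o_w, o_d, o_l) = (2.1, 3.4, 3.6) are ours, chosen purely for illustration.
o_w, o_d, o_l = 2.1, 3.4, 3.6
O = np.array([[o_w, 0.0, 0.0, 1.0],    # world in which W is realised
              [0.0, o_d, 0.0, 1.0],    # world in which D is realised
              [0.0, 0.0, o_l, 1.0]])   # world in which L is realised; last column = cash c
rho = O - 1.0                          # excess odds, i.e. net profit per unit staked

f = np.array([0.10, 0.05, 0.0, 0.85])  # an example portfolio, fractions sum to 1
print(O @ f)                           # gross wealth multiplier in each world
print(rho @ f)                         # net profit per unit of wealth in each world
\end{verbatim}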
Note that in the example scenario the outcomes were exclusive, and the ``one-hot'' risky asset vectors reflect their exclusive nature, which considerably simplifies the computation of optimal strategies (Section~\ref{sec:strategies}).
In this review, we generally assume individual matches with exclusive outcomes\footnote{Note that the exclusiveness of outcomes does not hold in the further scenarios with parallel games.} but varying outcome multiplicities (Section~\ref{sec:datasets}) to experimentally assess the properties of the strategies w.r.t. this dimension of the problem.
\subsubsection{Parallel Games}
\label{sec:def:parallel}
To further complicate the game, approaching the real betting setting even more closely, we can consider multiple dice being thrown in parallel, each associated with a particular set of outcomes and odds. Naturally, this reflects the reality of multiple games being open for betting at the same time. In popular sports, such as soccer, it is not uncommon to have dozens of games available on the market simultaneously.
While we can surely consider each of the games separately, such a simplification can lead to sub-optimal results. Although calculating with the true parallel nature of the opportunities can be computationally demanding for some of the strategies (Section~\ref{sec:quadraticapprox}), it should make it possible to alleviate the risk by diversifying over a wider portfolio at each time step of the wealth progression.
In this review, we consider both the sequential and parallel scenarios to emulate realistic scenarios and evaluate the respective advantages (Section~\ref{sec:experiments}).
\subsection{Betting Dynamics}
\label{sec:def:dynamics}
The betting dynamic represents the investment \rev{behaviour} of the bettor w.r.t. her bankroll $W$ in time $t$, which has a major impact on the progression of wealth. There are two basic cases of bankroll management to be considered \rev{--} (i) additive and (ii) multiplicative~\citep{peters2016evaluating, peters2011optimal}.
\subsubsection{Additive dynamic}
The additive dynamic corresponds to a simple fixed unit-based investment, where the bettor's wagers are decoupled from her current bankroll $W_t$. To illustrate the setting, we can imagine that the bettor receives a fixed unit (e.g. \$1) amount of money from an external source at regular time intervals $\delta t$ (such as a salary), which she repeatedly invests into the stochastic game of betting, and accumulates (additively) the prospective returns $w_t \cdot 1$ from the unit investment in the separately held bankroll $W_t$.
Her wealth progression in time $t$ can hence be seen as
\begin{equation}
W_t = w_t \cdot 1 + W_{t - \delta t}
\end{equation}
\subsubsection{Multiplicative dynamic}
\label{sec:multiplicative}
In the multiplicative scenario, the bettor continuously \textit{reinvests} the current wealth $W_t$ accumulated from the previous betting investments, without any external source of profit. Hence her progression of wealth in time $t$ can be seen as
\begin{equation}
W_t = w_t \cdot W_{t - \delta t}
\end{equation}
The multiplicative dynamic plays an important role in the Kelly criterion (Section~\ref{sec:kelly}), where the mathematical optimality of the strategy is derived exactly from \rev{a} repeated play of the same game in the multiplicative setting.
As the comparison of the two approaches appears problematic, due to the external source of profit in the additive scenario, we will further consider only the multiplicative reinvestment setting, which is also more realistic and sound for \rev{an} independent evaluation.
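The practical difference between the two dynamics can be illustrated by a toy simulation (ours, with arbitrary settings, not one of the experiments reported later).
\begin{verbatim}
import random

# Toy comparison of additive vs. multiplicative bankroll dynamics on the biased
# coin from the example above: betting on Tails at odds 2.71 with true P(T) = 0.4.
# All settings here are ours, purely for illustration.
random.seed(0)
odds, p_win, steps = 2.71, 0.4, 1000
W_add, W_mul, f = 100.0, 100.0, 0.05   # starting bankrolls and staked fraction

for _ in range(steps):
    win = random.random() < p_win
    w = (odds - 1.0) if win else -1.0  # net profit per unit staked
    W_add += w * 1.0                   # additive: fixed 1-unit stake each round
    W_mul *= 1.0 + w * f               # multiplicative: reinvest fraction f of W

print(round(W_add, 1), round(W_mul, 1))
\end{verbatim}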
\section{Related works}
\label{sec:related}
The two most notable approaches to allocation of wealth across presented stochastic assets, i.e. match outcomes in sport\rev{s} betting, were introduced by (i)~\cite{markowitz1952portfolio}, with his revolutionary concept of balancing return and risk of a portfolio, and by (ii)~\cite{kelly1956new}, with a criterion to maximize the long-term growth in a scenario where the same game is being played repeatedly.
Following the Kelly criterion, the process of betting is closely connected to information theory~\citep{kelly1956new}. \rev{\cite{bell1988game} discuss the game-theoretical optimality of Kelly portfolios, and a generalization of the Kelly strategy maximizing the proportion of wealth relative to the total wealth of the population is discussed in~\citep{lo2018growth}.} Additional mathematical properties were also explored in~\citep{latane2011criteria} and~\citep{breiman1961optimal, thorp2008kelly}. From the economic perspective, Kelly's approach is often explained through the use of a logarithmic utility function, which was famously first introduced by Daniel Bernoulli in~\citep{bernoulli2011exposition}, where he pointed out that people do not make their decisions according to the absolute payoff, but w.r.t. the logarithm thereof. \rev{In~\citep{luenberger2011preference} the authors suggest that, assuming long-term goals, the logarithmic utility function is the only sensible choice for a utility function.} While not necessarily incorrect, the phenomenological explanation of the choice of the logarithmic utility function seem\rev{s} somewhat arbitrary, however.
In \citep{peters2011time} a different view on the Kelly criterion was proposed, where the author criticized the established evaluation of betting using the expected value of a portfolio, as it is based on the unrealistic idea of ``simultaneous'' evaluation of the, often exclusive, outcomes. Instead of measuring \rev{the} mean of a statistical ensemble of possible outcomes, the author proposed to focus on what happens to a single player as the same game is repeated in time, following the notion of ergodicity in dynamic systems~\citep{peters2019ergodicity}. The logarithmic transformation then emerges as the correct ergodic transformation of dynamics of the game in the classical reinvestment setting~\citep{peters2016evaluating}, providing a well-founded explanation for the observed phenomenon.
Given the mathematically elegant yet somewhat unrealistic setting, the Kelly strategy has also been often criticised in many works~\citep{samuelson1971fallacy, samuelson2011we, maclean2010good, samuelson1975lifetime}.
\subsection{Extensions of the formal strategies}
\label{sec:related:extensions}
The strategies of Markowitz and Kelly have been re-explored by researchers in a number of different application scenarios, and many useful modifications have been proposed since. Generally, Markowitz's approach has traditionally dominated the world of quantitative finance, while Kelly's approach has been more prominent in the sports betting industry. In~\citep{smoczynski2010explicit}, a closed-form solution for the use of the Kelly strategy when betting on horse racing was explored. Another practical extension for betting on multiple simultaneous games was discussed in a number of works~\citep{whitrow2007algorithms, grant2008optimal, buchen2012comparison}, where \rev{various} approximations for large bet aggregations were proposed.
\rev{
A modification of the Kelly strategy for betting exchanges is discussed in~\citep{noon2013kelly}, where adjustments for both back and lay bets are presented. Additionally, the effects of commission and of a maximum bet constraint on the resulting growth rate are discussed. The Kelly problem is examined for spread betting in~\citep{chapman2007kelly} and in \citep{haigh2000kelly}, where several counterintuitive effects of using the Kelly strategy for spread betting are discussed. Markowitz's modern portfolio theory for soccer spread betting is then discussed in~\citep{fitt2009markowitz}.
}
Another important stream of research are works investigating extensions of the Kelly strategy towards the realistic setting of parameter uncertainty, such as~\citep{baker2013optimal}. A practical method to address the problem is the class of \rev{so-called} fractional Kelly strategies, the properties of which have been investigated in great detail in the works of~\citep{maclean2011medium} and \citep{maclean1992growth}. \rev{\cite{peterson2017kelly} presents a decoupled Kelly strategy combined with an additional risk measure. \cite{kan2007optimal} introduced an optimal portfolio choice under parameter uncertainty for the modern portfolio theory (MPT).}
Interesting modifications with similar aims are the Bayesian extensions of the Kelly strategy proposed in \citep{browne1996portfolio, balka2017kelly, chu2018modified}. Similarly, approaches based on probabilistic risk constraints for limiting the probability of a ``drawdown'' were discussed in \citep{busseti2016risk} and \citep{mulvey2011dynamic}. Finally, limiting the \rev{worst-case} probabilistic scenario using the framework of distributionally robust optimization was explored in \citep{sun2018distributional} for the Kelly strategy and in \citep{blanchet2018distributionally} for the Markowitz strategy, respectively.
\subsection{Predictive modelling}
\label{sec:related:model}
\rev{
Since we consider predictive sports modelling a separate problem, we only briefly review some papers on the topic, with an extra focus on models related to those used for experiments in this paper.
}
\rev{
A traditional stream of research in predictive sports analytics are score-based models built on various explicit statistical assumptions. A football prediction model introduced by~\cite{maher1982} builds a statistical model on the assumption that in a football match the goals are Poisson-distributed and those of the home team are independent of those of the away team. The author also introduced the notion of teams' attacking and defensive strengths and showed how to use them for forecasting match results. In~\citep{dixon1997}, Maher's model is further extended and it is shown to make a profit when combined with a simple betting strategy. The authors also used exponential time weighting to discount the effects of past results, while in~\citep{maher1982} the strength of the team is considered to be time-invariant. In~\citep{rue2000}, the authors used a Brownian motion to bind together the strength parameters of the teams in consecutive rounds. The model is then used for betting with a variant of the MPT strategy. \cite{egidi2018combining} presents a hierarchical Bayesian Poisson model with the scoring rates of the teams being represented by convex combinations of parameters estimated from historical data and betting odds. In \citep{groll2013spain}, the authors analyze the explanatory power of bookmakers' odds using a pairwise generalized linear mixed Poisson model.}
\rev{
Another modern approach for match outcome predictions are non-parametric and feature-based machine learning models.
\cite{Haghighat2013} provides a review of machine learning techniques used in outcome prediction of sports events while pointing out some common problems and misconceptions.
In the horse racing domain, a popular logit-based model, combining both ``fundamental features'' and ``odds-derived'' features into a single prediction system, was presented by~\cite{benter2008computer}. This model was also a strong inspiration for the horse racing model evaluated in this paper.
In the domain of soccer, a recent review~\citep{hubacek2019score} discusses a diversity of the common approaches. Notable examples include models from the 2017 Soccer Prediction Challenge~\citep{dubitzky2019}. The winning model from the challenge utilized a boosted tree learner based on an ensemble of score-derived features and simpler ranking and statistical models~\citep{hubacek2019}. This model was also directly used for the soccer betting experiments reported in this paper.
In predictive basketball modelling, it is common to use detailed box-score statistics that are available for the high-exposure leagues. Based on diverse features, \cite{Miljkovic2010} evaluated their model on the NBA, while \cite{Ivankovic2010} used a neural network to predict match outcomes in the League of Serbia. An advanced convolutional neural architecture was then learned over the, so far, largest set of basketball games in~\citep{hubavcek2019exploiting}. We again directly utilize this basketball model in this paper.
}
\section{Betting Strategies}
\label{sec:strategies}
In the existing literature, the betting strategies range from simple informal techniques, such as flat betting, to the formal approaches, represented mainly by Markowitz's Modern Portfolio Theory~\citep{markowitz1952portfolio} and the Kelly criterion~\citep{kelly1956new}, coming from the economic and information-theoretic views of the problem, respectively.
\subsection{Informal Strategies}
\label{sec:strat:informal}
In sports betting practice, most of the focus among punters is put on the search for outcomes with positive expected value (``value bets''), while the importance of the subsequent investment strategy is often neglected. Consequently, rather than formal strategies, one can encounter simplistic heuristics such as~\citep{hubacek2017thesis}:
\begin{itemize}
\item Bet a fixed fraction on favourable odds.
\item Bet a fixed fraction on the opportunity with maximal expected value.
\item Bet a fraction equal to the absolute discrepancy between player's and bookmaker's estimates.
\item Bet a fraction equal to the relative discrepancy between player's and bookmaker's estimates.
\item Bet a fraction equal to the estimated probability of winning.
\end{itemize}
Lacking any formal foundation, these approaches have been shown to be generally inferior to the formal strategies, both theoretically and in practice~\citep{hubacek2017thesis}. For completeness, we chose to re-validate the reports by selecting the previously best performing informal strategies of (i) betting a fraction w.r.t. the maximal discrepancy (``AbsDisc'') and (ii) betting an optimal fraction on the maximal expected value (``MaxEvFrac'') in our experiments (Table~\ref{experiments:metrics:horses}).
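For illustration, a minimal Python sketch of the two re-validated heuristics is given below; the function names and the fixed fraction are our own illustrative choices, and the exact definitions used in the experiments follow~\citep{hubacek2017thesis}.
\begin{verbatim}
import numpy as np

def abs_disc_fractions(p_player, p_bookmaker):
    # "AbsDisc": bet a fraction equal to the absolute discrepancy
    # between the player's and the bookmaker's probability estimates.
    return np.abs(np.asarray(p_player) - np.asarray(p_bookmaker))

def max_ev_fractions(p_player, odds, omega=0.1):
    # "MaxEvFrac" (simplified): bet a fraction omega on the single
    # outcome with the maximal positive expected value, nothing otherwise.
    ev = np.asarray(p_player) * np.asarray(odds) - 1.0
    fractions = np.zeros_like(ev)
    best = int(np.argmax(ev))
    if ev[best] > 0:
        fractions[best] = omega
    return fractions
\end{verbatim}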
\subsection{Modern Portfolio Theory}
\label{sec:MPT}
Modern Portfolio Theory (MPT) is a standard economic view of the problem based on the idea of the expected value of the profit, possibly transformed by a utility function reflecting the user's particular preferences. The general idea behind MPT is that a portfolio $\bm{f^1}$, i.e. a vector of assets $\bm{f} = f_1, \dots, f_n$, is superior to $\bm{f^2}$, if its corresponding expected profit (Section~\ref{sec:definitions}) is at least as great
\begin{equation}
\EX[\bm{\rho} \cdot \bm{f^1}] \geq \EX[\bm{\rho} \cdot \bm{f^2}]
\end{equation}
and a given risk measure $risk : \mathbb{R}^n \to \mathbb{R}$ of the portfolio, w.r.t. the given odds, is no greater
\begin{equation}
risk(\bm{f^1}|\bm{\rho}) \leq risk(\bm{f^2}|\bm{\rho})
\end{equation}
This creates a partial ordering on the set of all possible portfolios. Taking the portfolios that no other portfolio is superior to gives us \rev{a} set of ``efficient portfolios'' $\Theta$~\citep{markowitz1952portfolio}. In simple terms, we trade off the expected profit against the risk by maximizing the following
\begin{equation}
\underset{\bm{f} \in \mathbb{R}^n}{\text{maximize}} ~(\EX[\bm{\rho} \cdot \bm{f}] - \gamma \cdot risk(\bm{f}|\bm{\rho}))
\end{equation}
where $\gamma$ is a hyperparameter reflecting the user's preference for risk.
In the most common setup, the $risk$ of a portfolio $\bm{f}$ is measured through the expected total variance of its profit $Var[\bm{\rho} \cdot \bm{f}] = \bm{f}^T\Sigma \bm{f}$, based on the given covariance matrix $\bm{\Sigma} \in \mathbb{R}^{n \times n}$ of the net profit of the individual assets. Note that in the case of independent outcomes (Section~\ref{sec:def:outcomes}), this reduces to a diagonal matrix with \rev{the} variance of each binary asset\rev{'s} profit, corresponding to the result $r_i$, following from the given odds $o_i$ and the underlying Bernoulli distribution as
$\Sigma(i,i) = \hat{P_r}(r_i) \cdot (1-\hat{P_r}(r_i)) \cdot \rho_{i,i}^2$.
MPT can thus generally be expressed as the following maximization problem
\begin{equation}
\label{eq:MPT}
\begin{aligned}
& \underset{\bm{f} \in \mathbb{R}^n}{\text{maximize}}~
& & \EX[\bm{\rho}\cdot\bm{f}] - \gamma \cdot \bm{f}^T\Sigma \bm{f}\\
& \text{subject to}
& & \sum_{i=1}^{n} f_i = 1, \; f_i \geq 0
\end{aligned}
\end{equation}
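For concreteness, the optimization in Equation~\ref{eq:MPT} can be written down directly in cvxpy; the following minimal sketch assumes that the vector of expected net profits and the covariance matrix $\bm{\Sigma}$ have already been computed from the player's estimates as described above.
\begin{verbatim}
import cvxpy as cp
import numpy as np

def mpt_portfolio(rho_mean, Sigma, gamma):
    # rho_mean: expected net profit of each asset, shape (n,)
    # Sigma:    covariance matrix of the net profits, shape (n, n)
    # gamma:    user's preference for risk
    rho_mean = np.asarray(rho_mean, dtype=float)
    n = len(rho_mean)
    f = cp.Variable(n, nonneg=True)
    objective = cp.Maximize(rho_mean @ f - gamma * cp.quad_form(f, Sigma))
    problem = cp.Problem(objective, [cp.sum(f) == 1])
    problem.solve()
    return f.value
\end{verbatim}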
Apart from the variance $Var[\bm{w}]$ of the potential net returns $\bm{w} = \bm{\rho} \cdot \bm{f}$, different risk measures have been proposed~\citep{markowitz1952portfolio}, such as standard deviation $\sigma(\bm{w}) = \sqrt{Var[\bm{w}]}$ and coefficient of variation $CV(\bm{w}) = \frac{\sigma(\bm{w})}{\EX[\bm{w}]}$. Generally, there is no \rev{agreed-upon} measure of risk and the choice is thus left to the user.
The MPT approach is often criticized for the disputable choice of risk, which can be perceived as a formal weakness of the approach~\citep{peters2016evaluating}, since in many domains the risk is not easy to define. Moreover, the direct maximization of expected profit can be misleading in games, where the distribution of potential profits is highly skewed, i.e. where the mean profit is very different from the median. This situation naturally occurs in the multiplicative dynamics setting, where maximization of expected value may lead to undesirable outcomes~\citep{peters2016evaluating}.
\subsubsection{Maximum Sharpe Strategy}
\label{sec:MaxSharpe}
Apart from the choice of the risk measure, the inherent degree of freedom in MPT is how to select a particular portfolio from the efficient frontier $\Theta$ (based on the choice of $\gamma$). Perhaps the most popular way to avoid the dilemma is to select a spot on the Pareto front with the highest expected profits w.r.t. the risk. For the risk measure of $\sigma(\bm{w})$, this is known as the ``Sharpe ratio'', generally defined as
\begin{equation}
\frac{\EX[\bm{w}] - r_f}{\sigma(\bm{w})}
\end{equation}
where $\EX[\bm{w}]$ is the expected return of the portfolio, $\sigma(\bm{w})$ is the standard deviation of the return, and $r_f$ is a ``risk-free rate''. Since there is no risk-free investment in sports betting, we can neglect it and reformulate the optimization problem as
\begin{equation}
\begin{aligned}
& \underset{\bm{f} \in \mathbb{R}^n}{\text{maximize}}
& & \frac{\EX[\bm{\rho} \cdot \bm{f}]} {\sqrt{\bm{f}^{T}\bm{\Sigma}\bm{f}}} \\
& \text{subject to}
& & \sum_{i=1}^{n} f_i = 1, f_i \geq 0
\end{aligned}
\end{equation}
the solution of which we will further refer to as the ``MSharpe'' strategy.
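Although the Sharpe ratio objective is not convex as stated, a standard reformulation (minimizing the variance subject to unit expected profit and rescaling the result) yields an equivalent convex program whenever at least one asset has a positive expected profit. A minimal cvxpy sketch of this reformulation, under that assumption and assuming a positive semidefinite $\bm{\Sigma}$, could look as follows.
\begin{verbatim}
import cvxpy as cp
import numpy as np

def max_sharpe_portfolio(rho_mean, Sigma):
    # Standard reformulation of the maximum Sharpe ratio problem with
    # no risk-free asset: minimize variance subject to unit expected
    # profit, then rescale the solution to a unit-sum portfolio.
    rho_mean = np.asarray(rho_mean, dtype=float)
    n = len(rho_mean)
    y = cp.Variable(n, nonneg=True)
    problem = cp.Problem(cp.Minimize(cp.quad_form(y, Sigma)),
                         [rho_mean @ y == 1])
    problem.solve()
    return y.value / np.sum(y.value)
\end{verbatim}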
The variance-based choices of risk have often been criticized as they penalize excess losses as well as excess returns, which is obviously undesirable. Moreover, the calculation of the MaxSharpe solution is also quite sensitive to errors in the probabilistic estimates, and can often be biased towards extreme solutions, requiring some additional form of control\footnote{E.g. a strategy with no wagers placed would have zero variance resulting in an infinite Sharpe ratio.}. Nevertheless\rev{,} it remains a very popular investment practice, which we include in our experiments.
\subsection{Kelly Criterion}
\label{sec:kelly}
The Kelly criterion\rev{~\citep{kelly1956new, thorp2008kelly}} is based on the idea of expected multiplicative growth in the reinvestment setting (Section~\ref{sec:multiplicative}), so that a portfolio $\bm{f}$ is chosen such that the long-term value of the resulting, continuously reinvested, wealth $W_t$ is maximal (in an infinite horizon of $t$). Note that in this scenario we assume that the same portfolio is going to be presented at each time step. Due to its multiplicative nature, it is also known as the geometric mean policy, emphasizing the contrast to the arithmetic mean approaches based on the expected value.
The two can, however, be looked at similarly with the use of a logarithmic ``utility function'', transforming the geometric into the arithmetic mean, and the multiplicative into the additive setting, respectively. The problem can then be again expressed by the standard means of maximizing the expected value as
\begin{equation*}
\begin{aligned}
& \underset{\bm{f} \in \mathbb{R}^n}{\text{maximize}}
& & \EX[\log(\bm{O} \cdot \bm{f})]\\
& \text{subject to}
& & \sum_{i=1}^{n} f_i = 1, \; f_i \geq 0
\end{aligned}
\end{equation*}
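Since the objective is a non-negative combination of concave logarithms of affine expressions, the problem is concave and can be solved directly with an off-the-shelf conic solver. A minimal cvxpy sketch is given below, where the $i$-th row of the payoff matrix holds the total payoff of every asset (including the cash asset) if outcome $i$ occurs; the construction of this matrix is assumed to follow the problem definitions above.
\begin{verbatim}
import cvxpy as cp
import numpy as np

def kelly_portfolio(p, O):
    # p: probabilities of the n exclusive outcomes, shape (n,)
    # O: payoff matrix, O[i, j] = total payoff of asset j
    #    under outcome i (cash asset included), shape (n, n)
    p = np.asarray(p, dtype=float)
    n = len(p)
    f = cp.Variable(n, nonneg=True)
    growth = p @ cp.log(O @ f)               # E[log(O . f)]
    problem = cp.Problem(cp.Maximize(growth), [cp.sum(f) == 1])
    problem.solve(solver=cp.ECOS)
    return f.value
\end{verbatim}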
Note that, in contrast to MPT, there is no explicit term for risk here, as the notion of risk is inherently encompassed in the growth-based view of the wealth progression, i.e. the long-term value of a portfolio that is too risky will be smaller than that of a portfolio with the right risk balance (and similarly for portfolios that are too conservative).
The calculated portfolio is then provably optimal, i.e. it accumulates more wealth than any other portfolio chosen by any other strategy in the limit of $t$. However, this strong result only holds under considerably unrealistic assumptions~\citep{kelly1956new, thorp2008kelly, peters2016evaluating}. Similarly to MPT, we assume that the true probability distribution of game outcomes is known, and additionally we assume that:
\begin{enumerate}
\item we are repeatedly presented with the same games.
\item we play for an infinite amount of time.
\end{enumerate}
Despite the fact that these conditions are impossible to meet in practice, the Kelly strategy is very popular, and its various modifications (Section~\ref{sec:risk}) are prevalent among practical bettors.
\subsubsection{Quadratic Approximation}
\label{sec:quadraticapprox}
Exact numerical calculation of the Kelly strategy is often \rev{time-consuming}, especially when numerous runs through a large dataset of games are necessary. A practical approach to this issue has been proposed~\citep{busseti2016risk} based on a quadratic approximation of the Kelly's logarithmic utility using the Taylor series expansion. Let us first recall the following.
\begin{equation}
\log(1+x) = x - \frac{x^{2}}{2} + \dots
\end{equation}
Next, following~\citep{busseti2016risk}, we make the assumption, required by the Taylor approximation, that our net profits are not too far from zero, i.e. $\bm{\rho}\cdot{\bm{f}} \approx \bm{0}$, and express the logarithmic part of the Kelly criterion as follows.
\begin{equation}
\log(\bm{O} \cdot \bm{f}) = \log(1 + \bm{\rho} \cdot \bm{f})
\end{equation}
allowing us to proceed with the Taylor expansion as
\begin{equation}
\log(1 + \bm{\rho} \cdot \bm{f}) = \bm{\rho} \cdot \bm{f} - \frac{(\bm{\rho} \cdot \bm{f})^{2}}{2} + ...
\end{equation}
Now, taking only the first two terms from the series, we transform the expectation of the logarithm into a new problem definition as follows
\begin{equation}
\begin{aligned}
& \underset{\bm{f} \in \mathbb{R}^n}{\text{maximize}}
& & \EX[\bm{\rho} \cdot \bm{f} - \frac{(\bm{\rho} \cdot \bm{f})^{2}}{2}] \\
& \text{subject to}
& & \sum_{i=1}^{n} f_i = 1, \; f_i \geq 0
\end{aligned}
\end{equation}
We will further refer to this strategy as the ``Quadratic Kelly''.
Note that, interestingly, the problem can now be rewritten to
\begin{equation}
\begin{aligned}
& \underset{\bm{f} \in \mathbb{R}^n}{\text{maximize}}
& & \EX[\bm{\rho} \cdot \bm{f}] - \frac{1}{2}\EX[\bm{f}^T (\bm{\rho} \cdot \bm{\rho}^T) \bm{f}] \\
\end{aligned}
\end{equation}
corresponding to the original MPT formulation from Equation~\ref{eq:MPT} for the particular user choice of $\gamma=\frac{1}{2}$.
It follows from the fact that the geometric mean is approximately the arithmetic mean minus $\frac{1}{2}$ of variance~\citep{markowitz1952portfolio}, providing further insight into \rev{the} connection of the two popular strategies of Kelly and Markowitz, respectively.
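A minimal cvxpy sketch of the resulting Quadratic Kelly problem is shown below; the matrix of net profits per outcome and the outcome probabilities are assumed to be supplied by the user, following the notation above.
\begin{verbatim}
import cvxpy as cp
import numpy as np

def quadratic_kelly_portfolio(p, rho):
    # p:   outcome probabilities, shape (m,)
    # rho: net profit matrix, rho[i, j] = net profit of asset j
    #      under outcome i, shape (m, n)
    p = np.asarray(p, dtype=float)
    m, n = rho.shape
    f = cp.Variable(n, nonneg=True)
    profit = rho @ f                          # net profit per outcome
    objective = cp.Maximize(p @ profit - 0.5 * (p @ cp.square(profit)))
    problem = cp.Problem(objective, [cp.sum(f) == 1])
    problem.solve()
    return f.value
\end{verbatim}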
\section{Risk Management Practices}
\label{sec:risk}
The core issue with the mathematical strategies is that their calculations are carried out as if the true probability distribution over the outcomes was known. Moreover\rev{,} they are often sensitive to even \rev{the slightest} error in the estimates. Here we review simple remedies that have been proposed on top of the original strategies to manage the extra risk stemming from the underlying errors, as well as more sophisticated techniques incorporating the uncertainty of estimates directly into computation of \rev{the} strategies.
\subsection{Maximum bet limit}
\label{sec:limit}
Constraining the maximal wager to a fixed value $m$ is probably the most trivial risk-avoiding technique one can encounter, which is likely also why it is the most prevalent one in practice. Moreover, a maximum bet limit is often imposed by the bookmaker, too, constraining the risk they undertake w.r.t. each bettor. We thus include this empirical method in our portfolio to see whether capping the invested amount at a fixed threshold might actually improve the overall wealth progression of the existing strategies if properly tuned.
\subsection{Fractional Approaches}
\label{sec:fractional}
Fractioning is an example of a simple heuristic that is nevertheless very efficient in practice.
The main idea behind any ``fractional approach'' is to bet only a fraction $\omega$ of the calculated portfolio and leave the rest of $1-\omega$ in the cash asset for security. We define such a trade-off index $\omega$ for a portfolio as
\begin{equation}
\bm{f}_\omega = \omega \bm{f}_{1..n-1} + (1-\omega) \bm{f}_n
\end{equation}
where $\bm{f}_{1..n-1}$ corresponds to the risky part of the portfolio with stochastic assets and $\bm{f}_n$ is the cash asset, as introduced in Section~\ref{sec:def:outcomes}.
The fractional approach is mostly used with the Kelly strategy~\citep{maclean2011growth, thorp2011understanding}, where for $\omega = 0.5$ it is famously referred to as ``half-Kelly'' by practitioners. \rev{Nevertheless,} the choice of $\omega$ should depend on the actual distributions and preferences for risk. The same idea of taking only a fraction of the calculated portfolio can generally be applied to any strategy, including MPT, and it is overall useful whenever our estimates are erroneous.
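A fractional version of any computed portfolio can be obtained with a few lines of code; in the following sketch the cash asset is assumed to be the last entry of the portfolio vector, and its value is simply set so that the fractions again sum to one.
\begin{verbatim}
import numpy as np

def fractional_portfolio(f, omega):
    # f:     full portfolio over n assets, the last entry being cash
    # omega: fraction of the risky part of the portfolio to keep
    f_frac = omega * np.asarray(f, dtype=float)
    f_frac[-1] = 1.0 - np.sum(f_frac[:-1])    # remainder goes to cash
    return f_frac
\end{verbatim}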
\subsection{Drawdown Constraint}
\label{sec:drawdown}
A drawdown represents a more involved technique that actually modifies the original optimization problem.
The idea of drawdown is to incorporate a special probabilistic constraint into the Kelly strategy so as to push the solution away from the more risky region near the ruin boundary. The choice of the boundary is then left to the user's preference as an input parameter into the optimization problem. The probabilistic boundary is expressed as the following constraint
\begin{equation}
P(W_t^{min} < \alpha) \leq \beta
\end{equation}
expressing that the probability of our wealth falling below $\alpha$ can be at most $\beta$.
For the Kelly criterion, following the calculations from~\citep{busseti2016risk}, the constraint is approximately satisfied if the following condition holds
\begin{equation}
\EX[(\bm{O} \cdot \bm{f})^{-\lambda}] \leq 1 \hspace{5pt} \text{where} \hspace{5pt} \lambda = \log(\beta) / \log(\alpha)
\end{equation}
which we can rewrite as
\begin{equation}
\log(\sum_{i=1}^{n} p_i \cdot (\bm{o_i}\cdot f_i)^{-\lambda}) \leq \log(1)
\end{equation}
which can be further simplified~\citep{busseti2016risk} into the following constraint
\begin{equation}
\log(\sum_{i=1}^{n} \exp(\log(p_i \cdot (\bm{o_i}\cdot f_i)^{-\lambda}))) \leq 0
\end{equation}
which we can finally use in a convex optimization program.
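A minimal cvxpy sketch of the Kelly problem with this drawdown constraint could look as follows; the payoff matrix is assumed to be constructed as in the plain Kelly formulation, and the DCP-compliant form of the constraint uses the log-sum-exp atom exactly as derived above.
\begin{verbatim}
import cvxpy as cp
import numpy as np

def kelly_drawdown_portfolio(p, O, alpha, beta):
    # Kelly portfolio with the approximate drawdown constraint
    # P(W_min < alpha) <= beta of Busseti et al. (2016).
    p = np.asarray(p, dtype=float)
    n = len(p)
    lam = np.log(beta) / np.log(alpha)        # lambda >= 0
    f = cp.Variable(n, nonneg=True)
    payoff = O @ f                            # O . f for each outcome
    growth = p @ cp.log(payoff)
    drawdown = cp.log_sum_exp(np.log(p) - lam * cp.log(payoff)) <= 0
    problem = cp.Problem(cp.Maximize(growth), [cp.sum(f) == 1, drawdown])
    problem.solve(solver=cp.ECOS)
    return f.value
\end{verbatim}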
\subsection{Distributionally Robust Optimization}
\label{sec:dro}
Distributionally robust optimization (DRO) can be understood as a stochastic game between a player and nature, where nature picks a distribution $P_r$ from some predefined ambiguity set of distributions $\bm{\Pi}$ so as to inflict maximum damage to the player's utility. This fits the setting of sports betting against a fixed-odds bookmaker quite naturally, since, given the opposing utilities of both, the bookmaker (nature) sets up the odds so as to minimize the player's chances of profit.
Generally, DRO is \rev{a} paradigm for decision making under uncertainty where:
\begin{enumerate}
\item The uncertain problem inputs are governed by a distribution that is itself subject to uncertainty.
\item The distribution is then assumed to belong to an ambiguity set $\bm{\Pi}$.
\item The ambiguity set contains all distributions that are compatible with the player's prior information.
\end{enumerate}
Being aware of the uncertainty in her own estimates $P_p = \hat{P_r}$, the player now modifies the optimization problem to account for the worst possible scenario within the given ambiguity set $\Pi$.
\begin{equation*}
\begin{aligned}
& \underset{\bm{f} \in \mathbb{R}^n}{\text{maximize}}
& & \underset{\bm{p} \in \bm{\Pi}}{\min} \sum_{i=1}^{n} {p_i} \cdot \log(\bm{O_i} \cdot \bm{f})\\
& \text{subject to}
& & \sum_{i=1}^{n} f_i = 1, \; f_i \geq 0
\end{aligned}
\end{equation*}
The ambiguity set $\bm{\Pi}$ can be defined in a number of ways. In~\citep{sun2018distributional}, multiple definitions are explored in connection to the Kelly strategy, such as Polyhedral, Ellipsoidal, or Divergence based. In this review\rev{,} we further narrow our focus to the polyhedral ambiguity set, referred to as the ``box'' uncertainty set, which can be defined as
\begin{equation}
\bm{\Pi} = \{p_i \hspace{3pt} | \hspace{3pt} |p_i - P_p(r_i)| \leq \eta \cdot P_p(r_i),~\sum_{i=1}^{n} p_i = 1, p_i \geq 0\}
\end{equation}
i.e. constraining each probability $p_i$ to differ by up to a factor of $\eta$ from the nominal player's estimate $P_p(r_i)$ of the probability of result $\mathrm{R}=r_i$.
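Solving the full min-max problem requires dualizing the inner minimization, as done in~\citep{sun2018distributional}. As a simpler illustration of the ambiguity set itself, the following sketch merely evaluates the worst-case growth rate of a fixed portfolio over the box uncertainty set, which amounts to a small linear program.
\begin{verbatim}
import cvxpy as cp
import numpy as np

def worst_case_growth(f, O, p_hat, eta):
    # Worst-case expected log growth of a FIXED portfolio f over the
    # "box" ambiguity set around the player's estimate p_hat.
    p_hat = np.asarray(p_hat, dtype=float)
    log_payoff = np.log(O @ np.asarray(f))    # constant once f is fixed
    n = len(p_hat)
    p = cp.Variable(n, nonneg=True)
    constraints = [cp.sum(p) == 1,
                   cp.abs(p - p_hat) <= eta * p_hat]
    problem = cp.Problem(cp.Minimize(p @ log_payoff), constraints)
    problem.solve()
    return problem.value, p.value
\end{verbatim}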
\section{Experiments}
\label{sec:experiments}
The main purpose of this review is to assess \rev{the} performance of the individual strategies (Section~\ref{sec:strategies}) and their risk modifications (Section~\ref{sec:risk}) in various realistic settings (Section~\ref{sec:definitions}) on real data.
We recall the used strategies, describe the datasets, evaluation protocol, and discuss the conducted experiments with their results.
The strategies for the experiments were chosen with the aim to represent the diverse portfolio of approaches occurring in practice, with the goal to provide an unbiased statistical assessment of their performance limits. The particular strategies chosen with their respective hyper-parameters are specified in Table~\ref{tab:strategies}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
\textbf{Strategy} & Description & {Hyperparameters}\\
\hline
AbsDisc & absolute discrepancy bet (Section~\ref{sec:strat:informal}) & None \\
\hline
MaxEvFrac & max. EV outcome with fractioning (Section~\ref{sec:strat:informal}) & $\omega \in [0,1]$ \\
\hline
Kelly & original Kelly strategy (Section~\ref{sec:kelly}) & None \\
\hline
MSharpe & original max. Sharpe ratio (Section~\ref{sec:MaxSharpe}) & None \\
\hline
KellyFrac & Kelly strategy with fractioning (Section~\ref{sec:fractional}) & $\omega \in [0,1]$ \\
\hline
MSharpeFrac & max. Sharpe with fractioning & $\omega \in [0,1]$ \\
\hline
KellyFracMax & Kelly with fractioning and limiting (Section~\ref{sec:limit}) & $\omega \in [0,1]$, $m \in [0,1]$ \\
\hline
MSharpeFracMax & max. Sharpe with fractioning and limiting & $\omega \in [0,1]$, $m \in [0,1]$ \\
\hline
KellyDrawdown & Kelly with the drawdown constraint (Section~\ref{sec:drawdown}) & $\alpha$, $\beta \in [0,1]$ \\
\hline
KellyRobust & Kelly with distributionally robust optimization (Section~\ref{sec:dro}) & $\eta \in [0,1]$ \\
\hline
\end{tabular}
\end{center}
\caption{Evaluated strategies and their hyperparameters}
\label{tab:strategies}
\end{table}
\subsection{Datasets}
\label{sec:datasets}
We collected 3 datasets with different properties from 3 different sports: horse racing, basketball, and football, each containing a significant number of ``matches'' \rev{(races and games)} for statistical evaluation. Each of the datasets is further accompanied by realistic model predictions tuned specifically for each domain. Since our focus here is purely on the betting strategies, we do not elaborate on the models in detail beyond their predictive performances, which naturally influence the performance of the strategies, too.
For each of the datasets, we present the following key properties.
\begin{itemize}
\item $size$ - Dataset size (i.e. \rev{the} number of matches).
\item $acc_b$ - Accuracy of the bookmaker $b$.
\item $acc_p$ - Accuracy of the player $p$ (i.e. the predictive model).
\item $n$ - Number of possible match outcomes ($n=|R|$).
\item $odds$ - Range of the offered odds.
\item $margin$ - Average margin present in the odds.
\item $A_{KL}$ - Kullback-Leibler advantage of the player.
\end{itemize}
The $A_{KL}$ is a statistical measure of \rev{the} difference of the predictive performances (\rev{cross-entropy}) of the player and the bookmaker, respectively. The metric was chosen as it plays a key role in \rev{the} performance of the original Kelly strategy, where the growth of profit can be proved directly proportional to the KL advantage~\citep{cover2012elements}.
\subsubsection{Horse Racing}
\label{sec:horses}
The data for horse racing were collected from the Korean horse racing market (KRA) and provide $2700$ races. The target market of the dataset is the ``win pool'', representing betting on the horse winning the race. The schedule and participation of individual horses in the races vary considerably. Moreover, there is a varying total number of horses, and thus outcomes $n$, in each race, creating \rev{an} interesting challenge for the strategies. We thus treat each race as a completely independent investment opportunity and optimize the strategies accordingly. The model used was a form of conditional logistic regression over various features of the horses \rev{(Section~\ref{sec:related:model})}. The particular dataset properties are specified in Table~\ref{tab:horses}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c|c|c|}
\hline
\textit{size} & \textit{$acc_p$} & \textit{$acc_b$} & $n$ & $odds$ & $margin$ &$A_{KL}$\\
\hline
$2700$ & $0.512$ & $0.503$ & $\in [6, 16]$ & $\in [1.0, 931.3]$ & $0.2$ & $\approx 0.0022$ \\
\hline
\end{tabular}
\end{center}
\caption{Horse racing dataset properties}
\label{tab:horses}
\end{table}
The specifics of the horse racing dataset come mainly from the fact that it actually originates from a parimutuel market, meaning that the wagers are put into a shared pool from which a certain portion is removed as a profit for the house (margin). Nevertheless\rev{,} we convert it into the discussed fixed-odds setting by assuming the last available state of the money pool to get the possible payoffs/odds~\citep{hausch2008efficiency}. As a result, the ``bookmaker's'' estimate in this case is made up entirely of public opinion, and is noticeably less accurate. On the one hand, this provides space for statistical models to gain a predictive KL advantage; on the other hand, the margin is also considerably higher.
\subsubsection{Basketball}
\label{sec:basket}
The next domain we selected is basketball, for which we collected box score data from matches in the US National Basketball Association (NBA). The dataset consists of $16000$ games ranging from the year $2000$ to $2015$. The NBA league has a regular schedule of the matches, where each team plays repeatedly with every other team in \rev{so-called} ``rounds''. To emulate the market setting in a realistic fashion, we assume rounds as groups of $10$ scheduled matches to repeatedly appear on the market in parallel (Section~\ref{sec:def:parallel}).
The target market here was the ``money-line'', i.e. betting on the winner of each match. The specifics of the data then come from the fact that there are only 2 outcomes in the game, directly corresponding to the most basic \rev{coin-tossing} setup of the problem (Section~\ref{sec:definitions}).
The model used was a convolutional neural network based on detailed statistics of the individual players and teams~\citep{hubavcek2019exploiting}. The odds then come from the closing line of the Pinnacle\footnote{https://www.pinnacle.com/} bookmaker. Notice that in this case the model is not as accurate as the bookmaker, and is thus at a general KL disadvantage. The particular dataset properties are specified in Table~\ref{tab:basket}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c|c| }
\hline
\textit{size} &\textit{$acc_p$} & \textit{$acc_b$} & $n$ & $margin$ & $odds$ & $A_{KL}$\\
\hline
$16000$ & $0.68$ & $0.7$ & $2$ & $0.038$ & $\in [1.01, 41]$ & $\approx -0.0146$\\
\hline
\end{tabular}
\end{center}
\caption{Basketball dataset properties}
\label{tab:basket}
\end{table}
\subsubsection{Football}
\label{sec:football}
The football dataset consists of $32000$ matches collected from various leagues all over the world. The schedule in each football league is similar in spirit to that of \rev{the} NBA, and so we again assume the market setting with $10$ parallel games (Section~\ref{sec:def:parallel}). The target market was again money-line betting. The outcomes in football include a draw, resulting \rev{in} a moderate $n=3$ setting. Interestingly, the original dataset~\citep{dubitzky2019} contained merely the historical results of the matches, and the model has thus been built purely from score-derived features. Particularly, the model was a form of gradient-boosted trees learner, winning the 2017 Soccer prediction challenge~\citep{dubitzky2019}. The odds were again provided by \rev{Pinnacle but, this time, we} took the more \rev{favourable} opening line. Despite varying over different leagues, the overall margin is slightly lower than in basketball, and the model is at a slightly lower, yet still considerable, KL disadvantage. The particular dataset properties are specified in Table~\ref{tab:football}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c|c| }
\hline
\textit{size} &\textit{$acc_p$} & \textit{$acc_b$} & $n$ & $margin$ & $odds$ & $A_{KL}$\\
\hline
$32000$ & $0.523$ & $0.537$ & $3$ & $0.03$ & $\in [1.03, 66]$ & $\approx -0.013$\\
\hline
\end{tabular}
\end{center}
\caption{Football dataset properties}
\label{tab:football}
\end{table}
\subsection{Evaluation Protocol}
\label{sec:ex:protocol}
The models providing the probabilistic estimates were trained following the natural order of the matches in time, so that all of their estimates are actual future predictions, i.e. out-of-sample test outputs for matches unseen in the training phase.
For the actual optimization problems of the individual strategies, we have chosen to work with cvxpy~\citep{cvxpy} as the main optimization framework. For each strategy, we first solved the given problem using the Embedded Conic Solver (ECOS)~\citep{domahidi2013ecos}, and should a numerical problem arise\rev{,} we proceeded to solve the problem using the Splitting Conic Solver (SCS)~\citep{o2016scs}.
Since many of the chosen strategies (Table~\ref{tab:strategies}) contain hyperparameters to be set, we additionally tuned each of them for the best possible performance via grid search. The individual hyperparameter ranges for the grid search can be found in Table~\ref{tab:strategies}.
To provide an unbiased \rev{estimate} of their actual performance in practice, we also followed a strict evaluation protocol for each of the strategies. This means that we have (i) split each dataset into training and testing subsets, (ii) found the best hyperparameter setting on the training subset, and (iii) evaluated the fixed setting on the test subset.
To make the output profit measures (Section~\ref{sec:metrics}) more robust, both the training and the testing are evaluated by generating $1000$ separate ``runs'' through each subset, where the sequence of games is randomly reshuffled and $10\%$ of the games are randomly removed each time (the split between train and test always remains respected). We hence evaluate the properties of each strategy on $1000$ separate wealth investment trajectories through previously unseen games.
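The run generation itself is straightforward; a minimal Python sketch, with our own illustrative function and parameter names, could look as follows.
\begin{verbatim}
import numpy as np

def generate_runs(games, n_runs=1000, drop_frac=0.1, seed=0):
    # Each run reshuffles the sequence of games and randomly removes
    # 10% of them, as described in the evaluation protocol.
    rng = np.random.default_rng(seed)
    n_keep = int(round((1.0 - drop_frac) * len(games)))
    runs = []
    for _ in range(n_runs):
        idx = rng.permutation(len(games))[:n_keep]
        runs.append([games[i] for i in idx])
    return runs
\end{verbatim}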
\subsubsection{Hyperparameter Selection}
\label{sec:hyperpar}
To choose the best possible strategy setting on the train set, we looked for hyperparameters with the following criteria
\begin{equation*}
\begin{aligned}
& {\text{maximize}}
& & median(\bm{W_{f}}) \\
& \text{subject to}
& & Q_{5} > 0.9
\end{aligned}
\end{equation*}
i.e. we always chose the setting that reached the maximum median final wealth, subject to the constraint that the $5\%$ quantile of the final wealth stays above $0.9$, i.e. that no more than $5\%$ of the wealth trajectories end below $90\%$ of the initial bankroll. Hyperparameter settings that did not meet the required criterion were simply removed from consideration. While the presented hyperparameter selection criteria might seem somewhat arbitrary and could be argued with, our aim was to follow the natural desiderata of wealth progression for bettors in practice. That is, to mainly prevent the occurrence of ruin (``survival first''), and only then to maximize the potential profits for the typical (median) bettor.
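The selection itself reduces to a simple filter-and-maximize loop over the grid; a minimal Python sketch, assuming the final wealth values of all training runs have been collected for every hyperparameter setting, is given below.
\begin{verbatim}
import numpy as np

def select_hyperparameters(results):
    # results: mapping {setting: array of final wealths over the runs}
    best_setting, best_median = None, -np.inf
    for setting, final_wealth in results.items():
        if np.percentile(final_wealth, 5) <= 0.9:
            continue                      # violates the Q_5 > 0.9 criterion
        median_wealth = np.median(final_wealth)
        if median_wealth > best_median:
            best_setting, best_median = setting, median_wealth
    return best_setting
\end{verbatim}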
\subsubsection{Evaluation Metrics}
\label{sec:metrics}
For the actual final evaluation of the strategies on the test set, we chose a range of diverse metrics to provide more insights into the properties of the individual strategies and game settings. The metrics are as follows
\begin{itemize}
\item $median(W_f)$ - median final wealth position.
\item $mean(W_f)$ - mean final wealth position.
\item $min(W_i)$ - lowest wealth position.
\item $max(W_i)$ - maximal wealth position.
\item $sigma(W_f)$ - standard deviation of \rev{the} final wealth positions.
\item $ruin$ \% - ruin percentage of wealth trajectories
\end{itemize}
for which we define a $ruin$ situation as falling below $0.01\%$ of the initial bank $W_0$ at least once during the entire investment period. Note that, as opposed to the original definition of ruin in the Kelly strategy~\citep{kelly1956new}, we have chosen a small \textit{non-zero} threshold, since in practice there is a minimal amount of money below which one is effectively unable to place a bet, a constraint often present in the market.
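All of the metrics can be computed directly from the matrix of wealth trajectories; the following minimal sketch assumes the initial bankroll is normalized to $W_0 = 1$, so that the ruin threshold of $0.01\%$ corresponds to the value $10^{-4}$.
\begin{verbatim}
import numpy as np

def evaluate_trajectories(W, ruin_level=1e-4):
    # W: wealth trajectories, shape (n_runs, n_steps), with W_0 = 1
    W = np.asarray(W, dtype=float)
    W_final = W[:, -1]
    return {
        "median(W_f)": np.median(W_final),
        "mean(W_f)":   np.mean(W_final),
        "min(W_i)":    W.min(),
        "max(W_i)":    W.max(),
        "sigma(W_f)":  np.std(W_final),
        "ruin %":      100.0 * np.mean((W < ruin_level).any(axis=1)),
    }
\end{verbatim}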
\subsection{Results}
\label{sec:results}
Finally\rev{,} we present performances (Section~\ref{sec:metrics}) of the individual strategies (Section~\ref{sec:experiments}) over each of the datasets (Section~\ref{sec:datasets}). Apart from the evaluation metrics in the final state of wealth progression $W_{f}$, we present the summarized wealth progression trajectories for a selected ``best'' strategy with maximal median final wealth for each of the datasets, to demonstrate the overall bankroll dynamics. \rev{The evaluation metrics for horse racing, basketball, and football datasets are presented in Table~\ref{experiments:metrics:horses}, Table~\ref{experiments:metrics:basketball}, and Table~\ref{experiments:metrics:football}, respectively. The wealth progression trajectories for the best strategies are then displayed in
Figure~\ref{fig:horses}, Figure~\ref{fig:basket} and Figure~\ref{fig:football}, respectively.}
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c|c|c|}
\hline
\textit{\textbf{strategy}} & $median(W_f)$ & $mean(W_f)$ & $min(W_i)$ & $max(W_i)$ & $sigma(W_f)$ & $ruin$ \%\\
\hline
AbsDisc & 0.0019 & 0.03 & 4e-08 & 27.1 & 0.04 & 85.2 \\
\hline
MaxEvFrac & 0.86 & 2.13 & 2e-09 & 711 & 4.7 & 36.1 \\
\hline
\hline
Kelly & 4.11 & 15.6 & 7e-05 & 2167.8 & 59.8 & 0.6 \\
\hline
MSharpe & 3.92 & 17.8 & 9e-06 & 2231.1 & 48.3 & 12.1 \\
\hline
KellyFrac & 3.39 & 14.2 & 0.003 & 213.2 & 32.1 & 0 \\
\hline
MSharpeFrac & 3.28 & 16.9 & 8e-05 & 253.3 & 26.5 & 0.2 \\
\hline
KellyFracMax & 3.49 & 13.8 & 0.0057 & 168.1 & 29.3 & 0 \\
\hline
MSharpeFracMax & 3.41 & 15.2 & 0.0065 & 194.3 & 25.4 & 0 \\
\hline
KellyDrawdown & 3.3 & 13.7 & 0.009 & 112.4 & 22.4 & 0 \\
\hline
KellyRobust & 2.97 & 4.1 & 0.08 & 77.3 & 7.2 & 0 \\
\hline
\end{tabular}
\end{center}
\caption{Final wealth statistics of the strategies in the horse racing scenario (Section~\ref{sec:horses}).}
\label{experiments:metrics:horses}
\end{table}
\begin{figure}[h!]
\includegraphics[width=0.85\textwidth]{horse_box_reinvest.eps}
\centering
\caption{Wealth progression of the KellyFracMax strategy in the horse racing scenario (Section~\ref{sec:horses}).}
\label{fig:horses}
\end{figure}
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c|c|c|}
\hline
\textit{\textbf{strategy}} & $median(W_f)$ & $mean(W_f)$ & $min(W_i)$ & $max(W_i)$ & $sigma(W_f)$ & $ruin$ \%\\
\hline
Kelly & 9.1e-6 & 1.8e-05 & 1.9e-20 & 3312.2 & 1.7e-05 & 100 \\
\hline
MSharpe & 1.3e-06 & 5.1e-05 & 4.1e-21 & 2911 & 9.7e-06 & 100 \\
\hline
KellyFrac & 2.4 & 2.7 & 0.11 & 24.1 & 1.34 & 0 \\
\hline
MSharpeFrac & 1.24 & 1.97 & 0.002 & 19.6 & 0.85 & 0 \\
\hline
KellyFracMax & 2.3 & 2.5 & 0.13 & 20.9 & 1.27 & 0 \\
\hline
MSharpeFracMax & 1.2 & 1.7 & 0.008 & 12.1 & 0.56 & 0 \\
\hline
KellyDrawdown & 2.21 & 2.9 & 0.14 & 29.1 & 1.3 & 0 \\
\hline
KellyRobust & 1.39 & 1.46 & 0.23 & 10.9 & 0.45 & 0 \\
\hline
\end{tabular}
\end{center}
\caption{Final wealth statistics of the strategies in the basketball scenario (Section~\ref{sec:basket}).}
\label{experiments:metrics:basketball}
\end{table}
\begin{figure}[h!]
\includegraphics[width=0.85\textwidth]{basket_reinvest_fractional.eps}
\centering
\caption{Wealth progression of the KellyFrac strategy in the basketball scenario (Section~\ref{sec:basket}).}
\label{fig:basket}
\end{figure}
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c|c|c|}
\hline
\textit{\textbf{strategy}} & $median(W_f)$ & $mean(W_f)$ & $min(W_i)$ & $max(W_i)$ & $sigma(W_f)$ & $ruin$ \%\\
\hline
Kelly & 2.3e-09 & 5.2e-08 & 1.6e-21 & 5844.2 & 2.7e-07 & 100 \\
\hline
MSharpe & 1.8e-10 & 3.0e-07 & 5.9e-27 & 2617 & 4.2e-07 & 100 \\
\hline
KellyFrac & 10.05 & 11.8 & 0.03 & 182 & 9.7 & 0 \\
\hline
MSharpeFrac & 9.9 & 13.6 & 0.016 & 211 & 9.1 & 0 \\
\hline
KellyFracMax & 10.03 & 11.2 & 0.007 & 144 & 9.2 & 0 \\
\hline
MSharpeFracMax & 10.1 & 13.1 & 0.005 & 193 & 8.7 & 0 \\
\hline
KellyDrawdown & 10.25 & 12.4 & 0.09 & 122 & 9.3 & 0 \\
\hline
KellyRobust & 6.2 & 7.3 & 0.28 & 27.7 & 5.6 & 0 \\
\hline
\end{tabular}
\end{center}
\caption{Final wealth statistics of the strategies in the football scenario (Section~\ref{sec:football}).}
\label{experiments:metrics:football}
\end{table}
\begin{figure}[h!]
\includegraphics[width=0.85\textwidth]{football_reinvest.eps}
\centering
\caption{Wealth progression of the KellyDrawdown strategy in the football scenario (Section~\ref{sec:football}).}
\label{fig:football}
\end{figure}
Firstly, the results of our experiments confirm that the regularly used informal betting strategies (Section~\ref{sec:strat:informal}) are clearly inferior to all the formal strategies, in agreement with the previous reports~\citep{hubavcek2019exploiting}. Moreover, they often lead to ruin even in \rev{a} situation with a statistical model advantage, as reported for the horse racing dataset in Table~\ref{experiments:metrics:horses}, which is why we decided not to include them in the remaining scenarios.
As expected, the formal strategies based on Modern Portfolio Theory (MPT) (Section~\ref{sec:MPT}) and the Kelly criterion (Section~\ref{sec:kelly}) performed reasonably in the setting with \rev{a} statistical advantage $A_{KL}$ of having a more precise model. However, since they are based on unrealistic mathematical assumptions, their actual risk profile might be unexpected in practice. Using any of the proposed practices for additional risk management (Section~\ref{sec:risk}) generally led to a considerably lower volatility while keeping the wealth progression of a typical (both mean and median) bettor reasonably high. Also, following the mathematical properties of the pure form of both strategies, they both lead to certain ruin in scenarios without a statistical $A_{KL}$ advantage of the model, which is exhibited in practice, too (Table~\ref{experiments:metrics:basketball}, Table~\ref{experiments:metrics:football}).
On the other hand, a smart strategy modification can generate profits even in the statistically disadvantageous scenarios, as measured by the $A_{KL}$. Naturally, this does not hold universally and particular properties of the underlying models must be considered, too, since there are surely disadvantageous scenarios where no strategy can make profits by any means (Example~\ref{ex:coin1}).
The insights from the experiments regarding the discord between the approaches of MPT and Kelly roughly follow the intuitions behind the individual strategies. That is, the strategies based on the Kelly criterion (Section~\ref{sec:kelly}) result in a generally higher \textit{median} final wealth, while strategies based on the MPT (Section~\ref{sec:MPT}) result in a generally higher \textit{mean} final wealth, corresponding to the underlying expected value-based motivation. Interestingly, in the football dataset (Table~\ref{experiments:metrics:football}) the mean final wealth performance of MPT is slightly lower than that of the Kelly-based strategies. However, we should note that the hyperparameter selection criteria (Section~\ref{sec:hyperpar}) can also be considered slightly biased in \rev{favour} of the Kelly approaches.
From a practical perspective, the drawdown modification of the Kelly criterion (Section~\ref{sec:drawdown}) seemed to perform very similarly to the much less sophisticated fractional approach (Section~\ref{sec:fractional}), further supporting its popular use in practice. While the distributionally robust modification of Kelly (Section~\ref{sec:dro}) achieved generally the lowest final wealth scores, it was also the overall most stable strategy with the highest minimal final wealth. This is in complete accordance with its pessimistic underlying setting optimizing for the worst-case scenario, which might be appealing to highly risk-averse bettors.
\section{Conclusions}
\label{sec:conclusion}
In this experimental review, we investigated the two most prominent streams of betting investment strategies based on the views of the Modern Portfolio Theory and the Kelly criterion, together with a number of their popular modifications aimed at additional risk management in practice, where their original underlying mathematical assumptions do not hold. We tested the strategies on 3 large datasets from 3 different sport\rev{s} domains of horse racing, basketball, and football, following a strictly unified evaluation protocol to provide unbiased estimates of \rev{the} performance of each method while tuning their \rev{hyperparameters}.
The results of our experiments suggest \rev{the} superiority of the formal mathematical approaches over the informal heuristics, which are often used in practice; however\rev{,} the experiments also revealed their weaknesses stemming from the unrealistic mathematical assumptions, particularly the knowledge of the true probability distribution over the \rev{match} outcomes.
\rev{
Consequently, when used in their plain original form, the formal strategies, i.e. the maximum Sharpe and Kelly, proved infeasible in almost all practical scenarios with uncertain probability estimates. Particularly, the theoretically optimal strategies often led to ruin instead of maximal profit, demonstrating the need for the additional risk management practices.
}
\rev{The results of the subsequent modifications of the optimal strategies then suggested that reasonable trade-offs in wealth progression can be found in actual betting practice with the appropriate techniques, even in scenarios with worse model predictions than that of the bookmaker.}
\rev{Based on the experiments, we conclude that, for common practical purposes, the most suitable option out of the strategies reviewed seems to be the fractional Kelly, given that the fraction hyperparameter has been properly tuned to reflect the amount of uncertainty in each particular problem setting. The approach achieved the best, or close to the best, performance as evaluated by the chosen metrics in most of our experiments while being comparatively simpler than the other strategies. Our findings thus further support its common use in betting practice. The other common practice of setting a maximum bet limit was inconclusive as it improved the overall results in some domains (Table~\ref{experiments:metrics:horses}) while decreasing the profits in others (Table~\ref{experiments:metrics:basketball}).}
\rev{The distributionally robust Kelly strategy then proved to be the safest in all of the experiments, and can thus be suggested to extremely risk-averse practitioners. The second safest strategy was then to incorporate the drawdown constraint, which also proved quite efficient in trading off security for profit.}
\section*{Acknowledgments}
The authors acknowledge support by the Czech Science Foundation, grant no. 20-29260S.
Sports analytics has been an up-and-coming field of research among professional sporting organizations and academic institutions alike. With the surge in the collection of athlete data, the primary goal of such analysis is to improve athletes' performance in a measurable and quantifiable manner. This goal is in contrast to traditional coaching methods in which a coach relies solely on experience and methods that seem to work well. A more ideal situation would be to have coaches utilize both experience and data to better direct the training of their athletes. This practice has started to appear in sports where adequate data is readily available \cite{booth2019mathematical, 3pointBoom}. Unfortunately, adequate data is hard to obtain and is generally not easily available in most sports. This work is concerned with the automated collection of swimming data in a competition setting.
Previous works on this topic~\cite{hall2021detection,woinoski2021swimmer} have shown that the detection and tracking of swimmers is possible, however, they each face their own challenges. For example,~\cite{hall2021detection} assumes that video of swimmers is captured by a static camera and that the entire pool is visible. This is generally not the case, as the equipment and facilities to do so are expensive and limited. In~\cite{woinoski2021swimmer}, which does not assume a static camera, different drawbacks are observed. Long-term tracking of swimmers who leave the camera's field of view is difficult due to the challenges of re-identification. Furthermore, there is no trivial way to automatically map any collected swimmer analytics to any given swimmer in the field of view. To overcome such challenges, other automated analytics solutions~\cite{sportlogiq,hall2021detection} introduce field localization as a method for producing results that are more robust to the mentioned issues.
In the context of this work, pool localization can be characterized by a homography that maps a given frame to a base frame. An example can be seen in Figure~\ref{fig:homographicProjection} where the given frame is seen in Figure~\ref{fig:samplePoolIm} and the base frame is seen in Figure~\ref{fig:poolBaseHomography}. The projection of the given frame onto the base image can be seen in Figure~\ref{fig:exampleHomography}.
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.495\linewidth}
\centering
\caption{Base pool model}
\includegraphics[width=\linewidth,height=2.5cm]{poolModel.jpg}
\label{fig:poolBaseHomography}
\end{subfigure}
\begin{subfigure}[b]{0.495\linewidth}
\centering
\caption{8$\times$50 Pool}
\includegraphics[width=\linewidth,height=2.5cm]{sample.jpg}
\label{fig:samplePoolIm}
\end{subfigure}
\begin{subfigure}[b]{\linewidth}
\centering
\caption{Sample image transformed by human generated homographic projection to fit over the base pool image.}
\includegraphics[width=\linewidth]{exampleHomography.png}
\label{fig:exampleHomography}
\end{subfigure}
\caption{An example of pool localization characterized by a simple homographic projection.}
\label{fig:homographicProjection}
\end{figure}
Localization of the pool would allow a system to know what portion of the pool is being observed at any given time. If this is known, then the system would also know the position of any detected swimmer relative to the boundaries of the pool, and also the lanes in which they swim. With the successful completion of such a task, the mentioned problems can be overcome. This work presents two main contributions towards solving the above challenges, which we consider to be the beginnings of an automated pool localization method that can handle general pool images:
\begin{enumerate}
\item We present a pool model called \textit{base pool} with invariant key-points relevant for
swimming analytics.
\item We study detectability of such key-points in images with partial pool view, by training a deep model for such key-point detection.
\end{enumerate}
This paper is structured as follows. First, an overview of related work is presented in Section~\ref{sec:relatedWork}. In Section~\ref{sec:methods}, the method of pool localization and the details related to reproducing the results are presented. Section~\ref{sec:results} goes over how well the methods worked, the meaning of the results given, and what can be done to improve the results. Lastly, some finial thoughts for moving forward are presented in Section~\ref{sec:conclusion}.
\section{Related Work} \label{sec:relatedWork}
There are many methods for localization described in the literature, but most of those are geared toward object detection and localization~\cite{cadena2016past}. The problem of sports field localization is more specific, in the sense that many properties of the field to be localized are known \textit{a priori}; as a consequence, more assumptions can be made allowing for more elaborate solutions. Broadly, previous work on sports field localization can be divided into
the following two categories: hand-crafted methods, and deep-learning-based solutions.
There are many well-defined methods for extracting the lines, circles and ellipsoids that make up sports fields utilizing traditional image processing methodology~\cite{GonzalezWoods2018}. As a result, there are many hand-crafted methods for localizing a sports field
that build upon such methods~\citep{cuevas2020automatic,sharma2018automated,hadian2015fast,brown2007automatic}. The approach in~\citep{brown2007automatic}
utilizes SIFT features~\cite{lowe2004distinctive} in combination with the RANSAC algorithm~\cite{fischler1981random} as a baseline method to compare to. The methods in~\cite{cuevas2020automatic,hadian2015fast} utilize a myriad of different classical processing methods to extract the points and lines of the field of play and then use these extracted lines and points to produce a homography~\cite{hartley2004}, which effectively localizes the field of play. It should be noted that~\cite{sharma2018automated} utilizes deep learning to match a segmentation map in a dictionary of maps that define a particular homography, by which
the image being considered is localized. Thus, this method could also be categorized as a deep learning-based solution. However,
the traditional image processing methods also proposed in their work were more successful than the deep learning-based counterpart.
Many of the mentioned works produced very respectable results and thus give a strong argument for approaching the problem of pool localization with hand-crafted methodology.
Deep learning has become very popular in the last decade. Accordingly, there are many options for applying deep learning models to solve large-scale problems in computer vision.
Works from~\cite{homayounfar2017sports,fani2021localization,nie2021robust,citraro2020real} apply a variety of
methodologies that rely on deep learning to do the brunt of the localization work. The approach in~\cite{homayounfar2017sports} is considered by some as one of the first machine learning-based methods for sports field localization utilizing deep learning. They implement a segmentation network that separates the field pixels from the non-field pixels. Once completed, the resulting segmentation is utilized by another loss-function-based system to predict the vanishing points of the two sets of parallel lines that make the field boundaries of the field in the image in question. Once the vanishing points are calculated, the field can be characterized by a homography, and thus is localized. The work presented in~\cite{fani2021localization} is a comprehensive report on how to robustly localize a sports field from broadcast video, which contains many different views (zoomed in and out), and commercial breaks. The
data produced for this broadcast video consisted of homography transform parameters mapping each frame of the video to a base field model. The model employed in their solution was trained to take a frame and produce a vector that characterizes the frame's homography.
and the image in question. With enough key-points correctly detected in a given image, a homography can be computed, and thus, the given image is localized. Both works rely on variations of the widely utilized U-Net~\cite{ronneberger2015u} to produce segmentation maps or volumes in which each channel is associated with one key-point and represents the probability of finding that key-point in the input image. The point with the highest probability is chosen as the predicted location of the corresponding key-point. Once again, the above mentioned works perform very well and make the selection of a methodology for pool localization difficult.
Pool localization is a special case of general sports field localization.
However, to our knowledge, there is no work currently available that automatically localizes a swimming pool given an image with partial pool view, i.e., an image where only a portion of the pool is observed. The only reference on the topic known to us is~\cite{hall2021detection}; however, it focuses on pool localization in images of the entire pool, from a calibrated static camera with known internal and external camera parameters.
Besides this, the topic of swimming pool localization
is relatively untouched. Unfortunately, this also means that there is a severe lack of data to be utilized for research.
\section{Methods} \label{sec:methods}
Given the large body of related work on other sports, a key-point detection methodology similar to~\cite{nie2021robust,citraro2020real} is chosen as the preferred method of localization in this work. To implement such a method, the appropriate data is required. As key-point detection is being implemented, a model must be proposed such that consistent key-points are collected for any given pool. In addition, images of pools must be obtained that are sufficiently different such that the model can learn to generalize the detection of the proposed key-points. Once data is available, a deep learning model can be constructed and trained. With such a trained model, the detectability of the proposed key-points can be approximated by considering the key-point detection performance of the created model.
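For concreteness, once key-points detected in a frame have been matched to their known coordinates in the base pool model, the frame can be localized by estimating a homography, for instance with OpenCV's RANSAC-based estimator. The following minimal Python sketch illustrates this final step under the assumption that only fully invariant key-points (e.g. wall key-points) are used as correspondences; floating key-points, being invariant only vertically, would require a different treatment, and the function name and reprojection threshold are our own illustrative choices.
\begin{verbatim}
import cv2
import numpy as np

def localize_frame(detected_pts, base_pts):
    # detected_pts: (k, 2) pixel coordinates of key-points in the frame
    # base_pts:     (k, 2) coordinates of the same key-points in the
    #               base pool model; k >= 4 correspondences are required
    src = np.asarray(detected_pts, dtype=np.float32)
    dst = np.asarray(base_pts, dtype=np.float32)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, inliers
\end{verbatim}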
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{poolModel-labled.jpg}
\caption{Proposed pool model and respective key-point locations.}
\label{fig:poolModel}
\end{figure*}
\begin{table*}[ht]
\centering
\begin{tabular}{|p{0.12\linewidth}|p{0.05\linewidth}|p{0.83\linewidth}|}
\hline
\textbf{key-points} & \textbf{KP\#} & \textbf{Description}\\
\hline
Wall Left & [1, 11] & Defined as the intersection of the correspondingly numbered lane-rope and the left wall. Key-points 1 and 11 do not always exist.\\
\hline
Wall Left & 0 \& 12 & Defined as the bottom-left and top-left corners of the pool, respectively.\\
\hline
Wall Right & [1, 11] & Same as Wall Left but on the right side of the pool.\\
\hline
Wall Right & 0 \& 12 & Defined as the bottom-right and top-right corners of the pool, respectively.\\
\hline
Floating Left & [1, 11] & Defined as the intersection of the left side of a numbered lane-rope and the edge of the frame. Key-points 1 and 11 do not always exist.\\
\hline
Floating Left & 0 \& 12 & Defined as the intersection of the left side of the bottom and top walls, respectively, and the edge of the frame.\\
\hline
Floating Right & [1, 11] & Same as Floating Left but on the right side of the pool.\\
\hline
Floating Right & 0 \& 12 & Same as Floating Left but on the right side of the pool.\\
\hline
Bulkhead Left & [1, 11] & Defined as the intersection of the right side of a numbered lane-rope and an existing bulkhead. Key-points 1 and 11 do not always exist.\\
\hline
Bulkhead Left & 0 \& 12 & Defined as the intersection of the right side of the bottom and top walls, respectively, and an existing bulkhead.\\
\hline
Bulkhead Right & [1, 11] & Defined as the intersection of the left side of a numbered lane-rope and an existing bulkhead. Key-points 1 and 11 do not always exist.\\
\hline
Bulkhead Right & 0 \& 12 & Defined as the intersection of the left side of the bottom and top walls, respectively, and an existing bulkhead.\\
\hline
Wall Top & [0, 8] & Defined every 5\,m along the length of the pool on the top wall. Key-point 4 (T4) is not present when a bulkhead is present.\\
\hline
Wall Bottom & [0, 8] & Same as Wall Top but on the bottom wall of the pool.\\
\hline
\end{tabular}
\caption{Summary of the key-point locations shown in Figure~\ref{fig:poolModel}.}
\label{tab:keyPointDefinitions}
\end{table*}
\subsection{Base Pool Model}\label{subsec:poolModel}
The base pool model, seen in Figure~\ref{fig:poolModel}, defines where and what key-points should be identified in any given image. Humans are very good at recognizing objects in their environment that remain the same across different points of view; computers, unfortunately, are not. A set of possible points must therefore be defined such that they can be consistently recognized and learned by a key-point detection algorithm.
When constructing this key-point set, one must consider what constitutes a good key-point and for which types of pools such key-points should be defined. In this work, we consider 9 different types of pools, which can be categorized by two numbers using the notation ``$n \times m$'', where $n$ is the number of lanes and $m$ is the length of the pool. $m$ can take values of 25 and 50, known as short course meters (SCM) and long course meters (LCM), respectively. $n$ represents the number of lanes in the pool being localized and can take values of 6, 8, 10, 12, 16, and 20; $n$ can only be greater than 10 if $m$ is SCM. Pools with $n$ values of 12, 16, and 20 contain a bulkhead, seen in Figures~\ref{fig:UBCRightRes} and~\ref{fig:UBCLeftRes}, which separates one sub-pool from another; this is a common occurrence in SCM competitions. Given the possible pool types, we propose the pool model seen in Figure~\ref{fig:poolModel}, for which Table~\ref{tab:keyPointDefinitions} gives the definition of each key-point location.
The image in Figure~\ref{fig:poolModel} defines 96 different key-points which are unique within any pool setting considered in this work. For ease of communication, each of these 96 key-points can be assigned to one of the following classes: wall left, right, top, and bottom; bulkhead left and right; and finally floating left and right. All key-point classes have numbers, either in the range $[0, 12]$ for classes that represent lanes or $[0, 8]$ for classes that represent the length of the pool. Pools also tend to differ in the number of lane-ropes\footnotemark{} that divide the pool; for example, a ten-lane pool can have 9, 10, or 11 lane-ropes. As such, the key-points marked as bumpers in Figure~\ref{fig:poolModel} are explicitly considered as points that may or may not exist. Bumpers are defined as the lane-ropes that divide the wall from the outside lanes. In Figure~\ref{fig:SaanichRes} there are bumpers separating lanes 8 and 1 from the wall top and bottom. The floating key-points are different from traditional key-points utilized for homography creation: when considering their location in the base frame, they are invariant in the vertical direction only, that is, there are infinitely many locations in the horizontal direction where a floating point may lie. Note that key-points of the same class are roughly interchangeable; for example, wall left $2$ looks identical to wall left $3$. What differentiates such key-points is their location relative to the other key-points in their class and their count relative to the top and bottom walls. The locations and numbers of specific key-points are chosen such that they are most similar across all different pools, which allows the detection model to more easily recognize similar key-points in different pools. These thoughts considered, there are likely innumerable ways to select key-points and their locations; the proposed model is only one such enumeration. Lastly, while it is not known whether the chosen key-points are optimal in terms of detectability, they are essential in that they allow for a robust characterization of the pool in a given image.
\footnotetext{A lane-rope is a rope running the length of a pool, separating each lane from one another or the wall, if a lane-rope is separating the wall from a lane it is known as a bumper.}
\subsection{Data}
With the pool model defined, the next step is to collect images of pools for localization. In an ideal situation, examples suitable for training models are independent and identically distributed. However, as with most deep learning training data, collecting example images is not trivial: competition pools are not plentiful in a given area and are generally spread over large distances, which limited the number of pools that could be utilized for this work. With these details noted, video footage from five pools, in various configurations, was collected. Because consecutive frames of a video are highly dependent on one another, images were sampled from each video every 15-30 frames, depending on how the video was collected. In addition, all video footage was taken from a minimum of three maximally different viewpoints in the pool, which increased the independence of the images collected from one pool. The collected video is in landscape and portrait orientation, with a minimum resolution of 1080p (16x9) at 30 frames per second. Lastly, the collected videos collectively showcase all nine pool types mentioned in Section~\ref{subsec:poolModel}. The images were annotated utilizing the Computer Vision Annotation Tool (CVAT)~\cite{CVAT}. The amount and type of data collected can be viewed in Table~\ref{tab:dataDistabution}. Three main pool categories are depicted in this table: pools with six, eight, and ten lanes. Note that an eight-lane 50\,m pool can appear as an 8x50 or a 16x25 configuration depending on whether a bulkhead is present in the images; the same goes for six- and ten-lane pools. In total, 1,352 frames were used for training and 284 frames were used for testing.
\begin{table}[ht]
\centering
\begin{tabular}{|c|c|c|}
\hline
\textbf{Pool Type} & \textbf{Number of Images} & \textbf{Data set}\\
\hline
20x25 & 48 & Test\\
\hline
20x25 and 10x50 & 234 & Training\\
\hline
16x25 and 8x50 & 180 & Test\\
\hline
16x25 and 8x50 & 899 & Training\\
\hline
6x25 & 56 & Test\\
\hline
6x25 & 119 & Training\\
\hline
\end{tabular}
\caption{Summary of data used for training and testing}
\label{tab:dataDistabution}
\end{table}
\subsection{Key-point Detector}
The key-point detector utilized in this work is a slight variation of the popular U-Net~\cite{ronneberger2015u}, as utilized by~\cite{citraro2020real} and~\cite{nie2021robust}, which incorporates a ResNet-style encoder into the U-Net architecture. As in the mentioned works, the output of the key-point detector is a volume $V\in\mathbb{R}^{M \times N \times C}$ such that $M \times N$ is the resolution of the input and $C$ is the total number of possible key-points, that is, 96 in this pool model. It is important to note that the detector has no notion of key-point classes; the classes are defined simply for ease of communication. The model is trained to predict a distribution for each channel $C$ in $V$ such that each channel encodes the position of a predefined key-point. This distribution is enforced by making the prediction layer of the model a soft-max activation function~\cite{Goodfellow2016} over the spatial locations of each channel. If a key-point is not present in a given input, the associated target distribution for the corresponding channel is flat. If a key-point is present, the associated target distribution is a delta function at the location of the key-point in the frame. In contrast to~\cite{ronneberger2015u}, the output volume is the same size as the input, because many key-points in the input frames are found at the edges of the input. This modification to U-Net is implemented by zero-padding all convolutions such that they do not reduce the output resolution. The described model is implemented utilizing the TensorFlow Keras~\cite{tensorflow2015} package.
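For illustration, the following is a minimal sketch (in TensorFlow Keras, Python) of an encoder--decoder with zero-padded (``same'') convolutions and a per-channel spatial soft-max output head. The depth, filter counts, and input resolution are illustrative only, and the ResNet-style encoder and skip connections of the actual detector are omitted.
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_detector(input_shape=(288, 512, 3), n_keypoints=96):
    m, n = input_shape[0], input_shape[1]
    inp = layers.Input(shape=input_shape)
    # Encoder: zero-padded ("same") convolutions keep spatial sizes predictable.
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = layers.MaxPool2D()(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.MaxPool2D()(x)
    x = layers.Conv2D(128, 3, padding="same", activation="relu")(x)
    # Decoder: upsample back to the full input resolution.
    x = layers.UpSampling2D()(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D()(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    logits = layers.Conv2D(n_keypoints, 1, padding="same")(x)
    # Per-channel spatial soft-max: each of the C channels becomes a
    # probability distribution over the M x N pixel grid.
    flat = layers.Reshape((m * n, n_keypoints))(logits)
    flat = layers.Softmax(axis=1)(flat)
    out = layers.Reshape((m, n, n_keypoints))(flat)
    return Model(inp, out)
\end{verbatim}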
\subsubsection{Detection Accuracy and Optimization Loss}
To train and quantifiably evaluate the key-point detection model, the following equations must be defined. Because the key-point detection model predicts distributions, the optimization function utilized in this work is the cross-entropy loss~\cite{Goodfellow2016}, defined in Equation~\ref{equ:crossEntropyLoss}, where $y^{(j)}, h^{(j)} \in\mathbb{R}^{M \times N}$ are the target and predicted distributions of channel $j$, and $y_i^{(j)}$, $h_i^{(j)}$ denote their $i$-th elements. In this implementation, the losses of all channels in the volume are summed with equal weight to create the final loss function for a given input image.
\begin{equation} \label{equ:crossEntropyLoss}
L(y, h) = -\sum_{j=1}^{C}\sum_{i=1}^{N \times M} y_i^{(j)} \ln h_i^{(j)}
\end{equation}
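Assuming the target and predicted volumes are stored as tensors of shape $(M, N, C)$, with each channel normalized to a spatial distribution, the loss in Equation~\ref{equ:crossEntropyLoss} can be sketched as follows; the small constant guarding the logarithm is an implementation detail and not part of the equation.
\begin{verbatim}
import tensorflow as tf

def keypoint_cross_entropy(y_true, y_pred, eps=1e-12):
    # y_true, y_pred: (batch, M, N, C); every channel is a spatial distribution
    # (flat if the key-point is absent, a delta at its pixel otherwise).
    # Sum the cross-entropy of all channels with equal weight and
    # average over the batch.
    ce = -tf.reduce_sum(y_true * tf.math.log(y_pred + eps), axis=[1, 2, 3])
    return tf.reduce_mean(ce)
\end{verbatim}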
The accuracy of detecting key-points for a given input is defined by Equation~\ref{equ:f1}, which is the harmonic mean of the precision and recall (Equations~\ref{equ:precition} and~\ref{equ:recall}) of the detections produced for a given image~\cite{Goodfellow2016}. In Equations~\ref{equ:precition} and~\ref{equ:recall}, $tp$ refers to the number of true positives, that is, key-points detected correctly by the model. $fp$ is the number of false positives, that is, the number of times the model predicted a key-point that was not present, or predicted one whose location was incorrect. Lastly, $fn$ is the number of times the model did not predict a key-point when in fact it should have. For all mentioned equations, the output is in the range $[0, 1]$, and the closer the result is to $1$, the better the performance. To obtain the accuracy across a set of images, the $F1$ scores of the individual images are averaged.
\begin{equation} \label{equ:f1}
F1 = 2 \cdot \frac{recall \cdot precision}{recall + precision}
\end{equation}
\begin{equation} \label{equ:precition}
precision = \frac{tp}{tp+fp}
\end{equation}
\begin{equation} \label{equ:recall}
recall = \frac{tp}{tp+fn}
\end{equation}
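As an illustration of one possible bookkeeping of the counts described above for a single image, consider the following sketch, where predictions and ground truth are dictionaries mapping key-point indices to pixel coordinates; the function name and the five-pixel tolerance are illustrative.
\begin{verbatim}
import numpy as np

def image_f1(pred, truth, tol=5.0):
    # pred, truth: dicts {keypoint_id: (row, col)} for one image.
    tp = fp = 0
    for k, (r, c) in pred.items():
        if k in truth and np.hypot(r - truth[k][0], c - truth[k][1]) <= tol:
            tp += 1            # correct detection within the pixel tolerance
        else:
            fp += 1            # spurious or misplaced detection
    fn = sum(1 for k in truth if k not in pred)   # missed key-points
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)
\end{verbatim}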
\subsubsection{Key-point Detection Training}
Key-point detection training was implemented utilizing the TensorFlow Keras API~\cite{tensorflow2015}. The training procedure was a standard pipeline utilizing a training and a test data set. The training deviated from the standard procedure in terms of memory management, which had to be considered to deal with the large tensors associated with the key-point detection methodology. This is because the input images had a resolution of $1080 \times 1920$; therefore, propagating tensors through the network, and even creating the expected target volume $V\in\mathbb{R}^{M \times N \times C}$ of floating-point numbers, requires a lot of memory. To deal with this problem the input images were scaled down by a factor of $3.75$.
In addition to the reduced resolution, the batch size was set to one. This is partly due to memory constraints, but also because the images differ in resolution.
Because of the small data set, some image augmentation was implemented in the form of random contrast augmentation. Other augmentation methods were not attempted because the positions of the labels are tied to their key-point definitions; applying augmentation methods that change the position of a key-point would not make sense for some key-points. Lastly, the optimizer utilized in the training process was the Adam optimizer~\cite{tensorflow2015} with a learning rate of $10^{-4}$.
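Under the stated settings, a training configuration could be sketched as follows, reusing the model and loss sketches given above; the contrast range is illustrative, the dataset pipeline is only hinted at, and the construction of the down-scaled target volumes is omitted.
\begin{verbatim}
import tensorflow as tf

SCALE = 3.75                                   # 1080 x 1920 -> 288 x 512
TARGET_SIZE = (int(1080 / SCALE), int(1920 / SCALE))

def preprocess(image, target_volume):
    image = tf.image.resize(image, TARGET_SIZE)
    image = tf.image.random_contrast(image, 0.8, 1.2)   # illustrative range
    return image, target_volume

model = build_detector(input_shape=TARGET_SIZE + (3,))
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss=keypoint_cross_entropy)
# model.fit(train_ds.map(preprocess).batch(1), epochs=30)  # batch size of one
\end{verbatim}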
\subsubsection{Detecting Predicted Key-points} \label{subsec:detectingPredictedKeyPoints}
Unlike the works in~\cite{nie2021robust} and~\cite{citraro2020real}, which utilized a confidence channel as one of the channels in the volume $V$ to determine which channels contain key-points, this work measures the entropy of each output channel, defined in Equation~\ref{equ:entropy}, where $y\in\mathbb{R}^{M \times N}$ is the distribution of a particular channel of resolution $M \times N$, equal to the input resolution. In each distribution, the model assigns the highest values to the location where it believes the key-point corresponding to that channel is located in the frame.
\begin{equation} \label{equ:entropy}
H(y) = -\sum_{i=1}^{M \times N} y_i \ln y_i
\end{equation}
Because each channel represents a distribution, if the entropy of a channel is lower than the entropy of a flat distribution multiplied by a constant $\beta \in [0,1]$, that is, $H(y) < \beta \ln(N \cdot M)$, then that channel is considered to be predicting the key-point it represents.
The predicted location of the key-point is the location of the maximum value in the corresponding output channel.
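This decision rule can be written compactly as follows (a small numpy sketch; the value of $\beta$ shown is only a placeholder):
\begin{verbatim}
import numpy as np

def detect_keypoints(volume, beta=0.9, eps=1e-12):
    # volume: (M, N, C) output of the detector; each channel sums to one.
    m, n, c = volume.shape
    flat = volume.reshape(m * n, c)
    entropy = -(flat * np.log(flat + eps)).sum(axis=0)   # H(y) per channel
    threshold = beta * np.log(m * n)                     # flat-distribution bound
    detections = {}
    for k in range(c):
        if entropy[k] < threshold:          # channel k predicts its key-point
            detections[k] = np.unravel_index(flat[:, k].argmax(), (m, n))
    return detections
\end{verbatim}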
\section{Results} \label{sec:results}
This section gives a summary of how the key-point detector performed. Firstly, the training is discussed, then the detection accuracy, and then a discussion of the reported results is presented.
\subsection{Training} \label{subsec:trainingResults}
Reported in Figure~\ref{fig:accuracyPlot} is the per-frame average accuracy over the entirety of each test sequence as a function of epochs. It is worth noting that many different numbers of epochs were tried; however, the accuracy levels off after roughly 30 epochs.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{accuracy_plot.jpg}
\caption{Training plot: per-frame average accuracy as a function of epochs.}
\label{fig:accuracyPlot}
\end{figure}
\subsection{Accuracy Results} \label{subsec:accuracyResults}
In this section, the accuracy of key-point detection is presented. Figure~\ref{fig:betaAccuracyPlot} details the per-frame average F1-Score over pools with different numbers of lanes as a function of $\beta$, for a correct-pixel tolerance of five pixels. Figure~\ref{fig:pixTolerance} shows the per-frame average F1-Score over pools with different numbers of lanes as a function of pixel tolerance. Table~\ref{tab:keyPointResults} shows the precision, recall, and F1-Score on the test sequences for each key-point class the model can predict. The ``Total'' column of the table reports the total number of key-points in the corresponding class that could be detected by the model. This column is necessary as some key-points are simply not present in some testing sequences; when this is the case, the corresponding row has a Total of zero and is otherwise marked with ``-''. In both Figure~\ref{fig:pixTolerance} and Table~\ref{tab:keyPointResults}, each pool type is given an optimal $\beta$ value based on the results seen in Figure~\ref{fig:betaAccuracyPlot}. Furthermore, a pixel tolerance of five was chosen in this table for two reasons. Firstly, the chosen key-point detection method is very similar to the one presented in~\cite{citraro2020real}, in which a pixel tolerance of five was also chosen for input images of the same size or smaller. Secondly, anything less than five pixels would start to introduce noise in the key-point data, because key-points that are physically close to the camera take up substantially more pixels than key-points farther away, and as such it is unclear what constitutes the exact position of a key-point. Lastly, Figure~\ref{fig:visualResults} gives five example images from the testing sequences to give a visual impression of how the model performs on the input data.
In Appendix~\ref{sec:AllData}, Table~\ref{tab:allData} gives the performance of the detector similarly to Table~\ref{tab:keyPointResults}; however, the results are broken down by individual key-point so that the performance of each point can be observed.
\begin{table}[ht]
\centering
\begin{tabular}{|l|c|c|c|c|}
\hline
\textbf{Class} & \textbf{Precision} & \textbf{Recall} & \textbf{F1}& \textbf{Total}\\
\hline
\multicolumn{5} {|c|} {\textbf{6 Lanes $\beta=0.15$}}\\
\hline
Wall Left & 0.1556 & 0.0726 & 0.0990 & 97\\
\hline
Wall Right & 0.2778 & 0.0494 & 0.0839 & 107\\
\hline
Floating Left & 0.9192 & 0.8621 & 0.8897 & 407\\
\hline
Floating Right & 0.9712 & 0.9118 & 0.9406 & 397\\
\hline
Bulkhead Left & - & - & - & 0\\
\hline
Bulkhead Right & - & - & - & 0\\
\hline
Wall Top & 0 & 0 & 0 & 74\\
\hline
Wall Bottom & 0 & 0 & 0 & 64\\
\hline
\multicolumn{5} {|c|} {\textbf{8 Lanes $\beta=0.9$}}\\
\hline
Wall Left & 0.5683 & 0.8301 & 0.6747 & 311\\
\hline
Wall Right & 0.7105 & 0.7892 & 0.7478 & 346\\
\hline
Floating Left & 0.7756 & 0.8941 & 0.8307 & 1510\\
\hline
Floating Right & 0.7901 & 0.9112 & 0.8463 & 1459\\
\hline
Bulkhead Left & 0.0580 & 0.2801 & 0.0961 & 377\\
\hline
Bulkhead Right & 0.0220 & 0.0909 & 0.0355 & 386\\
\hline
Wall Top & 0.3009 & 0.7893 & 0.4357 & 350\\
\hline
Wall Bottom & 0.1277 & 0.2530 & 0.1697 & 251\\
\hline
\multicolumn{5} {|c|} {\textbf{10 Lanes $\beta=0.7$}}\\
\hline
Wall Left & 0.0929 & 0.2682 & 0.1380 & 183\\
\hline
Wall Right & 0.4444 & 0.5140 & 0.4767 & 217\\
\hline
Floating Left & 0.8235 & 0.8435 & 0.8333 & 315\\
\hline
Floating Right & 0.7350 & 0.8741 & 0.7986 & 297\\
\hline
Bulkhead Left & 0.1268 & 0.3708 & 0.1890 & 338\\
\hline
Bulkhead Right & 0.2161 & 0.3942 & 0.2791 & 358\\
\hline
Wall Top & 0.1996 & 0.2450 & 0.2200 & 301\\
\hline
Wall Bottom & 0 & 0 & 0 & 86\\
\hline
\end{tabular}
\caption{Key-point class accuracy for pools with different numbers of lanes, with a pixel tolerance of five pixels.}
\label{tab:keyPointResults}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{betaAccuracyPlot.jpg}
\caption{Average accuracy (F1) of the test sets vs different values of $\beta$, the control parameter selecting how confident the detector must be for a key-point prediction to be considered predicted. A prediction is considered correct if it is within five pixels of the ground truth.}
\label{fig:betaAccuracyPlot}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{localizationTolerancePlot.jpg}
\caption{Average accuracy (F1) of the test sets vs different correct localization tolerance values. The $\beta$ value is set at the optimal value of beta for each type of pool, that is, $0.15$, $0.9$, and $0.7$, for six, eight, and ten lanes respectively. }
\label{fig:pixTolerance}
\end{figure}
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{.49\linewidth}
\centering
\caption{16x25 F1 = 0.766}
\includegraphics[width=\linewidth]{testIm57_766.png}
\label{fig:CGACLondonRes}
\end{subfigure}
\begin{subfigure}[b]{.49\linewidth}
\centering
\caption{6x25 F1 = 0.750}
\includegraphics[width=\linewidth]{testIm168_750.png}
\label{fig:SFURes}
\end{subfigure}
\begin{subfigure}[b]{\linewidth}
\centering
\caption{8x50 F1 = 0.960}
\includegraphics[width=.9\linewidth]{testIm206_960.png}
\label{fig:SaanichRes}
\end{subfigure}
\begin{subfigure}[b]{\linewidth}
\centering
\includegraphics[width=\linewidth]{colourKey.JPG}
\label{fig:keyPointColourKey}
\end{subfigure}
\end{figure}
\begin{figure}[ht] \ContinuedFloat
\centering
\begin{subfigure}[b]{\linewidth}
\centering
\caption{20x25 Left F1 = 0.529}
\includegraphics[width=\linewidth]{testIm260_529.png}
\label{fig:UBCLeftRes}
\end{subfigure}
\begin{subfigure}[b]{\linewidth}
\centering
\caption{20x25 F1 = 0.477}
\includegraphics[width=\linewidth]{testIm242_477.png}
\label{fig:UBCRightRes}
\end{subfigure}
\caption{Visual results of the key-point detector}
\label{fig:visualResults}
\end{figure}
\subsection{Discussion} \label{subsec:discussion}
In this section, first, we discuss the results of training the key-point detector. Then, we look at the estimated detectability of the proposed pool key-point model by observing how well the proposed detector trains and performs on the collected pool sequences.
Observing the accuracy as a function of epochs in Figure~\ref{fig:accuracyPlot}, there is no sign of a performance decrease due to over-fitting. While this may be true, the overall accuracy is reasonably low, and the test accuracy is much higher than the training accuracy. This is uncommon, as the training data generally does better than the testing data. The reason for this large difference is likely that key-points are easier to detect in some pools than in others. In particular, it seems that six-lane pools are easier to detect key-points from than other pools because they have fewer key-points to detect and more of the pool can fit into the field of view. In comparison, a 16x25 pool has more key-points, and a much smaller fraction of the pool fits in the field of view. Referring to Table~\ref{tab:dataDistabution}, roughly 20\% of the testing sequences are from a six-lane pool, while 8\% of the training data is from a six-lane pool. The same phenomenon is observed with 8x50 pools, which have no bulkheads, compared to 16x25 pools, which have bulkheads and thus more key-points to detect.
Figure~\ref{fig:betaAccuracyPlot} displays the F1 Score of the model as a function of $\beta$, the control parameter selecting how confident the detector must be for a key-point prediction to be considered predicted. The plot shows that for roughly $0.1 < \beta < 0.995$ the model predictions are, quantitatively speaking, mostly the same. This means that values of $\beta > 0.995$ or $\beta < 0.1$ would result in worse F1 scores. Qualitatively, while the F1 performance of the model is roughly the same for $0.1 < \beta < 0.995$, higher values of $\beta$ tend to trade lower recall for larger precision, and lower values of $\beta$ show the opposite. This result is to be expected, as a higher $\beta$ corresponds to a higher entropy, which means less certainty about the given prediction.
Figure~\ref{fig:pixTolerance} shows that beyond a pixel tolerance of five pixels the F1 score changes at a noticeably different rate. This may indicate that the five-pixel tolerance suggested in~\cite{citraro2020real} is indeed a good value to choose as the tolerance for measuring key-point prediction quality.
In Table~\ref{tab:keyPointResults}, the detector performs best on most floating key-points, does reasonably well with wall left/right key-points, and struggles with the rest. There may be a few reasons for this.
Firstly, the lack of performance for wall bottom key-points is understandable. The wall closest to the camera occupies the smallest fraction of the field of view. This is due to the physics of cameras in general: the farther something is from the camera, the more of it can be captured. Accordingly, there is very little context for the model to reason about the location and class of the key-points present. Annotation of such key-points is easier, because the annotator can use temporal knowledge to find them. Furthermore, this may indicate that detectability is related to the relative fraction of the pool observed in the image. As such, the detectability of wall bottom key-points is low.
Higher performance was expected for the bulkhead points. Observe Figure~\ref{fig:UBCRightRes}, in which the bulkhead right key-points are detected less accurately. These key-points should be easy to detect. Observing the training data from a similar viewpoint, the bulkhead line that intersects the lane-ropes is almost always closely accompanied by the flags that cross the pool, which likely impeded the detector. To show that the bulkhead can be detected, Figure~\ref{fig:UBCLeftRes} gives examples of the model detecting bulkhead points. It also seems the model may have had trouble with the bulkheads due to a lack of data. Given adequate data, their detectability appears higher than that of the wall bottom key-points.
Overall, wall top key-points were detected poorly. F1 scores of $0$, $0.4357$, and $0.2200$ for six, eight, and ten lanes, respectively, do not constitute good performance. However, in one sequence they were detected well. Observe Figure~\ref{fig:SaanichRes}, in which the wall top key-points are detected well. The practiced eye will note that the lane-ropes carry markers that sometimes guide the location of the wall top and bottom key-points. The pool in Figure~\ref{fig:SaanichRes} has lane-rope markers that were very well placed, and it seems the model was able to pick up on these placements. Another explanation for the good wall top key-point detection is that the training data from this pool had double samples from the same location; as such, the model was trained on the same viewpoint as the test sequence. This may indicate that wall top key-points are more affected by changes in viewpoint than floating and wall left/right key-points.
Wall right and left key-points should be reasonably easy to detect, and for the most part, they are. The model can confuse them with bulkhead points, although that is not observed often. The main loss of performance in the wall left and right key-points is due to ordering mismatches, occlusions, and choosing the same point as different key-points. An example can be seen in Figure~\ref{fig:UBCRightRes}, in which the true lane-five wall right key-point is marked as key-point six, and as a result the rest of the key-points are incorrect.
Floating left and right points are by far the most reliably detected. Intuitively, they should be the easiest to detect, as they are the result of the most dominant lines in the image intersecting the edge of the frame. As mentioned for the wall left and right points, their ordering can sometimes be mismatched, and as a result an entire frame can be detected incorrectly. However, overall they are very detectable key-points.
The ``bumper key-points'', which are defined as the key-points resulting from the lane-ropes separating the outside lanes from the pool walls, were accounted for reasonably well. When a pool had bumpers, the model was able to detect their existence and account for them in the key-point ordering for the floating and wall left/right key-points. This is very important for the creation of a pool homography, because the existence of bumpers, or lack thereof, can change the perceived length of the pool. Examples of the model noticing bumpers can be seen in Figures~\ref{fig:SFURes} and~\ref{fig:SaanichRes}.
Overall, Figure~\ref{fig:visualResults} suggests the model mainly loses performance due to the ordering of key-points; that is, if the model misses a lane's key-point, it is more likely that the rest of the lanes are incorrect. The other cause of misdetection is simply placing key-points in the wrong place or not finding them at all. This may suggest that the architecture of the key-point detector needs to be changed to allow the model to properly reason about the relative locations of the key-points. It might also suggest that the model needs to see more of the pool to properly reason about what is going on.
\section{Conclusion} \label{sec:conclusion}
The purpose of this work is to create a starting point for the registration of partial pool images. To achieve this, the detection of key-points was proposed, from which a homography can be built, as seen in Figure~\ref{fig:exampleHomography}. A pool model was proposed which defines key-points within a general pool. After training a basic key-point detector to detect the proposed key-points, all but the wall bottom key-points, which are still recommended, seem like reasonable locations for key-points. Beyond the key-points proposed in this work, there do not seem to be many more well-defined locations for key-points in an image of a pool. For further work on this subject, a better detector should be considered. Model architectures that increase the receptive field of the model should be tested, and more data augmentation methods should be employed. The target function should be made more sophisticated such that the learning algorithm is rewarded for putting key-points close to the ground truth. Higher image resolutions should be used in training. Lastly, other methods of field localization, considered in Section~\ref{sec:relatedWork}, should be explored and compared in a meaningful manner to the results found in this work.
\begin{acks}
Thank you to Own the Podium for coordinating like minds in this research area and for funding the Automated Swimming Analytics project of which this research was a part. Thanks to Swimming Canada for supporting this research and for helping organize the collection of pool footage which made this work possible.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\balance
\section{Introduction}
\label{sect:intro}
Sports analytics and modelling have a long tradition among the statistical community, with initial works published back in the 1950s and 1960s.
For example, seminal works have been initiated in the bibliography in the most popular sports like
baseball \citep{Mosteller_1952, albert1992bayesian},
association football--soccer \citep{Reep_Benjamin_1967},
American football \citep{Mosteller_1970,Harville_1977},
and basketball \citep{Stefani_1980,Schwertman_etal_1991}.
The world wide web and recent technologies have given many scientists access to interesting sport-related data, which are now widely and freely available (see for example \url{www.football-data.co.uk} for association football and \url{http://www.tennis-data.co.uk/} for tennis).
Moreover, interesting new problems have been raised due to the big data that can be derived from in-play sensor and camera-driven technologies; see for example \cite{Metulini_etal_2017}
and \cite{Facchinetti_etal_2019} for applications in basketball.
Sports analytics is currently a fashionable and attractive topic of research with a growing community of both academics and professionals.
Regarding prediction modelling in team sports, there are two main types of response outcome:
(a) the win/draw/loss outcome, modelled by logistic or multinomial regression models and other similar approaches \citep{carpita2019exploring}, and (b) the goals or points scored by each team, which are modelled by sport-specific models depending on the nature of the game.
In this work, we focus on the second approach, where the response is richer in terms of information used for the estimation of team abilities and allows us to obtain a model with better prediction accuracy.
The biggest group of team sports is the one where the score is measured with a discrete number of goals (of equal value) such as association football, water polo, handball, and hockey (among others).
In such cases, Poisson log-linear models and their extensions are the most popular choices of models.
The historic timeline of models developed for such team sports includes the use of simple double Poisson models in the works of \cite{Maher_1982}, \cite{Lee_1997}, the extension of \cite{Dixon_Coles_1997} with the adjustment for $0-0$ and $1-1$ draws,
the diagonal bivariate Poisson model of \cite{karlis2003analysis}, the Poisson difference model \citep{Karlis_Ntzoufras_2009},
and the dynamic models of \cite{Rue_Salvesen_2000}, \cite{Owen_2011} and \cite{Koopman_Lit_2015}; see \cite{Tsokos_etal_2019} and references therein for further details and an up-to-date review.
Unlike what happens for other major team sports, modelling volleyball match outcomes has not been thoroughly addressed by statisticians and data scientists.
Early attempts to model volleyball data date back to \cite{Lee2004}, who analysed the effect of a team deciding to serve or to receive the service in the fifth set.
Concerning prediction models in volleyball, several authors considered implementing Markov chain models to estimate the winning probabilities of a set and a game; see \cite{Barnett_etal_2008} for a first attempt and
\cite{ferrante2014winning} for a more recent and complete treatment of the problem.
\cite{Miskin_etal_2010} used a Bayesian multinomial model based on a Markovian transition matrix to model each point and the effect of volleyball skills. A similar approach was used by \cite{Drikos_etal_2019} to analyse top-level international teams at several age categories.
\cite{Sepulveda2017} used a Markov chain model as a useful tool to analyse players' probability of attack in terms of team rotation.
A simpler alternative was based on logistic regression models for the probability of winning
a set \citep{Marcelino2009,Fellingham_etal_2013} or a point \citep{Miskin_etal_2010}.
Recently, \cite{Gabrio2020} proposed a Bayesian hierarchical model for the prediction
of the rankings of volleyball national teams, which also allows
the estimation of the results of each match in the league.
At the point level, \cite{Gabrio2020} used a double Poisson model component.
\cite{Sonnabend_2020} published an empirical study on the characteristics of beach volleyball, including details about the distribution of points.
He used a normal regression model to study the effect of several game characteristics (game heterogeneity, referees, home effect, tournament phase, the fact of winning/losing the previous sets, gender and age) on the difference of points in a set.
Finally, concerning the research direction of player performance evaluation, \cite{Mendes_etal_2008} used Bayesian multi-level models to analyse sports activities of elite adult Brazilian players while \cite{Hass2018} implemented a plus/minus approach in order to obtain volleyball players' evaluation metrics.
\begin{comment}
Notes
\cite{Lee2004} analysed the effect of a team deciding serve or to receive the service at the fifth set.
\cite{Barnett_etal_2008} used a Markov Chain model to calculate win probabilities and mean lengths. A feature of this model is that it predicts outcomes conditional on both the scoreboard and the serving team.
\cite{Mendes_etal_2008} used Bayesian multi-level models to model sports activities of elite adult Brazilian players.
\cite{Marcelino2009} modelled the probability of winning each volleyball set using a logistic regression model.
\cite{Miskin_etal_2010} implemented two methods to quantify skill importance for women's volleyball.
The first approach is based on a Markovian transition matrix and a Bayesian multinomial model, while the second is an implementation of a Bayesian logistic regression with response the point winning conditionally for each specific skill.
\cite{Fellingham_etal_2013} examine the relationship of the
speed of a set in volleyball with the outcome of the attack using a Bayesian logistic regression model.
\cite{ferrante2014winning} use Markov chain arguements to build the distribution of winning a set under the assumption of winning each point remains the same within each set and the events are independent to each each other.
\cite{Sepulveda2017} used a Markov chain model as a useful tool to analyse players' probability of attack in terms of team rotation from the previous player who attacks (transition probability).
\cite{Hass2018} implement the plus/minus approach in order to obtain player evaluation metrics of volleyball players.
\cite{Gabrio2020} propose a Bayesian hierarchical model for the prediction
of the rankings of volleyball national teams, which also allows
to estimate the results of each match in the league. We consider
two alternative model specifications of different complexity which
are validated using data from the women's volleyball Italian Serie A1
2017–2018 season.
\end{comment}
Unlike volleyball, in most sports (as in basketball and football, for example) there is a single performance outcome, namely the number of points or goals, which is measured cumulatively from the beginning to the end of the game.
In these situations, a model with the total goals or points as a response is required.
On the other hand, in volleyball, the winner is decided in two stages/levels of outcomes: sets and points within each set. Hence, the winner is the team that first wins three sets.
For this reason, the second level outcome, i.e. the total number of sets, is a random variable which ranges from a minimum of three to a maximum of five.
Each set is won by the team that first reaches a pre-specified number of points, which is equal to 25 for the first four sets and to 15 for the final tie-break set.\footnote{This point system was adopted in 1998, during the Men and Women's World Championships held in Japan (source: \url{https://ncva.com/info/general-info/history-of-volleyball/}).}
Nevertheless, the number of points required by the team winning the set may further vary depending on whether a margin of two points has been reached.
Hence, volleyball outcomes consist of a natural hierarchy of sets and points within sets, with both measurements being random variables.
In this work, we follow the approach of modelling both outcomes of volleyball: sets and points.
In this way, the response data are richer in terms of information, which enables us to estimate team abilities more accurately and to increase the prediction accuracy of our model.
In our perspective, the task of modelling volleyball match results should follow a top-down strategy, from the sets to the single points. Thus, defining the probability of winning a set is the first step;
building up a generative discrete model for the points realized in each set is the second step. Although following this kind of hierarchy is not mandatory, we maintain it in all our fitted models.
Hence, we propose a set-by-set statistical model for the points of the losing team, conditionally on the set result.
Another aspect to consider is the strength difference among the teams: weaker teams are of course not favoured when competing against stronger teams, and a parametric assumption about teams' skills is needed.
In the Bayesian approach, teams' abilities are
easily incorporated into the model by the use of weakly-informative prior distributions \citep{gelman2008weakly}: similarly to what happens for football models \citep{karlis2003analysis}, the abilities may regard both attack and defense skills, and, moreover, be considered as dynamic over the season \citep{Owen_2011}.
The rest of the paper is organized as follows. The main features of the game are presented in Section \ref{sec:feat}. In Section \ref{sec:models} we introduce the basic negative binomial model for volleyball outcomes. Model extensions are thoroughly presented in Section \ref{sec:extension}, whereas model estimation, goodness of fit diagnostics and out-of-sample prediction measures are detailed in Section \ref{sec:est}. MCMC replications for the selected negative binomial model are used in Section \ref{sec:pp} to assess its plausibility in comparison with the observed results and to reconstruct the final rank of the league. The paper concludes after a detailed discussion.
\section{The Features of the Game}
\label{sec:feat}
Volleyball is different from other invasion team sports (like football and basketball), since the two teams are separated and there is no contact between the players of the two competing teams.
It belongs to a category of net and ball sports (volleyball, footvolley, headis or sepak takraw, tennis, badminton, pickleball, table-tennis) and therefore it has some unique characteristics that cannot be modelled by using the approaches adopted in other sports such as the Poisson regression models commonly used in football.
Here we summarize these characteristics and, in what follows, we address these issues one by one.
\begin{enumerate}
\item The first and most important characteristic is that the main outcome of the game is split into two levels: the sets and the points inside each set.
Roughly speaking, a set is played until one of the two teams first wins 25 points; this team is the winner of the set. The game is played until a team wins 3 sets. Hence we have two levels of outcomes (sets and points) which are interconnected and should be modelled simultaneously.
\item Moreover, the sets in a volleyball game range from three to five and hence
it is reasonable to test the assumption that the points form repeated measures which are correlated across different sets.
The existence of repeated measurements of points needs to be addressed stochastically and tested within our modelling approach.
\color{black}
\item The points of the winning team are (almost) fixed by the design and the rules of the game. So, given that we know who won the set, the only outcome variability is reflected by the points of the team that lost the specific set.
\item An additional rule, which creates further complications, is that the winning team must have a margin of at least two points to win a set. So, conceptually, if two teams are close in terms of abilities, they could in principle play indefinitely until the required difference of two points is achieved.
\item Finally, the fifth set of the game is played to 15 points (and not to 25) and is called the tie-break. The margin of two points is also required for the tie-break.
\end{enumerate}
In this work, we deal with each of the unique characteristics of volleyball by adding a corresponding component to the model formulation. The resulting model is a unified approach for volleyball data and is unique in the literature.
To be more specific, we model the two response outcomes (sets and points) hierarchically: we use a binomial model for each set and, conditionally on the winner of the set, a negative binomial distribution for the points of the losing team, assuming $r=25$ or $r=15$ successes for normal sets and tie-breaks, respectively (features (a), (c) and (e)). We further truncate this distribution to deal with the margin of two points required to win each set (feature (d)), and we model the excess of points due to ties (sets with less than two points difference) using a zero inflated Poisson distribution (feature (d)).
Furthermore, we consider normal random effects to account for the correlation between sets of the same game (feature (b)).
Finally, we take into consideration the connection between sets and points by considering general team abilities in contrast to point or set specific team abilities (feature (a)).
As the reader might initially think, our approach is counter-intuitive and apparently in contrast with what a usual sports model would consider. However, this counter-intuitive logic is the main innovation of the model we propose.
By using this approach, our aim is to exploit the fact that we (almost) know the points of the winning team. So if we model the win/loss of each set in the first level of the model then, conditionally on the set winner, we can specify a sensible distribution for the points of the losing team (while the points of the winning team are specified deterministically).
On the other hand, the usual approach, i.e.\ directly modelling the number of points using a bivariate distribution, is more cumbersome and challenging due to the restrictions imposed by the game regulations.
\color{black}
In Section \ref{sec:models} which follows, we formalise the basic structure and assumptions of our proposed model while further considerations and extensions of the model are provided in Section \ref{sec:extension}.
\section{The Basic Model for Volleyball}
\label{sec:models}
\subsection{Truncated negative binomial model}
\label{sec:negbin}
Let $Y^A_{s}$ and $Y^B_{s}$ be the random variables of the points in set $s=1,2,\dots, S$ of two competing teams $A$ and $B$ playing at home and away stadium, respectively. Furthermore, $W_s$ is a binary indicator denoting the win or loss of the home team.
To begin with, assume for the moment that each set finishes at a fixed number of points (25 or 15, depending on the type of set);
then the points of the winning team are fixed and not random.
Hence, interest lies in the random variable $Y_s$, which denotes the number of points for the team losing the $s$-th set.
Concerning the observed realization of the points gained by the losing team, this will be obtained by
$$
y_s = w_s y^B_{s} +(1-w_s) y^A_{s}.
$$
So in our dataset, we will eventually model the data for two responses: the binary $W_s$ and the count variable $Y_s$.
Our model is built hierarchically. For the outcome of each set, we use a simple logistic regression model given by
\begin{eqnarray}
W_s &\sim& \mathsf{Bernoulli}( \omega_s ), \label{eq_pset0} \\
\mbox{logit}(\omega_s) &= & H^{set}+\alpha_{A(s)}-\alpha_{B(s)}, \label{eq_pset}
\end{eqnarray}
where $\alpha_T$ is a parameter capturing the ability of team $T$ to win a set (set abilities henceforth),
$A(s)$ and $B(s)$ are the home and away team indices, respectively, competing against each other in set $s$.
Now, conditionally on the winner of the set, we model the points of the losing team in each set using a negative binomial model (ignoring at the moment that the set may continue if the margin of difference is less than two points).
Hence, the model formulation will be now given by
\begin{equation}
Y_s | W_s \sim \mathsf{NegBin}(r_s, p_s){\mathcal I}( Y_s \le r_s-2 ),
\label{eq:model}
\end{equation}
which is the right truncated negative binomial distribution
with parameters $r_s$ and $p_s$. $\mathcal{I}(A)$ denotes the event indicator, equal to one if the event $A$ is true and zero otherwise.
The first parameter, $r_s$, is the number of successes (points here) required to finish the set and it is equal to 25 for sets 1--4, and equal to 15 for the last (fifth) set.
Mathematically this can be written as
\begin{equation}
r_s = 25 - 10 \times \mathcal{I} \left( R_s=5 \right),
\label{total_points}
\end{equation}
where $R_s$ is the sequential set number for the specific game $G(s)$.
Parameter $p_s$ is the probability of realizing a point for the team winning set $s$.
Equivalently, $q_s=1-p_s$ denotes the probability of realizing a point for the team losing set $s$. The right truncation has been fixed at $r_s-2$ (23 or 13) points, since this is the highest number of points that can be achieved by the losing team (under the assumption of no ties).
Moreover, the point success probability will be modelled as
\begin{eqnarray}
\eta_s &=& \mu+ (1-W_s)H^{points}+(\beta_{A(s)}-\beta_{B(s)})(1-2W_s),
\label{pointeta} \\
p_s&=& \frac{1}{1+e^{\eta_s}},
\label{point_prob}
\end{eqnarray}
where $\eta_s$ is the (fixed effects) linear predictor for the points of the losing team given by \eqref{pointeta}.
The constant $\mu$ is a common baseline parameter, $H^{points}$ is the point home advantage for the host team, and $\beta_{A(s)}, \beta_{B(s)}$ are the point abilities for teams $A(s)$ and $B(s)$, respectively.
Consider the first equation: the larger the difference between the abilities of team $A$ and
team $B$, $\beta_{A(s)}-\beta_{B(s)}$, the higher the expected number of points team $A$ will collect when losing a set.
Equivalently, in this case, the lower will be the number of points of team $B$ when losing a set. Hence the multiplier $(1-W_s)$ in Eq.~\eqref{pointeta} controls the presence of the home effect, while the multiplier $(1-2W_s)$ controls the sign of the difference in the abilities of the two teams (depending on which team is playing at home).
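To fix ideas, the following sketch (plain Python; the ability and home-advantage values are invented for illustration only) computes the set-winning probability of Equation~\eqref{eq_pset} and the point probability of Equations~\eqref{pointeta}--\eqref{point_prob} for a single set.
\begin{verbatim}
import math

def set_win_prob(H_set, alpha_home, alpha_away):
    # logit(omega_s) = H_set + alpha_home - alpha_away
    return 1.0 / (1.0 + math.exp(-(H_set + alpha_home - alpha_away)))

def point_prob(mu, H_points, beta_home, beta_away, w_s):
    # eta_s = mu + (1 - W_s) H_points + (beta_home - beta_away)(1 - 2 W_s)
    eta = mu + (1 - w_s) * H_points + (beta_home - beta_away) * (1 - 2 * w_s)
    return 1.0 / (1.0 + math.exp(eta))     # p_s = 1 / (1 + exp(eta_s))

# invented ability values, for illustration only
print(set_win_prob(0.3, 0.5, -0.2))               # home team favoured in the set
print(point_prob(0.0, 0.1, 0.1, -0.1, w_s=1))     # point prob. for the set winner
\end{verbatim}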
Before we proceed, let us focus for a moment on the untruncated negative binomial, for which the average number of points for team $A$ (evaluated if $W_s=0$) and team $B$ (evaluated if $W_s=1$) in the $s$-th set are, respectively:
\begin{align}
\begin{split}
E[ Y^A_{s}| W_s=0] &=r_s \exp \left \{ \mu + H^{points} + \beta_{A(s)}-\beta_{B(s)} \right \} = r_s \times M \cdot \xi_s \cdot e^{H^{points}}\\
E[ Y^B_{s}| W_s=1] &=r_s \exp \left \{\mu -\beta_{A(s)}+\beta_{B(s)} \right \} = r_s \times \frac{M}{ \xi_s }, \\
\mbox{where~} M &= e^{\mu} \mbox{~and~} \xi_s = \exp \left( \beta_{A(s)}-\beta_{B(s)} \right)~.
\end{split}
\label{eq_averages}
\end{align}
However, in this initial model formulation the team losing the set can reach at most $r_s-2$ points (in the case of no extra points), so we need to reconsider the expected number of points of the losing team (i.e.\ Eq.\ \eqref{eq_averages}) in the light of the upper truncation.
\cite{shonkwiler2016variance} reports the mathematical expression for the mean of the truncated negative binomial distribution, which in our case becomes:
\begin{align}
\begin{split}
E[ Y^A_{s}| Y^A_{s} \leq r_s-2, W_s=0]
&\ =E[ Y^A_{s}| W_s=0] - \frac{c^*_s}{p_s}\\
E[ Y^B_{s}| Y^B_{s}\leq r_s-2, W_s=1]
&\ = E[ Y^B_{s}| W_s=1] - \frac{c^*_s}{p_s},\\
c^*_s =\frac{(r_s-1)f_{NB}(r_s-1)}{F_{NB}(r_s-2; r_s, p_s)},\ & c^*_s>0,
\end{split}
\label{eq_averages_trunc}
\end{align}
where $f_{NB}$, $F_{NB}(x; r, p)$ are the probability mass function and the cumulative function, respectively, of negative binomial with parameters $r$ and $p$.
The interpretation is identical to the untruncated case: the higher is the point ability of a team, the higher will be the number of points when loosing a set. However, the untruncated mean is subtracted by the positive factor $c^*_s/p_s$, which forces the mean of the points of the loosing team to be lower or equal than $r_s-2$. For illustration purposes, Figure \ref{fig1} displays the expected number of points collected by the team loosing the set $s$ against the success point probability $p_s$: as the point probability for the team winning the set increases, the expected number of points for the team loosing the set decreases. In Section \ref{sec:zip} we will extend the model to allow for extra points after $r_s$ due to the required margin of two points difference.
The random variables of the points of each team, under the assumed model, can be now written as
$$
Y^A_s = W_s r_s + (1-W_s) Y_s \mbox{~~and~~}
Y^B_s = W_s Y_s + (1-W_s) r_s~.
$$
while the expected points of each set are given by
\begin{eqnarray}
E(Y^A_s) &=& \omega_s r_s + (1-\omega_s) r_s \xi_s M e^{H^{points}} - (1-\omega_s) \left(1+\xi_s M e^{H^{points}}\right)c_s^*, \nonumber\\
E(Y^B_s) &=& \omega_s r_s \frac{M}{\xi_s} + (1-\omega_s) r_s -
\omega_s \left(1+\frac{M}{\xi_s}\right) c_s^*,
\label{expected_values_TRNB}
\end{eqnarray}
where $c^*_s$ is given in \eqref{eq_averages_trunc} while $\xi_s, \, M$ are defined in \eqref{eq_averages}.
\color{black}
\begin{figure}
\centering
\includegraphics[scale=0.7]{Negative_expected.pdf}
\caption{Expected number of points collected by the team losing set $s$ against the success point probability $p_s$ of the winning team, for the truncated negative binomial with upper truncation at $r_s-2$. As the point probability for the team winning the set increases, the expected number of points for the team losing the set decreases.}
\label{fig1}
\end{figure}
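For completeness, the curve of Figure~\ref{fig1} can be reproduced with a few lines of code. The following sketch (in Python/scipy, although the models in this paper are fitted via Gibbs sampling in {\tt rjags}) evaluates the truncated mean of Equation~\eqref{eq_averages_trunc}, exploiting the fact that scipy's negative binomial counts the failures (points of the team losing the set) before the $r_s$-th success; the probability values shown are illustrative.
\begin{verbatim}
import numpy as np
from scipy.stats import nbinom

def truncated_mean(r, p):
    # E[Y | Y <= r - 2] for Y ~ NegBin(r, p): the untruncated mean r(1-p)/p
    # minus the correction term c*/p.
    c_star = (r - 1) * nbinom.pmf(r - 1, r, p) / nbinom.cdf(r - 2, r, p)
    return r * (1 - p) / p - c_star / p

# expected points of the team losing a regular set (r = 25)
for p in (0.52, 0.55, 0.60, 0.70):
    print(round(p, 2), round(truncated_mean(25, p), 2))
\end{verbatim}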
The Bayesian model is completed by assigning some weakly informative priors \citep{gelman2008weakly} to the set and point abilities, for each team $T=1,\ldots, N_T$:
\begin{align}
\begin{split}
\alpha_T^*, \beta_T^* &\sim \mathcal{N}(0,2^2), \\
\mu, H^{points}, H^{set} & \sim \mathcal{N}(0,10^6),
\end{split}
\label{eq_priors}
\end{align}
where $N_T$ is the total number of teams in the league.
In order to achieve identifiability, set and point abilities need to be constrained; in such a framework we impose sum-to-zero (STZ) constraints for both $\alpha$ and $\beta$ by centering the free parameters $\alpha_T^*$ and $\beta_T^*$ using the equations:
\begin{eqnarray*}
\alpha_T &=&\alpha_T^* - \overline{\alpha}^* \\
\beta_T &=&\beta_T^* - \overline{\beta}^* ,
\end{eqnarray*}
for $T=1,\ldots, N_T$, where $\overline{\alpha}^*$ and $\overline{\beta}^*$ are the means of the unconstrained abilities given by
$\overline{\alpha}^* = \frac{1}{N_T} \sum _{T=1}^{N_T}\alpha_T^*$ and
$\overline{\beta}^* = \frac{1}{N_T} \sum _{T=1}^{N_T}\beta_T^*$, respectively.
Note that the constrained abilities $\alpha_T$ and $\beta_T$ are the ones finally used in the model, which automatically satisfy the sum-to-zero constraint; this centering is applied at every iteration of the MCMC algorithm.
In terms of interpretation, the STZ parametrization implies that an average team has ability parameters close to zero.
\color{black}
\subsection{Using random effects to capture within game correlation}
\label{sec:random_effects}
We further introduce additive game random effects to capture the correlation induced by the set repetition, i.e., the fact that we have 3--5 measurements of the points of the losing team within each game.
Hence, the point probability in each set given by \eqref{point_prob} is slightly changed to
\begin{eqnarray}
p_s &=& \frac{1}{1 + e^{\eta_s+ \varepsilon_{G(s)}}}, \nonumber
\end{eqnarray}
where $\eta_s$ is the (fixed effects) linear predictor
as defined in \eqref{pointeta}
and $\varepsilon_{G(s)}$ are the game random effects which are used to capture any potential correlation across the measurements of the points within each game.
To complete the model formulation, we include a hierarchical step to assume exchangeability of the game random effects by
$$
\varepsilon_{g} \sim \mathcal{N}( 0, \sigma_\varepsilon^2 ),
$$
and a hyper-prior for the variance of the random effects
$$
\sigma_\varepsilon^2 \sim \mathsf{InvGamma}( a_\varepsilon, b_\varepsilon ),
$$
with fixed hyperparameters $a_{\varepsilon}$ and $b_{\varepsilon}$.
Small posterior values of $\sigma_\varepsilon$ indicate that there is no need for such game effects, while a large value indicates the need for unconnected (fixed) game effects (and possibly a bad fit of the model without any game effects). Figure \ref{fig2} displays the posterior marginal distribution of $\sigma_\varepsilon$ for the Italian SuperLega 2017/2018 data: there is little evidence of any such game effect here, as will be further investigated in Section \ref{sec:basicdic}.
\begin{figure}
\centering
\includegraphics[scale=0.44]{sigma_epsilon_re.pdf}
\caption{Estimated posterior marginal distribution of the standard deviation $\sigma_\varepsilon$ of the random effects $\varepsilon_{G(s)}$ for the Italian SuperLega 2017/2018 data.}
\label{fig2}
\end{figure}
\subsection{Zero inflated Poisson (ZIP) for the extra points}
\label{sec:zip}
To allow for the extra points arising due to a 24-deuce (or 14-deuce), the model proposed in Section \ref{sec:negbin} is extended by specifying a zero-inflated Poisson (ZIP) latent variable for the extra points collected by the team losing the set. The number of extra points is zero if the team losing the set does not reach 24 points, and
greater than zero otherwise.
So the model for the random variable of the points collected by the losing team is now defined as:
\begin{align}
\begin{split}
Z_s &= Y_s + O_s \\
Y_s &\sim \mathsf{NegBin}(r_s, p_s){\mathcal I}( Y_s \le r_s-2 ) \\
O_s &\sim \mathsf{ZIPoisson}( \pi_{s}, \lambda ).
\end{split}
\end{align}
The zero-inflated Poisson (ZIP) distribution for the number of extra points $O_s$ collected by the team losing the $s$-th set is then defined as:
\begin{equation}
f_{ZID}(o_s)= \pi_{s}{\mathcal I}(o_s=0)+(1-\pi_{s})f_P(o_s; \lambda),
\label{eq:zip}
\end{equation}
where $\pi_s$ describes the proportion of zeros and $f_P(x; \lambda)$ is the probability mass function of a Poisson distribution with rate parameter $\lambda$ evaluated at $x$.
In this section, we assume constant inflation probability for all games, but in Section \ref{cov_zip} we explore the possibility of expressing $\pi_s$ as a function of the team abilities.
Following \eqref{expected_values_TRNB}, under this model the random variables of the points are now given by
$$
Z_s^A=Y_s^A+O_s=W_s r_s + (1-W_s) Y_s+O_s \mbox{~and~} Z_s^B=Y_s^B+O_s=W_s Y_s + (1-W_s) r_s+O_s
$$
for the home and the away team, respectively.
Now the expected points are adjusted for the extra points, hence they are given by
$$
E(Z^A_s) = E(Y^A_s) + (1-\pi_s) \lambda \mbox{~and~}
E(Z^B_s) = E(Y^B_s) + (1-\pi_s) \lambda,
$$
where $E(Y^A_s)$ and $E(Y^B_s)$ are given in \eqref{expected_values_TRNB} and represent the expected number of points under the truncated model of Section \ref{sec:negbin}, which does not consider any extra points in each set.
\color{black}
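As a small illustration of this component, the following sketch evaluates the ZIP probability mass function of Equation~\eqref{eq:zip} and the resulting expected number of extra points $(1-\pi_s)\lambda$; the parameter values are purely illustrative and not estimates from the data.
\begin{verbatim}
import numpy as np
from scipy.stats import poisson

def zip_pmf(o, pi, lam):
    # point mass at zero mixed with a Poisson(lambda) component
    return pi * (o == 0) + (1 - pi) * poisson.pmf(o, lam)

def zip_mean(pi, lam):
    return (1 - pi) * lam          # expected extra points in a set

print(zip_pmf(np.arange(4), pi=0.85, lam=2.0))
print(zip_mean(0.85, 2.0))         # 0.30 extra points on average
\end{verbatim}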
\subsection{Model Comparisons for the Basic Model Formulation Using DIC}
\label{sec:basicdic}
Table \ref{tab:01} reports the DIC \citep{spiegelhalter2002bayesian} values and the effective number of parameters on the Italian SuperLega 2017/2018 data for a simple Poisson model and the basic models presented in Sections \ref{sec:negbin}--\ref{sec:zip}, computed from 3000 posterior draws obtained from 3 parallel chains of 1000 Gibbs sampling iterations each via the {\tt R} package {\tt rjags} \citep{rjags}. In the Poisson model, the rates have a log-linear specification depending on the point abilities. Models 1 and 2 use unrestricted data (with no explicit modelling of the ties) and both report higher DIC than the truncated negative binomial model with extra points (model 3).
As far as we can conclude from the DIC, using random effects to capture within-game correlation (model 4) improves the fit only slightly (DIC=4537.2 vs.\ 4537.7); see also the posterior marginal distribution of $\sigma_\varepsilon$ in Figure \ref{fig2} and the considerations in Section \ref{sec:random_effects}.
We therefore recommend the truncated negative binomial model allowing for extra points (model 3 in Table \ref{tab:01}; see Section \ref{sec:zip} for details), since it has similar predictive accuracy (in terms of DIC) to the corresponding random effects model (model 4 in Table \ref{tab:01}), while its computational burden and model complexity are considerably lower.
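For completeness, the DIC values in Table \ref{tab:01} can be reproduced in outline with the {\tt rjags} interface; the snippet below is only a sketch, assuming the model code is stored in a text file (the file name {\tt model3\_zip\_trnegbin.txt} is hypothetical) and that the data objects collected in {\tt volley\_data} have already been prepared in the workspace.
\begin{verbatim}
library(rjags)

## data list with the set-level quantities used in the model (names illustrative)
volley_data <- list(Y = Y, O = O, r = r, S = length(Y),
                    A = home_team, B = away_team, NT = 14)

fit <- jags.model("model3_zip_trnegbin.txt", data = volley_data,
                  n.chains = 3, n.adapt = 100)

## posterior draws: 3 chains x 1000 iterations, as described in the text
post <- coda.samples(fit, variable.names = c("beta", "mu", "Hpoints"),
                     n.iter = 1000)

## deviance information criterion and effective number of parameters
dic.samples(fit, n.iter = 1000, type = "pD")
\end{verbatim}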
\begin{table}
\caption{Details of the fitted models with different distributional assumptions for the points of the losing team for the Italian SuperLega 2017/2018 season.\label{tab:01}}
\begin{tabular}{|p{3cm} p{2.5cm} p{3.8cm} c c@{~~}|}
\hline
\emph{Point Distribution of the loosing team$^*$} & \emph{Equations}& \emph{Additional model details at the point level} & \emph{\# eff. param.} & \emph{DIC}\\
\hline
1. Poisson & log-linear model$^\dag$ & No upper limit & 29 & 4557.2 \\
2. Tr. Neg. binomial & 3--6 & Upper limit, no extra points & 29 & 4674.2 \\
3. ZIP Tr. Neg. bin. & 4--6 \& 10--11 & Upper limit \& extra points & 133 & 4537.7 \\
4. ZIP Tr. Neg. bin. & 4--5, 10--11 \& Section 3.2 & Model 3 \& game random effects& 151 & 4537.2 \\
\hline
\multicolumn{5}{p{14cm}}{\footnotesize\it MCMC sampling, 3000 iterations, {\tt rjags} package} \\
\multicolumn{5}{p{14cm}}{\footnotesize\it $^*$In all models we use: (a) a logistic regression model for the sets (Eq. 1--2);} \\
\multicolumn{5}{p{14cm}}{\footnotesize\it \hspace{2em} (b) Overall disconnected team abilities $\alpha_T$ and $\beta_T$ for the set and the point level.} \\
\multicolumn{5}{p{14cm}}{\footnotesize\it $^\dag Y_s|W_s \sim Poisson(\lambda_s)$ with $\log (\lambda_s/r_s)=\eta_s$ where $r_s$ and $\eta_s$ are given by \eqref{total_points} and \eqref{point_prob}, respectively.} \\
\end{tabular}
\end{table}
\section{Model extensions concerning team abilities}
\label{sec:extension}
\subsection{Attacking and defensive abilities}
\label{sec:attdef}
A common practice in many team sports (such as football, basketball and hockey) is to separately model the attacking and the defensive team abilities. This is also relevant for coaches and sports scientists
because modern sports are highly specialized: estimates of attack and defence abilities
give an indication of athlete/team performance.
Following this practice also in our proposed model for volleyball, we can assume the following decomposition of the point abilities of team $T, \ T=1,\ldots,N_T$:
\begin{align}
\begin{split}
\beta_{T}=&\ \beta^{\text{att}}_{T} + \beta^{\text{def}}_{T},\\
\end{split}
\end{align}
where the global point abilities $\beta$ are defined as the sum of the attack and defence abilities at the point level for each team. It is worth noting that assuming different attacking and defensive abilities at the set level would make the logit model~\eqref{eq_pset} not identifiable.
\color{black}
\subsection{Connecting the abilities}
\label{sec:connecting}
In equations~\eqref{eq_pset} and~\eqref{pointeta}, set and point abilities separately influence the set and point probabilities, respectively: conditionally on winning/losing a set, point abilities are then estimated from the probability of realizing a point. However, we could combine them by defining a global ability measure.
Here we consider a model where the abilities of winning a point also influence the probability of winning a set by a different scaling factor (controlled by the parameter $\theta$).
Hence the probability of winning a set is now given by
\begin{align}
\begin{split}
\mbox{logit}(\omega_s) = &
H^{set}+ v_1(\alpha_{A(s)}-\alpha_{B(s)}) + v_2\theta ( \beta_{A(s)}-\beta_{B(s)} ),
\end{split}
\label{eq_p3}
\end{align}
where $v_1, v_2$ are indicator variables, and $\theta$ summarizes the effect of the point abilities on winning a set. If $v_1=1, v_2=0$ we obtain the basic model of Section \ref{sec:negbin} with set probability as defined by Eq. \eqref{eq_pset}; if $v_1=0, v_2=1$ we assume connected point and set abilities where the set ability parameters are simply proportional to point abilities; whereas if $v_1 = v_2=1$ we assume connected point and set abilities and extra set specific abilities.
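In the {\tt JAGS} code, switching between these cases amounts to changing the two indicator constants, which can simply be passed as data. A sketch of the corresponding set-level block (a fragment to be embedded in the full model specification; all names are ours) is:
\begin{verbatim}
## set-level block with connected abilities
for (s in 1:S) {
  W[s] ~ dbern(omega[s])
  logit(omega[s]) <- Hset + v1 * (alpha[A[s]] - alpha[B[s]]) +
                     v2 * theta * (beta[A[s]] - beta[B[s]])
}
Hset  ~ dnorm(0, 1.0E-6)
theta ~ dnorm(0, 1.0E-6)   # scaling factor connecting point and set abilities
## v1, v2 supplied as data: (1,0) basic model, (0,1) connected abilities,
## (1,1) connected abilities plus extra set abilities
\end{verbatim}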
For illustration purposes, let us view everything from the perspective of team $A$ and consider the model with connected abilities and extra set abilities ($v_1=v_2=1$).
If two teams are almost equally strong in terms of points, then the point abilities
difference $\beta_{A(s)}-\beta_{B(s)}$ will be very small,
and the set probability will be solely driven by the extra set abilities.
Conversely, when two teams are expected to be quite far in terms of point performance, then the set winning probability will be mainly affected by the point performance.
In this generalised version of the model (case $v_1 = v_2=1$), the set abilities will capture diversions of teams in the set efficiency in comparison to the point efficiency.
For most of the teams, intuitively we do not expect an excess of set abilities and the probability of winning set will be mainly driven by a unified (set and point) ability.
However, a limited number of teams is expected to be more or less efficient at the set level than at the point level. Therefore, we have used posterior intervals and DIC to identify which teams behave differently in terms of sets and therefore require an extra parameter to account for these differences.
In Table \ref{tab:02} the DIC values and the effective number of parameters for each model are reported with respect to the Italian SuperLega 2017/2018.
According to this analysis, the ZIP truncated negative binomial model with connected abilities, extra set abilities only for Verona and Padova, and constant zero-inflation probability is the best fitting model.
\color{black}
\begin{center}
\begin{table}
\caption{Details of the fitted Logistic--ZIP Truncated Negative binomial models for the Italian SuperLega 2017/2018 season.\label{tab:02}}
\begin{tabular}{|p{1cm}p{2cm}p{6cm}cc|}
\hline
\emph{Model$^*$} & \emph{Connected team abilities} & \emph{Additional model features} & \emph{\# eff. param.} & \emph{DIC}\\
\hline
3. & No & --- & 133 & 4537.7 \\
\hline
5. & No & Separate attacking and defensive abilities at the point level & 147 & 4541.1 \\
6. & Yes & Connected abilities only ($v_1=0, v_2=1$) & 122 & 4524.3 \\
7. & Yes & $+$ extra set abilities ($v_1=v_2=1$) & 133 & 4536.3 \\
8. & Yes & $+$ extra set abilities for Verona & 123 & 4522.2 \\
9. & Yes & $+$ extra set abilities for Verona and Padova & 124 & 4521.1 \\
10. & No & $ \beta_{t}$ dynamic (point level) & 174 & 4569.5 \\
11. & No & $\alpha_{t}$ dynamic (set level) & 132 & 4530.1 \\
\hline
\multicolumn{5}{p{13cm}}{\footnotesize\it MCMC sampling, 3000 iterations, {\tt rjags} package} \\
\multicolumn{5}{p{13cm}}{\footnotesize\it $^*$In all models we use: (a) a logistic regression model for the sets (Eq. 1--2);} \\
\multicolumn{5}{p{13cm}}{\footnotesize\it (b) The Zero-inflated Poisson for extra points and the Truncated Negative binomial model was used for the points (ZIP Tr. Neg. bin.).} \\
\multicolumn{5}{p{13cm}}{\footnotesize\it (c) Constant probability and Poisson rate for the ZIP component of the extra points is assumed.} \\
\end{tabular}
\end{table}
\end{center}
\subsection{Dynamic abilities}
\begin{figure}[b!]
\centering
\includegraphics[scale=0.65]{NegBin_ZIP_dynamic_abilities.pdf}
\caption{Posterior medians and 95\% density intervals for the dynamic point abilities parameters $\bm{\beta}$ for the Italian SuperLega 2017/2018.}
\label{fig3}
\end{figure}
The performance of each team is likely to change within a season. Hence, temporal trends may be helpful for modelling the ability of each team within a season. A dynamic structural assumption for the ability parameters is a step forward. A natural choice is an auto-regressive model for the set and point abilities. For each team $T=1,\ldots, N_T$ and game $G=2,\ldots,N_G$ we specify:
\begin{align}
\begin{split}
\alpha_{T,G} \sim & \mathcal{N}(\alpha_{T, G-1}, \sigma^2_{\alpha})\\
\beta_{T,G} \sim & \mathcal{N}(\beta_{T, G-1}, \sigma^2_{\beta}),
\end{split}
\end{align}
whereas for the first match we assume:
\begin{align}
\begin{split}
\alpha_{T,1} \sim & \mathcal{N}(0, \sigma^2_{\alpha})\\
\beta_{T,1} \sim & \mathcal{N}(0, \sigma^2_{\beta}).
\end{split}
\label{dynamic_ab}
\end{align}
Analogously to Section \ref{sec:negbin}, sum-to-zero constraints are required for each match-day to achieve identifiability. The variance parameters $\sigma^2_{\alpha}, \sigma^2_{\beta}$ are assigned the following hyper-priors:
\begin{equation}
\sigma^{2}_{\alpha}, \sigma_\beta^2 \sim \mathsf{InvGamma}(0.001, 0.001 ).
\label{sigma_beta}
\end{equation}
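A sketch of how this random-walk structure can be written in {\tt JAGS} is given below; the sum-to-zero constraint is imposed here by centring the abilities within each match-day, and the gamma priors on the precisions correspond to the inverse-gamma priors of \eqref{sigma_beta}. Index and variable names are ours, and {\tt NG} denotes the number of match-days.
\begin{verbatim}
model {
  for (i in 1:NT) {
    alpha.star[i, 1] ~ dnorm(0, tau.alpha)
    beta.star[i, 1]  ~ dnorm(0, tau.beta)
    for (g in 2:NG) {
      alpha.star[i, g] ~ dnorm(alpha.star[i, g - 1], tau.alpha)
      beta.star[i, g]  ~ dnorm(beta.star[i, g - 1], tau.beta)
    }
  }
  ## sum-to-zero constraint within each match-day (by centring)
  for (g in 1:NG) {
    for (i in 1:NT) {
      alpha[i, g] <- alpha.star[i, g] - mean(alpha.star[1:NT, g])
      beta[i, g]  <- beta.star[i, g]  - mean(beta.star[1:NT, g])
    }
  }
  ## gamma priors on the precisions = inverse-gamma priors on the variances
  tau.alpha ~ dgamma(0.001, 0.001)
  tau.beta  ~ dgamma(0.001, 0.001)
  sigma.alpha <- 1 / sqrt(tau.alpha)
  sigma.beta  <- 1 / sqrt(tau.beta)
}
\end{verbatim}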
As it is evident from Table \ref{tab:02}, the assumption of dynamic ability parameters does not improve the fit of the model for the Italian SuperLega data we consider here. However, modelling dynamic patterns may be very useful in other leagues when considering distinct subsets of a league (such as regular season and play off). Figure \ref{fig3} displays posterior 95\% intervals for the dynamic point abilities for the Italian SuperLega 2017/18 data, whereas the corresponding marginal posterior distributions for the standard deviations $\sigma_{\alpha}, \sigma_\beta$ are plotted in Figure \ref{fig4}: the time variability is negligible in the Italian data we analyse in this paper.
Although the within-team variability of the point abilities may look high,
we believe that it is reasonable, since it corresponds only to a small portion of the total variability of the response measurement of this model's component (which is the logit of the proportion of points earned by the losing team after removing the extra points played due to ties).
To confirm this, we have calculated the proportion of points won by the losing team (after removing extra points played due to ties) which, on average, was found to be equal to $0.795 \pm 0.12$.
The corresponding logits of these proportions were found equal to $1.5$ on average with
$0.742$ standard deviation.
According to our posterior results, the posterior standard deviations of the point parameters were found to be approximately $0.10$--$0.15$ which corresponds only to 12--20\% of the total variability of the response measurement (as we stated previously).
\begin{figure}
\centering
\includegraphics[scale=0.3]{sigma_alpha_dynamic.pdf}~
\includegraphics[scale=0.3]{sigma_beta_dynamic.pdf}
\caption{Posterior marginal distribution for the standard deviations of the dynamic set and point abilities for the Italian SuperLega 2017/2018; for sets: $\sigma_\alpha$ (left plot) and for points: $\sigma_{\beta}$ (right plot).}
\label{fig4}
\end{figure}
\subsection{Modelling extra points as a function of team abilities}
\label{cov_zip}
In this section we explore whether the probability of observing extra points due to ties (i.e. the inflation component probability) can be written as a function of the set and/or point abilities.
In principle, we would expect a negative association between the team ability differences and the probability of playing extra points in each set.
Therefore, the closer the two teams are, the higher is the probability of being tied in each set and as a result to play for extra points.
For the probability $\pi_s$ we considered various versions of the linear predictor which can be summarized by:
\begin{align}
\begin{split}
\mbox{logit}(\pi_{s})= & m + \delta \, \varPhi\mbox{\footnotesize $\big(\alpha_{A(s)}- \alpha_{B(s)}\big)$} + \gamma \, \varPhi\mbox{\footnotesize $\big(\beta_{A(s)}- \beta_{B(s)}\big)$}\\
&m \sim \mathcal{N}(0, 1); ~\delta, \gamma \sim \mathcal{N}(0, 10^6); ~\lambda \sim \mathsf{LN}(0,1),
\end{split}
\label{eq:Zip_priors}
\end{align}
where $\mathsf{LN}(\mu, \sigma^2)$ denotes the log-normal distribution with parameters $\mu$ and $\sigma^2$, $m$ is a constant parameter, $\delta, \gamma$ are the coefficients associated with the set and point ability differences, and $\varPhi(\cdot)$ is a specific function of the set/point ability differences, depending on the model that we consider:
\begin{itemize}
\item constant probability by using the null function: $\varPhi(x) = 0$ ,
\item linear effect of ability differences by using the linear function: $\varPhi(x) = c_1x$,
\item linear effect of absolute ability differences by using the linear absolute function: $\varPhi(x) = c_1|x|$,
\item quadratic effect of ability differences by using the quadratic function: $\varPhi(x) = c_1 x + c_2x^2$,
\item quadratic effect of absolute ability differences by using the quadratic function of absolute values of $x$ : $\varPhi(x) = c_1 |x| + c_2x^2$,
\end{itemize}
where $c_1$ and $c_2$ are further parameters to be estimated.
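As an illustration, the linear absolute specification can be written in {\tt JAGS} as below (a fragment to be embedded in the full model; for identifiability in this sketch the constants are folded into $\delta$ and $\gamma$, and all names are ours):
\begin{verbatim}
## inflation probability as a function of the ability differences
for (s in 1:S) {
  logit(pi[s]) <- m + delta * abs(alpha[A[s]] - alpha[B[s]]) +
                  gamma * abs(beta[A[s]] - beta[B[s]])
}
m     ~ dnorm(0, 1)        # N(0,1) prior, as in the text
delta ~ dnorm(0, 1.0E-6)
gamma ~ dnorm(0, 1.0E-6)
\end{verbatim}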
In Table \ref{tab:02bis} the DIC values and the effective number of parameters for each model are reported with respect to the Italian SuperLega 2017/2018. No model improves the fit we obtain when using the constant Poisson rate for the ZIP component of the extra points (model 9).
\begin{center}
\begin{table}
\caption{Details of the fitted Logistic--ZIP Truncated Negative binomial models with different structure on the probability of extra points for the Italian SuperLega 2017/2018 season.\label{tab:02bis}}
\begin{tabular}{|lp{5cm}lcc|}
\hline
\emph{Model$^*$} & \emph{Structural assumption about the probability of extra points} & $\varPhi(x)$&\emph{\# eff. param.} & \emph{DIC}\\
\hline
\color{blue}
9. & Constant probability & $\varPhi(x)=0$ & 124 & 4521.1 \\
\color{black}
12. & Linear effects & $\varPhi(x)=c_1x$ & 125 & 4523.0 \\
13. & Linear absolute effects & $\varPhi(x)=c_1|x|$ & 125 & 4525.8 \\
14. & Quadratic effects & $\varPhi(x)=c_1x+c_2x^2$ & 126 & 4524.6 \\
15. & Quadratic absolute effects & $\varPhi(x)=c_1|x|+c_2x^2$& 126 & 4526.6 \\
\hline
\multicolumn{5}{p{13cm}}{\footnotesize\it MCMC sampling, 3000 iterations, {\tt rjags} package} \\
\multicolumn{5}{p{13cm}}{\footnotesize\it $^*$In all models we use: (a) a logistic regression model for the sets (Eq. 1--2);} \\
\multicolumn{5}{p{13cm}}{\footnotesize\it (b) The Zero-inflated Poisson for extra points and the Truncated Negative binomial model for points (ZIP Tr. Neg. bin.).} \\
\multicolumn{5}{p{13cm}}{\footnotesize\it (c) Connected team abilities and extra set abilities for Verona and Padova.} \\
\end{tabular}
\end{table}
\end{center}
\color{black}
\section{Analysis and Results of the Italian SuperLega 2017/2018}
\label{sec:est}
\subsection{Data and computational details}
Data come from the regular season of the Italian SuperLega 2017/2018 and consist of a seasonal sample of 680 set observations, for a total number of 182 matches and 14 involved teams.\footnote{Source: Webpage of the Italian SuperLega \url{https://www.legavolley.it/category/superlega/}}
Posterior estimates are obtained with the {\tt rjags} {\tt R} package (MCMC sampling from the posterior distribution using Gibbs sampling), for a total of 3000 iterations obtained from 3 parallel chains of 1000 iterations each and a burn-in period of 100.
Following the suggestions of \cite{gelman2013bayesian},
we monitored the convergence of our MCMC algorithms by checking the effective sample size of each chain parameter and by computing the Gelman-Rubin statistic \citep{gelman1992inference}, which was below the usual threshold of 1.1 for all the parameters (details given in Table 5).
In 39 matches out of 182 (21.4\%) the final winner was determined in the tie break (i.e. in the fifth set), whereas in 101 out of 680 (14.8\%) sets it was required to play for extra points in order to declare the winner of the corresponding set.
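The convergence checks mentioned above can be carried out with the {\tt coda} package on the {\tt mcmc.list} object returned by {\tt coda.samples}; a brief sketch is given below, where the model file name ({\tt model9\_final.txt}) and the data list {\tt volley\_data} are the hypothetical objects introduced earlier.
\begin{verbatim}
library(rjags)
library(coda)

fit  <- jags.model("model9_final.txt", data = volley_data,
                   n.chains = 3, n.adapt = 100)
post <- coda.samples(fit, variable.names = c("Hset", "Hpoints", "theta",
                                             "mu", "lambda", "m"),
                     n.iter = 1000)

effectiveSize(post)   # effective sample sizes (n_eff)
gelman.diag(post)     # potential scale reduction factors (below 1.1)
\end{verbatim}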
\subsection{Interpretation of the selected model}
\label{sec:final}
Here we focus on the analysis of the Italian SuperLega 2017/18 data using the model selected in Sections \ref{sec:basicdic} and \ref{sec:connecting} (model 9 in Table \ref{tab:02}), that is, the model with connected abilities for all teams and extra set abilities only for Verona and Padova.
The complete model formulation, including likelihood specification, priors and identifiability constraints, is summarized in Table \ref{tab:04}. Posterior estimates for the set home advantage $H^{set}$, the point home advantage $H^{points}$, the intercept $\mu$ and the ZIP parameters $\lambda, m$ are reported in Table \ref{tab:03}: there is a clear indication of home advantage which seems to be smaller for the set level (posterior median of 0.16, 95\% posterior interval marginally containing zero), and higher at the point level (posterior median of 0.20, 95\% posterior interval not containing the zero).
In terms of percentage change, this means that in a game between two teams of equal strength we expect that the home team will have $17\%$ (posterior 95\% interval: (0\%, 42\%)) and $22\%$ (posterior 95\% interval: (8\%, 40\%)) higher odds of winning a set and a point, respectively.
The scaling factor $\theta$ (posterior mean of 4.60) shows a very strong positive association between the point abilities and the probability to win the set, as assumed in Eq.~\eqref{eq_p3}. No evidence was found for the parameters $ \gamma$ and $\delta$ in Eq.~\eqref{eq:Zip_priors}, describing the influence of set and point abilities differences, respectively, on the probability of observing extra points; however, we feel these parameters could be beneficial for other datasets or other leagues.
The effective sample sizes ($n\_eff$) and Gelman-Rubin statistics ($\hat{R}$) appear to be quite satisfactory for each parameter.
\begin{table}
\caption{Posterior summaries for the set home $H^{set}$, the point home $H^{points}$, the connecting abilities scaling factor $\theta$, the intercept $\mu$, and ZIP parameters $\lambda, \ m$ for the Italian SuperLega 2017/18 using the ZIP truncated negative binomial model with connected abilities and extra set abilities for Verona and Padova (model 9 in Table \ref{tab:02}). Also reported: the effective sample size ($n\_eff$) and the Gelman-Rubin statistics ($\hat{R}$). \label{tab:03}}
\begin{tabular}{|lcccccccc|}
\hline
Description& Parameter & Mean & Median & sd & 2.5\% & 97.5\% & $n\_eff$ & $\hat{R}$\\
\hline
Set home advantage&$H^{set} $& 0.16 & 0.15 & 0.09 &-0.01& 0.33 & 2839 & 1 \\
Point home advantage & $H^{points}$ & 0.20 & 0.20& 0.06& 0.08& 0.34 & 1664 & 1 \\
Connecting abilities & $\theta$ & 4.60 & 4.52 &0.80 & 3.36 & 6.30 & 2310 & 1\\
Intercept& $\mu$ & 0.36 & 0.36 & 0.05 & 0.27 & 0.46& 2213 & 1 \\
ZIP Poisson rate& $\lambda$ & 3.97 & 3.97 & 1.07 & 3.45 & 4.52& 2115 & 1 \\
Tie probability intercept & $m$ & 2.12 & 2.13& 0.13& 1.87& 2.39 & 2460 &1 \\
\hline
\end{tabular}
\end{table}
\begin{small}
\begin{table}
\caption{Final model formulation for model 9 of Table \ref{tab:02}: likelihood, priors and identifiability constraints (24 parameters in total); STZ: Sum-to-zero constraints. \label{tab:04}}
\colorbox{gray!25}{\parbox{0.9\textwidth}{
\begin{eqnarray*}
\begin{split}
\underline{\bm{Likelihood}} & \\
\mbox{\textcolor{blue}{Total set points}} &\ \ \ \ Z_s = Y_s + O_s \\
\mbox{\textcolor{blue}{Losing team points}} &\ \ \ \ Y_s | W_s \sim \mathsf{NegBin}(r_s, p_s){\mathcal I}( Y_s \le r_s-2 ) \\
\mbox{\textcolor{blue}{Extra points}} &\ \ \ \ O_s \sim \mathsf{ZIPoisson}( \pi_{s}, \lambda )\\
\mbox{\textcolor{blue}{Home win indicator}} &\ \ \ \ W_s \sim \mathsf{Bernoulli}( \omega_s ) \\
\mbox{\textcolor{blue}{Logit of set win}} &\ \ \ \ \mbox{logit}(\omega_s) =
H^{set}+ v_1(\alpha_{A(s)}-\alpha_{B(s)}) + v_2\theta ( \beta_{A(s)}-\beta_{B(s)} ) \\
\mbox{\textcolor{blue}{ZIP: Log-odds of extra points}} &\ \ \ \ \mbox{logit}(\pi_{s})= m + \delta \, \varPhi(\alpha_{A(s)}- \alpha_{B(s)})+ \gamma \, \varPhi(\beta_{A(s)}- \beta_{B(s)} )\\
\mbox{\textcolor{blue}{Linear predictor for Points}} &\ \ \ \ \eta_s = \mu+ (1-W_s)H^{points}+(\beta_{A(s)}-\beta_{B(s)})(1-2W_s) \\
\mbox{\textcolor{blue}{Win. team point prob.}}&\ \ \ \ \ p_s = \frac{1}{1+e^{\eta_s}} \\
\mbox{\textcolor{blue}{Required success points}}&\ \ \ \ \ r_s = 25 - 10 \times \mathcal{I} \left( R_s=5 \right)\\[1.0em]
\underline{\bm{Constraints}} & \\[1em]
\mbox{\textcolor{blue}{Extra Set Abilities}} &\ \ \ \ \alpha_T =\alpha_T^*
\mbox{~with~} \alpha_T^* \equiv 0, \ T\ne 10,12 \\
\mbox{\textcolor{blue}{(Only for specific teams)}$^\ddag$} & \\
\mbox{\textcolor{blue}{STZ for Point Abilities}} &\ \ \ \
\beta_T =\beta_T^* - \overline{\beta}^*; \ \ \overline{\beta}^* = \frac{1}{N_T} \sum _{T=1}^{N_T}\beta_T^*\\
\mbox{\textcolor{blue}{Connecting Abilities Setup }} &\ \ \ \ v_1 = v_2 =1\\[1em]
\mbox{\textcolor{blue}{ZIP general probability function}} &\ \ \ \ \varPhi(x) = c_0x + c_1 |x| +c_2 x^2\\[1em]
\mbox{\textcolor{blue}{ZIP finally selected model}} &\ \ \ \ c_0=c_1 = c_2 =0\\[1em]
\underline{\bm{Priors}} & \\[-0.5em]
\mbox{\textcolor{blue}{Set Abilities (Padova \& Verona)}}
&\ \ \ \ \alpha^*_{10}, \alpha^*_{12} \ \sim \mathcal{N}(0,2^2) \\
\mbox{\textcolor{blue}{Point Abilities (Unconstrained)}}
&\ \ \ \ \beta_{T}^* \sim \mathcal{N}(0, 2^2)\\
\mbox{\textcolor{blue}{Constant for Points}}
&\ \ \ \ \mu \sim \mathcal{N}(0,10^6)\\
\mbox{\textcolor{blue}{Point Ability Coef. for Sets}}
&\ \ \ \ \theta \sim \mathcal{N}(0,10^6)\\
\mbox{\textcolor{blue}{Home effects}}
&\ \ \ \ H^{point}, H^{set} \sim \mathcal{N}(0,10^6)\\
\mbox{\textcolor{blue}{Constant for Extra Points}}
&\ \ \ \ m \sim \mathcal{N}(0,1)\\
\mbox{\textcolor{blue}{Poisson rate for ZIP}}
&\ \ \ \ \lambda \sim \mathsf{LN}(0,1)\\
\end{split}
\end{eqnarray*} }} \\[2em]
\parbox{0.9\textwidth}{\footnotesize\it $^\ddag$This parametrization is used in the final model, where extra abilities are considered only for two specific teams; In the general formulation with all set abilities then the STZ parametrization is recommended where $\alpha_T =\alpha_T^* - \overline{\alpha}^*$, $\overline{\alpha}^* = \frac{1}{N_T} \sum _{T=1}^{N_T}\alpha_T^*$ and $N_T$ is the number of teams under consideration.
Prior when all set abilities are used with STZ: $\alpha_T^* \sim \mathcal{N}(0,2^2)$.}
\end{table}
\end{small}
\begin{figure}[t!]
\centering
\includegraphics[scale=0.6]{NegBin_ZIP_connected_abilities.pdf}\\
\caption{ 95\% posterior intervals for set and point team abilities for the Italian SuperLega 2017/18 using the ZIP truncated negative binomial model with connected abilities (model 7 in Table \ref{tab:02}); intervals are ordered by the actual final ranking of each team.}
\label{fig5}
\end{figure}
\begin{figure}[b!]
\centering
\includegraphics[scale=0.6]{NegBin_ZIP_dummy_abilities.pdf}\\
\caption{95\% posterior intervals for set and point team abilities for the the Italian SuperLega 2017/2018 using the ZIP truncated negative binomial model with connected abilities and extra set abilities for Verona and Padova (model 9 in Table \ref{tab:02}); ordered by the actual final ranking of each team.}
\label{fig6}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.6]{NegBin_ZIP_dummy_overall_abilities.pdf}\\
\caption{
95\% posterior intervals for overall set team abilities $\alpha'_T=\alpha_T+\theta \beta_T$ for the Italian SuperLega 2017/18 using the ZIP truncated negative binomial model with connected abilities and extra set abilities only for Verona and Padova and constant ZIP probability for extra points (model 9 in Table \ref{tab:02}); Intervals are ordered by the actual final rank of each team.}
\label{fig7}
\end{figure}
The 95\% posterior intervals for set and point team abilities are displayed in Figures \ref{fig5} and \ref{fig6} for the model with connected abilities and extra set abilities for all teams (model 7 in Table \ref{tab:02}) and the corresponding model with extra set abilities only for Verona and Padova (model 9 in Table \ref{tab:02}), respectively.
From Figure \ref{fig5} (right plot) we can observe that the point abilities of Verona and Padova are slightly misaligned compared to the actual rank. Moreover, from the left plot we notice that all the 95\% posterior intervals of the extra set abilities contain the value of zero. However, for Verona and Padova we obtained a marginal effect in terms of set extra abilities. For this reason, we moved to the model with connected abilities and set extra abilities only for these two teams (Figure \ref{fig6}), by forcing all the remaining extra set abilities to be restricted to zero. In such a way, we reduced the model complexity by 12 parameters while we improved our final model in terms of predictive accuracy (see Section \ref{sec:connecting}).
Finally, for this model we can specify the actual effects (abilities) of each team on the winning probability of a set as:
$$
\alpha_T{'} = \alpha_T + \theta \beta_T.
$$
%
These overall set abilities are depicted in Figure~\ref{fig7}, where we can clearly see that they reflect the actual observed rankings, since the extra set abilities for Padova and Verona correct for any inconsistencies between the set and the point level concerning the efficiency of the teams. \\
\subsection{League reconstruction and predictive measures of fit}
\label{sec:pp}
To assess the in-sample predictive accuracy of our final model, we reconstruct the league in terms of final points and rank positions from the predictive distribution of the model. To do so, for each iteration of the MCMC sampling, we draw values from the model's sampling likelihood (see Table \ref{tab:04}) for the set of parameter values generated at that iteration, resulting in a new sample of match results obtained from the posterior predictive distribution of the model.
It is worth mentioning that predicting future matches in volleyball is not as easy as in other sports. First, we need to simulate the actual number of sets for each game using Equations~\ref{eq_pset0} and~\ref{eq_pset}; the set simulation terminates as soon as one of the two teams has won three sets.
Then, we calculate the number of league points the teams collect for their wins in each reconstructed league of each iteration.
For each MCMC iteration, each match is simulated in terms of both points and sets from their posterior predictive distribution. This means that a new replicated dataset ${y}^{rep}$ is sampled from:
\begin{equation}\label{eq_pp:distr}
p({y}^{rep}|y) = \int_{\Theta}p ({y}^{rep}|\theta) p(\theta|y) d\theta,
\end{equation}
where $p(\theta|y)$ is the posterior distribution for the parameter $\theta$, and $p({y}^{rep}|\theta)$ is the sampling distribution for the hypothetical values. In many
applications, the posterior predictive distribution in Eq.~\ref{eq_pp:distr} is not
available in a closed form, and therefore we sample from it by using MCMC methods. Algorithm 1 presents the entire procedure for the stochastic league reconstruction from the posterior predictive distribution: the entire league is simulated and the final points and rankings are obtained for further analysis.
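To make the simulation step concrete, the {\tt R} sketch below draws a single set from the posterior predictive distribution given one MCMC draw of the parameters, mirroring the inner loop of Algorithm \ref{algo1}; the truncated negative binomial is sampled exactly on its finite support, all object names are ours, and the parameter values in the example call are purely illustrative.
\begin{verbatim}
## simulate one set given one posterior draw (omega, p, pi0, lambda)
simulate_set <- function(omega, p, pi0, lambda, r = 25) {
  W <- rbinom(1, 1, omega)                 # 1 = home team wins the set
  ## truncated negative binomial on {0, ..., r - 2}: losing-team points
  support <- 0:(r - 2)
  Y <- sample(support, 1, prob = dnbinom(support, size = r, prob = p))
  ## zero-inflated Poisson extra points
  O <- if (runif(1) < pi0) 0 else rpois(1, lambda)
  pts_winner <- r + O
  pts_loser  <- Y + O
  c(home = if (W == 1) pts_winner else pts_loser,
    away = if (W == 1) pts_loser  else pts_winner,
    home_wins_set = W)
}

## example: evenly matched teams; pi0 roughly matches the observed 14.8% of
## sets that went to extra points, lambda set near its estimated posterior mean
simulate_set(omega = 0.5, p = 0.41, pi0 = 0.85, lambda = 3.97)
\end{verbatim}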
\begin{table}
\caption{Final reconstructed league for the Italian SuperLega 2017/18 using the ZIP truncated negative binomial model with connected abilities and extra set abilities for Verona and Padova (model 9 in Table \ref{tab:02}).
\label{tab:05}}
\begin{tabular}{|clcc|}
\hline
\emph{Predicted} & & \multicolumn{2}{c}{Points}\\
\cline{3-4}
\emph{Rank$^*$} & Teams & \emph{Expected (Actual)} & \emph{95\% CI}$^\dag$ \\
\hline
~1 (--)& Sir Safety Perugia & 66 (70) & (57--73) \\
~2 (--)& Cucine Lube Civitanova & 61 (64) & (51--69) \\
~3 (--)& Azimut Modena & 56 (60) & (45--66) \\
~4 (--)& Diatec Trentino & 52 (51) & (40--63) \\
~5 (--)& Calzedonia Verona & 49 (50) & (35--60) \\
~6 (--)& Revivre Milano & 45 (44) & (32--57) \\
~7 (--)& Wixo LPR Piacenza & 42 (42) & (29--54) \\
~8 (--)& Bunge Ravenna & 40 (41) & (27--52) \\
~9 (--)& Kioene Padova & 35 (35) & (22--48) \\
10 (--)& Gi Group Monza & 28 (28) & (17--42) \\
11 (--)& Taiwan Exc. Latina & 28 (25) & (18--40) \\
\;\:12 (+1)& Biosì Sora & 17 (13) & (8--26) \\
\;13 (--1)& Callipo Vibo Valentia & 16 (13) & (9--27) \\
14 (--)& BCC Castellana Grotte & 13 (10) & (5--21) \\
\hline
\multicolumn{4}{p{10cm}}{\footnotesize\it $^*$ Rank based on the expected predicted points; in brackets: the change in predictive ranking in comparison to the actual ranking} \\
\multicolumn{4}{p{10cm}}{\footnotesize\it $^\dag$ 95\% credible interval based on the 2.5\% and 97.5\% percentiles of the posterior predictive distribution.} \\
\end{tabular}
\end{table}
Table \ref{tab:05} reports the expected final points and the 95\% predictive intervals estimated from the MCMC algorithm along with the observed points and the actual team rankings: the points reported in the table are obtained by computing the median and the 95\% predictive intervals of the posterior predictive distribution of the final points, for each team.
The agreement between the actual and the expected number of points is remarkable since the difference is at most equal to 4 points, and the simulated positions mirror perfectly the final observed rank, with the exception of a switch in the expected positions between Sora and Vibo Valentia.
Generally speaking, the model's in-sample predictions mirror almost perfectly the observed results in terms of expected points and final rank positions.
Beyond that, it is straightforward to obtain a measure of model goodness of fit at the point level.
For each set $s,\ s =1,\ldots,S$, we denote by $d_s$ the set points difference $ Y^A_{s}- Y^B_{s}$, and by $\tilde{d}^{(t)}_{s}$ the corresponding points difference arising from the $t$-th MCMC replication, ${y}^{A\ rep(t)}_{s}-{y}^{B\ rep(t)}_{s}$. Once we generate replicated values from our model, it is of interest to assess how far they are from the actual data we observed. Figure \ref{fig8} displays the posterior predictive distribution of each $\tilde{d}^{(t)}_{s}$ (light blue) plotted against the true observed distribution of $d_s$: there is quite good agreement between the replicated distributions and the observed distribution, which is a further corroboration of the goodness of fit of our final model (the plot is obtained through the {\tt bayesplot} package \citep{bayesplot}, which always provides a continuous approximation for discrete distributions).
\begin{figure}
\centering
\includegraphics[scale=0.3]{ppc_densities.jpg}
\caption{Distribution of the observed set points differences $d_s= Y^A_{s}- Y^B_{s}$ (dark blue) plotted against the posterior predictive distribution of $\tilde{d}^{(t)}_{s}={y}^{A\ rep(t)}_{s}-{y}^{B\ rep(t)}_{s}$ (light blue) for the Italian SuperLega 2017/18 using the ZIP truncated negative binomial model with connected abilities and extra set abilities for Verona and Padova (model 9 in Table \ref{tab:02}).}
\label{fig8}
\end{figure}
\begin{algorithm}
\colorbox{gray!25}{\parbox{0.95\textwidth}{
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{$(\pi_G^{(t)}, p_G^{(t)}, \lambda^{(t)}, \omega_{G}^{(t)}, HT_G, AT_G)$: MCMC values of the parameters of the model for each game $G$ and MCMC iteration $t$; $HT_G, AT_G$: denote the home and the away teams in each game $G$}
\Output{${\bf L}^{(t)}=\big(P^{(t)}_T, SW^{(t)}_T, SL^{(t)}_T, PW^{(t)}_T, PL^{(t)}_T \big)$:
$T_{mcmc}$ leagues with the number of league points, total sets won and lost, and total number of game points won and lost, respectively, for each team $T$ and MCMC iteration $t$. }
\For{$t = 1$ \KwTo $T_{mcmc}$}{
\# Initialize the league output for iteration $t$ \\ ${\bf L}^{(t)}={\bf 0}$; \\
\For{$G = 1$ \KwTo $N_G$}{
\# Initialize set and points for each game \\
$S_H=S_A=P_H=P_A=0;$\\
\While{$\max\{S_{H},S_A\}<3$ \# number of sets won by each team lower than three}{
$R=S_H+S_A$; \# Calculate total number of sets played until now \\
$r = 25 - 10 \times {\mathcal I}(R=4)$; \# Required points to win the set (15 in the tie break)\\
$W \sim {\sf Bernoulli}(\omega_G^{(t)})$; \# Generate the winner of the set\\
$O \sim {\sf ZIPoisson}(\pi_G^{(t)}, \lambda^{(t)})$;\# Generate the extra points \\
$Y \sim {\sf NegBin}(r, p_G^{(t)}){\mathcal I}(Y\le r-2)$ \# Generate the points of the losing team \\[0.2em]
$S_H=S_H+W$;
$S_A=S_A+(1-W)$; \# Update the winning sets of the teams\\[0.2em]
$P_W=r+O$;
$P_L=Y+O$; \# Points of the winning and losing team\\
$P_H=P_H+W\times P_W+(1-W)\times P_L$;\# Update the total points of the home team\\ [0.2em]
$P_A=P_A+(1-W)\times P_W+ W\times P_L$; \# Update the total points of the away team\\[0.2em]
}
\# Updating the league parameters for the home team \\
$P_{HT_G}^{(t)} = P_{HT_G}^{(t)} + 3 \times {\mathcal I}(S_H-S_A>1) + {\mathcal I}(S_H-S_A=1)$;
\# League points\\
$SW_{HT_G}^{(t)} = SW_{HT_G}^{(t)} + S_H$; \# Sets won
$SL_{HT_G}^{(t)} = SL_{HT_G}^{(t)} + S_A$; \# Sets lost \\
$PW_{HT_G}^{(t)} = PW_{HT_G}^{(t)} + P_H$; \# Game points won \\
$PL_{HT_G}^{(t)} = PL_{HT_G}^{(t)} + P_A$; \# Game points lost \\[0.5em]
\# Updating the league parameters for the away team \\
$P_{AT_G}^{(t)} = P_{AT_G}^{(t)} +3 \times {\mathcal I}(S_A-S_H>1) + {\mathcal I}(S_A-S_H=1)$;
\# League points \\
$SW_{AT_G}^{(t)} = SW_{AT_G}^{(t)} + S_A$; \# Sets won \\
$SL_{AT_G}^{(t)} = SL_{AT_G}^{(t)} + S_H$; \# Sets lost \\
$PW_{AT_G}^{(t)} = PW_{AT_G}^{(t)} + P_A$; \# Game points won \\
$PL_{AT_G}^{(t)} = PL_{AT_G}^{(t)} + P_H$; \# Game points lost \\
}
}
{
return league results ${\bf L}^{(t)}$ for $t=1,\dots, T_{mcmc}$\;
}
{\scriptsize Indexes: $t=1, \dots, T_{mcmc}$; $T_{mcmc}$: number of MCMC iterations; \\
$G=1,\dots, N_G$; $N_G$: number of games; \\
$T=1,\dots, N_T$; $N_T$: number of teams in the league.}
\caption{Volleyball stochastic league reconstruction algorithm}
\label{algo1}
}
}
\end{algorithm}
\subsection{Out-of-sample prediction}
\label{sec:out}
Our final task is to assess the out-of-sample predictive ability of our proposed model. As usual, we expect a lower predictive accuracy than the one obtained for in-sample measures.
Nevertheless, it is crucial to ensure that our proposed model has satisfactory predictive performance.
The procedure is similar to the stochastic league regeneration described in Section \ref{sec:pp} and Algorithm \ref{algo1}. The main difference here is that a specific number of games is now known and fixed (i.e. data) and only the remaining games are generated from the predictive distribution, while in Section \ref{sec:pp} the data of the whole season were available and the data of a new full league were re-generated assuming that we have exactly the same characteristics and team abilities as the one observed.
Here we proceed with two scenarios: (a) the mid-season prediction scenario and (b) the play off prediction scenario.
In the first case we assume that we are in the middle of the season, where half of the game results are available, and we try to predict the final league standings.
In the second scenario, we use the full league data to predict the final results in the play off phase.
In the latter case, a further complication arises due to the format of the \emph{play off} phase. In this post-season tournament, the best eight teams compete starting from the quarterfinals: the team that first wins three matches progresses to the next round. Thus, each play off tie, say between teams $A$ and $B$, consists of a random number of matches, ranging from three to five, whereas the set point system is the same as the one described in the previous sections.
\subsubsection{Mid-season prediction}
In this section we predict the second half of the season using the data of the first half of the season as a training set.
This time point is important psychologically for the sports fans.
For example, in national football (soccer) leagues, there is the informal title of the ``Winter Champion'' which is mainly promoted by sports media and newspapers.
In terms of data, at this time point, a considerable number of games is available and all teams have played against all their opponents once.
Hence, reliable estimates of the model parameters and team abilities can be obtained leading to accurate enough predictions about the final league ranking.
To preliminarily assess the model's predictive accuracy we use the percentage of agreement between predicted games/sets from the MCMC sample and the observed ones: the posterior distribution of the percentage of correctly predicted games for the mid-season prediction scenario is presented in Figure~\ref{fig9} (left panel). The posterior mean of correct predictions concerning the final result of the game is 78.26\% ($\pm$ 3\%). At the set level, the posterior agreement of correctly predicted sets (not displayed in the plot) is equal to 69.5\% ($\pm$ 1\%).
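The agreement measure used here is simply the posterior distribution of the proportion of held-out games whose predicted winner coincides with the observed one. A compact sketch is given below, assuming a matrix {\tt pred\_win} of posterior predictive home-win indicators (rows: MCMC iterations, columns: held-out games) and a vector {\tt obs\_win} of the observed outcomes; both names are ours.
\begin{verbatim}
## percentage of correctly predicted games for each MCMC iteration
agree <- apply(pred_win, 1, function(w) mean(w == obs_win))

mean(agree)   # posterior mean agreement (about 0.78 for the mid-season split)
sd(agree)
hist(agree, main = "Posterior % of correctly predicted games")
\end{verbatim}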
\begin{figure}
\includegraphics[scale=0.28]{agree_mid-season.pdf}~
\includegraphics[scale=0.28]{agree_playoff.pdf}
\caption{Out-of-sample predictions: posterior distribution of percentage of correctly predicted games for the mid-season and the play off phase for the Italian SuperLega 2017/18 using the ZIP truncated negative binomial model with connected abilities and extra set abilities for Verona and Padova (model 9 in Table \ref{tab:02}).}
\label{fig9}
\end{figure}
Figure~\ref{fig10} displays 95\% predictive intervals (red ribbon) for the predicted achieved points of the 14 teams competing in the Italian SuperLega 2017/2018, using the first half as training set, along with the observed final points (black dots), and the expected points from the in-sample league reconstruction (blue dots, see Table~\ref{tab:05}). At a first glance, the predicted rankings are in high agreement with the observed ones, especially for the top-three teams (Perugia, Civitanova and Modena) and the last ones (Vibo Valentia, Sora and Castellana Grotte): in these cases, the predicted points coincide with the median predictions. Padova is the only team whose observed points fall outside the 95\% predictive interval. Moreover, the majority of the observed final points (black dots) coincides with the in-sample simulated points (blue dots).
\begin{figure}
\centering
\includegraphics[scale=0.7]{RankPlot.pdf}
\caption{Mid-season out-of-sample prediction: 95\% predictive intervals (red ribbon) from the posterior predictive distribution of the final points collected by each of the 14 teams of the Italian SuperLega 2017-2018 along with the observed final points (black dots) and the expected points from the in-sample league reconstruction (blue dots, see Table \ref{tab:05}). The red solid line represents the median.}
\label{fig10}
\end{figure}
Figure~\ref{fig11} shows the posterior predictive distribution of the league ranking of each team for the Italian SuperLega 2017-2018. The red bar, which is in correspondence of the actual rank, is the highest (i.e., is associated with the highest probability) both for the top-three teams and for the bottom three teams, suggesting again a good predictive ability for our model.
\begin{figure}
\centering
\includegraphics[scale=0.5]{ExactRankProb.pdf}
\caption{Mid-season out-of-sample prediction: posterior predictive distribution of the final league ranking of each team in the Italian SuperLega 2017-2018. The red bar depicts the actual final rank position. The final league ranking is given within brackets in the title of each figure.}
\label{fig11}
\end{figure}
\subsubsection{Play off prediction using regular season games}
Here we predict the games of the play off phase using the entire regular season as training set. Figure~\ref{fig9} (right panel) displays the posterior distribution of the percentage of agreement of the correctly predicted games: the posterior mean is 73.06\% ($\pm 6.05$\%). The posterior agreement of correctly predicted sets (not displayed in the plot) is 61.5\% ($\pm$ 2.54\%).
The play off phase consists of a small knockout tournament between the best eight teams of the regular season: Sir Safety Perugia, Cucine Lube Civitanova, Azimut Modena, Diatec Trentino, Calzedonia Verona, Revivre Milano, Bunge Ravenna and Wixo LPR Piacenza. Table \ref{tab:06} shows for each team the probability to win in each play off stage
and progress to the next round until winning the tournament: Perugia is associated with the highest probability (0.75) of winning the play off phase (Perugia actually won this phase, defeating Civitanova in the final) and, generally, reports the highest probabilities to progress in each stage. Civitanova, the second best team during the regular season, is associated with a high probability to enter the semifinals (0.87) and with the second highest probability to win the play off (0.14). Modena and Trentino, who reached the semifinals, report high probabilities to progress to the semifinals, 0.76 and 0.61 respectively, whereas Piacenza and Ravenna yield 0.13 and zero probabilities to reach the semifinals, respectively (they were actually eliminated in the quarterfinals). Globally, these probabilities seem to realistically mirror the actual strength of each team in the final stage of the season.
\begin{table}
\caption{Play off out-of-sample prediction for the Italian SuperLega 2017/18: probability to progress in each stage of the play off phase along with the actual results.\label{tab:06}}
\begin{tabular}{|rcccc|}
\hline
\emph{Teams} & \emph{Semi} & \emph{Final} & \emph{Winner} & Actual \\
\hline
Sir Safety Perugia & 1.00 & 0.96 & 0.75 & Winner \\
Cucine Lube Civitanova & 0.87 & 0.58 & 0.14 & Finalist \\
Azimut Modena & 0.76 & 0.28 & 0.05 & Semi\\
Diatec Trentino & 0.61 & 0.03 & 0.02 & Semi \\
Calzedonia Verona & 0.39 & 0.01 & 0.00 & Quarter \\
Revivre Milano & 0.24 & 0.07 & 0.02 & Quarter\\
Bunge Ravenna & 0.00 & 0.00 & 0.00 & Quarter \\
Wixo LPR Piacenza & 0.13 & 0.06 & 0.01& Quarter \\
\hline
\end{tabular}
\end{table}
Figure \ref{fig12} displays the play off results of the matches actually played along with the posterior probabilities to progress in each play off stage. These probabilities have been obtained simply by considering the regular season results. As we can see, Perugia, the play off winner, is associated with the highest probabilities in each match, especially against Ravenna and Trentino: although it may seem a bit unrealistic that Perugia has probability one to beat Ravenna, this happens because the model's probability for Perugia is so high that during the MCMC simulation it is never defeated by Ravenna. To give an intuition for this issue, when playing at home and away Perugia has probabilities of 0.81 and 0.76 to win a set against Ravenna, respectively. In general, the highest probabilities are always associated with the teams that actually won the match and, consequently, progressed in the next stage.
Overall, our model yields good out-of-sample predictive performance, especially for the second half of the season.
\begin{figure}
\begin{tikzpicture}[
level distance=5cm,every node/.style={minimum width=3cm,inner sep=0pt},
edge from parent/.style={cyan!70!black,ultra thick,draw},
level 1/.style={sibling distance=4cm},
level 2/.style={sibling distance=2cm},
legend/.style={draw=orange,fill=orange!30,inner sep=3pt}
]
\node (1) {\Pair{Perugia \tiny{[0.78]} }{3}{\small{Lube Civ.} \tiny{[0.22]} }{2}}
[edge from parent fork left,grow=left]
child {node (2) {\Pair{Perugia \tiny{[0.99]} }{3}{Trentino \tiny{[0.01] }}{2}}
child {node (3) {\Pair{Trentino \tiny{[0.61]}}{2}{Verona \tiny{[0.39]}}{1}}}
child {node {\Pair{Perugia \tiny{[1]}}{2}{Ravenna \tiny{[0]}}{1}}}
}
child {node {\Pair{\small{Lube Civ.} \tiny{ [0.65]}}{3}{Modena \tiny{ [0.35] }}{1}}
child {node {\Pair{Modena \tiny{[0.76]}}{2}{Milano \tiny{[0.24]}}{0}}}
child {node {\Pair{ \small{Lube Civ.} \tiny{[0.87]}}{2}{Piacenza \tiny{[0.13]}}{0}}}
};
\node[legend] at ([yshift=50pt]3) (QF) {Quarter Finals};
\node[legend] at (2|-QF) {Semi-Finals};
\node[legend] at (1|-QF) (QF) {Final};
\end{tikzpicture}
\caption{Play off out-of-sample prediction using the full season data of the Italian SuperLega 2017/18: probabilities for each team to progress in each play off stage are reported within brackets.}
\label{fig12}
\end{figure}
\color{black}
\section{Discussion}
\label{sec:disc}
With this work, we propose a unified hierarchical framework for modelling volleyball data using both outcomes (sets and points) of the game.
The model follows a top-down approach (from sets to points) which initially seems counter-intuitive but it helps to capture the characteristics of the game itself.
The core model structure is based on truncated versions of negative binomial for the points.
Moreover, the two levels of the outcomes (set and points) are connected via common abilities with extra set abilities when needed.
Finally, the main characteristics of the game are taken into consideration including that:
(a) the winner of the set is the team that first scores a pre-specified number of points, and
(b) the winner needs at least two points of difference to win the set (and the set continues until this is achieved).
The latter is modelled via an extra latent component which is assumed to follow a zero inflated Poisson distribution.
We have also tested for: the existence of correlations between sets (using random effects); the appropriateness of dynamic set and point abilities; finally, whether the probability of playing for extra points is influenced by the abilities of the teams (using a variety of functional forms). For the former check, there is some evidence that correlation might be present, as expected; however, the DIC provided similar predictive ability compared to the model without random effects, and therefore we proceeded with the simplified version of our model due to computational convenience. Dynamic abilities do not seem to improve the model (although point ability dynamics seem to be more useful than set ability dynamics). Finally, the difference in the abilities between the two competing teams does not seem to alter the probability of playing for extra time (as we might expect).
We have concluded our modelling quest by selecting a ZIP truncated model with connected abilities and extra set abilities for Verona and Padova and constant probability to observe extra points of each set. Posterior predictive checks show a good agreement between our model and the observed results, and an overall exceptional ability to replicate the final rankings of the league. Concerning future out-of-sample predictions, our proposed model is well behaved with acceptable predictive accuracy for future matches both for the mid-season and for the play off phase.
\subsection{Prior considerations}
\paragraph{Prior sensitivity analysis.}
We have used priors of low information following an informal objective Bayes approach. For this reason, all prior distributions used are proper with relatively large variances. To ensure that the selected prior parameters had minor influence on the inference, we have also conducted sensitivity analysis.
Detailed results in the form of 95\% posterior error bars can be found at the supplementary material of the article; see Figures A.1--A.3 at Appendix A.
\paragraph{Prior elicitation.}
Our modelling approach can be used to also incorporate prior information coming from historical data or from experts opinion.
A standard method can be developed by using the power prior approach \citep{Chen2000} where historical data (even with incomplete or different covariate information) can be incorporated in our modelling approach.
In this framework, the power parameter controls how reliable we believe this prior information to be and, therefore, how much it will influence the posterior results. Empirical Bayes approaches or fully Bayesian hierarchical approaches can be used to estimate the power parameter from the data; see for example the works of \cite{Gravestock2017,Gravestock2019} in medicine.
Simpler approaches can be used when proportions of wins for each game are available, which is very common in sports. A simple approach based on generalized linear models can be used to build prior estimates of the team abilities in the game level and further convert them in prior for the winning probabilities of sets and the associated team abilities.
Finally, prior elicitation techniques for information coming from experts can be used to extract winning proportions and team abilities for each game.
This can be implemented by following the earlier work of \cite{Chen1999} or the more recent and general framework of \cite{Albert2012}. Nevertheless, experts such as coaches have rather limited skills and training in quantifying their intuition or empirical knowledge.
Therefore, the use of a down-weighting parameter (similar to the power parameter) is recommended, as in \cite{Drikos_etal_2019}.
In this way, most of the inference will be based on the actual data while a small portion of it will come from the experts' knowledge. The latter information will correct for possible model misspecification or may increase confidence for some specific parameters. Alternatively, we may incorporate predictions based on more reliable sources of prior information, such as bookmakers, as in \cite{Egidi2018} for football.
Generally, prior elicitation in sports, and more specifically in volleyball, is an intriguing topic due to the general availability of historical data, the large amount of data published by betting companies and the easy access to sport experts (betting players, coaches and people working in sport industry).
For any of the above cases, a more elaborate treatment is needed which is outside of the scope of this work.
\subsection{Limitations of proposed methodology.}
Naturally, our approach embodies a number of assumptions which were tested using the data of Italian SuperLega
2017/2018 season. For example, for the final model we have assumed independence of sets and points conditional on the explanatory information.
This was tested for the specific dataset using both random effects and dynamic ability components.
In both cases, no convincing evidence was found in order to incorporate either of these components in our final model.
Moreover, we have assumed that the extra points follow a zero inflated distribution.
This was found to be sufficient in our implementation but further investigation is needed in order to validate this result.
\paragraph{Limitation I: Team specific covariates.}
The main aim of this paper is to validate a general modelling formulation for volleyball data. Therefore we focus on modelling the main characteristics of the game and in developing a ``vanilla'' model using the two outcomes (sets and points) and the competing teams in each game without considering any extra information.
Therefore, a limitation of our approach is that we do not consider any further covariates to improve both the interpretational and the predictive ability of the model.
Towards this direction, \cite{Gabrio2020} has used a variety of team-specific covariates: attack/defence types, service, service reception, blocks, passing abilities, roster quality.
According to the results of \cite{Gabrio2020}, the use of some of these covariates may be beneficial in terms of both game explanation and predictive power.
The authors are currently working on a more enriched volleyball dataset in order to include other characteristics of the game using two different approaches: (a) descriptive, using the end-of-game statistics for interpretational reasons, and (b) predictive, using statistics available at the beginning of the game to improve prediction. Both of these approaches will be applied in combination with Bayesian variable selection techniques.
\paragraph{Limitation II: Separate attacking and defensive team abilities.}
As a referee pointed out, we did not use separate attacking and defensive team abilities at either the set or the point level. Concerning the sets, it was not possible to separate attacking and defensive abilities for identifiability reasons (moreover, none of the related models in the literature consider separate attacking and defensive team abilities). For the point level, since the actual response is represented only by the points of the losing team, conditionally on the winner of the set,
we believe that the data do not contain enough information to accurately estimate both attacking and defensive abilities. This was confirmed by the empirical results appearing in Table \ref{tab:02} (see model 5). Although in this work we illustrate this finding using a single-season dataset, we intuitively believe that this result also holds more generally for other leagues and datasets.
\paragraph*{Limitation III: Not considering fatigue in the model.}
Another important athletic characteristic that we have not included in our modelling approach is fatigue. Although this is of interest for every sport, it might be of prominent importance for volleyball, since sets are important terminal time points of each game and they are played sequentially. Fatigue could be incorporated in several ways, for instance using fixed trend effects or random effects in the modelling of sets. Nevertheless, exploring the functional form that optimally captures its influence on the sets might be cumbersome, and we therefore believe it should be treated separately, in future work more focused on empirical results.
\subsection{Comparisons with other methods.}
Early attempts to model volleyball were mostly focused on winning match and set probabilities through Markovian models \citep{ferrante2014winning}, whereas a Bayesian logistic regression to determine how
the performance of individual skills affects the probability of scoring a point was proposed by \cite{Miskin_etal_2010}. However, the majority of previous studies in volleyball are not oriented towards modelling the entire game and validating the modelling strategy through league reconstruction and prediction of future matches.
As far as we know from reviewing the literature, the only attempt to implement a generative model for volleyball results is proposed by \cite{Gabrio2020} for the women's volleyball Italian SuperLega 2017/2018 season. Our work presents some similarities with this paper, such as the Bayesian framework and the posterior predictive validation of the model in terms of final points and ranking positions. However, distributional assumptions are deeply different: Gabrio's model is an adjustment of the double Poisson model adopted for football \citep{Maher_1982, Lee_1997}, whereas we propose a model which takes into consideration the special characteristics of the game itself which is different than the goal-scoring team sports for which the double Poisson and its extensions were introduced.
\subsection{Limitations regarding data specific results.}
\paragraph{Considering only one dataset.}
Concerning the empirical implementation, a limitation of our results concerns the use of a single-season dataset from a single league. In order to check the model's adequacy in a wider sense, we should apply our model to a variety of seasons and tournaments. One problem in this direction is that volleyball datasets are not as widely available as in other sports (e.g. basketball and football). The authors are currently in touch with volleyball experts in order to obtain richer datasets (including game-specific covariates) and more seasons from the Greek league.
\paragraph{No covariates in the model structure.}
Moreover, additional covariates were not used here; therefore, we have not touched upon topics that other authors have dealt with in the past (with simpler approaches),
such as
fatigue \citep{Shaw2008},
the effect of service \citep{Papadimitriou2004,Lopez_Martinez2009} and
the effect of specific volleyball skills on final outcomes \citep{Miskin_etal_2010,Drikos_etal_2019,Gabrio2020}.
\paragraph{Not considering the service advantage.}
There is an increasing bibliography which focuses on which team serves first,
not only for volleyball \citep{Shaw2008} but also for tennis \citep{Cohen_Zada2018}.
This seems to be an important determinant at the point level (when we model the individual success of a point) but it might be less relevant at the accumulated point level for each set that we consider here.
This is reasonable since the two teams take the serve advantage in turns, especially when the two competing teams are of high level. Nevertheless, it might be more relevant for the tie break, where every small detail may count in determining the final winner. We believe that this effect will be minor when the game is unbalanced in terms of abilities (i.e. one team is much better than the other) but it might play a role (similar to the home effect) if the two teams are close in terms of abilities. Unfortunately, for our dataset this information was not available, but it would be of great interest to study this effect in the future.
\subsection{Final conclusion}
To conclude, we have introduced an alternative model for volleyball data which uses a top-down approach, modelling both sets and points and considering sport-specific characteristics such as the extra points played due to the required two-point winning margin. Our work focuses on the validation of the simple ``vanilla'' model without considering extra covariate structure or characteristics such as fatigue, serve or specific sport skills.
We expect and hope that this work will initiate further quests for finding new methods and models for predicting and understanding volleyball and other sports belonging to the group of net and ball games.
\color{black}
\section*{Acknowledgements}
We would like to thank Dr. Sotirios Drikos for motivating us to work with Volleyball data and the two anonymous referees who improved the quality of the manuscript with their fruitful comments.
This research is financed by the Research Centre of Athens University of Economics and Business,
in the framework of the project entitled ``Original Scientific Publications 2019''.
\section*{Supplementary material} Electronic Supplementary Material with further plots and sensitivity check is available at:\\ \url{https://github.com/LeoEgidi/Bayesian-Volleyball-paper}.
\bibliographystyle{chicago}
\section{Introduction}
Consider a dynamic scene such as Figure~\ref{task_fig}, where you, as the camera wearer, are playing basketball. You need to make a decision with whom you will cooperate to maximize the overall benefit for your team. Looking ahead at your teammates, you make a conscious decision and then 2-3 seconds afterwards you perform a cooperative action such as passing the ball.
In a team sport such as basketball, effective cooperation among teammates is essential. Thus, in this paper, we aim to investigate whether we can use a single first-person image to infer with whom the camera wearer will cooperate 2-3 seconds from now. This is a challenging task because predicting the camera wearer's cooperative intention requires 1) inferring his/her momentary visual attention, 2) decoding the dominant social signals expressed by other players who want to cooperate, and 3) knowing who the camera wearer's teammates are when the players are not wearing any team-specific uniforms.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{./paper_figures/task_figure/task_fig.pdf}
\captionsetup{labelformat=default}
\setcounter{figure}{0}
\caption{With whom will I cooperate after 2-3 seconds? Given an \textbf{unlabeled} set of first-person basketball images, we predict with whom the camera wearer will cooperate 2 seconds from now. We refer to this problem as a cooperative basketball intention prediction.\vspace{-0.6cm}}
\label{task_fig}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\linewidth]{./paper_figures/arch/train_arch5.pdf}
\end{center}
\vspace{-0.4cm}
\caption{The illustration of our cross-model EgoSupervision training scheme. As our base model we use a multi-person pose estimation network from~\cite{DBLP:journals/corr/CaoSWS16}, which predicts 1) pose estimates of all people in a given first-person image and 2) the bounding boxes around each person. Next, we feed these outputs to an EgoTransformer, which transforms them such that the transformed output would approximately capture the camera wearer's attention and intentions. Then, we use such transformed output as a supervisory signal to train the network for our cooperative basketball intention task.\vspace{-0.5cm}}
\label{fig:train_arch}
\end{figure*}
To make this problem even more challenging we ask a question: ``Can we infer cooperative basketball intention without manually labeled first-person data?''. Building an unsupervised learning framework is important because manually collecting basketball intention labels is a costly and a time consuming process. In the context of a cooperative basketball intention task, an annotator needs to have highly specific basketball domain knowledge. Such a requirement limits the scalability of the annotation process because such annotators are difficult to find and costly to employ.
However, we conjecture that we can learn cooperative basketball intention in an unsupervised fashion by exploiting the signal provided by the first-person camera. What people see reflects how they are going to act. A first-person camera placed on a basketball player's head allows us to indirectly tap into that person's mind and reason about his/her internal state based on what the camera wearer sees. To do so we propose a novel cross-model EgoSupervision learning scheme, which allows us to learn the camera wearer's intention without the manually labeled intention data. Our cross-model EgoSupervision scheme works as follows. First we transform the output of a pretrained pose-estimation network such that it would approximately reflect the camera wearer's internal state such as his/her visual attention and intentions. Then, we use such transformed output as a supervisory signal to train another network for our cooperative basketball intention task. We show that such a learning scheme allows us to train our model without manually annotated intention labels, and achieve similar or even better results as the fully supervised methods do.
\section{Related Work}
\textbf{First-Person Vision.} In the past, most first-person methods have focused on first-person object detection~\cite{DBLP:journals/ijcv/LeeG15,BMVC.28.30,conf/cvpr/RenG10,conf/cvpr/FathiRR11,gberta_2017_RSS}, or activity recognition~\cite{Soran2015,Singh_2016_CVPR,PirsiavashR_CVPR_2012_1,Li_2015_CVPR,ma2016going,Fathi:2011:UEA:2355573.2356302}. Several methods have employed first-person videos to summarize videos ~\cite{DBLP:journals/ijcv/LeeG15,Lu:2013:SSE:2514950.2516026} while recently the work in~\cite{Su2016} proposed to predict the camera wearer's engagement detection from first-person videos. The work in~\cite{Fathi_socialinteractions:} used a group of people wearing first-person cameras to infer their social interactions such as monologues, dialogues, or discussions. The method in~\cite{park_force} predicted physical forces experienced by the camera wearer, while the work in~\cite{conf/cvpr/KitaniOSS11} recognized the activities performed in various extreme sports. Several recent methods~\cite{park_ego_future,park_cvpr:2017} also predicted the camera wearer's movement trajectories. Finally, first-person cameras have also been used for various robotics applications~\cite{Ryoo:2015:RAP:2696454.2696462,DBLP:journals/corr/GoriAR15}
In comparison to these prior methods, we propose a novel cooperative basketball intention prediction task, that allows us to study cooperative behaviors of the basketball players. Furthermore, we note that these prior first-person methods (except~\cite{conf/cvpr/KitaniOSS11}) rely on manually annotated labels for their respective tasks whether it would be an object-detection, activity recognition, intention prediction or some other task. Instead, in this work, we demonstrate that we can solve a challenging cooperative basketball intention prediction task without using annotated first-person intention labels, which are time consuming and costly to obtain.
\textbf{Knowledge Transfer across Models.} With the introduction of supervised CNN models~\cite{NIPS2012_4824}, there has been a lot of interest in adapting generic set of features~\cite{Donahue_ICML2014} for different tasks at hand~\cite{NIPS2014_5418,gberta_2015_CVPR,girshick2014rcnn,DBLP:journals/corr/XieT15,ren2015faster,Sermanet_overfeat:integrated}. Recently, generic image classification features were successfully used for the tasks such as edge detection~\cite{gberta_2015_CVPR,DBLP:journals/corr/XieT15}, object detection~\cite{girshick2014rcnn,ren2015faster,Sermanet_overfeat:integrated}, and semantic segmentation~\cite{gberta_2016_CVPR,DBLP:journals/corr/LinSRH15,DBLP:journals/corr/LongSD14,DBLP:journals/corr/ChenPKMY14}. More related to our work, a recent line of research investigated how to transfer knowledge across different models by a combination of parameter updates~\cite{Aytar11,DuanICML2012,Hoffman_ICLR2013}, transformation learning~\cite{Kulis:2011:YSY:2191740.2191798,DBLP:conf/cvpr/GongSSG12}, network distillation~\cite{DBLP:journals/corr/HintonVD15} or cross-model supervision~\cite{Hoffman_2016_CVPR,Gupta_2016_CVPR}. The most similar to our work are the methods in~\cite{Hoffman_2016_CVPR,Gupta_2016_CVPR} that use cross-model supervision to transfer knowledge from one model to another.
All of the above methods focus on the third-person data. In contrast, we show how to exploit a first-person view to solve a novel camera wearer's cooperative intention prediction task without using manually labeled first-person data.
\section{Learning Cooperative Basketball Intention}
The goal of our cooperative basketball intention task is to predict with whom the camera wearer will cooperate in the near future. Formally, we aim to learn a function $g(I_i)$ that takes a single first-person image $I_i$ as an input and outputs a per-pixel likelihood map, where each pixel indicates the cooperation probability. Ideally, we would want such a function to produce high probability values at pixels around the person with whom the camera wearer will cooperate, and low probability values around all the other pixels.
We implement $g(I_i)$ via a fully convolutional neural network based on the architecture of a multi-person pose estimation network in~\cite{DBLP:journals/corr/CaoSWS16}. Let $\hat{y}$ denote a per-pixel mask that is given to our network as a target label. We refer to $\hat{y}$ as a \textit{pseudo} ground truth because we obtain it automatically instead of relying on the manually annotated intention labels. Then, we learn our cooperative basketball intention model by optimizing the following cross-entropy loss objective:
\vspace{-0.4cm}
\begin{equation}
\begin{split}
\label{CE_loss_eq}
\mathcal{L}^{(i)}= -\sum_{j=1}^{N} \hat{y}^{(i)}_j \log g_j(I_i) +(1-\hat{y}^{(i)}_j) \log \left(1-g_j(I_i)\right),
\end{split}
\end{equation}
where $\hat{y}^{(i)}_j$ is the pseudo ground truth value of image $I_i$ at pixel $j$, $g_j(I_i)$ refers to our network's output at pixel $j$, and $N$ denotes the number of pixels in an image. We now explain how we obtain the pseudo ground truth data $\hat{y}$.
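For concreteness, the per-pixel objective above is a standard sigmoid cross-entropy; the following minimal NumPy sketch illustrates it, where the array names, shapes, and the toy example are our own illustrative assumptions and not part of the actual implementation.
\begin{verbatim}
import numpy as np

def pixelwise_cross_entropy(pred_logits, pseudo_gt, eps=1e-7):
    # pred_logits: (H, W) raw network outputs before the sigmoid
    # pseudo_gt:   (H, W) pseudo ground truth \hat{y} with values in [0, 1]
    probs = 1.0 / (1.0 + np.exp(-pred_logits))          # sigmoid
    probs = np.clip(probs, eps, 1.0 - eps)               # numerical safety
    loss = -(pseudo_gt * np.log(probs)
             + (1.0 - pseudo_gt) * np.log(1.0 - probs))
    return loss.sum()                                    # sum over all N pixels

# toy example: a 4x4 prediction map supervised by a pseudo ground truth mask
logits = np.random.randn(4, 4)
target = np.zeros((4, 4)); target[1:3, 1:3] = 1.0
print(pixelwise_cross_entropy(logits, target))
\end{verbatim}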
\subsection{EgoTransformer}
To construct a pseudo ground truth supervisory signal $\hat{y}$, we transform the output of a pretrained multi-person pose estimation network~\cite{DBLP:journals/corr/CaoSWS16}, such that it would approximately capture the camera wearer's internal state such as his/her visual attention, and intentions. We do so using our proposed EgoTransformer scheme.
Let $f(I_i)$ denote a pretrained fully convolutional network from~\cite{DBLP:journals/corr/CaoSWS16} that takes a first-person image as an input, and outputs the 1) pose part estimates of every person in an image, and 2) their bounding-box detections. We note that the pretrained network $f$ was never trained on any first-person images. Then, formally, let $B \in \mathbb{R}^{n \times 5}$ denote the bounding box of people detected by $f$. Each of $n$ detected bounding boxes is parameterized by $5$ numbers $(x,y,h,w,c)$ denoting the top-left bounding-box coordinates $(x,y)$, the height $h$, and width $w$ of the bounding box, and its confidence value $c$. Additionally, let $P \in \mathbb{R}^{n \times 18 \times 2}$ denote the predicted $(x,y)$ locations of $18$ pose parts (see~\cite{DBLP:journals/corr/CaoSWS16}) for each of $n$ detected people.
Then our goal is to come up with a transformation function $T(B^{(i)},P^{(i)})$ that takes these two outputs and transforms them into a per-pixel pseudo ground truth mask $\hat{y}^{(i)}$ for our cooperative basketball intention prediction task.
We do so by exploiting three different characteristics encoded in a first-person view: 1) egocentric location prior, 2) egocentric size prior, and 3) egocentric pose prior. All of these characteristics can be used to reason about the camera wearer's internal state.
\captionsetup{labelformat=empty}
\captionsetup[figure]{skip=5pt}
\begin{figure}
\centering
\myfigurethreecol{./paper_figures/qual_results_v2/input/1_GOPR0064_80850.jpg}
\myfigurethreecol{./paper_figures/qual_results_v2/pose_pseudoGT/1_GOPR0064_80850.jpg}
\myfigurethreecol{./paper_figures/qual_results_v2/gt/1_GOPR0064_80850.jpg}
\myfigurethreecol{./paper_figures/qual_results_v2/input/10_GP020190_36034.jpg}
\myfigurethreecol{./paper_figures/qual_results_v2/pose_pseudoGT/10_GP020190_36034.jpg}
\myfigurethreecol{./paper_figures/qual_results_v2/gt/10_GP020190_36034.jpg}
\myfigurethreecolcaption{./paper_figures/qual_results_v2/input/9_GP010111_1190.jpg}{First-Person RGB}
\myfigurethreecolcaption{./paper_figures/qual_results_v2/pose_pseudoGT/9_GP010111_1190.jpg}{Pseudo GT}
\myfigurethreecolcaption{./paper_figures/qual_results_v2/gt/9_GP010111_1190.jpg}{Ground Truth}
\captionsetup{labelformat=default}
\setcounter{figure}{2}
\caption{Qualitative comparison of the pseudo ground truth labels obtained via an EgoTransformer versus the actual ground truth. Note that while the pseudo ground truth is not always correct (see the third row), in most cases, it successfully assigns high values around the player with whom the camera wearer will cooperate (see the first two rows). \vspace{-0.5cm}}
\label{pseudo_gt_fig}
\end{figure}
\captionsetup{labelformat=default}
\captionsetup[figure]{skip=10pt}
For instance, the location where another person is detected in a first-person image can be used to assess how likely the camera wearer is looking at that person~\cite{Li_2015_CVPR, gberta_2017_RSS}. The size of another person in a first-person image can be used to infer how far the camera wearer is from that person, and hence, how likely will the camera wearer interact with that person (the nearer the more likely). Finally, most person-to-person interactions involve people looking at each other, which imposes a certain pose prior. We can then use such a pose prior to predict whether two people will cooperate with each other in the near future based on whether another person is looking at the camera wearer at present.
\captionsetup{labelformat=empty}
\captionsetup[figure]{skip=5pt}
\begin{figure*}
\centering
\myfiguresixcol{./paper_figures/qual_results_v2/input/1_GOPR0064_34923.jpg}
\myfiguresixcol{./paper_figures/qual_results_v2/intention_pose_pseudoGT/1_GOPR0064_34923.jpg}
\myfiguresixcol{./paper_figures/qual_results_v2/gt/1_GOPR0064_34923.jpg}
\myfiguresixcol{./paper_figures/qual_results_v2/input/4_GOPR0016_25404.jpg}
\myfiguresixcol{./paper_figures/qual_results_v2/intention_pose_pseudoGT/4_GOPR0016_25404.jpg}
\myfiguresixcol{./paper_figures/qual_results_v2/gt/4_GOPR0016_25404.jpg}
\myfiguresixcol{./paper_figures/qual_results_v2/input/4_GOPR0016_63051.jpg}
\myfiguresixcol{./paper_figures/qual_results_v2/intention_pose_pseudoGT/4_GOPR0016_63051.jpg}
\myfiguresixcol{./paper_figures/qual_results_v2/gt/4_GOPR0016_63051.jpg}
\myfiguresixcol{./paper_figures/qual_results_v2/input/4_GOPR0016_42855.jpg}
\myfiguresixcol{./paper_figures/qual_results_v2/intention_pose_pseudoGT/4_GOPR0016_42855.jpg}
\myfiguresixcol{./paper_figures/qual_results_v2/gt/4_GOPR0016_42855.jpg}
\myfiguresixcolcaption{./paper_figures/qual_results_v2/input/9_GP010111_40673.jpg}{First-Person RGB}
\myfiguresixcolcaption{./paper_figures/qual_results_v2/intention_pose_pseudoGT/9_GP010111_40673.jpg}{Our Prediction}
\myfiguresixcolcaption{./paper_figures/qual_results_v2/gt/9_GP010111_40673.jpg}{Ground Truth}
\myfiguresixcolcaption{./paper_figures/qual_results_v2/input/1_GOPR0064_80850.jpg}{First-Person RGB}
\myfiguresixcolcaption{./paper_figures/qual_results_v2/intention_pose_pseudoGT/1_GOPR0064_80850.jpg}{Our Prediction}
\myfiguresixcolcaption{./paper_figures/qual_results_v2/gt/1_GOPR0064_80850.jpg}{Ground Truth}
\captionsetup{labelformat=default}
\setcounter{figure}{3}
\caption{The qualitative cooperative basketball intention prediction results. Despite not using any manually annotated first-person labels during training, in most cases, our cross-model EgoSupervision method correctly predicts with whom the camera wearer will cooperate (the first two rows). In the third row, we also illustrate two cases where our method fails to produce correct predictions. \vspace{-0.5cm}}
\label{preds_fig}
\end{figure*}
\captionsetup{labelformat=default}
\captionsetup[figure]{skip=10pt}
We express our pseudo ground truth data $\hat{y}$ using these three characteristics using what we refer to as an EgoTransformer scheme:
\vspace{-0.3cm}
\begin{equation}\label{pgt_eq}
\begin{split}
\hat{y} = & \Big[ \sum_{j=1}^n V(B_j, \phi_{size}(B_j)) \cdot V(B_j,\phi_{pose}(B_j))\Big] \cdot \phi_{loc}(B),
\end{split}
\end{equation}
where $n$ denotes the number of detected bounding boxes in a given image, $B_j$ depicts a $j^{th}$ bounding box, $V$ is a function that takes two inputs: 1) a bounding box $B_j$, and 2) a scalar value $v$, and outputs a $H \times W$ dimensional mask by assigning every pixel inside this bounding box $B_j$ to $v$, and zeros to all the pixels outside $B_j$. Here, $H$ and $W$ depict the height and the width of the original input image. Finally, $\phi_{size}(B_j) \in \mathbb{R}^{1 \times 1}$ and $\phi_{pose}(B_j) \in \mathbb{R}^{1 \times 1}$ are scalars that capture the size and pose priors associated with a bounding box $B_j$, while $\phi_{loc} \in \mathbb{R}^{H \times W}$ is a first-person location prior of the same dimensions as the original input image.
Intuitively, the formulation above operates by first assigning a specific value to each of the detected bounding boxes. This yields a $H \times W$ dimensional prediction map where every pixel that does not belong to any bounding boxes is assigned a zero value. Then, this prediction map is multiplied with the location prior $\phi_{loc} \in \mathbb{R}^{H \times W}$ (using elementwise multiplication). Finally, all the values are normalized to be in range $[0,1]$, which produces our final pseudo ground truth labels. We now explain each of the components in more detail.
\textbf{Egocentric Location Prior.} The location of the camera wearer's visual attention is essential for inferring his/her cooperative intentions. We know that a first-person camera is aligned with the person's head direction, and thus, it captures exactly what the camera wearer sees. As a result, the way the camera wearer positions himself with respect to other players affects the location where these players will be mapped in a first-person image.
Instead of assuming any specific location a priori (e.g., a center prior), as is done in~\cite{Li_2015_CVPR,DBLP:journals/ijcv/LeeG15}, we find the egocentric location prior directly from the data. As before, let $B \in \mathbb{R}^{n \times 5}$ denote the bounding boxes detected by a pretrained network. Then we can compute $\phi_{loc} \in \mathbb{R}^{H \times W}$ as follows:
\vspace{-0.3cm}
\begin{equation}
\begin{split}
\phi_{loc}(B)=\sum_{j=1}^n V(B^{(i)}_j,c^{(i)}_j) \cdot \frac{1}{N} \sum_{i=1}^N \sum_{j=1}^n V(B^{(i)}_j,c^{(i)}_j)\nonumber
\end{split}
\end{equation}
where $c^{(i)}_j$ is the predicted confidence of the $j^{th}$ bounding box in the $i^{th}$ image. Intuitively, the first term $\sum_{j=1}^n V(B_j,c^{(i)}_j)$ depicts a $H \times W$ dimensional mask that is obtained by assigning confidence values to all pixels in their respective bounding boxes in the current image, and zero values to the pixels outside the bounding boxes. The second term $\frac{1}{N} \sum_{i=1}^N \sum_{j=1}^n V(B_j,c^{(i)}_j)$ also depicts a $H \times W$ dimensional mask that is obtained using the same procedure, but across the entire training dataset rather than a single image. In other words, the second term captures the locations in a first-person image where the bounding box predictions are usually most dense.
We conjecture that $\phi_{loc}(I_i)$ can then be used to approximate the camera wearer's visual attention location, which is essential for inferring the camera wearer's cooperative intentions.
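As an illustrative sketch (not our actual implementation), the two terms above can be computed from detected boxes as follows; the box tuple format and the function names are assumptions made for the example.
\begin{verbatim}
import numpy as np

def paint_boxes(boxes, H, W):
    # boxes: list of (x, y, h, w, c); fill each box region with its confidence c
    mask = np.zeros((H, W))
    for (x, y, h, w, c) in boxes:
        mask[y:y + h, x:x + w] = c
    return mask

def location_prior(current_boxes, all_training_boxes, H, W):
    # first term: confidence mask of the current image
    current = paint_boxes(current_boxes, H, W)
    # second term: average confidence mask over the training set, i.e.,
    # where detections are usually most dense in a first-person view
    dataset = np.mean([paint_boxes(b, H, W) for b in all_training_boxes], axis=0)
    return current * dataset

# toy usage with two training images
H, W = 60, 80
train = [[(10, 20, 30, 15, 0.9)], [(12, 22, 28, 14, 0.8)]]
phi_loc = location_prior([(11, 21, 29, 15, 0.95)], train, H, W)
\end{verbatim}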
\captionsetup{labelformat=empty}
\captionsetup[figure]{skip=5pt}
\begin{figure*}[t]
\centering
\myfiguresixcol{./paper_figures/human_results_hard/input/7_GOPR0004_10953.jpg}
\myfiguresixcol{./paper_figures/human_results_hard/subject1/7_GOPR0004_10953.jpg}
\myfiguresixcol{./paper_figures/human_results_hard/subject2/7_GOPR0004_10953.jpg}
\myfiguresixcol{./paper_figures/human_results_hard/subject3/7_GOPR0004_10953.jpg}
\myfiguresixcol{./paper_figures/human_results_hard/subject5/7_GOPR0004_10953.jpg}
\myfiguresixcol{./paper_figures/human_results_hard/gt/7_GOPR0004_10953.jpg}
\myfiguresixcolcaption{./paper_figures/human_results_hard/input/4_GOPR0016_81114.jpg}{First-Person RGB}
\myfiguresixcolcaption{./paper_figures/human_results_hard/subject1/4_GOPR0016_81114.jpg}{Subject-1}
\myfiguresixcolcaption{./paper_figures/human_results_hard/subject2/4_GOPR0016_81114.jpg}{Subject-2}
\myfiguresixcolcaption{./paper_figures/human_results_hard/subject3/4_GOPR0016_81114.jpg}{Subject-3}
\myfiguresixcolcaption{./paper_figures/human_results_hard/subject5/4_GOPR0016_81114.jpg}{Subject-5}
\myfiguresixcolcaption{./paper_figures/human_results_hard/gt/4_GOPR0016_81114.jpg}{Ground Truth}
\captionsetup{labelformat=default}
\setcounter{figure}{4}
\caption{Several qualitative examples from the top $4$ performing subjects in our conducted human study. Each subject specified their prediction by clicking on the person, with whom he/she thought the camera wearer was going to cooperate. We then placed a fixed size Gaussian around the location of the click. Note that based on these results, we can conclude that some instances of this task are quite difficult even for humans, i.e. in these examples, there is no general consensus among the subjects' responses. \vspace{-0.4cm}}
\label{human_preds}
\end{figure*}
\captionsetup{labelformat=default}
\captionsetup[figure]{skip=10pt}
\textbf{Egocentric Size Prior.} Spatial $3D$ cues provide important information for inferring the camera wearer's intentions~\cite{park_ego_future,park_cvpr:2017}. For instance, the camera wearer is more likely to cooperate with a player who is near him/her. We propose to capture this intuition by exploiting an egocentric size prior. We know that the size of a bounding box in a first-person image is inversely related to the distance between the camera wearer and the person in the bounding box. Thus, let $h_j$ be the height of the bounding box $B_j$. Then we express the egocentric size prior $\phi_{size}(B_j) \in \mathbb{R}^{1 \times 1}$ for a given bounding box as:
\vspace{-0.3cm}
\begin{equation}
\begin{split}
\phi_{size}(B_j)= \exp{(-\frac{\sigma}{h_j})}\nonumber
\end{split}
\end{equation}
where $\sigma$ denotes a hyperparameter controlling how much to penalize small bounding boxes. Such a formulation allows us to capture the intuition that the camera wearer is more likely to cooperate with players who are physically closer to him/her.
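A minimal sketch of this prior, assuming the box height is given in pixels, is:
\begin{verbatim}
import numpy as np

def size_prior(box_height, sigma=10.0):
    # larger boxes (closer players) get values nearer to 1,
    # small distant boxes are penalized exponentially
    return np.exp(-sigma / box_height)

print(size_prior(200.0), size_prior(20.0))  # close vs. far player
\end{verbatim}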
\textbf{Egocentric Pose Prior.} In basketball, people tend to look at each other to express their intentions before actually performing cooperative actions. Detecting whether a particular person is facing the camera wearer can be easily done by examining the $x$ coordinates of the paired body parts, such as the eyes, arms, and legs, of a person detected in a first-person image. For instance, if a particular person is facing the camera wearer, then for most of his/her paired parts visible in a first-person image the following will be true: $x(right\_part)<x(left\_part)$. In other words, the right parts of that person's body will have a smaller $x$ coordinate value in a first-person image than the left parts. We use this intuition to encode the egocentric pose prior $\phi_{pose}(B_j) \in \mathbb{R}^{1 \times 1}$ for a given bounding box $B_j$ as follows:
\vspace{-0.3cm}
\begin{equation}
\begin{split}
\phi_{pose}(B_j)=\frac{1}{|\mathcal{P}|}\sum_{p \in \mathcal{P}} 1 \{x(right\_part)<x(left\_part) \} \nonumber
\end{split}
\end{equation}
where $\mathcal{P}$ is the set of all paired parts, and $1 \{x(right\_part)<x(left\_part) \}$ is an indicator function that returns $1$ if the $x$ coordinate of the right part in a first-person image is smaller than the $x$ coordinate of the left part. The computed value $\phi_{pose}(B_j)$ can then be viewed as a confidence that a person in the bounding box $B_j$ is facing the camera wearer, which is an important cue for inferring the camera wearer's cooperative intentions.
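A simple sketch of this computation, assuming the detected parts are given as a dictionary of image coordinates (the part names below are illustrative), is:
\begin{verbatim}
def pose_prior(parts):
    # parts: dict mapping a part name to its (x, y) image coordinate,
    # e.g. {'right_eye': (120, 40), 'left_eye': (140, 41), ...}
    paired = ['eye', 'shoulder', 'elbow', 'wrist', 'hip', 'knee', 'ankle']
    votes = []
    for p in paired:
        r, l = parts.get('right_' + p), parts.get('left_' + p)
        if r is not None and l is not None:
            # the person faces the camera wearer if the right part appears
            # to the left (smaller x) of the left part in the image
            votes.append(1.0 if r[0] < l[0] else 0.0)
    return sum(votes) / len(votes) if votes else 0.0

print(pose_prior({'right_eye': (120, 40), 'left_eye': (140, 41),
                  'right_hip': (118, 90), 'left_hip': (142, 91)}))
\end{verbatim}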
\textbf{Pseudo Ground Truth.} We then combine all the above discussed components into a unified framework using the Equation~\ref{pgt_eq}. Such a formulation allows us to automatically construct pseudo ground truth labels from the outputs of a pretrained multi-person pose estimation network. We illustrate several examples of our obtained pseudo ground truth labels in Figure~\ref{pseudo_gt_fig}. Notice that while our computed pseudo ground truth is not always correct, in many cases it correctly captures the player with whom the camera wearer will cooperate in the near future. In our experimental section, we will demonstrate that despite the imperfections of our pseudo ground truth labels, we can use them to obtain a model that is almost as good as the model trained in a fully supervised fashion using manually annotated cooperation labels.
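Putting the pieces together, a simplified sketch of Equation~\ref{pgt_eq}, reusing the illustrative size and pose prior helpers sketched above and assuming one part dictionary per detected box, is:
\begin{verbatim}
import numpy as np

def pseudo_ground_truth(boxes, parts_per_box, phi_loc, H, W):
    # boxes: list of (x, y, h, w, c); parts_per_box: one part dict per box
    y_hat = np.zeros((H, W))
    for (x, yb, h, w, c), parts in zip(boxes, parts_per_box):
        value = size_prior(h) * pose_prior(parts)   # per-box score
        y_hat[yb:yb + h, x:x + w] = value           # V(B_j, value)
    y_hat = y_hat * phi_loc                         # elementwise location prior
    if y_hat.max() > 0:
        y_hat = y_hat / y_hat.max()                 # normalize to [0, 1]
    return y_hat
\end{verbatim}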
\subsection{Cross-Model EgoSupervision}
After obtaining the pseudo ground truth data $\hat{y}$, we train our cooperative basketball intention FCN using the cross-model EgoSupervision scheme as shown in Figure~\ref{fig:train_arch}. We employ a multi-person pose estimation network from~\cite{DBLP:journals/corr/CaoSWS16} as our base model, which is used to predict the 1) pose estimates of all people in a given image and 2) their bounding boxes. The parameters inside the base network are fixed throughout the entire training procedure. At each iteration, the outputs from the base network are fed to the EgoTransformer, which transforms them into the pseudo ground truth cooperate intention labels. These pseudo ground truth labels are then used as a supervisory signal to train our cooperative basketball intention FCN using a sigmoid cross entropy per-pixel loss as illustrated in Equation~\ref{CE_loss_eq}.
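The following is a conceptual PyTorch sketch of one cross-model EgoSupervision training step; our actual system is implemented in Caffe with the full MPP architecture, so the tiny network, tensor shapes, and hyperparameters below are placeholders.
\begin{verbatim}
import torch
import torch.nn as nn

# toy stand-in for the intention FCN (the real model reuses the MPP architecture)
intention_fcn = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                              nn.Conv2d(8, 1, 1))
optimizer = torch.optim.SGD(intention_fcn.parameters(), lr=1e-7,
                            momentum=0.9, weight_decay=5e-4)
criterion = nn.BCEWithLogitsLoss()

def training_step(images, pseudo_gt):
    # images: (B, 3, H, W); pseudo_gt: (B, 1, H, W) from the EgoTransformer,
    # computed from the frozen base pose network's detections
    logits = intention_fcn(images)
    loss = criterion(logits, pseudo_gt)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# one toy iteration
imgs = torch.randn(2, 3, 64, 64)
gt = torch.rand(2, 1, 64, 64)
print(training_step(imgs, gt))
\end{verbatim}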
\subsection{Implementation Details}
For all of our experiments, we used a Caffe deep learning library~\cite{jia2014caffe}. As our base FCN model we used a multi-person pose estimation network from~\cite{DBLP:journals/corr/CaoSWS16}. Inspired by the success of this method, we also used the same architecture for our cooperative basketball intention FCN. During training, we optimized the network for $5000$ iterations with a learning rate of $10^{-7}$, the momentum equal to $0.9$, the weight decay of $0.0005$, and the batch size of $15$. The weights inside the base FCN network were fixed throughout the entire training procedure. To compute the egocentric size prior mask we used $\sigma = 10$.
\section{Cooperative Basketball Intention Dataset}
\label{data_sec}
We build upon the dataset from~\cite{DBLP:journals/corr/BertasiusYPS16}, which captures first-person basketball videos of $48$ distinct college-level players in an unscripted basketball game. The work in~\cite{DBLP:journals/corr/BertasiusYPS16} studies a basketball performance assessment problem, and provides $401$ training and $343$ testing examples of basketball cooperations among players from $10.3$ hours of videos.
To obtain ground truth labels corresponding to the specific players, with whom the camera wearer cooperated, we look at the video segments corresponding to all such cooperation. We then identify the player with whom the camera wearer cooperated, go back to the frame about $2$ seconds before the cooperation happens, and label that player with a bounding box. The ground truth data is then generated by placing a Gaussian inside the bounding box, according to the height and width of the bounding box.
Once again we note that these labels are only used for the evaluation purposes, and also to train other baseline models. In comparison, our method learns to detect the players with whom the camera wearer will cooperate, without relying on manually annotated intention labels.
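For illustration, such a Gaussian-shaped label can be rendered inside an annotated box as in the sketch below; the choice of spread relative to the box size is an assumption made for the example.
\begin{verbatim}
import numpy as np

def gaussian_label(x, y, w, h, H, W):
    # place a 2D Gaussian centered on the annotated box, with standard
    # deviations proportional to the box width and height
    ys, xs = np.mgrid[0:H, 0:W]
    cx, cy = x + w / 2.0, y + h / 2.0
    sx, sy = w / 4.0, h / 4.0      # illustrative choice of spread
    return np.exp(-(((xs - cx) ** 2) / (2 * sx ** 2)
                    + ((ys - cy) ** 2) / (2 * sy ** 2)))

label = gaussian_label(x=30, y=10, w=20, h=60, H=120, W=160)
\end{verbatim}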
\section{Experimental Results}
In this section, we present quantitative and qualitative results for our cooperative basketball intention prediction task. To compute the accuracy of each method, we select the player in the image with the maximum predicted probability as the final prediction and then compute the fraction of all the correct predictions across the entire testing dataset.
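A minimal sketch of this evaluation protocol, assuming per-image probability maps, player boxes, and ground truth player indices are available, is:
\begin{verbatim}
import numpy as np

def predicted_player(prob_map, boxes):
    # boxes: list of (x, y, h, w); return the index of the player whose
    # box contains the pixel with the maximum predicted probability
    scores = [prob_map[y:y + h, x:x + w].max() for (x, y, h, w) in boxes]
    return int(np.argmax(scores))

def accuracy(prob_maps, boxes_per_image, gt_indices):
    correct = [predicted_player(p, b) == g
               for p, b, g in zip(prob_maps, boxes_per_image, gt_indices)]
    return float(np.mean(correct))
\end{verbatim}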
\setlength{\tabcolsep}{3pt}
\begin{table}
\begin{center}
\begin{tabular}{ c | c |}
\hline
\multicolumn{1}{| c |}{\em Human Subjects} & {\em Accuracy}\\ \hline
\multicolumn{1}{| c |}{Subject-4} & 0.802\\
\multicolumn{1}{| c |}{Subject-2} & 0.895\\
\multicolumn{1}{| c |}{Subject-3} & 0.901\\
\multicolumn{1}{| c |}{Subject-5} & 0.904\\
\multicolumn{1}{| c |}{Subject-1} & \bf 0.927\\ \hline
\end{tabular}
\end{center}\vspace{-.3cm}
\caption{Quantitative human study results on our cooperative basketball intention task. We ask $5$ subjects to predict the player in the first-person image with whom they think the camera wearer will cooperate after $2$ seconds. We then compute the accuracy as the fraction of correct responses. The results indicate that most subjects achieve an accuracy of about $90\%$. We conjecture that Subject-4 may be less familiar with the basketball game, hence the lower accuracy. \vspace{-0.2cm}}
\label{human_study_table}
\end{table}
\subsection{Human Study}
\label{human_study_sec}
First, to see how well humans can predict cooperative basketball intention from first-person images, we conduct a human study consisting of $5$ human subjects. Each subject is shown $343$ testing images one at a time, and asked to click on the player in an image, with whom he/she thinks the camera wearer will cooperate $2$ seconds from now. Then the accuracy of each subject is evaluated as the fraction of correct responses.
We present these results in Table~\ref{human_study_table}, and demonstrate that this task is not trivial even for humans: most of the subjects achieve about $90\%$ accuracy on our task, which is solid but not perfect. We also point out that we did not collect information on how familiar each subject was with basketball. However, based on the results, we conjecture that Subject-4 who achieved almost $10\%$ lower accuracy than the other subjects was probably not very familiar with basketball, which contributed to his lower performance. In Figure~\ref{human_preds}, we also visualize the qualitative examples that human subjects found the most difficult, i.e. in these instances, the predictions among the subjects differed substantially.
\setlength{\tabcolsep}{3pt}
\begin{table}
\begin{center}
\begin{tabular}{ c | F{2cm} |}
\hline
\multicolumn{1}{| c |}{\em Method} & {\em Accuracy}\\ \hline
\multicolumn{1}{| c |}{DCL~\cite{LiYu16}} & 0.222\\
\multicolumn{1}{| c |}{MPP-pretrained~\cite{DBLP:journals/corr/CaoSWS16}} & 0.586\\
\multicolumn{1}{| c |}{DeepLab$^{\ddagger}$~\cite{DBLP:journals/corr/ChenYWXY15}} & 0.644\\
\multicolumn{1}{| c |}{Pseudo GT} & 0.665\\
\multicolumn{1}{| c |}{ResNet-50$^{\ddagger}$~\cite{He2015}} & 0.675\\
\multicolumn{1}{| c |}{PSPNet$^{\ddagger}$~\cite{DBLP:journals/corr/ZhaoSQWJ16}} & 0.695\\
\multicolumn{1}{| c |}{ResNet-101$^{\ddagger}$~\cite{He2015}} & 0.706\\
\multicolumn{1}{| c |}{DeepLab-v2$^{\ddagger}$~\cite{CP2016Deeplab}} & 0.757\\
\multicolumn{1}{| c |}{MPP-finetuned$^{\ddagger}$~\cite{DBLP:journals/corr/CaoSWS16}} & \bf 0.778\\ \hline
\multicolumn{1}{| c |}{\bf CMES} & 0.775\\ \hline
\end{tabular}
\end{center}\vspace{-.3cm}
\caption{The quantitative cooperative basketball intention results evaluated as the fraction of correct predictions. We compare our Cross-Model EgoSupervision (CMES) scheme with a variety of supervised methods (marked by $\ddagger$). These results indicate that even without using manually annotated intention labels, our method outperforms most supervised methods, and produces almost identical performance as our main baseline ``MPP-finetuned''.\vspace{-0.2cm}}
\label{cbi_results_table}
\end{table}
\subsection{Quantitative Results}
In Table~\ref{cbi_results_table}, we present quantitative cooperative basketball intention results of our method and several other baselines. As our baselines, we use a collection of methods that were successfully used for other computer vision tasks such as image classification, semantic segmentation or saliency detection. These include a 1) Deep Contrast Saliency (DCL) method~\cite{LiYu16}, 2-3) several variations of highly successful DeepLab semantic segmentation systems~\cite{DBLP:journals/corr/ChenYWXY15,CP2016Deeplab} adapted to our task, 4-5) image classification ResNets~\cite{He2015} adapted to our task, 6) one of the top performing semantic segmentation systems PSPNet~\cite{DBLP:journals/corr/ZhaoSQWJ16}, 7-8) a pretrained and finetuned multi-person pose estimation (MPP) network~\cite{DBLP:journals/corr/CaoSWS16}, and 9) a pseudo ground truth obtained from our EgoTransformer.
Note that our Cross-Model EgoSupervision (CMES) method is based on an MPP network architecture~\cite{DBLP:journals/corr/CaoSWS16}, and thus, as our main baseline we use the ``MPP-finetuned'' method, which uses the manually labeled bounding box intention labels to infer with whom the camera wearer will interact. In contrast to this baseline, our CMES method is only trained on the automatically generated pseudo ground truth labels. We note that the supervised methods employing manually labeled data are marked with $^{\ddagger}$. We now discuss several interesting observations based on these results.
\setlength{\tabcolsep}{3pt}
\begin{table}
\begin{center}
\begin{tabular}{ c | F{2.5cm} | F{2.5cm} |}
\cline{2-3}
& \multicolumn{2}{ c |}{{\em Accuracy}}\\
\hline
\multicolumn{1}{| c |}{\em Method} & {\em pseudo GT} & {\em Trained Model}\\ \hline
\multicolumn{1}{| c |}{no $\phi_{loc}$} & 0.481 & 0.560\\
\multicolumn{1}{| c |}{no $\phi_{pose}$} & 0.557 & 0.694\\
\multicolumn{1}{| c |}{no $\phi_{size}$} & 0.571 & 0.731\\ \hline
\multicolumn{1}{| c |}{\bf Ours-Full} & \bf 0.665 & \bf 0.775\\ \hline
\end{tabular}
\end{center}\vspace{-.3cm}
\caption{The quantitative ablation studies documenting the importance of each component in our EgoTransformer scheme. We separately remove each of $\phi_{loc}$, $\phi_{size}$, $\phi_{pose}$ and investigate how the accuracy changes. The second column in the table denotes the accuracy of a pseudo ground truth, while the third column depicts the accuracy of our trained model. Based on these results, we can conclude that each component of our EgoTransformer is essential for an accurate cooperative basketball intention prediction. \vspace{-0.5cm}}
\label{egotransformer_results_table}
\end{table}
\textbf{Comparison with the Supervised Methods.} Based on the results, we observe that despite not using manually annotated bounding box intention labels, our method outperforms a number of supervised baselines and achieves almost equivalent results to our main baseline ``MPP-finetuned'', which was trained using manually annotated cooperative intention labels. Thus, these results indicate the effectiveness of our cross-model EgoSupervision scheme.
\textbf{Comparison with the Pseudo Ground Truth.} One interesting and somewhat surprising observation from Table~\ref{cbi_results_table} is that our cross-model EgoSupervision model achieves substantially better accuracy than the pseudo ground truth, which was used to optimize our model. We conjecture that this happens for the following reasons. The pseudo ground truth labels are constructed using three different signals: 1) an egocentric location prior, 2) an egocentric size prior, and 3) an egocentric pose prior. Note that our constructed pseudo ground truth does not incorporate any visual appearance information, i.e., it does not consider what the players look like. In contrast, during training, our network learns the visual appearance cues indicative of the players with high pseudo ground truth values. Arguably, such visual cues provide a stronger signal for cooperative intention recognition, which then leads to a substantially better performance than the pseudo ground truth labels.
\subsection{Qualitative Results}
In Figure~\ref{preds_fig}, we present our qualitative results, where we show that in most cases, our model successfully learns to predict with whom the camera wearer will cooperate. Furthermore, to gain a better understanding of what the network learned, in Figure~\ref{filters_fig}, we visualize the activations inside the second to last FCN's layer. Note that our network has high activation values around the faces of people with whom the camera wearer intends to cooperate. This makes intuitive sense, as face is probably the most useful cue to recognize the camera wearer's intention to cooperate.
\subsection{Ablation Experiments}
In Table~\ref{egotransformer_results_table}, we present the results analyzing the behavior of our EgoTransformer scheme. Earlier we discussed that to implement our EgoTransformer scheme we exploit three characteristics: 1) egocentric location prior $\phi_{loc}$ , 2) egocentric size prior $\phi_{size}$ , and 3) egocentric pose prior $\phi_{pose}$. We want to investigate how much each of these priors affect 1) the quality of our generated pseudo ground truth data, and 2) the quality of our model trained using such pseudo ground truth. To do this, we run experiments with three baselines where for each baseline we remove one of $\phi_{loc}, \phi_{size},$ or $\phi_{pose}$ components. We denote these three baselines as ``no $\phi_{loc}$'', ``no $\phi_{size}$'' and ``no $\phi_{pose}$'' respectively. Finally, we include the results of our model using the full EgoTransformer scheme.
\captionsetup{labelformat=empty}
\captionsetup[figure]{skip=5pt}
\begin{figure}
\centering
\myfigurethreecol{./paper_figures/filters/input/3_GOPR0017_28851.jpg}
\myfigurethreecol{./paper_figures/filters/fc7_sum/3_GOPR0017_28851.png}
\myfigurethreecol{./paper_figures/filters/gt/3_GOPR0017_28851.jpg}
\myfigurethreecol{./paper_figures/filters/input/1_GOPR0064_47964.jpg}
\myfigurethreecol{./paper_figures/filters/fc7_sum/1_GOPR0064_47964.png}
\myfigurethreecol{./paper_figures/filters/gt/1_GOPR0064_47964.jpg}
\myfigurethreecolcaption{./paper_figures/filters/input/4_GOPR0016_81132.jpg}{First-Person RGB}
\myfigurethreecolcaption{./paper_figures/filters/fc7_sum/4_GOPR0016_81132.png}{FCN Activations}
\myfigurethreecolcaption{./paper_figures/filters/gt/4_GOPR0016_81132.jpg}{Ground Truth}
\captionsetup{labelformat=default}
\setcounter{figure}{5}
\caption{The visualization of the activation values inside the second to last layer in our trained network. Note that the network produces high activation values around the faces of the players in the camera wearer's field of view. This makes intuitive sense, as facial expressions provide the most informative cues for a cooperative basketball intention task. \vspace{-0.5cm}}
\label{filters_fig}
\end{figure}
\captionsetup{labelformat=default}
\captionsetup[figure]{skip=10pt}
Based on the results in Table~\ref{egotransformer_results_table}, we first observe that each of these components has a significant impact on the quality of the pseudo ground truth that we obtain. Specifically, using our full model yields $9.4\%$ better pseudo ground truth results than the second best baseline. Additionally, note that the network trained on the pseudo ground truth of our full model achieves $4.4\%$ higher accuracy than the second best baseline. These results indicate that each component in our EgoTransformer scheme is crucial for learning a high quality cooperative intention model.
\section{Conclusions}
In this work, we present a new task of predicting cooperative basketball intention from a single first-person image. We demonstrate that a first-person image provides strong cues to infer the camera wearer's intentions based on what he/she sees. We use this observation to design a new cross-model EgoSupervision learning scheme that allows us to predict with whom the camera wearer will cooperate, without using manually labeled intention labels. We demonstrate that despite not using such labels, our method achieves similar or even better results than fully supervised methods.
We believe that our proposed cross-model EgoSupervision scheme could be applied on various other first-person vision tasks without the need to manually collect labels for each of such tasks. In the long run, a learning scheme such as ours could effectively replace the supervised methods, which require costly and time consuming annotation process.
\bibliographystyle{plain}
\footnotesize{
\section{Introduction}
Prediction of players' future market value is important because transfer fees can seriously burden professional football clubs. On the other hand, it allows clubs to gain profit by selling a well-performing player at a high price. Famous clubs such as FC Barcelona and Manchester United spend astronomical sums to obtain the best players. Hence, identifying the factors affecting football players' value might bring competitive advantages to small and medium-sized football clubs.
Previous studies indicated that the factors behind a player's market value vary, including demographic information, profile, and real performance drawn from sports statistics sites (e.g., WhoScored\footnote{https://1xbet.whoscored.com/})~\cite{he2015football, muller2017beyond, cwiklinski2021will}. However, sports statistics sites have no detailed information about players' quantitative abilities (e.g., attack, goalkeeping, defense, mental). Hence, there is a limit to grasping how football ability factors affect the player's market value using sports statistics sites. Meanwhile, football video game data from EA Sports\footnote{https://www.ea.com/sports}'s SOFIFA\footnote{https://sofifa.com/} and Football Manager\footnote{https://www.footballmanager.com/} can be used as an alternative to overcome the limitations of the existing studies~\cite{yiugit2019football, yigit2020xgboost}.
Among the video game data, the SOFIFA dataset comprises approximately 55 attributes related to each player's ability (e.g., passing, attacking), position (e.g., goal keeper, midfielder), demographic information (e.g., age, height, weight), monetary value (e.g., wage, release clause), and profile (e.g., club name, international reputation). To make a reliably quantified dataset of all players, EA Sports employs 30 EA producers and 400 outside data contributors who are responsible for ensuring all player data is up to date, while a community of over 6,000 SOFIFA data reviewers or talent scouts from all over the world is constantly providing suggestions and alterations to the database. Next, EA Sports employees build FIFA's game ability attributes dataset every year based on a fair evaluation of more than 30 leagues, more than 700 clubs, and more than 17,000 players and update the dataset every month based on the actual competition performance of the competitors. For example, EA Sports' staff watch every game to find out the pace, not even the major league, but the players in second division leagues such as English Football League (EFL) Championship.
Therefore, although the SOFIFA dataset is video game data, it provides objectively evaluated ability attributes for all players, and its quantified ability data on more than 17,000 players are among the most reliable of all football-related data since they reflect real player stats. Because of the above advantages, the SOFIFA dataset is being used for various research purposes (e.g., match results, player's market value prediction, clustering player's position, and player's performance prediction)~\cite{rajesh2020data, behravan2020novel, pariath2018player, prasetio2016predicting, soto2017gaussian} as shown in Table~\ref{sofifapaper}. However, the SOFIFA dataset does not provide the club's match record attributes, such as win/draw/lose rates and goal/assist points, which are considered by previous studies~\cite{cwiklinski2021will, he2015football, muller2017beyond} using the WhoScored dataset, as shown in Table~\ref{existingpaper}. Furthermore, while not covered in previous studies, WhoScored also provides player match attributes such as violation records (e.g., foul, card), number of games played, and attack point records (e.g., goal, assist, effective shooting).
In terms of market value prediction models, existing studies mainly rely on weak regression techniques such as linear regression, regularized regression (e.g., ridge, lasso), and regression trees for player's market value prediction~\cite{he2015football, muller2017beyond}, as shown in Table~\ref{existingpaper}. However, \cite{behravan2020novel} used a clustering technique, while other well-known ensemble learners, e.g., AdaBoost and XGBoost, were also applied to improve the model's performance in this prediction domain~\cite{yigit2020xgboost, cwiklinski2021will, yiugit2019football}, as shown in Tables~\ref{existingpaper} and \ref{sofifapaper}.
Through this review, we find four research gaps in the existing studies. First, no previous studies have simultaneously used both the SOFIFA and WhoScored attributes to predict market value (i.e., existing studies used either the SOFIFA or the WhoScored dataset). Second, the feature importance of each predictive model is not defined in the existing studies, making it difficult to determine which features can predict the player's market value well. Third, no studies considered hyperparameter optimization to improve the model's performance for predicting players' market value. Fourth, most existing football data-driven studies do not consider reducing learning time and enhancing performance by using state-of-the-art models and hyperparameter optimization techniques (e.g., LightGBM, TPE Bayesian optimization).
In this paper, we first extract attributes from both the SOFIFA and WhoScored datasets: all of the player's ability, profile, demographic information, and monetary value attributes provided by SOFIFA, and the club and individual match record attributes of players belonging to the big five major European soccer leagues (English Premier League, Spain La Liga, Germany Bundesliga, France Ligue 1, Italy Serie A) from WhoScored. Then, a state-of-the-art ensemble model (i.e., a LightGBM model optimized via Bayesian optimization) is utilized to analyze the causal relationship between factors that contribute to the prediction of a player's future market value. The prediction accuracy of the model is validated in terms of root mean square error (RMSE) and mean absolute error (MAE) metrics with cross-validation. Finally, we identify feature importance via SHAP values and derive the features that best explain the market value prediction model among all attributes.
To sum up, the main objectives of this paper are as follows.
\begin{itemize}
\item We crawled and extracted all types of attributes related to the player's ability or the club's match record from two data sources (i.e., SOFIFA and WhoScored) to identify all possible features affecting market value prediction that were not considered in previous studies.
\item We develop a predictive optimized ensemble model (i.e., LightGBM + TPE Bayesian optimization) that can predict a player's market value accurately.
\item We seek the features that have significant impacts on predicting players' market value.
\end{itemize}
\begin{table*}
\centering
\caption{Previous research with market value prediction using real data source}
\label{existingpaper}
\resizebox{1\textwidth}{!}{%
\begin{savenotes}
\begin{tabular}{@{}lllll@{}}
\toprule
\multicolumn{1}{c}{\textbf{Ref.}} &
\multicolumn{1}{c}{\textbf{Research Purpose}} &
\multicolumn{1}{c}{\textbf{Data Source}} &
\multicolumn{1}{c}{\textbf{Features}} &
\multicolumn{1}{c}{\textbf{\begin{tabular}[c]{@{}c@{}}Modeling \\ Technique\end{tabular}}} \\ \midrule\midrule
{\textbf{\cite{he2015football}}} &
\textbf{\begin{tabular}[c]{@{}l@{}}Estimation of player's \\ performance and market value,\\ and relationship between \\ player's performance and \\ market value by regression model\end{tabular}} &
\textbf{{\begin{tabular}[c]{@{}l@{}}Transfer Market\footnote{https://www.transfermarkt.co.uk/wettbewerbe/national},\\ WhoScored, \\ European Football \\ Database\footnote{https://www.footballdatabase.eu/en/}, \\ and Garter\footnote{https://www.theguardian.com/football/2014/dec/21/how-the-guardian-rankedthe-2014-worlds-top-100-footballers}\end{tabular}}} &
\textbf{{\begin{tabular}[c]{@{}l@{}}transfer fee, \\performance assessments, \\age, contract duration\end{tabular}}} &
\textbf{Lasso Regression} \\
\midrule
{\textbf{\cite{majewski2016identification}}} &
\textbf{\begin{tabular}[c]{@{}l@{}}Estimation of player's \\ market value and identifying \\ the determining factors of\\ market value by regression model\end{tabular}} &
{\textbf{Transfer Market}} &
\textbf{\begin{tabular}[c]{@{}l@{}} 5 Human capital factors\\ (e.g., age),\\ 5 Productivity factors\\(e.g., goals scored),\\ 4 Organizational capital factors\\(e.g., total time) \end{tabular}} &
\textbf{Linear Regression} \\
\midrule
{\textbf{\cite{muller2017beyond}}} &
\textbf{\begin{tabular}[c]{@{}l@{}}Estimation of player's \\ market value by regression model\end{tabular}} &
\textbf{{\begin{tabular}[c]{@{}l@{}}Google\footnote{https://www.google.com/},\\ Reddit\footnote{https://www.reddit.com/}, \\ Transfer Market,\\ WhoScored, \\ Wikipedia\footnote{https://en.wikipedia.org/wiki/Main\_Page},\\ Youtube\footnote{https://www.youtube.com/}\end{tabular}}} &
\textbf{\textbf{\begin{tabular}[c]{@{}l@{}} 1 Player valuation\\(e.g., market value), \\ 3 Player characteristics \\(e.g., Age),\\16 Player Performance\\(e.g., Minutes played),\\ 4 Player popularity\\(e.g., Wikipedia page views)\end{tabular}}} &
\textbf{Linear Regression} \\
\midrule
{\textbf{\cite{cwiklinski2021will}}} &
\textbf{\begin{tabular}[c]{@{}l@{}}Supporting a football team building \\ and successful player's transfer \\ by classification model\end{tabular}} &
\textbf{{\begin{tabular}[c]{@{}l@{}}WhoScored,\\ TransferMarket, \\ Sofascore~\footnote{https://www.sofascore.com/}\end{tabular}}} &
\textbf{\textbf{\begin{tabular}[c]{@{}l@{}} 4 Physical parameters \\(e.g., matches played),\\ 28 Technical parameters \\(e.g., Goals from the penalty box),\\ 6 Psychological parameters \\ (e.g, Age)\end{tabular}}} &
\textbf{\begin{tabular}[c]{@{}l@{}}Random Forest, Naive Bayes, \\ and AdaBoost\end{tabular}} \\ \bottomrule
\end{tabular}%
\end{savenotes}
}
\end{table*}
\begin{table*}
\centering
\caption{Previous research with diverse research purpose using game data source along with real data source}
\label{sofifapaper}
\resizebox{\textwidth}{!}{%
\begin{savenotes}
\begin{tabular}{@{}lllll@{}}
\toprule
\multicolumn{1}{c}{\textbf{Ref.}} &
\multicolumn{1}{c}{\textbf{Research Purpose}} &
\multicolumn{1}{c}{\textbf{Data Source}} &
\multicolumn{1}{c}{\textbf{Features}} &
\multicolumn{1}{c}{\textbf{\begin{tabular}[c]{@{}c@{}}Modeling \\ Technique\end{tabular}}} \\ \midrule\midrule
{\textbf{\cite{prasetio2016predicting}}} &
\textbf{\begin{tabular}[c]{@{}l@{}}Estimation of match results\\ by classification model\end{tabular}} &
\textbf{\begin{tabular}[c]{@{}l@{}}Match records of \\ Premier League\footnote{http://www.football-data.co.uk/} \\ and SOFIFA\end{tabular}} &
\textbf{\begin{tabular}[c]{@{}l@{}}Match records season \\ 2010/2011-2015/2016 \\ in terms of 4 variables: \\ Home Offense, Home Defense, \\ Away Offense, and Away Defense\end{tabular}} &
\textbf{\begin{tabular}[c]{@{}l@{}}Logistic Regression model \\ using Newton-Raphson algorithm\end{tabular}} \\
\midrule
{\textbf{\cite{pariath2018player}}} &
\textbf{\begin{tabular}[c]{@{}l@{}}Estimation of player's overall \\ performance and market value \\ by regression model\end{tabular}} &
\textbf{{SOFIFA}} &
\textbf{\begin{tabular}[c]{@{}l@{}}Approximately 36 attributes:\\ Physical (e.g., Age),\\ Attacking (e.g., finishing),\\ Movement (e.g., acceleration),\\ Skill (e.g., dribbling),\\ Defensive (e.g., Marking),\\ Mentality (e.g., aggression),\\ Power (e.g., jumping),\\ General (e.g., overall rating)\end{tabular}} &
\textbf{Linear Regression} \\
\midrule
{\textbf{\cite{rajesh2020data}}} &
\textbf{\begin{tabular}[c]{@{}l@{}}Estimation of player's position \\ by classification model\\ and Clustering player's positions \\ by age and overall performance \\ by clustering model\end{tabular}} &
\textbf{{SOFIFA}} &
\textbf{\begin{tabular}[c]{@{}l@{}}Approximately 35 attributes:\\1 Physical (e.g., BMI), \\5 Attacking (e.g., finishing), \\Movement (e.g., acceleration),\\ 5 Skill (e.g., dribbling), \\3 Defensive (e.g., Marking),\\ 5 Mentality (e.g., aggression),\\ 2 Monetary value (e.g., wage),\\ 4 General (e.g., potential),\\ 5 Power (e.g., shot power)\end{tabular}} &
\textbf{\begin{tabular}[c]{@{}l@{}}Naïve Bayes, Decision Tree, \\ Random forest, SVC\end{tabular}} \\
\midrule
{\textbf{\cite{soto2017gaussian}}} &
\textbf{\begin{tabular}[c]{@{}l@{}}Clustering football players' \\position by clustering model\end{tabular}} &
\textbf{{SOFIFA}} &
\textbf{\begin{tabular}[c]{@{}l@{}}Approximately 40 attributes:\\ 4 Physical (e.g., Weight),\\ 5 Attacking (e.g., Crossing),\\ 5 Movement (e.g., Agility),\\ 5 Skill (e.g., Curve),\\ 3 Defensive (e.g., Tackle),\\ 5 Mentality (e.g., Positioning),\\ 5 Goalkeeping (e.g., Diving),\\ 5 Power (e.g., jumping),\\ 2 General (e.g., potential)\end{tabular}} &
\textbf{\begin{tabular}[c]{@{}l@{}}Gaussian mixture \\model-based clustering, \\ XGBoost for classification\end{tabular}} \\
\midrule
{\textbf{\begin{tabular}[c]{@{}l@{}}\cite{behravan2020novel}\end{tabular}}} &
\textbf{\begin{tabular}[c]{@{}l@{}}Estimation of player's market\\ value by regression model\end{tabular}} &
\textbf{{SOFIFA}} &
\textbf{\begin{tabular}[c]{@{}l@{}}In 55 attributes \\ (Physical, Attacking,\\ Movement, Skill, Defensive,\\ Mentality, Power, General),\\ 5, 32, 30, and 28 features\\ were selected for goalkeeper,\\ strikers, defenders,\\ and midfielders positions \\by PSO clustering,\\ respectively.\end{tabular}} &
\textbf{\begin{tabular}[c]{@{}l@{}} Particle Swarm Optimization\\(PSO) SVR,\\Gery Wolf Optimizer\\(GWO) SVR,\\Inclined Planes\\ System Optimization\\(IPO) SVR,\\Whale Optimization Algorithm\\(WOA) SVR\end{tabular}} \\
\midrule
{\textbf{\cite{yiugit2019football, yigit2020xgboost}}} &
\textbf{\begin{tabular}[c]{@{}l@{}}Estimation of player's \\ market value by regression model\end{tabular}} &
\textbf{{\begin{tabular}[c]{@{}l@{}}Football Manager, \\ Transfer Market\end{tabular}}} &
\textbf{\begin{tabular}[c]{@{}l@{}}4 main chapters which are; \\technical, mental, physical, \\and goalkeeping \\with 49 attributes\end{tabular}} &
\textbf{\begin{tabular}[c]{@{}l@{}}linear regression,\\ ridge regression, \\ lasso regression, \\principal component regression,\\ random forest, XGBoost\end{tabular}} \\
\bottomrule
\end{tabular}%
\end{savenotes}
}
\end{table*}
\section{Proposed Method}
\label{propmeth}
In this section, baseline models and the proposed model is briefly described.
\subsection{Regularized Linear Regression Model}
As the baseline model, we use various linear regression models.
First, we use the multiple linear regression model (LM), a representative linear regression model. It is commonly used when there is one dependent variable and two or more independent variables. To avoid overfitting due to variance in the dataset, we also use other baseline regression models, i.e., lasso~\cite{tibshirani1996regression}, kernel trick-based ridge regression (KRR)~\cite{welling2013kernel}, and elastic net (E-Net) regularization~\cite{zou2005regularization}.
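A hedged scikit-learn sketch of these baselines is given below; the feature matrix $X$, target $y$, and regularization strengths are placeholders rather than our tuned settings.
\begin{verbatim}
from sklearn.linear_model import LinearRegression, Lasso, ElasticNet
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

baselines = {
    "LM": LinearRegression(),
    "Lasso": Lasso(alpha=0.001),
    "E-Net": ElasticNet(alpha=0.001, l1_ratio=0.5),
    "KRR": KernelRidge(alpha=1.0, kernel="polynomial", degree=2),
}

def evaluate(X, y):
    # 5-fold cross-validated RMSE for each baseline (alphas are illustrative)
    return {name: (-cross_val_score(model, X, y, cv=5,
                     scoring="neg_root_mean_squared_error")).mean()
            for name, model in baselines.items()}
\end{verbatim}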
\subsection{Gradient Boosting Decision Tree Model}
We used a gradient boosting decision tree (GBDT)~\cite{friedman2001greedy} as another baseline model. As one of the boosting algorithms, GBDT is generally known to outperform other machine learning algorithms and bagging ensemble learning models such as random forest in terms of accuracy~\cite{hastie2009elements}. Therefore, we use GBDT as the baseline ensemble learning model via the scikit-learn API.
\subsection{LightGBM}
LightGBM is an improved variant of the most state-of-the-art GBDT algorithm introduced in Ke et al. ~\cite{ke2017lightgbm}. LightGBM generally possesses high efficiency (i.e., fast training while maintaining high performance) compared to GBDT and XGBoost on high dimensional data ~\cite{ke2017lightgbm}. Another advantage of LightGBM is that unlike other boosting algorithms that require numerical transformation (e.g., label encoding, one-hot encoding), it handles categorical features internally using the grouping method~\cite{fisher1958grouping}.
Therefore, when using LightGBM, data pre-processing can be shortened because numerical transformation and normalization of features are unnecessary. LightGBM adopts a leaf-wise tree generation strategy that can reduce losses more than the traditional level-wise strategy when the leaf grows. Therefore, the final LightGBM model is composed of a smaller number of decision trees and a smaller number of leaves per decision tree, enabling an efficient and fast decision-making process. Based on these advantages, in this study, we employ LightGBM to increase the learning speed while maintaining excellent performance for football player's market value prediction.
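A minimal LightGBM sketch illustrating this direct handling of categorical columns is shown below; the column names, target name, and hyperparameter values are illustrative assumptions.
\begin{verbatim}
import lightgbm as lgb
import pandas as pd

def train_lightgbm(df, target="value_eur",
                   categorical=("club_name", "nationality")):
    # LightGBM consumes pandas 'category' columns directly,
    # so no label or one-hot encoding is needed
    X = df.drop(columns=[target]).copy()
    for col in categorical:
        if col in X.columns:
            X[col] = X[col].astype("category")
    y = df[target]
    model = lgb.LGBMRegressor(n_estimators=500, learning_rate=0.05,
                              num_leaves=31)
    model.fit(X, y)
    return model
\end{verbatim}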
\subsection{Hyperparameter Optimization}
We use Bayesian optimization with the tree-structured Parzen estimator (TPE) approach as the hyperparameter optimization (HPO) algorithm. Unlike other black-box optimization (BBO) methods (e.g., grid and random search), Bayesian optimization builds a probabilistic model that maps hyperparameters to the probability of a score on the objective function~\cite{bergstra2011algorithms}. Therefore, Bayesian optimization can find better hyperparameters in less time than other BBO methods.
Bayesian optimization is formalized as sequential model-based optimization (SMBO). The choice of surrogate model (i.e., the model fitted to the evaluated points) affects the SMBO results; common choices include Gaussian processes, random forest regression, and TPE. TPE is known to be more flexible than traditional Bayesian optimization~\cite{bergstra2011algorithms}. Moreover, when the TPE algorithm is used for HPO, it shows better accuracy than manual search, Bayesian optimization with Gaussian processes, particle swarm optimization, the Nelder--Mead procedure, and random search~\cite{olof2018comparative, bergstra2011algorithms}. For the above reasons, we adopt Bayesian optimization with the TPE algorithm for HPO. In this study, we use the state-of-the-art HPO framework Optuna~\cite{akiba2019optuna}, which has been found to be better than Hyperopt~\cite{bergstra2013making} w.r.t.\ ease of use, search-space definition, callbacks, run pruning, and visualization. In Optuna, we experiment with various conditions, including the two TPE algorithms (i.e., independent TPE and multivariate TPE) and Optuna's pruning function (which can reduce the HPO time while maintaining the performance of the LightGBM model), and we also compare against the condition in which HPO is not used.
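The following sketch illustrates, under simplifying assumptions, how such a TPE-based Optuna study could be set up for LightGBM; the search space shown here is only an excerpt, and \texttt{X}, \texttt{y} are assumed to be the prepared feature matrix and target.
\begin{verbatim}
# Minimal sketch of an Optuna TPE study for LightGBM (excerpt of the search space).
# X and y are assumed to be the prepared feature matrix and target.
import optuna
import lightgbm as lgb
from sklearn.model_selection import cross_val_score

def objective(trial):
    params = {
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 1.0, log=True),
        "num_leaves": trial.suggest_int("num_leaves", 8, 256),
    }
    model = lgb.LGBMRegressor(random_state=42, **params)
    score = cross_val_score(model, X, y, cv=10,
                            scoring="neg_root_mean_squared_error").mean()
    return -score                                   # Optuna minimizes the RMSE

sampler = optuna.samplers.TPESampler(multivariate=True)  # False -> independent TPE
pruner = optuna.pruners.MedianPruner()                   # optional pruning condition
study = optuna.create_study(direction="minimize", sampler=sampler, pruner=pruner)
study.optimize(objective, n_trials=100)
\end{verbatim}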
\section{Experiment}
\label{exp}
\subsection{Data Preprocessing and Feature Extraction}
The dataset consists of the 2022 SOFIFA dataset provided by sofifa.com and the 2021–2022 ranking table of the big five major European soccer leagues provided by WhoScored. First, we performed data pre-processing consisting of data selection, noise handling, data merging, data grouping, and data transformation. The 2,720 players who belong to the 20 top-division European clubs (i.e., clubs ranked from 1st to 4th in each of the five European leagues) of the 2021–2022 Union of European Football Associations (UEFA) Champions League\footnote{https://www.uefa.com/uefachampionsleague/} were selected from the whole player list of the 2022 SOFIFA dataset, and rows with missing values in any column were removed.
The SOFIFA dataset includes ability, profile, and position attribute types that are quantitatively measured in the game based on a player's actual performance. As shown in Figure~\ref{ability}, ability attributes are interval data describing the player's soccer performance-related stats on a scale from 1 to 99, with a total of 35 ability attributes. In addition, the SOFIFA dataset provides 6 calculated ability attributes, namely "shooting (SHO)", "pace (PAC)", "passing (PAS)", "dribble (DRI)", "defending (DEF)", and "physical (PHY)", each obtained by averaging two or three of 17 of the 35 ability attributes, and provides the 'base stats' attribute as the sum of the six calculated ability attributes, as depicted in Table~\ref{abilityattributes}. Furthermore, this study classified the 35 ability attributes into seven types of calculated ability attributes and extracted the combined values as the "attacking", "skill", "movement", "power", "mental", "defending", and "goalkeeping" attributes, and extracted the "total stats" attribute as the sum of all 35 ability attributes, as shown in Table~\ref{abilityattributes}.
The SOFIFA dataset also provides profile attributes that correspond to real-world data of football players, as shown in Figure~\ref{ability}. Profile attributes consist of two types of categorical data (ordinal and nominal data), as shown in Table~\ref{otherattributes}. Among the profile attributes, "international reputation (IR)", "weak foot", "skill moves", and "attack/defense work rate" are ordinal data, and "preferred foot" is nominal data. Since "preferred foot" is nominal, we used one-hot encoding to convert it into dummy values when extracting the features. Furthermore, Table~\ref{otherattributes} shows the description and the range of possible values of each profile attribute.
There are two position attributes (position and best position), which indicate suitable positions among 27 football positions based on the history of positions the player has actually played, as shown in Figure~\ref{ability}. For example, the SOFIFA dataset lists "Left Midfielder (LM)", "Left Winger (LW)", and "Center Forward (CF)" as the suitable positions of Son Heung-min, the same positions he plays in actual games, as depicted in Figure~\ref{ability}. The SOFIFA dataset generally provides between one and three suitable positions per player, considering the history of positions played in actual matches: typically one position for goalkeepers and up to three suitable positions for strikers, defenders, and midfielders. In addition, the "best position" attribute expresses the single position in which the player plays the most and performs the best in the game for that year. In the case of Son Heung-min, the best position is provided as "Best Position Left Midfielder (BPLM)". Similarly, the position and best position attributes were extracted as features by encoding them as categorical dummy values. A detailed description of the position attributes is given in Table~\ref{positionattributes}.
As shown in Table~\ref{otherattributes}, the "overall rating", "best overall rating" (BOV), "potential", and "growth" attributes are ability attributes that also take several profile attributes into account. The overall rating and BOV are computed as the sum of a weighted average of the ability attributes (range, 1–99) and the international reputation (range, 1–3; 1 point is added for an IR of 3, 2 points for an IR of 4, and 3 points for an IR of 5). The "potential" attribute is the player's potential for the current season and is calculated by adding a value considering the player's "age", "international reputation", and actual game history (e.g., goals and assists) to the overall rating score. Therefore, "potential" is always equal to or higher than "overall rating", on average by 5 points (range, 0–23). Finally, "growth" represents the player's growth this season, obtained by subtracting "overall rating" from "potential".
The SOFIFA dataset, along with the WhoScored dataset, provides demographic and physical attributes, career attributes, and monetary value attributes describing the actual player's state, as shown in Figure~\ref{ability}. Among the demographic attributes, nationality has too many distinct values to encode the country of every player in the five major European leagues as dummy variables. Therefore, we grouped the players' nationalities into five continents (Africa, America, Asia, Europe, and Oceania), each denoted by a dummy value as shown in Table~\ref{otherattributes}. Next, heights given in feet and inches are converted to cm, and weights given in lbs are converted to kg. In addition, the BMI level computed from height and weight is added as a feature. Among the monetary values, we used the market value as the ground truth for market value prediction. The average market value of the 2,720 players is $9,020.28 \pm 11,299$ k€ (range, 11,299 k€ –105,500 k€). We unify monetary values given in millions (m€) or thousands (k€) to k€ (e.g., release clause, market value, and wage).
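The following is a minimal pre-processing sketch of the unit conversions, BMI computation, continent grouping, and dummy encoding described above; the column names and the (truncated) continent mapping are assumptions.
\begin{verbatim}
# Minimal pre-processing sketch (column names and continent mapping are assumptions).
import pandas as pd

df = pd.read_csv("sofifa_2022_raw.csv")        # hypothetical raw SOFIFA export

# Height "6'2\"" -> cm, weight "187lbs" -> kg
def to_cm(h):
    feet, inches = h.replace('"', '').split("'")
    return round((int(feet) * 12 + int(inches)) * 2.54, 1)

df["height_cm"] = df["Height"].apply(to_cm)
df["weight_kg"] = df["Weight"].str.replace("lbs", "").astype(float) * 0.453592
df["bmi"] = df["weight_kg"] / (df["height_cm"] / 100) ** 2

# Group nationality into continents and dummy-encode nominal attributes
continent_map = {"England": "Europe", "Brazil": "America",
                 "Korea Republic": "Asia"}     # excerpt only
df["continent"] = df["Nationality"].map(continent_map)
df = pd.get_dummies(df, columns=["continent", "Preferred Foot"])
\end{verbatim}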
Additionally, we crawled and extracted attributes from WhoScored regarding the actual match records of the 2021–2022 season (e.g., goal points of each player and club, and match points of each club) that are not provided by the SOFIFA 2022 dataset. The extracted club match record attributes include the goal points of the player's club (goal difference, goals against, and goal acquisition), the winning rate of the player's club (e.g., victory points, wins, draws, and losses), and the team standing (ranking of the player's club) in this season, as shown in Table~\ref{otherattributes}. The player match record attributes extracted from WhoScored comprise attack performance attributes (score, assists, goal points, shooting, effective shooting, personal ranking, corner kicks, penalty kicks), violation attributes (fouls, yellow cards, red cards, offsides), and the number of games played, as depicted in Table~\ref{otherattributes}. The attributes extracted from WhoScored were then merged with each player in the SOFIFA 2022 dataset. Through the data processing and feature extraction steps above, this paper finally extracted 72 attributes (52 ability attributes, five demographic attributes, seven profile attributes, four ability-and-profile attributes, two monetary value attributes, two position attributes, three goal point attributes, four winning rate attributes, one club ranking attribute, eight club match record attributes, and 13 player match record attributes) for the player's market value prediction.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fifason.png}
\caption{Player (e.g., Son Heung-min) attributes in SOFIFA dataset~\cite{SOFIFA}}
\label{ability}
\end{figure*}
\begin{table*}
\centering
\caption{Description and range of possible value in calculated ability attributes}
\label{abilityattributes}
\resizebox{\textwidth}{!}{%
\begin{tabular}{lll}
\toprule
\multicolumn{1}{c}{\textbf{\begin{tabular}[c]{@{}l@{}}Calculated Ability\\ Attributes\end{tabular}}} &
\multicolumn{1}{c}{\textbf{Calculation formula}} & \multicolumn{1}{c}{\textbf{\begin{tabular}[c]{@{}l@{}}Range of \\Possible Value\end{tabular}}}
\\ \hline\hline
PAC & (Sprint Speed + Acceleration)/2 & 1--99 \\
SHO & (Finishing + Long Shots + Shot Power)/3 & 1--99 \\
PAS & (Crossing + Short Passing + Long Passing)/3 & 1--99 \\
DRI & (Ball Control + Agility + Balance)/3 & 1--99 \\
DEF & (Marking + Tackling + Strength)/3 & 1--99 \\
PHY & (Strength + Stamina + Jumping)/3 & 1--99 \\
Attacking &
\begin{tabular}[c]{@{}l@{}}Crossing + Finishing + Heading Accuracy + Short Passing + Volleys\end{tabular} &
5--495 \\
Skill & \begin{tabular}[c]{@{}l@{}}Dribbling + Curve + FK Accuracy + Long Passing + Ball Control\end{tabular} & 5--495 \\
Movement & \begin{tabular}[c]{@{}l@{}}Acceleration + Agility + Sprint Speed + Reactions + Balance\end{tabular} & 5--495 \\
Power & \begin{tabular}[c]{@{}l@{}}Shot Power + Jumping + Stamina + Strength + Long Shots\end{tabular} & 5--495 \\
Defending & Marking + Sliding Tackle + Standing Tackle & 3--297 \\
Mentality &
\begin{tabular}[c]{@{}l@{}}Aggression + Reactions + Positioning + Interceptions + Vision + Composure\end{tabular} &
6--594 \\
Goalkeeping &
\begin{tabular}[c]{@{}l@{}}GK Positioning + GK Diving + GK Handling + GK Kicking + GK Reflexes\end{tabular} &
5--495 \\
Overall Rating & Overall rating in position & 1--99 \\
BOV & Overall rating in best position & 1--99 \\
Base stats & PAC+SHO+PAS+DRI+DEF+PHY & 6--594 \\
Total stats & Sum of total 35 ability elements & 39--3500 \\ \bottomrule
\end{tabular}%
}
\end{table*}
\begin{table*}
\caption{Description of monetary value, demographic, profile, and match record attributes in 2021--2022 season}
\label{otherattributes}
\resizebox{\textwidth}{!}{%
\begin{tabular}{lllll}
\toprule
\multicolumn{1}{c}{\textbf{\begin{tabular}[c]{@{}l@{}}Types of\\ Attributes\end{tabular}}} & \multicolumn{1}{c}{\textbf{Attributes}} & \multicolumn{1}{c}{\textbf{Description of Attributes}} & \multicolumn{1}{c}{\textbf{\begin{tabular}[c]{@{}l@{}}Data\\ Type\end{tabular}}} & \multicolumn{1}{c}{\textbf{\begin{tabular}[c]{@{}l@{}}Range of \\Possible Value\end{tabular}}} \\ \midrule\midrule
\multirow{3}{*}{Monetary value}
& Market value & Football market value of player & ratio & no limitation \\
& Wage & The weekly salary of a player from affiliated club & ratio & no limitation \\
& Release clauses & Buyout clause of player to transfer from affiliated club to another club & ratio & no limitation \\
\midrule
\multirow{5}{*}{Demographic} & Age & Player's age & ratio & no limitation \\
& Height & Player's height & ratio & no limitation \\
& Weight & Player's weight & ratio & no limitation \\
& BMI & Player's body mass & ratio & no limitation \\
 & Nationality continent & \begin{tabular}[c]{@{}l@{}}Continent to which the nationality of the player belongs \\ (Africa, America, Asia, Europe, Oceania)\end{tabular} & nominal & 0 or 1 \\
\midrule
\multirow{11}{*}{Profile} & International reputation & The affiliated club and individual international reputation & ordinal & 1, 2, 3, 4, or 5 \\
& Preferred foot & Preferred foot type (Right, Left) & nominal & 0 or 1 \\
 & Five big leagues & \begin{tabular}[c]{@{}l@{}}The big-five league to which the player's team belongs \\ (Spain Primera Liga, Italy Serie A, France Ligue 1, English Premier League, German Bundesliga)\end{tabular} & nominal & 0 or 1 \\
& Weak foot & shot power and ball control attributes for other foot than preferred foot & ordinal & 1, 2, 3, 4, or 5 \\
& Skill moves & Number of special skills available & ordinal & 1, 2, 3, 4, or 5 \\
& Attacking work rate & \begin{tabular}[c]{@{}l@{}}The rate of a player's behavior on the pitch in attacking work \\ (Low = 0, Medium =1, High =2)\end{tabular} & ordinal & 0, 1, or 2 \\
 & Defensive work rate & \begin{tabular}[c]{@{}l@{}}The rate of a player's behavior on the pitch in defensive work \\ (Low = 0, Medium =1, High =2)\end{tabular} & ordinal & 0, 1, or 2
\\
\midrule
\multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}Ability attributes \\considered by\\ profile attributes\end{tabular}}
& Overall rating & \begin{tabular}[c]{@{}l@{}}Weighted average of ability attributes + international reputation\\ depending on position\end{tabular} & interval & 1--99 \\
& Best Overall rating & \begin{tabular}[c]{@{}l@{}}Weighted average of ability attributes + international reputation\\ depending on best position\end{tabular} & interval & 1--99 \\
& Potential & \begin{tabular}[c]{@{}l@{}}Player's potential of the current season \\ (overall rating + value considering age, international reputation)\end{tabular} & interval & 1--99 \\
& Growth & Player's growth of the current season (Potential-Overall Rating) & interval & 0--98
\\
\midrule
\multirow{8}{*}{Club's Match Record} & Goal acquisition & Total number of goals scored by the player's club in the season & ratio & no limitation \\
 & Goal against & Total number of goals scored by the opposing team in the season & ratio & no limitation \\
 & Goal difference & Goal acquisition - Goal against & ratio & no limitation \\
 & Victory point & Total victory points in the season & ratio & no limitation \\
 & Win point & The number of wins in the games of the season & interval & 1--38 \\
 & Draw point & The number of draws in the games of the season & interval & 1--38 \\
 & Lose point & The number of losses in the games of the season & interval & 1--38
\\
& Team standing & Team rankings for each league in the season & ordinal & 1--20
\\
\midrule
\multirow{12}{*}{Player's Match Record} & Scoring point & A player's scoring record in the season & ratio & no limitation \\
& Assist point & A player's individual assist record in the season & ratio & no limitation \\
& Goal point & Scoring point + Assist point & ratio & no limitation \\
 & Shooting & The number of shots the player has taken in the games of a season & ratio & no limitation \\
 & Effective shooting & The number of effective shots the player has taken in the games of a season & ratio & no limitation \\
 & Personal score ranking & The player's scoring ranking in the season & ordinal & 1--20 \\
& Corner kick & The number of corner kick a player has taken in the game for a season & ratio & no limitation
\\
& Penalty kick & The number of penalty kick a player has taken in the game for a season & ratio & no limitation
\\
& Foul & The number of times the player was fouled in the game for a season & ratio & no limitation
\\
& Yellow card & The number of times the player was warned in the game for a season & interval & 1--76
\\
& Red card & The number of times the player was sent off in the game for a season & interval & 1--38
\\
 & Offside & The number of times the player was caught offside in the games of a season & ratio & no limitation
\\
\bottomrule
\end{tabular}
}
\end{table*}
\begin{table*}
\centering
\caption{Description of football position types}
\label{positionattributes}
\resizebox{\textwidth}{!}{%
\begin{tabular}{@{}llll@{}}
\toprule
\textbf{Position Category} & \textbf{Position Sub-category} & \textbf{Position Abbreviation} & \textbf{Position Description} \\ \midrule\midrule
\multirow{8}{*}{Attacker} & \multirow{3}{*}{Striker} & ST & Striker \\
& & LS & Left Striker \\
& & RS & Right Striker \\ \cmidrule(l){2-4}
& \multirow{3}{*}{Forward} & LF & Left Forward \\
& & CF & Centre Forward \\
& & RF & Right Forward \\ \cmidrule(l){2-4}
& \multirow{2}{*}{Winger} & LW & Left Winger \\
& & RW & Right Winger \\ \hline
\multirow{11}{*}{Midfielder} & \multirow{2}{*}{Wide Midfielder} & LM & Left Midfielder \\
& & RM & Right Midfielder \\ \cmidrule(l){2-4}
& \multirow{3}{*}{Attacking Midfielder} & LAM & Left Attacking Midfielder \\
& & CAM & Centre Attacking Midfielder \\
& & RAM & Right Attacking Midfielder \\ \cmidrule(l){2-4}
& \multirow{3}{*}{Central Midfielder} & LCM & Left Central Midfielder \\
& & CM & Central Midfielder \\
& & RCM & Right Central Midfielder \\ \cmidrule(l){2-4}
& \multirow{3}{*}{Defensive Midfielder} & LDM & Left Defensive Midfielder \\
& & CDM & Central Defensive Midfielder \\
& & RDM & Right Defensive Midfielder \\ \hline
\multirow{7}{*}{Defender} & \multirow{3}{*}{Center Back} & LCB & Left Central Back \\
& & CB & Central Back \\
& & RCB & Right Central Back \\ \cmidrule(l){2-4}
& \multirow{2}{*}{Full Back} & LB & Left Back \\
& & RB & Right Back \\ \cmidrule(l){2-4}
& \multirow{2}{*}{Wing Back} & LWB & Left Wing Back \\
& & RWB & Right Wing Back \\ \hline
Goalkeeper & Goalkeeper & GK & Goalkeeper \\ \bottomrule
\end{tabular}%
}
\end{table*}
\subsection{Correlation Analysis}
We identify the features that correlate with the players' market value by computing the correlation value of each feature with the target. The features with correlation values of 0.4 or more are as follows: 'Release\_Clause', 'Wage', 'Overall', 'Potential', 'Best Composure', 'Short Passing', 'Curve', 'Long Passing', 'Ball Control', 'Vision', 'Total Stats', 'Base Stats', 'BOV', 'PAS', 'DRI', 'Total Movement', 'Total Power', 'Goal Acquisition', 'Goal Difference', 'Winning Points', 'Win', and 'IR'. In Section~\ref{sec:importance}, we then compare this list of highly correlated features, obtained by correlation analysis against the ground truth data, with the list of important features (i.e., indicators of the importance of each feature when interpreting the ML models).
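A minimal sketch of this screening step is shown below; the file and target column names are assumptions.
\begin{verbatim}
# Minimal sketch of the correlation screening (threshold 0.4 as in the text).
# File and target column names are assumptions.
import pandas as pd

df = pd.read_csv("sofifa_2022_merged.csv")      # hypothetical merged dataset
corr = df.corr()["market_value"]                # Pearson correlation with the target
high_corr = corr[corr.abs() >= 0.4].drop("market_value").sort_values(ascending=False)
print(high_corr)
\end{verbatim}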
\subsection{Experimental setup}
The experiments were run on a machine with an Intel Xeon Gold 6240 2.6 GHz CPU, 32 GB of PC4-2933 MT/s RAM, and a 64-bit Windows 10 operating system, with Pandas version 1.2.2 and NumPy version 1.20.1 installed.
\subsection{Result and Discussion}
\subsubsection{Evaluation and Validation Metrics}
For the evaluation and validation process, we split the data into train and test sets with a 20:80 ratio and use only the train set in the validation process with $k$-fold cross-validation ($k$ = 10). We use the mean absolute error (MAE) and the root mean squared error (RMSE) as evaluation metrics to compare the performance of the proposed methods.
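A minimal sketch of this protocol is given below; \texttt{X} and \texttt{y} are assumed to be the prepared feature matrix and target, and the split argument follows the ratio stated above (it should be adjusted if the opposite train/test proportion was intended).
\begin{verbatim}
# Minimal sketch of the validation protocol: 10-fold CV on the training split,
# reporting RMSE and MAE. X and y are assumed to be defined; the 20:80 split
# follows the text (adjust train_size if 80:20 was intended).
import lightgbm as lgb
from sklearn.model_selection import train_test_split, KFold, cross_validate

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.2, random_state=42)

cv = KFold(n_splits=10, shuffle=True, random_state=42)
scores = cross_validate(lgb.LGBMRegressor(random_state=42), X_tr, y_tr, cv=cv,
                        scoring=("neg_root_mean_squared_error",
                                 "neg_mean_absolute_error"))
print("RMSE:", -scores["test_neg_root_mean_squared_error"].mean())
print("MAE :", -scores["test_neg_mean_absolute_error"].mean())
\end{verbatim}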
\begin{figure}
\centering
\subfigure[GBDT model]{\label{fig:a}\includegraphics[width=0.49\textwidth]{hyperparameter_importances_gbm.jpg}}
\subfigure[LightGBM model]{\label{fig:b}\includegraphics[width=0.49\textwidth]{hyperparameter_importances_lgbm_.jpg}}
\caption{Hyperparameter importances of the LightGBM model with the highest validation score (RMSE: 716.38) and of the GBDT model with the highest validation score (RMSE: 949.94)}
\label{fig:hyperparameterimportances}
\end{figure}
\begin{table*}
\caption{RMSE and MAE performance comparison of the prediction models. Bold text denotes the best score for each metric (RMSE and MAE) under each HPO condition (default value, I-TPE, M-TPE)}
\label{results}
\resizebox{1\textwidth}{!}{%
\begin{tabular}{@{}lllllll@{}}
\toprule
& \multicolumn{2}{c}{\textbf{non-HPO (default)}} & \multicolumn{2}{c}{\textbf{I-TPE}} & \multicolumn{2}{c}{\textbf{M-TPE}} \\ \cline{2-7}
\textbf{\begin{tabular}[c]{@{}c@{}}\end{tabular}} & \multicolumn{1}{c}{RMSE} & \multicolumn{1}{c}{MAE} & \multicolumn{1}{c}{RMSE} & \multicolumn{1}{c}{MAE} & \multicolumn{1}{c}{RMSE} & \multicolumn{1}{c}{MAE} \\ \hline\hline
\textbf{LM} & 2,334.65 & 1,467.45 & \multicolumn{1}{c}{-} & \multicolumn{1}{c}{-} & \multicolumn{1}{c}{-} & \multicolumn{1}{c}{-} \\
\textbf{Lasso} & 2,308.03 & 1,439.23 & 2,232.92 & 1,349.62 & 2,238.67 & 1,354.32 \\
\textbf{E-Net} & 3,222.50 & 1,859.26 & 2,297.33 & 1,405.03 & 2,308.56 & 1,411.48 \\
\textbf{KRR} & 2,325.09 & 1,455.31 & 2,321.97 & 1,451.22 & 2,321.97 & 1,451.22 \\
\textbf{GBDT} & \textbf{849.19} & 417.88 & 696.73 & 356.46 & 1,011.93 & 546.97 \\
\textbf{LightGBM} & 1,069.91 & \textbf{387.44} & \textbf{609.42} & \textbf{211.17} & 645.55 & \textbf{239.74} \\
\textbf{LightGBM (pruning)} & \multicolumn{1}{c}{-} & \multicolumn{1}{c}{-} & 636.61 & 228.93 & \textbf{632.16} & 252.57 \\
\bottomrule
\end{tabular}
}
\end{table*}
\begin{figure}
\centering
\subfigure[]{\includegraphics[width=0.45\textwidth]{ITPE_1.png}}
\subfigure[]{\includegraphics[width=0.45\textwidth]{ITPE_2.png}}
\caption{Feature effect (a) and feature importance (b) for best model via SHAP value in ITPE}
\label{fig:featureimportance1}
\end{figure}
\begin{figure}
\centering
\subfigure[]{\includegraphics[width=0.45\textwidth]{MTPE_1.png}}
\subfigure[]{\includegraphics[width=0.45\textwidth]{MTPE_2.png}}
\caption{Feature effect (a) and feature importance (b) for best model via SHAP value in MTPE}
\label{fig:featureimportance2}
\end{figure}
\subsubsection{Optimized Hyperparameter Value and Importance}
We obtain 36 HPO results ($3\times6\times2$) with the Optuna library for the six models (i.e., lasso, E-Net, KRR, GBDT, LightGBM, and LightGBM with pruning) and the two TPE algorithms (i.e., independent TPE and multivariate TPE) in order to obtain reliable results for the optimized hyperparameter values and the feature importance. In addition, we run 100 training trials for each result. For the regularized regression models, most hyperparameters (e.g., $\alpha$, \emph{tol}, \emph{l1\_ratio}, \emph{coef0}, and $\gamma$) are optimized. For the LightGBM and GBDT models, all hyperparameters are searched within the recommended search spaces (e.g., learning rate [search space: 0--1, default: 0.1] and n\_estimators [search space: 50--3000, default: 100] in LightGBM).
To understand which hyperparameters influence the performance, we compare the hyperparameter importance for the GBDT and LightGBM models that show the highest validation performance. As shown in Figure~\ref{fig:hyperparameterimportances}, the importance of \emph{loss} was significantly higher than that of the other hyperparameters in GBDT. In the case of the LightGBM model, the importance of the \emph{boosting\_type}, \emph{feature\_fraction}, \emph{bagging\_fraction}, \emph{lambda\_l2}, and \emph{min\_split\_gain} hyperparameters is significantly higher than that of the other hyperparameters. Across all experiments, the hyperparameters with high importance are \emph{boosting\_type}, \emph{feature\_fraction}, and \emph{bagging\_fraction} for the LightGBM model. Moreover, the importance of \emph{loss} is relatively higher than that of the other hyperparameters for the GBDT model.
\subsubsection{Results of Evaluation}
We divide the results into the case where HPO is not applied and the cases where HPO is applied using the I-TPE and M-TPE algorithms (see Table \ref{results}). Without HPO, the performance of GBDT is the highest among all models in terms of the RMSE metric. However, LightGBM shows its superiority regardless of HPO. On average, the performance of LightGBM is 3.8 times and 6.6 times better than that of the linear and regularized regression models in terms of RMSE and MAE, respectively.
The results indicate that HPO improves the performance of GBDT and LightGBM. However, the linear and regularized regression models (e.g., lasso, E-Net, and KRR) do not benefit from HPO; in some cases, HPO even worsens the performance of such models.
With I-TPE-based HPO, the LightGBM model shows approximately 1.8 times better performance than the unoptimized LightGBM model w.r.t.\ both the RMSE and MAE metrics. Similarly, with M-TPE-based HPO, unpruned LightGBM outperforms the pruned one w.r.t.\ the MAE metric, but the difference is less significant in terms of the RMSE metric. Overall, I-TPE-based HPO shows a more remarkable improvement than M-TPE-based HPO, in contrast to a previous study~\cite{falkner2018bohb}. Our experimental results also indicate no significant performance difference between unpruned and pruned LightGBM; however, the learning cost can be lowered by a factor of two when pruning is applied. This supports the previous finding that pruning is essential for maintaining the performance of LightGBM while making the learning process efficient.
\subsubsection{Features Importance}
\label{sec:importance}
We identify the importance and effects of the top-20 features, out of 124 features, that contribute to the player value prediction model using SHAP values (see Figure~\ref{fig:featureimportance1}). In this figure, we plot the best performing model, i.e., the I-TPE-based LightGBM model. As a result, the most important features for a player's market value are 'Overall' (i.e., the average of all ability values), 'Release\_Clause' (i.e., the buyout amount in the transfer market's price competition system for which the player can be recruited from the club that owns him), 'Age', and 'BOV' (i.e., the overall rating in the position with the highest ability stats of the player).
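A minimal sketch of this SHAP analysis is shown below, assuming \texttt{model} and \texttt{X\_tr} are the tuned LightGBM model and training features from the earlier sketches.
\begin{verbatim}
# Minimal sketch of the SHAP-based importance analysis for the tuned LightGBM model.
# `model` and `X_tr` are assumed to come from the earlier sketches.
import shap

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_tr)

shap.summary_plot(shap_values, X_tr)                   # feature effects (beeswarm)
shap.summary_plot(shap_values, X_tr, plot_type="bar")  # mean |SHAP| importance
\end{verbatim}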
We are interested in comparing the results of the correlation analysis with the feature importance, as shown in Figure~\ref{fig:featureimportance1} and Figure~\ref{fig:featureimportance2}. Interestingly, 'Release\_Clause' ranks first in the correlation analysis (with a correlation value of 0.96), but it ranks second in the SHAP-based feature importance. Two other features, 'Overall' and 'Potential', show similar patterns. Therefore, it can be concluded that there is a significant correlation between the feature importance results and the correlation analysis results.
\section{Conclusion}
\label{conc}
This study proposed the application of several state-of-the-art optimized ensemble techniques in sports analytics. This new high-accuracy ML-based predictive model uncovers the potential value of players in the sports field and provides teams with the potential to earn economic returns. In future work, we will try to improve the model's performance with other powerful optimized ensemble models (e.g., XGBoost, CatBoost) tuned via TPE Bayesian optimization, which were not used in this study. After that, we will use the stacking ensemble technique (i.e., a meta-learning-based ensemble technique that learns how to derive the best performance by suitably combining multiple models) to combine the optimized GBDT, LightGBM, XGBoost, and CatBoost models. Through this, we will present an advanced ensemble model for prediction with improved efficiency and performance in the sports analytics field.
\bibliographystyle{elsarticle-num-names}
\section{Introduction}\label{section:introduction}
As one of the most popular sports, soccer had a market share of about $45\%$ of the $500$ billion global sports industry in $2020$~\cite{soccer-market-share}. Soccer broadcasting and streaming are becoming increasingly popular, as the interest in viewing videos from soccer games grows day by day. In this respect, it is important to provide game summaries and highlights, as a large percent of audiences prefer to view only the main events in a game, such as goals, cards, saves, and penalties. However, generating such summaries and event highlights requires expensive equipment and a lot of tedious, cumbersome, manual labor (Figure~\ref{figure:tagging}).
Automating and making the entire end-to-end video analysis pipeline more intelligent is seen as the ultimate goal in sports video production since it could provide fast game highlights at a much lower cost. In this context, recent developments in \ac{AI} technology have shown great potential, but state-of-the-art approaches are far from being good enough for a practical scenario that has demanding real-time requirements, as well as strict performance criteria (where at least the detection of official events such as goals and cards must be $100\%$ accurate).
Even though the event detection and classification (spotting) operation have by far received the most attention~\cite{Karpathy2014, Simonyan2014, Lin2018, Tran2018, Cioppa2019, Lin2019, Rongved2020, Rongved2021Using, Rongved2021}, it is maybe the most straightforward manual operation in an end-to-end analysis pipeline, i.e., when an event of interest occurs in a soccer game, the annotator marks the event on the timeline (Figure~\ref{figure:tagging2}). However, the full pipeline also includes various other operations which serve the overall purpose of highlight and summary generation (Figure~\ref{figure:tagging3}), some of which require careful considerations such as selecting appropriate clipping points, finding an appealing thumbnail for each clip, writing short descriptive texts per highlight, and putting highlights together in a game summary, often with a time budget in order to fit into a limited time-slot during news broadcasts.
In this challenge, we assume that event detection and classification have already been undertaken\footnote{This operation is already addressed (but still far from being solved) by the research community, and several related challenges exist, including \url{https://eval.ai/web/challenges/challenge-page/1538/overview}.}.
Therefore, the goal of this challenge is to assist the later stages of the automation of such a production pipeline using \ac{AI}. In particular, algorithmic approaches for event clipping, thumbnail selection, and game summarization should be developed and compared on an entirely new dataset from the Norwegian Eliteserien.
\section{Tasks}\label{section:tasks}
In this challenge, we focus on event clipping, thumbnail selection, and game summarization. Soccer games contain a large number of event types, but we focus on \textit{cards} and \textit{goals} within the context of this challenge.
\subsection{Task 1: Event Clipping}\label{section:task2}
Highlight clips are frequently used to display selected events of importance from soccer games (Figure~\ref{figure:event-clipping-examples}). When an event is detected (spotted), an associated timestamp indicates when the event happened, e.g., a tag in the video where the ball passes the goal line. However, this single annotation is not enough to generate a highlight clip that summarizes the event for viewers. Start and stop timestamps are needed to extract a highlight clip from the soccer game video (e.g., clipping the frames between x seconds before the event annotation and y seconds after the event annotation).
In the area of event clipping, the amount of existing work is limited. Koumaras et al.~\cite{Koumaras2006} presented a shot detection algorithm.
Zawbaa et al.~\cite{Zawbaa2011} implemented a more tailored algorithm to handle cuts that transitioned gradually over several frames.
Zawbaa et al.~\cite{Zawbaa2012} classified soccer video scenes as long, medium, close-up, and audience/out of field.
Several papers presented good results regarding scene classification~\cite{Xu2001, Zawbaa2012, Rafiq2020}.
As video clips can also contain replays after an event, and replay detection can help to filter out irrelevant replays, Ren et al.~\cite{Ren2005} introduced the class labels play, focus, replay, and breaks.
Detecting replays in soccer games using a logo-based approach was shown to be effective using a \ac{SVM} algorithm, but not as effective using an \ac{ANN}~\cite{Zawbaa2011, Zawbaa2012}.
Furthermore, it was shown that audio may be an important modality for finding good clipping points.
Tjondronegoro et al.~\cite{Tjondronegoro2003} used audio for a summarization method, detecting whistle sounds based on the frequency and pitch of the audio.
Raventos et al.~\cite{Raventos2015} used audio features to give an importance score to video highlights.
Some work focused on learning spatio-temporal features using various \ac{ML} approaches~\cite{Simonyan2014, Tran2015, Carreira2018}.
Chen et al.~\cite{Chen2008} used an entropy-based motion approach to address the problem of video segmentation in sports events.
More recently, Valand et al.~\cite{Valand2021, Valand2021AI} benchmarked different neural network architectures on different datasets, and presented two models that automatically find the appropriate time interval for extracting goal events.
These works indicate a potential for \ac{AI}-supported clipping of sports videos, especially in terms of extracting temporal information. However, the presented results are still limited, and most works do not directly address the actual event clipping operation (with the exception of~\cite{Valand2021, Valand2021AI}). An additional challenge for this use case is that computing should be possible to conduct with very low latency, as the production of highlight clips needs to be undertaken in real-time for practical applications.
In this task, participants are asked to identify the appropriate clipping points for selected events from a soccer game, and generate one clip for each highlight, ensuring that the highlight clip captures important scenes from the event, but also removes ``unnecessary'' parts. The submitted solution should take the video of a complete soccer game, along with a list of highlights from the game in the form of event annotations, as input. The output should be one clip per each event in the provided list of highlights. The maximum duration for a highlight clip should be $90$ seconds.
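As a simple point of reference (not a required approach), a fixed-offset baseline can already produce clips from the event annotations; the sketch below assumes event timestamps given in seconds from the start of the video, and the offsets are illustrative.
\begin{lstlisting}[language=Python]
# Fixed-offset clipping baseline: cut from x seconds before to y seconds after
# each annotated event with ffmpeg. Timestamps and offsets are illustrative.
import subprocess

def clip_events(video_path, event_times, x=20, y=40, max_len=90):
    for k, t in enumerate(event_times):
        start = max(0.0, t - x)
        duration = min(x + y, max_len)
        subprocess.run([
            "ffmpeg", "-y", "-ss", str(start), "-i", video_path,
            "-t", str(duration), "-c", "copy", f"highlight_{k:03d}.mp4",
        ], check=True)

clip_events("game.mp4", [754.0, 2312.5])  # e.g., a goal and a card annotation
\end{lstlisting}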
\subsection{Task 2: Thumbnail Selection}\label{section:task3}
Thumbnails capture the essence of video clips and engage viewers by providing a first impression. A good thumbnail makes a video clip more attractive to watch~\cite{Song2016}. Thus, selecting an appropriate thumbnail (e.g., by extracting a frame from the video clip itself) is very important.
Traditional solutions in the soccer domain rely on the manual or static selection of thumbnails to describe highlight clips, which display important events such as goals and cards. However, such approaches can result in the selection of sub-optimal video frames as snapshots, which degrades the overall quality of the clip as perceived by the viewers, and consequently decreases viewership. Additionally, manual processes are expensive and time consuming.
Song et al.~\cite{Song2016} presented an automatic thumbnail selection system that exploits two important characteristics commonly associated with meaningful and attractive thumbnails: high relevance to video content, and superior visual aesthetic quality.
In this respect, image quality assessment~\cite{Su2011,Kim2019} also plays an important role in thumbnail selection. Recent work demonstrates the applicability of \ac{ML}, and more specifically adversarial and reinforcement learning, in the context of thumbnail selection~\cite{Apostolidis2021}.
However, a lot of work remains to be done for the implementation of automated algorithms within the soccer domain.
\input{figures/figure-thumbnail-selection-examples}
In this task, participants are asked to identify the frame that best represents a game highlight, according to rules established by the participants themselves. The rules can be justified by references to scientific literature and industry practices. The submitted solution should take the video of a complete soccer game, along with a list of highlights from the game in the form of event annotations, as input. The output should be one image (thumbnail candidate) per each event in the provided list of highlights.
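As an illustration of one possible selection rule (not a prescribed method), the sketch below samples frames around an event and keeps the sharpest one, using the variance of the Laplacian as a simple quality proxy; the time window and sampling step are assumptions.
\begin{lstlisting}[language=Python]
# Naive thumbnail heuristic: sample frames around the event and pick the
# sharpest one (variance of the Laplacian). Window and step are assumptions.
import cv2

def pick_thumbnail(video_path, event_sec, window=5.0, step=0.5):
    cap = cv2.VideoCapture(video_path)
    best, best_score = None, -1.0
    t = max(0.0, event_sec - window)
    while t <= event_sec + window:
        cap.set(cv2.CAP_PROP_POS_MSEC, t * 1000)
        ok, frame = cap.read()
        if ok:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            score = cv2.Laplacian(gray, cv2.CV_64F).var()  # sharpness proxy
            if score > best_score:
                best, best_score = frame, score
        t += step
    cap.release()
    return best

cv2.imwrite("thumbnail.jpg", pick_thumbnail("game.mp4", 754.0))
\end{lstlisting}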
\subsection{Task 3: Game Summarization}
Soccer game summaries are of tremendous interest for multiple stakeholders including broadcasters and fans\footnote{Related challenges include \url{https://trecvid.nist.gov}}. Existing works such as~\cite{Jai-Andaloussi2014, Sanabria2019, Awad2021} consider different modalities such as video, audio, and text, but a relatively larger emphasis is put on video summaries in the broadcasting
context.
In this task, participants are asked to generate overall game summaries for soccer games. The submitted solution should take the video of a complete soccer game, along with a list of highlights from the game in the form of event annotations, as input. The output should be a text and/or video which presents an overall summary of the game, including an adequate overview of important events, per soccer game.
\begin{itemize}
\item \textbf{Task 3a - Text Summary:} In this subtask, participants are asked to output a text in English which serves as a summary of the soccer game, for which the maximum value for the total number of words is $100$.
\item \textbf{Task 3b - Video Summary:} In this subtask, participants are asked to output a video (audio optional) which serves as a summary of the soccer game, for which the maximum value for the total duration of the video is $3$ minutes ($180$ seconds). How various events are ``concatenated'' into a summary is up to the participants, and using scene transition effects, as well as overlays containing detailed information (such as the names of goal scorer or booked players) are allowed.
\end{itemize}
\section{Dataset}\label{section:dataset}
\subsection{Training and Validation}\label{section:dataset-training}
An official training dataset is provided by the challenge organizers. This dataset consists of
complete soccer game videos from the Norwegian \textit{Eliteserien}, accompanied by a list of highlights in the form of event annotations, for each game. The list of highlights includes goal annotations (Listing~\ref{lst:goal-event}), card annotations (Listing~\ref{lst:card-event}), and additional timing metadata (Listing~\ref{lst:start-end})\footnote{Each metadata list starts with the line ``Video start timestamp: <YYYY-MM-DD HH:mm:ss.ssssss>''.}.
\input{figures/lst-card-event}
\input{figures/lst-goal-event}
\input{figures/lst-start-end}
In addition, prospective participants are free to use any other open dataset for training and validation purposes. In particular, interested participants are referred to the excellent and publicly available SoccerNet\footnote{https://soccer-net.org} dataset, which can be used for training and validation, as well as a transfer learning dataset for presenting additional performance results.
\subsection{Testing}\label{section:dataset-testing}
The evaluations will be undertaken using a hidden, previously unseen dataset. It will have the same format as the public training dataset provided by the challenge organizers, but will consist of completely different games.
\section{Evaluation}\label{section:evaluation}
Participants are free to develop their models in any language or platform they prefer. However, a well-documented open repository containing the source code for the proposed solution is required for each submission. Note that no data should be included within the repository itself. The hidden test dataset will be injected during evaluation, and participants can assume that the dataset will be located at \texttt{/mmsys22soccer}.
\subsection{Performance}\label{section:evaluation-performance}
As the perceived quality of highlight clips, thumbnails, and game summaries are highly subjective, the performance of the submitted solutions will be evaluated by a jury. In particular, a subjective survey will be conducted in double blind fashion with a jury consisting of unaffiliated video experts selected by the challenge organizers.
For each submitted solution for a given task, the jury members will be asked to provide an overall subjective performance score out of $100$.
\subsection{Complexity}\label{section:evaluation-complexity}
Complexity is a factor influencing how well a solution can satisfy practical real-time requirements.
The following objective metrics will be used to evaluate the submitted solutions in terms of complexity. Participants are asked to calculate these metrics for their model and include the values in their manuscript (a minimal measurement sketch follows the list):
\begin{itemize}
\item \textbf{Latency:} Average runtime per sample (ms). / \textbf{Frame rate:} Average number of frames the submitted solution can analyze per second (fps).
\item \textbf{Number of parameters:} Total number of trainable parameters in the submitted solution.
\item \textbf{Model size:} Storage size (size on disk) of the submitted solution (MB).
\end{itemize}
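The sketch below shows one possible way to measure these quantities; it assumes a PyTorch model, which is not mandated by the challenge, so it should be adapted to the framework actually used.
\begin{lstlisting}[language=Python]
# Illustrative measurement of the complexity metrics, assuming a PyTorch model.
import os, time
import torch

def complexity_report(model, sample, weights_path):
    model.eval()
    with torch.no_grad():
        for _ in range(10):                       # warm-up
            model(sample)
        start = time.perf_counter()
        n = 100
        for _ in range(n):
            model(sample)
        latency_ms = (time.perf_counter() - start) / n * 1000

    n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
    size_mb = os.path.getsize(weights_path) / 2**20   # size on disk
    return {"latency_ms": latency_ms,
            "fps": 1000.0 / latency_ms,
            "trainable_params": n_params,
            "model_size_mb": size_mb}
\end{lstlisting}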
\subsection {Final Score}
Aggregation of the subjective performance scores with the objective complexity scores per submission will be undertaken by the challenge organizers. For Task 3, the text (3a) and video (3b) subtasks are weighted $25\%$ and $75\%$, respectively.
\section{Conclusion and Outlook}\label{section:conclusion}
The MMSys'22 Grand Challenge on AI-based Video Production for Soccer addresses the task of automating end-to-end soccer video production systems. Such systems are used for generating event highlights and game summaries, which are operations typically requiring tedious manual labor.
An AI-based solution to replace the manual operations has the potential to both reduce human interactions and to yield better results, therefore providing a more cost efficient pipeline.
As elite soccer organizations rely on such systems, solutions presented within the context of this challenge might enable leagues to be broadcasted and/or streamed with less funding, at a cheaper price to fans.
This challenge presents three different tasks where the participants were asked to provide solutions for automatic event clipping, thumbnail selection, and game summary generation. Video and metadata from the Norwegian Eliteserien are used, and submissions will be evaluated both subjectively and objectively.
We hope that this challenge will help various stakeholders to contribute to the design of better performing systems and increase the efficiency of future video productions, not only for soccer and sports, but for video in general.
\begin{acks}
This research was partly funded by the Norwegian Research Council, project number 327717 (AI-producer). We also want to acknowledge Norsk Toppfotball (NTF), the Norwegian association for elite soccer, for making videos and metadata available for the challenge.
\end{acks}
\balance
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}\label{sec:description}
The Team Orienteering Problem (TOP) was first mentioned in \cite{bib:Butt94} as the Multiple Tour Maximum Collection Problem (MTMCP). Later, the term TOP was formally introduced in \cite{bib:Chao96a}. TOP is a variant of the Vehicle Routing Problem (VRP) \citep{bib:Archetti14}. In this variant, a limited number of identical vehicles is available to visit customers from a potential set. Two particular depots, the \emph{departure} and the \emph{arrival} points, are considered. Each vehicle must perform its route starting from the departure depot and returning to the arrival depot without exceeding its predefined travel time limit. A certain amount of profit is associated with each customer and can be collected at most once by the fleet of vehicles. The aim of solving TOP is to organize an itinerary of visits respecting the above constraints for the fleet in such a way that the total amount of collected profits from the visited customers is maximized.
A special case of TOP is the one with a single vehicle. The resulted problem is known as the Orienteering Problem (OP), or the Selective Travelling Salesman Problem (STSP) (see the surveys by \citealp{bib:Feillet05}, \citealp{bib:Vansteenwegen11} and \citealp{bib:Gavalas14}). OP/STSP is already NP-Hard \citep{bib:Laporte90}, and so is TOP \citep{bib:Chao96a}. The applications of TOP arise in various situations. For example in \citet{bib:Bouly08a}, the authors used TOP to model the schedule of inspecting and repairing tasks in water distribution. Each task in this case has a specific level of urgency which is similar to a profit.
Due to the limitation of available human and material resources, the efficient selection of tasks as well as the route planning become crucial to the quality of the schedule. A very similar application was described in \citet{bib:Tang05} to route technicians to repair sites. In \citet{bib:Souffriau08}, \citet{bib:Vansteenwegen09b} and \citet{bib:Gavalas14}, the tourist guide service that offers to the customers the possibility to personalize their trips is discussed as variants of TOP/OP. In this case, the objective is to maximize the interest of customers on attractive places subject to their duration of stay. Those planning problems are called Tourist Trip Design Problems (TTDPs). Many other applications include the team-orienteering sport game, bearing the original name of TOP, the home fuel delivery problem with multiple vehicles \citep[e.g.,][]{bib:Chao96a} and the athlete recruiting from high schools for a college team \citep[e.g.,][]{bib:Butt94}.
Many heuristics have been proposed to solve TOP, like the ones in \citet{bib:Archetti07}, \citet{bib:Souffriau10}, \citet{bib:Dang13b} and \citet{bib:Kim13a}. These approaches are able to construct solutions of good quality in short computational times, but those solutions are not necessarily optimal. In order to validate them and evaluate the performance of the heuristic approaches, either optimal solutions or upper bounds are required. For this reason, some research has been dedicated to elaborating exact solution methods for TOP. \citet{bib:Butt99} introduced a procedure based on the set covering formulation. A column generation algorithm was developed to solve this problem. In \citet{bib:Boussier07}, the authors proposed a branch-and-price (B-P) algorithm in which they used a dynamic programming approach to solve the pricing problem. Their approach has the advantage of being easily adaptable to different variants of the problem. Later, \citet{bib:Aragao10} introduced a pseudo-polynomial linear model for TOP and proposed a branch-cut-and-price (B-C-P) algorithm. New classes of inequalities, including min-cut and triangle clique, were added to the model and the resulting formulation was solved using a column generation approach. Afterwards, \cite{bib:Dang13a} proposed a branch-and-cut (B-C) algorithm based on a linear formulation, featuring a new set of valid inequalities and dominance properties in order to accelerate the solution process. Recently, \citet{bib:Keshtkarana15} proposed a Branch-and-Price algorithm with two relaxation stages (B-P-2R) and a Branch-and-Cut-and-Price (B-C-P) approach to solve TOP, where a bounded bidirectional dynamic programming algorithm with decremental state space relaxation was used to solve the subproblems. These five methods were able to prove optimality for a large part of the standard benchmark of TOP \citep{bib:Chao96a}; however, a large number of instances are still open. Furthermore, according to the recent studies of \citet{bib:Dang13b} and \citet{bib:Kim13a}, it appears that it is hardly possible to improve the already-known solutions for the standard benchmark of TOP using heuristics. These studies suggest that the known heuristic solutions could be optimal, but there is a lack of variety of effective methods to prove their optimality.
Motivated by the above facts, in this paper we propose a new exact algorithm to solve TOP. It is based on a linear formulation with a polynomial number of binary variables. Our algorithmic scheme is a cutting plane algorithm which exploits integer solutions of successive models with the \emph{subtour} elimination constraints being relaxed at first and then iteratively reinforced. Recently, \cite{bib:Pferschy13} demonstrates on the Travelling Salesman Problem (TSP) that such a technique which was almost forgotten could be made efficient nowaday with the impressive performance of modern solvers for Mixed-Integer Programming (MIP), especially with a careful control over the reinforcing of the subtour elimination. Our approach is similar but in addition to subtour elimination, we also make use of other valid inequalities and useful dominance properties to enhance the intermediate models. The properties include breaking the symmetry and exploiting bounds or optimal solutions of smaller instances/models with fewer number of vehicles, while the proposed valid inequalities are the clique cuts and the independent set cuts based on the incompatibilities between customers and between arcs. In addition, bounds on smaller restricted models are used to locate mandatory customers and inaccessible customers/arcs. Some of these cuts were introduced and tested in \cite{bib:Dang13a} yielding some interesting results for TOP, this encourages us to implement them immediately in our cutting plane algorithm. We evaluated our algorithm on the standard benchmark of TOP. The obtained results clearly show the competitiveness of our algorithm. The algorithm is able to prove the optimality for $12${} instances that none of the previous exact algorithms had been able to solve.
The remainder of the paper is organized as follows. A short description of the problem with its mathematical formulation is first given in Section \ref{sec:lp}, where the use of the generalized subtour elimination constraints is also discussed. In Section~\ref{sec:domprop}, the set of dominance properties, which includes symmetry breaking, removal of irrelevant components, identification of mandatory customers and boundaries on profits/numbers of customers, is presented. The graphs of incompatibilities between variables are also described in this section, along with the clique cuts and the independent set cuts. In Section \ref{sec:CuttingPlanegc}, all the techniques used to generate these efficient cuts are detailed, and the pseudocode of the main algorithmic scheme is given. Finally, the numerical results are discussed in Section \ref{sec:numresult}, and some conclusions are drawn.
\section{Problem formulation}\label{sec:lp}
TOP is modeled with a complete directed graph $G=(V, A)$ where $V =\{1,\dots,n\} \cup \{d, a\}$ is the set of vertices representing the customers and the depots, and $A=\{(i,j) \mid i,j \in V, i\neq j\}$ the set of arcs linking the different vertices together. The departure and the arrival depots for the vehicles are represented by the vertices $d$ and $a$. For convenience, we use the three sets $V^{-}$, $V^{d}$ and $V^{a}$ to denote respectively the sets of the customers only, of the customers with the departure depot and of the customers with the arrival one. A profit $p_i$ is associated with each vertex $i$ and is considered zero for the two depots ($p_d = p_a = 0$). Each arc $(i,j) \in A$ is associated with a travel cost $c_{ij}$. These costs are assumed to be symmetric and to satisfy the triangle inequality. All arcs incoming to the departure depot and outgoing from the arrival one must not be considered ($c_{id}=c_{ai}=\infty, \forall i \in V^{-}$). Let $F$ represent the fleet of the $m$ identical vehicles available to visit customers. Each vehicle must start its route from $d$, visit a certain number of customers and return to $a$ without exceeding its predefined travel cost limit $L$. Using these definitions, we can formulate TOP as a linear Mixed Integer Program (MIP) using a polynomial number of decision variables $y_{ir}$ and $x_{ijr}$. Variable $y_{ir}$ is set to $1$ if vehicle $r$ has served client $i$ and to $0$ otherwise, while variable $x_{ijr}$ takes the value $1$ when vehicle $r$ uses arc $(i,j)$ to serve customer $j$ immediately after customer $i$ and $0$ otherwise.
\begin{align}
\textrm{max} \sum_{i \in V^{-}}\sum_{r \in F} y_{ir} p_i \label{miptop:obj}\\
\sum_{r \in F} y_{ir} \leq 1 &\quad \forall i\in V^{-} \label{miptop:ctvisit}\\
\sum_{j \in V^a} x_{djr} = \sum_{j \in V^d} x_{jar} = 1 &\quad \forall r \in F \label{miptop:ctdepot}\\
\sum_{i \in V^a \setminus \{k\}} x_{kir} = \sum_{j \in V^d \setminus \{k\}} x_{jkr} = y_{kr} &\quad \forall k \in V^{-}, \forall r \in F \label{miptop:ctlink}\\
\sum_{i \in V^d}\sum_{j \in V^a \setminus \{i\}} c_{ij} x_{ijr} \leq L &\quad \forall r \in F \label{miptop:ctlength}\\
\sum_{(i,j)\in U \times U} x_{ijr} \leq |U| - 1 &\quad \forall U \subseteq V^{-}, |U| \geq 2, \forall r \in F \label{miptop:ctsubtour}\\
x_{ijr} \in \{0, 1\} &\quad \forall i \in V, \forall j \in V, \forall r \in F \label{miptop:ctinteger}\\
y_{ir} \in \{0, 1\} &\quad \forall i \in V^{-}, \forall r \in F \nonumber
\end{align}
The objective function \eqref{miptop:obj} maximizes the sum of collected profits from the visited customers. Constraints \eqref{miptop:ctvisit} impose that each customer must be visited at most once by one vehicle. Constraints \eqref{miptop:ctdepot} guarantee that each vehicle starts its path at vertex $d$ and ends it at vertex $a$, while constraints \eqref{miptop:ctlink} ensure the connectivity of each tour. Constraints \eqref{miptop:ctlength} are used to impose the travel length restriction, while constraints \eqref{miptop:ctsubtour} eliminate all possible subtours, i.e. cycles excluding the depots, from the solution. Finally, constraints \eqref{miptop:ctinteger} set the integral requirement on the variables.
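For illustration, the relaxed model (i.e., with constraints \eqref{miptop:ctsubtour} omitted, to be added later as cuts) could be built with a generic modeling library as sketched below; PuLP is used here purely as an example and does not reflect the actual implementation or solver.
\begin{verbatim}
# Sketch of the relaxed TOP model (subtour constraints omitted), using PuLP
# only as an illustrative modeling layer.
import pulp

def build_relaxed_top(V_minus, d, a, p, c, m, L):
    V = V_minus + [d, a]
    F = range(m)
    prob = pulp.LpProblem("TOP", pulp.LpMaximize)
    x = pulp.LpVariable.dicts(
        "x", [(i, j, r) for i in V for j in V if i != j for r in F], cat="Binary")
    y = pulp.LpVariable.dicts(
        "y", [(i, r) for i in V_minus for r in F], cat="Binary")

    prob += pulp.lpSum(p[i] * y[i, r] for i in V_minus for r in F)          # (1)
    for i in V_minus:
        prob += pulp.lpSum(y[i, r] for r in F) <= 1                         # (2)
    for r in F:
        prob += pulp.lpSum(x[d, j, r] for j in V_minus + [a]) == 1          # (3)
        prob += pulp.lpSum(x[j, a, r] for j in V_minus + [d]) == 1
        for k in V_minus:
            prob += pulp.lpSum(x[k, i, r]
                               for i in V_minus + [a] if i != k) == y[k, r]  # (4)
            prob += pulp.lpSum(x[j, k, r]
                               for j in V_minus + [d] if j != k) == y[k, r]
        prob += pulp.lpSum(c[i, j] * x[i, j, r] for i in V_minus + [d]
                           for j in V_minus + [a] if i != j) <= L            # (5)
    return prob, x, y
\end{verbatim}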
Enumerating all constraints \eqref{miptop:ctsubtour} yields a formulation with an exponential number of constraints. In practice, these constraints are first relaxed from the formulation and then only added to the model whenever needed. The latter situation can be detected by the presence of subtours in the solution of the relaxed model. We also replace constraints \eqref{miptop:ctsubtour} with stronger ones, the so-called Generalized Subtour Elimination Constraints (GSECs), which enhance both the elimination of specific subtours and the connectivity of the solution. The first GSEC experiments with the OP were reported in \citet{bib:Fischetti98}.
We adapted the GSEC version from \citet{bib:Dang13a}, formulated for TOP on a directed graph, as follows. For a given subset $S$ of customer vertices, we define $\delta(S)$ to be the set of arcs that connect vertices in $S$ with those outside $S$, i.e. vertices in $V \setminus S$. We also use $\gamma(S)$ to represent the set of arcs interconnecting vertices in $S$. The following GSECs are then added to the model to ensure that each customer served by vehicle $r$ belongs to a path that is connected to the depots and does not form a cycle with other vertices of $S$.
\begin{align}
\sum_{(u,v) \in \delta(S)} x_{uvr} \geq 2 y_{ir}, \forall S \subset V, \{d,a\} \subseteq S, \forall i \in V \setminus S, \forall r \in F \label{eq:gsec1}
\end{align}
We also add two categories of constraints, which are detailed below and are equivalent to the GSECs, to strengthen the model.
\begin{align}
\sum_{(u,v) \in \gamma(S)} x_{uvr} & \leq \sum_{i\in S \setminus \{d, a\}} y_{ir} - y_{jr} + 1, \forall S \subset V, \{d,a\} \subseteq S, \forall j \in V \setminus S, \forall r \in F \label{eq:gsec21} \\
\sum_{(u,v) \in \gamma(U)} x_{uvr} & \leq \sum_{i\in U} y_{ir} - y_{jr}, \forall U \subseteq V^{-}, \forall j \in U, \forall r \in F \label{eq:gsec22}
\end{align}
On the other hand, our approach requires checking the absence of subtours in an optimal solution of the current incomplete model (i.e. with constraints \eqref{miptop:ctsubtour} relaxed), so that global optimality can be claimed. In our model, each \emph{strongly connected component} of the subgraph associated with a tour of the solution represents a subtour; thus the check can be done by examining the corresponding subgraphs. This will be detailed in Section \ref{ssec:CuttingPlane}.
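A possible implementation of this check on an integer solution is sketched below, reusing the illustrative PuLP variables from the previous sketch and the strongly connected components routine of NetworkX.
\begin{verbatim}
# Sketch of the subtour check: for each vehicle, build the directed graph of
# selected arcs and report strongly connected components excluding the depots.
import networkx as nx
import pulp

def find_subtours(x, V, F, d, a, eps=0.5):
    subtours = []
    for r in F:
        g = nx.DiGraph()
        g.add_edges_from((i, j) for i in V for j in V if i != j
                         and pulp.value(x[i, j, r]) is not None
                         and pulp.value(x[i, j, r]) > eps)
        for comp in nx.strongly_connected_components(g):
            if len(comp) >= 2 and d not in comp and a not in comp:
                subtours.append((r, comp))   # a GSEC is then added for this set
    return subtours
\end{verbatim}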
\section{Efficient cuts}\label{sec:domprop}
Reduction of the search space is often desired when solving a MIP. This can be done either by removing irrelevant components from the linear formulation, e.g. those that certainly do not belong to any optimal solution, or by favoring some special structures inside the optimal solutions, e.g. by reducing the symmetry.
The cuts that we add to our basic problem include dominance properties such as symmetry-breaking inequalities, boundaries on the profits and the number of served customers, cuts that enforce mandatory customers, and cuts that remove inaccessible customers and arcs. Moreover, some additional cuts are based on cliques and independent sets deduced from the incompatibilities between solution components.
\subsection{Symmetry breaking cuts}
Tours of an optimal solution can be sorted according to a specific criterion, e.g. the amount of collected profit, the number of customers or the tour length. Based on the experimental report in \citet{bib:Dang13a}, we focus exclusively on solutions in which the profits of the tours are sorted. The following constraints are added to the model to ensure this symmetry breaking on profits.
\begin{equation}
\sum_{i \in V^{-}} y_{i(r+1)} p_i - \sum_{i \in V^{-}} y_{ir} p_i \leq 0, \forall r \in F \setminus \{m\} \label{eq:symbrk}
\end{equation}
Without these constraints, each feasible solution whose tours have pairwise different profits admits at least $(m!-1)$ equivalent feasible solutions obtained by permuting its tours. Adding these constraints removes these equivalent solutions from the search space and retains only the representative whose tour profits are sorted according to \eqref{eq:symbrk}. The size of the search space can thus be greatly reduced.
\subsection{Irrelevant components cuts}
One simple way of reducing the size of the problem is to deal only with accessible customers and arcs. A customer is considered \emph{inaccessible} if the tour serving only that customer already exceeds the cost limit $L$. Similarly, an arc is \emph{inaccessible} if the tour connecting the depots directly through that arc exceeds $L$. To keep the linear formulation consistent, all inaccessible customers and arcs are eliminated from the model at the outset by adding the following constraints, where $i$ is an inaccessible customer (resp. $(i,j)$ is an inaccessible arc).
\begin{align}
\sum_{r \in F} y_{ir} = 0 \\
\sum_{r \in F} x_{ijr} = 0
\end{align}
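As an illustration only (not part of our implementation), the following sketch shows how inaccessible customers and arcs could be detected before the model is built; it assumes Euclidean travel costs and explicit point coordinates, and all function names are ours.
\begin{verbatim}
from math import hypot

def dist(p, q):
    # Euclidean travel cost between two points given as (x, y) pairs.
    return hypot(p[0] - q[0], p[1] - q[1])

def inaccessible_customers(d, a, customers, L):
    # Customers i whose single-customer tour d -> i -> a already exceeds L.
    return [i for i, c in customers.items()
            if dist(d, c) + dist(c, a) > L]

def inaccessible_arcs(d, a, customers, L):
    # Arcs (i, j) whose direct tour d -> i -> j -> a exceeds L.
    return [(i, j)
            for i, ci in customers.items()
            for j, cj in customers.items()
            if i != j and dist(d, ci) + dist(ci, cj) + dist(cj, a) > L]
\end{verbatim}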
\subsection{Boundaries on profits and numbers of customers served}\label{sec:ublbprofit}
\citet{bib:Dang13a} proposed a set of efficient dominance properties that aim to reduce the search space by bounding the characteristics of each tour or subset of tours. The idea is to solve, within a limited time budget, instances derived from the original problem in order to gain useful information for the construction of the added cuts. The derived instances are often smaller than the original one and hopefully easier to solve, or at least to bound.
Before going into the details of these properties, we must clarify some notation. For an instance $X$ with $m$ vehicles, define $X_{I}$ to be the modified instance in which the profit of each customer is set to $1$. We also use $X^{g}$ to denote the instance obtained from $X$ by reducing the number of available vehicles to $g$ ($g \leq m$); for $g=m$, we recover the original instance $X$. The two modifications can be applied at the same time, in which case the instance $X_{I}^{g}$ is obtained. Finally, we denote by $\LB(X)$ (resp. $\UB(X)$) a lower (resp. upper) bound for an arbitrary instance $X$. The following valid inequalities are added to the model to restrict the profit that each tour or subset of tours can collect.
\begin{align}
\sum_{r \in H} \sum_{i \in V^{-}} y_{ir} p_i & \leq \UB(X^{|H|}), \forall H \subset F \label{eq:ubp}\\
\sum_{r \in H} \sum_{i \in V^{-}} y_{ir} p_i + \UB(X^{m-|H|}) & \geq \LB(X), \forall H \subseteq F \label{eq:lbp}
\end{align}
Inequalities \eqref{eq:ubp} are trivially valid: the sum of the profits of any $|H|$ tours (the left-hand side) cannot exceed the optimal profit of the instance with exactly $|H|$ vehicles, and hence cannot exceed any upper bound of that instance (the right-hand side).
Inequalities \eqref{eq:lbp} work in the opposite direction by imposing a lower bound on the profit of each tour and each subset of tours. These inequalities might appear redundant with the optimization objective. However, when applied to subsets of tours, they eliminate unbalanced solutions from the search space, e.g. solutions in which one tour serves many customers while the other tours are almost empty.
In the same fashion as \eqref{eq:ubp}, the numbers of customers per tour or per subset of tours are bounded from above using inequalities \eqref{eq:ubc}. On the other hand, it is more difficult to bound these numbers from below since their values do not necessarily correlate with the objective value of TOP. A modification of the model (rather than a simple modification of the instance) is performed in order to determine a lower bound for the number of customers of each tour. This modification is done as follows. We consider the modified instance, denoted by $\bar{X}^1_I$, where the objective function is reversed to minimization, i.e. minimizing the number of served customers, while satisfying both constraints \eqref{eq:ubp} and \eqref{eq:lbp} for $|H|=1$. Solving this instance provides the value of $\LB(\bar{X}^1_I)$, which enables us to lower bound the number of customers of each tour of $X$. The following valid inequalities are then added to the model to restrict the number of customers served in each tour or subset of tours.
\begin{align}
\sum_{r \in H} \sum_{i \in V^{-}} y_{ir} & \leq \UB(X^{|H|}_I), \forall H \subset F \label{eq:ubc} \\
\sum_{i \in V^{-}} y_{ir} & \geq \LB(\bar{X}^1_I), \forall r \in F \label{eq:lbc}
\end{align}
In the implementation, inequalities \eqref{eq:ubp}--\eqref{eq:lbc} are applied in a dynamic-programming fashion, as follows. The required values of $\LB$ and $\UB$ are first computed for the instance with $|H|=1$, then the obtained values are used in the cuts when solving the other instances ($|H| \leq m$). We recall that inequalities \eqref{eq:lbc} are limited to single tours and not to subsets of tours. Since the value of $\UB(X^{m-1})$ is needed for the model of $\bar{X}^1_I$, $\LB(\bar{X}^1_I)$ can only be computed after all the other subproblems (or derived instances) have been solved.
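To make the bookkeeping concrete, the following sketch (illustrative only; the names are ours) enumerates the data of the cuts \eqref{eq:ubp}, assuming that a valid upper bound has already been computed for each fleet size $g=|H|$; recall that $m \leq 4$ on the benchmark, so the enumeration of subsets $H$ is cheap.
\begin{verbatim}
from itertools import combinations

def profit_ub_cuts(vehicles, customers, profit, ub_by_size):
    # For each proper subset H of vehicles, the profit collected by the
    # tours of H is at most UB(X^{|H|}); ub_by_size[g] is assumed to hold
    # a valid upper bound for the instance restricted to g vehicles.
    for size in range(1, len(vehicles)):
        for H in combinations(vehicles, size):
            coeffs = {(i, r): profit[i] for r in H for i in customers}
            yield coeffs, ub_by_size[size]   # sum of coeffs * y <= rhs
\end{verbatim}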
\subsection{Mandatory customers cuts}\label{sec:mandatory}
Given an instance $X$ of TOP, a high-quality $\LB(X)$ can often be computed efficiently with heuristics. It may then be possible to identify a set of customers of $X$, the so-called \emph{mandatory} customers, such that without any one of them no solution with objective value at least $\LB(X)$ can be achieved.
The formal definition is the following. Here we use $X\setminus\{i\}$ to designate the modified instance $X$ with customer $i$ removed.
\begin{defn}\label{def:mandatory}
A customer $i$ of $X$ is \emph{mandatory} if $\UB(X\setminus\{i\})<\LB(X)$.
\end{defn}
Once identified, mandatory customers must all be served in an optimal solution. The following cut is then added for each mandatory customer $i$ to enforce its presence in the solution.
\begin{align}
\sum_{r \in F} y_{ir} = 1 \label{eq:mandatory}
\end{align}
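As a sketch of this test (the helper names are hypothetical; in practice the upper bounds are obtained from time-limited runs of the cutting-plane algorithm of Section \ref{sec:CuttingPlanegc}):
\begin{verbatim}
def mandatory_customers(instance, customers, lb, upper_bound):
    # upper_bound(Y) is assumed to return a valid upper bound for the
    # (modified) instance Y; lb is a feasible lower bound LB(X).
    mandatory = []
    for i in customers:
        reduced = instance.without_customer(i)   # hypothetical helper
        if upper_bound(reduced) < lb:            # definition of mandatory
            mandatory.append(i)
    return mandatory
\end{verbatim}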
\subsection{Valid inequalities based on incompatibilities}\label{sec:cliquecut}
If two customers are too far from each other relative to the travel length/cost limit, they may not be served by the same vehicle. This observation leads to the concept of \emph{incompatibility} between customers, from which additional inequalities can be deduced \citep{bib:Manerba15, bib:Gendreau16}. The idea can also be generalized to other pairs of components of the problem, i.e. {\it customer-tour}, {\it customer-arc} or {\it arc-arc}. In this work, we focus on two types of incompatibility: between customers and between arcs.
\subsubsection{Incompatibility graphs}\label{ssec:incomp}
Given two customers $i$ and $j$ of instance $X$, we use $X\cup\{[i \sim j]\}$ to denote the modified instance/model in which customers $i$ and $j$ are forced to be served by the same vehicle. Similarly, $X\cup\{[(u,v) \sim (w,s)]\}$ denotes the modified instance/model in which arcs $(u,v)$ and $(w,s)$ are forced to be used by the same vehicle. The two graphs of incompatibilities are formally defined as follows.
\begin{defn}\label{def:ginc}
Given an instance $X$ of the TOP modelled by the complete directed graph $G=(V,A)$, the graph of incompatibilities between customers is $G^{Inc}_{V^{-}}=(V^{-}, E^{Inc}_{V^{-}})$ and the graph of incompatibilities between arcs is $G^{Inc}_{A}=(A, E^{Inc}_{A})$, where
\begin{align*}
E^{Inc}_{V^{-}} &= \{[i,j] \mid i, j \in V^{-}, \UB(X\cup\{[i \sim j]\}) < \LB(X)\}, \\
E^{Inc}_{A} &= \{[i,j] \mid i = (u,v), j = (w,s) \in A, \UB(X\cup\{[(u,v) \sim (w,s)]\}) < \LB(X)\}.
\end{align*}
\end{defn}
In other words, two components are \emph{incompatible} if they do not appear in the same tour of any optimal solution of instance $X$. In general, it is difficult to construct the two incompatibility graphs completely. However, they can be initialized as follows, where $\MinLen(S)$ denotes the length of the shortest path from $d$ to $a$ containing all vertices (or all arcs) of $S \subseteq V^{-}$ (or $S \subseteq A$).
\begin{prop}\label{prop:ginit}
Let $G=(V,A)$ be the model graph of instance $X$, it holds that
\begin{align*}
&\{[i,j] \mid i \in V^{-}, j \in V^{-}, \MinLen(\{i, j\})>L\} \subseteq E^{Inc}_{V^{-}}\text{, and} \\
&\{[i,j] \mid i = (u,v) \in A, j = (w,s) \in A, \MinLen(\{(u, v),(w,s)\})>L\} \subseteq E^{Inc}_{A}.
\end{align*}
\end{prop}
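For example, assuming Euclidean costs, the customer incompatibility graph could be seeded as in the following sketch, which checks the two possible visiting orders of a pair $\{i,j\}$ (illustrative code only, not our implementation).
\begin{verbatim}
from math import hypot

def dist(p, q):
    return hypot(p[0] - q[0], p[1] - q[1])

def min_len_pair(d, a, ci, cj):
    # Shortest d -> {i, j} -> a path serving both customers.
    return min(dist(d, ci) + dist(ci, cj) + dist(cj, a),
               dist(d, cj) + dist(cj, ci) + dist(ci, a))

def initial_customer_incompatibilities(d, a, customers, L):
    ids = sorted(customers)
    return {(i, j)
            for k, i in enumerate(ids) for j in ids[k + 1:]
            if min_len_pair(d, a, customers[i], customers[j]) > L}
\end{verbatim}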
Of course, once initialized, the graphs can be filled with more edges using Definition \ref{def:ginc}. The density of the resulting graphs depends on the quality of the computed $\UB$ and $\LB$. We can use the following linear models, combined with the other cuts we have developed, to compute the required $\UB$.
\begin{prop}\label{prop:force}
Let $X$ be an instance of TOP and let $i, j$ be two of its customers. The linear model of $X\cup\{[i \sim j]\}$ is obtained by adding to that of $X$ the following constraints:
\begin{align}
\sum_{r \in F} y_{ir} &= \sum_{r \in F} y_{jr} = 1 \label{eq:fc1}\\
y_{ir} &= y_{jr}, \forall r \in F \label{eq:fc2}
\end{align}
Similarly, adding the following constraints to the linear program of $X$ will model $X\cup\{[(u,v) \sim (w,s)]\}$.
\begin{align}
\sum_{r \in F} x_{uvr} &= \sum_{r \in F} x_{wsr} = 1 \label{eq:fa1} \\
x_{uvr} &= x_{wsr}, \forall r \in F \label{eq:fa2}
\end{align}
\end{prop}
\subsubsection{Clique cuts}\label{ssec:clique}
A clique in an undirected graph is a subset of vertices that are pairwise adjacent. Thus, if a vehicle serves a customer (or uses an arc) belonging to a clique of $G^{Inc}_{V^{-}}$ (or $G^{Inc}_{A}$), then all the other customers (or arcs) of the clique are excluded from that vehicle's tour. Therefore, each vehicle can serve (or use) at most one element of the clique. Based on this observation, the following cuts hold for $G^{Inc}_{V^{-}}$ and $G^{Inc}_{A}$, where $K$ (resp. $Q$) denotes a clique of $G^{Inc}_{V^{-}}$ (resp. $G^{Inc}_{A}$).
\begin{align}
\sum_{i \in K} y_{ir} & \leq 1, \forall r \in F \label{eq:clvcut} \\
\sum_{[u,v] \in Q} x_{uvr} & \leq 1, \forall r \in F \label{eq:clecut}
\end{align}
A clique is \emph{maximal} if it cannot be extended by adding more vertices, and a clique is \emph{maximum} if it has the largest cardinality over the whole graph. Large maximal cliques are preferred in inequalities \eqref{eq:clvcut} and \eqref{eq:clecut} since they provide tighter formulations. The difficulty is that the number of maximal cliques in a general graph is exponential in the number of vertices, and finding a maximum clique is an NP-hard problem \citep{bib:garey79}. However, efficient methods to find such cliques, or subsets of them, exist in the literature and work very well on our graphs. The details are discussed in Section \ref{sec:CuttingPlanegc}.
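In our implementation these cliques are produced by the metaheuristic of \citet{bib:Dang12}; the following simpler greedy extension (a sketch, not that metaheuristic) already yields maximal cliques that can be used in \eqref{eq:clvcut} and \eqref{eq:clecut}.
\begin{verbatim}
def greedy_maximal_clique(adj, seed):
    # adj[v] is the set of neighbours of v in the incompatibility graph.
    clique = {seed}
    candidates = set(adj[seed])          # adjacent to every clique member
    while candidates:
        # pick the candidate that keeps the most remaining candidates
        v = max(candidates, key=lambda u: len(adj[u] & candidates))
        clique.add(v)
        candidates &= adj[v]
    return clique
\end{verbatim}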
\subsubsection{Independent set cuts}\label{ssec:iss}
As opposed to a clique, an \emph{independent set} is a set of vertices in a graph no two of which are adjacent; the vertices are then said to be pairwise independent. Maximal and maximum independent sets are defined in the same way as for cliques, i.e. adding any vertex to a maximal independent set destroys the independence of its vertices, and a maximum independent set is one of largest cardinality among the maximal ones.
The independent-set cuts are based on the following idea. Consider $G^{Inc}_{V^-}$ as an example and let $S$ be a subset of $V^-$; we define $\alpha_S$ to be the size of a maximum independent set of $G^{Inc}_{V^-}(S)$, the subgraph induced by $S$. Clearly, no more than $\alpha_S$ elements of $S$ can be served in the same tour, i.e. $\sum_{i \in S} y_{ir} \leq \alpha_S$ is a valid cut for any tour $r$. Furthermore, if we take $S$ to be the set $N_i$ of neighbors of a vertex $i$ in $G^{Inc}_{V^-}$, then we can add the cut $\alpha_i y_{ir} + \sum_{j \in N_i} y_{jr} \leq \alpha_i$
(here $\alpha_i$ is a short notation for $\alpha_{N_i}$). This particular cut combines the relationship between $i$ and $N_i$ with the information on the maximum independent set of $N_i$. The same idea can be generalized to $G^{Inc}_{A}$, where we denote by $N_{ij}$ the set of neighbor arcs of an arc $(i, j)$ in $G^{Inc}_{A}$. The following inequalities summarize the valid cuts.
\begin{align}
\alpha_i y_{ir} + \sum_{j\in N_i} y_{jr} & \leq \alpha_i, \forall i\in V^-, \forall r \in F \label{eq:advclvcut}\\
\alpha_{ij} x_{ijr} + \sum_{(u,v)\in N_{ij}} x_{uvr} & \leq \alpha_{ij}, \forall (i,j)\in A, \forall r \in F \label{eq:advclecut}
\end{align}
Finding a maximum clique is NP-hard, and so is finding a maximum independent set \citep{bib:garey79}. However, the above inequalities remain valid when $\alpha$ is replaced by an upper bound on the size of a maximum independent set. The following principle allows us to compute such an upper bound. Recall that a partition of the vertices of a graph into disjoint independent sets is a coloring of the graph, each independent set being assigned a color, and that the number of colors used in any such coloring is an upper bound on the size of a maximum clique. Viewed in the complementary graph, any partition of the vertices into disjoint cliques therefore provides an upper bound on the size of a maximum independent set. Again, efficient algorithms for finding large cliques can be used to build such a partition and thus to compute an upper bound on $\alpha_i$. This procedure is detailed in the next section.
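A possible sketch of this bounding step, reusing the greedy clique routine above: repeatedly extract a clique from the subgraph induced by $N_i$ and remove it; the number of extracted cliques is a valid upper bound on $\alpha_i$. (Our implementation relies on the heuristic of \citet{bib:Dang12} instead.)
\begin{verbatim}
def clique_cover_bound(adj, vertices):
    # Upper bound on the maximum independent set of the subgraph induced
    # by `vertices`, via a greedy partition into disjoint cliques.
    remaining = set(vertices)
    n_cliques = 0
    while remaining:
        sub_adj = {v: adj[v] & remaining for v in remaining}
        clique = greedy_maximal_clique(sub_adj, next(iter(remaining)))
        remaining -= clique
        n_cliques += 1
    return n_cliques
\end{verbatim}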
\section{Cutting-plane and global scheme}\label{sec:CuttingPlanegc}
In this section, our global Cutting-Plane Algorithm (CPA) is first described to show the different operations performed to reach the best solution. Some supplementary information is required for its execution, particularly for the construction of the efficient cuts. These computations are detailed in the Constraint-Enhancement algorithm (CEA).
\subsection{Cutting-Plane algorithm}\label{ssec:CuttingPlane}
Our global algorithm is a cutting-plane algorithm. We also use it to solve intermediate models with fewer vehicles (and sometimes with modified constraints/objectives). In our implementation, we focus only on the elimination of subtours and on the refinement of the search space using the developed cuts, while the other aspects of the resolution, e.g. the branch-and-cut used to solve the integer program, are left to the MIP solver. This is similar to the approach of \citet{bib:Pferschy13}, developed in the context of the TSP. The steps of our CPA are as follows.
At first, the basic model is built using constraints \eqref{miptop:ctvisit}-\eqref{miptop:ctlength} and \eqref{miptop:ctinteger} with the objective function \eqref{miptop:obj} and some initial cuts. Indeed, some pre-computations are performed beforehand to gain useful information for the initial cuts. Only a small time budget is allowed for these pre-computations, however this can lead to a significant strengthening of the model later on.
During the pre-computation phase, the irrelevant components of $X$, i.e. \emph{inaccessible} customers and arcs, are first detected and removed from the model. Then the graphs of incompatibilities between customers and arcs are initialized, and some early cliques and independent sets are extracted from them using the metaheuristic described in \citet{bib:Dang12}. Based on these sets, the associated clique and independent set cuts are formulated and added to the model. Finally the symmetry breaking cuts are added and the solving procedure begins. A feasible solution is generated using a heuristic of \citet{bib:Dang13b} and provided to the MIP solver as a starting solution.
Before entering the main loop of the solving process, the MIP solver is set up with some branching rules. In TOP, the objective is to maximize the profit collected from the visited customers; selecting the right customers from the beginning therefore appears to be crucial. Thus, our branching rules give priority to the variables $y_{ir}$ and then to the $x_{ijr}$ \citep{bib:Boussier07, bib:Aragao10}.
\begin{algorithm}[!h]
\caption{Cutting-Plane algorithm (CPA).}\label{alg:cuttingPlaneAlgorithm}
\KwIn{Instance $X$, cuts $\D(X)$, timer $\TM$, indicator $\VarOriginal$}
\KwOut{Bound $\UB(X)$, solution $\SOL(X)$, indicator $\Opt(X)$}
\Begin{
$\VarStep \leftarrow 1$\;
$\Opt(X) \leftarrow$ \textbf{false}\;
$\MIPS \leftarrow$ create new MIP Solver\;
$\UB(X) \leftarrow$ sum of profits of all customers of $X$\;
$\SOL(X) \leftarrow$ a feasible solution of $X$ \citep[see][]{bib:Dang13b}\;
$\LB(X) \leftarrow \PFT(\SOL(X))$\;
MIPS.model($X$, $\D(X)$) (see Sections \ref{sec:lp} and \ref{sec:onsidealg})\;
MIPS.initialize($\SOL(X)$)\;
\Repeat{{\upshape ($\Opt(X)=${\bf true}) or ($\TM$.expired())}} {
$\{\UB, \SOL, \Opt\} \leftarrow$ MIPS.solve($\TM$)\;
\lIf{\upshape ($\UB<\UB(X)$)}{$\UB(X) \leftarrow \UB$}
\If{\upshape ($\Opt=${\bf true})}{
\lIf{\upshape ($\PFT(\SOL)<\UB(X)$)}{ $\UB(X) \leftarrow \PFT(\SOL)$}
$\{T_r\}_{r\in F} \leftarrow$ extract subtours from SOL\;
$\{S_r\}_{r\in F} \leftarrow$ extract tours from SOL\;
\If{\upshape ($\PFT(\bigcup_{r \in F} S_r)>\LB(X)$)}{
$\SOL(X) \leftarrow \{S_r\}_{r \in F}$\;
$\LB(X) \leftarrow \PFT(\SOL(X))$\;
}
\eIf{\upshape ($|\bigcup_{r\in F} T_r|=0$) or ($\LB(X)=\UB(X)$)}{
Opt($X$) $\leftarrow$ {\bf true}\;
} {
MIPS.add(GSEC($\{T_r\}_{r\in F}$)) (see Section \ref{sec:lp})\;
(add clique cuts, see Section \ref{ssec:clique})
MIPS.add(FindCliques($G^{Inc}_{V^{-}}[\bigcup_{r\in F}(T_r \cup S_r)]$)) \;
MIPS.add(FindCliques($G^{Inc}_{A}[\bigcup_{r\in F}(T_r \cup S_r)]$))\;
\If{\upshape ($\VarOriginal=${\bf true})}{$\D(X) \leftarrow$ CEA($X$, $\LB(X)$, $\VarStep$)\;
MIPS.add($\D(X)$)\;
$\VarStep \leftarrow \VarStep + 1$;
}
}
}
}
}
\end{algorithm}
Algorithm \ref{alg:cuttingPlaneAlgorithm} summarizes the remaining steps of our CPA. In each iteration of the main loop, the MIP solver is called to solve the linear model and an integer solution is obtained. Tarjan's algorithm \citep{bib:tarjan72} is then applied to this solution to check whether it contains any subtour. Recall that a directed graph is strongly connected if for any pair of vertices there exist paths linking them in both directions. A strongly connected component of a directed graph is a subset of its vertices such that the induced subgraph is strongly connected and the subset cannot be extended by adding more vertices. Since in our formulation the graph is directed and the depots are separate vertices, the vertices of a subtour can only belong to a strongly connected component of the corresponding subgraph. Consequently, the absence of such components in every subgraph, which can be detected in polynomial time \citep{bib:tarjan72}, implies global optimality, and the CPA terminates by returning the solution. Otherwise, the solution is \emph{suboptimal}. The associated constraints \eqref{eq:gsec1}, \eqref{eq:gsec21} and \eqref{eq:gsec22}, deduced from the suboptimal solution, are then added to the linear model to eliminate the subtours. Furthermore, the subgraphs of $G^{Inc}_{V^{-}}$ and $G^{Inc}_A$ associated with the vertices and arcs of the suboptimal solution are extracted. Some maximal cliques are then generated from these subgraphs, and the corresponding constraints \eqref{eq:clvcut} and \eqref{eq:clecut} are added to the linear model. Next, if we are solving the original problem (indicated by the boolean $\VarOriginal$), the CEA is called to generate a set of efficient constraints for the model; this algorithm is described in Section~\ref{sec:onsidealg}. Once all the cuts have been added to the model, the CPA proceeds to the next iteration, where the same solving process is repeated with the modified model. If the predefined time limit (indicated by the timer $\TM$) runs out, the algorithm terminates and the best bound computed so far is returned for the instance/model.
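As an illustration of the subtour check (a sketch only; our implementation applies Tarjan's algorithm directly), the strongly connected components of the arcs selected for one vehicle can be obtained, e.g., with networkx; every component of size at least two that avoids the depots is a subtour from which GSECs are generated.
\begin{verbatim}
import networkx as nx

def subtours_of_tour(arcs, d, a):
    # arcs: list of arcs (u, v) with x_{uvr} = 1 for one vehicle r.
    g = nx.DiGraph(arcs)
    return [set(c) for c in nx.strongly_connected_components(g)
            if len(c) >= 2 and d not in c and a not in c]
\end{verbatim}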
Algorithm \ref{alg:cuttingPlaneAlgorithm} takes as inputs an instance $X$, a set of cuts $\D(X)$, and a boolean indicator $\VarOriginal$. It also requires a mixed integer programming solver and a timer to operate. The algorithm returns an upper bound $\UB(X)$, a feasible solution $\SOL(X)$ and a boolean indicator $\Opt(X)$ stating whether the optimality of $\SOL(X)$ was proven before the expiration of the timer. For simplicity, the tours of the initially generated solutions are assumed to be sorted so as to satisfy inequalities \eqref{eq:symbrk}. We also assume that the mixed integer programming solver can be adapted to support the following operations: \emph{model} to construct the linear integer model based on $X$ and $\D(X)$ according to our specification, including the branching rules; \emph{initialize} to provide a feasible starting solution to the solver; \emph{add} to complete the model with efficient cuts; and finally \emph{solve} to try to solve the model until the expiration of a timer. The output of \emph{solve} is similar to that of Algorithm \ref{alg:cuttingPlaneAlgorithm}: a scalar reporting an upper bound, a feasible solution (which can be empty) and a boolean reporting optimality.
\subsection{Generation of efficient cuts}\label{sec:onsidealg}
To solve an original instance of TOP, our CPA needs strongly constrained models in its early iterations. For this purpose, the CEA is called and the counter $\VarStep$ of the main algorithm is passed to it as a parameter. For each value of $\VarStep$ up to $m + 1$, only one type of cut is computed and the produced cuts are added to the model. The details of the procedure are given in Algorithm \ref{alg:CEAlgorithm}. Note that, thanks to the efficient constraints added along the way, some easy instances can be solved in fewer than $m+1$ iterations.
\begin{algorithm}[!h]
\caption{Constraint-Enhancement algorithm (CEA).}\label{alg:CEAlgorithm}
\KwIn{Instance $X$, bound $\LB(X)$, integer $\VarStep$}
\KwOut{Cuts D$(X)$}
\Begin{
\If{\upshape $\VarStep \leq m-1$}{
(solve intermediate models, see Section \ref{sec:ublbprofit})\\
$\{\UB, \SOL, \Opt\} \leftarrow$ CPA($X^{\VarStep}$, $\D(X)$, $\TM_1$, {\bf false})\;
D($X$) $\leftarrow$ update from $\{\UB, \SOL, \Opt\}$\;
$\{\UB, \SOL, \Opt\} \leftarrow$ CPA($X^{\VarStep}_I$, $\D(X)$, $\TM_1$, {\bf false})\;
D($X$) $\leftarrow$ update from $\{\UB, \SOL, \Opt\}$\;
\If{\upshape ($\VarStep=m-1$)}{
$\MIPS \leftarrow$ create new MIP Solver\;
MIPS.model($\bar{X}^{1}_I$, $\D(X)$)\;
$\{\UB, \SOL, \Opt\} \leftarrow$ MIPS.solve($\TM_1$)\;
D($X$) $\leftarrow$ update from $\{\UB, \SOL, \Opt\}$\;
}
}
\If{\upshape $\VarStep = m$}{
(identify mandatory customers, see Section \ref{sec:mandatory})\\
$\VarMandatory \leftarrow \emptyset$\;
\ForEach{$i \in V^-$}{
$\{\UB, \SOL, \Opt\} \leftarrow$ CPA($X\setminus\{i\}$, $\D(X)$, $\TM_1$, {\bf false})\;
\If{$\UB<\LB(X)$}{
$\VarMandatory \leftarrow \VarMandatory \cup \{i\}$\;
}
}
D($X$) $\leftarrow$ update with $\VarMandatory$ as mandatory customers\;
}
\If{\upshape $\VarStep = m+1$}{
(enhance incompatibilities, see Section \ref{ssec:incomp})\\
\ForEach{$(i,j) \in A$}{
$\{\UB, \SOL, \Opt\} \leftarrow$ CPA($X\cup\{[i \sim j]\}$, $\D(X)$, $\TM_1$, {\bf false})\;
\If{$\UB<\LB(X)$}{
update $G^{Inc}_{V^{-}}$\;
}
\For{$(u,v) \in A$}{
$\{\UB, \SOL, \Opt\} \leftarrow$ CPA($X\cup\{[(i,j) \sim (u,v)]\}$, $\D(X)$, $\TM_1$, {\bf false})\;
\If{$\UB<\LB(X)$}{
update $G^{Inc}_{A}$\;
}
}
}
(identify clique/independent-set cuts, see Sections \ref{ssec:clique} and \ref{ssec:iss})\\
D($X$) $\leftarrow$ update from FindCliques($G^{Inc}_{V^{-}}$), FindCliques($G^{Inc}_{A}$)\;
}
}
\end{algorithm}
The first type of cuts to be generated is the one corresponding to the boundaries on profits and numbers of customers for each subset of tours. For each subproblem with the number of vehicles being reduced to $\VarStep$ ($\VarStep\leq m - 1$), upper bounds for the feasible profit and the feasible number of customers are computed using the same CPA as described in the previous section (except that $\VarOriginal$ is set to false). The corresponding constraints are then generated and added to $\D(X)$, the storage of all additional information and cuts. In the case of $\VarStep$ equal to $m - 1$, before returning to the main algorithm, a lower bound on the feasible number of customers for a single vehicle is calculated. This calculation makes use of the information accumulated in $\D(X)$ and expands it further with the obtained lower bound.
When the main algorithm reaches iteration $m$, efficient constraints of the second type are constructed: mandatory customers are located to strengthen the model. These customers are identified based on Definition \ref{def:mandatory}: the required $\LB$ is computed using a constructive heuristic from \citet{bib:Dang13b}, while the required $\UB$ is computed with our CPA, now applied to the instances $X \setminus \{i\}$. Once a mandatory customer is located, it is immediately added to $\D(X)$ so that this information can be used in the subsequent iterations.
The clique and independent-set cuts, constructed at iteration $m + 1$ of the main algorithm, are the third type of cuts. First, the graphs $G^{Inc}_{V^{-}}$ and $G^{Inc}_{A}$ are initialized with Property \ref{prop:ginit}. Since the verification of $\MinLen(\cdot)$ in this case involves at most four customers, a complete enumeration is inexpensive and manageable. In addition, these initial graphs can be computed beforehand and stored for each instance. The graphs are then made denser using their definition: the lower bounds $\LB(X)$ come from the results of \citet{bib:Dang13b}, and $\UB(X\cup\{[i\sim j]\})$ and $\UB(X\cup\{[(i,j)\sim(u,v)]\})$ are computed with the CPA after adding constraints \eqref{eq:fc1}-\eqref{eq:fa2} to construct the desired models. Next, the clique and independent-set cuts are generated from $G^{Inc}_{V^{-}}$ and $G^{Inc}_{A}$ and used as general constraints. For each vertex of the associated incompatibility graph, we determine a large maximal clique containing that vertex using the metaheuristic of \citet{bib:Dang12}. Using the same heuristic, a partition of each $N_i$ (resp. $N_{ij}$) into disjoint cliques can also be constructed: first find a large clique, then remove its vertices from the graph and continue finding cliques in the remaining graph. Upper bounds for $\alpha_i$ (resp. $\alpha_{ij}$) are thus computed.
We note that to generate the three types of efficient cuts, the CPA is called with a time limit configured by timer $\TM_1$.
\section{Numerical results}\label{sec:numresult}
Our algorithm is coded in C++. Experiments were conducted on an AMD Opteron at $2.60$ GHz, and CPLEX $12.5$ was used as the MIP solver. We used the same two-hour limit on solving time as \citet{bib:Boussier07, bib:Aragao10}, of which at most one hour is devoted to generating all the efficient cuts. This one-hour budget is divided between solving the smaller problems, locating the mandatory customers and extending the incompatibility graphs. We first evaluated the usefulness of the proposed components by activating each type of efficient cut without the other types, then by activating all of them together.
\subsection{Benchmark instances}
We evaluated our approach on a set of TOP instances proposed by \citet{bib:Chao96a}. This benchmark comprises $387$ instances and is divided into $7$ data sets. In each data set, the positions and the profits of the customers are identical for all instances. However, the number of vehicles varies from $2$ to $4$ and the travel length limit $L$ is also different between instances. The latter causes a variation of the number of accessible customers (denoted by~$n'$) even when the number of vehicles is fixed. Each instance is named according to the data set to which it belongs, the number of available vehicles and a letter that designates the travel length $L$. However, note that an identical letter inside a data set does not necessarily imply the same value of $L$ when the number of vehicles changes. The characteristics of each data set are reported in Table \ref{tab:instances}.
\begin{table}[h]
\vspace{10pt}
\caption{Instances of \citet{bib:Chao96a}.}\label{tab:instances}
\begin{center}
\begin{tabular}{cccccccc}
\toprule
Set&1&2&3&4&5&6&7\\
\midrule
\#{}Inst. &$54$&$33$&$60$&$60$&$78$&$42$&$60$\\
$n$ &$30$&$19$&$31$&$98$&$64$&$62$&$100$\\
$n'$ &0-30&1-18&2-31&0-98&0-64&0-62&0-100\\
$m$ &$2$-$4$&$2$-$4$&$2$-$4$&$2$-$4$&$2$-$4$&$2$-$4$&$2$-$4$\\
$L$ &$3.8$-$22.5$&$1.2$-$42.5$&$3.8$-$55$&$3.8$-$40$&$1.2$-$65$&$5$-$200$&$12.5$-$120$\\
\bottomrule
\end{tabular}
\end{center}
\end{table}
\subsection{Component evaluation}
We present in Table \ref{tab:impact} the results obtained with the basic model, then those obtained when separately applying the GSECs, the dominance properties (see Section~\ref{sec:domprop}) and the valid inequalities (see Section~\ref{sec:cliquecut}). The last main column shows the results of the global algorithm with all the components activated together. In this table, columns \#Opt, $CPU_{avg}$ and Gap report, for each set, the number of instances solved to optimality, the average computational time in seconds over the subset of instances solved by all the configurations, and the average percentage gap. The percentage gap of an instance is calculated as $\mathrm{Gap} = 100 \times \frac{\UB - \LB}{\UB}$, where $\UB$ and $\LB$ are the upper and lower bounds computed for the instance.
\afterpage{
{\setlength{\tabcolsep}{2.5pt}
\begin{landscape}
\begin{table}[h!]
\vspace{0.7cm}
\caption{Impact of the proposed cuts.}
\label{tab:impact}
\begin{center}
\begin{tabular}{c>{\columncolor[RGB]{235,235,235}}cccc>{\columncolor[RGB]{235,235,235}}cccc>{\columncolor[RGB]{235,235,235}}cccc>{\columncolor[RGB]{235,235,235}}cccc>{\columncolor[RGB]{235,235,235}}cccc}
\toprule
\multirow{2}{*}{Set}&&\multicolumn{3}{c}{Basic model}&&\multicolumn{3}{c}{GSECs}&&\multicolumn{3}{c}{Dominance properties}&&\multicolumn{3}{c}{Valid inequalities}&&\multicolumn{3}{c}{All cuts}\\
& &\#Opt&$CPU_{avg}$&Gap&&\#Opt&$CPU_{avg}$&Gap&&\#Opt&$CPU_{avg}$&Gap&&\#Opt&$CPU_{avg}$&Gap&&\#Opt&$CPU_{avg}$&Gap\\
\midrule
$1$&&$35/54$&$496.5$&$5.04$&&$53/54$&$13.2$&$0.54$&&$54/54$&$5.3$&$0$&&$54/54$&$2.9$&$0$&&$54/54$&$1.7$&$0$\\
$2$&&$33/33$&$5.5$&$0$&&$33/33$&$1.8$&$0$&&$33/33$&$0.6$&$0$&&$33/33$&$0.1$&$0$&&$33/33$&$0.03$&$0$\\
$3$&&$42/60$&$599.9$&$3.41$&&$55/60$&$150.9$&$0.25$&&$58/60$&$26.9$&$0.5$&&$60/60$&$10.3$&$0$&&$60/60$&$6.24$&$0$\\
$4$&&$23/60$&$323.5$&$3.19$&&$17/60$&$390.5$&$4.28$&&$22/60$&$200.4$&$2.05$&&$23/60$&$81$&$2.31$&&$30/60$&$66.6$&$0.01$\\
$5$&&$23/78$&$318$&$12.9$&&$24/78$&$40.2$&$21.11$&&$37/78$&$5.4$&$6.35$&&$36/78$&$2.6$&$6.65$&&$54/78$&$0.95$&$0.01$\\
$6$&&$33/42$&$48.7$&$0.4$&&$33/42$&$63.8$&$3.53$&&$41/42$&$4.5$&$0.11$&&$39/42$&$3.8$&$0.76$&&$42/42$&$1.9$&$0$\\
$7$&&$14/60$&$46.3$&$12.88$&&$18/60$&$3.5$&$13.08$&&$22/60$&$2.1$&$7.24$&&$24/60$&$1.3$&$5.75$&&$27/60$&$0.28$&$0.03$\\
\midrule
Total&&$204/387$&$294.2$&$7.0$&&$233/387$&$80.4$&$7.4$&&$267/387$&$26.7$&$2.8$&&$269/387$&$11.24$&$2.6$&&$300/387$&$8.21$&$0.01$\\
\bottomrule
\end{tabular}
\end{center}
\end{table}
\end{landscape}
}
}
Compared with the basic model, each of the proposed components independently and positively affects the outcome of the algorithm. As shown in Table \ref{tab:impact}, the GSECs largely help increase the number of instances solved, except for some instances from the large sets, where a significant number of GSECs must be added before any progress is made in the resolution. The valid inequalities, which include the clique and independent-set cuts, mainly contribute to reducing the computational times and the average gaps. The dominance properties, which comprise the symmetry breaking, the mandatory customers, the irrelevant components and the bounds on profits and numbers of customers, have a similar effect to the valid inequalities, especially on the number of instances solved to optimality and on the average gaps. The relatively large computational times observed when applying the dominance properties are due to the time spent solving subproblems.
On the other hand, we notice from the last column of Table \ref{tab:impact} that applying all the proposed components together remarkably improves the number of instances solved, reaching $300$ of the $387$ instances. This also entails a reduction of the average gaps between the upper and lower bounds. In addition, the average computational time of the global algorithm decreased from $294.2$\,s with the basic model to $8.21$\,s with all the enhanced components applied.
\subsection{Comparison with other exact methods in the literature}
We first compare our proposed method with the other exact methods in the literature on a per-instance basis. Since \citet{bib:Aragao10} did not report the detailed results of their algorithm, we restricted our comparison to the results of the B-P algorithm of \citet{bib:Boussier07}, the B-C algorithm of \citet{bib:Dang13a}, and the B-P-2R and B-C-P algorithms of \citet{bib:Keshtkarana15}. The computational experiments of B-P were carried out on a Pentium IV at $3.2$ GHz, those of B-C on an AMD Opteron at $2.60$ GHz, and those of B-P-2R and B-C-P on a single core of an Intel Core i7 at $3.6$ GHz.
Table \ref{tab:comparison_literature} reports the results for the instances that are solved by at least one of the five methods (but not by all of them). In this table, columns $Instance$, $n$, $m$, and $L$ respectively show the name of the instance, the number of accessible customers, the number of vehicles and the travel cost limit. Columns $UB$, $LB$, and $CPU$ report respectively the upper bound, the lower bound and the computational time in seconds for each method and each instance, when available. For B-P \citep[see][]{bib:Boussier07}, the reported $CPU$ is the time spent solving both the master problem and the subproblems until optimality is proven. For B-C \citep[see][]{bib:Dang13a}, the $CPU$ time includes the computational times of both the presolving and solving phases. For B-P-2R and B-C-P \citep[see][]{bib:Keshtkarana15}, the $CPU$ time is reported for the whole solving process. For our method, the $CPU$ time is the time spent in the global algorithm plus the time required to generate the efficient cuts. For some instances, dashes ``$-$'' are used in the $UB$ and $LB$ columns when the corresponding values were not reported, and tildes ``$\sim$'' are used in the $CPU$ column to indicate that optimality was not proven within the $7200$\,s time limit.
\begin{small}
{\setlength{\tabcolsep}{2.5pt}
\begin{landscape}
\begin{longtable}{ccccccccccccccccccc}
\caption{Comparison between our results and the literature on the standard benchmark.}\\
\toprule
\multirow{2}{*}{$Instance$}&
\multirow{2}{*}{$n$}&
\multirow{2}{*}{$m$}&
\multirow{2}{*}{$L$}&
\multicolumn{3}{c}{B-P}&
\multicolumn{3}{c}{B-C}&
\multicolumn{3}{c}{B-P-2R}&
\multicolumn{3}{c}{B-C-P}&
\multicolumn{3}{c}{Our algorithm}\\
\cmidrule(l){5-7}\cmidrule(l){8-10}\cmidrule(l){11-13}\cmidrule(l){14-16}\cmidrule(l){17-19}&
& & &$UB$&
$LB$&
$CPU$&
$UB$&
$LB$&
$CPU$&
$UB$&
$LB$&
$CPU$&
$UB$&
$LB$&
$CPU$&
$UB$&
$LB$&
$CPU$\\
\midrule
\endfirsthead
\multicolumn{19}{c}{{\tablename} \thetable{} -- continued from previous page} \\
\toprule
\multirow{2}{*}{$Instance$}&
\multirow{2}{*}{$n$}&
\multirow{2}{*}{$m$}&
\multirow{2}{*}{$L$}&
\multicolumn{3}{c}{B-P}&
\multicolumn{3}{c}{B-C}&
\multicolumn{3}{c}{B-P-2R}&
\multicolumn{3}{c}{B-C-P}&
\multicolumn{3}{c}{Our algorithm}\\
\cmidrule(l){5-7}\cmidrule(l){8-10}\cmidrule(l){11-13}\cmidrule(l){14-16}\cmidrule(l){17-19}&
& & &$UB$&
$LB$&
$CPU$&
$UB$&
$LB$&
$CPU$&
$UB$&
$LB$&
$CPU$&
$UB$&
$LB$&
$CPU$&
$UB$&
$LB$&
$CPU$\\
\midrule
\endhead
\bottomrule
\multicolumn{19}{c}{continued on next page}
\endfoot
\bottomrule
\endlastfoot
$p1.2.p$&$30$&$2$&$37.5$&$250$&$2926$&$\sim$&$250$&$250$&$27$&$250$&$250$&$15$&$250$&$250$&$16$&$250$&$250$&$7$\\
$p1.2.q$&$30$&$2$&$40$&$-$&$-$&$\sim$&$265$&$265$&$139$&$265$&$265$&$78$&$265$&$265$&$80$&$265$&$265$&$5$\\
$p1.2.r$&$30$&$2$&$42.5$&$-$&$-$&$\sim$&$280$&$280$&$33$&$280$&$280$&$555$&$280$&$280$&$566$&$280$&$280$&$4$\\
$p3.2.l$&$31$&$2$&$35$&$605$&$-$&$4737$&$590$&$590$&$53$&$605$&$590$&$59$&$591$&$-$&$2783$&$590$&$590$&$28$\\
$p3.2.m$&$31$&$2$&$37.5$&$-$&$-$&$\sim$&$620$&$620$&$58$&$630.769$&$620$&$192$&$623.953$&$-$&$7121$&$620$&$620$&$33$\\
$p3.2.n$&$31$&$2$&$40$&$-$&$-$&$\sim$&$660$&$660$&$48$&$662.453$&$660$&$1751$&$660$&$660$&$4345$&$660$&$660$&$28$\\
$p3.2.o$&$31$&$2$&$42.5$&$-$&$-$&$\sim$&$690$&$690$&$46$&$699.444$&$690$&$811$&$699.444$&$-$&$73$&$690$&$690$&$19$\\
$p3.2.p$&$31$&$2$&$45$&$-$&$-$&$\sim$&$720$&$720$&$74$&$730$&$720$&$3881$&$730$&$-$&$282$&$720$&$720$&$24$\\
$p3.2.q$&$31$&$2$&$47.5$&$-$&$-$&$\sim$&$760$&$760$&$20$&$763.2$&$760$&$1497$&$763.2$&$-$&$1779$&$760$&$760$&$12$\\
$p3.2.r$&$31$&$2$&$50$&$-$&$-$&$\sim$&$790$&$790$&$15$&$790$&$790$&$1253$&$790$&$790$&$1660$&$790$&$790$&$8$\\
$p3.2.s$&$31$&$2$&$52.5$&$-$&$-$&$\sim$&$800$&$800$&$7$&$800$&$800$&$60$&$800$&$800$&$234$&$800$&$800$&$0$\\
$p3.3.s$&$31$&$3$&$35$&$738.913$&$416$&$\sim$&$720$&$720$&$384$&$738.913$&$720$&$5136$&$729.36$&$-$&$5004$&$720$&$720$&$90$\\
$p3.3.t$&$31$&$3$&$36.7$&$763.688$&$4181$&$\sim$&$760$&$760$&$257$&$763.688$&$760$&$157$&$760.693$&$-$&$2933$&$760$&$760$&$42$\\
$\textbf{p4.2.f}$&$98$&$2$&$50$&$-$&$-$&$\sim$&$-$&$-$&$\sim$&$-$&$-$&$\sim$&$-$&$-$&$\sim$&$687$&$687$&$6550$\\
$p4.2.h$&$98$&$2$&$60$&$-$&$-$&$\sim$&$835$&$835$&$2784$&$-$&$-$&$\sim$&$-$&$-$&$\sim$&$835$&$835$&$3125$\\
$p4.2.i$&$98$&$2$&$65$&$-$&$-$&$\sim$&$918$&$918$&$5551$&$-$&$-$&$\sim$&$-$&$-$&$\sim$&$918$&$918$&$1064$\\
$\textbf{p4.2.j}$&$98$&$2$&$70$&$-$&$-$&$\sim$&$969$&$965$&$\sim$&$-$&$-$&$\sim$&$-$&$-$&$\sim$&$965$&$965$&$2777$\\
$\textbf{p4.2.k}$&$98$&$2$&$75$&$-$&$-$&$\sim$&$1027$&$1022$&$\sim$&$-$&$-$&$\sim$&$-$&$-$&$\sim$&$1022$&$1022$&$2751$\\
$\textbf{p4.2.l}$&$98$&$2$&$80$&$-$&$-$&$\sim$&$1080$&$1074$&$\sim$&$-$&$-$&$\sim$&$-$&$-$&$\sim$&$1074$&$1074$&$7172$\\
$\textbf{p4.2.m}$&$98$&$2$&$85$&$-$&$-$&$\sim$&$1137$&$1132$&$\sim$&$-$&$-$&$\sim$&$-$&$-$&$\sim$&$1132$&$1132$&$4610$\\
$\textbf{p4.2.r}$&$98$&$2$&$110$&$-$&$-$&$\sim$&$1293$&$1292$&$\sim$&$-$&$-$&$\sim$&$-$&$-$&$\sim$&$1292$&$1292$&$5016$\\
$p4.2.t$&$98$&$2$&$120$&$-$&$-$&$\sim$&$1306$&$1306$&$5978$&$-$&$-$&$\sim$&$-$&$-$&$\sim$&$1306$&$1306$&$0$\\
$p4.3.g$&$81$&$3$&$36.7$&$653$&$653$&$52$&$665$&$653$&$\sim$&$656.375$&$653$&$110$&$653$&$653$&$306$&$653$&$653$&$6587$\\
$p4.3.h$&$90$&$3$&$40$&$729$&$729$&$801$&$761$&$729$&$\sim$&$735.375$&$599$&$\sim$&$730.704$&$-$&$3858$&$736$&$729$&$\sim$\\
$p4.3.i$&$94$&$3$&$43.3$&$809$&$809$&$4920$&$830$&$809$&$\sim$&$813.625$&$766$&$\sim$&$809$&$809$&$2989$&$815$&$809$&$\sim$\\
$p4.4.i$&$68$&$4$&$32.5$&$657$&$657$&$23$&$660$&$657$&$\sim$&$665.4$&$657$&$74$&$657$&$657$&$83$&$657$&$657$&$935$\\
$p4.4.j$&$76$&$4$&$35$&$732$&$732$&$141$&$784$&$732$&$\sim$&$741.472$&$732$&$5138$&$732$&$732$&$589$&$755$&$732$&$\sim$\\
$p4.4.k$&$83$&$4$&$37.5$&$821$&$821$&$558$&$860$&$821$&$\sim$&$831.945$&$816$&$\sim$&$821.803$&$-$&$4007$&$858$&$821$&$\sim$\\
$p5.2.l$&$64$&$2$&$30$&$-$&$-$&$\sim$&$800$&$800$&$399$&$800$&$800$&$3$&$800$&$800$&$4$&$800$&$800$&$71$\\
$p5.2.m$&$64$&$2$&$32.5$&$-$&$-$&$\sim$&$860$&$860$&$3865$&$860$&$860$&$32$&$860$&$860$&$38$&$860$&$860$&$90$\\
$p5.2.n$&$64$&$2$&$35$&$-$&$-$&$\sim$&$930$&$925$&$\sim$&$930$&$925$&$89$&$925$&$925$&$1393$&$925$&$925$&$2373$\\
$p5.2.o$&$64$&$2$&$37.5$&$-$&$-$&$\sim$&$1030$&$1020$&$\sim$&$1030$&$1020$&$271$&$1020$&$1020$&$2233$&$1025$&$1020$&$\sim$\\
$p5.2.p$&$64$&$2$&$40$&$-$&$-$&$\sim$&$1150$&$1150$&$3955$&$1150$&$1150$&$657$&$1150$&$1150$&$727$&$1150$&$1150$&$77$\\
$\textbf{p5.2.q}$&$64$&$2$&$42.5$&$-$&$-$&$\sim$&$1680$&$1195$&$\sim$&$-$&$-$&$\sim$&$-$&$-$&$\sim$&$1195$&$1195$&$6597$\\
$p5.2.r$&$64$&$2$&$45$&$-$&$-$&$\sim$&$1680$&$1260$&$\sim$&$1260$&$1260$&$123$&$1260$&$1260$&$133$&$1300$&$1269$&$\sim$\\
$p5.2.s$&$64$&$2$&$47.5$&$-$&$-$&$\sim$&$1365$&$1340$&$\sim$&$1340$&$1340$&$1072$&$1340$&$1340$&$845$&$1340$&$1340$&$3048$\\
$p5.2.t$&$64$&$2$&$50$&$-$&$-$&$\sim$&$1400$&$1400$&$5136$&$1400$&$-$&$1297$&$1400$&$1400$&$4559$&$1400$&$1400$&$418$\\
$p5.2.u$&$64$&$2$&$52.5$&$-$&$-$&$\sim$&$1510$&$1460$&$\sim$&$1460$&$1460$&$3488$&$1460$&$1460$&$4561$&$1460$&$1460$&$3263$\\
$\textbf{p5.2.v}$&$64$&$2$&$55$&$-$&$-$&$\sim$&$1530$&$1520$&$\sim$&$1510$&$-$&$4462$&$1510$&$-$&$4948.16$&$1505$&$1505$&$3497$\\
$\textbf{p5.2.w}$&$64$&$2$&$57.5$&$-$&$-$&$\sim$&$1680$&$1565$&$\sim$&$-$&$-$&$\sim$&$-$&$-$&$\sim$&$1565$&$1565$&$5875$\\
$p5.2.x$&$64$&$2$&$60$&$-$&$-$&$\sim$&$1610$&$1610$&$1048$&$-$&$-$&$\sim$&$-$&$-$&$\sim$&$1610$&$1610$&$128$\\
$\textbf{p5.2.y}$&$64$&$2$&$62.5$&$-$&$-$&$\sim$&$1655$&$1645$&$\sim$&$-$&$-$&$\sim$&$-$&$-$&$\sim$&$1645$&$1645$&$457$\\
$p5.2.z$&$64$&$2$&$65$&$-$&$-$&$\sim$&$1680$&$1680$&$1604$&$-$&$-$&$\sim$&$-$&$-$&$\sim$&$1680$&$1680$&$0$\\
$p5.3.l$&$64$&$3$&$20$&$595$&$595$&$33$&$615$&$595$&$\sim$&$605$&$595$&$31$&$600$&$595$&$35$&$615$&$595$&$\sim$\\
$p5.3.m$&$64$&$3$&$21.7$&$650$&$650$&$2$&$660$&$650$&$\sim$&$650$&$650$&$1$&$650$&$650$&$1$&$660$&$650$&$\sim$\\
$p5.3.n$&$64$&$3$&$23.3$&$755$&$755$&$42$&$765$&$755$&$\sim$&$755$&$755$&$3$&$755$&$755$&$3$&$765$&$755$&$\sim$\\
$p5.3.q$&$64$&$3$&$28.3$&$-$&$-$&$\sim$&$1260$&$1070$&$\sim$&$1090$&$1070$&$521$&$1076.25$&$-$&$4694$&$1110$&$1070$&$\sim$\\
$p5.3.t$&$64$&$3$&$33.3$&$-$&$-$&$\sim$&$1320$&$1260$&$\sim$&$1270$&$1260$&$5152$&$1270$&$-$&$16$&$1320$&$1260$&$\sim$\\
$p5.3.u$&$64$&$3$&$35$&$-$&$-$&$\sim$&$1395$&$1345$&$\sim$&$1350$&$-$&$123$&$1350$&$-$&$149$&$1395$&$1345$&$\sim$\\
$p5.4.l$&$44$&$4$&$15$&$430$&$430$&$1$&$445$&$430$&$\sim$&$430$&$430$&$0$&$430$&$430$&$0$&$430$&$430$&$2077$\\
$p5.4.m$&$52$&$4$&$16.2$&$555$&$555$&$0$&$560$&$555$&$\sim$&$555$&$555$&$0$&$555$&$555$&$0$&$555$&$555$&$1357$\\
$p5.4.n$&$60$&$4$&$17.5$&$620$&$620$&$0$&$640$&$620$&$\sim$&$620$&$620$&$0$&$620$&$620$&$0$&$620$&$620$&$7048$\\
$p5.4.o$&$60$&$4$&$18.8$&$690$&$690$&$1$&$720$&$690$&$\sim$&$690$&$690$&$0$&$690$&$690$&$0$&$720$&$690$&$\sim$\\
$p5.4.p$&$64$&$4$&$20$&$765$&$765$&$729$&$820$&$765$&$\sim$&$790$&$765$&$1238$&$775.714$&$765$&$1372$&$820$&$765$&$\sim$\\
$p5.4.q$&$64$&$4$&$21.2$&$860$&$860$&$1$&$880$&$860$&$\sim$&$860$&$860$&$2$&$860$&$860$&$2$&$880$&$860$&$\sim$\\
$p5.4.v$&$64$&$4$&$27.5$&$1320$&$1320$&$446$&$1340$&$1320$&$\sim$&$1320$&$1320$&$12$&$1320$&$1320$&$12$&$1340$&$1320$&$\sim$\\
$p5.4.y$&$64$&$4$&$31.2$&$-$&$-$&$\sim$&$1620$&$1520$&$\sim$&$1520$&$1455$&$\sim$&$1520$&$1520$&$46$&$1620$&$1520$&$\sim$\\
$p5.4.z$&$64$&$4$&$32.5$&$-$&$-$&$\sim$&$1680$&$1620$&$\sim$&$1620$&$1620$&$550$&$1620$&$1620$&$562$&$1680$&$1620$&$\sim$\\
$p6.2.j$&$62$&$2$&$30$&$-$&$-$&$\sim$&$948$&$948$&$2393$&$948$&$948$&$139$&$948$&$948$&$149$&$948$&$948$&$1338$\\
$p6.2.k$&$62$&$2$&$32.5$&$-$&$-$&$\sim$&$1032$&$1032$&$4016$&$1032$&$1032$&$223$&$1032$&$1032$&$244$&$1032$&$1032$&$699$\\
$p6.2.l$&$62$&$2$&$35$&$-$&$-$&$\sim$&$1116$&$1116$&$3828$&$1116$&$1116$&$5699$&$1116$&$1116$&$6471$&$1116$&$1116$&$39$\\
$p6.2.m$&$62$&$2$&$37.5$&$-$&$-$&$\sim$&$1188$&$1188$&$1442$&$-$&$-$&$\sim$&$-$&$-$&$\sim$&$1188$&$1188$&$680$\\
$p6.2.n$&$62$&$2$&$40$&$-$&$-$&$\sim$&$1260$&$1260$&$1473$&$-$&$-$&$\sim$&$-$&$-$&$\sim$&$1260$&$1260$&$1$\\
$p6.3.m$&$62$&$3$&$25$&$1104$&$-$&$33$&$1080$&$1080$&$1175$&$1104$&$-$&$20$&$1094.1$&$-$&$6407$&$1080$&$1080$&$432$\\
$p7.2.g$&$87$&$2$&$70$&$-$&$-$&$\sim$&$459$&$459$&$1226$&$459$&$459$&$44$&$459$&$459$&$58$&$459$&$459$&$589$\\
$p7.2.h$&$92$&$2$&$80$&$-$&$-$&$\sim$&$523$&$521$&$\sim$&$521$&$521$&$5101$&$521$&$521$&$6327$&$521$&$521$&$1977$\\
$\textbf{p7.2.i}$&$98$&$2$&$90$&$-$&$-$&$\sim$&$585$&$580$&$\sim$&$-$&$-$&$\sim$&$-$&$-$&$\sim$&$580$&$580$&$6271$\\
$\textbf{p7.2.t}$&$100$&$2$&$200$&$-$&$-$&$\sim$&$1181$&$1179$&$\sim$&$-$&$-$&$\sim$&$-$&$-$&$\sim$&$1179$&$1179$&$6934$\\
$p7.3.h$&$59$&$3$&$53.3$&$425$&$425$&$8$&$436$&$425$&$\sim$&$429$&$425$&$3$&$425$&$425$&$13$&$425$&$425$&$4461$\\
$p7.3.i$&$70$&$3$&$60$&$487$&$487$&$3407$&$535$&$487$&$\sim$&$496.976$&$487$&$436$&$488.5$&$487$&$3357$&$509$&$487$&$\sim$\\
$p7.3.j$&$80$&$3$&$66.7$&$570.5$&$2654$&$\sim$&$611$&$564$&$\sim$&$570.5$&$564$&$4207$&$564$&$564$&$4289$&$573$&$564$&$\sim$\\
$p7.3.k$&$91$&$3$&$73.3$&$-$&$-$&$\sim$&$688$&$633$&$\sim$&$633.182$&$633$&$1173$&$633$&$633$&$2751$&$655$&$633$&$\sim$\\
$p7.3.m$&$96$&$3$&$86.7$&$-$&$-$&$\sim$&$1374$&$762$&$\sim$&$762$&$762$&$928$&$762$&$762$&$1202$&$817$&$762$&$\sim$\\
$p7.3.n$&$99$&$3$&$93.3$&$-$&$-$&$\sim$&$900$&$820$&$\sim$&$820$&$820$&$2300$&$820$&$820$&$3034$&$889$&$820$&$\sim$\\
$p7.4.j$&$51$&$4$&$50$&$462$&$462$&$1$&$481$&$462$&$\sim$&$462$&$462$&$2$&$462$&$462$&$2$&$465$&$462$&$\sim$\\
$p7.4.k$&$61$&$4$&$55$&$520$&$520$&$73$&$586$&$520$&$\sim$&$524.607$&$520$&$96$&$520$&$520$&$91$&$541$&$520$&$\sim$\\
$p7.4.l$&$70$&$4$&$60$&$590$&$590$&$778$&$667$&$590$&$\sim$&$593.625$&$590$&$576$&$590$&$590$&$173$&$632$&$590$&$\sim$\\
$p7.4.n$&$87$&$4$&$70$&$-$&$-$&$\sim$&$809$&$730$&$\sim$&$730$&$730$&$85$&$730$&$730$&$95$&$803$&$730$&$\sim$\\
$p7.4.o$&$91$&$4$&$75$&$-$&$-$&$\sim$&$909$&$781$&$\sim$&$786.762$&$781$&$4434$&$784.676$&$-$&$6492$&$903$&$781$&$\sim$\\
\label{tab:comparison_literature}
\end{longtable}
\end{landscape}}
\end{small}
Next, we compare the performance of all the exact methods on a per-data-set basis. Table \ref{tab:summary} summarizes the number of instances solved by each method. We did not report the $CPU$ time in this table because some of this information is missing for the other methods.
\begin{table}[h]
\caption{Comparison between the numbers of instances being solved by the exact methods in the literature.}\label{tab:summary}
\begin{center}
\begin{tabular}{cccccc}
\toprule
Set&B-P&B-C&B-P-2R&B-C-P&Our algorithm\\
\midrule
$1$&$51/54$&$\textbf{54/54}$&$\textbf{54/54}$&$\textbf{54/54}$&$\textbf{54/54}$\\
$2$&$\textbf{33/33}$&$\textbf{33/33}$&$\textbf{33/33}$&$\textbf{33/33}$&$\textbf{33/33}$\\
$3$&$50/60$&$\textbf{60/60}$&$\textbf{60/60}$&$51/60$&$\textbf{60/60}$\\
$4$&$25/60$&$22/60$&$20/60$&$22/60$&$\textbf{30/60}$\\
$5$&$48/78$&$44/78$&$\textbf{60/78}$&$59/78$&$54/78$\\
$6$&$36/42$&$\textbf{42/42}$&$36/42$&$38/42$&$\textbf{42/42}$\\
$7$&$27/60$&$23/60$&$\textbf{38/60}$&$34/60$&$27/60$\\
\midrule
$Total$&$270/387$&$278/387$&$301/387$&$291/387$&$300/387$\\
\bottomrule
\end{tabular}
\end{center}
\end{table}
A first remark from these results is that instances with large values of $L$ and $m$ are generally more difficult to solve than those with smaller values. This can be clearly observed with our method on data sets $4$, $5$ and $7$. On the other hand, none of the exact methods had difficulty solving the instances of sets $1$, $2$ and $3$, because these instances have a small number of accessible customers. The random distribution of customers around the depots could also make the optimal solutions easier to locate. Only a minor exception was noticed for B-P and B-C-P on some instances of set $3$.
The customers of sets $4$ and $7$ are also randomly distributed, but their numbers of accessible customers are larger than in the first three sets and can reach $100$. These large numbers make the corresponding instances difficult for all the exact methods. In particular, none of the five methods solved more than $58$ of these $120$ instances.
Finally, the instances of sets $5$ and $6$ have a special geometric structure. These instances have no more than $64$ accessible customers, arranged on a grid in such a way that the customers with large profits are far away from the depots. These instances appear to be difficult to solve, especially for the B-P and B-C algorithms. The B-P-2R and B-C-P algorithms of \citet{bib:Keshtkarana15} only had problems with set $6$, while they obtained the best results for set $5$. Our cutting-plane algorithm obtained quite good results for these two sets: it was able to solve all the instances of set $6$ and most of the instances of set $5$. We encountered only a few difficulties with set $5$, more precisely on some instances with $4$ available vehicles. A closer look at the execution of the algorithm on those few instances revealed that the CPA only made progress in improving the incumbent solution or finding equivalent ones, while hardly reducing the upper bound.
To summarize, our algorithm was able to prove the optimality of all the instances of sets $1$, $2$, $3$ and $6$, and of a large number of instances from the other three sets. Although the instances of set $4$ are the hardest to solve, our CPA proved the optimality of $30$ out of its $60$ instances, a number that none of the existing algorithms was able to reach for this set. In total, the proposed approach solves $300$ out of the $387$ instances.
\begin{table}[h]
\caption{Pairwise comparison between the exact methods.}\label{tab:comparison_literature_two}
\begin{center}
\begin{tabular}{lccccc}
\toprule
&B-P&B-C&B-P-2R&B-C-P&CPA\\
\midrule
B-P &$-$&$21$&$8$&$6$&$15$\\
B-C &$29$&$-$&$13$&$19$&$0$\\
B-P-2R &$39$&$36$&$-$&$16$&$26$\\
B-C-P &$27$&$32$&$6$&$-$&$22$\\
CPA&$45$&$22$&$25$&$31$&$-$\\
\bottomrule
\end{tabular}
\end{center}
\end{table}
For further comparison between the exact methods in the literature, we present in Table \ref{tab:comparison_literature_two} a pairwise comparison, giving for each pair of methods the number of instances solved by one method but not by the other. Each cell of this table reports the number of instances solved by the method in its row but not by the method in its column. From these results, we see that the number of instances distinctively solved by our method is $45$ compared with the B-P algorithm, $22$ compared with the B-C algorithm, and respectively $25$ and $31$ compared with the B-P-2R and B-C-P algorithms.
Moreover, we can see from Table \ref{tab:comparison_literature} that our CPA improved the upper bounds of $32$ and $27$ instances compared with the two algorithms of \citet{bib:Keshtkarana15}, respectively. Overall, our approach is clearly efficient and competitive with the existing methods in the literature. We were able to prove the optimality of $12$ instances that were previously unsolved in the literature; these instances are marked in bold in Table \ref{tab:comparison_literature}.
\section*{Conclusion and future work}\label{sec:conclusion}
The Team Orienteering Problem is one of the well-known variants of the Vehicle Routing Problem with Profits. In this article, we presented a new exact algorithm to solve this problem based on a cutting-plane approach. Several types of cuts are proposed to strengthen the classical linear formulation; they are generated and added to the model during the solving process. They include symmetry-breaking cuts, generalized subtour eliminations, bounds on the profits and numbers of customers, cuts forcing mandatory customers, cuts removing irrelevant components, and clique and independent-set cuts based on graphs of incompatibilities between variables. The experiments conducted on the standard benchmark of TOP confirm the effectiveness of our approach. Our algorithm is able to solve a large number and a large variety of instances, some of which were previously unsolved in the literature.
Interestingly, the branch-and-price algorithm of \citet{bib:Boussier07} and our cutting-plane algorithm have complementary performance.
This suggests that developing a branch-and-cut-and-price algorithm incorporating the ideas presented here is a promising direction for improving solution methods for TOP. We also plan to adapt the presented approach to new challenges. These could include variants of TOP on arcs, such as the Team Orienteering Arc Routing Problem (TOARP) addressed in \citep{bib:Archetti13}. On the other hand, by taking the time scheduling of the visits into consideration, the CPA can be extended to solve other variants of TOP and VRP, such as the Team Orienteering Problem with Time Windows and/or Synchronization Constraints \citep[e.g.,][]{bib:Labadi12,bib:Souffriau13,bib:Guibadj13,bib:Afifi16}.
\section*{Acknowledgement}
This work is carried out in the framework of the Labex MS2T, which was funded by the French Government, through the program "Investments for the future" managed by the National Agency for Research (Reference ANR-11-IDEX-0004-02). It is also partially supported by the Regional Council of Picardie under TOURNEES SELECTIVES project and TCDU project (Collaborative Transportation in Urban Distribution, ANR-14-CE22-0017).
\bibliographystyle{abbrvnat}
The Bunimovich stadium $S$ is a planar domain given by the union of a
rectangle $R = \{ (x,y) \mid x \in [-\alpha,\alpha], \ y \in [-\beta,\beta]
\}$ with two ``wings,'' i.e. the two semicircular regions centered at
$(\pm\alpha,0)$ with radius $\beta$ which lie outside $R$. Geodesic flow
in $S$ (obeying the law of reflection at the boundary) was proved to be
ergodic by Bunimovich \cite{Bun}. It follows from this and from
results of Schnirelman \cite{S}, Zelditch \cite{Z} and Colin de Verdi\`ere
\cite{CdV} that the stadium is quantum ergodic. This means that there is a
density one sequence of Dirichlet eigenfunctions which becomes uniformly
distributed; in particular, along this density one sequence the weak limit
of the $L^2$ mass distribution becomes uniform. One can ask whether the
entire sequence of eigenfunctions becomes uniformly distributed; if so, the
domain is called quantum unique ergodic (QUE). It has been conjectured by
Rudnick and Sarnak that complete surfaces with negative curvature which are
classically ergodic are QUE; this has been proved recently by Lindenstrauss
\cite{L} for arithmetic surfaces.\footnote{With one slight caveat, that the
eigenfunctions are also eigenfunctions of the Hecke operators.} The
Bunimovich stadium, by contrast, is generally believed to be non-QUE; it is
thought that there is a sequence of eigenfunctions that concentrates in the
rectangle $R$. Little is currently understood about the way in which such
eigenfunctions would concentrate, however. For example their
(hypothetical) rate of decay outside $R$ is unclear. The result in the
present paper is intended to shed some light on this question: we show that
any sequence of eigenfunctions (or quasimodes) cannot concentrate very
rapidly inside $R$, by obtaining lower bounds (tending to zero as $\lambda
\to \infty$, but only polynomially) on the $L^2$ mass inside the wings
$W_\pm$.
Let $\Delta =
-{\partial}_x^2-{\partial}_y^2$ denote the (nonnegative) Laplacian on $S$ with Dirichlet boundary conditions. We denote by
$\norm{\cdot}$ the norm in $L^2(S)$, and by
${\partial_N} g$ the
outward pointing normal derivative of $g$ at ${\partial} S.$ We consider a $o(1)$ Dirichlet quasimode $u_\lambda$ for $\Delta$, by which we mean that we have a sequence
$\lambda = \lambda_k\to \infty$ of real numbers and a corresponding sequence
$u_{\lambda} \in H^2(S)$ satisfying
\begin{equation}\label{quasimode}
\begin{aligned}
(\Delta-\lambda^2) u_{\lambda} &= f_{\lambda},\\
u_{\lambda}\mid_{\partial S} &= 0,\\
\norm{u_\lambda} &=1,
\end{aligned}
\end{equation}
where
\begin{equation}\label{qm-order}
\norm{f_\lambda} = o(1) \text{ as }\lambda \to \infty.
\end{equation}
We more generally define a $O(\lambda^{-j})$ or $o(\lambda^{-j})$
quasimode by modifying the right-hand side of \eqref{qm-order}
accordingly. Of course a sequence of eigenfunctions is a $o(\lambda^{-j})$ quasimode for any $j$.
It is easy to see that a $O(1)$ quasimode can be localized to a small
rectangle of the form $[\gamma, \delta] \times [-\beta, \beta]$, where
$[\gamma, \delta]$ is an arbitrary subinterval of $[-\alpha, \alpha]$;
indeed the family $u_{(n+1/2)\pi/\beta} = \phi(x) \cos((n+1/2) \pi y
/\beta)$ is (after normalization) such a quasimode, where $\phi$ is any
nonzero smooth function supported in $[\gamma, \delta]$. By contrast, an
$o(1)$ quasimode cannot be so localized: Burq-Zworski \cite{BZ} have shown
that the $L^2$ norm of $u_\lambda$ is controlled by (that is, bounded above
by a constant times) its $L^2$ norm in the union of any two rectangles of
the form $([-\alpha, \gamma_1] \times [-\beta, \beta]) \cup ([\gamma_2,
\alpha] \times [-\beta, \beta])$. In particular, for a $o(1)$ quasimode,
the $L^2$ mass cannot shrink to a closed region disjoint from the wings of
the stadium as $\lambda \to \infty$.
Although the stadium is classically ergodic, there is a codimension one
invariant set for the classical flow, consisting of vertical ``bouncing
ball'' orbits parallel to the $y$-axis and within the rectangle $R$, and
the union of these orbits is the most likely place where localization of
eigenfunctions, or more generally $o(1)$ quasimodes, can occur\footnote{The
explicit quasimode in the paragraph above concentrates along a subset of
these orbits.}. There is a rather convincing plausibility argument in the
physics literature due to Heller and O'Connor \cite{HOC} which indicates
that a density-zero sequence of eigenfunctions, with eigenvalues
$((n+1/2)\pi/\beta)^2 + O(1)$, does concentrate to some extent at these
bouncing-ball orbits. The rigorous essence of this argument has been
developed by Donnelly \cite{D} who showed that there are sequences of
functions lying in the range of spectral projectors $E_{I_n}(\Delta)$,
where $I_n$ are intervals of the form $[((n+1/2)\pi/\beta)^2 -C,
((n+1/2)\pi/\beta)^2 + C]$ which concentrate at the bouncing-ball
orbits.\footnote{This was shown for surfaces without boundary containing a
flat cylinder, but the arguments go through for the stadium.} On the other
hand, the result of Burq-Zworski \cite{BZ} shows that such localization
cannot be too extreme: the control region must extend to the boundary of
the rectangle.
Our main result here is that we may in fact push our control region outside
the rectangle altogether and into the wings, in return for a loss: either a
restriction to the boundary, or powers of $\lambda.$ To state this concisely, it is
convenient to introduce an auxiliary coordinate in the wings given by
$w=\abs{x}-\alpha;$ thus $w$ is nonnegative on the wings and vanishes
exactly on the vertical lines $R \cap W.$
\begin{theorem}\label{ourtheorem}
There is a $C > 0$, depending only on $\alpha/\beta$,
such that
any family $u_\lambda$ satisfying \eqref{quasimode} obeys the estimates
\begin{equation}
\label{normderiv}
\|f_\lambda\|^2 +\int_{{\partial} S \cap W} \!\! w \, \abs{{\partial_N} u_\lambda}^2 \, dl \geq C \ ,
\end{equation}
\begin{equation}
\label{L2}
\|f_\lambda\|^2+ \lambda^8 \norm{u_\lambda}_{L^2(W)}^2 \geq C
\end{equation}
and
\begin{equation}
\label{L2bis}
\lambda^2 \|f_\lambda\|^2 + \lambda^4 \norm{u_\lambda}_{L^2(W)}^2 \geq C.
\end{equation}
Therefore, if $u_\lambda$ is a $o(1)$ quasimode, we have for sufficiently large $\lambda$
\begin{equation}\begin{aligned}
\| w^{\frac1{2}} \partial_N u \|_{L^2({\partial} S \cap W)} & \geq C,\\
\norm{u_\lambda}_{L^2(W)} &\geq C \lambda^{-4},
\label{o1}
\end{aligned}
\end{equation}
while if $u_\lambda$ is a $o(\lambda^{-2})$ quasimode (e.g.\ an eigenfunction),
\begin{equation}\label{betterquasimode}
\norm{u_\lambda}_{L^2(W)} \geq C \lambda^{-2}.
\end{equation}
\end{theorem}
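To spell out how the quasimode statements follow from the main estimates (a routine verification, made explicit here for completeness): if $\norm{f_\lambda} = o(1)$, then $\norm{f_\lambda}^2 \leq C/2$ for $\lambda$ sufficiently large, and \eqref{L2} gives
\begin{equation*}
\lambda^8 \norm{u_\lambda}_{L^2(W)}^2 \geq C - \norm{f_\lambda}^2 \geq \frac{C}{2},
\qquad \text{i.e.} \qquad
\norm{u_\lambda}_{L^2(W)} \geq \sqrt{C/2}\,\lambda^{-4},
\end{equation*}
which is the second estimate in \eqref{o1} after renaming the constant. The first estimate in \eqref{o1} follows in the same way from \eqref{normderiv}, and \eqref{betterquasimode} follows from \eqref{L2bis}, since for a $o(\lambda^{-2})$ quasimode we have $\lambda^2\norm{f_\lambda}^2 = o(1)$.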
Note that the results of Theorem~\ref{ourtheorem} still leave open the
possibility of quasimodes concentrated along bouncing-ball orbits in the
rectangle with $o(1)$ mass in the wings. They also do not rule out the
possibility that all the energy in the wings may asymptotically concentrate
in a boundary layer near $R.$
\section{Preliminaries to $L^2$ estimates}
Our main tool is positive commutator estimates, which we use in the following form:
\begin{lemma} \label{lemma:rellich}
Let $u$ be real, equal to zero at ${\partial} S,$ and satisfy $(\Delta -
\lambda^2) u = f,$ where $f$ is smooth. Then for any real vector field $A$,
\begin{equation}
\langle u, [\Delta - \lambda^2,A]u \rangle =
\langle 2 Au+ (\div A) u, f \rangle +
\int_{{\partial} S} (\partial_N u) Au \, dl.
\label{Rellich}\end{equation}
\end{lemma}
\begin{proof}
We integrate twice by parts, using the Dirichlet boundary conditions in the
first instance, to write
$$ \ang{u, [\Delta-\lambda^2, A] u} = \ang{f,A u} + \int_{{\partial} S} {\partial}_N u Au
\, dl - \ang{u, A f}.
$$
Applying Green's Theorem to the last term now gives two terms: $\ang{Au, f}+ \ang{(\div
A) u, f}.$ Since $u$ and $f$ are real this yields the desired identity.
\end{proof}
We also record here an inequality that will be of use in
estimating derivative terms.
\begin{lemma}\label{lemma:gradientestimate}
Let $u,$ $f$ be as in Lemma~\ref{lemma:rellich}.
Then for all $s>0,$ for $\lambda$ sufficiently large,
$$
\norm{\nabla u}^2 \leq C_s (\lambda^{\max(2,s)}\norm{u}^2 + \lambda^{-s} \norm{f}^2).
$$
\end{lemma}
\begin{proof}
We compute
\begin{align*}
\norm{\nabla u}^2 &= \int_S u_x^2+u_y^2 \, dA\\ &= \int_S (\Delta u ) u \,
dA\\ &= \lambda^2 \int_S u^2 \,dA + \ang{f,u}.
\end{align*}
Applying Cauchy-Schwarz and Young's inequality to $\ang{f,u}$, namely $\abs{\ang{f,u}} \leq \tfrac{1}{2}\lambda^{s}\norm{u}^2 + \tfrac{1}{2}\lambda^{-s}\norm{f}^2$, gives the estimate.
\end{proof}
\section{Proof of \eqref{normderiv}}
It suffices to prove \eqref{normderiv} under the assumption that $u_\lambda$, and hence $f_\lambda$, are real, since we can treat the real and imaginary parts separately. We make this assumption from now on.
We begin with the standard commutator $[\Delta,x{\partial}_x] = -2 {\partial}_x^2.$
Applying \eqref{Rellich} with $A = x {\partial}_x$, and dropping the subscript on $u_\lambda$, we have
\begin{equation}\begin{aligned}
\ang{u_x, u_x} = -\ang{{\partial}_x^2 u,u}&= \ang{[\Delta-\lambda^2, x {\partial}_x] u,u}
\\
&= \int_{{\partial} S} x{\partial}_x u \, {\partial_N} u\, dl
+ \int (2 x {\partial}_xu + u) f \, dA ;
\end{aligned}\label{boundary}\end{equation}
in the last equation we integrated twice by parts, using for a second time
the fact that $(\Delta-\lambda^2)u=f$ as well as the fact that $u$ satisfies
Dirichlet boundary conditions, hence integration by parts produces
boundary terms only where derivatives land on both factors of $u.$
Now at every boundary point we may decompose $x {\partial}_x$ into $p {\partial}_l+q {\partial_N}$,
where ${\partial}_l$ is differentiation tangent to the boundary. Of course
${\partial}_l$ annihilates $u.$ Since ${\partial}_x$ is tangent to the top and
bottom sides of the rectangle, we find that the boundary integral in
\eqref{boundary} is only over ${\partial} S \cap W.$ Moreover, as ${\partial}_x$ is
tangent to the top and bottom of the circles
forming the boundaries of the wings, we have $q = O(w)$ on ${\partial} S \cap
W.$ Hence we have shown that
\begin{equation}
\| u_x \|^2 \leq \int_{{\partial} S \cap W} O(w)\abs{{\partial_N} u}^2 \, dl + {\epsilon}
\int (u^2 + u_x^2) \, dA + C \int f^2 \, dA.
\label{useful}\end{equation}
We may absorb the $\epsilon \| u_x \|^2$ term, then apply the Poincar\'e inequality, and absorb the $\epsilon \| u \|^2$ to obtain
\begin{equation}
\norm{u}_{L^2(S)}^2 \leq C \int_{{\partial} S \cap W} w \abs{{\partial_N} u}^2 \, dl+ C \norm{f}^2
\label{ndd}\end{equation}
which is the first part of our theorem, as we took $u$ to be
$L^2$-normalized.
\section{Proof of \eqref{L2}}
To prove this estimate we start from
\begin{equation}\label{normderivbis}
\| u_x \|^2 \leq C \int_{\partial S} w_+ | {\partial_N} u|^2 \, dl +C \norm{f}^2
\end{equation}
which follows directly from the considerations of the previous section, and estimate the boundary integral term. We
\begin{comment}
We start from the last line of \eqref{boundary}. Integrating by parts on the left hand side, and using Cauchy-Schwarz on the final term and absorbing the $\epsilon \| u_x \|^2$ and $ \epsilon \| u \|^2$ terms as above, we obtain
\begin{equation}\label{normderivbis}
\int_S u_x^2 \, dA \leq C \int_{\partial S} (x {\partial}_x u) {\partial_N} u \, dl +C \norm{f}^2.
\end{equation}
We now proceed to estimate the term
\begin{equation}
E = \int_{\partial S} ({\partial}_x u) {\partial_N} u \, dl,
\label{toest}\end{equation}
which bounds the first term on the RHS, in terms of $\| u \|_{L^2(W)};$ for
simplicity we work in the right hand wing where $x>0$ so as not to have to
worry about signs; an analogous estimate holds on the left, or replacing
$x$ with $w.$ We begin by noting that ${\partial}_x$ equals a tangential
derivative plus a term $f w {\partial}_w$ on ${\partial} S$; here $f$ is a nonnegative
function supported on the boundary of the wings. Thus the integrand in $E$
is positive, and to estimate $E$ it suffices to obtain an estimate where
the integrand is replaced by $w_+ \abs{{\partial}_N u}^2.$ We
\end{comment}
shall obtain upper
bounds of the form
$$
\lambda^8 \int_W u^2\, dA + \| f \|^2,
$$
and
$$
\lambda^4 \int_W u^2\, dA + \lambda^2 \| f \|^2,
$$
thus proving \eqref{L2} and \eqref{L2bis}.
We shall perform this estimate in three separate regions in the wing.
Region I is the near-rectangular region, in a boundary layer where
$w \leq \delta\lambda^{-2}.$ Region II will be outside the boundary layer,
where $\delta\lambda^{-2} \leq w \leq \beta/2.$ Region III will be the far outer region $w \geq \beta/2.$
\begin{figure}
\includegraphics[scale=.3]{regions.eps}
\caption{The three regions of interest in $W.$}
\end{figure}
We begin with Region III, far away from the rectangle. In this case we
employ Lemma~\ref{lemma:rellich} where $A$ is the operator $\phi(x)
\partial_x$, where $\phi$ is supported where $w > \beta/4$, say,
equal to $1$ where $w > \beta/2$, and with $\partial_x \phi \geq 0$. Then \eqref{Rellich} gives us, with $P =
[\Delta, A]$,
$$
\int_{\partial S} \phi\, \partial_x u \, \partial_N u \, dl \leq \abs{\langle Pu, u
\rangle}+\abs{\ang{\phi_x f,u}} +2\abs{\ang{\phi u_x,f}}.
$$
Note that $P = -2 \phi_x \partial^2_{xx} - \phi_{xx}
\partial_x$. Thus the LHS is bounded ($\forall \epsilon>0$) by
$$
\abs{\ang{- \phi_x u_{xx}, u}} + {\epsilon} \norm{u_x}^2 + C (\norm{u}_{L^2(W)}^2+\norm{f}^2)
$$
(where of course $C$ depends on ${\epsilon}$).
We can add the positive term $\int_S \phi_x (\partial_y u)^2$ to this
estimate. Integrating by parts in $y$ gives us
$$
\abs{\ang{ \phi_x (-u_{xx}- u_{yy}), u}} + {\epsilon} \norm{u_x}^2 + C (\norm{u}_{L^2(W)}^2+\norm{f}^2).
$$ Using the positivity of the integrand, we thus obtain an estimate
\begin{equation}
\abs{\int_{\partial S\cap \text{III}} \phi \, \partial_x u \, \partial_N u\, dl} \leq C \lambda^2
\norm{u}_{L^2(W)}^2 + C \norm{f}^2+ {\epsilon} \norm{u_x}^2
\label{bdy-est-1}\end{equation}
with $C$ depending on ${\epsilon}>0.$
Now we work on Region I, within a $O(\lambda^{-2})$ boundary layer along
the rectangle. We again apply Lemma~\ref{lemma:rellich}, this time with
$A= x{\partial}_x+y{\partial}_y.$ Since $A$ is a tangential vector plus a positive
multiple of ${\partial}_N$ all along ${\partial} S$ we obtain
$$ \int_{\partial S} (\partial_N u)^2\, dl \leq \abs{ \langle u, [\Delta-
\lambda^2, A]u\rangle } + 2 \abs{ \langle u, f \rangle }+ 2 \abs{\ang{Au,f}}.
$$
Using $[\Delta - \lambda^2, A] = 2\Delta$ and Cauchy-Schwarz this becomes
$$
\int_{\partial S} \abs{\partial_N u}^2 \, dl \leq C \lambda^2 \| u \|^2
+ \norm{Au}^2 + C\| f \|^2.
$$ Restricting to ${\partial} S \cap
\text{I}$ in the integrand, we can estimate $w$ in $L^\infty$ by $\delta
\lambda^{-2}$, and this gives
\begin{equation} \int_{{\partial} S
\cap \text{I}} w_+ \abs{\partial_N u}^2 \, dl \leq \delta C (\|u\|^2+ \lambda^{-2}
(\norm{f}^2+\norm{Au}^2)).
\end{equation}
Using Lemma~\ref{lemma:gradientestimate}, we may estimate $\norm{Au}^2$ by
$C (\lambda^2\norm{u}^2+\norm{f}^2)$. Hence we may finally write
\begin{equation}
\label{bdy-est-3}
\int_{\partial S\cap \text{I}} w_+ \abs{\partial_N u}^2 \, dl \leq \delta C_0 \| u \|^2 +
\delta C \lambda^{-2}\norm{f}^2;
\end{equation}
note that in the above construction, $C_0$ can in fact be chosen independent of $\delta.$
Finally, we estimate in Region II. To begin with, we note that for $w \geq \delta \lambda^{-2}$, we can estimate $w_+$ by $ \delta^{-1} \lambda^2
w_+^2$, so we have
$$
\int_{{\partial} S \cap \text{II}} w_+ \abs{{\partial}_N u}^2 \, dl
\leq C\delta^{-1}\int_{{\partial} S \cap \text{II}} \lambda^2 w_+^2 \, \chi(y){\partial}_y u \, {\partial}_N u \, dl;
$$
here we take $\chi$ supported in $|y| > \beta/20$, and equal to $-1$ for
$y < -\beta/10$ and $+1$ for $y > \beta/10$, so that $\chi(y){\partial}_y$ is a
positive multiple of ${\partial}_N$ plus a tangential component on ${\partial} S \cap
\text{II}.$ To estimate further, we employ Lemma~\ref{lemma:rellich} with
the commutant $$A = \lambda^2 w_+^2 \chi(y) \partial_y.$$ The point of this
commutant is that we have given ourselves two powers of $w$ which will
``absorb'' two integrations by parts in $x$ \emph{without any boundary terms
at $w=0$}; this is crucial since we know of no way to deal with such boundary terms. (On the other hand, we pay the price of additional powers of $\lambda$ with this gambit.)
Thus we obtain, setting $Q = [\Delta, A]$,
\begin{equation}\label{region2}
\int_{\partial S} \lambda^2 w_+^2 \chi(y) \partial_y u \, \partial_N u\, dl \leq \Big|
\langle Qu, u \rangle \Big| + \lambda^2 \abs{ \int_S w_+^2 \chi_y(y) u f}
+ 2 \abs{\ang{Au, f}}
\end{equation}
We can estimate the second term on the RHS by $\lambda^4 \| u \|_{L^2(W)}^2
+ C\| f \|^2$.
Now consider the terms involving the operator $Q$. This is given by
$$
Q = \lambda^2 \Big( -4 w_+ \chi \partial_x \partial_y -2 w_+^2 \chi_y \partial^2_{yy} - 2 H(w) \chi \partial_y - w_+^2\chi_{yy} \partial_y \Big)
$$
where $H(\cdot)$ is the Heaviside step-function, $H(w) = 0$ for $w < 0$ and $1$ for $w \geq 0$.
To treat the terms involving one derivative, e.g. the third term above, we integrate by parts:
$$
-2\lambda^2 \int_S H(w) \chi(y) \, \partial_y u \, u = \lambda^2 \int_S H(w) \chi_y(y) \, u^2
$$
which is therefore estimated by $\lambda^2 \| u \|_{L^2(W)}^2$. The fourth term is estimated in exactly the same way.
Thus we are left to estimate
\begin{equation}
\lambda^2 \Big( -4 \langle w_+ \chi(y) \partial^2_{xy} u, u \rangle
-2 \langle w_+^2 \chi_y(y) \partial^2_{yy} u, u \rangle \Big)
\end{equation}
Integrating the first term by parts in $x$ and the second term by parts in $y$ gives us two principal terms
\begin{equation}\label{foobar}
4 \lambda^2 \Big( \langle w_+^{1/2} \partial_x u, w_+^{1/2} \partial_y u \rangle
+ \langle w_+^2 \chi_y \partial_y u, \partial_y u \rangle \Big)
\end{equation}
together with two other terms
\begin{equation*} 4 \lambda^2 \Big( \langle H(w) \partial_y u, u \rangle
+ \langle w_+^2 \chi_{yy}(y) \partial_y u, u \rangle \Big)
\end{equation*}
which are estimated in the same way as the first order terms above.
We apply Cauchy-Schwarz to the first term in \eqref{foobar}, while in the second term, which is positive, we replace $w_+^2$ by $w_+$ (which is larger, up to a
constant multiple) and then integrate by
parts again, getting (up to another first-order term estimated as above) an upper bound for \eqref{foobar} of the form
\begin{equation}\label{region2gradient}
C \lambda^2 \int_S w_+ \abs{\partial_x u}^2 + w_+ \abs{\partial_y u}^2
\, dA.
\end{equation}
Now we integrate by parts again, getting
\begin{equation}
C \lambda^2 \int_S \Big( w_+ (-u_{xx} - u_{yy}) u + H(w) u \, u_x \Big)\, dA.
\label{eee}\end{equation}
Writing $-u_{xx} - u_{yy} = \lambda^2 u + f$, we can estimate the integrand of \eqref{eee} by
\begin{multline*}
\lambda^4 w_+ \abs{u}^2 + \lambda^2 w_+ f u + \lambda^2 H(w) u u_x
\\ \leq \lambda^4 w_+ \abs{u}^2 + \frac1{2} \big( \lambda^4 w_+ u^2 + w_+ f^2 \big) + C \lambda^4 H(w) u^2 + \epsilon u_x^2.
\end{multline*}
Hence we may estimate \eqref{foobar} by
$$
C \lambda^4
\norm{u}_{L^2(W)}^2 + {\epsilon} \norm{u_x}^2+ C \norm{f}^2 .
$$
The term $2\abs{\ang{Au,f}}$
is bounded in a similar manner:
By Cauchy-Schwarz we may estimate it by
$$
C \norm{f}^2 + C\lambda^4\norm{w_+ u_y}^2
$$
or by
$$
C \lambda^2 \norm{f}^2 + C\lambda^{2}\norm{w_+ u_y}^2,
$$
as we prefer. Our estimate for \eqref{region2gradient} turns the latter
estimate into
$$
C \lambda^2 \norm{f}^2 +C \lambda^4
\norm{u}_{L^2(W)}^2 + {\epsilon} \norm{u_x}^2.
$$
On the other hand, treating $\lambda^4\norm{w_+ u_y}^2$ in the same manner
gives a bound by
$$
C \norm{f}^2 +C \lambda^8
\norm{u}_{L^2(W)}^2 + {\epsilon} \norm{u_x}^2.
$$
On the support of $\chi$ and at the boundary, $\chi(y) u_y$ is a
positive multiple of $\partial_N u$. So the upshot is that \eqref{region2} now gives
\begin{equation}
\int_{\partial S} \lambda^2 w_+^2 \chi(y) (\partial_N u)^2 \leq
C \lambda^8
\norm{u}_{L^2(W)}^2 + {\epsilon} \norm{u_x}^2 + C \norm{f}^2.
\label{bdy-est-2}\end{equation}
and
\begin{equation}
\int_{\partial S} \lambda^2 w_+^2 \chi(y) (\partial_N u)^2 \leq
C \lambda^4
\norm{u}_{L^2(W)}^2 + {\epsilon} \norm{u_x}^2 + C \lambda^2 \norm{f}^2.
\label{bdy-est-2bis}\end{equation}
At last we can estimate the boundary integral term in \eqref{normderivbis} by a combination of
\eqref{bdy-est-1}, \eqref{bdy-est-3}, and
\eqref{bdy-est-2}/\eqref{bdy-est-2bis}, obtaining
$$
\int_{\partial S} w_+ |{\partial_N} u|^2 \, dl \leq \delta C_0\|u\|^2+ C \lambda^8
\norm{u}_{L^2(W)}^2 + 2 {\epsilon} \norm{u_x}^2 + C \norm{f}^2.
$$
and
$$
\int_{\partial S} w_+ |{\partial_N} u |^2 \, dl \leq \delta C_0\|u\|^2+ C \lambda^4
\norm{u}_{L^2(W)}^2 + 2 {\epsilon} \norm{u_x}^2 + C \lambda^2 \norm{f}^2.
$$
Here $C$ depends on ${\epsilon},$ $\delta$ but $C_0$ is independent of
$\delta.$ We now combine this with \eqref{normderivbis}.
Absorbing the
$\norm{u_x}^2$ and $\norm{u}^2$ terms on the LHS by taking $\delta$ and ${\epsilon}$
sufficiently small, we obtain \eqref{L2}, \eqref{L2bis}.
\section{Concluding remarks}
The estimates presented here are certainly not optimal. The powers of
$\lambda$ appearing in Theorem~\ref{ourtheorem} can probably be improved
using refinements of the methods used here, but it seems unlikely that one
could achieve an optimal result with them. We have not, therefore, attempted to
obtain the best possible powers of $\lambda$, but have rather tried to
present a polynomial lower bound on $\| u_\lambda \|_{L^2(W)}$ with a simple
proof.
It would be of great interest to obtain a polynomial lower bound on the
$L^2$ mass of $u_\lambda$ in a subregion of $W$ which is a positive
distance from $R$ (i.e. region III of the previous section). We do not know
whether such a bound holds, but it does not seem to be obtainable using the
methods of this paper; possibly it might yield to the use of more
sophisticated tools from microlocal analysis.
\section{Case study}
\label{sec:case_study}
To test the proposed methodology, we used a set of real mobility sequences obtained from a French household travel survey called EMD\footnote{From the French ``Enquête Ménages-Déplacements''.}.
The goal of the EMD survey is to provide a snapshot of the trips undertaken by residents of a given metropolitan area, which can aid in understanding mobility behaviors and measuring changes over time.
In this section, we describe the EMD data in terms of quality, semantics, and size. The dataset is complemented by a domain ontology describing activity semantics (step (a) in the methodology in Fig. \ref{fig:overview}). A statistical study and overview analysis of the dataset conclude this section (step (b) in Fig. \ref{fig:overview}).
\subsection{EMD Rennes 2018 dataset}
\begin{table*}
\caption{Description of activities in the EMD data}
\label{tab:data}
\begin{tabular}{m{.5cm}>{\centering}m{3.45cm}cm{10cm}}
\hline
{Color} & {Aggregated activity} & {Emoji} & {Activity label and description}\tabularnewline
\hline
\hline
\multicolumn{4}{c}{\emph{Stop activities}}\tabularnewline
\hline
\hline
{\cellcolor{home}} & {Home} & \emoji{Figures/Emoji/home}{10} & {\textbf{1}: main home; \textbf{2}: second home, hotel}\tabularnewline
\hline
{\cellcolor{work}} & {Work} & \emoji{Figures/Emoji/work}{10} & {\textbf{11}: work in official work place; \textbf{12}: work at home; \textbf{13}:
work in another place; \textbf{43}: look for a job; \textbf{81}: do a work round}\tabularnewline
\hline
{\cellcolor{study}} & {Study} & \emoji{Figures/Emoji/study}{12} & {\textbf{21}: day nursery; \textbf{22}: study at school (primary); \textbf{23}: study at school (college); \textbf{24}: study at school (high school); \textbf{25}: study at school (university); \textbf{26}: study in another place (primary); \textbf{27}: study in another place (college);
\textbf{28}: study in another place (high school); \textbf{29}: study in another place
(university)}\tabularnewline
\hline
{\cellcolor{shop}} & {Shopping} & \emoji{Figures/Emoji/shop}{10} & {\textbf{30}: visit a shop; \textbf{31}: visit a shopping center; \textbf{32}:
shopping in mall; \textbf{33}: shopping in medium or little shops; \textbf{34}:
do shopping in market place; \textbf{35}: drive-through shopping}\tabularnewline
\hline
{\cellcolor{care}} & {Personal Care} & \emoji{Figures/Emoji/care}{10} & {\textbf{41}: health care; \textbf{42}: administration step}\tabularnewline
\hline
{\cellcolor{leisure}} & {Leisure} & \emoji{Figures/Emoji/leisure}{10} & {\textbf{51}: sport, cultural or voluntary activity; \textbf{52}: go for
a walk or window-shopping; \textbf{53}: go in restaurant; \textbf{54}: visit family
or friend; \textbf{82}: do a shopping tour (more than 4 consecutive activity
30)}\tabularnewline
\hline
{\cellcolor{commute}} & {Accompany} & \emoji{Figures/Emoji/commute}{10} & {\textbf{61}, \textbf{63}: go with someone; \textbf{62}, \textbf{64}: pick-up someone; \textbf{71},
\textbf{73}: drop-off someone to a transport mode; \textbf{72}, \textbf{74}: collect someone
to a transport mode}\tabularnewline
\hline
{\cellcolor{other}} & {Other} & \emoji{Figures/Emoji/other}{7} & {\textbf{91}: other (detail in notes)}\tabularnewline
\hline
\hline
\multicolumn{4}{c}{\emph{Move activities}}\tabularnewline
\hline
\hline
{\cellcolor{smooth}} & {Smooth} & \emoji{Figures/Emoji/smooth}{7} & \textbf{100}{: walk; }\textbf{110}{:
ride location bike; }\textbf{111}{: ride
bike; }\textbf{112}{: bike passenger; }\textbf{193}{:
roller, skateboard, scooter; }\textbf{194}{:
wheelchair; }\textbf{195}{: small electric
machines (electric scooter, segway, etc)}\tabularnewline
\hline
{\cellcolor{motor}} & {Motorized} & \emoji{Figures/Emoji/motor}{10} & \textbf{113}{: motor bike driver ($< 50cm^{3}$);
}\textbf{114}{: motor bike passenger ($<
50cm^{3}$); }\textbf{115}{: motor bike
driver ($\geq50cm^{3}$); }\textbf{114}{:
motor bike passenger ($\geq50cm^{3}$); }\textbf{121}{:
car driver; }\textbf{122}{: car passenger;
}\textbf{161}{: taxi passenger; }\textbf{171}{:
car transport (work); }\textbf{181}{: van
or truck driver (for activity 81); }\textbf{182}{:
van or truck driver (for activity 81); }\tabularnewline
\hline
{\cellcolor{public}} & {Public transportation} & \emoji{Figures/Emoji/public}{10} & \textbf{131}{: urban bus passenger; }\textbf{133}{:
subway passenger; }\textbf{138}{, }\textbf{139}{:
other public transportation passenger; }\textbf{141}{,
}\textbf{142}{: local public transportation;
}\textbf{151}{: train passenger}\tabularnewline
\hline
{\cellcolor{other_mode}} & {Other mode} & \emoji{Figures/Emoji/other_mode}{10} & \textbf{191}{: sea transport; }\textbf{192}{:
airplane; }\textbf{193}{: other modes (agricultural
equipment, quad bike, ect);}\tabularnewline
\hline
\end{tabular}
\end{table*}
The studied dataset is called ``EMD Rennes 2018'' and represents a household travel survey conducted in Rennes city and the surrounding area (Brittany region of France). The survey was conducted from January to April of 2018 during weekdays. The data represent 11\,000 people (at least five years old) from 8\,000 households. This sample is considered to be statistically representative of 500\,000 households and one million residents. The details of the data collection methodology and its quality are discussed in \cite{certu08}, and a summary of the results of the EMD Rennes 2018 survey is presented in \cite{audiar19}\footnote{References in French.}.
The dataset consists of a set of mobility sequences, each of which represents the activities performed by one person over 24h. Table \ref{tab:data} lists the different activity labels used in the EMD mobility sequences. Two main classes are represented: stop activities and move activities. The former corresponds to daily static activities such as ``staying at home", ``working" and ``shopping". The latter represents transportation activities such as ``walking" or ``driving a car."
Mobility sequences are defined based on the Stop-Move paradigm \cite{Parent13}. Each stop activity is followed by one (or more) move activities. Therefore, the time dimension is only considered in terms of the order of the activities, resulting in a \textit{compositional approach} to mobility analysis.
\exampleend{
\label{ex:seq}
Consider the following activities performed by Sam during a day:
\vspace{.25cm}
\textit{``Sam starts her day at home. Then, she walks to the bus station and takes the bus to work. She spends her work time at her office and then walks home."}
\vspace{.25cm}
The mobility sequence $S$, which is represented below, corresponds to Sam's activities. By using activity codes in Table \ref{tab:data}, we have $S=\tuple{1, 100, 131, 11, 100, 1}$.
Alternatively, by considering aggregated activities, represented by emojis, we obtain the following representation:
$S_{agg}=\langle$\emoji{Figures/Emoji/home}{10}, \emoji{Figures/Emoji/smooth}{6}, \emoji{Figures/Emoji/public}{11},
\emoji{Figures/Emoji/work}{10},
\emoji{Figures/Emoji/smooth}{6}, \emoji{Figures/Emoji/home}{10}$\rangle$.
}
Throughout this paper, we will use activity codes and emojis to represent sequences both in examples and when analyzing real sequences. Among the 11\,000 sequences in the dataset (corresponding to 11\,000 surveyed individuals), we filtered out those containing no moves (corresponding to people who stayed at home the entire day). This resulted in a final dataset of 10\,005 mobility sequences.
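As an illustration of this representation, the following minimal Python sketch (our own illustration, not part of the survey tooling; the aggregation mapping shown is only a partial, hypothetical excerpt of Table \ref{tab:data}) encodes a stop--move sequence as a list of activity codes and rolls it up to aggregated activities:
\begin{verbatim}
# Minimal sketch: a stop-move sequence as a list of activity codes,
# aggregated through a (partial, illustrative) excerpt of Table 1.
AGGREGATION = {
    1: "Home", 2: "Home",
    11: "Work", 12: "Work", 13: "Work",
    100: "Smooth", 111: "Smooth",
    121: "Motorized", 122: "Motorized",
    131: "Public transportation", 133: "Public transportation",
}

def aggregate(sequence):
    """Map detailed activity codes to aggregated activity labels."""
    return [AGGREGATION[code] for code in sequence]

S = [1, 100, 131, 11, 100, 1]   # Sam's day from the example above
print(aggregate(S))
# ['Home', 'Smooth', 'Public transportation', 'Work', 'Smooth', 'Home']
\end{verbatim}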
\begin{figure*}[t!]
\includegraphics[width=\textwidth]{Figures/emd_ontology.pdf}
\caption{EMD graph ontology}
\label{fig:ontology}
\end{figure*}
\subsection{Ontology}
The activity concepts detailed in Table \ref{tab:data} are also structured in a knowledge graph (or ontology), as shown in Fig. \ref{fig:ontology}. This knowledge graph refers to Definition \ref{def:ontology} and is a hybrid of the EMD meronomy and the Harmonised Time Use Surveys (HETUS) \cite{Eurostat19}.
Each color corresponds to a meta-category representing \textit{aggregated activities}: first-level nodes for stop activities and second-level nodes for move activities (i.e., transport modes). Intermediate-level nodes come from the HETUS classification, while first-level nodes and leaves come from the EMD survey.
Other possibilities for arranging concepts can be considered, each of which refers to a particular study context or specific business need. The structure of a graph influences the similarity measures between concepts.
\exampleend{
\label{ex:onto}
Suppose we wish to compute the similarity between activities \textbf{100} (walking) and \textbf{121} (car driving). Using the ontology in Fig. \ref{fig:ontology}, we can compute the Wu-Palmer similarity defined in Equation \ref{eq:wu-palmer} as:
$\begin{aligned}
sim_{WP}(100,121) &= \frac{2\times d(LCA(100, 121))}{d(100)+d(121)} \\
&= \frac{2\times d(\text{Transport mode})}{d(100)+d(121)} \\
&= \frac{2}{7} \\
\end{aligned}$
where $LCA(x,y)$ is the lowest common ancestor of concepts $x$ and $y$, and $d(x)$ is the length of the shortest path between node $x$ and the root node (depicted in black in Fig. \ref{fig:ontology}).
}
\subsection{Comprehensive statistical analysis of the dataset}
\label{sec:glob_analysis}
\begin{figure}[p]
\includegraphics[width=0.85\textwidth]{Figures/sm_distrib.pdf}
\caption{Stop (a) and move (c) activity distributions. (a) and (c) are log-scale plots showing the frequency of each activity code, where colors refer to aggregated activities. (b) and (d) show compatibility with a Zipf law model, where each point corresponds to an activity in the bar plot below.}
\label{fig:stop_freq}
\end{figure}
\begin{figure*}[!h]
\includegraphics[width=\textwidth]{Figures/poisson_length.pdf}
\caption{Length statistics of the mobility sequences. (a) The distribution of the length $|S|$ over the intervals $I_k$, $k\in \{1,\dots,7\}$, follows a Poisson distribution $P(|S|\in I_k) \approx \frac{1.36^k e^{-1.36}}{k!}$. (b) Box plot of the lengths.}
\label{fig:poisson_length}
\end{figure*}
\begin{figure*}[h!]
\includegraphics[width=.95\textwidth]{Figures/flow_activity.pdf}
\caption{Chord diagram of flows between two consecutive stop activities (a) with all activities (b) with aggregated activities}
\label{fig:flow}
\end{figure*}
\begin{figure*} [t]
\includegraphics[width=.9\textwidth]{Figures/daily_pattern.pdf}
\caption{Daily mobility patterns. The motifs are grouped according to their size (separated by dashed lines). $\star$ motifs include all other motifs with $k\in \{3,4,5\}$ nodes. For each group, we show the estimated probability that a given motif has $k$ nodes. The central nodes are highlighted in red. Motifs are classified by three rules indicating topological properties: (I) graphs with oscillations between two nodes, (II) graphs with cycles of 3 or more nodes, and (III) graphs combining both properties (I) and (II).
}
\label{fig:daily_patt}
\end{figure*}
\begin{figure*}[h!]
\includegraphics[width=.9\textwidth]{Figures/delta.pdf}
\caption{Correlation plots between the length intervals $I_k$ and the number of (a) distinct move activities $\delta_{move}$ and (b) distinct move + stop activities $\delta$ in the sequences. Box plots are shown for $\delta$ and $\delta_{move}$. The correlation coefficients are (a) $\rho = 0.4$ and (b) $\rho = 0.8$.}
\label{fig:delta}
\end{figure*}
\begin{figure*}[t]
\centering {
\includegraphics[width=.98\textwidth]{Figures/entropiy_predict.pdf}
}
\caption{Entropy and predictability of the sequences; dashed lines represent the means. (a) Probability density functions of the entropy $H$,
the random entropy $H^{rand}$, and the uncorrelated entropy $H^{unc}$. (b) Probability density functions of $\Pi^{max}$, $\Pi^{rand}$, and
$\Pi^{unc}$.}
\label{fig:entropy}
\end{figure*}
To understand the meaning of the data, we analyzed the entire dataset using the indicators described in Section \ref{sec:indicator} and summarized in Table \ref{tab:chosen_indic}.
Our first elementary study focused on the frequency of each activity in the sequences. For convenience, we separated the stop and move activities. Fig. \ref{fig:stop_freq} presents the distribution of each activity in the dataset. As predicted in \cite{Song10b}, the frequency distribution follows a Zipf law. Intuitively, the three most frequent stop activities are 1 (home), 11 (work), and 33 (shopping in medium and little shops). For move activities, the most frequent items are 121 (car driving), 100 (walking), and 122 (car passenger). This figure highlights the main activities that comprise the sequences.
We also performed a complementary study on the number of activities performed per day by an individual. Based on the stop-move representation, there are very few even sequence lengths. To overcome this issue, we consider intervals of length $I_k$. Fig. \ref{fig:poisson_length} presents the distribution of the lengths of the mobility sequences in the dataset. The green curve represents the estimated probability mass function of a Poisson distribution with a parameter $\lambda$ obtained from maximum-likelihood estimation ($\lambda = 1.36$). One can see that the intervals of lengths fit the Poisson distribution.
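For reproducibility, the Poisson fit above can be obtained with a few lines of code. The sketch below is a minimal illustration (assuming the length-interval indices of the sequences are available as an integer array, which we do not reproduce here); for a Poisson model the maximum-likelihood estimate of $\lambda$ is simply the sample mean:
\begin{verbatim}
import numpy as np
from scipy.stats import poisson

def fit_length_distribution(k_values):
    """MLE Poisson fit of binned sequence lengths.

    k_values[i] is the length interval index I_k of sequence i.
    For a Poisson model, the MLE of lambda is the sample mean.
    """
    lam = k_values.mean()
    ks = np.arange(k_values.min(), k_values.max() + 1)
    observed = np.array([(k_values == k).mean() for k in ks])
    expected = poisson.pmf(ks, lam)
    return lam, ks, observed, expected

# Synthetic example (not the EMD data):
rng = np.random.default_rng(0)
lam, ks, obs, exp = fit_length_distribution(rng.poisson(1.36, size=10005))
print(round(lam, 2))
\end{verbatim}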
Another method for semantic sequence analysis is to study the transitions between symbols using an origin-destination matrix. Fig. \ref{fig:flow} presents the transitions between two consecutive stop activities in the dataset. The ontology allows us to visualize these flows according to different levels of granularity. Detailed activities are presented on the left and aggregated activities are presented on the right. One can see that the home activity (\emoji{Figures/Emoji/home.pdf}{9}) plays a major role for most transitions, where \emoji{Figures/Emoji/home.pdf}{9} $\rightarrow x$ and the reverse $x \rightarrow$ \emoji{Figures/Emoji/home.pdf}{9}.
In the daily mobility context, transitions were also studied in terms of individual mobility networks to identify topological patterns. Based on the work by Schneider et al. \cite{Schneider13}, we extracted the main motifs from the sequences. As shown in Fig. \ref{fig:daily_patt}, the extracted motifs and frequencies are consistent with the results presented in \cite{Schneider13}.
We show the three most frequent motifs for groups of three, four, and five nodes. Globally, one can see that the most frequent patterns have fewer than four nodes and exhibit oscillations (labels I and III). Approximately 87\% of the sequences follow one of the 11 identified motifs. Additionally, this analysis demonstrates that mobility sequences contain many stop activity repetitions.
Another technique for studying the repetition and regularity of a sequence $S$ is to calculate the number of unique symbols $\delta$ it contains. Fig. \ref{fig:delta} presents the correlation between the length of a sequence $|S|$ and the number of distinct activities $\delta$. The horizontal axis represents the length intervals defined in Fig. \ref{fig:poisson_length} and the vertical axis represents the number of distinct moves $\delta_{move}$ (left side) and the number of distinct activities (stops + moves) $\delta$ (right side). One can see that $\delta_{move}$ remains globally stable, with one or two different modes for any sequence length. Therefore, we know that the diversity in the sequences stems from stop activities, while move activities are more often repeated. In any case, according to the red curve in Fig. \ref{fig:delta}(b), one can see that most activities are repeated in a sequence.
Finally, the entropy and predictability of the mobility sequences can be studied to determine how well sequences can be predicted. Fig. \ref{fig:entropy} portrays the distributions of these two variables. Given the number of activities in the sequences, the results are similar to those given by \cite{Song10b} and exhibit a low real uncertainty regarding a typical individual's activity ($2^{0.4}\approx 1.32$, i.e., fewer than two activities).
It should be noted that these results are consistent with those presented in Fig. \ref{fig:delta} for the $\delta$ values.
The predictability in the random case is $\Pi^{rand}\approx 0.24$. One can see that the median number of different concepts in a sequence is four, which means that we can typically predict one out of the four previous activities. Unlike Song et al.'s results, our $P(\Pi^{unc})$ distribution peaks at approximately $\Pi^{unc} \approx 0.78$, markedly higher than the $\Pi^{rand}$ value. This finding can be explained first by the small number of distinct activities in the sequences, and also by the relatively small number of concepts in the dataset and the Zipf laws they follow (Fig. \ref{fig:stop_freq}). This allows us to predict certain key activities (e.g., home, car, working, walking) based on the user activity history.
Finally, the distribution of the real predictability, $P(\Pi^{max})$, peaks near $\Pi^{max} \approx 0.95$, indicating that having a historical record of the mobility of an individual yields a high degree of predictability.
\section{Conclusions and future work}
\label{sec:conclusion}
In this paper, we introduced a novel methodology, called \textsc{simba}, to mine, discover and analyze behaviors in semantic human mobility sequences. The proposed process is generic and can be adapted for any sequence of categorical data.
\textsc{simba} introduces a simple and complete pipeline from raw data to clustering analysis for studying semantic mobility sequences and extracting mobility behaviors. \textsc{simba} leverages the use of a hierarchical clustering algorithm combined with CED to cluster similar mobility sequences.
Based on an extended literature review of both human mobility properties and semantic similarity measures, we selected complementary statistical indicators to describe semantic mobility sequences from different points of view. To the best of our knowledge, \textsc{simba} is the first complete and modular methodology supporting the understanding of human
behaviors with a large panel of visual
indicators that highlight the complementary properties of semantic mobility.
The proposed approach was tested on a real dataset of 10\,005 semantic mobility sequences from a household travel survey.
We were able to identify specific behaviors that can constitute key information on urban activities.
Thanks to the proposed methodology, the discovered clusters are easily interpretable and consistent with intuition. Furthermore, the clusters revealed regular patterns of human daily activities that are consistent with previous findings regarding the strong predictability and regularity of human mobility.
We hope that our methodology will be helpful in future applications such as urban and transportation planning, the sociology of mobility behavior, and spreading dynamics.
In future work, we plan to study the time dimension to propose novel indicators and analysis methods for semantic sequences in a \textit{time-structured approach}.
For example, we could describe each activity according to its start and end timestamps.
Additionally, we hope to expand our methodology to account for multidimensional semantic sequences. Integrating the time dimension and multidimensional semantics will facilitate the treatment of more detailed sequences and enhance our methodology.
\section{Introduction}
\label{intro}
It is becoming increasingly important to have a good understanding of human mobility patterns in many fields, such as transportation \cite{Batty13}, social and political sciences \cite{Lind07,Castellano09}, epidemiology \cite{Pastor15,Chinazzi20}, and smart city design \cite{Pan13}. For the latter, the ability to model urban daily activities correctly for traffic control, energy consumption, and urban planning \cite{Barthelemy19} has a critical impact on human quality of life and the everyday functioning of cities. To inform policy makers regarding important projects, such as planning new metro lines, managing traffic demand during large events, or constructing shopping malls, we require reliable models of urban travel demand. Such models can be constructed from censuses, household travel surveys, or simulations that attempt to learn about human behaviors in cities using data collected from location-aware technologies \cite{Jiang16,Pappalardo18}. The development of generative algorithms that can reproduce and aid in understanding human mobility behaviors accurately is fundamental for designing smarter and more sustainable infrastructures, economies, services, and cities \cite{Batty12}.
Typically, mobility analysis focuses on spatiotemporal analysis and the properties of human movement \cite{Barbosa18}. Pioneering works have highlighted the characteristics, regularities, and predictability of human mobility \cite{Barabasi05,Brockmann06,Gonzalez08,Song10a,Song10b,Jiang12}.
Recently, a major challenge in machine learning and clustering methods has been the ability to explain models for both practical and ethical purposes \cite{Guidotti19}. Explanation technologies and techniques are immensely helpful for companies that wish to improve the management and understanding of customer needs. They are also important for improving the openness of scientific discovery and the progress of research. The need for clear and interpretable results and models is increasing, particularly for black-box algorithms, where data are huge and complex, and for methods with many parameters. Interpretability is crucial for testing, observing, and understanding the differences between models. Therefore, data comprehension also enhances the learning and/or exploration process in terms of validity.
In this paper, we propose transposing studies on human movement analysis into the semantic domain to learn and understand human activities. We focus on the analysis of semantic sequences of daily activities and attempt to learn and understand the properties of semantic mobility to extract consistent behaviors from a real human mobility dataset. In summary, this paper provides the following main contributions:
\begin{itemize}
\item A methodological pipeline called \textit{Semantic Indicators for Mobility and Behavior Analysis} (\textsc{simba}) is proposed to extract, analyze, and summarize coherent behaviors in semantic sequences of mobility data.
\item We propose a framework for semantic sequence mobility analysis and clustering explicability integrating state-of-the-art indicators.
\item A case study, from a real-world dataset to the extraction of understandable behaviors, illustrating the applicability of our proposal.
\end{itemize}
To the best of our knowledge, such a methodology for mining and interpreting clusters of human semantic mobility behaviors has not been proposed previously. Additionally, our methodology is generic and can be applied to any type of data representing a sequence of semantic symbols (e.g., activities, points of interest (POIs), web pages, and music in playlists). The remainder of this paper is organized as follows. Section \ref{sec:related_work} presents related work on human mobility indicators and methods for behavior extraction.
In Section \ref{sec:methodology}, we introduce some preliminary definitions and present an overview of our approach. We then discuss the design, statistics, and analytical methods used for behavior extraction from semantic sequences.
Section \ref{sec:case_study} is dedicated to data description and global analysis of the target dataset. Section \ref{sec:sem_clust} discusses the extraction and characterization of behaviors using a clustering method and the explicability of discovered patterns. We also discuss results in this section. Finally, Section \ref{sec:conclusion} concludes this article.
\section{Methodology}
\label{sec:methodology}
This section details the proposed methodology for the analysis of semantic mobility sequences. First, we summarize the methodological pipeline presented in this paper, including the nature of the dataset, selected statistical analysis methods, indicators, and clustering process. The second subsection is dedicated to the enrichment and representation of semantic mobility sequences and the third subsection introduces the indicators used for semantic mobility sequence analysis. The fourth subsection discusses the clustering process and corresponding similarity measure, namely Contextual Edit Distance (CED), as well as a hierarchical clustering process. Finally, the fifth subsection describes the methodology used for cluster analysis and the extraction of semantic mobility behaviors.
\subsection{The \textsc{simba} methodological pipeline}
Semantic mobility sequences are complex data, given their nature and properties, as discussed in Section \ref{sec:related_work}. However, daily mobility has a high degree of regularity with many repetitions of activities. Based on these characteristics, Figure \ref{fig:overview} presents the \textsc{simba} methodology, which builds on the strengths of previous methods (also discussed in Section \ref{sec:related_work}). It consists of five steps labeled as (a), (b), (c), (d), and (e).
\begin{figure*}
\centering{
\includegraphics[width=0.9\textwidth]{Figures/overview5.pdf}
}
\caption{Overview of the \textsc{simba} methodology: (a) input semantic mobility sequence dataset and ontology of activities; (b) general descriptive statistics and indicators; (c) clustering process; (d) analysis of semantic mobility clusters for behavior discovery; (e) synthesis of each behavior in an understandable way}
\label{fig:overview}
\end{figure*}
In the first step, semantic data are enriched using an ontology to facilitate the comparison of concepts based on any similarity measure that can be adapted to knowledge graphs and data with different levels of granularity.
In the second step, we compute some global statistics to understand and analyze the data. These statistics are selected from the indicators introduced in Table \ref{tab:indicator} and provide a complementary analysis of sequences in terms of their contents (frequency distribution), networks (transitions), central behaviors (centrality), and degrees of disorder (entropy). Based on this complementarity, we can explain the data from different perspectives. Additionally, the statistics highlight the different patterns that mobility sequences follow (visitation frequency, daily patterns, origin-destination matrices, sequence lengths, predictability) \cite{Song10a,Schneider13}, provide a preliminary overview of the data, and facilitate quality control.
The third step focuses on the clustering of semantic sequences, which groups sequences representing similar moving behaviors. The main challenge in this step is the comparison of semantic sequences, specifically the selection of a similarity measure to support such comparisons and adapt to specific business needs. In this study, we used the CED similarity measure \cite{Moreau19b}. This measure extends edit distance by adapting a cost computation for typical mobility characteristics, such as redundancies, repetitions \cite{Song10b} and cycles \cite{Schneider13}.
A pairwise comparison of semantic sequences yields a distance matrix, which is then used in the clustering process. Section \ref{sec:clustering} summarizes various approaches to sequence clustering. These clustering algorithms are based on different assumptions regarding cluster topology and can all be used in this step. However, because the topology of the semantic sequence space is difficult to comprehend, in this study, we visualized it using a dendrogram generated from a hierarchical clustering process.
The output of this step is a set of clusters of semantic sequences that represent similar behaviors. \textsc{simba} is a modular methodology in which the similarity measure and clustering algorithm proposed in step (c) can be replaced with any of the other techniques discussed in Section \ref{sec:related_work}.
Step (d) computes additional statistics for each cluster to extract and understand the specific characteristics that constitute mobility behaviors. The statistical and data visualization indicators partially replicate those used in the overall analysis in step (b), but are enhanced with significance tests to determine the typical characteristics of each cluster in terms of activities, patterns, and sizes. Additionally, these indicators are studied in combination with clustering centrality indicators (centroid, mode, diameter, cluster variance) and quality measures (i.e., the Silhouette score \cite{Rousseeuw87}, which measures intra-cluster and inter-cluster distances). This step can also be used to identify outliers.
Finally, step (e) summarizes the main characteristics of clusters to label them in terms of mobility behaviors. A graphical summary concludes the pipeline and yields an easy and understandable way to discover mobility behaviors.
The remainder of this section precisely describes each step of the \textsc{simba} methodology.
\subsection{Enrichment and representation of semantic mobility sequences}
Let $\Sigma$ be a set of concepts that represent daily activities (see Section \ref{sec:case_study} and Table \ref{tab:data}). We define semantic mobility sequence as follows.
\begin{definition}[Semantic sequence]
Given a human $h$, their \textit{semantic sequence}\footnote{Considering our use case, we use the terms semantic sequence, semantic mobility sequence, and mobility sequence interchangeably.} $S$ is an ordered sequence of activities $\tuple{x_1,x_2,...,x_n}$ such that $\forall k \in [\![1,n]\!], x_k \in \Sigma$ and, for $i < j$, $x_i$ predates $x_j$.
Additionally, we consider that symbols are not repeated consecutively (i.e., $\forall k \in [\![1, n-1]\!], x_k \neq x_{k+1}$).
Intuitively, such a sequence indicates that $h$ performed activity $x_1$, then $x_2$, and finally $x_n$.
\end{definition}
To compare the symbols in $\Sigma$, we must introduce a partial order to the set. For this purpose, we construct a knowledge graph between all concepts in $\Sigma$.
\begin{definition}[Knowledge graph]
\label{def:ontology}
Let $\Sigma$ be a set of concepts such that $\exists \mathsf{root} \in \Sigma$. A \textit{knowledge graph} is a connected and directed acyclic graph $G=(\Sigma,E)$ with $E \subset \Sigma \times \Sigma$, where $(x,y) \in E$ iff the concept $x$ (holonym) semantically \textit{contains} the concept $y$ (meronym), and $\forall (x,y) \in E, y \neq \mathsf{root}$.
Such a knowledge graph is called a \textit{meronymy}.
In the resulting graph of concepts, for any two concepts $x, y\in \Sigma$, we let $LCA(x,y)$ denote the lowest common ancestor of $x$ and $y$, and $d(x)$ denote the depth of $x$ (i.e., its minimal distance from the \textsf{root} node).
\end{definition}
Additionally, knowledge representations such as an \textit{is-a} taxonomy can induce a partial order on $\Sigma$. This classification of $\Sigma$
allows us to define similarity measures on its elements. Many similarity measures have been proposed for knowledge graphs (see \cite{Zhu17} for a survey).
In the remainder of this paper, we use the Wu-Palmer similarity measure \cite{Wu94}, which is defined as $sim_{WP}:\Sigma \times \Sigma \rightarrow [0,1]$. This is a well-established state-of-the-art measure that accounts for both the depth of the concepts in an ontology and their closest ancestors, and is normalized:
\begin{equation}
\label{eq:wu-palmer}
sim_{WP}(x,y) = \frac{2\times d(LCA(x,y))}{d(x)+d(y)}
\end{equation}
Moreover, thanks to the hierarchical representation of activities, data can be analyzed at different aggregation levels, similar to online analytical processing (OLAP). For example, the activities of shopping in a mall and shopping in a marketplace can be aggregated into a single higher-level shopping activity. Intermediate nodes in a meronymy are useful for such aggregations.
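To make the computation of Equation \eqref{eq:wu-palmer} concrete, the following sketch implements the Wu-Palmer similarity on a toy meronymy encoded as a child-to-parent dictionary; the concept fragment and depths shown are illustrative only and do not reproduce the full EMD ontology of Fig. \ref{fig:ontology}:
\begin{verbatim}
# Minimal Wu-Palmer similarity on a toy meronymy (child -> parent).
# The concept fragment below is illustrative, not the full EMD ontology.
PARENT = {
    "Transport mode": "root",
    "Smooth": "Transport mode", "Motorized": "Transport mode",
    "walk": "Smooth", "bike": "Smooth",
    "car driver": "Motorized", "car passenger": "Motorized",
}

def ancestors(x):
    """Path from concept x up to the root, x included."""
    path = [x]
    while path[-1] != "root":
        path.append(PARENT[path[-1]])
    return path

def depth(x):
    return len(ancestors(x)) - 1              # the root has depth 0

def wu_palmer(x, y):
    anc_x = set(ancestors(x))
    lca = next(a for a in ancestors(y) if a in anc_x)  # lowest common ancestor
    return 2 * depth(lca) / (depth(x) + depth(y))

print(wu_palmer("walk", "car driver"))   # 2*1 / (3+3) = 1/3 on this toy graph
\end{verbatim}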
\subsection{Semantic mobility sequence dataset analysis}
\label{sec:indicator}
Semantic sequence data are difficult to analyze because they combine a temporal dimension (i.e., the temporal order of activities) and a semantic dimension. As discussed in Section \ref{sec:mob_law}, human mobility semantic sequences tend to follow statistical laws. Frequently visited items induce repetitions of activities that can be modeled using a Zipf law. Sequences are mainly structured by a few networks of daily patterns and are characterized by low entropy and a Poisson distribution of their lengths.
\renewcommand{\arraystretch}{1.1}
\begin{table*}
\caption{Retained indicators for semantic mobility sequences analysis}
\label{tab:chosen_indic}
\begin{tabular}{cllccc}
\hline
\multirow{2}{*}{{Id}} & \multirow{2}{*}{{Techniques}} & \multirow{2}{*}{{Visualization methods}} & \multicolumn{2}{c}{{Used for}} & \multirow{2}{*}{{Example}}\tabularnewline
\cline{4-5}
& & & {All dataset} & {Clusters} & \tabularnewline
\hline
\hline
\multicolumn{6}{c}{\emph{\textbf{Frequency distribution}}}\tabularnewline
\hline
1 & {Length distribution} & {Histogram, box plot} & {$\times$} & {$\times$} & Figs. \ref{fig:poisson_length}, \ref{fig:size_clust} \tabularnewline
2 & {State distribution} & {Histogram, stack plot} & {$\times$} & {$\times$} & Figs. \ref{fig:stop_freq}, \ref{fig:stack} \tabularnewline
\hline
\multicolumn{6}{c}{\emph{\textbf{Transitions}}}\tabularnewline
\hline
3 & {Origin-Destination matrix} & {Chord diagram} & {$\times$} & {$\times$} & Fig. \ref{fig:flow} \tabularnewline
4 & {Daily patterns} & {Network and histogram} & {$\times$} & {$\times$} & Fig. \ref{fig:daily_patt} \tabularnewline
\hline
\multicolumn{6}{c}{\emph{\textbf{Disorder}}}\tabularnewline
\hline
5 & {Entropy} & {Density plot} & {$\times$} & & Fig. \ref{fig:entropy} \tabularnewline
6 & {Predictability} & {Density plot} & {$\times$} & & Fig. \ref{fig:entropy} \tabularnewline
7 & {Distinct symbols} & {Box plot} & {$\times$} & & Fig. \ref{fig:delta} \tabularnewline
\hline
\multicolumn{6}{c}{\emph{\textbf{Statistical dependence measures}}}\tabularnewline
\hline
8 & {Pearson residuals} & {Mosaic diagram} & & {$\times$} & Fig. \ref{fig:mosaic} \tabularnewline
\hline
\multicolumn{6}{c}{\emph{\textbf{Centrality}}}\tabularnewline
\hline
9 & {Mode} & {Emojis sequence} & & {$\times$} & Tab. \ref{tab:centrality} \tabularnewline
10 & {Medoid} & {Emojis sequence} & & {$\times$} & Tab. \ref{tab:centrality} \tabularnewline
\hline
\multicolumn{6}{c}{\emph{\textbf{Scattering and outliers}}}\tabularnewline
\hline
11 & {Diameter and Radius} & {Table} & & {$\times$} & Tab. \ref{tab:clusters} \tabularnewline
12 & {Silhouette} & {Table} & & {$\times$} & Tab. \ref{tab:clusters} \tabularnewline
\hline
\end{tabular}
\end{table*}
Therefore, to ensure the quality of a dataset in terms of the aforementioned properties and obtain a preliminary understanding of the data, based on Table \ref{tab:indicator}, we propose complementary statistical indicators that facilitate the global analysis of a set of semantic sequences. The selected indicators are listed in Table \ref{tab:chosen_indic}. Although this study focused on mobility sequences, the proposed methodology is generic and can be used for analyzing any type of semantic sequence dataset.
\begin{indicator}[Length distribution]
\label{indic:length}
Frequency distribution of sequence length combined with a frequency histogram.
\end{indicator}
\begin{indicator}[State distribution]
\label{indic:state}
Frequency distribution of activities $x \in \Sigma$ inside the sequences of the dataset, combined with a frequency histogram. Using a log scale may be advisable in the field of human mobility.
\end{indicator}
Together, these two indicators provide a high-level overview of the content and length of the sequences. However, they provide no information regarding the transitions or motifs in the sequences. Transition analysis can be useful for estimating transition probabilities, such as those used in DHD measures, or for generating probabilistic models of flows, and the resulting matrix can be visualized using a chord diagram. To this end, we incorporate the following additional indicators.
\begin{indicator}[Origin-destination Matrix]
\label{indic:od}
Matrix $T = \{t_{ij}\}$ in which each line/column represents an activity $x\in \Sigma$.
The coefficient $t_{ij}$ represents the number of transitions from activity $x_i$ to activity $x_j$.
Such a matrix can be visualized using a chord diagram.
\end{indicator}
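As a minimal illustration (with hypothetical toy data rather than the EMD sequences), such a transition matrix can be assembled as follows:
\begin{verbatim}
import numpy as np

def od_matrix(sequences, labels):
    """Count transitions x_i -> x_{i+1} over a list of sequences.

    sequences: list of lists of activity labels
    labels:    ordered list of all activity labels
    """
    index = {lab: i for i, lab in enumerate(labels)}
    T = np.zeros((len(labels), len(labels)), dtype=int)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            T[index[a], index[b]] += 1
    return T

# Toy example on aggregated stop activities:
seqs = [["Home", "Work", "Home"], ["Home", "Shopping", "Leisure", "Home"]]
labs = ["Home", "Work", "Shopping", "Leisure"]
print(od_matrix(seqs, labs))
\end{verbatim}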
\begin{indicator}[Daily pattern]
\label{indic:pattern}
Frequency distribution of non-isomorphic daily pattern graphs \cite{Schneider13}. We compute this indicator using Algorithm \ref{alg:daily_patt}:
\begin{algorithm}[H]
\SetAlgoLined
\KwData{Dataset of semantic sequences $\mathcal{D}$}
\KwResult{Dictionary $\mathcal{G}$ of non-isomorphic daily pattern graph \\ frequencies}
$\mathcal{G} \gets \emptyset$ \LeftComment{Dictionary $\mathcal{G}$ where keys are graphs and values are integers} \\
\LeftComment{Construct the daily pattern graph of each sequence $S\in \mathcal{D}$} \\
\For{$S \in \mathcal{D}$}{
$V_S \gets \{x|x\in S\}$ \LeftComment{Set of vertices}\\
$E_S \gets \{(x_i, x_{i+1})|i\in [\![1, |S|-1]\!]\}$ \LeftComment{Set of edges}\\
$G_S \gets (V_S,E_S)$ \\
\eIf{$\exists G \in \mathcal{G}.keys() | G \simeq G_S$}{ \LeftComment{If there already exists a graph $G$ isomorphic to $G_S$ in $\mathcal{G}$}\\
$\mathcal{G}[G] \gets \mathcal{G}[G] + 1$ \LeftComment{Increment the frequency of $G$} \\
} {
$\mathcal{G}[G_S] \gets 1$ \LeftComment{Create it in $\mathcal{G}$} \\
}
}
\caption{Daily patterns frequency}
\label{alg:daily_patt}
\end{algorithm}
It should be noted that the isomorphism test for the two graphs $G$ and $G'$ can be implemented using the Nauty algorithm \cite{McKay14}.
\end{indicator}
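Algorithm \ref{alg:daily_patt} can be prototyped directly with an off-the-shelf graph library; the sketch below uses \texttt{networkx} isomorphism tests instead of the Nauty algorithm mentioned above, a simplification that is slower but sufficient for small daily-pattern graphs (the input sequences shown are hypothetical):
\begin{verbatim}
import networkx as nx

def daily_pattern_frequencies(sequences):
    """Frequencies of non-isomorphic daily pattern graphs (cf. Algorithm 1).

    sequences: list of lists of stop-activity labels.
    Returns a list of (representative_graph, count) pairs.
    """
    patterns = []                               # list of [graph, count]
    for seq in sequences:
        g = nx.DiGraph()
        g.add_nodes_from(seq)
        g.add_edges_from(zip(seq, seq[1:]))
        for entry in patterns:
            if nx.is_isomorphic(entry[0], g):   # structural test, labels ignored
                entry[1] += 1
                break
        else:
            patterns.append([g, 1])
    return [(g, c) for g, c in patterns]

# Toy example:
seqs = [["H", "W", "H"], ["H", "S", "H"], ["H", "W", "S", "H"]]
for g, c in daily_pattern_frequencies(seqs):
    print(sorted(g.edges()), c)
\end{verbatim}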
Finally, to capture the degree of disorder in sequences and understand how studied sequences are both predictable and varied, we use the following three indicators developed in entropy studies.
\begin{indicator}[Entropy of a sequence]
\label{indic:entropy}
The entropy of a sequence is defined in \cite{Song10b}, where several types of entropy are given.
\begin{itemize}
\item The random entropy $H^{rand} = \log_2\delta(S)$, where $\delta(S)$ is the number of distinct activities in sequence $S$.
\item The temporal-uncorrelated entropy $H^{unc} = - \sum_{i=1}^{\delta(S)} p(x_i) \log_2p(x_i)$, where the sum runs over the distinct activities of $S$ and $p(x_i)$ is the historical probability that activity $x_i$ was performed. This characterizes the heterogeneity of activities.
\item The real entropy $H$, which depends on both the frequency and the order in which activities appear in the sequence.
\begin{equation}
\label{eq:entropy}
H(S) = - \sum_{S'\subset S} p(S') \log_2p(S')
\end{equation}
where $p(S')$ is the probability of finding a particular ordered subsequence $S'$ in the sequence $S$.
\end{itemize}
In practice, $H$ is uncomputable for long sequences. Therefore, we use the following estimator $H^{est}$ of $H$ proposed in \cite{Kontoyiannis98}:
\begin{equation}
\label{eq:entropy_est}
H^{est}(S) = \left(\frac{1}{|S|}\underset{i}{\sum}\lambda_{i}\right)^{-1}\log_{2}|S|
\end{equation}
where $\lambda_i = \min\left\{k\geq 1 \mid x_i \dots x_{i+k-1} \text{ does not appear in } x_1 \dots x_{i-1}\right\}$ is the length of the shortest subsequence beginning at position $i$ that does not appear as a contiguous subsequence of $x_1 \dots x_{i-1}$.
Kontoyiannis et al. demonstrated that $\lim_{|S| \rightarrow \infty} H^{est}(S) = H(S)$; additional details can be found in \cite{Song10b}.
\end{indicator}
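A direct (non-optimized) Python sketch of the estimator of Equation \ref{eq:entropy_est}, assuming a sequence given as a list of activities:
\begin{verbatim}
import math

def contains(prefix, sub):
    # True if sub appears as a contiguous subsequence of prefix
    m = len(sub)
    return any(prefix[j:j + m] == sub for j in range(len(prefix) - m + 1))

def entropy_estimate(seq):
    n = len(seq)
    lambdas = []                        # lambda_i for each position i
    for i in range(n):
        k = 1
        # shortest subsequence starting at i that is absent from seq[:i]
        while i + k <= n and contains(seq[:i], seq[i:i + k]):
            k += 1
        lambdas.append(k)
    return (n / sum(lambdas)) * math.log2(n)
\end{verbatim}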
\begin{indicator}[Predictability]
\label{indic:predict}
The predictability $\Pi$ is the probability that an appropriate algorithm can correctly predict the user's future whereabouts. Thanks to Fano's inequality, we can obtain an upper bound $\Pi^{max}$ for $\Pi$ \cite{Song10b}. $\Pi^{max}$ is obtained via the approximate resolution of the following equation:
\begin{equation}
H(S) = \mathcal{H}(\Pi^{max}) + \left( 1-\Pi^{max} \right) \log_2(\delta(S)-1)
\label{eq:predict}
\end{equation}
where $\mathcal{H}(x) = -x \log_2 x - (1-x) \log_2(1-x)$ is the binary entropy function.
\end{indicator}
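In practice, Equation \ref{eq:predict} can be solved numerically, e.g., by bisection, since its right-hand side is decreasing in $\Pi^{max}$ on the relevant interval. A hedged sketch, where h stands for the (estimated) entropy and n_distinct for $\delta(S)$ (assumed to be at least 2):
\begin{verbatim}
import math

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def max_predictability(h, n_distinct, tol=1e-6):
    # solve h = H(pi) + (1 - pi) * log2(n_distinct - 1) for pi
    def rhs(pi):
        return binary_entropy(pi) + (1 - pi) * math.log2(n_distinct - 1)
    lo, hi = 1.0 / n_distinct, 1.0     # rhs is decreasing on this interval
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if rhs(mid) > h:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
\end{verbatim}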
\begin{indicator}[Distinct symbols]
\label{indic:unique}
The frequency distribution of the number of distinct activities $\delta$ in each sequence $S$ in the dataset combined with a frequency histogram.
$\delta$ can also be studied in combination with the length $|S|$ to uncover hidden regularities in a sequence.
\end{indicator}
\subsection{Clustering design for semantic sequences}
\label{sec:clustering_process_metho}
To address the problem of clustering semantic mobility behaviors in a metropolitan area utilizing a compositional approach (i.e., ``What does an individual do during a day?"), we use a combination of the CED measure and hierarchical clustering based on Ward's criterion.
\subsubsection{CED}
\label{sec:ced}
As discussed in Section \ref{sec:editdist}, the distances in the edit distance family count the minimum costs of operations (e.g., modification, addition, deletion) required to transform one sequence into another. Such measures can be used to quantify the similarity between two semantic mobility sequences. However, as indicated in Section \ref{sec:mob_law}, human mobility sequences are characterized by redundancy of certain symbols, repetition \cite{Song10b}, and cycles \cite{Schneider13}. These features should be considered by adopting specialized distances.
Based on these observations, we proposed the use of the CED measure \cite{Moreau19a,Moreau19b}, which is a generalization of edit distance for handling semantic mobility sequences. This measure incorporates the following factors:
\begin{enumerate}
\item \textit{Context-dependent cost}: Edit cost depends on the similarity of nearby activities. The more similar and closer two activities are, the lower the cost of operations.
\item \textit{Repetition}: Editing repeated nearby activities has a low cost.
\item \textit{Permutation}: Similar and nearby activities can be exchanged with a low cost.
\end{enumerate}
These three factors make CED particularly suitable for mobility analysis. Because repetitions, permutations, and edits of similar elements all have low costs, CED tends to group sequences containing activities with the same semantics while allowing for a flexible timeframe.
To achieve these advantages, the CED includes a modification of the cost operation function $\gamma$ that generalizes the classical definition of edit distance and accounts for the local context of each activity in a mobility sequence.
Let a contextual edit operation be a quadruple such that: \[e=(o,S,x,k) \in \{\mathtt{add}, \mathtt{mod}, \mathtt{del}\}\times \Sigma^n \times (\Sigma\cup \{\varepsilon\}) \times \mathbb{N}^*\]
where $e$ is a transformation $o$ of sequence $S$ at index $k$ using symbol $x$. Let $E$ be the set of all possible contextual edit operations; the cost function $\gamma : E \rightarrow [0,1]$ for a contextual edit operation is defined as:
\begin{equation}
\label{eq:costFunction}
\gamma(e)=
\alpha \times \ell(e) +
(1- \alpha)\left(1- \underset{i\in [\![1, n]\!] }{\max}\left\{ sim(x ,s_i)\times v_{i}(e)\right\} \right)
\end{equation}
where:
\begin{itemize}
\item $\alpha \in [0,1]$ is a contextual coefficient. \\ If $\alpha \rightarrow 0$, the cost is mainly determined by the content near index $k$ in the sequence being edited. If $\alpha \rightarrow 1$, CED tends toward the Levenshtein distance with substitution costs.
\item $\ell(e)=\begin{cases}
1-sim(s_k, x) & \text{if } o = \texttt{mod}\\
1 & \text{otherwise}
\end{cases}$
is the cost function of the Levenshtein distance with substitution costs.
\item $sim:\Sigma\times \Sigma \rightarrow [0,1]$
is a similarity measure between two activities.
\item $v(e)\in [0,1]^n$ is a contextual vector that quantifies the notion of proximity between activities. Typically, the larger $|i-k|$ is, the smaller $v_i(e)$ is.
\end{itemize}
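For illustration, Equation \ref{eq:costFunction} can be transcribed as the following Python sketch; sim (a similarity function over activities, including its handling of $\varepsilon$) and context (the precomputed vector $v(e)$) are assumptions of the sketch:
\begin{verbatim}
def gamma(op, seq, x, k, sim, context, alpha=0.0):
    # e = (op, seq, x, k): edit of type op at 1-based position k with symbol x
    if op == 'mod':
        base = 1.0 - sim(seq[k - 1], x)    # Levenshtein-like cost l(e)
    else:
        base = 1.0
    # contextual term: best match between x and the symbols of seq,
    # weighted by their proximity to the edited position
    ctx = max(sim(x, s_i) * context[i] for i, s_i in enumerate(seq))
    return alpha * base + (1 - alpha) * (1.0 - ctx)
\end{verbatim}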
Let $\mathcal{P}(S_1,S_2)$ be the set of all edit paths transforming a sequence $S_1$ into $S_2$. The one-sided contextual edit distance from $S_1$ to $S_2$, denoted $\tilde{d}_{CED}:\Sigma^n \times \Sigma^p \rightarrow \mathbb{R}^+$, is defined such that:
\begin{equation}
\label{eq:one_ced}
\tilde{d}_{CED}(S_1,S_2) = \underset{P\in\mathcal{P}(S_{1},S_{2})}{\min}\left\{ \sum_{i=1}^{|P|}\gamma(e_{i})\right\}
\end{equation}
where $P=(e_1,...,e_q)\in E^q$ is a vector of contextual edit operations.
The computation of Equation \ref{eq:one_ced} is performed using dynamic programming and the Wagner-Fischer algorithm \cite{Wagner74}. Finally, $d_{CED}:\Sigma^n \times \Sigma^p \rightarrow \mathbb{R}^+$ is computed using the following equation:
\begin{equation}
\label{eq:ced}
d_{CED}(S_1,S_2) = \max\left\{\tilde{d}_{CED}(S_1,S_2), \tilde{d}_{CED}(S_2,S_1)\right\}
\end{equation}
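A simplified Wagner-Fischer sketch of Equations \ref{eq:one_ced} and \ref{eq:ced}, reusing the gamma function sketched above; as an approximation, contextual costs are evaluated against the source sequence rather than the partially edited one, and context_of is a hypothetical helper returning the contextual vector associated with each edit position of a (non-empty) sequence:
\begin{verbatim}
import numpy as np

def one_sided_ced(s1, s2, sim, context_of, alpha=0.0):
    n, p = len(s1), len(s2)
    v = context_of(s1)     # v[k-1][i]: weight of position i for an edit at k
    D = np.zeros((n + 1, p + 1))
    for i in range(1, n + 1):
        D[i, 0] = D[i - 1, 0] + gamma('del', s1, s1[i - 1], i, sim, v[i - 1], alpha)
    for j in range(1, p + 1):
        D[0, j] = D[0, j - 1] + gamma('add', s1, s2[j - 1], 1, sim, v[0], alpha)
    for i in range(1, n + 1):
        for j in range(1, p + 1):
            sub = 0.0 if s1[i - 1] == s2[j - 1] else \
                gamma('mod', s1, s2[j - 1], i, sim, v[i - 1], alpha)
            D[i, j] = min(
                D[i - 1, j] + gamma('del', s1, s1[i - 1], i, sim, v[i - 1], alpha),
                D[i, j - 1] + gamma('add', s1, s2[j - 1], i, sim, v[i - 1], alpha),
                D[i - 1, j - 1] + sub)
    return D[n, p]

def ced(s1, s2, sim, context_of, alpha=0.0):
    return max(one_sided_ced(s1, s2, sim, context_of, alpha),
               one_sided_ced(s2, s1, sim, context_of, alpha))
\end{verbatim}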
\subsubsection{Hierarchical clustering settings and validity}
\label{sec:hierarchical_clust}
Hierarchical clustering algorithms have been widely applied to partition datasets into different clusters \cite{Rokach05}. In the case of an abstract topological space, similar to the space constructed using the CED for semantic mobility sequences, the dendrogram used to visualize the results of hierarchical clustering provides support for understanding the studied space. However, in addition to defining a similarity measure, hierarchical clustering requires three other parameters to be defined: the strategy (top-down or bottom-up), linkage criterion, and dendrogram cutoff method.
Regarding the choice of strategy, the bottom-up approach has a polynomial time complexity of $\mathcal{O}(n^2\log n)$ versus an exponential complexity of $\mathcal{O}(2^n)$ for the top-down approach \cite{Kaufman09}. Therefore, to handle a large dataset (in our case, 10\,005 sequences), we used a hierarchical agglomerative clustering (HAC) algorithm based on a bottom-up strategy. A summary of hierarchical clustering algorithms in statistical software is presented in \cite{Struyf97}.
Regarding linkage criteria, the authors of \cite{Kaufman09} summarized the common options used in the literature. This choice depends on cluster shapes. The simplest approach, namely the single linkage criterion, is based on the minimum distance between a pair of elements from two clusters and can handle any cluster shape. However, repeated merges can lead to a chaining effect. In contrast, complete linkage, which is based on maximum distances, produces more compact clusters, but is sensitive to noise and outliers. Average linkage is particularly useful for convex clusters \cite{Kaufman09}.
Because we do not know the shapes of clusters beforehand and we want robustness to outliers and immunity to chaining effects, we adopted the Ward criterion, which minimizes the total within-cluster variance. Like the K-means algorithm, it is less affected by noise and tends to create compact convex clusters.
Finally, the determination of the optimal number of clusters can be considered from different perspectives and is a relatively difficult problem \cite{Halkidi01}. The simplest method for hierarchical clustering is based on the highest relative loss of inertia criterion \cite{Krzanowski88}, which identifies the largest gap between two successive agglomerations in a dendrogram. A summary of the different techniques can be found in \cite{Halkidi01}.
In addition, clustering quality indicators such as the Silhouette score \cite{Rousseeuw87} are useful criteria for assessing the natural number of clusters and ensuring the validity of clustering. In particular, the Silhouette score is based on the same objective function as the Ward criterion and can be maximized to determine the optimal number of clusters.
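A possible implementation of this clustering step on a precomputed CED distance matrix, using SciPy and scikit-learn (applying the Ward criterion to non-Euclidean distances is a pragmatic choice, as discussed above):
\begin{verbatim}
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.metrics import silhouette_score

def cluster_sequences(D, n_clusters):
    # D: symmetric matrix of pairwise CED distances
    Z = linkage(squareform(D, checks=False), method='ward')
    labels = fcluster(Z, t=n_clusters, criterion='maxclust')
    inertia_gaps = np.diff(Z[:, 2])    # gaps between successive agglomerations
    score = silhouette_score(D, labels, metric='precomputed')
    return labels, inertia_gaps, score
\end{verbatim}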
\subsection{Analysis of semantic sequence clusters}
\label{sec:clust_anal_method}
Let $\{C_1, ..., C_m\}$ be a partition of the dataset $\mathcal{D}$ of semantic mobility sequences where $C_{k\in [\![1,m]\!]}$ represents a cluster. In this section of the pipeline, we wish to extract the meaningful characteristics of each cluster $C_k$ in order to understand and explain the mobility behavior of each cluster.
To this end, we calculate most of the indicators defined in Section \ref{sec:indicator} for each cluster $C_k$.
For numerical frequency distribution indicators, such as Indicator \ref{indic:length} or Indicator \ref{indic:unique}, we use boxplots to summarize and compare the distributions of each cluster. In contrast, for categorical frequency distribution indicators such as Indicator \ref{indic:state} and Indicator \ref{indic:pattern}, following the process described in \cite{Oliveira03}, we use contingency tables, mosaic plots \cite{Friendly94}, and stacked plots to visualize information. In this phase, the indicators are enriched with significance tests, such as the chi-squared test and Pearson residuals \cite{Haberman73}, in order to identify the under- or over-representation of some variables, patterns, or activities in the clusters. Cramér's $V$ score is used to evaluate the strength of relationships between these variables and clusters.
\begin{indicator}[Pearson residuals]
\label{ind:pearson}
Consider a sample of size $N$ of the jointly distributed variables $A$ and $B$ taking values $a_1, ..., a_p$ and $b_1, ..., b_q$, respectively. Let $(n_{ij}),1 \leq i \leq p, 1\leq j\leq q$, be the number of times the values $a_{i}$ and $b_{j}$ are observed, and let $(n^*_{ij}) = \frac{n_{+j} \times n_{i+}}{N}$ be the theoretical values where:
\begin{itemize}
\item $n_{+j}=\sum_{i=1}^p n_{ij}$, represents the column marginal for that cell,
\item $n_{i+}=\sum_{j=1}^q n_{ij}$ represents the row marginal for that cell.
\end{itemize}
Then, the Pearson residuals $r_{ij}$ \cite{Haberman73} are defined as:
\begin{equation}
r_{ij} = \frac{n_{ij} - n^*_{ij}}{\sqrt{n^*_{ij}}}
\end{equation}
Pearson residuals represent the strength and direction of the association between $a_i$ and $b_j$. The strength is given by the absolute value of the residual and the direction by its sign. Units are in standard deviations, meaning a residual greater than 2 or less than -2 represents a significant departure from independence at the 95\% confidence level.
\end{indicator}
By calculating Pearson residuals, we can determine how much the observed values deviate from the values in the case of complete independence. For example, an interesting subject is the departure of the frequency values of an activity $x_i\in \Sigma$ in a given cluster $C_j$. If $|r_{ij}|\geq 2$, we can conclude that $x_i$ has a statistically significant association with cluster $C_j$, where the sign indicates if $x_i$ is under- (negative sign) or over- (positive sign) represented in $C_j$. However, statistical significance does not necessarily imply a strong association.
The strength of an association can be assessed with more standardized tests such as the chi-squared test, for which the most commonly used correlation measure is Cramér's $V$ score \cite{Cramer99}. $V$ varies from zero (corresponding to no association between variables) to one (complete association) and can reach one only when each variable is completely determined by the other.
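Both the residuals and Cramér's $V$ can be computed directly from the cluster/activity contingency table, for instance:
\begin{verbatim}
import numpy as np
from scipy.stats import chi2_contingency

def pearson_residuals(table):
    # r_ij = (n_ij - n*_ij) / sqrt(n*_ij)
    table = np.asarray(table, dtype=float)
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
    return (table - expected) / np.sqrt(expected)

def cramers_v(table):
    table = np.asarray(table, dtype=float)
    chi2, _, _, _ = chi2_contingency(table)
    return np.sqrt(chi2 / (table.sum() * (min(table.shape) - 1)))
\end{verbatim}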
Therefore, these measures can be used to characterize the activities or daily patterns in a cluster and provide partial information regarding the behaviors represented by patterns. However, these significance tests do not provide meaningful information regarding the order in which activities are conducted. The origin-destination matrix provides some additional information regarding the order of activities, but it cannot represent complete coherent behaviors in a cluster. In contrast, indicators of centrality such as the medoid and the mode of a cluster can be used to extract an archetypal mobility sequence from the cluster.
\begin{indicator}[Mode]
Given a set of elements $C$, the mode $M$ of $C$ is defined such that:
\begin{equation}
\label{eq:mode}
M=\underset{X\in C}{\text{argmax}}\left\{f(X)\right\}
\end{equation}
where $f$ denotes the frequency function. Intuitively, $M$ is the most frequent element in $C$.
\end{indicator}
\begin{indicator}[Medoid]
\label{ind:medoid}
Given a set of elements $C$ and a similarity measure $d:C\times C \rightarrow \mathbb{R}^+$, the medoid $m$ of $C$ is defined such that:
\begin{equation}
\label{eq:medoid}
m=\underset{X\in C}{\text{argmin}}\left\{ \underset{Y\in C}{\sum}d(X,Y)\right\}
\end{equation}
Intuitively, $m$ is the element that minimizes the distance to all other elements in $C$.
\end{indicator}
To validate whether the medoid $m$ actually represents the elements of a cluster, it is essential to study the topology of the cluster $C$. Here, $m$ is a good representative of $C$ if the formed cluster is hyperspherical (i.e., the distribution of distances $d(x,m)$ follows a power law, indicating that most elements are near $m$).
Furthermore, hierarchical clustering achieves a complete partitioning of a dataset. Therefore, this analysis can identify outlier elements in clusters, which can be considered as the 5\% of elements farthest from the medoid\footnote{Under the hypothesis of hyperspherical clusters.}. Another measure for studying scattering and outliers that ignores the topology of clusters is the cluster diameter.
\begin{indicator}[Diameter and Radius]
Given a set of elements $C$ and a similarity measure $d:C\times C \rightarrow \mathbb{R}^+$, the diameter $diam$ of $C$ is defined as:
\begin{equation}
\label{eq:diameter}
diam(C)=\underset{X,Y\in C}{\max}\left\{d(X,Y)\right\}
\end{equation}
where $diam$ represents the greatest distance between any pair of elements in the cluster. It should be noted that $diam$ can also return the most-distant pair of elements if $\max$ is replaced with $\text{argmax}$ in Equation \ref{eq:diameter}.
Similarly, the radius $rad$ of $C$ is defined as:
\begin{equation}
\label{eq:radius}
rad(C)=\underset{X\in C}{\max}\left\{d(m,X)\right\}
\end{equation}
where $m$ is the medoid of $C$ as defined in Indicator \ref{ind:medoid}.
\end{indicator}
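Given the sub-matrix of pairwise CED distances restricted to a cluster, these centrality and scattering indicators reduce to a few lines, e.g.:
\begin{verbatim}
import numpy as np

def medoid(D):
    # index of the element minimizing the sum of distances to all others
    return int(np.argmin(D.sum(axis=1)))

def diameter(D):
    # greatest pairwise distance within the cluster
    return float(D.max())

def radius(D):
    # greatest distance from the medoid to any element of the cluster
    return float(D[medoid(D)].max())
\end{verbatim}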
Finally, the analysis can be complemented by calculating the Silhouette score of a cluster.
\begin{indicator}[Silhouette]
\label{ind:Silhouette}
Let $\{C_1, ..., C_m\}$ be a partition of the dataset $\mathcal{D}$. The Silhouette score \cite{Rousseeuw87} quantifies how appropriately an element $X\in C_k$ is clustered. It is defined as:
\begin{equation}
sil(X) = \frac{b(X)-a(X)}{\max\{a(X), b(X)\}}
\end{equation}
where:
\begin{itemize}
\item $a(X) = \frac{1}{|C_k|-1}\underset{Y\in C_k,Y\neq X}{\sum}d(X,Y)$
\item $b(X)=\underset{C_i\neq C_k}{\min}\frac{1}{|C_i|}\underset{Y\in C_i}{\sum}d(X,Y)$
\end{itemize}
Here, $a(X)$ is the mean distance between $X$ and all other elements in $C_k$. Therefore, it can be interpreted as a measure of how well $X$ is assigned to its cluster $C_k$. On the other hand, $b(X)$ is the smallest mean distance from $X$ to all points in any other cluster. The cluster with the smallest mean dissimilarity is said to be the ``neighboring cluster" of $X$ because it is the next-best fit cluster for point $X$.
Thus, the Silhouette score of a cluster $C_k$ is defined as the arithmetic mean of $sil(X)$ over all $X\in C_k$:
\begin{equation}
Sil(C_k) = \frac{1}{|C_k|}\sum_{X\in C_k}sil(X)
\end{equation}
\end{indicator}
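For reference, a direct transcription of these formulas from a precomputed distance matrix and cluster labels (assuming every cluster contains at least two elements):
\begin{verbatim}
import numpy as np

def silhouettes(D, labels):
    labels = np.asarray(labels)
    n = len(labels)
    sil = np.zeros(n)
    for i in range(n):
        same = (labels == labels[i]) & (np.arange(n) != i)
        a = D[i, same].mean()                                # a(X)
        b = min(D[i, labels == c].mean()                     # b(X)
                for c in np.unique(labels) if c != labels[i])
        sil[i] = (b - a) / max(a, b)
    return sil

def cluster_silhouette(D, labels, k):
    # Sil(C_k): mean silhouette over the elements of cluster k
    sil = silhouettes(D, labels)
    return sil[np.asarray(labels) == k].mean()
\end{verbatim}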
The next section illustrates the application of the proposed methodology to a real-world dataset and describes our findings in terms of mobility behaviors.
\section{Related work}
\label{sec:related_work}
Questions regarding the extraction of mobility behaviors and the comprehension of discovered patterns lie at the intersection of three main subjects, namely the study of human mobility properties, methods for comparing two semantic mobility sequences (i.e., two sequences of daily activities), and the explainability and interpretability of abstruse machine learning models. Therefore, in this section, we summarize major studies on human mobility characteristics as a basis for the requirements of similarity measures between mobility sequences. An extensive review of similarity measures and their properties is presented in the second subsection. The third subsection discusses clustering methods based on arbitrary distance matrices for automatically extracting groups of similar individuals. Finally, we discuss tools for human mobility analysis from the literature and commonly used indicators for describing semantic mobility sequences, as well as the explainability of black-box models, to understand and infer behaviors.
\subsection{Human mobility properties}
\label{sec:mob_law}
Numerous studies on human mobility have shown remarkable heterogeneity in travel patterns that coexist with a high degree of predictability \cite{Alessandretti18}. In other words, individuals exhibit a large spectrum of mobility ranges while repeating daily schedules that are dictated by routine. González et al. analyzed a nation-wide mobile phone dataset and found that human trajectories exhibit a high degree of temporal and spatial regularity. Each individual is characterized by a time-independent characteristic travel distance and has a significant probability of returning to a few frequently visited locations \cite{Gonzalez08}. In particular, the authors highlighted the following points. (i) According to Brockmann et al. \cite{Brockmann06}, the travel distances of individuals follow a power-law distribution. (ii) The radius of gyration of individuals, which represents their characteristic travel distance, follows a truncated power law. Song et al. \cite{Song10a} observed mobile phone data and determined that the waiting times of individuals (i.e., times between two moves) are characterized by a power-law distribution, confirming the results presented by Barab\'{a}si \cite{Barabasi05}.
Additionally, in \cite{Song10b}, Song et al. analyzed the movements of individuals based on the Lempel-Ziv algorithm \cite{Kontoyiannis98} and calculated a value of 93\% potential predictability for user mobility, which demonstrates that a significant portion of predictability is encoded in the temporal order of visitation patterns. Moreover, despite significant differences in travel patterns, the variability in predictability is weak and largely independent of the distances users cover on a regular basis. This study was continued by Teixeira et al. \cite{Teixeira19}, who demonstrated that the entropy of a mobility sequence can be estimated using two simple indicators, namely regularity and stationarity, indicating that trivial indicators can capture the complexity of human mobility.
When considering patterns in human mobility, particularly movements within a single day or week, it is essential to distinguish locations based on their importance. As mentioned previously, people exhibit periods of high-frequency trips followed by periods of lower activity and a tendency to return home on a daily basis. Therefore, most daily and weekly trajectories will start and end at the same location \cite{Barbosa18}. One method for quantifying the importance of locations is to rank locations. In \cite{Song10a}, location ranking was performed for mobile users based on the numbers of times their positions were recorded in the vicinities of the cell towers covering their locations. It was found that visitation frequency follows a Zipf law. Another method of distinguishing locations is to construct individual mobility patterns in the form of a network. Schneider et al. \cite{Schneider13} used data from both mobile phone users and travel survey respondents to construct weekday mobility networks for individuals. These profiles consisted of nodes representing visited locations and edges modeling trips between locations. Daily networks were only constructed for weekdays to identify topological patterns in mobility during a typical day. It was determined that approximately 90\% of the recorded trips made by all users could be described using only 17 daily networks. Another important point is that all of the identified networks contained strictly less than seven nodes and most of the networks exhibited oscillations, which are represented by cyclic links between two or more nodes. This result suggests that these motifs represent the underlying regularities in our daily movements and are useful for the accurate modeling and simulation of human mobility patterns.
\subsection{Approaches to semantic mobility sequence mining}
In mobility mining, two main approaches coexist with distinct goals: sequence pattern mining and clustering methods. The former extracts subsequences of frequent items from trajectories \cite{Giannotti07,Zhang14,Wan18,Ferrero20} to represent an aggregated abstraction of many individual trajectories sharing the property of visiting the same sequence of places with similar travel and visit times. The latter constructs clusters of similar sequences by comparing pairs using a similarity measure. Each cluster represents a coherent behavior and shares mobility features according to similarity measure properties.
Although sequence pattern mining methods are efficient for mining regular fragments and are easy to interpret, they are unsuitable for assessing similarities between individuals, meaning they cannot be used to extract representative groups according to their activities accurately. In general, clustering methods are superior for comparison, classification, and grouping tasks. Therefore, in the remainder of this section, we review related work on clustering processes for mining mobility behavior. Specifically, we focus on difficulties and solutions associated with the choice of a similarity metric between semantic mobility sequences.
\subsubsection{Similarity measures}
\label{sec:sim_measure}
Many similarity measures have been proposed or adapted for comparing sequences of symbols, specifically spatial trajectories (e.g., Euclidean, LCSS \cite{Vlachos02}, DTW \cite{Keogt05}, EDR \cite{Chen05} and Fréchet \cite{Alt95}).
Most of these measures have been adapted for semantic human mobility to compare sequences of routine activities or location histories \cite{Li08,Jiang12,Lv13}.
Table \ref{tab:description} summarizes the measures reviewed, which can be classified into two broad categories: measures based on counts of different attributes between sequences (Att) and measures based on edit distances, which measure the cost of the operations required to transform one sequence into the other (Edit).
Because trajectories are complex objects with multidimensional aspects, the construction and analysis of similarity measures remain a challenging task, as underlined in \cite{Ferrero16}, and few works have successfully handled multiple dimensions.
In \cite{Furtado16} and \cite{Lehmann19}, two similarity measures for multidimensional sequences called MSM and SMSM, respectively, were defined based on the aggregation of matching functions controlled by weighting distances defined for each dimension of a sequence. These multidimensional similarity measures can embed the richness inherent to mobility data, but require many parameters and thresholds for initialization. This complexity makes it difficult to visualize and interpret the resulting similarity scores.
Most previous proposals focus on unidimensional semantic sequences. One method for comparing semantic sequences is to represent them as vectors. Such a representation is particularly interesting because it allows the use of a whole family of distance measures that are well-defined metrics, such as the inner product and Euclidean distance.
In \cite{Elzinga15}, Elzinga and Studer represented sequences as vectors in the inner product space and proposed a context metric called SVRspell that focuses on duration and similarity. However, their representation has an exponential space complexity of $\mathcal{O}(|\Sigma|^n)$ where $|\Sigma|$ is the size of the alphabet of symbols and $n$ is the size of the sequence.
Jiang et al. also represented daily activity sequences as vectors. They defined the space of an individual's daily activity sequence by dividing the 24 hours of a day into five-minute intervals and then used the activity in the first minute of every time interval to represent the individual's activity during that five-minute timeframe. Principal component analysis was then used to extract appropriate eigen-activities and calculate the Euclidean distances between them \cite{Jiang12}.
It should be noted that, in this previous work, time slots are the kernel level for comparison, in the sense that two individuals with the same activities practiced at different times will be evaluated as strongly dissimilar. We call this type of approach a \textit{time-structural approach}. It is effective for grouping individuals based on how they allocate time to different activities, but a major problem with this approach is that two trajectories composed of the same activities practiced at different times will have no similarity, resulting in extreme sensitivity to time.
To overcome this time issue, other studies have reused measures from optimal matching (OM) methods \cite{Studer16} such as the edit distance family (e.g., Levenshtein), LCSS, and DTW. These methods measure the dissimilarity between two sequences $S_1$ and $S_2$ as the minimum total cost of transforming one sequence (e.g., $S_1$) into the other (e.g., $S_2$) using indels (insertions, deletions) or substitutions of symbols. Each of these operations has a cost that can vary with the states involved. In this manner, depending on the choice of costs applied, groups of individuals can be created differently.
{}%
\begin{table*}
\caption{Description of main similarity measures for semantic sequences}
\label{tab:description}
\begin{tabular}{m{3.5cm}ccm{10.05cm}}
\hline
\multirow{2}{*}{{Measure}} & \multicolumn{2}{c}{{Type}} & \multirow{2}{*}{{Description}}\tabularnewline
\cline{2-3}
& {Att.} & {Edit.} & \tabularnewline
\hline
{MSM \cite{Furtado16}, SMSM \cite{Lehmann19}} & {$\times$} & & {Aggregation of matching functions of each dimension.}\tabularnewline
{SVRspell \cite{Elzinga15}} & {$\times$} & & {Based on number of matching subsequences and weighted by
the length of subsequences involved.}\tabularnewline
{Jiang et al. \cite{Jiang12}} & {$\times$} & & {Euclidean distance between appropriate eigen activities.}\tabularnewline
{Hamming} & {$\times$} & {} & {Sum of mismatches with similarity between elements.}\tabularnewline
{DHD \cite{Lesnard10}} & {$\times$}& {} & {Sum of mismatches with positionwise state-dependent
weights.}\tabularnewline
{Levenshtein distance \cite{Levenshtein66,Wagner74}} & \multirow{1}{*}{} & \multirow{1}{*}{{$\times$}} & \multirow{1}{*}{{Minimum sum of edit costs to turn $S_{1}$ into $S_{2}$.}}\tabularnewline
{CED \cite{Moreau19b}} & & {$\times$} & {OM with costs weighted by edit position and symbols
nearby.}\tabularnewline
{Trate (TDA) \cite{Rohwer05}} & & {$\times$} & {OM with costs based on transition rates.}\tabularnewline
{OMSlen \cite{Halpin10}} & & {$\times$} & {OM with costs weighted by symbol length.}\tabularnewline
\hline
\end{tabular}{\scriptsize \par}
\end{table*}
\subsubsection{OM methods, setting cost and mobility behavior}
\label{sec:editdist}
\begin{table*}
\caption{Properties of main similarity measures for semantic sequences}
\label{tab:properties}
{}
\begin{tabular}{m{3.5cm}ccccccc}
\hline
\multirow{2}{*}{{Measure}} & \multicolumn{7}{c}{{Properties}}\tabularnewline
\cline{2-8}
& {Metric} & {T. warp} & {Ctxt} & {Permut.} & {Rep.} & {Sim.} & {Multi. dim}\tabularnewline
\hline
{MSM \cite{Furtado16}, SMSM \cite{Lehmann19}} & & {$\times$} & & & & & {$\times$}\tabularnewline
{SVRspell \cite{Elzinga15}} & {$\times$} & {$\times$} & {$\times$} & & {$\times$} & {$\times$} & \tabularnewline
{Jiang et al. \cite{Jiang12}} & {$\times$} & & & & & & \tabularnewline
{Hamming} & {$\times^{\dagger}$} & & & & & $\times^\ddagger$ & \tabularnewline
{DHD \cite{Lesnard10}} & & & {$\times$} & & & & \tabularnewline
{Levenshtein distance \cite{Levenshtein66,Wagner74}} & \multirow{1}{*}{{$\times^{\dagger}$}} & \multirow{1}{*}{{$\times$}} & \multirow{1}{*}{} & \multirow{1}{*}{} & \multirow{1}{*}{} & $\times^\ddagger$ & \multirow{1}{*}{}\tabularnewline
{CED \cite{Moreau19b}} & {$\times^{\dagger}$} & {$\times$} & {$\times$} & {$\times$} & {$\times$} & {$\times$} & \tabularnewline
{Trate (TDA) \cite{Rohwer05}} & & {$\times$} & $\times$ & & & {$\times$} & \tabularnewline
{OMSlen \cite{Halpin10}} & {$\times$} & {$\times$} & {} & & {$\times$} & & \tabularnewline
\hline \tabularnewline
\multicolumn{8}{l}{{$\dagger$ Depends if costs fulfil the triangle inequality and/or parameters.}} \\
\multicolumn{8}{l}{{$\ddagger$ By default discrete metric $\rho(x,y)=\begin{cases}
0 & x=y\\
1 & \text{else}
\end{cases}$}}
\end{tabular}{\scriptsize \par}
\end{table*}
\begin{table*}
\caption{Complexity and parameters of main similarity measures for semantic sequences}
\label{tab:complexity}
{}%
\begin{tabular}{m{3.5cm}cccm{4.55cm}}
\hline
\multirow{2}{*}{{Measure}} & \multirow{2}{*}{{Complexity}} & \multicolumn{3}{c}{{Parameters}}\tabularnewline
\cline{3-5}
& & {Subs} & {Indels} & {Other}\tabularnewline
\hline
{MSM \cite{Furtado16}, SMSM \cite{Lehmann19}} & {$\mathcal{O}(n\times p)$} & & & {Set of distances $\mathcal{D}$; weight vector $w$; threshold vector $maxDist$}\tabularnewline
{SVRspell \cite{Elzinga15}} & {$\mathcal{O}\left(|\Sigma|^{\max(n,p)}\right)$} & & & {Subsequence length weight $a$; symbol duration weight
$b$}\tabularnewline
{Jiang et al. \cite{Jiang12}} & {$\mathcal{O}(n\times p)$} & & & {Number of activities}\tabularnewline
{Hamming} & {$\mathcal{O}(n)$} & {Single, User$^\natural$} & & \tabularnewline
{DHD \cite{Lesnard10}} & {$\mathcal{O}(n)$} & {Data} & & \tabularnewline
{Levenshtein distance \cite{Levenshtein66,Wagner74}} & \multirow{1}{*}{{$\mathcal{O}(n\times p)$}} & \multirow{1}{*}{{Single, User$^\natural$}} & \multirow{1}{*}{{Single}} & \multirow{1}{*}{}\tabularnewline
{CED \cite{Moreau19b}} & {$\mathcal{O}(n\times p\times\max(n,p))$} & {Ontology} & {Auto} & {Ontology; Context function $f_{k}$; Context weight
$\alpha$}\tabularnewline
{Trate (TDA) \cite{Rohwer05}} & {$\mathcal{O}(n\times p)$} & {Data} & {Single} & {Transition lag $q$}\tabularnewline
{OMSlen \cite{Halpin10}} & {$\mathcal{O}(n\times p)$} & {User} & {Multiple} & {Symbol length weight $h$}\tabularnewline
\hline
\multicolumn{5}{l}{{$\natural$ If user specifies a similarity measure.}} \\
\end{tabular}{\scriptsize \par}
\end{table*}
A major challenge in OM-based methods is setting operation costs. This is a particularly difficult problem in social science \cite{Abbott00,Hollister09}. There are essentially three main strategies for choosing operation costs: (i) theory-based cost \cite{Studer16}, which determines costs based on theoretical grounds and a priori knowledge; (ii) feature-based cost, which specifies a list of state attributes on which we wish to evaluate the closeness between states using a similarity
measure such as the Gower index \cite{Gower71} or Euclidean distance; and (iii) data-driven cost \cite{Rohwer05}, which assigns a cost that is inversely proportional to the transition rates observed in the dataset. A well-known example of the latter strategy is the Dynamic Hamming Distance (DHD) \cite{Lesnard10}, where the substitution costs at position $t$ are obtained from the transition rates cross-sectionally observed between $t - 1$ and $t$ and between $t$ and $t + 1$. This method is very effective at identifying abnormal sequences and outliers. However, by construction, DHD has strong time sensitivity and the number of transition rates that must be estimated is very high, potentially leading to overfitting. Finally, there is an additional type of strategy called (iv) ontology-based cost (utilized in \cite{Moreau19b}) that is derived from (i) and (ii). This approach infers costs based on taxonomies (or ontologies) and similarity measures in knowledge graphs \cite{Zhu17}.
An additional difficulty in setting operation costs lies in the context of sequences or, in other words, considering the symbols in the sequences. As pointed out in \cite{Gonzalez08,Song10b,Alessandretti18}, human mobility has a high degree of regularity. Several approaches have been developed to take advantage of this regularity. In \cite{Halpin10}, the OMSlen method was proposed to reduce the cost of operations for repeating symbols, which is particularly useful for mobility sequence mining. Moreau et al. \cite{Moreau19b} proposed reducing the costs of edit operations for symbols that are similar and/or already present in a sequence and close to the edited position. One consequence of this method is that repetitions of nearby similar symbols and permutations have lower costs, making it a \textit{compositional comparison approach}. This method can bring together sequences with similar contents by allowing for some temporal distortions and repetitions.
Based on the information in \cite{Studer16}, a summary of measure properties is presented in Table \ref{tab:properties}. The column ``Metric" indicates whether the measure is a mathematical metric. ``T. warp" denotes measures allowing time warping when comparing sequences. ``Ctxt" denotes measures that consider the context of a sequence to define costs. ``Permut." indicates that permutations are allowed with a lower cost and ``Rep." indicates that repetitions are cheaper. Finally, ``Sim." denotes measures that consider a similarity function between symbols and ``Multi. dim" denotes measures that handle multidimensional sequences.
Table \ref{tab:complexity} presents the computational complexity and some details regarding the parameters of each method. In the ``Complexity" column, $n$ and $p$ denote the lengths of the compared sequences. It should be noted that for Hamming-family measures, the sequences must have the same length ($n$). The ``Parameters" columns contain the necessary tuning parameters and cost strategies for OM measures. In the ``Subs" column, an entry of ``User" indicates that the costs are set by the user through a theory- or feature-based strategy, ``Data" denotes a data-driven strategy, and ``Ontology" refers to an ontology-based strategy. Finally, the ``Indels" column indicates whether there is a single state-independent indel cost, denoted as ``Single"; state-dependent user-defined indel costs, denoted as ``Multiple"; or indel costs that are automatically set by the measure itself, denoted as ``Auto".
\subsubsection{Clustering methods}
\label{sec:clustering}
The extraction of behaviors from a dataset is a process that is typically performed using unsupervised machine learning methods. Clustering methods are based on similarity measures such as those described in the previous subsections and are widely used for the discovery of human behaviors, particularly in sequences of mobility data \cite{Jiang12,Wesolowski12,Pappalardo18}.
However, the topologies created by similarity measures for semantic sequences are difficult to handle. In particular, for OM methods, the axioms of metric spaces usually do not hold.
A pairwise comparison of semantic sequences results in a distance matrix that is used as an input for a clustering process. To the best of our knowledge, the clustering algorithms that are able to handle arbitrary distances (not necessarily metrics) are PAM (or k-medoid) \cite{Park09}, hierarchical clustering \cite{Kaufman09}, density clustering (DBSCAN \cite{Ester96}, OPTICS \cite{Ankerst99}) and spectral clustering \cite{Ng02}, each of which proposes different hypotheses regarding cluster topology.
According to the similarity measure and representation of sequences, dimensionality reduction methods can be applied to extract primary dimensions \cite{Jiang12}.
However, commonly used methods such as PCA can only be used for Euclidean spaces in practice. Alternatively, methods such as UMAP \cite{Mcinnes18} facilitate the reduction of a complex topology defined by an arbitrary metric into a low-dimensional Euclidean space, which facilitates the visualization of clustering results and the use of other clustering methods, such as those requiring a Euclidean space, including k-means. Additionally, UMAP offers superior preservation of the data's global structure, fewer hyperparameters to tune, and better speed than previous techniques such as t-SNE \cite{Maaten08}.
Therefore, the advantage of these clustering techniques is that they can be used with arbitrary distances, meaning they can be paired with any of the measures discussed in Section \ref{sec:sim_measure} to implement a clustering module.
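As an example, a precomputed distance matrix (such as one produced by a semantic similarity measure) can be embedded in two dimensions with the umap-learn package; parameter values here are purely illustrative:
\begin{verbatim}
import umap

def embed_2d(D):
    # D: precomputed pairwise distance matrix
    reducer = umap.UMAP(n_components=2, metric='precomputed', random_state=0)
    return reducer.fit_transform(D)
\end{verbatim}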
\subsection{Analysis tools supporting mobility mining}
\begin{table*}
\caption{Indicators for explainability and analysis of semantic mobility sequences and behaviors in a dataset}
\label{tab:indicator}
\begin{tabular}{lcm{10.5cm}}
\hline
{Techniques} & {Refs} & {Description}\tabularnewline
\hline
\hline
\multicolumn{3}{c}{\emph{\textbf{Frequency distribution}}}\tabularnewline
\hline
{Length distribution} & & {Frequency distribution of sequence lengths in the
dataset.}\tabularnewline
{State distribution} & & {Frequency distribution of each symbol $x$ over the
sequences of the whole dataset.}\tabularnewline
\hline
\multicolumn{3}{c}{\emph{\textbf{Transition}}}\tabularnewline
\hline
{Origin-Destination matrix} & & {Number of transitions from a state (i.e. symbol) $x_{i}$
to $x_{j}$. }\tabularnewline
{Daily pattern} & \cite{Schneider13} & {Network representation of sequence. Each edge $(x_{i},x_{j})$
represent a transition from $x_{i}$ to $x_{j}$. }\tabularnewline
\hline
\multicolumn{3}{c}{\emph{\textbf{Disorder}}}\tabularnewline
\hline
{Entropy} & \cite{Song10a,Kontoyiannis98} & {Level of ``information",
``surprise", or ``uncertainty"
inherent in a variable's possible outcomes. For sequences, the entropy
also considers temporal patterns.}\tabularnewline
{Predictability} & \cite{Song10a} & {Probability that an appropriate predictive algorithm
can correctly predict the user\textquoteright s future whereabouts.}\tabularnewline
{Distinct symbols} & \cite{Teixeira19} & {Number of distinct symbols in the sequence.}\tabularnewline
\hline
\multicolumn{3}{c}{\emph{\textbf{Statistical dependence measures}}}\tabularnewline
\hline
{Association rules} & \cite{Agrawal93} & {Relation, based on measures of interestingness, between
two or more variables in a dataset.}\tabularnewline
{Pearson residuals} & \cite{Haberman73} & {Measure of the departure of the independence between
two variables.}\tabularnewline
\hline
\multicolumn{3}{c}{\emph{\textbf{Centrality}}}\tabularnewline
\hline
{Mode} & & {Element with the highest frequency in the dataset.}\tabularnewline
{Medoid} & & {Element which minimizes the distance to all elements
in the dataset.}\tabularnewline
\hline
\multicolumn{3}{c}{\emph{\textbf{Scattering and outliers}}}\tabularnewline
\hline
{Diameter and Radius} & & {Geometrical interpretation of the distances between elements in the dataset.
}\tabularnewline
{Distance distribution} & & {Distribution of the distances between elements in the dataset
or a subset of it (e.g., a cluster).}\tabularnewline
{Silhouette} & \cite{Rousseeuw87} & {Quality score of clustering. Measures how similar
an object is to its own cluster (cohesion) compared to other clusters
(separation).}\tabularnewline
{UMAP} & \cite{Mcinnes18} & {Dimensional reduction. Visualization of complex elements in 2D Euclidean spaces with a preservation of local topology.}\tabularnewline
\hline
\end{tabular}
\end{table*}
Data mining and statistical learning techniques are powerful analysis tools for understanding urban mobility \cite{Pan13}. Several works have proposed frameworks and tools for supporting sequence analysis and mobility mining. The most relevant tools are briefly described in this subsection.
One of the first and well-known frameworks for mobility knowledge discovery is M-Atlas \cite{Gianotti11}, which provides complete functionalities for mobility querying and data mining centered around the concept of trajectories.
Recently, \cite{Pappalardo19} proposed a statistical Python library for mobility analysis called \textit{Scikit-Mobility}. Scikit-Mobility enables users to load, clean, process, and represent mobility data and to analyze them using common mobility measures. However, to the best of our knowledge, there is no framework oriented toward semantic mobility mining. Some toolboxes provide partial statistical support for mobility analysis (mainly oriented toward spatial mining); see \cite{Pebesma18} for a review of R libraries and \cite{Pappalardo19} for Python.
Based on TraMineR \cite{Traminer11} functionalities, the geovisualization environment
eSTIMe \cite{Meunin19} allows the representation of semantic daily mobility information with spatio-temporal content.
Despite the availability of such decision support tools and reporting systems, abstracting and analyzing the main characteristics of a group of semantic sequences and explaining why they are clustered together remains a challenging open problem. In particular, while the interpretability and explainability of mining methods are hot research topics, most methods are limited to a specific problem or domain. To the best of our knowledge, there have only been a few studies on providing a methodology for understanding mobility mechanisms in clusters of semantic sequences. The most relevant work is that described in \cite{Jiang12}, where a K-means clustering method based on a time-structural similarity measure was applied to daily mobility sequences. The authors defined eight clusters corresponding to predefined socio-demographic variables. Cluster analysis was mainly performed based on sequence index plots, state distributions, and the proportion of socio-demographic characteristics in each cluster. The Silhouette index \cite{Rousseeuw87} was used to control clustering validity. Although this analysis provides a starting point for understanding the typical behaviors within a cluster, it is incomplete and fails to qualify how consistent the elements in a cluster are with the cluster description (e.g., the most extreme elements in a cluster and the entropy of sequences in a cluster), as well as the topologies formed by mobility sequences (e.g., daily patterns). These aspects of explainability must be retained and enriched to provide a set of indicators allowing us to understand the globality and diversity of all the elements in a cluster, as well as what makes a cluster coherent.
Techniques that attempt to explain complex machine learning methods are becoming increasingly popular. For example, the LIME technique \cite{Ribeiro16} attempts to explain the predictions and results of black-box machine learning techniques in an interpretable and faithful manner by training an interpretable model on local results. Similarly, Guidotti et al. \cite{Guidotti19} proposed techniques and methods such as association and decision rules and prototype selection elements (e.g., medoids and diameters) to explain black-box systems and make their results more interpretable. In line with these techniques, we believe that the elaboration of indicators is a crucial point for understanding machine learning models.
Table \ref{tab:indicator} presents a summary of the different indicators used in state-of-the-art methods that can be used to explain semantic mobility sequences. The indicators are structured into categories corresponding to different perspectives of exploring and explaining data. We let $X=\tuple{x_1,x_2,...,x_n}$ denote a sequence of symbols constructed from an alphabet $\Sigma$ and let $f$ denote the frequency function.
In this paper, we address the problem of knowledge extraction from human activity sequences to develop models of mobility behaviors. To this end, we reuse many of the techniques introduced in this section and propose several new methods to mine and qualify semantic sequences.
\section{Semantic clustering behavior}
\label{sec:sem_clust}
This section describes the application of steps (c) and (d) of the pipeline presented in Fig. \ref{fig:overview} to the EMD Rennes 2018 dataset. The first subsection describes the clustering process using the CED similarity measure and the HAC clustering algorithm Agnes \cite{Kaufman09} with the R software. We cluster the individual semantic mobility sequences and analyse variations in daily activity types. In the second subsection, we extract typical behaviors from clusters by summarizing their main characteristics and distinct patterns in terms of the indicators discussed in Sections \ref{sec:indicator} and \ref{sec:clust_anal_method}. A discussion of the obtained results and alternative methods concludes this section.
\begin{figure*}[t]
\includegraphics[width=\textwidth]{Figures/dendrogram.pdf}
\caption{Dendrogram of the HAC clustering algorithm of the EMD 2018 dataset. Eight clusters are formed by the cut of the dendrogram.}
\label{fig:dendro}
\end{figure*}
\subsection{Clustering process}
As discussed in Section \ref{sec:clust_anal_method}, the clustering process is performed based on the CED measure and a hierarchical clustering algorithm. Here, we discuss the settings for these two methods and the validity of the clusters obtained in terms of quality scores.
\subsubsection{Similarity measure and HAC initialisation}
As described in Section \ref{sec:ced}, the CED similarity measure requires the setting of several parameters. Empirically, we applied the following settings for CED during the clustering process:
\begin{itemize}
\item The $\alpha$ coefficient is set to zero to give full priority to context.
\item The contextual vector was encoded using the Gaussian kernel below (a code sketch is given after this list).
\[f_k(i)=\exp \left( -\frac{1}{2} \left( \frac{i-k}{\sigma} \right)^2 \right) \]
where $\sigma$ is a coefficient that controls the flatness of the curve around the activity at position $k$. The larger $\sigma$ is, the more context surrounding index $k$ is considered. In our experiments, we used the value $\sigma = \frac{m}{2}$ where $m$ is the median sequence size ($m=9$ according to Fig. \ref{fig:poisson_length}). Therefore, $v_i(e) = f_k(i)$.
\end{itemize}
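The sketch below illustrates this setting (with $\sigma = m/2 = 4.5$ in our case):
\begin{verbatim}
import numpy as np

def contextual_vector(k, length, sigma=4.5):
    # Gaussian weights f_k(i) around the (1-based) edit position k
    i = np.arange(1, length + 1)
    return np.exp(-0.5 * ((i - k) / sigma) ** 2)
\end{verbatim}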
With these settings, the CED is a semi-metric, meaning it satisfies the requirements of symmetry and identity of indiscernibles, but the triangle inequality does not hold.
Regarding the HAC algorithm, because we do not know the shapes of the clusters, but we want to preserve robustness to outliers and immunity to chaining effects, we propose using the Ward criterion, which minimizes the total within-cluster variance, leading to the generation of compact convex clusters that are less affected by noise.
\subsubsection{Clustering validity}
\label{sec:cluster}
One problem in an unsupervised clustering process is to determine the optimal number of clusters that best fits the inherent partitioning of the dataset. In other words, we must evaluate the clustering results for different cluster numbers, which is the main problem in determining cluster validity \cite{Halkidi01}. There are three main approaches to validating clustering results: (1) external criteria, (2) internal criteria, and (3) relative criteria. Various indices are available for each criterion.
The structure of the HAC and the formed clusters are presented in Fig. \ref{fig:dendro}. In our study, because we did not have a predetermined cluster structure, we used internal validation indices, whose fundamental goal is to search for clusters whose members are close to each other and far from the members of other clusters. Specifically, we used two indices to select the optimal number of clusters. The first is the \textit{inertia gap}, which represents the total distance between two consecutive agglomerations; the wider the gap, the greater the change in cluster structure. The second is the \textit{Silhouette index}, which reflects the compactness and separation of clusters. The Silhouette index is defined in the range $[-1,1]$, and a higher value indicates a better clustering result.
\begin{table*}
\caption{Cardinality, Silhouette index, diameter, and radius of each cluster}
\label{tab:clusters}
{}%
\begin{tabular}{m{.9cm}cm{1cm}ccccc}
\hline
\centering{\footnotesize{Cluster $C_{i}$}} & {$|C_{i}|$} & \% (in total) & {$Sil(C_{i})$} & {$diam(C_{i})$} & {$diam(C_{i}^{95\%})$} & {$rad(C_{i})$} &
{$rad(C_{i}^{95\%})$}\tabularnewline
\hline
\centering{\footnotesize{1}} & {738} & \centering{{\footnotesize{7.4}}} & {0.41} & {8.85} & {\footnotesize{5}} & 5.04 & 3.44
\tabularnewline
\centering{\footnotesize{2}} & {1673} & \centering{\footnotesize{16.7}} & {0.37} & {20.03} & {\footnotesize{8}} & 12.66 & 4.68
\tabularnewline
\centering{\footnotesize{3}} & {423} &
\centering{\footnotesize{4.2}} & {0.01} & {20.81} & {\footnotesize{7.74}} & 7.95 & 5.53
\tabularnewline
\centering{\footnotesize{4}} & {719} &
\centering{\footnotesize{7.2}} & {0.12} & {26.64} & {\footnotesize{7.21}} & 12.51& 5.7
\tabularnewline
\centering{\footnotesize{5}} & {747} &
\centering{\footnotesize{7.5}} &{0.18} & {23.42} & {\footnotesize{6.9}} & 9.86& 5.35
\tabularnewline
\centering{\footnotesize{6}} & {981} &
\centering{\footnotesize{9.8}} &{0.1} & {24.34} & {\footnotesize{8.15}} & 14.59& 6.11
\tabularnewline
\centering{\footnotesize{7}} & {3199} &
\centering{\footnotesize{32}} & {0.29} & {20} & {\footnotesize{5.57}} & 11.09& 4.09
\tabularnewline
\centering{\footnotesize{8}} & {1525} &\centering{\footnotesize{15.2}} & {0.07} & {28.5} & {\footnotesize{7}} & 10.14 & 4.65
\tabularnewline
\hline
\end{tabular}{\footnotesize \par}
\end{table*}
\begin{figure*}[t]
\includegraphics[width=\textwidth]{Figures/silhouette_inertia.pdf}
\caption{Clustering validity indices (a) Average Silhouette (b) Inertia gap}
\label{fig:silhouette}
\end{figure*}
Fig. \ref{fig:silhouette} presents graphs of (a) the average Silhouette index and (b) the inertia gap. The relatively low Silhouette values can be attributed to the particular topology associated with CED combined with the Ward criterion and the presence of outliers.\footnote{Note that the Silhouette is particularly suitable for hyper-spherical clusters like those constructed by K-means algorithms.}
Because we want more than five clusters to ensure a sufficiently fine analysis, plot (a) suggests the choice of eight or nine clusters. Values of 6, 7, 10, 11, 13, or 14 could also be used. Plot (b) strongly encourages the choice of six clusters, but 8, 10, or 13 clusters could also be used. According to these results, we \textbf{set the number of clusters to eight} for further analysis. Nevertheless, the choice of six clusters for a narrower analysis, or 10 or 13 clusters for a wider analysis, would also be feasible.
Additional information regarding the clusters, such as proportions, Silhouette indices, diameters, and radii, is given in Table \ref{tab:clusters}. $C_i^{95\%}$ indicates that we filtered out the 5\% most extreme values from the distribution. The difference between $diam(C_i)$ and the diameter of 95\% of the elements in $C_i$, denoted as $diam(C_i^{95\%})$, indicates that a proportion of outliers lies far away from the other elements in cluster $C_i$. The similar radii values of $C_i^{95\%}$ support this analysis.
\subsection{Behavior extraction and cluster explanation}
\label{sec:behavior_extract}
\begin{figure*}[t]
\includegraphics[width=.9\textwidth]{Figures/size_boxplot_clust_total.pdf}
\caption{Box plots of sequence lengths in each cluster}
\label{fig:size_clust}
\end{figure*}
In this section, we reuse the indicators and statistics presented in Table \ref{tab:chosen_indic} and Section \ref{sec:case_study}, but enhanced with significance tests, to infer typical behaviors from clusters discovered in Section \ref{sec:cluster}. This can help us to check the validity and interpretability of their patterns.
\begin{figure*}
\includegraphics[width=.8\textwidth]{Figures/cluster_stackplot.pdf}
\caption{Stacked plot of the proportion of aggregated activities ($aggAct$) in the whole dataset (left) and in each cluster (right)}
\label{fig:stack}
\end{figure*}
First, we analyse the lengths of the sequences inside the clusters. Fig. \ref{fig:size_clust} presents the box plots for the sequence lengths in each cluster. Compared to the distribution of lengths and the box plot for the entire dataset (leftmost plot), one can see that clusters $C_1$, $C_2$ and $C_7$ contain relatively short sequences with median lengths of six and seven activities, corresponding to intervals $I_1$ and $I_2$ in the Poisson distribution (see Fig. \ref{fig:poisson_length}). In contrast, clusters $C_5$, $C_6$ and $C_8$ contain longer mobility sequences but have large length dispersions. Analogously, clusters $C_3$ and $C_4$ have middling sequence lengths corresponding to intervals $I_2$ and $I_3$.
The overlapping of box plots (i.e., the existence of several clusters containing sequences of the same length) and the distribution of outliers in the clusters indicate that sequence length is not a major criterion for grouping sequences. We claim that this is an advantage of CED with respect to other OM similarity measures.
Regarding the distributions of activities, Fig. \ref{fig:stack} portrays the proportions of aggregated activities\footnote{Thanks to the ontology, we can select the level of granularity of our analysis. Aggregated activities have been retained in order to avoid cognitive overload in the graphs.}.
An interesting effect that can be observed in this graph is the strong discrimination and stratification effect of clusters according to move activities. Motorized transport activities are very common in clusters $C_7$ and $C_8$, but other move activities are not. Clusters $C_2$ to $C_6$ stand out based on their large proportion of smooth move activities whereas $C_1$ and $C_3$ contain many public transportation move activities.
Several stop activities are also distinctive features of certain clusters. For example, school activities are particularly popular in clusters $C_1$ and $C_4$, work activities in clusters $C_5$, $C_7$ and $C_8$, and accompany activities in cluster $C_8$. Similarly, some clusters tend to contain very few instances of certain activities. For example, cluster $C_6$ contains very few work or study activities.
\begin{figure*}
\includegraphics[width=.65\textwidth]{Figures/mosaic2.pdf}
\caption{Mosaic plot and Pearson residuals between aggregated activities and clusters. Cramér's $V = 0.3$}
\label{fig:mosaic}
\end{figure*}
To analyze these over- and under-representations, we generated a mosaic plot combined with Pearson residuals to quantify the departure of each cell from independence. Fig. \ref{fig:mosaic} presents the mosaic plot with residuals between clusters and aggregated activities.
We now recall several rules for the interpretation of this type of plot. Each line represents an aggregated activity $aggAct_i$ and each column represents a cluster $C_j$. We let $c_{ij}$ denote the cell in line $i$ and column $j$:
\begin{itemize}
\item The width of $c_{ij}$ is proportional to the size of $C_j$.
\item The height of $c_{ij}$ is proportional to the frequency of $aggAct_i$ conditional on being in $C_j$.
\item The area of $c_{ij}$ is proportional to the joint frequency of $aggAct_i$ and $C_j$.
\end{itemize}
The color of a cell $c_{ij}$ indicates the value of the corresponding Pearson residual $r_{ij}$. A blue-shaded cell indicates an over-representation of the aggregated activity $aggAct_i$ in $C_j$. A red-shaded cell indicates an under-representation.
Based on this graphical representation, it is easy to visualize the proportion of a given activity in each cluster. For example, one can immediately see that approximately 40\% of cluster $C_1$ consists of public transportation activities. Additionally, based on the residuals, we can immediately and easily identify the characteristic activities in a cluster, as well as those that are under-represented. Therefore, Fig. \ref{fig:mosaic} complements and validates our previous analysis of the stacked plot (Fig. \ref{fig:stack}) with quantitative Pearson residuals. The Cramér's $V$ coefficient provides information regarding the association between clusters and aggregated activities. The value of $V$ (0.3) highlights the strength of the association between our clusters and the activities performed in mobility sequences. The low number of white cells in the mosaic plot supports our choice of clustering process.
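For reproducibility, a minimal R sketch of this residual analysis is given below. It assumes two parallel vectors, \texttt{activities} and \texttt{clusters}, giving the aggregated activity label and the cluster assignment of each activity occurrence; these names, and the use of the \texttt{vcd} package, are illustrative choices rather than the exact implementation used here.
\begin{verbatim}
# Minimal sketch (R): Pearson residuals, mosaic plot and Cramer's V
# for a cluster-by-activity contingency table. `activities` and
# `clusters` are assumed parallel vectors (one entry per activity
# occurrence in the mobility sequences).
library(vcd)

tab <- table(activities, clusters)  # rows = aggregated activities, columns = clusters

chi <- chisq.test(tab)              # chi-squared test of independence
round(chi$residuals, 2)             # Pearson residuals: (observed - expected) / sqrt(expected)

mosaic(tab, shade = TRUE, legend = TRUE)  # mosaic plot shaded by residuals

assocstats(tab)$cramer              # Cramer's V for the overall association
\end{verbatim}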
Regarding transitions between activities, Fig. \ref{fig:flow_clust} presents a chord diagram for each cluster. Following the analysis presented in Fig. \ref{fig:delta}, which indicated that the transport mode remains globally stable within sequences, we only represent stop activity transitions. Flows are represented between two leaf activities in the ontology to explore the content of clusters in detail. For example, one can see that cluster $C_1$ contains study activities ranging from junior high school (23) to university (25), whereas cluster $C_4$ mainly contains school children (22).
An explanation for this split is that cluster $C_1$ mainly concentrates public transportation activities (see Figs. \ref{fig:stack} and \ref{fig:mosaic}), which are generally associated with teenagers, whereas older students tend to be autonomous. In contrast, younger children are mainly accompanied to school by their parents, by car or on foot, which can be observed in cluster $C_4$. This analysis is supported by Table \ref{tab:centrality}, where the centrality indicators highlight typical sequences. For example, the most frequent sequence in $C_1$ (the mode) involves traveling to and from school via public transportation. Cluster $C_4$ mainly features car and foot travel, but also includes some leisure activities (51, 53).
Regarding the worker clusters (i.e., $C_5$, $C_7$ and $C_8$), we observed different behaviors for each one. In cluster $C_5$, the typical behavior appears to be that of a worker driving to work (11, 13) and then walking to a restaurant for lunch (53) before returning to work and then driving home. This scenario is supported by the medoid mobility sequences and Fig. \ref{fig:motif_clust}, which represents the daily patterns in each cluster. In $C_5$, one can see a trend of oscillation between two activities with a central node. There are also some activities that can be added to the semantic sequence, such as shopping (32, 33) after work, going for a walk or window-shopping (52), or accompanying activities (61, 64).
Cluster $C_7$ represents individuals who travel to an activity, typically work (11), by car, then return home by car again. This interpretation is consistent with the short semantic mobility sequence lengths in the cluster and the large majority of daily patterns with a single oscillation. This mobility behavior is the most common in the dataset and can be interpreted as the daily routine of going to work by car, occasionally shopping in a mall (32), and then going back home. Cluster $C_8$ is focused on workers who accompany and pick up (61, 64) someone before and after work (11, 13), with possible mobility around the workplace (13). As in cluster $C_7$, moves in $C_8$ are almost exclusively performed by car. Furthermore, in this cluster, mobility sequences are relatively long and form complex patterns with, generally, four or more stop activities.
Finally, some clusters can be distinguished by the absence of some common elements. For example, the people in cluster $C_6$ do not work or study, and they tend to spend their time mainly on shopping or leisure activities. Additionally, the box plot in Fig. \ref{fig:size_clust} reveals that the mobility sequences in $C_6$ are long. Lastly, clusters $C_2$ and $C_3$ are especially characterized by their move activities. Individuals in $C_2$ almost exclusively move on foot to perform a single activity. Compared to the mobility sequences in $C_6$, these sequences are relatively short and based on a single oscillation between home and another location. The typical behavior in $C_3$ seems to be that of people who use both public transportation and walking for their mobility.
\begin{table*}[t]
\caption{Centrality indicators in each cluster}
\label{tab:centrality}
\begin{tabular}{ccc}
\hline
{Cluster $C_{i}$} & {Medoid} & {Mode}
\tabularnewline
\hline
\multirow{2}{*}{{1}} & $\langle$\emoji{Figures/Emoji/home}{12}, \emoji{Figures/Emoji/public}{12}, \emoji{Figures/Emoji/study}{12}, \emoji{Figures/Emoji/motor}{12}, \emoji{Figures/Emoji/home}{12}$\rangle$ & $\langle$\emoji{Figures/Emoji/home}{12}, \emoji{Figures/Emoji/public}{12}, \emoji{Figures/Emoji/study}{12}, \emoji{Figures/Emoji/public}{12}, \emoji{Figures/Emoji/home}{12}$\rangle$ \tabularnewline
& {$\tuple{1,131,23,122,1}$} & {$\tuple{1,141,23,141,1}$}
\tabularnewline
\multirow{2}{*}{{2}} & $\langle$\emoji{Figures/Emoji/home}{12}, \emoji{Figures/Emoji/smooth}{9}, \emoji{Figures/Emoji/shop}{12}, \emoji{Figures/Emoji/smooth}{9}, \emoji{Figures/Emoji/home}{12}$\rangle$ &
$\langle$\emoji{Figures/Emoji/home}{12}, \emoji{Figures/Emoji/smooth}{9}, \emoji{Figures/Emoji/shop}{12}, \emoji{Figures/Emoji/smooth}{9}, \emoji{Figures/Emoji/home}{12}$\rangle$
\tabularnewline
& {$\tuple{1,100,33,100,1}$} & {$\tuple{1,100,33,100,1}$}\tabularnewline
\multirow{2}{*}{{3}} & $\langle$\emoji{Figures/Emoji/home}{12}, \emoji{Figures/Emoji/public}{12},
\emoji{Figures/Emoji/public}{12},
\emoji{Figures/Emoji/work}{12}, \emoji{Figures/Emoji/smooth}{9},
\emoji{Figures/Emoji/leisure}{12},
\emoji{Figures/Emoji/public}{12},
\emoji{Figures/Emoji/home}{12}$\rangle$ &
$\langle$\emoji{Figures/Emoji/home}{12}, \emoji{Figures/Emoji/public}{12}, \emoji{Figures/Emoji/study}{12}, \emoji{Figures/Emoji/smooth}{9},
\emoji{Figures/Emoji/study}{12},
\emoji{Figures/Emoji/smooth}{9},
\emoji{Figures/Emoji/study}{12},
\emoji{Figures/Emoji/public}{12},
\emoji{Figures/Emoji/home}{12}$\rangle$
\tabularnewline
& {$\tuple{1,131,131,11,100,53,131,1}$} & {$\tuple{1,141,23,100,27,100,23,141,1}$}\tabularnewline
\multirow{2}{*}{{4}} & $\langle$\emoji{Figures/Emoji/home}{12}, \emoji{Figures/Emoji/motor}{12}, \emoji{Figures/Emoji/study}{12}, \emoji{Figures/Emoji/smooth}{9},
\emoji{Figures/Emoji/leisure}{12},
\emoji{Figures/Emoji/motor}{12},
\emoji{Figures/Emoji/home}{12}$\rangle$ &
$\langle$\emoji{Figures/Emoji/home}{12}, \emoji{Figures/Emoji/motor}{12}, \emoji{Figures/Emoji/study}{12}, \emoji{Figures/Emoji/smooth}{9},
\emoji{Figures/Emoji/home}{12}$\rangle$
\tabularnewline
& {$\tuple{1,122,22,100,51,122,1}$} & {$\tuple{1,122,22,100,1}$}\tabularnewline
\multirow{2}{*}{{5}} & $\langle$\emoji{Figures/Emoji/home}{12}, \emoji{Figures/Emoji/motor}{12},
\emoji{Figures/Emoji/work}{12},
\emoji{Figures/Emoji/smooth}{9},
\emoji{Figures/Emoji/leisure}{12},
\emoji{Figures/Emoji/smooth}{9},
\emoji{Figures/Emoji/work}{12},
\emoji{Figures/Emoji/motor}{12},
\emoji{Figures/Emoji/home}{12}$\rangle$ &
$\langle$\emoji{Figures/Emoji/home}{12},
\emoji{Figures/Emoji/motor}{12},
\emoji{Figures/Emoji/work}{12},
\emoji{Figures/Emoji/smooth}{9},
\emoji{Figures/Emoji/leisure}{12},
\emoji{Figures/Emoji/smooth}{9},
\emoji{Figures/Emoji/work}{12},
\emoji{Figures/Emoji/motor}{12},
\emoji{Figures/Emoji/home}{12}$\rangle$
\tabularnewline
& {$\tuple{1,121,11,100,53,100,11,121,1}$} & {$\tuple{1,121,11,100,53,100,11,121,1}$}\tabularnewline
\multirow{2}{*}{{6}} & $\langle$\emoji{Figures/Emoji/home}{12},
\emoji{Figures/Emoji/smooth}{9},
\emoji{Figures/Emoji/shop}{12},
\emoji{Figures/Emoji/smooth}{9},
\emoji{Figures/Emoji/home}{12},
\emoji{Figures/Emoji/motor}{12},
\emoji{Figures/Emoji/leisure}{12},
\emoji{Figures/Emoji/motor}{12},
\emoji{Figures/Emoji/home}{12}$\rangle$ & $\langle$\emoji{Figures/Emoji/home}{12},
\emoji{Figures/Emoji/motor}{12},
\emoji{Figures/Emoji/shop}{12},
\emoji{Figures/Emoji/motor}{12},
\emoji{Figures/Emoji/home}{12},
\emoji{Figures/Emoji/smooth}{9},
\emoji{Figures/Emoji/leisure}{12},
\emoji{Figures/Emoji/smooth}{9},
\emoji{Figures/Emoji/home}{12}$\rangle$
\tabularnewline
& {$\tuple{1,100,33,100,1,121,52,121,1}$} & {$\tuple{1,121,33,121,1,100,52,100,1}$}\tabularnewline
\multirow{2}{*}{{7}} & $\langle$\emoji{Figures/Emoji/home}{12},
\emoji{Figures/Emoji/motor}{12},
\emoji{Figures/Emoji/work}{12},
\emoji{Figures/Emoji/motor}{12},
\emoji{Figures/Emoji/home}{12}$\rangle$ &
$\langle$\emoji{Figures/Emoji/home}{12},
\emoji{Figures/Emoji/motor}{12},
\emoji{Figures/Emoji/work}{12},
\emoji{Figures/Emoji/motor}{12},
\emoji{Figures/Emoji/home}{12}$\rangle$
\tabularnewline
& {$\tuple{1,121,11,121,1}$} & {$\tuple{1,121,11,121,1}$}\tabularnewline
\multirow{2}{*}{{8}} & $\langle$\emoji{Figures/Emoji/home}{12},
\emoji{Figures/Emoji/motor}{12},
\emoji{Figures/Emoji/commute}{12},
\emoji{Figures/Emoji/motor}{12},
\emoji{Figures/Emoji/work}{12},
\emoji{Figures/Emoji/motor}{12},
\emoji{Figures/Emoji/commute}{12},
\emoji{Figures/Emoji/motor}{12},
\emoji{Figures/Emoji/home}{12}$\rangle$ &
$\langle$\emoji{Figures/Emoji/home}{12},
\emoji{Figures/Emoji/motor}{12},
\emoji{Figures/Emoji/work}{12},
\emoji{Figures/Emoji/motor}{12},
\emoji{Figures/Emoji/home}{12}$\rangle$
\tabularnewline
& {$\tuple{1,121,61,121,11,121,64,121,1}$} & {$\tuple{1,121,13,121,1}$}\tabularnewline
\hline
\end{tabular}
\end{table*}
\begin{landscape}
\begin{figure*}[p]
\includegraphics[width=1.4\textwidth]{Figures/flows_cluster.pdf}
\caption{Chord diagrams of Stop activities in each cluster}
\label{fig:flow_clust}
\end{figure*}
\end{landscape}
\begin{figure*}
\includegraphics[width=\textwidth]{Figures/daily_pattern_clust.pdf}
\caption{Heat map with Pearson residuals of daily patterns in each cluster. Cramér's $V = 0.25$}
\label{fig:motif_clust}
\end{figure*}
\subsection{Semantic mobility behavior discovery}
\begin{table*}[t]
\caption{Summary of discovered behaviors}
\label{tab:behavior}
{}%
\begin{tabular}{ccccm{1.9cm}|c}
\hline
\centering{Cluster $C_{i}$} & \centering{\% (in total)} & {Typical activities} & {Length} & \centering{Daily Patterns (motif id)} & \centering{\textbf{\emph{Behavior}}}\tabularnewline
\hline
\centering{1} & \centering{7.4} &\{\emoji{Figures/Emoji/public}{12}, \emoji{Figures/Emoji/study}{12}\} & {Short} & {1} &\centering{ \textbf{Teenagers}}\tabularnewline
\centering{2} & \centering{16.7} & \{\emoji{Figures/Emoji/smooth}{8}, \emoji{Figures/Emoji/shop}{12}\} & {Short} & {1} &\centering{ \textbf{Foot shoppers}}\tabularnewline
\centering{3} & \centering{4.2} & \{\emoji{Figures/Emoji/public}{12}, \emoji{Figures/Emoji/smooth}{8},
\emoji{Figures/Emoji/study}{12},
\emoji{Figures/Emoji/leisure}{12}\} & {Medium} & {2, 5} &\centering{ \textbf{Mixed transportation}}\tabularnewline
\centering{4} & \centering{7.2} & \{\emoji{Figures/Emoji/smooth}{8}, \emoji{Figures/Emoji/study}{12}, \emoji{Figures/Emoji/leisure}{12}\} & {Medium} & {2, 4} &\centering{ \textbf{Schoolchildren}}\tabularnewline
\centering{5} & \centering{7.5} & \{
\emoji{Figures/Emoji/smooth}{8},
\emoji{Figures/Emoji/work}{12},
\emoji{Figures/Emoji/leisure}{12}\} & {Long} & {2, 5, 9, 13, 14} & \centering{\textbf{Wandering workers}}\tabularnewline
\centering{6} & \centering{9.8} & \{\emoji{Figures/Emoji/smooth}{8},
\emoji{Figures/Emoji/shop}{12},
\emoji{Figures/Emoji/leisure}{12}\} & {Long} & {\footnotesize{2, 5, 8, 11, 13, 14}} & \centering{\textbf{Shopping addicts}}\tabularnewline
\centering{7} & \centering{32} & \{\emoji{Figures/Emoji/motor}{12},
\emoji{Figures/Emoji/work}{12}\} & {Short} & {1, 3} & \centering{\textbf{Daily routine}}\tabularnewline
\centering{8} & \centering{15.2} & \{\emoji{Figures/Emoji/motor}{12},
\emoji{Figures/Emoji/work}{12},
\emoji{Figures/Emoji/commute}{9}\} & {Medium} & {5, 7, 9, 10, 12, 13, 14} & \centering{\textbf{\footnotesize{Working parents}}}\tabularnewline
\hline
\end{tabular}{\footnotesize \par}
\end{table*}
Based on our previous analysis of clusters, we extract a global behavior from each cluster. Table \ref{tab:behavior} summarizes the eight discovered behaviors. The columns ``Typical activities", ``Length" and ``Daily patterns" were computed using Algorithm \ref{alg:behavior} and represent the predominant activities, the median sequence lengths (as intervals), and the predominant daily patterns, respectively. For the sake of brevity, typical activities were extracted at the aggregated activity level (using emojis). Finally, the ``Behavior" column contains mnemonic labels that summarize the analysis carried out in Section \ref{sec:behavior_extract}.
\begin{algorithm}[b]
\SetAlgoLined
\KwData{Set of clusters $\mathcal{C}=\{C_1, ..., C_k\}$}
\KwResult{Typical activities, Length and Daily patterns}
\For{$C_i \in \mathcal{C}$}{
\LeftComment{medoid$(C_i)$ and mode$(C_i)$ refer to Table \ref{tab:centrality}. $f(x,C_i) = \sum_{S\in C_i}\text{count}(x, S)$ denotes the frequency of activity $x$ in all sequences of $C_i$.}
$\text{Typical activities}(C_i) = \{x | x \in \Sigma \wedge \text{PearsonResiduals}(f(x, C_i)) \geq 4\ \wedge$ \\
\hspace{5.5cm} $(x \in \text{medoid}(C_i) \vee x \in \text{mode}(C_i))\}$ \\
\LeftComment{$I_k$ refers to the intervals of the Poisson distribution in Fig. \ref{fig:poisson_length}} \\
$\text{Length}(C_{i})=\begin{cases}
\text{``Short''} & \text{if } median(\{|S|: S \in C_i\})\in I_{1}\\
\text{``Medium''} & \text{if } median(\{|S|: S \in C_i\})\in I_{2}\\
\text{``Long''} & \text{else}
\end{cases}$ \\
\LeftComment{$\mathcal{G}$ refers to a dictionary of networks from Algorithm \ref{alg:daily_patt}.} \\
$\text{DailyPatterns}(C_{i})=\{G_S |S \in C_i, \text{PearsonResiduals}(\mathcal{G}[G_S]) \geq 4\}$
}
\caption{Behavior Discovery summary}
\label{alg:behavior}
\end{algorithm}
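As an illustration of the ``Length'' rule of Algorithm \ref{alg:behavior}, the short R sketch below labels each cluster from the median length of its sequences; the list \texttt{cluster\_seqs} and the interval bounds are hypothetical placeholders for the clusters and for the limits of $I_1$ and $I_2$ in Fig. \ref{fig:poisson_length}.
\begin{verbatim}
# Minimal sketch (R) of the Length() rule in the Behavior Discovery summary.
# `cluster_seqs` is assumed to be a named list: one element per cluster,
# each element being a list of sequences (vectors of activity codes).
# I1_max and I2_max are illustrative upper bounds of intervals I_1 and I_2.
length_label <- function(seqs, I1_max = 7, I2_max = 10) {
  med <- median(sapply(seqs, length))
  if (med <= I1_max) "Short" else if (med <= I2_max) "Medium" else "Long"
}

sapply(cluster_seqs, length_label)   # e.g. c(C1 = "Short", ..., C8 = "Long")
\end{verbatim}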
We can summarize the behaviors in the clusters as follows:
Cluster $C_1$ contains a majority of short mobility sequences, with only one loop between home and middle/high school or university, and an extensive use of public transportation such as buses. This group mainly exhibits a \textit{Teenagers} mobility behavior.
Cluster $C_2$ is characterized by people who only walk for shopping, whom we call the \textit{Foot shoppers}.
The main feature of $C_3$ is that its sequences combine walking and public transportation, so we call these individuals the \textit{Mixed transportation} people.
\textit{Schoolchildren} are mainly clustered in $C_4$, with a large proportion of primary school activities, followed by sports or cultural activities. These individuals mainly move by walking or by riding in cars.
In cluster $C_5$, the prototypical behavior is that of an individual working and going out for lunch, typically at a restaurant. These \textit{Wandering workers} drive between home and work and walk between work and places for food or leisure.
The representative behavior of individuals in cluster $C_6$ is that they do not work or study. They spend the majority of their time on shopping or leisure activities. We refer to these individuals as the \textit{Shopping addicts}.
Cluster $C_7$ is the largest cluster and contains 32\% of the dataset. Individuals in $C_7$ mainly produce short mobility sequences that represent people who go to work by car and then travel back home. This behavior, with its elementary activities (car, work, and sometimes shopping at a mall) and oscillation patterns, evokes a simple \textit{Daily routine}.
Finally, $C_8$ represents a behavior similar to that of $C_7$, but individuals typically transport somebody else by
car before working and then pick them back up
after work. This behavior can be interpreted as parents accompanying their children to school in the morning and picking them up in the evening. Therefore, we refer to these individuals as \textit{Working Parents}.
Figure \ref{fig:summary} presents a graphical summary of the clusters and corresponding behaviors. The area of each square is proportional to the size of the associated cluster. The colors and compositions refer to the dendrogram in Figure \ref{fig:dendro}.
\begin{figure*}
\centering{
\includegraphics[scale=.85]{Figures/summary3.pdf}
}
\caption{Graphical summary of discovered clusters}
\label{fig:summary}
\end{figure*}
\subsection{Discussion}
In the previous subsection, we presented the analysis and results of the clustering process according to the methodology introduced in Section \ref{sec:clust_anal_method}. This facilitated the discovery of several interesting and coherent patterns of semantic mobility, which are summarized in Table \ref{tab:behavior}.
Regardless, several problems and alternatives should be considered. First of all, as discussed in Section \ref{sec:related_work}, there are many different similarity measures for semantic sequences. The choice of a measure has a significant impact on the results of clustering. In this paper, we used CED, but the alternative measures mentioned in Table \ref{tab:description} could also be considered. The settings of CED (the similarity measure between activities, the ontology, the contextual vector and the $\alpha$ coefficient) are all parameters that can be modified to change the clustering results.
We experimentally tuned each parameter and referred to business knowledge for the construction of our ontology.
The second point is the choice of the clustering algorithm. As indicated in Table \ref{tab:clusters}, the diameters of the clusters indicate the presence of some outliers, and the Silhouette scores suggest that the clusters are not hyper-spherical.
Therefore, the use of a density-based clustering algorithm such as DBSCAN or OPTICS, combined with a more complete study of the topological space and neighborhood relationships via UMAP, as well as of intra- and inter-cluster distances, could help us to obtain denser clusters and detect outliers.
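For illustration, the sketch below shows how such a density-based alternative could be run directly on a precomputed CED distance matrix with the R \texttt{dbscan} package; the matrix \texttt{ced\_dist} and the parameter values are purely indicative.
\begin{verbatim}
# Minimal sketch (R): DBSCAN on a precomputed CED distance matrix.
# `ced_dist` is assumed to be a symmetric matrix of pairwise CED distances.
library(dbscan)

d <- as.dist(ced_dist)
kNNdistplot(d, k = 10)                    # visual aid for choosing eps
db <- dbscan(d, eps = 0.15, minPts = 10)  # eps/minPts are illustrative values
table(db$cluster)                         # cluster 0 gathers the detected outliers
\end{verbatim}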
The final point is the level of analysis of the activities in the ontology. To prevent cognitive overload during visualization, Section \ref{sec:behavior_extract} only presented the results of aggregated activities. Regardless, a detailed analysis at the level of leaf activities in the ontology would also be relevant and could refine the discovered behaviors.
Based on the proposed methodology, we were able to analyze and extract precise cluster behaviors from both socio-cognitive and urban perspectives. Therefore, this approach should be helpful for expert analysts in terms of limiting psychological biases, such as confirmation bias.
The proposed methodology supports the comprehension of clusters and is useful for the evaluation and tuning of clustering methods. The discovery of coherent and meaningful behaviors may trigger the proposal of novel metrics for the quality of experimental setups and relevance of various methods.
\section{Introduction}
Despite the sport's popularity in the United States, public statistical analysis of American football (``football") has lagged behind that of other major sports. While new statistical research involving player and team evaluation is regularly published in baseball \citep{Albert06, Jensen09, Piette12, Baumer15}, basketball \citep{Kubatko07, Deshpande16}, and hockey \citep{Macdonald11, Gramacy12, Thomas13}, there is limited new research that addresses on-field or player personnel decisions for National Football League (NFL) teams. Recent work in football addresses topics such as fantasy football \citep{Becker16}, predicting game outcomes \citep{Balreira14}, NFL TV ratings \citep{Grimshaw14}, the effect of ``fan passion'' and league sponsorship on brand recognition \citep{Wakefield12}, and realignment in college football \citep{Jensen14}. Additionally, with the notable exception of \citet{Lock14}, recent research relating to on-field or player personnel decisions in football is narrowly focused. For example, \citet{Mulholland14} analyze the success of tight ends in the NFL draft, \citet{Clark13} and \citet{Pasteur14} both provide improved metrics for kicker evaluation, \citet{Martin17} examine the NFL's change in overtime rules, and \citet{Snyder15} focus on discretionary penalties from referees. Moreover, statistical analysis of football that does tackle on-field or player personnel decisions frequently relies on proprietary and costly data sources, where data quality often depends on potentially biased and publicly unverified human judgment. This leads to a lack of reproducibility that is well-documented in sports research \citep{Baumer15}.
In this paper, we posit that (1) objective on-field and player personnel decisions rely on two fundamental categories of statistical analysis in football: play evaluation and player evaluation, and (2) in order to maintain a standard of objectivity and reproducibility for these two fundamental areas of analysis, researchers must agree on a dataset standard.
\subsection{Previous Work: Evaluating Plays}
\label{sec:prev-work-plays}
The most basic unit of analysis in football is a single play. In order to objectively evaluate on-field decisions and player performance, each play in a football game must be assigned an appropriate value indicating its success or failure. Traditionally, yards gained/lost have been used to evaluate the success of a play. However, this point of view strips away the importance of context in football \citep{Carter71, Carroll88}. For instance, three yards gained on 3rd and 2 are more valuable than three yards gained on 3rd and 7. This key point, that not all yards are created equal, has been the foundation for the development of two approaches for evaluating plays: expected points and win probability. The expected points framework uses historical data to find the number of points eventually scored by teams in similar situations, while the win probability framework uses historical data to find how often teams in similar situations win the game. Using these metrics, one can obtain pre-snap and post-snap values of a play (expected points or win probability) and, taking the difference in these values, the value provided by the play itself -- expected points added (EPA) or win probability added (WPA). These approaches have been recently popularized by Brian Burke's work at \url{www.advancedfootballanalytics.com} and ESPN \citep{Burke_EP, ESPN_total_QBR}.
Most of the best known approaches for calculating expected points do not provide any level of statistical detail describing their methodology. In most written descriptions, factors such as the down, yards to go for a first down, and field position are taken into account. However, there is no universal standard for which factors should be considered. \citet{Carter71} and others essentially use a form of ``nearest neighbors" algorithms \citep{Dasarathy} to identify similar situations based on down, yards to go, and the yard line to then average over the next points scored. \citet*{Goldner17} describes a Markov model and uses the absorption probabilities for different scoring events (touchdown, field goal, and safety) to arrive at the expected points for a play. ESPN has a proprietary expected points metric, but does not detail the specifics of how it is calculated \citep{ESPN_EP}. \citet*{Burke_EP} provides an intuitive explanation for what expected points means, but does not go into the details of the calculations. \citet*{Schatz03} provides a metric called ``defense-adjusted value over average", which is similar to expected points, and also accounts for the strength of the opposing defense. However, specifics on the modeling techniques are not disclosed. \citet*{Causey15} takes an exact-neighbors approach, finding all plays with a set of identical characteristics, taking the average outcome, and conducting post-hoc smoothing to calculate expected points. In this work, Causey explores the uncertainty in estimates of expected points using bootstrap resampling and analyzes the changes in expected point values over time. Causey also provides all code used for this analysis.
Depending on how metrics based on expected points are used, potential problems arise when building an expected points model involving the nature of football games. The main issue, as pointed out by \citet*{BurkeEP}, involves the score differential in a game. When a team is leading by a large number of points at the end of a game, they will sacrifice scoring points for letting time run off the clock. Changes in team behavior in these situations and, more generally, the leverage of a play in terms of its potential effect on winning and losing are not taken into account when computing expected points.
Analyzing changes in win probability for play evaluation partially resolves these issues. Compared to expected points models, there is considerably more literature on different methodologies for estimating the win probability of a play in football. \citet*{Goldner17} uses a Markov model, similar to the approach taken by \citet*{Tango07} in baseball, by including the score differential, time remaining, and timeouts to extend the expected points model. Burke's approach is primarily empirical estimation by binning plays with adjustments and smoothing. In some published win probability analyses, random forests have been shown to generate well-calibrated win probability estimates \citep{Causey13, Lock14}. The approach taken by \citet*{Lock14} also considers the respective strengths of the offensive (possession) and defensive (non-possession) teams.
There are many areas of research that build off of these approaches for valuing plays. For example, analyses of fourth down attempts and play-calling are very popular \citep{Romer06, Alamar10, Goldner12, Quealy}. This paper focuses on using play evaluation to subsequently evaluate players, and we discuss prior attempts at player evaluation below.
\subsection{Previous Work: Evaluating Players}
\label{sec:prev-work-players}
Due to the complex nature of the sport and the limited data available publicly, the NFL lacks comprehensive statistics for evaluating player performance. While there has been extensive research on situational analysis and play evaluation as described above, there has been considerably less focus on player evaluation. Existing measures do not accurately reflect a player's value to NFL teams, and they are not interpretable in terms of game outcomes (e.g. points or wins). Similarly, there are no publicly known attempts for developing a \textit{Wins Above Replacement} (\textit{WAR}) measure for every individual NFL player, as made popular in baseball \citep{Schoenfield12} and other sports \citep{Thomas15}.
Previous methods for player evaluation in football can be broken down into three categories: within-position statistical comparisons, ad hoc across-position statistical comparisons, and across-position statistical comparisons that rely on proprietary data or human judgment.
\subsubsection{Within-Position Player Evaluation}
Approaches for quantitatively evaluating players who play the same position are numerous, vary by position, and typically lag behind those of other sports. For comparisons of players at offensive skill positions such as quarterback (QB), running back (RB), wide receiver (WR), and tight end (TE), most analysis relies on basic box score statistics. These include yards gained via passing, rushing, and/or receiving; touchdowns via passing, rushing, and/or receiving; rushing attempts for RBs; receptions and targets for RBs, WRs, and TEs; completions, attempts, completion percentage, and yard per attempt for QBs; and other similar derivations of simple box score statistics. These metrics do not account for game situation or leverage. Additionally, they only provide an estimate of a player's relative value to other players at the same position. We cannot draw meaningful conclusions about cross-positional comparisons.
Linear combinations of these box score statistics, such as passer rating \citep{Smith73}, are often used to compare players at the same position while taking into account more than just a single box score measure. Similarly, Pro Football Reference's adjusted net yards per attempt (``ANY/A") expands upon passer rating in that it accounts for sacks and uses a different linear weighting scheme \citep{PFR}. These metrics involve outdated and/or ad hoc weights, thresholds, and other features. Passing in the NFL has changed substantially since the conception of the passer rating statistic in 1973, so that the chosen weights and thresholds do not have the same meaning in today's game as they did in 1973. While ANY/A accounts for sacks and uses a different weighting system, it is hardly a complete measure of QB performance, since it does not account for game situation and leverage. Perhaps most importantly, both passer rating and ANY/A are not interpretable in terms of game outcomes like points or wins.
For positions other than QB, RB, WR, and TE, data is limited, since the NFL does not publicly provide information about which players are on the field for a particular play, the offensive and defensive formations (other than the ``shotgun" formation on offense), or the pre- and post-snap locations of players on the field. For offensive linemen, very little information is available to statistically compare players, as offensive linemen typically only touch the football on broken plays. For defensive players, the NFL only provides information about which players were directly involved in a play (e.g. the tackler or the defensive back covering a targeted receiver). As such, with these positions, it is difficult to obtain adequate within-positional comparisons of player value, let alone across-position comparisons.
\subsubsection{Ad Hoc Across-Position Player Evaluation}
Using only box score statistics, it is extremely difficult to ascertain the value of players at different positions. The fantasy sports industry has attempted to provide across-position estimates of player value using box score statistics. These estimates typically use ad hoc linear combinations of box score statistics that differ by position, so as to put the in-game statistical performances of players at different positions on comparable scales. These measures, typically referred to as ``fantasy points", are available for all positions except those on the offensive line.
Of course, these metrics have several issues. First, they involve many unjustified or ad hoc weights. For example, one rushing yard is worth about 40\% of one passing yard in ESPN's standard definitions of these metrics \citep{ESPN_fantasy}, but these relative values are arbitrary. Second, the definitions are inconsistent, with different on-field events having different values for players of different positions. For example, defensive interceptions are typically worth three times as much as quarterback interceptions thrown \citep{PFF_fantasy, ESPN_fantasy}. Third, these measures do not account for context, such as the game situation or the leverage of a given play. Finally, they are not directly interpretable in terms of game outcomes (e.g. points or wins).
\subsubsection{Player Evaluation with Proprietary Data or Human Judgment}
Outside of the public sphere, there have been irreproducible attempts at within-position statistical comparisons of NFL players. Pro Football Focus assigns grades to every player in each play, but this approach is solely based on human judgment and proprietary to PFF \citep{Eager17}. ESPN's total quarterback rating (``QBR") accounts for the situational contexts a QB faces throughout a game \citep{ESPN_total_QBR, Oliver11}. ESPN uses the following approach when computing QBR: First, they determine the degree of success or failure for each play. Second, they divide credit for each play amongst all players involved. Third, additional adjustments are made for plays of very little consequence to the game outcome. This approach has several important advantages. In the first step, the EPA is used to assign an objective value to each play. Another advantage is that some attempt is made to divide credit for a play's success or failure amongst the players involved. In the approach for NFL player evaluation we propose in this paper, we loosely follow these same two steps.
ESPN's QBR has some disadvantages, however. First and most importantly, Total QBR is not directly reproducible, since it relies on human judgment when evaluating plays. ``The details of every play (air yards, drops, pressures, etc.) are charted by a team of trained analysts in the ESPN Stats \& Information Group. Every play of every game is tracked by at least two different analysts to provide the most accurate representation of how each play occurred" \citep{ESPN_total_QBR}. Additionally, while QBR down-weights plays in low-leverage situations, the approach for doing so is not clearly described and appears to be ad hoc. Finally, QBR is limited only to the QB position.
The only public approach for evaluating players at all positions according to common scale is Pro Football Reference's ``approximate value" (AV) statistic \citep{Drinen}. Using a combination of objective and subjective analysis, AV attempts to assign a single numerical value to a player's performance in any season since 1950, regardless of the player's position. AV has some subjective components, such as whether or not a lineman was named to the NFL's ``all-pro" team, and whether a running back reaches the arbitrary threshold of 200 carries. Additionally, since AV uses linear combinations of end-of-season box score statistics to evaluate players, it does not take into account game situation, opponent, or many other contextual factors that may play a role in the accumulation of box score statistics over the course of a season. Finally, although the basis of many AV calculations involves points scored and allowed, AV is not interpretable in terms of game outcomes.
\subsection{Our Framework for Evaluating NFL Plays and Players}
In order to properly evaluate players, we need to allocate a portion of a play's value to each player involved \citep{ESPN_total_QBR}. \citet*{Baumer17} details the long history of division of credit modeling as a primary driver of research in sports analytics, with origins in evaluating run contributions in baseball. However, in comparison to baseball, every football play is more complex and interdependent, with the 22 players on the field contributing in many ways and to varying degrees. A running play depends not only on the running back but the blocking by the linemen, the quarterback's handoff, the defensive matchup, the play call, etc. A natural approach is to use a regression-based method, with indicators for each player on the field for a play, providing an estimate of their marginal effect. This type of modeling has become common in basketball and hockey, because it accounts for factors such as quality of teammates and competition \citep{Rosenbaum04, Kubatko07, Macdonald11, Gramacy12, Thomas13}.
We present four contributions to the study of football statistics in order to address the issues pertaining to play evaluation and player evaluation outlined above:
\begin{enumerate}
\item The \texttt{R} package \texttt{nflscrapR} to provide easy access to publicly available NFL play-by-play data (Section \ref{sec:data}).
\item A novel approach for estimating expected points using a multinomial logistic regression model, which more appropriately models the ``next score" response variable (Section \ref{sec:ep}).
\item A generalized additive model for estimating the win probability using the expected points as input (Section \ref{sec:wp}).
\item Our \textit{nflWAR} framework, using multilevel models to isolate offensive skill player contribution and estimate their \textit{WAR} (Section \ref{sec:nflwar}).
\end{enumerate}
We use a sampling procedure similar to \citet{Baumer15} to estimate uncertainty in each player's seasonal \textit{WAR}. Due to the limitations of publicly available data, the primary focus of this paper is on offensive skill position players: QB, RB, WR, and TE. However, we present a novel metric that serves as a proxy for measuring a team's offensive line performance on rushing plays. Furthermore, the reproducible framework we introduce in this paper can also be easily extended to estimate \textit{WAR} for all positions given the appropriate data. Researchers with data detailing which players are on the field for every play can use the framework provided in Section \ref{sec:road_to_war} to estimate \textit{WAR} for players at all positions.
Our \textit{WAR} framework has several key advantages. First, it is fully reproducible: it is built using only public data, with all code provided and all data accessible to the public. Second, our expected points and win probability models are well-calibrated and more appropriate from a statistical perspective than other approaches. Third, player evaluation with \textit{WAR} is easily interpretable in terms of game outcomes, unlike prior approaches to player evaluation in the NFL discussed above. The replacement level baseline informs us how many wins a player adds over a readily available player. This is more desirable than comparing to average from the viewpoint of an NFL front office, as league average performance is still valuable in context \citep{Baumer15}. Fourth, the multilevel model framework accounts for quality of teammates and competition. Fifth, although this paper presents \textit{WAR} using our expected points and win probability models for play evaluation, researchers can freely substitute their own approaches for play evaluation without any changes to the framework for estimating player \textit{WAR}. Finally, we recognize the limitations of point estimates for player evaluation and provide estimates of the uncertainty in a player's \textit{WAR}.
\section{Play-by-Play Data with \texttt{nflscrapR}}
\label{sec:data}
Data in professional sports comes in many different forms. At the
season-level, player and team statistics are typically available dating
back to the 1800s \citep[\citet{Phillips}]{Lahman}. At the game-level,
player and team statistics have been tracked to varying degrees of
detail dating back several decades \citep{Lahman}. Within games, data is
available to varying degrees of granularity across sports and leagues.
For example, Major League Baseball (MLB) has play-by-play data at the
plate appearance level available dating back several decades
\citep{Lahman}, while the National Hockey League (NHL) only began
releasing play-by-play via their real-time scoring system in the 2005-06
season \citep{Thomas17}.
Play-by-play data, or information specifying the conditions, features,
and results of an individual play, serves as the basis for most modern
sports analysis in the public sphere \citep[\citet{Macdonald11},
\citet{Lock14}, \citet{Thomas17}]{Kubatko07}. Outside of the public
sphere, many professional sports teams and leagues have access to data
at even finer levels of granularity, e.g.~via optical player tracking
systems in the National Basketball Association, MLB, and the English
Premier League that track the spatial distribution of players and
objects at multiple times per second. The NFL in 2016 began using
radio-frequency identification (RFID) technology to track the locations
of players and the football \citep{CBS}, but as of mid 2018, this data
is not available publicly, and NFL teams have only just gained access to the data beyond their own players. In almost all major professional sports leagues, play-by-play data is provided and includes information on in-game
events, players involved, and (usually) which players are actively
participating in the game for each event
\citep[\citet{Lahman}]{Thomas17}.
Importantly, this is not the case for the NFL. While play-by-play data
is available through the NFL.com application programming interface
(API), the league does not provide information about which players are
present on the playing field for each play, what formations are being
used (aside from the ``shotgun'' formation), player locations, or
pre-snap player movement. This is extremely important, as it limits the
set of players for which we can provide estimates of their contribution
to game outcomes (e.g.~points scored, points allowed, wins, losses,
etc).
We develop an \texttt{R} package \citep{R17}, called \texttt{nflscrapR},
that provides users with clean datasets, box score statistics, and more
advanced metrics describing every NFL play since 2009
\citep{Horowitz17}. This package was inspired largely by other
\texttt{R} packages facilitating the access of sports data. For hockey,
\texttt{nhlscrapR} provides clean datasets and advanced metrics to use
for analysis for NHL fans \citep{Thomas17}. In baseball, the R packages
\texttt{pitchRx} \citep{Sievert15}, \texttt{Lahman} \citep{Lahman}, and
\texttt{openWAR} \citep{Baumer15} provide tools for collecting MLB data
on the pitch-by-pitch level and building out advanced player evaluation
metrics. In basketball, \texttt{ballR} \citep{Elmore17} provides
functions for collecting data from \texttt{basketball-reference.com}.
Each NFL game since 2009 has a 10 digit game identification number (ID)
and an associated set of webpages that includes information on the scoring
events, play-by-play, game results, and other game data. The API
structures its data using JavaScript Object Notation (JSON) into three
major groups: game outcomes, player statistics at the game level, and
play-by-play information. The design of the \texttt{nflscrapR} package
closely mimics the structure of the JSON data in the API, with four main
functions described below:
\texttt{season\_games()}: Using the data structure outputting end of
game scores and team matchups, this function provides end of game
results with an associated game ID and the home and away teams
abbreviations.
\texttt{player\_game()}: Accessing the player statistics object in the
API's JSON data, this function parses the player level game summary data
and creates a box-score-like data frame. Additional functions provide
aggregation functionality:\\ \texttt{season\_player\_game()} binds the
results of \texttt{player\_game()} for all games in a season, and
\texttt{agg\_player\_season()} outputs a single row for each player with
their season total statistics.
\texttt{game\_play\_by\_play()}: This is the most important function in
\texttt{nflscrapR}. The function parses the listed play-by-play data
then uses advanced regular expressions and other data manipulation tasks
to extract detailed information about each play (e.g.~players involved
in action, play type, penalty information, air yards gained, yards
gained after the catch, etc.). The \texttt{season\_play\_by\_play()}
binds the results of \texttt{game\_play\_by\_play()} for all games in a
season.
\texttt{season\_rosters()}: This function outputs all of the rostered
players on a specified team in a specified season and includes their
name, position, unique player ID, and other information.
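As a brief illustration of how these functions fit together, consider the following R sketch; the season value is arbitrary and the exact argument names should be checked against the package documentation.
\begin{verbatim}
# Minimal sketch (R) of a typical nflscrapR workflow (argument names are
# illustrative; see the package documentation for exact signatures).
library(nflscrapR)

games_2016 <- season_games(2016)               # end-of-game results and game IDs
pbp_2016 <- season_play_by_play(2016)          # detailed play-by-play for the season
player_games_2016 <- season_player_game(2016)  # box-score statistics by player-game
steelers_2016 <- season_rosters(2016, "PIT")   # rostered players for one team
\end{verbatim}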
For visualization purposes we also made a dataset, \texttt{nflteams}
available in the package which includes the full name of all 32 NFL
teams, their team abbreviations, and their primary
colors\footnote{Some of this information is provided through Ben Baumer's \texttt{R} package \texttt{teamcolors} \citep{Baumer17b}}.
In addition to the functions provided in \texttt{nflscrapR}, we provide
downloadable versions in comma-separated-value format, along with a
complete and frequently updated data dictionary, at
\texttt{https://github.com/ryurko/nflscrapR-data}. The datasets provided on this website include play-by-play data from 2009 -- 2017, game-by-game player level statistics, player-season total statistics, and team-season total statistics. These datasets are made available to allow users familiar with other software to do research in the realm of football analytics. Table \ref{table-pbp} gives a brief overview of some of the more important variables used for evaluating plays in Section
\ref{sec:ep_wp_model}.
\begin{table}
\centering
\caption{Description of the play-by-play dataset.}
\label{table-pbp}
\begin{tabular}{p{3cm} p{9cm}}
\hline \\ [-1.5ex]
Variable & Description \\ [1ex]
\hline \\ [-1.5ex]
Possession Team & Team with the ball on offense (opposing team is on defense) \\ [1ex]
Down & Four downs to advance the ball ten (or more) yards \\ [1ex]
Yards to go & Distance in yards to advance and convert first down \\ [1ex]
Yard line & Distance in yards away from opponent's endzone (100 to zero) \\ [1ex]
Time Remaining & Seconds remaining in game, each game is 3600 seconds long (four quarters, halftime, and a potential overtime) \\ [1ex]
Score differential & Difference in score between the possession team and opposition \\
\hline
\end{tabular}
\end{table}
\section{Evaluating Plays with Expected Points and Win Probability}
\label{sec:ep_wp_model}
As described in Section \ref{sec:prev-work-plays}, expected points and
win probability are two common approaches for evaluating plays. These
approaches have several key advantages: They can be calculated using
only data provided by the NFL and available publicly, they provide
estimates of a play's value in terms of real game outcomes (i.e.~points
and wins), and, as a result, they are easy to understand for both
experts and non-experts.
Below, we introduce our own novel approaches for estimating expected
points ($EP$) and win probability ($WP$) using publicly available
data via \texttt{nflscrapR}.
\subsection{Expected Points}
\label{sec:ep}
While most authors take the average ``next score'' outcome of similar
plays in order to arrive at an estimate of $EP$, we recognize that
certain scoring events become more or less likely in different
situations. As such, we propose modeling the probability for each of
the scoring events directly, as this more appropriately accounts for the
differing relationships between the covariates in Table \ref{table-pbp}
and the different categories of the ``next score'' response. Once we
have the probabilities of each scoring event, we can trivially estimate
expected points.
\subsubsection{Multinomial Logistic Regression}
To estimate the probabilities of each possible scoring event conditional
on the current game situation, we use multinomial logistic regression.
For each play, we find the next scoring event within the same half (with
respect to the possession team) as one of the seven possible events:
touchdown (7 points), field goal (3 points), safety (2 points), no score
(0 points), opponent safety (-2 points), opponent field goal (-3
points), and opponent touchdown (-7 points). Here, we ignore point after
touchdown (PAT) attempts, and we treat PATs separately in Section
\ref{sec:pat_fg}.
\autoref{next-score-bar} displays the distribution of the different types
of scoring events using data from NFL regular season games between 2009
and 2016, with each event located on the y-axis based on their
associated point value $y$. This data consists of 304,896 non-PAT
plays, excluding QB kneels (which are solely used to run out the clock
and are thus assigned an $EP$ value of zero). The gaps along the
y-axis between the different scoring events reinforce our decision to
treat this as a classification problem rather than modeling the point
values with linear regression -- residuals in such a model will not meet
the assumptions of normality. While we use seven points for a touchdown
for simplicity here, our multinomial logistic regression model generates
the probabilities for the events agnostic of the point value. This is
beneficial, since it allows us to flexibly handle PATs and two-point
attempts separately. We can easily adjust the point values
associated with touchdowns to reflect changes in the league's scoring
environment.
\begin{figure}[!h]
\includegraphics[width=14cm]{next_score_barchart.jpeg}
\centering
\caption{Distribution of next scoring events for all plays from 2009-16, with respect to the possession team.}
\label{next-score-bar}
\end{figure}
\begin{table}
\centering
\caption{Description of variables for the $EP$ model.}
\label{table-ep-vars}
\begin{tabular}{ p{3cm} p{9cm}}
\hline \\ [-1.5ex]
Variable & Variable description \\ [1ex]
\hline \\ [-1.5ex]
Down & The current down (1st, 2nd, 3rd, or 4th)\\ [1ex]
Seconds & Number of seconds remaining in half \\ [1ex]
Yardline & Yards from endzone (0 to 100) \\ [1ex]
log(YTG) & Log transformation of yards to go for a first down \\ [1ex]
GTG & Indicator for whether or not it is a goal down situation \\ [1ex]
UTM & Indicator for whether or not time remaining in the half is under two minutes \\ [1ex]
\hline
\end{tabular}
\end{table}
We denote the covariates describing the game situation for each play as
$\mathbf{X}$, which are presented in Table \ref{table-ep-vars}, and the
response variable:
\begin{align}
\label{ep-response}
Y \in & \{\textrm{Touchdown}\ (\textbf{7}),\ \textrm{Field Goal}\ (\textbf{3}),\ \textrm{Safety}\ (\textbf{2}),\ \textrm{No Score}\ (\textbf{0}), \nonumber \\
&-\textrm{Touchdown}\ (\textbf{-7}),\ -\textrm{Field Goal}\ (\textbf{-3}),\ -\textrm{Safety}\ (\textbf{-2})\}
\end{align}
The model is specified with six logit transformations relative to the ``No Score''
event with the following form:
\begin{align}
\text{log}(\frac{P(Y=Touchdown|\mathbf{X})}{P(Y=No\ Score|\mathbf{X})}) & = \mathbf{X}\cdot \boldsymbol{\beta}_{Touchdown}, \nonumber \\
\text{log}(\frac{P(Y=Field\ Goal|\mathbf{X})}{P(Y=No\ Score|\mathbf{X})}) & = \mathbf{X}\cdot \boldsymbol{\beta}_{Field\ Goal}, \nonumber \\
\vdots \\
\text{log}(\frac{P(Y=-Touchdown|\mathbf{X})}{P(Y=No\ Score|\mathbf{X})}) & = \mathbf{X}\cdot \boldsymbol{\beta}_{-Touchdown}, \nonumber
\end{align}
\noindent where $\boldsymbol{\beta}_y$ is the corresponding coefficient vector for the type of next scoring event. Using the generated probabilities for each of the possible scoring
events, $P(Y = y|\mathbf{X})$, we simply calculate the expected
points ($EP$) for a play by multiplying each event's predicted
probability with its associated point value \(y\):
\begin{equation}
\label{ep-formula}
EP = E[Y | \mathbf{X}] = \sum_y y \cdot P(Y=y | \mathbf{X}).
\end{equation}
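A hedged sketch of fitting this model in R with the \texttt{nnet} package is shown below; the data frame \texttt{pbp}, its column names, and the factor levels of the response are our own illustrative choices.
\begin{verbatim}
# Minimal sketch (R): multinomial logistic regression for the next score,
# then expected points. `pbp` is assumed to hold one row per play with the
# covariates of the EP model, a weight column `total_weight` (see the
# observation weighting below), and a factor `Next_Score_Half` whose
# reference level is "No_Score".
library(nnet)

ep_model <- multinom(
  Next_Score_Half ~ factor(Down) + Seconds + Yardline + log_YTG + GTG + UTM +
    log_YTG:factor(Down) + Yardline:factor(Down) + log_YTG:GTG,
  data = pbp, weights = total_weight, maxit = 300)

probs <- predict(ep_model, newdata = pbp, type = "probs")
point_values <- c(No_Score = 0, Touchdown = 7, Field_Goal = 3, Safety = 2,
                  Opp_Touchdown = -7, Opp_Field_Goal = -3, Opp_Safety = -2)
pbp$ep <- as.vector(probs[, names(point_values)] %*% point_values)
\end{verbatim}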
\hypertarget{observation-weighting}{%
\subsubsection{Observation Weighting}\label{observation-weighting}}
Potential problems arise when building an expected points model
because of the nature of football games. The first issue, as pointed out
by \citet{BurkeEP}, regards the score differential in a game. When a
team is leading by a large number of points at the end of a game they
will sacrifice scoring points for letting time run off the clock. This
means that plays with large score differentials can exhibit a different
kind of relationship with the next points scored than plays with tight
score differentials. Although others such as Burke only use the subset
of plays in the first and third quarter where the score differential is
within ten points, we don't exclude any observations but instead use a
weighting approach. \autoref{score_diff_hist}(a) displays the distribution
for the absolute score differential, which is clearly skewed right, with a higher proportion of plays possessing smaller score
differentials. Each play \(i \in \{1, \hdots, n\}\), in the modeling data of regular season games from 2009 to 2016, is assigned a weight
\(w_i\) based on the score differential \(S\) scaled from zero to one
with the following function:
\begin{equation}
\label{weight}
w_i = w(S_i) = \frac{\underset{i}{max}(|S_i|) - |S_i|}{\underset{i}{max}(|S_i|) - \underset{i}{min}(|S_i|)}.
\end{equation}
In addition to score differential, we also weight plays according to
their ``distance'' to the next score in terms of the number of drives.
For each play \(i\), we find the difference in the number of drives from
the next score \(D\): \(D_i = d_{next\ score} - d_i\), where
\(d_{next\ score}\) and \(d_i\) are the drive numbers for the next
score and play \(i\), respectively. For plays in the first half, we
stipulate that \(D_i = 0\) if the \(d_{next\ score}\) occurs in the
second half, and similarly for second half plays for which the next
score is in overtime. \autoref{score_diff_hist}(b) displays the
distribution of \(D_i\) excluding plays with the next score as ``No
Score.'' This difference is then scaled from zero to one in the same way
as the score differential in \autoref{weight}. The score differential
and drive score difference weights are then added together and again
rescaled from zero to one in the same manner resulting in a combined
weighting scheme. By combining the two weights, we are placing equal
emphasis on both the score differential and the number of drives until
the next score and leave adjusting this balance for future work.
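A minimal R sketch of this combined weighting is given below; the column names of \texttt{pbp} are illustrative, and the final rescaling is a plain min--max normalization of the summed weights.
\begin{verbatim}
# Minimal sketch (R) of the combined observation weights. `pbp` is assumed
# to contain the score differential and the drive numbers of each play and
# of its next score (with the cross-half convention giving D_i = 0).
w_scale <- function(x) (max(x) - x) / (max(x) - min(x))  # weighting function w(.)

w_score <- w_scale(abs(pbp$score_differential))
D <- pbp$drive_next_score - pbp$drive
w_drive <- w_scale(D)

combo <- w_score + w_drive
pbp$total_weight <- (combo - min(combo)) / (max(combo) - min(combo))
\end{verbatim}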
\begin{figure}[!h]
\includegraphics[width=16cm]{distr_score_drive.jpeg}
\centering
\caption{Distributions for (a) absolute score differential and (b) number of drives until next score (excluding plays without a next score event).}
\label{score_diff_hist}
\end{figure}
\hypertarget{model-selection-with-calibration}{%
\subsubsection{Model Selection with
Calibration}
\label{model-selection-with-calibration}}
Since our expected points model uses the probabilities for each scoring
event from multinomial logistic regression, the variables and
interactions selected for the model are determined via calibration
testing, similar to the criteria for evaluating the win probability
model in \citet{Lock14}. The estimated probability for each of the seven
scoring events is binned in five percent increments (20 total possible
bins), with the observed proportion of the event found in each bin. If
the actual proportion of the event is similar to the bin's estimated
probability then the model is well-calibrated. Because we are generating
probabilities for seven events, we want a model that is well-calibrated
across all seven events. To objectively compare different models, we
first calculate for scoring event \(y\) in bin \(b \in \{1,\hdots, B\}\)
its associated error \(e_{y,b}\):
\begin{equation}
e_{y,b} = |\hat{P_b}(Y=y) - P_b(Y=y)|,
\end{equation}
\noindent where \(\hat{P_b}(Y=y)\) and \(P_b(Y=y)\) are the predicted
and observed probabilities, respectively, in bin \(b\). Then, the
overall calibration error \(e_y\) for scoring event \(y\) is found by
averaging \(e_{y,b}\) over all bins, weighted by the number of plays in
each bin, \(n_{y,b}\):
\begin{equation}
e_y = \frac{1}{n_y}\sum_b n_{y,b} \cdot e_{y,b},
\end{equation}
\noindent where \(n_y = \sum_b n_{y,b}\). This leads to the model's
calibration error \(e\) as the average of the seven \(e_y\) values,
weighted by the number of plays with scoring event \(y\), \(n_{y}\):
\begin{equation}
e = \frac{1}{n}\sum_y n_y \cdot e_y,
\end{equation}
\noindent where \(n = \sum_y n_y\), the number of total plays. This
provides us with a single statistic with which to evaluate models, in
addition to the calibration charts.
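The calibration error can be computed with a few grouped summaries; the R sketch below assumes a long-format data frame \texttt{cal\_data} with one row per play and scoring event, holding the event label, its predicted probability, and a 0/1 indicator of whether that event was the observed next score.
\begin{verbatim}
# Minimal sketch (R): weighted calibration error e from binned predictions.
library(dplyr)

e <- cal_data %>%
  mutate(bin = cut(pred_prob, breaks = seq(0, 1, by = 0.05),
                   include.lowest = TRUE)) %>%
  group_by(event, bin) %>%
  summarise(n_yb = n(),
            e_yb = abs(mean(pred_prob) - mean(observed)),  # per-bin error
            .groups = "drop") %>%
  group_by(event) %>%
  summarise(n_y = sum(n_yb),
            e_y = weighted.mean(e_yb, n_yb), .groups = "drop") %>%
  summarise(e = weighted.mean(e_y, n_y)) %>%
  pull(e)
\end{verbatim}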
We calculate the model calibration error using leave-one-season-out
cross-validation (LOSO CV) to reflect how the \texttt{nflscrapR} package
will generate the probabilities for plays in a season it has not yet
observed. The model yielding the best LOSO CV calibration results uses
the variables presented in \autoref{table-ep-vars}, along with three
interactions: \(\text{log}(\mbox{YTG})\) and Down, Yardline and Down, and
\(\text{log}(\mbox{YTG})\) and GTG. \autoref{ep_cal} displays the selected
model's LOSO CV calibration results for each of the seven scoring
events, resulting in \(e \approx 0.013\). The dashed lines along the
diagonal represent a perfect fit, i.e.~the closer to the diagonal points
are the more calibrated the model. Although time remaining is typically
reserved for win probability models \citep{Goldner17}, including the
seconds remaining in the half, as well as the indicator for under two
minutes, improved the model's calibration, particularly with regards to
the ``No Score'' event. We also explored the use of an ordinal logistic regression model which assumes equivalent effects as the scoring value increases, but found the LOSO CV calibration results to be noticeably worse with \(e \approx 0.022\).
\begin{figure}[!h]
\includegraphics[width=16cm]{ep_calibration_plots.jpeg}
\centering
\caption{Expected points model LOSO CV calibration results by scoring event.}
\label{ep_cal}
\end{figure}
\hypertarget{pats-and-field-goals}{%
\subsubsection{PATs and Field Goals}\label{pats-and-field-goals}}
\label{sec:pat_fg}
As noted earlier, we treat PATs (extra point attempts and two-point
attempts) separately. For two-point attempts, we simply use the
historical success rate of 47.35\% from 2009-2016, resulting in
\(EP = 2 \cdot 0.4735 = 0.9470\). Extra point attempts instead use the probability of successfully making the kick, \(P(M)\), estimated from a generalized additive model (see Section \ref{gam}) that models this probability for both extra point attempts and field goals as a smooth function of the kick's distance, \(k\) (total of 16,906 extra point and field goal attempts from 2009-2016):
\begin{equation}
\text{log}(\frac{P(M)}{1-P(M)}) = s(k).
\end{equation}
The expected points for extra point attempts is this predicted
probability of making the kick, since the actual point value of a PAT is
one. For field goal attempts, we incorporate this predicted probability
of making the field goal, taking into consideration the cost
of missing the field goal and turning the ball over to the opposing
team. This results in the following override for field goal attempts:
\begin{equation}
EP_{field\ goal\ attempt} = P(M)\cdot 3 + (1 - P(M))\cdot (-1)\cdot E[Y|X=m],
\end{equation}
\noindent where \(E[Y|X=m]\) is the expected points from the multinomial
logistic regression model but assuming the opposing team has taken
possession from a missed field goal, with the necessary adjustments to
field position and time remaining (eight yards and 5.07 seconds,
respectively, estimated from NFL regular season games from 2009 to 2016), and multiplying by
negative one to reflect the expected points for the team attempting the
field goal. Although these calculations are necessary for proper
calculation of the play values $\delta_{f,i}$ discussed in Section
\ref{epa-wpa}, we note that this is a rudimentary field goal model only
taking distance into account. Enhancements could be made with additional
data (e.g.~weather data, which is not made available by the NFL) or by
using a model similar to that of \citet{Morris15}, but these are beyond
the scope of this paper.
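
As a rough sketch of these two steps, assuming a hypothetical data frame \texttt{kicks} of extra point and field goal attempts with columns \texttt{made} and \texttt{distance}, and a value \texttt{ep\_if\_missed} giving \(E[Y|X=m]\) for the opposing team after a miss, the override could be written as:

\begin{verbatim}
library(mgcv)

# Smooth probability of a successful kick as a function of distance
kick_model <- gam(made ~ s(distance), data = kicks, family = binomial)

# Expected points override for a field goal attempt
fg_ep <- function(distance, ep_if_missed) {
  p_make <- predict(kick_model, newdata = data.frame(distance = distance),
                    type = "response")
  p_make * 3 + (1 - p_make) * (-1) * ep_if_missed
}
\end{verbatim}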
\subsubsection{Expected Points by Down and Yard Line}
For reference, \autoref{ep_comp} displays the relationship between the
field position and the $EP$ for our multinomial logistic regression
model available via \texttt{nflscrapR} compared to the previous
relationships found by \citet{Carter71} and \citet{Carroll88}. We
separate the \texttt{nflscrapR} model by down to show its importance,
and in particular the noticeable drop for fourth down plays and how they
exhibit a different relationship near the opponent's end zone as
compared to other downs. To provide context for what is driving the
difference, \autoref{ep_probs_chart} displays the relationship between
each of the next score probabilities and field position by down. Clearly
on fourth down, the probability of a field goal attempt overwhelms the
other possible events once within 50 yards of the opponent's end zone.
\begin{figure}[!h]
\includegraphics[width=16cm]{ep_distance_down_plot.jpeg}
\centering
\caption{Comparison of historical models and \texttt{nflscrapR} expected points value, based on distance from opponent's end zone by down.}
\label{ep_comp}
\end{figure}
\begin{figure}[!h]
\includegraphics[width=16cm]{ep_event_distance_plot.jpeg}
\centering
\caption{Relationship between next score event probabilities and field position by down.}
\label{ep_probs_chart}
\end{figure}
\subsection{Win Probability}
\label{sec:wp}
Because our primary focus in this paper is in player evaluation, we
model win probability without taking into account the teams playing (i.e.~we do not
include indicators for team strength in the win probability model). As a
result, every game starts with each team having a 50\% chance of
winning. Including indicators for a team's overall, offensive, and/or
defensive strengths would artificially inflate (deflate) the
contributions made by players on bad (good) teams in the models
described in Section \ref{sec:nflwar}, since their team's win
probability would start lower (higher).
Our approach for estimating \(WP\) also differs from the others
mentioned in Section \ref{sec:prev-work-plays} in that we incorporate the
estimated \(EP\) directly into the model by calculating the expected
score differential for a play. Our expected points model already
produces estimates for the value of the field position, yards to go, etc
without considering which half of the game or score. When including the
variables presented in Table \ref{table-wp-vars}, we arrive at a
well-calibrated \(WP\) model.
\begin{table}
\centering
\caption{Description of selected variables for the win probability model. Note: $S$ is the score differential at the current play.}
\label{table-wp-vars}
\begin{tabular}{ p{3cm} p{9cm}}
\hline \\ [-1.5ex]
Variable & Variable description \\ [1ex]
\hline \\ [-1.5ex]
$E[S]$ & Expected score differential = $EP + S$ \\ [1ex]
$s_{g}$ & Number of seconds remaining in game \\ [1ex]
$E[\frac{S}{s_{g} +1}]$ & Expected score time ratio\\ [1ex]
$h$ & Current half of the game (1st, 2nd, or overtime) \\ [1ex]
$s_h$ & Number of seconds remaining in half \\ [1ex]
$u$ & Indicator for whether or not time remaining in half is under two minutes \\ [1ex]
$t_{off}$ & Time outs remaining for offensive (possession) team \\ [1ex]
$t_{def}$ & Time outs remaining for defensive team \\ [1ex]
\hline
\end{tabular}
\end{table}
\subsubsection{Generalized Additive Model}
\label{gam}
We use a generalized additive model (GAM) to estimate the possession
team's probability of winning the game conditional on the current game
situation. GAMs have several key benefits that make them ideal for
modeling win probability: They allow the relationship between the
explanatory and response variables to vary according to smooth,
non-linear functions. They also allow for linear relationships and can estimate effects for (both ordered and unordered) factor levels. We find that this
flexible, semi-parametric approach allows us to capture nonlinear
relationships while maintaining the many advantages of using linear
models. Using a logit link function, our \(WP\) model takes the form:
\begin{equation}
\text{log}(\frac{P(\mbox{Win})}{P(\mbox{Loss})}) = s(E[S]) + s(s_h)\cdot h + s(E[\frac{S}{s_{g} +1}]) + h \cdot u \cdot t_{off} + h\cdot u \cdot t_{def},
\end{equation}
\noindent where \(s\) is a smooth function while \(h\), \(u\),
\(t_{off}\), and \(t_{def}\) are linear parametric terms defined in \autoref{table-wp-vars}. By taking the
inverse of the logit we arrive at a play's \(WP\).
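
A minimal \texttt{mgcv} sketch of this specification is given below, with hypothetical column names for the variables in \autoref{table-wp-vars}, \texttt{win} as a 0/1 indicator of whether the possession team won, and \texttt{half} coded as a factor so that the smooth of seconds remaining can vary by half.

\begin{verbatim}
library(mgcv)

wp_model <- gam(win ~ half + s(exp_score_diff) + s(half_seconds, by = half) +
                  s(exp_score_time_ratio) +
                  half:under_two:off_timeouts + half:under_two:def_timeouts,
                data = pbp, family = binomial)

# Taking the inverse logit (type = "response") yields each play's WP
pbp$wp <- predict(wp_model, newdata = pbp, type = "response")
\end{verbatim}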
\hypertarget{win-probability-calibration}{%
\subsubsection{Win Probability
Calibration}\label{win-probability-calibration}}
Similar to the evaluation of the \(EP\) model, we again use LOSO CV to
select the above model, which yields the best calibration results.
\autoref{wp_cal} shows the calibration plots by quarter, mimicking the
approach of \citet{Lopez17} and \citet{Yam18}, who evaluate both our
\(WP\) model and that of \citet{Lock14}. The observed proportion of wins
closely matches the expected proportion of wins within each bin for each
quarter, indicating that the model is well-calibrated across all
quarters of play and across the spectrum of possible win probabilities.
These findings match those of \citet{Yam18}, who find ``no obvious
systematic patterns that would signal a flaw in either model.''
\begin{figure}[!h]
\includegraphics[width=16cm]{wp_calibration_plots.jpeg}
\centering
\caption{Win probability model LOSO CV calibration results by quarter.}
\label{wp_cal}
\end{figure}
\subsubsection{Win Probability Example}
An example of a single game \(WP\) chart is provided in
\autoref{wp_example} for the 2017 American Football Conference (AFC)
Wild Card game between the Tennessee Titans and Kansas City Chiefs. The game starts with both teams having an equal chance of winning, with minor variations until the score differential changes (in this case, in favor of Kansas City). Kansas City led 21-3 after the first half, reaching a peak win probability of roughly 95\% early in the third quarter, before giving up 19 unanswered points in the second half and losing to Tennessee 22-21.
\begin{figure}[!h]
\includegraphics[width=16cm]{game_wp_chart_ex.jpeg}
\centering
\caption{Win probability chart for 2017 AFC Wild Card game.}
\label{wp_example}
\end{figure}
\subsection{Expected Points Added and Win Probability Added}
\label{epa-wpa}
In order to arrive at a comprehensive measure of player performance, each
play in a football game must be assigned an appropriate value
\(\delta_{f,i}\) that can be represented as the change from state \(i\)
to state \(f\):
\begin{equation}
\label{play-value}
\delta_{f,i} = \boldsymbol{V}_{f} - \boldsymbol{V}_i,
\end{equation}
\noindent where \(\boldsymbol{V}_{f}\) and \(\boldsymbol{V}_{i}\) are
the associated values for the ending and starting states respectively.
We represent these values by either a play \(i\)'s expected points
(\(EP_i\)) or win probability (\(WP_i\)).
Plugging our \(EP\) and \(WP\) estimates for the start of play \(i\) and
the start of the following play \(f\) into \autoref{play-value}'s values
for \(\boldsymbol{V}_i\) and \(\boldsymbol{V}_f\) respectively provides
us with the two types of play valuations \(\delta_{f,i}\): (1) the
change in point value as expected points added (\(EPA\)), and (2) the
change in win probability as win probability added (\(WPA\)). For
scoring plays, we use the associated scoring event's value \(y\) as
\(\boldsymbol{V}_f\) in place of the following play's \(EP\) to reflect
that the play's value is just connected to the difference between the
scoring event and the initial state of the play. As an example, during Super Bowl LII the Philadelphia Eagles' Nick Foles caught a touchdown pass when facing fourth down on their opponent's one yard line with thirty-eight seconds remaining in the half. At the start of the play the Eagles' expected points value was \(\boldsymbol{V}_{i}\ \approx 2.78\), thus resulting in $EPA \approx 7 - 2.78 = 4.22$. In an analogous calculation, this famous play, known as the ``Philly special'', resulted in $WPA \approx 0.1266$ as the Eagles increased their lead before the end of the half.
For passing plays, we can additionally take advantage of \emph{air
yards} (perpendicular distance in yards from the line of scrimmage to
the yard line at which the receiver was targeted or caught the ball) and
\emph{yards after catch} (perpendicular distance in yards from the yard
line at which the receiver caught the ball to the yard line at which the
play ended), for every passing play available with \texttt{nflscrapR}.
Using these two pieces, we can determine the hypothetical field position
and whether or not a turnover on downs occurs to separate the value of a
play from the air yards versus the yards after catch. For each completed
passing play, we break the estimation of \(EP\) and \(WP\)
into two plays -- one comprising everything leading up to the catch, and
one for the yards after the catch. Because the models rely on the
seconds remaining in the game, we make an adjustment to the time
remaining by subtracting the average length of time for incomplete
passing plays, 5.7
seconds\footnote{This estimate could be improved in future work if information about the time between the snap and the pass becomes available.}.
We then use the \(EP\) or \(WP\) through the air as \(\boldsymbol{V}_f\)
in \autoref{play-value} to estimate \(EPA_{i,air}\) or \(WPA_{i,air}\),
denoting these as \(\delta_{f,i,air}\). We estimate the value of yards
after catch, \(\delta_{f,i,yac}\), by taking the difference between the
value of the following play \(\boldsymbol{V}_f\) and the value of the
air yards, \(\delta_{f,i,air}\). We use this approach to calculate both
\(EPA_{i,yac}\) and \(WPA_{i,yac}\).
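
As a toy illustration (not the actual \texttt{nflscrapR} code), if \texttt{ep\_i}, \texttt{ep\_air}, and \texttt{ep\_f} denote the expected points at the start of the play, at the catch point (with the 5.7 second adjustment applied), and at the start of the following play, the decomposition for a completed pass is simply:

\begin{verbatim}
epa_split <- function(ep_i, ep_air, ep_f) {
  c(epa_air = ep_air - ep_i,   # delta_{f,i,air}
    epa_yac = ep_f - ep_air,   # delta_{f,i,yac}
    epa     = ep_f - ep_i)     # total delta_{f,i} = epa_air + epa_yac
}

epa_split(ep_i = 2.1, ep_air = 3.4, ep_f = 4.0)  # hypothetical values
\end{verbatim}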
\section{Evaluating Players with nflWAR}
\label{sec:nflwar}
We use the play values calculated in Section
\ref{sec:ep_wp_model} as the basis for a statistical estimate of wins
above replacement (\textit{WAR}) for each player in the NFL. To do this, we take
the following approach:
\begin{itemize}
\tightlist
\item
estimate the value of each play (Section \ref{sec:ep_wp_model}),
\item
estimate the effect of each player on play value added (Section
\ref{sec:war_model}),
\item
evaluate relative to replacement level (Section \ref{sec:repl_level}),
\item
convert to a wins scale (Section \ref{sec:win_conversion}), and
\item
estimate the uncertainty in \textit{WAR} (Section \ref{sec:resample}).
\end{itemize}
This framework can be applied to any individual season, and we present
results for the 2017 season in Section \ref{results}. Due to data
restrictions, we currently are only able to produce \textit{WAR} estimates for
offensive skill position players. However, a benefit
of our framework is the ability to separate a player's total value into
the three components of \(WAR_{air}\), \(WAR_{yac}\), and
\(WAR_{rush}\). Additionally, we provide the first statistical estimates for a team's rush blocking based on play-by-play data.
\subsection{Division of Credit}
\label{sec:war_model}
In order to properly evaluate players, we need to allocate the portion
of a play's value \(\delta_{f,i}\) to each player on the field.
Unfortunately, the NFL does not publicly specify which players are on
the field for every play, preventing us from directly applying
approaches similar to those used in basketball and hockey discussed in
Section \ref{sec:prev-work-players}, where the presence of each player on
the playing surface is treated as an indicator covariate in a linear
model that estimates the marginal effect of that player on some game
outcome \citep{Kubatko07, Macdonald11, Thomas13}. Instead, the data
available publicly from the NFL and obtained via \texttt{nflscrapR} is
limited to only those players directly involved in the play, plus
contextual information about the play itself. For rushing plays, this
includes:
\begin{itemize}
\tightlist
\item
Players: rusher and tackler(s)
\item
Context: run gap (end, tackle, guard, middle) and direction (left,
middle, right)
\end{itemize}
\begin{figure}[!h]
\includegraphics[width=14cm]{run_gaps_plot.jpeg}
\centering
\caption{Offensive Line Gaps for Rushing Plays.}
\label{fig:gaps}
\end{figure}
\autoref{fig:gaps} provides a diagram of the run gaps (in blue) and the
positions along the offensive line (in black). In the NFL play-by-play, the gaps are not referred to with letters, as they commonly are by football players and coaches; instead, the terms ``middle'', ``guard'', ``tackle'', and ``end'' are used. For the purposes of this paper, we define the following linkage between these two nomenclatures:
\begin{itemize}
\tightlist
\item
``A'' Gap = ``middle''
\item
``B'' Gap = ``guard''
\item
``C'' Gap = ``tackle''
\item
``D'' Gap = ``end''
\end{itemize}
For passing plays, information about each play includes:
\begin{itemize}
\tightlist
\item
Players: passer, targeted receiver, tackler(s), and interceptor
\item
Context: air yards, yards after catch, location (left, middle, right),
and if the passer was hit on the play.
\end{itemize}
\hypertarget{multilevel-modeling}{%
\subsubsection{Multilevel Modeling}\label{multilevel-modeling}}
All players in the NFL belong to positional groups that dictate how they
are used in the context of the game. For example, for passing plays we
have the QB and the targeted receiver. However, over the course of an
NFL season, the average QB will have more pass attempts than the average
receiver will have targets, because there are far fewer QBs (more than
60 with pass attempts in the 2017 NFL season) compared to receivers
(more than 400 targeted receivers in the 2017 season).
Because of these systematic differences across positions, there are
differing levels of variation in each position's performance. Additionally, since every play involving the same player is
a repeated measure of performance, the plays themselves are not
independent.
To account for these structural features of football, we use a
multilevel model (also referred to as hierarchical, random-effects, or
mixed-effects model), which embraces this positional group structure and
accounts for the observation dependence. Multilevel models have recently
gained popularity in baseball statistics due to the development of
catcher and pitcher metrics by Baseball Prospectus
\citep{Brooks15, Turkenkopf15}, but have been used in sports dating back
at least to 2013 \citep{Thomas13}. Here, we extend their use in a new direction, to
assessing offensive player contributions in football, using the play
values \(\delta_{f,i}\) from Section \ref{sec:ep_wp_model} as the
response.
In order to arrive at individual player effects we use
varying-intercepts for the groups involved in a play. A simple example
of modeling \(\delta_{f,i}\) with varying-intercepts for two groups, QBs
as \(Q\) and receivers as \(C\), with covariates $X_i$ and coefficients $\beta$ is as follows:
\begin{equation}
\label{ex_model_top_level}
\delta_{f,i} \sim Normal(Q_{q[i]} + C_{c[i]} + X_i \cdot \beta,\ \sigma_{\delta}^2),\ for\ i\ =\ 1,\hdots,n\ \mbox{plays},
\end{equation}
\noindent where the key feature distinguishing multilevel regression from
classical regression is that the group coefficients vary according to
their own model:
\begin{gather}
Q_q \sim Normal(\mu_{Q},\ \sigma_{Q}^2),\ \mbox{for}\ q\ =\ 1,\hdots, \mbox{\#\ of\ QBs},\nonumber \\
C_c \sim Normal(\mu_{C},\ \sigma_{C}^2),\ \mbox{for}\ c\ =\ 1,\hdots, \mbox{\#\ of\ receivers}.
\end{gather}
By assigning a probability distribution (such as the Normal distribution) to the group intercepts, \(Q_q\) and \(C_c\), with parameters estimated from the data (such as \(\mu_{Q}\) and \(\sigma_{Q}\) for passers), each estimate is pulled toward their respective group mean levels \(\mu_{Q}\) and \(\mu_{C}\). In this example, QBs and receivers involved in fewer plays will be pulled closer to their overall group averages as compared to those involved in more plays and thus carrying more information, resulting in partially pooled estimates \citep{Gelman07}. This approach provides us with average
individual effects on play value added while also providing the
necessary shrinkage towards the group averages. All models we use for
division of credit are of this varying-intercept form, and are fit using penalized likelihood via the \texttt{lme4} package in \texttt{R} \citep{lme4}. While these models are not explicitly Bayesian, as \citet{Gelman07} write, ``[a]ll multilevel models are Bayesian in the sense of assigning probability distributions to the varying regression coefficients'', meaning we are taking into consideration all members of the group when estimating the varying intercepts rather than just an individual effect.
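
For instance, the simple two-group example in \autoref{ex_model_top_level} could be fit with \texttt{lme4} as sketched below, where \texttt{pass\_pbp} and its column names are hypothetical stand-ins for the play-level data:

\begin{verbatim}
library(lme4)

simple_fit <- lmer(epa ~ home + shotgun + (1 | passer_id) + (1 | receiver_id),
                   data = pass_pbp)

# The estimated varying intercepts are the partially pooled player effects
ranef(simple_fit)$passer_id     # one intercept per QB
ranef(simple_fit)$receiver_id   # one intercept per receiver
\end{verbatim}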
Our assumption of normality for \(\delta_{f,i}\) follows from our focus on \(EPA\) and \(WPA\) values, which can be both positive and negative, exhibiting roughly symmetric distributions. We refer to an intercept estimating a player's average effect as their \emph{individual
points/probability added} (\(iPA\)), with points for modeling \(EPA\)
and probability for modeling \(WPA\). Similarly, an intercept estimating
a team's average effect is their \emph{team points/probability added}
(\(tPA\)). Tables \ref{table-pv-vars} and \ref{table-groups} provide the
notation and descriptions for the variables and group terms in the
models apportioning credit to players and teams on plays. The variables
in Table \ref{table-pv-vars} would be represented by \(X\), and their
effects by \(\beta\) in \autoref{ex_model_top_level}.
\begin{table}
\centering
\caption{Description of variables in the models assessing player and team effects.}
\label{table-pv-vars}
\begin{tabular}{ p{3cm} p{9cm}}
\hline \\ [-1.5ex]
Variable name & Variable description \\ [1ex]
\hline \\ [-1.5ex]
Home & Indicator for if the possession team was home \\ [1ex]
Shotgun & Indicator for if the play was in shotgun formation \\ [1ex]
NoHuddle & Indicator for if the play was in no huddle \\ [1ex]
QBHit & Indicator for if the QB was hit on a pass attempt \\ [1ex]
PassLocation & Set of indicators for if the pass location was either middle or right (reference group is left) \\ [1ex]
AirYards & Orthogonal distance in yards from the line of scrimmage to where the receiver was targeted or caught the ball \\ [1ex]
RecPosition & Set of indicator variables for if the receiver's position was either TE, FB, or RB (reference group is WR) \\ [1ex]
RushPosition & Set of indicator variables for if the rusher's position was either FB, WR, or TE (reference group is RB) \\ [1ex]
PassStrength & EPA per pass attempt over the course of the season for the possession team \\ [1ex]
RushStrength & EPA per rush attempt over the course of the season for the possession team \\ [1ex]
\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Description of groups in the models assessing player and team effects.}
\label{table-groups}
\begin{tabular}{ p{2cm} p{2cm} p{8cm} }
\hline \\ [-1.5ex]
Group & Individual & Description \\ [1ex]
\hline \\ [-1.5ex]
$Q$ & $q$ & QB attempting a pass or rush/scramble/sack \\ [1ex]
$C$ & $c$ & Targeted receiver on a pass attempt \\ [1ex]
$H$ & $\iota$ & Rusher on a rush attempt \\ [1ex]
$T$ & $\tau$ & Team-side-gap on a rush attempt, combination of the possession team, rush gap and direction \\ [1ex]
$F$ & $\nu$ & Opposing defense of the pass \\ [1ex]
\hline
\end{tabular}
\end{table}
\subsubsection{Passing Models}
Rather than modeling the \(\delta_{f,i}\) (\(EPA\) or \(WPA\))
for a passing play, we take advantage of the availability of air yards
and develop two separate models for \(\delta_{f,i,air}\) and
\(\delta_{f,i,yac}\). We are not crediting the QB solely for the value
gained through the air, nor the receiver solely for the value gained
from after the catch. Instead, we propose that both the QB and receiver,
as well as the opposing defense, should have credit divided amongst them
for both types of passing values. We let \(\Delta_{air}\) and
\(\Delta_{yac}\) be the response variables for the air yards and yards
after catch models, respectively. Both models consider all passing
attempts, but the response variable depends on the model:
\begin{gather}
\label{pass-value}
\Delta_{air} = \delta_{f,i,air} \cdot \boldsymbol{1}(\mbox{completion}) + \delta_{f,i} \cdot \boldsymbol{1}(\mbox{incompletion}), \nonumber \\
\Delta_{yac} = \delta_{f,i,yac} \cdot \boldsymbol{1}(\mbox{completion}) + \delta_{f,i} \cdot \boldsymbol{1}(\mbox{incompletion}),
\end{gather}
\noindent where \(\boldsymbol{1}(\mbox{completion})\) and
\(\boldsymbol{1}(\mbox{incompletion})\) are indicator functions for
whether or not the pass was completed. This serves to assign all
completions the \(\delta_{f,i,air}\) and \(\delta_{f,i,yac}\) as the
response for their respective models, while incomplete passes are
assigned the observed \(\delta_{f,i}\) for both models. In using this
approach, we emphasize the importance of completions, crediting accurate
passers for allowing their receiver to gain value after the catch.
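
A short sketch of constructing these two responses, assuming hypothetical columns \texttt{complete\_pass}, \texttt{epa}, \texttt{epa\_air}, and \texttt{epa\_yac} in a play-level data frame \texttt{pass\_pbp}, is:

\begin{verbatim}
# Completions get the air/YAC split; incompletions get the full play value
pass_pbp$delta_air <- ifelse(pass_pbp$complete_pass == 1,
                             pass_pbp$epa_air, pass_pbp$epa)
pass_pbp$delta_yac <- ifelse(pass_pbp$complete_pass == 1,
                             pass_pbp$epa_yac, pass_pbp$epa)
\end{verbatim}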
The passing model for \(\Delta_{air}\) is as follows:
\begin{gather}
\Delta_{air} \sim Normal(Q_{air,q[i]} + C_{air,c[i]} + F_{air,\nu[i]} + \boldsymbol{A}_i \cdot \boldsymbol{\alpha},\ \sigma_{\Delta_{air}})\ \mbox{for}\ i\ =\ 1,\hdots,\ n\ \mbox{plays},\nonumber \\
Q_{air,q} \sim Normal(\mu_{Q_{air}},\sigma_{Q_{air}}^2),\ \mbox{for}\ q\ =\ 1,\hdots, \mbox{\#\ of\ QBs},\nonumber \\
C_{air,c} \sim Normal(\mu_{C_{air}},\sigma_{C_{air}}^2),\ \mbox{for}\ c\ =\ 1,\hdots, \mbox{\#\ of\ receivers}, \\
F_{air,\nu} \sim Normal(\mu_{F_{air}},\sigma_{F_{air}}^2),\ \mbox{for}\ \nu\ =\ 1,\hdots, \mbox{\#\ of\ defenses},\nonumber
\end{gather}
\noindent where the covariate vector \(\boldsymbol{A}_i\) contains a set
of indicator variables for Home, Shotgun, NoHuddle, QBHit, Location,
RecPosition, as well as the RushStrength value, while
\(\boldsymbol{\alpha}\) is the corresponding coefficient vector. The
passing model for \(\Delta_{yac}\) is of similar form:
\begin{gather}
\Delta_{yac} \sim Normal(Q_{yac,q[i]} + C_{yac,c[i]} + F_{yac,\nu[i]} + \boldsymbol{B}_i \cdot \boldsymbol{\beta},\ \sigma_{\Delta_{yac}})\ \mbox{for}\ i\ =\ 1,\hdots,\ n\ \mbox{plays},\nonumber \\
Q_{yac,q} \sim Normal(\mu_{Q_{yac}},\ \sigma_{Q_{yac}}^2),\ \mbox{for}\ q\ =\ 1,\hdots, \mbox{\#\ of\ QBs},\nonumber \\
C_{yac,c} \sim Normal(\mu_{C_{yac}},\ \sigma_{C_{yac}}^2),\ \mbox{for}\ c\ =\ 1,\hdots, \mbox{\#\ of\ receivers}, \\
F_{yac,\nu} \sim Normal(\mu_{F_{yac}},\ \sigma_{F_{yac}}^2),\ \mbox{for}\ \nu\ =\ 1,\hdots, \mbox{\#\ of\ defenses},\nonumber
\end{gather}
\noindent where the covariate vector \(\boldsymbol{B}_i\) contains the
same set of indicator variables in \(\boldsymbol{A}_i\) but also
includes the AirYards and interaction terms between AirYards and the
various RecPosition indicators, with \(\boldsymbol{\beta}\) as its
respective coefficient vector. We include the RushStrength in the
passing models as a group-level predictor to control for the possession
team's rushing strength and the possible relationship between the two
types of offense. For QBs, their estimated \(Q_{air,q}\) and
\(Q_{yac,q}\) intercepts represent their \(iPA_{air}\) and \(iPA_{yac}\)
values respectively (same logic applies to receivers). Likewise, the
opposing defense values of \(F_{air,\nu}\) and \(F_{yac,\nu}\) are their
\(tPA_{air}\) and \(tPA_{yac}\) values.
\hypertarget{rushing-models}{%
\subsubsection{Rushing Models}\label{rushing-models}}
For rushing plays, we again model the play values \(\delta_{f,i}\).
However, we build two separate models, with one rushing model for QBs
and another for all non-QB rushes. This is because we cannot consistently separate
(in the publicly available data) designed QB rushes from scrambles on
broken plays, the characteristics of which result in substantially
different distributions of play value added. It is safe to assume all
non-QB rushes are designed rushes. Our rushing model for QBs consists of
all scrambles, designed runs, and sacks (to account for skilled rushing
QBs minimizing the loss on sacks). The QB rushing model is as follows:
\begin{gather}
\delta_{f,i} \sim Normal(Q_{rush,q[i]} + F_{rush_{Q},\ \nu[i]} + \boldsymbol{\Gamma}_i \cdot \boldsymbol{\gamma},\ \sigma_{\delta_{rush_{Q}}})\ \mbox{for}\ i\ =\ 1,\hdots,\ n\ \mbox{plays},\nonumber \\
Q_{rush,q} \sim Normal(\mu_{Q_{rush}},\ \sigma_{Q_{rush}}^2),\ \mbox{for}\ q\ =\ 1,\hdots, \mbox{\#\ of\ QBs}, \\
F_{rush_{Q},\nu} \sim Normal(\mu_{F_{rush_{Q}}},\ \sigma_{F_{rush_{Q}}}^2),\ \mbox{for}\ \nu\ =\ 1,\hdots, \mbox{\#\ of\ defenses},\nonumber
\end{gather}
\noindent where the covariate vector \(\boldsymbol{\Gamma}_i\) contains
a set of indicator variables for Home, Shotgun, NoHuddle, as well as the
PassStrength variable, where \(\boldsymbol{\gamma}\) is the corresponding
coefficient vector.
For the designed rushing plays of non-QBs, we include an additional
group variable \(T\). As detailed in \autoref{table-pv-vars} and \autoref{fig:gaps}, \(T\)
serves as a proxy for the offensive linemen or blockers involved in the
rushing attempt. Each team has seven possible \(T\) levels of the form
team-side-gap. For example, the Pittsburgh Steelers (PIT) have the
following levels: PIT-left-end, PIT-left-tackle, PIT-left-guard,
PIT-middle-center, PIT-right-guard, PIT-right-tackle, PIT-right-end. The
non-QB rushing model is as follows:
\begin{gather}
\delta_{f,i} \sim Normal(H_{\iota[i]} + T_{\tau[i]} + F_{rush,\nu[i]} + \boldsymbol{P}_i \cdot \boldsymbol{\rho},\ \sigma_{\delta_{rush}})\ \mbox{for}\ i\ =\ 1,\hdots,\ n\ \mbox{plays},\nonumber \\
H_{\iota} \sim Normal(\mu_{H},\ \sigma_{H}^2),\ \mbox{for}\ \iota\ =\ 1,\hdots, \mbox{\#\ of\ rushers},\nonumber \\
T_{\tau} \sim Normal(\mu_{T},\ \sigma_{T}^2),\ \mbox{for}\ \tau\ =\ 1,\hdots, \mbox{\#\ of\ team-side-gaps}, \\
F_{rush,\nu} \sim Normal(\mu_{F_{rush}},\ \sigma_{F_{rush}}^2),\ \mbox{for}\ \nu\ =\ 1,\hdots, \mbox{\#\ of\ defenses},\nonumber
\end{gather}
\noindent where the covariate vector \(\boldsymbol{P}_i\) contains a set
of indicator variables for Home, Shotgun, NoHuddle, RushPosition, and
PassStrength, and where \(\boldsymbol{\rho}\) is the corresponding
coefficient vector. The resulting \(Q_{rush,q}\) and \(H_{\iota}\)
estimates are the \(iPA_{rush}\) values for the QB and non-QB rushers,
respectively. Additionally, the \(T_{\tau}\) estimate is the
\(tPA_{rush,side-gap}\) for one of the seven possible side-gaps for the
possession team, while \(F_{rush,\nu}\) and \(F_{rush_{Q},\nu}\) are the
\(tPA_{rush}\) and \(tPA_{rush_Q}\) values for the opposing defense for
non-QB and QB rushes.
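
A sketch of the non-QB rushing model in \texttt{lme4} follows, again with hypothetical column names for the covariates in \(\boldsymbol{P}_i\) and the grouping factors:

\begin{verbatim}
library(lme4)

rush_fit <- lmer(epa ~ home + shotgun + no_huddle + rush_position +
                   pass_strength +
                   (1 | rusher_id) + (1 | team_side_gap) + (1 | def_team),
                 data = rush_pbp)

ranef(rush_fit)$rusher_id       # H: iPA_rush for each rusher
ranef(rush_fit)$team_side_gap   # T: tPA_{rush,side-gap} for each team-side-gap
ranef(rush_fit)$def_team        # F: tPA_rush for each opposing defense
\end{verbatim}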
\subsubsection{Individual Points/Probability Added}
\label{sec:ipa}
Let \(\kappa\) refer to the number of attempts for a type of play. Using
an estimated type of \(iPA\) value for a player \(p\) and multiplying by
the player's associated number of attempts provides us with an
\emph{individual points/probability above average} (\(iPAA_p\)) value.
There are three different types of \(iPAA_p\) values for each position:
\begin{gather}
iPAA_{p,air} = \kappa_{p,pass} \cdot iPA_{p,air}, \nonumber \\
iPAA_{p,yac} = \kappa_{p,pass} \cdot iPA_{p,yac}, \\
iPAA_{p,rush} = \kappa_{p,rush} \cdot iPA_{p,rush},\nonumber
\end{gather}
\noindent where the values for \(\kappa_{p,pass}\) and
\(\kappa_{p,rush}\) depend on the player's position. For QBs,
\(\kappa_{p,pass}\) equals their number of pass attempts, while
\(\kappa_{p,rush}\) is the sum of their rush attempts, scrambles, and
sacks. For non-QBs \(\kappa_{p,pass}\) equals their number of targets
and \(\kappa_{p,rush}\) is their number of rush attempts. Summing all
three components provides us with player \(p\)'s total individual
points/probability above average, \(iPAA_p\).
\hypertarget{comparing-to-replacement-level}{%
\subsection{Comparing to Replacement
Level}\label{comparing-to-replacement-level}}
\label{sec:repl_level}
As described in Section \ref{sec:prev-work-players}, it is desirable to
calculate a player's value relative to a ``replacement level'' player's
performance. There are many ways to define replacement level. For
example, \citet{Thomas15} define a concept called ``poor man's
replacement'', where players with limited playing time are pooled, and a
single effect is estimated in a linear model, which is considered
replacement level. Others provide more abstract definitions of
replacement level, as the skill level at which a player can be acquired
freely or cheaply on the open market \citep{Tango07}.
We take a similar approach to the \textit{openWAR} method, defining
replacement level by using a roster-based approach \citep{Baumer15}, and
estimating the replacement level effects in a manner similar to that of
\citet{Thomas15}. \citet{Baumer15} argue that ``replacement level''
should represent a readily available player that can replace someone
currently on a team's active roster. Due to differences in the number of
active players across positions in football, we define replacement level
separately for each position. Additionally, because positions are used differently in the NFL, we find separate replacement level players for receiving as compared to rushing. In doing so, we
appropriately handle cases where certain players have different roles.
For example, a RB that has a substantial number of targets but very few
rushing attempts can be considered a replacement level rushing RB, but
not a replacement level receiving RB.
Accounting for the 32 NFL teams and the typical construction of a roster
\citep{Lillibridge13}, we consider the following players to be ``NFL
level'' for each of the non-QB positions:
\begin{itemize}
\tightlist
\item
rushing RBs = \(32 \cdot 3 = 96\) RBs sorted by rushing attempts,
\item
rushing WR/TEs = \(32 \cdot 1 = 32\) WR/TEs sorted by rushing attempts,
\item
receiving RBs = \(32 \cdot 3 = 96\) RBs sorted by targets,
\item
receiving WRs = \(32 \cdot 4 = 128\) WRs sorted by targets,
\item
receiving TEs = \(32 \cdot 2 = 64\) TEs sorted by targets.
\end{itemize}
Using this definition, all players with fewer rushing attempts or targets than the players considered to be NFL level are deemed replacement level. This approach is consistent with the one taken by
Football Outsiders \citep{Schatz03}. We combine the rushing replacement
level for WRs and TEs because there are very few WRs and TEs with
rushing attempts.
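
As an example of this roster-based cutoff, a sketch for labeling replacement level rushing RBs, using a hypothetical player-season summary \texttt{rb\_rush} with columns \texttt{player\_id} and \texttt{rush\_attempts}, is:

\begin{verbatim}
library(dplyr)

rb_rush <- rb_rush %>%
  arrange(desc(rush_attempts)) %>%
  mutate(replacement_level = row_number() > 32 * 3)  # top 96 RBs are NFL level
\end{verbatim}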
\begin{figure}[!h]
\includegraphics[width=14cm]{pos_play_involve_distr.jpeg}
\centering
\caption{Distribution of the proportion of offensive plays a player is directly involved in by position (2009-2017).}
\label{play-distr-pos}
\end{figure}
In order to find replacement level QBs, we proceed in a different
manner, due to the nature of QB usage in the NFL.
\autoref{play-distr-pos} displays the distribution of the percentage of
a team's plays in which a player is directly involved (passer, receiver,
or rusher) by position using data from 2009 to 2017. This does not represent the percentage of team snaps a player participates in; rather, for plays that directly involve a given position, it shows the distribution of team play percentages for every player of that
position (e.g. New Orleans Saints' RB Alvin Kamara was involved in
38.39\% of all Saints plays that directly involved a RB). While the
distributions for RB, WR, and TE are unimodal and clearly skewed right,
the distributions for QBs are bimodal for each season. This is an
unsurprising result, since most NFL teams rely on a single QB for an
entire season, resulting in them being involved in more than 80\% of the
team's plays at QB.
Observing this clear difference in the distribution for QBs, we consider
two definitions of replacement level for QBs. The first is to define replacement level as any QB with less than ten percent involvement in
their team's plays that directly involve QBs. This approach essentially
asserts that backup QBs with limited playing time should represent
replacement level for QBs, rather than assuming all NFL teams have at
least a certain number of NFL level QBs on their roster. The second
option we consider is to limit NFL level to be the 32 QBs that attempted
a pass in the first quarter of the first game of the season for each
team, and label all remaining QBs as replacement level. The logic here
is that NFL teams typically do not sign free agent QBs outside of their
initial roster during the course of the season because it takes time to
learn a team's playbook and offensive schemes. We recognize that these
definitions are far from perfect, but we hope they provide a starting
point for defining replacement level that researchers can improve upon in the future.
Prior to fitting the models discussed in Section \ref{sec:war_model},
every player who is identified as replacement level is replaced in their
corresponding play-by-play data with their replacement label
(e.g.~Replacement QB, Replacement RB-rushing, Replacement RB-receiving,
etc). By doing so, all replacement level players for a particular
position and type (receiving versus rushing) have the same
\(iPA^{repl}\) estimate. We then calculate a player's value above
replacement, \emph{individual points/probability above replacement}
(\(iPAR_p\)) in the same manner as \citet{Baumer15} and
\citet{Thomas15}, by calculating a replacement level ``shadow'' for a
particular player. For a player \(p\), this is done by first calculating
their replacement ``shadow'' value, \(iPAA_p^{repl}\), using their respective number of
attempts:
\begin{gather}
iPAA^{repl}_{p,air} = \kappa_{pass} \cdot iPA^{repl}_{air},\nonumber \\
iPAA^{repl}_{p,yac} = \kappa_{pass} \cdot iPA^{repl}_{yac},\\
iPAA^{repl}_{p,rush} = \kappa_{rush} \cdot iPA^{repl}_{rush},\nonumber
\end{gather}
\noindent which leads to natural calculations for the three \(iPAR\)
values:
\begin{gather}
iPAR_{p,air} = iPAA_{p,air} - iPAA^{repl}_{p,air},\nonumber \\
iPAR_{p,yac} = iPAA_{p,yac} - iPAA^{repl}_{p,yac},\\
iPAR_{p,rush} = iPAA_{p,rush} - iPAA^{repl}_{p,rush}.\nonumber
\end{gather}
\noindent Taking the sum of the three, we arrive at a player's total
\(iPAR_p\).
\subsection{Conversion to Wins}
\label{sec:win_conversion}
If the play's value used for modeling purposes was \(WPA\) based, then
the final \(iPAR\) values are an individual's win probability added above
replacement, which is equivalent to their \emph{wins above replacement}
(\(WAR\)). However, for the \(EPA\)-based play value response, the \(iPAR\)
values represent the individual expected points added above replacement, and thus
require a conversion from points to wins. We use a linear regression
approach, similar to that of \citet{Zhou17} for football and \citet{Thomas15} for hockey, to estimate the relationship
between a team \(t\)'s regular season win total and their score
differential (\(S\)) during the season, \begin{equation}
Wins_t = \beta_0 + \beta_{S}S_t + \epsilon_t,\text{ where } \epsilon_t \sim N(0, \sigma^2)\ (iid) .
\end{equation}
\begin{figure}[!h]
\includegraphics[width=14cm]{win_score_rel_chart.jpeg}
\centering
\caption{Relationship between number of wins and score differential in the regular season by year (2009-2017).}
\label{wins-score}
\end{figure}
\autoref{wins-score} displays the estimated linear regression fits for
each season from 2009 to 2017. The resulting coefficient estimate
\(\hat{\beta}_{S}\) represents the increase in the number of
wins for each one point increase in score differential. Thus we take the
reciprocal, \(\frac{1}{\hat{\beta}_{S}}\) to arrive at the number of
points per win. We estimate \(WAR\) for the \(EPA\) based approach by
taking the \(iPAR\) values and dividing by the estimated points per win
(equivalent to multiplying \(iPAR\) by \(\hat{\beta}_{S}\)).
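
A sketch of this conversion, assuming a hypothetical team-season data frame \texttt{team\_seasons} with columns \texttt{wins} and \texttt{score\_diff}, and a player's total \texttt{ipar\_total} from above, is:

\begin{verbatim}
pts_fit <- lm(wins ~ score_diff, data = team_seasons)
points_per_win <- 1 / coef(pts_fit)["score_diff"]

# EPA-based WAR for a player: divide total iPAR (in points) by points per win
war_epa <- ipar_total / points_per_win
\end{verbatim}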
\subsection{Uncertainty}
\label{sec:resample}
Similar to the approach taken by \citet{Baumer15} for estimating the
variability in their \emph{openWAR} metric, we use a resampling strategy
to generate distributions for each individual player's \(WAR\) values.
Rather than resampling plays in which a particular player is involved
to arrive at estimates for their performance variability, we resample
entire team drives. We do this to account for the fact that player usage
is dependent on team decision making, meaning that the random variation
in individual events is dependent upon the random variation in team
events. Thus, we must resample at the team level to account for the
variability in a player's involvement. The decision to resample whole
drives instead of individual plays is intended to produce sampling that more realistically reflects game flow, given the possibility of dependencies within a drive with regards to team play-calling. We recognize this is a simple viewpoint
of possible play correlations and consider exploration of this concept
for future work. In Section \ref{results}, all uncertainty estimation uses this drive-resampling approach, with 1000 simulated seasons.
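
A sketch of one simulated season under this scheme, assuming a play-by-play data frame \texttt{pbp} with a unique drive identifier \texttt{drive\_id} and a hypothetical \texttt{compute\_war} function wrapping the modeling pipeline above, is:

\begin{verbatim}
resample_drives <- function(pbp) {
  drive_ids <- unique(pbp$drive_id)
  sampled   <- sample(drive_ids, length(drive_ids), replace = TRUE)
  # Stack the plays from the resampled drives to form one simulated season
  do.call(rbind, lapply(sampled, function(d) pbp[pbp$drive_id == d, ]))
}

# sim_war <- replicate(1000, compute_war(resample_drives(pbp)),
#                      simplify = FALSE)
\end{verbatim}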
\section{Results}
\label{results}
Given the definitions in Section \ref{sec:repl_level}, we found the following replacement level designations for the 2017 NFL season for non-QB positions:
\begin{itemize}
\tightlist
\item
rushing: 52 of the 148 RBs are replacement level,
\item
rushing: 278 of the 310 WR/TEs are replacement level,
\item
receiving: 52 of the 148 RBs are replacement level,
\item
receiving: 73 of the 201 WRs are replacement level,
\item
receiving: 45 of the 109 TEs are replacement level.
\end{itemize}
For the QB position, we consider both approaches discussed in
Section \ref{sec:repl_level}. The ``ten percent of QB plays cutoff'' approach results in 25 replacement level QBs, while the ``one QB for each team'' approach results in 39 replacement level QBs out of the 71 in total.
First we compare the distributions of both types of \(WAR\) estimates, \(EPA\)-based and \(WPA\)-based, for the two considered definitions of replacement level QBs in \autoref{qb-war-distr}. It is clear that the ``one QB for each team'' approach for defining replacement level leads to lower \(WAR\) values in general, likely because some QBs who begin the season as back-ups perform better than those who begin the season as starters, yet are designated replacement level with this approach. For simplicity we only consider the ten percent cutoff rule for the rest of the paper.
We compare the distributions for both types of \(WAR\) estimates,
\(EPA\)-based and \(WPA\)-based, by position in \autoref{war-distr}. For
all positions, the \(EPA\)-based \(WAR\) values tend to be higher than the
\(WPA\)-based values. This could be indicative of a player performing
well in meaningless situations due to the score differential,
particularly for QBs. It is clear that QBs have larger \(WAR\) values
than the other positions, reflecting their involvement in every passing
play and potentially providing value by rushing. Although this coincides with conventional wisdom regarding the importance of the QB position,
we note that we have not controlled for all possible contributing factors, such as the specific offensive linemen, the team's offensive schemes, or
the team's coaching ability due to data limitations. Researchers with
access to this information could easily incorporate their proprietary data into this framework to reach a better assessment of QB value.
\begin{figure}[!h]
\includegraphics[width=16cm]{war_qb_repl_distr.jpeg}
\centering
\caption{Distribution of QB \textit{WAR} in 2017 season by type and replacement level definition.}
\label{qb-war-distr}
\end{figure}
\begin{figure}[!h]
\includegraphics[width=16cm]{war_position_distr.jpeg}
\centering
\caption{Distribution of \textit{WAR} in 2017 season by type and position (ten percent cutoff used for replacement level QBs).}
\label{war-distr}
\end{figure}
Following Major League Baseball's 2017 MVP race, \(WAR\) has received
heavy criticism for its unclear relationship with wins
\citep{James17, Tango17}. For this reason, we focus on the \(WPA\)-based
version of \(WAR\), with its direct relationship to winning games.
\autoref{war-bars} displays the top five players based on total \(WAR\)
for each position in the 2017 season. Each chart is arranged in
descending order by each player's estimated \textit{WAR}, and displays the three separate \(WAR\) values of \(WAR_{air}\), \(WAR_{yac}\) and
\(WAR_{rush}\). By doing this separation, we can see how certain types
of players vary in their performances. Tom Brady, for instance, is the only
QB in the top five with negative \(WAR_{rush}\). Alvin Kamara
appears to be providing roughly equal value from rushing and receiving,
while the other top RB performances are primarily driven by rushing success.
\begin{figure}[!h]
\includegraphics[width = 16cm]{war_top_five_position.jpeg}
\centering
\caption{Top five players in \textit{WAR} by position for the 2017 season.}
\label{war-bars}
\end{figure}
Elaborating on this separation of types of players, we can use the random
intercepts from the multilevel models, the \(iPA\) values, to see the underlying structure of players in terms of their efficiency. Figures \ref{qb-ipa-plots} and \ref{rb-ipa-plots}
reveal the separation of types of QBs and RBs, respectively. The origin point for both charts represents league averages. For QBs, we plot their estimates for \(iPA_{air}\) against \(iPA_{yac}\), providing an overview of the types of passers in the NFL. The two components represent different skills: providing value by throwing deep passes through the air, exemplified by Jameis Winston, as compared to throwing short but accurate passes, exemplified by Case Keenum. We can also see where the replacement level QB estimates fall, for context. For RBs, we add together their \(iPA_{air}\) and \(iPA_{yac}\) estimates to summarize their individual receiving effect and plot this against their \(iPA_{rush}\) estimates. This provides a separation between RBs that provide value as receivers versus those who provide positive value primarily from rushing, such as Ezekiel Elliott. New Orleans Saints RB Alvin Kamara stands out from the rest of the league's RBs, providing elite value in both areas.
\begin{figure}[!h]
\includegraphics[width=16cm]{qb_air_yac_plot.jpeg}
\centering
\caption{Estimates for QB efficiency from \(iPA_{air}\) against \(iPA_{yac}\) for the 2017 season.}
\label{qb-ipa-plots}
\end{figure}
\begin{figure}[!h]
\includegraphics[width=16cm]{rb_rush_pass_plot.jpeg}
\centering
\caption{Estimates for RB efficiency from receiving (\(iPA_{air}\ +\ iPA_{yac}\)) against rushing (\(iPA_{rush}\)) for the 2017 season.}
\label{rb-ipa-plots}
\end{figure}
Using the drive resampling approach outlined in Section \ref{sec:resample},
we can compare the variability in player performance based on
1000 simulated seasons. \autoref{qb-war-sims} compares the simulation
distributions of the three types of \(WAR\) values (\(WAR_{air}\),
\(WAR_{yac}\), \(WAR_{rush}\)) for selected QBs in the 2017 NFL season,
with a reference line at 0 for replacement-level. We can clearly see
that the variability associated with player performance is not constant,
which is not surprising given the construction of the resampling at the
drive level. However, we can see some interesting features of QB
performances, such as how Seattle Seahawks QB Russell Wilson's three
types of \(WAR\) distributions are overlapping significantly, emphasizing
his versatility. Additionally, New England Patriots QB Tom Brady
displays large positive \(WAR_{air}\) and \(WAR_{yac}\) values, but a
clearly negative \(WAR_{rush}\) value. Finally, Joe Flacco's 2017 performance was at or below replacement level in the vast majority of simulations across all three types of \textit{WAR}, indicating that he is not elite.
\begin{figure}[!h]
\includegraphics[width=14cm]{qb_sim_distr.jpeg}
\centering
\caption{Simulation distributions of 2017 \textit{WAR} by type for a selection of twelve QBs.}
\label{qb-war-sims}
\end{figure}
\autoref{rb-war-sims} displays the simulation distributions for the top
ten RBs during the 2017 NFL season, as ranked by their average total \(WAR\) across all simulations. Relative to the \(WAR\) values for QBs in \autoref{qb-war-sims}, the best RBs in the league are providing limited value to their teams. This is in agreement with the recent trend of NFL teams paying QBs increasingly large salaries while compensating RBs less \citep{Morris17}. Two of
the top RBs in the 2017 season were rookies Alvin Kamara and Kareem Hunt, leading to discussion of which player deserved to be the NFL's rookie of the year. Similar to \citet{Baumer15}, we address this question using our simulation approach and display the joint distribution of the two players' 2017 performances in \autoref{hunt-kam-comp}. In nearly 71\% of the simulated seasons, Kamara leads Hunt in \(WAR\), giving us reasonable certainty that Kamara provided more value to his team than Hunt did in their rookie seasons. It should not come as a surprise that the two players' performances are correlated, since each simulation consists of refitting the various multilevel models, resulting in new estimates for the group averages, the individual player intercepts, and the replacement level performance.
\begin{figure}[!h]
\includegraphics[width=14cm]{top_rb_sim_distr.jpeg}
\centering
\caption{Simulation distributions of 2017 \textit{WAR} value by type for top ten RBs.}
\label{rb-war-sims}
\end{figure}
\begin{figure}[!h]
\includegraphics[width=14cm]{hunt_kamara_sim.jpeg}
\centering
\caption{Joint distribution of \textit{WAR} for Alvin Kamara vs. Kareem Hunt in 2017.}
\label{hunt-kam-comp}
\end{figure}
Additionally, we examine the consistency of the \(WPA\)-based \(WAR\)
from season-to-season based on the autocorrelation within players
between their 2016 and 2017 seasons (excluding replacement level) and
compare this to other commonly used statistics for QBs and RBs. Seen in
\autoref{table-qb-stats}, our estimates for QB \(WAR\) displayed higher
correlations than both the commonly used Passer Rating statistic as well
as Pro-Football-Reference.com's Adjusted Net Yards per Passing Attempt
(ANY/A), which includes yards lost from sacks \citep{PFR}. We also see
higher correlations for RB \(WAR\) as compared to Brian Burke's Success
Rate (percentage of rush attempts with \(EPA\) greater than zero) and
rushing yards per attempt. Future work should consider a proper review and assessment of football statistics, accounting for the number of attempts needed to determine the reliability of a statistic as well as for when a player changes teams \citep{Yurko17}, and should also apply the framework laid out by \citet{Franks17}.
\begin{table}
\centering
\caption{Autocorrelation of QB statistics between 2016-17 seasons.}
\label{table-qb-stats}
\begin{tabular}{ c c c c}
\hline \\ [-1.5ex]
& \textit{WAR} & Passer Rating & ANY/A \\ [1ex]
\hline \\ [-1.5ex]
Autocorrelation & 0.598 & 0.478 & 0.295 \\ [1ex]
\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Autocorrelation of RB statistics between 2016-17 seasons.}
\label{table-rb-stats}
\begin{tabular}{ c c c c}
\hline \\ [-1.5ex]
& \textit{WAR} & Success Rate & Yards per Attempt \\ [1ex]
\hline \\ [-1.5ex]
Autocorrelation & 0.431 & 0.314 & 0.337 \\ [1ex]
\hline
\end{tabular}
\end{table}
Although it does not provide a measure of individual players' contributions, we can sum together the seven possible \(tPA_{rush,side-gap}\) estimates for a team, providing a proxy for their offensive line's overall efficiency in contributing to rushing plays. We can also look at individual side-gaps for specific teams to assess their offensive line's performance in particular areas. \autoref{oline-plot} displays each NFL team's \(tPA_{rush,side-gap}\) sum in 2017 against that in 2016. The red lines indicate average performance in each year, so teams in the upper right quadrant performed above average overall in both years, such as the Dallas Cowboys (DAL), who are known to have one of the best offensive lines in football.
\begin{figure}[!h]
\includegraphics[width=14cm]{oline_tpa_plot.jpeg}
\centering
\caption{Team offensive line measures.}
\label{oline-plot}
\end{figure}
\section{Discussion and Extensions}
\label{sec:discussion}
In this work, we have provided four major contributions to the statistical analysis of NFL football, in areas that can impact both on-field and player personnel decisions. These contributions are broken into three categories: software development and data, play evaluation, and player evaluation.
\subsection{Data and Software Development}
In the area of data access and software development, we provide an \texttt{R} package, \texttt{nflscrapR}, to provide easy access to publicly available NFL play-by-play data for researchers to use in their own analyses of the NFL. This package has already been used by researchers to further research into NFL decision-making \citep{Yam18}.
\subsection{Novel Statistical Methods for Play Evaluation}
In the area of play evaluation, we make two contributions. First, we introduce a novel approach for estimating expected points using a multinomial logistic regression model. By using this classification approach, we more appropriately model the ``next score" response variable, improving upon previous approaches. Additionally, our approach is fully reproducible, using only data provided publicly by the NFL. Second, we use a generalized additive model for estimating in-game win probability, incorporating the results of the expected points model as input. With these two play evaluation models, we can calculate measures such as expected points added and win probability added, which are commonly used to evaluate both plays and players.
With the notable exception of \citet{Lock14}, researchers typically only vaguely discuss the methodology used for modeling expected points and/or win probability. Additionally, prior researchers in this area typically do not provide their specific expected points and win probability estimates publicly for other researchers to use and explore. Recently, Pro Football Focus used our approach for modeling expected points and found a clear relationship between their player grades and expected points added \citep{Douglas17}. Importantly, in our work, all of these measures are included directly into the play-by-play data provided by \texttt{nflscrapR}, and our methodology is fully described in this paper. Moreover, all code used to build these expected points and win probability models is provided in \texttt{nflscrapR} and available on GitHub \texttt{https://github.com/ryurko/nflscrapR-models}. By taking these important steps, we ensure that all of our methods are fully reproducible, and we make it as easy as possible for researchers to use, explore, and improve upon our work.
\subsection{Novel Statistical Methods for Player Evaluation}
In the area of player evaluation, we introduce several metrics for evaluating players via our \textit{nflWAR} framework. We use multilevel models to isolate offensive skill player contribution and estimate their individual wins above replacement. There are several key pieces of our \textit{WAR} estimation that merit discussion.
First, estimates of \textit{WAR} are given for several different areas of the game, including passing through the air, passing for yards after the catch, rushing, receiving through the air, and receiving yards after the catch. By compartmentalizing our estimates of player \textit{WAR}, we are able to better characterize players and how they achieved success. For example, New Orleans Saints RB Alvin Kamara was unique in his success as both a rusher and a receiver in the 2017 NFL season, while other RBs like Los Angeles Rams RB Todd Gurley achieved most of their success as a rusher. Similarly, Seattle Seahawks QB Russell Wilson was unique in his success as a rusher as well as from passing through the air and for yards after the catch, with about equal \textit{WAR} contributions in all three areas in the 2017 NFL season. This is in contrast to New England QB Tom Brady, who had tremendous success passing through the air and passing for yards after the catch, but provided negative \textit{WAR} contributions as a rusher. We are also able to characterize players like Case Keenum, who in the 2017 NFL season performed very well as a passer for yards after the catch, but not as well as a passer through the air. While these findings may not surprise knowledgeable football fans, our framework also reveals the value of potentially overlooked skills such as the rushing ability of Tyrod Taylor and Dak Prescott, as seen in \autoref{qb-war-sims}. Their rushing value reflects not just their ability to scramble for positive value, but is also indicative of how they limit the damage done on sacks. Importantly, our player evaluation metrics are available for all skill position players, not just for QBs as in previous approaches.
Second, our multilevel modeling approach allows us to estimate \textit{WAR} contributions for NFL offensive lines and their specific sides and gaps on rushing plays, providing the first statistical estimate of offensive line ability that also controls for factors such as RB ability, opposing defense, etc. We recognize, however, that this is not a perfect measure of offensive line performance for a few reasons. First, this does not necessarily capture individual linemen, as blocking can consist of players in motion and the involvement of other positions. Second, there is likely some selection bias that is not accounted for in the play-by-play data that could influence specific side-gap estimates. For example, a RB may cut back and find a hole on the left side of the line on a designed run to the right because there is nothing open on the right side, resulting in a play being scored as a run to the left. Because of this selection bias at the RB level -- RBs are more often going to run towards holes and away from defenders -- our team-side-gap estimates may be biased, especially for teams with particularly strong or weak areas of their line. This is an issue with the play-by-play data that likely cannot be remedied publicly until player-tracking data is made available by the NFL. Finally, we lack information about which specific offensive linemen are on the field or even involved in plays, preventing us from fitting player-specific terms in our multilevel model that would provide \textit{WAR} estimates for individual offensive linemen. Researchers with access to this data can build this into our modeling framework with minimal issues. However, until more data becomes available, researchers can incorporate these measures with more nuanced approaches of measuring offensive line performance such as \citet{Alamar08} and \citet{Alamar11}.
Third, by adopting a resampling procedure similar to that of \citet{Baumer15}, we provide estimates of uncertainty on all \textit{WAR} estimates. Our approach resamples at the drive-level, rather than resampling individual plays, to preserve the effects of any within-drive factors, such as play sequencing or play-calling tendencies.
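As an illustration, the following Python sketch shows one way to implement drive-level bootstrap resampling. It assumes a play-by-play table with a drive identifier column; the column name \texttt{drive\_id} and the helper \texttt{estimate\_war} are illustrative placeholders rather than part of \texttt{nflscrapR}.
\begin{verbatim}
import numpy as np
import pandas as pd

def resample_drives(plays, drive_col="drive_id", rng=None):
    """Resample whole drives with replacement, keeping each drive's plays intact."""
    rng = rng or np.random.default_rng()
    drives = plays[drive_col].unique()
    sampled = rng.choice(drives, size=len(drives), replace=True)
    # Concatenate the plays of every sampled drive; duplicate drives are allowed.
    return pd.concat([plays[plays[drive_col] == d] for d in sampled],
                     ignore_index=True)

# Hypothetical usage: recompute WAR on each resampled dataset to obtain
# uncertainty intervals (estimate_war is a placeholder, not real code).
# war_samples = [estimate_war(resample_drives(plays)) for _ in range(1000)]
\end{verbatim}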
Finally, our \textit{WAR} models are fully reproducible, with all data coming directly from \texttt{nflscrapR} and all code provided on GitHub at \texttt{https://github.com/ryurko/nflWAR}. Because we use parametric models, it is straightforward to incorporate more information, such as information about which players are on the field or information from player-tracking data. We encourage researchers to expand and improve upon our models in future work.
\subsection{The Road to WAR for Players at All Positions}
\label{sec:road_to_war}
One key benefit to our approach is that it can easily be augmented with the inclusion of additional data sources, e.g. player-tracking data or proprietary data collected by NFL teams. One important way in which our models can be augmented comes via the inclusion of data about which players are present on the field for each play.
Given this information, we can update our multilevel models from Section \ref{sec:war_model} by including additional positional groups. For example, for the non-QB rushing model, we can update the model as follows:
\begin{gather}
\delta_{f,i} \sim Normal(\sum_k O^{k}_{rush,\nu_k[i]} + \sum_g D^{g}_{rush,\nu_g[i]} + \boldsymbol{P}_i \cdot \boldsymbol{\rho},\ \sigma_{\delta_{rush}})\ \mbox{for}\ i\ =\ 1,\hdots,\ n\ \mbox{plays},\nonumber \\
O^{k}_{rush,\nu_k} \sim Normal(\mu_{O^{k}_{rush}},\ \sigma_{O^{k}_{rush}}^2), \nonumber \\
D^{g}_{rush,\nu_g} \sim Normal(\mu_{D^{g}_{rush}},\ \sigma_{D^{g}_{rush}}^2), \nonumber
\end{gather}
\noindent where \(O^{k}_{rush,\nu_k}\) are the intercepts for offensive positions (indexed by $k$ and varying according to their own model), \(D^{g}_{rush,\nu_g}\) are the intercepts for defensive positions (indexed by $g$ and varying according to their own model), and \(\boldsymbol{P}_i\) and \(\boldsymbol{\rho}\) are described as above. Similar updates can be made to the models representing QB rushing, passing through the air, and passing for yards after catch. After doing so, we can trivially calculate the individual points/probability above average for any player at any position following the approach outlined in Section \ref{sec:ipa}. From there, we simply need adequate definitions for replacement level players at each of these positions, and we will have statistical \textit{WAR} estimates for players of any position, including all offensive players and all defensive players.
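To make the structure above concrete, the following Python sketch simulates data from this updated rushing model with a single offensive and a single defensive positional group. The group sizes and variance parameters are invented for illustration; this is a sketch of the generative structure, not the fitted model.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_plays, n_off, n_def = 1000, 40, 60      # illustrative group sizes

# Varying intercepts for one offensive positional group (O^k_rush) and
# one defensive positional group (D^g_rush), each drawn from its own normal.
O_rush = rng.normal(0.0, 0.5, size=n_off)
D_rush = rng.normal(0.0, 0.3, size=n_def)
rho = np.array([0.2, -0.1])               # fixed-effect coefficients

off_idx = rng.integers(n_off, size=n_plays)   # which offensive player is on play i
def_idx = rng.integers(n_def, size=n_plays)
P = rng.normal(size=(n_plays, 2))             # play-level covariates P_i

mu = O_rush[off_idx] + D_rush[def_idx] + P @ rho
delta = rng.normal(mu, 1.0)                   # simulated play values delta_i
\end{verbatim}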
The data necessary for employing this approach \emph{does exist}, but it is not available publicly, and there are heavy restrictions on the uses of such data. For example, Sportradar has a data product for the NFL called ``Participation Data'', which specifies all players present on the field for all plays, with data provided from the NFL. This is stated directly: ``Participation Data is complementary data collected by the NFL that indicates all 22 players on the field for every play of every game'' \citep{sportradar_data}.
However, Sportradar's data sharing agreement explicitly prohibits the use of this data in the creation of new metrics, even if only used privately, as detailed in clauses 1.6 and 14.2 of the agreement \citep{sportradar_agreement}. When colleagues reached out to Sportradar for clarification on potential data availability, they were told that there is no data sharing agreement for academic use, and that even if one were to purchase these data products, no new statistics or evaluation methods could be developed using this data, as per their terms and conditions. It is not clear if the same restrictions would apply to NFL teams.
\subsection{Extensions Relevant to NFL Teams}
\texttt{nflscrapR} provides play-by-play data, including expected points and win probability results, dating back to 2009, and improvements are underway to extend this back even further. As such, we can calculate player \textit{WAR} dating back to at least 2009. If teams are able to implement the framework discussed in Section \ref{sec:road_to_war}, they would then have \textit{WAR} estimates for players at all positions dating back almost a full decade. Teams that are able to do this could potentially gain substantial advantages in important areas of roster construction.
First, teams could more appropriately assess the contract values of players in free agency, similar to what is commonly done in baseball \citep{Paine15b}.
Second and perhaps most importantly, teams would be able to substantially improve their analysis for the NFL draft. Using an approach similar to that of \citet{Citrone17}, teams could substitute an objective measure of \textit{WAR} in place of the more subjective measure of ``approximate value'' (AV) \citep{PFR}, in order to project the future career value in terms of \textit{WAR} for all players available in the NFL draft. Additionally, teams employing this approach could create updated, \textit{WAR}-based versions of the ``draft pick value chart'', first attributed to Jimmy Johnson and later improved by \citet{Meers11} and \citet{Citrone17}. In doing so, teams could more accurately assess the value of draft picks and potentially exploit their counterparts in trades involving draft picks.
\section*{Acknowledgements}
First and foremost, we thank the faculty, staff, and students in Carnegie Mellon University's Department of Statistics \& Data Science for their advice and support throughout this work. We thank the now-defunct CMU Statistics NFL Research Group; the CMU Statistics in Sports Research Group; the Carnegie Mellon Sports Analytics Club; and the Carnegie Mellon Statistics Clustering, Classification, and Record Linkage Research Group for their feedback and support at all stages of this project. In particular, we thank Devin Cortese, who provided the initial work in evaluating players with expected points added and win probability added, and Nick Citrone, whose feedback was invaluable to this project. We thank Jonathan Judge for his insight on multilevel models. We thank Michael Lopez and Konstantinos Pelechrinis for their help on matters relating to data acquisition and feedback throughout the process. We thank Konstantinos Pelechrinis, the organizers of the Cascadia Symposium for Statistics in Sports, the organizers of the 6th Annual Conference of the Upstate New York Chapters of the American Statistical Association, the organizers of the Great Lakes Analytics in Sports Conference, the organizers of the New England Symposium on Statistics in Sports, and the organizers of the Carnegie Mellon Sports Analytics Conference for allowing us to present earlier versions of this work at their respective meetings; we thank the attendees of these conferences for their invaluable feedback. We thank Jared Lander for his help with parts of \texttt{nflscrapR}. Finally, we thank Rebecca Nugent for her unmatched dedication to statistical education, without which none of the authors would be capable of producing this work.
\bibliographystyle{DeGruyter}
\section{Introduction}
State tracking, the task of maintaining explicit representations of user requests and agent responses, has long been a key component of dialogue systems \citep{williams-etal-2013-dialog, henderson-etal-2014-second, henderson2014third, kim2016fifth}. The same challenge arises during reading comprehension of procedural texts (recipes, how-to guides, etc.) where systems focus on predicting changes of object attributes at the entity-level (a car window may transition from foggy to clear) \citep{dalvi-etal-2018-tracking, tandon2020dataset}. However, both of these state tracking variants rely on transaction-based or turn-based data such as transactional dialogues or procedure descriptions that are information-dense. Few works have studied state tracking tasks where state changes occur infrequently while a large proportion of messages are ``chatter''.
As an alternative to altogether unrestricted state tracking---a task that is daunting due to the complexity of even describing ground-truth states in a discrete manner---we resort to a simpler and more self-contained setting: sports competitions. Given the stream of natural language utterances with which a commentator describes the events in a real-world setting (here a sports competition), an ideal natural language understanding system would maintain and reason over a coherent and accurate representation of the match based on how the commentator described it. This representation can, in turn, be used for downstream tasks such as inference or language generation. Sports matches provide an ideal test bed for state tracking due to their self-contained, fully observable nature and their inherent interpretability in the form of the temporal evolution of scores. However, existing sports-related commentary collections such as described by \citet{aull2013fighting} and \citet{merullo-etal-2019-investigating} do not provide such within-match temporal information.
To this end, we collect temporally-aligned commentaries and live scores of soccer matches along with other meta information from the website \href{https://www.goal.com/en-us}{goal.com} and compile the dataset \texttt{SOCCER}. To the best of our knowledge, \texttt{SOCCER} is the first temporally-aligned collection of sports match commentary and state. It contains over 2,200 matches from tournaments such as the UEFA Champions League or the UK Premier League between 2016 and 2020. Across these matches, there are over 135,000 individual comments and approximately 31,000 events. A simplified example is shown in Figure~\ref{fig:overview}.
To demonstrate the potential of state tracking for open-domain discourse, we use the proposed dataset to investigate to what degree state-of-the-art systems are able to track the progression of events described in the commentary. This overview includes two model classes: classification models that treat match events as different class labels, and generative language models such as GPT-2 \citep{radford2019language} that model context and events in a causal manner. Our experiments show that both methods do not perform well on \texttt{SOCCER} and only slightly outperform distributional heuristics, leaving considerable room for improvement.
The novel contributions of this paper are three-fold: (1) we propose a new task of tracking event occurrences via state changes, (2) we create \texttt{SOCCER}, a general discourse state tracking dataset that contains temporally-aligned human-composed commentary and in-game events, serving as the training and evaluation dataset for this task, and (3) we provide two intuitive baselines demonstrating the difficulty of this task and presenting exciting opportunities for future research.
\section{Related Work}
\textbf{Dialogue State Tracking (DST).} Current DST collections and benchmarks tend to rely on transaction-centric dialogues with predefined domain-specific ontologies and slot-value pairs. Prominent examples include the DSTC2 \citep{henderson-etal-2014-second} and MultiWOZ datasets \citep{budzianowski2018multiwoz}. Consequently, previous work focuses on picklist-based approaches \citep{mrksic-etal-2017-neural, perez-liu-2017-dialog, zhong-etal-2018-global, ramadan-etal-2018-large,gao-etal-2019-dialog} to formulate state tracking as a series of classification tasks over candidate-value lists. A major difference between \texttt{SOCCER} and other DST datasets lies in its information density. As dialogues in DST are usually short conversations with direct transactional objectives such as booking hotels or reserving restaurant tables, frequent state changes are required to be captured within limited turns of the conversation. In sports commentary, on the contrary, in-game events occur at a comparatively low frequency and a considerable proportion of commentator utterances may not be related to any changes in the game state.
\textbf{State Tracking in Procedural Text.} State tracking in procedural text understanding focuses on the task of tracking changes in entity attributes \citep{tandon2020dataset}. A variety of procedural progresses have been proposed such as tracking entity presence and location in scientific processes \citep{dalvi-etal-2018-tracking}, ingredients in cooking recipes \citep{bosselut2017simulating}, and character motivation and emotional reaction in simple stories \citep{rashkin2018modeling}. Yet, similar to DST settings, these highly specific tasks depend on small fixed ontologies covering limited ranges of entities and states. Another more recent dataset \citep{tandon2020dataset} turns to an open-vocabulary setting when defining entity attributes. But since the dataset is comprised of how-to guides from WikiHow.com, the task still sees a high density of state changes per natural language instruction.
\textbf{Information Density}
The concept of Information Density has been mainly used in the Uniform Information Density (UID) theory \citep{jaeger2010redundancy} to measure the amount of information per unit comprising an utterance. \citet{levy2007speakers} demonstrated that speakers tend to maximize the uniformity of information via syntactic reduction. The notion of information density in our paper, however, focuses on quantifying the frequency of event occurrences on the corpus level instead of understanding syntactic choices on the utterance level.
\textbf{Sports Event Datasets and Tasks.} Commentary in the sports domain has been collected to study a variety of problems such as racial bias in football game reporting \citep{merullo-etal-2019-investigating} and gender construction in NBA/WNBA coverage \citep{aull2013fighting}. However, these datasets do not provide any information on the temporal alignment between commentary and events. Another similar dataset, \texttt{BALLGAME} \citep{keshet-etal-2011-ballgame} is comprised of baseball commentary with annotated events and timestamps, but it contains less than 20 games and the annotation is unavailable online. Some work focuses on sports-related inference of player performance metrics \citep{oved2019predicting} or game outcomes \cite{velichkov-etal-2019-deep} that predict full-time results based on signals from pre-game player interviews. However, no in-game sequential contexts are provided in these datasets. Most similar to our work, \citet{bhagat2018towards} collected in-game commentaries for soccer player analytics, but their approach is restricted by classical machine learning methods and ignores the effect of information sparsity within the dataset.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{Ratio2.pdf}
\caption{\label{fig:match_num} Frequency distribution of matches with and without commentary across available data years.}
\vspace{-2mm}
\end{figure}
\section{Dataset Construction}
We collect time-stamped commentary with key events of 2,263 soccer matches in total. The matches stem from four major soccer tournaments including the UEFA Champions League, UEFA Europa League, Premier League and Series A between 2016 and 2020. \texttt{SOCCER} consists of over 135,000 time-stamped pieces of commentary and 31,000 within-match events. This section describes our data collection and preparation process in detail.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{snippet2.png}
\caption{\label{fig:snippet} A short snippet of \protect\href{https://www.goal.com/en-us/match/internazionale-v-pescara/commentary-result/1gnkkzqjipx08k2ygwzelipyh}{a match} in the dataset. }
\vspace{-2mm}
\end{figure}
\input{table}
\subsection{Data Processing}
Commentaries, events, team lineups, match dates and other meta-information are gathered from match-specific pages. Out of a total of 9,028 matches covered on \href{https://www.goal.com/en-us}{goal.com} between 2014 and 2020, we retain only those 2,434 matches that list detailed event records and commentary. Any matches missing either of the two information streams are discarded. The retained matches belong to the four major tournaments mentioned above and all occurred starting 2016. Figure~\ref{fig:match_num} shows the frequency distribution of included and overall matches across the years in which they took place. All commentaries are in English and available in text form, thus requiring no transcription. Pieces of commentary come pre-segmented and aligned to match-internal timestamps so that in-game events and commentary with the same timestamps can be linked. Comments whose temporal information is unavailable usually belong to the pre-game, intermission and post-game periods and are labeled as START, BREAK, END accordingly. The total number of commentary paragraphs within a game is the same as the number of timestamps. This number varies between matches as timestamps during which the commentator did not provide commentary are omitted. Finally, any templated sentences following the format ``\textit{team 1 score - score team 2}'' are removed to avoid trivial leakage of the match state. All annotation and filtering processes are done programmatically and no manual efforts are involved.
Events are classified into five types: \textit{goal}, \textit{assist}, \textit{yellow card}, \textit{red card} and \textit{switch}. We consider events as keys and the event-related players as the corresponding values. For example, if player B from the home team assists in scoring a goal, player B will be the value of the event \textit{assist} for the home team. Hence, at each timestamp $t$, there are ten event-player pairs (five event types tracked for two teams). From this representation, we construct a comprehensive game state incorporating all the event-player pairs for each team as well as a cumulative score at each timestamp (See Figure~\ref{fig:snippet}). Special events such as penalty goals or own goals are not explicitly labeled, but can be derived from the evolution in cumulative score between neighboring timestamps. After processing, 171 games were found to have ill-formed commentary or mis-aligned end-game match scores compared to the goal records in the key events. These matches were eliminated from the original 2,434 games crawled with commentary, giving us a total of 2,263 games. Finally, the collected data is partitioned into distinct training (70\%), validation (15\%) and test (15\%) sets.
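To make the resulting representation concrete, the following Python sketch shows one possible per-timestamp record; the field names and values are illustrative and do not correspond to the exact schema of the released files.
\begin{verbatim}
# One possible per-timestamp record for a match; keys and values are
# illustrative and not the exact schema of the released SOCCER files.
state_t = {
    "minute": 23,
    "comment": "Great ball in from the right, headed home at the near post!",
    "events": {
        ("goal", "home"): "Player A",     # event occurred, with the player involved
        ("assist", "home"): "Player B",
        ("yellow card", "guest"): None,   # None: the event did not occur this minute
        # ... remaining (event, team) pairs default to None
    },
    "score": {"home": 1, "guest": 0},     # cumulative score after this timestamp
}
\end{verbatim}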
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth]{model2.png}
\caption{\label{fig:model} Model architecture of the GRU classifier and GPT-2 based variant.}
\vspace{-3mm}
\end{figure*}
\section{State Definition and Task Proposal}
For each match $m$ in the dataset $M$, there is a set of timestamps $T_m = \{t\}$ accurate to a minute. As input, we are given a stream of commentaries $C_m = \{c_t\}_{t \in T_m}$, where $c_t$ represents the paragraph of commentary at time $t$. The output is a set of general match states $S_m = \{s_t\}_{t \in T_m}$ such that each $s_t$ reflects the state change in the comment $c_t$ at the same timestamp. $s_t$ contains a set of events $e_{i, j}^{(t)}$, where $i$ represents the event type ($i \in \{\textit{goal},\textit{assist},\textit{yellow card}, \textit{red card}, \textit{switch}\}$) and $j$ denotes the event actor ($j \in \{\textit{home}, \textit{guest}\}$). Given the sparse distribution of $s_t$, we propose two alternative variants of the variable to assess the difficulty of state tracking at different levels of state resolution.
\paragraph{Team Level.} In this simplest notion of state, events are tracked at the team level. In other words, $e_{i, j}^{(t)} = \{\textit{yes}, \textit{no} \}$. Consider the event of the home team scoring a goal $e_{\mathit{goal},\ \mathit {home}}^{(t)}$ at time $t$ as an example: given the commentary $c_t$ and other related meta-information, a model is tasked with determining the value of $e_{\mathit{goal},\ \mathit{home}} ^{(t)}$ to be $\textit{yes}$ if the home team indeed scored a goal in a given minute, or $\textit{no}$ otherwise.
\paragraph{Player Level.} At this significantly increased level of resolution, all events are additionally associated with their player agents $p \in P$, where $P$ denotes the collection of players. Concretely, the variable $e_{i, j}^{(t)}$ is mapped to either the related players' names $p$ or a $none$ answer to each event at time $t$. To facilitate this form of state, match meta-information includes lineups that associate present players with teams.
\section{Analysis and Baseline Experiments}
In the following, we provide descriptive statistics of the \texttt{SOCCER} dataset and include two model baselines for recognizing match events resulting in changes of states.
\subsection{Dataset Statistics and Comparison}
\label{sec:sparse-info}
The \texttt{SOCCER} dataset covers 2,263 matches with 135,805 pieces of commentary and 31,542 in-game event records. Across all event records, each event type of each team appears approximately 3,154 times on average. There are a total of 3,507 unique player names across all event types and an average of 1,219 unique player names per event type per team. A more detailed overview of the distribution of event types and player names can be seen in Table~\ref{tab:data_stats}.
Common state tracking datasets either in dialogue systems or procedural texts are designed to capture frequent state changes in the text. In \texttt{SOCCER}, we study a more general setting where the corpus is much less information dense due to an abundance of non-event related chatter. To quantify this difference, we define information density (\textit{ID}) as:
\[ID = \frac{\textit{Total \# of state changes}}{\textit{Total \# of turns/steps/timestamps}}\]
As shown in Table~\ref{tab:info-dense}, our dataset has a considerably lower information density with more turns of information. In \texttt{SOCCER}, the match state only gets updated every 5 timestamps, while in datasets such as MultiWOZ2.1 \citep{eric2019multiwoz} and OpenPI \citep{tandon2020dataset}, there are between 1 and 4 state changes per turn or step on average.
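As a rough back-of-the-envelope illustration for \texttt{SOCCER}, treating every one of the 31,542 event records as a single state change and every one of the 135,805 commentary pieces as a timestamp gives the following approximate corpus-level density:
\begin{verbatim}
# Rough corpus-level information density for SOCCER, treating each of the
# 31,542 event records as a single state change (an approximation).
state_changes = 31_542
timestamps = 135_805
print(state_changes / timestamps)   # ~0.23, i.e., a state update roughly
                                    # every four to five commented minutes
\end{verbatim}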
\input{info_dense}
\subsection{Baseline Setup}
\texttt{SOCCER} presents a new challenge to the state tracking community by introducing a more general corpus with an all-new state definition and a sparse information distribution. These properties render it difficult to directly apply some existing models such as TRADE used in DST tasks and ProLocal \citep{dalvi-etal-2018-tracking} proposed for procedural texts. Motivated by previous work on state tracking and based on the characteristics of the task, we use two baseline training and inference schemes: 1) a GRU \citep{cho2014learning} classifier with pre-trained BERT \citep{devlin-etal-2019-bert} embeddings, and 2) a generative pre-trained GPT2 \citep{radford2019language} variant.
\textbf{\resizebox{0.85\linewidth}{!}{GRU Classifier with BERT Embeddings.}}
The GRU model is used as a preliminary baseline to assess the difficulty level of the \texttt{SOCCER} dataset. Embeddings of the timestamped commentary $c_t$ are obtained from the pretrained weights of BERT \citep{devlin-etal-2019-bert}, which are then fed into a 1-layer GRU \citep{cho2014learning} network followed by two feed-forward layers. We only task this model with team-level state tracking, as classification would be extremely difficult if each player name were treated as a distinct class. We map the 10 event variables $e_{i, j}^{(t)}$ as binary flags to a single 10-bit value in which each bit denotes the predicted value of one variable. For example, if the 0th position corresponds to the variable $e_{\mathit{goal},\ \mathit{home}}^{(t)}$, then the predicted value at that position denotes whether the home team scores a goal (see Figure~\ref{fig:model}). Compared to converting the problem into ten binary classifications, this allows us to directly model the joint occurrence of events.
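The following Python sketch illustrates one possible implementation of this encoding; the ordering of the ten (event, team) slots is an illustrative choice and not necessarily that of the actual implementation.
\begin{verbatim}
EVENTS = ["goal", "assist", "yellow card", "red card", "switch"]
TEAMS = ["home", "guest"]
SLOTS = [(e, t) for e in EVENTS for t in TEAMS]   # the 10 (event, team) variables

def encode_state(flags):
    """Map a dict {(event, team): bool} to a single integer label in [0, 1023]."""
    label = 0
    for pos, slot in enumerate(SLOTS):
        if flags.get(slot, False):
            label |= 1 << pos
    return label

def decode_state(label):
    """Recover the 10 binary flags from the integer label."""
    return {slot: bool((label >> pos) & 1) for pos, slot in enumerate(SLOTS)}

assert decode_state(encode_state({("goal", "home"): True}))[("goal", "home")]
\end{verbatim}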
\textbf{{GPT-2 Based Variant.}}
Recent approaches to state tracking \citep{kim2019efficient,hosseini2020simple,tandon2020dataset} have shown that generative models are competitive, especially in open-vocabulary settings. Inspired by simpleTOD \citep{hosseini2020simple} and the OpenPI baseline \citep{tandon2020dataset}, we cast the player-level state tracking task as a sequence generation problem, allowing us to leverage the capabilities of causal language models such as GPT-2 \citep{radford2019language}. The training sequence consists of a concatenation of the commentary, event types and player names, allowing us to model the joint probability of the whole sequence. Event names are preprocessed as tokens like \textit{goal\_home} to avoid being tokenized into sub-word units. Commentary and event-player pairs are encapsulated in special tokens to help the model distinguish context from labels. See Figure~\ref{fig:model} for a schematic overview of the model training input. In training, the model takes the concatenated sequence as input and performs the next-token prediction task. At inference time, greedy decoding is used to generate state predictions due to its superior performance compared to beam search and top-k sampling \cite{hosseini2020simple}.
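The following Python sketch shows how such a training sequence could be assembled; the delimiter tokens are illustrative, since the exact special tokens are an implementation detail not specified here.
\begin{verbatim}
def build_training_sequence(comment, events):
    """Concatenate commentary and (event, player) pairs into one training string.

    The delimiter tokens are illustrative; the model only requires that
    commentary and labels are wrapped in distinguishable special tokens and
    that event names are single tokens such as 'goal_home'.
    """
    state = " ".join(
        f"{event}_{team} {player if player else 'none'}"
        for (event, team), player in events.items()
    )
    return f"<|comment|> {comment} <|endcomment|> <|state|> {state} <|endstate|>"

sequence = build_training_sequence(
    "He slots it home from the penalty spot!",
    {("goal", "home"): "Player A", ("assist", "home"): None},
)
\end{verbatim}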
\subsection{Implementation Details}
During preprocessing, we find that 98.1\% of comments in the collection are shorter than 200 words, therefore any outliers with a length of more than 200 words are truncated at that point. Then, the input text sequences are tokenized using byte-pair encoding \citep{sennrich-etal-2016-neural} to avoid out-of-vocabulary words.
The sentence embeddings processed by the GRU classifier stem from the pretrained weights of HuggingFace's BERT model \citep{Wolf2019HuggingFacesTS}. The GPT-2 model \citep{radford2019language} is also obtained from HuggingFace with pretrained weights, which are then fine-tuned on \texttt{SOCCER}\footnote{The \texttt{SOCCER} dataset as well as the code base used to collect it and run the experiments presented in the remainder of this paper are available \href{https://github.com/bcbi-edu/p_eickhoff_SOCCER}{here}.}.
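A minimal Python sketch of this preprocessing, assuming the HuggingFace \texttt{transformers} library and the illustrative delimiter tokens from the previous sketch:
\begin{verbatim}
from transformers import GPT2TokenizerFast

def truncate_comment(comment, max_words=200):
    """Truncate outlier comments to 200 words before tokenization."""
    return " ".join(comment.split()[:max_words])

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")   # byte-pair encoding
tokenizer.add_special_tokens(
    {"additional_special_tokens":
        ["<|comment|>", "<|endcomment|>", "<|state|>", "<|endstate|>"]}
)

example = "<|comment|> He slots it home from the penalty spot! <|endcomment|>"
input_ids = tokenizer(truncate_comment(example))["input_ids"]
\end{verbatim}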
\input{general_table}
\subsection{Evaluation}\label{eval}
Accuracy and recall for occurrences of all event types are used to assess the performance of both models. Due to the sparsity of event occurrences, recall is crucial for tracking the models' ability to extract events given the full set of types. For convenience, we refer to event types with ground truth $none$ answers as negative cases and to all others as positive cases. Therefore, recall among event occurrences is referred to as positive recall in the tables. More specifically, in Tables \ref{tab:all-scores} and \ref{tab:density}, accuracy and positive recall are measured on all labels (positive and negative combined). In Table \ref{tab:per-event}, the performance is reported on positive labels only, and detailed metrics including precision, recall and F1 scores are provided.
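For clarity, the following Python sketch spells out the two metrics over flat lists of ground-truth and predicted values, with negative cases labeled \texttt{none}; it is an illustrative sketch of the evaluation described above, not the exact evaluation code.
\begin{verbatim}
def accuracy(y_true, y_pred):
    """Accuracy over all labels, positive and negative ('none') combined."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def positive_recall(y_true, y_pred):
    """Recall restricted to event occurrences, i.e., ground truth != 'none'."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t != "none"]
    if not positives:
        return 0.0
    return sum(t == p for t, p in positives) / len(positives)
\end{verbatim}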
\input{per-event}
\input{densities}
\section{Results}
This section reports the results on the test set of \texttt{SOCCER}. As a na\"{i}ve distributional baseline, we compute the ratio of negative cases in the test set to be 0.9766.
In Table~\ref{tab:all-scores}, both models achieve an accuracy that is approximately equal to this majority class baseline due to the heavily imbalanced distribution of event positives and negatives. While accuracy scores are very high, positive recall is much lower, indicating that many event occurrences are missed by the models. When comparing the GPT-2 model's performance on team-level and player-level event recognition\footnote{The GRU classifier is only used in team-level tasks since treating each player in the ontology as a distinct class to classify is very difficult.}, we notice that player-level recall is substantially worse than team-level recall. This result suggests that complex state tracking involving broad ranges of possible slot values is a comparatively harder task that may require more sophisticated approaches.
\subsection{Results Per Event Type}
In addition to these general results, we break down model performance on positive cases by event type and provide additional metrics including precision, recall and $F_1$ scores (see Table~\ref{tab:per-event}). When associating the scores with the event type distribution (see Table~\ref{tab:data_stats}), we can observe that, generally, greater numbers of available data points result in better performance. Take the event type \textit{goal} as an example. According to Table~\ref{tab:data_stats}, there are about 800 more positive cases of the event $e_{\mathit{goal},\ \mathit{home}}^{(t)}$ than of $e_{\mathit{goal},\ \mathit{guest}}^{(t)}$, a difference that is reflected in all the metrics in Table~\ref{tab:per-event} for both models. Another interesting point to note is the performance gap between the GRU classifier and the GPT-2 model on the event type \textit{red card}. The \textit{red card} event is extremely rare in \texttt{SOCCER}, as illustrated in Table~\ref{tab:data_stats}. Though we observe the performance of both models on \textit{red card} events to be comparatively lower than on the other events, the GRU classifier is able to capture some positive cases while no occurrences are detected by GPT-2.
\subsection{Results on Varying Information Densities}
In Section~\ref{sec:sparse-info}, we have shown that a key difference between \texttt{SOCCER} and other state tracking datasets lies in its low information density (See Table~\ref{tab:info-dense} for a detailed comparison). It is conceivable that such differences in information density affect state tracking performance. To eliminate confounding effects introduced via direct comparison to other datasets, this section explores the connection between event density across pieces of commentary and model performance. We begin by discarding all but the truly event related comments in each match to obtain a subset containing 0\% negative cases. This subset contains 25,934 event related comments across all matches. Then, by randomly replacing positive comments \footnote{Positive comments here refer to comments with event occurrences.} with negative ones from the same match at a sparsity ratio $r \in \{20\%, 40\%, 60\%, 80\%\}$, we keep the total number of comments at the same constant count of 25,934 and keep the temporal ordering of comments intact, while effectively reducing the level of information density. Table~\ref{tab:density} reports accuracy and positive recall for both methods and task levels when training and evaluating on non-overlapping splits of the newly constructed subsets. Note that, despite our earlier discussion of information density, Table~\ref{tab:density} reports a converse notion, sparsity. In this setting, 0\% corresponds to the highest and 80\% the lowest information density.
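The following Python sketch reflects this construction for a single match; the sampling details are illustrative rather than the exact procedure used to build the released subsets.
\begin{verbatim}
import random

def build_sparsity_subset(positive_comments, negative_pool, r, seed=0):
    """Replace a fraction r of a match's event-related comments with non-event
    comments from the same match, keeping the total count and the temporal
    slots fixed (a sketch of the construction described in the text)."""
    rng = random.Random(seed)
    n_replace = round(r * len(positive_comments))
    replace_at = set(rng.sample(range(len(positive_comments)), n_replace))
    return [
        rng.choice(negative_pool) if i in replace_at else comment
        for i, comment in enumerate(positive_comments)
    ]

# Example: 40% sparsity for one match (comments are placeholders).
diluted = build_sparsity_subset(["goal!", "red card!", "save", "assist", "goal!"],
                                ["possession play", "throw-in"], r=0.4)
\end{verbatim}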
Comparing accuracy at different event sparsity levels, we notice that scores increase as events become more sparsely distributed. This effect stems from the fact that, when we are replacing event related comments with non-event chatter, chance agreement improves as the number of true negatives increases. Positive recall of event occurrences, however, demonstrates an opposing trend, suggesting that the task of recognizing true state updates becomes more challenging the sparser the discourse domain is. This assumption is further supported by the different degree of performance observed on \texttt{SOCCER} vs.\ existing collections such as MultiWOZ2.1 \citep{eric2019multiwoz}, where recall scores of many models range in the mid-fifty percent range.
\section{Conclusion}
In this paper, we introduce \texttt{SOCCER}, the first discourse state tracking collection in the sports commentary domain. We propose two different levels of state granularity and provide two performance benchmarks for models ranging from GRU \citep{cho2014learning} for embedding temporal dependency to GPT-2 \citep{radford2019language} for causal language modeling. The dataset shows a much lower information density than many existing resources on state tracking, making it considerably more challenging. We believe that, in conjunction with the wide vocabulary of player-level notions of state, this property makes \texttt{SOCCER} an exciting resource on which our community can advance discourse state tracking to a broader range of settings than have been studied previously.
\section*{Acknowledgement}
This research is supported in part by the NSF (IIS-1956221). The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of NSF or the U.S. Government. We would like to thank Ellie Pavlick, Stephen Bach, Zejiang Shen and the anonymous reviewers for their constructive feedback and helpful discussion.
\bibliographystyle{acl_natbib}
\section*{Acknowledgements}
We would like to thank Nicolas Buompane, Steve Butcher, André Gueisbuhler, Sylvie Martinet and Rui Vinagre from the FIG, and Christophe Pittet and Pascal Rossier from Longines, for their help during our prior work on gymnastics that led to this extended framework. We would also like to thank Hans Christian Matthiesen from the IDOC, Bettina De Rham from the FEI, and David Stickland from Global Dressage Analytics for discussions on our surprising observations in dressage.
\section{Conclusion}
In this article, we study judging practices of sports for which performances are evaluated by panels of judges within a finite interval. Besides the ideal of having fair competitions, these sports and the athletes that practice them have strong economic incentives to have fair and accurate judges.
We model the judging error using heteroscedastic random variables, which allows to monitor judges, detect biases, and identify outlier evaluations. With the exception of dressage, consensus among judges increases with the quality level of the performance, and we can approximate the standard deviation of the judging error accurately using a quadratic equation. Our analysis of dressage judges further shows that they increasingly disagree as the performance level increases, indicating a significant amount of subjectivity in the judging process compared to other sports with similar judging systems.
Our approach of estimating and modeling the intrinsic heteroscedasticity could also be applied to other judging processes on a finite scale, such as the evaluations made by wine, movie and music critics. Although in these instances there is no clear notion of a control score indicating the true quality level, an analysis similar to ours could quantify and highlight judges that are out of consensus with others.
\section{Judging systems and dataset}
We analyze eight sports with comparable judging systems: artistic swimming, diving (including high-diving), dressage (Grand Prix, Grand Prix Special \& Grand Prix Freestyle), figure skating, freestyle skiing (aerials), freestyle snowboard (halfpipe, slopestyle), gymnastics (acrobatic, aerobic, artistic, rhythmic, trampoline) and ski jumping. For all these sports, it is impossible to evaluate performances in an automated fashion, and a panel of judges evaluates the athletes. Each judge in the panel reports an individual mark within a closed finite range following predefined judging guidelines. Although the implementation details such as the precise judging guidelines, the marking range and the step size vary, all the judging systems incorporate
\begin{enumerate}
\item An execution evaluation of performance components;
\item Penalty deductions for mistakes.
\end{enumerate}
The judging guidelines of each sport are the embodiment of the performance quality, mapped to a closed finite nominal mark. In particular, they embed the concept of perfection: a component or routine whose mark is the maximum possible value is considered perfect.
Although the judging guidelines try to be as deterministic and accurate as possible, every reported mark inevitably remains a subjective approximation of the actual performance level. Moreover, live judging is a noisy process, and judges within a panel rarely agree with each other. There are well-documented psychological, physiological, environmental and organizational factors that make this process noisy, and the best judges are the ones that systematically minimize this noise. Every judging system hence accounts for this variability with an aggregation of the judging panel marks to ensure an accurate and fair evaluation of the athletes. This aggregation, which varies for each sport, is also used to discard outlier marks, and is the most effective way to decrease the influence of erratic, biased or cheating judges.
\begin{table}
\centering
\begin{tabular}{lccc}
\toprule
& Size of & Number of & Number of\\
Sport & judging panel & performances & marks\\
\midrule
Aerials & 3 or 5 & 7'079 & 53'543\\
Artistic swimming & 5, 6 or 7 & 3'882 & 42'576 \\
Diving & 7 & 19'111 & 133'777\\
Dressage &&&\\
\quad $\cdot$ GP \& GP Special & 5 or 7 & 5'500 & 28'382\\
\quad $\cdot$ GP Freestyle & 5 or 7 & 2'172 & 11'162\\
Figure skating & 9 & 5'076 & 228'420 \\
Freestyle snowboard &&& \\
\quad $\cdot$ Halfpipe & 3, 4, 5 or 6 & 4'828 & 22'389\\
\quad $\cdot$ Slopestyle & 3, 4, 5 or 6 & 2'524 & 12'374\\
Gymnastics & & & \\
\quad $\cdot$ Acrobatics & 6 or 8 & 756 & 4'870 \\
\quad $\cdot$ Aerobics & 6 or 8 & 938 & 6'072 \\
\quad $\cdot$ Artistics & 4, 6 or 7 & 11'940 & 78'696 \\
\quad $\cdot$ Rhythmics & 4, 5 or 7 & 2'841 & 19'052 \\
\quad $\cdot$ Trampoline & 4 or 5 & 1'986 & 9'654 \\
Ski jumping & 5 & 12'381 & 61'905\\
\bottomrule
\end{tabular}
\caption{Typical judging panel and size of our dataset per sport.}
\label{tab:HET_Data}
\end{table}
Table \ref{tab:HET_Data} shows the typical judging panel and size of our dataset per sport. The number of marks counts every single reported mark by a judge in the dataset and depends on the number of performances or routines, the number of evaluated components per performance, and the size of the judging panel. The size of the judging panel ranges from three to nine judges and can vary within a sport depending on the stage or importance of the competition.
The Fédération Internationale de Gymnastique (FIG)\footnote{www.gymnastics.sport} and Longines\footnote{www.longines.com} provided the data for all the gymnastics disciplines. It includes 21 continental and international competitions from the 2013--2016 Olympic cycle, ending with the 2016 Rio Olympic Games. We gathered data for all the other sports from publicly available sources at official federation or result websites\footnote{fis-ski.com/DB/ (aerials, halfpipe, ski jumping, slopestyle); www.omegatiming.com (diving); data.fei.org (dressage); www.isu.org (figure skating); www.swimming.org (artistic swimming); all retrieved September 1, 2017.}. The data only comprises professional and international competitions. When available, the analysis includes all results of official World cup events, international championships and Olympic Games from January 2013 to August 2017. None of the sports has a gender-specific scoring system, thus we include men and women competitions in the dataset.
Table~\ref{tab:HET_Data} excludes some of the gathered data to ensure comparability among the sports as follows. First, the analysis focuses on the execution and artistry components of performances; thus, we exclude difficulty scores of technical elements from the sample. Acrobatic and aerobic gymnastics have separate marks for the execution and artistry components; we split the marks in our analysis, but do not distinguish between them because judges have the same judging behavior in both instances~\cite{MH2018:gymnastics}. In dressage, we only consider events at the Grand Prix level, which includes 'Grand Prix', 'Grand Prix Special' and 'Freestyle' competitions. Judges in figure skating and artistic swimming evaluate the execution of multiple components of a performance separately, which we group together in our analysis. Scores in aerials competitions consist of an 'Air', 'Form', and 'Landing' mark, although this granularity is not available for all competitions. We add the three marks together when analyzing total scores, and study the components separately when they are available.
\section{Introduction}\label{sec:Het_Introduction}
The essence of sporting competitions is that the outcome is not decided in advance, nor affected by events outside of what happens on the field of play. In many sports, judging decisions can make the difference between victory and defeat. Match fixing, bias and misjudgments by officials and referees are highly damaging to the reputation and bottom line of competitive sports. Honest athletes, coaches, fans, gamblers, officials and sponsors wish for accurate and fair judges.
With the ever-expanding commercialization and media exposure of sporting events, judging decisions can bring fame and fortune for the winners and lifetime disappointment for the losers. These decisions can a have significant economic impact on the athletes' livelihood through corporate sponsorships and country bonuses. As one of countless examples, Singapore promises \$1,000,000 to individual gold medalists at the Olympic Games \cite{Singapore}.
Judges and referees are subject to many biases. The most important is national bias, which appears in many sports \cite{Ansorge:1988,Campbell:1996,Popovic:2000,Zitzewitz:2006, Emerson:2009, Leskovsek:2012,Zitzewitz:2014,Sandberg:2018,HM2018:NationalBias}, and results in judges awarding higher marks to athletes of their own nationality. Besides these biases, another incentive to monitor sports judges is the exponential increase of sports betting, especially online, which is now legal in many countries. The May 2018 Supreme Court ruling in favor of New Jersey against the 1992 federal ban on sports betting has led several US states to legalize sports gambling, with many other states actively working on its legalization. It has never been easier to legally bet on the outcomes of sporting events. Of course, sports betting as well as match fixing and corruption by judges and referees have gone hand in hand for ages, as recently and shamefully illustrated by the 2005 Bundesliga\footnote{\url{https://en.wikipedia.org/wiki/Bundesliga_scandal_(2005)}}, 2005 Máfia do Apito\footnote{\url{https://en.wikipedia.org/wiki/Brazilian_football_match-fixing_scandal}} and 2007 NBA scandals\footnote{\url{https://en.wikipedia.org/wiki/2007_NBA_betting_scandal}}.
All these factors have led to increased scrutiny of judges and referees. \textcite{Price:2010} quote former NBA Commissioner David Stern claiming that NBA referees "are the most ranked, rated, reviewed, statistically analyzed and mentored group of employees of any company in any place in the world". However, despite many such claims, there is little public disclosure from sports leagues, federations and governing bodies on how they monitor their judges, and most are reluctant to share data and processes to outsiders. For instance, following the 2002 Winter Olympics figure skating scandal\footnote{\url{https://en.wikipedia.org/wiki/2002_Winter_Olympics_figure_skating_scandal}}, the International Skating Union (ISU) reformed its scoring principles, and anonymized marks given by judges. Despite the intent of reducing collusion and corruption, anonymization made it more difficult for third parties to monitor judges. In fact, \textcite{Zitzewitz:2014} showed that national bias and vote trading increased after the rule changes. Following dubious judging at the 2014 Sochi Winter Olympics, the ISU backtracked and removed judge anonymity.
The last reason to monitor judges and referees, perhaps not as scandalous but more important on a day to day basis, is the simple fact that some judges are more accurate than others at judging. Assessing the skill level of sports judges objectively is a difficult endeavor and there is little literature on the topic. This is the main objective of this work.
\subsection{Our contributions}
We present a general framework to evaluate the performance of international sports judges, applicable to all sports where panels of judges evaluate athletes on a finite scale. For all these sports, judge evaluations decide the entirety, or a large fraction, of the outcome of the performances. As opposed to professional sports leagues, who have well-compensated and unionized referees, most judges in the sports we target are unpaid volunteers. They receive a small stipend covering their travel expenses when they officiate, but otherwise have other jobs and do judge duties by passion for their sport. Although their training, selection and monitoring vary per sport, they are more susceptible to favoritism\footnote{\url{https://www.washingtonpost.com/sports/olympics/two-chinese-figure-skating-judges-suspended-for-showing-favoritism-at-games/2018/06/21/8c0dc3e6-7557-11e8-b4b7-308400242c2e_story.html?noredirect=on&utm_term=.294cf291bd2f}}, collusion\footnote{\url{https://globalnews.ca/news/1139349/figure-skating-scoring-lends-itself-to-scandal-and-getting-worse-expert/}}, bribery\footnote{\url{https://www.theguardian.com/sport/2016/aug/01/rio-2016-olympics-boxing-corruption-allegations}}, and more simply but as importantly, lack of competence\footnote{\url{https://slate.com/culture/2016/08/are-olympic-boxing-judges-incompetent-corrupt-or-both.html}}.
We test and confirm our framework using data from artistic swimming\footnote{Formerly known as synchronized swimming.}, diving, dressage, figure skating, freestyle skiing (aerials), freestyle snowboard (halfpipe, slopestyle), gymnastics and ski jumping international competitions. All these sports are part of the Olympic family, and their federations derive a significant portion of their operating budgets from the broadcasting and marketing rights of the Olympic Games redistributed by the IOC~\cite{IOC:2019}. Furthermore, artistic swimming, dressage and rhythmic gymnastics are less popular worldwide, have the reputation of being more subjective, and are often mentioned when discussing sports that should be removed from the Olympics. They thus have strong economic incentives to have objective competitions.
Our main observation is that for all these sports except dressage, the standard deviation of the judging error is heteroscedastic, and we can model it accurately using a concave quadratic equation: judges are more precise when evaluating outstanding or atrocious performances than when evaluating mediocre ones. We can use this observation to quantify the long-term accuracy of judges against their peers. We provide evidence that the implemented scoring systems generally lead to objective judging. The exception is dressage, where judges increasingly disagree with each other as the performance quality improves, which implies a lack of objectivity compared to the other sports we analyze.
This is the third in a series of three articles on sports judging, extending our initial work on gymnastics. In the first article~\cite{MH2018:gymnastics}, we model the intrinsic judging error of international gymnastics judges as a function of the performance level of the gymnasts using heteroscedastic random variables. We then develop a marking score to quantify the accuracy of international gymnastics judges. In the second article~\cite{HM2018:NationalBias},
we leverage the heteroscedasticity of the judging error of gymnastics judges to improve the assessment of national bias in gymnastics.
\section{From heteroscedasticity model to judging the judges}
The knowledge of the heteroscedastic intrinsic judging error variability $\sigma_d(c)$ makes it possible for federations to evaluate the accuracy of their judges for all the sports we analyze in this article, exactly as was done in gymnastics~\cite{MH2018:gymnastics}. We note that in practice it is often better to use weighted least-squares exponential regressions instead of quadratic ones, since they are more accurate for the best performances~\cite{MH2018:gymnastics}.
The marking score of judge $j$ for performance $p$ is given by $m_{p,j}\triangleq\frac{\hat{e}_{p,j}}{\hat{\sigma}_d(c_p)}$. This scales the error of the judge for a specific performance as a fraction of the estimated intrinsic judging error variability for discipline $d$ and performance quality $c_p$, and allows us to compare judging errors for different quality levels and disciplines in an unbiased fashion. The overall marking score $M_j$ of judge $j$ is the root-mean-square of all his/her scaled judging errors in the dataset, i.e., $$M_j\triangleq \sqrt{E[m_{p,j}^2]}.$$
We can calculate the marking score of a judge for a specific competition, or longitudinally for multiple competitions over many years. A judge always marking one standard deviation $\hat{\sigma}_d(c_p)$ away from the median has a marking score of 1, and a perfect judge has a marking score of 0. The higher the marking score $M_j$, the more a judge misjudges performances compared to his/her peers. Conversely, judges with low marking scores have low noise levels around the objective performance quality.
We can use the marking score to detect outlier misjudgments, for instance judging errors greater than $2 \cdot \hat{\sigma}_d(c_p) \cdot M_j$. This flags $\approx 5\%$ of the evaluations and adjusts the threshold based on the intrinsic error variability of each judge: an accurate judge has a lower outlier detection threshold than an erratic judge. This is important to differentiate erratic but honest judges from accurate but sometimes highly biased judges. However, we must note that when using the median as the control score, a bad marking score for a single performance is not necessarily a judging error but can also mean that the judge is out of consensus. A more accurate control score is necessary to remove the ambiguity. Finally, we can also integrate the approximated standard deviation $\hat{\sigma}_d(c_p)$ and the marking score $M_j$ in bias analyses, as we did in our study of national bias in gymnastics~\cite{HM2018:NationalBias}.
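A minimal Python sketch of these computations, assuming a judge's judging errors and the corresponding fitted values $\hat{\sigma}_d(c_p)$ are already available as arrays:
\begin{verbatim}
import numpy as np

def marking_score(errors, sigma_hat):
    """Overall marking score M_j: root-mean-square of the judging errors of one
    judge, each scaled by the approximated variability at its control score."""
    m = np.asarray(errors, dtype=float) / np.asarray(sigma_hat, dtype=float)
    return float(np.sqrt(np.mean(m ** 2)))

def outlier_flags(errors, sigma_hat, M_j):
    """Flag evaluations whose absolute error exceeds 2 * sigma_hat(c_p) * M_j."""
    errors = np.asarray(errors, dtype=float)
    return np.abs(errors) > 2.0 * np.asarray(sigma_hat, dtype=float) * M_j
\end{verbatim}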
\section{Methods}
\begin{table}
\centering
\setlength{\tabcolsep}{1pt}
%
\begin{tabularx}{0.78\columnwidth}{cp{5mm}l}
\toprule
$s_{p,j}$ && Mark from judge $j$ for performance $p$ \\
$c_p$ && Control score of performance $p$ \\
$\hat{e}_{p,j}$ && Judging error of judge $j$ for performance $p$ \\
$\sigma_d(c)$ && Standard deviation of the judging error $\hat{e}_{p,j}$ for \\ && \qquad discipline $d$ and performance level $c$ \\
$\hat{\sigma}_d(c)$ && Approximate standard deviation of the judging error \\ && \qquad for discipline $d$ as a function of the performance level $c$\\
\bottomrule
\end{tabularx}
\caption{Notation}
\label{tab:Notation}
\end{table}
We perform the same analysis for every sport and discipline. Table \ref{tab:Notation} summarizes our notation. Let $s_{p,j}$ be the mark from judge $j$ for performance $p$. For each performance, we need a control score $c_p$ providing an objective measure of performance quality. We use the median panel score $c_p\triangleq\underset{j\text{ in panel}}{\text{median }}s_{p,j}$ in our analysis. The median is more robust than the mean or the trimmed mean against misjudgments and biased judges, and in the aggregate provides a good approximation of the true performance quality. In some sports such as gymnastics~\cite{MH2018:gymnastics}, more accurate control scores are derived using video analysis post competition, however these are not accessible for our analysis.
The difference $\hat{e}_{p,j}\triangleq s_{p,j} - c_p$ is the judging discrepancy of judge $j$ for performance $p$, which we use as a proxy for the judging error. For a given discipline $d$, we group the judging errors by control score $c$ and calculate the standard deviation $\sigma_d(c)$, quantifying how judges agree or disagree with each other for a given performance quality~$c$. We call $\sigma_d(c)$ the \emph{intrinsic discipline judging error variability}. We then approximate this variability as a function of performance quality with a polynomial of second degree $\hat{\sigma}_d(c)$ using a weighted least-squares quadratic regression.
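A minimal Python sketch of this estimation pipeline, where \texttt{panel\_marks} maps each performance to its list of panel marks; the grouping of control scores and the count-based weighting are illustrative implementation choices.
\begin{verbatim}
import numpy as np

def fit_sigma_curve(panel_marks, degree=2):
    """Estimate sigma_d(c) per control score and its weighted quadratic fit."""
    errors_by_c = {}
    for marks in panel_marks.values():
        c = float(np.median(marks))                 # control score c_p
        errors_by_c.setdefault(c, []).extend(m - c for m in marks)

    cs = np.array(sorted(errors_by_c))
    sigma = np.array([np.std(errors_by_c[c]) for c in cs])
    counts = np.array([len(errors_by_c[c]) for c in cs])

    # np.polyfit applies the weights to the residuals before squaring,
    # so count-based weighting uses the square roots of the counts.
    coeffs = np.polyfit(cs, sigma, deg=degree, w=np.sqrt(counts))
    return cs, sigma, np.poly1d(coeffs)             # hat{sigma}_d(c) as a callable
\end{verbatim}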
\section{Related work on judging skill and heteroscedasticity}
The vast majority of research assessing judging skill in sports focuses on consensus and consistency within groups. In 1979, \textcite{Shrout:1979} introduced the idea of intra-class correlation. This technique was used to evaluate judging in figure skating \cite{Looney:2004}, artistic gymnastics \cite{Bucar:2014, Leskovsek:2012, Atikovic:2009} and rhythmic gymnastics \cite{Leandro:2017}. The dependence between the variability of the judging error and performance quality had never been properly studied until our work in gymnastics~\cite{MH2018:gymnastics},
although it was observed in prior work. \textcite{Atikovic:2009} notice a variation of the marks deviation by apparatus in artistic gymnastics. \textcite{Leandro:2017} observe that the deviation of scores is smaller for the best athletes in rhythmic gymnastics, and \textcite{Looney:2004} notices the same thing in figure skating.
Even though heteroscedasticity is often under-reported in scientific research \cite{Cleasby:2011}, it is a well known property and appears in countless fields such as biology \cite{SV:2015}, chemistry \cite{Rocke:1995}, economics \cite{Okun:1971,Pagan:1983}, finance \cite{SS1990,Nelson:1991} and sports science \cite{NA:1997}. In most cases, heteroscedasticity arises as a scale effect, i.e., the variance is linked to the size of the measurement. This is not the case in our work, where the variability of the judging error depends on the quality of the performance~--~a~completely different source of variability than the scale effect.
Another particularity of our work is that instead of having a single observer making measurements, a set of observers, i.e., a panel of sports judges, independently observe and measure a common set of performances. Heteroscedasticity in a similar context was observed in judicial decision making \cite{Collins:2008}, and is implicit in the data of wine tasting scores \cite{Cicchetti:2004b}. Thus, as opposed to prior work, our goal is not only to model a heteroscedastic random variable accurately, but to assess the accuracy of the observers. This, to the best of our knowledge, has never been systematically attempted before our work in gymnastics.
\section{Results and Discussion}
\subsection{The general pattern of heteroscedasticity}
Figures \ref{fig:Res_Diving}-\ref{fig:Res_Dressage_split} show the standard deviation of the judging marks $\sigma_d(c)$ and the weighted least-squares quadratic regression curve $\hat{\sigma}_d(c)$ as a function of the performance quality~$c$ for diving, figure skating, halfpipe, ski jumping, slopestyle, trampoline, acrobatic gymnastics, aerobic gymnastics, artistic gymnastics, rhythmic gymnastics, artistic swimming, aerials (total and component scores) and dressage (regular and freestyle to music events), respectively\footnote{Note that for some sports we aggregate close quality levels (control scores) to improve the visibility of the figures. We do the analysis without the aggregation.}. Each figure includes the scaleless weighted coefficient of determination $r^2$ quantifying the goodness of fit of the regression. The weighted $r^2$ are high, ranging from $0.24$ (Dressage GP Freestyle) to $0.98$ (artistic gymnastics). Each figure also shows the weighted root-mean-square deviation (RMSD) quantifying the average discrepancy between the approximated deviation and the measured values. The RMSD depends on the scale of the marking range and cannot be compared between different sports.
With the notable exception of dressage, all sports exhibit the same heteroscedastic pattern: panel judges disagree the most when evaluating mediocre performances, and their judging error decreases as the performance quality improves towards perfection. The behavior for the worst performances depends on the sport. On the one hand, sports such as diving~(Figure~\ref{fig:Res_Diving}), trampoline (Figure~\ref{fig:Res_Trampoline}) and snowboard (Figures \ref{fig:Res_Halfpipe} and \ref{fig:Res_Slopestyle}) have many aborted or missed routines (splashing the water, stepping outside the trampoline boundaries after a jump, falling during the run). These routines result in very low marks, and the concave parabola is clearly visible for these sports, indicating smaller judging variability for performances close to zero. Smaller variability for atrocious and outstanding performances is not surprising: they either contain fewer components to evaluate or fewer errors to deduct. In both cases, this decreases the number of potential judging errors, as opposed to performances in the middle of the scoring range. On the other hand, gymnastics performances (Figures \ref{fig:Res_Acrobatics}-\ref{fig:Res_Rhythmics}) and artistic swimming routines (Figure~\ref{fig:Res_Synchronisedswimming}) barely receive a score in the lower half of the possible interval. Without these bad performances close to the minimum possible score, the quadratic fit $\hat{\sigma}_d(c)$ does not decrease towards the left border of the scoring range and can even be slightly convex.
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{Diving.pdf}
\caption{Standard deviation of judging marks versus performance quality in diving.}
\label{fig:Res_Diving}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{Figureskating.pdf}
\caption{Standard deviation of judging marks versus performance quality in figure skating.}
\label{fig:Res_Figureskating}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{Halfpipe.pdf}
\caption{Standard deviation of judging marks versus performance quality in halfpipe.}
\label{fig:Res_Halfpipe}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{Skijumping.pdf}
\caption{Standard deviation of judging marks versus performance quality in ski jumping.}
\label{fig:Res_Skijumping}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{Slopestyle.pdf}
\caption{Standard deviation of judging marks versus performance quality in slopestyle.}
\label{fig:Res_Slopestyle}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{Trampoline.pdf}
\caption{Standard deviation of judging marks versus performance quality in trampoline.}
\label{fig:Res_Trampoline}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{Acrobaticgymnastics.pdf}
\caption{Standard deviation of judging marks versus performance quality in acrobatic gymnastics.}
\label{fig:Res_Acrobatics}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{Aerobicgymnastics.pdf}
\caption{Standard deviation of judging marks versus performance quality in aerobic gymnastics.}
\label{fig:Res_Aerobics}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{Artisticgymnastics.pdf}
\caption{Standard deviation of judging marks versus performance quality in artistic gymnastics.}
\label{fig:Res_Artistics}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{Rhythmicgymnastics.pdf}
\caption{Standard deviation of judging marks versus performance quality in rhythmic gymnastics.}
\label{fig:Res_Rhythmics}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{Synchronisedswimming.pdf}
\caption{Standard deviation of judging marks versus performance quality in artistic swimming.}
\label{fig:Res_Synchronisedswimming}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{Aerials.pdf}
\caption{Standard deviation of judging marks versus performance quality in aerials (total scores).}
\label{fig:Res_Aerials}
\end{figure}
Aerials is of particular interest because it exhibits both possible behaviors for the worst performances, which is not obvious from Figure~\ref{fig:Res_Aerials}. More precisely, the total score in aerials is the sum of three independent components: 'Air', 'Form' \& 'Landing'. Even though athletes do often fall when landing, this only influences their 'Landing' score and not the other two components. Figure~\ref{fig:Res_Aerialssplit} shows the aerials judging errors split per component\footnote{Some competitions in our dataset are not split per component, thus we excluded them from Figure~\ref{fig:Res_Aerialssplit}.}. The variability of the 'Landing' scores, which are evenly distributed over the possible scoring range, closely follows the concave parabola, whereas the 'Air' and 'Form' components have right-skewed distributions because low marks are rarely given. For these two components the quadratic regression is closer to what we observe in gymnastics or figure skating. Aerials shows at the component level what we observe at the sport level: the shape of the parabola depends on the presence or absence of performances whose quality is close to zero.
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{Aerialssplit.pdf}
\caption{Standard deviation of judging marks versus performance quality in aerials (component scores).}
\label{fig:Res_Aerialssplit}
\end{figure}
\subsection{The special case of dressage}
\begin{figure}
\centering \includegraphics[width=1\columnwidth]{Dressage_GP.pdf}
\caption{Standard deviation of judging marks versus performance quality in dressage GP and GP Special events.}
\label{fig:Res_Dressage}
\end{figure}
\begin{figure}
\centering \includegraphics[width=1\columnwidth]{Dressage_Freestyle.pdf}
\caption{Standard deviation of judging marks versus performance quality in dressage split for 'Artistic presentation' scores of 'GP Freestyle to music' events.}
\label{fig:Res_Dressage_split}
\end{figure}
Surprisingly, the observed general pattern of heteroscedasticity does not apply to dressage. Figure~\ref{fig:Res_Dressage} shows the results for standard dressage GP and GP Special competitions, whereas Figure~\ref{fig:Res_Dressage_split} shows the results for 'GP Freestyle to music' events.
In both figures, judging errors are the lowest for average performances and the parabola is convex. For standard events in Figure~\ref{fig:Res_Dressage}, we first observe that the judges increasingly disagree as the performance quality decreases. This is similar to what we observe in gymnastics and artistic swimming and is due to the fact that there are no easy-to-judge performances close to the lower boundary of the marking range (in dressage the lowest possible score is 0 and there is no control score below 45 in our dataset). However, judges also increasingly disagree as the performance quality approaches perfection, which is extraordinary. The judging behavior in 'GP Freestyle to music' events in Figure~\ref{fig:Res_Dressage_split} is similar but less pronounced. Furthermore, there are a few exceptional performances for which the judging error decreases again, although this might be due to a mathematical truism: when the median mark is almost perfect, at least half the panel marks must be between the median and maximal scores, hence also perfect or close to perfect.
We did additional analyses to understand this unexpected behavior, and found that it appears at all levels of competition in our dataset, and for every judging position around the arena. The Fédération Équestre Internationale (FEI) states that ``Dressage is the ultimate expression of horse training and elegance. Often compared to ballet, the intense connection between both human and equine athletes is a thing of beauty to behold.''\footnote{From www.fei.org/disciplines/dressage.} Elegance and beauty are highly subjective, and the subjectivity of dressage judges is not new (consult, for instance, \textcite{HMM2010}). The simplest explanation is that judges fundamentally disagree on what constitutes an above average dressage performance. This might be due to imprecise or overly subjective judging guidelines, or to the difficulty or unwillingness of judges to apply said guidelines objectively. No matter the reason, our analysis reveals a clear and systemic judging problem in dressage, and we recommend that the FEI thoroughly reviews its judging practices. We shared our results with the FEI, the International Dressage Officials Club (IDOC) and Global Dressage Analytics, who observed a similar convex parabola at the figure level\footnote{Our dataset only included the total scores, and not the individual marks per figure.}. Judges do not agree on what is, say, a 9.0 pirouette, and these disagreements are compounded over many figures, affecting the overall performance evaluation. The FEI is currently considering major changes such as new guidelines, more precise codes of points and additional training for its judges \cite{FEI:2018}.
\label{section_introduction_and_motivation}
Domestic and international tourism has seen several years of steady growth. The revenue generated from accommodation, food and beverage, and other services provided to this large flux of travelers, has propelled the leisure and hospitality industry to become a key driver of the global economy. For sustained growth of this industry, experts in the field argue for major improvements in the type and quality of hospitality services to adapt to the changing consumption and travel behaviors of the evolving customer base. Specifically, these improvements are targeted towards attracting the new generation of technophile individuals traveling on a tight budget \cite{Deloitte_HospitalityOutlook_2014}. Implementation of these improvements compounds to a complete makeover of the service packages and the underlying technological framework currently used by hospitality service providers (HSP). The goal of these improvements should be: personalization of experiences and digitalization of services \cite{Deloitte_HospitalityOutlook_2014}.
Personalization of experiences is necessary to market services to individuals traveling on a limited budget. Personalization creates individualized guest experiences by incorporating flexibility and customizability to the offered service packages \cite{Deloitte_HospitalityOutlook_2014}. Most of the current packages marketed by HSP offer rigid and tailored experiences. These packages bundle different combinations of popular services in different price brackets with little to no means of negotiating adjustments. This leaves travelers to choose between all or nothing and they usually end up opting for the latter choice. If HSP have more flexible service package offerings, then guests can plan their experience according to their desires and their budgets. Crafting personalized value propositions for each guest requires a massive effort on both the guests' and the service providers' parts. This process can be simplified significantly by using an effective technological platform to manage the interaction between guests and service providers.
Digitalization of services is imperative to appeal to technophile guests. The goal of digitalization of services is to transition to a digital business model by pushing hospitality services to guests' touch-point \cite{Kasavana_HospitalityIndustry_2014}.
A digital service platform affords guests the ability to browse, plan and pick activities at their own convenience thus facilitating seamless integration of technology into their travel experience. Booking and reservation services, location-based services and personalized communication, and social media integration are a few examples of digital services that entice technophile guests. There are a host of third party applications providing these services which guests are familiar with and rely upon. Revenue erosion to these third party applications and services is a growing concern to HSP \cite{Tossell_DigitalGuestStrategy_2015}. In order to compete with these third party applications, HSP must develop their own applications which provide better on-property and off-property services to guests. Through special incentives such as loyalty points, coupons and bonuses, guests can be encouraged to use in-house applications over third party applications. Providing digital services with the same quality as third party application services requires a sound technological infrastructure base with specialized computation and communication capabilities. This warrants the overhaul of current technological framework used by HSP.
The future of hospitality management industry is being shaped by the current boom in the Internet of things (IoT) technology. HSP must stay on the leading edge of IoT technology to maintain a competitive edge in the market. The IoT is the interconnection of everyday physical devices like sensors, actuators, identification tags, mobile devices, etc., such that they can communicate directly or indirectly with each other via local communication networks or over the Internet \cite{Munir_IFCIoT_2017}. The incorporation of IoT technology in the hospitality industry qualifies hotels as smart buildings which are important facets of smart cities \cite{Mohanty_SmartCities_2016}. The IoT paradigm offers HSP a nuanced means of interacting with guests and collecting their real-time data. This opens up new avenues for immediate, personalized and localized services as HSP can gauge guest behaviors and preferences with higher accuracy. The IoT also enables HSP to increase back-end efficiency of multiple departments \cite{Intelity_ForecastHotelTech_2016} (e.g. front desk, housekeeping, sales and marketing, etc.) as well as enact cost-saving policies like smart energy management \cite{Lee_Energy_2018} \cite{Hsiao_Energy_2018}. The IoT technology is already spreading through the hospitality industry with public terminals, in-room technologies and mobile applications \cite{Kasavana_HospitalityIndustry_2014} and some of the promising future IoT applications, such as body area sensor networks, environment monitoring and augmented reality experiences, will certainly usher in new business prospects. HSP should therefore aim to future proof their technology framework so that their systems can be easily upgraded in tandem with the changing IoT technological landscape.
Overall, the new technological upgrade of the hospitality industry should create a mutually beneficial platform by facilitating partnership between guests and HSP. The platform should ensure that guests are treated to an outstanding travel experience while also improving the operational and managerial efficiency for HSP. Furthermore, the new technological framework must be future-proof, providing an easy upgrade schedule for the addition of new/improved services. In this paper, we present a detailed overview of the role of technology in state-of-the-art hospitality services. We also describe potential future hospitality services following the burgeoning revolution of IoT technology. We then outline the challenges currently faced by HSP and discuss the need for overcoming these challenges to develop a lasting future-proof solution for the hospitality industry.
The remainder of this article is organized as follows. Section~\ref{section_state_of_the_art} describes the state-of-the-art hospitality services currently offered by providers. Section~\ref{section_future_services} envisions future services that guests can expect as hospitality industry continues to grow. Section~\ref{section_challenges} presents the major challenges and issues in designing solutions for the hospitality industry. Finally, Section~\ref{section_conclusions} concludes our article.
\section{state-of-the-art hospitality services}
\label{section_state_of_the_art}
HSP are making large IT expenditures to revamp their technological infrastructure base. In 2016, midscale hotels led in IT expenditure (7.3\%), trailed by upscale hotels (6.1\%) and luxury hotels (5.6\%)\cite{Intelity_ForecastHotelTech_2016}. The expenditures are largely focused on digitalization of the service platform to benefit both parties of the hospitality service exchange -- the guests and the service providers. Innovations in smart devices and IoT are driving the reform of technology used in the hospitality service platform. Guest interactions are being migrated towards on-screen and online interfaces through guest-facing systems which apart from being convenient for guests doubles as an opportunity for service providers to collect valuable data and feedback \cite{Deloitte_HospitalityOutlook_2014}. Digitalization, implemented by HSP through back-of-house (BoH) management systems, has helped improve operational efficiencies, enhance managerial effectiveness, reduce cost of goods sold, increase revenues and improve sustainability \cite{Kasavana_HospitalityIndustry_2014}.
In a digitalized hospitality service platform, guest-facing systems are the primary interfaces for interaction between the guests and the HSP. Therefore, it is imperative that these systems provide easy to use interfaces for guests to manage their travel experience. Guest-facing systems (shown in Figure \ref{figure_State-of-art}) include hospitality service mobile applications, point-of-sale (POS) terminals, hand-held devices, thin-client terminals, etc. \cite{Wang_Tech_2017} \cite{Ukpabi_Tech_2017}. These systems should be integrated seamlessly into all three phases of the guest cycle: pre-sale, point of sale, and post-sale phases so as to provide a complete digital service experience for the guests.
\begin{figure}[!t]
\centering
\includegraphics[width = 3.25in, bb = 14 13 635 799]{state-of-art}
\caption{State-of-the-art hospitality services}
\label{figure_State-of-art}
\end{figure}
Guest-facing systems improve guest experience in several different ways. Firstly, guest-facing systems ensure guest satisfaction by allowing guests to control their environment. Guest-facing systems empower guests with services such as automatic check-in and check-out services, keyless entry services, control of in-room functions etc. \cite{Wang_Tech_2017} (shown in Figure \ref{figure_State-of-art}). For example, Hilton and Starwood hotels offer guests automatic check-in and keyless entry service using their mobile apps \cite{DePinto_7TrendsIoTHospitality_2016}. Telkonet's EcoSmart Mobile offers similar mobile applications with the added features to allow guests to have control of in-room IoT products \cite{DePinto_7TrendsIoTHospitality_2016}. Samsung's Hotel Management Solutions and SINC entertainment solutions also allow guests to control in-room functions as well as check weather and flight information through a TV remote interface \cite{DePinto_7TrendsIoTHospitality_2016}. Hotels like Mondarian SoHo, The Plaza and The Marlin are placing tablets in their hotel rooms to provide guests with interfaces for controlling in-room functions \cite{VenturePact_HowIoTImprovesHospitality_2015}. Peninsula Hotels is developing their own line of proprietary in-room tablets which allow guests to order room service, message the concierge, arrange transportation, make free VOIP calls, and select TV stations and movies to stream onto the hotel room television \cite{Shallcross_MarriotEnseo_2016}.
\begin{figure*}[!t]
\centering
\includegraphics[width = 7in, bb = 13 15 1468 781]{IFCIoT_Future}
\caption{Scope of future services in the hospitality industry}
\label{figure_IFCIoTFuture}
\end{figure*}
Secondly, guest-facing systems provide guests with location-based services which is another important service linked to guest satisfaction \cite{Tossell_DigitalGuestStrategy_2015}. More than 30 percent of hotels in 2016 allocated budgets for location-based technology \cite{DePinto_7TrendsIoTHospitality_2016}. Guest-facing systems enabled with location-based technology offer on-property and off-property guest services like digitally guided tours, recommendations of local events and attractions, as well as suggestions for dining and entertainment options (shown in Figure \ref{figure_State-of-art}). These services not only aid the guests in getting around and exploring during their stay, but, also enable service providers to keep guests within the revenue loop by preferably steering guests to sites and establishments that profit the HSP. For example, Fontainebleau Miami tailor their pre-arrival and checkout offers using their guests' location data \cite{DePinto_7TrendsIoTHospitality_2016}. Finally, guest-facing systems make it easy for guests to participate in loyalty programs with HSP \cite{Intelity_ForecastHotelTech_2016}. By using hotel loyalty mobile apps, guests can keep track of coupons and bonuses, and get notifications on deals and special offers.
The services offered to guests through guest-facing systems are driven by sophisticated BoH management systems. These systems are tasked with managing service staff and balancing operational costs and revenue without compromising quality of service provided to guests. The BoH management systems include property management system, customer relationship management, revenue and sales management, housekeeping maintenance software etc. \cite{Kasavana_HospitalityIndustry_2014}. The developments in guest-facing systems and IoT technology are significantly enhancing the capabilities of BoH management systems. For example, in-room IoT units like thermostats, motion sensors and ambient light sensors (shown in Figure \ref{figure_State-of-art}) can be used to control temperature and lighting in hotel rooms when they are unoccupied or unsold which can reduce energy costs by 20 to 45 percent \cite{DePinto_7TrendsIoTHospitality_2016}. Starwood Hotels and Resorts' ``daylight harvesting'' is such an energy-saving scheme which saves energy and increases indoor lighting consistency by automatically adjusting the energy-efficient LED lighting based on the natural light detected coming into the hotel room \cite{DePinto_7TrendsIoTHospitality_2016}.
The innovations in guest-facing systems are also reshaping the customer relations dynamic between guests and HSP. Guest-facing systems enable service providers to closely monitor the guest cycle by collecting data on specific guest preferences, behaviors and locations \cite{Wang_Tech_2017} \cite{Piccoli_Personalization_2017}. Service providers and BoH systems make use of this data to create custom guest profiles which they use to personalize service offers for repeat business. These custom guest profiles can be shared with a large network of partner service providers which ensures that services offered to guests are always highly personalized. Custom guest profiles also grant HSP the ability to entice guests into using their services by means of targeted advertisements and special insiders' guides and offers.
Another critical management task associated with BoH management systems is to build up the online brand value of HSP \cite{Lee_OnlineBrand_2014}. This includes developing and maintaining good customer relations through effective use of the social media platform, and engaging guests to rate and review services in online portals. The online standing of a company directly correlates to its revenue stream. Around 90\% of modern-day technophile travelers base their decisions on online reviews when purchasing hospitality services. A single negative review can thus result in the potential loss of a large number of customers. It is therefore necessary for BoH management systems to monitor online portals for bad reviews and ratings and take necessary action to mitigate their effects. However, through effective use of the social media platform, such as live chat-based assistance for prompt responses to guest queries or advertising group activities and services to a guest's circle of friends, hospitality services can be highly personalized, which makes them attractive from a guest's point of view.
The BoH management systems also help improve revenue per available room (RevPAR) \cite{Altin_RevPAR_2017} by speeding up the housekeeping and maintenance processes. By using in-room technologies and guest preference profiles, BoH management systems can schedule housekeeping services efficiently. This effectively reduces the downtime of hotel rooms, improves the utilization of labor resources, and significantly improves guest satisfaction. The use of housekeeping management systems and applications can help in reducing payroll costs by 10\% to 20\% \cite{Kasavana_HospitalityIndustry_2014}. The BoH management systems also help in maintenance of in-room and on-property smart systems. These systems help discover faults and failures in near real-time and thus facilitate prompt maintenance.
\section{scope of future hospitality services}
\label{section_future_services}
As the IoT ecosystem grows and spreads into different facets of everyday life, we can expect a future where every physical device that we use aggregates and analyzes our data and automatically provides us services. The hospitality industry is inclined to follow this growing trend to offer new types of services to its guests as well as to enact cost saving measures. In this section, we discuss some of potential services and use cases that the burgeoning IoT ecosystem may bring to the hospitality sector in the future. Figure \ref{figure_IFCIoTFuture} shows examples of IoT sensor and devices the different service categories they can be employed for.
\subsection*{Body Area Sensors}
Smart and wearable devices are at the forefront of the IoT revolution. Sales of devices such as smartphones, smart-watches, etc., are soaring and smart technology is beginning to be included in other wearable forms like smart clothing, smart shoes, etc. These devices gather user data like body temperature, heart rate, location, fitness activities, etc. Wireless medical sensor technology further expands the scope of data collection by providing detailed data about organs and systems within the body. With proper analysis of data gathered through body area sensor networks, HSP can offer a host of new services to their guests such as automatic adjustment of in-room temperature based on body temperature, adjustment of in-room lighting based on a guest's sleep-cycle, provision of meal suggestions based on a guest's desired fitness goal, etc. HSP can also provide special facilities to guests based on the type of medical devices they use. For example, service providers can filter out high carbohydrate and sugary meal options for diabetic guests, high cholesterol meal options for patients with heart disease, etc.
\subsection*{Augmented Reality and Beacon Technology}
HSP are coming up with new ways to incorporate augmented reality and beacon technology into their on-property systems. This technology can be used to provide guests with services such as digitally guided tours, previews of in-room environment (e.g., decor, facilities and amenities, etc.), immediate translation services for signs and other written materials, interactive restaurant menus with dish previews, critic reviews, food allergy information, etc. as well as interactive trivia games around on-property points of interest \cite{Tussyadiah_AugmentedReality_2017}. These services can be bundled as part in-house loyalty applications. As guests use these services, HSP can advertise new services or collect data to improve guest preference profiles \cite{HospitalityTech_BeaconsAugmentedReality_2015} \cite{Perey_AR_2015}.
\subsection*{Energy Management}
HSP can enact several cost-saving measures in the management of on-property energy consumption by leveraging IoT technology. These measures are particularly helpful in achieving ``green'' operation of on-property systems.
Some of the energy-saving systems currently in place at many hotel properties include smart lighting and temperature control systems as well as the use of low-power devices like compact fluorescent bulbs, LED lights, etc. IoT technology can significantly expand the scope of energy-saving systems. For example, IoT-enabled power outlets and IoT-enabled smart devices alert housekeeping and maintenance service personnel if a particular outlet exceeds a set limit for power consumption over a given period of time. The service personnel can then track down whether the guests are mindful of the power consumption or whether the power is leaking due to malfunctioning devices~\cite{Lee_Energy_2018} \cite{Hsiao_Energy_2018}. IoT technology can also be employed to limit water consumption. This can be achieved through IoT-enabled smart bathrooms with smart shower heads, smart sinks, flow-controlled toilets, etc.
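As a toy illustration of the outlet-monitoring rule sketched above, such an alerting policy can be as simple as the following; the window length, threshold, and identifiers are invented for this example and are not drawn from any particular vendor system.
\begin{verbatim}
from collections import defaultdict, deque

WINDOW = 12          # readings kept per outlet, e.g., one every five minutes
LIMIT_WATTS = 150.0  # alert threshold for the windowed average consumption

readings = defaultdict(lambda: deque(maxlen=WINDOW))

def ingest(outlet_id, watts):
    """Store a reading; return True if maintenance should be notified."""
    window = readings[outlet_id]
    window.append(watts)
    avg = sum(window) / len(window)
    return len(window) == WINDOW and avg > LIMIT_WATTS

if ingest("room-412-outlet-2", 180.0):
    print("notify maintenance: sustained high power consumption")
\end{verbatim}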
\subsection*{Building Automation and Monitoring}
Both guests and service providers benefit from building automation and monitoring. New hospitality services such as keyless entry services, automated check-in and check-out services, digital concierge, etc. which will be brought about by developments in IoT-enabled systems will greatly improve guest satisfaction. These services are not only appealing for technophile users but, they can be specially helpful for guests with disabilities. Building automation also leads to greater operational and managerial efficiency for HSP. For example, in-room monitoring systems can be used to detect whether a room is occupied or unoccupied so as to schedule housekeeping services. IoT-enabled in-room and on-property guest-facing systems as well as other utility systems such as elevators, automated doors and windows, powerlines, pipelines, etc., can report faults and malfunctions and schedule preventive maintenance services before any problems are detected with regular physical inspections\cite{Vermesan_IoTForMaintenance_2014}.
\section{Challenges}
\label{section_challenges}
In this section, we identify four major challenges (Figure \ref{figure_Challenges}) associated with an effective IoT implementation in the hospitality industry. These challenges need to be addressed by the new technological infrastructures being adopted by HSP in order to sustain steady growth.
\begin{figure}[!t]
\centering
\includegraphics[width = 2.5in, bb = 14 13 461 527]{Challenges}
\caption{Technological challenges in the hospitality industry}
\label{figure_Challenges}
\end{figure}
\subsection*{Interoperability}
The hospitality industry lacks standardization. Many HSP are developing their own proprietary solutions based on their own metrics and methodologies in order to accommodate the technological service demands of modern day guests\cite{SpecialNodes_HospitalityIoT_2013}. This has led to a diverse spectrum of implementations which are essentially targeted to provide a similar set of services. Although these implementations work well within the scope of a single property, they lack the potential to be extended to intra-organization and inter-organization scopes \cite{Wood_Hospitality_2013}. This imposes limitations on the usability of guest preference profiles on a broader scope because of the lack of a standard platform for sharing guest data across different businesses. This can lead to loss in potential revenue for HSP as they may be unable to effectively provide personalized services to their guests. Interoperability issues also impact guest experience as they create hassles and inconveniences that takes away from seamless user experience desired by guests. Non-standardized systems at different hotels introduce unwanted learning periods for guests during their stay. Such systems may also have issues in interfacing and using data from personal devices brought by guests. These problems warrant standardized vendor independent systems and solutions for hospitality industry.
\subsection*{Data Management}
Aggregation and analysis of guest data is an integral part of the hospitality service chain. With the introduction of new technologies and service platforms in the hospitality industry, data volume is bound to grow exponentially. Personalization of guest experience contributes significantly to increase in data volume. As personalized services become the norm in hospitality industry, HSP must treat each of their guests as unique individuals and maintain accurate and up-to-date records of their preferences and behaviors. HSP can collect guest data through guest-facing systems as well as personal guest devices connected to the hotel network. The BoH management systems in hotels must be capable of properly managing the influx of wide variety of guest data from wide variety of sources. In order to provide personalized services to guests, BoH management systems must analyze guest preference profile along with data about the state of the surrounding environment detected from IoT devices/sensors. This places a considerable computational burden on BoH management systems that can only be tackled through the use of specialized technological infrastructures. Additionally, secure sharing of relevant data from these guest profiles across different intra-organization and inter-organization systems is a monumental logistic challenge that requires both centralized and decentralized data management approaches.
\subsection*{Security and Privacy}
In order to provide guests with highly personalized services, it is necessary for HSP to track guest preferences, behavior, and location. HSP must ensure that guest data is used and stored properly so as to protect guests from physical, economic, and societal threats. Guest-facing systems and point-of-sale terminals are the most susceptible systems in hotels to security attacks. These systems should ensure that interactions with guests are secure and private by employing robust security measures to prevent data leaks and theft. Security primitives should also be supplemented in the hotel network for added security in interfaces with personal guest devices and in-room and on-property IoT devices. A secure hotel network prevents hackers from gaining access to guest data by attacking personal guest devices connected to the network. It also prevents hackers from reprogramming the hotel's IoT systems for annoying or malicious purposes. Adding strong security protocols in every guest interaction and every active connection on the hotel network requires significant computing resources. Moreover, these security protocols should be implemented close to the data source so that data is secured in as few number of hops in the network as possible. A decentralized computing platform is necessary to meet these requirements.
\subsection*{Responsiveness}
HSP must ensure prompt acknowledgement of guest requests and prompt delivery of services to guests. This can be achieved by digitalization of the interaction between guests and HSP. By pushing guest interactions to guest-facing systems and implementing automatic control through IoT sensors/devices, HSP can eliminate the need for human interaction and intervention when dealing with guests. These systems leave little room for miscommunication and confusion when interpreting guests' requests. These systems can also readily fulfill guests' requests faster than any dedicated hotel staff/personnel. This greatly improves responsiveness to guest requests and adds to the seamless experience desired by guests. Responsiveness is also crucial for a hotel's upkeep and maintenance. No or slow response to repair and maintenance needs can lower the hotel's revenue per available room (RevPAR). For example, a room cannot be rented if something as simple as the phone in the room is not working \cite{Altin_RevPAR_2017}. In hotels that have large scale IoT deployments, repair and maintenance requests can be responded to swiftly because most IoT sensors and devices can detect and self-diagnose problems. Timely repair and maintenance makes hotel rooms available for occupancy quickly thus reducing loss of revenue to maintenance. In order to improve responsiveness of hotel systems, they must be equipped with more computing resources and unfettered access to guest and BoH management data which requires a decentralized computing and data management platform.
\section{conclusions}
\label{section_conclusions}
In this paper, we outline many critical enhancements that need to be implemented in the hospitality industry to restructure its service platform to fit into the modern technological landscape. We identified personalization of experiences and digitalization of services as the two fronts in which these enhancements have to be focused. Many HSP have taken radical steps to remodel their services and we discuss some of these state-of-the-art hospitality services offered by them. We also envision several new future services that might be offered by the hospitality industry as some of the bleeding edge of systems, such as body area sensors, augmented reality, etc., enter maturity. We identify some fundamental challenges that need to be overcome to institute a lasting future-proof solution for the hospitality industry. We envision that future technological solutions for the hospitality industry will consist of geo-distributed systems that are capable of providing localized information and services, high volume data aggregation, security and privacy, and low latency event responses through energy efficient computing and bandwidth efficient communication resources. These solutions must also enable local, regional, and global analytics for providing valuable insights into improving quality of service as well as building better business models.
\vspace{8mm}
\noindent \textbf{Prasanna Kansakar} is a PhD student in the Department of Computer Science at Kansas State University, Manhattan, KS. His research interests include Internet of Things, embedded and cyber-physical systems, computer architecture, multicore, and computer security. Kansakar has an MS degree in computer science and engineering from the University of Nevada, Reno. Contact him at [email protected].
\vspace{8mm}
\noindent \textbf{Arslan Munir} is an assistant professor in the Department of Computer Science at Kansas State University, Manhattan, KS. His research interests include embedded and cyber-physical systems, computer architecture, multicore, parallel computing, fault tolerance, and computer security. Munir has a PhD in electrical and computer engineering from the University of Florida, Gainesville. He is a senior member of IEEE. Contact him at [email protected].
\vspace{8mm}
\noindent \textbf{Neda Shabani} is a PhD student in the College of Human Ecology, Department of Hospitality Management at Kansas State University, Manhattan, KS. Her research interests include IT and technology in the hospitality industry, such as cybersecurity and privacy, augmented reality, Internet of things, computer architecture, and big data. Shabani has a BA in English Literature from Shiraz University, Iran and an MS in Hospitality and Tourism Management from the University of South Florida, Sarasota-Manatee. Contact her at [email protected].
{
\balance
\bibliographystyle{IEEEtran}
\section{Introduction}
Match analysis in soccer is very complex and many different factors can affect the outcome of a match. The question is which so-called key performance parameters allow for the characterization of successful teams~\cite{low2020, memmert2018data, Rein2016, sarmento2018}.
While team behavior can be differentiated into a hierarchical scheme consisting of individual, group, and team tactics, different metrics are necessary to capture behavior at each level~\cite{garganta2009, Rein2016}.
Researchers have recognized that game plays should be segmented into different phases since tactics vary greatly~\cite{mackenzie2013} across them.
Performance in soccer is also determined by physiological factors~\cite{drust2007} such as running distance~\cite{DiSalvo2007}. For this reason, it has been suggested to link such information to tactical parameters~\cite{bradley2018}.
To carry out such analyses, the player positions on the field are required.
Current tracking technologies allow the recording of several million data points representing player and ball positions during a match by using additional hardware, e.g., multiple static cameras or sensors on players.
However, they are difficult to obtain, for instance, due to licensing, financial restrictions, or competitive concerns, i.e., a club normally does not want to disclose its own team's data.
In contrast, broadcast video recordings of soccer matches can be accessed more easily.
In this paper, we introduce a modular pipeline to extract the two-dimensional positions of the visible players from ordinary broadcast recordings.
As illustrated in Figure~\ref{fig:pipeline}, the system involves sports field registration, shot boundary detection, shot type classification, player detection, and team assignment.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{figures/pipeline_wide.pdf}
\end{center}
\caption{Proposed pipeline to extract positional data with team assignment from broadcast videos: The video is pre-processed to segment the field and detect shot boundaries. The camera type is estimated to extract shots from the main
camera. Subsequently, the sports field is registered and the extracted homography matrix is used to transform the sports field and player detections in order to obtain two-dimensional coordinates for each player. Team assignment is performed by clustering the players' bounding boxes.}
\label{fig:pipeline}
\end{figure*}
\textbf{Application Novelty:}
While commercial approaches like \cite{statsperform, tracab, metricasports} primarily use multiple static cameras to generate position data from video, the TV feed itself is used by
\emph{SkillCorner}~\cite{skillcorner} and \emph{Track160}~\cite{track160}.
However, only the final output of such systems is accessible~\cite{skillcorner, track160}.
To the best of our knowledge, neither their quality nor the underlying architectures, training data, or applicability to custom data are publicly reported.
While individual sub-tasks have been tackled in research, their combination for the joint real-world task of \emph{player position estimation}
has not been studied yet~(neither in soccer nor beyond).
Even individual sub-modules have not been sufficiently evaluated in terms of applicability to real-world data.
For the essential step of sports field registration, recent approaches~\cite{sha2020end, nie2021robust} are evaluated only on a single small-scale dataset~\cite{homayounfar}.
The potential for generalization has been mentioned~\cite{nie2021robust, cioppa2021camera}, but only with the use of many cost-intensive annotations from various data sources.
Furthermore, the influence of errors in individual modules and their connections has not been explored.
Tackling this demanding real-world task is of interest for the computer vision community as well as for sports science, and has direct applications.
\textbf{Contributions:}
In contrast to commercial systems and related work, we provide the first transparent baseline for~\emph{player position estimation} with interchangeable modules that relies on state-of-the-art techniques and freely available data, and we evaluate each module individually.
We demonstrate the generalizability on multiple datasets that the applied models were not originally trained on.
The proposed pipeline is also applicable to the so-called ``tactic-cam'' that is located next to the main camera.
To evaluate the global task, estimated positions are compared to ground-truth positional data.
This comparison is not trivial due to non-visible players in the video and the influence of errors of individual modules.
Therefore, we propose novel evaluation metrics and identify the impact of errors on the final system output.
The remainder of the paper is organized as follows. Section~\ref{sec:rw} gives a brief overview of related work.
The pipeline itself is introduced in Section~\ref{sec:method}. In Section~\ref{sec:experiments}, the different system components and the accuracy of the extracted positional data are evaluated. Finally, Section~\ref{sec:conclusion} discusses the results and describes possible areas of future research.
\section{Related Work}
\label{sec:rw}
Since the global task of \textit{player position estimation} has not yet been addressed, we briefly review related work for all individual sub-tasks in this section.
Great progress has been made in recent years for \textbf{sports field registration} with monocular non-static cameras.
\citet{cuevas2020automatic} trained a probabilistic decision tree to classify specific line segments as an intermediate step for homography estimation and integrated a self-verification step to judge whether a predicted homography matrix is correct.
\citet{homayounfar} propose a solution that relies on field segmentation and Markov random fields. \citet{sharma} and \citet{chen2019} propose the nearest neighbor search from a synthetic dataset of pairs of edge images and camera images for fully-automated registration.
\citet{error_refine} present a two-step deep learning approach that initially estimates a homography and minimizes the error using another deep network instead of the Lucas-Kanade algorithm~\cite{baker2004lucas}.
\citet{citraro2020} suggest an approach that also takes into account the position of players and is trained on a separate dataset for uncalibrated cameras.
\citet{sha2020end} propose an end-to-end approach for area-based field segmentation, camera pose estimation, and online homography refinement that allows end-to-end training and efficient inference.
\citet{nie2021robust} tackle the challenge when no prior knowledge about the camera is available and propose a multi-task network to simultaneously detect a grid of key points and dense field features to estimate and refine a homography matrix end-to-end. This approach seems suitable since temporal consistency is also verified for successive frames.
However, a very large number of training samples is required to achieve the desired accuracy and generalizability, but training data are not publicly available except for the \textit{WorldCup2014} dataset~(\emph{WC14}~\cite{homayounfar}).
\textbf{Shot boundary detection}~(e.g.,~\cite{tang2018fast,gygli2018ridiculously,wu2019two, transnet}) and \textbf{shot type classification}~(e.g.,~\cite{tong2015cnn, savardi2018shot}) are necessary pre-processing steps for many tasks of video analysis.
It enables the distinction between different camera shot types.
Related work in the context of soccer distinguishes between three~\cite{counterattack}, four~\cite{cnn_shot} or five~\cite{dataset_shots} different camera shot types.
For the extraction of positional data, the main camera~(with the largest distance to the field) offers the most useful information because it normally covers a larger part of the field and depicts several players.
There are several approaches for the \textbf{detection of players} in sports analysis~\cite{FootAndBall2020, light_cascaded, player_detec_Direkoglu, ssd_playerdetect}.
Although general-purpose approaches for object detection~\cite{fasterrcnn, SSD} are also able to detect persons, sports offer specific challenges. For example, the players are usually small, they differ in scale due to the distance from the camera, they can occlude one another, and there is blur caused by camera movement.
Nevertheless, specialized approaches~\cite{FootAndBall2020, ssd_playerdetect} compare themselves to general-purpose detectors such as the \emph{Single Shot Detector~(SSD)}~\cite{SSD} or \emph{Faster R-CNN}~\cite{fasterrcnn}. \citet{FootAndBall2020} have recently introduced a computationally much more efficient method with results similar to a fine-tuned \emph{Faster R-CNN}.
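For orientation, the following is a minimal sketch of obtaining person (player) detections with an off-the-shelf general-purpose detector; it uses the COCO-pretrained Faster R-CNN shipped with torchvision as a stand-in for the specialized detectors discussed above, and the score threshold as well as the weight-loading syntax (which differs between torchvision versions) are assumptions.
\begin{verbatim}
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# COCO-pretrained detector; in COCO, label 1 corresponds to "person".
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_players(image_path, score_thr=0.8):
    """Return person bounding boxes [x1, y1, x2, y2] above score_thr."""
    img = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        out = model([img])[0]
    keep = (out["labels"] == 1) & (out["scores"] >= score_thr)
    return out["boxes"][keep].cpu().numpy()

boxes = detect_players("frame_000123.jpg")
\end{verbatim}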
In team sports, the jerseys of the teams are designed so that they can be easily recognized by their color. Thus, for \textbf{team assignment} of the (detected) players, color information can be used as a discriminant feature.
Hand-crafted (color)~features~(\cite{d2009investigation, lu2013learning, tong2011automatic}) or features from convolutional neural networks~(CNNs)~(\cite{team_assign, lu2018lightweight, koshkina2021contrastive}) are exploited and clustered by these approaches for team assignment.
An approach for player detection and team discrimination~\cite{team_assign} addresses the problem of occlusions and errors in object detection~\cite{manafifard2017survey}.
\section{Player Position Estimation in Soccer Videos}
\label{sec:method}
A frequent problem in the field of automatic sports analysis is the lack of publicly available datasets. Currently, there is no public dataset that provides positional data for given broadcast soccer videos.
Besides, related work has solely considered sub-problems of the overall task of \emph{player position estimation}.
This section describes a pipeline as well as the choice and modifications of individual components that solve all required sub-tasks for \emph{player position estimation} to predict the two-dimensional player positions on the field given an input (broadcast) video~(Figure~\ref{fig:pipeline}).
After all relevant (main camera) shots are identified~(Section~\ref{sec:exp:shots}), the step of sports field registration is essential to extract position data~(Section~\ref{sec:field_red}).
A homography matrix is determined and used to transform the positions of the players from the image plane into world coordinates~(Section~\ref{sec:player_det_pos_est}).
\subsection{Shot Boundary and Shot Type Detection}\label{sec:exp:shots}
We aim at estimating player positions in frames recorded by the main camera since it is most frequently used and shows the area of the game that is relevant for tactical analysis, as shown in Figure~\ref{fig:pipeline}.
We first extract shots from the television~(TV) broadcast using the widely applied \emph{TransNet}~\cite{transnet, transnetv2} for shot boundary detection. Since our objective is to gather only valuable positional data, we subsequently apply shot type classification to identify shots captured by the main camera.
We exploit the homography matrices estimated by the sports field registration approach presented in Section~\ref{sec:field_red}. We found that the homography matrices do not change fundamentally in successive frames captured by the main camera. On the other hand, all other cameras that, for example, capture player close-ups or actions depict no or only small fractions of the sports field, causing large errors and, consequently, inconsistencies in the predicted homography matrices.
For this reason, we calculate the average $\overline{\mathcal{L}}_H$ of the homography changes for each shot. The homography change for two successive frames $t$ is defined as $\mathcal{L}_H(H_t, H_{t+1})=\Vert H_t - H_{t+1} \Vert_2$ where each entry in $H$ is (min-max) normalized for each shot.
Finally, we classify each shot as the main camera shot if the condition~$\overline{\mathcal{L}}_H \leq \tau$ is fulfilled.
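A minimal sketch of this criterion is given below; it assumes that per-frame homography estimates for the shot are already available and that each of the nine matrix entries is min-max normalized over the shot, as described above. The threshold $\tau$ remains a tunable parameter.
\begin{verbatim}
import numpy as np

def is_main_camera_shot(homographies, tau):
    """Mean L2 change of per-shot normalized homographies <= tau?"""
    H = np.asarray(homographies, dtype=np.float64)        # shape (T, 3, 3)
    h_min = H.min(axis=0, keepdims=True)                  # per-entry minimum
    h_max = H.max(axis=0, keepdims=True)                  # per-entry maximum
    H_norm = (H - h_min) / np.maximum(h_max - h_min, 1e-8)
    diffs = H_norm[1:] - H_norm[:-1]                      # successive frames
    l_h = np.linalg.norm(diffs.reshape(len(H) - 1, -1), axis=1)
    return float(l_h.mean()) <= tau
\end{verbatim}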
\subsection{Sports Field Registration}\label{sec:field_red}
The task of sports field registration aims at determining a homography matrix~$H$ for the transformation of an image from the (main) camera into two-dimensional sports field coordinates.
Formally, $H$ describes a two-dimensional projective transformation and is represented by a $3\times 3$ matrix with eight degrees of freedom.
We use Chen and Little's approach~\cite{chen2019} as the basis for sports field registration.
The camera calibration is defined as the nearest neighbor search in a synthetic dataset of edge map camera pairs.
We choose this approach for multiple reasons: (1) it obtains almost state-of-the-art performance on the only test set for soccer~\cite{homayounfar}, (2) it does not rely on manual annotations to obtain training data~\cite{nie2021robust, error_refine, cioppa2021camera}, and (3) it is adaptable to other environments~(e.g., stadiums and camera parameters) by changing only a few hyper-parameters, as shown in our experiments~(Section~\ref{exp:h_estimation}).
\citet{chen2019} adopt a \emph{pix2pix}~\cite{isola2017image} model for field segmentation and the subsequent detection of the field markings.
The edge images generated in this way are compared with a dataset of synthetic edge images for which
the camera parameters are known~($x, y, z$ position, focal length, pan, tilt).
This comparison is based on a Siamese~CNN~\cite{hadsell2006dimensionality}, which takes two edge images as input.
Feature vectors are used to construct the reference database.
The nearest neighbor search on the feature vectors is then applied by computing the $L2$~distance over all pairs.
The camera parameters of the nearest neighbor in the synthetic dataset are used to determine an initial homography matrix.
This initial estimation is refined using the Lucas-Kanade algorithm~\cite{baker2004lucas}.
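To illustrate the retrieval step, the following sketch assumes that the Siamese network has already produced a feature vector for the query edge image as well as for every synthetic edge image in the reference database; array names and shapes are illustrative, and the subsequent Lucas-Kanade refinement is omitted.
\begin{verbatim}
import numpy as np

def retrieve_camera(query_feat, db_feats, db_cameras):
    """Nearest-neighbor camera retrieval by L2 distance in feature space.

    query_feat : (D,)   feature of the query edge image
    db_feats   : (N, D) features of the synthetic edge images
    db_cameras : (N, 6) camera parameters (x, y, z, focal length, pan, tilt)
    """
    dists = np.linalg.norm(db_feats - query_feat[None, :], axis=1)
    idx = int(np.argmin(dists))
    return db_cameras[idx], float(dists[idx])
\end{verbatim}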
\subsection{Player Detection and Position Estimation}\label{sec:player_det_pos_est}
Sports analysis offers some specific challenges for the task of object detection and tracking, e.g., the objects (like players) are often small because they are far away from the camera.
Camera motion causes blur in the players' silhouettes, and player movement with unpredictable changes of direction and pace poses problems even for well-tested approaches.
Therefore, some approaches address these problems in the architectural design~\cite{FootAndBall2020, light_cascaded}.
\citet{centertrack} solve object detection and tracking based on the object center and should therefore be less susceptible to player movements.
In Section~\ref{exp:player_detection}, a comparison of three approaches is performed.
To determine the actual position of each player on the field, we can utilize the predicted homography matrix~$H$, which relates the sports field plane and the image plane.
We define the image position~$\boldsymbol{\Tilde{p}} \in \mathbb{R}^2$ of players as the center of the bottom of the detected bounding box, which usually corresponds to the feet of the player.
The predicted position $\boldsymbol{\hat p} \in \mathbb{R}^2$ of the player on the field is then calculated with the inverse homography matrix and the detected image positions of the players: $\boldsymbol{\hat p} = H^{-1} \boldsymbol{\Tilde{p}}$.
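In homogeneous coordinates, this projection can be sketched as follows; consistent with the equation above, $H$ is assumed to map sports field coordinates into the image, so its inverse is applied to the detected image positions.
\begin{verbatim}
import numpy as np

def project_to_pitch(boxes, H):
    """Map bounding boxes (N, 4) [x1, y1, x2, y2] to pitch coordinates.

    H is the 3x3 homography from pitch coordinates to image pixels.
    """
    # Image position of a player: bottom center of the box (the feet).
    feet = np.stack([(boxes[:, 0] + boxes[:, 2]) / 2.0, boxes[:, 3]], axis=1)
    ones = np.ones((len(feet), 1))
    pts = np.concatenate([feet, ones], axis=1) @ np.linalg.inv(H).T
    return pts[:, :2] / pts[:, 2:3]   # de-homogenize
\end{verbatim}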
\textbf{Self-Verification (sv):}\label{self-verification}
The predicted positions can be used to verify the homography matrix extracted by the sports field registration.
Assuming that most player positions should be assigned to a coordinate within the sports field, the system can automatically discard individual frames where the sports field registration is obviously erroneous.
If one of the projected player positions
lies outside the dimensions of the field by more than a tolerance distance~$\rho$ in meters, then there is normally an error in the homography estimation.
The smaller the value~$\rho$ is chosen, the more frames are discarded, because only smaller errors in the homography estimation are being tolerated.
Intuitively, a tolerance distance between two and five meters seems reasonable, which is confirmed experimentally~(Section~\ref{exp:position_estimation}).
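The check itself is straightforward; in the sketch below, the pitch dimensions and the placement of the coordinate origin at one corner of the field are assumptions, and the input positions are the projected player coordinates in meters.
\begin{verbatim}
import numpy as np

PITCH_LENGTH, PITCH_WIDTH = 105.0, 68.0   # assumed field size in meters

def frame_is_plausible(positions, rho=2.0):
    """Keep the frame only if all projected players are at most rho meters
    outside the field (origin assumed at one corner of the pitch)."""
    x, y = positions[:, 0], positions[:, 1]
    inside = ((x >= -rho) & (x <= PITCH_LENGTH + rho) &
              (y >= -rho) & (y <= PITCH_WIDTH + rho))
    return bool(np.all(inside))
\end{verbatim}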
\paragraph{Team Assignment:}
Assuming that the position of the goalkeepers is of minor relevance for some sports analytics tasks~(e.g., formation or movement analysis) and that it is extremely rare for both goalkeepers to be visible in the video at the same time, they are ignored in the team assignment step.
Since goalkeepers wear jerseys of a different type and color, context information (i.e., their location on the field) would be required to assign them to the correct team.
Another problem is that, due to the camera perspective, coaches and attendants can also appear to stand on the sports field, so that the number of visible classes appearing in a frame cannot be predetermined.
We present a simple approach that differentiates between only two classes~(team A and B) based on the object detections. We assume that an unsupervised clustering method is more appropriate in this domain since it does not rely on any training data and high-quality player detections are already available.
We apply \emph{DBScan}~\cite{ester1996density}
to determine two dominant clusters representing the field players of both teams.
Any unassigned detection, which should include goalkeepers, referees, and other persons, is discarded.
The feature vectors are formed based on the player detection results, i.e., the bounding boxes.
We use the upper half of a bounding box since it usually covers the torso of a player.
Each bounding box is first uniformly scaled to $20 \times 20$ and then the center of size $16 \times 16$ is cropped.
This should reduce the influence of the surrounding grass in the considered area.
Since the jersey colors differ greatly, it is sufficient to use the average over color channels~(HSV~color~space).
It can be assumed that field players are most frequently detected and that this is roughly balanced between both teams. Furthermore, due to the previous segmentation of the playing field, only a few detections are expected which are not field players.
\emph{DBScan} requires two parameters:~$\epsilon$, which is the maximum normalized~(color) $L2$ distance between two detections to be assigned to the same cluster, and~$n_{cls} \in [0, 0.5]$, which specifies how many of all detections must belong together to form a cluster~(maximum of 0.5 due to two main clusters).
Since the optimal value for $\epsilon$ will be different for each match, a grid search for randomly selected frames of each sequence of the match is performed to determine the parameter.
In contrast to previous work that generally utilizes color histograms~\cite{lu2013learning, tong2011automatic} to reduce the input feature space, we apply the average over pixels
without any performance decline.
The value of $\epsilon$ is selected for which the cost function
$c(\epsilon)=|X_{\text{(O)ther}}|+\big||X_{\text{A}}|-|X_{\text{B}}|\big|$
is minimal, restricted to solutions that form exactly two clusters ($X_{\text{A}}$ and $X_{\text{B}}$).
The cost function should ensure that the clusters A and B, which represent the two teams, are about the same size and that there are as few as possible unassigned detections ($X_{\text{(O)ther}}$).
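A compact sketch of the color feature extraction and the $\epsilon$ grid search is shown below (Python with OpenCV and scikit-learn). The color normalization and the mapping of $n_{cls}$ to DBSCAN's \texttt{min\_samples} are assumptions of this sketch rather than details taken from the original implementation.
\begin{verbatim}
import numpy as np
import cv2
from sklearn.cluster import DBSCAN

def color_feature(frame_bgr, box):
    """Mean HSV color of the central torso region of a player bounding box."""
    x1, y1, x2, y2 = [int(v) for v in box]
    torso = frame_bgr[y1:y1 + max(1, (y2 - y1) // 2), x1:x2]  # upper half
    torso = cv2.resize(torso, (20, 20))[2:18, 2:18]           # 16 x 16 center crop
    hsv = cv2.cvtColor(torso, cv2.COLOR_BGR2HSV).astype(np.float32)
    return hsv.reshape(-1, 3).mean(axis=0) / 255.0            # averaged, roughly normalized

def team_clusters(features, eps_grid, n_cls=0.2):
    """Grid search over eps; keep the clustering with exactly two clusters
    that minimizes c(eps) = |X_other| + ||X_A| - |X_B||."""
    best, best_cost = None, np.inf
    min_samples = max(1, int(n_cls * len(features)))
    for eps in eps_grid:
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)
        clusters = [c for c in set(labels) if c != -1]
        if len(clusters) != 2:
            continue
        size_a, size_b = (np.sum(labels == c) for c in clusters)
        cost = np.sum(labels == -1) + abs(size_a - size_b)
        if cost < best_cost:
            best, best_cost = labels, cost
    return best  # -1 marks unassigned detections (referees, goalkeepers, ...)
\end{verbatim}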
\section{Experimental Results}
\label{sec:experiments}
All components are first evaluated individually, while the main task of \emph{player position estimation} is evaluated at the end.
The main test sets that are used both to evaluate the sports field registration and \emph{player position estimation} are introduced in Section~\ref{exp:datasets}.
As shot boundary and shot type classification~(Section~\ref{sec:exp:shots}) are common pre-processing steps in video data, we refer to the supplemental material~\ref{apx:shot_boundary}.
Section~\ref{exp:player_detection} and~\ref{exp:team_assignment} focus on the evaluation of player detection and team assignment, while
the evaluation of sports field registration is reported in Section~\ref{exp:h_estimation}.
Finally, the main task is evaluated by comparing the estimated positional data with the ground-truth data~(Section~\ref{exp:position_estimation}).
\subsection{Main Datasets}\label{exp:datasets}
To evaluate the main task of \emph{player position estimation}, synchronized video and positional data are needed.
To indicate the generalizability, we use a total of four datasets that are primarily designed only for testing, i.e., no training nor fine-tuning of individual modules is performed on this or closely related data.
Common broadcast videos of different resolutions~(SD and HD) and seasons~(2012, 2014) are available, as well as another type of
video -- the tactic-cam~(TC): this camera recording contains no cuts and usually covers a wider area of the pitch.
Since the tactic-cam is located next to the main TV~camera and covers the majority of players, it is commonly used for video analysis.
In general, each dataset contains four halves from four matches from the German Bundesliga in 25\,Hz temporal resolution with synchronized positional data.
Our datasets are referred to as \emph{TV12}~(2012, SD resolution), \emph{TV14}~(2014, HD), \emph{TC14}~(HD), and \emph{TV14-S} that covers the broadcast videos of the same matches as \emph{TC14}.
Due to temporal inconsistencies between the raw video of \emph{TV14-S} and the positional data, these videos are synchronized using the visible game clock.
The position data are considered as ground truth since they are generated by a calibrated (multi-)camera system that covers the entire field.
However, this system can be inaccurate in some cases~\cite{pettersen2018quantified}.
An error of one meter is to be assumed in the data provided to us.
The quality of the field registration is essential for the accurate prediction of the player positions, but as there is only one limited dataset for sports field registration in soccer~\cite{homayounfar}, we manually estimate ground-truth homography matrices for a subset of our datasets.
In particular, 25 representative and challenging images per match are chosen to cover a wide range of camera settings resulting in 100 annotated images per test set.
The remaining modules, i.e., shot boundary and shot type classification, player detection, and team assignment are trained and evaluated on other publicly available datasets and introduced in their respective sections.
\subsection{Evaluating Player Detection}
\label{exp:player_detection}
Player detection and the usage of the homography matrix enable the extraction of two-dimensional coordinates for players. While a general object detector like \emph{Faster R-CNN}~\cite{fasterrcnn} localizes the bounding box for each object, this information is not necessarily needed, rather the exact position is of interest.
To assess the performance of \emph{CenterTrack}~\cite{centertrack} on soccer data,
we compare it to another specialized network~\cite{FootAndBall2020} for this domain and to a general object detection framework that is fine-tuned~\cite{fasterrcnn} for the soccer domain.
We note that alternative solutions such as~\cite{light_cascaded} exist and a comparison is generally possible.
However, it is beyond the scope of our paper to re-implement and test several variants, especially since satisfactory quality is achieved with the selected solution.
\textbf{Datasets \& Setup:}
Due to the lack of publicly available datasets for training and evaluation, \citet{FootAndBall2020} train their network on two small-scale datasets~\cite{d2009investigation, light_cascaded} where the training and test data are separated by frame-wise shuffling and subsequent random selection ($80\%$ training, $20\%$ test).
\emph{CenterTrack} can exploit temporal information to track players.
However, to the best of our knowledge, there exists only one dataset in the domain of soccer with tracking information (\emph{ISSIA-CNR}~\cite{d2009investigation}), but it contains a very limited number of scene perspectives from multiple static cameras and is thus inappropriate for our system.
For a fair comparison with the alternative approach, we follow the train-test split of \citet{FootAndBall2020} where individual frames are used for training.
The publicly available \emph{ISSIA-CNR}~\cite{d2009investigation} dataset contains annotated sequences from several matches captured by six static cameras (in $30\,Hz$ and FHD resolution) comprising \num{3000} frames per camera. \emph{Soccer Player}~\cite{light_cascaded} is a dataset created from two professional matches where each match is recorded by three HD broadcast cameras with $30\,Hz$ and bounding boxes are annotated for approximately \num{2000} frames.
For evaluation, we report the average precision (AP) according to~\cite{AP_implementation}. In the final step of \emph{CenterTrack} bounding boxes are estimated, which makes AP a suitable metric to compare the performance of object detectors, even though the size of the bounding box is not relevant to extract positional data.
We refer to the supplemental material~(\ref{apx:impl_details}) for details about the training process.
\textbf{Results:} The results on the test set for our fine-tuned \emph{Faster R-CNN}~\cite{fasterrcnn}, \citet{FootAndBall2020}'s model and the fine-tuned \emph{CenterTrack}~\cite{centertrack} are reported in Table~\ref{tab:eval:detection}.
\input{tables/player_detection}
Since \emph{Faster R-CNN} and \emph{FootAndBall} perform well on only one test set and significantly worse on the other, this suggests a lack of generalizability, whereas \emph{CenterTrack} achieves good results on both datasets.
As \emph{CenterTrack} benefits from training with tracking data~\cite{centertrack}, we are confident that results can further be improved, but choose this model for our pipeline as it already provides good results.
\subsection{Evaluating Team Assignment}
\label{exp:team_assignment}
In this experiment, we evaluate the team assignment that relies on detected bounding boxes.
\textbf{Dataset \& Setup:}
In contrast to very small datasets~\cite{d2009investigation, light_cascaded}, \citet{dataset_shots}'s dataset provides a good diversity regarding the environmental setting~(camera movements, lighting conditions, different matches, jersey colors, etc.). Therefore, a subset from their dataset
is manually annotated with respect to team assignment.
To bypass errors in the player detection, bounding boxes and player assignment are manually annotated for a set of frames containing multiple shot perspectives and matches. Team assignment is annotated for three categories, \emph{team A}, \emph{team B}, and \emph{other} including referees and goalkeepers~(due to its sparsity).
We randomly select one frame from each of ten shots captured by the main camera per match.
We use the \num{20} matches that were already used to evaluate the temporal segmentation,
resulting in \num{200} frames for evaluation.
As mentioned before, the aim is to find two main clusters, and we found empirically that $n_{cls}=0.2$ provides good results for this task.
\newpage
\textbf{Metrics:} \citet{team_assign} proposed micro accuracy for this task, but this metric only considers labels from both teams and is insufficient in our case, since it can be misleading when the algorithm assigns uncertain associations to the class \emph{other}. To prevent this, referees and goalkeepers must be excluded from the object detection
or an alternative metric needs to be defined.
For this reason, we additionally consider the macro accuracy for all three classes.
\textbf{Results:} Our simple method performs well, both in terms of macro accuracy~($0.91$) for the three classes and micro accuracy ($0.93$) for the two team classes.
We found that most errors are players that are assigned to \emph{other} (goalkeepers, referees). This leads to the conclusion that field players are assigned correctly with a high probability in most cases.
In comparison to an end-to-end approach for team assignment of \citet{team_assign}, where the overall performance is evaluated on basketball data, a similar micro accuracy ($0.91$) is reported. However, the basketball domain differs considerably from soccer,
making a direct comparison difficult.
\subsection{Importance of Sports Field Registration}\label{exp:h_estimation}
As already introduced, many approaches rely on manually annotated ground-truth data for training.
There exists only one public benchmark dataset (\textit{WorldCup2014}~(\textit{WC14})~\cite{homayounfar}).
While the test set follows the same data distribution as the training data, in particular, the camera hyper-parameters~(location, focal-length, etc.), generalization capabilities are not investigated by existing solutions~\cite{nie2021robust, sha2020end, error_refine, chen2019}.
Primarily, to indicate the adaptability of \citet{chen2019}'s approach~(Section~\ref{sec:field_red}) to different environmental settings, we explore several hyper-parameters on our target test sets~(see Section~\ref{exp:datasets}). Additionally, we compare the results with recent work.
\textbf{Metrics:}
Since the visible part of the pitch is of interest for application, we report the intersection over union~($IoU_{\text{part}}$) score to measure the calibration accuracy.
It is computed between the two edge images using the predicted homography and the ground-truth homography on the visible part of the image.
\textbf{Camera Hyper-parameters:}
In general, we assume that the recommended parameters~\cite{chen2019}~(derived from \textit{WC14}~\cite{homayounfar}) for generating synthetic training data fit for many soccer stadiums.
However, we also evaluate slight modifications of the base camera parameters which are available in \textit{WC14}: camera location distribution \small$\mathcal{N}(\mu=[52, -45, 17]^T, \sigma=[2, 9, 3]^T)$\normalsize~in meters, i.e., the average location from all stadiums~(origin is the lower left corner flag of the pitch); focal length~(\small$\mathcal{N}(3018, 716)\,mm$\normalsize) and pan~(\small$\mathcal{U}(-35^\circ, 35^\circ)$\normalsize), tilt~(\small$\mathcal{U}(-15^\circ, -5^\circ)$\normalsize) ranges.
We extend the pan and tilt range to \small$(-40^\circ, 40^\circ)$\normalsize~and \small$(-20^\circ, -5^\circ)$\normalsize, respectively, in all models.
As the tactic-cam obviously covers a wider range (especially focal length as seen in Figure~\ref{fig:qualitative_results}~A,D,E), we also test versions, where we uniformly sample from the focal length parameters~(\small$\mathcal{U}_{\text{focal length}}(1000, 6000)$\normalsize)~ and from the locations~(\small$\mathcal{U}_{xyz}([45, -66, 10]^T, [60, -17, 23]^T)$\normalsize), and double the number of training images to \num{100000}.
The training process for line segmentation and homography estimation remains unchanged, and we refer to \citet{chen2019} and \ref{apx:impl_details} for implementation details.
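The following sketch illustrates how the synthetic camera parameters could be sampled for the two variants (Python/NumPy); it only mirrors the distributions stated above and is not the original data-generation code.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_camera_base():
    """Slightly extended WC14 base distribution."""
    location = rng.normal([52.0, -45.0, 17.0], [2.0, 9.0, 3.0])  # meters
    focal_length = rng.normal(3018.0, 716.0)                     # mm
    pan = rng.uniform(-40.0, 40.0)                               # degrees
    tilt = rng.uniform(-20.0, -5.0)
    return location, focal_length, pan, tilt

def sample_camera_uniform():
    """Uniform variant for the wider tactic-cam setting."""
    location = rng.uniform([45.0, -66.0, 10.0], [60.0, -17.0, 23.0])
    focal_length = rng.uniform(1000.0, 6000.0)
    pan = rng.uniform(-40.0, 40.0)
    tilt = rng.uniform(-20.0, -5.0)
    return location, focal_length, pan, tilt

cameras = [sample_camera_uniform() for _ in range(100000)]  # doubled training set
\end{verbatim}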
\input{tables/homography_eval}
\textbf{Results:}
As reported in Table~\ref{tab:h_eval}, the reproduced results~(base parameters) from \citet{chen2019} on \textit{WC14} are of similar quality compared to other methods~\cite{nie2021robust, sha2020end, error_refine}.
We observe a noticeable drop in $IoU_{\text{part}}$ on our test sets where the camera parameters~(especially the camera position ($x,y,z$)) are unknown.
For the \emph{TV12} test set all configurations fail on challenging images.
This further indicates that the original parameters are optimized for the camera dataset distribution in \textit{WC14}.
However, on the remaining three test sets, the approach of \citet{chen2019} is able to generalize, whereas an alternative solution~\cite{jiang2020optimizing} fails.
Due to the non-availability of (private)~training data, a comparison with~\cite{nie2021robust, sha2020end} is not fair~(colored gray).
Yet, these approaches seem to yield comparable results.
A (student)~CCBV~\cite{sha2020end}~model from \cite{cioppa2021camera} is trained on the output of a teacher model.
As it was originally trained on a large-scale and private dataset, noticeably lower transfer performance is observed on \emph{WC14}.
In summary, with slight changes in the hyper-parameters, the approach from \citet{chen2019} is suitable for the applicability to new data without fine-tuning by human annotations.
\subsection{Player Position Estimation}
\label{exp:position_estimation}
This section investigates the performance for player position estimation.
In addition, errors of individual modules, i.e., sports field registration, player detection, and team assignment, as well as compounding errors of the system are discussed.
We choose the full datasets as introduced in Section~\ref{exp:datasets}.
Although shot boundary and shot type classification provide good results~(see Appendix~\ref{apx:shot_boundary}), we eliminate their influence by using manually annotated shots, as the results for position estimation depend on this pre-processing step.
False-negative errors lead to a lower number of relevant frames for the system's output and for evaluation, while false-positive errors (e.g., close-ups) primarily produce erroneous output for homography estimation.
\paragraph{Metrics:}
We measure the distance~(in meters) between the estimated positions and the actual positions by taking the mean and median over all individual frames~($d_\text{mean}$, $d_\text{med.}$) and additionally report how many frames have an error of less than or equal to $l \in \{2.0, 3.0\}$ meters~($a_{l}$).
As previously mentioned, the sensor devices that capture position data~(used as ground-truth also in other works~\cite{memmert2017current}) can be slightly inaccurate.
A domain expert confirmed that errors in our system of at most $l\leq 2\,m$ can be considered correct results and that errors of less than $3\,m$ can still be meaningful for some sports analysis applications.
\textit{Matching estimated positions to ground-truth:} Most of the time only a subset of players is visible in the broadcast videos and there is no information about which player is visible at a certain frame -- making evaluation complex.
As there is no direct mapping between predicted and ground-truth positions and the number of detections may vary, the resulting linear sum assignment problem is first solved using the \emph{Hungarian Method}~\cite{kuhn1955hungarian}.
Its solution provides a set of distances for each field player visible in the frame $t$, formally $\mathbb{D}_t = \{d_1, \hdots, d_n\}$ where $n$ is the number of players and $d_i= \lVert \boldsymbol{\hat p} - \boldsymbol p \rVert_2$ is the distance between the estimated position~$\boldsymbol{\hat p} \in \mathbb{R}^2$ for the $i$-th player to its actual~(ground-truth) position $\boldsymbol p$.
To aggregate the player distances of one frame, the use of the average distance as an error metric can be misleading as an outlier, e.g., a false-positive player detection (like a substitute player or goalkeeper) can be matched to a ground truth position with high distance~(Fig.~\ref{fig:qualitative_results}~\textit{D,E,H}).
These outliers can drastically affect the average distance and lead to wrong impressions.
To efficiently reject outliers without using an error threshold as an additional system parameter, we propose to report the average distances of the best $80$-percent position estimates.
Detailed results for this aggregation are included in the Appendix~\ref{apx:per-frame-agg}.
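A minimal sketch of this per-frame evaluation (Hungarian matching followed by the best-80\% aggregation) could look as follows; it relies on SciPy's \texttt{linear\_sum\_assignment} and is not the original evaluation code, and the exact rounding of the 80\% cutoff is an assumption.
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def frame_distances(pred_xy, gt_xy):
    """Match predicted to ground-truth positions (arrays of shape (N, 2) and (M, 2),
    in meters) with the Hungarian method and return the matched distances."""
    cost = np.linalg.norm(pred_xy[:, None, :] - gt_xy[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols]

def aggregate_best80(distances):
    """Average over the best 80 percent of per-player distances in a frame
    to reduce the influence of outlier matches."""
    d = np.sort(np.asarray(distances))
    k = max(1, int(np.ceil(0.8 * len(d))))
    return float(d[:k].mean())
\end{verbatim}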
\textit{Player mismatch (pm) due to homography estimation \& player detection errors:} Despite the self-verification~(\emph{sv}) step~(Section~\ref{self-verification}) that discards erroneous homography estimations, we cannot directly evaluate whether the remaining homography matrices are correct, since some errors are not considered~(e.g., wrong focal length as in Fig.~\ref{fig:qualitative_results}~\textit{D}).
To analyze the impact of very inaccurate homography estimations and major errors in player detection, we utilize ground-truth data to isolate these types of failures.
We re-project all ground-truth positions to the image space according to the estimated homography matrix.
If the number of detected players differs significantly from the actual players, then the homography is probably erroneous~(called player mismatch: \emph{pm}). We also define a tolerance range of 5\% of the image borders to include players at the boundary and to avoid penalizing the smallest discrepancies in the estimation of the homography matrix.
Finally, we discard all frames for evaluation that do not satisfy the following condition:
\begin{equation}
\label{eq:violation_number_of_players}
\alpha_t := 1 - \zeta < \frac{|\mathbb{D}_t^{real}|}{|\mathbb{D}_t^{gt}|} < 1 + \zeta
\end{equation}
$\alpha_t$ is the indicator function whether a frame $t$ is discarded based on the ratio of detected players $|\mathbb{D}_t^{real}|$ and expected players $|\mathbb{D}_t^{gt}|$.
For example, if ten players should be visible but only six are detected by the system, we want to discard such a discrepancy; we therefore set $\zeta=0.3$.
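The sketch below illustrates the \emph{pm} criterion; the direction of the homography (image to pitch) and the helper names are assumptions of this sketch.
\begin{verbatim}
import numpy as np

def expected_visible(gt_pitch_xy, H_img_to_pitch, img_w, img_h, tol=0.05):
    """Re-project ground-truth pitch positions into the image and count how many
    fall inside the frame plus a 5% border tolerance."""
    H_pitch_to_img = np.linalg.inv(H_img_to_pitch)
    pts = np.hstack([gt_pitch_xy, np.ones((len(gt_pitch_xy), 1))])
    proj = (H_pitch_to_img @ pts.T).T
    xy = proj[:, :2] / proj[:, 2:3]
    mx, my = tol * img_w, tol * img_h
    inside = ((xy[:, 0] >= -mx) & (xy[:, 0] <= img_w + mx) &
              (xy[:, 1] >= -my) & (xy[:, 1] <= img_h + my))
    return int(inside.sum())

def keep_frame(num_detected, num_expected, zeta=0.3):
    """Player-mismatch (pm) criterion: keep the frame only if the ratio of
    detected to expected players lies within (1 - zeta, 1 + zeta)."""
    if num_expected == 0:
        return False
    ratio = num_detected / num_expected
    return (1 - zeta) < ratio < (1 + zeta)
\end{verbatim}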
Furthermore, we incorporate a constraint to measure the results after team assignment by differentiating between the teams before the linear sum assignment
and report the mean distance over both teams per frame.
\input{tables/position_estimation}
\begin{figure*}[tb!]
\begin{center}
\includegraphics[width=1.0\linewidth]{figures/qualitative_examples_high_quali.pdf}
\end{center}
\caption{Qualitative results of the proposed system for the extraction of positional player data ordered from low (left) to high error~(right): The top row presents the output without considering teams. The \textcolor{green}{green} triangles correspond to the predicted positions~($\bigtriangledown$) of players and the black points to the ground-truth positions~($\bullet$); team assignments are colorized \textcolor{red}{red} and \textcolor{blue}{blue}.
For the input image the ground-truth positions are re-projected according to the estimated homography matrix; in the sports field some grid points are highlighted.}
\label{fig:qualitative_results}
\end{figure*}
\paragraph{Results:} Table~\ref{tab:pos_eval} shows the results for each dataset while taking the best performing models from Table~\ref{tab:h_eval}.
The results are summarized for all matches by taking the mean of the per match results.
The results after self-verification~(\emph{sv}) are the output of our system; its tolerance distance is set to $\rho=3\,m$.
The results for other thresholds are reported in the supplemental material~(\ref{apx:apx:rho}).
With the \emph{pm} criteria the impact of erroneous sports field registration is analyzed.
As evaluated above, the applied model for sports field registration provides good results; however, for some frames the $IoU_{\text{part}}$ is below $90\,\%$.
Our \emph{sv}-process is able to discard some of these frames as the error drops significantly.
For the remaining frames, the pipeline provides promising results on all datasets.
The \emph{pm} criterion demonstrates the high impact of the sports field registration.
Even for marginal errors in the homography estimation, i.e., ca. 95\% IoU, the mean absolute error is about 1\,m when back-projecting known keypoints~\cite{nie2021robust}.
Hence, the applied reduction of bounding boxes to one point does not substantially affect the error in meters.
The qualitative examples in Figure~\ref{fig:qualitative_results}~(with applied \emph{sv}) primarily show the output of the pipeline and support the choice of our metrics.
In Figure~\ref{fig:qualitative_results}~(\textit{I, J}), the output is obviously erroneous, but not discarded in the \emph{sv} process demonstrating the importance of an accurate sports field registration.
Furthermore, the influence of false-positive field players~(\textit{A,D,E}), and incorrect team identification is visible~(\textit{G,H,I}).
Since the quantitative results are weaker with team assignment, this suggests a lack of generalizability to the test data for the player detection and team assignment module.
Player tracking, which would lead to more stable predictions across multiple frames, is not covered, and the temporal consistency of the sports field registration is not evaluated quantitatively.
However, the sports field registration appears to provide stable results even without explicitly treating the temporal component but could be post-processed in an additional step~\cite{sharma, linnemann2013temporally}.
In summary, we claim that our system outputs promising results in many cases providing a first baseline to conduct various automatic analyses, for instance, regarding formation detection~\cite{bialkowski2014formation, ericjonas} or space control~\cite{fernandez2018wide, rein2017pass}.
\section{Conclusions \& Future Work}
\label{sec:conclusion}
In this paper, we have presented a fully-automated system for the extraction of positional data from broadcast soccer videos with interchangeable modules for shot boundary detection, shot type classification, player detection, field registration, and team assignment.
All components as well as their impact on the overall performance were evaluated.
We investigated which parts of the pipeline influence each other and how they could be improved, e.g., by fine-tuning a specific module with more appropriate data.
A relatively small error in meters should allow sports analysts to study team behavior.
Furthermore, the adaptation to other sports would be interesting.
In the future, we also plan to integrate a tracking module.
However, additional steps for player re-identification~(within and across shots) are necessary to allow player-based analysis across a match.
\section*{Acknowledgement}
This project has received funding from the German Federal Ministry of Education and Research (BMBF~--~Bundesministerium für Bildung und Forschung) under~01IS20021B and~01IS20021A.
\section{Data Statistics}
\label{sec:stats}
For TRIP, we only report the unique story statistics in \autoref{tab:stats}. Note that \citet{storks-etal-2021-tiered-reasoning} have up-sampled some of the plausible stories to match the number of implausible stories.
\begin{table}[]
\centering
\caption{Statistics of the datasets}
\label{tab:stats}
\begin{tabular}{l|lrrr}
\toprule
Dataset & Stats & Train & Dev & Test \\
\hline
& \#Paragraphs & 391 & 43 & 54 \\
ProPara & \#Ents/Para & 3.8 & 4.1 & 4.4 \\
& \#Sents/Para & 6.7 & 6.7 & 6.9 \\
\midrule
& \#Paragraphs & 1169 & 474 & 504 \\
TRIP & \#Ents/Para & 7.0 & 8.1 & 8.3 \\
& \#Sents/Para & 5.1 & 5.0 & 5.1 \\
\bottomrule
\end{tabular}
\end{table}
\begin{comment}
\subsection{Training Details}
\label{sec:details}
For ProPara, we define two additional action types to represent the entity transitions, namely {Out-of-Create, Out-of-Destroy} similar to \cite{zhang2021koala}. Hence, the total size of the action space is six. For evaluation, these two types would be mapped to NONE transition, and they are defined to help the model differentiate the NONE types during training, i.e., if the entity has not being created or if it has been destroyed.
To facilitate model's learning on location predictions, we initialized our model with a RoBERTa-Large \cite{liu2019roberta} model pretrained on SQuAD 2.0~\cite{rajpurkar-etal-2018-know}. We run our model five times with different random seeds and report the maximum scores in \autoref{tab:propara_res} and average scores with a 95\% confidence interval in \autoref{tab:trip_res} and \autoref{tab:abalation}. For TRIP, we directly initialize the model with RoBERTa-Large. On ProPara we train models for 20 epochs and 6 epochs with data augmentation to let the model receive the similar number of updates. We train models for 6 epochs on Recipe and 10 epochs on TRIP. Except for training epochs, we use the same set of hyperparameters in all of our experiments: learning rate 1e-5, batch size 1, gradient accumulation 2. All of our models have about 360M parameters.
\end{comment}
\begin{comment}
\section{Licenses}
We list the license of the scientific artifacts used in this paper \\
\begin{itemize}
\item ProPara: Apache 2.0 License
\item Transformers: Apache 2.0 License
\end{itemize}
Other datasets/models we used did not specify their licenses, but we followed previous works and only used them for pure research purpose.
\end{comment}
\begin{comment}
\section{Computing Infrastructure}
We run our experiments on a single Nvidia A6000 GPU or a single Nvidia Titan RTX GPU. For ProPara, each experiment takes about 1.5 hours to finish. For Recipe, each experiment takes about 2.5 hours to finish. For TRIP, each experiment takes about 9 hours to finish.
\end{comment}
\section{Task Definition}
\label{subsec:task1}
\textbf{Procedural text understanding.} The task input consists of an $n$-sentence paragraph $P = \{s_1, s_2, ... s_n\}$ , and $k$ entities $\{E_1, E_2, ... E_k\}$. The goal is to predict precondition state $S_{i,t}^p$ and effect state $S_{i,t}^e$, for every entity at every step, as well as the action $A_{i,t}$ performed by the entity at every step; $i \in \{1,2,..k\}$, $t \in \{1,2,...n\}$. The effect state at $t-1$ is the same as precondition state at step $t$, i.e., $ S_{i,t-1}^e = S_{i,t}^p$, hence $S_i$ is a sequence of length $n+1$.
Following prior work~\cite{mishra2018tracking}, $A_{i,t} \in$ \{Create, Exist, Move, Destroy\}, $S_{i,t}^e \in $\{\textit{non-existence}, \textit{unknown-location}, \textit{location}\}, and for \textit{location}, a span of text in $P$ needs be identified for the prediction.
Action $A_{i,t}$ describes the entity state changes from precondition to effect, thus it can be inferred from the state sequence $S_i$, and vice versa---e.g., if $S_{i,1}^p =$ \textit{non-existence} and $S_{i,1}^e =$ \textit{location}, then $A_{i,1} =$ Create.
\noindent \textbf{Procedural story understanding.}
The input to the procedural story understanding task consists of two parallel stories, $P_1$, $P_2 = \{s_1, s_2, ... s_n\}$, each consisting of $n$ sentences and differing only in one of the sentences. Following~\citet{storks-etal-2021-tiered-reasoning}, the task is to identify which story is more plausible, identify the conflicting pair of sentences ($s_{c1}$ and $s_{c2}$) in the implausible story, and list the preconditions $S_{i}^{p}$ and effects $S_{i}^{e}$ of all entities at every step of a story. Here, multiple attributes are considered for precondition and effect states. Unlike in the procedural text understanding task, the story understanding task does not require that the effect state at step $t-1$ matches the precondition state at step $t$, i.e., $S_{i,t-1}^e$ and $S_{i,t}^p$ are not necessarily equal.
\section{CGLI: Coalescing Global and Local Information}
\label{sec:model}
In this section, we describe the input representation, the architecture, and the training details of our model,
as illustrated in \autoref{fig:model}.
\noindent \textbf{Input representation.}
To allow greater modeling flexibility and enable span extraction for entity location-prediction,
we build a unique input representation for every entity at each step (\textit{local view}), and we provide it access to the entire context (\textit{global view}). Given an entity, we create a pseudo question $Q$ 'where is \{entity\}' (\textit{entity-aware}), and concatenate it with the full paragraph $P$, resulting in $C$ = [CLS] $Q$ [SEP] $P$ [SEP]. We map $C$
using the embedding layer of a language model (LM), resulting in $C_{emb}$. We then combine $C_{emb}$ with timestep embeddings to mark the current step of interest (\textit{timestep-aware}), following \cite{rajaby-faghihi-kordjamshidi-2021-time}. In particular, each input token is assigned a timestep ID where \{0=padding, 1=past, 2=current, 3=future\}, forming $T \in \mathbb{R}^{m}$, where $m$ is the number of tokens. The timestep sequence is projected through another embedding layer $Timestep \in \mathbb{R}^{4 \times d}$. The sum of $C_{emb}$ and $Timestep(T)$, denoted with $C_{emb}' \in \mathbb{R}^{d \times m}$, is then processed by the LM encoder layers, where $d$ is the hidden layer dimension of the LM encoder. Formally:\footnote{To model the precondition state of step 1, we also build an input sequence for step 0.}
\begin{align}
C_{emb} & = \text{Embed}(C) \\
C_{emb}' & = C_{emb} + Timestep(T) \\
C_{enc} & = \text{LM Encoder}(C_{emb}')
\end{align}
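A minimal PyTorch/HuggingFace sketch of this entity- and timestep-aware encoding is shown below. The helper \texttt{build\_timestep\_ids} (assigning \{0,1,2,3\} to every token based on its sentence index) is hypothetical, and the exact integration of the timestep embeddings may differ from the original implementation.
\begin{verbatim}
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
encoder = AutoModel.from_pretrained("roberta-large")
timestep_emb = nn.Embedding(4, encoder.config.hidden_size)  # 0=pad, 1=past, 2=current, 3=future

def encode_entity_step(entity, sentences, t):
    """Encode 'where is {entity}' + full paragraph, marking step t as the current one."""
    enc = tokenizer(f"where is {entity}", " ".join(sentences), return_tensors="pt")
    timestep_ids = build_timestep_ids(enc, sentences, t)   # hypothetical helper, shape (1, m)
    token_emb = encoder.embeddings.word_embeddings(enc["input_ids"])
    out = encoder(inputs_embeds=token_emb + timestep_emb(timestep_ids),
                  attention_mask=enc["attention_mask"])
    return out.last_hidden_state                            # (1, m, d)
\end{verbatim}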
\noindent \textbf{Location prediction.}
Given the LM encoded representation $C_{enc} \in \mathbb{R}^{d \times m}$, we extract the start and end indices of the location span:
\begin{align}
P_{Start} & = \text{Softmax}(W_s C_{enc}) \\
P_{End} & = \text{Softmax}(W_e C_{enc}),
\end{align}
where $W_s,W_e \in \mathbb{R}^{d}$. For unknown locations and non-existing states, we extract the [CLS] token as the span, analogous to how unanswerable questions are usually handled \cite{rajpurkar-etal-2018-know}.
\noindent \textbf{In-batch Conditional Random Field.}
For entity state/action modeling, we jointly predict the entity actions across all steps (\textit{global output}). We first group the encoded representations $C_{enc}^t$ of the same entity at different time steps $t$ in one batch chronologically, yielding $C_{enc}^N \in \mathbb{R}^{d \times m \times (n+1)}$. Then we extract the [CLS] token embedding to represent the entity state of every step, $C_{enc}^{N'} \in \mathbb{R}^{d \times (n+1)}$. We concatenate the entity state representations of every two consecutive steps to represent the action between each state pair. The result $D_{enc}^{N} \in \mathbb{R}^{2d \times n}$ is mapped to the emission scores $\phi \in \mathbb{R}^{a \times n}$, where $a$ is the number of possible actions.
\begin{align}
D_{enc}^t & = \text{Concat}(C_{enc}^{t'}, C_{enc}^{(t+1)'}) \\
\phi & = W^T_a (tanh (W^T_d D_{enc}^{N}))
\end{align}
where $W_d \in \mathbb{R}^{2d \times d}$, $W_a \in \mathbb{R}^{d \times a}$. The entity action sequence $A \in \mathbb{R}^{n}$ is modeled by a conditional random field (CRF):
{\small
\begin{align}
P(A|\phi, \psi) & \propto \exp (\sum_{t=1}^{n} \phi_{t}(A_{t})+\psi(A_{t-1}, A_{t})),
\end{align}}%
with the CRF layer's transition scores $\psi \in \mathbb{R}^{a \times a}$. \\
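A PyTorch sketch of the emission computation over the action space is shown below; the module name and the handling of the action-space size are illustrative, and the CRF itself can be taken from an off-the-shelf package (see the sketch after the training objective).
\begin{verbatim}
import torch
import torch.nn as nn

class ActionEmissions(nn.Module):
    """Emission scores over actions from the chronologically batched
    per-step [CLS] states of one entity."""
    def __init__(self, d, num_actions):
        super().__init__()
        self.proj = nn.Linear(2 * d, d)
        self.out = nn.Linear(d, num_actions)

    def forward(self, cls_states):                                    # (n + 1, d)
        pairs = torch.cat([cls_states[:-1], cls_states[1:]], dim=-1)  # (n, 2d)
        return self.out(torch.tanh(self.proj(pairs)))                 # (n, num_actions)
\end{verbatim}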
\noindent \textbf{Prior initialization.}
Previous methods~\cite{gupta-durrett-2019-tracking,zhang2021koala} initialize the CRF transition scores randomly and update them during training. This allows transition between any pair of actions. However, certain transitions between entity actions are nonsensical, e.g., an entity cannot be destroyed if it has not been created, and a destroyed entity cannot move. Learning such constraints may be possible if we have sufficient data, which is not the case for the tasks we are considering. Thus, we propose to directly impose commonsense constraints on the model's transition scores, because these conditions are universally true and can be used to reduce the model's search space. Specifically, we set an entity action transition score to \textit{-inf} if it has not been seen in the training data, otherwise we estimate the initial score of a transition based on its frequency in the training data: $\psi^{uv}$ = $log(\frac{Num(u, v)}{Num(u)})$,
where $\psi^{uv}$ is the log probability of a transition from action $u$ to action $v$, $Num(u, v)$ is the count of transitions from $u$ to $v$ in the training data, and $Num(u)$ is the count of $u$ in the training data.
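A sketch of this prior initialization from action-bigram statistics is given below (Python); in practice, a large negative constant may be preferable to $-\infty$ for numerical stability.
\begin{verbatim}
import numpy as np
from collections import Counter

def init_transition_scores(action_sequences, num_actions):
    """Estimate CRF transition scores from training action sequences;
    transitions never observed in the data are forbidden."""
    uni, bi = Counter(), Counter()
    for seq in action_sequences:            # each seq is a list of action ids
        uni.update(seq[:-1])
        bi.update(zip(seq[:-1], seq[1:]))
    psi = np.full((num_actions, num_actions), -np.inf)  # or a large negative constant
    for (u, v), count in bi.items():
        psi[u, v] = np.log(count / uni[u])
    return psi
\end{verbatim}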
\noindent \textbf{Training and inference.}
We jointly optimize the location and the entity action prediction losses during training:
\begin{align}
\mathcal{L}_{loc} = - \frac{1}{n} \sum_{t=0}^{n} & (log(P_{Start}^{y_s^t}) + log(P_{End}^{y_e^t})) \\
\mathcal{L}_{action} & = -log(P(A|\phi, \psi)) \\
\mathcal{L} = & \mathcal{L}_{loc} + \mathcal{L}_{action},
\end{align}
\noindent where $y_s^t$ and $y_e^t$ are the ground-truth start and end indices at step $t$. During inference, we use Viterbi decoding to produce the most likely entity action sequence and use the span extractor for the most likely location at every step. We combine the action sequence and location predictions to deterministically infer all precondition and effect states.
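The joint objective and the Viterbi decoding could be realized, e.g., with the \texttt{pytorch-crf} package as sketched below; the paper does not specify which CRF implementation is used, and the tensor shapes assume one entity per batch with emissions of shape $(n, a)$ and the transition prior \texttt{psi} from the sketch above.
\begin{verbatim}
import torch
import torch.nn.functional as F
from torchcrf import CRF  # pytorch-crf package (one possible choice)

def build_crf(psi):
    """CRF over actions, with transition scores initialized from the prior psi."""
    crf = CRF(num_tags=psi.shape[0], batch_first=True)
    crf.transitions.data = torch.as_tensor(psi, dtype=torch.float)
    return crf

def joint_loss(crf, start_logits, end_logits, y_start, y_end, emissions, actions):
    """Span cross-entropy over all steps plus CRF negative log-likelihood."""
    loc_loss = F.cross_entropy(start_logits, y_start) + F.cross_entropy(end_logits, y_end)
    action_loss = -crf(emissions.unsqueeze(0), actions.unsqueeze(0))
    return loc_loss + action_loss

def decode_actions(crf, emissions):
    """Most likely action sequence via Viterbi decoding."""
    return crf.decode(emissions.unsqueeze(0))[0]
\end{verbatim}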
\noindent \textbf{Data augmentation.}
Procedural text understanding requires dense annotation of entity states per step, making it challenging and expensive to collect large data. To address data sparsity, we propose a data augmentation method that could effectively leverage the unannotated paragraphs to enhance model's performance. In particular, we first train a model on the gold training set and then apply it to label the unannotated paragraphs, resulting a set of noisy examples. We then mix these examples with gold training data to train a second model.
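A sketch of this two-round self-training procedure is shown below; \texttt{train\_model} and \texttt{pseudo\_label} are hypothetical stand-ins for the actual training and inference routines.
\begin{verbatim}
def augment_and_retrain(gold_data, unlabeled_paragraphs, train_model, pseudo_label):
    """Train on gold data, label the unannotated paragraphs, then retrain
    on the mixture of gold and silver examples."""
    teacher = train_model(gold_data)
    silver_data = [pseudo_label(teacher, p) for p in unlabeled_paragraphs]
    return train_model(gold_data + silver_data)
\end{verbatim}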
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth]{story_only.png}
\caption{An illustration of integrating CGLI into a story understanding framework. The story is encoded in the same way as shown in \autoref{fig:model}, producing a sequence of step representations, i.e., a batch of [CLS] vectors. These vectors serve as input to different output layers to model the three task objectives: plausibility (orange), conflict sentence detection (blue), and entity state prediction (yellow).}
\label{fig:model_story}
\end{figure}
\section{Story Understanding with CGLI}
\label{subsec:adapt}
We integrate CGLI into a story understanding framework with minimal modifications following the task definition; the overall model is shown in \autoref{fig:model_story}. As the story understanding tasks do not require location extraction, we remove the span extraction layer, which makes the input representation of step 0 obsolete. Given that the continuity of effects to preconditions between consecutive steps does not hold in this task, we directly use $C_{enc}^{N'} \in \mathbb{R}^{d \times n}$ instead of $D_{enc}^{N} \in \mathbb{R}^{2d \times n}$ in the in-batch CRF. Given $B$ attributes for precondition and effect states, we apply an in-batch CRF module for each attribute. Specifically, we apply equations 7 and 8 for every attribute, yielding $2B$ such modules in total.
To detect conflicting sentences, we concatenate every pair of sentence representations, and pass it through a linear layer to find the conflicting pair. For story classification, we take the mean of sentence representations for story representation, and pass it through a linear layer for binary classification. Formally,
\begin{align}
C_{confl} & = \text{vstack}(\text{Concat}(C_{enc}^{t'}, C_{enc}^{j'})) \\
P_{confl} & = \text{Softmax}(W_{confl} C_{confl}) \\
C_{plau} & = \text{Mean}(C_{enc}^{N'}) \\
P_{plau} & = \text{Softmax}(W^T_{plau} C_{plau}),
\end{align}
where $C_{confl} \in \mathbb{R}^{2d \times \frac{n(n-1)}{2}}$, $j \in \{{t+1},...n\}$, $W_{confl} \in \mathbb{R}^{2d}$, $C_{plau} \in \mathbb{R}^{d}$, $W_{plau} \in \mathbb{R}^{d \times 2}$. During training, we jointly optimize all three task objectives: \par\nobreak
{\small
\begin{align}
& \mathcal{L}_{plau} = -log(P_{plau}^{y_p}) \\
& \mathcal{L}_{confl} = \begin{cases} -log(P_{confl}^{y_c}) & \text{if $y_p$ = 0} \\
0 & otherwise
\end{cases} \\
& \mathcal{L}_{att} = -log(P(S^{p}|\phi^{p}, \psi^{p}))-log(P(S^{e}|\phi^{e}, \psi^{e})) \\
& \mathcal{L} = \mathcal{L}_{plau} + \mathcal{L}_{confl} + \frac{1}{B} \sum_{b=0}^{B} \mathcal{L}_{att}^b
\end{align}
}
where $y_p$=0 if the story is not plausible and $y_p$=1 if the story is plausible, and $y_c$ denotes the index of the conflicting sentence pair. Note that in our setup, each entity produces a prediction for the conflicting sentence pair and for story plausibility. At inference time, we take the average of all entities' logits to obtain the final predictions for these two objectives.
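The two additional output heads can be sketched as follows (PyTorch), operating on the per-sentence [CLS] states $C_{enc}^{N'}$ of one entity; the averaging of per-entity logits at inference time is omitted for brevity.
\begin{verbatim}
import torch
import torch.nn as nn

class StoryHeads(nn.Module):
    """Conflict-pair detection and story plausibility from per-sentence states."""
    def __init__(self, d):
        super().__init__()
        self.w_confl = nn.Linear(2 * d, 1)
        self.w_plau = nn.Linear(d, 2)

    def forward(self, cls_states):                      # (n, d)
        n = cls_states.size(0)
        i, j = torch.triu_indices(n, n, offset=1)       # all sentence pairs with i < j
        pairs = torch.cat([cls_states[i], cls_states[j]], dim=-1)
        conflict_logits = self.w_confl(pairs).squeeze(-1)          # (n * (n - 1) / 2,)
        plausibility_logits = self.w_plau(cls_states.mean(dim=0))  # (2,)
        return conflict_logits, plausibility_logits
\end{verbatim}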
\begin{figure*}
\centering
\includegraphics[scale=0.4]{Our_model.png}
\caption{An illustration of our proposed model}
\label{fig:model}
\end{figure*}
\section{Results and Analysis}
\subsection{Procedural text understanding}
\label{subsec:res}
CGLI significantly outperforms all previous baselines on ProPara, achieving state-of-the-art results (\autoref{tab:propara_res}). With data augmentation, our model achieves a further improvement at the document level. For each baseline, we indicate whether it considers entity-centric information (E), timestep-centric information (T), global context (GC), and global output (GO). We note that models that adopt global output usually have much higher precision than recall at the document level. On the other hand, TSLM is very good on recall, which is expected given its focus on entity and timestep input modeling.\footnote{This pattern may not always hold for other models due to other modeling differences, e.g., LSTM vs. BERT.}
CGLI is able to achieve both strong precision and recall, showing the benefit of global reasoning over both entity- and timestep-specific global inputs in a single model.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.85\linewidth]{Results.png}
\caption{Document-level evaluation on ProPara test set, split by precision (P) and recall (R) per category (Inputs, Outputs, Conversions, Moves).}
\label{fig:res}
\end{figure*}
We break down the results on ProPara by the document-level question types defined in \S\ref{sec:exp} and compare our best model with the best results reported by TSLM and KOALA. The precision and recall per question type are shown in \autoref{fig:res}. Consistent with the overall results, KOALA is particularly strong on precision for all types and TSLM is much better on recall. CGLI is able to maintain a balance between those two extremes and achieve overall better results. All three models perform similarly when predicting the inputs and the outputs of a procedure. Yet, CGLI achieves much higher performance on transitional questions regarding entity conversions and moves, which are notably harder to predict. These results suggest that the gains of CGLI over previous works are mostly due to hard-to-answer categories.
\begin{table*}[!ht]
\centering
\small
\caption{Document-level ablation results of proposed model components and modeling aspects on the ProPara.}
\label{tab:abalation}
\begin{tabular}{l|rrr|rrr}
\toprule
& \multicolumn{3}{c|}{Dev set} & \multicolumn{3}{c}{Test set} \\
\hline
Model & P & R & F1 & P & R & F1 \\
\hline
CGLI \small + Data Augmentation & \small 78.5($\pm 1.7$) & \small 76.1($\pm 0.8$) & \small \bf 77.3($\pm 0.8$) & \small 75.2($\pm 1.1$) & \small 68.8($\pm 0.8$) & \small \bf 71.9($\pm 0.5$)\\
CGLI & \small 77.3($\pm 1.5$) & \small 75.5($\pm 0.7$) & \small 76.4($\pm 1.0$) & \small 73.0($\pm 1.9$) & \small \bf 69.8($\pm 1.2$) & \small 71.3($\pm 0.9$)\\
\small No SQuAD2.0 & \small 76.5($\pm 1.3$) & \small 75.4($\pm 0.9$) & \small 75.9($\pm 0.4$) & \small 72.5($\pm 2.7$) & \small 68.0($\pm 1.3$) & \small 70.1($\pm 0.8$) \\
\small No Prior & \small 75.6($\pm 0.8$) & \small \bf 76.6($\pm 0.6$) & \small 76.1($\pm 0.3$) & \small 72.0($\pm 2.1$) & \small 68.1($\pm 1.4$) & \small 70.0($\pm 1.3$) \\
\hline
\small No GO & \small 75.7($\pm 1.1$) & \small 76.1($\pm 1.4$) & \small 75.9($\pm 0.5$) & \small 70.2($\pm 1.2$) & \small 67.3($\pm 1.2$) & \small 68.7($\pm 0.8$) \\
\small No GC & \small 75.5($\pm 1.3$) & \small 73.2($\pm 1.0$) & \small 74.3($\pm 0.5$) & \small 73.2($\pm 2.2$) & \small 66.7($\pm 0.6$) & \small 69.8($\pm 1.1$) \\
\small No T & \small 82.3($\pm 0.7$) & \small 59.7($\pm 0.4$) & \small 69.2($\pm 0.3$) & \small 77.2($\pm 1.3$) & \small 54.3($\pm 1.0$) & \small 63.8($\pm 0.8$) \\
\small No E & \small \bf 84.5($\pm 1.1$) & \small 48.6($\pm 0.3$) & \small 61.7($\pm 0.2$) & \small \bf 84.9($\pm 0.7$) & \small 40.8($\pm 0.5$) & \small 55.1($\pm 0.3$) \\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}[!t]
\centering
\caption{Results on the TRIP dataset. The F1 scores of last two columns are Macro averages of 20 attributes.}
\small
\label{tab:trip_res}
\begin{tabular}{l|ccccc}
\toprule
Model & Accuracy & Consistency & Verifiability & Precondition F1 & Effect F1 \\
\hline
TRIP-RoBERTa \cite{storks-etal-2021-tiered-reasoning} & 73.2 & 19.1 & 9.1 & 51.3 & 49.3 \\
\hline
CGLI (Ours) & \small 93.4($\pm 1.5$) & \small 76.3($\pm 1.7$) & \small 24.8($\pm 1.6$) & \small 70.8($\pm 1.8$) & \small 74.9($\pm 1.7$) \\
CGLI (Ours) No CRF & \small \bf 94.1($\pm 0.7$) & \small \bf 77.3($\pm 1.0$) & \small \bf 28.0($\pm 2.5$) & \small \bf 72.1($\pm 1.6$) & \small \bf 75.6($\pm 1.6$) \\
\bottomrule
\end{tabular}
\end{table*}
\subsection{Story understanding}
Our method outperforms the baseline method on the TRIP dataset by a very large margin on all metrics, especially on consistency where we observe nearly 400\% relative improvement over the baseline (\autoref{tab:trip_res}). This may seem surprising as both our model and the baseline use the same LM backbone. After further analysis of the baseline model, we notice three sub-optimal design decisions. First, the baseline detects conflicting sentence pairs via binary classification for every sentence, independently, without considering pairs of sentences. As a result, for 47.6\% of examples in TRIP test set, the baseline model predicted either less or more than two sentences as conflicting, thus getting a score of 0 on consistency. Second, the baseline uses the same encoded representations to directly model both story classification and conflicting pair detection objectives. Without using task-specific output projection layers, the model may be hard to optimize. Third, the baseline did not provide global input view to the model, i.e., each sentence is encoded independently.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Case.png}
\caption{Example predictions on ProPara from three models for two entities. Red font indicate errors.}
\label{fig:case}
\end{figure*}
\subsection{Ablation studies}
\noindent \textbf{Impact of modeling aspects} To understand the contribution of each of the four modeling aspects we identified for the procedural text understanding, we ablate each of them in CGLI. \\
\noindent \textbf{No GO} is done by removing the CRF layer and directly training the model with a cross-entropy loss over the emission scores $\phi \in \mathbb{R}^{a \times n}$ defined in \S\ref{sec:model}. During inference, we predict the action at each timestep independently by taking the argmax over the emission scores instead of Viterbi decoding. \textbf{No GC} is achieved by allowing the model to access only up to $t$ sentences at every timestep $t \in \{1, 2, 3, ..., n\}$, i.e., the model has no access to future sentences. For \textbf{No T}, we remove the timestep embeddings such that each entity has identical encoded context representations across timesteps. For \textbf{No E}, we no longer provide the pseudo question with the entity name in the input~(\S\ref{sec:model}), such that all entities in the same paragraph have the same encoded context representations. \\
\noindent The results are shown in the bottom half of \autoref{tab:abalation}. Removing either T or E leads to a drastic drop in the F1 score. This is not surprising because the model has no way to distinguish different timesteps or different entities, respectively. We found that the model predicts most entity actions as NONE, leading to extremely high precision and low recall. Removing GO also leads to a large drop in F1 score, which is similar to the performance of TSLM, a model that lacks GO. This shows that modeling the global dependency is important for procedural understanding. Finally, removing GC also hurts the performance, which is expected because location spans often appear only in future sentences, thus the span extraction layer is at a disadvantage in this setting.
\noindent \textbf{Impact of training data}
To understand the impact of the CGLI components, we ablate SQuAD2.0 pretraining by initializing the model with a vanilla RoBERTa-Large model, and we ablate prior initialization by randomly initializing the transition scores in the CRF layer. The results (upper half of \autoref{tab:abalation}) show that with data augmentation, CGLI achieves higher overall F1 scores on average and the gains come mostly from precision. Both pretraining on SQuAD2.0 and prior initialization have a positive impact on the CGLI performance.
As the continuity from effect to precondition states no longer holds on the TRIP story understanding task (cf. \S\ref{subsec:task1}), we investigate the impact of the CRF layers on modeling entity states. We remove the CRF layers for both effects and preconditions, and we directly train CGLI with regular classification objectives, hence entity states at each step are predicted independently (No GO). \autoref{tab:trip_res} shows that removing the CRF improves performance. We hypothesize that this is caused by the implausible stories in the dataset. Since the entity states in the implausible story's conflicting sentences are inconsistent by nature, training the CRF to maximize their probabilities can be confusing for the model. To verify this, we train models with and without CRF on plausible stories only. In this case, the model is only trained to predict entity preconditions and effects. We found that the models have very similar F1 scores with or without CRF (preconditions 74.1 vs 73.7, effects 76.5 vs 76.6). Thus, we conclude that implausible stories are detrimental to CRF training. Moreover, as the effects of the previous step are not a precondition of the current step in TRIP, the outputs from previous steps can hardly contribute to the current prediction, thus the CRF has a limited contribution even on the plausible stories.
\begin{table}[!t]
\centering
\caption{Error Examples on TRIP. The conflicting pairs are marked with *, and the entity of interest with \textit{italic}.}
\label{tab:trip_error}
\resizebox{\linewidth}{!}{
\begin{tabular}{l}
\toprule
Ann washed her hair in the bathtub. \\
Ann used the hair dryer to get ready to go out. \\
Ann applied deodorant to her armpits. \\
*\textit{Ann} put her pants on. \\
- (Effects, is wet), Pred: False, Gold: Irrelevant \\
*Ann ironed her \textit{pants} before going out. \\
- (Preconditions, is wet), Pred: True, Gold: Irrelevant \\
\hline
*John forgot his \textit{notebook} at home. \\
- (Effects, location), Pred: Moved, Gold: Irrelevant \\
John sat at his desk. \\
John opened up his book bag. \\
* John took out his \textit{notebook}. \\
- (Preconditions, location), \\
- Pred: Picked up, Gold: Taken out of container \\
John began writing down notes. \\
\bottomrule
\end{tabular}
}
\end{table}
\subsection{Case Studies}
We show an example of tracking states for two entities from ProPara with partial outputs from CGLI, TSLM, and KOALA in \autoref{fig:case}. For gasoline, our model and TSLM both got perfect predictions, but KOALA missed the action at step 1, thus predicting no moves across the process. For exhaust, the sentence in step 6 gives a strong signal for a movement, however, there is no mention of exhaust in the previous steps. Our model is able to infer that \textit{create} needs to come before \textit{move}, thus correctly predicting the actions in steps 5 and 6. However, since TSLM does not have the global output view, it cannot capture such transitions. For KOALA, although it is also able to predict the move and infer that the exhaust should exist before the move, it is unable to predict the create action. We note that for both entities, KOALA is more reluctant to predict actions compared to the other two models. These observations explain why KOALA achieves overall higher precision but lower recall.
We show story reasoning examples from TRIP in \autoref{tab:trip_error}. Since the largest gap in the model performance is between consistency and verifiability, we select examples where our model successfully predicted conflicting sentences but failed to predict entity states. We see that the model still lacks common sense on certain concepts, e.g., forgetting something at home does not result in changing its location, and people usually iron their clothes after they are dry. We also note that some entity states might be hard to distinguish, e.g., the distinction between picking up something versus taking something out of a container only depends on the previous location of the object, which might be hard for models to learn from data. These observations suggest that enhancing the model's commonsense reasoning ability is a promising future direction.
\section{Experimental Setup}
\label{sec:exp}
\textbf{Benchmarks.}
We evaluate procedural understanding on \texttt{ProPara}~\cite{mishra2018tracking}\footnote{\texttt{ProPara} is covered under Apache 2.0 License.}, which contains 488 human-written paragraphs from the natural science domain. The paragraphs are densely annotated by crowd workers, i.e., for every entity, its existence and location are annotated for every step. An additional 871 unannotated paragraphs are also provided by ProPara; we use these for data augmentation.
We test story understanding on
\texttt{TRIP} \cite{storks-etal-2021-tiered-reasoning}, which contains crowdsourced plausible and implausible story pairs. In each pair, the plausible story label and the conflicting sentence pair label in the implausible story are annotated. TRIP annotates 20 attributes, each with a predefined set of possible values. The annotations are given for all entities at every timestep of the two stories.
We provide dataset split details in \autoref{tab:stats}. For TRIP, we only report the unique story statistics.
Note that \citet{storks-etal-2021-tiered-reasoning} have up-sampled some of the plausible stories to match the number of implausible stories.
\begin{table}[]
\centering
\caption{Statistics of the datasets.}
\label{tab:stats}
\begin{tabular}{l|lrrr}
\toprule
Dataset & Stats & Train & Dev & Test \\
\hline
& \#Paragraphs & 391 & 43 & 54 \\
ProPara & \#Ents/Para & 3.8 & 4.1 & 4.4 \\
& \#Sents/Para & 6.7 & 6.7 & 6.9 \\
\midrule
& \#Paragraphs & 1169 & 474 & 504 \\
TRIP & \#Ents/Para & 7.0 & 8.1 & 8.3 \\
& \#Sents/Para & 5.1 & 5.0 & 5.1 \\
\bottomrule
\end{tabular}
\end{table}
\noindent \textbf{Evaluation metrics.}
\label{subsec:eval}
Following previous work, we report both sentence-level metrics\footnote{\url{https://github.com/allenai/propara/tree/master/propara/evaluation}} and document-level metrics\footnote{\url{https://github.com/allenai/aristo-leaderboard/tree/master/propara}} on ProPara. Sentence-level evaluation computes accuracy over three questions: whether the entity is created (moved/destroyed) in the process (\textit{Cat1}), and if so, when (\textit{Cat2}) and where (\textit{Cat3}).\footnote{Cat2 and Cat3 only apply to entities that satisfy Cat1.} Document-level metrics compute F1 scores of the identified inputs (entities that exist before the process begins and are destroyed in the process), outputs (entities that do not exist before but exist after the process), conversions (instances where some entities are converted to other entities), and moves (location changes of entities).
For TRIP, we follow the original work and report the following metrics: \textit{accuracy} of classifying the plausible story, \textit{consistency} of finding the conflicting sentence pairs when the story classification is correct, and \textit{verifiability}, which evaluates the prediction of the entities' effects at $s_{c1}$ and the entities' preconditions at $s_{c2}$. We also report the average F1-score for preconditions and effects across the 20 attributes to better understand the model's procedural understanding ability.
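For clarity, the following Python sketch shows one plausible reading of these tiered metrics (our illustration, not the official TRIP scorer; field names are assumptions): consistency is only credited when the story choice is correct, and verifiability additionally requires the entity states at the conflicting pair to be correct.
\begin{verbatim}
def trip_metrics(examples):
    # Each example: story_pred / story_gold (index of the plausible story),
    # pair_pred / pair_gold (conflicting sentence pair), and states_correct
    # (effects at s_c1 and preconditions at s_c2 predicted correctly).
    n = len(examples)
    story = [e["story_pred"] == e["story_gold"] for e in examples]
    pair = [s and e["pair_pred"] == e["pair_gold"]
            for s, e in zip(story, examples)]
    verif = [p and e["states_correct"] for p, e in zip(pair, examples)]
    return sum(story) / n, sum(pair) / n, sum(verif) / n
\end{verbatim}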
\noindent \textbf{Baselines.}
For ProPara, we directly report baseline results from the official leaderboard.
For TRIP, we report the results from the best model released by \citet{storks-etal-2021-tiered-reasoning}.
\noindent\textbf{Training details.} \label{sec:details}
For ProPara, we define two additional action types to represent the entity transitions, namely Out-of-Create and Out-of-Destroy, similar to \cite{zhang2021koala}. Hence, the total size of the action space is six. For evaluation, these two types are mapped to the NONE transition; they are defined to help the model differentiate the NONE types during training, i.e., whether the entity has not yet been created or has already been destroyed.
To facilitate the model's learning of location prediction, we initialize our model with a RoBERTa-Large \cite{liu2019roberta} model pretrained on SQuAD 2.0~\cite{rajpurkar-etal-2018-know}. We run our model five times with different random seeds and report the maximum scores in \autoref{tab:propara_res} and average scores with a 95\% confidence interval in \autoref{tab:abalation} and \autoref{tab:trip_res}. For TRIP, we directly initialize the model with RoBERTa-Large. On ProPara we train models for 20 epochs without data augmentation and for 6 epochs with data augmentation so that the model receives a similar number of updates. We train models for 10 epochs on TRIP. Except for training epochs, we use the same set of hyperparameters in all of our experiments: learning rate 1e-5, batch size 1, gradient accumulation 2. We use the Transformers library \cite{wolf-etal-2020-transformers}\footnote{Covered under Apache 2.0 License.} for all of our experiments, and all of our models have about 360M parameters.
\noindent\textbf{Computing infrastructure.}
We run our experiments on a single Nvidia A6000 GPU or a single Nvidia Titan RTX GPU. For ProPara, each experiment takes about 1.5 hours to finish. For TRIP, each experiment takes about 9 hours to finish.
\section{Introduction}
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{concept.png}
\caption{An example of the story understanding task. Given two stories, the task is to judge which story is more plausible, find the conflicting sentence pair in the implausible story, and predict entity states at every step.}
\label{fig:exp}
\end{figure}
\begin{figure*}[!ht]
\centering
\includegraphics[width=1.0\linewidth]{annotated.png}
\caption{An illustration of CGLI. At every step, the LM encodes the full paragraph with different timestep ids (colored circles with numbers). The span extraction layer yields a location span for every entity at every step, and this span sequence is combined with the action sequence produced by a CRF layer to form the final predictions.}
\label{fig:model}
\end{figure*}
Understanding the causal links of events in procedures is a key aspect of intelligence, facilitating human narration and dialogue. For instance, understanding why story B is plausible and why story A is not (\autoref{fig:exp}) requires procedural understanding of the causes of John leaving his notebook at home, as opposed to him taking out his notebook from his bag: writing in a notebook is counterfactual in the former case, and intuitive in the latter.
Understanding stories requires procedural models that can reason consistently about event implications, and do so at different granularities. For a model to decide whether a story is plausible, it has to track the entity states over time, understand the effects of the described actions (green arrows), and consider the preconditions for a given action (pink arrows). Meanwhile, the model must reconcile the causes and effects of all events described in the story, to provide a globally consistent interpretation.
While procedural reasoning research reports steady progress in recent years~\cite{rajaby-faghihi-kordjamshidi-2021-time,gupta-durrett-2019-effective,zhang2021koala}, story understanding and procedural reasoning have rarely been considered together~\cite{storks-etal-2021-tiered-reasoning}.
Prior works have attended only to complementary aspects of the procedural reasoning problem: e.g., \citet{gupta-durrett-2019-effective} build entity-centric context representations while ignoring timestep-wise representation modeling, and \citet{rajaby-faghihi-kordjamshidi-2021-time} later proposed a timestep-specific model providing a unique context encoding at every step to enable modeling flexibility. However, these methods predict independent step-wise entity states, thus compromising the dependency of outputs \textit{across} different steps---yielding high recall but low precision.
Global-output methods~\cite{gupta-durrett-2019-tracking,zhang2021koala} explicitly leverage the strong dependency across steps by jointly modeling the entity actions from all steps, but these methods only have one context encoding for all entities and steps, thus providing sub-optimal input representations---yielding high precision but low recall.
In this paper, we propose \textbf{C}oalescing \textbf{G}lobal and \textbf{L}ocal \textbf{I}nformation (CGLI): a new model for procedural text understanding that makes global decisions in consideration of entity-, timestep-centric, and global views of the input. To do so, our model builds a separate input view for every entity, at every step, while providing the whole context. Meanwhile, CGLI represents the entity actions across steps jointly with a structured prediction objective, thus achieving high consistency between different steps. The contributions of this paper are:
\noindent \textbf{1. A novel procedural understanding method, CGLI}, which produces global outputs of narrative procedures based on a unified view of the input, combining both local (entity-centric, timestep-specific) and global (document-wide) views---thus optimizing precision and recall, simultaneously.
\noindent \textbf{2. A story understanding framework}, which builds upon our procedural understanding model, to enable story understanding with explicit and explainable understanding of event procedures, captured through entity precondition and effect states.
\noindent \textbf{3. An extensive evaluation} of CGLI against strong baselines on a procedural task, \texttt{ProPara}~\cite{dalvi-etal-2018-tracking}, and recent story understanding task, \texttt{TRIP}~\cite{storks-etal-2021-tiered-reasoning}. Our experiments show the positive impact of our method, through achieving state-of-the-art results, while ablation studies measure the impact of its individual components.
\section{Related Work}
Recent \textbf{procedural text understanding} benchmarks, including ScoNe \cite{long-etal-2016-simpler}, bAbI \cite{weston2015aicomplete}, ProcessBank \cite{berant-etal-2014-modeling}, ProPara~\cite{mishra2018tracking}, Recipe~\cite{Bosselut2017SimulatingAD}, and OpenPI \cite{tandon-etal-2020-dataset}, have inspired a series of methods. \citet{mishra2018tracking} propose ProLocal, which encodes each step of a procedure separately, and ProGlobal, which encodes the full paragraph at every step. KG-MRC \cite{das2018building} builds a dynamic knowledge graph of entity and location mentions to communicate across time steps. DynaPro \cite{DBLP:journals/corr/abs-2003-13878} employs a pre-trained LM to jointly predict entity attributes and their transitions. TSLM \cite{rajaby-faghihi-kordjamshidi-2021-time} formulates procedural understanding as a question answering task, and leverages models pretrained on SQuAD \cite{rajpurkar-etal-2016-squad} enhanced with a timestamp encoding. Although equipped with various ways to pass information across time steps, these methods still make local predictions and thus may compromise the global dependency of outputs. Another line of work focuses on jointly modeling the entity action sequence, aiming to ensure global structure and consistency. ProStruct \cite{tandon-etal-2018-reasoning} aims to find the globally optimal entity action sequence using beam search. \citet{gupta-durrett-2019-tracking} devise a structured neural architecture NCET, modeled with a CRF, which recurrently updates the hidden representation of each entity at each step. IEN \cite{tang-etal-2020-understanding-procedural} builds upon NCET and augments the entity-to-entity attention. KOALA~\cite{zhang2021koala} further enhances NCET by pretraining on Wikipedia
and ConceptNet~\cite{speer2017conceptnet}. The key shortcoming of these global methods is that they rely on entity mentions extracted from a single copy of encoded context shared by all entities and all steps, which limits their modeling capacity. Our proposed method stands out from all previous works by coalescing complementary granularities of procedural text modeling, by building specific and informative input representations while modeling output dependency. Concurrent to our work, \citet{shi2022lemon} proposed LEMON for language-based environment manipulation. Their focus on model pretraining is orthogonal to CGLI.
There are also numerous recent \textbf{story understanding} benchmarks \cite{mostafazadeh-etal-2016-corpus,qin-etal-2019-counterfactual,mostafazadeh-etal-2020-glucose}, and modeling methods \cite{qin-etal-2020-back,guan-etal-2020-knowledge,Gabriel2021ParagraphLevelCT,ghosal-etal-2021-stack}. The TRIP task \cite{storks-etal-2021-tiered-reasoning} integrates a procedural understanding component in story understanding to enable consistent and interpretable reasoning over narratives. To our knowledge, we are the first work to bridge the gap of modeling methods between procedural understanding and story comprehension. Other tasks that require reasoning over procedures, including defeasible reasoning \cite{rudinger-etal-2020-thinking,madaan-etal-2021-think},
abductive commonsense inference~\cite{ch2019abductive}, reasoning over preconditions \cite{qasemi2021corequisite},
script reasoning \cite{zhang-etal-2020-reasoning,sakaguchi-etal-2021-proscript-partially}, and multimodal script reasoning \cite{yang-etal-2021-visual,wu2021understanding}, are typically solved by specialized methods without separately modeling procedural and causal links. We intend to apply CGLI to these tasks in the future to bridge this gap.
\section{Conclusions \& Future Work}
We proposed CGLI: a novel procedural understanding method that combines global and local information. Recognizing the key role of procedural understanding in downstream tasks, we also integrated CGLI in a story understanding framework. Our experiments showed the benefit of our coalesced method, with the global view providing high precision and the local view boosting recall, ultimately achieving new state-of-the-art results. We demonstrated that CGLI can help with classifying stories and identifying the conflicting sentence for inconsistent stories. Future work should investigate how to enhance the commonsense ability of our procedural understanding model, e.g., by injecting commonsense knowledge during finetuning \cite{chen2018incorporating,ma-etal-2019-towards} or by pretraining on commonsense knowledge bases \cite{guan-etal-2020-knowledge,ilievski2021story,ma2020knowledgedriven},
and how to apply procedural understanding to other downstream tasks, such as dialogue modelling \cite{zhou-etal-2021-probing-commonsense} and planning \cite{2020}. It is also worth exploring lightweight-tuning methods \cite{ma-etal-2021-exploring,vu-etal-2022-spot} to enhance the model's generalization and reduce computational cost.
\section*{Acknowledgements}
We would like to thank Yonatan Bisk, Aman Madaan, and Ruohong Zhang for helpful discussions and the anonymous reviewers for their valuable suggestions on this paper. Some datasets and models used in this work do not specify licenses in their code repositories; for these, we follow previous work and use them only for research purposes.
\clearpage
\section{}
In recent years different approaches, originating from the physics
community, have shed new light on sports events, e.g. by studying
the behavior of spectators \cite{laola}, by elucidating the
statistical vs. systematic features behind league tables
\cite{ben1,ben2,buch}, by studying the temporal sequence of ball
movements \cite{mendes} or using extreme value statistics
\cite{Suter,Greenhough02} known, e.g., from finance analysis
\cite{Stanley}. For the specific case of soccer matches different
models have been introduced on phenomenological grounds
\cite{Lee97,Dixon97,Dixon98,Rue00,Koning00,Dobson03}. However,
very basic questions related, e.g., to the relevance of systematic
vs. statistical contributions or the temporal fitness evolution
are still open. It is known that the distribution of soccer goals
is broader than a Poissonian distribution
\cite{Greenhough02,janke,janke09}. This observation has been
attributed to the presence of self-affirmative effects during a
soccer match\cite{janke,janke09}, {i.e. an increased
probability to score a goal depending on the number of goals
already scored by that team}.
In this work we introduce a general model-free approach which allows us to elucidate the outcome of
sports events. By combining strict mathematical
reasoning, appropriate finite-size scaling, and comparison with actual data,
all ingredients of this framework can be quantified for the specific example of soccer. A unique relation can be derived to calculate
the expected outcome of a soccer match and three hierarchical levels of statistical influence can be identified.
As one application we show that the skewness of the distribution of soccer goals \cite{Greenhough02,janke,janke09} can be fully
related to fitness variations among different teams and does not require the presence of self-affirmative effects.
As our data basis we take all matches in the German Bundesliga (www.bundesliga-statistik.de) between seasons 1987/88 and 2007/08 except for the year 1991/92 (in that year the league contained 20 teams). Every team plays 34 matches per season.
Earlier seasons are not taken into account because the underlying statistical properties (in particular number of goals per match) are somewhat different.
Conceptually, our analysis relies on recent observations in describing
soccer leagues \cite{HR09}: (i) The home advantage is characterized by a team-independent but season-dependent increase of the home team goal
difference $c_{home}>0$. (ii) An appropriate observable to characterize the fitness of a team $i$ in a given season is the average goal
difference (normalized per match) $\Delta G_i(N)$, i.e. the difference of
the goals scored and conceded during $N$ matches. In particular it contains more information about the team fitness than, e.g., the number of points.
\begin{figure}[ht]
\includegraphics[width=0.95\linewidth]{fig1.eps}
\caption{ The correlation function $h(t)$. The average value of $h(t)$ is included (excluding the value
for $t=17$) yielding approx. $0.22$ \cite{HR09}.}
\label{fig.1}
\end{figure}
Straightforward information about the team behavior during a season can be extracted from correlating its match
results from different match days. Formally, this is expressed by
the correlation function $h(t) = \langle \Delta g_{ij}(t_0) \Delta g_{ik}(t_0+t) \rangle$. Here
$\Delta g_{ij} := g_i - g_j$ denotes the goal difference of a match of team $i$
vs. team $j$ with the final result $g_i:g_j$. $j$ and $k$ are the opponents of
team $i$ at match days $t_0$ and $t_0 + t$. The home-away asymmetry
can be taken into account by the transformation $\Delta g_{ij} \rightarrow \Delta g_{ij} \mp c_{home}$
where the sign depends on whether team $i$ plays at home or away.
The resulting function $h(t)$ is shown in Fig.1. Apart from the data point
for $t=17$ one observes a time-independent positive plateau value. The absolute value of this constant corresponds
to the variance $\sigma_{\Delta G}^2$ of $\Delta G_i$ and is thus a measure for the fitness variation in a league \cite{HR09}.
Furthermore, the lack of any decay shows that the fitness of a team is constant during the whole season.
This result is fully consistent with the finite-size scaling analysis in
Ref.\cite{HR09} where additionally the fitness change between two seasons was
quantified. The exception for $t=17$
just reflects the fact that team $i$ is playing against the same
team at days $t_0$ and $t_0 + 17$, yielding additional
correlations between the outcome of both matches (see also below).
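The plateau in Fig.1 can be illustrated with a short Python sketch (a toy simulation with synthetic data; the random schedule and the noise level are our assumptions, not the Bundesliga data):
\begin{verbatim}
import numpy as np

def h_of_t(d, t):
    # d: (n_teams, n_matchdays) home-corrected goal differences
    #    Delta g -/+ c_home, seen from each team's point of view.
    if t == 0:
        return float(np.mean(d * d))
    return float(np.mean(d[:, :-t] * d[:, t:]))

rng = np.random.default_rng(1)
n_teams, n_days, sigma2 = 18, 34, 0.22
fitness = rng.normal(0.0, np.sqrt(sigma2), n_teams)     # constant season fitness
opponent = rng.integers(0, n_teams, (n_teams, n_days))  # crude random schedule
noise = rng.normal(0.0, 1.7, (n_teams, n_days))         # match-to-match randomness
d = fitness[:, None] - fitness[opponent] + noise
print([round(h_of_t(d, t), 2) for t in range(1, 6)])    # scatters around ~0.2
\end{verbatim}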
As an immediate consequence, the limit of $\Delta G_i(N)$
for large $N$, corresponding to the true fitness $\Delta G_i$, is well-defined. A consistent estimator for $\Delta G_i$, based on the information
from a finite number of matches, reads
\begin{equation}
\label{eqgest}
\Delta G_i = a_N \Delta G_i(N).
\end{equation}
with $a_N \approx 1/[1+3/(N \sigma^2_{\Delta G})]$ \cite{HR09}.
For large $N$ the factor $a_N$ approaches unity and the estimation
becomes error-free, i.e. $\Delta G_i(N) \rightarrow \Delta G_i$.
For $N=33$ one has $a_N = 0.71$ and the variance of the estimation error is
given by $ \sigma_{e,N}^2 = (N/3+1/\sigma_{\Delta G}^2)^{-1} \approx 0.06$ \cite{HR09}. This statistical framework
is known as regression toward the mean \cite{Stigler}.
Analogously, introducing
$\Sigma G_i(N)$ as the average sum of goals scored and conceded by team $i$ in $N$ matches,
its long-time limit is estimated via $\Sigma G_i -\lambda = b_N (\Sigma G_i(N)-\lambda)$ where $\lambda$ is the average number of goals
per match in the respective season. Using $\sigma^2_{\Sigma G} \approx 0.035$ one correspondingly obtains $b_{N=33} = 0.28$ \cite{HR09}.
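For illustration, the quoted numbers follow directly from these formulas; the short Python sketch below assumes that $b_N$ has the same functional form as $a_N$, which reproduces the value $0.28$.
\begin{verbatim}
def shrinkage(N, var_true, var_match=3.0):
    # a_N (and, under our assumption, b_N): weight of the finite-sample
    # average; var_match ~ 3 is the random variance of a single match.
    return 1.0 / (1.0 + var_match / (N * var_true))

sigma2_dG, sigma2_sG = 0.22, 0.035
a33 = shrinkage(33, sigma2_dG)             # ~0.71
b33 = shrinkage(33, sigma2_sG)             # ~0.28
err2 = 1.0 / (33 / 3.0 + 1.0 / sigma2_dG)  # estimation-error variance, ~0.06
print(round(a33, 2), round(b33, 2), round(err2, 2))
\end{verbatim}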
{Our key goal is to find a sound characterization of the match result when team $i$ is playing vs. team $j$, i.e. $\Delta g_{ij}$
or even $g_i$ and $g_j$ individually. The final outcome $\Delta g_{ij}$ has three conceptually different and uncorrelated contributions
\begin{equation}
\Delta g_{ij} = q_{ij} + f_{ij} + r_{ij}.
\end{equation}
Averaging over all matches one can define the respective variances $\sigma_q^2, \sigma_f^2$ and $\sigma_r^2$.
(1) $q_{ij}$ expresses the average outcome which can be expected based on knowledge of the team fitness values $\Delta G_i$ and $\Delta G_j$, respectively.
Conceptually this can be determined by averaging over all matches when teams with these fitness values play against each other. The task is
to determine the dependence of $q_{ij} \equiv q(\Delta G_i, \Delta G_j) $ on $\Delta G_i$ and $\Delta G_j$.
(2) For a specific match, however, the outcome can be {\it systematically}
influenced by different factors beyond the general fitness values; we capture these by the variable $f_{ij}$ with a mean of zero:
(a) External effects such as players who are injured or tired,
weather conditions (helping one team more than the other), or red cards. As a consequence the effective fitness of a team relevant for this match may differ from the
estimation $\Delta G_{i}$ (or $\Delta G_{j}$). (b) Intra-match effects depending on the actual course of a match. One example is the suggested presence
of self-affirmative effects, i.e. an increased probability to score a goal (equivalently an increased fitness) depending
on the number of goals already scored by that team \cite{janke,janke09}. Naturally, $f_{ij}$ is much harder to predict if possible at all. Here we restrict
ourselves to the estimation of its relevance via determination of $\sigma_f^2$.
(3) Finally, one has to understand the emergence of the actual goal distribution based on expectation values as expressed by the random
variable $r_{ij}$ with average zero. This
problem is similar to the physical problem when a decay rate (here corresponding to $q_{ij} + f_{ij}$) has to be translated into the actual
number of decay processes. }
{\it Determination of $q_{ij}$}:
$q_{ij}$ has to fulfill the two basic conditions (taking into account the home advantage):
$q_{ij}-c_{home} = - (q_{ji}-c_{home})$ (symmetry condition) and
$\langle q_{ij}\rangle_j -c_{home} = \Delta G_i$ (consistency condition) where the average is over
all teams $j \ne i$ (in the second condition a minor correction due to the finite number of teams in a league is neglected).
The most general dependence on $\Delta G_{i,j}$ up to third order, which is compatible
with both conditions, is given by
\begin{equation}
\label{eqg1}
q_{ij} = c_{home} + (\Delta
G_i - \Delta G_j) \cdot [ 1 - c_3(\sigma^2_{\Delta G} + \Delta G_i
\Delta G_j ) ].
\end{equation}
Qualitatively, the $c_3$-term takes into account the possible
effect that in case of very different team strengths (e.g. $\Delta
G_i \gg 0$ and $\Delta G_j \ll 0$) the expected goal difference is even more
pronounced ($c_3 > 0$: too much respect of the weaker team) or
reduced ($c_3 < 0$: tendency of presumption of the better team).
On a phenomenological level this effect is already considered in
the model of, e.g., Ref.\cite{Rue00}. The task is to determine the
adjustable parameter $c_3$ from comparison with actual data. We
first rewrite Eq.\ref{eqg1} as $q_{ij} - (\Delta G_i - \Delta G_j)
- c_{home} = - c_3(\Delta G_i - \Delta G_j)(\sigma^2_{\Delta G} +
\Delta G_i \Delta G_j )$. In case that $\Delta G_{i,j}$ is known
this would correspond to a straightforward regression problem of
$\Delta g_{ij} - (\Delta G_i - \Delta G_j) - c_{home}$ vs.
$-(\Delta G_i - \Delta G_j)(\sigma^2_{\Delta G} + \Delta G_i
\Delta G_j )$. An optimum estimation of the fitness values for a
specific match via Eq.\ref{eqgest} is based on $\Delta
G_{i,j}(N)$, calculated from the remaining $N=33$ matches of both
teams in that season. Of course, the resulting value of
$c_3(N=33)$ is still hampered by finite-size effects, in analogy
to the regression towards the mean. This problem can be solved by
estimating $c_3(N)$ for different values of $N$ and subsequent
extrapolation to infinite $N$ in a $1/N$-representation. Then our estimation
of $c_3$ is not hampered by the uncertainty in the determination of $\Delta
G_{i,j}$. For a
fixed $N \le 30$ the regression analysis is based on 50 different
choices of $\Delta G_{i,j}(N)$ by choosing different subsets of
$N$ matches to improve the statistics. The result is shown in
Fig.2. The estimated error results from performing this analysis
individually for each season. Due to the strong correlations for
different $N$-values the final error is much larger than suggested
by the fluctuations among different data points. The data are
compatible with $c_3 = 0$. {Thus, we have shown that the simple
choice
\begin{equation}
\label{eqfinal}
q_{ij} = \Delta G_i - \Delta G_j + c_{home}
\end{equation}
is the uniquely defined relation (neglecting irrelevant terms of
5th order) to characterize the average outcome of a soccer match.}
In practice the right side can be estimated via Eq.\ref{eqgest}.
{This result implies that $h(t) = \langle (\Delta G_i -
\Delta G_j)(\Delta G_i - \Delta G_k) \rangle = \sigma_{\Delta G}^2
+ \langle \Delta G_j \Delta G_k \rangle$, i.e. $h(t \ne 17) =
\sigma_{\Delta G}^2$ and $h(t = 17) = 2\sigma_{\Delta G}^2$. This
agrees very well with the data.} Furthermore, the variance of the
$q_{ij}$ distribution, i.e. $\sigma_q^2$, is by definition given
by $2 \sigma_{\Delta G}^2 \approx 0.44$.
\begin{figure}[ht]
\includegraphics[width=0.95\linewidth]{fig2.eps}
\caption{Determination of $c_3$ by finite-size
scaling.}
\label{fig.2}
\end{figure}
{\it Determination of $\sigma_f^2$}: The above analysis does not contain any information about the
match-specific fitness relative to $\Delta G_i - \Delta G_j$. For example $f_{ij} > 0$
during a specific match implies that team $i$ plays better than
expected from $q_{ij}$. The conceptual problem
is to disentangle the possible influence of these fitness fluctuations
from the random aspects of a soccer match. The key idea is based on the observation
that, e.g., for $f_{ij} > 0$ team $i$ will play better than expected in both
the first and the second half of the match. In contrast, the random features of a match
do not show this correlation. For the identification of $\sigma_f^2$ one defines $A =
\langle (\Delta g^{(1)}_{ij}/b_1-c_{home})\cdot (\Delta g^{(2)}_{ij}/b_2-c_{home})\rangle_{ij}$
where $\Delta g^{(1),(2)}_{ij}$ is the goal
difference in the first and second half in the specific match,
respectively and $b_{1,2}$ the fraction of goals scored during the first
and the second half, respectively ($b_1 =
0.45; b_2 = 0.55$). Based on Eq.\ref{eqfinal} one has $\sigma_f^2 = A - 2\sigma_{\Delta G}^2$. Actually,
to improve the statistics we have additionally used different partitions of the match (e.g. first and third
quarter vs. second and fourth quarter).
Numerical evaluation yields $\sigma_f^2 = -0.04 \pm 0.06$ where the error bar is estimated
from individual averaging over the different seasons. Thus one obtains in particular
$\sigma_f^2 \ll \sigma^2_q$ which renders match-specific fitness fluctuations irrelevant.
Actually, as shown in \cite{HR09}, one can observe a tendency that teams which have lost 4 times
in a row tend to play worse in the near future than expected by their fitness. Strictly speaking
these streaks indeed reflect minor temporary fitness variations. However, the number of streaks
is very small (less than 10 per season) and, furthermore, mostly of statistical nature. The same holds
for red cards which naturally influence the fitness but fortunately are quite rare. Thus, these extreme
events are interesting in their own right but are not relevant for the overall statistical description.
The negative value of $\sigma_f^2$ points towards anti-correlations between both partitions of the match.
A possible reason is the observed tendency towards a draw, as outlined below.
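A direct transcription of this estimator reads as follows (an illustrative Python sketch; the array layout is our assumption):
\begin{verbatim}
import numpy as np

def sigma_f_squared(dg_half1, dg_half2, c_home, sigma2_dG, b1=0.45, b2=0.55):
    # dg_half1, dg_half2: goal differences (home minus away) of the two
    # halves for all matches; b1, b2: fraction of goals in each half.
    A = np.mean((dg_half1 / b1 - c_home) * (dg_half2 / b2 - c_home))
    return A - 2.0 * sigma2_dG
\end{verbatim}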
\begin{figure}[ht]
\includegraphics[width=0.95\linewidth]{fig3.eps}
\caption{({\bf a}) Distribution of goals per team and match and the Poisson prediction if the different fitness values
are taken into account (solid line). Furthermore, a Poisson estimation is shown where only the home-away asymmetry is taken into account
(broken line). The quality of the predicted distribution is highlighted in ({\bf b}), where the ratio of the estimated and the actual probability is shown.}
\label{fig.3}
\end{figure}
{\it Determination of $r_{ij}$}: The actual number of
goals $g_{i,j}$ per team and match is shown in Fig.3. The error
bars are estimated based on binomial statistics. As discussed
before the distribution is significantly broader than a Poisson
distribution, even if separately taken for the home and away goals
\cite{Greenhough02,janke,janke09}. Here we show that this
distribution can be generated by assuming that goals are scored by
independent Poissonian processes. We proceed in three steps. First,
we use Eq.\ref{eqfinal} to estimate the average goal difference
for a specific match with fitness values estimated from the
remaining 33 matches of each team. Second, we supplement
Eq.\ref{eqfinal} by the corresponding estimator for the sum of the
goals $g_i + g_j$ given by $ \Sigma G_i + \Sigma G_j-
\lambda$. Together with Eq.\ref{eqfinal} this allows us to
calculate the expected number of goals for both teams individually.
Third, we generate for both teams a Poissonian
distribution based on the corresponding expectation values. The
resulting distribution is also shown in Fig.3 and perfectly agrees
with the actual data up to 8 (!) goals. In contrast, if the
distribution of fitness values is not taken into account
significant deviations are present. { Two conclusions can be drawn. First,
scoring goals is a highly random process. Second, the good
agreement again reflects the fact that $\sigma_f^2$ is small because
otherwise an additional broadening of the actual data would be expected.
Thus there is no indication of a possible influence of
self-affirmative effects during a soccer match \cite{janke,janke09}}.
Because of the underlying Poissonian process the value of $\sigma_r^2$ is just given by the average number of
goals per match $(\approx 3)$.
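The corresponding generative picture can be written down explicitly (a minimal Python sketch; the numbers in the example call, e.g. $c_{home}=0.35$, are illustrative and not fitted values):
\begin{verbatim}
import numpy as np

def simulate_match(dG_i, dG_j, sG_i, sG_j, c_home, lam, rng):
    # The expected goal difference and the expected total number of goals
    # fix the Poisson means of the home team (i) and the away team (j).
    diff = dG_i - dG_j + c_home
    total = sG_i + sG_j - lam
    mu_home = max(0.5 * (total + diff), 0.0)
    mu_away = max(0.5 * (total - diff), 0.0)
    return rng.poisson(mu_home), rng.poisson(mu_away)

rng = np.random.default_rng(2)
print([simulate_match(0.8, -0.4, 3.2, 2.9, 0.35, 3.0, rng) for _ in range(5)])
\end{verbatim}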
\begin{figure}[ht]
\includegraphics[width=0.95\linewidth]{fig4n.eps}
\caption{({\bf a}) The probability distribution of the goal difference per match together
with its estimation based on independent Poisson processes of both teams. In
({\bf b}) it is shown for different scores how the ratio of the estimated and the
actual number of draws differs from unity.}
\label{fig.4}
\end{figure}
As already discussed in the literature, the number of draws is
somewhat larger than expected on the basis of independent Poisson distributions; see, e.g., Refs. \cite{Dixon97,Rue00}.
As an application of the present results
we quantify this statement. In Fig.4 we compare the calculated distribution of $\Delta g_{ij}$
with the actual values. The agreement is very good except for
$\Delta g_{ij} = -1,0,1$. Thus, the simple picture of independent goals of the home and the away
team is slightly invalidated. The larger number of draws is balanced by a
reduction of the number of matches with exactly one goal
difference. More specifically, we have calculated the relative
increase of draws for the different results. The main effect is due to the
strong increase of more than 20\% of the 0:0 draws. Note that the present analysis has
already taken into account the fitness distribution for the estimation of this number. Starting from 3:3 the
simple picture of independent home and away goals holds again.
The three major contributions to the final soccer result
display a clear hierarchy, i.e. $\sigma_r^2 : \sigma_q^2:
\sigma_f^2 \approx 10^2:10^1:10^0$.
$\sigma_f^2$, albeit well defined and quantifiable, can be
neglected for two reasons. First, it is small as compared to the
fitness variation among different teams. Second, the uncertainty
in the prediction of $q_{ij}$ is, even at the end of the season,
significantly larger (variance of the uncertainty: $2 \cdot
\sigma_{e,N=33}^2 = 0.12$, see above). {Thus, the limit of predictability of a soccer
match is, beyond the random effects, mainly related to the uncertainty in the
fitness determination rather than to match-specific effects}. Hence, the hypothesis of a
strictly constant team fitness during a season, even on a
single-match level, cannot be refuted even for a data set
comprising more than 20 years. In disagreement with this
observation, soccer reports in the media often stress that a team
played particularly well or badly. Our results suggest that there
exists a strong tendency to relate the assessment too much to the
final result, thereby ignoring the many random aspects
of a match.
In summary, apart from the minor correlations with
respect to the number of draws, soccer is a surprisingly simple
game in statistical terms. Neglecting the minor differences
between a Poissonian and a binomial distribution and the slight
tendency towards a draw, a soccer match is equivalent to two teams
throwing dice. The number 6 means a goal and the number of
attempts of both teams is fixed already at the beginning of the
match, reflecting their respective fitness in that season.
More generally speaking, our approach may serve as a general
framework to classify different types of sports in a
three-dimensional parameter space, expressed by $\sigma_r^2,
\sigma_q^2, \sigma_f^2$. This set of numbers, e.g., determines the
degree of competitiveness \cite{ben2}. For example for matches
between just two persons (e.g. tennis) one would expect that
fitness fluctuations ($\sigma_f^2$) play a much bigger role and
that for sports events with many goals or points (e.g.
basketball) the random effects ($\sigma_r^2$) are much less
pronounced, i.e. it is more likely that the stronger team indeed wins.
Hopefully, the present work stimulates activities to
characterize different types of sports along these lines.
We gratefully acknowledge helpful discussions with B. Strauss, M. Trede, and M. Tolan about this topic.
\section{Introduction}\label{sec:intro}
On April 8, 1974, Henry Aaron hit his $715$th major league home run, sending him past Babe Ruth, who had $714$, on baseball's all-time list. As the event received much advance publicity, the numbers $714$ and $715$ were mentioned by millions of people, including mathematicians whose attention likely deviated from the phenomenal baseball game and was attracted by the beautiful properties of the two consecutive numbers. \\
\indent They first noticed that
\begin{align}
714\cdot 715\ &=\ 510510\ =\ 2\cdot 3\cdot 5\cdot 7\cdot 11\cdot 13\cdot 17\ =\ P_7\nonumber,
\end{align}
where $P_k$ denotes the product of the first $k$ primes. Without too much effort, we can find expressions for $P_1,P_2,P_3,P_4$ as the product of two consecutive integers. However, after $714$ and $715$, no more products turned up for integer pairs below $10^{6021}$. They thus conjectured that $714$ and $715$ form the largest pair of consecutive integers whose product can be written as the product of the first $k$ primes.\\
\indent This conjecture is just the beginning of the beauty of the two integers. In fact, let the unique prime factorization of an integer $n$ be $p_1^{a_1}p_2^{a_2}\cdots p_k^{a_k}$, and define \begin{equation} f(n)\ :=\ a_1p_1+a_2p_2+\cdots +a_kp_k.\end{equation} Nelson, Penney, and Pomerance \cite{NePePo} found that $f(714)=f(715)$. \\
\indent We call the function $f(n)$ the \textit{Ruth-Aaron} function, an integer $n$ with the property $f(n)=f(n+1)$ a \textit{Ruth-Aaron} number, and the pair $n$, $n+1$ a \textit{Ruth-Aaron} pair. A function $f$ is completely additive if $f(ab)=f(a)+f(b)$ holds for all integers $a,b$. It easily follows that the \textit{Ruth-Aaron} function has the nice property of being completely additive. A computer search for all the \textit{Ruth-Aaron} numbers not exceeding $50000$ found just $42$ values, suggesting that their density is $0$.
\renewcommand{\arraystretch}{1.2}
\begin{table}[H]
\centering
\begin{tabular}{cccccc}
\hline
$n$ & $f(n)=f(n+1)$ & $n$ & $f(n)=f(n+1)$ & $n$ & $f(n)=f(n+1)$ \\
\hline
5 & 5 & 5405 & 75 & 26642 &193 \\
8& 6 & 5560 & 150&26649 &66 \\
15& 8 & 5959 & 160 & 28448 &144 \\
77 & 18& 6867 & 122 &28809 &117 \\
125 &15 & 8280 & 40 & 33019 &149 \\
714 & 29 & 8463 & 54 & 37828 &211 \\
948 & 86 & 10647 & 39 & 37881 &93 \\
1330 & 33 & 12351 & 205 & 41261 &64 \\
1520 & 32 & 14587 & 532 & 42624 &57 \\
1862 & 35 & 16932 & 107 & 43215 &118 \\
2491 & 100 & 17080 & 79 &44831 &480 \\
3248 & 44 & 18490 & 93 & 44891 &82 \\
4185 & 45 & 20450 & 421 &47544 &299 \\
4191 & 141 & 24895 & 401 & 49240 &1242 \\
\hline
\end{tabular}
\caption{\textit{Ruth-Aaron} numbers not exceeding $50,000$.}
\end{table}
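The table above can be reproduced with a few lines of code (an illustrative Python sketch, not part of the original computation):
\begin{verbatim}
def f(n):
    # Ruth-Aaron function: sum of the prime factors of n with multiplicity.
    total, p = 0, 2
    while p * p <= n:
        while n % p == 0:
            total += p
            n //= p
        p += 1
    return total + (n if n > 1 else 0)

ruth_aaron = [n for n in range(2, 50001) if f(n) == f(n + 1)]
print(len(ruth_aaron))   # 42
print(ruth_aaron[:6])    # [5, 8, 15, 77, 125, 714]
print(f(714), f(715))    # 29 29
\end{verbatim}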
In fact, in 1978, only a few years after the famous baseball game, Erd\H{o}s and Pomerance \cite{ErPom} proved this result of density $0$. They also established that when $x$ is sufficiently large, the number of \textit{Ruth-Aaron} numbers is at most $C\cdot \frac{x}{(\log x)^{1-\epsilon}}$ for every $0<\epsilon<1$, where $C=C(\epsilon)$ is a constant dependent upon $\epsilon$.\\
\indent In this paper, we extend the results obtained by Erd\H{o}s and Pomerance. As an arithmetic function bearing certain similarities to the Sigma function and the Prime Omega function (see Appendix \ref{app:arith}), $f(n)$ renders several natural directions for generalization, one of which is to raise the prime factors to a power. Hence, we first introduce the $r$-th power \textit{Ruth-Aaron} numbers.
\begin{defn}
An integer $n=\prod_{i=1}^kp_i^{a_i}$ is an $r$-th power \textit{Ruth-Aaron} number if
\begin{align}
f_r(n)\ &=\ f_r(n+1),
\end{align}
where $f_r(n)=\sum_{i=1}^ka_ip_i^r$.
\end{defn}
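The factorization routine from the previous sketch extends immediately to the $r$-th power version (again an illustrative sketch); note, for instance, that the classical pair $714$, $715$ is no longer a solution when $r=2$:
\begin{verbatim}
def f_r(n, r):
    # r-th power Ruth-Aaron function: sum of a_i * p_i**r.
    total, p = 0, 2
    while p * p <= n:
        while n % p == 0:
            total += p ** r
            n //= p
        p += 1
    return total + (n ** r if n > 1 else 0)

print(f_r(714, 1), f_r(715, 1))   # 29 29
print(f_r(714, 2), f_r(715, 2))   # 351 315
\end{verbatim}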
We prove an upper bound on the number of $r$-th power \textit{Ruth-Aaron} numbers up to $x$ in Section \ref{sec:non} (we will state the result later in the introduction). Our result improves that of Erd\H{o}s and Pomerance by a factor of $(\log\log x)^3\log\log\log x/\log x$. Moreover, inspired by Cohen, Cordwell, Epstein, Kwan, Lott, and Miller's study of near perfect numbers \cite{CCEKLM}, we introduce the concept of $k(x)$\textit{-near-Ruth-Aaron} numbers.
\begin{defn}
An integer $n$ is $k(x)$\textit{-near-Ruth-Aaron} when
\begin{align}
|f_r(n)-f_r(n+1)|\ &\leq\ k(x).
\end{align}
Obviously, when $k(x)=0$, $n$ is an $r$-th power \textit{Ruth-Aaron} number.
\end{defn}
As \textit{Ruth-Aaron} numbers \emph{seem} extremely rare,
we weaken the condition and investigate how an absolute difference of a small amount between $f_r(n)$ and $f_r(n+1)$ affects the density in Section \ref{sec:rad}; in particular, rather than requiring $f_r(n)$ to equal $f_r(n+1)$, we merely require them to be ``close.'' Moreover, Nelson, Penney, and Pomerance \cite{NePePo} proved the infinitude of \textit{Ruth-Aaron} numbers under Schinzel's Conjecture, which provides us with another direction for generalization. In Section \ref{sec:infn}, we prove that there are infinitely many real $r$ such that there is an $r$-th power \textit{Ruth-Aaron} number.\\
\indent As our results concern only upper bounds for these numbers, we can and do absorb any set of numbers in the arguments below that is small relative to this bound into our error terms. \\
\indent For future study, we can place \textit{Ruth-Aaron} numbers in linear equations such as the Fibonacci sequence. We can initiate the study of \textit{Rabonacci} numbers, i.e., integers $n$ with $f(n)=f(n-1)+f(n-2)$. Another possibility is to expand the equation $f(n)=f(n+1)$ to $k$-tuples. Inspired by Erd\H{o}s \cite{Er}, we conjecture that for every integer $k\geq 1$, there exists $n,n+1,\dots,n+k$ such that
\begin{align}\label{ktuple}
f(n)\ =\ f(n+1)\ &=\ \cdots \ =\ f(n+k).
\end{align}
In fact, a computer search \cite{Nom} tells us that even when $k=2$, solutions are extremely rare.
\renewcommand{\arraystretch}{1.2}
\begin{table}[H]
\begin{center}
\begin{tabular}{cc}
\hline
$n$ & $f(n)=f(n+1)=f(n+2)$\\
\hline
$417\ 162 $ & $533$\\
$6\ 913\ 943\ 284$ & $5428$\\
\hline
\end{tabular}
\caption{List of \textit{Ruth-Aaron} triples below $10^{10}$.}
\end{center}
\end{table}
Due to the rarity of \textit{Ruth-Aaron} triples, following the previous conjecture, we propose that the number of solutions to \eqref{ktuple} is finite for any $k \ge 2$. \\
\indent Although \textit{Ruth-Aaron} numbers are named after two baseball stars (instead of a mathematician like most functions and theorems are) and thus have a more or less recreational origin, their study leads us to a variety of great mathematics, which all play key roles in this paper:
\begin{itemize}
\item the sieve method \cite{HalRi},
\item the Prime Number Theorem \cite{Wei},
\item the Chinese Remainder Theorem \cite{DiPeiSal},
\item De Bruijn's estimation on integers with regards to the size of their largest prime factors \cite{Bru},
\item the Catalan Conjecture \cite{Rib},
\item the linear independence of radicals, and
\item inequalities such as Cauchy-Schwarz and Jensen's.
\end{itemize}
The \textit{Ruth-Aaron} numbers' profound connection to such mathematics not only once again proves the beauty of their properties, but should also justify why they merit a closer inspection.
\subsection{Notations and Definitions}
We set some notation and define the key objects of this paper.
\begin{defn}
Numbers $f_1,f_2,\dots, f_n$ are linearly independent over $\mathbb{Z}$ if, for coefficients $a_i\in\mathbb{Z}$, the following equation
\begin{align}
a_1f_1+a_2f_2+\cdots+a_nf_n\ &=\ 0
\end{align}
holds only when all $a_i=0$.
\end{defn}
We use the following notations.
\begin{itemize}
\item We adopt a notation similar to the $\mathcal{A}_{\mathbf{a}}$ used by Luca and Stănică \cite{LuSta}, to denote linear equations with the \textit{Ruth-Aaron} function.\\
Let $k\geq 1$ be a positive integer, and let $\mathbf{a}=(a_0,a_1,\dots,a_k)$ be a vector with integer components not all zero. Put $\mathcal{R}^r_{\mathbf{a}}(x)$ for the set of all integers $n$ not exceeding $x$ such that
\begin{align}
\sum_{i=0}^ka_if_r(n+i)\ &=\ 0.
\end{align}
Then it is not hard to notice that the set of all integers $n$ up to $x$ with $f_r(n)=f_r(n+1)$ coincides with $\mathcal{R}^r_{\mathbf{(1,-1)}}(x)$.
\item We use $P(n)$ to denote the largest prime factor of integer $n$.
\item We use $A(x,t)$ to denote the number of $n\leq x$ with $P(n)\geq x^t$, and $a(x,t)$ to denote the fraction of $n\leq x$ with $P(n)\geq x^t$.
\item We use $\Psi(x,y)$ to denote the number of $n$ up to $x$ with prime factors no greater than $y$.
\item Big $O$ notation: We write $k(x) = O(g(x))$ or $k(x) \ll g(x)$ if there exists a positive constant $C$ such that $k(x) < C\cdot g(x)$ for all sufficiently large $x$.
\item Little $o$ notation: We write $k(x) = o(g(x))$ if $\lim_{x\to\infty} k(x)/g(x) = 0$.
\item We write $k(x)\sim g(x)$ if $\lim_{x\to\infty}k(x)/g(x)=1$.
\item We denote by $E$ the constant
\begin{align}
E\ &=\ \sum_{k=1}^\infty\frac{\left(\mu(k)\right)^2}{k\varphi(k)}\ =\ \frac{\zeta(2)\zeta(3)}{\zeta(6)}\nonumber\\
&=\ 1.9435964368\dots,
\end{align}
where $\zeta(x)$ is the Riemann Zeta function, $\mu(k)$ the M\"obius function, and $\varphi(x)$ the Euler Totient function.
\end{itemize}
\begin{rek}
Our estimations of the number of $n$ at most $x$ that satisfy certain conditions are mostly expressed as $O(g(x))$, where $g(x)$ is a function of $x$. At the expense of tedious constant chasing we could make all the multiplicative constants explicit, but as we are concerned with the decay rate with respect to $x$ there is no need to do so, and we choose big $O$ notation for the sake of readability. Many of the approximations and scalings presented do not appear optimal; there is no need to sharpen them, since the corresponding terms are smaller than our main term.
\end{rek}
\subsection{Main Results}
We obtained the following main results as well as a few others which are not listed below but are introduced later with the proofs of the theorems. These results will be proved using lemmas from Section \ref{sec:pre} and important theorems and conjectures in Appendix \ref{app:important}.
\begin{restatable}[]{thm}{differencethm}\label{thm:ruthaarondifference}
For real $r\geq 1$ and every $\epsilon$ with $0<\epsilon<1$, let $\delta_0=\delta_0(\epsilon)$ be a constant dependent upon $\epsilon$. Let $\delta$ be subject to the following conditions:
\begin{align}\label{allconditionsdelta}
\delta\ &\leq\ \delta_0r\epsilon/14\nonumber\\
0\ <\ \delta\ &<\ \delta^2_0\epsilon/4E^2A\nonumber\\
\delta\ &<\ \delta_0/4,
\end{align}
where $A$ (see \cite{ErPom}) is a fixed constant around $8$. Then $k(x)$-\textit{near-Ruth-Aaron} numbers have density $0$ for any $k(x)$ such that
\begin{align}
k(x)\ &\leq\ (x^{r\delta}-x^{-\delta}-1)x^{r/\log\log x}.
\end{align}
\end{restatable}
This result indicates that when $x$ is sufficiently large, if the difference between $f_r(n)$ and $f_r(n+1)$ is essentially less than $x^{\delta'}$, where $\delta'$ is arbitrarily small, then the density of $n$ under this condition is still $0$. Hence, not only are \textit{Ruth-Aaron} numbers rare, $k(x)$-\textit{near-Ruth-Aaron} numbers are also rare for $k(x)$ at most a small power of $x$. In particular, if $k(x)$ is a constant or a power of the logarithm, $k(x)$\textit{-near-Ruth-Aaron} numbers are likewise very rare.\\
\indent Moreover, recall that $\#\mathcal{R}_{(1,-1)}^r(x)$ denotes the number of integers up to $x$ with $f_r(n)=f_r(n+1)$. We are able to obtain the following new results on $r$-th power \textit{Ruth-Aaron} numbers:
\begin{subtheorem}{thm}\label{thm:generalizednumberofn}
\begin{restatable}[]{thm}{negativer}
When $r=-1$,
\begin{align}
\#\mathcal{R}^{-1}_{(1,-1)}(x)\ &\ll\ x^{2(\log\log x/\log x)^{1/2}}\ =\ \exp\left(2(\log\log x\log x)^{1/2}\right),
\end{align}
which means that for every $\epsilon>0$ and all sufficiently large $x$, we have $\#\mathcal{R}^{-1}_{(1,-1)}(x)\ll x^{\epsilon}$.
In fact, when $r$ is negative
\begin{align}
\#\mathcal{R}^{r}_{(1,-1)}(x)\ &\ll\ x^{O\left((\log\log x/\log x)^{r/(r-1)}\right)}.
\end{align}
\end{restatable}
\begin{restatable}[]{thm}{rationalr}
When $r$ is rational but not an integer,
\begin{align}
\#\mathcal{R}^r_{(1,-1)}(x)\ &=\ 0.
\end{align}
\end{restatable}
\begin{restatable}[]{thm}{rgeqone}
When $r\geq 1$ is real,
\begin{align}
\#\mathcal{R}^r_{(1,-1)}(x)\ = \ O\left(\frac{x\log\log\log x(\log\log x)^3}{(\log x)^2}\right).
\end{align}
\end{restatable}
\end{subtheorem}
Erd\H{o}s and Pomerance \cite{ErPom} conjectured that there are infinitely many \textit{Ruth-Aaron} numbers. While their conjecture is still open, we prove a related result, namely the infinitude of $r$ for which $\#\mathcal{R}_{(1,-1)}^r(x)>0$.
\begin{restatable}[]{thm}{infinitude}\label{thm:infinitudeofr>0}
There are infinitely many real numbers $r$ such that $\#\mathcal{R}_{(1,-1)}^r(x)>0$.
\end{restatable}
\subsection{Outline}
In Section \ref{sec:pre} we present a few preliminary results that provide a general overview for the problems we study, and which will be used extensively throughout the paper. Then, we discuss the proofs of Theorems \ref{thm:ruthaarondifference}, \ref{thm:generalizednumberofn}, and \ref{thm:infinitudeofr>0} (Sections \ref{sec:rad}, \ref{sec:non}, and \ref{sec:infn}). We conclude with possible future research directions in Section \ref{sec:fut}.
\section{Preliminary Results}\label{sec:pre}
We begin by generalizing some results from Erd\H{o}s and Pomerance \cite{ErPom} which will be useful in proving our main theorems. Lemma \ref{lem:atxinfty} and Corollary \ref{cor: epsilonx3} are introduced for the sake of proving Lemma \ref{thm:xdeltapn}, as two cases in terms of the size of $P(n)$ are taken care of by the corollary. Lemma \ref{lem:eupevandu0pv0} and Lemma \ref{lem:plnp} are frequently used in the sieve method to bound various sums over reciprocals of primes. Lemma \ref{lem:generalizedfnpn} and Lemma \ref{lem:lowerboundofp} introduce an upper bound for $f_r(n)$ with regard to $P(n)^r$ and a lower bound for the size of $P(n)$. These results are used extensively throughout the paper. \\ \indent Erd\H{o}s and Pomerance \cite{ErPom} first introduced a well-known result due to Dickman \cite{Di} and others which bounds how often the largest prime factor of $n\leq x$ is at least $n^t$.
\begin{lem}\label{lem:atxinfty}
For every $x>0$ and every $t$, $0 \leq t \leq 1$, let $A(x,t)$ denote the number of $n \leq x$ with $P(n) \geq x^t$. The function
\begin{align}\label{ataxt}
a(t)\ :=\ \lim_{x\to\infty} x^{-1}A(x,t)
\end{align}
is defined and continuous on $[0,1]$.
\end{lem}
\begin{cor}\label{cor: epsilonx3}
From Lemma \ref{lem:atxinfty} we obtain that there exists $\delta_0=\delta_0(\epsilon)$ sufficiently small $(0< \delta_0\leq \fof)$ such that for large $x$, the number of $n\leq x$ with
\begin{align}\label{pnxdelta0}
P(n)\ <\ x^{\delta_0}\ \textrm{or}\ x^{1/2-\delta_0}\ \leq P(n)\ <\ x^{1/2+\delta_0}
\end{align}
is less than $\epsilon x/3$.
\end{cor}
\begin{proof}
Let $\epsilon>0$. From Lemma \ref{lem:atxinfty}, $a(t)=\lim_{x\to\infty}x^{-1}A(x,t)$, which means the fraction of $n\leq x$ such that $P(n)< x^{\delta_0}$ converges to $a(0)-a(\delta_0)$ when $x\to\infty$. Similarly, the fraction of $n$ which satisfy $x^{1/2-\delta_0}\leq P(n)<x^{1/2+\delta_0}$ converges to $a(1/2-\delta_0)-a(1/2+\delta_0)$.\\ \\
As defined, $a(x,t)$ is the fraction of $n\leq x$ with $P(n)\geq x^t$. Consider $P(n)\leq x^{\delta_0}$ first. We can find $\delta_1$ and $X_1$ such that $\forall\ \delta_0\leq \delta_1$ and $x\geq X_1$, we have $a(x,0)-a(x,\delta_0)\leq \epsilon/8$ and within $\epsilon/2020$ of $a(0)-a(\delta_0)$. For the second condition, because $a(t)$ is continuous, given any $\epsilon$ we can always find $\delta_2$ and $X_2$ such that if $\delta_0$ is at most $\delta_2$ and $x$ is at least $X_2$ then $a(x,1/2-\delta_0)-a(x,1/2+\delta_0)$ is at most $\epsilon/8$ and within $\epsilon/2020$ of $a(1/2-\delta_0)-a(1/2+\delta_0)$. We take $\delta=\min(\delta_1,\delta_2)$ and $X=\max(X_1,X_2)$, then the fraction of $n\leq x$ satisfying one of the two conditions is no greater than $\epsilon/3$, which means the number of $n$ is no greater than $\epsilon x/3$.
\end{proof}
Lemmas \ref{lem:eupevandu0pv0} and \ref{lem:plnp} are used frequently in later sections to bound various sums over reciprocals of primes.
\begin{lem}\label{lem:eupevandu0pv0}
We have
\begin{eqnarray}\label{eqnarray: eupev}
\sum_{e^u\leq p\leq e^v}\frac{1}{p} & \ < \ & \log(v/u)+\frac{C}{u}\nonumber\\
\sum_{u_0\leq p\leq v_0}\frac{1}{p} & \ < \ & \frac{C + \log(v_0/u_0)}{\log u_0}. \end{eqnarray}
\end{lem}
\begin{proof}
We use the Abel Partial Summation Formula:
\begin{align}\label{abelsum}
\sum_{1\leq x\leq n}C(x)f(x)\ =\ C(n)f(n)-\int_1^nC(t)f'(t)dt,
\end{align}
and the Prime Number Theorem (weaker version): if $\pi(n)$ is the number of primes at most $n$, then
\begin{align}\label{pnumthm}
\pi(n)\ =\ \frac{n}{\log n}+O\left(\frac{n}{(\log n)^2}\right).
\end{align}
First of all, we prove that \begin{align}\label{lnpp}
\sum_{p\leq n}\frac{\log p}{p}\ =\ \log n+O(1).
\end{align}
We know that the power of $p$ in $n!$ equals $\lfloor\frac{n}{p}\rfloor+\lfloor\frac{n}{p^2}\rfloor+\lfloor\frac{n}{p^3}\rfloor+\cdots$. Thus, we have
\begin{align}
\log (n!)\ &=\ \sum_{p\leq n}\log p\left(\left\lfloor\frac{n}{p}\right\rfloor+\left\lfloor\frac{n}{p^2}\right\rfloor+\left\lfloor\frac{n}{p^3}\right\rfloor+\cdots\right)\nonumber\\
&\leq n\cdot\sum_{p\leq n}\frac{\log p}{p}+\sum_{p\leq n}\log p\left(\frac{n}{p^2}+\frac{n}{p^3}+\cdots\right)\nonumber\\
&=\ n\cdot\sum_{p\leq n}\frac{\log p}{p}+n\sum_{p\leq n}\frac{\log p}{p(p-1)}.
\end{align}
Since $\sum_{p\leq n} \frac{\log p}{p(p-1)}< \sum_{k=2}^\infty \frac{\log k}{k(k-1)}$, which converges, we have $n\sum_{p\leq n}\frac{\log p}{p(p-1)} = O(n)$. Therefore,
\begin{align}
\log(n!)\ =\ n\cdot\sum_{p\leq n}\frac{\log p}{p}+O(n).
\end{align}
On the other hand,
\begin{align}
\log(n!)\ &=\ \sum_{k=1}^n\log k\ =\ \int_1^n\log x \ dx +O(\log n)\nonumber\\
&=\ n\log n-n+O(\log n)\nonumber\\
&=\ n\log n+O(n).
\end{align}
Comparing the two results, we obtain \eqref{lnpp}:
\begin{align}
\sum_{p\leq n}\frac{\log p}{p}\ =\ \log n+O(1).
\end{align}
We now use the Abel Partial Summation Formula. Let $f(x) =\frac{1}{\log x}$ and $C(x) =\sum_{p\leq n}\frac{\log p}{p}$, then we have
\begin{align}
\sum_{p\leq n}\frac{1}{p}\ &=\ \frac{C(n)}{\log n}+\int_1^n\frac{C(t)}{t(\log t)^2}dt\nonumber\\
&=\ 1+\frac{O(1)}{\log n}+\int_1^n\frac{1}{t\log t}dt+\int_1^n\frac{O(1)}{t(\log t)^2}dt\nonumber\\
&=\ \log\log n+O\left(\frac{1}{\log n}\right)+1.
\end{align}
Then
\begin{align}
\sum_{e^u\leq p\leq e^v}\frac{1}{p}\ &=\ \sum_{p\leq e^v}\frac{1}{p}-\sum_{p\leq e^u}\frac{1}{p}+\frac{1}{e^u}\nonumber\\
&=\ \log(v/u)+O\left(\frac{1}{u}\right)\nonumber\\
&\leq\ \log (v/u)+\frac{O(1)}{u}.
\end{align}
Therefore, we have \begin{align}\label{eupevstrong}
\sum_{e^u\leq p\leq e^v}\frac{1}{p}\ \leq\ \frac{O(1)+u\log(v/u)}{u}.
\end{align}
Let $u_0=e^u, v_0=e^v$, then
\begin{align}\label{eq:u0pv0strong}
\sum_{u_0\leq p\leq v_0}\frac{1}{p}\ \leq\ \log\left(\frac{\log v_0}{\log u_0}\right)+\frac{C}{\log u_0}.
\end{align}
We now prove that $\log\left(\frac{\log v_0}{\log u_0}\right)+\frac{C}{\log u_0}< \frac{C+\log (v_0/u_0)}{\log u_0}$. Since we have $t<e^{t-1}$ for any $t>1$,
\begin{align}
\frac{v}{u}\ &<\ e^{v/u-1}
\nonumber\\
\log (v/u)\ &<\ \frac{v}{u}-1
\nonumber\\
u\log(v/u)\ &<\ (v-u)\nonumber\\
\log u_0\log\left(\frac{\log v_0}{\log u_0}\right)\ &<\ \log(v_0/u_0).
\end{align}
Therefore, we have
\begin{align}\label{eq:u0pv0weak}
\sum_{u_0\leq p\leq v_0}\frac{1}{p}\ <\ \frac{C+\log(v_0/u_0)}{\log u_0}.
\end{align}
\end{proof}
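For the interested reader, the following Python sketch (illustrative only, and not used anywhere in the proofs) compares the prime sum $\sum_{u_0\leq p\leq v_0}1/p$ with the bounds \eqref{eq:u0pv0strong} and \eqref{eq:u0pv0weak}; the value $C=2$ is an illustrative stand-in for the unspecified absolute constant.
\begin{verbatim}
# Numerical sanity check of the two bounds in the lemma above (sketch only).
import math

def primes_up_to(n):
    # simple sieve of Eratosthenes
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
    return [i for i in range(2, n + 1) if sieve[i]]

C = 2.0  # illustrative; the lemma only asserts the existence of some constant
for u0, v0 in [(10**2, 10**4), (10**3, 10**6)]:
    s = sum(1.0 / p for p in primes_up_to(v0) if p >= u0)
    strong = math.log(math.log(v0) / math.log(u0)) + C / math.log(u0)
    weak = (C + math.log(v0 / u0)) / math.log(u0)
    print(u0, v0, round(s, 4), round(strong, 4), round(weak, 4))
\end{verbatim}
In both test ranges the prime sum lies below the bound \eqref{eq:u0pv0strong}, which in turn lies below the weaker bound \eqref{eq:u0pv0weak}, matching the comparison made in the remark below.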
\begin{rek}
We have two inequalities in Lemma \ref{lem:eupevandu0pv0}: \eqref{eq:u0pv0weak} is weaker than \eqref{eq:u0pv0strong}, since we replace $\log(v/u)$ by $v/u-1$. It turns out that the weaker inequality \eqref{eq:u0pv0weak} is tight enough for most of our applications, and we will invoke the stronger form \eqref{eupevstrong} only when necessary.
\end{rek}
\begin{lem}\label{lem:plnp}
We have \begin{eqnarray}
\sum_{p\geq t}\frac{1}{p\log p}\ =\ \frac{1}{\log t}+O\left(\frac{1}{(\log t)^2}\right).
\end{eqnarray}
\end{lem}
\begin{proof}
We use Abel's Partial Summation Formula. Let
\begin{align}\label{staxft}
S(t)\ &=\ \sum_{p\geq t}\frac{1}{p\log p}=\ \sum_{p\geq t}\frac{\log p}{p}\cdot(\log p)^{-2}.\nonumber\\
A(x)\ &=\sum_{p\leq x}\frac{\log p}{p}-\sum_{p\leq t}\frac{\log p}{p}=\ \log x-\log t+O(1).\nonumber\\
f(t)\ &=\ \frac{1}{(\log t)^2},\
f'(t)\ =\ -\frac{2}{t(\log t)^3}.
\end{align}
Because $A(x)$ concerns primes in the interval $(t,x]$, the following sum differs from the one in \eqref{staxft} by at most the term $\frac{1}{t\log t}$, which can be absorbed in the error; thus, it is sufficient to study $S(t)$:
\begin{align}\label{soft}
S(t)\ &=\ \frac{1}{t\log t}+ A(\infty)f(\infty)-\int_{t}^\infty A(x)f'(x)\ dx\nonumber\\
&=\ \frac{1}{t\log t}+2\int_{t}^\infty(\log x-\log t+O(1))\frac{1}{x(\log x)^3}\ dx \nonumber\\
&=\ \frac{1}{t\log t}+2\int_t^\infty\frac{1}{x(\log x)^2}\ dx-2\int_t^\infty(\log t-O(1))\frac{1}{x(\log x)^3}\ dx\nonumber\\
&=\ \frac{1}{t\log t}- 2\frac{1}{\log x}\bigg|_t^\infty+\frac{\log t}{(\log x)^2}\bigg|_t^\infty-\frac{O(1)}{(\log x)^2}\bigg|_t^\infty\nonumber\\
&=\ \frac{2}{\log t}-\frac{1}{\log t}+O\left(\frac{1}{(\log t)^2}\right)\nonumber\\
&=\ \frac{1}{\log t}+O\left(\frac{1}{(\log t)^2}\right),
\end{align}
which completes our proof.
\end{proof}
\begin{lem}\label{lem:generalizedfnpn}
If $P(n)\geq 5$ and $r\geq 1$, we have
\begin{align}\label{frnpnr}
f_r(n)\ &\leq\ P(n)^r\cdot\frac{\log n}{\log P(n)}.
\end{align}
\end{lem}
\begin{proof}
Consider the function $g(x)=\frac{x^r}{\log x}$, where $r$ is a real number and $x\geq e^{1/r}$, then
\begin{align}
g'(x)\ &=\ \frac{rx^{r-1}\log x-x^r\cdot 1/x}{(\log x)^2}\nonumber\\
&=\ \frac{x^{r-1}(r\log x-1)}{(\log x)^2}\nonumber\\
&>\ 0,
\end{align}
which means $g(x)$ increases when $x\geq e^{1/r}$. Since $r\geq 1$, every odd prime lies in this range, and for $p_i=2$ one checks directly that $2^r/\log 2\leq 5^r/\log 5\leq P(n)^r/\log P(n)$ whenever $r\geq 1$ and $P(n)\geq 5$; hence $p_i^r/\log p_i\leq P(n)^r/\log P(n)$ for every prime $p_i\mid n$. Without loss of generality, let $p_1=P(n)$, then we have
\begin{align}
f_r(n)\ &=\ \sum_{i=1}^ka_ip_i^r\nonumber\\
&\leq\ \sum_{i=1}^ka_ip_1^r\cdot\frac{\log p_i}{\log p_1}\nonumber\\
&=\ \frac{P(n)^r}{\log P(n)}\cdot\sum_{i=1}^k\log p_i^{a_i}\nonumber\\
&=\ P(n)^r\cdot\frac{\log n}{\log P(n)},
\end{align}
which completes our proof.
\end{proof}
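The inequality of Lemma \ref{lem:generalizedfnpn} is easy to test numerically. The following Python sketch (ours, purely illustrative) computes $f_r(n)$ by trial division and verifies \eqref{frnpnr} for $r=2$ and all $n<10^4$ with $P(n)\geq 5$.
\begin{verbatim}
# Check f_r(n) <= P(n)^r * log(n)/log(P(n)) for r = 2 and P(n) >= 5 (sketch).
import math

def factorize(n):
    # trial division; returns the list of (prime, exponent) pairs of n
    factors, d = [], 2
    while d * d <= n:
        e = 0
        while n % d == 0:
            n //= d
            e += 1
        if e:
            factors.append((d, e))
        d += 1
    if n > 1:
        factors.append((n, 1))
    return factors

def f_r(n, r):
    return sum(a * p ** r for p, a in factorize(n))

r = 2
for n in range(2, 10**4):
    P = factorize(n)[-1][0]          # largest prime factor of n
    if P >= 5:
        assert f_r(n, r) <= P ** r * math.log(n) / math.log(P) + 1e-9
print("bound verified for all n < 10^4 with P(n) >= 5, r =", r)
\end{verbatim}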
\begin{lem}\label{lem:lowerboundofp}
The number of $n$ up to $x$ not satisfying
\begin{align}
P(n)\ &>\ x^{1/\log\log x}\ \textrm{and } P(n+1)\ >\ x^{1/\log\log x}
\end{align}
is at most $O\left(\frac{x}{(\log x)^2}\right)$.
\end{lem}
\begin{proof} Let $\Psi(x,y)$ denote the number of $n$ up to $x$ all of whose prime factors are no greater than $y$, and set $u = \log x / \log y$.
A result from De Bruijn \cite{Bru} states that if $(\log x)^2<y\leq x^{1/3}$ then
\begin{align}
\Psi(x,y)\ &<\ x(\log y)^2\exp\left(-u\log u-u\log\log u+O(u)\right).
\end{align}
We replace $y$ with $x^{1/\log\log x}$ and find
\begin{align}
\Psi(x,y)\ &<\ x\left(\frac{\log x}{\log\log x}\right)^2\cdot\exp\left(O(\log\log x)-\log\log x(\log\log\log x+\log\log\log\log x)\right)\nonumber\\
&<\ x\left(\frac{\log x}{\log\log x}\right)^2\cdot\exp\left(O(-\log\log x\log\log\log x)\right)\nonumber\\
&<\ O\left(x\left(\frac{\log x}{\log\log x}\right)^2\cdot\frac{1}{(\log x)^{\log\log\log x}}\right)\nonumber\\
&<\ O\left(\frac{x}{(\log x)^2}\right).
\end{align}
Therefore, the number of $n$ for which
\begin{align}\label{loglogx}
P(n)\ >\ x^{1/\log\log x}\textrm{ and } P(n+1)\ >\ x^{1/\log\log x}
\end{align}
doesn't hold is $O(x/(\log x)^2)$.
\end{proof}
\begin{rek}
We hence know that for the majority of integers no greater than $x$, $P(n) > x^{1/\log\log x}$, which means $P(n)$ is typically larger than $\log x$ to any power. Moreover, De Koninck and Ivić \cite{KonIv} showed that the sum of largest prime factors of integers up to $x$ is of size $x^2/\log x$, which means the average largest prime factor of integers up to $x$ is of size $x/\log x$. When $x$ is sufficiently large, $x/\log x > x^{\delta}$ for any $0<\delta<1$. Therefore, the average largest prime factor is greater than $x$ to any power less than $1$, indicating that a considerable number of $P(n)$ are very large.
\end{rek}
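As a quick empirical illustration of Lemma \ref{lem:lowerboundofp} (a sketch only, not used in any argument), the following Python snippet computes the threshold $x^{1/\log\log x}$ for $x=10^5$ and counts the $n\leq x$ whose largest prime factor falls below it; for such a modest $x$ the exceptional set is still visible, and it thins out as $x$ grows.
\begin{verbatim}
# Count n <= x with P(n) <= x^(1/log log x)  (illustrative sketch).
import math

def largest_prime_factor(n):
    d, largest = 2, 1
    while d * d <= n:
        while n % d == 0:
            largest, n = d, n // d
        d += 1
    return max(largest, n) if n > 1 else largest

x = 10**5
threshold = x ** (1.0 / math.log(math.log(x)))
exceptions = sum(1 for n in range(2, x + 1)
                 if largest_prime_factor(n) <= threshold)
print("threshold  =", round(threshold, 2))
print("exceptions =", exceptions, "out of", x - 1)
\end{verbatim}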
\section{Proof of Theorem \ref{thm:ruthaarondifference}}\label{sec:rad}
In this section we prove Theorem \ref{thm:ruthaarondifference}. We first introduce the following two lemmas generalized from Erd\Horig{o}s and Pomerance \cite{ErPom}, the first of which indicates that the largest prime factors of two consecutive integers are usually far apart, and the second proves that $P(n)^r$ is often the dominating element that determines the size of $f_r(n)$.
\begin{lem}\label{thm:xdeltapn}
For each $0< \epsilon<1$, there is a $\delta>0$ such that for sufficiently large $x$, the number of $n\leq x$ with
\begin{align}\label{deltapn}
x^{-\delta}\ <\ P(n)/P(n+1)\ <\ x^{\delta}
\end{align}
is less than $\epsilon x$.
\end{lem}
\begin{proof}
We know from Corollary \ref{cor: epsilonx3} that $\exists\ \delta_0=\delta_0(\epsilon)$ sufficiently small $(0< \delta_0\leq 1/4)$ such that for large $x$, the number of $n\leq x$ with
\begin{align}\label{pnxdelta0}
P(n)\ <\ x^{\delta_0}\ \textrm{or}\ x^{1/2-\delta_0}\ \leq P(n)\ <\ x^{1/2+\delta_0}
\end{align}
is less than $\epsilon x/3$. Now we consider the remaining cases:
\begin{align*}
&\textrm{(i) } x^{\delta_0}\ \leq\ P(n)\ <\ x^{1/2-\delta_0}\\
&\textrm{(ii) } P(n)\ \geq \ x^{1/2+\delta_0}.
\end{align*}
We will show that for every $0<\epsilon<1$, there exists $\delta$ such that the number of $n\leq x$ satisfying one of (i) and (ii) while \eqref{deltapn} holds is less than $\epsilon x/3$. \ \\
\noindent\emph{We consider (i) first.} We know that for each pair of primes $p,q$, there are at most $1+\lfloor\frac{x}{pq}\rfloor$ choices\footnote{This is because the number of $n\leq x$ such that $P(n)=p,P(n+1)=q$ is bounded by the number of $n\leq x$ such that $n\ \equiv\ 0\ (\textrm{mod }p)$ and $n\ \equiv\ -1\ (\textrm{mod }q)$. By the Chinese Remainder Theorem, these $n$ lie in a single residue class modulo $pq$, which means there are at most $1+\left\lfloor\frac{x}{pq}\right\rfloor$ such $n\leq x$. } of $n\leq x$ for which $P(n)=p,\ P(n+1)=q$. For inequality \eqref{deltapn} to hold, we need $px^{-\delta}< q< px^{\delta}$. Then for large $x$, the number of $n\leq x$ in case (i) for which \eqref{deltapn} holds is at most (we may assume $0< \delta< \delta_0/4$)
\begin{align}\label{1xpq}
\sum_{\substack{x^{\delta_0}\leq p < x^{1/2-\delta_0}\\px^{-\delta}<q<px^{\delta}}}\left(1+\left\lfloor\frac{x}{pq}\right\rfloor\right)\ &<\ x^{1-2\delta_0+\delta}+x\sum_{x^{\delta_0}\leq p<x^{1/2-\delta_0}}\frac{1}{p}\sum_{px^{-\delta}<q<px^{\delta}}\frac{1}{q}\nonumber\\
\ &<\ x^{1-2\delta_0+\delta}+x\sum_{x^{\delta_0}\leq p<x^{1/2-\delta_0}}\frac{1}{p
}\cdot\frac{C+\log (px^\delta/ px^{-\delta})}{\log(px^{-\delta})}\textrm{ (Lemma \ref{lem:eupevandu0pv0})}\nonumber\\
&<\ x^{1-2\delta_0+\delta}+x\sum_{x^{\delta_0}\leq p<x^{1/2-\delta_0}}\frac{1}{p}\cdot\frac{C+\log x^{2\delta}}{\log(px^{-\delta})}.
\end{align}
Moreover, since $p\geq x^{\delta_0}>x^{4\delta}>x^{3\delta}$, we have $x^{-\delta}>p^{-1/3}$,
\begin{align}\label{clogx2deltap1}
x\sum_{x^{\delta_0}\leq p<x^{1/2-\delta_0}}\frac{1}{p}\cdot\frac{C+\log(x^{2\delta})}{\log (px^{-\delta})}\ &<\ x\sum_{x^{\delta_0}\leq p<x^{1/2-\delta_0}}\frac{1}{p}\cdot\frac{C+\log(x^{2\delta})}{\log (p^{2/3})}\nonumber\\
&=\ x\sum_{x^{\delta_0}\leq p<x^{1/2-\delta_0}}\frac{1}{p}\cdot\frac{\frac{3}{2}C+\log(x^{3\delta})}{\log p}.
\end{align}
When $x$ is sufficiently large, we have
\begin{align}\label{clogx2deltap2}
x\sum_{x^{\delta_0}\leq p<x^{1/2-\delta_0}}\frac{1}{p}\cdot\frac{C+\log(x^{2\delta})}{\log (px^{-\delta})}\ &<\ x\sum_{x^{\delta_0}\leq p<x^{1/2-\delta_0}}\frac{1}{p}\cdot\frac{1.1\log(x^{3\delta})}{\log p}\nonumber\\
&<\ \frac{3.3}{3}\cdot3\delta x\log x\sum_{x^{\delta_0}\leq p<x^{1/2-\delta_0}}\frac{1}{p\log p}\nonumber\\
&<\ \frac{4}{3}\cdot 3\delta x\log x\frac{1}{\delta_0\log x}\textrm{ (Lemma \ref{lem:plnp})}\nonumber\\
&<\ 4x\delta/\delta_0.
\end{align}
Therefore, we have
\begin{align}\label{4deltax}
\sum_{\substack{x^{\delta_0}\leq p<x^{1/2-\delta_0}\\px^{-\delta}<q<px^{\delta}}}\left(1+\left\lfloor\frac{x}{pq}\right\rfloor\right)\ &<\ x^{1-2\delta_0+\delta}+4\delta x/\delta_0
\end{align}
If we choose $\delta$ such that
\begin{align}\label{epsilon13}
\delta<\delta_0\epsilon/13,
\end{align}
then \eqref{4deltax} implies there are less than $\epsilon x/3$ choices of $n$. \\ \\
\noindent \emph{Now we consider case (ii).} Let $a=n/P(n)$ and $b=(n+1)/P(n+1)$. Then $a< x/x^{1/2+\delta_0}=x^{1/2-\delta_0}$, and because \eqref{deltapn} holds, $b\leq \lfloor x^{1/2-\delta_0+\delta}\rfloor+1$ ($b=x^{1/2-\delta_0+\delta}$ is possible only when $x^{1/2-\delta_0+\delta}\in\mathbb{Z}$)\footnote{This is because $b=\frac{n+1}{P(n+1)}< \frac{x+1}{P(n)}\cdot x^{\delta}< \frac{x+1}{x^{1/2+\delta_0}}\cdot x^{\delta}=\frac{x+1}{x}\cdot x^{1/2-\delta_0+\delta}$, meaning that $b\leq \lfloor\frac{x+1}{x}x^{1/2-\delta_0+\delta}\rfloor=\lfloor x^{1/2-\delta_0+\delta}+x^{-1/2-\delta_0+\delta}\rfloor$.
Because $-1/2-\delta_0+\delta< -1/2$, when $x$ is sufficiently large, we have $\lfloor x^{1/2-\delta_0+\delta}+x^{-1/2-\delta_0+\delta}\rfloor\leq \lfloor x^{1/2-\delta_0+\delta}\rfloor+1$; therefore, $b\leq \lfloor x^{1/2-\delta_0+\delta}\rfloor+1$.}, and $x^{-\delta}/2< a/b=\frac{nP(n+1)}{(n+1)P(n)}< 2x^\delta$. Meanwhile, when $a,b$ are fixed, the number of $n\leq x$ for which $n=aP(n),\ n+1=bP(n+1)$ is at most the number of primes $p\leq x/a$ such that $(ap+1)/b$ is a prime. B\'ezout's Identity (or the Euclidean algorithm) gives that for integers $a,b$ there always exist $m,n\in\mathbb{Z}$ such that $ma+nb=\gcd(a,b)$. Now, because $\gcd(a,b)=1$ and $2\ |\ ab$, the congruence $ap\equiv -1\ (\textrm{mod }b)$ is solvable, and since $(ap+1)/b$ must be an integer, all such $p$ lie in the same residue class modulo $b$. Let $p=kb+c$, where $k\in\mathbb{Z}_+$ and $c\in\{0,1,\dots,b-1\}$, and let $d=(ac+1)/b$. Then we are counting positive integers $k$ not exceeding $x/ab$ such that both $kb+c=P(n)$ and $ka+d=P(n+1)$ are prime. Let the number of such $k$ be ${\rm Pairs}(x)$. By Brun's Method \cite{HalRi}, the number of primes $p\leq x$ congruent to $t$ modulo $q$ is no greater than $\frac{Ax}{q\log(x/q)}\cdot\prod_{p|q}(1-p^{-1})^{-1}$. Applying this result, we have
\begin{align}
{\rm Pairs}(x)\ &\leq\ \frac{Ax}{ab\log^2(x/ab)}\prod_{p\ |\ ab}\left(1-\frac{1}{p}\right)^{-1}\nonumber\\
&=\ \frac{Ax}{\varphi(a)\varphi(b)\log^2(x/ab)},
\end{align}
where $A$ is a constant of size around $8$ and $\varphi$ is Euler's totient function, or the number of integers up to $n$ that are relatively prime to $n$. Because we are investigating only the $x-$dependent components and not the multiplicative constants, our only concern here is the size of $1/\log^2(x/ab)$ in relation to the change of $a,b$. In particular, as all summations are positive, no cancellation is involved, and thus it suffices to show this sum is of the same size for all $a, b$ in our ranges.
We now bound above and below $1/\log^2(x/ab)$. Because $a,b\in\mathbb{N}$,
\begin{align}
\frac{1}{\log^2(x/ab)}\ >\ \frac{1}{(\log x)^2}
\end{align}
Meanwhile,
\begin{align}\label{1xab}
\frac{1}{\log^2(x/ab)}\ & <\ \frac{1}{\log^2(x/(x^{2\cdot{(1/2-\delta_0})}\cdot 2x^\delta))}\nonumber\\
&=\ \frac{1}{\log^2(x^{2\delta_0-\delta}/2)}\nonumber\\
&\ <\frac{2}{(2\delta_0-\delta)^2(\log x)^2}.
\end{align}
This shows us that we can remove $ab$ at a cost of a multiplicative change in the result. We now use the result of Landau \cite{Lan}: if $E=\zeta(2)\zeta(3)/\zeta(6)$, then
$\sum_{n\leq x}1/\varphi(n)=E\log x+F+o(1)$ for some constant $F$. Therefore, using \eqref{1xab},
\begin{align}\label{rmPairsx}
{\rm Pairs}(x)\ &<\ \frac{2Ax}{(2\delta_0-\delta)^2(\log x)^2}\sum_{a\leq x^{1/2-\delta_0}}\frac{1}{\varphi(a)}\sum_{ax^{-\delta}/2<b<2ax^{\delta}}\frac{1}{\varphi(b)}\nonumber\\
&< \ \frac{2Ax}{(2\delta_0-\delta)^2(\log x)^2}\sum_{a\leq x^{1/2-\delta_0}}\frac{E\log( 2ax^{\delta}/\frac{ax^{-\delta}}{2})+o(1)}{\varphi(a)} \nonumber\\
&<\ \frac{Ax}{(2\delta_0-\delta)^2(\log x)^2}\sum_{a\leq x^{1/2-\delta_0}}\frac{3E\log x^{2\delta}+o(1)}{\varphi(a)}\nonumber\\
&<\ \frac{7EA\delta x}{(2\delta_0-\delta)^2\log x}\sum_{a\leq x^{1/2-\delta_0}}\frac{1}{\varphi(a)}\nonumber\\
&<\ \frac{8E^2A\delta x(1/2-\delta_0)\log x}{(2\delta_0-\delta)^2\log x}\nonumber\\
&<\ \frac{4E^2A\delta x}{(2\delta_0-\delta)^2}.
\end{align}
Let
\begin{align}\label{d0e4e2a}
0\ <\ \delta\ <\ \delta^2_0\epsilon/4E^2A \textrm{ and }\delta\ <\ \delta_0/4,
\end{align}
then \eqref{rmPairsx} implies there are fewer than $\epsilon x/3$ choices for such $n$. Thus, if we choose $\delta$ such that both \eqref{epsilon13} and \eqref{d0e4e2a} hold, then there are less than $\epsilon x$ choices of $n$ for every sufficiently large $x$, completing our proof.
\end{proof}
\begin{rek}
For our purposes, the estimation in inequalities \eqref{1xpq} and \eqref{clogx2deltap1} is sufficient, but if we substitute $\sum_{px^{-\delta}<q<px^\delta}1/q$ with $\log\frac{\log px^\delta}{\log px^{-\delta}}+C/\log px^{-\delta}$, in other words, if we use inequality \eqref{eq:u0pv0strong} rather than \eqref{eq:u0pv0weak}, then with a bit more work we could get $\delta<\frac{\epsilon\delta_0}{6.12\log(1/2\delta_0)}$.
\end{rek}
\begin{rek} Moreover, we can easily extend the result to $r\geq 1$. From \eqref{epsilon13} and \eqref{d0e4e2a} we know $\delta$ depends on $\epsilon$, and because $\epsilon$ is arbitrary between $0$ and $1$, $\delta$ can be very small. For every $0<\epsilon<1$, we find $\delta'=r\cdot \delta$ (where $\delta$ satisfies \eqref{epsilon13} and \eqref{d0e4e2a}; hence $\delta'$ can be very small) such that for sufficiently large $x$, the number of $n\leq x$ with
\begin{align}
x^{-\delta'}\ <\ P(n)^r/P(n+1)^r\ <\ x^{\delta'}
\end{align}
is less than $\epsilon x$.
\end{rek}
\begin{lem}\label{thm:1minusepsilon}
When $r\geq 1$, for every $\epsilon>0$, let $\delta=\delta_0r\epsilon/14$, then for sufficiently large $x$ there are at least $(1-\epsilon)x$ choices for composite integer $n\leq x$ such that
\begin{align}\label{1plusxne}
P(n)^r\ \leq\ f_r(n)\ <\ (1+x^{-\delta})P(n)^r.
\end{align}
\end{lem}
\begin{proof}
We know that any $n$ at most $x$ is divisible by at most $\log x/\log 2$ primes (counted with multiplicity). By the Prime Number Theorem, the number of primes up to $x$ is $O(x/\log x)$, which is less than $\epsilon_0x$ for any fixed $0<\epsilon_0<1$ once $x$ is sufficiently large; hence the number of prime $n$ up to $x$ is $o(x)$ and can be absorbed. Thus, we have for sufficiently large $x$ and composite $n$:
\begin{align}\label{frn}
f_r(n)\ &\ =\ P(n)^r + f_r(n/P(n))\nonumber\\
&\ \leq\ P(n)^r+P(n/P(n))^r\cdot\log x/\log 2\nonumber\\
&\ <\ P(n)^r+P(n/P(n))^r\cdot x^{\delta}
\end{align}
for any fixed $\delta$. We prove that there are at most $\epsilon x$ choices of $n\leq x$ such that
\begin{align}\label{frngeq}
f_r(n)\ \geq\ (1+x^{-\delta})P(n)^r
\end{align}
holds. Indeed, if \eqref{frngeq} holds for a composite $n$, then from \eqref{frn} we have
\begin{align}\label{p(n/pn)r}
P(n/P(n))^r\ >\ x^{-2\delta}\cdot P(n)^r.
\end{align}
Let $\epsilon>0$. We know from Corollary \ref{cor: epsilonx3} that there exists $\delta_0=\delta_0(\epsilon)$ such that for large $x$ the number of $n\leq x$ with $P(n)<x^{\delta_0}$ is at most $\epsilon x/3$. Meanwhile, we know for each pair of primes $p,q$, there are at most $\lfloor\frac{x}{pq}\rfloor$ choices of $n\leq x$ with $P(n)=p$ and $P(n/P(n))=q$. Hence, for large $x$ the number of $n\leq x$ for which \eqref{p(n/pn)r} holds is at most (assume $0<\delta<\delta_0/3$):
\begin{align}\label{oxepsilonx3generalized}
o(x)+\frac{\epsilon x}{3}+\sum_{\substack{x^{\delta_0}<p\\x^{-2\delta/r}p<q\leq p}}\left\lfloor\frac{x}{pq}\right\rfloor\ &<\ \frac{\epsilon x}{2}+x\sum_{x^{\delta_0}<p}\frac{1}{p}\sum_{x^{-2\delta/r}p<q\leq p}\frac{1}{q}\nonumber\\
&<\ \frac{\epsilon x}{2}+x\sum_{x^{\delta_0}<p}\frac{1}{p}\cdot\frac{C+(2\delta\log x)/r}{\log (x^{-2\delta/r}p)}\nonumber\\
&<\ \frac{\epsilon x}{2}+\frac{6\delta}{r}\cdot x\log x\sum_{x^{\delta_0}<p}\frac{1}{p\log p}\nonumber\\
&<\ \frac{\epsilon x}{2}+\frac{7\delta x}{r\delta_0}\ (\textrm{Lemma \ref{lem:plnp}})
\end{align}
where $o(x)$ accounts for all $n\leq x$ that are of the form $P(n)^k$ for some positive integer $k$. We take $\delta=\delta_0r\epsilon/14$, then \eqref{oxepsilonx3generalized} is no greater than $\epsilon x$. Therefore, the number of $n\leq x$ such that $P(n)^r \leq f_r(n) < (1+x^{-\delta})P(n)^r$ is at least $(1-\epsilon)x$, completing the proof.
\end{proof}
We recall Theorem \ref{thm:ruthaarondifference}.
\differencethm*
Now, we know from Lemma \ref{thm:xdeltapn} that there are less than $\epsilon x$ choices of $n$ which satisfy
\begin{align}
x^{-r\delta} \ < \ \frac{P(n)^r}{P(n+1)^r}\ <\ x^{r\delta}.
\end{align}
Without loss of generality, let $P(n)>P(n+1)$; the other case is handled similarly\footnote{One of the Erd\Horig{o}s-Turán conjectures asserts that the asymptotic density of $n\leq x$ with $P(n)>P(n+1)$ is $\frac{1}{2}$ \cite{ErPom}. Erd\Horig{o}s and Pomerance \cite{ErPom} showed that the number of $n$ up to $x$ with $P(n)>P(n+1)$ is greater than $0.0099 x$. This result was recently improved by Lü and Wang \cite{LuWang}, who proved that the density is larger than $0.2017$.}. Then there are less than $\epsilon x$ choices of $n$ with
\begin{align}\label{rdelta}
(x^{-r\delta}-1)P(n+1)^r\ &<\ P(n)^r-P(n+1)^r\ <\ (x^{r\delta}-1)P(n+1)^r.
\end{align}
Because $r\delta>0$, the LHS of \eqref{rdelta} is negative, which means there are less than $\epsilon x$ choices of $n$ with
\begin{align}
0\ &<\ P(n)^r-P(n+1)^r\ <\ (x^{r\delta}-1)P(n+1)^r.
\end{align}
Then for at least $(1-\epsilon) x $ choices of $n$, we have
\begin{align}
P(n)^r-P(n+1)^r\ &>\ (x^{r\delta}-1)P(n+1)^r.
\end{align}
Meanwhile, we know from Lemma \ref{thm:1minusepsilon} that for all but $\epsilon x$ choices of $n$ we have
\begin{align}
P(n)^r\ &<\ f_r(n)\ <\ (1+x^{-\delta})P(n)^r\nonumber\\
P(n+1)^r\ &<\ f_r(n+1)\ <\ (1+x^{-\delta})P(n+1)^r.
\end{align}
Therefore, there are more than $(1-\epsilon)x$ choices of $n$ with
\begin{align}
f_r(n)-f_r(n+1)\ &>\ P(n)^r-(1+x^{-\delta})P(n+1)^r\nonumber\\
&>\ (x^{r\delta}-x^{-\delta}-1)P(n+1)^r\nonumber\\
&>\ (x^{r\delta}-x^{-\delta}-1)x^{r/\log\log x},
\end{align}
which means the density of $n$ with
\begin{align}\label{absfnfn+1}
|f_r(n)-f_r(n+1)|\ &<\ (x^{r\delta}-x^{-\delta}-1)x^{r/\log\log x}
\end{align}
is $0$. In fact, when $x$ is sufficiently large, the RHS of \eqref{absfnfn+1} is greater than $x^\delta$, than $(\log x)^k$ for any fixed $k$, and than any constant, which means the density of $n$ up to $x$ with $|f_r(n)-f_r(n+1)|<k(x)$, where $k(x)$ is any one of these functions, is also $0$.
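As an empirical illustration of the theorem in the case $r=1$ (a sketch only), the following Python snippet counts how many $n<10^6$ satisfy $|f(n)-f(n+1)|<\log x$; relatively few $n$ come even this close, in line with the density-zero statement above.
\begin{verbatim}
# Count n < 10^6 with |f(n) - f(n+1)| < log x, where f = f_1 (sketch only).
import math

def f_values(N):
    # f[n] = sum of prime factors of n counted with multiplicity
    f = [0] * (N + 2)
    rem = list(range(N + 2))
    for p in range(2, N + 2):
        if rem[p] == p:                  # p is prime
            for m in range(p, N + 2, p):
                while rem[m] % p == 0:
                    rem[m] //= p
                    f[m] += p
    return f

x = 10**6
f = f_values(x)
close = sum(1 for n in range(2, x) if abs(f[n] - f[n + 1]) < math.log(x))
print(close, "of", x - 2, "integers have |f(n) - f(n+1)| < log x")
\end{verbatim}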
\section{Proof of Theorem \ref{thm:generalizednumberofn}}\label{sec:non}
Recall that the notation $\#\mathcal{R}_{(1,-1)}^r(x)$ denotes the number of integers up to $x$ with $f_r(n)=f_r(n+1)$. Recall Theorem \ref{thm:generalizednumberofn}.
\negativer*
\rationalr*
\rgeqone*
We first show that the number of $n\leq x$ when $r=-1$ is less than $x^{2(\log\log x/\log x)^{1/2}}$, which means for a fixed $\delta$ arbitrarily small and $x$ sufficiently large, $\#\mathcal{R}_{(1,-1)}^{-1}(x)\leq x^\delta$. Then, we will show, using linear independence, that when $r$ is a non-integer rational, $r-$th power \textit{Ruth-Aaron} numbers do not exist. Last, we will present an initial result of Erd\Horig{o}s and Pomerance \cite{ErPom} regarding the number of \textit{Ruth-Aaron} numbers before generalizing it to $r\geq 1$.
\subsection{Negative $r$}
\negativer*
\begin{proof}
First, we prove that when $r=-1$, an $r-$th power \textit{Ruth-Aaron} number $n$ exists only when, in the unique prime factorizations of $n$ and $n+1$, the exponent of each prime is divisible by that prime. In other words,
\begin{align}
n\ &=\ p_1^{a_1}\cdot p_2^{a_2}\cdots p_k^{a_k}\nonumber\\
n+1\ &=\ q_1^{b_1}\cdot q_2^{b_2}\cdots q_l^{b_l},
\end{align}
where $p_i|a_i$ and $q_i|b_i$. \\
\indent Let $a_i',b_i'\in\mathbb{Q}$, and $a_i'=\frac{a_i}{p_i}$, $b_i'=\frac{b_i}{q_i}$, where $a_i,b_i\in\mathbb{N}$. Then we have
\begin{align}\label{proveaidividespi}
a_1'+a_2'+\cdots+a_k' \ = \ f_{-1}(n)\ &=\ f_{-1}(n+1)\ =\ b_1'+b_2'+\cdots+b_l'\nonumber\\
\frac{a_1}{p_1}+\frac{a_2}{p_2}+\cdots+\frac{a_k}{p_k}\ &=\ \frac{b_1}{q_1}+\frac{b_2}{q_2}+\cdots+\frac{b_l}{q_l}\nonumber\\
((a_1p_2\cdots p_k)+\cdots+(a_kp_1\cdots p_{k-1}))q_1\cdots q_l\ &=\ ((b_1q_2\cdots q_l)+\cdots+(b_lq_1\cdots q_{l-1}))p_1\cdots p_k.
\end{align}
It's obvious that the left hand side has to be divisible by $p_1p_2\cdots p_k$. Since $\gcd(n,n+1)=1$, $\gcd(q_i,p_j)=1$, which means
\begin{align}
a_1p_2\cdots p_k+\cdots+a_kp_1\cdots p_{k-1}\ &\equiv\ 0\ (\textrm{mod }p_i)\nonumber\\
a_ip_1\cdots p_{i-1}p_{i+1}\cdots p_k\ &\equiv\ 0\ (\textrm{mod }p_i)\nonumber\\
a_i\ &\equiv\ 0\ (\textrm{mod }p_i),
\end{align}
which means each $p_i|a_i$. Similarly, $q_i|b_i$. \\ \\
\indent Next, we rewrite $n$ as
\begin{align}
n\ &=\ p_1^{a_1}\cdot p_2^{a_2}\cdots p_t^{a_t}\cdot\prod_{i>t}p_i^{a_i}.
\end{align}
Without loss of generality, let $n$ be odd. If $n$ is even, then we instead analyze $n+1$. Let $p_1<p_2<\cdots <p_k$, $q_1<q_2<\cdots<q_l$, then $\log_{p_i}x<\log x$ and $p_i\geq3$. We have
\begin{align}\label{boundingpi}
p_i^{a_i'p_i}\ &\leq\ n\nonumber\\
p_i\log p_i\ \leq\ a_i'p_i\log p_i\ &\leq\ \log n\nonumber\\
p_i\ <\ \log n\ &\leq\ \log x.\nonumber\\
a_i'\ \leq\ (\log_{p_i}n)/{p_i}\ &<\ (\log_{p_i} x)/3,
\end{align}
which means for each $p_i$, where $i=1,2,\dots,t$, we can choose $a_i'$ from $\{0,1,\dots,\lfloor(\log_{p_i} x)/3\rfloor\}$. Because $p_i\geq 3$, there are at most $(\log_{p_i} x)/3+1\leq \log x/3$ choices of each $a_i'$, hence of each $a_i=a_i'p_i$, so the number of choices of the first $t$ prime powers is at most $((\log x)/3)^t$. Moreover, because $p_i>t$ for $i>t$, we have
\begin{align}\label{fromtandonward}
\left(\prod_{i>t}^k p_i^{a_i'}\right)^t\ &=\ \prod_{i>t}^kp_i^{a_i't}\nonumber\\
&<\ \prod_{i>t}^k p_i^{a_i'p_i}\ =\ \prod_{i>t}^k p_i^{a_i}\nonumber\\
&<\ x.
\end{align}
By the Fundamental Theorem of Arithmetic, no two products $\prod_{i>t}^kp_i^{a_i}$ can be equal, which means the number of $\prod_{i>t}^k p_i^{a_i'}$, hence $\prod_{i>t}^k p_i^{a_i}$, is at most $x^{1/t}$. Thus, the number of $n$ up to $x$ with $f_{-1}(n)=f_{-1}(n+1)$ is at most $(\log x)^{t}\cdot x^{1/t}$.
Let $s(t)=(\log x)^{t}\cdot x^{1/t}$, and we choose $t=(\log x/\log\log x)^{1/2}$. Then
\begin{align}
s(t)\ &=\ (\log x)^{t}\cdot x^{1/t}\nonumber\\
&=\ x^{t\log\log x/\log x + \frac{1}{t}}\nonumber\\
&=\ x^{2(\log\log x/\log x)^{1/2}}.
\end{align}
Thus, the number of $n$ up to $x$ with $f_{-1}(n)=f_{-1}(n+1)$ is at most
$x^{2(\log\log x/\log x)^{1/2}}$. Since $(\log\log x/\log x)^{1/2}=o(1)$, for every $\epsilon>0$ we can find $x$ sufficiently large such that
\begin{align}
x^{2(\log\log x/\log x)^{1/2}}\ &\ll\ x^{\epsilon}.
\end{align}
\end{proof}
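To see how restrictive the divisibility condition derived above is, one can also search directly for solutions of $f_{-1}(n)=f_{-1}(n+1)$ with exact rational arithmetic. The Python sketch below (illustrative only) finds no solution below $10^5$, consistent with the scarcity asserted by the theorem.
\begin{verbatim}
# Exact search for n with f_{-1}(n) = f_{-1}(n+1)  (illustrative sketch).
from fractions import Fraction

def factorize(n):
    factors, d = [], 2
    while d * d <= n:
        e = 0
        while n % d == 0:
            n //= d
            e += 1
        if e:
            factors.append((d, e))
        d += 1
    if n > 1:
        factors.append((n, 1))
    return factors

def f_minus_one(n):
    # f_{-1}(n) = sum of a_i / p_i over the factorization n = prod p_i^{a_i}
    return sum(Fraction(a, p) for p, a in factorize(n))

print([n for n in range(2, 10**5) if f_minus_one(n) == f_minus_one(n + 1)])
# output: []  (no -1-st power Ruth-Aaron numbers in this range)
\end{verbatim}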
\begin{rek}
We can give a similar proof when $r$ is a negative integer less than $-1$. Let $r=-m$, where $m$ is a positive integer. With an approach similar to that of \eqref{proveaidividespi}, we have $p_i^{m}|a_i$. Then, similar to the argument in \eqref{boundingpi},
\begin{align}
p_i^{a_i'p_i^{m}}\ &\leq\ n\ \leq\ x\nonumber\\
a_i'p_i^{m}\log p_i\ &\leq\ \log x\nonumber\\
p_i\ &<\ (\log x)^{1/m}\nonumber\\
a_i'\ &<\ (\log x)/p_i^{m}\nonumber\\
&<\ (\log x)/3^m,
\end{align}
which means the number of choices of the first $t$ prime powers that can divide $n$ is at most $(\log x)^t$. Meanwhile, similar to the argument in \eqref{fromtandonward},
\begin{align}
\left(\prod_{i>t}^k p_i^{a_i'}\right)^{t^{m}}\ &=\ \prod_{i>t}^kp_i^{a_i't^{m}}\nonumber\\
&\leq \ \prod_{i>t}^kp_i^{a_i'p_i^{m}}\nonumber\\
&<\ x,
\end{align}
which means the number of choices of $\prod_{i>t}^kp_i^{a_i}$ is at most $x^{1/t^m}$. Therefore, the number of $n$ is bounded by
\begin{align}
s(t)\ &=\ (\log x)^{t}\cdot x^{1/t^m}\nonumber\\
&=\ x^{t\log\log x/\log x+1/t^m}.
\end{align}
We choose $t=(m^m\log x/\log\log x)^{1/(m+1)}$, then
\begin{align}
s(t) &=\ x^{O\left((\log\log x/\log x)^{m/(m+1)}\right)}\nonumber\\
&=\ x^{O\left((\log\log x/\log x)^{r/(r-1)}\right)}.
\end{align}
Therefore, the number of $n$ up to $x$ is at most $x^{O\left((\log\log x/\log x)^{r/(r-1)}\right)}$.
\end{rek}
Next, we look at the case where $r$ is a non-integer rational, which means $f_r(n)$ and $f_r(n+1)$ are summations of distinct radicals of prime powers. We prove that there are no \textit{Ruth-Aaron} numbers in this case.
\subsection{Non-integer Rational $r$}
\rationalr*
\begin{proof}
Let $r=x/y$ be a non-integer rational, then $f_r(n)=f_r(n+1)$ is equivalent to
\begin{align}\label{a=b=0}
a_1p_1^{x/y}+a_2p_2^{x/y}+\cdots + a_kp_k^{x/y}\ &=\ b_1q_1^{x/y}+b_2q_2^{x/y}+\cdots+b_lq_l^{x/y}
\end{align}
where $a_i,b_i\in\mathbb{Z}$.
We must show that \eqref{a=b=0} holds only if $a_i=b_i=0$. In other words, $p_i^{x/y}$ and $q_j^{x/y}$, where $i=1,\dots, k$, $j=1,\dots,l$, are linearly independent over $\mathbb{Z}$.\\
\indent Boreico \cite{Bo} shows that when $n_i$ are distinct $k-$th power-free integers, the sum $S=\sum a_i\sqrt[k]{n_i}$, where $a_i\in\mathbb{Z}$ are not all zero, is non-zero. In this case, write $x=cy+s$ with $0<s<y$, which is possible since $\gcd(x,y)=1$ and $y\geq2$; then $p_i^{x/y}=p_i^{c}\sqrt[y]{p_i^{s}}$ and $q_j^{x/y}=q_j^{c}\sqrt[y]{q_j^{s}}$. Because all of the $p_i,q_j$ are distinct primes and $0<s<y$, the integers $p_i^{s},q_j^{s}$ are distinct and $y-$th power free, while the coefficients $a_ip_i^{c},b_jq_j^{c}$ are integers. Using Boreico's result we can easily show that when $\sum a_ip_i^{x/y}-\sum b_iq_i^{x/y}=0$, we must have $a_i=b_i=0$, thus completing the proof.
\end{proof}
\begin{rek}
Above discusses only the circumstance of $r>0$. Likewise, when $r$ is a negative non-integer rational, we can multiply both sides of equation \eqref{a=b=0} by a factor of $\left(\prod_{i=1}^kp_i\prod_{j=1}^lq_j\right)^{x/y}$, so $n_i$ remain distinct and $y-$th power-free, and we can still apply Boreico's result to get $a_i=b_i=0$.
\end{rek}
Next, we look at the case where $r\geq 1$. As mentioned, we first introduce a result due to Erd\Horig{o}s and Pomerance \cite{ErPom}; many of their arguments are generalized, in expanded form, in our proof for general $r\geq 1$ below.
\begin{thm}\label{thm:oxlnx1me}
For every $0< \epsilon< 1$,
\begin{align}
\#\mathcal{R}_{(1,-1)}(x)\ &=\ O\left(\frac{x}{(\log x)^{1-\epsilon}}\right).
\end{align}
\end{thm}
Now we look at our generalization to $r\geq 1$.
\subsection{Positive $r\geq 1$}
\rgeqone*
Due to the length of the proof, we divide it into three subsections. In Section \ref{sss:section1}, we will introduce a general bound on the size of the largest prime factor of an integer; Sections \ref{sss:section2} and \ref{sss:section3} discuss the two cases in terms of the size of $f_r(n/P(n))$ and $f_r((n+1)/P(n+1))$, and conclude with an estimate of $\#\mathcal{R}_{(1,-1)}^r(x)$ in each case. Eventually, we absorb Case (i) into Case (ii) and arrive at our final result. We adapt and advance an approach of Pomerance \cite{Pom}, and we are able to improve Erd\Horig{o}s and Pomerance's $O(x/(\log x)^{1-\epsilon})$ \cite{ErPom} by a factor of roughly $(\log\log x)^3\log\log\log x/\log x$, hence a refinement of Pomerance's result \cite{Pom}.
\subsubsection{We show that $x^{1/\log\log x}\leq p,q\leq x^{1/2}\log x$.}\label{sss:section1}
\begin{proof}
Since $n+1$ exceeds $x$ only when $n=x$, in general we assume that $n+1\leq x$. Let $p=P(n)$ and $q=P(n+1)$, and write $n=p\cdot k,\ n+1=q\cdot m$. By Lemma \ref{lem:lowerboundofp}, we may assume
\begin{align}
P(n)\ &>\ x^{1/\log\log x}\ \textrm{and }P(n+1)\ >\ x^{1/\log\log x}.
\end{align}
We know from Lemma \ref{lem:generalizedfnpn} that for all $P(n)\geq 5$, we have
\begin{align}\label{pnrlognlogpn}
P(n)^r\ \leq f_r(n)\ \leq P(n)^r\cdot\frac{\log n}{\log P(n)}.
\end{align}
Since \eqref{loglogx} gives $P(n), P(n+1)>x^{1/\log\log x}\geq 5$ for sufficiently large $x$, inequality \eqref{pnrlognlogpn} holds for both $n$ and $n+1$. Next, we give an upper bound on the size of $p,q$ using the fixed values of $k,m$. We show that given $k,m$, the primes $p,q$ are determined uniquely. In fact, from the two equations
\begin{align}\label{pkqmrelation}
pk+1\ &=\ qm\nonumber\\
p^r+f_r(k)\ &=\ q^r+f_r(m)
\end{align}
we have
\begin{align}
p\ &=\ \frac{qm-1}{k}\nonumber\\
\left(\frac{qm-1}{k}\right)^r-q^r\ &=\ f_r(m)-f_r(k).
\end{align}
Let
\begin{align}
g(q)\ &=\ (qm-1)^r-(qk)^r-k^r\cdot(f_r(m)-f_r(k))\nonumber\\
g'(q)\ &=\ rm(qm-1)^{r-1}-kr(qk)^{r-1}\nonumber\\
&=\ r(m(qm-1)^{r-1}-k(qk)^{r-1}).
\end{align}
Meanwhile,
\begin{align}
g\left(\frac{1}{m}\right)\ &=\ -\left(\frac{k}{m}\right)^r-k^r\cdot(f_r(m)-f_r(k)).
\end{align}
Because $q\geq\frac{1}{m}$ and $r\geq 1$, if $m>k$, then from \eqref{pkqmrelation} we know $p>q$ and $f_r(m)>f_r(k)$; thus, $qm-1>qk$ and $g'(q)>0$, which means $g(q)$ always increases. Moreover, because $g\left(\frac{1}{m}\right)<0$, there exists only one $q$ with $g(q)=0$. Similarly, if $m<k$, then $g(q)$ always decreases and $g\left(\frac{1}{m}\right)>k^r-\left(\frac{k}{m}\right)^r>0$, which also means there exists only one $q>0$ with $g(q)=0$. Therefore, $q$ is uniquely determined by $k,m$, which means $p,q$ are determined by $k,m$. Thus, the number of choices for $n$ determined by the choices of $k,m$ when $k,m<x^{1/2}/\log x$ is at most $x/(\log x)^2$. Hence, we may assume
\begin{align}
k\ \geq\ x^{1/2}/\log x\ \ \textrm{or }\ m\ \geq\ x^{1/2}/\log x.
\end{align}
Because $n=p\cdot k\leq x,\ n+1=q\cdot m\leq x$, we thus may assume
\begin{align}
p\ \leq\ x^{1/2}\log x\ \ \textrm{or }\ q\ \leq\ x^{1/2}\log x.
\end{align}
Suppose $p>x^{1/2}\log x$, then $q\leq x^{1/2}\log x$. Let $h(x)=x^r/\log x$, then
\begin{align}
h'(x)=\frac{x^{r-1}(r\log x-1)}{(\log x)^2}
\end{align}
Because $r\geq 1$, $h(x)$ increases on $x\geq3$. We have
\begin{align}
p^r\ \leq\ f_r(n)\ =\ f_r(n+1)\ &\leq\ q^r\cdot\frac{\log (n+1)}{\log q}\nonumber\\
&\leq\ \frac{(x^{1/2}\log x)^r\cdot \log(n+1)}{\log(x^{1/2}\log x)}\nonumber\\
&<\ 2\cdot(x^{1/2}\log x)^r\nonumber\\
p\ &<\ 2^{1/r}\cdot x^{1/2}\log x.
\end{align}
A similar inequality can be obtained for $q>x^{1/2}\log x$. Therefore, we have
\begin{align}\label{pqx1/2logx}
p\ <\ 2^{1/r}\cdot x^{1/2}\log x\ \textrm{ and }\ q\ <\ 2^{1/r}\cdot x^{1/2}\log x.
\end{align}
Now we look at $f_r(k)$ and $f_r(m)$. In terms of the upper bound of $f_r(k), f_r(m)$, we have the following two cases:
\begin{align*}
&\textrm{Case (i): } f_r(k)\ <\ \frac{p^r}{\log^{2r}x},\ f_r(m)\ <\ \frac{q^r}{\log^{2r}x}\\
&\textrm{Case (ii): (WLOG) }f_r(k)\ \geq\ \frac{p^r}{\log^{2r}x}.
\end{align*}
\end{proof}
\subsubsection{Case (i) Discussion.}\label{sss:section2}
\begin{proof}
We consider Case (i) first. Consider a function $v(x)=(x+1)^r-x^r-1$, then
\begin{align}
v'(x)\
&=\ r((x+1)^{r-1}-x^{r-1})\nonumber\\
&\geq\ 0,
\end{align}
and $v(0)=0$, which means $(x+1)^r\geq x^r+1$ for all $x>0$, with equality only when $r=1$.
Applying this result, we have $p^r+q^r\leq(p+q)^r$ and $|p-q|^r\ \leq\ |p^r-q^r|$ since $p\neq q>1$. Meanwhile, by the assumption $f_r(k)<\frac{p^r}{\log^{2r}x},f_r(m)<\frac{q^r}{\log^{2r}x}$, we have
\begin{align}
|p^r-q^r|\ &=\ |f_r(m)-f_r(k)|\nonumber\\
&<\ \frac{p^r+q^r}{
\log^{2r}x}.
\end{align}
Then
\begin{align}
|p-q|^r\ \leq \ |p^r-q^r|\ &<\ \frac{p^r+q^r}{\log^{2r}x}\ \leq\ \frac{(p+q)^r}{\log^{2r}x}.
\end{align}
Therefore,
\begin{align}\label{qrangelog2x+1}
|p-q|\ &<\ \frac{p+q}{(\log x)^2}\nonumber\\
p\cdot\frac{(\log x)^2-1}{(\log x)^2+1} \ &<\ q\ <\ p\cdot\frac{(\log x)^2+1}{(\log x)^2-1}.
\end{align}
For all $p$ satisfying \eqref{loglogx}, the number of primes $q$ such that \eqref{qrangelog2x+1} holds is
\begin{align}
\sum_{p\frac{(\log x)^2-1}{(\log x)^2+1}<q<p\frac{(\log x)^2+1}{(\log x)^2-1}}1\ &\ll\ \pi\left(p\frac{(\log x)^2+1}{(\log x)^2-1}\right)-\pi\left(p\frac{(\log x)^2-1}{(\log x)^2+1}\right).
\end{align}
We apply an explicit form of the Brun-Titchmarsh theorem \cite{Mont} which states that
\begin{align}
\pi(x+y)-\pi(x)\ &\leq\ \frac{2y}{\log y}
\end{align}
for all integers $x\geq1, y\geq 2$. Then, using $p>x^{1/\log\log x}$, we have:
\begin{align}
\pi\left(p\frac{(\log x)^2+1}{(\log x)^2-1}\right)-\pi\left(p\frac{(\log x)^2-1}{(\log x)^2+1}\right)\ & \leq \ \frac{2p\frac{4(\log x)^2}{\log^4x-1}}{\log \left(p\frac{4(\log x)^2}{\log^4x-1}\right)}\nonumber\\
&\ll\ O\left(\frac{\frac{p}{(\log x)^2}}{\log p}\right)\nonumber\\
&<\ O\left(\frac{p}{(\log x)^2\cdot\frac{\log x}{\log\log x}}\right)\nonumber\\
&<\ O\left(\frac{p\log\log x}{(\log x)^3}\right).
\end{align}
Thus, when $x$ is sufficiently large, using $p<2^{1/r}x^{1/2}\log x$, the number of $q$ satisfying $ p\frac{(\log x)^2-1}{(\log x)^2+1}<q<p\frac{(\log x)^2+1}{(\log x)^2-1}$ is at most
\begin{align}\label{numberofq}
\sum_{ p\frac{(\log x)^2-1}{(\log x)^2+1}<q<p\frac{(\log x)^2+1}{(\log x)^2-1}}1\
&=\ O\left(\frac{p\log\log x}{(\log x)^3}\right)\nonumber\\
&\leq\ O\left(\frac{x^{1/2}\log\log x}{(\log x)^2}\right).
\end{align}
Meanwhile, for sufficiently large $x$, the sum of $\frac{1}{q}$ for such primes $q$ is
\begin{align}\label{sumof1/q}
\sum_{p\frac{(\log x)^2-1}{(\log x)^2+1}<q<p\frac{(\log x)^2+1}{(\log x)^2-1}}\frac{1}{q}\ &<\ O\left(\frac{p\log\log x}{(\log x)^3}\right)\cdot\frac{1}{p\frac{(\log x)^2-1}{(\log x)^2+1}}\nonumber\\
&=\ O\left(\frac{\log\log x}{(\log x)^3}\right).
\end{align}
Therefore, in Case (i), using the result from \eqref{numberofq} and \eqref{sumof1/q}, $\#\mathcal{R}^r_{(1,-1)}(x)$ is at most
\begin{align}
\sum_{\substack{ p\frac{(\log x)^2-1}{(\log x)^2+1}<q<p\frac{(\log x)^2+1}{(\log x)^2-1}\\x^{1/\log\log x}<p<2^{1/r}x^{1/2}\log x}}\left(1+\left\lfloor\frac{x}{pq}\right\rfloor\right)\ &<\ \left(\frac{x^{1/2}\log\log x}{(\log x)^2}\right)\cdot\pi\left(2^{1/r}x^{1/2}\log x\right)+\sum\left\lfloor\frac{x}{pq}\right\rfloor\nonumber\\
&\ll\ \left(\frac{x^{1/2}\log\log x}{(\log x)^2}\right)\cdot\left(x^{1/2}\right)+ \frac{x\log\log x}{(\log x)^3}\sum_{\substack{x^{1/\log\log x}<\\p<2^{1/r}x^{1/2}\log x}}\frac{1}{p}\nonumber\\
&<\ O\left(\frac{x\log\log x}{(\log x)^2}\right)+O\left(\frac{x(\log\log x)^2}{(\log x)^3}\right)\ \textrm{(Lemma \ref{lem:eupevandu0pv0})}\nonumber\\
&=\ O\left(\frac{x\log\log x}{(\log x)^2}\right).
\end{align}
\end{proof}
\subsubsection{Case (ii) Discussion}\label{sss:section3}
\begin{proof}
Now we consider Case (ii). Write $k=t\cdot u$, where $t=P(k)$, and we have
\begin{align}
p^r\ \leq\ f_r(n)\ =\ f_r(n+1)\ &\leq\ q^r\cdot\frac{\log (n+1)}{\log q}\nonumber\\
&\leq\ q^r\cdot\frac{\log (x+1)}{\log q}\nonumber\\
&\leq\ q^r\cdot\frac{\log (x+1)}{\log x^{1/\log\log x}}\nonumber\\
&\leq\ q^r\cdot 2\log\log x.
\end{align}
Similarly,
\begin{align}
q^r\ \leq\ f_r(n+1)\ =\ f_r(n)\ &\leq\ p^r\cdot\frac{\log n}{\log p}\nonumber\\
&\leq\ p^r\cdot\frac{\log x}{\log p}\nonumber\\
&\leq\ p^r\cdot\log\log x.
\end{align}
Therefore, we have
\begin{align}
\frac{p}{2^{1/r}(\log\log x)^{1/r}}\ \leq\ q\ \leq\ p\cdot(\log\log x)^{1/r}.
\end{align}
Using the result from Lemma \ref{lem:lowerboundofp} again, the number of $k$ for which we don't have $t>k^{1/\log\log k}$ is $O(k/(\log k)^2)$; thus, we may assume $t>k^{1/\log\log k}$ and $t\geq 5$. Applying results from Lemma \ref{lem:generalizedfnpn},
\begin{align}\label{frk}
\frac{p^r}{\log^{2r}x}\ \leq \ f_r(k)\ &\leq\ t^r\cdot\frac{\log k}{\log t}\nonumber\\
&\leq t^r\cdot\log\log k\nonumber\\
&<\ t^r\cdot\log\log x.
\end{align}
We get that
\begin{align}
\frac{p}{(\log x)^2(\log\log x)^{1/r}} \ < \ t\ \leq\ p.
\end{align}
This result implies that $P(n/P(n))$ and $P(n)$ are relatively close. Moreover, in terms of the size of $P(n)$, we have the following two cases:
\begin{align*}
&\textrm{Case (ii.1): } p\ \leq\ x^{1/3}\\
&\textrm{Case (ii.2): }p\ >\ x^{1/3}.
\end{align*}
\ \\
\noindent \emph{Consider Case (ii.1) first.} The number of $n\leq x$ with $pt|n$ and $q|n+1$ in this case is at most
\begin{align}\label{thelengthyequation}
& \sum_{\substack{x^{1/\log\log x}<p\leq x^{1/3}\\p/(2^{1/r}(\log\log x)^{1/r})<\\q<p(\log\log x)^{1/r}\\p/((\log x)^2(\log\log x)^{1/r})<t\leq p}}\left(1+\left\lfloor\frac{x}{ptq}\right\rfloor\right)\nonumber\\
& \ll\ \pi\left(x^{1/3}\right)\cdot \pi\left(x^{1/3}(\log\log x)^{1/r}\right)\cdot\pi\left(x^{1/3}\right)+ \sum\left\lfloor\frac{x}{ptq}\right\rfloor\nonumber\\
&\ll\ O\left(\frac{x^{1/3}}{\log x}\cdot\frac{x^{1/3}\log\log x}{\log x}\cdot\frac{x^{1/3}}{\log x}\right)+\sum\left\lfloor\frac{x}{ptq}\right\rfloor\nonumber\\
& <\ O\left(\frac{x\log\log x}{(\log x)^3}\right)+ x\sum\frac{1}{p}\sum\frac{1}{t}\sum\frac{1}{q}\nonumber\\
& \ll \ O\left(\frac{x\log\log x}{(\log x)^3}\right)\nonumber + O\left(x\sum\frac{1}{p}\sum\frac{1}{t}\cdot\frac{\log\log\log x}{\log p}\right)\ \textrm{(Lemma \ref{lem:eupevandu0pv0})}\nonumber\\
&\ll\ O\left(\frac{x\log\log x}{(\log x)^3}\right)+ O\left(x\log\log\log x\sum\frac{1}{p\log p}\sum\frac{1}{t}\right)\nonumber\\
&\ll\ O\left(\frac{x\log\log x}{(\log x)^3}\right)\nonumber+O\left(
x\log\log\log x\sum\frac{1}{p\log p}\cdot\frac{\log\log x}{\log p}\right)\nonumber\\
&\leq\ O\left(\frac{x\log\log x}{(\log x)^3}\right)+O\left(\frac{x\log\log\log x(\log\log x)^3}{(\log x)^2}\right)\ \textrm{(Lemma \ref{lem:plnp})}\nonumber\\
&=\ O\left(\frac{x\log\log\log x(\log\log x)^3}{(\log x)^2}\right).
\end{align}
Recall that in Case (i), $\#\mathcal{R}^r_{(1,-1)}(x)=O\left(\frac{x\log\log x}{(\log x)^2}\right)$, which means we can absorb Case (i) into error estimate. Now we turn to Case (ii.2), where $p>x^{1/3}$. We have
\begin{align}
q^r\ &\leq\ p^r\cdot\frac{\log x}{\log p}\nonumber\\
&<\ 3\cdot p^r.
\end{align}
Meanwhile, because $p>x^{1/\log\log x}$, when $x$ is sufficiently large we have
\begin{align}
p\ &\leq\ q\cdot\frac{\log (x+1)}{\log q}\nonumber\\
\log p\ &\leq\ \log q+\log\frac{\log (x+1)}{\log p}\nonumber\\
&\leq\ \log q+2\log\log\log x\nonumber\\
&\leq\ \log q+\frac{1}{2}\log x^{1/\log\log x}\nonumber\\
&\leq\ \frac{3}{2}\cdot\log q.
\end{align}
Therefore, we have
\begin{align}
p^r\ &\leq\ q^r\cdot\frac{\log (x+1)}{\log q}\nonumber\\
&\leq\ q^r\cdot\frac{\log (x+1)}{2\log p/3}\nonumber\\
&<\ q^r\cdot\frac{9\log (x+1)}{2\log x}\nonumber\\
&<\ 6q^r.
\end{align}
Therefore, $p/6^{1/r}<q<3^{1/r}p$. Moreover, recall \eqref{frk}; we have
\begin{align}
\frac{p^r}{\log^{2r}x} \ \leq\ f_r(k)\ &\leq\ t^r\cdot\frac{\log k}{\log t}.
\end{align}
Suppose $t<p^{1/2}$, then the number of $n\leq x$ is at most
\begin{align}
\sum_{\substack{t<p^{1/2}\\x^{1/3}<p<2^{1/r}x^{1/2}\log x}}1\ &\ll\ \pi\left(\left(2^{1/r}x^{1/2}\log x\right)^{1/2}\right)\pi\left(2^{1/r}x^{1/2}\log x\right)\nonumber\\
&\ll\ \frac{x^{1/4}\log^{1/2}x}{\log x}\cdot\frac{x^{1/2}\log x}{\log x}\nonumber\\
&=\ O\left(\frac{x^{3/4}}{(\log x)^{1/2}}\right),
\end{align}
which we can absorb into error estimate.
Therefore, $t>p^{1/2}$. Using $p>x^{1/3}$, we have
\begin{align}
\frac{p^r}{\log^{2r}x}\ &\leq t^r\cdot\frac{\log k}{\log t}\nonumber\\
&<\ t^r\cdot\frac{2\log x}{\log p}\nonumber\\
\frac{p}{(\log x)^2}\ &<\ t\cdot\left(\frac{2\log x}{\log x/3}\right)^{1/r}\nonumber\\
p\ &<\ t\cdot 6^{1/r}(\log x)^2.
\end{align}
Therefore, we have
\begin{align}
\frac{p}{6^{1/r}(\log x)^2}\ <\ t\ <\ p.
\end{align}
Recall that $k=t\cdot u$ and $p/6^{1/r}<q<3^{1/r}p$. If $p\geq x^{2/5}$, then
\begin{align}\label{boundofum}
u\ \leq\ \frac{x}{pt}\ \leq\ \frac{6^{1/r}x(\log x)^2}{p^2}\ &=\ O(x^{1/5}(\log x)^2)\nonumber\\
m\ \leq\ \frac{x}{q}\ \ll\ \frac{x}{p}\ &\leq\ x^{3/5},
\end{align}
which means that the number of uniquely determined $n$ is at most $O(x^{4/5}(\log x)^2)$, a number absorbable by error estimate. Therefore, we need to consider only
\begin{align}
x^{1/3}\ <\ p\ <\ x^{2/5}.
\end{align}
Now, recall that $k=t\cdot u$. Let $u=v\cdot w$, where $v=P(u)$. We have the following equations:
\begin{align}
p\cdot k+1\ &=\ q\cdot m\nonumber\\
p^r+f_r(k)&=\ q^r+f_r(m).
\end{align}
Then we have
\begin{align}
p^r+t^r+f_r(u)\ &=\ \left(\frac{pk+1}{m}\right)^r+f_r(m)\nonumber\\
(upt+1)^r- (mp)^r-(mt)^r\ &=\ m^r(f_r(u)-f_r(m)).
\end{align}
When $r=1$, Pomerance \cite{Pom} demonstrated the following.
\begin{align}\label{r=1factorization}
upt-mp-mt\ &=\ m(f(u)-f(m))-1\nonumber\\
(up-m)(ut-m)\ &=\ um(f(u)-f(m))-u+m^2.
\end{align}
Given $u,m$, the number of choices of $t$, and thus for $n$, is determined by the number of divisors of $um(f(u)-f(m))-u+m^2$, and is at most
\begin{align}
\tau\left(um(f(u)-f(m))-u+m^2\right)\ &\leq\ x^{o(1)},
\end{align}
where $\tau(x)$ denotes the divisor function. Let $m=s\cdot z$, where $s=P(m)$. Recall $t=P(k)$. We first show that the number of $n$ with $t,s\leq x^{1/6}$ can be absorbed into the error estimate. In fact, the number of triples $p,q,t$ is at most $O(x^{2/5}\cdot x^{2/5}\cdot x^{1/6})=O(x^{29/30})$, and since $p,q,t\geq x^{1/\log\log x}\gg (\log x)^2$, the number of $n$ up to $x$ for which $t,s\leq x^{1/6}$ is at most $O(x^{29/30}(\log x)^2)$. Thus, we may assume that at least one of $s,t$ is greater than $x^{1/6}$. \\
\indent Recall $u=v\cdot w$ where $v=P(u)$. First, we consider $v> x^{1/6}$. We can rewrite \eqref{r=1factorization} as
\begin{align}
(vwp-m)(vwt-m)\ &=\ wmv^2+\left((f(w)-f(m))mw-w\right)v+m^2.
\end{align}
Pomerance \cite{Pom} proved the following.
\begin{lem}\label{lem:quadratic}
Let $A,B,C$ be integers with $\gcd(A,B,C)=1$, $D:=B^2-4AC\neq 0$, and $A\neq 0$. Let $M_0$ be the maximum value of $|At^2+Bt+C|$ on the interval $[1,x]$. Let $M=\max\{M_0,|D|,x\}$ and let $\mu=\lceil{\log M/\log x}\rceil$ and assume $\mu\leq \frac{1}{7}\log\log x$, then
\begin{align}
\sum_{n\leq x}\tau\left(|An^2+Bn+C|\right)\ &\leq\ x(\log x)^{2^{3\mu+1}+4}
\end{align}
holds uniformly for $x\geq x_0$, where $x_0$ is an absolute constant.
\end{lem}
In our case, let $A=wm,B=\left((f(w)-f(m))mw-w\right), C=m^2$. Both $\gcd(A,B,C)=1$ and $D\neq 0$ are easily verified since $\gcd(w,m)=1$. Moreover, recall that $t>\frac{p}{6^{1/r}(\log x)^2}$, we have
\begin{align}
u\ \leq\ \frac{x}{pt}\ \leq\ \frac{6x(\log x)^2}{p^2}\ &\leq\ 6x^{1/3}(\log x)^2\nonumber\\
w\ =\ \frac{u}{v}\ \leq\ \frac{u}{x^{1/6}}\ &\leq\ 6x^{1/6}(\log x)^2\nonumber\\
v\ &\leq\ \frac{6x^{1/3}(\log x)^2}{w}\nonumber\\
m\ \leq\ \frac{x}{q}\ &\ll\ x^{2/3}.
\end{align}
Then $M_0\ll x^{4/3}(\log x)^2$. It follows from the lemma that
\begin{align}\label{quadsum}
\sum_{v\leq (6x^{1/3}(\log x)^2)/w}\tau(|Av^2+Bv+C|)\ &\ll\ \frac{6x^{1/3}(\log x)^c}{w},
\end{align}
where $c$ is a positive constant. Moreover, if $x^{1/3}<p\leq x^{1/3}(\log x)^{c+5}$, $\#\mathcal{R}_{(1,-1)}(x)$ is at most
\begin{align}
\sum_{\substack{x^{1/3}<p\leq x^{1/3}(\log x)^{c+5}\\p/6<q<3p}}\left(1+\frac{x}{pq}\right)\ &\ll\ O\left(x^{2/3}(\log x)^{2c+10}\right)+x\sum_{x^{1/3}<p\leq x^{1/3}(\log x)^{c+5}}\frac{1}{p}\sum_{p/6<q<3p}\frac{1}{q}\nonumber\\
&\ll\ O(x^{2/3}(\log x)^{2c+10})+O\left(\frac{x\log\log x}{(\log x)^2}\right)\nonumber\\
&\ll\ O\left(\frac{x\log\log x}{(\log x)^2}\right),
\end{align}
which is small enough to be absorbed by the error estimate. If $p>x^{1/3}(\log x)^{c+5}$, then $m\ll\frac{x}{p}\ll x^{2/3}/(\log x)^{c+5}$ and $w\leq \frac{u}{v}\leq \frac{u}{x^{1/6}}\leq 6x^{1/6}/(\log x)^{2c+8}$. Then, summing \eqref{quadsum} over all choices of $w,m$, the quantity is less than $O(x/(\log x)^2)$, which is indeed negligible. \\
\indent Finally, recall that $m=s\cdot z$ where $s=P(m)$. We consider the case where $s> x^{1/6}$. From \eqref{r=1factorization} we have
\begin{align}\label{m}
(pu-sz)(tu-sz)\ &=\ (z^2-uz)s^2+(f(u)-f(z))uzs-u.
\end{align}
Similarly, let $p\geq x^{1/3}(\log x)^{c+5}$, then $u\leq\frac{x}{pt}\leq\frac{6x(\log x)^2}{p^2}\leq x^{1/3}/(\log x)^{2c+8}$. Moreover,
\begin{align}
z\ \leq\ \frac{x}{qs}\ \ll\ \frac{x^{2/3}}{x^{1/6}}\ &\ll\ x^{1/2}\nonumber\\
s\ \leq\ \frac{m}{z}\ &\ll\ x^{2/3}/z.
\end{align}
Therefore, considering the ranges of $s,z,u$ and arguing as above, the number of $n$ arising from \eqref{m} is less than $O(x/(\log x)^2)$, so this case is also negligible. \\
\indent Recall Lemma \ref{thm:xdeltapn} and Lemma \ref{thm:1minusepsilon}: when $r\geq 1$, $P(n)^r$ is the dominating term of $f_r(n)$, and the larger $r$ is, the closer $f_r(n)$ is to $P(n)^r$; meanwhile, $P(n)$ and $P(n+1)$ are usually at least a factor of $x^{\delta}$ apart, suggesting the increasing rarity of $r-$th power \textit{Ruth-Aaron} numbers as $r$ increases. Therefore, although we are unable to generalize the quadratic formula $|Ax^2+Bx+C|$ to a higher power\footnote{For example, when $r=2$, we have a factorization similar to \eqref{r=1factorization}: $\left(u^2pt+u-m^2+ump-umt\right)\cdot\left(u^2pt+u-m^2-ump+umt\right)=\ (um)^2(f_2(u)-f_2(m))+m^2(m^2-2u)$. However, to tackle $r=2$ with an approach similar to the one introduced in this section, a result on the sum of divisors of quartic polynomials appears necessary.} at this point, we can substantiate firmly that when $r\geq 1$, the number of $r-$th power \textit{Ruth-Aaron} numbers up to $x$ with $x^{1/3}<p<x^{2/5}$ is much less than the estimate that follows from Lemma \ref{lem:quadratic}. Therefore, when $r\geq 1$,
\begin{align}
\#\mathcal{R}_{(1,-1)}^r(x)\ &=\ O\left(\frac{x(\log\log x)^3(\log\log\log x)}{(\log x)^2}\right).
\end{align}
\end{proof}
\begin{rek}
In our proof of Theorem \ref{thm:generalizednumberofn}, one crucial result we applied is that of De Bruijn \cite{Bru} in Lemma \ref{lem:lowerboundofp}. This result, which states that for all but $O(x/(\log x)^2)$ of the $n\leq x$ we have $P(n),P(n+1)>x^{1/\log\log x}$, lays the groundwork for the major arguments in the rest of the proof. It serves as a lower bound for $p$, and helps us obtain tighter bounds on $q$ and $t$ in relation to $p$, which we rely on when counting the number of $n$ up to $x$.
\end{rek}
We present a table of $2-$nd power \textit{Ruth-Aaron} numbers below $5\cdot 10^7$:
\begin{table}[H]
\centering
\begin{tabular}{cc}
\hline
$n$ & $f_2(n)=f_2(n+1)$ \\
\hline
6371184 & 40996\\
16103844 & 22685\\
49214555 & 103325\\
\hline
\end{tabular}
\caption{$2-$nd power \textit{Ruth-Aaron} numbers not exceeding $5\cdot10^7$.}
\label{tab:my_label}
\end{table}
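The table above can be reproduced by a direct sieve-based search. The Python sketch below (ours, illustrative only) computes $f_r(n)$ for all $n$ up to a bound $N$ and reports the $r-$th power \textit{Ruth-Aaron} numbers it finds; setting $r=1$ with a small $N$ recovers classical examples such as $n=714$, while for $r=2$ no example appears below $10^6$, consistent with the table (reproducing the full table requires $N=5\cdot10^7$ and correspondingly more time and memory).
\begin{verbatim}
# Sieve-based search for r-th power Ruth-Aaron numbers up to N (sketch only).
def f_r_values(N, r):
    f = [0] * (N + 2)
    rem = list(range(N + 2))        # rem[n] = part of n not yet factored
    for p in range(2, N + 2):
        if rem[p] == p:             # p is prime: no smaller prime divided it
            pr = p ** r
            for m in range(p, N + 2, p):
                while rem[m] % p == 0:
                    rem[m] //= p
                    f[m] += pr      # add p^r once per factor of p
    return f

N, r = 10**6, 2                     # set N = 5 * 10**7 to reproduce the table
f = f_r_values(N, r)
print([n for n in range(2, N + 1) if f[n] == f[n + 1]])
\end{verbatim}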
\section{Proof of Theorem \ref{thm:infinitudeofr>0}}\label{sec:infn}
In order to prove Theorem \ref{thm:infinitudeofr>0}, we need to first introduce a special case of the Catalan Conjecture which we can prove directly.
\begin{thm}[Catalan Conjecture]
The only solution in natural numbers $a,b,x,y>1$ of
\begin{align}
a^x-b^y\ &=\ 1
\end{align}
is $(x,y)=(2,3)$, where $a=3$ and $b=2$.
\end{thm}
The Catalan Conjecture was proved by Mihăilescu in 2002.
\begin{lem}[Special case of the Catalan Conjecture]\label{lem:applyingfnpn}
The number of $n\leq x$ with $P(n)\leq 3$ is $O((\log x)^2)$. Meanwhile, the largest $n$ with $P(n),P(n+1)\leq 3$ is 8.
\end{lem}
\begin{proof}
First, we prove that the number of $n$ up to $x$ with $n=2^a\cdot 3^b$, where $a,b$ are nonnegative integers, is $O((\log x)^2)$. In fact, since there are at most $\log_2x+1$ powers of $2$ and at most $\log_3x+1$ powers of $3$ not exceeding $x$, there are $O((\log x)^2)$ such $n\leq x$. Meanwhile, because $\gcd(n,n+1)=1$, when $P(n),P(n+1)\leq 3$, either $n=2^a,n+1=3^b$, or $n=3^b,n+1=2^a$. \\
\indent Case (i): $3^b=2^a+1$. Let the order of $2$ modulo $3^b$ be $d$. We have,
\begin{align}
2^a\ &\equiv\ -1\ (\textrm{mod }3^b)\nonumber\\
2^{2a}\ &\equiv\ 1\ (\textrm{mod }3^b)\nonumber\\
2^d\ &\equiv\ 1\ (\textrm{mod }3^b).
\end{align}
Then we have
\begin{align}\label{d}
d\ &|\ \varphi(3^b)\ =\ 2\cdot 3^{b-1}\nonumber\\
d\ &\nmid\ a,\ d\ |\ 2a.
\end{align}
It's obvious that $2\ |\ d$, then \eqref{d} tells us that $d=2a$ (or $d=2$, in which case $(a,b)=(1,1)$ is the only solution), and that $d=2\cdot 3^k$ for some nonnegative integer $k\leq b-1$. Thus, $a=3^k$. When $k=0$, $(a,b)=(1,1)$; when $k\geq 1$, we have
\begin{align}
2^{3^k}+1\ &=\ (2^{3^{k-1}}+1)(2^{2\cdot 3^{k-1}}-2^{3^{k-1}}+1)\nonumber\\
2^{2\cdot 3^{k-1}}-2^{3^{k-1}}+1\ &\equiv\ \left((-1)^{2}\right)^{3^{k-2}}-(-1)^{3^{k-2}}+1\nonumber\\
&\equiv\ 1+1+1\nonumber\\
&\equiv\ 3\ (\textrm{mod }9),
\end{align}
which means $2^{3^k}+1$ is a power of $3$ only when $2^{2\cdot 3^{k-1}}-2^{3^{k-1}}+1$ is $3$. Thus, the largest solution to $3^b=2^a+1$ is $(a,b)=(3,2)$. \\
\indent Case (ii): $2^a=3^b+1$. Then $2^a\equiv 1\ (\textrm{mod }3)$, which means $2|a$, since $2$ is the order of $2$ modulo $3$; thus, let $a=2a_0$, where $a_0$ is a nonnegative integer. Then $3^b=2^a-1=(2^{a_0}+1)(2^{a_0}-1)$. Because $2^{a_0}-1$ and $2^{a_0}+1$ are both odd and differ by $2$, they are relatively prime, so each must be a power of $3$; the only powers of $3$ that differ by $2$ are $1$ and $3$, giving $a_0=1$. Hence the only solution in this case is $(a,b)=(2,1)$. Therefore, the largest $n$ with $P(n),P(n+1)\leq 3$ is $8$.
\end{proof}
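The lemma is also easy to confirm by brute force; the short Python check below (illustrative only) lists every $n<10^6$ for which both $n$ and $n+1$ have no prime factor exceeding $3$.
\begin{verbatim}
# Direct check: n with P(n) <= 3 and P(n+1) <= 3  (illustrative sketch).
def three_smooth(n):
    for p in (2, 3):
        while n % p == 0:
            n //= p
    return n == 1

print([n for n in range(1, 10**6) if three_smooth(n) and three_smooth(n + 1)])
# output: [1, 2, 3, 8] -- the largest such n is 8, as the lemma states
\end{verbatim}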
Now recall Theorem \ref{thm:infinitudeofr>0}.
\infinitude*
\begin{proof}
It suffices to show that there exists an infinite, strictly decreasing sequence of positive values of $r$, each admitting an $r-$th power \textit{Ruth-Aaron} number. Let $n$ be of the form $n=p^2-1$, where $p$ is a prime. Then $n+1=p^2$. We claim that for any $n_0$ that is an $r_0-$th power \textit{Ruth-Aaron} number, we can always find $n>n_0$ that is an $r-$th power \textit{Ruth-Aaron} number where $0<r<r_0$. \\
\indent We define a function of $r$ for any fixed $n$:
\begin{align}
g_n(r)\ &=\ f_r(n+1)-f_r(n).
\end{align}
We want to show that $g_n(r)$ increases on $r>0$ and has one and only one root for a fixed $n$. We will use the function's continuity to show that the equation has at least one root, and its monotonicity and values at $r=0$ and $r=1$ to prove that the function has one and only one root, and it's between $0$ and $1$. First, we have for prime $q$,
\begin{align}
g_n(r)\ &=\ f_r(n+1)-f_r(n)\nonumber\\
&=\ 2\cdot p^r-\sum_{q|p^2-1} q^r\nonumber\\
&=\ 2\cdot p^r-2\cdot 2^r-\sum_{q|(p^2-1)/4}q^r.
\end{align}
It's obvious that $g_n(r)$ is continuous. Meanwhile,
\begin{align}
g_n(0)\ &=\ 2-2-\sum_{q|(p^2-1)/4}1\nonumber\\
&\ <0.
\end{align}
Because the function $\frac{x}{\log x}$ increases on $x\geq e$, Lemmas \ref{lem:applyingfnpn} and \ref{lem:generalizedfnpn} give that, for all but $O((\log x)^2)$ of the $n\leq x$ (namely those with $P(n)\leq 3$), $f_1(n)\leq P(n)\log n/\log P(n)\leq n$ holds. Meanwhile, if $p-1$ or $p+1$ happens to be of the form $2^a\cdot 3^b$, we simply drop this $p$ and consider the next prime and the corresponding $n=p^2-1$; this costs us nothing, since there are $O(x/\log x)$ primes up to $x$ but only $O((\log x)^2)$ integers of that form. We thus have
\begin{align}
g_n(1)\ &=\ 2p-f_1(p-1)-f_1(p+1)\nonumber\\
&>\ 2p-(p-1)-(p+1)\nonumber\\
&=\ 0.
\end{align}
Therefore, due to the continuity of function $g_n(r)$, for any fixed $n$, there must exist at least one $r$ with $f_r(n+1)=f_r(n)$. We proceed to prove that $g_n(r)>0$ on $r\geq 1$. If this claim holds, then all \textit{Ruth-Aaron} $r$ for $n=p^2-1$ must satisfy $r\in(0,1)$. In fact, let $\frac{p-1}{2}=\prod_{i=1}^{k_1}s_i^{a_i}$ and $\frac{p+1}{2}=\prod_{i=1}^{k_2}t_i^{b_i}$, where $s_i,t_i$ are distinct prime factors of $\frac{p-1}{2}$ and $\frac{p+1}{2}$ respectively. Because $\frac{p-1}{2},\frac{p+1}{2}$ are consecutive thus relatively prime, we have
\begin{align}\label{grderivative}
g_n(r)\ &=\ f_r(n+1)-f_r(n)\nonumber\\
&=\ 2\cdot p^r-2\cdot 2^r-\sum_{i=1}^{k_1}a_i s_i^r-\sum_{i=1}^{k_2}b_it_i^r\nonumber\\
g_n'(r)\ &=\ 2p^r\log p-2\cdot 2^r\log 2-\sum_{i=1}^{k_1}\left(a_is_i^r\log s_i\right)-\sum_{i=1}^{k_2}\left(b_it_i^r\log t_i\right)\nonumber\\
&>\ 2p^r\log p-2\cdot 2^r\log 2-\sum_{i=1}^{k_1}\left(\left(\frac{p-1}{2}\right)^ra_i\log s_i\right)-\sum_{i=1}^{k_2}\left(\left(\frac{p+1}{2}\right)^rb_i\log t_i\right)\nonumber\\
&=\ 2p^r\log p-2\cdot 2^r\log 2-\left(\frac{p-1}{2}\right)^r\log \frac{p-1}{2}-\left(\frac{p+1}{2}\right)^r\log \frac{p+1}{2}\nonumber\\
&>\ 2p^r\log p-2\cdot 2^r\log 2-\left(\frac{p-1}{2}\right)^r\log p-\left(\frac{p+1}{2}\right)^r\log p.
\end{align}
Consider the function $v(r)=(1+x)^r-x^r-1$, where $r\geq 1$ is a real number and $x>0$. Because $v'(r)=(1+x)^r\log(1+x)-x^r\log x>x^r(\log (x+1)-\log x)>0$ and $v(1)=0$, we have $(1+x)^r\geq 1+x^r$ when $r\geq 1$, with equality only at $r=1$. Thus,
\begin{align}
\left(\frac{p-1}{2}\right)^r+\left(\frac{p+1}{2}\right)^r\ &=\ \left(\frac{p-1}{2}\right)^r\left(1+\left(\frac{p+1}{p-1}\right)^r\right)\nonumber\\
&\leq\ \left(\frac{p-1}{2}\right)^r\left(\frac{2p}{p-1}\right)^r\nonumber\\
&=\ p^r.
\end{align}
Therefore,
\begin{align}
g_n'(r)\ &>\ 2p^r\log p-2\cdot 2^r\log 2-p^r\log p\nonumber\\
&=\ p^r\log p-2\cdot 2^r\log 2\nonumber\\
&>\ 0
\end{align}
when $p\geq 3$. Therefore, when $r\geq 1$, $g_n(r)$ reaches its minimum at $r=1$, and $g_n(1)>0$, which means all \textit{Ruth-Aaron} $r$ with $n=p^2-1$ are between 0 and 1. We continue to show that $g_n(r)$ increases on $(0,1)$. If this holds, then there exists exactly one \textit{Ruth-Aaron} $r$ for a fixed $n$. From \eqref{grderivative} we have
\begin{align}\label{grderivativeinvolved}
g_n'(r)\ &>\ 2p^r\log p-2\cdot 2^r\log 2-\left(\frac{p-1}{2}\right)^r\log \frac{p-1}{2}-\left(\frac{p+1}{2}\right)^r\log \frac{p+1}{2}.
\end{align}
We want to show that $\left(\frac{p-1}{2}\right)^r\log\frac{p-1}{2}+\left(\frac{p+1}{2}\right)^r\log\frac{p+1}{2}<2\cdot p^r\log\frac{p}{2}$. From the Cauchy-Schwarz inequality \cite{Wei},
\begin{align}
\left(\sum_{i=1}^ma_i^2\right)\cdot\left(\sum_{i=1}^mb_i^2\right)\ \geq\ \left(\sum_{i=1}^ma_ib_i\right)^2,
\end{align}
and equality holds only when $a_1/b_1=a_2/b_2=\dots=a_m/b_m$. Applying this result, we have
\begin{eqnarray}\label{cauchy}
& &\left(\left(\frac{p-1}{2}\right)^r\log\frac{p-1}{2}+\left(\frac{p+1}{2}\right)^r\log\frac{p+1}{2}\right)^2\nonumber\\ & & \ \ \ \ \ \leq\ \left(\left(\frac{p-1}{2}\right)^{2r}+\left(\frac{p+1}{2}\right)^{2r}\right)\left(\log^2\frac{p-1}{2}+\log^2\frac{p+1}{2}\right).
\end{eqnarray}
Meanwhile, from Jensen's inequality \cite{Wei}, if a function $f$ is concave and $k_1,k_2,\dots,k_m$ are positive reals summing up to $1$, then
\begin{align}
f\left(\sum_{i=1}^mk_ix_i\right)\ &\geq\ \sum_{i=1}^mk_if(x_i),
\end{align}
and equality holds if and only if $x_1=x_2=\cdots=x_m$.
Because the function $x^r$ is concave on $r<1$, we have
\begin{align}
\left(\frac{p-1}{2}\right)^{2r}+\left(\frac{p+1}{2}\right)^{2r}\ &<\ 2\left(\frac{((p-1)/2)^2+((p+1)/2)^2}{2}\right)^{r}\nonumber\\
&<\ 2\left(\frac{p-1}{2}+\frac{p+1}{2}\right)^{2r}\nonumber\\
&=\ 2p^{2r}.
\end{align}
Moreover, because the function $(\log x)^2$ is concave on $x>e$, we have for $p\geq 7$,
\begin{align}
\log^2\frac{p-1}{2}+\log^2\frac{p+1}{2}\ &<\ 2\log^2\frac{\left(\frac{p-1}{2}+\frac{p+1}{2}\right)}{2}\nonumber\\
&=\ 2\log^2\frac{p}{2}.
\end{align}
Therefore, from \eqref{cauchy},
\begin{align}
\left(\left(\frac{p-1}{2}\right)^r\log\frac{p-1}{2}+\left(\frac{p+1}{2}\right)^r\log\frac{p+1}{2}\right)^2\ &<\ 2p^{2r}\cdot2\log^2\frac{p}{2}\nonumber\\
\left(\frac{p-1}{2}\right)^r\log\frac{p-1}{2}+\left(\frac{p+1}{2}\right)^r\log\frac{p+1}{2}\ &<\ 2p^r\log\frac{p}{2}.
\end{align}
Thus, for \eqref{grderivativeinvolved},
\begin{align}
g_n'(r)\ &>\ 2\cdot p^r\log p-2\cdot 2^r\log 2-2\cdot p^r\log\frac{p}{2}\nonumber\\
&=\ 2\log2(p^r-2^r)\nonumber\\
&>\ 0.
\end{align}
Therefore, $g_n(r)$ increases on $r>0$. Meanwhile, because $g_n(0)<0$ and $g_n(1)>0$, there exists one and only one $r$ with $f_r(n+1)=f_r(n)$, and $0<r<1$. As mentioned, we want to show that for any $n_0$ that is an $r_0-$th power \textit{Ruth-Aaron} number, we can always find $n>n_0$ that is an $r-$th power \textit{Ruth-Aaron} number, where $0<r<r_0$. Now, because $g_n(r)$ increases on $r>0$, it suffices to show that we can always find $n>n_0$ with $f_{r_0}(n+1)>f_{r_0}(n)$. We consider the function $x^{r_0}/\log x$, which increases for $x>e^{1/r_0}$; taking $n$ (and hence $P(n)$) greater than $e^{1/r_0}$, we thus have
\begin{align}
\frac{P(n)^{r_0}}{\log P(n)}\ &<\ \frac{n^{r_0}}{\log n}
\end{align}
holds. Meanwhile, because $\frac{p-1}{2}$ and $\frac{p+1}{2}$ are consecutive integers, when they are both greater than $8$, at most one of them is of the form $2^a\cdot 3^b$, and the number of $n\leq x$ with $n=2^a\cdot 3^b$ is at most $O((\log x)^2)$ (Lemma \ref{lem:applyingfnpn}), which means we can apply Lemma \ref{lem:generalizedfnpn} to most $n$. Thus,
\begin{align}
f_{r_0}(n+1)-f_{r_0}(n)\ &=\ 2\cdot p^{r_0}-2\cdot 2^{r_0}-f_{r_0}\left(\frac{p-1}{2}\right)-f_{r_0}\left(\frac{p+1}{2}\right)\nonumber\\
&>\ 2\cdot p^{r_0}-2\cdot 2^{r_0}-P\left(\frac{p-1}{2}\right)^{r_0}\log\left(\frac{p-1}{2}\right)/\log P\left(\frac{p-1}{2}\right)\nonumber\\
&-\ P\left(\frac{p+1}{2}\right)^{r_0}\log\left(\frac{p+1}{2}\right)/\log P\left(\frac{p+1}{2}\right)\ (\textrm{Lemma \ref{lem:generalizedfnpn}})\nonumber\\
&>\ 2\cdot p^{r_0}-2\cdot 2^{r_0}-\left(\frac{p-1}{2}\right)^{r_0}-\left(\frac{p+1}{2}\right)^{r_0}\nonumber\\
&>\ 2\cdot\left( p^{r_0}- 2^{r_0}-\left(\frac{p+1}{2}\right)^{r_0}\right)\nonumber\\
&>\ 0
\end{align}
when $p$ is sufficiently large. Therefore, when $n=p^2-1$ is sufficiently large, $f_{r_0}(n+1)>f_{r_0}(n)$, which means the exponent $r$ with $f_r(n+1)=f_r(n)$ is less than $r_0$, and hence we can always construct an infinite decreasing sequence of such $r$, completing our proof.
\end{proof}
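To illustrate the construction in the proof numerically, the following sketch (assuming SymPy is available; it is an illustration and not part of the argument) takes a prime $p$, sets $n=p^2-1$, and solves $f_r(n+1)=f_r(n)$ for the unique $r\in(0,1)$ by bisection.
\begin{verbatim}
from sympy import factorint

def f_r(m, r):
    # f_r(m): sum of p^r over the prime factors p of m, with multiplicity
    return sum(e * p**r for p, e in factorint(m).items())

def ruth_aaron_exponent(p, tol=1e-12):
    n = p * p - 1
    g = lambda r: f_r(n + 1, r) - f_r(n, r)   # increasing in r, g(0) < 0 < g(1)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

for p in (11, 101, 1009, 10007):
    print(p, ruth_aaron_exponent(p))
\end{verbatim}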
\section{Future Work}\label{sec:fut}
There are a few directions for future work on the \textit{Ruth-Aaron} function. First of all, although we improved Erd\H{o}s and Pomerance's result \cite{ErPom} by a factor of $(\log\log x)^3(\log\log\log x)/\log x$, our bound is still far from the actual number of $r-$th power \textit{Ruth-Aaron} numbers up to $x$, as they appear to be extremely rare. Therefore, further research might work towards tightening the existing bounds, as in many cases our estimation is relatively loose. Second, future work might consider placing the \textit{Ruth-Aaron} function in a linear equation. Inspired by Luca and Stănică's result \cite{LuSta} on the number of integers up to $x$ with $\varphi(n)=\varphi(n-1)+\varphi(n-2)$ (see Appendix \ref{app:arith}), we apply the Fibonacci equation to \textit{Ruth-Aaron} numbers.
\begin{defn}
A \textit{Rabonacci} number $n$ is an integer which satisfies
\begin{align}
f(n)\ &=\ f(n-1)+f(n-2).
\end{align}
Similarly, an $r-$th power \textit{Rabonacci} number $n$ satisfies
\begin{align}
f_r(n)\ &=\ f_r(n-1)+f_r(n-2).
\end{align}
\end{defn}
It's obvious that $\mathcal{R}_{(1,1,-1)}(x)+2$ (where $+2$ indicates adding $2$ to every element in the set $\mathcal{R}_{(1,1,-1)}(x)$) is the same as the set of \textit{Rabonacci} numbers not exceeding $x$. Meanwhile, it seems that \textit{Rabonacci} numbers might be even rarer than \textit{Ruth-Aaron} numbers, as there are $42$ \textit{Rabonacci} numbers below $10^6$, in contrast to $149$ \textit{Ruth-Aaron} numbers within the same range (a short computational check of these counts is given after the proposition below). Using an approach similar to the proof of Theorem \ref{thm:oxlnx1me}, we are able to present a partial result on the upper bound of the number of \textit{Rabonacci} numbers up to $x$.
\begin{proposition}
When $P(n)\leq x^{1/3}$ and $f(n)>f(n-1)\geq f(n-2)$, the number of \textit{Rabonacci} numbers up to $x$ is at most
\begin{align}
\#\mathcal{R}_{(1,1,-1)}(x)\ &=\ O\left(\frac{x(\log\log x\log\log\log x)^2}{(\log x)^2}\right).
\end{align}
\end{proposition}
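The counts of \textit{Rabonacci} and \textit{Ruth-Aaron} numbers below $10^6$ quoted above can be checked with a short computation. The sketch below uses a smallest-prime-factor sieve, taking $f(n)$ to be the sum of the prime factors of $n$ counted with multiplicity and $f(1)=0$.
\begin{verbatim}
LIMIT = 10**6

spf = list(range(LIMIT + 2))                 # smallest-prime-factor sieve
for p in range(2, int((LIMIT + 1) ** 0.5) + 1):
    if spf[p] == p:                          # p is prime
        for q in range(p * p, LIMIT + 2, p):
            if spf[q] == q:
                spf[q] = p

f = [0] * (LIMIT + 2)                        # f(n): sum of prime factors
for n in range(2, LIMIT + 2):
    f[n] = spf[n] + f[n // spf[n]]

ruth_aaron = sum(1 for n in range(2, LIMIT) if f[n] == f[n + 1])
rabonacci = sum(1 for n in range(4, LIMIT) if f[n] == f[n - 1] + f[n - 2])
print(ruth_aaron, rabonacci)                 # the text reports 149 and 42
\end{verbatim}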
Moreover, Erd\H{o}s has conjectured the following regarding the infinitude of \textit{Ruth-Aaron} numbers.
\begin{conj}[Erd\H{o}s]
There are infinitely many $n$ with
\begin{align}
f(n)\ &=\ f(n+1).
\end{align}
\end{conj}
Meanwhile, inspired by his conjecture on the Euler Totient Function \cite{Er}, we have
\begin{conj}
There exists, for every $k\geq 1$, consecutive integers $n,n+1,\dots,n+k$ such that
\begin{align}\label{ktuple2}
f(n)\ &=\ f(n+1)\ =\ \cdots\ =\ f(n+k).
\end{align}
\end{conj}
So far, we are able to confirm that there is at least one $n$ for each of $k=1$ and $k=2$. In fact, as of now, only two integers $n$ are known to satisfy the equation for $k=2$. Due to the rarity of \textit{Ruth-Aaron} triples, we conjecture that
\begin{conj}
There are finitely many integers $n$ with
\begin{align}
f(n)\ &=\ f(n+1)\ =\ f(n+2).
\end{align}
\end{conj}
It's obvious that in \eqref{ktuple2}, as $k$ increases, the number of $n$ satisfying the equation decreases significantly. Therefore, if we could prove the conjecture for $k=2$, then for all $k\geq 2$, the equation $f(n)=f(n+1)=\cdots=f(n+k)$ has only finitely many solutions.\\
\indent Clearly we are unable to prove the conjecture at this point; however, considering the scarcity of \textit{Ruth-Aaron} pairs, we can show that the conjecture holds true if we assume a stronger bound on $\#\mathcal{R}_{(1,-1)}(x)$ when estimating the number of integers up to $x$ with $f(n)=f(n+1)=f(n+2)$. In fact, if $\#\mathcal{R}_{(1,-1)}(x)\leq O\left({x^{(1-\epsilon)/2}}\right)$ for some fixed $0<\epsilon<1$, and \textit{Ruth-Aaron} numbers are uniformly distributed, then standard probabilistic models predict that the number of \textit{Ruth-Aaron} triples (and hence quadruples and higher) is finite. We could create a more sophisticated model that takes into account the decay in the density of \textit{Ruth-Aaron} numbers, which would also give a finite bound on the number of triples; we present the simple model below to highlight the idea.
\begin{proposition}
If $\#\mathcal{R}_{(1,-1)}(x)\leq O\left({x^{(1-\epsilon)/2}}\right)$ for some fixed $0<\epsilon<1$, and \textit{Ruth-Aaron} numbers are uniformly distributed, then the number of \textit{Ruth-Aaron} triples is $O(1)$.
\end{proposition}
\begin{proof}
First, we consider the probability that an integer $n$ with $2^x\leq n< 2^{x+1}$ is a \textit{Ruth-Aaron} number. Because we are assuming a uniform distribution, this probability is
\begin{align}
O\left(\frac{2^{x(1-\epsilon)/2}}{2^x}\right)\ &=\ O\left(2^{-(1+\epsilon) x/2}\right).
\end{align}
Then the probability that both $n$ and $n+1$ are \textit{Ruth-Aaron} numbers is $O\left(2^{-(1+\epsilon)x}\right)$. Since there are $2^x-2$ consecutive integer triples in $[2^x,2^{x+1})$, the expected number of \textit{Ruth-Aaron} triples between $2^x$ and $2^{x+1}$ is at most
\begin{align}
O\left( (2^x-2)\cdot 2^{-(1+\epsilon)x}\right)\ &\ll\ O\left(2^x\cdot2^{-(1+\epsilon)x}\right)\nonumber\\
&=\ O\left(\frac{1}{2^{\epsilon x}}\right).
\end{align}
Therefore, the total number of integers with $f(n)=f(n+1)=f(n+2)$ is at most
\begin{align}
O\left(\sum_{x=1}^\infty\frac{1}{2^{\epsilon x}}\right)\ &\ll\ O(1),
\end{align}
which means there are finitely many \textit{Ruth-Aaron} triples if $\#\mathcal{R}_{(1,-1)}(x)$ has a rate of growth of $O\left(x^{(1-\epsilon)/2}\right)$ or lower, and \textit{Ruth-Aaron} numbers are uniformly distributed.
\end{proof}
\newpage
\subsection{Distributions of Order Datasets}
\label{append:distribution_orders}
Figure \ref{fig:order_distribution} shows the distributions of orders in the test data sets from 8:00 A.M. to 8:30 A.M.
\begin{figure*}[t!]
\centering
\subfigure[][{\scriptsize NYC}]{
\scalebox{0.65}[0.65]{\includegraphics{nyc_distribution_s.jpg}}
\label{fig:nyc_distribution}}
\subfigure[][{\scriptsize Chengdu}]{
\scalebox{0.65}[0.65]{\includegraphics{chengdu_distribution_s.jpg}}
\label{fig:chengdu_distribution}}
\subfigure[][{\scriptsize Xi'an}]{
\scalebox{0.65}[0.65]{\includegraphics{xian_distribution_s.jpg}}
\label{fig:xian_distribution}}
\caption{Order Distributions in NYC, Chengdu and Xi'an}
\label{fig:order_distribution}
\end{figure*}
The numbers of trips on the testing day are 282,255 for the NYC dataset, 238,868 for Chengdu, and 109,753 for Xi'an. We also analyze the distributions of the trip lengths in the three cities, as shown in Figure \ref{fig:trip_length_distribution}.
The lengths of trips in Chengdu are generally evenly distributed, but there are more than 1,000 long trips (longer than 45 km). The taxi trips in NYC mainly happen in the Manhattan district; thus, most trips are shorter than 15 km. For the Xi'an dataset, since the spatial area is relatively small, most trips are shorter than 10 km.
\begin{figure*}[t!]
\subfigure[][{\scriptsize Chengdu}]{
\scalebox{0.35}[0.35]{\includegraphics{chengdu_trip_length_distribution.jpg}}
\label{fig:chengdu_trip_length_distribution}}
\subfigure[][{\scriptsize NYC}]{
\scalebox{0.35}[0.35]{\includegraphics{nyc_trip_length_distribution.jpg}}
\label{fig:nyc_trip_length_distribution}}
\subfigure[][{\scriptsize Xi'an}]{
\scalebox{0.35}[0.35]{\includegraphics{xian_trip_length_distribution.jpg}}
\label{fig:xian_trip_length_distribution}}
\caption{\small Distribution of Trip Length in Different Cities}
\label{fig:trip_length_distribution}
\end{figure*}
\subsection{Relationship between Expression Error and the Uniformity of the Distribution}
\label{sec:realation_uniformity}
Expression error refers to the error caused by estimating the actual number of events $\lambda_{ij}$ in a HGrid $r_{ij}$ with the average $\bar{\lambda}_{ij}$ derived from the MGrid $r_{i}$. An uneven distribution of events within a MGrid therefore leads to a large expression error. We use $D_\alpha\left(N\right)$ to represent the degree of unevenness of the distribution of events within a MGrid. In the experiment, we set the parameter $m$ as $8\times 8$ with $n=16\times 16$, which means each MGrid has an area of $3.7578$ $km^2$. We then calculate the imbalance $D_\alpha\left(64\right)$ of the event distribution within each MGrid and the summation of the expression errors $E_e\left(i,j\right)$ of all HGrids $r_{ij}$ in the MGrid $r_{i}$, and plot the corresponding relationship between them as a scatter diagram.
Figure \ref{fig:d_alpha_e} shows that the expression error gradually increases with the imbalance of the event distribution within a MGrid. On the other hand, we find that many MGrids have $D_\alpha\left(64\right)<10$ because events are unevenly distributed across New York City, leaving several MGrids with very few events.
Figure \ref{fig:example2} shows the distributions of events in two different MGrids, where each point denotes a spatial event. The event distribution shown in Figure \ref{subfig:sub_example4} is uneven: there is a large empty area in the upper left corner, and a long main road with many events in the middle. On the contrary, the event distribution in Figure \ref{subfig:sub_example3} is more uniform. The $D_\alpha\left(64\right)$ of the grid in Figure \ref{subfig:sub_example4} is $25.87$ with an expression error of $39.90$, while the $D_\alpha\left(64\right)$ of the grid in Figure \ref{subfig:sub_example3} is $8.47$ with an expression error of $8.8$.
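The following sketch (an illustration with made-up counts, not the data behind the figures) computes this per-MGrid unevenness measure from the $m=64$ HGrid means of a single MGrid; a grid whose events are concentrated on a few HGrids gets a much larger value than an evenly covered one.
\begin{verbatim}
import numpy as np

def unevenness(alphas):
    # sum of absolute deviations of the HGrid means from their average,
    # i.e. the D_alpha-style measure restricted to one MGrid
    alphas = np.asarray(alphas, dtype=float)
    return np.abs(alphas - alphas.mean()).sum()

uniform_mgrid = np.full(64, 2.0)                   # evenly spread events
skewed_mgrid  = np.concatenate([np.full(8, 14.0),  # one busy road ...
                                np.zeros(56)])     # ... and empty blocks
print(unevenness(uniform_mgrid), unevenness(skewed_mgrid))   # 0.0 vs 196.0
\end{verbatim}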
\begin{figure}[t!]
\subfigure[][{\scriptsize a Grid with $D_\alpha=25.87$}]{
\scalebox{0.5}[0.5]{\includegraphics{not_distribution_s.jpg}}
\label{subfig:sub_example4}}
\subfigure[][{\scriptsize a Grid with $D_\alpha=8.8$}]{
\scalebox{0.5}[0.5]{\includegraphics{junyun_distribution_s.jpg}}
\label{subfig:sub_example3}}
\caption{\small Two instances of event distribution}
\label{fig:example2}
\end{figure}
\begin{figure}[t!]
\centering
\scalebox{0.5}[0.5]{\includegraphics{10_s.jpg}}
\caption{\small Effect of $D_{\alpha}\left(N\right)$ on $E_e\left(i,j\right)$}
\label{fig:d_alpha_e}
\end{figure}
\subsection{Results on HGrid Division}
The real error is defined on the HGrids. Thus, how to divide the whole space into HGrids becomes a crucial problem. We cannot guarantee that events are evenly distributed within a HGrid if the area of each HGrid is too large. On the other hand, the computational complexity of calculating the expression error will be too high if the area of each HGrid is tiny.
We calculate $D_\alpha\left(N\right)$ over the whole space with different values of $N$ based on Equation \ref{eq:d_alpha}. Then we plot the change in $D_\alpha\left(N\right)$ with respect to $N$ under two different ways of estimating $\alpha_{ij}$ (i.e., with one week or with three months of historical data).
\revision{
Figure \ref{fig:n_d_alpha} shows that $D_\alpha\left(N\right)$ increases with $N$ and that the growth rate of $D_{\alpha}\left(N\right)$ slows down once $N$ is greater than a turning point (approximately $76\times 76$), which indicates that the distribution of events within each HGrid is uniform when $N$ is greater than $76\times 76$. However, $D_{\alpha}\left(N\right)$ continues to grow when $N$ is greater than $76\times 76$ because $\alpha_{ij}$ is estimated from only one week or three months of samples. These subsequent increases in $D_{\alpha}\left(N\right)$ are mainly attributed to the inaccurate estimation of the value of $\alpha_{ij}$.
}
We set $n$ to $16\times16$ and keep increasing $m$ to figure out the influence of $N$ on the real error, model error and expression error. Figure \ref{fig:m_error} shows that the real error and the expression error increase as $m$ increases. \revision{When $m$ is large, the area of each HGrid decreases, leading to an inaccurate estimation of the value of $\alpha_{ij}$. Thus, the expression error and the real error continually increase. In this paper, we want to reduce the influence of such inaccuracy on the expression error, and make the expression error reflect the uneven distribution of events. As a result, we set $N=128\times128$ in our experiments, which allows us to reduce the influence of the inaccurate estimation of the value of $\alpha_{ij}$ and to guarantee that the HGrids are homogeneous.}
\begin{figure}[t!]\centering
\scalebox{0.30}[0.30]{\includegraphics{datasize_n.jpg}}
\caption{\small Effect of $N$ on $D_{\alpha}\left(N\right)$}
\label{fig:n_d_alpha}
\end{figure}
\begin{figure}[t!]
\scalebox{0.65}[0.65]{\includegraphics{fix_n_variable_N_s.jpg}}
\caption{\small Effect of $m$ on Expression Error, Model Error, Real Error}
\label{fig:m_error}
\end{figure}
\subsection{Results on Calculation of Expression Error}
We explore the performance of Algorithms \ref{algo:repre_algorithm} and \ref{algo:simple_repre_algorithm} on the calculation of the expression error. We set $N=128\times 128$, $n=16\times 16$ and $m=8\times8$. In this case, we calculate the expression error of a HGrid by Algorithms \ref{algo:repre_algorithm} and \ref{algo:simple_repre_algorithm}. Theorem \ref{the:arbitrary_precision} proves that the accuracy of the expression error calculation increases with $K$. However, the increase in $K$ is accompanied by a rapid increase in computing cost. Figure \ref{fig:cal_for_representation} shows that the cost of the most straightforward algorithm (i.e., with no optimizations) increases rapidly with $K$, and the calculation cost of Algorithm \ref{algo:repre_algorithm} is linearly related to $K$. However, the calculation time of Algorithm \ref{algo:simple_repre_algorithm} is always kept at a low level, which indicates the effectiveness of the algorithm.
\begin{figure}
\scalebox{0.40}[0.40]{\includegraphics{K_pre_error.jpg}}
\caption{\small Effect of $K$ on the efficiency of computing the expression error and on the accuracy of Algorithm \ref{algo:simple_repre_algorithm}}
\label{fig:cal_for_representation}
\end{figure}
Based on the results in Figure \ref{fig:cal_for_representation}, we generally set $K$ to 250 to obtain an accurate expression error while avoiding excessive computing cost.
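Algorithms \ref{algo:repre_algorithm} and \ref{algo:simple_repre_algorithm} themselves are not reproduced here. As a rough stand-in, the following Monte-Carlo sketch estimates the summed expression error of one MGrid under the additional assumption (ours, not necessarily the model used by the exact algorithms) that the $m$ HGrid counts are independent Poisson variables with means $\alpha_{ij}$.
\begin{verbatim}
import numpy as np

def expression_error_mc(alphas, samples=100_000, seed=0):
    # assumed model: HGrid counts are independent Poisson(alpha_ij)
    rng = np.random.default_rng(seed)
    alphas = np.asarray(alphas, dtype=float)
    m = alphas.size
    lam = rng.poisson(alphas, size=(samples, m))    # sampled actual counts
    bar = lam.sum(axis=1, keepdims=True) / m        # uniform per-HGrid estimate
    return np.abs(bar - lam).mean(axis=0).sum()     # sum of E_e(i, j) over j

print(expression_error_mc([0.2] * 64))              # nearly uniform MGrid
print(expression_error_mc([12.8] + [0.0] * 63))     # highly skewed MGrid
\end{verbatim}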
\subsection{Influence of $bound$ on the Performance of Iterative Method}
\begin{figure}[t!]
\begin{minipage}{0.46\linewidth}\centering
\scalebox{0.2}[0.2]{\includegraphics{bound.jpg}}
\caption{\small Effect of $bound$ on the Solution and the Cost of Algorithm \ref{algo:opg_3}}
\label{fig:bound}
\end{minipage}
\hspace{3ex}
\begin{minipage}{0.46\linewidth}\centering
\scalebox{0.22}[0.22]{\includegraphics{solution.jpg}}
\caption{\small Distribution of Optimal solutions in different time slots}
\label{fig:solution}
\end{minipage}
\end{figure}
\begin{figure}[t!]
\centering
\scalebox{0.7}[0.7]{\includegraphics{vary_datasize_s.jpg}}
\caption{\small Effect of the Size of Dataset on the Performance of Different Crowdsourcing Algorithm}
\label{fig:vary_datasize}
\end{figure}
Figure \ref{fig:bound} shows the effect of the selection of $bound$ on Algorithm \ref{algo:opg_3}. With the increase of $bound$, the probability that Algorithm \ref{algo:opg_3} finds the optimal solution increases gradually, and the cost of the algorithm also increases. In addition, Figure \ref{fig:solution} shows the distribution of the optimal selections of $n$ over the 48 time slots of a day, which shows that the optimal value of $n$ is 17 for most time slots.
\subsection{Effect of the size of dataset}
The size of the training dataset should be neither too large nor too small. For example, using three months of data for training can harm the performance, because the underlying distribution may change over such a long period; the estimation of $\alpha_{ij}$ then becomes inaccurate and the optimal $n$ cannot be found. Training with too small a dataset (e.g., one week) also degrades the performance, since the data are not sufficient to train the prediction models. As shown in Figure \ref{fig:vary_datasize}, the results are best when we use 4 weeks' data as the training set.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we propose a more fine-grained measure of prediction bias, namely the real error, and investigate how to minimize it. The real error is mainly composed of the expression error and the model error. The expression error is caused by using the order quantity of a large region to estimate the numbers of spatial events of its HGrids, while the model error is the inherent error of the prediction model. We show that the summation of the expression error and the model error is an upper bound of the real error. We analyze the expression error and the model error, and study their relationship with the number of MGrids. Through the above analysis, we propose two algorithms to minimize the real error as much as possible by minimizing its upper bound. Finally, we verify the effectiveness of our algorithms through experiments and analyze the role of the real error for spatiotemporal prediction models.
\section{Experimental Study}
\label{sec:experimental}
\subsection{Data Set}
We use real-world data to study the properties of the expression error and the model error.
\revision{\textbf{New York Taxi Trip Dataset.} New York Taxi and Limousine Commission (TLC) Taxi Trip Data \cite{nyc-web} includes the taxi orders in NYC. We use the Taxi Trip Dataset from January to May 2013 (i.e., January to April as training set, May 1st to 27th as validation set, and May 28th as test set). There are 282,255 orders in the test set. The size of the whole space is $23km\times 37km$ (i.e., \ang{-73.77}$\sim$\ang{-74.03}, \ang{40.58}$\sim$\ang{40.92}).} Since the number of other types of taxis in NYC is much smaller than that of yellow taxis, we only use the trip data of yellow taxis. Each order record contains the pick-up and drop-off locations, the pick-up timestamp, and the driver's profit.
\revision{\textbf{Chengdu Taxi Trip Dataset.} DiDi Chuxing GAIA Open Dataset \cite{gaia} provides taxi trips in Chengdu, China. We use the taxi trip records from November 1st, 2016 to November 25th, 2016 as training set, November 26th to 29th, 2016 as validation set and November 30th, 2016 as test data set. There are 238,868 orders in test set. The size of Chengdu is also $23km\times 37km$ (i.e., \ang{103.93}$\sim$\ang{104.19}, \ang{30.50}$\sim$\ang{30.84}).}
\revision{\textbf{Xi'an Taxi Trip Dataset.} DiDi Chuxing GAIA Open Dataset \cite{gaia} also provides a dataset of taxi trips in Xi'an, China. We use the taxi trip records from October 1st, 2016 to October 25th, 2016 as training data set, October 26th to 29th, 2016 as validation set and October 30th, 2016 as test set. There are 109,753 orders in test set. The size of Xi'an is $8.5km\times 8.6km$ (i.e., \ang{108.91}$\sim$\ang{109.00}, \ang{34.20}$\sim$\ang{34.28}).}
\revision{Please refer to Appendix A for the distributions of the datasets.}
\subsection{Experiment Configuration}
\revision{We use three prediction models to predict the numbers of future spatial events in different regions:}
\revision{\textbf{Multilayer Perceptron (MLP) \cite{rosenblatt1961principles}}: We use a neural network consisting of six fully connected layers. The numbers of hidden units of the layers are 1024, 1024, 512, 512, 256 and 256. When the number of MGrids is $n$, the model input has shape (8, $\sqrt{n}$, $\sqrt{n}$), which represents the event counts of all MGrids in the nearest eight time slots, and we use a flatten layer to map the original input to a vector of size $8\times n$ before it is fed into the model.}
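For concreteness, the following PyTorch-style sketch shows one way to realize this architecture. It is an illustration only: PyTorch itself, the ReLU activations and the final output layer with one value per MGrid are our assumptions rather than details stated above.
\begin{verbatim}
import torch
import torch.nn as nn

def build_mlp(n):                      # n = total number of MGrids
    # flatten the (8, sqrt(n), sqrt(n)) input into a vector of size 8 * n,
    # then apply the six fully connected layers described above
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(8 * n, 1024), nn.ReLU(),
        nn.Linear(1024, 1024), nn.ReLU(),
        nn.Linear(1024, 512),  nn.ReLU(),
        nn.Linear(512, 512),   nn.ReLU(),
        nn.Linear(512, 256),   nn.ReLU(),
        nn.Linear(256, 256),   nn.ReLU(),
        nn.Linear(256, n),             # assumed output head: one value per MGrid
    )

model = build_mlp(16 * 16)
out = model(torch.zeros(1, 8, 16, 16))   # out.shape == (1, 256)
\end{verbatim}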
\textbf{DeepST} \cite{zhang2017deep}: DeepST divides a day into 48 time slots (i.e., 30 minutes per time slot) and calculates inflow and outflow of the events. As a result, DeepST can calculate the number of events in the next time slot by predicting the inflow and outflow status of events in the next time slot. It uses three types of historical information: closeness, period and trend. Closeness expresses the number of events in the nearest eight time slots, period expresses the number of events at the same time slot of the previous eight days, and trend represents the number of events at the same time slot of the previous eight weeks. \revision{DeepST mainly utilizes the spatial information to predict the spatial events for next time slot.}
\revision{\textbf{Dmvst-Net} \cite{yao2018deep}: Dmvst-Net models the correlations between future demand and recent historical data via long short term memory (LSTM) and models the local spatial correlation via convolutional neural network (CNN). Moreover, Dmvst-Net models the correlations among regions sharing similar temporal patterns. Compared with DeepST, Dmvst-Net utilizes both spatial and temporal information, which leads to a better performance of the prediction model.}
Since the sizes of the model inputs for DeepST and Dmvst-Net differ across settings in the experiment, we need to map the original input to the same $shape$ through a conditional deconvolution layer to ensure that the model structure does not change significantly. When the number of MGrids is $n$, that is, when the input dimension of the model is $(2, \sqrt{n}, \sqrt{n})$, the kernel size $k$ and the stride $s$ of the deconvolution layer can be obtained through the following formula:
{\small\begin{eqnarray}
s&=&\left\lfloor \frac{shape}{\sqrt{n}-1}\right\rfloor \notag\\
k&=&shape-s\left(\sqrt{n}-1\right) \notag
\end{eqnarray}}
Here, we set $shape=128$ in our experiment. Finally, we add a convolution layer with the same stride and the same size as the deconvolution layer as the last layer of DeepST.
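To make this sizing rule concrete, the following sketch (not from the original implementation) computes the stride and kernel size for a few input side lengths $\sqrt{n}$ with $shape=128$; the last printed value checks that a transposed convolution with these parameters produces exactly $shape$ output cells per side.
\begin{verbatim}
# For an input of side sqrt(n) and a target side `shape`, a transposed
# convolution with stride s and kernel size k outputs s * (side - 1) + k cells.
def deconv_params(side, shape=128):
    s = shape // (side - 1)
    k = shape - s * (side - 1)
    return s, k

for side in (8, 16, 32):
    s, k = deconv_params(side)
    print(side, s, k, s * (side - 1) + k)   # last value equals shape = 128
\end{verbatim}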
\begin{table}\vspace{-2ex}
\centering
{\small\scriptsize
\caption{\small Experiment Setting for Training Model} \label{tab:experiment_configure}
\revision{
\begin{tabular}{c|c}
{\bf Symbol} & {\bf Setting} \\ \hline \hline
$N$ & {$128\times128$}\\
$n$ & {4$\times$4,$\dots$,\textbf{16$\times$16},$\dots$,75$\times$75,76$\times$76}\\
time slot & {30 minutes}\\
prediction model & {MLP, \textbf{DeepST}, Dmvst-Net}\\
\hline
\hline
\end{tabular}
}
}\vspace{-2ex}
\end{table}
\revision{As the datasets used in our experiments are taxi trip datasets, \textbf{Order Count Bias} is used as the metric of the model error, expression error and real error. Model error represents the difference between the predicted order quantity and the estimated order quantity; expression error represents the difference between the estimated order quantity and the actual order quantity; real error represents the difference between the actual order quantity and the predicted order quantity. Considering that we constantly change the grid size in the experiment, it is meaningless to consider the error of a single grid; therefore, the errors we discuss in subsequent experiments are the summations of the errors of all grids, unless otherwise specified.}
\begin{figure*}[t!]\vspace{-3ex}
\begin{minipage}{0.245\linewidth}\centering
\scalebox{0.4}[0.4]{\includegraphics{multi_citys_expression_error_s.jpg}}
\caption{\small Effect of $n$ on Expression Error in Different Cities}
\label{fig:multi_citys_expression_error}\vspace{-2ex}
\end{minipage}
\begin{minipage}{0.745\linewidth}\centering
\subfigure[][{\scriptsize Chengdu}]{
\scalebox{0.4}[0.4]{\includegraphics{multi_chengdu_model_error_s.jpg}}
\label{fig:chengdu_deepst_model_error}}
\subfigure[][{\scriptsize NYC}]{
\scalebox{0.4}[0.4]{\includegraphics{multi_nyc_model_error_s.jpg}}
\label{fig:nyc_deepst_model_error}}
\subfigure[][{\scriptsize Xi'an}]{
\scalebox{0.4}[0.4]{\includegraphics{multi_xian_model_error_s.jpg}}
\label{fig:xian_deepst_model_error}}
\caption{\small Effect of $n$ on the Model Error}
\label{fig:model_error}\vspace{-2ex}
\end{minipage}
\end{figure*}
\begin{figure*}[t!]
\centering
\subfigure[][{\scriptsize Real Error in Xi'an}]{
\scalebox{0.48}[0.48]{\includegraphics{xian_real_error_s.jpg}}
\label{fig:xian_real_error}}
\subfigure[][{\scriptsize Real Error in Chengdu}]{
\scalebox{0.48}[0.48]{\includegraphics{chengdu_real_error_s.jpg}}
\label{fig:chengdu_real_error}}\vspace{1ex}
\subfigure[][{\scriptsize Real Error in NYC}]{
\scalebox{0.48}[0.48]{\includegraphics{nyc_real_error_s.jpg}}
\label{fig:nyc_real_error}}\vspace{-1ex}
\caption{\small Effect of $n$ on Real Error in Different Cities with Different Prediction Models}
\label{fig:citys_models_real_error}\vspace{-2ex}
\end{figure*}
In order to calculate the expression error of a HGrid, we need to estimate the mean number of events $\alpha_{ij}$ for the grid $r_{ij}$ in advance. Over a long period, grid environments change significantly, so the number of events in the same grid does not follow the same distribution. On the other hand, when the sample size is small, the estimate of the mean number of events will have a considerable bias. Therefore, when estimating the mean number of events, we need to choose an appropriate sampling window. At the same time, considering the remarkable difference in the number of events between different periods of a day and the great difference in people's willingness to travel between weekends and workdays, this experiment takes the average number of events in the same time slot over all workdays of the last month as the mean number $\alpha_{ij}$ of events in the HGrid $r_{ij}$. In subsequent experiments, we estimate $\alpha_{ij}$ by using the number of events between $8:00$ $A.M.$ and $8:30$ $A.M.$ by default unless otherwise stated.
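As a concrete illustration (a sketch only; the variable names and data layout are our assumptions), $\alpha_{ij}$ can be estimated by averaging the per-HGrid counts of the chosen time slot over the workdays of the previous month:
\begin{verbatim}
import numpy as np

def estimate_alpha(history):
    # history: array of shape (num_workdays, sqrt(N), sqrt(N)) holding the
    # event counts of every HGrid in the 8:00-8:30 A.M. slot of each workday
    history = np.asarray(history, dtype=float)
    return history.mean(axis=0)          # alpha_ij for every HGrid

rng = np.random.default_rng(0)           # toy data: 22 workdays, 128 x 128 HGrids
alpha = estimate_alpha(rng.poisson(0.3, size=(22, 128, 128)))
\end{verbatim}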
\revision{
The above experimental settings are summarized in Table \ref{tab:experiment_configure}, where the default parameters are in bold. Our experiments are run on an AMD Ryzen 5-5600H with 32 GB RAM and a GeForce RTX 3050; the prediction models are implemented in Python, while LS, POLAR and DAIF are implemented in Java.
}
\revision{
\subsection{Relationship between Real Error and $n$}
\label{sec:real_error_grid_size_exp}
In this section, we mainly show the effect of $n$ on the expression error and the model error as analyzed in Section \ref{sec:erroranalysis} and verify that real error has the same change trend as its upper bound. \\
\textbf{Expression Error.} We use Algorithm \ref{algo:simple_repre_algorithm} to calculate the expression errors in different cities, which all decrease with the increase of $n$, as shown in Figure \ref{fig:multi_citys_expression_error}. Since orders in Chengdu are more evenly distributed than in NYC, the expression error of Chengdu is smaller than that of NYC when $n$ is the same. Additionally, the order quantity of Xi'an is much smaller than that of the other two cities, and the order distribution of Xi'an is more uniform than in the other two cities. As a result, the expression error of Xi'an is much smaller than that of the other cities. We analyze the relationship between the expression error and the uniformity of the order distribution in detail in Appendix B.\\
\textbf{Model Error.} We test the performance of the three prediction models (i.e., MLP, DeepST and Dmvst-Net) on the three datasets, as shown in Figure \ref{fig:model_error}. The experimental results show that the model errors of the three prediction models all increase with $n$ on all datasets. The model errors of DeepST and Dmvst-Net are much smaller than that of MLP, which has a relatively simple model structure, and Dmvst-Net makes use of the temporal information of the historical data so that it performs better than DeepST.\\
\textbf{Real Error.} Figure \ref{fig:citys_models_real_error} shows the relationship between the real error and its upper bound in different cities when using different prediction models. The real error and its upper bound have the same trend: both first fall and then rise as $n$ changes. Compared with Chengdu, the expression error of NYC is larger, which makes the optimal $n$ of NYC larger than that of Chengdu when the same prediction model is used. For example, the real error of NYC based on Dmvst-Net is still small when $n$ is $30\times30$, as shown in Figure \ref{fig:nyc_real_error}. On the other hand, a prediction model with higher accuracy makes the real error significantly smaller and also increases the $n$ that minimizes the real error. Taking NYC as an example, the optimal value of $n$ is 23 when using Dmvst-Net as the prediction model; when the prediction model is DeepST, the optimal value of $n$ is 16; when the prediction model is MLP, the optimal value of $n$ is 13. In the case of models with high accuracy, a larger $n$ helps to reduce the expression error. Moreover, when we use MLP as the prediction model to forecast the number of orders in Chengdu, Figure \ref{fig:chengdu_real_error} shows that the real error increases with $n$, since the model error plays the dominant role in the real error: the expression error of Chengdu is small while the model error of MLP is large. In addition, because the spatial area of the Xi'an dataset is much smaller than that of Chengdu and NYC, the optimal $n$ of Xi'an is smaller than that of the other two cities.
}
\subsection{Case Study on Effect of Minimizing Real Error}
\revision{
In this section, we explore the effect of the real error on two crowdsourcing problems (i.e., task assignment \cite{cheng2021queueing, tong2017flexible} and route planning \cite{wang2020demand}). We test two prediction models, Dmvst-Net and DeepST, in the experiments.
\textbf{Task Assignment.} Task assignment refers to assigning location-based requests to workers based on their current positions, as in ride-hailing. We use two state-of-the-art prediction-based task assignment algorithms (i.e., LS \cite{cheng2021queueing} and POLAR \cite{tong2017flexible}) to dispatch orders under different values of $n$. The goal of LS is to maximize the total revenue, while the goal of POLAR is to maximize the number of served orders. Thus, we use the total revenue and the order quantity as the metrics for the two algorithms. We compare the performance of the two algorithms using the different prediction models considered in this paper. The specific experimental setup is the same as the default setting in our previous work \cite{cheng2021queueing}.
\begin{figure}[t!]\vspace{-3ex}
\subfigure[][{\scriptsize Order Quantity}]{
\scalebox{0.5}[0.5]{\includegraphics{nyc_order_quantity_s.jpg}}
\label{fig:nyc_order_quantity}}
\subfigure[][{\scriptsize Total Revenue}]{
\scalebox{0.5}[0.5]{\includegraphics{nyc_total_revenue_s.jpg}}
\label{fig:nyc_totall_revenue}}\vspace{-1ex}
\caption{Effect of $n$ on Task Assignment (NYC)}
\label{fig:task_assignment_nyc}\vspace{-2ex}
\end{figure}
\begin{figure}[t!]
\subfigure[][{\scriptsize Order Quantity}]{
\scalebox{0.5}[0.5]{\includegraphics{chengdu_order_quantity_s.jpg}}
\label{fig:chengdu_order_quantity}}
\subfigure[][{\scriptsize Total Revenue}]{
\scalebox{0.5}[0.5]{\includegraphics{chengdu_total_revenue_s.jpg}}
\label{fig:chengdu_totall_revenue}}\vspace{-1ex}
\caption{Effect of $n$ on Task Assignment (Chengdu)}
\label{fig:task_assignment_chengdu}\vspace{-3ex}
\end{figure}
Figures \ref{fig:task_assignment_nyc}$\sim$\ref{fig:task_assignment_xian} show how the total revenue and the order quantity of the prediction-based dispatching algorithms vary under different values of $n$. When using the predicted results, both algorithms show a first-increasing-then-decreasing trend in their performance, because the real error is large when $n$ is either too small or too large. When POLAR and LS use the real order data, the model error becomes $0$ and the real error is equivalent to the expression error, which means that the real error decreases as $n$ increases. Therefore, the performance of POLAR and LS will not decrease due to a large $n$ when using the real order data, which is also consistent with the changing trend of the real error. In addition, the order distribution of Xi'an is more even than that of the other two cities because of its smaller area. Therefore, the optimal $n$ in Xi'an is smaller than that in the other two cities. In short, the experimental results verify that the real error is an important factor affecting the performance of task assignment algorithms.
\textbf{Route Planning.} Route planning is a central issue in shared mobility applications such as ride-sharing, food delivery and crowdsourced parcel delivery. We use the state-of-the-art algorithm, DAIF \cite{wang2020demand}, to verify the effect of $n$ on the route planning problem. We use the default parameters of the original paper \cite{wang2020demand} in this experiment, and take the number of served requests and the unified cost as the metrics of DAIF. Figure \ref{fig:route_planning_nyc} shows that the number of served requests of DAIF first increases and then decreases as $n$ increases. The unified cost of DAIF is minimized when $n=16\times 16$. Using the actual number of orders, DAIF achieves better performance with a large $n$. Although the route planning problem is less affected by the real error than the task assignment problem, the grid size still affects the performance of prediction-based algorithms.
\begin{figure}[t!]\vspace{-3ex}
\subfigure[][{\scriptsize Order Quantity}]{
\scalebox{0.5}[0.5]{\includegraphics{xian_order_quantity_s.jpg}}
\label{fig:xian_order_quantity}}
\subfigure[][{\scriptsize Total Revenue}]{
\scalebox{0.5}[0.5]{\includegraphics{xian_total_revenue_s.jpg}}
\label{fig:xian_totall_revenue}}\vspace{-1ex}
\caption{Effect of $n$ on Task Assignment (Xi'an)}
\label{fig:task_assignment_xian}\vspace{-2ex}
\end{figure}
\begin{figure}[t!]
\subfigure[][{\scriptsize Served Requests}]{
\scalebox{0.495}[0.495]{\includegraphics{nyc_served_requests_s.jpg}}
\label{fig:nyc_order_quantity_route}}
\subfigure[][{\scriptsize Unified Cost}]{
\scalebox{0.495}[0.495]{\includegraphics{nyc_unified_cost_s.jpg}}
\label{fig:nyc_totall_revenue_route}}\vspace{-1ex}
\caption{Effect of $n$ on Route Planning (NYC)}
\label{fig:route_planning_nyc}\vspace{-3ex}
\end{figure}
Table \ref{tab:promotion} shows the improvement of the original algorithms obtained by selecting the optimal grid size, with DeepST as the prediction model on NYC. Original $n$ represents the default value of $n$ set in \cite{tong2017flexible, cheng2021queueing, wang2020demand}, while optimal $n$ denotes the optimal grid size found by our GridTuner. The results show that both POLAR and DAIF achieve performance gains with the optimal grid size. Because the default $n$ in the existing paper \cite{tong2017flexible} is close to the optimal $n$, the performance of LS shows no obvious improvement.}
\begin{table}[h!]
\centering
{\small\scriptsize
\caption{\small Promotion of the prediction-based algorithms}
\label{tab:promotion}
\begin{tabular}{l|l|lll}
Metric&Algorithm&Optimal $n$&Original $n$&Improve ratio\\
\hline
{Served Order Number}&POLAR&$16\times 16$&$50\times 50$&$13.6\%$\\
{Total Revenue}&POLAR&$16\times 16$&$50\times 50$&$8.97\%$\\
\hline
{Total Revenue}&LS&$20\times 20$&$16\times 16$&$0.13\%$\\
{Served Order Number}&LS&$20\times 20$&$16\times 16$&$0.7\%$\\
\hline
{Unified Cost}&DAIF&$16\times 16$&$12\times 12$&$0.76\%$\\
{Served Requests}&DAIF&$20\times 20$&$12\times 12$&$3.35\%$\\
\hline
\hline
\end{tabular}
}\vspace{-1ex}
\end{table}
\subsection{Experiment Result of Optimization Searching Algorithms}
Since the mean of the event quantity in the same grid varies in different periods of a day, the expression error of each time slot is different, leading to the different optimal solutions of each time slot.
\revision{
In this section, we use the algorithms proposed in Section \ref{sec:solution2} to calculate the optimal partition scheme for different cities and compare their performance with the \textbf{Brute-force Search} (i.e., traversing all candidate values to find the optimal $n$).
We use three indicators to measure the quality of the solution found by each algorithm and its efficiency: \textbf{cost} denotes the time cost; \textbf{probability} denotes the probability of obtaining the optimal solution (i.e., the number of time slots in which the optimal solution is found divided by the total number of time slots); \textbf{optimal ratio ($OR$)} is defined as
$
OR=\frac{o_a}{o_r}
$,
where $o_r$ denotes the order count served by drivers under the optimal $n$ when using POLAR as the dispatching algorithm, and $o_a$ represents the order count achieved with the $n$ selected by the algorithm.
}
\revision{
\begin{table}[t!]
\centering
{
\caption{\small Performance of the algorithms.} \label{table3}
\begin{tabular}{l|l|ccc}
{\bf City} & {\bf Algorithm} & {\bf Cost (h)}&{\bf Probability}&{\bf OR} \\ \hline
{NYC}&Ternary Search&7.03&52.08$\%$&97.83$\%$\\
{NYC}&Iterative Method&{\bf5.58}&{\bf81.25$\%$}&{\bf98.77}$\%$\\
{NYC}&Brute-force Search&47.43&100.00$\%$&100.00$\%$\\\hline
{Chengdu}&Ternary Search&6.32&70.83$\%$&98.35$\%$\\
{Chengdu}&Iterative Method&4.53&{\bf95.83$\%$}&{\bf99.77$\%$}\\
{Chengdu}&Brute-force Search&43.26&100.00$\%$&100.00$\%$\\\hline
{Xi'an}&Ternary Search&{\bf3.90}&60.42$\%$&97.98$\%$\\
{Xi'an}&Iterative Method&$3.31$&{\bf91.67$\%$}&{\bf99.57$\%$}\\
{Xi'an}&Brute-force Search&$21.76$&100.00$\%$&100.00$\%$\\\hline\hline
\end{tabular}
}\vspace{-2ex}
\end{table}
The experimental results in Table \ref{table3} show that both Ternary Search and the Iterative Method can greatly reduce the time cost of finding the optimal solution compared with the Brute-force Search. Meanwhile, both algorithms can find the global optimal solution with high probability. With a reasonable choice of the bound and the initial position, the Iterative Method outperforms Ternary Search in both execution efficiency and the probability of finding the optimal solution. According to Table \ref{table3}, the sub-optimal solutions obtained by Ternary Search are at most 3$\%$ worse than the optimal results (and 1.5$\%$ for the Iterative Method), which shows the effectiveness of our grid size selection algorithms.
}
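For illustration, a minimal sketch of the Ternary Search idea is given below. It assumes a black-box function upper\_bound($n$) (the estimated $E_m+E_e$ summed over all HGrids for a candidate $n$) that is unimodal in $n$; it is not the exact pseudo-code of Section \ref{sec:solution2}.
\begin{verbatim}
def ternary_search(upper_bound, lo, hi):
    # assumes upper_bound is unimodal on the integer interval [lo, hi]
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        if upper_bound(m1) < upper_bound(m2):
            hi = m2            # the minimizer cannot lie to the right of m2
        else:
            lo = m1            # the minimizer cannot lie to the left of m1
    return min(range(lo, hi + 1), key=upper_bound)

# toy usage with a synthetic, unimodal stand-in for the error bound
print(ternary_search(lambda n: (n - 17) ** 2 + 100, 2, 76))   # -> 17
\end{verbatim}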
\noindent\textbf{Summary:}
The experimental results show that a larger real error often leads to a decrease in the payoff of the dispatching algorithms. At the same time, Ternary Search and the Iterative Method proposed in this paper can effectively find the optimal solution to the OGSS by minimizing the upper bound of the real error. Furthermore, this paper improves the performance of the original algorithms by selecting a reasonable size $n$. Specifically, the performance of POLAR improves by $13.6\%$ on the served order number and $8.97\%$ on the total revenue. Finally, this paper also studies the influence of different traffic prediction algorithms on the optimal size of MGrids. The results show that when the accuracy of the prediction algorithm is high, the whole space can be divided into more MGrids to reduce the expression error. On the contrary, when the accuracy of the prediction algorithm is low, we need to use larger MGrids to reduce the model error.
\section{Introduction}
Recently, many spatiotemporal prediction models have been proposed to predict the number of events (e.g., online car-hailing requests \cite{tong2017flexible, wang2020demand, cheng2019queueing}, or street crimes \cite{mohler2011, deepcrime}) within a region (e.g., a grid of 1km$\times$1km) in a period (e.g., next 30 minutes) \cite{zhang2017deep, li2015bike, zhao2016predict}. With the help of the predicted information of events, we can improve the platform revenue of online car-hailing systems (e.g., Uber \cite{uber-web}), or reduce crimes effectively through optimizing the patrol routes of police \cite{deepcrime}.
One common assumption of spatiotemporal prediction models is that the distribution of spatial events within a region is uniform \cite{tong2017flexible, zhang2017deep, wang2020demand}, which is in fact almost never true in real scenarios. In addition, the selection of region size is mostly decided by experts' experience or simple experimental tests without detailed analysis in many existing research studies:
\begin{itemize}[leftmargin=*]
\item ``We use 20$\times$30 = 600 grids to cover the cities and one grid represents a 0.01 (longitude)$\times$0.01 (latitude) square'' \cite{tong2017flexible}
\item The authors use 32$\times$32 grids to cover Beijing area and 16$\times$8 grids for New York City area~\cite{zhang2017deep}.
\item ``For the prediction model, DeepST, we set the default grid size as 2km$\times$2km ...'' \cite{wang2020demand}
\end{itemize}
Under the uniform region assumption, the spatiotemporal prediction models are optimized to minimize the model error in model grids (i.e., the difference between the predicted and actual number of spatial events in grids used in the prediction model), which may lead to dramatic real errors in some smaller regions (i.e., the difference between the predicted and the actual number of spatial events in smaller and homogeneous grids). Then, the overall performance of frameworks utilizing spatiotemporal prediction models will not be optimized for real applications. For example, with careful grid size selection, the overall performance of a state-of-the-art prediction based online spatial crowdsourcing framework \cite{tong2017flexible} can be increased up to 13.6\% (shown in a case study in Section \ref{sec:experimental}).
We illustrate this challenge with the following example:
\begin{figure}\centering
\scalebox{0.14}[0.14]{\includegraphics{5.png}}
\caption{\small Forecast and Actual Distribution of Orders in Grids}
\label{fig:bbq_example}\vspace{-3ex}
\end{figure}
\begin{example}
\label{exp:expression_error}
As shown in Figure \ref*{fig:bbq_example}, the solid black lines divide the space into four model grids to be predicted. The blue dotted lines further divide each model grid into four smaller grids. We can use spatiotemporal prediction models to predict the number of events in each model grid. In the absence of prior knowledge of the distribution of events within a model grid, the models generally assume that the distribution of events within a model grid is uniform, which means that the numbers of events in the smaller grids within the same model grid are equal to each other. \revision{Thus, we can estimate the predicted number of each smaller grid by averaging the predicted result of the corresponding model grid. The red numbers shown in Figure \ref*{fig:bbq_example} denote the predicted numbers of events for the smaller grids. The predicted result for each model grid is the summation of the numbers of events of all its smaller grids.} We can directly calculate that the model error of the prediction model on the large grids is $3$ (= $|8-9|+|2-1|+|4-4|+|4-5|$). Nevertheless, if the error is calculated based on the smaller grids, it increases to $10$ \revision{(= $|2-3|+|2-2|+|0.5-0|+|0.5-0|+|2-3|+|2-1|+|0.5-0|+|0.5-1|+|1-0|+|1-3|+|1-1|+|1-1|+|1-0|+|1-1|+|1-1|+|1-2|$).} The reason is that the distribution of events in each large grid is uneven, which is ignored in almost all existing studies.
\end{example}
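The arithmetic of Example \ref{exp:expression_error} can be reproduced with a few lines of code. In the sketch below, the $4\times4$ matrix of actual counts in the smaller grids is reconstructed from the error terms listed above, and the model-grid predictions are spread uniformly over their $2\times2$ blocks of smaller grids.
\begin{verbatim}
import numpy as np

actual = np.array([[3, 2, 0, 0],           # actual counts in the smaller grids
                   [3, 1, 0, 1],
                   [0, 3, 1, 1],
                   [0, 1, 1, 2]], dtype=float)
predicted = np.array([[8, 2],              # model-grid predictions
                      [4, 4]], dtype=float)

actual_mgrid = actual.reshape(2, 2, 2, 2).sum(axis=(1, 3))
model_error = np.abs(predicted - actual_mgrid).sum()           # 3.0

predicted_small = np.kron(predicted / 4, np.ones((2, 2)))      # uniform split
real_error = np.abs(predicted_small - actual).sum()            # 10.0
print(model_error, real_error)
\end{verbatim}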
\textit{Why not directly predict the spatial events for each smaller grid?} The reason is twofold.
Firstly, due to the uncertainty of spatial events, it is too hard to accurately predict the number of spatial events in a very small grid (e.g., 100m$\times$100m).
\revision{ When the size of a grid is too small, there will not be enough historical data for the prediction model to learn the distribution of the spatial events in each small area. In addition, the number of spatial events in a small grid is also small, so the accuracy (relative error) of prediction models will be dramatically affected by the randomness of the spatial events, since the uncertainty of spatial events is inevitable \cite{yao2018deep, tong2017flexible, chen2018pcnn}.}
Secondly, the computation complexity of prediction models will increase remarkably when the number of grids increases \cite{yao2018deep, yu2017spatiotemporal}. Thus, almost all spatiotemporal prediction models still use relatively large grids (e.g., grids of 2km$\times$2km) as the prediction units.
\textit{Can we have an automatic and theoretic-guaranteed optimal grid size selection method to minimize the overall real error of spatiotemporal prediction models?} To overcome this long-standing challenge, in this paper we study the \textit{optimal grid size selection} (OGSS) problem to guide the configuration of grid size such that the real errors are minimized for spatiotemporal prediction models in real applications.
In this paper, we reinvestigate the grid size selection problem in spatiotemporal prediction models in detail. We assume that the distribution of the spatial events in a small enough grid (e.g., 100m$\times$100m) can be considered homogeneous. Then, the real error of a spatiotemporal prediction model is evaluated through the total difference between the predicted number and real number of spatial events among all small grids. Specifically, we decompose the real error of the prediction models into the model error and the expression error. Here, the model error indicates the inherent error of the prediction models, and the expression error stands for the error of using the predicted number of events in a large grid to express the future events in its inner smaller grids (as illustrated in Example \ref{exp:expression_error}). We prove that the summation of model error and expression error of a spatiotemporal prediction model is an upper bound of its real error. We also verify that for any spatiotemporal prediction model, with the increase of the size of grids, the upper bound of its real error will first decrease then increase. Based on our theoretical analysis, we propose two algorithms, namely Ternary Search algorithm and Iterative algorithm, to quickly find the optimal size of model grids for a given spatiotemporal prediction model.
To summarize, we make the following contributions:
\begin{itemize}[leftmargin=*]
\item We formally define a novel metric, real error, to measure the deviation between the model's forecast and the actual number of events for homogeneous grids. Then, we propose a new problem, namely optimal grid size selection (OGSS), to automatically find the optimal grid size for a given spatiotemporal prediction model in Section \ref{sec:problemDefinition}
\item We analyze the upper bound of the real error and the relationship between the number of model grids and the upper bound in Section \ref{sec:erroranalysis}. Then, we propose two algorithms to find the optimal grid size to minimize the upper bound of real error in Section \ref{sec:solution2}.
\item We conduct sufficient experiments on different spatiotemporal prediction models to explore the influencing factors of real errors in Section \ref{sec:experimental}.
\end{itemize}
We review the related studies in Section \ref{sec:related} and conclude this paper in Section \ref{sec:conclusion}.
\section{Preliminaries}
\label{sec:problemDefinition}
In this section, we will introduce some basic concepts and present the formal definition of model grid, homogeneous grid, model error, expression error and real error. We prove that the summation of model error and expression error is an upper bound of real error.
\subsection{Basic Concepts}
Without loss of generality, in this paper, we consider that the spatiotemporal prediction models will first divide the whole space into $n$ rectangular model grids, then predict the number of spatial events for each model grid in a given future time period. When the size of a grid is small enough (e.g., 100m$\times$100m), the distribution of spatial events can be considered uniform for most application scenarios (e.g., online car-hailing systems). Under this assumption, each model grid will be further divided into $m$ smaller homogeneous grids, where the predicted spatial events are considered uniformly distributed within each homogeneous grid. \revision{To ensure that HGrids are small enough, it is required that the total number of HGrids is larger than $N$ (i.e., $mn>N$), where $N$ denotes the minimum number of HGrids such that the distribution of spatial events within each HGrid is uniform. We propose a method to select a proper value of $N$ in Section \ref{sec:region_depart}.}
We formally define the model grids and the homogeneous grids as follows:
\begin{definition}[Model Grid, MGrid]
The whole space \revision{is divided} into $n$ same-sized model grids $\{r_1, r_2, \cdots, r_n\}$. The number of actual spatial events happening in $r_i$ in the next period is noted as $\lambda_i$. A spatiotemporal prediction model will predict the number, $\hat{\lambda}_i$, of spatial events that happening in each model grid $r_i$ in the next period.
\end{definition}
\begin{definition}[Homogeneous Grid, HGrid]
Each model grid $r_i$ can be further evenly divided into $m$ homogeneous grids $\{r_{i1}, r_{i2}, \cdots, r_{im}\}$.
For a homogeneous grid $r_{ij}$, the number of actual spatial events happening in it in the next period is marked as $\lambda_{ij}$ (i.e., $\lambda_i=\sum_{j=1}^{m}\lambda_{ij}$).
\end{definition}
\begin{figure}[t!]\centering\vspace{-3ex}
\scalebox{0.12}[0.12]{\includegraphics{notation_explain.png}}
\caption{\small An Illustration of Relationships between Basic Concepts.}
\label{fig:notation_explain}\vspace{-3ex}
\end{figure}
In the absence of any prior knowledge of the distribution of the spatial events within a model grid, we assume that the numbers of spatial events of the HGrids in the same MGrid are equal to each other, according to the principle of maximum entropy \cite{guiasu1985principle}. Thus, with the actual number, $\lambda_{i}$, of spatial events in MGrid $r_i$, the estimated number of spatial events of HGrid $r_{ij}$ is denoted as $\bar{\lambda}_{ij}=\frac{\lambda_i}{m}=\frac{\sum_{j=1}^{m}{\lambda_{ij}}}{m}$. Similarly, with the predicted number, $\hat{\lambda}_{i}$, of spatial events of MGrid $r_{i}$, we have the predicted number of spatial events of HGrid $r_{ij}$ as $\hat{\lambda}_{ij}=\frac{\hat{\lambda}_i}{m}$.
The differences between $\lambda_{ij}$, $\bar{\lambda}_{ij}$ and $\hat{\lambda}_{ij}$ lead to three types of errors: model error, expression error and real error. Figure \ref{fig:notation_explain} illustrates the three types of errors. Frameworks utilizing spatiotemporal prediction models can hardly obtain the real distribution of spatial events within an MGrid from the models. Thus, the prediction result $\hat{\lambda}_i$ of a MGrid $r_{i}$ will be divided equally among its HGrids without any prior information. The real error describes the difference between the actual number of spatial events $\lambda_{ij}$ and the predicted result $\hat{\lambda}_{ij}$ of HGrid $r_{ij}$. However, it is difficult to calculate the real error directly. Therefore, $\bar{\lambda}_{ij}$ is introduced to decompose the real error into the expression error and the model error. We formally define the real error, model error and expression error as follows:
\begin{definition}[Real Error]
\revision{For a HGrid $r_{ij}$, its real error $E_r(i,j)$ is defined as the mean/average of difference between its predicted and actual numbers of spatial events in the corresponding same time periods of the historical days for the next period. It means {\scriptsize$E_r\left(i,j\right)=\mathbb{E}_{\lambda_{ij}\sim P}(|\hat{\lambda}_{ij} - \lambda_{ij}|)$}, where $\lambda_{ij}$ follows a given distribution $P$.}
\end{definition}
In practice, it is difficult to calculate the difference $|\hat{\lambda}_{ij} - \lambda_{ij}|$ without the information about the number $\lambda_{ij}$ of events in the next period. Thus, we define the real error as the mean of the difference $|\hat{\lambda}_{ij} - \lambda_{ij}|$. \revision{However, due to the lack of a sufficient number of samples, it is difficult for us to calculate $E_r\left(i,j\right)$ accurately, because $\lambda_{ij}$ does not follow the same distribution in different time periods. Another factor is that the environment is prone to change over a long period, so the number of events does not follow the same distribution either. As a result, we use the numbers of events in the same time period on each day of the previous month to estimate the real error. Suppose that we have a set $\Lambda_{ij}$ of pairs of the actual event number, $\lambda_{ij}$, and its corresponding predicted number, $\hat{\lambda}_{ij}$; then we can estimate $E_r\left(i,j\right)$ as follows:
{\scriptsize
$$
E_r\left(i,j\right)= \mathbb{E}_{\lambda_{ij}\sim P}(|\hat{\lambda}_{ij} - \lambda_{ij}|)= \frac{1}{\left|\Lambda_{ij}\right|}\sum_{(\hat{\lambda}_{ij},\lambda_{ij})\in \Lambda_{ij}}|\hat{\lambda}_{ij} - \lambda_{ij}|
$$
}
}
\begin{definition}[Model Error]
\revision{For a HGrid $r_{ij}$, its model error $E_m(i,j)$ is the mean/average of difference between its predicted and estimated numbers of spatial events in the corresponding same time periods of the historical days for the next period. It means {\scriptsize$E_m(i,j)=\mathbb{E}_{\lambda_{ij}\sim P}(|\hat{\lambda}_{ij} - \bar{\lambda}_{ij}|)$}, where $\lambda_{ij}$ follows a given distribution $P$.}
\end{definition}
\begin{definition}[Expression Error]
\revision{For a HGrid $r_{ij}$, its expression error $E_e(i,j)$ is the mean/average of difference between its estimated and actual numbers of spatial events in the corresponding same time periods of the historical days for the next period. It means {\scriptsize$E_e(i,j)=\mathbb{E}_{\lambda_{ij}\sim P}(|\bar{\lambda}_{ij} - \lambda_{ij}|)$}, where $\lambda_{ij}$ follows a given distribution $P$.}
\end{definition}
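As an illustration of how these quantities can be estimated in practice (a sketch with hypothetical variable names, not the paper's implementation), given aligned historical samples of the MGrid prediction $\hat{\lambda}_i$, the MGrid actual count $\lambda_i$ and the HGrid actual count $\lambda_{ij}$, one per time slot:
\begin{verbatim}
import numpy as np

def estimate_errors(pred_mgrid, actual_mgrid, actual_hgrid, m):
    pred_hgrid = np.asarray(pred_mgrid, dtype=float) / m     # hat lambda_ij
    est_hgrid  = np.asarray(actual_mgrid, dtype=float) / m   # bar lambda_ij
    actual     = np.asarray(actual_hgrid, dtype=float)       # lambda_ij
    E_r = np.abs(pred_hgrid - actual).mean()      # real error
    E_m = np.abs(pred_hgrid - est_hgrid).mean()   # model error
    E_e = np.abs(est_hgrid - actual).mean()       # expression error
    return E_r, E_m, E_e                          # E_r <= E_m + E_e always holds

# toy check of the bound with m = 64 HGrids per MGrid
print(estimate_errors([130, 120, 125], [128, 118, 131], [4, 1, 3], 64))
\end{verbatim}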
In the previous analysis, we explained that the grid size selection would significantly affect the real error. This paper aims to find an optimal size that minimizes the summation of real errors in all HGrids. We formally define the problem as follows:
\begin{definition}[Optimal Grid Size Selection Problem, OGSS]
For a given number $N$ of all HGrids and a given model to predict the number of spatial events for MGrids in the next period, the optimal grid size selection problem is to find the optimal $n$ to minimize the summation of the real errors of all HGrids under the constraint $nm>N$, that is:
{\small\begin{alignat}{2}
& \min\limits_{n}& & \sum_{i=1}^{n}{\sum_{j=1}^{m}{E_r(i,j)}}\\
& \text{s.t.}& \quad &
\begin{aligned}[t]
& nm>N
\end{aligned}\notag
\end{alignat}}
\noindent where $m$ represents the minimum required number of HGrids in each MGrid satisfying $nm>N$.
\end{definition}
\subsection{Upper Bound of Real Error}
\label{sec:ubte}
We denote the summation of the model error and the expression error as $E_u\left(i,j\right)$ ($=E_m\left(i,j\right)+E_e\left(i,j\right)$). We can prove that $E_u\left(i,j\right)$ is an upper bound of $E_r\left(i,j\right)$ by Theorem \ref{theo:ubte}.
\begin{theorem} [Upper Bound of Real Error]
$E_u\left(i,j\right)$ is an upper bound of the real error $E_r\left(i,j\right)$.
\label{theo:ubte}
\end{theorem}
\begin{proof}
We prove it through the following inequalities:
{\scriptsize\begin{align*}
E_r\left(i,j\right)&=\mathbb{E}\left(|\hat{\lambda}_{ij}-\lambda_{ij}|\right) = \mathbb{E}\left(|\hat{\lambda}_{ij}-\bar{\lambda}_{ij}+\bar{\lambda}_{ij}-\lambda_{ij}|\right)\\
&\leq \mathbb{E}\left(|\hat{\lambda}_{ij}-\bar{\lambda}_{ij}|+|\bar{\lambda}_{ij}-\lambda_{ij}|\right)\\
&=\mathbb{E}\left(|\hat{\lambda}_{ij}-\bar{\lambda}_{ij}|\right)+\mathbb{E}\left(\left|\bar{\lambda}_{ij}-\lambda_{ij}\right|\right)\\
&=E_m\left(i,j\right)+E_e\left(i,j\right)=E_u\left(i,j\right)
\end{align*}}\vspace{-1ex}
\end{proof}
\revision{
Meanwhile, we obtain the upper bound of the difference between $E_u\left(i,j\right)$ and $E_r\left(i,j\right)$ by the following scaling:
{\scriptsize
\begin{align*}
E_u(i,j)-E_r(i,j)&\leq \mathbb{E}\left(2\min\left(|\hat{\lambda}_{ij}-\bar{\lambda}_{ij}|,|\bar{\lambda}_{ij}-\lambda_{ij}|\right)\right)\\
&\leq 2\min\left(\mathbb{E}\left(|\bar{\lambda}_{ij}-\lambda_{ij}|\right),\mathbb{E}\left(|\hat{\lambda}_{ij}-\bar{\lambda}_{ij}|\right)\right)\\
&=2\min\left(E_e(i,j),E_m(i,j)\right)
\end{align*}
}
This indicates that we can ensure that the $E_r\left(i,j\right)$ is small when $E_u\left(i,j\right)$ is minimized. Therefore, we will minimize $E_u\left(i,j\right)$ as much as possible to optimize OGSS in the following sections of this paper. Finally, Table \ref*{table0} shows some important notations used in this paper.
}
\begin{table}
\centering
{\small\scriptsize
\caption{\small Symbols and Descriptions.} \label{table0}
\begin{tabular}{l|l}
{\bf Symbol} & {\bf \qquad \qquad \qquad\qquad\qquad Description} \\ \hline \hline
$r_i$ & a MGrid\\
$r_{ij}$ & a HGrid in MGrid $r_i$\\
$n$ & the number of MGrids\\
$m$ & the number of HGrids for each MGrid\\
$\bar{\lambda}_{ij}$ & the estimated number of spatial events for HGrid $r_{ij}$\\
$\lambda_{ij}$ & the actual number of spatial events in HGrid $r_{ij}$\\
$\lambda_i$ & the actual number of spatial events in MGrid $r_{i}$\\
$\hat{\lambda}_i$ & the prediction of $\lambda_i$\\
$\alpha_{ij}$ & \revision{the temporal mean of $\lambda_{ij}$}\\
$E_r\left(i,j\right)$ & the real error of HGrid $r_{ij}$\\
$E_e\left(i,j\right)$ & the expression error of HGrid $r_{ij}$\\
$E_m\left(i,j\right)$ & the model error of HGrid $r_{ij}$\\
\hline
\hline
\end{tabular}
}\vspace{-2ex}
\end{table}
\section{Error Analysis}
\label{sec:erroranalysis}
In this section, we first explain how to select a proper $N$ such that each HGrid is small enough to be considered uniform. Then, we discuss the properties of the expression error and propose two algorithms to calculate it quickly. Finally, we analyze the model error.
\revision{
\subsection{Select A Suitable $N$}
}
\label{sec:region_depart}
We explain how to choose a suitable $N$ in this section. Most spatiotemporal prediction models divide the whole space into many same-sized MGrids (e.g., 2km$\times$2km grids) based on experience. However, these methods ignore the uneven distribution of spatial events within a MGrid.
We divide the whole space into {\scriptsize$\sqrt{N}\times \sqrt{N}$} (i.e., {\scriptsize$nm=N$}) same-sized HGrids. \revision{Let $\alpha_{ij}$ be the mean number of events for HGrid $r_{ij}$ in the next period, which can be estimated as the average number of the historical records (i.e., nearest one month's data) of $r_{ij}$.}
\revision{Here, we give the definition of a uniformly distributed grid as follows:
\begin{definition}[Uniformly Distributed Grid]
Given a grid $r_{ij}$ with expected number of spatial events $\alpha_{ij}$ and a positive integer {\scriptsize$K\in\mathbb{Z}^+$}, we divide the grid into {\scriptsize$K$} smaller grids with expected numbers of spatial events {\scriptsize$\alpha_{ijk}, k=1,2,...,K$}. Grid $r_{ij}$ is \textbf{uniformly distributed} if and only if {\scriptsize$\alpha_{ijk}=\frac{\alpha_{ij}}{K}$} for any {\scriptsize$1\leq k\leq K$}.
\end{definition}
}
Then, we introduce a metric that measures the degree of unevenness of the distribution of spatial events over HGrids, which is defined as the following formula:\vspace{-1ex}
{\scriptsize\begin{eqnarray}
D_{\alpha}\left(N\right)=\sum_{i=1}^{n}\sum_{j=1}^{m}\left|\alpha_{ij}-\bar{\alpha}_N \right|,
\label{eq:d_alpha}
\end{eqnarray}}\vspace{-2ex}
\noindent where {\scriptsize$\bar{\alpha}_N=\frac{1}{N}\sum_{i=1}^{n}\sum_{j=1}^{m}\alpha_{ij}$}. We notice that {\scriptsize$D_{\alpha}\left(N\right)$} increases as $N$ increases. However, when $N$ is large enough (i.e., spatial events can be considered evenly distributed in each HGrid), {\scriptsize$D_{\alpha}\left(N\right)$} does not increase significantly any more. We prove this with the following theorem:
\begin{theorem}
Assume that {\scriptsize$N$} is suitable (sufficiently large) such that the spatial events are uniformly distributed in each HGrid, then {\scriptsize$D_{\alpha}\left(N\right)=D_{\alpha}\left(NK\right)$}, for any {\scriptsize$K\in \mathbb{Z}^{+}$}.
\label{the:D_alpha}
\end{theorem}
\begin{proof}
We divide each HGrid $r_{ij}$ into $K$ smaller grids denoted as $r_{ijk}$ ($k=1,2,\dots,K$), where the mean number of events of each smaller grid is denoted as $\alpha_{ijk}$. Due to the uniformity of each HGrid, we have $\alpha_{ijk}=\frac{\alpha_{ij}}{K}$, and hence $\bar{\alpha}_{NK}=\frac{1}{NK}\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{k=1}^{K}\alpha_{ijk}=\frac{1}{K}\bar{\alpha}_N$. Thus, we have
{\scriptsize\begin{eqnarray}
D_{\alpha}\left(NK\right)&=&\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{k=1}^{K}\left|\alpha_{ijk}-\bar{\alpha}_{NK}\right| \\ \notag
&=&\sum_{i=1}^{n}\sum_{j=1}^{m}K\left|\frac{\alpha_{ij}}{K}-\frac{1}{K}\bar{\alpha}_N\right|
= D_{\alpha}\left(N\right) \notag
\end{eqnarray}
}\vspace{-7ex}
\end{proof}
\revision{The increase of $N$ no longer contributes to the increase of $D_\alpha\left(N\right)$ once $N$ is large enough, which means that $D_\alpha\left(N\right)$ can serve as an indicator to help us select $N$. In other words, in practice we should choose an $N$ that is just large enough for $D_\alpha\left(N\right)$ to reach its plateau.}
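For illustration, the following Python sketch outlines this selection procedure under our own assumptions: the helper \texttt{alpha\_fn} (hypothetical) returns the estimated means $\alpha_{ij}$ of the {\scriptsize$\sqrt{N}\times\sqrt{N}$} HGrids for a candidate $N$, and $N$ is increased until $D_\alpha\left(N\right)$ stops growing by more than a small relative tolerance.
{\small\begin{verbatim}
import numpy as np

def d_alpha(alpha_fn, N):
    # D_alpha(N): total absolute deviation of the estimated
    # HGrid means from their global mean.
    alpha = np.asarray(alpha_fn(N), dtype=float)  # N estimated means
    return np.abs(alpha - alpha.mean()).sum()

def select_N(alpha_fn, candidates, rel_tol=0.01):
    # Pick the first N at which D_alpha(N) stops increasing notably.
    prev = None
    for N in candidates:          # e.g. [16**2, 32**2, 64**2, ...]
        cur = d_alpha(alpha_fn, N)
        if prev is not None and cur <= prev * (1.0 + rel_tol):
            return N              # D_alpha(N) has reached its plateau
        prev = cur
    return candidates[-1]
\end{verbatim}}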
\subsection{Analysis and Calculation of Expression Error}
\label{sec:expression_analyses}
We assume that the number $\lambda_{ij}$ of events in a HGrid $r_{ij}$ follows a Poisson distribution with parameter $\alpha_{ij}$ (the mean number of events of HGrid $r_{ij}$), i.e., $\lambda_{ij}\sim Pois(\alpha_{ij})$, which is verified in our previous work \cite{cheng2019queueing, cheng2021queueing}.
\noindent\textbf{Calculation of Expression Error.} We first analyze how to calculate expression error for a given HGrid $r_{ij}$. Due to $\lambda_{ij}\sim Pois(\alpha_{ij})$, we have\vspace{-2ex}
{\scriptsize\begin{equation}
P(\lambda_{ij}=k_h)=e^{-\alpha_{ij}}\frac{{\alpha_{ij}}^{k_h}}{k_h!}, k_h\in\mathbb{N} \label{eq:equation1}
\end{equation}}\vspace{-2ex}
Then, we define the random variable $\lambda_{i,\neq j}$ as the total number of events of all HGrids in MGrid $r_i$ excluding HGrid $r_{ij}$ (i.e., $\lambda_{i,\neq j}=\sum_{g\neq j}\lambda_{ig}$), and have $\lambda_{i,\neq j}\sim Pois(\sum_{g\neq j}\alpha_{ig})$ because of the additivity of the Poisson distribution. Letting $\bar{\lambda}_{i,\neq j}=\frac{1}{m}\lambda_{i,\neq j}$, we have\vspace{-2ex}
{\scriptsize\begin{equation}
P(\bar{\lambda}_{i,\neq j}=\frac{k_m}{m})=e^{-\sum_{g\neq j}\alpha_{ig}}\frac{\left({\sum_{g\neq j}\alpha_{ig}}\right)^{k_m}}{k_m!}, k_m\in\mathbb{N} \label{eq:equation2}
\end{equation}}\vspace{-2ex}
Since $\lambda_{ij}$ and $\bar{\lambda}_{i,\neq j}$ are independent of each other, the distribution of $|\bar{\lambda}_{ij}-\lambda_{ij}|$ can be expressed by
\revision{\vspace{-1ex}
{\scriptsize\begin{eqnarray}
P(|\bar{\lambda}_{ij}-\lambda_{ij}|=\frac{k_d}{m})
&=&P(|\frac{m-1}{m}\lambda_{ij}-\bar{\lambda}_{i,\neq j}|=\frac{k_d}{m}) \notag\\
&=&\sum_{|\frac{m-1}{m}k_h-\frac{k_m}{m}|=\frac{k_d}{m}}P\left(\lambda_{ij}=k_h\right)P\left(\bar{\lambda}_{i,\neq j}=\frac{k_m}{m}\right) \notag\\
&=&\sum_{\frac{(m-1)k_h-k_m}{m}=\pm \frac{k_d}{m}}p\left(r_{ij}, k_h, k_m\right)
\label{eq:equation3}
\end{eqnarray}}
}
\noindent where $p\left(r_{ij}, k_h, k_m\right)=e^{-\sum_{j=1}^{m}\alpha_{ij}}\frac{({\sum_{g\neq j}\alpha_{ig}})^{k_m}(\alpha_{ij})^{k_h}}{k_m!k_h!}$ denotes the probability that the number of events in HGrid $r_{ij}$ is $k_h$ and the number of events in MGrid $r_{i}$ is $k_h+k_m$. Then, we have:
{\scriptsize\begin{eqnarray}
E_e\left(i,j\right)\notag&=&\mathbb{E}(|\lambda_{ij}-\bar{\lambda}_{ij}|)
=\sum_{k_d=0}^{\infty}\frac{k_d}{m}P(|\lambda_{ij}-\bar{\lambda}_{ij}|=\frac{k_d}{m}) \notag\\
&=&\sum_{k_d=0}^{\infty}\frac{k_d}{m}\sum_{\frac{(m-1)k_h-k_m}{m}=\pm \frac{k_d}{m}}p\left(r_{ij}, k_h, k_m\right)\notag\\
&=&\sum_{k_h=0}^{\infty}\sum_{k_m=0}^{\infty} b_{k_hk_m}\label{eq:exp_error}
\end{eqnarray}}
\noindent where $b_{k_hk_m}=\left|\frac{(m-1)k_h-k_m}{m}\right|p\left(r_{ij}, k_h, k_m\right)$. \revision{Here, $p\left(r_{ij}, k_h, k_m\right)$ represents the probability when the number of events in MGrid $r_{i}$ is $k_m+k_h$ and the number of events in HGrid $r_{ij}$ is $k_h$, and the single expression error of HGrid $r_{ij}$ in this situation is $\left|\frac{(m-1)k_h-k_m}{m}\right|$. Thus, Equation \ref{eq:exp_error} can be regarded as a weighted average of the single expression error in all possible cases.}
\revision{We can use Equation \ref{eq:exp_error} to} calculate the expression error of HGrid $r_{ij}$, which also shows that the expression error only depends on the means $\alpha_{ig}$ of the HGrids in MGrid $r_i$ and on $m$.
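To make the computed quantity concrete, the following Python sketch evaluates the truncated double sum directly; it is only an illustration of Equation \ref{eq:exp_error}, not the optimized procedures given later. The Poisson probabilities are computed in log-space to avoid overflow, and all $\alpha_{ig}$ are assumed to be positive.
{\small\begin{verbatim}
import math

def expression_error(alpha_i, j, K):
    # Truncated evaluation of E_e(i, j): sum over k_h <= K and
    # k_m <= (m-1)K of |((m-1)k_h - k_m)/m| * P(lambda_ij = k_h)
    # * P(lambda_{i,!=j} = k_m).
    # alpha_i: the m estimated means of MGrid r_i (all > 0);
    # j: index of the HGrid of interest.
    m = len(alpha_i)
    a_j = alpha_i[j]
    a_rest = sum(alpha_i) - a_j
    e_e = 0.0
    for k_h in range(K + 1):
        log_ph = -a_j + k_h * math.log(a_j) - math.lgamma(k_h + 1)
        for k_m in range((m - 1) * K + 1):
            log_pm = (-a_rest + k_m * math.log(a_rest)
                      - math.lgamma(k_m + 1))
            w = abs((m - 1) * k_h - k_m) / m
            e_e += w * math.exp(log_ph + log_pm)
    return e_e
\end{verbatim}}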
\noindent\textbf{Properties of expression error.} We show that the upper bound of expression error {\scriptsize$E_e\left(i,j\right)$} is positively correlated to $\alpha_{ij}$ and $m$. In other words, when $\alpha_{ij}$ or $m$ increases, the upper bound of expression error {\scriptsize$E_e\left(i,j\right)$} will also increase. This relationship is presented with the following lemma:
\begin{lemma}
\label{lem:bounded}
$\forall M_1,M_2\in \mathbb{Z^+}$, we have
{\scriptsize$$
\sum_{k_h=0}^{M_1}\sum_{k_m=0}^{M_2}{b_{k_hk_m}}< (1-\frac{2}{m})\alpha_{ij} + \frac{\sum_{k=1}^{m}{\alpha_{ik}}}{m}.
$$}
\end{lemma}
\begin{proof}\vspace{-2ex}
{\scriptsize\begin{eqnarray}
&&\sum_{k_h=0}^{M_1}\sum_{k_m=0}^{M_2}{b_{k_hk_m}}
=\sum_{k_h=0}^{M_1}\sum_{k_m=0}^{M_2}\left|\frac{(m-1)k_h-k_m}{m}\right|p\left(r_{ij}, k_h, k_m\right) \notag \\
&\leq&\sum_{k_h=0}^{M_1}\sum_{k_m=0}^{M_2}\left(\frac{(m-1)k_h}{m}+\frac{k_m}{m}\right)p\left(r_{ij}, k_h, k_m\right) \label{eq:lem_proof}
\end{eqnarray}}
Considering the \revision{first term of the right hand side of Inequality \ref{eq:lem_proof},} we have
{\scriptsize\begin{eqnarray}
&&\sum_{k_h=0}^{M_1}\sum_{k_m=0}^{M_2}\frac{(m-1)k_h}{m}p\left(r_{ij}, k_h, k_m\right) \notag \\
&=&\sum_{k_h=0}^{M_1}\sum_{k_m=0}^{M_2}\frac{(m-1)k_h}{m}e^{-\sum_{j=1}^{m}\alpha_{ij}}\frac{({\sum_{g\neq j}\alpha_{ig}})^{k_m}(\alpha_{ij})^{k_h}}{k_m!k_h!} \notag \\
&=&\frac{(m-1)}{m}\sum_{k_h=1}^{M_1}{\frac{e^{-\alpha_{ij}}(\alpha_{ij})^{k_h}}{(k_h-1)!}}\sum_{k_m=0}^{M_2}{e^{-\sum_{g\neq j}\alpha_{ig}}\frac{({\sum_{g\neq j}\alpha_{ig}})^{k_m}}{k_m!}} \label{eq:berfore_reduce1}\\
&<&\frac{(m-1)}{m}\sum_{k_h=1}^{M_1}{\frac{e^{-\alpha_{ij}}(\alpha_{ij})^{k_h}}{(k_h-1)!}} \label{eq:2}\\
&=&\frac{(m-1)\alpha_{ij}}{m}\sum_{k_h=1}^{M_1}{\frac{e^{-\alpha_{ij}}(\alpha_{ij})^{k_h-1}}{(k_h-1)!}} \notag\\
&=&\frac{(m-1)\alpha_{ij}}{m}\sum_{k_h=0}^{M_1-1}{\frac{e^{-\alpha_{ij}}(\alpha_{ij})^{k_h}}{k_h!}} \label{eq:berfore_reduce2}\\
&<&\frac{(m-1)}{m}\alpha_{ij} \label{eq:sim_proof}
\end{eqnarray}}\vspace{-2ex}
\revision{The term {\scriptsize$e^{-\sum_{g\neq j}\alpha_{ig}}\frac{({\sum_{g\neq j}\alpha_{ig}})^{k_m}}{k_m!}$} in Equation \ref{eq:berfore_reduce1} can be regarded as the probability {\scriptsize$\tilde{P}\left(x=k_m\right)$}, where {\scriptsize$\tilde{P}$} is a Poisson distribution with mean {\scriptsize$\sum_{g\neq j}\alpha_{ig}$}. Similarly, the term {\scriptsize$\frac{e^{-\alpha_{ij}}(\alpha_{ij})^{k_h}}{k_h!}$} in Equation \ref{eq:berfore_reduce2} can be regarded as the probability {\scriptsize$\tilde{P}\left(x=k_h\right)$}, where {\scriptsize$\tilde{P}$} is a Poisson distribution with mean $\alpha_{ij}$. Inequalities \ref{eq:2} and \ref{eq:sim_proof} hold because the sum of a Poisson distribution over only a part of its range is smaller than 1. Then, we can simplify the second term of the right hand side of Inequality \ref{eq:lem_proof} in the same way:}\vspace{-2ex}
{\scriptsize\begin{eqnarray}
&&\sum_{k_h=0}^{M_1}\sum_{k_m=0}^{M_2}\frac{k_m}{m}p\left(r_{ij}, k_h, k_m\right) \notag \\
&=&\frac{1}{m}\sum_{k_h=0}^{M_1}{\frac{e^{-\alpha_{ij}}(\alpha_{ij})^{k_h}}{k_h!}}\sum_{k_m=1}^{M_2}{e^{-\sum_{g\neq j}\alpha_{ig}}\frac{({\sum_{g\neq j}\alpha_{ig}})^{k_m}}{(k_m-1)!}} \notag\\
&<&\frac{1}{m}{\sum_{g\neq j}\alpha_{ig}}\notag
\end{eqnarray}}\vspace{-3ex}
Thus, we have
{\small$
\sum_{k_h=0}^{M_1}\sum_{k_m=0}^{M_2}{b_{k_hk_m}}< (1-\frac{2}{m})\alpha_{ij} + \frac{\sum_{k=1}^{m}{\alpha_{ik}}}{m}
$}\vspace{-1ex}
\end{proof}\vspace{-1ex}
Then, the summation of {\scriptsize$E_e\left(i,j\right)$} over all HGrids is upper bounded by
{\scriptsize$
\sum_{i=1}^{n}{\sum_{j=1}^{m}{E_e\left(i,j\right)}}\leq 2\left(1-\frac{1}{m}\right)\sum_{i=1}^{n}{\sum_{j=1}^{m}{\alpha_{ij}}}
$}.
According to Theorem \ref{theo:ubte}, to minimize the overall real error, we need to minimize the overall expression error. By Lemma \ref{lem:bounded}, the expression error {\scriptsize$E_e\left(i,j\right)$} can be reduced by decreasing $\alpha_{ij}$ or $m$. However, $\alpha_{ij}$ is an intrinsic property of HGrid $r_{ij}$ and is not determined by the prediction algorithm. For example, $\alpha_{ij}$ of the same HGrid $r_{ij}$ differs considerably between weekdays and weekends, and the mean number of events in the same grid can also vary greatly across time periods of the day. Therefore, \textit{to minimize the expression error, we should select an appropriate $n$ that minimizes $m$ under the constraint {\scriptsize$nm>N$}.} Since {\scriptsize$nm>N$}, minimizing $m$ amounts to maximizing $n$.
\noindent\textbf{Convergence of Expression Error}. We notice that Equation \ref{eq:exp_error} is the summation of an infinite series. For a HGrid $r_{ij}$, we show that this series converges:
\begin{lemma}
The series in Equation \ref{eq:exp_error} that defines the expression error $E_e\left(i,j\right)$ converges.
\label{lem:repr_converge}
\end{lemma}
\begin{proof}
\revision{Since {\scriptsize$b_{k_hk_m}$} is positive for any {\scriptsize$k_h,k_m$} and {\scriptsize$\sum_{k_h=0}^{M_1}\sum_{k_m=0}^{M_2}{b_{k_hk_m}}$} is bounded according to Lemma \ref{lem:bounded}, Equation \ref{eq:exp_error} converges.} Let
{\scriptsize$S\left(M_2, M_1\right) = \sum_{k_m=0}^{M_2}\sum_{k_h=0}^{M_1}{b_{k_hk_m}}
$}.
Lemma \ref{lem:bounded} shows that there exists {\scriptsize$M>0$} such that {\scriptsize$S\left(M_2, M_1\right)\leq M$} holds for any {\scriptsize$M_2 \in \mathbb{Z}$}. Since {\scriptsize$S\left(M_2, M_1\right)$} is monotonically increasing with respect to {\scriptsize$M_2$, $\lim_{M_2\to \infty}S\left(M_2, M_1\right)$} exists, and we have {\scriptsize$\lim_{M_2\to \infty}S\left(M_2, M_1\right)\leq M
$}, that is: \vspace{-1ex}
{\scriptsize$$\sum_{k_m=0}^{\infty}\sum_{k_h=0}^{M_1}{b_{k_hk_m}}\leq M \notag\Leftrightarrow \sum_{k_h=0}^{M_1}\sum_{k_m=0}^{\infty}{b_{k_hk_m}}\leq M$$}\vspace{-1ex}
Let {\scriptsize$T\left(M_1\right) = \sum_{k_h=0}^{M_1}\sum_{k_m=0}^{\infty}{b_{k_hk_m}}$}, and we have {\scriptsize$\lim_{M_1\to \infty}{T\left(M_1\right)}=E_e\left(i,j\right)$}. Since {\scriptsize$T\left(M_1\right)$} is monotonically increasing with respect to {\scriptsize$M_1$} and bounded by {\scriptsize$M$}, this limit exists, which means {\scriptsize$E_e\left(i,j\right)=\lim_{M_1\to \infty} T\left(M_1\right)\leq M$}.
\end{proof}\vspace{-1ex}
Since Equation \ref{eq:exp_error} can converge, we will introduce algorithms to calculate expression error.
\noindent \textbf{Algorithm to Calculate Expression Error}.
Equation \ref{eq:exp_error} shows how to calculate the expression error {\scriptsize$E_e\left(i,j\right)$} of a HGrid $r_{ij}$. In fact, we cannot compute the expression error exactly, but the following theorem proves that it can be approximated to arbitrary precision.
\begin{theorem}
For any $\varepsilon>0$, there is a number $K$ that makes the following inequality hold:\vspace{-1ex}
{\scriptsize$$
\left|\sum_{k_h=0}^{K}\sum_{k_m=0}^{(m-1)K}b_{k_hk_m}-E_e\left(i,j\right)\right|<\varepsilon
$$}
\label{the:arbitrary_precision}
\end{theorem}\vspace{-2ex}
\begin{proof}
Lemma \ref{lem:repr_converge} shows that $E_e\left(i,j\right)$ converges. We have \vspace{-2ex}
{\scriptsize$$\lim_{\tilde{k}_1\to \infty}\sum_{k_h=0}^{\tilde{k}_1}\sum_{k_m=0}^{\infty}{b_{k_hk_m}}=E_e\left(i,j\right)$$}\vspace{-1ex}
According to the definition of the limit, for any $\varepsilon>0$ there is {\scriptsize$M_1$} such that\vspace{-2ex}
{\scriptsize$$\frac{-\varepsilon}{2}<\sum_{k_h=0}^{\tilde{k}_1}\sum_{k_m=0}^{\infty}{b_{k_hk_m}}-E_e\left(i,j\right)<\frac{\varepsilon}{2}$$}
when {\scriptsize$\tilde{k}_1>M_1$}, which means\vspace{-2ex}
{\scriptsize\begin{eqnarray}
\frac{-\varepsilon}{2}+E_e\left(i,j\right)<\sum_{k_h=0}^{\tilde{k}_1}\sum_{k_m=0}^{\infty}{b_{k_hk_m}}<\frac{\varepsilon}{2}+E_e\left(i,j\right)\notag
\end{eqnarray}}\vspace{-2ex}
Since the series $\sum_{k_h=0}^{\tilde{k}_1}\sum_{k_m=0}^{\infty}{b_{k_hk_m}}$ is bounded (shown by Lemma \ref{lem:bounded}) and its terms are non-negative, we can switch the order of summation.\vspace{-2ex}
{\scriptsize\begin{eqnarray}
\sum_{k_h=0}^{\tilde{k}_1}\sum_{k_m=0}^{\infty}{b_{k_hk_m}}=\sum_{k_m=0}^{\infty}\sum_{k_h=0}^{\tilde{k}_1}{b_{k_hk_m}}\notag
\end{eqnarray}}\vspace{-2ex}
Based on the definition of the limit, we can also find $M_2$ for the positive number $\varepsilon$ such that, for any {\scriptsize$\tilde{k}_2>M_2$},\vspace{-2ex}
{\scriptsize\begin{eqnarray}
\frac{-\varepsilon}{2}<\sum_{k_h=0}^{\tilde{k}_1}\sum_{k_m=0}^{\tilde{k}_2}{b_{k_hk_m}}-\sum_{k_m=0}^{\infty}\sum_{k_h=0}^{\tilde{k}_1}{b_{k_hk_m}}<\frac{\varepsilon}{2}\notag
\end{eqnarray}}\vspace{-2ex}
Combining the two inequalities above, we have \vspace{-2ex}
{\scriptsize\begin{eqnarray}
-\varepsilon<\sum_{k_h=0}^{\tilde{k}_1}\sum_{k_m=0}^{\tilde{k}_2}{b_{k_hk_m}}-E_e\left(i,j\right)<\varepsilon\notag
\end{eqnarray}}\vspace{-1ex}
We select a number {\scriptsize$K$} which meets the constraints of {\scriptsize$K>M_1$} and {\scriptsize$(m-1)K>M_2$}. Thus, we have
{\scriptsize\begin{eqnarray}
\left|\sum_{k_h=0}^{K}\sum_{k_m=0}^{(m-1)K}b_{k_hk_m}-E_e\left(i,j\right)\right|<\varepsilon
\end{eqnarray}}\vspace{-2ex}
\end{proof}
Theorem \ref{the:arbitrary_precision} shows that we can obtain a result arbitrarily close to the expression error by selecting a suitable {\scriptsize$K$}. We first need to compute {\scriptsize$p\left(r_{ij}, k_h, k_m\right)$}, which takes {\scriptsize$O\left(k_h+k_m\right)$} time. Then, the complexity of directly evaluating Equation \ref{eq:exp_error} is {\scriptsize$O\left(m^2K^3\right)$}. However, the calculation of {\scriptsize$p\left(r_{ij}, k_h, k_m\right)$} can be simplified as follows:\vspace{-2ex}
{\scriptsize\begin{eqnarray}
p\left(r_{ij}, k_h, k_m+1\right)=\frac{\sum_{g\neq j}^{m}{\alpha_{ig}}}{k_m+1}p\left(r_{ij}, k_h, k_m\right) \label{eq:recursive_calculation}\vspace{-2ex}
\end{eqnarray}}\vspace{-1ex}
Based on Equation \ref{eq:recursive_calculation}, Algorithm \ref{algo:repre_algorithm} is proposed to approximately compute the expression error {\scriptsize$E_e\left(i,j\right)$} of HGrid $r_{ij}$. Since updating {\scriptsize$p\left(r_{ij},k_h,k_m\right)$} now takes {\scriptsize$O\left(1\right)$} time, the complexity of Algorithm \ref{algo:repre_algorithm} is {\scriptsize$O\left(mK^2\right)$}.
{\small\begin{algorithm}[t]
\DontPrintSemicolon
\KwIn{\small the number $m$ of HGrids per MGrid, $\alpha_{ij}$ for each HGrid $r_{ij}$ in the MGrid $r_i$, \revision{a hyper-parameter $K$}}
\KwOut{\small the expression error $E_e(i,j)$ of the HGrid $r_{ij}$}
$E_e(i,j)\gets0$\;
$\alpha_{i,\neq j}\gets \sum_{g\neq j}^{m}{\alpha_{ig}}$\;
$p_1\gets e^{-\alpha_{ij}}$\;
\For{$k_h=0$ to $K$}{
$p_2\gets e^{-\alpha_{i,\neq j}}$\;
\For{$k_m=0$ to $(m-1)K$}{
$\Delta\gets\left|\frac{(m-1)k_h-k_m}{m}\right|p_1p_2$\;
$E_e(i,j)\gets E_e(i,j)+\Delta$\;
$p_2\gets \frac{p_2\alpha_{i,\neq j}}{k_m+1}$ \tcp{Equation \ref{eq:recursive_calculation}}
}
$p_1\gets \frac{p_1\alpha_{ij}}{k_h+1}$\;
}
\Return $E_e(i,j)$\;
\caption{\small Expression Error Calculation}
\label{algo:repre_algorithm}
\end{algorithm}}
\noindent \textbf{Algorithm Optimization}.
Considering the large number of HGrids, even though the time needed to calculate the expression error of each HGrid is only about 0.1 second, the total time needed to calculate the summation of the expression errors of all HGrids with Algorithm \ref{algo:repre_algorithm} is about 4 hours. Therefore, based on a more in-depth analysis of Equation \ref{eq:exp_error}, we introduce a more efficient algorithm with time complexity {\scriptsize$O\left(mK\right)$} in this section.
According to Theorem \ref{the:arbitrary_precision}, we can approximate Equation \ref{eq:exp_error} with the following equations:
{\scriptsize\begin{eqnarray}
&&\sum_{k_h=0}^{K}\sum_{k_m=0}^{(m-1)K}\left|\frac{(m-1)k_h-k_m}{m}\right|p\left(r_{ij}, k_h, k_m\right) \label{eq:close_to}\\
&=&\frac{(m-1)}{m}\sum_{k_h=0}^{K}\sum_{k_m=0}^{(m-1)K}k_h\mathbb{I}\left(\left(m-1\right)k_h-k_m\right)p\left(r_{ij}, k_h, k_m\right) \notag\\
&-&\frac{1}{m}\sum_{k_h=0}^{K}\sum_{k_m=0}^{(m-1)K}k_m\mathbb{I}\left(\left(m-1\right)k_h-k_m\right)p\left(r_{ij}, k_h, k_m\right) \label{eq:equation6}
\end{eqnarray}}
where $\mathbb{I}(x)$ is an indicator function that satisfies:\vspace{-1ex}
{\scriptsize\begin{equation}
\mathbb{I}(x)=\left\{
\begin{array}{ll}
1, & x>0 \\
-1, & x \leq 0
\end{array}
\right. \notag
\end{equation}}\vspace{-1ex}
We transform the \revision{first term} of the right hand side of Equation \ref{eq:equation6} (denoted as $e_1$) into the following formula:\vspace{-2ex}
\revision{
{\scriptsize\begin{eqnarray}
&&\frac{(m-1)}{m}\sum_{k_h=1}^{K}\sum_{k_m=0}^{(m-1)K}k_h\mathbb{I}((m-1)k_h-k_m)p(r_{ij}, k_h, k_m) \notag\\
&=&\frac{(m-1)}{m}\sum_{k_h=1}^{K}k_h(2{\sum_{k_m=0}^{(m-1)k_h}p(r_{ij}, k_h, k_m)-\sum_{k_m=0}^{(m-1)K}p(r_{ij}, k_h, k_m)})\notag\\
&=&\frac{(m-1)}{m}\sum_{k_h=1}^{K}{e^{-\sum_{j=1}^{m}{\alpha_{ij}}}\frac{(\alpha_{ij})^{k_h}}{(k_h-1)!}}{e^{'}_1(k_h)}
\end{eqnarray}}
}
where {\scriptsize$e^{'}_1\left(k_h\right)$} denotes the following function of {\scriptsize$k_h$}:
{\scriptsize\begin{eqnarray}
-\sum_{k_m=0}^{(m-1)K}{\frac{\left(\sum_{g\neq j}{\alpha_{ig}}\right)^{k_m}}{k_m!}}+2\sum_{k_m=0}^{(m-1)k_h}{\frac{\left(\sum_{g\neq j}{\alpha_{ig}}\right)^{k_m}}{k_m!}}\label{eq:equation7}
\end{eqnarray}}
The time complexity of the direct calculation of {\scriptsize$e^{'}_1\left(k_h+1\right)$} is {\scriptsize$O\left(mK\right)$} based on Equation \ref{eq:equation7}\revision{; as a result, the time complexity of calculating $e_1$ is {\scriptsize$O\left(mK^2\right)$}.} \revision{However, we can build the connection between {\scriptsize$e^{'}_1\left(k_h+1\right)$} and {\scriptsize$e^{'}_1\left(k_h\right)$} as follows:
{\scriptsize\begin{eqnarray}
&&e^{'}_1\left(k_h+1\right)-e^{'}_1\left(k_h\right)\notag \\
&=&2\sum_{k_m=0}^{(m-1)(k_h+1)}{\frac{\left(\sum_{g\neq j}{\alpha_{ig}}\right)^{k_m}}{k_m!}}-2\sum_{k_m=0}^{(m-1)k_h}{\frac{\left(\sum_{g\neq j}{\alpha_{ig}}\right)^{k_m}}{k_m!}}\notag\\
&=&2\sum_{k_m=(m-1)k_h+1}^{(m-1)(k_h+1)}{\frac{\left(\sum_{g\neq j}{\alpha_{ig}}\right)^{k_m}}{k_m!}}\label{eq:equation15}
\end{eqnarray}}\vspace{-1ex}
}
\noindent \revision{Therefore, $e^{'}_1\left(k_h+1\right)$ can be calculated from the result of $e^{'}_1\left(k_h\right)$, so that the time complexity of computing each $e^{'}_1\left(k_h\right)$ is reduced to $O\left(m\right)$.}
Then we can do the same analysis for the \revision{second term} $e_2$ of right hand side of Equation \ref{eq:equation6}:\vspace{-1ex}
{\scriptsize\begin{eqnarray}
&&\frac{1}{m}\sum_{k_h=0}^{K}\sum_{k_m=0}^{(m-1)K}k_m\mathbb{I}((m-1)k_h-k_m)p\left(r_{ij}, k_h, k_m\right) \notag\\
&=&\frac{1}{m}\sum_{k_h=0}^{K}{e^{-\sum_{j=1}^{m}{\alpha_{ij}}}\frac{\left(\alpha_{ij}\right)^{k_h}}{k_h!}}e^{'}_2\left(k_h\right)\notag
\end{eqnarray}}\vspace{-1ex}
\noindent where {\scriptsize$e^{'}_2\left(k_h\right)=-\sum_{k_m=1}^{(m-1)K}{\frac{\left(\sum_{g\neq j}{\alpha_{ig}}\right)^{k_m}}{(k_m-1)!}}+2\sum_{k_m=1}^{(m-1)k_h}{\frac{\left(\sum_{g\neq j}{\alpha_{ig}}\right)^{k_m}}{(k_m-1)!}}$}\revision{, and we can do a similar analysis as Equation \ref{eq:equation15}.}
Algorithm \ref{algo:simple_repre_algorithm} is obtained through the above analysis. By reducing the time complexity of computing $e^{'}_1\left(k_h\right)$ and $e^{'}_2\left(k_h\right)$ from $O\left(mK\right)$ to $O\left(m\right)$, the overall time complexity of Algorithm \ref{algo:simple_repre_algorithm} becomes $O\left(mK\right)$.
\begin{algorithm}[t]
{\small
\DontPrintSemicolon
\KwIn{the number $m$ of HGrids per MGrid, $\alpha_{ij}$ for each HGrid $r_{ij}$ in the MGrid $r_i$, \revision{a hyper-parameter $K$}}
\KwOut{the expression error $E_e(i,j)$ of the HGrid $r_{ij}$}
$p_2\gets 1$;
$e^{'}_1,e^{'}_2\gets 0$\;
\For(// initialize $e^{'}_1$ and $e^{'}_2$){$k_m=0$ to $(m-1)K$}{
$p_2 \gets p_2 \sum_{g\neq j}{\alpha_{ig}}$\;
$e^{'}_2\gets e^{'}_2-p_2$\;
$p_2 \gets p_2 / (k_m + 1)$\;
$e^{'}_1\gets e^{'}_1-p_2$\;
}
$p_1\gets e^{-\sum_{j=1}^{m}{\alpha_{ij}}}$\;
$p_2\gets 1$;
$e_1,e_2\gets 0$\;
\For(// calculate the value of $e_1$ and $e_2$){$k_h=1$ to $K$}{
\For{$k_m=\left(m-1\right)(k_h-1)$ to $\left(m-1\right)k_h$}{
$e^{'}_2\gets e^{'}_2+2p_2$\;
$p_2 \gets \frac{p_2}{k_m + 1}$\;
$e^{'}_1\gets e^{'}_1+2p_2$\;
$p_2 \gets p_2 \sum_{g\neq j}{\alpha_{ig}}$\;
}
$e_1\gets e^{'}_1p_1+e_1$\;
$p_1 \gets \frac{p_1 \alpha_{ij}}{k_h}$\;
$e_2\gets e^{'}_2p_1+e_2$\;
}
$E_e(i,j)\gets \frac{m-1}{m}e_1-\frac{e_2}{m}$\;
\Return $E_e(i,j)$\;
\caption{\small Fast Expression Error Calculation}
\label{algo:simple_repre_algorithm}}
\end{algorithm}
\subsection{Analysis of Model Error}
\label{sec:prediction_error}
In this section, we estimate the model error with the mean absolute error of the prediction model on the MGrids. Suppose we use the model $f$ to predict the event number $\hat{\lambda}_i$ (i.e., $\hat{\lambda}_i=f(x_i)$) of MGrid $r_i$ for the next period based on the historical information of the events {\scriptsize$\mathbf{X}$}. Let {\scriptsize$\mathbf{X}_{i}$} denote the dataset of MGrid $r_{i}$, so that {\scriptsize$\cup_{i=1}^{n}\mathbf{X}_{i}=\mathbf{X}$}, and assume the number of samples in {\scriptsize$\mathbf{X}_{i}$} is {\scriptsize$\frac{\left|\mathbf{X}\right|}{n}$} for each MGrid $r_{i}$. We define the mean absolute error of $f$ as {\scriptsize$MAE(f)$} (i.e., {\scriptsize$MAE(f)=\frac{\sum_{x_i\in \mathbf{X}}{\left|f(x_i)-\lambda_i\right|}}{\left|\mathbf{X}\right|}$}), and we have\vspace{-2ex}
{\scriptsize\begin{eqnarray}
\lim_{\left|\mathbf{X}\right|\to \infty}MAE\left(f\right)&=&\lim_{\left|\mathbf{X}\right|\to \infty}\frac{\sum_{x_i\in \mathbf{X}}{\left|f(x_i)-\lambda_i\right|}}{\left|\mathbf{X}\right|}\notag \\
&=&\frac{1}{n}\sum_{i=1}^{n}\lim_{\left|\mathbf{X}_i\right|\to \infty}\frac{\sum_{x_j\in \mathbf{X}_i}{\left|f(x_j)-\lambda_j\right|}}{\left|\mathbf{X}_i\right|}\notag\\
&=&\frac{1}{n}\sum_{i=1}^{n}{E\left(\left|\hat{\lambda}_i-\lambda_{i}\right|\right)}\notag
\end{eqnarray} }\vspace{-1ex}
We can get the relationship between the model error {\scriptsize$E_m\left(i,j\right)$} and {\scriptsize$MAE\left(f\right)$}:\vspace{-1ex}
{\scriptsize\begin{eqnarray}
\sum_{i=1}^{n}{\sum_{j=1}^{m}{E_m\left(i,j\right)}}&=&\sum_{i=1}^{n}{\sum_{j=1}^{m}{\mathbb{E}\left(\left|\hat{\lambda}_{ij}-\bar{\lambda}_{ij}\right|\right)}}=\sum_{i=1}^{n}{m\mathbb{E}\left(\left|\hat{\lambda}_{ij}-\bar{\lambda}_{ij}\right|\right)}\notag\\
&=&\sum_{i=1}^{n}{\mathbb{E}\left(\left|\hat{\lambda}_i-\lambda_{i}\right|\right)}\approx nMAE(f)\label{eq:prediction_error}
\end{eqnarray}}\vspace{-2ex}
According to Equation \ref{eq:prediction_error}, the total model error increases when $n$ increases. However, based on the analyses in Section \ref{sec:expression_analyses}, the total expression error decreases when $n$ increases. We have proved that the summation of the expression error and the model error is an upper bound of the real error, and this bound first decreases and then increases as $n$ grows from 1 to $N$. Thus, to minimize the total real error, we propose two efficient algorithms to select a proper $n$ in the next section.
\section{Search for Optimal Grid Size}
\label{sec:solution2}
From the analysis in Section \ref{sec:erroranalysis}, the value of $n$ affects the expression error and the model error of each HGrid $r_{ij}$, which further affect the upper bound of the real error. A straightforward algorithm that checks all possible values of $n$ achieves the optimal solution of OGSS with a complexity of {\scriptsize$O(\sqrt{N})$}, which is not efficient. Therefore, we propose two more efficient algorithms to solve OGSS in this section. We first introduce the calculation of the upper bound of the real error.
\subsection{Calculation of Upper Bound for Real Error }
In practice, it is difficult to directly estimate the real error of each HGrid and then select the optimal partition size. Theorem \ref{theo:ubte} proves that the summation of the expression error and the model error is an upper bound of the real error. Thus, we can instead minimize the expression error and the model error, whose calculations have been discussed in Section \ref{sec:erroranalysis}. Specifically, we use Algorithm \ref{algo:simple_repre_algorithm} to calculate the expression errors and Equation \ref{eq:prediction_error} to estimate the model errors.
Based on the analysis in Section \ref{sec:erroranalysis}, we propose Algorithm \ref{algo:opg_1} to calculate $e(\sqrt{n})$ (i.e., {\scriptsize$e(\sqrt{n})=\sum_{i=1}^{n}\sum_{j=1}^{m}E_u\left(i,j\right)$}), which defines an approximate version of OGSS. The time cost of training the prediction model dominates the calculation of {\scriptsize$e(\sqrt{n})$}. Therefore, we introduce two algorithms that require fewer evaluations of {\scriptsize$e(\sqrt{n})$}.
\begin{algorithm}[t]
{\small
\DontPrintSemicolon
\KwIn{\small the number of MGrid $n$, the number of all HGrids $N$, \revision{dataset $\mathbf{X}$,} a prediction method $Model$}
\KwOut{\small$e(\sqrt{n})$}
$m\gets$ {\scriptsize$\left\lceil \sqrt{\frac{N}{n}}\right\rceil^2$}\;
$f\gets Model\left(\mathbf{X}\right)$\;
$e\gets nMAE\left(f\right)$\;
divide the global space into $N$ HGrids and estimate the $\alpha_{ij}$ for each HGrid $r_{ij}$\;
\For{$i=1$ to $n$}{
\For{$j=1$ to $m$}{
$e\gets e+ E_e\left(i,j\right)$ \tcp{calculated by Algorithm \ref{algo:simple_repre_algorithm}}
}
}
\Return $e$\;
\caption{\small $UpperBound\left(n, N, \mathbf{X}, Model\right)$}
\label{algo:opg_1}}
\end{algorithm}
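For illustration, a Python sketch that mirrors Algorithm \ref{algo:opg_1} is given below. The helper functions \texttt{fit\_and\_mae} (which trains the prediction model on the $n$-MGrid partition and returns its MAE), \texttt{estimate\_alphas} (which estimates $\alpha_{ij}$ from the historical data) and \texttt{expression\_error} (e.g., Algorithm \ref{algo:simple_repre_algorithm}) are assumed to exist; their names are illustrative, not part of our implementation.
{\small\begin{verbatim}
import math

def upper_bound(n, N, X, model_cls, K=50):
    # e(sqrt(n)) = n * MAE(f) + sum of expression errors of all HGrids
    m = int(math.ceil(math.sqrt(N / n))) ** 2      # HGrids per MGrid
    mae = fit_and_mae(model_cls, X, n)             # hypothetical helper
    e = n * mae                                    # total model error
    alphas = estimate_alphas(X, n, m)              # alphas[i][j] for r_ij
    for i in range(n):
        for j in range(m):
            e += expression_error(alphas[i], j, K) # expression error
    return e
\end{verbatim}}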
\subsection{Ternary Search}
Without any prior information, we cannot improve upon the most straightforward algorithm. Fortunately, the analysis in Section \ref{sec:erroranalysis} shows that the model error increases and the expression error decreases when $n$ increases, which means that there exists an equilibrium point that minimizes the summation of the expression error and the model error. Consider the extreme case $n=1$: the prediction model only needs to predict the number of events of the whole space, which can be done very accurately. For example, according to the historical information of New York City (NYC), the number of spatial events (e.g., riders' orders) on weekdays remains relatively stable without dramatic fluctuations. In this case, the model error is small, but the expression error is considerable: even if we knew the exact number of orders in the whole of NYC at a specific timestamp, it would not help us dispatch orders to drivers in a particular street area of NYC. When {\scriptsize$n=N$}, the forecasting model needs to accurately predict the events of a massive number of grids, which leads to huge model errors due to the uncertainty of human behavior: when the area of a grid is very small, the uncertainty of human activity causes large differences in the predictions. Therefore, we assume that {\scriptsize$e(\sqrt{n})$} first decreases and then increases as $n$ grows (this assumption is confirmed in Section \ref{sec:real_error_grid_size_exp}).
We propose a ternary search algorithm to find the optimal partition size. Given that $n$ is a perfect square, we need to find the optimal $n$ among {\scriptsize$\sqrt{N}$} candidate values. Let $l$ be the minimum of {\scriptsize$\sqrt{n}$} and $r$ be the maximum of {\scriptsize$\sqrt{n}$}. The main idea of ternary search is to take the two third-points between $l$ and $r$ in each round, denoted as $m_l$ and $m_r$, and compare their corresponding errors. If $e(m_l)>e(m_r)$, we let $l=m_l$ for the next round; otherwise, we let $r=m_r$. The ternary search shown in Algorithm \ref{algo:opg_2} drops $\frac{1}{3}$ of the possible values of $n$ in each round, which results in convergence.
\begin{algorithm}[t]
{\small\DontPrintSemicolon
\KwIn{\revision{dataset $\mathbf{X}$,} prediction model $Model$}
\KwOut{partition size $n$ that minimize $e(\sqrt{n})$}
use the method analyzed in Section \ref{sec:region_depart} to select $N$\;
$l\gets 1$;
$r\gets \sqrt{N}$\;
\While{$r-l>1$}{
$m_r\gets \left\lceil \frac{2}{3}r+\frac{1}{3}l\right\rceil $\;
$m_l\gets \left\lfloor \frac{1}{3}r+\frac{2}{3}l\right\rfloor $\;
$e(m_l)\gets UpperBound\left(m_l^2, N, \mathbf{X}, Model\right)$\;
$e(m_r)\gets UpperBound\left(m_r^2, N, \mathbf{X}, Model\right)$\;
\eIf{$e(m_l)>e(m_r)$}{
$l\gets m_l$\;
}{
$r\gets m_r$\;
}
}
\eIf{$e(l)>e(r)$}{
$n\gets r^2$\;
}{
$n\gets l^2$\;
}
\Return $n$\;
\caption{\small Ternary Search}
\label{algo:opg_2}}
\end{algorithm}
If the function {\scriptsize$e(\sqrt{n})$} has only one local minimum, the ternary search finds the optimal solution. Although the shape of {\scriptsize$e(\sqrt{n})$} is not always ideal, the ternary search algorithm can still find a good solution.
\noindent \textbf{Time Complexity.} For a given $N$, we denote the complexity of the algorithm as {\scriptsize$T(\sqrt{N})$}. From the above analysis, the algorithm drops $\frac{1}{3}$ of the possible values in each round, converting the original problem into a smaller subproblem. Therefore, we have {\scriptsize$T(\sqrt{N})=T(\frac{2}{3}\sqrt{N})+2$}.
We can infer that the time complexity of Algorithm \ref{algo:opg_2} is {\scriptsize$O(\log{\sqrt{N}})$} according to the master theorem.
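A Python sketch of Algorithm \ref{algo:opg_2} is given below, reusing the \texttt{upper\_bound} sketch above. Since each evaluation of $e(\sqrt{n})$ requires training a prediction model, the values are cached; the small guards on $m_l$ and $m_r$ are our own addition to prevent stalling on very small intervals.
{\small\begin{verbatim}
import math
from functools import lru_cache

def ternary_search(N, X, model_cls):
    @lru_cache(maxsize=None)
    def e(sqrt_n):                 # cached e(sqrt(n))
        return upper_bound(sqrt_n ** 2, N, X, model_cls)

    l, r = 1, int(math.sqrt(N))
    while r - l > 1:
        m_r = min(math.ceil((2 * r + l) / 3), r - 1)   # upper third
        m_l = max(math.floor((r + 2 * l) / 3), l + 1)  # lower third
        if e(m_l) > e(m_r):
            l = m_l
        else:
            r = m_r
    return (r if e(l) > e(r) else l) ** 2
\end{verbatim}}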
\subsection{Iterative Method}
\label{sec:iterative}
Although the ternary search algorithm reduces the complexity from the {\scriptsize$O(\sqrt{N})$} of the traversal algorithm to {\scriptsize$O(\log{\sqrt{N}})$}, the experiments in Section \ref{sec:experimental} show that it may miss the global optimal solution in some situations. Therefore, in this section we introduce an iteration-based algorithm with a higher probability of achieving the optimal $n$.
Considering that the upper bound of the real error is large when $n$ is either very large or very small, the global optimum of $n$ tends to lie somewhere in the middle. We can roughly choose a promising value $p$ for the optimal solution based on practical experience and then take this value as the initial position of a local search. We set a search boundary $b$; if the error at the current position is smaller than that of all positions within the boundary $b$, the current position is likely to be the optimal solution. To speed up the search, we examine candidate positions starting from the boundary $b$ inward, which avoids a full local traversal when {\scriptsize$e(\sqrt{n})$} is monotonous. The details of the algorithm are shown in Algorithm \ref{algo:opg_3}.
In Algorithm \ref{algo:opg_3}, the choice of the initial position $p$ and the setting of the search boundary $b$ significantly affect the quality of its result and its efficiency.
Based on the experience from existing studies \cite{wang2020demand}, we use the default grid size of $2km\times 2km$ (i.e., approximately a $16\times 16$ partition) as the initial position to speed up the search for the global optimal $n$.
On the other hand, the search boundary $b$ has an essential influence on the quality of the solution and the efficiency of the algorithm. When the search boundary is large, the probability of finding the optimal solution increases, but the efficiency of the algorithm decreases. On the contrary, when the search boundary is small, the algorithm converges quickly but with a smaller probability of finding the optimal solution.
\begin{algorithm}[t]
{\small\DontPrintSemicolon
\KwIn{\revision{dataset $\mathbf{X}$,} prediction model $Model$}
\KwOut{partition size $n$ that minimize $e(\sqrt{n})$}
use the method analyzed in Section \ref{sec:region_depart} to select $N$\;
$p\gets 16$; $b\gets 4$\;
$flag\gets true$\;
\While{$flag$}{
$flag\gets false$\;
\For{$i=b$ to $1$}{
$e(p+i)\gets UpperBound\left((p+i)^2, N, \mathbf{X}, Model\right)$\;
$e(p-i)\gets UpperBound\left((p-i)^2, N, \mathbf{X}, Model\right)$\;
\If{$e(p) > e(p + i)$}{
$p \gets p + i$\;
$flag \gets true$\;
break\;
}
\If{$e(p) > e(p - i)$}{
$p \gets p - i$\;
$flag \gets true$\;
break\;
}
}
}
$n\gets p^2$\;
\Return $n$\;
\caption{\small Iterative Method}
\label{algo:opg_3}}
\end{algorithm}
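Similarly, Algorithm \ref{algo:opg_3} can be sketched in Python as follows, again reusing the \texttt{upper\_bound} sketch. The cache and the bounds check on the candidate positions are small practical additions of ours; the defaults $p=16$ and $b=4$ follow the discussion above.
{\small\begin{verbatim}
import math

def iterative_search(N, X, model_cls, p=16, b=4):
    cache = {}
    def e(sqrt_n):                 # cached e(sqrt(n))
        if sqrt_n not in cache:
            cache[sqrt_n] = upper_bound(sqrt_n ** 2, N, X, model_cls)
        return cache[sqrt_n]

    sqrt_N = int(math.sqrt(N))
    improved = True
    while improved:
        improved = False
        for i in range(b, 0, -1):  # search from the boundary inward
            if p + i <= sqrt_N and e(p) > e(p + i):
                p = p + i
                improved = True
                break
            if p - i >= 1 and e(p) > e(p - i):
                p = p - i
                improved = True
                break
    return p ** 2
\end{verbatim}}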
\section{Related Work}
\label{sec:related}
In recent years, with the rise of various online taxi-hailing platforms, more and more researchers have been working on how to assign tasks to workers. As summarized in \cite{kazemi2012geocrowd}, the task assignment problem is mainly classified into two modes: worker-selected tasks and server-assigned tasks. Compared with the former, the latter makes it easier to find the global optimal solution, and the major online taxi-hailing platforms mainly adopt it, which attracts more and more researchers' attention.
There are two main modes of order dispatching on the platforms: online \cite{tong2017flexible, cheng2019queueing, tong2016online, wang2020demand, asghari2018adapt} and offline \cite{zheng2018order, ma2013t, thangaraj2017xhare, chen2018price}. Online task assignment faces greater challenges than offline task assignment due to the lack of information about future orders. However, the emergence of traffic prediction techniques \cite{zhang2017deep, yao2018deep, guo2019attention} has alleviated this problem well. With the continuous improvement of these traffic forecasting works, demand-aware task assignment algorithms \cite{tong2017flexible, cheng2019queueing, wang2020demand, zhao2020predictive, asghari2018adapt} have more advantages than traditional algorithms \cite{kazemi2012geocrowd, karp1990optimal, chen2018price, zheng2018order}. When the prediction results of the traffic prediction algorithm are close to the real demand, the optimal solution of task assignment based on supply and demand is approximately equivalent to the optimal solution of the offline task assignment problem.
Several traffic prediction methods \cite{zhang2017deep, yao2018deep, guo2019attention, he2015high, cressie2015statistics} divide the entire space into grids based on latitude and longitude and then predict the number of orders in each region. The residual network is introduced into traffic prediction in \cite{zhang2017deep} so that deep neural networks can better reduce the deviation between the predicted and actual results. \cite{yao2018deep} combines different perspectives to predict future order data, including the temporal, spatial, and semantic perspectives; the results show that the multi-view spatiotemporal network improves the prediction performance of the model. In addition, an attention mechanism is introduced in \cite{guo2019attention} to mine the dynamic spatiotemporal correlation of traffic data and optimize the prediction performance.
By combining the predicted future distribution of tasks, task assignment algorithms can solve the problem better. The work \cite{cheng2019queueing} proposed a framework based on queueing theory to guide the platform in order dispatching, which combines the future distribution of orders and drivers in each region to predict the waiting time of a driver before receiving the next order after delivering the current one to its destination. In addition, a two-stage dispatching model is proposed in \cite{tong2017flexible}: in the first stage, the platform pre-assigns drivers based on the predicted number of regional orders and directs them to the likely locations of the orders, while in the second stage it assigns the actual orders.
The accuracy of traffic prediction significantly affects the performance of these algorithms. However, existing order dispatching algorithms pay little attention to the model error and the expression error caused by the uneven distribution of orders within the grids. Our work mainly focuses on how to divide the grids so as to balance the model error and the expression error, thereby improving the effectiveness of order dispatching algorithms based on supply and demand prediction.
"attr-fineweb-edu": 1.655273,
"attr-cc_en_topic": 0,
"domain": "arxiv"
\section{INTRODUCTION}
\label{sec:introduction}
\input{introduction}
\section{RELATED WORK}
\label{sec:related_work}
\input{related_work}
\section{PROPOSED APPROACH}
\label{sec:methodology}
\input{methodology}
\section{EXPERIMENTAL RESULTS}
\label{sec:results}
\input{results}
\section{CONCLUSION}
\label{sec:conclusion}
\input{conclusion.tex}
\bibliographystyle{ieeetr}
\subsection{Problem Formulation}
This work addresses the problem of playing soccer in a 2v2 setting, in which two teams of robots play soccer using the ``\textit{sudden death}'' format (the first team that scores wins the match). We model this problem as a Markov game defined by a set of states $\mathcal{S}$, $N$ agents, their respective observation and action sets, $\Omega^1, ..., \Omega^N$ and $\mathcal{A}^1, ..., \mathcal{A}^N$, their respective reward and observation functions, $\mathcal{R}^1, ..., \mathcal{R}^N$ and $\mathcal{O}^1, ..., \mathcal{O}^N$, a transition function $p(s_{t+1}|s_t, a^1_t,...,a^N_t)$, and an initial state distribution $p(s_1)$. At every time step $t$, each agent $i$ observes $o^i_t$, executes an action $a^i_t$ according to its policy $\pi^i$, and receives a scalar reward $r^i_t$. The environment then evolves to a new state $s_{t+1}$ according to the transition function. Each agent tries to maximize their respective expected discounted return $\mathbb{E}[\sum^T_{t=1} \gamma^{t-1} r^i_t]$. Compared to previous work, in which discrete action sets are often used (e.g. \cite{stone2005keepaway, kalyanakrishnan2006half, wiering1999reinforcement}), in this work both the state and action sets are continuous, and the task is naturally episodic, so $T$ is finite.
\subsection{Curriculum-Learning}
\label{subsec:curriculum-learning}
We divide the task of playing 2v2 soccer in three stages of increasing complexity: 1v0, 1v1, and finally 2v2. We use agents trained in stage $k$, as fixed opponents for the agents trained in stage $k+1$.
In the first stage (1v0), a single agent learns to maneuver itself to score a goal. In this stage, the agent learns skills such as getting close to the ball, dribbling, and kicking the ball towards a goalpost. In the second stage (1v1), the agent learns to play against the policy trained in the previous stage (1v0), learning additionally to chase, intercept and feint. In the third and final stage (2v2), a team of two agents learns to play against a team of two independent agents trained in the second stage (1v1). As the opponents of the final stage cannot coordinate (their trained policies do not consider the presence of a teammate), the team being trained must learn some form of coordination to exploit the other team's weakness.
\subsection{Experience Sharing}
Under the curriculum described above, agents are trained on stages of increasing difficulty. Transferring knowledge across stages can be particularly useful in this scenario. This idea may be hard to apply in a number of tasks, especially in those in which every stage has a different observation space, so policies trained in a given stage cannot be retrained directly on the next stage. Thus, another approach for skill transfer is required.
In this work, we propose using transitions experienced by the fixed opponent players in the current stage to speed up the learning process of the agent being trained. Given that the fixed opponent players were trained in the previous stage, knowledge is transferred across stages. This may be interpreted as the simplest form of experience sharing (ES). The effect of ES is two-fold: on one hand, the agent quickly learns which actions offer a better reward than those obtained in the early stages, avoiding the need for heavy exploration; on the other hand, ES eases the acquisition of the baseline behaviors that are required to at least match the opponent's performance.
\subsection{Actions}
The $i$-th agent's actions correspond to 3-dimensional vectors, $a^i_t \in [-1, 1]^3$. Each component of $a^i_t$ represents the linear acceleration, the torque on the vertical axis that allows rotation, and a downwards force that can be used to make the agent jump, respectively. For the sake of simplicity, the third component of each action (the downwards force) is fixed to zero, thus, forcing the agents to stay on the ground. This simplification is also in line with the fact that robots in soccer leagues currently are unable to jump.
\subsection{Observations}
The $i$-th agent's observations, $o^i_t$, consist mainly of 2-dimensional position, velocity and acceleration vectors. These observations can be divided into two groups: the first group contains proprioceptive measurements, and information related to the position of key points in the field with respect to its local frame, while the second group contains information about its teammates and opponents in the field. The components that conform the agents' observations are listed in Table \ref{tab:obs}.
All position vectors are transformed to their polar form, i.e. to a distance and an angle. The distance is normalized by the maximum measurable distance (the field diagonal length), and the angle is normalized by $2\pi$. On the other hand, velocity and acceleration vectors are also transformed to a modified polar form: the angle is obtained and normalized as described above, while a modified scaled magnitude, $|\overline{\rho}|$, is computed as $\sqrt{(\tanh^2(c_x) + \tanh^2(c_y))/2}$, where $c_x$ and $c_y$ are the $x$ and $y$ components for the velocity or acceleration, as appropriate.
Additional information, such as whether the ball is or ever has been at a kick-able distance, and the projection of the ball's velocity on the agent-to-opponent-goalpost vector, are also components of the observations. The former is a boolean value, and thus is cast to either 0 or 1, while the latter, which is a signed scalar, is normalized with a sigmoid function.
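As an illustration, the normalization described above can be sketched as follows; the angle convention and the parameter \texttt{max\_dist} (the field diagonal length) are assumptions for illustration, not taken from the implementation.
{\small\begin{verbatim}
import numpy as np

def polar_position(vec, max_dist):
    # Position vectors: distance scaled by the field diagonal,
    # angle scaled by 2*pi.
    rho = np.linalg.norm(vec) / max_dist
    theta = np.arctan2(vec[1], vec[0]) / (2.0 * np.pi)
    return rho, theta

def polar_dynamic(vec):
    # Velocity/acceleration vectors: modified magnitude
    # sqrt((tanh^2(c_x) + tanh^2(c_y)) / 2), angle scaled by 2*pi.
    rho = np.sqrt((np.tanh(vec[0])**2 + np.tanh(vec[1])**2) / 2.0)
    theta = np.arctan2(vec[1], vec[0]) / (2.0 * np.pi)
    return rho, theta
\end{verbatim}}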
\begin{table}[]
\caption{Components of the agents' observations}
\label{tab:obs}
\centering
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}{@{}clc@{}}
\toprule
\textbf{Component} &\textbf{Description} & \textbf{Dimensions} \\ \midrule
$o^i_{\text{vel}}$ & Agent's velocity & 2 \\ \midrule
$o^i_{\text{acc}}$ & Agent's acceleration & 2 \\ \midrule
$o^i_{\text{b}_\text{pos}}$ & Local ball position & 2 \\ \midrule
$o^i_{\text{b}_\text{vel}}$ & Local ball velocity & 2 \\ \midrule
$o^i_{\text{op}_\text{gp}}$ & Local opponent goalpost position & 2 \\ \midrule
$o^i_{\text{tm}_\text{gp}}$ & Local team goalpost position & 2 \\ \midrule
\multirow{2}{*}{$o^i_{\text{b}_\text{pos}} - o^i_{\text{op}_\text{gp}}$} & Difference between local ball position, & \multirow{2}{*}{2} \\
& and local opponent goalpost position & \\\midrule
\multirow{2}{*}{$o^i_{\text{b}_\text{pos}} - o^i_{\text{tm}_\text{gp}}$} & Difference between local ball position, & \multirow{2}{*}{2}\\
& and local teammate goalpost position& \\\midrule
\multirow{2}{*}{$\left(\text{proj}(o^i_{\text{b}_\text{vel}},o^i_{\text{op}_\text{gp}}), o^i_{\text{kick}}\right)$} & Projected ball velocity, and boolean & \multirow{2}{*}{2} \\
& for ``ball is or has been kick-able'' & \\ \midrule
\multirow{4}{*}{$\left(o^i_{\text{j}_\text{pos}}, o^i_{\text{j}_\text{vel}}, o^i_{\text{j}_\text{pos}}-o^i_{\text{b}_\text{pos}}\right)$} & $j$-th agent local position, & \multirow{4}{*}{6} \\
& $j$-th agent local velocity, and & \\
& difference between $j$-th agent local & \\
& position and the ball's agent local position & \\
\bottomrule
\end{tabular}
\end{adjustbox}
\end{table}
\subsection{Reward Functions}
To guide the agent's learning process, a hand-crafted dense reward function is designed. The effect of using this reward function is compared against using sparse rewards. Both variants are described below.
\subsubsection{Dense Reward Function}
This reward function specifically enforces sub-tasks that might be essential for learning to play soccer: it is designed to guide the agent to first get close to the ball, and once close enough, to kick or dribble the ball towards the opponent's goalpost, while avoiding to get it closer to the agent's own goalpost.
To properly describe this function, the following values are defined:
\begin{itemize}
\item $\alpha$: Max. number of steps in an episode, divided by 10.
\item $\beta$: $\alpha/10$.
\item $\lambda$: Normalized distance threshold (in our experiments, this distance is set to 0.03).
\item $d^i_t$: Normalized distance of the $i$-th agent to the ball at time step $t$.
\item $D^l_t$: Normalized distance of the ball to the center of the goalpost where team $l\in \{0, 1\}$ should score.
\item $b^i_t$: Boolean, \texttt{true} if $d^i_{t^\ast} \leq \lambda$ for some $t^\ast < t$, \texttt{false} otherwise. Represents whether the ball has been at a kick-able distance before.
\item $k^i_t$: Boolean, \texttt{true} if $b^i_t$ is \texttt{false} and $d^i_t \leq \lambda$, \texttt{false} otherwise. Represents whether the ball is at a kick-able distance for the first time.
\end{itemize}
Given the values defined above, the reward for player $i$ belonging to team $l \in \{0, 1\}$, at time step $t$, can be obtained according to Eq. \eqref{eq:rew}, where the terms $r^{\text{\xmark}}_t$ and $r^{\text{\cmark}}_t$ are defined by Eqs. \eqref{eq:rew-nogoal}\footnote{$\Delta \xi_t \coloneqq (\xi_t - \xi_{t-1})$, where $\xi_{t^\ast}$ corresponds to a measured distance at time step $t^\ast$.} and \eqref{eq:rew-goal}, respectively.
\begin{equation}
\label{eq:rew}
r_t = \left\{\begin{array}{ll}
r^{\text{\xmark}}_t & \text{if a goal has not been scored},\\
r^{\text{\cmark}}_t & \text{otherwise}.
\end{array}
\right.
\end{equation}
\begin{equation}
\label{eq:rew-nogoal}
r^{\text{\xmark}}_t= \left\{\begin{array}{ll}
\beta - 0.1 & \text{if } k^i_t, \\
1.2 \cdot (\Delta D^{1-l}_t - \Delta D^l_t) - \Delta d^i_t - 0.1 & \text{if } b^i_t, \\
-\Delta d^i_t - 0.1 & \text{otherwise.} \\
\end{array}\right.
\end{equation}
\begin{equation}
\label{eq:rew-goal}
r^{\text{\cmark}}_t = \left\{\begin{array}{ll}
+\alpha & \text{if goal scored in team $1-l$'s goalpost,} \\
-\alpha & \text{if goal scored in team $l$'s goalpost.}
\end{array}
\right.
\end{equation}
While a goal has not been scored, $r_t$ equals the value of $r^{\text{\xmark}}_t$. In this scenario, three conditions are considered. When the ball is at a kick-able distance for the first time ($k^i_t$ equals \texttt{true}), the agent receives a significant reward. For the following time steps ($b^i_t$ equals \texttt{true}) the function rewards kicks or dribbles if they decrease the distance between the ball and the opponent's goalpost, whilst actions that get the ball close to the team's own goalpost, or move the agent far from the ball, are penalized. If both of the previous conditions have not been met ($\lnot (b^i_t \vee k^i_t)$ equals \texttt{true}), then the agent is rewarded for getting close to the ball.
When a goal is scored, $r_t$ equals the value of $r^{\text{\cmark}}_t$, so the scoring team is given a large reward, whereas the defeated team receives a large punishment.
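For clarity, a Python sketch of this dense reward is shown below. The boolean flags and the distance differences are assumed to be provided by the environment, and the variable names are illustrative.
{\small\begin{verbatim}
def dense_reward(scored, conceded, k_t, b_t,
                 delta_d, delta_D_att, delta_D_def, alpha, beta):
    # scored / conceded: goal in the opponents' / own goalpost
    # k_t: ball kick-able for the first time; b_t: kick-able before
    # delta_d     = d_t - d_{t-1}  (agent-to-ball distance)
    # delta_D_att = change of ball distance to the attacked goalpost
    # delta_D_def = change of ball distance to the defended goalpost
    if scored:
        return alpha
    if conceded:
        return -alpha
    if k_t:
        return beta - 0.1
    if b_t:
        return 1.2 * (delta_D_def - delta_D_att) - delta_d - 0.1
    return -delta_d - 0.1        # otherwise: approach the ball
\end{verbatim}}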
\subsubsection{Sparse Reward Function}
In this case, the reward is given by Eq. \eqref{eq:rew-sparse}, so the agent is only rewarded or punished depending on the final outcome of a match.
\begin{equation}
\label{eq:rew-sparse}
r_t = \left\{\begin{array}{ll}
+1 & \text{if goal scored in team $1-l$'s goalpost,} \\
-1 & \text{if goal scored in team $l$'s goalpost,} \\
0 & \text{otherwise.}
\end{array}
\right.
\end{equation}
\subsection{Learning Algorithm}
\subsubsection{Multi-Agent TD3}
Twin-Delayed Deep Deterministic Policy Gradient (TD3) was proposed by Fujimoto et al. in \cite{fujimoto2018addressing}, built upon the Deep Deterministic Policy Gradient (DDPG) algorithm \cite{lillicrap2019continuous}.
TD3 incorporates various improvements that allow a faster convergence, while reducing the degree of value function overestimation \cite{fujimoto2018addressing}. To adapt this method to a multi-agent setting, the simplest approach is followed: we use separate actor and critic networks, and independent replay buffers for every agent.
The proposed method is shown in Algorithm \ref{algo:matd3+se}; the steps associated with ES are displayed in blue. These steps involve sampling transitions experienced by the fixed expert opponent players, and using them along with the agent's own experienced transitions for training.
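A minimal Python sketch of this sampling step is given below: the training batch of an agent is assembled from $M'$ transitions of its own replay buffer and $M'$ transitions of each fixed opponent's buffer. The buffer interface (a \texttt{sample} method returning a dictionary of arrays) is an assumption for illustration.
{\small\begin{verbatim}
import numpy as np

def sample_with_es(own_buffer, opponent_buffers, batch_size):
    # M' = floor(M / (N_total - N_train + 1)) transitions per buffer
    m_prime = batch_size // (len(opponent_buffers) + 1)
    batches = [own_buffer.sample(m_prime)]
    batches += [buf.sample(m_prime) for buf in opponent_buffers]
    # concatenate field-wise (s, a, r, s', done)
    return {key: np.concatenate([b[key] for b in batches])
            for key in batches[0]}
\end{verbatim}}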
\subsubsection{Actor and Critic Networks}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{images/arch/soccer_arch.pdf}
\caption{Architectures for both the actor and critic networks. The architecture for stages 1 and 2 is shown on the left, while the architecture for stage 3 is shown on the right. In the latter network, layers with the same subscripts ($a$ and $b$) share weights. Red input cells represent the agent's action, blue input cells represent the agent's observations, green cells correspond to intermediate layers, blue output cells correspond to the action selected by the actor, and red output cells represent the estimates provided by the critic.}
\label{fig:archs}
\end{figure}
The architectures for the actor and critic networks are shown in Fig. \ref{fig:archs}. Each proprioceptive observation's component displayed in Table \ref{tab:obs} (rows one to nine) is denoted as $o^i_\text{prop-$\delta$}$, $\delta=1,...,9$, while the exteroceptive component (row 10) is denoted as $o^i_\text{ext-$j$}$, $1\leq j \leq N-1$, where $N$ is the total number of agents.
These architectures are designed so that every component of the observations receives equal importance, while the actions are given a prominent role in the Q-function estimates. This design decision follows some insights provided in \cite{heess2017emergence}.
Different network architectures are used for the stages considered in the defined curriculum (see Section \ref{subsec:curriculum-learning}). For stages 1 and 2, simpler architectures are used (see Fig. \ref{fig:archs} left), while for stage 3, modifications are introduced to make the networks invariant to the order in which the opponent observations are fed. Features associated to both the opponent player observations are obtained with shared weights, and then the element-wise minimum and maximum are concatenated with the rest of the intermediate representations, in a similar fashion to what is done, for instance, in \cite{liu2018emergent} or in \cite{huttenrauch2019deep}.
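The order-invariant part of the stage-3 networks can be sketched in PyTorch as follows; the layer sizes are illustrative and do not correspond to the exact architecture of Fig. \ref{fig:archs}.
{\small\begin{verbatim}
import torch
import torch.nn as nn

class InvariantEncoder(nn.Module):
    # Encodes the two opponents' observations with shared weights and
    # aggregates them with element-wise min/max, so the output does not
    # depend on the order in which the opponents are fed.
    def __init__(self, prop_dim, opp_dim, feat_dim=64):
        super().__init__()
        self.prop_net = nn.Sequential(nn.Linear(prop_dim, feat_dim),
                                      nn.ReLU())
        self.opp_net = nn.Sequential(nn.Linear(opp_dim, feat_dim),
                                     nn.ReLU())

    def forward(self, o_prop, o_opp_1, o_opp_2):
        h_prop = self.prop_net(o_prop)
        h1, h2 = self.opp_net(o_opp_1), self.opp_net(o_opp_2)
        h_opp = torch.cat([torch.minimum(h1, h2),
                           torch.maximum(h1, h2)], dim=-1)
        return torch.cat([h_prop, h_opp], dim=-1)
\end{verbatim}}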
\begingroup
\let\@latex@error\@gobble
\begin{algorithm}[H]
\caption{Proposed Multi-Agent TD3 with ES}
\label{algo:matd3+se}
$N_{\text{train}}$: Number of players to be trained \\ $N_{\text{total}}$: Total number of players in a match \\
$M$: Batch size\\
\textcolor{blue}{$M'$: Sample size from each buffer, $\Big\lfloor \frac{M}{N_{\text{total}} - N_{\text{train}} + 1}\Big\rfloor$}\\
\For{$i=1$ \textbf{to} $N_{\text{train}}$}{
Initialize critics $\theta_{1,i}$, $\theta_{2,i}$, actor $\phi_i$, target networks $\theta'_{1,i} \leftarrow \theta_{1,i}$, $\theta'_{2,i} \leftarrow \theta_{2,i}$, $\phi'_i \leftarrow \phi_i$, and replay buffer $\mathcal{B}_i$
}
\For{$t=1$ {\bfseries to} $T_\text{train}$}{
\For{$i=1$ {\bfseries to} $N_{\text{train}}$}{
\eIf{$t \leq T_\text{warmup}$}{
Select action $a_i \sim \text{Uniform}(a_{\text{low}}, a_{\text{high}})$
}{
Select action $a_i \sim \pi_{\phi_i}(s_i) + \epsilon$, $\epsilon \sim \mathcal{N}(0, \sigma)$
}
}
\For{$i=N_{\text{train}} + 1$ {\bfseries to} $N_{\text{total}}$}{
Select action $a_i \sim \pi_{\phi_i}(s_i)$
}
Apply actions, observe rewards and new states. \\
\For{$i=1$ {\bfseries to} $N_{total}$}{
Store transition tuple $(s_i, a_i, r_i, s'_i)$ in $\mathcal{B}_i$
}
\If{($t$ mod $u$ equals $0$) and $t \geq T_{\text{after}}$}{
\For{$i=1$ {\bfseries to} $N_{\text{train}}$}{
Sample mini-batch of $M'$ transitions $(s, a, r, s')$ from $\mathcal{B}_{i}$ \\
\textcolor{blue}{
\For{$j=N_{train} + 1$ {\bfseries to} $N_{total}$}{
Sample mini-batch of $M'$ transitions $(s, a, r, s')$ from $\mathcal{B}_{j}$
}}
$\tilde{a} \leftarrow \pi_{\phi'_{i}}(s') + \epsilon, \epsilon \sim \text{clip}(\mathcal{N}(0, \tilde{\sigma}), -c, c)$ \\
$y \leftarrow r + \gamma \min_{n=1,2} Q_{\theta'_{n,i}}(s', \tilde a)$\\
\For{$z=1$ {\bfseries to} $u$}{
Update critics $(n =1, 2)$:
$\theta_{n,i}\leftarrow \text{argmin}_{\theta_{n,i}} M^{-1} \sum (y - Q_{\theta_{n,i}}(s,a))^2$ \\
\If{$z$ mod $d$}{
Update $\phi_i$ by the deterministic policy gradient ($n = 1, 2$): $\nabla_{\phi_i} J(\phi_i) = M^{-1} \sum \nabla_{a} Q_{\theta_{n,i}}(s, a) |_{a=\pi_{\phi_i}(s)}\cdot$ $\nabla_{\phi_i} \pi_{\phi_i}(s)$ \\
Update target networks ($n = 1, 2$):
$\theta'_{n, i} \leftarrow \tau \theta_{n,i} + (1 - \tau) \theta'_{n,i}$
$\phi'_i \leftarrow \tau \phi_i + (1 - \tau) \phi'_i$
}
}
}
}
}
\end{algorithm}
\endgroup
\subsection{Training Procedure}
The training procedure is incremental and considers three stages, as indicated in Section \ref{subsec:curriculum-learning}. In stage 1 (1v0), the agent learns how to approach the ball, and how to score goals. In stage 2 (1v1), it learns how to play against an opponent. Finally, in stage 3 (2v2), two agents learn how to play against an opposing team.
\subsubsection{Stage 1 (1v0)}
This stage is akin to learning how to play soccer by oneself, i.e. the setting consists of a single agent, a ball, and a goalpost. The objective is to score a goal before reaching a certain time limit. Given that this task may be framed as a single-agent RL problem, using vanilla TD3 as a learning algorithm is enough in this case.
\subsubsection{Stage 2 (1v1)}
\label{subsec:stage2}
In the previous stage (1v0), the resulting policy enables an agent to score a goal in an empty field. The aim of this stage is to endow an agent with the skills required to defeat agents trained in the 1v0 setting.
\subsubsection{Stage 3 (2v2)}
The aim of this stage is to train a team of two agents, each of them capable of observing their teammate, and the two opponents. The opponent team consists of two independent agents trained in stage 2 (1v1). It is important to note that this opposing team is incapable of coordinating its actions, as policies trained in stage 2 do not consider the presence of a teammate.
This setting forces the trained agents to develop the necessary skills to defeat their opponents, given the competitive nature of this stage. Ideally, the team's agents must learn to use their teammate's and opponent's information to their advantage.
Given that the policies trained in stage 2 (1v1) consider just one opponent, a scheme must be designed to decide which agent will observe each player of the trained team. Taking the simplest approach, i.e. every agent trained in stage 2 observes a fixed single opponent throughout the match, is sufficient to fulfill the aim of this stage.
\begin{figure*}[]
\centering
\includegraphics[width=0.55\linewidth]{images/metrics/legend.pdf}
\\
\subfloat[Stage 1 (1v0) \label{fig:sr-1v0}]{%
\includegraphics[width=0.32\textwidth]{images/metrics/sr_1vs0.pdf}}\hfill
\subfloat[Stage 2 (1v1) \label{fig:sr-1v1}]{%
\includegraphics[width=0.32\textwidth]{images/metrics/sr_1vs1.pdf}}\hfill
\subfloat[Stage 3 (2v2) \label{fig:sr-2v2}]{%
\includegraphics[width=0.32\textwidth]{images/metrics/sr_2vs2.pdf}}
\caption{Evolution of trained agent/team's success rate, averaged over 5 random seeds (3 in the case of stage 3). Curves are smoothed using a window of size 50. DR: Dense Reward, SR: Sparse Reward, ES: Experience Sharing, HCT: Higher Control Time step.}
\label{fig:sr}
\end{figure*}
\subsection{Agent Selection}
\label{subsec:meth_agent_selection}
An important decision to be made is which agent trained in stage $k$ should be selected as the fixed opponent for stage $k+1$. To measure the performance of every agent, we use Nash Averaging \cite{balduzzi2018re}, given its invariance to redundant agents, which, in contrast to the conventional ELO rating \cite{elo1978rating}, allows unbiased comparisons. Nash Averaging evaluates agents by computing the average payoff a meta-player obtains when choosing a certain agent, while the opponent meta-player follows an optimal Nash equilibrium strategy of the meta-game. The same approach was used in \cite{liu2018emergent} to evaluate performance.
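As a rough illustration of how such ratings can be obtained from a payoff matrix of expected goal differences, the following Python sketch computes a maximin (Nash) strategy of the zero-sum meta-game by linear programming and rates every agent by its expected payoff against that mixture. Note that proper Nash Averaging additionally selects the maximum-entropy Nash equilibrium, a refinement omitted here; the function below is only a simplified approximation of the procedure used in this work.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def nash_rating(payoff):
    """payoff: antisymmetric matrix A of expected goal differences.
    Returns one rating per agent, (A p)_i, where p is a maximin mixed
    strategy of the zero-sum meta-game found by linear programming."""
    A = np.asarray(payoff, dtype=float)
    n = A.shape[0]
    # Variables: p_1, ..., p_n and the game value v; maximise v.
    c = np.zeros(n + 1); c[-1] = -1.0
    # For every pure opponent strategy j:  v - sum_i p_i A[i, j] <= 0.
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])  # sum_i p_i = 1
    b_eq = np.array([1.0])
    bounds = [(0.0, 1.0)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds)
    p = res.x[:n]
    return A @ p
\end{verbatim}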
To select which agents are used as fixed opponents in stage $k+1$ ($k=1, 2$), the candidates are first filtered according to their performance on the task they were trained on (stage $k$), and then evaluated in stage $k + 1$. The pool of agents considered consists of all agents saved every $10,000$ time steps during the last 20\% of the training process.
With the above, we define the following procedures for selecting the agents of stage $k$:
\begin{itemize}
\item \textit{Stage 1 (1v0)}: The set of all agents with 100\% success rate on the task of scoring a goal within 30 seconds defines the initial pool of agents. Two metrics, the average episode length and the average \textit{vel-to-ball} (agent's velocity projected on the agent to ball vector) are recorded.
An agent $i$ is then considered to be Pareto-dominant over an agent $j$ if it required, on average, fewer steps to solve the task of scoring, and did so with a higher average \textit{vel-to-ball} metric (a minimal sketch of this dominance check is given after this list).
Then, the Pareto-dominant individuals play soccer against each other in the 1v1 format. The resulting expected goal differences among agents are then used to define a payoff matrix and calculate the Nash rating of each agent. Finally, the agent with the highest Nash rating is selected as the fixed opponent for stage 2.
\item \textit{Stage 2 (1v1)}: Agents achieving at least 95\% of the top performance (success rate) on the task of playing soccer against the agent selected in stage 1 are initially selected. As in the previous stage, the average episode length and the average \textit{vel-to-ball} metrics are recorded.
Then, the Pareto-dominant individuals with respect to the two recorded metrics form all possible two-player teams, which compete against each other in the 2v2 format.
The same procedure of obtaining the Nash rating from the expected goal differences, now among the resulting teams, is repeated for this stage. Finally, the team with the highest Nash rating is selected as the fixed opponent for stage 3.
\end{itemize}
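The dominance check referred to above can be summarised by the following minimal sketch; it follows the strict definition given in the text (fewer steps and a higher average \textit{vel-to-ball}), while the dictionary-based representation of an agent's metrics is an assumption made for the example.
\begin{verbatim}
def dominates(a, b):
    """Agent a Pareto-dominates agent b if it needs, on average, fewer
    steps to score and has a higher average vel-to-ball."""
    return (a["episode_length"] < b["episode_length"]
            and a["vel_to_ball"] > b["vel_to_ball"])

def pareto_dominant(agents):
    """Return the agents that are not dominated by any other agent."""
    return [a for a in agents
            if not any(dominates(b, a) for b in agents if b is not a)]
\end{verbatim}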
\subsection{Experience Sharing}
Experience sharing (ES) was used while training in stages 2 and 3. This was done by using transition tuples $(s, a, r, s')$ experienced by agents trained in stages 1 and 2, when they were used as fixed opponents in stages 2 and 3, respectively.
As shown in Figure \ref{fig:sr-1v1}, ES increases performance and reduces variance. It can be seen that incorporating ES when using DR increases the success rate of the trained agent by 20\% in the task of 1v1 soccer.
\subsection{Effect of Control Time Step}
\begin{figure*}
\centering
\includegraphics[width=0.75\textwidth]{images/metrics/legend_metrics.pdf}
\\
\subfloat[Evolution of performance metrics for team trained in Stage 3 (2v2) under the $\text{DR} + \text{ES}$ scheme. \label{fig:metrics-}]{%
\includegraphics[width=0.24\textwidth]{images/metrics/vel_2vs2_op.pdf} \hfill
\includegraphics[width=0.24\textwidth]{images/metrics/spread_2vs2_op.pdf} \hfill
\includegraphics[width=0.24\textwidth]{images/metrics/pass_2vs2_op.pdf} \hfill
\includegraphics[width=0.24\textwidth]{images/metrics/pass_10m_2vs2_op.pdf}} \vspace{5mm}
\includegraphics[width=0.75\textwidth]{images/metrics/legend_metrics_hct.pdf}
\\
\subfloat[Evolution of performance metrics for team trained in Stage 3 (2v2) under the $\text{DR} + \text{ES} + \text{HCT}$ scheme. \label{fig:metrics-hct}]{%
\includegraphics[width=0.24\textwidth]{images/metrics/vel_2vs2.pdf} \hfill
\includegraphics[width=0.24\textwidth]{images/metrics/spread_2vs2.pdf} \hfill
\includegraphics[width=0.24\textwidth]{images/metrics/pass_2vs2.pdf} \hfill
\includegraphics[width=0.24\textwidth]{images/metrics/pass_10m_2vs2.pdf}}
\caption{Evolution of trained (\textit{tr. team}) and opponent team's (\textit{op. team}) performance metrics in stage 3 (2v2), averaged over 3 random seeds.}
\label{fig:metrics}
\end{figure*}
The control time step defines how often in a given episode the linear acceleration and vertical torque of an agent are controlled. Smaller control time steps allow finer-grained control, at the cost of less variation between consecutive observations.
In \cite{liu2018emergent}, a control time step of 0.05 s is used. We use the same control time step for stages 1, 2, and 3, but also experimented with increasing it to 0.1 s for stage 3. Raising the value of this hyperparameter has an effect similar to that of frame-skipping \cite{mnih2015human}: sampling transitions while using a higher control time step (HCT) results in a higher variety of experiences, which can increase performance as the samples used for training are less correlated.
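In practice, a higher control time step amounts to repeating each selected action for several consecutive simulation steps. A minimal, environment-agnostic sketch is given below; the \texttt{step}/\texttt{reset} interface and the repeat factor of 2 (0.1 s instead of 0.05 s) are assumptions made for the example.
\begin{verbatim}
class ActionRepeat:
    """Holds each agent command for `repeat` consecutive simulation
    steps, emulating a higher control time step."""
    def __init__(self, env, repeat=2):
        self.env = env
        self.repeat = repeat

    def reset(self):
        return self.env.reset()

    def step(self, action):
        total_reward, obs, done, info = 0.0, None, False, {}
        for _ in range(self.repeat):
            obs, reward, done, info = self.env.step(action)
            total_reward += reward
            if done:
                break
        return obs, total_reward, done, info
\end{verbatim}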
As shown in Figure \ref{fig:sr-2v2}, a significant increase of 10\% in the trained team's success rate, measured using the original environment's control time step, can be observed in stage 3 (2v2).
Additionally, as shown in Figure \ref{fig:payoff-2v2} (which is discussed in more depth in Section \ref{res:agent_selection}), the higher success rate does carry over to better soccer play: the best agents trained with a dense reward and a higher control time step show a high expected goal difference in their favor when playing against agents trained using the original environment settings.
\subsection{Agent Selection}
\label{res:agent_selection}
Results for the agent selection scheme are shown in Figures \ref{fig:pareto} and \ref{fig:payoff}. Figure \ref{fig:pareto} shows the metrics of the dominant agents per trial in stages 1 and 2. These metrics are obtained by evaluating all resulting agents that were trained for at least 8M steps, on 1,000 episodes of their corresponding task.
For stage 1, only agents trained using the dense reward scheme, which obtain 100\% success rate on the task, are considered. These are evaluated on 1,000 1v0 episodes, and their average \textit{vel-to-ball} and episode length metrics are recorded. These metrics are then used to obtain dominant individuals. This results in 16 dominant agents, as shown in Figure \ref{fig:pareto-1v0}. These 16 agents then compete against each other in a 1v1 setting. Payoff matrices with the expected goal difference among these agents are shown in Figure \ref{fig:payoff-1v0}. As agent N$^\circ$16 has the highest Nash rating, this agent is used as the fixed opponent for stage 2.
Similarly, for stage 2, only agents trained using both DR and ES were considered, as the best performances are obtained when using this scheme (see Figure \ref{fig:sr-1v1}). Agents obtained in the last 20\% of the training steps are then evaluated on the 1v1 task for 1,000 episodes, against the same opponent as in the training phase (agent N$^\circ 16$ selected from stage 1). The average \textit{vel-to-ball} and episode length metrics were recorded, and agents that did not show a top-5\% success rate on the 1v1 task (which translates to a success rate of $\geq 82.5\%$) were filtered out. Subsequently, the dominant agents per trial, with respect to the recorded metrics, were obtained. Figure \ref{fig:pareto-1v1} shows the average \textit{vel-to-ball} and episode length of the 11 resulting dominant agents.
Using these top-11 agents, 66 two-agent teams were formed. These teams competed against each other in a 2v2 format. The reduced payoff matrix, which shows the expected goal differences for the 20 teams with the highest Nash ratings, is shown in Figure \ref{fig:payoff-1v1}. As can be seen, team N$^\circ7$ has the highest Nash rating, so it was used as the fixed opponent for agents trained in stage 3 (2v2).
\subsection{Resulting Behaviors}
Following the approach described in Section \ref{sec:methodology}, policies for stages 1v0, 1v1 and 2v2 are obtained. The various resulting gameplays may be viewed at \url{https://youtu.be/LUruT1A2GOE}.
The following soccer-related skills can be observed when evaluating the trained policies:
\begin{itemize}
\item \textit{Stage 1 (1v0)}: The agent successfully learns to get close to the ball, and then to kick or dribble the ball towards the goalpost.
\item \textit{Stage 2 (1v1)}: The agent successfully acquires all the skills of the agent trained in stage 1 (1v0), i.e., getting close to the ball and then dribbling or kicking it. Additionally, interesting skills that were observed include feinting and recovering the ball when it is in the possession of the opponent.
\item \textit{Stage 3 (2v2)}: In addition to the skills displayed by the agents trained in the previous stage (1v1), the policy obtained in this phase is such that an implicit coordination between teammates is observed. This may be seen by the fact that agents use direct passes and random throw-ins to pass the ball to each other.
\end{itemize}
To quantitatively measure the described behaviors, the same metrics utilized in \cite{liu2018emergent} are considered (a minimal sketch of the first two is given after the list):
\begin{itemize}
\item Average velocity to ball: described in Section \ref{subsec:meth_agent_selection}.
\item Average teammate spread out: measures how often in an episode teammates are more than 5 m away from each other.
\item Average pass-interception: measures how often in an episode a team passes and intercepts the ball.
\item Average pass-interception 10 m: same as above, but only passes and interceptions in which the ball has traveled at least 10 m are considered.
\end{itemize}
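A minimal sketch of the first two metrics is given below; positions and velocities are assumed to be two-dimensional \texttt{numpy} arrays, the per-step values are averaged over an episode, and the 5 m threshold follows the definition above.
\begin{verbatim}
import numpy as np

def vel_to_ball(agent_pos, agent_vel, ball_pos):
    """Agent velocity projected on the unit vector from agent to ball."""
    to_ball = ball_pos - agent_pos
    direction = to_ball / (np.linalg.norm(to_ball) + 1e-8)
    return float(np.dot(agent_vel, direction))

def teammate_spread_out(pos_a, pos_b, threshold=5.0):
    """1.0 if the two teammates are more than `threshold` metres apart."""
    return float(np.linalg.norm(pos_a - pos_b) > threshold)
\end{verbatim}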
Figure \ref{fig:metrics} shows the evolution of the performance metrics obtained by some of the teams trained in stage 3, namely, those trained under the schemes $\text{DR} + \text{ES}$ (Fig. \ref{fig:metrics-}) and $\text{DR} + \text{ES} + \text{HCT}$ (Fig. \ref{fig:metrics-hct}). As baselines, metrics obtained by their respective opposing team (formed by agents trained in stage 2 and selected as described in Sect. \ref{res:agent_selection}) are also displayed.
As shown in Figures \ref{fig:metrics-} and \ref{fig:metrics-hct}, an initial rise can be observed when analyzing the \textit{vel-to-ball} metric. This may be attributed to ball chasing behaviors being acquired early on. This metric then sharply drops, and subsequently increases steadily throughout the rest of the training process. This tendency is different from that reported in \cite{liu2018emergent}, where the metric drops throughout the training phase after an initial rise. While this may imply a shift from a predominantly ball chasing behavior to a more cooperative strategy, in our work a higher \textit{vel-to-ball} metric can be observed along with higher \textit{pass} and \textit{pass-10m} metrics, as seen by comparing Figs. \ref{fig:metrics-hct} and \ref{fig:metrics-}.
A similar situation is observed for the \textit{teammate-spread-out} metric. In \cite{liu2018emergent}, this metric rises throughout the training phase (after an initial drop), implying that spread-out teams pass the ball more often as training progresses. This situation is not observed in our work. On the contrary, we find that higher \textit{teammate-spread-out} values do not correlate with higher pass metrics, as shown in Figs. \ref{fig:metrics-hct} and \ref{fig:metrics-}.
On the other hand, the same tendency reported in \cite{liu2018emergent} of an initially higher \textit{interception-10m} metric, which is later matched by the \textit{pass-10m} metric, can be observed in both Figs. \ref{fig:metrics-hct} and \ref{fig:metrics-}; in the $\text{DR} + \text{ES} + \text{HCT}$ scheme, however, this tendency is more apparent.
Finally, it can be seen that the trained team shows higher \textit{pass} and \textit{pass-10m} metrics than the opponent team towards the end of the training process. This is expected, due to the fact that agents that form the opposing teams are trained in stage 2 (1v1), so they are unable to ``observe'' each other.
\section{Introduction}
Travelling is an important and frequent activity, yet people willing to travel
have to face problems with rising fuel prices, carbon footprint and traffic
jams. These problems can be ameliorated by {\em travel sharing}, i.e., groups of
people travel together in one vehicle for parts of the journey. Participants in
such schemes can benefit from travel sharing in several ways: sharing parts of
a journey may reduce cost (e.g., through group tickets), carbon footprint (e.g.,
when sharing a private car, or through better capacity utilisation of public
means of transport), and travellers can enjoy the company of others on a long
journey. In more advanced scenarios one could even imagine this being combined
with working together while travelling, holding meetings on the road, etc.
Today, there exist various commercial online services for {car}\footnote{E.g.,
\href{https://www.liftshare.com/uk/}{www.liftshare.com} or
\href{http://www.citycarclub.co.uk/}{www.citycarclub.co.uk}.}, bike, and walk
sharing as well as services which assist users in negotiating shared
journeys\footnote{E.g.,
\href{http://www.companions2travel.co.uk/}{www.companions2travel.co.uk},
\href{http://www.travbuddy.com/}{www.travbuddy.com}.}, and, of course, plenty of
travel planning services\footnote{E.g., in the United Kingdom:
\href{http://www.nationalrail.co.uk/}{www.nationalrail.co.uk}~for
trains,~\href{http://www.traveline.info/}{www.traveline.info}
and~\href{http://www.google.com/transit}{www.google.com/transit} for~multi-modal
transportation.} that automate {\em individual} travel planning for one or
several means of transport. On the research side, there is previous work
that deals with the ridesharing and car-pooling problem \cite{abdel2007,
ferrari2003, lalos2009}.
However, no work has been done that attempts to compute {\em joint} travel
plans based on {\em public transportation} time\-tab\-le data and geographical
stop locations, let alone in a~way that takes into account the {\em strategic}
nature of the problem, which comes about through the different (and potentially
conflicting) preferences of individuals who might be able to benefit from
travelling together. From the point of view of (multiagent) planning, this
presents itself as a very complex application scenario: Firstly, even if one
restricted oneself to centralised (non-strategic) planning, the domain is huge
-- public transportation data for the UK alone currently involves $240,590$
timetable connections for trains and coaches (even excluding local city buses),
which would have to be translated to a quarter of a million planning actions, at
least in a naive formalisation of the domain. Secondly, planning for multiple
self-interested agents that are willing to cooperate only if it is beneficial
for them is known to be exponentially harder than planning for each agent
individually~\cite{brafman2008}. Yet any automated service that proposes joint
journeys would have to guarantee such {\em strategic} properties in order to be
acceptable for human users (who could then even leave it to the service to
negotiate trips on their behalf).
In this paper, we present an implementation of best-res\-pon\-se planning (BRP)
\cite{jonsson2011} within a three-phase algorithm that is capable of solving
strategic travel sharing problems for several agents based on real-world
transportation data. Based on a simplified version of the domain that ignores
timetabling information, the algorithm first builds individual travel plans
using state-of-the-art single-agent planners that are available off the shelf.
It then merges these individual plans and computes a multiagent plan that is a
Nash equilibrium and guarantees individual rationality of solutions, as well as
stability in the sense that no single agent has an incentive to deviate from the
joint travel route. This is done using BRP as the underlying planner, as it is
the only available planner that can solve strategic multiagent planning problems
of such scale, and is proven to converge in domains that comply with certain
assumptions, as is the case for our travel sharing domain. In a third and final
step, the resulting multiagent plan is mapped onto the full temporal planning
domain to schedule actual journeys. This scheduling task is not guaranteed to
always find a feasible solution, as the previous simplification ignores a
potential lack of suitable connections. However, we show through an extensive
empirical evaluation that our method finds useful solutions in a large number of
cases despite its theoretical incompleteness.
The contribution of our work is threefold: Firstly, we show that current
multiagent planning technology can be used in important planning domains such as
travel sharing by presenting its application to a practical problem that cannot
be solved with other existing techniques. In the process, we describe the
engineering steps that are necessary to deal with the challenges of real-world
large-scale data and propose suitable solutions. Secondly, we present an
algorithm that combines different techniques in a practically-oriented way, and
which is largely based on domain-independent off-the-shelf heuristic problem
solvers. In fact, only data preprocessing and timetable mapping use
domain-specific knowledge, and much of the process of incorporating this
knowledge could be replicated for similar other domains (such as logistics,
manufacturing, and network communications).
Finally, we provide a potential solution to the hard computational problem of
travel sharing that could be exploited for automating important tasks in a
future real-world application to the benefits of users, who normally have to
plan such routes manually and would be overwhelmed by the choices in a domain
full of different transportation options which is inhabited by many potential
co-travellers.
We start off by describing the problem domain in section~\ref{sec:domain} and
specifying the planning problem formally in section~\ref{sec:problem}, following
the model used in~\cite{jonsson2011}. Section~\ref{sec:algorithm} introduces our
three-phase algorithm for strategic planning in travel sharing domains and we
present an extensive experimental evaluation of the algorithm in
section~\ref{sec:evaluation}. Section~\ref{sec:discussion} presents
a~discussion of our results and section \ref{sec:conclusion} concludes.
\section{The travel sharing domain} \label{sec:domain}
The real-world travel domain used in this paper is based on the public transport
network in the United Kingdom, a~very large and complex domain which contains
$4,055$ railway and coach stops supplemented by timetable information. An agent
representing a~passenger is able to use different means of transport during its
journey: walking, trains, and coaches. The aim of each agent is to get from its
starting location to its final destination at the lowest possible cost, where
the cost of the journey is based on the duration and the price of the journey.
Since we assume that all agents are travelling on the same day and that
all journeys must be completed within 24~hours, in what follows below we
consider only travel data for Tuesdays (this is an arbitrary choice that could
be changed without any problem).
For the purposes of this paper, we will make the assumption that sharing a~part
of a~journey with other agents is cheaper than travelling alone. While this may
not currently hold in many public transportation systems, defining hypothetical
cost functions that reflect this would help assess the potential benefit of
introducing such pricing schemes.
\subsection{Source data}
\label{sec:sourceData}
The travel sharing domain uses {the National Public Transport Data Repository
(NPTDR)}\footnote{\href{http://data.gov.uk/dataset/nptdr}{data.gov.uk/dataset/nptdr}}
which is publicly available from the Department for Transport of the British
Government. \jh{It contains a~snapshot of route and timetable data} that has
been gathered in the first or second complete week of October since 2004. For
the evaluation of the algorithm in section~\ref{sec:evaluation}, we used data
from
2010\footnote{\href{http://www.nptdr.org.uk/snapshot/2010/nptdr2010txc.zip}{www.nptdr.org.uk/snapshot/2010/nptdr2010txc.zip}},
which is provided in {TransXChange XML}\footnote{An XML-based UK standard for
interchange of route and timetable data.}.
{National Public Transport Access Nodes
(NaPTAN)}\footnote{\href{http://data.gov.uk/dataset/naptan}{data.gov.uk/dataset/naptan}}
is a~UK national system for uniquely identifying all the points of access to
public transport. \jh{Every point of access (bus stop, rail station, etc.) is
identified by an ATCO code\footnote{A~unique identifier for all points of access
to public transport in the United Kingdom.}, e.g., {\em 9100HAYMRKT} for
Haymarket Rail Station in Edinburgh.} Each stop in NaPTAN XML data is also
supplemented by common name, latitude, longitude, address and other pieces of
information. This data also contains information about how the stops are grouped
together (e.g., several bus bays that are located at the same bus station).
\begin{figure}
\centering
\includegraphics[width=77mm]{figures/dataDiagram}
\caption{Overview of the data transformation and processing}
\label{fig:dataDiagram}
\end{figure}
To be able to use this domain data with modern AI planning systems, it has to
be converted to the Planning Domain Definition Language (PDDL). We transformed the
data in three subsequent stages, cf.~Figure~\ref{fig:dataDiagram}. First, we
transformed the NPTDR and NaPTAN XML data to a~spatially-enabled PostgreSQL
database. Second, we automatically processed and optimised the data in the
database. The data processing by SQL functions in the procedural PL/pgSQL
language included the following steps: merging bus bays at bus stations and parts of
train stations, introducing walking connections to enable multi-modal journeys,
and eliminating duplicates from the timetable. Finally, we created a~script for
generating PDDL specifications based on the data in the database. More details
about the data processing and PDDL specifications can be found in
\cite{hrncir2011}.
\begin{figure}
\centering
\includegraphics[width=70mm]{figures/relaxedWithTime}
\caption{An example of the relaxed domain (e.g., it takes 50~minutes to travel
from the stop $\boldsymbol{A}$~to~$\boldsymbol{B}$) }
\label{fig:relaxedWithTime}
\end{figure}
\subsection{Planning domain definitions}
\label{sec:domainDefinitions}
Since the full travel planning domain is too large for any current
state-of-the-art planner to deal with, we distinguish the {\em full domain} from
a {\it relaxed domain}, which we will use to come up with an initial plan before
mapping it to the full timetable information in our algorithm below.
The {\it relaxed domain} is a~single-agent planning domain represented as
a~directed graph where the nodes are the stops and the edges are the connections
provided by a~service. The graph must be directed because there exist stops that
are used in one direction only. There is an edge from $A$~to~$B$ if there is at
least one connection from $A$~to~$B$ in the timetable. The cost of this edge
is the minimal time needed for travelling from $A$~to~$B$. A~plan~$P_i$ found in
the relaxed domain for the agent $i$ is a~sequence of connections to travel from
its origin to its destination. The relaxed domain does not contain any
information about the traveller's departure time. This could be problematic in a~scenario where people are
travelling at different times of day. This issue could be addressed by clustering
user requests, cf.~section~\ref{sec:conclusion}.
A~small example of the relaxed domain is shown in
Figure~\ref{fig:relaxedWithTime}. An example plan for an agent travelling from
$C$~to~$F$ is $P_1 = \langle C \rightarrow D, D \rightarrow E, E \rightarrow F
\rangle$. To illustrate the difference between the relaxed domain and the full
timetable, \jh{there are $8,688$ connections in the relaxed domain for trains
and coaches in the UK compared to $240,590$ timetable connections.}
Direct trains that do not stop at every stop are filtered out from the
relaxed domain for the following reason: Assume that in
Figure~\ref{fig:relaxedWithTime}, there is only one agent travelling from
$C$~to~$F$ and that its plan in the relaxed domain is to use a~direct train from
$C$~to~$F$. In this case, it is only possible to match its
plan to direct train connections from $C$~to~$F$, and not to trains that stop at
$C$,~$D$,~$E$, and $F$. Therefore, the agent's plan cannot be matched against
all possible trains between $C$ and $F$ which is problematic especially in the
case where the majority of trains stop at every stop and only a~few trains are
direct. On the other hand, it is possible to
match a~plan with a~train stopping in every stop to a~direct train, as it is
explained later in section~\ref{sec:timetabling}.
The {\it full domain} is a~multiagent planning domain based on
the joint plan~$P$. Assume that there are $N$~agents in the full domain (each
agent~$i$ has the plan~$P_i$ from the relaxed domain). Then, the joint plan~$P$
is a~merge of single-agent plans defined by formula
\begin{equation} \label{eq:defJointPlan} P = \bigcup_{i=1}^{N} P_i
\nonumber \end{equation} where we interpret $\bigcup$ as the union of graphs
that would result from interpreting each plan as a set of edges connecting stops.
More specifically, given a~set of single-agent plans, the plan merging operator~$\bigcup$
computes its result in three steps: First, it transforms every single-agent
plan $P_i$ to a~directed graph~$G_i$ where the nodes are the stops from the
single-agent plan $P_i$ and the edges represent the actions of $P_i$ (for
instance, a~plan $P_1 = \langle C \rightarrow D, D \rightarrow E, E \rightarrow
F \rangle$ is transformed to a~directed graph $G_1 = \{ C \rightarrow D
\rightarrow E \rightarrow F \}$). Second, it performs a~graph union operation
over the directed graphs $G_i$ and labels every edge in the joint plan
with the numbers of agents that are using the edge (we don't introduce any
formal notation for these labels here, and simply slightly abuse the standard
notation of sets of edges to describe the resulting graph).
As an example, the joint plan for agent 1 travelling from $C$~to~$F$ and sharing
a journey from $D$ to $E$ with agent 2
would be computed as
\begin{multline*} \label{eq:exampleJointPlan}
\langle C \rightarrow D, D \rightarrow E,
E \rightarrow F \rangle \: \cup \: \langle D \rightarrow E \rangle = \\
\{ C \xrightarrow{(1)} D \xrightarrow{(1, 2)} E \xrightarrow{(1)} F \}
\end{multline*}
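The merge operator $\bigcup$ can be illustrated by the following minimal Python sketch, in which a single-agent plan is a list of (origin, destination) connections and the joint plan maps every directed edge to the set of agents using it; the data structures are assumptions made for the example.
\begin{verbatim}
from collections import defaultdict

def merge_plans(plans):
    """plans: dict agent_id -> list of (origin, destination) connections.
    Returns the joint plan as a dict (origin, destination) -> set of agents."""
    joint = defaultdict(set)
    for agent, plan in plans.items():
        for edge in plan:
            joint[edge].add(agent)
    return dict(joint)

# Example from the text: agent 1 travels C -> D -> E -> F and
# agent 2 shares the connection D -> E.
plans = {1: [("C", "D"), ("D", "E"), ("E", "F")], 2: [("D", "E")]}
# merge_plans(plans) ==
#   {("C", "D"): {1}, ("D", "E"): {1, 2}, ("E", "F"): {1}}
\end{verbatim}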
With this, the {\it full domain} is represented as a~directed multigraph where the
nodes are the stops that are present in the joint plan of the relaxed domain. Edges of the
multigraph are the service journeys from the timetable. Every service is
identified by a~unique service name and is assigned a~departure time from each
stop and the duration of its journey between two stops. In the example of the full
domain in Figure~\ref{fig:fullDomain}, the agents can travel using some subset of five different
services \verb|S1| to \verb|S5|. In order to travel from $C$ to $D$ using
service \verb|S1|, an~agent must be present at stop $C$ before its
departure.
\begin{figure}
\centering
\includegraphics[width=70mm]{figures/fullDomain}
\caption{An example of the full domain with stops $\boldsymbol{C}$,
$\boldsymbol{D}$, $\boldsymbol{E}$ and $\boldsymbol{F}$
for the joint plan~$\boldsymbol{P} \boldsymbol{=} \boldsymbol\{
\boldsymbol{C} \xrightarrow{(1)}
\boldsymbol{D} \xrightarrow{(1, 2)} \boldsymbol{E} \xrightarrow{(1)}
\boldsymbol{F} \boldsymbol\}$}
\label{fig:fullDomain}
\end{figure}
\section{The planning problem} \label{sec:problem}
Automated planning technology \cite{ghallab2004} has developed a variety of
scalable heuristic algorithms for tackling hard planning problems, where plans,
i.e., sequences of actions that achieve a given goal from a given initial state,
are calculated by domain-independent problem solvers.
To model the travel sharing problem, we use a multiagent planning formalism
which is based on MA-STRIPS~\cite{brafman2008} and coalition-planning
games~\cite{brafman2009}. States are represented by sets of ground fluents,
actions are tuples $a = \langle \mathit{pre}(a), \mathit{eff}(a) \rangle $.
After the execution of action $a$, positive fluents $p$ from $\mathit{eff}(a)$
are added to the state and negative fluents $\neg p$ are deleted from the state.
Each agent has individual goals and actions with associated costs. There is no
extra reward for achieving the goal, the total utility received by an agent is
simply the inverse of the cost incurred by the plan executed to achieve the
goal.
More formally, following the notation of \cite{jonsson2011}, a multiagent
planning problem (MAP) is a tuple $$\Pi=\langle
N,F,I,\{G_i\}_{i=1}^n,\{A_i\}_{i=1}^n,\Psi, \{c_i\}_{i=1}^{n}\rangle$$ where
\begin{itemize}
\item $N = \{1,\ldots,n\}$ is the set of agents,
\item $F$ is the set of fluents,
\item $I\subseteq F$ is the initial state,
\item $G_i\subseteq F$ is agent $i$'s goal,
\item $A_i$ is agent $i$'s action set,
\item $\Psi:A\rightarrow\{0,1\}$ is an admissibility function,
\item $c_i:\times_{i=1}^n A_i\rightarrow\mathbb{R}$ is the cost
function of agent $i$.
\end{itemize}
$A=A_1\times\ldots\times A_n$ is the joint action set
assuming a concurrent, synchronous execution model, and $G=\wedge_i
G_i$ is the conjunction of all agents' individual goals. A MAP typically
imposes concurrency constraints regarding actions that
cannot or have to be performed concurrently by different agents to
succeed which the authors of~\cite{jonsson2011} encode using an
admissibility function $\Psi$, with $\Psi(a)=1$ if the joint action
$a$ is executable, and $\Psi(a)=0$ otherwise.
A {\em plan} $\pi=\langle a^1,\ldots,a^k\rangle$ is a sequence of
joint actions $a^j\in A$ such that $a^1$ is applicable in the initial
state $I$ (i.e., $\mathit{pre}(a^1)\subseteq I$), and $a^j$ is applicable
following the application of $a^1, \ldots,$ $a^{j-1}$.
We say that $\pi$ {\em solves} the MAP $\Pi$ if the goal state $G$ is
satisfied following the application of all actions in $\pi$ in sequence.
The cost of a plan $\pi$ to agent $i$ is
given by $C_i(\pi)=\sum_{j=1}^k c_i(a^j)$.
Each agent's contribution to a plan $\pi$ is denoted by $\pi_i$ (a~sequence of
$a_i\in A_i$).
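For concreteness, the tuple above can be represented by a simple data structure such as the following Python sketch; the field names mirror the formal definition, while the concrete types are assumptions made for the example.
\begin{verbatim}
from dataclasses import dataclass
from typing import Callable, Dict, FrozenSet, List, Tuple

Fluent = str
JointAction = Tuple[str, ...]   # one action name per agent

@dataclass
class MultiAgentPlanningProblem:
    agents: List[int]                                 # N
    fluents: FrozenSet[Fluent]                        # F
    initial_state: FrozenSet[Fluent]                  # I
    goals: Dict[int, FrozenSet[Fluent]]               # G_i
    actions: Dict[int, List[str]]                     # A_i (action names)
    admissible: Callable[[JointAction], bool]         # Psi
    cost: Dict[int, Callable[[JointAction], float]]   # c_i
\end{verbatim}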
\subsection{Best-response planning}
The {\em best-response planning} (BRP) algorithm proposed in \cite{jonsson2011}
is an algorithm which, given a~solution $\pi^k$ to a MAP $\Pi$,
finds a solution $\pi^{k+1}$ to a {\em transformed planning problem}
$\Pi_i$ with minimum cost $C_i(\pi^{k+1})$ among all possible solutions:
$$\pi^{k+1} = \arg\min_{\pi} \{C_i(\pi) \mid \pi_j \textrm{ identical to } \pi^k_j
\textrm{ for all } j\neq i\}$$ The transformed planning problem $\Pi_i$ is
obtained by rewriting the original problem $\Pi$ so that all other agents'
actions are fixed, and agent $i$ can only choose its own actions in such a way that all
other agents can still perform their original actions. Since $\Pi_i$ is a
single-agent planning problem, any cost-optimal planner can be used as a
best-response planner.
In~\cite{jonsson2011}, the authors show \jhhh{how for a class of congestion
planning problems}, where all fluents are {\em private}, the transformation they
propose allows the algorithm to converge to a Nash equilibrium if agents iteratively
perform best-response steps using an optimal planner. This requires that every
agent can perform its actions without requiring another agent, and hence can
achieve its goal in principle on its own, and conversely, that no agent can
invalidate other agents' plans. Assuming infinite capacity of vehicles,
the relaxed domain is an instance of a congestion planning {problem}\footnote{
Following the definition of a congestion planning problem in \cite{jonsson2011},
all actions are private, as every agent can use transportation means on their
own and the other agents' concurrently taken actions only affect action cost.
A part of the cost function defined in~section~\ref{sec:costFunctions} depends
only on the action choice of the individual agent.}.
The BRP planner works in two phases: In the first phase, an initial plan for
each agent is computed \jh{(e.g., each agent plans independently or
a~centralised multi-agent planner is used)}. In the second phase, the planner
solves simpler best-response planning problems from the point of view of each
individual agent. The goal of the planner in a~BRP problem is to minimise the
cost of an agent's plan without changing the plans of others. Consequently, it
optimises a~plan of each agent with respect to the current joint plan.
This approach has several advantages. It supports full concurrency of actions,
and the BRP phase avoids the exponential blowup in the action space, resulting in
much improved scalability. For the class of potential games \cite{monderer1996},
it is guaranteed to converge to a~Nash equilibrium. On the other hand, it does not
guarantee the optimality of a~solution, i.e., the quality of the equilibrium in
terms of overall efficiency is not guaranteed (it depends on which initial plan
the agents start off with). However, experiments have shown that it can be
successfully used for improving general multiagent plans \cite{jonsson2011}.
Such non-strategic plans can be computed using a~{\it centralised multiagent
planner}, i.e., a~single-agent planner (for instance Metric-FF
\cite{hoffmann2001}) which tries to optimise the value of the joint cost function (in our case the
sum of the values of the cost functions of agents in the environment) while
trying to achieve all agents' goals. Centralised multiagent planners have no
notion of self-interested agents, i.e., they ignore the individual preferences
of agents.
\section{A three-phase strategic travel sharing algorithm} \label{sec:algorithm}
The main problem when planning for multiple agents with a~centralised
multiagent planner is the exponential blowup in the action space which is
caused by using concurrent, independent actions \cite{jonsson2011}.
Experiments with a naive PDDL translation have shown that a~direct
application of a~centralised multiagent planner to this problem does not scale
well. For example, a~simple scenario with two agents, ferries to the Orkney Islands,
and trains in the area between Edinburgh and Aberdeen resulted in a~computation
time of one day.
As mentioned above, we tackle the complexity of the domain by breaking down the
planning process into different phases that avoid dealing with the full
fine-grained timetable data from the outset.
Our algorithm, which is shown in Figure~\ref{fig:algorithmPseudocode}, is
designed to work in three phases.
\subsection{The initial phase}
First, in the {\it initial phase}, an initial journey is found for each agent
using the relaxed domain. A~journey for each agent is calculated independently
of other agents in the scenario using a~single-agent planner. As a~result, each
agent is assigned a~single-agent plan which will be further optimised in the
next phase. This approach makes sense in our domain because the agents do not
need each other to achieve their goals and they cannot invalidate each other's
plans.
\begin{figure}[t]
\begin{shaded}
{\bf Input}
\begin{itemize}
\settingsForEnum
\item a~relaxed domain
\item a~set of $N$ agents $A = \{a_1, \dots, a_N\}$
\item an origin and a destination for each agent
\end{itemize}
{\bf 1. The initial phase}
\begin{description}
\settingsForEnum
\item \quad{\bf For} $i = 1, \dots , N$ {\bf do}
\begin{description}
\settingsForEnum
\item \quad$\,$ Find an initial journey for agent $a_i$ using \\
\hbox{a~single-agent} planner.
\end{description}
\end{description}
{\bf 2. The BR phase}
\begin{description}
\settingsForEnum
\item \quad{\bf Do until} no change in the cost of the joint plan
\smallskip
\item \quad\quad{\bf For} $i = 1, \dots , N$ {\bf do}
\begin{enumerate}
\settingsForEnum
\item Create a simpler best-response planning (BRP) \\
problem from~the~point of view of agent $a_i$.
\item Minimise the cost of $a_i$'s plan without changing \\
the plans of others.
\end{enumerate}
\item \quad\quad{\bf End}
\end{description}
{\bf 3. The timetabling phase}
\begin{description}
\settingsForEnum
\item \quad Identify independent groups of agents $G = \{g_1, \dots,
g_M\}$. \smallskip
\item \quad{\bf For} $i = 1, \dots , M$ {\bf do}
\begin{enumerate}
\settingsForEnum
\item Find the relevant timetable for group $g_i$.
\item Match the joint plan of $g_i$ to timetable using a~temporal
single-agent planner in the full domain with the relevant
timetable.
\end{enumerate}
\item \quad{\bf End}
\end{description}
\end{shaded}
\caption{Three-phase algorithm for finding shared journeys for agents}
\label{fig:algorithmPseudocode}
\end{figure}
\subsection{The BR phase}
Second, in the {\it BR phase} (best-response phase), which is also based on the
relaxed domain, the algorithm uses the BRP algorithm as described above.
It~iteratively creates and solves simpler best-response planning problems from
the point of view of each individual agent. In the case of the relaxed domain,
the BRP problem looks almost the same as a~problem of finding a~single-agent
initial journey. The difference is that the cost of travelling is smaller when
an~agent uses a~connection which is used by one or more other agents, as will be
explained below, cf.~equation~\eqref{eq:defGroupCost}.
Iterations over agents continue until there is no change in the cost of the
joint plan between two successive iterations. This means that the joint plan
cannot be further improved using the best-response approach. The output of the
BR phase is the joint plan $P$ in the relaxed domain (defined in
section~\ref{sec:domainDefinitions}) that specifies which connections the agents
use for their journeys and which segments of their journeys are shared. The
joint plan $P$ will be matched to the timetable in the final phase of the
algorithm.
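In the relaxed domain, one best-response step therefore reduces to a shortest-path search in which connections already used by other agents are discounted. The following Python sketch illustrates this idea; the graph representation, the Dijkstra routine and the direct use of the sharing discount later defined in section~\ref{sec:costFunctions} are simplifications of the actual PDDL-based planning and are given for illustration only.
\begin{verbatim}
import heapq

def shared_cost(duration, n_agents):
    """Per-agent cost of one connection used by n_agents together
    (the sharing discount of the cost function section)."""
    return (0.8 / n_agents + 0.2) * duration

def best_response(graph, joint_plan, agent, origin, destination):
    """One best-response step in the relaxed domain: re-plan `agent`'s
    journey while the other agents' plans stay fixed.
    graph:      dict node -> list of (neighbour, duration)
    joint_plan: dict (u, v) -> set of agents currently using that edge
    Returns the new plan as a list of (u, v) connections; a path is
    assumed to exist."""
    others = {e: len(a - {agent}) for e, a in joint_plan.items()}
    dist, prev = {origin: 0.0}, {}
    queue = [(0.0, origin)]
    while queue:
        d, u = heapq.heappop(queue)
        if u == destination:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, duration in graph.get(u, []):
            n = others.get((u, v), 0) + 1   # co-travellers plus the agent
            nd = d + shared_cost(duration, n)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(queue, (nd, v))
    plan, node = [], destination
    while node != origin:
        plan.append((prev[node], node))
        node = prev[node]
    return list(reversed(plan))
\end{verbatim}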
\subsection{The timetabling phase} \label{sec:timetabling}
In the final {\it timetabling phase}, the optimised shared journeys are
matched against timetables using a~temporal single-agent planner which assumes
the full domain.
For this, as a~first step, independent groups of agents with respect to
journey sharing are identified. An independent group of agents is defined as an
edge-disjoint subgraph of the joint plan~$P$. This means that the actions of
independent groups do not affect each other, so it is possible to find
a~timetable for each independent group separately.
Then, for every independent group, {\em parts} of the group
journey are identified. A~{\it part} of the group journey is defined as a~maximal
continuous segment of the group journey which is performed by the same set of agents.
As an example, there is a~group of two agents that share a~segment of their
journeys in Figure~\ref{fig:groupPlanParts}: Agent~1 travels from
$A$ to $G$ while agent~2 travels from $B$ to $H$. Their group journey
has five parts, with the shared part (part 3) of their journey occurring between stops $C$
and~$F$.
\begin{figure}
\centering
\includegraphics[width=77mm]{figures/groupPlanParts}
\caption{Parts of the group journey of two agents}
\label{fig:groupPlanParts}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=77mm]{figures/groupPlanTimetable}
\caption{The full domain with services from the relevant timetable.
There are five different trains \textbf{T1} to \textbf{T5},
and train \textbf{T1} is a~direct train.}
\label{fig:groupPlanTimetable}
\end{figure}
\begin{table*}
\centering
\caption{Parameters of the testing scenarios}
\label{tab:tblScenarios}
\begin{tabular}{|l|r|r|r|r|r|} \hline
{\bf Scenario code}&{\bf S1}&{\bf S2}&{\bf S3}&{\bf S4}&{\bf S5}\\ \hline
Number of stops &
344 & 721 & $1\:044$ & $1\:670$ & $2\:176$\\ \hline
Relaxed domain connections &
744 & $1\:520$ & $2\:275$ & $4\:001$ & $4\:794$\\ \hline
Timetabled connections & \quad$23\:994$ & $26\:702$ & \quad$68\:597$ &
$72\:937$ & \quad$203\:590$\\ \hline
Means of transport &
trains & trains, coaches & trains & trains, coaches & trains\\
\hline\end{tabular}
\end{table*}
In order to use both direct and stopping trains when the group journey is
matched to the timetable, the {\it relevant timetable} for a~group journey is
composed in the following way: for every part of the group journey, return all timetable
services in the direction of agents' journeys which connect the stops in that
part. An~example of the relevant timetable for a~group of agents from the
previous example is shown in Figure~\ref{fig:groupPlanTimetable}. Now,
the agents can travel using the direct train \verb|T1| or using train
\verb|T2| with intermediate stops.
The relevant timetable for the group journey is used with the aim of cutting
down the amount of data that will be given to a~temporal single-agent planner. For
instance, there are $23\,994$ timetabled connections in Scotland. For an~example
journey of two agents, there are only 885 services in the relevant timetable
which is approximately 4~\% of the data. As a~result, the temporal single-agent
planner gets only the necessary amount of data as input, to prevent the
time-consuming exploration of irrelevant regions of the state space.
\subsection{Cost functions} \label{sec:costFunctions}
The timetable data used in this paper (cf.~section~\ref{sec:sourceData})
contains neither information about ticket prices nor distances between
adjacent stops, only durations of journeys from one stop
to another. This significantly restricts the design of cost functions used for
the planning problems. Therefore, the cost functions used in the three phases of
the algorithm are based solely on the duration of journeys.
In the initial phase, every agent tries to get to its destination in the
shortest possible time. The cost of travelling between adjacent stops $A$ and
$B$ is simply the duration of the journey between stops $A$ and $B$.
In the BR phase, we design the cost function in such a~way that it favours
shared journeys.
The cost $c_{i,n}$ for agent~$i$ travelling from $A$ to $B$ in a~group of $n$ agents is then defined by equation~\eqref{eq:defGroupCost}:
\begin{equation} \label{eq:defGroupCost} c_{i, n} = \left( \frac{1}{n}\,0.8 + 0.2 \right) c_i \end{equation}
\noindent where $c_i$ is the individual cost of the single action
to $i$ when travelling alone. In our experiments below, we take this to be equal
to the duration of the trip from $A$ to $B$.
This is designed to approximately model the discount for the passengers if they
buy a~group ticket: The more agents travel together, the cheaper the shared (leg
of a) journey becomes for each agent.
Also, an~agent cannot travel any cheaper than 20 \% of the single-agent cost.
In reality, pricing for group tickets could vary, and while our experimental
results assume this specific setup, the actual price calculation could be easily
replaced by any alternative model.
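For illustration, the following snippet evaluates the discount of equation~\eqref{eq:defGroupCost}: a~50-minute connection costs 50 when travelled alone, approximately 30 when shared by two agents, and 20 when shared by four.
\begin{verbatim}
def group_cost(single_cost, n_agents):
    """Cost per agent of one connection shared by n_agents; never
    cheaper than 20 % of the single-agent cost."""
    return (0.8 / n_agents + 0.2) * single_cost

# A 50-minute connection: alone, in a pair, and in a group of four.
print(group_cost(50, 1), group_cost(50, 2), group_cost(50, 4))
# approximately 50, 30 and 20
\end{verbatim}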
In the timetabling phase, every agent in a group of agents tries to spend the
shortest possible time on its journey. When matching the plan to the timetable,
the temporal planner tries to minimise the sum of durations of agents' journeys
including waiting times between services.
\section{Evaluation} \label{sec:evaluation}
We have evaluated our algorithm on public transportation data for the United
Kingdom, using various off-the-shelf planners for the three phases described
above, and a number of different scenarios. These are described together with
the results obtained from extensive experiments below.
\subsection{Planners}
All three single-agent planners used for the evaluation were taken from recent
International Planning Competitions (IPC) from 2008 and 2011.
We use LAMA \cite{richter2008} in the initial and the BR phase, a~sequential
{\em satisficing} (as opposed to cost-optimal) planner which searches for any
plan that solves a~given problem and does not guarantee optimality of the plans
computed. LAMA is a~propositional planning system based on heuristic state-space
search. Its core feature is the usage of landmarks
\cite{richter2008b}, i.e., propositions that must be true in every solution of
a~planning problem.
SGPlan\textsubscript{6}{} \cite{hsu2008} and POPF2 \cite{coles2011a} are temporal satisficing
planners used in the timetabling phase.
Such temporal planners take the duration of actions into account and try to
minimise makespan (i.e., total duration) of a~plan but do not guarantee
optimality. The two planners use different search strategies and usually
produce different results. This allows us to run them in sequence on every
problem and to pick the plan with the shortest duration. It is not strictly necessary to
run both planners; one could save computation effort by trusting only one of them.
SGPlan\textsubscript{6}{} consists of three inter-related steps: parallel decomposition, constraint
resolution and subproblem solution \cite{chen2006,hoffmann2001,meuleau1998, wah2006}. POPF2 is
a temporal forward-chaining partial-order planner with a specific extended
grounded search strategy described in \cite{coles2011, coles2010}.
It is not known beforehand which of the two planners will return a~better plan
on a particular problem instance.
\subsection{Scenarios}
To test the performance of our algorithm, we generated five different scenarios
of increasing complexity, whose parameters are shown in
Table~\ref{tab:tblScenarios}.
They are based on different regions of the United Kingdom (Scotland for S1 and
S2, central UK for S3 and S4, central and southern UK for S5). Each scenario
assumes trains or trains and coaches as available means of transportation.
In order to observe the behaviour of the algorithm with different numbers of
agents, we ran our algorithm on every scenario with $2, 4, 6, \dots, 14$ agents
in it. To ensure a~reasonable likelihood of travel sharing occurring, all agents
in the scenarios travel in the same direction. This imitates a~preprocessing
step where the agents' origins and destinations are clustered according to their
direction of travel. For simplicity, we have chosen
directions based on the cardinal points (N--S, \hbox{S--N}, W--E, E--W).
For every scenario and number of agents, we generated 40~different experiments
(10~experiments for each direction of travel), resulting in a~total
of $1,400$ experiments. All experiments are generated partially randomly as
defined below.
To explain how each experiment is set up, assume we have selected a scenario
from S1 to S5, a specific number of agents,
and a direction of travel, say north--south. To compute
the origin--destination pairs to be used by the agents, we place two axes $x$
and $y$ over the region, dividing the stops in the scenario into four quadrants
{\bf I}, {\bf II}, {\bf III} and {\bf IV}.
Then, the set~$O$ of possible
origin--destination pairs is computed as
\begin{multline*} \label{eq:defODset}
O := \{ ( A, B ) \vert (\left (A \in \mathbf{I} \wedge B \in \mathbf{IV}
\right ) \vee \left ( A \in \mathbf{II} \wedge B
\in \mathbf{III} \right )) \\ \wedge \vert AB \vert \in [ 20, 160] \}
\end{multline*}
This means that each agent travels from $A$ to $B$ either from
quadrant {\bf I} to {\bf IV} or from quadrant {\bf II} to {\bf III}. The
straight-line distance $\left\vert AB \right\vert$ between the origin and the
destination is taken from the interval 20--160~km (when using roads or
rail tracks, this interval stretches approximately to a real distance of
30--250~km). This interval is chosen to prevent journeys that could be hard
to complete within 24~hours. We sample the actual origin-destination pairs
from the elements of $O$, assuming a uniform distribution, and repeat the
process for all other directions of travel.
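A minimal Python sketch of this generation step is shown below. The quadrant test relative to the axes' centre and the great-circle distance are straightforward; the dictionary of stop coordinates and the position of the centre are assumptions made for the example (in our experiments the stops come from the NaPTAN data).
\begin{verbatim}
import math, random

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2)
         * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def sample_od_pairs(stops, centre, n_agents, dmin=20.0, dmax=160.0):
    """stops: dict stop_id -> (lat, lon); centre: (lat, lon) of the axes.
    North--south travel: origins north of the centre, destinations in the
    quadrant directly to the south, 20-160 km apart (straight line)."""
    clat, clon = centre
    north = {s: p for s, p in stops.items() if p[0] >= clat}
    south = {s: p for s, p in stops.items() if p[0] < clat}
    candidates = [(o, d)
                  for o, po in north.items() for d, pd in south.items()
                  # same side of the y-axis: I -> IV or II -> III
                  if (po[1] >= clon) == (pd[1] >= clon)
                  and dmin <= haversine_km(po, pd) <= dmax]
    return random.sample(candidates, n_agents)
\end{verbatim}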
\subsection{Experimental results}
We evaluate the performance of the algorithm in terms of three different
metrics: the amount of time the algorithm needs to compute shared journeys for
all agents in a~given scenario, the success rate of finding a~plan for any given
agent and the quality of the plans computed. Unless stated otherwise, the values
in graphs are averaged over 40~experiments that were performed for each
scenario and each number of agents. The results obtained are based on running
the algorithm on a Linux desktop computer with 2.66~GHz Intel Core~2 Duo
processor and 4~GB of memory. \jhhh{The data, source codes and scenarios in PDDL
are archived {online}\footnote{
\href{http://agents.fel.cvut.cz/download/hrncir/journey_sharing.tgz}{agents.fel.cvut.cz/download/hrncir/journey\_sharing.tgz}}.}
\subsubsection{Scalability}
To assess the scalability of the algorithm, we measure the amount of time
needed to plan shared journeys for all agents in a~scenario.
In many of the experiments, the SGPlan\textsubscript{6}{} and POPF2 used in the timetabling phase
returned some plans in the first few minutes but then they continued exploration
of the search space without returning any better plan.
To account for this, \jh{we imposed a~time limit for each planner in the
temporal planning stage} to 5 minutes for a group of up to 5 agents, 10 minutes
for a group of up to 10 agents, and 15 minutes otherwise.
Figure~\ref{fig:runtime} shows the computation times of the algorithm.
The graph indicates that overall computation time grows roughly linearly with
increasing number of agents, which confirms that the algorithm avoids the
exponential blowup in the action space characteristic for centralised multiagent
planning.
Computation time also increases linearly with growing scenario size.
Figure~\ref{fig:runtimeSize2} shows computation times for 4, 8 and 12 agents
against the different scenarios.
While the overall computation times are considerable (up to one hour for 14
agents in the largest scenario), we should emphasise that the algorithm is
effectively computing equilibrium solutions in multi-player games with hundreds
of thousands of states. Considering this, the linear growth hints at having
achieved a level of scalability based on the structure of the domain that is far
above naive approaches to plan jointly in such state spaces. Moreover, it
implies that the runtimes could be easily reduced by using more processing
power.
\begin{figure}[t!]
\centering
\includegraphics[width=74mm]{figures/runtime}
\caption{Computation time against number of agents}
\label{fig:runtime}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=71mm]{figures/runtimeSize2}
\caption{Computation time against scenario size}
\label{fig:runtimeSize2}
\end{figure}
\subsubsection{Success rate}
To assess the value of the algorithm, we also need to look at how many agents
end up having a valid travel plan.
Planning in the relaxed domain in the initial and the BR phase of the
algorithm is very successful. After the BR phase, 99.4~\% of agents have a~journey plan.
The remaining 0.6~\% of all agents do not have a single-agent plan because of
irregularities in the relaxed domain caused by splitting the public
transportation network into regions.
The agents without a single-agent plan are not matched to timetable connections
in the timetabling phase.
\begin{figure}[t!]
\centering
\includegraphics[width=76mm]{figures/groupsWithTimetable}
\caption{Percentage of groups for which a~timetable was
found as a function of group size.}
\label{fig:groupsWithTimetable}
\end{figure}
The timetabling phase is of course much more problematic.
Figure~\ref{fig:groupsWithTimetable} shows the percentage of groups for which
a~timetable was found, as a function of group size.
In order to create this graph, the number of groups with an assigned timetable and
the total number of groups identified were counted for every group size.
There are several things to point out here.
Naturally, the bigger a group is, the harder it is to find a~feasible timetable,
as the problem quickly becomes overconstrained in terms of travel times and
actually available transportation services. When a~group of agents sharing parts
of their journeys is big (5~or more agents), the percentage of groups for which
we can find a timetable drops below 50~\%. With a~group of 8~agents, almost no
timetable can be found. Basically what happens here is that the initial and BR
phases find suitable ways of travelling together in principle, but that it
becomes impossible to find appropriate connections that satisfy every
traveller's requirements, or that keep the total journey duration below
24~hours.
We can also observe that the success rate is higher in scenarios that use only
trains than in those that combine trains and coaches.
On closer inspection, we can observe that this is mainly caused by different
{\em service densities} in the rail and coach networks, i.e., the ratios of
timetabled connections over connections in the relaxed domain. For example, the
service density is 33~train services a~day compared to only 4~coach services in
Scotland. As a~consequence, it is much harder to find a~timetable in a~scenario
with both trains and coaches because the timetable of coaches is much less
regular than the timetable of trains. However, this does not mean that there is
less sharing if coaches are included. Instead, it just reflects the fact that
due to low service density, many of the envisioned shared journeys do not turn
out to be feasible using coaches. The fact that this cannot be anticipated in
the initial and BR phases is a weakness of our method, and is discussed further
in section~\ref{sec:conclusion}.
\begin{figure}[t!]
\centering
\includegraphics[width=76mm]{figures/improvement}
\caption{Average cost improvement}
\label{fig:improvement}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=73mm]{figures/prolongationLimitGlobal}
\caption{Percentage of groups with less than 30~\% journey prolongation}
\label{fig:prolongationLimitGlobal}
\end{figure}
\subsubsection{Plan quality}
Finally, we want to assess the quality of the plans obtained with respect to
improvement in cost of agents' journeys and their prolongation, to evaluate the
net benefit of using our method in the travel sharing domain. Note that the
algorithm does not explicitly optimise its solutions with respect to these
metrics.
To calculate cost improvement, recall that $C_{i}(\pi) = \sum_j c_i(a^j)$ is the
cost of a plan $\pi=\langle a^1,\ldots, a^k\rangle$ to agent $i$, and assume
that $n(a^j)$ returns the number of agents with whom the $j$th step of the plan
is shared.
We can then define the cost of a~shared travel plan as
$C_{i}^{'}(\pi)=\sum_j c_{i,n(a^j)}(a^j)$ using equation~\eqref{eq:defGroupCost};
the prime distinguishes it from the cost of non-shared travel.
With this we can calculate the improvement $\Delta C$ as follows:
\begin{equation} \label{eq:defImprovement} \Delta C = \frac{\sum_{i\in N}
C_{i}(\pi_i) - \sum_{i\in N} C_{i}^{'}(\pi_N)}{\sum_{i\in N} C_{i}(\pi_i)}
\end{equation}
where $N$ is the set of all agents, $\pi_i$ is the single-agent plan
initially computed for agent $i$, and $\pi_N$ is the final joint plan of all
agents after completion of the algorithm (which is interpreted as the plan
of the ``grand coalition'' $N$ and reflects how subgroups within $N$ share parts
of the individual journeys).
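For illustration (with purely hypothetical numbers, not taken from our experiments), consider two agents whose single-agent plans cost $C_1(\pi_1) = C_2(\pi_2) = 100$ each, and whose shared journey costs $C_1^{'}(\pi_N) = C_2^{'}(\pi_N) = 70$ each under equation~\eqref{eq:defGroupCost}. Then
\[ \Delta C = \frac{(100 + 100) - (70 + 70)}{100 + 100} = 0.3, \]
i.e., a 30~\% cost improvement for the grand coalition.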
The average cost improvement obtained in our experiments is shown in
Figure~\ref{fig:improvement}, and it shows that the more agents there are in the
scenario, the higher the improvement. However, there is a~trade-off between the
improvement in cost and the percentage of groups that we manage to find a
suitable timetable for, cf.~Figure~\ref{fig:groupsWithTimetable}.
On the one hand, travel sharing is beneficial in terms of cost. On the other
hand, a~shared journey has a longer duration than a~single-agent journey in most
cases. In order to evaluate this trade-off, we also measure the journey
prolongation.
Assume that $T_{i}(\pi)$ is the total duration of plan $\pi$ for agent $i$, and
that, as above, $\pi_i$/$\pi_N$ denote the initial single-agent
plans and the shared joint plan at the end of the timetabling phase,
respectively. Then, the prolongation $\Delta T$ of a~journey is defined as
follows:
\begin{equation} \label{eq:defProlongation} \Delta T = \frac{\sum_{i\in N}
T_i(\pi_N) - \sum_{i\in N} T_i(\pi_i)}{\sum_{i\in N} T_i(\pi_i)}
\end{equation}
Journey prolongation can be calculated only when a~group is assigned a~timetable
and each member of the group is assigned a~single-agent timetable. For this
purpose, in every experiment, we also calculate single-agent timetables in the
timetabling phase of the algorithm.
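Both metrics are simple to compute once per-agent costs and durations are available for the initial single-agent plans and the final joint plan. The following minimal Python sketch (with hypothetical input lists; it is not part of our implementation) illustrates the computation:
\begin{verbatim}
def improvement_and_prolongation(single_costs, shared_costs,
                                 single_durations, shared_durations):
    """Compute the cost improvement and the journey prolongation.

    single_*: per-agent values under the initial single-agent plans pi_i
    shared_*: per-agent values under the final joint plan pi_N
    """
    delta_c = (sum(single_costs) - sum(shared_costs)) / sum(single_costs)
    delta_t = (sum(shared_durations) - sum(single_durations)) / sum(single_durations)
    return delta_c, delta_t

# Hypothetical example: two agents, costs 100 -> 70, durations 60 -> 75 minutes.
print(improvement_and_prolongation([100, 100], [70, 70], [60, 60], [75, 75]))
# (0.3, 0.25)
\end{verbatim}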
A~graph of the percentage of groups that have a~timetable with prolongation less
than 30 \% as a function of group size is shown in
Figure~\ref{fig:prolongationLimitGlobal}. The graph shows which groups benefit
from travel sharing, i.e., groups whose journeys are not prolonged excessively
by travelling together.
Approximately 15~\% of groups with 3--4 agents
are assigned a~timetable that leads to a prolongation of less than~30~\%. Such a~low
percentage of groups can be explained by the algorithm trying to optimise the
price of the journey by sharing in the BR phase. However, there is a~trade-off
between the price and the duration of the journey. The more agents are sharing
a~journey, the longer the journey duration is likely to be.
These results were obtained with the specific cost
function~\eqref{eq:defGroupCost} we introduced to favour travel sharing,
which would have to be adapted to the cost structure present in a given
transportation system. Also, the extent to which longer journey times are
acceptable depends on the traveller's preferences, which could easily be
accommodated by using different cost functions.
\section{Discussion} \label{sec:discussion}
The computation of single-agent plans in the initial phase involves solving a
set of completely independent planning problems. This means that the planning
process could be sped up significantly by using parallel computation on
multiple CPUs. The same is true for matching different independent groups of
agents to timetabled connections in the timetabling phase. As an~example, assume that
there are $N$~agents in the scenario and $t_1, \dots, t_N$ are the computation
times for respective single-agent initial plans. If computed concurrently, this
would reduce the computation time from $t = \sum_{i=1}^N t_i$ to $t' =
\max_{i=1}^N (t_i)$. Similar optimisations could be performed for the
timetabling phase of the algorithm.
In the experiments with 10~agents, for example, this would lead to a~runtime
reduced by 48~\% in scenario~S1 and by 44~\% in scenario~S5.
A major problem of our method is the inability to find appropriate connections
in the timetabling phase for larger groups.
There are several reasons for this. Firstly, the relaxed domain is overly
simplified, and many journeys found in it do not correspond to journeys that
would be found if we were planning in the full domain. Secondly, there are too
many temporal constraints in bigger groups (5 or more agents), so the timetable
matching problem becomes unsolvable given the 24-hour timetable.
However, it should also be pointed out that such large groups would be very
hard to identify and schedule even with human planning.
Thirdly, some parts of the public transportation network have very irregular
timetables.
Our method clearly reduces the cost of agents' journeys by sharing parts of the
journeys, even though there is a~trade-off between the amount of improvement,
the percentage of groups for which a timetable is found, and the prolongation of
journeys. On the one hand, the bigger the group, the greater the improvement. On
the other hand, the more agents share a~journey, the harder it is to match their
joint plan to a~timetable. Also, the prolongation is likely to be higher with
more agents travelling together, and will most likely lead to results that are
not acceptable to users in larger groups.
Regarding the domain-independence of the algorithm, we should point out that its
initial and BR~phases are completely domain-independent, so they could easily be
used in other problem domains such as logistics, network routing or service
allocation. In the traffic domain, the algorithm can be used to plan routes that
avoid traffic jams or to control traffic lights. Moreover, additional
constraints, such as staying in one city for some time or travelling together
with a~specific person, can easily be added. On the other hand, the timetabling
phase of the algorithm is domain-specific, providing an example of the specific
design choices that have to be made from an engineering point of view.
To assess the practical value of our contribution, it is worth discussing how it
could be used in practice as a~part of a~travel planning system for real
passengers. In such a~system, every user would submit origin, destination and
travel times.
Different users could submit their preferences at different times, with the
system continuously computing shared journeys for them based on information
about all users' preferences.
Users would need to agree on a shared journey in time to arrange meeting points
and to purchase tickets, subject to any restrictions on advance tickets etc.
Because of this lead time, it would be entirely sufficient if the users got an
e-mail with a~planned journey one hour after the last member of the travel group
submits his or her journey details, which implies that even with our current
implementation of the algorithm, the runtimes would be acceptable.
From our experimental evaluation, we conclude that reasonable group sizes range
from two to four persons. Apart from the fact that such groups can be relatively
easily coordinated, with the price model used in this paper,
cf.~formula~\eqref{eq:defGroupCost}, every member of a~three-person group could
save up to 53~\% of the single-agent price. The success rate of the timetabling
phase of the algorithm for three-person groups in the scenario S3 (trains in the
central UK) is 70 \%.
\section{Conclusion} \label{sec:conclusion}
We have presented a~multiagent planning algorithm which is able to plan
meaningful shared routes in a~real-world travel domain. The algorithm has been
implemented and evaluated on five scenarios based on real-world UK~public
transport data. The algorithm exhibits very good scalability, since it scales linearly
both with the scenario size and the number of agents. The average computation
time for 12~agents in the scenario with 90~\% of trains in the UK is less than
one hour. Experiments indicate that the algorithm avoids the exponential blowup
in the action space that is characteristic of a~centralised multiagent planner.
To deal with thousands of users that could be in a~real-world travel planning
system, a~preprocessing step would be needed: The agents would have to be
divided into smaller groups by clustering them according to departure time,
direction of travel, origin, destination, length of journey and preferences
(e.g., travel by train only, find cheapest journey). Then, the algorithm could
be used to find a~shared travel plan with a~timetable. To prevent excessively
large groups of agents, which are unlikely to be matched to the timetable,
a~limit can be imposed on the group size.
If a~group plan cannot be mapped to a timetable, the group can be split into
smaller sub-groups which are more likely to identify a~suitable timetable.
Finally, the price of travel can be significantly reduced, and the flexibility
of travel sharing increased, by sharing a~private car. In the future, we would like to
explore the problem of planning shared journeys when public transport is
combined with ride sharing. Then, in order to have a~feasible number of nodes in
the travel domain, train and bus stops can be used as meeting points where it is
possible to change from a~car to public transport or vice versa.
\section{Acknowledgments}
Partly supported by the Ministry of Education, Youth and Sports of the Czech
Republic (grant No. LD12044) and the European Commission FP7 (grant agreement No.
289067).
\bibliographystyle{abbrv}
\section{Introduction}
\begin{figure}[t!]
\centering
\includegraphics[width=0.85\textwidth]{figs/overview_frontiers.pdf}
\caption{AI research frontiers associated with football analytics. Here we highlight three foundational areas related to AI research--Statistical Learning, Computer Vision, and Game Theory--which have independently been demonstrated to be effective in analyzing the game of football (with example problems and works from literature indicated above per field). We argue that the domain spanned by these research fields is most promising for establishing seminal progress in football analytics in the coming years, along 3 frontiers: Frontier~1~(GT\&SL)\xspace, Frontier~2~(SL\&CV)\xspace, and Frontier~3~(GT\&CV)\xspace. We claim that the above frontiers culminate into a unique microcosm mutually benefiting both AI and football analytics.}
\label{fig:overview}
\end{figure}
Recent years have seen tremendous growing interest in sports analytics, not only from an economic and commercial point of view, but also from a purely scientific perspective, viz.\,the growing number of publications~\citep{Palacios2003,baumer2014sabermetric,shih2017survey}
and scientific events organized on the topic (e.g., the MIT Sloan Sports Analytics Conference,\citep{mit_sloan_conf} the CVPR International Workshop on Computer Vision in Sports, \citep{cvsports_workshop} and the ECML/PKDD Workshop series on Machine Learning and Data Mining for Sports Analytics~\citep{kdd_ml_sa}).
As evident in many different downstream domains that have benefited from applications of artificial intelligence (AI) and machine learning (ML), this is due to important technological advances in data collection and processing capabilities, progress in statistical and in particular {deep learning}, increased compute resources, as well as ever-growing economic activities associated with sports and culture (e.g., emergent consultancy ventures revolving around sports data collection and statistics~\citep{beal_norman_ramchurn_2019,opta_sports,ChyronHego,InStat,StatsBomb,soccernomics}).
{Predictive analytics} has been investigated and applied in the context of several sports in the past decades, including, e.g., basketball~\citep{skinner2010price}, tennis~\citep{walker2001minimax,gauriot2016nash}, and baseball~\citep{Albert02,Lewis04,costa2009practicing,Song17,Puerzer02,Albert10,baumer2014sabermetric}, with data for the latter having been systematically collected since the 19$^{\text{th}}$ century.
Although statistical analysis of data has led to impressive outcomes in various sports (e.g., Moneyball in baseball~\citep{Lewis04,baumer2014sabermetric}), football started participating rather late in this data collection and number-crunching game, with the data science transformation that informs stakeholders (e.g., decisions related to player transfers, scouting, pre- and post-match analysis, etc.) still in its infancy \citep{soccernomics}.
Several factors influenced this late arrival.
Football takes place under far less controllable settings than other sports due to its outdoor and highly dynamic nature, a larger pitch, a large number of players involved, a low number of player changes and longer non-interrupted game sequences than sports such as basketball.
As a result, analytics companies have only relatively recently started collecting so-called big data (e.g., high-resolution videos, annotated event-streams, player tracking and pose information) for football.
Concurrently, only recently have major breakthroughs been made in deep learning, yielding techniques that can handle such new high-dimensional data sets\citep{bengio2009learning,Arel10,lecun2015deeplearning,Schmid15,Goodfellow-et-al-2016}.
Finally, for a long time, credibility in decision-making primarily depended on human specialists such as managers, retired players, and scouts, all of them with track records and experience in professional football, in part due to cultural reasons~\citep{soccernomics,DecroosD19}.
Even a successful football manager like Arrigo Sacchi received criticism for never playing professional football when becoming a coach at Milan in 1987 (to which he responded: ``I never realised that to be a jockey you had to be a horse first."~\citep{sacchi_quote_fifa}).
As a result of these various factors, the potential influence and gains of predictive analytics on the football game have also been less obvious, with sports analytics as a game-changing phenomenon not realized until recent years.
In more philosophical terms, \citet{soccernomics} highlight a cultural hesitation regarding the integration of data science into football and an overdependence on gut instincts, noting that ``until very recently, soccer had escaped the Enlightenment".
Despite football's late adoption of sports analytics, there are a number of early-bird approaches from different areas of AI such as statistical learning (SL), computer vision (CV), and game theory (GT) that are making initial contributions to support decision-making of managers, coaches and players.
For example, even basic statistical learning tools such as principal component analysis (PCA) enable automated means of identifying player types\citep{DecroosD19}, training of models predicting trajectories of individual teams or imitating league-average behaviors\citep{Le17}, and valuing individual player decisions (such as passes or tackles) in a series of actions leading up to a goal.\citep{DecroosBHD19}
The study of interactive decision-making as formalized by game theory plays a critical role in AI for systems involving more than one actor (human or artificial).
Game-theoretic tools shed light on players' strategic interactions during scenarios such as penalty kicks, analysis of their decisions in comparison to mathematically-principled baselines, and prediction of their goal-scoring probabilities when playing according to a mixed-strategy Nash equilibrium~\citep{Palacios2003,chiappori2002testing,palacios2016beautiful}.
Enriched with {empirical game theory}~\citep{wellman2006methods, TuylsPLHELSG20,omidshafiei2019alpha} the effects of various high-level strategies pitted against each other can also be analyzed.
Finally, recent developments in computer vision have been employed for player tracking~\citep{Lu13,Liu2013,Bialkowski2015,Gade18}, pose estimation~\citep{Zhang_2019_CVPR,Fastovets,Bridgeman_2019,Sypetkowski19,Sypetkowski}, and automated injury prediction~\citep{Kampakis} based on, e.g., gait and fatigue analysis\citep{Meert18,Ramos20,Claudino,Kakavas,Bartlett}.
While these separate areas within AI research have independently been demonstrated to be effective when targeting the above specific prediction and analysis problems in football, we believe that the most fruitful perceived benefits (and, correspondingly, the most interesting and challenging research problems) lie in the underexplored intersection of the subfields of statistical learning, computer vision, and game theory.
We argue that the domain spanned together by these three fields of research is the most promising for establishing seminal progress in football analytics in the coming years.
We lay out this interdependence in \cref{fig:overview}.
Specifically, we pinpoint several frontiers at the intersections of these three fields, and identify the ultimate frontier to be the microcosm requiring integrated approaches that build on fundamentals of the three areas.
A large portion of the current state-of-the-art research in football analytics, by contrast, typically falls under the umbrella of one of these areas, with some initial activities taking places at Frontier~2~(SL\&CV)\xspace~\citep{Lu13,Mora17,Mehrasa,Choi_2019_ICCV,Quiroga_2020}, and no notable research activities identified at Frontier~1~(GT\&SL)\xspace and Frontier~3~(GT\&CV)\xspace for football, or sports in general.
To make the potential opportunities more concrete, we provide and discuss in detail several case studies in the sections \nameref{sec:game_plan} and \nameref{sec:results}.
At Frontier~1~(GT\&SL)\xspace, game-theoretic analysis is blended with learned predictive models.
Research along this axis of the microcosm focuses on a combination of interactive decision-making with predictive modeling providing more concrete and deeper analysis tools.
We present a detailed case study illustrating this frontier, revisiting the seminal work of \citet{Palacios2003} on penalty kicks under this new perspective and illustrate how mixing with SL provides deeper insights into penalty-kick taking.
Frontier~2~(SL\&CV)\xspace focuses on research that integrates statistical learning with computer vision.
Research along this axis directly learns from video as the primary input and builds predictive models, for instance forecasting player and team behaviors directly.
At Frontier~3~(GT\&CV)\xspace, we classify research integrating computer vision and game theory, a largely uncharted territory focusing on generative models based on visual inputs, which takes strategic interactions into account.
We claim that the above frontiers culminate into a unique microcosm mutually benefiting both AI and football analytics, to the point it becomes possible to develop, for example, an {Automated Video Assistant Coach} (AVAC). The AVAC system is an example of what we believe to be the future of human-centric AI research for football, with the aim of integrating all aspects of the frontiers into a cohesive system enabling both understanding and improvement of human football play.
Such an AVAC system is envisioned to improve the overall experience of the game for players, coaches, and spectators alike.
For the team itself, such a system can automate the recognition of key in-game moments based purely on raw data, enabling data scientists to minimize laborious efforts spent on lower-level data processing, and instead focus their expertise on identifying nuanced tactical considerations that can better inform coaches and players.
For spectators, such a system can deepen the understanding of the game without the need for an in-house human expert, and provide useful tools such as automatic highlight extraction and automated, interactive annotations of game footage.
The overall advantage of this microcosm, however, is that while it makes clear how AI research can benefit football in the long-term, it has the dual effect of defining football as a new human-centric AI research domain with major challenges that can progress the AI field through cross-fertilization of ideas from the three highlighted axes.
In the following, we first lay out the long-term perspective of how AI can benefit the domain of football analytics, and vice versa.
We then describe the current state of affairs and sketch a long-term vision for research in the football microcosm.
Next, we illustrate the three axes in detail, and we present a case study that examines penalty-kick taking from a game-theoretic perspective, building on the work of \citet{Palacios2003,palacios2016beautiful}, and bringing several new insights based on data from the main European leagues.
We illustrate an example of counterfactual trajectory predictions (i.e., a what-if analysis) that can be enabled through learned predictive models, and subsequently demonstrate how this game-theoretic work can be enriched via integration with statistical learning at Frontier~1~(GT\&SL)\xspace, providing several new insights about penalty-kick taking in general, and making a case for the football microcosm as a research domain.
\section{Game Plan: Long-term research vision}\label{sec:game_plan}
This section outlines a long-term research vision for AI applied to football analytics.
We first consider the state-of-the-art with respect to each of the respective areas (GT, SL, and CV) applied to the football domain, after which we look into the opportunities that each of the frontiers brings forth to unlocking larger scientific challenges in football analytics.
We continue by discussing the reverse perspective of what football can imply for AI research, by dissecting the football AI microcosm into several layers of key challenges.
\subsection{AI for Football Analytics}
AI includes a set of algorithmic techniques from statistical learning, computer vision, and game theory that are applicable to football analytics.
The following sections summarize known results that lie in these peripheral fields of \cref{fig:overview} and highlight opportunities for further work and added value in each.
\subsubsection{Statistical learning}
Football is arguably the most challenging to analyze of all the major team sports. It involves a large number of players with varied roles, few salient events, and minimal scoring.
Statistical football analysis attempts to provide quantitative answers to questions that pertain to different aspects of the game.
Notably, these include the problem of characterizing players' and teams' styles, evaluation of the impact that such teams have on the pitch, and the temporal and counterfactual predictions of players' actions.
When one compares styles of players (and teams, by extension), one usually refers to high level and abstract notions that summarize their unique characteristics.
The goal of the statistical learning line of research is to learn features capturing such information, normally to be used by other down-stream tasks.
For instance, one means of summarizing information about a football player is through aggregation of their play statistics (e.g., offensive, defensive, or dribbling abilities, shots on target, etc.), and that of their teams.~\citep{fernandez2016attacking,stats_playing_styles_2020}
While such statistics are typically either handcrafted or rely on simple measures of play outcomes, recent works have analyzed them from a statistical learning perspective, using notions such as Player Vectors~\citep{DecroosD19} (and analogous techniques in sports such as basketball~\citep{franks2015characterizing}).
Given the growing success of unsupervised learning methods, there is potential for more advanced representations of player traits to be learned directly from the data.
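To make this concrete, the following is a minimal sketch of deriving low-dimensional player style vectors from aggregated play statistics via PCA; the feature set, data shapes, and use of scikit-learn are illustrative assumptions, not a description of the cited systems:
\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical matrix of per-player aggregated statistics, one row per player
# (e.g., passes, dribbles, tackles, shots on target, ... per 90 minutes).
rng = np.random.default_rng(0)
player_stats = rng.poisson(lam=5.0, size=(500, 12)).astype(float)

# Standardise the features, then project onto a low-dimensional "style" space.
features = StandardScaler().fit_transform(player_stats)
style_vectors = PCA(n_components=4).fit_transform(features)

# Players with similar style vectors can now be compared or clustered,
# e.g., retrieving the five players most similar to a query player.
query = style_vectors[0]
distances = np.linalg.norm(style_vectors - query, axis=1)
most_similar = np.argsort(distances)[1:6]
\end{verbatim}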
In football, it is particularly difficult to assess which players, or groups of players, deserve credit for favorable outcomes.
For high-scoring team sports (e.g., basketball) one can provide a reasonable answer to this question by restricting attention to actions that have immediate impact on scoring.
By contrast, few goals are scored in football (e.g., 2.72 goals were scored on average per game of the 2019/2020 Premier League season~\citep{pl_goals_2019}).
Consequently, models considering only actions with immediate impact on goals (e.g.\,shots or saves) capture a crucial yet narrow view of the game.
Moreover, game states in football are significantly more complex than those estimated by current models.
Features describing them are mostly hand-crafted and only consider on-ball actions. A given pass might be extremely valuable or a poor choice depending on the disposition of the players.
While these methods are able to value on-ball actions, they rely on sparse signals provided by goals.
Concurrently, off-ball actions significantly impact each game, as exemplified by player actions covering certain pitch areas to prevent attacks, running to open areas to create space for teammates, and so on.
Due to the temporally extended nature of football, the task of inferring the value of actions is an instance of the temporal credit assignment problem in reinforcement learning (RL) \cite{minsky1961steps}.
The combination of RL techniques with deep learning has great potential to tackle the idiosyncrasies of football and to close the gap between statistical and human analysis. It is exciting to see recent progress in this direction in sports analytics, including ice hockey \cite{guiliang2018deep} and football \cite{sun2020cracking, liu2020Deep}.
Overall, while the above pieces of work showcase the promise of modern statistical methods for temporal predictions in football, this remains an open and challenging problem that will likely require novel development of methods and means of leveraging the diversity of newly-available football data.
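As a simple, hypothetical illustration of the credit-assignment idea discussed above, on-ball actions can be scored by the change in estimated state value they induce (a one-step temporal-difference term); the state featurisation, reward definition, and value function below are assumptions for illustration only:
\begin{verbatim}
def action_scores(states, rewards, value_fn, gamma=1.0):
    """Score each action by the change in estimated state value it induces.

    states  : state feature vectors s_0, ..., s_T for one possession/episode
    rewards : rewards r_1, ..., r_T (e.g., +1 when the possessing team scores)
    value_fn: callable mapping a state to its estimated value
    """
    values = [value_fn(s) for s in states]
    return [rewards[t] + gamma * values[t + 1] - values[t]
            for t in range(len(rewards))]
\end{verbatim}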
\subsubsection{Game theory}
Game theory plays an important role in the study of sports, enabling theoretical grounding of players' behavioral strategies.
Numerous works have applied game-theoretic analysis to sports over recent decades~\citep{Sindik2008,Lennartsson15}, including football~\citep{Palacios2003,MOSCHINI2004,Azar2011,Levitt,Buzzachi14,coloma2012penalty,chiappori2002testing}, tennis~\citep{walker2001minimax,gauriot2016nash}, volleyball~\citep{lin2014applying}, basketball~\citep{skinner2010price}, and gridiron football~\citep{emara2014minimax}.
High-level game theory can be applied to team selection and choice of formation.
Set pieces, such as corner kicks and penalties, are particularly amenable to game-theoretic analysis, wherein identified high-level strategies can be pitted against one another and ranked in terms of empirical performance (see \nameref{sec:results} section for details).
Due to the real-world nature of football, the majority of the aforementioned game-theoretic analysis is driven by the availability of rich data sources.
The volume of available high-level statistics (e.g., match outcomes spanning across different leagues, seasons, and competition levels) makes football a particularly attractive topic from a behavioral game-theoretic perspective~\citep{camerer2011behavioral,wellman2006methods}.
From a theoretical perspective, the majority of the existing works exploit the fact that various football scenarios can be modeled as two-player zero-sum games.
For example, in football, the penalty kick situation may be straightforwardly modeled as a two-player asymmetric game, where the kicker's strategies may be neatly categorized as left, center, or right shots.
The controlled nature of these scenarios compounds their appeal from a quantitative analysis perspective;
in the penalty example, the penalty taker, goalkeeper, and ball's initial positions are static across all dataset trials.
In fact, the majority of the literature that analyzes football under a game-theoretic lens focuses on penalty kicks~\citep{Palacios2003,Azar2011,Levitt,Buzzachi14,coloma2012penalty,chiappori2002testing}, which we contend is due to this amenability for analysis via classical game-theoretic solution concepts (such as Nash equilibria).
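To make the classical penalty-kick analysis concrete, the following sketch computes a kicker's maximin (equilibrium) mixture for a two-player zero-sum game via linear programming; the scoring probabilities are invented for illustration and are not estimates from data:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

# Illustrative scoring probabilities for the kicker (rows: kick direction,
# columns: goalkeeper dive direction); the values are made up for this example.
A = np.array([[0.65, 0.95, 0.95],   # kick left   vs dive left/centre/right
              [0.95, 0.40, 0.95],   # kick centre
              [0.95, 0.95, 0.60]])  # kick right

n_rows, n_cols = A.shape
# Decision variables: kicker mixture x (n_rows entries) and the game value v.
# Maximise v subject to x^T A[:, j] >= v for every goalkeeper action j.
c = np.zeros(n_rows + 1)
c[-1] = -1.0                                    # linprog minimises, so use -v
A_ub = np.hstack([-A.T, np.ones((n_cols, 1))])  # v - x^T A[:, j] <= 0
b_ub = np.zeros(n_cols)
A_eq = np.append(np.ones(n_rows), 0.0).reshape(1, -1)  # probabilities sum to 1
b_eq = np.array([1.0])
bounds = [(0, 1)] * n_rows + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
kicker_mixture, expected_scoring_prob = res.x[:-1], res.x[-1]
\end{verbatim}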
However, the potential for football to benefit from game-theoretic analysis remains untapped until a paradigm shift away from the classical analysis of set piece settings occurs, towards live-play settings such as those considered by \citet{MOSCHINI2004}.
Moving beyond this classical paradigm involves resolution of significant challenges:
the number of active players in football is larger than most other professional sports (the notable exception being rugby) and the exponential size of the strategy space (with respect to the number of players) makes it more challenging to analyze than games such as tennis or basketball;
the mapping of low-level player actions to strategies is no longer straightforward in durative live plays, due to the variability of player trajectories;
the duration of plays is also generally longer than sports played in more controlled environments (e.g., tennis), implying that such scenarios may benefit from analysis in the so-called extensive-form (rather than the simultaneous-move, normal-form approaches typically used for set piece analysis).
Nonetheless, the availability of new data types increases the viability of conducting more advanced game-theoretic analysis by bootstrapping to advances in statistical learning and computer vision, as later detailed.
Surmounting these challenges will benefit football from a game-theoretic perspective, enabling both a descriptive analysis (i.e., understanding the interactions undertaken by players and teams in the presence of others), and a prescriptive one (i.e., suggesting the actions such individuals should have executed).
\subsubsection{Computer vision}
Computer vision has seen major breakthroughs in the past decade, thanks to the application of deep learning approaches.
Progress in tasks such as image classification~\cite{deng2009imagenet}, video action recognition~\cite{Carreira_2017_CVPR}, and pose estimation~\cite{alp2018densepose} have unlocked the possibility of automatically extracting complex information from videos.
Similarly, in football analytics, computer vision methods can enable enrichment of statistical and game-theoretic approaches, which typically rely on hand-labeled, low-dimensional data.
Video is a particularly appealing signal for football analytics:
it is a rich data source (likely to contain large amounts of information of interest for decision-making) and cheap to acquire (using only widespread camera sensors).
While raw video frames are insufficiently structured to be processed by traditional rule-based processing techniques, computer vision methods enable the extraction of high level, potentially spatially-detailed representations, ready for use by downstream applications.
Examples of such extraction processes include human pose estimation~\cite{alp2018densepose}, object detection~\cite{ren2015faster}, segmentation~\cite{long2015fully}, tracking~\cite{dong2018triplet}, depth estimation~\cite{godard2017unsupervised}, and event or action detection~\cite{Carreira_2017_CVPR}.
In addition to explicit, human-interpretable representations, learned counterparts can be used by downstream deep learning models.
In football, several companies already commercialize tracking information, relating to the players and the ball, automatically extracted from videos recorded by dedicated static cameras, placed to have a view covering the entire terrain~\citep{opta_sports,StatsBomb}.
Moreover, immersive sports analytics has been gaining popularity with the increased access to specialized hardware and portability.~\citep{Lin2020SportsXRI}
Computer vision techniques enable reconstruction of real game scenarios, which provide more feedback to players and coaches than 2D screens, and have been extended to other sports media as well.~\citep{StriVR}
Vision-based player tracking, pose estimation, and event detection can improve learning of player skill vectors and predictive modeling, subsequently improving player ranking and game-theoretic analysis of strategies.
While video has been traditionally used as the primary high-dimensional signal for the above applications, other modalities such as audio and text streams can provide extremely rich and complementary information in these settings.
Audio commentary and news threads readily provide a human interpretation of the events occurring in a scene, which is highly complementary to the spatially fine-grained information available in videos.
Temporally-aligned sound and text streams have already seen recent application in learning rich representations of video signals~\cite{alayrac2020self,arandjelovic2017look,Miech_2020_CVPR}, and were proven to be useful in a variety of computer vision tasks.
Although such information significantly overlaps with the structured annotations already used by downstream applications (event annotations, etc.), its unstructured nature enables the presence of a greater information content (e.g., a commentator's tone can provide some cues as to an action's value).
Unique challenges arise when seeking to improve the performance and to enlarge the applications of computer vision methods for broadcast video of football games.
In general, the camera's field of view is centered on the most interesting action in the field, leaving many players out of the frame.
This poses an interesting geometric problem, as broadcast footage shots typically do not overlap, and many state-of-the-art systems are unable to geometrically relate these multiple shots.
Other challenges arise due to the multiple occlusions of players by one another, further complicating the detection, tracking, and identification tasks.
These problems can be addressed by geometrical approaches in settings where one can intervene to correct camera positions or, ideally, access the cameras' intrinsic and extrinsic parameters.
However, approaches that do not assume access to such information have the potential to leverage more data.
In contrast with other applications, football broadcast videos present an interesting blend between real, large-scale, complex data and a constrained setting (due to the rules of the game \citep{giancola2018soccernet}), hence providing an attractive setting for developing such approaches.
\subsubsection{Frontier 1: Interactive decision-making}
Learning to make suitable decisions in the presence of other agents is where game theory and statistical learning can converge.
This interdisciplinary area of research has received significant attention in the multi-agent RL community over the past decades, with thorough survey articles~\citep{PanaitL05,ShohamPG07,BusoniuBS08,TuylsW12,BloembergenTHK15,Hernandez-LealK20} available.
When considering the football domain in particular, it is evident that the potentials of game theory have yet to be fully exploited, let alone its combination with machine learning techniques such as RL.
There are several promising routes in this area that not only are challenging from a football analytics perspective, but also from an AI research one.
Two of these routes are studied further in this article in the \nameref{sec:results} section; one concerns the study of set pieces using the combination of statistical learning with game theory, and a second focuses on predictive modelling for counterfactual analysis.
In the former, which has been mostly studied in a game-theoretic setting, we show how augmenting the analysis with player-specific statistics can provide deeper insights in how various types of players behave or take decisions about their actions in a penalty kick scenario.
In the latter case, we illustrate how machine learning techniques can facilitate counterfactual analysis in football matches.
The possibility to predict, for example, trajectories of players can enable investigation of counterfactual scenarios, (e.g., wherein one would like to know how a specific player or team would respond in a specific match scenario).
Doing this enables one to not only learn to generate behaviors, but also leverage game-theoretic techniques for counterfactual analysis.
We defer a more detailed discussion of these research lines to the \nameref{sec:results} section.
Building on the counterfactual prediction of players' behaviors, one can also consider the possibility of using this as a coaching tool.
Specifically, one can use counterfactual insights to advise tactics to individual players, and even go further by optimizing the overall team strategy depending on the specific opponent in an upcoming match.
This would go beyond the state-of-the-art, which focuses on simply predicting player behaviors;
here, one would seek to actively optimize suggested tactics based on the particular behaviors and play style of the opposing team, and upcoming match-ups.
Such tactical considerations can also be conducted in an iterative manner (e.g., predictions can be made for the opposing team as conditioned on the best-response behavior of the main team), and effective counter strategies can be learned, for instance, using multi-agent RL.
Such an approach opens the door to a slew of interesting research challenges. For instance, use of multi-agent RL entails definition of a reward function for the players;
while rewarding goals is an obvious candidate, denser reward signals (e.g., associated with successful passes, intercepts, etc.) may be useful for accelerating learning of such policies.
Such reward functions are also likely to depend on the role of a player in the team and their respective skill set (i.e., may even be heterogeneous across players in a given team), and could also be learned using techniques such as inverse RL~\citep{NgR00}.
Finally, the combination of the previous learnt models with pitch control\citep{Spearman16}, a technique to determine which player/team has control over a specific area of the pitch, will provide additional information on open space, passing and scoring opportunities, yielding a powerful tool to enable in-match tailored coaching tools.
\subsubsection{Frontier 2: Predictive Modeling from Videos}
Several challenges naturally lie in the frontier between statistical learning and computer vision.
Statistical learning depends on large quantities of labelled data.
Many of the quantities suitable for models of football are the product of hand-labelling data;
on the other hand, vision-based models could automatically identify events which could be fed into such models.
In addition to the quantity of events that vision-based systems could provide, the quality could also be improved (e.g., with events being accurately registered to the corresponding frame, with minimal temporal error compared to human-labeled annotations).
Furthermore, video is a much richer signal compared to what is traditionally used in predictive tasks, such as forecasting future movement of players or predicting the value of individual actions to the game outcome.
The combination of advanced deep learning models and a rich video signal enables learning over subtle clues otherwise not captured in event-stream or tracking data.
Capturing more of the partially-observable state of a game will ultimately enable more accurate predictions.
Richer information may additionally help to shed light on the intention of players and thus better address the credit assignment problem in action-value estimation.
On the other hand, understanding the game dynamics is necessary to resolve some of the issues of vision-based tracking models, e.g., occlusions or even tracking off-camera players.
Such a predictive model of the players' movements can either provide additional inputs to the tracking system or be implicitly learned.
Explicit or implicit access to the game dynamics will very likely also improve vision-based action labeling.
Finally, presenting prediction outcomes by means of synthetic video generation remains an ambitious challenge that combines the task of trajectory prediction with video generation.
Presenting predictions of counterfactual futures as video will enable intuitive understanding both by coaching personnel and players (e.g., in the same vein as recent work on video generation for tennis matches)~\citep{zhang2020vid2player}.
\subsubsection{Frontier 3: Generative Game-Theoretic Video Analysis Models}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/vision/posenet1.png}
\end{subfigure}%
\hfill%
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/vision/posenet2.png}
\end{subfigure}%
\hfill%
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/vision/posenet3.png}
\end{subfigure}%
\caption{Example of a pose estimation model applied to a penalty kick.
The results showcased here follow a multi-phase approach~\citep{Papandreou_2017_CVPR}. The first stage applies a Faster-RCNN candidate detector~\citep{NIPS2015_5638}, which extracts bounding boxes around the players. This is followed by a pose estimation model applied to these bounding box proposals, which identifies the player keypoints and provides the pose estimates.}
\label{fig:poses_penalty_kick}
\end{figure}
Video modeling and game theory can mutually benefit one another.
In the simplest application, computer vision can provide new features to drive game-theoretic models.
In more advanced applications, game theory can, in turn, guide video generation, as illustrated in \cref{fig:video_generation}.
Throughout a football match, individual players carry out a series of encounters with one another, which can profitably be viewed through game theory.
Each player may have some idea of the likely success of strategies based on past performance and current in-game observations.
Penalty kicks are an almost idealized example, with a kicker and goalkeeper observing each other for signs of intent, while simultaneously weighing preconceived strategies.
As described previously, computer vision models can be used to automatically extract high-level and potentially spatially-detailed representations that can be complementary to the low-dimensional, hand-collected inputs game-theoretic models typically rely on.
We illustrate this representation extraction pipeline in the left portion of \cref{fig:video_generation}.
In the example of penalty kicks, vision models could identify visual signatures of intent that are difficult to even perceive for humans, let alone annotate;
such information includes, for example, pose estimates extracted from broadcast footage (as visualized in \cref{fig:poses_penalty_kick}), which can enable inference of the intentions of players, providing valuable insights to improve their respective strategies.
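As an illustration of how such pose information can be obtained in practice, the sketch below uses an off-the-shelf person-keypoint detector from torchvision (assuming torchvision $\geq$ 0.13); the model choice, confidence threshold, and input file are illustrative assumptions and do not correspond to the pipeline used for \cref{fig:poses_penalty_kick}:
\begin{verbatim}
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained Keypoint R-CNN: person detection plus 17-keypoint pose estimation.
model = torchvision.models.detection.keypointrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Hypothetical broadcast frame; any RGB image of the scene would do.
frame = to_tensor(Image.open("broadcast_frame.png").convert("RGB"))
with torch.no_grad():
    outputs = model([frame])[0]

# Keep confident person detections and their estimated keypoints.
keep = outputs["scores"] > 0.8
player_boxes = outputs["boxes"][keep]      # (N, 4) bounding boxes
player_poses = outputs["keypoints"][keep]  # (N, 17, 3) keypoints (x, y, visibility)
\end{verbatim}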
\begin{figure}[t]
\centering
\includegraphics[width=0.75\textwidth,page=1]{figs/vision/video_generation.pdf}
\caption{Analysis and generation of video data. From footage video (a), computer vision techniques can extract various representations (b), useful for empirical game-theoretic analysis. In turn, empirical payoff tables (c) can be used in a hierarchical generative process of videos, possibly involving a step in intermediate representation spaces (d). Generated videos (e) could be used for prescriptive analysis, or in the process of capturing the footage videos themselves, closing the cycle between analysis and generation.
}
\label{fig:video_generation}
\end{figure}
In the reverse direction (right portion of \cref{fig:video_generation}), game-theoretic outputs can be used to improve the plausibility and usefulness of generated videos.
Specifically, generative video models need to precisely capture the data distribution to be useful.
The sheer amount of information present in videos is enormous compared to that provided by classification targets.
Successful models deal with this information overload with techniques that impose inductive bias, or auxiliary losses, into the generative process (e.g., ignoring low-level features, taking advantage of visual symmetries, limiting new models to fine-tuning of baseline models, etc.).
The game-theoretic context found within football offers an opportunity for further constraints on video modeling.
The dynamics of play are complex, with games taking place at the level of opposing individuals to entire teams, though with specific constraints that can inform a generative process.
For example, a hierarchical generative process could condition on high-level latent variables drawn from the distribution described by empirical payoff tables, to represent player decisions conditioning a particular sample.
One could consider an additional hierarchical level of generation targeting intermediate features, possibly building on approaches for predicting future pose~\citep{villegas2017learning, chan2019everybody}, trajectory~\citep{kitani2012activity,bhattacharyya2018long, vu2018memory, bhattacharyya2018bayesian}, action,~ \citep{vondrick2016anticipating, abu2018will} and shape information~\citep{luc2017predicting, jin2017predicting, luc2018predicting, xu2018structure, sun2019predicting}.
These intermediate outputs could be used directly for prescriptive analysis (e.g., informing players of physical tactics to attempt in a play of interest, based on generated poses) or serve as further conditioning for generation in the raw RGB space.
An alternative direction would be the formulation of new game-theory inspired metrics, for example imposing that the payoff tables extracted from generated samples match those obtained from real data.
This would be closely related to the Fr\'echet Inception Distance~\citep{heusel2017gans} for generative modeling of images, and the derived Fr\'echet Video Distance~\citep{unterthiner2018towards} for video, which have successfully driven progress in both fields \citep{karras2017progressive, miyato2018spectral, brock2018large, zhang2019self, clark2019adversarial, weissenborn2019scaling, luc2020transformation}.
In turn, such metrics could serve as a basis to design novel losses inspired by game theory.
Besides their potential use for prescriptive analysis, generative models of videos could lead to improvements in broadcast data itself, due to automatic anticipation of the most plausible immediate future outcomes.
Overall, game-theoretic analysis can inform video generation, in turn enabling influence of the process by which future data will be acquired, hence closing the cycle described in \cref{fig:video_generation}.
\subsection{Football for AI Research}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth,page=1]{figs/overview_research.pdf}
\caption{Hierarchy of key research challenges, defined over three layers.
The foundational layer targets representation learning over the various input modalities available in football to generate useful representations for more complex learning tasks targeted in the subsequent layers:
{prescriptive and predictive analysis} and {modeling of human factors}.
}
\label{fig:layers}
\end{figure}
We next consider the dual perspective of the potential unique challenges and opportunities associated with the football domain that make it an interesting testbed for AI research.
We introduce here a hierarchy of key challenges associated with football research, as illustrated in \cref{fig:layers}, as defined over three layers:
the foundational layer concerns representation learning, operating directly on the various input modalities available in football (e.g., time-synchronized videos, event-streams, and tracking data) to generate useful representations for the more complex learning tasks targeted in the subsequent layers of prescriptive and predictive analysis, and modeling of human factors.
We next detail each of these research layers, drawing connections with the three aforementioned frontiers, and concluding by detailing disruptive innovations and associated research directions for the future when the three frontiers converge in the football microcosm.
\subsubsection{Football as an AI Testbed: Microcosm}
The development of performative AI algorithms relies on various recurring objectives: learning and acting based on real-world data streams, interpreting the actions of other agents and acting strategically in response, being able to understand and generate natural language for efficient communication with humans, and so on.
As discussed in earlier sections, football analytics involves core problems associated with many of the above AI challenges, though in a more well-controlled scope.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\textwidth,page=1]{figs/overview_data.pdf}
\caption{The range of data types relevant to football. At a high-level, highly structured and well-correlated data types are available for each match: video streams, audio streams, and associated text streams. On a more aggregate level, historical play data (e.g., player and team statistics) spanning many decades of play are broadly available for many professional football leagues. Moreover, third-party companies (such as Opta~\citep{opta_sports}, Wyscout~\citep{wyscout}, StatsBomb~\citep{StatsBomb}, and InStat\citep{InStat}) provide annotated player event and tracking data. Finally, there exists a broad range of crowd-sourced data for football, including user preferences and pundit predictions, betting odds, and so on.}
\label{fig:football_data}
\end{figure}
The value of the football domain for AI can be observed, at a low level, in the range of useful data available for corresponding analytics (see \cref{fig:football_data}).
Real-world data streams such as vision, audio, and text are the mainstay of AI research, and are abundant in football.
Crucially, the various data types available in football are well correlated, in the sense that they typically involve a large amount of shared context--a characteristic researchers can take advantage of.
For instance:
football video feeds always involves two teams and a ball;
an enormous amount of text is available, yet it is centered on the current game;
the sound of the crowds and commentators can be relied upon to respond to temporally-adjacent events, such as goals and penalties.
There is a large amount of crowd-sourced data available such as current betting odds and pundits' predictions, a novel form of data atypical in other AI application domains.
Football offers the opportunity for AI to evaluate multi-modal models on synthesized vision, audio, and text data in a unified, though simpler domain than the broader real world.
Football analytics also currently relies heavily upon hand-labeled data, such as ball and player tracking and identification information.
This reliance on hand-labeled data imposes a significant barrier for fast-paced analysis, due to the cost and time needed to generate it.
This provides a golden opportunity for AI to accelerate learning algorithms, by automating the labeling and annotation process, and assisting coaches and decision-makers in instead focusing their expertise on the tactical analysis of the game itself.
As such, a worthy long-term challenge for football analytics is to develop such an assistive agent, which uses minimal hand-labeled data:
an Automated Video Assistant Coach (AVAC).
A successful AVAC would help players, coaches, and spectators alike.
Specifically, it could help the players by analyzing their play for weak points to further develop.
A player's performance throughout a game could be analyzed to suggest improvements in position play and assessing the performance overall.
Prior to a game, an AVAC could suggest strategies tuned to the opponents of the day.
Coaches also seek to get the best out of their players, but have a limited amount of time and many players to observe and provide feedback to.
An AVAC offers the chance to enable coaches to help both players and the team as a whole, by providing suggestions for player rosters for a given game, to trading or scouting strategies based on counterfactual evaluation of team performance with brand new players.
Such an AVAC system would have the ability to automatically sift and label huge quantities of video streams, enabling broadcasters and spectators alike to retrieve key moments.
An AVAC could automatically keep a running tally of information the spectator may find interesting based on their reaction and the current state of play. When the spectator is bored, the AVAC may suggest another game that is more exciting.
For those interested in fantasy football, an AVAC might search for players based on a set of qualities.
Overall, the possibilities for such an automated system are open-ended.
To make the research objectives and intermediate benefits of developing an AVAC system concrete, we detail three associated research agendas at increasing levels of abstraction in the subsequent sections: representation learning, predictive modeling and decision-making, and human factors.
\subsubsection{Representation Learning}
The variety of hand-labeled football statistics make for natural fodder for machine learning algorithms.
These algorithms range from classification and regression tools (e.g., in expected possession value models~\citep{fernandez2019decomposing}), generative models (e.g., in trajectory generation models~\citep{Le17,LeY0L17,yeh2019diverse,li2020generative}), and variational auto-encoding models (player embeddings).
The success of machine learning algorithms generally depends on data representation, as different representations can entangle and obfuscate various explanatory factors of variation behind low-level sensory data.
In football analytics, although expert knowledge is widely used to help design existing representations, learning with generic priors bears promise for avoiding such hand-encoded knowledge.
Under this view, we identify three unique challenges related to representation learning, detailed next.
The first challenge concerns learning representations with multi-modal football data.
Particularly, in football analytics, it remains a fundamental challenge to effectively recognize {long-duration} playing styles of individual players and teams given the variety of data types available (as detailed earlier).
While expert knowledge goes a long way towards analyzing these multi-modal data sources, it remains insufficient to process them efficiently.
The increasing multitude of input modalities available to football analysts are likely to challenge existing methods of representation learning, thus driving researchers to develop cohesive models that take these many modalities into account simultaneously.
The second challenge concerns learning contextual representations of individual players.
Due to the dynamics and uncertainty of football outcomes, long-term static representations for predictive modeling of in-game events are likely to be beneficial when used in conjunction with representations of individual players.
For example, a player passing the ball may take into account the context of the game to estimate the most appropriate receiver that maximizes the probability of scoring.
Another concrete example is using contextual representations to identify the dynamic roles of players, which may change given the game context and must be inferred and controlled to tactically counter the opposing team.
Finally, in addition to learning representations of individual players, identifying an effective means of contextualizing or ranking entire teams is another unresolved challenge.
Teams are usually ranked with historical match results and the collective performance of individual players, which can be coarse (i.e., may not reveal the long-term playing styles of teams) and may fail to reflect in-game dynamics.
Overall, to tackle these challenges, we aim to achieve two goals: i) learning representations that are able to characterize the long-term playing styles of football teams, ii) learning contextual representations of football teams that are able to depict in-game dynamics.
\subsubsection{Predictive Modeling and Decision-Making}
Learning useful representations (i.e., as opposed to hand-coded features) serves as an important means of advancing subsequent predictive and prescriptive analysis of football matches.
Specifically, dense embeddings that summarize not only the state of a particular game, but also historical trends evident throughout many games (e.g., across seasons) will serve as enablers of the more accurate, impactful, and longer-horizon predictions of match outcomes.
The interaction between predictive-prescriptive models is envisioned to be tightly-linked with game-theoretic analysis, thus coupling this direction of research most closely with Frontier~3~(GT\&CV)\xspace and Frontier~1~(GT\&SL)\xspace (see \cref{fig:layers}).
The combination of these fields with game theory is likely to usher in new opportunities for coaches and decision-makers.
For example, predictive models of football players at the trajectory-level~\citep{Le17,LeY0L17,sun2019stochastic} currently treat the game as a black-box dynamical process (i.e., a system of dynamic entities making decisions solely based on the joint on-pitch state of the teams);
such models do not yet account for the game-theoretically driven counterfactual responses of players to one another (e.g., taking into account the current game score, impact of current game decisions on upcoming matches, etc.).
Conducting such an analysis of these models involves identification of high-level strategies typically used by empirical game-theoretic techniques (so-called meta-strategies).~\citep{wellman2006methods,TuylsPLHELSG20}
These meta-strategies, for example, could be clusters of on-pitch behaviors correlated with play style, counterattack types, defense schemes (such as zonal vs. man-to-man defense), and so on.
While such meta-strategies are typically manually defined, automatically learning them poses an interesting challenge.
Appropriate clustering and identification of such meta-strategies involves not only access to a large, representative dataset of plays, but also the aforementioned learned representations that summarize the most relevant context for game theory models.
Synthesis of empirical games over the possible meta-strategies of two opposing teams can be used to forecast the performance of various team tactics when pitted against one another (e.g., investigating the Nash equilibrium of football at a higher, tactical level, rather than the typically considered low-level scenarios such as penalty kicks).
Moreover, while characterization and ranking of players has received considerable attention in the literature~\citep{DecroosD19,decroos2019actions,bransen2020chemistry}, automated ranking of {tactics} has received considerably less attention~\citep{decroos2018automatic,meerhoff2019exploring}.
Application of game-theoretic analysis techniques here remains unexplored to the best of our knowledge.
Analysis of empirical games using meta-strategies conditioned on player identities would be beneficial for counterfactually evaluating player performance in new teams (i.e., for scouting).
For training staff, a model that enables accurate evaluation of players' contributions to the team's overall strategy would be valuable, for example, for pinpointing which players to coach or to substitute.
For broadcasters, automatic identification of salient, exciting meta-strategies (e.g., those that are typically low in probability yet high in payoff, or games where there is a large difference in terms of the play styles or meta-strategies of the teams) can be used for automatic generation of highlight reels.
Learning the appropriate meta-strategies and associated predictive models is challenging in football due to the number of players involved on-pitch (and the exponential size of the strategy space with respect to this quantity).
Despite this, the development of richer models leveraging more complex input modalities (e.g., video-based representations) is likely to unlock commensurate benefits (in terms of improved predictions and strategic decision-making) for football analysts.
\subsubsection{Human Factors}
The human-centric nature of football analytics stems from several factors:
coaching and improvement of individual play and coordinated team play through predictive and prescriptive modeling, injury and fatigue prediction, and psychological and mental analysis of players.
This focus distinguishes it sharply from, for example, challenges such as RoboCup.\citep{RoboCup}
In contrast to the robot-centric focus of RoboCup (which revolves around developing robotic footballing agents~\citep{Visser,AB05,LNAI09-kalyanakrishnan-1,AAMAS11-urieli,ICLR16-hausknecht,AAAI17-Hanna}), the focus in football analytics is entirely on understanding and improving human gameplay and team coordination based on an integrated approach from the three research areas involved.
Another key difference concerns evaluation of the impact of said analysis on human play, which is distinct from evaluation of robotic agents in the RoboCup project.
Namely, human play is significantly more difficult to realistically simulate and systematically evaluate (in contrast to evaluation on robotics platforms).
Moreover, the football analytics frontiers targeted here entail taking into account human factors such as injury and fatigue; inter-player relationships and their effects on play efficiency and cooperation; psychological challenges such as pressure or mental state, notably for recent transfers, and their impact on play performance; and overall player discipline and the tendency to follow the best plan for the team instead of the best plan for the individual.
Injury prediction is another topic of interest.
Injury prediction is the task of predicting the probability that a player will sustain an injury given data on the past and current seasons.
Previous studies have investigated the acute-chronic workload ratio (ACWR) as a predictor for sports-related muscular injuries.~\citep{gabbett2010development}
Acute workload measures an athlete's workload over 1 week, while chronic workload is the average workload over the past 4 weeks.
\citet{Rossi} use richer workload measures extracted from Electronic Performance and Tracking Systems (EPTS) to train a decision tree to predict the probability of future injuries.
Their approach uses manually designed features that aim to temporally aggregate players' workload histories.
Recent work uses Convolutional Neural Networks (CNNs) applied to EPTS time-series data directly, thereby alleviating the need for hand-designed time-aggregated features~\citep{gabbett2010development}. Current injury prediction methods are limited to a binary signal and do not explicitly capture uncertainty of the prediction, the type of injury, the severity, nor projected time to recover. Progress in this direction is likely limited by the availability of data; preventive measures are taken by sports clubs and as a result injuries occur relatively infrequently, although accurate prediction of such injuries (or determination of whether current performance sensors are sufficient for doing so) is a promising avenue for application of AI techniques.
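To make the workload notions above concrete, the following minimal Python sketch computes an acute-chronic workload ratio from weekly load totals and fits a toy classifier on synthetic ACWR features; the data, feature choice, and model settings are illustrative assumptions and do not reflect the cited systems.
\begin{verbatim}
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def acwr(weekly_loads):
    """Acute-chronic workload ratio: last week's load divided by
    the average load over the most recent 4 weeks."""
    acute = weekly_loads[-1]
    chronic = np.mean(weekly_loads[-4:])
    return acute / chronic

# Illustrative example: weekly training loads (arbitrary units).
loads = [310, 290, 305, 420]
print(round(acwr(loads), 2))  # ratio > 1 indicates a recent spike in load

# Hypothetical training setup: one ACWR feature per player-week,
# with a binary label indicating whether an injury followed.
rng = np.random.default_rng(0)
X = rng.uniform(0.6, 1.8, size=(200, 1))          # synthetic ACWR values
y = X[:, 0] + rng.normal(0, 0.2, 200) > 1.5       # synthetic injury labels
clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(clf.predict([[1.7]]))  # predicted injury label for a high ACWR
\end{verbatim}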
\section{Case Studies: Frontier~1~(GT\&SL)\xspace}\label{sec:results}
\subsection{Game Theory and Statistical Learning for Penalty Kick Analysis}\label{sec:egta_results}
In this section, we highlight some of the benefits that the combination of frontiers can yield for football analysts.
Specifically, we conduct an in-depth analysis of real-world football data under the lens of Frontier~1~(GT\&SL)\xspace, providing new insights into penalty kick scenarios by combining statistical learning with game theory. For our analysis we use a data set of $12,399$ penalty kicks based on Opta data~\citep{opta_sports}. In \cref{fig:shot_distribution} we show a heatmap of the shot distribution of the penalty kicks in our data set. Table [X] in the supplemental material shows the distribution of the penalty kicks over the various leagues we consider.
\subsubsection{Game-theoretic Analysis of Penalty Kicks}
\begin{table}[t]
\centering
\caption{Natural (N) / Non-Natural (NN) payoff tables for Shots (S) and Goalkeepers (G).
\subref{tab:Palacios} \citet{Palacios2003} payoff table.
\subref{tab:OurversionofPalacios} reproduced table.
\subref{tab:PalaciosNash} \citet{Palacios2003}-reported Nash probabilities.
\subref{tab:OurPalaciosNash} Reproduced table Nash probabilities.
}
\begin{subtable}[b]{0.48\textwidth}
\centering
\caption{}
\label{tab:Palacios}
\begin{tabular}{rRR}
\toprule
{} & N-G & NN-G \EndTableHeader \\
\midrule
N-S & 0.670 & 0.929 \\
NN-S & 0.950 & 0.583 \\
\bottomrule
\end{tabular}
\end{subtable}
\hfill
\begin{subtable}[b]{0.48\textwidth}
\centering
\caption{}
\label{tab:OurversionofPalacios}
\begin{tabular}{rRR}
\toprule
{} & N-G & NN-G \EndTableHeader \\
\midrule
N-S & 0.704 & 0.907 \\
NN-S & 0.894 & 0.640 \\
\bottomrule
\end{tabular}
\end{subtable}\\
\par\bigskip
\begin{subtable}[b]{0.48\textwidth}
\centering
\caption{}
\label{tab:PalaciosNash}
\begin{tabular}{rMM|MM}
\toprule
{} & NN-S & N-S & NN-G & N-G \EndTableHeader \\
\midrule
Nash & 0.393 & 0.607 & 0.432 & 0.568 \\
Empirical & 0.423 & 0.577 & 0.400 & 0.600 \\
\bottomrule
\end{tabular}
\setlength{\fboxrule}{0pt}
\fbox{Jensen–Shannon divergence: 0.049\%}
\end{subtable}
\hfill
\begin{subtable}[b]{0.48\textwidth}
\centering
\caption{}
\label{tab:OurPalaciosNash}
\begin{tabular}{rMM|MM}
\toprule
{} & NN-S & N-S & NN-G & N-G \EndTableHeader \\
\midrule
Nash & 0.431 & 0.569 & 0.408 & 0.592 \\
Empirical & 0.475 & 0.525 & 0.385 & 0.615 \\
\bottomrule
\end{tabular}
\setlength{\fboxrule}{0pt}
\fbox{Jensen–Shannon divergence: 0.087\%}
\end{subtable}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=0.5 \textwidth]{figs/egta/heatmap_all.png}
\caption{Heatmap of the shot distribution over all penalty kicks in our data set.}
\label{fig:shot_distribution}
\end{figure}
\begin{table}[t]
\centering
\caption{Footedness equivalence p-value tables.}
\begin{subtable}[b]{0.5\textwidth}
\centering
\caption{Natural / Non-natural game p-value table}
\begin{tabular}{rMM}
\toprule
{} & N-G & NN-G \EndTableHeader \\
\midrule
N-S & 0.924566 & 0.170504 \\
NN-S & 0.394900 & 0.407741 \\
\bottomrule
\end{tabular}
\label{tab:nat_p_value}
\end{subtable}%
\begin{subtable}[b]{0.5\textwidth}
\centering
\caption{Left / Center / Right game p-value table}
\begin{tabular}{rMMM}
\toprule
{} & R-G & C-G & L-G \EndTableHeader \\
\midrule
R-S & 0.000011 & 0.947369 & 6.931197e-01 \\
C-S & 0.592054 & 0.868407 & 1.305657e-01 \\
L-S & 0.017564 & 0.764020 & 7.791136e-07 \\
\bottomrule
\end{tabular}
\label{tab:lcr_p_value}
\end{subtable}%
\par \bigskip
\end{table}
\begin{figure}[t]
\centering
\caption{P-value table as a function of minimal experience}
\includegraphics[width=\textwidth]{figs/egta/p_value_by_experience.png}
\label{fig:p_value_by_exp}
\end{figure}
\begin{figure}[t]
\centering
\caption{P-value table as a function of player-experience}
\includegraphics[width=\textwidth]{figs/egta/experience_p_values.png}
\label{fig:p_value_by_exp_bar}
\end{figure}
\begin{table}[t]
\centering
\caption{Left (L) - Center (C) - Right (R) tables for kickers (S) and Goalkeepers (G), with the three directions of kick/movement defined from the goalkeeper's perspective.}
\begin{subtable}[b]{0.5\textwidth}
\centering
\caption{Payoff table.}\label{tab:lcr_table_noexp}
\begin{tabular}{rRRR}
\toprule
{} & R-G & C-G & L-G \EndTableHeader \\
\midrule
R-S & 0.684 & 0.939 & 0.969 \\
C-S & 0.964 & 0.160 & 0.953 \\
L-S & 0.964 & 0.960 & 0.633 \\
\bottomrule
\end{tabular}
\end{subtable}%
\par\bigskip
\begin{subtable}[b]{1.0\textwidth}
\centering
\caption{Nash probabilities vs. Empirical frequencies corresponding to \subref{tab:lcr_table_noexp}.} \label{tab:lcr_table_noexp_nash}
\begin{tabular}{rMMM|MMM}
\toprule
{} & R-S & C-S & L-S & R-G & C-G & L-G \EndTableHeader \\
\midrule
Nash & 0.478 & 0.116 & 0.406 & 0.441 & 0.178 & 0.381 \\
Empirical & 0.454 & 0.061 & 0.485 & 0.475 & 0.089 & 0.436 \\
\bottomrule
\end{tabular}
\par
\setlength{\fboxrule}{0pt}
\fbox{Jensen–Shannon divergence: 0.75\%}
\end{subtable}
\end{table}
\begin{table}[t]
\centering
\caption{Natural / Non-natural game payoff tables restricted by footedness. \subref{tab:lcr_left_footed} payoff table for left-footed kickers, \subref{tab:lcr_right_footed} payoff table for right-footed kickers.}
\begin{subtable}[b]{0.5\textwidth}
\centering
\caption{Left-footed players payoff table}
\begin{tabular}{rRR}
\toprule
{} & N-G & NN-G \EndTableHeader \\
\midrule
N-S & 0.721 & 0.939 \\
NN-S & 0.903 & 0.591 \\
\bottomrule
\end{tabular}
\label{tab:lcr_left_footed}
\end{subtable}%
\begin{subtable}[b]{0.5\textwidth}
\centering
\caption{Right-footed players payoff table}
\begin{tabular}{rRR}
\toprule
{} & N-G & NN-G \EndTableHeader \\
\midrule
N-S & 0.700 & 0.898 \\
NN-S & 0.892 & 0.653 \\
\bottomrule
\end{tabular}
\label{tab:lcr_right_footed}
\end{subtable}%
\end{table}
\begin{comment}
\par \bigskip
\begin{subtable}[b]{0.5\textwidth}
\centering
\caption{Left-footed, low-experience players payoff table}
\begin{tabular}{rRR}
\toprule
{} & N-G & NN-G \EndTableHeader \\
\midrule
N-S & 0.741 & 0.984 \\
NN-S & 0.967 & 0.574 \\
\bottomrule
\end{tabular}
\label{tab:lcr_left_footed}
\end{subtable}%
\begin{subtable}[b]{0.5\textwidth}
\centering
\caption{Right-footed, low-experience players payoff table}
\begin{tabular}{rRR}
\toprule
{} & N-G & NN-G \EndTableHeader \\
\midrule
N-S & 0.712 & 0.965 \\
NN-S & 0.960 & 0.606 \\
\bottomrule
\end{tabular}
\label{tab:lcr_right_footed}
\end{subtable}%
\end{comment}
\citeauthor{Palacios2003} examines penalty kick scenarios from a game-theoretic perspective, using empirical payoff tables to determine whether the associated kickers and goalkeepers play a Nash equilibrium \citep{Palacios2003}.
Here we revisit \citeauthor{Palacios2003}'s work, by first reproducing several of its key results with a substantially larger and more recent data set from the main professional football leagues in Europe, Americas, and Asia (for comparison, the data set used in the work of \citeauthor{Palacios2003} consists of 1417 penalty kicks from the 1995-2000 period whereas ours contains 12,399 kicks from the 2011-2017 period).
While several results of this earlier work are corroborated, we also find surprising new additional insights under our larger dataset.
We then go further to extend this analysis by considering larger empirical games (involving more action choices for both kick-takers and goalkeepers).
Finally, we develop a technique for illustrating substantive differences in various kickers' penalty styles, by combining empirical game-theoretic analysis with Player Vectors~\citep{DecroosD19} illustrating the added value and novel insights research at Frontier~1~(GT\&SL)\xspace of the microcosm can bring to football analytics.
As in \citeauthor{Palacios2003}'s work, we first synthesize a 2-player 2-action game based on our penalty kick data set.
\Cref{tab:Palacios} illustrates the $2\times2$ normal form as presented by \citet{Palacios2003}.
The actions for the two players, the kicker and goalkeeper, are respectively visualized in the rows and columns of the corresponding payoff tables, and are detailed below (for an introduction to normal form games and empirical games see the Methods section).
The respective payoffs in each cell of the payoff table indicate the win-rate or probability of success for the kicker (i.e., a score);
for ease of comparison between various payoff tables, cells are color-graded proportionally to their associated values (the higher the scoring probability, the darker shade of green used).
Such a game is referred to as an empirical game as it is created from real-world data.\citep{wellman2006methods,TuylsPLHELSG20}
\begin{figure}[htb]
\centering
\includegraphics[width=0.65\textwidth]{figs/Football_analytics_sketch.pdf}
\caption{Illustration of natural vs non-natural sides.}
\label{fig:natsides}
\end{figure}
The choice of player actions considered has significant bearing on the conclusions drawn via empirical game-theoretic analysis.
The actions used by \citet{Palacios2003} in \cref{tab:Palacios} correspond to taking a shot to the natural (N) or non-natural (NN) side for the kicker, and analogously diving to the natural side or non-natural side for the goalkeeper.
\Cref{fig:natsides} provides a visual definition of natural versus non-natural sides.
Specifically, as players tend to kick with the inside of their feet, it is easier, for example, for a left-footed player to kick towards the right (from their perspective);
this is, thus, referred to as their natural side.
Analogously, the natural side for a right-footed kicker is to kick towards their left. The natural side for a goalkeeper depends on the kicker in front of him. Specifically, when the goalkeeper faces a right-footed kicker his natural side is to his right, and when he faces a left-footed kicker his natural side is to his left.
Importantly, shots to the center count as shots to the natural side of the kicker, because, as explained in \citeauthor{Palacios2003}'s work, kicking to the center is considered equally natural as kicking to their natural side by professional football players~\citep{Palacios2003}.
\Cref{tab:OurversionofPalacios} shows our reproduction of \cref{tab:Palacios} of \citet{Palacios2003}, computed using the $12,399$ penalty kicks in our Opta-based dataset, which span the aforementioned leagues and which we restrict to players (goalkeepers or kickers) appearing at least 20 times in the dataset, so as to retain the same type of player as in \citet{Palacios2003}.
The trends in these two tables are in agreement:
when the goalkeeper and the kicker do not choose the same sides of the goal, shot success rate is high;
otherwise, when the keeper goes to the same side as the kicker, success rate is higher for natural shots than for non-natural shots.
We also include Nash and empirical probabilities for Palacios' dataset and ours, in \cref{tab:PalaciosNash} and \cref{tab:OurPalaciosNash} respectively, allowing us to conclude that payoffs, Nash probabilities, and empirical probabilities are all in agreement between Palacios' results and our reproduction (the Jensen-Shannon divergence between Palacios' results and ours is 0.84\% for the Nash distributions and 1.2\% for the empirical frequencies). We also notice that players' empirical action selection frequencies are very close to the Nash-recommended frequencies, as measured by their Jensen-Shannon divergence, and that they are in fact playing an $\epsilon$-Nash equilibrium with very low $\epsilon$ ($= 0.4\%$).
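As an illustration of the computations underlying these comparisons, the following sketch recovers the mixed Nash equilibrium of a $2 \times 2$ constant-sum penalty kick game from its payoff table via the standard indifference conditions, and measures the gap to an empirical frequency profile with the Jensen-Shannon divergence. The inputs are the rounded scoring probabilities and empirical frequencies reported above, so the outputs only approximately match the values in the tables; the base-2 logarithm used for the divergence is an assumption of this sketch.
\begin{verbatim}
import numpy as np

# Kicker scoring probabilities (rows: N-S, NN-S; columns: N-G, NN-G),
# rounded values from the reproduced payoff table.
A = np.array([[0.704, 0.907],
              [0.894, 0.640]])

def nash_2x2_constant_sum(A):
    """Mixed equilibrium of a 2x2 constant-sum game via indifference."""
    denom = A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1]
    p = (A[1, 1] - A[1, 0]) / denom   # kicker's probability of row 0
    q = (A[1, 1] - A[0, 1]) / denom   # goalkeeper's probability of column 0
    return np.array([p, 1 - p]), np.array([q, 1 - q])

def js_divergence(p, q):
    """Jensen-Shannon divergence (base-2 logarithm assumed here)."""
    p, q = np.asarray(p), np.asarray(q)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

kicker_nash, keeper_nash = nash_2x2_constant_sum(A)
kicker_empirical = np.array([0.525, 0.475])  # empirical N-S, NN-S frequencies
print(kicker_nash, keeper_nash)
print(js_divergence(kicker_nash, kicker_empirical))
\end{verbatim}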
Having examined the similarity of payoff tables and distributions, we verify whether the Natural / Non-Natural game is statistically identical for Left-footed and Right-footed players, as assumed in \citet{Palacios2003}.
To do so, we use a t-test to test whether per-cell scoring rates are identical across footedness types.
The t-tests' p-values are reported in \cref{tab:nat_p_value}, and reveal that the games cannot be shown to be dissimilar across footedness; they can therefore be assumed to be identical for left-footed and right-footed players. \cref{fig:p_value_by_exp} illustrates the relationship between the p-values of our t-test and minimal player appearance counts: when we modulate the minimal appearance count of the players included in the test, the Natural Shot / Natural Goalkeeper cell goes from strongly dissimilar across footedness (low p-value) when including all players, to strongly similar (high p-value) when only including the players appearing most often in our dataset. This could be explained by low-appearance kickers (appearance count being taken here as a proxy for experience) being less able to control their kicks, resulting in different control effectiveness across footedness preferences, and by goalkeepers being less proficient in stopping shots going to their less frequently kicked side (left) than to the other, a difference that has been trained away in professional goalkeepers. To remove potential side-effects of merging data from low- and high-experience players, \cref{fig:p_value_by_exp_bar} shows the relationship between the p-values of our t-test and experience categories with some overlap (between 1 and 7 shots, 5 and 12, etc.); the conclusion from this figure is the same as that of \cref{fig:p_value_by_exp}, supporting the point that experience removes the difference between left- and right-footed penalty kicks.
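For concreteness, the following sketch shows how one such per-cell comparison could be carried out: the binary shot outcomes of left-footed and right-footed kickers for a single payoff table cell are compared with a two-sample t-test. The outcome data here are synthetic placeholders, and the exact test configuration used in our analysis may differ.
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic binary outcomes (1 = goal) for one payoff table cell,
# e.g., Natural Shot vs. Natural Goalkeeper, split by footedness.
left_footed_outcomes = rng.binomial(1, 0.70, size=120)
right_footed_outcomes = rng.binomial(1, 0.71, size=480)

# Two-sample t-test on scoring rates; a large p-value means the cell
# cannot be shown to differ across footedness.
t_stat, p_value = stats.ttest_ind(left_footed_outcomes,
                                  right_footed_outcomes,
                                  equal_var=False)
print(round(float(p_value), 3))
\end{verbatim}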
We also analyzed the game defined by kicking to the Left, the Center or the Right, and confirmed Palacios' intuition that it was fundamentally different across footedness preferences. Specifically, \cref{tab:lcr_table_noexp} synthesizes the empirical game corresponding to this new choice of actions, with aggregated scoring rates over both feet preferences.
Note that in this case, left, center, and right are measured from the goalkeeper's perspective, such that the natural kick of a right-footed player would be considered a right kick.
The per-cell t-tests p-values for this game are reported in \cref{tab:lcr_p_value}. Interestingly, the game is different when the goalkeeper jumps to the same side as the ball, but is otherwise mostly similar across footedness preference.
The empirical play frequencies for kickers, as reported in \cref{tab:lcr_table_noexp_nash}, are also further away from Nash frequencies than observed in the natural / non-natural game (\cref{tab:OurPalaciosNash}), as can be seen from the Jensen-Shannon divergence between empirical frequencies and Nash (12\% against the 0.9\% of the Natural / Non-Natural game).
These insights confirm the intuition that such a game is neither identical across footedness, nor the game that players actually play.
Overall, these results provide insights into the impacts that the choice of actions have on conclusions drawn from empirical payoff tables.
However, behavior and shooting styles also vary wildly per-player given footedness.
If one is willing to consider several payoff tables (e.g., one per footedness), it seems natural to also take into account kicker's playing styles, as considered in the next section.
\subsubsection{Augmenting Game-theoretic Analysis of Penalty Kicks with Embeddings}
\begin{table}[t]
\centering
\caption{Cluster statistics.
}
\begin{tabular}{rrrrrr}
\toprule
{} & \# Players & \# Goals & \# Shots & Success rate (\%) & Proportion of goals with left foot (\%) \EndTableHeader \\
\midrule
Cluster~1 & 284 & 605 & 756 & 80.0 & 13.5 \\
Cluster~2 & 107 & 75 & 91 & 82.4 & 41.3\\
Cluster~3 & 47 & 4 & 4 & 100.0 & 25.0\\
Cluster~4 & 70 & 44 & 52 & 84.6 & 90.9 \\
Cluster~5 & 108 & 50 & 59 & 84.7 & 12.0\\
\midrule
Total & 616& 778 & 962 & 81.0 & 25.2 \\
\bottomrule
\end{tabular}
\label{tab:cluster_stats}
\end{table}
\begin{table}[t]
\centering
\caption{Pair-wise comparison for the identified clusters. $<$ indicates that data was missing and that the minimum true p-value may be lower than the minimum non-NaN p-value. * indicates that we cannot conclude whether clusters are different at the 10\% significance level.}
\begin{tabular}{r | rrrrr}
\toprule
{} & 1 vs 2 & 1 vs 4& 1 vs 5 \EndTableHeader \\
\midrule
Minimum cell-wise $p$-value of t-Tests for payoff tables & 1.37e-2 & 7.39e-3 & 1.38e-2 \\
Jensen-Shannon divergence between Nash distributions (\%) & 1.28 & 0.10 & 0.41 \\
Jensen-Shannon divergence between empirical distributions (\%) & 0.01 & 0.15 & 0.09 \\
$p$-value of t-Tests for left footedness & 9.63e-6 & 1.07e-22 & 7.50e-1 \\
\midrule
{} & 2 vs 4 & 2 vs 5 & 4 vs 5 \EndTableHeader \\
\midrule
Minimum cell-wise $p$-value of t-Tests for payoff tables & $<$ 1.6e-1* & $<$ 3.8e-1* & $<$ 3.2e-1*\\
Jensen-Shannon divergence between Nash distributions (\%) & 0.66 & 3.10 & 0.91 \\
Jensen-Shannon divergence between empirical distributions (\%) & 0.10 & 0.14 & 0.48 \\
$p$-value of t-Tests for left footedness & 3.21e-10 & 1.17e-4 & 2.98e-21 \\
\bottomrule
\end{tabular}
\label{tab:cluster_stat_test}
\end{table}
\begin{table}[t]
\centering
\caption{t-Test p-value table for test that empirical action distributions are equal among different clusters}
\begin{tabular}{r|rr|r}
Clusters & Kicker p-value & Goalkeeper p-value & min p-value \\
\toprule
1 vs 2 & 0.52 & 0.05 & 0.05 \\
1 vs 4 & 0.85 & 0.95 & 0.85 \\
1 vs 5 & 0.42 & 0.27 & 0.27 \\
2 vs 4 & 0.52 & 0.14 & 0.14 \\
2 vs 5 & 0.51 & 0.16 & 0.16 \\
4 vs 5 & 0.4 & 0.26 & 0.26 \\
\end{tabular}
\label{tab:empirical_p_value_table}
\end{table}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\caption{}
\includegraphics[width=\textwidth]{figs/egta/striker_goalie_clusters_3d.png}
\label{fig:striker_goalie_clusters}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\caption{}
\includegraphics[width=\textwidth]{figs/egta/striker_clusters.png}
\label{fig:striker_clusters}
\end{subfigure}\\
\caption{Visualization of the identified player clusters. \subref{fig:striker_goalie_clusters} visualizes the goalkeeper cluster and the kicker clusters. To show the separation of the kicker clusters clearly, we visualize them after removing the goalkeeper clusters in \subref{fig:striker_clusters}, and we also label each cluster with a Liverpool player in it.
}
\end{figure}
\begin{table}[t]
\centering
\caption{Nash probabilities and empirical frequencies tables for Shot (S) and Goalkeepers (G) with Natural (N) and Non-Natural (NN) actions.}
\label{tab:egta_pv_nash_emp}
\begin{subtable}[b]{0.5\textwidth}
\centering
\caption{All players.}
\begin{tabular}{rMM|MM}
\toprule
{} & N-S & NN-S & N-G & NN-G \EndTableHeader \\
\midrule
Nash & 0.437 & 0.563 & 0.428 & 0.572 \\
Empirical & 0.484 & 0.516 & 0.404 & 0.596 \\
\bottomrule
\end{tabular}
\label{tab:egta_pv_all}
\\
\setlength{\fboxrule}{0pt}
\textbox{962 shots}
\end{subtable}%
\begin{subtable}[b]{0.5\textwidth}
\centering
\caption{Kickers in Cluster~1.}
\begin{tabular}{rMM|MM}
\toprule
{} & N-S & NN-S & N-G & NN-G \EndTableHeader \\
\midrule
Nash & 0.424 & 0.576 & 0.442 & 0.556 \\
Empirical & 0.484 & 0.516 & 0.410 & 0.590 \\
\bottomrule
\end{tabular}
\label{tab:egta_pv_c1}
\\
\setlength{\fboxrule}{0pt}
\textbox{756 shots} \end{subtable}\\
\par\bigskip
\begin{subtable}[b]{0.5\textwidth}
\centering
\caption{Kickers in Cluster~2.}
\begin{tabular}{rMM|MM}
\toprule
{} & N-S & NN-S & N-G & NN-G \EndTableHeader \\
\midrule
Nash & 0.583 & 0.417 & 0.358 & 0.642 \\
Empirical & 0.494 & 0.506 & 0.407 & 0.593 \\
\bottomrule
\end{tabular}
\label{tab:egta_pv_c2}
\\
\setlength{\fboxrule}{0pt}
\textbox{91 shots}
\end{subtable}%
\begin{subtable}[b]{0.5\textwidth}
\centering
\caption{Kickers in Cluster~4.}
\begin{tabular}{rMM|MM}
\toprule
{} & N-S & NN-S & N-G & NN-G \EndTableHeader \\
\midrule
Nash & 0.468 & 0.532 & 0.469 & 0.531 \\
Empirical & 0.538 & 0.462 & 0.423 & 0.577 \\
\bottomrule
\end{tabular}
\label{tab:egta_pv_c4}
\\
\setlength{\fboxrule}{0pt}
\textbox{52 shots}
\end{subtable}
\begin{subtable}[b]{0.5\textwidth}
\centering
\caption{Kickers in Cluster~5.}
\begin{tabular}{rMM|MM}
\toprule
{} & N-S & NN-S & N-G & NN-G \EndTableHeader \\
\midrule
Nash & 0.336 & 0.664 & 0.263 & 0.737 \\
Empirical & 0.441 & 0.559 & 0.288 & 0.712 \\
\bottomrule
\end{tabular}
\label{tab:egta_pv_c5}
\\
\setlength{\fboxrule}{0pt}
\textbox{50 shots}
\end{subtable}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{figs/egta/pv_goals_heatmaps.png}
\caption{Heatmaps of shots by all kickers and kickers in individual clusters with respect to empirical probabilities. We exclude the goalkeeper cluster (Cluster~3) because of insufficient samples.
}
\label{fig:egta_pv_cluster_heatmap}
\end{figure}
While the previous section undertook a descriptive view of the penalty kick scenario (i.e., providing a high-level understanding of kicker and goalkeeper play probabilities), here we investigate whether we can find the best strategy for a player given the knowledge of the kicker's Player Vector.
In game-theoretic terms, we conduct a prescriptive analysis of penalty kicks to enable informed decision-making for players and coaching staff in specific penalty kick situations.
Ideally, one would iterate the earlier empirical payoff analysis for every possible combination of goalkeeper and kicker in a given league, thus enabling decision-making at a granular level; however, the inherent sparsity of penalty kick data makes such an approach infeasible.
Instead, we introduce a meaningful compromise here by combining statistical learning with game theory (i.e., Frontier~1~(GT\&SL)\xspace), first quantifying individual playing styles, then using clustering techniques to aggregate players based on said styles, and finally synthesizing empirical games for each identified cluster.
We focus our analysis on penalties including all players who participated in Premier League matches with Liverpool Football Club, highlighting individuals with distinct play and associated penalty-kick styles.
On a technical level, the algorithm consists of three steps, described as follows.
First, we use Player Vectors~\citep{DecroosD19} to summarize the playing styles of kickers using an 18-dimensional real-valued vector. In particular, these Player Vectors are extracted from historical playing trajectories in real matches.
Each dimension of the Player Vector corresponds to individual on-pitch player behaviors (e.g., styles of passes, take-ons, shots, etc.), and the value of each dimension is standardized and quantifies the weight of that particular action style for the considered player.
In total, we obtain 616 such vectors for the individual players in our dataset.
Secondly, we cluster players in accordance with their Player Vectors using K-means, with the number of clusters chosen as the value at which the inertia exhibits its most significant drop, a standard heuristic for K-means clustering. This process yields 5 clusters in total, with statistics summarized in \cref{tab:cluster_stats}.
In particular, we observe that there are very few shot samples in Cluster-3, and that it is a cluster of goalkeepers, who are automatically grouped together with K-means clustering.
This serves as a sanity check for the quality of the Player Vectors and the effectiveness of K-means clustering; for accuracy purposes, we exclude Cluster~3 from the game-theoretic analysis.
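A minimal sketch of this clustering step is shown below, assuming the 616 Player Vectors are available as the rows of a matrix; the candidate range of cluster counts, the file name, and other details are illustrative rather than a record of our exact pipeline.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

# player_vectors: array of shape (n_players, 18), one Player Vector per row.
player_vectors = np.load("player_vectors.npy")   # hypothetical file name

candidate_ks = list(range(2, 11))
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0)
            .fit(player_vectors).inertia_ for k in candidate_ks]

# Pick the cluster count after which inertia drops most sharply.
drops = np.diff(inertias)                        # most negative = largest drop
best_k = candidate_ks[int(np.argmin(drops)) + 1]

km = KMeans(n_clusters=best_k, n_init=10, random_state=0)
labels = km.fit_predict(player_vectors)          # cluster index per player
\end{verbatim}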
We observe that cluster pairs (1, 2), (1, 4), and (1, 5), are significantly different, with the minimum cell-wise p-values for these cluster pairs smaller than 0.05 in~\cref{tab:cluster_stat_test}.
We therefore focus our game-theoretic analysis on those cluster pairs.
Moreover, we also qualitatively illustrate differences between the clusters, which we visualize in \cref{fig:striker_goalie_clusters} and \cref{fig:striker_clusters}. These figures are obtained via dimensionality reduction, going from 18 to respectively 3 and 2 dimensions via Principal Component Analysis. We observe that the goalkeeper cluster is well separated from the kicker clusters in \cref{fig:striker_goalie_clusters}, and in order to better visualize the kicker clusters, we project \cref{fig:striker_goalie_clusters} onto its y and z axis after removing the goalkeeper cluster in \cref{fig:striker_clusters}.
We also identify therein the most representative (former) kicker of Liverpool Football Club per-cluster (i.e., the player whose feature vector is closest to the mean of the corresponding cluster).
Finally, we conduct the aforementioned game-theoretic analysis for each cluster.
In \cref{tab:cluster_stats}, we observe that the kickers in different clusters have similar success rates in penalty kicks (i.e., these players are almost equally competent).
However, a closer behavioral analysis yields refined insights.
We first examine the Nash strategies played by each cluster, and then visualize the actual play behavior with respect to empirical probabilities in \cref{fig:egta_pv_cluster_heatmap}.
\Cref{tab:egta_pv_all} summarizes the overall Nash distributions for all players considered, with \cref{tab:egta_pv_c1,tab:egta_pv_c2,tab:egta_pv_c4,tab:egta_pv_c5} showing cluster-specific distributions.
These tables illustrate that the kickers in the different clusters exhibit statistically indistinguishable empirical behavior, an assertion confirmed in \cref{tab:empirical_p_value_table}; yet their Nash-derived recommendations differ: kickers in Clusters~1, 4, and 5 should shoot more to their non-natural sides than to their natural sides, whereas kickers in Cluster~2 should shoot much more often to their natural side. This greater imbalance is shown by comparing Jensen-Shannon divergences. As we see in \cref{tab:cluster_stat_test}, the Jensen-Shannon divergence of the Nash probabilities between Clusters~1 and 2 (1.28\%) is 3-4 times greater than that between Clusters~1 and 5 (0.41\%) and 12 times greater than that between Clusters~1 and 4 (0.10\%). Nevertheless, most of these Nash recommendations come from very low-sample empirical payoff tables, which entails potentially inaccurate Nash distributions. This effect is somewhat mitigated by sampling N=50 payoff tables, whose entries are drawn from Beta distributions parametrized by (1 + number of successful shots, 1 + number of unsuccessful shots), and averaging over the Nash distributions of these sampled tables. We note that this low-data regime is induced by the restriction of our analysis to players having appeared in matches involving Liverpool. Obtaining Player Vector data for all players in our dataset would allow us to study cluster behavior with greater statistical precision. Nevertheless, the current study leaves no statistical doubt regarding the pertinence of clustering payoff tables using player embeddings (specifically, Player Vectors), and we infer that the better the embeddings, the better the resulting payoff table clusters.
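The following sketch illustrates this uncertainty-handling step for a single cluster: per-cell scoring probabilities are drawn from Beta(1 + successes, 1 + failures) distributions, the mixed equilibrium of each sampled $2 \times 2$ table is computed via indifference conditions, and the resulting Nash distributions are averaged. The shot counts below are placeholders rather than the actual per-cluster data, and the handling of sampled tables without an interior mixed equilibrium is a crude approximation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-cell counts for one kicker cluster:
# successes[i, j] / failures[i, j] for kicker action i and keeper action j.
successes = np.array([[30, 55], [52, 25]])
failures  = np.array([[14,  6], [ 7, 17]])

def nash_2x2_constant_sum(A):
    denom = A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1]
    p = (A[1, 1] - A[1, 0]) / denom
    q = (A[1, 1] - A[0, 1]) / denom
    return np.array([p, 1 - p]), np.array([q, 1 - q])

kicker_mixes, keeper_mixes = [], []
for _ in range(50):  # N = 50 sampled payoff tables
    sampled = rng.beta(1 + successes, 1 + failures)
    k_mix, g_mix = nash_2x2_constant_sum(sampled)
    # Crude handling of sampled tables where one action dominates
    # (the equilibrium is then pure rather than mixed).
    k_mix = np.clip(k_mix, 0, 1)
    g_mix = np.clip(g_mix, 0, 1)
    kicker_mixes.append(k_mix / k_mix.sum())
    keeper_mixes.append(g_mix / g_mix.sum())

print(np.mean(kicker_mixes, axis=0), np.mean(keeper_mixes, axis=0))
\end{verbatim}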
Qualitatively, in addition to analyzing the strategies with respect to Nash probabilities, the patterns of the positions of the ball for successful goals also vary from cluster to cluster, as visualized in \cref{fig:egta_pv_cluster_heatmap}. For instance, kickers in Cluster~5 tend to score mostly in the bottom left corner of the goalmouth, while the scoring positions in other clusters are more balanced, though these patterns could also be partly due to lower sample sizes for some clusters.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\caption{}
\includegraphics[width=\textwidth]{figs/ghosting/ghosting_before.png}
\label{fig:ghosting_before}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\caption{}
\includegraphics[width=\textwidth]{figs/ghosting/ghosting_after.png}
\label{fig:ghosting_after}
\end{subfigure}\\
\begin{subfigure}[b]{\textwidth}
\centering
\begin{tabular}{cccc}
\tikzcircle[black, fill=white]{4.0pt} Ball (truth) & \tikzcircle[white, fill=blue]{4.0pt} Attackers (truth) & \tikzcircle[white, fill=red]{4.0pt} Defenders (truth) & \tikzcircle[white, fill=yellow]{4.0pt} Defenders (predicted)
\end{tabular}
\end{subfigure}
\caption{Predictive modeling using football tracking data. \subref{fig:ghosting_before} visualizes predictions under the original data. Here, ground truth information for all players and the ball is provided to the underlying predictive model, with defender positions truncated and predicted by the model after a cut-off time (as indicated by the yellow traces). \subref{fig:ghosting_after} illustrates the same scenario, after counterfactual perturbation of the ground truth ball direction to ascertain the predicted reaction of the defending goalkeeper (far right).
}
\end{figure}
\subsection{Predictive Models for Counterfactual Analysis}\label{sec:predictive_models_counter}
We here present an illustrative example to ground the earlier discussion regarding the potential impacts of using learned models to conduct counterfactual analysis of football matches.
Consider, for instance, the problem of predictive modeling of players at the trajectory level.
Using tracking data available for on-pitch players, one can train models that predict the future trajectories of players given a finite-horizon input context.
For example, one might train such a model on league data (e.g., as done in \citet{LeY0L17}), provide an input context to such a model (e.g., consisting of the true state of the ball, defenders, and attackers up to some point in time), and subsequently visualize the league-average predicted trajectories conditioned on this input context (e.g., as visualized in \cref{fig:ghosting_before}).
As pointed out in the literature~\citep{Le17,LeY0L17,yeh2019diverse,li2020generative}, a key advantage of generative predictive models is that they can be used for counterfactual analysis of play outcomes.
We illustrate such an example in \cref{fig:ghosting_after}, where we perturb the trajectory of the ball, inferring the subsequent behaviors of defenders in reaction (noting, e.g., the tendency of the goalkeeper to chase the ball in reaction to it entering the penalty area).
While simple, case-by-case counterfactual case studies such as the above have been conducted to some extent in the literature, consideration of responses to more complex perturbations (e.g., changes of one team's tactics or meta-strategy as a whole, changes in player behavior due to injuries, or changes due to substitutions of individual players) bear potential for significantly more in-depth analysis.
\section{Discussion}
Football analytics poses a key opportunity for AI research that impacts the real world.
The balance of its reasonably well-controlled nature (versus other physical domains beyond sports, e.g., search-and-rescue), considerations associated with human factors (e.g., physiological characteristics such as injury risks for players), and the long-term cause-and-effect feedback loop due to the relative infrequency of professional play make it a uniquely challenging domain.
Nonetheless, the rapidly-emerging availability of multi-modal sensory data make it an ideal platform for development and evaluation of key AI algorithms, particularly at the intersection of the aforementioned fields of statistical learning, computer vision, and game theory.
In this paper, we highlighted three frontiers at the intersection of the above fields, targeting the simultaneous advancement of AI and football analytics.
We highlighted the overlying goal of developing an Automated Video Assistant Coach (AVAC), a system capable of processing raw broadcast video footage and accordingly advising coaching staff in pre-, in-, and post-match scenarios.
We subsequently illustrated how the combination of game theory and statistical learning could be used to advance classical results in football analytics, with an in-depth case study using a dataset of over 12,000 penalty kicks, subsequently combined with the Player Vectors analysis of~\citet{DecroosD19} to discern kicking styles.
\begin{figure}[t!]
\centering
\includegraphics[width=\textwidth]{figs/multi-levelRLSA.pdf}
\caption{A multi-level view of football analytics cast as a reinforcement learning problem. We discern three levels: the top level aims to learn how to win championships by winning matches, while the second level optimizes for winning a match, and the bottom level learns to optimize for scoring goals. The context between these various level is shared in both a top-down and bottom-up fashion. }
\label{fig:RLview}
\end{figure}
A notable observation for future work focusing on prescriptive football analytics is that the domain and some of the state-of-the-art research bear key similarities to RL.
At a high level, the process of winning football championships can be cast as a sequential decision-making problem, with a concrete reward structure centered on three timescales of increasing abstraction:
scoring goals, winning matches, and subsequently winning championships. We illustrate this view in \cref{fig:RLview}.
Under this hierarchical view of football, each layer can be considered an RL problem at the designated level of abstraction.
For example, at the lowest level, the sequential decisions made by teammates that lead to a goal can be considered a policy mapping states to actions, using the lexicon of RL.
Likewise, models predicting the values of player actions (e.g., VAEP~\citep{decroos2019actions}) can be considered analogous to those that learn action-values associated with RL policies.
Further expanding this analogy, learning to quantify the contribution of individual players to a team's estimated goal-scoring value can be cast as a so-called credit assignment problem, a key area of research in RL.
Finally, given the presence of multiple on-pitch players with both cooperative and competitive incentives, the value function learning problem situates itself in the area of multi-agent RL.
Multi-agent RL, critically, seeks to understand and learn optimal policies for agents in such interactive environments, linking also to game theory in providing the appropriate mathematical foundations to model this strategic process.
As such, the multi-agent RL approach fits well under Frontier~1~(GT\&SL)\xspace, which considers the game-theoretic interactions of strategic players given specified payoffs, and use of learning techniques for identifying optimal policies.
Moreover, this connection also highlights a potential overlap of interest between real-world football and RoboCup, in that the RL paradigm can be used to optimize player and robot policies alike, despite the widely-different player embodiments considered in each of these two fields.
Overall, such parallels can be drawn at all levels of abstraction highlighted in the aforementioned hierarchical process modeling football championships, implying the foreseeable importance of the RL paradigm as football analytics shifts from understanding the game to subsequently optimizing player and team decisions at increasingly broader levels.
Moreover, the toolkits developed within the context of football analytics are also likely to have direct benefits for closely-related fields, and could be foreseeably adapted to many other sports.
One interesting extension concerns the application of football analytics techniques to the emerging field of eSports, wherein there is a large amount of data collected (in both raw video form, and structured data formats), e.g., such data streams are available for games such as Dota~2 or StarCraft.
In Dota~2, for example, a coaching functionality analogous to that in football is available, wherein an experienced player is connected to the game and advises other players on various strategic tactics.
Moreover, several of the most popular eSports games are inherently multi-player, in the sense that their outcomes are not determined by only an individual's skill, but a team's skill, mixing cooperative and competitive behaviors (as in football).
Automatic analysis of games could provide insights into weak and strong points of teams, tactics used, and directions for improvement.
These related domains could, therefore, provide a low-hanging fruit for football analytics techniques to generalize, in a seamless manner, beyond football.
Overall, the combination of data sources, downstream benefits on related domains, and potentials for impact that AI could have on the football domain are quite evident.
Perhaps more importantly, the promising commensurate impacts of football analytics on AI research (through the feedback loop established between the football microcosm to the three foundational fields highlighted in \cref{fig:overview}) are foreseen to make football a highly appealing domain for AI research in coming years.
\section{Methods}
\subsection{Game Theory: Elementary Concepts}
Empirical game theory has become an important tool for game-theoretic analysis in large-scale multi-agent settings, in which either a large amount of data on multi-agent interactions is readily available, or such data is collected through simulations of the system under study for the construction of the games.\citep{wellman2006methods,TuylsPLHELSG20}
Empirical game-theoretic modeling of penalty kick taking and set pieces facilitates strategic understanding of player and team behavior under various circumstances (e.g., play according to Nash equilibrium), and can assist both in predicting opponent behavior and prescribing how a player (or team) should behave in the presence of other players (teams). These game-theoretic models can be leveraged in pre- and post-match analysis, and can be combined with analysis of dynamic trajectory behavior (e.g., ghosting, as described in a later section). Additionally, the models can be enriched by carrying out Empirical Game Theoretic Analysis (EGTA) on meta-game models of set pieces, automatically clustering and identifying useful meta-strategies, and providing insights into higher-level team strategies, using techniques such as Replicator Dynamics and Alpha-Rank.
We start by formally defining Normal Form Games:
\begin{definition}[Normal Form Games (NFG)]
A game $G = \langle P, (S_i), (u_i) \rangle $ consists of a finite set of players, $P$, indexed by $i$; a nonempty set of strategies $S_i$ for each player; and a utility function $u_i : \times_{j\in P} S_j \rightarrow \mathbb{R}$ for each player.
\end{definition}
In this work we solely focus on 2-player or bimatrix games. Bimatrix games are 2-player normal form games, $G = \langle P, (S_1,S_2), (u_1,u_2) \rangle $ with $|P|=2$, and the utility functions $(u_1,u_2)$ can be described in terms of two payoff matrices $A, B$, in which one player acts as row player and one player acts as column player. Both players play their actions at the same time.
\noindent The payoffs for both players are represented by a bimatrix $(A, B)$, which gives the payoff for the row player in $A$, and the column player in $B$ (see \cref{fig:bimatrix} for a two-strategy example). Specifically, the row player chooses one of the two rows, the column player chooses one of the columns, and the outcome of their joint action determines the payoff to both.
\begin{figure}[tb]
\centering
\gamematrix{}{}{a_{11}, b_{11}}{a_{12},b_{12}}{a_{21}, b_{21}}{a_{22}, b_{22}}
\caption{Bi-matrix payoff tables (A, B) for a two-player two-action normal form game.}
\label{fig:bimatrix}
\end{figure}
\noindent A player~$i$ may play a {pure strategy}, $s_i\in S_i$, or a {mixed strategy}, $\sigma_i\in\Delta(S_i)$, which is a probability distribution over the pure strategies in $S_i$.
In a {strategy profile} $\sigma=(\sigma_1,\sigma_2)$, each player has a strategy $\sigma_i$, which can be mixed or pure (all probability mass on one pure strategy).
We use notation $\sigma_{-i}$ to denote a strategy profile for all players excluding~$i$.
\noindent Having defined NFGs, we can now define an empirical game as an NFG whose payoff tables are constructed directly from data of real-world interactions or simulations. For example, one can construct a win-loss table between two chess players when they both have access to various strategies.
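As a concrete example of constructing such an empirical game, the sketch below aggregates a small, hypothetical table of penalty kick records into a $2 \times 2$ payoff table of scoring rates indexed by the kicker's and goalkeeper's natural/non-natural choices; the column names are assumptions about the data layout rather than the schema of any particular data provider.
\begin{verbatim}
import pandas as pd

# Hypothetical event records: one row per penalty kick.
kicks = pd.DataFrame({
    "kicker_side": ["N", "N", "NN", "NN", "N", "NN"],
    "keeper_side": ["N", "NN", "N", "NN", "N", "NN"],
    "goal":        [0,    1,    1,    0,    1,    1],
})

# Empirical payoff table: scoring rate for each joint action.
payoff_table = (kicks
                .groupby(["kicker_side", "keeper_side"])["goal"]
                .mean()
                .unstack("keeper_side"))
print(payoff_table)
\end{verbatim}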
\noindent The traditional equilibrium concept of game theory is the Nash equilibrium, which selects strategy profiles such that no player can benefit from unilateral deviation. More formally, we have:
\begin{definition}[Nash Equilibrium]
A strategy profile $\sigma^*$ is a Nash equilibrium if and only if for all $i$,
\begin{displaymath}
\forall s'_i\in S_i.\ \mathbb{E} [u_i(\sigma_i^*,\sigma_{-i}^*)] \geq \mathbb{E} [u_i(s'_i,\sigma_{-i}^*)].
\end{displaymath}
\end{definition}
\noindent An $\epsilon$-Nash equilibrium is a relaxation of the Nash equilibrium concept to approximately-stable strategy profiles: players may still be able to gain by unilaterally deviating to another strategy, but any such gain is bounded by $\epsilon$. More formally, we have:
\begin{definition}[$\epsilon$-Nash Equilibrium]
Given $\epsilon > 0$, a strategy profile $\sigma^*$ is an $\epsilon$-Nash equilibrium if and only if for all $i$,
\begin{displaymath}
\forall s'_i\in S_i.\ \mathbb{E} [u_i(\sigma_i^*,\sigma_{-i}^*)] \geq \mathbb{E} [u_i(s'_i,\sigma_{-i}^*)] - \epsilon.
\end{displaymath}
\end{definition}
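To ground these definitions, the sketch below computes the smallest $\epsilon$ for which a given strategy profile of a bimatrix game is an $\epsilon$-Nash equilibrium, by measuring each player's best unilateral deviation gain. The payoff matrices use the rounded penalty kick scoring rates and empirical frequencies reported earlier, so the resulting $\epsilon$ is only indicative and need not match the value quoted in the case study.
\begin{verbatim}
import numpy as np

def epsilon_of_profile(A, B, x, y):
    """Smallest epsilon such that (x, y) is an epsilon-Nash equilibrium
    of the bimatrix game (A, B); x and y are mixed strategies."""
    u_row = x @ A @ y                      # row player's expected payoff
    u_col = x @ B @ y                      # column player's expected payoff
    gain_row = np.max(A @ y) - u_row       # best deviation gain, row player
    gain_col = np.max(x @ B) - u_col       # best deviation gain, column player
    return max(gain_row, gain_col, 0.0)

# Kicker (row) scoring rates; goalkeeper (column) payoffs taken as 1 - A.
A = np.array([[0.704, 0.907],
              [0.894, 0.640]])
B = 1.0 - A
x = np.array([0.525, 0.475])   # empirical kicker frequencies (N-S, NN-S)
y = np.array([0.615, 0.385])   # empirical goalkeeper frequencies (N-G, NN-G)
print(epsilon_of_profile(A, B, x, y))
\end{verbatim}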
\subsection{Player Vectors}
We characterize the playing style of a player using a so-called `player vector' that can be interpreted both by human experts and machine learning systems.
In particular, we follow the definition of \textit{playing style} in~\cite{DecroosD19}, namely a player's preferred area(s) on the field to occupy and the actions he tends to perform in each of these locations, and we generate our player vectors with the method proposed in~\cite{DecroosD19}.
The procedure of generating player vectors unfolds into four steps.
First, we collect the event stream data of all Premier League matches that Liverpool Football Club participated in from 2017 to 2019, and filter the actions of types passes, dribbles, shots and crosses.
Secondly, for each pair of a player $p$ observed in the event stream dataset and a relevant action type $t$, we overlay a grid of size $60 \times 40$ on the pitch and count how many times player $p$ performed action $t$ in each grid cell. This procedure yields a matrix which summarizes the spatial preference of player $p$ when performing action type $t$.
Thirdly, we compress each such matrix into a small vector. To do this, we reshape each matrix into a vector, group it together with all other vectors of the same action type, and then perform non-negative matrix factorization (NMF) to reduce the dimensionality of these matrices. This procedure yields a smaller vector per player and action type, in which the value of each dimension quantifies the preference of player $p$ for performing action type $t$ in a particular area of the pitch.
Finally, for each player, we obtain 4 vectors corresponding to the 4 action types, and we generate one final vector of 18 dimensions by concatenating his compressed vectors for relevant action types.
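The sketch below illustrates the compression step for a single action type, assuming a stack of per-player $60 \times 40$ count matrices is available; the number of NMF components and the file name are illustrative choices rather than the exact configuration of the Player Vectors pipeline.
\begin{verbatim}
import numpy as np
from sklearn.decomposition import NMF

# counts: hypothetical array of shape (n_players, 60, 40), where counts[p]
# is the grid of how often player p performed one action type (e.g., passes)
# in each cell of the pitch.
counts = np.load("pass_counts.npy")              # hypothetical file name
flattened = counts.reshape(len(counts), -1)      # shape (n_players, 2400)

# Compress each player's heatmap into a few coefficients; the number of
# components here (4) is an illustrative choice, not the one used above.
nmf = NMF(n_components=4, init="nndsvda", max_iter=500)
pass_vectors = nmf.fit_transform(flattened)      # shape (n_players, 4)

# Repeating this for dribbles, shots, and crosses and concatenating the
# per-action vectors yields the final player vector.
\end{verbatim}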
\subsection{Ghosting}
Ghosting refers to the prescription of the trajectories the players in a sports team should have executed, in contrast to what they actually did.~\citep{lowe_lights_2013}
Solution of this rich problem class implies benefits spanning from recommendation of trajectories or setups for constrained set pieces, then to short-term plays involving a subset of players, and eventually to long-term strategies/plays for the entire team.
Team-level ghosting would also strongly benefit from game-theoretic and multi-agent considerations, and is perceived to play a key role in an established AVAC system.
The illustrative example presented in \nameref{sec:predictive_models_counter} was trained using a baseline predictive model, similar to that of ~\citet{Le17}.
While the objective of the mentioned section was not to evaluate the model comprehensively, but to merely provide a grounding example, we provide some information regarding the training of the model used for the illustrations for completeness.
We trained a centralized long short-term memory (LSTM) model (of 2 layers, each with 256 units), taking as input the raw trajectories of players and the ball, and predicting as output the step-wise change in trajectory of the defensive players.
The model was trained on 240 frames of 25~fps tracking data, downsampled to 12.5~fps, with half the frames in each play used to provide the prediction context, and the prediction cut-off placed at the halfway point.
We used the $l_2$-loss on the tracking data for training, and randomized the order of attacking and defending players to avoid the role-assignment problem mentioned in \citet{Le17} (similar to one of the baseline approaches of \citet{yeh2019diverse}).
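For completeness, a minimal sketch of such a model is given below, written here in PyTorch purely as an illustrative choice of framework; the tensor shapes, feature layout, and training loop are simplified stand-ins rather than the exact setup described above.
\begin{verbatim}
import torch
import torch.nn as nn

class GhostingLSTM(nn.Module):
    """Centralized trajectory model: joint on-pitch state in,
    step-wise defender displacements out."""
    def __init__(self, n_players=22, n_defenders=11):
        super().__init__()
        in_dim = 2 * (n_players + 1)          # (x, y) for all players + ball
        out_dim = 2 * n_defenders             # (dx, dy) for each defender
        self.lstm = nn.LSTM(in_dim, 256, num_layers=2, batch_first=True)
        self.head = nn.Linear(256, out_dim)

    def forward(self, states):                # states: (batch, time, in_dim)
        hidden, _ = self.lstm(states)
        return self.head(hidden)              # (batch, time, out_dim)

model = GhostingLSTM()
loss_fn = nn.MSELoss()                        # l2 loss on displacements
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on random stand-in data.
states = torch.randn(8, 120, 46)              # 120 frames of context
targets = torch.randn(8, 120, 22)
loss = loss_fn(model(states), targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
\end{verbatim}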
\subsection{Vision}
As previously illustrated, multi-person human pose estimation is a central part of vision-based analysis of football video.
Methods for this task can be grouped into two types: on the one hand, bottom-up approaches first detect human joints, and group them into pose instances~\citep{pishchulin2016deepcut,insafutdinov2016deepercut,cao2017realtime,newell2017associative,papandreou2018personlab,kocabas2018multiposenet};
on the other, top-down approaches first detect body instances and run single-person pose estimation models on each instance~\citep{iqbal2016multi,fang2017rmpe,papandreou2017towards,huang2017coarse,he2017mask,sun2019deep}.
The computation cost of top-down methods increases linearly with the number of people in an image, while that of bottom-up methods stays constant.
However, in cases where there is significant overlap between instances, top-down approaches are often more accurate~\citep{chen2020monocular}.
We experimented with G-RMI~\citep{papandreou2017towards}, a well-established top-down approach, and give examples of predictions in \cref{fig:poses_penalty_kick}.
In the first stage, Faster R-CNN~\citep{ren2015faster} is used to detect person instances.
Inspired by detection methods, the second stage combines classification and regression to process each resulting crop:
a fully convolutional network first densely classifies whether each spatial position is in the vicinity of a given keypoint class, and then refines each prediction by predicting an offset.
A specialized form of Hough voting is introduced to aggregate these predictions and form highly localized activation maps.
A keypoint-based confidence score is obtained by taking the maximum value over spatial dimensions for each activation map, and averaging over keypoints.
Finally, to avoid duplicate predictions in the first stage, overlapping boxes are filtered using non-maximum suppression (NMS), based on both the intersection-over-union (IOU) and the object keypoint similarity (OKS) metric itself to measure the overlap between two candidate pose detections.
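The final scoring step described above amounts to a few lines of array manipulation, as in the sketch below; the activation map layout and number of keypoints are assumptions of the example.
\begin{verbatim}
import numpy as np

def keypoint_confidence(activation_maps):
    """Instance-level confidence from per-keypoint activation maps of
    shape (n_keypoints, height, width): take the spatial maximum of each
    map and average over keypoints."""
    per_keypoint = activation_maps.max(axis=(1, 2))
    return per_keypoint.mean()

# Illustrative example with 17 COCO-style keypoints on a 64x48 crop.
maps = np.random.rand(17, 64, 48)
print(keypoint_confidence(maps))
\end{verbatim}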
We plan to build on this approach to develop methods for the previously mentioned challenges.
\subsection{Statistics}
Throughout the empirical game-theoretic analysis presented in this work, we rely on a small number of standard statistical tools. Agreement between Nash-derived and empirically-observed action distributions is quantified using the Jensen-Shannon divergence. Equality of per-cell scoring rates across conditions (e.g., footedness, experience levels, or kicker clusters) is assessed using t-tests over the corresponding binary shot outcomes, with the resulting p-values reported in the associated tables. Finally, to account for the uncertainty induced by low-sample payoff tables, we sample payoff tables whose entries are drawn from Beta distributions parametrized by the observed counts of successful and unsuccessful shots, and average the resulting Nash distributions.
\newpage
\bibliographystyle{plainnat}
\section{Introduction}
\begin{figure}[t]
\centering
\includegraphics[width=0.85\textwidth]{figs/overview_frontiers.pdf}
\caption{Overview.}
\label{fig:overview}
\end{figure}
Recent years have seen a tremendous growth of interest in sports analytics research, not only from an economic and commercial point of view, but also from a purely scientific perspective, as evidenced by the growing number of publications and scientific events organized on the topic (e.g., the MIT Sloan Sports Analytics Conference~\citep{mit_sloan_conf}, the CVPR International Workshop on Computer Vision in Sports~\citep{cvsports_workshop}, and the ECML/PKDD Workshop series on Machine Learning and Data Mining for Sports Analytics~\citep{kdd_ml_sa}).
As in the multitude of other domains that have benefited from recent applications of artificial intelligence (AI) and machine learning (ML), this growth is driven by important technological advances in data collection and processing capabilities, progress in statistical learning with the advent of \textit{deep learning}, increased compute resources, as well as ever-growing economic activities associated with sports and culture (e.g., emergent consultancy ventures revolving around sports data collection and statistics~\citep{beal_norman_ramchurn_2019,opta_sports,ChyronHego,InStat,StatsBomb,soccernomics}).
\textit{Predictive analytics} has been investigated and applied in the context of several sports in the past decades, including, e.g., basketball~\citep{skinner2010price}, tennis~\citep{walker2001minimax,gauriot2016nash}, and baseball~\citep{Albert02,Lewis04,costa2009practicing,Song17} (i.e., sabermetrics\citep{Puerzer02,Albert10}), with the latter systematically collecting data since the 19th century~\citep{baumer2014sabermetric}.
Although statistical analysis of data has led to many impressive outcomes in these various sports (e.g., Moneyball in baseball~\citep{Lewis04,baumer2014sabermetric}), football started participating rather late in this data collection and number crunching game, with the data science transformation that informs stakeholders (e.g., decisions related to player transfers, scouting, pre- and post-match analysis, etc.) still in its infancy \citep{soccernomics}.
Several factors influenced this late arrival.
Firstly, football takes place under far less controllable settings than other sports due to its outdoor and highly dynamic nature, a larger pitch, a large number of players involved, and longer non-interrupted game sequences than sports such as basketball.
Secondly, professional companies have only relatively recently started collecting so-called big data (e.g., high-resolution videos, annotated event-streams, player tracking and pose information).
Concurrently, only recently have major breakthroughs been made in deep learning, yielding techniques that can handle such new high-dimensional data sets\citep{bengio2009learning,Arel10,lecun2015deeplearning,Schmid15,Goodfellow-et-al-2016}.
Finally, for a long time, credibility in decision-making primarily depended on human specialists such as managers, retired players, and scouts, all of them with track records and experience in professional football~\citep{soccernomics,DecroosD19}.
Arrigo Sacchi, a successful Italian football coach and manager who never played professional football, responded to criticism over his lack of experience when becoming a coach at Milan in 1987 with his famous quote: ``I never realised that to be a jockey you had to be a horse first.''~\citep{sacchi_quote_fifa}
As a result, the potential influence and gains of predictive analytics on the football game have also been less obvious in the past decades, with the credibility of sports analytics as a game-changing phenomenon only being realized in recent years.
In more philosophical terms, \citet{soccernomics} highlight a cultural hesitation regarding the integration of data science into football and an overdependence on gut instincts, noting that ``until very recently, soccer had escaped the Enlightenment".
Despite football's late adoption of sports analytics, there are a number of early-bird approaches from different areas of AI, such as statistical learning (SL), computer vision (CV), and game theory (GT), that are making initial contributions to support the decision-making of managers, coaches, and players.
For example, even basic machine learning tools, such as principal component analysis (PCA), already enable automated identification of player types\citep{DecroosD19}, training of models that predict trajectories of individual teams or imitate league-average behaviors\citep{Le17}, and valuation of individual player decisions (such as passes or tackles) in a series of actions leading up to a goal.\citep{DecroosBHD19}
The study of interactive decision-making as formalized by game theory plays a critical role in AI for systems involving more than one actor (human or artificial).
Game theoretic tools shed light on players' strategic interactions during scenarios such as penalty kicks, analysis of their decisions in comparison to mathematically-principled baselines, and prediction of their goal-scoring probabilities when playing according to a mixed strategy Nash equilibrium~\citep{Palacios2003,chiappori2002testing,palacios2016beautiful}.
Enriched with recent techniques from \emph{empirical game theory}~\citep{wellman2006methods, TuylsPLHELSG20,omidshafiei2019alpha}, the effects of various high-level strategies pitted against each other can also be analyzed.
Finally, the latest techniques and developments from computer vision have been used for player tracking~\citep{Lu13,Liu2013,Bialkowski2015,Gade18}, pose estimation~\citep{Zhang_2019_CVPR,Fastovets,Bridgeman_2019,Sypetkowski19,Sypetkowski}, and automated injury prediction~\citep{Kampakis} based on, e.g., gait and fatigue analysis~\citep{Meert18,Ramos20,Claudino,Kakavas,Bartlett}.
While these separate areas within AI research have independently been demonstrated to be effective when targeting the above specific prediction and analysis problems in football, we believe the most fruitful perceived benefits (and, correspondingly, the most interesting and challenging research problems) to lie in the underexplored intersection of the subfields of statistical learning, computer vision, and game theory.
We argue that the domain spanned together by these three fields of research is the most promising for establishing seminal progress in football analytics in the coming years.
We lay out this vision in \cref{fig:overview}.
Specifically, we pinpoint several frontiers at the intersections of these three fields, with the ultimate frontier being identified as the microcosm where challenges are tackled that require integrated approaches built on fundamentals of the three areas.
A large portion of the current state-of-the-art research in football analytics, by contrast, typically falls under the umbrella of one of these areas, with some initial activities taking place at Frontier~2~(SL\&CV)\xspace~\citep{Lu13,Mora17,Mehrasa,Choi_2019_ICCV,Quiroga_2020}, and no notable research activities identified at Frontier~1~(GT\&SL)\xspace and Frontier~3~(GT\&CV)\xspace for football, or sports in general.
To make the potential opportunities more concrete, we provide several case studies that will be discussed in detail in the \nameref{sec:game_plan} and \nameref{sec:results} sections.
At Frontier~1~(GT\&SL)\xspace, game-theoretic analysis is blended with learned predictive models.
Research along this axis of the microcosm focuses on a combination of interactive decision-making with predictive modeling providing more concrete and deeper analysis tools.
We present a detailed case study illustrating this frontier, revisiting the seminal work of \citet{Palacios2003} on penalty kicks under this new perspective and illustrate how mixing with SL provides deeper insights into penalty kick taking.
Frontier~2~(SL\&CV)\xspace focuses on research that integrates statistical learning with computer vision.
Research along this axis directly learns from video as the primary input and builds predictive models, for instance forecasting player and team behaviors directly.
At Frontier~3~(GT\&CV)\xspace, we classify research integrating computer vision and game theory, a largely uncharted territory focusing on generative models based on visual inputs, which take strategic interactions into account.
We contend that the above frontiers culminate in a unique microcosm mutually benefiting both AI and football analytics research, wherein it becomes possible to develop, for example, an \textit{Automated Video Assistant Coach} (AVAC). The AVAC system is an example of what we believe to be the future of human-centric AI research for football, in which the aim is to integrate all aspects of the frontiers into a cohesive system enabling both understanding and improvement of human football play.
The advantage this microcosm brings is that while it makes clear what AI research can mean for football in the long-term, it has the dual effect of defining football as a new human-centric AI research domain with major challenges that can progress the field through cross-fertilization of ideas from the three axes.
\paragraph{Overview} We continue this paper by laying out the long-term perspective of how AI can benefit the domain of football analytics, and vice versa.
We describe the current state of affairs and sketch a long-term vision for research in the football microcosm.
The three axes are illustrated in detail, and we present a case study that examines penalty kick taking from a game-theoretic perspective (in a similar fashion as in the work of \citet{Palacios2003,palacios2016beautiful}, but with several new insights, based on data of the main European leagues).
We illustrate an example of counterfactual trajectory predictions (i.e., a `what-if' analysis) that can be enabled through learned predictive models, and subsequently demonstrate how this game-theoretic work can be enriched via integration with statistical learning at Frontier~1~(GT\&SL)\xspace, providing several new insights about penalty kick taking in general, and making a case for the AI microcosm as a research domain.
\section{Game Plan: Long-term research vision}\label{sec:game_plan}
In this section we outline a long-term research vision and plan for AI applied to football analytics. We start by looking in greater detail into the current state of the art with respect to each of the respective areas (GT, SL, and CV) applied to the football domain, after which we examine the opportunities that each of the frontiers brings forth to unlock larger scientific challenges in football analytics. We then consider the reverse perspective, i.e., \emph{what football can mean for AI}, by dissecting the football AI microcosm into several layers of key challenges and investigating what it brings as a whole to AI research as a human-centric AI testbed.
\subsection{AI for Football Analytics}
\subsubsection{Foundational Work and Prospects}
AI brings a set of algorithmic techniques and tools from statistical learning, computer vision, and game theory that are applicable to football analytics.
The following sections summarize contributions in the literature that lie in these peripheral fields of \cref{fig:overview} and highlight opportunities for further work and added value in each one of them.
\paragraph{Statistical Learning}
Football is arguably the most challenging to analyze of all the major team sports. It involves a large number of players with varied roles (including goalkeepers, defenders, midfielders, and strikers), very few salient events, and minimal scoring. Statistical football analysis attempts to provide a quantitative answer to questions that pertain to different aspects of the game. Most notably, these include the problem of characterizing players' and teams' playing styles, the evaluation of the impact that players (or groups of players) have on the pitch, and the temporal and counterfactual prediction of players' actions.
When we compare the styles of players and, by extension, of teams, we usually refer to certain high-level and abstract notions that summarize unique characteristics associated with them, which are not always easy to quantify.
The goal of this line of research is to learn features that capture such information, normally to be used by other downstream tasks.
For instance, one means of summarizing information about a football player is through aggregation of their play statistics (e.g., offensive ability, defensive ability, dribbling ability, pass success rates, shots on target, etc.).
In addition to capturing single players, there is work on capturing team-level information.\citep{fernandez2016attacking,stats_playing_styles_2020}
While such statistics are typically either handcrafted or rely on simple measures of play outcomes, recent works have started to analyze them from a statistical learning perspective, using notions such as Player Vectors~\citep{DecroosD19} (and analogous techniques in sports such as basketball~\citep{franks2015characterizing}).
Given the growing success of unsupervised methods, there is potential for more advanced representations of player traits to be learned from the data directly.
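As a minimal sketch of this direction (and not the Player Vectors method itself), the snippet below compresses hypothetical per-player statistics into a low-dimensional style representation with PCA and clusters the result into coarse player types; the feature set and data are invented purely for illustration.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical per-90-minute statistics for 500 players (columns could be
# passes, dribbles, tackles, interceptions, shots, crosses, ...).
X = rng.gamma(shape=2.0, scale=1.0, size=(500, 6))

# Standardize, then project onto a few principal components that act as a
# compact "player vector" summarizing each player's style.
Z = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))

# Cluster the compressed representations into coarse player types.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Z)
print(Z.shape, np.bincount(labels))
\end{verbatim}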
In football it is particularly difficult to assess which players, or groups of players, deserve credit for a favorable result (or blame for an unfavorable one). For high-scoring team sports (e.g., basketball) one can provide a reasonable answer to this question by restricting attention to actions that have an immediate impact on scoring.
By contrast, the average number of goals per game in the 2019/2020 Premier League season was 2.72.\citep{pl_goals_2019}
Consequently, models that only consider actions with an immediate impact on goals (e.g., shots, assists, saves) capture a crucial but very limited aspect of the game.
Despite the progress observed in recent years, there is still significant room for improvement.
The game state in football is significantly more complex than that estimated by current models. Features describing it are mostly hand-crafted and only consider on-ball actions. A given pass might be extremely valuable or a poor choice depending on the disposition of the players. While these methods are able to value all on-ball actions, they still rely on the very sparse signals provided by goals (which represent on average 1\% of the total actions in a match). Off-ball actions can have a significant impact on a game, such as covering a certain area of the pitch to prevent attacks, running into an open area to create space for teammates to exploit, and so on.
Due to the temporally extended nature of football, the task of inferring the value of actions is an instance of the temporal credit assignment problem in reinforcement learning \cite{minsky1961steps}. The combination of reinforcement learning techniques with the powerful function approximation capabilities of deep learning has great potential to tackle the idiosyncrasies of football and to close the gap between statistical and human analysis. It is exciting to see recent progress in this direction in sports analytics, including ice hockey \cite{guiliang2018deep} and football \cite{sun2020cracking, liu2020Deep}.
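As a toy illustration of such credit assignment (and not the method of the cited works), the snippet below applies a tabular TD(0) update to simplified possession sequences, where a possession is rewarded 1 if it ends in a goal and 0 otherwise; the coarse state descriptors are hypothetical.
\begin{verbatim}
from collections import defaultdict

def td0_value_estimates(episodes, alpha=0.1, gamma=1.0):
    """Tabular TD(0) over possession sequences.

    episodes: list of (states, reward) pairs, where `states` is the ordered
    list of coarse game states visited during a possession and `reward` is 1
    if the possession ended in a goal and 0 otherwise.
    """
    V = defaultdict(float)
    for states, reward in episodes:
        for t, s in enumerate(states):
            # Bootstrap from the next visited state, or from the outcome.
            target = reward if t == len(states) - 1 else gamma * V[states[t + 1]]
            V[s] += alpha * (target - V[s])
    return V

# Toy usage with hypothetical coarse states; the change in value between
# consecutive states can be read as the value contributed by the action
# linking them.
episodes = [
    (["build_up", "final_third", "box"], 1),
    (["build_up", "final_third"], 0),
    (["counter", "box"], 1),
]
V = td0_value_estimates(episodes * 200)
print({s: round(v, 2) for s, v in V.items()})
\end{verbatim}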
\paragraph{Game Theory}
Game theory has played an increasingly important role in the study of team sports, enabling theoretical grounding of teams' and players' behavioral strategies.
Numerous works have applied game-theoretic analysis to sports over recent decades~\citep{Sindik2008,Lennartsson15}, including football~\citep{Palacios2003,MOSCHINI2004,Azar2011,Levitt,Buzzachi14,coloma2012penalty,chiappori2002testing}, tennis~\citep{walker2001minimax,gauriot2016nash}, volleyball~\citep{lin2014applying}, basketball~\citep{skinner2010price}, and gridiron football~\citep{emara2014minimax}.
Game theory offers the possibility of formally understanding several key aspects of football.
When the assumptions made in game-theoretic models hold, we can be confident of our understanding;
when game theoretic predictions fail, there is likely an improved action policy that can be suggested to players or teams.
High-level game theory can be applied to team selection and choice of formation.
Football set pieces, such as corner kicks and penalties, are particularly amenable to game theoretic analysis, wherein identified high-level strategies can be pitted against one another and ranked in terms of empirical performance (see \nameref{sec:results} section for an example case study).
Due to the real-world nature of football analytics, the majority of the aforementioned game theoretic analysis is driven by the availability of rich data sources.
The diversity and volume of available data (spanning across different leagues, seasons, and competition levels) makes football a particularly attractive topic from an empirical and behavioral game-theoretic perspective~\citep{camerer2011behavioral,wellman2006methods}.
From a theoretical perspective, the majority of the existing works exploit a particular aspect of team sports, which is that they can typically be modeled (at a high level) as two-player zero-sum games (i.e., involving two teams competing in a win-loss game).
Moreover, certain in-game situations can be straightforwardly modeled as smaller instances of two-player zero-sum games.
For example, in football, the penalty kick situation may be intuitively modeled as a two-player asymmetric game, where the striker's strategies may be neatly categorized as `shoot left', `center', or `right'.
The controlled nature of the environment within these in-game situations compounds their appeal from a quantitative analysis perspective;
in the penalty kick example, the striker, goalie, and ball's initial positions are fixed across all trials available in datasets.
In fact, the majority of the literature that analyzes football under a game-theoretic lens focuses on penalty kick scenarios~\citep{Palacios2003,Azar2011,Levitt,Buzzachi14,coloma2012penalty,chiappori2002testing}, which we contend is due to the above accessibility factors and amenability for analysis via classical game-theoretic solution concepts (such as Nash equilibria).
Despite the notable progress made by the above works, the potential for football to benefit from game-theoretic analysis will remain untapped, until a paradigm shift away from the classical, two-player analysis of small set-piece settings occurs, towards live-play settings such as those considered by \citet{MOSCHINI2004}.
Moving beyond the classical game theory paradigm involves resolution of significant challenges unique to this domain:
the number of active (on-pitch) players is larger than most other professional sports (notable exceptions being rugby and Australian football), and the exponential size of the strategy space (with respect to the number of players) makes it significantly more challenging to analyze than games such as tennis or basketball;
the mapping of low-level player actions to strategies is no longer straightforward in longer-duration `live plays', due to the variability of players' trajectories beyond set-piece settings;
the duration of plays is also generally longer than sports played in more controlled environments (e.g., tennis), and these durative aspects may benefit from analysis of plays in the extensive-form (rather than the simultaneous-move, normal-form approaches typically used for set-piece analysis).
Nonetheless, the availability of new data types (including high-quality annotated video streams, player tracking, and event-streams) increases the viability of conducting more advanced game-theoretic analysis by bootstrapping to advances in statistical learning and computer vision, as later detailed.
Surmounting these challenges, we believe, will benefit football from a game-theoretic perspective, enabling both a descriptive analysis of the game (i.e., understanding the strategic interactions undertaken by players and teams in the presence of others), and a prescriptive one (i.e., suggesting the actions such individuals \emph{should} have executed).
\paragraph{Computer Vision}
Computer vision has seen major breakthroughs in the past decade. Performance improvements in tasks such as image classification~\cite{deng2009imagenet}, video action recognition~\cite{Carreira_2017_CVPR} and pose estimation~\cite{alp2018densepose} have unlocked the possibility of automatically extracting high level complex information from videos.
Similarly, in football analytics, computer vision methods hold the promise of significantly enriching statistical and game-theoretic approaches, which otherwise rely on hand-labeled, low-dimensional data.
Video is a particularly appealing signal.
First, it is an extremely rich data source, in that it is likely to contain large amounts of information of interest for decision-making.
Second, it is particularly cheap to acquire, necessitating only the use of widespread camera sensors.
Third, computer vision offers the ability to train a model once and apply it to all new video as it arrives.
Finally, computer vision and hand-labeled data are synergistic; models can be built using both video and hand-labeled data.
Vision-based models offer the possibility of generating statistics without the need of hand labeling.
While the raw signal of RGB sequences itself is insufficiently structured to be processed by traditional rule-based computer programs, computer vision methods enable the extraction of high level, potentially spatially-detailed representations, ready for use by downstream applications (programs and machine learning algorithms alike).
Examples of such extraction processes include human pose estimation~\cite{alp2018densepose}, object detection~\cite{ren2015faster}, segmentation~\cite{long2015fully}, tracking~\cite{dong2018triplet} and depth estimation~\cite{godard2017unsupervised}, as well as event or action detection~\cite{Carreira_2017_CVPR}.
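As an illustration of such an extraction step, the sketch below runs a pretrained keypoint detector over a single frame using torchvision's Keypoint R-CNN; the frame path is a placeholder, and the exact weight-loading arguments may vary across torchvision versions.
\begin{verbatim}
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained person keypoint detector (COCO-style, 17 keypoints per person).
# Older torchvision versions use `pretrained=True` instead of `weights`.
model = torchvision.models.detection.keypointrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# "frame.jpg" is a placeholder path for a single broadcast frame.
frame = to_tensor(Image.open("frame.jpg").convert("RGB"))

with torch.no_grad():
    (out,) = model([frame])

# Keep confident person detections and their keypoints for downstream use
# (tracking, event detection, game-theoretic modeling, ...).
keep = out["scores"] > 0.9
poses = out["keypoints"][keep]   # (num_people, 17, 3): x, y, visibility
print(poses.shape)
\end{verbatim}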
While research has traditionally focused on 2D formulations (i.e., not incorporating 3D information such as depth), the recent important advances across all fields of computer vision are increasingly encouraging the research community to pivot to 3D formulations, thereby taking one more step towards one of the long-standing goals of computer vision as ``inverse graphics'': from any given image or video scene, to reverse-engineer and represent the physical processes and factors that produced it~\cite{szeliski2010computer}.
In addition to such explicit, human-interpretable representations, intermediate representations learned by machine learning models for video recognition can be used by downstream deep learning models.
Using large amounts of unlabeled data as input, self-supervised learning techniques use weak supervisory signals, such as transcribed speech~\cite{Miech_2020_CVPR} or audio~\cite{alayrac2020self,arandjelovic2017look} from the video, to learn meaningful video representations.
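A minimal sketch of this idea is given below: temporally co-occurring video and audio clip embeddings are aligned with a contrastive (InfoNCE-style) objective; the linear encoders and feature dimensions are placeholders standing in for real video and audio networks.
\begin{verbatim}
import torch
import torch.nn.functional as F

# Placeholder encoders: in practice these would be deep video/audio networks.
video_enc = torch.nn.Linear(512, 128)
audio_enc = torch.nn.Linear(128, 128)

def info_nce(video_feats, audio_feats, temperature=0.07):
    """Contrastive loss aligning temporally co-occurring video/audio clips."""
    v = F.normalize(video_enc(video_feats), dim=-1)
    a = F.normalize(audio_enc(audio_feats), dim=-1)
    logits = v @ a.t() / temperature        # (B, B) similarity matrix
    targets = torch.arange(v.size(0))       # positive pairs on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy batch of 16 co-occurring clip features.
loss = info_nce(torch.randn(16, 512), torch.randn(16, 128))
loss.backward()
print(float(loss))
\end{verbatim}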
In football, several companies already commercialize tracking information, relating to the players and the ball, automatically extracted from videos recorded by dedicated static camera(s), placed to have a bird's-eye or otherwise complete view covering the entire pitch~\citep{opta_sports,StatsBomb}.
Immersive sports analytics has been gaining popularity with the increased access to specialized hardware and portability.\citep{Lin2020SportsXRI} These techniques enable reconstruction of real game scenarios, which provide more feedback to players and coaches than 2D screens. Industrial labs have developed immersive learning platforms for players to recreate game scenarios~\citep{StriVR,Rezzil}, used by professional football teams. These have been extended to other sports media as well~\citep{StriVR}.
Besides the previously cited potential improvements for player ranking and game analysis methods (e.g., learning of player skill vectors, vision-based tracking of players and subsequent game-theoretic analysis of strategies), potential future applications are endless, including injury prediction based on temporal dense pose estimation and predicting player fatigue levels based on their on-screen activities and facial features.
State-of-the-art computer vision techniques are very valuable for analyzing football data. However, unique and interesting challenges arise when applying these methods to broadcast football video. Most of the time, the camera focuses on the relevant action on the pitch, leaving many of the players out of the frame. This poses an interesting geometric problem, as broadcast footage shots hardly overlap and many state-of-the-art systems are unable to geometrically relate these multiple shots;
only recently are data suppliers, such as StatsBomb, investigating more advanced learning-based camera calibration systems to account for these data association issues~\citep{statsbomb_camera_calib}.
Furthermore, players often occlude each other, further complicating the detection and identification tasks. All these problems could be addressed by using a bird's-eye camera.
However, we aim for a more general approach to football analytics in which all the available data can be valuable to our models. For this reason, we believe that tackling these relevant problems will advance the state of the art in football analytics while also contributing to general computer vision.
\subsubsection{Frontier 1: Interactive decision-making}
Learning to make good decisions in the presence of other agents is where game theory and statistical learning can converge.
This interdisciplinary area of research has received quite some attention in the multi-agent reinforcement learning community over the past decades, with thorough survey articles~\citep{PanaitL05,ShohamPG07,BusoniuBS08,TuylsW12,BloembergenTHK15,Hernandez-LealK20} on the topic widely available.
When considering the football domain in particular, it is evident that the potentials of game theory have yet to be fully exploited, let alone its combination with machine learning techniques such as reinforcement learning.
There are several promising routes in this area that not only are challenging from a football analytics perspective, but also from an AI research one.
Two of these routes are studied further in this article in the \nameref{sec:results} section; one concerns the study of set pieces using the combination of statistical learning with game theory, and a second focuses on predictive modelling for counterfactual analysis.
In the former, which has been mostly studied in a game-theoretic setting, we show how augmenting the analysis with player embeddings can provide deeper insights into how various types of players behave or make decisions about their actions in a penalty kick scenario.
In the latter case, we illustrate how machine learning techniques can facilitate counterfactual analysis in football matches.
The possibility to predict, for example, trajectories of players can enable investigation of `what-if' (i.e., counterfactual) scenarios, in which one for example would like to know how a specific player, or team, will respond in a specific match scenario.
Doing this enables one to not only learn to generate behaviors, but also leverage game-theoretic techniques for counterfactual analysis.
We defer a more detailed discussion of these research lines to the \nameref{sec:results} section.
Building on the counterfactual prediction of players' behaviors, one can also consider the possibility of using this as a coaching tool.
Specifically, one can use counterfactual insights to advise tactics to individual players, and even go further by optimizing the overall team strategy depending on the specific opponent in an upcoming match.
This would go beyond the state-of-the-art, which focuses on simply predicting player behaviors;
here, we would seek to actively optimize suggested tactics based on the particular behaviors and play style of the opposing team.
Such tactical considerations can also be conducted in an iterative manner (e.g., predictions can be made for the opposing team as conditioned on the best-response behavior of the main team), and effective counter strategies can be learned, for instance, using multi-agent reinforcement learning.
Such an approach opens the door to a slew of interesting research challenges. For instance, use of multi-agent reinforcement learning entails definition of a reward function for the players;
while rewarding goals is an obvious candidate, denser reward signals (e.g., associated with successful passes, intercepts, etc.) may be useful for accelerating learning of such policies.
Such reward functions are also likely to depend on the role of a player in the team and their respective skill set (i.e., may even be heterogeneous across players in a given team), and could also be learned using techniques such as inverse reinforcement learning~\citep{NgR00}.
Finally, we believe that the combination of the previously learnt models with pitch control~\citep{Spearman16}, a technique to determine which player/team has control over a specific area of the pitch, will provide additional information on open space, passing, and scoring opportunities, yielding a powerful tool for in-match tailored coaching.
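To make the pitch control idea concrete, the snippet below computes a heavily simplified control surface in which each grid cell is shared between the two teams according to a soft comparison of players' times-to-reach; the constant player speed and smoothness parameter are arbitrary assumptions, and the model is a stand-in for, not a reimplementation of, the approach of \citet{Spearman16}.
\begin{verbatim}
import numpy as np

def pitch_control(home_xy, away_xy, speed=5.0, beta=0.5, grid=(105, 68)):
    """Probability that the home team controls each 1m x 1m cell.

    home_xy, away_xy : (N, 2) player positions in pitch coordinates (meters).
    speed            : assumed constant player speed (m/s), chosen arbitrarily.
    beta             : softness of the time-to-reach comparison.
    """
    xs, ys = np.meshgrid(np.arange(grid[0]), np.arange(grid[1]), indexing="ij")
    cells = np.stack([xs, ys], axis=-1).astype(float)        # (X, Y, 2)

    def soft_weight(players):
        t = np.linalg.norm(cells[None] - players[:, None, None], axis=-1) / speed
        return np.exp(-beta * t).sum(axis=0)                 # (X, Y)

    w_home, w_away = soft_weight(home_xy), soft_weight(away_xy)
    return w_home / (w_home + w_away)

rng = np.random.default_rng(1)
control = pitch_control(rng.uniform([0, 0], [105, 68], (11, 2)),
                        rng.uniform([0, 0], [105, 68], (11, 2)))
print(control.shape, round(float(control.mean()), 3))        # (105, 68), ~0.5
\end{verbatim}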
\subsubsection{Frontier 2: Video Predictive Models}
There are several challenges that naturally lie at the frontier between statistical learning and computer vision.
Statistical learning depends on large quantities of labelled data, and many of the quantities suitable for football models are currently the product of hand-labelling. Vision-based models could automatically identify events that could be fed into statistical learning models. In addition to increasing the quantity of events available, vision-based systems could also improve their quality, with events localized to the exact corresponding frame.
Furthermore, video is a much richer signal than what is commonly used in predictive tasks, such as forecasting the future movement of players or predicting the value a certain action contributed. The combination of advanced deep-learning models and a rich video signal allows subtle cues, otherwise not captured in event-stream or tracking data, to be taken into account. These include, for instance, player pose or gaze estimation. Capturing more of the partially observable state of a game will ultimately allow more accurate predictions. Richer information may additionally help to shed light on the intention of players and thus address the credit assignment problem in action-value estimation.
On the other hand, understanding the game dynamics is necessary to resolve some of the issues of computer-vision-based tracking, e.g., occlusions or even tracking off-camera players. Such a predictive model of the players' movements can either provide additional inputs to the tracking system or be implicitly learned. Explicit or implicit access to the game dynamics will very likely also improve vision-based action labelling.
Finally, presenting prediction outcomes by means of video synthesis is an ambitious challenge that combines the task of \emph{ghosting} or trajectory prediction with video generation. Presenting predictions of counterfactual futures as video will allow an intuitive understanding by both coaching personnel and players.
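As a minimal sketch of such a ghosting or trajectory-prediction model, the snippet below trains an LSTM to predict a player's next position from a short history of positions; the architecture, hyperparameters, and random data are purely illustrative.
\begin{verbatim}
import torch
import torch.nn as nn

class TrajectoryForecaster(nn.Module):
    """Predicts a player's next (x, y) position from a short history."""

    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, history):              # history: (B, T, 2)
        out, _ = self.lstm(history)
        return self.head(out[:, -1])         # (B, 2) next-step prediction

model = TrajectoryForecaster()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy training step on random tracking snippets (25 frames of history).
history, target = torch.randn(32, 25, 2), torch.randn(32, 2)
loss = nn.functional.mse_loss(model(history), target)
optimizer.zero_grad(); loss.backward(); optimizer.step()
print(float(loss))
\end{verbatim}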
\subsubsection{Frontier 3: Generative Video Analysis Models}
Video modeling and game theory can mutually benefit one another.
Computer vision can provide new features to drive game theoretic models.
In turn, game theory can also shape video modeling, as illustrated by the feedback loop in \cref{fig:video_generation}.
Though most of the elements of this feedback loop (e.g., game theory, video generation, and video recognition) have been extensively explored in an independent manner, few works have leveraged a connection of game theory to both video analysis and video generation at the same time.
Throughout a football match, individual players carry out a series of encounters with one another, which can profitably be viewed through game theory.
Each individual player may have some idea of the likely success of strategies based on past performance and current in-game observations.
New information driving these encounters is based on sensory data, predominantly vision, being processed by human players in real-time.
As described in the left portion of \cref{fig:video_generation}, image data can also be processed by computer vision methods to extract the most relevant bits of information.
These high-level cues, such as players' poses or actions during the game, can be used to feed game theory models and improve their understanding of the dynamics of the game. Current game-theoretic models are mostly based on low-dimensional, hand-collected information. Adding automatically extracted information that can be collected at large scale can provide these models with valuable insights to improve their assessment of the game.
Penalty kicks are an almost idealized example, with a striker and goalkeeper observing each other for signs of intent, while simultaneously weighing preconceived strategies.
Video modeling can help by identifying visual signatures of intent.
In \cref{fig:penalty_kick}, pose information is extracted for the players involved in a penalty shot based purely on the video feed. This high level pose information can be used to infer the intentions of both players involved in the penalty and could provide valuable insights to improve their respective strategies.
Given the limited range of player motion and physical constraints, this pose information is a rich source of data for analyzing penalty kicks and possibly player training.
\begin{figure}[t]
\centering
\includegraphics[width=0.75\textwidth,page=1]{figs/vision/video_generation.pdf}
\caption{Video generation overview. Feedback loop between analysis of real footage and generated video. Game theory can also drive generation at the level of the football game.}\label{fig:video_generation}
\end{figure}
Video modeling needs to be highly constrained to be useful. The sheer amount of information provided by the video signal is enormous compared to the information provided by classification targets. Successful models deal with this information overload with techniques such as ignoring small features, taking advantage of 2- or 3-dimensional symmetries, or limiting new models to fine-tuning of baseline models to name but a few.
The large context found within football videos offers an opportunity for further constraints on video modeling.
Only a limited set of elements is present in football videos: the pitch, the ball, the two opposing teams, referees, and spectators.
The dynamics of play, however, are complex, with interactions taking place at levels ranging from opposing individuals to entire teams. The combination of limited elements but complex dynamics can lead to powerful video models in this special setting.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth,page=1]{figs/vision/penalty_kick.pdf}
\caption{Example of a pose estimation model applied to a penalty kick.}\label{fig:penalty_kick}
\end{figure}
\subsection{Football for AI Research}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth,page=1]{figs/overview_research.pdf}
\caption{Hierarchy of key research challenges.}
\label{fig:layers}
\end{figure}
The perceived benefits of AI for football analytics are evident, per the above discussions.
In this section, we consider the reverse perspective of the potential unique challenges and opportunities associated with the football domain, making it an interesting human-centric testbed for AI research.
We introduce here a hierarchy of key challenges associated with football research, as illustrated in \cref{fig:layers}.
Specifically, the major challenges in our microcosm are defined over three layers.
The foundational layer concerns
\emph{representation learning}, operating directly on the various input modalities available in football (e.g., time-synchronized videos, event-streams, and tracking data) to generate useful representations for the more complex learning tasks targeted in the subsequent layers:
\emph{prescriptive and predictive analysis} and \emph{modeling of human factors}.
In the subsequent sections, we detail each of these layers of research, highlighting the key complexity factors associated with football that are useful for other real-world applications of AI.
While doing this we also explicitly draw the connections with the three frontiers identified in the Football AI microcosm, concluding by detailing disruptive innovations and associated research directions for the future when the three frontiers come together in the microcosm.
\subsubsection{Football as an AI Testbed: Microcosm}
AI research has nearly as many goals as AI researchers; however, certain goals keep coming up: learning and acting based on real-world data streams (image/video, audio), decision-making to further goals, interpreting the actions of other agents and acting accordingly (game theory), being able to generate and understand language, etc. As discussed in earlier sections, football offers problems that involve many of the above AI challenges, but in a more limited scope.
The value of football for AI can be observed in the range of data useful for football analytics. Real-world data streams such as vision, audio, and text are the mainstay of AI research and are central to football analytics. These data streams are abundant in football, but have a large amount of shared context: the video feed never repeats itself, yet always involves two teams and a ball; an enormous amount of text is available, yet it is centered on the current game; the sound of the crowds and commentators can be relied upon to respond to goals and penalties. Football thus offers the opportunity to synthesize vision, audio, and text in a simpler domain. Football also offers hand-labelled data, such as ball and player tracking and identification. This is a golden opportunity for AI to accelerate learning algorithms; the main barrier to the use of hand-labelled data is the cost and time needed to generate it, but for football, providers exist to supply it. As mentioned, football comes with a high degree of context, which can be addressed with the use of historical data such as past player and team performance. More varieties of data exist than are typically found in AI; for example, there is a large amount of crowd-sourced data available, such as current betting odds and pundits' predictions. One of the goals of AI research is to synthesize varied data sources, and football offers a platform to do this on a simpler problem.
A worthy challenge in AI is to develop an AI which masters football:
the \emph{Automated Video Assistant Coach (AVAC)}.
A successful AVAC would help players, coaches, and managers alike. It could help players by analyzing their skills for weak points. A player's performance throughout a game could be analyzed to suggest improvements in positional play and to assess the performance overall. Just prior to a game, an AVAC could suggest strategies tuned to the opponents of the day. Coaches also seek to get the best out of their players, but have a limited amount of time and many players to watch. An AVAC offers the chance to help a coach get more out of their players and more out of their team: from player selection for a given game to trading strategies, an AVAC could provide an array of suggestions for a coach to consider, and team performance could be analyzed for weak points to improve or, in the case of opponents, to capitalize on.
Such a system would have the ability to automatically sift and label huge quantities of video, allowing spectators and broadcasters alike to retrieve key moments in sport. A spectator or broadcaster does not want to be diverted from the game he or she is watching, but may still want information related to it. A holistic video analysis system could automatically keep a running tally of information the user may find interesting, based on the current state of play and the spectator's reaction. When the spectator is bored, such a system might suggest another game that is more exciting. For those interested in fantasy football, this system might search for players based on a set of qualities.
AI can go further, synthesizing video of players who were outside the broadcast video coverage, based on last observed positions, tracking data, and the state of play.
\subsubsection{Representation Learning}
The wide variety of human labeled football statistics (e.g., ball and player tracking data, event labeling) that have become available are natural fodder for machine learning algorithms once they have been pre-processed.
These algorithms range from classification and regression tools (e.g., in expected possession value models~\citep{fernandez2019decomposing}) to generative models (e.g., trajectory generation models~\citep{Le17,LeY0L17,yeh2019diverse,li2020generative}) and variational autoencoding models (e.g., player embeddings).
In addition to human labeled statistics, a large amount of recorded video data is available for football.
The success of machine learning algorithms generally depends on data representation, as different representations can entangle and obfuscate various explanatory factors of variation behind low-level sensory data.
In football analytics, although expert knowledge is widely used to help design existing representations, learning with generic priors bears promise for avoiding such hand-encoded knowledge.
Moreover, the quest for Artificial General Intelligence (AGI) pushes onward the design of more powerful representation learning algorithms implementing such priors with minimal domain knowledge.
From the perspective of AGI research, we identify three unique questions of representation learning challenged by football analytics, which we next detail.
The first challenge concerns learning representations with multi-modal football data.
Particularly, in football analytics, it remains a fundamental challenge to effectively recognize \emph{long-duration} playing styles of individual players and teams.
The available data sources usually span a wide range, e.g., from event-streams to tracking data, and from time-synchronized videos to media highlights, news, and broadcasts.
While expert knowledge goes a long way towards analyzing these multi-modal data sources, it alone remains insufficient to process them efficiently.
The multitude of input modalities available to football analysts is likely to challenge existing methods of representation learning, thus driving researchers to develop holistic models that take these multiple modalities into account simultaneously.
The second challenge concerns learning contextual representations of individual players.
Due to the dynamics and uncertainties of football matches, long-term static representations for predictive modeling of in-game events are likely to be beneficial when used in conjunction with in-game representations of individual players.
For example, a player currently passing the ball may take into account the current context of the game (i.e., a compressed representation of the game state) to estimate the most appropriate receiver that maximizes the probability of scoring.
Another concrete example is using contextual representations to identify the roles of players in a given event.
The roles of players may change during a match; how can we infer the role of a player given their trajectory and the associated context? Such questions remain to be explored, and football offers a natural testbed for this challenge.
A final challenge lies in learning representations of teams. In addition to learning representations of individual players, how to effectively represent football teams is another unresolved challenge. Football teams are usually ranked with historical match results and the collective performance of individual players. There are two problems with such methods. First, they are coarse and may not reveal the long-term playing styles of teams. Second, such representations may fail to reflect in-game dynamics. For example, one might ask what the scoring probability of each team is given the current formation on the pitch. To tackle this challenge, we aim to achieve two goals: 1) learning representations that are able to characterize the long-term playing styles of football teams, and 2) learning contextual representations of football teams that are able to depict in-game dynamics.
\subsubsection{Predictive Modeling and Decision-Making}
Learning useful representations (i.e., as opposed to hand-coded features) serves as an important means of advancing the subsequent predictive and prescriptive analysis of football matches.
Specifically, dense embeddings that summarize not only the state of a particular game, but also historical trends evident throughout many games (e.g., across seasons) will serve as enablers of the more accurate, impactful, and longer-horizon predictions of match outcomes.
The interaction between such predictive-prescriptive models is envisioned to be tightly linked with game-theoretic analysis, thus coupling this direction of research most closely with Frontier~1~(GT\&SL)\xspace and Frontier~3~(GT\&CV)\xspace (see \cref{fig:layers}).
The combination of these fields with game theory is likely to usher in new opportunities for assisting coaches and other decision-makers within football teams.
For example, models used for predictive modeling of football players at the trajectory-level~\citep{Le17,LeY0L17,sun2019stochastic} currently treat the game as a black-box dynamical process (i.e., a system of dynamic entities making decisions solely based on the joint on-pitch state of the teams);
such models do not yet account for the game-theoretically driven counterfactual responses of players to one another (e.g., taking into account the current game score, impact of current game decisions on upcoming matches, etc.).
Conducting such an analysis of these models involves identification of high-level strategies typically used by empirical game-theoretic techniques (so-called `meta-strategies')~\citep{wellman2006methods,TuylsPLHELSG20}.
These meta-strategies, for example, could be clusters of on-pitch behaviors correlated with play style, counterattack types, defense schemes (such as zonal vs. man-to-man defense), and so on.
While such meta-strategies are typically manually defined (straightforwardly so, e.g., for set-piece settings such as penalty kicks), automatically learning them prior to their input into game-theoretic analysis techniques poses an interesting challenge.
Appropriate clustering and identification of such meta-strategies involves not only access to a large, representative dataset of plays, but also the aforementioned learned representations that summarize the most relevant context for game theory models.
Identification of such meta-strategies can enable tactical decision-making at a higher level as well.
For instance, synthesis of empirical games (with associated payoffs) over the possible meta-strategies of two opposing teams can be used to forecast the performance of various team tactics when pitted against one another (e.g., investigating the Nash equilibrium of football at a higher, tactical level, rather than the typically considered low-level scenarios such as penalty kicks).
Moreover, while characterization and ranking of players (based on performance metrics) has received considerable attention in the literature~\citep{DecroosD19,decroos2019actions,bransen2020chemistry}, automated ranking of \emph{tactics} has received considerably less attention~\citep{decroos2018automatic,meerhoff2019exploring}.
Application of game-theoretic analysis techniques (e.g., ranking performance via approaches such as Nash averaging~\citep{balduzzi2018re} and $\alpha$-Rank~\citep{omidshafiei2019alpha}) remains unexplored to the best of our knowledge.
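As a sketch of what such an analysis could look like, the snippet below constructs a hypothetical empirical win-rate matrix over three tactical meta-strategies and computes an approximate Nash distribution over them via fictitious play; the payoff values are invented for illustration, and the procedure is a simple stand-in for the ranking methods cited above.
\begin{verbatim}
import numpy as np

# Hypothetical empirical win rates of the row tactic vs. the column tactic,
# e.g., estimated from historical matches between teams using these styles.
tactics = ["high_press", "counter_attack", "possession"]
win_rate = np.array([[0.50, 0.62, 0.45],
                     [0.38, 0.50, 0.58],
                     [0.55, 0.42, 0.50]])
payoff = win_rate - 0.5            # zero-sum payoff for the row player

def fictitious_play(A, iters=20000):
    """Approximate Nash strategies of a two-player zero-sum matrix game."""
    n_rows, n_cols = A.shape
    row_counts, col_counts = np.ones(n_rows), np.ones(n_cols)
    for _ in range(iters):
        row_counts[np.argmax(A @ (col_counts / col_counts.sum()))] += 1
        col_counts[np.argmin((row_counts / row_counts.sum()) @ A)] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

row_nash, _ = fictitious_play(payoff)
print(dict(zip(tactics, np.round(row_nash, 3))))
\end{verbatim}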
Moreover, identification of such meta-strategies bears potential for downstream benefits.
For instance, analysis of empirical games using meta-strategies conditioned on player identities would be beneficial for counterfactually evaluating the performance of a player of interest in a new team (i.e., for scouting purposes).
For training staff, a model that enables accurate evaluation of players' contributions to the team's overall strategy would be valuable, for example, for pinpointing which players to coach (e.g., those that deviate or miscoordinate with respect to the team strategy) or to substitute (e.g., due to fatigue or decreased performance throughout the match).
For referees, game-theoretical analysis could be used for detecting when a team is potentially cheating, e.g., intentionally playing a sub-optimal or irrational strategy to make the outcome of the game predictable and thus benefiting people who bet on it;
such approaches link to the notion of bounded rationality in game theory \citep{Simon1955,HerbSimon}.
For broadcasters, automatic identification of salient, ``exciting'' meta-strategies (e.g., those that are typically low in probability yet high in payoff, or games where there is a large difference in terms of the play styles or meta-strategies of the teams) can be used for automatic generation of highlight reels.
Learning the appropriate meta-strategies and associated predictive models is, at the same time, significantly more challenging in football (in comparison to most other sports) due to the number of players involved on-pitch (and the exponential size of the strategy space with respect to this quantity).
Despite this, the development of richer models leveraging more complex input modalities (e.g., video-based representations) is likely to unlock commensurate benefits (in terms of improved predictions and strategic decision-making) for football analysts.
\subsubsection{Human Factors}
The human-centric nature of football analytics stems from several factors: coaching and improvement of individual play and coordinated team play through predictive and prescriptive modelling, injury and fatigue prediction, and psychological and mental analysis of players. This focus distinguishes it sharply from, for example, challenges such as RoboCup~\citep{RoboCup}.
In contrast to the robot-centric focus of RoboCup (which revolves around developing robotic footballing agents~\citep{Visser,AB05,LNAI09-kalyanakrishnan-1,AAMAS11-urieli,ICLR16-hausknecht,AAAI17-Hanna}), the focus in the football analytics microcosm is entirely on understanding and improving human gameplay and team coordination based on an integrated approach from the three research areas involved. Another key difference concerns evaluation of the impact of said analysis on human play, which is distinct from evaluation of robotic agents in the RoboCup project.
Namely, the former is significantly more difficult to realistically simulate and systematically evaluate (in contrast to evaluation on robotics platforms). It also entails taking into account human factors such as injury and fatigue; inter-player relationships and their effects on play efficiency and cooperation; psychological challenges such as pressure or mental state, notably for recent transfers, and their impact on play performance; and overall player discipline and the tendency to follow the best plan for the team instead of the best plan for the individual player.
\section{Experimental Illustrations: Frontier~1~(GT\&SL)\xspace}\label{sec:results}
\subsection{Game Theory and Statistical Learning for Penalty Kick Analysis}\label{sec:egta_results}
We start this section with research we carried out at the intersection of statistical learning and game theory, exemplifying the type of novel insights research at Frontier 1 can bring forth. Specifically, we provide new insights into penalty kick analysis, giving a more concrete handle on what can practically be done by using a mix of game-theoretic tools and statistical learning.
\subsubsection{Game-theoretic Analysis of Penalty Kick}
\begin{table}[t]
\centering
\caption{Natural (N) / Non-Natural (NN) Payoff tables for Strikers (S) and Goalkeepers (G).}
\begin{subtable}[b]{0.48\textwidth}
\centering
\caption{\citet{Palacios2003} Payoff table.}\label{tab:Palacios}
\begin{tabular}{rRR}
\toprule
{} & N-G & NN-G \EndTableHeader \\
\midrule
N-S & 0.670 & 0.929 \\
NN-S & 0.950 & 0.583 \\
\bottomrule
\end{tabular}
\end{subtable}
\hfill
\begin{subtable}[b]{0.48\textwidth}
\centering
\caption{Reproduced Table.}\label{tab:OurversionofPalacios}
\begin{tabular}{rRR}
\toprule
{} & N-G & NN-G \EndTableHeader \\
\midrule
N-S & 0.698 & 0.952 \\
NN-S & 0.960 & 0.559 \\
\bottomrule
\end{tabular}
\end{subtable}\\
\par\bigskip
\begin{subtable}[b]{0.48\textwidth}
\centering
\caption{\citet{Palacios2003} table Nash probabilities.}
\begin{tabular}{rMM|MM}
\toprule
{} & NN-S & N-S & NN-G & N-G \EndTableHeader \\
\midrule
Nash & 0.393 & 0.607 & 0.432 & 0.568 \\
Empirical & 0.423 & 0.577 & 0.400 & 0.600 \\
\bottomrule
\end{tabular}
\end{subtable}
\hfill
\begin{subtable}[b]{0.48\textwidth}
\centering
\caption{Reproduced table Nash probabilities.}
\begin{tabular}{rMM|MM}
\toprule
{} & N-S & NN-S & N-G & NN-G \EndTableHeader \\
\midrule
Nash & 0.609 & 0.391 & 0.576 & 0.425 \\
Empirical & 0.545 & 0.456 & 0.698 & 0.302 \\
\bottomrule
\end{tabular}
\end{subtable}
\end{table}
\begin{comment}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{figs/egta/p-value-table-v2.png}
\caption{p-value table.}
\end{figure}
\end{comment}
\begin{figure}[t]
\centering
\includegraphics[width=0.5 \textwidth]{figs/egta/heatmap_all.png}
\caption{Visualization of shot distribution.}
\end{figure}
\begin{table}[t]
\centering
\caption{Left (L) - Center (C) - Right (R) tables for Strikers (S) and Goalkeepers (G).}
\begin{subtable}[b]{0.5\textwidth}
\centering
\caption{Payoff table.}
\begin{tabular}{rRRR}
\toprule
{} & R-G & C-G & L-G \EndTableHeader \\
\midrule
R-S & 0.617814 & 0.790419 & 0.942234 \\
C-S & 0.986486 & 0.291545 & 0.897059 \\
L-S & 1.000000 & 0.986402 & 0.602245 \\
\bottomrule
\end{tabular}
\label{tab:lcr_table_noexp}
\end{subtable}%
\begin{subtable}[b]{0.5\textwidth}
\centering
\caption{Payoff Table with missed shots excluded.}\label{tab:lcr_table_nomiss}
\begin{tabular}{rRRR}
\toprule
{} & R-G & C-G & L-G \EndTableHeader \\
\midrule
R-S & 0.647159 & 0.981934 & 1.000000 \\
C-S & 0.986486 & 0.291545 & 0.897059 \\
L-S & 1.000000 & 0.988470 & 0.602245 \\
\bottomrule
\end{tabular}
\label{tab:lcr_nomiss}
\end{subtable}\\
\par\bigskip
\begin{subtable}[b]{1.0\textwidth}
\centering
\caption{Nash probabilities vs. Empirical frequencies of \cref{tab:lcr_table_noexp}.}
\begin{tabular}{rMMM|MMM}
\toprule
{} & R-S & C-S & L-S & R-G & C-G & L-G \EndTableHeader \\
\midrule
Nash & 0.486394 & 0.110545 & 0.403061 & 0.305077 & 0.231879 & 0.463044 \\
Empirical & 0.469123 & 0.092410 & 0.438467 & 0.317612 & 0.363744 & 0.318644 \\
\bottomrule
\end{tabular}
\end{subtable}
\end{table}
\begin{comment}
\begin{figure}[t]
\centering
\includegraphics[width=0.5 \textwidth]{figs/egta/Left_Center_Right_Exp_Striker.png}
\caption{Left - Center - Right table with only experienced strikers (Appearing more than 30 times in our dataset).}
\end{figure}
\end{comment}
\begin{comment}
\begin{figure}[t]
\centering
\end{figure}
\end{comment}
\begin{table}[t]
\centering
\caption{Footedness equivalence p-value tables.}
\begin{subtable}[b]{0.5\textwidth}
\centering
\caption{Natural / Non-natural game p-value table}
\begin{tabular}{rrr}
\toprule
{} & Nat-goalie & Non-nat-goalie \EndTableHeader \\
\midrule
Nat-shot & 1.019418e-12 & 0.000858 \\
Non-nat-shot & 6.335921e-75 & 0.091388 \\
\bottomrule
\end{tabular}
\label{tab:nat_p_value}
\end{subtable}%
\begin{subtable}[b]{0.5\textwidth}
\centering
\caption{Left / Center / Right game p-value table}
\begin{tabular}{rrrr}
\toprule
{} & R-G & C-G & L-G \EndTableHeader \\
\midrule
R-S & 0.000005 & 0.789831 & 1.249979e-01 \\
C-S & 0.687820 & 0.685039 & 7.558677e-01 \\
L-S & 0.027756 & 0.111569 & 4.237702e-12 \\
\bottomrule
\end{tabular}
\label{tab:lcr_p_value}
\end{subtable}%
\end{table}
\begin{comment}
\begin{figure}[t]
\centering
\includegraphics[width=0.5 \textwidth]{figs/egta/augmented_actions.png}
\caption{Augmented actions payoff table.}
\label{tab:lr_plus}
\end{figure}
\begin{table}
\centering
\caption{Independence tests on different game aggregation hypothesis}
\begin{tabular}{ccc}
\toprule
Test conducted & p-value & Hypothesis \\ [0.5ex]
\toprule
Right-footed Natural kick = Left-footed Natural kick & 6.89e-5 & Rejected \\
\hline
Right-footed Non-Natural kick = Left-footed Non-Natural kick & 6.1e-5 & Rejected \\
\hline \\
\hline
Right-footed Right kick = Left-footed Right kick & 1.75e-5 & Rejected \\
\hline
Right-footed Center kick = Left-footed Center kick & 0.35 & Not Rejected \\ \hline
Right-footed Left kick = Left-footed Left kick & 0.036 & Rejected
\\
\hline \\
\hline
Right-footed Natural kick = Left-footed Non-Natural kick & 0.992 & Not Rejected \\
\hline
Right-footed Non-Natural kick = Left-footed Natural kick & 0.41 & Not Rejected
\\
\bottomrule
\end{tabular}
\label{tab:independence_tests}
\end{table}
\end{comment}
In this section, we highlight the insights gained from blending game theory with statistical learning for penalty-kick analysis, yielding a richer understanding of the decision-making involved in both goalkeeping and kick-taking. Specifically, we revisit the work of \citet{Palacios2003,palacios2016beautiful}, which examines whether human players follow a Nash equilibrium when taking penalties, using a substantially larger and more recent data set covering the main professional football leagues in Europe, the Americas, and Asia. We reproduce the original analysis; some of our results corroborate the claims made in the original paper, while others shed new light on them. Furthermore, we extend the analysis to larger empirical games with more action choices for both kick-takers and goalkeepers. Finally, we illustrate how combining game theory with statistical learning, here through the use of Player Vectors, substantially enriches the analysis and enables a deeper investigation into the decision-making of both kick-takers and goalkeepers.
As in \citeauthor{Palacios2003}'s work we start by creating a 2-player 2-action game based on our penalty kick data collected from major leagues in Europe, Asia and the Americas.
\Cref{tab:Palacios} illustrates the $2\times2$ normal-form game generated in \citet{Palacios2003}. The higher the scoring probability, the greener the cell; the lower, the redder.
The kick-taker is the row player and the goalkeeper is the column player; the payoffs in each cell indicate the win rate or probability of success (`score' for the kick-taker, `no score' for the goalkeeper). These values are computed from the
$15058$ penalty kicks in the data set. In the game-theory literature, such a game is referred to as an \emph{empirical} game, as it is created from real-world data~\citep{wellman2006methods,TuylsPLHELSG20}. The actions used in the payoff table in \cref{tab:Palacios} correspond to taking a shot to the natural or non-natural side for the kick-taker (Nat-shot and Non-nat-shot), and to diving to the natural or non-natural side for the goalkeeper.
Since players tend to kick with the inside of their feet, it is easier, for example, for a left-footed player to kick towards their right side (their natural side) than towards their left side (their non-natural side), with everything mirrored for right-footed players. Importantly, shots to the center count as shots to the natural side of the striker.
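To make the construction of such empirical games concrete, the following minimal sketch (in Python, using \texttt{pandas}) estimates per-cell scoring probabilities from a table of penalty-kick records; the column names and the toy records are illustrative placeholders rather than the schema of our data set.
\begin{verbatim}
# Minimal sketch: estimate an empirical payoff table from penalty-kick records.
# Column names (kicker_side, keeper_side, goal) are hypothetical placeholders.
import pandas as pd

kicks = pd.DataFrame({
    "kicker_side": ["Nat", "Nat", "Non-nat", "Non-nat", "Nat"],
    "keeper_side": ["Nat", "Non-nat", "Nat", "Non-nat", "Nat"],
    "goal":        [0, 1, 1, 0, 1],   # 1 = scored, 0 = saved or missed
})

# Empirical scoring probability for every (kicker action, keeper action) cell.
payoff = (kicks
          .groupby(["kicker_side", "keeper_side"])["goal"]
          .mean()
          .unstack("keeper_side"))
print(payoff)  # rows: kicker actions, columns: goalkeeper actions
\end{verbatim}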
\Cref{tab:OurversionofPalacios} shows our reproduction of \cref{tab:Palacios} of \citeauthor{Palacios2003}.
As can be seen, the very same trends are observed in both tables:
when the goalkeeper and the kicker do not choose the same side of the goal (seen from their respective points of view), the shot success rate is high; when the keeper dives to the same side as the striker, the success rate is higher for non-natural shots than for natural ones.
Next, we tested whether the game based on the Natural / Non-natural side distinction can be assumed to be identical for left-footed and right-footed players, as is done in the work of \citet{Palacios2003}. We conducted statistical tests verifying whether per-cell scoring rates are identical across footedness types. The tests' p-values, reported in \cref{tab:nat_p_value}, reveal that this game is mostly not statistically identical for left-footed and right-footed players.
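As an illustration of the type of test involved, the sketch below compares the scoring rates of left- and right-footed kickers within a single cell of the game via a chi-squared test on the corresponding contingency table; the counts shown are placeholders, and the exact procedure behind \cref{tab:nat_p_value} may differ in its details.
\begin{verbatim}
# Hedged sketch of a per-cell footedness test: for a fixed (shot, dive) cell,
# compare scoring rates of left- and right-footed kickers with a chi-squared
# test on the 2x2 table of goals vs. non-goals (counts are placeholders).
from scipy.stats import chi2_contingency

#                 goals  non-goals
cell_counts = [[   310,        90],   # left-footed kickers in this cell
               [   540,       180]]   # right-footed kickers in this cell

chi2, p_value, dof, expected = chi2_contingency(cell_counts)
print(f"p-value for 'scoring rates identical across footedness': {p_value:.3g}")
\end{verbatim}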
Since this footedness-based aggregation did not hold statistically, our next step consisted in verifying whether a simpler aggregation method would also be statistically rejected. We therefore investigated games defined by Left / Center / Right actions, aggregating scoring rates over both foot preferences, resulting in the table of \cref{tab:lcr_table_noexp}. Note that in this case, Left / Center / Right are measured from the goalkeeper's perspective, which means that the natural kick of a right-footed player is counted as a Right kick.
In terms of interpretation, when the goalkeeper dives to the same side as the ball, scoring probabilities are much lower than when the two go in different directions, and when both the ball and the goalkeeper remain in the center, the scoring probability is very low. One striking feature of this table is the cell for a Right shot against a Center goalkeeper, whose scoring probability is much lower than that of the cell for a Left shot against a Center goalkeeper. \Cref{tab:lcr_nomiss} illustrates this: players tend to miss the goal much more often when the goalkeeper remains in the center.
As the p-value table (\cref{tab:lcr_p_value}) shows, this game description holds at the 1\% threshold for every cell except the top-left and bottom-right cells, i.e., when the goalkeeper dives to the same side as the ball. This aggregation method is thus statistically sound in most cells, but differs by footedness in two key cells.
Best practice would therefore consist in considering two payoff tables, one per footedness, and potentially merging cell data where results appear footedness-independent.
Note that behavior and shooting styles also vary considerably from player to player beyond footedness. If one considers several payoff tables, it seems natural to also take playing styles into account and to derive statistically-clustered tables instead of using a single division criterion, as the next section illustrates.
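For completeness, the Nash equilibria reported in the tables of this section can be recovered from an empirical payoff table by solving the standard linear program for two-player zero-sum (here, constant-sum) games. The sketch below computes the kicker's maximin mixed strategy with \texttt{scipy}; the payoff entries are rounded values from \cref{tab:lcr_table_noexp}, the goalkeeper's strategy is obtained analogously from the column player's program, and the computations reported in this paper may rely on a different solver.
\begin{verbatim}
# Sketch: kicker's Nash (maximin) strategy of a zero-sum empirical game via LP.
import numpy as np
from scipy.optimize import linprog

def maximin_strategy(A):
    # A: payoff matrix (rows: kicker actions, cols: keeper actions, P(score)).
    m, n = A.shape
    c = np.concatenate([np.zeros(m), [-1.0]])      # minimise -v, i.e. maximise v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])      # v - A[:, j] . x <= 0 for all j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)   # sum(x) == 1
    b_eq = [1.0]
    bounds = [(0, None)] * m + [(None, None)]      # x >= 0, v unconstrained
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[m]

A = np.array([[0.618, 0.790, 0.942],   # rounded entries of the L/C/R table
              [0.986, 0.292, 0.897],   # rows: R-S, C-S, L-S
              [1.000, 0.986, 0.602]])  # cols: R-G, C-G, L-G
kicker_strategy, game_value = maximin_strategy(A)
print(kicker_strategy, game_value)
\end{verbatim}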
\subsubsection{Augmenting Game-theoretic Analysis of Penalty Kick with Embeddings}
\begin{table}[t]
\centering
\caption{Cluster statistics.}
\begin{tabular}{rrrrr}
\toprule
{} & \#players & \#goals & \#shots & success rate\% \EndTableHeader \\
\midrule
Cluster\#0 & 250 & 549 & 729 & 75.3 \\
Cluster\#1 & 157 & 171 & 225 & 76.0 \\
Cluster\#2 & 198 & 337 & 428 & 78.7 \\
\midrule
Total & 605& 1057& 1382 & 76.4\\
\bottomrule
\end{tabular}
\label{tab:cluster_stats}
\end{table}%
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{figs/egta/cluster_vis.png}
\caption{Validation of striker clusters. We label each cluster with the most representative player in that cluster. The outliers are removed before clustering. \gc{Figure under revision.}}
\label{fig:pca_vis}
\end{figure}
\begin{table}[t]
\centering
\caption{Nash probabilities of individual striker clusters and all players.}
\begin{subtable}[b]{0.5\textwidth}
\centering
\caption{Nash probabilities of all players.
}
\begin{tabular}{rMMM}
\toprule
{} & Right & Center & Left \EndTableHeader \\
\midrule
Striker & 0.550935 & 0.037306 & 0.411758 \\
Goalkeeper & 0.185934 & 0.367172 & 0.446892\\
\bottomrule
\end{tabular}
\label{tab:egta_pv_all}
\end{subtable}%
\begin{subtable}[b]{0.5\textwidth}
\centering
\caption{Nash probabilities of strikers in Cluster\#0.}
\begin{tabular}{rMMM}
\toprule
{} & Right & Center & Left \EndTableHeader \\
\midrule
Striker & 0.451982 & 0.11375 & 0.434264 \\
Goalkeeper & 0.242709 & 0.328820 & 0.428469 \\
\bottomrule
\end{tabular}
\label{tab:egta_pv_c0}
\end{subtable}\\
\par\bigskip
\begin{subtable}[b]{0.5\textwidth}
\centering
\caption{Nash probabilities of strikers in Cluster\#1.}
\begin{tabular}{rMMM}
\toprule
{} & Right & Center & Left \EndTableHeader \\
\midrule
Striker & 0.391804 & 0.021409 & 0.586785 \\
Goalkeeper & 0.031672 & 0.58461 & 0.383716 \\
\bottomrule
\end{tabular}
\label{tab:egta_pv_c1}
\end{subtable}%
\begin{subtable}[b]{0.5\textwidth}
\centering
\caption{Nash probabilities of strikers in Cluster\#2.}
\begin{tabular}{rMMM}
\toprule
{} & Right & Center & Left \EndTableHeader \\
\midrule
Striker & 0.523431 & 0.059715 & 0.416852 \\
Goalkeeper & 0.026930 & 0.443249 & 0.529819 \\
\bottomrule
\end{tabular}
\label{tab:egta_pv_c2}
\end{subtable}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{figs/egta/pv_heatmap4.png}
\caption{Heatmaps of goals by all strikers and strikers in individual clusters.
}
\label{fig:egta_pv_cluster_heatmap}
\end{figure}
The analysis described in the previous section is more appealing if its results can be used in practice to help goalkeepers and strikers improve their performance. In particular, from the perspective of interactive decision-making, we are interested in answering the question: \textit{what are the best strategies for a player when playing against different opponents}? Ideally, we would repeat the game-theoretic analysis for every striker-goalkeeper pair, but data sparsity in the real world makes such an analysis unreliable. A compromise is to aggregate players with similar playing styles and summarize their strategies. In this section, we address the question by proposing an algorithm that integrates statistical learning and game-theoretic analysis. This solution also instantiates the concept we envisioned in Frontier~1.
Technically, the algorithm consists of three steps. 1) We follow the VAEP method to summarize the playing styles of strikers as 19-dimensional \textit{player vectors}. In particular, we focus only on players in the Premier League, and the player vectors are extracted from historical playing trajectories in real matches. The dimensions of the player vectors correspond to individual player actions on the pitch (e.g., pass, take-on, shot), and the value of each dimension quantifies the impact of that action on match results. This yields 607 vectors of individual players. 2) We apply the \textit{KMeans} algorithm to cluster these striker vectors. To obtain a suitable number of clusters, we first reduce the dimensionality of the vectors from 19 to 10 with Principal Component Analysis, and then remove 2 clearly distant outliers from the 607 points. We select the number of clusters for the remaining 605 points as the one yielding the most significant drop in inertia, which gives 3 clusters; the statistics of the clusters are summarized in Table \ref{tab:cluster_stats}. 3) We conduct the game-theoretic analysis for each cluster, together with the complete set of players, following the method proposed in the previous section.
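The clustering step can be sketched as follows with \texttt{scikit-learn}; the player vectors below are random placeholders standing in for the real feature vectors, and the inertia-based selection rule is a simple elbow heuristic.
\begin{verbatim}
# Hedged sketch of the clustering step: PCA to 10 dimensions, then k-means,
# choosing k at the largest drop in inertia (placeholder data, not real vectors).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
player_vectors = rng.normal(size=(605, 19))   # placeholder for real player vectors

reduced = PCA(n_components=10).fit_transform(player_vectors)

inertias = {}
for k in range(2, 9):
    inertias[k] = KMeans(n_clusters=k, n_init=10, random_state=0).fit(reduced).inertia_

# Pick k with the largest drop in inertia relative to k - 1 (simple elbow rule).
drops = {k: inertias[k - 1] - inertias[k] for k in range(3, 9)}
best_k = max(drops, key=drops.get)

labels = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(reduced)
\end{verbatim}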
To address the question of how our game-theoretic analysis could help players choose strategies against different opponents in practice, we focus on evaluating whether the striker clusters are distinguishable with respect to two aspects: 1) whether the clusters are geometrically separable, and 2) whether the Nash strategies of different clusters differ. For 1), in addition to the inertia analysis used to select the number of clusters, we visualize the clusters and label each cluster with its most representative player, in the sense that his feature vector is closest to the mean of the corresponding cluster. Clusters 0, 1, and 2 are visually separable, as shown in Figure \ref{fig:pca_vis}. For 2), to quantify the strategic differences with respect to Nash probabilities over the same action set defined in the previous section, we conduct the game-theoretic analysis on each cluster (Tables \ref{tab:egta_pv_c0}, \ref{tab:egta_pv_c1} and \ref{tab:egta_pv_c2}) together with the complete set of players (Table \ref{tab:egta_pv_all}). First of all, Table \ref{tab:cluster_stats} shows that the strikers in different clusters have very similar penalty-kick success rates, which implies that they are almost equally competent. With this prerequisite, Tables \ref{tab:egta_pv_all} and \ref{tab:egta_pv_c0}--\ref{tab:egta_pv_c2} show that the Nash probabilities of strikers in different clusters exhibit clearly different patterns: strikers in Cluster\#1 tend to shoot to the left of the goalmouth, while Cluster\#2 shows the opposite tendency, similar to the strategy aggregated over all players. Strikers in Cluster\#0 play a more balanced strategy, with almost equal Nash probabilities of shooting to the left and right of the goalmouth. Such information may be used to inform goalkeepers on how to adjust their strategies when playing against different types of strikers. Qualitatively, the heatmaps of successful goals scored by the strikers also vary from cluster to cluster (\cref{fig:egta_pv_cluster_heatmap}).
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/ghosting/ghosting_before.png}
\caption{Original data.}
\label{fig:ghosting_before}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/ghosting/ghosting_after.png}
\caption{After perturbation of ball direction.}
\label{fig:ghosting_after}
\end{subfigure}%
\caption{Blue: attackers (ground truth). Red: defenders (ground truth). White: ball (ground truth). Yellow: defenders (predictions). \gc{Legend will be added.}}
\end{figure}
\subsection{Predictive Models for Counterfactual Analysis}
We here present an illustrative example to ground the earlier discussion regarding the potential impacts of using learned models to conduct counterfactual analysis of football matches.
Consider, for instance, the problem of predictive modeling of players at the trajectory level.
Using tracking data available for on-pitch players, one can train models that predict the future trajectories of players given a finite-horizon input context.
For example, one might train such a model on league data (e.g., as done in \citet{LeY0L17}), provide an input context to such a model (e.g., consisting of the true state of the ball, defenders, and attackers up to some point in time), and subsequently visualize the league-average prediction conditioned on this input context (e.g., as visualized in \cref{fig:ghosting_before}).
As pointed out in the literature~\citep{Le17}, a key advantage of generative predictive models is that they can be used for counterfactual analysis of play outcomes.
We illustrate such an example in \cref{fig:ghosting_after}, where we perturb the trajectory of the ball, inferring the subsequent behaviors of defenders in reaction (noting, e.g., the tendency of the goalkeeper to chase the ball in reaction to it entering the penalty area).
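As a minimal illustration of the kind of model involved, the sketch below defines a sequence model that maps a context window of ball and attacker coordinates to the defenders' next positions; the architecture, feature layout, and dimensions are illustrative assumptions and not the model used to produce \cref{fig:ghosting_before,fig:ghosting_after}.
\begin{verbatim}
# Hedged sketch (PyTorch) of a defender-trajectory model for ghosting-style
# predictions; all dimensions and the architecture are illustrative choices.
import torch
import torch.nn as nn

class DefenderModel(nn.Module):
    def __init__(self, n_attackers=11, n_defenders=11, hidden=128):
        super().__init__()
        in_dim = 2 * (1 + n_attackers)     # (x, y) for ball + attackers
        out_dim = 2 * n_defenders          # (x, y) for each defender
        self.encoder = nn.LSTM(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, context):            # context: (batch, T, in_dim)
        _, (h, _) = self.encoder(context)
        return self.head(h[-1])            # predicted defender positions

model = DefenderModel()
context = torch.randn(8, 50, 24)           # 8 sequences of 50 frames each
pred = model(context)

# Counterfactual use: perturb the ball channels of `context` (e.g., shift the
# ball trajectory) and re-run the model to obtain the defenders' response.
\end{verbatim}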
While simple case-by-case counterfactual studies such as the above have been conducted to some extent in the literature, consideration of responses to more complex perturbations (e.g., changes of one team's tactics or meta-strategy as a whole, changes in player behavior due to injuries, or changes due to substitutions of individual players) bears potential for significantly more in-depth analysis.
\newpage
\section{Discussion}
\gc{THIS SECTION IS CURRENTLY BEING WRITTEN}
\section{Methods}
\gc{THIS SECTION IS CURRENTLY BEING WRITTEN}
In this section we describe the methods employed to obtain the results described in this paper.
\subsection{Dataset}
\gc{THIS SECTION IS CURRENTLY BEING WRITTEN}
\subsection{Game Theory}
\gc{THIS SECTION IS CURRENTLY BEING WRITTEN}
(Empirical) game-theoretic modeling of penalty-kick taking and set pieces facilitates strategic understanding of player and team behavior under various circumstances (e.g., play according to a Nash equilibrium), and can assist both in predicting opponent behavior and in prescribing how a player (or team) should behave in the presence of other players (teams). These game-theoretic models can be leveraged in pre- and post-match analysis, and can be combined with analysis of dynamic trajectory behavior (e.g., ghosting, as described in a later section). Additionally, the models can be enriched by carrying out Empirical Game-Theoretic Analysis (EGTA) on meta-game models of set pieces, automatically clustering and identifying useful meta-strategies, and providing insights into higher-level team strategies, using techniques such as replicator dynamics and Alpha-Rank.
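As a minimal illustration of one such technique, the sketch below runs single-population replicator dynamics on a meta-game payoff matrix; the matrix is an arbitrary placeholder rather than a meta-game estimated from football data.
\begin{verbatim}
# Hedged sketch of single-population replicator dynamics on a placeholder
# meta-game payoff matrix (strategy 0 strictly dominates in this toy example).
import numpy as np

def replicator_step(x, A, dt=0.01):
    # One Euler step of dx_i/dt = x_i * ((A x)_i - x^T A x).
    fitness = A @ x
    avg_fitness = x @ fitness
    return x + dt * x * (fitness - avg_fitness)

A = np.array([[0.6, 0.7, 0.8],
              [0.4, 0.5, 0.6],
              [0.2, 0.3, 0.4]])
x = np.array([0.2, 0.3, 0.5])      # initial mixture over meta-strategies
for _ in range(5000):
    x = replicator_step(x, A)
print(x)                           # mass concentrates on the dominant strategy
\end{verbatim}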
\subsection{Ghosting}
Ghosting refers to prescribing the trajectories that the players of a sports team should have executed, in contrast to what they actually did [Lowe13]. This is a rich problem class, with benefits ranging from recommending trajectories or setups for set pieces, to short-term plays involving a subset of players, and eventually to long-term strategies and plays for the entire team. Team-level ghosting would also strongly benefit from game-theoretic and multi-agent considerations, thus drawing on DeepMind's expertise, and is expected to play a key role in an established AVAC system.
\subsection{Vision}
\bibliographystyle{plainnat}
\section{Enhanced Abstract}
\begin{figure}[t]
\centering
\includegraphics[width=0.85\textwidth]{figs/overview_frontiers.pdf}
\caption{Overview.}\label{fig:overview}
\end{figure}
\subsection{Background}
Recent years have seen a tremendous growth of interest in sports analytics research, not only from an economic and commercial point of view, but also from a purely scientific perspective, viz. the growing number of publications and scientific events organized on the topic (e.g., the MIT Sloan Sports Analytics Conference, the CVPR International Workshop on Computer Vision in Sports, and the ECML/PKDD Workshop series on Machine Learning and Data Mining for Sports Analytics).
As evident in a multitude of downstream domains that have benefited from recent applications of artificial intelligence (AI) and machine learning (ML), this is due to important technological advances in data collection and processing capabilities, progress in statistical learning with the advent of \textit{deep learning}, increased compute resources, as well as ever-growing economic activities associated with sports and culture (e.g., emergent consultancy ventures revolving around sports data collection and statistics~\citep{soccernomics}).
\textit{Predictive analytics} has been investigated and applied in the context of several sports in the past decades, including, e.g., basketball~\citep{skinner2010price}, tennis~\citep{walker2001minimax}, and baseball (i.e., sabermetrics), with the latter systematically collecting data since the 19th century~\citep{baumer2014sabermetric}.
Although statistical analysis of data has led to many impressive outcomes in these various sports (e.g., Moneyball in baseball~\citep{baumer2014sabermetric}), football started participating rather late in this data collection and number crunching game, with the data science transformation that informs stakeholders (e.g., decisions related to player transfers, scouting, pre- and post-match analysis, etc.) still in its infancy \citep{soccernomics}.
Several factors influenced this late arrival.
Firstly, football takes place under far less controllable settings than other sports due to its outdoor and highly dynamic nature, a larger pitch, a large number of players involved, and longer non-interrupted game sequences than sports such as basketball.
Secondly, professional companies have only relatively recently started collecting so-called big data (e.g., high-resolution videos, annotated event-streams, player tracking and pose information).
Concurrently, only recently have major breakthroughs been made in deep learning, yielding techniques that can handle such new high-dimensional data sets~\citep{bengio2009learning}.
Finally, for a long time, credibility in decision-making primarily depended on human specialists such as managers, retired players, and scouts, all of them with track records and experience in professional football~\citep{soccernomics}.
Arrigo Sacchi, a successful Italian football coach and manager who never played professional football in his career, responded to criticism over his lack of experience with his famous quote, phrasing his sentiment around credibility as follows when becoming a coach at Milan in 1987: ``I never realised that to be a jockey you had to be a horse first."
As a result, the potential influence and gains of predictive analytics on the football game have also been less obvious in the past decades, with the credibility of sports analytics as a game-changing phenomenon not recognized until recent years.
In more philosophical terms, \citet{soccernomics} highlight a cultural hesitation regarding the integration of data science into football and an overdependence on gut instincts, noting that ``until very recently, soccer had escaped the Enlightenment".
\subsection{Advances}
Despite football's late adoption of sports analytics, there are a number of early bird approaches from different areas of AI such as statistical learning (SL), computer vision (CV), and game theory (GT) that are making initial contributions to support decision-making of managers, coaches and players.
For example, basic machine learning tools such as principal component analysis (PCA) already enable automated means of identifying player types~\citep{DecroosD19}, training of models predicting trajectories of individual teams or imitating league-average behaviors~\citep{Le17}, and valuing individual player decisions (such as passes or tackles) in a series of actions leading up to a goal~\citep{DecroosBHD19}.
The study of interactive decision-making as formalized by game theory plays a critical role in AI for systems involving more than one actor (human or artificial).
Game theoretic tools shed light on players' strategic interactions during scenarios such as penalty kicks, analysis of their decisions in comparison to mathematically-principled baselines, and prediction of their goal-scoring probabilities when playing according to a mixed strategy Nash equilibrium~\citep{Palacios2003}.
Enriched with recent techniques from \emph{empirical game theory}~\citep{wellman2006methods} the effects of various high-level strategies pitted against each other can also be analyzed.
Finally, latest techniques and developments from computer vision have been used for player tracking, pose estimation, and automated injury prediction based on, e.g., gait and fatigue analysis.
\subsection{Outlook}
While these separate areas within AI research have independently been demonstrated to be effective when targeting the above specific prediction and analysis problems in football, we believe the most fruitful perceived benefits (and, correspondingly, the most interesting and challenging research problems) to lie in the underexplored intersection of the subfields of statistical learning, computer vision, and game theory.
We argue that the domain spanned together by these three fields of research is the most promising for establishing seminal progress in football analytics in the coming years.
We lay out this vision in \cref{fig:overview}.
Specifically, we pinpoint several frontiers at the intersections of these three fields, with the ultimate frontier being identified as the microcosm where challenges are tackled that require integrated approaches built on fundamentals of the three areas.
A large portion of the current state-of-the-art research in football analytics, by contrast, typically falls under the umbrella of one of these areas, with some initial activities taking place at Frontier~2~(SL\&CV)\xspace, and no notable research activities identified at Frontier~1~(GT\&SL)\xspace and Frontier~3~(GT\&CV)\xspace for football, or sports in general.
To make the potential opportunities more concrete, we detail case studies focused on empirical game-theoretic analysis of penalty kicks.
At Frontier~1~(GT\&SL)\xspace, game-theoretic analysis is blended with learned predictive models.
Research along this axis of the microcosm focuses on a combination of interactive decision-making with predictive modeling providing more concrete and deeper analysis tools.
We present a detailed case study illustrating this frontier, revisiting the seminal work of \citet{Palacios2003} on penalty kicks under this new perspective and illustrate how mixing with SL provides deeper insights into penalty kick taking.
Frontier~2~(SL\&CV)\xspace focuses on research that integrates statistical learning with computer vision.
Research along this axis directly learns from video as the primary input and builds predictive models, for instance forecasting player and team behaviors directly.
At Frontier~3~(GT\&CV)\xspace, we classify research integrating computer vision and game theory, a largely uncharted territory focusing on generative models based on visual inputs that take strategic interactions into account.
We contend that the above frontiers culminate into a unique microcosm mutually benefiting both AI and football analytics research, wherein it becomes possible to develop, for example, an \textit{Automated Video Assistant Coach} (AVAC). The AVAC system is an example of what we believe to be the future of human-centric AI research for football, in which the aim is to integrate all aspects of the frontiers into a cohesive system enabling both understanding and improving of human football play.
The advantage this microcosm brings is that while it makes clear what AI research can mean for football in the long-term, it has the dual effect of defining football as a new human-centric AI research domain with major challenges that can progress the field through cross-fertilization of ideas from the three axes.
\bibliographystyle{plainnat}
\section{Dataset}
\so{Potentially move to supp info?}\kt{yes agreed}
\end{document}
\section{Introduction}
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\textwidth]{figs/overview_frontiers.pdf}
\caption{AI research frontiers associated with football analytics. We highlight three foundational areas related to AI research--statistical learning, computer vision, and game theory--which have independently been demonstrated to be effective in analyzing the game of football (with example problems and works from literature shown above per field). We argue that the domain spanned by these research fields is most promising for establishing seminal progress in football analytics in the coming years, along 3 frontiers: Frontier~1~(GT\&SL)\xspace, Frontier~2~(SL\&CV)\xspace, and Frontier~3~(GT\&CV)\xspace. We claim that the above frontiers culminate into a unique microcosm mutually benefiting AI and football analytics.}
\label{fig:overview}
\end{figure}
Recent years have seen a tremendous growth of interest in sports analytics, not only from an economic and commercial perspective, but also from a purely scientific one, viz.\,the growing number of publications~\citep{baumer2014sabermetric,shih2017survey,beal_norman_ramchurn_2019} and scientific events organized on the topic (e.g., \citet{mit_sloan_conf}, \citet{cvsports_workshop}, and \citet{kdd_ml_sa}).
As evident in many different downstream domains that have benefited from applications of artificial intelligence (AI) and machine learning (ML), this is due to important technological advances in data collection and processing capabilities, progress in statistical and in particular {deep learning}, increased compute resources, and ever-growing economic activities associated with sports and culture (e.g., emergent consultancy ventures revolving around sports data collection and statistics~\citep{beal_norman_ramchurn_2019,opta_sports,ChyronHego,InStat,StatsBomb,soccernomics}).
{Predictive analytics} has been investigated and applied in the context of several sports in the past decades, including basketball~\citep{skinner2010price}, tennis~\citep{walker2001minimax,gauriot2016nash}, and baseball~\citep{Albert02,Lewis04,costa2009practicing,Song17,Puerzer02,Albert10,baumer2014sabermetric}, with data for the latter having been systematically collected since the 19$^{\text{th}}$ century.
Although statistical analysis of data has led to impressive outcomes in various sports (e.g., Moneyball in baseball~\citep{Lewis04,baumer2014sabermetric}), football started participating rather late in this data collection and number-crunching game, with the data science transformation that informs stakeholders (e.g., decisions related to player transfers, scouting, pre- and post-match analysis, etc.) still in its infancy \citep{soccernomics}.
Several factors influenced this late arrival.
Football takes place under far less controllable settings than other sports due to its outdoor and highly dynamic nature, a larger pitch, a large number of players involved, a low number of player changes and longer non-interrupted game sequences than sports such as basketball.
As a result, football analytics companies have only relatively recently started collecting so-called big data (e.g., high-resolution videos, annotated event-streams, player tracking and pose information).
Concurrently, only recently have major breakthroughs been made in deep learning, yielding techniques that can handle such new high-dimensional data sets \citep{bengio2009learning,Arel10,lecun2015deeplearning,Schmid15,Goodfellow-et-al-2016}.
Finally, for a long time, credibility in decision-making primarily depended on human specialists such as managers, retired players, and scouts, all of them with track records and experience in professional football, in part due to cultural reasons~\citep{soccernomics,DecroosD19}.
As a result of these various factors, the potential influence and gains of predictive analytics on the football game have also been less obvious, with sports analytics as a game-changing phenomenon not realized until recent years.
In more philosophical terms, \citet{soccernomics} highlight a cultural hesitation regarding the integration of data science into football and an overdependence on gut instincts, noting that ``until very recently, soccer had escaped the Enlightenment".
Despite football's late adoption of sports analytics, there are a number of early-bird approaches from different areas of AI such as statistical learning (SL), computer vision (CV), and game theory (GT) that are making initial contributions to support decision-making of managers, coaches and players.
For example, basic statistical learning tools such as principal component analysis (PCA) already enable automated means of identifying player types \citep{DecroosD19}, training of models predicting trajectories of individual teams or imitating league-average behaviors \citep{Le17}, and valuing individual player decisions (such as passes or tackles) in a series of actions leading up to a goal \citep{DecroosBHD19}.
The study of interactive decision-making as formalized by game theory plays a critical role in AI for systems involving more than one actor (human or artificial).
Game-theoretic tools shed light on players' strategic interactions during scenarios such as penalty kicks, analysis of their decisions in comparison to mathematically-principled baselines, and prediction of their goal-scoring probabilities when playing according to a mixed-strategy Nash equilibrium~\citep{Palacios2003,chiappori2002testing,palacios2016beautiful}.
Enriched with {empirical game theory}~\citep{wellman2006methods, TuylsPLHELSG20,omidshafiei2019alpha} the effects of various high-level strategies pitted against each other can also be analyzed.
Finally, recent developments in computer vision have been employed for player tracking~\citep{Lu13,Liu2013,Bialkowski2015,Gade18}, pose estimation~\citep{Zhang_2019_CVPR,Fastovets,Bridgeman_2019,Sypetkowski19,Sypetkowski}, and automated injury prediction~\citep{Kampakis} based on, e.g., gait and fatigue analysis \citep{Meert18,Ramos20,Claudino,Kakavas,Bartlett}.
While these separate areas within AI research have independently been demonstrated to be effective for football analytics, we believe that the most pertinent research problems lie in the underexplored intersection of statistical learning, computer vision, and game theory (see \cref{fig:overview}).
Specifically, we pinpoint several frontiers at the intersections of these fields, and identify the ultimate frontier to be the microcosm requiring integrated approaches across all three fields.
A large portion of the state-of-the-art research in football analytics, by contrast, typically falls under the umbrella of one of these areas, with some initial activities taking place at Frontier~2~(SL\&CV)\xspace~\citep{Lu13,Mora17,Mehrasa,Choi_2019_ICCV,Quiroga_2020}, and no notable research activities identified at Frontier~1~(GT\&SL)\xspace and Frontier~3~(GT\&CV)\xspace for football or sports in general.
At Frontier~1~(GT\&SL)\xspace, game-theoretic analysis is blended with learned predictive models, combining interactive decision-making with predictive modeling to provide more granular analysis tools.
We present a detailed case study illustrating this frontier, revisiting the seminal work of \citet{Palacios2003} on penalty kicks under this new perspective and illustrate how mixing with SL provides deeper insights into penalty-kick taking.
Frontier~2~(SL\&CV)\xspace focuses on research that integrates statistical learning with computer vision, directly learning from video as the primary input and building predictive models (e.g., forecasting player and team behaviors directly).
At Frontier~3~(GT\&CV)\xspace, we classify football research integrating computer vision and game theory, a largely uncharted territory focusing on generative models based on visual inputs that take strategic interactions into account.
We claim that the above frontiers culminate into a unique microcosm mutually benefiting both AI and football analytics, to the point it becomes possible to develop, for example, an {Automated Video Assistant Coach} (AVAC). The AVAC system is an example of what we believe to be the future of human-centric AI research for football, with the aim of integrating all aspects of the frontiers into a cohesive system enabling both understanding and improvement of human football play.
Such an AVAC system is envisioned to improve the overall experience of the game for players, coaches, and spectators alike.
In the following, we first provide an overview of the literature associated with football analytics, subsequently highlighting gaps and sketching a long-term vision for research in the football microcosm.
We lay out the long-term perspective of how AI can benefit the domain of football analytics, and vice versa, by combining the three identified axes in the literature in manners that have not yet been fully explored.
Next, we present illustrative examples that examine penalty-kick taking from a game-theoretic perspective, building on the work of \citet{Palacios2003,palacios2016beautiful}, and bringing several new insights based on data from the main European leagues.
We subsequently demonstrate how this game-theoretic work can be enriched via integration with statistical learning at Frontier~1~(GT\&SL)\xspace, providing several new insights about penalty-kick taking followed by a high-level example of counterfactual trajectory predictions (i.e., a what-if analysis), thus further justifying the football microcosm as a useful AI research domain.
\section{Literature Overview: AI for Football Analytics}
The following sections outline a long-term research vision for AI applied to football analytics.
We first consider the state-of-the-art with respect to each of the respective areas (GT, SL, and CV) applied to the football domain, after which we look into the opportunities that each of the frontiers brings forth to unlocking larger scientific challenges in football analytics.
The following sections summarize works that lie in these peripheral fields of \cref{fig:overview} and highlight opportunities for further work and added value in each.
\subsection{Statistical Learning}
Football is arguably the most challenging to analyze of all the major team sports. It involves a large number of players with varied roles, few salient events, and minimal scoring.
Statistical football analysis attempts to provide quantitative answers to questions that pertain to different aspects of the game.
Notably, these include the problem of characterizing players' and teams' styles, evaluation of the impact that such teams have on the pitch, and the temporal and counterfactual predictions of players' actions.
When one compares styles of players (and teams, by extension), one usually refers to high level and abstract notions that summarize their unique characteristics.
The goal of the statistical learning line of research is to learn features capturing such information, normally to be used by other down-stream tasks.
For instance, one means of summarizing information about a football player is through aggregation of their play statistics (e.g., offensive, defensive, or dribbling abilities, shots on target, etc.), and that of their teams~\citep{fernandez2016attacking,stats_playing_styles_2020}.
While such statistics are typically either handcrafted or rely on simple measures of play outcomes, recent works have analyzed them from a statistical learning perspective, using notions such as Player Vectors~\citep{DecroosD19} (detailed in \cref{sec:player_vectors}), with analogous techniques in sports such as basketball~\citep{franks2015characterizing}.
Given the growing success of unsupervised learning methods, there is potential for more advanced representations of player traits to be learned directly from the data.
In football, it is particularly difficult to assess which players, or groups of players, deserve credit for favorable outcomes.
For high-scoring team sports (e.g., basketball) one can provide a reasonable answer to this question by restricting attention to actions that have immediate impact on scoring.
By contrast, few goals are scored in football (e.g., 2.72 goals were scored on average per game of the 2019/2020 Premier League season~\citep{pl_goals_2019}).
Consequently, models considering only actions with immediate impact on goals (e.g.\,shots or saves) capture a crucial yet narrow view of the game.
Moreover, game states in football are significantly more complex than those estimated by current models.
Features describing them are mostly hand-crafted and only consider on-ball actions. A given pass might be extremely valuable or a poor choice depending on the disposition of the players.
While these methods are able to value on-ball actions, they rely on sparse signals provided by goals.
Concurrently, off-ball actions significantly impact each game, as exemplified by player actions covering certain pitch areas to prevent attacks, running to open areas to create space for teammates, and so on.
Due to the temporally extended nature of football, the task of inferring the value of actions is an instance of the temporal credit assignment problem in reinforcement learning (RL) \citep{minsky1961steps}.
The combination of RL techniques with deep learning has great potential to tackle the idiosyncrasies of football and to close the gap between statistical and human analysis.
It is exciting to see recent progress in this direction in sports analytics, including ice hockey \citep{guiliang2018deep} and football \citep{sun2020cracking, liu2020Deep} (with additional related works detailed in \cref{sec:additional_sl_works}).
Overall, while the above pieces of work showcase the promise of modern statistical methods for temporal predictions in football, this remains an open and challenging problem that will likely require development of novel methods and means of leveraging the diversity of newly-available football data (as later made more precise in terms of what football can do for AI).
\subsection{Game Theory}
Game theory plays an important role in the study of sports, enabling theoretical grounding of players' behavioral strategies.
Numerous works have applied game-theoretic analysis to sports over recent decades~\citep{Sindik2008,Lennartsson15}, including football~\citep{Palacios2003,MOSCHINI2004,Azar2011,Levitt,Buzzachi14,coloma2012penalty,chiappori2002testing}, tennis~\citep{walker2001minimax,gauriot2016nash}, volleyball~\citep{lin2014applying}, basketball~\citep{skinner2010price}, and American football~\citep{emara2014minimax}.
High-level game theory can be applied to team selection and choice of formation.
Set pieces, such as corner kicks and penalties, are particularly amenable to game-theoretic analysis, wherein identified high-level strategies can be pitted against one another and ranked in terms of empirical performance (see \cref{sec:results} for details).
Due to the real-world nature of football, the types of game-theoretic analysis conducted in this domain have typically been driven by the availability of data sources (in contrast to, e.g., simulation-based domains, where data of varying types and granularity can be synthetically generated).
In particular, the volume of available high-level statistics (e.g., match outcomes spanning across different leagues, seasons, and competition levels) makes football a particularly attractive topic from a behavioral game-theoretic perspective~\citep{camerer2011behavioral,wellman2006methods}.
From a theoretical perspective, the majority of the existing works exploit the fact that various football scenarios can be modeled as two-player zero-sum games.
For example, in football, the penalty kick situation may be straightforwardly modeled as a two-player asymmetric game, where the kicker's strategies may be neatly categorized as left, center, or right shots.
The controlled nature of these scenarios compounds their appeal from a quantitative analysis perspective;
in the penalty example, the penalty taker, goalkeeper, and ball's initial positions are generally static across all dataset trials (albeit, with minor variations, e.g., scenarios where goalkeepers stand slightly off-center to entice the kicker to shoot in one direction).
In fact, the majority of the literature that analyzes football under a game-theoretic lens focuses on penalty kicks~\citep{Palacios2003,Azar2011,Levitt,Buzzachi14,coloma2012penalty,chiappori2002testing}, which we contend is due to this amenability for analysis via classical game-theoretic solution concepts (such as Nash equilibria).
However, until the paradigm shifts away from the classical analysis of set pieces and toward live-play settings (such as those considered by \citet{MOSCHINI2004}), the potential benefits of game-theoretic analysis for football are likely to remain largely untapped.
Moving beyond this classical paradigm involves resolution of significant challenges:
the number of active players in football is quite large (22 including goalkeepers) and the exponential size of the strategy space (with respect to the number of players) makes it more challenging to analyze than games such as tennis or basketball;
the mapping of low-level player actions to strategies is no longer straightforward in durative live plays, due to the variability of player trajectories;
the duration of plays is also generally longer than in sports played in more controlled environments (e.g., tennis), implying that such scenarios may benefit from analysis in the so-called extensive form, which explicitly considers each player's knowledge, opportunities, and actions, unlike the simpler (but more feasible to analyze) simultaneous-move, normal-form approaches typically used for set-piece analysis (later detailed in \cref{sec:gt_background}).
Nonetheless, the availability of new data types increases the viability of conducting more advanced game-theoretic analysis by bootstrapping to advances in statistical learning and computer vision, as later detailed.
Surmounting these challenges will benefit football from a game-theoretic perspective, enabling both a descriptive analysis (i.e., understanding the interactions undertaken by players and teams in the presence of others), and a prescriptive one (i.e., suggesting the actions such individuals should have executed).
\subsection{Computer Vision}
Computer vision has seen major breakthroughs in the past decade, thanks to the application of deep learning approaches.
Progress in tasks such as image classification~\citep{deng2009imagenet}, video action recognition~\citep{Carreira_2017_CVPR}, and pose estimation~\citep{alp2018densepose} have unlocked the possibility of automatically extracting complex information from videos.
Similarly, in football analytics, computer vision methods can enable enrichment of statistical and game-theoretic approaches, which typically rely on hand-labeled, low-dimensional data.
Video is a particularly appealing signal for football analytics:
it is a rich data source (likely to contain large amounts of information of interest for decision-making) and cheap to acquire (using only widespread camera sensors).
While raw video frames are insufficiently structured to be processed by traditional rule-based processing techniques, computer vision methods enable the extraction of high level, potentially spatially-detailed representations, ready for use by downstream applications.
Examples of such extraction processes include human pose estimation~\citep{alp2018densepose}, object detection~\citep{ren2015faster}, segmentation~\citep{long2015fully}, tracking~\citep{dong2018triplet}, depth estimation~\citep{godard2017unsupervised}, and event or action detection~\citep{Carreira_2017_CVPR}.
In addition to explicit, human-interpretable representations, learned counterparts can be used by downstream deep learning models.
In football, several companies already commercialize tracking information, relating to the players and the ball, automatically extracted from videos recorded by dedicated static cameras, placed to have a view covering the entire terrain~\citep{opta_sports,StatsBomb}.
Moreover, immersive sports analytics has been gaining popularity with the increased access to specialized hardware and portability~\citep{Lin2020SportsXRI}.
Computer vision techniques enable reconstruction of real game scenarios, which provide more feedback to players and coaches than 2D screens, and have been extended to other sports media as well~\citep{StriVR}.
Vision-based player tracking, pose estimation, and event detection can improve learning of player skill vectors and predictive modeling, subsequently improving player ranking and game-theoretic analysis of strategies.
While video has been traditionally used as the primary high-dimensional signal for the above applications, other modalities such as audio and text streams can provide extremely rich and complementary information in these settings.
Audio commentary and news threads readily provide a human interpretation of the events occurring in a scene, which is highly complementary to the spatially fine-grained information available in videos.
Learning-based approaches for generation of text commentary have been previously investigated \citep{chen2010training}, with temporally-aligned sound and text streams seeing recent application in learning rich representations of video signals~\citep{alayrac2020self,arandjelovic2017look,Miech_2020_CVPR}, having been shown to be useful in a variety of computer vision tasks.
Although such information significantly overlaps with the structured annotations already used by downstream applications (event annotations, etc.), its unstructured nature enables the presence of a greater information content (e.g., a commentator's tone can provide some cues as to an action's value).
Unique challenges arise when seeking to improve the performance and to enlarge the applications of computer vision methods for broadcast video of football games.
In general, the camera's field of view is centered on the most interesting action in the field, leaving many players out of the frame.
This poses an interesting geometric problem, as broadcast footage shots typically do not overlap, and many state-of-the-art systems are unable to geometrically relate these multiple shots.
Other challenges arise due to the multiple occlusions of players by one another, further complicating the detection, tracking, and identification tasks.
These problems can be addressed by geometrical approaches in settings where one can intervene to correct camera positions or, ideally, access cameras' extrinsic and intrinsic parameters (that is, the camera's position, orientation, as well as geometrical and optical properties like its focal length).
However, approaches that do not assume access to such information have the potential to leverage more data.
In contrast with other applications, football broadcast videos present an interesting blend between real, large-scale, complex data and a constrained setting (due to the rules of the game \citep{giancola2018soccernet}), hence providing an attractive setting for developing such approaches.
\section{Game Plan: Long-term Research Vision}\label{sec:game_plan}
In this section we outline a long-term research vision for football analytics in the microcosm frontiers at the intersection of statistical learning, game theory, and computer vision (see \cref{fig:overview}).
\subsection{Frontier 1: Interactive Decision-making (GT \& SL)}
Learning to make suitable decisions in the presence of other agents is where game theory and statistical learning can converge.
This interdisciplinary area of research has received significant attention in the multi-agent RL community over the past decades, with thorough survey articles~\citep{PanaitL05,ShohamPG07,BusoniuBS08,TuylsW12,BloembergenTHK15,Hernandez-LealK20} available.
When considering the football domain in particular, it is evident that the potential of game theory has yet to be fully exploited, let alone its combination with machine learning techniques such as RL.
There are several promising routes in this area that not only are challenging from a football analytics perspective, but also from an AI research one.
Two of these routes are studied further in this article in \cref{sec:results}; one concerns the study of set pieces using the combination of statistical learning with game theory, and a second focuses on predictive modelling for counterfactual analysis.
In the former, which has mostly been studied in a game-theoretic setting, we show how augmenting the analysis with player-specific statistics provides deeper insights into how various types of players behave and make decisions in a penalty-kick scenario.
In the latter case, we illustrate how machine learning techniques can facilitate counterfactual analysis in football matches.
The ability to predict, for example, the trajectories of players enables the investigation of counterfactual scenarios (e.g., how a specific player or team would respond in a specific match situation).
Doing this enables one to not only learn to generate behaviors, but also leverage game-theoretic techniques for counterfactual analysis.
We defer a more detailed discussion of these research lines to \cref{sec:results}.
Building on the counterfactual prediction of players' behaviors, one can also consider the possibility of using this as a coaching tool.
Specifically, one can use counterfactual insights to advise tactics to individual players, and even go further by optimizing the overall team strategy depending on the specific opponent in an upcoming match.
This would go beyond the state-of-the-art, which focuses on simply predicting player behaviors;
here, one would seek to actively optimize suggested tactics based on the particular behaviors and play style of the opposing team, and upcoming match-ups.
Such tactical considerations can also be conducted in an iterative manner (e.g., predictions can be made for the opposing team as conditioned on the best-response behavior of the main team), and effective counter strategies can be learned, for instance, using multi-agent RL.
Such an approach opens the door to a slew of interesting research challenges. For instance, use of multi-agent RL entails definition of a reward function for the players;
while rewarding goals is an obvious candidate, denser reward signals (e.g., associated with successful passes, intercepts, etc.) may be useful for accelerating learning of such policies.
Such reward functions are also likely to depend on the role of a player in the team and their respective skill set (i.e., may even be heterogeneous across players in a given team), and could also be learned using techniques such as inverse RL~\citep{NgR00}.
Moreover, one may seek to first define an effective `action space' for players (i.e., more granular or structured actions than `move left' or `move right'), before solving the RL problem.
Finally, combining the previously learnt models with pitch control \citep{Spearman16}, a technique to determine which player or team has control over a specific area of the pitch, will provide additional information on open space, passing, and scoring opportunities, yielding a powerful basis for in-match tailored coaching tools.
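As a rough illustration of the idea behind pitch control, the sketch below computes a deliberately simplified, distance-based proxy for the probability that the home team controls a given pitch location; this is a toy stand-in, not the physics-based model of \citet{Spearman16}.
\begin{verbatim}
# Toy, distance-based proxy for pitch control (not Spearman's full model):
# control of a location is a softmax over the nearest players of each team.
import numpy as np

def control_probability(location, home_xy, away_xy, beta=0.5):
    # location: (2,); home_xy, away_xy: (n_players, 2) arrays of positions.
    d_home = np.linalg.norm(home_xy - location, axis=1).min()
    d_away = np.linalg.norm(away_xy - location, axis=1).min()
    w_home, w_away = np.exp(-beta * d_home), np.exp(-beta * d_away)
    return w_home / (w_home + w_away)      # P(home team controls the location)

home = np.array([[10.0, 30.0], [25.0, 40.0]])
away = np.array([[15.0, 35.0], [40.0, 20.0]])
print(control_probability(np.array([20.0, 34.0]), home, away))
\end{verbatim}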
\subsection{Frontier 2: Predictive Modeling from Videos (SL \& CV)}
Several challenges naturally lie in the frontier between statistical learning and computer vision.
Statistical learning depends on large quantities of labelled data. Many of the quantities suitable for models of football are the product of hand-labelling data;
on the other hand, vision-based models could automatically identify events which could be fed into such models.
In addition to the quantity of events that vision-based systems could provide, the quality could also be improved (e.g., with events being accurately registered to the corresponding frame, with minimal temporal error compared to human-labeled annotations).
Furthermore, video is a much richer signal compared to what is traditionally used in predictive tasks, such as forecasting future movement of players or predicting the value of individual actions to the game outcome.
The combination of advanced deep learning models and a rich video signal enables learning over subtle clues otherwise not captured in event-stream or tracking data.
Capturing more of the partially-observable state of a game will ultimately enable more accurate predictions.
Richer information may additionally help to shed light on the intention of players and thus better address the credit assignment problem in action-value estimation.
On the other hand, models that better capture the game dynamics may be necessary to resolve some of the limitations of vision-based tracking approaches, which arise as players are occluded or move off camera.
The resulting ambiguities can likely be resolved using predictive model of player dynamics, which may be trained independently and used as a source of additional inputs to the tracking system or trained as one component of a larger pipeline.
Explicit or implicit access to the game dynamics will very likely also improve vision-based action labeling.
Finally, presenting prediction outcomes by means of synthetic video generation remains an ambitious challenge that combines the task of trajectory prediction with video generation.
Presenting predictions of counterfactual futures as video will enable intuitive understanding both by coaching personnel and players (e.g., in the same vein as recent work on video generation for tennis matches~\citep{zhang2020vid2player}).
\subsection{Frontier 3: Generative Game-Theoretic Video Analysis Models (GT \& CV)}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/vision/posenet1.png}
\end{subfigure}%
\hfill%
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/vision/posenet2.png}
\end{subfigure}%
\hfill%
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/vision/posenet3.png}
\end{subfigure}%
\caption{Example of a pose estimation model applied to a penalty kick.
The results showcased here follow a multi-phase approach~\citep{Papandreou_2017_CVPR}. The first stage applies a Faster-RCNN candidate detector~\citep{NIPS2015_5638}, which extracts bounding boxes around the players. This is followed by a pose estimation model applied to these bounding box proposals, which identifies the player keypoints and provides the pose estimates.}
\label{fig:poses_penalty_kick}
\end{figure}
Video modeling and game theory can mutually benefit one another.
In the simplest application, computer vision can provide new features to drive game-theoretic models.
In more advanced applications, game theory can, in turn, guide video generation, as illustrated in \cref{fig:video_generation}.
Throughout a football match, individual players carry out a series of encounters with one another, which can profitably be viewed through game theory.
Each player may have some idea of the likely success of strategies based on past performance and current in-game observations.
Penalty kicks are an almost idealized example, with a kicker and goalkeeper observing each other for signs of intent, while simultaneously weighing preconceived strategies.
As described previously, computer vision models can be used to automatically extract high-level and potentially spatially-detailed representations that can be complementary to the low-dimensional, hand-collected inputs game-theoretic models typically rely on.
We illustrate this representation extraction pipeline in the left portion of \cref{fig:video_generation}.
In the example of penalty kicks, vision models could identify visual signatures of intent that are difficult for humans to even perceive, let alone annotate;
such information includes, for example, pose estimates extracted from broadcast footage (as visualized in \cref{fig:poses_penalty_kick}, with technical details provided in \cref{sec:pose_estimation}), which can enable inference of the intentions of players, providing valuable insights to improve their respective strategies.
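As a small illustration of how such pose information can be obtained, the Python sketch below extracts per-player keypoints from a single broadcast frame using an off-the-shelf torchvision Keypoint R-CNN. This is not the multi-phase pipeline of \citet{Papandreou_2017_CVPR} used for \cref{fig:poses_penalty_kick} (and detailed in \cref{sec:pose_estimation}); the input file name and score threshold are illustrative assumptions.
\begin{verbatim}
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Hypothetical input frame; any RGB broadcast still would do.
frame = Image.open("penalty_frame.png").convert("RGB")

# Off-the-shelf person keypoint detector (17 COCO keypoints per person).
# (torchvision >= 0.13; older versions use pretrained=True instead.)
model = torchvision.models.detection.keypointrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

with torch.no_grad():
    predictions = model([to_tensor(frame)])[0]

# Keep confident player detections and inspect their keypoints.
keep = predictions["scores"] > 0.9
boxes = predictions["boxes"][keep]          # (num_players, 4) bounding boxes
keypoints = predictions["keypoints"][keep]  # (num_players, 17, 3): x, y, visibility
for i, (box, kps) in enumerate(zip(boxes, keypoints)):
    print(f"player {i}: box={box.tolist()}, nose={kps[0, :2].tolist()}")
\end{verbatim}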
\begin{figure}[t]
\centering
\includegraphics[width=0.80\textwidth,page=1]{figs/vision/video_generation.pdf}
\caption{Analysis and generation of video data. From footage video (a), computer vision techniques can extract various representations (b), useful for empirical game-theoretic analysis. In turn, empirical payoff tables (c) can be used in a hierarchical generative process of videos, possibly involving a step in intermediate representation spaces (d). Generated videos (e) could be used for prescriptive analysis, or in the process of capturing the footage videos themselves, closing the cycle between analysis and generation.}
\label{fig:video_generation}
\end{figure}
In the reverse direction (right portion of \cref{fig:video_generation}), game-theoretic outputs can be used to improve the plausibility and usefulness of generated videos.
Specifically, generative video models need to precisely capture the data distribution to be useful.
The game-theoretic context found within football offers an opportunity to constrain such models.
The dynamics of play are complex, with interactions ranging from the level of opposing individuals up to entire teams, though with specific constraints that can inform a generative process.
For example, a hierarchical generative process could condition on high-level latent variables drawn from the distribution described by empirical payoff tables, to represent player decisions conditioning a particular sample.
One could consider an additional hierarchical level of generation targeting intermediate features, possibly building on approaches for predicting future pose~\citep{villegas2017learning, chan2019everybody}, trajectory~\citep{kitani2012activity,bhattacharyya2018long, vu2018memory, bhattacharyya2018bayesian}, action~\citep{vondrick2016anticipating, abu2018will} and shape information~\citep{luc2017predicting, jin2017predicting, luc2018predicting, xu2018structure, sun2019predicting}.
Game-theoretic analysis could be used to assess the plausibility of game simulations.
Specifically, these intermediate outputs could be used directly for prescriptive analysis (e.g., informing players of physical tactics to attempt in a play of interest, based on generated poses) or serve as further conditioning for generation in the raw RGB space.
An alternative direction would be the formulation of new game-theory inspired metrics, for example imposing that the payoff tables extracted from generated samples match those obtained from real data.
This would be closely related to the Fr\'echet Inception Distance~\citep{heusel2017gans} for generative modeling of images, and the derived Fr\'echet Video Distance~\citep{unterthiner2018towards} for video, which have successfully driven progress in both fields \citep{karras2017progressive, miyato2018spectral, brock2018large, zhang2019self, clark2019adversarial, weissenborn2019scaling, luc2020transformation}.
In turn, such metrics could serve as a basis to design novel losses inspired by game theory.
Besides their potential use for prescriptive analysis, generative models of videos could lead to improvements in broadcast data itself, due to automatic anticipation of the most plausible immediate future outcomes.
Overall, game-theoretic analysis can inform video generation, in turn enabling influence of the process by which future data will be acquired, hence closing the cycle described in \cref{fig:video_generation}.
\subsection{Football as an AI Testbed: Microcosm}
The development of performant AI algorithms relies on various recurring objectives: learning and acting based on real-world data streams, interpreting the actions of other agents and acting strategically in response, being able to understand and generate natural language for efficient communication with humans, and so on.
As discussed in earlier sections, football analytics involves core problems associated with many of the above AI challenges, though in a more well-controlled scope.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\textwidth,page=1]{figs/overview_data.pdf}
\caption{The range of data types relevant to football. At a high-level, highly structured and well-correlated data types are available for each match: video streams, audio streams, and associated text streams. On a more aggregate level, historical play data (e.g., player and team statistics) spanning many decades of play are broadly available for many professional football leagues. Moreover, third-party companies (such as Opta~\citep{opta_sports}, Wyscout~\citep{wyscout}, StatsBomb~\citep{StatsBomb}, and InStat\citep{InStat}) provide annotated player event and tracking data. Finally, there exists a broad range of crowd-sourced data for football, including user preferences and pundit predictions, betting odds, and so on.}
\label{fig:football_data}
\end{figure}
The value of the football domain for AI can be observed, at a low level, in the range of useful data available for corresponding analytics (see \cref{fig:football_data}).
Real-world data streams such as vision, audio, and text are the mainstay of AI research, and are abundant in football.
Crucially, the various data types available in football are well correlated, in the sense that they typically involve a large amount of shared context--a characteristic researchers can take advantage of.
For instance:
football video feeds always involve two teams and a ball;
an enormous amount of text is available, yet it is centered on the current game;
the sound of the crowds and commentators can be relied upon to respond to temporally-adjacent events, such as goals and penalties.
There is a large amount of crowd-sourced data available such as current betting odds and pundits' predictions, a novel form of data atypical in other AI application domains.
Football offers the opportunity for AI to evaluate multi-modal models on synthesized vision, audio, and text data in a unified, though simpler domain than the broader real world.
Football analytics also currently relies heavily upon hand-labeled data, such as ball and player tracking and identification information.
This reliance on hand-labeled data imposes a significant barrier for fast-paced analysis, due to the cost and time needed to generate it.
This provides a golden opportunity for AI to accelerate data collection and the subsequent development of novel learning algorithms (by automating the labeling and annotation process), and to assist coaches and decision-makers by allowing them to focus their expertise on the tactical analysis of the game itself.
As such, a worthy long-term challenge for football analytics is to develop such an assistive agent, which uses minimal hand-labeled data:
an Automated Video Assistant Coach (AVAC).
A successful AVAC would help players, coaches, and spectators alike.
Specifically, it could help the players by analyzing their play for weak points to further develop.
A player's performance throughout a game could be analyzed to suggest improvements in positional play and to assess their overall performance.
Prior to a game, an AVAC could suggest strategies tuned to the opponents of the day.
Coaches also seek to get the best out of their players, but have a limited amount of time and many players to observe and provide feedback to.
An AVAC would offer coaches many opportunities to help individual players and the team as a whole, suggesting player rosters for a given game, as well as trading or scouting strategies based on counterfactual evaluation of team performance with brand new players.
Such an AVAC system would have the ability to automatically sift and label huge quantities of video streams, enabling broadcasters and spectators alike to retrieve key moments.
An AVAC could automatically keep a running tally of information the spectator may find interesting based on their reaction and the current state of play.
To enhance the spectator experience, the AVAC may automatically generate highlight reels that effectively reflect the flow of the game or summarize the most exciting segments~\citep{zhang2016video,Mahasseni_2017_CVPR,Xiong_2019_CVPR,merler2018excitement,yang2015unsupervised};
moreover, the AVAC might suggest related games predicted to engage or interest the spectator.
For those interested in fantasy football, an AVAC might search for players based on a set of qualities.
Overall, the possibilities for such an automated system are open-ended.
To make the research objectives and intermediate benefits of developing an AVAC system concrete, we detail three associated research agendas at increasing levels of abstraction in \cref{sec:footballAI}: representation learning, predictive modeling and decision-making, and human factors.
\section{Football for AI Research}\label{sec:footballAI}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth,page=1]{figs/overview_research.pdf}
\caption{Hierarchy of key research challenges, defined over three layers.
The foundational layer targets representation learning over the various input modalities available in football to generate useful representations for more complex learning tasks targeted in the subsequent layers:
{prescriptive and predictive analysis} and {modeling of human factors}.
}
\label{fig:layers}
\end{figure}
We next consider the dual perspective of the potential unique challenges and opportunities associated with the football domain that make it an interesting testbed for AI research.
We introduce here a hierarchy of key challenges associated with football research, illustrated in \cref{fig:layers}, as defined over three layers:
the foundational layer concerns representation learning, operating directly on the various input modalities available in football (e.g., time-synchronized videos, event-streams, and tracking data) to generate useful representations for the more complex learning tasks targeted in the subsequent layers of prescriptive and predictive analysis, and modeling of human factors.
We next detail each of these research layers, drawing connections with the three aforementioned frontiers.
\subsection{Representation Learning}
The variety of hand-labeled football statistics makes for natural fodder for machine learning algorithms.
These algorithms range from classification and regression tools (e.g., expected possession value models~\citep{fernandez2019decomposing}), to generative models (e.g., trajectory generation models~\citep{Le17,LeY0L17,yeh2019diverse,li2020generative}), to variational auto-encoding models (e.g., player embeddings).
The success of machine learning algorithms generally depends on data representation, as different representations can entangle and obfuscate various explanatory factors of variation behind low-level sensory data.
In football analytics, although expert knowledge is widely used to help design existing representations, learning with generic priors bears promise for avoiding such hand-encoded knowledge.
Under this view, we identify three unique challenges related to representation learning, detailed next.
The first challenge concerns learning representations with multi-modal football data.
Particularly, in football analytics, it remains a fundamental challenge to effectively recognize {long-duration} playing styles of individual players and teams given the variety of data types available (as detailed earlier).
While expert knowledge goes a long way towards analyzing these multi-modal data sources, it remains insufficient to process them efficiently.
The increasing multitude of input modalities available to football analysts are likely to challenge existing methods of representation learning, thus driving researchers to develop cohesive models that take these many modalities into account simultaneously.
The second challenge concerns learning contextual representations of individual players.
Due to the dynamics and uncertainty of football outcomes, long-term static representations for predictive modeling of in-game events are likely to be beneficial when used in conjunction with representations of individual players.
For example, a player passing the ball may take into account the context of the game to estimate the most appropriate receiver that maximizes the probability of scoring.
Another concrete example is using contextual representations to identify the dynamic roles of players, which may change given the game context and must be inferred and controlled to tactically counter the opposing team.
Moreover, player behaviors depend not only on the game context, but also on their own and the opposing team's overall strategy (e.g., formations, tactical advice provided by the coaching staff, etc.).
Finally, in addition to learning representations of individual players, identifying an effective means of contextualizing or ranking entire teams is another unresolved challenge.
Teams are usually ranked with historical match results and the collective performance of individual players, which can be coarse (i.e., may not reveal the long-term playing styles of teams) and may fail to reflect in-game dynamics.
Overall, to tackle these challenges, we aim to achieve two goals: i) learning representations that are able to characterize the long-term playing styles of football teams, ii) learning contextual representations of football teams that are able to depict in-game dynamics.
\subsection{Predictive Modeling and Decision-Making}
Learning useful representations (i.e., as opposed to hand-coded features) serves as an important means of advancing subsequent predictive and prescriptive analysis of football matches.
Specifically, dense embeddings that summarize not only the state of a particular game, but also historical trends evident throughout many games (e.g., across seasons) will serve as enablers of the more accurate, impactful, and longer-horizon predictions of match outcomes.
The interaction between predictive-prescriptive models is envisioned to be tightly-linked with game-theoretic analysis, thus coupling this direction of research most closely with Frontier~3~(GT\&CV)\xspace and Frontier~1~(GT\&SL)\xspace (see \cref{fig:layers}).
The combination of these fields with game theory is likely to usher in new opportunities for coaches and decision-makers.
For example, predictive models of football players at the trajectory-level~\citep{Le17,LeY0L17,sun2019stochastic} currently treat the game as a black-box dynamical process (i.e., a system of dynamic entities making decisions solely based on the joint on-pitch state of the teams);
such models do not yet account for the game-theoretically driven counterfactual responses of players to one another (e.g., taking into account the current game score, time remaining, relative strength of the two teams, impact of current game decisions on upcoming matches, etc.).
Conducting such an analysis of these models involves identification of high-level strategies typically used by empirical game-theoretic techniques (so-called meta-strategies)~\citep{wellman2006methods,TuylsPLHELSG20}.
These meta-strategies, for example, could be clusters of on-pitch behaviors correlated with play style, counterattack types, defense schemes (such as zonal vs. man-to-man defense), and so on.
While such meta-strategies are typically manually defined, automatically learning them poses an interesting challenge.
Appropriate clustering and identification of such meta-strategies involves not only access to a large, representative dataset of plays, but also the aforementioned learned representations that summarize the most relevant context for game theory models.
Synthesis of empirical games over the possible meta-strategies of two opposing teams can be used to forecast the performance of various team tactics when pitted against one another (e.g., investigating the Nash equilibrium of football at a higher level, rather than the typically considered low-level scenarios such as penalty kicks).
Moreover, while characterization and ranking of players has received considerable attention in the literature~\citep{DecroosD19,decroos2019actions,bransen2020chemistry}, automated ranking of {tactics} has received considerably less attention~\citep{decroos2018automatic,meerhoff2019exploring}.
Application of game-theoretic analysis techniques here remains unexplored to the best of our knowledge.
Analysis of empirical games using meta-strategies conditioned on player identities would be beneficial for counterfactually evaluating player performance in new teams (i.e., for scouting).
For training staff, a model that enables accurate evaluation of players' contributions to the team's overall strategy would be valuable, for example, for pinpointing which players to coach or to substitute.
For broadcasters, automatic identification of salient, exciting meta-strategies (e.g., those that are typically low in probability yet high in payoff, or games where there is a large difference in terms of the play styles or meta-strategies of the teams) can be used for automatic generation of highlight reels.
Learning the appropriate meta-strategies and associated predictive models is particularly challenging in football due to the number of players involved on-pitch (and the exponential size of the strategy space with respect to this quantity).
Despite this, the development of richer models leveraging more complex input modalities (e.g., video-based representations) is likely to unlock commensurate benefits (in terms of improved predictions and strategic decision-making) for football analysts.
\subsection{Human Factors}
The human-centric nature of football analytics stems from several factors:
coaching and improvement of individual play and coordinated team play through predictive and prescriptive modelling, injury and fatigue prediction, and psychological analysis of players.
This focus distinguishes it sharply from, for example, challenges such as RoboCup~\citep{RoboCup}.
In contrast to the robot-centric focus of RoboCup (which revolves around developing robotic footballing agents~\citep{Visser,AB05,LNAI09-kalyanakrishnan-1,AAMAS11-urieli,ICLR16-hausknecht,AAAI17-Hanna}), the focus in football analytics is entirely on understanding and improving human gameplay and team coordination based on an integrated approach from the three research areas involved.
Another key difference concerns evaluation of the impact of said analysis on human play, which is distinct from evaluation of robotic agents in the RoboCup project.
Namely, human play is significantly more difficult to realistically simulate and systematically evaluate (in contrast to evaluation on robotics platforms).
Moreover, the football analytics frontiers targeted here entail taking into account human factors such as injury and fatigue; inter-player relationships and their effects on play efficiency and cooperation; psychological challenges such as pressure or mental state (notably for recent transfers) and their impact on play performance; and overall player discipline and the tendency to follow the plan that is best for the team rather than for the individual.
Injury prediction is another topic of interest: the task of predicting the probability that a player will sustain an injury given data on the past and current seasons.
Previous studies have investigated the acute-chronic workload ratio (ACWR) as a predictor for sports-related muscular injuries~\citep{gabbett2010development}.
Acute workload measures an athlete's workload over 1 week, while chronic workload is the average workload over the past 4 weeks.
\citet{Rossi} use richer workload measures extracted from Electronic Performance and Tracking Systems (EPTS) to train a decision tree to predict the probability of future injuries.
Their approach uses manually designed features that aim to temporally aggregate players' workload histories.
Recent work uses Convolutional Neural Networks (CNNs) applied to EPTS time-series data directly, thereby alleviating the need for hand-designed time-aggregated features~\citep{gabbett2010development}. Current injury prediction methods are limited to a binary signal and do not explicitly capture uncertainty of the prediction, the type of injury, the severity, nor projected time to recover. Progress in this direction is likely limited by the availability of data; preventive measures are taken by sports clubs and as a result injuries occur relatively infrequently, although accurate prediction of such injuries (or determination of whether current performance sensors are sufficient for doing so) is a promising avenue for application of AI techniques.
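As a concrete illustration of the ACWR described above, the following Python sketch computes the ratio from a daily workload series, assuming acute and chronic windows of 7 and 28 days respectively, in line with the description in the text; the toy load values are purely illustrative.
\begin{verbatim}
import numpy as np

def acute_chronic_workload_ratio(daily_load, acute_days=7, chronic_days=28):
    """Acute:chronic workload ratio (ACWR) from a daily training-load series.

    daily_load: 1-D array of per-day workload values (most recent day last).
    Acute load is the mean over the last `acute_days`; chronic load is the
    mean over the last `chronic_days`."""
    daily_load = np.asarray(daily_load, dtype=float)
    if len(daily_load) < chronic_days:
        raise ValueError("need at least `chronic_days` days of data")
    acute = daily_load[-acute_days:].mean()
    chronic = daily_load[-chronic_days:].mean()
    return acute / chronic

# Toy example: three weeks of constant load followed by a one-week spike.
load = [400] * 21 + [700] * 7
print(round(acute_chronic_workload_ratio(load), 2))  # ~1.47: flags a workload spike
\end{verbatim}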
\section{Illustrative Examples: Frontier~1~(GT\&SL)\xspace}\label{sec:results}
In this section, we highlight some of the benefits that the combination of frontiers can yield for football analytics.
We focus these examples on Frontier~1~(GT\&SL)\xspace, in particular, given the track record of game theory and statistical learning work done in football analytics in recent years;
thus, this section provides a concrete sampling of the types of multi-disciplinary contributions that can be made via the proposed microcosm-centric vision.
In the following, we first provide an overview of the necessary game theory background.
Subsequently, we conduct an in-depth analysis of real-world football data under the lens of Frontier~1~(GT\&SL)\xspace, providing new insights into penalty kick scenarios by combining statistical learning with game theory.
\subsection{Game Theory: Elementary Concepts}\label{sec:gt_background}
Empirical game theory has become an important tool for analysis of large-scale multi-agent settings, wherein either a large amount of data involving agent interactions is readily available, or is collected through simulations of the system under study for the construction of the games \citep{wellman2006methods,TuylsPLHELSG20}.
Empirical game-theoretic modeling of penalty kick taking and set pieces facilitates strategic understanding of player and team behavior under various circumstances (e.g., play according to a Nash equilibrium), and can assist both in predicting opponent behavior and prescribing how a player (or team) should behave in the presence of other players (teams).
These game-theoretic models can be leveraged in pre- and post-match analysis, and can be combined with analysis of dynamic trajectory behavior (e.g., generative trajectory prediction or `ghosting', as later described).
Additionally, the models can be enriched by carrying out Empirical Game Theoretic Analysis (EGTA) on meta-game models of set pieces, automatically clustering and identifying useful meta-strategies, and providing insights into higher-level team strategies.
A common representation of a game used for EGTA analysis is a Normal Form Game (NFG), defined in the following.
\begin{definition}[Normal Form Games (NFG)]
A game $G = \langle P, (S_i), (u_i) \rangle $ consists of a finite set of players, $P$, indexed by $i$; a nonempty set of strategies $S_i$ for each player; and a utility function $u_i : \times_{j\in P} S_j \rightarrow \mathbb{R}$ for each player.
\end{definition}
In this work, we solely focus on bimatrix games, which are 2-player NFGs, $G = \langle P, (S_1,S_2), (u_1,u_2) \rangle $ with $|P|=2$.
The utility functions $(u_1,u_2)$ can be described in terms of two payoff matrices $A$ and $B$, wherein one player acts as the row player and the other as the column player.
Both players execute their actions simultaneously.
The payoffs for both players are represented by bimatrix $(A, B)$, which gives the payoff for the row player in $A$, and the column player in $B$ (see \cref{fig:bimatrix} for a two strategy example).
\begin{figure}[tb]
\centering
\gamematrix{}{}{a_{11}, b_{11}}{a_{12},b_{12}}{a_{21}, b_{21}}{a_{22}, b_{22}}
\caption{Bimatrix payoff tables $(A, B)$ for a two-player, two-action NFG.
Here, the row player chooses one of the two rows, the column player chooses one of the columns, and the outcome of their joint action determines the payoff to each player.
}
\label{fig:bimatrix}
\end{figure}
A player~$i$ may play a {pure strategy}, $s_i\in S_i$, or a {mixed strategy}, $\sigma_i\in\Delta(S_i)$, which is a probability distribution over the pure strategies in $S_i$.
In a {strategy profile} $\sigma=(\sigma_1,\sigma_2)$, each player has a strategy $\sigma_i$.
We use notation $\sigma_{-i}$ to denote a strategy profile for all players excluding~$i$.
Having defined NFGs, we can model empirical games as an NFG wherein player payoffs are directly computed from data of real-world interactions or simulations.
For example, one can construct a win-loss table between two chess players when they both have access to various strategies.
Given an NFG, the traditional solution concept used in game theory is the Nash equilibrium, which selects strategy profiles such that no player can benefit from unilateral deviation:
\begin{definition}[Nash Equilibrium]
A strategy profile $\sigma^*$ is a Nash equilibrium if and only if,
\begin{equation}
\mathbb{E} [u_i(\sigma_i^*,\sigma_{-i}^*)] \geq \mathbb{E} [u_i(s'_i,\sigma_{-i}^*)] \qquad \forall i \quad \forall s'_i\in S_i.
\end{equation}
\end{definition}
In a so-called $\epsilon$-Nash equilibrium, no player can gain more than $\epsilon$ by unilaterally deviating to another strategy; that is, players may have profitable deviations, but the associated gains are bounded by $\epsilon$.
More formally:
\begin{definition}[$\epsilon$-Nash Equilibrium]
A strategy profile $\sigma^*$ is an $\epsilon$-Nash equilibrium, for a given $\epsilon > 0$, if and only if,
\begin{equation}
\mathbb{E} [u_i(\sigma_i^*,\sigma_{-i}^*)] \geq \mathbb{E} [u_i(s'_i,\sigma_{-i}^*)] - \epsilon \qquad \forall i \quad \forall s'_i\in S_i.
\end{equation}
\end{definition}
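For the two-action penalty-kick games analyzed in the next section, the mixed Nash equilibrium can be computed in closed form from the indifference conditions. The Python sketch below does this for a generic $2\times2$ table of kicker scoring probabilities, with the goalkeeper receiving the complementary payoff; the numerical values are the rounded entries of \cref{tab:OurversionofPalacios}, so the resulting mixtures only approximately match the Nash probabilities reported later.
\begin{verbatim}
import numpy as np

def nash_2x2_constant_sum(A):
    """Mixed Nash equilibrium of a 2x2 constant-sum game.

    A[i, j] is the kicker's scoring probability when the kicker plays row i
    and the goalkeeper plays column j; the goalkeeper's payoff is 1 - A[i, j].
    Each mixture makes the opponent indifferent between their two actions
    (assumes a fully mixed, interior equilibrium, as in the tables here)."""
    A = np.asarray(A, dtype=float)
    denom = A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1]
    p_row0 = (A[1, 1] - A[1, 0]) / denom   # kicker's probability of row 0
    q_col0 = (A[1, 1] - A[0, 1]) / denom   # goalkeeper's probability of column 0
    return np.array([p_row0, 1 - p_row0]), np.array([q_col0, 1 - q_col0])

# Rows: (N-S, NN-S); columns: (N-G, NN-G), using the rounded reproduced table.
A = np.array([[0.704, 0.907],
              [0.894, 0.640]])
kicker_mix, keeper_mix = nash_2x2_constant_sum(A)
print("kicker (N-S, NN-S):", kicker_mix.round(3))
print("keeper (N-G, NN-G):", keeper_mix.round(3))
\end{verbatim}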
\subsection{Game Theory for Penalty Kick Analysis}\label{sec:egta_results}
\begin{figure}[t]
\centering
\includegraphics[width=0.5 \textwidth]{figs/egta/heatmap_all.png}
\caption{Visualization of shot distribution for the penalty kicks in the considered dataset.}
\label{fig:shot_distribution}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.65\textwidth]{figs/Football_analytics_sketch.pdf}
\caption{Illustration of natural vs. non-natural sides.}
\label{fig:natsides}
\end{figure}
\begin{table}[t]
\centering
\caption{Penalty kick distribution over leagues considered (12399 kicks in total).}
\begin{subtable}{.5\textwidth}
\begin{tabular}{rr}
\toprule
League & \# Kicks \\
\midrule
Italian Serie A & 607 \\
US Major League Soccer & 575 \\
English Npower Championship & 569 \\
Spanish Segunda Division & 568 \\
Spanish La Liga & 531 \\
French Ligue 1 & 497 \\
German DFB Pokal & 441 \\
Brazilian Série A & 440 \\
English Barclays Premier League & 436 \\
German Bundesliga & 409 \\
Dutch Eredivisie & 398 \\
German Bundesliga Zwei & 389 \\
Portuguese Primiera Liga & 352 \\
Saudi Arabian Profess. League & 337 \\
Russian Premier League & 329 \\
Chinese Super League & 324 \\
Copa Libertadores & 322 \\
Belgian Jupiler League & 287 \\
Turkish Super Lig & 284 \\
French Ligue 2 & 270 \\
Argentina Primera (Anual) & 261 \\
English Capital One Cup & 234 \\
Mexican Primera (Clausura) & 234 \\
Colombia Primera Apertura & 221 \\
Norwegian Tippeligaen & 219 \\
AFC Champions League & 193 \\
International Champions Cup & 188 \\
Australian A-League & 172 \\
Copa Chile & 172 \\
English FA Cup & 153 \\
Copa do Brasil & 153 \\
\bottomrule
\end{tabular}
\end{subtable}%
\begin{subtable}{.5\textwidth}
\begin{tabular}{rr}
\toprule
League & \# Kicks \\
\midrule
Chile Primera (Apertura) & 151 \\
Japanese J-League & 149 \\
English League 1 & 139 \\
English League 2 & 130 \\
Austrian Bundesliga & 129 \\
Danish Superligaen & 115 \\
European World Cup Qualifiers & 108 \\
Internationals & 93 \\
African Cup of Nations & 90 \\
United Soccer League & 80 \\
European Championship Qualifiers & 78 \\
Swedish Allsvenskan & 74 \\
Coppa Italia & 67 \\
Copa America & 51 \\
FIFA Club World Cup & 51 \\
World Cup & 48 \\
European Championship Finals & 45 \\
Champions League Qualifying & 41 \\
Confederations Cup & 39 \\
UEFA Europa League Qualifying & 32 \\
Coupe de France & 29 \\
Belgian UEFA Europa League Play-offs & 24 \\
German 3rd Liga & 23 \\
Russian Relegation Play-offs & 15 \\
Dutch Relegation Play-offs & 13 \\
Copa Sudamericana & 9 \\
Friendly & 4 \\
German Bundesliga Playoff & 3 \\
German Bundesliga 2 Playoff & 3 \\
Swedish Relegation Play-off & 1 \\
& \\
\bottomrule
\end{tabular}
\end{subtable}
\label{tab:leaguesdistribution}
\end{table}
For our analysis we use a data set of $12399$ penalty kicks based on Opta data~\citep{opta_sports}. In \cref{fig:shot_distribution} we show a heatmap of the shot distribution of the penalty kicks in our data set. \Cref{tab:leaguesdistribution} shows the distribution of the penalty kicks over the various leagues we consider.
\citet{Palacios2003} examines penalty kick scenarios from a game-theoretic perspective, using empirical payoff tables to determine whether the associated kickers and goalkeepers play a Nash equilibrium.
Here we revisit the work of \citet{Palacios2003}, by first reproducing several of its key results with a substantially larger and more recent data set from the main professional football leagues in Europe, Americas, and Asia (for comparison, the data set used in the work of \citet{Palacios2003} consists of 1417 penalty kicks from the 1995-2000 period, whereas ours contains $12399$ kicks from the 2011-2017 period).
While several results of this earlier work are corroborated, we also find surprising new additional insights under our larger dataset.
We then go further to extend this analysis by considering larger empirical games (involving more action choices for both kick-takers and goalkeepers).
Finally, we develop a technique for illustrating substantive differences in various kickers' penalty styles, by combining empirical game-theoretic analysis with Player Vectors~\citep{DecroosD19}, thereby demonstrating the added value and novel insights that research at Frontier~1~(GT\&SL)\xspace of the microcosm can bring to football analytics.
\begin{table}[t]
\centering
\caption{Natural (N) / Non-Natural (NN) payoff tables for Shots (S) and Goalkeepers (G).}
\begin{subtable}[b]{0.48\textwidth}
\centering
\caption{\citet{Palacios2003} payoff table.}
\label{tab:Palacios}
\begin{tabular}{rRR}
\toprule
{} & N-G & NN-G \EndTableHeader \\
\midrule
N-S & 0.670 & 0.929 \\
NN-S & 0.950 & 0.583 \\
\bottomrule
\end{tabular}
\end{subtable}
\hfill
\begin{subtable}[b]{0.48\textwidth}
\centering
\caption{Reproduced table.}
\label{tab:OurversionofPalacios}
\begin{tabular}{rRR}
\toprule
{} & N-G & NN-G \EndTableHeader \\
\midrule
N-S & 0.704 & 0.907 \\
NN-S & 0.894 & 0.640 \\
\bottomrule
\end{tabular}
\end{subtable}\\
\par\bigskip
\begin{subtable}[b]{0.48\textwidth}
\centering
\caption{\citet{Palacios2003} Nash probabilities.}
\label{tab:PalaciosNash}
\begin{tabular}{rMM|MM}
\toprule
{} & NN-S & N-S & NN-G & N-G \EndTableHeader \\
\midrule
Nash & 0.393 & 0.607 & 0.432 & 0.568 \\
Empirical & 0.423 & 0.577 & 0.400 & 0.600 \\
\bottomrule
\end{tabular}
\setlength{\fboxrule}{0pt}
\fbox{Jensen–Shannon divergence: 0.049\%}
\end{subtable}
\hfill
\begin{subtable}[b]{0.48\textwidth}
\centering
\caption{Reproduced table Nash probabilities.}
\label{tab:OurPalaciosNash}
\begin{tabular}{rMM|MM}
\toprule
{} & NN-S & N-S & NN-G & N-G \EndTableHeader \\
\midrule
Nash & 0.431 & 0.569 & 0.408 & 0.592 \\
Empirical & 0.475 & 0.525 & 0.385 & 0.615 \\
\bottomrule
\end{tabular}
\setlength{\fboxrule}{0pt}
\fbox{Jensen–Shannon divergence: 0.087\%}
\end{subtable}
\end{table}
As in \citet{Palacios2003}'s work, we first synthesize a 2-player 2-action empirical game based on our penalty kick data set.
\Cref{tab:Palacios} illustrates the $2\times2$ normal form as presented by \citet{Palacios2003}.
The actions for the two players, the kicker and goalkeeper, are respectively visualized in the rows and columns of the corresponding payoff tables, and are detailed below.
The respective payoffs in each cell of the payoff table indicate the win-rate or probability of success for the kicker (i.e., a score);
for ease of comparison between various payoff tables, cells are color-graded in proportion to their associated values (the higher the scoring probability, the darker shade of green used).
The choice of player actions considered has an important bearing on the conclusions drawn via empirical game-theoretic analysis.
The actions used by \citet{Palacios2003} in \cref{tab:Palacios} correspond to taking a shot to the natural (N) or non-natural (NN) side for the kicker, and analogously diving to the natural side or non-natural side for the goalkeeper.
\Cref{fig:natsides} provides a visual definition of natural versus non-natural sides.
Specifically, as players tend to kick with the inside of their feet, it is easier, for example, for a left-footed player to kick towards the right (from their perspective);
thus, this is referred to as their natural side.
Analogously, the natural side for a right-footed kicker is to kick towards their left. The natural side for a goalkeeper depends on the kicker they are facing.
Specifically, when facing right-footed kickers, goalkeepers' natural side is designated to be their right;
vice versa, when they face a left-footed kicker, their natural side is to their left.
Importantly, shots to the center count as shots to the natural side of the kicker because, as explained by \citet{Palacios2003}, professional football players consider kicking to the center to be equally natural as kicking to the natural side.
\Cref{tab:OurversionofPalacios} shows our reproduction of \cref{tab:Palacios} of \citet{Palacios2003}, computed using $12399$ penalty kicks spanning the aforementioned leagues in our Opta-based dataset; importantly, players (goalkeepers and kickers) appear at least 20 times each in this dataset, to ensure consistency with \citet{Palacios2003}.
The trends in these two tables are in agreement:
when the goalkeeper and the kicker do not choose the same sides of the goal, shot success rate is high;
otherwise, when the keeper goes to the same side as the kicker, success rate is higher for natural shots than for non-natural shots.
We also include Nash and empirical probabilities for \citeauthor{Palacios2003}'s dataset and ours, respectively in \cref{tab:PalaciosNash,tab:OurPalaciosNash}, enabling us to conclude that payoffs, Nash probabilities, and empirical probabilities are all in agreement between \citeauthor{Palacios2003}'s results and our reproduction;
more quantitatively, the Jensen-Shannon divergence between \citeauthor{Palacios2003}'s results and ours is 0.84\% for the Nash distribution and 1.2\% for the empirical frequencies.
We also notice that players' empirical action selection frequencies are quite close to the Nash-recommended frequencies, as measured by their Jensen-Shannon Divergence, and are actually playing an $\epsilon$-Nash equilibrium with a very low $\epsilon$ of $0.4 \%$.
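The divergence and $\epsilon$ values quoted in this section can be recomputed from the published tables. The Python sketch below does so under two stated assumptions: the Jensen-Shannon divergence is measured in nats and averaged over the kicker and goalkeeper distributions, and $\epsilon$ is the largest gain any player could obtain by a unilateral pure deviation from the empirical mixtures. Because the payoffs and probabilities in \cref{tab:OurversionofPalacios,tab:OurPalaciosNash} are rounded, the recomputed numbers may not match the quoted figures exactly.
\begin{verbatim}
import numpy as np

def jensen_shannon(p, q):
    """Jensen-Shannon divergence (in nats) between two strictly positive
    discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    def kl(a, b):
        return np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def epsilon_of_profile(A, kicker_mix, keeper_mix):
    """Largest gain from a unilateral pure deviation, for the kicker
    (payoff A) and the goalkeeper (payoff 1 - A)."""
    A = np.asarray(A, float)
    value = kicker_mix @ A @ keeper_mix           # expected scoring probability
    kicker_gain = np.max(A @ keeper_mix) - value
    keeper_gain = value - np.min(kicker_mix @ A)  # the goalkeeper minimizes scoring
    return max(kicker_gain, keeper_gain)

# Reproduced Natural / Non-Natural table: rows (N-S, NN-S), columns (N-G, NN-G).
A = np.array([[0.704, 0.907],
              [0.894, 0.640]])
nash_kicker, emp_kicker = [0.569, 0.431], [0.525, 0.475]
nash_keeper, emp_keeper = [0.592, 0.408], [0.615, 0.385]

jsd = 0.5 * (jensen_shannon(nash_kicker, emp_kicker)
             + jensen_shannon(nash_keeper, emp_keeper))
eps = epsilon_of_profile(A, np.array(emp_kicker), np.array(emp_keeper))
print(f"JSD ~ {100 * jsd:.3f}%   epsilon ~ {100 * eps:.2f}%")
\end{verbatim}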
\begin{table}[t]
\centering
\caption{Natural / Non-natural game restricted by footedness.}
\label{tab:lcr_results}
\begin{subtable}[b]{0.5\textwidth}
\centering
\caption{Left-footed players payoff table}
\begin{tabular}{rRR}
\toprule
{} & N-G & NN-G \EndTableHeader \\
\midrule
N-S & 0.721 & 0.939 \\
NN-S & 0.903 & 0.591 \\
\bottomrule
\end{tabular}
\label{tab:lcr_left_footed}
\end{subtable}%
\begin{subtable}[b]{0.5\textwidth}
\centering
\caption{Right-footed players payoff table}
\begin{tabular}{rRR}
\toprule
{} & N-G & NN-G \EndTableHeader \\
\midrule
N-S & 0.700 & 0.898 \\
NN-S & 0.892 & 0.653 \\
\bottomrule
\end{tabular}
\label{tab:lcr_right_footed}
\end{subtable}%
\end{table}
\begin{table}[t]
\centering
\caption{Footedness equivalence p-value tables.}
\begin{subtable}[b]{0.5\textwidth}
\centering
\caption{Natural / Non-natural game p-value table}
\begin{tabular}{rMM}
\toprule
{} & N-G & NN-G \EndTableHeader \\
\midrule
N-S & 0.924566 & 0.170504 \\
NN-S & 0.394900 & 0.407741 \\
\bottomrule
\end{tabular}
\label{tab:nat_p_value}
\end{subtable}%
\begin{subtable}[b]{0.5\textwidth}
\centering
\caption{Left / Center / Right game p-value table}
\begin{tabular}{rMMM}
\toprule
{} & R-G & C-G & L-G \EndTableHeader \\
\midrule
R-S & 0.000011 & 0.947369 & 6.931197e-01 \\
C-S & 0.592054 & 0.868407 & 1.305657e-01 \\
L-S & 0.017564 & 0.764020 & 7.791136e-07 \\
\bottomrule
\end{tabular}
\label{tab:lcr_p_value}
\end{subtable}%
\par \bigskip
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{figs/egta/p_value_by_experience.png}
\caption{P-value table as a function of minimal experience.}
\label{fig:p_value_by_exp}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{figs/egta/experience_p_values.png}
\caption{P-value table as a function of player-experience.}
\label{fig:p_value_by_exp_bar}
\end{figure}
Having examined the similarity of payoff tables and distributions, we verify whether the Natural / Non-Natural game is statistically identical for left-footed and right-footed players (\cref{tab:lcr_results}), as assumed in \citet{Palacios2003}.
To do so, we use a t-test to verify whether per-cell scoring rates are identical across footedness types.
The t-tests' p-values are reported in \cref{tab:nat_p_value}, and reveal that the games cannot be proven to be dissimilar across footedness and can, therefore, be assumed to be identical for left-footed and right-footed players.
\cref{fig:p_value_by_exp} refines this result by representing the relationship between p-values of our t-test and minimal player appearance counts: when we modulate minimal appearance count of players in our test, the Natural Shot / Natural Goalkeeper cell goes from strongly dissimilar across footedness (low p-value) when including all players, to likely non-dissimilar (high p-value) when only including the players appearing the most in our dataset.
This could be explained by low-appearance-count kickers (with appearance count taken here as a proxy for experience) being less able to control their kicks, resulting in different control effectiveness across footedness preferences, and by goalkeepers being less proficient at stopping shots going to their less frequently-kicked side (the left) than to the other; a difference that, we infer, has been trained away in professional goalkeepers.
To remove potential side-effects of merging data from low- and high-experience players together, \cref{fig:p_value_by_exp_bar} shows the relationship between p-values of our t-test and experience category where we allow for some overlap--between 1 and 7 shots, 5 and 12, etc.; the insight drawn from this figure is the same as that of \cref{fig:p_value_by_exp}, supporting the conclusion that experience removes the difference between left- and right-footed penalty kicks.
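The per-cell comparison described above can be implemented as a two-sample t-test on the binary scored/missed outcomes of left- and right-footed kickers within each (shot side, dive side) cell. The Python sketch below assumes a list of hypothetical penalty-kick records with illustrative field names and toy values; in practice, the full dataset would be used.
\begin{verbatim}
from scipy import stats

# Hypothetical records: (kicker_footedness, shot_side, keeper_side, scored),
# where sides are 'N' (natural) or 'NN' (non-natural) and scored is 0/1.
kicks = [
    ("left", "N", "N", 1), ("left", "N", "N", 0), ("left", "N", "N", 1),
    ("right", "N", "N", 1), ("right", "N", "N", 1), ("right", "N", "N", 0),
    ("left", "NN", "NN", 0), ("right", "NN", "NN", 1),
    # ... the full set of labelled penalty kicks goes here.
]

def footedness_p_values(kicks):
    """Per-cell p-values of a (Welch) t-test that left- and right-footed
    kickers have equal scoring rates in each (shot side, keeper side) cell."""
    p_values = {}
    for shot in ("N", "NN"):
        for keeper in ("N", "NN"):
            left = [s for f, ss, ks, s in kicks
                    if f == "left" and ss == shot and ks == keeper]
            right = [s for f, ss, ks, s in kicks
                     if f == "right" and ss == shot and ks == keeper]
            if len(left) > 1 and len(right) > 1:
                _, p = stats.ttest_ind(left, right, equal_var=False)
                p_values[(shot, keeper)] = p
    return p_values

print(footedness_p_values(kicks))
\end{verbatim}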
\begin{table}[t]
\centering
\caption{Left (L) - Center (C) - Right (R) tables for Shots (S) and Goalkeepers (G), with the three directions of kick/movement defined from the goalkeeper's perspective.}
\begin{subtable}[b]{0.5\textwidth}
\centering
\caption{Payoff table.}
\label{tab:lcr_table_noexp}
\begin{tabular}{rRRR}
\toprule
{} & R-G & C-G & L-G \EndTableHeader \\
\midrule
R-S & 0.684 & 0.939 & 0.969 \\
C-S & 0.964 & 0.160 & 0.953 \\
L-S & 0.964 & 0.960 & 0.633 \\
\bottomrule
\end{tabular}
\end{subtable}%
\par\bigskip
\begin{subtable}[b]{1.0\textwidth}
\centering
\caption{Nash probabilities vs. Empirical frequencies corresponding to \subref{tab:lcr_table_noexp}.} \label{tab:lcr_table_noexp_nash}
\begin{tabular}{rMMM|MMM}
\toprule
{} & R-S & C-S & L-S & R-G & C-G & L-G \EndTableHeader \\
\midrule
Nash & 0.478 & 0.116 & 0.406 & 0.441 & 0.178 & 0.381 \\
Empirical & 0.454 & 0.061 & 0.485 & 0.475 & 0.089 & 0.436 \\
\bottomrule
\end{tabular}
\par
\setlength{\fboxrule}{0pt}
\fbox{Jensen–Shannon divergence: 0.75\%}
\end{subtable}
\end{table}
We also analyzed the game defined by kicking to the left, center, or right, and confirmed \citeauthor{Palacios2003}'s intuition that it is fundamentally different across footedness preferences.
Specifically, \cref{tab:lcr_table_noexp} synthesizes the empirical game corresponding to this new choice of actions, with aggregated scoring rates over both feet preferences.
Note that in this case, left, center, and right are measured from the goalkeeper's perspective, such that the natural kick of a right-footed player would be considered a right kick.
The per-cell t-tests' p-values for this game are reported in \cref{tab:lcr_p_value}. Interestingly, the game is different when the goalkeeper jumps to the same side as the ball, but is otherwise mostly similar across footedness preference.
The empirical play frequencies for kickers, as reported in \cref{tab:lcr_table_noexp_nash}, are also further away from Nash frequencies than observed in the Natural / Non-Natural game (\cref{tab:OurPalaciosNash}), as can be seen from the Jensen-Shannon divergence between empirical frequencies and Nash (0.75\%, versus the 0.087\% of the Natural / Non-Natural game).
These insights confirm the intuition that this Left / Center / Right formulation of the game is neither identical across footedness, nor the game that the players actually play.
Overall, these results provide insights into the impacts that the choice of actions have on conclusions drawn from empirical payoff tables.
However, behavior and shooting styles also vary wildly per-player given footedness.
If one is willing to consider several payoff tables (e.g., one per footedness), it seems natural to also take into account kickers' playing styles, as considered in the next section.
\subsection{Augmenting Game-theoretic Analysis of Penalty Kicks with Embeddings}
\begin{table}[t]
\centering
\caption{Cluster statistics.}
\begin{tabular}{rrrrrr}
\toprule
{} & \# Players & \# Goals & \# Shots & Success rate (\%) & Proportion of left-foot goals (\%) \\
\midrule
Cluster~1 & 197 & 144 & 167 & 86.2 & 10.4 \\
Cluster~2 & 216 & 494 & 612 & 80.7 & 21.9\\
Cluster~3 & 52 & 3 & 4 & 75.0 & 33.3\\
Cluster~4 & 82 & 58 & 73 & 79.4 & 51.7 \\
Cluster~5 & 87 & 44 & 60 & 73.3 & 34.1\\
Cluster~6 & 1 & 0 & 0 & - & 0.0 \\
\midrule
Total & 635& 743 & 916 & 81.1 & 25.2 \\
\bottomrule
\end{tabular}
\label{tab:cluster_stats}
\end{table}
\begin{table}[t]
\centering
\caption{Pair-wise comparison for the identified clusters. $<$ indicates that data was missing and the minimum true p-value may be lower than the reported minimum p-value in the table. The symbol * indicates we cannot conclude whether clusters are different at the 10\% significance level.}
\begin{tabular}{p{5cm}rrrrrr}
\toprule
{} & 1 vs. 2 & 1 vs. 4& 1 vs. 5 & 2 vs. 4 & 2 vs. 5 & 4 vs. 5 \\
\midrule
Min. cell $p$-value of t-test over table equality & 4.49e-2 & $<$ 9.56e-2* & $<$ 1.09e-1* & 4.49e-2 & 4.48e-2 & $<$ 3.39e-1*\\
Jensen-Shannon divergence between Nash distr. (\%) & 0.03 & 0.57 & 0.09 & 0.35 & 0.02 & 0.21 \\
Jensen-Shannon divergence between empirical distr. (\%) & 0.06 & 0.01 & 0.06 & 0.08 & 0.24 & 0.04 \\
Left footedness t-test $p$-value & 3.43e-4 & 1.37e-7 & 3.18e-3 & 4.92e-5 & 1.07e-1 & 7.52e-2 \\
\bottomrule
\end{tabular}
\label{tab:cluster_stat_test}
\end{table}
\begin{table}[t]
\centering
\caption{p-values for t-test that empirical action distributions are equal among different clusters. Minimum p-value (across kicker and goalkeeper roles) is indicated in bold for each row.}
\begin{tabular}{rrr}
\toprule
Kicker clusters compared & Kicker p-value & Goalkeeper p-value \\
\midrule
1 vs. 2 & 0.52 & \textbf{0.05}\\
1 vs. 4 & \textbf{0.85} & 0.95\\
1 vs. 5 & 0.42 & \textbf{0.27}\\
2 vs. 4 & 0.52 & \textbf{0.14}\\
2 vs. 5 & 0.51 & \textbf{0.16}\\
4 vs. 5 & 0.4 & \textbf{0.26}\\
\bottomrule
\end{tabular}
\label{tab:empirical_p_value_table}
\end{table}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/egta/striker_goalie_clusters_3d_nonlfc.png}
\caption{}
\label{fig:striker_goalie_clusters}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/egta/striker_clusters_2d.png}
\caption{}
\label{fig:striker_clusters}
\end{subfigure}\\
\caption{Visualization of the identified player clusters. \subref{fig:striker_goalie_clusters} visualizes the goalkeeper cluster, the kicker clusters and an outlier automatically detected through K-means clustering. To show the separation of the kicker clusters clearly, we visualize them in \subref{fig:striker_clusters} after removing the goalkeeper and outlier clusters, and we also label each cluster with a Premier League player in it.
}
\end{figure}
\begin{table}[t]
\centering
\caption{Nash probabilities and empirical frequencies tables for Shot (S) and Goalkeepers (G) with Natural (N) and Non-Natural (NN) actions. Note that Cluster 3 is omitted due to it consisting of very few shots (taken by goalkeepers).}
\label{tab:egta_pv_nash_emp}
\begin{subtable}[b]{0.5\textwidth}
\centering
\caption{All players. 916 total shots.}
\begin{tabular}{rMM|MM}
\toprule
{} & NN-S & N-S & NN-G & N-G \EndTableHeader \\
\midrule
Nash & 0.391 & 0.609 & 0.406 & 0.594 \\
Empirical & 0.503 & 0.497 & 0.413 & 0.587 \\
\bottomrule
\end{tabular}
\setlength{\fboxrule}{0pt}
\fbox{$\epsilon$-Nash equilibrium: $\epsilon=2.71\%$}
\label{tab:egta_pv_all}
\\
\end{subtable}%
\begin{subtable}[b]{0.5\textwidth}
\centering
\caption{Kickers in Cluster~1. 167 total shots.}
\begin{tabular}{rMM|MM}
\toprule
{} & NN-S & N-S & NN-G & N-G \EndTableHeader \\
\midrule
Nash & 0.423 & 0.577 & 0.379 & 0.621 \\
Empirical & 0.485 & 0.515 & 0.371 & 0.629 \\
\bottomrule
\end{tabular}
\setlength{\fboxrule}{0pt}
\fbox{$\epsilon$-Nash equilibrium: $\epsilon=0.08\%$}
\label{tab:egta_pv_c1}
\\
\end{subtable}\\
\par\bigskip
\begin{subtable}[b]{0.5\textwidth}
\centering
\caption{Kickers in Cluster~2. 612 total shots.}
\begin{tabular}{rMM|MM}
\toprule
{} & NN-S & N-S & NN-G & N-G \EndTableHeader \\
\midrule
Nash & 0.401 & 0.599 & 0.430 & 0.570 \\
Empirical & 0.520 & 0.480 & 0.418 & 0.582 \\
\bottomrule
\end{tabular}
\setlength{\fboxrule}{0pt}
\fbox{$\epsilon$-Nash equilibrium: $\epsilon=2.89\%$}
\label{tab:egta_pv_c2}
\end{subtable}%
\begin{subtable}[b]{0.5\textwidth}
\centering
\caption{Kickers in Cluster~4. 73 total shots.}
\begin{tabular}{rMM|MM}
\toprule
{} & NN-S & N-S & NN-G & N-G \EndTableHeader \\
\midrule
Nash & 0.320 & 0.680 & 0.375 & 0.625 \\
Empirical & 0.479 & 0.521 & 0.438 & 0.562 \\
\bottomrule
\end{tabular}
\setlength{\fboxrule}{0pt}
\fbox{$\epsilon$-Nash equilibrium: $\epsilon=5.17\%$}
\label{tab:egta_pv_c4}
\end{subtable}\\
\par\bigskip
\begin{subtable}[b]{0.5\textwidth}
\centering
\caption{Kickers in Cluster~5. 60 total shots.}
\begin{tabular}{rMM|MM}
\toprule
{} & NN-S & N-S & NN-G & N-G \EndTableHeader \\
\midrule
Nash & 0.383 & 0.617 & 0.317 & 0.683 \\
Empirical & 0.450 & 0.550 & 0.400 & 0.600 \\
\bottomrule
\end{tabular}
\setlength{\fboxrule}{0pt}
\fbox{$\epsilon$-Nash equilibrium: $\epsilon=4.86\%$}
\label{tab:egta_pv_c5}
\end{subtable}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{figs/egta/pv_goals_heatmap_full_events.png}
\caption{Heatmaps of goals by all kickers and kickers in individual clusters with respect to empirical probabilities. We exclude the goalkeeper cluster (Cluster~3) and the outlier cluster (Cluster~6) because of insufficient samples.
}
\label{fig:egta_pv_cluster_heatmap}
\end{figure}
While the previous section undertook a descriptive view of the penalty kick scenario (i.e., providing a high-level understanding of kicker and goalkeeper play probabilities), here we investigate whether we can find the best strategy for a player given the knowledge of the kicker's play style.
In game-theoretic terms, we conduct a prescriptive analysis of penalty kicks to enable informed decision-making for players and coaching staff in specific penalty kick situations.
Ideally, one would iterate the earlier empirical payoff analysis for every possible combination of goalkeeper and kicker in a given league, thus enabling decision-making at the most granular level;
however, the inherent sparsity of penalty kick data makes such an approach infeasible.
Instead, we introduce a meaningful compromise here by combining statistical learning with game theory (i.e., Frontier~1~(GT\&SL)\xspace), first quantifying individual playing styles, then using clustering techniques to aggregate players (i.e., both strikers and goalkeepers) based on said styles, and finally synthesizing empirical games for each identified cluster.
We focus our analysis on penalties including all players who participated in Premier League matches from 2016 to 2019.
On a technical level, our approach consists of the three following steps.
First, we characterize the playing style of a player in a manner that can be interpreted both by human experts and machine learning systems.
In particular, we use Player Vectors~\citep{DecroosD19} to summarize the playing styles of kickers using an 18-dimensional real-valued vector.
These Player Vectors are extracted from historical playing trajectories in real matches, with technical details provided in \cref{sec:player_vectors}.
Each dimension of the Player Vector corresponds to individual on-pitch player behaviors (e.g., styles of passes, take-ons, shots, etc.), and the value of each dimension is standardized and quantifies the weight of that particular action style for the considered player. We also restrict our analysis to experienced players with at least 50 appearances in Premier League matches from 2016 to 2019.
In total, we obtain 635 such vectors for the individual players in our dataset.
Second, we cluster players in accordance to their Player Vectors, using K-means with the number of clusters chosen as the value causing the most significant drop in inertia (a standard heuristic).
This process yields 6 clusters in total, with statistics summarized in \cref{tab:cluster_stats}.
In particular, K-means clustering detects an outlier cluster with only one player (Cluster~6), and we also observe that there are very few shot samples in Cluster~3, as it consists of a cluster of goalkeepers (an interesting artifact illustrating the ability of Player Vectors and K-means clustering to discern player roles).
Given the few samples associated with these two clusters, we henceforth exclude them from the game-theoretic analysis.
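A minimal Python sketch of this clustering step is given below, assuming a matrix of Player Vectors of shape $635 \times 18$ assembled as described above (a random stand-in is used here); the candidate range of cluster counts and the simple elbow rule implementing the largest-drop-in-inertia heuristic are illustrative assumptions.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def cluster_player_vectors(player_vectors, k_range=range(2, 11), seed=0):
    """Cluster 18-dimensional Player Vectors with K-means, choosing the number
    of clusters at the most significant drop in inertia (simple elbow rule)."""
    inertias, models = [], {}
    for k in k_range:
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(player_vectors)
        inertias.append(km.inertia_)
        models[k] = km
    drops = -np.diff(inertias)          # inertia decrease when adding one cluster
    best_k = list(k_range)[int(np.argmax(drops)) + 1]
    return models[best_k], best_k

# Toy stand-in for the real (635, 18) Player Vector matrix.
rng = np.random.default_rng(0)
player_vectors = rng.normal(size=(635, 18))
model, k = cluster_player_vectors(player_vectors)
print("chosen k:", k, "cluster sizes:", np.bincount(model.labels_))
\end{verbatim}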
We observe that cluster pairs (1, 2), (1, 4), (2, 4), and (2, 5) are significantly different, with the minimum cell-wise p-values for these cluster pairs smaller than 0.10 in~\cref{tab:cluster_stat_test}.
We therefore focus our game-theoretic analysis on these cluster pairs.
Moreover, we also qualitatively illustrate differences between the clusters in \cref{fig:striker_goalie_clusters,fig:striker_clusters}, which visualize the results of reducing the Player Vectors dimensionality from 18 to, respectively, 3 and 2 via Principal Component Analysis.
Here, we observe that the goalkeeper cluster is well-separated from the kicker clusters in \cref{fig:striker_goalie_clusters}, and in order to better visualize the kicker clusters, we project \cref{fig:striker_goalie_clusters} onto its x and y axis after removing the goalkeeper and outlier clusters in \cref{fig:striker_clusters}.
We also identify therein the most representative kicker per cluster (i.e., the player whose feature vector is closest to the mean of the corresponding cluster).
Finally, we conduct the aforementioned game-theoretic analysis for each cluster.
In our earlier \cref{tab:cluster_stats}, we observe that the kickers in some clusters have different success rates in penalty kicks.
Moreover, a closer behavioral analysis yields deeper insights.
We first examine the Nash strategies played by each cluster, and then visualize the actual play behavior with respect to empirical probabilities in \cref{fig:egta_pv_cluster_heatmap}.
\Cref{tab:egta_pv_all} summarizes the overall Nash distributions for all players considered, with \cref{tab:egta_pv_c1,tab:egta_pv_c2,tab:egta_pv_c4,tab:egta_pv_c5} showing cluster-specific distributions.
These tables illustrate that kickers in the different clusters exhibit statistically indistinguishable empirical behavior, an assertion confirmed in \cref{tab:empirical_p_value_table}; yet their Nash-derived recommendations differ: although kickers in all clusters are recommended by the Nash equilibrium to shoot more often to their natural side than to their non-natural side, the recommended strategy for kickers in Cluster~1 is notably more balanced between natural and non-natural shots.
This difference between clusters is quantified by the Jensen-Shannon divergence of their Nash probabilities: as seen in \cref{tab:cluster_stat_test}, the divergence between Cluster~1 and 4 (0.57\%) is 6-7 times greater than that between Cluster~1 and 5 (0.09\%), and 19 times greater than that between Cluster~1 and 2 (0.03\%).
We also notice that the players in each cluster are playing $\epsilon$-Nash equilibria with relatively low $\epsilon$ (\cref{tab:egta_pv_nash_emp}).
In other words, although their empirical strategies seem to deviate from the corresponding Nash strategies action-wise, the expected payoffs of these two strategies are close, and players could still stand to gain in ``stability'' by switching to the corresponding Nash strategy.
Nevertheless, most of these Nash recommendations come from very low-sample empirical payoff tables, which entails potentially inaccurate Nash distributions.
We note, however, that this low-data regime is induced by restricting our analysis to players who appeared in Premier League matches between 2016 and 2019.
Obtaining Player Vector data for all players in our dataset would allow us to study cluster behavior with greater statistical precision.
Nevertheless, the current study leaves no statistical doubt regarding the pertinence of clustering payoff tables using player embeddings--specifically Player Vectors.
Qualitatively, in addition to the differences in Nash-recommended strategies, the patterns of goal-scoring locations also vary from cluster to cluster, as visualized in \cref{fig:egta_pv_cluster_heatmap}. For instance, kickers in Cluster~2 tend to score mostly towards the bottom left corner of the goalmouth, while the scoring positions in other clusters are more balanced, though this could also be partly due to lower sample sizes for some clusters.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/ghosting/ghosting_before.png}
\caption{}
\label{fig:ghosting_before}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/ghosting/ghosting_after.png}
\caption{}
\label{fig:ghosting_after}
\end{subfigure}\\
\begin{subfigure}[b]{\textwidth}
\centering
\begin{tabular}{cccc}
\tikzcircle[black, fill=white]{4.0pt} Ball (truth) & \tikzcircle[white, fill=blue]{4.0pt} Attackers (truth) & \tikzcircle[white, fill=red]{4.0pt} Defenders (truth) & \tikzcircle[white, fill=yellow]{4.0pt} Defenders (predicted)
\end{tabular}
\end{subfigure}
\caption{Predictive modeling using football tracking data. \subref{fig:ghosting_before} visualizes predictions under the original data. Here, ground truth information for all players and the ball is provided to the underlying predictive model, with defender positions truncated and predicted by the model after a cut-off time (as indicated by the yellow traces). \subref{fig:ghosting_after} illustrates the same scenario, after counterfactual perturbation of the ground truth ball direction to ascertain the predicted reaction of the defending goalkeeper (far right).
}
\label{fig:ghosting}
\end{figure}
\subsection{Generative Trajectory Prediction Models for Counterfactual Analysis}\label{sec:predictive_models_counter}
Ghosting refers to the prescription of the trajectories the players in a sports team should have executed, in contrast to what they actually did~\citep{lowe_lights_2013}.
Solving this problem, and the broader class of generative trajectory prediction problems, implies benefits ranging from the recommendation of trajectories or setups for constrained set pieces, to short-term plays involving a subset of players, and eventually to long-term strategies and plays for the entire team.
Team-level predictions would also strongly benefit from game-theoretic and multi-agent considerations, and are expected to play a key role in an established AVAC system.
We here present an illustrative example to ground the earlier discussion regarding the potential impacts of using learned predictive models to conduct counterfactual analysis of football matches.
For example, one might train a trajectory prediction model on league data (e.g., as done in \citet{LeY0L17}), provide an input context to such a model (e.g., consisting of the true state of the ball, defenders, and attackers up to some point in time), and subsequently predict future trajectories of players.
\Cref{fig:ghosting_before} visualizes league-average predicted behaviors conditioned on such an input context.
This illustrative example was trained using a baseline predictive model, similar to that of \citet{Le17}.
Here we trained a centralized long short-term memory model (of 2 layers, each with 256 units), taking as input the raw trajectories of players and the ball, and predicting as output the step-wise change in trajectory of the defensive players.
The model was trained on 240 frames of 25~fps tracking data, downsampled to 12.5~fps, with half the frames in each play used to provide the prediction context, and the other half occurring after the prediction cut-off.
We used the $l_2$-loss on the tracking data for training, and randomized the order of attacking and defending players to avoid the role-assignment problem mentioned in \citet{Le17} (similar to one of the baseline approaches of \citet{yeh2019diverse}).
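A minimal version of such a baseline can be sketched in PyTorch as follows. The two-layer, 256-unit LSTM, the step-wise displacement targets and the $l_2$ loss follow the description above, while the feature layout, tensor shapes and training-loop details are assumptions of this illustration rather than the exact model used here.
\begin{verbatim}
import torch
import torch.nn as nn

class GhostingModel(nn.Module):
    # Two-layer LSTM (256 units) over raw player/ball coordinates,
    # predicting the step-wise (x, y) change for each defender.
    def __init__(self, n_players=22, n_defenders=11):
        super().__init__()
        in_dim = 2 * (n_players + 1)          # (x, y) for players + ball
        self.lstm = nn.LSTM(in_dim, 256, num_layers=2, batch_first=True)
        self.head = nn.Linear(256, 2 * n_defenders)

    def forward(self, tracks):                # tracks: (batch, time, in_dim)
        h, _ = self.lstm(tracks)
        return self.head(h)                   # (batch, time, 2 * n_defenders)

model = GhostingModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                        # l2 loss on predicted deltas

# Dummy batch standing in for downsampled tracking data.
tracks = torch.randn(8, 120, 2 * 23)
target_deltas = torch.randn(8, 120, 2 * 11)

pred = model(tracks)
loss = loss_fn(pred, target_deltas)
loss.backward()
opt.step()
\end{verbatim}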
As pointed out in the literature~\citep{Le17,LeY0L17,yeh2019diverse,li2020generative}, a key advantage of generative predictive models is that they can be used for counterfactual analysis of play outcomes.
We illustrate such an example in \cref{fig:ghosting_after}, where we perturb the trajectory of the ball, inferring the subsequent behaviors of defenders in reaction (noting, e.g., the tendency of the goalkeeper to chase the ball in reaction to it entering the penalty area).
While simple, case-by-case counterfactual studies such as the above have been conducted to some extent in the literature; consideration of responses to more complex perturbations (e.g., changes of one team's tactics or meta-strategy as a whole, changes in player behavior due to injuries, or changes due to substitutions of individual players) bears potential for significantly more in-depth analysis.
\section{Discussion}
Football analytics poses a key opportunity for AI research that impacts the real world.
The balance of its reasonably well-controlled nature (versus other physical domains beyond sports, e.g., search-and-rescue), considerations associated with human factors (e.g., heterogeneous skill sets, physiological characteristics such as injury risks for players, etc.), and the long-term cause-and-effect feedback loop due to the relative infrequency of scoring even in professional play make it a uniquely challenging domain.
Nonetheless, the rapidly-emerging availability of multi-modal sensory data makes it an ideal platform for the development and evaluation of key AI algorithms, particularly at the intersection of the aforementioned fields of statistical learning, computer vision, and game theory.
In this paper, we highlighted three frontiers at the intersection of the above fields, targeting the simultaneous advancement of AI and football analytics.
We highlighted the overlying goal of developing an Automated Video Assistant Coach (AVAC), a system capable of processing raw broadcast video footage and accordingly advising coaching staff in pre-, in-, and post-match scenarios.
We subsequently illustrated how the combination of game theory and statistical learning could be used to advance classical results in football analytics, with an in-depth case study using a dataset comprising over 15,000 penalty kicks, subsequently combined with the Player Vectors analysis of~\citet{DecroosD19} to discern kicking styles.
\begin{figure}[t!]
\centering
\includegraphics[width=\textwidth]{figs/multi-levelRLSA.pdf}
\caption{A multi-level view of football analytics cast as a reinforcement learning problem. We discern three levels: the top level aims to learn how to win championships by winning matches; the middle level optimizes for winning a match; finally, the bottom level seeks to optimize goal-scoring. The context between these various levels is shared in both a top-down and bottom-up fashion. }
\label{fig:RLview}
\end{figure}
A notable observation for future work focusing on prescriptive football analytics is that the domain and some of the state-of-the-art research bear key similarities to RL.
At a high level, the process of winning football championships can be cast as a sequential decision-making problem, with a concrete reward structure centered on three timescales of increasing abstraction:
scoring goals, winning matches, and subsequently winning championships. We illustrate this view in \cref{fig:RLview}.
Under this hierarchical view of football, each layer can be considered an RL problem at the designated level of abstraction.
For example, at the lowest level, the sequential decisions made by teammates that lead to a goal can be considered a policy mapping states to actions, using the lexicon of RL.
Likewise, estimates of the value of player actions based on the outcomes associated with actions taken in real games (as in VAEP~\citep{decroos2019actions}) can be considered analogous to those that learn action-values associated with RL policies.
Further expanding this analogy, learning to quantify the contribution of individual players to a team's estimated goal-scoring value can be cast as a so-called credit assignment problem, a key area of research in RL.
Finally, given the presence of multiple on-pitch players with both cooperative and competitive incentives, the value function learning problem situates itself in the area of multi-agent RL.
Multi-agent RL, critically, seeks to understand and learn optimal policies for agents in such interactive environments, linking also to game theory in providing the appropriate mathematical foundations to model this strategic process.
As such, the multi-agent RL approach fits well under Frontier~1~(GT\&SL)\xspace, which considers the game-theoretic interactions of strategic players given specified payoffs, and the use of learning techniques for identifying optimal policies.
Moreover, this connection also highlights a potential overlap of interest between real-world football and RoboCup, in that the RL paradigm can be used to optimize player and robot policies alike, despite the widely-different player embodiments considered in each of these two fields.
Overall, such parallels can be drawn at all levels of abstraction highlighted in the aforementioned hierarchical process modeling football championships, implying the foreseeable importance of the RL paradigm as football analytics shifts from understanding the game to subsequently optimizing player and team decisions at increasingly broader levels.
Moreover, the toolkits developed within the context of football analytics are also likely to have direct benefits for closely-related fields, and could be foreseeably adapted to many other sports.
One interesting extension concerns the application of football analytics techniques to the emerging field of eSports, wherein large amounts of data are collected in both raw video form and structured formats; such data streams are available for games such as Dota~2 and StarCraft.
In Dota~2, for example, a coaching functionality analogous to that in football is available, wherein an experienced player is connected to the game and advises other players on various strategic tactics.
Moreover, several of the most popular eSports games are inherently multi-player, in the sense that their outcomes are not determined by only an individual's skill, but a team's skill, mixing cooperative and competitive behaviors (as in football).
Automatic analysis of games could provide insights into weak and strong points of teams, tactics used, and directions for improvement.
These related domains could, therefore, provide low-hanging fruit for football analytics techniques to generalize, in a seamless manner, beyond football.
Overall, the combination of data sources, downstream benefits on related domains, and potentials for impact that AI could have on the football domain are quite evident.
Perhaps more importantly, the promising commensurate impacts of football analytics on AI research (through the feedback loop established between the football microcosm to the three foundational fields highlighted in \cref{fig:overview}) are foreseen to make football a highly appealing domain for AI research in coming years.
\section*{Acknowledgments}
The authors gratefully thank Thomas Anthony and Murray Shanahan for their helpful feedback during the paper writing process.
\begin{appendices}
\crefalias{section}{appendix}
\section{Additional Works Related to Statistical Learning in Football}\label{sec:additional_sl_works}
Evaluating the effect of individual actions throughout the game is challenging as they naturally depend on the circumstances in which they were performed and have long-term consequences that depend on how the sequence plays out. Most works have focused on measuring the quality of specific action types in distinct concrete game situations~\citep{barr2008evaluating,spearman2018beyond,bransen2018measuring}.
More recent work has focused on a unifying view in which actions are valued according to how they increase or decrease the likelihood of the play leading to a goal~\citep{decroos2019actions,Fernandez2019}.
The main idea is to estimate the value of a given `state' of the game. Intuitively, the state of a particular game includes everything that happened in the match until this point, including the score, identities of players and associated traits, time left on the clock, all prior actions, position of the players and the ball, etc.; moreover, one may wish to also consider the state of a tournament as a whole (e.g., previous and upcoming matches, the number of yellow cards accrued by players, etc.).
A recent method used for assigning values to on-ball actions is known as \textit{Valuing Actions by Estimating
Probabilities} (VAEP) \citep{decroos2019actions}.
Actions are valued by measuring their effect on the game state and, in turn, on the probability that a team will score. These scores can then be used to assess the contribution of players to a team or to measure the mutual chemistry of a pair of players \citep{bransen2020chemistry}.
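As a rough illustration of the idea (and not the full VAEP formulation, which contains further components), the sketch below values each on-ball action by the change it induces in estimated scoring and conceding probabilities; the probability estimates are hypothetical and would normally come from a learned model.
\begin{verbatim}
def action_values(p_score_before, p_score_after,
                  p_concede_before, p_concede_after):
    # Value an action by how much it increases the team's probability of
    # scoring and decreases its probability of conceding.
    values = []
    for sb, sa, cb, ca in zip(p_score_before, p_score_after,
                              p_concede_before, p_concede_after):
        values.append((sa - sb) - (ca - cb))
    return values

# Hypothetical model outputs for a three-action sequence (pass, dribble, shot).
print(action_values([0.02, 0.04, 0.08],
                    [0.04, 0.08, 0.70],
                    [0.01, 0.01, 0.01],
                    [0.01, 0.01, 0.00]))
\end{verbatim}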
Finally, a promising application of statistical learning is the development of models that can carry out temporal predictions.
This area is closely related to trajectory prediction \cite{wang2007gaussian,gupta2018social,fernando2018gd,deo2018convolutional,alahi2016social}.
In the context of sports analytics, such trajectory prediction models can be useful for conducting the form of analysis known as \emph{ghosting}, which, given a particular play, predicts the actions that a \emph{different} team or player would have executed.
Beyond just capturing game dynamics, models that can accurately carry out predictions could constitute valuable tools for counterfactual reasoning, which allows us to consider the outcomes of alternative scenarios that never actually took place.
So far, such predictive models have been primarily used for predicting the trajectory of the ball~\citep{maksai2016players} and of players themselves~\citep{Le17, LeY0L17,su2019graph,li2020generative,yeh2019diverse}.
Also of importance are models which identify player roles from predicted trajectories~\citep{FelsenLG18}.
\section{Pose Estimation}\label{sec:pose_estimation}
As previously illustrated, multi-person human pose estimation \citep{pavlakos2017coarse,pavlakos2019expressive,he2020epipolar,pavllo20193d,lassner2017unite,Cheng_2019_ICCV,iskakov2019learnable} is a central part of vision-based analysis of football video.
Methods for this task can be grouped into two types: on the one hand, bottom-up approaches first detect human joints, and group them into pose instances~\citep{pishchulin2016deepcut,insafutdinov2016deepercut,cao2017realtime,newell2017associative,papandreou2018personlab,kocabas2018multiposenet};
on the other, top-down approaches first detect body instances and run single-person pose estimation models on each instance~\citep{iqbal2016multi,fang2017rmpe,papandreou2017towards,huang2017coarse,he2017mask,sun2019deep}.
The computation cost of top-down methods increases linearly with the number of people in an image, while that of bottom-up methods stays constant.
However, in cases where there is significant overlap between instances, top-down approaches are often more accurate~\citep{chen2020monocular}.
We experimented with G-RMI~\citep{papandreou2017towards}, a well-established top-down approach, and give examples of predictions in \cref{fig:poses_penalty_kick}.
In the first stage, Faster R-CNN~\citep{ren2015faster} is used to detect person instances.
Inspired by detection methods, the second stage combines classification and regression to process each resulting crop:
a fully convolutional network first densely classifies whether each spatial position is in the vicinity of a given keypoint class, and then refines each prediction by predicting an offset.
A specialized form of Hough voting (see \citep{duda1972use} for background) is introduced to aggregate these predictions and form highly localized activation maps.
A key-point based confidence score and non-maximum suppression procedure further improve results.
We plan to build on this approach to develop methods for the previously mentioned challenges.
\section{Player Vectors}\label{sec:player_vectors}
In particular, we follow the definition of \textit{playing style} in~\citet{DecroosD19}, namely a player's preferred area(s) of the field to occupy and the actions they tend to perform in each of these locations, and we generate our player vectors with the method proposed in~\citet{DecroosD19}.
The procedure of generating player vectors unfolds into four steps.
First, we collect the event stream data of all Premier League matches that Liverpool Football Club participated in from 2017 to 2019, and keep only the actions of types pass, dribble, shot and cross.
Secondly, for each pair of player $p$ observed in the event stream dataset and relevant action type $t$, we overlay a grid of size $60 \times 40$ on the football pitch and count how many times player $p$ performed action type $t$ in each grid cell. This procedure yields a matrix which summarizes the spatial preference of player $p$ when performing action type $t$.
Thirdly, we compress each matrix into a small vector. To do this, we reshape each matrix into a vector and group it together with all other vectors of the same action type, and we then perform non-negative matrix factorization (NMF) to reduce the dimensionality of these matrices. This procedure yields a smaller vector, in which the value of each dimension quantifies the preference of player $p$ for performing action type $t$ in the corresponding area of the pitch.
Finally, for each player, we obtain 4 vectors corresponding to the 4 action types, and we generate one final vector of 18 dimensions by concatenating the compressed vectors for the relevant action types.
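A compact sketch of this pipeline is given below. It assumes the event data has already been aggregated into per-player heatmaps; the $60 \times 40$ grid and the use of non-negative matrix factorization follow the description above, while the particular split of the 18 dimensions across action types and the use of scikit-learn are assumptions of this illustration.
\begin{verbatim}
import numpy as np
from sklearn.decomposition import NMF

GRID = (60, 40)                               # pitch grid from the text
N_COMPONENTS = {"pass": 6, "dribble": 4, "shot": 4, "cross": 4}  # sums to 18

def player_vectors(heatmaps):
    # heatmaps: dict action_type -> array (n_players, 60, 40) of counts.
    parts = []
    for action, k in N_COMPONENTS.items():
        X = heatmaps[action].reshape(len(heatmaps[action]), -1)  # flatten grids
        W = NMF(n_components=k, init="nndsvda", max_iter=500).fit_transform(X)
        parts.append(W)                        # per-player coefficients
    return np.hstack(parts)                    # (n_players, 18)

# Dummy data for 50 players.
rng = np.random.default_rng(0)
heatmaps = {a: rng.poisson(1.0, size=(50, *GRID)).astype(float)
            for a in N_COMPONENTS}
print(player_vectors(heatmaps).shape)          # (50, 18)
\end{verbatim}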
\end{appendices}
\vskip 0.2in
\section{\uppercase{Introduction}}
\label{sec:introduction}
Over the last decade there has been an increase in interest towards analytics in football (soccer) and many other team sports. Increasing compute power and data have added to the effectiveness of statistical analysis and, more importantly, allowed for compute-intensive and data-intensive machine learning methods. Many success stories have been well documented in mainstream publications such as ``The Numbers Game'' \cite{Anderson2013TheWrong}, ``Basketball on Paper'' \cite{Oliver2020BasketballAnalysis} and, perhaps most well known, ``Moneyball'' \cite{Lewis2004Moneyball:Game}. As a result, a growing number of sports teams now adopt specialist roles for analytics. If such trends continue, it is likely that both compute power and the amount of available data will increase exponentially in forthcoming years. However, it will remain nearly impossible to collect real-world sport data in a scientific manner where variables can be controlled. This cannot be helped, since top-level sports are highly competitive in nature and leave very little room for experimentation. To solve this problem, agent-based simulation (ABS) can be used as a test-bed to simulate various scenarios in a scientific manner.
\begin{figure}[!tbp]
\centering
\begin{minipage}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{figures/first_page.png}
\caption{A representation of the agent setup where a single RL agent is used to control a single active player of a team. The illustration shows an image of the rendered environment \cite{Kurach2019GoogleEnvironment} with arrows pointing to the active players. Active players can be switched in-game to and from non-active players, which are controlled via another in-game rule-based system.}\label{rep}
\end{minipage}
\end{figure}
Recently, deep reinforcement learning (RL) methods have shown it is possible to train agents, from scratch, that outperform human experts in both traditional \cite{Silver2016MasteringSearch,silver2017mastering} and modern games \cite{Mnih2013PlayingLearning,alphastarblog,Berner2021DotaLearning}. These breakthroughs, coupled with increasingly sophisticated simulation environments, are a promising new direction of analysis in sports. Therefore, in this paper we examine the characteristics of football-playing RL agents and uncover how strategies may develop during training. Out of the many team sports that exist we choose to focus on football due to its popularity and the availability of a sufficient simulation environment (see \S2 for more detail). We use the Google Research Football environment \cite{Kurach2019GoogleEnvironment} to train football-playing RL agents in a single-agent manner. Fig.~\ref{rep} illustrates a representation of the training setup we used. Another problem concerning the use of ABS is that the domain gap between RL agents and real-world football players is not clear. To gain a better understanding of this domain gap, we compared the characteristics of football strategies of RL agents and real-world football players. In summary, the main contributions of the study are as follows:
\begin{itemize}
\item We compared the characteristics of football-playing RL agents \cite{Kurach2019GoogleEnvironment} at various stages of training with those of real-world football players for the first time, thus assessing simulation as a practical approach for football analysis.
\item We found that more competitive RL agents adopt a passing strategy that is more similar to that of real-world footballers and more well-balanced than that of less competitive RL agents.
\item We analyzed how the football strategies of RL agents evolve as the competitiveness of the agent increases. Strong correlations were found between many aggregated statistics / social network analysis metrics and the competitiveness of the agent.
\end{itemize}
\noindent The outline of this paper is as follows. \S2 provides background on agent-based simulation, deep RL and football analytics. \S3 and \S4 discuss the preliminaries and methods used to train deep RL-agents and the metrics used to analyse playing characteristics. We present results and discussions in \S5. Finally, we summarise our conclusions and future work in \S6.
\section{\uppercase{Related Works}}
\subsection{Agent-Based Simulation}
Agent-based simulation (ABS) is a computationally demanding technique for simulating dynamic complex systems and observing ``emergent'' behaviour. With the use of ABS, we can explore different outcomes of phenomena for which it is infeasible to conduct controlled tests and hypothesis formulations in real life. In the context of football we can use ABS to examine the effects of different formations on match outcomes or study various play styles using millions of simulated football games.
The availability of good simulation environments is critical to ABS. Fortunately, football has received a lot of attention in this field thanks to the long history of the RoboCup simulation track \cite{itsuki1995soccer}. In recent years, many other simulation environments have also been introduced \cite{Liu2019EmergentCompetition,Cao2020REINFORCEMENTCRITICS,Liu2021FromFootball}. Amongst others, the Google Research Football environment \cite{Kurach2019GoogleEnvironment} stands out as an interesting test-bed. Kaggle has held a competition with over a thousand teams participating\footnote{https://www.kaggle.com/c/google-football} and researchers have already started to develop methods to analyze football matches using Google Research Football via graphical tools \cite{PinciroliVago2020INTEGRA:Matches} or RL inspired metrics \cite{Garnier2021EvaluatingLearning}.
Therefore we choose to use the Google Research Football environment to conduct our simulations. It reproduces a full football match with all of its usual regulations and events, as well as player tiredness, misses, etc. We list an overview of available simulation environments in Table~\ref{sim-table}.
\begin{table*}[]
\caption{An overview of various football simulation environments.}\label{sim-table}
\begin{tabular}{| p{0.25\linewidth} | p{0.65\linewidth} |}
\hline
Environment & Description \\ \hline
RoboCup Soccer \cite{itsuki1995soccer} &
An 11 vs 11 soccer simulator. Agents receive noisy input from virtual sensors and perform some basic commands such as dashing, turning or kicking.\\ \hline
MuJoCo 2 vs 2 \cite{Liu2019EmergentCompetition} &
A 2 vs 2 football environment with simulated physics built on MuJoCo \cite{Todorov2012MuJoCo:Control}.
Uses relatively simple bodies with a 3-dimensional action space.\\ \hline
Unity 2 vs 2 \cite{Cao2020REINFORCEMENTCRITICS}&
A 2 vs 2 football environment built on unity. Two types of players with slightly different action spaces are available.\\ \hline
Google Research \cite{Kurach2019GoogleEnvironment} &
An 11 vs 11 soccer environment built on GameplayFootball. Simulates a full football game and includes common aspects such as goals, fouls, corners, etc. \\ \hline
Humanoid \cite{Liu2021FromFootball} &
A 2 vs 2 football environment with simulated physics built on MuJoCo \cite{Todorov2012MuJoCo:Control} designed to embed sophisticated motor control of the humanoid. Physical aspects such as the radius of the ball and goal size are adjusted in proportion to the height of the humanoid.\\ \hline
\end{tabular}
\end{table*}
\subsection{Deep Reinforcement Learning}
Deep RL is a subset of RL that combines the traditional reinforcement learning setup, in which agents learn optimal actions in a given environment, with deep neural networks. There have been many remarkable examples of agents trained via deep RL outperforming experts. A notable example is DeepMind's AlphaGo \cite{Silver2016MasteringSearch}. Its successors AlphaZero \cite{silver2018general} and MuZero \cite{Schrittwieser2020MasteringModel} achieved a superhuman level of play in the games of chess, shogi and go solely via self-play.
In contrast to the single-player, deterministic, perfect information setup of the classical games mentioned above, football is a highly stochastic imperfect information game with multiple players that form a team. Although these characteristics have made it difficult to learn through self-play, recent works have shown promising results in similarly categorised games such as Dota~2 and StarCraft. For example, OpenAI Five \cite{Berner2021DotaLearning} scaled existing RL systems to unprecedented levels, while performing ``surgery'' to utilise thousands of GPUs over multiple months. On the other hand, AlphaStar \cite{Vinyals2019GrandmasterLearning} populated a league consisting of agents with distinct objectives, and introduced agents that specifically try to exploit shortcomings in other agents and in the league. This allowed agents to train while continually adapting strategies and counter-strategies.
As for research directly related to football, robot soccer \cite{itsuki1995soccer} has been one of the longstanding challenges in AI. Although this challenge has been tackled with machine learning techniques \cite{Riedmiller2009ReinforcementSoccer,Macalpine2018Journal}, it has not yet been mastered by end-to-end deep RL. Nonetheless, baseline approaches for other simulation environments mostly utilise deep RL. \cite{Liu2019EmergentCompetition} used population-based training with evolution and reward shaping on a recurrent policy with a recurrent action-value estimator in MuJoCo Soccer, whereas \cite{Cao2020REINFORCEMENTCRITICS} showed that RL from hierarchical critics was effective in the Unity 2 vs 2 environment. Proximal Policy Optimization (PPO) \cite{Schulman2017ProximalAlgorithms}, IMPALA \cite{Espeholt2018IMPALA:Architectures} and Ape-X DQN \cite{Horgan2018DistributedReplay} were provided as benchmark results for Google Research Football \cite{Kurach2019GoogleEnvironment}. Finally, a combination of imitation learning, single and multi-agent RL and population-based training was used in Humanoid Football \cite{Liu2021FromFootball}.
Many researchers have attempted to model the behaviour of players by predicting the short-term future, in contrast to the long-horizon approach of deep RL \cite{le2017coordinated,Felsen2018WhereAutoencoders,Yeh_2019_CVPR}. Such research offers important insights into what architectures/time horizons/rewards may be effective.
\subsection{Football Analytics}
Football has been considered to be one of the most challenging sports to analyze due to the number of players, continuous events and low frequency of points (goals). Therefore, it is only recently that a data-driven approach has started to gain attention. Nevertheless, numerous approaches have been proposed, from the simple aggregation of individual/team play statistics \cite{Novatchkov2013ArtificialTraining} to complex methods, such as those that use gradient boosting to model the value of actions \cite{decroos2018actions}. In general one can observe two different types of analysis. The first focuses on evaluating the overall performance of a single player or team. In this case, an action is usually valued then aggregated by either player or team. \cite{decroos2018actions} assigned values to on-ball actions by measuring their effect on the probabilities that a team will score. In turn, \cite{fernandez2018wide} proposed a method to value off-the-ball actions by estimating pitch value with a neural network. The second category of analysis is strategy or play style analysis. Methods such as automatic formation \cite{Bialkowski2016DiscoveringData} or tactic \cite{Gyarmati2015AutomaticTeams,Decroos2018AutomaticData} discovery fall into this category. Social network analysis is also a well used method to analyse interactions between players \cite{Clemente2016SocialAnalysis,Buldu2018UsingGame}. Network metrics such as betweenness, centrality and eccentricity are often used. \cite{Pena2012AStrategies} demonstrated that winning teams presented lower betweenness scores. Similarly, \cite{Goncalves2017ExploringFootball} provided evidence that a lower passing dependency for a given player and higher intra-team well-connected passing relations may optimise team performance.
\section{\uppercase{Preliminaries}}
\begin{figure*}[t]
\includegraphics[width=\linewidth]{figures/overview.png}
\caption{An overview of the proposed framework. Steps (i) - (iv) are detailed in \S\ref{Agent Training and Ranking}, \S\ref{TrueSkill Ranking Implementation}, \S\ref{Data Extraction}, and \S\ref{Data Analysis} respectively. In (iii), data is converted to a tabular format inspired by SPADL \cite{Decroos2019ActionsSoccer}.} \label{overview}
\end{figure*}
\subsection{Proximal Policy Optimization}
To learn policies for agents to play Google Research Football, we follow the original paper \cite{Kurach2019GoogleEnvironment} and use Proximal Policy Optimisation (PPO) \cite{Schulman2017ProximalAlgorithms}. PPO belongs to a family of reinforcement learning called policy gradient methods. These methods try to find an optimal behaviour strategy by alternating between optimising a clipped surrogate objective function and sampling data through interactions with the environment. The objective function of PPO is denoted as follows,
\begin{dmath}
J(\theta) =
\mathbb{E} \big[
\min\big(
r(\theta) \, \hat{A}_{\theta_{old}}(s, a),
{clip}(r(\theta), 1 - \epsilon, 1 + \epsilon) \, \hat{A}_{\theta_{old}}(s, a)
\big)
\big]
\end{dmath}
\noindent where
\begin{itemize}
\item $r(\theta)$ is the probability ratio between old and new policies ${\pi_\theta(a \vert s)} / {\pi_{\theta_{old}}(a \vert s)}$.
\item $\pi_\theta(a \vert s)$ is a policy, given parameter $\theta$, state $s$ and action $a$.
\item ${clip}(r(\theta), 1 - \epsilon, 1 + \epsilon)$ clips $r(\theta)$ to be in the range of $1+\epsilon$ and $1-\epsilon$.
\item $\hat{A}(s, a)$ is an estimate of the advantage function $A(s, a) = Q(s, a) - V(s)$, given action-value function $Q(s, a)$ and state-value function $V(s)$.
\end{itemize}
Typically $J(\theta)$ is maximised via stochastic gradient ascent on the parameters $\theta$, using an optimiser such as Adam~\cite{Kingma2014Adam:Optimization}.
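As a concrete illustration, the clipped surrogate objective can be written in a few lines of PyTorch. This is a generic sketch of the PPO loss rather than the exact implementation used for our agents; the batch of log-probabilities and advantages is dummy data.
\begin{verbatim}
import torch

def ppo_loss(logp_new, logp_old, advantages, eps=0.2):
    # Clipped surrogate objective, negated because optimisers minimise.
    ratio = torch.exp(logp_new - logp_old)            # pi_theta / pi_theta_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()

# Dummy batch of log-probabilities and advantage estimates.
logp_new = torch.randn(64, requires_grad=True)
logp_old = torch.randn(64)
adv = torch.randn(64)
print(ppo_loss(logp_new, logp_old, adv))
\end{verbatim}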
\subsection{TrueSkill\texttrademark \ Ranking System}
To measure the competitiveness of the learned RL agents, the TrueSkill\texttrademark ~ranking system \cite{Herbrich2007TrueSkill:System} was used. The TrueSkill\texttrademark ~ranking system is a skill-based ranking system that quantifies a player's rating using Bayesian inference. It has been frequently used in many different multiplayer games and sports applications \cite{Tarlow2014KnowingUncertainty}. Although it also works well with $N$-player team games and free-for-all games, we focus our attention on the simplest case, a two-player match.
Each rating is characterised by a Gaussian distribution with mean $\mu$ and standard deviation $\sigma$. These values are updated based on the outcome of a game with the following update equations,
\begin{flalign}
&\mu_{winner} \leftarrow \mu_{winner} + \frac{\sigma^{2}_{winner}}{c} \cdot v (\frac{\mu_{winner} - \mu_{loser}}{c},\frac{\epsilon}{c}) \\
&\mu_{loser} \leftarrow \mu_{loser} - \frac{\sigma^{2}_{loser}}{c} \cdot v (\frac{\mu_{winner} - \mu_{loser}}{c},\frac{\epsilon}{c}) \\
&\sigma^{2}_{winner} \leftarrow \sigma^{2}_{winner} \cdot [ 1 - \frac{\sigma^{2}_{winner}}{c^2} \cdot w (\frac{\mu_{winner} - \mu_{loser}}{c},\frac{\epsilon}{c}) ] \\
&\sigma^{2}_{loser} \leftarrow \sigma^{2}_{loser} \cdot [ 1 - \frac{\sigma^{2}_{loser}}{c^2} \cdot w (\frac{\mu_{winner} - \mu_{loser}}{c},\frac{\epsilon}{c}) ] \\
&c^2 = 2\beta^2 + \sigma^2_{winner} + \sigma^2_{loser}
\end{flalign}
\noindent where $\epsilon$ is a configurable parameter that should be adjusted according to the likelihood of a draw, and $\beta^2$ is the variance of a player's performance around their skill. $v$ and $w$ are functions designed so that the weighting factors are roughly proportional to the uncertainty of the winner/loser vs. the total sum of uncertainties. We refer the reader to the original paper \cite{Herbrich2007TrueSkill:System} for further explanation. Finally, a so-called conservative skill estimate can be calculated as $\mu - k \sigma$, where $k$ is usually set to 3.
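In practice these updates need not be implemented by hand. The sketch below relies on the open-source trueskill Python package (an assumption of this illustration, not a component of the environment) to update two ratings after a single match and to compute the conservative estimate $\mu - 3\sigma$.
\begin{verbatim}
from trueskill import Rating, rate_1vs1

winner, loser = Rating(), Rating()            # package defaults: mu=25, sigma=25/3
winner, loser = rate_1vs1(winner, loser)      # update after one match

def conservative(r, k=3):
    return r.mu - k * r.sigma                 # mu - 3 * sigma

print(conservative(winner), conservative(loser))
\end{verbatim}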
\subsection{Social Network Analysis}\label{social network analysis}
To analyse the intelligence of coordinated RL agents and compare their characteristics with real-world data, an analysis framework that is not influenced by physical differences between simulations and the real world is necessary. Passing does not rely on individual physical ability and is an important component of team play. Therefore we focus on social network analysis (SNA) of passes.
A pass network is a weighted directed graph that considers the direction and frequency of passes between two players. It takes the form of an adjacency matrix $A$ and weight matrix $W$. $A_{ij}$ represents the number of passes from player $i$ to player $j$, and $W_{ij}$ is simply $1/A_{ij}$ if $i\neq j$ or $0$ otherwise. Below, we explain the three metrics used in this paper.
\noindent\textbf{Closeness Centrality.} Closeness is calculated by computing the sum of all the geodesic (shortest) paths between the node $v$ and all other nodes $w \in V$ in the following equation.
\begin{dmath}
Closeness(v) = \frac{1}{\sum_{w \in V}\sigma_{vw}}
\end{dmath}
\noindent where $\sigma_{vw}$ is defined as the shortest distance between nodes $v$ and $w$.
This score indicates how easy it is for a player to be connected with teammates. Therefore a high closeness score indicates that a player is well-connected within the team.
\noindent\textbf{Betweenness Centrality.} Betweenness is calculated by counting the total number of geodesic paths linking two nodes $s$ and $t$ and the number of those paths that pass through node $v$, as in the following equation.
\begin{dmath}
Betweeness(v) = \sum_{s \neq v \in V}\sum_{t \neq v \in V} \frac{\sigma_{st}(v)}{\sigma_{st}}
\end{dmath}
\noindent where $\sigma_{st}(v)$ is the number of shortest paths from node $s$ to node $t$ that passes node $v$.
This score indicates how a player acts as a bridge between passing plays; a high deviation within a team may indicate a well-balanced passing strategy and less dependence on a single player.
\noindent\textbf{Pagerank Centrality.} Pagerank is calculated based on the total number of passes a player made, according to the following equation.
\begin{dmath}
Pagerank(v) = p \sum_{v\neq w}\frac{A_{vw}}{L_{w}^{out}}Pagerank(w)+q
\end{dmath}
\noindent where $p$ represents the probability that a player will decide not to pass the ball and $q$ can be thought of as ``free popularity'', both of which are heuristic parameters. These parameters are set to $p=0.85$ and $q=1$ following \cite{Pena2012AStrategies}. A high pagerank score implies that the player is a popular choice for other players to pass to.
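These three metrics can be computed directly from a pass-count matrix with the networkx library, as sketched below. The way weights are passed to each function (distances $1/A_{ij}$ for closeness and betweenness, raw pass counts for pagerank) is one plausible reading of the definitions above and should be treated as an assumption, as should the toy pass matrix.
\begin{verbatim}
import networkx as nx
import numpy as np

def pass_network_metrics(A):
    # A[i][j] = number of passes from player i to player j.
    A = np.asarray(A, float)
    G = nx.DiGraph()
    n = len(A)
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(n):
            if i != j and A[i, j] > 0:
                G.add_edge(i, j, weight=A[i, j], distance=1.0 / A[i, j])
    return {
        "closeness": nx.closeness_centrality(G, distance="distance"),
        "betweenness": nx.betweenness_centrality(G, weight="distance"),
        "pagerank": nx.pagerank(G, alpha=0.85, weight="weight"),
    }

# Toy 4-player pass matrix.
A = [[0, 5, 2, 0], [3, 0, 4, 1], [1, 2, 0, 6], [0, 1, 3, 0]]
print(pass_network_metrics(A))
\end{verbatim}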
\section{\uppercase{Proposed Analysis Framework}}
In this section, we present the details of our proposed analysis framework, which is outlined in Fig.~\ref{overview}, and the details regarding the setup of the subsequent experiments.
Our framework consists of five parts. In the first part (i), we train agents using proximal policy optimisation in the Google Research Football simulation environment. (ii) Then, we rank the agents using the TrueSkill ranking system. In the third part (iii), we extract event data concerning on-the-ball actions from the simulations and convert it into a tabular format. This format is similar to the Soccer Player Action Description Language (SPADL) but simplified to only include passes and shots. We also convert real-world football data into the same format. Finally, we perform (iv) correlation analysis and (v) social network analysis on the obtained data.
\subsection{Agent Training and Ranking}\label{Agent Training and Ranking}
In order to train agents, we closely follow the setup of the baseline agents for the Google Research Football environment presented in \cite{Kurach2019GoogleEnvironment}. An agent controls a single active player at all timesteps and has the ability to switch to control any other player on the same team (excluding the goalkeeper). Non-active players are controlled via another in-game rule-based system. In this system, the behaviour of the non-active players corresponds to simple actions such as running towards the ball when not in possession, or moving forward together with the active player when in possession. Hence, the players can be regarded as being centrally controlled. In this paper we consider multi-agent RL to be out of scope and hope to pursue such a setup in the future.
\subsubsection{Deep RL Implementation}
The training pipeline is as follows. First, we reproduce the results presented in \cite{Kurach2019GoogleEnvironment} by using the same hyper-parameter/training setup. The deep RL agent uses the PPO algorithm \cite{Schulman2017ProximalAlgorithms} as described in \S3.1, with an Impala policy \cite{Espeholt2018IMPALA:Architectures}. The architecture is shown in Fig.~\ref{architecture}.
Each state of the simulation is represented by a Super Mini Map (SMM) based on \cite{Kurach2019GoogleEnvironment}. The SMM consists of four $72 \times 96$ matrices, each a binary representation of the locations of the home team players, the away team players, the ball and the active player, respectively. A visualisation can be found in Fig.~\ref{smm}. The actions available\footnote{See https://git.io/Jn7Oh for a complete overview of observations and actions} to the central control agent are displayed in Table~\ref{actions}. Each movement action is sticky, therefore once executed, the action will persist until there is an explicit stop action.
\begin{table}[!h]
\caption{Set of Actions}
\label{actions}
\begin{tabular}{ccc}
\hline
Top & Bottom & Left \\
Right & Top-Left & Top-Right \\
Bottom-Left & Bottom-Right & Shot \\
Short Pass & High Pass & Long Pass \\
Idle & Sliding & Dribble \\
Stop-Dribble & Sprint & Stop-Moving \\
Stop-Sprint & - & - \\
\hline
\end{tabular}
\end{table}
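To illustrate the observation format, the sketch below assembles a stacked SMM observation from raw positions. The four binary $72 \times 96$ planes and the stack of four frames follow the description above, whereas the coordinate-to-cell mapping and the pitch coordinate ranges are simplifying assumptions of this illustration.
\begin{verbatim}
import numpy as np

H, W = 72, 96                                  # SMM resolution

def to_cell(x, y):
    # Map assumed pitch coordinates in [-1, 1] x [-0.42, 0.42] to grid indices.
    col = int((x + 1) / 2 * (W - 1))
    row = int((y + 0.42) / 0.84 * (H - 1))
    return min(max(row, 0), H - 1), min(max(col, 0), W - 1)

def smm_frame(home_xy, away_xy, ball_xy, active_xy):
    # Four binary planes: home players, away players, ball, active player.
    planes = np.zeros((4, H, W), dtype=np.uint8)
    for plane, points in enumerate([home_xy, away_xy, [ball_xy], [active_xy]]):
        for x, y in points:
            r, c = to_cell(x, y)
            planes[plane, r, c] = 1
    return planes

# Stack of the four most recent frames, as used for the CNN input.
frames = [smm_frame([(0.1, 0.0)], [(-0.2, 0.1)], (0.0, 0.0), (0.1, 0.0))
          for _ in range(4)]
observation = np.concatenate(frames, axis=0)   # shape (16, 72, 96)
print(observation.shape)
\end{verbatim}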
Rewards are based on whether a goal is conceded, scored, or neither. In addition to this goal-based reward, a small ``checkpoint'' reward is used to aid early training, where goals are sparse. We refer the reader to~\cite{Kurach2019GoogleEnvironment} for a more in-depth description of possible training setups.
Based on the above setup, in this paper we started by training for 50 million time-steps against the built-in easy, medium and hard level bots. During this phase, we noticed that the performance of the agents had not converged. Therefore, we trained for an extra 50 million time-steps against the easy and medium bots and an extra 150 million time-steps against the hard-level bot. The average goal difference for the resulting agents at 50, 100 and 200 million time-steps is presented in Table~\ref{train-results}.
\begin{table}[h]
\centering
\caption{Average Goal Difference.}\label{train-results}
\begin{tabular}{llll}
\toprule
\textbf{Bot Level} & \textbf{50M} & \textbf{100M} & \textbf{200M} \\ \midrule
Easy & 5.66 & 8.20 & - \\
Medium & 0.93 & 2.35 & - \\
Hard & -0.08 & 1.25 & 2.81 \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure}[t]
\includegraphics[width=\linewidth]{figures/architecture.png}
\caption{An overview of the architecture used for the PPO agents \cite{Kurach2019GoogleEnvironment}. A stack of four previous frames (see Fig.~\ref{smm}) is used as input.} \label{architecture}
\end{figure}
\begin{figure}[t]
\includegraphics[width=\linewidth]{figures/smm.png}
\caption{Overview of super mini map
\cite{Kurach2019GoogleEnvironment}. Left: A stack of four previous frames used as input for the CNN. Right: A visualisation of an example stacked mini map representation.} \label{smm}
\end{figure}
\subsubsection{TrueSkill Ranking Implementation} \label{TrueSkill Ranking Implementation}
To implement the TrueSkill ranking, we create a round-robin tournament composed of 15 agents (5 from each setup: easy, medium and hard) using intermediate checkpoints saved at 20\%, 40\%, 60\%, 80\% and 100\% of training. In a single round-robin tournament, each agent plays every other agent once. We conducted a total of 50 round-robin tournaments, resulting in a total of 5250 matches. Next, we use the resulting scores of all 5250 matches to calculate a TrueSkill rating for each agent. We show the top-3 / bottom-3 ranked agents of the resulting leaderboard in Table~\ref{leaderboard}. Notice that the agents trained against the easy level built-in bot rank first, second and third. This result seems counterintuitive, since agents trained longer against stronger built-in bots should be more competitive. Therefore this suggests that there could be better training strategies. However, exploring alternative training strategies is out of scope for this work and shall be left for future work.
\begin{table}[h]
\centering
\caption{TrueSkill ratings top/bottom-3}\label{leaderboard}
\begin{tabular}{llll}
\toprule
\textbf{Ranking} & \textbf{Bot Level} & \textbf{Checkpoint \%} & \textbf{rating} \\ \midrule
1 & Easy & 80\% & 34.1 \\
2 & Easy & 100\% & 31.5 \\
3 & Easy & 40\% & 31.5 \\
&&...&\\
13 & Easy & 20\% & 8.3 \\
14 & Hard & 20\% & 7.9 \\
15 & Medium & 20\% & 7.0 \\
\bottomrule
\end{tabular}
\end{table}
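The tournament book-keeping can be reproduced in a few lines of Python. The sketch below replaces the actual Google Research Football matches with a placeholder function, so it only illustrates the pairing structure (105 pairings per round robin, 5250 matches over 50 repetitions) and the rating updates, again assuming the open-source trueskill package.
\begin{verbatim}
import itertools
import random
from trueskill import Rating, rate_1vs1

agents = [f"agent_{i}" for i in range(15)]     # 5 checkpoints x 3 setups
ratings = {a: Rating() for a in agents}

def play_match(a, b):
    # Placeholder for running a Google Research Football match.
    return random.choice([a, b])

n_matches = 0
for _ in range(50):                            # 50 round-robin tournaments
    for a, b in itertools.combinations(agents, 2):
        winner = play_match(a, b)
        loser = b if winner == a else a
        ratings[winner], ratings[loser] = rate_1vs1(ratings[winner],
                                                    ratings[loser])
        n_matches += 1
print(n_matches)                               # 50 * C(15, 2) = 5250
\end{verbatim}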
\subsection{Data Extraction}\label{Data Extraction}
Action data and observation data are extracted from the games saved when calculating the TrueSkill ratings. From this data, we extract all pass and shot actions and programmatically label their outcomes based on the subsequent events. For real-world football data, we use event-stream data for five matches from the 2019-2020 J1-League, played between three teams. The J1-League is the top division of the Japan professional football league. The data was purchased from DataStadium Inc. We show the match results in Table~\ref{game-info}. The three teams, Kashima Antlers, FC Tokyo and Yokohama F Marinos, were chosen since they were the top-3 teams on the leaderboard at the time.
\begin{table}[h]
\caption{Details of the real-world football data used.}\label{game-info}
\begin{tabular}{@{}llll@{}}
\toprule
\textbf{Date} &\textbf{Home Team} & \textbf{Score} & \textbf{Away Team} \\ \midrule
2019/04/14 & FC Tokyo & (1-3) & \begin{tabular}[c]{@{}l@{}}Kashima \\ Antlers\end{tabular} \\ \midrule
2019/04/28 & \begin{tabular}[c]{@{}l@{}}Yokohama \\ F Marinos\end{tabular} & (2-1) & \begin{tabular}[c]{@{}l@{}}Kashima \\ Antlers\end{tabular} \\ \midrule
2019/06/29 & FC Tokyo & (4-2) & \begin{tabular}[c]{@{}l@{}}Yokohama \\ F Marinos\end{tabular} \\ \midrule
2019/08/10 & \begin{tabular}[c]{@{}l@{}}Kashima \\ Antlers\end{tabular} & (2-1) & \begin{tabular}[c]{@{}l@{}}Yokohama \\ F Marinos\end{tabular} \\ \midrule
2019/09/14 & \begin{tabular}[c]{@{}l@{}}Kashima \\ Antlers\end{tabular} & (2-0) & FC Tokyo \\ \bottomrule
\end{tabular}
\end{table}
We also extract all pass and shot actions from this data. The resulting format of both the simulation and real-world data is tabular, a simplified version of SPADL \cite{Decroos2019ActionsSoccer}. An explanation of the variables used in the analysis is listed in Table~\ref{variables}.
\begin{table}[!h]
\centering
\caption{Explanation of variables used in analysis.}\label{variables}
\begin{tabular}{l|l}
\toprule
\textbf{Variables} & \textbf{Explanation} \\ \midrule
Shots & Number of shot attempts. \\
Passes & Number of pass attempts. \\
PageRank & See \S\ref{social network analysis} PageRank Centrality. \\
Closeness & See \S\ref{social network analysis} Closeness Centrality. \\
Betweenness & See \S\ref{social network analysis} Betweenness Centrality. \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Data Analysis}\label{Data Analysis}
Two types of football analysis are applied to the extracted data. We first focus on finding statistics and metrics that correlate with the agents' TrueSkill ratings. For this we calculate simple descriptive statistics, such as the number of passes/shots, and social network analysis (SNA) metrics, such as closeness, betweenness and pagerank. As explained in \S\ref{social network analysis}, SNA was chosen because it describes a team's ball-passing strategy and is therefore well suited to the analysis of centrally controlled RL agents. We calculate the Pearson correlation coefficient and the $p$-value for testing non-correlation. The following criteria were used to interpret the magnitude of correlation: values less than 0.3 were interpreted as trivial; between 0.3 and 0.5 as moderate; between 0.5 and 0.7 as strong; between 0.7 and 0.9 as very strong; more than 0.9 as nearly perfect. A $p$-value less than 0.05 is considered statistically significant; any result above this threshold is deemed unclear.
Our second focus is the comparison of SNA metrics between RL agents and real-world football data. By using SNA metrics, we can compare the ball-passing strategy of RL agents and real-world football teams. To ensure fairness, we bootstrap $N=500$ samples of passes from each team before generating a pass network to analyse. We repeat this process 50 times. Then, we conduct normality tests to check whether the resulting distributions are Gaussian. Finally, we plot and visually inspect the distributions.
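The two analysis steps can be sketched as follows. The arrays stand in for the real aggregated metrics and pass lists; the correlation test and the bootstrap of $N=500$ passes repeated 50 times follow the procedure described above.
\begin{verbatim}
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# (i) Correlation between a metric and TrueSkill ratings (placeholder data).
trueskill_ratings = rng.normal(20, 8, size=15)
metric = 0.5 * trueskill_ratings + rng.normal(0, 4, size=15)
r, p = pearsonr(metric, trueskill_ratings)
print(f"r = {r:.2f}, p = {p:.3f}")

# (ii) Bootstrap distribution of an SNA metric from a list of passes.
def bootstrap_metric(passes, metric_fn, n_samples=500, n_repeats=50):
    values = []
    for _ in range(n_repeats):
        sample = rng.choice(len(passes), size=n_samples, replace=True)
        values.append(metric_fn([passes[i] for i in sample]))
    return np.array(values)
\end{verbatim}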
\section{\uppercase{Results and Discussion}}
In this section, we show the results of the two types of data analysis detailed in \S\ref{Data Analysis}. The first is a correlation analysis between descriptive statistics / SNA metrics and TrueSkill rankings. The second is a comparative analysis which uses SNA metrics generated from RL agents (Google Research Football) and real-world football players (2019-2020 season J1-League).
\subsection{Correlation Analysis}
For each team an agent controls, descriptive statistics and SNA metrics were calculated using the variables listed in Table~\ref{variables}. The Pearson correlation coefficients are shown in Table~\ref{sna_correlation}.
\begin{table}[h]
\centering
\caption{Correlation coefficients and p-values for each metric. Metrics with very strong and nearly perfect correlation are emphasised in bold.}
\label{sna_correlation}
\begin{tabular}{l|l|l}
\toprule
\textbf{Metric} &
\begin{tabular}[x]{@{}c@{}}\textbf{Correlation}\\\textbf{Coefficient}\end{tabular}
& \textbf{$p$-value} \\ \midrule
Total Passes & -0.5 & 0.061 \\ \midrule
\textbf{Total Shots} & \textbf{0.77} & \textbf{0.001} \\ \midrule
Successful Pass Pct & 0.62 & 0.014 \\ \midrule
Successful Shot Pct & 0.68 & 0.005 \\ \midrule
PageRank (std) & 0.58 & 0.022 \\ \midrule
PageRank (mean) & -0.05 & 0.848 \\ \midrule
PageRank (max) & 0.48 & 0.068 \\ \midrule
\textbf{PageRank (min)} & \textbf{-0.91} & \textbf{0.001} \\ \midrule
Closeness (std) & -0.54 & 0.036 \\ \midrule
Closeness (mean) & -0.64 & 0.010 \\ \midrule
Closeness (max) & -0.61 & 0.015 \\ \midrule
Closeness (min) & -0.66 & 0.007 \\ \midrule
Betweenness (std) & 0.65 & 0.009 \\ \midrule
\textbf{Betweenness (mean)} & \textbf{0.72} & \textbf{0.002} \\ \midrule
Betweenness (max) & 0.65 & 0.009 \\ \midrule
Betweenness (min) & 0.0 & 0.0 \\
\bottomrule
\end{tabular}
\end{table}
\noindent As can be seen in Table~\ref{sna_correlation}, many of the descriptive statistics and SNA metrics have a strong correlation with TrueSkill rankings. We observe that "Total Shots" and "Betweenness (mean)" have a very strong positive correlation with TrueSkill rankings. On the other hand, "PageRank (min)" has a nearly perfect negative correlation.
The metric with the largest overall correlation is the pagerank aggregated by the minimum value in the network ($r=-0.91$, $p=0.001$). We present a scatter plot of this metric in Fig.~\ref{pagerank}.
\begin{figure}[!h]
\centering
\begin{minipage}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{figures/pagerank_min.png}
\caption{Pagerank aggregated by the minimum value in the network.}\label{pagerank}
\end{minipage}
\hfill
\end{figure}
Since pagerank roughly assigns to each player the probability that they will have the ball after an arbitrary number of passes, the node with the minimum pagerank centrality is likely to be the goalkeeper, whom we assume the agent quickly learns to keep the ball away from. Another interesting finding is the strong positive correlation with the standard deviation of betweenness ($r=0.65$, $p=0.009$). This metric is also presented as a scatter plot in Fig.~\ref{betweenness}.
\begin{figure}[!h]
\centering
\begin{minipage}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{figures/Betweenness_std.png}
\caption{Betweenness aggregated by the standard deviation.}\label{betweenness}
\end{minipage}
\hfill
\end{figure}
A large variance in betweenness has been demonstrated to be associated with a well-balanced passing strategy and less dependence on specific players \cite{Clemente2016SocialAnalysis}. It is fascinating that the agents learn to prefer a well-balanced passing strategy as their TrueSkill rating increases. In general, most of the metrics presented in Table~\ref{sna_correlation} have either a negative or positive moderate-to-strong correlation with $p < 0.05$.
\subsection{Comparative Analysis Between Simulated and Real-world Football}
As explained in \S\ref{Data Extraction}, for each of the five real-world football matches played by the three teams, we calculated the distribution of SNA metrics. Distributions were calculated by bootstrapping $N=500$ samples of passes 50 times. The same procedure was applied to the matches played by the best and worst ranked agents (see Table~\ref{leaderboard}). In Fig.~\ref{comparison_dist} we visualise each of the three SNA metrics aggregated by two different methods. Aggregation methods that showed strong correlations in Table \ref{sna_correlation} were chosen. The total number of passes and shots per match cannot be fairly compared between RL agents and real-world footballers because of different match lengths.
In summary, a total of six variables were compared over five agents/teams (worst RL agent, best RL agent, FC Tokyo, Kashima Antlers and Yokohama F Marinos).
\begin{figure}[!h]
\includegraphics[width=\linewidth]{figures/real_data_comparison.png}
\caption{Comparison of SNA metrics between best/worst agents and real-world football teams. } \label{comparison_dist}
\end{figure}
Observing this visualisation we can see that the distributions of the ``Betweenness (mean)'', ``Betweenness (std)'' and ``Closeness (std)'' metrics for the worst agent are distant from the others. The fact that the best agent's distributions of the same metrics are much closer to those of the J League teams implies that the agent has learnt to play in a similar style through RL. However, the same cannot be said for the other metrics, ``Closeness (mean)'', ``PageRank (std)'' and ``PageRank (min)''.
From the perspective of football analysis, the distribution of ``Betweenness (std)'' is very interesting. Since a high deviation in betweenness may indicate a well-balanced passing strategy and less dependence on a single player, we can hypothesise that agents are learning to play a more well-balanced passing strategy, similar to real-world footballers.
Although it is difficult to interpret the results from the PageRank and Closeness metrics, it is surprising that even the worst RL agents have distributions overlapping with those of the real-world footballers. Considering the fact that even the worst RL agent was trained for thousands of timesteps, this may be because strategies related to PageRank and Closeness are easier to learn.
\section{\uppercase{Conclusions and Future work}}
In this paper, we compared the characteristics and play styles of RL agents of increasing competitiveness. As a result, we found many metrics that strongly correlate with the competitiveness (TrueSkill rating) of an agent. Another contribution of this paper is the comparison between RL agents and real football players. Our findings suggest that an RL agent can learn to play football in a style similar to that of real players without being explicitly programmed to do so.
There are many directions in which we can extend the research presented in this paper. In particular, we plan to work on increasing the degrees of freedom within the simulations to create a more realistic environment. This can be achieved by conducting multi-agent simulations where an RL agent controls a single active player rather than a whole team. Another approach would be to use a less restrictive environment such as the ``Humanoid Football'' environment to introduce biomechanical movements. Although both approaches appear interesting, improvements in training methodology, such as imitation learning and auto-curricular learning, may be required to produce adequate agents.
We also noticed that it was difficult to use state-of-the-art football analysis methods due to different representations of the underlying data. Since efficient representations such as SPADL already exist, we hope other researchers can build on top of these so that the community can easily take advantage of existing methods.
\bibliographystyle{apalike}
{\small
"attr-fineweb-edu": 1.919922,
"attr-cc_en_topic": 0,
"domain": "arxiv"
} |
\section{Introduction}
The 2014/15 IBU Biathlon World Cup took place over a series of races in the winter of 2014/15. At the end of each race the competitors were ranked in the order they finished, and assigned a number of points based on their position in that order. The scores for the first thirteen positions are 60, 54, 48, 43, 40, 38, 36, 34, 32, 31, 30, 29, 28. The scores the athletes won are summed across the races, and the one with the highest total score is declared the winner -- a procedure that is known as a scoring rule.
The Women's Pursuit category consists of seven races. Kaisa M\"ak\"ar\"ainen came first with two first place finishes, two second, a third, a fourth, and a twelfth, for a total score of 348 points. Second was Darya Domracheva, with four first place finishes, one fourth, a seventh, and a thirteenth, for a total score of 347. In tenth place was Ekaterina Glazyrina, well out of the running with 190 points. These results are given in Table~\ref{table:Glazyrina}.
\begin{table}[ht!]
\centering
\caption{2014/15 Biathlon -- Women's Pursuit: scoring system and event results}
\begin{tabular}{lccccccccccccccc}
\toprule
Position&1&2&3&4&5&6&7&8&9&10&11&12&13&$\cdots$&40\\
\hline
Points&60&54&48&43&40&38&36&34&32&31&30&29&28&$\cdots$&1\\
\bottomrule
\end{tabular}
\begin{tabular}{lcccccccr}
\\
\toprule
Athlete&\multicolumn{7}{c}{Event number: points}&Total\\
&1&2&3&4&5&6&7&score\\
\hline
M\"ak\"ar\"ainen&60&60&54&48&54&29&43&348\\
Domracheva&43&28&60&60&60&36&60&347\\
$\cdots$&&&&&&&&\\
Glazyrina&32&54&10&26&38&20&10&190\\
\bottomrule
\end{tabular}
\label{table:Glazyrina}
\end{table}
Four years later, Glazyrina was disqualified for doping violations, and all her results from 2013 onwards were annulled. This bumped Domracheva's thirteenth place finish in race two into a twelfth, and her total score to 348. The number of first place finishes is used as a tie breaker, and in March 2019 the official results implied that M\"ak\"ar\"ainen would be stripped of the
trophy in favour of Domracheva, all because the tenth-place competitor was disqualified for doping four years after the fact.\footnote{After the disqualification of Glazyrina, the IBU eventually decided to award the 2014/15 Pursuit Globe to both Domracheva and M\"ak\"ar\"ainen (\url{https://biathlonresults.com}). While this keeps both athletes happy, it is clearly an ad-hoc solution. At the time of writing, another biathlete (Olga Zaitseva) is under investigation for doping. If Zaitseva is to be struck from the 2013/14 IBU Biathlon World Cup protocols, then Tora Berger will overtake M\"ak\"ar\"ainen's score for the Big Crystal Globe. As this is the most important trophy for the season, it is doubtful that the problem can be resolved by awarding it to two athletes (\url{https://web.archive.org/web/20171204023713/https://www.biathlonworld.com/news/detail/ibu-press-release-5}).}
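To make the arithmetic concrete, the short Python sketch below recomputes the two athletes' totals using only the point values and finishing positions quoted in Table~\ref{table:Glazyrina}; it is purely illustrative and simply shows how annulling Glazyrina's results bumps Domracheva's thirteenth place in race two up to a twelfth.
\begin{verbatim}
points = {1: 60, 2: 54, 3: 48, 4: 43, 5: 40, 6: 38, 7: 36,
          8: 34, 9: 32, 10: 31, 11: 30, 12: 29, 13: 28}

# Finishing positions in the seven races, read off Table 1.
makarainen = [1, 1, 2, 3, 2, 12, 4]
domracheva = [4, 13, 1, 1, 1, 7, 1]

def total(positions):
    return sum(points[p] for p in positions)

print(total(makarainen), total(domracheva))    # 348, 347

# Annulling Glazyrina's results moves Domracheva up one place in race two.
domracheva[1] = 12
print(total(makarainen), total(domracheva))    # 348, 348: a tie
\end{verbatim}
With the totals tied, the number of first place finishes decides the title, which favours Domracheva.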
Clearly there is something unsatisfying about this. We would hope that the relative ranking of M\"ak\"ar\"ainen and Domracheva depends solely on the relative performance of the two athletes, and not on whether or not a third party was convicted of doping, especially if said third party was not a serious contender for the title.
In this paper we will discuss a few classical results from social choice theory which describe why this is impossible, and suggest the best possible course of action left to us given the mathematical impossibilities we face. This involves introducing two extremely weak independence axioms, which turn out to characterise a one-parameter family of scoring rules -- the geometric scoring rules.
\subsection*{The science of impossibility}
Suppose we have a set of $m$ athletes and a profile of $n$ races, $R_1,\dots,R_n$, with $R_i$ being the order in which the athletes finished race~$i$. What we are after is a procedure which will map the $n$ race results into a single ranking for the entire competition, $R$: a function $(R_1,\dots,R_n)\mapsto R$. The result of this ranking rule must reflect the results of the individual races in some way. A minimal condition is \textbf{unanimity} – if an athlete finishes first in every race, we should expect this athlete to rank first in $R$. Motivated by the scenario in the introduction, we also want the relative ranking of athletes $a$ and $b$ in the end result to depend only on the relative ranking of $a$ and $b$ in the individual races, a condition known as the \textbf{independence of irrelevant alternatives}. Here we hit the most famous result in social choice theory -- Arrow's result that the only function that meets our criteria is dictatorship \citep[p.52]{Arrow50,CampbellKelly02}.\footnote{If we want not to determine an aggregate ranking but only to select a single winner, then unanimity and independence of irrelevant alternatives still lead to dictatorship -- it follows from a more general result of \citet[Theorem 1]{DuttaJacksonLeBreton01}.}
Now, social choice theory is prone to colourful nomenclature, so it is important to underline that the characteristic feature of dictatorship is not jackboots and moustaches, but that all decisions stem from a single individual.
In the case of a sporting competition, this could be the case where the first $n-1$ races are treated as warm-ups or friendly races, and only the finals $R_n$ contributes to the final ranking $R$. This is not necessarily an absurd ranking system, but it will not do if we want to keep viewers interested over the course of the event, rather than just the finals. We need to relax independence.\footnote{The approach in this paper is axiomatic. We want a ranking rule that always satisfies a certain notion of independence, and mathematical impossibilities force us to relax the notion until it is weak enough to be compatible with other desirable properties. This is not the only way to approach the problem. For example, we might accept that we cannot have independence all of the time, and instead look for a rule that will satisfy independence most of the time. In that framework \citet{Gehrlein1982} show that under the impartial culture assumption (i.e. when all rankings are equally probable), the scoring rule most likely to select the same winner before and after a random candidate is deleted is Borda.}
A weaker independence condition we may consider is \textbf{independence of winners/losers}.\footnote{Independence of winners/losers is also known as local stability \citep{Young88} and local independence of irrelevant alternatives. Independence of losers is also known as independence of bottom alternatives \citep{FreemanBrillConitzer14}.} As the name suggests, this is the condition that if we disqualify the top or the bottom athlete in the final ranking $R$, the remainder of the ranking remains unchanged. This could be a pressing issue if the winner is accused of doping, and a rule that satisfies this condition will guarantee that the cup is given to the runner up without requiring a retallying of the total scores. In the case of the loser, there is the additional concern that it is a lot easier to add a loser to a race than a winner, and if the authors were to take their skis off the shelf and lose ingloriously in the next Biathlon, one would hope that the standing of the real competitors would remain unaffected.
It turns out that, given some standard assumptions, there is a unique rule that is independent of winners and losers -- the Kemeny rule \citep{Kemeny59,Young88}. The procedure amounts to choosing a ranking $R$ that minimises the sum of the Kendall tau distance from $R$ to the individual races. This is one of its disadvantages -- it is a stretch to expect a sports enthusiast to plot race results in the space of linear orders and compute the central point. For viewers, the results might as well come from a black box. What's worse, it is a difficult procedure computationally \citep{BartholdiToveyTrick89}, so even working out the winner may not be feasible. But perhaps most damning of all is that it violates a property known as \textbf{electoral consistency}. Suppose our ranking procedure ranks a biathlete first in each category of the championship (sprint, pursuit, and so on). It would be very strange if she failed to be ranked first overall. Under the Kemeny rule, we would have to live with such strangeness.
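To make the computational objection concrete, the Kemeny procedure can be sketched in a few lines of Python (an illustration only; the function names and the brute-force approach are ours, and the enumeration of all $m!$ candidate rankings is exactly what makes the rule impractical for more than a handful of athletes):
\begin{verbatim}
from itertools import combinations, permutations

def kendall_tau(r1, r2):
    # number of athlete pairs that the two rankings order differently
    pos1 = {a: i for i, a in enumerate(r1)}
    pos2 = {a: i for i, a in enumerate(r2)}
    return sum(1 for a, b in combinations(r1, 2)
               if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0)

def kemeny(races):
    # brute force over all m! rankings; ties are broken arbitrarily
    return min(permutations(races[0]),
               key=lambda r: sum(kendall_tau(r, race) for race in races))
\end{verbatim}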
In fact, the only procedure that guarantees electoral consistency is a generalised scoring rule \citep{Smith73,Young75} -- every athlete is awarded a number of points based on their position in a race, and the athletes are ranked based on total points; in the case of ties, another scoring rule can be used to break them. It seems there is no alternative to the rules actually used in biathlon (IBU World Cup), auto racing (FIA Formula One), cycling (Tour de France green jersey), golf (PGA TOUR), skiing (FIS World Cup), athletics (IAAF Diamond League), and other events in this format -- but that is not necessarily a bad thing. Scoring rules are easy to compute and understand, and every additional result contributes to the overall ranking in a predictable way, all of which is very desirable for a sporting event.
We have seen that neither of the independence notions we have defined so far can apply here, but how bad can the situation get? The answer is, as bad as possible. A result of \citet[Theorem 2]{Fishburn81DAM} shows that if the scores awarded for positions are monotone and decreasing, it is possible to construct a sequence of race results such that if one athlete is removed the remaining order is not only changed, but inverted.\footnote{For scoring rules, \citet[Corollary 1.1]{Saari89} showed that not only inversions but any permutations of the aggregate ranking can happen when some athletes/candidates are removed.} So while M\"ak\"ar\"ainen may not be pleased with the current turn of events, there is a possible biathlon where after the disqualification of Glazyrina M\"ak\"ar\"ainen finished last, and Domracheva second to last. It is interesting to speculate whether the competition authorities would have had the resolve to carry through such a reordering if it had taken place.
\section{Geometric scoring rules}
In order to motivate our final notion of independence, let us first consider why the results of the biathlon may not be as paradoxical as they appear at first glance. Note that removing Glazyrina from the ranking in Table~\ref{table:Glazyrina} changed the total score of Domracheva but not of M\"ak\"ar\"ainen. This is because M\"ak\"ar\"ainen was unambiguously better than Glazyrina, finishing ahead of her in every race, while Domracheva was beaten by Glazyrina in race~2. As such, the athletes' performance vis-\`a-vis Glazyrina served as a measuring stick, allowing us to conclude that M\"ak\"ar\"ainen was just that little bit better. Once Glazyrina is removed, however, the edge M\"ak\"ar\"ainen had is lost.
So suppose then that the removed athlete is symmetric in her performance with respect to all the others. In other words she either came last in every race, and is thus a unanimous loser, or came first, and is a unanimous winner. Surely disqualifying such an athlete cannot change the final outcome? Why, yes it can.
The results for the Women's Individual category of the 2013/14 IBU Biathlon World Cup are given in the left panel of Table~\ref{table:Soukalova}. The event consists of two races, and Gabriela Soukalov\'a came first in both, and is thus a unanimous winner, followed by Darya Domracheva, Anastasiya Kuzmina, Nadezhda Skardino, and Franziska Hildebrand. However, in the hypothetical event of Soukalov\'a being disqualified the result is different: the recalculated total scores are in the right panel of Table~\ref{table:Soukalova}. Domracheva takes gold and Kuzmina silver as expected, but Hildebrand passes Skardino to take the bronze.
\begin{table}
\centering
\caption{2013/14 IBU Biathlon World Cup -- Women's Individual}
\begin{tabular}{lllr}
\toprule
Athlete & \multicolumn{2}{l}{Event} & Total \\
& $1$ & $2$ & score \\
\hline
Soukalov\'a & 60\small{/1} & 60\small{/1} & 120 \\
Domracheva & 38\small{/6} & 54\small{/2} & 92 \\
Kuzmina & 54\small{/2} & 30\small{/11} & 84\\
Skardino & 36\small{/7} & 36\small{/7} & 72\\
Hildebrand & 28\small{/13} & 43\small{/4} & 71\\
\bottomrule
\end{tabular}
\quad
\begin{tabular}{lllr}
\toprule
Athlete & \multicolumn{2}{l}{Event} & Total \\
& $1$ & $2$ & score \\
\hline
\sout{Soukalov\'a} & \sout{60\small{/1}} & \sout{60\small{/1}} & \sout{120} \\
Domracheva & 40\small{/5} & 60\small{/1} & 100 \\
Kuzmina & 60\small{/1} & 31\small{/10} & 91\\
Hildebrand & 29\small{/12} & 48\small{/3} & 77\\
Skardino & 38\small{/6} & 38\small{/6} & 76\\
\bottomrule
\end{tabular}
\label{table:Soukalova}
\vspace{0.2cm}
\justify
\footnotesize{\textit{Notes}: The left panel presents the official points/position; the right panel presents the points/position after a hypothetical disqualification of Soukalov\'a. The total scores given in the table are before the disqualification of another athlete, Iourieva, that occurred a few months after the race. With the most recent total scores we would still observe Hildebrand overtaking Skardino, but we would have to resort to tie-breaking to do it.}
\end{table}
However, in contrast to the previous paradoxes, this is one we can do something about. By picking the right set of scores we can ensure that the unanimous loser will come last, the unanimous winner first, and dropping either will leave the remaining order unchanged.
Let us formalise our two axioms.
An athlete is a \textbf{unanimous loser} just if the athlete is ranked last in every race. A ranking rule satisfies \textbf{independence of unanimous losers} if the unanimous loser is ranked last in the overall ranking, and removing the unanimous loser from every race leaves the overall ranking of the other athletes unchanged.
Symmetrically, an athlete is a \textbf{unanimous winner} when ranked first in every race. \textbf{Independence of unanimous winners} is satisfied if the unanimous winner is ranked first in the overall ranking, and removing the unanimous winner from every race leaves the overall ranking of the other athletes unchanged.
Next, note that hitherto we have not been precise about how we go about recalculating the total scores when an athlete has been dropped. The issue is that if we use the values $s_1,\dots,s_m$ to score $m$ athletes and then remove one of them, we are left with $m$ scores and $m-1$ athletes. In principle we need another set of $t_1,\dots,t_{m-1}$ scores to score $m-1$ athletes, and indeed a separate vector of $k$ scores to score any $k$. In practice most sporting events simply obtain the $t_i$ values by trimming the first sequence, i.e. $t_1,\dots,t_{m-1}=s_1,\dots,s_{m-1}$, and $s_m$ is dropped, but our definition of a scoring rule will allow full generality in what scores are used for any number of athletes.
\begin{definition}
For every number $k$ of athletes, $k\leq m$, a \textbf{positional scoring rule}, or a \textbf{scoring rule} for short, defines a sequence of $k$ real numbers $s_1^k,\ldots,s_k^k$. For a profile, an athlete receives a score $s_j^k$ for position~$j$ in an individual ranking. The sum of scores across all rankings gives the athlete's total score. The total scores determine the total ranking: athletes with higher total scores are ranked higher, athletes with equal total scores are ranked equally.
For example, \textbf{plurality} is the scoring rule with scores $(1,0,\ldots,0)$ for each $k$, while \textbf{antiplurality} corresponds to scores $(1,\ldots,1,0)$ and \textbf{Borda} -- to $(k-1,k-2,\ldots,1,0)$.
\end{definition}
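For concreteness, and purely as an illustration (the code and its names are ours, not part of the formal development), the tally defined above amounts to the following Python sketch, with the score vectors for plurality, antiplurality and Borda spelled out for $k$ athletes:
\begin{verbatim}
def total_scores(races, scores):
    # races: rankings from best to worst; scores[j] is awarded for position j+1
    totals = {}
    for race in races:
        for j, athlete in enumerate(race):
            totals[athlete] = totals.get(athlete, 0) + scores[j]
    return totals

def overall_ranking(races, scores):
    totals = total_scores(races, scores)
    return sorted(totals, key=totals.get, reverse=True)  # ties left to a tie-breaker

k = 4
plurality     = [1] + [0] * (k - 1)         # (1, 0, ..., 0)
antiplurality = [1] * (k - 1) + [0]         # (1, ..., 1, 0)
borda         = list(range(k - 1, -1, -1))  # (k-1, k-2, ..., 1, 0)
\end{verbatim}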
Of course it is possible that two athletes attain the same total score, and are thus tied in the final ranking. In general, this problem is unavoidable -- if two athletes perform completely symmetrically vis-\`a-vis each other, no reasonable procedure can distinguish between them. However, if things are not quite so extreme, ties can be broken via a secondary procedure -- for example, in the case of the IBU we have seen that ties are broken with the number of first place finishes. This gives rise to the notion of a generalised scoring rule, where ties in the initial ranking are broken with a secondary sequence of scores, any remaining ties with a third, and so on.
\begin{definition}
For every number $k$ of athletes, $k\leq m$, a \textbf{generalised scoring rule} defines $\overline{r}(k)$ sequences of $k$ real numbers $s_1^{k,r},\ldots,s_k^{k,r}$ -- one sequence for each tie-breaking round $r=1,\ldots,\overline{r}(k)$. For a profile, in round $r$, an athlete $a$ receives score $s_j^{k,r}$ for position~$j$ in an individual ranking. The total sum of scores gives a total score $S^r_a$ of athlete~$a$. The total scores determine the total ranking lexicographically: $a$ is ranked higher than $b$ if $S^r_a>S^r_b$ for some round $r$ and $S^l_a=S^l_b$ for all $l<r$. Athletes $a$ and $b$ are equally ranked if $S^r_a=S^r_b$ for all rounds $r\leq \overline{r}(k)$.
For example, for each $k$ athletes, \textbf{generalised plurality} has $k-1$ rounds with scores $(\overbrace{1,\ldots,1}^r, \overbrace{0, \ldots, 0}^{k-r})$ in round $r$. \textbf{Generalised antiplurality} has $(\overbrace{1,\ldots,1}^{k-r}, \overbrace{0, \ldots, 0}^r)$.
\end{definition}
Note that by definition a scoring rule is a generalised scoring rule with only one tie-breaking round.
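To illustrate the lexicographic tie-breaking (again a sketch of ours, not any federation's software), generalised plurality reduces to comparing athletes by their number of first-place finishes, then second-place finishes, and so on -- the familiar medal count:
\begin{verbatim}
def finish_counts(races, athlete, k):
    # how many times the athlete finished 1st, 2nd, ..., kth
    return [sum(1 for race in races if race[j] == athlete) for j in range(k)]

def generalised_plurality(races):
    # lexicographic comparison of the count vectors, i.e. the medal count
    k = len(races[0])
    return sorted(races[0], key=lambda a: finish_counts(races, a, k), reverse=True)
\end{verbatim}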
Observe that the order produced by a scoring rule is invariant under scaling and translation, e.g. the scores 4, 3, 2, 1 produce the same order as 8, 6, 4, 2 or 5, 4, 3, 2. We will thus say that scores $s_1,\dots,s_m$ and $t_1,\dots,t_m$ are \textbf{linearly equivalent} if there exists an $\alpha>0$ and a $\beta$ such that $s_j = \alpha t_j + \beta$.
The intuition behind the following result is clear: a score vector $t_1,\dots,t_k$ produces the same ranking of the first/last $k$ athletes if and only if it is linearly equivalent to the first/last $k$ scores of the original ranking system. The proofs of this and all subsequent results can be found in \autoref{app:proof}.
\begin{restatable}{proposition}{linearlyequivalent}\label{prop:linearlyequivalent}
Suppose a scoring rule uses scores $s^m_1,\dots,s^m_m$ to score $m$ athletes. The scoring rule satisfies independence of unanimous losers if and only if $s^m_1>\ldots>s^m_m$, and the scores for $k$ athletes, $s^k_1,\dots,s^k_k$, are linearly equivalent to the first $k$ scores for $m$ athletes, $s^m_1,\dots,s^m_k$, for all $k\leq m$.
The rule satisfies independence of unanimous winners if and only if $s^m_1>\ldots>s^m_m$ and the scores for $k$ athletes, $s^k_1,\dots,s^k_k$, are linearly equivalent to the last $k$ scores for $m$ athletes, $s^m_{m-k+1},\dots,s^m_m$, for all $k\leq m$.
\end{restatable}
Now we see why the biathlon scores are vulnerable to dropping unanimous winners but not unanimous losers -- since the scores for a smaller number of athletes are obtained by trimming the full list, every subsequence of the list of scores is indeed equivalent to itself. However if we drop the winner, then the subsequence 60, 54, 48, $\dots$, is certainly not linearly equivalent to 54, 48, 43, $\dots$.
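This check is elementary and easily automated. The following sketch (ours) tests whether two strictly decreasing score vectors are linearly equivalent, and confirms that the biathlon scores 60, 54, 48, 43, $\dots$ survive trimming from the bottom but not from the top:
\begin{verbatim}
def linearly_equivalent(s, t, tol=1e-9):
    # is s_j = alpha * t_j + beta for some alpha > 0 and beta?
    # assumes both vectors are strictly decreasing, so t[0] != t[1]
    alpha = (s[0] - s[1]) / (t[0] - t[1])
    beta = s[0] - alpha * t[0]
    return alpha > 0 and all(abs(sj - (alpha * tj + beta)) <= tol
                             for sj, tj in zip(s, t))

ibu = [60, 54, 48, 43]                        # scores for the top four positions
print(linearly_equivalent(ibu[:3], ibu[:3]))  # True: dropping the loser is harmless
print(linearly_equivalent(ibu[:3], ibu[1:]))  # False: dropping the winner is not
\end{verbatim}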
What happens when we combine the two conditions? Given that we need not distinguish scores up to scaling and translation we can assume that the score for the last place, $s_m$, is zero, and $s_{m-1}=1$. The third-to-last athlete must then get a larger number of points than 1, say $s_{m-2}=1+p$. Now we have a sequence $0,1,1+p$, and we know it must be linearly equivalent to the sequence $1,1+p, s_{m-3}$. Since the only way to obtain the second sequence from the first is to scale by $p$ and add 1, it follows that $s_{m-3}=1+p+p^2$ and so on. Clearly if $p=1$ this sequence is just Borda, and some algebraic manipulation gives us a formula of $s_j=(p^{m-j}-1)/(p-1)$ for the $j$th position in the general case. This gives us the following family of scoring rules consisting of the geometric, arithmetic, and inverse geometric sequences.\footnote{Despite their natural formulation, geometric scoring rules have received almost no attention in the literature. We are aware of the work of \citet{Phillips2014}, who used geometric sequences to approximate Formula One scores, and the fact that Laplace suggested scoring with the sequence $2^{k-1},2^{k-2},\dots,1$ \citep[p. 261--263]{Daunou1995}, i.e.\ a geometric scoring rule with $p=2$. Laplace's motivation was that if we suppose that a voter's degree of support for a candidate ranked $j$th can be quantified as $x$, then we cannot say whether the voter's support for the $(j+1)$th candidate is $x-1,x-2$, or any other smaller value. As such, he proposed we take the average, or $x/2$. The recent work of \citet{Csato21scoring}, answering a question posed by an earlier version of this paper, investigates geometric scoring rules in the context of the threat of early clinch in Formula One racing.}
\bigskip
\begin{theorem}\label{thm:geometricrules}
A scoring rule satisfies independence of unanimous winners and losers if and only if it is a geometric scoring rule. That is, it is defined with respect to a parameter $p$, and the score of the $j$th position is linearly equivalent to:
\[s_j^k= \begin{cases}
p^{k-j}& 1<p<\infty, \\
k-j & p=1, \\
1-p^{k-j} & 0<p<1.
\end{cases}
\]
\end{theorem}
\bigskip
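In computational terms the whole family is a one-line formula. The sketch below (ours, for illustration) generates the scores of \autoref{thm:geometricrules} and normalises them to the 0--100 range we use when plotting scores later in the paper:
\begin{verbatim}
def geometric_scores(k, p):
    # score of position j = 1, ..., k, as in the theorem above
    if p > 1:
        return [p ** (k - j) for j in range(1, k + 1)]
    if p == 1:
        return [k - j for j in range(1, k + 1)]          # Borda
    return [1 - p ** (k - j) for j in range(1, k + 1)]   # 0 < p < 1

def normalise(scores):
    # linearly equivalent scores with 100 for first place and 0 for last
    lo, hi = scores[-1], scores[0]
    return [100 * (s - lo) / (hi - lo) for s in scores]
\end{verbatim}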
Observe that the axioms we used are extremely weak individually. If $s_1^m,\dots,s_m^m$ is any monotone decreasing sequence of scores whatsoever, and we obtain $s_j^{m-1}$ by dropping $s_m^m$, we will satisfy independence of unanimous losers. Likewise, if we obtain $s_j^{m-1}$ by dropping $s_1^m$, we will satisfy independence of unanimous winners. In short our only restriction is that more points are awarded for the $j$th place than for the $(j+1)$th place, which in the context of a sporting event is hardly a restriction at all. If we want to satisfy both axioms, however, we are suddenly restricted to a class with just one degree of freedom.
This one-parameter class includes three famous rules as boundary cases. As $p$ approaches infinity, the first position dominates all others and we have the generalised plurality rule, also known as the lexicographic ranking or the Olympic medal count \citep{ChurilovFlitman06}. With $p=1$ we have the Borda rule, proposed by one of the founders of the mathematical study of elections \citep{Borda1781}. As $p$ approaches 0, we have the generalised antiplurality rule, also known as the threshold rule \citep{AleskerovChistyakovKalyagin10}.\footnote{Our decision to include generalised plurality and antiplurality in the family of geometric scoring rules requires some justification. While it is easy to see that both rules do satisfy independence of unanimous winners and losers, the uniqueness of \autoref{thm:geometricrules} only applies to scoring rules. It is possible to construct other generalised scoring rules that satisfy both axioms, for example one could score with Borda in the first round and thereon break ties with the logic of generalised antiplurality. However there is a natural sense in which generalised plurality and antiplurality belong to our class. Observe that for any fixed number of races $n$, choosing a $p\geq n$ will guarantee that no amount of $(j+1)$th places will compensate for the loss of a single $j$th place -- this rule will be precisely generalised plurality. Likewise, choosing $p\leq 1/n$ will give us generalised antiplurality. Thus by including these rules in the family of geometric scoring rules, all we are asserting is that the organiser is allowed to choose a different value of $p$ if the length of the tournament changes.}
In the following sections we will explore this class.
We shall see how our axioms allow new axiomatisations of well-known (generalised) scoring rules, and how geometric scoring rules compare to optimal rules for a given organiser's objective.
\section{New characterisations}
\subsection*{$p>1$: Convex rules and winning in every race}
The FIM motorcycle Grand Prix is another event that uses a scoring system to select a winner. The 125cc category of the 1999 season had a curious outcome: the winner was Emilio Alzamora, who accumulated the most points, yet did not win a single race (\Cref{table:Alzamora}). This does not detract in any way from Alzamora's achievement -- he outperformed his competitors by virtue of his consistently high performance (compare with Melandri who performed well in the second half, and Azuma in the first), and if he did not take any unnecessary risks to clinch the first spot then he was justified in not doing so. However, racing is a spectator sport. If a fan attends a particular event then they want to see the athletes give their best performance on the day, rather than play it safe for the championship. Slow and steady may win the race, but blood and explosions wins viewer ratings.
\begin{table}
\centering
\caption{1999 Motorcycle Grand Prix -- 125cc: scoring system and event results}
\begin{tabular}{lccccccccccccccc}
\toprule
Position&1&2&3&4&5&6&7&8&9&10&11&12&13&14&15\\
\hline
Points&25&20&16&13&11&10&9&8&7&6&5&4&3&2&1\\
\bottomrule
\end{tabular}
\begin{tabular}{lccccccccccccccccr}
\\
\toprule
Rider&\multicolumn{16}{c}{Event number: points}&Total\\
&1&2&3&4&5&6&7&8&9&10&11&12&13&14&15&16&score\\
\hline
Alzamora&20&16&16&16&10&20&13&16&20&10&13&20&1&-&16&20&227\\
Melandri&-&-&-&10&20&16&8&11&\textbf{25}&\textbf{25}&\textbf{25}&-&\textbf{25}&16&20&\textbf{25}&226\\
Azuma&\textbf{25}&\textbf{25}&\textbf{25}&13&9&-&\textbf{25}&\textbf{25}&10&4&6&-&11&2&10&-&190\\
\bottomrule
\end{tabular}
\label{table:Alzamora}
\vspace{0.2cm}
\justify
\footnotesize{\textit{Notes}: The scores for first place finishes are in bold. Observe that Azuma performed well in the first half of the tournament and Melandri in the second, while Alzamora performed consistently in both halves, yet never came first.}
\end{table}
Bernie Ecclestone, the former chief executive of the Formula One Group, was outspoken about similar issues in Formula One -- ``It's just not on that someone can win the world championship without winning a race.''\footnote{
Bernie Ecclestone justified the medal system as follows:
\begin{quote}
The whole reason for this was that I was fed up with people talking about no overtaking. The reason there's no overtaking is nothing to do with the circuit or the people involved, it's to do with the drivers not needing to overtake.
If you are in the lead and I'm second, I'm not going to take a chance and risk falling off the road or doing something silly to get two more points.
If I need to do it to win a gold medal, because the most medals win the world championship, I'm going to do that. I will overtake you.
\end{quote}
From: \url{https://web.archive.org/web/20191116144110/https://www.rte.ie/sport/motorsport/2008/1126/241550-ecclestone/}} Instead of the scores then used, Ecclestone proposed a medal system. The driver who finished first in a race would be given a gold medal, the runner-up the silver, the third the bronze. The winner of the championship would be the driver with the most gold medals; in case of a tie, silver medals would be added, then the bronze, then fourth-place finishes, and so on. In other words, he proposed the generalised plurality system with $p=\infty$. And indeed, for every other geometric scoring rule, it is possible to construct a profile where the overall winner did not win a single race.
\begin{restatable}{proposition}{alzamoraparadox}\label{prop:alzamora}
For any $p<\infty$, there exists a profile where the overall winner does not come first in any race.
\end{restatable}
There is a natural dual concept to Ecclestone's criterion: rather than asking how many races an athlete must win to have a chance of winning the championship, we could ask after how many victories the championship is guaranteed.\footnote{To win the championship for sure, an athlete must come first in more than $n(s_1^k-s_k^k)/(2s_1^k-s_2^k-s_k^k)$ races; see \citet[][Theorem 10]{KondratevNesterov20} and \citet[][Theorem 1]{BaharadNitzan02}.} This leads us to the \textbf{majority criterion}, which requires that any athlete who wins more than half the races should also win the championship. The majority criterion, together with our independence axioms, allows us to characterise generalised plurality.
\bigskip
\begin{restatable}{theorem}{generalisedplurality}\label{thm:generalisedplurality}
Generalised plurality is the only generalised scoring rule that satisfies independence of unanimous winners and always ranks the majority winner first.
\end{restatable}
\bigskip
The presence of the majority criterion in the above theorem is not surprising; we could expect as much given the axiomatisation of plurality \citep{Lepelley92,Sanver02}. But the fact that adding independence of unanimous winners pins down generalised plurality is interesting: generalised scoring rules are notoriously hard to characterise \citep{BossertSuzumura20}, yet here two intuitive axioms suffice.
Here we run into a conundrum. Ecclestone's criterion is desirable in the case of a sporting event for the reasons we have mentioned -- it keeps the stakes high in every race and encourages the athletes to fight for the top spot rather than settle for second place. The majority criterion, on the other hand, tells a different story. If a driver wins the first $\floor{n/2 + 1}$ races, the championship is over.
The remaining races will take place before an empty stadium.
It seems there is a trade-off between keeping the tension high in an individual race and over the course of the entire tournament, which could explain why the organisers of Formula One went through so many ranking systems over the years (\Cref{FormulaOnePicture}).\footnote{The FIA has produced a study comparing historical Formula One results to the hypothetical outcome had Ecclestone's medal system been used (\url{https://web.archive.org/web/20100106134601/http://www.fia.com/en-GB/mediacentre/pressreleases/f1releases/2009/Pages/f1_medals.aspx}). While such comparisons should be taken with a grain of salt, since they do not take into account the fact that athletes' strategies would have been different under a different scoring system, it is nevertheless interesting that they find that 14 championships would have been shorter, while 8 would have been longer.
In a recent work, \citet{Csato21scoring} examines the trade-off between reducing the likelihood of an athlete winning the championship without coming first in any race, and delaying the point at which the championship is decided. The author compares the historic Formula One scoring schemes and geometric scoring rules on a synthetic dataset, and finds that the current Formula One system is indeed on the Pareto frontier.}
\begin{figure}
\begin{center}
\includegraphics[width=15cm]{F1.pdf}
\end{center}
\caption{Scores used in FIA Formula One compared with geometric scores}
\label{FormulaOnePicture}
\vspace{0.2cm}
\justify
\footnotesize{\emph{Notes}: The $x$-axis is the position, the $y$-axis the normalised score. Scores for first position were normalised to 100. Blue dash p=1.2, brown dash p=1.8, orange round dots 1950--1959, black long dash 1960, green short dash 1961--1990, light blue long dash dot 1991--2002, red long dash two dots 2003--2009, purple solid 2010--present.}
\end{figure}
\subsection*{$p=1$: The Borda rule and reversal symmetry}
\citet{SaariBarney03} recount an amusing anecdote to motivate an axiom known as reversal symmetry. In a departmental election the voters were asked to rank three candidates. Mathematically, this is the same problem as ours -- to aggregate $n$ rankings into one final result. All voters ranked the candidates from best to worst, but the chair expected the votes to be ordered from worst to best. The ``winner'' was thus the candidate that ranked highest in terms of the voters' assessment of unsuitability, rather than suitability for the role. After the ensuing confusion, the votes were retallied... and the winner was unchanged. The same candidate was judged to be at once the best and worst for the role. Presumably we ought to first give him a raise and then fire him. The authors' story ended with the chair being promoted to a higher position, but of course in a sporting context the resolution would be a bit different: the fans would storm the judges' stand and burn down the stadium.
The relevant axiom here is \textbf{reversal symmetry}, which states that if a candidate $a$ is the unique winner with voters' preferences $R_1,\dots,R_n$, then if we invert the preferences of every voter then $a$ will no longer be the unique winner. Note that the axiom asks for less than one might expect -- we are not asking that the candidate formerly judged the best is now judged the worst, but merely that the same candidate cannot be the best in both cases.
By itself, this axiom is quite weak. If we, without loss of generality, set $s_1=1$ and $s_m=0$, then the only restriction is that for $1\leq j\leq m/2$, $s_{m-j+1}=1-s_j$ \citep[Corollary 5]{SaariBarney03,LlamazaresPena15}.\footnote{\citet[Theorem 1]{SaariBarney03} describe the scoring rules that satisfy different variants of the reversal bias paradox but omit the formal proof. Our reversal symmetry is the weakest of the variants -- the top-winner reversal bias in their terminology. \citet[Corollary 5]{LlamazaresPena15} provide a formal proof for the strongest variant of the axiom. For other voting rules, the reversal bias paradox (or preference inversion paradox) was studied by \citet[p. 157]{Saari94book}, \citet{SaariBarney03}, \citet{Schulze11}, \citet{Gueye14}, \citet{BubboloniGori16}, \citet{FelsenthalNurmi18book,FelsenthalNurmi19book}, and \citet{BelayadiMbih21}.} In other words, we have $\floor{\frac{m-2}{2}}$ degrees of freedom. However, once we add either one of our independence axioms, we get the Borda rule uniquely.
\bigskip
\begin{restatable}{theorem}{Borda}\label{thm:borda}
Borda is the unique scoring rule that satisfies reversal symmetry and one of independence of unanimous winners or independence of unanimous losers.
\end{restatable}
\bigskip
Within the context of geometric scoring rules, there is an easier way to see that Borda is the unique rule satisfying reversal symmetry -- it is easy to see that the results produced by a rule with parameter $p$ would be the inverse of the results produced by using a rule with parameter $1/p$ on the reversed profile. Since $1/1=1$, the effect of running Borda on the reversed profile would be to reverse the resulting order.
\subsection*{$p<1$: Concave rules and majority loser paradox}
The beginning of modern social choice theory is often dated to Borda's memorandum to the Royal Academy \citep{Borda1781}, where he demonstrated that electing a winner by plurality could elect a \textbf{majority loser} -- a candidate that is ranked last by an absolute majority of the voters.\footnote{\citet[Corollary 4]{LlamazaresPena15} and \citet[Theorem 10]{KondratevNesterov20} provide equivalent characterisations of the scoring rules that never rank the majority loser first (in their notation, immune to the absolute loser paradox, and satisfy the majority loser criterion, respectively).} The extent to which such a result should be viewed as paradoxical depends on the context in which a ranking rule is used. In sports, this may be acceptable -- a sprinter who has three false starts and one world record is still the fastest man in the world. In a political context, however, voting is typically justified by identifying the will of the majority with the will of the people;\footnote{The proverb ``Vox populi, vox Dei'' dates back at least to the 8th century. For a more recent argument linking the majority principle and the underpinnings of democracy, \citet{Grofman1988} explore the connection between Rousseau's General Will and Condorcet's theory of voting.} it would be odd to argue that the will of the majority is to pick a candidate that the majority likes the least. Likewise, should a group recommendation system suggest that a group of friends watch a film that the majority detests, soon it would be just a group.
It turns out that the weak version of the criterion -- that the majority loser is never ranked first -- is characteristic of the concave geometric rules ($p\leq 1$). The strong version -- that the majority loser is always ranked last -- is satisfied only by generalised antiplurality ($p=0$).
\bigskip
\begin{restatable}{theorem}{concaveGSR}\label{thm:concaveGSR}
Geometric scoring rules with parameter $0 < p\leq 1$ are the only scoring rules that satisfy independence of unanimous winners and losers and never rank the majority loser first.
Generalised antiplurality is the only generalised scoring rule that satisfies independence of unanimous losers and always ranks the majority loser last.
\end{restatable}
\section{Optimal scoring rules}
The task of choosing a specific scoring rule for a given event is a daunting one. In principle, the organiser will need to choose $m$ vectors of scores to score each possible number $k\leq m$ of athletes. Even if we assume the scores for $k-1$ athletes can be obtained from the scores for $k$ athletes in a straightforward way, e.g.\ by trimming the sequence, that still leaves $m$ numbers the organiser must choose more or less arbitrarily. The practical relevance of our paper is that if the organiser accepts that our two axioms are desirable – and they are very natural axioms – then the problem is reduced to choosing a single parameter $p$.
Unfortunately, the choice of even a single parameter is far from trivial. In the previous section we saw how an axiomatic approach can pin down the edge cases of generalised plurality ($p=\infty$), Borda ($p=1$) or generalised antiplurality ($p=0$).\footnote{Theoretical frameworks where the optimal scoring rule was found to be different from Borda, plurality, and antiplurality are rare indeed. Some examples include \citet{Lepelley95}, \citet{CervoneGehrleinZwicker05}, \citet{LepelleyPierronValognes00,LepelleyMoyouwouSmaoui18}, \citet{DissKamwaMoyouwouSmaoui21}, \citet{Kamwa19} and \citet{Sitarz13}.} In applications where the properties these axioms represent are paramount, the question is then settled: if you are after a scoring rule that satisfies independence of unanimous winners and reversal symmetry, you must use Borda. There is no other. However, in the case of sports these extreme rules are rarely used. Whatever goals the organisers are pursuing, these are more complicated than simply satisfying an axiom.
In the remainder of this paper we will take an empirical approach to selecting a scoring rule for an event. We introduce a model of the organiser's objective, assuming the goal is to select an athlete that maximises some measure of quality, which is a function of the athlete's results. By imposing four axioms we see that this function ($F_\lambda$) must be determined solely by a parameter $\lambda$, which can be interpreted as the organiser's attitude towards risk. It turns out that among all ordinal procedures for producing a ranking of athletes, it is precisely the scoring rules which rank the athletes in accordance with the expected values of $F_\lambda$, and these scores can be computed from empirical data. We conclude by computing these optimal scores for the IBU World Cup biathlon, PGA TOUR golf, and IAAF Diamond League athletics, and comparing them to the best approximation via a geometric scoring rule.
\subsection*{The organiser's objective}
An organiser's goals can be complex. For a commercial enterprise the end goal is profit, whether from ad revenue or spectator fees. To that end they would fain incentivise athletes to take risks and keep the audience on edge, rather than play a safe and sure strategy. A national Olympics committee wants an athlete that will perform best on the day of the games, so they are more interested in consistency. They would prefer an athlete who can be counted on to do well in any climate or weather conditions to one who can reach for the stars -- but only when the stars are right. In a youth racing league, the focus could be that the drivers finish the race with engines and bodies intact -- the goal being that the drivers learn to finish the race, before trying to finish it in record time.\footnote{The Castrol Toyota Racing Series positions itself as incubating the next generation of racing talent, and the competitors tend to be very young. The 2018 season consisted of 14 drivers, and the score for the fourteenth position was 24. The winner was Robert Shwartzman with a total score of 916. Richard Verschoor came second with 911, but failed to complete the third race. Had Verschoor finished in any position whatsoever, he would have taken the championship.}
Because of this, we want our model of the organiser's objective to be as general as possible. We assume that in each of the $n$ events, an athlete's performance in an event $i$ can be assessed as a cardinal quantity, $x_i$ (e.g.\ finishing time in a race, strokes on a golf course, score in target shooting). The athlete's overall performance is measured by a function $F$ that maps these $n$ quantities, $\bs{x}=(x_1,\ldots,x_n)$, into an overall measure of quality -- an aggregation function \citep{Grabisch09book,Grabisch11}. The space of such functions is too vast to be tractable, so we shall narrow it down by imposing four axioms on how a measure of quality should behave.
The first axiom has to do with the measurement of the cardinal qualities $x_i$. Suppose an athlete competes in the javelin throw, and in the $i$th round throws a distance of 95 metres. There are two natural ways in which we could record this. The first is to simply set $x_i=95$, the second is to compare the throw to the current world record of 98.48 and set $x_i=95-98.48=-3.48$. It would be absurd if the two approaches would rank our athlete differently vis-\`a-vis the other athletes. Thus we require the condition of \textbf{independence of the common zero}, which states that whenever $F(\bs{x})\geq F(\bs{y})$, it is also the case that $F(\bs{x}+\bs{c})\geq F(\bs{y}+\bs{c})$, where $\bs{c}=(c,\dots,c)$.
The next two axioms deal with the intuition that our function is intended to measure quality, and hence higher values of $x_i$, the performance in an individual event, should contribute to a higher level of $F(\boldsymbol{x})$, the overall quality. The least we could ask for is that if an athlete performs (strictly) better in \emph{every} event, then their overall quality should also be (strictly) higher. This is the condition of \textbf{unanimity}, requiring that whenever $x_i\geq y_i$ $(x_i>y_i)$ for all $i$, it is also the case that $F(\boldsymbol{x})\geq F(\boldsymbol{y})$ $(F(\boldsymbol{x})>F(\boldsymbol{y}))$.
Next, consider the admittedly odd situation where two javelin throwers, $a$ and $b$, obtain potentially different results on the first $q$ throws, but throw the javelin the exact same distance as each other in throws $q+1$ through $n$. For example, let $a$'s results be $(94, 90, 89, \mathit{93, 87, 80})$, $b$'s results -- $(93, 93, 92, \mathit{93, 87, 80})$, and for the sake of argument let us suppose that $F$ assigns a higher quality to $a$. It is natural to assume that this decision is based on the strength of $a$'s first three throws -- throws 4 through 6 are identical, and should not be used to distinguish the athletes. As such, were the results instead $(94, 90, 89, \mathit{70, 94, 90})$ for $a$ and $(93, 93, 92, \mathit{70, 94, 90})$ for $b$, we would still expect $F$ to assign a higher quality to $a$. This is the property of \textbf{separability}, stating that for $\boldsymbol{x}=(x_1,\ldots,x_q),\boldsymbol{y}=(y_1,\ldots,y_q)$, $\boldsymbol{z}=(z_{q+1},\ldots,z_n)$, and $\boldsymbol{w}=(w_{q+1},\ldots,w_n)$, whenever $F(\boldsymbol{x}\boldsymbol{z})\geq F(\boldsymbol{y}\boldsymbol{z})$, it is also the case that $F(\boldsymbol{x}\boldsymbol{w})\geq F(\boldsymbol{y}\boldsymbol{w})$.
The final condition perhaps has the most bite. We assume that the order of the results does not matter -- it should not matter whether an athlete throws 93 in round $i$ and 92 in round $q$, or vice versa; \textbf{anonymity} requires that $F(\boldsymbol{x})=F(\pi\boldsymbol{x})$, for any permutation $\pi$. This would have been an innocuous assumption in a political context, where it is standard to assume that all voters are equal, but it is a real restriction in sports as it is entirely natural for different events to be weighted differently. However, we justify this assumption since the three categories we examine in the next section (IBU World Cup, PGA TOUR Category 500, Diamond League sprints) do not distinguish between their events in scoring.
It turns out that the only continuous solution satisfying these four properties \citep[Theorem 2.6, p.\ 44]{Moulin1991} is defined with respect to a parameter $\lambda$ and is the following:
\begin{equation*}\label{exponential-quality}
F_{\lambda}(\boldsymbol{x})=
\begin{cases}
\sum\limits_{i=1}^n {\lambda^{x_i}}, &\lambda>1,\\
\sum\limits_{i=1}^n x_i, &\lambda=1,\\
\sum\limits_{i=1}^n -{\lambda^{x_i}}, &0<\lambda<1.
\end{cases}
\end{equation*}
As an added bonus, $F_{\lambda}$ enjoys a version of scale invariance. Since $\lambda^{\alpha x_i}=(\lambda^\alpha)^{x_i}$, it does not matter whether the race is measured in minutes or seconds, provided the organiser adjusts the value of $\lambda$ accordingly.
The parameter $\lambda$ can be interpreted as the organiser's attitude to risk. Thus with $\lambda=1$, the organiser is risk-neutral and assesses athletes by their average performance. As $\lambda$ increases, the organiser is more willing to risk poor overall performance for the possibility of observing an exceptional result, culminating in the lexmax rule as $\lambda\rightarrow\infty$. As $\lambda$ decreases, the organiser is less willing to risk subpar performance, tending to the lexmin rule as $\lambda\rightarrow 0$. Other factors concerning the choice of $\lambda$ are discussed in Appendix~\ref{app:lambda}.
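As a direct transcription of the display above (a sketch of ours; the function and argument names are not part of the model), the quality measure can be computed as follows:
\begin{verbatim}
def quality(x, lam):
    # F_lambda(x) for a vector x of cardinal results x_1, ..., x_n
    if lam > 1:
        return sum(lam ** xi for xi in x)
    if lam == 1:
        return sum(x)
    return -sum(lam ** xi for xi in x)   # 0 < lam < 1
\end{verbatim}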
We shall thus assume that the organiser assesses the quality of the athletes via $F_{\lambda}$, and wishes to choose a sequence of scores such that the athletes with the highest quality have the highest total score.
\subsection*{Why scoring rules?}
At this point one may ask, if we have access to the cardinal values $x_i$, why bother with a scoring rule at all? In a political context, cardinal voting is problematic since voters may not know their utilities exactly, and in any case would have no reason to report them sincerely, but in sport these are non-issues -- we can measure $x_i$ directly, and a race protocol is incapable of strategic behaviour. Nevertheless, a cardinal approach has its problems even in sport. In a contest where athletes are operating near the limits of human ability, the cardinal difference between first and second place could be minuscule, and a race decided by milliseconds. On the other hand failing to complete a race, or completing it poorly for whatever reason, would be an insurmountable penalty. Ordinal rankings also allow the comparison of results between different races, while cardinal results would be skewed by external factors like wind, rain, or heat. This can explain why in practice ordinal procedures are more popular.
The advantage of a scoring rule over other ordinal procedures is, in addition to the axiomatic properties discussed before, the fact that if we are interested in maximising a sum of cardinal utilities (such as $F_{\lambda}$),
then the optimal voting rule is a scoring rule, provided the utilities are drawn i.i.d. from a distribution symmetric with respect to athletes \citep{Boutilier2015,ApesteguiaBallesterFerrer2011}.
\bigskip
\begin{theorem}[\citealp{ApesteguiaBallesterFerrer2011,Boutilier2015}]\label{thm:optimal}
Denote by $u_i^a$ the cardinal quality of athlete~$a$ in race~$i$. Denote by $u_i=(u_i^1,\ldots,u_i^m)$ the vector of cardinal qualities in race $i$ and $(u_i^{(1)},\ldots,u_i^{(m)})$ its reordering in non-increasing order. Suppose $u_1,\ldots,u_n$ are drawn independently and identically from a distribution with a symmetric joint cumulative distribution function (i.e., permutation of arguments does not change the value of this c.d.f.).
Consider a scoring rule with scores:
\begin{equation*}
s_j=\mathbb{E}[u_i^{(j)}].
\end{equation*}
The winner under this scoring rule is the athlete with the highest expected overall quality:
\begin{equation*}
\max_{a}{\mathbb{E}\left[\sum\limits_{i=1}^n u_i^a \,\middle\vert\, R_1(u_1),\ldots,R_n(u_n)\right]},
\end{equation*}
where expectation is conditional on the reported ordinal rankings.
If we make the further assumption that the cardinal qualities $u_i^a$ are drawn independently and identically (i.e.\ we further assume that the performances of athletes in a race are independent) from a distribution with an absolutely continuous density function, it is also the case that the total score of $a$ is equal to $a$'s expected overall quality.
\end{theorem}
\bigskip
Substituting $u_i^a=\lambda^{x_i^a}$ for $\lambda>1$, $u_i^a=x_i^a$ for $\lambda=1$ and $u_i^a=-\lambda^{x_i^a}$ for $0<\lambda<1$, it follows that if the organiser wishes to choose a winner based on $F_\lambda$, they should use a scoring rule.\footnote{The theorem statements of \citet{ApesteguiaBallesterFerrer2011} and \citet{Boutilier2015} (Theorem~\ref{thm:optimal}) assume that all values $u_i^a$ are non-negative, but the proofs work for any value of $u_i^a$.} The \textbf{optimal scoring rule} for a given $\lambda$ can be computed by evaluating $\mathbb{E}[u_i^{(j)}]$ on historic data.
\begin{table}
\centering
\caption{Men's 100m of the IAAF Diamond League 2015}
\begin{tabular}{ccccccc|cc|cc}
\toprule
Position&\multicolumn{6}{c}{Event: lag behind world record}&\multicolumn{4}{c}{Optimal scores}\\
&Doha&Eugene&Rome&New York&Paris&London&\multicolumn{2}{c}{$\lambda=1$}&\multicolumn{2}{c}{$\lambda=100$}\\
\hline
1&-0.16&-0.30&-0.17&-0.54&-0.23&-0.29&-0.28&100&0.31&100\\
2&-0.38&-0.32&-0.40&-0.55&-0.28&-0.32&-0.38&73&0.19&51\\
3&-0.43&-0.41&-0.40&-0.57&-0.41&-0.34&-0.43&59&0.15&34\\
4&-0.45&-0.41&-0.48&-0.60&-0.44&-0.38&-0.46&49&0.13&26\\
5&-0.46&-0.44&-0.49&-0.66&-0.47&-0.40&-0.49&42&0.11&20\\
6&-0.49&-0.55&-0.50&-0.70&-0.50&-0.49&-0.54&27&0.09&11\\
7&-0.52&-0.69&-0.50&-0.82&-0.54&-0.50&-0.60&11&0.07&5\\
8&-0.56&-0.70&-0.56&-0.87&-0.60&-0.51&-0.63&0&0.06&0\\
\bottomrule
\end{tabular}
\label{table:optimalscorescalculation}
\vspace{0.2cm}
\justify
\footnotesize{\textit{Notes}: The numbers on the left represent the difference in seconds between the world record (9.58) and the times of the athletes that finished first through eighth. On the right we see the raw and normalised optimal scoring sequences computed on this data for parameters $\lambda=1$ and $\lambda=100$.}
\end{table}
In \autoref{table:optimalscorescalculation}, we demonstrate how the optimal scoring sequence for the Men's 100m sprint could be computed, assuming the only data we have available is from the 2015 IAAF Diamond League. If the organiser values consistent performance ($\lambda=1$), then $u_i^a=x_i^a$, so by \autoref{thm:optimal} the score awarded for the first position should equal the expected performance of the first-ranked athlete. Evaluating this on our data, we have $(-0.16-0.30-0.17-0.54-0.23-0.29)/6=-0.28$. Repeating the calculations for the remaining positions, the optimal scoring vector is $(-0.28, -0.38, -0.43, -0.46, -0.49, -0.54, -0.60, -0.63)$. If we desire a more visually appealing vector, recall that linearly equivalent scores produce identical rankings, so we can normalise the scores to range from 0 to 100, namely $(100,73,59,49,42,27,11,0)$.
If the organiser values the chance of exceptional performance more than consistency, then their measure of athlete quality is parameterised by a $\lambda>1$. The exact value is exogenous to our model, but as a consequence of \autoref{thm:optimal}, $\lambda$ has a natural numerical interpretation -- how much is an extra unit of performance worth? Choosing a $\lambda>1$ displays a willingness to award an athlete who completes a race with $x+1$ units of performance $\lambda$ times as many points as the athlete that completes the race with $x$ units. In \autoref{table:optimalscorescalculation} we measure performance in seconds, and one second is a colossal difference in the 100m sprint. Thus choosing a $\lambda$ as high as $100$ seems perfectly reasonable. With $\lambda=100$, $u_i^a=100^{x_i^a}$, so the score awarded for the first position ought to be $(100^{-0.16}+100^{-0.30}+100^{-0.17}+100^{-0.54}+100^{-0.23}+100^{-0.29})/6=0.31$, and the normalised vector is $(100,51,34,26,20,11,5,0)$.
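The calculation above is easy to mechanise. The following sketch (ours, for illustration only) takes the lags from \autoref{table:optimalscorescalculation}, estimates the expectations of \autoref{thm:optimal} by their empirical means for a given $\lambda$, and reproduces the normalised score vectors quoted in the text:
\begin{verbatim}
# rows: positions 1-8; columns: Doha, Eugene, Rome, New York, Paris, London
lags = [[-0.16, -0.30, -0.17, -0.54, -0.23, -0.29],
        [-0.38, -0.32, -0.40, -0.55, -0.28, -0.32],
        [-0.43, -0.41, -0.40, -0.57, -0.41, -0.34],
        [-0.45, -0.41, -0.48, -0.60, -0.44, -0.38],
        [-0.46, -0.44, -0.49, -0.66, -0.47, -0.40],
        [-0.49, -0.55, -0.50, -0.70, -0.50, -0.49],
        [-0.52, -0.69, -0.50, -0.82, -0.54, -0.50],
        [-0.56, -0.70, -0.56, -0.87, -0.60, -0.51]]

def optimal_scores(lags, lam):
    # s_j = E[u^(j)], estimated by the empirical mean over the races
    def u(x):
        if lam > 1:
            return lam ** x
        if lam == 1:
            return x
        return -(lam ** x)          # 0 < lam < 1
    return [sum(u(x) for x in row) / len(row) for row in lags]

def normalise(scores):
    lo, hi = scores[-1], scores[0]
    return [round(100 * (s - lo) / (hi - lo)) for s in scores]

print(normalise(optimal_scores(lags, 1)))    # [100, 73, 59, 49, 42, 27, 11, 0]
print(normalise(optimal_scores(lags, 100)))  # [100, 51, 34, 26, 20, 11, 5, 0]
\end{verbatim}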
\newpage
\section{Empirical evaluation}
How realistic is our assumption that the organiser assesses athlete performance by $F_{\lambda}$? We compared the actual scores used in the IBU World Cup biathlon (\autoref{OptimalGraphIBU}), the PGA TOUR golf (\autoref{OptimalGraphPGA}), and the IAAF Diamond League athletics (\autoref{OptimalGraphIAAF}) with the optimal scores computed from historical data. Details about the data and calculations can be found in \autoref{app:love_sport}.
\begin{figure}
\begin{center}
\includegraphics[width=15cm]{IBU.pdf}
\end{center}
\caption{Scores and prize money in IBU World Cup}
\label{OptimalGraphIBU}
\vspace{0.2cm}
\justify
\footnotesize{\emph{Notes}: Scores and prize money used in Women's 7.5 km Sprint biathlon races of the IBU World Cup in 2018/19 and 2019/20 seasons compared with geometric and optimal scores. Observe that the optimal scores for $\lambda=1$ closely approximate the actual scores used, while the prize money is almost geometrical.
The $x$-axis is the position, the $y$-axis the normalised score. Scores for first position were normalised to 100, for forty-first position to~0. Purple solid $\lambda=1$, blue dash (higher curve) p=1.058, brown dash (lower curve) p=1.244, light blue long dash dot actual prize money, red long dash two dots actual scores.}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=15cm]{PGA.pdf}
\end{center}
\caption{Scores and prize money in PGA TOUR}
\label{OptimalGraphPGA}
\vspace{0.2cm}
\justify
\footnotesize{\emph{Notes}: Scores and prize money used in Category 500 golf events of the PGA TOUR in 2017/18 and 2018/19 seasons compared with geometric and optimal scores. Observe that the optimal scores for $\lambda=1.4$ closely approximate both the scores and money awarded. The curve for $\lambda=1$ illustrates the concave-convex nature of the performance distribution.
The $x$-axis is the position, the $y$-axis the normalised score. Scores for first position were normalised to 100, for seventieth position to~0. Purple solid (higher curve) $\lambda=1$, black solid (lower curve) $\lambda=1.4$, brown dash p=1.465, light blue long dash dot actual prize money, red long dash two dots actual scores.}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=15cm]{200women.pdf}
\includegraphics[width=15cm]{100men.pdf}
\end{center}
\caption{Scores in Women's 200m (top) and Men's 100m (bottom) of the IAAF Diamond League}
\label{OptimalGraphIAAF}
\vspace{0.2cm}
\justify
\footnotesize{\emph{Notes}: The Borda scores used in 200m Women and 100m Men sprint contests of the IAAF Diamond League in 2015--2019 seasons compared with geometric and optimal scores. Observe that the optimal scores for $\lambda=1$ closely approximate the actual Borda scores used. The curves for $\lambda=100$ and $\lambda=7.9$ illustrate how closely geometric scores can approximate any optimal scoring sequence on this data.
The $x$-axis is the position, the $y$-axis the normalised score. Scores for first position were normalised to 100, for seventh position to~0.\\
Top: 200m Women, purple solid (higher curve) $\lambda=1$, black solid (lower curve) $\lambda=7.9$, blue dash (higher curve) p=1.08, brown dash (lower curve) p=1.553, red long dash two dots Borda scores.\\
Bottom: 100m Men, purple solid (higher curve) $\lambda=1$, black solid (lower curve) $\lambda=100$, blue dash (higher curve) p=1.024, brown dash (lower curve) p=1.329, red long dash two dots Borda scores.}
\end{figure}
We see that the scores used in the IBU are closely approximated by the optimal scores for $\lambda=1$, that both the scores and prize money in the PGA are very closely approximated by $\lambda=1.4$, and that the Diamond League uses the Borda rule, which almost coincides with the optimal scores for $\lambda=1$ on our data.
We would like to stress that the optimal sequence is by no means intuitive. In particular, a $\lambda$ of 1.4 does \emph{not} imply that the prize money for the first position is 1.4 times the second; rather, it implies that completing a course with one stroke less will, on average, earn the athlete 1.4 times more. Consider the prize money, actual scores, and optimal scores for PGA in \autoref{table:golf}. It is hard to see how an organiser would come up with this specific sequence of scores, if they did not have something similar to $F_\lambda$ in mind.
\begin{table}
\centering
\caption{Comparison of scores and prize money in PGA TOUR}
\begin{tabular}{lrrrrrrrrrr}
\toprule
Position& 1& 2& 3& 4& 5& 6& 7& 8& 9& 10\\
\hline
Optimal scores ($\lambda=1.4$) &100&55.9&42.7&31.9&25.9&21.3&18.1&16.2&14.2&12.7\\
Actual prize money &100&60.1&37.6&26.4&21.9&19.2&17.8& 16.4&15.3&14.2\\
Actual scores &100&59.8&37.6&26.6&21.5&19.5&17.5&16.5&15.5&14.5\\
Geometric approximation &100&68.3&46.6&31.8&21.7&14.8&10.1&6.9&4.7&3.2\\
\bottomrule
\end{tabular}
\label{table:golf}
\vspace{0.2cm}
\justify
\footnotesize{\emph{Notes}: Scores and prize money used in Category 500 golf events of the PGA TOUR in 2017/18 and 2018/19 seasons compared with geometric and optimal scores. Scores for first position were normalised to 100, for seventieth position to~0.}
\end{table}
The resemblance of the scores and prize money in the PGA TOUR to optimal scores with $\lambda=1.4$ also sheds light on empirical phenomena in the sport. A single ``race'' in the PGA TOUR (called a tournament) consists of four rounds. In a famous study \citet{Ehrenberg1990} find that a golfer who finishes the first three rounds trailing behind the other competitors is likely to perform poorly in the final round. The authors attribute this to the fact that the marginal monetary return on effort spent for a golfer who can expect to rank low is lower than for a golfer who can expect to rank high, which disincentivises those who are trailing from further effort. But why do marginal returns display this behaviour? The authors argue that this is due to the convexity of the prize structure -- ``the marginal prize received from finishing second instead of third was 4.0 percent of the total tournament prize money, while the marginal prize received from finishing twenty-second instead of twenty-third was 0.1 percent of the total tournament prize money''. We can now see that there is more to the story.
Observe that it is possible for optimal scores with parameter $\lambda=1$ to be convex (\autoref{OptimalGraphIBU}), but we argue that had PGA assigned prize money according to $\lambda=1$, we would not observe the effect of \citet{Ehrenberg1990}. Suppose athlete $a$ is performing poorly and knows their final quality in this race, $x_i^a$, will be low. The athlete must decide whether to accept $x_i^a$, or expend the extra bit of effort to finish with $x_i^a+\epsilon$. By \autoref{thm:optimal}, at the end of all $n$ races $a$ can expect his total earnings to equal his overall quality -- the sum of $x_1^a,\dots,x_n^a$. If the athlete's performance in the $i$th race is $x_i^a+\epsilon$ rather than $x_i^a$, this will translate to an expected $\epsilon$ extra in prize money, regardless of the value of $x_i^a$. On the other hand, with $\lambda=1.4$, the athlete can expect to earn the sum of $1.4^{x^a_1},\ldots,1.4^{x^a_n}$. By putting in the extra effort he can substitute $1.4^{x_i^a+\epsilon}$ for $1.4^{x_i^a}$, but the extra money here will very much depend on the value of $x_i^a$.
What is key here is not the convexity of the scores per se, but how the scores relate to the distribution of the athletes' cardinal performance. This also explains the result of \citet{Hood2008} and \citet{Shmanske07}, who observed that golfers with a high variance in the number of strokes earn more than more consistent golfers, even if the mean performance of the consistent golfers is slightly better. The authors attribute this effect to the convexity of the prize money used, but again we claim that such a phenomenon would be absent with $\lambda=1$, regardless of how convex the prize money distribution may be. As a consequence of \autoref{thm:optimal}, we would expect a golfer's earnings to be determined \emph{solely} by their average performance. Variance does not enter into the equation. This reaffirms our interpretation of the choice of $\lambda$ being linked to the organiser's attitude towards risk -- by using $\lambda=1.4$ the organisers of the PGA TOUR are willing to reward inconsistent golfers for the possibility of exceptional performance, even if their mean performance suffers.
Geometric scoring rules approximate the scores of the Diamond League and the prize money of the IBU well; the scores of the IBU less so; and the scores of the PGA not at all. Trade-offs like this are unavoidable in social choice -- there is no rule which is both utility-optimal and avoids the paradoxes of independence, so the event organiser will have to choose which is more important -- but the more interesting question is why geometric scoring rules behave well in some situations, and not as well in others.
The 100m and 200m sprints are well-understood events where random chance is kept to a minimum, and at an elite level all athletes perform near the peak of human ability. As a consequence, the results in such a race are drawn from a very narrow slice of the distribution of possible performances -- and a sufficiently narrow slice of a continuous distribution is approximately uniform. It appears that for such a distribution optimal and geometric scoring rules coincide. For the case of $\lambda=1$, the optimal scoring rule is known to be Borda (\autoref{thm:optimal}). We illustrate two more cases, $\lambda=7.9$ and $\lambda=100$, in \autoref{OptimalGraphIAAF} to demonstrate how closely geometric rules seem to approximate any choice of $\lambda$.
By contrast, in the PGA the distribution is very far from uniform. Indeed, a curious feature of the results is that the head is convex and the tail concave, i.e.\ the difference in times between the first and second positions is large, as is the difference between the last and second-to-last, but the intermediate positions are close together. This is particularly clear from the $\lambda=1$ curve in \autoref{OptimalGraphPGA} (recall that the scores for $\lambda=1$ will simply correspond to the average performance of the athletes). This suggests there may be two implicit leagues mixed in the data, and geometric rules will struggle to approximate the optimal scores in such a setting.
\newpage
\section{Conclusion}
Scoring rules are omnipresent. They are used in hiring decisions, group recommender systems \citep{Masthoff15}, meta-search engines, multi-criteria selection, word association queries \citep{Dwork01}, sports competitions \citep{Stefani11,Csato21book}, awarding prizes \citep{Benoit92,Stein94,Corvalan18}, and even for aggregating results from gene expression microarray studies \citep{Lin10}. Many countries use scoring rules in political elections: most of them use plurality, while Slovenia, Nauru and Kiribati use non-plurality scores \citep{FraenkelGrofman14,Reilly02}.
It is likely that scoring rules are popular because of their simplicity, yet choosing a scoring rule for a specific application is by no means simple. An axiomatic approach simplifies this search by narrowing the scope to the set of rules satisfying a certain combination of properties. In this paper, we establish that:\\
\begin{itemize}
\item Two natural independence axioms reduce the search to a single parameter family -- the choice of $p$ determines the scores we need (Theorem~\ref{thm:geometricrules}). To our knowledge, this is the first characterisation of a non-trivial family of scoring rules, rather than a specific rule, in the literature.\footnote{\citet{ChebotarevShamis98} provide an extensive overview of previous axiomatisations of scoring rules. While there have been many relaxations of independence in the literature (e.g., independence of Pareto-dominated alternatives or reduction condition \citep[p. 148]{Fishburn73book}, and independence of clones \citep{Tideman87}), for scoring rules they rarely led to positive results: plurality is the only scoring rule that satisfies independence of Pareto-dominated alternatives \citep{Richelson78,Ching96} and weaker versions of independence of clones \citep{Ozturk20}; Borda's rule is the only scoring rule that satisfies a modified independence defined by \citet{Maskin20}.} This family is sufficiently broad: not only does it include a continuum of convex and concave scores, but also three of the most popular scoring rules: the Borda count, generalised plurality (medal count) and generalised antiplurality (threshold rule).\\
\item We demonstrate how the choice of the parameter $p$ is constrained by the presence of other desirable axioms. The majority winner criterion pins down generalised plurality (\autoref{thm:generalisedplurality}), reversal symmetry -- Borda (\autoref{thm:borda}), and majority loser -- generalised antiplurality (\autoref{thm:concaveGSR}). In \autoref{app:independence}, we further show that these axioms are logically independent.\\
\item Finally, we consider the choice of $p$ in the context of a sporting competition on historical data. We introduce a model of the organiser's goal, and derive the optimal scoring rules for the IBU World Cup biathlon (\autoref{OptimalGraphIBU}), PGA TOUR golf (\autoref{OptimalGraphPGA}), and IAAF Diamond League sprint events (\autoref{OptimalGraphIAAF}). These scores closely resemble the actual scores used by the organisers, and provide an explanation for the phenomena observed by \citet{Ehrenberg1990}, \citet{Hood2008}, and \citet{Shmanske07}. We see that geometric scoring rules approximate the optimal scores well in events where the distribution of athletes' performances is roughly uniform.
\end{itemize}
\medskip
Our independence axioms have not received much attention in the literature, perhaps because of how weak they are individually. However, it can be shown that Nanson's rule \citep[p. 21]{Nanson1882,FelsenthalNurmi18book}, the proportional veto core \citep{Moulin81}, the points incenter \citep{Sitarz13}, and even certain scoring rules used in practice, such as average without misery \citep{Masthoff15}, and veto-rank \citep{BloomCavanagh86}, violate independence of unanimous losers.\footnote{Recently, \cite{BarberaCoelho20} showed that the shortlisting procedure \citep{deClippelEliazKnight14} and the voting by alternating offers and vetoes scheme \citep{Anbarci93} violate independence of unanimous losers.} It would be interesting to see where else these axioms can provide some insight. In the weighted version of approval-based multiwinner voting \citep{Thiele1895,Janson18,BrillLaslierSkowron18,LacknerSkowron21}, if we apply independence of unanimously approved candidates (analogous to our independence of unanimous winners), we obtain geometric sequences of scores which include the top-$k$ rule and a refinement of the Chamberlin–Courant rule as particular cases. Similarly, in the weighted version of approval-based single-winner voting \citep{BramsFishburn78,AlcaldeUnzuVorsatz09}, this axiom leads to geometric sequences of scores which include approval voting and a refinement of plurality as particular cases.
There thus seems to be an inexorable connection between geometric scores and immunity to padding the profile from the top or bottom. Stepping outside the pale for a second, we observe a similar phenomenon in aggregation functions -- our choice of $F_\lambda$ was motivated by the fact that it was immune to shifting the results by a common quantity, and Nagumo's (\citeyear{Nagumo30}) characterisation of the exponential mean, $E(\boldsymbol{x})=\log_\lambda((\lambda^ {x_1}+\ldots + \lambda^{x_n})/n)$, relies on the same notion of scale invariance, $E(\boldsymbol{x}+\bs{c})=E(\boldsymbol{x})+c$ \citep[see also][p. 138]{Grabisch09book}. Perhaps there is a sense in which padding the profile with unanimous losers and winners is precisely the ordinal equivalent of adding a constant~$c$.
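As a quick numerical check of this scale-invariance property, a short Python sketch of Nagumo's exponential mean (with hypothetical inputs):
\begin{verbatim}
import math

def exponential_mean(xs, lam):
    # E(x) = log_lam( (lam**x_1 + ... + lam**x_n) / n ),  lam > 0, lam != 1
    return math.log(sum(lam ** x for x in xs) / len(xs), lam)

xs = [9.58, 9.63, 9.69, 9.72]          # hypothetical cardinal results
lam, c = 1.4, 2.0
print(exponential_mean([x + c for x in xs], lam) - exponential_mean(xs, lam))
# prints c (up to floating-point rounding), i.e. E(x + c) = E(x) + c
\end{verbatim}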
\subsection*{Future directions}
The problem of rank aggregation arises in many contexts, but historically the field was largely viewed through the lens of political elections. As a consequence the assumption that we should treat candidates and voters equally -- neutrality and anonymity -- generally goes unquestioned. In a sporting context both are much more demanding suppositions \citep{Stefani11}. Anonymity demands that we weigh every race equally, while there are compelling reasons why we might want to place greater weight on some events than others -- perhaps to recognise their difficulty, or to modulate viewer interest over the course of the championship. Relaxing anonymity raises the question of how we can axiomatise weighted counterparts of geometric scoring rules, and whether our independence axioms can provide additional insight on non-anonymous rules. Neutrality may be perfectly natural when it comes to ranking athletes, but the assumption of symmetric a priori performance of athletes in \autoref{thm:optimal} is a strong one. Clearly some athletes can be expected to perform better than others \citep{Broadie12}, and even the mere presence of an exceptional athlete can be enough to change the performance of the competitors \citep{Brown11}. It would be interesting to see what the optimal ranking rule would be in a more general setting.
Another peculiar feature of many sporting events is that both points and prize money are awarded after each event, and the principles governing the two could be very different. We have seen that, while in the PGA TOUR the scores and prize money are almost identical (\autoref{OptimalGraphPGA}), in the IBU World Cup the two are completely different (\autoref{OptimalGraphIBU}). This can lead to the phenomenon where the athlete that earns the most money is not, in fact, the champion.\footnote{While the scores and prize money awarded in the PGA TOUR are \emph{almost} identical, they are not \emph{exactly} so. And this small difference can be sufficient to result in different champions and money leaders. The winner of the 2018 FedEx cup was Justin Rose, while the money leader was Justin Thomas, with 8.694 million to Rose's 8.130 (\url{https://web.archive.org/web/20200318190006/https://www.pgatour.com/news/2018/10/01/2018-2019-pga-tour-full-membership-fantasy-rankings-1-50.html}, \url{https://web.archive.org/web/20180924033715/https://www.pgatour.com/daily-wrapup/2018/09/23/tiger-woods-wins-2018-tour-championship-justin-rose-wins-fedexcup-playoffs.html}).} It would be interesting to see whether such incidents could be avoided, as well as other desirable features of prize structures. It does not appear that the axiomatic approach has been applied to prize structures, barring the recent works of \citet{DietzenbacherKondratev20} and \citet{PetroczyCsato21revenue}.
This paper was motivated by sports, where extreme results are valued, so we had little to say about concave geometric rules $(0\leq p\leq 1)$. An area where they may be of interest is group recommendation systems \citep{Masthoff15}, where one of the guiding principles is balance between achieving high average utility in the group, and minimising the misery of the least happy member. It is easy to see that Borda ($p=1$) maximises rank-average utility, while generalised antiplurality ($p=0$) minimises the misery of the least happy member. It is natural to suppose that rules with $0<p<1$ will find a middle ground between these two extremes, and it would be interesting to compare them to other procedures for achieving balance, such as average without misery, the Nash product, or veto-based approaches.
\printendnotes
\section{Introduction}
In the \emph{travelling salesman problem} (TSP) a salesperson is looking for
the shortest tour to visit all cities from a given list of cities. The input
consists of $n$ cities (including the city where the
salesperson lives), and the distances (or times) of travelling between each
pair of cities. The task is to find the shortest (cyclic) tour visiting all
the cities. The TSP is probably one of the best studied NP-hard
optimisation problems and has served as an important benchmark problem in
discrete optimisation with a long list of outstanding contributions
to the theory and practice of the field (see e.g.\ the
monographs~\cite{ABCC,Gutin,TSP}). One of the well
established directions of research for NP-hard
optimisation problems is the investigation of polynomially solvable special
cases (see the surveys~\cite{BSurv,DKTW,GLS,Kabadi} for further references).
In the \emph{bipartite travelling salesman problem} (BTSP) the set of cities
$\{1,\ldots,n\}$ is partitioned into two subsets, the set
$K_1$ of blue cities and the set
$K_2$ of red cities where $|K_1|=|K_2|=k$, $n=2k$. Any feasible tour in the
BTSP has to alternate between blue and red cities. The objective
is to find the shortest such tour.
\medskip
{\bf Motivation and previous work.}
An important application of the BTSP can be found in the context of container
terminal management
(see~\cite{BierwirthMeisel-a,BierwirthMeisel-b,Boysen,Carlo,LehnfeldKnust}).
In a container terminal, trains loaded with containers arrive and have
to be unloaded to a storage area. The containers have fixed positions on the
trains and the unloading is performed by a single crane. The goal is to minimise
the unloading time. The special case in which only $k$ storage positions are
specified for the locations of the $k$ containers from the train can be modelled
as a BTSP. The BTSP has also drawn the attention of researchers
(\cite{Baltz,BaltzSri,Chalasani,Frank}) due to its relevance
to pick-and-place (or grasp-and-delivery) robots
(\cite{Anily,Atallah,Lei,Michel,Karuno}).
The BTSP is not as well studied as the TSP. In particular, while there are
plenty of publications on polynomially solvable cases of the TSP, we are
aware of only a few papers~\cite{DW2014,Garcia,Halton,Mis} published on
solvable cases of the BTSP.
Halton~\cite{Halton} was the first to provide a polynomially solvable case
of the BTSP. He considered the shoelace problem where
cities represent the eyelets of shoes and the objective is to find
an optimal shoe lacing strategy that minimises the length of the
shoelace. In Halton's model the eyelets can be viewed as points in
the Euclidean plane: the blue points $K_1=\{1,2,\ldots,k\}$ have coordinates
$(0,d),(0,2d),\ldots,(0,kd)$ and the red points $K_2=\{k+1,k+2,\ldots,n\}$
have coordinates $(a,d),(a,2d),\ldots,(a,kd)$, respectively.
Halton proved that the optimal BTSP solution in this case has a
special structure which is illustrated in Figure~\ref{fig:1H}(a).
\begin{figure}
\unitlength=1cm
\begin{center}
\begin{picture}(15.,5)
{
\begin{picture}(15.,5)
\put(1,0.7){\framebox[0.7\width]{
\includegraphics[scale=1]{Halton.png}
}}
\put(4.6,0.7){\framebox[0.9\width]{
\includegraphics[scale=1.]{Misiurevich.png}
}}
\put(10,0.7){\framebox[0.7\width]{
\includegraphics[scale=1.]{DW1.png}
}}
\put(2,0){(a)}
\put(7,0){(b)}
\put(11,0){(c)}
\end{picture}
}
\end{picture}
\end{center}
\caption{Illustration from \cite{DW2014} for polynomially solvable BTSP cases as variants of the shoelace problem:
(a) - Halton \cite{Halton} case; (b) - Misiurewicz \cite{Mis} case; (c) case of Deineko \& Woeginger \cite{DW2014}.
}
\label{fig:1H}
\end{figure}
The shoelace problem is a nice interpretation of the BTSP that can be used for
entertaining and educational purposes.
Misiurewicz~\cite{Mis} argued that Halton's case is based on quite
restricted assumptions which are hardly met in real life. He generalised
Halton's model to the case referred to as
``for old shoes'' (Fig.~\ref{fig:1H}(b)). Deineko and Woeginger~\cite{DW2014}
went on further and investigated the case ``for
\emph{ very old} shoes'' (Fig.~\ref{fig:1H}(c)).
Notice that the cities in Halton's case are points in the Euclidean plane
and are placed on the boundary of their convex hull. The blue and the
red points occur consecutively along this boundary.
Garcia and Tejel~\cite{Garcia} considered a more general case of the
Euclidean BTSP where the points are still on the boundary of their convex hull,
but the points with the same colour are not necessarily on consecutive
positions. For this case, they described a candidate set of exponential
size within which the optimal BTSP tour can be found. The running time of
their algorithm is $O(n^6)$.
Recently Bandyapadhyay et al.~\cite{BPath} studied the bipartite TSP-path
problem with the points placed on a line. They described several cases
when the problem can be solved even in linear time.
The special BTSP cases considered in~\cite{BPath, Garcia, Halton} have been
characterised in terms of special locations of points in the Euclidean plane,
while in~\cite{DW2014} and~\cite{Mis} the considered special cases are
obtained by imposing conditions on the entries in the distance matrices.
These conditions are
defined by sets of linear inequalities. This approach is widely used in the
research literature on polynomially solvable cases of the TSP
(see e.g.~\cite{DKTW} and the references therein).
\medskip
{\bf Contribution and organisation of the paper.}
In this paper we investigate the BTSP on Van der Veen distance matrices (see
conditions (\ref{vdv.c}) in Section~\ref{sec:definitions} for a definition)
and the newly introduced class of relaxed Van der Veen matrices
which result if certain of the linear inequalities which are involved
in the definition of Van der Veen matrices are dropped
(for a definition see conditions~(\ref{eq:delta1}) in
Section~\ref{sec:pyramidal}).
The class of Van der Veen matrices
has been well investigated in the literature on polynomially solvable
cases of the TSP, but
has not been considered in the context of the BTSP. We first show that
the BTSP when restricted to Van der Veen distance matrices remains NP-hard.
Then we show that the even-odd BTSP which results if $K_1$ contains all
cities with odd indices and $K_2$ contains all cities with even indices becomes
solvable in polynomial time when restricted to (relaxed) Van der Veen
distance matrices. In this case an optimal
tour can be found within the set of pyramidal tours in $O(n^2)$ time.
We can go one step further. We can recognise the class of matrices $C$
which become relaxed Van der Veen matrices after renumbering the
cities in $K_1$ and in $K_2$ with independent permutations which allows us to
find an optimal tour for this subclass of permuted relaxed Van der Veen
matrices in $O(n^4)$ time (the time needed by the recognition algorithm).
In Section~\ref{sec:definitions} we provide the definitions and preliminaries
needed in the rest of the paper. Section~\ref{sec:pyramidal} constitutes the heart of
the paper and contains both the hardness result
for the BTSP restricted to general Van der Veen matrices as well as our
polynomial time algorithm for the even-odd BTSP restricted to relaxed
Van der Veen matrices. Section~\ref{sec:RecognitionP} describes a polynomial
time recognition algorithm for a subclass of permuted relaxed Van der Veen
matrices. Section~\ref{sec:conclusion} closes the paper with concluding
remarks.
\section{Definitions and preliminaries}\nopagebreak
\label{sec:definitions}
Given an $n\times n$ distance matrix $C=(c_{ij})$ the objective in the TSP is to
find a cyclic
permutation $\tau$ of the set $\{1,2,\ldots,n\}$ that minimises the
travelled distance $c(\tau)=\sum_{i=1}^{n}c_{i\tau(i)}$.
Throughout this paper we assume that distance matrices are \emph{symmetric} matrices.
The cyclic permutations are also called \emph{tours}, the elements
of set $\{1,2,\ldots,n\}$ are called \emph{cities} or \emph{points\/}, and
$c(\tau)$ is referred to as the length of the permutation $\tau$.
The set of all permutations over set $\{1,2,\ldots,n\}$ is denoted by
${\mathcal S}_n$. For $\tau\in {\mathcal S}_n $, we denote by $\tau^{-1}$ the
\emph{inversion} of $\tau$, i.e., the permutation for which
$\tau^{-1}(i)$ is the predecessor of $i$ in the tour $\tau$, for
$i=1,\ldots,n$. We also use a cyclic representation of a cyclic
permutation $\tau$ in the form
\begin{eqnarray*}
\tau=\seq{i,\tau(i),\tau(\tau(i)),\ldots,\tau^{-1}(\tau^{-1}(i)),\tau^{-1}(i),i}.
\end{eqnarray*}
In the bipartite TSP (BTSP) on top of an $n\times n$ distance matrix $C$
we are also given a partition of the $n=2k$ cities into the two sets $K_1$ and
$K_2$ with $K_1\cup K_2=\{1,2,\ldots,n\}$ and $|K_1|=|K_2|=k$.
The special case of the {\em even-odd BTSP} results when
$K_1$ contains all cities with odd indices and $K_2$ contains all cities
with even indices.
The set
${\mathcal T}_n(K_1,K_2)$ of all feasible tours for the BTSP
can formally be defined as
\begin{eqnarray*}
{\mathcal T}_n(K_1,K_2)=\{\tau\in {\mathcal
S}_n|\tau^{-1}(i),\tau(i)\in K_2 \textrm{ for }
i\in K_1;
\tau^{-1}(i),\tau(i)\in K_1 \textrm { for } i\in K_2\}.
\end{eqnarray*}
We will refer to the tours in ${\mathcal T}_n(K_1,K_2)$ as
\emph{bipartite} tours or feasible BTSP tours.
For example, if $K_1:=\{1,2,\ldots,k\}$ and $K_2:=\{k+1,\ldots,n\}$, then the tour
\begin{eqnarray*}
\tau^*=\seq{1,k+1,2,k+3,4,k+5,6\ldots,7,k+6,5,k+4,3,k+2,1}
\end{eqnarray*}
which is illustrated in Figure~\ref{fig:1H} is a feasible BTSP tour, i.e., is
a member of ${\mathcal T}_n(K_1,K_2)$.
Let $C[K_1,K_2]$ denote the $k\times k$ matrix which is obtained
from matrix $C$ by \emph{deleting} all rows with indices from $K_2$ and
all columns with indices from $K_1$. Clearly, the length $c(\tau)$ of
any feasible BTSP tour $\tau$ is calculated by using \emph{only}
entries from $C[K_1,K_2]$.
A tour $\tau=\seq{1,\tau_2,\ldots,\tau_m,n,\tau_{m+2},\ldots,\tau_{n-2},1}$ is
called a {\em pyramidal tour}, if $1<\tau_2<\ldots<\tau_m<n$ and
$n>\tau_{m+2}>\ldots >\tau_{n-2}>1$. An instance of the TSP/BTSP is called
\emph{pyramidally solvable} if an optimal solution to the instance can be
found within the set of pyramidal tours.
The notion of pyramidal tours is well known in the rich literature on
polynomially solvable cases of the TSP (see the
surveys~\cite{BSurv,GLS,Gutin,Kabadi,DKTW}). Although the set of pyramidal
tours contains $\Theta(2^n)$ tours, a shortest pyramidal tour can be found in
$O(n^2)$ time by dynamic programming (see e.g.\ Section 7 in \cite{GLS}).
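For completeness, a minimal Python sketch of this dynamic programme is given below (an illustrative implementation, not taken from the cited references); it returns the length of a shortest pyramidal tour for an arbitrary symmetric distance matrix, with cities indexed $0,\dots,n-1$ and pyramidality understood with respect to this indexing.
\begin{verbatim}
def shortest_pyramidal_tour_length(d):
    # d: symmetric n x n distance matrix (list of lists), n >= 2.  O(n^2).
    n = len(d)
    if n == 2:
        return d[0][1] + d[1][0]
    INF = float("inf")
    # b[i][j], i < j: shortest path i -> ... -> 0 -> ... -> j visiting 0..j
    b = [[INF] * n for _ in range(n)]
    b[0][1] = d[0][1]
    for j in range(2, n):
        for i in range(j - 1):                      # i < j - 1
            b[i][j] = b[i][j - 1] + d[j - 1][j]
        b[j - 1][j] = min(b[k][j - 1] + d[k][j] for k in range(j - 1))
    # close the path from city n-2 to city n-1 with the direct edge
    return b[n - 2][n - 1] + d[n - 2][n - 1]
\end{verbatim}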
A symmetric $n\times n$ matrix $C$ is
called a \emph{Van der Veen\/} matrix if it fulfils the so-called
\emph{Van der Veen conditions}
\begin{eqnarray}
c_{ij}+c_{j+1,m}\le c_{im}+c_{j+1,j} &&
\mbox{~for all~} 1\le i<j<j+1<m\le n. \label{vdv.c}
\end{eqnarray}
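Conditions (\ref{vdv.c}) can be verified directly by brute force in $O(n^4)$ time; an illustrative Python sketch (with 1-based indexing as in the text, so row and column $0$ of the array are unused):
\begin{verbatim}
def is_van_der_veen(c):
    # Check c[i][j] + c[j+1][m] <= c[i][m] + c[j+1][j]
    # for all 1 <= i < j < j+1 < m <= n, where c is an (n+1) x (n+1)
    # array with row/column 0 unused.
    n = len(c) - 1
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            for m in range(j + 2, n + 1):
                if c[i][j] + c[j + 1][m] > c[i][m] + c[j + 1][j]:
                    return False
    return True
\end{verbatim}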
\section{Complexity results for the BTSP on (relaxed) Van der Veen matrices}\label{sec:pyramidal}
In this section we investigate the complexity of the BTSP restricted to
Van der Veen matrices and relaxed Van der Veen matrices. It was proved by Van der Veen \cite{Veen} back in 1994 that the TSP with a
distance matrix that satisfies conditions (\ref{vdv.c}) is pyramidally
solvable. For the BTSP the situation is more complex.
It turned out that the partitioning into blue and red cities
which is part of the input plays a crucial role.
We will show that the BTSP restricted to Van der Veen matrices remains NP-hard
while the even-odd BTSP where the colouring of the cities is chosen according to
the parity of the city indices becomes polynomially solvable for Van der Veen
matrices and even for relaxed Van der Veen matrices.
We start with the hardness result for the special case where the first
$k$ vertices are coloured blue and the remaining ones are coloured red.
\begin{theorem}
The BTSP is NP-hard on $n\times n$ Van der Veen distance matrices
when the $n=2k$ cities are partitioned into the sets
$K_1:=\{1,2,\ldots,k\}$ and $K_2:=\{k+1,\ldots,2k\}$.
\end{theorem}
\emph{Proof.}\ \ The proof follows along the lines of the proofs of similar results
for the TSP (see \cite{MaxDemi} and \cite{Steiner}) and makes some
adjustments required for the BTSP. The proof is done by a reduction from the
NP-hard HAMILTONIAN CYCLE PROBLEM IN BIPARTITE GRAPHS (cf. \cite{Garey}).
Let $G=(K_1\cup K_2,E)$ be a bipartite graph with $E\subset K_1\times K_2$.
From $G$ we construct a $2k\times 2k$ Van der Veen matrix $C=(c_{ij})$ as
follows. The items in $C[K_1,K_2]$ are obtained from the adjacency matrix
of $G$: If there is an edge between $i\in K_1$ and $j\in K_2$, we set
$c_{ij}=c_{ji}=0$, otherwise $c_{ij}=c_{ji}=1$.
Notice that any tour for the BTSP involves only trips between cities which are
not on the same side of the partition. The corresponding distances are elements
of the submatrix $C[K_1,K_2]$. Hence
the graph $G$ contains a Hamiltonian cycle if and only
if the length of an optimal solution to the BTSP instance with matrix $C$
is 0.
What is left to be shown is that the yet undefined elements in $C$
can be set in a way such that the resulting matrix is a Van der Veen
matrix. This can be achieved as follows.
For $i,j\in K_1=\{1,2,\ldots,k\}$, $i<j$, we set
$c_{ij}=-(k+1)+j$, $c_{ji}=c_{ij}$. We also set $c_{k+i,k+j}=-i$,
$c_{k+j,k+i}=c_{k+i,k+j}$. This completes the construction of the matrix $C$.
We now need to check that the inequalities (\ref{vdv.c}) are fulfilled
to confirm that $C$ is
indeed a Van der Veen matrix. The following five cases need to be considered.
\begin{itemize}
\item[ (a)] $k\in [m,n]$: It follows from $m\le k$ that
$i<k$,
$i<j<j+1<m\le n=2k$. Hence inequalities (\ref{vdv.c}) can be rewritten as
$-i-(j+1)\le -i -j$, or $-1\le 0$ which is trivially fulfilled.
\item[(b)] $k\in [j+1,m-1]$: It follows from $j+1\le k\le m-1$ that $i\le k$,
$k<j<j+1<m\le 2k$. Inequalities (\ref{vdv.c}) can be rewritten as $c_{ij}+
k-j-1\le c_{im}+k-j$, or $c_{ij}-1\le c_{im}$, which is true since $c_{ij}$
and $c_{im}$ are binary as they are obtained from the adjacency matrix of
graph $G$.
\item[(c)] $k\in [j,j]$: It means that $i<j=k<k+1<m\le 2k$. In this case $c_{k+1,k}$ and $c_{im}$ are binary, $c_{ik}= -(k+1)+k=-1$, $c_{k+1,m}=-(k+1-k)=-1$.
The inequality $-1-1\le c_{k+1,k}+c_{im}$ is always satisfied.
\item[(d)] $k\in [i,j-1]$: Similar to case (b).
\item[(e)] $k\in [1,i-1]$: Similar to case (a).
\hfill~~$\Box$
\end{itemize}
\medskip
The situation becomes more favourable for the even-odd BTSP.
Remember that there we consider $K_1:=\{1,3,\ldots,2k-1\}$ and
$K_2:=\{2,4,\ldots,2k\}$ with $n=2k$.
\begin{theorem}\label{theo:pyr}
The even-odd BTSP on $n\times n$ Van der Veen distance matrices
is pyramidally solvable and hence can be solved in $O(n^2)$ time.
\end{theorem}
\emph{Proof.}\ \
In this proof the well-known tour improvement technique (cf.\ \cite{BSurv}) is
used. Assume that we are given a bipartite feasible tour on $n=2k$ cities,
with the $k$ blue cities placed on odd positions, and the $k$ red cities
placed on even positions.
We will show how to transform a feasible tour $\tau$ into a pyramidal tour
which is also a \emph{feasible} BTSP solution, i.e., with
the blue cities placed on odd positions, and the red cities
placed on even positions.
Index $i$ in tour $\tau$ is called a \emph{valley}, if $\tau^{-1}(i)>i$ and
$i<\tau(i)$. Observe that a tour is pyramidal if and only if city $1$
is its only valley.
If tour $\tau$ is not a pyramidal tour, we identify the minimal valley which
is greater than $1$. Let this valley be $j+1$ and let $\tau(j+1)=m$.
We further on assume that $\tau(j)=l$ and $l>j$ which can be done w.l.o.g.
since the distance matrix $C$ is symmetric and we can choose to either
work with $\tau$ or its inversion $\tau^{-1}$.
Notice that the cities $j$ and $j+1$ have
different parity. Assume that $j+1$ is even (a red city), then
$j$ is odd (a blue city).
\begin{figure}
\unitlength=1cm
\begin{center}
\begin{picture}(14.4,5)
\put(1.5,0)
{
\includegraphics[scale=1.8]{PyramidalTransform.png}
}
\end{picture}
\end{center}
\caption{ Illustration of one iteration of the tour improvement technique.}
\label{fig:transformPyramidal}
\end{figure}
We now create a new feasible BTSP tour $\tau'$ as follows.
We delete the edges $(j,l)$, $(j+1,m)$, invert the sub-tour
$\seq{l,\ldots,j+1}$ to obtain the sub-tour $\seq{j+1,\ldots,l}$, and
then introduce two new edges $(j,j+1)$, $(l,m)$. For an illustration see
Figure~\ref{fig:transformPyramidal}. It is obvious that we obtain
this way a new tour which is a feasible solution for the BTSP. Moreover,
the minimal valley in the new tour is bigger than $j+1$. After a finite
number of iterations we end up of a pyramidal tour which is a
feasible solution to the BTSP.
The rest of the proof will be concerned with proving the claim below. Iterative
application implies that the procedure described above transforms
any feasible BTSP tour into a pyramidal BTSP tour with the same
or a smaller total length.
\medskip
\noindent
\emph{ Claim.} $c(\tau')\le c(\tau)$.
\noindent
{\em Proof of the Claim.\/}
Observe that what we need to show is that
\begin{equation}\label{eq:delta}
c_{j+1,j}+c_{lm} -c_{jl}-c_{j+1,m}\le 0 \qquad\mbox{for all}\ j,l,m:\
j<j+1<l,m\le n
\end{equation}
where we can restrict ourselves to the case where
$(j\equiv m)\ \mathrm{mod}\ 2$ and $j+1\equiv l\ \mathrm{mod}\ 2$.
Let $\Delta(i,l,j,m):= c_{ij}+c_{lm}-c_{im}-c_{jl}$. For a symmetric matrix $C$,
$\Delta(i,l,j,m)=\Delta(j,m,i,l)$.
Using the $\Delta$-notation, the inequalities (\ref{vdv.c}) can be rewritten as
$\Delta(i,j+1,j,m)\le 0$, or $\Delta(j,m,i,j+1)\le 0$ for all
$1\le i<j<j+1<m\le n$.
The inequalities (\ref{eq:delta}) can be rewritten as
$\Delta(j+1,l,j,m)\le 0$ for all $j,m,l$ with $j+1<l,m\le n$.
It hence suffices to prove the following two statements.
\begin{itemize}
\item Statement I: $\Delta(j+1,l,j,m)\le 0 \mbox{~~for all~}\ j,m,l \ \mbox{~with~}\ j+1<
\bm{l<m}\le n.$
\item Statement II: $\Delta(j+1,l,j,m)\le 0 \mbox{~~for all~}\ j,m,l \ \mbox{~with~}\ j+1<
\bm{m<l}\le n.$
\end{itemize}
\begin{figure}
\unitlength=1cm
\begin{center}
\begin{picture}(14.5,6)
\put(1.5,-0.5)
{
\includegraphics[scale=1.2]{Schema.png}
}
\end{picture}
\end{center}
\caption{ Schematic representation of
$\Delta$: $\Delta(j+1,j+3,j,m)=\Delta(j+1,j+3,j,j+2)+\Delta(j+1,j+3,j+2,m)$}
\label{fig:schema}
\end{figure}
\noindent
{\em Proof of Statement I:\/} If $l=j+3$, then it can be easily checked that
$\Delta(j+1,j+3,j,m)=\Delta(j+1,j+3,j,j+2)+\Delta(j+1,j+3,j+2,m)$.
(For an illustration see Figure \ref{fig:schema}.)
Since $\Delta(j+1,j+3,j+2,m)=\Delta(j+2,m,j+1,j+3)$, it follows
from (\ref{vdv.c}) that both terms in the sum are non-positive.
If $l>j+3$, then $l=j+3+2p$ with $p\ge 1$,
since $l$ and $j$ are of different colours. It is easy to check that
$\Delta(j+1,l,j,m)=\Delta(j+1,j+3,j,m)+\Delta(j+3,l,j,m)$, and
$\Delta(j+1,j+3,j,m)\le 0$. If $l=j+3+2$, then
$\Delta(j+3,l,j,m)\le 0$, as was shown above.
Otherwise we represent $\Delta(j+3,l,j,m)$ as a sum of two terms,
and so on. Eventually we end up with proving the inequality
$\Delta(j+1,l,j,m)\le 0$.
\smallskip
\noindent
{\em Proof of Statement II:\/}
If $m=j+2$, then it follows from (\ref{vdv.c}) that $\Delta(j+1,l,j,j+2)\le 0$.
If $m>j+2$, then $l>j+3$, and it is easy to check that
$\Delta(j+1,l,j,m)=\Delta(j+1,l,j,j+2)+\Delta(j+1,j+3,j+2,m)+\Delta(j+3,l,j+2,m)$. Again,
$\Delta(j+1,l,j,j+2)\le 0$ and $\Delta(j+1,j+3,j+2,m)\le 0$ due to
(\ref{vdv.c}). If $l-j-3>2$, we continue the process and eventually we end up
with proving the
inequality $\Delta(j+1,l,j,m)\le 0$.
\hfill~~$\Box$
\medskip
Note that in the proof above only the following subset of the Van der Veen
inequalities is needed
\begin{eqnarray}
\label{eq:delta1}
\begin{blockarray}{cc}
c_{j+1,j}+c_{lm} -c_{jl}-c_{j+1,m}\le 0\ &
\mbox{for all}\ j,l,m:\ 1\le j<j+1<l,m;\\
&(j\equiv m)\ \mathrm{mod}\ 2 \mbox{ and } (j+1\equiv l)\ \mathrm{mod}\ 2.
\end{blockarray}
\end{eqnarray}
Note that all matrix entries involved in (\ref{eq:delta1}) are entries of
the submatrix $C[K_1,K_2]$.
We call a matrix $C$ that satisfies the conditions (\ref{eq:delta1}) a
\emph{relaxed} Van der Veen matrix. The following corollary then follows
immediately.
\begin{corollary}
The even-odd BTSP on an $n\times n$ relaxed Van der Veen matrix is
pyramidally solvable.
\end{corollary}
For the sake of illustration, Figure~\ref{fig:pyr1} shows two
sets of points in the Euclidean plane
along with their corresponding optimal BTSP tours.
The distance matrix in instance \emph{A} is a
Van der Veen matrix while the distance matrix in instance \emph{B} is
a relaxed Van der Veen matrix. In instance \emph{B} we chose the first
four points (as in instance A) and randomly generated the other points
to agree with conditions (\ref{eq:delta1}). In instance \emph{B},
17 out of 165 Van der Veen inequalities (\ref{vdv.c})
are violated; e.g., take the inequality that results for $i=6$, $j=9$ and
$m=11$.
\begin{figure}
\unitlength=1cm
\begin{center}
\begin{picture}(15.,5)
{
\begin{picture}(15.,5)
\put(1.2,0.7){\framebox[1\width]{
\includegraphics[scale=1]{DeinekoVdV_originSolution.png}
}}
\put(8,0.7){\framebox[1\width]{
\includegraphics[scale=1]{DeinekoVdV_Relaxed1Solution.png}
}}
\put(3,0){Instance \emph{A}}
\put(10,0){Instance \emph{B}}
\end{picture}
}
\end{picture}
\end{center}
\centerline
{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
Point number & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 \\
\hline
$x$-coord. in \emph{A} & 5 & 34 & 17 & 37 & 37 & 31 & 42 & 40 & 45 & 43 & 50& 49 \\
\hline
$y$-coord. in \emph{A} & 45 &35 & 35 & 31 & 19 & 17 & 13 & 9 & 9 & 4 & 3 & 8 \\
\hline
$x$-coord. in \emph{B} & 5 & 35 & 17 & 37 & 26 & 33 & 24 & 25 & 30 & 31 & 37& 44 \\
\hline
$y$-coord. in \emph{B}& 45 &35 & 35 & 31 & 26 & 19 & 21 & 11& 2 & 12 & 6 &8 \\
\hline
\end{tabular}
}
\caption{ Optimal tours for instances of the even-odd BTSP with a Van der Veen matrix
(Instance \emph{A}) and with a relaxed Van der Veen matrix (Instance \emph{B}).}
\label{fig:pyr1}
\end{figure}
\section{Recognition of a subclass of permuted relaxed Van der Veen matrices}\label{sec:RecognitionP}
It is obvious that a matrix property which is defined by linear inequalities as
is the case for Van der Veen matrices depends on the numbering of the rows
and of the columns. For the BTSP the partitioning of the set
of cities into the coloured sets also has to be taken into consideration.
We henceforth assume that the partitioning of the set of cities into the
subsets $K_1$ and $K_2$ is given, but we have the freedom of choosing the
numbering of the cities in each subset. More specifically, we consider symmetric
matrices that satisfy the system of linear inequalities (\ref{eq:delta1}) with
the partitioning $K_1:=\{1,3,\ldots,2k-1\}$ and $K_2:=\{2,4,\ldots,2k\}$.
We assume that the initial numbering of the cities in $K_1$ and $K_2$ was chosen
randomly, and the system (\ref{eq:delta1}) is not satisfied. We pose the
question whether it is possible to recognise \emph{the right} numbering of
the cities in $K_1$ and $K_2$.
To simplify the further notations, we define a new $k\times k$ asymmetric
matrix $A:=C[K_1,K_2]$ with $a_{ij}=c_{2i-1,2j}$. Using this notation,
the system (\ref{eq:delta1}) can be rewritten as
\begin{eqnarray}\label{eq:a1}
a_{11}+a_{lm}\le & a_{1m}+a_{l1} & l,m=2,3,\ldots,k,\\
a_{i,i-1}+a_{l,m}\le & a_{i,m}+a_{l,i-1} & l=i+1,\ldots,k; m=i,\ldots,k,\label{eq:a3}\\
a_{ii}+a_{lm}\le & a_{im}+a_{li}& l=i+1,\ldots,k; m=i+1,\ldots,k, \label{eq:a2}\\
&&i=2,\ldots,k-1.\nonumber
\end{eqnarray}
\begin{proposition}
Given a $k\times k$ matrix $A=(a_{ij})$, it can be decided in $O(k^4)$
time whether there exist permutations $\gamma$ and $\delta$ such that
the permuted matrix $(a_{\gamma(i)\delta(j)})$ satisfies conditions
\emph{(\ref{eq:a1})-(\ref{eq:a2})}. If the permutations
$\gamma$ and $\delta$ exist, they can be found in time $O(k^4)$.
\end{proposition}
\begin{proof}
The System (\ref{eq:a1})-(\ref{eq:a2}) is similar to the systems
investigated in~\cite{BurDei} and~\cite{DW2014}. The proof below
follows the logic of the approach used by~\cite{DW2014}.
First try all $k$ indices as candidates for the first position
in $\gamma$. Let $\gamma(1)=1$. According to (\ref{eq:a1}), index $i$
can be placed at the first
position in $\delta$ if and only if
\begin{eqnarray}
a_{1i}+a_{st}\le a_{si}+a_{1t} \mbox{ for all\ } s\neq 1, t\neq
i. \label{recognition}
\end{eqnarray}
If there is another candidate $j$ with the same property ($i\neq j$), then it
can be shown
that $ a_{1i}+a_{sj}=a_{si}+a_{1j}$, i.e., $ a_{sj}=a_{si}+d $
for all $s$, where
$d=a_{1i}-a_{1j}$ is a constant for fixed $i$ and $j$. Since adding a
constant to a row or a column of matrix $A$ does not affect
the inequalities (\ref{eq:a1})-(\ref{eq:a2}), any of the indices $i$ or
$j$ can be placed at the first position in $\delta$.
The candidate $i$ can be chosen in $O(k^2)$
time. Note that the transformation $a'_{st}=a_{st}-a_{1t}$,
$s=1,\ldots,k$, $t=1,\ldots,k$, transforms
matrix $A$ into matrix $A'$ with zeros in the first row.
The inequalities (\ref{recognition}) are equivalent to
$ a'_{st}\le a'_{si} \mbox{ for all\ } s,t$ and $i$.
Clearly, index $i$ can be found in $O(k^2)$ time by looking through the
indices of the maximal entries in the rows of $A'$.
An index for the second position in $\delta$ needs to be chosen by using the
same approach based on inequalities (\ref{eq:a3}). An index for the
second position in $\gamma$ needs to be chosen by using the inequalities
(\ref{eq:a2}). This approach is going to be repeated for all positions.
This results in $O(k^3)$ time needed
for each candidate at the position
$\gamma(1)$ and, therefore, an overall running time of
$O(k^4)$ as claimed. \hfill~~$\Box$
\end{proof}
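To make the elementary step of this procedure concrete, the following illustrative Python sketch (assuming integer entries, so that exact equality of row maxima is meaningful) finds, for a fixed choice of the row placed first, a column index that may be placed at the first position of $\delta$, or reports that none exists.
\begin{verbatim}
def candidate_first_column(a):
    # a: k x k integer matrix whose row 0 is the row already fixed at the
    # first position of gamma.  Returns a column index i such that, after
    # subtracting row 0 from every row, column i attains the maximum in
    # every remaining row (so it may be placed first in delta), or None.
    k = len(a)
    ap = [[a[s][t] - a[0][t] for t in range(k)] for s in range(k)]
    candidates = set(range(k))
    for s in range(1, k):
        row_max = max(ap[s])
        candidates &= {t for t in range(k) if ap[s][t] == row_max}
        if not candidates:
            return None
    return min(candidates)
\end{verbatim}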
\smallskip
To illustrate the algorithm we consider the BTSP with a rectilinear distance matrix
(see Fig.~\ref{fig:pyrM1}) where the distances between points are calculated as $c_{ij}=|x_i-x_j|+|y_i-y_j|$.
\begin{figure}
\unitlength=1cm
\begin{center}
\begin{picture}(15.,6)
{
\begin{picture}(15.,6)
\put(0.5,0.7){\framebox[1\width]{
\includegraphics[scale=0.99]{ManhattenVdV_2.png}
}}
\put(7.5,0.7){\framebox[1\width]{
\includegraphics[scale=1]{ManhattenVdV_4Solution.png}
}}
\put(3.5,0){(a)}
\put(10.5,0){(b)}
\end{picture}
}
\end{picture}
\end{center}
\centerline
{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
Point number & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 \\
\hline
$x$-coord. & 38 & 48 & 35 & 35 & 32 & 1 & 16 & 12 & 9 & 2 & 2& 5 \\
\hline
$y$-coord. & 8 &15 & 17 & 44 & 18 & 16 & 26 & 46 & 34 & 38 & 44 & 47 \\
\hline
\end{tabular}
}
\caption{ (a) - Set of points which satisfies (\ref{eq:a1})-(\ref{eq:a2}), if the numbering is chosen as shown in (b); (b)- Optimal pyramidal BTSP tour $\seq{1,2,3,4,5,8,9,12,11,10,7,6}$.}
\label{fig:pyrM1}
\end{figure}
\[
A=
\begin{blockarray}{cccccc}
\begin{block}{(cccccc)}
17 & 39 & 45 & 64 & 66 & 72\\
15 & 27 & 35 & 52 & 54 & 60\\
19 & 29 & 33 & 48 & 50 & 56\\
43 & 37 & 25 & 24 & 26 & 32\\
58 & 36 & 26 & 15 & 11 & 17\\
75 & 33 & 29 & 12 & 6 & 6\\
\end{block}
\end{blockarray}\ \ \quad
A'=
\begin{blockarray}{cccccc}
\begin{block}{(cccccc)}
0 & 0 & 0 & 0 & 0 & 0\\
-2 & -12 & -10 & -12 & -12 & -12\\
2 & -10 & -12 & -16 & -16 & -16\\
26 & -2 & -20 & -40 & -40 & -40\\
41 & -3 & -19 & -49 & -55 & -55\\
58 & -6 & -16 & -52 & -60 & -66\\
\end{block}
\end{blockarray}
\]
The index of the maximal entries in $A'$ in rows $2,\ldots,6$ is $1$,
so $\delta(1)=1$, i.e., column 1 remains in $A$ at the same position.
Row 1 in $A$ is no longer relevant to the subsequent constructions; therefore, we consider a $5\times 6$ submatrix of $A$ to choose a row to be placed at the second position in permutation $\gamma$.
This submatrix and its transformation $A'$ are shown below.
\[
A_{5\times 6}=
\begin{blockarray}{cccccc}
\begin{block}{(cccccc)}
15 & 27 & 35 & 52 & 54 & 60\\
19 & 29 & 33 & 48 & 50 & 56\\
43 & 37 & 25 & 24 & 26 & 32\\
58 & 36 & 26 & 15 & 11 & 17\\
75 & 33 & 29 & 12 & 6 & 6\\
\end{block}
\end{blockarray}\ \ \quad
A'_{5\times 6}=
\begin{blockarray}{cccccc}
\begin{block}{(cccccc)}
0 & 12 & 20 & 37 &39 & 45\\
0 & 10 & 14 & 29 & 31 & 37\\
0 & -6 & -18 & -19 & -17 & -11\\
0& -22& -32 & -43 & -47 & -41 \\
0& -42&-46&-63&-69&-69\\
\end{block}
\end{blockarray}
\]
The index of the maximal entries in $A'$ in columns $2,\ldots,6$ corresponds
to row 2 in $A$ (row 1 in the $5\times 6$ submatrix), so $\gamma(2)=2$.
Proceeding in the same way we eventually obtain $\gamma=\delta=id_6$ where
$id_6$ denotes the identity permutation on $\{1,\ldots,6\}$ which
means that the initial numbering of the cities
as shown in Figure \ref{fig:pyrM1} yields a distance matrix that already
satisfies the conditions (\ref{eq:a1})-(\ref{eq:a2}), or equivalently,
conditions (\ref{eq:delta1}).
The optimal BTSP tour can be found by finding a shortest pyramidal tour,
which is the tour $\seq{1,2,3,4,5,8,9,12,11,10,7,6}$.
\section{Conclusion}\label{sec:conclusion}
In this paper we provided a new polynomially solvable case of the BTSP.
In previously published papers, an optimal solution can either be implicitly
specified based only on the knowledge that the distance matrix has a special
structure (\cite{DW2014, Halton,Mis}), or can be found in an exponential
neighbourhood in $O(n^6)$ time (\cite{Garcia}). In the new case discussed in
this paper an optimal solution belongs to the set of pyramidal tours,
and hence can be found in $O(n^2)$ time. If the rows and columns in
the distance matrix are permuted, the special structure of
the underlying distance matrix can be recognised in $O(n^4)$ time.
\paragraph{Acknowledgements.} This research has been supported by the
Austrian Science Fund (FWF): W1230. The authors thank Natalia Chakhlevitch and Sigrid Knust for helpful discussions and comments on an earlier version of this paper, in particular for a helpful reference to the crane loading problem.
\section{Introduction}
This work presents an application of \gls{rl} for the complete control of real soccer robots of the \gls{vsss} \cite{vss_rules}, a traditional league in the \gls{larc}. In the \gls{vsss} league, two teams of three small robots play against each other. We propose a simulated environment in which continuous or discrete control policies can be trained, and a Sim-to-Real method to allow using the obtained policies to control a robot in the real world. The results show that the learned policies display a broad repertoire of behaviors which are difficult to specify by hand. This approach, called VSSS-RL, was able to beat the human-designed policy for the striker of the team ranked 3rd place in the 2018 \gls{larc}, in 1-vs-1 matches.
\begin{figure}[ht]
\centering
\subfigure[]{\includegraphics[width=0.24\linewidth,scale=1.0]{figures/robot_model.png}
\label{fig:robot}}
\subfigure[]{\includegraphics[width=0.36\linewidth,scale=1]{figures/vss_example.png}
\label{fig:vss-real}}
\subfigure[]{\includegraphics[width=0.36\linewidth,scale=1]{figures/vss_sdk.png}
\label{fig:vss-sdk}}
\caption{(a) 3D model of a \gls{vsss} robot; (b) Real-world game setup; and (c) Simulation \cite{vss_sdk}.}
\label{fig:vss_sim}
\end{figure}
\section{Motivation}
Deep \gls{rl} is a suitable approach for learning control and complex behaviors by interacting with the environment since it requires only the specification of a reward function that expresses the desired goals. In the literature of robot soccer, \gls{rl} has been applied for learning specific behaviors, such as kicking \cite{riedmiller2007experiences} and scoring penalty goals \cite{hester2010generalized}.
Recently, two \gls{rl} soccer simulation environments have been proposed: MuJoCo Soccer \cite{todorov2012mujoco} and Google Research Football \cite{kurach2019google}. However, they are not suitable for the study of Sim-to-Real, because they either do not consider important physical and dynamical aspects or represent a very complex scenario that is not achievable by current robotics technology. Therefore, the need for such an adequate environment, allowing the study of the combination of \gls{rl} with Sim-to-Real in dynamic, multi-agent, competitive, and cooperative situations, is the main motivation behind this work.
\section{Technical Contribution}
We propose a simulated environment called \gls{vsss}-RL\footnote{Source code will be available soon at: \url{https://github.com/robocin/vss-environment}}, which supports continuous or discrete control policies. It includes a customized version of the VSS SDK simulator \cite{vss_sdk} and builds a set of wrapper modules to be compatible with the OpenAI Gym standards \cite{gym}. It consists of two main independent processes: the experimental and the training process. In the former, an OpenAI Gym environment parser was developed, and wrapper classes were implemented to communicate with the agents. In the latter, the collected experiences are stored in an experience buffer that is used to update the policies, as illustrated in \fref{fig:architecture}.
\begin{figure*}[ht!]
\centering
\subfigure[]{\includegraphics[width=0.65\textwidth,scale=1]{figures/architecture.pdf}
\label{fig:architecture}}
\subfigure[]{\includegraphics[width=0.33\linewidth,scale=1]{figures/Sim2real.pdf}
\label{fig:sim2realtrain}}
\caption{VSSS-RL: (a) Environment Architecture for training high-level control policies. (b) Low-level control training processes to enable Sim-to-Real transfer.}
\label{fig:vssss-env}
\end{figure*}
We also proposed a Sim-to-Real method to transfer the obtained policies to a robot in the real world. It is a Domain Adaptation method \cite{andrychowicz2018learning}, consisting of a Feed-Forward Neural Network which learns to map the desired high-level actions $a_{d}(t) = \{v, \omega\}$ (linear and angular speeds) to low-level control commands for the wheel speeds ($V_R$ and $V_L$) (\fref{fig:sim2realtrain}).
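A minimal PyTorch sketch of such a mapping network is shown below; the layer sizes, loss and optimiser are illustrative assumptions rather than the exact configuration used in the experiments.
\begin{verbatim}
import torch
import torch.nn as nn

# Maps a desired high-level action (v, omega) to wheel speeds (V_L, V_R).
# Sizes, loss and optimiser are illustrative assumptions.
net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(high_level, wheel_speeds):
    # high_level: (batch, 2) desired (v, omega)
    # wheel_speeds: (batch, 2) wheel commands (V_L, V_R) that realised them
    opt.zero_grad()
    loss = loss_fn(net(high_level), wheel_speeds)
    loss.backward()
    opt.step()
    return loss.item()
\end{verbatim}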
\subsection{Experimental Results}
The results, submitted to ICRA2020, show that the two baseline \gls{rl} methods evaluated, \gls{ddpg} \cite{lillicrap2015continuous} and \gls{dqn} \cite{volodymyr2013playing}, were able to learn suitable policies in simulation when applying reward shaping \cite{sutton1998introduction}. The learned policies display rich and complex behaviors\footnote{See the video available at: \url{https://youtu.be/a9dTMtanh-U}} that are extremely difficult to specify by hand, as well as to identify the correct moments when they should be applied. Moreover, the proposed Sim-to-Real method allowed us to achieve similar results in the real world in terms of average steps to score a goal ($547.2 \pm 233.6$ in simulation and $456.8 \pm 147.2$ in the real world).
Finally, the complete approach was evaluated in 1-vs-1 matches against the striker of the RoboCIn VSSS team, 3rd place in the LARC 2018. The final scores of the matches were 19 for VSSS-RL and 13 for RoboCIn in the first game, and 22 for VSSS-RL and 17 for RoboCIn in the second. These wins highlight the capabilities of the proposed approach.
\section{Research Problem}
The \gls{vsss} robots are usually programmed to behave adequately in every situation identified by the programmers, employing path planning, collision avoidance, and PID control methods \cite{kim2004soccer}. However, it is extremely hard to foresee and tackle every possible situation in a dynamic game such as soccer. Therefore, the need for data-oriented approaches such as \gls{rl} is clear.
However, several barriers exist for applying \gls{rl} successfully in the real world \cite{dulac2019challenges}, as the large amount of interaction required by the agents to achieve adequate performance is impractical due to hardware degradation, energy consumption and the time required. Thus, the research problem considered in this work is the application of the Sim-to-Real approach, in which the agents are trained in simulation and the learned policies are transferred to the real robots.
"attr-fineweb-edu": 1.75,
"attr-cc_en_topic": 0,
"domain": "arxiv"
} |
BkiUdynxaJJQnK0WJY-h | \section{Introduction}
Several models
for football (soccer) prediction exist (see, e.g., \cite{Owen2011,Koopman2015, Volf2009, Titman2015} and references therein).
In this work, we (i) propose two novel Bayesian
multinomial-Dirichlet models that consider only the number of matches won, drawn or lost by each team as inputs, and (ii) compare such models with two benchmark models, whose predictions for matches of the Brazilian national championships are published on Internet websites---see \cite{reference13} and \cite{reference14}.
Such models are widely consulted by football fans and consider multiple covariates as inputs.
As a baseline, we also
make comparisons with an extension of the Bradley-Terry model \citep{davidson1970}.
Brazilian football championships are disputed by 20~teams that play against each other twice (home and away), and the team with the most points after all matches are played is declared champion.
Therefore, 380 matches are played per championship, 190 in each half.
The last four teams are relegated to a minor division and the first four play Copa Libertadores (South America champions league).
Our analysis comprised the championships from 2006 to 2014, because it was only in 2006 that this competition format was adopted in the Brazilian national championships.
Our comparisons were made
using 1710 matches of the first division of the Brazilian football championship. Several standard metrics (scoring rules) were used for ranking the models, as well as
other criteria such as the proportion of matches that were ``incorrectly'' predicted by each model and a measure of calibration.
There are several ways to score or classify predictions of categorical events that assume one result out of a discrete set of mutually exclusive possible outcomes, like football matches.
See \cite{constantinou} for a brief survey of such measures applied to football.
We decided to score the predictions for each match in terms of their distances from the truth, \emph{i.e.}, the verified event, once it has occurred, and chose the most used distances in the literature: Brier \citep{brier1950}, logarithmic and spherical.
This paper is organized as follows.
Section \ref{sec::experimental} describes the studied models,
Section \ref{sec::results} reports the predictive performance of the models and a goodness of fit measure.
In Section \ref{sec::remarks} we discuss the results and close with proposals for future research.
The Appendix briefly describes the scoring rules and the criteria used in this work to classify the models.
\section{The Models: Theoretical Background}
\label{sec::experimental}
In the statistical literature, the final outcome of football matches is usually predicted by modeling either the number of goals \citep{Maher82, Dixon97, Lee97, Karlis2003} or the categorical final outcome itself (win, draw or loss of the home team)~\citep{Forrest2000, Koning2000, Brillinger2008, Brillinger2009}.
For a discussion of these two approaches see \cite{Goddard2005}. In this work, we consider two benchmark models that follow the first approach: ``Arruda'' (\url{www.chancedegol.com.br}) and ``Lee'' (\url{www.previsaoesportiva.com.br}); the Bradley-Terry model and the models proposed by us follow the second approach.
The predictions of the two benchmark models were published before each matchday.
We use this section to describe these models in some detail.
\subsection{Benchmark Models}
\label{sec::Benchmark}
The benchmark models, Arruda \citep{arruda2000} and Lee \citep{Lee97}, assume that the number of goals $(Y_1, Y_2)$ scored respectively by teams $A$ (home team) and $B$ (away
team) has a bivariate Poisson distribution \citep{Holgate64} with parameters $(\uplambda_1, \uplambda_2, \uplambda_3)$,
which has probability mass function given by
\begin{equation*}
P(Y_1 = y_1, Y_2 = y_2 | \uplambda_1, \uplambda_2, \uplambda_3) =
\exp\{-(\uplambda_1 + \uplambda_2 + \uplambda_3)\}
\sum_{k = 0}^{\min(y_1, y_2)} \dfrac{\uplambda_1^{y_1 - k} \uplambda_2^{y_2 - k} \uplambda_3^k}{(y_1-k)!(y_2 - k)!k!}, \label{eq::pois.biv}
\end{equation*}
for $\uplambda_1, \uplambda_2 > 0$ and $\uplambda_3 \geq 0$.
Both marginal distributions of $(Y_1, Y_2)$ have Poisson
distributions with dependence parameter $\uplambda_3 \geq 0$. If
$\uplambda_3 = 0$ the marginal distributions are independent, while if
$\uplambda_3 > 0$ the marginal distributions are positively
correlated.
While the Lee model assumes that $\lambda_3=0$, the Arruda model does not.
Because of its flexibility, \cite{Karlis2003} argue
that this distribution is a plausible choice for modeling dependence
of scores in sports competitions.
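The probability mass function above translates directly into code; a minimal Python sketch:
\begin{verbatim}
from math import exp, factorial

def bivariate_poisson_pmf(y1, y2, lam1, lam2, lam3):
    # P(Y1 = y1, Y2 = y2 | lam1, lam2, lam3); lam3 = 0 gives independence
    s = sum(lam1 ** (y1 - k) * lam2 ** (y2 - k) * lam3 ** k /
            (factorial(y1 - k) * factorial(y2 - k) * factorial(k))
            for k in range(min(y1, y2) + 1))
    return exp(-(lam1 + lam2 + lam3)) * s
\end{verbatim}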
Similar to \cite{Karlis2003} and \cite{Lee97}, both benchmark models
adopt the following log-linear link functions
\begin{align*}
\log(\uplambda_1) &= \upmu + \text{ATT}_A - \text{DEF}_B + \upgamma, \\
\log(\uplambda_2) &= \upmu + \text{ATT}_B - \text{DEF}_A,
\end{align*}
where $\upmu$ is a parameter representing the average number of goals
in a match, $\text{ATT}_k$ is the offensive strength of team $k$,
$\text{DEF}_k$ is the defensive strength of team $k$ and $\upgamma$ is
the home advantage parameter, $k = A, B$. For both the Arruda and Lee models, it is usual to assume the following identifiability constraint
\begin{equation*}
\sum_{t = 1}^T \text{ATT}_t = 0, \quad \sum_{t = 1}^T \text{DEF}_t = 0,\\
\end{equation*}
where $T$ is the number of teams of the analyzed championship.
The predictions of an upcoming matchday are obtained by fitting the model to all relevant previous observed data and then summing up the probabilities of all scores relevant to the win, draw and loss outcomes.
We should remark, however, that the Arruda model uses results of the previous twelve months to predict future matches, but we have no information about how this is done.
On the other hand, the Lee model uses only information of the current championship.
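Concretely, given fitted values of $(\uplambda_1, \uplambda_2, \uplambda_3)$ and the pmf coded in the previous sketch, the outcome probabilities are obtained by summing over scorelines; truncating at a sufficiently large number of goals gives an arbitrarily good approximation:
\begin{verbatim}
def outcome_probabilities(lam1, lam2, lam3, max_goals=10):
    # Approximate P(home win), P(draw), P(home loss) by summing the
    # bivariate Poisson pmf over all scorelines up to max_goals per team.
    win = draw = loss = 0.0
    for y1 in range(max_goals + 1):
        for y2 in range(max_goals + 1):
            p = bivariate_poisson_pmf(y1, y2, lam1, lam2, lam3)
            if y1 > y2:
                win += p
            elif y1 == y2:
                draw += p
            else:
                loss += p
    return win, draw, loss
\end{verbatim}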
\subsection{Bradley-Terry model}
\label{sec::BTmodel}
The Bradley-Terry paired comparison model \citep{BradleyTerry1952} was primarily developed for modeling the subjective preference of a set of objects when compared in pairs by one or more judges.
Applications of this model include studies of preference and choice behaviour, but also the ranking of competitors and the prediction of outcomes in sports, such as chess, tennis and soccer. We consider an extension of the Bradley-Terry model, the Davidson tie model with multiplicative order effects \citep{davidson1977extending}, that allows us to account for both ties and home-field advantage:
\begin{align}
p^W_{ij} &= P(\mbox{Home team } i \mbox{ beats visiting team } j) = \frac{\gamma \pi_i}{\gamma \pi_i + \pi_j + \nu \sqrt{\pi_i\pi_j}} \nonumber\\
p^D_{ij} &= P(\mbox{Home team } i \mbox{ ties with visiting team } j) = \frac{\nu \sqrt{\pi_i\pi_j}}{\gamma \pi_i + \pi_j + \nu \sqrt{\pi_i\pi_j}} \nonumber\\
p^L_{ij} &= P(\mbox{Home team } i \mbox{ loses to visiting team } j) = 1 - p^W_{ij} - p^D_{ij},
\label{eq:BTmodel}
\end{align}
where $\gamma > 0$ is the home advantage parameter, $\nu > 0$ is the parameter that accomodates for draws and $\pi_i$ is the worth parameter, the relative ability of team $i$. To ensure identifiability, it is commonly assumed that $\pi_i \geq 0$ and $\sum \pi_i = 1$.
Maximum likelihood estimation is performed by numerically maximizing the reparameterized log-likelihood function corresponding to an unrestricted lower-dimensional parameter space. For every upcoming second-half matchday, MLEs are recalculated using the outcomes of all the previous matches (including first and second-half matches) and then plugged into (\ref{eq:BTmodel}) in order to obtain predictions for the new matchday. For a study on the conditions for the existence and uniqueness of the MLE and the penalized MLE for different extensions of the Bradley-Terry model, see \cite{Yan2016}.
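An illustrative Python sketch of this fitting procedure is given below; the particular reparameterization ($\pi$ through a softmax with one coordinate fixed, $\gamma$ and $\nu$ on the log scale) and the choice of a generic quasi-Newton optimizer are our own simplifying assumptions.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def fit_davidson_bt(matches, n_teams):
    # matches: list of (home, away, result), result in {'W','D','L'}
    # from the home team's point of view.  Returns (pi, gamma, nu).
    def unpack(theta):
        beta = np.concatenate(([0.0], theta[:n_teams - 1]))
        pi = np.exp(beta - beta.max())
        pi /= pi.sum()
        return pi, np.exp(theta[-2]), np.exp(theta[-1])

    def neg_loglik(theta):
        pi, gamma, nu = unpack(theta)
        ll = 0.0
        for i, j, res in matches:
            num_w = gamma * pi[i]
            num_d = nu * np.sqrt(pi[i] * pi[j])
            denom = num_w + pi[j] + num_d
            p = {'W': num_w, 'D': num_d, 'L': pi[j]}[res] / denom
            ll += np.log(p)
        return -ll

    theta0 = np.zeros(n_teams + 1)   # uniform pi, gamma = nu = 1
    res = minimize(neg_loglik, theta0, method="BFGS")
    return unpack(res.x)
\end{verbatim}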
\subsection{Multinomial-Dirichlet}
\label{sec::Mn_Dir}
Now we explain the Bayesian approach developed in this work to calculate the prediction probabilities of an upcoming match of a given team $A$ based on its past performance, \emph{i.e.}, the number of matches it has won, drawn and lost.
Let us consider the outcome of a given match of team $A$ as a categorical random quantity $X$ that may assume only the values $1$ (if team $A$ wins), $2$ (if a draw occurs), $3$ (if team $A$ loses).
Denoting by $\uptheta_1, \uptheta_2$ and $\uptheta_3$ (where $\uptheta_3 = 1-\uptheta_1 - \uptheta_2$), the probabilities of win, draw and loss, respectively, the probability mass function of $X$ is
\[
P(X=x | \boldsymbol{\uptheta}) = \uptheta_1^{\mathbb{I}_{\{1\}}(x)}
\uptheta_2^{\mathbb{I}_{\{2\}}(x)}(1 - \uptheta_1 -
\uptheta_2)^{{\mathbb{I}_{\{3\}}}(x)}, \qquad x \in \mathcal{X},
\]
\noindent
where $\mathcal{X}=\{1,2,3\}$ is the support of $X$,
$\mathbb{I}_{\{i\}}(x)$ is the indicator function that assumes the
value 1 if $x$ equals $i$ and 0 otherwise, and $\boldsymbol{\uptheta}
= (\uptheta_1, \uptheta_2)$ belongs to $\boldsymbol{\Theta} =
\{(\uptheta_1,\uptheta_2)\in [0,1]^2: \uptheta_1+\uptheta_2 \leq 1 \}$, the~2-simplex.
Assuming that the outcomes from $n$ matches of team $A$, given $\boldsymbol{\uptheta}$, are i.i.d. quantities with the above categorical distribution, and denoting by $M_1$, $M_2$ and $M_3$ the number of matches won, drawn or lost by team $A$, the random vector $(M_1, M_2, M_3)$ has Multinomial (indeed, trinomial) distribution with parameters $n$ and $\boldsymbol{\uptheta}$ given by
\[
P(M_1=n_1,M_2=n_2,M_3=n_3| n, \boldsymbol{\uptheta})=
{n \choose n_1, n_2, n_3}\uptheta_1^{n_1}\uptheta_2^{n_2}(1-\uptheta_1-\uptheta_2)^{n_3},
\]
\noindent
where $n_1 + n_2 + n_3 = n$.
Our goal is to compute the predictive posterior distribution of the
upcoming match, $X_{n+1}$, that is,
$P(X_{n+1}=x|M_1=n_1,M_2=n_2,M_3=n_3)$, $x\in\mathcal{X}$.
Suppose that $\boldsymbol{\uptheta}$ has Dirichlet prior distribution
with parameter $(\upalpha_1,\upalpha_2,\upalpha_3)$, denoted
$\mathcal{D}(\upalpha_1,\upalpha_2,\upalpha_3)$, with density function
\[
\pi(\boldsymbol{\uptheta}|\boldsymbol{\upalpha})=\frac{\Gamma(\upalpha_1+\upalpha_2+\upalpha_3)}{\Gamma(\upalpha_1)\Gamma(\upalpha_2)\Gamma(\upalpha_3)}\uptheta_1^{\upalpha_1-1}\uptheta_2^{\upalpha_2-1}(1-\uptheta_1-\uptheta_2)^{\upalpha_3-1}
\]
\noindent for $\upalpha_1$, $\upalpha_2$, $\upalpha_3 > 0$, then the
posterior distribution of $\boldsymbol{\uptheta}$ is
$\mathcal{D}(n_1+\upalpha_1,n_2+\upalpha_2,n_3+\upalpha_3)$. Thus, the
predictive distribution of $X_{n + 1}$ is given by the
integral
$$
P(X_{n + 1} = x | M_1 = n_1, M_2 = n_2, M_3 = n_3) =
\int_{\boldsymbol{\uptheta}} P(X_{n + 1} = x | \boldsymbol{\uptheta})
\pi(\boldsymbol{\uptheta} | M_1 = n_1, M_2 = n_2, M_3 = n_3)\, d\boldsymbol{\uptheta},
$$
which leads to the following probabilities of win, draw and loss:
\begin{align*}
P(X_{n+1} = 1 | M_1=n_1,M_2=n_2,M_3=n_3) &=
\frac{n_1+\upalpha_1}{n+\upalpha_{\bullet}}\\
& \\
P(X_{n+1} = 2 | M_1=n_1,M_2=n_2,M_3=n_3) &=
\frac{n_2+\upalpha_2}{n+\upalpha_{\bullet}} \\
& \\
P(X_{n+1} = 3 | M_1=n_1, M_2=n_2, M_3=n_3) &=
\frac{n_3+\upalpha_3}{n+\upalpha_{\bullet}}
\end{align*}
\noindent where $\upalpha_{\bullet} =\upalpha_1+\upalpha_2+\upalpha_3$.
In fact, the multinomial-Dirichlet is a classical model used in several applied works and more information about it can be found in \cite{good1965,bernardo1994} and references therein.
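A minimal sketch of the posterior predictive computation derived above, given the observed win/draw/loss counts and the Dirichlet hyperparameters, is as follows (the record used in the example call is hypothetical).
\begin{verbatim}
# Minimal sketch of the Dirichlet-multinomial predictive probabilities.
import numpy as np

def predictive(counts, alphas):
    counts = np.asarray(counts, float)
    alphas = np.asarray(alphas, float)
    return (counts + alphas) / (counts.sum() + alphas.sum())

# hypothetical record: 5 wins, 3 draws, 2 losses under a D(1, 1, 1) prior
print(predictive([5, 3, 2], [1, 1, 1]))  # approx. [0.4615 0.3077 0.2308]
\end{verbatim}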
In the next subsections, \ref{sec::Mn_Dir1} and \ref{sec::Mn_Dir2}, we propose two multinomial-Dirichlet models (Mn-Dir1 and Mn-Dir2) to predict the second-half matches of the championships given all the previously observed results of the same championship. The first-half results are used to build the prior distribution and the second-half results are used to assess the predictive power of the models. The homepage that publishes the Arruda model also provides predictions for the first-half matches (using results of the previous twelve months), but we have no specific information about how this is done. Therefore, at the beginning of the championships, we may say that the multinomial-Dirichlet models and the Lee model are handicapped when compared to the Arruda model. Trying to compensate for this handicap, we compared the models using just the second-half predictions.
Before we explain and illustrate the multinomial-Dirichlet models with an example, we make two further remarks. The first one is that we will separately consider home and away games for each team, allowing us to take into account the different performances under these conditions. The second remark is that, using the multinomial-Dirichlet approach, it is possible to predict the result of an upcoming match between teams $A$ (home team) and $B$ (away team) using the past performance of either team. An analogy can be made to a situation where there exist two observers: one only informed about the matches $A$ played at home and the other only informed about the matches $B$ played away, each one providing a distinct predictive distribution. Then, we propose to combine these predictive distributions by applying the so-called \textit{linear opinion pooling} method, first proposed by \cite{Stone61}, which consists of taking a weighted average of the predictive distributions. This method is advocated by \cite{McConway81} and \cite{Lehrer83} as the unique rational choice for combining different probability distributions. For a survey on different methods for combining probability distributions we refer to \cite{Genest86}.
\subsection{Model One: Multinomial-Dirichlet $1$}
\label{sec::Mn_Dir1}
The model $Mn-Dir_1$ is defined as an equally weighted mixture of two Dirichlet distributions: the posterior distributions of teams $A$ and $B$.
Since teams $A$ and $B$ are the home and away teams, respectively, the two posterior distributions to be mixed are: (i) one considering only the matches $A$ played at home; (ii) another considering only the matches $B$ played away.
The relevant past performance of teams $A$ and $B$ will be summarized, respectively, by the count vectors ${\bf h} = (h_1, h_2, h_3)$ (team $A$ at home) and ${\bf a} = (a_1, a_2, a_3)$ (team $B$ away), representing the numbers of matches won, drawn and~lost,~respectively.
Predictions are carried out by using a Bayesian information updating mechanism. First, we use full-time results from the first-half matches as historical data for the construction of the Dirichlet prior: we assign a uniform prior on $\boldsymbol{\uptheta}$ over the 2-simplex, \emph{i.e.}, $\mathcal{D}(1, 1, 1)$, and combine this prior with the data on the first half of the championship, obtaining a posterior Dirichlet distribution through conjugacy that represents the information about $\boldsymbol{\uptheta}$ up to the first half. Then, the posterior of the first half becomes the prior for the second half, which, for every matchday in the second half, will be combined with all the observed second-half matches up to that matchday in order to yield posterior predictive distributions. For more on the uniform prior on the simplex, see \cite{good1965} and \cite{agresti2010}.
For instance, consider the match Gr\^emio versus Atl\'etico-PR played on matchday 20 of the 2014 championship, at Gr\^emio~stadium. Table \ref{tab:counts} displays the performances of both teams, home and away, after 19 matches. The~relevant count vectors to be used are ${\bf h}=(h_1,h_2,h_3)=(6,2,1)$ and ${\bf a}=(a_1,a_2,a_3)=(2,3,4)$. Therefore, Gr\^emio has a $\mathcal{D}(7,3,2)$ posterior for matches played at home and Atl\'etico has a $\mathcal{D}(3,4,5)$ posterior for matches played as visitor (recall that both priors were $\mathcal{D}(1,1,1)$).
\begin{table}[H]
\small
\begin{center}
\begin{tabular}{lccccccccc}
\toprule
& \multicolumn{3}{c}{\textbf{Home}} & \multicolumn{3}{c}{\textbf{Away}}& \multicolumn{3}{c}{\textbf{Overall}} \\
\midrule
Team & W & D & L & W & D & L & W & D & L\\
Gr\^emio & 6 & 2 & 1 & 3 & 2 & 5 & 9 & 4 & 6\\
Atl\'etico-PR & 4 & 4 & 2 & 2 & 3 & 4 & 6 & 7 & 6\\
\bottomrule
\end{tabular}
\caption{Gr\^emio and Atl\'etico-PR counts after 19 matchdays (first half).}\label{tab:counts}
\end{center}
\end{table}
Thus, considering $X_{n + 1}$ the random outcome of this match with
respect to the home team (Gr\^{e}mio), the predictive probabilities
of $X_{n + 1}$ are obtained by equally weighting the two predictive
distributions, resulting in
\begin{align*}
P(X_{n+1}=1|{\bf h}, {\bf a}) &=
\frac{1}{2}\left(\frac{h_1+\upalpha_1}{h_{\bullet}+\upalpha_{\bullet}}\right)+\frac{1}{2}\left(\frac{a_3+\upalpha_3}{a_{\bullet}+\upalpha_{\bullet}}\right)=0.5
\\
P(X_{n+1}=2|{\bf h}, {\bf a}) &=
\frac{1}{2}\left(\frac{h_2+\upalpha_2}{h_{\bullet}+\upalpha_{\bullet}}\right)+\frac{1}{2}\left(\frac{a_2+\upalpha_2}{a_{\bullet}+\upalpha_{\bullet}}\right)\simeq0.2917, \\
P(X_{n+1}=3|{\bf h}, {\bf a}) &= \frac{1}{2}
\left(\frac{h_3+\upalpha_3}{h_{\bullet}+\upalpha_{\bullet}}\right)+\frac{1}{2}\left(\frac{a_1+\upalpha_1}{a_{\bullet}+\upalpha_{\bullet}}\right)\simeq0.2083.
\end{align*}
\noindent where $h_{\bullet}=h_1+h_2+h_3$ and $a_{\bullet}=a_1+a_2+a_3$.
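The numbers above can be reproduced with the short sketch below, which pools the two posterior predictive distributions with equal weights, using the count vectors of Table \ref{tab:counts} and uniform $\mathcal{D}(1,1,1)$ priors.
\begin{verbatim}
# Minimal sketch of the Mn-Dir_1 prediction for Gremio vs Atletico-PR.
import numpy as np

def mn_dir1(h, a, alpha=(1.0, 1.0, 1.0), w=0.5):
    h, a, alpha = (np.asarray(v, float) for v in (h, a, alpha))
    home = (h + alpha) / (h.sum() + alpha.sum())  # home team predictive
    away = (a + alpha) / (a.sum() + alpha.sum())  # away team predictive
    # the away team's loss (win) probability corresponds to the home
    # team's win (loss) probability
    return np.array([w * home[0] + (1 - w) * away[2],
                     w * home[1] + (1 - w) * away[1],
                     w * home[2] + (1 - w) * away[0]])

print(mn_dir1(h=(6, 2, 1), a=(2, 3, 4)))  # approx. [0.5, 0.2917, 0.2083]
\end{verbatim}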
\subsection{Model Two: Multinomial-Dirichlet $2$}
\label{sec::Mn_Dir2}
The model $Mn-Dir_2$ is similar to model $Mn-Dir_1$, except that the weights can be different and the chosen prior for $\boldsymbol{\uptheta}$ is now a symmetric Dirichlet $\mathcal{D}(\alpha,\alpha,\alpha)$, $\alpha > 0$. Thus, the predictive probabilities of $X_{n+1}$ are given by
\begin{align*}
P(X_{n+1}=1|{\bf h}, {\bf a}) &=
w \left(\frac{h_1+\upalpha}{h_{\bullet}+\upalpha_{\bullet}}\right)+(1 - w) \left(\frac{a_3+\upalpha}{a_{\bullet}+\upalpha_{\bullet}}\right) \\
P(X_{n+1}=2|{\bf h}, {\bf a}) &=
w \left(\frac{h_2+\upalpha}{h_{\bullet}+\upalpha_{\bullet}}\right)+(1-w) \left(\frac{a_2+\upalpha}{a_{\bullet}+\upalpha_{\bullet}}\right), \\
P(X_{n+1}=3|{\bf h}, {\bf a}) &= w \left(\frac{h_3+\upalpha}{h_{\bullet}+\upalpha_{\bullet}}\right) + (1-w) \left(\frac{a_1+\upalpha}{a_{\bullet}+\upalpha_{\bullet}}\right).
\end{align*}
The values for the weight $w$ and the hyperparameter $\alpha$ are chosen through a cross-validation procedure. Firstly, we considered a grid of 20 equally spaced points in the intervals $[0,1]$ and $(0.001, 20]$ for $w$ and $\alpha$, respectively. Then, for each pair $(w_i,\alpha_i)$, $i=1, \ldots, 400$, the Brier scores of the first-half matches (190 matches) of each championship were computed. The pair of values $(w^*,\alpha^*)$ which provided the smallest score was then chosen to predict the matches of the second half of the same championship. Before this was done, however, the counts of each team were used to update the prior $\mathcal{D}(\alpha^*,\alpha^*,\alpha^*)$ in the same manner as described in Section \ref{sec::Mn_Dir1}.
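A minimal sketch of this grid search is given below; the representation of the first-half data as a list of pre-match count vectors together with the observed outcome is an assumption made for illustration only.
\begin{verbatim}
# Minimal sketch of the cross-validation over (w, alpha) for Mn-Dir_2.
# first_half: list of (h, a, r) with the home/away count vectors known
# before the match and the outcome r in {0: home win, 1: draw, 2: loss}.
import numpy as np

def mn_dir2(h, a, alpha, w):
    h, a = np.asarray(h, float), np.asarray(a, float)
    home = (h + alpha) / (h.sum() + 3 * alpha)
    away = (a + alpha) / (a.sum() + 3 * alpha)
    return np.array([w * home[0] + (1 - w) * away[2],
                     w * home[1] + (1 - w) * away[1],
                     w * home[2] + (1 - w) * away[0]])

def brier(p, r):
    y = np.zeros(3)
    y[r] = 1.0
    return np.sum((p - y) ** 2)

def grid_search(first_half, n_grid=20):
    best = (np.inf, None, None)
    for w in np.linspace(0.0, 1.0, n_grid):
        for alpha in np.linspace(0.001, 20.0, n_grid):
            score = np.mean([brier(mn_dir2(h, a, alpha, w), r)
                             for h, a, r in first_half])
            best = min(best, (score, w, alpha))
    return best[1], best[2]  # (w*, alpha*)
\end{verbatim}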
Table \ref{tab::optimalValues} displays the optimal values of $\alpha$ and $w$ chosen for each championship.
Note that the values are generally not far from those used in the model $Mn-Dir_1$, $\alpha=1$ and $w=1/2$.
\begin{table}[ht]
\centering
\caption{Optimal values of $\alpha$
and $w$ for each year in the model $Mn-Dir_2$.}
\begin{tabular}{rrr}
\hline
Year & $\alpha^*$ & $w^*$ \\
\hline
2006 & 3.16 & 0.53 \\
2007 & 2.63 & 0.63 \\
2008 & 1.05 & 0.42 \\
2009 & 2.63 & 0.42 \\
2010 & 2.11 & 0.58 \\
2011 & 2.11 & 0.53 \\
2012 & 1.58 & 0.53 \\
2013 & 2.63 & 0.79 \\
2014 & 3.16 & 0.63 \\
\hline
\end{tabular}
\label{tab::optimalValues}
\end{table}
One may argue that, in this case, data is being used twice in the same model---in the same spirit of empirical Bayes models---and therefore that the computation of the weights is arbitrary.
Even though these criticisms are well founded, we believe that any choice of weights would involve some arbitrariness.
Ours was based on plain empirical experience, nothing more.
\section{Results}
\label{sec::results}
Brazilian football championships are disputed by 20~teams that play against each other twice (home and away), and the team with the most points after all matches are played is declared champion.
Therefore, 380 matches are played per championship, 190 in each half.
The last four teams are relegated to a minor division and the first four play in the Copa Libertadores (the South American champions league).
Our analysis comprised the championships from 2006 to 2014, because it was only in 2006 that this form of dispute was implemented in the Brazilian national championships.
The predictions for the models Arruda, Lee, Bradley-Terry and the proposed multinomial ones Mn-Dir1 and Mn-Dir2 were assessed according to their accuracy and calibration. The accuracy of the predictions was measured using different scoring rules (Brier, Spherical and Logarithmic) and also the proportion of errors. For an explanation of the scoring rules, the proportion of errors and calibration see the Appendix.
As explained above, the Arruda model uses results of the previous twelve months to predict future matches, but we have no information about how this is done.
This fact puts the Arruda model in a privileged position at the beginning of each championship.
Hence, trying to put all the models on equal footing, we used the first-half matches to estimate the Lee and Bradley-Terry models, and as prior information for the multinomial-Dirichlet models as described in Sections \ref{sec::Mn_Dir1} and \ref{sec::Mn_Dir2}.
Thus, the models were compared using only the predictions for matches of the second half, i.e., we effectively scored the predictions made for 1710 matches (190 matches in each of nine championships).
The Lee and Bradley-Terry models were fitted using the software R and the multinomial-Dirichlet models were fitted using Python. See \cite{r} and \cite{python}.
Figure~\ref{fig::scores} displays the box plots of the scores and proportion of errors of the five models under study (the lower the score, the more accurate the prediction). According to all scoring rules, all methods presented similar performance, and they were more accurate than the trivial prediction $(1/3,1/3,1/3)$, displayed in the plots
as a horizontal line. Using the mean scores and their standard errors displayed in Table \ref{tab::brier}, one can see that none of the 95\% confidence intervals for the mean score contained the score given by the trivial prediction
(0.67 for the Brier score, 1.10 for the logarithmic score, and -0.58 for the spherical score).
Figure \ref{fig::scoresYear} shows how the scores varied year by year on average.
This figure also indicates that all models yielded similar results.
\begin{figure}[H]
\centering
\includegraphics[page=1,scale=0.3]{futebolComparacaoModelosForPaperReview.pdf}
\includegraphics[page=2,scale=0.3]{futebolComparacaoModelosForPaperReview.pdf}\\
\includegraphics[page=3,scale=0.3]{futebolComparacaoModelosForPaperReview.pdf}
\includegraphics[page=4,scale=0.3]{futebolComparacaoModelosForPaperReview.pdf}
\caption{Scores and proportion of errors of the various predictive methods. Horizontal line represents the score of the trivial prediction $(1/3,1/3,1/3)$.}
\label{fig::scores}
\end{figure}
\begin{table}[H]
\begin{center}
\caption{Mean and total scores and their standard errors for the 1710 matches.}
\resizebox{\textwidth}{!}{%
\begin{tabular}{cccccc}
\hline
Score &BT & Arruda & Lee & $Mn-Dir_1$ & $Mn-Dir_2$ \\
\hline
\hline
& \multicolumn{5}{c}{Mean Scores} \\ \cline{2-6}
Brier &0.635 (0.009) & 0.616 (0.007)& 0.629 (0.009)& 0.624 (0.006) & 0.628 (0.005) \\
Spherical$\times 10$ & -6.077 (0.062)& -6.191 (0.055)& -6.112 (0.062)& -6.123 (0.048)& -6.098 (0.041)\\
Logarithmic & 1.062 (0.014) & 1.027 (0.011) & 1.049 (0.013) & 1.040 (0.009) & 1.044 (0.008) \\
& \multicolumn{5}{c}{Total Scores} \\ \cline{2-6}
Brier & 1085.9 (14.9) & 1053.2 (12.6)& 1075.5 (14.9)& 1067.59 (10.4) & 1073.5 (8.8) \\
Spherical$\times 10$ & -1039.2 (10.5)& -1058.7 (9.4)& -1046.9 (10.6)& -1047.0 (8.2)& -1042.7 (7.1) \\
Logarithmic & 1816.4 (23.3) & 1755.7 (18.0) & 1793.6 (21.9) & 1778.8 (15.5) & 1785.2 (13.0) \\
\hline
\end{tabular}}
\label{tab::brier}
\end{center}
\end{table}
\begin{figure}[H]
\centering
\begin{subfigure}{0.88\linewidth}
\includegraphics[page=1,scale=0.45]{futebolComparacaoModelosForPaperYear.pdf}
\caption{}
\end{subfigure}\\
\begin{subfigure}{0.55\linewidth}
\includegraphics[scale=0.35]{futebolComparacaoModelosForPaperYear2.pdf}
\caption{}
\end{subfigure}
\caption{Means and standard errors of each measure of performance by year. Plot (b) shows the same information for the Brier scores, but without standard errors.}
\label{fig::scoresYear}
\end{figure}
In order to formally check if all models have similar predictive power, we tested the hypothesis that all five models have the same average score.
We did this by using a repeated measures ANOVA, a statistical test that takes into account the dependency between the observations (notice that each match is evaluated by each model).
In order to perform multiple comparisons, we adjusted $p$-values so as to control the false discovery rate.
All metrics presented significant differences at the 5\% significance level ($p$-value $<0.01$ in all cases, see Table \ref{tab::anova}), except for the proportion of errors, where no difference was found. Post-hoc analyses
are displayed in Table \ref{tab::postHoc}. Along with
Table \ref{tab::brier}, one concludes that,
for the Brier score, differences were found only
between Arruda and BT (the former had better performance), Mn-Dir$_1$ and BT (the former had better performance), Mn-Dir$_2$ and Arruda (the former had worse performance), and Arruda and Lee (the former had better performance).
For the spherical score, post-hoc analyses showed that differences were found between Mn-Dir$_2$ and Arruda (the former had worse performance) and between BT and Arruda (the former had worse performance). Finally, for the logarithmic score, post-hoc analyses showed that differences were found between Arruda and BT (the former had better performance), Mn-Dir$_1$ and BT (the former had better performance), Mn-Dir$_2$ and BT (the former had better performance), Lee and Arruda (the former had worse performance), and Mn-Dir$_2$ and Arruda (the former had worse performance).
These results therefore indicate that while the multinomial-Dirichlet models presented similar performances, they were better than BT and comparable to Lee.
It is clear that the Arruda model presented the best performance, although the predictions from Mn-Dir$_1$ were not significantly different from it, according to all scoring rules.
Hence, while BT led to worse predictions than its competitors, Arruda was slightly better than some of its competitors, but roughly equivalent to Mn-Dir$_1$.
\begin{table}[H]
\centering
\caption{ANOVA comparing the performance
of all prediction models under the various scores.}
\begin{tabular}{rrrrrr}
\hline
Score & Factor& num. d.f. & den. d.f. & F-value & p-value \\
\hline
\hline
\multirow{2}{*}{Brier} & Intercept & 1 & 8545.00 & 8423.04 & $<$0.01$^*$ \\
& Model & 4 & 6836 & 5.15 & $<$0.01$^*$ \\ \hline
\multirow{2}{*}{Spherical} & Intercept & 1 & 6836 & 14650.89 & $<$0.01$^*$ \\
& Model & 4 & 6836 & 3.96 & $<$0.01$^*$ \\ \hline
\multirow{2}{*}{Logarithmic} & Intercept & 1 & 6836 & 10876.28 & $<$0.01$^*$ \\
& Model & 4 & 6836 &6.76 & $<$0.01$^*$ \\ \hline
Proportion & Intercept & 1 & 6836 & 2139.93 & $<$0.01$^*$ \\
of Errors & Model & 4 & 6836 & 0.31 & 0.86 \\ \hline
\hline
\end{tabular}
\label{tab::anova}
\end{table}
\begin{table}[H]
\centering
\footnotesize
\caption{Post-hoc analyses comparing the performance
of all prediction models under the various scores.}
\begin{tabular}{llllll}
\hline
Score & Comparison & Estimate & Std. Error & z-value & p-value \\ \hline \hline
\multirow{10}{*}{Brier} & Arruda - BT & -0.02 & 0.00 & -4.59 & $<$0.01$^*$ \\
& Lee - BT & -0.01 & 0.00 & -1.47 & 0.24 \\
& Mn-Dir$_1$ - BT & -0.01 & 0.00 & -2.58 & 0.04$^*$ \\
& Mn-Dir$_2$ - BT & -0.01 & 0.00 & -1.75 & 0.16 \\
& Lee - Arruda & 0.01 & 0.00 & 3.12 & 0.01$^*$ \\
& Mn-Dir$_1$ - Arruda & 0.01 & 0.00 & 2.01 & 0.11 \\
& Mn-Dir$_2$ - Arruda & 0.01 & 0.00 & 2.84 & 0.02$^*$ \\
& Mn-Dir$_1$ - Lee & -0.00 & 0.00 & -1.11 & 0.36 \\
& Mn-Dir$_2$ - Lee & -0.00 & 0.00 & -0.28 & 0.79 \\
& Mn-Dir$_2$ - Mn-Dir$_1$ & 0.00 & 0.00 & 0.83 & 0.48 \\ \hline
\multirow{10}{*}{Spherical} &Arruda - BT & -0.01 & 0.00 & -3.89 & $<$0.01$^*$ \\
& Lee - BT & -0.00 & 0.00 & -1.54 & 0.23 \\
& Mn-Dir$_1$ - BT & -0.00 & 0.00 & -1.55 & 0.23 \\
&Mn-Dir$_2$ - BT & -0.00 & 0.00 & -0.69 & 0.57 \\
&Lee - Arruda & 0.01 & 0.00 & 2.35 & 0.06 \\
&Mn-Dir$_1$ - Arruda & 0.01 & 0.00 & 2.34 & 0.06 \\
&Mn-Dir$_2$ - Arruda & 0.01 & 0.00 & 3.20 & 0.01$^*$ \\
&Mn-Dir$_1$ - Lee & -0.00 & 0.00 & -0.02 & 0.99 \\
&Mn-Dir$_2$ - Lee & 0.00 & 0.00 & 0.85 & 0.59 \\
&Mn-Dir$_2$ - Mn-Dir$_1$ & 0.00 & 0.00 & 0.87 & 0.59 \\
\hline
\multirow{10}{*}{Logarithmic} &Arruda - BT & -0.04 & 0.01 & -5.28 & $<$0.01$^*$ \\
& Lee - BT & -0.01 & 0.01 & -1.99 & 0.08 \\
& Mn-Dir$_1$ - BT & -0.02 & 0.01 & -3.28 & 0.01$^*$ \\
& Mn-Dir$_2$ - BT & -0.02 & 0.01 & -2.72 & 0.02$^*$ \\
& Lee - Arruda & 0.02 & 0.01 & 3.29 & 0.01$^*$ \\
& Mn-Dir$_1$ - Arruda & 0.01 & 0.01 & 2.01 & 0.08 \\
& Mn-Dir$_2$ - Arruda & 0.02 & 0.01 & 2.57 & 0.03$^*$ \\
& Mn-Dir$_1$ - Lee & -0.01 & 0.01 & -1.29 & 0.27 \\
& Mn-Dir$_2$ - Lee & -0.00 & 0.01 & -0.73 & 0.54 \\
& Mn-Dir$_2$ - Mn-Dir$_1$ & 0.00 & 0.01 & 0.56 & 0.59 \\
\hline
\end{tabular}
\label{tab::postHoc}
\end{table}
We further illustrate this point in Figure \ref{fig::scores2}, where the plots display the scores of each match for every couple of models considered. The plots show that all methods performed similarly, and that the multinomial-Dirichlet models are the ones that agreed the most.
\begin{figure}[H]
\centering
\begin{subfigure}{0.48\linewidth} \includegraphics[page=13,scale=0.48]{futebolComparacaoModelosForPaperReview.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}{0.48\linewidth} \includegraphics[page=14,scale=0.48]{futebolComparacaoModelosForPaperReview.pdf}
\caption{}
\end{subfigure}
\caption{Pairwise comparisons of the various scores. (a): upper right plots display Logarithmic Scores;
lower left plots display Brier Scores. (b): upper right plots display proportion of agreements between methods (i.e., proportion of times the conclusions are the same; see the Appendix); lower left plots display Spherical Scores. Lines represent the identity $y=x$.}
\label{fig::scores2}
\end{figure}
We also evaluated how reasonable were the predictions by assessing the calibration of the methods considered, i.e., by evaluating how often events which have assigned probability $p$ (for each $0<p<1$) happened (see the Appendix).
If these observed proportions are close to $p$, one concludes that the methods are well-calibrated. The results are displayed in Figure \ref{fig::calibration}. Because the Arruda and multinomial-Dirichlet models have curves that are close to the identity ($45^{\text{o}}$ line), we conclude that these methods are well-calibrated. On the other hand, BT and Lee seem to be poorly calibrated, over-estimating probabilities for the most frequent events.
\begin{figure}[H] \centering
\includegraphics[page=5,scale=0.3]{futebolComparacaoModelosForPaperReview.pdf}
\includegraphics[page=6,scale=0.3]{futebolComparacaoModelosForPaperReview.pdf}
\includegraphics[page=7,scale=0.3]{futebolComparacaoModelosForPaperReview.pdf}\\
\includegraphics[page=8,scale=0.3]{futebolComparacaoModelosForPaperReview.pdf}
\includegraphics[page=9,scale=0.3]{futebolComparacaoModelosForPaperReview.pdf}
\caption{Calibration of the various predictive methods: estimates of
occurrence frequency obtained by smoothing splines, with 95\%
confidence bands. Black line is the identity $y=x$. }
\label{fig::calibration}
\end{figure}
\subsection{Goodness of fit and information measures}
We also evaluated the goodness of fit of each model by computing, for each team $t$, the following statistics:
$$e^H_t = \sum_{i \in H_t} \widehat{p}_{t,i}\ \ \ \mbox{ and } \ \ \ e^A_t = \sum_{i \in A_t} \widehat{p}_{t,i},$$
where $\widehat{p}_{t,i}$ is the estimated probability team $t$ wins the $i$-th match,
$H_t$ is the set of matches team $t$ played as home team, and $A_t$ the set of matches team $t$ played away.
We then computed a $\chi^2$ statistic
$$\chi^2_o = \sum_{t} \frac{(e^H_t-o^H_t)^2}{e^H_t}+\frac{(e^A_t-o^A_t)^2}{e^A_t},$$
where $o^H_t$ is the number of times team $t$ won playing home and $o^A_t$ is the number of times team $t$ won playing away.
We then compared $\chi^2_o$ to a $\chi^2$
distribution with 40 degrees of freedom (twice the number of teams of each championship).
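A minimal sketch of this computation is given below; the per-team, per-venue storage of the estimated win probabilities and of the observed win indicators is an assumption made only for illustration.
\begin{verbatim}
# Minimal sketch of the chi-square goodness-of-fit statistic.
# p_hat[t][side]: estimated win probabilities of team t at that venue;
# won[t][side]:   observed 0/1 win indicators; side in {"H", "A"}.
import numpy as np
from scipy.stats import chi2

def chi_square_gof(p_hat, won, n_teams):
    stat = 0.0
    for t in range(n_teams):
        for side in ("H", "A"):
            e = np.sum(p_hat[t][side])   # expected number of wins
            o = np.sum(won[t][side])     # observed number of wins
            stat += (e - o) ** 2 / e
    df = 2 * n_teams                     # 40 for a 20-team championship
    return stat, chi2.sf(stat, df)       # statistic and p-value
\end{verbatim}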
Since we did not fit the Arruda model, this was the only goodness of fit measure we could compute.
The values of the statistics and their corresponding $p$-values are displayed in Table \ref{tab::goodness}.
Except for the BT model, all methods presented
reasonable goodness of fit; in particular, the multinomial-Dirichlet model 1 presented the smallest chi-square statistic, thus indicating the best fit.
\begin{table}[H]
\begin{center}
\begin{tabular}{cccccc}
\hline
Score &BT & Arruda & Lee & $Mn-Dir_1$ & $Mn-Dir_2$ \\
\hline
\hline
$\chi^2_o$ &112.8 & 76.9& 91.5& 61.5 & 77.2 \\
$p$-value & 0.001 & 0.48 &0.14 & 0.91 & 0.50 \\
\hline
\end{tabular}
\caption{Goodness of fit statistics for the models considered.}
\label{tab::goodness}
\end{center}
\end{table}
To have a deeper understanding about the probabilities given by each method, Figure \ref{fig::probMandante} displays the estimated conditional probability that the home team wins assuming the match will not be a tie.
All models assigned higher probabilities to the home team, showing that they captured the well-known home advantage effect, typical of football matches and other sports competitions \citep{Pollard86, Clarke95, Nevill99}.
\begin{figure}[H] \centering
\includegraphics[page=11,scale=0.4]{futebolComparacaoModelosForPaperReview.pdf}
\caption{Conditional probability that the home team wins given there
is no draw. Horizontal line indicates a 50\% probability.}
\label{fig::probMandante}
\end{figure}
In order to check how informative the predictions provided by the five models were, we computed the entropy of their predictions.
Recall that the entropy of a prediction $(p_1,p_2,p_3)$ is given by $- \sum_{i=1}^3 p_i \log{p_i}$.
Figure \ref{fig::entropy} presents the box plots of the entropy of each prediction for all the studied models.
Since the entropy values are smaller than 1.09, the plots show that the predictions were typically more informative than the trivial prediction.
Nevertheless, all methods yielded similar entropies on average, that is, none of them provided markedly more informative probabilities than the others.
\begin{figure}[H]
\centering
\includegraphics[page=10,scale=0.3]{futebolComparacaoModelosForPaperReview.pdf}
\caption{Entropy of the predictions of the various methods. Horizontal line represents the entropy of the trivial prediction $(1/3,1/3,1/3)$.}
\label{fig::entropy}
\end{figure}
Summarizing our conclusions we may say that, for the matches considered in our analysis, all the studied models yielded better predictions than the trivial prediction.
In particular, the multinomial-Dirichlet models were well-calibrated, while the Lee model was not.
Model Mn-Dir$_1$ presented the best goodness of fit statistic, while models Mn-Dir$_2$ and Arruda showed similar goodness-of-fit.
About the scoring rules, while the Bradley-Terry model yielded worse predictions than its competitors according to all metrics, the Arruda model was the best one according to the three scoring rules considered in this work, but not in every championship.
The scores of the predictions provided by the multinomial-Dirichlet models were, on average, similar to the scores of the Arruda model.
Therefore, we conclude that the
multinomial-Dirichlet models are competitive with standard approaches.
\section{Final Remarks}
\label{sec::remarks}
The benchmark models used in this work were chosen because of their wide popularity among football fans in Brazil, despite the availability of several other models in the literature. Among them, we can cite those that model the match as a stochastic process evolving in time \citep{Dixon98, Volf2009, Titman2015}, those allowing for the team performance parameters to change along the season~\citep{Rue2000,Crowder2002,Owen2011,Koopman2015} and those modeling dependence between number of goals by means of bivariate count distributions \citep{Dixon97, Karlis2003, McHale2007, McHale2011}.
Contrary to the multinomial models we proposed, some of these approaches are able to answer several questions, for instance, they can estimate teams' performance parameters allowing to rank the teams according to their offensive and defensive qualities, and can also predict the number of goals scored in a particular match.
Another critical difference between the benchmark and the multinomial models is that the latter are Bayesian, while the former are not. Not only the way they use past information is different (because of the frequentist and Bayesian paradigms), but also the pieces of information used in the analysis (for example, the Arruda model uses results of the previous twelve months, including other championships, while the multinomial models use only the previous matches of the current championship). One can argue that this may lead to unfair comparisons, which is true if the interest is in the inferential properties of the models; our interest, however, is in prediction only, the true purpose of all of these models. For prediction comparisons, there is not even a need for a probabilistic model, as we have seen in the comparisons with the trivial prediction.
Nonetheless, when we are interested only in predicting the final outcome of future matches, the multinomial-Dirichlet models can perform as well as their more complex counterparts.
The advantage of the first proposed model is that its predictions are obtained through simple calculations, without requiring numerical optimization procedures.
The importance of such a finding is directly related to the models' usability in practice: professionals who use the aforementioned benchmark models often say that a difficulty they face is that predictions may generate anger in football fans, which then translates into distrust of subsequent predictions.
Because of the complexity of some models, they find it hard to explain to the lay user how the outcomes were predicted.
This is where using simple models pays off: the first multinomial model yields results that are easy to explain because they only involve counts of losses, wins and draws, allowing one to offer simple explanations to football fans and journalists about the proposed predictions.
Our work also poses several questions about probabilistic prediction of sport events.
In particular, based on the fact that the models have similar predictive power on average, one may ask: Is there an irreducible ``randomness'' or ``degree of unpredictability'' implicit in these events?
Is this degree an indicator of how tight or level the championship being studied is?
A suggestion of future research is to answer these questions by considering more championships and models, and by comparing them using other scoring rules.
We would also like to test other weighting methods in models $Mn-Dir_1$ and $Mn-Dir_2$ here proposed, and to evaluate their impact on the predictive power of the resulting predictions.
Another possible extension is to explore different prior distributions for the multinomial-Dirichlet models.
In this work, we have predicted the second half of the season using a prior based on the first half. However, one can refine the prior construction in order to enable first-half predictions. For instance, one can construct priors based on pre-season odds---e.g. odds for winning the championship or finishing in a given position---or on rankings of the teams---such as Elo rankings---provided by experts before the beginning of each championship, which is equivalent, one may say, to using the results of previous matches from a given time span.
\section*{Appendix: scoring rules and calibration}
\label{sec::scoring}
In this appendix we describe the scoring rules, how we computed the proportion of errors and the calibration measure used in the paper.
First, we provide a definition of proper scoring rules, with simple examples to illustrate some of them; afterwards, we describe the criterion used to verify whether the models are calibrated.
One way to fairly rank predictive models is by using proper scoring rules, where
the score may be interpreted as a numerical measure of how inaccurate a given probabilistic prediction was.
Formally, let $X$ be a random variable
taking values in $\mathcal{X}=\{1,2,3\}$ indicating
the outcome of the match, with 1 standing for home win, 2 for draw and 3 for away win. Moreover, let $P=(P_1,P_2,P_3)$ denote one's probabilistic prediction
about $X$, \emph{i.e.}, $P$ lies in the 2-simplex set $\Delta_2=\{(p_1,p_2,p_3):p_1+p_2+p_3=1, \ p_1,p_2,p_3\geq0\}$ (see Figure
\ref{fig:simplex}).
A scoring rule is a function
that assigns a real number (score) $S(x,P)$ to each $x \in \mathcal{X}$
and $P \in \Delta_2$
such that
for any given $x$ in $\mathcal{X}$, the score $S(x,P)$ is minimized when $P$ is
$(1,0,0)$, $(0,1,0)$ or $(0,0,1)$, depending on whether $x$ is 1, 2 or 3, respectively.
The score $S(x,P)$ can be thought as
a penalty to be paid when one assigns the
probabilistic prediction $P$ and outcome
$x$ occurs. Also, the ``best'' possible score (\emph{i.e.}, the smallest score value) is achieved when the probabilistic prediction for the outcome of the game is perfect. A scoring rule may also be defined to be such that a large value of the score indicates good forecasts.
\begin{figure}[H]
\begin{center}
\begin{tikzpicture}[scale=2,tdplot_main_coords,axis/.style={->},thick]
\draw[axis] (0, 0, 0) -- (1.4, 0, 0) node [right] {$p_1$};
\draw[axis] (0, 0, 0) -- (0, 1.4, 0) node [above] {$p_2$};
\draw[axis] (0, 0, 0) -- (0, 0, 1.4) node [above] {$p_3$};
\coordinate (d1) at (1,0,0){};
\coordinate (d2) at (0,1,0){};
\coordinate (d3) at (0,0,1){};
\fill[gray!80,opacity=0.2] (d1) -- (d2) -- (d3)-- cycle;
\draw[-, gray, thick] (0,0,1) -- (1,0,0);
\draw[-, gray, thick] (0,0,1) -- (0,1,0);
\draw[-, gray ,thick] (1,0,0) -- (0,1,0);
\node[fill,circle,inner sep=1.5pt,label={left:$(1,0,0)$}] at (d1) {};
\node[fill,circle,inner sep=1.5pt,label={south east:$(0,1,0)$}] at (d2) {};
\node[fill,circle,inner sep=1.5pt,label={left:$(0,0,1)$}] at (d3) {};
\draw[-latex,thick](d3) to [out=60,in=180] (1,1,2);
\node[label={right:away team wins}] at (1,1,2) {};
\draw[-latex,thick](d1) to [out=-90,in=180] (1,1,-1);
\node[label={right:home team wins}] at (1,1,-1) {};
\draw[-latex,thick](d2) to [out=-120,in=180] (1,1,-.25);
\node[label={right:draw}] at (1,1,-.25) {};
\node[fill, black, circle,inner sep=1.5pt] at (0.25,0.35,0.40) {};
\draw[-latex,thick](0.25,0.35,0.40) to [out=90,in=180] (1,1,1.2);
\node[label={right:$(0.25,0.35,0.40)$: prediction}] at (1,.9,1.2) {};
\end{tikzpicture}
\caption{Bi-dimensional simplex (gray surface): area of possible forecasts.}\label{fig:simplex}
\end{center}
\end{figure}
Although many functions can satisfy the above scoring rule definition, not all of them encourage honesty and accuracy when assigning a prediction to an event. Those that do enable a fair probabilistic assignment are named \emph{proper scoring rules} \citep{lad}, which we describe in the sequence.
Consider a probabilistic prediction $P^*=(P_1^*,P_2^*,P_3^*)$ for $X$.
A proper scoring rule $S$ is a scoring rule such that the mean score value
$$E_{P^*}[S(X,P)]=\sum_{x=1}^3 S(x,P)P^*_x$$
is minimized when $P=P^*$.
In other words, if one announces
$P$ as his probabilistic prediction
and uses $S$ as the scoring rule, the lowest
expected penalty is obtained by reporting $P^*$, the model's real uncertainty about $X$.
Thus, the use of a proper scoring rule encourages the forecaster to announce $P^*$ (the correct one)
as his probabilistic prediction rather than some other quantity.
In what follows, we describe in detail three proper scoring rules we use to assess the considered models.
We also recall the concept of calibration and propose a way to measure the calibration degree of each model.
\subsection*{Brier Scoring Rule}
Let $P=(P_1,P_2,P_3)$ be a probabilistic prediction for $X$.
The Brier score for a given outcome $x\in\{1,2,3\}$ is given by
$$S(x,P)= \sum_{i=1}^3\mathbb{I}(x=i)(1- P_i)^2+\sum_{i=1}^3\mathbb{I}(x\neq i)P^2_i,$$
where $\mathbb{I}$ is the indicator function.
We interpret the Brier score in the case where one of three mutually exclusive outcomes happens, as in a football match.
The gray surface in Figure \ref{fig:simplex} represents the 2-simplex, \emph{i.e.}, the set of points such that $p_1+p_2+p_3=1$ for non-negative values of $p_1$, $p_2$ and $p_3$.
The triangle representing the simplex has sides of length $\sqrt{2}$ and its height is $\sqrt{6}/2$.
Drawing a similar equilateral triangle with height 1 and sides $2\sqrt{3}/3$, it is possible to represent all points of the simplex.
This new triangle is used to graphically display the forecast as an internal point because the sum of the distances of every point inside it to each side, representing the probability of each event, is always one.
See Figure \ref{fig:norm_stand}.
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.48\linewidth}
\centering
\begin{tikzpicture}[scale=4]
\draw [thick](0,0) -- (1.1547,0) -- (0.57735,1)-- (0,0);
\node[fill, black, circle,inner sep=1.5pt] at (0.63509,0.40) {};
\node[label={below:$p$}] at (0.63509,0.40) {};
\draw[dashed,thick] (0.63509,0.40) -- (0.57735,1);
\draw[decorate,decoration={brace,amplitude=5pt},xshift=-1pt,yshift=0pt] (0.63509,0.40) -- (0.57735,1) node[black,midway,xshift=-0.4cm] {$d$};
\node[label={below:Home wins}] at (0,-0.05) {};
\node[label={below:Draw}] at (1.1547,-0.05) {};
\node[label={above:Away wins}] at (0.57735,1.1) {};
\node[label={below:$(1,0,0)$}] at (0,0.05) {};
\node[label={below:$(0,1,0)$}] at (1.1547,0.05) {};
\node[label={above:$(0,0,1)$}] at (0.57735,0.95) {};
\draw[dashed,thick] (-.3,1) -- (0.57735,1);
\draw[dashed,thick] (-.3,0) -- (0,0);
\draw[decorate,decoration={brace,amplitude=10pt},xshift=-9.5pt,yshift=0pt] (0,0) -- (0,1.01) node[black,midway,xshift=-0.7cm] {\large$\frac{\sqrt{6}}{2}$};
\end{tikzpicture}
\caption{Brier score, $d$, for victory of away team}
\label{fig:A}
\end{subfigure}
\begin{subfigure}[b]{0.48\linewidth}
\centering
\begin{tikzpicture}[scale=4]
\draw [thick](0,0) -- (1.1547,0) -- (0.57735,1)-- (0,0);
\node[fill, black, circle,inner sep=1.5pt] at (0.63509,0.40) {};
\draw[dashed,thick] (0.63509,0.40) -- (0.63509,0);
\draw[dashed,thick] (0.63509,0.40) -- (0.85159,0.525);
\draw[dashed,thick] (0.63509,0.40) -- (0.331976,0.575);
\draw[thick] (0.63509,0.07) -- (0.70509,0.07) -- (0.70509,0);
\node[fill, black, circle,inner sep=.7pt] at (0.67009,0.035) {};
\draw[thick] (0.7909695,0.49) -- (0.7559695,0.5506225) -- (0.8165912,0.5856225);
\node[fill, black, circle,inner sep=.7pt] at (0.8037803,0.5378116) {};
\draw[thick] (0.3925977,0.54) -- (0.3575977,0.4793783) -- (0.2969759,0.5143783);
\node[fill, black, circle,inner sep=.7pt] at (0.3447868,0.5271892) {};
\draw[decorate,decoration={brace,mirror,amplitude=4pt},xshift=-0.7pt,yshift=0pt] (0.63509,0.40) -- (0.63509,0) node[black,midway,xshift=-0.4cm] {$p_3$};
\draw[decorate,decoration={brace,mirror,amplitude=5pt},yshift=-0.5pt,xshift=0.5pt] (0.63509,0.40) -- (0.85159,0.525) node[black,midway,yshift=-0.5cm,xshift=10pt] {$p_1$};
\draw[decorate,decoration={brace,mirror, amplitude=5pt},xshift=0.1pt,yshift=0.7pt] (0.63509,0.40) -- (0.331976,0.575) node[black,midway,xshift=5pt,yshift=10pt] {$p_2$};
\node[label={below:Home wins}] at (0,0) {};
\node[label={below:Draw}] at (1.1547,0) {};
\node[label={above:Away wins}] at (0.57735,1) {};
\draw[dashed,thick] (-.3,1) -- (0.57735,1);
\draw[dashed,thick] (-.3,0) -- (0,0);
\draw[decorate,decoration={brace,amplitude=10pt},xshift=-9.5pt,yshift=0pt] (0,0) -- (0,1.01) node[black,midway,xshift=-0.6cm] {\large$1$};
\end{tikzpicture}
\caption{Normalized simplex: $p_1+p_2+p_3=1$}
\label{fig:B}
\end{subfigure}
\caption{Standard \textbf{(a)} and normalized \textbf{(b)} simplexes.}
\label{fig:norm_stand}
\end{figure}
The Brier score for the probabilistic prediction $P=(0.25,0.35,0.40)$
assuming the home team loses, is therefore given by $d^2=(0-0.25)^2+(0-0.35)^2+(1-0.40)^2=0.545$.
On the other hand, the prediction $P=(0,0,1)$ achieves score zero, the minimum for this rule.
It is useful to consider the score of what we will call trivial prediction:
$P=(1/3,1/3,1/3)$.
This~assessment will produce a Brier score of $2/3$, no matter what the final result of the match is, thus providing a threshold that a good model should consistently beat; for the Brier score, this means that the scores of its predictions should be smaller than $0.667$.
\subsection*{Logarithmic Scoring Rule}
The logarithmic score is given by
$$S(x,P)=- \sum_{i=1}^3\mathbb{I}(x=i)\ln(P_i),$$
\noindent
which is the negative log likelihood of the event that occurred.
The logarithmic score for the prediction
$P=(0.25,0.35,0.40)$
when the home team loses is therefore
$-\ln(0.4)\approx 0.91$.
On the other hand, the prediction $P=(0,0,1)$ achieves score zero, once again the minimum of this rule.
Moreover, for the logarithmic score, the trivial prediction gives a score of approximately $1.098$.
\subsection*{Spherical Scoring Rule}
The spherical score is given by
$$S(x,P)=- \frac{1}{\sqrt{\sum_{i=1}^3 P^2_i}}\sum_{i=1}^3\mathbb{I}(x=i)P_i,$$
\noindent
which is the negative likelihood of the event that occurred, normalized by the square-root of the sum of the assigned squared probabilities.
The spherical score for the prediction
$P=(0.25,0.35,0.40)$ assuming the home team loses, is given by
$-0.4/\sqrt{0.25^2+0.35^2+0.40^2} \approx -0.68$.
On the other hand, the prediction $P=(0,0,1)$ achieves score $-1$ instead and, for this rule, the trivial prediction results in a score of approximately $-0.577$.
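The three scoring rules can be sketched as follows; the code reproduces the illustrative values quoted above for the prediction $P=(0.25,0.35,0.40)$ when the away team wins ($x=3$).
\begin{verbatim}
# Minimal sketch of the Brier, logarithmic and spherical scores.
import numpy as np

def brier(p, x):
    y = np.zeros(3)
    y[x - 1] = 1.0
    return np.sum((np.asarray(p, float) - y) ** 2)

def logarithmic(p, x):
    return -np.log(p[x - 1])

def spherical(p, x):
    p = np.asarray(p, float)
    return -p[x - 1] / np.sqrt(np.sum(p ** 2))

P = (0.25, 0.35, 0.40)
print(brier(P, 3), logarithmic(P, 3), spherical(P, 3))
# 0.545, approx. 0.916, approx. -0.681
\end{verbatim}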
\subsection*{Calibration and Proportion of Errors}
\label{sec::calib}
Besides scoring rules, there are other criteria used to assess the quality of different predictions. Here~we explore two of them.
The first one is the proportion of errors made by the model or assessor. This is simply the proportion of mistakes made when considering the highest probability assessment.
More precisely, the proportion of errors of a sequence of probabilistic predictions for $n$ games, $P^{(1)},\ldots,P^{(n)}$
with
$P^{(j)}=(P^{(j)}_1,P^{(j)}_2,P^{(j)}_3)$, is defined by
$$\frac{1}{n}\sum_{j=1}^n \mathbb{I}\left(X_j \neq \arg \max_{x \in \{1,2,3\}} P^{(j)}_x\right),$$
where $X_j$ is the outcome of the $j$-th match.
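A minimal sketch of this computation is given below.
\begin{verbatim}
# Minimal sketch of the proportion of errors: fraction of matches whose
# observed outcome differs from the most probable predicted outcome.
import numpy as np

def proportion_of_errors(predictions, outcomes):
    """predictions: (n, 3) array; outcomes: length-n array in {1, 2, 3}."""
    modal = np.argmax(np.asarray(predictions, float), axis=1) + 1
    return np.mean(modal != np.asarray(outcomes))
\end{verbatim}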
The second concept we use is that of calibration \citep{Dawid}. Probability assertions are said to be well calibrated at the level of probability $p$ if the observed proportion of all propositions that are assessed with probability $p$ equals $p$.
Because we typically do not have several predictions with the same assigned probability $p$, we obtain a plot by smoothing (\emph{i.e.}, regressing) the indicator function of whether a given result happened as a function of the probability assigned for that result.
That is, we estimate the probability that an event occurs given its assigned probability.
The smoothing was done via smoothing splines \citep{wahba}, with tuning parameters chosen by cross-validation.
\bibliographystyle{apalike}
The field of soccer analytics suffers from poor availability of free and affordable data. While North American sports have already been the subject of data analytics for a long time, soccer analytics has only started to gain traction in recent years. \\
Feature vectors usually serve as an input to machine learning models. They provide a numeric description of an objects characteristics. However, in the case of soccer analytics these features are hard to obtain. For example, collecting non-trivial features for a soccer player or a team involves buying data from a sports analysis company which employs experts to gather data.\\
In this work we propose STEVE - \textbf{S}occer \textbf{TE}am \textbf{VE}ctors, a method to automatically learn feature vectors of soccer teams.
\textit{STEVE} is designed to only use freely available match results from different soccer leagues and competitions. Thus, we alleviate the problem of poor data availability in soccer analytics. Automatically extracted feature vectors are usually referred to as representations in the literature. These representations can conveniently serve as input to various machine learning tasks like classification, clustering and regression.
In the resulting vector space, similar teams are close to each other. We base the notion of similarity between soccer teams on four solid assumptions (Section \ref{sec:main}). The most important one is that two teams are similar if they often win against the same opponents. Hence, \textit{STEVE} can be used to find similar teams by computing the distance between representations and to rank a self-chosen list of teams according to their strengths.\\
\noindent This paper is organized as follows: In Section \ref{sec:related_work} we review related work. It consists of an in-depth discussion of the process of learning real-valued vectors for elements in a set and its applications. We also briefly review a recent approach to team ranking.
In Section \ref{sec:main} we introduce \textit{STEVE}, our approach to learn meaningful representations for soccer teams. After giving an overview of the underlying idea, we introduce the problem with more rigor and conclude the section by formulating an algorithm for the task.
In Section \ref{sec:experiments} we conduct various experiments to evaluate the approach. Finally, Section \ref{sec:conculsion} closes out with conclusions and outlines future work.
\section{Related Work}
\label{sec:related_work}
Learning real-valued vectors for elements in a set has been of particular interest in the field of natural language processing.
Elements are usually words or sentences and their representation is computed in such a way that it captures their meaning.
Modern approaches typically learn a distributed representation for words \cite{Bengio2003} based on the distributional hypothesis \cite{Harris1954}, which states that words with similar meanings often occur in the same contexts.\\
Mikolov et al. \cite{Mikolov2013b,Mikolov2013a} introduced \textit{word2vec}, a neural language model which uses the skip-gram architecture to train word representations. Given a center word, \textit{word2vec} iteratively maximizes the probability of observing the surrounding window of context words. The resulting representations can be used to measure semantic similarity between words. According to \textit{word2vec}, the most similar word to \textit{soccer} is \textit{football}. Moreover, vector arithmetic can be used to compute analogies. Although it has recently been called into question \cite{Kalidindi2019,Allen2019}, a very famous example is the following: \textit{king - man + woman = queen}.
The concept has since then been extended to graph structured data to learn a representation for each node in a graph. Perozzi et al. \cite{Perozzi2014} and Dong et al. \cite{Dong2017} treat random walks as the equivalent of sentences. This is based on the assumption that these walks can be interpreted as sampling from a language graph. The resulting sentences are fed to \textit{word2vec}. Building upon graph based representation learning approaches, \textit{LinNet} \cite{Pelechrinis2018} builds a weighted directed match-up network where nodes represent lineups from NBA basketball teams. An edge from node $i$ to node $j$ is inserted if lineup $i$ outperformed lineup $j$. The edge weight is set to the performance margin of the corresponding match-up. Lineup representations are computed by deploying \textit{node2vec} \cite{Grover2016} on the resulting network. Afterwards, a logistic regression model based on the previously computed lineup representations is learned to model the probability of lineup $\lambda_i$ outperforming lineup
$\lambda_j$.\\
More recently, the aforementioned findings have also been applied to sports analytics. (batter$\vert$pitcher)2vec \cite{Alcorn2016} computes representations of Major League Baseball players through a supervised learning task that predicts the outcome of an at-bat given the context of a specific batter and pitcher.
The resulting representations are used to cluster pitchers who rely on pitches with dramatic movement and predict future at-bat outcomes. Further, by performing simple arithmetic in the learned vector space they identify opposite-handed doppelgangers.\\
Le et. al \cite{Le2017} introduce a data-driven ghosting model based on tracking data of a season from a professional soccer league to generate the defensive motion patterns of a \textit{league average} team. To fine-tune the \textit{league average} model to account for a team's structural and stylistic elements, each team is associated with a team identity vector.\\
Our approach aims to learn representations for soccer teams and is thus closely related to the presented approaches. As we use the representations to rank teams, we briefly review related work on the topic.\\
Neumann et al. \cite{Neumann2018} propose an alternative to classical ELO and Pi rating based team ranking approaches \cite{Hvattum2010,Constantinou2013}.
A graph based on the match results and a generalized version of agony \cite{Gupte2011} is used to uncover hierarchies. The approach is used to categorize teams into a few discrete levels of playing quality.
General match-up modeling is addressed by the \textit{blade-chest} model \cite{Chen2016a}. Each player is represented by two $d$-dimensional vectors, the \textit{blade} and \textit{chest} vectors. Team $a$ wins if its blade vector is closer to team $b$'s chest vector than vice versa. Intransitivity is explicitly modeled by using both blade and chest vectors, something that cannot be accounted for by approaches that associate a single scalar value with each team \cite{Bradley1952}.
\section{Soccer Team Vectors}
\label{sec:main}
In this section we present \textit{STEVE - Soccer Team Vectors}. We first give an overview of the underlying idea and the goal of this work. Afterwards we discuss the problem definition and introduce an algorithm to learn useful latent representations for soccer teams.
\subsection{Overview}
\label{sec:overview}
\textit{STEVE} aims to learn meaningful representations for soccer teams where representations come in the form of low dimensional vectors. If two teams are similar, their representations should be close in vector space while dissimilar teams should exhibit a large distance. Furthermore, these learned latent representations can be used as feature vectors for various machine learning tasks like clustering, classification and regression.
Due to the fact that there is no clear definition of similarity for soccer teams, we base our approach on the following four assumptions:
\begin{enumerate}
\item The similarity between two teams can be determined by accounting for the matches they played in the past.
\item Frequent draws between two teams indicate that they are of approximately equal strength. Hence, both teams are similar.
\item Two teams are similar if they often win against the same opponents.
\item More recent matches have a higher influence on the similarity than those a long time ago.
\end{enumerate}
Since data acquisition is expensive and time-consuming, especially in the field of sports analytics, \textit{STEVE} is designed to learn from minimal information. More precisely, we only use data about which teams played against each other, during which season a match took place and whether the home team won, lost or the match resulted in a draw. Note that the assumptions mentioned above do not require any further information and are therefore well suited for this setting.
\subsection{Problem Definition}
To simplify definitions, let $M=\{1,2,\dots,m\}$; we assume that each of the $m$ soccer teams we want to learn a representation for is associated with an identification number $i\in M$. Further, let $\Phi \in \mathbb{R}^{m \times \delta}$, where each row $\Phi_i$ represents team $i$'s $\delta$-dimensional latent representation. The goal of this work is to find $\Phi$ in such a way that $dist(\Phi_i, \Phi_j)$ is small for similar teams $i$ and $j$ and $dist(\Phi_i, \Phi_{k})$ is large for dissimilar teams $i$ and $k$. $dist(\cdot, \cdot)$ is some distance metric and similarity between teams is determined according to the assumptions made in Section \ref{sec:overview}.
To solve this task, data is given in the following form:
$\mathcal{D}=\{(a, b, s, d) \in M \times M \times \{1,\dots,x_{max}\} \times \{0,1\}\}$.
The quadruple $(a, b, s, d)$ represents a single match between teams $a$ and $b$, $s$ is an integer indicating during which of the $x_{max}$ seasons the match took place and $d$ is a flag set to $1$ if the match resulted in a draw and $0$ otherwise.
If $d=0$, the quadruple is arranged such that team $a$ won against team $b$.
\subsection{Algorithm}
According to the first assumption, we can loop over the dataset $\mathcal{D}$ while adjusting $\Phi$. If $d=1$, we minimize the distance between $\Phi_a$ and $\Phi_b$, thereby accounting for the second assumption.
The third assumption addresses a higher order relationship, where teams that often win against the same teams should be similar.
We introduce a second matrix $\Psi \in \mathbb{R}^{m \times \delta}$ and call each row $\Psi_i$ team $i$'s loser representation. Further, we call $\Phi_i$ the winner representation of team $i$. Both matrices $\Phi$ and $\Psi$ are initialized according to a normal distribution with zero mean and unit variance.
When team $a$ wins against team $b$ we minimize the distance between $\Phi_a$ and $\Psi_b$, bringing $b$'s loser representation and $a$'s winner representation closer together. That is, the loser representations of all teams $a$ often wins against will be in close proximity to team $a$'s winner representation. Consequently, if other teams also often win against these teams, their winner representations must be close in order to minimize the distance to the loser representations. Parameters $\Phi$ and $\Psi$ are estimated using stochastic gradient descent where the objective we aim to minimize is given as follows:
$$
\operatornamewithlimits{arg\ min}_{\Phi, \Psi} \sum_{(a,b,s,d) \in \mathcal{D}} d * dist(\Phi_a, \Phi_b) + (1-d) * dist(\Phi_a, \Psi_b)
$$
We minimize the distance between $\Phi_a$ and $\Phi_b$ directly when both teams draw ($d=1$). Otherwise ($d=0$) we minimize the distance between $\Phi_a$ (winner representation) and $\Psi_b$ (loser representation).
With the squared euclidean distance as the distance metric, the expression can be rewritten as illustrated below.
$$
\operatornamewithlimits{arg\ min}_{\Phi, \Psi} \sum_{(a,b,s,d) \in \mathcal{D}} d * \|\Phi_a - \Phi_b\|^2 + (1-d) * \|\Phi_a - \Psi_b\|^2
$$
In its current form, matches played in long past seasons contribute as much to the loss as more recent matches. We alleviate this problem by down-weighting matches from older seasons using the linear weighting scheme $\frac{s}{x_{max}}$, thereby completing the formulation of the objective:
$$
\operatornamewithlimits{arg\ min}_{\Phi, \Psi} \sum_{(a,b,s,d) \in \mathcal{D}} \frac{s}{x_{max}} \Big[ d * \|\Phi_a - \Phi_b\|^2 + (1-d) * \|\Phi_a - \Psi_b\|^2 \Big]
$$
This approach has the advantage that no complex statistics have to be gathered; all our assumptions are captured in the teams' representations. We describe the algorithm in more detail in Algorithm \ref{algo:steve}. Note that here gradients are computed after observing a single data point and the regularization term is omitted. This is done for illustration purposes only. In our implementation, we train the algorithm in a batch-wise fashion.
In lines 9, 12 and 15 the representations are normalized, as we have found this to speed up training. It also helps to keep distances within a meaningful range.
\begin{algorithm}
\caption{STEVE($\mathcal{D}$, $m$, $\delta$, $\alpha$, $x_{max}$, $e$)}
\label{algo:steve}
\begin{algorithmic}[1]
\State $\Phi \sim \mathcal{N}(0,1)^{m \times \delta}$ \Comment{Initialize $\Phi$}
\State $\Psi \sim \mathcal{N}(0,1)^{m \times \delta}$ \Comment{Initialize $\Psi$}
\For{$i$ in $\{1, \dots, e\}$}
\State $\mathcal{D} = $ shuffle($\mathcal{D}$) \Comment{Shuffle dataset}
\For {each $(a,b,s,d)$ in $\mathcal{D}$}
\State $L(\Phi, \Psi) = \frac{s}{x_{max}} \Big[ d * \|\Phi_a - \Phi_b\|^2 + (1-d) * \|\Phi_a - \Psi_b\|^2 \Big]$ \Comment{Compute loss}
\If {$d = 0$} \Comment{$a$ won the match}
\State $\Psi_b = \Psi_b - \alpha * \frac{\partial L}{\partial \Psi_b}$ \Comment{Gradient descent on $b$'s loser representation}
\State $\Psi_b = \Psi_b / \|\Psi_b\|_2$ \Comment{Normalize $b$'s loser representation}
\Else \Comment{Match is a draw}
\State $\Phi_b = \Phi_b - \alpha * \frac{\partial L}{\partial \Phi_b}$ \Comment{Gradient descent on $b$'s winner representation}
\State $\Phi_b = \Phi_b / \|\Phi_b\|_2$ \Comment{Normalize $b$'s winner representation}
\EndIf
\State $\Phi_a = \Phi_a - \alpha * \frac{\partial L}{\partial \Phi_a}$ \Comment{Gradient descent on $a$'s winner representation}
\State $\Phi_a = \Phi_a / \|\Phi_a\|_2$ \Comment{Normalize $a$'s winner representation}
\EndFor
\EndFor
\State \Return $\Phi$, $\Psi$
\end{algorithmic}
\end{algorithm}
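A minimal NumPy sketch of Algorithm \ref{algo:steve} (per-sample stochastic gradient descent with the squared Euclidean distance) is given below; as noted above, our actual implementation works batch-wise.
\begin{verbatim}
# Minimal sketch of Algorithm 1; data is a list of (a, b, s, d) tuples.
import numpy as np

def steve(data, m, delta, lr, x_max, epochs, seed=0):
    rng = np.random.default_rng(seed)
    Phi = rng.standard_normal((m, delta))      # winner representations
    Psi = rng.standard_normal((m, delta))      # loser representations
    for _ in range(epochs):
        for idx in rng.permutation(len(data)): # shuffle the dataset
            a, b, s, d = data[idx]
            weight = s / x_max                 # down-weight older seasons
            if d == 0:                         # team a won against team b
                grad = 2 * weight * (Phi[a] - Psi[b])
                Psi[b] += lr * grad            # dL/dPsi_b = -grad
                Psi[b] /= np.linalg.norm(Psi[b])
            else:                              # match ended in a draw
                grad = 2 * weight * (Phi[a] - Phi[b])
                Phi[b] += lr * grad            # dL/dPhi_b = -grad
                Phi[b] /= np.linalg.norm(Phi[b])
            Phi[a] -= lr * grad                # dL/dPhi_a = grad
            Phi[a] /= np.linalg.norm(Phi[a])
    return Phi, Psi
\end{verbatim}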
\section{Experiments}
\label{sec:experiments}
In this section, we provide an overview of the dataset. We also conduct various experiments to investigate the expressiveness and efficacy of our approach.
\subsection{Dataset and Experimental Setup}
The dataset consists of all the matches from the Bundesliga (Germany), Premier League (Great Britain), Serie A (Italy), La Liga (Spain), Eredivisie (Netherlands), League 1 (France), Süper Lig (Turkey), Pro League (Belgium), Liga NOS (Portugal), Europa League and the Champions League played from 2010 until 2019.
A total of 29529 matches between 378 different teams were played, of which approximately 25\% ended in a draw.
Unless stated otherwise, for all experiments we set $\delta=16$ and batch size $=128$. We use Adam~\cite{Kingma2014} with a learning rate $\alpha=0.0001$ for parameter estimation and train for $e=40$ epochs. Additionally, we add a small $L_2$ weight penalty of $10^{-6}$.
\subsection{Similarity Search}
We select five European top teams and run \textit{STEVE} on all the matches from season 2010 until 2019 in the corresponding league. Since we are dealing with small datasets, we set $\delta=10$ and batch size $=32$. For each team, we note the five most similar teams (smallest distance) in Table~\ref{tab:similarities}. Note that we use the distance between the corresponding winner representations. As expected, we clearly observe that top teams are similar to other top teams. For example, the team most similar to Barcelona is Real Madrid. Both teams often compete for supremacy in \textit{La Liga}. In general, we observe that the similarities in Table \ref{tab:similarities} roughly reflect the average placement in the respective league.
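A minimal sketch of this similarity search, assuming the matrix of winner representations $\Phi$ returned by Algorithm \ref{algo:steve}, is given below.
\begin{verbatim}
# Minimal sketch: rank all teams by the squared Euclidean distance
# between their winner representations.
import numpy as np

def most_similar(Phi, team_id, top_k=5):
    dists = np.sum((Phi - Phi[team_id]) ** 2, axis=1)
    dists[team_id] = np.inf            # exclude the query team itself
    return np.argsort(dists)[:top_k]   # ids of the top_k closest teams
\end{verbatim}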
\begin{table}[h]
\caption{Five most similar teams for five European top teams.}\label{tab:similarities}
\centering
\setlength{\tabcolsep}{0.5em}
{\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|l|l|l|l|l|}
\hline
\multicolumn{5}{|c|}{\textbf{Top soccer team per league chosen for similarity search}} \\ \hline
Bayern M\"unchen & Barcelona & Paris SG & Manchester U. & Juve. Turin \\ \hline
\multicolumn{5}{|c|}{\textbf{Five most similar teams chosen by \textit{STEVE}}} \\ \hline
RB Leipzig & Real Madrid & Lyon & Liverpool & SSC Napoli \\ \hline
Dortmund & Valencia & Marseille & Manchester C. & AS Roma \\ \hline
M\"onchengladbach & Atletico Madrid & Monaco & Chelsea & AC Milan \\ \hline
Leverkusen & Sevilla & St Etienne & Tottenham & Inter. Milano \\ \hline
Hoffenheim & Villarreal & Lille & Arsenal & SS Lazio \\ \hline
\end{tabular}
}
\end{table}
\subsection{Ranking Soccer Teams}
To retrieve a ranked list of soccer teams, one could simply use a league table. However, such a list only reflects a team's constitution over a single season and does not take past successes into account. One might alleviate this problem by averaging the league table over multiple seasons. Nevertheless, another problem arises: the list will only consist of teams from a single league. Combining league tables from different countries and competitions to obtain a more diverse ranking is considerably less straightforward. It gets even more complicated when we wish to rank a list of self-chosen teams, possibly from many different countries.
\textit{STEVE} provides a simple yet effective way to generate rankings for the use case mentioned above. Given a list of teams, we simulate a tournament where each team plays against all other teams. The list is then sorted according to the number of victories. To compute the outcome of a single match (victory or defeat) between team $a$ and $b$, let $\alpha = \|\Phi_a - \Psi_b\|^2$ and $\beta = \|\Phi_b - \Psi_a\|^2$. If $\alpha < \beta$, then team $b$'s loser representation is closer to team $a$'s winner representation than team $a$'s loser representation is to team $b$'s winner representation. Thus, team $a$ is stronger than team $b$ and we increase team $a$'s victory counter. The same line of reasoning is applied to the case where $\alpha > \beta$.\\
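A minimal sketch of this tournament simulation, reusing the winner and loser representations $\Phi$ and $\Psi$ and with illustrative names, could look as follows:
\begin{verbatim}
import numpy as np

def rank_teams(Phi, Psi, team_ids):
    # Simulate a round-robin tournament and sort teams by victories.
    wins = {t: 0 for t in team_ids}
    for i, a in enumerate(team_ids):
        for b in team_ids[i + 1:]:
            alpha_ab = np.sum((Phi[a] - Psi[b]) ** 2)
            beta_ab = np.sum((Phi[b] - Psi[a]) ** 2)
            if alpha_ab < beta_ab:    # a is considered stronger than b
                wins[a] += 1
            elif beta_ab < alpha_ab:  # b is considered stronger than a
                wins[b] += 1
    ranking = sorted(team_ids, key=lambda t: wins[t], reverse=True)
    return ranking, wins
\end{verbatim}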
In Figure~\ref{fig:rankings} we generated two rankings using the aforementioned approach. Each list consists of twelve teams from different European countries of different strengths. Our approach produces reasonable rankings: Highly successful international top teams like \textit{Real Madrid, FC Bayern Munich, FC Barcelona}, and \textit{AS Roma} are placed at the top of the list while mediocre teams like \textit{Espanyol Barcelona} and \textit{Werder Bremen} are placed further back in the list. The least successful teams like \textit{FC Toulouse, Cardiff City, Fortuna D\"usseldorf} and \textit{Parma Calcio} occupy the tail of the list.
\textit{STEVE} can be seen as an alternative to previous soccer team ranking methods \cite{Hvattum2010,Leitner2010} based on the ELO rating \cite{Elo1978}.
\begin{figure}[ht]
\centering
\caption{Team rankings generated by \textit{STEVE}. Each row\textsuperscript{1,2} depicts one ranked list from the strongest (left) to the weakest team (right). Numbers represent a team's relative strength, i.e., the number of hypothetical matches won.}
\label{fig:rankings}
\includegraphics[width=1.0\textwidth]{rankings.png}
\scriptsize\textsuperscript{1} Real Madrid, FC Bayern Munich, Inter Milano, Liverpool FC, Borussia Dortmund, Ajax Amsterdam, FC Porto, Club Brugge KV, Werder Bremen, 1.FC Nuremberg, FC Toulouse, Cardiff City\\
\scriptsize\textsuperscript{2} FC Barcelona, AS Roma, Atlético Madrid, Paris SG, Tottenham, PSV Eindhoven, Arsenal London, SL Benfica, Espanyol Barcelona, VFB Stuttgart, Fortuna Dusseldorf, Parma Calcio
\end{figure}
\vspace{-3mm}
\subsection{Team Market Value Estimation}
The goal of this work is to learn representations that are well suited for various downstream machine learning tasks. We validate this property by evaluating \textit{STEVE} with respect to regression and classification performance. We argue that a meaningful representation should carry enough information to reliably predict a team's market value.
Therefore, both tasks involve predicting the value of a team given its representation.
We obtained current market values for all teams in the dataset from season 2018/2019.
A team's market value is determined by the sum of the market values of all its players.
On average, a team is worth \euro183.7M with a standard deviation of \euro241.2M. The least valuable team is \textit{BV De Graafschap} (\euro10.15M) and the most valuable team is \textit{FC Barcelona} (\euro1180M). The first, second and third quartiles are \euro25M, \euro93.7M and \euro232.5M, respectively.
For regression and classification, we use the following team representations as input to a multi-layer perceptron (MLP) with two hidden layers of sizes $50$ and $20$, respectively. Apart from changing the hidden layer sizes, we use the default parameters provided by scikit-learn~\cite{scikit-learn} for all further analyses.
\begin{itemize}
\item \textbf{STEVE}
Representations are computed using \textit{STEVE} with $\delta \in \{8, 16, 32\}$.
A team's winner and loser representations are concatenated to form its team vector (see the sketch after this list).
The resulting feature vectors are of size 16, 32, and 64, respectively.
\item \textbf{Season-stats}
We extract count-based features for each team in the dataset to mimic traditional feature extraction.
For season 2018/2019 we collect the following statistics: the number of victories, draws and defeats, as well as goals scored and goals conceded. Each feature is computed for matches in the Champions League, the Europa League and the respective national league. Additionally, we use goals per match, as well as goals per national and per international match.
This results in an $18$-dimensional feature vector (representation) for each team.
\item \textbf{Season-stats (CAT-x)}
Season-stats for the last $x$ seasons are concatenated.
The resulting feature vectors are of size $x*18$.
\item \textbf{Season-stats (SUM-x)}
Season-stats for the last $x$ seasons are summed together.
The resulting feature vectors are of size $18$.
\end{itemize}
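For the \textit{STEVE} variants listed above, assembling the feature matrix amounts to concatenating each team's winner and loser vectors; a short NumPy sketch (with an illustrative list of team indices) is:
\begin{verbatim}
import numpy as np

def steve_features(Phi, Psi, team_ids):
    # One row per team: concatenation of winner and loser
    # representation, i.e. a vector of size 2 * delta.
    return np.stack([np.concatenate([Phi[t], Psi[t]]) for t in team_ids])
\end{verbatim}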
\noindent Comparability between the different representations mentioned above is ensured because none of them requires information absent from the dataset.\\
\textit{Season-Stats} has many features that are intuitively well suited for team value estimation. For example, teams that participate in international competitions tend to be more valuable than those that do not. Statistics about goals and match results are helpful for assessing a team's strength, which is in turn correlated with the team's market value.\\
\noindent \textbf{Regression}
Team value estimation naturally lends itself to being cast as a regression problem. During training, we standardize team values (targets) and \textit{Season-Stats} features by subtracting the mean and dividing by the standard deviation. Evaluation is carried out using 5-fold cross-validation and results are reported in Table~\ref{tab:regression}.\\
\noindent \textbf{Classification}
By grouping team values into bins, we frame the task as a classification problem. Teams are assigned classes according to the quartile their value lies in. Consequently, each team is associated with one of four classes. We apply the same standardization procedure as in the case of regression and use 5-fold cross-validation. Results are reported in Table~\ref{tab:classification}.\\
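A minimal sketch of this evaluation protocol is given below; it assumes a precomputed feature matrix \texttt{X} (e.g.\ the \textit{STEVE} vectors built above), market values \texttt{y\_value} and quartile labels \texttt{y\_class}, and omits the exact target standardization and the full set of error metrics reported in Tables~\ref{tab:regression} and~\ref{tab:classification}.
\begin{verbatim}
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neural_network import MLPClassifier, MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def evaluate(X, y_value, y_class):
    cv = KFold(n_splits=5, shuffle=True, random_state=0)
    reg = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(50, 20)))
    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(50, 20)))
    # Root mean squared error of the regression MLP (5-fold CV).
    rmse = -cross_val_score(reg, X, y_value, cv=cv,
                            scoring="neg_root_mean_squared_error")
    # Micro F1 score of the classification MLP (5-fold CV).
    f1 = cross_val_score(clf, X, y_class, cv=cv, scoring="f1_micro")
    return rmse.mean(), f1.mean()
\end{verbatim}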
\begin{table}[ht]
\caption{Results for the regression task of team value estimation. To quantify the quality of prediction, we use root mean squared error (RMSE), mean absolute error (MAE) and the mean median absolute error (MMAE), all reported in million \euro.}\label{tab:regression}
\centering
\setlength{\tabcolsep}{0.75em}
{\renewcommand{\arraystretch}{1.25}
\begin{tabular}{l|l|l|l|l|}
\cline{2-4}
& RMSE & MAE & MMAE \\ \hline
\multicolumn{1}{|l|}{STEVE-16} & 142.12 $\pm$ 75.22 & 88.37 $\pm$ 25.69 &52.01 $\pm$ 13.42 \\ \hline
\multicolumn{1}{|l|}{STEVE-32} & 131.51 $\pm$ 40.15 & 83.20 $\pm$ 24.51 & 46.87 $\pm$ 21.89\\ \hline
\multicolumn{1}{|l|}{STEVE-64} & \textbf{111.27} $\pm$ 48.58 & \textbf{67.14} $\pm$ 30.51 & \textbf{32.80} $\pm$ 10.42 \\ \hline
\multicolumn{1}{|l|}{Season-Stats} & 173.75 $\pm$ 119.35 & 110.32 $\pm$ 63.61 & 69.96 $\pm$ 15.43 \\ \hline
\multicolumn{1}{|l|}{Season-Stats (CAT-3)} & 200.77 $\pm$ 157.55 & 138.15 $\pm$ 87.06 & 86.98 $\pm$ 39.83 \\ \hline
\multicolumn{1}{|l|}{Season-Stats (CAT-5)} & 172.05 $\pm$ 70.83 & 119.81 $\pm$ 43.18 & 80.74 $\pm$ 18.96 \\ \hline
\multicolumn{1}{|l|}{Season-Stats (CAT-9)} & 151.09 $\pm$ 80.37 & 105.98 $\pm$ 41.96 & 68.82 $\pm$ 23.15 \\ \hline
\multicolumn{1}{|l|}{Season-Stats (SUM-3)} & 158.44 $\pm$ 108.50 & 105.65 $\pm$ 53.95 & 69.16 $\pm$ 11.39 \\ \hline
\multicolumn{1}{|l|}{Season-Stats (SUM-5)} & 154.71 $\pm$ 115.76 & 104.04 $\pm$ 59.34 & 69.81 $\pm$ 15.69 \\ \hline
\multicolumn{1}{|l|}{Season-Stats (SUM-9)} & 158.33 $\pm$ 120.90 & 106.67 $\pm$ 62.61 & 67.74 $\pm$ 17.75 \\ \hline
\end{tabular}
}
\end{table}
\begin{table}[ht!]
\caption{Results for the classification task of team value estimation, measured with micro $F_1$ score and macro $F_1$ score.}\label{tab:classification}
\centering
\setlength{\tabcolsep}{0.75em}
{\renewcommand{\arraystretch}{1.25}
\begin{tabular}{l|l|l|}
\cline{2-3}
& Micro $F_1$ & Macro $F_1$ \\ \hline
\multicolumn{1}{|l|}{STEVE-16} & 0.67 $\pm$ 0.10 & 0.64 $\pm$ 0.10 \\ \hline
\multicolumn{1}{|l|}{STEVE-32} & \textbf{0.74} $\pm$ 0.11 & 0.71 $\pm$ 0.14 \\ \hline
\multicolumn{1}{|l|}{STEVE-64} & \textbf{0.74} $\pm$ 0.10 & \textbf{0.72} $\pm$ 0.09 \\ \hline
\multicolumn{1}{|l|}{Season-Stats} & 0.52 $\pm$ 0.14 & 0.45 $\pm$ 0.19 \\ \hline
\multicolumn{1}{|l|}{Season-Stats (CAT-3)} & 0.50 $\pm$ 0.12 & 0.44 $\pm$ 0.15 \\ \hline
\multicolumn{1}{|l|}{Season-Stats (CAT-5)} & 0.55 $\pm$ 0.14 & 0.51 $\pm$ 0.16 \\ \hline
\multicolumn{1}{|l|}{Season-Stats (CAT-9)} & 0.60 $\pm$ 0.13 & 0.56 $\pm$ 0.11 \\ \hline
\multicolumn{1}{|l|}{Season-Stats (SUM-3)} & 0.49 $\pm$ 0.09 & 0.40 $\pm$ 0.10 \\ \hline
\multicolumn{1}{|l|}{Season-Stats (SUM-5)} & 0.48 $\pm$ 0.08 & 0.39 $\pm$ 0.07 \\ \hline
\multicolumn{1}{|l|}{Season-Stats (SUM-9)} & 0.47 $\pm$ 0.09 & 0.37 $\pm$ 0.15 \\ \hline
\end{tabular}
}
\end{table}
\noindent \textbf{Results}
\textit{STEVE} clearly outperforms the other representations in terms of both regression and classification performance. While \textit{STEVE-64} generally yields the best results, even \textit{STEVE-16} produces superior results compared to \textit{Season-Stats}. In terms of regression performance, we observe that \textit{Season-Stats} is most competitive when using information from multiple seasons (CAT-x and SUM-x). All forms of representation manage to estimate the general tendency of a team's market value, but \textit{STEVE's} predictions are far more precise.
Similar conclusions can be drawn when inspecting classification performance. The best competing representation is \textit{Season-Stats (CAT-9)}, which is $162$-dimensional, $98$ dimensions more than \textit{STEVE-64}.
Still, \textit{STEVE-64} provides $\approx 20\%$ better performance than \textit{Season-Stats (CAT-9)}.
It can therefore be concluded that \textit{STEVE} is able to compress the information needed for the task and succeeds in providing highly effective representations.
\section{Conclusion}
\label{sec:conculsion}
In this work we introduced \textit{STEVE}, a simple yet effective way to compute meaningful representations for soccer teams.
We provided qualitative analysis using soccer team vectors for team ranking and similarity search.
Quantitative analysis was carried out by investigating the usefulness of the approach for estimating the market values of soccer teams. In both cases, \textit{STEVE} succeeds in providing meaningful and effective representations.
Future work might further investigate different weighting schemes for the season during which a match took place.
For example, an exponential decay could be used to down-weight past seasons.
Moreover, including the number of goals scored during a match and accounting for the home advantage might help to capture more subtleties.
\section{Introduction}
\label{sec:introduction}\label{sec:intro}
The evolving travel and gathering restrictions caused by the COVID-19 pandemic created a climate of uncertainty that made for difficult planning and decision making about international conferences scheduled for April 2020. In a tribute to the ingenuity of our field, a new model quickly evolved, pioneered by conferences such as ASPLOS~\cite{blog:cacm:LarusCS20} and EDBT/ICDT~\cite{blog:cacm:Bonifati+20}, and by ACM itself~\cite{report:acm20}. This article reports on a design alternative for organizing professional conferences online, derived from our experience with organizing the 11th ACM/SPEC International Conference on Performance Engineering (ICPE 2020).
Particular to the ICPE timing was the process to decide about holding an in-person conference, postponing to a later date, or converting the conference to an online event. As late as mid-February, we were unaware of the extent of the spread of COVID-19 in Canada, where the conference was due to occur, or in the countries from which our attendees were expected to travel. This was not due to our ignoring the news, but to the deficit of credible information offered by state authorities. On March 9, one of the Program Chairs received the final institutional decision of being stopped from international travel; one day later, US- and EU-based keynote speakers for workshops started to declare their unavailability. Discussions with relevant stakeholders -- local organizers, local businesses and Horeca, international organizations -- started, as did the consultation of relevant official guidelines\footnote{Advice from the Canadian Government: \url{https://www.canada.ca/en/public-health/services/diseases/2019-novel-coronavirus-infection/health-professionals/mass-gatherings-risk-assesment.html}}.
On March 13, we announced {\it "ICPE 2020 will not be held in April 2020 as planned"}. On March 19, we discussed in the Steering Committee of ICPE two options regarding the physical meeting: cancelling and rescheduling for what we predicted as a safer period, late-July\footnote{To arrive at the month of July, we asked an in-house statistician to predict the period expected to exhibit a lowering of COVID-19 presence. The statistician used the SIR model on WHO data of SARS cases. See David Smith and Lang Moore, "The SIR Model for Spread of Disease - The Differential Equation Model," Convergence (December 2004) and the WHO Epidemic curves - Severe Acute Respiratory Syndrome (SARS) Nov 2002 to Jul 2003: \url{https://www.who.int/csr/sars/epicurve/epiindex/en/index1.html}
}.
We discarded through discussion the idea of re-inviting the authors of accepted articles for next year's conference. We decided to opt for a cancellation of the physical conference, which was in line with our moral choice to protect the community and was also supported by our main technical sponsors. We also formed a Task Force for Organizing ICPE Online. On April 2, following extensive documentation, discussions, and try-outs, the Task Force decided to propose organizing the 11th ICPE online. We were strongly motivated by what we saw as our duty to deliver to the community, but also by our belief that we could do so through an online event. We saw it as our duty to "give justice" to the authors of accepted papers and to guarantee that they can get the same level of feedback they would have received if attending the physical meeting, and the online option seemed to give us an opportunity to achieve this duty.
The 11th ICPE was held fully online, on April 20-24, 2020, with two days of workshops and three days of single-track conference sessions. Key to our approach to organizing ICPE online was to be flexible by design, mixing synchronous and asynchronous events, and giving the attendees many options to participate and contribute, while ensuring all of the original sessions of the conference maintained a synchronous element. We also ensured free registration and free publication of proceedings, which were only possible thanks to our generous supporters and sponsors. These two elements, and others, distinguish our design for the conference from previously reported designs~\cite{blog:cacm:LarusCS20,blog:cacm:Bonifati+20}. In return, we observed a significant increase in registered participants, and both the synchronous and the asynchronous channels were very well attended, exceeding the audience of the physical ICPE meetings in the past couple of years.
In hindsight, many of our decisions, starting with organizing ICPE 2020 online, seem obvious. However, this was not the case at the time, and luckily by the end of the process many were satisfied. To quote one of the senior members of the community: {\it 'I was one of the sceptics for an online ICPE (or any other online conference for that matter). But you really proved me wrong :) It was a great event'}. There were only 3 weeks to organize everything, and we greatly appreciated the existing guidelines and reports (\cite{report:acm20,blog:cacm:LarusCS20} and, from April 8, also~\cite{blog:cacm:Bonifati+20}). For these reasons, we present here a summary of our design choices, the experience of the online ICPE 2020, the feedback collected from the community, and the lessons we learned as organizers.
The remainder of this document is structured as follows.
Section~\ref{sec:design} presents our design choices for ICPE 2020.
Section~\ref{sec:execution} summarizes the execution of the online conference.
Section~\ref{sec:feedback} reports on community feedback; further feedback appears in the Appendix.
Last, Section~\ref{sec:conclusion} concludes and summarizes our advice for organizers of future conferences.
\section{Design Choices for ICPE 2020} \label{sec:design}
The reliability of communication over the Internet is an important consideration when organizing a meeting
online~\cite{report:acm20,blog:cacm:LarusCS20,blog:cacm:Bonifati+20}. However, based on our past experience with online communities and especially gaming, and now also with ICPE, we believe Internet reliability is not the key issue for organizing a professional meeting between motivated participants. It is probably a confirmation bias: in hindsight, in most cases the online operation of ICPE was not more problematic than of an on-site (physical) operation\footnote{Some of the failures that occur when organizing on-site include microphone not working, speaker not having a compatible interface for the video projector, speaker trying to use pointers on a mispositioned screen, video projector failing or not displaying colors correctly, room too dusty or drafty, signaling difficult to follow, etc.}.
Our main insight is that {\bf the most important factor for the success of a professional meeting is the human factor}. Both {\it availability} and {\it attention}, which for online conferences can be limited by time differences and a variety of factors, are thus the key problems in need of good solutions. Also important, but less in the hand of the organizers and more a consequence of the community itself, is to have an interesting program and attractive talks. Thus, we set as our key principle to {\bf be as flexible as possible}, incentivizing people to attend and facilitating their interactions.
We present in the following eight design choices.
How they were perceived by the community will become clear in Section~\ref{sec:feedback}.
\subsubsection*{Q1: How to share the conference material, flexibly?}
{\bf A1}: At a minimum, a conference needs to allow its participants to access the written material (the publications) and to jointly see the presentations. We knew the ACM proceedings would be available at the start of the main conference and would be free to access~\cite{proc:icpe20,proc:icpe20ws}, but this precluded them from being seen earlier, and in particular did not allow the workshop attendees to access them---because the workshops were scheduled for the two days before the start of the main conference. We couldn't publish the articles internally, due to copyright issues. Our solution was to ask the authors to {\it publish pre-prints}. Guidelines, email communication, archival offices and arxiv.org, and good will were necessary to achieve this. We also asked authors to create {\it videos of their talks} the moment we decided to cancel the physical meeting, and share these through the ICPE YouTube channel~\cite{youtube:icpe20}; this decision would become useful in a later design decision. After deciding to go online, we further asked the authors to create and share {\it 1-slide pitches} of their work, and {\it full slide-decks} explaining their work. We not only linked to these on the ICPE website~\cite{website:icpe20}, but also asked the authors to share links to their papers via the ICPE Slack workspace~\cite{slack:icpe} and also on social media (Twitter and LinkedIn, primarily). This abundance of material and channels put additional burden on the authors, but allowed the attendees the flexibility of choosing how to consume the information.
\subsubsection*{Q2: How to facilitate attendance for all possible members of the community?}
{\bf A2}: We realized early that attendance is significantly limited by any financial burden. Thanks to generous funding from both technical sponsors (ACM SIGMETRICS and ACM SIGSOFT), from the SPEC Board of Directors, and from Huawei (generously confirming their Silver Sponsorship) and Samsung, we were able to reimburse the authors' registrations in full and to make the proceedings freely accessible online. With the help of the Faculty of Science at the University of Alberta, and through a new sponsorship from the SPEC Board of Directors, we were then also able to offer full, {\it free access to the ICPE 2020 events}. As we will see, this important decision allowed students and other first-time participants to attend, without a financial burden.
\subsubsection*{Q3: Which infrastructure to use for organizing the conference online?}
{\bf A3}: We decided to use a set of largely complementary software tools to build the infrastructure for the conference. Flexibility and ease of use were prime considerations, and we ended up using what we think are the best tools currently available for the job: email and website for one-way, asynchronous communication; Zoom (primary) and GoToMeeting (secondary) for multi-party, synchronous video communication and (to some extent) online messaging; YouTube for one-way, asynchronous video communication; Google Drive (primary) and various archival services (also primary) for asynchronous one-way sharing of files; Slack for multi-party, synchronous and asynchronous messaging and file-sharing (secondary); and TimeAndDate to share official times in the format needed by the attendees. We also set up a Mozilla Hubs breakout room -- a virtual space displaying in the browser or on a VR headset -- but the audience did not try it much. (We have also considered many alternatives, especially for Zoom.) This allowed us to support various modes of communication, much like a well-equipped physical facility would allow; it also required us the organizers to act as the technical team in ways that would normally be delegated to the manager of the physical facility.
\subsubsection*{Q4: How to organize the sessions, to maximize availability and attention?}
{\bf A4}: We addressed this genuinely difficult question, which also appears to be the crux of education and training in general, through a set of measures. We describe here a selection of such measures:
\begin{enumerate}
\item We aimed to improve attention, at the possible cost of availability, by organizing all the events of the conference synchronously, that is, all the keynotes, talks, and moderated Q\&A parts are attended by the audience when they occur. This decision was also taken by EDBT/ICDT, but contrasts with the asynchronous organization of ASPLOS~\citep{blog:cacm:LarusCS20}.
\item We aimed to improve both availability and attention by limiting the duration of the virtual "day" to about 3 hours, aligned with the time-zone of the original location of the conference (5pm in Central Europe is 9am in Edmonton, Canada). This somewhat limited potential participation from Asia, but time-zones are very limiting and we reasoned the material is available online in any case. This also limited the duration we could allocate per article, but in our view it allowed the attendees to still have enough energy to engage beyond the session itself -- a behavior we observed is that both speakers and authors would continue to engage on Slack, synchronously or asynchronously.
\item To improve attention, we further limited the duration of each keynote to 25 minutes of one-way communication and 5+ minutes of Q\&A, and asked presenters of peer-reviewed articles to stay within a budget of about 20 minutes for full articles and 15 minutes for short articles (more about this in Q5).
\item We aimed to improve availability, by making all the material available for offline access. This includes not only the material from Q1, but also the Q\&A and discussions. For the latter, we arranged to have helpers from the community (one secretary per session and other self-appointed participants) transcribe to the Slack. We noticed this decision has benefited attendees who could not be present, due to other commitments or illness.
\item We aimed to improve attention, by asking for each session a moderator plus a small team selected from PC members to revise slides and videos, and to write on Slack questions for each article presented in the session, prior to the session itself. This focused the community's attention, and also removed the obstacle of asking the first question.
\item To improve both availability and attention, we aimed to select only tools we considered easy to use and appropriate for how conferences work. This led us to, in the end, reject the use of the Webinar mode of Zoom and GoToMeeting: the Q\&A sessions were confusing, with participants not able to understand immediately how to ask questions or whether questions had been asked at all. Furthermore, tools like Slack offer many options and proved to be confusing for some in the audience.
\item To improve availability, we opted to have backup infrastructure for any mode of communication, e.g., Zoom and GoToMeeting for multi-party, synchronous video communication; YouTube and Slack for sharing videos; and Google Drive and Slack for various files. This led to higher costs (e.g., due to licenses), but gave us certainty the event could proceed even if one of the software tools suffered a catastrophic outage. This would be difficult to replicate with a physical organization of the conference.
\item To improve availability, we opted for Zoom as the primary infrastructure for multi-party, synchronous video communication, coupled with Slack as primary tool for online messaging. In our experience, among at least five other leading platforms, Zoom worked best, being easy to install and use, and exhibiting very few hiccups even when the attendance scaled or became global. (We will not comment on these features, or on the security and privacy issues related to Zoom and other platforms. We are not aware of an impartial, high-quality, reproducible study across all these platforms. Perhaps this is a topic for ICPE 2021...)
\end{enumerate}
\subsubsection*{Q5: How to organize the "talks": keynotes, article-presentations, etc.?}
{\bf A5}: Keynotes are typically the star of general participation, so we decided to preserve their typical organization as one-way talks followed by a moderated Q\&A session. However, for talks related to peer-reviewed articles we reasoned that sitting through one-way communication would diminish the energy in the "room". Thus, we diverged from the classic organization of conferences, and (1) asked the presenters to pre-record and share their talks on YouTube with at least several days before the first day of the main conference, (2) assigned the moderator and a team of experts to view the videos and ask questions prior to the "live" session, (3) encouraged the audience to do the same, (4) asked the speakers to pitch their work for up to 2 minutes at the start of the session, and (5) enabled and encouraged "live" questions, which were asked either by attendees live or, when technical glitches or personal preference precluded this, by the moderator. This flexible approach led to numerous questions and a lively discussion, perhaps even more than some articles would see in conferences organized classically.
\subsubsection*{Q6: How to ensure that everyone knows what to do?}
{\bf A6}: Communication is key in any process of change, and it was also in our case. We used every channel at our disposal to communicate with the audience, first through email, then through an extensive booklet with guidelines, last but not least through online meetings and Slack. The booklet, "Guidelines for the virtual ICPE 2020", went through 3 major versions (uploaded on Slack), and on only 6 pages described the key terms of the meeting, pointed out the Code of Conduct\footnote{https://icpe2020.spec.org/code-of-conduct/}, and informed various personas (e.g., authors, session chairs, other attendees) about how to easily join the sessions. We clarified many aspects using Slack and, especially among the organizers and the Task Force, through online meetings.
\subsubsection*{Q7: How to facilitate the organization of ICPE-related events, flexibly?}
{\bf A7}: We again put the guiding principle of flexibility into practice: we offered advice and guidelines to organizers of workshops, but in the end were supportive with any choice they made. For example, one of the workshops decided to maintain the classic approach, with long keynotes and talks leading to about 8 hours in the virtual "day"; midway through the event, the European participants had to leave, because it was already late in their evening.
\subsubsection*{Q8: How to facilitate social events, flexibly?}
{\bf A8}: We discussed extensively whether we should try to organize social events for the conference. In the spirit of flexibility, the answer can only be: let the society decide itself! And so we did. From the first days of the conference, it emerged that at the end of each day a sizable part of the community would simply "hang out" online, some with drinks, some chatting, some simply staying online. We also noticed that groups would "go to Slack", continuing the discussion, as reflected by the written messages. Last, we suggested that Slack could also be the host of "private groups", where attendees with similar interests (and, as it turns out, also attendees with similar background) could meet and arrange further messaging or even new Zoom or other video communication.
\section{Executing the Online ICPE 2020}
\label{sec:execution}
Executing ICPE 2020 was challenging, but rewarding. We describe the following setup and several observations.
\subsubsection*{Overall and Daily Setup}
We asked attendees to register for the conference (for free), and invited all registrants (nearly 550) to the ICPE 2020 Slack workspace. From these, over 480 accepted the invitation and became attendees (see also Observation O1). We announced a new Zoom link each day, with the same link used by all sessions of that day, which allowed all attendees to easily find a way to join (if they wanted to).
Our Slack setup was similar to that of ASPLOS, with channels for: the introductory session (1 channel), each keynote (2x), the awards session (1x), each session of the main conference (7x), and posters and demos (1x). Overall, each of these channels was well attended (see also O2). We also created the icpe-2020 channel for chair announcements, general for anyone to share, all-sessions for notifications about sessions by session moderators, random for informal conversations, support for asking for help. We also created private channels for organizers (org for the conference chairs, org-ws for the workshops chairs). Overall, we observed that each topical channel was well-attended, but the more general channels were too numerous and generated confusion.
Last, we saw the community create new channels, both public channels for emerging topics (listed as topic-x to group in the Slack interface) and private channels (many organized by people from the same geographical area).
When using Slack, we found that it is important to carefully edit permissions and settings. Slack is aimed at teams and thus has less restrictive default settings. For example, we learned late that organizers should disable the display of email addresses in member profiles and prevent members from using @channel to notify all other members, to avoid unsolicited advertisement in larger conferences.
All moving parts considered, the execution was relatively uneventful. The organizers were on-site and addressed the occasional technical glitches (e.g., Zoom crashed), presenter issues (e.g., not sharing the right screen), etc.
\subsubsection*{O1: ICPE 2020 had unusually high attendance.}
Figure~\ref{fig:membership} depicts the growth in daily attendee count. We exceeded the expected number of attendees (150) by April 12, about 1 week after opening the registration process. At the start of the main conference, we already had over 480 attendees, which triggered us to upgrade the Zoom account to allow for more concurrent seats. This number of attendees is the highest ever recorded for ICPE, exceeding among others the participation observed in the previous edition, an extremely successful event organized in India.
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth]{figures/figure1.jpg}
\caption{ICPE 2020 attendee count, by date. (Data until and including Apr 25.)}
\label{fig:membership}
\end{figure}
\subsubsection*{O2: Testing every aspect of the infrastructure is vital.}
\begin{figure}[!t]
\centering
\includegraphics[width=0.75\linewidth]{figures/figure4a.jpg}
\includegraphics[width=0.20\linewidth]{figures/figure4b.jpg}
\caption{Bandwidth requirements for Zoom. (Live session with 1 main speaker, 1 moderator, circa 70 attendees.)}
\label{fig:bw}
\end{figure}
We tested the infrastructure extensively, including during the event. Figure~\ref{fig:bw} depicts a bandwidth measurement conducted during the event ("load testing"), indicating the performance of Zoom at the scale needed by the conference remains relatively stable and affordable for reasonable Internet connections.
\subsubsection*{O3: The conference continues long after its last session.}
A conference does not have to end with its last session. We conducted surveys to obtain feedback from the attendees, compiled a report for the Steering Committee (this report), and started work on next year's conference. But two items deserve more discussion:
\begin{itemize}
\item Although Slack preserves the written conversation, its forum capabilities are limited, so discussions may quickly become difficult to traverse. Following a tradition that exists in performance engineering since at least the late 1950s~\cite{DBLP:conf/ifip/Strachey59}, we have decided to summarize all Q\&A sessions, by item of discussion, through a community effort. The process is ongoing.
\item We observed that the community has put effort into identifying four emerging topics, which could lead to new entries in future ICPE conferences: {\tt topic-datasets}, about sharing datasets in the community, {\tt topic-edu}, about creating an education workshop associated with ICPE, {\tt topic-history}, about writing a (short) history of the field, and {\tt topic-per-var}, about understanding and controlling performance variability in software and hardware systems.
\end{itemize}
\subsubsection*{O4: Activity is most intense during the conference.}
The live participation was lower than the maximum possible, but still exceeded our expectation. On Zoom at peak, we counted over 175 concurrent attendees for the main conference, and 70-90 participants for the workshops. The lowest attendance was around 70 participants in the main conference, and 40-60 for the workshops. (We could not access the Zoom statistics for these metrics, so we counted them manually.)
\begin{figure*}[!t]
\centering
\includegraphics[width=\linewidth]{figures/figure2.jpg}
\caption{Number of daily active members and of daily active members posting messages.}
\label{fig:activemembers}
\end{figure*}
Figure~\ref{fig:activemembers}, which depicts the number of daily active members and of daily active members posting messages, leads to our observation that activity is most intense only during the conference. On Slack, we counted over 325 daily active members. The peak is recorded during the opening of the conference, when in particular the SPEC community attended (increase in the daily active members) but not necessarily engaged in the discussion (similar count of members posting messages as in the previous days). This indicates that more community management and engagement is needed, to make such a community thrive beyond the limits of the ICPE event.
\subsubsection*{O5: The workshops are important contributors to the discussion.}
We often hear the argument that workshops bring into the community hot topics of discussion, and overall can make conferences livelier. Figure~\ref{fig:messages} presents quantitative evidence in this sense: from the Slack channels dedicated to each session, the workshops stand out as 3 of the Top-5 sessions with the largest message count. (Public channels also accounted for 85\% of the message views, so the impact of the workshops was high.)
\begin{figure*}[!t]
\centering
\includegraphics[width=\linewidth]{figures/figure3.jpg}
\caption{Number of messages posted in the public channels associated with ICPE main-conference sessions and workshops. Only the Top-6 of the 30 public channels displayed. (Data until Apr 25.)}
\label{fig:messages}
\end{figure*}
\newpage{}
\ \\
\newpage{}
\ \\
\newpage{}
\section{Community Feedback}
\label{sec:feedback}
We have conducted a comprehensive survey with over 50 questions, which we analyzed when it reached 50 respondents (just over 10\% of the attendees, and over one-quarter of the peak of concurrent attendees on Zoom). The participation was diverse in role in ICPE 2020 (about 40\% were authors, about a quarter speakers), current occupation (PhD students, academic staff, and industry engineers each represent over 20\% of the respondents), seniority (about half were seniors with 15+ years of experience, about 25\% were juniors), ownership of a PhD degree (about half), geolocation (about half from Europe and 40\% North America, with over 10 countries represented in the survey), and gender (one-third not male). We present here a selection of the results, with more results in the Appendix.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figures/figure5.jpg}
\caption{Summary of answers for Q3.}
\label{fig:Q3}
\end{figure}
From Q3~(Figure~\ref{fig:Q3}), we observe that organizing online helped us enlarge participation, with about half of the respondents being first-time attendees.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figures/figure6a.jpg}
\includegraphics[width=\linewidth]{figures/figure6b.jpg}
\caption{Summary of answers for Q5 and Q6.}
\label{fig:Q56}
\end{figure}
From Q5 and Q6~(Figure~\ref{fig:Q56}), we observe that the attendees appreciated the organization, both in terms of design choices and in which software infrastructure was selected. We will see later that the audience did not like some of the features of an online conference as much. From the foregoing, we conclude that there is a need for new software that better supports the type of online conference we ran. On the other hand, the strong attendance and favourable feedback at the end of the conference indicate that we made successful use of the tools available, despite intense time pressure.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figures/figure7a.jpg}
\includegraphics[width=\linewidth]{figures/figure7b.jpg}
\includegraphics[width=\linewidth]{figures/figure7c.jpg}
\caption{Summary of answers for Q8.}
\label{fig:Q8}
\end{figure}
Overall, from Q8~(Figure~\ref{fig:Q8}) we observe that the attendees experienced the talks as worse in the online ICPE than in the classic format. There was nevertheless considerable appreciation, with between one-third and just under half of the attendees appreciating the online approach more. This was consistent across both talks and keynotes, and for both speakers and audience. This matches the findings of ASPLOS and EDBT/ICDT, if we assume most of their borderline decisions would fall into our "Worse" category.
In contrast, the moderated Q\&A sessions were much appreciated. About two-thirds of the attendees, both speakers and audience, considered the online format at least better, and about one-quarter considered it much better. This confirms our own observation that the interaction was much more extensive and deeper than in the conventional format. The Slack channels were buzzing long after the sessions ended.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figures/figure8.jpg}
\caption{Summary of answers for Q9.}
\label{fig:Q9}
\end{figure}
From Q9~(Figure~\ref{fig:Q9}), we conclude the need for social interaction remained unfulfilled. Over two-thirds of the respondents would have preferred more such interaction. This is consistent with the findings of the EDBT/ICDT survey. In our view, better tools (and maybe also better format-designs) need to appear before the online community can be fully satisfied about this aspect.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figures/figure9a.jpg}
\includegraphics[width=\linewidth]{figures/figure9b.jpg}
\caption{Summary of answers for Q10.}
\label{fig:Q10}
\end{figure}
From the answers to Q10a and Q10b~(Figure~\ref{fig:Q10}), we observe that: (1) as one might expect, many would like a face-to-face only conference; (2) surprisingly, most would enjoy attending a mix of online and face-to-face sessions (but would prefer these sessions do not overlap); (3) a surprisingly high fraction of respondents, about one-fifth, dislike face-to-face conferences (but attended an online form!); (4) perhaps unsurprisingly, about one-third of the respondents dislike the idea of online only conferences.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figures/figure10.jpg}
\caption{Summary of answers for Q12.}
\label{fig:Q12}
\end{figure}
Q12~(Figure~\ref{fig:Q12}) indicates over three-quarters of the respondents prefer the medium-length day chosen for ICPE. Some would have liked it even shorter; under 5\% would have liked a full-length day format.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figures/figure11a.jpg}
\includegraphics[width=\linewidth]{figures/figure11b.jpg}
\includegraphics[width=\linewidth]{figures/figure11c.jpg}
\includegraphics[width=\linewidth]{figures/figure11d.jpg}
\caption{Summary of answers for Q13.}
\label{fig:Q13}
\end{figure}
From Figure~\ref{fig:Q13}, Q13a and Q13b indicate there are good reasons to organize the conference online: it allows broadening participation.
Q13c gives another reason to allow (also) online attendance: people can attend more sessions. The answers to Q13d show that a majority of our respondents attended at least 7 sessions, with over 40\% attending over 10. We asked more about this: 11 of our 16 sessions were attended by at least 60\% of the respondents. (The informal social sessions were attended by just over a quarter of the attendees, which is less than the typical attendance at a conventional conference.) Further details on the questions and the corresponding responses can be found in the Appendix.
\begin{figure}[h]
\centering
\hspace*{-1.0cm}
\includegraphics[width=1.2\linewidth]{figures/figure12.jpg}
\caption{Summary of answers for Q11.}
\label{fig:Q11}
\end{figure}
From Q11 answers~(Figure~\ref{fig:Q11}), we observe that live talks are preferred over any other form of pre-recorded talk. However, pre-recorded talks that include even a short live teaser are almost as appreciated, but consume only a fraction of the time allocated for the subject and thus leave much more time for Q\&A.
\newpage{}
\section{Conclusion and Our Advice for Conference Organizers}
\label{sec:conclusion}
The ICPE 2020 experience has been rewarding for all involved. As organizers, we learned many lessons from organizing this conference online, including
\begin{enumerate}
\item The most important factor for the success of a professional meeting is the human factor. Focus on increasing the availability and attention of your audience. Be as flexible as possible. Engage with the community. Also, train the community (humanely).
\item Communication with people is of key relevance. Authors of accepted papers needed some encouragement to prepare slides, videos, the 1-slide pitch, a preprint of their articles, etc. PC members need to be tempted to volunteer in the revision of the material and to actively participate in online sessions.
\item A community that has as alternative not attending the conference at all will tolerate many mishaps and issues with the infrastructure. Under these circumstances, the infrastructure is not a major issue.
\item The infrastructure needed to organize a conference online includes many components. Prioritize flexibility and redundancy in supporting various modes of communication.
\item Testing the infrastructure is essential, but perhaps not much more so than in conventional conferences.
\item Organizing medium-length conference days of 3-4 hours is perceived as better than using shorter or longer alternatives.
\item Daily Cafe (open) sessions at the end of each day led to useful discussion, mostly for community building and about how to organize better.
\item One innovation in format: the live pitch of about 2 minutes is an efficient replacement for the live talk, which leaves more time for Q\&A. Consequently, Q\&A can be better than in conventional conferences.
\item The online conference can offer various advantages over the classic physical format; perhaps a mixed mode would become the norm in the future.
\item One main advantage of organizing online: enlarging the audience at a fraction of the cost. Compared with a physical conference, organizing online can lead to lower material costs, lower environmental costs, (often) lower costs for the employer, and (often) lower personal costs. The flexibility of joining or leaving at any moment also decreases the opportunity costs.
\item One main victim of organizing online: the personal connection. Currently, there is no substitute for the physical attendance of a conference talk, or for the advantages in establishing collaborations of physical meetings. Also, the social event still does not have a good equivalent in the digital world. The existing tools, and perhaps also the formats tried so far, simply cannot deliver the same experience.
\end{enumerate}
\section*{Acknowledgment}
The authors would like to acknowledge the sponsors of the ACM/SPEC ICPE 2020 conference. We also want to thank all the organizers. The conference could not have been organized without their combined and generous contribution.
The authors further thank Snigdha Singh, of KIT, Germany, who kindly provided most of the visual analysis presented in the Appendix. This extends the analysis in Section~\ref{sec:feedback} and provides a fuller view of the feedback offered by conference attendees.
\bibliographystyle{plain}
\section{Introduction}
When translating a word, translation models need to spend a substantial amount of its capacity in disambiguating its sense in the source language and choose a lexeme in the target language which adequately express its meaning~\citep{choi2017context,tamchyna2017lexical}. However, neural machine translation (NMT) has a severe problem on lexical choice, since it usually has mistranslation errors on low-frequency words~\citep{koehn-knowles-2017-six,nguyen-chiang-2018-improving}.
\begin{wraptable}{r}{7.7cm}
\begin{CJK}{UTF8}{gbsn}
\vspace{-10pt}
\begin{tabular}{rl}
\toprule
\setlength{\tabcolsep}{3.2pt}
SRC & \small 今天 \textcolor{goldenbrown}{\underline{纽马 基特}} 的 跑道 湿软 。\\
RAW-TGT & The going at \textcolor{red}{\bf Newmarket} is soft ... \\
KD-TGT & Today, \textcolor{blue}{\em Newmargot}'s runway is soft ... \\
\midrule
SRC & \small \textcolor{goldenbrown}{\underline{纽马 基特}} 赛马 总是 吸引 ... \\
RAW-TGT & The \textcolor{red}{\bf Newmarket} stakes is always ... \\
KD-TGT & The \textcolor{blue}{\em Newmarquette} races always ... \\
\midrule
SRC & \small 在 \textcolor{goldenbrown}{\underline{纽马 基特}} 3 时 45 分 那场 中 , 我 ... \\
RAW-TGT & I've ... in the 3.45 at \textcolor{red}{\bf Newmarket}. \\
KD-TGT & I ... at 3:45 a.m. in \textcolor{blue}{\em Newmarquite}. \\
\bottomrule
\end{tabular}
\caption{\label{tab:case-study}All samples that contain the source word \begin{CJK}{UTF8}{gbsn}``纽马 基特''\end{CJK} in raw and distilled training corpora, which are different in target sides (RAW-TGT vs. KD-TGT).}
\end{CJK}
\label{tab:example_intro}
\end{wraptable}
In recent years, there has been a growing interest in non-autoregressive translation~(\citealp[NAT,][]{NAT}), which improves decoding efficiency by predicting all tokens independently and simultaneously.
Well-performing NAT models are generally trained on synthetic data distilled by autoregressive translation (AT) teachers instead of the raw training data (Figure~\ref{fig:strcture}(a))~\citep{stern2019insertion,lee2018deterministic,ghazvininejad2019mask,gu2019levenshtein}. Recent studies have revealed that knowledge distillation (KD) reduces the modes (i.e. multiple lexical choices for a source word) in the raw data by re-weighting the training examples~\citep{furlanello2018born, tang2020understanding}, which lowers the intrinsic uncertainty~\citep{ott2018analyzing} and learning difficulty for NAT~\citep{zhou2019understanding, ren2020astudy}. However, the side effect of KD has not been fully studied. In this work, we investigate this problem from the perspective of lexical choice, which is at the core of machine translation.
\begin{CJK}{UTF8}{gbsn}
We argue that the lexical choice errors of AT teacher can be propagated to the NAT model via the distilled training data. To verify this hypothesis, we qualitatively compare raw and distilled training corpora. Table~\ref{tab:example_intro} lists all samples whose source sentences contain the place name ``纽马基特''. In the raw corpus (``RAW-TGT''), this low-frequency word totally occurs three times and corresponds to correct translation ``Newmarket''. However, in the KD corpus (``KD-TGT''), the word is incorrectly translated into a person name ``Newmargot'' (Margot Robbie is an Australian actress) or organization name ``Newmarquette'' (Marquette is an university in Wisconsin) or even invalid one ``Newmarquite''.
\end{CJK}
Motivated by this finding, we explore NAT from the \textit{lexical choice perspective}. We first validate our hypothesis by analyzing the lexical choice behaviors of NAT models (\S\ref{sec:lexical-choice-problem}). Concretely, we propose a new metric AoLC (\textit{accuracy of lexical choice}) to evaluate the lexical translation accuracy of a given NAT model. Experimental results across different language pairs show that NAT models trained on distilled data have higher accuracy of global lexical translation (AoLC$\uparrow$), which results in better sequence generation.
However, fine-grained analyses revealed that although KD improves the accuracy on high-frequency tokens, it harms performance on low-frequency ones (\textit{Low freq.} AoLC$\downarrow$). Moreover, as the teacher model becomes stronger, this issue becomes more severe. We conclude that the lexical choice for low-frequency tokens is a typical kind of \textit{lost information} when distilling knowledge from an AT model.
In order to rejuvenate this lost information in the raw data,
we propose to expose the raw data to the training of NAT models, which gives them the ability to recover the lost knowledge by themselves.
Specifically, we propose two bi-lingual lexical-level data-dependent priors (\textit{Word Alignment Distribution} and \textit{Self-Distilled Distribution}) extracted from raw data, which is integrated into NAT training via Kullback-Leibler divergence.
Both approaches expose the lexical knowledge in the raw data to NAT, which makes it learn to restore the useful information of low-frequency words to accomplish the translation.
We validated our approach on several datasets that widely used in previous studies (i.e. WMT14 En-De, WMT16 Ro-En, WMT17 Zh-En, and WAT17 Ja-En) and model architectures (i.e. MaskPredict~\citep{ghazvininejad2019mask} and Levenshtein Transformer~\citep{gu2019levenshtein}).
Experimental results show that the proposed method consistently improves translation performance over the standard NAT models across languages and advanced NAT architectures. The improvements come from the better lexical translation accuracy (on low-frequency tokens in particular) of NAT models (AoLC$\uparrow$), which leads to fewer mistranslations and fewer prediction errors on low-frequency words.
The main contributions of this work are:
\begin{itemize}[leftmargin=12pt]
\item Our study reveals the side effect of NAT models' knowledge distillation on low-frequency lexicons, which makes the standard NAT training on the distilled data sub-optimal.
\item We demonstrate the necessity of letting NAT models learn to distill lexical choices from the raw data by themselves.
\item We propose a simple yet effective approach to accomplish this goal, which is robustly applicable to several model architectures and language pairs.
\end{itemize}
\section{Preliminaries}
\label{sec:background}
\subsection{Non-Autoregressive Translation}
The idea of NAT has been pioneered by~\citet{NAT}, which enables the inference process goes in parallel.
Different from AT models that generate each target word conditioned on previously generated ones, NAT models break the autoregressive factorization and produce target words in parallel.
Given a source sentence $\bf x$, the probability of generating its target sentence $\bf y$ with length $T$ is calculated as:
\begin{equation}
p({\bf y}|{\bf x})
=p_L(T|{\bf x}; \theta) \prod_{t=1}^{T}p({\bf y}_t|{\bf x}; \theta)
\end{equation}
where $p_L(\cdot)$ is a separate conditional distribution to predict the length of target sequence. During training, the negative loglikelihood loss function of NAT is accordingly $\mathcal{L}_{\mathrm{NAT}}(\theta)=-\log p({\bf y}|{\bf x})$.
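In implementation terms, this objective decomposes into independent per-position cross-entropy terms plus a length-prediction term. The following single-sentence PyTorch sketch is only illustrative: the model producing the logits, batching and padding details are omitted, and all names are placeholders.
\begin{verbatim}
import torch
import torch.nn.functional as F

def nat_loss(token_logits, length_logits, targets, target_length, pad_id=1):
    # token_logits:  [T, vocab]   one independent distribution per position
    # length_logits: [max_len]    distribution over possible target lengths
    # targets:       [T]          gold target tokens
    nll_tokens = F.cross_entropy(token_logits, targets, ignore_index=pad_id)
    nll_length = F.cross_entropy(length_logits.unsqueeze(0),
                                 torch.tensor([target_length]))
    return nll_tokens + nll_length
\end{verbatim}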
To bridge the performance gap between NAT and AT models, a variety of approaches have been proposed, such as multi-turn refinement mechanisms~\citep{lee2018deterministic, ghazvininejad2019mask, gu2019levenshtein, kasai2020parallel}, rescoring with AT models~\citep{wei-etal-2019-imitation,flowseq2019,sun2019fast}, adding auxiliary signals to improve model capacity~\citep{wang2019non,Ran2019GuidingNN,guo2019non,ding-etal-2020-context}, and advanced training objectives~\citep{wei-etal-2019-imitation,shao2019minimizing}.
Our work is complementary to theirs: while they focus on improving NAT models trained on the distilled data, we refine the NAT models by exploiting the knowledge in the raw data.
\paragraph{Sentence-Level Knowledge Distillation}
NAT models suffer from the {\em multimodality problem}, in which the conditional independence assumption prevents a model from properly capturing the highly multimodal distribution of target translations. For example, one English source sentence ``Thank you.'' can be accurately translated into German as any one of ``Danke.'', ``Danke sch\"on.'' or ``Vielen Dank.'', all of which occur in the training data.
\begin{wrapfigure}{r}{6cm}
\vspace{-10pt}
\subfigure[Separate from Raw]{\includegraphics[width=0.2\textwidth]{fig/subfigure1.pdf}}
\hspace{0.01\textwidth}
\subfigure[Bridging with Raw]{\includegraphics[width=0.2\textwidth]{fig/subfigure2.pdf}}
\caption{Comparison of existing two-step and our proposed NAT training scheme.}
\vspace{-10pt}
\label{fig:strcture}
\end{wrapfigure}
To alleviate this problem, \citet{NAT} applied sequence-level KD~\citep{kim2016sequence} to construct a synthetic corpus, whose target sentences are generated by an AT model trained on the raw data, as shown in Figure~\ref{fig:strcture}(a).
The NAT model is then trained only on the distilled data with fewer modes, which makes it easier to acquire more deterministic knowledge (e.g. one lexical choice for each source word).
While separating KD and model training makes the pipeline simple and efficient, it carries one potential threat: \textit{the re-weighted samples distilled with the AT model may have lost some important information.}
~\cite{lee2020discrepancy} show that distillation benefits the sequence generation but harms the density estimation.
In this study, we aim to bridge this gap by exposing the raw data to the training of NAT models, as shown in Figure~\ref{fig:strcture}(b).
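For clarity, the conventional two-step scheme of Figure~\ref{fig:strcture}(a) can be summarized by the following sketch (the training and decoding routines are placeholders supplied by the reader, not an implementation from this work):
\begin{verbatim}
def two_step_nat_training(raw_pairs, train_at, decode, train_nat):
    # raw_pairs: list of (source, target) sentence pairs from the raw data
    # Step 1: train the AT teacher on the raw parallel data.
    at_teacher = train_at(raw_pairs)
    # Step 2: re-label every source sentence with the teacher's output
    #         to build the distilled (reduced-mode) corpus.
    distilled = [(src, decode(at_teacher, src)) for src, _ in raw_pairs]
    # Step 3: train the NAT student only on the distilled corpus.
    return train_nat(distilled)
\end{verbatim}
Our scheme in Figure~\ref{fig:strcture}(b) keeps this pipeline but additionally exposes information from the raw data to step 3, as described in the following sections.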
\subsection{Experimental Setup}
\label{sec:exp-setting}
\paragraph{Datasets}
Experiments were conducted on four widely-used translation datasets: WMT14 English-German (En-De,~\citealt{transformer}), WMT16 Romanian-English (Ro-En,~\citealt{NAT}), WMT17 Chinese-English (Zh-En,~\citealt{hassan2018achieving}), and WAT17 Japanese-English (Ja-En,~\citealt{morishita2017ntt}), which consist of 4.5M, 0.6M, 20M, and 2M sentence pairs, respectively. We use the same validation and test datasets as previous works for a fair comparison. To avoid unknown words, we preprocessed the data via BPE~\citep{Sennrich:BPE} with 32K merge operations.
GIZA++~\citep{och03:asc} was employed to build word alignments for the training datasets.
We evaluated the translation quality with \textsc{BLEU}~\citep{papineni2002bleu}.
\paragraph{NAT Models}
We validated our research hypotheses on two SOTA NAT models:
\begin{itemize}[leftmargin=12pt]
\item {\em MaskPredict} (MaskT,~\citealt{ghazvininejad2019mask}) that uses the conditional masked LM~\citep{devlin2019bert} to iteratively generate the target sequence from the masked input. We followed its optimal settings and set the number of iterations to 10 and the length beam to 5.
\item {\em Levenshtein Transformer} (LevT,~\citealt{gu2019levenshtein}) that introduces three steps: {deletion}, {placeholder prediction} and {token prediction}. The number of decoding iterations in LevT depends adaptively on certain conditions.
\end{itemize}
For regularization, we tune the dropout rate over [0.1, 0.2, 0.3] based on validation performance in each direction, and apply weight decay of
0.01 and label smoothing with $\epsilon$ = 0.1. We train with batches of approximately 128K tokens using Adam~\citep{kingma2015adam}.
The learning rate warms up to $5\times10^{-4}$ in the first 10K steps, and then decays with the inverse square-root schedule. Following common practice~\citep{ghazvininejad2019mask,kasai2020parallel},
we evaluate translation performance on an ensemble of the top 5 checkpoints to avoid stochasticity.
\paragraph{AT Teachers} We closely followed previous works on NAT and applied sequence-level knowledge distillation~\citep{kim2016sequence} to reduce the modes of the training data. More precisely, to assess the effectiveness of our method under different AT teachers, we trained three kinds of Transformer~\citep{transformer} models: Transformer-\textsc{Base}, Transformer-\textsc{Big} and Transformer-\textsc{Strong}. The main results employ \textsc{Large} for all directions except Ro-En, which is distilled by \textsc{Base}. The architectures of Transformer-\textsc{Big} and Transformer-\textsc{Strong} are identical, but \textsc{Strong} uses a large-batch (458K tokens) training strategy.
\section{Understanding Lexical Choice in NAT Models}
\label{sec:lexical-choice-problem}
\subsection{Evaluating Lexical Choice of NAT Models}
\label{subsec:AoLC}
Recently,~\citet{zhou2019understanding} argued that knowledge distillation is necessary due to the uncertain nature of the machine translation task. Accordingly, they propose a metric to estimate the complexity of the data (\textit{CoD}), which is derived from an external word alignment model. They reveal that the distilled data is indeed less complex, which facilitates easier training for the NAT model. Inspired by this, we propose a metric to measure the lexical-level accuracy of model predictions.
\paragraph{Accuracy of Lexical Choice (AoLC)} evaluates the accuracy of the target lexicons chosen by a trained NAT model $M$ for each source word. Specifically, the model $M$ takes a source word $f$ as input, and produces a list of hypothesis candidates with their corresponding word confidences:
\begin{equation}
{\bf P}^M_f = \{P^M(e_1|f), \dots, P^M(e_{|{\bf V}_{trg}|}|f)\}
\end{equation}
where ${\bf V}_{trg}$ is the target-side vocabulary over the whole corpus. The AoLC score is calculated by averaging the probability of the gold target word $e_f$ of each source word $f$:
\begin{equation}
AoLC = \frac{\sum_{f \in {\bf V}_{src}^{test}}P^M({e_f|f})}{|{\bf V}_{src}^{test}|}
\end{equation}
where ${\bf V}_{src}^{test}$ is the set of source-side tokens in the test set. Each gold word $e_f$ is chosen with the help of the word alignment model $P^A_f$.
The selection procedure is as follows: Step 1) collect the references of the source sentences that contain the source word $f$, and build the target-side word bag $\mathbb{B}_f$ from these references. Step 2) Sort $P^A_f$ in descending order of alignment probability and take the first word that appears in $\mathbb{B}_f$ as the gold word, stopping once $\mathbb{B}_f$ has been traversed. Step 3) If the gold word is still not found, take the word with the highest alignment probability in $P^A_f$ as the gold word. Generally, higher lexical translation accuracy indicates more confident predictions. We discuss the reliability of the word alignment-based AoLC in Appendix~\ref{appendix:aloc}.
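The gold-word selection and the AoLC average can be summarized by the following sketch (data structures and function names are illustrative assumptions, not code released with this work):
\begin{verbatim}
def gold_word(align_probs, ref_bag):
    # align_probs: dict target word -> alignment probability P^A(e|f)
    # ref_bag:     word bag B_f built from references whose source side contains f
    # Step 2: scan aligned words in descending probability; prefer words in B_f.
    for e, _ in sorted(align_probs.items(), key=lambda kv: -kv[1]):
        if e in ref_bag:
            return e
    # Step 3: otherwise fall back to the most probable aligned word.
    return max(align_probs, key=align_probs.get)

def aolc(test_src_words, model_probs, align, bags):
    # AoLC: average model confidence P^M(e_f | f) over test-set source words.
    scores = [model_probs[f][gold_word(align[f], bags[f])]
              for f in test_src_words]
    return sum(scores) / len(scores)
\end{verbatim}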
\iffalse
Uncertainty of Model (UoM) that reflects the uncertainty of lexical choices produced by a trained NAT model $M$ for each source word.
Specifically, the model $M$ takes a source word $f$ as the input, and produces a probability distribution over the whole words in target vocabulary:
\begin{equation}
{\bf P}^M_f = \{P^M(e_1|f), \dots, P^M(e_{|{\bf V}_{trg}|}|f)\}
\label{eqn:model-prior}
\end{equation}
The UoM score is calculated as the averaged entropy of the distributions of ${\bf P}^M_f$:
\begin{eqnarray}
UoM &=& \frac{\sum_{f \in {\bf V}_{src}}H^M_f}{|{\bf V}_{src}|} \\
H^M_f &=& -\sum_{e \in {\bf V}_{trg}} P^M(e|f) \log P^M(e|f)
\end{eqnarray}
where ${\bf V}_{src}$ and ${\bf V}_{trg}$ are the source and target side vocabularies, respectively. Generally, the more skewed the distribution ${\bf P}^M_f$, the lower the entropy $H^M_f$, and the more deterministic lexical choices for the source word $f$.
\fi
\subsection{Global Effect of Knowledge Distillation on Lexical Choice}
\begin{table*}[t]
\centering
\vspace{-10pt}
\scalebox{1}{
\begin{tabular}{lccccccccc}
\toprule
\multirow{2}{*}{\bf Dataset} & \multicolumn{3}{c}{\bf En-De} & \multicolumn{3}{c}{\bf Zh-En} & \multicolumn{3}{c}{\bf Ja-En}\\
\cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10}
& \bf CoD & \bf AoLC & \bf BLEU & \bf CoD & \bf AoLC & \bf BLEU & \bf CoD & \bf AoLC & \bf BLEU\\
\midrule
\bf Raw & 3.53 & 74.3 & 24.6 & 5.11 & 68.5 & 22.6 & 3.92 & 73.1 & 27.8 \\
\midrule
\bf KD (\textsc{Base}) & 1.85 & 75.5 & 26.5 & 3.23 & 71.8 & 23.6 & 2.80 & 74.7 & 28.4\\
\bf KD (\textsc{Big}) & 1.77 & 76.3 & 27.0 & 3.01 & 72.7 & 24.2 & 2.47 & 75.3 & 28.9\\
\bottomrule
\end{tabular}}
\caption{Results of different metrics on the MaskT model trained on different datasets. ``KD (\textsc{X})'' denotes the distilled data produced by the AT model with \textsc{X} setting.
``CoD'' denotes the complexity of data metric proposed by~\citet{zhou2019understanding}, and ``AoLC'' is our proposed metric to evaluate the accuracy of lexical choice in NAT models.}
\vspace{-10pt}
\label{tab:dmo-mmo}
\end{table*}
In this section, we analyze the lexical choice behaviors of NAT models with our proposed AoLC.
In particular,
we evaluated three MaskT models, trained respectively on the raw data and on the \textsc{AT-Base} and \textsc{AT-Big} distilled data. We compared AoLC with two other metrics (i.e. BLEU and CoD) on three different datasets (i.e. En-De, Zh-En and Ja-En).
As shown in Table~\ref{tab:dmo-mmo}, KD is able to improve translation quality of NAT models (BLEU: \textsc{KD(Big)} \textgreater \textsc{KD(Base)} \textgreater Raw) by increasing the lexical choice accuracy of data (AoLC: \textsc{KD(Big)} \textgreater \textsc{KD(Base)} \textgreater Raw). As expected, NAT models trained on more deterministic data (CoD$\downarrow$) have lower lexical choice errors (AoLC$\uparrow$) \textit{globally}, resulting in better model generation performance (BLEU$\uparrow$).
\subsection{Discrepancy between High- and Low-Frequency Words on Lexical Choice}
\label{subsec:discrepancy}
To better understand the more detailed lexical changes within the data caused by distillation, we break the lexicons down into three categories in terms of frequency, and revisit the issue from two angles: the training data and the translated data.
We first visualize the change in the training data when adopting KD in terms of word frequency density.
\begin{wrapfigure}{r}{6.5cm}
\centering
\vspace{-10pt}
\includegraphics[width=0.5\textwidth]{fig/word_freq_new.pdf}
\caption{Comparison of the token frequency density (w.r.t the sampled tokens' probability distribution) between \textit{Raw}, \textit{KD (Base)} and \textit{KD (Big)} WMT14 En-De training data.}
\vspace{-10pt}
\label{fig:word_freq}
\end{wrapfigure}
As shown in Figure~\ref{fig:word_freq}, we find that the kurtosis of the KD data distribution is higher than that of the raw data, and the effect becomes more significant when adopting a stronger teacher. The side effect is obvious: the originally high-frequency words become more frequent and the low-frequency words become rarer, making the distribution of the training data more imbalanced and skewed, which is known to be problematic in the data mining field~\citep{10.1145/1007730.1007733}. This discrepancy may
erode the \textit{translation performance of low-frequency words} and the \textit{generalization performance on other domains}. Here we focus on low-frequency words, and leave the generalization performance degradation to future work.
In order to understand the detailed changes during inference, we then analyze the lexical accuracy for words of different frequencies in the test set. We make a comprehensive comparison across languages based on our proposed AoLC. As shown in Figure~\ref{fig:test_freq_aolc}, as the teacher model becomes better, i.e. KD(base)$\rightarrow$KD(big), the lexical choice of high-frequency words becomes significantly more accurate (AoLC $\uparrow$) while that of low-frequency words becomes worse (AoLC $\downarrow$). Through this fine-grained analysis, we uncover an interesting discrepancy between high- and low-frequency words. The same phenomenon (lexical choice errors on low-frequency words propagated from the teacher model) can also be found in more general cases, e.g. distillation for training smaller AT models. Details can be found in Appendix~\ref{appendix:general-kd}.
To keep the accuracy of high-frequency words and compensate for the imbalanced low-frequency words caused by KD, we present a simple yet effective approach below.
\begin{figure*}[ht]
\vspace{-5pt}
\centering
\subfigure[En-De]{
\includegraphics[width=0.3\textwidth]{fig/freq_en-de.pdf}
\label{fig:freq_en-de}}
\hfill
\subfigure[Zh-En]{
\includegraphics[width=0.3\textwidth]{fig/freq_zh-en.pdf}
\label{fig:freq_zh-en}}
\hfill
\subfigure[Ja-En]{
\includegraphics[width=0.3\textwidth]{fig/freq_ja-en.pdf}
\label{fig:freq_ja-en}}
\caption{Accuracy of lexical choice (AoLC) for source words of different frequency.}
\label{fig:test_freq_aolc}
\vspace{-10pt}
\end{figure*}
\section{Improving Lexical Choice in NAT Models}
\subsection{Methodology}
Our goal is to help NAT models learn the needed lexical choices from the raw data to achieve better performance. To this end, we introduce an extra bilingual data-dependent prior objective that augments current NAT models to distill the required lexical choices from the raw data by themselves.
Specifically, we use Kullback-Leibler divergence to guide the probability distribution of model predictions $P^M(e|{\bf f})$ to match the prior probability distributions $Q(\cdot)$:
\begin{equation}
\mathcal{L}_{prior} = \sum_{e \in {\bf e}} \mathrm{KL} \big(Q(e | {\bf f}) ~||~ P^M(e | \mathbf{f}) \big)
\end{equation}
where ${\bf f}$ is the source sentence, and $\bf e$ is the target sentence.
The bilingual prior distribution $Q(\cdot)$ is derived from the raw data, which is independent of the model $M$ and will be described later. The final objective for training the NAT model becomes:
\begin{equation}
\mathcal{L} = (1-\lambda) \mathcal{L}_{NAT} + \lambda \mathcal{L}_{prior}
\end{equation}
in which the imitation rate $\lambda$ follows the logarithmic decay function:
\begin{equation}
\lambda(i)=
\begin{cases}
\frac{\log(\mathrm{I}/(2(i+1)))}{\log(\mathrm{I}/2)} & i \leq \mathrm{I}/2 \\
0 & \text{others}
\end{cases}
\end{equation}
where $i$ is the current step and $\mathrm{I}$ is the total number of training steps on the distilled data.
Accordingly, the NAT model is fed only with the prior knowledge derived from the raw data at the beginning. As training proceeds, the supervision
signal of the prior information becomes weaker while that of the distilled data gradually prevails in the training objective.
We run all models for 300K steps to ensure adequate training; thus the bilingual prior distributions are exposed during the first 150K steps.
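Under the stated 300K-step schedule, the decay and the loss interpolation can be written as follows (a minimal sketch; the two loss values are assumed to be computed elsewhere):
\begin{verbatim}
import math

TOTAL_STEPS = 300000  # I: total training steps on the distilled data

def imitation_rate(i, total=TOTAL_STEPS):
    # lambda(i) = log(I / (2(i+1))) / log(I/2) for i <= I/2, and 0 afterwards.
    if i <= total // 2:
        return math.log(total / (2 * (i + 1))) / math.log(total / 2)
    return 0.0

def training_loss(nat_loss, prior_loss, step):
    # L = (1 - lambda) * L_NAT + lambda * L_prior
    lam = imitation_rate(step)
    return (1.0 - lam) * nat_loss + lam * prior_loss
\end{verbatim}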
\paragraph{Choices of Prior Distribution $Q(\cdot)$}
The goal of the prior objective is to guide the NAT models to learn to distill the lexical choices itself from the raw data.
For each target word $e$, we use the external word alignment to select the source word $f$ with the maximum alignment probability, and $Q(\cdot)$ is rewritten as:
\begin{equation}
Q(e|{\bf f}) = Q(e|f)
\end{equation}
Specifically, we use two types of bilingual prior distributions:
\begin{itemize}[leftmargin=8pt]
\item {\em Word Alignment Distribution ({WAD})} is the distribution derived from the external word alignment ${\bf P}^D_f = \{P^D(e_1|f), \dots, P^D(e_N|f)\}$ where $\{e_1, \dots, e_N\}$ are the set of target words aligned to the source word in the training data. We follow \citet{hinton2015distilling} to use the softmax temperature mechanism to map ${\bf P}^D_f$ over the whole target vocabulary:
\begin{equation}
Q(e|f) = \hat{\bf P}^D_f = \frac{\exp({\bf P}^D_f/\tau)}{\sum_{{\bf V}_{trg}}\exp({\bf P}^D_f/\tau)}
\end{equation}
We tune the temperature over [0.5, 1, 2, 5] on the WMT14 En-De dataset and use $\tau=2$ as the default setting for incorporating the word alignment distribution in all datasets. A sketch of this construction is given after this list.
\item {\em Self-Distilled Distribution ({SDD})} is the probability distribution for the source word $f$ produced by a NAT model of the same architecture pre-trained on the raw data.
Specifically, the model $M$ takes a source word $f$ as input and produces a probability distribution over all words in the target vocabulary:
\begin{equation}
{\bf P}^M_f = \{P^M(e_1|f), \dots, P^M(e_{|{\bf V}_{trg}|}|f)\}
\label{eqn:model-prior}
\end{equation}
This prior distribution signal can be characterized as self-distilled \textit{lexicon level} ``born-again networks''~\citep{furlanello2018born} or self-knowledge distillation~\citep{liu2020noisy}, where the teacher and student have the same neural architecture and model size, and yet surprisingly the student is able to surpass the teacher's accuracy.
\iffalse
\zptu{self-distillation (Noisy Self-Knowledge Distillation for Text Summarization)}
\fi
\end{itemize}
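A possible construction of these priors and their combination (a simple average of the two distributions, as used later for ``+Both'' in Section~\ref{subsec:ablation-study}) is sketched below; vocabulary handling is simplified and all names are illustrative:
\begin{verbatim}
import math

def wad_prior(align_probs, trg_vocab, tau=2.0):
    # Word Alignment Distribution: temperature softmax of the alignment
    # probabilities P^D(e|f) over the full target vocabulary
    # (words not aligned to f are given a score of 0 before the softmax).
    scores = {e: align_probs.get(e, 0.0) / tau for e in trg_vocab}
    z = sum(math.exp(s) for s in scores.values())
    return {e: math.exp(s) / z for e, s in scores.items()}

def combined_prior(wad, sdd):
    # '+Both': simple average of the WAD and self-distilled (SDD) distributions.
    return {e: 0.5 * (wad[e] + sdd.get(e, 0.0)) for e in wad}
\end{verbatim}
The SDD itself is simply the per-source-word distribution ${\bf P}^M_f$ read off a NAT model pre-trained on the raw data, so no extra construction is needed beyond storing those probabilities.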
\subsection{Experimental Results}
\label{subsec:ablation-study}
\begin{table*}[htb]
\centering
\vspace{-8pt}
\begin{tabular}{lcccccc}
\toprule
\multirow{2}{*}{\textbf{Model}} & \multicolumn{2}{c}{\bf En-De} & \multicolumn{2}{c}{\bf Zh-En} & \multicolumn{2}{c}{\bf Ja-En} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7}
& {\bf AoLC} / LFT & {\bf BLEU} & {\bf AoLC} / LFT & {\bf BLEU} & {\bf AoLC} / LFT & {\bf BLEU}\\
\midrule
{\bf\textsc{AT-Teacher}} & 79.3 / 73.0 & 29.2 & 74.7 / 66.2 & 25.3 & 77.1 / 70.8 & 29.8\\
\midrule
{\bf MaskT+KD} & 76.3 / 68.4 & 27.0 & 72.7 / 61.5 & 24.2& 75.3 / 66.9 & 28.9\\
{\bf ~~+WAD} & 77.5 / 71.9 & 27.4 & 73.4 / 64.5 & 24.8& 76.3 / 69.0 & 29.4\\
{\bf ~~+SDD} & 77.7 / 72.2 & 27.5 & 73.5 / 64.7 & 24.9& 76.1 / 68.6 & 29.3\\
{\bf ~~+Both} & 78.1 / 72.4 & 27.8 & 74.0 / 65.0 & 25.2& 76.6 / 69.1 & 29.6\\
\bottomrule
\end{tabular}
\caption{Ablation Study on raw data priors across different language pairs using the MaskT Model. ``WAD'' denotes word alignment distribution, and ``SDD'' denotes self-distilled distribution. ``AoLC / LFT'' denotes the lexical translation accuracies for all tokens / low-frequency tokens, respectively.}
\vspace{-10pt}
\label{tab:prior}
\end{table*}
\paragraph{Ablation Study on Raw Data Prior}
Table~\ref{tab:prior} shows the results of our two proposed bilingual data-dependent prior distributions across language pairs. The word alignment distribution (WAD) and self-distilled distribution (SDD) variants consistently improve performance over the vanilla \textit{two-step training scheme} NAT model (``NAT+KD'') when used individually (+0.5 BLEU point on average), and combining them (``+Both'') by simply averaging the two distributions achieves a further improvement (+0.9 BLEU point on average). The improvements in translation performance are due to an increase in AoLC, especially for low-frequency tokens (+3.2 on average), which reconfirms our claim.
Notably, averaging the two prior distributions allows them to rectify each other, leading to a further increase. We explore the complementarity of the two prior schemes in Section~\ref{subsec:analysis}.
In the following experiments, we use the combination of WAD and SDD as the default bilingual data dependent prior.
\begin{table*}[t]
\centering
\setlength{\tabcolsep}{3.2pt}
\vspace{-15pt}
\begin{tabular}{lrrllll}
\toprule
\multirow{2}{*}{\bf Model} & \multirow{2}{*}{\bf Iter.} & \multirow{2}{*}{\bf Speed} & \multicolumn{2}{c}{\bf En-De} & \multicolumn{2}{c}{\bf Ro-En}\\
\cmidrule(lr){4-5} \cmidrule(lr){6-7}
& & & \bf AoLC & \bf BLEU & \bf AoLC & \bf BLEU\\
\midrule
\multicolumn{7}{c}{\textbf{AT Models}} \\
{\bf Transformer-\textsc{Base}} (Ro-En Teacher) & n/a & 1.0$\times$ & & 27.3 & &34.1 \\
{\bf Transformer-\textsc{Big}} (En-De Teacher) & n/a & 0.8$\times$ & & 29.2 & &n/a \\
\midrule
\multicolumn{7}{c}{\textbf{Existing NAT Models}}\\
{\bf NAT}~\citep{NAT} & 1.0 & 2.4$\times$ & \multirow{5}{*}{n/a} & 19.2 & \multirow{5}{*}{n/a} & 31.4\\
{\bf Iterative NAT}~\citep{lee2018deterministic} & 10.0& 2.0$\times$ & & 21.6 & &30.2 \\
{\bf DisCo}~\citep{kasai2020parallel} & 4.8 & 3.2$\times$ & & 26.8 & &33.3 \\
{\bf Mask-Predict}~\citep{ghazvininejad2019mask} & 10.0 & 1.5$\times$ & & 27.0 & & 33.3\\
{\bf Levenshtein}~\citep{gu2019levenshtein} & 2.5 & 3.5$\times$ & & 27.3 & &33.3\\
\midrule
\multicolumn{7}{c}{\textbf{Our NAT Models}}\\
{\bf Mask-Predict} & \multirow{2}*{10.0} & \multirow{2}*{1.5$\times$} & 76.3 & 27.0 & 79.2 &33.3\\
{\bf ~~~+Raw Data Prior} & & & 78.1& 27.8$^\dagger$ & 80.6 &33.7\\
{\bf Levenshtein} & \multirow{2}{*}{2.5} & \multirow{2}{*}{3.5$\times$} & 77.0 & 27.2 & 79.8 &33.2\\
{\bf ~~~+Raw Data Prior} & & & 77.8 & 27.8$^\dagger$ & 80.9 &33.8$^\dagger$\\
\bottomrule
\end{tabular}
\caption{Comparison with previous work on the WMT14 En-De and WMT16 Ro-En datasets. The ``Iter.'' column indicates the average number of refinement iterations. ``$^\dagger$'' indicates a statistically significant difference ($p < 0.05$) from the baselines according to the statistical significance test~\citep{collins2005clause}.}
\vspace{-10pt}
\label{tab:main-results}
\end{table*}
\paragraph{Comparison with Previous Work}
Table~\ref{tab:main-results} lists the results of previous competitive studies~\citep{NAT, lee2018deterministic, kasai2020parallel, ghazvininejad2019mask, gu2019levenshtein} on the widely-used WMT14 En-De and WMT16 Ro-En datasets.
Clearly, our bilingual data-dependent prior significantly improves translation (BLEU$\uparrow$) by substantially increasing the lexical choice accuracy (AoLC$\uparrow$).
It is worth noting that our approach merely modifies the training process and thus does not add any latency (``Speed''), maintaining the intrinsic advantages of non-autoregressive generation.
\begin{wraptable}{r}{6cm}
\vspace{-10pt}
\begin{tabular}{lcc}
\toprule
\bf Strategies & \bf AoLC & \bf BLEU\\
\midrule
\bf Baseline & 76.3 & 27.0 \\
\bf Mix & 76.6 & 27.2 \\
\bf Tagged Mix & 77.1 & 27.4 \\
\bf Decay Curriculum& 77.2 & 27.5 \\
\bf Ours & 78.1 & 27.8 \\
\bottomrule
\end{tabular}
\caption{Performance of several data manipulation strategies on En-De dataset. Baseline is the MaskT+KD model and Ours is our proposed approach.}
\label{tab:data_strategy}
\vspace{-10pt}
\end{wraptable}
\paragraph{Comparison with Data Manipulation Strategies}
Instead of using the proposed priors, we also investigate two effective data manipulation strategies, i.e. \textit{Data Mixing} and \textit{Curriculum Learning}, to force the NAT model to learn from both the raw and distilled data. For data mixing, we design two settings: a) Mix: simply combine the raw and distilled data, and then shuffle the mixed dataset. b) Tagged Mix: inspired by the success of tagged back-translation~\citep{caswell2019tagged,marie2020tagged}, we add tags to distinguish between KD and raw sentences in the mixed dataset (a sketch is given below). For the decay curriculum schedule, the NAT model learns more from the raw data at the beginning and then learns more from the KD data as training goes on. The details of the curriculum can be found in Appendix~\ref{appendix:curriculum}. As seen in Table~\ref{tab:data_strategy}, data mixing and the decay curriculum schedule improve performance on both AoLC and BLEU, which confirms the necessity of exposing raw data to NAT models during training. Moreover, our approach still outperforms these effective strategies, demonstrating the superiority of our learning scheme.
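As referenced above, the Tagged Mix setting can be sketched as follows (the tag tokens are illustrative placeholders, not necessarily the tags used in our experiments):
\begin{verbatim}
import random

def tagged_mix(raw_pairs, kd_pairs, seed=1):
    # Prepend a source-side tag so the model can tell raw pairs from
    # distilled (KD) pairs, then shuffle the combined corpus.
    mixed = [("<raw> " + src, tgt) for src, tgt in raw_pairs] + \
            [("<kd> " + src, tgt) for src, tgt in kd_pairs]
    random.Random(seed).shuffle(mixed)
    return mixed
\end{verbatim}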
\subsection{Experimental Analysis}
\label{subsec:analysis}
\begin{wraptable}{r}{6cm}
\vspace{-10pt}
\begin{tabular}{lccc}
\toprule
\bf Model & \bf BLEU & \bf AoLC & \bf Error\\
\midrule
\bf MaskT & 22.6 & 68.5\% & 34.3\%\\
\bf ~~+\textsc{KD} & 24.2 & 72.7\% & 30.1\% \\
\bf ~~~~~+RDP & 25.2 & 74.0\% & 28.2\%\\
\bottomrule
\end{tabular}
\caption{Subjective evaluation of mis-translation errors on the Zh-En dataset.}
\label{tab:manual}
\vspace{-15pt}
\end{wraptable}
In this section, we conducted extensive analyses on the lexical choice to better understand our approach.
Unless otherwise stated, results are reported on the MaskPredict models in Table~\ref{tab:prior}.
\paragraph{Our approach improves translation performance by reducing mis-translation errors.}
The lexical choice ability of NAT models correlates with mis-translation errors, in which wrong lexicons are chosen to translate source words.
To better understand whether our method alleviates the mis-translation problem, we assessed system outputs with human judgments. In particular, we randomly selected 50 sentences from the Zh-En test set and manually labelled the words with lexical choice errors. We define the lexical choice error rate as $E/N$, where $E$ is the number of lexical choice errors and $N$ is the number of content words in the source sentences, since such errors mainly occur when translating content words.
As seen in Table~\ref{tab:manual}, our approach consistently improves BLEU scores by reducing lexical choice errors, which confirms our claim.
Additionally, AoLC metric correlates well with both the automatic BLEU score and the subjective evaluation, demonstrating its reasonableness.
\paragraph{Our approach significantly improves the accuracy of lexical choice for low-frequency source words.}
Following the discrepancy between high- and low-frequency words discussed in Section~\ref{subsec:discrepancy},
we examine the fine-grained lexical choice accuracy w.r.t.\ our proposed AoLC. As shown in Table~\ref{tab:lexical}, the majority of the improvement comes from low-frequency words, confirming our hypothesis.
\begin{table}
\centering
\begin{minipage}[t]{0.48\textwidth}
\vspace{-10pt}
\centering
\begin{tabular}{lccc}
\toprule
\bf Frequency & \bf En-De & \bf Zh-En & \bf Ja-En\\
\midrule
\bf High & +1.3\% & +0.3\% & +1.3\%\\
\bf Medium & +0.2\% & +0.1\% & +0.9\%\\
\bf Low &\bf +5.9\% &\bf +5.8\% &\bf +3.3\%\\
\midrule
\bf All & +2.4\%&+1.8\%&+1.7\%\\
\bottomrule
\end{tabular}
\caption{Improvement of our approach over the MaskT+KD model on AoLC.}
\label{tab:lexical}
\vspace{-10pt}
\end{minipage}
\hfill
\begin{minipage}[t]{0.48\textwidth}
\vspace{-10pt}
\centering
\begin{tabular}{lccc}
\toprule
\bf Model & \bf En-De & \bf Zh-En & \bf Ja-En \\
\midrule
\bf NAT & 10.3\% & 6.7\% & 9.4\% \\
\bf ~~{+KD} & 7.6\% & 4.2\% & 6.9\% \\
\bf ~~~~{+Ours} &9.8\% &6.1\% &8.5\% \\
\bottomrule
\end{tabular}
\caption{Ratio of low-frequency target words in the MaskT model generated translations.}
\label{tab:target-frequency}
\vspace{-10pt}
\end{minipage}
\end{table}
\paragraph{Our approach generates translations that contain more low-frequency words.}
Besides improving the lexical choice of low-frequency words, our method results in more low-frequency words being recalled in the translations. As shown in Table~\ref{tab:target-frequency}, although KD improves the translation, it biases the NAT model towards generating high-frequency tokens (\textit{Low freq.}$\downarrow$), while our method not only corrects this bias (+32\% relative change on average) but also enhances translation quality (BLEU$\uparrow$ in Table~\ref{tab:prior}).
{
\begin{wraptable}{r}{8cm}
\vspace{-10pt}
\centering
\begin{tabular}{lcccc}
\toprule
\multirow{2}{*}{\bf Prior} & \multicolumn{2}{c}{\bf AoLC on LFT} & \multicolumn{2}{c}{\bf Ratio of LFT}\\
\cmidrule(lr){2-3} \cmidrule(lr){4-5}
& \bf Content & \bf Function & \bf Content & \bf Function \\
\midrule
\bf N/A & 67.7\%& 70.1\%& 5.3\% & 2.4\% \\
\bf WAD & 71.6\% & 72.9\% & 5.9\% & 2.5\%\\
\bf SDD & 71.4\% & 74.3\% & 5.6\% & 3.4\% \\
\midrule
\bf Both & 71.6\% & 74.2\% & 6.2\% & 3.6\% \\
\bottomrule
\end{tabular}
\caption{{AoLC and ratio of different prior schemes on low-frequency tokens (``LFT'').
We list the performance for different linguistic roles, i.e. content words and function words. Note that the ratio of LFT refers to the ratio of low-frequency tokens in the generated translations. ``N/A'' denotes the MaskT+KD baseline.}}
\label{tab:complement-lexical}
\end{wraptable}
\paragraph{Our proposed two priors complement each other by facilitating different tokens.}
As shown in Table~\ref{tab:prior},
combining the two individual schemes can further increase NAT performance. To explain how they complement each other, especially for low-frequency tokens, we classify low-frequency tokens into two categories according to their linguistic roles: content words (e.g. nouns, verbs, and adjectives) and function words (e.g. prepositions, determiners, and punctuation). The results are listed in Table~\ref{tab:complement-lexical}. We find that WAD contributes more to the understanding and generation of content tokens, while SDD brings larger gains for function (i.e. content-free) tokens. We leave a more thorough exploration of this aspect to future work.
\paragraph{Effect of Word Alignment Quality on Model Performance.}
Since both the proposed AoLC and the priors depend heavily on the quality of the word alignment, we design two weaker alignment scenarios to verify the robustness of our method.
First, we adopt fast-align~\citep{dyer2013simple}, which is slightly weaker than GIZA++. Using fast-align, our method still achieves +0.6 and +0.7 BLEU improvements on the En-De and Zh-En datasets, only marginally lower than those obtained with GIZA++ (i.e. +0.8 and +1.0 BLEU). Encouragingly, we find that the improvements in translation accuracy on low-frequency words still hold (+5.5\% and +5.3\% vs. +5.9\% and +5.8\%), which demonstrates the robustness of our approach.
In addition, we insert noise into the alignment distributions to deliberately reduce the alignment quality (noise injection details can be found in Appendix~\ref{appendix:noise}). The resulting models still significantly outperform the baseline, indicating that our method can tolerate alignment errors and maintain model performance to some extent.
\begin{wraptable}{r}{7cm}
\vspace{-10pt}
\centering
\begin{tabular}{lcccc}
\toprule
\multicolumn{2}{c}{\bf AT Teacher} & \multicolumn{3}{c}{\bf NAT Model}\\
\cmidrule(lr){1-2} \cmidrule(lr){3-5}
\bf Model & \bf BLEU & \bf Vanilla & \bf+Prior & \bf $\triangle$ \\
\midrule
\bf Base & 27.3 & 26.5 & 27.2 & +0.7\\
\bf Big & 28.4 & 26.8 & 27.5 & +0.7\\
\bf Strong & 29.2 & 27.0 & 27.8 & +0.8\\
\bottomrule
\end{tabular}
\caption{Different teachers on the En-De dataset.}
\vspace{-5pt}
\label{tab:AT-teacher}
\end{wraptable}
\paragraph{Effect of AT Teacher}
To further dissect the effects of different AT teachers, we employ three teachers. Table~\ref{tab:AT-teacher} shows that our method enhances NAT models under a variety of teacher-student scenarios, including base, big and strong teacher-guided models. Our approach obtains +0.7 BLEU points on average, and is
potentially complementary to the majority of existing work on improving knowledge distillation for NAT models.
}
\iffalse
\paragraph{Our approach improves generalization.}
Our approaches can compensate for the distilled imbalanced data distribution, expecting to achieve better generalization performance. To verify our hypothesis, we inference the NAT models pretrained on WMT news domain with the in-house medical domain test data, which contains 499 Chinese-English sentence pairs with four references. The result is shown at Appendix, our method performs better than distilled NAT model.
\fi
\iffalse
\begin{figure}[t]
\centering
\subfloat[BLEU]{\includegraphics[width=0.45\textwidth]{fig/BLEU.pdf}}
\hfill
\subfloat[UoM]{\includegraphics[width=0.45\textwidth]{fig/UOM.pdf}}
\caption{Learning curves in terms of (a) BLEU, and (b) UoM scores on the validation set.}
\label{fig:learning_curves}
\end{figure}
\paragraph{Learning Curves} We visualized the learning curves of the training process in terms of BLEU and UoM scores on the validation set, as depicted in Figure~\ref{fig:learning_curves}.
For our approach, the training process can be divided into three stages (separated by gray dotted lines). In the first stage, the model was only trained on the raw data (``NAT w/o KD'').
In the second stage, we finetuned the pre-trained NAT model on the distilled data with additional DDP objective (``+distilled data+DDP''). In the final stage, the interpolation weight of DDP objective is decayed to 0, and the NAT model is trained on the distilled data with the standard objective (the same as the vanilla NAT). As seen, the later two stages consistently improves the BLEU scores of trained NAT models by reducing the lexical modes (``UoM''), which outperform the vanilla NAT model (``NAT w/ KD''). We attribute the improvement to that our approach successfully makes NAT models learn to distill knowledge from the raw data by themselves.
\fi
\section{Related Work}
\iffalse
\paragraph{Improving NAT Models}
Compared with AT models, NAT breaks the conditional dependencies, which greatly degrades the model performance.
To bridge the performance gap between NAT and AT models, a variety approaches have been proposed, such as multi-turn refinement mechanism~\citep{lee2018deterministic, ghazvininejad2019mask, gu2019levenshtein, kasai2020parallel}, rescoring with AT models~\citep{wei-etal-2019-imitation,flowseq2019,sun2019fast}, adding auxiliary signals to improve model capacity~\citep{wang2019non,Ran2019GuidingNN,guo2019non}, and advanced training objective~\citep{wei-etal-2019-imitation,shao2019minimizing}.
Our work is complementary to theirs: while they focus on improving NAT models trained on the distilled data, we refine the NAT models by exploiting the knowledge in the raw data. Experimental results show that our approach further improves performance over several SOTA NAT models trained on the distilled data~\citep{gu2019levenshtein,ghazvininejad2019mask}.
\fi
\paragraph{Understanding Knowledge Distillation for NAT}
Knowledge distillation is a crucial early step in
the training of most NAT models.
~\citet{ren2020astudy} reveal that the difficulty of NAT heavily depends on the strength of the dependency among target tokens, and that knowledge distillation reduces the token dependency in the target sequence and thus improves the accuracy of NAT models.
In the pioneering work of NAT,~\citet{NAT} claim that NAT suffers from the multi-modality problem (i.e. multiple lexical translations for a source word), and knowledge distillation can simplify the dataset, which is empirically validated by~\citet{zhou2019understanding}.
We confirm and extend these results, showing that the AT-distilled dataset indeed leads to more deterministic predictions but propagates low-frequency lexical choice errors. To this end, we enhance the lexical predictions of NAT models by making them learn to distill knowledge from the raw data by themselves.
\paragraph{Lexical Choice Problem in NMT Models}
Benefiting from continuous representations abstracted from the training data, NMT models have advanced the state of the art in the machine translation community. However, recent studies have revealed that NMT models suffer from inadequate translation~\citep{Tu:2016:ACL}, for which mis-translation errors caused by the lexical choice problem are one main reason.
For AT models,~\citet{arthur2016incorporating} alleviate this issue by integrating a count-based lexicon, and~\citet{nguyen-chiang-2018-improving} propose an additional lexical model, which is jointly trained with the AT model.
The lexical choice problem is more serious for NAT models, since 1) the lexical choice errors (on low-frequency words in particular) of the AT distillation propagate to NAT models; and 2) NAT lacks target-side dependencies and thus misses necessary target-side context. In this work, we alleviate this problem by addressing the first challenge.
\section{Conclusion}
In this study, we investigated the effects of KD on lexical choice in NAT.
We proposed a new metric to evaluate the lexical translation accuracy of NAT models, and found that 1) KD improves global lexical predictions; and 2) KD benefits the accuracy of high-frequency words but harms that of low-frequency ones, leaving a discrepancy between high- and low-frequency words after adopting KD. To bridge this discrepancy, we exposed the useful information in the raw data to the training of NAT models.
Experiments show that our approach consistently and significantly improves translation performance across language pairs and model architectures. Extensive analyses reveal that our method reduces mis-translation errors, improves the accuracy of lexical choice for low-frequency source words, and recalls more low-frequency words in the translations, which confirms our claim.
Do you remember when your heart sank when your team lost the toss of a crucial match on a tricky pitch? Or do you recall the annoying moment when the rival fans downplayed your team's victory by pointing out the toss outcome?
There is no doubt that the outcome of a cricket match can be affected by the toss of a coin~\citep{Mong18a}. On a fresh lush pitch, the test team being sent out to bat first may already be at a disadvantage as the juicy pitch provides ideal conditions for seam and swing. On a dry dusty pitch, the team that bats last on the fifth day faces a significant disadvantage as the pitch will have greatly deteriorated over the course of the match.
The winner of the toss can also get an undue advantage in T20 matches especially in dewy conditions in the evenings. The trend has been noticed in various T20 tournaments including PSL~\citep{Raso21a}. During the ICC T20 World Cup 2021, 12 out of 13 matches in Dubai were won by the chasing team. Simon Burnton from The Guardian wrote that ``\emph{every evening match has seen 22 highly skilled sportspeople spend several hours straining to see if they can have a greater impact on the result than the momentary flight of a small metal disc before the action began, and in general they have failed}''~\citep{Burn21a}. When huge stakes are at play (such as the final of an ICC World Cup or the ICC Test Championship), it seems highly unreasonable for the outcome of the match to be affected by the outcome of the toss. When two evenly matched teams play, you want an exciting and evenly matched game. \citet{Chop13a} pointedly asked:
\begin{quote}
\emph{``Would football tolerate a toss of a coin making one side play with only 10 players? In swimming or Formula One could chance decide pole position? Would every boxing match begin with the tossing of a coin which would permit one of the two boxers a chance to land a couple of hard rights as the bout began? I don't think so.''}
\end{quote}
\section{The Toss, Propose and Choose Method}
The issue of the unreasonable impact of the toss has been discussed for years~\citep{SoWi16a}. It is important enough that the ICC has considered searching for fairer solutions~\citep{Cric18a}. So how can we ensure that the toss continues to be meaningful but doesn't have an unfair effect on the outcome of the match?
I have a proposal to keep the toss in cricket but make it fair. I propose a minimal adaptation to the rules of cricket. I call the proposal the Toss, Propose and Choose method.
\begin{tcolorbox}
\textbf{The Toss, Propose and Choose method. }
\begin{enumerate}
\item[] \textbf{TOSS}: The toss takes place as per tradition. [Let us call the captain who wins the toss the 'Lucky Captain' and the captain who loses the toss the 'Unlucky Captain'].
\item[] \textbf{PROPOSE}: The Unlucky Captain judges whether bowling or batting first is disadvantageous and estimates the marginal impact of bowling versus batting first in terms of runs. Say the estimated number of runs is $b$. The Unlucky Captain proposes the following two options to the Lucky Captain:
\begin{enumerate}
\item[]Option 1. Take the advantageous turn but give $b$ bonus runs to the other team
\item[] Option 2. Take the disadvantageous turn but get $b$ bonus runs.
\end{enumerate}
\item[] \textbf{CHOOSE}: The Lucky Captain (who won the toss) then chooses either option 1 or 2.
\end{enumerate}
\end{tcolorbox}
Figure~\ref{fig:toss} gives a graphical illustration of the Toss, Propose and Choose method
\begin{figure}[h!]
\centering
\includegraphics[scale=0.03]{methodfig}
\caption{A graphical illustration of the Toss, Propose and Choose method. }
\label{fig:toss}
\end{figure}
\begin{example}
Let us illustrate the proposed method with the help of an example. Suppose Australia is playing New Zealand in the ICC Test Championship Final. We do not wish for a championship spanning multiple years to be eventually heavily influenced by the toss outcome. The coin is tossed and Australia's captain wins. At this point, the New Zealand captain is asked to give two options to the Australian captain. New Zealand factors in the playing conditions and team lineups, and feels that they will gain a net 50 run advantage if they bowl first. In view of this, the Australian captain gets two options: bowl or bat first but with the team batting first getting 50 additional runs in the extras column.
\end{example}
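To make the mechanics of the example concrete, here is a small Python sketch of the method (the function and variable names are my own illustrative choices, not part of any playing regulation):
\begin{verbatim}
def toss_propose_choose(bonus_runs, lucky_takes_option_one):
    # bonus_runs: b, the Unlucky Captain's estimate (the 'Propose' step) of how
    #             many runs the advantageous turn is worth.
    # lucky_takes_option_one: the Lucky Captain's decision (the 'Choose' step).
    if lucky_takes_option_one:
        # Option 1: Lucky Captain takes the advantageous turn;
        #           the Unlucky Captain's team is credited b bonus runs.
        return {"advantageous turn": "lucky captain",
                "bonus runs to": "unlucky captain", "bonus": bonus_runs}
    # Option 2: Unlucky Captain keeps the advantageous turn;
    #           the Lucky Captain's team is credited b bonus runs.
    return {"advantageous turn": "unlucky captain",
            "bonus runs to": "lucky captain", "bonus": bonus_runs}

# The example above: New Zealand (unlucky) proposes b = 50; Australia (lucky) chooses.
outcome = toss_propose_choose(bonus_runs=50, lucky_takes_option_one=True)
\end{verbatim}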
Now let us logically explain why the proposed method is fair to both parties. The Unlucky Captain (who loses the toss) gets a full opportunity to ensure that the possible options (1 and 2) weigh up equally, so irrespective of which option is chosen, s/he does not feel unlucky. More precisely, the Unlucky Captain has no envy and does not wish to switch between the two options. The Lucky Captain gets the final say in choosing the preferred option among 1 and 2. Hence, the Lucky Captain also feels that their team is not disadvantaged. Voila! The method I am advocating has solid foundations as it can be viewed as a suitable adaptation of the classical Divide and Choose method for fairly dividing a divisible resource~\citep{BrTa96a}. Just as Divide and Choose, my method does leave scope for the New Zealand Captain to be strategic. For example, if Australia's captain habitually prefers bowling first, the New Zealand captain can increase the bonus runs from 50 to 70. However, even under strategic behaviour, the outcome remains fair (envy-free).
Let me give another perspective on why the proposed method is fair. Suppose bowling first and batting first are compared on a balanced weight scale by the Unlucky Captain, who is concerned that the Lucky Captain is going to select the heavier side. The Unlucky Captain has the chance to put sufficient weight (bonus runs) on the lighter side to ensure that both sides have the same weight. In that case, the Unlucky Captain does not need to worry about which side the Lucky Captain will choose.
A typical complaint against the proposal could be: why complicate a simple toss? However, cricket has already been open to more complicated methods such as Duckworth-Lewis-Stern to make other situations fair. If the Unlucky Captain truly believes that it does not matter who bats first, then s/he can stick to the default of zero bonus runs. Otherwise, the captain can do the following thought experiment: ``Would I prefer to bowl or bat? Would I be indifferent if I added 10 runs to the lesser preferred option? Or 20 runs?'' At some point, one would be indifferent between the two options because of the bonus runs added to the lesser preferred option. Cricket legend Sunil Gavaskar opined that the issue of the unfair toss ``\emph{is something for the ICC Cricket Committee to get their heads around and make sure that there is a level playing field for both teams}''~\citep{Nand21a}. Ian Chappell echoed similar thoughts~\citep{Hind21a}. The main objective of the Toss, Propose and Choose method is to level the playing field and eliminate the potentially significant headstart the winner of the coin toss gets.
\section{Comparison with Other Proposals}
Apart from my proposal, a few other ideas have been floated to avoid the undue impact of a coin. A frequently proposed solution is for the teams to alternate batting first over the course of a series. The solution regains some fairness over time, especially if the series has enough matches. However, it doesn't resolve the issue of an unfair toss for the deciding or final match or for tournaments. Another proposal is for the weaker, or the touring team, to decide. However, the proposal may give too much advantage to the slightly weaker team and may incentivize teams to have lower rankings. Secondly, the match (say the ICC World Cup) may be on neutral ground, in which case both teams may be touring.
One elegant proposal to achieve fairness is to get rid of the toss entirely and introduce an auction~\citep{Fran18a,SoDe21a}. It leads to certain issues such as ensuring simultaneous bids in a transparent yet TV-friendly manner; the decision always going to the more aggressive bidder (which helps teams that are one-trick ponies, such as those that always chase); the higher bidder being a victim of the winner's curse (a phenomenon encountered in real estate auctions); and the need for tie-breaking under identical bids. Finally, bypassing the toss completely eliminates an element that many might see as intrinsic to traditional cricketing sensibilities. In this sense, the Toss, Propose and Choose method is a minimal and transparent adaptation of the existing toss to regain fairness, and it only requires a handicap elicitation from one of the captains. It makes it easy to understand why both sides are free of envy of each other. If the Unlucky Captain is indifferent between bowling and batting first, the method coincides with the traditional toss. In 2018, some of the fair alternatives were considered but the ICC decided to keep the toss~\citep{Wisd18a}. The decision was in line with MCC clause 13.5 (``\emph{the captain of the side winning the toss shall decide whether to bat or to field and shall notify the opposing captain and the umpires of this decision}''~\citep{MCC}). My proposal is to keep the toss but to make it fair.
\section{Final Words}
So is the cricket community ready to consider the Toss, Propose and Choose method that is designed for a high-stakes match? Ironically, I feel that a suitable place to experiment is in low-stakes junior matches to see how the cricketing community feels about it. I won't complain if the proposal also gets taken up at the next ICC Test Championship Final! I suspect that the captain who loses the toss won't either.
\section*{Acknowledgments}
The author thanks Robbie Boland for valuable comments.
\bibliographystyle{aer2}