\documentclass[default]{sn-jnl}
% Use the lineno option to display guide line numbers if required.
% Note that the use of elements such as single-column equations
% may affect the guide line number alignment.
\usepackage{graphicx}%
\usepackage{multirow}%
\usepackage{amsmath,amssymb,amsfonts}%
\usepackage{amsthm}%
\usepackage{mathrsfs}%
\usepackage[title]{appendix}%
\usepackage{xcolor}%
\usepackage{textcomp}%
\usepackage{manyfoot}%
\usepackage{booktabs}%
\usepackage{algorithm}%
\usepackage{algorithmicx}%
\usepackage{algpseudocode}%
\usepackage{listings}%
\usepackage{flushend}
\usepackage{float}
\usepackage{dsfont}
\usepackage{setspace}
\setstretch{0.95}
% \usepackage{widetext}
%% as per the requirement new theorem styles can be included as shown below
\theoremstyle{thmstyleone}%
\newtheorem{theorem}{Theorem}% meant for continuous numbers
%%\newtheorem{theorem}{Theorem}[section]% meant for sectionwise numbers
%% optional argument [theorem] produces theorem numbering sequence instead of independent numbers for Proposition
\newtheorem{proposition}[theorem]{Proposition}%
%%\newtheorem{proposition}{Proposition}% to get separate numbers for theorem and proposition etc.
\theoremstyle{thmstyletwo}%
\newtheorem{example}{Example}%
\newtheorem{remark}{Remark}%
\theoremstyle{thmstylethree}%
\newtheorem{definition}{Definition}%
\raggedbottom
%%\unnumbered% uncomment this for unnumbered level heads
\begin{document}
\title{Limitation of time promotes cooperation in temporal games}
% Use letters for affiliations, numbers to show equal authorship (if applicable) and to indicate the corresponding author
\author[1,2]{Jiasheng Wang} %\email{1510478@tongji.edu.cn}
\affil*[1]{Department of Computer Science and Technology, Tongji University, 4800 Cao'an Road, Shanghai 201804, China}
\affil[2]{Key Laboratory of Embedded System and Service Computing (Tongji University), Ministry of Education, Shanghai 200092, China}
\author*[1,2]{Yichao Zhang}
%\affil[1]{Department of Computer Science and Technology, Tongji University, 4800 Cao'an Road, Shanghai 201804, China \\ Key Laboratory of Embedded System and Service Computing (Tongji University), Ministry of Education, Shanghai 200092, China}
\author*[3,4]{Guanghui Wen}
\affil[3]{Department of Systems Science, School of Mathematics, Southeast University, Nanjing 210016, China}
\affil[4]{School of Engineering, RMIT University, Melbourne VIC 3000, Australia}
\author*[1,2]{Jihong Guan} %\email{jhguan@tongji.edu.cn}
\author[5,6]{Shuigeng Zhou} %\email{sgzhou@fudan.edu.cn}
\affil[5]{Shanghai Key Laboratory of Intelligent Information Processing, Shanghai 200433, China}
\affil[6]{School of Computer Science, Fudan University, 220 Handan Road, Shanghai 200433, China}
\author[7]{Guanrong Chen} %\email{gchen@ee.cityu.edu.hk}
\affil[7]{Department of Electrical Engineering, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon, Hong Kong SAR, China}
\author[8]{Krishnendu Chatterjee}
\affil[8]{Institute of Science and Technology Austria, A-3400 Klosterneuburg, Austria}
\author[9,10,11]{Matja{\v z} Perc}
\affil[9]{Faculty of Natural Sciences and Mathematics, University of Maribor, Koro{\v s}ka cesta 160, 2000 Maribor,
Slovenia}
\affil[10]{Department of Medical Research, China Medical University Hospital, China Medical University, Taichung, Taiwan}
\affil[11]{Complexity Science Hub Vienna, Josefst{\"a}dterstra{\ss}e 39, 1080 Vienna, Austria}
\abstract{Temporal networks are obtained from time-dependent interactions between individuals. An interaction can be an email, a phone call, a face-to-face meeting, or a collaboration. We propose a temporal game framework in which interactions between rational individuals are embedded into two-player games with a time-dependent aspect of interaction. This allows us to study the time-dependent complexity and variability of interactions and how they affect prosocial behavior. Based on a simple mathematical model, we find that the level of cooperation is promoted when the time of collaboration is limited and identical for every individual. We confirm and validate this with a series of systematic human experiments that form a foundation for comprehensively describing human temporal interactions in collaborative environments. Our research reveals an important incentive for human cooperation, and it lays the foundations for better understanding this fascinating aspect of our nature in realistic social settings.}
\keywords{temporal networks, non-cooperative game, human subjects, cooperation}
\maketitle
\section{Introduction}\label{sec1}
Many complex collaborative systems in nature, society, and engineering can be modeled through networks. In a network, nodes represent collaborating individuals, and links represent their friendships~\cite{albert_rmp02}. In the early stage of network modeling, links were simplified to be unweighted, undirected, and static. To improve the ability to depict real systems, weighted~\cite{shen_rsos18}, directed~\cite{Pagan2019game}, and dynamic~\cite{melamed_pnas18} network models have been put forward successively.
The application of these network models in various fields has amply demonstrated that the closer a framework is to reality, the stronger its ability to explain observed behaviors. As an intriguing behavior in human collaborative systems, the emergence of cooperation has attracted researchers from the social and natural sciences for half a century~\cite{boyd2005solving, gachter2009reciprocity, rand_tcs13, perc_pr17}. Although we are certainly not exempt from selfishness and the fundamental principles of Darwinian evolution, cooperation is nevertheless ubiquitous across human societies~\cite{nowak_11}. While the impetus for our strong cooperative drive has been linked to the difficulties of the genus \textit{Homo} in rearing offspring that survived, to the emergence of alloparental care~\cite{hrdy_11}, and to the formation of alliances in times of conflict~\cite{bowles_11}, it is still puzzling why we have, as a species, achieved such high levels of cooperation. Our altruistic behaviors distinguish us markedly from other mammals, and they indeed form the bedrock of our astonishing evolutionary success. The study of human cooperation in $n$-person games began with population games, also known as mean-field games~\cite{maynard_n73,hofbauer_98,cressman_03}. In such a well-mixed population, cooperation can hardly prevail under imitative update rules when individuals play non-cooperative games such as the prisoner's dilemma~\cite{szabo_pr07}. If the population exhibits a relatively stable social structure, the consequence may be different~\cite{santos_prl05, ohtsuki_n06, santos_pnas06, santos_n08, tanimoto_pre07, fu_pre09, lee_s_prl11, rand_pnas14, fu2017leveraging, allen2017evolutionary, fotouhi_rsif19} -- a finding with roots in the seminal paper by Nowak and May~\cite{nowak_n92b}, who observed clusters of cooperators on a square lattice that protected them from invading defectors. Nevertheless, social networks are seldom static.
We disconnect, reconnect, and form connections with new people over time. This realization has revealed new mechanisms for cooperation that may sustain cooperative behavior under extremely adverse conditions, when the temptation to defect is high and where cooperation on static networks would have long perished~\cite{perc_bs10}. An individual also does not interact with all his friends all the time but likely does so only occasionally. To account for these aspects, dynamic networks have been studied. The implications of dynamic interaction patterns for human cooperation are indeed profound, and recent human experiments, as well as theoretical research, confirm this to the fullest~\cite{rand_pnas11, fehl_el11, wang_j_pnas12, szolnoki_epl14b, wang_z_njp14, shen_rsos18}. It was argued, for example, that such observations demonstrate the effect of reputation~\cite{melamed_pnas18}. Individuals may connect with unfamiliar individuals after browsing their gaming records while cutting the existing connections with unsatisfactory partners. Some may even break ties, instead of defecting, as a way to penalize defectors~\cite{rand_pnas11}. Interestingly, the implication of dynamic reconnection fades out when individuals choose specific moves to play games with their partners~\cite{melamed_pnas18}. In this light, an interesting question is whether dynamic reconnection remains relevant to the level of cooperation in a human collaborative system when there is a time limit on the duration of a game. From the perspective of biological markets~\cite{EHB34164}, the dynamic reconnection in such a system is a reallocation of collaboration time in a time-limited collaborative environment. Has too much emphasis been put on the structure of our social networks, at the expense of the temporal aspects of our interactions? In what follows, we address these critical questions in detail.
Due to the complexity of temporal systems, using evolutionary game theory to model individuals' collaborations is rather challenging. First of all, the evolution mechanism of a temporal system is itself complicated and difficult to describe with a simple mathematical model. Secondly, in temporal games, an individual's strategy involves not only the moves in the games but also the allocation of time within a round. This openness allows individual strategies and network topologies to co-evolve in more flexible ways than in the existing dynamical gaming networks~\cite{zhang2018gaming, miritello2013time}, which further raises the difficulty of modeling the coupled systems. In this paper, we present a temporal gaming framework built upon the structure of temporal networks~\cite{GTN,holme_sr12}. The goal is to test the impact of limited time on the level of cooperation in two-player collaborative systems. Such systems are common in reality. For instance, it typically takes a team to accomplish a project when applying for funding. The project leader typically collaborates with a member to accomplish a specific part of it. Correspondingly, the member or the leader can also be involved in more than one project. Simultaneously, the total number of working months for each participant is limited and identical. In such a scenario, a temporal gaming network is naturally formed. Admittedly, the collaboration between two team members is closer to the stag hunt game than to the Prisoner's Dilemma (PD) game. However, since cooperation normally dominates a collaborative system playing the stag hunt game, one could hardly differentiate the impacts of other mechanisms; we therefore adopt the PD game in this paper.
%We need to draw another illustration to explain the temporal games not that on networks.
One of our key contributions is a detailed online experiment for the theoretical framework. We first establish a gaming platform to implement a temporal game scenario.
Next, we test the level of cooperation on the platform in a divide-and-conquer (D\&C) mode~\cite{zhang_yc_sr15,wang_js_sr17,melamed_pnas18}, where the difference from the settings of existing works~\cite{fehl_el11,rand_pnas11,gracia-lazaro_pnas12,rand_pnas14} lies in the targeted decisions. Finally, we test the level of cooperation on the platform in a time-involved mode, where both the time limitation for individuals and targeted decisions are considered. The reasons for adopting these mechanisms will be provided in Section~\emph{Experimental design}. What we are looking for is whether the limitation on time resources governs human cooperation in the games. In what follows, we will focus on this factor.
%Demonstration of the assumption
To clarify the impact of the limited time, we invited 183 human subjects and designed a set of comparative online experiments. In a match, the participants are allocated to the nodes of pre-generated networks. We test two classes of networks, Barab\'{a}si and Albert's scale-free networks~\cite{BA} and Watts and Strogatz's small-world networks, since they are the best-known social network models. We show that the limitation on individuals' time resources statistically promotes the participants' level of cooperation, which aligns with the theoretical prediction presented below.
\section{Theoretical framework of temporal games}\label{FTG}
\subsection{Temporal game model}\label{Tgm}
In a two-strategy (i.e., only two moves are allowed) game, define $i$'s strategy as $ \Omega_i=\left( \begin{array}{c} X_i \\ 1-X_i \\ \end{array} \right),%\label{eqn:ti}
$ where $X_i$ can only take $1$ or $0$ in each game. If $X_i=1$, $i$ is a cooperator denoted by $C$. If $X_i=0$, $i$ is a defector denoted by $D$. Take the PD game~\cite{maynard_82} for example: its payoff table is a $2\times2$ matrix.
Given $i$'s strategy, $i$'s payoff in the game playing with all his neighbors (denoted by $N_i$) can be written as $ G_i={\Omega_i^T}\left( \begin{array}{cc} \mathcal{R} & \mathcal{S}\\ \mathcal{T} & \mathcal{P}\\ \end{array} \right)\sum_{j \in N_i}{\Omega_j}.%\label{eqn:G_i}
$ In the PD model, a player gains $\mathcal{T}$ (temptation to defect) for defecting against a cooperator, $\mathcal{R}$ (reward for mutual cooperation) for cooperating with a cooperator, $\mathcal{P}$ (punishment for mutual defection) for defecting against a defector, and $\mathcal{S}$ (sucker's payoff) for cooperating with a defector. Normally, the four payoff values satisfy the following inequalities: $\mathcal{T}>\mathcal{R}>\mathcal{P}>\mathcal{S}$ and $2\mathcal{R}>\mathcal{T}+\mathcal{S}$. Here, $2\mathcal{R}>\mathcal{T}+\mathcal{S}$ makes mutual cooperation the best outcome from the perspective of the collective. The temporal game model proposed in this paper extends the game model of Refs.~\cite{wang_js_sr17,zhang_yc_sr15} by taking the time of interactions into account. As the model is time-involved, each interaction is assigned a specific duration. The total game time for each individual in a round is set to be constant and identical for all individuals, in keeping with real-life scenarios. An individual's interactions with different partners are assumed to be independent. The payoff of the game between individuals $i$ and $j$ can be written as $ s_{i,j}=\Omega_{i,j}^T\left( \begin{array}{cc} \mathcal{R} & \mathcal{S}\\ \mathcal{T} & \mathcal{P}\\ \end{array} \right)\Omega_{j,i}\,. %\label{eqn:sij}
$ In the temporal games, the payoff of each interaction is proportional to the time it spans.
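As a concrete numerical sketch of the pairwise payoff $s_{i,j}$ above, the bilinear form can be evaluated in a few lines of Python. The payoff values $\mathcal{T}=5$, $\mathcal{R}=3$, $\mathcal{P}=1$, $\mathcal{S}=0$ are the illustrative ones from Fig.~\ref{fig:ITG}, not values prescribed by the model, and the function names are ours:

```python
import numpy as np

# Sketch of the pairwise payoff s_{i,j} = Omega_i^T A Omega_j, using the
# illustrative PD values from Fig. 1 (T=5, R=3, P=1, S=0).
T, R, P, S = 5.0, 3.0, 1.0, 0.0
A = np.array([[R, S],
              [T, P]])  # rows/columns ordered (C, D)

def strategy(x):
    """Omega = (X, 1-X)^T with X=1 for a cooperator, X=0 for a defector."""
    return np.array([x, 1 - x])

def pair_payoff(x_i, x_j):
    """Payoff of i in a single game against j."""
    return strategy(x_i) @ A @ strategy(x_j)

# sanity checks against the payoff-table entries and the PD inequalities
assert pair_payoff(1, 1) == R  # cooperating with a cooperator
assert pair_payoff(0, 1) == T  # defecting against a cooperator
assert pair_payoff(1, 0) == S  # cooperating with a defector
assert pair_payoff(0, 0) == P  # mutual defection
assert T > R > P > S and 2 * R > T + S
```

The assertions simply confirm that the bilinear form picks out the four table entries and that the chosen values satisfy $\mathcal{T}>\mathcal{R}>\mathcal{P}>\mathcal{S}$ and $2\mathcal{R}>\mathcal{T}+\mathcal{S}$.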
In one round of the game, the accumulated payoff of individual $i$ is defined as \begin{equation} \Lambda_i = \sum\limits_{j \in {N_i}} {\frac{\tau_{i,j}}{\mathfrak{T}} \times s_{i,j}},\label{LambdaiN} \end{equation} where $N_i$ is the set of $i$'s neighbors and $\tau_{i,j}$ is the duration of the interaction between individuals $i$ and $j$. As shown in Fig.~\ref{fig:ITG}A, let $i$ and $j$ be the individuals colored red and blue. Then $|N_i|=4$ and $\tau_{i,j}=8$. Notably, $\tau_{i,j}$ should satisfy the constraints $\tau_{i,j} \in [0, \mathfrak{T}]$ and $\sum\limits_{j \in {N_i}}{\tau_{i,j}} \leqslant \mathfrak{T}$. Here, $\mathfrak{T}$ is the total time resource of an individual in each round, which is a constant for all individuals in our model. In Fig.~\ref{fig:ITG}A, $\mathfrak{T}=24$. If individual $i$ does not want to collaborate with $j$, then $i$ will not apply for a game with $j$ any longer. Simultaneously, $i$ will reject $j$'s gaming request. In this case, $\tau_{i,j}$ will be $0$, as for the relation between the red and the green individuals in Fig.~\ref{fig:ITG}A.
\begin{figure}[ht]%[tbhp]
\centering
\includegraphics[height=2 in,clip,keepaspectratio,]{Illustration_of_temporal_game.eps}
\caption{Illustration of the temporal game. Panel A shows a round of the temporal game among 5 individuals. The individual colored red has 4 friends, among which the individuals colored orange and blue are his gaming partners. If a game between two individuals lasts for 24 hours, a cooperator gains a payoff of 3 from a cooperator and 0 from a defector, while a defector gains 5 from a cooperator and 1 from a defector.}
\label{fig:ITG}
\end{figure}
Letting $P_i$ be the set of partners who interacted with $i$ in the round, Eq.~\ref{LambdaiN} can be written as \begin{equation} \Lambda_i = \sum\limits_{j \in {P_i}} {\frac{\tau_{i,j}}{\mathfrak{T}} \times s_{i,j}}, \label{LambdaiP} \end{equation} where $\tau_{i,j}$ is greater than 0.
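The time-weighted sum of Eq.~\ref{LambdaiN} can be sketched directly. Here $\mathfrak{T}=24$ and the payoff values follow the illustration in Fig.~\ref{fig:ITG}, while the partner list in the example call is hypothetical:

```python
# Sketch of Eq. (1): Lambda_i = sum_j (tau_ij / T_total) * s_ij, with the
# illustrative values of Fig. 1 (T_total = 24 hours, payoffs 5/3/1/0).
PAYOFF = {("C", "C"): 3.0, ("C", "D"): 0.0,
          ("D", "C"): 5.0, ("D", "D"): 1.0}
T_TOTAL = 24.0

def accumulated_payoff(move_i, partners):
    """partners: list of (tau_ij, move_j) for everyone i actually played with."""
    return sum(tau / T_TOTAL * PAYOFF[(move_i, move_j)]
               for tau, move_j in partners)

# Hypothetical example: a cooperator spending 8 hours with a cooperator
# and 16 hours with a defector gets 8/24 * 3 + 16/24 * 0 = 1.0.
lam = accumulated_payoff("C", [(8.0, "C"), (16.0, "D")])
```

Note that the durations in the example exhaust the full budget $\mathfrak{T}$; any allocation with $\sum_j \tau_{i,j} \leqslant \mathfrak{T}$ would be equally valid.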
For the red individual in Fig.~\ref{fig:ITG}A, the orange and the blue are his partners in this round. Based on Eq.~\ref{LambdaiP}, the payoffs of the 5 individuals are listed in Fig.~\ref{fig:ITG}B.
%
In a mean-field view, Eq.~\ref{LambdaiP} can be written as \begin{equation} \Lambda_{k_i} = \sum\limits_{k_j} {\frac{\tau_{k_i,k_j}}{\mathfrak{T}} P\left(k_i,k_j\right) s_{k_i,k_j}}, \label{LambdaiPkikj} \end{equation} where $P(k_i,k_j)$ is the probability that a link exists between $i$ and $j$, which depends on the topology of the collaborative network. We show an illustration of such a collaborative network in Fig.~\ref{fig:tdnc}A. To clarify the generating procedure of the network, we provide the communication log among the individuals in this round in Fig.~\ref{fig:tdnc}B. In the log, Alice tried to collaborate with Tom for the whole duration $\mathfrak{T}$, but Tom had already agreed to work with Jerry and Frank when he received Alice's request. Thus, Alice turned to Frank and Jerry, but it was a bit late to make appointments with them as they were already partially engaged. As a result, Alice took $0.8\mathfrak{T}$ to play with Frank and Jerry and wasted $0.2\mathfrak{T}$ in this round.
%
\begin{figure*}[!htbp]%[tbhp]
\centering
\includegraphics[width=\textwidth]{vis.eps}
\caption{Illustration of the temporal games in a two-player collaborative system. (A) One round of the temporal game on a social network. The blue circle is Jerry's neighborhood. Alice, Bob, and Tom are Jerry's partners in this round. The color of a time slot represents a partner; for instance, yellow represents Frank. $C$ or $D$ in a time slot denotes the move from the individual at the tail of a directed dashed line to the indicated partner. (B) The generating procedure of the circumstance presented in (A). In the communication log, the records are sorted by their sequence numbers in ascending order.
Only if both players agree to collaborate (the response to a request is OK) will their colors appear in each other's collaboration schedule, i.e., a time slot in (A).}
\label{fig:tdnc}
\end{figure*}
%
For heterogeneous networks such as the Barab\'{a}si-Albert (BA) networks~\cite{barabasi_s99}, $P(k_i,k_j)\sim\frac{k_jP(k_j)}{\langle k\rangle}$. For homogeneous networks such as the Watts and Strogatz (WS) networks~\cite{Watts98Nature}, $P(k_i,k_j)\sim P(k_j)$.
%To model the correlation between the level of cooperation and the available time resource in the neighborhood, shown in our experimental results, a factor $\theta$, called the backup to defect, is now introduced.
%%\begin{equation}
%% \theta_{k_i,r} = \hat \tau_{f_{k_i,r}} - \sigma_{k_i,r-1}\tau_{u_{k_i,r-1}} P_l, \label{eqn:ttd}
%%\end{equation}
%We define $\theta_{k_i,r}$ as the total available time resource of the neighbors of an individual $i$ with degree $k_i$ at round $r$, namely,
%\begin{equation}
% \theta_{k_i,r} = min\left(\sum\limits_{k_j} P\left(k_i,k_j\right)S_{k_j}\left(r\right),\mathfrak{T}\right), \label{eqn:tf}
%\end{equation}
%where $S_{k_j}\left(r\right)$ denotes the average available time of a neighbor with degree $k_j$ at round $r$.
When $\theta > 0$, trading partners may bring $i$ more payoff, since $\mathcal{T}>\mathcal{R}$ in the PD game. Interestingly, one can see that $\theta_{k_i,r} = S\left(r\right)$ in the WS networks. Therefore, one can use $S\left(r\right)$ to measure the available time resource in the neighborhood. Although $\theta_{k_i,r} \neq S\left(r\right)$ in the BA networks, $S\left(r\right)$ still adequately depicts the average available time resource in the neighborhood. Therefore, we will adopt $S\left(r\right)$ to represent the available time resource in the system.
\subsection{Proportion of cooperation in the temporal game}
In the temporal game, each game between partners is coupled with a duration.
Therefore, the level of cooperation should be measured by both the durations and the moves. We define the proportion of cooperation as ${P_c} = \frac{{{T_C}}}{T_G}$, where $T_G$ is the total duration of the moves and $T_C$ is the total duration of cooperation in the games.
%\subsection{Dissipative system in the temporal game}
%
%In our experiment, two types of the temporal social dilemma, dissipative scenario and classical scenario, are tested. In the dissipative scenario, individuals are provided with an initial resource. In each round, individuals play the game with some of their friends to earn more payoffs. Specifically, the aggregated payoff of an individual $i$ in round $r$ ($r\in\mathds{N}$) can be written as
%\begin{equation}
%{\phi_{i,r}} = \left\{ {\begin{array}{*{20}{cl}}
%{{\phi_{i,r - 1}} + {S_{i,r}} - \varepsilon }, & {r\in\mathds{N^+}},\\
%{{\phi_0}}, & {r = 0},
%\end{array}} \right. \label{eqn:f}
%\end{equation}
%where $\varepsilon$ denotes the cost in each round, $\phi_0$ is the initial resource, and $S_{i,r}$ is the total payoff of individual $i$ in round $r$ defined in Eqn.~(\ref{LambdaiP}). In the dissipative scenario, we set $\phi_0=5$ and $\varepsilon=3$. In the classical scenario, we set $\varepsilon=\phi_0=0$, which naturally leads to a monotonic ascent of $\phi_{i,r}$ for all $i$ in a match.
%Caveats
Note that current studies on decision time~\cite{evans2019cooperation, evans2015fast} in experimental psychology and response time in experimental economics~\cite{yamagishi2017response, spiliopoulos2018bcd} focus on the time of making a decision rather than the duration of moves. Therefore, the object of such studies is different from that of temporal games.
%In the following, we will propose a model to reproduce the statistical results in our empirical experiments.
As modeling the level of cooperation is a rather challenging task, we adopt a mean-field method and make some necessary assumptions.
Note that the assumptions are not components of the temporal games but only tools for modeling the level of cooperation.
\subsection{Mathematical modeling of the available time of individuals}\label{MMATI}
For each game between two players, each player experiences one of four possible outcomes, namely, cooperating with a cooperator (CC), cooperating with a defector (CD), defecting against a cooperator (DC), and defecting against a defector (DD). We define a state vector $\mathbf{\Phi}$ by $(\Phi_{CC},\Phi_{CD},\Phi_{DC},\Phi_{DD})$, in which each entry corresponds to the probability of experiencing the respective outcome. Generally, a memory-one strategy can be written as $\mathbf{p}=(p_{CC},p_{CD},p_{DC},p_{DD})$, corresponding to the probabilities of cooperating under each of the previous outcomes. Since players update their moves with memory-one strategies in each time step, the update can be considered a Markov process. One can find a Markov transition matrix $M_i$ to realize the update. For two players, $i$ and $j$, we have \begin{equation} \tiny{ \!\!M_i\!\!=\!\!\left(\! \begin{array}{cccc} \!\!\!p_{CC}s_{CC} &\!\!\!\! p_{CC}(1-s_{CC}) &\!\!\!\! (1-p_{CC})s_{CC} &\!\!\!\! (1-p_{CC})(1-s_{CC})\!\!\! \\ \!\!\!p_{CD}s_{DC} &\!\!\!\! p_{CD}(1-s_{DC}) &\!\!\!\! (1-p_{CD})s_{DC} &\!\!\!\! (1-p_{CD})(1-s_{DC})\!\!\! \\ \!\!\!p_{DC}s_{CD} &\!\!\!\! p_{DC}(1-s_{CD}) &\!\!\!\! (1-p_{DC})s_{CD} &\!\!\!\! (1-p_{DC})(1-s_{CD})\!\!\! \\ \!\!\!p_{DD}s_{DD} &\!\!\!\! p_{DD}(1-s_{DD}) &\!\!\!\! (1-p_{DD})s_{DD} &\!\!\!\! (1-p_{DD})(1-s_{DD})\!\!\! \\ \end{array} \!\right), } \end{equation} where the vectors $\mathbf{p}=(p_{CC},p_{CD},p_{DC},p_{DD})$ and $\mathbf{s}=(s_{CC},s_{CD},s_{DC},s_{DD})$ denote $i$'s and $j$'s probabilities of cooperation in the next round after experiencing the $CC$, $CD$, $DC$, and $DD$ cases, respectively. Then the evolution of $i$'s state vector $\mathbf{\Phi}_i(r)$ is given by \begin{equation} \mathbf{\Phi}_i(r)=\mathbf{\Phi}_i(r-1)M_i.
\end{equation}
To model the available time of individuals in the temporal games, we first assume that no players at round $r-1$ reject the requests from an individual $i$ if they are available. The time left for $i$ to make use of in round $r$ can be denoted by $S_{i}\left(r\right)=\mathfrak{T} - \sum_{j\in{P_i}}\tau_{\mu_{ij}\left(r-1\right)}$, where $\mu_{ij}\left(r-1\right)$ denotes the random portion of time in the request from $i$ in round $r-1$. If $i$ applies for playing with $j$ for $S_{i}\left(r\right)\mu_{ij}\left(r\right)$, the probability that the request succeeds is \begin{equation} {\omega_{i,j}\left(r,\mu_{ij}\left(r\right)\right)} = \left\{ {\begin{array}{*{20}{cl}} {1}, & {S_{j}\left(r\right)\geq S_{i}\left(r\right)\mu_{ij}\left(r\right)},\\ {0}, & {S_{j}\left(r\right)< S_{i}\left(r\right)\mu_{ij}\left(r\right)}, \end{array}} \right. \label{eqn:f} \end{equation} assuming that $j$ wishes to play. Therefore, the expected change in individual $i$'s available time from round $r$ to $r+1$ is \begin{eqnarray} &\varrho_{i}\left(r\right)=-\sum_{j\in {N_i-P_i\left(r-1\right)}}\omega_{i,j}\left(r,\mu_{ij}\left(r\right)\right)\left(S_{i}\left(r\right)\right. \\\nonumber &\left.+\sum_{l\in P_i\left(r-1\right)}\alpha_{il}\left(r-1\right) \left( \mathbf{\Phi}_{il}\left(r\right) \cdot \begin{bmatrix}\chi_{i,CC} \\ \chi_{i,CD} \\ \chi_{i,DC} \\ \chi_{i,DD}\end{bmatrix}\right)\right)\mu_{ij}\left(r\right), \end{eqnarray} where $\mathbf{\chi_i}$ denotes $i$'s probabilities of reassigning time after experiencing the four outcomes, and $\alpha_{il}\left(r\right)$ denotes the time share which $i$ assigns to $l$ at round $r$. Note that \begin{equation} \sum_{l\in {P_i}}\alpha_{il}\left(r\right)+S_{i}\left(r\right)=1.
\end{equation}
%Then, the change in the total available time for the system will be
%\begin{equation}
%\varrho\left(r\right)=N\sum_{k_i}P(k_i)\left(S_{k_i}(r+1)-S_{k_i}\left(r\right)\right),\label{Gammar}
%\end{equation}
%where $N$ denotes the number of users.
%
%To simplify the fitting procedure of the maximum likelihood, we assume that individual $i$ for all $i$ evenly allocates his time to the neighbors, that is, $\mu=\frac{1}{k_i}$. Eqn.~\ref{Gammar} is reduced to
%\begin{eqnarray}
%&\varrho\left(r\right)=-2N\sum_{k_i}P(k_i)\left(\alpha\left(1-S_{k_i}\left(r\right)\right)\left(1-P_c\left(r\right)^2\right)\right.\\\nonumber
%&\left.+\omega_{k_i,k_j}\left(r,\frac{1}{k_i}\right)P(k_i,k_j)S_{k_i}\left(r\right)\right).
%\end{eqnarray}
%With $\tau_{i,r}$ for all $i$, one can then estimate an individual's neighbors' total available time resources and the backup to defect in round $r+1$.
Considering $S_i\left(r\right)\geq0$ for all $r$, the iterative formula of $S_i\left(r\right)$ should be written as \begin{equation} S_{i}\left(r+1\right)=Relu\left(\varrho_{i}\left(r\right)+S_{i}\left(r\right)\right), \end{equation} where $Relu\left(x\right)= \left\{ {\begin{array}{*{20}{cl}} {x}, & x \geq 0,\\ {0}, & x < 0. \end{array}} \right.$ As the evolution procedure of $S_i\left(r\right)$ in the system cannot be modeled in a mean-field way, an analytical solution is hardly attainable. Therefore, we will present simulation results and empirical results from human online experiments in the following. In the simulations, all agents adopt the same strategy so that the results are reproducible. Let the number of agents be $N_A$. We will show that the average available time $S\left(r\right)=\frac{\sum_i S_i\left(r\right)}{N_A}$ falls to a low level in the first round and stabilizes thereafter, indicating that finding new partners is problematic from the very beginning of a match.
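The memory-one Markov update $\mathbf{\Phi}_i(r)=\mathbf{\Phi}_i(r-1)M_i$ used above can be sketched numerically as follows; the vectors $\mathbf{p}$ and $\mathbf{s}$ here are arbitrary illustrative values, not strategies fitted to our experiments:

```python
import numpy as np

# Sketch of the memory-one Markov update: the state vector
# Phi = (Phi_CC, Phi_CD, Phi_DC, Phi_DD) evolves as Phi(r) = Phi(r-1) M_i.
def transition_matrix(p, s):
    p_cc, p_cd, p_dc, p_dd = p  # i's cooperation probs after CC, CD, DC, DD
    s_cc, s_cd, s_dc, s_dd = s  # j's cooperation probs after CC, CD, DC, DD
    # In i's row for outcome CD, j has just experienced DC (and vice versa),
    # hence the swapped s-indices, matching the matrix M_i in the text.
    rows = [(p_cc, s_cc), (p_cd, s_dc), (p_dc, s_cd), (p_dd, s_dd)]
    return np.array([[pi * sj, pi * (1 - sj), (1 - pi) * sj, (1 - pi) * (1 - sj)]
                     for pi, sj in rows])

p = (0.9, 0.3, 0.6, 0.1)  # illustrative values only
s = (0.8, 0.2, 0.5, 0.1)
M = transition_matrix(p, s)
assert np.allclose(M.sum(axis=1), 1.0)  # each row is a probability distribution

phi = np.array([0.25, 0.25, 0.25, 0.25])  # uniform initial state
for _ in range(50):
    phi = phi @ M  # Phi(r) = Phi(r-1) M_i
assert np.isclose(phi.sum(), 1.0) and (phi >= 0).all()
```

Because $M_i$ is row-stochastic by construction, $\mathbf{\Phi}_i$ remains a probability distribution over the four outcomes at every round.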
%Note that a lower $\theta$ value means a higher risk of being isolated after losing the current partners. If the maximum potential benefit of reconnection cannot bring any extra available time resources, individuals are likely to cooperate in order to maintain the existing partnerships. As a result, $\theta$ should be inversely proportional to the frequency of cooperation. Let $\theta^M$ be the increment of available time resource after an individual loses all his current partners in round $r$, i.e., $\sigma=1$. The measurement looks somewhat useless as most players seem unlikely to defect all their partners simultaneously. Indeed, it is a natural choice when the players only have one partner, or their resources are extremely limited. We observe that the move is quite common after the 12th round in our experiments. Therefore, we set $\theta_{i,r}$ to $\theta_{r}^M$ to reduce the parameters of the model. % %Let $\phi_r$ be an individual's aggregated payoff in round $r$. If he has a partner in round $r-1$, he will cooperate with the partner with the probability %\begin{equation} %{P_{c,r}} = \left\{ {\begin{array}{*{20}{c}} % {0.5}&{{\phi_{r-1}} \leqslant {\phi_{alive}}} \\ % {\alpha + \beta\cdot\theta_{r}^M}&{{\phi_{r-1}} > {\phi_{alive}}} %\end{array}} \right., \label{eqn:pc} %\end{equation} %where $\phi_{alive}=\varepsilon-\mathcal{R}$, and $\varepsilon$ is the compulsory consumption each round. $\varepsilon>0$ in the dissipative mode, and $\varepsilon=0$ in the classical mode. We set $P_{c,r}=0.5$ for ${\phi_{r-1}} \leqslant {\phi_{alive}}$, since only if two players adopt different moves one of them can survive one round more when they are both in this condition. $\alpha\geq0$ and $\beta<0$ are two parameters of the model subject to $0 \leq \alpha + \beta\cdot\theta_{r}^M \leq 1$. 
%To mimic the decay of $\theta$ with the number of rounds, we set
%\begin{equation}
%\theta_{r}^M=max\{-\mathfrak{T},\gamma \cdot r\},\label{eqn:theta}
%\end{equation}
%where $\gamma$ is the third parameter controlling the decreasing rate of $\theta_{r}^M$.
%\begin{tiny}
%\begin{equation}
% \alpha^*, \beta^*, \gamma^*, \xi^* = \mathop{\arg\min}_{ 0 \leq \alpha + \beta \cdot max\{-\mathfrak{T},\gamma \cdot r\} \leq 1, \gamma \in [-\mathfrak{T}, 0] ,\xi \in [0, 1]} \sum_{r=1}^{M_r}(f_{c,r}-P_{c,r})^2,\label{eqn:alpha}
%\end{equation}
%\end{tiny}
%where $f_{c,r}$ and $P_{c,r}$ are the frequency of cooperation obtained by the empirical experiments and the expected probability of cooperation derived from Eqn.~\ref{eqn:pc} in round $r$, respectively. $M_r$ is the maximum number of rounds. $\xi$ is the fourth parameter of the model, which will be introduced later in this section.
%After maximizing the likelihood of $P_{c,r}$, namely, minimizing the variance between $f_{c,r}$ and $P_{c,r}$, one can derive $\alpha^*$, $\beta^*$, $\gamma^*$, and $\xi^*$.
%
%Given that whether two individuals play again depends on the gaming outcome of the previous round, which may be one of $CC$, $CD$, $DC$, and $DD$. From the mean-field perspective, if all the individuals have a unified probability of cooperation $P_c$, which is defined in Eqn.~\ref{eqn:pc}. Following this assumption, the probabilities of the four outcomes in round $r$ are
%\begin{equation}
%\left\{ {\begin{array}{*{20}{l}}
% {P_{CC,r}} = P_{c,r}^2 \\
% {P_{CD,r}} = P_{c,r}(1-P_{c,r}) \\
% {P_{DC,r}} = (1-P_{c,r})P_{c,r} \\
% {P_{DD,r}} = (1-P_{c,r})^2
%\end{array}} \right.. \label{eqn:probabilities}
%\end{equation}
%We assume that all the individuals make full use of their time resources.
%The expected aggregated payoff for each individual in round $r$ ($r \geqslant 1$) can be written as
%\begin{equation}
%\phi_r = \phi_{r-1} + E\left(\Lambda_r\right) - \varepsilon \label{eqn:phir},
%\end{equation}
%where $\Lambda_r$ is an individual's accumulated payoff in round $r$. Its expectation is defined as
%\begin{tiny}
%\begin{equation}
%\! E\left(\Lambda_r\right)\! = \! \frac{{P_{CC,r}}\omega(\mathcal{R},r)\mathcal{R}\!+\!{P_{CD,r}}\omega(\mathcal{S},r)\mathcal{S}\!+\!{P_{DC,r}}\omega(\mathcal{T},r)\mathcal{T}\!+\!{P_{DD,r}}\omega(\mathcal{P},r)\mathcal{P}}{{P_{CC,r}}\omega(\mathcal{R},r)\!+\!{P_{CD,r}}\omega(\mathcal{S},r)\!+\!{P_{DC,r}}\omega(\mathcal{T},r)\!+\!{P_{DD,r}}\omega(\mathcal{P},r)}.
%\end{equation}\label{Elambda_r}
%\end{tiny}
%%
%If the payoff $G$ can not keep the individual from elimination, the payoff in round $r$ will not be accumulated anymore for the individual. Given this fact, define $\omega(G, r)$ and $Q_k(G,r+1)$ as the effective selection factor and the probability of collaboration in round $r+1$, respectively. The formal definition of $\omega(G, r)$ is
%\begin{equation}
% \omega(G, r) = \left\{ {\begin{array}{*{20}{cc}}
% {0} & {\phi_{r-1} + G - \varepsilon \leqslant 0} \\
% {Q_k(G,r+1)} & {otherwise}
% \end{array}} \right., \label{eqn:omega}
%\end{equation}
%where
%\begin{equation}
% Q_k(G,r+1) = \left\{ {\begin{array}{*{20}{cc}}
% {1} & {G = \mathcal{R}} \\
% {\xi} & {otherwise}
% \end{array}} \right.. \label{eqn:pb}
%\end{equation}
%Here $\xi$ denotes the probability of collaboration after at least one of the two players defects, the range of which is $[0,1]$.
% %Summarizing the iterative procedure, the backup to defect $\theta$ in Eqn.~\ref{eqn:theta} is substituted into Eqn.~\ref{eqn:pc} to derive $P_{c,r}$ first; second, $P_{c,r}$ is used to derive $P_{CC,r}$, $P_{CD,r}$, $P_{DC,r}$, and $P_{DD,r}$ in Eqn.~\ref{eqn:probabilities}; with the probabilities of the four outcomes, one has $E\left(\Lambda_r\right)$ in Eqn.~\ref{Elambda_r} and $\phi_r$ in Eqn.~\ref{eqn:phir}; $\phi_r$ is then used to derive $\omega(G, r+1)$ in Eqn.~\ref{eqn:omega} and $P_{c,r+1}$ in Eqn.~\ref{eqn:pc}. The initial payoff $\phi_0$ of both modes is given at the beginning of each match. Therefore, one can iterate the procedure to derive $P_{c,r}$ and finally optimize $\alpha$, $\beta$, $\gamma$, and $\xi$ in Eqn.~\ref{eqn:alpha}. Notably, the mathematical model is based on the statistical and simulation results obtained from our experiments. We adopt a series of approximations to model the human decision-making procedure, aiming only at a better understanding of the procedure; the accuracy of prediction is not our focus. \section{Results}\label{Er} To show the impact of time redistribution, we first simulate the evolution of moves when agents play a traditional Prisoner's dilemma (PD) game with their neighbors in the BA and WS networks. In a network, a player starts a game by sending a gaming request to a neighbor. In our simulations, all the agents in the network are selected one by one, following a random sequence. A selected agent evenly allocates its remaining time across the requests sent to its uncoordinated neighbors. If the requested neighbor has enough time to accept the gaming request, it will do so. After one round of the game, agents uniformly update their moves with the Zero-Determinant Extortionate strategy proposed in reference~\cite{stewart_pnas12}, under which the cooperators are wiped out within 100 rounds. If an agent defects in a round, the pair will be separated with a certain probability.
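The round procedure just described can be sketched in a few lines. The sketch below covers only the time-allocation, acceptance, and separation mechanics; the strategy update (the Zero-Determinant Extortionate rule of~\cite{stewart_pnas12}) is deliberately omitted, and all names and simplifications are ours, not the actual simulation code.

```python
import random

# Sketch of one simulation round: each agent, in random order, splits its
# remaining time evenly among its uncoordinated neighbors; a request is
# accepted only if the target still has enough time. The strategy update
# is omitted; names and simplifications are illustrative only.
TIME_BUDGET = 1440  # time units per round ("one day")

def allocate(time_left, n_uncoordinated):
    """Even split of the remaining time over uncoordinated neighbors."""
    return 0 if n_uncoordinated == 0 else time_left / n_uncoordinated

def play_round(agents, neighbors, moves, p_separate=0.5, rng=random):
    """agents: list of ids; neighbors: dict id -> set of neighbor ids;
    moves: dict id -> 'C' or 'D'. Returns the pairs separated this round."""
    time_left = {a: TIME_BUDGET for a in agents}
    paired, separated = set(), []
    order = agents[:]
    rng.shuffle(order)                  # agents act in a random sequence
    for a in order:
        targets = [b for b in neighbors[a]
                   if (min(a, b), max(a, b)) not in paired]
        share = allocate(time_left[a], len(targets))
        for b in targets:
            if share > 0 and time_left[b] >= share:  # target can afford it
                time_left[a] -= share
                time_left[b] -= share
                paired.add((min(a, b), max(a, b)))
                # a defection dissolves the pair with probability p_separate
                if 'D' in (moves[a], moves[b]) and rng.random() < p_separate:
                    separated.append((a, b))
    return separated
```

Separated pairs would then have their time reallocated among the remaining neighbors in the next round.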
The separation means the time assigned to the pair will be redistributed next round. More details on the simulations will be provided in Section~\emph{Simulation on the social networks}. In Fig.~\ref{fig:APC}(a) and \ref{fig:APC}(b), the results show that the level of cooperation decays with rounds for agents playing the `divide-and-conquer' (D\&C) games~\cite{zhang_yc_sr15,wang_js_sr17,melamed_pnas18} in both networks. Under the temporal mechanisms, the rates of decay slow down in Fig.~\ref{fig:APC}(c) and \ref{fig:APC}(d). We show the difference in the level of cooperation between the temporal games and the D\&C games~\cite{zhang_yc_sr15,wang_js_sr17,melamed_pnas18} in Fig.~\ref{fig:APC}(e) and \ref{fig:APC}(f), which will be amplified when human subjects play. The amplification may originate from $S\left(r\right)$ shown in Fig.~\ref{fig:APC}(g) and \ref{fig:APC}(h), which will be much lower when humans play the temporal games. %Assume an individual assigns his time to two opponents. Cooperating with one in expectation of long-term mutual cooperation while defecting the other to pursue a higher payoff in this round is a common choice. Especially after the mutual selection in the partner-seeking procedure, a high frequency of cooperation can be expected since they have contacted each other before selection. In the dissipative scenario, $f_c$ increases at first and reaches 100\% in the $8^{th}$ round. Then, it sharply decays and forms a valley from the $10^{th}$ to the $15^{th}$ round. Considering a pair of mutual cooperators without any other partners, the aggregated payoff of one of them at the $r^{th}$ round is $\phi_{r}=\phi_{0}-r\left(\varepsilon-\mathcal{R}\right)$, where $\phi_{0}$ denotes the initial resource and $\varepsilon$ is the compulsory consumption. In our experiment, $\phi_{0}=5$ and $\varepsilon=3$.
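These values can be plugged into $\phi_{r}=\phi_{0}-r\left(\varepsilon-\mathcal{R}\right)$ directly; a small exact-arithmetic check (the script is ours, for illustration only):

```python
from fractions import Fraction

# phi_r = phi_0 - r*(eps - R): aggregated payoff of a pure mutual
# cooperator after r rounds, with the values used in the experiment.
phi0, eps, R = Fraction(5), Fraction(3), Fraction(26, 10)

def phi(r):
    return phi0 - r * (eps - R)

# The net loss per round is eps - R = 0.4, so a pure mutually cooperating
# pair enters round 13 with only 0.2 units left, below the 0.4 it needs.
assert phi(12) == Fraction(1, 5)
rounds_affordable = max(r for r in range(1, 100) if phi(r) > 0)
print(rounds_affordable)  # 12
```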
One can see that mutual cooperation can afford an individual's ongoing cost for at most 12 rounds based on the designed payoff matrix, where $\mathcal{R}=2.6$. The aggregated payoff left for the pure mutual cooperators is 0.2 at the $13^{th}$ round. In this case, only defectors have a certain chance to survive for more than one round, while the cooperators are doomed to be eliminated in that round. Accordingly, $f_c$ drops to approximately $0.5$ as expected. The recovery of $f_c$ in the $14^{th}$ round results from the fact that the number of survivors after the $13^{th}$ round is small. If an individual loses a partner at this moment, it is unlikely to find another one. Therefore, mutual cooperation is the best choice to survive. To model the procedure, we set all the agents to follow a simple strategy, which will be extended in Section \emph{Simulation on the temporal game model}. The comparison between the simulation result and the statistical result of the real data is shown in Fig.~\ref{fig:fc}A, in which each red circle denotes the average of $800$ simulation runs. \begin{figure}[ht] \centering \includegraphics[height=2.2in,clip,keepaspectratio,trim=35 35 50 50]{Merged-Simulation_BA_WS.eps} \caption{Evolution of the average proportion of cooperation $\langle P_c\left(r\right)\rangle$ in the `divide-and-conquer' (D\&C) and temporal gaming networks. (a) and (c) show $\langle P_c\left(r\right)\rangle$ of the D\&C games and temporal games in the BA networks, respectively. (b) and (d) show $\langle P_c\left(r\right)\rangle$ of the two classes of games in the WS networks, respectively. (e) and (f) show the difference of $\langle P_c\left(r\right)\rangle$ between the D\&C games and the temporal games in the BA and WS networks, respectively. Each plot denotes the average of $10$ simulation runs.
As the system evolves dramatically at the beginning of the experiments, we show the results in semi-log coordinates.}\label{fig:APC} \end{figure} To test the validity of our theoretical results, we invite 183 volunteers to attend 8 online experiments. For conciseness, we show the basic information of each match in Table~\ref{PRGNF}. %For the D\&C games, the numbers of participants and rounds of the experiments are 39 and 13 rounds for the first BA network (G1224), 17 and 16 rounds for the second BA network (G1230) shown in Fig.~\ref{fig:HMBAWS}(a); 34 and 13 rounds for the first WS network (G1228), 21 and 15 rounds for the second WS network (G1234) shown in Fig.~\ref{fig:HMBAWS}(c). % %For the temporal games, the numbers of participants and rounds of the experiments are 50 and 11 rounds for the first BA network (G646), 44 and 28 rounds for the second BA network (G903) shown in Fig.~\ref{fig:HMBAWS}(b); 22 and 24 rounds for the first WS network (G936), 22 and 28 rounds for the second WS network (G933) shown in Fig.~\ref{fig:HMBAWS}(d). 
% \begin{table*}[tbhp] \caption{The basic information of matches.} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|} \hline Game Number & Game Type & Type of Network & Number of Participants & Number of Rounds & Corresponding Panel in Fig.~\ref{fig:HMBAWS} \\ \hline G1224 & D\&C & BA & 39 & 13 & Fig.~\ref{fig:HMBAWS}(a) \\ \hline G1230 & D\&C & BA & 17 & 16 & Fig.~\ref{fig:HMBAWS}(a) \\ \hline G646 & Temporal Games & BA & 50 & 11 & Fig.~\ref{fig:HMBAWS}(b) and Fig.~\ref{fig:HMBAWS}(e) \\ \hline G903 & Temporal Games & BA & 44 & 28 & Fig.~\ref{fig:HMBAWS}(b) and Fig.~\ref{fig:HMBAWS}(f) \\ \hline G1228 & D\&C & WS & 34 & 13 & Fig.~\ref{fig:HMBAWS}(c) \\ \hline G1234 & D\&C & WS & 21 & 15 & Fig.~\ref{fig:HMBAWS}(c) \\ \hline G936 & Temporal Games & WS & 22 & 24 & Fig.~\ref{fig:HMBAWS}(d) and Fig.~\ref{fig:HMBAWS}(g)\\ \hline G933 & Temporal Games & WS & 22 & 28 & Fig.~\ref{fig:HMBAWS}(d) and Fig.~\ref{fig:HMBAWS}(h)\\ \hline \end{tabular} } \label{PRGNF} \end{table*} After comparing Fig.~\ref{fig:HMBAWS}(a) with Fig.~\ref{fig:HMBAWS}(b) and Fig.~\ref{fig:HMBAWS}(c) with Fig.~\ref{fig:HMBAWS}(d), one can see that the decay of $P_c\left(r\right)$ in the temporal games is slower than that in the D\&C games. The result confirms our theoretical prediction, indicating that the limitation on time promotes the level of cooperation in gaming social networks. To explain the behavior, we measure the average available time $S\left(r\right)$ in the four time-involved matches. The evolution of $S\left(r\right)$ for the two BA networks and two WS networks is shown in Fig.~\ref{fig:HMBAWS}(e)-Fig.~\ref{fig:HMBAWS}(h), respectively. One can see that $S\left(r\right)$ fluctuates around a small positive value in the four panels, revealing that the difficulty of finding new partners when humans play the temporal games is greater than our theoretical prediction suggests.
The difference in $P_c\left(r\right)$ between the theoretical prediction and human behavior suggests that the increased difficulty of finding new partners may promote $P_c\left(r\right)$, which to some extent explains why the limited time promotes the level of cooperation in a real social network. Another behavior worth noting is that the level of cooperation generally decays with rounds in Fig.~\ref{fig:HMBAWS}. % This behavior is caused by the fact that the number of rounds of each match is limited, although it is random. This limitation mainly comes from the time of the subjects, since it is difficult to ask about 100 students to play online simultaneously for more than an hour, even though we pay them acceptable participation fees and provide attractive rewards for the winners of each match. We show some of the winners' strategies in Section \textbf{Top Voted Strategies} of \textbf{Supplementary Information} (\textbf{SI}). One can see that the level of cooperation decays when the participants guess that the match is ending. \begin{figure}[ht] \centering \includegraphics[height=4in,clip,keepaspectratio,trim=0 0 0 0]{Human_Merged_BA_WS.eps} \caption{Evolution of the proportion of cooperation $P_c\left(r\right)$ and the average available time $S\left(r\right)$ in the temporal games played by human subjects. (a) and (c) show the results of the D\&C games in the BA networks and WS networks, respectively. (b) and (d) show the results of the temporal games in the BA networks and WS networks, respectively. Horizontal coordinates denote the number of rounds. (e) and (f) show the results of two temporal games in the BA networks. (g) and (h) show the results of two temporal games in the WS networks.}\label{fig:HMBAWS} \end{figure} %\begin{figure}[ht] % \centering % \includegraphics[]{fc.eps} % \caption{Evolution of the average frequency of cooperation (blue circles) in (A) the dissipative scenario and (B) classical scenario.
The grey crosses are the actual frequencies of cooperation for each round in the match. The red squares denote the simulation result, the simulation procedure of which will be shown in Section~\emph{Simulation on the evolution of cooperation}. The green triangles represent the analytical result, the mathematical procedure of which will be shown in Section~\emph{Mathematical modeling the level of cooperation}. As a comparison, the two baseline results are shown in purple crosses and yellow stars, where individuals update their strategies following the rules proposed by Santos \& Pacheco~\cite{santos_prl05} and Nowak \& May~\cite{nowak_n92b}, respectively.} % \label{fig:fc} %\end{figure} %More surprisingly, the numerical solution of $P_{c,r}$ in Eqn.~\ref{eqn:pc} is almost exactly the same as $f_{c,r}$ as shown in Fig.~\ref{fig:fc}A. One can see that the simulation result and the numerical solution are remarkably consistent throughout the entire evolution of $f_c$ for $\alpha^*=0.8$, $\beta^*=-\frac{1}{7200}$, $\gamma^*= -209.437$, and $\xi^*=0.1$. %%The details on mathematical modeling and analysis will be shown in section \emph{Analytical solution to the model}. %% %To have a baseline comparison, we force the agents in the simulation to follow the updating rules proposed by Santos \& Pacheco~\cite{santos_prl05} and Nowak \& May~\cite{nowak_n92b}. The update is targeted, i.e., an agent only changes its strategy towards the specific partner concerned. In the baseline scenario, the agents cannot relink to others. One can see that the results of the two baselines are nearly the same, i.e., $f_c$ drops at the $2^{nd}$ round, and no agent can survive longer than 13 rounds. The evolution of the average number of survivors in our experiments is shown in Table~\ref{tb:survival}, indicating that the strategies of the human subjects are different from the baseline strategies.
%\begin{table*}[htb] % \centering % \caption{Average number of survivors for each round in the dissipative scenario.} % \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} % \hline % Round & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\ \hline % Survival & 15.4 & 12.5 & 11.6 & 10.8 & 10.3 & 9.8 & 9.4 & 9.3 & 9.2 & 9.0 & 6.3 & 4.6 & 3.4 & 1.9 & 0.7 \\ \hline % \end{tabular} % \label{tb:survival} %\end{table*} % %Conversely, in the classical scenario, as shown in Fig.~\ref{fig:fc}B, $f_c$ is stable over time, which confirms the reported results on dynamic networks~\cite{rand_pnas11,fehl_el11,wang_j_pnas12}. Now, the question is how to explain the smooth growth of $f_c$, which even reaches near 100\% in Fig.~\ref{fig:fc}A. In this scenario, the baseline results show a low frequency of cooperation, since there is no punishment for the defectors. % %Intuitively, the aggregated payoff is the optimization objective for each individual. In replicator dynamics, the deviation from the average payoff will determine the contribution of a specific strategy and probably drive a change in individuals' moves in the next round. To clarify the response of each move, the distributions of the payoff loss after cooperation and defection in both scenarios are shown in Fig.~\ref{fig:fl}. In the dissipative scenario, both the average and the median of the payoff loss after cooperation are lower than those after defection, indicating that most defectors will lose more payoff than the cooperators. To avoid losing payoff, an intuitive choice is to cooperate. In the classical scenario, since there is no cost, the payoff loss is smaller than or equal to 0. The medians of the payoff loss are the same after both moves, while the average payoff loss after cooperation is higher than that after defection. Although the average loss after defection is slightly lower, the payoff loss after defection is highly polarized, implying that the risk of no reward is significant.
This indicates that some defective moves are rather risky~\cite{zhang_yc_sr15}. Thus, most individuals tend to cooperate for a steady payoff after a long or short learning process. As a result, cooperation prevails in both scenarios, match after match. % % %\begin{figure}[ht]%[tbhp] % \centering % \includegraphics[]{FoodLoss.eps} % \caption{Distribution of the payoff loss in the following rounds for C and D in the dissipative (blue), and classical (orange) scenarios. The black dotted lines denote the average and the dashed lines denote the medians. The area represents the probability density, and the width indicates the frequency.} % \label{fig:fl} %\end{figure} % %However, the payoff loss cannot directly explain the smooth growth and sharp decay of $f_c$ in Fig.~\ref{fig:fc}A. To better understand the reasons for the decay of the defectors' aggregated payoff and unexpected evolution of $f_c$, we investigate the responses of moves from the defectors' standpoint. Firstly, the defectors are likely to suffer from a broken relationship after defection. The actions in the previous round significantly affect the choice of maintaining or breaking the partnership in both scenarios ($\chi^2=1189.5306$, $P<0.0001$ for the dissipative scenario and $\chi^2=600.7032$, $P<0.0001$ for the classical scenario). Specifically, $86.26\%$ and $42.09\%$ of the partnerships are broken after defection in the dissipative and the classical scenarios, while only $13.72\%$ and $8.33\%$ of them are broken after cooperation in the respective scenarios. % % %After losing partners, a defector has to find new partners to make good use of the time, while the success rate of a gaming request from a friend without prior interaction is 16.12\% (364 out of 2,258) and 22.89\% (672 out of 2,936) in the dissipative scenario and classical scenario, respectively. One can see that the social risk of defection is significant, which can be gradually detected by the participants after a couple of matches.
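For reference, the reported $\chi^2$ values come from $2\times2$ contingency tables of the move in the previous round versus whether the partnership was kept or broken. A minimal sketch of such an independence test, with made-up counts rather than our data:

```python
import math

def chi2_2x2(table):
    """Pearson chi-square test of independence for a 2x2 contingency
    table [[a, b], [c, d]]; returns (statistic, p) with 1 dof."""
    n = sum(table[0]) + sum(table[1])
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = sum(table[i]) * (table[0][j] + table[1][j]) / n
            stat += (table[i][j] - expected) ** 2 / expected
    # survival function of the chi-square distribution with 1 dof
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Hypothetical counts: rows = previous move (D, C),
# columns = partnership (broken, kept). Not our experimental data.
stat, p = chi2_2x2([[863, 137], [137, 863]])
# Counts this skewed give a large statistic and p far below 0.0001.
```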
The result indicates that the majority of partnerships are not only unlikely to resist a defection but also difficult to reestablish. Note that the partnerships are established by the interactions based on social connections. Why, then, is reestablishing a partnership so difficult? Evidence suggests that reconnection failure is closely related to time resources. In the dissipative scenario, $87.77\%$ of the requests are rejected because the remaining time resource of the target is insufficient to afford the requested duration. In the classical scenario, the percentage is $77.52\%$. One can see that the limited time resource is not a barrier to making more friends but a critical obstruction to seeking new partners. % %After all, an apparent cause of the fast decay of the aggregated payoff after defection is that the defectors are losing partners. Notably, the requested friends are unable to observe the move records of those who are not their partners, as these records are private. But they can learn the senders' reputation from the public chat room. Interestingly, we observe that the corresponding discussions in the chat room are somewhat limited after the match starts. The observation implies that the requested friends generally reject the requests because of a lack of time resources. In other words, the social risk of defection originates from limited time resources. This point was never reported in previous studies, since the majority of existing interaction frameworks do not involve time. % %Fig.~\ref{fig:ttdc} shows the evolutions of the frequency of cooperation $f_c$ and backup to defect $\theta$ in both scenarios. In the dissipative scenario, as shown in Fig.~\ref{fig:ttdc}A, the value of $\theta$ is close to $-1,440$ from the $8^{th}$ to the $12^{th}$ round, which indicates that the available time resource is scarce. One can observe that $f_c$ in Fig.~\ref{fig:ttdc}A is approximately 100\% in the region.
The observation confirms our previous inference. Conversely, in the classical scenario, the frequency of cooperation is almost independent of $\theta$. Fig.~\ref{fig:ttdc}B shows that $\theta$ in the classical scenario is significantly higher than that in the dissipative scenario in each round, indicating that the correlation between $\theta$ and $f_c$ emerges only when $\theta$ is low, that is, when the available time resources of friends are scarce. One explanation is that the cost in each round stimulates the individuals to move cautiously. Especially after detecting the risk of defection, they prefer to maintain the existing partners rather than seek new ones. As a result, the frequency of cooperation generally grows with the number of rounds in the dissipative scenario while it fluctuates mildly in the classical scenario. % %\begin{figure}[htb]%[tbhp] % \centering % \includegraphics[]{TtDCo.eps} % \caption{Evolutions of $f_c$ (blue) and $\theta$ (red) in (A) the dissipative scenario and (B) the classical scenario. The length of the match is stochastic, ranging from 10 to 15 rounds inclusively.} % \label{fig:ttdc} %\end{figure} % %To confirm the negative correlation between $\theta$ and $f_c$, the result of linear regression analysis is shown in Fig.~\ref{fig:t2}A. For the dissipative scenario, the red dashed line in Fig.~\ref{fig:t2}A shows the linear regression result, where $slope=-3.5474\times10^{-4}$, $intercept=0.4419$ and $residual=0.0608$. The Pearson correlation coefficient between them is $-0.8379$ with P-value $1.83\times10^{-4}$, demonstrating a strong negative correlation between $\theta$ and $f_c$. For the classical scenario, Fig.~\ref{fig:t2}B shows that the correlation is absent, where the Pearson correlation coefficient is $0.3845$ with P-value $0.1746$. It is not difficult for an individual $i$ to understand the risk of defection when $\tau_{f_{i,r}}$ is low, but estimating whether $\tau_{f_{i,r}}$ is low takes some training.
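The regression and correlation statistics above can be recomputed from any per-round $(\theta, f_c)$ series with textbook formulas; the short sketch below uses synthetic numbers with a similar negative trend, not our measurements:

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def linregress(xs, ys):
    """Least-squares slope and intercept of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Synthetic (theta, f_c) pairs with the expected negative trend --
# illustrative numbers only, not the experimental data.
theta = [-1400, -1300, -1200, -800, -400, -100]
fc = [0.98, 0.95, 0.90, 0.70, 0.55, 0.45]
r = pearson_r(theta, fc)          # strongly negative for data like this
slope, intercept = linregress(theta, fc)
```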
In our experiment, this training procedure lasts for a couple of matches. Therefore, the negative correlation is expected to grow with the number of matches, since the judgment of an individual approaches the statistical result after sufficient training. As shown in Fig.~\ref{fig:t2}A, when $\theta$ is limited to a very low level, for instance, lower than $-1,250$, the frequency of cooperation approaches $100\%$. On the contrary, when the individual's estimate of $\tau_{f_{i,r}}$ is not so pessimistic, Fig.~\ref{fig:t2}B shows that the level of cooperation is basically not affected by the backup to defect $\theta$. % % %\begin{figure}[htb]%[tbhp] % \centering % \includegraphics[]{t2.eps} % \caption{Correlation between the frequency of cooperation $f_c$ and the backup to defect $\theta$. Correlations in the dissipative scenario and classical scenario are shown in (A) and (B). The red dashed line in (A) is the linear regression line of the data. In (B), $f_c$ is clearly irrelevant to $\theta$. Note that the cluster of plots in (A) is located in the range lower than $-1,250$, indicating the strong correlation between the level of cooperation and the scarcity of the available time in the neighborhood. In (B), the cluster of plots is located in the range higher than $-200$, suggesting that the available time in the neighborhood usually is high in the classical mode.} % \label{fig:t2} %\end{figure} % %Although in our experiment, each individual has more than eight social contacts on average, the number of frequently interacting partners is normally less than 3 (see~\emph{Supplementary materials} for details). Establishing connections on a social network is nearly costless, while choosing the next move towards a partner may take some time if an individual has a delicate decision-making strategy. Naturally, the number of decisions that can be made in one round is limited by the individual decision time, which is set to 45 seconds in our experiment.
An individual's maximal number of partners can be roughly estimated from this individual's average decision time for each partner in a round, namely $\min\{{\mathfrak{T}}/{\mathit{decision\_time}},N_{i,r}\}$. Interestingly, the observed numbers of partners are much smaller than this estimate, as shown in Fig.~\ref{fig:dt}. For example, if the average decision time of an individual in a round is 1 second and the individual has eight neighbors, the maximal number of partners will be 8. But the individual's actual number of partners is only 1. Our observation indicates that humans tend to reduce the number of social interactions (not the number of friends) if more interactions cannot bring them apparent benefits, for instance, a higher payoff in the current round. % %\begin{figure}[ht]%[tbhp] % \centering % \includegraphics[]{DecisionTime.eps} % \caption{Correlation between the number of partners and the decision time. The color shows the number of occurrences. Most individuals' average decision time is less than 5 seconds, while their numbers of partners are only one, much less than their maximal possible number of partners.} % \label{fig:dt} %\end{figure} % %If the individuals frequently interacting with each other are taken to be a cluster, our result shows that the lifespan of the cluster grows with its size (see SM for details). Intuitively, participants are able to detect the rules after a number of matches, which should then lead to a significant rise in the sizes of the clusters, whereas the maximum size of the clusters is 3 in our experiments. Here, the cluster is defined as a connected graph where each pair of connected individuals play at least three games in a match. The observation again confirms the behavior mentioned at the end of the preceding paragraph. % %Finally, people make targeted choices. In our experiment, an individual is allowed to choose their neighbors to play with and to move for a specific reason.
Our result shows that 7.71\% of the individuals (96 out of 1,244) are identified as `divide-and-conquer' ($D\&C$) individuals~\cite{zhang_yc_sr15,wang_js_sr17,melamed_pnas18}, who may cooperate with some partners to stabilize their partnerships and defect the other partners so as to pursue higher payoffs in the current round. The remaining individuals cannot be classified, since they have only one partner or their partners' moves are identical. \section{Discussion}\label{Dis} As a theoretical framework closer to realistic scenarios, the temporal game has demonstrated its capacity to illuminate complex behaviors in our social experiment. The human behaviors revealed by our temporal games have rarely been reported in the literature. When the available time resources of individuals in the gaming network are scarce, the individuals are more likely to maintain the current relationships through cooperation. The underlying mechanism is that interactions are not obligatory but spontaneous. If an individual's time resource cannot afford the requested duration of the interaction, he will have no choice but to abandon it, which makes it much harder to find new partners. The agreement between empirical and simulation results confirms the significance of the mechanism. Our finding reveals a fundamental reason for lasting altruistic behaviors in real human interactions, providing a novel perspective for understanding the prevalence of human cooperative behaviors in temporal collaboration systems. %Our work can inspire several interesting future directions for theoretical research. The classification of successful strategies has received massive attention in the game-theoretic frameworks, such as Tit-for-Tat~\cite{axelrod_jcr80}, Generous Tit-for-Tat~\cite{axelrod_jcr80b}, Win-stay-lose-shift~\cite{nowak_n93}, partner-rivals and extortionate strategies~\cite{hilbe_nhb18,press_pnas12}, as well as strategies with various memories~\cite{milinski_pnas98}.
Studying the classification of strategies in the temporal game framework is a challenging but exciting direction of future work. Besides direct reciprocity, the game-theoretic frameworks to study indirect reciprocity have also received tremendous attention~\cite{nowak_n98,nowak_n05}. The study of such frameworks with temporal aspects is another interesting direction for future work. %Please check some classical experimental paper in either static or dynamic networks. If there is some possible future extension for temporal game framework we can mention. Note that the limitation on time is an objective fact in human collaboration systems, which is essentially different from the incentives, such as global reputation~\cite{fu_pre08b,gallo_pnas15} and onymity~\cite{wang2017onymity}, associated with human psychology. In a sense, the behavior observed in our experiments is more deterministic. Introducing some other mechanisms like reward~\cite{sefton_ei07} and costly punishment~\cite{fehr_n02,PCB14e1006347} to the temporal systems will be a natural extension in this direction. Apart from the mechanisms, the impact from different types of games, for instance, the snow-drift game~\cite{hauert_n04} and the public goods game~\cite{santos_n08}, is also of particular interest. % %On the other hand, the random allocation of groups in our experiment is a way to constrain group cheat, limiting the social knowledge of subjects. Therefore, a systematic experimental study on the role of global social knowledge~\cite{gallo_pnas15} in the formation of cooperative communities is another promising area of future studies. Our work considers the temporal game framework and presents some surprising results. There are several interesting future directions, both in terms of theoretical and experimental results. 
However, the basic theoretical model and the key experimental results we present in this work for temporal games are the first steps to modeling realistic networks with time-dependent interactions. Such realistic modeling will allow better analysis, prediction, and design principles for the emergence of cooperation in network models, profoundly impacting disciplines from preserving natural resources to designing institutional policies. \section{Materials and Methods} \subsection{Experimental design}\label{ED} In order to build an experimental environment as close as possible to natural temporal two-player collaborative systems, two realistic factors are considered in our empirical study. First, the interactive time is determined by negotiation. The setting restores the temporal property of a game in reality. A dynamic reconnection is implemented in the network by rejecting a friend's request and then proposing a game with another friend~\cite{rand_pnas11,wang_j_pnas12}. Second, a `divide-and-conquer' ($D\&C$) framework, also referred to as targeted decision, is adopted, in which the individuals who propose a game or accept a gaming request have to decide whether to cooperate ($C$) or to defect ($D$) in each round of the game~\cite{zhang_yc_sr15,wang_js_sr17,melamed_pnas18}. Most existing research on gaming networks is performed under a framework where individuals choose the same move to interact with all their neighbors~\cite{rand_pnas11,fehl_el11,wang_j_pnas12}. On the contrary, in real-world scenarios, people do not normally defect their long-term partners after being defected by other partners. In a realistic social network, they would choose a specific move to play with a partner, referred to as the D\&C game in the literature~\cite{zhang_yc_sr15,wang_js_sr17}. When the diffuse decision scheme is replaced by the D\&C or targeted decision scheme, the impact of dynamic reconnection on promoting cooperation will become negligible~\cite{melamed_pnas18}. 
The coupling between temporal interaction and rational decision-making can be seen everywhere in real life. Still, the existing theoretical frameworks seem insufficient to explain the widespread cooperation in such temporal games. Under the framework of temporal games, we designed a series of online game experiments. With the experimental data, we present a surprising finding: limitation of time promotes cooperation in temporal games. This finding, on the one hand, urges us to reconsider how much the dynamic nature of networks can impact human cooperation. On the other hand, it implies the potential of the temporal game framework to explain various collective behaviors in real two-player collaborative systems. % Our results have a profound impact on the study of pro-social behavior. By accounting for the time-dependent aspect to model a realistic network, we present an interesting finding which can improve our understanding of widespread cooperation in time-dependent collaborations. \subsection{Experimental setup and game rules}\label{ESGR} A series of online human subject experiments were designed to build a two-player collaborative system of rational individuals. A total of 183 human subjects participated in 8 matches in the experiment. The majority of subjects were students from Tongji University and Southeast University in China. To implement the designed scenario, a novel online gaming platform was developed, called the \emph{War of Strategies} (http://strategywar.net; see Section \textbf{Experimental Platform and Interface} of \textbf{SI} for the details of the platform). In the online experiments, participants played a traditional Prisoner's dilemma (PD) game, where $C$ and $D$ were the only available actions. Each participant interacted with the individuals who had agreements with him in one round, after which the agreements needed to be redrafted. Each match on the platform comprises two stages.
In the first stage, the system generates a network with a social network model. The subjects are then allocated to the nodes of the network, so the connections among the subjects are randomly predetermined. The second stage is an $n$-round iterated PD game, where $10\le n \le 30$ is unknown to the individuals in order to avoid end-game effects. In each round of the game, individuals can make requests for interactions with their friends. In a request, the duration of the interaction is suggested by the sender and shown to the target. The request can be accepted, denied, ignored, or canceled. Once an individual accepts a request, they have to choose a move as a response. The payoff of the game is proportional to the duration suggested in the request, which is a part or all of the sender's time resource. Once the request is sent out, this part of the resource is occupied until a response is received and cannot be used in any other interaction. If the request is accepted, the time resource is consumed. If the request is denied, ignored, or canceled, the time resource is returned to the sender. The total time resource assigned to each individual is $1,440$ units in each round, simulating one day (in minutes) in real life. We adopt $1,440$ to help the participants understand its meaning; the specific value is irrelevant to our results. For all individuals, each round lasts for 60 seconds. The initial aggregated payoff of each individual is 0. The payoff matrix is the same as that in Fig.~\ref{fig:ITG}. During the match, the individual IDs are randomly generated. The individuals can only see their own game records, where each record includes the moves of both sides and the time durations. The topological structures beyond their immediate neighbors are invisible to them. In addition, individuals are shown their aggregated payoff, time resources, number of rounds played, and the decision time remaining.
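The per-round time-resource bookkeeping described above can be sketched as follows. This is a minimal illustration, and the class and method names are hypothetical, not the platform's actual implementation.

```python
class Account:
    """Per-round time-resource accounting for one individual (sketch)."""

    def __init__(self, budget=1440):
        self.free = budget   # time units still available this round
        self.held = {}       # request id -> units locked while pending

    def send(self, rid, units):
        """Sending a request occupies the offered duration immediately."""
        assert 0 < units <= self.free, "cannot offer more than free time"
        self.free -= units
        self.held[rid] = units

    def resolve(self, rid, accepted):
        """Accepted requests consume the time; denied, ignored, or
        canceled requests return it to the sender."""
        units = self.held.pop(rid)
        if not accepted:
            self.free += units

acct = Account()
acct.send("r1", 600)
acct.send("r2", 600)
acct.resolve("r1", accepted=False)  # returned: free time rises again
acct.resolve("r2", accepted=True)   # consumed for this round
print(acct.free)                    # 840
```

The key property is that a pending request's duration is unavailable for any other interaction until a response arrives, exactly as in the rules above.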
\subsection{Simulation on the social networks}\label{SSN} Here, we present the simulation procedure. Step 1: Generate a structured population, such as the scale-free network of Barab\'{a}si and Albert~\cite{BA} with $m_0=m=3$ or the small-world network of Watts and Strogatz with $P_{rewire}=0.1$ and $K=6$. Randomly assign the agents to be cooperators with a probability of 0.5. The size of the population is set to 1,024. Step 2: Shuffle the agent list and iteratively ask each agent to broadcast gaming requests to its neighbors. In each request, the agent evenly allocates its remaining time to its uncoordinated neighbors, i.e., $\mu_{ij}\left(r\right)=\frac{1}{\left | N_i\setminus P_i\left(r-1\right) \right | }$, where $j\in N_i\setminus P_i\left(r-1\right)$. If a neighbor has enough time to accept the request, it accepts. Step 3: Each pair of matched agents plays one round of the game and updates their moves, following the zero-determinant extortionate strategy proposed in reference~\cite{stewart_pnas12}. Step 4: If an agent defects in the round, the pair is broken apart with a probability of 0.5, that is, $\boldsymbol{\chi}=[0,0.5,0.5,0.5]$. Step 5: Repeat Steps 2, 3, and 4 until the preset number of rounds is reached. \bmhead{Acknowledgments} Y. Z. was supported by the National Natural Science Foundation of China (Grant No. 61503285) and by the Municipal Natural Science Foundation of Shanghai (Grant No. 17ZR1446000). J. G. was supported by the National Natural Science Foundation of China (Grant No. 61772367) and by the Program of Shanghai Science and Technology Committee (Grant No. 16511105200). S. Z. was supported by the Program of Science and Technology Innovation Action of the Science and Technology Commission of Shanghai Municipality (STCSM) (Grant No. 17511105204). G. C. was supported by the Hong Kong Research Grants Council (Grant No. CityU-11200317). K. C. was supported by ERC Consolidator Grant 863818 (FoRM-SMArt). M. P. was supported by the Slovenian Research Agency (Grant Nos. J1-2457, J1-9112, and P1-0403).
\bibliographystyle{sn-basic}
\bibliography{egt}
\end{document}