Introduction

Many complex systems, from brains to societies, produce time-varying patterns of interaction. In neuroimaging, for instance, fMRI yields multivariate time series reflecting the activity of $N$ brain regions. A common way to study their interactions is through dynamic functional connectivity (DFC), where temporal windows of the time series are correlated to yield evolving connectivity matrices.

From correlation matrices to temporal networks

By thresholding DFC matrices at levels chosen to match a target average degree, we obtain a sequence of binary adjacency matrices. This sequence is naturally interpreted as a temporal network (TNet):

$$ \mathcal{T} = \{B^{(1)}, B^{(2)}, \dots, B^{(S)}\}, \qquad B^{(t)}\in\{0,1\}^{N\times N}. $$

Edges in $\mathcal{T}$ represent strong statistical associations at a given time window. These edges do not imply causal influence or physical transmission; instead, they encode patterns of co-fluctuation that may persist, shift, or recur across time.

Why analyze correlation-derived temporal networks?

  • Dynamism vs. statism: quantify how rapidly association patterns change (transition rates, entropy, similarity, mutual information).
  • Latency and circulation: assess how quickly time-respecting chains of associations appear between regions, and how often they recur back to their source.
  • Integration vs. segregation: analyze clustering, modularity, and partner stability/diversity to understand whether regions form cohesive groups or shift partners over time.

Roadmap

  1. From time series to temporal networks
  2. Null models and temporal network generators
  3. Measures of dynamism and statism
  4. Latency and circulation metrics
  5. Segregation and cohesion measures

From Time Series to Temporal Networks

From time series to dynamic functional connectivity

We begin with a time series $X(t)$ of dimension $T \times N$, where $T$ is the number of time points and $N$ is the number of regions or nodes.

A standard estimator of DFC is the sliding-window correlation method. With window width $W$ and stride $\ell$:

$$ A^{(s)} = \mathrm{corr}\!\left(X_{[s : s+W]}^\top\right), \quad s = 0, \ell, 2\ell, \dots, $$

where $\mathrm{corr}(\cdot)$ correlates the rows of its argument, i.e. the $N$ regional signals restricted to the window. Relabeling the windows $1,\dots,S$, this produces a sequence of weighted, symmetric adjacency matrices $\mathcal{A}=\{A^{(1)},A^{(2)},\dots,A^{(S)}\}$, where each entry $A^{(t)}_{ij}\in[-1,1]$ is the correlation between nodes $i$ and $j$ in snapshot $t$.
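The sliding-window construction can be sketched in NumPy; the window width, stride, and toy data below are illustrative choices, not values fixed by the text:

```python
import numpy as np

def sliding_window_dfc(X, W=30, ell=5):
    """Sliding-window correlation DFC from a T x N time series X.

    W (window width) and ell (stride) are illustrative defaults.
    Returns an array of shape (S, N, N) of correlation matrices.
    """
    T, N = X.shape
    starts = range(0, T - W + 1, ell)
    # np.corrcoef expects variables in rows, hence the transpose
    return np.stack([np.corrcoef(X[s:s + W].T) for s in starts])

# toy usage: 200 time points, 10 regions of white noise
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
A = sliding_window_dfc(X)   # one N x N matrix per window
```

Each slice `A[t]` is symmetric with a unit diagonal, matching the weighted snapshots $A^{(t)}$ above.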

Thresholding and binarization

While DFC matrices capture continuous correlation values, most network-based measures require binary adjacency matrices. A naive fixed threshold may produce networks with widely varying densities across subjects or conditions. Instead, we control the average degree of the resulting networks.

The degree of node $i$ in snapshot $t$ after thresholding at $\theta$ is:

$$ k_i^{(t)}(\theta) = \sum_{j \neq i} \mathbf{1}\{ A^{(t)}_{ij} > \theta \}. $$

And the average degree across all nodes and snapshots is:

$$ \bar{d}(\theta) = \frac{1}{SN} \sum_{t=1}^S \sum_{i=1}^N k_i^{(t)}(\theta). $$

We search for a threshold $\theta^*$ such that $\bar{d}(\theta^*) \approx d_{\text{target}}$. This ensures comparability: temporal networks have approximately the same density, independent of absolute correlation values in the data.
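A minimal sketch of the threshold search, using bisection (valid because $\bar{d}(\theta)$ is non-increasing in $\theta$); function names and toy data are illustrative:

```python
import numpy as np

def mean_degree(A, theta):
    """Average degree over all nodes and snapshots after thresholding at theta."""
    S, N, _ = A.shape
    off = ~np.eye(N, dtype=bool)            # exclude the diagonal
    return (A[:, off] > theta).sum() / (S * N)

def find_threshold(A, d_target, tol=1e-4, iters=60):
    """Bisection for theta* with mean_degree(A, theta*) ~= d_target."""
    lo, hi = -1.0, 1.0
    while hi - lo > tol and iters > 0:
        mid = 0.5 * (lo + hi)
        if mean_degree(A, mid) > d_target:
            lo = mid                        # too dense: raise the threshold
        else:
            hi = mid                        # too sparse: lower the threshold
        iters -= 1
    return 0.5 * (lo + hi)

# toy weighted snapshots (symmetric, entries in [-1, 1])
rng = np.random.default_rng(1)
A = np.clip(rng.normal(0.0, 0.4, (20, 12, 12)), -1, 1)
A = (A + A.transpose(0, 2, 1)) / 2
theta = find_threshold(A, d_target=4.0)
B = (A > theta).astype(int)                 # binarized temporal network
```

Since $\bar{d}(\theta)$ changes in discrete steps, the achieved average degree is only approximately equal to the target, as stated above.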

From DFC to Temporal Network (TNet)

Once the threshold is found, each weighted snapshot is binarized:

$$ B^{(t)}_{ij} = \mathbf{1}\{ A^{(t)}_{ij} > \theta^* \}. $$

Diagonal entries are set to $1$ (to allow self-waiting in temporal paths), yielding the final temporal network:

$$ \mathcal{T} = \{ B^{(1)}, B^{(2)}, \dots, B^{(S)} \}. $$

Temporal Network Generators

Why generate null models?

Null models allow comparison against controlled baselines that preserve some aspects of the data while randomizing others. They help separate trivial properties (such as density dynamics) from meaningful structure.

Static temporal networks

A static temporal network repeats a single adjacency matrix across all time steps. Examples include:

  • Static ER (Erdős–Rényi)
  • Static SW (Watts–Strogatz small-world)
  • Static SF (scale-free) variants with different preference rules

Dynamic temporal nulls

  • Time-shuffler: permutes snapshot order (destroys temporal correlations).
  • Edge randomizer: replaces each snapshot with a random graph having the same number of links (matching density rather than detailed topology).
  • Link activation model: preserves total activations per edge but redistributes them across time.
  • Density variability model: generates link counts per snapshot with prescribed mean/variance while fixing total links.
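The first two nulls can be sketched as follows (a minimal illustrative implementation; the remaining generators follow the same pattern):

```python
import numpy as np

def time_shuffle(T_net, rng):
    """Time-shuffler: permute snapshot order, destroying temporal correlations."""
    return T_net[rng.permutation(len(T_net))]

def edge_randomize(T_net, rng):
    """Edge randomizer: per snapshot, a random graph with the same link count."""
    S, N, _ = T_net.shape
    iu = np.triu_indices(N, k=1)
    out = np.zeros_like(T_net)
    for t in range(S):
        m = int(T_net[t][iu].sum())                      # links in snapshot t
        pick = rng.choice(len(iu[0]), size=m, replace=False)
        out[t, iu[0][pick], iu[1][pick]] = 1
        out[t] = out[t] + out[t].T                       # symmetrize
    return out

# toy binary temporal network (symmetric, zero diagonal)
rng = np.random.default_rng(2)
up = np.triu((rng.random((10, 8, 8)) < 0.3).astype(int), 1)
T_net = up + up.transpose(0, 2, 1)
shuffled = time_shuffle(T_net, rng)
randomized = edge_randomize(T_net, rng)
```

Both nulls preserve the per-snapshot link counts; only the time-shuffler additionally preserves each snapshot's internal topology.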

Temporal Dynamism vs. Statism

We summarize how much a temporal network changes (“dynamism”) versus remains similar (“statism”) using complementary quantities computed from a binary temporal network $\mathcal{T}=\{B^{(1)},\dots,B^{(S)}\}$ of size $N$ across $S$ snapshots.

Let $E=\binom{N}{2}$ and let $\mathbf{b}^{(t)}\in\{0,1\}^{E}$ be the flattened lower-triangular edge vector at snapshot $t$.

Transition probability

$$ p_{\mathrm{trans}} = \frac{1}{(S-1)E}\sum_{t=1}^{S-1}\sum_{e=1}^{E}\mathbf{1}\{b^{(t)}_e\neq b^{(t+1)}_e\}. $$

Global dynamism via entropy

$$ H_e = -\big[p_e\log p_e + (1-p_e)\log(1-p_e)\big], \qquad p_e = \frac{1}{S}\sum_{t=1}^S b^{(t)}_e, $$

$$ gEnt = \frac{1}{E\,\log 2}\sum_{e=1}^{E} H_e, $$

so that $gEnt\in[0,1]$, with the maximum attained when every edge is active in exactly half of the snapshots.

Successive cosine similarity

$$ \mathrm{cos}(t,t{+}1) = \frac{\langle \mathbf{b}^{(t)}, \mathbf{b}^{(t+1)}\rangle} {\|\mathbf{b}^{(t)}\|_2\,\|\mathbf{b}^{(t+1)}\|_2}, $$

$$ \mathrm{mnSim}=\frac{1}{S-1}\sum_{t=1}^{S-1}\mathrm{cos}(t,t{+}1)\in[0,1], $$

with the convention $\mathrm{cos}(t,t{+}1)=0$ when either snapshot has no active edges.

Combined dynamism index

$$ \mathrm{DynH} = gEnt \cdot (1-\mathrm{mnSim}). $$
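These four quantities can be computed together from the upper-triangular edge vectors; a minimal NumPy sketch:

```python
import numpy as np

def dynamism_measures(T_net):
    """p_trans, gEnt, mnSim and DynH for a binary (S, N, N) temporal network."""
    S, N, _ = T_net.shape
    iu = np.triu_indices(N, k=1)
    b = T_net[:, iu[0], iu[1]].astype(float)          # (S, E) edge vectors
    E = b.shape[1]
    # transition probability: fraction of edge slots that flip between frames
    p_trans = float(np.mean(b[:-1] != b[1:]))
    # per-edge activation entropy, normalized so gEnt lies in [0, 1]
    p = b.mean(axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        H = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    gEnt = float(np.nan_to_num(H).sum() / (E * np.log(2)))   # 0 log 0 := 0
    # mean cosine similarity between successive frames (0 for empty frames)
    num = (b[:-1] * b[1:]).sum(axis=1)
    den = np.linalg.norm(b[:-1], axis=1) * np.linalg.norm(b[1:], axis=1)
    cos = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    mnSim = float(cos.mean())
    return p_trans, gEnt, mnSim, gEnt * (1 - mnSim)

# toy network: independently resampled edges, hence high dynamism
rng = np.random.default_rng(3)
up = np.triu((rng.random((30, 10, 10)) < 0.4).astype(int), 1)
T_net = up + up.transpose(0, 2, 1)
p_trans, gEnt, mnSim, DynH = dynamism_measures(T_net)
```

A frozen (repeated-snapshot) network scores zero on $p_{\mathrm{trans}}$, $gEnt$, and $\mathrm{DynH}$, as the interpretation below expects.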

Mutual information between successive frames

As a complementary measure of redundancy (statism), we compute the normalized mutual information (NMI) between consecutive snapshots.

For two binary edge vectors $\mathbf{b}^{(t)}$ and $\mathbf{b}^{(t+1)}$ with $E$ entries, we compute their joint distribution $p(x,y)$ over the states $\{(0,0),(0,1),(1,0),(1,1)\}$, and the marginals $p(x)$ and $p(y)$.

The mutual information is:

$$ I(\mathbf{b}^{(t)};\mathbf{b}^{(t+1)}) = \sum_{x,y\in\{0,1\}} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)}. $$

The normalized form is:

$$ \mathrm{NMI}(\mathbf{b}^{(t)},\mathbf{b}^{(t+1)}) = \frac{I(\mathbf{b}^{(t)};\mathbf{b}^{(t+1)})} {\max\{H(\mathbf{b}^{(t)}), H(\mathbf{b}^{(t+1)})\}}, $$

with Shannon entropy $H(\mathbf{b})$. $\mathrm{NMI}\in[0,1]$. We report the average across all successive pairs:

$$ \mathrm{MI} = \frac{1}{S-1}\sum_{t=1}^{S-1} \mathrm{NMI}\!\left(\mathbf{b}^{(t)}, \mathbf{b}^{(t+1)}\right). $$
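A direct sketch of the NMI and of its average over successive pairs:

```python
import numpy as np

def nmi_pair(x, y):
    """Normalized mutual information between two binary edge vectors."""
    # joint distribution over the four states (0,0), (0,1), (1,0), (1,1)
    joint = np.array([[np.mean((x == a) & (y == b)) for b in (0, 1)]
                      for a in (0, 1)])
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    def H(q):
        q = q[q > 0]                         # 0 log 0 := 0
        return float(-(q * np.log(q)).sum())
    I = H(px) + H(py) - H(joint.ravel())     # I(X;Y) = H(X) + H(Y) - H(X,Y)
    denom = max(H(px), H(py))
    return I / denom if denom > 0 else 0.0

def mean_nmi(T_net):
    """MI: average NMI over successive snapshot pairs of a binary network."""
    S, N, _ = T_net.shape
    iu = np.triu_indices(N, k=1)
    b = T_net[:, iu[0], iu[1]]
    return float(np.mean([nmi_pair(b[t], b[t + 1]) for t in range(S - 1)]))
```

For a static temporal network (identical snapshots with a nondegenerate edge vector), every successive pair yields $\mathrm{NMI}=1$.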

Interpretation.

  • High $p_{\mathrm{trans}}$, high $gEnt$, low $\mathrm{mnSim}$, low $\mathrm{MI}$ indicate volatile dynamics: edges switch often and successive snapshots differ.
  • Low $p_{\mathrm{trans}}$, low $gEnt$, high $\mathrm{mnSim}$, high $\mathrm{MI}$ indicate static structure: edges rarely switch and successive snapshots are redundant.
  • $\mathrm{DynH}$ serves as a single scalar that is large only under jointly high entropy and low similarity.

Latency and Circulation Measures

Temporal networks allow us to study how fast information, influence, or random walkers can spread through time-respecting paths. In this group we capture two complementary notions:

  • Latency: how long it takes for a node to reach another (or return to itself) via time-respecting connections.
  • Circulation: the rate and latency of paths that eventually come back to their source.

Notation.

A temporal path from node $i$ to $j$ is a sequence of edges $(i=v_0, v_1), (v_1, v_2), \dots, (v_{\ell-1}, v_\ell=j)$ occurring at increasing times $t_1<t_2<\dots<t_\ell$. The latency of the path is $t_\ell - t_1$, i.e. the elapsed time from departure to arrival.

Latency via temporal shortest paths (Smart Walker)

We first consider deterministic earliest-arrival paths. The Smart Walker algorithm propagates reachability forward in time: at each snapshot $t$, we update the reachability matrix by multiplying with $B^{(t)}$ (the adjacency at time $t$). The first time a node $j$ becomes reachable from $i$ is recorded as the latency $d_{ij}$.

This yields the latency matrix $D=(d_{ij})$, where:

$$ d_{ij} = \min\{\, \tau : \exists\ \text{time-respecting path $i\to j$ within $\tau$ steps}\,\}. $$

The matrix encodes minimum latencies between all pairs of nodes.
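A minimal sketch of the forward reachability propagation described above (boolean matrix products stand in for the multiplication with $B^{(t)}$; the self-loop convention follows the text):

```python
import numpy as np

def latency_matrix(T_net, start=0):
    """Earliest-arrival latencies d_ij by forward reachability propagation.

    Self-loops are added so walkers may wait in place, per the text."""
    S, N, _ = T_net.shape
    reach = np.eye(N, dtype=bool)               # reach[i, j]: j reachable from i
    D = np.full((N, N), np.inf)
    np.fill_diagonal(D, 0.0)
    for tau, t in enumerate(range(start, S), start=1):
        step = T_net[t].astype(bool) | np.eye(N, dtype=bool)
        new_reach = (reach.astype(int) @ step.astype(int)) > 0
        D[new_reach & ~reach] = tau             # first time j became reachable
        reach = new_reach
    return D

# toy: edge (0,1) at t=0, edge (1,2) at t=1 -> node 2 reached from 0 in 2 steps
T_net = np.zeros((2, 3, 3), dtype=int)
T_net[0, 0, 1] = T_net[0, 1, 0] = 1
T_net[1, 1, 2] = T_net[1, 2, 1] = 1
D = latency_matrix(T_net)
```

Unreachable pairs keep the value $\infty$, as in the toy case of node $0$ seen from node $2$ (the required edges occur in the wrong temporal order).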

Latency via random walks (Random Walker)

While the Smart Walker yields deterministic earliest-arrival latencies, we can also study stochastic exploration. A Random Walker moves at each time step $t$ by choosing uniformly among the neighbors of its current node in $B^{(t)}$. The first-passage time from $i$ to $j$ is the time when $j$ is visited for the first time. Repeating across many trials gives an empirical latency distribution, summarized by mean latency and reachability rates.
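A single-trial sketch of the random-walk first-passage time (the uniform-neighbor rule follows the text; names are illustrative):

```python
import numpy as np

def first_passage(T_net, i, j, rng):
    """One random-walk trial: first snapshot at which j is reached from i.

    At each step the walker picks uniformly among the current node's
    neighbors in B^(t); if the diagonal is set to 1, waiting in place is
    one of the options. Returns np.inf if j is never reached in time."""
    node = i
    for t in range(len(T_net)):
        nbrs = np.flatnonzero(T_net[t, node])
        if len(nbrs):
            node = rng.choice(nbrs)
        if node == j:
            return t + 1
    return np.inf

# toy: a persistent edge (0, 1) forces first passage 0 -> 1 at the first step
rng = np.random.default_rng(4)
T_net = np.zeros((10, 3, 3), dtype=int)
T_net[:, 0, 1] = T_net[:, 1, 0] = 1
fp = first_passage(T_net, 0, 1, rng)
```

Repeating `first_passage` over many trials and start times yields the empirical latency distribution and reachability rates mentioned above.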

Circulation latency

To assess how quickly activity returns to its source, we define the circulation latency for node $i$ as the minimum $\tau>0$ such that a time-respecting path starting and ending at $i$ exists within $\tau$ steps. Averaging across start times and nodes yields distributions of return latencies.

Circulation rate

In addition to latency, we quantify how frequently circulation occurs. For node $i$, the circulation rate is the proportion of trials or start times in which a return path is found within the observation horizon $T$. This gives an overall measure of how “self-circulating” the network is.
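Both circulation quantities can be sketched with one reachability pass per (node, start time) pair; this minimal implementation treats a "return" as any time-respecting walk that leaves $i$ and comes back:

```python
import numpy as np

def circulation(T_net, i, t0=0):
    """(latency, found): first tau with a time-respecting walk leaving i
    after t0 and returning to i; (inf, False) if none within the horizon."""
    S, N, _ = T_net.shape
    out = np.zeros(N, dtype=bool)            # nodes reached after leaving i
    eye = np.eye(N, dtype=bool)
    for tau, t in enumerate(range(t0, S), start=1):
        B = T_net[t].astype(bool) & ~eye     # ignore self-loops for departures
        step = B | eye                       # but allow waiting along the way
        out = ((out.astype(int) @ step.astype(int)) > 0) | B[i]
        if out[i]:
            return tau, True                 # walk came back to its source
    return np.inf, False

def circulation_rate(T_net):
    """Fraction of (node, start time) pairs with a return within the horizon."""
    S, N, _ = T_net.shape
    return float(np.mean([circulation(T_net, i, t0)[1]
                          for i in range(N) for t0 in range(S)]))
```

The shortest possible return uses the same dyad twice, e.g. edge $(0,1)$ active at two consecutive snapshots gives a return latency of 2.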

Interpretation.

  • Latency measures capture the efficiency of temporal reachability (how fast signals can propagate).
  • Circulation latency measures how quickly activity tends to come back to its source (a notion of temporal recurrence).
  • Circulation rate quantifies how likely circulation is at all, separating frequent recurrences from rare events.

Segregation and Cohesion Measures

Beyond global dynamism and latency, temporal networks also reveal whether nodes form stable partnerships, diversify their associations, or cluster into cohesive neighborhoods. These measures emphasize segregation, in the sense of how local neighborhoods evolve in time, complementing global integration measures.

Node persistence

Persistence captures whether a node tends to maintain links with the same set of neighbors across time. For node $i$, we compute the persistence probability:

$$ P_i = \frac{1}{|\mathcal{N}_i^\star|}\sum_{j\in\mathcal{N}_i^\star} \frac{1}{S}\sum_{t=1}^S B^{(t)}_{ij}, $$

where $\mathcal{N}_i^\star$ is the set of neighbors that ever connect to $i$. This is compared to a theoretical chance threshold (based on mean edge density) to identify nodes that are significantly persistent. Persistence is useful for detecting “core” regions whose associations are consistently present.
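A sketch of $P_i$ (the comparison against the chance threshold is omitted here):

```python
import numpy as np

def persistence(T_net):
    """P_i: mean activation fraction over the neighbors that ever connect to i."""
    S, N, _ = T_net.shape
    f = T_net.mean(axis=0).astype(float)     # fraction of snapshots edge is on
    np.fill_diagonal(f, 0.0)
    ever = (f > 0).sum(axis=1)               # |N_i*|: partners that ever appear
    return np.divide(f.sum(axis=1), ever,
                     out=np.zeros(N, dtype=float), where=ever > 0)

# toy: edge (0,1) always on, edge (0,2) on half the time
T_net = np.zeros((4, 3, 3), dtype=int)
T_net[:, 0, 1] = T_net[:, 1, 0] = 1
T_net[:2, 0, 2] = T_net[:2, 2, 0] = 1
P = persistence(T_net)
```

Here node 1 has a single, always-on partner ($P_1=1$), while node 0 averages a permanent and an intermittent partner ($P_0=0.75$).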

Partner stability

While persistence tracks whether neighbors recur at all, partner stability emphasizes the regularity with which specific partners recur. For node $i$ and each neighbor $j$, we consider the time indices $\{t_k\}$ at which the edge $(i,j)$ is active, with inter-contact intervals $\Delta t = t_{k+1}-t_k$. The stability score is:

$$ S_i = 1 - \frac{\langle \Delta t\rangle}{S}, $$

where $\langle \Delta t\rangle$ denotes the mean inter-contact interval pooled across the partners of $i$.

High stability suggests certain dyads retain synchronized fluctuations over repeated episodes.
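A per-dyad sketch of the stability score (pooling across a node's partners is left to the caller; names are illustrative):

```python
import numpy as np

def dyad_stability(times, S):
    """1 - <Delta t>/S for one dyad's activation times; needs >= 2 contacts."""
    tau = np.diff(np.sort(np.asarray(times)))   # inter-contact intervals
    return 1.0 - tau.mean() / S
```

Frequent, tightly spaced contacts push the score toward 1; sparse contacts push it toward 0.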

Partner diversity

Nodes may either specialize with a few repeated partners or diversify across many. We measure diversity via normalized Shannon entropy. For node $i$, let $p_{ij}$ be the fraction of its active edge-time spent with partner $j$, normalized so that $\sum_j p_{ij}=1$. Then:

$$ D_i = \frac{-\sum_{j} p_{ij}\,\log p_{ij}}{\log k_i}, $$

where $k_i=|\{j: p_{ij}>0\}|$ is the number of distinct partners (with $D_i:=0$ when $k_i\le 1$). $D_i\in[0,1]$: high values indicate broad and balanced partner usage; low values indicate specialization.
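A sketch of $D_i$, under the convention that nodes with fewer than two partners score zero:

```python
import numpy as np

def partner_diversity(T_net):
    """Normalized Shannon entropy of each node's partner usage."""
    S, N, _ = T_net.shape
    counts = T_net.sum(axis=0).astype(float)     # activations per dyad
    np.fill_diagonal(counts, 0.0)
    D = np.zeros(N)
    for i in range(N):
        c = counts[i][counts[i] > 0]
        if len(c) >= 2:                          # entropy undefined for k_i < 2
            p = c / c.sum()
            D[i] = float(-(p * np.log(p)).sum() / np.log(len(c)))
    return D

# toy: node 0 splits its edge-time evenly between partners 1 and 2
T_net = np.zeros((4, 3, 3), dtype=int)
T_net[:2, 0, 1] = T_net[:2, 1, 0] = 1
T_net[2:, 0, 2] = T_net[2:, 2, 0] = 1
D = partner_diversity(T_net)
```

In the toy case node 0 is maximally diverse ($D_0=1$), while nodes 1 and 2 each have a single partner ($D=0$).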

Neighborhood memory

Neighborhood memory evaluates how similar a node’s neighbor set is across time. For lag $\ell$, define:

$$ M_i(\ell) = \frac{1}{S-\ell}\sum_{t=1}^{S-\ell} \frac{|\mathcal{N}_i^{(t)} \cap \mathcal{N}_i^{(t+\ell)}|} {|\mathcal{N}_i^{(t)} \cup \mathcal{N}_i^{(t+\ell)}|}, $$

the average Jaccard similarity between neighbor sets $\mathcal{N}_i^{(t)}$. High memory implies structural inertia of neighborhoods.
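The lagged Jaccard similarity vectorizes directly (a sketch assuming a zero diagonal):

```python
import numpy as np

def neighborhood_memory(T_net, ell=1):
    """M_i(ell): mean Jaccard similarity of i's neighbor sets at lag ell."""
    A = T_net.astype(bool)
    a, b = A[:-ell], A[ell:]
    inter = (a & b).sum(axis=2).astype(float)        # (S - ell, N)
    union = (a | b).sum(axis=2).astype(float)
    # Jaccard per (t, i); empty-union cases contribute 0
    jac = np.divide(inter, union, out=np.zeros_like(inter), where=union > 0)
    return jac.mean(axis=0)

# toy: a frozen ring network has perfect neighborhood memory
ring = np.roll(np.eye(5, dtype=int), 1, axis=1)
ring = ring + ring.T
T_net = np.stack([ring] * 6)
mem = neighborhood_memory(T_net)
```

Increasing `ell` probes memory over longer lags, tracing how quickly structural inertia decays.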

Clustering coefficients

Clustering describes whether a node’s neighbors are themselves interconnected. We extend this to temporal networks in two ways:

  • Static clustering: the time average of the usual clustering coefficient, $$ C_i = \frac{1}{S}\sum_{t=1}^{S} \frac{\#\{\text{links among }\mathcal{N}_i^{(t)}\}}{\binom{k_i^{(t)}}{2}}. $$
  • Temporal clustering: fraction of “time-respecting triangles”, where $i$ connects to $j$ at $t_1$, to $k$ at $t_2>t_1$, and $j$ connects to $k$ at $t_3\in[t_1,t_2]$.
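A sketch of the time-averaged static clustering coefficient (the temporal variant would add time-ordering checks on the same scaffolding):

```python
import numpy as np

def mean_static_clustering(T_net):
    """Time average of the per-snapshot clustering coefficient, per node."""
    S, N, _ = T_net.shape
    C = np.zeros(N)
    for t in range(S):
        B = T_net[t].astype(bool)
        np.fill_diagonal(B, False)               # ignore self-loops
        for i in range(N):
            nbrs = np.flatnonzero(B[i])
            k = len(nbrs)
            if k >= 2:                           # undefined below 2 neighbors
                links = B[np.ix_(nbrs, nbrs)].sum() / 2
                C[i] += links / (k * (k - 1) / 2)
    return C / S

# toy: a frozen complete graph is perfectly clustered at every snapshot
T_net = np.ones((3, 4, 4), dtype=int)
C = mean_static_clustering(T_net)
```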

Clustering measures are essential for identifying temporal communities: high values reflect segregation into tightly-knit modules of co-fluctuation.

Returnability

Returnability quantifies whether a node's neighbors tend to reappear later in the observation window. For node $i$, let $\mathcal{N}_i^{(t)}$ be its neighbors at $t$. Returnability is the average fraction of neighbors at $t$ that reappear at some later time:

$$ R_i = \frac{1}{S-1}\sum_{t=1}^{S-1} \frac{|\mathcal{N}_i^{(t)} \cap \bigcup_{u>t}\mathcal{N}_i^{(u)}|} {|\mathcal{N}_i^{(t)}|}, $$

where snapshots with $\mathcal{N}_i^{(t)}=\varnothing$ are skipped.

High returnability indicates enduring associations; low values suggest ephemeral partners.

Link burstiness

For edge $(i,j)$ with activation times $\{t_k\}$, define inter-event intervals $\tau_k = t_{k+1}-t_k$. The burstiness index is:

$$ B_{ij} = \frac{\sigma_\tau - \mu_\tau}{\sigma_\tau + \mu_\tau}, $$

where $\mu_\tau$ and $\sigma_\tau$ are the mean and standard deviation of inter-event times. $B_{ij}=0$ corresponds to Poisson-like regularity, $B_{ij}>0$ to bursty clustering, and $B_{ij}<0$ to anti-bursty regular spacing.
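A sketch of the burstiness index for a single edge's activation times:

```python
import numpy as np

def burstiness(times):
    """B = (sigma - mu) / (sigma + mu) over inter-event intervals."""
    tau = np.diff(np.sort(np.asarray(times, dtype=float)))
    mu, sigma = tau.mean(), tau.std()
    return float((sigma - mu) / (sigma + mu))

# perfectly regular spacing gives B = -1; clustered events push B above 0
```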

Interpretation.

  • High persistence, stability, memory, clustering, and returnability indicate long-lived, cohesive communities (statism).
  • High diversity and burstiness reflect flexible, volatile, or exploratory association patterns (dynamism).

Thus, segregation metrics complement dynamism and latency measures by focusing on the mesoscopic organization of co-fluctuation patterns, clarifying whether the system is dominated by stable modules or dynamic partner switching.