Elephant Random Walk

Jonah Green (Lehman College, CUNY), Taylor Meredith (NYU), Rachel Tan (UConn)

The Elephant Random Walk is a stochastic process with long memory. It modifies the classical random walk on the integer lattice \({\mathbb Z}\): at each step the walker samples a sign (“\(+\)” or “\(-\)”) according to the empirical distribution of its past steps (long memory — hence “elephant”), then with fixed probability \(p\) accepts it, walking one step in that direction, and otherwise rejects it, walking in the opposite direction. In this project we worked on several fronts.
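As a concrete illustration of the dynamics just described, here is a minimal simulation sketch of one trajectory (the function name and interface are our own):

```python
import random

def elephant_walk(n_steps, p, rng=None):
    """Simulate one trajectory of the Elephant Random Walk on Z and
    return the final position.

    p is the probability of repeating a uniformly sampled past step;
    with probability 1 - p the walker steps in the opposite direction.
    """
    rng = rng or random.Random()
    steps = [rng.choice([-1, 1])]        # first step: a fair coin flip
    for _ in range(n_steps - 1):
        past = rng.choice(steps)         # recall a uniformly random past step
        steps.append(past if rng.random() < p else -past)
    return sum(steps)
```

For \(p = 1/2\) the memory cancels and the model reduces to the simple symmetric walk; \(p = 3/4\) is the critical value separating the diffusive and superdiffusive regimes of the original model.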

  • The main goal was to learn and apply the theory of additive functionals of Markov chains to some “short” memory versions of the model. In particular, our paper answers a question recently posed by Gut and Stadtmüller.

Our paper (to appear in the Brazilian Journal of Probability): Finite-Memory Elephant Random Walk and the Central Limit Theorem for Additive Functionals

  • We also worked numerically on some open problems concerning the original model.

Nonlinear Random Walk

Jonah Botvinick-Greenhouse (Amherst College), Mark Kong (Harvard), Connor Fitch (Bowdoin College)

The model under study is a dynamical system describing the time-evolution of probability measures on the vertices of a graph. A simple random walk on a graph is a discrete-time Markov process in which the probability of transitioning from vertex \(i\) to a neighboring vertex \(j\) is \(\frac{1}{\deg(i)}\). We can view the model in the following way: instead of a single random walker, consider infinitely many random walkers on the graph; the probability distribution then corresponds to the proportion of walkers at each vertex. Next, we let the probability of walking from vertex \(i\) to vertex \(j\) depend on the number of walkers at vertex \(j\). To do this, we modify the probability of moving to any given neighbor \(j\) to be proportional to \(e^{\alpha p_j}\), where \(\alpha \in \mathbb{R}\) and \(p_j\) is the proportion of walkers at vertex \(j\).
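The update rule can be written compactly as a map on probability vectors. The following sketch (with our own names) implements one step; setting \(\alpha = 0\) recovers the simple random walk:

```python
import numpy as np

def nonlinear_step(p, adj, alpha):
    """One update of the nonlinear random walk.

    p:     probability vector over the n vertices
    adj:   n x n 0/1 adjacency matrix
    alpha: coupling parameter; alpha = 0 gives the simple random walk
    """
    w = np.exp(alpha * p)                 # attractiveness of each vertex
    T = adj * w                           # T[i, j] proportional to e^{alpha p_j}
    T = T / T.sum(axis=1, keepdims=True)  # normalize each row over neighbors
    return p @ T                          # push the distribution forward
```

On a vertex-transitive graph such as \(K_n\), the uniform distribution is a fixed point of this map for every \(\alpha\).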

Some of the interesting dynamics that arise under this model include multistability and period-doubling bifurcations. Multistability describes the phenomenon where more than one stable distribution exists on a graph for a given \(\alpha\). A period-doubling bifurcation occurs for sufficiently negative \(\alpha\) on all graphs we have studied: beyond some \(\alpha_c^-<0\), long-term iterations of the random walk process always converge to a period-2 cycle, as opposed to the period-1 stationary solutions that occur for \(\alpha > \alpha_c^-\). In a somewhat similar fashion, for the graphs we have studied there exists an \(\alpha_c^+>0\) beyond which the uniform distribution loses stability. We have methods for determining \(\alpha_c^-\) and \(\alpha_c^+\) for regular graphs by analyzing the stability of the uniform distribution as a fixed point. Moreover, we can classify stationary distributions on \(K_n\) and conjecture that we have found them all. Our work also deals with \(\alpha = \pm \infty\); in those limits, we find sufficient conditions for convergence to period-2 trajectories and can classify all stationary distributions.
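The stability analysis of the uniform fixed point can be sketched numerically: differentiate the walk map at the uniform distribution and examine the spectrum on the subspace of mass-preserving (sum-zero) perturbations. This is our own numerical sketch, with hypothetical helper names; the paper's actual method may be analytic:

```python
import numpy as np

def walk_map(p, adj, alpha):
    """One step of the nonlinear random walk."""
    w = np.exp(alpha * p)
    T = adj * w
    T = T / T.sum(axis=1, keepdims=True)
    return p @ T

def uniform_spectral_radius(adj, alpha, eps=1e-6):
    """Largest |eigenvalue| of the Jacobian of the walk map at the uniform
    distribution, restricted to sum-zero perturbations. A value below 1
    indicates that the uniform fixed point is stable."""
    n = len(adj)
    u = np.full(n, 1.0 / n)
    J = np.empty((n, n))
    for j in range(n):                        # central finite differences
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (walk_map(u + e, adj, alpha)
                   - walk_map(u - e, adj, alpha)) / (2 * eps)
    P = np.eye(n) - np.ones((n, n)) / n       # project out the constant mode
    return max(abs(np.linalg.eigvals(P @ J @ P)))
```

For example, on \(K_4\) the spectral radius at \(\alpha = 0\) is \(1/3\) (the simple random walk, stable), while for large positive \(\alpha\) it exceeds 1, signaling that \(\alpha_c^+\) has been crossed.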

Shown below are the distributions on the vertices of a 5 by 5 torus, cycling through different values of positive \(\alpha\). Notice how for \(\alpha \geq 14\), the uniform distribution loses stability, meaning that \(\alpha_c^+\) has been passed.

Quasistationary Distributions for Voter Model

R. Oliver VanderBerg (Kenyon College), Phil Speegle (U Alabama)

The (discrete-time) voter model on a finite graph is a Markov chain whose states are colorings of the vertices of the graph (the different colors represent different opinions). At each time step the model moves to the next state by selecting a vertex uniformly at random and changing its color (opinion) to match that of one of its neighbors, also chosen at random. Consequently, once all the vertices are the same color, referred to as reaching consensus, the chain remains in that state forever.
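These dynamics are easy to simulate directly. The sketch below (with our own names) runs the chain until consensus and reports how many steps it took:

```python
import random

def voter_until_consensus(adj, colors, rng=None):
    """Run the discrete-time voter model until consensus is reached and
    return the number of steps taken.

    adj:    adjacency list of the graph (list of neighbor lists)
    colors: initial opinion of each vertex
    """
    rng = rng or random.Random()
    colors = list(colors)
    steps = 0
    while len(set(colors)) > 1:          # loop until all opinions agree
        v = rng.randrange(len(colors))   # pick a vertex uniformly at random
        u = rng.choice(adj[v])           # pick one of its neighbors at random
        colors[v] = colors[u]            # v adopts u's opinion
        steps += 1
    return steps
```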

If the graph is connected, consensus is eventually reached. However, conditioned on consensus not yet having been reached, the distribution of the chain approaches a limit known as the quasistationary distribution (QSD) of the model. The goal of this project is to analyze these distributions for complete bipartite graphs. Our main result identifies power-law behavior of the QSD as the size of one of the partition classes tends to infinity, in sharp contrast to the case of the complete graph, where the QSD is uniform.
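For very small graphs, the QSD can be computed by brute force: restrict the transition matrix to the non-consensus states and power-iterate while renormalizing the surviving mass. This is our own sketch (feasible only for tiny examples; the complete bipartite graphs in the paper require a separate analysis), shown here for the two-color model:

```python
import itertools
import numpy as np

def voter_qsd(adj, n_iter=5000):
    """Approximate the quasistationary distribution of the 2-color
    discrete-time voter model on a small graph by power iteration of the
    chain restricted to non-consensus states."""
    n = len(adj)
    states = [s for s in itertools.product((0, 1), repeat=n)
              if 0 < sum(s) < n]              # drop the two consensus states
    index = {s: k for k, s in enumerate(states)}
    Q = np.zeros((len(states), len(states)))  # sub-stochastic restriction
    for s in states:
        for v in range(n):
            nbrs = [u for u in range(n) if adj[v][u]]
            for u in nbrs:
                t = list(s)
                t[v] = s[u]                   # v adopts u's opinion
                t = tuple(t)
                if t in index:                # transitions to consensus drop out
                    Q[index[s], index[t]] += 1.0 / (n * len(nbrs))
    mu = np.full(len(states), 1.0 / len(states))
    for _ in range(n_iter):
        mu = mu @ Q                           # one conditioned step
        mu /= mu.sum()                        # renormalize on survival
    return states, mu
```

On the complete graph \(K_3\), for instance, this recovers the uniform QSD over the six non-consensus colorings.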

Our submitted manuscript: The Voter Model on Complete Bipartite Graphs.

The video below shows the distribution of the model on a complete bipartite graph, with one group of size 100 and one of size 2, given that consensus has not occurred. Each number on the x-axis represents a possible state of the system. The chain starts in the state where all vertices except one from the larger group are the same color. In states 1 to 100 and 201 to 300, the two vertices of the smaller group share the same color. The animation shows how the distribution approaches the quasistationary distribution over time.

