2018
Biological Evolution Models Exhibiting Power Law Tails
Carter Bedsole (George Fox U), Grace O’Neil (UPenn)
In this project we worked on a toy model for biological evolution involving a random, asymptotically growing number of population clusters. The model is defined through very simple dynamics. Its main feature, as observed through numerical simulations, is the emergence of power-law tails for the population cluster sizes. The main goal is to obtain a rigorous proof of these results and to identify the exponents.
Our manuscript: Power-Law Tails in a Fitness-Driven Model for Biological Evolution
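The precise dynamics are spelled out in the manuscript. As a stand-in, here is a minimal Python sketch of one classical size/fitness-driven growth mechanism (a Yule-type process, not necessarily the model studied in the project) whose simulated cluster sizes already exhibit a power-law tail.

```python
import random
from collections import Counter

def simulate_clusters(steps=100_000, p_new=0.1, seed=0):
    """Illustrative dynamics only (not necessarily the project's model):
    at each step a brand-new cluster of size 1 appears with probability
    p_new; otherwise an existing cluster, chosen with probability
    proportional to its size, gains one individual."""
    rng = random.Random(seed)
    owner = [0]                # owner[i] = cluster id of individual i
    n_clusters = 1
    for _ in range(steps):
        if rng.random() < p_new:
            owner.append(n_clusters)                 # new cluster of size 1
            n_clusters += 1
        else:
            # a uniform individual gives a size-biased choice of cluster
            owner.append(owner[rng.randrange(len(owner))])
    return Counter(owner)      # cluster id -> cluster size

if __name__ == "__main__":
    size_histogram = Counter(simulate_clusters().values())  # size -> count
    # On a log-log scale the tail of this histogram is roughly linear,
    # i.e. the fraction of clusters of size k decays like k^(-alpha).
    for k in sorted(size_histogram)[:10]:
        print(k, size_histogram[k])
```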
Efficient Markovian Couplings
Mason DiCicco (UCONN), Michael Dotzel (U Missouri), Ewan Harlow (U Wisconsin)
If you run a Markov chain for a long time, then under some natural assumptions on the chain, you'll observe that the proportion of time it spends in each state approaches a deterministic positive constant. This phenomenon is called ergodicity, and it is one of the most important aspects of Markov chains (and what drives the simulations we present on this site). In this project we study ergodicity through a method called coupling, then turn to the method itself and try to answer the question: when does a Markov chain possess a coupling which is both tractable (more specifically, Markovian) and efficient, that is, one that gives a sharp bound on the rate of convergence?
Here’s a poster Mason presented at the UConn Frontiers in Undergraduate Research Exhibition in April 2019.
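To make the coupling idea concrete, here is a small Python sketch (an illustration in the spirit of the project, not taken from it) of the textbook Markovian coupling for the coordinate-refresh chain on the hypercube {0,1}^n: both copies always refresh the same coordinate with the same bit, so the coupling time is a coupon-collector time of order n log n, and the coupling inequality ||P^t(x,·) − π||_TV ≤ P(T > t) turns that into a sharp mixing-time bound.

```python
import math
import random

def coupling_time(n=50, seed=None):
    """Markovian coupling of two copies of the coordinate-refresh chain on
    {0,1}^n: both copies refresh the SAME coordinate with the SAME random
    bit, so a coordinate agrees forever once it has been refreshed."""
    rng = random.Random(seed)
    x = [0] * n                      # chain started at all-zeros
    y = [1] * n                      # chain started at all-ones
    t = 0
    while x != y:
        i = rng.randrange(n)         # shared coordinate choice
        b = rng.randrange(2)         # shared new bit
        x[i] = b
        y[i] = b
        t += 1
    return t

if __name__ == "__main__":
    n, trials = 50, 200
    avg = sum(coupling_time(n) for _ in range(trials)) / trials
    # The chains couple once every coordinate has been refreshed at least
    # once (coupon collector), so E[T] is about n*ln(n); the coupling
    # inequality then bounds the mixing time by the tail of T.
    print(f"average coupling time: {avg:.1f}, n*ln(n) = {n * math.log(n):.1f}")
```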
Memory In Random Sequences
Emily Gentles (U Arkansas), Natalie Meacham (Bryn Mawr), Erica West (Colorado School of Mines)
In this project we investigate the notion of memory in a sequence of symbols, developing a rigorous theoretical framework both for quantifying the depth of memory a sequence exhibits and for fitting an optimal source (a Markov chain with memory) to the sequence, as well as computer algorithms implementing our results. We will apply the tools we develop to real data.
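As an illustration of the fitting step (a hypothetical sketch, not the project's actual framework), the following Python snippet fits empirical order-k transition probabilities to a symbol sequence and selects a memory depth with a BIC-style penalty.

```python
import math
from collections import Counter, defaultdict

def fit_order_k(seq, k):
    """Maximum-likelihood order-k Markov source: empirical counts of
    (previous k symbols) -> next symbol."""
    counts = defaultdict(Counter)
    for i in range(k, len(seq)):
        counts[tuple(seq[i - k:i])][seq[i]] += 1
    return counts

def log_likelihood(seq, k, start):
    """Log-likelihood of seq[start:] under the fitted order-k model."""
    counts = fit_order_k(seq, k)
    ll = 0.0
    for i in range(start, len(seq)):
        ctx = tuple(seq[i - k:i])
        ll += math.log(counts[ctx][seq[i]] / sum(counts[ctx].values()))
    return ll

def estimate_memory(seq, alphabet_size, max_k=4):
    """Pick a memory depth with a BIC-style penalty: a larger k always
    raises the likelihood, so we charge for the extra free parameters."""
    best_k, best_score = 0, -math.inf
    for k in range(max_k + 1):
        n_params = (alphabet_size ** k) * (alphabet_size - 1)
        # evaluate every order on the same positions (from max_k on) so
        # the likelihoods are directly comparable
        score = (log_likelihood(seq, k, start=max_k)
                 - 0.5 * n_params * math.log(len(seq)))
        if score > best_score:
            best_k, best_score = k, score
    return best_k

if __name__ == "__main__":
    import random
    rng = random.Random(1)
    # toy order-1 source over {a, b}: tends to repeat the previous symbol
    seq = ["a"]
    for _ in range(5000):
        seq.append(seq[-1] if rng.random() < 0.8 else rng.choice("ab"))
    print("estimated memory depth:", estimate_memory(seq, alphabet_size=2))
```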
Quasistationary Distributions
Not all Markov chains are ergodic. Some Markov chains eventually end up at some terminal resting place. In many cases, an interesting structure appears when we look at the system conditioned on not having reached the terminal state after a long time. This structure is known as quasistationarity. Here’s an example to illustrate this. If I play a slot machine every day, I’ll eventually lose all my money (the game is not fair). Assuming this has not happened after playing for many years, how likely am I to leave a nice inheritance behind me?
Our main research theme will be identifying quasistationary structure for some simple Markov chains.
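To see what conditioning on survival looks like computationally, here is a minimal Python sketch (illustrative only, not one of the project's models) for an unfair gambler's ruin: push a distribution forward on the non-absorbed states, discard the mass that was absorbed, renormalize, and iterate; the limit is the quasistationary distribution.

```python
def quasistationary(N=20, p=0.45, iters=5000):
    """Quasistationary distribution of an unfair gambler's ruin on
    {0, ..., N} (0 absorbing, wealth capped at N), computed by power
    iteration on the sub-stochastic transition matrix restricted to the
    surviving states {1, ..., N}."""
    q = 1.0 - p
    nu = [1.0 / N] * N                     # nu[i] = mass on wealth i + 1
    for _ in range(iters):
        new = [0.0] * N
        for i, mass in enumerate(nu):      # current wealth is i + 1
            if i + 1 < N:
                new[i + 1] += mass * p     # win a dollar
            else:
                new[i] += mass * p         # at the cap: stay put
            if i > 0:
                new[i - 1] += mass * q     # lose a dollar
            # i == 0 and losing means ruin: that mass leaves the system
        survive = sum(new)                 # probability of surviving the step
        nu = [m / survive for m in new]    # condition on not being absorbed
    return nu

if __name__ == "__main__":
    nu = quasistationary()
    print("P(wealth = k | still playing), k = 1..5:",
          [round(x, 4) for x in nu[:5]])
```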