Program Overview
Many challenges in the world today – disease dynamics, collective and artificial intelligence, belief propagation, financial risk, national security, and ecological sustainability – exceed traditional academic disciplinary boundaries and demand a rigorous understanding of complexity. Complexity science aims to quantitatively describe and understand the adaptive, evolvable and thus hard-to-predict behaviors of complex systems. SFI's Complex Systems Summer School has provided early-career researchers with formal and rigorous training in complexity science and integrated them into a global research community. Through this transdisciplinary, highly collaborative experience, participants are equipped to address important questions in a range of topics and find patterns across diverse systems.
Group Projects
Marco Pangallo, University of Oxford (UK)
Katarina Mayer, Hankuk University of Foreign Studies (KR)
Yuki Saikai, University of Wisconsin, Madison (US)
Chris Miles, University of Michigan, Ann Arbor (US)
Yael Gurevich, Tel Aviv University (IL)
Cigdem Yalcin, Istanbul University (TR)
Uzay Çetin, Bogaziçi University (TR)
Stefan Bucher, New York University (US)
Agent-based models (ABMs) have so far been used mainly to describe complex systems, and qualitative agreement with stylized facts in real-world data has often been considered sufficient to validate an ABM. However, given their closer match with reality, there is growing awareness that ABMs should be able to outperform simpler models in quantitative prediction. Unfortunately, predicting with ABMs is an extreme conceptual and technical challenge. Recent research suggests generating synthetic data from ABMs of varying complexity and attempting to predict these data. Independently, we propose an agent-based prediction competition: we generate synthetic data from a financial-market ABM and then try to predict the stock prices out-of-sample using other ABMs, a dynamical system, and statistical models. Our goal is to check whether prediction is possible at all, and whether more complex models (such as the other ABMs) outperform simpler or statistical models. We show that in this case a single realization of the ABM cannot be reliably predicted with any of the tested methods; a substantial level of noise places a strong limit on the predictive power of the models. Considering the average realization of the out-of-sample stock prices, complex ABMs outperform simpler ABMs, dynamical systems, and statistical methods, but this result is not statistically significant given the noise level. We conclude that future research should design ABMs in which realistic assumptions increase the signal-to-noise ratio.
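To make the kind of out-of-sample horse race described above concrete, here is a minimal sketch comparing a statistical baseline against a naive random-walk forecast on a synthetic price series. The random-walk stand-in for ABM output, the ARIMA order, and the train/test split are illustrative assumptions, not the project's actual setup.

```python
# Minimal sketch: compare an ARIMA baseline against a naive random-walk
# forecast on a synthetic price series (a stand-in for ABM-generated data).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 500))   # placeholder for ABM output
train, test = prices[:400], prices[400:]

horizon = len(test)
arima_fc = ARIMA(train, order=(1, 1, 1)).fit().forecast(steps=horizon)
naive_fc = np.full(horizon, train[-1])      # random-walk benchmark

def mse(forecast):
    return np.mean((forecast - test) ** 2)

print(f"ARIMA MSE: {mse(arima_fc):.3f}")
print(f"naive MSE: {mse(naive_fc):.3f}")
```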
Stefan Bucher, New York University (US)
Jose Alejandro Coronado, The New School for Social Research (US)
Meghan Galiardi, Sandia National Laboratories (US)
Markus Junginger, Porsche AG (DE)
Complex systems models are frequently criticized for being opaque, because it is not always clear which assumption drives a result, and it would be of tremendous help to understand and quantify their dynamics analytically. But most complex systems are complex enough that they can only be simulated, and going beyond reporting simulated time series is often difficult. Delay coordinate embedding (DCE) is a technique for analyzing nonlinear time series data: applied to a single observable of a nonlinear process, it reconstructs the dynamics of the underlying system. We propose a different paradigm for applying DCE: analyzing the parameters and results of complex systems models. In this paradigm, we assume we have a model of an arbitrary system for which real-world data are often unknown or unavailable, so modelers can neither calibrate the model to determine appropriate parameters nor validate its results against data. Without such data, modelers rely on a set of analysis tools, most commonly sensitivity analysis and uncertainty quantification. Sensitivity analysis determines how uncertainty in the results relates to the input parameters, usually by performing a parameter sweep and analyzing how changes to the parameters affect the results; this allows the modeler to quantify the ranges of parameters that produce acceptable results. Uncertainty quantification uses a variety of techniques to investigate the reliability of model results as well as particular model assumptions. We propose DCE as an additional tool, similar in spirit to sensitivity analysis but yielding more insightful results. By performing parameter sweeps and applying DCE to the resulting time series, we can determine the underlying dynamics of the model: not just the sensitivity of the results to the parameters, but the qualitative behavior of the results (e.g., cycles or chaos). By sweeping all the parameters, we could in principle construct a bifurcation diagram for the model. We begin by asking whether the dynamics of an agent-based model (ABM) can be studied by performing DCE on the time series it produces. As a proof of concept, we study two ABMs for which the dynamics are known: a predator-prey model and an asset-pricing model. We show that the dynamics reconstructed by DCE coincide with the true underlying dynamics across various parameter regimes, and we propose further work to extend these methods.
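For readers unfamiliar with the core operation, here is a minimal sketch of delay coordinate embedding itself: stacking lagged copies of a single observable to rebuild a state space. The embedding dimension, lag, and toy series are illustrative assumptions.

```python
# Minimal sketch of delay coordinate embedding (DCE): reconstruct state-space
# vectors from a single scalar time series.
import numpy as np

def delay_embed(x, dim=3, lag=2):
    """Stack lagged copies of a scalar series into state-space vectors."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

# Example: a noisy sine wave standing in for simulated model output.
t = np.linspace(0, 40, 1000)
series = np.sin(t) + 0.05 * np.random.default_rng(1).normal(size=t.size)
states = delay_embed(series, dim=3, lag=25)
print(states.shape)  # (950, 3): reconstructed 3-dimensional state vectors
```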
Ulf Aslak, University of Copenhagen (DK)
Hugo Barbosa, University of Rochester (US)
Freya Casier, The Danish National Centre for Social Research (DK)
Madison Hart, University of Texas at Austin (US)
Aaron Schwartz, University of Vermont (US)
Zhiya Zuo, University of Iowa (US)
Depression is increasingly prevalent in society. In the United States alone it affects over 16.1 million adults, and it is today the single most pervasive mental illness in the world. Many efforts to address this problem leverage technological advancements, such as smartphones and smart tracking devices, to facilitate patient self-reporting for use in analyses that can aid medical experts. However, patients frequently "drop out" or fail to self-report, a behavior likely caused by the depression itself. This calls for reporting mechanisms that do not rely on active participation from the patient. In this paper, we investigate the relationships between behavioral indicators, measured from smartphone activity, and depression in a large population of highly interconnected young adults. We find that depression correlates strongly with features indicating social activity, communicational responsiveness, and academic attendance. Using this insight, we construct a simple classification model that predicts, from an individual's set of behavioral indicators, whether that individual is more or less depressed than the population median. Our results have implications for the future of depression treatment: for example, alerting systems built on a classification model using the features we highlight here could help counteract the onset of depression in individuals.
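A minimal sketch of the median-split classification setup described above, assuming a matrix of behavioral features and a depression score per person. The feature names, coefficients, and synthetic data are illustrative assumptions, not the study's measurements.

```python
# Minimal sketch: classify individuals as above/below the population median
# depression score from behavioral features, with cross-validated accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Columns stand in for, e.g., social activity, responsiveness, attendance.
X = rng.normal(size=(n, 3))
score = -X @ np.array([0.8, 0.5, 0.6]) + rng.normal(0, 1, n)  # synthetic score
y = (score > np.median(score)).astype(int)  # above/below population median

clf = LogisticRegression()
print(cross_val_score(clf, X, y, cv=5).mean())  # held-out accuracy
```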
Alicia N. M. Kraay, University of Michigan (US)
Jiří Moravec, Massey University (NZ)
Ximo Pechuan, Albert Einstein College of Medicine (US)
Jake L. Weissman, University of Maryland College Park (US)
Hilje M. Doekes, Utrecht University (NL)
Makoto Jones, VA Salt Lake City HCS/University of Utah SOM (US)
Motivation: Carbapenems are the antibiotics of choice to treat infections caused by multidrug-resistant bacteria because they are generally safe and effective. However, the recent and often silent spread of plasmid-borne carbapenemases between people and bacterial genera has increased the prevalence of carbapenem resistance and is forcing the use of more toxic antibiotics. Problem: Inaccurate resistance tests with low sensitivity increase the probability of false-negative results, which may contribute to the spread of carbapenem resistance within hospitals. We model how these false-negative test results could fail to prompt appropriate clinical response, and we investigate possible mechanisms behind these erroneous results. Approach: To account for complex interactions between hospital transmission, antibiotic treatment, laboratory methods, and plasmid replication, we constructed a set of models spanning multiple scales. Hospital transmission was modeled deterministically using a modified SIR framework incorporating antibiotic-resistant and antibiotic-susceptible infection classes, alternative screening protocols, and plasmid transfer regimes. The probability of false-negative tests for carbapenemases was modeled using the biology of blaKPC for context and a combination of deterministic and stochastic models. Results: Within feasible parameter ranges, the false-negative rate for testing contributes more to the spread of carbapenem resistance than increases in infectivity, due to the selective pressure in favor of plasmid-carrying bacteria that occurs when a test is falsely negative and carbapenems are administered. False-negative rates as high as 20% may be possible with fitness costs below 5%, given the small-count sampling inherent in modern clinical microbiology practices. Conclusion: Current US Food and Drug Administration guidelines recommend the use of selective carbapenem pressure after in vitro isolation of suspect Enterobacteriaceae to prevent plasmid loss prior to molecular testing for carbapenemases. Our results suggest that applying selective pressure throughout the culture isolation process could help prevent in vitro laboratory findings that are discordant with in vivo antimicrobial susceptibility. The potential role of false-negative tests in transmission merits further investigation. Accounting for multiscale complexity may inform approaches in other scenarios where horizontal gene transfer is important.
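A minimal sketch of a modified SIR framework of the flavor described above, with carbapenem-susceptible and carbapenem-resistant infection classes and a false-negative rate that prolongs the effective infectious period of undetected resistant carriers. The compartment structure and all rates are hypothetical placeholders, not the study's fitted model.

```python
# Minimal sketch of a modified SIR model: S (susceptible), Is (infected with a
# carbapenem-susceptible strain), Ir (infected with a resistant strain).
import numpy as np
from scipy.integrate import solve_ivp

def deriv(t, y, beta_s=0.30, beta_r=0.25, gamma=0.10, fn=0.20):
    S, Is, Ir = y
    # A false-negative rate fn means a fraction of resistant carriers are
    # treated with carbapenems, prolonging their effective infectious period.
    gamma_r = gamma * (1 - fn)
    dS = -S * (beta_s * Is + beta_r * Ir) + gamma * Is + gamma_r * Ir
    dIs = beta_s * S * Is - gamma * Is
    dIr = beta_r * S * Ir - gamma_r * Ir
    return [dS, dIs, dIr]

sol = solve_ivp(deriv, (0, 365), [0.98, 0.01, 0.01], dense_output=True)
print(sol.y[:, -1])  # compartment fractions after one year
```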
Sandro Claudio Lera, ETH Zurich (CH)
Ulf Aslak Jensen, University of Copenhagen (DK)
A new case of a preferential attachment model is considered, in which the probability that an existing node acquires a link to a new node is proportional to the product of its intrinsic fitness and its degree. We enrich this known model with preferential deletion, which removes nodes at random with probability proportional to their fitness raised to some exponent ω. Under ‘normal’ conditions, the resulting node degree distribution is an asymptotic power law (scale-free regime). We derive an exact condition for a phase transition beyond which one or a few nodes capture a finite fraction of all links in the infinite network (dragon-king regime). By approximately parametrizing the space of fitness distributions through the beta density, we then show phase diagrams that separate the two regimes.
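A minimal simulation sketch of the growth-plus-deletion process described above. The beta fitness distribution matches the abstract's parametrization, but the specific shape parameters, deletion rate, and step count are illustrative assumptions.

```python
# Minimal sketch: fitness-based preferential attachment (attach prob
# ~ fitness * degree) with preferential deletion (delete prob ~ fitness**omega).
import numpy as np

rng = np.random.default_rng(0)
omega, p_delete, steps = 1.0, 0.1, 5000
fitness, degree = [rng.beta(2, 2)], [0]

for _ in range(steps):
    if len(fitness) > 2 and rng.random() < p_delete:
        w = np.array(fitness) ** omega                  # deletion weights
        i = rng.choice(len(fitness), p=w / w.sum())
        fitness.pop(i); degree.pop(i)
    else:
        # +1 smoothing so newly added (and initial) nodes can attract links.
        w = np.array(fitness) * (np.array(degree) + 1)  # attachment weights
        i = rng.choice(len(fitness), p=w / w.sum())
        degree[i] += 1
        fitness.append(rng.beta(2, 2)); degree.append(1)

# A large max-degree share would hint at the dragon-king regime.
print(f"nodes: {len(degree)}, max degree share: {max(degree) / sum(degree):.3f}")
```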
Alicia Kraay, University of Michigan Ann Arbor (US)
Rachel E. Gicquelais, University of Michigan (US)
Ramona Roller, University of Amsterdam (NL)
Kyle Lemoi, The MITRE Corporation (US)
Elaine M. Bochniewicz, The MITRE Corporation (US)
Spencer J. Fox, University of Texas at Austin (US)
Forecasting of seasonal infectious diseases, such as influenza, can help in public health planning and outbreak response. We compared a traditional influenza forecasting method (an autoregressive moving average [ARMA] model) with a nearest-neighbor forecasting approach (the Lorenz method of analogues), where nearest neighbors were identified in a state space reconstructed by delay coordinate embedding. Both the delay reconstruction and the ARMA models used influenza-like-illness (ILI) surveillance data from 1997-2017 in the United States. We compared model forecasts of the 2015-2017 influenza seasons across 1-4 week prediction horizons. Each model's fit to the ILI data (black line) is summarized in the figure below across the four prediction horizons. Overall, ARMA models (teal) predicted ILI more accurately than the method of analogues (salmon), especially at prediction horizons of 1-2 weeks. The method of analogues with a single nearest neighbor predicted slightly better than the ARMA model at 3-4 weeks; however, a very large delay vector was required. Future directions include exploring other ways to incorporate nearest neighbors into the method of analogues and combining autoregressive and nearest-neighbor approaches.
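A minimal sketch of the single-nearest-neighbor method of analogues: embed the series, find the historical delay vector closest to the current one, and forecast by following that neighbor forward. The embedding dimension, lag, horizon, and toy series are illustrative assumptions.

```python
# Minimal sketch of the Lorenz method of analogues with one nearest neighbor.
import numpy as np

def analogue_forecast(x, dim=5, lag=1, horizon=4):
    n = len(x) - (dim - 1) * lag
    emb = np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])
    query = emb[-1]                      # the present state
    # Exclude recent vectors whose futures would overlap the forecast window.
    candidates = emb[: n - horizon - 1]
    nn = np.argmin(np.linalg.norm(candidates - query, axis=1))
    start = nn + (dim - 1) * lag         # index of the neighbor's last point in x
    return x[start + 1 : start + 1 + horizon]

ili = np.sin(np.linspace(0, 20 * np.pi, 1000))  # toy stand-in for ILI data
print(analogue_forecast(ili))                   # 4-step-ahead forecast
```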
V. Bleu Knight, New Mexico State University (US)
Doheum Park, Korea Advanced Institute of Science and Technology (KR)
Zhiya Zuo, University of Iowa (US)
Hops Across Cultural Boundaries: A Regional Analysis of Beer Recipe Ingredients and Their Pairing Principles
Brewing beer has come to be regarded as a form of art, yet the consumption of beer is a culture-specific phenomenon that evolves regionally. In the United States, for example, commercially available beers have drastically increased in number and style over the last 30 years. Here we quantify diversity in beer brewing within and across different regions of the globe using a data-driven approach. The Jaccard index is used to measure the actual and functional diversity of regional beer recipe ingredients and the flavor compounds they contain. We identified highly prevalent ingredients and found that they reflect many common vintage beer recipe ingredients. We also identified regionally authentic ingredients and found that the cultural traditions of a region are represented within them. Overall, hops contributed the most regional authenticity to the recipes. Aroma preferences for hops from Europe and North America are outlined, along with regional tendencies to share flavor molecules among beer recipe ingredients.
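A minimal sketch of the Jaccard index as used above, applied to regional ingredient sets. The region names and ingredient lists are illustrative assumptions, not the study's data.

```python
# Minimal sketch: Jaccard similarity between two regional ingredient sets.
def jaccard(a, b):
    """|A intersect B| / |A union B|: 1.0 for identical sets, 0.0 for disjoint."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

north_america = {"cascade hops", "pale malt", "ale yeast", "corn"}
europe = {"saaz hops", "pale malt", "ale yeast", "noble hops"}
print(f"similarity: {jaccard(north_america, europe):.2f}")
```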
Alje van Dam, Utrecht University (NL)
Kayla R. Sale-Hale, University of Arizona (US)
Surendra Hazarie, University of Rochester (US)
Carla N. Rivera, Pontificia Universidad Católica de Chile (CL)
The projection of bipartite graphs has proven to be a useful tool for inferring similarity between nodes of one type based on how many links they share with nodes of the other type. In economics this has led to the concept of the product space (Hidalgo et al. 2007), a network that connects products based on their co-location in different countries; the relatedness between products in this network is thought to reflect the ’capabilities’ required to produce them. Here we apply similar concepts to plant-pollinator networks and evaluate the results in light of natural history. By projecting plant-pollinator networks onto plant-plant and pollinator-pollinator networks, we aim to derive a measure of relatedness between plants based on the ecological traits a pollinator must have to feed on them, and likewise a measure of relatedness between pollinators in terms of the shared traits that allow them to feed on the same plants. By running community detection algorithms on the projected networks, we create a functional taxonomy based on these (unobserved) ecological traits and compare it to taxonomies based on observable traits. As opposed to evolutionary taxonomies, our inferred taxonomies can represent a ’partner space,’ describing how plants see their pollinator community in terms of its capabilities and vice versa. Trait-based community detection formalizes the concept of ’pollination syndromes,’ the idea that pollinators with similar traits prefer plants with similar traits (e.g., fly pollinators prefer white-flowered plants) (Willmer 2011). We find that the detected communities (Figure 1) are well aligned with pollination guilds formed from field observations of flower visitors, but that most basic (empirically recorded) plant traits are not predictive of the communities observed in the projected plant-pollinator networks, suggesting that other traits play a significant role in pollinators’ choice of plants. This indicates the non-trivial nature of the communities identified from plant-pollinator networks, reinforces criticisms of pollination syndromes, and may inspire future ecological research, such as the identification of the characteristic ecological traits that form these communities.
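A minimal sketch of the projection-plus-community-detection pipeline described above, on a toy bipartite plant-pollinator network. The species names and interaction edges are illustrative assumptions.

```python
# Minimal sketch: project a bipartite plant-pollinator network onto plants
# (weighted by shared pollinators), then detect communities in the projection.
import networkx as nx
from networkx.algorithms import bipartite, community

B = nx.Graph()
plants = ["aster", "clover", "sage", "rose"]
pollinators = ["bee", "fly", "moth"]
B.add_nodes_from(plants, bipartite=0)
B.add_nodes_from(pollinators, bipartite=1)
B.add_edges_from([("aster", "bee"), ("clover", "bee"), ("aster", "fly"),
                  ("clover", "fly"), ("sage", "moth"), ("rose", "moth")])

# Plants are linked by how many pollinators they share.
P = bipartite.weighted_projected_graph(B, plants)
groups = community.greedy_modularity_communities(P, weight="weight")
print([sorted(g) for g in groups])  # e.g., {aster, clover} vs {rose, sage}
```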
Christopher Miles, University of Michigan, Ann Arbor (US)
Elliot Nelson, Perimeter Institute for Theoretical Physics (CA)
Kyle Reing, University of Southern California (US)
Shing H. Zhan, University of British Columbia (CA)
Mark Kirstein, Technische Universität Dresden (DE)
We explored emergent computation within two systems: cellular automata (CA) and a driven Ising model. We attempted to generate emergent computation in a CA using a genetic algorithm, with the aim of understanding general conditions on information-processing dynamics that increase computational capacity. Specifically, we aim to search the space of CAs to identify minimal computational units, which may in turn be used to execute specific computational tasks or to optimize further search for collective computational behavior. To select for CAs with greater computational capacity, we use an objective function defined in terms of dual total correlation (DTC), an information-theoretic quantity capturing the information in higher-order correlations between cells in a given spatial (and temporal) region of the time-dependent CA grid. The genetic algorithm evolves a population of CA update rules by testing average performance over a range of initial conditions, creating a new population favoring more successful rules, and iterating over a number of generations. In addition, we have explored stochastic thermodynamics within a driven Ising model. This initial investigation has given us insight into thermodynamic relations that will be beneficial for exploring this system's computational capacity in future work. In this preliminary study, the work distribution produced by many realizations under the influence of a time-symmetric drive is consistent with known fluctuation relations in the field of stochastic thermodynamics. In future work, we hope to explore dissipative adaptation as a potential mechanism for promoting emergent computation.
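A minimal sketch of estimating dual total correlation from sampled configurations of a small group of binary cells, using the identity DTC = Σᵢ H(X₋ᵢ) − (n−1) H(X). The sample data are an illustrative assumption, standing in for patches of a CA grid.

```python
# Minimal sketch: empirical dual total correlation (DTC) for n binary cells.
import numpy as np
from collections import Counter

def entropy(samples):
    """Empirical Shannon entropy (bits) of a list of tuples."""
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def dual_total_correlation(rows):
    n = len(rows[0])
    joint = entropy([tuple(r) for r in rows])
    # Sum of entropies of the n "leave-one-out" marginals.
    minus_i = sum(entropy([tuple(r[:i] + r[i + 1:]) for r in rows])
                  for i in range(n))
    return minus_i - (n - 1) * joint

# Toy sample: three perfectly correlated fair bits share one bit of information,
# so DTC = 3 * 1 - 2 * 1 = 1 bit.
rows = [(0, 0, 0)] * 50 + [(1, 1, 1)] * 50
print(dual_total_correlation(rows))
```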
Aida Huerta-Barrientos, National University of Mexico (MX)
Rémi Lamarque, Aix-Marseille Université (FR)
Stephen Leese, Deere and Company (US)
Modeling and Simulating the Emergence of Internet Communities: Impact of the Spread of Memes and Agent Memory
The spreading of memes on the web played an important role in the emergence of Internet communities. The principal purpose of this study is to implement a simulation model to analyze the process by which Internet communities emerge. The model shows the importance of factors such as the interactions between online agents and their propensity to adopt and remember new memes. In addition, it explores the threshold between isolated, short-lived cultural trends and the viral spreading of many cultural features. The simulation is an agent-based model built in NetLogo, designed such that agents represent Internet users and memes are represented by features appearing randomly on each agent. The model illustrates how memes spread across the whole network through agent interactions, and further indicates that the structure of the network, especially the number of in-degree and out-degree links between agents, has a crucial influence on how many memes are shared among agents in the long run. In other words, greater connectivity leads to the quick sharing and sustainability of several cultural features, which is the basis for the emergence of a community.
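A minimal Python sketch (not the NetLogo model itself) of meme adoption on a directed network, with exposure through in-links and a simple forgetting rate. The network size, adoption probability, and forgetting probability are illustrative assumptions.

```python
# Minimal sketch: meme spread on a directed random network with agent memory.
import random
import networkx as nx

random.seed(0)
G = nx.gnp_random_graph(100, 0.05, seed=0, directed=True)
adopted = {random.randrange(100)}          # one initial meme carrier
p_adopt, p_forget = 0.3, 0.05

for step in range(50):
    new = set(adopted)
    for node in G.nodes:
        if node not in adopted:
            # Exposure via in-links: better-connected agents see memes sooner.
            if any(pred in adopted and random.random() < p_adopt
                   for pred in G.predecessors(node)):
                new.add(node)
        elif random.random() < p_forget:   # memory decay: agents forget memes
            new.discard(node)
    adopted = new

print(f"carriers after 50 steps: {len(adopted)}")
```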
Hugo Barbosa, University of Rochester (US)
Freya Casier, The Danish National Centre for Social Research (DK)
José Alejandro Coronado, The New School for Social Research (US)
Marjan Fadavi Ardekani, The New School for Social Research (US)
Madison Hart, University of Texas at Austin (US)
Valérie Reijers, Radboud University Nijmegen (NL)
Carla Natalia Rivera, Pontificia Universidad Católica de Chile (CL)
Adrián Soto, Stony Brook University (US)
Carlos Viniegra, Cutter Consortium (MX)
Modern societies have undergone profound transformations in the last decade. On the one hand, we have observed an increase in awareness of individual rights, liberties, and inclusion; on the other, we have seen the growth and strengthening of movements pushing their agendas in the opposite direction. Indeed, while we observe a popularization of scientific knowledge, we also witness increasing dissemination of ideas and theories in direct contradiction with basic common sense. In short, modern societies have witnessed an increase in polarization along its various dimensions, be they religious, scientific, or ideological. In this work we develop an agent-based model to explore the influence of different biasing mechanisms on social and economic interactions, and how these elements can produce robust and resilient socioeconomic structures. Our results suggest that simple biasing mechanisms can produce non-trivial social structures resembling political structures observed in human societies. We also show that different biasing mechanisms can produce stable or unstable macro-level social structures. Finally, our results suggest that ideological polarization might be an unavoidable consequence of the overexposure to social interactions brought about by the widespread dissemination of online social platforms.
Yimin Zhou, Ministry of National Development, Centre for Liveable Cities (SG)
Doheum Park, Korea Advanced Institute of Science and Technology (KR)
Sean Wu, University of California, Berkeley (US)
Cities are dynamic complex systems that evolve through their functions and human activities. While regional boundaries are artificially created, dwellers move across them by walking, cycling, or taking vehicles for various purposes, and such movements generate “organic” boundaries. In this paper, we introduce a method to extract such organic boundaries of cities from human mobility data and apply it to Singapore. We first create Voronoi cells, i.e., unit regions of the city, using public transportation and population datasets. We then spatially interpolate the cells and build a mobility network over the interpolated cells using the mobility dataset. Applying community detection unveils the organic regions of Singapore, and we find that within these self-organizing regions, population and facility counts exhibit scaling relationships. We believe this preliminary result is promising and shows potential for various applications in urban planning, statistics, and administration.
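A minimal sketch of the community-detection step: aggregate origin-destination trips between spatial cells into a weighted network and extract "organic" regions. The cell labels and trip counts are illustrative assumptions, standing in for Voronoi cells built from transit and population data.

```python
# Minimal sketch: build a weighted mobility network between spatial cells
# from origin-destination trip counts, then detect organic regions.
import networkx as nx
from networkx.algorithms import community

# (origin cell, destination cell, trips): stand-in for smart-card OD data.
trips = [("A", "B", 500), ("B", "A", 450), ("C", "D", 300),
         ("D", "C", 320), ("B", "C", 20), ("A", "D", 10)]

G = nx.Graph()
for o, d, w in trips:
    # Aggregate both directions into one undirected weight.
    prev = G.get_edge_data(o, d, {"weight": 0})["weight"]
    G.add_edge(o, d, weight=prev + w)

regions = community.greedy_modularity_communities(G, weight="weight")
print([sorted(r) for r in regions])  # e.g., {A, B} and {C, D} as organic regions
```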
Alexandra Jurgens, University of California, Davis (US)
Alicia Kraay, University of Michigan Ann Arbor (US)
Jake L. Weissman, University of Maryland College Park (US)
Jingnuo Dong, Oklahoma State University (US)
Marco Pangallo, University of Oxford (UK)
Sean Wu, University of California, Berkeley (US)
Shanee Stopnitzky, University of California, Santa Cruz (US)
Shing Hei Zhan, University of British Columbia (CA)
Yael Gurevich, Tel Aviv University (IL)
Yao Liu, Northern Arizona University (US)
Most real-world systems demonstrate in their behavior some degree of dependence on past behavior and conditions, or some capacity to store information about the past in their dynamics. This “memory” of a system, be it cognitive or ecological, cultural or physical, has vital implications for those who wish to infer and predict the states and behavior of that system. Each discipline defines memory in its own way: as legacy effects, information transmitted from past to future, information stored in a process, cross-generational response to stimulus, and so on. We use an information-theoretic framework to unite these definitions and attempt to draw useful comparisons between systems spanning the biological, physical, and social spheres. To do this, we worked with epsilon machines (εM), a type of hidden Markov model in which the states represent causal pasts. We used a technique known as causal state splitting reconstruction (CSSR) to infer these machines from both synthetic data and real-world observations. To develop an inferential basis for statistical complexity using the εM approach, we performed benchmarking experiments and showed that statistical complexity is positively correlated with memory strength, a pattern consistent across most realistic perturbation levels and alphabet sizes. We then applied this approach to time series data of precipitation, soil moisture, game theory, animal movement, net ecosystem exchange of CO2 (NEE), and coral reef metabolism. Within and across these systems, the relationship between estimated entropy rate (the randomness or unpredictability of the time series) and statistical complexity follows the expected bell shape, with the highest statistical complexity occurring at intermediate unpredictability (Figure 1). For soil moisture, game theory, and NEE, we also highlight the use of statistical complexity as a criterion for determining the suitability of process-based system models.
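A minimal sketch of the entropy-rate estimate that forms one axis of such complexity-entropy analyses, via the difference of empirical block entropies, h ≈ H(L) − H(L−1). The block length and toy symbol sequence are illustrative assumptions; inferring the εM itself (via CSSR) is a substantially larger procedure.

```python
# Minimal sketch: estimate the entropy rate of a symbol sequence from
# empirical block entropies.
import numpy as np
from collections import Counter

def block_entropy(seq, L):
    blocks = [tuple(seq[i:i + L]) for i in range(len(seq) - L + 1)]
    counts = np.array(list(Counter(blocks).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
seq = list(rng.integers(0, 2, 10000))   # binary series, alphabet size 2
L = 6
h = block_entropy(seq, L) - block_entropy(seq, L - 1)
print(f"entropy rate ~ {h:.3f} bits/symbol")  # ~1 for a fair coin
```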
Martina Balestra, New York University (US)
Abdel R. Abdelgabar, Netherlands Institute for Neuroscience (NL)
V. Bleu Knight, New Mexico State University (US)
Mario A. Muñoz, Monash University (AU)
Shing Hei Zhan, University of British Columbia (CA)
Contemporary online social participation systems are designed to harness the knowledge, experience, and cooperation of distributed and anonymous participants in a community to complete a task. As a result, they are largely open to anyone who wants to participate, bringing together people with diverse experiences, skill levels, and motivations (among other factors) to collaborate. Participants often have the capacity to act with a high degree of autonomy: they are frequently able to choose the role or task they want to engage with, and an approach to the task in line with their interests and experiences. In this study we seek to understand how the degree of a participant's specialization in particular types of tasks influences their success in group tasks. Specifically, we examined the behavior of League of Legends players over successive games and used Shannon entropy to measure the diversity of their behavior (Figure 1 shows player behavioral diversity). We then correlated this measure with individual-level outcomes. We found that players who engage in fewer distinct types of activities over time (i.e., more specialized players) gain more in-game wealth than players whose activities are more diverse. Implications are discussed.
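A minimal sketch of the specialization measure described above: the Shannon entropy of each player's distribution over activity types, correlated with an outcome variable. The activity counts and wealth values are illustrative assumptions, not the study's data.

```python
# Minimal sketch: behavioral diversity as Shannon entropy per player,
# correlated with in-game wealth.
import numpy as np
from scipy.stats import spearmanr

def activity_entropy(counts):
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log2(p))

# Rows: players; columns: counts of each activity type across games.
activity = np.array([[30, 1, 1], [10, 12, 9], [25, 4, 2], [11, 10, 11]])
wealth = np.array([1500, 900, 1300, 850])   # per-player in-game wealth

diversity = [activity_entropy(row) for row in activity]
rho, pval = spearmanr(diversity, wealth)
print(f"Spearman rho = {rho:.2f}")  # negative: specialists accumulate more
```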
Burcu Tepekule, ETH Zurich (CH)
Gregory L. Britten, University of California, Irvine (US)
We study the comparative linear stability of Lotka-Volterra metacommunities in cases where dispersal coefficients between individual communities within the larger metacommunity are either uniformly or normally distributed. We also vary the mean interaction strength between species within a community. In the case of uniform dispersal coefficients, stability (as quantified by the leading eigenvalue of the linearized metacommunity steady state) increases with both the number of species S and the number of nodes N in the metacommunity when mean interaction strength is low. When mean interaction strength is high, we observe decreasing stability with S and N. The rate of change in the leading eigenvalue is higher with respect to N than with respect to S in both the high and low mean interaction strength cases. In the case of normally distributed dispersal coefficients, we observe similar behavior, with increasing (decreasing) stability for weak (strong) mean interaction strength as a function of S and N. However, the rate of change in the leading eigenvalue with respect to S is weaker for normally distributed dispersal than for uniformly distributed dispersal, which may imply that the stability of normally dispersing metacommunities is less sensitive to species loss or gain than that of uniformly dispersing metacommunities. This result would have broad implications for our understanding of metacommunity spatial dynamics and the role of connectivity between environments.
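A minimal sketch of the stability calculation: assemble a metacommunity Jacobian from random within-community interaction blocks coupled by a dispersal Laplacian, then read off the leading eigenvalue. The ring topology, interaction scale, and dispersal rate are illustrative assumptions, not the study's configuration.

```python
# Minimal sketch: leading eigenvalue of a linearized Lotka-Volterra
# metacommunity with S species per community and N communities.
import numpy as np

rng = np.random.default_rng(0)
S, N = 10, 5                 # species per community, communities (nodes)
sigma, d = 0.2, 0.05         # interaction-strength scale, dispersal rate

# Block-diagonal local dynamics: one random interaction matrix per node,
# with self-regulation on the diagonal.
blocks = [sigma * rng.normal(size=(S, S)) - np.eye(S) for _ in range(N)]
J_local = np.block([[blocks[i] if i == j else np.zeros((S, S))
                     for j in range(N)] for i in range(N)])

# Dispersal on a ring of communities, coupled via the graph Laplacian.
A = np.roll(np.eye(N), 1, axis=1) + np.roll(np.eye(N), -1, axis=1)
L = np.diag(A.sum(axis=1)) - A
J = J_local - d * np.kron(L, np.eye(S))

lead = np.max(np.linalg.eigvals(J).real)
print(f"leading eigenvalue: {lead:.3f}  (stable if < 0)")
```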
Dmitry Kunisky, New York University (Courant Institute) (US)
Ximo Pechuan, Albert Einstein College of Medicine (US)
Fitness landscapes are an essential component of our current understanding of the evolutionary process, summarizing at the highest level of abstraction the dependence of an organism's reproductive fitness on its genetic composition. In this way, fitness landscapes capture one aspect of the mapping between a discrete combinatorial property of an organism, the genotype, and a biological observable it gives rise to, a phenotypic characteristic. The evolutionary dynamics of a population can then be described as a trajectory or an ensemble of trajectories on such a landscape. While fitness landscapes originated in idealized mathematical models of evolutionary biology, there has recently been a proliferation of computational and experimental methods to generate empirical fitness landscapes for many biological molecules of practical interest. One of the most pressing needs in this line of work is to obtain a set of comprehensive, reliable summary statistics to describe and classify the geometry of fitness landscapes in order to answer relevant biological questions. In this work, we provide such summary statistics using the tools of topological data analysis, in particular the technique of persistent homology. We adapt the methods of persistent homology to the lattice structure of fitness landscapes and define two cubical complex filtrations reflecting two different notions of landscape complexity: one capturing local optima and trap-like "bumpy" landscape geometry, and the other capturing epistasis and dependencies between different mutations. We demonstrate that the topological invariants we compute resolve the full range of landscape complexity produced by the mathematical NK and Rough Mount Fuji models of "tunably rugged" landscapes. We then apply the same methods to a large dataset of empirical transcription factor binding affinity landscapes, extending the results of a recent work on simpler summary statistics of these landscapes. We argue that the barcode and Betti number plots of persistence diagrams popular in the persistent homology literature give a finer-grained description of landscape complexity than numerical summary statistics. To further demonstrate the usefulness of these descriptions, we use the notions of bottleneck and Wasserstein distance between persistence diagrams to define a metric structure on fitness landscapes, and describe how clustering in this metric space relates to conventional categorizations of transcription factors.
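To give a flavor of the approach, here is a minimal sketch of a sublevel-set filtration on a toy fitness landscape over the Boolean hypercube, using the gudhi library (assumed installed). This simplex-tree construction is a simplified stand-in for the paper's cubical complex filtrations, and the random landscape stands in for an NK model; the H0 bars of the resulting persistence diagram track local fitness peaks merging as the filtration rises.

```python
# Minimal sketch: persistent homology (H0) of a sublevel-set filtration on a
# random fitness landscape over {0,1}^N, built as a gudhi SimplexTree.
from itertools import product
import numpy as np
import gudhi

rng = np.random.default_rng(0)
N = 4
genotypes = list(product([0, 1], repeat=N))
fitness = {g: rng.random() for g in genotypes}   # toy stand-in for an NK model

st = gudhi.SimplexTree()
for i, g in enumerate(genotypes):
    # Filter by negative fitness so peaks (local optima) appear first.
    st.insert([i], filtration=-fitness[g])
for i, g in enumerate(genotypes):
    for j, h in enumerate(genotypes):
        if i < j and sum(a != b for a, b in zip(g, h)) == 1:  # Hamming-1 edge
            st.insert([i, j], filtration=max(-fitness[g], -fitness[h]))

diagram = st.persistence()
peaks = sum(1 for dim, _ in diagram if dim == 0)
print(f"H0 intervals (roughly, local fitness peaks): {peaks}")
```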
Maartje Oostdijk, University of Iceland (IS)
Laura Elsler, Stockholm Resilience Centre (SE)
Andrew F. Johnson, Scripps Institution of Oceanography (US)
Basak Taraktas, Northwestern University (US)
Elaine Bochniewicz, The MITRE Corporation (US)
Junfu Zhao, University of Utah (US)
Alberto Micheletti, University of St. Andrews (GB-SCT)
Global trade is a driver of the state of present ecosystems, but the mechanisms through which trade affects ecosystems remain poorly understood. Seafood is one of the most globally integrated food commodities, with 40% traded internationally. The increasing globalization of trade in marine products has been pointed to as a potential reason for the overexploitation of wild stocks: traders can establish new trade relations with other countries, which means there is less incentive to conserve local resources. We analyzed patterns in trade relations and stock status in exporting countries using global trade and stock assessment databases. Analyzing the evolution of the trade networks of several stocks over time, we found increased connectivity and an expanding trade network, though the overall volume of traded stocks stayed relatively stable. Whole networks changed significantly over the years and shared very little similarity (~20%) between years. The 100 and 25 most connected nodes within these networks, however, stayed more stable over time (~50-60% similar). In the future we will explore the changing networks using tools for anomaly detection as well as descriptive network statistics to investigate the patterns that enable the sequential exploitation of marine stocks. We hope that these efforts will culminate in a network model or exponential random graph model that describes the underlying mechanisms that cause sequential overexploitation of marine populations.
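A minimal sketch of the year-over-year similarity comparison described above: Jaccard overlap of edge sets between yearly trade networks, for the whole network and for the most-connected nodes. The country codes and edges are illustrative assumptions, and top-3 stands in for the study's top-100 and top-25 comparisons.

```python
# Minimal sketch: edge-set Jaccard similarity between yearly trade networks,
# overall and restricted to the highest-degree nodes.
import networkx as nx

def edge_jaccard(G, H):
    a = {frozenset(e) for e in G.edges}
    b = {frozenset(e) for e in H.edges}
    return len(a & b) / len(a | b)

def top_k_subgraph(G, k):
    top = sorted(G.degree, key=lambda nd: -nd[1])[:k]
    return G.subgraph(n for n, _ in top)

y1 = nx.Graph([("NO", "JP"), ("NO", "US"), ("CL", "US"), ("CL", "CN")])
y2 = nx.Graph([("NO", "JP"), ("NO", "CN"), ("CL", "US"), ("PE", "CN")])

print(f"whole-network similarity: {edge_jaccard(y1, y2):.2f}")
print(f"top-3 hub similarity: "
      f"{edge_jaccard(top_k_subgraph(y1, 3), top_k_subgraph(y2, 3)):.2f}")
```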