Foundations  (F) Session 8

Time and Date: 13:45 - 15:30 on 22nd Sep 2016

Room: A - Administratiezaal

Chair: Peter van Emde Boas

161 Hidden geometric correlations in real multiplex networks
Abstract: Real networks often form interacting parts of larger and more complex systems. Examples can be found in different domains, ranging from the Internet to structural and functional brain networks. Here, we show that these multiplex systems are not random combinations of single network layers. Instead, they are organized in specific ways dictated by hidden geometric correlations interweaving the layers. We find that these correlations are significant in different real multiplexes, and form a key framework for answering many important questions. Specifically, we show that these geometric correlations facilitate: (i) the definition and detection of multidimensional communities, which are sets of nodes that are simultaneously similar in multiple layers; (ii) accurate trans-layer link prediction, where connections in one layer can be predicted by observing the hidden geometric space of another layer; and (iii) efficient targeted navigation in the multilayer system using only local knowledge, which outperforms navigation in the single layers only if the geometric correlations are sufficiently strong. Importantly, if optimal correlations are present, the fraction of failed deliveries is mitigated superlinearly with the number of layers, suggesting that more layers with the right correlations quickly make multiplex systems almost perfectly navigable. Our findings uncover fundamental organizing principles behind real multiplexes and can have important applications in diverse domains, ranging from improving information transport and navigation or search in multilayer communication systems and decentralized data architectures, to understanding functional and structural brain networks and deciphering their precise relationship(s), to predicting links among nodes (e.g., terrorists) in a specific network by knowing their connectivity in some other network.
Kaj-Kolja Kleineberg, Marian Boguna, M. Ángeles Serrano and Fragkiskos Papadopoulos
164 When is simpler thermodynamically better?
Abstract: Living organisms capitalize on their ability to predict their environment to maximize their available free energy, and invest this energy in turn to create new complex structures. For example, a lion metabolizes the structure of an antelope (destroying it in the process), and uses the energy released to build more lion. Is there a preferred method by which this manipulation of structure should be done? Our intuition is “simpler is better,” but this is only a guiding principle. By formalizing the manipulation of patterns – structured sequences of data – this intuitive preference for simplicity can be substantiated through physical reasoning based on thermodynamics. Using techniques from complexity science and information theory, we consider devices that can manipulate (i.e. create, change or destroy) patterns. In order to operate continually, such devices must utilize an internal memory in order to keep track of their current position within the pattern. However, the exact structure of this internal memory is not uniquely defined, and all choices are not equivalent when it comes to their thermal properties. Here, we present the fundamental bounds of the cost of pattern manipulation. When it comes to generating a pattern, we see that the machine with the simplest memory capable of the task is indeed the best choice thermodynamically. Using the simplest internal memory for generation grants the advantage that less antelope needs to be consumed in order to produce the same amount of lion. However, contrary to intuition, when it comes to extracting work from a pattern, any device capable of making statistically accurate predictions can recover all available energy from the structure. This apparent paradox can be explained by careful consideration of the nature of the information-processing tasks at hand: namely, one of logical irreversibility. [See also arXiv:1510.00010.]
Andrew Garner, Jayne Thompson, Vlatko Vedral and Mile Gu
443 Fluctuations of resilience in complex networks
Abstract: Recently, Gao et al. [1] showed that classes of complex networks could be described in a universal way. In particular, it was stated that the dynamics of a complex network consisting of many nodes and links is governed by a one-dimensional effective dynamical equation, which was obtained by averaging over all network configurations. In this paper we address the question of how well the averaged effective equation describes classes of networks by numerical calculation of variances in the dynamics. It appears that huge variances in the dynamics can arise. To examine the consequences of our work for practical situations, we apply our findings to specific networks occurring in transport and supply chains. References [1] Jianxi Gao, Baruch Barzel, Albert-László Barabási, Universal resilience patterns in complex networks, Nature 530, 307 (2016).
Johan Dubbeldam
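For reference, the universal reduction of Gao et al. [1] that this talk scrutinizes can be written compactly. A sketch in the paper's standard notation (the talk's contribution is to compute the variances around this configuration-averaged description):

```latex
% N coupled units on a network with weighted adjacency matrix A:
\frac{dx_i}{dt} \;=\; F(x_i) \;+\; \sum_{j=1}^{N} A_{ij}\, G(x_i, x_j)
% is collapsed onto a single effective equation
\frac{dx_{\mathrm{eff}}}{dt} \;=\; F(x_{\mathrm{eff}}) \;+\; \beta_{\mathrm{eff}}\, G(x_{\mathrm{eff}}, x_{\mathrm{eff}}),
\qquad
\beta_{\mathrm{eff}} \;=\; \frac{\langle s^{\mathrm{out}} s^{\mathrm{in}} \rangle}{\langle s \rangle},
% where s_i denotes the nodes' weighted in-/out-degrees and the averages run over nodes.
```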
209 Local mixing patterns in complex networks
Abstract: Assortative mixing (or homophily) in networks is the tendency for nodes with the same attributes, or metadata, to link to each other. For instance, in social networks we may observe more interactions between people with the same age, race, or political belief. Quantifying the level of assortativity or disassortativity (the preference of linking to nodes with different attributes) can shed light on the factors involved in the formation of links in complex networks. It is common practice to measure the level of assortativity according to the assortativity coefficient, or modularity in the case of discrete-valued metadata. This global value is an average behaviour across the network and may not be a representative statistic when mixing patterns are heterogeneous. For example, a social network that spans the globe may exhibit local differences in mixing patterns as a consequence of differences in cultural norms. Here, we present a new approach to localise these global measures so that we can describe the assortativity at the node level. Consequently, we are able to capture and qualitatively evaluate the distribution of mixing patterns in the network. We develop a statistical hypothesis test with null models that preserve the global mixing pattern and degree distribution so that we may quantitatively determine the representativeness of the global assortativity. Using synthetic examples we describe cases of heterogeneous assortativity and demonstrate that for many real-world networks the global assortativity is not representative of the mixing patterns throughout the network.
Leto Peel, Jean-Charles Delvenne and Renaud Lambiotte
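A minimal sketch of the contrast the abstract draws between global and local mixing, using networkx's built-in global attribute assortativity and a deliberately naive per-node score. This baseline is illustrative only and is not the authors' multiscale method:

```python
import networkx as nx

G = nx.karate_club_graph()  # toy network whose nodes carry a 'club' attribute
global_r = nx.attribute_assortativity_coefficient(G, "club")
print(f"global assortativity: {global_r:.3f}")

# Naive local view: fraction of same-attribute neighbours per node,
# compared against the network-wide baseline fraction over all edges.
baseline = sum(G.nodes[u]["club"] == G.nodes[v]["club"]
               for u, v in G.edges()) / G.number_of_edges()
for n in list(G)[:5]:
    same = sum(G.nodes[n]["club"] == G.nodes[m]["club"] for m in G[n])
    # positive: locally assortative relative to the global average
    print(n, same / G.degree(n) - baseline)
```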
446 Message passing algorithms in networks and complex systems
Abstract: We will sketch an algorithmic take, i.e. message-passing algorithms, on networks and its relevance to some questions and insights in complex systems. Recently, message-passing algorithms have been shown to be an efficient, scalable approach to solve hard computational problems ranging from detecting community structures in networks to simulating probabilistic epidemic dynamics on networks. The objective of the talk is twofold. On one hand, we will discuss how the non-backtracking nature of message passing avoids an “echo-chamber effect” in signal flow and thus makes it a good tool to consider for problems in networks. On the other hand, we will also argue why insights gained from algorithms are equally important when exploring questions at the boundaries of scientific studies, such as networks and complex systems.
Munik Shrestha
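The "non-backtracking nature" mentioned above refers to messages that never immediately return along the edge they arrived on. Below is a self-contained sketch of the non-backtracking (Hashimoto) matrix on a toy two-community graph; spectral analysis of this matrix is one standard message-passing route to community detection, not necessarily the speaker's exact formulation:

```python
import numpy as np
import networkx as nx

# Toy graph with two planted communities.
G = nx.planted_partition_graph(2, 50, 0.1, 0.02, seed=1)

# Directed edge list: each undirected edge appears in both orientations.
edges = [(u, v) for u, v in G.edges()] + [(v, u) for u, v in G.edges()]
index = {e: i for i, e in enumerate(edges)}

# Non-backtracking matrix B: B[(u->v), (v->w)] = 1 whenever w != u,
# i.e. a message leaving v may go anywhere except straight back to u.
B = np.zeros((len(edges), len(edges)))
for (u, v), i in index.items():
    for w in G[v]:
        if w != u:
            B[i, index[(v, w)]] = 1.0

# Real eigenvalues separated from the bulk are the usual signal of communities.
vals = np.linalg.eigvals(B)
print(sorted(vals.real, reverse=True)[:3])
```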

Economics  (E) Session 6

Time and Date: 13:45 - 15:30 on 22nd Sep 2016

Room: B - Berlage zaal

Chair: Dexter Drupsteen

491 Agent Based Model Exploration and Calibration using Machine Learning Surrogates
Abstract: Bringing Agent-Based Models closer to the data is an open challenge. While facilitating the comparison to more standard approaches, getting closer to the data promotes Agent-Based Models as a methodology. In this paper, we treat parameter space exploration from the machine learning problem setting of supervised learning and introduce machine learning surrogates as a fast and efficient means to explore positive calibrations from the parameter space. Three steps are involved: adaptively sampling a small number of simulations from the Agent-Based Model through the "active" learning problem setting; measuring the calibration quality of parameter combinations to real data with a chosen statistical hypothesis test; and learning a powerful machine learning surrogate or "meta-model" on these "training" or modeling samples and rapidly filtering positive calibrations out of the parameter space for evaluation. Dramatic time savings are demonstrated by replacing the expensive Agent-Based Model with the machine learning surrogate. Though surrogates can potentially replace the agent-based model, we approach the simpler objective of filtering positive calibrations. Our aim is to provide a fast and efficient tool to explore the parameter space, while enabling policy-makers to evaluate and choose the particular parameterizations of interest. Finally, parameterizations of interest can be directly studied via the agent-based model. Ultimately, we do not wish to replace the agent-based model, but to help accelerate the turn-around time from real data to agent-based model calibrations that respect economic intuition and convey economic insight. We illustrate our approach by filtering positive calibrations (using the standard Kolmogorov-Smirnov two-sample test against the daily Standard and Poor's 500 Index) for the simple agent-based asset pricing model (introduced in "Heterogeneous beliefs and routes to chaos in a simple asset pricing model" by Brock and Hommes, 1998) over ten parameters with generous ranges.
Francesco Lamperti, Antoine Mandel, Andrea Roventini and Amir Sani
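A compressed sketch of the filtering loop described above, with a trivial stand-in for the agent-based model. All names, ranges and thresholds here are illustrative, not the authors' code, and the sampling is plain random rather than active learning:

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
real_data = rng.standard_t(df=3, size=1000)      # stand-in for observed returns

def abm(theta, n=1000):
    # Toy stand-in for the expensive agent-based model: scale and shift.
    return rng.standard_t(df=3, size=n) * theta[0] + theta[1]

# 1) Sample a small training set of parameter combinations and label each
#    by whether its simulated output passes the KS two-sample test.
thetas = rng.uniform([0.8, -0.2], [1.25, 0.2], size=(200, 2))
labels = np.array([ks_2samp(abm(t), real_data).pvalue > 0.05 for t in thetas])

# 2) Fit a fast surrogate mapping parameters -> calibration pass/fail.
surrogate = RandomForestClassifier(n_estimators=200, random_state=0)
surrogate.fit(thetas, labels)

# 3) Rapidly filter a huge candidate pool without ever running the ABM.
pool = rng.uniform([0.8, -0.2], [1.25, 0.2], size=(100_000, 2))
candidates = pool[surrogate.predict(pool)]
print(len(candidates), "positive calibrations kept for direct ABM evaluation")
```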
558 The Echoes of Bandwagon Through a Complex System of Innovation and Development
Abstract: Dating back to Schumpeter, the literature on innovation has evolved to the point of elevating its object of study to the status of one of the main forces driving economic growth and development. The fact that Solow's TFP black box is not so black anymore probably has something to do with understanding how the engine of innovation is greased. In this paper, we investigate whether one of the cogwheels of this engine might be the bandwagon behaviour of consumers and its impact on the firm's decision to engage in a certain type of innovative process. In order to do so, we introduce a new framework for complex agent-based models that is different from the commonly used Small Worlds Network, which we call the Spatial Dynamic Awareness Model. Consumers have heterogeneous stochastic thresholds with respect to what we call their "profile" towards new products, following the distribution proposed by Moore (2005) as a baseline. They also have spatial mobility and bounded rationality (awareness), acquiring information and interacting only with agents inside their awareness radius to evaluate how many others are using a given product or technology and, ultimately, to decide whether to change their product of choice at each point in time. Firms, on the other hand, cannot see individual preferences, but analyse market saturation and concentration to decide on the amount of R&D investment and between process and product innovation. Simulations suggest that a society with a greater share of crazy-for-technology individuals yields a faster saturation and de-concentration of the relevant market, generating more product than process innovations, and higher mean prices and profits. We hope to reward the attendees of our presentation with new insights on network modelling and the importance of behavioural economics in better understanding the micro-macro process of innovation and economic development.
João Basilio Pereima and Pedro Einloft
233 Emergence of social networks due to human mobility
Abstract: There has been a recent burst of work on human mobility and social networks. However, the connection between these two important fields is still in its infancy. It is clear that both are closely related: people tend to visit popular places in a city with some frequency, meeting other people there. If this occurs often, there is a chance that a friendship or acquaintance will emerge, linking people together. On the other hand, once a social network is established, people tend to go together to the same places. In this way, there is feedback between human mobility in space and the structure of the social network. Mobility generates friends, and friends move together. We model the above situation with random walkers that visit places in space following a strategy akin to Lévy flights. We measure the encounters or coincidences in space and time and establish a link between walkers after they coincide several times. This generates a temporal network that is characterized by global quantities. We compare this dynamics with real data for two big cities: New York City and Tokyo. We use data from the location-based social network Foursquare and obtain the emergent temporal encounter network for New York City and Tokyo, which we analyze in detail and compare with our model. Even though there are differences between the two cities, there are some common features: for instance, a long-range (Lévy-like) distribution of distances that characterizes the emergent social network due to mobility in cities. This study contributes to the unification of two important fields: social networks and human mobility. Applications and implications for several fields like epidemics, social influence, voting, contagion models, behavioral adoption and diffusion of ideas will be discussed.
Jose L. Mateos and Alejandro P. Riascos
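A toy sketch of the mechanism described above: walkers take heavy-tailed (Lévy-like) jumps over a one-dimensional ring of "places", and a social link is created once a pair has coincided several times. All sizes and thresholds are illustrative, not the authors' calibrated model:

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(7)
n_walkers, n_places, steps, meet_min = 50, 200, 500, 3

pos = rng.integers(n_places, size=n_walkers)
meets = defaultdict(int)   # how often each pair has coincided
edges = set()              # the emergent social network

for _ in range(steps):
    jumps = rng.pareto(1.5, size=n_walkers).astype(int) + 1  # heavy-tailed jump sizes
    pos = (pos + jumps) % n_places
    where = defaultdict(list)
    for w, p in enumerate(pos):
        where[p].append(w)
    for group in where.values():                             # co-located walkers
        for i in range(len(group)):
            for j in range(i + 1, len(group)):
                pair = (group[i], group[j])
                meets[pair] += 1
                if meets[pair] == meet_min:                  # repeated encounters -> link
                    edges.add(pair)

print(len(edges), "social links emerged from co-location")
```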
118 Using statistical symmetries to characterize binary time series of the foreign exchange market
Abstract: We use the concept of statistical symmetry, i.e. the invariance of a probability distribution under a transformation, to analyze the sign dynamics of price differences in the foreign exchange market. Using a local hypothesis test with a stationary Markov process as the model, we characterize different intervals of the sign time series of price differences as symmetric or not with respect to the symmetries of independence and space odd reversion. For the test, we derive the probability that a binary Markov process generates a given set of symbol-pair counts. As a particular result, we find that the foreign exchange market is essentially space odd reversible - interpreted as time reversible - but this symmetry is broken when there is a strong external influence. We also find that above a resolution of 90 s the intervals of the sign time series are considered to be statistically symmetric, implying that the direction of price movements in the market can be described by an independent random process.
Arthur Matsuo Yamashita Rios de Sousa, Hideki Takayasu and Misako Takayasu
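The probability referred to above follows from the Markov property: the likelihood of a binary sequence depends on the data only through its symbol-pair counts. A sketch:

```latex
% For a binary Markov chain with transition probabilities p_{ab}, the
% probability of generating a sequence s_1 s_2 \dots s_T factorizes as
P(s_1, \dots, s_T) \;=\; p(s_1) \prod_{a,b \in \{0,1\}} p_{ab}^{\,n_{ab}},
% where n_{ab} counts the positions t with (s_t, s_{t+1}) = (a, b).
% Only the four pair counts n_{00}, n_{01}, n_{10}, n_{11} matter, which is
% what makes a hypothesis test based on numbers of symbol pairs tractable.
```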
218 Analysis, prediction and control of technological progress
Abstract: Technological evolution is one of the main drivers of social and economic change, with transformative effects on most aspects of human life. How do technologies evolve? How can we predict and influence technological progress? To answer these questions, we looked at the historical records of the performance of multiple technologies. We first evaluate simple predictions based on a generalised version of Moore’s law. All technologies have a unit cost decreasing exponentially, but at a technology-specific rate. We then look at a more explanatory theory which posits that experience, measured as cumulative production, drives technological progress. These experience curves work relatively well in terms of forecasting, but in reality technological progress is a very complex process. To clarify the role of different causal mechanisms, we also study military production during World War II, where it can be argued that demand and other factors were exogenous. Finally, we analyse how to best allocate investment between competing technologies. A decision maker faces a trade-off between specialisation and diversification which is influenced by technology characteristics, risk aversion, demand and the planning horizon. Our methods are used to provide distributional forecasts for the cost of photovoltaic modules at different horizons, making it possible to evaluate their potential to provide an inexpensive source of energy over a relatively short horizon.
Francois Lafond
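The two forecasting baselines compared in the talk have compact standard forms (the talk's distributional forecasts build on these):

```latex
% Generalised Moore's law: unit cost c_t falls exponentially in time at a
% technology-specific rate \mu,
c_t \;=\; c_0\, e^{-\mu t}.
% Experience ("learning") curves instead tie cost to cumulative production Z_t,
c_t \;=\; c_0\, Z_t^{-\alpha},
% so each doubling of experience multiplies cost by the progress ratio 2^{-\alpha}.
```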
373 Portfolio Optimization under Expected Shortfall: Contour Maps of Estimation Error
Abstract: The contour maps of the error of historical estimates for large random portfolios optimized under the Expected Shortfall (ES) risk measure are constructed. Similar maps for the sensitivity of the portfolio weights to small changes in the returns are also presented. The contour maps allow one to quantitatively determine the sample size (the length of the time series) required by the optimization for a given number of different assets in the portfolio, at a given confidence level and a given level of relative estimation error. The necessary sample sizes turn out to be unrealistically large for reasonable choices of the number of assets and the confidence level. These results are obtained via analytical calculations based on methods borrowed from the statistical physics of random systems, supported by numerical simulations.
Fabio Caccioli, Imre Kondor and Gábor Papp
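For reference, the risk measure under which the portfolios are optimized; this is the standard definition, with the second equality holding for continuous loss distributions:

```latex
% Expected Shortfall at confidence level \alpha: the mean loss beyond VaR,
\mathrm{ES}_{\alpha}(L) \;=\; \frac{1}{1-\alpha} \int_{\alpha}^{1} \mathrm{VaR}_{u}(L)\, du
\;=\; \mathbb{E}\!\left[\, L \mid L \ge \mathrm{VaR}_{\alpha}(L) \,\right].
% The talk's contour maps quantify how the historical estimate of the
% ES-optimal portfolio degrades as the number of assets grows relative to
% the length of the available time series.
```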

Economics  (E) Session 7

Time and Date: 13:45 - 15:30 on 22nd Sep 2016

Room: C - Veilingzaal

Chair: Francesca Lipari

494 Taming the leverage cycle
Abstract: This paper focuses on the dynamical aspects of systemic risk in financial markets resulting from positive feedback loops in the interaction of risk management and asset markets. It thereby highlights the importance of non-equilibrium approaches to understanding and tackling systemic risk in financial markets. We investigate a simple dynamical model for the systemic risk caused by the use of Value-at-Risk (VaR). The model consists of a bank with a leverage target and an unleveraged fundamentalist investor subject to exogenous noise with clustered volatility. The parameter space has three regions: (i) a stable region, where the system has a fixed-point equilibrium; (ii) a locally unstable region, characterized by cycles with chaotic behavior; and (iii) a globally unstable region. A calibration of parameters to data puts the model in region (ii). In this region there is a slowly building price bubble, resembling the period prior to the Global Financial Crisis, followed by a crash resembling the crisis, with a period of approximately 10–15 years. While our model does not show that the financial crisis and the period leading up to it were due to VaR risk management policies, it does suggest that they could have been caused by VaR risk management, and that the housing bubble may have just been the spark that triggered the crisis. We also explore alternative leverage control policies based on their ability to minimize risk for a given average leverage. We find that the best policy depends on the market impact of the bank: VaR is optimal when the exogenous noise is high, the bank is small and leverage is low; in the opposite limit, where the bank is large and leverage is high, the optimal policy is closer to constant leverage.
Christoph Aymanns, Fabio Caccioli, J Doyne Farmer and Vincent Tan
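A schematic statement of the VaR feedback at the heart of the model, not the paper's exact specification:

```latex
% Under a Gaussian Value-at-Risk rule, equity E must cover a multiple a of the
% estimated dollar volatility of assets A, so target leverage rises as
% perceived risk falls:
\mathrm{VaR}_t \;\approx\; a\, \sigma_t A_t,
\qquad
\bar{\lambda}_t \;=\; \frac{A_t}{E_t} \;\propto\; \frac{1}{\sigma_t}.
% Low measured volatility -> higher leverage -> asset purchases -> still lower
% measured volatility, until the loop reverses: the leverage cycle.
```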
520 Why Do Banks Default Overnight? Modeling Endogenous Contagion on O/N Interbank Market
Abstract: On September 15, 2008, the Lehman Brothers bank announced its bankruptcy. This started a panic on the US stock exchange and the mortgage crisis that spread throughout the world. The consequences of these events are still visible today. Since the events of 2008, concepts such as systemic risk and financial contagion have entered the common language. At the same time, the development of models of the interbank market has gained tremendous momentum. We want to present a new model of the banking system, focusing on the daily time-scale and short-term activities, mainly overnight loans. In our model we take into account three possible channels of financial contagion. The first, most direct channel of propagation is a collapsing bank not paying its obligations: banks that granted loans bear this loss, which worsens their financial situation. Second, and perhaps less obvious, a falling bank, in order to pay its obligations, must sell its external assets in significant amounts, which results in an immediate and significant decrease in their value. Not only does the bank fail to recover the full value of its assets, repaying its liabilities only partially; the sale also decreases the value of assets held in the portfolios of other banks, worsening their situation. Last, but not least, there is the decline in the availability of interbank loans due to a decrease in trust, which leaves banks with lower resistance to deterioration of their financial situation. Most previous models tested the system's reaction to an external shock, e.g. the collapse of one or more banks. In contrast, in our dynamical model of the entire banking system, crashes can occur as an internal feature of the system. We will present results for artificial data as well as for empirical data from the Polish interbank market.
Tomasz Gubiec and Mateusz Wilinski
316 Relaxation Analysis for the Layered Structure on the basis of the Order Book Data of FX Market
Abstract: The amount of data has been increasing radically, accompanied by the development of electronic devices, and such data sets, so-called big data, have lately attracted attention among econophysicists. One of the fields where big data has become available is the foreign exchange market (FX market). The big data of the FX market are called the Order Book Data and include the following: i. Transaction prices from start to end of the FX market; ii. Order volumes and order prices of traders; iii. Times when traders put an order and cancel it. It is reported that there is a correlation between transaction price movement and the behavior of traders, and that its sign changes depending on which price range traders put an order at (Refs. [1,2]). The correlation implies that price movement and trader behavior are closely related, and this relation enables us to understand various properties of price movement from traders' behavior, including sudden price jumps. There have, however, been few studies on the correlation between price movement and trader behavior. We study the statistical properties of traders' behavior so as to understand that relation. We focus on the relaxation process for traders' orders and report that there is a typical pattern in the relaxation timescale, which depends on the price range. This result is consistent with the one shown in [2]. References [1] Y. Yura, H. Takayasu, D. Sornette, M. Takayasu, Physical Review E 92.4 (2015): 042811. [2] Y. Yura, H. Takayasu, D. Sornette, M. Takayasu, Phys. Rev. Lett. 112, 098703 (2015).
Takumi Sueshige, Kiyoshi Kanazawa, Hideki Takayasu and Misako Takayasu
187 Statistically similar portfolios and systemic risk
Abstract: We propose a network-based similarity measure between portfolios with possibly very different numbers of assets and apply it to a historical database of institutional holdings ranging from 1999 to the end of 2013. The resulting portfolio similarity measure increased steadily before the 2008 financial crisis and reached a maximum when the crisis occurred. We argue that the nature of this measure implies that liquidation risk from fire sales was maximal at that time. After a sharp drop in 2008, portfolio similarity resumed its growth in 2009, with a notable acceleration in 2013, reaching levels not seen since 2007.
Stanislao Gualdi, Giulio Cimini, Kevin Primicerio, Riccardo Di Clemente and Damien Challet
212 From innovation to diversification: a simple competitive model
Abstract: Few attempts have been made to describe the statistical features and historical evolution of the bipartite country/product export matrix. An important standpoint is the introduction of a products network, namely a hierarchical forest of products that models the formation and the evolution of commodities. In the present article, we propose a simple dynamical model where countries compete with each other to acquire the ability to produce and export new products. Countries have two possibilities to expand their exports: innovating, i.e. introducing new goods, namely new nodes in the products network, or copying the productive process of others, i.e. occupying a node already present in the same network. In this way, the topology of the products network and the country-product matrix evolve simultaneously, driven by the countries' push toward innovation.
Fabio Saracco, Riccardo Di Clemente, Andrea Gabrielli and Luciano Pietronero

Cognition  (C) Session 3

Time and Date: 13:45 - 15:30 on 22nd Sep 2016

Room: D - Verwey kamer

Chair: Vincent Traag

582 Inventors' Explorations and Performance Across Technology Space
Abstract: Technology is a complex system that evolves through the collective efforts of individual inventors. Understanding inventors' behaviors may thus enable predicting invention or improving technology policy. We examined data from 2.8 million inventors' 4 million patents and found most patents are created by "explorers": inventors who move across different technology domains during their careers. Explorers are far more likely to enter technology domains that were highly related to their own individual inventive experience; this information enabled accurate prediction of individual explorers' future movements. Inventors who entered very related domains patented more there, but explorers who successfully entered moderately related domains were more likely to create high-impact patents. These findings may be instructive for inventors exploring the space of technologies, and useful for organizations or governments in forecasting or directing technological change.
Jeff Alstott, Giorgio Triulzi, Bowen Yan and Jianxi Luo
72 Modeling the relation between income and commuting distance
Abstract: We discuss the distribution of commuting distances and its relation to income. Using data from Denmark, the UK, and the US, we show that the commuting distance (i) is broadly distributed with a slowly decaying tail that can be fitted by a power law with exponent γ ≈ 3 and (ii) has an average that grows slowly as a power law with an exponent less than one that depends on the country considered. The classical theory for job search is based on the idea that workers evaluate the wage of potential jobs as they arrive sequentially through time; extending this model with space, we obtain predictions that are strongly contradicted by our empirical findings. We propose an alternative model based on the idea that workers evaluate potential jobs based on a quality aspect and search for jobs sequentially across space. We also assume that the density of potential jobs depends on the skills of the worker and decreases with the wage. The predicted distribution of commuting distances decays as 1/r^3 and is independent of the distribution of the quality of jobs. We find our alternative model to be in agreement with our data. This type of approach opens new perspectives for the modeling of mobility.
Giulia Carra, Marc Barthelemy, Ismir Mulalic and Mogens Fosgerau
22 Experimental and theoretical approaches to collective estimation phenomena in human groups
Abstract: The well-known "Wisdom-of-Crowds" phenomenon, often mistakenly confused with collective intelligence, is not effective in every situation, especially under social influence. In simple estimation tasks, information sharing among group members may lead to strong biases in the collective estimate due to the reduction in diversity and independence of opinions. We are interested in finding conditions under which social interactions could improve the accuracy of the collective estimate and its effectiveness. Specifying such conditions is an important step toward understanding how a human group can develop a form of collective intelligence emerging from social interactions between its members. We conduct a series of experiments aimed at understanding how a human group can use social information to converge toward the correct value in an estimation task. Subjects are sequentially asked to give a first guess, and then a second guess in the same estimation task after being provided with information about the average guess of the t previous subjects. We measure how this information affects the initial guesses of the subjects (with weight s) for various questions. We also measure the influence of "experts" (more knowledgeable subjects, introduced artificially with varying probability) and of information lifetime (associated with t) on the convergence process. The distribution of social influence s is a Gaussian centered around s=2/3, with two additional very narrow peaks at 0 (highly confident subjects) and 1 ("followers"). We also find values of s below 0 or above 1, which correspond to subjects considered "irrational" in micro-economic theories, and which may deeply affect the ability of a group to reach the right estimate. Unsurprisingly, the presence of experts improves both the final estimate and the speed of convergence. However, a decrease of the information lifetime t does not seem to influence the accuracy of the final estimate, but noticeably reduces the convergence time.
Bertrand Jayles, Hye-Rin Kim, Ramon Escobedo, Stéphane Cezera, Adrien Blanchet, Tatsuya Kameda, Clément Sire and Guy Theraulaz
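One natural reading of the weight s measured in the experiment, stated here as an assumption rather than the authors' exact estimator:

```latex
% A subject's second guess g_2 interpolates between the first guess g_1 and
% the displayed average \bar{g}_t of the t previous subjects:
g_2 \;=\; (1 - s)\, g_1 \;+\; s\, \bar{g}_t .
% s = 0: highly confident (keeps own guess); s = 1: pure "follower";
% s < 0 or s > 1: the "irrational" contrarian or overshooting subjects above.
```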
157 Privacy in Distributed Event Detection: an extended abstract
Abstract: We study the problem of event detection on distributed sensor networks. Prompt event detection is critical for many high-risk settings, for example evacuation following an earthquake. Distributed sensor networks are well suited for this task as they offer advantages such as high reliability and broad coverage. Distributed sensor networks can be organized in a centralized or decentralized way; the former offers higher accuracy, the latter lower communication volume. We study how this tradeoff varies for different network organizations when a privacy cost is associated with communication. We assume that sensors pay a privacy cost for transmitting measurements. A real-world example where this assumption holds is earthquake detection with smartphones: if a device had to communicate regularly, the receiver could track its position throughout the day. We compare a centralized and a decentralized organization. In the centralized setting all sensors have to send their readings to the central event detection algorithm. In the decentralized setting the algorithm runs locally on a sensor, which alarms the central unit only if it detects an event. This setting reduces the communication volume at the expense of accuracy. We propose a distributed protocol that reduces the privacy cost by reducing the communication to the central unit. Reducing communication is by itself desirable whenever remote communication is costly, for example with low-power transmitters, and whenever computation is costly, for example if the central unit is a human supervisor with a limited attention span. The protocol allows sensors to ask their neighbors for an opinion on their measurements before reporting an event. The number of neighbors drives the accuracy/communication tradeoff. We test this protocol on different network topologies. We evaluate the system on detection accuracy and privacy cost. We expect to find a range of parameters for which a decentralized organization outperforms a centralized organization.
Stefano Bennati, Catholijn Jonker and Chris Rouly
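A hypothetical sketch of the neighbor-consultation step described above. The function and parameter names (k, quorum) are illustrative, and the real protocol's details may differ:

```python
import random

def detects(reading, threshold=1.0):
    # Local event test; no raw measurement is ever sent to the central unit.
    return reading > threshold

def report_event(node, neighbours, readings, k=3, quorum=2):
    """Return True if the node alarms the central unit."""
    if not detects(readings[node]):
        return False
    # Ask up to k neighbours for an opinion: local messages only.
    polled = random.sample(neighbours[node], min(k, len(neighbours[node])))
    votes = sum(detects(readings[m]) for m in polled)
    # k and quorum jointly tune the accuracy vs. communication/privacy tradeoff.
    return votes >= quorum

readings = {0: 1.4, 1: 1.2, 2: 0.3, 3: 1.1}
neighbours = {0: [1, 2, 3]}
print(report_event(0, neighbours, readings))
```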
129 The streets all looked so strange: looking up digital imprints of immigrants’ spatial integration in cities
Abstract: People are constantly moving within cities and countries, facing the need to integrate into the habits and laws of new local cultures. Immigration phenomena have so far been studied and described through census data, which are expensive to collect in terms of both cost and time. Here we introduce a new methodology to explore the spatial integration of international immigrant communities in cities, exploring how Twitter users' language might provide a direct connection to their hometown and/or their nationality. We collect Twitter geo-localised data from 2012 to 2015 over a set of 58 of the most populated cities in the world. We filter the users who are likely residents of each city, together with their likely place of residency. Finally, we assign to each user their most likely language. We conduct an extensive analysis of users' spatial distribution within urban areas through a modified entropy metric, as a quantitative measure of the spatial integration of each language in the city. The results allowed us to characterize cities by their "Power of Integration", the attitude of hosting immigrant communities in urban areas, and by the corresponding process of integration of languages into different cultures, which is a quantitative measure of the differences between welcoming and hosting people in urban areas. Our findings provide a new way to detect the patterns of historically permanent immigration of people in urban areas, going beyond the estimation of past, current and foreshadowed global flows, towards a better comprehension of spatial integration phenomena on a city scale.
Fabio Lamanna, Maxime Lenormand, María-Henar Salas-Olmedo, Gustavo Romanillos, Bruno Gonçalves and José Javier Ramasco

Urban  (U) Session 5

Time and Date: 13:45 - 15:30 on 22nd Sep 2016

Room: E - Mendes da Costa kamer

Chair: Christopher Monterola

94 EvacSafeX - a multi-agent model for aircraft evacuation simulation
Abstract: Throughout the last decades, several simulation models have been proposed in an attempt to reproduce aircraft evacuation scenarios and provide an alternative to real certification trials. Furthermore, simulation models have proven to be a useful tool when designing new aircraft enclosures. Among these, the airEXODUS model has seen widespread application and validation, being successful in predicting past certification trials and examining issues related to aircraft enclosure layout. This work introduces EvacSafeX, a multi-agent model centered around a proper representation of human behaviour in aircraft evacuation scenarios. EvacSafeX finds its inspiration in complex systems and in the airEXODUS model, seeking to make complex interactions and behaviour emerge from the individual modeling of human passengers. The model also takes novel approaches to represent human behaviour and passengers' movement along the aircraft cabin. EvacSafeX adopts a perception-action approach to represent agents' capabilities, following a behavioral model defined by a rule-list brain. Passengers are also characterised by both the physical and psychological attributes identified as the most relevant in evacuation scenarios. In addition to the many components already included in the model, its generic architecture allows other ones to be easily incorporated. The components implemented in the EvacSafeX prototype were verified through a set of validation experiments. We observed a significant sensitivity to some passengers' personal attributes, representative of their influence in real-life cases. Furthermore, the proposed model demonstrated high flexibility and diversity in the representation of passengers' behaviour, leading to the emergence of several different phenomena observed in real evacuation scenarios. Finally, promising results were obtained in attempts to reproduce the results of real certification demonstrations and other experiments conducted with state-of-the-art models.
João Simões and Tiago Baptista
393 Trainstopping: modeling delays dynamics on railways networks
Abstract: Railways are a key infrastructure for any modern country, so much so that their state of development has even been used as a significant indicator of a country's economic advancement. Moreover, their importance has been growing in the last decades, both because of growing railway traffic and because of government investments aimed at exploiting railways to reduce CO2 emissions and hence global warming. To the present day, many extreme events (i.e. major disruptions and large delays compromising the correct functioning of the system) occur on a daily basis. However, these phenomena have so far been approached from a transportation engineering point of view, while a general theoretical understanding is still lacking. A better comprehension of these critical situations from a theoretical point of view could undoubtedly be useful in order to improve traffic handling policies. In this work we move toward this comprehension by proposing a model of train dynamics on railway networks, aiming to unveil how delays spawn and spread across the network. Inspired by models of epidemic spreading, we model the diffusion of delays among trains as the diffusion of a contagion among a population of moving individuals. We built and tested our model using two large datasets about Italian and German railway traffic, collected using APIs intended to give passengers information about the trains, the state of the service and train delays. The model reproduces delay dynamics adequately in both systems, meaning that it captures the underlying key factors. In particular, our model predicts that the emergence of clusters of stations with large delays is due not to external factors, but mainly to the interaction between different trains. Moreover, our model is capable of giving a quantitative account of the difference between the two considered railway systems in terms of probability of contagion and delay dynamics.
Bernardo Monechi, Pietro Gravino, Vito D. P. Servedio, Vittorio Loreto and Riccardo Di Clemente
229 Why human mobility is not a Levy flight
Abstract: Recent studies of human mobility largely focus on displacement patterns. Power-law fits of empirical long-tailed distributions of distances have been associated with scale-free super-diffusive random walks called Lévy flights. However, drawing conclusions about a complex system from a fit, without any further knowledge of the underlying dynamics, might lead to erroneous interpretations. We show, on a dataset describing the trajectories of 780,000 private vehicles in Italy, that the Lévy flight model cannot explain the behavior of travel times and speeds. We therefore introduce a new class of accelerated random walks, validated by empirical observations, where the velocity changes due to acceleration kicks at random times. Combining this mechanism with an exponentially decaying distribution of travel times leads to a short-tailed distribution of distances which could indeed be mistaken for a truncated power law. These results illustrate the limits of purely descriptive models and provide a mechanistic view of human mobility.
Riccardo Gallotti, Armando Bazzani, Sandro Rambaldi and Marc Barthelemy
309 On The Coevolution of Opinion Dynamics in Growing Networks
Abstract: This paper studies the coevolution of opinion dynamics in growing networks with an attachment rule that depends on the opinion updating process. We propose that individuals choose to link with others according to the Hegselmann-Krause opinion dynamics model; each individual forms its neighborhood with others whose opinions are closer to its own than some confidence level. Since individuals hold an opinion value in the continuous interval [0,1], for a new agent on the network the neighborhood will depend not only on its confidence level but also on its initial opinion value. We analyze the network structure when the initial opinion value is selected with: i) a uniform probability, ii) a probability as a function of the degree of the new agent, and iii) a probability as a function of the clustering coefficient of the new agent. Since the confidence value and the initial opinion selection influence the network structure, we then present a method to approximate the degree distribution and the number of clusters based on these two variables. In order to complete the coevolution analysis, we also study the convergence of opinions. When a new agent is added to the network, the opinion updating process for all individuals could take place immediately (all agents change their opinion to the average of their neighborhood) or could present a delay (agents change their opinion when they detect a variation in at least one of their neighbors' opinion values). We then demonstrate that the convergence is affected by the confidence level and the initial opinion value selection, but the convergence time depends on when the updating process occurs.
Diego Acosta-Escorcia and Eduardo Mojica-Nava
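For concreteness, a minimal sketch of the Hegselmann-Krause update the attachment rule builds on, in its classic all-to-all form (the paper couples it to a growing network, which is not reproduced here):

```python
import numpy as np

def hk_step(x, eps):
    # Each agent moves to the mean opinion of everyone within confidence eps.
    x_new = np.empty_like(x)
    for i, xi in enumerate(x):
        neigh = x[np.abs(x - xi) <= eps]   # bounded-confidence neighbourhood
        x_new[i] = neigh.mean()
    return x_new

rng = np.random.default_rng(42)
x = rng.uniform(0.0, 1.0, size=100)        # opinions in [0, 1]
for _ in range(25):
    x = hk_step(x, eps=0.15)
print(np.unique(np.round(x, 3)))           # surviving opinion clusters
```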
453 A Transfer Entropy Model for the Inference of Influenza Information Networks
Abstract: Variations in seasonal influenza epidemic initiation, timing, and magnitude yield highly variable illness data that can help researchers to understand the spatial spread of influenza. For the United States, predictable spatial patterns will contribute to more accurate predictive models for ascertaining when influenza infection will occur and for understanding long-distance connections. In order to evaluate the interdependence of cases, we propose the use of a transfer entropy (TE) model that measures the amount of information transferred from one variable to another; here, the number of cases "transferred" from one region to another. TE is a non-parametric and non-linear model that offers an alternative measure of effective connectivity based on information theory, more powerful than Granger causality or assumption-based dynamic causal models. More precisely, TE quantifies causal networks between time series, where node/variable distance, node connectivity, and link weights are related to the variables' undirected statistical closeness, dependence, and directional entropy reduction. Furthermore, transfer entropy is an asymmetric measure that conveys directional information. Applying TE to CDC data, we find that the Northeast and Northwest US are the most influential nodes in the network. Conversely, the Midwest and Southwest regions are strongly affected by other regions. There are long-distance connections between the Northeast and Midwest, and between the Mid-Atlantic and Southwest regions. Some pairs of regions that are very far from each other (~1500-3000 km) still show significant correlation with each other (r=0.45-0.65), which emphasizes the importance of assessing effective connections rather than geographical connections. The results allow us to conclude that long-distance effects are relevant in the dispersion of influenza cases and to infer locally generated cases. The TE model can be useful in analyzing any other complex disease where interactions among sub-systems/regions are expected to be non-linear and where minimal a priori knowledge is available.
Matteo Convertino and Yang Liu
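The quantity at the core of the model, in its standard first-order form:

```latex
% Transfer entropy from region Y to region X: the reduction in uncertainty
% about X's next value gained by also knowing Y's present value,
T_{Y \to X} \;=\; \sum_{x_{t+1},\, x_t,\, y_t} p(x_{t+1}, x_t, y_t)\,
\log \frac{p(x_{t+1} \mid x_t, y_t)}{p(x_{t+1} \mid x_t)} .
% In general T_{Y \to X} \neq T_{X \to Y}: this asymmetry is what lets the
% method assign a direction to each inferred link.
```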

Foundations & Biology  (FB) Session 2

Time and Date: 13:45 - 15:30 on 22nd Sep 2016

Room: F - Rode kamer

Chair: Jorge Hidalgo

277 On the unpredictability of outbreaks
Abstract: Infectious disease outbreaks recapitulate biology, emerging from the multi-level interaction of hosts, pathogens, and their shared environment. Therefore, predicting when and where diseases will spread requires a complex systems approach to modeling. However, it remains to be demonstrated that such complex systems are fundamentally predictable. To investigate this question, I study the intrinsic predictability of a diverse set of diseases. However, instead of relying on methods which require an assumed knowledge of the data-generating model, I utilize permutation entropy as a model-independent metric of predictability. By studying the permutation entropy of a large collection of historical outbreaks, including influenza, dengue, measles, polio, whooping cough, Ebola, and Zika, I identify fundamental limits to our ability to forecast outbreaks. Specifically, most diseases appear to be unpredictable beyond narrow time horizons. These results have clear implications for the emerging field of disease forecasting and highlight the need for broader studies on the predictability of complex systems.
Samuel Scarpino
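A standard permutation-entropy estimator (Bandt-Pompe ordinal patterns), sketched here as one plausible version of the model-independent metric described above:

```python
import math
import numpy as np

def permutation_entropy(x, order=3, normalize=True):
    # Count ordinal patterns (relative orderings) of sliding windows.
    counts = {}
    for i in range(len(x) - order + 1):
        pattern = tuple(np.argsort(x[i:i + order]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    H = -sum(c / total * math.log(c / total) for c in counts.values())
    # Normalize by the maximum entropy log(order!) so H lies in [0, 1].
    return H / math.log(math.factorial(order)) if normalize else H

rng = np.random.default_rng(0)
print(permutation_entropy(rng.random(500)))                  # near 1: hard to predict
print(permutation_entropy(np.sin(np.linspace(0, 20, 500))))  # well below 1: predictable
```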
68 Dynamics of collective U-turn in fish schools: from empirical data to computational model
Abstract: One of the most impressive features of fish schools is their ability to perform spontaneous changes in travel direction without central coordination. A striking example is the emergence of collective U-turns. The causes that trigger these U-turns and the mechanisms by which information is propagated within a school are not yet understood. One challenging problem is the estimation of the effective neighborhood, i.e. the number and position of the neighbors that affect the behavior of a focal fish. Another important issue is the quantification of social interactions between fish. Here we combine experimental and computational approaches to address these questions. Experiments have been conducted in a ring-shaped tank with groups of 2, 4, 5, 8 and 10 individuals of the species Hemigrammus rhodostomus, a small tropical fish that exhibits schooling behavior. Empirical results show that most collective U-turns occur after the group has slowed down, and that they are usually initiated and propagated from the front to the back of the group. Moreover, fish perform fewer U-turns as group size increases. We then investigate with a computational model the consequences of interactions between fish for their collective swimming behavior. We first implement in the model the characteristic burst-and-coast swimming of H. rhodostomus: individuals control the strength of acceleration and the duration of the coasting phase depending on the presence of walls and of individuals close by. Then we use the model to investigate the effects of different effective neighborhoods on the propagation of information during collective U-turns, and we compare the simulation results to the experimental data under the same conditions.
Valentin Lecheval, Guy Theraulaz, Charlotte Hemelrijk, Pierre Tichit, Clément Sire and Hanno Hildenbrandt
199 Revealing patterns of local species richness along environmental gradients with a novel network tool
Abstract: How species richness relates to environmental gradients at large extents is commonly investigated by aggregating local site data to coarser grains. However, such relationships often change with the grain of analysis, potentially hiding the local signal. We introduced a new index related to potential species richness, which revealed large-scale patterns by including, at the local community level, information about species distribution throughout the dataset (i.e., the network). The method effectively removed noise, identifying how far site richness was from its potential. When applying it to study woody species richness patterns in Spain, we observed that annual precipitation and mean annual temperature explained large parts of the variance of the newly defined species richness, highlighting that, at the local scale, communities in drier and warmer areas were potentially the richest in species. Our method went far beyond what geographical upscaling of the data could reveal, and the insights obtained strongly suggest that it is a powerful instrument for detecting key factors underlying species richness patterns, and that it could have numerous applications in ecology and other fields.
Mara Baudena, Anxo Sanchez, Co-Pierre Georg, Paloma Ruiz-Benito, Miguel A. Rodriguez, Miguel A. Zavala and Max Rietkerk
239 Learning from food webs: Stability and trophic structure of complex dynamical systems
Abstract: Rainforests, coral reefs and other very large ecosystems seem to be the most stable in nature, but this has long been regarded as mathematically paradoxical. More generally, the relationship between structure and dynamics in complex systems is the subject of much debate. I will discuss how 'trophic coherence', a recently identified property of food webs and other directed networks, is key to understanding many dynamical and structural features of complex systems. In particular, it allows networks to become more stable with increasing size and complexity [1], determines whether a given system will be in a regime of high or negligible feedback [2], and influences spreading processes such as epidemics or cascades of neural activity [3]. [1] S. Johnson, V. Domínguez-García, L. Donetti, and M.A. Muñoz, "Trophic coherence determines food-web stability", PNAS 111, 17923 (2014) [2] S. Johnson and N.S. Jones, "Spectra and cycle structure of trophically coherent graphs", arXiv:1505.07332 (2015) [3] J. Klaise and S. Johnson, "From neurons to epidemics: How trophic coherence affects spreading processes", Chaos (in press), arXiv:1603.00670 (2016)
Samuel Johnson
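For reference, the definition of trophic coherence from [1]: trophic levels follow from the network's in-neighbourhoods, and coherence is the narrowness of the distribution of trophic distances over edges:

```latex
% Trophic levels s_i: basal nodes (no in-links) get s = 1, and every other
% node sits one level above the mean level of its prey,
s_i \;=\; 1 + \frac{1}{k_i^{\mathrm{in}}} \sum_{j} a_{ij}\, s_j .
% Each edge then carries a trophic distance x_{ij} = s_i - s_j (mean 1),
% and coherence is measured by the incoherence parameter
q \;=\; \sqrt{\langle x^2 \rangle - 1},
% with q = 0 for a maximally coherent (perfectly layered) network [1].
```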
427 The effective structure of complex networks drives dynamics, criticality and control
Abstract: Network Science has provided predictive models of many complex systems, from molecular biology to social interactions. Most of this success is achieved by reducing multivariate dynamics to a graph of static interactions. Such a network-structure approach has provided many insights about the organization of complex systems. However, there is also a need to understand how to control them; for example, to revert a diseased cell to a healthy state or a mature cell to a pluripotent state in systems biology models of biochemical regulation. Based on recent work [1], using various systems biology models of biochemical regulation and large ensembles of network motifs, we show that the control of complex networks cannot be predicted from structure alone. Structure-only methods (structural controllability and minimum dominating set theory) both undershoot and overshoot the number of variables that actually control these models, and misidentify which sets of variables do so, highlighting the importance of dynamics in determining control. We also show that canalization, measured as logical redundancy in automata transition function models [2], plays a very important role in the extent to which structure predicts dynamics. To further understand how canalization influences the controllability of multivariate dynamics, we introduce the concept of effective structure, obtained by removing all redundancy from the (discrete) dynamics of models of biochemical regulation. We show how such effective structure reveals the dynamical modularity [3] and robustness of such models [2]. Furthermore, we demonstrate that the connectivity of the effective graph is an order parameter of Boolean network dynamics, and a major factor in network controllability [4]. [1] A. Gates and L.M. Rocha [2016]. Scientific Reports 6, 24456. [2] M. Marques-Pita and L.M. Rocha [2013]. PLOS One, 8(3): e55946. [3] A. Kolchinsky, A. Gates and L.M. Rocha [2015]. Phys. Rev. E 92, 060801(R). [4] M. Marques-Pita, S. Manicka and L.M. Rocha [2016]. In preparation.
Luis M. Rocha, Alexander Gates, Manuel Marques Pita and Santosh Manicka

ICT  (I) Session 3

Time and Date: 13:45 - 15:30 on 22nd Sep 2016

Room: G - Blauwe kamer

Chair: Philip Rutten

530 Examining the Aftermath of Swiping Right: A Statistical Look at Mobile Dating Communications
Abstract: Mobile dating applications (MDAs) have skyrocketed in popularity in the last few years. In addition to becoming an influential part of modern dating culture, MDAs facilitate a unique form of mediated communication: dyadic mobile text messages between pairs of users who are not already acquainted. Furthermore, mobile dating has paved the way for analysis of these digital interactions via massive sets of data generated by the instant matching and messaging functions of its many platforms at an unprecedented scale. This work looks at one of these sets of data: details from approximately two million conversations between heterosexual users of an MDA. These conversations consist of 19 million messages exchanged between 400,000 users. Through computational analysis methods, this study offers the very first large-scale quantitative depiction of mobile dating as a whole. We report on differences in how heterosexual male and female users communicate with each other on MDAs, differences in behaviors of dyads of varying degrees of social separation, and factors leading to "success", operationalized by phone number exchange. We find that there are fundamental differences between male and female users regarding their communication patterns. We identify the key predictors of "success" among the information extracted from the messages' metadata. Finally, we show how social separation between matched users correlates with the likelihood of having a "successful" match.
Taha Yasseri
376 Behavior evaluation of dynamic flexible wavelength allocation algorithms by Markovian and simulation-based analysis
Abstract: One of the key aspects behind the success of the Internet is the use of optical fiber as the data transmission medium. In a single optical fiber, many different communication channels can transmit information simultaneously, each using a different wavelength. Today, the assignment of wavelengths to channels is fixed. However, research has shown that a flexible allocation could be more efficient, leading to much more data being transmitted over the same fiber. The flexible assignment of wavelengths is achieved by dividing the spectrum into small units, known as slots. Each communication channel is then assigned as many slots as necessary, as long as the slots are contiguous in spectrum and exactly the same set of slots is used in every link travelled by the data. Most flexible wavelength allocation algorithms use a greedy approach: a new channel is established as long as there are enough contiguous slots to accommodate it. However, due to the dynamics of the network (channels being established and released), such an approach can lead to spectrum fragmentation and inefficient usage of spectrum. A new approach, called Deadlock-Avoidance (DA), only establishes a new connection if the set of contiguous slots left after allocating it is big enough to accommodate future channels. Otherwise, the channel is not established even if there are available slots to allocate it. The behavior of DA has been evaluated for incremental traffic (channels are never released) in a single-link scenario, showing a better performance than the greedy approach. The aim of this work is to evaluate the dynamic behavior of DA in a more realistic scenario by using a Markov chain model (for the single-link case) and event-driven simulation (for all scenarios). Results shed light on the key aspects affecting the dynamic performance of flexible wavelength assignment algorithms.
Danilo Bórquez-Paredes, Alejandra Beghelli, Ariel Leiva and Ruth Murrugarra
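A toy single-link sketch contrasting the greedy first-fit rule with the Deadlock-Avoidance idea described above. The threshold min_leftover stands in for "big enough to accommodate future channels" and is illustrative only:

```python
def free_blocks(slots):
    # Return (start, size) for each maximal run of free slots.
    blocks, start = [], None
    for i, used in enumerate(slots + [True]):       # sentinel closes last run
        if not used and start is None:
            start = i
        elif used and start is not None:
            blocks.append((start, i - start))
            start = None
    return blocks

def allocate(slots, demand, policy="first_fit", min_leftover=2):
    for start, size in free_blocks(slots):
        if size < demand:
            continue
        leftover = size - demand
        if policy == "da" and 0 < leftover < min_leftover:
            continue                                # would fragment: skip block
        for i in range(start, start + demand):      # greedy: take the first fit
            slots[i] = True
        return start
    return None                                     # request blocked

slots = [False] * 10
print(allocate(slots, 4, "da"), allocate(slots, 3, "da"), slots)
```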
551 Agricultural activity shapes the communication and migration patterns in Senegal
Abstract: The communication and migration patterns of a country are shaped by its socioeconomic processes. The economy of Senegal is predominantly rural, as agriculture employs over 70% of the labor force. We have used a combination of mobile phone records and satellite images to explore the impact of agricultural activity on the communication and mobility patterns of the inhabitants of Senegal [1]. By means of the construction and analysis of time series and complex networks, we have found two peaks of phone call activity emerging during the growing season. Moreover, during the harvest period, we detect an increase in migration flows throughout the country. Another factor that shapes the communication and mobility patterns is traditional religious festivities, which are often held in a particular city. This implies the temporary migration of large masses of people, leaving a detectable trace in the data, which we explore with the aid of evolving temporal networks. Hence, in the light of our results, agricultural activity and religious holidays are the primary drivers of mobility inside the country. References [1] S. Martin-Gutierrez, J. Borondo, A. J. Morales, J. C. Losada, A. M. Tarquis and R. M. Benito, "Agricultural activity shapes the communication and migration patterns in Senegal", Chaos: An Interdisciplinary Journal of Nonlinear Science, 2016, in press.
Samuel Martin-Gutierrez, Javier Borondo, Alfredo Morales, Juan Carlos Losada, Ana M. Tarquis and Rosa M. Benito
290 Citation Networks in Law: Detection of Hierarchy and Identification of Key Events [abstract]
Abstract: Citation networks can be used to make powerful analyses about human intellectual activity in diverse fields. However, universal rules governing their structure and dynamics have not yet been discovered. To address this, my research probes the influence of social and institutional hierarchy on the structure and dynamics of citation networks. Hierarchy is a fundamental feature of all human social organizations; therefore, any citation network is necessarily embedded in an “underlying” hierarchy that in turn determines properties of the network. Through this new way of analyzing citation networks, my research seeks to advance the understanding of phenomena central to societal progress, such as: the emergence of research fronts and seminal publications; how paradigms form, take hold, become unstable, and collapse; innovation and the emergence of new technologies; and the emergence of new legal doctrine and the evolution of the law. I will present an analysis of a novel data set (that I have created) that covers all hierarchical levels of the Canadian legal system for a specific area of law (defamation law). My presentation will show: 1) an evaluation of a recently published method for inferring hierarchies among scientific journals based on scientific citation networks by applying that method to my data set, in order to determine if the method is capable of detecting the known underlying court hierarchy; and 2) ways in which network analysis methods (node-ranking via authority scores and node-grouping via community detection/clustering) can identify important periods in the evolution of the law (e.g. turning-points in legal “eras”, in which the law is applied in a new way). Points 1 and 2 will be discussed in relation to the overarching goal of understanding the influence of underlying hierarchy on the structure and evolution of citation networks in law and other fields.
Joseph Hickey and Joern Davidsen
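A sketch of the two network analyses named in the abstract above (node ranking via authority scores, node grouping via community detection), applied to a toy citation network; the case names are hypothetical:

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy directed citation network: an edge u -> v means decision u cites decision v.
G = nx.DiGraph([("case_A", "case_B"), ("case_C", "case_B"), ("case_C", "case_D"),
                ("case_E", "case_B"), ("case_E", "case_D"), ("case_F", "case_E")])

# Node ranking: HITS authority scores highlight frequently cited decisions.
hubs, authorities = nx.hits(G)
print(sorted(authorities.items(), key=lambda kv: -kv[1]))

# Node grouping: community detection on the undirected projection.
print([sorted(c) for c in greedy_modularity_communities(G.to_undirected())])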
387 Assessing reliable human mobility patterns from higher-order memory in mobile communications [abstract]
Abstract: Understanding how people move within a geographic area, e.g. a city, a country or the whole world, is fundamental in several applications, from predicting the spatio-temporal evolution of an epidemic to inferring migration patterns. The possibility of gathering information about the population through mobile phone data, recorded by mobile carriers, has triggered a wide variety of studies showing, for instance, that mobile phones have heterogeneously penetrated both rural and urban communities, regardless of wealth, age or gender, providing evidence that mobile technologies can be used to build realistic demographic and socio-economic maps of low-income countries. Such data also provide an excellent proxy of human mobility, showing, for instance, that movements exhibit a high level of memory, i.e. the movements of individuals are conditioned by their previously visited locations. However, the precise role of memory in widely adopted proxies of mobility, such as mobile phone records, is unknown. We have used 560 million call detail records from Senegal to show that standard Markovian approaches, including higher-order ones, fail to capture real mobility patterns and introduce spurious movements never observed in reality. We introduce an adaptive memory-driven approach to overcome such issues. At variance with Markovian models, it is able to realistically model conditional waiting times, i.e. the probability of staying in a specific area depending on an individual's historical movements. Our results demonstrate that in standard mobility models the individuals tend to diffuse faster than what is observed in reality, whereas the predictions of the adaptive memory approach agree significantly with observations. We show that, as a consequence, the incidence and the geographic spread of a disease could be inadequately estimated when standard approaches are used, with crucial implications for resource deployment and policy making during an epidemic outbreak.
Joan T. Matamalas, Manlio De Domenico and Alex Arenas
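A minimal sketch of the Markovian baselines the abstract compares against, fitting first- and second-order transition probabilities to toy location sequences; the adaptive-memory approach itself is not reproduced here:

from collections import Counter, defaultdict

trajectories = [list("ABABAC"), list("BABAD"), list("ABAB")]  # toy location sequences

def markov_counts(seqs, order):
    """Count moves conditioned on the last `order` visited locations."""
    counts = defaultdict(Counter)
    for s in seqs:
        for i in range(order, len(s)):
            counts[tuple(s[i - order:i])][s[i]] += 1
    return counts

def transition_probs(counts):
    probs = {}
    for ctx, ctr in counts.items():
        total = sum(ctr.values())
        probs[ctx] = {loc: n / total for loc, n in ctr.items()}
    return probs

p1 = transition_probs(markov_counts(trajectories, 1))
p2 = transition_probs(markov_counts(trajectories, 2))
# The second-order model captures return trips such as (A,B) -> A that a
# first-order model smears into spurious onward movements.
print(p1[("B",)], p2[("A", "B")])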

Foundations & Urban  (FU) Session 1

Schedule Top Page

Time and Date: 13:45 - 15:30 on 22nd Sep 2016

Room: H - Ontvangkamer

Chair: Garvin Haslett

237 Complexity of informal public transport services [abstract]
Abstract: The rapid pace of urbanization has made transport infrastructure of prime importance for the efficient functioning of cities, and this pace is putting an ever increasing load on public transport services. Public transport services are offered by both the formal and the informal sector. The dependency of people on public transport and the inadequacy of formal public transport services generate apt conditions for the flourishing of informal public transport services, which establishes the need to gain insight into them. Informal public transport services are services provided by private owners without fixed routes and stops. The network that emerges on this basis caters effectively to users' expectations and appears to be highly integrated with the city fabric. It helps to connect the formal public transport services and neighborhoods in the proximity, and provides last-mile connectivity. It also offers a variety of options to the commuter that may be flexible and economical. To better understand the network of informal public transport services, complexity science concepts are applied to derive an analytical framework. The research explores inter-connectedness and inter-dependence, emergence, the role of adaptive agents, and self-organisation in the network of informal public transport services. Further, it explores the co-evolution within the network of informal public transport services and also with formal public transport services. The research further highlights the potential of informal public transport services in complementing the formal public transport services.
Rajesh A Sawarkar and Akshay P Patil
300 Topological loops suppress large cascades in interdependent networks [abstract]
Abstract: Cascading failures are frequently observed in networked systems and remain a major threat to the reliability of network-like infrastructure such as power grids, public transportation systems or financial markets. Furthermore, the interdependence of such systems on one another increases their vulnerability to failure, especially when compared to the functioning of a single network viewed in isolation. Here we consider a classic model of cascading failure, the BTW sandpile model, on a system of interdependent networks. A recent study (PNAS 109, 12, E680-E689, 2012) of the sandpile model on modular random graphs has demonstrated that there exists an optimal level of connectivity between systems which suppresses the largest cascades in each network. Higher connectivity, however, increases capacity and the total possible load of the system, raising the frequency of large avalanches. We build upon this result by considering a more realistic scenario of coupling between two networks. Rather than connecting the nodes of both networks fully at random, randomly selected sites in one network are connected preferentially to the topologically most similar location in the second layer. We demonstrate that for both random graphs and two-dimensional lattices a high level of such preferential connections suppresses the large cascades seen otherwise in the random connection case. The probability of large cascades reaches a minimal value as seen in the earlier work and remains constant at that magnitude for any connectivity higher than optimal. Finally, we discuss limitations of the multiple branching process approximation in estimating the impact of cascading failures on realistic networks. We show that preferential inter-layer connections create large-scale loops, which are responsible for limiting the size of observed cascades. Our work suggests that connectivity between systems can be topologically optimized so as to limit the impact of cascading failures in networks of networks.
Malgorzata Turalska and Ananthram Swami
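A compact sketch of BTW sandpile dynamics on two coupled random graphs. The random interlinks below correspond to the baseline coupling; the topology-matched preferential coupling studied in the talk, and the dissipation rate used here, are assumptions for illustration:

import random
import networkx as nx

def avalanche(G, load, start, dissipation=0.05):
    """Drop one grain at `start`; topple until stable; return avalanche size."""
    load[start] += 1
    queue, size = [start], 0
    while queue:
        v = queue.pop()
        k = G.degree(v)
        if k and load[v] >= k:          # BTW rule: topple when load reaches degree
            load[v] -= k
            size += 1
            for u in G.neighbors(v):
                if random.random() > dissipation:   # grains occasionally leave the system
                    load[u] += 1
                    queue.append(u)
    return size

A = nx.erdos_renyi_graph(200, 0.03, seed=1)
B = nx.erdos_renyi_graph(200, 0.03, seed=2)
G = nx.disjoint_union(A, B)
for _ in range(10):                      # baseline: random interlinks between layers
    G.add_edge(random.randrange(200), 200 + random.randrange(200))

load = {v: 0 for v in G}
sizes = [avalanche(G, load, random.choice(list(G))) for _ in range(20000)]
print(max(sizes), "largest avalanche")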
275 Strong finite size effects in percolation of large modular networks [abstract]
Abstract: It is known that the presence of short loops (clustering) is an important source of error in tree-based theories [Gleeson et al, PRE 2012; Faqeeh et al, PRE 2015]. Despite this fact, the message passing (MP) approach for bond percolation [Karrer and Newman, PRL 2010], which is a state-of-the-art tree-based theory, was shown to perform surprisingly well on several clustered networks. On the other hand, on some real-world networks where MP performs poorly, a significant part of the error cannot be explained by the presence of short loops [Faqeeh et al, PRE 2015]. This indicates the presence of an unknown source of error and a phenomenon not captured by theories. We show [Faqeeh et al, arXiv:1508.05590, 2015] that a type of finite size effect, which is independent of the total number of nodes or links of the network, is an important source of inaccuracy of theories such as MP. This type of finite size effect, which we refer to as “limited mixing”, occurs in modular networks in which the number of interlinks between pairs of modules is finite and sufficiently small. We demonstrate that, due to limited mixing, coexisting percolating clusters emerge in networks, while it is commonly assumed that only one percolating cluster can exist in networks (see for example [Melnik et al, Chaos 2014; Colomer-de-Simón et al, PRX 2014]). We show that this assumption is an important source of error in MP. We develop an approach called modular message passing (MMP) to describe and verify these observations. Moreover, we show that the MMP theory improves significantly over the predictions of MP for percolation on synthetic networks with limited mixing and also on several real-world networks. These findings have important implications for understanding the robustness of networks and for quantifying epidemic outbreaks in the susceptible-infected-recovered model of disease spread.
Ali Faqeeh, Sergey Melnik, Pol Colomer-De-Simon and James Gleeson
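A short sketch of the standard message-passing equations for bond percolation (Karrer and Newman) on which the abstract builds; the modular message passing (MMP) variant is not reproduced:

from math import prod
import networkx as nx

def mp_giant_fraction(G, p, iters=100):
    """Karrer-Newman message passing for bond percolation with occupation p.
    u[i, j]: probability that i is NOT joined to the giant cluster via edge (i, j)."""
    u = {(i, j): 0.5 for i in G for j in G.neighbors(i)}
    for _ in range(iters):
        u = {(i, j): 1 - p + p * prod(u[j, k] for k in G.neighbors(j) if k != i)
             for (i, j) in u}
    s = sum(1 - prod(u[i, j] for j in G.neighbors(i)) for i in G)
    return s / G.number_of_nodes()

G = nx.erdos_renyi_graph(1000, 4 / 999, seed=0)   # mean degree ~ 4
print(mp_giant_fraction(G, p=0.5))                # MP is exact on locally tree-like graphs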
169 Congestion induced by the structure of multiplex networks. [abstract]
Abstract: In this work we study the transportation congestion problem in multiplex networks. We prove analytically and experimentally that the structure of multiplex networks can induce congestion for flows that otherwise would be decongested if the individual layers were not interconnected [1]. In transportation dynamics, a node starts to be congested when it is required to process elements at its maximum processing rate. The onset of congestion is known to be attained at a critical load which is inversely proportional to the network betweenness (i.e. the largest betweenness of all nodes). We show that the multiplex betweenness depends on intra-layer paths, inter-layer paths, and on the migration of shortest paths between layers. This last contribution unbalances, in a highly non-linear way, the distribution of shortest paths among the layers. Some approximations are possible to grasp the effect of the different contributions to the onset of congestion. In particular, the fraction of shortest paths fully contained within layers is basically 1, and so the main factor influencing the traffic dynamics is the migration of shortest paths from the less efficient layer to the most efficient one. We can then approximate the multiplex betweenness and compute the congestion induced by a multiplex as the situation in which a multiplex network reaches congestion with less load than the worst of its layers when operating individually. Evaluation on several multiplex topologies shows that the boundaries obtained by our approximations accurately determine the regions where the multiplex induces congestion. The observed cooperative phenomenon is reminiscent of Braess' paradox, in which adding extra capacity to a network when the moving entities selfishly choose their route can in some cases reduce overall performance. [1] Solé-Ribalta, Albert, Sergio Gómez, and Alex Arenas. "Congestion induced by the structure of multiplex networks." Physical Review Letters 116 (2016) 108701.
Albert Sole, Sergio Gómez and Alex Arenas
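A sketch of the standard betweenness-based estimate for the onset of congestion, evaluated per layer and on a naive aggregate of the layers; the aggregate is only a crude stand-in for the full multiplex shortest-path calculation in the paper:

import networkx as nx

def onset_of_congestion(G):
    """Standard estimate: critical injection rate ~ (N - 1) / B_max for unit
    processing rate, where B_max is the largest node betweenness."""
    bc = nx.betweenness_centrality(G, normalized=False)
    return (G.number_of_nodes() - 1) / max(bc.values())

layer1 = nx.erdos_renyi_graph(300, 0.02, seed=1)
layer2 = nx.watts_strogatz_graph(300, 4, 0.1, seed=2)
aggregate = nx.compose(layer1, layer2)   # shared nodes, union of the two edge sets

for name, g in [("layer 1", layer1), ("layer 2", layer2), ("aggregate", aggregate)]:
    print(name, onset_of_congestion(g))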
321 Extreme robustness of scaling patterns in Sample Space Reducing process and Targeted Diffusion [abstract]
Abstract: One of the most fundamental properties of complex systems is that their evolution exhibits path-dependence. This implies a substantial departure from standard approaches of statistical physics. Complementarily, their statistical properties are usually governed by power laws and scaling patterns, in opposition to the generalised presence of exponential and Gaussian distributions found in standard approaches. It has been shown recently that a specific class of path-dependent stochastic processes, which reduce their sample space as they unfold, lead to exact scaling laws in frequency and rank distributions. Such Sample Space Reducing processes (SSRPs) offer an alternative mechanism for understanding the emergence of scaling in countless processes. The corresponding power-law exponents are related to noise levels in the process. In this talk we will show that the emergence of scaling is not limited to the simplest SSRPs, but holds for a huge domain of stochastic processes which reduce their sample space as they unfold, in spite of being characterized by non-uniform prior distributions. In the absence of noise, the scaling exponents converge to −1 (Zipf's law) for almost all prior distributions. As a consequence it becomes possible to fully understand targeted diffusion on weighted directed networks and its associated scaling laws in node-visit distributions. The presence of cycles can be properly interpreted as playing the same role as noise in SSRPs and, accordingly, determines the scaling exponents. Our results have two immediate consequences: at the theoretical level, they offer a clear link between the emergence of scaling and path-dependence. At the applied level, the result that Zipf's law emerges as a generic feature of diffusion on networks, regardless of its details, and that the exponent of visiting times is related to the amount of cycles in a network could be relevant for applications in traffic, transport and supply-chain management.
Bernat Corominas-Murtra, Rudolf Hanel and Stefan Thurner
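A minimal simulation of a noisy sample space reducing process; visit frequencies follow p(i) ~ i^(-lam), approaching Zipf's law as the noise vanishes (here lam is the probability of a strictly sample-space-reducing jump):

import random
from collections import Counter

def ssr_visits(N=1000, restarts=20000, lam=1.0):
    """Noisy SSR process: with probability lam jump uniformly *below* the
    current state, otherwise jump uniformly anywhere; restart at N from 1.
    Visit frequencies scale as i**(-lam); lam = 1 gives Zipf's law."""
    visits = Counter()
    for _ in range(restarts):
        x = N
        while x > 1:
            x = random.randint(1, x - 1) if random.random() < lam else random.randint(1, N)
            visits[x] += 1
    return visits

v = ssr_visits()
print([v[i] for i in (1, 2, 4, 8, 16)])  # counts roughly halve with each doubling of i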

Biology & Physics  (BP) Session 1

Schedule Top Page

Time and Date: 13:45 - 15:30 on 22nd Sep 2016

Room: I - Roland Holst kamer

Chair: Aleksandra Aloric

272 Network theoretic constraints on metabolic diversity explain universal features of life on Earth [abstract]
Abstract: All known life on Earth shares a set of common core reactions, used to synthesize and sustain every living individual. The network structure of these core reactions, and the corresponding peripheral reactions, have been analyzed in organisms in all three domains of life. These analyses have revealed similarities in the organization of their chemical reaction networks, which are quantified by topological measures such as diameter, degree distribution, and hierarchical modularity. We expand on this work with the analysis of an additional 21,000 bacterial genomes, 800 archaeal genomes, tens of metagenomes, and all documented biologically catalyzed reactions. We show that networks constructed from communities of individuals are distinguishable from individual organismal networks using some measures, but indistinguishable using others. Additionally, we show that real, coevolved metabolic communities are distinct from synthetic metabolic communities, which are constructed from randomly assembling individual organisms that have not jointly evolved. We find that regardless of organizational scale and whether or not communities are jointly evolved, these biological networks have heterogeneous degree distributions, which are associated with robustness to random mutation. Finally, we construct artificial networks that are topologically identical to individual metabolic networks, but differ in the distribution of chemical pathways relative to their topology. We show that in contrast to the real and synthetic communal networks, communities of these artificial networks do not exhibit scale-invariant properties. Interpreted in a network theoretic sense, this implies that networks which can sum together and maintain certain scale-invariant features must have highly constrained subgraphs. In the context of biological systems, these results suggest that the robustness of a communal metabolic network is highly sensitive to the particular chemical pathways present within individuals that constitute the community, and thus that a robust biosphere requires all its organisms to share a common core biochemistry.
Harrison Smith, Hyunju Kim, Jason Raymond and Sara Imari Walker
419 Sampling the movement phenospace: Local linear models and the behavior of C. elegans [abstract]
Abstract: The complexity of emergent systems can arise both from an intricate interplay of interacting parts and from the dynamical patterns performed by the system as a whole. But how do we find the dominant collective modes and how do we capture the dynamics of these modes with models amenable to analysis? Here we address these questions in the living movement of the nematode C. elegans. We apply a low-dimensional yet complete "eigenworm" representation of body shape to construct a principled parameterization of 2D postural movements. We use this representation to systematically explore the space of behavior by means of a local linear model and we develop a novel algorithm in which temporal locality is provided by the system itself by adaptively selecting the window of the local approximation. We apply our procedure to an example in which a heat shock is briefly administered to the worm’s head and we find a fine-scale description of the worm behavior which is remarkably more structured than previous, coarse-grained characterizations. We believe that our approach will be useful in dissecting other complex systems into more interpretable behaviors.
Antonio Carlos Costa and Greg Stephens
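A sketch of the core idea in the abstract above: fitting a local linear model x(t+1) ≈ A x(t) in windows of a low-dimensional posture time series. The data here is a synthetic noisy rotation standing in for eigenworm coordinates, and the adaptive window selection of the abstract is replaced by a fixed window:

import numpy as np

rng = np.random.default_rng(0)
theta = 0.1
A_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
x = np.zeros((500, 2))
x[0] = [1.0, 0.0]
for t in range(499):                     # noisy rotation in a 2D "posture" space
    x[t + 1] = A_true @ x[t] + 0.01 * rng.standard_normal(2)

def local_linear_fit(x, t0, w):
    """Least-squares A with x[t+1] ~ A @ x[t] over the window [t0, t0 + w)."""
    X, Y = x[t0:t0 + w - 1], x[t0 + 1:t0 + w]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return B.T

print(np.round(local_linear_fit(x, 100, 50), 2))   # should approximate A_true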
465 Traveling chimera states in networks of hierarchically coupled Lattice Limit Cycle oscillators [abstract]
Abstract: We investigate the emergence of chimera states in hierarchically connected networks, in a system undergoing a Hopf bifurcation. We show that under specific conditions the chimera states (characterized by coexisting, alternating, coherent and incoherent domains) acquire a nested mean phase velocity distribution and can be traveling. The single-oscillator dynamics follows the Lattice Limit Cycle (LLC) model, which describes a cyclic prey-predator scheme among three species, presents a fourth-order nonlinearity and gives rise to a limit cycle via a Hopf bifurcation. If LLC oscillators are arranged on a ring network topology with nonlocal interactions, stationary multi-chimera states emerge when the system is far from the Hopf bifurcation [1]. Hierarchical coupling connectivity [2] is introduced to the network in such a way that each LLC oscillator is coupled to all elements belonging to a Cantor set arranged around the ring. We provide evidence that this coupling scheme causes alterations to the structure of the coherent and incoherent regions. As space-time plots show, the (in)coherent regions present nested structures which travel around the ring, keeping their profiles statistically stable in time. By recording how the position (i.e. the node number) of the maximum concentration value periodically changes in time, we calculate the corresponding frequency via the Fourier transform. We find that the speed of this motion decreases with increasing coupling strength [1]. Complex nested chimera structures, when regarded from the viewpoint of population dynamics, exemplify the rich organization which arises in communities of nonlocally interacting populations due to correlations in the connectivity rules. [1] Hizanidis, J., Panagakou, E., Omelchenko, I., Schöll, E., Hövel, P., Provata, A., Phys. Rev. E, vol. 92, 012915 (2015). [2] Omelchenko, I., Provata, A., Hizanidis, J., Schöll, E., Hövel, P., Phys. Rev. E, vol. 91, 022917 (2015).
Johanne Hizanidis, Evangelia Panagakou, Iryna Omelchenko, Eckehard Shoell, Philipp Hoevel and Astero Provata
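The LLC equations are not given in the abstract, so the sketch below substitutes generic nonlocally coupled phase oscillators on a ring with a Cantor-set coupling mask; coupling strength and phase lag are illustrative assumptions, not values from the study:

import numpy as np

def cantor_offsets(levels=5):
    """Ring offsets surviving middle-third deletion (a finite Cantor set)."""
    keep = np.ones(3 ** levels, dtype=bool)
    def cut(lo, hi):
        third = (hi - lo) // 3
        if third == 0:
            return
        keep[lo + third: lo + 2 * third] = False
        cut(lo, lo + third)
        cut(lo + 2 * third, hi)
    cut(0, 3 ** levels)
    offs = np.nonzero(keep)[0]
    return offs[offs > 0]

N = 243                                   # 3^5 sites, convenient for the mask
offsets = cantor_offsets()
phi = 2 * np.pi * np.random.rand(N)
sigma, alpha, dt = 0.4, 1.46, 0.05        # illustrative coupling strength and phase lag

for _ in range(4000):                     # Euler steps of the phase dynamics
    drive = np.zeros(N)
    for d in offsets:
        drive += np.sin(np.roll(phi, d) - phi - alpha) + np.sin(np.roll(phi, -d) - phi - alpha)
    phi += dt * sigma * drive / (2 * len(offsets))

# coherent stretches show smooth phase profiles; incoherent stretches look disordered
print(np.round(np.angle(np.exp(1j * phi))[:20], 2))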
13 Predicting the self-assembly of colloidal nanoparticles: A computer game [abstract]
Abstract: The ability of atomic, colloidal and nanoscale particles to self-organize into highly ordered crystalline structures makes the prediction of crystal structures in these systems an important challenge for science. The question itself is deceivingly simple: assuming that the underlying interaction between constituent particles is known, which crystal structures are stable? In this talk, I will describe a Monte Carlo simulation method [1] combined with a triangular tessellation method [2] to describe the surface of arbitrarily shaped particles, which can be employed to predict close-packed crystal structures in colloidal hard-particle systems. I will show that particle shape alone can give rise to a wide variety of crystal structures with unusual properties, e.g., photonic band-gap structures or highly diffusive crystals, but combining the choice of particle shape with external fields, like confinement [5], or solvent effects [6] can enlarge the number of possible structures even more. [1] L. Filion, M. Marechal, B. van Oorschot, D. Pelt, F. Smallenburg, and M. Dijkstra, Physical Review Letters 103, 188302 (2009). [2] J. de Graaf, R. van Roij and M. Dijkstra, Physical Review Letters 107, 155501 (2011). [3] K. Miszta, J. de Graaf, G. Bertoni, D. Dorfs, R. Brescia, S. Marras, L. Ceseracciu, R. Cingolani, R. van Roij, M. Dijkstra and L. Manna, Nature Materials 10, 872-876 (2011). [4] A.P. Gantapara, J. de Graaf, R. van Roij, and M. Dijkstra, Physical Review Letters 111, 015501 (2013). [5] B. de Nijs, S. Dussi, F. Smallenburg, J.D. Meeldijk, D.J. Groenendijk, L. Filion, A. Imhof, A. van Blaaderen, and M. Dijkstra, Nature Materials 14, 56-60 (2015). [6] J.R. Edison, N. Tasios, S. Belli, R. Evans, R. van Roij, and M. Dijkstra, Physical Review Letters 114, 038301 (2015).
Marjolein Dijkstra
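A toy version of the core ingredient of such simulations: a hard-particle Monte Carlo displacement move that rejects overlaps. Real predictions use anisotropic shapes, variable simulation boxes and the tessellation-based overlap test of [1, 2], none of which is reproduced here:

import numpy as np

rng = np.random.default_rng(1)
L, sigma, n = 10.0, 1.0, 40               # box size, disk diameter, number of disks
pos = rng.random((n, 2)) * L               # random start; overlaps anneal away

def overlaps(pos, i, trial):
    d = pos - trial
    d -= L * np.round(d / L)               # periodic boundary conditions
    r2 = (d ** 2).sum(axis=1)
    r2[i] = np.inf                         # ignore the particle itself
    return (r2 < sigma ** 2).any()

accepted = 0
for _ in range(20000):                     # single-particle displacement moves
    i = rng.integers(n)
    trial = (pos[i] + 0.2 * (rng.random(2) - 0.5)) % L
    if not overlaps(pos, i, trial):        # hard-core interaction: reject overlaps
        pos[i] = trial
        accepted += 1
print(accepted / 20000, "acceptance ratio")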
17 Residence time in a strip under jamming conditions [abstract]
Abstract: The target of our study is to approximate numerically and, in some particular physically relevant cases, also analytically, the residence time of particles undergoing a biased motion on a two-dimensional vertical strip. The model is of some relevance to crowd dynamics, when a high density of people at exits has to be prevented. The sources of asymmetry are twofold: (i) the choice of the boundary conditions (different reservoir levels) and (ii) the possible strong anisotropy from a drift nonlinear in density with prescribed directionality. The motion is modelled by the simple exclusion process on the two-dimensional square lattice. We focus on the effect on the residence time of jamming induced both by high reservoir levels at the strip exit end and by the presence of an impenetrable barrier placed at the middle of the strip. In both cases we find unexpected non-linear behavior of the residence time with respect to natural parameters of the model, such as the lateral movement probability and the width of the obstacle. We analyze our numerical results by means of two theoretical models, a mean-field and a one-dimensional birth-and-death model. In most cases we find good agreement between theoretical predictions and numerical results.
Emilio N.M. Cirillo, Rutger van Santen and Adrian Muntea
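A minimal sketch of a biased simple exclusion process on a strip, recording residence times. Injection rate, drift and geometry are illustrative, and neither the exit-reservoir jamming nor the obstacle of the study is modeled:

import random

W, H = 10, 40                         # strip width and length (illustrative)
occ = [[False] * W for _ in range(H)]
particles, times, clock = {}, [], 0   # id -> (row, col); recorded residence times

def step():
    global clock
    clock += 1
    for c in range(W):                # injection from the entrance reservoir
        if not occ[0][c] and random.random() < 0.3:
            occ[0][c] = True
            particles[(clock, c)] = (0, c)
    for pid, (r, c) in list(particles.items()):
        if random.random() < 0.8:     # drift towards the exit end
            nr, nc = r + 1, c
        else:                         # lateral move
            nr, nc = r, c + random.choice([-1, 1])
        if not 0 <= nc < W:
            continue
        if nr == H:                   # particle leaves through the exit
            occ[r][c] = False
            times.append(clock - pid[0])
            del particles[pid]
        elif not occ[nr][nc]:         # simple exclusion: only move to empty sites
            occ[r][c] = False
            occ[nr][nc] = True
            particles[pid] = (nr, nc)

for _ in range(20000):
    step()
print(sum(times) / len(times), "mean residence time")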

Cognition & Foundations & Socio-Ecology  (CFS) Session 1

Schedule Top Page

Time and Date: 13:45 - 15:30 on 22nd Sep 2016

Room: J - Derkinderen kamer

Chair: Andrew Schauf

467 Collapse of public transport networks under stress [abstract]
Abstract: Public transportation systems must cope with increased demand during exceptional crowd gatherings such as concerts, football matches or other localized events. Here we study the emergence of delays and the collapse of a public transport network under situations of stress. We introduce a simplified model to simulate the mobility of users through the public transport system of the metropolitan area of Barcelona. The city is divided into cells represented as nodes in a multilayer network. Each of the layers stands for a transport modality; the layers are interconnected at the nodes, with an extra layer for pedestrian movements. The transport vehicles run through the corresponding layers, while new trips are generated at a given rate, producing a demand for transport that matches in origin and destination the empirical data from the city as obtained from surveys. The agents then travel through the network using two different routing protocols: either following the shortest paths, or by an adaptive mechanism that tries to avoid congested areas. We test how the network behaves when a large number of people is injected at a fixed point of the city, as would occur during a special event. This sudden increase in demand leads to the appearance of delays and queues in the system, with a dependence on the topology of the network and the distance from the injection point. The reaction of the system depends on the part of the city where the agents are generated. While some parts of the city are highly connected and able to handle huge amounts of people, others are poorly connected and require a long time to redistribute the agents. We also test how the system reacts to a strike with fewer vehicles running, and determine the minimum number of vehicles needed to avoid collapse.
Aleix Bassolas, José Javier Ramasco and Maxime Lenormand
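A toy sketch of the injection scenario: agents appear at one node of a toy city graph, travel shortest paths to a destination, and each node forwards a limited number of agents per step; all parameters are illustrative:

import networkx as nx
from collections import deque

G = nx.grid_2d_graph(15, 15)                        # toy city: cells as nodes
paths = dict(nx.shortest_path(G, target=(14, 14)))  # everyone heads for one hub

CAPACITY = 5                                        # agents a node forwards per step
queues = {v: deque() for v in G}
for _ in range(2000):                               # sudden injection at a venue
    queues[(0, 0)].append(1)

arrived, t = 0, 0
while arrived < 2000:
    t += 1
    moving = []
    for v, q in queues.items():
        for _ in range(min(CAPACITY, len(q))):
            q.popleft()
            moving.append(paths[v][1] if len(paths[v]) > 1 else None)
    for nxt in moving:
        if nxt is None:
            arrived += 1                            # reached the destination
        else:
            queues[nxt].append(1)
print("steps to clear the injection:", t)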
236 Investigating Patterns of Technological Innovation [abstract]
Abstract: Understanding the patterns of technological innovation is crucial both for the theory of economic growth and for practical applications in research and development. Yet a precise characterization of breakthrough inventions has not been fully developed. We address this issue using a large-scale data-mining and network approach on patent data. We extract from US Patent Office raw data an open consolidated database which includes detailed patent information, technological classifications, citation links, and abstract texts. This yields a database of around 4×10^6 patents over a time range from 1976 to 2012. We aim to capture the semantic information contained in texts, which has been shown to be complementary to classification data. To do that, we extract relevant n-gram keywords and obtain for each year a semantic network based on co-occurrences. The multi-objective optimization of network modularity and size is performed on network construction parameters (filtering thresholds) through high-performance computing. We obtain for each year a multi-layer network, containing semantic community relations, technological class relations and citation relations between patents. The mining of network layers yields interesting results, such as an increase over time of patent semantic originality combined with a counter-intuitive loss of class-level interdisciplinarity. This corroborates the stylized facts of both invention refinement and specialization in time. Citation-level interdisciplinarity is investigated by combining the different layers. Finally, we plan further work towards the use of the heterogeneous features produced by multi-layer network analysis in machine learning models to predict the success and breakthrough level of inventions. Our contribution to the study of socio-technical complex systems is thematic, with the construction of an open-access large-scale consolidated patent database and insights into the temporal evolution of inventions, as well as methodological, with a technique that can be generalized to any network whose nodes contain a textual description.
Antonin Bergeaud, Yoann Potiron and Juste Raimbault
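A minimal sketch of the keyword co-occurrence construction with a filtering threshold; the documents and threshold below are made up, and the study's multi-objective optimization of the thresholds is not reproduced:

import itertools
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy keyword sets per patent; real inputs are extracted n-gram keywords.
docs = [{"neural network", "image recognition", "gpu"},
        {"neural network", "gpu", "memory controller"},
        {"memory controller", "cache", "gpu"}]

THRESHOLD = 2            # filtering threshold on co-occurrence counts
weights = {}
for kws in docs:
    for a, b in itertools.combinations(sorted(kws), 2):
        weights[(a, b)] = weights.get((a, b), 0) + 1

G = nx.Graph((a, b, {"weight": w}) for (a, b), w in weights.items() if w >= THRESHOLD)
print([sorted(c) for c in greedy_modularity_communities(G, weight="weight")])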
313 Change of human mobility networks by a big incident [abstract]
Abstract: We investigate human mobility networks using a lifelog dataset in which sleeping time was recorded and a Twitter dataset that contains 150 million geotagged tweets around historical incidents. Over the last several years, we experienced critical natural disasters such as the 2011 Fukushima and 2016 Kumamoto earthquakes in Japan. Social turmoil was triggered by man-made incidents such as the 2013 Boston Marathon, 2015 Paris and 2016 Brussels attacks, and the 2015-2016 Istanbul and Ankara bombings. In this paper, we compare human mobility networks before and after the incidents. Only 5% of all tweets have geotag information. However, we can track individual human mobility using the geotagged tweets because Twitter users who attach geotags tend to do so consistently. We confirm that Japanese population movement is reproduced with high accuracy (R^2 = 0.971) using the lifelog dataset and the geotagged tweets. We also show that land prices can be estimated from the number of tourists in each commercial area, in order to estimate the economic losses resulting from the incidents. Next, we detect human mobility networks before and after the incidents. The networks are displayed by connecting with edges the places (nodes) between which many tourists moved. Just after the incidents the number of tourists dropped drastically near the incident locations and commercial areas, and the land prices there were damaged. Therefore, the edges involving these areas are cut from the networks. The damaged networks recover in several months. Finally, we introduce a tourist diffusion model on the networks with three parameters: incident size (number of dead and injured), distance from the incident location, and time elapsed since the incident. We can numerically simulate the change of human mobility patterns after the incidents.
Takayuki Mizuno, Takaaki Ohnishi and Tsutomu Watanabe
109 Layered social influence promotes multiculturality [abstract]
Abstract: Despite the presence of increasing pressures towards globalisation, multiculturality, i.e. the tendency of individuals to form groups characterised by distinct sets of cultural traits, remains one of the salient aspects of human societies. Based on the two mechanisms of homophily and social influence, the classical model for the dissemination of cultures proposed by Axelrod predicts the existence of a fragmented regime where different cultures can coexist in a social network. However, in such a model the multicultural regime is achievable only when a high number of cultural traits is present, and is very sensitive to the presence of spontaneous mutations of agents' traits. As a consequence, understanding how social fragmentation is able to self-sustain and thrive in many different contexts is still an open problem. In real systems, social influence is inherently organised in layers, meaning that individuals tend to diversify their connections according to the topic on which they interact. In this work we show that the persistence of multiculturality observed in real-world social systems is a natural consequence of the layered organisation of social influence. We find that the critical number of cultural traits that separates the monocultural and the multicultural regimes depends on the redundancy of pairwise connections across layers. Surprisingly, for low values of structural redundancy the system is always in a multicultural state, independently of the number of traits, and is robust to the presence of cultural drift. Moreover, we show that layered social influence allows the coexistence of different levels of consensus on different topics. The insights obtained from simulations on synthetic graphs are confirmed by the analysis of two real-world social networks, where the multicultural regime persists even for a very small number of cultural traits, suggesting that the layered organisation of social interactions might indeed be at the heart of multicultural societies.
Federico Battiston, Vincenzo Nicosia, Vito Latora and Maxi San Miguel
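One simple reading of layered social influence in code: an Axelrod-type model in which each cultural feature is discussed on its own layer, and homophily is evaluated layer by layer. The precise dynamics in the study may differ; topologies and parameters are illustrative:

import random

N, F, q = 50, 4, 5                       # agents, cultural features, traits per feature
# Feature f is discussed on layer f % 2; the two layers have different ring
# neighbourhoods, i.e. low redundancy of connections across layers.
neigh = [lambda i: [(i - 1) % N, (i + 1) % N],
         lambda i: [(i - 2) % N, (i + 2) % N]]

culture = [[random.randrange(q) for _ in range(F)] for _ in range(N)]

for _ in range(200000):
    i = random.randrange(N)
    f = random.randrange(F)              # the topic selects the layer of interaction
    j = random.choice(neigh[f % 2](i))
    layer_feats = [g for g in range(F) if g % 2 == f % 2]
    overlap = sum(culture[i][g] == culture[j][g] for g in layer_feats) / len(layer_feats)
    if culture[i][f] != culture[j][f] and random.random() < overlap:
        culture[i][f] = culture[j][f]    # social influence after homophilous contact

print(len({tuple(c) for c in culture}), "distinct cultures remain")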
358 Emergence and characterization of mult-layer communities in social collaboration networks [abstract]
Abstract: Communities are a universal property of complex networks, as they are found in a large variety of systems ranging from brain networks to collaboration networks. Despite the large attention that has been given to community detection algorithms, the mechanisms underlying the emergence of communities have not been widely explored. We show that the triadic closure mechanism, which drives the evolution of social networks and according to which two individuals have a high probability to connect after having been introduced to each other by a mutual acquaintance, naturally leads to the emergence of fat-tailed distributions of node degree, high clustering coefficients and community structure as long as the network link density is not too high (sparse network). Moreover, we show that this mechanism is able to explain the emergence of communities also in so-called multiplex networks, networks where individuals are connected via interactions of different nature (layers), and whose structure cannot be described by a single adjacency matrix. Interestingly, the communities in this family of networks are often observed to span across the different layers of connections, which makes it a challenging task to extract the information encoded in their multi-layer community organization. With particular focus on the real social multiplex network of the APS Scientific Collaboration Multiplex Network, we have tried to characterize, both from a dynamical and from a structural point of view, the organization of multi-layer communities, in an attempt to generalize the concept of community to multilayer systems. References: 1) "Triadic closure as a basic generating mechanism of communities in complex networks", Ginestra Bianconi, Richard K. Darst, Jacopo Iacovacci, and Santo Fortunato, Phys. Rev. E 90 (2014). 2) "Emergence of Multiplex Communities in Collaboration Networks", Battiston F, Iacovacci J, Nicosia V, Bianconi G, Latora V (2016), PLoS ONE 11(1).
Jacopo Iacovacci and Ginestra Bianconi
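A minimal sketch of network growth by triadic closure, the generating mechanism referenced in [1]; parameters are illustrative, and the multiplex version of [2] is not reproduced:

import random
import networkx as nx

def triadic_closure_graph(n, m=2, p=0.9):
    """Each new node makes m links: the first to a random node, the rest
    closing a triangle with probability p (else to a random node)."""
    G = nx.complete_graph(m + 1)
    for v in range(m + 1, n):
        first = random.randrange(v)
        G.add_edge(v, first)
        for _ in range(m - 1):
            friends = [u for u in G.neighbors(first) if u != v]
            if friends and random.random() < p:
                G.add_edge(v, random.choice(friends))   # friend of a friend
            else:
                G.add_edge(v, random.randrange(v))
    return G

G = triadic_closure_graph(2000)
print(nx.average_clustering(G))   # high clustering; communities emerge in sparse graphs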

Socio-Ecology  (S) Session 3

Schedule Top Page

Time and Date: 13:45 - 15:30 on 22nd Sep 2016

Room: R - Raadzaal

Chair: Saskia Werners

404 Virus-like Dynamics for Modeling the Emergence of Defectors in the Spatial Public Goods Game [abstract]
Abstract: In recent years, scientists coming from different communities have investigated several socio-economic and biological phenomena under the lens of Evolutionary Game Theory (EGT). In general, studying the evolution of a population and identifying strategies that trigger cooperative behaviors constitute some of the major aims in this field. In particular, the emergence of cooperation becomes particularly interesting when agent interactions are based on games having a Nash equilibrium of defection, as in the Public Goods Game (PGG). The latter is analyzed by adding a viral spreading process based on the Susceptible-Infected-Susceptible (SIS) model. Notably, we consider a virus, with a spreading rate $\lambda$, whose effect is turning cooperators into defectors. In doing so, we can merge the two dynamics, i.e. the PGG and the epidemic spreading, in order to study the equilibria reached by the population. In particular, we analyze the relation between the spreading rate $\lambda$ (epidemic process) and the synergy factor (PGG). The proposed model aims to represent complex competitive processes, such as the emergence of tumors. Notably, the latter, on a qualitative level, can be interpreted as the emergence of defection among the cells of an organism. Since some forms of tumors seem to be triggered by viruses, such as the papilloma virus (PV), we deem that our investigations might shed some light on these complex phenomena, even if studied by a theoretical approach. Our results can be of interest to researchers interested in interdisciplinary applications of mathematical and physical models in biology, and to those interested in theoretical biology. To conclude, beyond showing the results of our work, we want to highlight the link between EGT and biology.
Marco Alberto Javarone and Nicoletta Schibeci Natoli Scialli
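A sketch of one way to couple the two dynamics: a spatial PGG on a lattice with Fermi imitation, plus an SIS layer whose infected agents are forced to defect. Update order, rates and the imitation rule are illustrative assumptions:

import math
import random
import networkx as nx

G = nx.grid_2d_graph(20, 20, periodic=True)
coop = {v: random.random() < 0.5 for v in G}
infected = {v: random.random() < 0.05 for v in G}
r, lam, mu, K = 3.5, 0.2, 0.1, 0.5        # synergy, infection, recovery, imitation noise

def payoff(v):
    """PGG payoff of v summed over the groups centred on v and on its neighbours."""
    total = 0.0
    for centre in [v] + list(G.neighbors(v)):
        group = [centre] + list(G.neighbors(centre))
        total += r * sum(coop[u] for u in group) / len(group) - (1 if coop[v] else 0)
    return total

for _ in range(20000):
    v = random.choice(list(G))
    if infected[v] and random.random() < mu:               # SIS recovery
        infected[v] = False
    if any(infected[u] for u in G.neighbors(v)) and random.random() < lam:
        infected[v] = True                                  # SIS contagion
    if infected[v]:
        coop[v] = False                                     # the virus enforces defection
    else:                                                   # Fermi imitation of a neighbour
        u = random.choice(list(G.neighbors(v)))
        if random.random() < 1 / (1 + math.exp((payoff(v) - payoff(u)) / K)):
            coop[v] = coop[u]
print(sum(coop.values()) / len(coop), "final cooperation level")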
523 What is the role of human decisions in restoring a clear lake? – Analysing incremental complexity of agent-based models [abstract]
Abstract: Human decisions affect and are affected by ecological systems in multiple ways. Natural resource modeling has commonly focused on the decisions of resource users or strategic planners in one direction only. We argue that the dynamics of social-ecological systems (SES), however, emerge from multiple social-ecological interactions that are the result of decisions from different actors. We exemplify this with the case of lake restoration, i.e. the ecological regime shift from a turbid to a clear water state, which is influenced by decisions from lake managers (governance) and individual households (beneficiaries + polluters). House owners can affect the nutrient inflow, the main driver of the lake's state, through their choices of sewage treatment. The management challenge, in this case, stems from the temporal and spatial decoupling between lake use activities by beneficiaries and the activities of distant actors eventually polluting the lake. Beneficiaries are those that enjoy ecosystem services such as drinking water, fish and recreation provided by the lake. We developed a coupled agent-based and system dynamics model to explore different pathways of managing the activities affecting the lake state back towards the clear state. In doing so, we discriminate between the timing of regulation measures (institutional level), pathways of rule enforcement (individual-institutional link), and the households' initial attitude (individual level) in their effects on lake restoration time lags. Through our incremental approach, we build a faceted understanding of how sensitive lake restoration at the macro level is to individual actor traits in the human-decision model at the micro level. Concluding, we reflect on the importance of the empirical as well as theoretical basis for human-decision modeling to increase its relevance for model-based learning.
Romina Martin and Maja Schlüter
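The ecological side of such models is often represented by the classic shallow-lake phosphorus dynamics, whose fold bifurcation produces the turbid-to-clear regime shift; the parameters and inflow schedule below are illustrative and not taken from the study's model:

def lake_P(P, inflow, s=1.3, r=1.0, m=1.0, q=8, dt=0.1):
    """Shallow-lake phosphorus dynamics with recycling-driven hysteresis:
    dP/dt = inflow - s*P + r * P**q / (m**q + P**q)   (illustrative parameters)."""
    return P + dt * (inflow - s * P + r * P ** q / (m ** q + P ** q))

P, trace = 1.3, []
for t in range(4000):
    inflow = 0.8 if t < 2000 else 0.2   # households adopt sewage treatment halfway
    P = lake_P(P, inflow)
    trace.append(P)
print(round(trace[1999], 2), round(trace[-1], 2))  # turbid (~1.3) then clear (~0.16)

Because of the hysteresis, the lake only clears once the nutrient inflow falls below a threshold, which is why the timing and enforcement of household-level decisions matter for restoration time lags.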
200 Reinforcement Learning in Social-Ecological System Models [abstract]
Abstract: Recognizing the Earth System as a coupled complex social-ecological co-evolutionary system is important to enrich the discourse on global sustainability. However, it is an open question how to meaningfully formalize social dynamics in the context of mathematical social-ecological systems modeling. Existing models of social-ecological interactions often use a system dynamics approach of aggregated quantities, thereby not being able to account for complex social network effects, social stratification and inequalities - all presumably central issues for global sustainability; other types of models incorporate these ideas, but focus rather on regional, case-specific social-ecological systems and tend not to use a "first-principles" approach. In this work we combine the concept of reinforcement learning (RL) with a co-evolutionary social-ecological systems perspective by providing the social agents with RL decision methods, making them capable of dealing with complex environments. Analytical calculations and computer simulations were performed to explore this scheme. It offers a promising view of a "first-principles" method for agent behavior capable of dealing with unknown, possibly nonlinear environments, revealing potentially counterintuitive traps and boundaries hindering social and ecological sustainability.
Wolfram Barfuss, Jonathan F. Donges and Jürgen Kurths
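A minimal sketch of the RL ingredient: a single Q-learning agent harvesting a two-state renewable resource. The environment is a made-up caricature of a social-ecological feedback, not the model of the study:

import random

states, actions = ["low", "high"], ["harvest", "wait"]
Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, eps = 0.1, 0.95, 0.1

def env_step(s, a):
    if a == "harvest":
        return ("low", 1.0) if s == "high" else ("low", -1.0)    # collapse penalty
    return ("high", 0.0) if random.random() < 0.5 else (s, 0.0)  # slow regrowth

s = "high"
for _ in range(50000):
    a = random.choice(actions) if random.random() < eps else max(actions, key=lambda b: Q[(s, b)])
    s2, reward = env_step(s, a)
    Q[(s, a)] += alpha * (reward + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
    s = s2
print(Q)   # the learned policy harvests when high and waits when low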
165 Concepts behind the climate strategies. How C40 and its members define adaptation and mitigation? [abstract]
Abstract: Networks of cities have become a feature of environmental governance, in particular in relation to dealing with climate change. Previous research has shown that initiatives such as C40 have created learning opportunities globally. Quantitative analyses have shown that connections have been formed and cities are learning from each other. However, less is known about what kinds of information are being shared through these networks. The concepts of adaptation and mitigation fundamentally advocate change. How they are conceptualised affects the way climate change is addressed in practice. Previous research has shown that adaptation can be conceptualised as adjusting to the changing climate conditions (adjustment-based adaptation), as transforming the structures of society causing vulnerability (transformational adaptation), or as a combination of the two (reformist adaptation), and a similar classification of the degree of change can also be found for mitigation. In this paper, our aim is to find out the degree of change as stated in the adaptation and mitigation strategies that the C40 network and its members advocate. We approach the governance of urban adaptation as a complex system and ask how these concepts are defined in the documents produced by the C40 network and in the strategies of its member cities. We analyse documents produced by the C40 network and its member cities' climate strategies with a computer-assisted method to get a general overview of how far the documents support change. The result is controlled and deepened by close reading of a representative sample of documents. Our findings reveal the concepts behind the climate strategies of C40 and its member cities, which seek to be world leaders in addressing climate change. This gives context to the best practices promoted by the C40 network and its member cities and makes it possible to analyse them more profoundly.
Milja Heikkinen and Sirkku Juhola
71 Opinion dynamics under out-group discrimination [abstract]
Abstract: On many economic, political, social, and religious agendas, disagreement among individuals is pervasive. For example, the following are or have been highly debated: whether abortion, gay marriage, or the death penalty should be legalized; the scientific standing of evolution; whether taxes/social subsidies/unemployment benefits/(lower bounds on) wages should be increased or decreased; the effectiveness of alternative (or `standard') medicine such as homeopathy. In the field of so-called "opinion dynamics", long-run disagreement among individuals is sometimes considered a challenge, since a large class of models such as the famous model of DeGroot learning (DeGroot, 1974; Golub and Jackson, 2010) predict long-run opinion consensus as long as individuals form a connected group. However, several models have recently been suggested which include a mode of `anti-conformity' or `opposition' and predict disagreement even among connected interacting agents. Here, we present another such model of negative relationships among interacting agents, which extends the classical DeGroot model of opinion dynamics. Our contributions are that we provide precise game-theoretic motivations of individuals' behavior as well as mathematically rigorous results on long-run disagreement in connected societies. Our game-theoretic motivation is that agents wish to coordinate with their friends (their 'in-group') and anti-coordinate with their enemies (their 'out-group'). Such behavior is well documented in social psychology, both within the laboratory (see, e.g., Tajfel 1978; Fehrler and Kosfeld, 2013) and outside. Our mathematical results include very general conditions for persistent disagreement among connected agents as well as an exhaustive graph-theoretical classification of long-run opinions in certain special cases. We find that persistent disagreement 'easily' obtains in the presence of negative relationships. As a consequence, crowd wisdom, the condition in which all individuals learn the true state of nature or come close to it, is likely to fail.
Steffen Eger
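A minimal sketch of DeGroot-style updating with negative out-group weights, one way to realize the coordination/anti-coordination motivation; the weight matrix is illustrative. With the matrix below, connected agents converge to persistently disagreeing opinions:

import numpy as np

# Signed influence matrix: positive weights to in-group friends, negative
# weights to out-group members; rows normalized by total attention |w|.
W = np.array([[ 0.6,  0.3, -0.1,  0.0],
              [ 0.3,  0.6,  0.0, -0.1],
              [-0.1,  0.0,  0.6,  0.3],
              [ 0.0, -0.1,  0.3,  0.6]])
W = W / np.abs(W).sum(axis=1, keepdims=True)

x = np.array([0.2, 0.4, -0.6, -0.8])     # initial opinions in [-1, 1]
for _ in range(200):
    x = W @ x                            # coordinate with friends, anti-coordinate with enemies
print(np.round(x, 3))                    # about (0.5, 0.5, -0.5, -0.5): persistent disagreement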

Cognition  (C) Session 4

Schedule Top Page

Time and Date: 13:45 - 15:30 on 22nd Sep 2016

Room: L - Grote Zaal

Chair: Laura Alessandretti

381 Predictive modeling of language acquisition using network growth models [abstract]
Abstract: Network models of language have provided a way of linking cognitive processes to the structure and connectivity of language. This has been of particular importance in language acquisition as a means to explore the relational role of language and its influence on cognitive processes. Steyvers and Tenenbaum proposed that language is learned by a process similar to preferential attachment, with highly connected nodes being learned earliest, accounting for high-level lexical network structure and also capturing the empirical age of acquisition reports. Hills and colleagues suggested, instead, that language learning is driven by contextual diversity, or the degree of unknown words in the adult language graph, accounting for and predicting normative acquisition trends. Here we extend and test these previous ideas on acquisition trajectories of individual children as opposed to normative acquisition. We further explore the types of relationships between words that are meaningful to toddlers when learning new words. We construct not only theoretical models of acquisition but models that are capable of predicting what words a specific child is likely to learn next. We find that the best fitting network model varies systematically across children and across the course of development. We also find that the choice of network representation influences our ability to model the acquisition trends of young children. This work suggests that the use of network models to understand language acquisition trends of toddlers may not only provide predictive models of what words a child is likely to learn next but may provide insight into the cognitive processes of acquisition itself.
Nicole Beckage and Eliana Colunga
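A sketch of two candidate growth rules from this literature, scored on a toy adult semantic graph; the word list is made up and the scoring functions are simplified readings of the models named in the abstract:

import networkx as nx

# Toy adult semantic graph; in practice it is built from association norms.
adult = nx.Graph([("dog", "cat"), ("dog", "ball"), ("cat", "milk"),
                  ("ball", "play"), ("milk", "cup"), ("play", "fun")])
known = {"dog"}                          # words the child already produces

def lure_of_associates(w):
    """Favour words with many connections to already-known words."""
    return sum(u in known for u in adult.neighbors(w))

def preferential_acquisition(w):
    """Favour words that are well connected in the full adult graph."""
    return adult.degree(w)

candidates = [w for w in adult if w not in known]
print(max(candidates, key=lure_of_associates),
      max(candidates, key=preferential_acquisition))

Fitting which rule best predicts each individual child's next words, month by month, is the kind of comparison the abstract describes.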
148 Emergence of interdisciplinary science: A three-year case study [abstract]
Abstract: Interdisciplinary scientific teams are increasingly funded, but how is knowledge co-produced, in practice, across various disciplines? This paper presents results from a three-year study using a complex systems approach to track the emergence of interdisciplinary science in a heterogeneous researcher network – a “Coupled Human and Natural Systems” research team. I conceptualize the research team as a heterogeneous social network, and analyze the process of knowledge co-production as emergent from the properties and dynamics of the network. The case study is based on epistemological data collected weekly, over three years, from team members (anthropologists, geographers, ecologists, hydrologists, climate scientists and computer scientists) after joint meetings, combined with individual interviews and thematic analyses of research outputs (e.g. articles). I argue that a complex systems framework is useful to support the assessment of what fosters or blocks the emergence of joint knowledge for successful collaborative science.
Sarah Laborde
454 Agent-based modeling for popularity dynamics observed in cyber space communications [abstract]
Abstract: By using a huge Japanese blog database with author IDs, we can observe not only the number of entries per day for any word, but also the personal dynamics of blog entries. In this presentation, we report statistical properties and modeling for four major categories of words. The first is “ordinary words”, used in our daily life, for example “soon”. The number of entries of “soon” shows a steady fluctuation. The second is “news words”, for example “Michael Jackson”. We can observe a clear jump and power-law decay in the number of entries of “Michael Jackson” after the news that Michael Jackson had died. The third is “trending words”, for example “Twitter”. The number of entries of “Twitter” was increasing exponentially from Oct. 2008 to Jun. The fourth is “event words”, which show growth and relaxation characterized by a power function around a peak day, such as national holidays. We reproduced these dynamics by an agent-based model based on the SIR model, well known in mathematical epidemiology, in order to clarify the origin of these dynamics from the viewpoint of blogger interactions. In order to reproduce not only exponential but also power-law growth and relaxation behaviors observed in trending words, we extended the base model by adding several effects, for example an external shock effect, a deadline effect and an amorphous effect. The amorphous effect, inspired by solid-state physics studies, gives bloggers individual characteristics, in other words an individual duration of interest for the specific word. As a result of adding these essential effects, our model reasonably reproduces the dynamics observed in our data.
Kenta Yamada, Yukie Sano, Hideki Takayasu and Misako Takayasu
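A sketch of the base SIR-type agent dynamics for entries on a single word; the rates are illustrative, and the external shock, deadline and amorphous extensions described in the abstract are not included:

import random

N, beta, gamma = 10000, 0.3, 0.1          # bloggers, infection and recovery rates
state = ["S"] * N
for i in random.sample(range(N), 10):
    state[i] = "I"                        # a few bloggers start writing about the word

daily_entries = []
while state.count("I"):
    active = state.count("I")
    for i in range(N):
        if state[i] == "S" and random.random() < beta * active / N:
            state[i] = "I"                # starts posting entries on the word
        elif state[i] == "I" and random.random() < gamma:
            state[i] = "R"                # loses interest and stops posting
    daily_entries.append(state.count("I"))
print(max(daily_entries), "peak entries;", len(daily_entries), "days of activity")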
62 Crisis in Complex Social Systems: A Social Theory View Illustrated with the Chilean Case [abstract]
Abstract: This presentation argues that crises are a distinctive feature of complex social systems. A quest for connectivity of communication leads systems to increase their own robustness by constantly producing further connections. When some of these connections have been successful in recent operations, the social system tends to reproduce the emergent pattern, thereby engaging in a non-reflexive, repetitive escalation of more of the same communication. This compulsive growth of systemic communication in crisis processes, or logic of excess, resembles the dynamics of self-organized criticality. Our theoretical model contends that crises in complex social systems are not a singular event, but result from a process that unfolds in three stages: incubation, in which the system incrementally develops a recursive dynamics of non-reflexive repetitions that weakens both its adaptive capabilities and connections; contagion, whereby the effects of that dynamics expand to different systems or clusters in the network; and restructuring, namely, a reorganization of both the system's own conditions of functioning and its interrelationships with the environment. Next, we argue that percolation and sandpile models are suitable techniques for both modeling this process and analytically distinguishing between the three phases of social crises. We illustrate our propositions with a view on the crisis of the educational system in Chile, a country in which over the last forty years neoliberal reforms led to an incremental monetization of public education. Accordingly, we first construct the conceptual foundations of our approach. Second, we present three core assumptions related to the generative mechanism of social crises, their temporal transitions (incubation, contagion, restructuring), and the suitable modeling techniques to represent them. Third, we illustrate the conceptual approach with a percolation model of the crisis in the Chilean education system.
Aldo Mascareño, Eric Goles and Gonzalo A. Ruz
264 Multiplex lexicon networks reveal cognitive patterns in word acquisition [abstract]
Abstract: According to psycholinguistics, the human mind organises words in a mental lexicon (ML), i.e. a dictionary where words are stored and retrieved depending on their correlations. Until recently, network theory has been used for investigating one type of interaction at a time, without providing cross-correlational information. Our novel approach overcomes this limitation by modelling the mental lexicon of English speakers as a multiplex lexicon network (MLN), where nodes/words are connected according to: (i) word associations (“A” makes one think of “B”); (ii) feature norms (“A” shares features with “B”); (iii) co-occurrences (“A” and “B” are frequently adjacent); (iv) synonyms ("A" means also "B"); and (v) phonological similarities (“A” differs from “B” by the addition, deletion or substitution of one phoneme). We build two MLNs: one for children up to 32 months (with 529 words) and one for adults (with almost 5000 words). Both MLNs are irreducible, i.e. projecting all the edges onto one aggregate layer would imply losing information about the word patterns in the system. In children, we show that the multiplex topology is more powerful in predicting the order in which words are acquired than individual layer statistics. Also, multiplexity allows for a quantification of the most important layers (semantic vs. phonological) that dynamically determine word acquisition. For adults, we propose a novel toy model of lexicon growth driven by the phonological level, in which real words are inserted along different orderings and can also be rejected for memorization. Our model shows that when similar-sounding words are preferentially learned, the lexicon grows according to the multiplex structure, while when novel learned words sound different from the known ones, both semantic layers and frequency become predominant instead. Our results indicate that the MLN topology is a meaningful proxy of the cognitive processes shaping the mental lexicon.
Massimo Stella, Nicole Beckage and Markus Brede
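A minimal sketch of building the phonological layer: linking words that differ by one symbol (one phoneme, in the real data); the transcriptions below are toy stand-ins:

import networkx as nx

def edit1(a, b):
    """True if b is obtained from a by one insertion, deletion or substitution."""
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    short, long_ = sorted((a, b), key=len)
    return any(short == long_[:i] + long_[i + 1:] for i in range(len(long_)))

words = ["kæt", "bæt", "kɑt", "kæts", "dɔg"]   # toy IPA-like transcriptions
phono = nx.Graph()
phono.add_nodes_from(words)
phono.add_edges_from((a, b) for i, a in enumerate(words)
                     for b in words[i + 1:] if edit1(a, b))
print(list(phono.edges))   # one layer of the multiplex lexicon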

Foundations  (F) Session 6

Schedule Top Page

Time and Date: 13:45 - 15:30 on 22nd Sep 2016

Room: M - Effectenbeurszaal

Chair: Gaoxi Xiao

46 Thermal diffusion on cospectral networks [abstract]
Abstract: The dynamics of random walks and thermal diffusion on networks are interlinked with the spectral properties of the graph representing the network. For example, the eigenvalues of the normalized Laplacian L of the graph characterize the time scales for the convergence to the equilibrium probability distribution over the nodes of the graph. The normalized Laplacian also describes thermal heat conduction on networks via the Fourier heat transport equation. Heat conduction on networks has been investigated experimentally using a two-dimensional network cut from strongly thermally conducting pyrolytic graphite films and observing the diffusion of heat with a thermal camera. The effect of network topology and of the length and width of the connections in the network is analyzed using the general concepts of the spectral theory of graphs. Cospectral graphs have identical eigenspectra for the matrices associated with the graph, such as the adjacency matrix A, the Laplacian matrix L = D - A, where D is the degree matrix of the vertices, and the normalized Laplacian matrix. The dynamics of a random walk (and hence also the heat conduction) on these graphs can be found by iterating the probability transfer matrix P = D^(-1) A and is strongly related to the spectrum of the normalized Laplacian L = I - D^(-1/2) A D^(-1/2). In the past decades methods have been devised to systematically generate sets of graphs with cospectral properties. For example, L-cospectral graphs can be generated systematically for inflated star graphs by modifying the arm composition (swapping). Such graphs have identical eigenvalues but different topology. Furthermore, some of these graphs have multiple zero eigenvalues, making the final distribution dependent on the initial starting conditions. The effect of cospectral behavior and multiple-zero-eigenvalue degeneracy on the dynamics of thermal conduction is demonstrated using the experiments and by numerical and analytical modeling.
Rudolf Sprik
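A short sketch computing the normalized Laplacian spectrum of a toy graph and integrating heat diffusion on it; the graph is a stand-in for the milled graphite networks of the experiment:

import numpy as np
import networkx as nx

G = nx.path_graph(20)                        # stand-in for the graphite network
A = nx.to_numpy_array(G)
d = A.sum(axis=1)
L = np.eye(len(d)) - A / np.sqrt(np.outer(d, d))   # L = I - D^(-1/2) A D^(-1/2)

evals = np.linalg.eigvalsh(L)
print(evals[:3])                             # small eigenvalues: slow relaxation modes

u = np.zeros(len(d))
u[0] = 1.0                                   # heat pulse at one node
dt = 0.1
for _ in range(500):                         # Euler steps of u' = -L u
    u = u - dt * (L @ u)
print(np.round(u, 3))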
420 The mechanism behind the power law distributions in a complex system based on a large game [abstract]
Abstract: Evolutionary game theory has been studied intensely as a mathematical framework for understanding Darwinian evolution. Despite the fact that evolutionary dynamics, whether in biological ecosystems or, say, in human environments like the economy, takes place in the form of co-evolving webs of interactions, game theory is usually studied in the limit of very few strategies. Here, we study replicator dynamics with extinctions and mutations in a large game with many strategies. We perform numerical simulations of a game where each player can play one of a large number of strategies. The game is defined by a payoff matrix whose elements are random numbers which can be positive or negative, with the majority being zero. At the beginning of the simulation we randomly choose a small number of strategies to be played. Reproduction of players is done according to the replicator equation, in which a small amount of mutation is introduced. In this way new strategies can appear in the system. Additionally, we introduce an extinction threshold; strategies with a frequency less than this threshold are removed from the system. The resulting behaviour shows complex intermittent time dependence similar to that of the Tangled Nature model. The dynamics has two types of phases: a quasi-stable phase in which the number of active strategies is more or less constant, and hectic phases during which the creation and extinction of strategies happen at a high rate. We see that the complex behaviour of the Tangled Nature model, which is in good agreement with observations on ecosystems, also arises from the game-theoretic basis of the replicator dynamics. Finally, we investigate various lifetime distributions and find fat-tailed distributions similar to those often observed for real systems, and show that these distributions can be explained using superstatistics.
Jelena Grujic and Henrik Jeldtoft Jensen
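
A compact sketch of the kind of simulation the abstract above describes (all parameters here are invented for illustration, not the authors' values): replicator dynamics on a mostly-zero random payoff matrix, with uniform mutation and an extinction threshold. Plateaus in the number of active strategies correspond to quasi-stable phases; jumps mark hectic phases.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 400                                    # size of the strategy pool (assumed)
    payoff = rng.normal(size=(N, N))
    payoff[rng.random((N, N)) < 0.8] = 0.0     # majority of payoff entries are zero

    x = np.zeros(N)
    x[rng.choice(N, 4, replace=False)] = 0.25  # start from a few strategies

    mu, theta, dt = 1e-5, 1e-5, 0.1            # mutation rate, extinction threshold, step
    n_active = []
    for t in range(50000):
        f = payoff @ x                         # strategy fitnesses
        x += dt * x * (f - x @ f)              # replicator step
        x = np.clip(x, 0.0, None)
        x = (1 - mu) * x + mu * x.sum() / N    # uniform mutation to any strategy
        x[x < theta] = 0.0                     # extinction below threshold
        x /= x.sum()                           # renormalize frequencies
        if t % 500 == 0:
            n_active.append(int((x > 0).sum()))

    print(n_active)                            # plateaus vs. jumps in diversity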
349 The properties of modularity density as a quality function for community structure [abstract]
Abstract: Many real-world complex systems exhibit a community structure, where groups of nodes share a high internal connection density or strength but have fewer or weaker connections with nodes belonging to different groups. Identifying these groups of nodes is a key challenge in network theory and much attention has been devoted to this specific task. Amongst the methods proposed so far, those based on the modularity function are the most popular and widely used. Modularity is a quality function that assigns a score to each partition of the network, with higher scores corresponding to better partitions. However, despite its wide use, modularity has also been shown to suffer from severe limitations, such as a resolution limit and the detection of spurious communities in random graphs. We present the properties of a new objective function, modularity density, which was introduced in recent years by Szymanski and collaborators to address some of the limitations mentioned above. Modularity density presents interesting properties as a quality function, and we will derive the analytical dependence of modularity density on connection density in random graphs of any given size. We will also show that modularity density allows for a comparison of the community structure between networks with different numbers of nodes. Finally, we will present a detailed community detection algorithm based on modularity density, discussing its computational complexity. This presentation will allow me to further disseminate my work to an audience with broad interests. It will help me establish a personal network of connections in the complex systems scientific community. This may enable collaborations, which would be of great benefit in my career as a young researcher. This talk is aimed at researchers with an interest in network science and methods to analyse networked data.
Federico Botta and Charo I. Del Genio
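
For readers who want to experiment with the quality function discussed above, the sketch below implements the modularity-density score as we read the definition of Chen, Kuzmin and Szymanski (internal-density-weighted modularity with a split penalty); this reading should be verified against the original paper before any serious use.

    import networkx as nx

    def modularity_density(G, communities):
        m = G.number_of_edges()
        comms = [set(c) for c in communities]
        q = 0.0
        for c in comms:
            nc = len(c)
            e_in = G.subgraph(c).number_of_edges()
            e_out = sum(1 for u, v in G.edges(c) if (u in c) != (v in c))
            d_c = 2 * e_in / (nc * (nc - 1)) if nc > 1 else 0.0
            # density-weighted modularity term for community c
            q += (e_in / m) * d_c - ((2 * e_in + e_out) * d_c / (2 * m)) ** 2
            for c2 in comms:                   # split penalty over other communities
                if c2 is c:
                    continue
                e_cc = sum(1 for u in c for v in G[u] if v in c2)
                q -= (e_cc / (2 * m)) * (e_cc / (nc * len(c2)))
        return q

    G = nx.karate_club_graph()
    part = [{n for n in G if G.nodes[n]["club"] == "Mr. Hi"},
            {n for n in G if G.nodes[n]["club"] != "Mr. Hi"}]
    print(modularity_density(G, part))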
405 Formalizing Information Complexity for Dynamical Networks [abstract]
Abstract: How much information does a complex network integrate as a whole, over and above the sum of its parts? Can the complexity of such networks be quantified in an information-theoretic way and be meaningfully coupled to their function? Recently, measures of dynamical complexity such as integrated information have been proposed. However, problems related to normalization, and the Bell number of partitions associated with these measures, make these approaches computationally infeasible for large-scale networks. Our goal in this work is to address this problem. Our formulation of network integrated information is based on the Kullback-Leibler divergence between the multivariate distribution on the set of network states and the corresponding factorized distribution over its parts. We find that implementing the maximum information partition optimizes computations. These methods are well-suited for large networks with linear stochastic dynamics. As an application of our formalism, we compute the information complexity of the human brain's connectome network. Compared to a randomly rewired network, we find that the specific topology of the brain generates greater information complexity.
Xerxes Arsiwalla and Paul Verschure
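
The Gaussian case mentioned above admits a closed form: for a stationary multivariate Gaussian, the KL divergence between the joint distribution and the product of its single-node marginals equals 0.5*(sum_i log Sigma_ii - log det Sigma). The sketch below (a toy coupling matrix, not the connectome; and the atomic partition rather than the paper's maximum information partition) computes this whole-versus-parts divergence for linear stochastic dynamics x_{t+1} = A x_t + noise:

    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov

    rng = np.random.default_rng(1)
    n = 8
    A = rng.normal(scale=0.25 / np.sqrt(n), size=(n, n))   # stable toy coupling
    Q = np.eye(n)                                          # noise covariance

    Sigma = solve_discrete_lyapunov(A, Q)                  # stationary covariance
    logdet = np.linalg.slogdet(Sigma)[1]
    phi = 0.5 * (np.sum(np.log(np.diag(Sigma))) - logdet)  # whole-vs-parts KL
    print("information integrated across the atomic partition (nats):", phi)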
330 Control of complex networks: from controllability to optimal control [abstract]
Abstract: One of the main objectives in studying complex systems and complex networks is to understand how to control them efficiently, or alternatively, how to make them difficult to control (and therefore robust under certain circumstances). While extensive studies have been carried out on the controllability of complex networks, which typically aim at finding the minimum set of driver nodes to be connected to external controllers so that the system can be driven to any state, little has been done on an arguably even more important problem: the optimal control of complex networks, which aims at driving a system to a certain state with the minimum energy cost. A complex system with perfect controllability may not actually be controllable in real life if the cost of driving it to the target state is too high. As a start, we consider the "simplest" optimal control problem: finding the solution for driving a static complex network to a target state at the minimum energy cost with a given number of controllers. We propose a projected gradient method, which works efficiently on both synthetic and real-life networks. The study is then extended to the case where each controller can only be connected to a single network node, for the lowest connection complexity. Interesting insights reveal that such connections basically avoid high-degree nodes of the network, which resonates with recent observations on the controllability of complex networks. These results open the technical path to enabling minimum-cost control of complex networks, and contribute new insights into locating the key nodes from a minimum-cost control perspective. We shall also discuss possible approaches for enhancing the scalability of the proposed method and provide some preliminary testing results for some of these approaches.
Guoqi Li and Gaoxi Xiao
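
The energy cost referred to above has a standard expression from linear control theory: steering dx/dt = Ax + Bu from the origin to x_f in time T costs at least x_f^T W(T)^(-1) x_f, where W(T) is the finite-horizon controllability Gramian. The sketch below (a random toy system; it is not the authors' projected-gradient method, only the cost it optimizes) evaluates this energy for a single controller attached to different nodes:

    import numpy as np
    from scipy.linalg import expm

    def control_energy(A, B, xf, T=2.0, steps=400):
        n = A.shape[0]
        dt = T / steps
        W = np.zeros((n, n))
        for s in range(steps):                     # Riemann sum for the Gramian
            E = expm(A * (s * dt)) @ B
            W += E @ E.T * dt
        return float(xf @ np.linalg.pinv(W) @ xf)  # pinv: W can be ill-conditioned

    rng = np.random.default_rng(2)
    n = 6
    A = rng.normal(size=(n, n)) / np.sqrt(n)       # toy network dynamics
    xf = np.ones(n)

    # Energy of reaching xf with one controller wired to each node in turn;
    # the cheapest attachment point need not be the most connected node.
    for node in range(n):
        B = np.zeros((n, 1)); B[node, 0] = 1.0
        print("driver node", node, "energy:", control_energy(A, B, xf))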

Foundations  (F) Session 7

Schedule Top Page

Time and Date: 13:45 - 15:30 on 22nd Sep 2016

Room: N - Graanbeurszaal

Chair: Yamir Moreno

298 Reinventing the Triangles: Assessing Detectability of Communities [abstract]
Abstract: Statistical significance of network clustering has been an unresolved problem since it was observed that community detection algorithms produce false positives even in random graphs. After a phase transition between undetectable and detectable cluster structures was discovered [1,2], the spectra of adjacency matrices and detectability limits were shown to be connected, and they were calculated for a wide range of networks with arbitrary degree distributions and community structure [3]. In practice, given a real-world network, neither the hypothetical network model nor its full eigenspectrum is known, and whether a given network has any communities within the detectability regime cannot be easily established. Based on the global clustering coefficient (GCC), we construct a criterion that tells whether an undirected, unweighted network contains detectable community structure, contains none, or lies in a transient regime. To that end we use approximations of the GCC and of the spectra of adjacency matrices, together with the empirical observations showing that: a) for graphs with community structure, the GCC reaches the random-graph value before the connectivity is fully random, and b) this saturation of the GCC coincides with the theoretically predicted limit of detectability. We compare the criterion against existing methods of assessing the significance of network partitioning. We compute it on various benchmark graphs, as well as on real-world networks from the SNAP database, and compare it with the results of state-of-the-art community detection methods. The method is simple and faster than methods involving bootstrapping; it is also robust on sparse networks. Analogous criteria are plausible for directed graphs as well. [1] J. Reichardt and M. Leone, Phys. Rev. Lett. 101, 078701 (2008). [2] A. Decelle, F. Krzakala, C. Moore, and L. Zdeborova, Phys. Rev. Lett. 107, 065701 (2011). [3] X. Zhang, R. R. Nadakuditi, and M. E. J. Newman, Phys. Rev. E 89, 042816 (2014).
Jeremi Ochab and Zdzisław Burda
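
The empirical observation (a) above is easy to reproduce numerically. The sketch below (toy sizes and mixing values, not the authors' benchmarks) plants two blocks in a stochastic block model and watches the global clustering coefficient decay to the Erdos-Renyi value ~ <k>/N as the fraction of between-block links grows:

    import networkx as nx

    n, k_mean = 2000, 16
    for mix in [0.05, 0.2, 0.35, 0.5]:           # fraction of links between blocks
        p_in = 2 * k_mean * (1 - mix) / n
        p_out = 2 * k_mean * mix / n
        G = nx.stochastic_block_model([n // 2, n // 2],
                                      [[p_in, p_out], [p_out, p_in]], seed=42)
        gcc = nx.transitivity(G)                 # global clustering coefficient
        print("mix =", mix, "GCC =", round(gcc, 5),
              "ER value ~", round(k_mean / n, 5))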
334 Optimising peer review with evolutionary computation [abstract]
Abstract: Peer review is the most prevalent mechanism for validating the quality of published research. While scientists are not blind to many of the shortcomings of this system (e.g. bias), studies show that peer review is perceived as a necessary tool that does improve the quality of published papers. Surprisingly, despite the importance of this process and its place in the scientific world, peer review has not been extensively studied until recently. The goal of our work was twofold. Firstly, we wanted to deepen our understanding of the peer review process as perceived from the viewpoint of a journal editor. Secondly, through extensive numerical experiments based on real-world data, we wanted to improve the effectiveness of the process. In order to understand some of the dynamical properties of the peer review process, we analysed a dataset which contained information about 58 papers submitted to the Journal of the Serbian Chemical Society. The dataset consisted of 311 “review threads” - full descriptions of the review process initiated by sending an invitation to a potential reviewer. We were able to separate the process into distinct phases, recreate transition probabilities between phases as well as their durations. This allowed us to uncover many interesting properties of the process and to create a framework that can be used as a basis for simulations. We were particularly interested in how editorial workflows – sets of guidelines and rules that journal editors use to acquire reviews for submitted manuscripts – can be improved. To this end, we employed Cartesian Genetic Programming to evolve and optimise editorial strategies. We showed that these evolved strategies are better than typical strategies used by some editors and lead to shorter review times.
Maciej J. Mrowinski, Agata Fronczak, Piotr Fronczak, Olgica Nedic and Marcel Ausloos
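
The phase decomposition described above lends itself to a simple simulation framework. The sketch below models a review thread as a Markov chain over phases with exponential dwell times; every number in it is invented for illustration and does not come from the JSCS dataset. An editorial strategy (how many reviewers to invite, when to remind or replace them) can then be evaluated, or evolved, against such a simulator.

    import numpy as np

    phases = ["invited", "agreed", "declined", "report", "timeout"]
    # P[i][j]: probability of moving from phase i to phase j (assumed values).
    P = np.array([[0.0, 0.5, 0.3, 0.0, 0.2],
                  [0.0, 0.0, 0.0, 0.8, 0.2],
                  [0.0, 0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 0.0, 1.0]])
    duration = {"invited": 5.0, "agreed": 20.0}   # mean days per phase (assumed)

    rng = np.random.default_rng(3)

    def simulate_thread():
        state, days = 0, 0.0
        while phases[state] not in ("declined", "report", "timeout"):
            days += rng.exponential(duration[phases[state]])
            state = rng.choice(len(phases), p=P[state])
        return phases[state], days

    results = [simulate_thread() for _ in range(10000)]
    got_report = [d for s, d in results if s == "report"]
    print("fraction of threads yielding a report:", len(got_report) / len(results))
    print("mean days to report:", np.mean(got_report))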
331 Network structure, metadata and the prediction of missing nodes [abstract]
Abstract: The empirical validation of community detection methods is often based on available annotations on the nodes that serve as putative indicators of the large-scale network structure [1]. Most often, the suitability of the annotations as topological descriptors is itself not assessed, and without this it is not possible to ultimately distinguish between actual shortcomings of the community detection algorithms on one hand, and the incompleteness, inaccuracy or structured nature of the data annotations themselves on the other. In this work we present a principled method to assess both aspects simultaneously. We construct a joint generative model for the data and metadata, and a non-parametric Bayesian framework [2] to infer its parameters from annotated datasets. We assess the quality of the metadata not according to its direct alignment with the network communities, but rather by its capacity to predict the placement of edges in the network. We also show how this feature can be used to predict the connections to missing nodes when only the metadata is available. By investigating a wide range of datasets, we show that while there are seldom exact agreements between metadata tokens and the inferred data groups, the metadata is often informative of the network structure nevertheless, and can improve the prediction of missing nodes. This shows that the method uncovers meaningful patterns in both the data and metadata, without requiring or expecting a perfect agreement between the two. [1] J. Yang and J. Leskovec, "Structure and overlaps of ground-truth communities in networks", ACM Trans. Intell. Syst. Technol., vol. 5, pp. 26:1–26:35, Apr. 2014. [2] T. P. Peixoto, "Parsimonious Module Inference in Large Networks", Phys. Rev. Lett., vol. 110, p. 148701, Apr. 2013.
Darko Hric, Tiago Peixoto and Santo Fortunato
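
A much simpler stand-in for the core idea above (scoring metadata by how well it predicts edge placement, rather than by alignment with communities) can be put together in a few lines; this is not the authors' nonparametric Bayesian model, just a block-density baseline on a toy network with its built-in "club" metadata:

    import numpy as np
    import networkx as nx

    G = nx.karate_club_graph()
    meta = {n: G.nodes[n]["club"] for n in G}          # metadata tokens
    gid = {g: i for i, g in enumerate(sorted(set(meta.values())))}

    nodes = list(G)
    rng = np.random.default_rng(4)
    pairs = [(u, v) for i, u in enumerate(nodes) for v in nodes[i + 1:]]
    rng.shuffle(pairs)
    train, test = pairs[: len(pairs) // 2], pairs[len(pairs) // 2:]

    # Edge density per pair of metadata groups, estimated on the train split.
    k = len(gid)
    cnt, tot = np.zeros((k, k)), np.zeros((k, k))
    for u, v in train:
        i, j = sorted((gid[meta[u]], gid[meta[v]]))
        tot[i, j] += 1
        cnt[i, j] += G.has_edge(u, v)
    dens = cnt / np.maximum(tot, 1)

    def score(u, v):
        i, j = sorted((gid[meta[u]], gid[meta[v]]))
        return dens[i, j]

    # AUC: probability that a held-out edge outscores a held-out non-edge.
    pos = [score(u, v) for u, v in test if G.has_edge(u, v)]
    neg = [score(u, v) for u, v in test if not G.has_edge(u, v)]
    print("metadata edge-prediction AUC:", np.mean([[p > q for q in neg] for p in pos]))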
54 Dynamical conditions for emergence imply its logical conditions [abstract]
Abstract: Complex systems have large numbers of components with correspondingly large numbers of possible interactions. Even very large systems, however, can be simple in the sense that they can be understood in terms of additive combinations of the contributions of their parts. Even the largest computer can be fully analysed this way. Furthermore, their behaviour for a given input is fully predictable for any length of time one chooses. It was long assumed that all systems were similarly analysable and predictable; the only problem was to show this for more complex systems. This assumption was justified both by striking successes and on theoretical grounds. However, such systems are special within the range of all possible systems (Rosen 1991), and some systems with few components violate the assumption by being both irreducible and noncomputable. We call such systems complexly organized. The complexity is not in the numbers, but in the behaviour. Such systems can be given a dynamical characterization: for both energy-based and information-based systems, the key is the loss of available energy or information as entropy disposed of outside the system, maintained by an external source of energy or information. This permits what is called self-organization, though not all complexly organized systems are self-organized. The unpredictability and irreducibility of complexly organized systems fit well with ideas of emergence developed in the 19th century to explain living systems, and formalized more carefully by C.D. Broad in the 20th century. I will argue that the dynamics of complex systems implies the logical conditions that these philosophers laid down. The advantage of having a dynamical account is that it gives us a way to interact with and test systems for their properties. The logical conditions, on the other hand, were mostly attributed qualitatively (and sometimes erroneously), which is distressingly common even today.
John Collier
344 Benard cells as a model for Entropy Production, Entropy Decrease and Action Minimization in Self-Organization [abstract]
Abstract: In their self-organization, complex systems increase the entropy in their surroundings and decrease their internal entropy, but the mechanisms, the reasons and the physical laws leading to these processes are still a matter of debate. In our approach, energy gradients across complex systems lead to changes in the structure of the systems, decreasing their internal entropy to ensure the most efficient energy transport and therefore maximum entropy production in the surroundings. This approach stems from fundamental variational principles in physics, such as the principle of least action, which determine the motion of particles. We compare energy transport through a fluid cell that has random motion of its molecules with a cell that can form convection cells. We examine the signs of the change of entropy and the action needed for the motion inside those systems. The system in which convective motion occurs reduces the time for energy transmission compared to random motion. In more complex systems, those convection cells form a network of transport channels, so as to obey the equations of motion in the given geometry. Such transport networks are an essential feature of complex systems in biology, ecology, the economy and society in general. This approach can help explain some of the features of those transport networks, and how they correlate with the level of organization of systems.
Georgi Georgiev, Atanu Chatterjee, Thanh Vu and Germano Iannacchione
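
A one-line worked example of the entropy bookkeeping above (the temperatures and fluxes are invented toy numbers): for steady heat flow J between a hot and a cold reservoir, the entropy production in the surroundings is J*(1/T_cold - 1/T_hot), so a convecting cell that carries a larger flux at the same temperatures produces entropy faster.

    T_hot, T_cold = 320.0, 300.0          # kelvin (assumed)
    for label, J in [("conduction", 10.0), ("convection", 40.0)]:  # watts (assumed)
        print(label, "entropy production:", J * (1 / T_cold - 1 / T_hot), "W/K")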

Biology  (B) Session 6

Schedule Top Page

Time and Date: 13:45 - 15:30 on 22nd Sep 2016

Room: P - Keurzaal

Chair: Sander Bais

566 Fast evaluation of meningococcal vaccines using dynamic modelling [abstract]
Abstract: Rapidly evaluating novel control measures against infectious diseases can be challenging, especially when the disease has a low incidence and is characterized by a high degree of complexity. This is the case for invasive meningococcal disease (IMD), a rare but severe disease hitting primarily infants, caused by bacteria asymptomatically carried by more than 10% of the general population, mostly adolescents. A novel meningococcal vaccine, Bexsero, has recently been included for the first time in a national immunization programme, in the UK. This represents an unprecedented chance to evaluate such a vaccine. However, traditional statistical studies require large samples of observed disease cases to provide precise estimates, so it may take several years of surveillance to precisely assess the effectiveness of Bexsero. We used a Monte Carlo Maximum Likelihood (MCML) approach to estimate both the direct and indirect effectiveness of meningococcal vaccines. The method is based on stochastic simulations of an age-structured SIS model reproducing meningococcal transmission and vaccination. We calibrated the model to describe two immunization campaigns in the UK: the Bexsero campaign, started last fall, and a previous campaign, started in 1999 and employing a different meningococcal vaccine, whose effectiveness has already been assessed using traditional studies. MCML estimates of vaccine effectiveness for the 1999 campaign are in good agreement with estimates from traditional studies, yet characterized by smaller confidence intervals. Also, we show that the MCML method could provide a fast and accurate estimate of the effectiveness of Bexsero, with a time gain that ranges from 2 to 15 years, depending on the value of effectiveness measured from field data. Our results show that inference methods based on dynamic computational models can be successfully used to quantify in near real-time the effectiveness of immunization campaigns, providing an important tool to complement and support traditional studies.
Lorenzo Argante, Michele Tizzoni and Duccio Medini
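
The MCML idea above can be caricatured in a few lines: simulate the transmission model under candidate effectiveness values and score the observed case counts with a Monte Carlo estimate of the likelihood. The sketch below drops the age structure and uses invented parameters throughout, so it illustrates the inference loop only, not the calibrated UK model:

    import numpy as np

    rng = np.random.default_rng(5)
    N, beta, gamma, cases_per_carrier = 100000, 0.25, 0.1, 2e-4  # all assumed

    def simulate_cases(ve, weeks=100):
        carriers = 10000
        cases = []
        for _ in range(weeks):
            force = beta * carriers / N * (1 - ve)   # vaccine cuts acquisition
            new = rng.binomial(N - carriers, force)
            rec = rng.binomial(carriers, gamma)
            carriers += new - rec
            cases.append(rng.poisson(carriers * cases_per_carrier))
        return np.array(cases)

    observed = simulate_cases(0.6)                   # synthetic "field data"
    for ve in [0.0, 0.3, 0.6, 0.9]:                  # grid over effectiveness
        sims = [simulate_cases(ve) for _ in range(20)]
        lam = np.maximum(np.mean(sims, axis=0), 1e-9)  # MC mean weekly cases
        loglik = np.sum(observed * np.log(lam) - lam)  # Poisson log-likelihood
        print("ve =", ve, "log-likelihood =", round(float(loglik), 1))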
194 Bistability, spatial interaction, and the distribution of tropical forests and savannas [abstract]
Abstract: Recent work has indicated that tropical forest and savanna can be alternative stable states under a range of climatic conditions. However, based on dynamical systems theory it may be expected that in the case of strong spatial interactions between patches of alternative stable states, their coexistence becomes unstable. Boundaries between forest and savanna would then only be stable at conditions where the two states have equal potential, called the 'Maxwell point'. Under different conditions, the state with the lowest potential would not be resilient against invasion by the state with the highest potential. We used frequency distributions of MODIS tree-cover data at 250 m resolution to estimate Maxwell points with respect to the amount and seasonality of rainfall in both South America and Africa. We tested on a 0.5° scale whether there is a larger probability of local coexistence of forests and savannas near the estimated Maxwell points. Maxwell points for South America and Africa were estimated at 1760 mm and 1580 mm mean annual precipitation and at Markham's Seasonality Index values of 50% and 24%, respectively. Although the probability of local coexistence was indeed highest around these Maxwell points, local coexistence was not limited to them. We conclude that critical transitions between forest and savanna may occur when climatic changes exceed a critical value. However, we also conclude that spatial interactions between patches of forest and savanna may reduce the hysteresis that can be observed in isolated patches, causing more predictable forest-savanna boundaries than continental-scale analyses of tree cover indicate. This effect could be less pronounced in Africa than in South America, where the forest-savanna boundary is substantially affected by rainfall seasonality.
Arie Staal, Stefan Dekker, Chi Xu and Egbert van Nes
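
The Maxwell-point concept above is easy to see in a toy model (this double-well potential is invented for illustration and has nothing to do with the MODIS estimation): tree cover T evolves as dT/dt = -dV/dT, and the Maxwell point is the value of the rainfall-like parameter r at which the savanna and forest wells of V are equally deep.

    import numpy as np

    def V(T, r):
        # assumed double well: minima near T ~ 0.2 (savanna) and T ~ 0.8 (forest)
        return (T - 0.2) ** 2 * (T - 0.8) ** 2 - r * T

    T = np.linspace(0.0, 1.0, 2001)
    for r in np.linspace(-0.02, 0.02, 9):
        v = V(T, r)
        savanna = v[T < 0.5].min()
        forest = v[T >= 0.5].min()
        print("r =", round(r, 3), "forest-savanna depth difference:",
              round(forest - savanna, 4))
    # The sign change of the depth difference brackets the Maxwell point
    # (r ~ 0 in this toy potential); under strong spatial coupling the
    # forest-savanna boundary is stationary only there.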
424 Long-term seizure dynamics and information transfer in epileptic network [abstract]
Abstract: The main disabling factor of epilepsy is the sudden and usually unpredictable occurrence of seizures. However, seizures are not uniformly distributed in time. Periods of increased and decreased probability of seizure occurrence were observed in patients and in chronic models of epilepsy. Complex systems approaches helped to uncover power-law behaviour in the distributions of seizure energy and inter-seizure intervals (ISIs). The growth of the conditional waiting time until the next event with increasing time elapsed since the preceding event indicates memory in the seizure dynamics. We have examined long-term seizure dynamics in the tetanus toxin model of temporal lobe epilepsy in eight adult rats. In all animals, periods of high seizure frequency (clusters) were interspersed with periods of seizure absence or low seizure frequency. Concatenated data from all clusters confirm scale-free behaviour with the characteristic conditional waiting-time behaviour. The study of individual clusters shows that seizures have a specific time-dependent dynamics. The clusters start with randomly occurring weaker seizures separated by short ISIs, which are followed by a progressive increase of ISIs and seizure severity. In the present study we have concentrated on synchronization and information transfer in electrocorticographic signals in order to characterize the connectivity of epileptic networks in different parts of clusters, i.e., in seizures of different severity. An information-theoretic approach for detecting information transfer within and across different time scales, already successfully applied in a different multiscale complex system (M. Palus, Phys. Rev. Lett. 112(7), 078702, 2014), has been adapted for the analysis of electrocorticograms. Understanding the mechanisms of the transition to seizures, and of the initiation and termination of seizure clusters, can open new ways for the development of techniques for seizure forecasting and prevention. Support by the Czech Science Foundation (GACR 14-02634S) is gratefully acknowledged.
Milan Palus, Jan Kudlacek and Premysl Jiruska
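
The memory diagnostic mentioned above (conditional waiting time growing with elapsed time) can be demonstrated on synthetic data; the distributions below are toys, not the rat recordings. For fat-tailed inter-seizure intervals the expected remaining wait grows with the time already elapsed, while for a memoryless (exponential) process it stays flat:

    import numpy as np

    rng = np.random.default_rng(6)
    pareto_isi = rng.pareto(1.5, 200000) + 1.0     # fat-tailed ISIs (toy)
    expon_isi = rng.exponential(pareto_isi.mean(), 200000)  # memoryless control

    def conditional_wait(isi, elapsed):
        tail = isi[isi > elapsed]                  # events still pending at 'elapsed'
        return (tail - elapsed).mean()             # expected remaining wait

    for name, isi in [("fat-tailed", pareto_isi), ("exponential", expon_isi)]:
        print(name, [round(conditional_wait(isi, e), 2)
                     for e in (0.0, 2.0, 5.0, 10.0)])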
198 A modified replicator equation on graphs with triangles [abstract]
Abstract: The original form of the replicator equation was the first important tool to connect game dynamics, where individuals change their strategy over time, with evolutionary game theory, created by Maynard Smith and Price to predict the prevalence of competing strategies in evolving populations. The replicator equation was initially developed for infinitely large and well-mixed populations. Later, in 2006, using the standard pair approximation, H. Ohtsuki and M. Nowak proved that moving evolutionary game dynamics from a well-mixed population (a complete graph) onto a regular non-complete graph is simply described by a transformation of the payoff matrix. Under the assumption of weak selection, and using a new closure method for the pair approximation technique, we build a modified replicator equation on infinitely large graphs, for birth-death updating (a player is chosen with probability proportional to its fitness, and the offspring of this player replaces a random neighbour). The closure method that we propose takes into account the probability of triangles in the graph. Using this new equation, we study how graph structure can affect cooperation in some games with two different strategies, namely the Prisoner's Dilemma, the Snow-Drift Game and the Coordination Game. We compare our results with the ones that were obtained in the past using the standard replicator equation and the Ohtsuki-Nowak replicator equation on graphs. We also discuss how our modified pair approximation performs on different graphs, when compared to other approaches, and how it can be generalized, still satisfying the consistency conditions.
Daniel Pinto and Minus van Baalen
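
For context on the baseline the abstract above modifies: for birth-death updating on a k-regular graph, the Ohtsuki-Nowak result replaces the payoff matrix A by A + B with b_ij = (a_ii + a_ij - a_ji - a_jj)/(k - 2) in the standard replicator equation. The sketch below applies that published transform to a Snowdrift game (toy payoffs; the authors' triangle-aware closure is not reproduced here) and shows the interior equilibrium shifting:

    import numpy as np
    from scipy.integrate import odeint

    # Snowdrift game, benefit b=4, cost c=2: payoffs (C,C)=b-c/2, (C,D)=b-c,
    # (D,C)=b, (D,D)=0.
    A = np.array([[3.0, 2.0],
                  [4.0, 0.0]])
    k = 4                                          # degree of the regular graph
    B = np.array([[A[i, i] + A[i, j] - A[j, i] - A[j, j] for j in range(2)]
                  for i in range(2)]) / (k - 2)    # Ohtsuki-Nowak transform (BD)

    def replicator(x, t, M):
        f = M @ x
        return x * (f - x @ f)

    t = np.linspace(0.0, 200.0, 2000)
    x0 = np.array([0.1, 0.9])                      # initial cooperator share 0.1
    print("cooperators, well mixed:", odeint(replicator, x0, t, args=(A,))[-1][0])
    print("cooperators, graph k=4: ", odeint(replicator, x0, t, args=(A + B,))[-1][0])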
524 Complexity in evolution: from complexity threshold to interspecies polymorphism [abstract]
Abstract: Evolution is shaped by three main forces: genetic drift, mutation and selection. Most of the complexity of biological life that arises from these simple operations is a result of their interplay. Not only are the different time scales characteristic of these mechanisms responsible for the observed genetic variety, but so is the diploid organization of organisms that use sexual reproduction. The latter is the basis for three kinds of selection: directional, underdominance and overdominance. Directional selection does not take advantage of the diploid organization: its effect in diploid organisms is similar to that observed in haploid forms of life. Underdominance is a mechanism responsible for an unstable allele-frequency equilibrium and as such is rarely observed in nature; some scientists consider it a significant force leading to speciation. The third type, overdominance, is responsible for balancing selection because it results in a stable allele-frequency equilibrium. How strongly this genetic force may change the genetic composition, compared with Kimura's neutral model shaped mostly by genetic drift, is presented through the results of simulations using the author's software. The outcomes of the simulations are further compared with the predictions of the overdominance equilibrium model. The mechanism leading to observed interspecies polymorphism (for example, the human-chimpanzee polymorphism detected in the ATM gene in the author's earlier work) is explained based on the results of simulated evolution in neutral and balancing-selection models. Finally, the overwhelming complexity of contemporary life is considered in the light of serious bottlenecks for complexity present in early life, such as the complexity threshold and the limitation on the number of different genes before the chromosomal organization of the genome arose. The significance of chromosomes as genetic information carriers, further duplicated in diploid cells, is concluded to be a major architectural advantage required for the observed complexity of life.
Krzysztof Cyran
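
A minimal Wright-Fisher sketch of the contrast drawn above (toy parameters, not the author's software): with heterozygote advantage the allele frequency is held near the equilibrium p* = s2/(s1 + s2) and polymorphism persists, while under pure drift the same population typically fixes one allele.

    import numpy as np

    rng = np.random.default_rng(7)
    N, gens = 1000, 5000                 # diploid population size, generations
    s1, s2 = 0.05, 0.05                  # fitnesses: AA 1-s1, Aa 1, aa 1-s2

    def run(p, selection):
        for _ in range(gens):
            if selection:                               # overdominance step
                w_A = p * (1 - s1) + (1 - p)            # marginal fitness of A
                w_a = p + (1 - p) * (1 - s2)            # marginal fitness of a
                p = p * w_A / (p * w_A + (1 - p) * w_a)
            p = rng.binomial(2 * N, p) / (2 * N)        # genetic drift
            if p in (0.0, 1.0):                         # allele fixed or lost
                break
        return float(p)

    print("neutral final frequencies:     ", [run(0.5, False) for _ in range(5)])
    print("overdominant final frequencies:", [run(0.5, True) for _ in range(5)])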