
Economics  (E) Session 1


Time and Date: 14:15 - 15:45 on 19th Sep 2016

Room: A - Administratiezaal

Chair: Marco Alberto Javarone

112 Forgiveness evolves to ensure cooperation in long-term agreements [abstract]
Abstract: Commitments for enhancing cooperation are widespread in human societies. They offer an alternative to punishment and rewards. Within the context of social dilemmas, commitments are defined as agreements to cooperate, with posterior compensations when any of the parties involved defects. Commitment has been shown to be an evolutionarily viable strategy in one-shot social dilemmas. However, in many situations agreements aim to establish long-term mutually beneficial interactions. Our analytical and numerical results reveal under which conditions revenge, apology, forgiveness and ostracism can evolve and deal with mistakes within ongoing agreements in the context of the Iterated Prisoner's Dilemma. We show that, when the agreement fails, participants prefer to take revenge by defecting in the subsisting encounters. Incorporating costly apology and forgiveness reveals that, even when mistakes are frequent, there exists a sincerity threshold for which mistakes do not lead to the destruction of the agreement, inducing even higher levels of cooperation. The apology cost should be high enough to deter fake committers that intend to take advantage of the system by defecting and apologizing continuously, yet not so high that committing ceases to be worthwhile. We also show that when interactions take place among groups of individuals, reinserting individuals that were expelled after defecting is more efficient than keeping them ostracized, despite the cost reinsertion may entail. Forgiveness is, in its different forms, an evolutionarily viable strategy which plays a fundamental role in inducing cooperation in repeated dilemmas.
Luis A. Martinez-Vaquero, The Anh Han, Luís Moniz Pereira and Tom Lenaerts
263 Discontinuity and convergence in global economies [abstract]
Abstract: Economies are complex adaptive systems, and investigation of their dynamics within a complex systems framework may provide a deeper understanding of their behavior and response to perturbation. We borrow methodologies from ecology to test whether global economies have discontinuous size distributions, a signature of multi-scale processes in complex adaptive systems, and we contrast the theoretical assumptions underpinning our methodology with those of the economic convergence club literature. Discontinuous distributions in complex systems consist of groupings of similarly sized entities, such as animal body masses, firm sizes, or, perhaps, economies, separated by gaps, in a pattern of non-random departures from a continuous or power-law distribution. We analyse constant per capita GDP for all countries of the world from 1970 to 2012, test each yearly distribution for discontinuities, and then compare the distributions over time using multivariate modelling. We find that the size distributions of economies are discontinuous and that there are persistent patterns of aggregations and gaps over time, suggesting that there are scale domains of structuring processes that act as basins of attraction. These size classes are outwardly similar to convergence clubs, but are derived from theory that is a more appropriate fit to economic dynamics because it adequately incorporates realistic expectations of economies as complex adaptive systems. We argue that the underlying mechanisms, rather than emerging from conditions of initial equivalence, evolve and operate in multiple scale domains that can be objectively identified and assessed. Understanding the patterns within and across scale domains may provide insight into the processes that structure wealth over time.
Shana Sundstrom, Craig Allen and David Angeler
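To make the gap analysis above concrete, here is a minimal sketch of a discontinuity test on a size distribution, in the spirit of the gap-rarity analyses used in ecology. The null model (a continuous fit to log sizes), the synthetic data, and the significance threshold are all illustrative assumptions, not the authors' exact method.

```python
# Hedged sketch: flag unusually large gaps in a sorted size distribution
# against a continuous null. All modelling choices here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
gdp = rng.lognormal(mean=8.5, sigma=1.2, size=180)   # stand-in for per capita GDP

log_sizes = np.sort(np.log(gdp))
obs_gaps = np.diff(log_sizes)

# Null: resample from a continuous fit (here, a normal fit to log sizes).
mu, sd = log_sizes.mean(), log_sizes.std(ddof=1)
n_sim = 2000
null_gaps = np.empty((n_sim, len(obs_gaps)))
for i in range(n_sim):
    sim = np.sort(rng.normal(mu, sd, size=len(log_sizes)))
    null_gaps[i] = np.diff(sim)

# Call a gap "significant" if it exceeds the 95th percentile of the null
# gap distribution at the same rank position.
threshold = np.percentile(null_gaps, 95, axis=0)
sig = np.where(obs_gaps > threshold)[0]
print("candidate discontinuities after rank positions:", sig)
```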
124 Systemic distortions in digital reputation [abstract]
Abstract: The digital economy is self-organizing into a “platform society”, where individuals exchange knowledge and goods on a P2P basis. P2P platforms rely on trust, which is typically established by requiring users to develop a digital reputation through peer-review mechanisms. Given that revenues from P2P businesses such as the Sharing Economy are expected to increase more than twentyfold over the next ten years, digital reputation will increasingly become a key commodity in our online lives, as it will determine access to substantial economic opportunities. P2P systems are often thought to promote more economic freedom and democratization. Yet, their current lack of regulation exposes them to malicious behavior. Indeed, users are often incentivized to reciprocate ratings in order to mutually boost reputation or to retaliate. Such practices have distorting effects, as they mask “true reputation” and prevent users from making informed decisions about their peers. In this work we investigate the impact of reciprocity on reputation in three platforms (Slashdot, Epinions, Wikipedia) where users exchange binary ratings, and which can be conveniently cast as signed networks. We find that reciprocity is markedly over-expressed in all three systems with respect to a wide range of null hypotheses, and we provide evidence of the biases it introduces on reputation. At the macro level, we observe that the contribution to reputation from reciprocated ratings is systematically over-expressed and larger than the contribution from non-reciprocated ones. We identify the root causes of such bias at the micro level: we prove that the patterns of interaction between users, and the relationship between a user’s reputation and that of her nearest neighbors in the network, are markedly different from those observed under several null hypotheses. In our conclusions we highlight possible policy guidelines that would mitigate the distorting effects of the aforementioned biases.
Giacomo Livan, Fabio Caccioli and Tomaso Aste
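As a minimal illustration of the over-expression test described above, the sketch below measures reciprocity in a directed rating network and compares it with one simple null model that shuffles rating targets. The toy data and the single null are assumptions for illustration; the paper uses several more refined null hypotheses and keeps rating signs.

```python
# Hedged sketch: z-score of observed reciprocity against a target-shuffling null.
import numpy as np

rng = np.random.default_rng(1)
# Toy edge list of (rater, rated) pairs; rating signs omitted for brevity.
edges = [(int(rng.integers(0, 200)), int(rng.integers(0, 200))) for _ in range(3000)]
edges = [(u, v) for u, v in edges if u != v]

def reciprocity(edge_list):
    """Fraction of directed edges whose reverse edge is also present."""
    eset = set(edge_list)
    return sum((v, u) in eset for u, v in eset) / len(eset)

obs = reciprocity(edges)

# Null: permute the targets, keeping each user's number of ratings fixed.
sources = [u for u, _ in edges]
targets = np.array([v for _, v in edges])
null = []
for _ in range(500):
    rng.shuffle(targets)
    null.append(reciprocity(list(zip(sources, targets.tolist()))))

z = (obs - np.mean(null)) / np.std(null)
print(f"reciprocity={obs:.4f}, null mean={np.mean(null):.4f}, z={z:.1f}")
```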
203 Liquidity crises in the limit order book: a tale of two time scales [abstract]
Abstract: We present an empirical analysis of the microstructure of financial markets and, in particular, of the static and dynamic properties of liquidity. We find that on relatively large time scales (15 min) large price fluctuations are connected to the failure of the subtle mechanism of compensation between the flows of market and limit orders: in other words, the missed revelation of the latent order book breaks the dynamical equilibrium between the flows, triggering the large price jumps. This behavior naturally leads to a dynamical definition of liquidity. On smaller time scales (30 s), instead, the static depletion of the limit order book is an indicator of an intrinsic fragility of the system, which leads to a strongly nonlinear enhancement of the response, in terms of price impact, to incoming orders, even if their volume is small. In order to quantify this phenomenon, we introduce a static measure of the liquidity imbalance present in the book and we show that this quantity is correlated to both the sign and the magnitude of the next price movement. These findings prove that large price fluctuations are due to different mechanisms that act at different time scales and, as a consequence, the effective liquidity should be defined in relation to the time interval one wants to consider. Reference: Corradi, F., Zaccaria, A., and Pietronero, L. Liquidity crises on different time scales. Physical Review E, 92(6), 062802 (2015).
Andrea Zaccaria, Francesco Corradi and Luciano Pietronero
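For readers unfamiliar with book-imbalance measures, here is a minimal sketch of a static liquidity imbalance computed from an order book snapshot and correlated with the next price move. The exact measure in Corradi, Zaccaria and Pietronero (PRE 92, 062802) differs in detail; the book data and the toy price response below are synthetic assumptions.

```python
# Hedged sketch: signed book imbalance vs. the sign of the next price move.
import numpy as np

rng = np.random.default_rng(2)

def imbalance(bid_vols, ask_vols):
    """Signed imbalance in [-1, 1]: positive means the bid side is deeper."""
    b, a = bid_vols.sum(), ask_vols.sum()
    return (b - a) / (b + a)

n = 5000
imb = np.empty(n)
next_move = np.empty(n)
for t in range(n):
    bid = rng.exponential(1.0, size=5)   # volumes at the 5 best bid levels
    ask = rng.exponential(1.0, size=5)   # volumes at the 5 best ask levels
    imb[t] = imbalance(bid, ask)
    # Toy response: the price tends to move toward the thin side of the book.
    next_move[t] = np.sign(imb[t] + rng.normal(0, 0.5))

print("corr(imbalance, next move):", np.corrcoef(imb, next_move)[0, 1].round(3))
```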

Economics  (E) Session 2


Time and Date: 14:15 - 15:45 on 19th Sep 2016

Room: B - Berlage zaal

Chair: Roland Kupers

14 Vulnerability of Banking Networks Against Financial Contagion: Measures, Evaluations and Implications [abstract]
Abstract: Instabilities of major financial institutions during the recent financial crisis of 2007 and later have generated renewed interest in evaluating the stability (or lack thereof) of banking networks among economists, regulatory authorities and other relevant segments of the population. In particular, one source of such vulnerability is the so-called financial contagion process, in which failures of a few individual banks propagate through the "web of banking dependencies" to affect a significant part of the entire global banking system. Motivated by such observations, we consider the problem of defining and evaluating stabilities of both homogeneous and heterogeneous banking networks against propagation of synchronous idiosyncratic shocks given to a subset of banks. We formalize an extension of a financial network model originally proposed by Nier et al. for scenarios such as the over-the-counter derivatives market, together with its heterogeneous version; we formalize the synchronous shock propagation procedures, define two appropriate stability measures and investigate the computational complexities of evaluating these measures for various network topologies and parameters of interest. We next perform a comprehensive evaluation of these stability measures over more than 700,000 combinations of network types and parameter values. Based on these evaluations, we discover many interesting implications of the stability measures, and derive topological properties and parameter combinations that may be used to flag a network as possibly fragile. An interactive software package, FIN-STAB, for computing the stability is available from the website www2.cs.uic.edu/~dasgupta/financial-simulator-files.
Piotr Berman, Bhaskar Dasgupta, Lakshmi Kaligounder and Marek Karpinski
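To fix ideas, here is a minimal sketch of synchronous idiosyncratic shock propagation in a toy banking network, loosely in the spirit of Nier et al.-type models. The balance sheet construction and the all-or-nothing loss-transmission rule are simplifying assumptions, not the paper's exact model.

```python
# Hedged sketch: default cascade after shocking a subset of banks.
import numpy as np

rng = np.random.default_rng(3)
n = 50
# L[i, j] = exposure of creditor i to borrower j (loan from i to j).
L = np.where(rng.random((n, n)) < 0.1, rng.exponential(1.0, (n, n)), 0.0)
np.fill_diagonal(L, 0.0)

capital = 0.5 * np.ones(n)              # capital buffers (illustrative)
defaulted = np.zeros(n, dtype=bool)

def propagate(initial_shock_banks, loss_given_default=1.0):
    """Synchronous cascade: creditors of each defaulted bank absorb losses."""
    losses = np.zeros(n)
    frontier = list(initial_shock_banks)
    defaulted[frontier] = True
    while frontier:
        for j in frontier:
            losses += loss_given_default * L[:, j]
        newly = (~defaulted) & (losses > capital)
        frontier = np.where(newly)[0].tolist()
        defaulted[newly] = True
    return defaulted.sum()

print("banks defaulted after shocking bank 0:", propagate([0]))
```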
437 Financial complexity: network reconstruction, systemic risk, and early-warning signals [abstract]
Abstract: The global financial crisis shifted the interest from traditional measures of “risk” of individual banks to new measures of “systemic risk”, defined as the risk of collapse of an entire interbank system. In principle, estimating systemic risk requires the knowledge of the whole network of exposures among banks. However, due to confidentiality issues, banks only disclose their total exposure towards the aggregate of all other banks, rather than their individual exposures towards each bank. Is it possible to statistically reconstruct the hidden structure of a network in such a way that privacy is protected, but at the same time higher-order properties are correctly predicted? In this talk, I will present a general maximum-entropy approach to the problem of network reconstruction and systemic risk estimation. I will illustrate the power of the method when applied to various economic, social, and biological systems. Then, as a counter-example, I will show how the Dutch interbank network started to depart from its reconstructed counterpart in the three years preceding the 2008 crisis. Over this period, many topological properties of the network showed a gradual transition to the crisis, suggesting their usefulness as early-warning signals of the upcoming crisis. By definition, these early warnings are undetectable if the network is reconstructed from partial bank-specific information.
Diego Garlaschelli
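As a concrete illustration of fitness-based maximum-entropy reconstruction, the sketch below assigns each bank a connection probability p_ij = z·x_i·x_j / (1 + z·x_i·x_j), with z calibrated so that the expected number of links matches an assumed density. The fitness values and target link count are illustrative stand-ins for banks' disclosed aggregate exposures; the speaker's exact procedure may differ.

```python
# Hedged sketch: calibrate z, build link probabilities, sample one ensemble member.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(4)
x = rng.pareto(2.0, size=100) + 1.0   # stand-in fitnesses (e.g., total exposures)
target_links = 400                    # assumed known (or estimated) link count

def expected_links(z):
    xx = z * np.outer(x, x)
    p = xx / (1.0 + xx)
    np.fill_diagonal(p, 0.0)
    return p.sum()

z = brentq(lambda z: expected_links(z) - target_links, 1e-10, 1e3)
xx = z * np.outer(x, x)
P = xx / (1.0 + xx)
np.fill_diagonal(P, 0.0)
print("calibrated z:", z, "expected links:", P.sum().round(1))

# One sampled network from the ensemble: A[i, j] = 1 with probability P[i, j].
A = (rng.random(P.shape) < P).astype(int)
```

Higher-order properties (and systemic risk estimates) are then read off the ensemble of sampled networks rather than any single reconstruction.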
581 Dynamics in two networks based on stocks of the US stock market [abstract]
Abstract: We follow the main stocks belonging to the New York Stock Exchange and to Nasdaq from 2003 to 2012, through years of normality and of crisis, and study the dynamics of networks built on two measures expressing relations between those stocks: correlation, which is symmetric and measures how similarly two stocks behave, and Transfer Entropy, which is non-symmetric and measures the influence of one stock's time series on another, in terms of the information transmitted from one to the other. The two measures are used in the creation of two networks that evolve in time, revealing how the relations between stocks and between industrial sectors changed in times of crisis. The two networks are also used in conjunction with a dynamic model of the spreading of volatility in order to detect which stocks are most likely to spread crises, according to the model. This information may be used in building policies aimed at reducing the effects of financial crises.
Leonidas Sandoval
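For the non-symmetric measure above, here is a minimal sketch of a binned transfer entropy estimator TE(X → Y) for two return series, the kind of quantity one could use to build a directed influence network among stocks. The quantile binning, unit history length, and synthetic data are simplifying assumptions.

```python
# Hedged sketch: TE(X -> Y) = sum p(y', y, x) log2[ p(y'|y, x) / p(y'|y) ].
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=3):
    """Transfer entropy from x to y with unit lag, in bits."""
    xd = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
    yd = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
    triples = Counter(zip(yd[1:], yd[:-1], xd[:-1]))
    pairs_yx = Counter(zip(yd[:-1], xd[:-1]))
    pairs_yy = Counter(zip(yd[1:], yd[:-1]))
    singles = Counter(yd[:-1])
    n = len(yd) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_y1_given_yx = c / pairs_yx[(y0, x0)]
        p_y1_given_y = pairs_yy[(y1, y0)] / singles[y0]
        te += p_joint * np.log2(p_y1_given_yx / p_y1_given_y)
    return te

rng = np.random.default_rng(5)
x = rng.normal(size=5000)
y = 0.6 * np.roll(x, 1) + rng.normal(scale=0.8, size=5000)   # y follows x
print("TE(x->y):", round(transfer_entropy(x, y), 3),
      "TE(y->x):", round(transfer_entropy(y, x), 3))
```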
186 Entangling credit and funding shocks in interbank markets [abstract]
Abstract: Credit and liquidity risks represent the main channels of financial contagion for interbank lending markets. On one hand, banks face potential losses whenever their counterparties are under distress and thus unable to fulfill their obligations. On the other hand, solvency constraints may force banks to recover lost funding by selling their illiquid assets, resulting in effective losses in the presence of fire sales, that is, when funding shortcomings are widespread across the market. Because of the complex structure of the network of interbank exposures, these losses reverberate among banks and eventually get amplified, with potentially catastrophic consequences for the whole financial system. Building on Debt Rank [Battiston et al., 2012], in this work we define a systemic risk metric that estimates the potential amplification of losses in interbank markets accounting for both credit and liquidity contagion channels: the Debt-Solvency Rank. We implement this framework on a dataset of 183 European banks that were publicly traded between 2004 and 2013, showing that liquidity spillovers substantially increase systemic risk and thus cannot be neglected in stress-test scenarios. We also provide additional evidence that the interbank market was extremely fragile up to the 2008 financial crisis, becoming only slightly more robust afterwards.
Giulio Cimini and Matteo Serri
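For readers new to Debt Rank, here is a minimal sketch of a DebtRank-style loss propagation (after Battiston et al., 2012): each bank propagates its distress to creditors at most once, in proportion to exposure over equity. The liquidity channel that distinguishes the authors' Debt-Solvency Rank is deliberately not included; the balance-sheet data are synthetic.

```python
# Hedged sketch: credit-channel distress propagation, DebtRank style.
import numpy as np

rng = np.random.default_rng(6)
n = 30
exposure = np.where(rng.random((n, n)) < 0.15, rng.exponential(1.0, (n, n)), 0.0)
np.fill_diagonal(exposure, 0.0)
equity = rng.uniform(1.0, 3.0, size=n)

# W[i, j]: impact on creditor i of borrower j's full distress, capped at 1.
W = np.minimum(1.0, exposure / equity[:, None])

def debt_rank(seed):
    h = np.zeros(n)                       # distress levels in [0, 1]
    h[seed] = 1.0
    propagated = np.zeros(n, dtype=bool)  # each bank propagates only once
    active = [seed]
    while active:
        dh = np.zeros(n)
        for j in active:
            dh += W[:, j] * h[j]
            propagated[j] = True
        h = np.minimum(1.0, h + dh)
        active = [i for i in np.where(h > 0)[0] if not propagated[i]]
    return h

h = debt_rank(seed=0)
print("total induced distress beyond the seed:", (h.sum() - 1.0).round(3))
```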
423 Exploring the counterparty-liquidity risk nexus using a multi-agent network model of the interbank market [abstract]
Abstract: We simulate a bilayer network comprising two contagion channels: direct exposures and overlapping portfolios. To generate the direct-exposure network, we draw a sample of bank sizes (reflecting the size of their balance sheets) from a truncated power-law distribution. Following the literature finding that interbank networks typically exhibit a core-periphery architecture, we apply a fitness-based model that connects the nodes/banks according to their sizes. Using this algorithm, we obtain an undirected, unweighted network with a densely connected core of large banks and a sparsely connected periphery whose connections are mainly to the core. The network of overlapping portfolios is simulated using a simple random graph generation algorithm based on calibrations of the average diversification of banks vis-a-vis their securities portfolios. This results in a bipartite graph representation. The agent-based model (ABM) begins with an idiosyncratic deposit shock whose sign determines banks' roles as borrowers or lenders on the interbank market. Following this, borrowers distribute their aggregate liquidity requirement across their local network of counterparties. After borrower liquidity requests have been transmitted to lenders, the next step is to determine final loan volumes as well as interbank interest rates. Borrowers then repay their loans following an asset price shock. In order to meet their contractual obligations and comply with policy constraints (a capital ratio and a minimum reserve requirement), banks sell off a fraction of their assets. The total volume of fire sales then puts further downward pressure on asset prices and thus impacts all banks holding the distressed assets in their portfolios.
Nicolas K. Scholtes
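The direct-exposure layer described above can be sketched in a few lines: bank sizes from a truncated power law via inverse-CDF sampling, then a fitness rule whose connection probability grows with the product of sizes. All parameter values below are illustrative, not the paper's calibration.

```python
# Hedged sketch: truncated power-law sizes + fitness-based core-periphery graph.
import numpy as np

rng = np.random.default_rng(7)
n = 100

# Truncated power-law sizes (pdf ~ s^-alpha on [s_min, s_max]), inverse CDF.
alpha, s_min, s_max = 2.5, 1.0, 100.0
u = rng.random(n)
a = 1.0 - alpha
sizes = (s_min**a + u * (s_max**a - s_min**a)) ** (1.0 / a)

# Fitness rule: link probability grows with the product of sizes, capped at 1.
c = 0.02
A = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < min(1.0, c * np.sqrt(sizes[i] * sizes[j])):
            A[i, j] = A[j, i] = 1

core = sizes > np.quantile(sizes, 0.9)
print("density core-core:", A[np.ix_(core, core)].mean().round(2),
      "periphery-periphery:", A[np.ix_(~core, ~core)].mean().round(3))
```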

Biology  (B) Session 1


Time and Date: 14:15 - 15:45 on 19th Sep 2016

Room: C - Veilingzaal

Chair: Alberto Antonioni

132 Molecular Dynamics Study of Cross-species Protein Aggregation [abstract]
Abstract: Protein-protein interactions are among the most studied complex systems, because proteins, as some of the most fundamental molecules in the human body, are linked to many of its vital functions. Protein aggregation in particular is central to diseases such as Alzheimer's and Parkinson's disease. Many studies have examined the aggregation of proteins of the same species. Our research, however, focuses on understanding the interactions between two peptides that share no similarity, either sequentially or structurally. One of the peptides is amylin (IAPP), linked to diabetes, while the other is a prion fragment (PrP106-126), linked to prion disease. Extensive molecular dynamics simulations of a system of ~22,000 atoms were carried out using an enhanced sampling method (replica exchange molecular dynamics) to simulate the two peptides in solution and elucidate the interaction mechanism between them. Results show that the two peptides form structurally diverse complexes. Hotspots within the sequences of the proteins with high contact probabilities were identified. Extending the simulations with coarse-grained modelling to capture large-scale oligomers, as well as the effects of lipids, may further reveal the detailed mechanism of the aggregation. We hope to learn from the various other approaches to modelling protein complexity presented at the conference, which will complement our molecular dynamics study, while providing our own perspective on modelling proteins at the atomistic level and analysing simulation data.
Khi Pin Chua, Lock Yue Chew and Yuguang Mu
98 Trail clearing behaviour in leaf-cutter ants: regulatory mechanism and stochastic simulation [abstract]
Abstract: Ant colonies are self-organised systems. Hence even large-scale colony functions (like foraging or nest construction) must be regulated locally by the workers engaged in it, via interaction with nestmates and the environment. We investigate whether such self-regulation mechanisms exist in one of the most striking collective feats in the ants: the construction of foraging trails in leaf-cutter ant colonies, which can span hundreds of metres. While most ant species rely on pheromone trails to guide their collective movements, Atta leaf-cutter ants build tangible trails cleared down to bare soil of all undergrowth and organic debris, a rare feature among ants. Such trails can greatly increase foraging efficiency despite the costs of construction and maintenance. While recent work investigated the function of such trails, nothing is known about the mechanism of trail construction. In laboratory experiments, we find two concurrent modes of trail clearing behaviour -- ‘one-off’ and ‘repeater’ clearing. By tracking ant movement and obstruction encounters in the field and in laboratory experiments, we identify the regulatory mechanism controlling the extent of trail clearing and the resulting trail dimension. We integrate these concurrent clearing behaviours and the regulatory mechanism in a parameterised stochastic model of the trail clearing dynamics. From this model, we make predictions of the trail clearing behaviour in subsequent experiments with varying foraging conditions, and test them against the empirical results.
Thomas Bochynek, Martin Burd and Bernd Meyer
260 Mesoscale analysis of multilayer brain networks [abstract]
Abstract: The human brain is a fascinating and paradigmatic example of a complex system and a natural candidate for network analysis. On one hand, the structural network given by the physical connections between the different brain regions, obtained by Diffusion Tensor Imaging (DTI), has been widely investigated. On the other, many studies have focused on the functional layer, where links represent correlation in the activity of different regions obtained through resting-state functional MRI (rs-fMRI). One of the new challenges in neuroscience is to integrate this different information in order to better understand the interplay between the structure and the function of our brain. In this work we shed light on the mesoscale organization of multilayer human brain networks constructed from structural and functional information on 21 healthy subjects. In particular, we focus on the following mesoscopic structures: (a) we investigate the presence of motifs, small subgraphs statistically over-represented in the real systems with respect to a suitable null model; in biological networks, the abundance of given subgraphs has been linked to the robustness of the system or to the stability of the dynamical or signalling circuit they represent; (b) we analyse the organization of communities across the two layers, which gives rise to a non-trivial overlapping structure, particularly remarkable for some brain regions associated with given tasks; (c) we propose a novel method to identify core-periphery structures in networks with links of different types, and apply it to extract the multiplex core of the brain, highlighting previously neglected regions of interest. Results indicate the existence of a complex interplay between the structural and functional networks of the human brain: even though structural links appear to be necessary to some extent for the co-activation of two brain regions, functional connectivity is non-trivially constrained by its underlying anatomical network.
Federico Battiston, Mario Chavez, Vincenzo Nicosia and Vito Latora
409 Informational architecture to characterize controllability of biological networks [abstract]
Abstract: One of the most important problems in biology is to understand the principles underlying the evolution of living systems from non-living systems. To do so, we need to identify universal features of living systems that can distinguish them from other classes of physical systems. On the other hand, recently developed frameworks for control theory on complex networks suggest that an ultimate understanding of these evolutionary principles would allow us to control biological networks, in the sense of making them converge to desired states. Here, I present our recent attempt to understand the relationship between informational architecture, as the universal feature, and controllability, using various biomolecular networks. Our previous study showed that the scaling relation of information processing within biological networks differentiates them from their random network counterparts. We also provided an analysis indicating that biologically distinctive patterns of informational flow are related to the control kernel, a minimal subset governing the global dynamics of biological networks. In this paper, we quantify the controllability of a network by the size of its control kernel, that is, the size of the subset that must be controlled to drive the network from one state to a desired state. We find that biological networks tend to be more difficult to control than random networks, which suggests that biological networks have evolved to be more resilient to environmental change. In addition, we measure informational flow within biological networks with and without control, to investigate how the observed resilience is related to informational processing. Finally, we discuss the implications of informational architecture and its relationship with controllability for understanding evolutionary principles of living systems.
Hyunju Kim, Paul Davies and Sara Imari Walker
65 Physical Aging in Excitable and Oscillatory Systems [abstract]
Abstract: We consider classical nonlinear oscillators such as rotators and Kuramoto oscillators on hexagonal lattices of small or intermediate size. When the coupling between these elements is repulsive and the bonds are frustrated, we observe coexisting states, each with its own basin of attraction. For special lattice sizes the multiplicity of stationary states becomes extremely rich. When disorder is introduced into the system by additive or multiplicative Gaussian noise, we observe a noise-driven migration of oscillator phases in a rather rough potential landscape. Through this migration, a multitude of different escape times from one metastable state to the next is generated [1]. Based on these observations, it does not come as a surprise that the set of oscillators shows physical aging. Physical aging is characterized by non-exponential relaxation after a perturbation, breaking of time-translation invariance, and dynamical scaling. When our system of oscillators is quenched from the regime of a unique fixed point towards the regime of multistable limit-cycle solutions, the autocorrelation functions depend on the waiting time after the quench, so that time-translation invariance is broken, and dynamical scaling is observed over a certain range of time scales [2]. We point to open questions concerning a possible relation between physical and biological aging. References: [1] F. Ionita, D. Labavic, M. Zaks, and H. Meyer-Ortmanns, Eur. Phys. J. B 86(12), 511 (2013). [2] F. Ionita, H. Meyer-Ortmanns, Phys. Rev. Lett. 112, 094101 (2014).
Hildegard Meyer-Ortmanns
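The aging diagnostic above can be sketched numerically: integrate noisy, repulsively coupled Kuramoto oscillators and evaluate the two-time autocorrelation C(t_w + tau, t_w) for several waiting times t_w; a t_w dependence signals broken time-translation invariance. The ring geometry, zero natural frequencies, and all parameter values below are illustrative simplifications of the hexagonal-lattice setup in the talk.

```python
# Hedged sketch: noisy repulsive Kuramoto chain + two-time autocorrelation.
import numpy as np

rng = np.random.default_rng(8)
n, K, D, dt = 48, -1.0, 0.05, 0.01        # K < 0: repulsive coupling
T = 30000

theta = rng.uniform(0, 2 * np.pi, n)
traj = np.empty((T, n))
for t in range(T):
    left, right = np.roll(theta, 1), np.roll(theta, -1)
    drift = K * (np.sin(left - theta) + np.sin(right - theta))
    theta = theta + drift * dt + np.sqrt(2 * D * dt) * rng.normal(size=n)
    traj[t] = theta

def C(tw, tau):
    """Two-time autocorrelation <cos(theta(tw + tau) - theta(tw))>."""
    return np.cos(traj[tw + tau] - traj[tw]).mean()

for tw in (2000, 8000, 16000):
    print(f"t_w={tw}:", [round(C(tw, tau), 3) for tau in (100, 1000, 10000)])
```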

Cognition  (C) Session 1


Time and Date: 14:15 - 15:45 on 19th Sep 2016

Room: D - Verwey kamer

Chair: Simon Dedeo

425 Kinship systems explain the persistent coupling of language and gene trees [abstract]
Abstract: Language trees have been observed to mirror gene trees from local to global scales. This suggests that languages and genes evolve in tandem as the communities that carry them split and diverge, but this model is inconsistent with the widespread mobility often observed between communities. Instead we require a model that explains correlations between genes and languages, while accounting for variable rates of migration and language adoption by individuals. Here we show that the critical variable is that human movements are not random, but highly structured. In the first study of its kind, finely resolved co-phylogenies of languages and genes on a multilingual Indonesian island reveal that marriage systems explain why language trees predict gene trees. Communities of related individuals speaking the same language can persist for many generations, while the languages they speak change or are replaced.
Cheryl Abundo, Stephen Lansing, Murray Cox, Sean Downey, Elsa Guillot, Guy Jacobs and Lock Yue Chew
518 Coevolution in the model of social interactions: getting closer to real-world networks [abstract]
Abstract: In the 90s Robert Axelrod proposed the canonical model of social interactions [1], explaining one possible and important mechanism for the dissemination of culture. He found that, depending on initial conditions, the system can end up in one of two states: ordered, with a global culture, or disordered, with many small subcultures. The dynamics of this model captured complexities of real interactions between people, but the square lattice that was considered is far from a satisfying reflection of real-world social networks. Others have studied Axelrod's model more deeply on complex networks, and it turned out that the structure can have a fundamental influence on the behavior of the system. Maxi San Miguel et al. [2] made the next step by exploring the model of social interactions on coevolving random networks and finding two phase transitions with interesting properties. Unfortunately, social networks are as far from randomness as from regularity. In our work we introduce four extensions changing the mechanism of edge rewiring. The models are intended to capture two kinds of interactions: preferential attachment in scientific or actor collaborations, and friendship formation in everyday relations. Numerical simulations show that the proposed dynamics can lead to a power-law distribution of node degrees and a high value of the clustering coefficient, while still keeping the small-world effect in three of the models. All models are characterized by two phase transitions of a different nature. We find a new and universal characteristic of the second transition point: an abrupt increase of the clustering coefficient, due to the formation of many small complete subgraphs inside the network. [1] R. Axelrod, The dissemination of culture, J. Conflict Res. 41, 203 (1997) [2] F. Vazquez, J. C. Gonzalez-Avella, V. M. Eguíluz, M. San Miguel, Time-scale competition leading to fragmentation and recombination transitions in the coevolution of network and states, Phys. Rev. E 76, 046120 (2007)
Tomasz Raducha and Tomasz Gubiec
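For orientation, here is a minimal sketch of an Axelrod model with coevolving links: agents interact with probability equal to their cultural overlap, copy a trait on success, and rewire a link when overlap is zero. The uniform-random rewiring below is the baseline rule; the talk's four extensions replace it with preferential-attachment- and friendship-like mechanisms.

```python
# Hedged sketch: Axelrod culture dynamics with zero-overlap link rewiring.
import numpy as np
import networkx as nx

rng = np.random.default_rng(9)
N, F, q = 200, 3, 5                  # agents, cultural features, traits per feature
G = nx.erdos_renyi_graph(N, 0.02, seed=9)
culture = rng.integers(0, q, size=(N, F))

for _ in range(200000):
    i = int(rng.integers(N))
    neigh = list(G.neighbors(i))
    if not neigh:
        continue
    j = neigh[int(rng.integers(len(neigh)))]
    same = culture[i] == culture[j]
    overlap = same.mean()
    if overlap == 1.0:
        continue
    if rng.random() < overlap:                    # imitate one differing trait
        f = rng.choice(np.where(~same)[0])
        culture[i, f] = culture[j, f]
    elif overlap == 0.0:                          # drop the link, rewire at random
        G.remove_edge(i, j)
        k = int(rng.integers(N))
        while k == i or G.has_edge(i, k):
            k = int(rng.integers(N))
        G.add_edge(i, k)

print("clustering:", round(nx.average_clustering(G), 3),
      "components:", nx.number_connected_components(G))
```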
84 Rare Words Appear in Clusters: Long-Range Correlation Underlying Language Through Interval Analysis [abstract]
Abstract: The famous Zipf law states that the frequency of a word in a text is roughly proportional to the inverse of its rank. When the text is shuffled, however, Zipf's law remains unchanged. In this article, we aim to specify a universal law underlying the arrangement of words by using an interval analysis. For each text, we study the fraction of rare words that have ranks above some threshold Q and the length of the return intervals between them. We focus on the frequency of intervals of length r, from which we derive the cumulated probability S_Q(r) that the length of an interval is above r, and also the autocorrelation function C_Q(s) of the intervals. When the arrangement of the text is destroyed by shuffling, S_Q(r) is a simple exponential and C_Q(s) is zero for s above zero. We first analyze six long masterpieces in English, French, German, Chinese and Japanese and find that in all texts, for large enough Q values, S_Q(r) follows a clear Weibull function, with its exponent close to 0.7. The return intervals themselves are arranged in a self-similar, long-range correlated fashion, where the autocorrelation function C_Q(s) follows a power law with an exponent between 0.2 and 0.4. These features lead to a pronounced clustering of rare words in the text. We then show that our findings hold on a large scale across 1109 single-author texts. Our results reveal that the arrangement of a text, quantified by the return intervals between the words above certain ranks Q, is surprisingly universal, obeying the same laws for all languages considered. We argue that the source of this universality is the human brain.
Kumiko Tanaka-Ishii and Armin Bunde
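The interval analysis above is easy to reproduce: fix a rank threshold Q, mark the words whose frequency rank exceeds Q, measure the return intervals between successive occurrences, and estimate S_Q(r) = P(interval length > r). The toy "text" below is a random Zipf sample, so S_Q(r) should come out close to the exponential expected for shuffled text; a real text would show the Weibull shape reported in the talk.

```python
# Hedged sketch: return intervals of rare words and their survival function S_Q(r).
from collections import Counter
import numpy as np

rng = np.random.default_rng(10)
words = [f"w{int(k)}" for k in rng.zipf(1.8, size=200000) if k < 5000]

ranks = {w: r for r, (w, _) in
         enumerate(Counter(words).most_common(), start=1)}

Q = 500
positions = [i for i, w in enumerate(words) if ranks[w] > Q]
intervals = np.diff(positions)

r_grid = np.unique(intervals)
S = [(intervals > r).mean() for r in r_grid]
print("mean interval:", intervals.mean().round(1))
print("S_Q(r) at r = mean:",
      round(float(np.interp(intervals.mean(), r_grid, S)), 3))
```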
587 Moral Tribes on Wikipedia: the Mental Representations of Social Norms [abstract]
Abstract: In a complex social system, behavior is often prescribed and regulated by hundreds, or even thousands, of interacting norms. Yet it is individuals that must learn norms—through interaction, exploration heuristics, and local peer influence—and individuals that decide when and how to use them. Little is known about this crucial process. How do individuals selectively, and collectively, navigate and utilize the norms of their social system, and what impact does this process have on that system's development? We provide new answers to this question, by tracking how people use and invoke norms in a real world social system: the English-language Wikipedia. As a community-managed knowledge commons, Wikipedia relies on its norms for governance; they form shared expectations for content creation and conflict resolution. Norms are invoked both directly and indirectly in discussions and arguments on the encyclopedia’s “talk” pages. We sample approximately 11,000 Wikipedia editors and use the detailed, and dated, log of their talk page edits to track the norms each editor references over the course of 15 years. Combined with prior work on Wikipedia’s norm network, this allows us to infer the coarse-grained mental representations individuals use, to test mechanisms for norm learning, to determine how an individual’s invocation of norms is influenced by context, and to test the “moral tribes” hypothesis: that individuals cluster together in distinct regions of the larger norm network.
Bradi Heaberlin and Simon Dedeo
350 Complex Dynamics of Disclosure Processes for Concealable Stigmatized Identities [abstract]
Abstract: The present study is the first of its kind to employ a complex dynamical systems approach to bridging the gap between social psychological research on stigma, and embodied cognition more broadly. Specifically, we extended current understanding of consequences for revealing devalued identities in close and professional relationships by examining the dynamic structure of movement and language during such disclosures. Using a range of nonlinear time series techniques, including fractal, multifractal, and continuous and categorical recurrence analyses, we investigated the role of antecedent motivational systems on the differential effectiveness of disclosure processes. Participants with concealable stigmatized identities were asked to prepare two disclosure letters to individuals that were unaware of their identity, one to a close friend or relative and the other to a professional colleague. Motivational orientation was manipulated so that half of participants were asked to focus on positive outcomes for the disclosures (approach goals) and the remaining participants were asked to focus on avoiding negative outcomes within the disclosure (avoidant goals). Participants read each letter aloud and their behavior was recorded using multiple motion tracking methods in addition to traditional audio video recordings. Results demonstrate differences in the dynamics of both human movement (i.e., postural sway complexity) and in language usage and form as a function of motivational goals and perceiver for disclosure. Discussion will include implications and future directions for how the differential dynamics of disclosure influence perceptions and may be embodied in the perceiver via complexity matching.
Rachel Kallen and Hannah Douglas

Foundations & Physics  (FP) Session 1


Time and Date: 14:15 - 15:45 on 19th Sep 2016

Room: E - Mendes da Costa kamer

Chair: John Mahoney

141 Human vibrations: the modes of crowd disasters [abstract]
Abstract: Empirically, we observe that in concerts, pilgrimages, parades, Black Friday bargain hunters, football matches and other similar social gatherings, the density of people becomes exceptionally high and might give rise to unusual and occasionally tragic collective motions. All these situations are characterised by a high degree of dynamic fluctuations, but little overall motility. While active particle simulations have demonstrated the ability to reproduce most phenomenology of human collective motion, high-density scenarios call for a rethinking of conventional analysis approaches. Here, we take inspiration from jammed granular media and eigenmode analysis to understand the mechanisms underlying human collective motion at extreme densities. Vibrational eigenmodes predict the emergence of long-range correlated motions and unstable areas in simulations of high-density crowds. By introducing agitated individuals to account for behavioural heterogeneity, we find that perturbing the eigenmodes enhances the propagation of long-range correlated motions such as, for example, shock waves. If found in real crowds, these emergent mechanisms would provide a simple explanation of how crowd disasters could arise from purely physical and structural considerations. Our approach could provide powerful tools for predicting the onset of dangerous collective motions with applications in crowd management and in the design of public venues.
Arianna Bottinelli, David Sumpter and Jesse Silverberg
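As a highly simplified illustration of the eigenmode analysis above, the sketch below models a dense crowd as a network of unit springs between neighbouring individuals and takes the scalar graph Laplacian as the Hessian; the full analysis for jammed packings uses the 2N-dimensional vector Hessian. Positions, contact radius, and spring constants are illustrative assumptions.

```python
# Hedged sketch: soft vibrational modes of a crowd-as-spring-network.
import numpy as np

rng = np.random.default_rng(11)
side = 15
# Jittered square lattice of people as a stand-in for crowd positions.
xy = np.array([(i + rng.normal(0, 0.1), j + rng.normal(0, 0.1))
               for i in range(side) for j in range(side)])

# Connect pairs closer than a contact radius.
d = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
A = ((d < 1.3) & (d > 0)).astype(float)

L = np.diag(A.sum(1)) - A             # graph Laplacian ~ scalar Hessian
evals, evecs = np.linalg.eigh(L)

# Soft (low-frequency) modes correspond to extended, correlated motions;
# large amplitudes in a soft mode flag potentially unstable crowd regions.
soft = evecs[:, 1]                    # first non-trivial mode
print("lowest nonzero eigenvalue:", evals[1].round(4))
print("most mobile individual in soft mode:", int(np.abs(soft).argmax()))
```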
80 Quantum Simplicity: How quantum theory can change what we perceive to be complex [abstract]
Abstract: Computational mechanics describes a sophisticated toolset for understanding the structure and complexity of observational phenomena [1]. It captures the idea that we understand nature through cause and effect – the more complex a process, the more causes one must postulate to model its behaviour. This view motivated statistical complexity – the minimal amount of causal information one needs to record about a phenomenon’s past to model its future statistics – as a popular measure of its intrinsic complexity. The standard framework has generally assumed that we understand nature through classical means; processing classical bits. Nature, however, is intrinsically quantum mechanical, allowing quantum bits that exist in superposition of 0 and 1. Can such uniquely quantum behaviour unveil ore refined views of structure and complexity? I this presentation, I review our work in pioneering quantum models that require provably less causal information that any classical counterpart [2] and describe our ongoing experiments in realizing these models within photonic systems [3]. I then outline recent advances in constructing provably optimal quantum models, and how they demonstrate that quantum statistical complexity can exhibits drastically different qualitative behaviour - falling for example, when its classical counterpart rises. Thus many observed phenomena could be significantly simpler than classically possible should quantum effects be involved, and existing notions of structure and complexity may ultimately depend on the type of information theory we use. [1] J.P. Crutchfield and K. Young, Phys. Rev. Lett. 63 105. [2] M. Gu, K. Wiesner, E. Rieper, V. Vedral, Nature communications, 3 762 [3] M. Palsson, M. Gu, J. Ho, Howard M. Wiseman, and Geoff J. Pryde. arXiv:1602.05683
Mile Gu, Andrew Garner, Joseph Ho, Mathew Palsson, Geoff Pryde, Elisabeth Rieper, Jayne Thompson, Vlatko Vedral, Karoline Wiesner and Howard Wiseman
527 The Weak Giant: phase transition in directed graphs with prescribed degree distribution [abstract]
Abstract: From reactions fueling cells in our bodies to internet links binding our society into a small world, networks are at the basis of every structure, and random graph theory is the common language widely used to discuss them. Network science is full of empirical data, yet an observer collecting such data is either embedded in the network him/herself, thus viewing it locally, or is distanced far from it, thus observing only the global properties. Indeed, one may study individual servers of the Internet, but the question of the global structure is far less trivial. Or a physicist may observe global properties of a complex material without knowing much about how individual molecules are interconnected. This research shows how one, being at one extreme of this dichotomy, may transit to the other: converting local information into global, and back. Take the usual notion of a network and replace all (or a portion of) the links with arrows, and we obtain a directed network. In such a network every node has a certain probability of having N outgoing arrows and M ingoing arrows, thus a bivariate degree distribution. There are a few generalizations of connected components in this case. The most controversial one is the weak component: a set of nodes one may reach if ignoring the direction of the arrows. It is controversial precisely because it sounds so simple: one may be tempted to say that in this case there is no direction and the task degenerates to a classical problem. But when we possess only a snapshot of local properties, the bivariate degree distribution, we cannot ignore the directional data anymore. This work presents, for the first time, the correct formulation of and answer to the weak-component problem in directed graphs that are identified by a degree distribution.
Ivan Kryven
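The weak giant component can also be probed numerically, as a cross-check on the analytical criterion presented in the talk: draw a bivariate (in, out) degree sequence, build a directed configuration model, and measure the largest weakly connected component as a degree parameter is varied. The independent Poisson degrees and the crude stub-balancing step below are illustrative assumptions.

```python
# Hedged sketch: weak giant fraction in a directed configuration model.
import numpy as np
import networkx as nx

rng = np.random.default_rng(12)
n = 20000

def weak_giant_fraction(lam):
    din = rng.poisson(lam, n)
    dout = rng.poisson(lam, n)
    diff = int(din.sum() - dout.sum())
    if diff > 0:
        dout[:diff] += 1        # crude fix-up: in- and out-stubs must balance
    elif diff < 0:
        din[:-diff] += 1
    G = nx.directed_configuration_model(din.tolist(), dout.tolist(),
                                        seed=int(rng.integers(1 << 30)))
    return max(len(c) for c in nx.weakly_connected_components(G)) / n

# With Poisson in/out degrees of mean lam, a weak giant emerges near lam = 0.5.
for lam in (0.2, 0.35, 0.5, 0.8):
    print(f"lam={lam}: weak giant fraction = {weak_giant_fraction(lam):.3f}")
```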
48 Complexity at multivalent receptor interfaces [abstract]
Abstract: Multivalency is the phenomenon that describes the interaction between multivalent receptors and multivalent ligands. It is well known to play a pivotal role in biochemistry, particularly in protein-carbohydrate interactions, both in solution (e.g. at pentavalent cholera toxins) and at interfaces (e.g. for the infection of cells by the attachment of viruses or bacteria to cell membranes). In particular in the latter case, multivalency is often poorly understood in a quantitative sense. Supramolecular host-guest chemistry has been well established in solution, but its use at interfaces remains limited to for example sensor development for specific guest compounds. In order to build assemblies at surfaces through supramolecular interactions for nanotechnological applications, other demands have to be met, such as larger thermodynamic and kinetic stabilities of the assemblies. For many supramolecular motifs, this inevitably leads to the use of multivalent interactions. We employ the concept of molecular printboards, which are self-assembled monolayers functionalized with receptor groups suitable for nanofabrication. The design of guest molecules allows precise control over the number of interacting sites and, therefore, over their (un)binding strength and kinetics. A recent focus is on heterotropic multivalency, which is the use of multiple interaction motifs. This has been applied to the controlled, selective, and specific binding of metal-ligand coordination complexes, proteins, antibodies, and even cells. The current paper will focus on less obvious, emerging properties from such assemblies such as supramolecular expression, non-linear amplification, coherent energy transfer, and multivalent surface diffusion.
Jurriaan Huskens
259 Sequential visibility graph motifs [abstract]
Abstract: Visibility algorithms transform time series into graphs and encode dynamical information in their topology, paving the way for graph-theoretical time series analysis as well as building a bridge between non-linear dynamics and network science. In this work we present sequential visibility graph motifs, smaller substructures of n consecutive nodes that appear with characteristic frequencies inside visibility graphs. We show that the motif frequency profile is a highly informative feature which can be treated analytically for several classes of deterministic and stochastic processes and is in general computationally efficient to extract. In particular, we have found that this graph feature is surprisingly robust, in the sense that it is still able to distinguish among different dynamics even when the signals are polluted with large amounts of observational noise, which enables its use in practical problems such as classification of empirical time series. As an application, we have tackled the problem of disentangling meditative from general relaxation states using the horizontal visibility graph motif profiles of heartbeat time series of different subjects performing different activities. We have been able to provide a positive, unsupervised solution to this question by applying standard clustering algorithms to this simple feature. Our results suggest that visibility graph motifs provide a mathematically sound, computationally efficient and highly informative simple feature which can be extracted from any kind of time series and used to describe complex signals and dynamics from a new viewpoint. In direct analogy with the role played by standard motifs in biological networks, further work should evaluate whether visibility graph motifs can be seen as the building blocks of time series. References: 1) Lacasa L. et al., "From time series to complex networks: The visibility graph", PNAS 105.13 (2008). 2) Iacovacci J. and Lacasa L., "Sequential visibility graph motifs", PRE 93 (2016).
Jacopo Iacovacci and Lucas Lacasa
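The motif profile above is straightforward to extract for horizontal visibility graphs (HVGs): in each window of 4 consecutive points, consecutive pairs are always linked, so the motif class is determined by which of the three remaining pairs satisfy the horizontal visibility rule. The binary-pattern labels below are our own convention, not the motif numbering of the paper.

```python
# Hedged sketch: size-4 sequential HVG motif profile of a time series.
from collections import Counter
import numpy as np

def hvis(x, i, j):
    """Horizontal visibility: x[i] and x[j] both exceed everything between them."""
    return all(x[k] < min(x[i], x[j]) for k in range(i + 1, j))

def motif_profile(x, n=4):
    counts = Counter()
    for s in range(len(x) - n + 1):
        w = x[s:s + n]
        pattern = tuple(int(hvis(w, i, j))
                        for i, j in ((0, 2), (1, 3), (0, 3)))
        counts[pattern] += 1
    total = sum(counts.values())
    return {p: round(c / total, 3) for p, c in sorted(counts.items())}

rng = np.random.default_rng(13)
print("white noise :", motif_profile(rng.normal(size=20000)))

x = np.empty(20000); x[0] = 0.1
for t in range(1, len(x)):            # fully chaotic logistic map
    x[t] = 4 * x[t - 1] * (1 - x[t - 1])
print("logistic map:", motif_profile(x))
```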

ICT  (I) Session 1


Time and Date: 14:15 - 15:45 on 19th Sep 2016

Room: F - Rode kamer

Chair: Andrea Nanetti

543 A swarm of drones is more than the sum of the drones that make it up [abstract]
Abstract: Real-world applications that use Unmanned Aerial Vehicles (UAVs, or drones) are now a reality. They are used in situations known as Dull, Dangerous and Dirty (DDD). The next step is the adoption of swarms, i.e. of a number of UAVs that collaborate to achieve a mission. We focus on autonomous collaborative drones, i.e. drones that take decisions without any control from the outside. Autonomy is mandatory because in large swarms one cannot technically afford either to have each UAV connected to the ground or to have one ground pilot per UAV. Swarms offer a number of advantages, among which are continuous flight, support for resilience, and complementary sensing capacities. Combining UAVs gives rise to features that would otherwise be unfeasible. For example, it is possible to analyse the quality of air over a city by flying the UAVs of a swarm at different altitudes; this cannot be achieved with a single UAV. In terms of advantages, to paraphrase Aristotle, a swarm of drones is more than the sum of the drones that make it up (“The whole is more than the sum of its parts”, Aristotle, Metaphysics). Of course, if a swarm brings more in terms of benefits, it also brings more in terms of issues, among which are communication, authentication, compact flight, and safety. For instance, in the fault tree analysis of a swarm, the difficulty is the combinatorial explosion of the tree due to events that do not exist when one single UAV is considered, such as the possibility that one UAV crashes into another during an emergency landing. In terms of issues, as was the case for advantages, a swarm of drones is more than the sum of the drones that make it up. Swarms of autonomous drones are complex systems by nature.
Serge Chaumette
474 ICT Contribution to Development: Insights Using Agent Based Modeling Approach [abstract]
Abstract: The literature contains extensive claims about a causal relationship between growth in ICT investment and economic growth, and about the value ICT brings to businesses, education, and health. Some previous studies argued that ICT diffusion brings economic growth; others argued that growth is conditional and partial. There is a lack of literature that explicitly states how ICT investments contribute to economic development or other impacts. This is probably because measuring the impact of ICT is a challenging and complex problem: there are a number of different ICTs, with different impacts in different contexts and countries. In addition, there is a web of relationships between impact areas and with the broader economy, society and environment. The aim of this complexity study is to gain insights into the value of ICT contributions to development by examining the interaction between different dimensions such as socio-economic growth, education, health, and the environment using agent-based modeling (ABM). This paper argues for building models to understand the emergence created by this complex environment, in order to see whether we are building a better world through ICT investment and to direct investments in resources and efforts to the "right" place. ABM is a useful tool because it can effectively provide us with an experimentation environment that can answer complex questions. ABM is used to study individual and collective behavioral changes in using ICT in its different forms, while interacting with other agents representing aspects of the economy, health, education and the environment. The outcome of these models will provide an in-depth understanding of the emergence between multiple agents interacting with ICT at the micro and macro levels. The purpose of this paper is mainly to establish a foundation and interest for further research in using ABM to better understand the contribution of ICT to development.
Salam Abdallah
361 Temporal and Spatial Analysis of Ebola Outbreak using Online Search Pattern and Microblogging data [abstract]
Abstract: User generated contents (UGCs) have gained immense popularity for exploring different socio-economic issues. However, considering all UGCs uniformly can be problematic: UGCs can be intentional (such as posts on microblogging sites) as well as unintentional (such as search patterns). The intriguing question is how these two types of UGCs are interrelated. In this paper, we explored the similarities and differences between intentional and unintentional UGCs in the context of the 2014 Ebola outbreak. Prior studies on the epidemic mostly analyzed the entire UGC corpus or the time series data as a whole. This can be misleading in our context, since there might exist a time lag between unintentional and intentional UGCs. So, based on the anomalies in our time series data, we considered various subsamples (covering shorter periods of time) for our analysis. Data were retrieved from Google (for online search patterns), Twitter (a real-time broadcasting channel for the epidemic) and Wikipedia (the largest UGC source for first-hand information) for our study. Wiki Trends data were collected for detecting anomalies (important events), which confirm WHO notifications. Google data were extracted around these events, to explore the topics that cropped up through searching. We also crawled the tweet feeds to probe the discussion on the Twitter platform during the same time period. We applied Latent Dirichlet Allocation (LDA) to probe the underlying topics in the microblogging discussion. In addition to this temporal analysis, we also performed a spatial analysis by comparing geotagged tweets with the locational information of Google Trends. Broadly, our study indicates a similar pattern between intentional and unintentional UGCs. It is thus possible to identify and trace areas of concern, in both spatial and temporal dimensions, during an epidemic by exploring UGCs. This approach can hence be useful for health organizations to tackle an epidemic.
Aparup Khatua, Kuntal Ghosh and Nabendu Chaki
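For readers unfamiliar with LDA, here is a minimal sketch of topic extraction from tweet text, as in the temporal analysis described above. The tiny corpus, the number of topics, and all preprocessing choices are illustrative; a real study would use the full crawled tweet feeds.

```python
# Hedged sketch: LDA topics from a toy corpus of Ebola-related tweets.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "ebola outbreak confirmed in new district who alert",
    "how does ebola spread symptoms fever vomiting",
    "health workers deployed to contain ebola outbreak",
    "travel ban debated as ebola fears grow",
    "vaccine trial for ebola announced by researchers",
    "fever and symptoms what to know about ebola",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(tweets)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = comp.argsort()[-5:][::-1]
    print(f"topic {k}:", [terms[i] for i in top])
```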
522 What tag is this!? Studying hashtag propagation in Twitter [abstract]
Abstract: The microblogging platform Twitter has received much attention from researchers in recent years. Many researchers have sought an explanation as to why certain topics become trending and others do not. Usually, these studies focus on predictive aspects of the topic itself, rather than on the growth of the topic. In previous work, we employed a random graph model to mimic the growth of a topic and get a better understanding of how a topic can become trending, for which we found that the size of the Largest Connected Component (LCC) is a good indicator. Using a dataset containing a year of Dutch tweets scraped from Twitter using its streaming API, we analyze the retweet graphs corresponding to all hashtags used by more than a hundred users in that year. We find that the corresponding retweet graphs tend to either have one LCC or be scattered into many small components. We then compare these outcomes with the estimates of the random graph model parameters.
Marijn ten Thij
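The LCC indicator above can be tracked as tweets arrive. The sketch below grows a retweet graph from a stream of (retweeter, original author) pairs and records the LCC size after each edge; the preferential stand-in for the stream is a synthetic assumption, not the Dutch Twitter data.

```python
# Hedged sketch: streaming LCC size of a hashtag's retweet graph.
import networkx as nx
import numpy as np

rng = np.random.default_rng(14)
G = nx.Graph()
lcc_sizes = []
for t in range(2000):
    # Preferential stand-in: retweets of already-active users are more likely.
    if G.number_of_edges() > 0 and rng.random() < 0.7:
        u, _ = list(G.edges())[int(rng.integers(G.number_of_edges()))]
    else:
        u = int(rng.integers(10000))
    v = int(rng.integers(10000))
    if u != v:
        G.add_edge(u, v)
    lcc_sizes.append(max(len(c) for c in nx.connected_components(G)))

print("final LCC fraction:", round(lcc_sizes[-1] / G.number_of_nodes(), 3))
```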
505 Early and Real-Time Detection of Seasonal Influenza Onset [abstract]
Abstract: Every year, influenza epidemics affect millions of people and place a strong burden on health care services. A timely knowledge of the onset of the epidemic could allow these services to prepare for the peak. We will present a machine-learning based method that can reliably identify and signal the influenza outbreak. By combining official Influenza-Like Illness (ILI) incidence rates, searches for ILI-related terms on Google, and an on-call triage phone service, Saúde 24, we were able to identify the beginning of the flu season in 8 European countries, anticipating current official alerts by several weeks. This work shows that it is possible to detect and consistently anticipate the onset of the flu season, in real-time, regardless of the amplitude of the epidemic, with obvious advantages for health care authorities. We also show that the method is not limited to one country, specific region or language, and that it provides a simple and reliable signal that can be used in early detection of other seasonal diseases.
Joana Gonçalves-Sá and Miguel Won

Cognition & Foundations  (CF) Session 1


Time and Date: 14:15 - 15:45 on 19th Sep 2016

Room: G - Blauwe kamer

Chair: Taha Yasseri

288 Automatic identification of relevant concepts in scientific publications [abstract]
Abstract: In recent years, the increasing availability of publication records has attracted the attention of the scientific community. In particular, many efforts have been devoted to the study of the organization and evolution of science by exploiting the textual information in articles, such as words extracted from the title and abstract. However, just a few of them analyzed the main part of an article, i.e., its body. Access to the whole text, instead, allows related papers to be pinpointed according to their content, by analyzing the network of similarity between them. Scientific concepts are extracted from the bodies of the articles available in the ScienceWISE platform, but the paper similarity network possesses a considerably high link density (36%), which spoils any attempt to associate communities of papers with topics. This happens because not all the concepts inside an article are truly informative and, even worse, they may not be useful for discriminating articles with different contents. The presence of such “generic” concepts with a loose meaning implies that a considerable number of connections are made up of spurious similarities. To eliminate generic concepts, we introduce a method to evaluate a concept's relevance according to an information-theoretic approach. The significance of a concept $c$ is defined in terms of the distance between its maximum entropy, $S_{max}$, and the actual one, $S_c$, calculated using its frequency of occurrence inside papers, $tf_c$. Generic concepts are automatically identified as the ones with an entropy close to their maximum and disregarded, while only “meaningful” concepts are retained when constructing the paper similarity network. Consequently, the number of links decreases, as well as the amount of noise in the strength of articles' similarities. Hence, the resulting network displays a much better defined community structure, where each community contains articles related to a specific topic.
Andrea Martini, Alessio Cardillo and Paolo De Los Rios
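The entropy-based relevance described above can be sketched directly from its definition: for each concept, build the distribution of its occurrences across papers, compute its Shannon entropy $S_c$, and compare with the maximum $S_{max}$. Taking $S_{max} = \log N$ over all $N$ papers is our assumption; the authors' exact normalization may differ.

```python
# Hedged sketch: concepts spread uniformly across the corpus score as "generic".
import numpy as np

def relevance(tf, n_docs):
    """tf: dict mapping doc_id -> occurrences of the concept in that doc."""
    counts = np.array(list(tf.values()), dtype=float)
    p = counts / counts.sum()
    s = -(p * np.log(p)).sum()          # actual entropy S_c
    s_max = np.log(n_docs)              # assumed maximum entropy
    return (s_max - s) / s_max          # 0 = fully generic, 1 = fully specific

N = 1000
generic = {d: 2 for d in range(N)}      # appears evenly everywhere
specific = {0: 30, 1: 25, 2: 40}        # concentrated in 3 papers
print("generic  concept relevance:", round(relevance(generic, N), 3))
print("specific concept relevance:", round(relevance(specific, N), 3))
```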
30 Japanese wood ants might use different inbound strategies depending on outbound visual experiences. [abstract]
Abstract: Ants use visual cues for their navigation. Many studies report that ants cannot use map-like systems; rather, they appear to adopt taxon systems in which one-to-one correspondences between two different landmarks are realized. To this end, ants can use different paths on their inbound trips than on their outbound trips. Indeed, they appear to head for or ignore visual cues depending on whether they have consumed food or not. In ant navigation systems, therefore, arriving at one location after passing a certain visual cue can be strengthened over several training trips, resulting in associating each location with a certain visual landmark. After several foraging trips, ants might head for learned visual cues in order to reach goal locations. In this learning mechanism, there is no room for considering other landmarks when foragers head for certain locations. However, several studies have reported that ants can exhibit latent learning. This phenomenon is related to the question of whether foragers on inbound trips can use visual cues acquired on their outbound trips in order to return to their nest. Ordinarily, several trips strengthen the relationship between the nest location and visual cues; with latent learning, however, cues learned on the outbound trip can be applied directly to inbound navigation. The aim of this study is to check whether ants can establish effective foraging strategies by associating disconnected pieces of information with each other. In this paper, by exposing Japanese wood ants to a right-angle-shaped maze or a linear-shaped maze on their outbound trips, we observed the trajectories of foragers on their initial inbound trips. On inbound trips, the mazes were removed, so foragers could move freely on the test arena. We found that foragers were able to follow their outbound paths when they had been restricted to the right-angle-shaped maze on their outbound trips, compared with the linear-shaped maze.
Tomoko Sakiyama and Yukio-Pegio Gunji
37 Network-Oriented Modelling: a Temporal-Causal Network Modelling Approach to Complex Dynamical Systems [abstract]
Abstract: This contribution presents a Network-Oriented Modelling approach based on temporal-causal networks. The temporal-causal modelling approach incorporates a dynamic perspective on causal relations. The basic elements are networks of nodes and connections, with, for each connection, a connection weight for the strength of the impact of the connection; for each node, a speed factor for the timing of the effect of the impact; and, for each node, the type of combination function used to aggregate multiple impacts on this node. The approach covers specific types of neural networks, but it is more generic; it also covers, for example, probabilistic and possibilistic approaches in which product-based or max- and min-based functions are used. The temporal-causal network modelling format makes it possible to address complex phenomena such as the integration of emotions within all kinds of cognitive processes, internal simulation and mirroring of the mental processes of others, and social interactions. Adaptive networks are also covered, in which the connection weights of the network change over time; this can be used, for example, to model Hebbian learning in adaptive neuro-cognitive models or evolving social interactions. By choosing suitable combination functions, every process that can be modelled as a smooth state-determined system by first-order differential equations can also be modelled by the presented temporal-causal network modelling approach. At the European Conference on AI (ECAI'16), a tutorial is organised about the temporal-causal network modelling approach [1]. Moreover, [2] is a journal paper about the approach, and a book [3] on the approach will be published by Springer in the series Understanding Complex Systems. [1] http://www.few.vu.nl/~treur/ECAI16tutorialTCNM/ [2] Treur, J., Dynamic Modeling Based on a Temporal-Causal Network Modeling Approach. Biologically Inspired Cognitive Architectures, 16, 131-168 (2016) [3] Treur, J., Network-Oriented Modelling: Addressing Complexity of Cognitive, Affective and Social Interactions. Series on Understanding Complex Systems, Springer Publishers, 2016, to appear.
Jan Treur
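To make the basic elements above concrete, here is a minimal sketch of a temporal-causal update in the format dY/dt = η_Y [c_Y(ω_{X1,Y}X1, ..., ω_{Xk,Y}Xk) − Y] used in [2]; the three-node network, its weights and the scaled-sum combination function are illustrative stand-ins, not taken from the talk:

```python
import numpy as np

# Sketch of a temporal-causal network: dY/dt = eta_Y * (c_Y(...) - Y),
# with connection weights, per-node speed factors, and a scaled-sum
# combination function. All numbers here are made up for illustration.

weights = np.array([[0.0, 0.8, 0.0],   # weights[i, j]: impact of node i on node j
                    [0.0, 0.0, 0.9],
                    [0.5, 0.0, 0.0]])
speed = np.array([0.5, 0.7, 0.4])      # speed factors eta_Y (timing of effects)

def step(state, dt=0.1):
    new = state.copy()
    for j in range(len(state)):
        impacts = weights[:, j] * state          # omega_{X,Y} * X for each X
        scale = max(weights[:, j].sum(), 1e-9)
        agg = impacts.sum() / scale              # scaled-sum combination function
        new[j] += dt * speed[j] * (agg - state[j])
    return new

state = np.array([1.0, 0.0, 0.0])
for _ in range(200):
    state = step(state)
print(state)   # activation levels after the causal effects have propagated
```

Swapping the combination function (e.g. for a product- or min-based one) changes the aggregation without touching the rest of the format, which is the genericity the abstract refers to.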
235 Exploring Power Law in School Dropout Rates for the State of Pennsylvania in the United States [abstract]
Abstract: Research on the origins of power laws and on the observation and validation of power-law distributions in empirical data has been active in recent years. This paper presents interdisciplinary research exploring the power-law distribution in school dropout data for the State of Pennsylvania in the United States. Using a fitting method with a goodness-of-fit test based on the Kolmogorov-Smirnov statistic, together with least-squares fitting, the data are tested against two forms of power-law distribution: the survival (rank) distribution ("Zipf distribution") and the complementary cumulative probability distribution (CCDF). In both forms, only the middle range of the data shows power-law behavior, and the upper quantile of the distribution bends down from the power-law fit. This has two implications. First, on the technical side, it reflects the difficulty of fitting empirical data to a power law, since empirical data are limited by the availability of datasets and, in social systems, are bounded by other societal factors. Second, from a social-science perspective, it indicates that dropout rates obey a skewed distribution, which means that the average dropout rate loses its meaning. This paper argues that policy makers and researchers should instead focus on the extreme values of dropout rates to better understand the phenomenon of high dropout rates in certain districts and areas.
Xiaoyi Yuan
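As a rough illustration of the fitting-with-goodness-of-fit workflow described above (run here on synthetic Pareto data, not the Pennsylvania dropout rates; the xmin search grid is an arbitrary choice):

```python
import numpy as np

# Sketch of power-law tail fitting with a Kolmogorov-Smirnov check,
# Clauset-style: pick the xmin that minimizes the KS distance between
# the empirical tail CCDF and the fitted one.

rng = np.random.default_rng(0)
data = (1 - rng.random(5000)) ** (-1 / 1.5)   # Pareto sample, exponent ~ 2.5

def fit_tail(data, xmin):
    tail = data[data >= xmin]
    alpha = 1 + len(tail) / np.sum(np.log(tail / xmin))  # MLE exponent
    x = np.sort(tail)
    emp_ccdf = 1 - np.arange(1, len(x) + 1) / len(x)     # empirical CCDF
    fit_ccdf = (x / xmin) ** (1 - alpha)                 # fitted CCDF
    return alpha, np.max(np.abs(emp_ccdf - fit_ccdf))    # exponent, KS distance

candidates = np.quantile(data, np.linspace(0.1, 0.9, 17))
best = min((fit_tail(data, xm) + (xm,) for xm in candidates), key=lambda t: t[1])
print("alpha=%.2f  KS=%.3f  xmin=%.2f" % best)
```

The "bending down" of the upper quantile that the paper reports would show up here as a KS distance that stays large however xmin is chosen.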
486 Making Complexity Accessible Using 3D Printing [abstract]
Abstract: Complex systems science studies systems made of many components, where key information about system properties and structures is often conveyed through complex two-dimensional visualization. Such 2D visualization can be very difficult to understand, or even entirely inaccessible, for learners whose sensory and/or cognitive modes are not compatible with it, including blind and visually impaired learners and learners who are more successful in understanding abstract materials through physical interaction with concrete, tangible objects. The importance, and difficulty, of making complex visualization more accessible to a broader audience has been noted in the STEM education literature, but little progress has been made on this problem so far. Here we explore possibilities for making complexity more accessible to a broader range of participants through three-dimensional manipulatable representations (3D "physicalization") using 3D printing technologies. We assume that, given optimal design, materials and 3D printing processes, 3D physicalization will substantially improve the learning of complex systems concepts for a variety of learners, compared to using 2D visualization only. We have so far conducted iterative designs of two physicalizations: (1) complex network diagrams, and (2) trajectories of chaotic systems. Several iterations of design and testing have revealed non-trivial design challenges. For example, conventional network embedding algorithms (e.g., spring embedding) are not suitable for physicalization, as they tend to embed important nodes in inaccessible areas; we have resolved this by developing heuristic layout algorithms that place all nodes on a hollow (semi-)sphere. Another example of the challenges is how to make crowded parts haptically discernible (e.g., trajectories of strange attractors). This illustrates the importance of striking the right balance between scientific accuracy and pedagogical clarity when physicalizing complex systems. Future work includes further design optimization, experimental evaluation of the educational effects of 3D physicalization on learners with diverse backgrounds/abilities, and applications to actual complex systems problem solving.
Hiroki Sayama and Prahalad Rao
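The hollow-sphere layout idea mentioned above can be sketched as follows; this Fibonacci-lattice placement is a generic stand-in, not the authors' actual heuristic:

```python
import numpy as np

# Place n network nodes evenly on a hollow sphere so that every node
# remains reachable by touch in a 3D print (hypothetical illustration).

def sphere_layout(n, radius=50.0):
    """Spread n nodes on a sphere using a Fibonacci lattice."""
    golden = np.pi * (3 - np.sqrt(5))        # golden-angle increment
    i = np.arange(n)
    z = 1 - 2 * (i + 0.5) / n                # uniform spacing in height
    r = np.sqrt(1 - z * z)
    theta = golden * i
    return radius * np.column_stack((r * np.cos(theta), r * np.sin(theta), z))

coords = sphere_layout(30)
print(coords[:3])  # x, y, z positions (e.g. in millimetres) for printing
```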

Foundations & Economics  (FE) Session 1

Schedule Top Page

Time and Date: 14:15 - 15:45 on 19th Sep 2016

Room: H - Ontvangkamer

Chair: Jose Mateos

372 Collaborations Across Economic and Scientific Domains [abstract]
Abstract: In this work we develop an agent-based model of strategic link formation in collaboration networks. In our representation, such networks are composed of agents (e.g. scientists or companies) connected by collaborative links that allow knowledge flow. Previous research has found that the growth of collaboration networks is driven by mechanisms that are both dependent on (endogenous) and independent of (exogenous) collaboration patterns. Our model includes both, allowing their relative importance to be tuned and thereby permitting investigation of the effects and relation of both mechanisms. In addition, the model contains ingredients derived from previous empirical observations, such as the heterogeneous propensity of the agents to initiate collaborations and the presence of circles of influence, i.e. clusters of agents sharing a given membership attribute. We first test our model against a dataset listing around 15,000 inter-firm Research and Development (R&D) alliances (Thomson Reuters SDC Platinum), and then extend our validation to a large dataset of co-authored studies in scientific disciplines, listing over 73,000 papers. Remarkably, by fitting only three macroscopic network properties through a Maximum Likelihood approach, our model reproduces a number of microscopic measures characterizing the network topology. Furthermore, by quantifying the link probabilities from/to new-entrant and established agents, we find that the endogenous mechanisms are predominant over the exogenous ones in the formation of new collaborations. This result holds for both R&D and co-authorship networks, highlighting a strong similarity. At the same time, we detect a notable difference between the two systems: in the R&D domain, newcomers tend to enter the networks by forming links with established agents, while in the co-authorship domain newcomers prefer to link with other newcomers. We argue that this difference is due to higher entry barriers in economic systems than in academia.
Mario V. Tomasello, Giacomo Vaccario and Frank Schweitzer
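A toy sketch of the endogenous/exogenous mixing idea described above (the mixing parameter and attachment rules are invented illustrations, not the fitted model):

```python
import random

# With probability `endo` a new link attaches proportionally to past
# collaborations (endogenous mechanism); otherwise it attaches to a
# uniformly random agent (exogenous mechanism). Parameters are made up.

def grow(n_agents=500, n_links=2000, endo=0.7, seed=1):
    random.seed(seed)
    degree = [0] * n_agents
    endpoints = []                  # one entry per link end, for sampling
    for _ in range(n_links):
        a = random.randrange(n_agents)
        if endpoints and random.random() < endo:
            b = random.choice(endpoints)        # preferential (endogenous)
        else:
            b = random.randrange(n_agents)      # random (exogenous)
        if a != b:
            endpoints += [a, b]
            degree[a] += 1
            degree[b] += 1
    return degree

print(max(grow()))  # the degree tail grows heavier as `endo` increases
```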
356 Predicting link dissolution in an interfirm buyer-seller network [abstract]
Abstract: The evolution of complex networks is driven by nodes constantly forming and dissolving links. While link formation prediction has attracted a lot of attention, its counterpart, link dissolution prediction, is a less studied problem. In particular, link dissolution prediction lacks a versatile method: many previous studies rely on domain-specific information that cannot be obtained in other settings. In this presentation we present a versatile prediction method that focuses on the dynamics of groups (i.e. blocks, communities) instead of the dynamics of individual nodes. Our method is tested on interfirm buyer-seller network data, where link dissolution is extremely important: losing a link implies losing a customer and is hence directly related to the survival of a node (i.e. a firm). We show that our method improves predictive accuracy compared to state-of-the-art methods. We also confirm a previous finding that link dissolution prediction is a more difficult task than link formation prediction.
Ryohei Hisano
182 Systemic Risk Assessment and Endogenous Valuation in Financial Networks [abstract]
Abstract: The financial crisis has shown how interconnections among financial institutions can lead to large amplifications of small shocks. After the crisis, consensus emerged among practitioners and policymakers that a large fraction of losses in the banking system was due to the endogenous revaluation of inter-financial claims. This accounting principle, known as Credit Valuation Adjustment (CVA), implies that banks revalue their assets to reflect the deterioration in the creditworthiness of counterparties. In contrast, most current work on stress-testing routinely claims that default contagion through interbank exposures is unlikely and negligible. In order to address this paradox, we develop a general network-based stress-testing framework to compare, both analytically and empirically, the five most popular financial distress propagation models. Analytically, we show that the widely adopted Eisenberg-Noe model (and subsequent extensions) fails to account for CVA and leads to severe underestimation of financial distress, as we prove by exhibiting upper bounds on contagion losses. Furthermore, we prove that in the Eisenberg-Noe model the banking system, despite being modeled as a network, is essentially treated as a single aggregate entity, thus ruling out the possibility for complexity to emerge. On the other hand, models that implement CVA, such as the DebtRank algorithm, are shown to be able to account for larger systemic vulnerabilities. These relations among the models are illustrated empirically on a dataset of the top 50 EU banks, between January 1st, 2006 and September 30th, 2015, by comparing the models’ behavior across several dimensions, i.e. magnitude of shocks, type of shocked asset and recovery rates. Furthermore, our results indicate that the DebtRank algorithm makes it possible to identify build-ups of vulnerability in the periods immediately preceding crises. Exploiting this feature, we use daily market capitalization data to implement a quasi real-time assessment of systemic vulnerability in the EU banking system.
Stefano Battiston, Marco D'Errico and Gabriele Visentin
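For readers unfamiliar with it, here is a compact sketch of DebtRank-style distress propagation in the spirit of Battiston et al. (2012); the exposure matrix is a made-up toy, not the EU-bank data used in the talk:

```python
import numpy as np

# Toy DebtRank sketch: distressed banks pass relative equity losses to
# their creditors exactly once, then become inactive. W is invented.

W = np.array([[0.0, 0.3, 0.1],   # W[i, j]: impact of i's distress on j
              [0.2, 0.0, 0.4],
              [0.1, 0.2, 0.0]])

def debtrank(W, shocked):
    n = len(W)
    h = np.zeros(n)                 # relative equity loss of each bank
    h[shocked] = 1.0
    state = ["U"] * n               # Undistressed / Distressed / Inactive
    state[shocked] = "D"
    while "D" in state:
        dh = np.zeros(n)
        for i in range(n):
            if state[i] == "D":
                dh += W[i] * h[i]   # propagate losses to creditors
                state[i] = "I"      # each bank propagates only once
        h = np.minimum(1.0, h + dh)
        for j in range(n):
            if state[j] == "U" and h[j] > 0:
                state[j] = "D"
    return h

print(debtrank(W, shocked=0))  # equity losses caused by shocking bank 0
```

In an Eisenberg-Noe-style clearing model, by contrast, losses are transmitted only at default, which is one way to see why it bounds contagion from below relative to CVA-based revaluation.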
197 Enhanced extraction of weighted networks backbone [abstract]
Abstract: The problem of information filtering in weighted complex networks is crucial for better understanding the relevant features of a system. Indeed, a network characterized by a high edge density may be unmanageable by the usual tools, such as community detection algorithms or visualization techniques. A common solution to this issue is the application of a threshold on the edge weights; however, this procedure introduces a typical scale into a heterogeneous system, thereby destroying its multiscale nature. In this context, we have developed a technique to prune non-significant links in a weighted network. This filter assigns a p-value to each edge of the graph, based on the level of compatibility of the observed link with its expected value under a given null model. Here we consider the so-called Enhanced Configuration Model (ECM). This null model preserves both the degree and the strength sequences of the empirical weighted network, and the resulting filtering technique (henceforth the ECM filter) is therefore more refined than other edge sparsification methods based on strengths only. We have applied our filter to the International Trade Network, namely the system displaying the import-export relations between countries, from both a temporal and a multiplex perspective. We have compared the outcome with the results obtained from previously defined pruning techniques and studied how various structural properties change when different levels of significance (p-values) are considered. Furthermore, we have analysed other infrastructural and biological systems, showing that the ECM filter is suitable for a broad range of networks displaying different levels of weight heterogeneity and edge density. We have shown that our method is able to correctly extract the backbone of the considered weighted networks and is also computationally efficient.
Valerio Gemmetto, Alessio Cardillo and Diego Garlaschelli
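As a baseline for the strength-only methods the abstract contrasts with, here is the disparity filter of Serrano et al. (2009) in miniature; the ECM filter described in the talk replaces this null model with the Enhanced Configuration Model, which preserves degrees as well as strengths:

```python
import numpy as np

# Disparity filter (strength-only baseline): under the null model, a
# node's strength is split uniformly at random among its k edges; the
# p-value is the probability of observing a normalized weight this large.

def disparity_pvalues(weights):
    """p-values for one node's edges, given its list of edge weights."""
    w = np.asarray(weights, dtype=float)
    k = len(w)
    if k < 2:
        return np.ones_like(w)
    p = w / w.sum()                      # normalized edge weights
    return (1 - p) ** (k - 1)            # tail probability under the null

# Example: a node with one dominant and three weak connections.
pvals = disparity_pvalues([10.0, 0.5, 0.4, 0.3])
print(pvals)
print(pvals < 0.05)   # True marks edges kept in the backbone at this level
```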
573 Modeling the price dynamics of a stock market: stability, market efficiency, and information processing [abstract]
Abstract: The question of what drives stock price movements is fundamental in the theory of financial markets, and has profound implications for forecasting and managing crises, as well as for foundational economic issues. We describe a theoretical and empirical study of stock market returns based on linear response theory, and propose a general model of return dynamics based on agent behavior that accurately describes the return responses observed in real-world markets. We assume a simple set of dynamical equations and use methods from statistical physics to uncover distinct “calm” regimes, where profits and losses are relatively small, and “frantic” regimes, where bubbles or crashes may occur. These regimes are separated by a phase transition, controlled by an empirically accessible model parameter that we introduce. We observe evidence of this transition in the returns of a wide range of financial institutions across multiple time periods. Moreover, we introduce several techniques for modeling and understanding the mechanisms by which a market may digest external information, and identify key parameters which encode both the response to outside signals and the large-scale stability of the market as a whole. This allows us to directly address, theoretically and empirically, important questions such as market efficiency, and to propose a connection between efficient or inefficient market operation and overall systemic stability. To test the influence of news on market behavior we use the large English-language news dataset underlying the NewStream platform (http://newstream.ijs.si/), and develop machine learning methods to identify exactly which subsets of the news data influence market operation. Our work offers a novel theoretical framework for probing how markets respond to and digest external news, how this process may result in instabilities, and also identifies specific parameters which may serve as an early-warning tool for detecting system-wide dynamics which lead to crashes.
Asher Mullokandov, Nima Dehmamy, Igor Mozetic, Petra Kralj Novak, Irena Vodenska, Gene Stanley, Sebastian Gemsheim and Jose Morales

Foundations & Socio-Ecology  (FS) Session 1

Schedule Top Page

Time and Date: 14:15 - 15:45 on 19th Sep 2016

Room: I - Roland Holst kamer

Chair: Debraj Roy

18 Symmetric and Asymmetric Tendencies in Stable Complex Systems [abstract]
Abstract: Complex systems must subscribe to certain structures in order to be stable. By obtaining eigenvalue bounds on the Jacobian matrix at an equilibrium point, we show that stable complex systems will favor mutualistic and competitive interactions that are asymmetric (non-reciprocative) and antagonistic interactions that are symmetric (reciprocative). This prediction is in line with real-world ecological observations. Furthermore, we show that increasing dispersion in the interaction strengths has a destabilizing effect, and that this effect is more pronounced for mutualistic and competitive interactions than for antagonistic interactions. This prediction is also consistent with real-world ecological observations. Finally, we demonstrate that these results can be used to make algorithms for stabilizing an equilibrium point more efficient. The generality of the analysis presented suggests that our findings should not be limited to ecological systems.
James Tan
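The tendency described above can be checked numerically with toy random Jacobians, in the spirit of May/Allesina-Tang random-matrix analyses (all parameters below are invented, and this is not the paper's bound-based argument):

```python
import numpy as np

# Compare the leading eigenvalue of random Jacobians whose off-diagonal
# pairs are reciprocated antagonistic (+/-) versus reciprocated
# mutualistic (+/+) interactions of equal magnitude.

rng = np.random.default_rng(7)

def leading_eig(n=250, c=0.2, sigma=0.5, mode="antagonistic"):
    J = -np.eye(n)                        # self-regulation on the diagonal
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < c:
                a, b = np.abs(rng.normal(0, sigma, 2))
                if mode == "antagonistic":
                    J[i, j], J[j, i] = a, -b    # predator-prey sign pair
                else:
                    J[i, j], J[j, i] = a, b     # mutualistic sign pair
    return np.max(np.linalg.eigvals(J).real)

print("antagonistic:", leading_eig(mode="antagonistic"))
print("mutualistic :", leading_eig(mode="mutualistic"))
```

The reciprocated mutualistic case pushes the leading eigenvalue far to the right, consistent with the claim that stability favors symmetric antagonism but asymmetric mutualism/competition.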
378 Coherent and incoherent strategists: Evolutionary dynamics on multiplex networks [abstract]
Abstract: Cooperation is a very common yet not fully understood phenomenon in natural and human systems. The introduction of a network structure within the population is known to affect the outcome of cooperative dynamics, as described by the Game Theory paradigm, allowing for the survival of cooperation in adverse scenarios. Recently, the introduction of multiplex networks, where individuals can adopt different strategies in different layers, has yet again modified the expectations for the outcome of the Prisoner’s Dilemma game compared to the single-layer case: for example, it is known that the average level of cooperation is slightly lower in the multiplex scenario for very low values of temptation, but also that cooperation is able to survive up to higher values of the temptation. These phenomena, however, are not well understood at a microscopic level, and much remains to be studied regarding the rest of the social dilemmas in the TS plane on multiplex networks. We have explored the microscopic organization of the strategies across layers and have found some remarkable and previously unknown phenomena that are at the root of the differences between monoplex and multiplex networks. Specifically, we have found that in the stationary state, at any given time step, there are individuals that play the same strategy in all layers ("coherent") and others that do not ("incoherent"). We have found that this group of incoherent players is responsible for the surprising lack of full cooperation in the Harmony Game on multiplex networks, never observed before, as well as for a higher-than-expected survival of cooperation in some regions of the other three social dilemmas. Moreover, we are able to prove mathematically the existence of defectors in the Harmony Game on multiplex networks, calculating the probability that the necessary topological configuration occurs for uncorrelated ER layers.
Joan T. Matamalas, Julia Poncela-Casasnovas, Sergio Gómez and Alex Arenas
484 Equivalence Classes in Complex System Dynamics [abstract]
Abstract: We present an unsupervised learning technique to identify coherent behavior patterns in heterogeneous multi-dimensional time-series data and apply it to the results of select agent-based models as well as empirical datasets from economics and neuroscience. Many systems of interest in complexity science are non-equilibrium in nature, and for others it is the out-of-equilibrium dynamics which reveal complexity. By mapping the phase space of such systems as multi-dimensional time-series data and capturing the revealed dynamics in an empirically-derived Markov model we can identify recurring patterns in the behavior through structural network properties. Specifically, applying a weighted and directional diffusion-based community detection algorithm identifies sustainable behavioral regimes; i.e., collections of states for which there is a greater likelihood to stay within than to leave. In combination with other likelihood measures and structural features we develop a partial categorization of behavioral equivalence classes that can be compared across a variety of systems from different domains. First we explain the technique through stylized two-dimensional motion data. After laying that groundwork we present the analyses of data from three sources: polarization measures from an agent-based simulation of reason-based argument, multiple characteristics of players in an online social game, and neural activation patterns in the motor cortex. Each dataset embodies its own modeling challenges, but our data-driven approach is parsimonious in its application across these systems. As a result one can compare the qualitative and quantitative behavioral characteristics of these disparate systems in a common language. The ability to capture, identify, and describe quasi-attractors and punctuated equilibria, as well as the transient behaviors in between, with an unsupervised and minimally-parameterized technique fosters deeper understanding of a broad class of complex behaviors including a refined categorization of equivalence classes within that broad class.
Aaron Bramson and Atsushi Iriki
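The first step of the pipeline above, turning a multi-dimensional time series into an empirically-derived Markov model, might look like the following sketch (toy random-walk data and a naive grid discretization; the talk's datasets and binning choices are not reproduced):

```python
import numpy as np

# Discretize a 2D trajectory into grid states and estimate the empirical
# Markov transition matrix; community detection on this weighted,
# directed graph would then expose sustainable behavioral regimes.

rng = np.random.default_rng(3)
series = np.cumsum(rng.normal(size=(2000, 2)), axis=0)   # toy phase-space path

bins = 8
idx = np.stack([np.digitize(series[:, d],
                            np.linspace(series[:, d].min(),
                                        series[:, d].max(), bins))
                for d in range(2)], axis=1)
states = idx[:, 0] * (bins + 2) + idx[:, 1]              # flatten grid cells

labels, coded = np.unique(states, return_inverse=True)
n = len(labels)
T = np.zeros((n, n))
for a, b in zip(coded[:-1], coded[1:]):
    T[a, b] += 1                                         # count transitions
T = T / np.maximum(T.sum(axis=1, keepdims=True), 1)      # row-normalize

print(n, "states; first transition row:", T[0].round(2)[:5])
```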
204 Beyond Communities: Dynamical Markov Modules [abstract]
Abstract: Mesoscopic structures have attracted researchers' attention in the network science community since its earliest stages, triggering the production of a wide range of community detection algorithms. Research has since covered other types of mesoscopic structure, from core-periphery and bi-/multi-partite structures to the more general stochastic block model. We propose a new module detection algorithm based on the system dynamics, which avoids an a priori choice of mesoscopic structure. The dynamics of a Markov process on a network are determined by the network's topology but, when aggregated to the underlying communities or modules, the resulting kinetics can exhibit unwanted memory effects. We provide a methodology to consistently check whether the detailed Markov chain is lumpable to a mesoscopic modular structure, i.e. a partition of the original network. Focusing on the aggregated dynamics, the flow of information from the past of the process toward its future, measured as mutual information, provides a proxy for the lumpability of the process. The deeper in the past the process provides information about its future, the more memory effects contaminate its aggregated dynamics. We propose a partition detection algorithm which minimizes these memory effects. In both synthetic and real-world networks it successfully detects usual community structures but also extends to other kinds of mesoscopic structure, such as core-periphery and stochastic block models, providing a unified and general approach to network modularity. This methodology opens the door to a new definition of mesoscopic structure focused on the dynamical properties of the process and the role played by each node. Acknowledgments: This work was supported by the Belgian Programme of Interuniversity Attraction Poles, initiated by the Belgian Federal Science Policy Office, and an Action de Recherche Concertée (ARC) of the French Community of Belgium.
Mauro Faccin and Jean-Charles Delvenne
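The classical lumpability condition underlying this approach is easy to test directly: a chain is lumpable with respect to a partition iff every state in a block has the same total probability of jumping into each other block. A small sketch on an invented chain:

```python
import numpy as np

# Exact-lumpability test: within each block, the per-state probabilities
# of jumping into every target block must coincide. The 4-state chain
# below is made up so that one partition is lumpable and another is not.

T = np.array([[0.1, 0.6, 0.2, 0.1],
              [0.5, 0.2, 0.2, 0.1],
              [0.1, 0.1, 0.3, 0.5],
              [0.2, 0.0, 0.4, 0.4]])

def lumpability_error(T, partition):
    """Max spread of block-jump probabilities; 0 means exactly lumpable."""
    err = 0.0
    for block in partition:
        into_blocks = np.array([[T[s, list(b)].sum() for b in partition]
                                for s in block])
        err = max(err, np.ptp(into_blocks, axis=0).max())
    return err

print(lumpability_error(T, [{0, 1}, {2, 3}]))   # 0.0 -> lumpable partition
print(lumpability_error(T, [{0, 2}, {1, 3}]))   # > 0 -> memory effects appear
```

The algorithm in the talk goes further: rather than testing exact lumpability, it searches for the partition minimizing the memory (past-future mutual information) of the aggregated process.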
104 Towards understanding the interactions between antimicrobial usage and pig health using an agent-based model [abstract]
Abstract: The aim of this paper is to demonstrate the application of complex systems to support policy making. A reduction in antimicrobial usage in livestock is needed to decrease the antimicrobial resistance threatening human and animal health. Antimicrobial usage results from an interaction of biological processes and farmers' decisions. These decisions are driven by economic considerations, the disease status of the herd, motivations, cognitions and social networks. Antimicrobial usage affects the transmission dynamics of infectious diseases. Little is known about the integrated influence of these economic, social and epidemiologic aspects. We constructed an agent-based model capturing the essentials of antimicrobial usage in Dutch fattening pig farming. The farmers make decisions based on their observations of health problems on the farm and on their beliefs and motivations, influenced by information on the public health effects of antimicrobial usage, pressure from peers and incentives arising from policies. Each farm consists of a number of pens with pigs, which are healthy or diseased with an endemic or emerging disease. The agent-based model was calibrated to data on antimicrobial usage and endemic disease prevalence. Data on measures to reduce antibiotic usage, their costs and their effects were taken from the literature and expert information. Without additional measures, farmers might adopt the less favourable strategy of waiting to treat individual animals until group treatment is required, in which case antimicrobial usage does not decrease. Changes in farm management or investments can compensate for this effect and lead to a reduction in antimicrobial usage. These effects emerge from individual processes. Policy interventions such as subsidies for investments in housing systems, promotion of particular management practices, and taxing antibiotic use can potentially change the usage of antimicrobials. Complex interactions between system components and actors need to be included in order to satisfactorily model the effect of policy interventions.
Egil A.J. Fischer, Thomas J. Hagenaars, Natalia I. Valeeva and Tim Verwaart

Economics & Socio-Ecology  (ES) Session 1

Schedule Top Page

Time and Date: 14:15 - 15:45 on 19th Sep 2016

Room: J - Derkinderen kamer

Chair: Dexter Drupsteen

153 Analysis of collaboration networks enhanced through NRF early career funding program in Singapore [abstract]
Abstract: The objective of our research is to investigate how early-career research funding enhances the output of National Research Foundation (NRF) Fellows, by studying their co-authorship networks and by visualizing the emerging research topics of the NRF 2008 cohort through topic modeling of their paper abstracts. The NRF Fellowship offers substantial grants worth up to S$3 million (~ € 1.9 M) over 5 years and is open to international applications without restriction on nationality. Since 2008, the NRF Fellowship has attracted over a thousand top scientists and awarded nearly a hundred fellowships to date. We analyze collaboration patterns by performing network analysis on the 2008 NRF cohort. In addition, we study network influence and robustness by examining the graph density and degree distribution of the co-authorship networks. Collaboration patterns are also examined at a geographic level, to understand whether the research collaborations are local in nature or distributed at a global level. Lastly, topic modeling with Latent Dirichlet Allocation (LDA) is used as a potential tool for identifying emerging topics of research so that research funding agencies can better support these areas. An increased density of collaborations can be clearly observed for NRF Fellows in the co-authorship network graphs during the Fellowship period. The geographical diversity of collaboration for Fellows is also higher than that of Finalists (who did not receive the award), as evidenced by Circos visualization. Lastly, a visualization of the topic models shows that NRF Fellows are conducting research on emerging topics from genetic engineering to graphene. Traditionally, bibliometric methods have been utilized to measure the output of researchers. We have used alternative methods from network theory and computer science to analyze the dividends of an early-career research funding program, which leads to greater cross-border academic collaboration and to research on emerging topics that may yield critical breakthroughs for future industries.
Anand Gautam, Giovanni Ko, Walter Theseira and Michael Khor
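The LDA step can be sketched in a few lines with scikit-learn (stand-in "abstracts"; the NRF corpus and the authors' preprocessing are not reproduced):

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy LDA run on four fake abstracts; with a real corpus one would tune
# n_components and the vocabulary filtering.
docs = [
    "graphene transistor fabrication and electronic transport",
    "gene editing for engineered bacterial pathways",
    "graphene oxide membranes for water filtration",
    "synthetic biology and genetic circuit design",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)                       # document-term counts
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

vocab = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    print("topic", k, ":", [vocab[i] for i in topic.argsort()[-4:]])
```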
183 A co-evolutionary approach to sustainability transitions in a two sector economic growth model. [abstract]
Abstract: The Anthropocene is thought of as the age of humans, in which, in the context of a World-Earth [or social-ecological Earth] system, man-made processes and environmental dynamics can no longer be treated separately. It is at the core of human agency to keep the trajectory of this coupled system within the boundaries of a safe and just operating space, to ensure prosperity for future generations. In all the common business-as-usual scenarios of future economic development and greenhouse gas emissions, this is not likely to happen. Therefore, we aim at investigating sustainability transitions towards independence from fossil resources. The German Energiewende has proven that, besides economic realities, social dynamics such as opinion spreading can play a significant role in the choice of energy sources and thereby in the resource dependence of a society. Consequently, we study the co-dependence and co-evolution of social, economic and resource-dynamic processes. Since we are interested in the qualitative behavior of this complex system, such as transient behavior and phase transitions, rather than in quantitative predictions, we use a conceptual model for our studies. This model combines a resource-dependent two-sector economic growth model with heterogeneous households, fast-and-frugal heuristics for household decision making, and an adaptive network approach to opinion spreading amongst households. We use analytic and numeric tools to analyze the model's state space, to gain insight into its topological structure and phase transitions depending on parameter choices, and to point out trajectories that lead to favorable resource dependencies.
Jakob Kolb and Jobst Heitzig
386 Inference of phylogenetic structure from the interaction matrices of mutualistic ecosystems [abstract]
Abstract: Nestedness is a feature of many bipartite interaction networks found in nature and the social sciences. In nested bipartite networks, with two sets of nodes and edges only between nodes belonging to different sets, specialist nodes, namely those with fewer interactions, interact mainly with generalist nodes. Similar structures arise in trade networks as well (e.g. country-product bipartite networks), and their study is the main focus of the relatively new field of Economic Complexity. In both ecosystems and economics, such structures of interaction result from an evolutive process, and here we show to what extent it is possible to leverage them to infer phylogenetic relations among species, using methods devised in the context of economics. We project the bipartite network into two monopartite ones based on similarity of interactions and consequently filter the network structure to keep only the most important links. We use independently collected phylogenetic data to assess the accuracy of the proposed methods in identifying phylogenetically related species by using only information on their interactions. We compare the proposed methods against standard network approaches in their ability to isolate communities and topological structures of phylogenetically close species. Previous work on mutualistic interaction networks has already pointed out correlations between degree sequences and phylogenetic information. Here we show that in most cases the methods we use are able to extract information that is lost in randomized copies of the interaction networks where only the degree sequence is conserved.
Andrea Tacchella and Giacomo Banti
435 A heterogeneous agent model of transition towards a circular economy [abstract]
Abstract: In this paper we analyse the transition towards a circular economy as a complex adaptive system, focusing on the contribution of underlying demand-side factors. To do this, a heterogeneous agent model, HAM (Brock and Hommes, 1997), is developed with a population of boundedly rational heterogeneous agents choosing between two varieties of a consumer good or service: a 'circular' and a 'non-circular' type. The model is further extended to include the effect of word of mouth by linking the HAM to a percolation model (Frenken et al., 2012). The model is highly nonlinear due to evolutionary switching between strategies and can exhibit a wide range of dynamical behavior, from a unique stable steady state to complex dynamics with multiple equilibria, for which varying the parameters can change the qualitative structure of the set of solutions, with completely different economic outcomes. To classify these outcomes for different parameter values we use bifurcation analysis. Using this analysis we seek to answer the following question: what are the necessary behavioral and market conditions for obtaining a stable market share of the 'circular' type? An Environmentally Extended Input-Output database is used to calculate resource-efficiency indicators for these outcomes. The results and methods developed in this paper are applied to the case of the second life of tires in the Netherlands.
Saeed Moghayer, Trond Husby and Hettie Boonman
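The switching mechanism at the heart of a Brock-Hommes HAM can be sketched as a discrete-choice update; the payoff functions below are invented stand-ins for the paper's actual specification:

```python
import numpy as np

# Discrete-choice (logit) switching between two consumption strategies:
# the market share of the 'circular' variety follows the relative
# payoffs, with `beta` the intensity of choice. All numbers are toys.

beta = 2.0                     # intensity of choice
share = 0.1                    # initial share of the circular variety

def payoffs(share):
    # Hypothetical payoffs: the circular good improves as it spreads
    # (scale effects), the non-circular one slowly loses its edge.
    return 1.0 + 0.8 * share, 1.2 - 0.3 * share   # circular, non-circular

for t in range(50):
    u_c, u_n = payoffs(share)
    share = np.exp(beta * u_c) / (np.exp(beta * u_c) + np.exp(beta * u_n))

print("long-run circular market share: %.3f" % share)
```

Varying `beta` and the payoff parameters is precisely where bifurcations appear: for some values the map has a unique fixed point, for others multiple equilibria or cycles.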
240 An evolving network model for the structure of visitors and services in a tourism destination [abstract]
Abstract: We present a growing network model to explain visitors' behavior in a tourism destination. Specifically, we build an evolving bipartite network with two categories of nodes: lodgings (H) and services/attractions (S). For every lodging, we assume a single representative tourist whose behavior is the average of the behavior of all tourists hosted in the lodging. A link between a lodging and a service appears if the representative tourist visits/enjoys the service during his/her stay in the destination. We assume that links are unweighted, undirected and permanent over time. The latter assumption means that, once a service is visited by a sufficiently large number of tourists staying in a certain lodging, the preference for this service is maintained by successive guests. The bipartite network grows similarly to previous models for collaboration networks (e.g. Ramasco et al., Phys. Rev. E, 70, 036106, 2004). At any time, one new lodging and m new services are created in the destination. We assume that the representative tourist of each new lodging visits c different services, including old and new ones, partly following preferential attachment and partly a random rule. We show analytically that the long-term degree distribution of services in the bipartite network follows a shifted power-law distribution. This is also the case for the degrees of the one-mode projections. We have also tested the model with real data. Specifically, we collected recommendations of lodgings and services in the destination of Maspalomas-Gran Canaria (Spain) published by tourists on tripadvisor.com during the period 2005-2016, with a sample size of around 78,000 opinions on 222 hotels and 768 services/attractions. To the best of our knowledge, this is the first growing network model to represent the structure of supply and demand in a tourism destination.
Juan M Hernandez and Christian González
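A minimal sketch of the growth rule just described (toy sizes; the mixing parameter and counts are illustrative, not the fitted values):

```python
import random

# Each step adds one lodging and m new services; the new lodging's
# representative tourist picks c distinct services, partly by
# preferential attachment and partly at random. Parameters are invented.

def grow(steps=2000, m=3, c=6, p_pref=0.7, seed=4):
    random.seed(seed)
    degree = []        # degree of each service node
    ends = []          # one service id per existing link (for sampling)
    for _ in range(steps):
        degree.extend([0] * m)                       # m new services appear
        chosen = set()
        while len(chosen) < min(c, len(degree)):
            if ends and random.random() < p_pref:
                chosen.add(random.choice(ends))      # preferential part
            else:
                chosen.add(random.randrange(len(degree)))  # random part
        for s in chosen:
            degree[s] += 1
            ends.append(s)
    return degree

deg = grow()
print("max service degree:", max(deg))   # heavy, shifted-power-law-like tail
```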

Urban  (U) Session 1

Schedule Top Page

Time and Date: 14:15 - 15:45 on 19th Sep 2016

Room: R - Raadzaal

Chair: Neil Huynh Hoai Nguyen

390 Spatio-temporal analysis of maritime shipping and its impact on international trade. [abstract]
Abstract: Maritime shipping accounts for 90% of the volume of worldwide trade, making it the most important mode of cargo transportation. The global time-evolving network of cargo ship movements therefore gives an accurate representation of international trade and provides valuable insights into the development and evolution of economic relations. Here we conduct a novel longitudinal multi-layer network analysis on a unique dataset comprising 30 years of worldwide shipping movements, the largest and most comprehensive of its kind. We use state-of-the-art temporal network analysis tools to study world trade from the perspectives of spatial, temporal and complex networks, uncovering patterns of network growth, evolution and shifts with respect to different commodity classes. We uncover correlations between the shipping patterns of different commodities over time and investigate the extent to which these correlations can be explained by a number of different factors, such as geospatial constraints (e.g. a preference to trade across shorter distances) or economic ones (e.g. countries' comparative advantage policies, commodity availability and shipping tariffs). We then examine the effects that exogenous events, such as the establishment or dissolution of trade agreements, embargoes and diplomatic relations, have on the structure of the overall trade network. Moreover, we investigate the structure of the overall network in order to detect possible structural weaknesses, conditional on the supremacy of some ports, and their resolution.
Claire Lagesse, Francesca Lipari, Leto Peel and César Ducruet
47 Sustainable urban transport: complex systems evaluation [abstract]
Abstract: Visions of a walkable city will meet the objectives of much environmental legislation by reducing the pollution and carbon emissions associated with highly congested city transport routes. Other desirable outcomes will emerge, such as improved health and fitness through increased walking and cycling, increased personal safety through reduced accidents in highly pedestrianised areas, greater community cohesion through increased face-to-face encounters, and greater vibrancy and innovation through an increased diversity of contacts. But municipal planning to progressively reduce vehicle numbers in the urban environment cannot be considered in isolation from the places people need to go for jobs and socializing, and from the goods and waste that need transporting in the city. Curtailing personal mobility will need to be perceived as a new measure of wealth, substituting uncertainties about congested commute times with certainties about journey times, safety and healthier lifestyles. City transition must reflect spatial and temporal states, i.e. the densities of different places and their attractiveness at different times. The spatial and temporal connectedness between the origins and destinations of journeys and the wider multi-modal connectedness of the greater city will need consideration, as will places to store personal vehicles. This research takes a complex systems perspective on sustainable transport planning. It identifies existing methods of transport evaluation and adapts the method best able to accommodate complexity, to assess whether the city is on track to meet its walkable-city targets. The research then identifies areas for interventions that would help to achieve the targets or adapt them. The method aims to be transferable to other functional challenges in the city, such as sustainable energy and sustainable food systems, which are interdependent with sustainable transport. The method also aims to accommodate scenarios of city change, such as an aging population, weather extremes, and smart city technologies such as the Internet of Things and autonomous vehicles.
Liz Varga, Peter Allen, Hendrik Reefke and Laszlo Torjai
571 Complex Systems Analysis for the Quantitative Life Cycle Sustainability Assessment [abstract]
Abstract: Practical application of the Life Cycle Sustainability Assessment (LCSA) framework requires the integration of various methods, tools, and disciplines. However, there is a lack of cohesion between the environmental, social, and economics disciplines, as well as between methods and tools. To address these challenges, we present a comprehensive discussion of the overarching role of systems thinking in bringing tools, methods, and disciplines together, and provide practical examples from earlier studies that have employed various system-based methods. We discuss the importance of integrated system-based methods for the advancement of the LCSA framework in the following directions: (1) regional- and global-level LCSA models using multi-region input-output analysis, capable of quantitatively capturing macro-level social, environmental, and economic impacts; (2) dealing with uncertainties in LCSA results during the multi-criteria decision-making process; and (3) integration of system dynamics modeling to reveal complex interconnections, dependencies, and causal relationships between sustainability indicators. We suggest that LCSA practitioners and researchers adopt systems thinking, defined as the ability to see the parts of bigger mechanisms, recognizing patterns and interrelationships, and restructuring these interrelationships in more effective and efficient ways. Adoption of systems thinking can help with the dissemination of the LCSA framework, increase its applicability, and bridge the environmental, economic, and social sciences. Developing a common systems language and a shared understanding of the inherent interconnectedness and complexity of sustainable development can be very helpful for the cohesion of different disciplines and the adoption of systems thinking. Future work on developing methods and tools should help the scientific community move from approaches based on isolated disciplines towards inter- and trans-disciplinarity and a holistic/systematic perspective, in order to address emerging sustainability problems.
Murat Kucukvar and Nuri Onat
430 Planning with uncertainty. How the Complex Sciences inspire an adaptive approach to urban planning. [abstract]
Abstract: The development trajectories of cities include a wide variety of uncertainties that challenge spatial planners and policy makers in guiding urban development towards socially desired outcomes (Albrechts, 2010; Van Woerkum et al., 2011). In response, ideas and concepts derived from complexity science are being explored in the planning literature to develop enhanced ways of dealing with these uncertainties (Portugali, 2011; Batty, 2013). Taking a complexity science perspective, this paper presents a dynamic, time-sensitive understanding of spatial transformations that helps to clarify the interconnected and changeable nature of the underlying processes. The paper continues by exploring an adaptive approach to planning that strengthens the capacity of urban areas to respond to and incorporate both the expected and unexpected changes these processes give rise to. The argument is made that adaptive planning first and foremost implies a focus on influencing and creating conditions, followed by attention to content and process. References: Albrechts, L. (2010). More of the same is not enough! How could strategic spatial planning be instrumental in dealing with the challenges ahead? Environment and Planning B: Planning & Design, 37: 1115-1127. Batty, M. (2013). The New Science of Cities. Cambridge: MIT Press. Portugali, J. (2011). Complexity, Cognition and the City. Understanding Complex Systems. Berlin: Springer-Verlag. Van Woerkum, C., Aarts, N., & Van Herzele, A. (2011). Changed planning for planned and unplanned change. Planning Theory, 10(2): 144-160.
Ward Rauws
457 Observability transition in real networks [abstract]
Abstract: We consider the observability model in networks with arbitrary topologies. We introduce a system of coupled nonlinear equations, valid under the locally tree-like ansatz, to describe the size of the largest observable cluster as a function of the fraction of directly observable nodes present in the network. We perform a systematic analysis on 95 real-world graphs and compare our theoretical predictions with numerical simulations of the observability model. Our method provides almost perfect predictions in the majority of the cases, even for networks with very large values of the clustering coefficient. Potential applications of our theory include the development of efficient and scalable algorithms for real-time surveillance of social networks, and monitoring of technological networks.
Yang Yang and Filippo Radicchi
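The observability model itself is simple to simulate directly, which gives a brute-force check of the theoretical predictions (this sketch is not the authors' system of coupled equations; graph and parameters are illustrative):

```python
import random
import networkx as nx

# Observability model: a fraction phi of nodes is directly observable;
# a node is observable if it or one of its neighbors is directly
# observable. We measure the largest connected observable cluster.

def largest_observable_cluster(G, phi, seed=0):
    random.seed(seed)
    direct = {v for v in G if random.random() < phi}
    observable = set(direct)
    for v in direct:
        observable.update(G.neighbors(v))
    H = G.subgraph(observable)
    return max((len(c) for c in nx.connected_components(H)), default=0)

G = nx.erdos_renyi_graph(10000, 3 / 10000, seed=1)   # mean degree ~ 3
for phi in (0.01, 0.05, 0.1, 0.2):
    print(phi, largest_observable_cluster(G, phi) / G.number_of_nodes())
```

Sweeping phi traces out the observability transition; the locally tree-like theory replaces this sampling with a deterministic fixed-point computation.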

Socio-Ecology  (S) Session 1

Schedule Top Page

Time and Date: 14:15 - 15:45 on 19th Sep 2016

Room: L - Grote Zaal

Chair: Sacha Epskamp

246 Coupled societies are more robust against collapse: A hypothetical look at Easter Island [abstract]
Abstract: Inspired by the challenges of environmental change and resource limitations experienced by modern society, recent decades have seen an increased interest in understanding and modelling the long-term development of past societies. A particular focus in this analysis has been on mechanisms that can cause collapse or enable the long-term sustainability of a society. A widespread framework for modelling societal evolution has been dynamical systems theory, which can naturally capture the interaction of multiple sub-systems. Modelling efforts so far have focussed on single, isolated societies, while interactions, or networks, of multiple coupled socio-environmental systems have not been considered. In this talk we propose a model of societal evolution that describes the dynamics of a population that harvests renewable resources and manufactures products that can be accumulated. Applying the model to Easter Island gives a good fit to the archaeological record. Collapse is driven by a bifurcation that occurs when the rate of extraction of natural resources is increased beyond a critical point. We present numerical and analytical analysis of the bifurcation diagram. In the second part of the talk we analyse the diffusion and targeted movement of wealth, people and resources between two societies. Specifically, we investigate how a diffusive coupling and a wealth-driven coupling change the population levels and their distribution across the two societies compared with uncoupled, independent societies. We find that the region of parameter space in which societies can stably survive in the long term is significantly enlarged when coupling occurs in both social and environmental variables. The objective of the talk is to show how the phenomenon of societal collapse can be quantified and how the interaction of multiple coupled socio-environmental systems can be modelled. Attendees will gain insights into novel applications of dynamical systems theory and network science.
Sabin Roman, Seth Bullock and Markus Brede
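A conceptual sketch in the spirit of the population-resource dynamics described above (all parameters are invented, not the fitted Easter Island values): pushing the extraction rate past a critical value tips the system from coexistence to collapse.

```python
import numpy as np

# Toy harvest model: logistic resource R, population N whose carrying
# capacity tracks the resource. With regrowth rate 0.05 and capacity
# ratio 0.1, the interior equilibrium disappears near extraction = 0.5.

def run(extraction, steps=50000, dt=0.1):
    N, R = 20.0, 1000.0
    for _ in range(steps):
        dR = 0.05 * R * (1 - R / 1000.0) - extraction * N
        dN = 0.02 * N * (1 - N / max(0.1 * R, 1e-9))   # capacity ~ resource
        R = max(R + dt * dR, 0.0)
        N = max(N + dt * dN, 0.0)
    return N, R

for e in (0.2, 0.45, 0.6):
    N, R = run(e)
    print("extraction=%.2f -> population %.1f, resource %.1f" % (e, N, R))
```

Below the critical rate the system settles on a stable coexistence state; above it, both stocks decay, which is the qualitative signature of the bifurcation-driven collapse the talk analyses.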
512 Patterns of Human Synchronization [abstract]
Abstract: Social media are transforming global communication and coordination and provide unprecedented opportunities for studying socio-technical domains. Here we study global dynamical patterns of communication on Twitter across multiple scales. In particular, we study collective activities across geographical scales, from areas smaller than one square kilometer up to the global scale. Underlying the observed patterns is both the diurnal rotation of the earth, day and night, and the synchrony required for contingency of actions between individuals. We find that urban areas show a cyclic contraction and expansion that resembles heartbeats linked to social rather than natural cycles, mainly determined by daily routines of work, rest and recreation. Different urban areas have characteristic signatures of daily collective activities, varying the shape and location of peaks and valleys of activity. We show that the differences detected are consistent with a new emergent global synchrony that couples behavior in distant regions across the world, in part due to the communication power provided by social media. Although local synchrony is the major force that shapes the collective behavior in cities, a larger-scale synchronization is beginning to occur. [A. J. Morales, V. Vavilala, R. M. Benito and Y. Bar-Yam, Global Patterns of Human Synchronization, arXiv:1602.06219, 2016]
Alfredo Morales, Vaibhav Vavilala, Rosa M. Benito and Yaneer Bar-Yam
245 Rabies virus persistence in dog population in Central African Republic [abstract]
Abstract: Rabies is a fatal zoonosis caused by the rabies virus (RABV) and characterized by a complex epidemiological situation that remains a serious public health problem in developing countries. Rabies incidence is largely attributed to the growth of the domestic dog population, considered the most important vector for human exposure. Phylogenetic and virological analyses of isolates collected in Bangui, the capital city of the Central African Republic, indicate the presence of sequentially circulating subtypes and a reproductive number close to one. None of these subtypes seems to persist in the population of Bangui alone, and the mechanisms underlying virus persistence remain unknown. Two main factors may be at play: the spatial fragmentation of the host population, favoring the importation of isolates from outside the city, and the heterogeneous incubation period. To study the role and interplay of these different layers of complexity in RABV epidemiology, we built a spatially explicit stochastic metapopulation epidemic model, inferring dog settlements from human demographic data and considering realistic long-tailed distributions for the incubation and infectious periods. By exploring different epidemic scenarios, we found that the virus can persist even for very low transmissibilities while maintaining a stable dog population, producing invasion cycles in agreement with empirical observations. Interestingly, no persistence would be observed for the same parameters once exponentially distributed incubation periods are considered. Our findings illustrate how multiple layers of complexity (both ecological and epidemiological) are needed to sustain virus persistence, and they identify the factors to target for epidemic control.
Vittoria Colizza, Davide Colombi, Chiara Poletto, Emmanuel Nakoune and Herve Bourhy
224 Stability and feasibility of large ecosystems: from structure to function [abstract]
Abstract: The coexistence of species in large ecosystems has been a longstanding problem in ecology. Since the seminal work by May, we have had tools and methods to study local asymptotic stability in terms of random matrices. These results can be generalized to include the effect of empirical network structures. But stability is not the whole issue. It is related to perturbations of population abundances, while feasibility (or structural stability) is related to perturbations of growth rates and to the volume of the domain of growth rates leading to positive populations. Feasibility and stability are different but not independent properties. In this talk, I will show how feasibility can be connected to different properties of interaction networks. [S. Allesina, J. Grilli, G. Barabás, S. Tang, J. Aljadeff and A. Maritan. Predicting the stability of large structured food webs. Nature Communications, 6:7842, 2015] [J. Grilli, M. Adorisio, S. Suweis, G. Barabás, J.R. Banavar, S. Allesina and A. Maritan. The geometry of coexistence in large ecosystems. arXiv:1507.05337]
Jacopo Grilli
513 Emergent income inequalities in a social-ecological system [abstract]
Abstract: We present an analysis of the economic dynamics in a floodplain fishery system, focused on the connection between fishery governance and economic inequality. The use of one individual fishing technique, fishing canals, has been growing exponentially in our study area. We observe that this expansion is associated with increasing inequality in fishing incomes, which are consistent with a lognormal distribution. Studying the current revenues of canals of different ages, we show that the onset of these dynamics (intensified canal use, increasing inequality) occurred during a transition period in the 1970s. In the region of analysis, this decade was characterized by a combination of ecological stress (the Sahelian drought and increasing population) and liberalization of the fishery through the breakdown of traditional oligarchic rule. We argue that, while this change in governance brought fairer opportunities to floodplain fishermen, its combination with ecological stress has led to an increase in economic inequality.
Sarah Laborde, Riccardo Gallotti, Ian Hamilton, Aboukar Mahamat and Mark Moritz

Foundations  (F) Session 1

Schedule Top Page

Time and Date: 14:15 - 15:45 on 19th Sep 2016

Room: M - Effectenbeurszaal

Chair: Sarah de Nigris

123 Drift-induced Benjamin-Feir instabilities [abstract]
Abstract: The spontaneous ability of spatially extended systems to self-organize in space and time is proverbial and has been raised to a paradigm in modern science. Collective behaviors are widespread in nature and mirror, at the macroscopic level, the microscopic interactions at play among elementary constituents. Convection instabilities in fluid dynamics, rhythm production and insect swarms are representative examples that emblematize the remarkable capacity of physical and biological systems to yield coherent dynamics. Instabilities triggered by random fluctuations are often pattern precursors. The imposed perturbation shakes, e.g., a homogeneous equilibrium, seeding a resonant amplification mechanism that eventually materializes in magnificent patchy motifs, characterized by a vast gallery of shapes and geometries. Exploring possible routes to pattern formation, and unraveling novel avenues to symmetry-breaking instability, is hence a challenge of both fundamental and applied importance. In the so-called modulational instability, deviations from a periodic waveform are reinforced by nonlinearity, leading to spectral sidebands and the breakup of the waveform into a train of pulses. The phenomenon was first conceptualized for periodic surface gravity waves on deep water by Benjamin and Feir, and for this reason it is customarily referred to as the Benjamin-Feir instability. The Benjamin-Feir instability has later been discussed in the context of the Complex Ginzburg-Landau equation, a quintessential model of nonlinear physics. Here we revisit the Benjamin-Feir instability in the framework of the Complex Ginzburg-Landau equation, modified by the inclusion of a drift term. The latter is rigorously derived from a stochastic description of the microscopic coupling between adjacent oscillators. Generalized Benjamin-Feir instabilities occur, stimulated by the drift, outside the region of parameters for which the classical Benjamin-Feir instability is manifested. This observation, grounded on a detailed mathematical theory, considerably enriches the landscape of known instabilities, along a direction of investigation that can be experimentally substantiated.
Francesca Di Patti, Duccio Fanelli and Timoteo Carletti
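A minimal pseudospectral sketch of the modified equation (all coefficients invented; in the standard CGLE the homogeneous oscillation is Benjamin-Feir unstable when 1 + c1·c2 < 0, and the talk concerns how the drift term shifts this boundary):

```python
import numpy as np

# Complex Ginzburg-Landau equation with an added drift term:
#   dA/dt = A + (1 + i c1) A_xx - v A_x - (1 + i c2) |A|^2 A,
# integrated with a crude explicit Euler / FFT scheme.

c1, c2, v = 1.0, -1.5, 0.5           # 1 + c1*c2 = -0.5 < 0: BF unstable
n, L, dt = 256, 100.0, 0.01
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)

rng = np.random.default_rng(8)
A = np.ones(n) + 0.01 * rng.normal(size=n)   # perturbed homogeneous state

for _ in range(20000):
    Ah = np.fft.fft(A)
    lin = np.fft.ifft((1 - (1 + 1j * c1) * k**2 - 1j * v * k) * Ah)
    A = A + dt * (lin - (1 + 1j * c2) * np.abs(A) ** 2 * A)

print("modulation amplitude:", np.std(np.abs(A)))  # grows from ~0 if unstable
```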
133 On the Accuracy of Time-dependent NIMFA Prevalence of SIS Epidemic Process [abstract]
Abstract: The N-Intertwined Mean Field Approximation (NIMFA) exponentially reduces the number of differential equations that need to be calculated in the Markovian susceptible-infected-susceptible (SIS) model in networks [1]. The non-zero steady state of NIMFA has been proved to be globally asymptotically stable when the number of initially infected nodes is larger than 0 and the effective spreading rate is above the NIMFA epidemic threshold [2,3]. Moreover, an accuracy criterion in the steady state has been derived in [4]. However, the accuracy of NIMFA is also determined by the initial conditions of the epidemic process, which has escaped attention so far. We find that the virus die-out probability at an arbitrary time, which is determined both by the initial conditions and by the network topology, impacts the accuracy of NIMFA. New results will be presented to show how the virus die-out probability influences the accuracy of the time-dependent NIMFA prevalence in networks, and a novel correction function for NIMFA has been found. Furthermore, the virus die-out probability is equivalent to the gambler's ruin problem [5, page 231] in complete graphs and can also be solved numerically as a birth-and-death process in complete graphs. References: [1] P. Van Mieghem, J. Omic, and R. Kooij, IEEE/ACM Trans. on Networking, vol. 17, no. 1, pp. 1-14, Feb. 2009. [2] A. Khanafer, T. Başar, and B. Gharesifard, in 2014 American Control Conference, 2014, pp. 3579-3584. [3] S. Bonaccorsi, S. Ottaviano, D. Mugnolo, and F. Pellegrini, SIAM J. Appl. Math., vol. 75, no. 6, pp. 2421-2443, Jan. 2015. [4] P. Van Mieghem and R. van de Bovenkamp, Phys. Rev. E, vol. 91, no. 3, p. 032812, Mar. 2015. [5] P. Van Mieghem, Performance Analysis of Complex Networks and Systems. Cambridge: Cambridge University Press, 2014.
Qiang Liu and Piet Van Mieghem
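The NIMFA equations themselves are one ODE per node and straightforward to integrate; this sketch uses a toy ring graph and forward Euler (illustrative parameters; the correction function and die-out analysis from the talk are not reproduced):

```python
import numpy as np

# NIMFA for SIS: dv_i/dt = -delta*v_i + beta*(1 - v_i) * sum_j A_ij v_j,
# with v_i the infection probability of node i.

n, beta, delta = 50, 0.08, 0.1
A = np.zeros((n, n))
for i in range(n):
    for j in (i + 1, i + 2):
        A[i, j % n] = A[j % n, i] = 1     # ring plus next-nearest links

v = np.zeros(n)
v[0] = 1.0                                # a single initially infected node
dt = 0.01
for _ in range(4000):
    v += dt * (-delta * v + beta * (1 - v) * (A @ v))

print("NIMFA prevalence: %.4f" % v.mean())
```

The exact Markovian process started from one infected node dies out with nonzero probability, while this deterministic system always converges to the endemic state above threshold, which is exactly the accuracy gap the abstract addresses.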
36 The effect of hidden source on the estimation of complex networks from time series [abstract]
Abstract: Many methods have been developed to estimate the inter-dependence (also called coupling, information flow or Granger causality) between the interacting variables of a dynamical system. Typically, this problem concerns complex systems observed through multivariate time series, such as financial markets, climatic phenomena, brain activity and earthquakes. The methods tested so far are found to estimate the true underlying complex network with varying success. Here, a difficult but realistic setting is considered in which important variables (or subsystems) of the complex network are unobserved. In particular, the unobserved variables are assumed to be hidden sources of the complex network. The performance of different connectivity (Granger causality) measures in settings with an unobserved source is evaluated with simulations on linear and nonlinear stochastic processes and on dynamical systems of many variables. The results show that, though some of the tested connectivity measures correctly identify the true complex network from the multivariate time series when no hidden source is involved, all connectivity measures fail to estimate the true connections when a driving, but unobserved, hub is present in the true complex network. An example from finance is added to illustrate the problem.
Dimitris Kugiumtzis and Christos Koutlis
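The core difficulty is easy to reproduce in a toy system: a hidden driver Z makes X appear to Granger-cause Y. A sketch with a plain least-squares test (the talk evaluates far more refined measures):

```python
import numpy as np

# Hidden-source toy: Z drives both X and Y, but only X and Y are
# observed. Past X then spuriously improves the prediction of Y.

rng = np.random.default_rng(5)
T = 5000
Z = np.zeros(T); X = np.zeros(T); Y = np.zeros(T)
for t in range(1, T):
    Z[t] = 0.9 * Z[t - 1] + rng.normal()   # unobserved driver
    X[t] = 0.6 * Z[t - 1] + rng.normal()
    Y[t] = 0.6 * Z[t - 1] + rng.normal()

def resid_var(y, regressors):
    A = np.column_stack(regressors)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.var(y - A @ coef)

v_restricted = resid_var(Y[1:], [Y[:-1]])          # predict Y from its past
v_full = resid_var(Y[1:], [Y[:-1], X[:-1]])        # ... plus the past of X
print("spurious variance reduction: %.4f" % (v_restricted - v_full))
```

The reduction is positive even though X has no causal influence on Y, because both series carry independent noisy readings of the hidden Z.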
447 Message-passing algorithms in networks and complex system [abstract]
Abstract: We will sketch an algorithmic take on networks, namely message-passing algorithms, and its relevance to some questions and insights in complex systems. Recently, message-passing algorithms have been shown to be an efficient, scalable approach to solving hard computational problems, ranging from detecting community structures in networks to simulating probabilistic epidemic dynamics on networks. The objective of the talk is two-fold. On one hand, we will discuss how the non-backtracking nature of message passing avoids an “echo-chamber effect” in the signal flow and thus makes it a good tool to consider for problems on networks. On the other hand, we will also argue why insights gained from algorithms are equally important when exploring questions at the boundaries of scientific studies, such as networks and complex systems.
Munik Shrestha
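For concreteness, the non-backtracking operator on which such message passing runs can be built explicitly for a toy graph (a generic sketch; the spectral remark in the comment is standard background, not a specific result from the talk):

```python
import numpy as np

# Non-backtracking matrix B on directed edges:
# B[(i->j), (k->l)] = 1 iff j == k and l != i, so a message never flows
# straight back to its sender (the "echo chamber" it avoids).

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]          # a small undirected graph
directed = edges + [(j, i) for i, j in edges]     # both orientations

m = len(directed)
B = np.zeros((m, m))
for a, (i, j) in enumerate(directed):
    for b, (k, l) in enumerate(directed):
        if j == k and l != i:
            B[a, b] = 1.0

# The leading eigenvalue of B governs, e.g., detectability thresholds in
# community detection and percolation-style thresholds.
print("leading eigenvalue: %.3f" % np.max(np.linalg.eigvals(B).real))
```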
555 Hypernetworks: multilevel multidimensional multiplex networks [abstract]
Abstract: Hypergraphs generalise graphs and networks by allowing edges to contain many vertices. An oriented hypergraph edge is a 'simplex'. Simplices generalize oriented network edges: <v0, v1> is a 1-dimensional edge, <v0, v1, v2> is a 2-dimensional triangle, <v0, v1, v2, v3> is a 3-dimensional tetrahedron, and so on, with a simplex on (p+1) vertices being a p-dimensional polyhedron. Multiplex networks allow many relations to hold between sets of vertices. The 'explicit relation' notation can represent this: <a, b; R1> != <a, b; R2>. This generalizes to 'hypersimplices' with explicit relations on many vertices, <x1, ..., xn; R1> != <x1, ..., xn; R2>. Multiplex hypernetworks are formed from hypersimplices defined by many n-ary relations. Hypernetworks are foundational in a new theory of multilevel systems. The network family is integrated as follows:

                 orientation                          many relations
  hypergraphs --------------> simplicial complexes ----------------> multiplex hypernetworks
       ^                              ^                                       ^
       | > 2 vertices                 | > 2 vertices                          | > 2 vertices
       |                              |                                       |
     graphs ----------------->     networks ------------------------> multiplex networks
                 orientation                          many relations

It will be briefly explained how these structures contribute to understanding complexity.
Jeffrey Johnson
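As a toy illustration of the notation (our own encoding, not part of the talk), a hypersimplex can be modelled as an ordered vertex list paired with an explicit relation name, so that the same vertices under different relations yield distinct objects:

```python
# Sketch (ours): a minimal hypersimplex data structure.
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Hypersimplex:
    vertices: Tuple[str, ...]   # ordered, so orientation matters
    relation: str               # the explicit n-ary relation R

    @property
    def dimension(self) -> int: # (p+1) vertices -> p-dimensional polyhedron
        return len(self.vertices) - 1

s1 = Hypersimplex(("a", "b", "c"), "R1")
s2 = Hypersimplex(("a", "b", "c"), "R2")
print(s1.dimension)   # 2: a triangle
print(s1 == s2)       # False: <a,b,c; R1> != <a,b,c; R2>
```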

Foundations  (F) Session 2

Schedule Top Page

Time and Date: 14:15 - 15:45 on 19th Sep 2016

Room: N - Graanbeurszaal

Chair: "Colm Connaughton"

85 Spectral Analysis of Universal Register Machine on the Game of Life [abstract]
Abstract: The Game of Life (LIFE), a two-dimensional cellular automaton, exhibits 1/f noise in the evolution starting from a random configuration. 1/f noise has power inversely proportional to frequency; that is, the exponent of the power spectrum is about -1. On the other hand, LIFE is capable of supporting universal computation. Taken together, these results suggest a relationship between computational universality and 1/f noise in cellular automata. However, the power spectrum of the computation process in which a Turing machine is emulated on the array of LIFE has an exponent of about -0.3, which is not considered 1/f noise. In this research we conduct a spectral analysis of the behavior of another computation model, the universal register machine (URM), on the array of LIFE. A register machine is an abstract model of a computer with a set of registers. Each register is of infinite extent and can hold any single non-negative integer. A universal register machine (URM) is a register machine that can imitate the computation process of any register machine. The first embodiment of a URM on LIFE was accomplished by Chapman, although we use another one developed by Rendell (http://rendell-attic.org/gol/UCM/CMappNotes.html). The pattern of the URM occupies about 19,000 * 3,900 cells on the array and takes about 32,586,000 time steps to halt. We calculate the power spectrum of the upper part of the URM pattern, with an array size of height * width = 3000 * 3800 = 11,400,000 cells, over 65,536 time steps. As a result, the exponent of the power spectrum in the range f = 1..100 is about -1.5. This result shows that the power spectra of computation processes in LIFE vary according to the model, contrasting clearly with the process starting from a random configuration.
Shigeru Ninagawa
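The analysis step described, fitting the exponent of the power spectrum over a frequency band, can be sketched generically as follows (illustrative only; a Brownian stand-in series, whose true exponent is -2, replaces the LIFE cell counts):

```python
# Sketch (ours): estimate a power-spectrum exponent by a least-squares
# fit of log-power against log-frequency over the band f = 1..100.
import numpy as np

rng = np.random.default_rng(1)
signal = np.cumsum(rng.normal(size=65536))    # stand-in series, exponent -2

power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
idx = np.arange(1, 101)                       # frequency bins f = 1..100
slope, _ = np.polyfit(np.log(idx), np.log(power[idx]), 1)
print(f"estimated spectral exponent: {slope:.2f}")   # about -2 here
```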
211 Using Joint Factorization to Recover Missing Activity in Temporal Networks [abstract]
Abstract: Missing data is an issue ubiquitous in applied science. In the context of temporal networks, loss of data may correspond to missing contacts or missing node activities. In some cases, other data sources containing related information are available and can be used for data recovery. However, combining information from different sources is challenging. Here, we propose a technique based on joint factorization (Evrim et al., 2011) to recover missing activity in temporal networks. Each data source is encoded in a tensor. Assuming that there exists some correlation between the data from the multiple sources, a joint factorization of the tensors makes it possible to infer missing information in one tensor from the others. We illustrate this method on an empirical contact network in which we simulated loss of data for 10% of the nodes by removing their activity for half the time span of the dataset. The resulting network was represented as a tensor where each slice gives the contacts of the nodes at a given time. In addition to the contact network, we have access to the location of the nodes over time. We built a second tensor where each slice gives the position of the nodes at a given time. We jointly factorized the two tensors, with a method handling missing values. Regularization terms were added to control the level of correlation between the contact network and the location network. We evaluated the correlation between the temporal activity of the nodes in the original network and in the recovered one over the missing period and found good agreement. This suggests that our approach is an efficient method for recovering missing data in temporal networks. Moreover, it generalizes easily to more than two data sources, which could all contain missing data.
Anna Sapienza and Laetitia Gauvin
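A schematic of the joint-factorization idea (our simplification, without the paper's regularization terms; the paper follows the coupled factorization of Evrim et al., 2011): two tensors share the node and time factors, and gradient descent on a masked least-squares loss lets the complete tensor inform the entries hidden by the mask.

```python
# Schematic sketch (ours): coupled CP factorization of a contact tensor
# T1 and a location tensor T2 sharing node factor A and time factor C;
# W1 masks the "missing" contact entries. Random data, so illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, m, T, R = 20, 5, 30, 4      # nodes, locations, time steps, CP rank

T1 = rng.random((n, n, T))      # contact tensor (node x node x time)
T2 = rng.random((n, m, T))      # location tensor (node x place x time)
W1 = (rng.random((n, n, T)) > 0.1).astype(float)  # 0 marks missing entries
W2 = np.ones_like(T2)           # location data assumed complete here

A = rng.random((n, R)); B = rng.random((n, R))    # node factors (A is shared)
C = rng.random((T, R)); D = rng.random((m, R))    # shared time factor, location factor

lr = 1e-4
for _ in range(2000):
    E1 = W1 * (np.einsum('ir,jr,kr->ijk', A, B, C) - T1)   # masked residuals
    E2 = W2 * (np.einsum('ir,jr,kr->ijk', A, D, C) - T2)
    gA = np.einsum('ijk,jr,kr->ir', E1, B, C) + np.einsum('ijk,jr,kr->ir', E2, D, C)
    gB = np.einsum('ijk,ir,kr->jr', E1, A, C)
    gD = np.einsum('ijk,ir,kr->jr', E2, A, C)
    gC = np.einsum('ijk,ir,jr->kr', E1, A, B) + np.einsum('ijk,ir,jr->kr', E2, A, D)
    A -= lr * gA; B -= lr * gB; C -= lr * gC; D -= lr * gD

recovered = np.einsum('ir,jr,kr->ijk', A, B, C)   # read off entries where W1 == 0
print("mean masked fit error:", np.abs(W1 * (recovered - T1)).mean())
```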
416 Structural transition in multiplex networks triggered by layer degradation [abstract]
Abstract: The introduction of multilayer networks poses the theoretical question of whether critical phenomena will behave differently on such networks than on traditional ones. So far, theoretical studies have pointed out that such differences in critical behaviour indeed exist. It has been shown that a multiplex network can exist in different structural phases, the transition among them being abrupt under some conditions. Three different topological scales can be naturally identified in a multiplex: that of the individual layers, that of the network of layers, and that of the aggregate network. The notion of a quotient graph gives the connection between those scales in terms of spectral properties of the parent multiplex network and its aggregate representation. Much attention has been paid to the coupling parameter p that weights the coupling between different representatives of the same node in different layers. There exists a value p* for which the derivative of the algebraic connectivity (the first non-zero eigenvalue of the supra-Laplacian) shows a discontinuity. At this point the Fiedler vector also changes abruptly, and the Fiedler cut, which before p* only intercepts inter-layer edges, starts to intercept intra-layer edges. We show that the same transition can be triggered by a completely different mechanism that we call layer degradation. Just by acting on a single layer at fixed p, removing intra-layer edges or tuning their weights, the multiplex undergoes the same transition. Working with an approximation of the algebraic connectivity, we give an analytical description of the transition and its dependency on the value of p and on the algebraic connectivities of the isolated layers. We also study the transition numerically, together with the behaviour of the Shannon entropy of the Fiedler vector during the process, since one can interpret (a function of) this as an order parameter.
Emanuele Cozzo, Guilherme Ferraz de Arruda, Francisco Aparecido Rodrigues and Yamir Moreno
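The transition can be observed directly on the supra-Laplacian. A minimal sketch (ours; random layers stand in for real topologies) sweeps the coupling p and prints the algebraic connectivity, which for small p follows 2p (the Fiedler cut crosses only inter-layer edges) before detaching near p*:

```python
# Sketch (ours): algebraic connectivity of a two-layer multiplex vs.
# the inter-layer coupling p. Layer degradation would instead rescale
# one layer's intra-layer weights at fixed p.
import numpy as np
import networkx as nx

n = 50
L1 = nx.laplacian_matrix(nx.erdos_renyi_graph(n, 0.15, seed=1)).toarray()
L2 = nx.laplacian_matrix(nx.erdos_renyi_graph(n, 0.15, seed=2)).toarray()
I = np.eye(n)

for p in (0.1, 0.5, 1.0, 2.0, 5.0):
    # supra-Laplacian: intra-layer Laplacians plus inter-layer coupling p
    supra = np.block([[L1 + p * I, -p * I],
                      [-p * I, L2 + p * I]])
    lam2 = np.sort(np.linalg.eigvalsh(supra))[1]
    print(f"p = {p:4.1f}   algebraic connectivity = {lam2:.4f}")
```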
111 Does growth variation among the parts evolve or stabilize the whole? The crucial role of information in structural adjustment [abstract]
Abstract: It is intuitive to assume that differences in growth among diverse parts change the constitution of the population as a whole: the larger the differences, the faster the change. Fisher's fundamental theorem of natural selection formalizes this idea. Counterintuitively, this article shows that large fitness differentials make it more likely that it is beneficial to stop evolutionary selection by stabilizing the population structure through informed intervention (i.e. bet-hedging). The larger the fitness differences among types, the more likely it is to pay off to hold the shares of each type constant by counteracting natural selection. The possibility of such interventions depends on the availability of information about the environmental pattern. With information, growth can be optimized through resource redistribution. Information is the key ingredient for being able to hedge one's bets and actively manage a portfolio in an uncertain environment. We treat information formally in terms of information theory, including Shannon's absolute entropy and Kullback-Leibler's relative entropy. The policy mechanism that converts information into growth is structural adjustment. An empirical analysis over nine levels of an international trade taxonomy shows that it is more likely beneficial to intervene on more fine-grained levels, while blind natural selection is likely to optimize growth on more aggregate levels.
Martin Hilbert
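The information argument has a compact form in Kelly's horse-race model, sketched here as a hedged illustration (numbers arbitrary, not from the paper): proportional bet-hedging beats going all-in on the fittest type, and the growth edge over an uninformed portfolio equals a Kullback-Leibler divergence:

```python
# Toy sketch (ours): bet-hedging growth rates in Kelly's setting.
import numpy as np

p = np.array([0.6, 0.4])     # chance the environment favors type 1 or type 2
o = np.array([2.0, 2.0])     # payoff (odds) when the favored type is held

def growth(b):               # expected log growth of portfolio shares b
    return np.sum(p * np.log(b * o))

all_in = growth(np.array([1.0 - 1e-9, 1e-9]))      # pure selection: ruinous
hedged = growth(p)                                  # proportional bet-hedging
kl = np.sum(p * np.log(p / np.array([0.5, 0.5])))  # D(p || uniform)

print(f"all-in growth:  {all_in:.3f}")   # strongly negative
print(f"hedged growth:  {hedged:.3f}")   # the Kelly optimum
print(f"edge over blind 50/50 = {hedged - growth(np.array([0.5, 0.5])):.4f}"
      f" = KL divergence {kl:.4f}")      # information buys exactly this much
```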
489 Computational Irreducibility and Emergence [abstract]
Abstract: The concept of Computational Irreducibility is formally defined within the computational model of Turing machines, supplemented with a new kind of machine, the E-Turing machines (for enumerating Turing machines). This helps in understanding the behavior of some cellular automata which appear unpredictable. Computational Irreducibility is also used to explain the concept of Emergence.
Hervé Zwirn
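As a concrete and standard illustration of the phenomenon the talk formalizes (the example is ours): elementary cellular automaton rule 110 is computationally universal, and no known shortcut predicts its configurations faster than explicit step-by-step simulation:

```python
# Sketch (ours): rule 110 run explicitly; the only general way to know
# state t is to compute states 1..t-1 -- computational irreducibility.
import numpy as np

def step(cells: np.ndarray, rule: int = 110) -> np.ndarray:
    """One synchronous update with periodic boundaries."""
    l, r = np.roll(cells, 1), np.roll(cells, -1)
    code = 4 * l + 2 * cells + r                 # neighborhood as a 3-bit number
    table = (rule >> np.arange(8)) & 1           # rule number -> lookup table
    return table[code]

cells = np.zeros(80, dtype=int); cells[40] = 1
for _ in range(30):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```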

Biology  (B) Session 2

Schedule Top Page

Time and Date: 14:15 - 15:45 on 19th Sep 2016

Room: P - Keurzaal

Chair: "Francesc Font Clos"

241 Multilayer network approach to mutualistic ecosystems. [abstract]
Abstract: The origin and consequences of the nested structure of mutualistic ecosystems are a matter of strong debate in the ecological community. The relationship between the structure of mutualistic ecosystems and the dynamics that led to this structure is still an open problem. In May's seminal paper [1], the ecosystem is described by a linear dynamical model with a random interaction matrix. His results show that a large ecosystem with high connectivity is unstable. Since then, special attention has been paid to the structure of the interaction matrix. Bastolla et al. [2] study a population dynamics model that includes plant-animal mutualistic interactions and animal-animal and plant-plant competing interactions, in the mean-field approach, except for the weak-mutualism regime, where a more realistic mutualistic term is included. They conclude that nestedness minimizes competition, allowing for an increase of biodiversity. A recent article [3] discusses the importance of structural stability of mutualistic ecosystems. In this work we investigate the influence of the network structure on the persistence of species in a mutualistic ecosystem. We study a non-linear population dynamics model where we take into account the structure of interactions in both the mutualistic and competition terms. In fact, the observed networks contain more information than just the plant-pollinator interactions. The ecosystem may be treated as two layers of competing agents, one for plants and another for animals, coupled by the mutualistic interactions. This information can then be used to model the competition term beyond the mean-field approach. Our results show the existence of a trade-off between mutualism and competition, so that the largest biodiversity is achieved with a non-trivial combination of both mechanisms. [1] R.M. May, Nature 238, 413 (1972) [2] U. Bastolla et al., Nature 458, 1018 (2009) [3] R.P. Rohr, S. Saavedra, J. Bascompte, Science 345, 416 (2014)
Carlos Gracia-Lázaro, Javier Borge-Holthoefer, Laura Hernandez and Yamir Moreno
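A minimal sketch of this class of models (all parameters ours; note the competition here is mean-field within each layer, whereas the paper structures it on the observed network): two layers coupled by saturating mutualism on a random bipartite incidence matrix:

```python
# Sketch (ours): plant-animal dynamics with intra-layer competition and
# Holling-type saturating mutualism on a bipartite network M.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
nP, nA = 8, 10
M = (rng.random((nP, nA)) < 0.3).astype(float)   # plant-pollinator links

alpha, comp, gamma, h = 1.0, 0.2, 0.6, 0.1        # illustrative rates

def rhs(t, z):
    P, A = z[:nP], z[nP:]
    mutP = gamma * M @ A                  # mutualistic benefit to plants
    mutA = gamma * M.T @ P                # ... and to animals
    dP = P * (alpha - P - comp * (P.sum() - P) + mutP / (1 + h * mutP))
    dA = A * (alpha - A - comp * (A.sum() - A) + mutA / (1 + h * mutA))
    return np.concatenate([dP, dA])

sol = solve_ivp(rhs, (0, 200), 0.1 * np.ones(nP + nA), rtol=1e-8)
survivors = (sol.y[:, -1] > 1e-3).sum()
print(f"{survivors} of {nP + nA} species persist")
```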
432 Lévy walk or law of first passage? The case of olfactory-cued navigation in pelagic birds. [abstract]
Abstract: The albatross was the first example of a Lévy walk in animals, leading to the development of optimal foraging theories based on Lévy walks. Other pelagic birds, like the shearwaters, present displacements distributed as a power law, but with an exponent different from the optimal-foraging one, which challenged the scientific community for a while. In this talk we show how the exponent of the power law in the pdf of displacements is simply the result of the law of first passage, related to olfactory-cued navigation in shearwaters. Olfactory-cued navigation has been proposed for a great variety of animals, especially those that navigate in featureless environments. We present the first mechanistic proof of olfactory-cued navigation, showing the relation between the cut-off of the pdf and the wind turbulence intensity.
Milo Abolaffio and Stefano Focardi
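The null model invoked here is classical: first-passage times of a simple diffusive search follow the Lévy-Smirnov t^(-3/2) law, so a power-law pdf can arise without any Lévy walk. A hedged simulation sketch (ours, with an arbitrary detection threshold standing in for odor encounter):

```python
# Sketch (ours): first-passage times of a 1D random walk to a threshold
# follow a power law with exponent -3/2 (Levy-Smirnov law).
import numpy as np

rng = np.random.default_rng(2)
n_walkers, cap = 5000, 10000
fpt = []
for _ in range(n_walkers):
    x, t = 0.0, 0
    while x < 1.0 and t < cap:      # first passage to the "odor" threshold
        x += rng.normal(); t += 1
    if t < cap:                     # discard walkers that never arrived
        fpt.append(t)

fpt = np.array(fpt)
bins = np.logspace(0, 4, 25)        # logarithmic bins for the histogram
hist, edges = np.histogram(fpt, bins=bins, density=True)
centers = np.sqrt(edges[1:] * edges[:-1])
ok = (hist > 0) & (centers > 10)    # fit the power-law regime only
slope, _ = np.polyfit(np.log(centers[ok]), np.log(hist[ok]), 1)
print(f"fitted exponent: {slope:.2f}   (theory: -1.5)")
```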
553 The role of idiotypic interactions in the adaptive immune system: a belief-propagation approach [abstract]
Abstract: In this work we use belief-propagation techniques to study the equilibrium behaviour of a minimal model for the immune system comprising interacting T and B clones. We investigate the effect of the so-called idiotypic interactions among complementary B clones on the system's activation. Our results show that B-B interactions increase the system's resilience to noise, making clonal activation more stable, while increasing the cross-talk between different clones. We derive analytically the noise level at which a B clone gets activated, in the absence of cross-talk, and find that this increases with the strength of idiotypic interactions and with the number of T cells signalling the B clone. We also derive, analytically and numerically, via population dynamics, the critical line where clonal cross-talk arises. Our approach allows us to derive the B clone size distribution, which can be measured experimentally and gives important information about the adaptive immune system's response to antigens and vaccination.
Silvia Bartolucci, Alexander Mozeika and Alessia Annibale
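A generic sum-product sketch (ours; the paper's factor graph couples T and B clones with idiotypic B-B links, which we do not reproduce): belief propagation on a small tree of binary variables returns exact marginal activation probabilities:

```python
# Sketch (ours): sum-product message passing on a tiny tree of binary
# variables (clone inactive/active), with pairwise couplings J and field h.
import numpy as np

edges = [(0, 1), (1, 2), (1, 3)]             # a 4-node tree
J, h = 0.8, 0.2                               # illustrative parameters
psi = lambda s, t: np.exp(J * s * t)          # pairwise factor, s,t in {-1,+1}
phi = lambda s: np.exp(h * s)                 # local field factor
spins = np.array([-1, 1])

m = {(i, j): np.ones(2) for a, b in edges for (i, j) in [(a, b), (b, a)]}
nbrs = {v: [u for e in edges for u in e if v in e and u != v] for v in range(4)}

for _ in range(20):                           # sweep until convergence
    for (i, j) in list(m):
        others = [m[(k, i)] for k in nbrs[i] if k != j]
        incoming = np.prod(others, axis=0) if others else np.ones(2)
        new = np.array([sum(phi(s) * psi(s, t) * inc
                            for s, inc in zip(spins, incoming))
                        for t in spins])
        m[(i, j)] = new / new.sum()

for v in range(4):                            # beliefs = exact marginals on a tree
    belief = np.array([phi(s) for s in spins]) \
             * np.prod([m[(u, v)] for u in nbrs[v]], axis=0)
    belief /= belief.sum()
    print(f"clone {v}: P(active) = {belief[1]:.3f}")
```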
166 Empirical data revealing dynamical characteristics of resilience of the complex human system [abstract]
Abstract: Healthy life is maintained through a complex regulating system in our bodies that ensures our dynamic functioning and keeps vital physical and mental parameters within safe limits despite environmental challenges. Systemic resilience is the capacity of our complex systems to bounce back to normal functioning upon disturbance, ultimately determining our chances of survival and quality of life. As they age, humans gradually lose resilience, which often remains unnoticed until a health crisis occurs that is detrimental to well-being and costly to society. We currently lack valid methods to dynamically measure resilience to upcoming stressors. Emerging insights in other complex dynamical systems, such as ecological networks, the climate and financial markets, are uncovering generic empirical indicators that may be used to quantify systemic resilience dynamically: these early-warning signals comprise changes in the dynamics of a system that are most clearly observed when the system recovers from a disturbance, a recovery that slows down as resilience decreases. Here we present integrative research in which we asked whether we can rank humans from resilient to frail by looking at differences in these dynamical characteristics in empirical data collected over time. We analysed time series of daily self-reported physical and mental health during 100 days in 22 elderly people ranging from frail to resilient, as determined by a frailty index. The dynamics of the time series of a less resilient human system indeed turned out to be characterised by elevated variance and temporal autocorrelation. Additionally, as network theory predicts, as the different elements in a network of fluctuating elements lose resilience, deviations in the physical and mental domains of the system became more correlated. This contribution to the empirical evidence for the use of dynamical characteristics to quantify resilience across complex systems brings hope of foreseeing and preventing catastrophic failures in health.
Sanne Gijzel, Ingrid van de Leemput, Marten Scheffer, Mattia Roppolo, Marcel Olde Rikkert and René Melis
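The indicators themselves are simple to compute. A minimal sketch (ours; an AR(1) series with growing memory stands in for the daily self-reports):

```python
# Sketch (ours): rolling variance and lag-1 autocorrelation, the two
# early-warning indicators of critical slowing down described above.
import numpy as np

def indicators(series: np.ndarray, window: int = 30):
    """Rolling variance and lag-1 autocorrelation over `window` days."""
    var, ac1 = [], []
    for t in range(window, len(series) + 1):
        w = series[t - window:t]
        var.append(w.var())
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(var), np.array(ac1)

# stand-in for a 100-day self-rated health series: an AR(1) process whose
# memory (slowness of recovery from disturbance) increases over time
rng = np.random.default_rng(3)
phi = np.linspace(0.2, 0.95, 100)     # resilience loss: rising memory
x = np.zeros(100)
for t in range(1, 100):
    x[t] = phi[t] * x[t - 1] + rng.normal()

var, ac1 = indicators(x)
print(f"variance: {var[0]:.2f} -> {var[-1]:.2f}")
print(f"lag-1 autocorrelation: {ac1[0]:.2f} -> {ac1[-1]:.2f}")
```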
570 Computability and Complexity of Cellular Protein Interaction Networks [abstract]
Abstract: Protein-protein interactions are important in various areas of cell biology, including drug development for several diseases. Many therapeutic methods are based on complex algorithms supported by protein-protein interaction networks. Using a known mathematical model of the cell (from membrane computing), as well as a new abstract measure of complexity provided by protein length, we study the computational power of protein-protein interaction systems involving a minimal number of cells/membranes, with respect to the movement provided by endocytosis and exocytosis operations supported by proteins of different lengths. We prove that such protein-protein interaction networks can simulate all computable functions, and thus can be effectively used in designing efficient therapeutic algorithms for numerous diseases. We study the computational power of a pair of certain forms of endocytosis and exocytosis (namely the pino and exo operations), and prove their universality using at most three cells/membranes, with proteins of length at most two for both (pino) and (exo) operations. We also study the computational power of the pair of (phago) and (exo) operations, and prove their universality using at most four cells/membranes, again with proteins of length at most two; the higher number of cells/membranes here is triggered by the use of the (phago) operation. These universality results mean that the corresponding protein-protein interaction networks have the same computational power as a Turing machine, and so are able to support all the complex algorithms (describing computable functions).
Bogdan Aman and Gabriel Ciobanu

Swarming Systems: Analysis  (SSAM) Session 1

Schedule Top Page

Time and Date: 14:15 - 15:45 on 19th Sep 2016

Room: Z - Zij foyer

Chair: Roland Bouffanais

7000 Less is more? New approaches for swarm control and inference [abstract]
Abstract: Robot swarms are often said to exhibit emergent properties. Yet, it is possible to design controllers with predictable outcome. We illustrate this for two canonical problems, multi-robot rendezvous and cooperative transport. The simplicity of the controllers (some do not even require arithmetic computation) facilitates their analysis. In the second part of the talk, we address the problem of inferring the rules of swarming agents through observation. We propose Turing Learning - the first system identification method not to rely on pre-defined metrics - and test it on a physical swarm of robots. Finally, we discuss novel development tools. We present OpenSwarm, an operating system for miniature robots, and formal methods for automatic code generation. We report on experiments with up to 600 physical robots.
Roderich Gross (The University of Sheffield)
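As a hedged illustration of how simple such controllers can be (our sketch, not the speaker's code): in the canonical rendezvous problem, each robot moving toward the centroid of its neighbours is a consensus protocol whose convergence is provable whenever the interaction graph stays connected:

```python
# Sketch (ours): rendezvous by consensus -- robots contract toward a
# common point; all-to-all sensing for simplicity, gain k is arbitrary.
import numpy as np

rng = np.random.default_rng(4)
n, k, steps = 10, 0.1, 200
pos = rng.random((n, 2)) * 10                 # planar robot positions

for _ in range(steps):
    centroid_of_others = (pos.sum(axis=0) - pos) / (n - 1)
    pos += k * (centroid_of_others - pos)     # move toward neighbours' centroid

print("spread after convergence:", np.round(pos.std(axis=0), 6))
```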
7001 Analysis and Design of Self-Organizing Heterogeneous Swarm Systems [abstract]
Abstract: We present an overview of our recent work on self-organizing heterogeneous swarm systems that can show a wide variety of robust self-organizing spatio-temporal patterns. Our swarms consist of multiple types of very simple, kinetically interacting particles with no elaborate sensing, computation, or communication capabilities. We examine the effects of (1) heterogeneity of components, (2) differentiation/re-differentiation of components, and (3) local information sharing among components, on the self-organization of swarm systems, which are characterized using several kinetic and topological metrics. Results showed that (a) heterogeneity of components had a strong impact on the structure and behavior of the swarms, (b) dynamic differentiation/re-differentiation of components and local information sharing helped the swarms maintain spatially adjacent, coherent organization, (c) dynamic differentiation/re-differentiation contributed to the production of more diverse behaviors of swarms, and (d) stochastic re-differentiation of components also naturally realized a self-repair capability of self-organizing patterns. We also explore evolutionary methods to design novel, non-trivial self-organizing patterns, using either interactive evolutionary computation or spontaneous evolution within an artificial ecosystem. Finally, we demonstrate that these self-organizing swarm systems are remarkably robust against dimensional changes from 2D to 3D, although spontaneous evolution works more efficiently in a 2D space.
Hiroki Sayama (Binghamton University, State University of New York)
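A rough sketch in the spirit of the kinetically interacting heterogeneous particles described (all parameters and force laws ours, chosen only to show type-dependent behavior): two behavioral types with different cohesion and repulsion constants, updated by purely local forces:

```python
# Sketch (ours): two particle types with type-specific cohesion/repulsion.
import numpy as np

rng = np.random.default_rng(5)
n = 60
pos = rng.random((n, 2)) * 10
vel = np.zeros((n, 2))
ptype = rng.integers(0, 2, n)                 # two behavioral types
cohesion = np.array([0.02, 0.002])[ptype]     # type-specific parameters
repel = np.array([0.5, 2.0])[ptype]

for _ in range(300):
    diff = pos[None, :, :] - pos[:, None, :]  # pairwise displacement vectors
    dist = np.maximum(np.linalg.norm(diff, axis=2), 0.2)   # floor avoids blowup
    near = (dist < 3.0)[:, :, None]
    coh = (diff * near).sum(axis=1)                        # pull toward neighbors
    sep = -(diff / dist[:, :, None] ** 3).sum(axis=1)      # short-range push
    vel = 0.9 * vel + cohesion[:, None] * coh + 0.01 * repel[:, None] * sep
    pos += vel

print("type 0 spread:", pos[ptype == 0].std(), " type 1 spread:", pos[ptype == 1].std())
```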
7002 Failure is the nominal operation mode for swarms (of drones): reasons and consequences [abstract]
Abstract: Swarms of drones (but the discussion applies to any kind of swarm) are a promising paradigm because combining individuals offers many more features than increasing the capacity of a single entity ("The whole is more than the sum of its parts", Aristotle, Metaphysics). It also raises a number of issues, among which is failure. In most systems, failure is considered an exception. But when thousands of autonomous entities communicate with each other and adapt their behavior so as to achieve a global mission (this is referred to as swarm intelligence), the situation is quite different. Statistically, a number of individuals will fail and a number of messages will be lost (because of collisions, interferences, etc.). Thus, "In adaptive systems [...] classical separation between 'nominal operation' and 'faults' becomes untenable; system is continuously operating under faults" [Werner J.A. Dahm, Director, Security & Defense Systems Initiative, Arizona State University, in his keynote at the AIAA Guidance, Navigation, and Control Conference, 19-22 August 2013, Boston, Massachusetts]. Applications should then be built so that the failure of an individual entity does not imply the failure of the global mission. In other words, an entity should not rely on any expected behavior of the other entities of the swarm. As a consequence, a mission should be designed respecting the following principles: a global result (a global property to attain) should be targeted instead of a local individual result, which cannot be guaranteed; it should be qualitative rather than quantitative, since the worst case is always possible; no individual can assume a peer in the swarm is present in its neighborhood; no individual can assume a peer in the swarm is lost (it can simply be temporarily unreachable); no communication can be assumed to get through. A mission is thus de facto designed as an emergent behavior/property at the global swarm level that results from local individual behaviors. Obeying the above principles makes it possible to achieve real-world missions that can be "guaranteed" resilient to individual failures and communication faults. The counterpart (but it is worth the cost) is that it often leads to higher resource consumption.
Serge Chaumette (Bordeaux Computer Science Research Laboratory (LaBRI), University of Bordeaux)
7003 Excess of Social Behavior Reduces the Capacity to Respond to Perturbations [abstract]
Abstract: Social interaction increases significantly the performance of a wide range of cooperative systems, but natural swarms seem to limit the number of social connections. Flocking starlings interact on average with a fixed number of conspecifics and swarms of midges regulate their nearest-neighbor distance depending on the size of the swarm. This suggests that excessive social activity may have detrimental consequences. Using a canonical model of collective motion, we find that the responsiveness of a swarm is reduced when the social interaction exceeds a certain threshold. We find that the system can exhibit a large susceptibility even in the ordered phase (far from the critical point) if the amount of social interaction is set to an appropriate level. The same effect can be observed in collective decision-making models of distributed consensus, for example in a set of networked agents following the "majority vote" rule. If an external factor perturbs the state of a small sub-set of agents, this change will propagate through the network at a speed that depends on the number of social connections. These examples of distributed consensus show that an excess of social behavior can hinder their capacity to respond to fast perturbations. The result has far-reaching implications for the design of artificial swarms or interaction networks: even ignoring the costs of establishing connections and transmitting information between agents, it may be desirable to limit the number of connections in order to achieve a more effective dynamical response.
David Mateo, Roland Bouffanais (Singapore University of Technology and Design)
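A toy version of the networked response experiment (ours; it makes no claim about the optimal degree, it only exposes response time as a function of the number of connections k): stubborn "informed" agents hold a new heading while the rest average their neighbours, as in distributed consensus:

```python
# Sketch (ours): time for a networked group in the ordered phase to
# re-align with a perturbed minority, for different connectivities k.
import numpy as np
import networkx as nx

def response_time(k, n=100, n_informed=5, tol=0.1, seed=1):
    """Steps until the mean heading reaches the perturbed target."""
    G = nx.random_regular_graph(k, n, seed=seed)
    rng = np.random.default_rng(seed)
    theta = rng.normal(0.0, 0.2, n)        # ordered phase: headings near 0
    informed = rng.choice(n, n_informed, replace=False)
    target = np.pi / 2                     # the external perturbation
    for t in range(1, 2000):
        theta[informed] = target           # stubborn perturbed agents
        new = []
        for i in range(n):
            idx = list(G.neighbors(i)) + [i]
            new.append(np.arctan2(np.sin(theta[idx]).mean(),
                                  np.cos(theta[idx]).mean()))
        theta = np.array(new)
        if abs(theta.mean() - target) < tol:
            return t
    return None

for k in (4, 10, 30, 80):
    print(f"k = {k:2d}: response time = {response_time(k)} steps")
```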