Determinants of creativity and innovation in science, art and technology  (DCIS) Session 1


Time and Date: 10:00 - 12:30 on 20th Sep 2016

Room: A - Administratiezaal

Chair: Vittorio Loreto

15000 Introduction
15001 TBA
Stuart Kauffman
15002 Combinatorial evolutionary dynamics as a prototype for complex systems [abstract]
Abstract: Many evolutionary systems are combinatorial in the sense that the creation of new entities is based on the combination of already existing things. By formalising this kind of dynamics into mathematical models one can realise that evolutionary systems are prototypes of complex systems, where the underlying network structure, which determines the next possible steps in evolution (the adjacent possible), co-evolves with the population of phase space (which things currently exist). We show that these models are self-organised critical and therefore are able to capture several key features of evolutionary systems, such as power laws in creation and extinction statistics, punctuated equilibria, and phases of massive and rapid re-structuring. We show an example where the model can be used to explain innovation dynamics as seen in world trade data.
Stefan Thurner
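Illustrative sketch (not the speaker's model): a minimal combinatorial-evolution toy in which a random recipe table fixes which pairs of existing things can combine into a new thing, so the adjacent possible co-evolves with what currently exists. All names and parameter values are assumptions.

```python
import random

random.seed(1)

N = 2000                       # space of conceivable things (assumed size)
recipes = {}                   # lazily generated random recipe table

def recipe(i, j):
    """Which new thing, if any, the pair (i, j) can combine into."""
    key = (min(i, j), max(i, j))
    if key not in recipes:
        recipes[key] = random.randrange(N) if random.random() < 0.3 else None
    return recipes[key]

alive, alive_list = set(range(5)), list(range(5))   # what currently exists
creations = extinctions = 0

for t in range(50_000):
    i, j = random.sample(alive_list, 2)
    k = recipe(i, j)
    if k is not None and k not in alive:            # step into the adjacent possible
        alive.add(k)
        alive_list.append(k)
        creations += 1
    if random.random() < 0.01 and len(alive_list) > 5:
        victim = random.choice(alive_list)          # random extinction
        alive.discard(victim)
        alive_list.remove(victim)
        extinctions += 1

print(f"diversity {len(alive)}, creations {creations}, extinctions {extinctions}")
```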
15003 Studying Collective Human Decision Making and Creativity with Evolutionary Computation [abstract]
Abstract: In this talk, we will present a summary of our interdisciplinary research project "Evolutionary Perspective on Collective Decision Making" that was conducted through close collaboration between computational, organizational, and social scientists at Binghamton University. We redefined collective human decision making and creativity as evolution of ecologies of ideas, where populations of ideas evolve via continual applications of evolutionary operators such as reproduction, recombination, mutation, selection, and migration of ideas, each conducted by participating humans. Based on this evolutionary perspective, we generated hypotheses about collective human decision making and creativity, using agent-based computer simulations. The hypotheses were then tested through several experiments with real human subjects. Throughout this project, we utilized evolutionary computation (EC) in non-traditional ways: as a theoretical framework for reinterpreting the dynamics of idea generation and selection, as a computational simulation model of collective human decision-making processes, and as a research tool for collecting high-resolution experimental data on actual collaborative design and decision making from human subjects.
Hiroki Sayama
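Illustrative sketch (not the project's actual system): the evolutionary loop the abstract reinterprets, with reproduction, recombination, mutation and selection acting on a population of "ideas"; a stand-in utility function replaces the human evaluators.

```python
import random

random.seed(4)
IDEAL = [1] * 20                 # stand-in target design (assumption)

def utility(idea):               # stand-in for evaluation by human participants
    return sum(g == t for g, t in zip(idea, IDEAL))

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(40):
    population.sort(key=utility, reverse=True)
    population = population[:15]                 # selection: keep the better half
    while len(population) < 30:
        a, b = random.sample(population[:15], 2)
        cut = random.randrange(1, 19)
        child = a[:cut] + b[cut:]                # recombination of two ideas
        if random.random() < 0.3:                # mutation of an idea
            k = random.randrange(20)
            child[k] ^= 1
        population.append(child)

print("best idea utility:", utility(max(population, key=utility)))
```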
15004 The expansion into the adjacent possible as a microscopic mechanism driving innovation [abstract]
Abstract: Recently, large databases witnessing human activities allowed the observation that novelties - such as the individual process of listening to a song for the first time - and innovation processes - such as the fixation of new genes in a population of bacteria - share striking statistical regularities. Theoretical results drew attention to the mechanism of expansion into the adjacent possible, originally proposed by Stuart Kauffman in the framework of biological evolution, as a very general and powerful mechanism able to explain such regularities. Mathematically, this translates into looking at the evolution of systems where innovation occurs as a path in a complex space, whose structure and topology get continuously reshaped and expanded by the occurrence of the new. I will present a general framework based on Polya's urn able to account for many of the statistical regularities measured in the analyzed databases.
Francesca Tria
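Illustrative sketch of an urn model with triggering, in the spirit of the Polya-urn framework mentioned above: a drawn colour is reinforced with rho extra copies, and a never-before-seen colour triggers nu+1 brand-new colours (the adjacent possible). Parameter values are assumptions.

```python
import random

random.seed(0)
rho, nu = 2, 1                 # reinforcement and triggering parameters (assumed)
urn = [0, 1]                   # start with two distinct colours
next_colour = 2
seen = set()                   # colours drawn at least once
distinct = []                  # number of distinct colours seen after each draw

for t in range(100_000):
    ball = random.choice(urn)
    urn.extend([ball] * rho)                     # reinforce the drawn colour
    if ball not in seen:                         # novelty: open new adjacent possible
        seen.add(ball)
        urn.extend(range(next_colour, next_colour + nu + 1))
        next_colour += nu + 1
    distinct.append(len(seen))

# Heaps'-law-like growth: distinct colours grow sublinearly, ~ t^(nu/rho) for nu < rho
print(distinct[99], distinct[9_999], distinct[-1])
```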
15005 Major Transitions in Information Technology [abstract]
Abstract: When looking at the history of technology, we can see that not all inventions are of equal importance. Only a few technologies have the potential to start a new branching series (specifically, by increasing diversity), have a lasting impact in human life and ultimately become turning points. Technological transitions correspond to times and places in the past when a large number of novel artefact forms or behaviours appeared together or in rapid succession. Why does that happen? Is technological innovation continuous and gradual, or does it occur in sudden leaps and bounds? The evolution of information technology allows for a quantitative and theoretical approach to technological transitions. The value of information systems experiences sudden changes when we learn how to use this technology, when we can accumulate large amounts of information, and when communities of practice create and exchange information freely. The coexistence of gradual improvements and discontinuous technological change is a consequence of the asymmetric relationship between the complexity of hardware and software. Using a cultural evolution approach, I suggest that sudden changes in the organization of information technologies depend on the high costs of maintaining and transmitting reliable information.
Sergi Valverde

Information Processing in Complex Systems  (IPCS) Session 1


Time and Date: 10:00 - 12:30 on 20th Sep 2016

Room: B - Berlage zaal

Chair: Rick Quax

45000 Entropy for complex systems [abstract]
Abstract: Complex systems are often inherently non-ergodic and non-Markovian, and Shannon entropy loses its applicability. It will be shown that the entropy of non-ergodic systems can still be derived from three of the Shannon–Khinchin axioms and by violating the fourth, the so-called composition axiom. The corresponding entropy is of the form $S_{c,d} \sim \sum_i \Gamma(1+d,\, 1 - c \ln p_i)$ and depends on two system-specific scaling exponents, c and d. This entropy contains many recently proposed entropy functionals as special cases, including Shannon and Tsallis entropy. It will also be shown that this entropy is relevant for a special class of non-Markovian, path-dependent random walks. We show for the first time that a generalised maximum entropy formalism can be devised that allows one to predict distribution functions of evolving path-dependent processes.
Stefan Thurner
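A worked numerical check of the quoted entropy (a sketch, not the authors' code): for c = d = 1 the sum of upper incomplete gamma functions reduces to an affine function of Shannon entropy, as the first three axioms require. The normalisation constants of the full definition are omitted here.

```python
import numpy as np
from scipy.special import gamma, gammaincc

def S_cd(p, c, d):
    """(c,d)-entropy: sum_i Gamma(1+d, 1 - c ln p_i), constants omitted."""
    x = 1.0 - c * np.log(np.asarray(p, dtype=float))
    # gammaincc is the regularised upper incomplete gamma, so multiply by gamma(1+d)
    return float(np.sum(gammaincc(1 + d, x) * gamma(1 + d)))

p = np.full(8, 1 / 8)                       # uniform distribution over 8 states
H = -np.sum(p * np.log(p))                  # Shannon entropy in nats
# For c = d = 1, Gamma(2, 1 - ln p) = (2 - ln p) * p / e, so the sum equals
# (2 + H) / e: an affine function of Shannon entropy.
print(S_cd(p, 1, 1), (2 + H) / np.e)        # the two numbers agree
```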
45001 Integrated Information Theory Applied to Complex Systems Analysis [abstract]
Abstract: Integrated information theory (IIT) was originally proposed by Balduzzi and Tononi in 2008 and has been expanded by multiple authors since, predominantly in a theoretical neuroscience context. The main object of IIT, Phi, is an informational measure that attempts to quantify to what extent the parts of a dynamical system are simultaneously segregated (acting independently of each other) and integrated (acting as a whole). In our work, we have taken the broad concepts behind IIT and Phi and used them to gain further insight into the behaviour of complex systems. Among other systems, we have applied IIT to spiking neural networks and coupled Kuramoto oscillators. In both cases, we find that Phi-like measures can reliably identify phase transitions in the system and are related to other dynamical properties like criticality, metastability or synchronisation onset. Furthermore, IIT can offer an illustrative picture of the interdependence between parts of the system and their evolution at different timescales. To do this, we have proposed novel kinds of estimators for Phi-like measures on time series data of any kind, and studied their behaviour in terms of stability, robustness, finite sample effects and stationarity. Overall, we push IIT forward not as a fundamental neuroscientific theory, but as a useful tool to interpret and analyse information processing in complex systems.
Pedro A.M. Mediano and Murray Shanahan
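Illustrative sketch of one possible Phi-like estimator (not the authors' exact measure): a plug-in "whole-minus-sum" quantity on a binary time series, positive when temporal information carried by the whole system exceeds the sum carried by its parts. The toy dynamics are an assumption chosen to make the effect visible.

```python
import numpy as np
from collections import Counter

def mutual_info(xs, ys):
    """Plug-in mutual information (bits) between two discrete sequences."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum(c / n * np.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

rng = np.random.default_rng(0)
T = 100_000
# Toy 2-unit system: each unit copies the OTHER unit's previous state (with
# 10% flip noise), so the whole carries temporal information that no part does.
X = np.zeros((T, 2), dtype=int)
X[0] = rng.integers(0, 2, 2)
for t in range(1, T):
    flip = rng.random(2) < 0.1
    X[t] = np.where(flip, 1 - X[t - 1][::-1], X[t - 1][::-1])

past = [tuple(r) for r in X[:-1]]
future = [tuple(r) for r in X[1:]]
whole = mutual_info(past, future)                            # I(X_t ; X_{t+1})
parts = sum(mutual_info(X[:-1, i], X[1:, i]) for i in range(2))
print(f"whole = {whole:.3f} bits, parts = {parts:.3f}, Phi-like = {whole - parts:.3f}")
```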
45002 Information-Theoretic Distance between Complex Networks [abstract]
Abstract: Entropy and information-theoretic derived measures have successfully been applied in a range of disciplines, revealing time scale dependence in neural coding, quantifying the complexity of genetic sequences and playing a central role in the quantification of quantum information, to cite just a few representative achievements. However, when it comes to complex networks, an appropriate definition of entropy has remained elusive, with applicability often limited to the probability distribution of some network descriptor (such as the normalized distribution of node degrees). Here, inspired by how entropy is calculated in quantum systems, we define an interconnectivity-based density matrix to calculate the von Neumann entropy directly from a network. We prove that our definition satisfies the expected additivity properties of (quantum) thermodynamic entropy, in contrast to past approaches. We exploit this entropy to define network-based information-theoretic measures such as Renyi q-entropy, generalized Kullback-Leibler and Jensen-Shannon divergences---as complexity indicators---and importantly to define a distance measure between complex networks. Using our mathematical framework, we are thus able to numerically probe contemporary problems faced in complex network science, recovering results related to model selection and clustering layers of multilayer networks. We found that both synthetic and empirical networks exhibit information-theoretic properties, indicating that the approach offers insights to quantify complexity and distinguish networks by means of a distance measure, providing a backbone to an information-theoretic approach to complex network science.
Manlio De Domenico and Jacob Biamonte
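Illustrative sketch of the approach described, assuming the common density-matrix form rho = exp(-beta L) / Tr exp(-beta L) built from the graph Laplacian: von Neumann entropy of a network, and a Jensen-Shannon distance between two networks.

```python
import numpy as np
import networkx as nx
from scipy.linalg import expm

def density_matrix(G, beta=1.0):
    """Density matrix rho = exp(-beta L) / Tr exp(-beta L) from the Laplacian."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    R = expm(-beta * L)
    return R / np.trace(R)

def von_neumann_entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def js_distance(G1, G2, beta=1.0):
    """Square root of the Jensen-Shannon divergence between two networks."""
    r1, r2 = density_matrix(G1, beta), density_matrix(G2, beta)
    m = 0.5 * (r1 + r2)
    jsd = von_neumann_entropy(m) - 0.5 * (von_neumann_entropy(r1) + von_neumann_entropy(r2))
    return float(np.sqrt(max(jsd, 0.0)))

er = nx.gnp_random_graph(50, 0.1, seed=1)
ba = nx.barabasi_albert_graph(50, 3, seed=1)
print(js_distance(er, nx.gnp_random_graph(50, 0.1, seed=2)))  # same model: small
print(js_distance(er, ba))                                    # different models: larger
```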
45003 Information Processing in Biomolecular Regulatory Networks [abstract]
Abstract: While living systems seem distinctive in their ability to process information, and useful tools for quantifying information structure exist in complex systems research, these tools have not been widely applied to biological networks. Hence, information processing in living systems is yet to be rigorously quantified. In our work we investigate the informational architecture of Boolean models of various biomolecular regulatory networks. We also compare their informational properties to two classes of null models, random and scale-free, that share commonalities in their causal structure [1]. We report patterns in information processing that distinguish biological networks from random networks in terms of scaling relations, the total amount of information processed, and causal interaction. Based on the results, we suggest previously unidentified information-based organizational principles that go beyond topological considerations, such as a scale-free structure, and that may be critical to biological functions [2]. References: [1] H. Kim, P.C.W. Davies and S.I. Walker (2015) New Scaling Relation for Information Transfer in Biological Networks. J. Roy. Soc. Interface 12, 20150944; DOI: 10.1098/rsif.2015.0944. [2] S.I. Walker, H. Kim and P.C.W. Davies (2016) The Informational Architecture of the Cell. Phil. Trans. A 374, 20150057; DOI: 10.1098/rsta.2015.0057.
Hyunju Kim, Paul Davies and Sara Imari Walker
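Illustrative sketch (not the authors' pipeline): transfer entropy, a standard information-transfer measure, estimated on a noisy random Boolean network; links into a node from its actual inputs typically carry more transfer entropy than links from non-inputs. Network size, noise level and history length are assumptions.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(42)
N, T = 10, 50_000

# Random Boolean network: every node reads K=2 random inputs through a random
# lookup table; a little noise keeps the dynamics off short attractors.
inputs = [rng.choice(N, size=2, replace=False) for _ in range(N)]
tables = [rng.integers(0, 2, size=4) for _ in range(N)]

state = rng.integers(0, 2, N)
traj = np.empty((T, N), dtype=int)
for t in range(T):
    traj[t] = state
    nxt = np.array([tables[i][2 * state[inputs[i][0]] + state[inputs[i][1]]]
                    for i in range(N)])
    noise = rng.random(N) < 0.02
    state = np.where(noise, 1 - nxt, nxt)

def transfer_entropy(x, y):
    """Plug-in transfer entropy TE(x -> y), history length 1, in bits."""
    n = len(y) - 1
    c_xyz = Counter(zip(y[1:], y[:-1], x[:-1]))
    c_yz = Counter(zip(y[:-1], x[:-1]))
    c_yy = Counter(zip(y[1:], y[:-1]))
    c_y = Counter(y[:-1])
    return sum(c / n * np.log2(c * c_y[y0] / (c_yz[y0, x0] * c_yy[y1, y0]))
               for (y1, y0, x0), c in c_xyz.items())

src = inputs[0][0]                                  # an actual input of node 0
other = next(j for j in range(N) if j != 0 and j not in inputs[0])
print("TE from true input :", transfer_entropy(traj[:, src], traj[:, 0]))
print("TE from non-input  :", transfer_entropy(traj[:, other], traj[:, 0]))
```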
45004 Thermodynamic Cost of Information Processing in Bio-Chemical Networks [abstract]
Abstract: Life is a non-equilibrium process involving information processing with biochemistry. Understanding the thermodynamic cost of these processes is important. For processes described by linear chemical networks, Stochastic Thermodynamics provides powerful tools to do so. We illustrate this statement by characterizing the trade-offs between dissipation, speed and accuracy in kinetic proofreading [1]. However, for more complicated networks stochastic descriptions rapidly become prohibitive. We show that similar concepts can be extended to study chemical reaction networks (CNs) described by deterministic rate equations. In particular we derive a Landauer principle characterizing the amount of chemical work necessary to modify the population of a CN operating far from equilibrium [2]. [1] Rao, R. & Peliti, L. “Thermodynamics of accuracy in kinetic proofreading: dissipation and efficiency trade-offs”. J. Stat. Mech. Theor. Exp., 2015, P06001. [2] Rao, R. & Esposito, M. “Nonequilibrium Thermodynamics of Chemical Reaction Networks: Wisdom from Stochastic Thermodynamics”. arXiv 1602.07257.
Riccardo Rao, Massimiliano Esposito and Luca Peliti
45006 Applying Fisher Information to real data: Electric Vehicle charging behavior
Omri Har Shemesh, Rick Quax, Alfons Hoekstra and Peter Sloot

Workshop on Open & Citizen Data Science  (WOCD) Session 1


Time and Date: 10:00 - 12:30 on 20th Sep 2016

Room: C - Veilingzaal

Chair: Thomas Maillart

33000 Opening & Participant Introduction
33001 War is the father of all things: Creative Destruction on Wikipedia, and Implications for Citizen Science [abstract]
Abstract: From the courtroom to the marketplace, the parliament building to the newsroom, some of our most important institutions are adversarial. We expect that placing individuals in conflict will help solve complex problems in information-dense environments, and will drive long-term social evolution through the creation of new questions to answer. Yet we understand very little about the general principles that make conflict creative. This leaves a major gap in our understanding of human social development, and makes it difficult for us to mitigate the negative effects of conflict while retaining its benefits. To help remedy this, I present a new framework for the quantitative study of creativity and conflict. I apply it to a new study on the relationship between conflict and information creation in the online encyclopedia Wikipedia. Fifteen years of high-resolution records allow us to track the long and tumultuous process by which Wikipedia articles are written, re-written, torn apart and reconstructed. We can see not only how the introduction of new information into an article leads to conflict, but how conflict can often precede the creation of unexpected configurations whose long-term persistence suggests adaptive success. This work contradicts simple accounts that oppose conflict and cooperation, and suggests that popular conflict-suppression mechanisms may actually undermine Wikipedia's success. It provides new evidence for the central role of conflict in creative evolution, and new methods to measure and quantify it across a wide variety of systems. For those engaged in the design of systems for citizen science, it suggests that the correct management, and even the selective promotion, of conflict is crucial to success.
Simon DeDeo
33002 Citizen Science Practices for Computational Social Science Research [abstract]
Abstract: Under the name of Citizen Science, many innovative practices in which volunteers partner up with scientists to pose and answer real-world questions are growing rapidly worldwide. Citizen Science can furnish ready-made solutions with citizens playing an active role. However, this framework is still far from being well established as a standard tool for computational social science research. Here, we present our experience in bridging the gap between computational social science and the philosophy underlying Citizen Science, which in our case has taken the form of what we call "pop-up experiments." These are non-permanent, highly participatory collective experiments which blend features developed by big data methodologies and behavioral experimental protocols with the ideals of Citizen Science. The main issues to take into account whenever planning experiments of this type are classified, discussed and grouped into three categories: infrastructure, public engagement, and the knowledge return for citizens. We explain the solutions we have implemented, providing practical examples grounded in our own experience in an urban context (Barcelona, Spain). Our aim here is that this work will serve as a guideline for groups willing to adopt and expand such in vivo practices and we hope it opens up the debate regarding the possibilities (and also the limitations) that the Citizen Science framework can offer the study of social phenomena.
Oleguer Sagarra, Mario Gutiérrez-Roig, Isabelle Bonhoure, Josep Perelló
33003 The Open Science Company Manifesto [abstract]
Abstract: This manifesto aims at defining the fundamental statements that every company operating with the Open Science initiative in mind must respect and promote.
William Durand
33004 A Decentralized Approach to Dissemination, Retrieval, and Archiving of Data [abstract]
Abstract: Making available and archiving scientific results is for the most part still considered the task of classical publishing companies, despite the fact that classical forms of publishing centered around printed narrative articles no longer seem well-suited in the digital age. In particular, there exist currently no efficient, reliable, and agreed-upon methods for publishing scientific datasets, which have become increasingly important for science. Here we propose to design scientific data publishing as a Web-based bottom-up process, without top-down control of central authorities such as publishing companies. Based on a novel combination of existing concepts and technologies, we present a server network to decentrally store and archive data in the form of nanopublications, an RDF-based format to represent scientific data. We show how this approach allows researchers to publish, retrieve, verify, and recombine datasets of nanopublications in a reliable and trustworthy manner, and we argue that this architecture could be used for the Semantic Web in general. Evaluation of the current small network shows that this system is efficient and reliable.
Tobias Kuehn
33005 Un-conference Breakout Session
33006 Presentations from breakout sessions

Territorial Intelligence for Multi-level Equity and Sustainability  (TIME) Session 1


Time and Date: 10:00 - 12:30 on 20th Sep 2016

Room: D - Verwey kamer

Chair: Celine Rozenblat

38000 Presentation of the CS-DC TIMES Flagship [abstract]
Abstract: This talk presents the CS-DC TIMES flagship. The flagship aims at creating a global ecosystem that gives every territory the same chance to become a "smart" territory, by using a global market for open responsible innovations linked to the global scientific and technological revolution. This global ecosystem builds on the 2nd internet revolution and is devoted to all those, scientists of any discipline, experts from territorial governments, NGOs, firms and start-ups, as well as ordinary citizens, who want to jointly increase social wellbeing, improve the relationship with Nature, and change the relations between science, engineering, politics and ethics.
Paul Bourgine (Ecole Polytechnique, Paris)
38001 TIMES: The global systems context [abstract]
Abstract: The paper will present a personal vision of the difficult years our world is facing, and of the role that the TIMES project can play in it. Our world system, under the pressure of globalization, is hitting a ceiling in which local, regional, bottom-up economic and identity issues are raising populist tendencies exploited by politicians, which is a highly combustible mixture. We need urgently to reduce tensions and arrive at a more balanced approach to the future of our planetary system. Education is a crucial element in that process, and that is where TIMES comes in.
Sander Van der Leeuw (Arizona State University)
38002 Considering urban geo-diversity in global models of climate change mitigation and ecological transition [abstract]
Abstract: There are many attempts at deducing smart urban policies from global models that aim at managing an ecological transition in human habitat and activities. Here I would like to recall two major insights from urban science that could help improve the efficiency of such decisions. One is about taking care of the tensions created by persistent or growing inter-urban inequalities or divergences in the distribution of wealth and population at the world scale; the other is about enhancing local geo-diversity (in size, morphology and cultural hybridization of technologies), which is an essential driving force in the complex dynamics of cities within systems of cities. Empirical evidence and simulation models will be presented whose developments could be shared within the TIMES flagship proposal.
Denise Pumain (University Paris 1)
38003 Territorial and functional structures in the face of global systemic risks [abstract]
Abstract: It is often assumed in debates about global systemic risks - like those of climate change, financial instability, disruptive inequality and more - that they have ultimately to be resolved through some kind of agreement among the governments of the world. This perspective gives priority to the territorial structure of the nation state and leads to the difficult question of whether the world economy can and should be controlled by political means. The global economy, however, is not based on territorial, but on functional structures. They include global markets as well as multinational corporations and border-crossing supply chains. Particularly important are regional innovation systems linked over large distances, like Northern California, South Korea, Taiwan and others. The linkages between such innovation systems would be impossible without the professional networks that are characteristic of what has been called the knowledge society. How to harness the potential of such networks might become a key question for the TIMES project.
Carlo Jaeger (Global Climate Forum, Germany)

EvoEvo (Evolution of Evolution)  (EOE) Session 1


Time and Date: 10:00 - 12:30 on 20th Sep 2016

Room: E - Mendes da Costa kamer

Chair: Guillaume Beslon

16000 What is EvoEvo? (invited talk) [abstract]
Abstract: Variation and Selection are the two core processes of Darwinian Evolution. Yet, both are directly regulated by many processes that are themselves products of evolution (e.g. DNA repair, mutator genes, transposable elements, horizontal transfer, stochasticity of gene expression, sex, network modularity, niche construction…). This results in the ability of evolution to self-modify its operators, hence its dynamics. We call this process “Evolution of Evolution” or EvoEvo. Different EvoEvo strategies have been proposed in the literature, including regulation of variability, robustness/evolvability strategies and bet-hedging, but finding traces of these strategies in extant organisms is difficult. Moreover, all these strategies are likely to interact with one another, blurring their respective outcomes. However, new tools are now available that help us understand EvoEvo. On the one hand, large-scale bioinformatic data analysis can be used to recognize signatures of evolution of evolution. On the other hand, large-scale computational modelling of multi-level evolution is now becoming feasible, and promises to shed light on the conditions under which evolutionary mechanisms evolve as well as their consequences.
Paulien Hogeweg (Utrecht University)
16001 Epistasis and the structure of fitness landscapes: are experimental fitness landscapes compatible with Fisher’s geometric model? (invited talk) [abstract]
Abstract: The fitness landscape defines the relationship between genotypes and fitness in a given environment and underlies fundamental quantities such as the distribution of selection coefficients and the magnitude and type of epistasis. A better understanding of variation in landscape structure across species and environments is thus necessary to understand and predict how populations will adapt. An increasing number of experiments investigate the properties of fitness landscapes by identifying mutations, constructing genotypes with combinations of these mutations, and measuring the fitness of these genotypes. Yet these empirical landscapes represent a very small sample of the vast space of all possible genotypes, and this sample is often biased by the protocol used to identify mutations. Here we develop a rigorous statistical framework based on Approximate Bayesian Computation to address these concerns and use this flexible framework to fit a broad class of phenotypic fitness models (including Fisher’s model) to 26 empirical landscapes representing nine diverse biological systems. Despite uncertainty owing to the small size of most published empirical landscapes, the inferred landscapes have similar structure in similar biological systems. Surprisingly, goodness-of-fit tests reveal that this class of phenotypic models, which has been successful so far in interpreting experimental data, is plausible in only three of the nine biological systems. More precisely, although Fisher’s model was able to explain several statistical properties of the landscapes—including the mean and SD of selection and epistasis coefficients—it was often unable to explain the full structure of fitness landscapes.
François Blanquart (Imperial College)
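Illustrative sketch of the inference machinery, not the authors' pipeline: Approximate Bayesian Computation by rejection sampling, applied to a deliberately simple stand-in model (a Gaussian with unknown spread) in place of Fisher's geometric model. All data, priors and tolerances are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# "Observed" selection coefficients from a stand-in model with unknown spread
true_sigma = 0.4
observed = rng.normal(0.0, true_sigma, size=30)

def summarise(s):
    return np.array([s.mean(), s.std()])

obs = summarise(observed)

accepted = []
for _ in range(100_000):
    sigma = rng.uniform(0.0, 1.0)             # draw a parameter from the prior
    sim = rng.normal(0.0, sigma, size=30)     # simulate the stand-in model
    if np.linalg.norm(summarise(sim) - obs) < 0.1:
        accepted.append(sigma)                # keep parameters that reproduce the data

post = np.array(accepted)
print(f"posterior: {post.mean():.3f} +/- {post.std():.3f} (true value {true_sigma})")
```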
16002 The origin of genes through spontaneous symmetry breaking (invited talk) [abstract]
Abstract: The heredity of the modern cell is provided by a small number of non-catalytic template molecules—the gene. This basic feature of modern-type heredity, however, is believed to have been absent at the earliest stages of evolution. The RNA world hypothesis posits that the heredity of the first, primitive cell (protocell, for short) was provided by a population of dual-functional molecules serving as both templates and catalysts. How could genes originate in protocells? Here, I will discuss the possibility that gene-like molecules emerge in protocells through spontaneous symmetry breaking between the complementary strands of replicating molecules. Our model assumes a population of primitive cells, each containing a population of replicating molecules. Protocells are selected towards maximizing the catalytic activity of internal molecules, whereas molecules tend to evolve towards minimizing it. These conflicting evolutionary tendencies at different levels induce symmetry breaking, whereby one strand of replicating molecules maintains catalytic activity and increases its copy number, whereas the other completely loses catalytic activity and decreases its copy number—like genes. The evolution of these gene-like molecules increases the equilibrium fitness of protocells. Our results implicate conflicting multilevel evolution as a key cause of the evolution of genetic complexity.
Nobuto Takeuchi (University of Tokyo)
16003 Evolvability drives innovation in viral genomes [abstract]
Abstract: http://www.evoevo.eu/download/evoevo_workshop_2016/EvoEvo2016_Beslon.pdf
Guillaume Beslon (Université de Lyon and Rhône-Alpes Complex Systems Institute, FR)
16004 Mutational load is ameliorated by increased transcriptional load-associated mutations, if these are biased towards duplications and deletions [abstract]
Abstract: http://www.evoevo.eu/download/evoevo_workshop_2016/EvoEvo2016_Colizzi.pdf
Enrico Sandro Colizzi (Utrecht University, NL)
16005 Evolution of mutator populations in constant environments [abstract]
Abstract: http://www.evoevo.eu/download/evoevo_workshop_2016/EvoEvo2016_Rutten.pdf
Jacob Pieter Rutten (Utrecht University and Université de Lyon, NL)
16006 Open-Endedness: Definitions and Shortcuts [abstract]
Abstract: http://www.evoevo.eu/download/evoevo_workshop_2016/EvoEvo2016_stepney.pdf
Susan Stepney (University of York and York Centre for Complex Systems Analysis, UK)

Feedback in Complex Systems  (FCS) Session 1


Time and Date: 10:00 - 12:30 on 20th Sep 2016

Room: F - Rode kamer

Chair: Samuel Johnson

31000 Instabilities, self-organisation and feedback in geophysical fluid dynamics [abstract]
Abstract: The emergence of coherent circulation patterns of vortices and jets is a common feature of many large scale geophysical flows. Well known examples include the Earth's jet stream and Jupiter's Great Red Spot. These large scale structures exist in a background of turbulent fluctuations which are usually caused by hydrodynamic instabilities operating at much smaller scales. Some large scale structures are believed to have originally been created by the self-organisation of these small scale turbulent fluctuations into large-scale, quasi-deterministic flows. This self-organisation transfers energy from small scales to large scales, a process known as an "inverse cascade". In this talk, I will provide a non-specialist introduction to the physics of inverse cascades and discuss how it drives self-organisation phenomena in fluid dynamics. I will illustrate the ideas with a stylised model of the formation of zonal jets - structures analogous to Jupiter's bands - from turbulence generated by an instability at much smaller scales. The zonal jets initially grow by extracting energy from small scale turbulent fluctuations. However a negative feedback mechanism is present which inhibits the instability mechanism generating this turbulence as the jet intensity increases. Jets thus arrest the energy input feeding their own growth and the system reaches a dynamical steady state in which the instability mechanism and feedback mechanism cancel each other out on average.
Colm Connaughton
31001 When slower is faster [abstract]
Abstract: The slower is faster (SIF) effect occurs when a system performs worse as its components try to do better. Thus, a moderate individual efficiency actually leads to a better systemic performance. The SIF effect takes place in a variety of phenomena. We review studies and examples of the SIF effect in pedestrian dynamics, vehicle traffic, traffic light control, logistics, public transport, social dynamics, ecological systems, and adaptation. Drawing on these examples, we generalize common features of the SIF effect and suggest possible future lines of research. Reference: Gershenson, C. and Helbing, D. (2015). When slower is faster. Complexity, 21(2):9-15. http://dx.doi.org/10.1002/cplx.21736
Carlos Gershenson
31002 Brain complexity and phase transitions [abstract]
Abstract: This talk will illustrate how brain functions, including some of the high-level ones setting up the mind, may be understood on well-defined grounds if one assumes analogy with scenarios that physics classifies as (non-equilibrium) phase transitions and critical phenomena. It suggests models that identify basic mechanisms and help in interpreting observations. This strategy also allows for comparison of data obtained from brains in different stages of evolution, and suggests experiments to detect significant changes of brain dynamics. References: "Efficient transmission of subthreshold signals in complex networks of spiking neurons", PLoS One 10(3), e0121156 (2015), J.J. Torres, I. Elices, and J. Marro. "Brain Performance versus Phase Transitions", Sci. Rep. 5, 12216 (2015), J.J. Torres and J. Marro. Some yet unpublished work with Ana P. Millan. Physics, Nature and Society, J. Marro (Springer 2014). Nonequilibrium Phase Transitions in Lattice Systems, J. Marro and R. Dickman (Cambridge Univ. Press 2005).
Joaquin Marro
31003 Market ecology and evolution [abstract]
Abstract: How do we understand the relationships between the different actors in a market and how they change over time? I will discuss how to understand and map out market ecologies. The key insight is that market evolution is driven by second order deviations from market efficiency. Market participants who use the market for direct purposes, such as liquidity extraction or risk diversification, create inefficiencies that support a diversity of different types of arbitrageurs. Based on a simple theory of differential price formation it is possible to map out the ecological relationships between financial strategies, which can be either predator-prey or competitive, and possibly mutualistic. I will show how to estimate timescales for market evolution. I conjecture that market ecology is a key determinant of market stability. Finally I will discuss how this theory can be developed and put to practical use.
Doyne Farmer

Social and Economic Change as a Complex Dynamical System  (SEC) Session 1


Time and Date: 10:00 - 12:30 on 20th Sep 2016

Room: G - Blauwe kamer

Chair: Matthieu Cristelli

2000 Percolation in complex networks and applications in innovation economics [abstract]
Abstract: In my talk, I will go into the standard model of percolation and how this model can be used to analyse innovation diffusion processes through word-of-mouth. We will do so distinguishing between different network topologies (random, small-world, regular), between simple and complex propagation, and between different preference distributions reflecting economic inequality levels. This work is based on collaborations with dr. Elena Mas Tur (Eindhoven University of Technology) and dr. Paolo Zeppini (Bath University).
Koen Frenken
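Illustrative sketch (not the speaker's model): a site-percolation-style word-of-mouth diffusion, where a node adopts only if informed by a neighbour and its willingness-to-pay (drawn from a preference distribution) exceeds the price, compared across two topologies. All distributions and parameters are assumptions.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(7)
n, k = 1000, 6

def adopters(G, price, seeds=5):
    """Fraction adopting: spread over 'open' nodes (willingness-to-pay >= price)."""
    wtp = {v: rng.exponential(1.0) for v in G}          # preference distribution
    informed = set(rng.choice(n, seeds, replace=False))
    adopted = {v for v in informed if wtp[v] >= price}
    frontier = set(adopted)
    while frontier:
        nxt = set()
        for v in frontier:
            for u in G.neighbors(v):
                if u not in adopted and wtp[u] >= price:
                    nxt.add(u)
        adopted |= nxt
        frontier = nxt
    return len(adopted) / n

random_net = nx.gnm_random_graph(n, n * k // 2, seed=1)
small_world = nx.watts_strogatz_graph(n, k, 0.05, seed=1)
for price in (0.5, 1.0, 1.5):
    print(f"price {price}: random {adopters(random_net, price):.2f}, "
          f"small-world {adopters(small_world, price):.2f}")
```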
2001 The development pathways of nations: the heterogeneous dynamics of economic complexity [abstract]
Abstract: Recent results of a new branch - Economic Complexity - set the basis for a framework to interpret and explain the extreme heterogeneity of the dynamics of development of nations. The idea behind the concept of Economic Complexity is to use the output of a country to determine how fertile the economic system is. In other words, to go from the observable exported products to a synthetic estimate (fitness) of the level of endowments present in the country, one that automatically takes their relationships into account. We will discuss the main results and achievements of Economic Complexity. As an example, the metric for country intangibles allows for quantifying the hidden growth potential of countries by comparing it with money-based figures such as the GDP per capita. The analysis of the economic evolution in the plane defined by fitness and GDP per capita pinpoints strongly heterogeneous patterns of evolution and allows us to cast economic forecasts into the framework of forecasting the evolution of a dynamical system, as in the case of weather dynamics. We also observe a strong heterogeneity in the predictability of the economic dynamics. In such a framework, the usual tool used in Economics (i.e. regressions) is no longer the appropriate one to deal with such a heterogeneous scenario, and new concepts, borrowed from dynamical systems theory, are needed. We will also discuss how it is possible to track and define trajectories of growth and development at the single-product level by defining a suitable network, named Product Progression, linking products on a technological hierarchical basis.
Andrea Tacchella
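Illustrative sketch of the nonlinear fitness-complexity iteration underlying the fitness metric mentioned in the abstract, run here on a random toy export matrix (real analyses use harmonised trade data).

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy binary country-product matrix (1 = country competitively exports product)
M = (rng.random((20, 50)) < 0.2).astype(float)
M = M[M.sum(axis=1) > 0][:, M.sum(axis=0) > 0]     # drop empty rows/columns
n_c, n_p = M.shape

F, Q = np.ones(n_c), np.ones(n_p)                  # fitness and complexity
for _ in range(100):
    F_new = M @ Q                                  # fit countries export many complex products
    Q_new = 1.0 / (M.T @ (1.0 / F))                # products exported by low-fitness
                                                   # countries cannot be complex
    F = np.maximum(F_new / F_new.mean(), 1e-12)    # normalise; floor avoids division by zero
    Q = Q_new / Q_new.mean()

print("countries ranked by fitness:", np.argsort(-F))
```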
2002 The Complex Roots of Economic Liberalism [abstract]
Abstract: Economic theory has developed in such a way as to be consistent with the socio-political liberalism which became dominant after the Enlightenment. The doctrine of "laissez faire", and the argument that leaving people insofar as possible to their own devices would lead to a socially desirable state, was based on the conviction that an Invisible Hand would lead society to such a state. As economic theory developed, it was never able to give a formal justification for this assertion. It was claimed that this would happen, but the discipline was confined to studying the welfare properties of equilibrium states without explaining how they were attained. Thus crises were said to be generated by exogenous shocks and did not come from within the system. Changing our two-hundred-year-old paradigm to thinking of the economy as a complex adaptive system allows us to consider economies out of equilibrium and the fact that they may self-organise into states which are far from optimal. Such systems with their feedbacks are unpredictable and policy measures can generate unexpected consequences. Accepting this may lead to more realistic and more modest economic theory.
Alan Kirman
2003 Supply Chain Disruptions: Evidence from the Great East Japan Earthquake [abstract]
Abstract: This paper examines whether propagation of idiosyncratic, firm-level shocks through input-output linkages can lead to sizable fluctuations at the aggregate level. Using a large-scale dataset on supply chain linkages among Japanese firms together with information on firm-level exposures to a large, but localized, natural disaster, the Great East Japan Earthquake in 2011, we quantify the earthquake's impact on firms that were (directly or indirectly) linked to affected firms. We find that having a supplier in the earthquake-hit region led to a 3% loss in terms of sales growth compared to firms with no such suppliers. We also find evidence for smaller but nevertheless significant upstream propagation from affected firms to their suppliers. Furthermore, we show that these losses do not remain confined to the disrupted firms' immediate customers and suppliers. Rather, firms that were only indirectly related to the firms in the affected areas (such as their customers' customers) were also negatively impacted. Even though our results suggest that such cascade effects decay with supply chain distance, the number of firms affected is large enough for this localized disruption to have a meaningful macroeconomic impact: the propagation of the earthquake shock over input-output linkages led to a 1% drop in Japan's aggregate output in the year following the earthquake.
Vasco Carvalho
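Illustrative toy (not the paper's econometric model): a shock propagating downstream (and, more weakly, upstream) through a directed supplier-customer network, decaying with supply-chain distance. Network, loss rates and decay factors are all assumptions.

```python
import networkx as nx

# Toy directed supplier -> customer network
G = nx.gnp_random_graph(500, 0.01, seed=5, directed=True)

def sales_loss(G, hit, direct=0.10, pass_through=0.3, rounds=4):
    """Propagate a localized shock; each hop keeps a fraction of the loss."""
    loss = {v: 0.0 for v in G}
    for v in hit:
        loss[v] = direct
    frontier = set(hit)
    for _ in range(rounds):
        nxt = {}
        for v in frontier:
            for u in G.successors(v):              # customers of v
                nxt[u] = max(nxt.get(u, 0.0), loss[v] * pass_through)
            for u in G.predecessors(v):            # suppliers of v (weaker channel)
                nxt[u] = max(nxt.get(u, 0.0), loss[v] * pass_through * 0.5)
        frontier = {u for u, l in nxt.items() if l > loss[u]}
        for u in frontier:
            loss[u] = nxt[u]
    return loss

hit = list(range(25))                              # firms in the disaster region
loss = sales_loss(G, hit)
agg = sum(loss.values()) / len(loss)
print(f"aggregate output loss: {agg:.3%}; firms affected: "
      f"{sum(l > 0 for l in loss.values())}")
```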
2004 Portfolios of Technology Investments [abstract]
Abstract: We study how to optimally schedule production in technologies following experience curves under uncertainty to meet a known demand schedule. Experience effects encourage specialization but in an uncertain world a risk-averse decision maker may prefer to diversify. This is relevant for planning investment strategies for the transition to a sustainable energy system. We develop a stochastic model and characterize the optimal degree of diversification as a function of relative progress rates, initial conditions, time horizon, discount rate and variability. Due to feedback effects from learning, solutions depend sensitively on underlying model parameters, and we observe instantaneous switching between local optima of the objective function. This differs sharply from portfolios of standard financial assets. Technological lock-in may be characterised in this framework. When the planning horizon is long, we find that it is optimal to specialize more in the short term than the long term, where uncertainty dominates the utility function.
Rupert Way
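Illustrative sketch (not the authors' model): two technologies following stochastic experience curves (Wright's law); scanning the fixed production share exposes the mean-variance trade-off between specialising in the faster learner and diversifying. All learning rates, costs and noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)

def discounted_cost(share, T=30, runs=2000, alpha=(0.30, 0.15), sigma=0.1, r=0.05):
    """Discounted cost of meeting unit demand per period with two technologies."""
    out = np.empty(runs)
    alpha = np.array(alpha)
    q = np.array([share, 1.0 - share])          # fixed production shares
    for k in range(runs):
        x = np.ones(2)                          # cumulative production (experience)
        c = np.array([1.0, 1.2])                # initial unit costs
        total = 0.0
        for t in range(T):
            total += np.sum(c * q) / (1 + r) ** t
            x_new = x + q
            # cost falls as a power of cumulative production, with noise
            c = c * (x_new / x) ** (-alpha) * np.exp(sigma * rng.normal(size=2))
            x = x_new
        out[k] = total
    return out.mean(), out.std()

for s in (0.0, 0.25, 0.5, 0.75, 1.0):
    m, sd = discounted_cost(s)
    print(f"share of tech 1 = {s:.2f}: mean {m:.2f}, std {sd:.2f}")
# A risk-averse planner trades the lower mean of specialising in the faster
# learner against the lower variance of a diversified portfolio.
```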

Law and complexity  (LAC) Session 1


Time and Date: 10:00 - 12:30 on 20th Sep 2016

Room: H - Ontvangkamer

Chair: Marion Dumas

22000 Session 1: Modeling Law and Social Cohesion
22001 Cooperation on the tipping point: how do rules, norms and law violations undermine society? [abstract]
Abstract: In evolutionary game theory, success-driven migration has been found to be important for the emergence of cooperation, in particular when cooperative communities can be established. However, in real life, migrations are often associated with illegal activities or other behavior deviating from the local social norms. If widespread, these activities may jeopardize cooperation and destabilize societies. We resort to a simple spatial game with individuals endowed with deterministic prisoner's dilemma and success-driven migration rules, as well as an additional stochastic property-violation migration rule. These rules altogether reflect the odds of a fully rational individual engaging in illegal activity, against the enforcement of individual, social and institutional rules. We find that a little rule violation helps cement cooperative communities, capable of handling violations locally. However, with increasingly prevalent rule-violating activity, cooperators aggregate and form larger clusters against defectors. These massive clusters of cooperators turn out to be fragile against invasion by defectors, and they ultimately collapse. Our results are comparable to climate change and other tipping-point phenomena: rule-violating behaviors may create the conditions of local weakness early on, which trigger hardly reversible systemic consequences for society in the long run. This has implications for policies on immigration, labor mobility, public discourse, and innovation.
Thomas Maillart, Stefan Bechtold, Dirk Helbing
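Illustrative toy in the spirit of the abstract (not the authors' exact rules): a spatial prisoner's dilemma with success-driven migration, a tunable probability of rule-violating (random) migration, and imitation of the best-scoring neighbour. Grid size, payoffs and update rules are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
L = 30
T_, R_, P_, S_ = 1.3, 1.0, 0.1, 0.0        # prisoner's dilemma payoffs, T > R > P > S
NBRS = ((1, 0), (-1, 0), (0, 1), (0, -1))

start = rng.choice([1, 0, -1], size=(L, L), p=[0.25, 0.25, 0.5])  # 1=C, 0=D, -1=empty

def payoff(g, i, j, strat=None):
    s = g[i, j] if strat is None else strat
    tot = 0.0
    for di, dj in NBRS:
        o = g[(i + di) % L, (j + dj) % L]
        if o == -1:
            continue
        tot += (R_ if o == 1 else S_) if s == 1 else (T_ if o == 1 else P_)
    return tot

def step(g, p_violate):
    i, j = rng.integers(L, size=2)
    if g[i, j] == -1:
        return
    # migration: normally success-driven (best prospective empty neighbour site);
    # with probability p_violate, an unconditional rule-violating random move.
    # The mover's old site still counts as occupied here (toy simplification).
    empties = [(di, dj) for di, dj in NBRS if g[(i + di) % L, (j + dj) % L] == -1]
    if empties:
        if rng.random() < p_violate:
            di, dj = empties[rng.integers(len(empties))]
            move = True
        else:
            scores = [payoff(g, (i + di) % L, (j + dj) % L, strat=g[i, j])
                      for di, dj in empties]
            best = int(np.argmax(scores))
            di, dj = empties[best]
            move = scores[best] > payoff(g, i, j)
        if move:
            ni, nj = (i + di) % L, (j + dj) % L
            g[ni, nj], g[i, j] = g[i, j], -1
            i, j = ni, nj
    # imitation: adopt the strategy of the best-scoring occupied neighbour
    best_s, best_p = g[i, j], payoff(g, i, j)
    for di, dj in NBRS:
        ni, nj = (i + di) % L, (j + dj) % L
        if g[ni, nj] != -1 and payoff(g, ni, nj) > best_p:
            best_s, best_p = g[ni, nj], payoff(g, ni, nj)
    g[i, j] = best_s

for p_violate in (0.0, 0.05, 0.5):
    g = start.copy()
    for _ in range(100_000):
        step(g, p_violate)
    occ = (g != -1).sum()
    print(f"p_violate={p_violate}: cooperator share = {(g == 1).sum() / occ:.2f}")
```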
22002 Regulating Complex Adaptive Systems: Towards a Computational Model for Simulating the Effects of Rules [abstract]
Abstract: Complexity cannot be strictly defined, only situated in between order and disorder. A complex system is presented as a collection of interacting agents, representing components as diverse as people, cells or organizations. Because of the non-linearity of the interactions, the overall system evolution is to a substantial degree unpredictable and uncontrollable. One of the research topics in the Multi-Agent Systems area is using models that represent social structures, such as a network of organizations that create alliances, to analyze more objectively the emergent behavior of such a regulated open system. In our research, we study the impact of rules that describe the expected behavior of actors in such a system. For this, we model both the rules as well as some aspects of the behavior of the agents that are subjected to these rules. In our simulations, we model complex networks that consist of many different actors that may represent individuals or organizations, which are related to each other by various types of relationships. Examples may be dependencies on goals, conflicts over resources and various beliefs. Legal rules bind the actors, and because they are part of an organizational network (in this research a distributed network) they are bounded by a set of norms, including legal norms, contracts, and agreements. Obviously, actors may comply with the rules or not, and being able to notice non-compliance and respond to it adequately is one of the reasons why we are interested in this research topic. The inherent complexity and unpredictability of this social society demand new kinds of coordination mechanisms that focus on rapid joint responses and collective actions instead of centralized predictive planning. We present a multi-agent framework intended to explore the emergent behavior of a regulated complex system. Our approach is the result of ongoing investigations into the impact of regulations on social regimes, with the purpose of understanding social complexity as an emergent phenomenon floating on the characteristics of the models of the involved agents. In order to reduce the potentially infinite variance of individuals that are interacting in our real societies, we propose to apply canonical agents as an abstract model of agents. This will allow us to model the domain system with sufficient accuracy while being able to reduce the computational demands of our simulation. Our framework relies on agent-role modeling and simulation as a tool for examining the specific manifestations of emergent behavior. It proceeds along three steps. First, we explore how an institutional perspective can be handled in our computational framework. Roles, institutions, and rules become components of the agent architecture. Second, we extend the agent architecture to address the problem of choosing an appropriate plan in an uncertain situation, when an agent has to respond to and act upon uncertain and incomplete information. Third, the resulting scenario representation is synthesized as agent programs. These scripts correspond to descriptions of agent roles observed in a social setting.
Ameneh Deljoo, Tom van Engers, Leon Gommans, Cees de Laat
22003 Unpacking the Mandate of Heaven Argument
Aernout Schmidt, Kunbei Zhang
22004 Session 2: Law and Network Science
22005 Law as a Network of Networks [abstract]
Abstract: Over the past decades, the study of networks has brought remarkable advances in our understanding of complex systems. As various scholars have pointed out, law, too, can be viewed as a complex system. It follows, then, that we should use tools from network science to analyze law, legal rules, and legal systems. While the number of publications exploring networks in law has surged in recent years, most authors draw on the same small set of research questions and network definitions. My paper aims to change that. I describe law as a network of networks and develop a taxonomy of its elements, distinguishing nodes by category and edges by structure, order, and type of the underlying relations. This taxonomy illustrates that to fully understand law's structure and dynamics, we need to embrace its complexity in our network definitions and investigate the interplay between the different kinds of networks in law. Therefore, the paper concludes with some thoughts on how my taxonomy could pave the way towards a truly interdisciplinary research agenda for the science of legal networks.
Corinna Coupette
22006 Unravelling the complexity of legal decisions with network science [abstract]
Abstract: Do case citations reflect the "real" importance of individual judgments for the legal system concerned? This question has long been puzzling empirical legal scholars. Existing research typically studies case citation networks as a whole, applying traditional network metrics stemming from graph theory. Those approaches are able to detect globally important cases, but since they do not take time explicitly into account, they cannot provide a comprehensive picture of the dynamics behind the network structure and its evolution. This talk intends to provide such a description by presenting the results obtained in a series of empirical and interdisciplinary studies conducted on two different jurisdictions. In the first series of studies, we analyze the corpus of decisions made by the International Criminal Court (ICC) since its creation. This work shows how network metrics can help reveal some fundamental aspects of the cases, such as the role and the rights of the victims in the trials and the particular emphasis that the Court placed on the notion of reparations. In a second series of studies, we use two node importance metrics that take time into account to study important cases of the Court of Justice of the European Union over time. We then compare cases deemed important by the metrics with a set of 50 cases selected by the Court as the most important (landmark) cases. Our contribution here is twofold. First, with regard to network science, we show that structural and time-related properties are complementary, and necessary to obtain a complete and nuanced picture of the citation network. Second, with regard to the case law of the Court, these studies provide empirical evidence clarifying the motivation of the Court when selecting the landmark cases, revealing the importance of symbolic and historical cases in the selection. In addition, the temporal analysis sheds new light on the network properties specific to the landmark cases that distinguish them from the rest of the cases. We validate our results by providing legal interpretations that sustain the highlights provided by the proposed network analysis.
Fabien Tarissan
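Illustrative sketch on hypothetical data (not the metrics used in the studies): comparing a purely structural importance score with a simple time-aware one on a toy citation network; the exponential decay form is an assumption.

```python
import networkx as nx

# Tiny synthetic citation network: an edge u -> v means case u cites case v;
# node ids double as decision years for simplicity (assumed data).
years = {i: 1990 + i for i in range(12)}
G = nx.DiGraph()
G.add_nodes_from(years)
G.add_edges_from([(2, 0), (3, 0), (4, 1), (5, 0), (5, 2), (6, 2), (7, 5),
                  (8, 5), (9, 5), (10, 5), (11, 5), (11, 0)])

# Purely structural importance: PageRank (rank accrues to heavily cited cases)
pagerank = nx.pagerank(G)

# A time-aware alternative: recent citations count more (exponential decay)
def timed_indegree(G, years, half_life=5.0):
    now = max(years.values())
    return {v: sum(0.5 ** ((now - years[u]) / half_life) for u in G.predecessors(v))
            for v in G}

timed = timed_indegree(G, years)
top3 = lambda d: sorted(d, key=d.get, reverse=True)[:3]
print("structural top cases:", top3(pagerank))
print("time-aware top cases:", top3(timed))
```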
22007 The Fitness of Case Law [abstract]
Abstract: All legal systems rely on case law to ensure their functioning. Most empirical legal studies to date have used classical statistical techniques to characterize the case law of a single court. Thus, our quantitative knowledge about case law as a whole is still very limited. Here, we propose the concept of case law fitness to systematically investigate the workings of case law from a network theoretic perspective. In particular, we study the relationships between decisions and bodies of case law from multiple sources. We develop hypotheses regarding the material, institutional, temporal, spatiocultural, and personal factors that may influence how cases cite and get cited. To test our hypotheses, we present empirical results and insights into some of the determinants of case law fitness. Using a dataset containing not only U.S. Supreme Court cases but also decisions from U.S. Courts of Appeals and U.S. District Courts, we study the temporal evolution of the entire citation network as well as the age structure of links. We further analyze the relationship between intra- and inter-institutional citations and the relevance of institutional hierarchies between courts. Finally, we examine the geospatial distribution of inter-institutional citations. The results point towards open questions regarding the role of cases in the evolution of legal systems and draw attention to some of the methodological challenges lying ahead.
Manik Debsankha, Corinna Coupette, José Casadiego, Nora Molkenthin, Marc Timme
22008 The Dynamics and Drivers of Statutory Interpretation
Marion Dumas

Complex Systems in Education  (CSIE) Session 1


Time and Date: 10:00 - 12:30 on 20th Sep 2016

Room: I - Roland Holst kamer

Chair: Matthijs Koopmans

28000 Complex Dynamic Systems View on Conceptual Change: How a Picture of Students' Intuitive Conceptions Accrues From Context Dependent but Dynamically Robust Learning Outcomes [abstract]
Abstract: We discuss here conceptual change and the formation of robust learning outcomes from the viewpoint of complex dynamic systems (CDS) [1]. The CDS view considers students' conceptions as context-dependent and multifaceted structures which depend on the context of their application [1,2]. In the CDS view the conceptual patterns (i.e. intuitive conceptions here) may be robust in a certain situation but are not formed, at least not as robust ones, in another situation. The stability is then thought to arise dynamically in a variety of ways and not so much to mirror rigid ontological categories or static intuitive conceptions as assumed by traditional views on conceptual change. We discuss here a computational model based on CDS, in which the learning process is modelled as a dynamical system in order to study the generic dynamic and emergent features of conceptual change [3]. The model is highly simplified and idealized, but it shows how context dependence, described here through the structure of an epistemic landscape, leads to the formation of context-dependent robust states representing learning outcomes, kinds of attractors in learning. Due to the sharply defined nature of these states, learning appears as a progression of switches from one state to another, thus giving the appearance of conceptual change as a switch from one robust state to another. These states, which correspond to the intuitive conceptions of the traditional views, are in CDS, however, dynamical epiphenomena arising from the interaction of learning dynamics and targeted knowledge as coded in the instructional design. Finally, we discuss the implications of the results in guiding attention to the design of the learning task and its structure, and how empirically accessible learning outcomes might be related to these underlying factors. References: [1] I. T. Koponen and T. Kokkonen (2014) Frontline Learning Research 4, 140-166. [2] I. T. Koponen (2013) Complexity 19, 27-37. [3] I. T. Koponen, T. Kokkonen and M. Nousiainen (2016) Complexity (in print).
I. T. Koponen and T. Kokkonen
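Illustrative toy of the kind of model described (not the authors' code): a learner performing noisy gradient descent on an "epistemic landscape" with two conception attractors whose relative depth depends on a context parameter. The functional form and all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

def V(x, context):
    """Toy epistemic landscape: two conception attractors (at -1 and +1) whose
    relative depth depends on a context parameter (the instructional design)."""
    return -(np.exp(-(x - 1) ** 2) + context * np.exp(-(x + 1) ** 2))

def learn(context, steps=3000, eta=0.05, noise=0.15):
    x = rng.normal(0.0, 0.1)                  # naive initial state
    for _ in range(steps):
        grad = (V(x + 1e-4, context) - V(x - 1e-4, context)) / 2e-4
        x += -eta * grad + noise * np.sqrt(eta) * rng.normal()
    return x

for context in (0.3, 1.0, 3.0):
    final = np.array([learn(context) for _ in range(200)])
    print(f"context={context}: fraction ending in attractor at x=+1: "
          f"{np.mean(final > 0):.2f}")
```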
28001 Complex Dynamic Systems in Science Education Research: The New Theoretical Perspective [abstract]
Abstract: Recent methodological developments have shown the applicability of the nonlinear framework (i.e. catastrophe theory, nonlinear time series analysis) in science education research and in relevant psychological theories, such as the neo-Piagetian framework, achievement goal theory or conceptual change theories. In this presentation we review recent investigations in science education research that concern learning at both the individual-level and the group-level processes, which ultimately showed the Complex Dynamic System's meta-theoretical power. Applications of Catastrophe Theory in problem solving and conceptual understanding in chemistry and physics opened a new area of investigations by implementing cognitive variables, such as information processing capacity, logical thinking, field dependence/independence and convergent/divergent thinking, as controls explaining students' achievement [1], [2]. The crucial role of certain factors, acting as bifurcation variables, sheds light on phenomena associated with surprising effects and students' failure. The catastrophe theory associates the underlying cognitive processes with nonlinear dynamics and self-organization. The nonlinear phenomenology, that is, discontinuity in the mathematical sense, implies that the learning outcomes are emergent phenomena [3]. Thus, the complex dynamic systems perspective challenges the existing conceptual change theories, which investigate learning through mechanistic and reductionistic approaches. Analogous investigations applied to cooperative learning settings bring up the issue of learning as an emergent phenomenon resulting from the nonlinear interactions among students working in groups. Students' discourses on explanations of physical phenomena, analyzed by the orbital decomposition method, appeared to possess nonlinear characteristics, which are more pronounced in the more effective sessions [4]. The power-law distribution of utterances evolving in time denotes the underlying self-organization processes that lead to the emergent learning outcomes. The merit of the new investigations is three-fold. First, they set the basis for the application of new methods and tools in educational research; second, they have provided rigorous explanations and a better understanding of the phenomena under investigation; and third, they signify a paradigm shift in science education and the rise of a new epistemology that embraces research and practice [3]. References: [1] Stamovlasis, D. (2006). The Nonlinear Dynamical Hypothesis in Science Education Problem Solving: A Catastrophe Theory Approach. Nonlinear Dynamics, Psychology and Life Sciences, 10(1), 37-70. [2] Stamovlasis, D. (2011). Nonlinear dynamics and Neo-Piagetian Theories in Problem solving: Perspectives on a new Epistemology and Theory Development. Nonlinear Dynamics, Psychology and Life Sciences, 15(2), 145-173. [3] Stamovlasis, D. (2016a). Catastrophe Theory: Methodology, Epistemology and Applications in Learning Science. In M. Koopmans and D. Stamovlasis (Eds), Complex Dynamical Systems in Education: Concepts, Methods and Applications (pp. 141-175). Switzerland: Springer Academic Publishing. [4] Stamovlasis, D. (2016b). Nonlinear Dynamical Interaction Patterns in Collaborative Groups: Discourse Analysis with Orbital Decomposition. In M. Koopmans and D. Stamovlasis (Eds), Complex Dynamical Systems in Education: Concepts, Methods and Applications (pp. 273-297). Switzerland: Springer Academic Publishing.
Dimitrios Stamovlasis
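Illustrative sketch of the cusp catastrophe central to this line of work: equilibria of the canonical potential V(x) = x^4/4 + b x^2/2 + a x; past the cusp point (b < 0) two stable states coexist, producing sudden jumps and hysteresis as the control parameter varies. Variable names are generic, not the studies' actual measures.

```python
import numpy as np

def equilibria(a, b):
    """Real equilibria of V(x) = x^4/4 + b x^2/2 + a x: roots of x^3 + b x + a."""
    roots = np.roots([1.0, 0.0, b, a])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9)

b = -2.0                              # splitting factor beyond the cusp point
for a in np.linspace(-2.0, 2.0, 9):
    print(f"a = {a:+.2f}: equilibria {[f'{x:+.2f}' for x in equilibria(a, b)]}")
# For small |a| three equilibria coexist (two stable, one unstable): slowly
# varying 'a' then produces sudden jumps and hysteresis in the observed state.
```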
28002 Opening the Wondrous World of the Possible for Education: A Generative Complexity Approach. [abstract]
Abstract: In my contribution, I develop a new, more complex view of learning and education. The focus is on opening and enlarging new spaces of the possible around what it means to educate and be educated. To open and enlarge the spaces of the possible for education, new thinking in complexity is needed to describe learning and development as complex processes of generative change. These may facilitate the opening of so-called "Spaces of Generativity", as an extension of Vygotsky's Zone of Proximal Development. Generativity is the very complex human capability of "knowing how to go on". These complex spaces are linked to the generative process of learning and development. Learners may achieve their individual and collective generativity through individual and collective activity. Learners may actually co-create and co-generate each other's learning and development, with potential non-linear, emergent effects. Learning, then, may be viewed as generative, emergent learning. Learners may even bootstrap each other within communities of learners. This may foster the development of the learner as a whole within his/her multi-dimensional Space of Generativity. New thinking in complexity is needed to design education in a new way, by fostering the scaffolding relations among learners. It is through relations that the complexly generative processes of learning and development of these learners can actually be triggered. The quality of these relations determines the quality of interaction and vice versa. The complexity involved may be taken as the fount of new possibilities for education, with effects hitherto unknown. It shows how new thinking in complexity may be taken as foundational for good education, by complexifying education. This complexifying may be viewed as opening the wondrous world of the possible for education. It may show that human beings are able to develop themselves as whole self-realizing human beings through generative processes of becoming. Complexifying education, then, is a way of humanizing education.
Ton Jörg
28003 The Fractal Dynamics of Early Childhood Play, Development and Non-linear Teaching and Learning [abstract]
Abstract: The Question: How do children before nine years of age actually learn about significant conceptual meanings, solve problems, and develop self-regulation? Educators who care to address this question--and are not content with rote memorization and children who parrot concrete verbalisms--can find some support in considering the dynamic, non-linear processes by which young children learn. Therefore, it makes sense to apprehend how young children learn in order to choreograph and coordinate how to teach in harmonious ways that do no harm. The Response: A complex dynamical systems theory perspective can help to better understand the generative process of early childhood play and learning in human development. It is relevant to envision the non-linearity of sensitive dependence on initial conditions; the equivalence of different surface manifestations with underlying processes; dynamic phase transitions that become a template for young children's play and learning processes; and their interface with a content-rich, meaning-based dynamic-themes system of curricular implementation. Throughout the strands of play dynamics, cognitive dynamics, and curricular dynamics, there appear to be similar non-linear dynamical systems functioning within children's brains. There is discussion of the confluence of research on brain functions; a body of research that informs the characteristics of young children's play and imagination; and the ways in which young children acquire fresh perceptions and cognitions. Focus on the spaces among components of physical and interpersonal relationships can illuminate the processes of these non-linear, complex, dynamic systems.
Doris Pronin Fromberg
28004 The Socially Situated Dynamics of Children's Learning Processes in Classrooms: What do we learn from a Complex Dynamic Systems approach? [abstract]
Abstract: The current literature in educational psychology strongly emphasizes that learners are intentional agents pursuing their personal goals and that they self-regulate their actions in educational contexts. But how do these intentionally regulated learning processes develop over time? The complexity approach in education entails the study of how one condition changes into another, and how the short-term and long-term time scales of development and learning are interrelated (Van Geert, 1998; Van Geert & Steenbeek, 2005; Thelen & Smith, 1994). Complexity research investigates real-time processes and captures development as it unfolds through multiple interactions between a child and the environment. The approach makes use of microgenetic methods to investigate the interaction between child and environment in real time, and to describe and test its change over time. But what kind of tools can be used to link these microgenetic measures with long-term change - in processes of learning and skill acquisition - in a meaningful way? In the presentation, we will first describe our dynamic interaction model, in which teaching-learning processes take shape in the interaction between student and teacher as autonomous, intentional agents. Second, several empirical examples will be given in which longitudinal microgenetic measures are combined, such as case studies of the interaction dynamics in student-teacher pairs during individual instruction sessions in arithmetic lessons and during science education (Van der Steen, Steenbeek, Van Dijk, & Van Geert, 2014; Steenbeek, Jansen, & Van Geert, 2012). In addition, we will discuss state space analysis and other techniques for describing the structure of observational time series as a means to visualize changes in individual teacher-student interaction dynamics over several time frames. This way the effectiveness of educational interventions can be made visible, in a way that does justice to the complexity and dynamic aspects of the teaching-learning process.
Henderien Steenbeek & Paul van Geert
28005 Using Time Series Analysis to Capture Some Dynamical Aspects of Daily High School Attendance [abstract]
Abstract: In the United States, high school attendance and drop-out are important policy concerns receiving fairly extensive coverage in the research literature. Traditionally, the focus in this work is on the summary of dropout rates and mean attendance rates in specific schools, regions or socio-economic groups. However, the question of how stable those attendance rates are over time has received scant attention. Since such stability may affect how long individual students stay in school, the issue deserves attention. We therefore need to investigate the periodic and aperiodic patterns in students' attendance behavior. The school districts that have begun to keep records of daily attendance rates in their schools over multi-year periods, such as those in New York City, have created an opportunity to do so. This presentation will describe how time series analysis can be used to estimate time-sensitive dependencies in daily attendance trajectories, distinguishing random fluctuation therein from cyclical patterns (regularity) and aperiodic ones (unpredictability). After showing simulated examples of each of these three scenarios, I will show their occurrence in the attendance plots of actual schools, based on the attendance trajectories in three schools in the course of the 2013-14 school year (N = 187 in each), and in a fourth one from 2004 to 2011 (N = 1,345). A stepwise modeling process is described to statistically confirm the presence of regular and irregular patterns in the series, and it is illustrated how irregular patterns may suggest self-organized criticality (a tension-release pattern) in the fourth school. The findings discussed here are meant to address a need in educational research to get a statistical handle on the dynamical processes proposed in the literature, and to illustrate the new insights gained from a temporal perspective on the collection and analysis of educational data in general, and school attendance in particular.
Matthijs Koopmans
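(Editorial sketch) The three simulated scenarios mentioned above are easy to mock up; in the Python sketch below, the series length is taken from the abstract (N = 187) while attendance levels and noise amplitudes are invented, and a lag-1 autocorrelation is used as a crude separator:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 187                                  # one school year of daily attendance rates
t = np.arange(n)

random_only = 0.90 + 0.02 * rng.standard_normal(n)                        # pure fluctuation
cyclical = 0.90 + 0.03 * np.sin(2 * np.pi * t / 5) + 0.01 * rng.standard_normal(n)
aperiodic = 0.90 + 0.02 * np.cumsum(rng.standard_normal(n)) / np.sqrt(n)  # slow drift

# Lag-1 autocorrelation already separates the three scenarios:
for name, x in [("random", random_only), ("cyclic", cyclical), ("aperiodic", aperiodic)]:
    r1 = np.corrcoef(x[:-1], x[1:])[0, 1]
    print(f"{name:9s} lag-1 autocorrelation: {r1:+.2f}")
```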

Robust Probabilistic Inference for Complex Systems  (RPI) Session 1

Schedule Top Page

Time and Date: 10:00 - 12:30 on 20th Sep 2016

Room: J - Derkinderen kamer

Chair: Kenric Nelson

17000 Three practical entropy concepts for complex processes [abstract]
Abstract: It is well known that the concept of entropy does not possess a unique generalization to non-equilibrium processes. This differentiation of the entropy concept, however, need not be a drawback. To describe the topology of complex networks one requires more measures than to describe regular lattices. Similarly, we require more measures for describing path-dependent non-equilibrium processes than we would for describing systems or processes in equilibrium. So far we have identified three distinct entropy-related concepts that have a practical meaning in the context of complex path-dependent processes: entropy production, the extensive entropy, and the maximum entropy functional or generalized divergence. The first tells us how compressible a sequence is; the second, how the phase space of the process grows; and the third serves for performing statistical inference, providing a starting point for the information theory of complex systems. In equilibrium systems these three measures yield identical results, while in path-dependent systems the concepts are distinct. In this talk we will briefly introduce and discuss these three concepts.
Rudolf Hanel
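(Editorial sketch) The first of the three concepts, entropy production as compressibility, can be illustrated with block entropies of a symbolic sequence; the i.i.d. test sequence and block lengths below are illustrative choices, not the speaker's:

```python
import numpy as np
from collections import Counter

def block_entropy(seq, n):
    """Shannon entropy (bits) of the empirical distribution of length-n blocks."""
    counts = Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
seq = rng.integers(0, 2, size=20_000)   # i.i.d. fair coin: maximally incompressible

# Entropy-rate estimates h_n = H(n) - H(n-1) quantify compressibility:
# close to 1 bit/symbol here; a correlated, path-dependent process falls below 1.
H = [block_entropy(seq, n) for n in range(1, 6)]
print(np.round(np.diff([0.0] + H), 3))
```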
17001 Estimation of generalised entropies and similarity measures [abstract]
Abstract: Estimating entropy measures from finite-size samples is a problem of great interest in several fields. We will show that when the underlying distributions are heavy-tailed, even very large sample sizes lead to considerable biases in the estimations. This poses a problem for studies based on estimations of entropy-based distances between symbolic sequences. As a solution, we will propose a family of similarity measures based on the generalized entropy of order alpha, for which the bias in the estimations decays much faster. If time allows, I will discuss some applications to the study of the evolution of the English vocabulary over time.
Francesc Font-Clos, Martin Gerlach, Eduardo Altmann
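(Editorial sketch) The reported bias is easy to reproduce with a plug-in estimator of the Rényi entropy of order alpha on a Zipf-like distribution; vocabulary size, sample sizes and the choice alpha = 2 are assumptions for illustration:

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy of order alpha (natural log); alpha = 1 is the Shannon limit."""
    p = p[p > 0]
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

rng = np.random.default_rng(1)
V = 10_000
p = 1.0 / np.arange(1, V + 1)      # Zipf-like, heavy-tailed "vocabulary"
p /= p.sum()

# The plug-in Shannon estimate (alpha = 1) stays strongly biased even for large
# samples, while the bias for alpha = 2 decays much faster.
for alpha in (1.0, 2.0):
    true_H = renyi_entropy(p, alpha)
    for N in (10**3, 10**4, 10**5):
        counts = np.bincount(rng.choice(V, size=N, p=p), minlength=V)
        est_H = renyi_entropy(counts / N, alpha)
        print(f"alpha={alpha}  N={N:>6}  bias={est_H - true_H:+.3f}")
```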
17002 A unifying perspective on probability scoring rules and calibration curves [abstract]
Abstract: The emergence of machine learning capabilities for deriving patterns from data has highlighted the need to effectively manage and assess probabilistic inferences. A variety of methods are used to evaluate the performance of probabilistic inferences or forecasts. Examples include calibration curves, scoring rules, and, when applied to decisions, such metrics as the receiver operating characteristic. Despite several decades of research in this area, a consensus on appropriate metrics is still elusive. A perspective on the issue is provided here by showing that local scoring rules can be translated back to the probability domain and thereby represent mean forecasts. This clarifies the interpretation of the scoring rule and enables the performance metric to be visualized in conjunction with probability calibration curves. The approach draws upon generalizations of information theory developed from the study of complex nonlinear systems.
Kenric P. Nelson
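(Editorial sketch) One concrete instance of translating a local score back to the probability domain: the mean logarithmic score is exactly the negative log of a geometric-mean forecast probability, which can then be drawn on the same [0, 1] axis as a calibration curve. The forecast values below are invented:

```python
import numpy as np

# Probabilities a forecaster assigned to the outcomes that actually occurred.
p_realized = np.array([0.9, 0.7, 0.95, 0.4, 0.8])

mean_log_score = -np.mean(np.log(p_realized))   # the usual logarithmic scoring rule
geo_mean_prob = np.exp(-mean_log_score)         # the same number, back in [0, 1]

print(f"mean log score = {mean_log_score:.3f} nats")
print(f"geometric-mean forecast probability = {geo_mean_prob:.3f}")
```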
17003 Networks of influence: transmission of information in systems of cooperative decision makers [abstract]
Abstract: In a society interconnected by family ties, friendships, acquaintances or work relations, it is unavoidable that a person's behaviors or decisions depend on the choices made by other people. The surrounding social network influences the opinions we hold, the products we buy and the activities we pursue. In this context we study a decision making model (DMM) incorporating imitation as the sole means of opinion and information sharing across the social network. We investigate how such a system of stochastic two-state agents, communicating through local interactions, is topologically complex and manifests temporal complexity through an inverse power-law probability distribution function of the switching times between the two critical states of consensus. We demonstrate how the dynamics of a single network element can be tied, through a fractional master equation, to the behavior of the entire system, and we discuss possible applications of this approach to the control of complex systems.
Malgorzata Turalska, Bruce J. West
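(Editorial sketch) A toy rendition of such an imitation dynamics, in the spirit of the DMM; the network, the bare rate g0 and the coupling K are illustrative guesses, not the authors' parameters:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
G = nx.erdos_renyi_graph(400, 0.02, seed=2)
nbrs = [list(G.neighbors(i)) for i in G.nodes]
s = rng.choice([-1, 1], size=G.number_of_nodes())

g0, K, dt = 0.1, 1.5, 0.1      # bare flip rate, imitation strength, time step
xi = []
for _ in range(20_000):        # random sequential updates
    i = rng.integers(len(s))
    if nbrs[i]:
        align = s[i] * np.mean([s[j] for j in nbrs[i]])  # local alignment in [-1, 1]
        rate = g0 * np.exp(-K * align)  # agreeing with neighbors lowers the flip rate
        if rng.random() < rate * dt:
            s[i] = -s[i]
    xi.append(s.mean())

# xi(t) is the global variable whose switching-time statistics the talk analyzes.
print("final global order parameter:", xi[-1])
```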

Complex Economic Systems and the Role of Institutions  (CES) Session 1

Schedule Top Page

Time and Date: 10:00 - 12:30 on 20th Sep 2016

Room: R - Raadzaal

Chair: Torsten Heinrich

42000 Propagation Dynamics of Disaster-Induced Production Losses in the Global Economic Network [abstract]
Abstract: Risks of extreme weather events like floods, heat waves, and storms are likely to increase under global warming. Since world markets are highly interlinked and local economies extensively rely on global supply and value-added chains, local extreme weather events can have global repercussions. Accordingly, comprehensive climate risk assessment and cost estimation should take these interactions into account. Here, we present the dynamic agent-based loss propagation model acclimate. It describes the immediate response of the global supply network to local disasters as well as its recovery dynamics in the disaster aftermath. The model accounts for price dynamics and can thus base the decision rationale of the economic agents on clear and simple optimization principles. Furthermore, price effects like demand surge can be analyzed, which become important for large-scale disasters. In contrast to most computable general equilibrium models, acclimate is not solved using inter-temporal optimization, but uses a recursive dynamic algorithm, i.e., the agents in acclimate do not have perfect temporal foresight. Instead, they base their expectations about future market developments on past information, and adapt them successively according to their gain in information. Instead of forcing the model onto an optimal equilibrium path, we resolve the complex out-of-equilibrium dynamics in the disaster aftermath as well as its decay towards a new macroscopic equilibrium state. This permits us to capture the nonlinear higher-order losses and their propagation after a disaster-induced shock. In our analysis, we introduce the economic amplification ratio, measuring the importance of indirect losses with respect to total losses, and study its dependence upon disaster size, network topology, and inventory size. Moreover, we study the dynamic economic resilience, defined as the ratio of direct to total losses. We find a nonlinear dependence of the resilience on disaster size. Further, we analyze the influence of the network topology upon the resilience and discuss the potential of warehousing as an adaptation option.
Christian Otto, Sven Norman Willner, Leonie Wenz, Katja Frieler and Anders Levermann.
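(Editorial sketch) The two headline ratios can be shown in the simplest static setting, a Leontief-style input-output propagation; this is emphatically not the acclimate model, and the coefficient matrix and shock below are invented:

```python
import numpy as np

# A[i, j] is the share of sector j's output needed as input by sector i;
# a disaster knocks out part of sector 0's production and the shock propagates.
A = np.array([[0.0, 0.2, 0.1],
              [0.3, 0.0, 0.2],
              [0.1, 0.3, 0.0]])
direct = np.array([0.2, 0.0, 0.0])              # 20% direct loss in sector 0

total = np.linalg.solve(np.eye(3) - A, direct)  # direct plus all higher-order losses
indirect = total - direct
amplification = indirect.sum() / total.sum()    # cf. the economic amplification ratio
resilience = direct.sum() / total.sum()         # cf. the dynamic economic resilience

print("total losses by sector:", np.round(total, 3))
print(f"amplification ratio = {amplification:.2f}, resilience = {resilience:.2f}")
```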
42001 Multi-level interactions of organisational decision making in the land use system [abstract]
Abstract: Formal organisations as manifestations of institutions have considerable impact on land use change dynamics. They affect productivity by the provision of knowledge and diffusion of technologies, control demand for services, improve infrastructure or manufactured capitals, and subsidise or prescribe specific land uses. Organisations have distinct objectives, actions and decision-making processes as they respond to socio-economic, environmental or climatic changes. Interactions between organisations with various objectives, like governmental authorities and non-governmental organisations on regional, national and supranational levels, affect the land use system in complex ways, and the interplay of these actions is important to consider in order to gain understanding of the system as a whole. Exploring the complex system of interacting organisations in land use demands a modelling approach that addresses temporal and spatial dynamics as well as the heterogeneity of involved actors and their actions. We present a spatial agent-based model of both land managers and organisations as autonomous agents, and also represent the social network as an informal institution that is known to have an impact on the effect of formal institutions. Land managers produce services based on the natural, infrastructural and economic capital of their land and compete with other land manager types depending on demands for services. Organisational agents monitor land use dynamics and service provisions according to their objectives. Once a relevant metric, like the demand-supply gap, exceeds a tolerated deviation, they apply modes of boundedly rational decision making and take action accordingly. For instance, they promote innovations that spread through spatial social networks of land managers, affect productivity of certain land uses and thus influence the supply of specific services. Further represented objectives are connectedness of land and the proportion of extensive land use, with restrictions and subsidies of certain land uses as potential actions. We investigate the effects of noise in monitored data, variations in the strength of measures like the magnitude of subsidies, and planning horizons of organisations on connectivity of land uses, margins between demand and supply levels, and the degree of land use change. The impact of climatic changes (on capitals affecting productivity) as well as socio-economic and demographic drivers (affecting demands for services) is tested by analysing the difference between two IPCC scenarios. We find that strong subsidies may cause undesired outcomes, like strong perturbation through the decline of alternative service providers that become uncompetitive. Similarly, the land use system reacts sensitively to the magnitude of noise in monitored data.
Sascha Holzhauer, Calum Brown and Mark Rounsevell.
42002 Bipartite innovation networks describe R&D collaboration between institutions [abstract]
Abstract: A great number of real-world networks are, in fact, one-mode projections of more complicated structures. Often, the network has an original bipartite architecture comprised of two different types of nodes. The topology of the projected network, and the dynamics that take place on it, can be highly dependent on the features of the original bipartite structure. Here we investigate how such structures might impact the study of interactions among institutions regarding technological innovation. In a set of practices called open innovation, organizations may partner up with others for inter-firm cooperation in R&D. A good example of open innovation is the joint application for new patents by multiple institutions (e.g., corporations, universities, government agencies and so on). It is possible to construct, in such a case, a bipartite network in which one set of nodes are institutions (agents) that are connected to patents they have developed (artifacts). The projection of such a network of innovation collaboration is the co-patenting network, connecting institutions that have joint patent applications. One-mode networks, such as the co-patenting network, are often analyzed without taking into account the underlying bipartite network. However, network properties such as degree distributions, clustering, the distribution of component sizes, and the size of the largest connected component are affected by the structure of the underlying bipartite network. Here we investigate how such dependencies appear and how the statistical properties of the bipartite structure shape the topology of the projected network. A good understanding of the topology is critical for further studies of the dynamics on the network; in this case, the potential flow of technological knowledge. We create an empirical bipartite network using European Patent Office data dating back to 1978 for 40 countries with harmonized applicant names (OECD REGPAT and HAN databases). We project this network onto the agents. Thus, a new one-mode network is built connecting institutions that have patented together. We look at properties of the empirical network that may play a role in knowledge sharing. We compare these to the properties of synthetic bipartite networks and their projections. Synthetic networks are an important tool in understanding the processes that might operate on real-world networks: they afford us the opportunity to study multiple versions of a network with supposedly similar properties. We create synthetic analogues of the empirical network by using a configuration model that preserves the original degree sequence of both sets of nodes and rewires the links randomly. We then compare the structural properties of the empirical network with those of the synthetic bipartite network and its projection. With both empirical and synthetic networks in hand, it is then possible to see the role of the statistical features in shaping the topology of the projected collaboration network.
Demival Vasques Filho and Dion O'Neale
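(Editorial sketch) The configuration-model-plus-projection step is easy to prototype with networkx; the degree sequences below are toy stand-ins for the REGPAT/HAN data:

```python
import networkx as nx

# Institutions (agents) on one side, patents (artifacts) on the other;
# the two degree sums must match for a configuration model to exist.
inst_deg = [4, 3, 3, 2, 2, 1, 1]
pat_deg = [2, 2, 2, 2, 2, 2, 2, 2]   # each patent filed jointly by two institutions
assert sum(inst_deg) == sum(pat_deg)

B = nx.bipartite.configuration_model(inst_deg, pat_deg, seed=42)
B = nx.Graph(B)                      # collapse the multi-edges the model can create
institutions = [n for n, d in B.nodes(data=True) if d["bipartite"] == 0]

# One-mode projection: institutions linked if they share at least one patent.
co_patent = nx.bipartite.projected_graph(B, institutions)
print("degree histogram:", nx.degree_histogram(co_patent))
print("average clustering:", nx.average_clustering(co_patent))
```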
42003 Is Economic Growth Biologically Driven? A Historical Analysis of Recent Developments in Cliometrics of Growth. [abstract]
Abstract: The study of the causal determinants of economic growth and development has long aroused intense interest amongst economists. Until recently, the heart of the debate has mainly focused on the role of capital accumulation, either physical or human, on the impact of institutional or geographical factors, on innovation and technological progress, or on the significance of other growth-promoting factors such as ideas, and even religious and cultural aspects. In this article, we focus on two different "biological hypotheses" that have been recently made by two prominent cliometricians, Robert W. Fogel (post-1982) and Oded Galor (since 2013). Indeed, one of the most interesting aspects of both Fogel's and Galor's analyses of the dynamics of economic growth is that they rest upon biological explanations. More specifically, Fogel developed a physiological theory of economic growth, named "technophysio evolution", which is based on the synergism between physiological capital accumulation and technological progress. On the other hand, Galor has recently engaged in the search for deeper determinants of economic development, i.e., genetic factors. This paper is organized as follows. Sections 2 and 3 respectively present the genesis of these two approaches. Thus, in the case of Fogel, we show that this latest work of his academic career was clearly influenced by his previous work on slavery. Based upon interviews with some of his colleagues, i.e., Deirdre McCloskey, Richard Steckel, Stanley Engerman and Robert Margo, we also argue that Fogel did not have a full conception of his project at the beginning, but progressively discovered the potential of using height data in order to assess human welfare, nutritional status and, in turn, economic growth. In the case of Galor, we argue that he is building a very coherent research project which not only stems directly from his previous work on Unified Growth Theory, but is also expanding it in a substantial manner. In the light of these historical developments, Section 4 then discusses the epistemological stances of these two scholars. We argue that, with respect to these studies, Fogel and Galor are both, first and foremost, empiricists. They both lean heavily on econometrics and statistics in order to address their research topics, and neither of them has a very sophisticated conception of the relationship between economics and biology. Finally, in Section 5, we argue that their faith in the neutrality of statistical methods led them to underestimate the political dimension of their own interpretations of the dynamics of economic growth. Section 6 concludes. In the light of this historical and epistemological analysis of Fogel's and Galor's work, we strongly believe that materialist approaches based upon the resort to biological analogies, tools or concepts are not problems in themselves. The limitations that we highlighted in this paper are primarily due to a rather naïve epistemological stance about the exchanges between economics and biology. As Geoffrey Hodgson points out, "biology is important for the social sciences because in both cases we have highly complex, variegated, interacting systems. The success of scientific explanation in biology, in its highly complex domain, is a lesson for economists" (Hodgson, 2010). The question is thus not whether it is relevant to have exchanges between economics and biology, because we can get inspiration of all sorts from biology.
Indeed, biology is not limited to physiology or genetics; it is rather an enormous field, addressing very complex phenomena at many different levels of organization. The same is true for economics, and since these two disciplines are both dealing with living systems, as opposed to inert matter, there is no reason why further exchanges between these two disciplines would not provide useful insights for the understanding of complex and evolving economic and social phenomena. From a holist viewpoint, economic activities can be considered as a specific part of human biology. Therefore, the most important question is not whether economic growth is biologically driven, but which tools, which models, which methods of biology could enhance our understanding of these highly complex phenomena. From our viewpoint, there is little doubt that our understanding of both the functioning and the dynamics of economic growth and development will benefit greatly from further exchanges between economics and biology.
Pierre Leviaux.
42004 Faraway, so Close: An Agent-Based Model for Climate, Energy and Macroeconomic Policy Analysis [abstract]
Abstract: This paper presents an agent-based model for the study of coupled economic and climate dynamics that endogenously co-evolve across a range of different scenarios. The model offers a flexible laboratory to test various combinations of macroeconomic, industrial and climate policies, both in the context of long-run economic growth and of the medium-run transition towards a greener economy. Furthermore, we propose a stochastic description of the feedbacks stemming from a warming and more volatile climate and study how such negative shocks propagate through the economy. For this reason, the model is particularly well suited for the study of extreme climate events, which are usually neglected by standard integrated assessment models.
Andrea Roventini
42005 The performativity of potential output: Pro-cyclicality and path dependency in coordinating European fiscal policies [abstract]
Abstract: This paper analyzes the performative impact of the European Commission's model for estimating "potential output", which is used as a yardstick for measuring the "structural budget balance" of EU countries and, hence, is crucial for coordinating European fiscal policies. In pre-crisis years, potential output estimates amplified the build-up of private debt, housing bubbles and macroeconomic imbalances. After the financial crisis, they were revised downwards, which increased fiscal consolidation pressures. By focusing on the euro area's economies during 1999-2014, we identify two performative aspects of the potential output model. First, the political implications of the model led to a pro-cyclical feedback loop, reinforcing general economic developments. Second, the model has contributed to national lock-ins on path-dependent debt trajectories, fueling "structural polarization" between core and periphery.
Jakob Kapeller and Philipp Heimberger.
42006 Institutions and Complexity in Economic Theory: Mapping Pluralist Economics [abstract]
Abstract: Economic theory has over the last few decades seen a transition from an almost total domination of the Neoclassical/Samuelsonian Paradigm (NSP) to an explosion of new approaches, leading to an era of unprecedented pluralism. Where there used to be only economics, the word is now prefixed by a plethora of specifications: experimental, evolutionary, behavioral, agent-based, complexity, computational, institutional, economic, feminist, adaptive, and so on. The central question is how this will play out: (i) will these approaches be integrated into mainstream economics, as alternative approaches have been in the past; (ii) will the mainstream be replaced by some combination of these new perspectives; (iii) or have we entered a permanent new era of economic pluralism? This question is fundamentally about the underlying epistemologies and ontologies of the approaches: is, for example, complexity theory merely a methodological toolkit, or is it a more profound ontological contribution? Can it be integrated into an NSP approach, or is it fundamentally incompatible? This paper proposes a way of mapping the plethora of disciplines in pluralist economics by characterizing the current state as a friction between three fundamentally incompatible epistemological perspectives: (i) the study of equilibria, (ii) the study of emergence and complexity, and (iii) the study of institutions and structure. Each of these is fundamentally unable to deal with the others: equilibrium theory is unable to handle both emergence and the role of social structures; institutional and structural perspectives are unable to take into account the role of emergence and complexity; complexity theory has been limited in its capacity to account for downward effects of structure; and so on. Each perspective also points to valid observations about economic reality: institutions and social structures, equilibria, complexity and emergence all play important roles in economics. But viewing economics through the lens of a single aspect is not enough. What is needed appears to be an integrated approach that balances and integrates all these aspects of reality - an approach that does not depart from the limitations of other approaches, but from an ontological description of economic reality. This paper argues that the combination of structure and emergence is more than the sum of its parts; it is "worse than complex" or "wicked" (Andersson et al. 2014). This suggests an approach to economics that does not confuse methodology with ontology, but that is, instead, question-driven and methodologically pluralist, reflexive and open, acknowledging the limitations of its methods while producing nuanced and contextualized accounts and conclusions.
Petter Törnberg
42007 Characteristics of Firm Competition: Biological Analogy versus Cultural Evolution [abstract]
Abstract: There are many avenues to analyze the role of competition: for instance, by means of its economic outcomes, its market structures, and its characteristics. The relationship between competition and its concrete economic outcomes, such as unemployment, growth, and income, has already been extensively analyzed. Apart from the economic outcomes, different market structures have also been well described with respect to the number of competitors, namely monopoly, duopoly, oligopoly, and polypoly. In comparison, however, only a few contributions scrutinize the characteristics of competition. This paper addresses that research gap by proposing an approach to the economic evolution of the characteristics of competition between firms. Competition among firms is a dynamic phenomenon, so it is appropriate to take an evolutionary perspective. The contribution arising from this approach is twofold: (i) various characteristics of competition, explained by various forms of selection, are presented. As a result, using analogy constructions to the biological sphere (Darwinian thinking), economic competition may be characterized as genetic group selection. (ii) In order to expose the differences from the biological sphere, the naturalistic approach is considered, in which culture additionally plays a major role. The quintessence of this paper is the deduction that competition among firms is a form of cultural group selection. The aim is to examine the characteristics of competition between firm organizations from an evolutionary perspective. For this purpose, a coherent approach is devised which not only draws on recent findings in that field as set out by Johnson et al. (2013), but also extends these findings by considering further evolutionary concepts. Cordes (2015) illustrates in his state-of-the-art article the most important evolutionary concepts, currently used and still emerging, at the intersection of the behavioral sciences, the social sciences, including economics, and biology (van den Bergh and Gowdy, 2009; Stoelhorst and Richerson, 2013). Inspired by evolutionary theory, evolutionary concepts can be separated into two types (Cordes, 2015). First, Darwinian thinking is employed more commonly (e.g., Nelson and Winter, 1982; Johnson et al., 2013) and is based on analogy constructions to the biological sphere induced by Darwinian ideas, embracing "variation-selection-inheritance algorithms" (e.g., Campbell, 1965; Stoelhorst, 2008). Darwinian ideas capture natural selection acting on gene variation: genes with higher probabilities of being inherited spread, others tend to disappear. The probability of inheritance depends on relative reproductive success. Relative reproductive success depends on longevity, fecundity, and copying-fidelity (Dawkins, 2006). Second, besides selection processes that work on genes, there are additionally selection processes that work on culture in social organisms', e.g., humans', evolutionary history. Culture can be understood as knowledge, ideas, or beliefs (Richerson and Boyd, 2008). Some knowledge may survive, and other knowledge may die in the course of time. Those selection processes of culture are specifically taken into account by the naturalistic approach. Moreover, it relates culture to natural selection: since culture appears to have emerged later than biological mechanisms acting on genes, cultural evolution rests upon foundations laid before by natural selection (Cordes, 2015).
Together, competition among firms is investigated with two types of evolutionary concepts: Darwinian thinking and the naturalistic approach.
References
Campbell, Donald T. (1965): "Variation and Selective Retention in Socio-Cultural Evolution", in: Barringer, H. R., Blanksten, G. I. and Mack, R. W. (eds.), Social Change in Developing Areas: A Reinterpretation of Evolutionary Theory, Cambridge, Mass.: Shenkman, pp. 29-49.
Cordes, Christian (2015): "Evolutionary Economics", in: Wright, James D. (ed.), International Encyclopedia of the Social and Behavioral Sciences, Oxford: Elsevier, pp. 430-436.
Dawkins, Richard (2006): The Selfish Gene: With a new Introduction by the Author, Oxford, England: Oxford University Press.
Johnson, Dominic D., Price, Michael E. and van Vugt, Mark (2013): "Darwin's Invisible Hand: Market Competition, Evolution and the Firm", Journal of Economic Behavior and Organization, Vol. 90, Supplement, pp. S128-S140.
Nelson, Richard R. and Winter, Sidney G. (1982): An Evolutionary Theory of Economic Change, Cambridge, Mass.: Belknap Press of Harvard University Press.
Richerson, Peter J. and Boyd, Robert (2008): Not by Genes Alone: How Culture Transformed Human Evolution, Chicago: University of Chicago Press.
Stoelhorst, Jan-Wilhelm (2008): "The Explanatory Logic and Ontological Commitments of Generalized Darwinism", Journal of Economic Methodology, Vol. 15, No. 4, pp. 343-363.
Stoelhorst, Jan-Wilhelm and Richerson, Peter J. (2013): "A Naturalistic Theory of Economic Organization", Journal of Economic Behavior and Organization, Vol. 90, Supplement, pp. S45-S56.
van den Bergh, Jeroen C. and Gowdy, John M. (2009): "A Group Selection Perspective on Economic Behavior, Institutions and Organizations", Journal of Economic Behavior and Organization, Vol. 72, No. 1, pp. 1-20.
Tong-Yaa Su.

The Complex Brain  (TCB) Session 1

Schedule Top Page

Time and Date: 10:00 - 12:30 on 20th Sep 2016

Room: L - Grote Zaal

Chair: Tommaso Gili

48000 Hierarchical organization of functional connectivity in the mouse brain: a complex network approach [abstract]
Abstract: This paper represents a contribution to the study of the brain functional connectivity from the perspective of complex networks theory. More specifically, we apply graph theoretical analyses to provide evidence of the modular structure of the mouse brain and to shed light on its hierarchical organization. We propose a novel percolation analysis and we apply our approach to the analysis of a resting-state functional MRI data set from 41 mice. This approach reveals a robust hierarchical structure of modules persistent across different subjects. Importantly, we test this approach against a statistical benchmark (or null model) which constrains only the distributions of empirical correlations. Our results unambiguously show that the hierarchical character of the mouse brain modular structure is not trivially encoded into this lower-order constraint. Finally, we investigate the modular structure of the mouse brain by computing the Minimal Spanning Forest, a technique that identifies subnetworks characterized by the strongest internal correlations. This approach represents a faster alternative to other community detection methods and provides a means to rank modules on the basis of the strength of their internal edges.
Andrea Gabrielli
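(Editorial sketch) The abstract does not spell out the Minimal Spanning Forest construction; one common reading, taking the maximum spanning tree of the correlation graph and cutting its weakest links to obtain a forest of strongly correlated modules, is sketched here on synthetic data:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 20))           # 200 time points, 20 "brain regions"
C = np.corrcoef(X.T)

# Fully connected graph weighted by pairwise correlations.
G = nx.Graph()
n = C.shape[0]
G.add_weighted_edges_from((i, j, C[i, j]) for i in range(n) for j in range(i + 1, n))

# The maximum spanning tree keeps the strongest correlations; removing its
# weakest edges leaves a spanning forest whose components act as modules,
# rankable by the strength of their internal edges.
T = nx.maximum_spanning_tree(G)
edges = sorted(T.edges(data="weight"), key=lambda e: e[2])
F = T.copy()
F.remove_edges_from([(u, v) for u, v, w in edges[:4]])   # drop the 4 weakest links
modules = list(nx.connected_components(F))
print(len(modules), "modules, e.g.:", [sorted(m) for m in modules][:3])
```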
48001 Mapping multiplex hubs in human functional brain network [abstract]
Abstract: Typical brain networks consist of many peripheral regions and a few highly central ones, i.e., hubs, playing key functional roles in cerebral inter-regional interactions. Studies have shown that networks obtained from the analysis of specific frequency components of brain activity present peculiar architectures with unique profiles of region centrality. However, the identification of hubs in networks built from different frequency bands simultaneously is still a challenging problem that remains largely unexplored. Here we identify each frequency component with one layer of a multiplex network and face this challenge by exploiting recent advances in the analysis of multiplex topologies. First, we show that each frequency band carries unique topological information, fundamental for accurately modeling brain functional networks. We then demonstrate that hubs in the multiplex network, in general different from those obtained after discarding or aggregating the measured signals as is usually done, provide a more accurate map of the brain's most important functional regions, allowing one to distinguish between healthy and schizophrenic populations better than conventional network approaches.
Alex Arenas
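(Editorial sketch) Two standard multiplex descriptors, the overlapping degree and the layer participation coefficient, illustrate why a multiplex hub differs from an aggregate hub; this is generic methodology, not necessarily the measure used in the talk:

```python
import numpy as np

rng = np.random.default_rng(4)
M, N = 4, 50                                  # frequency-band layers, brain regions
# Random symmetric layer adjacency matrices standing in for band-filtered data.
A = (rng.random((M, N, N)) < 0.1).astype(int)
A = np.triu(A, 1)
A = A + A.transpose(0, 2, 1)

k = A.sum(axis=2)                             # degree of each node in each layer
o = k.sum(axis=0)                             # overlapping (multiplex) degree
with np.errstate(invalid="ignore"):
    P = (M / (M - 1)) * (1 - ((k / o) ** 2).sum(axis=0))  # participation across layers

# A multiplex hub is both high-degree overall and evenly active across layers,
# a distinction that is lost after aggregating the layers into one network.
hubs = np.argsort(-(o * P))[:5]
print("candidate multiplex hubs:", hubs)
```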
48002 Community detection in brain connectivity networks beyond the resolution limit [abstract]
Abstract: The modular organization of functional and structural brain connectivity networks has been investigated using graph theoretical approaches. Recently, it has been demonstrated that community detection methods based on the maximization of global fitness functions suffer from a resolution limit, as they fail to detect modules that are smaller than an intrinsic scale. This resolution limit prevents detection of important details of the brain's modular organization. Here, we show that Surprise, a recently proposed binary fitness function based on probability theory, behaves like a resolution-limit-free method. We propose an extension of Surprise to weighted networks, and heuristics for its optimization. We benchmark Surprise against widely applied algorithms, and quantitatively assess its performance in synthetic correlation networks with different levels of noise, and in human resting-state functional connectivity data. In synthetic networks, Surprise shows better sensitivity and specificity in the detection of ground-truth structures, particularly in the presence of noise and variability such as those observed in experimental functional MRI data. Surprise optimization in human resting-state networks reveals the presence of a rich structure of modules with heterogeneous size distribution undetectable by current methods. Our results indicate that the resolution limit may have substantially affected previous analyses of brain connectivity networks, and call for a re-examination of some of the current models of brain modular organization.
Angelo Bifone
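(Editorial sketch) Binary Surprise has a closed form as a hypergeometric tail probability; a minimal implementation follows (the heuristics for optimizing it over partitions are not shown):

```python
import networkx as nx
from scipy.stats import hypergeom

def surprise(G, partition):
    """Binary Surprise: minus the log-probability of seeing at least the observed
    number of intra-community edges when the graph's m edges are thrown
    uniformly among all node pairs (hypergeometric null)."""
    n, m = G.number_of_nodes(), G.number_of_edges()
    pairs = n * (n - 1) // 2
    intra_pairs = sum(len(c) * (len(c) - 1) // 2 for c in partition)
    comm = {v: i for i, c in enumerate(partition) for v in c}
    intra_edges = sum(1 for u, v in G.edges if comm[u] == comm[v])
    return -hypergeom.logsf(intra_edges - 1, pairs, intra_pairs, m)

G = nx.planted_partition_graph(4, 16, 0.15, 0.05, seed=5)
blocks = [set(range(i * 16, (i + 1) * 16)) for i in range(4)]
print("Surprise of the planted blocks:", surprise(G, blocks))
print("Surprise of a single block    :", surprise(G, [set(G.nodes)]))
```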
48003 Cortical structural connectivity alteration in Obsessive Compulsive Disorder patients
Fabrizio Piras
48004 Theory of self-organization in the functional brain networks [abstract]
Abstract: We derive an equation of motion for the order parameter in small-world networks with temporal memory and long-range interaction, and obtain a fractional differential equation for the order parameter. We analyze the spatial and temporal distribution of the order parameter in detail. Based on fMRI data, we construct brain functional networks and try to describe the dynamical properties of these networks using our result.
Bahruz Gadijev
48005 Dynamic and multi-layer MEG networks
Prejaas Tewarie
48006 Consistency of Regions of Interest as nodes of functional brain networks measured by fMRI [abstract]
Abstract: In connectomics, the human brain is modelled as a complex network and the properties of this network are studied in order to understand the structure and function of the brain. However, defining the functional brain network is not straightforward. In particular, there is no consensus on what the nodes of the network should depict. One typical approach for defining the nodes is Regions of Interest (ROIs), which are collections of fMRI measurement voxels, typically defined by anatomical landmarks. The ROI approach is based on the assumption that voxels in a ROI are functionally similar and have reasonably similar dynamics. However, we find that this assumption does not hold in general: ROIs are often functionally inhomogeneous. This is visible in the wide distribution of the consistency of ROIs, defined as the mean Pearson correlation between the time series of voxels within a ROI. Further, the time series of low-consistency ROIs can at times be highly correlated. To our understanding, this indicates that the ROI approach affects the structure of resting-state functional brain networks by inducing spurious links.
Onerva Korhonen
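(Editorial sketch) The consistency measure defined above is straightforward to compute; here it is run on synthetic voxel time series, one coherent ROI and one inhomogeneous one:

```python
import numpy as np

def roi_consistency(voxel_ts):
    """Mean pairwise Pearson correlation between the time series of the voxels
    in one ROI (rows = voxels, columns = time points)."""
    C = np.corrcoef(voxel_ts)
    iu = np.triu_indices_from(C, k=1)
    return C[iu].mean()

rng = np.random.default_rng(6)
t = rng.standard_normal(200)                     # shared signal of a coherent ROI
coherent = 0.8 * t + 0.6 * rng.standard_normal((30, 200))
incoherent = rng.standard_normal((30, 200))      # functionally inhomogeneous ROI

print(f"coherent ROI:   {roi_consistency(coherent):.2f}")
print(f"incoherent ROI: {roi_consistency(incoherent):.2f}")
```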

The complex dynamics of networks  (TCDO) Session 1

Schedule Top Page

Time and Date: 10:00 - 12:30 on 20th Sep 2016

Room: M - Effectenbeurszaal

Chair: Rutger A. Van Santen

13000 Networks of the biological clock [abstract]
Abstract: Proper theoretical network models of the brain are in need of well-characterized brain areas that are preferably described at a multi-scale level. The suprachiasmatic nucleus (SCN) is the master clock in the mammalian brain and consists of 20,000 individually oscillating cells. Each cell contains a molecular feedback loop that produces an endogenous rhythm with its own intrinsic frequency. In order to obtain a robust and coherent 24-h rhythm that can drive other circadian rhythms in our body, the SCN cells synchronize to each other as a result of neural coupling. In addition to the internal synchronization, the SCN synchronizes to external cycles, such as to the 24-h light-dark cycle and to seasonal cycles. The network structure of the SCN results in a system that shows a balance between robustness on the one hand and flexibility on the other hand. In our lab we perform electrophysiological recordings from single neurons and from populations of about 100 neurons. Furthermore transgenic luciferase expressing mice are used to simultaneously measure the rhythms in gene expression at single cell level. Finally we record electrical activity from populations of neurons with implanted electrodes in freely moving animals. In this preparation, the recorded neurons of the central clock are interacting with other brain areas. We have observed that temporal behavioral patterns and the central clock show scale invariant behavior. With disease and aging, scale invariance is lost, and also in a brain slice preparation when the clock is not communicating with other brain areas, scale invariance is absent. We conclude that scale invariance emerges at the integrated network level. Understanding how neurons and brain regions communicate, coordinate, synchronize, and collectively respond to signals and perturbations is one of the most intriguing, yet unsolved problems in neuroscience. As the output of the SCN is unambiguously measurable in terms of phase and period, the measurements from the different levels of organization, i.e., the molecular level, the cellular level, the organ level and the behavioral level, can be compared. Current studies are aimed at bridging scales, from the micro to the macro level and vice versa, thereby understanding how properties emerge at each of these levels.
Johanna H. Meijer
13001 Emergence of synchronous oscillations in networks of systems [abstract]
Abstract: We consider networks of identical single-input-single-output systems that interact via linear, time-delay coupling functions. The systems themselves are inert, that is, their solutions converge to a globally stable equilibrium. However, in the presence of coupling, the network of systems exhibits ongoing oscillatory activity. We study the emergence of oscillations by deriving conditions for (1) the solutions of the time-delay coupled systems to be bounded, (2) the network equilibrium to be unique, and (3) the network equilibrium to be unstable. The network of time-delay coupled inert systems is oscillatory provided that these three conditions are satisfied. In addition, using recent results on the existence of partial synchronization manifolds, we identify the patterns of synchronized oscillation that may emerge.
Erik Steur and Sasha Pogromsky
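(Editorial sketch) A toy instance of the mechanism: two inert units whose delayed loop coupling destabilizes the unique equilibrium while saturation keeps solutions bounded, so a periodic orbit emerges. The units, gains and delay are invented; the talk's conditions are for general linear couplings:

```python
import numpy as np

# Two inert units x' = -x + tanh(u), coupled in a loop with one positive and
# one negative delayed link, so the net loop gain is negative.
dt, tau, a, T = 0.01, 1.0, 2.0, 60.0
d = int(tau / dt)                       # delay in integration steps
n = int(T / dt)
x1 = np.zeros(n + d)
x2 = np.zeros(n + d)
x1[:d] = 0.1                            # small non-zero history kicks things off

for k in range(d, n + d - 1):
    x1[k + 1] = x1[k] + dt * (-x1[k] + np.tanh(-a * x2[k - d]))
    x2[k + 1] = x2[k] + dt * (-x2[k] + np.tanh(a * x1[k - d]))

# Each unit alone decays to 0; the delayed loop makes the origin unstable and
# the tanh saturation bounds the solutions, hence sustained oscillation.
print("x1 range over the last 10 s:", x1[-1000:].min(), x1[-1000:].max())
```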
13002 Charting the Brain: Resting-State Networks and Functional Connectivity [abstract]
Abstract: Functional connectivity measured during the resting-state in BOLD fMRI has become a popular approach for assessing network interactions in the brain. The study of patterns of connectivity between multiple distributed regions and their associated functional dynamics has been recognised as a powerful tool in cognitive and clinical imaging neurosciences. This talk will provide a basic overview of tools and techniques with a view to estimating directed brain network interactions.
Christian Beckmann
13003 Scale-free percolation [abstract]
Abstract: We propose and study a random graph model on the hypercubic lattice that interpolates between models of scale-free random graphs and long-range percolation. In our model, each vertex x has a weight W_x, where the weights of different vertices are independent and identically distributed random variables. Given the weights, the edge between x and y is, independently of all other edges, occupied with probability 1 − exp(−λ W_x W_y / |x−y|^α), where (a) λ is the percolation parameter, (b) |x−y| is the Euclidean distance between x and y, and (c) α is a long-range parameter. This model gives rise to geometric random graph models that have a high amount of inhomogeneity due to the high variability present in the vertex weights. The model interpolates nicely between long-range percolation models, obtained when the weights are all equal, and scale-free random graph models, obtained when considering the model on the complete graph and letting |x−y| ≡ 1 for every x ≠ y, so that there is no geometry. The most interesting behavior can be observed when the random weights have a power-law distribution, i.e., when P(W_x > w) ≈ w^(1−τ) for some τ > 1. In this case, we see that the degrees are infinite a.s. when γ = α(τ−1)/d < 1, while the degrees have a power-law distribution with exponent γ when γ > 1. Our main results describe phase transitions in the positivity of the critical value and in the small-world nature of the percolation cluster as γ varies. Let λ_c denote the critical value of the model. Then we show that λ_c = 0 when γ < 2, while λ_c > 0 when γ > 2. Further, conditionally on 0 and x being connected, the minimal number of edges needed to hop between 0 and x is of order log log |x| when γ < 2 and at least of order log |x| when γ > 2. These results are similar to the ones for inhomogeneous random graphs, where a wealth of further results is known. We also discuss many open problems, inspired both by recent work on long-range percolation (i.e., W_x = 1 for every x) and on inhomogeneous random graphs (i.e., the model on the complete graph of size n, where |x−y| = n for every x ≠ y). Acknowledgements. This work is supported by the Netherlands Organisation for Scientific Research (NWO) through VICI grant 639.033.806 and the Gravitation Networks grant 024.002.003. Joint work with Mia Deijfen and Gerard Hooghiemstra.
Remco van der Hofstad
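(Editorial sketch) Sampling the model on a finite box amounts to implementing the edge probability 1 − exp(−λ W_x W_y / |x−y|^α) directly; grid size and parameter values below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(7)
L, lam, alpha, tau = 40, 0.5, 3.0, 2.5       # box size and model parameters (d = 2)

# Pareto weights with P(W > w) = w^(1 - tau) for w >= 1.
W = rng.pareto(tau - 1, size=L * L) + 1.0

xs = np.array([(i, j) for i in range(L) for j in range(L)], dtype=float)
edges = []
for a in range(len(xs)):
    diff = xs[a + 1:] - xs[a]
    dist = np.sqrt((diff ** 2).sum(axis=1))
    p = 1.0 - np.exp(-lam * W[a] * W[a + 1:] / dist ** alpha)
    hits = np.nonzero(rng.random(len(p)) < p)[0]
    edges += [(a, a + 1 + b) for b in hits]

deg = np.zeros(len(xs), dtype=int)
for u, v in edges:
    deg[u] += 1
    deg[v] += 1
# Here gamma = alpha * (tau - 1) / d = 2.25 > 1: finite but heavy-tailed degrees.
print("mean degree:", deg.mean(), " max degree:", deg.max())
```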
13004 Functional Connectivity estimates from human brain activity recordings: Pitfalls & State of the art [abstract]
Abstract: "The brain has no knowledge until connections are made between neurons. All that we are comes from the way our neurons are connected" (Berners-Lee). This motto underlies the field of "connectomics", which over the past decade has gained momentum in neuroscience and neuro-imaging. For the human brain, however, no methods exist to estimate connectivity at the neuronal level from in-vivo recordings. Instead, the structural connectivity between brain regions can be estimated from diffusion tensor MR images, or the functional or effective connectivity between brain regions can be estimated from functional MR recordings, or from recordings of the electrical activity in the brain, using EEG or MEG. The functional connectivity at the neuronal level changes on different time scales (from maturation and growth to learning and behavior), and so presumably does the functional connectivity of brain regions. To estimate functional connectivity from EEG or MEG it is necessary to estimate, from the sensor-level data at the scalp surface, the "source"-level signals, i.e., mean-field estimates of the activity representing a cortical area. In a second step, the statistical dependencies between these source-level signals can be estimated, and it is commonly assumed that such statistical dependencies reflect the connectivity between brain regions ("the functional connectivity"). For both of these steps leading to a network description of the brain, there exist many different methods. The perfect method would be multivariate, time-resolved, and unique, would reveal non-linear coupling, would not suffer from mixing or volume conduction, and would not depend on the choice of reference in the case of EEG. Clearly such a method does not exist. Moreover, the methods used in the two steps of the analysis (source estimation and connectivity estimation) cannot be chosen independently. In a short overview, the state-of-the-art techniques in EEG and MEG studies of the neural networks in the brain will be summarized and the shortcomings of each of the most commonly used methods will be discussed.
Bob van Dijk, Arjan Hillebrand and Kees Stam
13005 Hierarchical dynamical networks: the biological clock [abstract]
Abstract: Recent advances in functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), electroencephalography (EEG) and magnetoencephalography (MEG) in combination with graph theoretical network analysis have allowed for investigating structural and functional connectivity in the brain. Brain networks show small-world properties, a modular structure and highly connected hub-regions. Understanding of the network is important for understanding the organizational properties of the brain and will become increasingly important for clinical research. The fMRI, DTI, EEG and MEG techniques are able to study large parts of the brain, but they can only map a network of brain regions and are unable to investigate the network of single cells within the inspected brain regions. In order to obtain a complete picture of healthy brain function, the connection between cellular mechanisms and brain regions must be made. Our group investigates the cellular brain network of one such brain region, the circadian clock. The circadian clock, which is located in the suprachiasmatic nuclei (SCN) and drives the daily 24-hour rhythms in our body, is functionally dependent on emergent network properties. While the ability of individual SCN neurons to produce 24-hour rhythms is a cell-autonomous property, the ability of the SCN to respond to light, to adjust to the seasons and to synchronize after a jet-lag is critically dependent upon the state of the neuronal network. The synchronized network output regulates all daily rhythms in our body and is heavily dependent on the interactions between the neurons and the network topology of the clock. We investigated methods to assess network properties on our data, such as small-worldness and scale-freeness for this cellular network. Complicating factors include that this functional network is not static during its oscillation cycle: for example, the network appears to be different between day and night time. This dynamical nature of the network is normally not taken into account in network studies. Furthermore, the SCN network operates at different timescales: the network communication between the neurons in the SCN takes place in the order of milliseconds, while the synchronized network output takes place on a 24-h timescale. How do these timescale differences influence the network properties of the SCN? These dynamical issues open up new aspects for studying brain networks. Finally, to connect brain regions and cellular networks, we do not only look at the SCN network in isolation, but also regard it as being part of a bigger network of brain regions. We currently investigate, using network techniques on our experimental data, how the interactions between the SCN brain region and the rest of the brain is regulated, so that we gain better understanding of its place in the higher order network of the brain. This will in the end result in a hierarchical multi-level network model that operates at different timescales and takes into account the dynamical nature of the biological clock.
Jos Rohling
13006 Dimension reduction for networks of coupled dynamical systems [abstract]
Abstract: Networks of coupled nonlinear dynamical systems often display unexpected phenomena. They may for example synchronise. This form of collective behaviour occurs when the agents of the network behave in unison. An example is the simultaneous firing of neurons. It has also been observed that synchrony often emerges or breaks through quite unusual bifurcation scenarios. In this talk I will show how this can be understood with the help of a new geometric framework for network dynamics that we developed. This framework allows for the problem of synchrony breaking to be reduced to its essential dimensions, which in turn enables us to compute and classify a large class of synchrony breaking bifurcations in networks. This is joint work with Jan Sanders and Eddie Nijholt.
Bob Rink
13007 Comparing low-level network structures using simulated and real fMRI measurements [abstract]
Abstract: It is widely accepted in the neuroscience community that the network organization of the brain is scale-free and has small-world structure. This is considered to be true on multiple levels, ranging from the level of brain regions down to the level of cortical minicolumns. Although high-level networks of the brain can be examined using fMRI or other means of measurement, all these techniques are limited in their resolution and they are not expected to provide low-level connectivity information about the brain. One possible way to overcome the resolution barrier can be realized as follows. A family of low-level (high-resolution) networks is used where the nodes of the network have some simple dynamics. The behavior of these nodes can be simulated considering their interconnection network. Simultaneously with this, a virtual measurement can be performed to capture the overall global behavior of the network. This resembles a hypothetical brain with the simulated structure and dynamics placed in an fMRI machine. These virtual measurements can be compared to each other and to real measurements. This comparison can serve as a proxy to assess certain network properties of the low-level brain network. All this is done under the premise that the global behavior of large networks of interconnected units with simple dynamics is largely influenced by the underlying network structure. Our goal is to find out to what extent the scale-free and small-world nature is present in the brain network. In order to achieve this goal we have to specify a family of graphs that is plausible for a low-level brain network topology, and on top of this the node dynamics also needs to be chosen. Since the cortex can be considered as a folded 2D structure with long-range connections, the Scale-Free Percolation random graph model on Z^2 with power-law node weights is a suitable choice for the graph family in the previously outlined agenda. We can obtain a coupled family of graphs by appropriately changing the parameters alpha and lambda of the graph model simultaneously. This family of graphs contains the Z^2 grid as one limiting case, while an almost tree-like scale-free graph is at the other end of the spectrum. The chosen coupling allows us to resemble a certain evolution from the plain grid to a more and more efficient network in terms of information transfer. The chosen node dynamics is given by the paradigmatic Ising model with its single-spin-flip Glauber dynamics. We simulate excitation of a brain region coming from another region by considering different Scale-Free Percolation models as the structure of the excited brain region, where the external stimulus manifests itself as an inhomogeneous external field acting on the Ising model. By comparing globally measured quantities of the excited brain region we can see to what extent deviation from the Z^2 base structure is beneficial in terms of the simulated brain behavior. This should allow us to gain deeper insight into how the low-level network structure of the brain is organized.
Sándor Kolumbán
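(Editorial sketch) The ingredients are all standard, so a miniature version fits in a few lines: Glauber (heat-bath) dynamics for the Ising model on a grid with random shortcuts (a stand-in for the scale-free percolation family), with an inhomogeneous field as the stimulus and the mean spin as the global readout:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(8)
G = nx.grid_2d_graph(20, 20)                 # the Z^2 base structure
nodes = list(G.nodes)
for _ in range(80):                          # long-range shortcuts
    u, v = rng.choice(len(nodes), 2, replace=False)
    G.add_edge(nodes[u], nodes[v])

idx = {v: i for i, v in enumerate(nodes)}
nbrs = [[idx[u] for u in G.neighbors(v)] for v in nodes]
s = rng.choice([-1, 1], size=len(nodes))
beta = 0.5
h = 0.05 * rng.standard_normal(len(nodes))   # inhomogeneous "stimulus" field

signal = []
for sweep in range(200):
    for _ in range(len(s)):                  # one Glauber sweep
        i = rng.integers(len(s))
        field = sum(s[j] for j in nbrs[i]) + h[i]
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * field))
        s[i] = 1 if rng.random() < p_up else -1
    signal.append(s.mean())                  # global readout, cf. a virtual fMRI signal

print("mean global magnetization:", np.mean(signal[-50:]))
```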

Dynamics on and of Complex Networks IX / Mining and learning for complex networks  (DOAO) Session 1

Schedule Top Page

Time and Date: 10:00 - 12:30 on 20th Sep 2016

Room: N - Graanbeurszaal

Chair: Jean-Charles Delvenne

43000 Modeling sequences and temporal networks with dynamic community structures [abstract]
Abstract: Community-detection methods that describe large-scale patterns in the dynamics on and of networks suffer from effects of arbitrary time scales that need to be imposed a priori. We develop a variable-order hidden Markov chain model that generalizes the stochastic block model for discrete time-series as well as temporal networks. With our model, the relevant time scales are inferred from data, instead of being determined beforehand. The temporal model does not require the aggregation of events into discrete intervals, and instead takes full advantage of the time-ordering of the tokens or edges. When the edge ordering is random, we recover the traditional static block model as a special case. We formulate an efficient nonparametric Bayesian framework that can select the most appropriate Markov order and number of communities, based solely on statistical evidence and without overfitting.
Tiago Peixoto
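The order-selection idea can be illustrated with the generic Dirichlet-multinomial evidence of a fixed-order Markov chain; this is only the textbook computation behind such Bayesian model comparison, not the authors' variable-order nonparametric framework:

    from collections import Counter
    from math import lgamma

    def log_evidence(seq, order, alpha=1.0):
        # Log marginal likelihood of a fixed-order Markov chain over seq, with a
        # symmetric Dirichlet(alpha) prior on each context's transition row.
        alphabet = sorted(set(seq))
        A = len(alphabet)
        counts, totals = Counter(), Counter()
        for t in range(order, len(seq)):
            ctx = tuple(seq[t - order:t])
            counts[(ctx, seq[t])] += 1
            totals[ctx] += 1
        logp = 0.0
        for ctx, n in totals.items():
            logp += lgamma(A * alpha) - lgamma(A * alpha + n)
            for x in alphabet:
                c = counts[(ctx, x)]
                if c:
                    logp += lgamma(alpha + c) - lgamma(alpha)
        return logp

    seq = list("abababababbbababababab")   # toy token sequence
    for k in (0, 1, 2, 3):
        print(k, round(log_evidence(seq, k), 2))   # higher = better-supported order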
43001 Correlation networks from flows [abstract]
Abstract: Complex network theory provides an elegant and powerful framework to statistically investigate different types of systems such as society, the brain, or the structure of local and long-range dynamical interrelationships in the climate system. Network links in correlation networks, so-called climate networks, typically imply information, mass or energy exchange. However, the specific connection between oceanic or atmospheric flows and the climate network's structure is still unclear. We propose a theoretical approach of flow-networks for verifying relations between the correlation matrix and the flow structure, generalizing previous studies and overcoming the restriction to stationary flows [1]. We studied a complex interrelation between the velocity field and the correlation network measures. Our methods are developed for correlations of a scalar quantity (temperature, for example) which satisfies an advection-diffusion dynamics in the presence of forcing and dissipation. Our approach reveals the insensitivity of correlation networks to steady sources and sinks and the profound impact of the signal decay rate on the network topology. We illustrate our results with calculations of degree and clustering for a meandering flow resembling a geophysical ocean jet. Moreover, we discuss follow-up approaches and applications of the flow-networks method [2]. [1] "Correlation networks from flows. The case of forced and time-dependent advection-diffusion dynamics", L. Tupikina, N. Molkenthin, C. Lopez, E. Hernandez-Garcia, N. Marwan, J. Kurths, PLoS One, 2016. [2] "A geometric perspective on spatially embedded networks. Quantification of edge anisotropy and application to flow networks", H. Kutza, N. Molkenthin, L. Tupikina, J. Donges, N. Marwan, U. Feudel, J. Kurths, R. Donner, under rev. in Chaos.
Lyubov Tupikina
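For illustration, a minimal Python sketch of the generic correlation-network construction the abstract builds on (threshold a correlation matrix, then compute degree and clustering); the synthetic "flow" signal is an invented placeholder, not the advection-diffusion dynamics of the paper:

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(1)
    n_points, n_times = 100, 500
    # Synthetic stand-in for an advected scalar: a shared large-scale signal
    # felt with varying strength at each grid point, plus local noise.
    common = rng.normal(size=n_times)
    weights = np.linspace(0.0, 1.0, n_points)[:, None]
    series = weights * common + rng.normal(size=(n_points, n_times))

    C = np.corrcoef(series)              # correlation matrix of the scalar field
    np.fill_diagonal(C, 0.0)
    G = nx.from_numpy_array((C > 0.3).astype(int))   # threshold into a network
    print("mean degree:    ", np.mean([d for _, d in G.degree()]))
    print("mean clustering:", nx.average_clustering(G))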
43002 Burstiness and spreading on networks: models and predictions [abstract]
Abstract: When modelling dynamical systems on networks, it is often assumed that the process is Markovian, that is, future states depend only upon the present state and not on the sequence of events that preceded it. Examples include the diffusion of ideas or diseases on social networks, or the synchronisation of interacting dynamical units. In each case, the dynamics is governed by coupled differential equations, where the coupling is defined by the adjacency matrix of the underlying network. The main purpose of this talk is to challenge this Markovian picture. We will argue that non-Markovian models can provide a more realistic picture in the case of temporal networks where edges change in time, or in situations where pathways can be measured empirically. We will focus on the importance of non-Poisson temporal statistics, and show analytically the impact of burstiness on diffusive dynamics, before turning to applications and incorporating memory kernels in predictive models of retweet dynamics.
Renaud Lambiotte
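A small illustration of how the waiting-time distribution shapes spreading: for SI dynamics with i.i.d. per-edge transmission delays, infection times are shortest-path distances in the delay-weighted graph, so Poissonian and bursty delays with equal mean can be compared directly. The graph and distributions below are invented placeholders, not the analytical setting of the talk:

    import numpy as np
    import networkx as nx

    def cover_time(G, draw_delay):
        # SI spreading from node 0 with i.i.d. per-edge transmission delays:
        # infection times are shortest-path distances in the delay-weighted graph.
        for u, v in G.edges():
            G[u][v]["delay"] = draw_delay()
        t = nx.single_source_dijkstra_path_length(G, 0, weight="delay")
        return max(t.values())

    rng = np.random.default_rng(2)
    G = nx.erdos_renyi_graph(200, 0.05, seed=2)
    # Two waiting-time distributions with the same mean (1): Poissonian vs bursty.
    poissonian = lambda: rng.exponential(1.0)
    bursty = lambda: (rng.pareto(2.0) + 1.0) * 0.5   # heavy-tailed, mean 1
    print("Poissonian:", np.mean([cover_time(G, poissonian) for _ in range(50)]))
    print("bursty:    ", np.mean([cover_time(G, bursty) for _ in range(50)]))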
43003 Understanding Markovian Network Dynamics [abstract]
Abstract: In this talk I will discuss approaches for understanding transition behavior on networks. In particular, I will first introduce HypTrails, a Bayesian framework for comparing different hypotheses about human transition behavior in networks. Then I will introduce SubTrails, an approach for identifying subgroups of users with exceptional transition behaviors. I will conclude with open challenges and problems in the field.
Markus Strohmaier
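The core of such Bayesian hypothesis comparison can be sketched as follows: each hypothesis is encoded as Dirichlet pseudo-counts over transitions and scored by its marginal likelihood. This is only a minimal sketch of the kind of computation HypTrails performs; the framework's actual prior elicitation is more elaborate, and the data and hypotheses here are toys:

    import numpy as np
    from scipy.special import gammaln

    def log_evidence(counts, prior):
        # Dirichlet-multinomial log evidence of observed transition counts under
        # a hypothesis expressed as a matrix of Dirichlet pseudo-counts.
        logp = 0.0
        for n_row, a_row in zip(counts, prior):
            logp += gammaln(a_row.sum()) - gammaln(a_row.sum() + n_row.sum())
            logp += (gammaln(a_row + n_row) - gammaln(a_row)).sum()
        return logp

    # Toy transition counts between 3 states (no self-transitions observed).
    N = np.array([[0, 8, 2], [7, 0, 3], [1, 9, 0]], dtype=float)
    uniform = np.ones((3, 3))             # "uniform random walk" hypothesis
    self_loop = np.eye(3) * 3.0 + 1.0     # "stay where you are" hypothesis
    print("uniform:  ", round(log_evidence(N, uniform), 2))
    print("self-loop:", round(log_evidence(N, self_loop), 2))   # lower: contradicted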
43004 On the degree centrality in multiplex networks - The paper that kills complex network analysis [abstract]
Abstract: In almost any network analytic project, we make multiple, innocent, and seemingly inconsequential modeling decisions - however, some of them might have a profound impact on the results. In this talk, I will discuss how to evaluate the most simple centrality index, the degree, of a node in multiplex networks. Of course, any ranking of all nodes in a multiplex network requires first, some normalization of the degrees to put them on the same scale, and second, some aggregation over the different layers. You might opt for the sum, the mean, the minimum, or the maximum as an aggregation strategy. In this talk I will show that the combination of these two strategies, i.e., normalization and aggregation, may result in the same node being identified as either top central or least central. It thus emphasizes the need to properly describe, check, and evaluate even the most simple modeling decisions.
Katharina Zweig
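A toy computation (all numbers invented) of the effect the talk describes: the same node can come out first or last depending on the normalization and aggregation choices:

    import numpy as np

    # Degrees of four nodes in two layers of very different scale (invented).
    deg = np.array([[10, 20],    # node A
                    [ 1, 100],   # node B
                    [ 5, 50],    # node C
                    [ 3, 30]])   # node D

    def rank(scores):            # 0 = most central
        return np.argsort(np.argsort(-scores))

    print(rank(deg.sum(axis=1)))                      # raw sum: node A is ranked last
    print(rank((deg / deg.max(axis=0)).mean(axis=1))) # per-layer normalised: A is first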
43005 The Effects of Correlation between Influential Level and Threshold in Opinion Dynamics [abstract]
Abstract: In every choice we make, repetitive interactions within our social networks and our susceptibility to influence impact our decisions. This research investigates such mechanisms within the framework of threshold models of social contagion. We find that a strong positive correlation between the degree (representing the strength of influence) and the threshold (representing the acceptance level) slows the dynamics and segregates the opinions.
Petter Holme
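A minimal sketch of a threshold cascade with degree-correlated thresholds, to make the mechanism concrete; the network, threshold ranges and seeding below are illustrative assumptions, not the model of the talk:

    import numpy as np
    import networkx as nx

    def cascade_size(G, thresholds, seeds):
        # Watts-style cascade: a node adopts once the adopting fraction of its
        # neighbours reaches its threshold; returns the final number of adopters.
        active = set(int(s) for s in seeds)
        changed = True
        while changed:
            changed = False
            for v in G.nodes():
                if v in active:
                    continue
                nbrs = list(G.neighbors(v))
                if nbrs and sum(u in active for u in nbrs) / len(nbrs) >= thresholds[v]:
                    active.add(v)
                    changed = True
        return len(active)

    rng = np.random.default_rng(3)
    G = nx.barabasi_albert_graph(500, 3, seed=3)
    deg = np.array([G.degree(v) for v in G.nodes()])
    uncorrelated = rng.uniform(0.05, 0.4, len(deg))
    correlated = 0.05 + 0.35 * (deg - deg.min()) / (deg.max() - deg.min())
    seeds = rng.choice(len(deg), 10, replace=False)
    print("uncorrelated:", cascade_size(G, uncorrelated, seeds))
    print("correlated:  ", cascade_size(G, correlated, seeds))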

mathematical pharmacology  (MP) Session 1

Schedule Top Page

Time and Date: 10:00 - 12:30 on 20th Sep 2016

Room: P - Keurzaal

Chair: Vivi Rottschäfer

24000 Improving the prediction of drug action in the brain: towards a 3D brain drug distribution model [abstract]
Abstract: Development of drugs with adequate action on the brain is highly challenging. Quantitative understanding is needed of the highly complex processes that govern the concentration profile of a drug ("pharmacokinetics"), such as transport of the drug from blood to brain and subsequent intra-brain distribution to the target site. The target site of a drug is the region in the brain where the drug can interact with the biological target molecules expressed on the cells, to elicit the effect of the drug. To provide such quantitative understanding we have recently developed a multi-compartmental semi-physiologically-based pharmacokinetic (semi-PBPK) brain distribution model. This model is based on a system of ordinary differential equations (ODEs) to describe the pharmacokinetics of drugs in multiple physiological brain compartments [1]. However, it is important to also take into account local brain distribution, as target expression in the brain can substantially differ between brain regions, and recent insights have shown that target association and dissociation kinetics can change local pharmacokinetics [2]. To further extend our understanding of local drug distribution processes in the brain, we are currently exploring the inclusion of partial differential equations (PDEs) to cover regional brain distribution. By that we aim to integrate both the drug distribution and the target interaction kinetics in a 3D manner, to ultimately improve the prediction of drug action in the brain. [1] Yamamoto, Y. et al. Development of a multi-compartmental brain pharmacokinetic model and prediction of human brain target site concentrations. Submitted. [2] de Witte, W.E.A. et al. (2016) In vivo Target Residence Time and Kinetic Selectivity: The Association Rate Constant as Determinant. Trends Pharmacol. Sci. xx, 1-12
E Vendel, V Rottschäfer, Y Yamamoto, W de Witte, YC Wong, JGC van Hasselt and ECM de Lange
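The structural idea of such an ODE-based distribution model can be sketched with a toy two-compartment (plasma-brain) system; the published semi-PBPK model has many more compartments, and the rate constants here are invented:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy two-compartment model: plasma <-> brain, first-order elimination.
    k_el, k_in, k_out = 0.5, 0.2, 0.1    # invented rate constants (1/h)

    def rhs(t, y):
        plasma, brain = y
        return [-k_el * plasma - k_in * plasma + k_out * brain,
                k_in * plasma - k_out * brain]

    sol = solve_ivp(rhs, (0.0, 24.0), y0=[10.0, 0.0], dense_output=True)
    t = np.linspace(0.0, 24.0, 5)
    print(np.round(sol.sol(t), 3))       # concentration-time profiles (both compartments)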
24001 G-protein signalling dynamics and the role of mathematical pharmacology [abstract]
Abstract: Mathematical modelling and scientific computing are powerful tools for the analysis of cell signalling (cells integrating and reacting to signals from their environment in order to effect a response) in pharmacology. "Analytical pharmacology", which has its roots in classical receptor theory and largely focuses on equilibrium cell responses to drugs, provides a vital theoretical basis which underpins drug classification and prediction of drug mechanism of action. Efforts towards drug development and analysis will benefit from a better quantitative understanding of how cellular responses to signals such as drugs vary over time. Systems biology methods for modelling and analysing dynamics in cell signalling are now being combined with quantitative pharmacology ideas in the emergent field of systems pharmacology, while "mathematical pharmacology" encompasses numerical and analytical solution methods applied to models ranging from pharmacokinetics/pharmacodynamics to intracellular signal transduction. Here we present an overview of some recent pharmacological modelling, with a focus on G-protein coupled receptors (GPCRs), which play a crucial role in the control of cellular function and are targets for up to 60% of current pharmaceuticals. Much work to date has explored the steady-state signalling behaviour of these receptors, which we now extend to study the dynamics of their signalling systems. We will show both new results and outstanding challenges, which highlight the importance of mathematics in pharmacological research and the range of the modelling toolkit upon which we draw. In particular, we will discuss: the robust peak-plateau dynamics of activated G-proteins; the role of asymptotic analysis in separating time scales; a new model reduction method for reducing the complexity of signalling models; the dynamics of biased agonism; ODE and agent-based models; parameter estimation challenges; and software development.
Lloyd Bridge
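As a sketch of how peak-plateau dynamics can arise, consider a minimal receptor-desensitisation model for active G-protein: fast activation followed by slower loss of responsive receptors yields an early peak that relaxes to a lower plateau. All rate constants are illustrative assumptions, not results from the talk:

    import numpy as np
    from scipy.integrate import solve_ivp

    k_act, k_deact = 5.0, 1.0    # G-protein activation / deactivation (invented)
    k_des, k_res = 0.5, 0.05     # receptor desensitisation / resensitisation (invented)

    def rhs(t, y):
        r, g = y                 # r: responsive receptor fraction, g: active G-protein
        return [-k_des * r + k_res * (1.0 - r),
                k_act * r * (1.0 - g) - k_deact * g]

    sol = solve_ivp(rhs, (0.0, 30.0), y0=[1.0, 0.0], max_step=0.1)
    g = sol.y[1]
    # Fast activation peaks early; slow desensitisation then pulls g down to a plateau.
    print("peak G* = %.2f, plateau G* = %.2f" % (g.max(), g[-1]))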
24002 Model-based treatment planning in reproductive medicine [abstract]
Abstract: Modern techniques in reproductive medicine like in-vitro fertilization or intracytoplasmic sperm injection have increased the chances for successful reproduction. However, current success rates vary significantly among clinics, still reaching only about 35% even in well-functioning centers. This is mainly due to the usage of different treatment protocols and limited knowledge about individual variability in the dynamics of reproductive processes. Medically-assisted reproduction requires the women to undergo hormonal treatment for several weeks. Even though the pharmacokinetic properties of the involved drugs are well characterized, their pharmacodynamics is less clear and varies a lot between patients. The aim of our research is to develop a mathematical model that predicts the effect of drug administration on hormone blood concentrations and follicular maturation and that accounts for the observed inter- and intra-individual variability. This model shall become part of a clinical decision support system for reproductive endocrinologists that enables the simulation and optimization of treatment strategies in-silico. The talk focuses on the development and validation of a mathematical model for the human menstrual cycle [1], the difficulties of parameter identification [2], and the problem of specifying the model for individual patients [3]. This research is joint work with the scientists involved in the EU project PAEON [4]. [1] S. Röblitz, C. Stötzel, P. Deuflhard, H.M. Jones, D.-O. Azulay, P. van der Graaf, S.W. Martin. A mathematical model of the human menstrual cycle for the administration of GnRH analogues. Journal of Theoretical Biology 321, pp. 8-27, 2013. [2] I. Klebanov, A. Sikorski, C. Schütte, S. Röblitz. Prior estimation and Bayesian inference from large cohort data sets. ZIB-Report 16-09, 2016. [3] T. Mancini, I. Salvo, F. Mari, I. Melatti, A. Massini, S. Sinisi, E. Tronci, F. Davi, T. Dierkes, R. Ehrig, S. Röblitz, B. Leeners, T. Krüger, M. Egli, F. Ille. Patient-Specific Models from Inter-Patient Biological Models and Clinical Records. In: Formal Methods in Computer-Aided Design, 2014. [4] http://paeon.di.uniroma1.it/
Stefan Schäfer, Ilja Klebanov, Rainald Ehrig, Susanna Röblitz
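The parameter-identification difficulty mentioned in the abstract can be illustrated on a one-parameter toy problem: fit an elimination rate to noisy synthetic concentrations by least squares. The real model couples many species and data types; this shows only the fitting machinery:

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    def simulate(k, t_obs):
        # One-species toy: constant production, first-order elimination at rate k.
        sol = solve_ivp(lambda t, y: [1.0 - k * y[0]], (0.0, t_obs[-1]), [0.0],
                        t_eval=t_obs)
        return sol.y[0]

    rng = np.random.default_rng(4)
    t_obs = np.linspace(0.5, 10.0, 12)
    data = simulate(0.7, t_obs) + rng.normal(0.0, 0.02, t_obs.size)  # synthetic data
    fit = least_squares(lambda p: simulate(p[0], t_obs) - data, x0=[0.2],
                        bounds=(1e-3, 5.0))
    print("recovered k =", round(fit.x[0], 3))   # should be close to the true 0.7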

Digital Epidemiology and Surveillance  (DES) Session 1

Schedule Top Page

Time and Date: 10:00 - 12:30 on 20th Sep 2016

Room: Z - Zij foyer

Chair: Daniela Paolotti

36000 [abstract]
Abstract:
Marcel Salathé
36001 Emerging pathogen threats: risk assessment in the era of global awareness and response [abstract]
Abstract: Emerging pathogen events, from the H1N1 influenza pandemic to the current Zika invasion of the Americas, represent serious global health threats that necessitate rapid risk assessment and intervention planning. Computational and statistical models provide an invaluable assisting tool in this effort, but their design and calibration face major scientific challenges, not least accounting for the effect of awareness and human response. In the era of global information, awareness may precede the epidemic spread, rising in yet-unaffected territories and altering their risk of experiencing the epidemic. I will discuss this aspect through two examples from the recent Ebola and MERS-CoV epidemics. In the first example, spontaneous reaction to the risk of global dissemination yielded flight cancellations and border closures. Through accurate data collection and extensive numerical simulations we showed that the strong modification observed in the flight network delayed the risk of outbreak propagation to new countries by only a few weeks. In the second example, we measured collective and public health awareness through digital proxies (Google Trends, ProMED-mail and the Disease Outbreak News of the WHO) and quantified the impact of this component on the management of imported MERS cases in unaffected countries. We found that when attention was high, the time from hospitalization to isolation was reduced from 6 to 2 days on average, consequently halving the risk of onward transmission following importation. These two examples show how the rise of novel information sources and the large increase in data availability make it possible to account for the human factor in epidemic models, increasing the reliability of risk assessment analyses.
Chiara Poletto
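The reported halving of onward transmission risk is consistent with simple back-of-the-envelope arithmetic, assuming (illustratively) constant transmission before isolation and a short pre-hospitalisation infectious window; all numbers besides the 6-day and 2-day delays are invented:

    # Expected onward transmissions ~ transmission rate x community infectious time.
    beta = 0.15          # transmissions per day before isolation (invented)
    pre_hospital = 2.0   # days infectious before hospitalisation (invented)
    for delay in (6.0, 2.0):
        print("delay %g d -> expected onward transmissions: %.2f"
              % (delay, beta * (pre_hospital + delay)))
    # 1.20 vs 0.60: with these invented numbers, the shorter delay halves the risk.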
36002 Study of the effects of air quality and climate upon human health using social digital traces [abstract]
Abstract: Poor air quality episodes in highly populated cities across the world are becoming more and more common; these events are no longer sporadic but recurrent, and they put millions of people at risk due to the high concentrations of pollutants in the air that they breathe. Most of the impact of those episodes goes unnoticed because they produce symptoms like headaches, fever or respiratory problems that rarely seem serious enough to be diagnosed. However, those symptoms affect our daily life and our performance at work, and thus, in turn, they impact our economy and/or society. To understand and quantify that effect, we have analyzed a large database of social media messages (136 million geolocalized tweets) in Spain and, using natural language processing and machine learning techniques, we have identified 0.8 million tweets in which users talk about suffering symptoms like ILI, common cold, fever, headache, digestive or respiratory problems and potential treatments (perceived health). We have also collected information from official sources on air quality, pollens and weather. With that information we have constructed weekly time series and studied their interdependence and predictive power. We found, first, that ILI cases can be explained and nowcast from perceived symptomatological data and, second, that perceived symptomatological data can be nowcast from atmospheric factors such as pollutants, pollens and climate data. Our results apply both at the regional and the city level in different regions of Spain, suggesting that this kind of digital health data from users in social media could help councils and governments construct better air quality monitoring systems that not only consider the level of contaminants in the air, but also how those levels impact, in real time, the perceived (and possibly real) health conditions of the population.
David Martín-Corral, Esteban Moro, Manuel Garcia-Herranz and Manuel Cebrian
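The nowcasting step can be sketched as a regression of the weekly symptom series on same-week atmospheric predictors; all series below are synthetic placeholders, and the study's actual feature set and models may differ:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(5)
    weeks = 104
    pollutants = rng.normal(size=weeks)
    pollen = rng.normal(size=weeks)
    temperature = rng.normal(size=weeks)
    # Synthetic weekly symptom-report series driven by the atmospheric factors.
    symptoms = (2.0 * pollutants + 1.0 * pollen - 0.5 * temperature
                + rng.normal(0.0, 0.5, weeks))

    X = np.column_stack([pollutants, pollen, temperature])
    train, test = slice(0, 80), slice(80, weeks)
    model = LinearRegression().fit(X[train], symptoms[train])
    print("out-of-sample R^2:", round(model.score(X[test], symptoms[test]), 2))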
36003 Syndromic surveillance of gastroenteritis [abstract]
Abstract: Gastroenteritis is one of the most common illnesses worldwide. It is characterised by the symptoms of diarrhoea and vomiting. Although most cases of gastroenteritis in high-income countries are self-limiting, there is a significant impact on healthcare services and the economy. Determining the burden of gastroenteritis is challenging. Presentation biases mean that public health datasets of gastroenteritis incidence, and of the incidence of gastroenteritis-causing pathogens, do not give a complete picture of the community burden. We have been exploring the possibility of establishing a comprehensive near real-time picture of the levels of activity of gastroenteritis in the UK by investigating online data for syndromic surveillance of this illness. This includes webpage view statistics and an online community cohort survey, both of which have been extensively demonstrated as suitable for surveillance of other illnesses. Incidence data from these new online sources are compared to more traditional surveillance data from public health departments. This work contributes towards an improved understanding of the gastroenteritis burden, which can inform policy decisions regarding the management of this illness.
Elizabeth Buckingham-Jeffery
36004 Attitudes to the influenza vaccine. Data from the Flusurvey 2015/2016 [abstract]
Abstract: The Flusurvey is an internet-based tool through which real-time surveillance of self-reported influenza-like illness (ILI) in the community is undertaken. The Flusurvey collects information on vaccination status as well as the reason(s) given for either getting or not getting vaccinated (participants are allowed to provide more than one reason). We used these responses to explore the attitudes participants have about the influenza vaccine. 2,901 (34.9%) of Flusurvey participants reported being vaccinated for the 2015/16 season (5,418 participants (65.1%) were not vaccinated). The majority of those vaccinated were vaccinated at their GP (56.5%), with 21.4% and 18.2% vaccinated at their place of work and a pharmacy, respectively. 45.4% of those vaccinated said they were in a risk group, and 32.6% said the vaccine was readily available to them and vaccine administration was convenient. Of those not vaccinated, 49.9% felt they did not belong to a risk group, 43.6% were not offered it by their GP, 7% said the vaccine was not free of charge, 6.4% were worried the vaccine was not safe or could cause illness or other adverse events, 7.5% doubted the effectiveness of the vaccine, and 2.5% believed the vaccine could cause influenza. Encouragingly, a relatively small percentage of people had negative attitudes about the influenza vaccine (i.e. concerns around the safety, side effects or efficacy of the vaccine). Our results suggest that increasing vaccine availability and improving the convenience of administration (e.g. in the workplace) would increase vaccine uptake in the general population.
Bersabeh Sile, Chinelo Obi, Dominic Thorrington, Sebastian Funk and Richard Pebody