14:15 - 18:00 on 20th Sep 2016

Determinants of creativity and innovation in science, art and technology  (DCIS) Session 2


Time and Date: 14:15 - 18:00 on 20th Sep 2016

Room: A - Administratiezaal

Chair: Vittorio Loreto

15006 How creative, participatory and innovation strategies can improve the quality of scientific research? [abstract]
Abstract: We will explain and discuss several experiences where artistic and creative practices can drive ambitious scientific research. We will focus on topics and actions directly related to complex systems science to exemplify their full potential. We will describe how participatory strategies, public engagement, community processes and broad multidisciplinary teams can transform an ordinary research activity into a complete experience whose impact and outputs are multiple, diverse and long-lasting. The list of actors involved must necessarily include artists, designers, and public agencies or administrations, and the work must also take place in uncommon venues such as museums, cultural spaces and public spaces. Working with many actors and building tailor-made research collectives makes it possible to raise shared concerns, to address societal challenges in a novel and innovative way, and to enhance the value of the results by publicly discussing and sharing the whole research cycle. Close
Josep Perello
15007 From Innovation to Diversification: A Simple Competitive Model [abstract]
Abstract: Few attempts have been made to describe the statistical features and historical evolution of the bipartite country-product export matrix. An important standpoint is the introduction of a product network, namely a hierarchical forest of products that models the formation and evolution of commodities. In the present article, we propose a simple dynamical model in which countries compete with each other to acquire the ability to produce and export new products. Countries have two ways to expand their exports: innovating, i.e. introducing new goods (new nodes in the product network), or copying the productive process of others, i.e. occupying a node already present in the same network. In this way, the topology of the product network and the country-product matrix evolve simultaneously, driven by the countries' push toward innovation. Close
Riccardo Di Clemente
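The innovate-or-copy dynamics described above can be prototyped in a few lines. The sketch below is a minimal toy version written for illustration only (the probability `p_new` and all other parameters are assumptions, not the authors' calibration): each step one country either creates a new product node or occupies an existing one.

```python
import random

def simulate(n_countries=5, steps=1000, p_new=0.1, seed=42):
    """Toy innovate-vs-copy dynamics on a growing product set.

    Each step one country acts: with probability p_new it innovates,
    adding a brand-new product node; otherwise it copies, occupying a
    product node that some country already produces.
    """
    rng = random.Random(seed)
    products = [0]                       # the product network starts with one node
    made = {c: set() for c in range(n_countries)}
    made[0].add(0)
    for _ in range(steps):
        c = rng.randrange(n_countries)
        if rng.random() < p_new:         # innovation: a new node appears
            products.append(len(products))
            made[c].add(products[-1])
        else:                            # imitation: occupy an existing node
            made[c].add(rng.choice(products))
    return made

for country, prods in simulate().items():
    print(f"country {country}: exports {len(prods)} products")
```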
15008 Identifying the Features of Popular and Significant Artworks in Popular Music Production [abstract]
Abstract: In the world of artistic production there is a constant struggle to achieve fame and popularity. This fierce competition between artistic creations results in the emergence of highly popular elements that are usually well remembered through the years, while many other works that did not achieve that status are long forgotten. However, another level of importance must be considered in order to have a more complete picture of the system. In fact, many works that have influenced the production itself, through their aesthetic and cultural value, may never have been popular or may no longer be. Because of their relevance for the whole artistic production, it is important to identify them and preserve their memory for obvious cultural reasons. In this paper we focus on the duality between popularity and significance in the context of popular music, trying to understand the features of music albums belonging to one or both of these classes. Close
Bernardo Monechi
15009 Social networks evolution with old and new ties: how our social horizon grows [abstract]
Abstract: By means of user-generated data gathered on Last.fm, an online catalog of music albums, we define a growing conceptual space in the form of a network of tags representing the evolution of music production over the years. We use this network to define a set of general metrics characterizing the features of the albums and their impact on global music production. We then use these metrics to implement an automated method for predicting both the commercial success of a creation and its inclusion in expert-made lists of particularly significant and important works. We show that our metrics are not only useful for such predictions, but can also highlight important differences between culturally relevant and merely popular products. Finally, our method can easily be extended to other areas of artwork creation. Close
Raffaella Burioni

Information Processing in Complex Systems  (IPCS) Session 2


Time and Date: 14:15 - 18:00 on 20th Sep 2016

Room: B - Berlage zaal

Chair: Rick Quax

45010 Quantum Complexity [abstract]
Abstract: I intend to review several notions of classical complexity and discuss how they might be quantized. The standard technique from quantum information is to measure a given quantum system whose complexity we want to know and then apply the classical measures to the resulting classical statistics. To obtain the quantum complexity we finally optimize (minimize or maximize) over all classically obtained results. However, it might be more appropriate to measure quantum complexity directly, i.e. without going via classical statistics. How might that be done? I will also discuss complexity from the perspective of how difficult it is to make entangled states. This will lead me to introduce the concept of quantum macroscopicity. Close
Vlatko Vedral
45009 The classical-quantum divergence of complexity in the Ising spin chain [abstract]
Abstract: Most interesting systems in nature are often complex, lying in the region between order and randomness. Even though the idea of complexity is understood intuitively, it lacks a formal mathematical description. The statistical complexity, defined by Crutchfield as the minimum information required by a classical model to statistically simulate a process, serves to quantify complexity. Here, we show that the same idea of complexity behaves differently in the quantum information regime. We introduce the quantum statistical complexity as the minimum information required by a quantum model to statistically simulate a process, and show that it exhibits drastically different qualitative behavior when applied to the same system - the 1D Ising spin chain. Thus, we illustrate that the notion of what is complex depends on the information theory used to describe it. Close
Whei Yeap Suen, Jayne Thompson, Andrew Garner, Vlatko Vedral and Mile Gu
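For readers unfamiliar with Crutchfield's statistical complexity, it is the Shannon entropy of the stationary distribution over the ε-machine's causal states, C_μ = H[π]. Below is a minimal worked example for the classical side of the comparison, using the textbook golden-mean process (binary sequences with no two consecutive 1s); this is an illustration of the definition, not the paper's Ising computation.

```python
import numpy as np

# epsilon-machine of the golden-mean process (no "11" substring):
# from state A emit 0 (stay in A) or 1 (go to B), each with prob 1/2;
# from state B the next symbol must be 0, returning to A.
T = np.array([[0.5, 0.5],    # A -> (A, B)
              [1.0, 0.0]])   # B -> (A, B)

# stationary distribution pi solves pi = pi T
evals, evecs = np.linalg.eig(T.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
pi /= pi.sum()                              # (2/3, 1/3)

C_mu = -np.sum(pi * np.log2(pi))            # statistical complexity, in bits
print(f"pi = {np.round(pi, 3)}, C_mu = {C_mu:.3f} bits")   # ~0.918
```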
45011 Occam's Quantum Strop [abstract]
Abstract: A stochastic process's statistical complexity stands out as a fundamental property: the minimum information required to synchronize one process generator to another. How much information is required, though, when synchronizing over a quantum channel? Recent work demonstrated that representing causal similarity as quantum state-indistinguishability provides a quantum advantage. We generalize this to synchronization and offer a sequence of constructions that exploit extended causal structures, finding a substantial increase of the quantum advantage. We demonstrate that maximum compression is determined by the process's cryptic order, a classical, topological property closely allied to Markov order, itself a measure of historical dependence. We introduce an efficient algorithm that computes the quantum advantage and close by noting that the advantage comes at a cost: one trades off prediction for generation complexity. Close
John Mahoney, James Crutchfield and Cina Aghamohammadi
45012 Classification of time-symmetry breaking in quantum walks on graphs [abstract]
Abstract: Quantum walks on graphs represent an established model capturing essential physics behind a host of natural and synthetic phenomena. Quantum walks have further been proven to provide a universal model of quantum computation and have been shown to capture the core underlying physics of several biological processes in which quantum effects play a central role. A 'single particle quantum walker' moves on a graph, with dynamics governed by Schrödinger's equation, and quantum theory predicts the probability of a walker to transition between the graph's nodes. Any quantum process in finite dimensions can be viewed as a single-particle quantum walk. Until recently, quantum walks implicitly modeled only probability transition rates between nodes which were symmetric under time inversion. Breaking this time-reversal symmetry provides a new arena in which to consider applications of this symmetry breaking and to better understand its foundations. The main application discovered so far is that this symmetry breaking can be utilized as a passive means to control and direct quantum transport. A subtle interplay between the assignment of complex Hamiltonian edge weights and the geometry of the underlying network has emerged in a sequence of studies. This interplay has been central to several works, but in the absence of definitive statements, past work has only produced criteria for a process on a graph to be time-symmetric, leaving the classification problem and its implications open. Here we provide a full classification of the Hamiltonians which enable the breaking of time-reversal symmetry in their induced transition probabilities. Our results are moreover proven in terms of the geometry of the corresponding Hamiltonian support graph. We find that the effect can only be present if the underlying support graph is not bipartite, whereas certain bipartite graphs give rise to transition probability suppression, but not broken time-reversal symmetry. These results fill an important gap in understanding the role this omnipresent effect plays in quantum information science. Close
Jacob Turner and Jacob Biamonte
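The central object here is the transition probability P_{i→j}(t) = |⟨j|e^{-iHt}|i⟩|², and time-symmetry breaking means P_{i→j}(t) ≠ P_{j→i}(t). Below is a minimal numerical illustration of the bipartite/non-bipartite distinction (a sketch, not the authors' proof): on a triangle, the smallest non-bipartite graph, real edge weights give symmetric transition probabilities while complex hopping phases do not.

```python
import numpy as np
from scipy.linalg import expm

def transition_probs(H, t=1.0):
    """P[i, j] = |<j| exp(-iHt) |i>|^2 for a single-particle walker."""
    U = expm(-1j * H * t)
    return np.abs(U.T) ** 2

# triangle (non-bipartite): real weights vs. complex hopping phases
H_real = np.ones((3, 3)) - np.eye(3)
S = np.roll(np.eye(3), 1, axis=1)        # cyclic shift 0 -> 1 -> 2 -> 0
H_chiral = 1j * (S - S.T)                # Hermitian, phase pi/2 on each edge

for name, H in (("real", H_real), ("complex", H_chiral)):
    P = transition_probs(H)
    print(f"{name} weights: max |P_ij - P_ji| = {np.max(np.abs(P - P.T)):.4f}")
```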
45008 The Ambiguity of Simplicity [abstract]
Abstract: A system’s apparent simplicity depends on whether it is represented classically or quantally. This is not so surprising, as classical and quantum physics are descriptive frameworks built on different assumptions that capture, emphasize, and express different properties and mechanisms. What is surprising is that, as we demonstrate, simplicity is ambiguous: the relative simplicity between two systems can change sign when moving between classical and quantum descriptions. Thus, notions of absolute physical simplicity—minimal structure or memory—at best form a partial, not a total, order. This suggests that appeals to principles of physical simplicity, via Ockham’s Razor or to the “elegance” of competing theories, may be fundamentally subjective, perhaps even beyond the purview of physics itself. It also presents a challenge to using quantum computers for statistical inference. Fortunately, experiments are now beginning to probe measures of simplicity, creating the potential to directly test for ambiguity. Close
James Crutchfield, Cina Aghamohammadi and John Mahoney
45007 Increasing excess entropy in the approach towards equilibrium in a reversible Ising dynamics model [abstract]
Abstract: Dynamic models of spin systems, based on microscopic reversibility and conserved energy, have been used to simulate the Ising model and the approach towards equilibrium. The equilibrium is here determined by the set of distributions, over configurations of finite blocks of spins of increasing size, that maximises the entropy density under the given energy constraint. Since the entropy density is also conserved in such dynamics, a natural question is whether, and how, full equilibrium is really reached. We investigate this question in detail through an information-theoretic analysis of the one-dimensional, infinite-lattice version of the Q2R model, whose two-dimensional version has been used extensively to simulate the two-dimensional Ising model. Starting from a low-entropy state with appropriate statistics, we show that, despite the conserved entropy, entropy estimated only from finite-size block statistics appears to increase, and the equilibrium state for the given energy is approached. By showing how the excess entropy increases during this process, we clarify how local order is transformed into correlation information over increasing distances, explaining the apparent entropy increase and the approach towards equilibrium. The findings are discussed in the broader context of reversible microscopic dynamics, macroscopic irreversibility, and the second law of thermodynamics. Close
Kristian Lindgren and Eckehard Olbrich
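The finite-block entropy estimates underlying this analysis are easy to reproduce empirically. A minimal sketch (illustrative only, not the authors' Q2R code): estimate the block entropy H_L from observed length-L block frequencies, form the entropy-rate estimates h_L = H_L − H_{L−1}, and take a crude excess-entropy estimate as the accumulated excess of h_L over its large-L value.

```python
from collections import Counter
import numpy as np

def block_entropy(spins, L):
    """Empirical Shannon entropy (bits) of length-L blocks in a spin sequence."""
    counts = Counter(tuple(spins[i:i + L]) for i in range(len(spins) - L + 1))
    n = sum(counts.values())
    p = np.array(list(counts.values())) / n
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
spins = rng.integers(0, 2, size=100_000)     # stand-in for a spin configuration

H = [block_entropy(spins, L) for L in range(1, 8)]
hL = np.concatenate([[H[0]], np.diff(H)])    # h_1 = H_1, h_L = H_L - H_{L-1}
E = np.sum(hL - hL[-1])                      # crude excess-entropy estimate
print("h_L:", np.round(hL, 4))
print("excess entropy ~", round(float(E), 4))  # ~0 for an i.i.d. sequence
```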
45005 Mutual information reveals lower-level mechanisms, aiding higher level predictability [abstract]
Abstract: I will present some recent work on Shannon information theory applied to natural complex systems. As part of this we have developed a new correlation length function based on mutual information. I will discuss how it aids predictability of complex systems by detecting underlying mechanisms of change. One example I will present is a glass former, and I will discuss how this far-from-equilibrium many-particle system is representative of many other complex systems. Close
Karoline Wiesner
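A mutual-information-based correlation length can be illustrated in a few lines: estimate I between observations a distance r apart and watch it decay. The sketch below is a generic illustration of the idea (a "sticky" Markov chain as stand-in data), not the speaker's actual estimator.

```python
from collections import Counter
import numpy as np

def mutual_information(x, r):
    """Empirical mutual information (bits) between x[i] and x[i+r]."""
    pairs = Counter(zip(x[:-r], x[r:]))
    n = sum(pairs.values())
    px, py = Counter(x[:-r]), Counter(x[r:])
    return sum((c / n) * np.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pairs.items())

# data with a built-in correlation length: a sticky binary Markov chain
rng = np.random.default_rng(1)
x = [0]
for _ in range(200_000):
    x.append(x[-1] if rng.random() < 0.9 else 1 - x[-1])

for r in (1, 2, 5, 10, 20):
    print(f"r = {r:3d}: I(x_i; x_{{i+r}}) = {mutual_information(x, r):.4f} bits")
```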
45013 Unbounded memory advantage in stochastic simulation using quantum mechanics [abstract]
Abstract: Simulations using real-valued quantities on a digital computer require a trade-off between the precision to which these quantities are stored and the memory required to store them. The precision of a simulation is hence limited by the internal memory available to the simulator. In this presentation, I shall demonstrate, using tools from statistical complexity theory and its quantum extensions, that quantum information processing allows the simulation of stochastic processes to arbitrarily high precision at finite memory cost. This demonstrates the unbounded memory advantage that a quantum computer can exhibit over its best possible classical counterpart when used for stochastic simulations. Close
Andrew Garner, Qing Liu, Jayne Thompson, Vlatko Vedral and Mile Gu

Workshop on Open & Citizen Data Science  (WOCD) Session 2


Time and Date: 14:15 - 18:00 on 20th Sep 2016

Room: C - Veilingzaal

Chair: Thomas Maillart

33007 Open Infrastructure for Open Science [abstract]
Abstract: Opening the scientific process of creating knowledge requires opening access to a number of diverse resources, such as scientific instruments, scientific data, digital services, software tools, knowledge and expertise, all needed in some form to conduct research. These elements can be regarded as infrastructural resources that are essential inputs to the research process. Making these resources open and shareable requires the adoption of standards, the right legal frameworks and licenses, and clear rules for access. There is also the crucial aspect of defining appropriate governance and management mechanisms that ensure their long-term maintenance and availability. This presentation discusses commoning as the social practice suitable for creating systems to manage these shared resources and provides examples in the area of open science. It also identifies some of the current challenges, with particular focus on digital infrastructures. Close
Sergio Andreozzi
33008 Overview of Citizen Science Models, Practices and Impacts [abstract]
Abstract: Ibercivis (www.ibercivis.es) is the national foundation for Citizen Science in Spain. It promotes and supports Citizen Science experiments and studies, delivering services to communities as well as deploying our own tools and experiences. Our main approach to public engagement is to promote the uptake of Citizen Science tools to enrich research, by changing current approaches to scientific challenges and by incorporating knowledge from outside academia. In recent years we have seen a boom of Citizen Science practices all around the world. From Ibercivis alone we have deployed more than 50 experiments with over 40 different research groups from different areas of knowledge, reaching over 50,000 volunteers. Our experiments include volunteer computing (e.g. simulation of nuclear fusion devices using http://boinc.berkeley.edu), volunteer sensing (e.g. odour nuisance reports, http://digitalearthlab.jrc.ec.europa.eu/mygeoss/results3.cfm), volunteer thinking (e.g. stem cell image analysis, cellspotting.socientize.eu), participatory experiments (e.g. human behavior in dilemmas, dilema.ibercivis.es), and collective intelligence projects (e.g. collectivemusicexperiment.eu), among others. One of the main drivers of this scenario is the digitally enabled transformation of the interactions between science and society, with an unprecedented scale, nature and range of collaborators. We find several models of public engagement in science, and contributions occur individually as well as collectively, in all steps of the scientific workflow. A clear picture of the situation in Europe is needed. Addressing this at the national scale, we created the Observatory of Citizen Science in Spain (www.ciencia-ciudadana.es) to monitor the growth of such initiatives, catalogue them and analyze their different impacts. Ibercivis was promoted by the BIFI Institute of the University of Zaragoza and includes research institutions and science funders, namely MINECO, CSIC, CIEMAT, Gobierno de Aragón and Fundación Zaragoza Ciudad del Conocimiento. We are part of the steering committee of the European Citizen Science Association, linking with responsible research and innovation. Since 2012 we have coordinated the European project Socientize (www.socientize.eu), which delivered the White Paper on Citizen Science for Europe, referenced as a flagship document for Citizen Science policy making. In this presentation we will present a significant set of these projects and their outcomes from scientific, educational, political and technological perspectives. Close
Fermin Serrano, Jesus Clemente, Mari Carmen Ibañez, Eduardo Lostal, Francisco Sanz
33009 Herding Cats in Gentoo Linux [abstract]
Abstract: Between 2001 and 2005, I had the privilege and honour of leading and guiding Gentoo Linux's development teams. That was a period of high growth for Gentoo: our userbase went from less than a thousand users (about ~750 in the #gentoo IRC channel) to over 1.5 million by 2003. This massive mushrooming of the userbase made its effects felt throughout the project. Due to the nature of the technology and the ethos of the project, Gentoo rapidly started to become the Linux distribution for every kind of situation. Several projects launched within Gentoo, many of them around supporting different hardware, including PPC, Sparc, IA64, Arm, and others. Communities formed to help people speaking specific languages (Spanish, French, and Polish were amongst the pioneer communities). Each group had developers within the project, and users interacting with those developers. Interactions (user-user, dev-dev, user-dev) happened in many fora: IRC, email lists, Gentoo Forums, as well as a number of third-party online destinations for Linux geeks. Gentoo fever was all around. We got made fun of: in forum comments everywhere, satirical websites popped up, and of course our users were die-hard defenders of The Gentoo Way. The challenge of guiding this growth, and considering the perspectives of (at our peak) 250 developers, fell onto the leadership team. As part of that team, I underwent my own growth: as a developer, a colleague, a friend, and a human being. I would like to tell you my story. Close
Seemant Kulleen
33010 Enhancing Online Community Building & Long-Term Production with Co-Located Events [abstract]
Abstract: Nearly all online communities organize co-located meetings, such as conferences, un-conferences, and hackathons. These events are short and fast-paced, yet they are intended to enable social interactions and the fast circulation of informal knowledge between attendants. There is, however, a dearth of knowledge on the contribution of co-located events to community enhancement and long-term online production. Here, I study a community of astrophysicists involved in open and reproducible data science. Over the span of the data collected (4 years), five co-located meetings were organized. Each meeting triggered contrasting immediate effects on collaboration, but all of them had significant long-term enhancing effects on community building and online knowledge production. These results illustrate how punctual co-located meetings change the way contributors engage with their community once they have resumed their routine work online. Close
Thomas Maillart
33011 Using the Blockchain for Reproducible, Transparent and Trustworthy Science [abstract]
Abstract: Scientific studies are often not reproducible and trustworthy due to false or exaggerated research findings. This creates a bias in which successful studies are often published only through the use of dishonest scientific practices. To address this issue, a research project has been set up to investigate how the reproducibility and trustworthiness of scientific studies can be improved with the use of blockchain technology. Blockchain technology has shown adequate results in progressing and logging transactions for the Bitcoin protocol and will be examined within this project for the purpose of logging and tracking preannounced studies. This is the thesis design for the master project "Using the Blockchain for Reproducible and Trustworthy Science". First, the problem statement and context are explained, along with how the project relates to other work. Secondly, the research question and subquestions are defined, followed by the research methodology that will be used during the research phase. The last section introduces the planning, i.e. a roadmap, developed for the project. Close
Ilias Elmzouri
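The core logging idea, committing each preregistered study to an append-only chain of hashes so the recorded order cannot be silently rewritten, can be sketched compactly. This toy registry is an invented illustration (class name and fields are assumptions), not the project's actual design.

```python
import hashlib, json, time

def sha256(data: str) -> str:
    return hashlib.sha256(data.encode()).hexdigest()

class StudyRegistry:
    """Toy append-only hash chain for preregistering studies.

    Each entry commits to the hash of the previous entry, so rewriting
    any past preregistration breaks every later link.
    """
    def __init__(self):
        self.chain = [{"prev": "0" * 64, "payload": "genesis", "ts": 0}]

    def register(self, payload: dict):
        prev_hash = sha256(json.dumps(self.chain[-1], sort_keys=True))
        self.chain.append({"prev": prev_hash,
                           "payload": json.dumps(payload, sort_keys=True),
                           "ts": time.time()})

    def verify(self) -> bool:
        return all(e["prev"] == sha256(json.dumps(p, sort_keys=True))
                   for p, e in zip(self.chain, self.chain[1:]))

reg = StudyRegistry()
reg.register({"study": "effect X", "hypothesis": "H1", "n": 200})
print("chain valid:", reg.verify())
```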
33012 Un-conference Breakout Session
33013 Presentations from breakout sessions
33014 Wrap-up

Territorial Intelligence for Multi-level Equity and Sustainability  (TIME) Session 2


Time and Date: 14:15 - 18:00 on 20th Sep 2016

Room: D - Verwey kamer

Chair: Celine Rozenblat

38004 "Perma-circularity" as a systemic framework for autonomizing and linking territories [abstract]
Abstract: I will take a critical stance towards the "Smart" fashion that seems to be sweeping transition thinking these days. As trendy as it is, "Smart" is very often linked with a techno-fix, eco-modernist vision of the future to which I have trouble subscribing. Our task, no doubt shared by most in the TIMES group, is to arrive at systemic solutions anchored in what was for a long time known as "appropriate technologies": neither high nor low tech, but a smart combination of both, with a foremost objective in mind: to build a world in which humanity as a whole has a permanent ecological footprint of one planet, and to do so under socially and economically equitable circumstances. Given current population trends, this cannot be done within the current high-tech capitalist market system, or with a "circular economy" still predicated on (green) growth. We need a "perma-circular" economy that combines re-territorialization, re-localization, and a huge global reduction in material flows together with new, more "sober" ways of linking territories. I will try to present some useful ideas about how to implement such an economy, drawing inspiration in particular from permaculture, bioregionalism, the "Municipalist" school of anarchistic democracy, and the "Territorialist" school of sustainability. Close
Christian Arnsperger (University of Lausanne)
38005 Toward an ecological revolution for recycling greenhouse gas, renewing food production and water use by using the biodiversity of ecological systems [abstract]
Abstract: Human-driven environmental load accumulated over the centuries is causing irreversible shifts in ecosystems and pushing humanity's footprint beyond planetary boundaries. These dilemmas of coupled social-ecological systems evoke the essential challenge of complex systems science in terms of its multi-scale application to the management of the real world. The open complex systems at the heart of this challenge require openness and diversity in scientific methodologies, in every aspect of theoretical, empirical, methodological, and institutional organization. Here I review recent activities on establishing a sustainable farming system with multi-scale synergy with environmental and human health, and outline how an open complex systems view could integrate divided disciplines and provide an interface for management in a transient process. Important propositions for the flagship TIMES could be delivered in order to support a transversal initiative of young researchers thinking globally. Close
Masatoshi Funabashi (Sony Computer Science Laboratories)
38006 Global Systems Science and Dynamical Hyper-networks in turbulent TIMES [abstract]
Abstract: The Global Systems Science (GSS) community is trying to develop new ways for complex systems science to support local and global policy, in the context of new methods in data science and modeling and the great need to engage and include citizens in policy processes [1][2][3]. GSS is perfectly aligned with the TIMES Flagship. The four main pillars of GSS are: 1. Policy at all levels, from individuals to the world: policy problems at global and national scales. How can we know which, if any, proposed policy options will work? 2. The new, interdisciplinary approach: how the science of complex social, economic, political, biological, physical and environmental systems can inform policy makers in their work. 3. Data science and computational modeling for policy makers: the use of "policy informatics", the new, policy-oriented methods of modeling complex systems on computers. 4. Citizen engagement: the behavior of social systems emerges bottom-up, from the interactions of individuals and institutions, in the context of top-down policy constraints; individual citizens must be involved in decision making and policy formulation. In this context of GSS, the talk will briefly discuss how multilevel dynamic hyper-networks can be used in TIMES [4]. The TIMES Flagship is an exemplar project of Global Systems Science. Close
Jeff Johnson (The Open University UK)
38007 Reconstruct multi-level territory dynamics with complex networks and self-organizing multi-agent systems [abstract]
Abstract: The paper will present one of the first possible applications of TIMES. One of the main characteristics of complexity is the emergence of properties due to dynamical processes. Our objective is to contribute to the formalization of these emergent properties by studying dynamical structures. The structures of complexity proposed here are interaction systems at the core of self-organization mechanisms. Dynamical networks are efficient tools to express local or global properties of an evolving topology. They capture structural aspects of complex systems, representing entities as nodes and interactions between them as links. This contribution presents adaptive algorithms for complex network dynamics, leading to the identification of emergent organizations in these networks. One of these algorithms, named AntCo2, is bio-inspired by the behavior of social insect systems and detects emergent structures inside complex networks. Some applications are presented relating to the urban morphodynamic analysis of the communication networks of a city. Practical case studies are developed (i) to analyze network vulnerability in case of urban technological risk, using multi-scale measures on dynamical complex systems, and (ii) to reconstruct the complexity of logistic corridors as an interface between port and metropolitan systems. The simulations and results detailed in this presentation are powered by the GraphStream library, a Java package for dynamic complex networks (http://graphstream-project.org). Close
Cyrille Bertelle (University of Le Havre, France)
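Detecting emergent organizations in a network snapshot can be illustrated with standard tooling. The sketch below uses Python's networkx and plain modularity optimization as a stand-in for the GraphStream/AntCo2 machinery the talk is actually based on; it only shows the kind of structure such algorithms recover.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# two planted groups of 20 nodes: dense inside, sparse across
G = nx.planted_partition_graph(2, 20, p_in=0.4, p_out=0.02, seed=8)
communities = greedy_modularity_communities(G)
print("recovered community sizes:", [len(c) for c in communities])
```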
38008 Personalized Open Education for the Masses, for a Personalized Open Lifelong Education ecosystem [abstract]
Abstract: Open Education is developing at high speed in the form of Massive Open Online Courses. However, education cannot be reduced to providing online courses. It is necessary to also provide social interaction between students and tutors in order to create a real educational ecosystem. This is what is sought with POEM, a Personalized Open Education for the Masses ecosystem, which uses complex systems in order to involve students and teachers in lifelong learning. Close
Pierre Collet (University of Strasbourg, France)
38009 Smart Cities: Ethical issues investigated from a complex systems perspective [abstract]
Abstract: We aim to investigate, from a complex systems perspective, ethical issues related to: (i) privacy; (ii) the unauthorized handling, for commercial purposes, of digital traces left by users in smart houses and cities; (iii) the possibility of using ubiquitous information and data correlations to predict and prevent undesirable human social actions related to security and human rights. Problems (i)-(iii) will be analysed from an inter/transdisciplinary perspective in order to identify, on the one hand, the privacy and ethical concerns arising during the collection of ubiquitous information in smart environments, and, on the other hand, the possibility of identifying the ideological features of the selected data, which could be used for surveillance and commercial purposes. An evolutionary perspective on Ethics will be presented, according to which moral habits are emergent properties of social affordances embodied in human social interactions. (Social affordances are "dispositional collective properties that indicate possibilities of action provided to organisms by other organisms that share co-evolutionary histories" (Gibson, 1986; McArthur & Baron, 1983).) Close
Maria Eunice Quilici Gonzalez & Mariana Claudia Broens (UNESP, FAPESP, CNPq, Brazil)

EvoEvo (Evolution of Evolution)  (EOE) Session 2


Time and Date: 14:15 - 18:00 on 20th Sep 2016

Room: E - Mendes da Costa kamer

Chair: Guillaume Beslon

16007 Evolution of Evolvable Systems (invited talk) [abstract]
Abstract: I shall survey experimental and theoretical results from an ERC Advanced project with that name (https://www.parmenides-foundation.org/research/projects/evoevo/). I shall focus on three issues: (1) Experimental approach to infrabiological systems, (2) Major transitions theory 2.0 (especially the filial transitions), (3) Learning in evolution versus Evolution in learning. Close
Eors Szathmary (Parmenides Center for the Conceptual Foundations of Science, Pullach/Munich, Germany)
16008 In-silico experimental evolution highlights the influence of environmental seasonality on bacterial diversification [abstract]
Abstract: http://www.evoevo.eu/download/evoevo_workshop_2016/EvoEvo2016_Rocabert.pdf Close
Charles Rocabert (INRIA Grenoble-Rhône Alpes, FR)
16009 Evolution of r- and K-selected species of Virtual Microbes: a case study in a simple fluctuating 2-resource environment [abstract]
Abstract: http://www.evoevo.eu/download/evoevo_workshop_2016/EvoEvo2016_vanDijk.pdf Close
Bram van Dijk (Utrecht University, NL)
16010 Modeling the co-evolutionary dynamics of the Lobaria pulmonaria lichen symbiosis [abstract]
Abstract: http://www.evoevo.eu/download/evoevo_workshop_2016/EvoEvo2016_Adams.pdf Close
Julia Adams (Wellesley College, US)
16011 EvoMachina: a novel evolutionary algorithm inspired by bacterial genome reorganisation [abstract]
Abstract: http://www.evoevo.eu/download/evoevo_workshop_2016/EvoEvo2016_Hoverd.pdf Close
Tim Hoverd (University of York and York Centre for Complex Systems Analysis, UK)
16012 Evolution towards extinction in replicase models: inevitable unless… [abstract]
Abstract: http://www.evoevo.eu/download/evoevo_workshop_2016/EvoEvo2016_Hickinbotham.pdf Close
Simon Hickinbotham (University of York, UK)
16013 Physical interaction with automated music composition platforms [abstract]
Abstract: http://www.evoevo.eu/download/evoevo_workshop_2016/EvoEvo2016_Abernot.pdf Close
Jonas Abernot (INRIA Grenoble-Rhône Alpes)

Non-stationarity and ergodicity in Economic and Financial Systems  (NCEF) Session 1


Time and Date: 14:15 - 18:00 on 20th Sep 2016

Room: F - Rode kamer

Chair: Francesco Caravelli

47000 When is equilibrium a reasonable assumption? [abstract]
Abstract: Equilibrium in economics is based on the assumption that agents' actions are consistent with their beliefs, and usually means convergence to a fixed point in the Close
Doyne Farmer
47001 Complexity Economics and the Synergy of Game Theory: Higher Order Dependencies for out of Equilibrium Economics [abstract]
Abstract: In an article entitled "Complexity Economics: A Different Framework for Economic Thought" (W.B. Arthur, 2013), Brian Arthur discusses "Complexity Economics" and the basis of economic theory if neoclassical economics is to be replaced by a non-equilibrium, dynamic and process-focused understanding of economics, in which neoclassical equilibrium, if it exists at all, is a special case. The importance of finding an alternative approach comes from the difficulties classical economics has in explaining economic processes: historical contingencies, learning, innovations, and the imperfections of human perception and decision-making. In this talk I present some preliminary work on an interpretation of game theory as "interacting logic gates" for which Nash equilibrium is only a subset of possible outcomes. This model addresses Arthur's point regarding the economy as a massively parallel system of concurrent and recurrent behaviour that is "perpetually 'computing' itself". The information-theoretic measure of synergy is applied to data from a behavioural economics experiment to demonstrate in practice how the whole can differ from the sum of the parts in applied economics. Close
Mike Harré
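The synergy referred to is easiest to see in the canonical XOR example, where the joint inputs carry information about the output that neither input carries alone. Below is a minimal worked computation using the crude "whole minus sum of parts" proxy I(X1,X2;Y) − I(X1;Y) − I(X2;Y); the talk may well use a more refined decomposition, so treat this as an illustration of the concept only.

```python
import numpy as np
from itertools import product

# joint distribution of (x1, x2, y) for y = XOR(x1, x2), inputs uniform
states = [(x1, x2, x1 ^ x2) for x1, x2 in product((0, 1), repeat=2)]
p = {s: 0.25 for s in states}

def H(margin):
    """Shannon entropy (bits) of the marginal over the given index tuple."""
    m = {}
    for s, ps in p.items():
        key = tuple(s[i] for i in margin)
        m[key] = m.get(key, 0.0) + ps
    return -sum(q * np.log2(q) for q in m.values())

I_joint = H((0, 1)) + H((2,)) - H((0, 1, 2))   # I(X1,X2 ; Y)
I_1 = H((0,)) + H((2,)) - H((0, 2))            # I(X1 ; Y) = 0 for XOR
I_2 = H((1,)) + H((2,)) - H((1, 2))            # I(X2 ; Y) = 0 for XOR
print(f"I(X1,X2;Y)={I_joint:.2f}, I(X1;Y)={I_1:.2f}, I(X2;Y)={I_2:.2f}")
print(f"synergy proxy = {I_joint - I_1 - I_2:.2f} bits")   # 1.00 for XOR
```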
47002 Emergence of Cooperative Long-Term Market Loyalty in Double Auction Markets [abstract]
Abstract: Loyal buyer-seller relationships can arise by design, e.g. when a seller tailors a product to a specific market niche to accomplish the best possible returns, and buyers respond to the dedicated efforts the seller makes to meet their needs. We ask whether it is possible, instead, for loyalty to arise spontaneously, and in particular as a consequence of repeated interaction and co-adaptation among the agents in a market. We devise a stylized model of double auction markets and adaptive traders that incorporates these features. Traders choose where to trade (which market) and how to trade (to buy or to sell) based on their previous experience. We find that when the typical scale of market returns (or, at fixed scale of returns, the intensity of choice) becomes higher than some threshold, the preferred state of the system is segregated: both buyers and sellers are segmented into subgroups that are persistently loyal to one market over another. We characterize the segregated state analytically in the limit of large markets: it is stabilized by some agents acting cooperatively to enable trade, and it provides higher rewards than its unsegregated counterpart, both for individual traders and for the population as a whole. Close
Aleksandra Aloric, Peter Sollich, Peter McBurney, Tobias Galla
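The lock-in mechanism here is a discrete-choice rule whose intensity-of-choice parameter β controls how strongly agents exploit past experience. The sketch below is a deliberately stylized single-population toy (all parameters invented, much simpler than the paper's buyer-seller model); it only illustrates how softmax choice plus reinforcement turns noisy exploration into persistent market loyalty.

```python
import numpy as np

rng = np.random.default_rng(2)
n_agents, beta, lam = 200, 4.0, 0.1       # beta: intensity of choice

A = np.zeros((n_agents, 2))               # learned attractions to two markets
for t in range(5000):
    probs = np.exp(beta * A)
    probs /= probs.sum(axis=1, keepdims=True)
    choice = (rng.random(n_agents) < probs[:, 1]).astype(int)
    # stylized payoff: a trade succeeds more often in the busier market
    share = np.bincount(choice, minlength=2) / n_agents
    payoff = (rng.random(n_agents) < share[choice]).astype(float)
    # exponentially discounted reinforcement of the chosen market
    idx = np.arange(n_agents)
    A[idx, choice] = (1 - lam) * A[idx, choice] + lam * payoff

loyal = np.mean(probs.max(axis=1) > 0.9)
print(f"fraction of agents persistently loyal to one market: {loyal:.2f}")
```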
47003 Economics without assuming ergodicity [abstract]
Abstract: The mathematics of randomness began in the 1650s, with imagined parallel worlds where all possible events coexist. Economics immediately adopted the new concepts, especially expectation values, as its most basic foundation. The 1700s saw challenges to the nascent economic formalism but found patches, notably utility theory, to keep it alive. In the 1870s the mathematics of randomness took a leap forward with Maxwell and Boltzmann realising that an average across parallel worlds may not reflect an average over time. These developments made the earlier patches unnecessary, but by this time the formalism had become too established to adapt. Boltzmann coined the term "ergodicity" and questioned the meaning of expectation values. The following century saw great refinements and mathematical formalisation of the concept of ergodicity. Over the last few decades we have seen much interest in what happens when ergodicity is absent. In my talk I will ask this question in the context of economic thinking. Economics, broadly, is based on the assumption of noisy growth, which is mathematised with non-stationary, non-ergodic models, most famously geometric Brownian motion. Close
Ole Peters
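The gap between ensemble and time averages that this argument turns on can be demonstrated numerically for geometric Brownian motion: the ensemble-average growth rate is μ while the time-average growth rate is μ − σ²/2, and with these (arbitrary) parameters the two differ in sign. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, dt = 0.05, 0.4, 0.01

# ensemble: many trajectories over one unit of time
N = 10_000
x = np.ones(N)
for _ in range(int(1.0 / dt)):
    x *= np.exp((mu - sigma**2 / 2) * dt
                + sigma * np.sqrt(dt) * rng.standard_normal(N))
print("ensemble growth rate ~", round(float(np.log(x.mean())), 3))   # ~ mu

# time: a single trajectory over a long horizon T
T = 500.0
steps = int(T / dt)
log_increments = ((mu - sigma**2 / 2) * dt
                  + sigma * np.sqrt(dt) * rng.standard_normal(steps))
print("time-average growth rate ~",
      round(float(log_increments.sum() / T), 3))   # ~ mu - sigma^2/2 (negative!)
```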
47004 Optimal growth strategies with carrying capacity [abstract]
Abstract: I will discuss the problem of determining optimal portfolios and general strategies while taking into account their market impact, i.e. the effect of the position taken on the prices of the assets involved. I will examine the case of ordinary (multivariate) Gaussian diffusion and that of jump diffusions, with a brief analysis of the structure of the general case. The Kelly criterion and other objective functions can then be reexamined in this broader context. Close
Lorenzo Sindoni
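For reference, the Kelly criterion mentioned here picks the investment fraction that maximizes the long-run growth rate of wealth; for a simple repeated bet paying b:1 with win probability p, the optimum is f* = (p(b+1) − 1)/b. A quick numerical check of that baseline (illustrative only, without the market-impact corrections the talk develops):

```python
import numpy as np

def growth_rate(f, p=0.6, b=1.0):
    """Expected log-growth per bet when staking a fraction f of wealth."""
    return p * np.log(1 + b * f) + (1 - p) * np.log(1 - f)

fs = np.linspace(0.0, 0.99, 1000)
best = fs[np.argmax(growth_rate(fs))]
kelly = (0.6 * (1.0 + 1.0) - 1.0) / 1.0   # closed form: f* = (p(b+1)-1)/b
print(f"numerical optimum f = {best:.3f}, Kelly formula f* = {kelly:.3f}")  # ~0.2
```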
47005 Far from equilibrium: Wealth reallocation in the United States [abstract]
Abstract: Studies of wealth inequality often assume that an observed wealth distribution reflects a system in equilibrium, a constraint rarely tested empirically. In this talk we introduce a simple model that allows equilibrium without assuming it. To geometric Brownian motion we add reallocation: all individuals contribute in proportion to their wealth and receive equal shares of the amount collected. We fit the reallocation rate parameter required for the model to reproduce observed wealth inequality in the United States from 1917 to 2012 and find that this rate was positive until the 1980s, after which it became negative and of increasing magnitude. With negative reallocation or even with the positive reallocation rates observed, equilibration is impossible or too slow to be practically relevant. Therefore, studies which assume equilibrium must be treated skeptically. Currently the system is best described by a negative reallocation rate. Each time we observe it, we see a snapshot of a distribution in the process of diverging. It is much like taking a photo of an explosion in space, whose finite extent tells us nothing of the eventual distance between pieces of debris. Studies that assume equilibrium are incapable of detecting the dramatic conditions one finds without this assumption. Close
Yonatan Berman, Ole Peters, Alexander Adamou
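The model is compact enough to simulate directly: each wealth follows geometric Brownian motion plus a reallocation flow τ(⟨w⟩ − w_i). A minimal sketch with illustrative parameter values (note that with negative τ, wealth can go negative and the distribution keeps diverging, as the abstract describes):

```python
import numpy as np

def simulate_rgbm(tau, N=10_000, mu=0.02, sigma=0.2, dt=0.1, T=200, seed=4):
    """Reallocating geometric Brownian motion:
    dw_i = w_i (mu dt + sigma dW_i) + tau (<w> - w_i) dt."""
    rng = np.random.default_rng(seed)
    w = np.ones(N)
    for _ in range(int(T / dt)):
        dW = np.sqrt(dt) * rng.standard_normal(N)
        w += w * (mu * dt + sigma * dW) + tau * (w.mean() - w) * dt
    return w

for tau in (0.05, -0.05):
    w = simulate_rgbm(tau)
    top1 = np.sort(w)[-len(w) // 100:].sum() / w.sum()
    print(f"tau = {tau:+.2f}: top-1% wealth share = {top1:.2f}")
```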
47006 Financial Networks [abstract]
Abstract: In this talk I will present the basics of the network approach in order to evaluate the resilience of a financial network to shocks and distress, quantify the probability of contagion in an interbank network, identify early-warning signals of upcoming financial crises, and reconstruct missing interbank linkages (in monopartite, bipartite and multiplex networks). Close
Guido Caldarelli
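A standard way to probe the resilience and contagion questions listed above is a threshold default cascade on the exposure network. The sketch below is a generic illustration with networkx (the ring network and capital levels are invented), not any specific model from the talk:

```python
import networkx as nx

def default_cascade(G, capital, shocked):
    """Propagate defaults on a directed exposure network.

    G[u][v]['w'] is u's exposure to v (u loses w if v defaults);
    a bank defaults when its accumulated losses exceed its capital.
    """
    defaulted = set(shocked)
    losses = {n: 0.0 for n in G}
    frontier = list(shocked)
    while frontier:
        nxt = []
        for v in frontier:
            for u in G.predecessors(v):      # u holds claims on v
                if u in defaulted:
                    continue
                losses[u] += G[u][v]["w"]
                if losses[u] > capital[u]:
                    defaulted.add(u)
                    nxt.append(u)
        frontier = nxt
    return defaulted

# toy interbank network: a ring of exposures, thin capital buffers
G = nx.DiGraph()
for i in range(6):
    G.add_edge(i, (i + 1) % 6, w=1.0)        # bank i is exposed to bank i+1
capital = {i: 0.5 for i in range(6)}
print("defaulted:", sorted(default_cascade(G, capital, {0})))  # full cascade
```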
47007 The organization of the interbank network [abstract]
Abstract: The properties of the interbank market have been discussed widely in the literature. However, a proper model selection between alternative organizations of the network in a small number of blocks, for example bipartite, core-periphery, and modular, has not been performed. In this paper, by inferring a Stochastic Block Model on the e-MID interbank market in the period 2010-2014, we show that in normal conditions the network is organized either as a bipartite structure or as a three-community structure, where a group of intermediaries mediates between borrowers and lenders. In exceptional conditions, such as after the LTRO, one of the most important unconventional measures taken by the ECB at the beginning of 2012, the most likely structure becomes a random one, and only in 2014 did the e-MID market go back to a normal bipartite organization. By investigating the strategies of individual banks, we show that the disappearance of many lending banks and the strategy switch of a very small set of banks from borrower to lender is likely at the origin of this structural change. Close
Paolo Barucca, Fabrizio Lillo

Social and Economic Change as a Complex Dynamical System  (SEC) Session 2


Time and Date: 14:15 - 18:00 on 20th Sep 2016

Room: G - Blauwe kamer

Chair: Matthieu Cristelli

2005 Identifying booms and busts in housing prices with heterogeneous expectations [abstract]
Abstract: We develop a behavioral model for housing prices with heterogeneous expectations, in which fundamental buying prices are linked to housing rental levels. Using quarterly data we estimate the model parameters for eight countries: US, UK, NL, JP, CH, ES, SE and BE. We find that the data support heterogeneity in expectations, with temporary endogenous switching between fundamental mean-reverting and trend-following beliefs based on their relative performance. For all countries we identify temporary, long-lasting house price bubbles, amplified by trend extrapolation, and crashes reinforced by mean-reverting expectations. The qualitative predictions of such non-linear models are very different from standard linear benchmarks, with important policy implications. The fundamental price becomes unstable, e.g. when the interest rate is set too low or mortgage tax deductions too high, giving rise to multiple non-fundamental equilibria and/or global instability. We also discuss the estimation of similar non-linear switching models on other time series, such as stock prices, commodity prices, exchange rates and inflation. Close
Cars Hommes
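The switching mechanism being estimated is the familiar discrete-choice rule: predictor shares follow n_h ∝ exp(β U_h), and the price aggregates the active forecasts. Below is a stylized two-type sketch with invented parameters and squared-forecast-error fitness (the paper's exact specification differs); it only shows the rule in motion.

```python
import numpy as np

# Stylized two-type switching model: x_t is the deviation from the
# fundamental price; fundamentalists forecast v*x, chartists g*x, and
# the share of each type follows a discrete-choice rule with intensity beta.
R, v, g, beta = 1.01, 0.5, 1.03, 2.0
rng = np.random.default_rng(5)

x = [0.1, 0.1]
U_f = U_c = 0.0
for t in range(200):
    n_f = np.exp(beta * U_f) / (np.exp(beta * U_f) + np.exp(beta * U_c))
    x_new = ((n_f * v + (1 - n_f) * g) * x[-1]) / R + 0.001 * rng.standard_normal()
    U_f = -(v * x[-2] - x_new) ** 2      # fitness: negative squared forecast error
    U_c = -(g * x[-2] - x_new) ** 2
    x.append(x_new)

print("last deviations from the fundamental:", np.round(x[-5:], 4))
```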
2006 A complex systems lens on Dutch energy transition policy [abstract]
Abstract: The Netherlands has generally been a laggard in tackling the energy transition, to the extent that a judge ruled that the Dutch State is acting unlawfully by not contributing its proportional share to preventing global warming. A 2015 report (in Dutch), co-authored with the Netherlands Scientific Council for Government Policy (WRR), argues that a contributing cause of this state of affairs is that policy makers look at the energy system in too much isolation. The report is a practical case study of how taking a complex systems lens to the problem may open the path to different and innovative approaches to Dutch energy transition policy. Close
Roland Kupers
2007 The essential role of time in information filtering [abstract]
Abstract: Network-based metrics are commonly applied to a wide range of real-world problems, such as ranking, recommendation, and information spreading. Classical methods inspired by physical processes (like diffusion) often neglect the temporal order of interactions and, as a consequence, turn out to be highly ineffective when applied to systems evolving in time, which is particularly worrying as most real systems fall into this category. To devise improved and well-founded metrics, it is critical to understand the nature of the shortcomings of classical methods in evolving systems. In this presentation, we focus on the problems of ranking and recommendation in growing complex networks. For both problems, we show that by understanding the temporal patterns of the studied systems and unveiling the consequent lack of performance of classical metrics, we are able to design time-dependent methods that significantly outperform their static counterparts in singling out the most valuable items in the system. In the case of ranking, we use growing network models to show that PageRank centrality systematically exhibits a temporal bias towards old or recent nodes, depending on the temporal scales of node aging. We introduce a rescaled score which is built on the PageRank score and at the same time is not biased by node age. We use real and model data to show that the rescaled score allows us to identify high-quality nodes earlier than other metrics. In the case of recommendation, we use data from Netflix, Yelp and Digg to show that classical methods tend to favor old nodes and, as a consequence, systematically fail to identify recent items that will be collected by users in the future. We design a new time-aware method built on network growth patterns and show that it markedly outperforms its static counterpart. The findings presented here support the idea that time-aware modifications of existing metrics can lead to improved results in finding the most valuable information in diverse real systems. Close
Manuel Mariani
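The age bias and its correction can be prototyped directly: compute PageRank, then z-score each node against the nodes closest to it in age, so old and young nodes become comparable. A minimal sketch in the spirit of the rescaled score (the window size and the toy growing network are assumptions made here, not the authors' setup):

```python
import numpy as np
import networkx as nx

def rescaled_pagerank(G, birth, window=50):
    """z-score each node's PageRank against the ~window nodes nearest in age."""
    pr = nx.pagerank(G)
    order = sorted(G.nodes(), key=lambda n: birth[n])
    scores = {}
    for i, n in enumerate(order):
        lo = max(0, min(i - window // 2, len(order) - window))
        peers = np.array([pr[m] for m in order[lo:lo + window]])
        scores[n] = (pr[n] - peers.mean()) / (peers.std() + 1e-12)
    return scores

# toy growing citation network (uniform attachment, for brevity)
rng = np.random.default_rng(6)
G, birth = nx.DiGraph(), {}
for t in range(300):
    G.add_node(t); birth[t] = t
    if t > 0:
        for u in rng.choice(t, size=min(3, t), replace=False):
            G.add_edge(t, int(u))        # new paper t cites older paper u

scores = rescaled_pagerank(G, birth)
print("top 5 by rescaled PageRank:", sorted(G, key=lambda n: -scores[n])[:5])
```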
2008 Information Sharing in collective behavior [abstract]
Abstract: In animal collective behavior, interactions between individuals, and between an agent and the environment, are usually seen as mediated by information propagation. Cooperation can emerge with or without information sharing. Information propagation in ants, fish and birds seems to be the crucial mechanism that regulates the global behavior of the crowd. This talk will introduce research related to how information functions in collective groups, and several recent works that demonstrate interesting collective patterns in which information sharing plays a pivotal role. Close
Zhangan Han
2009 The Short- and Long-Run Damages of Fiscal Austerity: Keynes beyond Schumpeter [abstract]
Abstract: In this work we analyze the short- and long-run effects of fiscal austerity policies, employing an agent-based model populated by heterogeneous, boundedly-rational firms and banks. The model, in line with the family of "Keynes+Schumpeter" formalisms, is able to account for a wide array of macro and micro empirical regularities. In particular, it endogenously generates self-sustained growth patterns together with persistent economic fluctuations punctuated by deep downturns. On the policy side, we find that austerity policies considerably harm the economy by increasing output volatility, unemployment, and the incidence of crises. In addition, they depress innovation and the diffusion of new technologies, thus reducing long-run productivity and GDP growth. Finally, we show that "discipline-guided" fiscal rules are self-defeating, as they do not stabilize public finances but, on the contrary, disrupt them. Close
Andrea Roventini
2010 The Scientific Competitiveness of Nations: a network analysis [abstract]
Abstract: We use citation data of scientific articles produced by individual nations in different scientific domains to build a bipartite country-scientific domain network and to determine the structure and efficiency of national research systems. We characterize the scientific fitness of each nation (that is, the competitiveness of its research system) and the complexity of each scientific domain by means of a non-linear iterative algorithm able to assess quantitatively the advantage of scientific diversification. We find that technologically leading nations, beyond having the largest production of scientific papers and the largest number of citations, do not specialize in a few scientific domains. Rather, they diversify their research system as much as possible. On the other side, less developed nations are competitive only in scientific domains where many other nations are also present. Diversification thus represents the key element that correlates with scientific and technological competitiveness. A remarkable implication of this structure of scientific competition is that the scientific domains playing the role of "markers" of national scientific competitiveness are not necessarily those with high technological requirements, but rather those addressing the most "sophisticated" needs of society. We complement this analysis with a correlation study between the scientific impact of a nation, a normalized measure of R&D funds, and the level of internationalization. Close
Andrea Gabrielli
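The non-linear iterative algorithm referred to is, in the economic-complexity literature, the fitness-complexity map: a nation's fitness sums the complexities of the domains it is active in, while a domain's complexity is dominated by the weakest nations active in it. A minimal sketch on an invented 3x3 binary matrix (an illustration in the spirit of the algorithm, not the authors' code):

```python
import numpy as np

def fitness_complexity(M, n_iter=100):
    """Fitness-complexity iteration on a binary country x domain matrix M."""
    F = np.ones(M.shape[0])             # country fitness
    Q = np.ones(M.shape[1])             # domain complexity
    for _ in range(n_iter):
        F_new = M @ Q                   # fitness: sum of domain complexities
        Q_new = 1.0 / (M.T @ (1.0 / F)) # complexity: penalized by weak countries
        F = F_new / F_new.mean()        # normalize at every step
        Q = Q_new / Q_new.mean()
    return F, Q

# toy example: country 0 is fully diversified, country 2 is active only
# in the ubiquitous domain that every country is competitive in
M = np.array([[1, 1, 1],
              [1, 1, 0],
              [1, 0, 0]])
F, Q = fitness_complexity(M)
print("fitness:", np.round(F, 3), " complexity:", np.round(Q, 3))
```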
2011 How to measure, manage and eliminate systemic risk in the financial system [abstract]
Abstract: Systemic risk in financial markets arises largely through the interconnectedness of agents via financial contracts. We show that the systemic risk level of every agent in the system can be quantified by simple network measures. With actual central bank data for Austria and Mexico we are able to compute the expected systemic losses of the economy, a number that allows us to estimate the true cost of a financial crisis. We further show with real data that it is possible to compute the systemic risk contribution of every single financial transaction to the financial system. Based on these findings, we suggest a smart financial transaction tax that taxes the systemic risk contribution of individual transactions. This tax provides an incentive for market participants to trade financial assets in a way that effectively restructures financial networks so that contagion events become impossible. It is possible to show the existence of a systemically risk-free equilibrium under this smart tax. More intuitively, with the help of an agent-based model we demonstrate that the Systemic Risk Tax practically eliminates network-driven systemic risk in a system. Close
Stephan Thurner
2012 Dynamics of rapid innovation [abstract]
Abstract: We introduce a model of innovation in which products are composed of distinct components and new components are adopted one at a time. We show that the number of products we can make now gives a distorted view of the number we can make in the future: simple products are over-represented, and complex products under-represented. By applying this at the component level, we derive a strategy for making far-sighted innovation choices that increase the rate of innovation. We apply our strategy to real data from three different sectors: language, gastronomy and mobile technology. Close
Thomas Fink
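The component-level look-ahead can be made concrete with a toy recipe book: products are sets of components, and a far-sighted innovator scores each candidate component by how many products become makeable a few adoptions later. All recipes and the two-step horizon below are invented for illustration, not the paper's data or strategy.

```python
# toy "recipe book": each product is the set of components it requires
recipes = [frozenset(r) for r in
           [{1}, {2}, {1, 2}, {1, 3}, {2, 3, 4}, {3, 4, 5}, {1, 4, 5}]]

def n_products(held):
    """Number of products makeable from the components held."""
    return sum(r <= held for r in recipes)

def best_next(held, horizon=2):
    """Pick the component whose adoption maximizes the number of
    makeable products after `horizon` total greedy adoptions."""
    pool = {c for r in recipes for c in r} - held

    def value(h_set, h):
        if h == 0:
            return n_products(h_set)
        return max((value(h_set | {c}, h - 1) for c in pool - h_set),
                   default=n_products(h_set))

    return max(pool, key=lambda c: value(held | {c}, horizon - 1))

held = set()
for _ in range(4):
    c = best_next(held)                  # far-sighted: look two steps ahead
    held.add(c)
    print(f"adopt component {c}: can now make {n_products(held)} products")
```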

Law and complexity  (LAC) Session 2


Time and Date: 14:15 - 18:00 on 20th Sep 2016

Room: H - Ontvangkamer

Chair: Marion Dumas

22009 Keynote lecture: What is Law? Gillian Hadfield
22010 Session 3: Law, complexity and public policy
22011 Complexity Theory and the Economics of Intellectual Property [abstract]
Abstract: The intellectual property system is a relatively simple concept, but one with extremely ambitious aims. Its aims are no less than to stimulate "innovation": the production of the intellectual, cultural and technological artefacts which fuel modern economies. This aim is extremely ambitious because the nature and processes behind innovation are still in many ways opaque. There seems to be a startling disconnect between the acknowledged complexity of innovation systems and the extremely simplistic economic models that underwrite intellectual property. Knowledge production is cumulative and its production systems are complex: markets "tip", information "spills over", (small-world) networks may form and distribute value hubs according to power laws. Although these observations about the nature of innovation litter the footnotes of the intellectual property literature, they are yet to be synthesized into a robust theory of innovation. Despite the acknowledged complexity of innovation, the economics behind intellectual property tends to pivot on a handful of simplifying metaphors: the commons are tragic and every prisoner is caught in a dilemma. Although these metaphors have come under increasing fire in recent years from scholars like Elinor Ostrom, Lawrence Lessig, Brett Frischmann and Yochai Benkler, all of whom argue that they conceal more than they reveal and do not match empirical findings, no robust and sufficiently rich theory of innovation has emerged from the criticism. Why do these metaphors still drive the justifications for intellectual property? How does the forefront of intellectual property scholarship grapple with the acknowledged complexity of innovation, if at all? And how do the courts take all this on board in the enforcement of IP? This paper will briefly sketch some notes on these questions, with the aim of identifying projects that may contribute to answering them. Close
Carl Mair
22012 A Framework for an Agent-Based Model to Enhance the Governance of Law over Data Protection Issues [abstract]
Abstract: A fundamental proposition in data protection arrangements holds that securing the right to personal data and enforcing rights-requirements are essential for the ultimate inner peace of human beings. Among these arrangements, some are entirely private, such as technology standards for data protection, and do not rely on legal authority. Other rules and legal institutions, such as the European legal framework for data protection issues, are built by legislation to secure the right to privacy in cyberspace and to enforce the law there too. Regulatory agencies, such as national data protection authorities, restrict private conduct that might adversely influence others, and courts resolve disputes. However, legal theory does not tell us which of these mechanisms of securing the right to informational privacy is the best, and in reality they are all far from perfect. Technology-based private standards, while working well in some environments and time periods, often degenerate into technical violence. Traditionally, orthodox mechanisms of law enforcement are considered optimal; although public regulation might be corrupted and "captured" by the very enforcement mechanisms themselves, they are considered optimal in enabling an ordinary data subject to seek justice against technically more powerful offenders who control private enforcement. Few studies have devoted attention to analyzing the strength of deterrence offered by data protection law and its enforcement. From the point of view of evaluating the effectiveness of laws, however, it is crucial to understand the factors that make laws function more or less effectively. In many countries, policy actors and researchers attribute poor governance over data protection to the legislation being soft, backward (from the perspective of technology), or both. Yet it may be that the poor enforcement structure within which enforcers, either judges or regulators in data protection authorities, work is the dominant cause. The two may also interact: poor enforcement structures may cause soft and backward data protection law to fail in deterrence. Since both enforcement structures and legislation are endogenously determined, establishing causality is hard. By providing a clean theoretical method for identification, ABM-based simulation experiments can help cut the Gordian knot, identify problems in enforcement, elucidate the causes of deterrence failures, and suggest potential solutions. These contributions of ABM simulation experiments are the focus of this expected contribution. In the field of CAS, ABM simulation is gaining new momentum. ABM offers the possibility of simulating human actions at a micro-scale level in a way that blends in with the existing network of theoretical approaches in the social sciences. In our case, the enforcement dynamics and social interactions are analyzed with an ABM. The contribution will examine the applicability of techniques that simulate different groups of agents involved in data processing at a regional level. The contribution will present the particulars of an ABM for enforcing data protection law, based on peer effects among agents. As the most pressing issue, we try to offer some principles, based on agent-based modeling, about the marked differences among jurisdictions in the strength of deterrence of their data protection laws. We will describe how the ABM simulation experiments could inform our understanding of the current problems in data protection law and help identify potential reforms of data protection law. One novelty is the combined analysis of the governed agents and demand agents, which allows for incorporating psychological findings with respect to the updating of law enforcement over a case cycle. Another novelty concerns individuals' social behavior regarding an optimal provision of compliance with law. Close
Kunbei Zhang
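As a loose illustration of the kind of peer-effect enforcement ABM sketched above, the following minimal Python model is our own construction: all parameter values, the payoff structure, and the imitation rule are assumptions for the sake of the example, not details of the authors' model. Data-controller agents choose between compliance and violation under an enforcement probability and conformity pressure from neighbors.

    import random

    # Minimal sketch: agents on a ring choose to comply with data
    # protection law or to violate it; they imitate whichever choice
    # earns more, mixing private gain, sanction risk and a peer effect.
    N = 200            # number of data-controller agents (assumption)
    P_ENFORCE = 0.1    # probability a violation is detected and fined
    FINE = 5.0         # sanction size (assumption)
    GAIN = 1.0         # private gain from violating (assumption)
    PEER_WEIGHT = 0.5  # strength of conformity pressure (assumption)

    state = [random.random() < 0.5 for _ in range(N)]  # True = comply

    def payoff(i):
        """Expected payoff of agent i given the current states."""
        neighbors = [state[(i - 1) % N], state[(i + 1) % N]]
        peer_share = sum(neighbors) / len(neighbors)  # complying peers
        if state[i]:  # complying: no private gain, conformity reward
            return PEER_WEIGHT * peer_share
        # violating: gain minus expected fine, plus nonconformity reward
        return GAIN - P_ENFORCE * FINE + PEER_WEIGHT * (1 - peer_share)

    for step in range(5000):
        i, j = random.randrange(N), random.randrange(N)
        if payoff(j) > payoff(i):  # imitate the better-performing agent
            state[i] = state[j]

    print("compliance rate:", sum(state) / N)

Sweeping P_ENFORCE and FINE in such a toy is one way to pose, in miniature, the question the abstract raises: whether weak legislation or weak enforcement structure dominates deterrence failure.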
22013 Decomposing Contractual Privacy [abstract]
Abstract: As a result of technological developments, many consumer contracts, like banking, shopping and the use of telecommunications, now involve the collection, processing and use of personal data. Storage and analysis of this data become ever easier and cheaper as a result of "Moore's law"; databases have therefore evolved into "big data". In this context, consumers are entering into ever more "privacy contracts": contracts where agreeing to the processing of personal data is required to use goods or services. From a fundamental rights standpoint alone, evaluating privacy effects for their compliance with the law is already a complex problem. For example, citizens' "reasonable expectation of privacy", as well as the "necessary in a democratic society" criterion, are adjudicated on a case-by-case basis, and these notions are constantly re-evaluated as a result of changing technology and social norms. But complexity increases even further as growing numbers of consumers, controllers, data brokers and government agencies, as well as undisclosed algorithms, play a part in the decision-making process. Most consumers cannot be bothered with privacy concerns and simply click "I agree" just so as not to miss out on the latest novelty service. As an emergent phenomenon this gives data controllers almost unlimited license to use personal data as they see fit. Incentives to disclose or discover the effects of these contracts seem to be lacking or ineffective. Privacy contracts have therefore become both a modern risk (as described by Ulrich Beck) and a new area where Perrow's "normal accident theory" can unfold, affecting individual privacy, marginalization and mass surveillance. Zhang and Schmidt have illustrated that the Personal Data Community is a Complex Adaptive System. Therefore, to find effective incentives for all members of the Personal Data Community, we propose an agent-based approach to facilitate the quest for effective models, incentives and criteria for safeguarding consumers' privacy interests when they enter into contracts. Based on a previously performed comparative analysis of decision-making processes and human rights protection law, we suggest modeling the market, the lawmaking process and dispute resolution processes bottom-up, at both the national and the EU levels. Input parameters for the agents will represent opportunities for participation, transparency and accountability. The goal of the simulation is to determine whether a limited number of criteria (legal constraints) can sufficiently help to unearth the essential properties of consumer privacy contracts in the complex adaptive system that is the EU personal data community. The goal of the presentation is to discuss the viability of this approach with the complex systems community. This serves to clarify whether the approach is likely to add value to the more traditional approach of legal standard setting. Close
Michiel Rohen
22014 Understanding cyber security as a complex adaptive system [abstract]
Abstract: Using the framework provided by Zhang & Schmidt for understanding data protection law's subject matter as a complex adaptive system (CAS), I map out the various units and interconnections of the cyber security field. My first proposition, therefore, is that the subject matter of cyber security laws can be described as the cyber security community (CSC). This distinction between the CSC and the laws that govern it, like Zhang's and Schmidt's distinction between the Personal Data Community (PDC) and its control systems, should make it possible to visualise the interaction between the two and highlight the nonlinear nature of this interaction. It may be necessary to deviate from the provided scaffolding due to the more complex nature of the CSC compared to the PDC. Jurisdictional variations in the maturity of cyber security policy, and the effect of transnational bodies such as the EU or NATO on the CSC, might warrant delimiting the study to a subset of jurisdictions. Simplification would, however, bring only a pyrrhic victory, as the international dimension of the CSC is ill-suited for compartmentalisation, notwithstanding substantial differences in e.g. transatlantic approaches to cyber security. Zhang and Schmidt concluded that a boundary and some internal coherence are prerequisites for the PDC to be considered a CAS, and placed the boundary on the interest in using personal data. The same prerequisites apply to the CSC, and the results of this process are not obvious at this stage, as an interest-based boundary would not suffice for ubiquitous cyber security, where even oblivious users affect it through their passivity. My second proposition is that the CSC is inherently global and hence ill-suited for national or transnational jurisdictional demarcation, and that a conceptual boundary should be drawn to make the CSC a whole. An interesting addition to considering the CSC as a CAS may come from Balzacq's and Cavelty's application of actor-network theory (ANT) to cyber security. Kim and Kaplan suggest that ANT and CAS theory may be complementary in the field of information systems, providing a co-evolutionary picture of both states and transitions. For a legal perspective on complexity, the transition is as interesting as the state, as Zhang and Schmidt noted in their critique of positivist and realist perspectives in legal scholarship. My third proposition is that a dialectic perspective on the CSC will provide more understanding of the dynamic that affects the CSC as a CAS. Challenges to conceptualising the CSC arise firstly from fluctuating consensus in defining cyber security; without a unified way of describing the subject matter of the CSC, any further analysis would rely on the author's selection of an approach to cyber security. The second challenge is the inclusion of the debate on the nature of cyber security policy. Cyber security is considered to be either securitized or in the process of de-securitization in its EU context, and discursive evidence suggests securitization in the US context. Whether cyber security is viewed as securitized or politicized affects the way the CSC is formed and what weight is placed on different agents in the (sub)dots. However, it is not necessary to resolve the question of securitization before depicting the CSC as a CAS. The question may actually be answered more clearly after a model of the CSC as a CAS has been developed.
Agents are weighted differently in the CSC, and inferences about the (de-)securitized nature of cyber security may be drawn on a jurisdictional basis even if the CSC itself is global. My fourth proposition is that making the CSC visible will also make it easier to determine whether cyber security is currently securitized in a specific jurisdiction. Close
Tuomas Tiihonen
22015 Session 4: Law and Complex Systems Science - Roundtable
22016 Legal Norms as Emergent Phenomena [abstract]
Abstract: This presentation builds a theory of the difference between legal texts and legal norms out of complexity theory. There is a consensus in legal theory about the existence of a fundamental difference between texts of positive law and legal norms. However, there is no consensus on the terms of this distinction. This presentation argues that law can be understood as a complex adaptive system and that texts and norms cannot be merged, because norms, as meanings of texts, are phenomena that emerge in the interpretation process. The presentation thus understands legal norms as emergent phenomena and interpretations as self-organizing processes in complex systems, which are formed of texts, theories, interpreters, values, worldviews, cases and other elements. To achieve that, the presentation contains a study that explains what emergence is in complexity theory and how emergent patterns are related to the parts that form them. Close
André Folloni
22017 Law at the edge of chaos. In praise of a complexity-inspired approach to the legal universe [abstract]
Abstract: Complexity theory has triggered a deep change in the way we conceive and interpret the intricacy of the real world. Its holistic vision has not only shown how the behavior of complex systems can emerge from simple interactions between their components, but has also provided a unified theoretical framework that inspires today's research in a variety of areas, from physics to biology. In the last 20 years, the complexity paradigm has gradually impacted the social sciences too. Together with other factors - the spread of information technologies, the rise of Big Data and the advances made in the investigation of the cognitive and biological underpinnings of human behavior - complexity science has paved the way for a new generation of social scientists who not only reject the idea of a strict separation between social and natural phenomena, but are gradually opening up to the empirical, experimental and computational approaches that typically characterize the physical and natural sciences. An example of this is social simulation, a trans-disciplinary methodology that is suggesting new explanations of the bottom-up emergence of complex social dynamics. As witnessed by recent works in computational legal studies and empirical legal research, this trend is gradually involving the legal world, suggesting new ways to look at law, a natural social phenomenon to be understood as such in all its factual (biological, cognitive, social) complexity. Thanks to the spread of innovative approaches and methodologies spanning from agent-based modeling to social network analysis, we face the possibility of imagining a new legal research paradigm that overcomes the limits of a traditional scholarship which still restricts its scope to the analysis of the formal expressions of law (written norms, case law, legal literature). Grounded in concrete research experiences, the proposed analysis is a plea for cross-fertilization between complexity science and legal science, a choice seen as an urgent need from both a scientific and an applicative standpoint. On the one hand, it represents a precious chance for a deeper understanding of all the phenomena that populate the legal universe. On the other hand, it can significantly increase the capacity of legal technology to deal with the challenges of a more and more global, interconnected and complex world. Close
Nicola Lettieri
22018 Can complexity theory help us understand tomorrow's e-justice? [abstract]
Abstract: This paper investigates the e-Justice design and implementation experience taking place at EU level for the provision of cross-border judicial services, viewed through the lens of complexity theory. The main drive is to better understand the elusive features of this phenomenon, which is proving very difficult to tackle (and to explain) through the traditional process-modeling and normative-driven approaches typically adopted in the field. The design and implementation of ICT systems for (EU cross-border) justice service provision has shown a plurality of complexity factors:
1. It is multi-domain: technological, legal, organizational and political dimensions are intertwined and interdependent.
2. Organizational, legal and technological components of such systems are not the result of a blueprint design but are assembled through a process of design, discovery and adaptation, building on an only partially and locally known, evolving installed base, showing elements of emergence, non-linearity and path dependence in the way the net of relations between components is established (and evolves) and the assemblage takes shape.
3. Borders of the systems are not clearly defined. Often the components (organizations, laws, technologies) of a system are large systems themselves, only marginally involved in the system being developed/implemented/maintained (e.g. the Italian PCT is a service provider within the e-CODEX system, but e-CODEX is just a very limited part of the PCT).
4. Components have been shown to follow independent, not always convergent, evolution paths.
5. The systems (and their components) are influenced by developments and initiatives at EU level, but also by developments in the Member States (such as changes in legislation with an impact on the competences of legal authorities or legal professionals, or changes in IT systems).
6. Objectives of these systems and of their components evolve over time and result from multiple interactions of a plurality of actors (and components) distributed across several localized times/places/events.
7. Decisions and assemblages may emerge, and stable/performative configurations are achieved (only) on a temporary basis.
8. Development and use reinforce stability.
9. Loose coupling, standardization of interfaces and adaptation support the stability of the system and reduce the impact of local disruptive events and drift dynamics.
10. The stability of these systems is always dynamic.
A typical example of such systems is e-CODEX, an EU e-Justice system which is the result of a 66-month, €25m EU co-funded project (initially 36 months and €15m), creating an infrastructure to support the communication and the exchange of legal information in EU cross-border judicial procedures. Confronted with this plurality of factors, the EU and Member States now face the need to design very complex systems capable of black-boxing this complexity and resulting in very simple and user-friendly (socio-techno-legal) tools (what counts as simple and user-friendly depends on the user's competences, available tools, relation to the justice domain, and service needs).
An initiative in this direction is the API for Justice project, coordinated by the Ministry of Security and Justice of the Netherlands and aiming at providing effective digital access to the e-CODEX system with an interface for alternative entry points to cross-border legal procedures (an alternative entry point could, for example, be a website to support consumers in dealing with a specific type of dispute). As previously mentioned, the traditional development and implementation approaches in use in the e-Justice domain have fared poorly when confronted with this challenge. It is the opinion of the authors that a complexity-theory-driven approach could help better understand the situation and manage it more effectively through initiatives such as the API project. Close
Marco Vellicogna, Ernst Steigenga
22019 Complexity Squared: An Outline of Challenges Inherent to the Application of Complexity to 'Law' [abstract]
Abstract: The task of combining complexity and law is still at a nascent stage of its development. I argue that there are two reasons why complexity has not (yet) been fully embraced by the legal academy. Firstly, because as a 'systems' concept, it must compete for attention and recognition in a space that is dominated by Luhmannian autopoiesis, but that is also suspicious of claims that resemble outdated comparative-law discussions of national 'legal systems'. Secondly, it is not immediately clear how, or to what ends, complexity could apply to law, because complexity itself is such a diverse field, but also because there is no consensus on what 'law' is. Like a Swiss army knife, complexity is an analytical tool with a variety of applications for a broad array of disciplines. Although, on the one hand, this offers a large variety of options for how one might apply complexity to law, on the other, that variety multiplies into a dizzying abundance of potential when combined with the equally large variety of ways in which law itself is and has been conceptualized. However, with no established consensus on either complexity or 'law', this abundance, while exciting, can also be daunting for its lack of established pathways for scholars to take. Law's empirical components have obvious cross-over applicability that might make the application of complexity relatively straightforward for social scientists or others, but law's normative elements cannot be ignored, even if their relationship to complexity is less straightforward. Considering law's normativity from a complexity perspective poses entirely new questions, such as the complexity of norms and rule systems within human society, for which neither lawyers nor social scientists have easy answers. Combining normativity and empirics leads one to an even higher order of complexity for the study of complexity in law. To address this, I propose that the application of complexity to the study of law requires a researcher to select one of (at least) three fundamentally different perspectives representing significant conceptual cleavages in legal scholarship: a) law as an observable 'thing' in the world; b) law as norms; and c) law as an instrument. From that choice, options for how and in what way complexity might be applicable will flow, although none will be determinative. Even within each of the three there is great variability. Law as a 'thing' can refer to legal positivism and the study of the architecture or legislative hierarchy, or it can refer to empirical practices or processes of conflict resolution or judgment, or it can ask questions about law as a social phenomenon, whether as an assemblage of practices, processes, relationships, and/or behaviours. All of those require quite different considerations for how one might design a complexity-based research project. Similarly, thinking of law in terms of a variable and changing complex ordering of norms and values requires different conceptualizations of complexity, most likely, but not always, rooted in highly contentious beliefs about the role, location, and significance of norms and values in human, political, modern, and inter- or transnational societies.
Finally, seeing law as an instrument of social, political, or other engineering might draw on entirely different applications of complexity, themselves differentiated between those that might see law as something that tries to regulate, control or direct complex phenomena, such as society, and those that might see law as a complex instrument in and of itself. This paper will tease out these distinctions with the underlying proposal that combining complexity and law is an intellectual travail that is long overdue, but one that is fraught with conceptual challenges, which might explain its relative tardiness. Close
Michael Leach
22020 Bio-Knowledge Governance in Complex Intellectual Property and Innovation Systems: New Approaches to Design Law and Policy to Promote Access to Knowledge and Innovation [abstract]
Abstract: Biotechnology research and development (R&D) lies at the center of health-related innovation and has redefined expectations and solutions to health problems around the world. Meanwhile, theories relating to intellectual property rights (IPRs) struggle to suggest how best to shape the legal system to maximize innovation and access to knowledge, especially in developing countries. The legal regulation of knowledge by intellectual property (IP) and innovation systems has become more complex, field specific, and fragmented into different regimes (IPR-specific legislation, innovation law, and biodiversity law, for instance) and levels of legislation (international, national, regional, federal, provincial, municipal, and institutional). The fragmented organization of IPRs, their regime complexity, and the diversity of relationships, expectations, and strategies of the different agents composing these systems are some of the aspects that characterize knowledge regulation in biotechnology R&D as a complex issue. Aggravating this complexity are the insufficient empirical evidence about the results of IPR protection in health-related biotechnology R&D and the lack of a technology-specific approach in legal regulation. This presentation explores how complexity science, especially complex adaptive systems (CAS) theory, can inform law and policy and guide the way in which developing countries legislate, develop policy, and negotiate internationally. The main goal is to better understand the regulation and governance of health-related biotechnology R&D by IP and innovation systems and to design these systems to guarantee access to knowledge and innovation in developing countries. IP and innovation systems are built upon the understanding, expectations, and strategies of the different agents composing them. However, agents have limited understanding and are unable to predict every outcome occurring within these systems. This research identifies two main environmental forces influencing the regulation and governance of IP and innovation systems: the dialogue between, and translation of, science into the law constructed through multidisciplinary and empirical research (herein called the law-science nexus), and power and politics in international trade and relations, as well as power and politics as articulated by different agents nationally at diverse levels of jurisdiction. This research explores the idea of strengthening the law-science nexus to minimize the influence of power and politics, in order to design access-to-knowledge- and innovation-driven systems which are more resilient and dynamic. Additionally, to suggest changes in law and policy that are more likely to be adopted successfully in IP and innovation systems, it is necessary to analyse the understanding, expectations, and strategies of different agents towards these systems, and the hierarchy, organization, and environmental influences on these systems. The main outcome of my research is a framework of thinking based on complexity science principles to guide the process by which developing countries negotiate internationally, legislate, and develop policy in health biotechnology. Close
Priscilla Maria Dias Guimaraes Cesar

Complex Systems in Education  (CSIE) Session 2

Schedule Top Page

Time and Date: 14:15 - 18:00 on 20th Sep 2016

Room: I - Roland Holst kamer

Chair: Matthijs Koopmans

28006 State Space Analysis and Its Connection to the Classroom [abstract]
Abstract: Discrete dynamical systems have been used to theoretically model the complex dynamics of classrooms. While time-series analyses of these models have yielded some insights, state space analyses can yield additional ones; this paper will explore state space analyses and their application to classroom situations. One benefit of state space analysis is that it allows simultaneous exploration of multiple time-series, and so can more easily provide information about divergence and convergence of paths. Additionally, state space analysis, more easily than time-series analysis, can provide information about the existence of multiple paths leading toward a desired state. Further, state space analysis can identify different regimes of behaviors, finding boundaries near which there may be divergent behaviors, and also using those regimes to define a (sometimes) relatively small number of archetypical behaviors. This is particularly useful in tracking behaviors at a microgenetic level, since multiple initial conditions may reach the same (or very close) final states, but in dramatically different ways, and these different routes may have implications for future classroom experiences. Because of these advantages, state space analysis can be used to inform attempts at differentiated instruction in a classroom, assist modelers in identifying appropriate parameter scales, and provide guidance for empirical studies of classroom learning. These ideas will be illustrated through state space analysis of an existing model of teacher-student interactions, identifying four regimes of behaviors and leading to several implications for classroom practice and research. Close
Bernard P. Ricca & Kris H. Green
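As a toy illustration of this kind of state space analysis, the sketch below sweeps a grid of initial conditions of a two-variable discrete map and records where each trajectory ends up; distinct final states hint at multiple regimes and basins of attraction. The map itself is a generic coupled logistic-style system of our own choosing, not the teacher-student model analyzed in the talk.

    import numpy as np

    # Toy two-variable discrete system standing in for a classroom model:
    # t = teacher engagement, s = student engagement, both in [0, 1].
    A, B = 3.2, 2.9  # growth parameters (illustrative assumptions)

    def step(t, s):
        t_next = A * t * (1 - t) * (0.5 + 0.5 * s)
        s_next = B * s * (1 - s) * (0.5 + 0.5 * t)
        return t_next, s_next

    # Sweep a grid of initial conditions and record the final state of
    # each trajectory, the state-space analogue of running many
    # classroom "histories" from different starting points.
    finals = {}
    for t0 in np.linspace(0.05, 0.95, 19):
        for s0 in np.linspace(0.05, 0.95, 19):
            t, s = t0, s0
            for _ in range(200):
                t, s = step(t, s)
            finals[(round(t0, 2), round(s0, 2))] = (round(t, 2), round(s, 2))

    # Few distinct final states = few regimes; many = richer dynamics.
    print(sorted(set(finals.values())))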
28007 Teacher Effect on Student Test Scores Revisited: A Network Analysis of Complexity Assumptions [abstract]
Abstract: Typically, studies of the effect of teaching on student test scores produce coefficients of determination of less than 10% after SES is controlled. We propose that this may be attributable to the common assumption that people contribute to organizational outcomes through their individual characteristics, skills or attitudes. Yet even casual observers have seen interacting individuals "feed off" one another such that individual and group characteristics are amplified beyond the mere accumulation of individual skills. Therefore, we ask, "Do teachers who are key agents in school networks promote higher test scores than teachers who are not key agents?" We performed analyses in seven elementary schools in a single district. Data came from all professional and support staff in those schools. We first conducted network analyses for each school to calculate network measures, i.e., the degree to which each staff member was engaged in group dynamics. Approximately 30 measures for trust, advice, and social relationships were identified; these included such things as betweenness centrality (the degree to which an individual influences communications between groups) and Simmelian ties (engagement in 3-way reciprocal relationships). Measures for teachers who taught math, ELA, reading, social studies, and science (subjects with end-of-year test scores) were then combined across schools, and HLM was conducted on the resulting dataset. Predicted test scores were calculated by entering the school identifier with a random intercept and by controlling for student ethnicity and SES. We performed stepwise regression on predicted test scores with the network measures as independent variables. Between 45% and 72% of variation was explained for the various tests. Specific results varied by subject, but measures for trust and advice were the strongest variables, and social engagement only explained science and social studies scores. The types of engagement most explanatory were those in which teachers were sought out for their trustworthiness or the apparent quality of their advice. Close
Russ Marion and Xiaoyan Jiang
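A minimal sketch of this pipeline on synthetic data: betweenness centrality and a Simmelian-style tie count (here simplified to membership in mutually reciprocated triangles), followed by an ordinary least-squares regression of scores on the two measures. The real study used roughly 30 measures, HLM with random intercepts, and stepwise regression; everything below, including the fabricated scores, is illustrative only.

    import networkx as nx
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic directed "advice" network among 40 staff members.
    G = nx.gnp_random_graph(40, 0.12, seed=0, directed=True)

    # Betweenness centrality: how often a person lies on shortest
    # communication paths between colleagues.
    btw = nx.betweenness_centrality(G)

    # Simmelian-style ties, counted here as the number of triangles of
    # mutually reciprocated relationships each node belongs to.
    M = nx.Graph([(u, v) for u, v in G.edges() if G.has_edge(v, u)])
    tri = nx.triangles(M)

    # Fabricated "predicted test scores" for illustration only.
    x1 = np.array([btw[i] for i in G])
    x2 = np.array([tri.get(i, 0) for i in G], dtype=float)
    y = 50 + 200 * x1 + 2 * x2 + rng.normal(0, 2, len(x1))

    # OLS of scores on the two network measures.
    X = np.column_stack([np.ones_like(x1), x1, x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("intercept, betweenness, simmelian:", np.round(beta, 2))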
28008 Momentary assessment of interpersonal adaptation in teacher-student interactions [abstract]
Abstract: We observed how real-time classroom interactions in 35 secondary education classes unfold in time, to study the extent to which teacher and class behaviors adapt to each other interpersonally: to what extent do students follow the teacher's behavior? We used Sadler's joystick method to observe interpersonal teacher and student behavior in terms of agency and communion (Sadler, Ethier, Gunn, Duong, & Woody, 2009) during the lesson start (the first 10 minutes of the lesson). We used spectral analysis to identify cyclical patterns in each individual teacher-class interaction. To determine the degree of synchronization between teacher and class behavior, we calculated coherence and phase (Warner, 1998). The results of the study will be illustrated and explained in depth by zooming in on the specific results of Teacher-class 16, whose teacher is a 24-year-old male chemistry teacher with 2 years of teaching experience at the beginning of our study in 2010. His results will be compared to the general findings for the 35 teachers in the study. For Teacher-class 16, coherence values were .65 for communion and .78 for agency, indicating a considerable degree of synchronicity between interpersonal teacher and class behavior. Teacher-class 16's phase values were -.01 for communion and .46 for agency. These values show that the teacher only slightly tends to follow the students in communal behavior and leads the students in agentic behavior. Further analysis of the coherence and phase values of the 35 teachers showed that differences in coherence and phase are related to the quality of the teacher's interpersonal style. Close
Helena J. M. Pennings
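A minimal sketch of the coherence/phase computation described above, using scipy.signal on two synthetic behavior series; the sampling rate, the signal shapes, and the lag are assumptions standing in for the joystick-coded agency data, and scipy's Welch-based estimates stand in for the Warner (1998) procedure.

    import numpy as np
    from scipy import signal

    fs = 2.0  # samples per second (assumption)
    t = np.arange(0, 1200) / fs  # 10 minutes, matching the lesson start

    # Synthetic teacher and class "agency" series sharing a slow cycle,
    # with the class lagging the teacher by a quarter cycle.
    rng = np.random.default_rng(1)
    teacher = np.sin(2 * np.pi * 0.02 * t) + 0.3 * rng.standard_normal(t.size)
    klass = np.sin(2 * np.pi * 0.02 * t - np.pi / 2) \
            + 0.3 * rng.standard_normal(t.size)

    # Coherence: frequency-wise synchronization of the two series.
    f, coh = signal.coherence(teacher, klass, fs=fs, nperseg=256)

    # Phase of the cross-spectral density: who leads whom, per frequency.
    f2, Pxy = signal.csd(teacher, klass, fs=fs, nperseg=256)
    phase = np.angle(Pxy)

    i = np.argmax(coh)  # frequency bin with the strongest synchronization
    print(f"peak coherence {coh[i]:.2f} at {f[i]:.3f} Hz, "
          f"phase {phase[i]:.2f} rad")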
28009 Conditions for Ecologies of Learning [abstract]
Abstract: Ecologies in nature are complex adaptive systems, and complex adaptive systems learn. Learning is essential for all living systems. Learning ecologies are comprised of many diverse, interdependent agents, continually self-organizing in surprising ways as systems adapt to shifting environments. System-wide patterns emerge and interact across multiple levels of organization. The ecology of learning provides an evocative and useful metaphor for powerful teaching and learning systems. Natural ecological systems and learning ecologies share at least three characteristics for transformational complex change (learning): 1. Open, permeable boundaries that allow information, energy, and resources to flow freely; 2. Diverse agents that hold tension, generating energy to move the system; 3. Nonlinear exchanges that serve as feedback for iterative, continuously transforming systems. Three different education programs, spanning K-12, university writing and post-graduate health professions education, are described as ecologies of learning and teaching that share practice and theory in Human Systems Dynamics, including: 1. A broad understanding of interconnected knowing, different from superficial, short-term, sequenced, disconnected bits of information; 2. Pattern logic of the whole rather than data logic of individual items, both co-embedded in complex landscapes of socially organized learning; 3. The ability to see, understand, and take action to influence the conditions that lead to complex patterns. How we continue to establish conditions to sustain deep learning ecologies for teachers and learners is embedded in an iterative educational process of planning and action: What do we know about complex systems? How do we use what we know to shape conditions for learning ecologies in educational systems? How do we establish and sustain inquiry? These questions frame deep reflection and professional conversation. They set conditions for an ecology including expectations, experiences, and emergent structures that support a praxis of deep learning. Close
Leslie Patterson, Royce Holladay, & Stewart Mennin
28010 NetSciEd: Teaching Networks to Everyone [abstract]
Abstract: Since its boom in the late 20th century, network science has become ever more relevant to people's everyday lives. Knowledge about networks can help us make sense of this increasingly complex world, making it a useful literacy for people living in the 21st century. Network science offers a powerful approach for conceptualizing, developing, and understanding solutions to complex social, health, and environmental problems; and it also provides opportunities to develop many of the skills, habits of mind, and core ideas that are not currently addressed in extant elementary/secondary education curricula and teaching practice. There is a need for curricula, resources, accessible educational materials, and tools about networks. In this talk, we present a summary of the NetSciEd (Network Science and Education) initiative that we have been running over the last several years to address the educational need described above. It consists of (1) the NetSci High educational outreach program (2010-2015), which connected high school students and their teachers with regional university research labs and provided them with the opportunity to work on network-science research projects; (2) the NetSciEd symposium series (2012-present), which brings network-science researchers and educators together to discuss how network science can help and be integrated into school education; and (3) the Network Literacy: Essential Concepts and Core Ideas booklet (2014-present), which was created collaboratively and subsequently translated into more than 15 languages by a large number of network-science researchers and educators worldwide. Close
Hiroki Sayama, Catherine Cramer, Mason Porter, Lori Sheetz, & Stephen Uzzo
28011 Engaged Action Research as a Catalyst of Co-learning in Catchments (Watersheds): Complex Adaptive Social Ecological Systems [abstract]
Abstract: Integrated water resource management (IWRM) is a contested goal for landscape sustainability, with proponents offering the possibility of viewing catchments as complex social-ecological systems (CSESs) and embracing concepts such as resilience and adaptive management, and detractors arguing for the pragmatic utility of silos and more linear management processes. A group of transdisciplinary researchers in South Africa have engaged in several projects over the past five years. We adopt an understanding of catchments as CSESs, and aim to use co-learning and the co-development of knowledge as pathways for deepening democracy, through increasing knowledgeable catchment-resident participation in catchment management institutions. We would like to share our experience of four case studies. The first is within a well-developed catchment management institution, the Inkomati-Usuthu Catchment Management Agency (IUCMA), exposed to CSES thinking since its inception in 2004. There, in the Crocodile River sub-catchment, industry partners and municipalities co-operated to develop and initiate implementation of an integrated water quality management system. The other three are within an emerging CMA (Mzimvubu-Tsitsikamma), each with an opportunity to contribute to the catchment management strategy. Within the MTCMA: 1) the Lower Sundays River Valley has no primary water scarcity and an efficiently irrigated export citrus industry, but a lack of potable water in many homes; 2) in the sub-catchment of the Makana Municipality, a civil society organisation emerged, and we traced practice and learning in facilitating water supply to homes; and 3) in the rural Tsitsa River sub-catchment, proposed dam construction triggered the question: "How can state-sponsored landscape restoration investment be leveraged to ensure wetland seep protection and improved livestock livelihoods through a co-learning process?" Each of these case studies illustrates our "learning about learning", which has embedded our commitment to the CSES concept. Close
Tally Palmer & Margaret Wolff

Evolutionary Game Theory: from Biology to Social Systems  (EGT) Session 1

Schedule Top Page

Time and Date: 14:15 - 18:00 on 20th Sep 2016

Room: J - Derkinderen kamer

Chair: Alberto Antonioni

34000 Delving into behavioral responses when humans face social dilemmas [abstract]
Abstract: The modeling of social systems has recently attracted a renewed attention as a result of the Data Science revolution. Ideally, we would like to develop tools and methods that allow in-silico simulations of real-world societal scenarios and systems. To this end, it is imperative to inform models with as much details as possible about human behavior at various scales. This constitutes nowadays a challenge due to our current limited knowledge of the laws describing most human behavioral responses. In this talk, we describe recent advances in this direction by discussing the results of several experiments involving humans -in some cases a few, in others hundreds- playing a diversity of social dilemmas. We also identify the experimental (data) and theoretical challenges in the study of techno-social systems, and propose a way to tackle such problems Close
Yamir Moreno
34001 Climate Change and Global Governance in an Uncertain World [abstract]
Abstract: When attempting to avoid global warming, individuals often face a social dilemma in which, besides securing future benefits, it is also necessary to reduce the chances of future losses. Unfortunately, individuals, regions or nations may opt to be "free riders", hoping to benefit from the efforts of others while choosing not to make any effort themselves. Moreover, nations and their leaders seek a collective goal that is shadowed by the uncertainty of its achievement. Such types of uncertainty have recurred throughout human history, from group hunting to the voluntary adoption of public health measures and other prospective choices. In this talk, I will discuss a population dynamics approach to a broad class of cooperation problems in which attempting to minimize future losses turns the risk of failure into a central issue in individual decisions. Our results suggest that global coordination for a common good should be attempted by segmenting tasks into many small to medium-sized groups in which the perception of risk is high. Moreover, whenever the perception of risk is low, as is presently the case, we find that a polycentric approach involving multiple institutions is more effective than one associated with a single, global institution, indicating that a bottom-up approach, set up at a local scale, provides a better ground on which to attempt a solution for such a complex and global dilemma. Finally, I will discuss the impact on public goods dilemmas of uncertainty in collective goals, heterogeneous political networks, obstinate players and wealth inequality, including a distribution of wealth representative of existing inequalities among nations. Close
Francisco C. Santos
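A minimal sketch of the collective-risk dilemma underlying this class of problems: groups must reach a threshold of cooperators or risk losing everything. Group size, threshold, cost and risk below are illustrative assumptions, and the single replicator gradient is only the simplest slice of the population-dynamics treatment the talk describes.

    import math

    # Collective-risk dilemma: groups of size N must gather at least M
    # cooperators, otherwise all members lose their remaining endowment
    # with probability r (the perceived risk).
    N, M = 6, 3        # group size and threshold (assumptions)
    c, b = 0.1, 1.0    # cost of cooperating, endowment (assumptions)
    r = 0.9            # perceived risk of collective loss

    def binom(k, n, x):
        return math.comb(n, k) * x**k * (1 - x)**(n - k)

    def payoffs(x):
        """Expected payoffs of a cooperator and a defector when a
        fraction x of a well-mixed infinite population cooperates."""
        pc = pd = 0.0
        for k in range(N):  # k cooperating co-players out of N - 1
            p = binom(k, N - 1, x)
            succ_c = 1.0 if k + 1 >= M else 1 - r  # focal cooperates
            succ_d = 1.0 if k >= M else 1 - r      # focal defects
            pc += p * (b * succ_c - c)
            pd += p * (b * succ_d)
        return pc, pd

    # Replicator gradient: its sign tells whether cooperation grows.
    for x in [0.1, 0.3, 0.5, 0.7, 0.9]:
        pc, pd = payoffs(x)
        print(f"x={x:.1f}  gradient={x * (1 - x) * (pc - pd):+.4f}")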
34002 The role of social influence in competitive strategical games [abstract]
Abstract: Mixed dynamics of strategic and socially imitative behaviors have been shown to alter the outcome of competitive games significantly. However, in reality these interactions take place in different social and strategic contexts or domains, which can have important implications for the dynamics of the system. To study the effect of different contexts, we consider a multiplex system of two layers: a game network, where the dynamics obey competitive game rules, and a social network, where individuals exchange opinions on their strategies through a biased voter model dynamic. We consider the four main games (the Harmony Game, the Stag Hunt Game, the Prisoner's Dilemma and the Hawk-Dove Game) with the Fermi imitation rule, which allows for the possibility of making mistakes. We assume that individuals have an intrinsic tendency to be consistent in both layers, but they can also lie, playing with one strategy while consenting with their friends on the other. Importantly, the topologies of the layers composing the system are expected to be neither identical nor completely unrelated, meaning that individuals who engage in strategic interactions can have a social relationship but can also be unrelated in the social context. To account for these features, we use a model which can generate multiplex networks with realistic layer topologies, and where both the popularity of individuals and their similarity are correlated to varying degrees between the different contexts. We show that the interplay between different contexts can lead to new behaviors. In particular, we show that the emergence of localized communities of highly synchronized individuals in the Stag Hunt Game on an isolated network can be hindered if the coupling to the opinion context is present. In addition, we discuss the impact of popularity and similarity correlations between the different contexts on the dynamics of the system. Close
Roberta Amato, Kaj-Kolja Kleineberg, Jan Haerter, Albert Diaz-Guilera
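A minimal sketch of the Fermi imitation rule mentioned above, under which a player copies a neighbor's strategy with a probability that increases smoothly with the payoff difference, so that even a worse-performing neighbor is sometimes imitated (a "mistake"). The temperature K and the payoff values are illustrative assumptions.

    import math
    import random

    def fermi_adopt(my_payoff, neighbor_payoff, K=0.1):
        """Return True if the focal player adopts the neighbor's
        strategy. K is the noise/temperature: the larger K, the more
        frequent the imitation mistakes."""
        p = 1.0 / (1.0 + math.exp(-(neighbor_payoff - my_payoff) / K))
        return random.random() < p

    # Even with a lower-earning neighbor, adoption sometimes happens:
    print(fermi_adopt(my_payoff=1.0, neighbor_payoff=0.8, K=0.5))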
34003 Coexistence of multiple public goods in a bacterial colony [abstract]
Abstract: Some aspects of the interactions between microorganisms (such as siderophore production or resilience to antibiotics) can be studied from the perspective of cooperation. Some microbes (producers or cooperators) pay a metabolic cost, while non-producers or cheaters benefit from the public goods without paying that cost. In some theoretical models, ecological populations of microorganisms are operationally defined as groups of coexisting individuals that are highly clustered at the genotypic and phenotypic levels. In these models, each bacterium is characterized by the genes it possesses and is assigned a fitness related to the public good. In this work we propose a theoretical model to study the effect of the horizontal gene transfer (HGT) mechanism in a bacterial colony. As a novelty, we consider a variable number G of genes altogether, each one associated with a public good (e.g., iron-scavenging molecules, an enzyme for digesting sucrose, resistance to antibiotics, proteins for extracellular protein digestion, etc.). We assume that the G genes are intermediate-frequency genes, which are susceptible to being acquired by HGT or by asexual reproduction. Each bacterium is assigned a fitness related to the totality of the public goods. We have found that, when more than one public good is considered, the appearance and spreading of a survival trait does not imply a shift from the previous strategic equilibrium: a small rate of HGT is enough for clonal sweeps not to occur. For intermediate values of the HGT rate, this mechanism allows the spreading of necessary genes in the colonization phases, while for very low rates of HGT, a decrease in density is observed in the colonization phases, and only the producers' genotypes survive. In addition, we have found that there is a range of the HGT rate, compatible with experimental values, that maximizes biodiversity. Close
Carlos Gracia-Lázaro, Yamir Moreno, Joaquín Sanz, Mauro Moreno
34004 Topology-dependent rationality and quantal response equilibria in structured populations [abstract]
Abstract: The assumption of perfect rationality, an underlying concept of the notion of Nash equilibria, which are frequently used to reason about the decision-making of actors in game theory, has been shown to be violated in many examples of decision-making in the real world. In contrast, bounded rationality is a more versatile concept for modeling the decisions of human actors. Here we explore a graded notion of rationality in socio-ecological systems of networked actors. Employing a framework developed in [1], we describe actors' rationality parameters via their place in a social network and quantify system rationality via the average symmetrised Kullback-Leibler divergence between the game's Nash and quantal response equilibria. In this context, previous work [1] has argued that scale-free topologies maximise a system's overall rationality. Here, we show that while it is true that increasing the degree-heterogeneity of complex networks enhances rationality, rationality-optimal configurations are not scale-free. We provide analytical arguments, complemented by numerical optimisation experiments, to demonstrate that core-periphery networks composed of a few dominant hub nodes surrounded by a periphery of very low degree nodes give strikingly smaller divergences from full rationality than scale-free networks. We also propose a class of topologies that can interpolate between random graphs and core-periphery networks, which allows us to gain insights into the structure of the search landscape a system has to navigate on its way towards enhanced rationality, and which suggests why truly optimal configurations might be exceedingly rare. These results shed further light on the role of social networks in the decision-making of networks of interacting players and provide insight into the interplay between the topological structure of socio-ecological systems and their collective cognitive behaviour. [1] Kasthurirathna, D., and Piraveenan, M. (2015). Emergence of scale-free characteristics in socio-ecological systems with bounded rationality. Scientific Reports, 5:10448. Close
Sabin Roman, Seth Bullock, Markus Brede
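A minimal sketch of the two quantities compared above: a logit quantal response equilibrium for a symmetric 2x2 game, and the symmetrised Kullback-Leibler divergence from the mixed Nash equilibrium. The game matrix (a Hawk-Dove variant) and the rationality parameter are illustrative; the network-dependent rationality of the paper is not modeled here.

    import numpy as np

    # Payoff matrix of a symmetric 2x2 game (illustrative Hawk-Dove):
    # rows = my action, columns = opponent's action; 0 = Hawk, 1 = Dove.
    A = np.array([[-1.0, 4.0],
                  [ 0.0, 2.0]])
    LAM = 2.0  # logit rationality; LAM -> infinity recovers Nash play

    def logit_qre(A, lam, iters=2000):
        """Damped fixed-point iteration for a symmetric logit QRE."""
        p = np.array([0.5, 0.5])
        for _ in range(iters):
            u = A @ p                        # expected payoff per action
            q = np.exp(lam * (u - u.max()))  # logit response
            p = 0.5 * p + 0.5 * q / q.sum()
        return p

    def sym_kl(p, q, eps=1e-12):
        p, q = p + eps, q + eps
        kl = lambda a, b: np.sum(a * np.log(a / b))
        return 0.5 * (kl(p, q) + kl(q, p))

    qre = logit_qre(A, LAM)
    # Mixed Nash of this game: indifference gives p(Hawk) = 2/3.
    nash = np.array([2 / 3, 1 / 3])
    print("QRE:", np.round(qre, 3),
          " symmetrised KL to Nash:", round(sym_kl(qre, nash), 4))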
34005 The evolution of conditional moral assessment in indirect reciprocity [abstract]
Abstract: Indirect reciprocity is a major mechanism in the maintenance of cooperation among unfamiliar individuals. Indirect reciprocity leads to conditional cooperation according to social norms that discriminate between the good (those who deserve to be rewarded with help) and the bad (those who should be punished by the refusal of help). Despite intensive research, however, there is no definitive consensus on which social norms best promote cooperation through indirect reciprocity, and it remains unclear even how those who refuse help to the bad should be assessed. Here we propose a new simple norm called "Staying", which prescribes abstaining from assessment, with the focal individual's image remaining unchanged, if its opponent has a bad image. We fully analyse the Staying norm in terms of evolutionary game theory and show that Staying is most effective in establishing cooperation, compared to the prevailing social norms, which rely on constant monitoring and unconditional assessment. The excellence of Staying suggests the limitations of the strict application of moral judgment. Close
Tatsuya Sasaki, Isamu Okada, Yutaka Nakai
34006 Solitary observation in indirect reciprocity [abstract]
Abstract: Intensive study of indirect reciprocity has seldom relaxed the assumption of public information. Under that assumption, if an error in perception occurs, all players mistake a player's tag in the same way, and thus each player's tag is unique and never differs between observers. Taking the cost of assessment into account raises the second-order free-rider problem. We theoretically analyze solitary observation and compare it with shared monitoring (public information). To model a giving game, we assume that well-mixed infinite players in a population participate in the game using a continuous-entry model (Brandt and Sigmund, 2005). Players in the model consist of three types of norm-adopters: unconditional cooperators, who always give help; all-out defectors, who always refuse help; and conditional cooperators, who cooperate with those assessed as good. In a game with solitary observation, a donor, a recipient, and an observer are randomly selected. The image of the donor is updated only in the eyes of the observer, while the rest of the discriminators in the population do not change their image of the donor. To explore the evolutionary dynamics of the private scores, we consider an analysis of the marginal value of a good reputation (Ohtsuki et al., 2015). In this framework, the expected probability that a player's image in the eyes of a discriminator is good saturates if the games continue indefinitely, and the expected payoffs of each player are then calculated. Solitary observation and shared monitoring lead to different results. Our analysis shows that tolerant indirect reciprocity with justified defection achieves greater Pareto efficiency in combination with unconditional cooperators under solitary observation than under public information. Our result sheds light on the role of unconditional cooperators, who have so far been undervalued as second-order free-riders. Close
Isamu Okada, Tatsuya Sasaki, Yutaka Nakai
34007 Exploring dynamic environments using stochastic search strategies [abstract]
Abstract: In this paper, we conduct a literature review of laws of motion based on stochastic search strategies, focusing mainly on the exploration of highly dynamic environments. In this regard, stochastic search strategies represent an interesting alternative for coping with uncertainty and reduced perceptual capabilities. This study aims to present an introductory overview of research in terms of directional rules and searching methods, mainly based on bio-inspired approaches. It critically examines the role of animal searching behavior applied to random walk models using stochastic rules and kinesis or taxis, selecting relevant work on random walks and analyzing its actual contributions. We cover a wide range of displacement events with an orientation mechanism given by a reactive behavior or a source-seeking behavior, and conclude with a discussion concerning the usefulness of optimal foraging strategies as a reliable methodology. Stochastic search strategies play an important role in facing environmental uncertainty. The present paper therefore aims to uncover the most insightful directional rules inspired by stochastic methods, statistical physics and random walks. Likewise, we consider these strategies as an emergent phenomenon (the formation of global patterns from solely local interactions), a frequent and fascinating theme in the scientific literature, both popular and academic. We examine existing techniques and carry out a comprehensive analysis to understand the state of the art, trends and research gaps. It is worth mentioning that these strategies can be used in the field of robotics as exploration and discovery algorithms, with the aim of speeding up searching tasks. Stochastic search strategies are mainly inspired by optimal foraging theory, which involves animal search behavior as an alternative for facing highly dynamic environments. Thus, these strategies can be viewed as a correlated process consisting of displacements broken only by successive reorientation events. Strategies such as the Lévy walk, ballistic motion and the correlated random walk are well-known examples of foraging strategies, subject to statistical properties derived from Lévy stochastic processes. Recent evidence suggests that search strategies are mainly related to the availability, quality and quantity of publicly accessible data on animal movement, and to Artificial Intelligence techniques. Recently, synthetic experiments have shown that what really matters is where the explorer diffuses, not the manner by which the explorer gets there. We therefore concentrate on what we consider to be some of the more significant developments in stochastic search models. For organizational purposes, we split them into two categories: stochastic rules and directional rules (taxes). Close
Carlos Adolfo Piña-García, J. Mario Siqueiros, Gustavo Carreon, Carlos Gershenson
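A minimal sketch of a Lévy walk of the kind surveyed above: step lengths drawn from a heavy-tailed power law by inverse-transform sampling, with a uniformly random reorientation between displacements. The exponent and the minimum step length are illustrative assumptions.

    import math
    import random

    MU = 2.0      # power-law exponent, 1 < mu <= 3 (assumption)
    L_MIN = 1.0   # minimum step length (assumption)

    def levy_step(mu=MU, l_min=L_MIN):
        """Sample a step length l with P(l) ~ l^(-mu), l >= l_min,
        via inverse-transform sampling."""
        u = random.random()
        return l_min * (1 - u) ** (-1.0 / (mu - 1.0))

    def levy_walk(n_steps):
        x = y = 0.0
        path = [(x, y)]
        for _ in range(n_steps):
            theta = random.uniform(0, 2 * math.pi)  # reorientation event
            l = levy_step()
            x += l * math.cos(theta)
            y += l * math.sin(theta)
            path.append((x, y))
        return path

    path = levy_walk(1000)
    print("final displacement:", math.hypot(*path[-1]))

Small mu yields rare, very long ballistic excursions; mu near 3 approaches ordinary Brownian-like diffusion, which is the trade-off these foraging models explore.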

Hot Topics in the Study of Complex Systems in Asia  (HTCS) Session 1

Schedule Top Page

Time and Date: 14:15 - 18:00 on 20th Sep 2016

Room: R - Raadzaal

Chair: Siew Ann CHEONG

29000 Modeling the activity of the entire primate brain: A meso-scale dynamical perspective [abstract]
Abstract: Nonlinear dynamics of interactions between clusters of neurons via complex networks lie at the base of all brain activity. How such communication between brain regions gives rise to the rich behavioral repertoire of the organism has been a long-standing question. In this talk, we will explore this question by looking at simulations of the collective dynamics of a detailed network of cortical areas in the Macaque brain, recently compiled from the CoCoMac database, as well as a model of globally coupled brain regions used as a benchmark. To understand the large-scale dynamics of the brain, we simulate it at the mesoscopic level, with each node representing a local region of cortex comprising between 10^3 and 10^6 neurons. The dynamical behavior of each such region has been described using a phenomenological model consisting of a pool of excitatory neurons coupled to a pool of inhibitory neurons, which exhibits oscillations over a large range of parameter values. Coupling these regions according to the Macaque cortical network produces activation patterns strikingly similar to those observed in recordings from the brain. Our results help to connect recent experimental findings on the olfactory system and suggest that part of the complicated activity patterns seen in the brain may be explained even without full knowledge of its wiring diagram. Close
Sitabhra Sinha (Institute of Mathematical Sciences, INDIA)
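Wilson-Cowan-type rate equations are one common phenomenological choice for an excitatory-inhibitory pair of the kind described above; whether this is the talk's exact model is our assumption, and the parameters below are illustrative, adapted from the classic limit-cycle example rather than fitted to Macaque data.

    import numpy as np
    from scipy.integrate import odeint

    def S(x, a, th):
        """Sigmoid response with slope a and threshold th."""
        return 1.0 / (1.0 + np.exp(-a * (x - th)))

    # One cortical region: E excites both pools, I inhibits both.
    W_EE, W_EI, W_IE, W_II = 16.0, 12.0, 15.0, 3.0
    P_E, P_I = 1.25, 0.0   # external drives (illustrative)

    def rhs(state, t):
        E, I = state
        dE = -E + (1 - E) * S(W_EE * E - W_EI * I + P_E, 1.3, 4.0)
        dI = -I + (1 - I) * S(W_IE * E - W_II * I + P_I, 2.0, 3.7)
        return [dE, dI]

    t = np.linspace(0, 100, 5000)
    sol = odeint(rhs, [0.1, 0.05], t)

    # Amplitude of E over the late part of the run; a clearly nonzero
    # value indicates a sustained oscillation of the local E-I pair.
    late = sol[2500:, 0]
    print("E amplitude:", round(late.max() - late.min(), 3))

Coupling many such units along an empirical connectome, as the talk describes, then turns the question into one of network-shaped synchronization patterns.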
29001 A Generalized Betweenness for Studying Network Performance against Perturbations [abstract]
Abstract: Betweenness is an important network property for studying system performance against perturbations (such as random natural disasters and intended terrorist attacks). Traditionally, the betweenness of a node/link is defined as how many times the node/link appears as an intermediate node/link in all shortest paths between nodes. Traditional betweenness can help to answer how likely it is that the 1st-best paths between nodes will be cut off by perturbations. In reality, however, one is often more concerned with a more general situation: how likely it is that paths whose lengths are within a given range will be affected by perturbations. For instance, for a researcher travelling to a conference, whether the 1st-best path is available is not important at all; whether s/he can arrive on time is the key. Unfortunately, this general situation has barely been discussed in the literature. To assess network performance in the general situation, we propose a generalized betweenness, mathematically defined as how many times a node/link appears as an intermediate node/link in all paths whose lengths are within a given range. No existing method can effectively calculate the generalized betweenness, because it is difficult to find, between every pair of nodes in a network, all paths whose lengths are within a given range. By modifying a recently reported nature-inspired method, the ripple-spreading algorithm (RSA), it becomes possible to calculate the generalized betweenness. Remarkably, the proposed RSA can find all paths whose lengths are within a given range in just a single run of the ripple relay race. This work makes progress towards the general performance assessment of a network system against perturbations. Close
Xiaobing Hu (Beijing Normal University, CHINA)
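A minimal brute-force sketch of the generalized betweenness as defined above, enumerating all simple paths whose hop-lengths fall within a range via networkx. This is not the ripple-spreading algorithm of the talk, and exhaustive enumeration scales poorly; it only serves to make the definition concrete on a small graph.

    import itertools
    import networkx as nx

    def generalized_betweenness(G, l_min, l_max):
        """Count, for each node, how often it appears as an intermediate
        node on simple paths whose lengths lie in [l_min, l_max]."""
        score = dict.fromkeys(G, 0)
        for s, t in itertools.combinations(G.nodes, 2):
            for path in nx.all_simple_paths(G, s, t, cutoff=l_max):
                if l_min <= len(path) - 1 <= l_max:
                    for v in path[1:-1]:   # intermediate nodes only
                        score[v] += 1
        return score

    G = nx.karate_club_graph()
    gb = generalized_betweenness(G, l_min=2, l_max=3)
    top = sorted(gb, key=gb.get, reverse=True)[:5]
    print("top nodes:", [(v, gb[v]) for v in top])

Setting l_min = l_max = shortest-path length recovers a count in the spirit of traditional betweenness; widening the range captures the "arrive on time" paths the abstract emphasizes.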
29002 Understanding the organization of cities from route analysis [abstract]
Abstract: Urban street structure is a snapshot of human mobility and an important medium facilitating human interaction. Previous studies have analyzed the topology and morphology of street structure in various ways: fractal patterns, complex spatial networks and so on. In practical terms, it is also important to discuss how street networks are used by people. There are also studies analyzing efficiency, accessibility and road usage in street networks. In those studies, researchers usually investigate routes, either empirical or theoretical, to understand the functionality of the network. The travel route is a good proxy for understanding street structure and city attributes from the user's perspective. It is a path selected from the whole network by people or under a given standard, so it reveals how people use the network. The selected route is also influenced by various factors, such as the spatial pattern and travel demand of the city. Thus, studying empirical or theoretically optimal routes can help us understand urban characteristics which are often hidden. For instance, fastest routes show the distribution of traffic in a city as well as the street structure. In this paper, we analyze the geometric properties of routes to understand the geometry of the practical street network, in which hierarchy and traffic are included. We use two types of optimal routes, collected from a time-minimizing algorithm and a distance-minimizing algorithm via the OpenStreetMap API. We suggest a new metric, the center-philic level, to measure how much a route is bent toward the city center. We measured the center-philic level of a number of routes within a 30 km radius of the center. By analyzing the center-philic level for different locations of routes, we can understand and simplify the geometry of the street network based on the center. The center-philic level patterns for the two different algorithms can also reveal the effects of street hierarchy and traffic. In urban transportation, we can imagine two forces competing with each other. Through the agglomeration of business and people inside the city, street networks become denser around the center area to satisfy demand. Such centralized street networks attract traffic toward the inside of the city. However, many cities have arterial roads located outside the city that disperse the traffic concentrated on the inside. The arterial roads act as another force pulling traffic toward the outside of the city. This tendency is well captured by our suggested metric. We first compare the general average center-philic level of both shortest and fastest routes to point out the fundamental difference between them. We then analyze the center-philic level of individual cities and discuss how the metric can explain street layout and street hierarchy. Close
Minjin Lee (Sungkyunkwan University, KOREA), Hugo Serrano Barbosa (University of Rochester, USA), Gourab Ghoshal (University of Rochester, USA), Petter Holme (Sungkyunkwan University, KOREA)
29003 Integration of Network Analysis into Power-grid Analysis: sustainability and stability [abstract]
Abstract: Network analysis has become a powerful tool for analyzing complex systems across a wide range of topics. Over the last two decades, researchers have made much progress, particularly on the topics of disease spreading, social interaction, biological metabolism, neural networks, urban mobility, etc. However, energy systems have not yet been covered as extensively. In this talk, we seek to apply network theory to electric power systems. Firstly, we integrate network analysis into environmental impact analysis. We introduce energy distance in order to estimate the greenhouse gas emissions of electricity transmission, taking both the amount of electricity consumption and the transmission distance into account. Secondly, we analyze the functional stability of power grids. The stable synchronization of power-grid nodes is the essential condition for a secure electric power system. We investigate the transition of the synchronization stability of power-grid nodes and classify nodes based on their transition patterns. We conclude that network analysis is a good complement to energy system analysis. Close
Heetae Kim (Sungkyunkwan University, KOREA), Sang Hoon Lee (KAIST, KOREA), Petter Holme (Sungkyunkwan University, KOREA)
29004 A new scientific collaboration network model [abstract]
Abstract: Scientific collaboration plays an important role in knowledge production and scientific development. Researchers have constructed several network models of scientific collaboration. In a traditional collaboration network, two scientists are linked if they have coauthored a paper. However, this construction undervalues the role of the first author. In this paper, we propose a new collaboration network model that accounts for the importance of the partnership between the first author and the other authors. We make an empirical analysis based on data from the American Physical Society (APS). The results show that there are some differences in properties between the new network and the traditional one. Node importance is also studied on the new network to identify promising researchers. Close
Ying Fan, Zhangang Han (Beijing Normal University, CHINA)
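A minimal sketch of how such a first-author-weighted coauthorship graph could be built with networkx; the weighting scheme (a fixed extra weight on first-author ties) is an assumption for illustration, not the paper's exact model.

    import itertools
    import networkx as nx

    def build_network(papers, first_author_weight=2.0, coauthor_weight=1.0):
        G = nx.Graph()
        for authors in papers:  # each paper is an ordered author list
            first, rest = authors[0], authors[1:]
            for a in rest:  # ties involving the first author are emphasized
                w = G.get_edge_data(first, a, {"weight": 0.0})["weight"]
                G.add_edge(first, a, weight=w + first_author_weight)
            for a, b in itertools.combinations(rest, 2):
                w = G.get_edge_data(a, b, {"weight": 0.0})["weight"]
                G.add_edge(a, b, weight=w + coauthor_weight)
        return G

    G = build_network([["Alice", "Bob", "Carol"], ["Bob", "Carol"]])
    print(nx.get_edge_attributes(G, "weight"))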
29005 Fusion of nations, fusion of disciplines: network evolution in nuclear fusion research [abstract]
Abstract: Nuclear fusion research, which originated in the atomic weapons programs of the USA and the USSR, attracts public attention as a promising energy source for the future. After the Cold War, nations have collaborated in order to build research capacity in nuclear fusion. ITER (International Thermonuclear Experimental Reactor) is an example of a `big science' project at the international level, in which scientists from different disciplines are involved. The goal of our study is to investigate the collaboration structure of nuclear fusion research and its evolution through an open-access bibliometric database, the Microsoft Academic Graph (MAG). We examine not only scientific journal citations but also the impact of the IAEA Fusion Energy Conference on the research field. The dynamics of co-authorship networks reveal how nations take part and collaborate in nuclear fusion research. We expect that this study will be helpful for managing research activities and for informing national S&T policies. Close
Hyunuk Kim, Inho Hong, and Woo-Sung Jung (POSTECH, KOREA)
29006 The robustness of spatially embedded and coupled infrastructure networks under localized attacks [abstract]
Abstract: In the real world, infrastructure networks such as communication networks, power grids, and transportation networks underlie the development of the whole society. The structure of infrastructure networks becomes more and more complicated, and such networks often couple together to deliver their full service capacity. Dependencies commonly exist among components as well as among sub-networks, which enables failure propagation. A large body of literature has focused on the vulnerability and robustness of classical complex networks (e.g. random, regular, small-world and scale-free networks) under malicious or random attacks. As one kind of real-world network, infrastructure networks not only have the topological characteristics of classical complex networks but are also restricted by socio-economic and geographical factors, so that they have short links and some of them are planar graphs. An infrastructure network is distributed over a specific geographical domain, which exposes it to real-world localized attacks (such as natural disasters). Recently, the investigation of spatially embedded infrastructure networks under localized attacks has been getting more and more attention. However, fully accounting for the spatial characteristics of nodes and links in robustness studies of infrastructure networks is still a major challenge, because it involves real-world factors. In this paper, we study the robustness of spatially embedded and coupled infrastructure networks under localized attacks. We first generate different kinds of spatially embedded infrastructure networks. Then a density-based index is proposed to describe the spatial characteristics of an infrastructure network, and dependency links among sub-networks are placed according to geographical restrictions. Localized attacks are described by circles of different radii. Finally, a numerical simulation is conducted, and the results illustrate that the spatial characteristics of an infrastructure network and the location of dependency links have a significant effect on its robustness under localized attack. Close
Saini Yang, Fuyu Hu, Weiping Wang (Beijing Normal University, CHINA)
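A minimal sketch of a localized (circular) attack on a spatially embedded network, using a random geometric graph as a stand-in for an infrastructure network; the dependency links between coupled sub-networks described in the abstract are omitted for brevity.

    import networkx as nx

    def localized_attack(G, pos, attack_point, r):
        """pos: dict node -> (x, y); returns giant-component fraction."""
        cx, cy = attack_point
        removed = [n for n, (x, y) in pos.items()
                   if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2]
        H = G.copy()
        H.remove_nodes_from(removed)
        if H.number_of_nodes() == 0:
            return 0.0
        giant = max(nx.connected_components(H), key=len)
        return len(giant) / G.number_of_nodes()

    # Example: attack a circle of radius 0.2 at the center of the unit square.
    G = nx.random_geometric_graph(500, 0.08, seed=1)
    pos = nx.get_node_attributes(G, "pos")
    print(localized_attack(G, pos, attack_point=(0.5, 0.5), r=0.2))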
29007 Exploring the Collective Mobility Pattern of Intra-Urban Taxi Passengers [abstract]
Abstract: The study of human mobility patterns is of both theoretical and practical value in many respects. For long-distance travel, several research endeavors have shown that the displacements of human travel follow a power-law distribution. However, intra-urban travel does not simply follow the same power law as longer-distance travel. What's more, controversies remain over the scaling law of human mobility in intra-urban areas. In this work we focus on the mobility patterns of taxi passengers by examining five datasets from the three metropolises of New York, Dalian and Nanjing. Through statistical analysis, we find that a mixed lognormal and power-law distribution better explains both the displacement and the duration time of taxi trips, as well as the vacant time of taxicabs, in all the examined cities. The universality of the scaling law of human mobility is subsequently discussed in light of the data analysis. Close
Ling Zhang (Dalian University of Technology, CHINA), Shuangling Luo (Dalian Maritime University, CHINA), Haoxiang Xia (Dalian University of Technology, CHINA)
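A hedged sketch of one way to fit such a mixed lognormal/power-law distribution by maximum likelihood with SciPy; the authors' estimator and parameterization may well differ, and the starting values and bounds below are illustrative.

    import numpy as np
    from scipy import optimize, stats

    def fit_mixture(x, xmin=1.0):
        x = np.asarray(x, dtype=float)
        x = x[x >= xmin]

        def nll(params):
            w, mu, sigma, alpha = params
            p_ln = stats.lognorm.pdf(x, s=sigma, scale=np.exp(mu))
            p_pl = (alpha - 1) / xmin * (x / xmin) ** (-alpha)  # Pareto pdf
            return -np.sum(np.log(w * p_ln + (1 - w) * p_pl + 1e-300))

        res = optimize.minimize(
            nll, x0=[0.5, 1.0, 1.0, 2.0],
            bounds=[(1e-3, 1 - 1e-3), (-5, 5), (1e-2, 5), (1.1, 5)])
        return res.x  # mixture weight w, lognormal mu and sigma, exponent alpha

    rng = np.random.default_rng(0)
    sample = np.concatenate([rng.lognormal(1.0, 0.5, 5000),
                             rng.pareto(2.5, 5000) + 1.0])
    print(fit_mixture(sample))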
29008 The Effects of Correlation between Influential Level and Threshold in Opinion Dynamics [abstract]
Abstract: We live in a society where people interact with each other, and an individual's opinion is shaped by the opinions of their neighbors. How do people accept a different opinion? The threshold model is the best-known and most thoroughly tested theoretical model for this question. It assumes that a person adopts a different opinion when the fraction of their neighbors holding that opinion is higher than their own threshold for acceptance. Much research has been conducted on opinion dynamics with the threshold model, and it has revealed diverse key features of opinion dynamics. Most research involving the threshold model in opinion dynamics relies on the assumption of homogeneous thresholds, i.e., that every person has the same threshold for accepting an opinion. The usefulness of this assumption for statistical analysis is clear. However, we all know that heterogeneity of thresholds exists. The threshold may even be related to social capital such as the number of neighbors, which can be represented as a characteristic of the network structure. Surprisingly, the correlation between network structure and threshold has not been investigated thoroughly. In this study, we investigate the influence of the correlation between the number of neighbors (influence level) and individual threshold on opinion dynamics. Given a scale-free network, as a representative network model of society, we vary the level of correlation ($\beta$) from negative (-1) to positive (+1): 'negative' ('positive') means that small-degree (large-degree) nodes have high thresholds. The minimum threshold is set to 0.5, since we assume that people generally change their opinion when more than half of their neighbors hold the other opinion. Opinions take the values 0 and 1, and 60% of the people in the network hold opinion 1 as an initial condition. In this setting, we run the opinion dynamics while varying the correlation between degree and threshold. We find that this correlation is important for the opinion dynamics: in the negative-correlation regime, the opinion spreads to the entire system, whereas under positive correlation the dynamics reach a finite steady state. Additionally, there is a transition point around a correlation level of 0.5. We are currently conducting a finite-size scaling analysis to characterize this transition, but from these results we can already deduce that the system may undergo a transition. To identify the specific origin of this asymmetric contagion, we measured the fixation time and the final opinion. The fixation time is the time at which a node settles into its final opinion. Under positive correlation, the tendency of the fixation time changes around the critical point close to 0.5, which is very close to the transition point for opinion switching. This suggests a balanced dependency between influence level and threshold, beyond which the system divides into two regimes: one dominated by the threshold effect, the other by the effect of confused information. Another interesting result is that nodes with low influence fluctuate more when the degree of dependency is high: even though they have low thresholds under positive correlation, they still receive many influences from high-degree nodes, and are confused by the mixed information coming from them. 
This result suggests that the correlation between network structure and acceptance threshold can be important for opinion formation. Since we often experience the effect of the correlation between influence level and threshold, it is worth stressing the meaning of this result. We can interpret it as follows: a positive correlation between influence level and a conservative disposition can block the unification of opinion in the less influential group, so that the majority opinion cannot reach the whole society. Close
Eun Lee, Petter Holme (Sungkyunkwan University, KOREA)
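A minimal sketch of the setup described above: thresholds on a scale-free network assigned in (anti-)correlation with degree, followed by synchronous adoption dynamics from a 60/40 initial split. The threshold range, the adoption-only update rule and the network size are illustrative assumptions, not the authors' exact simulation.

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(42)
    G = nx.barabasi_albert_graph(2000, 3, seed=42)
    deg = np.array([G.degree(n) for n in G])

    def thresholds(beta, lo=0.5, hi=0.9):
        """beta>0: high-degree nodes get high thresholds; beta<0: reversed."""
        rank = np.argsort(np.argsort(deg)) / (len(deg) - 1)  # 0..1 by degree
        base = lo + (hi - lo) * rank
        return base if beta > 0 else lo + hi - base  # crude anti-correlation

    def run(beta, steps=50):
        th = thresholds(beta)
        opinion = (rng.random(len(deg)) < 0.6).astype(int)  # 60% hold opinion 1
        for _ in range(steps):
            new = opinion.copy()
            for n in G:
                nbrs = list(G[n])
                frac = np.mean([opinion[m] for m in nbrs]) if nbrs else 0.0
                if opinion[n] == 0 and frac >= th[n]:  # adopt opinion 1
                    new[n] = 1
            opinion = new
        return opinion.mean()

    print("beta=-1:", run(-1), " beta=+1:", run(+1))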
29009 Inferring the model of ants' movements and aggregation in a circular region [abstract]
Abstract: Inferring models of animal movements and aggregation has been a long-standing challenge. Although numerous realistic-looking models have been proposed, model-based methods often rely on untested assumptions. Besides, since many sets of microscopic hypotheses can produce the same macroscopic behaviors, it is dubious whether such models uncover the real underlying mechanism. In this work, we conducted experiments with ants on a two-dimensional circular surface. We try to infer behavioral rules directly from experimental data instead of following the traditional model-based research strategy. By defining a new metric to measure the extent of ant aggregation, we study the influence of the edge "wall" and of other individuals on ant aggregation. Instead of studying large groups of 50 to 150 ants, we study small groups of 1 to 3 ants. By analyzing and comparing the data, we propose a simple yet effective model, which may help to account for the micro-foundations of ant aggregation and to infer how ants amplify the extent of aggregation in large groups. Close
Yikun Xu, Zhangang Han (Beijing Normal University, CHINA)
29010 Multiple individuals tracking algorithm for fish in 2D space [abstract]
Abstract: With the increasing interest in the collective motion of groups of animals, it is important to track multiple moving animals and acquire their positions over time and space. Several studies have tried to solve this problem and automate data acquisition. However, none of them has solved the problem fully, and automatic tracking remains very difficult owing to individuals' varying shapes, complex motion and frequent occlusions. Several published algorithms address this problem, usually aimed at one particular species, for instance zebrafish. However, these algorithms place very high demands on video quality, such as high frame rates, high image resolution and a steady background, and some of them are very time-consuming. Here we have developed an integrated approach based on artificial neural networks that enables us to automatically extract individuals' trajectories from both high-quality and low-quality videos. First, we combine a background-subtraction method with artificial neural networks to effectively detect the individuals. Then we use a linear assignment model to track the individuals. Finally, we build a function that measures a confidence coefficient for each frame to help correct possible errors. We applied our method to different fish videos, and the results showed that it has high efficiency and accuracy in most situations. Close
Qi Zhang, Li Jiang, Zhangang Han (Beijing Normal University, CHINA)
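The linking step of such a tracker is typically a linear assignment problem; the sketch below matches detections in consecutive frames by minimizing total displacement with the Hungarian algorithm (the detection stage and the per-frame confidence function from the abstract are omitted, and the max_jump gating is an illustrative assumption).

    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    def link_frames(prev_pts, curr_pts, max_jump=30.0):
        """prev_pts, curr_pts: (n, 2) detection coordinates; returns index pairs."""
        cost = cdist(prev_pts, curr_pts)  # pairwise Euclidean distances
        rows, cols = linear_sum_assignment(cost)
        # Reject implausibly long jumps (likely occlusions or detection errors).
        return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_jump]

    prev_pts = np.array([[10.0, 10.0], [50.0, 40.0]])
    curr_pts = np.array([[52.0, 41.0], [12.0, 11.0]])
    print(link_frames(prev_pts, curr_pts))  # [(0, 1), (1, 0)]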
29011 A new measure based on information theory to quantify the co-ordination of fish groups [abstract]
Abstract: The collective motion of fish is an interesting research field. An essential question is how to quantify how collective a group is, that is, how to recognize that the fish in the group are interacting. Generally, researchers quantify the strength of interaction between fish intuitively using the correlation of velocities or spins, and measure the whole group's polarization, rotation, etc. to determine whether it is synergetic. This is because researchers treat animal group motion as a multi-body physical phenomenon. However, this poses a problem: if an animal group does not display a visible collective structure, these physical statistics fail to recognize the underlying mechanism. We introduce a new measure based on differential mutual information from information theory to quantify co-ordination. Information theory is used in many fields but rarely in the study of collective animal motion. Mutual information measures the correspondence between two signals: if there is any correspondence between the movements of two agents, mutual information will reveal it, whereas older statistics such as polarization fail when the relationship is unusual. The new statistic is compared with classical statistics such as polarization and correlation on the Couzin model and the Vicsek model. We try different parameters to analyze the properties of all these statistics, and we show that the new statistic is more efficient than the old ones when the model displays disorder, and equally efficient when the model displays order. We recorded the trajectories of different numbers of glass goldfish swimming in a tank with a radius of 40 centimeters. The new statistic is used to measure the co-ordination of the groups, and we show that co-ordination grows as the number of fish grows. In short, we illustrate that the new measure can help reveal the strength of co-ordination in a group. Close
Yinong Zhao, Zhangang Han (Beijing Normal University, CHINA)
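For intuition, a minimal histogram-based estimate of the mutual information between the heading angles of two fish; the talk's differential (continuous) mutual information would replace this binned estimator, so treat the binning below as an illustrative assumption.

    import numpy as np

    def mutual_information(a, b, bins=16):
        joint, _, _ = np.histogram2d(a, b, bins=bins)
        p = joint / joint.sum()
        px = p.sum(axis=1, keepdims=True)
        py = p.sum(axis=0, keepdims=True)
        mask = p > 0
        return float(np.sum(p[mask] * np.log(p[mask] / (px @ py)[mask])))

    rng = np.random.default_rng(0)
    theta1 = rng.uniform(-np.pi, np.pi, 10000)
    theta2 = theta1 + rng.normal(0, 0.3, 10000)   # correlated headings
    print(mutual_information(theta1, theta2))      # clearly positive
    print(mutual_information(theta1, rng.uniform(-np.pi, np.pi, 10000)))  # ~0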

Estimation of probability density functions in noisy complex flows  (EPD) Session 1

Schedule Top Page

Time and Date: 14:15 - 18:00 on 20th Sep 2016

Room: L - Grote Zaal

Chair: Fred Wubs

3000 Estimation of Markov processes using operator eigenpairs [abstract]
Abstract: Modeling the effective macroscopic dynamics of complex systems as noise-driven motion in a potential landscape has found use in topics ranging from protein folding to the thermohaline ocean circulation. I will discuss the estimation of such models from time series, focusing on a methodology that makes use of the spectral properties (leading eigenpairs) of the Fokker-Planck operator associated with the diffusion process. This methodology is well suited to infer stochastic differential equations that give effective, coarse-grained descriptions of multiscale systems. I will discuss estimation of coordinate-dependent diffusion, subsampling, nonconstant sampling intervals and inference from non-equilibrium data. Close
Daan Crommelin
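The spectral (eigenpair) estimator of the talk is more involved; as a simple point of comparison, the sketch below computes binned Kramers-Moyal estimates of drift and diffusion from a sampled trajectory, a classical baseline for inferring an effective SDE dx = a(x) dt + sqrt(2 D(x)) dW. Bin counts and sample sizes are illustrative.

    import numpy as np

    def kramers_moyal(x, dt, bins=30):
        dx = np.diff(x)
        idx = np.digitize(x[:-1], np.histogram_bin_edges(x[:-1], bins))
        centers, drift, diff = [], [], []
        for b in np.unique(idx):
            m = idx == b
            if m.sum() < 10:
                continue
            centers.append(x[:-1][m].mean())
            drift.append(dx[m].mean() / dt)              # first KM coefficient
            diff.append((dx[m] ** 2).mean() / (2 * dt))  # second KM coefficient
        return np.array(centers), np.array(drift), np.array(diff)

    # Example: Ornstein-Uhlenbeck process dx = -x dt + dW (Euler-Maruyama).
    rng = np.random.default_rng(1)
    dt, n = 0.01, 200000
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = x[i-1] - x[i-1] * dt + np.sqrt(dt) * rng.normal()
    c, a, d = kramers_moyal(x, dt)
    print(np.polyfit(c, a, 1))  # fitted drift slope should be close to -1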
3001 Mixing in Noisy Nonlinear Oscillators: Application to Low-Frequency Climate Variability [abstract]
Abstract: Much can be learned about systems exhibiting complex dynamics by studying the evolution of probability densities rather than single trajectories. In the stochastic case, this evolution is governed by the transfer semigroup, which allows one to connect the correlation functions and power spectra to the Fokker-Planck equation. Here, we propose to approximate the transfer operators of high-dimensional systems by Markov operators on a reduced space. While these Markov operators do not in general constitute a semigroup, rigorous results can be obtained regarding their spectral properties, in particular allowing one to reconstruct correlation functions and quantify mixing in the reduced space. The approach is applied to the study of the variability and the stability of stochastic nonlinear oscillators exhibiting resonant behavior. New analytical and numerical results are found for the mixing spectrum of the Hopf bifurcation with additive noise, bringing new insights into the phenomena of noise-induced oscillations and phase diffusion. These results allow new interpretations of the stochastic dynamics of high-dimensional climate models and support the applicability of the method to the study of stochastic bifurcations. Close
Alexis Tantet
3002 Deterministic Methods for Stochastically Forced PDEs [abstract]
Abstract: In this talk I shall illustrate an approach to study the dynamics of stochastic PDEs (or more generally stochastic dynamical systems) with respect to parameters using deterministic continuation methods. In particular, I shall focus on the case of local fluctuations for the stochastic Allen-Cahn equation and explain the practical implementation as well as applications in various scientific disciplines. Close
Christian Kuehn
3003 On the numerical solution of large-scale linear matrix equations [abstract]
Abstract: Linear matrix equations such as the Lyapunov and Sylvester equations play an important role in the analysis of dynamical systems, in control theory, in eigenvalue computation, and in other scientific and engineering application problems. A variety of robust numerical methods exists for the solution of small dimensional linear equations, whereas the large-scale case still provides a great challenge. In this talk we review several available methods, from classical ADI to recently developed projection methods making use of ``second generation'' Krylov subspaces. All methods rely on the possible low-rank form of the given data. Both algebraic and computational aspects will be considered. Close
Valeria Simoncini
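For scale, the small dense case that the talk contrasts with large-scale methods can be handled directly in SciPy; the sketch solves A X + X A^T + Q = 0 for a stable A (note SciPy's sign convention, which solves A X + X A^H = Q).

    import numpy as np
    from scipy import linalg

    rng = np.random.default_rng(0)
    n = 5
    A = rng.standard_normal((n, n)) - 3 * np.eye(n)  # shift makes A stable
    Q = np.eye(n)
    # Pass -Q to match the form A X + X A^T + Q = 0.
    X = linalg.solve_continuous_lyapunov(A, -Q)
    print(np.allclose(A @ X + X @ A.T + Q, np.zeros((n, n))))  # True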
3004 Studying critical transitions in stochastic ocean-climate models by solving Lyapunov equations [abstract]
Abstract: Techniques from numerical bifurcation analysis are very useful when studying transitions between steady states of flows and the instabilities that are involved. In this presentation we discuss how we use parameter continuation to determine probability density functions of flows governed by stochastic partial differential equations near fixed points under small noise perturbations. We first discuss the traditional way of doing this, by stochastically forced time-forward simulation, and then show how it can also be done by solving generalized Lyapunov equations using a novel iterative method involving low-rank approximations. One of the advantages of this method is that preconditioning techniques known from iterative methods for linear systems can be used. We illustrate the capabilities of the method on a phenomenon in physical oceanography: the occurrence of multiple equilibria in the Atlantic Meridional Overturning Circulation. Close
Sven Baars and Fred Wubs

Robustness, Adaptability and Critical Transitions in Living Systems  (RACT) Session 1

Schedule Top Page

Time and Date: 14:15 - 18:00 on 20th Sep 2016

Room: M - Effectenbeurszaal

Chair: Samir Suweis

11000 Introduction (5 min)
11001 TBA (35 min) [abstract]
Abstract: TBA Close
Elisa Benincà
11002 Predicting collapsing network entities before the tipping point (15 min) [abstract]
Abstract: Considerable evidence suggests that there are generic signals indicating whether a system is approaching a tipping point to a new state. These indicators, such as increased auto- and cross-correlation, increased variance and increased skewness, can be derived from time series analysis of the system's state before the tipping point. An important question not addressed by these indicators is what the system will look like after it has passed its tipping point. In this study, we propose a new method, using principal component analysis, to predict the future "post-tipping-point" state of the system. We formulate a method that works for systems in which the new system state is separated from the current one by an unstable equilibrium, a situation observed in various ecological systems. We derived the method analytically and illustrated it with an example based on data generated using an ODE model. For our model, the method correctly predicts which variables increase and decrease in value after the shift. We show that it is robust in some difficult cases, such as differences in noise between variables and the presence of variables in the system that are not part of the shift. We believe that this method is generally useful for a variety of complex systems that contain such a tipping point, and is especially valuable when knowledge of the future state can help decide on prevention measures. Close
Els Weinans, Ingrid van de Leemput, Jelle Lever and Rick Quax
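A minimal sketch of the core idea as described above (not necessarily the authors' exact procedure): take the leading principal component of pre-tipping fluctuations as the direction of the coming shift, so that its signs suggest which variables will increase or decrease after the transition. The toy data below are an assumption for illustration.

    import numpy as np

    def predicted_shift_direction(X):
        """X: (time, variables) pre-tipping time series."""
        Xc = X - X.mean(axis=0)
        cov = np.cov(Xc, rowvar=False)
        vals, vecs = np.linalg.eigh(cov)
        v = vecs[:, -1]            # eigenvector of the largest eigenvalue
        return v / np.linalg.norm(v)

    rng = np.random.default_rng(0)
    t = np.arange(2000)
    # Toy data: two anti-correlated variables with slowly growing fluctuations.
    common = np.cumsum(rng.normal(0, 0.01 + t / 2e5, 2000))
    X = np.column_stack([common + rng.normal(0, 0.05, 2000),
                         -common + rng.normal(0, 0.05, 2000)])
    print(predicted_shift_direction(X))  # roughly equal magnitude, opposite signs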
11003 Environmental change influences ecological network structure on a global scale (15 min) [abstract]
Abstract: Theoretical studies have indicated that environmental perturbations affect the nestedness and modularity of ecological networks. However, the effects of climate change and human activities on ecological networks are poorly understood. Here, we constructed a large dataset of ecological networks, including food-web, pollinator, and seed-dispersal networks, and used a spatial analysis approach to examine the effects of climate change and human activities on the structural patterns of food webs and mutualistic networks [1]. We found that ecological network structure is globally affected by climate change and human impacts, in addition to the current climate. In pollination networks, for instance, nestedness increased and modularity decreased in response to increased human impacts. Modularity in seed-dispersal networks decreased with temperature change (i.e., warming), whereas food-web nestedness increased and modularity declined in response to global warming. According to theory, these results indicate that mutualistic networks form in such a way as to enhance ecosystem stability against environmental changes or perturbations. Unlike in mutualistic networks, however, our results suggest that food-web stability decreases in response to environmental changes. Our findings enhance our understanding of the effects of environmental change on ecological communities. [1] Takemoto K, Kajihara K (2016) Human Impacts and Climate Change Influence Nestedness and Modularity in Food-Web and Mutualistic Networks. PLoS ONE 11(6): e0157929. doi:10.1371/journal.pone.0157929 Close
Kazuhiro Takemoto
11004 A method for calculating an approximate analytical solution of a stochastic ecological model in space (5 min) [abstract]
Abstract: As is observed very often in physics, a wide variety of natural phenomena at very large or very small scales follows simple rules that can be understood and described via formulas and a rigorous mathematical formulation. In the last 60 years, ecologists have been collecting census data from a wide variety of ecosystems and, despite the striking diversity of shapes and forms, they have highlighted how deep commonalities emerge over wide scales of space. Our research focuses on the spatial distribution of plant species in an environment, and involves analytical calculations of global patterns that can be measured. Its aim is to derive a set of predictions from a single theory, and possibly anticipate new and unexpected empirical discoveries (e.g. stochastic pattern formation, quasi-cycles, ...). In this talk I will give a brief overview of an approach based on a stochastic partial differential equation. We calculated an approximate analytical solution and, comparing it to simulated data, we checked whether it is a good estimate of the true solution. We also show how this approach can be used to tackle real ecological problems involving species distributions and biodiversity preservation. Close
Fabio Peruzzo and Sandro Azaele
11005 TBA (35 min) [abstract]
Abstract: TBA Close
Sander Tans
11006 Investigating the collective behaviour of neurons in the brain: what can we do and what we cannot do (35 min) [abstract]
Abstract: The brain has been shaped by evolution into a sophisticated information-processing system capable of adapting behavioural outputs to ever-changing "real world" inputs in an efficient and robust manner. A key ingredient of this degree of adaptability and robustness is the peculiar organization of the brain, with neurons that are structurally and functionally connected through adaptive synapses to form a complex evolving architecture. The emergence of criticality in brain circuits has been proposed as an important signature of brain computation. However, assessing the critical behaviour of neuronal circuits poses severe experimental challenges. We will present the most advanced neurotechnologies enabling the measurement of neuronal networks and brain circuits, and discuss their advantages and limitations for investigating the emergence of patterns resulting from the collective activity of neuronal populations. Close
Stefano Vassanelli
11007 Robustness of tissue structure to perturbations in mechanical forces (15 min) [abstract]
Abstract: In order to study the robustness of tissue architecture to variation in the forces that contribute to tissue shape, we rely on the 2D cell-based numerical model of epithelium formation by Farhadifar and collaborators (1), with modified dynamics and boundary conditions, as explained in Merzouki et al. (2). We aim to understand whether perturbations in the mechanical properties of cells affect the size and regularity of tissues. For example, does variation among cells in the two key parameters, normalized cell contractility (Γ) and membrane line tension (Λ), affect tissue architecture? This architecture can be quantified by the distribution of shapes and areas of cells. To find out, we allow these values to vary among cells during tissue growth. Specifically, for every newly created cell in a growing tissue we assign parameter values drawn from a bivariate normal distribution with a given mean (Γ, Λ) and standard deviation (σ(Γ), σ(Λ)). In the absence of perturbations, different values of Γ and Λ lead to tissue structures that fall into a small number of classes. The most prominent distinction is that between a stable tissue, where most cells have a preferred shape (e.g., hexagonal) and deviations from the preferred shape distribution carry an unfavorable energy, and an unstable tissue, where deviations from a preferred shape do not have a strong energy cost. Our preliminary observations show that within these parameter regimes, tissues react very differently to perturbations. Below we show exemplary observations for two pairs of mean values of Γ and Λ. In the first (Γ=0.04 and Λ=0), cells in the tissue adopt a stable hexagonal shape in the absence of perturbations. For the other pair (Γ=0.12 and Λ=-0.8), the tissue is unstable and cells adopt a greater variety of shapes. We vary the extent of perturbation by setting the standard deviations (σ(Γ), σ(Λ)) to 0.05 and 0.1, and also perform control simulations without perturbations, in which the standard deviation is zero. Surprisingly, we find that the distribution of cell shapes in unstable tissues is more robust to perturbation than that of stable tissues. In addition, the perturbations affect stable and unstable tissues differently in systematic ways. Specifically, perturbation of a stable tissue creates fewer 6-sided cells and more 4- and 7-sided cells when compared with the control. We define the mean shape of cells as the average number of edges among all the cells of the tissue, and observe that this mean shape is systematically reduced in response to perturbations. In contrast, for unstable tissues, perturbations cause fewer 4-sided, 6-sided and 8-sided cells, but more 5-sided and 7-sided cells. Overall, however, these changes compensate for one another, such that the mean shape of cells does not change greatly. Currently, we are studying possible explanations for this pattern, which suggests that the soft energy constraints of unstable tissues may give cells an advantage in conserving their shape distribution. We are also studying potential molecular factors that could explain this pattern. In our next steps, we will explore the robustness of other phenotypes, such as cell area and cell regularity. References: 1 - Reza Farhadifar, Jens Christian Röper, Benoit Aigouy, Suzanne Eaton, and Frank Jülicher. The influence of cell mechanics, cell-cell interactions, and proliferation on epithelial packing. Current Biology: CB, 17 (24): 2095-2104, (2007). 2 - Aziza Merzouki, Orestis Malaspinas and Bastien Chopard. The properties of a cell-based numerical model of epithelium under stretching constraints. Soft Matter, in press, (2016). Close
Charles De Santana, Aziza Merzouki, Orestis Malaspinas, Bastien Chopard and Andreas Wagner
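For reference, a hedged reconstruction of the energy functional of the vertex model in ref. 1, in the notation commonly used for it (the Γ and Λ of the abstract are the normalized versions of the contractility and line-tension coefficients below):

    E = \sum_\alpha \frac{K_\alpha}{2}\left(A_\alpha - A_\alpha^{(0)}\right)^2
        + \sum_{\langle i,j\rangle} \Lambda_{ij}\,\ell_{ij}
        + \sum_\alpha \frac{\Gamma_\alpha}{2} L_\alpha^2

where $A_\alpha$ and $L_\alpha$ are the area and perimeter of cell $\alpha$, $A_\alpha^{(0)}$ its preferred area, and $\ell_{ij}$ the length of the cell edge between vertices $i$ and $j$.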
11008 Closure (5 min)

Dynamics on and of Complex Networks IX / Mining and learning for complex networks  (DOAO) Session 2

Schedule Top Page

Time and Date: 14:15 - 18:00 on 20th Sep 2016

Room: N - Graanbeurszaal

Chair: Jean-Charles Delvenne

43006 Discrimination in Human vs. Algorithmic Decision Making [abstract]
Abstract: Algorithmic (data-driven) decision making is increasingly being used to assist or replace human decision making in a variety of domains, ranging from banking (rating user credit) and recruiting (ranking applicants) to the judiciary (profiling criminals) and journalism (recommending news stories). Against this background, in this talk I will pose and attempt to answer the following high-level questions: (a) Can algorithmic decision making be discriminatory? (b) Can we detect discrimination in decision making? (c) Can we control algorithmic discrimination, i.e., can we make algorithmic decisions fairer? Close
Krishna P. Gummadi
43007 Understanding fashion as a complex network [abstract]
Abstract: Fashion is a very fast-moving business addressing many different target groups (style, age, occasion, status, ...). Hence, it is hard to predict what will be fashionable in the future. Originally, how these markets evolved was largely dictated by a relatively small set of players, such as designers, brands or celebrities. Nowadays, social media is to some extent changing the rules of play. Bloggers and other participants in social media are increasingly playing an important role in defining and spreading fashion trends. Zalando wants to take an active part in this development. Some of our goals are to discover new influencers in the fashion world, match influencers to specific brands or advertising campaigns, and discover and monitor emerging trends in social media. To do this, we have to construct a large network of entities (such as bloggers, brands, magazines) in order to analyze the dynamic behavior of the fashion world and answer the questions mentioned above. For this talk, we studied a very popular fashion platform with around 1M subscribers and 22M connections. We will address questions such as which geographical regions of the world are most active, and whether people tend to follow influential users from their own country or region. In addition, we will analyze whether some of the standard properties of complex networks apply in our example, such as small-world and scale-free structure. Close
Julien Siebert
43008 Phase Transitions in the Growth of Spatial Networks? [abstract]
Abstract: Spatially embedded complex networks, such as nervous systems, the Internet, and transportation networks, generally have nontrivial topological patterns of connections combined with nearly minimal wiring costs. Here we report an empirical analysis of two databases describing, respectively, 200 years of evolution of the road network in a large area north of Milan (Italy), and the growth of the nervous system of C. elegans from the moment of fertilization to adulthood. We discuss the basic mechanisms that drive the evolution of these two spatial networks. Close
Vito Latora
43009 Stream Graphs and Link Streams for the Modeling of Interactions Over Time [abstract]
Abstract: The structure and dynamics of interactions are crucial for many phenomena of interest, such as contacts between individuals, data transfers, commercial exchanges, mobility, and many others. Analyzing such interactions classically relies on network analysis, which captures the structure of interactions, or on temporal series, which capture their dynamics. Both approaches have been extended in various ways to cope with the jointly structural and temporal nature of interactions, but the current situation remains unsatisfactory. I will present here the modeling of interactions over time by stream graphs and link streams, which aims at unifying both aspects in a simple, efficient and intuitive way. It provides a language for dealing with interactions over time, in a way similar to the language provided by network science for relations. Close
Matthieu Latapy
43010 Syntactic Complexity of Web Search Queries through the Lenses of Language Models, Networks and Users [abstract]
Abstract: Across the world, millions of users interact with search engines every day to satisfy their information needs. As the Web grows bigger over time, such information needs, manifested through user search queries, also become more complex. However, there has been no systematic study that quantifies the structural complexity of Web search queries. In this research, we make an attempt towards understanding and characterizing the syntactic complexity of search queries using a multi-pronged approach. We use traditional statistical language modeling techniques to quantify and compare the perplexity of queries with natural language (NL). We then use complex network analysis for a comparative analysis of the topological properties of queries issued by real Web users and those generated by statistical models. Finally, we conduct experiments to study whether search engine users are able to identify real queries, when presented along with model-generated ones. The three complementary studies show that the syntactic structure of Web queries is more complex than what n-grams can capture, but simpler than NL. Queries, thus, seem to represent an intermediate stage between syntactic and non-syntactic communication. Close
Rishiraj Saha Roy

mathematical pharmacology  (MP) Session 2

Schedule Top Page

Time and Date: 14:15 - 18:00 on 20th Sep 2016

Room: P - Keurzaal

Chair: Vivi Rottschafer

24003 How do protein- and lipid-binding impact efficacy of drugs? [abstract]
Abstract: When a drug enters the blood stream on its way to a pharmaceutical target, it encounters many proteins and lipids that are eager to bind it and thus prevent it from reaching its destination. Whilst this may at first adversely affect the beneficial effect of the drug, the drug bound to the proteins is not lost and may eventually still reach its target. We discuss a class of models proposed to study the impact of proteins and lipids on the efficacy of drugs, and show that affinity plays a key role in answering the question in the title. Close
Bert Peletier
24004 Explaining unexpected multi-stationarity in a nonlinear model of prolactin response to antipsychotic medication [abstract]
Abstract: The complexity of biological systems arises in part from their nonlinearity. Mathematical models in biology and pharmacology often include this nonlinearity in the form of feedback mechanisms. Nonlinear models can hide interesting dynamic behaviours and as such warrant careful study. A case in point is a nonlinear model of the prolactin (PRL) response to antipsychotic medication, which includes a positive feedback. Increased secretion of PRL is a side-effect of antipsychotic drugs. For repeated drug challenges, the intensity of the PRL response to the second challenge is lower than to the first, if the duration between the two challenges is short. This implies that the intensity of the PRL response may be limited by a pool of PRL in a precursor compartment. The pharmacodynamics of the PRL concentration in plasma has been modelled by means of a precursor-pool model which includes a positive feedback loop of plasma PRL on its own synthesis in the pool, making it a nonlinear system [1]. Even though the nonlinear model fits kinetic data from a small temporal window well, it exhibits unexplained multi-stationarity. We have used mathematical analysis to gain insight into this unexplained model behavior. We have shown that the nonlinearity results in multiple steady states with different stability properties. The stability of each steady state, coupled with the pharmacokinetics of the drug, plays a role in determining which steady state is predicted by the model. We have been able to deduce a parametric restriction under which the desired steady state is stable [2]. The work highlights the importance of mathematical analysis in systems-pharmacological models. References: [1] Stevens J, Ploeger B, Hammarlund-Udenaes M, Osswald G, van der Graaf PH, Danhof M and de Lange ECM, Mechanism-based PKPD model for the prolactin biological system response following an acute dopamine inhibition challenge: quantitative extrapolation to humans. Journal of Pharmacokinetics and Pharmacodynamics. 2012;39(5):463-477. [2] Bakshi S, de Lange ECM, van der Graaf PH, Danhof M and Peletier LA, Understanding the behaviour of systems pharmacology models using mathematical analysis of differential equations - prolactin modelling as a case study. CPT: Pharmacometrics and Systems Pharmacology, 2016. Close
Suruchi Bakshi, Elizabeth C. de Lange, Piet H. van der Graaf, Meindert Danhof, Lambertus A. Peletier.
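A minimal, hypothetical precursor-pool model with positive feedback of plasma PRL on its own synthesis, integrated with SciPy; all parameter names and values are illustrative assumptions, not those of refs [1, 2].

    import numpy as np
    from scipy.integrate import solve_ivp

    def pool_model(t, y, ks=1.0, kr=0.5, ke=0.8, fmax=3.0, c50=1.0):
        P, C = y
        feedback = 1.0 + fmax * C / (c50 + C)   # plasma PRL stimulates synthesis
        dP = ks * feedback - kr * P             # synthesis into the pool, release
        dC = kr * P - ke * C                    # release into plasma, elimination
        return [dP, dC]

    sol = solve_ivp(pool_model, (0.0, 50.0), y0=[2.0, 0.5])
    print(sol.y[:, -1])  # approach to a steady state

    # In the full model of refs [1, 2], this type of feedback yields multiple
    # steady states with different stability; which one is reached depends on
    # the initial condition and on the drug's pharmacokinetics.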
24005 Retrospective Drug Testing: Can the Skin Provide a Record of Drug Taking History? [abstract]
Abstract: Worldwide, noncompliance with drug regimens poses a significant challenge to effective treatment strategies. The WHO estimates that only 50% of patients living with chronic illness in developed countries adhere to prescribed treatment. In order to tackle this issue, an effective method of monitoring compliance is necessary. In this talk we consider reverse iontophoresis as a drug-monitoring technique. This involves placing two electrodes on the skin and passing a small current between them, encouraging the movement of ions from the plasma to the skin surface, where the drug is collected. It has been shown that prolonged systemic presence of a drug can result in a build-up of that drug in the skin, which affects the reverse iontophoresis reading. We seek to determine, of the drug collected, how much has come from the skin and how much from the plasma. Our aim is to interpret reverse iontophoresis readings, with particular interest in inferring the recent drug-taking history of the patient. In order to do this, a three-model system is created: the first model predicts the systemic levels of the drug post administration; the second model describes the reservoir formation in the stratum corneum via a combination of diffusion and advection with cell movement; and the third model, which is the focus of this talk, describes the extraction of the reservoir via reverse iontophoresis. Our extraction model takes the form of a coupled reaction-diffusion-convection system, which is analysed to explore the importance of key model parameters, most notably binding rates, on the ability to effectively monitor drug levels using reverse iontophoresis across the skin. We go on to discuss the implications of our modelling and results for drug monitoring. Close
Jennifer Jones, K.A. Jane White, M. Begoña Delgado-Charro and Richard H. Guy
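A hedged sketch of one explicit finite-difference step for a 1D reaction-diffusion-convection system of the kind described above (diffusion, electromigration as advection, and linear binding to a bound phase); the geometry, boundary conditions and parameters are all illustrative assumptions, not the authors' model.

    import numpy as np

    nx, L = 100, 1.0
    dx = L / nx
    D, v, kon, koff = 1e-3, 5e-2, 0.1, 0.02
    dt = 0.4 * min(dx**2 / (2 * D), dx / v)     # explicit stability limits

    c = np.zeros(nx); c[0] = 1.0                # drug supplied at inner boundary
    b = np.zeros(nx)                            # bound phase
    for _ in range(5000):
        lap = (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
        adv = (c - np.roll(c, 1)) / dx          # upwind scheme for v > 0
        react = kon * c - koff * b              # binding / unbinding
        c_new = c + dt * (D * lap - v * adv - react)
        b += dt * react
        c_new[0], c_new[-1] = 1.0, 0.0          # plasma side fixed, sink at surface
        c = c_new
    print(c[::20])  # quasi-steady profile from plasma side to skin surface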
24006 A control theory inspired semi-automated method to probe the response of quantitative system pharmacology models to different drug dosing schedules. [abstract]
Abstract: Drug treatment schedules significantly influence the success of pharmacological intervention. Even though quantitative systems pharmacology (QSP) models are used to understand the interplay between the pharmacological system and drug action, their ability to guide drug treatment schedules is still underutilised. Here we adopt a method widely used in electrical and control engineering to reveal the timescales on which QSP models respond to temporal changes in oscillatory inputs. Frequency-domain response analysis (FRA) is based on the linearization of a nonlinear model around its steady states. FRA provides insight into the presence and magnitude of time delays and into the stability and performance of QSP models. Thus, FRA enables the identification of dosing frequencies for which the response of the QSP model is either amplified or attenuated. This facilitates not only the characterisation of QSP models but also the understanding of the pharmacological system, the optimisation of treatment schedules and the identification of signature profiles. By providing an interactive and semi-automated application based on R and the Shiny package, we make FRA easy to use and accessible to everyone, without the need to understand the underlying mathematics. Close
Pascal Schulthess, Teun Post, James Yates, Piet Hein van der Graaf
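A minimal sketch of the FRA idea on a toy two-compartment model (the talk's tool is built in R/Shiny; this Python version is an illustration only): evaluate the transfer-function magnitude |C (iw I - A)^{-1} B| of the linearized system over a range of dosing frequencies. Rate constants are assumed for the example.

    import numpy as np

    # Linear two-compartment model, so the Jacobian A is exact.
    k12, k21, ke = 0.3, 0.1, 0.2
    A = np.array([[-(k12 + ke), k21],
                  [k12, -k21]])
    B = np.array([[1.0], [0.0]])   # input (dosing) enters compartment 1
    C = np.array([[0.0, 1.0]])     # observed output is compartment 2

    omegas = np.logspace(-3, 1, 100)           # dosing frequencies (rad/time)
    gain = [abs((C @ np.linalg.inv(1j * w * np.eye(2) - A) @ B)[0, 0])
            for w in omegas]
    # Slow dosing is transmitted; fast dosing schedules are attenuated, which
    # identifies the timescales on which the system can actually respond.
    print(gain[0], gain[-1])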
24007 Systems Medicine of Renal Cancer Drug Resistance: Towards New Diagnostics and Therapy [abstract]
Abstract: Renal cell carcinoma (RCC) is the 8th most common cancer in the UK and the most lethal urological malignancy. Resistance to treatment is almost ubiquitous in advanced disease and urgently warrants further investigation. Five-year survival is approximately 40% overall and <10% with metastasis [Nat Rev Urol 2011;8:255]. No method is available to predict RCC response to targeted therapy, nor to accurately identify high-risk patients for entry into adjuvant trials. The current study bridges genotype and phenotype towards more effective clinical tools for renal cancer medicine. Genetic control is realized by complex relationships between many components, including numerous uncharacterised genes and unknown context-specific functions [Cell 2011;144:986]. At the single-cell level, phenotype is governed by many concurrent biochemical reactions that form pleiotropic networks with nested hierarchical structure, and hence modularity [Science 2002;297:1551]. Systematic approaches to understanding the properties of these networks, and so informing control of cell behaviour, include static systems-wide functional gene networks and executable models. Modelling restricted to prior knowledge misses components and interactions, limiting the scope of the representation. In order to address this knowledge gap, we are reverse engineering context-specific modularised global gene networks. This data-driven approach spans molecular and clinical parameters. Four representative RCC cell lines were selected from a panel of sixteen for transcriptome profiling at multiple time points following exposure to sunitinib, a front-line drug. These representative cell lines were identified by unsupervised learning with data on gene expression, mutational status and sunitinib sensitivity. Modularity analysis of the drug-response time course with a novel algorithm (NetNC) identified regulated, functionally coherent subnetworks specific to cell line (e.g. drug-resistant) or condition (e.g. hypoxia). The figure shows a modularised sunitinib response network, which illuminates mechanisms of cell killing and drug resistance. Sunitinib treatment elicits substantially fewer changed network modules in hypoxic conditions relative to 'normoxia', suggesting that the action of sunitinib on canonical targets (e.g. VEGFR) simulates hypoxia in RCC, which may synergise with its putative anti-angiogenic action in vivo. Interestingly, induction of an apoptosis-regulation module was found only in a metastatic cell line in hypoxia, including upregulation of the canonical apoptosis inhibitors BCL2 and BCLXL. Focussed analysis of the apoptosis pathway across the sunitinib-response time course uncovered expression changes in regulatory genes for a second cell line. Follow-up experiments investigated chemical abrogation of apoptosis resistance alongside sunitinib treatment as a potentially synergistic combination therapy. Close
Sonntag HJ, Stewart GD, O' Mahony F, Edwards-Hicks J, Laird A, Murphy LC, Pairo-Castineira E, Mullen P, Harrison DJ, Overton IM

Digital Epidemiology and Surveillance  (DES) Session 2

Schedule Top Page

Time and Date: 14:15 - 18:00 on 20th Sep 2016

Room: Z - Zij foyer

Chair: Daniela Paolotti

36005 Monitoring vaccine confidence (with deep learning): the VCMP platform
Marco Cristoforetti
36006 Flu-Now : Nowcasting Flu based on Product Sales Time Series [abstract]
Abstract: Big Data nowadays offer the capability of creating a digital nervous system of our society, enabling the measurement, monitoring and prediction of various phenomena in quasi real time. With that comes the need for more timely forecasts, in other words nowcasts of changes and events in nearly real time. The goal of nowcasting is to estimate up-to-date values for a time series whose actual observations are available only with a delay. Choi and Varian introduced the term nowcasting to advocate the tendency of web searches to correlate with various indicators, which may prove helpful for short-term prediction. In the field of epidemiology, various works have shown that search data from Google Flu Trends could help predict the incidence of influenza-like illness (ILI). But as Lazer et al. note, in February 2013 Google Flu Trends predicted more than double the proportion of doctor visits for ILI reported by the Centers for Disease Control. In this work we study the flu time series of cases reported by physicians and pediatricians from all over Italy, from the 2004/05 to the 2014/15 flu season. We are interested in examining whether it is possible to use retail market data as a proxy for flu prediction. Our dataset consists of economic transactions collected by COOP, a system of Italian consumers' cooperatives which operates the largest supermarket chain in Italy. The whole dataset contains retail market data in a time window that goes from January 1st, 2007 to April 27th, 2014. First, we identified the products that have an adoption trend similar to the flu trend, with the help of a 1-nearest-neighbor classifier that uses dynamic time warping as the distance measure between time series. Based on these products, we identified the customers that buy them during the flu peak, since those individuals would have a higher probability of being either infected or close to an infected individual. We extracted their most frequent baskets during the peak using the Apriori algorithm, an algorithm for frequent itemset mining and association rule learning over transactional databases, and we use those basket-sentinels as a control set for the following year's flu peak. Monitoring the behavior of these basket-sentinels, we are able to detect patterns similar to those of the previous year's flu peak, and as a result obtain an alarm for the appearance of the flu. Many lines of research remain open for future work, such as studying whether retail market data can predict the flu peak even in particular cases such as the 2009 non-seasonal H1N1 influenza pandemic, which peaked in October and then declined quickly to below baseline levels by January. Close
Ioanna Miliou, Salvatore Rinzivillo, Giulio Rossetti, Dino Pedreschi and Fosca Giannotti.
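A minimal implementation of the dynamic-time-warping distance used above as the 1-nearest-neighbor metric; the O(nm) dynamic program below is the textbook version, without any windowing or normalization choices the authors may have used.

    import numpy as np

    def dtw(a, b):
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    flu = np.array([0, 1, 3, 8, 5, 2, 1, 0], dtype=float)
    sales = np.array([0, 0, 2, 4, 9, 4, 1, 0], dtype=float)  # lagged look-alike
    print(dtw(flu, sales))  # small distance despite the time shift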
36007 TOWARDS A BRIGHT FUTURE OF DATA-DRIVEN SIMULATION FOR POLICY DECISION SUPPORT Modeling and Simulation of the Zika outbreak [abstract]
Abstract: Background: The potential health threats associated with Zika virus infections (ZIKV) have alarmed the global community. Unusual incidences of microcephaly and Guillain-Barré syndrome have drawn attention to a disease that was for decades considered relatively harmless. At present, health authorities worldwide are urged to implement policies to prevent further spreading, in spite of having only scarce and uncertain knowledge of the epidemiology and potential solutions. Complex problems such as infectious diseases are often addressed by simulation techniques. However, vector-borne diseases such as ZIKV present three major challenges that these approaches find difficult to tackle. First, the worldwide distribution of tropical and sub-tropical regions with suitable conditions for ZIKV's vectors calls for a global scope of study and the use of global data. Second, human travel is the main dissemination mode; therefore, policy decision support needs to cope with the high connectivity of regions. Third, the emergence of ZIKV is driven by geographically specific environmental and socio-economic factors, so an appropriate representation requires georeferenced information. Objective: The aim of this research was to explore ways to overcome the limitations of the simulation methods conventionally used to study epidemics. More specifically, we aimed at designing robust response strategies to prevent the potential spread of Zika virus infections to regions worldwide. Approach: To achieve this goal, we applied a data-driven simulation approach that integrates different modelling and simulation methods and adopts concepts from other disciplines. We divided the world according to its global administrative divisions and, adopting an object-oriented approach, replicated our core model structure describing the epidemiological process in each region. The spread of the disease across regions occurs along two distinct network structures: first, an undirected and symmetric adjacency matrix; second, model-based predictions of global air passenger flows, represented in a directed matrix. Data: The input data were collected from diverse publicly available sources, including geospatial data from maps, conventional databases provided by government agencies, and open-access modelling outcomes of reported research. We aggregated multiple layers of high-resolution raster datasets to administrative divisions by geoprocessing zonal statistics, and combined this with data on air passenger flows from 3416 airports across the world. After pre-processing steps for data alignment, the data were used as input for our simulation model. Since viewing data on maps enhances users' understanding, we projected our simulation outcomes back onto maps. In this way users can interactively plot different plausible scenarios over time and explore their consequences. Sensitivity analysis and policy exploration were conducted at higher levels of aggregation. Simulation methods: The model components of each region combine two modeling methods, taking advantage of their respective strengths. Our approach, loosely based on Bobashev et al. (2007), starts with an agent-based model; after reaching a threshold number of infectious individuals, it switches to an equation-based one. 
Next steps: By collecting and managing data at a global scale, coping with regions increasingly connected by air travel, and integrating georeferenced environmental and social information, our approach promises to deliver breakthroughs in model-based policy analysis. The validity of the presented approach and the results of the policy exploration will be shown in future work. Close
Philipp Schwarz, Erik Pruyt.
36008 Digital Epidemiology Through the Ages
Rumi Chunara