
Economics  (E) Session 5


Time and Date: 10:45 - 12:45 on 22nd Sep 2016

Room: A - Administratiezaal

Chair: Sumit Sourabh

231 Pathways towards instability in financial networks
Abstract: Following the financial crisis of 2007-2008, a deep analogy between the origins of instability in financial systems and in complex ecosystems has been pointed out: in both cases, topological features of network structures influence how easily distress spreads within the system. However, in financial network models, the intricate details of how financial institutions interact typically play a decisive role. Hence, a general understanding of precisely how network topology creates instability remains lacking. Here we show how processes that are widely believed to stabilise the financial system, i.e. market integration and diversification, can actually drive it towards instability, as they contribute to creating cyclical structures which tend to amplify financial distress, thereby undermining systemic stability and making large crises more likely. This result holds irrespective of the precise details of how institutions interact, and demonstrates that policy-relevant analysis of the factors affecting financial stability can be carried out while abstracting away from such details.
Marco Bardoscia, Stefano Battiston, Fabio Caccioli and Guido Caldarelli
67 The role of networks in firms’ multi-characteristics competition and market-share inequality
Abstract: We develop a location analysis spatial model of firms’ competition in multi-characteristics space, where consumers’ opinions about the firms’ products are distributed on multilayered networks. Firms do not compete on price but only on location in the products’ multi-characteristics space, and they aim to attract the maximum number of consumers. Boundedly rational consumers have distinct ideal points/tastes over the possible available firm locations but, crucially, they are affected by the opinions of their neighbors. Our central argument is that the consolidation of a dense underlying consumers’ opinion network is the key for the firm to enlarge its market share. Proposing a dynamic agent-based analysis of firms’ location choice, we characterize multi-dimensional product differentiation competition as adaptive learning by firms’ managers, and we argue that such a complex systems approach advances the analysis in alternative ways, beyond game-theoretic calculations.
Athanasios Lapatinas and Antonios Garas
569 Concentration and systemic risk in banking networks
Abstract: Since the 2007–2009 financial crisis, mounting evidence suggests that failures of large banks represent a major risk for the resilience of banking networks. This finding is widely used to link the increasing concentration of financial markets with an increase in their fragility. However, the same argument can easily result in the mistaken idea that any market change associated with an increase in concentration also amplifies systemic risk. In this study we applied stress tests to both hypothetical and empirically calibrated banking networks to observe how various bank-size distributions affect systemic risk. We found that analogous to the resilience of ecosystems, no single property of banking networks could explain the probability of systemic failure. We quantified concentration in terms of the Herfindahl–Hirschman index and also identified an additional indicator, inequality, measured by Rao’s quadratic entropy, which is important for understanding the concentration–resilience relationship. We found, counterintuitively, that an increase in concentration was beneficial when it was not followed by an increase in inequality. Similarly, a decrease in concentration became harmful when it was not followed by a decrease in inequality. Mergers of large banks increased, whereas mergers of small banks decreased systemic risk. Splitting of large banks was also effective in reducing systemic risk if splitting was not overdone to the extent that it resulted in too many small banks. Our results provide a guideline that can be applied to frequent issues that regulators face, such as bank mergers.
Stojan Davidovic, Amit Kothiyal, Konstantinos Katsikopoulos and Mirta Galesic
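The two measures named in the abstract, the Herfindahl–Hirschman index for concentration and Rao’s quadratic entropy for inequality, are both standard formulas and easy to sketch. The bank sizes and the log-size distance matrix below are invented for illustration; the paper’s actual dissimilarity measure is not specified here.

```python
import numpy as np

def herfindahl(shares):
    """Herfindahl-Hirschman index: the sum of squared market shares."""
    s = np.asarray(shares, dtype=float)
    s = s / s.sum()                      # normalise to shares summing to 1
    return float((s ** 2).sum())

def rao_quadratic_entropy(shares, distance):
    """Rao's quadratic entropy: the expected pairwise distance between
    two units drawn with probability proportional to their size."""
    p = np.asarray(shares, dtype=float)
    p = p / p.sum()
    d = np.asarray(distance, dtype=float)
    return float(p @ d @ p)

# Four banks: one dominant, three small (toy numbers).
sizes = [70.0, 10.0, 10.0, 10.0]
# A hypothetical distance matrix: absolute log-size differences.
logs = np.log(np.asarray(sizes))
dist = np.abs(logs[:, None] - logs[None, :])

hhi = herfindahl(sizes)
rao = rao_quadratic_entropy(sizes, dist)
```

With equal shares the HHI reduces to 1/n, its minimum, which is why the two indices can move independently: a merger changes the share vector (HHI) and the pairwise dissimilarities (Rao) in different ways.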
92 The asymptotic dynamics of wealth inequality and income-wealth interactions
Abstract: The rapid increase of wealth inequality in the past few decades is one of the most disturbing social and economic issues of our time. Studying its origin and underlying mechanisms is essential for policy aiming to control and even reverse this trend. In this talk, I will describe a novel theoretical approach using interacting multi-agent master-equations from which the dynamics of wealth inequality emerge. Taking into account growth rate, return on capital and personal savings, it is possible to capture the historical dynamics of wealth inequality in the United States during the course of the 20th century. The personal savings rate is shown to be the single most important factor that governs the wealth inequality dynamics. Notably, its major decrease in the past 30 years can be associated with the current wealth inequality surge. The asymptotic dynamic behavior of wealth inequality, on the other hand, emphasizes the importance of the economic output growth rate. Furthermore, the effects that changes in the income distribution have on the distribution of wealth are discussed. Plausible changes in income tax are found to have an insignificant effect on wealth inequality in the short run. The results imply, therefore, that controlling income inequality is an impractical tool for regulating wealth inequality.
Yonatan Berman, Eshel Ben-Jacob and Yoash Shapira
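The full master-equation treatment is beyond a short sketch, but the three ingredients the abstract names (growth rate, return on capital, personal savings rate) can be illustrated with a minimal discrete-time accumulation model. All parameter values and the lognormal income distribution below are assumptions for illustration, not the paper’s calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_agents = 10_000
years = 100
r = 0.04        # assumed return on capital
g = 0.02        # assumed output (income) growth rate
s = 0.08        # assumed personal savings rate out of income

# Heterogeneous labour incomes (lognormal) and zero initial wealth.
income = rng.lognormal(mean=0.0, sigma=0.7, size=n_agents)
wealth = np.zeros(n_agents)

def top_share(w, q=0.01):
    """Share of total wealth held by the richest fraction q."""
    w = np.sort(w)[::-1]
    k = max(1, int(q * w.size))
    return float(w[:k].sum() / w.sum())

for _ in range(years):
    wealth = wealth * (1.0 + r) + s * income   # capital returns plus savings
    income = income * (1.0 + g)                # incomes grow with output

share_top1 = top_share(wealth, 0.01)
```

Raising s in this toy model lifts everyone’s wealth proportionally to income, so the interesting inequality dynamics in the paper come from the interaction terms that a one-line accumulation rule deliberately omits.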
286 The climate-finance macro-network: mapping the exposures of the financial system to climate policy risks in the Euro Area
Abstract: In the wake of the recent international climate negotiations there is growing interest in the impact of climate policies on the financial system and the possible role of financial institutions in facilitating a decarbonization pathway of the global economy. However, there are no established methodologies to assess gains and losses, and data is scarce and scattered. Further, while it is now understood that the interlinkages among financial institutions can amplify both positive and negative shocks to the financial system, this is seldom investigated. Here, we take a complex-systems perspective on climate policies and we develop a methodology to map the macro-network of financial exposures among the institutional sectors (e.g. non-financial corporations, investment funds, banks, insurance and pension funds, governments and households). This macro-network can be regarded as a multiplex weighted network in which multiple types of links correspond to different financial instruments: equity holdings (ownership shares), corporate and sovereign bonds (tradable debt obligations) and loans (non-tradable debt obligations). We illustrate the approach on recently available data by investigating the evolution of the macro-network of institutional sectors in the Euro Area. In particular, we estimate the exposures of the financial sectors to climate-policy risks, building on our previously developed climate-stress approach (Battiston 2016, ssrn 2726076). We find that while direct exposures to the fossil sector are limited, the combined exposures to other climate-relevant sectors are large for insurance and pension funds as well as investment funds (about 30%-40%). Further, these sectors bear large indirect exposures via banks to the housing sector. As a result of climate policies supporting green technologies and discouraging brown technologies, large portions of assets on the balance sheet of financial institutions are potentially subject to positive or negative revaluation. Our work contributes to a better understanding of the governance of the climate-finance arena.
Veronika Stolbova and Stefano Battiston
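The distinction the abstract draws between direct and indirect exposures (e.g. funds exposed to housing via banks) can be sketched with matrix products over a toy exposure matrix. The sector list, the numbers, and the climate-relevance flags below are entirely invented for illustration; they are not the paper’s data.

```python
import numpy as np

sectors = ["funds", "banks", "housing", "fossil"]
# Hypothetical exposure matrix E[i, j]: claims of sector i on sector j,
# as a fraction of sector i's total assets.
E = np.array([
    [0.00, 0.30, 0.05, 0.10],   # funds
    [0.05, 0.00, 0.40, 0.05],   # banks
    [0.00, 0.00, 0.00, 0.00],   # housing
    [0.00, 0.00, 0.00, 0.00],   # fossil
])

climate_relevant = np.array([0.0, 0.0, 1.0, 1.0])  # housing and fossil flagged

direct = E @ climate_relevant         # first-order exposure
indirect = E @ E @ climate_relevant   # exposure through counterparties
total = direct + indirect
```

In this toy setup the funds’ direct climate-relevant exposure is modest, but the second-order term picks up their large holdings of banks, which are themselves heavily exposed to housing, mirroring the indirect channel the abstract describes.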
238 Circadian Rhythm of Cancellation Rates in Foreign Currency Market
Abstract: The recent proliferation of automated traders has significantly increased the reaction speed of financial markets. As a consequence, new detailed databases have become available that help researchers precisely describe the impact of specific characteristics on the market as a whole. For example, the field of Econophysics emerged to focus on market microstructure using physics methodology, especially to describe order book fluctuations [1]. Market participants annihilate their orders either via cancellation or transaction. Cancellation of orders therefore plays a major role in market fluctuations, and an increasing number of studies examine this subject [2]. Our research mainly focuses on the rate at which limit orders are cancelled in the foreign currency market, according to the time of day and the distance from the market price. We use a precise database from the Electronic Broking System (EBS), which contains an identifier for every order. The database is composed of three weeks. Two of them are from March 06 21:00 (GMT) to March 18 21:00 (GMT), 2011 and one week is from October 30 21:00 (GMT) to November 04 21:00 (GMT), 2011. It contains information about injected and annihilated orders with a minimal tick time of one millisecond. Our study mainly focuses on USD/JPY, EUR/USD and EUR/JPY currency pairs. In investigating the cancellation rate of limit orders, we discover that it follows a circadian rhythm similar to that of transactions. In addition, the cancellation rate of limit orders is high when the distance from the market price is low, and vice versa. In other words, market participants are less patient when their orders have a small distance from the market price. [1] M. Takayasu, T. Watanabe and H. Takayasu, Approaches to Large-Scale Business Data and Financial Crisis, Springer, 2010 [2] X-H Ni, Z-Q Jiang, G-F Gu, F. Ren, W. Chen and W-X Zhou, Phy. A 389, 2751–2761 (2010)
Jean-François Boilard, Misako Takayasu and Hideki Takayasu
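The two conditionings used in the study, cancellation rate by hour of day and by distance from the market price, amount to binned fractions over an order log. Since the EBS data are proprietary, the sketch below runs on a synthetic log with an assumed toy cancellation rule; only the binning machinery is the point.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic limit-order log: hour of day (0-23) and distance from the
# market price in ticks (>= 1), plus a toy rule making orders close to
# the market price more likely to be cancelled.
n = 50_000
hour = rng.integers(0, 24, size=n)
distance = rng.geometric(p=0.3, size=n)        # most orders sit near the price
cancelled = rng.random(n) < 1.0 / distance     # assumed toy cancellation rule

def rate_by(key, mask, n_bins):
    """Fraction of orders flagged by `mask` within each integer bin of `key`."""
    tot = np.bincount(key, minlength=n_bins).astype(float)
    hit = np.bincount(key[mask], minlength=n_bins).astype(float)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(tot > 0, hit / tot, np.nan)

hourly_rate = rate_by(hour, cancelled, 24)                       # circadian profile
dist_rate = rate_by(np.minimum(distance, 10) - 1, cancelled, 10) # by distance (ticks)
```

On real data the `hourly_rate` vector would show the circadian rhythm reported in the abstract; here the synthetic log is flat in time, while the distance profile reproduces, by construction, the "less patient near the price" pattern.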

Biology  (B) Session 4


Time and Date: 10:45 - 12:45 on 22nd Sep 2016

Room: B - Berlage zaal

Chair: Assaf Almog

45 Jamming and stabilization of transport current in random biological networks
Abstract: The transport of organelles and proteins is of vital importance for living cells. Besides passive transport by diffusion, active transport by molecular motors hopping over the cytoskeleton network is crucial for the survival of cells. We performed simulations using the Totally Asymmetric Exclusion Process (TASEP), a paradigmatic model for nonequilibrium transport, to model the dynamics along the complex microtubule network. We found that the rules at the intersections of the network seem to be the key factor for the formation of traffic jams along the microtubule segments. The rate at which motors at a crossing continue along the same microtubule or switch to the other appears to determine the transport along the network. Our simulations of the microtubule network reveal surprisingly rich behavior of the transport current with respect to the global density and exit rate ratio. We found four different regimes of motor propagation through the network depending on the average motor density. For example, for low densities the current/density distribution through the network can be linearized and solved exactly. In contrast, for medium global densities the motor distribution through the network becomes highly inhomogeneous and non-linear, leading to a huge reduction of the transport current through the system when a larger part of the network is in a ‘virtual’ traffic jam. We have also found a broad plateau of the current at intermediate motor densities leading to stabilization of transport properties within such networks [1]. Due to the generality of the exclusion process in modeling transport and arrest phenomena, our results may provide generic insights into traffic jams and transport capacities of highway networks, biological networks, and other systems with similar unidirectional topology. [1] D. V. Denisov, D. M. Miedema, B. Nienhuis, and P. Schall, Phys. Rev. E 92, 052714 (2015).
Dmitry Denisov, Daniel Miedema, Bernard Nienhuis and Peter Schall
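The TASEP named in the abstract is simple enough to sketch in a few lines: particles on a lattice hop one site forward only if the target site is empty. The minimal version below runs on a periodic ring with random sequential updates (no intersections, so none of the paper’s network effects), and recovers the well-known result that the hop success rate decreases with density.

```python
import numpy as np

rng = np.random.default_rng(2)

def tasep_ring(n_sites, n_particles, steps):
    """TASEP on a ring with random sequential update: at each attempt a
    random particle tries to hop one site forward, succeeding only if
    the target site is empty. Returns the fraction of successful attempts."""
    occ = np.zeros(n_sites, dtype=bool)
    pos = rng.choice(n_sites, size=n_particles, replace=False)
    occ[pos] = True
    hops = 0
    attempts = steps * n_particles          # one "sweep" = n_particles attempts
    for _ in range(attempts):
        i = rng.integers(n_particles)
        nxt = (pos[i] + 1) % n_sites
        if not occ[nxt]:                    # exclusion: move only into a hole
            occ[pos[i]] = False
            occ[nxt] = True
            pos[i] = nxt
            hops += 1
    return hops / attempts

# On a ring the expected success rate is (N - n) / (N - 1), roughly 1 - density.
rate_low = tasep_ring(200, 20, steps=500)    # density 0.1
rate_high = tasep_ring(200, 160, steps=500)  # density 0.8
```

Extending this sketch to a network means adding intersection sites with switching probabilities between segments, which is exactly the ingredient the abstract identifies as decisive for jamming.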
428 Noise-induced Cycles in Biological Auctions
Abstract: Competition for resources in a biological context bears a resemblance to auction mechanisms: many agents compete, but only a few (or only one) get the reward. But contrary to the well-studied auction models in economics, a reasonable assumption in this context is that everybody (not only the winner) pays their bid, e.g. time/energy invested to endure a conflict or forage for food. Following the work of Chatterjee et al. 2012, we look at k-player all-pay auctions, searching for the states evolution might favour. We analyse these systems with an associated birth-death process governed by each agent’s strategy success in the repeated interactions, modeled as k-player all-pay auctions. In the large population limit, when the stochasticity can be neglected, we derive replicator equations whose fixed points are the previously found evolutionarily stable strategies for these games. However, in previous works cycles were also noted that could not be explained at the level of the deterministic description. We thus reintroduce the stochasticity (via the diffusion approximation) and show that the intrinsic noise gives rise to the cyclic dynamics. We observe that the cycles are more present when the bidding strategy space is smaller, and when the number of participants in an auction (k) is small. Extending this description to a continuous strategy space, we find that, except for 2-player auctions, cycles are a property of games with a discrete strategy space. Chatterjee, K., Reiter, J. G., Nowak, M. A., 2012. Evolutionary dynamics of biological auctions. Theoretical Population Biology, 81: 69-80.
Aleksandra Aloric, Tobias Galla and Peter Sollich
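The deterministic large-population limit the abstract mentions can be sketched for the simplest case, a 2-player all-pay auction on a discrete bid grid: everyone pays their bid, the higher bid wins the prize, and the replicator dynamics evolves the population mixture over pure strategies. The bid grid, prize, and step size below are assumptions for illustration.

```python
import numpy as np

V = 1.0
bids = np.array([0.0, 0.25, 0.5, 0.75])   # assumed discrete bid grid
n = bids.size

# Payoff matrix of a 2-player all-pay auction: both players pay their
# bid; the higher bid wins the prize V, and ties split it.
A = np.empty((n, n))
for i in range(n):
    for j in range(n):
        win = 1.0 if bids[i] > bids[j] else (0.5 if i == j else 0.0)
        A[i, j] = V * win - bids[i]

def replicator_step(x, dt=0.05):
    """One Euler step of the replicator dynamics dx_i = x_i((Ax)_i - x.Ax)."""
    f = A @ x                  # fitness of each pure strategy
    phi = x @ f                # population mean fitness
    return x + dt * x * (f - phi)

x = np.full(n, 1.0 / n)        # start from the uniform mixture
for _ in range(20_000):
    x = replicator_step(x)
    x = np.clip(x, 0.0, None)
    x = x / x.sum()            # stay on the simplex despite discretisation error
```

The deterministic flow here settles toward a mixed fixed point; the cycles reported in the abstract only appear once the finite-population noise, dropped in this limit, is put back via the diffusion approximation.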
137 Tackling neurodegenerative diseases by computational approaches
Abstract: Neurodegenerative diseases, such as Alzheimer's and Parkinson's, are more common in western countries due to longer life expectancy, and the design of reliable tests for their early detection is becoming a pressing challenge. Several neurological disorders are associated with the aggregation of aberrant proteins, often localized in intracellular organelles such as the endoplasmic reticulum. We have studied protein aggregation kinetics and developed a model to follow the evolution of the aggregation and the critical role played by the cell endoplasmic reticulum. Moreover, since another important goal is to analyze protein aggregation in micron-scale samples, where reproducible results are still hard to achieve, we have developed a strategy to quantify in silico the statistical errors associated with the detection of aggregation-prone samples. Altogether, our work opens a new perspective on the understanding of these pathologies and on the forecasting of protein aggregation in asymptomatic subjects.
Caterina La Porta, Giulio Costantini, Zoe Budrikis and Stefano Zapperi
42 Functional modules without functional networks: resolving brain organisation via random matrix theory
Abstract: The mesoscopic structure of complex brain networks is the key intermediate level of organisation bridging the microscopic dynamics of individual neurons with the macroscopic dynamics of the brain as a whole. At this mesoscopic level, brain activity tends to be organized in a modular way, with functionally related units being positively correlated with each other, while at the same time being relatively less (or even negatively) correlated with dissimilar ones. Such emergent organisation is mainly detected through the measurement of cross-correlations among time series of brain activity, the projection (usually via an arbitrary threshold) of these correlations onto a network, and the subsequent search for denser modules (or so-called communities) in the network. It is well known that this approach suffers from an unavoidable information loss induced by the thresholding procedure. Another, less widely recognised, limitation is the bias introduced by the use of network-based (as opposed to correlation-based) community detection methods. Here we discuss an improved method for the identification of functional brain modules based on random matrix theory. Our method is threshold-free, correlation-based, and very powerful in filtering out both local unit-specific noise and global system-wide dependencies. The approach is guaranteed to identify mesoscopic functional modules that, relative to the global signal, have an overall positive internal correlation and negative mutual correlation. We apply our method to time series of individual neurons in several samples of the suprachiasmatic nucleus (SCN) of mice, a small pacemaker region where strong spatial and temporal dependencies make the identification of substructure particularly challenging. We systematically detect two main functional modules, core and periphery, which are perfectly anti-correlated once the strong global signal is filtered out. 
These modules turn out to largely correspond to neuron populations with true biological differences, e.g. in the neurotransmitters used and in their coupling and synchronisation properties.
Assaf Almog, Ori Roethler, Renate Buijink, Stephan Michel, Johanna Meijer, Jos Rohling and Diego Garlaschelli
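A minimal sketch in the spirit of the threshold-free, correlation-based filtering described above: random matrix theory says that the eigenvalues of a purely noisy correlation matrix stay below the Marchenko-Pastur edge, so components above it carry genuine structure. The synthetic two-group "core/periphery" data and all parameters below are invented for illustration, and the sketch omits the paper's separate removal of the global mode.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "neural" time series: two perfectly anti-correlated groups
# (a stand-in for core and periphery) buried in strong unit-level noise.
n_units, t_len = 40, 2000
signal = rng.standard_normal(t_len)
group = np.repeat([1.0, -1.0], n_units // 2)
data = np.outer(group, signal) + 2.0 * rng.standard_normal((n_units, t_len))

z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
C = (z @ z.T) / t_len                        # empirical correlation matrix

# Marchenko-Pastur edge for an n x t random correlation matrix:
# eigenvalues below (1 + sqrt(n/t))^2 are compatible with pure noise.
lam_max = (1.0 + np.sqrt(n_units / t_len)) ** 2
w, v = np.linalg.eigh(C)
keep = w > lam_max
C_filtered = (v[:, keep] * w[keep]) @ v[:, keep].T

h = n_units // 2
within = C_filtered[:h, :h][~np.eye(h, dtype=bool)].mean()
between = C_filtered[:h, h:].mean()
```

After filtering, the surviving structure shows positive within-group and negative between-group correlation, the signature of the anti-correlated core/periphery split reported for the SCN.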
205 Towards network-oriented circadian clock research
Abstract: The circadian clock in the suprachiasmatic nucleus (SCN), located in the hypothalamus in the brain, is important for the regulation of our daily and seasonal rhythms. It has been shown that the neuronal network organization of the SCN changes in different seasons; however, the mechanisms behind these changes remain elusive. Furthermore, only a subset of neurons within the SCN network are directly responsive to light, which poses the question of how encoding for seasons is achieved in the SCN network. Currently, not much is known about the function of the regional heterogeneity in the SCN in seasonal adaptation. Using time series of single cells, we have applied a new community detection method to identify communities of cells in winter and summer conditions. This impartial method detected mostly two communities, which we mapped to the SCN neuronal network and further characterized in their functional significance. Anterior regions encode for more phase dispersion, while posterior regions encode for more phase synchrony. Within the anterior SCN, the cells in the dorso-lateral region show more variability in their oscillatory periods in summer conditions, which means that these cells are more weakly coupled, enabling higher phase dispersion among the cells. Ventro-medially located cells in the anterior SCN and cells in the posterior SCN are more rigid in their oscillatory behaviour. This suggests that the cells in the dorso-lateral anterior region of the SCN play an active role in the phase adjustments of the SCN cells in different seasons. Our new network analysis approach enhances the identification and the subsequent functional characterization of neuronal clusters in the SCN, possibly paving the way for more elaborate network analysis on the level of single-cells in other brain regions.
M. Renate Buijink, Assaf Almog, Charlotte B Wit, Ori Roethler, Anneke H. O. Olde Engberink, Johanna H Meijer, Diego Garlaschelli, Jos H T Rohling and Stephan Michel
401 Large-Scale Brain Network Dynamics with BrainX3
Abstract: BrainX3 is a large-scale simulation of human brain activity with real-time interaction, rendered in 3D in a virtual reality environment, which combines computational power with human intuition for the exploration and analysis of complex dynamical networks. We ground this simulation in structural connectivity obtained from diffusion spectrum imaging data and model it on neuronal population dynamics. Users can interact with BrainX3 in real-time by perturbing brain regions with transient stimulations to observe reverberating network activity, simulate lesion dynamics or implement network analysis functions from a library of graph theoretic measures. BrainX3 can thus be used as a novel immersive platform for exploration and analysis of dynamical activity patterns in brain networks, both at rest and in a task-related state, for discovery of signaling pathways associated with brain function and/or dysfunction, and as a tool for virtual neurosurgery. Our results demonstrate these functionalities and provide insight into the dynamics of the resting-state attractor. Specifically, we found that a noisy network seems to favor a low firing attractor state. We also found that the dynamics of a noisy network are less resilient to lesions. Our simulations on TMS perturbations show that even though TMS inhibits most of the network, it also sparsely excites a few regions. This is presumably due to anti-correlations in the dynamics and suggests that even a lesioned network can show sparsely distributed increased activity compared to healthy resting-state, over specific brain areas.
Xerxes Arsiwalla, Riccardo Zucca, David Dalmazzo, Pedro Omedas, Gustavo Deco and Paul Verschure

Biology  (B) Session 5


Time and Date: 10:45 - 12:45 on 22nd Sep 2016

Room: C - Veilingzaal

Chair: Silvia Bartolucci

283 Modelling influenza A at the human-animal interface
Abstract: Type A influenza poses a serious risk to the health of the global population, due to its many strains and its ability to inhabit a diverse range of hosts. Very occasionally, humans become infected with a virus derived from non-human sources. These are essentially novel to humans. Because such viruses meet little or no established resistance, they can, following mutation and adaptation to their new host, spread relatively easily in the human population. This can give rise to a localised outbreak that may develop into a worldwide influenza pandemic. Despite this, there is a worrying gap in the modelling of spillover transmission from animals to humans, a crucial element of the system in the lead up to an influenza pandemic event. We explore developments to address the lack of established mathematical modelling tools in this area, with the applied aim of being able to evaluate the effectiveness of control strategies in reducing pandemic risk. In particular, we review two data-driven studies: (i) a statistical analysis of the time periods between influenza pandemics since 1700, to determine whether the emergence of new pandemic strains is either a memoryless or history-dependent process; (ii) constructing a spatial model, incorporating poultry-to-poultry and poultry-to-human transmission, applied to H5N1 epidemics in Bangladesh occurring between 2007 and 2011. These studies provide insights into the risk to humans associated with avian influenza outbreaks and the control strategies that should be utilised, across both human and livestock species, in the event of future influenza epidemics.
Edward Hill, Michael Tildesley and Thomas House
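The memoryless-vs-history-dependent question in study (i) boils down to whether inter-pandemic waiting times look exponential. One quick diagnostic is the coefficient of variation, which equals 1 for an exponential distribution; the sketch below compares an observed CV to a simulated null. The list of pandemic years is an illustrative set of commonly cited dates, not the paper's catalogue, and the test itself is an assumption about their methodology.

```python
import numpy as np

# Illustrative influenza pandemic years since 1700 (commonly cited
# dates; the study's actual catalogue may differ).
years = np.array([1729, 1781, 1830, 1889, 1918, 1957, 1968, 2009])
gaps = np.diff(years).astype(float)

def cv(x):
    """Coefficient of variation: std/mean, equal to 1 for an exponential
    (i.e. memoryless) waiting-time distribution."""
    return float(x.std(ddof=1) / x.mean())

obs_cv = cv(gaps)

# Null distribution of the CV for the same number of exponential gaps,
# obtained by Monte Carlo simulation.
rng = np.random.default_rng(4)
sims = np.array([cv(rng.exponential(gaps.mean(), size=gaps.size))
                 for _ in range(5000)])
p_value = 2 * min((sims >= obs_cv).mean(), (sims <= obs_cv).mean())
```

A CV well below 1, as here, hints at more regular spacing than a memoryless process would produce, though with only a handful of gaps the power of any such test is limited.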
554 Influence of precipitation variance on lake eutrophication: the case study of Lake Bourget
Abstract: A major cause of regime shifts in lakes is nutrient overload, coming mostly from agricultural fertilizers. Moreover, intensive agriculture has weakened soils, which leads to greater leaching of nutrients during heavy rains. The objective of the paper is to model the effect of rainfall variability on lake regime shifts, despite lake regulation. IPCC (Intergovernmental Panel on Climate Change) reports show that increases in extreme rain events are expected. Therefore, the multiplication of significant floods could result in nutrient over-enrichment, disrupting the equilibrium of lakes and causing eutrophication. Our case study is Lake Bourget in France. We build and calibrate a model based on annual phosphorus dynamics. Results show that the drought of the 2000s is fostering a return to an oligotrophic state of the lake. We also show that lake regulation has been effective in reducing phosphorus input, enabling compliance with the objective recommended by the OECD (Organisation for Economic Co-operation and Development) for 2020. A return to previous rainfall variability has only a limited effect on the probability of a regime shift in the short and long term. However, increasing the variance of loading by 25% may decrease the probability of maintaining Lake Bourget in an oligotrophic state until 2100 from 98% to 81%.
Antoine Brias, Jean-Denis Mathias and Guillaume Deffuant
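The qualitative effect described, that higher loading variance at fixed mean loading makes a regime shift more likely, can be reproduced with a stylised annual phosphorus model of the classic recycling type. All parameter values, the lognormal loading distribution, and the eutrophic threshold below are assumptions for illustration; they are not the paper's calibration for Lake Bourget.

```python
import numpy as np

rng = np.random.default_rng(5)

def shift_probability(load_cv, n_runs=1000, years=100):
    """Fraction of simulated runs in which a stylised lake-phosphorus
    model leaves the low (oligotrophic) state within `years` years.
    Dynamics: P(t+1) = P(t) + load - s*P(t) + r*P^q / (m^q + P^q)."""
    s, r, m, q = 0.65, 0.7, 1.0, 8.0         # assumed loss/recycling parameters
    mean_load = 0.35                         # assumed mean annual P loading
    sigma = np.sqrt(np.log(1.0 + load_cv ** 2))
    mu = np.log(mean_load) - sigma ** 2 / 2  # lognormal with the given mean and CV
    p = np.full(n_runs, 0.3)
    shifted = np.zeros(n_runs, dtype=bool)
    for _ in range(years):
        load = rng.lognormal(mu, sigma, size=n_runs)
        p = p + load - s * p + r * p ** q / (m ** q + p ** q)
        shifted |= p > 1.3                   # clearly in the eutrophic basin
    return float(shifted.mean())

p_low = shift_probability(load_cv=0.2)       # low loading variability
p_high = shift_probability(load_cv=0.6)      # same mean loading, higher variance
```

Because the mean loading is identical in both runs, the difference in shift probability comes purely from variance: occasional large loading years push phosphorus past the unstable threshold, after which internal recycling carries the lake to the eutrophic state.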
171 Using the human disease multiplex network to disentangle genetic and environmental risk factors for diseases
Abstract: Most disorders are caused by a combination of multiple genetic and environmental factors. If two diseases are caused by the same mechanism, they often co-occur in patients. Here we disentangle how much genetic and environmental risk factors each contribute to the pathogenesis of 358 individual diseases. We pool data on genetic, pathway-based, and toxicogenomic disease-causing mechanisms with co-occurrences obtained from almost two million patients. From this data we construct a multilayer network where nodes represent disorders that are connected by links that either represent phenotypic comorbidity or the joint involvement of certain mechanisms. We quantify the similarity of phenotypic and mechanism-based links for each disorder. Most diseases are dominated by genetic risk factors, while environmental influences prevail for disorders such as depression, cancers, or dermatitis. The relevance of environmental risk factors for a given disease is inversely related to its broad-sense heritability and also inversely related to the rate at which new drugs for the disease are approved. This might be indicative of a lack of successful drug development for diseases with high environmental risks. Our approach allows us to rule out certain types of disease-causing mechanisms when their implied comorbidities are not observed and might therefore be used to identify promising leverage points for the development of future therapies of multifactorial diseases.
Peter Klimek, Silke Aichberger and Stefan Thurner
567 Sub-clinical and clinical effects of infectious agents on food web stability
Abstract: Infectious agents affect the behaviour and vital rates of their hosts, thereby influencing the interactions between species in the community and potentially changing the stability of the ecosystem. Empirical examples show a variety of ways in which different types of infectious agents can affect their hosts. We take an indirect approach in investigating wider community effects of these influences on hosts at different trophic levels. By decreasing and increasing resource preferences of consumers, conversion efficiencies and growth rates, we mimic the subclinical and clinical influence of an infection in the community. Via the influence of infectious agents on their hosts, food webs can become more or less stable, as measured by the largest real part of the eigenvalues of the community matrix. The potential effects of the infectious agents show various consequences for the stability of the system, even for the same focal species and the same role of that species as a consumer or resource. Our results show that the influence of infection on the resource preference of consumers has more impact on stability than its effect on their conversion efficiencies. Subclinical and clinical effects of infectious agents in focal host species more frequently lead to an increase than to a decrease in the stability of the community. The study suggests that infectious agents may be important for the stability of ecosystems.
Sanja Selakovic and Hans Heesterbeek
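The stability measure used here, the largest real part of the community-matrix eigenvalues, is straightforward to compute. The sketch below uses a generic random interaction matrix with diagonal self-regulation as a stand-in for an empirically parameterised food web, and scales one focal species' interaction strengths to mimic an infection effect; the specific matrices and scaling factor are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def max_real_eig(J):
    """Stability measure from the abstract: the largest real part of the
    eigenvalues of the community matrix (negative means locally stable)."""
    return float(np.linalg.eigvals(J).real.max())

def random_community_matrix(n, strength, self_reg=2.0):
    """Random interaction matrix with self-regulation on the diagonal,
    a generic stand-in for a parameterised food-web Jacobian."""
    J = strength * rng.standard_normal((n, n))
    np.fill_diagonal(J, -self_reg)
    return J

J = random_community_matrix(30, strength=0.2)
baseline = max_real_eig(J)

# Mimic a "clinical" infection effect in one focal consumer by scaling
# its interactions: its row (effects of others on it) and its column
# (its effects on others), analogous to an altered resource preference.
J_sick = J.copy()
J_sick[0, 1:] *= 1.5
J_sick[1:, 0] *= 1.5
perturbed = max_real_eig(J_sick)
```

Comparing `perturbed` against `baseline` over many random draws, and for different perturbation targets (preference vs. efficiency terms), is the kind of experiment the abstract summarises; a single draw can move the eigenvalue in either direction.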
40 Labyrinth-like population structures emerge as a consequence of multi-level selection in self-organized mussel beds
Abstract: In self-organized ecosystems, it is evident that group-level properties emerge from large-scale spatial pattern formation that promote survival of the organisms within the population. However, how these emergent properties influence the evolution of self-organizing traits and thereby affect spatial pattern formation itself remains unknown. Here, we demonstrate that aggregation into clusters in self-organized mussel beds adds a group-level selection pressure, which can cause the evolution of labyrinth-producing behaviour in mussels. We use a modelling approach that includes a high level of ecological detail to investigate the evolution of two self-organizing traits, cooperation and aggregative movement, in spatially patterned mussel beds, where mussels aggregate and attach byssus threads (a glue-like substance) to neighbouring conspecifics in order to decrease losses to predation and wave stress. We developed a mechanistic, individual-based model of spatial self-organization where individual strategies of movement and attachment generate spatial patterns, which in turn determine the fitness consequences of these strategies. By combining an individual-based simulation approach for studying spatial self-organization within generations with an analytical adaptive dynamics approach that studies selection pressures across generations, we are able to predict how the evolutionary outcome is affected by environmental conditions. When selection pressures on cooperation and movement are only governed by local interactions, that is, the attachment of individuals to their neighbours, evolution typically does not result in the labyrinth-like spatial patterns that are characteristic of mussel beds. However, when we include a second level of selection by considering the additional protection provided by the formation of mussel clumps, evolutionarily stable movement and attachment strategies lead to labyrinth-like patterns under a wide range of conditions.
Monique de Jager, Johan van de Koppel and Franjo Weissing
101 Mapping multiplex hubs in human functional brain network
Abstract: Typical brain networks consist of many peripheral regions and a few highly central ones, i.e. hubs, playing key functional roles in cerebral inter-regional interactions. Studies have shown that networks, obtained from the analysis of specific frequency components of brain activity, present peculiar architectures with unique profiles of region centrality. However, the identification of hubs in networks built from different frequency bands simultaneously is still a challenging problem, remaining largely unexplored. Here we identify each frequency component with one layer of a multiplex network and face this challenge by exploiting the recent advances in the analysis of multiplex topologies. We first show that each frequency band carries unique topological information, fundamental to accurately model brain functional networks. This result suggests that information from frequency bands which are typically neglected might play a crucial role in understanding brain function. By using node versatility, i.e. the natural extension of the concept of node centrality from classical networks to multilayer systems, we then demonstrate that hubs in the multiplex network are in general different from those obtained after discarding or aggregating the measured signals, as is usually done, and provide a more accurate map of the brain's most important functional regions. Finally, as a clinical application, we use the brain's versatility profile to distinguish between healthy and schizophrenic populations, achieving higher accuracy than conventional network approaches.
Manlio De Domenico, Shuntaro Sasai and Alex Arenas
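Versatility in this line of work follows the tensorial multilayer formulation of De Domenico et al.; a simplified eigenvector-based sketch can be built on the supra-adjacency matrix, with layers on the block diagonal and an assumed inter-layer coupling between replicas of the same node. The two toy layers below are constructed so that different nodes are hubs in different layers, which naive aggregation would blur.

```python
import numpy as np

def versatility(layers, omega=1.0):
    """Eigenvector-style versatility sketch: the leading eigenvector of
    the supra-adjacency matrix (layer adjacencies on the diagonal, coupling
    omega between replicas of the same node), summed over layers."""
    L, n = len(layers), layers[0].shape[0]
    supra = np.zeros((L * n, L * n))
    for a in range(L):
        supra[a*n:(a+1)*n, a*n:(a+1)*n] = layers[a]
        for b in range(L):
            if a != b:
                supra[a*n:(a+1)*n, b*n:(b+1)*n] = omega * np.eye(n)
    w, v = np.linalg.eigh(supra)
    lead = np.abs(v[:, -1])              # Perron eigenvector of the supra matrix
    return lead.reshape(L, n).sum(axis=0)

# Two toy layers over 4 nodes: node 0 is the hub of layer 0 and node 3
# the hub of layer 1, mimicking band-specific centrality profiles.
l0 = np.array([[0, 1, 1, 1], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]], float)
l1 = np.array([[0, 0, 0, 1], [0, 0, 0, 1], [0, 0, 0, 1], [1, 1, 1, 0]], float)
scores = versatility([l0, l1], omega=0.5)
```

Nodes 0 and 3 come out on top because they are hubs in at least one layer, while a single aggregated network would rank them much closer to the purely peripheral nodes 1 and 2.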

Urban  (U) Session 3


Time and Date: 10:45 - 12:45 on 22nd Sep 2016

Room: D - Verwey kamer

Chair: Garvin Haslett

471 Complex Dynamics of Urban Traffic Congestion: A novel kinetic Monte Carlo simulation approach [abstract]
Abstract: Transitions observed in the dynamical patterns of vehicular traffic, for instance as a result of changes in traffic density, form an important class of phenomena that large-scale modeling with many interacting agents seeks to explain. While the dynamics of highway traffic has been the subject of intense investigation over the last few decades, there is as yet comparatively little understanding of the patterns of urban traffic. The macroscopic collective behavior of cars in the network of roads inside a city is marked by relatively high vehicular densities and the presence of signals that coordinate the movement of cross-flowing traffic traveling along several directions. We have devised a novel kinetic Monte Carlo simulation approach for studying the dynamics of urban traffic congestion, allowing the study of continuous-time, continuous-space traffic flow, which contrasts with the dominant paradigm of cellular automata models. Well-known results of such discrete models for traffic flow in the absence of any intersections can be easily reproduced in this framework. More importantly, the behavior in the presence of an intersection, where cross-flowing traffic is regulated by a signal, is seen to produce novel features. The fundamental diagram of traffic flow in the presence of a signal shows a broad plateau, indicating that the flow is almost independent of small variations in vehicle density over an intermediate range of densities. This is unlike the case without intersections, where a sharp transition is observed between free-flow behavior and jamming as vehicle density changes. The distribution of congestion times shows a power-law scaling regime over an extended range in the stochastic case, when exponential-like right-skewed probability distributions are used. These results are then compared with empirically observed power-law behavior in congestion-time distributions for urban traffic obtained from the cities of Delhi, Bengaluru and Mumbai.
Abdul Majith and Sitabhra Sinha
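For readers unfamiliar with the technique, the core event loop of a kinetic Monte Carlo simulation can be sketched in a few lines. This is a deliberately minimal illustration (cars hopping on a discrete ring at a uniform rate, with function names of our choosing), not the authors' continuous-time, continuous-space model:

```python
import random

def kmc_step(positions, L, rate=1.0, rng=random):
    """One kinetic Monte Carlo step for cars hopping on a ring road of L sites.

    positions: list of occupied sites; a car may hop forward only if the
    site ahead is empty. Returns the exponentially distributed waiting
    time elapsed before the chosen hop.
    """
    occupied = set(positions)
    movable = [i for i, x in enumerate(positions)
               if (x + 1) % L not in occupied]
    if not movable:
        return float("inf")  # complete jam: no event can occur
    dt = rng.expovariate(rate * len(movable))  # time to next event
    i = rng.choice(movable)                    # all enabled hops equally likely
    positions[i] = (positions[i] + 1) % L
    return dt
```

Summing the returned waiting times over many steps yields flow and congestion-time statistics in continuous time, without the fixed clock tick of a cellular automaton.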
255 Models of growth for systems of cities: Back to the simple [abstract]
Abstract: Understanding growth patterns in complex systems of cities through modeling is an active branch of quantitative geography. Complex agent-based models have recently provided promising results through multi-modeling and intensive computation for pattern discovery and calibration. However, simple interaction-based extensions of seminal models of growth (such as the Gibrat model) have not yet been tested and calibrated against real datasets. We propose a spatial model of urban growth extending the Gibrat model by adding the contributions of gravity-based interactions to expected growth rates. Deriving the moments of the stochastic model allows us to implement a deterministic version on expectations. Working with the Pumain-INED harmonized database for French cities (population of urban areas for 1831-1999), the 4-parameter interaction model is calibrated through intensive grid computation, using the OpenMole software, yielding e.g. the characteristic interaction distance at different periods. We then add a second-order term aimed at integrating interactions between physical transportation networks and cities, through a feedback of physical flows on traversed cities. This allows us to obtain better fits and to reproduce stylized facts such as hierarchy inversions and the appearance of the “tunnel effect” with the development of the railway network. We furthermore introduce a novel method to assess the impact of adding parameters to a simulation model on the information effectively gained, as an extension of the Akaike Information Criterion to simulation models. This empirical AIC is estimated by comparing AICs for statistical models, with the same number of parameters, that best fit the behavior space obtained by exploration. It confirms that our extension provides a gain of information on the French city system. This contribution provides renewed insight into simple models of urban growth for systems of cities, which prove to have good explanatory potential. It also introduces a methodology to tackle the open question of quantifying overfitting in simulation models.
Juste Raimbault
248 Individual-based stochastic model of demographic fluctuations in cities [abstract]
Abstract: In recent years, it has been shown that many seemingly unrelated natural phenomena (earthquake magnitudes, word frequencies, astronomical masses and city sizes, to name a few) can be asymptotically described by a small collection of empirical distributions. Of these, perhaps the most prolific is Zipf's Law. When applied to the size distribution of cities, Zipf's Law states that the population of a city is inversely proportional to its rank, and this has been shown to hold for city sizes both globally and historically. The existence of this global distribution of city sizes places a constraint on models of city growth. The most widely accepted model is proportionate random growth, which constrains growth rates to be identically distributed and independent of city size. Despite proportionate random growth being the accepted mechanism behind the evolution of city sizes, there is no consensus on a model describing the underlying stochastic processes that govern city growth rates. Furthermore, Zipf's Law is only present in the tail of the distribution of city sizes and does not fit the distribution as a whole, suggesting that proportionate random growth alone is not a complete model. Here we present a model of births and deaths that both reproduces Zipf's Law in the upper tail and accounts for its absence in the distribution of smaller cities. We demonstrate that the observed proportionate random growth is a consequence of the interaction of these processes. The model is validated using census data on counties in the United States. Our results can be applied to other systems in which Zipf's law arises from the interaction of underlying processes, and may explain why this distribution occurs across such a diverse set of natural phenomena.
Charlotte R. James, Filippo Simini and Sandro Azaele
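The proportionate-random-growth baseline discussed above is simple enough to sketch directly. The following is a generic Gibrat process with a lower reflecting barrier (the classic route to a Zipf-like upper tail), offered as background only; it is not the authors' birth-death model, and all parameter values are illustrative:

```python
import random

def gibrat_with_floor(n_cities=1000, steps=500, floor=1.0, sigma=0.1, seed=0):
    """Proportionate random growth with a lower reflecting barrier.

    Each city's population is multiplied by an i.i.d. lognormal factor,
    independent of its size (Gibrat's law); the floor prevents collapse
    and is the classic ingredient that yields a Zipf-like upper tail.
    Returns populations sorted from largest to smallest.
    """
    rng = random.Random(seed)
    sizes = [1.0] * n_cities
    for _ in range(steps):
        sizes = [max(floor, s * rng.lognormvariate(0.0, sigma)) for s in sizes]
    return sorted(sizes, reverse=True)
```

Plotting rank against size on log-log axes for the output exhibits the approximately straight upper tail that Zipf's Law describes.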
455 An Information Theoretical Global Epidemic Prediction Model [abstract]
Abstract: Dengue fever is a multi-serotype mosquito-borne disease that is steadily increasing in incidence worldwide and shares animal vectors with other rapidly spreading viruses such as the Zika virus. In the context of the Epidemic Prediction Initiative, a new computational method is proposed for constructing Stochastic Generalized Linear Models (SGLM) with multiple, diversely lagged input factors, with the aim of improving prediction accuracy. The proposed method uses mutual information (MI) to evaluate the dependencies between predictive and outcome variables at different time lags. The window with the highest MI in the time series of each predictive variable is selected as the input of a negative binomial SGLM that predicts the weekly incidence of DF. More precisely, total cases, outbreak timing and magnitude are the variables used to design the most accurate predictive model. Global Sensitivity and Uncertainty Analysis (GSUA) is applied to attribute the variability of the output to each predictive factor and their interactions. Results reflect the dependence of Dengue fever incidence on the micro/meso ecosystem. For instance, temperature and humidity are more important in urban settings such as San Juan, while NDVI is more important in rural settings such as Iquitos, Peru. For both study sites, annual and inter-annual trends and autoregressive components are the most influential independent variables. MI allows one to construct a variable-lag factor model that can both investigate the universal epidemiology of a disease and make useful, site-dependent fine-resolution predictions. Thus, the mutual-information-based SGLM is proposed as a powerful epidemic prediction model, not just for Dengue fever but also for any other environment-dependent infectious disease.
Yang Liu and Matteo Convertino
452 Enhanced Adaptive Management for Population Health: Integrating Ecosystem and Stakeholder Dynamics using Information Theoretic Models [abstract]
Abstract: Ecosystem health issues abound worldwide, with environmental implications and impacts on animal and human populations. The complexity of addressing problems systemically in the policy arena on one side, and the lack of computational technologies for quantitative public policy on the other, have led to a worsening of ecosystem health. We propose to enhance existing adaptive management efforts with an integrated decision-analytical and environmental dynamic model that can guide the strategic selection of robust ecosystem restoration alternatives. The model can inform the need to adjust these alternatives in the course of action based on continuously acquired monitoring information and changing stakeholder values. We demonstrate an application of enhanced adaptive management for a wetland restoration case study inspired by the Florida Everglades restoration effort. This has implications for the environment and for animal and human health, and embraces the sustainability paradigm quantitatively. In terms of diseases, we particularly look into waterborne and water-based diseases (for instance Zika, Dengue, Chikungunya, West Nile and Yellow Fever). In relation to the Everglades, we find that alternatives designed to reconstruct the pre-drainage flow may have a positive ecological and animal health impact, but may also have high operational costs and only marginally contribute to meeting other objectives, such as the reduction of flooding that has catastrophic human impact and morbidities in terms of deaths and infectious disease symptoms. Enhanced adaptive management allows managers to guide investment in ecosystem modeling and monitoring efforts through scenario and value-of-information analyses to support optimal restoration strategies in the face of uncertain and changing information. Thus, the model allows decision makers to explore the full landscape of possible scenarios before taking decisions, and to dynamically design the system considering stakeholder values, economic and political constraints, ecosystem dynamics and surprises.
Matteo Convertino

Foundations & Physics  (FP) Session 2

Time and Date: 10:45 - 12:45 on 22nd Sep 2016

Room: E - Mendes da Costa kamer

Chair: Ioannis Anagnostou

411 Controllability Criteria for Discrete-Time Non-Linear Dynamical Networks [abstract]
Abstract: Controllability of networked systems with non-linear dynamics remains an interesting challenge, with widespread applications to problems ranging from engineering to biology. As a step in this direction, this paper explores global controllability criteria for discrete-time non-linear networks. We identify two classes of non-linear networks: those with non-linear edge dynamics and those with non-linear node dynamics. For each of these classes, we formulate the global controllability matrix and discuss the corresponding controllability conditions. In the first case, we obtain a time-dependent controllability matrix, whereas in the second we obtain a non-linear operator. We point to a network interpretation of controllability, associated with the linear independence of sets of paths from driver nodes to every node of the network, and comment on possible applications of our formalism.
Xerxes Arsiwalla, Baruch Barzel and Paul Verschure
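As background, the linear ancestor of these criteria is the Kalman rank condition for the discrete-time system x(t+1) = A x(t) + B u(t). A minimal sketch of that baseline (our own illustration; the paper's contribution is its non-linear generalisations):

```python
import numpy as np

def controllability_matrix(A, B):
    """Kalman controllability matrix C = [B, AB, ..., A^(n-1) B]
    for the linear discrete-time system x(t+1) = A x(t) + B u(t)."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def is_controllable(A, B):
    """The system is controllable iff the controllability matrix has full rank."""
    return np.linalg.matrix_rank(controllability_matrix(A, B)) == A.shape[0]
```

In network terms, A encodes the weighted adjacency structure and the columns of B select the driver nodes; the full-rank condition then parallels the path-independence interpretation mentioned in the abstract.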
270 Hamiltonian control to Kuramoto model of synchronization [abstract]
Abstract: Synchronization phenomena have attracted the interest of scientific communities across different fields for a long time. Synchronization plays a decisive role especially as a self-organizing mechanism, manifesting in biology, e.g. in the mating flashes of fireflies, and in physics, e.g. in Josephson junction arrays [1]. In fact, the resonance effect underlying such behaviour is in many cases of vital importance in Nature [1]. However, synchronization is not always the desirable outcome of a physical process. This is the case, for example, of the Millennium Bridge in London [2]: due to the strong coupling of the bridge's mechanical parts, it started swaying once a sufficient number of pedestrians attempted to cross it. In this paper we propose a completely unconventional and innovative control method for the synchronization problem. The idea is to prevent a set of weakly coupled nonlinear oscillators from phase-synchronizing. Based on a recent work [3] in which a Hamiltonian formulation of the seminal Kuramoto model [4] was presented, we construct a control technique using Hamiltonian control methods. By adding a control term of magnitude O(ε²) (where ε is the size of the coupling strength) to the Hamiltonian of the Kuramoto equation, the system not only does not synchronize, but is also robust against resonance phenomena, which never occur. The results we obtained using a simple paradigmatic model of synchronization show that it is possible to design complex systems, e.g. mechanical structures, immune to any resonance effect by simply making a small modification to the original system. References [1] S.H. Strogatz, Sync: The Emerging Science of Spontaneous Order, Hyperion (2003). [2] S. Strogatz et al., Nature 438, 43-44 (2005). [3] D. Witthaut and M. Timme, Phys. Rev. E 90, 032917 (2014). [4] Y. Kuramoto, Chemical Oscillations, Waves, and Turbulence, Springer-Verlag, New York (1984).
Oltiana Gjata, Malbor Asllani and Timoteo Carletti
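The uncontrolled dynamics that the proposed method acts on is the standard Kuramoto model, dθ_i/dt = ω_i + (K/N) Σ_j sin(θ_j − θ_i). A plain Euler-integration sketch in its mean-field form, for orientation only (the O(ε²) Hamiltonian control term of the paper is not included, and the function name is ours):

```python
import cmath
import math
import random

def kuramoto_r(omegas, K, T=2000, dt=0.05, seed=0):
    """Euler integration of the mean-field Kuramoto model; returns the
    final order parameter r = |<e^{i theta}>|, with r ~ 0 for an
    incoherent population and r ~ 1 for a synchronized one."""
    rng = random.Random(seed)
    N = len(omegas)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(N)]
    for _ in range(T):
        mean = sum(cmath.exp(1j * t) for t in theta) / N   # r * e^{i psi}
        r, psi = abs(mean), cmath.phase(mean)
        # Mean-field form: dtheta_i/dt = omega_i + K * r * sin(psi - theta_i)
        theta = [t + dt * (w + K * r * math.sin(psi - t))
                 for t, w in zip(theta, omegas)]
    return abs(sum(cmath.exp(1j * t) for t in theta) / N)
```

Above the critical coupling the order parameter climbs toward 1; the paper's control scheme is designed to keep it low even in that regime.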
493 Concurrent enhancement of percolation and synchronization in adaptive networks [abstract]
Abstract: Co-evolutionary adaptive mechanisms are not only ubiquitous in nature, but also beneficial for the functioning of a variety of systems. Here we consider an adaptive network of oscillators with a stochastic, fitness-based rule of connectivity, and show that it self-organizes from fragmented and incoherent states to connected and synchronized ones. Synchronization and percolation are associated with abrupt transitions, and they are concurrently (and significantly) enhanced as compared to the non-adaptive case. Finally, we provide evidence that partial adaptation alone is sufficient to produce these enhancements. Our study therefore indicates that the inclusion of simple adaptive mechanisms can efficiently describe some emergent features of the collective behavior of networked systems, and also suggests self-organized ways to control synchronization and percolation in natural and social systems.
Guido Caldarelli, Young-Ho Eom and Stefano Boccaletti
21 Modelling the Air-Water Interface [abstract]
Abstract: The air-water interface is of huge importance to a wide range of environmental, biological and industrial chemistry. It shows complex behaviour and continues to surprise both experimental and theoretical communities. For many years the biological physical chemistry community has highlighted the different behaviour of water in and close to hydrophobic surfaces, such as proteins or lipid membranes. Recent work on ellipsometry at the air-water interface has suggested that the refractive index of the surface region may be significantly higher than that of bulk water. This higher refractive index would not only imply a significant change of interactions in water at a hydrophobic region, but would also impact the interpretation of many non-linear spectroscopic studies, as they rely on the linear optical properties being understood. We investigate this behaviour using the Amber 12 molecular mechanics software. However, classical molecular mechanics simulations are generally parameterised to accurately recreate bulk mechanical, electronic and thermodynamic properties; the interfacial and surface regions of atomistic and molecular systems tend to be neglected. In order to ensure accurate surface behaviour we have implemented ways to deal with long-range Lennard-Jones corrections in systems containing interfaces, based on the methodology of Janecek. We show how these corrections are important for replicating surface behaviour in water, and present a novel way to thermodynamically estimate surface energetic and entropic terms.
Frank Longford, Jeremy Frey, Jonathan Essex and Chris-Kriton Skylaris
177 On the Collatz conjecture: a contracting Markov walk on a directed graph. [abstract]
Abstract: The Collatz conjecture is named after Lothar Collatz, who first proposed it in 1937. The conjecture is also known as the (3x+1) conjecture, the Ulam conjecture, Kakutani's problem, the Thwaites conjecture, Hasse's algorithm or the Syracuse problem. It can be formulated as an innocent problem of arithmetic. Take any positive integer n. If n is even, divide it by 2 to get n/2. If n is odd, multiply it by 3 and add 1 to obtain 3n+1. Repeating the process iteratively, the map is believed to converge to a period-3 orbit formed by the triad {1,2,4}. Equivalently, the conjecture states that the Collatz map will always reach 1, no matter what integer one starts with. Numerical experiments have confirmed the validity of the conjecture for extraordinarily large values of the starting integer n. The beauty of the conjecture emanates from its apparent, tantalising simplicity, which however hides formidable challenges when one tries to place it on solid foundations. In this paper, we provide a novel argument to support the validity of the Collatz conjecture which, to the best of our knowledge, constitutes the first proof of the claim. The proof exploits the formalism of stochastic maps defined on directed graphs. More specifically, the proof proceeds along the following lines: (i) define the (forward) third iterate of the Collatz map and consider the equivalence classes of integers modulo 8; (ii) employ a stochastic approach based on a Markov process to prove the contracting property of this map on generic orbits; (iii) demonstrate that diverging orbits are not allowed because they would not be compatible with the stationary equilibrium distribution of the Markov process. The proof will be illustrated with emphasis on the methodological aspects that require resorting to the concept of a directed graph.
Timoteo Carletti and Duccio Fanelli
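The iteration at the heart of the conjecture is easy to state in code. This is a direct transcription of the map for the curious reader; it of course says nothing about the proof itself:

```python
def collatz_steps(n):
    """Number of iterations of the Collatz map T(n) = n/2 (n even),
    3n + 1 (n odd) needed to reach 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps
```

For example, starting from 6 the orbit is 6, 3, 10, 5, 16, 8, 4, 2, 1, taking eight steps; small starting values can produce surprisingly long orbits, which is part of the conjecture's charm.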
254 Nanoscale artificial intelligence: creating artificial neural networks using autocatalytic reactions [abstract]
Abstract: A typical feature of many biological and ecological complex systems is their capability to be highly sensitive and responsive to small changes in the values of specific key variables, while being at the same time extremely resilient to a large class of disturbances. The possibility of building artificial systems with these characteristics is of extreme importance for the development of nanomachines and biological circuits with potential medical and environmental applications. The main theoretical difficulty in realising these devices lies in the lack of a mathematical methodology for designing the blueprint of a self-controlled system, composed of a large number of microscopic interacting constituents, that operates in a prescribed fashion. Here a general methodology is proposed to engineer a system of interacting components (particles) whose concentrations self-regulate in order to produce any prescribed output in response to a particular input. The methodology is based on the mathematical equivalence between artificial neurons in neural networks and species in autocatalytic reactions, and it specifies the relationship between the artificial neural network's parameters and the rate coefficients of the reactions between particle species. Such systems are characterised by a high degree of robustness, as they are able to reach the desired output despite disturbances and perturbations of the concentrations of the various species. By relating concepts from artificial intelligence to dynamical systems, the results presented here demonstrate the possibility of employing approaches and techniques developed in one field in the other, bringing potential advancements to both disciplines and related applications. Preprint: https://arxiv.org/abs/1602.09070
Filippo Simini

Foundations & Physics  (FP) Session 3

Time and Date: 10:45 - 12:45 on 22nd Sep 2016

Room: F - Rode kamer

Chair: Louis Dijkstra

116 Onset of anomalous diffusion from local motion rules [abstract]
Abstract: Anomalous diffusion processes, in particular superdiffusive ones, are known to be powerful strategies for searching and navigation by animals, and also appear in human mobility. One way to create such regimes is Lévy flights, in which walkers are allowed to perform jumps, the “flights”, that can eventually be very long, as their length distribution is asymptotically power-law distributed. In our work, we present a model in which walkers perform, on a 1D lattice, “cascades” of n unitary steps instead of a single jump as in the Lévy case. In analogy with the Lévy approach, the size of such cascades is distributed according to a power-law-tailed PDF P(n); on the other hand, in contrast with Lévy flights, we do not require a priori knowledge of the jump length since, in our model, the walker follows strictly local rules. We show that this local mechanism indeed gives rise to superdiffusion or normal diffusion depending on the power-law exponent of P(n). We also investigate the interplay with the possibility of being stuck on a node, introducing waiting times that are power-law distributed as well. In this case, the competition between the two processes extends the palette of reachable diffusion regimes and, again, the switch between regimes is governed by the two PDFs' power-law exponents. As a perspective, our approach may enable a generalization of anomalous diffusion to contexts where distances are difficult to define, as in the case of complex networks.
Timoteo Carletti, Sarah de Nigris and Renaud Lambiotte
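The cascade mechanism described above is straightforward to simulate. Below is our own minimal sketch (the exponent, cutoff and function name are illustrative choices, not taken from the paper):

```python
import bisect
import random

def cascade_walk(steps, alpha=1.5, n_max=10000, seed=0):
    """1D lattice walk built from 'cascades' of unitary steps: each
    cascade picks a random direction and a power-law distributed length n
    with P(n) ~ n**(-alpha), then advances one site at a time, so every
    move is strictly local. Returns positions after each unitary step."""
    rng = random.Random(seed)
    # Discrete power-law CDF for cascade lengths 1..n_max (cutoff illustrative).
    weights = [k ** (-alpha) for k in range(1, n_max + 1)]
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    x, traj = 0, [0]
    while len(traj) <= steps:
        n = min(bisect.bisect_left(cdf, rng.random()), n_max - 1) + 1
        d = rng.choice((-1, 1))  # direction of the whole cascade
        for _ in range(n):
            x += d
            traj.append(x)
            if len(traj) > steps:
                break
    return traj[:steps + 1]
```

Measuring how the mean squared displacement of many such trajectories scales with time distinguishes normal from superdiffusive regimes as alpha varies.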
485 Dynamics on multiplex networks [abstract]
Abstract: We will show some recent results from our group concerning dynamics on multiplex networks. We consider multiplex networks as sets of nodes in different layers. In each layer the set of nodes is the same, but the connections among the nodes can differ between layers; the connections among the layers are described by a “network of layers”. We have studied different processes across the layers (diffusion) and between the layers (reaction) [1]. In this case, Turing patterns appear as an effect of different average connectivities in different layers [2]. We also show that a multiplex construction, where the layers correspond to contexts in which agents make different sets of connections, can make a model of opinion formation exhibit stationary states of coexistence that are not observed in single layers [3]. Finally, as a particular case of a multiplex network, one can also analyze networks that change in time, since in this case each layer of the multiplex corresponds to a snapshot of the interaction pattern. For this situation, we have shown that different mechanisms dominate the diffusion of information in the system depending on the relative effect of mobility and diffusion among the nodes [4]. [1] Replicator dynamics with diffusion on multiplex networks. R.J. Requejo and A. Diaz-Guilera. arXiv:1601.05658 (2016). [2] Pattern formation in multiplex networks. N.E. Kouvaris, S. Hata and A. Diaz-Guilera. Scientific Reports 5, 10840 (2015). [3] Agreement and disagreement on multiplex networks. R. Amato, N.E. Kouvaris, M. San Miguel and A. Diaz-Guilera, in preparation. [4] Tuning Synchronization of Integrate-and-Fire Oscillators through Mobility. L. Prignano, O. Sagarra and A. Diaz-Guilera. Phys. Rev. Lett. 110, 114101 (2013).
Albert Diaz-Guilera
284 Promiscuity of nodes in multilayer networks [abstract]
Abstract: The interplay of multiple types of interactions has been of interest in the social sciences for decades. Recent advances in the complexity sciences allow the analysis of such multilayer networks in a quantitative way. The question arises naturally to what extent nodes are similarly important in all layers. We define the promiscuity of a node as a measure of the variability of its degree across layers. This builds on similar frameworks that investigate such questions in networks with modular structure, while taking into account that the layers themselves can vary in importance. Using these tools on a range of empirical networks from a variety of disciplines, including transportation, economic and social interactions, and biological regulation, we show that the observed promiscuity distributions differ across networks of different origins. Transportation networks, for example, where the layers represent different modes of transportation, tend to have a majority of low-promiscuity nodes; a few hub nodes with high promiscuity enable transit between different modes of transportation. The representation of global trade as a multilayer network reveals that countries' imports are often very diverse, whereas some countries' exports depend heavily on a single commodity. Applying the promiscuity measure to transcription factor interactions in multiple cell types reveals proteins that are potential biomarkers of cell fate. Despite its simplicity, the presented framework gives novel insights into numerous types of multilayer networks and expands the available toolbox for multilayer network analysis.
Florian Klimm, Gorka Zamora-López, Jonny Wray, Charlotte Deane, Jürgen Kurths and Mason Porter
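A minimal reading of "variability of degree across layers" can be sketched as a coefficient of variation. This is our own illustrative normalisation; the authors' exact definition, which also accounts for layers varying in importance, may well differ:

```python
import math

def promiscuity(degrees):
    """Promiscuity of a node, sketched as the coefficient of variation
    of its degree across the layers of a multiplex network.

    degrees: the node's degree in each layer, e.g. [12, 3, 0] for three
    layers. Returns 0.0 for a node equally connected in every layer.
    """
    mean = sum(degrees) / len(degrees)
    if mean == 0.0:
        return 0.0  # isolated in all layers
    variance = sum((d - mean) ** 2 for d in degrees) / len(degrees)
    return math.sqrt(variance) / mean
```

Under this sketch, a transit hub active in every transportation layer scores near zero, while a node concentrated in a single layer scores high.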
188 Coarse analysis of collective behaviors: Bifurcation analysis of traffic jam formation [abstract]
Abstract: Collective phenomena have been investigated in various fields of science, such as materials science, biological science and social science. Examples of such phenomena are the slacking of granular media, group formation of organisms, jam formation in traffic flow and lane formation of pedestrians. Scientists usually investigate them using only the equations of motion of individuals directly. It is generally difficult to derive the macroscopic laws of collective behaviors from such microscopic models. We aim to develop a new approach to analyze the macroscopic laws of these phenomena. In this paper, we describe collective behaviors in a low-dimensional space of macroscopic states obtained by dimensionality reduction. Such a space is constructed using diffusion maps, a pattern classification technique. We obtain a few appropriate coarse-grained variables that distinguish the macroscopic states by the similarity of patterns, and we construct the low-dimensional space. A time development of collective behavior is represented as a trajectory in this space. We apply this method to the optimal velocity model to analyze the macroscopic properties of traffic jam formation. The phenomenon is considered a dynamical phase transition of a non-equilibrium system. The important property of the transition is the bistability of jammed flow and free flow. This property has been investigated by many researchers using the model, but their analyses do not explain it satisfactorily. Using our method, we clearly reveal a bifurcation structure that underlies the bistability.
Yasunari Miura and Yuki Sugiyama
352 Design Principles for Self-Assembling Polyomino Tilings [abstract]
Abstract: The self-assembly of simple molecular units into regular 2d (monolayer) lattice patterns continues to provide an exciting intersection between experiment, theory and computational simulation. We study a simple model of polyominoes with edge specific interactions and introduce a visualisation of the configuration space that allows us to identify all possible ground states and the interactions which stabilise them. By considering temperature induced phase transitions away from ground states, we demonstrate kinetic robustness of particular configurations with respect to local rearrangements. We also present a rigorous sampling algorithm for larger lattices where complete enumeration is computationally intractable and discuss common features of the configuration space across different polyomino shapes.
Joel Nicholls, Gareth Alexander and David Quigley

ICT  (I) Session 2

Time and Date: 10:45 - 12:45 on 22nd Sep 2016

Room: G - Blauwe kamer

Chair: Taha Yasseri

318 Rumor Spreading in Social Networks with Individual Privacy Policies [abstract]
Abstract: Humans are social animals who love to disseminate ideas and news, as proved by the huge success of social networking websites such as Facebook or Twitter. On the other hand, these platforms have emphasized the dark side of information spreading: the diffusion of private facts and rumors in society. Usually, users of these social networks can set a level of privacy and decide to whom to show their private facts, but they cannot control how their friends will use this information: friends could spread it through other social websites, other media, or simply face-to-face communication. The classic Susceptible-Infectious-Recovered (SIR) epidemic model can be adopted for modeling the spread of information in a social network: susceptible individuals do not know the information and are thus susceptible to being informed; infectious individuals know and spread the information; recovered individuals already know the information but no longer spread it. A susceptible individual in contact with an infectious one can become infectious with a transmission probability, while an infectious individual naturally recovers with a recovery rate, turning into a recovered individual. We extend this compartmental model to represent several kinds of privacy policies, from less safe to more rigorous: each individual belongs to a class that models privacy behavior by tuning the transmission probability, the recovery rate and the susceptibility to information, which specifies the interest of the individual in the information. We calculate a privacy score for each individual based on the privacy policies of her neighbors, so as to infer the local robustness to the spread of personal information. We test our model by means of stochastic simulations on synthetic contact networks and on a small partition of the Facebook social network, provided by a few hundred volunteers who replied to an online survey.
Livio Bioglio and Ruggero Pensa
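The extended compartmental model described above lends itself to a compact simulation sketch. Here is one minimal discrete-time version of our own (per-node parameters stand in for the privacy classes, transmission is attached to the sender, the per-receiver susceptibility of the paper is omitted for brevity, and all names are ours):

```python
import random

def spread_rumor(adj, beta, mu, source, rng=random):
    """Discrete-time SIR rumor spreading with per-node privacy policies.

    adj:  dict node -> list of neighbours
    beta: dict node -> transmission probability (how freely a node shares)
    mu:   dict node -> recovery probability (how quickly a node stops sharing)
    Returns the set of nodes that ever learned the rumor.
    """
    susceptible = set(adj) - {source}
    infectious, recovered = {source}, set()
    while infectious:
        newly_infected, newly_recovered = set(), set()
        for i in infectious:
            for j in adj[i]:
                if j in susceptible and rng.random() < beta[i]:
                    newly_infected.add(j)
            if rng.random() < mu[i]:
                newly_recovered.add(i)
        susceptible -= newly_infected
        infectious = (infectious | newly_infected) - newly_recovered
        recovered |= newly_recovered
    return recovered
```

Averaging the returned outbreak size over many runs, with beta and mu set per privacy class, gives the kind of robustness estimate the abstract describes.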
382 A Complexity Epistemology of Digital Data [abstract]
Abstract: Digital data is transitioning from data in search of method to method in search of theory. But despite a wealth of clearly pertinent theory in the social sciences, it has proven exceedingly difficult to accomplish a fruitful connection: there appears to be an epistemological incompatibility between the computational approaches applied within digital data study and traditional social scientific research. This paper makes two observations about the epistemological nature of digital data. First, digital data is by nature relational and focused on the interaction between individuals, rather than on their individual attributes. This not only permits the study of emergence through computational models, but requires it. Second, digital systems are “arenas of interaction” where the emergence of social practices intermingles with technological change in the platforms that support those social practices. We are looking at innovation, quite simply, only substantially faster. We interpret this situation through the lens of what Lane and Maxfield (2006) call "ontological uncertainty," which implies limits to the applicability of formal modeling (Andersson et al. 2014). We find that digital data typically resides at a difficult epistemological crossroad between "mass-dynamics," which is amenable to computational modeling, and ontological uncertainty, which limits the applicability of most types of modeling. This paper suggests a complexity epistemology of digital data that is question-driven and methodologically pluralist, using computational modeling tools to explore emergence and the opportunities of vast new data sets, but doing so within an epistemological framing that enables insights that are situated and reflexive.
Petter Törnberg
280 Identification, modeling and impact of discoverers in e-commerce systems [abstract]
Abstract: Understanding the behavior of users in online systems is of essential importance for sociology, system design, e-commerce, and beyond. Most existing models assume that individuals in diverse systems, ranging from social networks to e-commerce platforms, tend to collect what is already popular. We propose a statistical time-aware framework to identify the users who differ from this usual behavior by being repeatedly and persistently among the first to collect the items that later become hugely popular. Since these users effectively discover future hits, we refer to them as discoverers. We use the proposed framework to demonstrate that discoverers are present in a wide range of real systems. Discoverers are typically not among the most central nodes in the user-user social network, which indicates that users' ability to collect future hits early is essentially unrelated to their social importance. We show that, thanks to this ability, discoverers can be used to predict the future success of new items based on the first few received links. Finally, we propose a network model which reproduces the discovery patterns observed in the real data. Data produced by the model are shown to pose a fundamental challenge to classical ranking algorithms on networks, which neglect the time of link creation and thus fail to discriminate between discoverers and ordinary users. Our results bring new insights into the quantitative characterization of users' behavior in online systems, and have far-reaching implications for trend prediction and algorithm design.
Matus Medo, Manuel Sebastian Mariani, An Zeng and Yi-Cheng Zhang
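A toy version of such a time-aware discoverer statistic can be sketched as follows: score each user by the fraction of their collections that were among the first k on items which later became hits (top-q by final popularity). The thresholds k and q, and the toy event log, are illustrative assumptions rather than the authors' exact statistic:

```python
from collections import defaultdict

def discoverer_scores(events, k=2, q=0.25):
    """events: iterable of (user, item, time) tuples.
    Returns, per user, the fraction of collections that were among the
    first k on items that ended up in the top-q of final popularity."""
    by_item = defaultdict(list)
    for user, item, t in events:
        by_item[item].append((t, user))
    popularity = {item: len(c) for item, c in by_item.items()}
    ranked = sorted(popularity.values(), reverse=True)
    cutoff = ranked[max(0, int(q * len(ranked)) - 1)]
    hits = {item for item, pop in popularity.items() if pop >= cutoff}
    early, total = defaultdict(int), defaultdict(int)
    for item, colls in by_item.items():
        colls.sort()  # order collections by time
        for rank, (_, user) in enumerate(colls):
            total[user] += 1
            if item in hits and rank < k:
                early[user] += 1
    return {u: early[u] / total[u] for u in total}

# Toy log: "scout" is always first on the item that becomes the hit.
events = [
    ("scout", "hit", 0), ("a", "hit", 1), ("b", "hit", 2), ("c", "hit", 3),
    ("a", "niche1", 0), ("b", "niche2", 0), ("c", "niche3", 0),
]
scores = discoverer_scores(events)
```

A real implementation would additionally test each user's early-collection rate against a null model to establish that the pattern is repeated and persistent rather than lucky.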
550 Reconstructing propagation networks with temporal similarity metrics [abstract]
Abstract: Node similarity significantly contributes to the growth of real networks. In this paper, based on observed epidemic spreading results, we apply node similarity metrics to reconstruct the underlying networks hosting the propagation. We find that the reconstruction accuracy of the similarity metrics is strongly influenced by the infection rate of the spreading process. Moreover, there is a range of infection rates in which the reconstruction accuracy of some similarity metrics drops nearly to zero. To improve the similarity-based reconstruction method, we propose a temporal similarity metric which takes into account the time information of the spreading. The reconstruction results are remarkably improved with the new method.
Hao Liao
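The reconstruction idea can be sketched in a few lines: nodes co-infected in the same realisations, at close infection times, are predicted to be linked. The exponential time kernel and the toy spreading records below are illustrative assumptions, not necessarily the paper's exact metric:

```python
import math
from itertools import combinations

def temporal_similarity(records, tau=1.0):
    """records: list of dicts {node: infection_time}, one per spreading
    realisation. Returns a symmetric {(u, v): score} over node pairs,
    rewarding co-infection at close times (exponential kernel, scale tau)."""
    nodes = sorted(set().union(*records))
    score = {}
    for u, v in combinations(nodes, 2):
        s = 0.0
        for rec in records:
            if u in rec and v in rec:
                s += math.exp(-abs(rec[u] - rec[v]) / tau)
        score[(u, v)] = s
    return score

# Three toy realisations on a chain a-b-c: a and b, like b and c, are
# always infected one step apart; a and c are farther apart in time.
records = [
    {"a": 0, "b": 1, "c": 2},
    {"b": 0, "a": 1, "c": 2},
    {"c": 0, "b": 1, "a": 2},
]
scores = temporal_similarity(records)
```

Ranking pairs by this score and keeping the top ones then yields the reconstructed network; on the toy chain, the true edges (a, b) and (b, c) outscore the non-edge (a, c).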
346 Quantifying crowd size with mobile phone and Twitter data [abstract]
Abstract: Being able to infer the number of people in a specific area is of extreme importance for the avoidance of crowd disasters and to facilitate emergency evacuations. Here, using a football stadium and an airport as case studies, we present evidence of a strong relationship between the number of people in restricted areas and activity recorded by mobile phone providers and the online service Twitter. Our findings suggest that data generated through our interactions with mobile phone networks and the Internet may allow us to gain valuable measurements of the current state of society. This presentation will allow me to further disseminate my work to an audience with broad interests. It will help me establish a personal network of connections in the complex systems scientific community. This may enable collaborations, which would be of great benefit in my career as a young researcher. Attendees of this talk can expect to see the importance of new forms of data both from a scientific point of view and for policy makers and stakeholders. The presentation will be readily accessible to a broad audience, thus maximising the dissemination of the research. Reference: Botta F, Moat HS, Preis T. Quantifying crowd size with mobile phone and Twitter data. Royal Society Open Science 2, 150162 (2015)
Federico Botta, Helen Susannah Moat and Tobias Preis
468 What does Big Data tell? Sampling the social network by communication channels [abstract]
Abstract: Big Data has become the primary source for understanding the structure and dynamics of society at large scale. The network of social interactions can be considered as a multiplex, where each layer corresponds to one communication channel and the aggregate of all of them constitutes the entire social network. However, usually one has information about only one of the channels, which should be considered as a sample of the whole. Here we show by simulations and analytical methods that this sampling may lead to bias. For example, while it is expected that the degree distribution of the whole social network has a maximum at a value larger than one, under reasonable assumptions about the sampling process we get a monotonically decreasing distribution, as observed in empirical studies of single-channel data. We also find that assortativity may occur or get strengthened due to the sampling process. We analyse the far-reaching consequences of our findings.
Janos Kertesz, Janos Torok, Yohsuke Murase, Hang-Hyun Jo and Kimmo Kaski

Cognition & Foundations  (CF) Session 2

Schedule Top Page

Time and Date: 10:45 - 12:45 on 22nd Sep 2016

Room: H - Ontvangkamer

Chair: Massimo Stella

561 Similarity of Symbol Frequency Distributions with Heavy Tails [abstract]
Abstract: How similar are two examples of music or text from different authors or disciplines? How fast is the vocabulary of a language changing over time? How can one distinguish between coding and noncoding regions in DNA? Quantifying the similarity between symbolic sequences is a traditional problem in information theory which requires comparing the frequencies of symbols in different sequences. We will address this problem for the important case in which the frequencies of symbols show heavy-tailed distributions (e.g. the famous Zipf's law for word-frequencies), which hinder an accurate finite-size estimation of entropies, and for a family of similarity measures based on the generalized entropy of order α. We will show analytically how the systematic (bias) and statistical (fluctuations) errors in these estimations depend on the sample size N, on the exponent γ of the heavy-tailed distribution and on the order α of the similarity measure. Our primary finding is that, for heavy-tailed distributions, (i) the error decay is often much slower than 1/N, illustrating the difficulty in obtaining accurate estimates even for large sample sizes, and (ii) there is a critical value of the order of the entropy α∗=1+1/γ≤2 for which the normal 1/N dependence is recovered. We emphasize the importance of these findings in the example of quantifying how fast the English vocabulary has changed within the last 200 years, showing that these finite-size effects have to be taken into account even for very large databases (N≳1,000,000,000 words). Associated reference: DOI:http://dx.doi.org/10.1103/PhysRevX.6.021009
Martin Gerlach, Francesc Font-Clos and Eduardo Altmann
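The finite-size effect at the heart of this talk can be reproduced with a naive plug-in estimator. The Havrda-Charvát form of the order-α entropy used below is one common choice, and the Zipf exponent, vocabulary size and sample sizes are illustrative assumptions:

```python
import random
from collections import Counter

random.seed(1)

def zipf_sample(n_types, gamma, size):
    """Sample `size` tokens from a finite Zipf law p(r) proportional to r**-gamma."""
    weights = [r ** -gamma for r in range(1, n_types + 1)]
    return random.choices(range(1, n_types + 1), weights=weights, k=size)

def entropy_alpha(counts, alpha):
    """Naive plug-in estimator of a generalized entropy of order alpha
    (Havrda-Charvat form; alpha -> 1 recovers the Shannon entropy)."""
    n = sum(counts.values())
    s = sum((c / n) ** alpha for c in counts.values())
    return (1.0 - s) / (alpha - 1.0)

# Same underlying distribution, two sample sizes: the small-sample
# estimate carries a systematic finite-size bias on top of fluctuations.
small = Counter(zipf_sample(5000, 1.0, 1_000))
large = Counter(zipf_sample(5000, 1.0, 100_000))
h_small = entropy_alpha(small, alpha=2.0)
h_large = entropy_alpha(large, alpha=2.0)
```

Repeating the experiment over many seeds and sample sizes, and plotting the error against N, would exhibit the slow error decay and the critical order α∗ = 1 + 1/γ discussed in the abstract.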
145 Universal properties of culture: evidence for the "multiple self" in preference formation [abstract]
Abstract: Understanding the formation of subjective human traits, such as preferences and opinions, is an important but poorly explored problem. Such traits collectively evolve under the repeated action of social influence, which is the focus of many studies of cultural dynamics. In this paradigm, other mechanisms potentially relevant for trait formation are reduced to specifying the initial cultural state for the social influence models, a state which is usually generated uniformly at random. However, recent work has shown that the outcome of social influence dynamics strongly depends on the nature of the initial state: a higher level of cultural diversity is found after long-term dynamics, for the same level of propensity towards collective behaviour in the short term, if the initial cultural state is sampled from empirical data instead of being generated uniformly at random. First, this study shows that this effect is remarkably robust across data sets. In a certain sense, the analysis suggests that the systems from which such data are extracted function close to criticality. Second, this study presents a stochastic model for generating cultural states that retain these universal properties. One ingredient of the model, already used in previous work, assumes that every individual's set of traits is partly dictated by one of several "cultural prototypes", abstract entities informally postulated by several social science theories. A second, new ingredient, taken from the same theories, assumes that apart from a dominant prototype, each individual also has a certain exposure to the other prototypes. The fact that this combination of ingredients is compatible with regularities of empirical data suggests that cultural traits in the real world form under the combined influence of several cultural prototypes, thus providing indirect evidence for the class of social science theories compatible with this description.
Alexandru-Ionut Babeanu, Leandros Talman and Diego Garlaschelli
380 Temporal Network Analysis of Small Group Discourse [abstract]
Abstract: The analysis of school-age children engaged in engineering projects has proceeded by examining the conversations that take place among those children. The analysis of classroom discourse often considers a conversational turn to be the unit of analysis. In this study, small-group conversations among students engaged in a robotics project are analyzed by forming a dynamic network with the students as nodes and the utterances of each turn as edges. The data collected for this project contained more than 1000 turns for each group, with each group consisting of 4 students (and the occasional inclusion of a teacher or other interloper). The conversational turns were coded according to their content to form edges that vary qualitatively, with the content codes taken from prior literature on small-group discourse during engineering design projects, resulting in approximately 10 possible codes for each edge. Analyzing the data as a time sequence of networks, clusters across turns were created that allow for a larger unit of analysis than is usually used. These larger units of analysis are more fruitfully connected to the stages of engineering design. Furthermore, the patterns uncovered allow for hypotheses to be made about the dynamics of transition between these stages, and also allow for these hypotheses to be compared to expert assessment of the group's stage at various times. Although limited by noise and inter-group variation, the larger units allowed for greater insight into group processes during the engineering design cycle.
Bernard Ricca and Michelle Jordan
403 Ranked communities and the detection of dominance and influence hierarchies [abstract]
Abstract: In directed networks, edges often represent a transfer of power, social influence, or confidence. In some cases, this is explicit, as in the case of directed acts of dominance inflicted by one animal upon another. In other cases, the relationship is more implicit: when one university hires another's graduate, the hiring department is expressing confidence in the quality of the graduate's department's training. Given such a network, it is possible to rank individual vertices using a variety of methods, such as minimum violations or PageRank. However, in many real-world systems, groups of individuals are ranked, and expressions of influence or dominance by individuals, captured by network edges, simply reflect group affiliations--a type of large-scale network structure that has not been previously described. In this work, we introduce a definition of ranked communities in directed networks and propose an algorithm to efficiently identify them. The mathematical framework of our method is placed in the context of the related problem of classic modularity maximization and we introduce an important distinction between strict (dominance) and inclusive (endorsement) hierarchies. We confirm our method's ability to extract planted ranked community structure in synthetic networks before applying it to learn about real-world networks. In particular, this method allows us to quantitatively study a question posed over half a century ago regarding the relationship between social network structure and the Indian caste system. We show that our recently collected data of a social support network in two South Indian villages is structured according to a mixture of ranked caste-based communities and community-transcending individual relationships. In this system, organization by ranked community is related to known social structure, but we also apply our method to learn about other social and ecological networks in which the organizing mechanisms have yet to be identified.
Daniel Larremore, Laurent Hebert-Dufresne and Eleanor Power
81 The Assessment of Self-organized Criticality in Daily High School Attendance Rates [abstract]
Abstract: One important aspect of studying the behavior of dynamical systems is the analysis of processes of stability and change over time. This requires an estimation of auto-correlative and cyclical patterns in sets of frequently repeated measurements of the behavior of such systems. Traditional mean and (co)variance computations typically used for cross-sectional data are not adequate to characterize these distributions, and may actually be misleading (Beran, 1994). Daily school attendance is presented as a case in point. Since 2004 and to this day, the New York City Department of Education has published daily attendance rates on its website for all of its schools. These data exhibit the degree of resolution needed to detect underlying system dynamics that are hidden in the conventionally reported weekly, monthly or yearly average rates. Daily attendance rates in six small high schools were analyzed over a ten-year period (2005-2014). The analysis proceeded as follows: 1. intervention models were fitted to handle the most extreme values in the series (usually low attendance) for each school; 2. conventional time series analyses were used to estimate short-term dependencies and cyclical patterns; 3. the goodness of fit of those models was compared with that of models including long-range estimates (fractional differencing parameter, Hurst exponent). Preliminary analyses suggest significant long-range dependencies in two of these six schools, suggesting self-organized criticality (tension-release, unpredictable cycles), and strong weekly cycles in the four others. The presentation will illustrate how the initial appearance of the data in the various diagnostic plots suggests long-range dependencies, and how the parameters of the best fitting models are interpreted. Implications of the findings for the field are discussed, and a note is included about the available software options for conducting these types of analyses. Reference: Beran, J. (1994). Statistics for long-memory processes. Boca Raton, FL: Chapman & Hall/CRC.
Matthijs Koopmans
59 Organisational decision making as network coupled oscillators: validation and case study [abstract]
Abstract: Organisational decision making, where many individuals interact to share information, formulate decisions and perform actions, is at heart a social system oriented towards outcomes - success in a military mission or business venture. In recent years I have proposed the Kuramoto model of synchronising oscillators to represent such a system, which may be developed towards predictive modelling against specific scenarios. Essentially, the oscillator limit cycle represents what is known in cognitive psychology as a Perception-Action cycle, or in military parlance an Observe-Orient-Decide-Act loop; the network of interactions represents the range of formal, informal and technology-based exchanges of information; finally, a native frequency represents the individual speed of decision-making for an agent left to themselves. To such a system may be added stochastic influences, or "noise", to represent the human properties of intuition, indecision or degrading of communication under stress or mood shifts. In this paper, I apply the model to a military organisation as may be located in a deployed headquarters. I use data based on a recent study of such an organisation, where participants responded to surveys and interviews probing their pattern of interactions and level of cognition (in the sense of the Perception-Action cycle) for two scenarios: routine business and an emergency response. For noise I use a combination of stable Lévy noise in both spatial and temporal dimensions, whose underlying probability distributions exhibit power-law heavy tails. I summarise an initial validation of the model, invoking another approach from organisation theory known as Contingency Theory. The model thus integrates ideas from quantitative and qualitative complexity theory.
To illustrate the utility of the model, I study a number of interventions: local network modifications and/or training in order to tighten frequency distributions. I conclude with prospects for future work.
Alexander Kalloniatis
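The oscillator picture described above can be sketched in a few lines. Gaussian white noise stands in here for the stable Lévy noise used in the paper, the interaction network is all-to-all rather than a headquarters topology, and all parameter values are illustrative:

```python
import math
import random

random.seed(0)

def simulate_kuramoto(n=30, coupling=2.0, noise=0.1, dt=0.01, steps=2000):
    """Each phase is one pass around a Perception-Action (OODA) cycle;
    native frequencies are individual decision tempos (mean 1.0, sd 0.2)."""
    theta = [random.uniform(0, 2 * math.pi) for _ in range(n)]
    omega = [random.gauss(1.0, 0.2) for _ in range(n)]
    for _ in range(steps):
        theta = [theta[i]
                 + dt * (omega[i]
                         + coupling * sum(math.sin(theta[j] - theta[i])
                                          for j in range(n)) / n)
                 + noise * math.sqrt(dt) * random.gauss(0, 1)
                 for i in range(n)]
    return theta

def order_parameter(theta):
    """r in [0, 1]: the degree of synchronised decision-making tempo."""
    re = sum(math.cos(t) for t in theta) / len(theta)
    im = sum(math.sin(t) for t in theta) / len(theta)
    return math.hypot(re, im)

r_coupled = order_parameter(simulate_kuramoto(coupling=2.0))
r_uncoupled = order_parameter(simulate_kuramoto(coupling=0.0))
```

Interventions of the kind mentioned in the abstract would correspond here to narrowing the spread of the native frequencies or rewiring the coupling term; their effect can be read off the order parameter r.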

Cognition & Socio-Ecology  (CS) Session 1

Schedule Top Page

Time and Date: 10:45 - 12:45 on 22nd Sep 2016

Room: I - Roland Holst kamer

Chair: Andrea Baronchelli

220 Emergence of metapopulations and echo chambers in mobile agents [abstract]
Abstract: Multi-agent models often describe populations segregated either in physical space, i.e. subdivided into metapopulations, or in the ecology of opinions, i.e. partitioned into echo chambers. Here we show how the interplay between homophily and social influence controls the emergence of both kinds of segregation in a simple model of mobile agents endowed with a continuous opinion variable. In the model, physical proximity determines a progressive convergence of opinions, but differing opinions result in agents moving away from each other. This feedback between mobility and social dynamics leads to the onset of a stable dynamical metapopulation scenario where physically separated groups of like-minded individuals interact with each other through the exchange of agents. The further introduction of confirmation bias in social interactions, defined as the tendency of an individual to favor opinions that match their own, leads to the emergence of echo chambers where different opinions can coexist even within the same group. We believe that the model may be of interest to researchers investigating the origin of segregation in the offline and online world.
Michele Starnini, Mattia Frasca and Andrea Baronchelli
320 Significance and Popularity in Music Production [abstract]
Abstract: In the world of artistic production there is a constant struggle to achieve fame and popularity. This fierce competition between artistic creations results in the emergence of highly popular elements that are usually well remembered throughout the years, while many other works that did not achieve that status are long forgotten. However, there is another level of importance that must be considered in order to have a more complete picture of the system. In fact, many works that have influenced production itself, due to their aesthetic and cultural value, might never have been popular, or might not be popular anymore. Given their relevance for the whole artistic production, it is important to identify them and preserve their memory for obvious cultural reasons. In this paper we focus on the duality between popularity and significance in the context of popular music, trying to understand the features of music albums belonging to one or both of these classes. By means of user-generated data gathered on Last.fm, an on-line catalog of music albums, we define a growing conceptual space in the form of a network of tags representing the evolution of music production over the years. We use this network to define a set of general metrics characterizing the features of the albums and their impact on global music production. We then use these metrics to implement an automated method for predicting both the commercial success of a creation and its belonging to expert-made lists of particularly significant and important works. We show that our metrics are not only useful for such predictions, but can also highlight important differences between culturally relevant and simply popular products. Finally, our method can be easily extended to other areas of artistic creation.
Bernardo Monechi, Pietro Gravino, Vito D.P. Servedio, Francesca Tria and Vittorio Loreto
359 Why Hierarchy? Coupled Oscillator Dynamics and the Emergence of Social Complexity [abstract]
Abstract: This paper presents a computer model that describes between-agent social interactions as dynamic couplings between limit-cycle oscillators, leading to emergent properties within social networks. The model’s purpose is to represent collective social dynamics as they emerge from local-level interactions, with a specific focus on social rank and status distinctions. This investigation is informed by anthropological observations that societies conventionally construct rank-order differences between individuals. For example, rituals often mark transition between roles, while linguistic conventions – such as Korean grammatical honorifics – enforce differing treatment of people of distinct relative statuses. Such observations raise a puzzling question: why would human societies ritually exaggerate cleavages in their social structure, when unity is typically a desideratum for social collectives? An interdisciplinary complex-systems perspective may be particularly useful here. Psychology studies demonstrate that humans tend to mimic one another automatically during interpersonal interactions. However, hierarchy alters this response, changing it from postural mimicry to complementarity. While somatic and speech mimicry enhances social affiliation, excessive coherence between individuals may also produce instability – as when a fad sweeps across an adolescent peer group. We suggest that, with segmented roles as dampeners, mutual amplification of such behaviors is contained to respective peer subgroups within larger populations. This is an agent-based model, meaning that social actors are simulated using decisional algorithms. Our algorithms are derived from those describing coupled limit-cycle oscillators. Coupling coefficients represent affiliation and mimetic tendencies. Mimesis is suppressed between agents of different social ranks, whereas peers tend to couple readily. Thus, same-rank agents mutually amplify one another’s states across the peer network. 
Rank distinctions dampen and constrain the propagation of these synchronic states, leading to the emergence of a dynamically structured population. We thereby emphasize a uniquely somatic perspective on the local dynamics that drive emergent social complexity.
Connor Wood and Saikou Diallo
190 Investigating peer and sorting effects within an adaptive multiplex network model [abstract]
Abstract: Empirical evidence shows that in our networked society people have a marked tendency to find themselves surrounded by others who are similar to them, e.g., in language, socio-economic status, educational level, political beliefs, work norms and many others. Two possible causes are: sorting, i.e. people selecting people like them (as in Schelling's model), and peer effects, i.e. people being influenced by people around them (as in the Standing Ovation model). We investigate the dynamics of sorting and peer effects in reaching (or not) mutual coexistence of conventions in a multiplex network topology. We model a social environment through a two-layer multiplex network in which agents have profiles: a type, i.e. being orange or blue, and a strategy, i.e. rewiring their links or not. The layers can be interpreted as the presence of an informal context, e.g. family or school district, which is mostly fixed over time, and a formal one, e.g. work partnership, in which agents adaptively restructure their neighbourhood over time. Each agent has a tolerance threshold describing their endurance of a certain level of diversity in their neighbourhood before switching type or strategy. Consequently, agents act according to a mixed-motive imitation across the two layers: they conform their strategy to the most frequent one in their neighbourhood on the informal layer, and they apply that strategy to their links on the formal, adaptive layer. Strategies and types are randomly distributed at the beginning on two random-graph layered topologies. We observe that the initial fraction of rewirers drives multiple stable final configurations, going from coexistence to polarization of types through a tipping point. Secondly, the lower the tolerance, the more likely segregation is to take place. Finally, we relate these results to either choice-based or opportunity-based homophily in the system.
Francesca Lipari, Massimo Stella and Alberto Antonioni
402 Conformity-driven agents support ordered phases in the spatial public goods game [abstract]
Abstract: In the last two decades, Evolutionary Game Theory (EGT) has developed into a mature field that nowadays represents a vivid and independent research area with a wide range of applications. The Public Goods Game (PGG hereinafter) represents the typical game-theoretical framework in which individual and group interests collide. In the PGG, players can decide to mutually cooperate to achieve a common goal and, at the same time, are tempted to exploit their opponents in order to obtain a higher payoff. A series of works has unveiled that even when agents' interactions are based upon Nash equilibria in which 'defection' theoretically dominates cooperative strategies, agent, and even human, populations are able to attain a cooperative equilibrium. As a result, it is interesting to identify behaviors and properties that may lead a population towards cooperation. It is also worth observing that conformism is one of the most investigated behaviors in the field of sociophysics. In the present work ([1]), we investigate the spatial PGG by considering a population composed of conformity-driven agents and fitness-driven agents. While the former tend to update their strategy to the most adopted one in their neighborhood, the latter tend to imitate their richest neighbor (i.e., the fittest). Results show that conformism has a prominent role in the spatial PGG: this social influence may lead the population towards different phases and behaviors, such as full cooperation and bistable equilibria. Beyond presenting the results of our investigation, we aim to give a brief overview of EGT for those participants coming from different communities who may be interested in evolutionary dynamics. [1] JAVARONE MA, ANTONIONI A., CARAVELLI F.: Conformity-driven agents support ordered phases in the spatial public goods game. EPL forthcoming (2016)
Marco Alberto Javarone, Alberto Antonioni and Francesco Caravelli
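The mixed population described in the abstract can be sketched on a small lattice: conformity-driven sites copy the majority strategy around them, fitness-driven sites imitate their richest neighbour. The lattice size, synergy factor and update details below are illustrative assumptions, not the paper's setup:

```python
import random

random.seed(7)

L = 20    # lattice side (periodic boundaries)
R = 2.0   # synergy factor of the public goods game (illustrative)
CONFORMIST_FRACTION = 0.5

def neighbors(i, j):
    return [((i - 1) % L, j), ((i + 1) % L, j), (i, (j - 1) % L), (i, (j + 1) % L)]

def group_payoff(strat, members):
    """PGG payoff within one group: cooperators (strategy 1) pay 1 unit."""
    coop = sum(strat[m] for m in members)
    share = R * coop / len(members)
    return {m: share - strat[m] for m in members}

def step(strat, conformist):
    # Each site's payoff is accumulated over the five groups it belongs to.
    payoff = {s: 0.0 for s in strat}
    for site in strat:
        for m, p in group_payoff(strat, [site] + neighbors(*site)).items():
            payoff[m] += p
    new = {}
    for site in strat:
        nbrs = neighbors(*site)
        if conformist[site]:
            # Conformity-driven: adopt the most common neighbouring strategy.
            coop = sum(strat[m] for m in nbrs)
            new[site] = 1 if coop * 2 > len(nbrs) else 0 if coop * 2 < len(nbrs) else strat[site]
        else:
            # Fitness-driven: imitate the richest neighbour, if richer.
            best = max(nbrs, key=lambda m: payoff[m])
            new[site] = strat[best] if payoff[best] > payoff[site] else strat[site]
    return new

strat = {(i, j): random.randint(0, 1) for i in range(L) for j in range(L)}
conformist = {s: random.random() < CONFORMIST_FRACTION for s in strat}
for _ in range(50):
    strat = step(strat, conformist)
coop_fraction = sum(strat.values()) / len(strat)
```

Sweeping R and the conformist fraction, and averaging the final cooperation level over many runs, is the kind of experiment that reveals the ordered phases and bistable equilibria mentioned in the abstract.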
121 Dynamics of social organisation. Towards a multi-scalar analysis of social practices in past societies [abstract]
Abstract: A key element in archaeological research consists of tracing and understanding change in the past. Emphasis is commonly placed on developments in social, political, and economic organisation. Although the archaeological record is ultimately produced within a societal context, it cannot be interpreted as a clear-cut reflection of this society. The archaeological record is fragmentary and limited as only traces of those activities having a durable material component can be observed. What we see in the archaeological record is not societal organisation as such, but rather the material end-result of social practices. To study past societies we must therefore first understand how practices are bundled to function as constituent elements of societal organisation. The main mechanisms of bundling social practices can be found in the structuration of human action along dimensions of time and space. The former is commonly subdivided in short, medium, and long timespans, the latter in local, regional, and supra-regional scales. Often a convergence is presumed between corresponding scales in time and place on the one hand, and social dynamics on the other. These approaches, however, leave distinct scales in isolation, lacking a proper integration of dynamics across different scales, and remain trapped in a reductionist approach of human societies. To better understand the emergence of social organisation out of the multitude of interactions between human agents, theoretical inspiration has in recent years been sought and found in complex systems thinking. Development of social dynamics and practices in the past can be studied within a resilience-based framework of complex adaptive systems. 
In this presentation, the concept of nested adaptive cycles in particular will be highlighted as a highly potent heuristic tool for moving towards an integrated multi-scalar analysis of the temporal and spatial properties of social practices and dynamics, and a better understanding of social organisation in past societies.
Dries Daems

Cognition & Economics  (CE) Session 1

Schedule Top Page

Time and Date: 10:45 - 12:45 on 22nd Sep 2016

Room: J - Derkinderen kamer

Chair: Jelena Grujic

130 Will the market go up or down? Human guesses facing market uncertainty in a lab-in-the-field experiment as a way to unveil unintended strategies and behavioral biases [abstract]
Abstract: Decisions taken in our everyday lives are based on a wide variety of information, so it is generally very difficult to assess which strategies guide us. Stock markets therefore provide a rich environment in which to study how people make decisions, since responding to market uncertainty requires constant updates of such strategies. For this purpose, we ran a lab-in-the-field experiment where volunteers were given a controlled set of financial information - based on real data from worldwide financial indices - and were required to guess whether the market price would go up or down in each situation. From the data collected we explore basic statistical traits, behavioral biases and emerging strategies. In particular, we detect unintended patterns of behavior through consistent actions, which can be interpreted as Market Imitation and Win-Stay Lose-Shift emerging strategies, with Market Imitation being the dominant one. We also observe that these strategies are affected by external factors: expert advice, the lack of information or an information overload reinforce the use of these intuitive strategies, while the probability of following them significantly decreases when subjects spend more time taking a decision. The cohort analysis shows that women and children are more prone to use such strategies, although their performance is not undermined. Our results are of interest for trading companies to better handle their clients' expectations, for avoiding behavioral anomalies in financial analysts' decisions, and for improving not only the design of markets but also the trading digital interfaces where information is set down. The strategies and behavioral biases observed can also be translated into new agent-based models or stochastic price dynamics to better understand financial bubbles or the effects of asymmetric risk perception of price drops. Full paper available: http://arxiv.org/abs/1604.01557 Website: http://mr-banks.net
Josep Perello, Jordi Duch and Mario Gutiérrez-Roig
207 Drunk Game Theory. An individual perception-based framework for evolutionary game theory. [abstract]
Abstract: We present Drunk Game Theory (DGT), a framework for individual perception-based games, where payoffs change according to players' previous experience. We introduce DGT with the narrative of two individuals in a pub choosing independently and simultaneously between two possible actions: offering a round of drinks (cooperating) or not (defecting). The payoffs of these interactions are perceived by individuals as a function of their current states. We represent these perceptions through two different games. The first game constitutes the classic Prisoner's Dilemma, in which a player's utility is a function of the number of consumed drinks and the invested money. The second game takes the form of the Harmony game, in which payoffs are computed solely from the number of consumed drinks. Players perceive one of the two games according to their current cognitive level. An individual is more likely to perceive payoffs according to the Prisoner's Dilemma (Harmony) game when she is in a heightened (diminished) cognitive state. The cognitive level of a player evolves according to the outcome of her previous interactions: it decreases when drinking and increases when abstaining. We use evolutionary game theory to model the evolution of cooperation within well-mixed and structured populations. We observe non-trivial dynamics in both the fraction of cooperators and the cognitive levels when cooperators and defectors dynamically coexist over time. Our analytical results in well-mixed populations agree with those obtained from agent-based simulations. We further explore the role of network-constrained interactions on the overall level of cooperation. By accounting for heterogeneous and feedback-dependent perceptions, the DGT framework opens new horizons for exploring the emergence of cooperation in social environments.
Nicholas Mathis, Leto Peel, Massimo Stella, Luis A. Martinez-Vaquero and Alberto Antonioni
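The perception mechanism described above can be sketched in a few lines. The payoff values, the linear dependence of the perception probability on the cognitive level, and the step size of the cognitive update below are illustrative assumptions, not the paper's calibrated choices:

```python
import random

# Illustrative payoff matrices (assumed values, not taken from the paper).
# Keys are (my_action, other_action); action 0 = cooperate (offer a
# round of drinks), action 1 = defect (don't offer).
PRISONERS_DILEMMA = {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 1}
HARMONY = {(0, 0): 5, (0, 1): 3, (1, 0): 2, (1, 1): 1}

def perceived_payoff(cognitive_level, my_action, other_action, rng):
    """Payoff as perceived through one of the two games.

    A heightened cognitive state (level near 1) makes the Prisoner's
    Dilemma perception more likely; a diminished state favours the
    Harmony game (here, a simple linear dependence on the level).
    """
    game = PRISONERS_DILEMMA if rng.random() < cognitive_level else HARMONY
    return game[(my_action, other_action)]

def update_cognitive_level(level, drinks_consumed, step=0.1):
    """Cognitive level decreases when drinking, increases when abstaining."""
    if drinks_consumed > 0:
        return max(0.0, level - step * drinks_consumed)
    return min(1.0, level + step)
```

In this toy version, a fully sober player (level 1.0) always perceives the Prisoner's Dilemma, while a fully drunk one (level 0.0) always perceives Harmony.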
105 Predicting missing links in criminal networks: the Oversize case [abstract]
Abstract: The problem of link prediction in networks has recently received increasing attention. One of its aims is to recover missing links, namely connections among actors which are likely to exist but have not been reported because data are incomplete or uncertain. In criminal investigations, problems of incomplete information are encountered almost by definition, given the obvious anti-detection strategies set up by criminals and the limited investigative resources. We analyze a specific dataset obtained from a real investigation (operation Oversize, an Italian criminal case against a mafia group), including all wiretap conversations recorded by investigators and involving 182 individuals. The peculiarity of the case lies in the availability of three networks derived at different stages of the proceedings: the wiretap records (WR), including all wiretap conversations; the arrest warrant (AW), with a selection of the transcripts; and the judgment (JU), summarizing the trial. A few links are removed in passing from WR to AW and then to JU, since they are considered not relevant by the investigators (marginal links). We aim to identify missing links, namely highly probable social ties not revealed by the wiretap records. We propose a strategy based on the topological analysis of the links classified as marginal, i.e. removed during the investigation procedure. The main assumption is that missing (i.e. undetected) links should have features opposite to those of marginal ones. Surprisingly, a centrality measure such as link betweenness proves unable to characterize marginal links, which are instead captured by standard measures of node similarity. A pool of link prediction methods, both local and global, is then applied, and their results are analyzed and compared. The thorough inspection of the judicial source documents validates the results and confirms that the predicted links, in most instances, do relate actors with a high likelihood of co-participation in illicit activities.
Giulia Berlusconi, Francesco Calderoni, Nicola Parolini, Marco Verani and Carlo Piccardi
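As a sketch of the node-similarity approach that the abstract reports as effective, the following scores every unconnected node pair by the Jaccard similarity of their neighbourhoods; the authors apply a pool of local and global predictors, of which this is only one simple local example:

```python
def jaccard_scores(adj):
    """Score non-adjacent node pairs by Jaccard similarity of their
    neighbourhoods; high scores flag candidate missing links.

    adj: dict mapping each node to the set of its neighbours.
    """
    nodes = sorted(adj)
    scores = {}
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if v in adj[u]:
                continue  # already linked, nothing to predict
            inter = adj[u] & adj[v]
            union = adj[u] | adj[v]
            if union:
                scores[(u, v)] = len(inter) / len(union)
    return scores
```

Ranking the returned pairs by score gives a predicted list of undetected ties; in the Oversize setting these predictions were checked against the judicial source documents.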
6 The Risky Business of Asking for Help: An Agent Based Model of Unmet Need [abstract]
Abstract: In this work we present an agent-based model of elderly care where populations of decision-theoretic agents play a game reflecting the interwoven supply- and demand-side decision-making processes that govern whether older adults seek, and receive, support in their activities of daily living. The model draws together longitudinal survey data (ELSA) to provide base rates of need for support, care costs from local authority activity reports (HSCIC PSSE/PSSA), and attitude surveys (ONS OPN, EuroBarometer, and ESS) to produce distributions of synthetic agent psychologies. We then calibrate the model against reported rates of unmet need from the ELSA dataset, by building statistical emulators of the simulation model to rapidly explore the free parameter space. The simulation results suggest that the care system is most sensitive to the balance between the perceived costs of failing to provide care where needed and the rewards of delivering appropriate support. Further, the model indicates that the real system lies close to collapse, with relatively small decreases in perceived costs and rewards leading to breakdown. Potential applications of the simulation itself are in the arena of policy development, by suggesting possible implications of interventions, for example the impact of increases in the cost of care provision, or of campaigns targeting the perception of stigma attached to age. In addition, the parameterisation and calibration of the model demonstrate the possibilities of simulation as a method for integrating disparate data sources.
Jonathan Gray, Jakub Bijak and Seth Bullock
469 Opinion evolution in the presence of stubborn agents: from consensus to disagreement [abstract]
Abstract: The structural properties of social networks, naturally described by graphs of interpersonal ties, have been thoroughly studied by the interdisciplinary theory of Social Network Analysis (SNA). However, the dynamics and evolution of social systems have mainly remained beyond the scope of SNA, confined to special processes over social networks, such as random walks and epidemic spread. In spite of the rapid progress in the study of complex systems and their dynamics, reinforced by the development of software tools for big data analysis, the realm of dynamic social networks still remains a challenge for modern science. An important reason for this is the lack of mathematical models representing social groups as dynamical systems. Such models should be sufficiently simple to allow rigorous analysis yet remain sufficiently "rich" to capture the behavior of real social systems. Unlike many natural and engineered systems, social networks rarely exhibit regular behaviors like consensus and synchronization; the opinions and actions of social agents are usually characterized by persistent disagreement and clustering. In this paper, we consider a model of opinion evolution in a social group, proposed by Friedkin and Johnsen in 1999 and confirmed by experiments with small social groups. This model extends the consensus-based procedure for decision making, dating back to the works of French (1956) and DeGroot (1974), to the case where some agents are "stubborn" and "attached" to their initial opinions, factoring them into every stage of the opinion iteration. We offer necessary and sufficient graph-theoretic conditions for the convergence of opinions in the Friedkin-Johnsen model. The model is also extended to the case where opinions are multidimensional and consist of interdependent scalar topic-specific opinions; such a model can be used to describe the evolution of belief systems.
Anton Proskurnikov, Sergei Parsegov, Roberto Tempo and Noah Friedkin
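The Friedkin-Johnsen iteration described above has a compact standard form, x(t+1) = Λ W x(t) + (I - Λ) x(0), where W is a row-stochastic influence matrix and Λ is the diagonal matrix of agents' susceptibilities (a fully stubborn agent has susceptibility 0 and keeps its initial opinion). A minimal sketch:

```python
def friedkin_johnsen(W, susceptibility, x0, n_iter=200):
    """Iterate x(t+1) = L W x(t) + (I - L) x(0), L = diag(susceptibility).

    W: row-stochastic influence matrix (list of lists).
    susceptibility[i] in [0, 1]: 0 means fully stubborn (agent i keeps
    x0[i] forever), 1 means fully open to neighbours' opinions.
    """
    n = len(x0)
    x = list(x0)
    for _ in range(n_iter):
        x = [susceptibility[i] * sum(W[i][j] * x[j] for j in range(n))
             + (1 - susceptibility[i]) * x0[i]
             for i in range(n)]
    return x
```

With two mutually influencing agents of susceptibility 0.5 and initial opinions 0 and 1, the iteration settles on the persistent disagreement (1/3, 2/3) rather than consensus, which is exactly the kind of outcome the model was built to capture.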

Foundations & Biology  (FB) Session 1


Time and Date: 10:45 - 12:45 on 22nd Sep 2016

Room: R - Raadzaal

Chair: Samuel Johnson

276 Structurally induced noncritical power-laws in neural avalanches [abstract]
Abstract: Percolation has been used as a model for describing a wide range of different phenomena [Saberi, Phys. Reports 2015; Eckmann et al., Phys. Reports 2007]. For example, the distribution of sizes of epidemics or neural avalanches has been studied using models of percolation on networks [Faqeeh et al., arXiv:1508.05590, 2015]. In particular, using a model of percolation, it was shown [Friedman and Landsberg, Chaos 2013] that the hierarchical modular structure observed in brain networks can contribute to a power-law distribution of avalanche sizes and durations, and to critical behavior of neural dynamics. This shows the potential of percolation models to help understand the origin of the various power-law behaviors observed in experimental setups [Friedman et al., PRL 2012] and computational simulations [Rubinov, PLoS Comp. Biol. 2011] of neural systems. Here, we use methods employed in percolation theory, popularity dynamics, and critical branching processes to investigate the distribution of avalanche sizes on random networks with arbitrary degree distribution. We show, using theoretical and numerical calculations, that even a simple model of neural dynamics can produce a range of distinct power-law behaviors: for scale-free networks with degree distribution exponent $3<\nu<4$, the avalanche size distribution $P(s)$, at the critical point of the neural dynamics model, has a power-law form with exponent $(2\nu-3)/(\nu-2)$. Interestingly, for such scale-free networks, $P(s)$ in the subcritical regime is also a power-law, with exponent $\nu$. This refutes the previous analysis [Cohen, PRE 66, 036113, 2002], which indicated that away from the critical point $P(s)$ should be a power-law (with exponent 5/2) with an exponential cutoff. In networks with $\nu>4$, or in non-scale-free networks, $P(s)$ at the critical point is a pure power-law with exponent 5/2; nonetheless, even away from the critical point, a power-law with exponent 5/2 extending over several orders of magnitude can be observed for $P(s)$.
Ali Faqeeh and James Gleeson
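A minimal illustration of the avalanche-size statistics discussed above is a Galton-Watson branching process: each active node triggers a random number of new activations with mean equal to the branching ratio, and the avalanche size is the total activation count. The binomial offspring distribution below is an arbitrary illustrative choice, not the paper's network model:

```python
import random

def avalanche_size(branching_ratio, rng, cap=100000):
    """Total size of one avalanche in a branching process.

    Each active node independently triggers a Binomial(2, m/2) number
    of new activations, so the mean offspring number equals the
    branching ratio m; m < 1 is subcritical, m = 1 is critical.
    `cap` guards against near-critical avalanches running away.
    """
    size, active = 1, 1
    while active and size < cap:
        children = sum(1 for _ in range(2 * active)
                       if rng.random() < branching_ratio / 2.0)
        size += children
        active = children
    return size
```

Sampling many avalanches gives a mean size of 1/(1 - m) in the subcritical regime, and histogramming the sizes as m approaches 1 reproduces the kind of heavy-tailed distributions the abstract analyses.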
383 Stochastic modeling of tumor emergence induced by cell-to-cell communication disruption in elastic epithelial tissue [abstract]
Abstract: It is known that noise in gene expression arises in two ways: the inherent stochasticity of biochemical processes generates "intrinsic" noise, while "extrinsic" noise refers to variation in identically-regulated quantities between different cells. The small number of reactant molecules involved in gene regulation can lead to significant fluctuations in protein concentrations. To study the spatial effects of intrinsic and extrinsic noise on the gene regulation determining the emergence of tumors, we have applied a multiscale chemo-mechanical model of cancer development in epithelial tissue proposed recently in [1]. The epithelium is represented by an elastic 2D array of polygonal cells, each with its own gene regulation dynamics. The model allows for the simulation of the evolution of multiple cells interacting via chemical signaling or mechanically induced strain. The algorithm includes the transformation of normal cells into a cancerous state, triggered by a local failure of spatial synchronization of the cellular rhythms. To model the delay-induced stochastic chemical signaling, we have used a generalization of the Gillespie algorithm that accounts for delay, as suggested in [2]. The possibility of stochastic pattern formation produced by the joint action of time delay and noise was demonstrated in [3]. In this work, we study the effect of the stochastic oscillations responsible for cell-to-cell communication on the emergence of tumors. Both the intrinsic and extrinsic contributions to stochastic pattern formation and circadian rhythm disruption have been explored numerically. [1] Bratsun D.A., Merkuriev D.V., Zakharov A.P., Pismen L.M. Multiscale modeling of tumor growth induced by circadian rhythm disruption in epithelial tissue. J. Biol. Phys. 42, 107-132 (2016). [2] Bratsun D., Volfson D., Hasty J., Tsimring L.S. Delay-induced stochastic oscillations in gene regulation. PNAS 102, 14593-14598 (2005). [3] Bratsun D.A., Zakharov A.P. Spatial Effects of Delay-Induced Stochastic Oscillations in a Multi-scale Cellular System. Springer Proceedings in Complexity, 93-103 (2016).
Dmitry Bratsun and Ivan Krasnyakov
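The Gillespie algorithm mentioned above can be illustrated on the simplest gene-expression example, a birth-death process for a protein. The delayed variant of [2] additionally queues reactions that fire after a fixed delay; this sketch shows only the standard, non-delayed algorithm:

```python
import random

def gillespie_birth_death(k_prod, k_deg, x0, t_max, rng):
    """Exact stochastic simulation (Gillespie SSA) of protein
    production at rate k_prod and degradation at rate k_deg * x.
    Returns the trajectory as a list of (time, copy number) pairs."""
    t, x = 0.0, x0
    traj = [(t, x)]
    while t < t_max:
        a_prod, a_deg = k_prod, k_deg * x
        a_tot = a_prod + a_deg            # total propensity
        if a_tot == 0.0:
            break
        t += rng.expovariate(a_tot)       # exponential waiting time
        if rng.random() < a_prod / a_tot:
            x += 1                        # production event
        else:
            x -= 1                        # degradation event
        traj.append((t, x))
    return traj
```

For this reaction pair the stationary copy-number distribution is Poisson with mean k_prod/k_deg, so a long run should fluctuate around that value; the delayed version produces the sustained oscillations discussed in the abstract.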
173 Hyper-rarity in tropical forests: beyond species richness [abstract]
Abstract: Tropical forests have long been recognised as one of the largest pools of biodiversity, and tree inventory databases from closed-canopy forests have recently been used to estimate their species richness. Global patterns of empirical abundance distributions for vascular plant species show that tropical forests vary in their absolute number of species but display surprising similarities in the distribution of populations across species. In Amazonia, hyper-dominant species are only 1.4% of the total, but they account for half of all trees; at the other end of the spectrum, hyper-rare species make up nearly 70% of the entire pool, but their total population is only 0.12% of all trees. This extreme heterogeneity in abundances across species forms the core of Fisher's paradox, an important open question in ecology. Here we introduce an analytical framework which provides robust and accurate estimates of species richness and abundance distributions in biodiversity-rich ecosystems. We find that previous methods have systematically overestimated the total number of species. Our analysis of 15 empirical forest plots also highlights that ecosystems at stationarity tend to maximise the relative fluctuation of their abundances, producing a large number of rare species and only a few common ones. We argue that a large number of rare species provides a buffer against declines: when biotic factors or environmental conditions change, some of the rare species may be better able than others to maintain ecosystem functions, because different species respond differently to environmental changes. This further underscores the importance of rare species and their link with the insurance effect.
Anna Tovo, Samir Suweis, Marco Formentin, Marco Favretti, Jayanth Banavar, Sandro Azaele and Amos Maritan
389 Human mobility network and persistence of rapidly mutating pathogens [abstract]
Abstract: Rapidly mutating pathogens may be able to persist in the population and reach an endemic equilibrium by escaping the acquired immunity of hosts. For such diseases, multiple biological, environmental and population-level mechanisms determine the epidemic dynamics, including the pathogen's epidemiological traits, seasonality, interaction with other circulating strains, and the spatial fragmentation and mixing of hosts. We focus on the latter two factors and study the impact of the heterogeneities characterizing population distribution and the mobility network on the equilibrium dynamics of the infection, both with one strain and with multiple competing strains. We consider a susceptible-infected-recovered-susceptible model on a metapopulation system where individuals are distributed in subpopulations connected by a network of mobility flows. We simulate disease spreading by means of a mechanistic stochastic model, and we systematically explore different levels of spatial disaggregation, probability of traveling among subpopulations, and mobility network topology, reconstructing the phase space of pathogen persistence and the dynamics out of equilibrium. The results reveal a rich dynamical behaviour. An increase in the average duration of immunity reduces the chance of persistence, until extinction becomes certain above a threshold value. This critical parameter, however, is crucially affected by the traveling probability, being larger for intermediate levels of mobility coupling. The dynamical regimes observed are very diversified and present oscillations and metastable states. Topological heterogeneities leave their signature on the spatial dynamics, where subpopulation connectivity affects the recurrence of epidemic waves, the spreading velocity and the chance of being infected. The present work uncovers the crucial role of the hosts' spatial structure in the ecological dynamics of rapidly mutating pathogens, opening the path for further studies on disease ecology in the presence of a complex and heterogeneous environment.
Alberto Aleta, Yamir Moreno, Sandro Meloni, Chiara Poletto and Vittoria Colizza
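A deterministic toy version of the metapopulation SIRS dynamics, with two patches coupled by a symmetric mobility flux, already shows the role of the immunity-loss rate: with waning immunity the infection settles into an endemic state, while with permanent immunity it burns out. All parameter values below are illustrative; the paper's model is stochastic and runs on many subpopulations with a realistic mobility network:

```python
def sirs_two_patches(beta, gamma, omega, coupling, days, dt=0.01):
    """Deterministic SIRS in two patches coupled by a diffusive flux.

    beta: transmission rate, gamma: recovery rate, omega: rate of
    immunity loss (1/omega = average immunity duration); coupling
    exchanges a fraction of each compartment between the patches.
    Returns the final [S, I, R] state of each patch.
    """
    state = [[0.99, 0.01, 0.0], [1.0, 0.0, 0.0]]  # seed patch 0 only
    for _ in range(int(days / dt)):
        new = []
        for p in range(2):
            S, I, R = state[p]
            dS = -beta * S * I + omega * R
            dI = beta * S * I - gamma * I
            dR = gamma * I - omega * R
            new.append([S + dt * dS, I + dt * dI, R + dt * dR])
        for c in range(3):                 # symmetric mobility sketch
            flux = coupling * (new[0][c] - new[1][c])
            new[0][c] -= dt * flux
            new[1][c] += dt * flux
        state = new
    return state
```

With beta=0.5, gamma=0.2 and omega=0.05, both patches approach the endemic prevalence I* = omega * (1 - gamma/beta) / (gamma + omega) = 0.12, while setting omega=0 (SIR, permanent immunity) makes the epidemic die out, mirroring the persistence threshold discussed in the abstract.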
156 Applying the Epidemic Spreading Model to Explain Brain Activity [abstract]
Abstract: The role of correlations in the communication process in functional brain networks is still highly debated in neuroscience. In this study, we apply a simple SIS epidemic spreading model on the human connectome to analyze the structural topological properties that drive these correlations of activity. We first verify results from previous discrete-time studies with our continuous-time simulations. Then, we introduce a small time delay and analyze the so-called delayed correlations of one brain region with the others. We find that, just above the critical threshold, direct structural connections induce higher cross-correlations between two brain regions, and that the larger the distance between two nodes in the structural network, the lower their delayed correlation. We prove analytically that the delayed auto-correlation is decreasing for small time lags and show with simulations that it even seems to decay exponentially for very small time lags. Hubs seem to have a lower auto-correlation than other nodes, but their delayed correlation with direct neighbors seems to be much higher than with other nodes. Previous studies found that the direction of activity spreading in the human connectome seems to be mostly from the back of the brain to the front. Using the delayed correlations and the measure of transfer entropy, we can confirm this dominant back-to-front pattern with our SIS model. We show that the "rich club" of densely connected hubs seems to be responsible for this observed spreading pattern.
Jil Meier, Xiangyu Zhou, Cornelis Jan Stam and Piet Van Mieghem
535 On Complex Dynamics of Sparse Discrete Hopfield Networks and Its Implications [abstract]
Abstract: It has been argued that complex behavior in many biological systems, including but not limited to human and animal brains, is to a great extent a consequence of the high interconnectedness among the individual elements, such as neurons in brains. As a very crude approximation, the brain can be viewed as an associative memory implemented as a large network of heavily interconnected neurons. Hopfield Networks are a popular model of associative memory. From a dynamical systems perspective, it has been posited that the complexity of the possible behaviors of a Hopfield network is largely due to this high level of interconnectedness. We show, however, that many aspects of provably complex – and, in particular, unpredictable within realistic computational resources – behavior can also be obtained in very sparsely connected Hopfield networks and related classes of Boolean Network Automata. In fact, it turns out that the most fundamental problems about the memory capacity of a Hopfield network are computationally intractable, even for restricted types of networks that are uniformly sparse, with only a handful of neighbors per node. This is significant not only from a theoretical computer science standpoint, but also from the perspectives of connectionist science and neuroscience: animal brains viewed as networks of neurons are relatively sparse and have local structure. One implication of our work is that some of the most fundamental aspects of biological (and other) networks' dynamics do not require high density in order to exhibit provably complex behavior that is computationally intractable to predict.
Predrag Tosic
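For readers unfamiliar with the memory model discussed above, here is a minimal dense Hopfield network with Hebbian weights recalling a stored pattern from a corrupted probe. The paper's results concern sparse networks and the computational complexity of capacity questions, which this sketch does not address:

```python
def hopfield_recall(patterns, probe, n_sweeps=10):
    """Associative recall in a dense Hopfield network.

    patterns: list of stored +/-1 patterns; probe: +/-1 start state.
    Builds Hebbian weights (zero diagonal) and runs asynchronous
    threshold updates until the state settles (or n_sweeps elapse).
    """
    n = len(probe)
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / len(patterns)
    s = list(probe)
    for _ in range(n_sweeps):
        for i in range(n):                 # asynchronous updates
            field = sum(W[i][j] * s[j] for j in range(n))
            s[i] = 1 if field >= 0 else -1
    return s
```

Starting from a probe with one flipped bit, the dynamics fall back into the stored pattern, which is the "memory" behavior whose capacity questions the paper shows to be intractable even in the sparse case.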

Urban  (U) Session 4


Time and Date: 10:45 - 12:45 on 22nd Sep 2016

Room: L - Grote Zaal

Chair: Elisa Omodei

508 Spatial Patterns in Urban Systems [abstract]
Abstract: The study of urban systems, how they form and develop, constitutes an important portion of human knowledge, not only because it concerns our own physical space of daily living but also because it helps us understand the underlying mechanisms of human settlement and civilisation on the Earth's surface, which may be fundamentally similar to other forms of organisation like biological cells in our body or animal colonies. Among the physical features of an urban system, the complex patterns delineated by the physical locations and shapes of urban entities like buildings, parks, lakes or infrastructure can inform us about its current state of development or even the living conditions of the people inside it. In this study, we explore the spatial patterns encompassed in urban systems by analysing the spatial distribution of transport points in the public transport networks of 73 cities around the world. The analysis reveals that the different spatial distributions of points can be quantified and shown to belong to two main groups, in which the points are either approximately equidistant or distributed with multiple length scales. The first group contains cities that appear to be well planned, i.e. the organised type, while the second consists of cities that tend to spread themselves over a large area and possess a non-uniform spatial density of urban entities at different length scales, i.e. the organic type. In addition to the public transport network, we also look at the distribution of amenities within each city to investigate the relation between these two types of urban entity, and find that it possesses universal properties regardless of the city's spatial pattern type. This result has one important implication: urban dynamics cannot be controlled at the small scale of a locality, even though regulation can be applied at the large scale of the entire urban system.
Neil Huynh, Evgeny Makarov, Erika Legara, Christopher Monterola and Lock Yue Chew
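One simple way to quantify the distinction drawn above between approximately equidistant point sets and those spread over multiple length scales is the coefficient of variation of nearest-neighbour distances; this statistic is an illustrative choice, not necessarily the measure used by the authors:

```python
import math
import random

def nn_distance_cv(points):
    """Coefficient of variation of nearest-neighbour distances.

    Near 0 for an approximately equidistant (grid-like) point set,
    larger for points distributed with multiple length scales.
    points: list of (x, y) coordinates.
    """
    ds = []
    for i, (x, y) in enumerate(points):
        best = min(math.hypot(x - a, y - b)
                   for j, (a, b) in enumerate(points) if j != i)
        ds.append(best)
    mean = sum(ds) / len(ds)
    var = sum((d - mean) ** 2 for d in ds) / len(ds)
    return (var ** 0.5) / mean
```

A perfect grid of stops ("organised" type) gives a coefficient of exactly zero, while scattered stops with heterogeneous spacing ("organic" type) give a clearly positive value.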
541 Understanding Transition Patterns of Synchronization Stability in Power Grids [abstract]
Abstract: Power-grid nodes are coupled oscillators in electric power systems. In the normal operational state of power grids, the phase frequencies of the power-grid nodes are synchronized. The synchronization is self-sustained in that it recovers synchrony after small perturbations. However, large perturbations can break the synchronization, and the synchronization stability varies with the topological position of nodes and with network parameters such as transmission strength. In this study, we investigate how the synchronization stability undergoes transitions according to the network topology and the transmission strength between nodes. We track the stability transition using a Kuramoto-type model as a function of the transmission strength. Based on the transition shapes of the synchronization stability, we reveal that the width of the transition curve is correlated with community consistency, which represents how consistently a node associates with other nodes. In addition, we find that the transition shapes fall into a few distinct patterns. Through the analysis of 598 isomorphically distinct topologies, we classify the transition patterns into four groups. Neither macro- nor micro-level network characteristics predict the transition patterns well. However, we find that pathway-based nodal centralities such as betweenness are good indicators of the synchronization stability transition.
Heetae Kim, Sang Hoon Lee and Petter Holme
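The synchronization transition discussed above can be illustrated with the smallest Kuramoto-type system: two oscillators with natural-frequency difference Δω and coupling K, whose phase difference obeys φ' = Δω − 2K sin φ and frequency-locks exactly when 2K ≥ |Δω|. A sketch (the parameter values are illustrative, and real power-grid studies use the second-order swing-equation variant on the full network):

```python
import math

def phase_difference_drift(delta_omega, K, t_max=200.0, dt=0.001):
    """Integrate phi' = delta_omega - 2K sin(phi) for two coupled
    Kuramoto oscillators; returns the average drift rate of the phase
    difference over the second half of the run. A (near-)zero drift
    means the pair is frequency-synchronized (locked)."""
    phi = 0.0
    half = int((t_max / 2) / dt)
    for _ in range(half):                  # discard the transient
        phi += dt * (delta_omega - 2 * K * math.sin(phi))
    start = phi
    for _ in range(half):
        phi += dt * (delta_omega - 2 * K * math.sin(phi))
    return abs(phi - start) / (t_max / 2)
```

Sweeping K for a fixed Δω traces out exactly the kind of stability-transition curve (drifting to locked) whose shape and width the abstract analyses across network topologies.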
422 Coupling Network Structure and Land Use in Modelling Transportation Travel Demand [abstract]
Abstract: Interactions and movements in urban systems are controlled by their transport networks. For these systems to function efficiently, there is a need to build a robust network. We present here how land use and network structure can be coupled to model the travel demand of stations in a transport network. This gives us insights into how transport networks should evolve with changing land use patterns. We apply the model to the Singapore Rapid Transit System (RTS). We find that, by considering network structure and using the entropy of the gross plot ratio as the potential measure of how likely a commuter is to be attracted to travel to specific areas, the model predicts the data well, with a correlation of 0.76.
Cheryl Abundo, Erika Fille Legara, Christopher Monterola and Lock Yue Chew
250 Conceptualizing Self-organization in Urban Planning: Turning diverging paths into consistency [abstract]
Abstract: Within the realm of urban studies and spatial planning, the concept of self-organization receives increasing attention in understanding spatial transformations and related planning interventions (De Roo et al, 2012; Portugali, 2011). In exploring the potential of self-organization, various scholars however introduce diverging interpretations of the concept, consequently leading to different views of what self-organization can offer to planners. In the first part of the paper, we show that these different interpretations have their foundations in two distinct epistemic positions. One is a critical-realist interpretation of complex adaptive systems (Byrne, 2005), resulting in a planning focused on pattern recognition and the formulation of guiding conditions (Portugali, 2011; Rauws, 2015). The other is a post-structuralist interpretation of emerging assemblages (Cilliers, 1998; DeLanda, 2006), leading to a planning focused on personal style and situational behavior (Boonstra, 2015). Although both contribute to further explicating what self-organization can offer to planners, the potential synergies between the two epistemic positions have so far remained unexplored. Therefore, the second part of the paper explores their complementarity in dealing with urban transformations and discusses how to bring them into consistency with one another, that is, how they can mutually reinforce each other without losing their individual epistemic strengths. Based on this exploration, we suggest a style of spatial planning in which the planner is able to act adaptively and to differentiate in style in response to the situation at stake, among other things by means of pattern recognition. On a conceptual level, the paper shows how planning scholars can make sense of the diversity of ongoing processes of self-organization in the context of spatial transformations.
Beitske Boonstra and Ward Rauws
354 Infrastructure planning in a dynamic environment. A complexity theory perspective on adaptive planning. [abstract]
Abstract: The planning and realization of transport infrastructure occur in a continuously changing environment. The climate is changing, our economy is becoming circular, environmental requirements and restrictions grow, and our society becomes more energetic and participative. Dealing with these changes is a major challenge in infrastructure planning and implementation nowadays. Traditionally, infrastructure planning focused on modelling and forecasting future developments based on historical data and socio-economic scenarios; the complexity and dynamics of infrastructure and its environment are reduced to something concrete and manageable. In practice, this leads to a reactive way of working. With the emergence of complexity theory and the realization that the object of planning behaves as a complex system, a system of many actors with mutual reciprocal relationships, the emphasis in infrastructure planning is shifting from prescribing to creating a context that allows and stimulates variation to occur. Planning becomes more adaptive and aims at the ability to develop variation and to choose the "best fits" given changing circumstances. This is a major transition in the context of traditional infrastructure planning, with its specific institutional design and specific relational and contractual characteristics. Considering the infrastructure sector as a complex adaptive social system, the paper analyses adaptability from a complexity theory perspective and confronts this with current practice in (Dutch) infrastructure planning, implementation and exploitation through the analysis of cases. The focus is thereby on conditions that allow variation through interaction between related actors in the system. As the most important relations, public-public and public-private relationships are analysed. From this confrontation of theory and practice, dilemmas for further discussion are formulated and recommendations are made to infrastructure authorities and markets involved in infrastructure development on how to facilitate the transition described above from traditional technical-rational planning to a more adaptive planning.
Wim Leendertse, Stefan Verweij, Jos Arts and Frits Verhees

Foundations  (F) Session 5


Time and Date: 10:45 - 12:45 on 22nd Sep 2016

Room: M - Effectenbeurszaal

Chair: Marta Sales-Pardo

438 Breaking of Ensemble Equivalence in Complex Networks [abstract]
Abstract: It is generally believed that, in the thermodynamic limit, the microcanonical description as a function of energy coincides with the canonical description as a function of temperature. However, various examples of systems for which the microcanonical and canonical ensembles are not equivalent have been identified. A complete theory of this intriguing phenomenon is still missing. Here we show that ensemble nonequivalence can manifest itself also in random graphs with topological constraints. We find that, while graphs with a given number of links are ensemble equivalent, graphs with a given degree sequence are not. This result holds irrespective of whether the energy is nonadditive (as in unipartite graphs) or additive (as in bipartite graphs). In contrast with previous expectations, our results show that (1) physically, nonequivalence can be induced by an extensive number of local constraints, and not necessarily by long-range interactions or nonadditivity; (2) mathematically, nonequivalence is determined by a different large-deviation behavior of microcanonical and canonical probabilities for a single microstate, and not necessarily for almost all microstates. The latter criterion, which is entirely local, is not restricted to networks and holds in general.
Diego Garlaschelli
234 Universal temporal features of ranking in sports and games. [abstract]
Abstract: Sports and games can be described as complex systems. We describe these systems through a mathematical theory in order to find universal behavior in the dynamics of different sports and games related to the players' performance. This description can be achieved by studying ranking dynamics. We employed data from tennis, chess, golf, poker and football. Players are ranked by each official association, and each player or team has a score that varies with time, which results in a time-dependent ranking. We tested five models to describe the distribution of scores at a given time. The rank diversity is introduced to quantify the evolution of rankings; it is defined as the number of distinct elements of the complex set which have a given rank k within a given period of time. In other words, we analyze the time dependence of the ranks and observe a global behavior, despite the differences in score distributions. The rank diversity in the six cases studied closely follows the cumulative of a Gaussian. We introduce a simple model to reproduce the observed behavior: a member with rank k(t), at the discrete time t, is converted to rank k(t+1) by taking k(t) and adding a Gaussian random number whose standard deviation corresponds to the original rank diversity, obtaining s(t+1); once this has been done for all members, they are reordered according to the magnitude of s, producing a new ranking. Good agreement is obtained.
José Antonio Morales-Alvarez and Carlos Gershenson
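The ranking model described above can be sketched as follows; for simplicity this version perturbs ranks with a single constant noise amplitude, whereas the paper draws the noise with a rank-dependent standard deviation taken from the measured rank diversity:

```python
import random

def simulate_rank_diversity(n_members, n_steps, sigma, seed=0):
    """Rank-evolution model sketch: at each step every member's rank
    is perturbed by Gaussian noise and members are re-ranked by the
    noisy value. Returns the rank diversity: the number of distinct
    members that held each rank over the run."""
    rng = random.Random(seed)
    members = list(range(n_members))       # members[r] holds rank r
    seen = [set() for _ in range(n_members)]
    for _ in range(n_steps):
        noisy = sorted(range(n_members),
                       key=lambda r: r + rng.gauss(0.0, sigma))
        members = [members[r] for r in noisy]
        for rank, m in enumerate(members):
            seen[rank].add(m)
    return [len(s) for s in seen]
```

With zero noise every rank is held by a single member (diversity 1 everywhere), while positive noise lets members wander through the ranking and the diversity grows, the behavior the paper compares against the empirical cumulative-Gaussian curve.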
56 How to estimate epidemic risk from incomplete contact diaries data [abstract]
Abstract: Social interactions shape the patterns of spreading processes in a population. Techniques such as diaries or proximity sensors make it possible to collect data about encounters and to build networks of contacts between individuals. The contact networks obtained from these different techniques are, however, quantitatively different. Here, we first show how these discrepancies affect the prediction of the epidemic risk when the data are fed to numerical models of epidemic spread: a low participation rate, the under-reporting of contacts and the overestimation of contact durations in contact diaries with respect to sensor data indeed determine important differences in the outcomes of the corresponding simulations. Most importantly, we investigate if and how information gathered from contact diaries can be used in such simulations in order to yield an accurate description of the epidemic risk, assuming that data from sensors represent the ground truth. The contact networks built from contact sensors and diaries do present several structural similarities: this suggests the possibility of constructing, using only the contact diary network information, a surrogate contact network such that simulations using this surrogate network give the same estimation of the epidemic risk as simulations using the contact sensor network. We present and evaluate several methods to build such surrogate data and show that it is indeed possible to obtain a good agreement between the outcomes of simulations using surrogate and sensor data, as long as the contact diary information is complemented by publicly available data describing the heterogeneity of the durations of human contacts.
Alain Barrat and Rossana Mastrandrea
139 Epidemic Mitigation via Awareness Propagation in Communications Network: the Role of Time Scale [abstract]
Abstract: The pervasiveness of the Internet and smartphones enables individuals to participate in multiple networks, such as communications networks and the physical contact network. The feedback between these networks opens new possibilities to mitigate epidemic spreading. For instance, the spread of a disease in a physical contact network may trigger the propagation of information related to this disease in a communications network, which in turn may increase the alertness of some individuals, resulting in their avoidance of contact with their infected neighbours in the physical contact network and possibly protecting the population from infection [1,2]. In this work, we aim to understand how the time scale of information propagation (the speed at which information is spread and forgotten) in the communications network, relative to that of the epidemic spread in the physical contact network, influences such mitigation using awareness information. We first propose a model of the interaction between information propagation and epidemic spread, taking into account their relative time scale. We analytically derive the average fraction of infected nodes in the meta-stable state for this model (i) by developing an individual-based mean-field approximation method and (ii) by extending the Microscopic Markov Chain Approach first introduced in [1,2]. Furthermore, we find that optimal mitigation can be achieved at a relative time scale that is neither necessarily zero nor infinite, depending on the rate at which an infected individual becomes aware. Contrary to intuition, fast information spread in the communications network can even reduce the mitigation effect. Finally, we find that mitigation tends to perform better when more node pairs are connected in both the physical contact and communications networks. We explain these findings using both theoretical analysis and physical interpretations. [1] Granell C, Gomez S, Arenas A, Phys.Rev.E. 2014;90(1):012808.
[2] Granell C, Gomez S, Arenas A, Phys.Rev.L. 2013;111(12):128701.
Huijuan Wang, Chuyi Chen, Bo Qu, Daqing Li and Shlomo Havlin
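A toy discrete-time version of the coupled awareness/disease dynamics described above can illustrate the role of the awareness time scale, set here by the forgetting rate delta_a. This is a hedged sketch, not the authors' exact continuous-time model: all names, and the choice to make infected nodes automatically aware of their own state, are our assumptions.

```python
import numpy as np

def coupled_sis_uau(A_phys, A_comm, beta, delta, beta_a, delta_a,
                    gamma, steps=200, seed=0):
    """Toy coupled spread: disease on the physical layer A_phys,
    awareness on the communications layer A_comm; aware nodes have
    their infection probability reduced by factor gamma."""
    rng = np.random.default_rng(seed)
    n = len(A_phys)
    infected = rng.random(n) < 0.1               # random initial seeds
    aware = np.zeros(n, dtype=bool)
    for _ in range(steps):
        inf_neigh = A_phys @ infected            # infected physical contacts
        aw_press = A_comm @ (infected | aware)   # awareness pressure
        p_inf = 1 - (1 - beta) ** inf_neigh
        p_inf = np.where(aware, gamma * p_inf, p_inf)  # aware nodes protected
        new_inf = (~infected) & (rng.random(n) < p_inf)
        new_aw = (~aware) & (rng.random(n) < 1 - (1 - beta_a) ** aw_press)
        recover = infected & (rng.random(n) < delta)
        forget = aware & (rng.random(n) < delta_a)
        infected = (infected | new_inf) & ~recover
        # Assumption: infected nodes are aware of their own state.
        aware = (aware | new_aw | infected) & ~forget
    return infected.mean()                       # final infected fraction
```

Sweeping delta_a (and the awareness spreading rate beta_a) against the epidemic rates then probes the relative-time-scale effect the abstract discusses.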
108 The ground truth about metadata and community detection in networks [abstract]
Abstract: Across many scientific domains, there is a common need to automatically extract a simplified view or a coarse-graining of how a complex system's components interact. This general task is called community detection in networks and is analogous to searching for clusters in vector data. It is common to evaluate the performance of community detection algorithms by their ability to find so-called "ground truth" communities. This works well in synthetic networks with planted communities because such networks' links are formed explicitly based on the planted communities. In real-world networks there are no planted communities. Instead, it is standard practice to treat some observed discrete-valued node attributes, or metadata, as ground truth. However, failure to find a good division that correlates with our metadata is a highly confounded outcome, arising for any of several reasons: (i) these particular metadata are irrelevant to the structure of the network, (ii) the detected communities and the metadata capture different aspects of the network's structure, (iii) the network contains no communities, or (iv) the community detection algorithm performed poorly. Most work on community detection assumes that failure to find communities that correlate with metadata implies case (iv). Here, we show that metadata are not the same as ground truth, and that treating them as such induces severe theoretical and practical problems, and can lead to incorrect scientific conclusions even in relatively benign circumstances. However, node metadata still have value and a careful exploration of their relationship with network structure can yield insights of genuine worth. We illustrate this point by introducing two statistical techniques that can quantify the relationship between metadata and community structure for a broad class of community structure models.
We demonstrate both techniques on sets of synthetic and real-world networks, featuring multiple types of metadata and community structure.
Leto Peel, Daniel Larremore and Aaron Clauset
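The conventional comparison the abstract critiques, scoring a detected partition by its agreement with node metadata, is typically done with normalized mutual information. A minimal self-contained NMI is shown below for concreteness (the abstract's point is precisely that a low value here is a confounded outcome, not proof that the algorithm failed); the implementation is our own sketch.

```python
import numpy as np

def _entropy(labels):
    """Shannon entropy (nats) of a label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))

def normalized_mutual_information(a, b):
    """NMI in [0, 1] between two label vectors, e.g. a detected
    community partition and discrete node metadata."""
    a, b = np.asarray(a), np.asarray(b)
    mi = 0.0
    for x in np.unique(a):
        for y in np.unique(b):
            pxy = np.mean((a == x) & (b == y))   # joint label frequency
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(a == x) * np.mean(b == y)))
    denom = np.sqrt(_entropy(a) * _entropy(b))
    return mi / denom if denom > 0 else 0.0
```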
251 Effect of multiple initial spreaders on the spreading velocity in SIS epidemics [abstract]
Abstract: Epidemic processes describe several real-world dynamic activities on networks, where a key concern is how fast a virus will spread through the network. We use the average pervasion time to measure the spreading velocity in an SIS process, where the pervasion time is defined as the minimum time at which every node in the network has been infected at least once. We show that the pervasion time resembles a lognormal distribution, and that the average pervasion time for the same set of spreaders decreases exponentially with the effective infection rate. Increasing the number of initially infected nodes can accelerate the spreading. The simulation results show that multiple spreaders affect the spreading velocity differently depending on the underlying topology, which provides clues as to whether investing in more spreaders is worthwhile. The investment utility is defined as the normalized decrement of the average pervasion time after adding a new spreader, which measures the effect of this spreader. We show that the investment utility decreases steadily with the number of initial spreaders in homogeneous graphs such as the complete graph and the lattice. However, in networks with a power-law degree distribution, the investment utility of the newest spreader drops suddenly once the number of spreaders exceeds a certain value; beyond that point, the investment utility of the lowest-degree node becomes the highest among the remaining nodes. In addition, we reveal that the total investment utility for the same number of spreaders increases with the effective infection rate in a network, which implies a higher investment value in deploying multiple spreaders for processes with a higher effective infection rate.
Zhidong He and Piet Van Mieghem
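The pervasion time defined above is straightforward to measure in simulation. The sketch below runs a single discrete-time SIS realization and returns the first time every node has been infected at least once; averaging over many runs and random seeds would give the average pervasion time. It is illustrative only, not the authors' continuous-time setup.

```python
import numpy as np

def pervasion_time(adj, beta, delta, seeds, t_max=10_000, seed=0):
    """First time every node has been infected at least once in one
    discrete-time SIS run from the given initial spreaders."""
    rng = np.random.default_rng(seed)
    n = len(adj)
    infected = np.zeros(n, dtype=bool)
    infected[list(seeds)] = True
    ever = infected.copy()                       # infected at least once
    for t in range(1, t_max + 1):
        pressure = adj @ infected                # infected neighbour counts
        new_inf = (~infected) & (rng.random(n) < 1 - (1 - beta) ** pressure)
        recover = infected & (rng.random(n) < delta)
        infected = (infected | new_inf) & ~recover
        ever |= infected
        if ever.all():
            return t                             # pervasion time of this run
        if not infected.any():
            return None                          # epidemic died out first
    return None
```

Comparing the average over runs for one versus several seed nodes gives the investment utility the abstract defines.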

Economics  (E) Session 4

Time and Date: 10:45 - 12:45 on 22nd Sep 2016

Room: N - Graanbeurszaal

Chair: Siew Ann Cheong

168 A taxonomy of learning dynamics in 2 x 2 games [abstract]
Abstract: Learning is a convincing method to achieve coordination on a Nash Equilibrium (NE). But does learning converge, and to what? We answer this question in generic 2-player, 2-strategy games, using Experience-Weighted Attraction (EWA), which encompasses most extensively studied learning algorithms. We exhaustively characterize the parameter space of EWA learning, for any payoff matrix, and identify the generic properties that imply convergent or non-convergent behaviour. Irrational choice and lack of incentives imply convergence to a mixed strategy in the centre of the simplex, possibly far from the NE. In the opposite limit, where the players quickly modify their strategies, the behaviour depends on the payoff matrix: (i) a strong discrepancy between the pure strategies is associated with dominance-solvable games, which always converge; (ii) a substantial difference between the diagonal and the antidiagonal elements relates to coordination games, with multiple fixed points corresponding to the NE; (iii) a cycle in beliefs defines discoordination games, which commonly yield limit cycles or low-dimensional chaos. While it is well known that mixed-strategy equilibria may be unstable, our approach is novel from several perspectives: we fully analyse EWA and provide explicit thresholds that define the onset of instability; we find an emerging taxonomy of the learning dynamics, without focusing on specific classes of games ex ante; we show that chaos can occur even in the simplest games; we make a precise theoretical prediction that can be tested against data on experimental learning of discoordination games.
Marco Pangallo, James Sanders, Tobias Galla and Doyne Farmer
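For reference, the standard EWA update for one player in a 2x2 game takes the following form (our notation: phi is the attraction decay, rho the experience decay, delta the weight on foregone payoffs, and beta the intensity of choice in the logit response). This is the textbook update rule, not the paper's specific parametrization.

```python
import numpy as np

def ewa_step(Q, N, payoff, own_action, opp_action, phi, delta, rho):
    """One EWA update of the attractions Q and experience weight N,
    given this player's 2x2 payoff matrix and the actions just played."""
    N_new = rho * N + 1.0
    Q_new = np.empty(2)
    for s in range(2):
        # The played strategy gets full weight; foregone ones weight delta.
        weight = 1.0 if s == own_action else delta
        Q_new[s] = (phi * N * Q[s] + weight * payoff[s, opp_action]) / N_new
    return Q_new, N_new

def logit_choice(Q, beta, rng):
    """Logit (softmax) response with intensity of choice beta."""
    p = np.exp(beta * (Q - Q.max()))   # subtract max for numerical stability
    p /= p.sum()
    return int(rng.choice(2, p=p)), p
```

With phi = rho = delta = 1 the update reduces to fictitious play, one of the limits covered by the taxonomy.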
268 Causal Inference Using Multi-Channel Regime Switching Information Transfer Estimation [abstract]
Abstract: The past decade has seen the development of new methods to infer causal relationships in biological and socio-economic complex systems, following the expansion of network theory. Nevertheless, the standard estimation of causality still involves a single pair of time-dependent variables, which could be conditioned, in some instances, on its close environment. However, interactions may appear at a higher level between parts of the considered systems represented by more than one variable. We propose to study these types of relationships and develop a multi-channel framework, in the vein of Barrett and Barnett (Phys. Rev. E, 81 (2010)), allowing the inference of causal relationships between two sets of variables. Each channel represents the possible interaction between a variable of each sub-system. Based on this new framework, we develop two different multi-channel causality measures, derived from the usual Granger causality to account for linear interactions and from the concept of transfer entropy for nonlinear contributions. Our measures provide different information about the inferred causal links: the strength of the global interaction between the two sub-systems, the average frequency of the channel switches and the channel contributing the most to the information transfer process at each time step. After having demonstrated the ability of our measures to infer linear as well as nonlinear interactions, we propose an application looking at the U.S. financial sector in order to better understand the interactions between individual financial institutions, as well as parts of the financial system. At the individual level, the considered channels between financial institutions are expressed both in terms of spectral representation using wavelet transforms and probability distribution using quantile regressions.
Beyond the application presented in the paper, this new multi-channel framework should be easy to implement in other fields of complex systems science such as neuroscience, biology or physics.
Carl-Henrik Dahlqvist
79 The geography of sleeping beauties in patenting: a country-level analysis [abstract]
Abstract: This study explores sleeping beauties, i.e. breakthrough inventions that experienced delayed recognition, by means of patent data. References in a patent signal the state of the art on which the patent is based, and they can limit the property rights established by its claims. A patent that is cited by many others, thus, includes some technology central to further developments. Patent citations can be used to study patented breakthrough inventions, identifying them as highly cited patents (Singh and Fleming, 2010; Castaldi et al, 2015). We add to this literature by analysing geographical determinants of the occurrence of sleeping beauties. A sleeping beauty is defined as a patent family that is both a sleeper (is not cited for at least x years after its priority date) and highly cited (receives at least x citations). Using this definition, with x=13, the database contains over 3,000 sleeping beauties. We hypothesize that the share of sleeping beauties in the output of a country and the share of sleeping beauties in the total of highly cited patents in a country are higher the more geographically isolated the country is, reasoning that isolation renders diffusion and acceptance of new (radical) ideas more difficult. Geographical isolation is proxied by the mean geographical distance to all foreign patent inventors, measured both generally and specifically for each general technology class. We take into account the presence of international airports, average travel times, each country’s general proficiency in world languages, and the number of patents within each country, while controlling for technological and author variations across patents. Castaldi C, Frenken K, Los B (2015) Related Variety, Unrelated Variety and Technological Breakthroughs: An analysis of US State-Level Patenting. Regional Studies 49(5):767–781 Singh J, Fleming L (2010) Lone Inventors as Sources of Breakthroughs: Myth or Reality? Management Science 56(1):41–56
Mignon Wuestman, Koen Frenken, Jarno Hoekman and Elena Mas Tur
388 Comparing Density Forecasts in a Risk Management Context [abstract]
Abstract: In this paper we develop a testing framework for comparing the accuracy of competing density forecasts of a portfolio return in the downside part of the support. Three proper scoring rules including conditional likelihood, censored likelihood and penalized weighted likelihood are used for assessing the predictive ability of out-of-sample density forecasts, all closely related to the Kullback-Leibler information criterion (KLIC). We consider forecast distributions from the skew-elliptical family of distributions, as these are analytically tractable under affine transformations and projections onto linear combinations. We argue that the common practice to do forecast comparison in high-dimensional space can be problematic in the context of assessing portfolio risk, because a better multivariate forecast does not necessarily correspond to a better aggregate portfolio return forecast. This is illustrated by examples. An application to daily returns of a number of US stock prices suggests that the Student-t forecast distribution outperforms the Normal, Skew t and Skew Normal distributions in the left tail of the portfolio return. Additionally, the visualized dynamics of our test statistic provides empirical evidence for regime changes over the last thirty years. In a second application, techniques for forecast selection based on scoring rules are applied, and it turns out that the one-step-ahead Value-at-Risk (VaR) estimates from those dynamically selected time-varying distributions are more accurate than those based on a fixed distribution.
Cees Diks and Hao Fang
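The censored likelihood rule mentioned above has a simple form: observations in the region of interest are scored with the full log-density, while all observations outside it are lumped into a single point mass. A sketch for a left-tail threshold r, using a standard normal forecast density as a worked example (helper names are ours):

```python
import math

def std_normal_logpdf(y):
    """Log-density of the standard normal forecast distribution."""
    return -0.5 * y * y - 0.5 * math.log(2.0 * math.pi)

def std_normal_cdf(y):
    """CDF of the standard normal, via the error function."""
    return 0.5 * (1.0 + math.erf(y / math.sqrt(2.0)))

def censored_likelihood_score(logpdf, cdf, y, r):
    """Censored likelihood score focused on the left tail y < r:
    full log-density in the tail, a point mass log(1 - F(r)) above it."""
    if y < r:
        return logpdf(y)
    return math.log(1.0 - cdf(r))
```

Averaging score differences between two candidate forecast densities over out-of-sample observations then yields the kind of KLIC-based comparison the abstract describes.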
433 Investigating Open Innovation Collaborations Strategies between Organizations using Multi-level Networks and Dimensions of Similarity [abstract]
Abstract: Open innovation is a set of practices that enable organizations to make direct use of external R&D to augment their internal research. Open innovation has received a lot of attention in the last decade, so it is of considerable interest to understand how widespread these practices are and how they affect the innovation process. Joint application for patents by multiple organizations is a form of open innovation that may result from joint R&D or other knowledge exchange between organizations. Interactions and collaborations affect the external knowledge potentially accessible by an organization, but they may also reduce the organization’s ability to appropriate the value of its internal knowledge. An optimal innovation strategy will balance these factors. We find that joint patent applications are relatively widespread and that organizations utilise a range of strategies. To better understand some of the factors underpinning partner selection, we investigate the role of similarity between organizations and its impact on collaboration. We consider three dimensions of homophily, namely: technological proximity, geographical proximity, and organization type (e.g. company, university, government agency). Here we construct a multi-level network in order to quantify these similarities between organizations. We define the layers of the network as dimensions of homophily. These dimensions (layers) can be viewed as node attributes of bipartite networks. We use European Patent Office data dating back to 1978 for 40 countries with harmonized applicant names (OECD REGPAT and HAN databases) to construct four related bipartite networks relating organizations to patents, technological codes, geographic regions, and organization type. The respective one-mode projections can be combined as a co-organization network related by the different edge types: namely patents, technologies, geography and organization type.
This resulting network shows the structure of connections between organizations and the correlations between patent collaborations and the different dimensions of similarity under consideration.
Catriona Sissons, Demival Vasques, Dion O'Neale and Shaun Hendy
20 Bubbles in the Singapore and Taiwan Housing Markets are Dragon Kings [abstract]
Abstract: Asia is experiencing an unprecedented region-wide housing bubble right now. Should this bubble collapse, the economic and social fallouts are mind-boggling. As Asian governments race against time to defuse these ‘ticking bombs’, a deeper understanding of housing bubbles becomes necessary. By plotting the cumulative distribution functions (CDFs) of home prices per unit area in Singapore between 1995 and 2014, we found that these CDFs are stable over non-bubble years, and consist universally of an exponentially decaying body crossing over to a power-law tail. We also found in bubble years that dragon kings (positive deviations from the equilibrium distribution) develop near where the exponential body crosses over to the power-law tail. These were found in home price distribution of the Greater Taipei Area between Aug 2012 and Jul 2014, even though the two housing markets are structurally different. For the Singapore housing market, we also investigated the spatio-temporal dynamics of the bubble, and found price surges always start in a prestigious investment district, before propagating outwards to the rest of the island.
Darrell Jiajie Tay, Chung-I Chou, Sai-Ping Li, Shang-You Tee and Siew Ann Cheong
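The reported shape of the price distribution, an exponentially decaying body crossing over to a power-law tail, can be written as a simple continuous body-tail form; comparing the empirical CCDF against such a fit near the crossover is one way to spot dragon-king deviations. The functional form below is a plausible sketch of that verbal description, not the authors' fitted model, and all parameter names are ours.

```python
import numpy as np

def empirical_ccdf(prices):
    """Empirical complementary CDF P(X >= x) of prices per unit area."""
    x = np.sort(np.asarray(prices, dtype=float))
    return x, 1.0 - np.arange(len(x)) / len(x)

def body_tail_model(x, x0, lam, alpha, xc):
    """Exponential body exp(-lam*(x - x0)) for x < xc, crossing over to
    a power-law tail ~ x^(-alpha), matched to be continuous at xc."""
    x = np.asarray(x, dtype=float)
    body = np.exp(-lam * (x - x0))
    scale = np.exp(-lam * (xc - x0))             # value at the crossover
    tail = scale * (x / xc) ** (-alpha)
    return np.where(x < xc, body, tail)
```

Dragon kings would then show up as a bulge of the empirical CCDF above this equilibrium form near the crossover point xc.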

Socio-Ecology  (S) Session 2

Time and Date: 10:45 - 12:45 on 22nd Sep 2016

Room: P - Keurzaal

Chair: Sander Bais

580 A cross-scale framework for analyzing ecosystem services [abstract]
Abstract: Social-ecological systems are prototypical complex adaptive systems. Ecosystem services such as water purification, atmospheric regulation, and food production emerge from the interactions of ecological components occurring at smaller scales. At any focal scale, the diversity of ecosystem services available for production is constrained by the number of unique combinations of component aggregation patterns. For example, a patch can be managed to support high rates of denitrification under wetland cover or high outputs of food production as intensive corn cover, but not both simultaneously in time. At a larger time scale, or in an ecosystem with multiple patches, both types of ecosystem services can be accommodated. At increasingly smaller scales, however, the opposite is true. Social processes, such as management interventions intended to optimize certain ecosystem service process rates, interact with ecological components by changing patterns of aggregation and potentially leading to the development of cross-scale feedbacks. These cross-scale feedbacks can eventually contribute to the loss of relationships among ecological components, including those that support the desired ecosystem service. Our complex systems approach to analyzing ecosystem services provides a way to examine ecosystem service tradeoffs at multiple focal scales and potential cross scale interactions that could result in unexpected, non-linear system behavior. Specifically, we show how a surprising, sudden loss of ecosystem services can emerge from the interactions between management decisions and ecological components, and provide a framework for avoiding these losses.
Hannah E Birge, Craig R Allen, Ahjond S Garmestani and Kevin L Pope
170 Aging and percolation dynamics in a Non-Poissonian temporal network model [abstract]
Abstract: In the study of complex systems, one of the main assets of statistical physics consists in the postulation of simple models capable of reproducing one given relevant property of the system under consideration. This approach simplifies the study by focusing on the property under scrutiny, independently of other complicating factors. In the case of static complex networks, the configuration model fulfills this role. In the field of temporal networks, the non-Poissonian activity driven (NoPAD) model fills this niche, providing a simple model characterized by an arbitrary inter-event time distribution, which can assume any form, in particular that dictated by empirical evidence. In this paper, we present a detailed mathematical study of the properties of the time-integrated networks emerging from the dynamics of the NoPAD model. We focus on two main issues: the topological properties of the integrated networks, and their percolation behavior, as determined by the time Tp at which a giant connected component, spanning a finite fraction of the total number of nodes, first emerges. These two properties are determined as functions of the model’s parameters, namely the exponent of the distribution of the waiting time between two consecutive activations of an agent, and the exponent of the agents’ heterogeneity distribution. The topological properties and the percolation dynamics also depend on the time window of the integration process [ta , ta + t], and are determined by applying a mapping of the network’s construction algorithm to the hidden variables class of models. The NoPAD model represents a minimal model of temporal networks with long-tailed inter-event time distributions. As such, it has a wide potential to serve as a synthetic controlled environment to check both numerically and analytically several properties of these networks, and in particular their effect on dynamical processes.
Antoine Moinet, Romualdo Pastor-Satorras and Michele Starnini
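A NoPAD-style generator is easy to sketch: each agent activates after power-law waiting times, rescaled by a heterogeneous activity, and links to a random partner; integrating over a window yields the networks whose topology and percolation the abstract studies. The distributional details below (Pareto forms with minimum 1, uniform partner choice) and all names are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def nopad_integrated_network(n, gamma_wait, eta, t_window, seed=0):
    """Edge set of a NoPAD-style temporal network integrated over
    [0, t_window], with power-law inter-event times (exponent
    gamma_wait) and Pareto-heterogeneous activities (exponent eta)."""
    rng = np.random.default_rng(seed)
    # Heterogeneous activities a_i >= 1 via inverse-transform sampling.
    a = (1.0 - rng.random(n)) ** (-1.0 / (eta - 1.0))
    edges = set()
    for i in range(n):
        t = 0.0
        while True:
            # Pareto waiting time (minimum 1), sped up by activity a_i.
            tau = (1.0 - rng.random()) ** (-1.0 / (gamma_wait - 1.0)) / a[i]
            t += tau
            if t > t_window:
                break
            j = int(rng.integers(n - 1))
            j = j + (j >= i)                     # random partner j != i
            edges.add((min(i, j), max(i, j)))
    return edges
```

Growing t_window and checking the size of the largest connected component of the integrated edge set gives a numerical handle on the percolation time Tp.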
102 On the trade-off between CO2 emission reduction and negative emissions for getting back to 350 ppm in 2100 [abstract]
Abstract: Here, we analyze the adaptive climate policies that comply with the planetary boundary (Rockström et al., 2009) on climate change in 2100 - recovering a CO2 concentration of 350ppm by 2100 – and the policy implications in terms of CO2 emission reductions and of the implementation of geoengineering technologies (negative emissions) under a budget constraint. For this purpose, we couple viability theory and the DICE model to assess the set of these adaptive climate policies, and we analyze the trade-off between increasing CO2 emission reduction and implementing new geoengineering technologies yielding negative emissions. Results show that the objective of 350ppm in 2100 is reached only with carbon neutrality and the effective implementation of innovative geoengineering technologies (10% of negative emissions) before 2060, under the assumption of getting out of the baseline scenario without delay. This trade-off is then analyzed according to the costs involved, in terms of abatement costs and investment in new technologies. The talk will present the main processes of the DICE model as well as viability theory before discussing the main results in terms of adaptive climate policies associated with abatement and investment costs. Reference Rockström J, Steffen W, Noone K, Persson A, Chapin FSIII, Lambin E, Lenton TM, Scheffer M, Folke C, Schellnhuber HJ, Nykvist B, De Wit CA, Hughes T, Van der Leeuw S, Rodhe H, Sörlin S, Snyder PK, Costanza R, Svedin U, Falkenmark M, Karlberg L, Corell RW, Fabry VJ, Hansen J, Walker B, Liverman D, Richardson K, Crutzen P, Foley J (2009) Planetary boundaries: Exploring the safe operating space for humanity. Ecology and Society 14(2):32.
Jean-Denis Mathias, John Marty Anderies and Marco Janssen
431 Dynamic control of social diffusions using extensions of the SIS model [abstract]
Abstract: Diffusion processes model propagation phenomena on complex networks, such as epidemics, information diffusion, and viral marketing. In many situations, it is critical to suppress an undesired diffusion process by means of dynamic resource allocation, where one needs to decide on targeted actions by taking into account the evolving infection state of the network. In the context of continuous-time SIS, with full information provided regarding the nodes’ state, we consider the scenario where a budget of treatment resources of limited efficiency is available at each time for distribution to infected nodes. Recent results on this particular problem include the Priority Planning approach, which computes a linear ordering of the nodes with minimal maxcut, and the optimal greedy approach called Largest Reduction of Infectious Edges (LRIE). The latter is a simple, yet efficient, strategy that computes an intuitive priority score for the infected nodes which combines the notion of node virality (possibility to infect other nodes) and vulnerability (possibility to get reinfected after recovery). In this work we show that the principle of the LRIE score holds for a wide range of SIS-like modeling scenarios. More specifically, we propose the Generalized LRIE (gLRIE) strategy and study dynamic diffusion control by introducing a two-fold extension to SIS which can model important aspects of social diffusion (e.g. behaviors or habits). The first extension considers nonlinear functions of infection rates with saturation. On top of that, our second extension considers competition, in the sense that the two node states, the infected and the healthy, are both diffusive, though each node can be in only one of them at a time. In this case, our gLRIE control strategy has to align with the healthy diffusion to help it win the competition. Finally, simulations on large-scale real and synthetic networks show the efficiency of gLRIE.
Argyris Kalogeratos, Stefano Sarao, Kevin Scaman and Nicolas Vayatis
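The LRIE priority score described above has a compact form: healing an infected node removes its infectious edges towards susceptible neighbours (virality) but leaves the node exposed to its infected neighbours (vulnerability), so the net reduction of infectious edges is their difference. A minimal sketch of the score and the greedy budgeted allocation, with names of our own choosing:

```python
import numpy as np

def lrie_scores(adj, infected):
    """LRIE-style score for each infected node: (#susceptible
    neighbours) - (#infected neighbours); -inf for healthy nodes."""
    adj = np.asarray(adj)
    infected = np.asarray(infected, dtype=bool)
    sus_neigh = adj @ (~infected)                # virality term
    inf_neigh = adj @ infected                   # vulnerability term
    return np.where(infected, sus_neigh - inf_neigh, -np.inf)

def allocate_treatments(adj, infected, budget):
    """Greedily give the budget of treatments to the infected nodes
    with the highest LRIE score."""
    scores = lrie_scores(adj, infected)
    order = np.argsort(-scores)                  # best scores first
    return [int(i) for i in order[:budget] if np.isfinite(scores[i])]
```

The gLRIE extensions would replace the plain edge counts with the corresponding nonlinear or competitive rate terms, which the abstract indicates preserve the same scoring principle.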
339 Dynamical model of Middle East Respiratory Syndrome spread: uncovering ecological and behavioral drivers of propagation of an emerging disease [abstract]
Abstract: MERS coronavirus emerged in the Arabian Peninsula in 2012, raising great concern for its severity, its international spread, and the many uncertainties characterizing its transmission and ecology. During the first three years after its emergence, the epidemic in the source area showed strong spatiotemporal heterogeneities emerging from the interplay between the zoonotic and the human-to-human transmission routes. Episodes of virus importation into foreign countries had highly variable outcomes, with little if any transmission following importation, except for a large outbreak in South Korea that raised worldwide alarm. We aimed at understanding the mechanisms underlying this complex dynamics. We studied the spread of MERS in the Middle East by means of an integrative approach combining dynamical modeling at different spatial scales – regional and international. The resulting multi-scale framework allowed us to extract maximal information from the sparse and diverse epidemiological records, thus increasing inference power. It provided estimates of epidemiological parameters and their spatiotemporal variation, showing that human-to-human transmission is more important than expected for the generation of cases, while the observed geographical structure is induced by variations in the zoonotic source. To understand the drivers of global dissemination we modeled imported cases and onward transmission using detailed information on air travel, along with digital proxies for collective and public health awareness (e.g. Google Trends records). We showed that the structure and dynamics of the air-transportation network shape the spatiotemporal pattern of MERS propagation, and we quantified the effect of collective attention on the epidemic response, observing that high collective attention is associated with more rapid isolation of imported cases.
The study demonstrates the power of dynamical models in interpreting limited epidemiological records in light of the extensive socio-demographic and behavioral information available. Such models can thus address fundamental questions regarding the spread of emerging diseases, the underlying biological mechanisms and the role of human response.
Chiara Poletto, Pierre-Yves Boëlle and Vittoria Colizza