Financial Networks and Policy Applications from Systemic Risk to Sustainability  (FNPA) Session 1

Time and Date: 14:15 - 18:00 on 21st Sep 2016

Room: A - Administratiezaal

Chair: Stefano Battiston

4019 Introduction - "Policy Applications of Financial Networks" Stefano Battiston, FINEXUS - Univ. of Zurich
4020 Contributed Ignite Talks Session (see Satellite webpage http://www.dolfinsproject.eu/index.php/ccs16)
4021 t.b.a. Marten Scheffer, Wageningen University and Research Centre, Netherlands
4022 Bubbles and crashes in large group asset market experiments Cars Hommes, CeNDEF - Univ. of Amsterdam
4023 New Metrics for Economic Complexity: Measuring the Intangible Growth Potential of Countries Luciano Pietronero, University of Rome Sapienza and Institute of Complex Systems, ISC-CNR, Rome, Italy
4024 t.b.a. Doyne Farmer, Oxford Univ. and Institute for New Economic Thinking, UK
4025 Panel Discussion - "Policy Applications of Global System Science: From Systemic Risk to Sustainability." Moderator: Stefano Battiston. Panelists: Marten Scheffer, Cars Hommes, Luciano Pietronero, Doyne Farmer

UrbanNet 2016: Smart Cities, Complexity and Urban Networks  (U2SC) Session 2

Time and Date: 14:15 - 18:00 on 21st Sep 2016

Room: B - Berlage zaal

Chair: Oliva Garcia Cantu / Fabio Lamanna

14008 Electric vehicle charging as complex adaptive system - information geometric approach [abstract]
Abstract: In all major cities in the Netherlands, charging points for electric vehicles seem to spring up like mushrooms. In the city of Amsterdam alone, for example, there were 231 charging points by the end of 2012 compared with 1,185 today, and roughly two new charging stations are added every week. Over the same period, the average number of charging sessions per week went up from 550 to 8,000. All charging sessions in the Netherlands are recorded by the service providers, and those from Amsterdam, Rotterdam, Utrecht, The Hague and the provinces of North Holland, Flevoland and Utrecht are made available for research through the respective municipalities to the Urban Technology research program at the University of Applied Sciences Amsterdam (http://www.idolaad.nl). The dataset of charging sessions, which is the largest of its kind in the world, currently holds more than 3.3 million records containing information about duration, location and a unique identifier of the users [1]. The tremendous growth in electric vehicle adoption, in combination with the existence of this large and rich dataset, creates a unique opportunity to study many aspects of electric mobility and infrastructure in the context of complex social systems. The question we focus on is the following: if we consider the e-mobility system as complex and adaptive, what is its phase structure? Are there regime changes in the system? And could we define distinct states of the dynamics of the system at hand? The framework in which we study these questions is that of information geometry [2]. To construct the framework we first define observables of interest from the data. We then estimate the probability distributions of these observables, as a function of time or other parameters of the system. As the system evolves, the shape of the probability distributions might change. We say that a regime shift has occurred when a large and persistent change in the probability distributions has happened. To define a large change in the probability distribution we use the Fisher information [3]. Our approach is based on an analogy with the theory of phase transitions in statistical physics, especially second-order or "critical" transitions. In statistical physics one can study the information geometry of the Gibbs distribution and show that at second-order phase transitions and on the spinodal curve the curvature of the statistical manifold diverges [4]. Taking it a step further, Prokopenko et al. showed that one can use the Fisher information matrix directly as an order parameter [5]. Following these results, a maximum of the Fisher information matrix is used as a definition of criticality in complex systems, e.g. in [6]. The application of our approach is particularly challenging in the charging infrastructure system since 1) it is an open system (the number of users and charging points changes over time), and 2) it is an irreversible system (the municipalities gain experience in deploying charging points, the users optimize their usage of the charging point infrastructure, and policies and user support systems change). All this indicates that there is no straightforward notion of phase space for this system which would allow a Gibbs-like distribution to be defined.
Our previous work, which applied this framework to a non-linear reaction-diffusion system (the Gray-Scott model), is encouraging, since there too we were able to detect regime changes based on a macroscopic distribution of observables, independent of the microscopic dynamics of the system [7]. These challenges, however, are typical of complex adaptive social systems, and therefore finding a satisfactory solution to them might allow for a generalization of the method to different social systems. In the talk we will present the results of pursuing this line of investigation. We will discuss the different observables we tried and the insights we gained into the system from our work. Understanding the phase structure of electric vehicle charging, and hence the dynamics of charging, can have large implications for our understanding of the dynamics of neighborhoods, for planning and policy implementation, and for the study of urban science in general.
Omri Har-Shemesh
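A minimal sketch of the regime-detection idea described above, assuming only that the raw data can be reduced to windowed samples of one observable: the distribution of the observable is estimated in consecutive time windows and a discrete Fisher-information proxy is computed between windows, whose peaks flag candidate regime shifts. Data, window choice and binning are illustrative, not those of the study.

```python
import numpy as np

def fisher_information_series(samples_per_window, bins=30):
    """Discrete proxy for the Fisher information along a time-ordered
    sequence of sample windows: F(t) ~ 4 * sum_x (d sqrt(p(x|t)) / dt)^2.
    Peaks in F flag candidate regime shifts."""
    # Common binning so distributions from different windows are comparable.
    all_values = np.concatenate(samples_per_window)
    edges = np.linspace(all_values.min(), all_values.max(), bins + 1)
    sqrt_p = []
    for window in samples_per_window:
        p, _ = np.histogram(window, bins=edges)
        sqrt_p.append(np.sqrt(p / p.sum()))
    sqrt_p = np.array(sqrt_p)
    # Finite-difference derivative of sqrt(p) between consecutive windows.
    dsqrt = np.diff(sqrt_p, axis=0)
    return 4.0 * (dsqrt ** 2).sum(axis=1)

# Example: weekly windows of charging-session durations (synthetic stand-in).
rng = np.random.default_rng(0)
weeks = [rng.gamma(2.0, 2.0 + 0.05 * t, size=500) for t in range(40)]
F = fisher_information_series(weeks)
print("week of largest distributional change:", F.argmax() + 1)
```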
14009 Residential Flows and the Stagnation of Social Mobility with the Economic Recession of 2008 [abstract]
Abstract: The movement of people within a city is a driver for the growth, development, and culture of the city. Understanding such movements in more detail is important for a range of diverse issues, including the spread of diseases, city planning, traffic engineering and now-casting economic well-being [1, 3]. Residential environment characteristics have been shown to be strongly associated with changes in individual socioeconomic status, making residential relocation a potential determinant of social mobility [2]. Examining residential mobility flows therefore offers an opportunity to better understand the determinants of social mobility. Using a novel dataset recording the movement of people within the city of Madrid (Spain) over a period of 10 years (2004-2014), we studied how residential flows changed during the economic recession of 2008. Here we present preliminary results from these investigations. In particular, we found that the crisis had a profound impact on social change, reducing social mobility within the city as a whole and thus leading to a "social stagnation" phenomenon. Methods: We used data from a continuous administrative census of the entire Spanish population (the "Padrón") that includes universal information on all residential relocations. Using this data, we can assess mobility within, into and out of the city of Madrid, stratified by age, education and country of origin. For analyses involving property value and unemployment, the granularity of our analysis is at the level of the neighborhood (~20,000 people each, n = 128 in Madrid). For all other analyses, our granularity is at the level of the census section (~1,500 people each, n ≈ 2,400 in Madrid), providing a very fine-grained perspective on the residential flows within the city. To examine changes in residential mobility flows, we categorized these as follows: any mobility (any change of residential location), mobility within the city of Madrid, and mobility within the city but to a different area. We further divided this last type of flow into upward (from poorer to richer) or downward (from richer to poorer) mobility. Figure 1 (left) shows an example of the geographical delineations and the associated residential mobility flows. Figure 1: (Left) A data overlay of a section of Madrid. Red outlines correspond to neighborhoods, colored by quintile of property value for 2004 (red areas indicate the highest property value quintile). Black outlines correspond to census sections, and arrows represent residential mobility flows. In particular, white arrows indicate movement to areas of higher property value, black to lower, and blue to areas of equal value. (Right) The total movement (in-flow + out-flow) within each census section for the year 2004. Red areas indicate high residential flows. Figure 2: (Left) Time series of social mobility (average change in quintile of property value of all movers in the neighborhood, where a positive number represents upward mobility and 0 represents no social mobility) in the six neighborhoods with the highest change in social mobility from 2005 to 2014. (Right) Unemployment time series in all neighborhoods of Madrid, with thicker lines for the six neighborhoods pictured in the left panel.
Results: We find that residential mobility peaked in 2007-2008, especially due to the contribution of incoming flows to Northern and Southeastern Madrid. A centrality-based analysis of the residential mobility network reveals the intensity of change in the downtown area (Centro) of Madrid (Figure 1, Right). We further assessed the effect of the 2008 financial crisis on residential mobility flows, showing that neighborhoods at the lower end of the socioeconomic spectrum and those that had changed the most during the housing boom of the 2000s were the most affected by the recession (Figure 2, Right). In particular, these neighborhoods showed a decrease in social mobility associated with residential relocation, with a decreasing proportion of people in poorer areas relocating to neighborhoods with a higher property value (Figure 2, Left). Moreover, there was also a decreasing proportion of people in richer areas relocating to neighborhoods with a lower property value. This lack of upward mobility (from poorer areas) and downward mobility (from richer areas) led to a stagnation of residential mobility in the aftermath of the recession. Discussion: A combination of fine-grained relocation, socioeconomic and property value data has allowed us to detect communities with increased mobility flows, as well as areas of relative residential stability or stagnation. It has further allowed us to explore changes with the economic recession. Our finding that social mobility at the neighborhood level has stagnated is consistent with previous findings of increased economic segregation concurrent with the economic recession of 2008 [4].
Usama Bilal
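A sketch of the upward/downward classification described in the Methods, on a hypothetical relocation table (the column names are invented for illustration): each move is scored by the property-value quintile of its destination minus that of its origin, and the yearly mean of this score is the social-mobility signal whose flattening toward zero indicates stagnation.

```python
import pandas as pd

# Hypothetical schema: one row per relocation, with the property-value
# quintile (1 = poorest, 5 = richest) of the origin and destination areas.
moves = pd.DataFrame({
    "year":     [2005, 2005, 2009, 2009, 2013, 2013],
    "q_origin": [1, 2, 2, 5, 3, 4],
    "q_dest":   [3, 4, 2, 4, 3, 4],
})

moves["delta_q"] = moves["q_dest"] - moves["q_origin"]
moves["direction"] = pd.cut(moves["delta_q"], bins=[-5, -1, 0, 5],
                            labels=["downward", "lateral", "upward"])

# Mean quintile change per year: positive means net upward mobility;
# values shrinking toward 0 over time indicate "social stagnation".
print(moves.groupby("year")["delta_q"].mean())
```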
14010 title to be confirmed (invited talk) Filippo Simini
14011 Smart Street Sensor [abstract]
Abstract: Urban street structures are a snapshot of human mobility and resources, and are an important medium for facilitating human interaction. Previous studies have analyzed the topology and morphology of street structures in various ways: fractal patterns [1], complex spatial networks [2] and so on. From a functional perspective, it is important to discuss how street networks are used by people. There are also studies analyzing efficiency [3], accessibility [4] and road usage [5] in street networks. In those studies, the researchers investigated either empirical travel routes or theoretical travel routes to understand the functionality of the street network. A travel route is a path within the network selected by people or selected under a given condition. Since the determination of a travel route is directly influenced by travel demand and the spatial pattern of the city, including the street network and land-use formation, a selected route is a good way to capture complex interactions among factors which are often hidden. For instance, fastest routes estimate the possible distribution of traffic as well as the street structure in a city. In this study, we analyze the geometric properties of routes to understand the street network, taking into account its hierarchical properties and traffic conditions. Although many studies discuss the efficiency of a route or a street network, few investigate the geometry of a route [6] or study how individual routes are intrinsic to the city structure. Two cities with similar efficiency can have different geometries of congestion and traffic patterns [7]. Therefore, understanding the geometric features of routes can link the existing knowledge of routes and the structure of the urban street network. We especially focus on how much a route is skewed toward the city center by measuring a new metric, inness. The inness I of a route is defined as the difference between the inner travel area P_inner and the outer travel area P_outer, i.e. I = P_inner − P_outer. The areas are defined after a route is divided into an inner and an outer part by the straight line connecting the origin and destination, as described in Fig. 1. We measured the inness of the collected optimal routes within a 30 km radius from the center for 100 global cities, including NYC, London, Delhi and so on. In these cities, we identified two competing forces acting against each other. Due to the agglomeration of businesses and people, street networks grow denser around the center to meet demand, attracting traffic toward the interior of the city. On the other hand, many cities deploy arterial roads outside of the city to help disperse congestion at the urban core. The arterial roads act as the opposing force, pushing traffic toward the exterior of the city. This tendency is well captured by our suggested metric. We analyze two types of optimal routes, minimizing either travel time or distance. While the shortest routes reveal the bare geometric structure of the roads, the fastest routes show the geometry in which the road hierarchy is reflected. We systematically select origins and destinations with different bearings and different radii from the center. Then, we collect the optimal routes for the O-D pairs via the OpenStreetMap API. Our results consist of two parts. We first compare the general average inness of both the shortest and fastest routes of the 100 global cities in order to point out their fundamental differences.
Second, we analyze the inness patterns of individual cities and discuss the street layout and the effects of street hierarchy in each city.
Balamurugan Soundararaj
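The inness computation lends itself to a compact geometric implementation. The sketch below, under the assumption that a route is given as a planar polyline, computes I = P_inner − P_outer via the shoelace formula on the closed loop formed by the route and the O-D chord; crossings of the chord are handled automatically because enclosed pieces carry alternating signs.

```python
import numpy as np

def inness(route, center):
    """Inness I = P_inner - P_outer of a route (an (n, 2) polyline from
    origin O to destination D). The straight O-D chord splits the plane;
    the 'inner' side is the one containing the city center."""
    route = np.asarray(route, dtype=float)
    O, D = route[0], route[-1]
    # Shoelace area of the closed loop (route out, chord back): positive
    # when the route bulges to the right of the directed chord O -> D.
    loop = np.vstack([route, O[None, :]])
    x, y = loop[:, 0], loop[:, 1]
    signed_area = 0.5 * np.sum(x[:-1] * y[1:] - x[1:] * y[:-1])
    # Which side of O -> D is the center on? (z of the 2D cross product)
    cx, cy = np.asarray(center, dtype=float) - O
    side = (D - O)[0] * cy - (D - O)[1] * cx   # > 0: center on the left
    return signed_area if side < 0 else -signed_area

# Toy check: a route that detours toward the center has positive inness.
print(inness([[-10, 5], [0, 1], [10, 5]], center=[0, 0]))  # > 0
```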
14012 A Retail Location Choice Model: Measuring the Role of Agglomeration in Retail Activity [abstract]
Abstract: The objective of our work is to build a consumer choice model, where consumers choose their retail destinations based only on retailers' floorspace and their agglomeration with others. In other words, at a very aggregated level, the goal is to describe a retailer's success with a model which only takes into account its position and its floorspace. We define the attractiveness of a retailer r as A_r = f_r^α + Σ_{r'} f_{r'}^α e^(−ε d_{rr'}) (1), where f_r is the retailer's floorspace and d_{rr'} is the distance between r and some other retail unit r'. Eq. (1) states that the composite perceived utility A_r that a consumer attaches to a particular retailer r is equal to its individual utility, quantified by its floorspace f_r^α, plus the utility of the shops in its vicinity. In Eq. (1), α controls the extent of the internal economies of scale and ε that of the external ones. If α > 1, the relationship between the consumer-perceived utility of a shop and its size is super-linear and the economies of scale are positive, meaning that a retailer benefits from larger floorspace. Similarly, low values of ε, which translate into a slow decay, imply a strong dependency on the vicinity of other attractive neighbours, and vice versa. Exploiting Eq. (1) we define the probability of consumer i shopping in r as p_{i→r} = A_r e^(−β C(d_{ir}, γ)) / Σ_{r'} A_{r'} e^(−β C(d_{ir'}, γ)) (2), where C(d_{ir}, γ) is the cost function of travelling from i to r, and γ and β are two parameters. Eq. (2) has been formulated using random utility theory and, as one can see in the proposed cross-nested logit model, consumers prefer to shop at larger shops (internal economies of scale) and at locations with a higher concentration of retail activity (external economies of scale). In this work we have considered two types of trips, namely work-to-retail and home-to-retail. The model is therefore defined by six different parameters: two describing the attractiveness of retailers through their internal and external economies (α, ε), and two for each kind of trip describing the cost function, (γ_h, β_h) and (γ_w, β_w). The total modelled turnover is therefore of the form Y_r = Y_r^w + Y_r^h = Σ_l [ n_l^w p^w_{l→r} + n_l^h p^h_{l→r} ] (3). The γ and β have been calibrated using the LTDS dataset, a survey that includes 5,004 home-to-retail and 2,242 work-to-retail trips. Having completed the calibration of the distance profiles, we can calculate the modelled turnover estimates of Eq. (3) for each retailer r for a set of (α, ε) parameters. This tells us the modelled fraction of the population that ends up shopping in each retailer, given their attractiveness and distance. Following this, we calculate the level of correlation between the modelled turnovers and the observed floorspace rents. For each retailer r, we use the VOA rateable value as an indicator of willingness to pay for floorspace f_r; the rateable value is considered a very good indicator of the property value of the respective hereditament. Figure 1: (a) Correlations, (b) Scatter plot. The model yields high correlations with the VOA dataset's rents. The left panel shows the correlation between the expected turnover Y_r(α, ε)/f_r and the Rateable Value / Size found in the dataset, with maximum C_max ≈ C(α = 1.3, ε = 0.008). These values are in agreement with a superlinear scaling in floorspace and with the observed retail agglomeration. The right panel presents a scatter plot of the two quantities.
In Fig. 1 we compare the results of the model with rent data coming from the VOA. In Fig. 1a we can see that the maximum correlation between the modelled and real rents per square metre is given by the set of parameters (α_max = 1.3, ε_max = 0.008). The α value is in line with the super-linear scaling of floorspace and expected earnings and seems very realistic, while the ε value indicates a benefit in the agglomeration of retail activities (the sign is positive) and shows that the vicinity of other retail activity plays a non-negligible role in defining attractiveness.
Duccio Piovani
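Eqs. (1) and (2) translate almost directly into array code. The sketch below uses the calibrated values (α = 1.3, ε = 0.008) quoted above; the synthetic floorspaces, coordinates and the simple cost C(d) = d are illustrative stand-ins for the real data and the calibrated cost function.

```python
import numpy as np

def attractiveness(f, d, alpha=1.3, eps=0.008):
    """Eq. (1): A_r = f_r^alpha + sum_{r'} f_{r'}^alpha * exp(-eps * d_{rr'}).
    f: floorspace per retailer (n,); d: pairwise distance matrix (n, n)."""
    fa = f ** alpha
    kernel = np.exp(-eps * d)
    np.fill_diagonal(kernel, 0.0)          # exclude the retailer itself
    return fa + kernel @ fa

def choice_probabilities(A, cost, beta=1.0):
    """Eq. (2): p_{i->r} proportional to A_r * exp(-beta * C(d_ir))."""
    u = A * np.exp(-beta * cost)           # cost: (n_consumers, n_retailers)
    return u / u.sum(axis=1, keepdims=True)

# Toy example: 20 retailers scattered over a 1 km square.
rng = np.random.default_rng(1)
f = rng.uniform(50, 500, size=20)          # floorspace, m^2
xy = rng.uniform(0, 1000, size=(20, 2))    # retailer coordinates, m
d = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
A = attractiveness(f, d)
p = choice_probabilities(A, cost=d[:5])    # consumers at the first 5 sites
print(p.sum(axis=1))                       # each row sums to 1
```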
14013 Revealing patterns in human spending behavior [abstract]
Abstract: In the last decade, big data originating from human activities has given us the opportunity to analyze individual and collective behavior with unprecedented detail. These approaches are radically changing the way in which we can conceive social studies via complex-systems methods. Large data, passively collected from mobile phones or social media, have informed us about social interactions in space and time [1], helping us to understand the laws that govern human mobility [2-4] or to predict wealth in geographic areas [5]. More recently, data from Credit Card Shopping Records (CCSR) have also been explored, providing new insights on human economic activities. Ref. [6] has shown that a fingerprint exists in the sequence of individual payment activities which permits users to be identified with only a few of their records. Shoppers' spending behaviors and visitation patterns are closely related to urban mobility [7]. Both mobility decisions and expenditure behavior are subject to urban and geographical constraints [8] and to economic and demographic conditions [9, 10]. Further understanding consumer behavior is valuable for modeling market dynamics and for depicting the differences between income groups [11]. In particular, CCSRs have the potential to transform how we conceive the study of social inequality and human behavior within the geographic and socio-economic constraints of cities. Here we present a novel method to exploit CCSRs to provide new insights into the characterization of human spending patterns and how these are related to sociodemographic attributes. We analyze the CCSRs of approximately 150,000 users over a period of 10 weeks. The dataset is anonymized, and for each user the following demographic information is provided: age, gender, zipcode. For all users we have the chronological sequence of their transaction history with the associated shop typology according to the Merchant Category Codes (MCC) [12]. Our analysis of the aggregated CCSR data reveals that the majority of shoppers adopt credit card payment for twelve types of transactions among the hundreds of possible MCCs. These are: grocery stores, eating places, toll roads, information services, food stores, gas stations, department stores, telecommunication services, ATM use, taxis, fast food restaurants, and computer software stores. These transaction activities are depicted as icons in Fig. 1. Interestingly, the temporal sequences in which these transactions occur differ among individuals. First, we identify the dominant sequences of transactions for each user using the SEQUITUR algorithm [13]. Then we evaluate the significance level of each sequence by calculating its z-score with respect to the sequences computed from 100 randomized sequences that preserve the number of transactions per type. Each sequence of transactions defines a path in the space of the transaction codes. We define the User Transaction Network (UTN) connecting the codes of the most statistically significant sequences (with z-score > 2), preserving their order. We compute the matrix of user similarity (Fig. 1, lower left) by calculating the Jaccard index between all users with at least 3 links in their UTN. Applying the Louvain method [14] for community detection, we are able to group users according to their most significant sequences of payments. Fig. 1 shows our results for the six different behavioral groups detected, with each cluster ordered in appearance from 1 to 6 in the matrix of user similarity.
The upper part of the figure describes the most common sequences of transactions for each group; the link value with its error represents the probability for a user of the group to follow that particular transaction order, and the value in parentheses gives the fraction of users in the group that perform that transaction sequence. The bottom part shows the demographic attributes of each group, with the average population in red. In summary, we have uncovered lifestyle groups in the transaction history of the CCSR data that relate to non-trivial demographic groups. We will discuss future applications of these lifestyle clusters in the context of the adoption of innovations in the city.
Riccardo Di Clemente
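A simplified sketch of the significance test described above: rather than SEQUITUR-extracted sequences, it scores consecutive transaction pairs (bigrams) against shuffles that preserve the per-type transaction counts, keeping pairs with z > 2 as edges of the User Transaction Network. The transaction history is invented for illustration.

```python
import numpy as np
from collections import Counter
from itertools import pairwise  # Python 3.10+

def significant_transitions(seq, n_shuffles=100, z_thresh=2.0, seed=0):
    """Edges of a User Transaction Network: ordered pairs of consecutive
    transaction codes whose frequency is significantly higher (z > z_thresh)
    than in shuffles preserving the per-code counts."""
    rng = np.random.default_rng(seed)
    observed = Counter(pairwise(seq))
    null = {pair: [] for pair in observed}
    shuffled = list(seq)
    for _ in range(n_shuffles):
        rng.shuffle(shuffled)
        counts = Counter(pairwise(shuffled))
        for pair in null:
            null[pair].append(counts.get(pair, 0))
    edges = {}
    for pair, obs in observed.items():
        mu, sd = np.mean(null[pair]), np.std(null[pair])
        if sd > 0 and (obs - mu) / sd > z_thresh:
            edges[pair] = round((obs - mu) / sd, 2)
    return edges

history = ["grocery", "gas", "grocery", "fastfood", "grocery", "gas",
           "grocery", "fastfood", "grocery", "gas", "grocery", "fastfood"]
print(significant_transitions(history))
```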
14014 The universal dynamics of urbanization (invited talk) Marc Barthelemy
14015 Identifying and tackling Water Leaks in Mexico through Twitter [abstract]
Abstract: As cities become smarter, the amount of data generated daily has become increasingly granular. Sensors, cameras, crowdsourcing, social media sharing, etc., can monitor different aspects of our cities, such as commuter flows, air quality over different time periods or public transport performance. The rise of the "smart city" thus has the potential to throw some light on many fundamental urban problems, and to pave the way to making cities more livable and efficient places. Twitter in particular has attracted a lot of attention in recent years (Ausserhofer & Maireder, 2013) for its richness in content. People not only share personal information with their closest contacts, but use Twitter as a social and political platform to inform and disseminate all sorts of statements or ideas (Weng & Menczer, 2015; Lu & Brelsford, 2014; Piña-García, Gershenson, & Siqueiros-García, 2016). Exploring this type of data is gradually becoming more and more important in terms of data collection. In addition, mining urban social signals can provide quick knowledge of a real-world situation (Roy & Zeng, 2014). It should be noted that the enormous volume of Twitter data has given rise to major computational challenges that sometimes result in the loss of useful information embedded in tweets. More and more people are relying on Twitter for information, and Twitter has been tagged as a strong medium for opinion expression and information dissemination on diverse issues (Adedoyin-Olowe, Gaber, Stahl, & Gomes, 2015). Leveraging large-scale public data from Twitter, we are able to analyze and map the spread of information related to water leaks in the streets, and under the pavements and roads, of Mexico (see Fig. 1). We gathered an initial sample of 2,000 geolocated tweets posted by 1,599 users that contain the Spanish keywords "fuga de agua" (water leak).
Carlos Adolfo Piña García
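A minimal post-collection filter in the spirit of the data-gathering step above, assuming tweets have already been retrieved as records with a text and a coordinate pair (the field names and bounding box are illustrative, not a specific Twitter API schema): it keeps geolocated tweets inside Mexico whose text matches the keyword phrase, accent-insensitively.

```python
import re
import unicodedata

MEXICO_BBOX = (-118.4, 14.5, -86.7, 32.7)   # lon_min, lat_min, lon_max, lat_max

def normalize(text):
    # Strip accents and lowercase so variants like "Fuga de Agua" match too.
    text = unicodedata.normalize("NFKD", text)
    return "".join(c for c in text if not unicodedata.combining(c)).lower()

def is_water_leak_report(tweet):
    lon, lat = tweet["coordinates"]
    in_mexico = (MEXICO_BBOX[0] <= lon <= MEXICO_BBOX[2]
                 and MEXICO_BBOX[1] <= lat <= MEXICO_BBOX[3])
    return bool(in_mexico
                and re.search(r"\bfuga de agua\b", normalize(tweet["text"])))

sample = {"text": "Enorme fuga de agua en la avenida",
          "coordinates": (-99.13, 19.43)}      # Mexico City
print(is_water_leak_report(sample))            # True
```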
14016 Estimating nonlinearity in cities' scaling laws [abstract]
Abstract: The study of statistical and dynamical properties of cities from a complex-systems perspective is increasingly popular [1]. A celebrated result is the scaling between a city-specific observation y (e.g., the number of patents filed in the city) and the population x of the city as [2] y = α x^β (1), with a non-trivial (β ≠ 1) exponent. Super-linear scaling (β > 1) was observed when y quantifies creative or economical outputs and indicates that the concentration of people in large cities leads to an increase in the per-capita production (y/x). Sub-linear scaling (β < 1) was observed when y quantifies resource use and suggests that large cities are more efficient in their per-capita (y/x) consumption. Since its proposal, non-linear scaling has been reported in an impressive variety of different aspects of cities. It has also inspired the proposal of different generative processes to explain its ubiquitous occurrence. Scalings similar to the one in Eq. (1) appear in physical (e.g., phase transitions) and biological (e.g., allometric scaling) systems, suggesting that cities share similarities with these and other complex systems (e.g., fractals). More recent results cast doubt on the significance of the β ≠ 1 observations [3, 4, 5]. These results call for a more careful statistical analysis that rigorously quantifies the evidence for β ≠ 1 in different datasets. We propose a statistical framework based on a probabilistic formulation of the scaling law (1) that allows us to perform hypothesis testing and model comparison. In particular, we quantify the evidence in favor of β ≠ 1 by comparing (through the Bayesian Information Criterion, BIC) models with β ≠ 1 to models with β = 1. The scaling relation in Eq. (1) describes a relation between two quantities y and x. However, the empirical data indicate that this relation can only be fulfilled on average. The statistical analysis we propose is based on the likelihood L of the data being generated by different models. Following Ref. [6], we assume that the index y (e.g., number of patents) of a city of size x is a random variable with probability density P(y | x). We interpret Eq. (1) as the scaling of the expectation of y with x: E(y|x) = α x^β (2). This relation does not specify the shape of P(y | x); e.g., it does not specify how the fluctuations V(y|x) ≡ E(y²|x) − E(y|x)² of y around E(y|x) scale with x. Here we are interested in models P(y | x) satisfying V(y|x) = δ E(y|x)^γ (3). This choice corresponds to Taylor's law. It is motivated by its ubiquitous appearance in complex systems, where typically γ ∈ [1, 2], and by previous analyses of city data which reported non-trivial fluctuations. The fluctuations in our models aim to effectively describe the combination of different effects, such as the variability in human activity and imprecisions in data gathering. In principle, these effects can be explicitly included in our framework by considering distinct models for each of them. We specify different models P(y | x) compatible with Eqs. (2, 3). City models are the ones where we assume that each data point y_i is an independent realization from the conditional distribution P(y|x_i), effectively giving each city the same weight when computing the BIC of the model. For this model, we considered two different types of fluctuations, one Gaussian and the other lognormally distributed, thus choosing a priori a parametric form for P(y | x). Person models are based on the natural interpretation of Eq. (1) that people's efficiency (or consumption) scales with the size of the city they are living in. This motivates us to consider a generative process in which tokens (e.g., a patent, a dollar of GDP, a mile of road) are produced or consumed by (assigned to) individual persons, which leads to a P(y | x) that effectively weights the observations in terms of people. Figure 1: Comparison of the city and person models. (A) Reported deaths by AIDS in Brazil with respect to city population (dots). The lines represent the estimated scaling law giving the same weight to each city (city model, β = 0.61) and giving the same weight to each person (person model), together with a running mean. (B) Cumulative distribution of the heavy-tailed distribution of city sizes in terms of cities and persons, i.e. the fraction of (i) cities of size ≤ x (city model) and (ii) the population in cities of size ≤ x (80% of the cities; 75% of the population). We apply this approach to 15 datasets of cities from 5 regions and find that the conclusions regarding β vary dramatically, not only depending on the dataset but also on assumptions of the models that go beyond Eq. (1). We argue that the estimation of β is challenging and depends sensitively on the model because of the following two statistical properties of cities: (i) the distribution of city populations has heavy tails (Zipf's law); (ii) there are large and heterogeneous fluctuations of y as a function of x (heteroscedasticity). We found that in most cases the models are rejected by the data and therefore conclusions can only be based on the comparison between the descriptive power of the different models considered here. Moreover, we found that models which differ only in their assumptions on the fluctuations can lead to different estimations of the scaling exponent β. In extreme cases, even the conclusion on whether a city index scales linearly (β = 1) or non-linearly (β ≠ 1) with city population depends on the assumptions on the fluctuations. A further factor contributing to the large variability of β is the broad city-size distribution, which causes models to be dominated either by small or by large cities. In particular, these results show that the usual approach based on least-squares fitting is not sufficient to conclude on the existence of non-linear scaling. Recent works have focused on developing generative models of urban formation that explain non-linear scalings. Our finding that most models are rejected by the data confirms the need for such improved models. The significance of our results on models with different fluctuations is that they show that the estimation of β and the development of generative models cannot be done as separate steps. Instead, it is essential to consider the predicted fluctuations not only in the validation of the model but also in the estimation of β.
José M. Miotto
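A sketch of the model-comparison machinery described above, for the Gaussian city model only and with synthetic data: the likelihood of E(y|x) = α x^β with Taylor-law fluctuations V(y|x) = δ E(y|x)^γ is maximized numerically, and the BIC of the β-free model is compared with the β = 1 model. Parameter values, data and optimizer settings are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, x, y, fix_beta=None):
    """Gaussian city model: E(y|x) = a * x^b, V(y|x) = d * E^g (Taylor's
    law). a and d are log-transformed so they stay positive."""
    if fix_beta is None:
        la, b, ld, g = params
    else:
        la, ld, g = params
        b = fix_beta
    mu = np.exp(la) * x ** b
    var = np.exp(ld) * mu ** g
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (y - mu) ** 2 / var)

def bic(x, y, fix_beta=None):
    p0 = [0.0, 1.0, 0.0, 1.5] if fix_beta is None else [0.0, 0.0, 1.5]
    res = minimize(neg_loglik, p0, args=(x, y, fix_beta),
                   method="Nelder-Mead", options={"maxiter": 5000})
    return len(p0) * np.log(len(x)) + 2 * res.fun   # BIC = k ln n - 2 ln L

rng = np.random.default_rng(2)
x = np.exp(rng.uniform(np.log(1e3), np.log(1e7), 300))   # broad city sizes
y = 0.05 * x ** 1.15 * rng.lognormal(0, 0.3, size=300)   # superlinear data
print("BIC (beta free):", bic(x, y))
print("BIC (beta = 1): ", bic(x, y, fix_beta=1.0))       # typically larger
```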
14017 Estimating Railway Travel Demand Through Social Media Geo-localised Data [abstract]
Abstract: The fundamental four-stage modelling framework in railway planning focuses heavily both on modal choice models and on the assignment of passenger flows over networks. These last steps aim to realize the maximum potential of new transportation policies, in a constant push towards more efficient and ecological modes. In Europe we are witnessing the emergence of several projects that aim to interconnect urban areas within and among countries, both with new or better-performing links and through the development of rolling stock able to interoperate among national networks characterized by different power-supply infrastructures and signalling/security systems and protocols. Linking demand and supply is therefore key to designing, providing and validating better international services that are both reliable and of high quality. Here we develop a new framework to estimate railway travel demand through the detection of a set of geo-localised tweets, posted in the last three years, overlapping railway lines in Europe. We scale the data on potential passengers over a line through the so-called "penetration rate", which estimates the share of the total tweeting population captured by our sample. We compare our data per line with the frequency of services on several railway branches in order to calibrate our flow estimates. Our findings provide information about passenger flows across regions, overcoming current methodologies that generally constrain data within single countries or administrations. The potential of the methodology thus lies in the interoperability of data across countries, giving planners not only a new source of cross-country demand estimation, but also a new tool and dataset for the calibration and validation of transportation demand models.
Fabio Lamanna
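The penetration-rate scaling reduces to a one-line estimate; the sketch below uses invented numbers purely to show the arithmetic.

```python
# Scaling geo-located tweets on a railway line up to a demand estimate via
# the penetration rate (all numbers illustrative, not the study's data).
def estimated_flow(tweets_on_line, users_in_region, population_in_region):
    penetration = users_in_region / population_in_region  # tweeting share
    return tweets_on_line / penetration

# e.g. 1,200 distinct tweeting users observed along a corridor, in a region
# where 40,000 of 2,000,000 inhabitants tweet with geolocation enabled:
print(int(estimated_flow(1_200, 40_000, 2_000_000)))      # -> 60000
```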

Modeling of Disease Contagion Processes  (MDCP) Session 2

Time and Date: 14:15 - 18:00 on 21st Sep 2016

Room: C - Veilingzaal

Chair: Vittoria Colizza

26007 Measuring association between close proximity interactions and pathogen transmission [abstract]
Abstract: Close proximity interactions (CPIs) between individuals, as measured by electronic wearable sensors, have been increasingly used as a proxy for contacts leading to disease transmission. However, there is little evidence that CPIs are indeed a good proxy for transmission, and the issue is difficult to analyze for want of data and because of the different timescales of data collection. Here, we study this issue in a French hospital, where 85,000 CPIs were recorded in a network of 590 participants, both staff and patients, over a 5-month period, jointly with 4,700 pharyngeal swabs for carriage of bacteria (Staphylococcus aureus). We first define a measure of association based on path lengths in dynamic networks and show, using simulations, that it is a statistically powerful approach. Then, we show that paths connecting bacteria carriers to incident carriers have characteristics supporting the use of CPIs as a proxy for contacts leading to transmission. Transmission events are further characterized according to the type and duration of contacts, especially in staff and patients. We conclude that CPIs do inform on the network of contacts responsible for pathogen transmission. We examine the possibility of using such CPIs to inform the detection of incident carriage and to optimize hygiene measures in hospitals.
Pierre-Yves Boelle
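A static-snapshot simplification of the path-length association measure described above (the study uses dynamic networks; here one aggregated contact graph stands in): the mean shortest-path distance from known carriers to incident carriers is compared with a permutation null in which the incident labels are shuffled over nodes. Graph and labels are synthetic.

```python
import networkx as nx
import numpy as np

def carrier_path_association(G, carriers, incident, n_perm=200, seed=0):
    """Compare the mean shortest-path length from existing carriers to
    incident carriers against shuffled 'incident' labels. Observed distances
    well below the null (negative z) support CPIs as a transmission proxy."""
    rng = np.random.default_rng(seed)

    def mean_dist(targets):
        dists = []
        for t in targets:
            d = min((nx.shortest_path_length(G, c, t)
                     for c in carriers if nx.has_path(G, c, t)), default=None)
            if d is not None:
                dists.append(d)
        return np.mean(dists) if dists else np.nan

    observed = mean_dist(incident)
    pool = [n for n in G if n not in carriers]
    null = [mean_dist(rng.choice(pool, size=len(incident), replace=False))
            for _ in range(n_perm)]
    z = (observed - np.mean(null)) / np.std(null)
    return observed, z

G = nx.erdos_renyi_graph(100, 0.05, seed=1)    # stand-in contact network
print(carrier_path_association(G, carriers={0, 1}, incident=[2, 3]))
```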
26008 Modelling H5N1 influenza in Bangladesh across spatial scales: model complexity and zoonotic transmission risk [abstract]
Abstract: Highly pathogenic avian influenza H5N1 remains a persistent public health threat, capable of causing infection in humans with a high mortality rate in addition to negatively impacting the livestock industry. A central question is to determine regions that are likely sources of newly emerging influenza strains with pandemic-causing potential. A suitable candidate is Bangladesh, as one of the most densely populated countries in the world with an intensifying farming system. It is therefore vital to establish the key factors, specific to Bangladesh, that enable both continued transmission within poultry and spillover across the human-animal interface. We apply a modelling framework to H5N1 epidemics in the Dhaka region of Bangladesh occurring from 2007 onwards, which resulted in large outbreaks in the poultry sector and a limited number of confirmed human cases. This model consists of a poultry transmission component and a zoonotic transmission component. Utilising spatial and population information on poultry farms, a set of competing nested models of varying complexity was fitted to the observed case data, with parameter inference carried out using Bayesian methodology and goodness of fit verified by stochastic simulations. We found that successfully identifying a model of minimal complexity that accurately predicts the size and spatial distribution of cases in H5N1 outbreaks depended on the administrative level being analysed, while non-optimal reporting of infected premises emerged consistently in each poultry epidemic of interest. Our zoonotic transmission component showed that the main contributor to spillover transmission of H5N1 in Bangladesh differed from one poultry epidemic to another. These results indicate that shortening delays in the reporting of infected poultry premises, alongside reducing contact between humans and poultry, will help reduce the risk to human health.
Edward Hill, Thomas House, Xiangming Xiao, Marius Gilbert and Michael Tildesley
26009 Design principles for TB vaccines' clinical trials based on spreading dynamics [abstract]
Abstract: The development process of new vaccines is especially complex and slow in the case of tuberculosis. One of the main reasons for this difficulty is the current absence of reliable immunological correlates of protection, which makes the only feasible approach to estimating vaccine efficacy the enrolment and subsequent follow-up of large cohorts of susceptible individuals in extremely challenging and costly efficacy clinical trials. After these trials, vaccine efficacy is commonly estimated from straightforward comparisons of the fraction of subjects at each trial endpoint --healthy, infected or diseased--, or as the ratio between transition rates in the vaccine vs control cohorts, either at the level of protection against infection (VE_inf) or against progression to disease (VE_dis). In this work, we identify a conceptual limitation of this basic approach to estimating vaccine efficacy, which consists of a degeneracy between the different mechanisms through which a vaccine can disrupt the natural cycle of the disease that are all compatible with a single trial observation of VE_dis. In this sense, once VE_dis is measured, we identify an entire family of compatible vaccines in which the mechanism of action is arbitrarily distributed between 1) a reduction of the fraction of individuals experiencing fast progression after infection and 2) a deceleration of the rate at which that fast progression takes place. Furthermore, using disease-spreading models, we find that this --so far neglected-- degeneracy introduces critical levels of uncertainty into estimates of the expected vaccine impact in terms of reduction of cases and casualties, compromising our very ability to make meaningful predictions of statistically significant vaccine impacts. Finally, we propose an alternative approach that solves the degeneracy problem, providing independent estimations of the vaccine effects both on reducing the fraction of rapid progressors and on restraining the rate at which they develop disease. Our method involves the analysis of individuals' transition times between the different endpoints in the trial, an observable whose retrieval is compatible with state-of-the-art protocols. By doing so, the new method contributes to a more detailed and precise description of vaccine features and unlocks more precise impact forecasts.
Sergio Arregui, Joaquín Sanz, Dessislava Marinova, Carlos Martín and Yamir Moreno
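The degeneracy described above can be shown in a few lines. In the sketch below, under a deliberately minimal progression model (a fraction p_fast of infected individuals progresses to disease at a constant rate, observed up to follow-up time T), two mechanistically different vaccines produce exactly the same trial readout VE_dis; all numbers are illustrative.

```python
import numpy as np

def p_disease(p_fast, rate, T):
    """Probability that an infected trial participant develops disease
    within follow-up time T: a fraction p_fast progresses at rate `rate`."""
    return p_fast * (1.0 - np.exp(-rate * T))

T, p0, r0 = 3.0, 0.10, 1.0            # control cohort (illustrative values)
base = p_disease(p0, r0, T)

# Two mechanistically different vaccines with identical trial readout:
v1 = p_disease(0.05, r0, T)           # halves the fraction of fast progressors
r_slow = -np.log(1 - 0.5 * (1 - np.exp(-r0 * T))) / T
v2 = p_disease(p0, r_slow, T)         # instead slows their progression rate

print("VE_dis, vaccine 1:", 1 - v1 / base)   # 0.5
print("VE_dis, vaccine 2:", 1 - v2 / base)   # 0.5 as well: degenerate
```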
26010 Were spontaneous behavioral changes responsible for the spatial pattern of 2009 H1N1 pandemic in England? [abstract]
Abstract: Spontaneous human behavioral responses triggered by a pandemic/epidemic threat have the potential to shape the dynamics of infectious diseases. Nonetheless, detecting and quantifying their contribution to the spread of an epidemic remains challenging. In this work we make use of an individual-based model of influenza transmission, calibrated on age-specific serosurvey data in England regions, in order to identify the main determinants of heterogeneities in the spatial spread of the 2009 H1N1 pandemic at the sub-national scale. In fact, the 2009 pandemic spread in England was characterized by two major waves: the first wave spread (almost only) in London, while the second one spread in a highly homogeneous way across the country. Our modeling results suggest that this dynamic was mainly attributable to a significant change in the effective distance at which potential infectious contacts occurred. In particular, we estimated a remarkably lower force of infection at large distances in the first wave compared to the second one. This decrease may be interpreted as a behavioral adaptation to the perceived risk of infection, which was particularly high in the initial phase of the pandemic, and may have resulted in a decrease in mobility and/or in the number of potentially infectious contacts at large distances. These findings help shed light on the role played by (spontaneous) precautionary behaviors in the spread of epidemics, especially under the pressure posed by a pandemic threat, and highlight the need to take human behavior into account when planning effective mitigation strategies.
Valentina Marziano, Andrea Pugliese, Stefano Merler and Marco Ajelli
26011 Computational Framework to Assess the Risk of Epidemics at Global Mass Gatherings [abstract]
Abstract: In the era of international travel and with global events frequently taking place across the world, there is an increasing need to study the impact of these mass gatherings (MGs) at a global level. The massive influx of spectators from different regions presents serious health threats and challenges for hosting countries and the countries where participants originate. Global MGs such as the Olympics, the FIFA World Cup, and Hajj (the Muslim pilgrimage to Makkah, Saudi Arabia) cause the mixing of various infectious pathogens due to the mixture of disease-exposure histories and the demographics of the participants. The travel patterns at the end of global events could cause a rapid spread of infectious diseases affecting large numbers of people within a short period of time. Mathematical and computational models provide valuable tools that help public health authorities to estimate, study, and control disease outbreaks in challenging settings such as MGs. In this study, we present a computational framework to model disease spread at the annual global event of the Hajj, which gathers over two million pilgrims from over 189 countries. We used the travel and demographic data of five Hajj seasons (2010-2014), and spatial data of the holy sites where the rituals are performed. As 92% of the international pilgrims arrive by air, we used the daily flight profiles of the five Hajj seasons to model the arrival of pilgrims. We simulate the interactions of pilgrims using an agent-based model in which each agent represents a pilgrim and maintains the related demographic attributes (gender, age, country of origin) and health information (infectivity, susceptibility, immunity, date of infection, number of days exposed or infected). The proposed model includes several simulations of the stages of Hajj with hourly or daily time steps. At each stage, the agent-based model of pilgrims is integrated to simulate their interactions within the space and time frames of that stage.
Sultanah Alshammari and Armin Mikler
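A toy version of the agent structure described above, assuming homogeneous mixing within one ritual site and an SEIR-like progression; the attributes, rates and population are illustrative and far simpler than the spatially explicit hourly model of the study.

```python
import random
from dataclasses import dataclass

@dataclass
class Pilgrim:
    """Illustrative agent carrying the attributes listed above."""
    country: str
    age: int
    state: str = "S"        # S, E, I or R
    days_infected: int = 0

def mixing_step(pilgrims, beta=0.3, incubation=2, infectious=5):
    """One daily time step at a ritual site: homogeneous-mixing sketch."""
    prevalence = sum(p.state == "I" for p in pilgrims) / len(pilgrims)
    for p in pilgrims:
        if p.state == "S" and random.random() < beta * prevalence:
            p.state = "E"
        elif p.state in ("E", "I"):
            p.days_infected += 1
            if p.state == "E" and p.days_infected >= incubation:
                p.state = "I"
            elif p.state == "I" and p.days_infected >= incubation + infectious:
                p.state = "R"

crowd = [Pilgrim("ID", 50) for _ in range(9_999)] + [Pilgrim("EG", 40, "I")]
for day in range(30):
    mixing_step(crowd)
print(sum(p.state == "R" for p in crowd), "recovered after 30 days")
```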
26012 Local demographic conditions and the progress towards measles elimination [abstract]
Abstract: Large measles epidemics represent a persisting public health issue for both developing and developed countries. In order to avoid the occurrence of repeated outbreaks, it is crucial to identify the age segments that have not been adequately immunized by vaccination programs and to investigate the influence of local demographic conditions in shaping measles epidemiology. To do this, we consider ten countries with distinct demographic and vaccination histories and develop a transmission model explicitly accounting for a dynamic population age structure and for the immunization activities performed since 1980. The model is calibrated at the country level with a Markov chain Monte Carlo approach exploring the likelihood of measles serological data. The model is used to identify the determinants of the observed age-specific immunity profiles, highlighting the contribution of different immunization programs and of fertility and mortality trends. Our estimates suggest that, in most countries, routine first-dose administration produced over 80% of the successfully immunized individuals, whereas in African countries catch-up campaigns played a critical role in mitigating the effects of sub-optimal routine coverage. Remarkably, our results suggest that the consequences of past immunization activities are expected to persist longer in populations with older age structures. Consequently, countries with high fertility rates, where residual susceptibility is mostly concentrated in early childhood, should optimize their routine vaccination programs. Conversely, catch-up campaigns targeting adolescents are essential to achieve measles elimination in populations characterized by low fertility levels, where we found sizeable fractions of susceptible individuals in all age groups.
Filippo Trentini, Piero Poletti, Alessia Melegaro and Stefano Merler
26013 Contagion Modeling with the chiSIM and ReFACE Frameworks: Agent-Based Models of Disease Transmission in Chicago, USA [abstract]
Abstract: We present an outline of two software frameworks being used together to simulate disease spread in the large urban metropolitan area around Chicago, USA: the Chicago Social Interaction Model (chiSIM) framework and the Repast Framework for Agent-based Compartmentalized Epidemiological Models (ReFACE). For a disease that is spread through interpersonal contact, the rate of spread through a population is in part a function of the topology of the network of contacts within that population. This contact network emerges from the movement of people through their daily activities and the locations in which they come into contact with each other. The chiSIM framework allows populations of agents to move through daily activities at high chronological resolution (hourly). As agents move through their daily schedules they arrive at locations (work, home, school, etc.) where they interact with other agents; in the context of disease modeling, these interactions include disease transmission, which can be conditioned on the type of location and/or the activities in which the agents engage. The chiSIM framework allows large-scale simulation of these events: we present examples in which ~5 million agents move among ~2 million places across Chicago and surrounding areas at hourly resolution for durations as long as 10 years. The ReFACE framework allows the construction of compartmentalized epidemiological models such as SIR and SEIR models in ways that can be incorporated into agent-based approaches. In accord with standard compartmentalized models, the disease is represented as a collection of states and the rates of transition from state to state. The ReFACE framework permits these specifications to be explored directly using traditional analytical techniques (i.e. via differential equation solvers). Additionally, however, the framework allows these to be translated into states and state transitions for use within individual agents in an agent-based model. In epidemiological agent-based applications, each agent is considered to be in one of many possible disease states, and transitions from one state to the next are driven by the disease specification. In the agent-based approach a number of mitigating factors may also play a role in these transitions. For example, a state that represents one treatment pathway may be available only to a subset of agents, and this subset may change during certain periods of the simulation. In the ABM, these contingent pathways can be considered in light of the other actions that the agent is undertaking and the conditions of the simulation, such as school closings or hospital overcrowding. We apply these two frameworks to study the relationship between the topological structure of the network of interactions provided by chiSIM and the disease progression dictated by the compartmentalized model. The topological structure may be dynamically impacted by agent decisions representing behavioral changes. We discuss the value added by using agent-based approaches, focusing on the ability to capture rich agent differences and dynamic, responsive agent behavior, to represent a dynamic topology of an interaction network, and to combine these to analyze the real impacts of possible interventions. We present analyses of specific test cases that illustrate these advantages.
John Murphy, Jonathan Ozik, Nicholson Collier and Charles Macal
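A sketch of the idea of one disease specification serving both analytical and agent-based use, as described above (the table layout is illustrative, not ReFACE's actual API): the compartmental model is written as a state -> (next state, rate) table, and, for the agent-based side, rates are converted to per-step probabilities via p = 1 − exp(−rate · dt). Infection of susceptibles is left to the contact process.

```python
import math
import random

# One disease specification, usable both by an ODE solver and, as below,
# as a per-agent state machine (rates are illustrative).
SEIR = {
    "S": [],                      # infection handled via contact events
    "E": [("I", 1 / 3.0)],        # incubation: mean 3 days
    "I": [("R", 1 / 7.0)],        # recovery: mean 7 days
    "R": [],
}

def step_agent(state, spec, dt=1.0):
    """Advance one agent by dt, converting each outgoing transition rate
    to a probability p = 1 - exp(-rate * dt)."""
    for nxt, rate in spec[state]:
        if random.random() < 1 - math.exp(-rate * dt):
            return nxt
    return state

state = "E"
for day in range(30):
    state = step_agent(state, SEIR)
print("state after 30 days:", state)
```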
26014 Spatio-temporal origin location during outbreaks of foodborne disease [abstract]
Abstract: Introduction: This project was conceptualized in order to bring data and modern analytical techniques to the problem of identifying the source of large-scale, multi-state outbreaks of foodborne illness. Determining the spatial origin of a contaminated food causing an outbreak of foodborne disease is a challenging problem due to the complexity of the food supply and the absence of coherent labeling and distribution records. Current investigative methods are time- and resource-intensive, and often unsuccessful. New tools and approaches that take advantage of the data available to investigators are needed to efficiently identify the source of an outbreak while contamination-caused illnesses are still occurring, thereby resolving investigations earlier and averting potential illnesses. Approach: In this work, a network-theoretical approach for the rapid identification of the source of foodborne contamination events is developed. The objective is to locate the source of an outbreak of foodborne disease, given a known food distribution network and a set of observed illness times and network locations, under a few practical constraints: that only a small fraction of illnesses are reported, and that the reported times are highly imprecise. Additionally, we assume that the presence of contamination at locations within the distribution network is unknown or hidden; thus, the source of contamination can be recovered only from the information associated with the reported illnesses. We tackle this problem by developing a two-stage framework for source localization, solving the problem from two perspectives. In the first stage, we assume the reported illness times are accurate within a given uncertainty. Assuming a continuous-time diffusion model of contamination from each feasible initiation node, we identify the most likely contamination source and initiation time by maximizing the likelihood of the observations given each diffusion model. In the second stage, we disregard the reported illness times, assuming that their signal is dominated by the imprecision in their measurement. We take a topological approach that identifies the source of contamination as the node maximizing the joint likelihood of the collection of paths to the observed contaminated nodes. The approach here is to view the problem from the perspective of a probabilistic graphical model that represents, through a set of conditional probability distributions, how the observation of contamination at a given node increases the probability that the contamination has traveled through adjacent upstream and downstream nodes. Accuracy and Evaluation: We subject both techniques to an extensive study to evaluate their performance and robustness across multiple outbreak scenarios and network structures. Analytical expressions are derived to determine a lower bound on the accuracy achievable for specific multi-partite network structures. A probabilistic simulation approach involving generalized food distribution network models and diffusion models of contamination was developed to (1) analyze the robustness of the traceback methodology across multiple outbreak scenarios, (2) determine the relationship between accuracy and network structural parameters, (3) analyze the performance of the algorithms under strategic interventions introduced to improve traceback, and (4) quantify the benefits of the approach through comparison to heuristics, which can be viewed as representative of the kind of "reasonably smart" investigation strategies one might apply in practice, and to state-of-the-art theoretical methods. From the results of this study, we recommend a combined approach for outbreak investigation that uses both algorithms to maximize the probability of localizing the source to a well-defined region or a single node. Findings: In extensive simulation testing across a variety of distribution network structures, we find that the methodology is highly accurate and efficient: the actual outbreak source is robustly ranked within the top 5% (1%) of feasible locations after 5% (25%) of the cases have been reported, thereby reducing by up to 45% (25%) the eventual total number of illnesses in the simulated outbreaks, greatly outperforming not only heuristics but also state-of-the-art methods. We determine that large improvements in traceback accuracy (up to 50%) are possible if routine sampling is implemented at a small number (5%) of strategically chosen nodes, and find that it is possible to determine which supply-chain actors should be investigated next during an investigation, given the currently available information, in order to increase the probability of identifying the source. We identify specific properties of distribution network structures that both limit propagation and facilitate more accurate tracebacks, as well as thresholds for specific parameters above which traceback is trivial, and the reverse. Conclusions: This project has contributed an entirely novel approach to outbreak traceback investigations: a network-theoretic framework for efficient spatio-temporal localization of the source of a large-scale, multi-state outbreak of foodborne illness. Our analytical and simulation results suggest that this methodology can form the basis of a "tool" to supplement real-time traceback procedures by identifying high-probability sources of an ongoing outbreak and making strategic recommendations regarding the allocation of investigative resources. It is important to stress, however, that live use of these techniques has yet to occur and may reveal features of the real problem inadvertently omitted from the modeling. Extensive testing of our tool across multiple historical cases, followed by real-time application during outbreak emergencies, will ultimately be necessary to determine its utility to public health in terms of how much earlier an investigation can be resolved and how many illnesses averted as a result.
Abigail Horn
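A sketch of the stage-one likelihood maximization described above, assuming an undirected network (real food distribution networks are directed), unit contamination speed along edges, and Gaussian noise on reported illness times; the unknown start time is profiled out by its maximum-likelihood estimate. Graph and reports are synthetic.

```python
import networkx as nx
import numpy as np

def rank_sources(G, reports, speed=1.0, sigma=2.0):
    """For each feasible source s, score the likelihood of the reported
    illness (node, time) pairs under a diffusion model in which
    contamination reaches node v after dist(s, v) / speed, with Gaussian
    noise (std sigma) on reported times. Returns nodes sorted best-first."""
    scores = {}
    for s in G.nodes:
        dist = nx.single_source_shortest_path_length(G, s)
        if not all(v in dist for v, _ in reports):
            continue                       # s cannot reach every report
        residuals = np.array([t - dist[v] / speed for v, t in reports])
        t0 = residuals.mean()              # MLE of the unknown start time
        scores[s] = -np.sum((residuals - t0) ** 2) / (2 * sigma ** 2)
    return sorted(scores, key=scores.get, reverse=True)

G = nx.gnm_random_graph(50, 120, seed=3)       # stand-in distribution network
reports = [(10, 4.1), (22, 3.8), (31, 5.2)]    # (node, reported illness time)
print(rank_sources(G, reports)[:5])            # top-5 candidate sources
```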

Computational Social Science: Social Contagion, Collective Behaviour, and Networks  (CSS) Session 2

Time and Date: 14:15 - 18:00 on 21st Sep 2016

Room: D - Verwey kamer

Chair: Taha Yasseri

99999 TBC [abstract]
Abstract: TBC
TBC

Dynamics of Multilevel Complex Systems  (DMC) Session 2

Time and Date: 14:15 - 18:00 on 21st Sep 2016

Room: E - Mendes da Costa kamer

Chair: Guido Caldarelli

18007 Color Avoiding Percolation [abstract]
Abstract: When assessing the security or robustness of a complex system, including the fact that many nodes may fail together is essential. Though complex network studies typically assume that nodes are identical with respect to their vulnerability to failure or attack, this is often inaccurate. Surprisingly, this heterogeneity can be utilized to improve the system's functionality using a new "color-avoiding percolation" theory. We illustrate this with a new topological approach to cybersecurity. If there are many eavesdroppers, each tapping many nodes, we propose to split the message and transmit each piece on a path that avoids all the nodes which are vulnerable to one of the eavesdroppers. Our theory determines which nodes can securely communicate and is applicable to a wide range of systems, from economic networks to epidemics.
Vinko Zlatic
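A brute-force sketch of the color-avoiding idea, under the illustrative assumptions of one eavesdropper per color and endpoint exemption (the published theory is far more efficient than this pairwise check):

    # Toy color-avoiding connectivity check: u and v can communicate securely
    # if, for every color c, there is a path between them avoiding all nodes
    # of color c (endpoints exempt, an assumption here). Brute force only.
    import random
    import networkx as nx

    random.seed(1)
    G = nx.erdos_renyi_graph(50, 0.1, seed=1)
    colors = {n: random.choice("rgb") for n in G}   # one eavesdropper per color

    def color_avoiding_connected(G, u, v):
        for c in set(colors.values()):
            removed = {n for n in G if colors[n] == c and n not in (u, v)}
            H = G.subgraph(set(G) - removed)
            if not (u in H and v in H and nx.has_path(H, u, v)):
                return False
        return True

    pairs = [(u, v) for u in G for v in G if u < v]
    frac = sum(color_avoiding_connected(G, u, v) for u, v in pairs) / len(pairs)
    print("fraction of securely connected pairs:", round(frac, 3))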
18008 On topological characterization of behavioural properties: the TOPDRIM approach to the dynamics of complex systems [abstract]
Abstract: The project TOPDRIM envisioned a new mathematical and computational framework based on topological data analysis for probing data spaces and extracting manifold hidden relations (patterns) that exist among data. While pursuing this objective, a general program aiming to construct an innovative methodology to perform data analytics has been devised. This program proposes the realization of a Field Theory of Data starting from topological data analysis, passing through field theory and returning an automaton as a recognizer of the data language. TOPDRIM contributed mainly to the first stage of the program by giving evidence that topological data analysis is a viable tool to tame the wild collection of data and to detect changes in complex networks. However, TOPDRIM already went beyond the concept of networks by considering instead simplicial complexes, which allow the study of n-dimensional objects (n>=2). An alternative approach to machine learning has been put forward, where data mining starts without receiving any initial input.
Emanuela Merelli
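As a taste of the topological-data-analysis machinery the abstract refers to, here is a minimal Betti-0 sketch: component lifetimes via single-linkage merge distances on a synthetic point cloud. This covers only the zeroth-order slice of TDA, not the higher-dimensional simplicial analysis the project uses.

    # Betti-0 "barcode" sketch: track how connected components of a point
    # cloud merge as the scale grows, via single-linkage merge distances.
    import numpy as np
    from scipy.cluster.hierarchy import linkage

    rng = np.random.default_rng(1)
    # two noisy clusters: the analysis should see two long-lived components
    cloud = np.vstack([rng.normal(0, 0.2, (30, 2)), rng.normal(3, 0.2, (30, 2))])

    Z = linkage(cloud, method="single")   # merge distances = component deaths
    deaths = np.sort(Z[:, 2])[::-1]
    print("longest-lived components die at scales:", np.round(deaths[:3], 2))
    # one death is much larger than the rest -> two components persist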
18009 The effect of spatiality on multiplex networks [abstract]
Abstract: Multilayer infrastructure is often interdependent, with nodes in one layer depending on nearby nodes in another layer to function. The links in each layer are often of limited length, due to the construction cost of longer links. Here, we model such systems as a multiplex network in which each layer has links of a characteristic geographic length. This is equivalent to a system of interdependent spatially embedded networks in which the lengths of the connectivity links are constrained but variable, while the length of the dependency links is always zero. We find two distinct percolation transition behaviors depending on the characteristic length of the links. When this length exceeds a certain critical value, abrupt, first-order transitions take place, while for shorter values the transition is continuous.
Michael Danziger
18010 When and how multiplex really matters? [abstract]
Abstract: In this talk, we will give a topological characterization of functional and behavioural features of complex systems. In particular we propose an interpretation of languages of regular expressions as the outcome of global topological features of the space intrinsically generated by the formal representation of processes constrained over the space. Our goal is a new scheme (in the sense of Grothendieck), allowing for a new characterization of regular expressions and the study of a different axiomatic structure, analogous to Kleene algebras, but encompassing non-deterministic process interpretation.
Vito Latora
18011 Predictive Models and Hybrid, Data Based Simulation Concepts for Smart Cities [abstract]
Abstract: Developing and managing predictive, causal models for smart cities must involve stakeholders with conflicting requirements, limited available data, limited knowledge, and different "city subsystems" which interact. The challenges can be summarized as follows: (1) Present initiatives mostly focus on closed sets of topics, leading to a narrow domain view. (2) Current simulations rely on small-scale, isolated models of real-world environments, where changes and the migration of simulation results to the real world must be carried out manually. (3) Predictive causal models have to prove additional benefit by including the "behaviour" of smart cities, e.g. dynamic feedback loops between domains. Interdisciplinary, holistic approaches should integrate the big static and dynamic data that the city emits from sources including IoT, documents, or citizens. These data must be managed to provide the foundation for hybrid simulation models operated by multi-domain experts. This provides decision support for governance stakeholders, industry, and citizens to influence the city.
Nicholas Popper
18012 Can Twitter sentiment predict Earning Announcements returns? [abstract]
Abstract: Social media increasingly reflect and influence the behavior of other complex systems. We investigate the relations between Twitter and the stock market, in particular the Dow Jones Industrial Average constituents. In our previous work we adapted the well-known "event study" from economics to the analysis of Twitter data. We defined "events" as peaks of Twitter activity, and automatically classified sentiment in Twitter posts. During the Twitter peaks, we found significant dependence between the Twitter sentiment and stock returns: the sentiment polarity implies the direction of Cumulative Abnormal Returns.
Igor Mozetic
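A minimal sketch of the event-study logic described above, with synthetic tweet counts and returns standing in for the real data; the peak threshold, the abnormal-return model, and the event window are illustrative choices:

    # Sketch of a Twitter "event study": detect activity peaks, then compute
    # cumulative abnormal returns (CAR) in a window after each peak.
    import numpy as np

    rng = np.random.default_rng(0)
    T = 500
    tweets = rng.poisson(100, T).astype(float)   # daily tweet volume
    returns = rng.normal(0, 0.01, T)             # daily stock returns
    market = rng.normal(0, 0.008, T)             # market benchmark returns

    abnormal = returns - market                  # simplest abnormal-return model
    mu, sd = tweets.mean(), tweets.std()
    peaks = np.where(tweets > mu + 2 * sd)[0]    # peak = activity 2 sigma above mean

    window = 5                                   # days after the event
    cars = [abnormal[t:t + window].sum() for t in peaks if t + window <= T]
    print(f"{len(cars)} events, mean CAR over {window} days: {np.mean(cars):+.4f}")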
18013 The temporal dimension of multiplex networks [abstract]
Abstract: Social interactions are composite, involve different communication layers and evolve in time. However, a rigorous analysis of the whole complexity of social networks has been hindered so far by lack of suitable data. Here we consider both the multi-layer and dynamic nature of social relations by analysing a diverse set of empirical temporal multiplex networks. We focus on the measurement and characterization of inter-layer correlations to investigate how activity in one layer affects social acts in another layer. We define observables able to detect when genuine correlations are present in empirical data, and single out spurious correlation induced by the bursty nature of human dynamics. We show that such temporal correlations do exist in social interactions where they act to depress the tendency to concentrate long stretches of activity on the same layer and imply some amount of potential predictability in the connection patterns between layers. Our work sets up a general framework to measure temporal correlations in multiplex networks, and we anticipate that it will be of interest to researchers in a broad array of fields.
Romualdo Pastor-Satorras
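A toy version of the inter-layer correlation measurement: compare how often consecutive events stay on the same layer against a shuffled null that keeps layer frequencies but destroys temporal order. The single-ego synthetic data and the observable are simplified stand-ins for the abstract's more refined measures:

    # Do consecutive events of one user stay on the same layer more or less
    # often than chance? Compare against a time-shuffled null model.
    import numpy as np

    rng = np.random.default_rng(42)
    # synthetic event sequence for one ego: layer label per event (0=call, 1=text)
    layers = rng.choice([0, 1], size=2000, p=[0.3, 0.7])

    def same_layer_rate(seq):
        return np.mean(seq[1:] == seq[:-1])

    observed = same_layer_rate(layers)
    null = [same_layer_rate(rng.permutation(layers)) for _ in range(1000)]
    z = (observed - np.mean(null)) / np.std(null)
    print(f"observed: {observed:.3f}, null: {np.mean(null):.3f}, z = {z:+.2f}")

With independent synthetic events the z-score hovers near zero; on real multiplex data, a significant deviation signals genuine inter-layer temporal correlation.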
18014 Topological and functional b-cells networks: a biological paradigm of self-organised dynamics [abstract]
Abstract: Most complex physical systems are characterised by an emergent behaviour arising from the interacting dynamics of many particles. The resulting patterns at the macroscopic level can thus be linked to functional states of the system, which strongly depend on the topological features of the connections, on the intrinsic dynamics of the single nodes, and on environmental inputs. Nowadays these concepts are generalised and applied to a plethora of fields, including social dynamics, epidemic spreading, information flows and, in this particular case, also the physiology of excitable biological media. In this perspective, we analysed the emergent dynamics of the endocrine b-cells in the pancreas, as a typical example of biological electrically-coupled oscillators which release insulin in response to appropriate blood glucose levels. The primary focus was to establish a link between the underlying physical connectivity of the nodes and the functional state of the global network, modulated by specific operating conditions. A functional state was determined by looking at the robustness of the emergent electrical oscillations and the synchronisation patterns, investigated through a functional network approach. Indeed, a deep bond exists between the original physical network and the induced functional network. The possible presence of multiplexity via connections with other networks will also be discussed.
Simonetta Filippi

Fundamentals of Networks  (FON) Session 1

Schedule Top Page

Time and Date: 14:15 - 18:00 on 21st Sep 2016

Room: F - Rode kamer

Chair: Remco van der Hofstad

49003 Scaling Limits for Stochastic Networks [abstract]
Abstract: In this talk I will sketch a body of recent results obtained in the context of stochastic networks of dependently operating resources. These can be thought of as representing real-life networks of all sorts, such as traffic or communication networks, but I'll point out that this setup is also highly relevant in economic and biological applications. The underlying model can be thought of as a network of interacting resources, which can be modeled in a discrete state-space context through coupled queues, and in a continuous state-space context through specific systems of stochastic differential equations; the individual resources operate dependently as they react to the same environmental process. For such large networks, one would typically like to describe their dynamic behavior, and to devise procedures that can deal with various undesired events (link failures, sudden overload, etc.). I'll show how, for systems that do not allow explicit analyses, various parameter scalings help shed light on their behavior. More specifically, I'll discuss situations in which the time scale corresponding to the fluctuations of the available resources differs from that of the fluctuations of the customers' demand, leading to various appealing limit results.
Michel Mandjes
49005 Rumor spread and competition on scale-free random graphs [abstract]
Abstract: Empirical findings have shown that many real-world networks share fascinating features. Indeed, many real-world networks are small worlds, in the sense that typical distances are much smaller than the size of the network. Further, many real-world networks are scale-free in the sense that there is a high variability in the number of connections of the elements of the networks, making these networks highly inhomogeneous. Such networks are typically modeled using random graphs with power-law degree sequences. In this lecture, we will investigate the behavior of competition processes on scale-free random graphs with finite-mean, but infinite-variance, degrees. Take two vertices uniformly at random, or at either side of an edge chosen uniformly at random, and place an individual of two distinct types at these two vertices. Equip the edges with traversal times, which could be different for the two types. Then let each of the two types invade the graph, such that any other vertex can only be occupied by the type that gets there first. Let the speed of a type be the inverse of the expected traversal time of an edge by that type. We distinguish two cases. When the traversal times are exponential, we see that one (not necessarily the faster) type will occupy almost all vertices, while the losing type occupies only a bounded number of vertices, i.e., the winner takes it all, the loser's standing small. In particular, no asymptotic coexistence can occur. On the other hand, for deterministic traversal times, the faster type always gets the majority of the vertices, while the other occupies a subpolynomial number. When the speeds are the same, asymptotic coexistence (in the sense that both types occupy a positive proportion of the vertices) occurs with positive probability. This lecture is based on joint work with Mia Deijfen, Julia Komjathy and Enrico Baroni, and builds on earlier work with Gerard Hooghiemstra, Shankar Bhamidi and Dmitri Znamenski.
Remco van der Hofstad
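A simulation sketch of the exponential-traversal-time competition described above (Richardson-type dynamics via a two-source Dijkstra; the substrate graph and rates are illustrative, and the lecture's results are analytic rather than simulation-based):

    # Toy two-type competition: each type spreads from its seed with
    # exponential edge traversal times; a vertex belongs to whichever type
    # reaches it first. One Dijkstra-style sweep from two sources.
    import heapq, random
    import networkx as nx

    random.seed(3)
    G = nx.barabasi_albert_graph(2000, 2, seed=3)   # scale-free-ish substrate
    rate = {1: 1.0, 2: 1.2}                         # type 2 is slightly faster

    seeds = random.sample(list(G), 2)
    owner, heap = {}, []
    for t, s in enumerate(seeds, start=1):
        heapq.heappush(heap, (0.0, s, t))

    while heap:
        time, v, t = heapq.heappop(heap)
        if v in owner:
            continue                                # already claimed by someone
        owner[v] = t
        for w in G[v]:
            if w not in owner:
                # exponential traversal time, memoryless so redraw is valid
                heapq.heappush(heap, (time + random.expovariate(rate[t]), w, t))

    counts = {t: sum(1 for v in owner if owner[v] == t) for t in (1, 2)}
    print("vertices won:", counts)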
49002 Networks with strong homogeneous clustering are geometric [abstract]
Abstract: Two common features of many large real networks are that they are sparse and that they have strong clustering, i.e., a large number of triangles homogeneously distributed across all nodes. In many growing networks for which historical data is available, the average degree and clustering are roughly independent of the growing network size. Recently, latent-space random graph models, also known as (soft) random geometric graphs, have been used successfully to model these features of real networks, to predict missing and future links in them, and to study their navigability, with applications ranging from designing optimal routing in the Internet, to identification of the information-transmission skeleton in the human brain. Yet it remains unclear if latent-space models are indeed adequate models of real networks, as these models may have properties that real networks do not have, or vice versa. We show that maximum-entropy random graphs in which the expected numbers of edges and triangles at every node are fixed to constants, are approximately soft random geometric graphs on the real line. The approximation is exact in the limit of standard random geometric graphs with a sharp connectivity threshold and strongest clustering. This result implies that a large number of triangles homogeneously distributed across all vertices is not only a necessary but also a sufficient condition for the presence of a latent/effective metric space in large sparse networks. Strong clustering, ubiquitously observed in real networks, is thus a reflection of their latent geometry.
Dmitri Krioukov
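A minimal sketch of a sharp one-dimensional random geometric graph on a circle, showing the sparse-yet-strongly-clustered regime the abstract discusses (parameters illustrative):

    # Random geometric graph on a circle (1D latent space): sparse, yet with
    # strong, size-independent clustering.
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(7)
    n = 2000
    k = 6                                 # target average degree
    theta = rng.uniform(0, 1, n)          # latent positions on a unit circle

    G = nx.Graph()
    G.add_nodes_from(range(n))
    cutoff = k / (2 * n)                  # connect if circle distance < cutoff
    for i in range(n):
        for j in range(i + 1, n):
            d = abs(theta[i] - theta[j])
            if min(d, 1 - d) < cutoff:
                G.add_edge(i, j)

    print("avg degree:", 2 * G.number_of_edges() / n)
    print("avg clustering:", round(nx.average_clustering(G), 3))

For the sharp-threshold 1D graph the local clustering tends to 3/4 regardless of n, illustrating the size-independence mentioned above.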
49000 Breaking of Ensemble Equivalence in Networks [abstract]
Abstract: It is generally believed that, in the thermodynamic limit, the microcanonical description as a function of energy coincides with the canonical description as a function of temperature. However, various examples of systems for which the microcanonical and canonical ensembles are not equivalent have been identified. A complete theory of this intriguing phenomenon is still missing. Here we show that ensemble nonequivalence can manifest itself also in random graphs with topological constraints. We find that, while graphs with a given number of links are ensemble equivalent, graphs with a given degree sequence are not. This result holds irrespective of whether the energy is nonadditive (as in unipartite graphs) or additive (as in bipartite graphs). In contrast with previous expectations, our results show that (1) physically, nonequivalence can be induced by an extensive number of local constraints, and not necessarily by long-range interactions or nonadditivity, (2) mathematically, nonequivalence is determined by a different large-deviation behavior of microcanonical and canonical probabilities for a single microstate, and not necessarily for almost all microstates. The latter criterion, which is entirely local, is not restricted to networks and holds in general. (joint work with Tiziano Squartini, Joey de Mol and Frank den Hollander)
Diego Garlaschelli
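For the equivalent case mentioned above, a fixed number of links, here is a back-of-the-envelope sketch comparing canonical and microcanonical entropies; the difference stays subextensive, whereas, per the abstract, fixing the whole degree sequence makes it extensive (not computed here):

    # Ensemble (non)equivalence sketch for one global constraint: fix the
    # number of links L. Canonical = Erdos-Renyi Shannon entropy; micro-
    # canonical = log(#graphs with exactly L links). Their difference grows
    # only logarithmically, i.e., the ensembles are equivalent in this case.
    from math import lgamma, log

    def log_binom(a, b):
        return lgamma(a + 1) - lgamma(b + 1) - lgamma(a - b + 1)

    for n in (50, 100, 200, 400):
        pairs = n * (n - 1) // 2
        L = 2 * n                          # sparse regime: average degree 4
        p = L / pairs
        S_can = pairs * (-p * log(p) - (1 - p) * log(1 - p))
        S_mic = log_binom(pairs, L)
        print(f"n={n}: S_can - S_mic = {S_can - S_mic:.2f}")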
49001 Mixing times of random walks on dynamic configuration models [abstract]
Abstract: The mixing time of a Markov chain is the time it needs to approach its stationary distribution. For random walks on graphs, the characterisation of the mixing time has been the subject of intensive study. One of the motivations is the fact that the mixing time gives information about the geometry of the graph. In the last few years, much attention has been devoted to the analysis of mixing times for random walks on random graphs, which poses interesting challenges. Many real-world networks are dynamic in nature. It is therefore natural to study random walks on dynamic random graphs. In this talk we investigate what happens for simple random walk on a dynamic version of the configuration model in which, at each unit of time, a fraction $\alpha_n$ of the edges is randomly relocated, where $n$ is the number of nodes. For degree distributions that converge and have a second moment that is bounded in $n$, we show that the mixing time is of order $1/\sqrt{\alpha_n}$, provided $\lim_{n\to\infty} \alpha_n(\log n)^2=\infty$. We identify the sharp asymptotics of the mixing time when we additionally require that $\lim_{n\to\infty} \alpha_n=0$, and relate the relevant proportionality constant to the average probability of escape from the root by a simple random walk on an augmented Galton-Watson tree that is obtained by taking a Galton-Watson tree whose offspring distribution is the size-biased version of the limiting degree distribution and attaching to its root another Galton-Watson tree with the same offspring distribution. Proofs are based on a randomised stopping time argument in combination with coupling estimates. [Joint work with Luca Avena (Leiden), Hakan Guldas (Leiden) and Remco van der Hofstad (Eindhoven)]
Frank den Hollander

Complexity science for boosting security  (CSBS) Session 2

Schedule Top Page

Time and Date: 14:15 - 18:00 on 21st Sep 2016

Room: G - Blauwe kamer

Chair: Ana Isabel Barros

19005 The Complexity of Countering Ideologically Driven Violent Extremism through narratives [abstract]
Abstract: Terrorism is, by definition, violence driven by a sociopolitical or religious ideology that resides in a larger cultural worldview. Worldviews and ideologies are complex systems "that emerge from the complex causal interaction of cognitive, affective, and social factors. As is true with all complex systems, they evolve in ways that can be surprising and hard to predict" (Homer-Dixon et al. 2013). This is certainly true of the dynamic and complex set of ideas called Jihadist ideology that arguably motivated those carrying out the recent wave of attacks in Brussels, Paris, and Istanbul. I will describe the master narrative of Islamic Jihadism and how this narrative is supported by the larger Islamic worldview. I will also discuss why some of our counter-terrorism rhetoric and actions can have the counterintuitive result of supporting elements of the Jihadist narrative. I will end with a suggestion on how the ideology-as-complex-system approach can help us design a more effective strategy for countering Islamic Jihadism.
Afzal Upal (DRDC)
19006 Complexity in security: Discovering the narrative in crime related data [abstract]
Abstract: One of the most complex challenges societies currently face is the threat of terrorism. In order to adequately prevent terrorism, law enforcement agencies and intelligence services need to shift from prosecuting crime to anticipating crime. In this approach the discovery of the narrative of a terrorist organisation seems to be crucial. Pandora Intelligence, in cooperation with TNO, developed an innovative scenario model that can be used to detect narratives in crime-related data. These narratives can be used to support law-enforcement agencies in effectively anticipating criminal behaviour. Moreover, the scenario model may be used to create "alter-narratives": intervention options that neutralise the narrative of the adverse party. In this presentation the challenges that intelligence agencies are facing will be discussed. The enormous quantities of complex data hide unknown, and potentially useful, information. A film-scenario approach, as used in the film industry, shows potential to turn this complex data into actionable intelligence products. Moreover, like filmmakers, terrorists use a narrative: a central storyline that defines the behaviour of the individuals involved. Combining scenarios from films and books with real incidents offers added value in understanding the mind-set of a terrorist. The added value of the creativity of filmmakers and scenario writers will also be addressed, as they offer support in the creative thinking process required to counter terrorism.
Peter A. de Kock (Pandora Intelligence)
19007 Opponent Resilience [abstract]
Abstract: The hyper-connectivity of society means that terrorism and past and ongoing violent conflicts are extremely dynamic and volatile, and often not contained within national borders or ideologies. Understanding opponent behaviour has therefore become more essential than ever, reinforcing Sun Tzu's writing from the 5th century BC: "If you know the enemy and know yourself, you need not fear the result of a hundred battles" (The Art of War). However, limited modelling is available to provide insight into the effect of interventions on opponents, and in particular on opponent resilience. Considering the modus operandi and its directed chain of actions that leads to an attack or to criminal events is important, but not sufficient. It is also essential to analyse opponent resilience in the context of the opponent organization, its social networks, and the society and physical environment in which it operates, as all these aspects may influence each other. This paper explores the combination of Agent-Based Modelling and System Dynamics to derive a generic multi-methodology framework for modelling opponent behaviour in its context that can provide insight into the dynamics and resilience over time. It allows detailed modelling, e.g. of the opponent organization, to be combined with high-level models, for example of economic developments. Scenarios can be tailored for specific opponent problems, ranging from insurgent situations in a military context to criminal gangs in a civil context.
Bob van der Vecht, Ana Isabel Barros, Bert Boltjes, Tom Logtens, Nico Reus (TNO)
19008 Goldilocks and the Wicked Problem [abstract]
Abstract: Governments are increasingly faced with challenges that present themselves as highly complex or "wicked problems" (Rittel and Webber, 1973). These problems are characterized by their strongly interdependent elements. They are typically not "owned" by one organization, but instead have a myriad of stakeholders with different and sometimes conflicting perspectives on the system. Finally, these problems become especially challenging in areas related to security, where the complex systems being addressed are highly adaptive and covert. For over half a century, systems researchers have been working on developing formal approaches to help solve very complex problems. Different systems paradigms have emerged over the years, often labeled "hard", "soft" and "critical". Hard systems methods (like system dynamics, systems analysis, and systems engineering), while designed to deal with multiple interacting variables, assume that the problem can be definitively defined by an "expert" practitioner and that optimal solutions can be achieved. Soft systems methods (SSM) were developed that rely more on qualitative methods and a focus on participation by all stakeholders to formulate the problem from multiple perspectives. Critical systems methods build on SSM by including methods that specifically address situations where there are conflicts among stakeholders and where some stakeholders hold an inordinate amount of power. This paper will summarize a multi-method Systemic Intervention methodology in which soft and critical systems methods are used together with hard systems methods to develop interagency systemic strategies for countering transnational organized crime and its convergence with U.S. urban gang crime. Transnational crime organizations are becoming an increasingly complex and enduring threat globally. They pose a significant and growing domestic threat, especially through the influence they have on street gang crime in U.S. urban centers. Transnational crime groups and urban street gangs converge into an interdependent crime system that is highly adaptive and interconnected. They present an ever-evolving threat that cannot be addressed by breaking the problem into parts and addressing challenges independently within vertically structured agencies. Crime on the Urban Edge (CUE) is a research project being conducted at Argonne National Laboratory that uses 1) a critical systems method for participatory problem structuring that engages key stakeholders from local, regional, and federal perspectives and 2) a computational hard systems method called AnticipatoRy Complex Adaptive Network Extrapolation (ARCANE). ARCANE is a genetic algorithm system for automatically generating system dynamics models that represent potential system adaptations. This model is being designed and developed to anticipate how wicked problems react under given disruptions or interventions. CUE illustrates a Goldilocks approach to wicked problems that is neither hard nor soft, but perhaps "just right".
Ignacio Martinez-Moyano, Pamela Sydelko, Michael North, and Brittany Friedman (Argonne National Laboratory)
19009 A Complex Systems Approach to Modelling and Analysis of Security and Resilience in Air Transport [abstract]
Abstract: The importance of security has been increasingly recognized in the area of air transport. However, formal, mathematical, and computational approaches to modelling and analysis of security, in particular of its physical dimension (e.g., security of airports), are currently largely lacking. To address this gap, in this presentation a formal methodology for systematic security risk assessment of air transport sociotechnical systems is introduced. To handle environmental uncertainty and to provide a resilient response to disruptions, modern complex air transport systems combine elements of hierarchical top-down control and bottom-up self-organization. General Systems Theory is useful to model hierarchical systems, whereas Complex Adaptive Systems Theory and its prominent tool, multiagent systems modelling, are well suited to describe self-organization and bottom-up emergence. In the proposed methodology both theories are integrated to realistically represent and analyse by simulation security- and resilience-related aspects of sociotechnical systems in air transport. The methodology is illustrated by a case study in airport security.
Alexei Sharpanskykh (Delft University of Technology)
19010 Discussion and wrap-up Ana Isabel Barros (TNO)

Coarse-graining of Complex Systems  (CCS) Session 2

Schedule Top Page

Time and Date: 14:15 - 18:00 on 21st Sep 2016

Room: H - Ontvangkamer

Chair: Mauro Faccin

20006 Coarse graining and data aggregation techniques in location-based services [abstract]
Abstract: Location-based services have become a popular subject of research over the past decade thanks to their significance as a novel source of geo-referenced data that provide solutions for a variety of research problems in a host of disciplines. Despite their richness in terms of the multiple data layers that become available, these datasets are often sparse and are characterised by skewed distributions which make the application of classical statistical frameworks and machine learning algorithms in this context a challenge. In this talk, we will review a wider range of data aggregation and coarse graining techniques that enable useful characterisations of complex location-based systems and find applicability in many real world applications.
Anastasios Noulas
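The simplest such coarse-graining, sketched with synthetic check-ins: aggregate geo-tagged points onto a regular grid (coordinates and cell size are illustrative assumptions):

    # Grid aggregation of location-based data: bin geo-tagged check-ins onto
    # a regular lattice to obtain a density field that smooths out sparsity.
    import numpy as np

    rng = np.random.default_rng(2)
    lat = rng.normal(52.37, 0.05, 10_000)   # toy check-ins around Amsterdam
    lon = rng.normal(4.90, 0.08, 10_000)

    cell = 0.01                              # grid resolution in degrees
    counts, lat_edges, lon_edges = np.histogram2d(
        lat, lon,
        bins=[np.arange(52.2, 52.6, cell), np.arange(4.7, 5.1, cell)])
    print("busiest cell holds", int(counts.max()), "check-ins")

Choosing the cell size is the coarse-graining decision proper: too fine and the skewed, sparse counts return; too coarse and spatial structure is washed out.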
20007 Modularity and the spread of perturbations in complex dynamical systems [abstract]
Abstract: Many complex systems are modular, in that their components are organized in tightly-integrated subsystems that are weakly-coupled to one another. Modularity has been argued to be important for evolvability, state-space exploration, subsystem specialization, and many other important functions. The problem of decomposing a system into weakly-coupled modules, which has been studied extensively in graphs, is here considered in the domain of multivariate dynamics, a commonly-used framework for modeling complex physical, biological and social systems. We propose to decompose dynamical systems using the idea that modules constrain the spread of localized perturbations. We find partitions of system variables that maximize a novel measure called "perturbation modularity", defined as the auto-covariance of a coarse-grained description of perturbed trajectories. Our approach effectively separates the fast intra-modular from the slow inter-modular dynamics of perturbation spreading (in this respect, it is a generalization of the Markov stability method of community detection). Perturbation modularity can capture variation of modular organization across different system states, time scales, and in response to different kinds of perturbations. We argue that our approach offers a principled alternative to detecting graph communities in networks of statistical dependency between system variables (e.g. "relevance networks", "functional networks", and other networks based on correlation or information-transfer measures). Using coupled logistic maps, we demonstrate that the method uncovers hierarchical modular organization encoded in a system's coupling matrix. Additionally, we use it to identify the onset of self-organized modularity in certain parameter regimes of homogeneously-coupled map lattices (originally popularized by Kaneko). Our approach offers a powerful and novel tool for exploring the modular organization of complex dynamical systems.
Artemy Kolchinsky, Alexander Gates, Luis Rocha
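A toy version of the perturbation-spreading intuition, using coupled logistic maps with two weakly coupled modules; the coupling values and map parameters are illustrative, and this is not the authors' optimization procedure:

    # Coupled logistic maps with two modules: perturb one unit and track how
    # the perturbation magnitude spreads within vs. between modules.
    import numpy as np

    rng = np.random.default_rng(0)
    n, half = 16, 8
    W = np.full((n, n), 0.01)                 # weak inter-module coupling
    W[:half, :half] = W[half:, half:] = 0.2   # strong intra-module coupling
    np.fill_diagonal(W, 0)
    W = W / W.sum(axis=1, keepdims=True)      # row-normalize coupling weights

    def step(x, eps=0.3, r=3.7):
        f = r * x * (1 - x)                   # local logistic map
        return (1 - eps) * f + eps * (W @ f)  # diffusive coupling

    x = rng.uniform(0.2, 0.8, n)
    for _ in range(500):                      # relax onto the attractor
        x = step(x)

    y = x.copy()
    y[0] += 1e-6                              # localized perturbation in module 1
    for _ in range(20):
        x, y = step(x), step(y)

    d = np.abs(y - x)
    print("spread within module 1:", d[:half].mean())
    print("spread into module 2:  ", d[half:].mean())

The gap between the two printed numbers is exactly what a good module partition should maximize: fast intra-modular spreading, slow inter-modular leakage.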
20008 Automatic identification of relevant concepts in scientific publications [abstract]
Abstract: In recent years, the increasing availability of publication records has attracted the attention of the scientific community. In particular, many efforts have been devoted to the study of the organization and evolution of science by exploiting the textual information extracted from the title and abstract of the articles. However, less attention has been devoted to the core of the article, i.e., its body. Access to the entire text, instead, paves the way to a better comprehension of the relations of similarity between articles. In the present work, concepts are extracted from the body of the scientific articles available on the ScienceWISE platform, and are used to build a network of similarity between articles. The resulting weighted network possesses a considerably high edge density, spoiling any attempt to associate communities of papers with specific topics. This happens because not all the concepts inside an article are truly informative and, even worse, they may not be useful to discriminate articles with different contents. Moreover, the presence of "generic concepts" with a loose meaning implies that a considerable number of connections are made up of spurious similarities. To eliminate generic concepts, we introduce a method to evaluate a concept's relevance according to an information-theoretic approach. The significance of a concept $c$ is defined in terms of the distance between its maximum entropy distribution, $S_{max}$, and the empirical one, $S_c$, calculated using the frequency of occurrence inside papers. By evaluating such a distance, generic concepts are automatically identified as those with an entropy closer to the maximum. The progressive removal of generic concepts, retaining only the "meaningful" ones, has a twofold effect: it appreciably decreases the density of the network and reinforces meaningful relations. By applying different "filtering thresholds", we unveil a refined topical organization of science in a coarse-grained way.
Andrea Martini, Alessio Cardillo, Paolo De Los Rios
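A minimal sketch of the entropy-based filter on a toy corpus (the real method operates on concept occurrences across ScienceWISE articles; the normalization and cutoff here are illustrative):

    # A concept whose occurrence distribution across documents is close to
    # maximum entropy carries little discriminative information: flag it as
    # "generic". Toy corpus and threshold for illustration only.
    import numpy as np

    docs = [
        "black hole entropy entropy horizon",
        "model model entropy data",
        "black hole merger data model",
        "data data model entropy",
    ]
    vocab = sorted({w for d in docs for w in d.split()})

    def relative_entropy_score(word):
        counts = np.array([d.split().count(word) for d in docs], dtype=float)
        p = counts / counts.sum()
        p = p[p > 0]
        S = -(p * np.log(p)).sum()
        return S / np.log(len(docs))    # 1.0 = maximally spread = generic

    for w in vocab:
        print(f"{w:8s} {relative_entropy_score(w):.2f}")
    # e.g. drop words scoring above ~0.9 as generic concepts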
20009 Tensorial Stochastic Block Models for layered data [abstract]
Abstract: In this talk I will discuss the problem of developing predictive models for layered data. Time-resolved networks are a typical example of layered data, since each time window results in a specific pattern of connections. I will present stochastic tensorial block models as a valid approach to predict missing information in network data with different layers of information. I will discuss results for two cases: a temporally resolved e-mail communication network and a drug-drug interaction network in different cell lines.
Marta Sales-Pardo
20010 The dynamics of community sentiments on Twitter [abstract]
Abstract: We study a large evolving network obtained from Twitter created by a sample of users @-mentioning each other. We find that people who have potentially the largest communication reach (according to a dynamic centrality measure) use sentiment differently than the average user: for example they use positive sentiment more often and negative sentiment less often. Furthermore, we use several algorithms for community detection based on structure of the network and users' sentiment levels to identify several communities. These communities are structurally stable over a period of months. Their sentiment levels are also stable, and sudden changes in daily community sentiment in most cases can be traced to external events affecting the community. Based on our findings, we create and calibrate a simple agent-based model that is capable of reproducing measures of emotive response comparable to those obtained from the observed data.
Danica Vukadinovic Greetham, Nathaniel Charlton, Colin Singleton
20011 Probabilistic and flux-based analysis of metabolic graphs [abstract]
Abstract: We present a framework for the construction and analysis of directed metabolic reaction graphs that can be tailored to reflect different environmental conditions. In the absence of information about the environmental context, we propose a Probabilistic Flux Reaction Graph (PRG) in which the weight of a connection between two reactions is the probability that a randomly chosen metabolite is produced by the source and consumed by the target. Using context-dependent flux distributions from Flux Balance Analysis (FBA), we produce a Flux-Balance Graph (FBG) with weighted links representing the amount of metabolite flowing from a source reaction to a target reaction per unit time. The PRG and FBG graphs are analyzed with tools from network theory to reveal salient features of metabolite flows in each biological context. We illustrate our approach with the directed network of the central carbon metabolism of Escherichia coli, and study its properties in four relevant biological scenarios. Our results show that both flow and network structure depend drastically on the environment: graphs produced from the same metabolic model in different contexts have different edges, components, and flow communities, capturing the biological re-routing of metabolic flows inside the cell. By integrating probabilistic and FBA-based analysis with tools from network science, our results provide a framework to interrogate cellular metabolism beyond standard pathway descriptions that are blind to the environmental context.
Mariano Beguerisse Diaz, Mauricio Barahona, Gabriel Bosque Chacón, Diego Oyarzún and Jesús Picó
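A sketch of the PRG construction on a toy stoichiometric matrix: each metabolite distributes unit weight uniformly over its producer-consumer reaction pairs. This is illustrative only; the paper's construction also handles stoichiometric coefficients and the FBA-weighted variant:

    # Probabilistic Reaction Graph sketch: connect reaction r to reaction s
    # with the probability that a uniformly chosen metabolite is produced by
    # r and consumed by s (uniform choice among producers/consumers assumed).
    import numpy as np

    # rows = metabolites, cols = reactions; negative = consumed, positive = produced
    S = np.array([
        [-1,  1,  0,  0],
        [ 0, -1,  1,  0],
        [ 0, -1,  0,  1],
        [ 1,  0, -1,  0],
    ])
    m, n = S.shape
    produced = (S > 0).astype(float)   # produced[i, r]: reaction r makes metabolite i
    consumed = (S < 0).astype(float)

    W = np.zeros((n, n))
    for i in range(m):                 # each metabolite contributes weight 1/m
        prod = produced[i] / max(produced[i].sum(), 1)
        cons = consumed[i] / max(consumed[i].sum(), 1)
        W += np.outer(prod, cons) / m

    print(np.round(W, 3))              # W[r, s]: probability of flow from r to s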

The Anthropogenic Earth System: Modeling Social Systems, Landscapes, and Urban Dynamics as a Coupled Human+Climate System up to Planetary Scale  (TAES) Session 2

Schedule Top Page

Time and Date: 14:15 - 18:00 on 21st Sep 2016

Room: I - Roland Holst kamer

Chair: John T. Murphy

32005 Global Modeling and Cities [abstract]
Abstract: Integrated Assessment Models (such as, notably, GCAM) use a regional approach: the planet is subdivided into some number of regions (for GCAM, 283 regions for agricultural production and 32 regions for other commodities), and these are used as the units of analysis when the model is run. This has a number of advantages, most notably computational tractability and data availability, but also that the units match those in which policies are established (e.g. trade between countries or blocks of countries). However, data availability and computational power are both changing; moreover, the impacts of a changing climate are being felt, and decisions about responses to them made, at smaller scales. We can reasonably ask if a different resolution might not be more appropriate for modeling climate-human feedbacks. One potential avenue is to take the city as the unit of analysis. Some advantages to this approach are presented and discussed here. Urbanism is increasing, and cities represent daily life for a higher proportion of the world's population each day. Cities represent the economic drivers of the planet: the changes in consumption and production, even when they play out across agricultural landscapes far removed from cities, are increasingly driven by cities. Responses to climate challenges will play out in cities, whether through rising water levels, changes in demand for agricultural products, or shifts in the efficiency and distribution of energy consumption for buildings and transportation. In this presentation, I sketch a proposed modeling framework in which cities are central and represent nodes on a network. The modeled network of cities has a number of dimensions along which adaptation in response to climate challenges is possible, and these modify the human system's impact on the climate system in a true feedback loop. I outline what this modeling program might look like, given the current state of modeling urban systems in social science work, and especially with respect to agent-based modeling, considering both the prospect of cities as agents and of cities comprising simulated human agents. An important component will be asking how the results of such a modeling activity might usefully be compared to results from IAMs such as GCAM, and whether the change in focal point leads to genuinely novel insight or whether the approaches might mutually reinforce one another. I additionally consider the data, technical, and computational challenges of a truly agent-based approach at global scales.
Murphy, John T.
32006 A framework for unravelling the complexities of unsustainable environmental change in food production [abstract]
Abstract: Food production is responsible for over 70% of freshwater use by humans and is the primary cause of land conversion globally. The global system of food production and trade is complex, with interdependencies between natural and socioeconomic conditions in importing and exporting regions, as well as the physical and socioeconomic infrastructure linking the two. Given this complexity, policy or environmental changes can have non-linear and cascading impacts on water and land resources along the supply chain. As the world becomes more globalised and more urbanised, our dependence on this complex system increases. Thus, in order to achieve sustainable food security, we require an understanding of the complexity of the food production and trade system. In this paper we set out a framework for modelling the complex feedbacks between food production policy and water and land resource use, and how these are linked globally via trade. Our framework couples a multi-agent policy network with a model of the physical environment based on the global hydrological model PCR-GLOBWB and the dynamic vegetation model LPJ-GUESS. Cities are nodes in our network and are linked via physical trade infrastructure. Our framework provides a template for a new type of Earth System model that captures the complex feedbacks between policy and environmental change and how these are linked globally via trade.
Dermody, Brian
32007 Global modelling, individuals and ecosystems [abstract]
Abstract: One of the crucial aspects of global climate change, and global change more generally, is the representation of ecosystems. Forest in particular plays a critical role in the regulation of atmospheric CO2. However, the dynamics of vegetated systems depend upon the presence of animals: current earth system models may include a global vegetation model, but the herbivores and the carnivores that depend upon them and help to shape them are typically absent. At the same time, the ability of humans to create long-term sustainable societies arguably requires the maintenance of healthy ecosystems, and this needs a consideration of the whole ecosystem dynamics rather than just the vegetated part. The ability to maintain wild-capture fisheries, for example, depends upon being able to assess the health of the carnivorous fish that are the main focus of fishing fleets, yet we have little idea of how fishing affects the resilience or long-term stability of ecosystems in the ocean, especially in the face of increasing acidification from CO2 input. The Madingley model is the first model to include, in a general way, all of the living systems on the earth, including the animal component. The model is agent-based, but because of the vast numbers of animals that exist in certain categories (zooplankton for example) it uses a cohort representation in which a single agent may represent just one individual or many millions. In order to keep the model as general as possible, the cohorts may also represent multiple species that occupy the same functional group, so that our limited knowledge of the specifics does not prevent a meaningful representation of the dynamics. Grid-based climate driving fields and land-surface representations are used to link the individuals to environmental factors, with a variety of configurations that allow for the exploration of different scales. This also allows for scenario-based experiments to be performed in which the effects of human harvesting of natural capital on the ecosystem dynamics can be explored. However, since the model is already agent-based, it also forms an ideal platform onto which global-scale agent-based dynamical human systems can be built while including all the relevant feedbacks between the atmosphere, ocean, biosphere and human systems. The latter present challenges both in coupling the model to existing climate GCMs, and in exploring sufficiently generic but computationally tractable societal dynamics that go beyond the simple economic and equilibrium assumptions that still dominate most integrated assessments. The presentation will give a description of the current state of the model and the ways it is currently being developed to include human factors.
Bithell, Mike; Harfoot, Mike; McOwen, Chris; Newbold, Tim; Purves, Drew; Tittensor, Derek; Underwood, Phil; and Visconti, Piero
32008 General Discussion: Modeling the Anthropogenic Earth System: Paths Forward

BURSTINESS in human behaviour and other natural phenomena  (BIHB) Session 2

Schedule Top Page

Time and Date: 14:15 - 18:00 on 21st Sep 2016

Room: J - Derkinderen kamer

Chair: Yerali Gandica

21006 Temporal structures of crime [abstract]
Abstract: We analyze several temporal aspects of a large dataset of crimes and their suspects. The dataset covers all crimes recorded by the Swedish police from 1992 to 2012: 460,846 crimes involving 1,250,535 individuals. The crimes are categorized into about 400 categories and are time-stamped. We extract several types of temporal structures, from interevent times to the longer-scale structures related to criminal careers. We discuss the connections between the various extracted time scales and the social phenomenon of crime. Some quantities are even scale-free (with exponent around 2.5), like the distribution of collaboration trees, where a crime is connected to the previous one if there is a significant overlap of the criminals involved. Most quantities do, however, have scales. We discuss how these temporal effects can help preventive police work.
Petter Holme
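A minimal sketch of the basic interevent-time statistics such analyses start from, including the Goh-Barabasi burstiness coefficient; the timestamps here are synthetic heavy-tailed stand-ins for the crime records:

    # Interevent-time (IET) statistics of an event sequence, including the
    # burstiness coefficient B = (sigma - mu)/(sigma + mu): B = 0 for a
    # Poisson process, B = -1 for a perfectly regular one, B -> 1 for
    # extreme burstiness.
    import numpy as np

    rng = np.random.default_rng(0)
    # a bursty toy signal: interevent times from a heavy-tailed distribution
    tau = rng.pareto(1.5, size=10_000) + 1.0
    timestamps = np.cumsum(tau)

    iet = np.diff(timestamps)
    mu, sigma = iet.mean(), iet.std()
    B = (sigma - mu) / (sigma + mu)
    print(f"mean IET {mu:.2f}, std {sigma:.2f}, burstiness B = {B:.2f}")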
21007 Time-energy correlations as a hallmark of different branching processes [abstract]
Abstract: Several biological and natural systems appear to operate close to a critical point, as evidenced by the absence of a characteristic size in the phenomenon. Indeed, the existence of power-law distributions has been detected in several contexts as different as earthquakes, solar flares and spontaneous brain activity, and, surprisingly, with similar scaling behaviour. We propose that the specific features of each phenomenon are embedded in the temporal organization of events in time. A detailed analysis of time-energy correlations, detrending statistical noise, is able to highlight the difference between the physical mechanisms controlling different phenomena, for instance earthquakes and solar flares. Conversely, the temporal organization of neuronal avalanches in the rat cortex in vitro exhibits a distribution of waiting times between successive events with a non-monotonic behavior, not usually found in other natural processes. Numerical simulations provide evidence that this behavior is a consequence of the alternation between states of high and low activity, leading to a dynamic balance between excitation and inhibition. This behavior is also detected at a larger scale, i.e., in fMRI data from resting patients. By monitoring temporal correlations in the high-amplitude BOLD signal, we find that activity variations with opposite sign are correlated over a temporal scale of a few seconds, suggesting a critical balance between activity excitation and depression in the brain.
Lucilla de Arcangelis
21008 Bursts in the permeability of particle-laden flows through porous media [abstract]
Abstract: Particle-laden flows experience deposition and erosion when passing through a porous medium, a common situation in many fields, ranging from environmental sciences to industrial filters and petroleum recovery. We experimentally study dense suspensions during deep bed filtration and find that the time evolution of pressure losses through the filter is characterized by jumps separated by time delays. These jumps are related to erosive events inside the porous medium and are preceded and followed by deposition. A statistical analysis shows that the events are independent and that their size distribution scales with a power law. The detection of such jumps provides new insight into the dynamics of particle-laden flows through porous media, specifically as they can be considered analogous to sand avalanches occurring in petroleum wells. The above phenomenon can be reproduced in an electrical network of fuse-antifuse devices, which become insulators within a certain finite interval of local applied voltages. As a consequence, the macroscopic current exhibits temporal fluctuations which increase with system size. We determine the conditions under which this itinerant conduction appears by establishing a phase diagram as a function of the applied field and the size of the insulating window.
Hans Herrmann
21009 Social networks, time, and individual differences [abstract]
Abstract: In the traditional "bare-bones" network approach, nodes are nodes and links are links, and that is all there is. For social networks, this means that individuals are distinguishable only on the basis of their network characteristics (degree, centrality, etc.). However, we all know that people are different and behave in different ways. These differences can be approached with more fine-grained behavioural data, in particular with the help of data on time-stamped interactions that allow constructing dynamic and temporal social networks. In this talk, I will focus on exploring individual differences with the help of temporal data on electronic interactions (calls, emails, etc.). I will first talk about longer timescales and the similarities and differences in how we maintain our personal networks. Then, I will focus on the shorter timescales of circadian patterns, and show how various data sets reveal the chronotypes of individuals (morning/evening-active persons) and the chronotype compositions of populations.
Jari Saramäki
21010 Models of human bursty phenomena [abstract]
Abstract: Bursty dynamical patterns characterise not only individual human behaviour but also appear at the level of dyadic interactions and even in collective phenomena. The first observations of human bursty patterns commonly addressed the activity of individuals, although many observations were made on interaction datasets. All these studies reported heterogeneous non-Poissonian dynamical patterns characterised by broad inter-event time distributions, whose emergence was explained in various ways: as due to intrinsic correlations via decision mechanisms; as due to independent actions influenced by circadian patterns; or via other underlying mechanisms. In addition, several combinations of these modelling directions were proposed, together with phenomenological models aiming simply to reproduce signals with similar temporal features. In this talk our aim is to give a brief introduction to these modelling efforts and to provide an overview of their development during the last decade.
Marton Karsai
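One classic generator from this modelling family, sketched below: Barabasi's priority-queue model, in which executing the highest-priority task with high probability produces heavy-tailed waiting times (queue length, probability, and run length are illustrative):

    # Priority-queue model of human task execution: with probability p run
    # the highest-priority task, otherwise a random one; replace the executed
    # task with a fresh random priority. Waiting times become heavy-tailed.
    import random

    random.seed(0)
    L, p, steps = 2, 0.99, 100_000
    priority = [random.random() for _ in range(L)]
    age = [0] * L                     # steps since each task was last executed
    waits = []
    for _ in range(steps):
        if random.random() < p:
            i = max(range(L), key=lambda j: priority[j])
        else:
            i = random.randrange(L)
        waits.append(age[i])
        priority[i] = random.random()  # a new task replaces the executed one
        for j in range(L):
            age[j] = 0 if j == i else age[j] + 1

    waits.sort()
    print("median wait:", waits[len(waits) // 2],
          " 99.9th percentile:", waits[int(0.999 * len(waits))])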
21011 Social Media affects the Timing, Location, and Severity of School Shootings [abstract]
Abstract: Over the past two decades, school shootings within the United States have repeatedly devastated communities and shaken public opinion. Many of these attacks appear to be "lone wolf" ones driven by specific individual motivations, and the identification of precursor signals and hence actionable policy measures would thus seem highly unlikely. Here, we take a system-wide view and investigate the timing of school attacks and the dynamical feedback with social media. We identify a trend divergence in which college attacks have continued to accelerate over the last 25 years while those carried out on K-12 schools have slowed down. We establish the copycat effect in school shootings and use a Hawkes process to model the statistical association between social media chatter and the probability of an attack in the following days. While hinting at causality, this relationship may also help mitigate the frequency and intensity of future attacks.
Javier Garcia-Bernardo
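A sketch of the self-exciting Hawkes process underlying such models, simulated with Ogata's thinning algorithm; the exponential kernel and all parameter values are illustrative, not fitted to the shootings data:

    # Simulate a Hawkes process with intensity
    #   lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i))
    # via Ogata's thinning. Stationarity requires alpha < beta.
    import math, random

    random.seed(1)
    mu, alpha, beta, T = 0.1, 0.5, 1.0, 200.0

    def intensity(t, events):
        return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)

    events, t = [], 0.0
    while t < T:
        lam_bar = intensity(t, events) + alpha   # conservative upper bound
        t += random.expovariate(lam_bar)         # candidate waiting time
        if t < T and random.random() <= intensity(t, events) / lam_bar:
            events.append(t)                     # accept: a real event

    print(len(events), "events; expected rate mu/(1 - alpha/beta) =",
          round(mu / (1 - alpha / beta), 3))

Each accepted event raises the intensity for a while, so events cluster: the mathematical form of the copycat effect described above.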

Skilled action as a complex system: affordances and social  (SAAA) Session 1

Schedule Top Page

Time and Date: 14:15 - 18:00 on 21st Sep 2016

Room: R - Raadzaal

Chair: Jelle Bruineberg

40000 Skilled Intentionality in social coordination [abstract]
Abstract: In this talk I will make explicit how the concept of Skilled Intentionality (i.e. action-readiness for engaging with multiple relevant affordances simultaneously), which we have developed in a series of publications (Rietveld, 2008, 2012; Bruineberg & Rietveld 2014; Kiverstein & Rietveld, 2015), sheds light on social coordination in skilled action and everyday life. This builds upon the work on affordances for 'higher' cognition we published in a recent paper in Ecological Psychology (Rietveld & Kiverstein, 2014). I will argue that this way of looking at social coordination implies taking material engagement seriously. Moreover, architecture is able to intervene in sociomaterial practices and in that way contribute to complex societal challenges, such as a better public domain and environments that invite more active and healthy ways of living.
Erik Rietveld, Jelle Bruineberg
40001 Skilled collaboration in Volleyball reception [abstract]
Abstract: Professional volleyball receivers are highly skilful at collaborating when defending their court from the serve. In previous research, we found the generalized Voronoi diagram to describe the way the court is collaboratively defended by three receivers, who are at the same time interacting with the opposing server/ball. In the present study, four receivers acted on 182 serves to test whether their Voronoi areas were affected by their relative positioning (left side, middle, and right side of the court) and by the type of serve (jump-float serve, power jump). We also tested whether the areas correlate with the serve's initial position. We found an effect of receiver position: the right-side receiver had the largest area, followed by the left-side receiver, and finally the middle receiver. There was a serve by receiver-position interaction effect: the receivers' areas were only influenced by the serving technique when considering the receivers' on-court position. The left-side receiver's area was smaller when facing jump-float serves than when facing power-jump serves, but for the right-side and middle positions the areas were larger against jump-float serves than against power-jump serves. Only the Voronoi areas of the side receivers were correlated with the ball's initial position; the areas increased as the ball's initial position was less laterally aligned with them. The side receivers adjusted to the ball's initial position, and the middle receiver adjusted accordingly, maintaining his dominant court region. Their mutual adjustment in relative positioning expresses collaborative adaptation to task constraints, allowing them to maintain performance levels.
Paulo, A., Zaal, F., Fonseca, S., Araújo, D.
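A Monte Carlo sketch of the Voronoi-area computation: sample points on the court and assign each to the nearest receiver (receiver coordinates and court dimensions are illustrative assumptions, not the study's data):

    # Estimate each receiver's Voronoi (dominant) area on a 9 x 9 m court
    # half by nearest-neighbour assignment of uniformly sampled points.
    import numpy as np

    rng = np.random.default_rng(0)
    court_w, court_d = 9.0, 9.0                  # width x depth of one court half
    receivers = np.array([[2.0, 6.5],            # left-side receiver (toy coords)
                          [4.5, 5.5],            # middle receiver
                          [7.0, 6.5]])           # right-side receiver

    pts = rng.uniform([0, 0], [court_w, court_d], size=(200_000, 2))
    d = np.linalg.norm(pts[:, None, :] - receivers[None, :, :], axis=2)
    nearest = d.argmin(axis=1)

    areas = [np.mean(nearest == i) * court_w * court_d for i in range(3)]
    for name, a in zip(["left", "middle", "right"], areas):
        print(f"{name:6s} Voronoi area ~ {a:.2f} m^2")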
40002 Complexity Matching in Interpersonal Coordination [abstract]
Abstract: Human behavior emerges from the interconnection of physiological, cognitive and contextual processes operating at many levels and timescales. In the past decades it has become evident that behavior is not only inherently variable but temporally self-similar (i.e. fractal), which is an expression of this interconnectedness. In addition, it has recently been shown that two people who perform a task together tend to match the fractal patterns of their behavioral time series. In a way they become similar complex dynamical systems due to coupling. This effect, called "complexity matching", reveals global coordination at multiple timescales, which is related to optimal information exchange between two systems. In recent years complexity matching has been demonstrated in various areas of interpersonal interaction. In this paper several empirical studies will be discussed, including those on synchronized rowing and a Wiimote-controlled coordination task. Issues arising from complexity matching concern the nature of information exchange and social affordances, which might not be limited to a single scale but should be considered as extending across multiple timescales.
Ralf F. A. Cox
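A minimal detrended fluctuation analysis (DFA) sketch, the standard estimator of the fractal exponents that complexity matching compares across partners; the input here is white noise, for which the exponent should come out near 0.5:

    # Detrended fluctuation analysis: integrate the signal, detrend it in
    # windows of size s, and read the scaling exponent alpha from F(s) ~ s^alpha.
    import numpy as np

    def dfa(x, scales):
        y = np.cumsum(x - np.mean(x))            # integrated profile
        F = []
        for s in scales:
            n_seg = len(y) // s
            segs = y[:n_seg * s].reshape(n_seg, s)
            t = np.arange(s)
            rms = []
            for seg in segs:                     # linear detrend per segment
                coef = np.polyfit(t, seg, 1)
                rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
            F.append(np.sqrt(np.mean(rms)))
        return np.array(F)

    x = np.random.default_rng(0).normal(size=4096)
    scales = np.array([16, 32, 64, 128, 256, 512])
    F = dfa(x, scales)
    alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
    print("DFA exponent alpha ~", round(alpha, 2))   # ~0.5 for white noise

Complexity matching is then assessed by comparing the two partners' alpha values (or their whole fluctuation functions) across an interaction.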
40003 Affording social interaction and collaboration in musical joint action. [abstract]
Abstract: Many daily activities require the coordination of actions with others, including navigating a crowded sidewalk, dancing with a partner, and participating in musical groups. Like everyday social behavior, musical joint action emerges from the complex interaction of environmental and informational constraints, including those of the instruments and the performance context. Music improvisation in particular is more like everyday interaction in that dynamics emerge spontaneously without a rehearsed score or script. Here we examined how the structure of the musical context affords and shapes interactions between improvising musicians. Six pairs of professional piano players improvised with three different backing tracks while their movements were recorded using a wireless motion tracking system. Each backing track varied in rhythmic and harmonic information, ranging from a chord progression, to a single tone. Afterward, while viewing videos from the performance trials, musicians narrated how they decided what to play and when. Narratives were analyzed using grounded theory (a qualitative method) to identify themes. For backing tracks with more structure, themes included expertise and signaling; for backing tracks with less structure: personality, novelty, and freedom. Differences in movement coordination and playing behavior were evaluated using linear and non-linear time series methods, to provide an understanding of the multi-scale dynamics that create the potential for musical collaboration and creativity. Collectively, our findings indicate that each backing track afforded the emergence of different coordination dynamics with respect to how they played together, how they moved together, as well as their experience collaborating with each other. Musical improvisation therefore provides a way to understand how social interaction emerges from the structure of the behavioral context, and how this structure supports coordination and collaboration in everyday behavior.
Ashley Walton, Auriel Washburn, Michael Richardson and Anthony Chemero
40004 Symmetry-Breaking Dynamic of Multi-Agent Behavioral Coordination [abstract]
Abstract: How is the patterning of social, multi-action behavior organized? Who or what decides what joint-action possibilities or behavioral modes are afforded within a given task context? Is there a complementary relationship between the low-level physical laws that constrain the mechanics of socially situated, perceptual-motor behavior and the higher-level cognitive decision processes that define ongoing multi-agent (social) activity? Using a selection of complex systems phenomena from physics, biology, cognitive science, and computational cognition, we explore whether symmetry principles and group theory can provide a way of answering these questions. In particular, we detail how the theory of symmetry-breaking can be employed to both describe and understand the interrelated physical, neural and cognitive structures that underlie joint-action and how symmetry breaking bifurcations give rise to the complex and complementary nature of everyday social activity.
Michael J. Richardson and Rachel W. Kallen

Complexity History. Complexity for History and History for Complexity  (CHCF) Session 1

Schedule Top Page

Time and Date: 14:15 - 18:00 on 21st Sep 2016

Room: L - Grote Zaal

Chair: Andrea Nanetti

30000 Provenance and Validation from the Humanities to Automatic Acquisition of Semantic Knowledge and Machine Reading for News and Historical Sources Indexing/Summary [abstract]
Abstract: My keynote presents the research project carried out at NTU Singapore in collaboration with Microsoft Research and Microsoft Azure for Research. For the NTU research team, the real frontier research in world histories starts when we want to use computers to structure historical information, model historical narratives, simulate theoretical large-scale hypotheses, and incentivise world historians to use virtual assistants and/or engage them in teamwork using social media and/or seduce them with immersive spaces to provide new learning and sharing environments, in which new things can emerge and happen: "You do not know which will be the next idea. Just repeating the same things is not enough" (Carlo Rubbia, 1984 Nobel Prize in Physics, at Nanyang Technological University on January 19, 2016).
Chin-Yew Lin
30001 Exploiting Context in Cartographic Evolutionary Documents to Extract and Build Linked Spatial-Temporal Datasets [abstract]
Abstract: Millions of historical maps are in digital archives today. For example, the U.S. Geological Survey has created and scanned over 200,000 topographic maps covering a 125-year period. Maps are a form of "evolutionary visual documents" because they display landscape changes over long periods of time and across large areas. Such documents are of tremendous value because they provide a high-resolution window into the past at a continental scale. Unfortunately, without time-intensive manual digitization, scanned maps are unusable for research purposes. Map features, such as wetlands and roads, while readable by humans, are only available as images. This project will develop a set of open-source technologies and tools that allow users to extract map features from a large number of map sheets and track changes of features between map editions in a Geographical Information System. The resulting open-source tools will enable exciting new forms of inquiry in history, demography, economics, sociology, ecology, and other disciplines. The data produced by this project will be made publicly available and, through case studies, integrated with other historical archives. Spatially and temporally linked knowledge covering man-made and natural features over more than 125 years holds enormous potential for the physical and social sciences. The wealth of information contained in these maps is unique, especially for the time before the widespread use of aerial photography. The ability to automatically transform the scanned paper maps stored in large archives into spatio-temporally linked knowledge will create an important resource for social and natural scientists studying global change and other socio-geographic processes that play out over large areas and long periods of time. Publications, software, and datasets for this project will be made available on the project website: http://spatial-computing.github.io/unlocking-spatiotemporal-map-data.
Yao-Yi Chiang
30002 Comparing network models for the evolution of terrestrial connections in Central Italy (ca. 1175/1150-500 BC) [abstract]
Abstract: The period between the Late Bronze Age and the Archaic Age is a time of changes and developments in the Italian Peninsula which led to the creation of regional ethnic and political groups and to the formation of the first city-states. In the present study, we focus on the Tyrrhenian regions of Latium Vetus and Southern Etruria, analysing the evolution of the network of terrestrial routes as they have been hypothesised by scholars from the archaeological evidence. We want to investigate 1) the mechanisms that shaped past communication infrastructures through time; 2) whether they changed or stayed the same during the period considered. In particular, in order to understand to what extent the observed results are a consequence of either differences in the spatial distribution of settlements or dissimilarities in the process that generated those networks (cultural and political factors), we design three network models. Each model corresponds to a different hypothesis about the dominant mechanism underlying the creation of new connections. After locating the nodes at the positions inferred from the archaeological record, we start adding links according to a specific criterion. Once we have generated several synthetic versions of the networks, we compare them to the corresponding empirical system in order to determine which model fits the data better and is therefore more likely to resemble the actual forces at work. We find that, in the case of Southern Etruria, the model simulating a simple form of cooperation is able to reproduce with very good accuracy all the relevant features for all the Ages under study. On the contrary, in Latium Vetus, each model can reproduce some of the features while failing with others, depending on the Age. However, if we add a "rich get richer" bias to the cooperative model, its performance improves greatly.
Luce Prignano, Francesca Fulminante, Sergi Lozano, Ignacio Morer
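The abstract does not give the models' exact link-addition criteria; the sketch below only illustrates the general recipe described above: fix node positions, add links one by one according to a score, and optionally add a "rich get richer" bias (positions, scores and parameters are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
xy = rng.uniform(0, 100, size=(n, 2))      # synthetic settlement positions
dist = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)

def grow(n_links, alpha=0.0):
    """Add links one by one, favouring short distances; alpha > 0 adds a
    'rich get richer' bias toward already well-connected settlements."""
    deg = np.zeros(n)
    links = set()
    for _ in range(n_links):
        best, best_score = None, -np.inf
        for i in range(n):
            for j in range(i + 1, n):
                if (i, j) in links:
                    continue
                score = -dist[i, j] + alpha * (deg[i] + deg[j])
                if score > best_score:
                    best, best_score = (i, j), score
        links.add(best)
        deg[best[0]] += 1
        deg[best[1]] += 1
    return links, deg

_, deg_plain = grow(60, alpha=0.0)
_, deg_rich = grow(60, alpha=5.0)
print("max degree, distance-only:", deg_plain.max())
print("max degree, rich-get-richer:", deg_rich.max())
```

Comparing synthetic and empirical networks would then proceed feature by feature (degree distribution, total route length, and so on), as the abstract describes.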
30003 Hybridizing historiographies: modelling a blended complexity for history/history for complexity approach to understanding the past. [abstract]
Abstract: What additionalities does a hybrid approach to history for complexity and complexity for history offer to each discipline? Criticism of contemporary approaches to history tends to concentrate on perceived relativism, while the application of complex systems methodologies can be seen to dehumanise our narratives of the past. Both approaches offer risks and opportunities; here we explore how a blended model might offer additional understanding. We have modelled the historical correspondence network of William Colenso (1811-1899), printer, missionary, explorer, naturalist, and politician of New Zealand's early colonial period, using the addressee and location-of-writing meta-data from Colenso's letters to construct a co-location correspondence network. This network links recipients of Colenso's letters when he tended to write to them from the same set of locations, revealing several significant communities, and providing a naïve way to identify themes or topics in the corpus of Colenso's letters. We then draw upon a combined naïve and informed interpretation of the Colenso letter network, suggesting a model that acknowledges that, in utilizing tools and methods developed to understand complex systems in history, we re-focus the narrative of human history on empires, civilisations, big states, and cities. Complexity theoretical approaches can prioritise hegemonic discourses and re-marginalise marginalised histories. Our approach, grounded in a people-focussed network, will utilise critical tools of twentieth century historiography, particularly postcolonial theory, feminism, and subaltern discourses, to reassert voices from the margins. Whose history will this hybrid approach foreground?
Kate Hannah, Dion O'Neale
30004 Patterns In Globalization - Viewed through the lens of Trieste [abstract]
Abstract: Globalization is a phenomenon lasting centuries. Contributing factors, including the import and export dynamics of major nations, are many in number and complex in their interactions. This study considers the behavior of one of the world's 10 largest ports - Trieste - within the Austro-Hungarian Empire from the mid 19th century through to the start of World War I (WWI), a time of profound globalization. Trade in the mid 19th century largely followed the British World System; a free world market centered in London and its financial web. However this system was unstable, experiencing a long depression (1873-96), state defaults, and regular financial panics. Challenges from competitors, especially Germany, soon followed, and by the end of the 19th century the trade landscape had shifted, and a new nationalist era was ushered in. New borders appeared, trade restrictions were imposed, and strong cartels limited competition, within the multinational Austro-Hungarian Empire as well. The European powers competed for African resources and partitioned the continent. This age culminated in a denser cluster of wars and deeper crises, from WWI to the close of WWII in 1945. To understand how trade dynamics might evidence and interact with these various processes, information measures - including Shannon entropy and KL-divergence - were calculated on the distribution of imported and exported tonnages by nation over time and on the balance sheets of the Generali insurance company, the largest Austro-Hungarian insurer, from 1851 to 1910. The next phase of the project will include more detailed analysis, involving data on goods per country and Generali's marine insurance contracts.
Gaetano Dato, Ben Zhu, Simon Carrignon, Anjali Tarun, Evelyn Strombom, Rudi Minxha, Brian Ferguson, Tucker Ely, Philip Pika
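As a minimal worked example of the information measures mentioned above (the tonnage figures below are invented, not from the Trieste data):

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (bits) of a distribution given as raw counts/tonnages."""
    p = p / p.sum()
    return -np.sum(p[p > 0] * np.log2(p[p > 0]))

def kl_divergence(p, q):
    """KL divergence D(p || q); assumes q > 0 wherever p > 0."""
    p, q = p / p.sum(), q / q.sum()
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

# hypothetical import tonnages by partner nation in two years
tons_1860 = np.array([120., 80., 40., 30., 10.])
tons_1890 = np.array([60., 95., 70., 45., 25.])
print("H(1860) =", round(shannon_entropy(tons_1860), 3))
print("H(1890) =", round(shannon_entropy(tons_1890), 3))
print("KL(1890 || 1860) =", round(kl_divergence(tons_1890, tons_1860), 3))
```

Rising entropy would indicate trade spreading across more partners; a large KL divergence between consecutive periods would flag a shift in the trade landscape of the kind described above.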
30005 Historical Correspondence Networks [abstract]
Abstract: William Colenso (1811-1899) was a printer, missionary, explorer, naturalist, and politician in the early period of the colonisation of New Zealand. We have used the addressee and location-of-writing meta-data from Colenso's letters to construct a co-location correspondence network. The network links recipients of Colenso's letters when he tended to write to them from the same set of locations. The network reveals several significant communities. This suggests that it could provide a naïve way to identify themes or topics in the corpus of Colenso's letters. The dual network connects geographic locations where Colenso tended to write to the same particular sets of people. Again, clusters of locations within the network suggest themes in Colenso's work at these different locations. It is also possible to study how the network changes over time. In addition to giving an interesting visualisation of Colenso's correspondence, this gives us a potential way to look at whether themes in the correspondence were more strongly associated with where Colenso was or when he wrote. The construction of the networks is completely agnostic of the content of the letters. We will use digital methods such as text mining, as well as more traditional historical techniques, such as narrative analysis, to compare and contrast themes suggested by the network structure, with those from the content of the letters. We suggest that interesting liminality between naïve networks and known historiography might emerge, and that gaps revealed by this mixed methodological approach would create novel opportunities for understanding and contextualising the past.
Dion O’Neale, Kate Hannah
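A minimal sketch of the co-location construction described above, assuming hypothetical (addressee, location) metadata and using Jaccard overlap as a stand-in for the unspecified similarity criterion:

```python
from itertools import combinations
from collections import defaultdict

# hypothetical (addressee, location-of-writing) records from the letters
letters = [
    ("Hooker", "Napier"), ("Hooker", "Dannevirke"),
    ("Harding", "Napier"), ("Harding", "Dannevirke"),
    ("Grey", "Wellington"), ("Luff", "Wellington"), ("Luff", "Napier"),
]

locations = defaultdict(set)
for addressee, place in letters:
    locations[addressee].add(place)

# link two recipients when their sets of writing locations overlap enough
edges = {}
for a, b in combinations(locations, 2):
    jaccard = len(locations[a] & locations[b]) / len(locations[a] | locations[b])
    if jaccard >= 0.5:                 # threshold is an assumption
        edges[(a, b)] = jaccard

print(edges)
```

The dual network would be built the same way with the roles of recipients and locations swapped.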

Multilayer and Interconnected Networks: Applications (MINA)  (MINA) Session 2

Schedule Top Page

Time and Date: 14:15 - 18:00 on 21st Sep 2016

Room: M - Effectenbeurszaal

Chair: Alex Arenas

1006 Collective benefits in traffic during large events via the use of information technologies [abstract]
Abstract: Information technologies today can inform drivers about their alternatives for shortest paths from origins to destinations, but they do not contain incentives or information that promote collective benefits. To obtain such benefits, we need not only good estimates of how the traffic is formed but also targeted strategies that remove enough vehicles from the best possible roads in a feasible way. Moreover, reaching the target vehicle reduction is not trivial: it requires individual sacrifices such as drivers taking alternative routes, shifts in departure times or even changes in modes of transportation. The opportunity is that during large events (carnivals, festivals, races, etc.) the traffic inconveniences in large cities are unusually high, yet temporary, and the entire population may be more willing to adopt collective recommendations for social good. In this paper, we integrate for the first time big data resources to quantify the impact of events and propose targeted strategies for collective good at urban scale. In the context of the Olympic Games in Rio de Janeiro, we first predict the increase in traffic by integrating data from mobile phones, schedules and venues of sport matches, Airbnb, Waze and transit information. Second, we evaluate the impact of the Olympic Games on commuters' travel, and propose different route choice scenarios during the morning and evening peak hours. Moreover, we pinpoint the trips which contribute most to the global congestion and propose a reasonable collective reduction. Interestingly, we show that (i) following new route options with individual shortest paths can save more collective travel time than keeping routine routes, uncovering the positive value of information technologies during events; (ii) if a small targeted proportion of people from specific areas switch from driving to public transport, the collective travel time can be reduced to a great extent.
Marta Gonzalez, Massachusetts Institute of Technology (USA)
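The paper's calibrated models are not reproduced here; the toy calculation below only illustrates why removing a targeted fraction of drivers can yield large collective savings, using the standard Bureau of Public Roads (BPR) congestion function and invented demand figures:

```python
def bpr_time(free_time, flow, capacity):
    """BPR travel-time function: congestion grows steeply past capacity."""
    return free_time * (1 + 0.15 * (flow / capacity) ** 4)

demand = 10_000                                   # peak-hour vehicles (made up)
routine = dict(free_time=20.0, capacity=6_000)    # habitual route (minutes)
alternative = dict(free_time=25.0, capacity=8_000)

for p in [0.0, 0.1, 0.2, 0.3]:                    # fraction of drivers who switch
    f_routine, f_alt = demand * (1 - p), demand * p
    total = (f_routine * bpr_time(routine["free_time"], f_routine, routine["capacity"])
             + f_alt * bpr_time(alternative["free_time"], f_alt, alternative["capacity"]))
    print(f"p = {p:.1f}: collective travel time = {total / 60:,.0f} vehicle-hours")
```

Because the BPR function is strongly convex, shifting even 10-20% of drivers off the overloaded route cuts the collective travel time substantially, which is the qualitative effect the abstract reports.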
1007 Transportation systems: a multilayer network approach [abstract]
Abstract: Multilayer networks provide a natural framework for studying transportation systems. Within this approach I will show, using practical examples, how we can understand the structure of these systems and quantify the impact of the coupling between different modes. I will also discuss how a simple dynamical model on these systems allows us to explain some statistical patterns observed for human mobility.
Marc Barthelemy, Institut de Physique Théorique, CEA, CNRS (France)
1008 Generalized mapping of dynamics in multilayer and interconnected systems [abstract]
Abstract: To connect structure and dynamics in interconnected systems, flow-based methods have proven useful for identifying modular dynamics in weighted and directed networks that capture constraints on flow processes. However, many interconnected systems consist of elements with multiple layers of interactions. The information-theoretic and flow-based method known as the map equation was recently generalized to multilayer networks. However, it has been unclear how to further generalize the method to any type of layered network, such as multiplex networks, interconnected multiplex networks, interdependent networks, and interconnected multilayer networks. Here we show that discriminating between physical nodes for describing the concrete elements that flow entities can visit and abstract state nodes for representing the dynamics allows for completely generalized mapping. The generalized mapping framework applies to dynamics in multilayer and interconnected systems as well as any higher-order Markov chain model. We demonstrate how representations that are true to the system at hand provide the most effective analysis.
Ludvig Bohlin, Manlio De Domenico, Christian Persson, Daniel Edler and Martin Rosvall
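A sketch of the physical-node/state-node distinction described above: state nodes (physical node, layer) carry the dynamics, and visit rates are aggregated onto physical nodes. The toy network and weights are invented, and the map equation computation itself is not shown:

```python
from collections import defaultdict

# physical nodes are the concrete elements flow can visit; state nodes
# (physical node, layer) represent the dynamics; weights are transition
# probabilities out of each state node (invented example)
state_edges = {
    ("a", 1): [(("b", 1), 0.7), (("a", 2), 0.3)],
    ("b", 1): [(("a", 1), 1.0)],
    ("a", 2): [(("c", 2), 1.0)],
    ("c", 2): [(("a", 1), 0.5), (("c", 1), 0.5)],
    ("c", 1): [(("a", 1), 1.0)],
}

def visit_rates(state_edges, steps=200):
    """Power-iteration estimate of stationary visit rates of state nodes,
    then aggregated onto their physical nodes."""
    rate = {s: 1.0 / len(state_edges) for s in state_edges}
    for _ in range(steps):
        nxt = defaultdict(float)
        for s, out in state_edges.items():
            for t, w in out:
                nxt[t] += rate[s] * w
        rate = nxt
    phys = defaultdict(float)
    for (node, _layer), r in rate.items():
        phys[node] += r
    return dict(phys)

print(visit_rates(state_edges))
```

The same representation accommodates higher-order Markov chains: a state node then encodes a physical node together with memory of where the flow came from.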
1009 Analysis of Contagions in Multi-layer and Multiplex Networks [abstract]
Abstract: Dynamical processes on complex networks have been an active research area over the past decade. In this talk, we will present recent results on two major and related classes of dynamical processes: i) information propagation, also known as simple contagion, and ii) influence propagation, also known as complex contagion. With regard to simple contagions, we will consider a clustered multi-layer network model to capture the fact that information may propagate simultaneously over multiple social networks. Assuming that information propagates according to the SIR model and with different information transmissibility across the networks, we give results for the conditions, probability, and size of information epidemics. We present analogous results for complex contagions over clustered multiplex networks under a generalized linear threshold model. Taken together, these results demonstrate several non-trivial effects of clustering on contagion dynamics. Last but not least, we compare the dynamics of complex contagions over multiplex networks and their monoplex projections and demonstrate that ignoring link types and aggregating network layers may lead to inaccurate conclusions about contagion dynamics, particularly when the correlation of degrees between layers is high.
Osman Yagan, Carnegie Mellon University (USA)
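As an illustration of simple contagion with layer-dependent transmissibility (not the talk's analytical model; the layer construction and all parameters are invented):

```python
import random

random.seed(42)
n = 2000

def random_layer(n, k):
    """Erdős-Rényi-style layer with mean degree k, as a crude stand-in
    for the clustered multi-layer model discussed in the talk."""
    adj = [[] for _ in range(n)]
    for _ in range(n * k // 2):
        u, v = random.randrange(n), random.randrange(n)
        if u != v:
            adj[u].append(v)
            adj[v].append(u)
    return adj

layers = [(random_layer(n, 4), 0.15),   # (layer adjacency, transmissibility T)
          (random_layer(n, 2), 0.40)]

# discrete-time SIR: each infected node gets one transmission attempt
# per neighbor per layer, then recovers
status = ["S"] * n
seed = random.randrange(n)
status[seed] = "I"
frontier = [seed]
while frontier:
    new = []
    for u in frontier:
        for adj, T in layers:
            for v in adj[u]:
                if status[v] == "S" and random.random() < T:
                    status[v] = "I"
                    new.append(v)
        status[u] = "R"
    frontier = new

print("final epidemic size:", status.count("R") / n)
```

Aggregating the two layers into one monoplex network with a single average transmissibility would change the predicted epidemic size, which is the kind of discrepancy the talk quantifies.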
1010 Strategic growth of multilayer airline networks [abstract]
Abstract: The airline transportation system is a paradigmatic example of a multiplex network, where nodes represent airports, edges stand for direct flights between two locations, and each layer contains all the routes operated by the same carrier. In this work we propose a genuinely multiplex model of network growth, based on a trade-off between the maximisation of the number of potential travellers and the minimisation of competition on each route. By using real data about the six continental air transportation networks, we show that the model is able to reproduce quite accurately the structural properties of these systems, and in particular the observed patterns of edge overlap and node activity distribution. The results suggest that each airline tends to organise its network in order to optimise a trade-off between efficiency and competition, and that the networks of all the airlines of a continent are indeed placed very close to the theoretical Pareto front in the efficiency-competition plane. We finally explain how this simple model can be used to suggest which routes should be added to an existing network in order to improve the overall performance of an airline. This work sheds light on the fundamental role played by multiplexity in shaping the structure of continental air transportation systems, and provides interesting new insight into effective strategies for network optimisation based exclusively on structural considerations.
Andrea Santoro, Vito Latora, Giuseppe Nicosia and Vincenzo Nicosia
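The model's exact objective function is not given in the abstract; the sketch below illustrates the general idea of scoring candidate routes by potential travellers minus a competition penalty, with invented airports, routes and parameters:

```python
import itertools

# hypothetical airports: name -> catchment population (millions)
airports = {"AMS": 8.0, "FCO": 6.5, "MAD": 7.0, "OSL": 2.0}

# routes already operated, per airline (toy data)
operated = {
    "airline_A": {("AMS", "FCO")},
    "airline_B": {("AMS", "FCO"), ("AMS", "MAD")},
}

def route_score(route, airline, beta=3.0):
    """Trade-off score: potential travellers minus a competition penalty,
    a minimal reading of the efficiency-competition trade-off above."""
    a, b = route
    travellers = airports[a] * airports[b]
    competitors = sum(route in routes
                      for al, routes in operated.items() if al != airline)
    return travellers - beta * competitors

candidates = [tuple(sorted(r)) for r in itertools.combinations(airports, 2)
              if tuple(sorted(r)) not in operated["airline_A"]]
best = max(candidates, key=lambda r: route_score(r, "airline_A"))
print("suggested next route for airline_A:", best)
```

Sweeping the penalty weight beta traces out a Pareto front in the efficiency-competition plane, against which real airline networks can be compared.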
1011 Quantifying topology transfer in interconnected networks of phase oscillators using relaxation time [abstract]
Abstract: In the last decade or so, the study of interconnected complex networks has gained much interest across the scientific community. In neuroscience, in particular, systems are often composed of many networks that interact on different spatial and temporal scales. This may result in activity that is synchronized within and/or across networks, in its simplest form in-phase oscillations. We studied numerically the synchronizability of two interconnected oscillator networks of finite size. In contrast to other studies that assessed synchronizability by the asymptotic state of the (local) Kuramoto order parameter, we quantified it by the (local) relaxation time towards that asymptotic state. In view of finite-size effects, the latter was expected to be quite erratic, which motivated a statistical approach often employed to quantify stochastic dynamics: we determined the serial-lag auto-correlation function of the simulated time series of the networks' order parameters. The envelope of the auto-correlation function decayed exponentially (as in the case of a linear response system), which allowed for estimating the relaxation time. We first tested the procedure on two fully connected, symmetric networks. Our numerical estimates of the relaxation times closely resembled the analytically known bifurcation scheme. Next we changed the topology of one of the networks to be random (Erdős-Rényi), scale free (Barabási-Albert) or small world (Newman-Watts) while guaranteeing that the asymptotic value of local synchronization in that network remained constant. Depending on the relation between average degree and within-network vis-à-vis between-network coupling strength, topology transferred from one network to the other. Quantifying synchronizability through the (local) relaxation time appears to be a useful tool when it comes to interactions between oscillator networks.
Nicolás Deschle and Andreas Daffertshofer
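A minimal sketch of the estimation procedure described above, applied to a synthetic AR(1) stand-in for a simulated order parameter (the true relaxation time of the synthetic series is known, so the estimate can be checked):

```python
import numpy as np

def relaxation_time(series, max_lag=100):
    """Fit an exponential to the envelope of the autocorrelation function
    and return the decay (relaxation) time in samples."""
    x = series - series.mean()
    acf = np.array([np.corrcoef(x[:-lag], x[lag:])[0, 1]
                    for lag in range(1, max_lag)])
    env = np.abs(acf)
    lags = np.arange(1, max_lag)
    good = env > 0.05                      # avoid fitting the noise floor
    slope = np.polyfit(lags[good], np.log(env[good]), 1)[0]
    return -1.0 / slope

# AR(1) surrogate: known relaxation time = -1/ln(0.98), roughly 49.5 samples
rng = np.random.default_rng(3)
r = np.zeros(5000)
for t in range(1, r.size):
    r[t] = 0.98 * r[t - 1] + rng.standard_normal()
print("estimated relaxation time:", round(relaxation_time(r), 1))
```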

Santa Fe Institute Workshop  (SFIW) Session 2

Schedule Top Page

Time and Date: 14:15 - 18:00 on 21st Sep 2016

Room: N - Graanbeurszaal

Chair: Stefan Thurner

46003 Predicting the evolution of technology [abstract]
Abstract: Technological progress is the ultimate driver of economic growth, and forecasting technological progress is one of the pivotal issues for climate mitigation. While there is a rich anecdotal literature on technological change, there is still no overarching theory. Technology evolves under descent with variation and selection, but under very different rules than in biology. The data available to study technology are also very different: On one hand we have historical examples giving the performance of a few specific technologies over spans of centuries; on the other hand, the collection of information is much less systematic than it is for fossils. There is no good taxonomy, so in a sense the study of technological evolution is pre-Linnaean. This may be due to the complexities of horizontal information transfer, which plays an even bigger role for technology than it does for bacteria. There are nonetheless empirical laws for predicting the performance of technologies, such as Moore’s law and Wright’s law, that can be used to make quantitative distributional forecasts and address questions such as “What is the likelihood that solar energy will be cheaper than nuclear power 20 years from now?”. I will discuss the essential role of the network properties of technology, and show how 220 years of US patent data can be used as a “fossil record” to identify technological eras. Finally I will discuss new approaches for understanding technological progress that blend ideas from biology and economics.
J Doyne Farmer
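As a worked example of the empirical laws mentioned above: Wright's law relates unit cost to cumulative production as a power law. The numbers below are illustrative, not from the talk:

```python
import numpy as np

# Wright's law: unit cost falls as a power of cumulative production,
#   c(x) = c0 * x**(-b)
# b = 0.32 corresponds to roughly a 20% cost reduction per doubling,
# since 2**(-0.32) is about 0.80 (illustrative values, not from the talk)
c0, b = 100.0, 0.32
cumulative = np.array([1, 10, 100, 1000, 10000], dtype=float)
cost = c0 * cumulative ** (-b)
for x, c in zip(cumulative, cost):
    print(f"cumulative production {x:>8.0f}: unit cost {c:6.2f}")
# a distributional forecast would add noise to log-cost and propagate it
# forward, yielding a likelihood for statements like the solar-vs-nuclear one
```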
46004 Understanding of power laws in path dependent processes [abstract]
Abstract: Where do power laws come from? There exist a handful of famous mechanisms that dynamically lead to scaling laws in complex dynamical systems, including preferential attachment processes and self-organized criticality. One extremely simple and transparent mechanism has so far been overlooked. We present a mathematical theorem stating that every stochastic process that reduces its number of possible outcomes over time, a so-called sample-space-reducing (SSR) process, leads to power laws in its visiting frequency distributions. We show that targeted diffusion on networks is exactly such an SSR process, and we can thus understand the origin of the power law visiting distributions that are ubiquitous in nature. We further comment on several examples where SSR processes can explain the origin of observed scaling laws, including search processes, language formation and fragmentation processes.
Stefan Thurner
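The SSR mechanism is simple enough to simulate directly; the sketch below checks the predicted Zipf law (visiting frequency of state i proportional to 1/i) numerically:

```python
import random
from collections import Counter

random.seed(7)

def ssr_run(n_states=10_000):
    """One sample-space-reducing run: from state i, jump uniformly to a
    strictly lower state until reaching 1; record every state visited."""
    visits, state = [], n_states
    while state > 1:
        state = random.randint(1, state - 1)
        visits.append(state)
    return visits

counts = Counter()
for _ in range(20_000):
    counts.update(ssr_run())

# Zipf's law predicts counts[i] * i to be roughly constant
for i in [1, 2, 4, 8, 16]:
    print(f"state {i:2d}: frequency x i (normalised) = "
          f"{counts[i] * i / counts[1]:.2f}")
```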
46005 TBA Geoffrey West

Complexity in personalised dynamical networks for mental health  (CPDN) Session 2

Schedule Top Page

Time and Date: 14:15 - 18:00 on 21st Sep 2016

Room: P - Keurzaal

Chair: Lourens Waldorp

41005 Changing Dynamics, Changing Networks [abstract]
Abstract: A network captures how components in a system interact. Take humans as an example. Humans are complex dynamic systems, whose emotions, cognitions, and behaviors constantly fluctuate and interact over time. Networks in this case represent, for example, the interaction or dynamics between emotions over time. However, in time the dynamics of a process are themselves prone to change. Consider, for example, external factors like stress, which can lower the self-predictability and interaction of emotions and thus change the dynamics. In this case, there should not be a single network or static figure of the emotion dynamics, but a network movie representing the evolution of the network over time. We have developed a new data-driven model that can explicitly model the change in temporal dependency within an individual without pre-existing knowledge of the nature of the change: the semi-parametric time-varying vector autoregressive method (TV-VAR). The TV-VAR proposed here is based on the easily applicable and well-studied generalized additive modeling (GAM) techniques, available in the software R. Using the semi-parametric TV-VAR one can detect and model changing dynamics or network movies for a single individual or system.
Laura F. Bringmann
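The published TV-VAR method uses GAMs in R; the sketch below conveys the same idea with a Gaussian-kernel-weighted VAR(1) in Python, applied to a synthetic series whose cross-lagged effect changes halfway through (all data and parameters are invented):

```python
import numpy as np

def tv_var1(X, bandwidth=50.0):
    """Time-varying VAR(1): at each time point, fit x_t = B(t) x_{t-1} + e
    by weighted least squares with Gaussian weights centred on t."""
    T, d = X.shape
    times = np.arange(1, T)
    Xlag, Xnow = X[:-1], X[1:]
    coefs = np.empty((T - 1, d, d))
    for idx, t in enumerate(times):
        w = np.exp(-0.5 * ((times - t) / bandwidth) ** 2)
        A = Xlag * w[:, None]
        B = np.linalg.lstsq(A.T @ Xlag, A.T @ Xnow, rcond=None)[0]
        coefs[idx] = B.T          # coefs[idx][i, j]: effect of x_{t-1,j} on x_{t,i}
    return coefs

# synthetic "emotions" whose coupling strengthens halfway through (e.g. stress)
rng = np.random.default_rng(11)
T = 400
X = np.zeros((T, 2))
for t in range(1, T):
    a = 0.1 if t < T // 2 else 0.6       # cross-lagged effect changes over time
    X[t, 0] = 0.3 * X[t-1, 0] + a * X[t-1, 1] + rng.standard_normal()
    X[t, 1] = 0.3 * X[t-1, 1] + rng.standard_normal()

coefs = tv_var1(X)
print("cross-effect early:", coefs[50, 0, 1].round(2),
      " late:", coefs[340, 0, 1].round(2))
```

Plotting the estimated coefficients over time yields exactly the "network movie" described above.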
41006 Mental disorders as complex systems: empirical tests [abstract]
Abstract: Background: Mental disorders are influenced by such a complex interplay of factors that it is extremely difficult to develop accurate predictive models. Complex dynamical system theory may provide a new route to assessment of personalized risk for transitions in depression. In complex systems early warning signals (EWS), signaling critical slowing down of the system, are found to precede critical transitions. Experience Sampling Methodology (ESM) may help to empirically test whether principles of complex dynamical systems also apply to mental disorders. Method: ESM techniques were employed to examine whether EWS can be detected in intra-individual change patterns of affect. Previously reported EWS are rising autocorrelation, variance and strength of associations between elements in the system. Results: Empirical findings support the idea that higher levels of autocorrelation, variance and connection strength may indeed function as EWS for mood transitions. Results will be visualized in network models during the presentation. Conclusion: Empirical findings, as obtained with ESM, suggest that transitions in mental disorders may behave according to principles of complex dynamical system theory. This may change our view upon mental disorders and yield novel possibilities for personalized assessment of risk for transition.
Marieke Wichers
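A minimal sketch of the two early warning signals named above, rolling lag-1 autocorrelation and rolling variance, computed on a synthetic mood series whose inertia rises toward a transition (the data are invented, not ESM recordings):

```python
import numpy as np

def rolling_ews(x, window=100):
    """Rolling lag-1 autocorrelation and variance: two early warning
    signals of critical slowing down."""
    ac, var = [], []
    for start in range(len(x) - window):
        w = x[start:start + window]
        ac.append(np.corrcoef(w[:-1], w[1:])[0, 1])
        var.append(w.var())
    return np.array(ac), np.array(var)

# synthetic mood series drifting toward a transition (rising inertia)
rng = np.random.default_rng(5)
n = 1000
phi = np.linspace(0.2, 0.97, n)        # autoregression creeps up over time
mood = np.zeros(n)
for t in range(1, n):
    mood[t] = phi[t] * mood[t - 1] + rng.standard_normal()

ac, var = rolling_ews(mood)
print("autocorrelation early/late:", ac[:50].mean().round(2), ac[-50:].mean().round(2))
print("variance early/late:      ", var[:50].mean().round(2), var[-50:].mean().round(2))
```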
41007 Estimating Time-Varying Mixed Graphical Models in High-Dimensional Data [abstract]
Abstract: Graphical models have become a popular way to abstract complex systems and gain insights into relational patterns among observed variables. For temporally evolving systems, time-varying graphical models offer additional insights as they provide information about organizational processes, information diffusion, vulnerabilities and the potential impact of interventions. In many of these situations the variables of interest do not follow the same type of distribution; for instance, one might be interested in the relations between physiological and psychological measures (continuous) and the type of drug (categorical) in a medical context. We present a novel method based on generalized covariance matrices and kernel smoothed neighborhood regression to estimate time-varying mixed graphical models in a high-dimensional setting. In addition to our theory, we present a freely available software implementation, performance benchmarks in realistic situations and an illustration of our method using a dataset from the field of psychopathology.
Jonas Haslbeck
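A sketch of the kernel-smoothed neighborhood regression idea for the mixed case described above: a categorical node is regressed on continuous neighbors with Gaussian time weights (synthetic data; the authors' estimator based on generalized covariance matrices is not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(13)
T = 600
physio = rng.standard_normal((T, 2))            # continuous variables
# binary "drug" variable whose dependence on physio[:, 0] switches on mid-study
logit = np.where(np.arange(T) < T // 2, 0.0, 2.0) * physio[:, 0]
drug = rng.random(T) < 1 / (1 + np.exp(-logit))

def tv_neighborhood(t0, bandwidth=60.0):
    """Kernel-smoothed neighborhood regression for the categorical node:
    logistic regression with Gaussian time weights centred on t0."""
    w = np.exp(-0.5 * ((np.arange(T) - t0) / bandwidth) ** 2)
    model = LogisticRegression()
    model.fit(physio, drug, sample_weight=w)
    return model.coef_[0]

print("edge weights early:", tv_neighborhood(100).round(2))
print("edge weights late: ", tv_neighborhood(500).round(2))
```

Repeating this nodewise over all variables, with the appropriate regression family per variable type, recovers a time-varying mixed graph.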
41008 Mean field dynamics of graphs: Evolution of probabilistic cellular automata on different types of graphs and an empirical example. [abstract]
Abstract: We describe the dynamics of networks using one-dimensional discrete time dynamical systems theory obtained from a mean field approach to (elementary) probabilistic cellular automata (PCA). Often the mean field approach is used on a regular graph (a grid or torus) where each node has the same number of edges and the same probability of becoming active. We consider finite elementary PCA where each node has two states (two-letter alphabet): "active" or "inactive" (0/1). We then use the mean field approach to describe the dynamics of the PCA on a random and a small-world graph. We verified the accuracy of the mean field by means of a simulation study. Results showed that the mean field accurately estimates the percentage of active nodes (density) across various simulation conditions, and thus performs well when non-regular network structures are under consideration. The application we have in mind is that of psychopathology. The mean field approach then allows possible explanations of "jumping" behaviour in depression, for instance. We show with an extensive time-series dataset how the mean field is applied and how the risk for phase transitions can be assessed.
Jolanda Kossakowski
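A sketch of the one-dimensional mean-field map for a two-state PCA on a k-regular graph; the activation rule below is invented for illustration, and the random and small-world cases would replace the binomial by the appropriate degree distribution:

```python
from math import comb

def rule(m, k, p_spont=0.01, p_spread=0.9):
    """Probability of being active next step given m of k neighbours are
    active (an illustrative rule, not the paper's)."""
    return p_spont if m == 0 else p_spread * m / k

def mean_field_step(rho, k):
    """One iteration of the one-dimensional mean-field map: average the
    rule over a Binomial(k, rho) number of active neighbours."""
    return sum(comb(k, m) * rho**m * (1 - rho)**(k - m) * rule(m, k)
               for m in range(k + 1))

rho, k = 0.5, 8
for _ in range(100):                  # iterate the map toward a fixed point
    rho = mean_field_step(rho, k)
print("mean-field density of active nodes:", round(rho, 3))
```

Fixed points of this map correspond to stable densities of, say, active symptoms; when parameter changes make a fixed point disappear, the density "jumps" to another branch, which is the phase-transition risk referred to above.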
41009 Cinematic theory of cognition and cognitive phase transitions modeled by random graphs and networks [abstract]
Abstract: The Cinematic Theory of Cognition (CTC) postulates that cognition is a dynamical process manifested in the sequence of complex metastable patterns. Each consecutive pattern can be viewed as a frame in a movie, while the transition from one frame to the other acts as the movie shutter. Experimental evidence indicates that each pattern is maintained for about 100-200 ms (theta rate), while the transition from one pattern to the other is rapid (10-20 ms). This talk will address the following issues: 1. Experimental evidence of the CTC. Experiments involve intracranial ECoG of animals and human patients in preparation for epilepsy surgery, as well as noninvasive scalp EEG of human volunteers. 2. Dynamical systems theory of cognition and neurodynamics. Accordingly, the brain is viewed as a complex system with a trajectory exploring a high-dimensional attractor landscape. Experimentally observed, metastable patterns represent brain states corresponding to the meaning of the environmental inputs, while the transitions signify the "aha" moment of deep insights and decision. 3. Modeling of the sequence of metastable patterns as phase transitions in the brain networks and a large-scale graph. Synchronization-desynchronization transitions with singular dynamics are described in the brain graph as a dissipative system. The hypothesis is made that the observed long-range correlations are neural correlates of cognition (NCC).
Robert Kozma

Self-organized patterns on complex networks  (SPCN) Session 2

Schedule Top Page

Time and Date: 14:15 - 18:00 on 21st Sep 2016

Room: Z - Zij foyer

Chair: Timoteo Carletti

25004 Chimera states: intriguing patterns in complex networks [abstract]
Abstract: Chimera states are complex spatio-temporal patterns that consist of coexisting domains of spatially coherent and incoherent dynamics. This counterintuitive phenomenon was first observed in 2002 in systems of identical oscillators with symmetric coupling topology. During the last decade, chimera states have been theoretically investigated in a wide range of networks, where different kinds of coupling schemes varying from regular nonlocal to completely random topology have been considered. Potential applications of chimera states in nature include the phenomenon of unihemispheric sleep in birds and dolphins, bump states in neural systems, power grids, and social systems. We discuss the current state of the art in studies of chimera states, and demonstrate recent findings. In particular, we analyze properties of chimera states in systems of nonlinear oscillators, the role of local dynamics and network topologies. We also address the robustness of chimera states to inhomogeneities, and possible strategies of their control.
I. Omelchenko
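Chimera states can be reproduced in a few lines in the classic setting of nonlocally coupled identical phase oscillators on a ring; whether a chimera actually forms depends on coupling range, phase lag and initial conditions (the parameter values below follow common choices in the literature, not the talk):

```python
import numpy as np

# ring of identical phase oscillators, each coupled to its R nearest
# neighbours on either side, with phase lag alpha close to pi/2
N, R, alpha, dt = 256, 90, 1.46, 0.05
rng = np.random.default_rng(2)
# Gaussian-modulated random initial phases: a standard seed for chimeras
theta = (rng.uniform(-np.pi, np.pi, N)
         * np.exp(-((np.arange(N) - N / 2) / (0.25 * N)) ** 2))

neighbors = np.array([[(i + d) % N for d in range(-R, R + 1) if d != 0]
                      for i in range(N)])

for _ in range(4000):                       # Euler integration
    coupling = np.sin(theta[neighbors] - theta[:, None] - alpha).mean(axis=1)
    theta = theta + dt * coupling

# local order parameter: near 1 in the coherent domain, lower in the incoherent one
z = np.abs(np.exp(1j * theta[neighbors]).mean(axis=1))
print("local coherence: max", z.max().round(2), " min", z.min().round(2))
```

A pronounced gap between the maximum and minimum local coherence indicates coexisting coherent and incoherent domains, the defining signature of a chimera state.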
25005 Persistent Cascades: Detecting the fundamental patterns of information spread in a social network [abstract]
Abstract: We define a new structural property of large-scale communication networks consisting of the persistent patterns of communication among users. We claim these patterns represent a best estimate of real information spread, and term them "persistent cascades." Using metrics of inexact tree matching, we group these cascades into classes which we then argue represent the fundamental communication structure of a local network. This differs from existing work in that (1) we are focused on recurring patterns among specific users, not abstract motifs (e.g. the prevalence of "triangles" or other structures in the graph, regardless of user), and (2) we allow for inexact matching (not necessarily isomorphic graphs) to better account for the noisiness of human communication patterns. We find that analysis of these classes of cascades reveals new insights about information spread and the influence of certain users, based on three large mobile phone record datasets. For example, we find distinct groups of "weekend" vs "workweek" spreaders not evident in the standard aggregated network. Finally, we create the communication network induced by these persistent structures, and we show the effect this has on measurements of centrality or diffusion.
S. Morse
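The cascade-extraction and inexact-matching machinery is the paper's own; the sketch below only illustrates the shape of such a pipeline, extracting relay cascades from timestamped calls and grouping exact repeats (exact signature matching stands in for the paper's inexact tree matching; all data are invented):

```python
from collections import defaultdict

# timestamped calls: (time in minutes, caller, callee) -- toy data
calls = [(0, "a", "b"), (10, "b", "c"), (15, "b", "d"),
         (300, "a", "b"), (305, "b", "c"), (320, "b", "d"),
         (600, "e", "f")]

DELTA = 60  # relay window: information is passed on within 60 minutes

def extract_cascades(calls, delta=DELTA):
    """Greedy cascade extraction: a call continues a cascade if its caller
    was reached within the last `delta` minutes inside that cascade."""
    cascades, last_heard = [], []       # last_heard[i]: {user: time} per cascade
    for t, u, v in sorted(calls):
        for i, heard in enumerate(last_heard):
            if u in heard and t - heard[u] <= delta:
                cascades[i].append((u, v))
                heard[v] = t
                break
        else:
            cascades.append([(u, v)])
            last_heard.append({u: t, v: t})
    return cascades

# group cascades by edge structure to find recurring ("persistent") patterns
signature_counts = defaultdict(int)
for c in extract_cascades(calls):
    signature_counts[frozenset(c)] += 1
for sig, n in signature_counts.items():
    print(n, sorted(sig))
```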
25006 Nestedness in Communication Networks: From Information Exchange to Topology [abstract]
Abstract: We develop a dynamic network formation model that explains the observed nestedness in email communication networks inside organizations. Utilizing synchronization, we enhance the model of Konig et al. (2014) with dynamic communication patterns. By endogenizing the probability of the removal of agents we propose a theoretical explanation of why some agents become more important to a firm's informal organization than others, despite being ex ante identical. We also propose a theoretical framework for measuring the coherence of internal email communication and the impact of communication patterns on the informal organization structure as agents come and go. In situations with a high agent turnover rate, networks with high hierarchy outperform what we term "egalitarian" networks (i.e. all agents are of equal degree) for communication efficiency and robustness. In contrast, in situations with low agent turnover, networks with low hierarchy outperform what we term "totalitarian" networks for communication efficiency and robustness. We derive a trade-off that accounts for the network communication performance in terms of both measures. Using the example of a consulting firm we show that the model fits real-world email communication networks.
A. Grimm