
Foundations  (F) Session 11


Time and Date: 16:00 - 17:20 on 22nd Sep 2016

Room: A - Administratiezaal

Chair: Rick Quax

311 Random walk hierarchy measure: What is more hierarchical, a chain, a tree or a star? [abstract]
Abstract: Signs of hierarchy are prevalent in a wide range of systems in nature and society. One of the key problems is quantifying the importance of hierarchical organisation in the structure of the network representing the interactions or connections between the fundamental units of the studied system. Although a number of notable methods are already available, the vast majority treat all directed acyclic graphs as maximally hierarchical. Here we propose a hierarchy measure based on random walks on the network. The novelty of our approach is that directed trees corresponding to multi-level pyramidal structures obtain higher hierarchy scores than directed chains and directed stars. Furthermore, in the thermodynamic limit the hierarchy measure of regular trees converges to a well-defined limit depending only on the branching number. When applied to real networks, our method is computationally very efficient, as the result can be evaluated with arbitrary precision by subsequent multiplications of the transition matrix describing the random walk process. In addition, tests on real-world networks provided very intuitive results; e.g., the trophic levels obtained from our approach on a food web were highly consistent with former results from ecology.
Gergely Palla and Daniel Czegel
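The computational core the abstract describes, repeated multiplication by the random-walk transition matrix, can be sketched in a few lines. The graphs below and the convention that sinks are absorbing are illustrative choices only, not the authors' actual hierarchy measure:

```python
def transition_matrix(adj):
    """Row-stochastic transition matrix of a simple random walk on `adj`
    (dict: node -> list of out-neighbours); sink nodes get a self-loop."""
    nodes = sorted(adj)
    idx = {u: i for i, u in enumerate(nodes)}
    n = len(nodes)
    T = [[0.0] * n for _ in range(n)]
    for u, nbrs in adj.items():
        if nbrs:
            p = 1.0 / len(nbrs)
            for v in nbrs:
                T[idx[u]][idx[v]] += p
        else:
            T[idx[u]][idx[u]] = 1.0
    return T

def evolve(dist, T, steps):
    """Walk distribution after `steps` multiplications by T."""
    n = len(T)
    for _ in range(steps):
        dist = [sum(dist[i] * T[i][j] for i in range(n)) for j in range(n)]
    return dist

# Two of the structures compared in the talk title:
star = {0: [1, 2, 3], 1: [], 2: [], 3: []}   # hub pointing to three leaves
chain = {0: [1], 1: [2], 2: [3], 3: []}      # directed chain
d_star = evolve([1.0, 0.0, 0.0, 0.0], transition_matrix(star), 3)
d_chain = evolve([1.0, 0.0, 0.0, 0.0], transition_matrix(chain), 3)
```

The resulting distributions differ markedly between the star (mass split evenly over the leaves after one step) and the chain (mass marching node by node), which is the kind of walk statistic a hierarchy score can be built on.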
10 The effects of the interplay between nodes’ activity and attractiveness on random walks in time-varying networks [abstract]
Abstract: Several real-world spreading and diffusion processes take place on time-varying networks. In general, the process dynamics are affected by changes of the topological structure. To understand the impact of temporal connectivity patterns, several studies have focused on the analysis of random walks, as they represent a paradigm for all diffusion processes. It was shown that temporal connectivity patterns introduce features not found in the equivalent processes on quenched or annealed networks. The case of random walks on activity-driven networks, where each node engages in interaction according to a fixed activity rate, is particularly transparent, since its stationary state was derived analytically. However, this model accounts only for the heterogeneous activity of nodes, irrespective of their potential to attract other connections. Here, we focus on the role played by the distribution of nodes' attractiveness, clarifying its impact on the diffusion of random walk processes and on the transport dynamics. We introduce a time-varying network model where each node is characterized by a fixed activity rate, giving the probability of engaging in interaction per time unit, and a fixed attractiveness rate, specifying the probability of receiving a connection from an active node. We derive analytically the stationary state of the process and study the interplay between the distributions of nodes' activity and attractiveness. Interestingly, the introduction of heterogeneous attractiveness distributions substantially alters the dynamical properties of the process and creates a rich phenomenology. Models based on heterogeneous attractiveness were introduced to explain features of face-to-face interaction networks, where attractiveness may account for individuals' different degrees of social appeal.
By clarifying the impact of this characteristic network feature on the diffusion of random walks, our findings can potentially shed light on real-world dynamical processes such as opinion and epidemics spreading on social networks.
Laura Maria Alessandretti, Andrea Baronchelli and Nicola Perra
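The generative rule described above (fire with probability given by activity, choose the target with probability proportional to attractiveness) is easy to prototype. The function name and parameters below are illustrative, not taken from the paper:

```python
import random

def activity_attractiveness_snapshot(activity, attract, m=1, rng=random):
    """One time step of a toy activity-driven network with attractiveness:
    node i fires with probability activity[i]; a firing node sends m links
    to targets drawn with probability proportional to attract[j]."""
    n = len(activity)
    total_b = sum(attract)
    edges = []
    for i in range(n):
        if rng.random() < activity[i]:
            for _ in range(m):
                # roulette-wheel draw proportional to attractiveness
                r = rng.random() * total_b
                acc = 0.0
                for j in range(n):
                    acc += attract[j]
                    if r <= acc:
                        break
                if j != i:          # discard self-links
                    edges.append((i, j))
    return edges

rng = random.Random(7)
edges = activity_attractiveness_snapshot([1.0] * 5,
                                         [1.0, 1.0, 1.0, 5.0, 5.0],
                                         m=1, rng=rng)
```

Averaging a walker's occupation over many such snapshots is how one would probe numerically the stationary state that the paper derives analytically.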
191 Aggregation of link streams for multiscale analysis [abstract]
Abstract: A link stream L models interactions over time: a link (b, e, u, v) is in L if two entities (nodes) u and v have continuously interacted from time b to time e. Link streams, also called temporal networks or time-varying graphs, model many real-world interactions between individuals, email exchanges, or network traffic. Real-world link streams typically contain millions, or even billions, of interactions, and handling them computationally is a challenge in itself. Hence, aggregation based on module decomposition is a possible approach to construct a smaller and more observable image of a link stream. In our study, we generalize the definition of modules to link streams. In a graph, a module M is a subset of nodes that have the same neighborhood. Analogously, we define a module in a link stream as a couple (M, T) consisting of a set of nodes M and a time duration T (not necessarily an interval) such that all nodes in M share the same neighborhood over T. With the goal of achieving a more meaningful (yet lossy) module decomposition of real-world datasets, we relax the stringency of the module definition to introduce “relaxed” modules. Intuitively, a relaxed module (M, T) is such that the neighborhood similarities of nodes in M over T are higher than a given threshold. We define a relevant measure of spatiotemporal neighborhood similarity. Beyond the theoretical scope of our work, we use modules in link streams as a means of compressing large real-world datasets. In addition to reducing the amount of computational resources needed to process the stream, compression has applications in event and mesoscale structure detection, at the cost of an optimizable loss of information. Hence, we finally investigate the reconstruction error induced by link stream compression, and discuss possible ways to minimize this error.
Hindol Rakshit, Tiphaine Viard and Robin Lamarche-Perrin
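A minimal reading of the relaxed-module definition can be coded directly, using Jaccard similarity as a stand-in for the paper's (unspecified here) spatiotemporal similarity measure:

```python
def neighbourhood(links, node, t0, t1):
    """Nodes interacting with `node` at some point during [t0, t1];
    links are (b, e, u, v) tuples as in the abstract."""
    nb = set()
    for b, e, u, v in links:
        if b < t1 and e > t0:        # the link overlaps the window
            if u == node:
                nb.add(v)
            if v == node:
                nb.add(u)
    return nb

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a or b) else 1.0

def relaxed_module(links, nodes, t0, t1, theta):
    """True iff `nodes` form a relaxed module over [t0, t1]: every pair of
    members has neighbourhood similarity (members excluded) >= theta."""
    nbs = {n: neighbourhood(links, n, t0, t1) - set(nodes) for n in nodes}
    ns = list(nodes)
    return all(jaccard(nbs[ns[i]], nbs[ns[j]]) >= theta
               for i in range(len(ns)) for j in range(i + 1, len(ns)))

# 'a' and 'b' both interact with 'x' and 'y' over [0, 6]:
links = [(0, 5, 'a', 'x'), (0, 5, 'b', 'x'), (2, 6, 'a', 'y'), (3, 6, 'b', 'y')]
```

With theta = 1 this recovers the strict module definition; lowering theta yields the lossy, compressible decomposition the abstract discusses.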
214 Systematic ranking of temporal and topological patterns for controlling dynamical processes on networks [abstract]
Abstract: Speeding up information propagation or mitigating disease spreading calls for an investigation of the interplay between dynamical processes and temporal networks. Temporal (e.g. burstiness) and topological (e.g. community structure) network characteristics have indeed been shown to influence such processes. Following these findings, techniques to influence the dynamics of a process have been built using either temporal or structural information, but designing intervention strategies that exploit their coupling is still challenging. Here we propose a generative model of temporal networks where we can control temporal and topological structures separately and combine them. A temporal network is built from subnetworks whose links have correlated activity. The underlying assumption is that individuals are usually engaged in different social activities (e.g. work, spare time) involving different people at different times (e.g. week, week-end). Each subnetwork can be seen as representing one of these social activities. Moreover, this modeling hypothesis is supported by previous works using tensor factorization, which both show the existence of correlated activity patterns in networks and enable their extraction. Using this model, we generated synthetic networks resulting from several subnetworks. Each subnetwork is built from a static network whose activity is modulated in time. We considered subnetworks with various clustering coefficients and activity profiles. We simulated a susceptible-infected process over each network, and over the networks generated by removing one subnetwork at a time, to measure the impact of each of them on the dynamical process. This led to the identification of subnetworks - with their respective temporal and topological structure - having an impact on the process. The clustering coefficient was found to be the most decisive factor in predicting the impact on the spreading process.
The model architecture allows the study to be extended to richer temporal patterns.
Anna Sapienza, Laetitia Gauvin and Ciro Cattuto
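The removal experiment described above (run a susceptible-infected process on the full temporal network and on the network minus one subnetwork) can be sketched as follows; the two "activity" subnetworks are illustrative toys:

```python
import random

def si_run(snapshots, beta, seed_node, rng):
    """Susceptible-infected spreading over a temporal network given as a
    list of per-time-step edge lists; returns the final number infected.
    Infections within a step become contagious only at the next step."""
    infected = {seed_node}
    for edges in snapshots:
        new = set()
        for u, v in edges:
            for a, b in ((u, v), (v, u)):
                if a in infected and b not in infected and rng.random() < beta:
                    new.add(b)
        infected |= new
    return len(infected)

# Temporal network as the union of two social-activity subnetworks,
# each active on its own time steps (names and sizes are illustrative):
work = [[(0, 1), (1, 2)], [], [(0, 2)]]
leisure = [[], [(1, 3)], []]
union = [w + l for w, l in zip(work, leisure)]
```

With beta = 1 the outcome is deterministic: removing the leisure subnetwork cuts the final outbreak from 4 nodes to 3, which is exactly the per-subnetwork impact measurement the abstract describes.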

Economics  (E) Session 8


Time and Date: 16:00 - 17:20 on 22nd Sep 2016

Room: B - Berlage zaal

Chair: Dexter Drupsteen

306 The Interdependence of Trade Networks of Value Added Goods and Services: Networks of Networks [abstract]
Abstract: There are two fundamentally different networks in play in global trade: the trade in goods and the trade in services. These two complementary networks describe the way in which economic resources are dynamically allocated within countries, local regions and across the globe. It is interesting to observe that these networks are the macroscopic consequences of the detailed microstructure of the trade in goods and services between individual market sectors. It is only recently that analysis of such detailed microstructure, and of the network-to-network exchange of economic resources at multiple scales, has been made possible, as econometrically consistent ‘trade in value added’ data has become readily available to the broader research community. In an exploratory analysis of global economic trade, we combine well-established complex network theory with newly emerging methods from the ‘networks of networks’ field to uncover a rich diversity of interactions at multiple scales, both within the networks and between them. While it is now well understood that individual networks have specific risks associated with specific network topologies - for example, single-node vulnerabilities pose different risks for scale-free versus Erdos-Renyi networks - it is a more complex issue when considering interactions between networks. In this study we use the OECD’s trade in value added data set to study the capital flows exchanged between networks, which give rise to specific risks that are not immediately apparent when the networks are considered in isolation. As the long-term goal of such analysis must be to inform the debate on global economic risks, we conclude by discussing some of the practical consequences for our understanding of global economic trade.
Michael Harre, Alexandra Vandeness and Alex Li-Kim-Mui
41 Enhanced Gravity Model of trade: reconciling macroeconomic and network models [abstract]
Abstract: The International Trade Network (ITN) is involved in an increasing number of processes of relevance for the world economy, including globalization, integration, competitiveness, and the propagation of shocks and instabilities. Characterizing the ITN via a simple yet accurate model is an open problem. The traditional Gravity Model successfully reproduces the volume of trade between two connected countries, using macroeconomic properties such as GDP and geographic distance. However, it generates a network with a complete or homogeneous topology, thus failing to reproduce the highly heterogeneous structure of the real ITN. On the other hand, recent maximum-entropy network models successfully reproduce the complex topology of the ITN, but provide no information about trade volumes. Here we integrate these two currently incompatible approaches via the introduction of an Enhanced Gravity Model (EGM) of trade. The EGM is the simplest model combining the maximum-entropy network approach with the Gravity Model, while at the same time enforcing a novel ingredient that we denote as `topological invariance', i.e. the invariance of the expected topology under an arbitrary change of units of trade volumes. Via this unified and principled mechanism that is transparent enough to be generalized to any economic network, the EGM provides a new econometric framework wherein trade probabilities and trade volumes can be separately controlled by any combination of dyadic and country-specific macroeconomic variables. We show that the EGM successfully reproduces both the topology and the weights of the ITN, finally reconciling the conflicting approaches. Moreover, it provides a general and simple theoretical explanation for the failure of economic models that do not explicitly focus on network topology: namely, their lack of topological invariance.
Assaf Almog, Rhys Bird and Diego Garlaschelli
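The two ingredients the EGM abstract contrasts, a gravity expectation for volumes and a separate connection probability for topology, can be illustrated schematically. The functional forms and parameter names below are assumptions for illustration, not the paper's actual specification:

```python
def gravity_volume(gdp_i, gdp_j, dist, G=1.0, a=1.0, b=1.0, c=1.0):
    """Classic Gravity Model expectation: trade grows with the two GDPs
    and decays with distance (G, a, b, c are fit parameters)."""
    return G * gdp_i**a * gdp_j**b / dist**c

def link_probability(x_i, x_j):
    """Schematic maximum-entropy-style connection probability: a value in
    (0, 1) controlled by country-specific factors x_i, x_j, so that
    topology and volumes can be tuned separately, as in the EGM."""
    return x_i * x_j / (1.0 + x_i * x_j)
```

The point of the separation is visible already here: gravity_volume alone predicts a positive flow between every pair (a complete topology), while link_probability lets most potential links stay absent, matching the heterogeneous structure of the real ITN.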
578 The International Mergers & Acquisitions Web: A Network Approach [abstract]
Abstract: This paper analyzes the world web of mergers and acquisitions (M&As) using a complex network approach. We aggregate M&A data to build a temporal sequence of binary and weighted-directed networks, for the 1995-2010 period and 224 countries. We study different geographical and temporal aspects of the M&A web, building sequences of filtered sub-networks whose links belong to specific intervals of distance or time. Then, we derive directed-network statistics to see how topological properties of the network change over space and time. The M&A web is a low-density network characterized by a persistent giant component with many external nodes and few reciprocated links. Clustering patterns are very heterogeneous and dynamic. High-income economies are characterized by high connectivity; these countries mainly merge with several high- and middle-income economies, implying that most countries might work as targets of a few acquirers. We find that distance strongly impacts the structure of the network: link weights and node degrees depend non-linearly on distance.
Rossana Mastrandrea, Marco Duenas, Matteo Barigozzi and Giorgio Fagiolo
206 Measuring the Coherence of Financial Markets [abstract]
Abstract: Financial Agent-Based Models (ABMs) have been developed with the aim of understanding the Stylized Facts (SF) observed in financial time series. ABMs made it possible to move beyond the mainstream's vision and concepts such as the rational representative agent. ABMs are capable of explaining the role of elements such as heterogeneity of strategies and time horizons, contagion dynamics, and intrinsic (endogenous) large fluctuations, but they still cannot be of concrete use in policy-making processes. In a series of papers regarding a minimal ABM [see Alfi V., et al. Eur. Phys. J. B, 67 (2009) 385] it is shown that a key element for measuring systemic risk and financial distress is the effective number of agents or, in other words, the number of effectively independent strategies in the market. In order to verify this insight, we developed strategies to empirically estimate this coherence. We discuss some preliminary results of a novel measure of stock market coherence with a reference community for this research. A market becomes coherent when agents (i.e., investing subjects) tend to behave similarly and consequently perform the same actions. In such a scenario, markets are maximally exposed to large positive feedbacks and self-reinforcing dynamics which can dramatically amplify even small or local financial shocks, turning them into systemic and global crashes. Here we propose a simple stochastic model which provides a daily estimate of market coherence, starting from the modeling of the correlation network among stocks. The parameters of the model are estimated via a Monte Carlo procedure applied to daily price time series. This measure is a promising index for assessing the systemic risk of financial systems. The measure does not simply reproduce standard risk measures such as realized and implied volatility, and it appears especially informative on the dynamics building up before financial crises.
Matthieu Cristelli, Fabrizio Piasini and Andrea Tacchella

Biology  (B) Session 7


Time and Date: 16:00 - 17:20 on 22nd Sep 2016

Room: C - Veilingzaal

Chair: Hiroki Sayama

146 Three dimensional model for chromosome congression during cell division [abstract]
Abstract: In order to divide correctly, cells have to move all their chromosomes to the center, a process known as congression. This task is performed by the combined action of molecular motors and randomly growing and shrinking microtubules. Chromosomes are captured by growing microtubules and transported by motors using the same microtubules as tracks (1). Coherent motion occurs as a result of a large collection of random and deterministic dynamical events. Understanding this process is important since a failure in chromosome segregation can lead to chromosomal instability, one of the hallmarks of cancer. We describe this complex process in a three-dimensional computational model involving thousands of microtubules. The results show that coherent and robust chromosome congression can only happen if the total number of microtubules is neither too small nor too large. Our results allow for a coherent interpretation of a variety of biological factors previously associated with chromosomal instability and related pathological conditions (2). (1) Z. Bertalan et al., Navigation Strategies of Motor Proteins on Decorated Tracks, PLoS One 10, e0136945 (2015). (2) Z. Bertalan et al., Role of the Number of Microtubules in Chromosome Segregation during Cell Division, PLoS One 10, e0141305 (2015).
Stefano Zapperi, Zsolt Bertalan, Zoe Budrikis and Caterina La Porta
501 Thematic Clustering and Subsetting of Biomarkers in an Elderly Cohort [abstract]
Abstract: The phenomenon of “human aging” has implications for society, the economy and policymaking. The global proportion of elderly people (defined as those above the age of 60), standing at 11.7% in 2013, is projected to reach 21.1% by 2050. Multiple nations worldwide facing aging populations also face concomitant economic and social pressures. Meeting the challenges associated with an aging population requires a holistic approach to human aging, as it spans multiple domains (such as physical, metabolic, immunological, cognitive, psychological and social aspects). Here we report a network-theoretic procedure to characterize the inter-associations among the biomarkers obtained from the Singapore Longitudinal Aging Study (SLAS)-2 elderly cohort (N = 3270), and evaluate strategies to obtain a small subset of representative biomarkers from the larger biomarker set of 1581 variables. From a network built from calculations of statistically significant pairwise effect sizes between biomarker variables, we obtain a minimum spanning tree consisting of 1373 variables from the network’s giant cluster (the rest being singletons), and apply the Louvain maximum-modularity community-detection algorithm on the tree, obtaining clusters composed of biomarkers that are highly associated with each other. We examine the thematic similarities to group the clusters into higher-order thematic groups. We also compare the performance of various machine learning models in predicting SAGE, a multi-modal index of successful or unsuccessful aging, as opposed to using the entire complement of biomarkers. The procedures proposed in this work simultaneously consider both numerical and categorical biomarkers (something not done previously, to our knowledge).
Furthermore, the results obtained here are important both for characterizing a group of elderly people and establishing a hierarchy of importance among their biomarkers, and for obtaining candidate subsets of biomarkers for measurement and evaluation, which requires less time and fewer resources than obtaining the full set.
Jesus Felix Valenzuela, Christopher Monterola, Joo Chuan Tong, Anis Larbi and Tze Pin Ng
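The minimum-spanning-tree step of the pipeline can be sketched with Kruskal's algorithm; the choice of weight (here a generic dissimilarity, e.g. 1 minus an effect size, as a plausible stand-in) is an assumption for illustration:

```python
def minimum_spanning_tree(n, weighted_edges):
    """Kruskal's algorithm with a path-compressing union-find.
    `weighted_edges` is a list of (weight, u, v) tuples; lighter edges
    (i.e. more strongly associated biomarker pairs) are joined first."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(weighted_edges):
        ru, rv = find(u), find(v)
        if ru != rv:                        # keep only cycle-free edges
            parent[ru] = rv
            tree.append((u, v))
    return tree

# Triangle of three "biomarkers": the weakest association (0, 2) is dropped.
tree = minimum_spanning_tree(3, [(0.1, 0, 1), (0.2, 1, 2), (0.9, 0, 2)])
```

A community-detection pass (Louvain in the paper) would then partition this tree into the clusters that get grouped thematically.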
50 A Boolean model of gene regulatory networks with memory: application to the elementary cellular automata [abstract]
Abstract: We consider the model of Boolean genetic regulatory networks named GPBN established in [1]. A GPBN is a directed graph with two different classes of nodes, G and P, representing genes and proteins respectively. For every node we consider only the states 0 or 1 (0 means inactive, 1 active). Each gene is strictly linked with a unique specific protein P, but a set of different proteins may influence the activation (or inactivation) of a given gene. The novelty of this model is that each active protein remains active throughout a fixed delay of time steps. In the classic Boolean network the delays are one for every node. Given a GPBN with N nodes and a set of delays (dt_i ≥ 1; i = 1, ..., N), we prove that its dynamics is equivalent to that of a usual Boolean network (without delays) with N + SUM(dt_i) nodes. Furthermore, for the class of disjunctive Boolean networks (i.e., at each node the local activation is an OR function) we prove, by using the previous equivalence, that any GPBN admits only fixed points, in spite of the fact that networks of this class updated like the usual ones (delays equal to one) may have limit cycles with super-polynomial periods. Finally, we illustrate the behavior of GPBNs by studying the dynamics of one-dimensional elementary cellular automata. Roughly, we observe from exhaustive simulations that the majority of the 256 elementary automata converge to a fixed point or to confined limit cycles, from which we may conclude that information transmission in automata with delays is unusual. [1] A. Graudenzi, R. Serra, M. Villani, C. Damiani, A. Colacci, and S. Kauffman. Dynamical properties of a Boolean model of gene regulatory network with memory. Journal of Computational Biology, 18:1291–1303, 2011.
Eric Goles and Gonzalo A. Ruz
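The disjunctive (OR) update with protein delays can be simulated directly; the state encoding below (a countdown timer per protein) is one convenient reading of the model, not the paper's formalism:

```python
def step_gpbn(timers, inputs, delays):
    """One synchronous update of a toy disjunctive GPBN.  timers[i] is the
    number of steps protein i will still be active; gene i fires (OR rule)
    iff at least one of its input proteins is currently active, which
    resets its own protein's timer to delays[i]."""
    n = len(timers)
    protein = [t > 0 for t in timers]
    genes = [any(protein[j] for j in inputs[i]) for i in range(n)]
    return [delays[i] if genes[i] else max(0, timers[i] - 1)
            for i in range(n)]

# Two mutually activating genes with delay-2 proteins; protein 0 starts active.
inputs, delays = [[1], [0]], [2, 2]
state = [2, 0]
for _ in range(10):
    state = step_gpbn(state, inputs, delays)
```

Consistent with the fixed-point theorem for disjunctive GPBNs quoted in the abstract, this little system settles into a state that no further update changes, whereas the same wiring with unit delays would oscillate.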
559 Topological gene expression networks capture spatial and gene-gene interactions [abstract]
Abstract: The human brain is composed of anatomically defined regions characterized by diverse histological, structural and functional connectivity profiles [1]. Previous work showed that genes that are consistently highly expressed across subjects show correlations to both brain structure and function, strongly suggesting a crucial role of differential transcription in modulating the genetic expression patterns across different regions, thus producing canonical gene-specific signatures for brain modules [2]. In this contribution, we study the whole genetic expression signatures of all regions across six subjects from the Allen Human Brain Atlas [1]. We produce an individual topological network of gene co-expression, akin to a coarse-grained backbone, via an extension of the topological simplification algorithm Mapper [3]. This topological backbone is obtained by slicing the whole sample space, obtaining local clusters and then gluing them together according to a set-overlapping rule. This transformation solves the analysis problems caused by the combined properties of high dimensionality, due to the large number of genes studied (~30k), and the relative sparsity of the samples (a few hundred per subject). The resulting backbone preserves the shape of the original dataset while strongly reducing its dimensionality, and yields a notion of network connectivity across the gene expression samples. We find that samples from known anatomical modules localize coherently on the backbone, occupying almost non-overlapping subnetworks formed by compact connected components. This reveals both the spatial architecture of gene (co)expression and the interactions between the different modules. These subnetworks can provide maps to understand the interactions between the genetic pathways of neurotransmitters, an all-important step in understanding the complex chemical interactions in the brain.
For example, how a pharmaceutical intervention that targets a specific subsystem, such as an anti-psychotic targeting the dopamine system, will impact the other subsystems. 1. Hawrylycz, M.J. et al. Nature 489, 391–399 (2013). 2. Hawrylycz, M.J. et al. 18, 1832–1844 (2015). 3. Singh, G., Mémoli, F. & Carlsson, G.E. SPBG (2007).
Alice Patania, Paul Expert, Francesco Vaccarino and Giovanni Petri
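The slice-cluster-glue recipe of Mapper can be reduced to a skeleton. This is the generic algorithm only, with a pluggable clusterer; the extension the authors actually use is not reproduced here:

```python
def mapper_backbone(points, filt, intervals, cluster):
    """Skeletal Mapper: slice the data by filter value into (overlapping)
    intervals, cluster each slice, and connect two clusters whenever they
    share at least one data point.  `cluster(idxs, points)` must return a
    list of clusters, each a list of point indices."""
    nodes = []
    for lo, hi in intervals:
        slice_pts = [i for i, p in enumerate(points) if lo <= filt(p) <= hi]
        nodes.extend(cluster(slice_pts, points))
    edges = [(a, b)
             for a in range(len(nodes))
             for b in range(a + 1, len(nodes))
             if set(nodes[a]) & set(nodes[b])]
    return nodes, edges

# Toy 1-D example with a trivial one-cluster-per-slice clusterer;
# a real pipeline would use e.g. single-linkage clustering here.
points = [0.0, 1.0, 2.0, 3.0]
one_cluster = lambda idxs, pts: [idxs] if idxs else []
nodes, edges = mapper_backbone(points, lambda p: p,
                               [(0, 2), (1.5, 3.5)], one_cluster)
```

The overlap between the two intervals is what glues the backbone together: the shared point 2.0 produces the single edge between the two cluster-nodes.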

Socio-Ecology  (S) Session 4


Time and Date: 16:00 - 17:20 on 22nd Sep 2016

Room: D - Verwey kamer

Chair: Vincent Traag

332 Behavioural Economics in Social-Ecological Systems with Thresholds [abstract]
Abstract: How do people behave when dealing with situations pervaded by thresholds? Imagine you are a fisherman whose livelihood depends on a resource on the brink of collapse: what would you do, and what do you think others will do? Here we report results from a field experiment with fishermen from four coastal communities in the Colombian Caribbean. A dynamic game with 256 fishermen helped us investigate behavioural responses to the existence of thresholds (probability = 1), risk (threshold with a climate event of known probability 0.5) and uncertainty (threshold with a climate event of unknown probability). Communication was allowed during the game, and the social dilemma was confronted in groups of 4 fishermen. We found that fishermen facing thresholds behaved more conservatively in exploring the parameter space of resource exploitation. Some groups that crossed the threshold managed to recover to a regime of high fish reproduction rate. However, complementary survey data reveal that groups that collapsed the resource in the game often come from communities with high livelihood diversification, lower resource dependence and strong exposure to infrastructure development. We speculate that the latter translates into higher noise levels in resource dynamics, which decouple or mask the relationship between fishing effort and stock size, encouraging more explorative fishing behaviour in real life. This context is brought into our artificial game and leaves statistical signatures on resource exploitation patterns. In general, people adopt precautionary behaviour when dealing with common-pool resource dilemmas with thresholds. However, stochasticity can trigger the opposite behaviour.
Juan Carlos Rocha Gordo, Caroline Schill, Therese Lindahl and Anne Sophie Crépin
87 Self-Organization of Power-Law Vegetation Niches in Agriculture with Ecological Optimum [abstract]
Abstract: Sustainable farming based on the self-organization of ecosystems is an important alternative for smallholders. In contrast to a conventional monoculture system based on strong control of, and load on, the environment, a complex-systems perspective can provide biodiversity-based polyculture, in which various emergent ecosystem functions serve as a principal source of productivity and resilience. This article investigates the statistics of vegetation cover in the ecological optimum with a simple model of niche differentiation, and discusses the origin of the power law observed in field data. We also focus on farming applications and analyze the yield in response to crop diversity. A power-law distribution of crops turns the mean yield of a single crop into a quantity with large fluctuations; this calls for an alternative conception of yield at the plant-community level. Using information entropy as a management cost and the basic conception of measure integration over the niche distribution, a basic strategy of adaptive diversification of the vegetation portfolio is proposed, in order to assure a minimum harvest for food security in a changing environment. Statistics of natural vegetation and of mixed polycultures of crops are demonstrated as a proof of concept.
Masatoshi Funabashi
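The information-entropy management cost mentioned above has a standard computable form (Shannon entropy of the portfolio shares); how the paper weights this cost against yield is not reproduced here:

```python
import math

def portfolio_entropy(shares):
    """Shannon entropy of a crop/vegetation portfolio given as a list of
    area (or biomass) shares summing to 1; used here as a simple proxy
    for the management cost of a diversified polyculture."""
    return -sum(p * math.log(p) for p in shares if p > 0)
```

A monoculture has zero entropy, and for a fixed number of crops the uniform portfolio maximizes it, so the entropy term grows with the degree of diversification.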
60 On the emergence of cooperation under vigilance: a multiplex network approach [abstract]
Abstract: Understanding the evolution of cooperation is one of the most fascinating challenges in many disciplines. There is a large amount of literature analysing the mechanisms for cooperation to emerge and be sustained, from both theoretical and experimental studies. Another way to understand the evolution of cooperation in human societies consists in deciphering the cooperative behaviour of ancient communities from historical records. In a previous work we studied cooperation in the Yamana society that inhabited Argentina and observed that the emergence of an informal network of vigilance promoted cooperation. Several field studies have found evidence of humans exhibiting pro-social behaviour when being observed by others, and also under the presence of subtle cues of being watched. The observability effect (the increase of cooperation under vigilance) seems to be driven by our reputational concerns, bringing the indirect reciprocity mechanism into play. This work explores the effect of vigilance on cooperation in networked systems, in the framework of the Prisoner's Dilemma game. We study the bidirectionally coupled vigilance and game dynamics. We quantify the impact of the topological structure of the network, and of the interplay between vigilance and behaviour, on the outcome of cooperation. Moreover, we study the impact of vigilance on cooperation when individuals have to pay a cost to become vigilant actors. We also analyse the influence of network multiplexity, i.e. the interconnection of different topological structures for the vigilance and game networks, and the impact of correlated multiplexity, i.e. when the node degrees of the multiplex layers are not randomly distributed but correlated.
Our results show that vigilant actors can significantly affect the levels of cooperation, not only by enhancing cooperation in regions of the phase diagram where cooperation is expected to hold, but also by altering the critical point for the emergence of cooperation.
María Pereda
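The two building blocks of such a model, the Prisoner's Dilemma payoff and an observability-biased cooperation propensity, can be written down compactly. The payoff values and the linear boost per watcher are illustrative assumptions, not the study's calibration:

```python
def pd_payoff(me, other, T=1.5, R=1.0, P=0.0, S=-0.5):
    """Prisoner's Dilemma payoff to the focal player ('C' or 'D'),
    with the usual ordering T > R > P > S."""
    return {('C', 'C'): R, ('C', 'D'): S,
            ('D', 'C'): T, ('D', 'D'): P}[(me, other)]

def coop_probability(base, watchers, boost=0.1):
    """Toy observability effect: each vigilant neighbour on the vigilance
    layer raises the propensity to cooperate, capped at 1."""
    return min(1.0, base + boost * watchers)
```

In a multiplex setting, `watchers` would be counted on the vigilance layer while `pd_payoff` is collected on the game layer, which is precisely the layer coupling the abstract investigates.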
244 Cascading effects of critical transitions in social-ecological systems [abstract]
Abstract: Critical transitions in nature and society are likely to occur more often and more severely as humans increase their pressure on the world's ecosystems. Yet it is largely unknown how these transitions will interact, whether the occurrence of one will increase the likelihood of another, and whether potential teleconnections (social and ecological) correlate critical transitions in distant places. Here we present a framework for exploring three types of potential cascading effects of critical transitions: forks, domino effects and inconvenient feedbacks. Drivers and feedback mechanisms are reduced to a network form that allows us to explore the co-occurrence of drivers (forks). Sharing drivers is likely to increase correlation in time or space among critical transitions, but not necessarily interdependence. Random walks on causal networks allow us to detect and compare communities of common drivers and feedback mechanisms across different critical transitions. Domino effects and inconvenient feedbacks were identified by mapping new circular pathways on coupled networks that have not been previously reported. The method serves as a platform for exploring hypotheses about plausible new feedbacks between critical transitions in social-ecological systems; it helps to scope structural interdependence and hence offers an avenue for future modelling and empirical testing of the coupling of regime shifts.
Juan Carlos Rocha Gordo

Economics & Socio-Ecology  (ES) Session 3


Time and Date: 16:00 - 17:20 on 22nd Sep 2016

Room: E - Mendes da Costa kamer

Chair: Andrew Schauf

193 The development of countries on the product progression network [abstract]
Abstract: Is there a common path of development for different countries, or must each one follow its own way? In order to produce cars, must one first learn how to produce wheels? Let us represent countries as walkers in a network made of goods, defined such that if a country steps on one product, it will export it. Obviously, paths can be very different: while Germany has already explored much of the available space, underdeveloped countries have a long road ahead. Which are the best paths in the product network? To answer these questions we build a network of products starting from the UN-Comtrade data about international trade flows over time. A possible approach is to connect two products if many countries produce both of them. Since we want to study the countries' dynamics, we also want our links to indicate whether one product is necessary to produce the other, like transistors for smartphones and wheels for cars. So our network is directed: a country usually goes from one product to another, but not vice versa. We introduce an algorithm that, starting from the empirical bipartite country-product network, is able to extract this kind of information. In particular, we project the bipartite network onto a filtered monopartite one, in which a suitable normalization takes into account the nested structure of the system. We find that countries follow the direction of the links during industrialization. In other words, we are able to spot which products help a country start exporting new ones. These results suggest paths in the product network that are easier to follow, and so can guide countries' policies in the industrialization process and in exiting the poverty trap. Reference: Zaccaria, A., Cristelli, M., Tacchella, A., and Pietronero, L., PloS one, 9(12), e113770 (2014).
Andrea Zaccaria, Matthieu Cristelli, Andrea Tacchella and Luciano Pietronero
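A minimal sketch of the directed-projection idea described in the abstract: starting from a binary country-product export matrix, draw a link a → b when most exporters of b also export a, so that a behaves like a prerequisite of b. The function name and the conditional-probability threshold are illustrative assumptions; the authors' actual filtering uses a more careful normalization for the nested structure of the data.

```python
def product_progression(exports, threshold=0.75):
    """Hypothetical directed projection of a bipartite country-product network.

    exports: dict mapping country -> set of exported products.
    Returns the set of directed edges (a, b) meaning "a looks like a
    prerequisite of b": at least `threshold` of b's exporters also export a.
    """
    products = set().union(*exports.values())
    edges = set()
    for a in products:
        for b in products:
            if a == b:
                continue
            exporters_b = [c for c in exports if b in exports[c]]
            if not exporters_b:
                continue
            share = sum(1 for c in exporters_b if a in exports[c]) / len(exporters_b)
            if share >= threshold:
                edges.add((a, b))
    return edges
```

On toy data where every car exporter also exports wheels but not vice versa, only the link wheels → cars survives the threshold, reproducing the "wheels before cars" intuition.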
397 The Effect of Marketing Strategies on the Percolation of Innovations in Social Networks with Negative Word-of-Mouth [abstract]
Abstract: Because real-world marketing experiments are costly, firms make use of diffusion models to decrease uncertainty. Over the last few years Agent-Based Models of Percolation have received increased attention in the literature; in these models, information about the existence of an innovation propagates through neutral Word-of-Mouth (WOM) between adopters and their susceptible neighbors, and product (e.g. price or quality) and promotion (seeding) strategies can be experimented with (cf. Solomon et al., 2000). A limitation of a basic percolation model such as Solomon et al. (2000) is that actors only receive WOM, while their attitude towards adoption remains unaffected by Positive and Negative Word-of-Mouth (PWOM and NWOM). Although the effects of PWOM and NWOM have been studied empirically, only a few extensions of the basic percolation model capture these effects (e.g. models of NWOM by Erez et al. (2004), and of social reinforcement by Mas Tur (2016)). Addressing this gap, I will extend the standard percolation model by including the effect of NWOM, received from neighboring rejecters, in an actor’s decision process. With this model I will simulate percolation on small-world networks and test the effectiveness of price and seeding strategies in overcoming the effects of NWOM on percolation size. As the relationship between price and diffusion size is highly non-linear, at some price (the percolation threshold) a small change causes the network to shift from almost no diffusion to almost full diffusion. However, NWOM may hamper percolation, and an increase of seeds may prove more effective than lowering the price. A further contribution will be a model where awareness propagates not only from adopters but from rejecters as well, since it can be assumed that ‘negative’ information also informs actors. Although the network will then be fully informed, the effect of NWOM on percolation size may be substantially larger.
Daan Edelkoort
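A hedged sketch of the mechanism the abstract describes: information percolates from adopters, and an informed agent adopts if the price does not exceed its preference, lowered by a penalty for each rejecting neighbor (NWOM). The function name, the linear penalty, and the deterministic sweep are all illustrative assumptions, not the author's model.

```python
def percolation_with_nwom(neighbors, prefs, price, nwom_penalty, seeds):
    """Toy percolation with negative word-of-mouth.

    neighbors: dict node -> set of neighbor nodes.
    prefs: dict node -> price the node would accept absent NWOM.
    Each rejecting neighbor lowers the effective preference by nwom_penalty.
    Returns the final adoption fraction.
    """
    status = {i: "unaware" for i in neighbors}   # unaware / adopter / rejecter
    for s in seeds:
        status[s] = "adopter"
    changed = True
    while changed:
        changed = False
        for i in neighbors:
            if status[i] != "unaware":
                continue
            if any(status[j] == "adopter" for j in neighbors[i]):  # informed by WOM
                n_neg = sum(1 for j in neighbors[i] if status[j] == "rejecter")
                if price <= prefs[i] - nwom_penalty * n_neg:
                    status[i] = "adopter"
                else:
                    status[i] = "rejecter"
                changed = True
    return sum(1 for v in status.values() if v == "adopter") / len(status)
```

On a chain seeded at one end, a price below everyone's preference percolates fully, while a price above it creates a rejecter that blocks the information front.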
249 Early identification of high-quality papers [abstract]
Abstract: Seminal papers are usually recognized as such only many years after publication. Citation-based indicators of paper impact share this lag and often implicitly penalize recent papers, which have had less time to attract citations and thus cannot score well. Using insights from complex network analysis, we introduce a new article-level metric which allows us to identify early the papers that later become highly regarded. This metric, called the rescaled PageRank score, combines the classical PageRank centrality metric with the explicit requirement that a paper’s score is not biased by its age. We analyze the network of citations among the 449,935 papers published in the American Physical Society journals between 1893 and 2009, and focus on a group of papers labeled as Milestone Letters by the editors of Physical Review Letters, a leading physics journal. We compare various metrics with respect to their ability to identify the milestone papers and show that the rescaled PageRank score outperforms the others. The performance gap between rescaled PageRank and PageRank is particularly wide in the first years after publication: it takes 15 years for PageRank to reach the identification level of the rescaled score. Due to its ability to recognize high-quality papers earlier than other metrics, the rescaled PageRank score could prove particularly useful for the evaluation of young researchers, who are disadvantaged by age-biased indices and may be forced to leave academia if their potential is not appreciated promptly enough. The score proposed here may find further applications in other contexts, such as early identification of viral content or of high-quality websites on the World Wide Web.
Manuel Sebastian Mariani, Matus Medo and Yi-Cheng Zhang
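The rescaling idea can be sketched in a few lines: compute PageRank on the citation network, then z-score each paper's value against papers of similar age so that old and young papers compete on equal footing. This is a minimal sketch assuming a simple power iteration and a fixed age window; the authors' exact normalization may differ.

```python
from statistics import mean, pstdev

def pagerank(edges, n, alpha=0.5, iters=200):
    # Power-iteration PageRank on a citation network: edges are (citing, cited)
    # pairs, so score flows from citing papers to the papers they cite.
    out = [0] * n
    for u, _ in edges:
        out[u] += 1
    p = [1.0 / n] * n
    for _ in range(iters):
        nxt = [(1 - alpha) / n] * n
        # Redistribute the mass of dangling nodes (papers citing nothing).
        dangling = alpha * sum(p[u] for u in range(n) if out[u] == 0) / n
        for u, v in edges:
            nxt[v] += alpha * p[u] / out[u]
        p = [x + dangling for x in nxt]
    return p

def rescaled_pagerank(scores, year, window=2):
    # Remove the age bias by z-scoring each paper against papers published
    # within +/- `window` years of it (the window size is an assumption here).
    z = []
    for i, s in enumerate(scores):
        peers = [scores[j] for j in range(len(scores))
                 if abs(year[j] - year[i]) <= window]
        mu, sd = mean(peers), pstdev(peers)
        z.append((s - mu) / sd if sd > 0 else 0.0)
    return z
```

With this rescaling, a recent paper only needs to outperform its contemporaries, not the accumulated scores of decades-old classics.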

Cognition & ICT  (CI) Session 2

Time and Date: 16:00 - 17:20 on 22nd Sep 2016

Room: F - Rode kamer

Chair: Peter Emde Boas

542 System thinking and complexity in fighting organised crime [abstract]
Abstract: Policing interfaces with a variety of multilevel complex systems including: heterogeneous local, national and international criminality; local and national government policy on crime; budgets and targets for crime containment; and public perceptions of crime and social safety. When a UK police force identifies a group of individuals suspected of involvement in organised crime, it undertakes a nationally standardised ‘mapping’ procedure. This involves entering details of the group members' known and suspected activity, associates and capability into a spreadsheet. A numerical score is then calculated so that each organised crime group (OCG) can be placed into one of several ‘bands’ which reflect the range and severity of crime in which the group is involved as well as its level of capability and sophistication. This paper is based on an academic-police research collaboration that is investigating the ‘Organised Crime Group Mapping’ (OCGM) data for one of the UK’s largest police forces. The existing data analysis procedures are being evaluated using a novel combination of systems thinking and complexity methods. These methods include the problem-oriented perspective of applied systems theory that sets system boundaries in the context of policy problems, combined with the perspective of multilevel multidimensional dynamics of network and hypernetwork theory. The research therefore sits firmly in the context of policy, data analysis and practical policing. This presentation will sketch the many subsystems involved in the UK's Serious and Organised Crime Strategy and give an overview of the analytic approach. The focus will be on new results being obtained from the OCGM data, how the systems-complexity methods can be extended and used within practical policing, and the implications for policy.
Jeffrey Johnson, Fortune Joyce and Bromley Jane
134 Social dynamics of online debates on unverified news [abstract]
Abstract: Massive digital misinformation is one of the main threats to our society, according to the World Economic Forum. Our recent studies [1-2] show that users online tend to select information by confirmation bias and to join virtual echo chambers where they reinforce and polarize their beliefs. On one hand, social media have the power to inform, engage or mobilize; on the other hand, they can also misinform, manipulate or control. In such media without mediation, the public has to deal with a large amount of misleading information generated by nationalists, populists and conspirators, which undermines reliable sources. Last but not least, discussions between like-minded people only reinforce their positions, thus boosting polarization. Indeed, our recent work [3] shows that a negative emotional pattern is generally observed when polarized communities interact on the Italian Facebook. In this work we present our most recent advancements in the quantitative understanding of collective framing online by addressing the emotional dynamics of 54 million users around two distinct kinds of narratives, scientific and conspiracy news, on US Facebook. We introduce a new metric to analyze the emotional polarization of both users and posts, which successfully reveals heavily opinionated users and posts on controversial topics. Furthermore, we measure the emotional impact of information that contrasts with one’s beliefs, showing that users tend to react negatively to correction attempts. Although online discussions are open to anyone, users only rarely discuss their opinions outside their echo chambers. [1] Bessi et al. "Science vs conspiracy: Collective narratives in the age of misinformation." PLoS ONE 10.2 (2015):e0118093. [2] Del Vicario et al. "The spreading of misinformation online." PNAS 113.3 (2016):554-559. [3] Zollo et al. (2015) Emotional Dynamics in the Age of Misinformation. PLoS ONE 10(9):e0138740.
Borut Sluban, Fabiana Zollo, Guido Caldarelli, Igor Mozetič and Walter Quattrociocchi
340 Bias, Belief and Consensus: Collective opinion formation on fluctuating networks [abstract]
Abstract: With the advent of online networks, societies are substantially more connected with individual members able to easily modify and maintain their own social links. Here, we show that active network maintenance exposes agents to confirmation bias, the tendency to confirm one’s beliefs, and we explore how this affects collective opinion formation. We introduce a model of binary opinion dynamics on a complex network with fast, stochastic rewiring and show that confirmation bias induces a segregation of individuals with different opinions. We use the dynamics of global opinion to generally categorize opinion update rules and find that confirmation bias always stabilizes the consensus state. Finally, we show that the time to reach consensus has a non-monotonic dependence on the magnitude of the bias, suggesting a novel avenue for large-scale opinion engineering.
Greg Stephens and Vudtiwat Ngampruetikorn
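One way to sketch the mechanism, as a hypothetical adaptive voter model: on disagreement, with probability equal to the bias an agent rewires the offending link to a like-minded agent instead of updating its opinion. The directed "follows" representation and the update rule are illustrative assumptions, not the authors' exact model.

```python
import random

def biased_voter_step(opinions, follows, bias, rng):
    # One asynchronous update: agent i looks at a random agent j it follows.
    # On disagreement, with probability `bias` (confirmation bias) i rewires
    # the link to a like-minded agent; otherwise i adopts j's opinion.
    # `follows[i]` is the set of agents i listens to (directed links).
    i = rng.randrange(len(opinions))
    if not follows[i]:
        return
    j = rng.choice(sorted(follows[i]))
    if opinions[i] == opinions[j]:
        return
    if rng.random() < bias:
        candidates = [k for k in range(len(opinions))
                      if opinions[k] == opinions[i] and k != i and k not in follows[i]]
        if candidates:
            follows[i].discard(j)
            follows[i].add(rng.choice(candidates))
    else:
        opinions[i] = opinions[j]
```

With bias = 1 opinions are frozen and the network segregates into like-minded components; with bias = 0 the rule reduces to an ordinary (directed) voter model, illustrating how the bias stabilizes existing opinion configurations.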
406 The Public Goods Game as Heuristic for Solving Optimization Tasks [abstract]
Abstract: Nowadays, Evolutionary Game Theory (EGT) represents a field of growing interest in different scientific communities, such as biology and social science. On the other hand, the Darwinian concept of evolution, underlying the dynamics of evolutionary games, represents a powerful source of inspiration also in the field of natural computing (e.g. genetic algorithms, swarm logic and ant colonies) for solving optimization problems. The latter have been widely investigated also within the realm of statistical physics, where theoretical physics and information theory meet, forming a powerful framework for studying complex systems. In this work ([1]), we present a new heuristic based on the Public Goods Game (PGG) for solving problems such as the Traveling Salesman Problem (TSP). In particular, the order-disorder phase transition occurring in populations interacting via the classical PGG can be adopted to let the population converge towards a common solution of a given TSP. Notably, the solution plays the same role as the strategy in the PGG, and order is reached by implementing a mechanism of partial imitation (i.e. agents imitate richer agents). Remarkably, results of numerical simulations show that it is possible to compute both optimal and sub-optimal solutions when varying the number of cities in the TSP and the number of agents in the population. Therefore, in light of the achieved outcomes, we deem it relevant to further investigate the potential of evolutionary games in optimization problems, enlarging the domain of application of EGT. To conclude, beyond presenting our results, we aim to show the basic principles of EGT and their potential applications in other fields, so that the presentation will be of interest to scientists coming from different communities. [1] Javarone MA: Solving Optimization Problems by the Spatial Public Goods Game. arxiv:1604.02929 (2016)
Marco Alberto Javarone

Foundations & Biology & Physics  (FBP) Session 1

Time and Date: 16:00 - 17:20 on 22nd Sep 2016

Room: G - Blauwe kamer

Chair: Samuel Johnson

519 Tune the topology to create or destroy patterns [abstract]
Abstract: We consider the dynamics of a reaction-diffusion system on a multigraph. The species share the same set of nodes but can access different links to explore the embedding spatial support. By acting on the topology of the networks we can control the ability of the system to self-organise into macroscopic patterns, emerging as a symmetry-breaking instability of a homogeneous fixed point. Two different case studies are considered: on the one side, we produce a global modification of the networks, starting from the limiting setting where the species are hosted on the same graph. On the other, we consider the effect of inserting just one additional link to differentiate the two graphs. In both cases, patterns can be generated or destroyed following the imposed, small, topological perturbation. Approximate analytical formulae allow us to grasp the essence of the phenomenon and can potentially inspire innovative control strategies to shape the macroscopic dynamics on multigraph networks.
Malbor Asllani, Timoteo Carletti and Duccio Fanelli
230 Probabilistic Quantification of Complex Biological Systems [abstract]
Abstract: Complex biological systems such as cells, tissues, or diseases comprise numerous interactive, multi-scale networks with redundant, convergent and divergent signaling pathways, including numerous positive and negative feedback loops. Computational tools that integrate multiple types of data from high-throughput experiments to elucidate critical patterns and derive predictions are the key to understanding such complex systems. While the advent of high-throughput technologies and the resultant abundance of data have increased the demand for data-driven analytics, comprehensive and computationally efficient methods for learning and predictive modeling of complex biological systems remain elusive. Capturing statistical regularities, with minimal assumptions about the structure in the data, is particularly difficult in biological systems due to their stochastic nature and residual multi-scale interdependencies. Modeling the latent interactions that characterize many biological systems also presents a significant challenge to popular modeling approaches, which are often limited to representing linear statistical regularities, stationary data distributions and/or the use of annotated data via supervised learning methods. In this paper, we introduce a new computational framework and algorithm designed for unsupervised learning and model construction in high-throughput biological data applications. The proposed framework uses an underlying Bayesian nonparametric model that can effectively infer long-range temporal dependencies from heterogeneous data streams and produce grammatical rules used for real-time in-silico modeling, behavior recognition and prediction. We present initial results for two unsupervised learning tasks using unlabeled live-cell imaging data from experiments performed on the Large Scale Digital Cell Analysis System (LSDCAS), namely cellular event identification and large-scale spatio-temporal behavior recognition. We demonstrate increases in accuracy and precision over current expert methods, the efficient asymptotic computational complexity of the proposed learning algorithm, and its suitability for real-time predictive analytics.
John Kalantari and Michael Mackey
451 Shuffle Morphology: Computing Complex Discrete Patterns [abstract]
Abstract: Some natural complex sequences, when structurally analyzed, have discrete morphemes, e.g. root consonants in Semitic languages and genetic exons in DNA molecules. Despite considerable approaches, such as the Two-Level Morphology of Kimmo Koskenniemi in linguistics and A New Kind of Science of Stephen Wolfram in physics, the existing formalisms applied for parsing these complex sequences are computationally inefficient. This presentation reports such a formalism, called Shuffle Morphology, in which the sequences of a complex system, here Classical Arabic, are deshuffled into a few discrete morphemes using a finite set of shuffling discrete patterns for verbs and nouns. The extent to which Shuffle Morphology can handle the complexity of the morpho-syntax of Classical Arabic is evaluated in terms of the simplicity of its description of the language. This formalism is implemented as a set of templatic regular expressions in the MOBIN Knowledge-Based System to morpho-syntactically analyze Classical Arabic texts. The descriptive simplicity of Shuffle Morphology is measured in terms of Kolmogorov Complexity, defined as the length of the shortest computer program that morphologically analyses the language. Implemented in Perl, MOBIN has a source size of nearly 0.5 MB and an effectiveness of 96% in generating morpho-syntactically tagged corpora. This source is just one-eighth of the 4 MB source of the Buckwalter Arabic Morphological Analyzer (BAMA version 2.0), also implemented in Perl and used for tagging the Arabic corpora distributed at the LDC centre at the University of Pennsylvania. Moreover, although BAMA, newly re-implemented as SAMA (Standard Arabic Morphological Analyzer), uses three lists for prefixes, suffixes and stems, supplemented by three morphological compatibility tables, it generates the tagged corpora highly ambiguously. Further development of Shuffle Morphology employing semi-supervised machine learning schemes is expected to compute other Semitic languages efficiently. It is also expected to increase considerably the effectiveness and efficiency of computing DNA sequences.
Mahmoud Shokrollahi-Far

Cognition & ICT & Socio-Ecology  (CIS) Session 1

Time and Date: 16:00 - 17:20 on 22nd Sep 2016

Room: H - Ontvangkamer

Chair: Nicole Beckage

131 The dynamics of innovation through the expansion in the adjacent possible [abstract]
Abstract: Novelties are part of our daily lives. We constantly adopt new technologies, conceive new ideas, meet new people, experiment with new situations. At different scales, innovation is also a crucial feature of many biological, technological and social systems. Recently, large databases recording human activities have allowed the observation that novelties, such as the individual process of listening to a song for the first time, and innovation processes, such as the fixation of new genes in a population of bacteria, share striking statistical regularities. I will present a new framework based on Polya's urn to effectively model the emergence of the new and its regularities. What seems to be key in the successful modelling schemes proposed so far is the idea of looking at evolution as a path in a complex space, physical, conceptual, biological, technological, whose structure and topology get continuously reshaped and expanded by the occurrence of the new. This will be identified as a process of expansion into the adjacent possible, a concept originally introduced by Stuart Kauffman in the framework of biological evolution. We identify statistical signatures of the expansion into the adjacent possible in the analyzed datasets, and we show that our modeling scheme is able to predict these observations remarkably well. References: F. Tria, V. Loreto, V.D.P. Servedio, S.H. Strogatz, Scientific Reports 4 (2014); V. Loreto, V.D.P. Servedio, S.H. Strogatz, F. Tria, in "Universality and Creativity in Language", E. Altmann, M. Degli Esposti and F. Pachet (eds.), Lecture Notes in Morphogenesis (Springer) (2015)
Francesca Tria, Vittorio Loreto, Vito Servedio and Bernardo Monechi
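The urn model with triggering at the heart of this framework (Tria et al. 2014) fits in a few lines: each draw is reinforced, and the first draw of a never-seen element injects brand-new elements into the urn, expanding the adjacent possible. The parameter values below are illustrative, not taken from the papers.

```python
import random

def urn_with_triggering(steps, rho=4, nu=3, seed=0):
    # Polya urn with triggering: each draw is reinforced with `rho` copies,
    # and the first draw of a never-seen element (a novelty) injects nu + 1
    # brand-new elements into the urn -- the "adjacent possible" expanding.
    rng = random.Random(seed)
    urn = [0, 1]                     # initial urn with two distinct elements
    next_new = 2
    seen, sequence = set(), []
    for _ in range(steps):
        ball = rng.choice(urn)
        sequence.append(ball)
        urn.extend([ball] * rho)     # reinforcement of past choices
        if ball not in seen:         # a novelty triggers new possibilities
            seen.add(ball)
            urn.extend(range(next_new, next_new + nu + 1))
            next_new += nu + 1
    return sequence, len(seen)
```

The number of distinct elements grows sublinearly with sequence length (Heaps' law), with an exponent governed by the ratio nu/rho, one of the statistical regularities the framework reproduces.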
374 The Evolutionary Kuramoto's Dilemma [abstract]
Abstract: The simultaneous occurrence of events, known as synchronization, constitutes one of the most ubiquitous and fascinating phenomena in complex systems. Synchronization has been observed in environments of very different nature, from power grids to biological and chemical systems. Despite the numerous studies on coupled oscillators on complex networks, all previous literature is based on the hypothesis that the synchronization process is costless and thus requires no investment by the interacting oscillators. We study the evolution of cooperation and of synchronization on networks of coupled Kuramoto oscillators when the synchronization process is costly and oscillators are able to avoid it. The introduction of costly synchronization leads to the formulation of a dichotomous scenario. In this framework, an oscillator may decide to pay the cost necessary to get synchronized, i.e. cooperating, or simply to wait until others synchronize with her frequency, i.e. defecting. The emergence of synchronization may thus be seen as the byproduct of an evolutionary Prisoner's Dilemma game in which oscillators decide which behavior to adopt according to the payoff, i.e. the level of achieved synchronization, they received in the previous round. We show how topology is essential for cooperation, and the consequent synchronization, to thrive. We also show how different classes of topology foster synchronization differently, both at a macroscopic and a microscopic level. The Kuramoto's Dilemma model, apart from looking at synchronization from a different perspective, can be helpful for studying a wider range of social phenomena (such as motor coordination, opinion dynamics and social rituals) in which synchronization and cooperation processes are both present and permanently coevolve.
Alberto Antonioni and Alessio Cardillo
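The dichotomy can be sketched by letting only cooperators pay the coupling cost in a standard Kuramoto update, while defectors simply follow their natural frequency. This is a minimal sketch under assumed parameters, not the authors' full evolutionary model (which additionally lets oscillators switch strategy based on payoff).

```python
import math, random

def kuramoto_step(theta, omega, coop, K, dt, neighbors):
    # Euler step of the Kuramoto model where only cooperators pay the
    # synchronization cost: defectors ignore the coupling term entirely.
    new = []
    for i, th in enumerate(theta):
        drive = omega[i]
        if coop[i] and neighbors[i]:
            drive += K * sum(math.sin(theta[j] - th)
                             for j in neighbors[i]) / len(neighbors[i])
        new.append(th + dt * drive)
    return new

def order_parameter(theta):
    # Kuramoto order parameter r in [0, 1]; r close to 1 means synchronization.
    n = len(theta)
    return math.hypot(sum(math.cos(t) for t in theta) / n,
                      sum(math.sin(t) for t in theta) / n)
```

With all oscillators cooperating on a complete graph, the order parameter climbs towards 1; flipping agents to defectors lets them free-ride on the phase adjustments of the rest.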
90 Effective neighborhood and information load control segregation dynamics in human groups [abstract]
Abstract: A large number of collective behaviors observed in animal and human groups, such as collective motion or collective decision-making, result from purely local interactions between neighboring individuals. However, the number and position of the neighbors that influence the behavior of a focal individual may deeply affect the resulting collective dynamics. Here we design an experimental setup for human groups which allows us to investigate the effect of influential neighbors on collective dynamics in a simple segregation task. We conduct a series of experiments where 22 pedestrians placed in a 7 m diameter circular arena are asked to walk until an individual objective is reached. Pedestrians are informed that they are assigned one of two colors but ignore their own color and the color of the others. Pedestrians are then asked to walk simultaneously and find a place among the others such that their individual environment consists mostly of people of their own color. To do that, pedestrians hold electronic devices (Ubisense tags) that track their instantaneous position (at 2 Hz) and emit an acoustic signal if the majority of their k nearest neighbors belongs to the other group. This setup allows us to precisely control in real time the number k of influential neighbors and the amount of information each individual can use to make a decision. We find that, after a transient time, pedestrians reach a stationary state in which they are segregated into two or more clusters of one color, and this for odd k from 1 to 13. Moreover, we find that an optimal number of influential neighbors k* exists which yields the fastest segregation process. This optimal segregation time is reached when pedestrians take into account their k* = 9 closest neighbors; below this value there is a lack of information, while above k* an information overload effect takes place.
Ramón Escobedo, Bertrand Jayles, Gilles Trédan, Matthieu Roy, Roberto Pasqua, Christophe Zanon, Adrien Blanchet, Clément Sire and Guy Theraulaz
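A simplified simulation analogue of the arena experiment, under loud assumptions: agents carry hidden binary colors and re-place themselves at random whenever the majority of their k nearest neighbors is of the other color (the analogue of the acoustic signal). Real pedestrians walk continuously rather than teleport, so this is only a caricature of the dynamics.

```python
import math, random

def segregation_run(n=22, k=9, steps=3000, seed=4):
    """Toy k-nearest-neighbor segregation in the unit square.

    Returns the number of agents whose signal is still on at the end.
    """
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    color = [i % 2 for i in range(n)]           # hidden binary colors

    def signal_on(i):
        # The acoustic signal fires when the majority of the k nearest
        # neighbors belongs to the other group.
        nearest = sorted((j for j in range(n) if j != i),
                         key=lambda j: math.dist(pos[i], pos[j]))[:k]
        return sum(color[j] != color[i] for j in nearest) > k / 2

    for _ in range(steps):
        i = rng.randrange(n)
        if signal_on(i):
            pos[i] = (rng.random(), rng.random())   # re-place at random
    return sum(signal_on(i) for i in range(n))
```

Varying k in such a sketch is one way to explore numerically the information-scarcity versus information-overload trade-off the experiment reports.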
324 Patterns of cooperative rhythm production between people through auditory and visual signals [abstract]
Abstract: As seen in musical ensembles, dance and conversation, people cooperatively produce various rhythmic patterns through real-time mutual interaction. The emergence of such rhythmic patterns is of great interest and has long been investigated in cognitive psychology and, more recently, in complexity science. However, the roles of partner feedback information in cooperative rhythm production remain unclear. In what ways is producing rhythm by yourself different from doing so cooperatively? How does a partner’s rhythm affect you? Does the method of interaction, aural or visual, affect the final cooperative rhythm? This study addresses these essential questions through alternate and continuous tapping experiments. In our alternate tapping experiments, participants were instructed to tap a pressure sensor, alternating (i) with a constant-paced metronome (ATm), (ii) with a metronome that ticks after a certain time has elapsed since the participant’s tap (ATf), or (iii) with a partner (ATp). In the continuous tapping experiments, participants were instructed to tap at a constant pace (CT). The rhythm information (made either by the tapping participants or by the metronome) was relayed via either audio or visual signals. We evaluated the mean values and standard deviations of the time differences from stimulus to response (ds), and found that performance was better under ATp conditions. Time-series analysis revealed that under ATp conditions, the inter-tap intervals of the two participants were correlated, and the values of ds varied in a mutually complementary manner. We also observed that participants depend on the partner more deeply when they receive the partner’s signals visually. Furthermore, under ATp conditions, a wider variety of complex rhythmic patterns was produced, suggesting that the emergence of complex rhythm patterns is highly impacted by the mutual interaction between people and by the sensory modalities.
Taiki Ogata and Tomomi Kito

Cognition & Biology & ICT  (CBI) Session 1

Time and Date: 16:00 - 17:20 on 22nd Sep 2016

Room: I - Roland Holst kamer

Chair: Aleksandra Aloric

294 Public health monitoring of drug interactions, patient cohorts, and behavioral outcomes via network analysis of Instagram and Twitter user timelines [abstract]
Abstract: Social media and mobile application data provide population-level observation tools with the potential to speed translational research. We describe recent work demonstrating Instagram’s importance for public surveillance of drug interactions [1]. Our methodology is based on the longitudinal analysis of Instagram user timelines at different timescales: day, week and month. Weighted graphs are built from the co-occurrence of terms from various biomedical dictionaries (drugs, symptoms, natural products, side-effects, sentiment) at the various timescales. We show that spectral methods, shortest paths and distance closure analysis [2,3] reveal relevant drug-drug and drug-symptom pairs, as well as clusters of terms and drugs associated with the complex pathology of depression [1]. We further extend the approach to three additional social media sources: Twitter, ChaCha and the Epilepsy Foundation public forums. We focus on drugs and symptoms of epilepsy, identifying patient cohorts at risk for Sudden Unexpected Death in Epilepsy (SUDEP), the top epilepsy-related cause of death. Via the Epilepsy Foundation data, we collect social media timelines of patients who died from SUDEP. Training classifiers on a cohort with known medical outcome allows us to test the potential of social media in the prediction of SUDEP, as well as to identify the terminology and behavior associated with it. Since existing methods have failed to identify consistent etiologies for SUDEP, we show that social media mining may be helpful in identifying unknown factors and behavioral transitions that precede SUDEP, thus enhancing predictability. Finally, we discuss the generalization of the approach to other conditions by showcasing a general-purpose web tool we have been developing [1]. [1] R.B. Correia, L. Li, L.M. Rocha [2016]. Pac. Symp. Biocomp. 21:492-503. (PMCID: PMC4720984) [2] T. Simas and L.M. Rocha [2015]. Network Science, 3(2):227-268. [3] G.L. Ciampaglia, P. Shiralkar, L.M. Rocha, J. Bollen, F. Menczer, A. Flammini [2015]. PLoS One. 10(6): e0128193.
Rion Correia, Nathan D. Ratkiewicz, Wendy R. Miller and Luis M. Rocha
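The graph-building step the abstract describes can be sketched directly: bin each user's timeline by period, collect the dictionary terms found in each bin, and weight a term pair by the number of (user, period) bins where the two co-occur. The input format here is an assumption for illustration.

```python
from collections import Counter, defaultdict
from itertools import combinations

def cooccurrence_graph(posts):
    """Weighted term co-occurrence graph from user timelines.

    posts: iterable of (user, period, terms) tuples, where `terms` is the set
    of dictionary matches (drug names, symptoms, ...) found in that user's
    posts within the time window (day, week or month). The weight of an edge
    is the number of (user, period) bins in which the two terms co-occur.
    """
    bins = defaultdict(set)
    for user, period, terms in posts:
        bins[(user, period)].update(terms)
    weights = Counter()
    for terms in bins.values():
        for a, b in combinations(sorted(terms), 2):
            weights[(a, b)] += 1
    return weights
```

The resulting weighted graph is the object on which spectral, shortest-path and distance-closure analyses can then be run.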
75 Could undermining biosphere integrity trigger catastrophic climate change? [abstract]
Abstract: The carbon stored in the terrestrial biosphere, were it all released into the atmosphere instantaneously as carbon dioxide, would catastrophically change the Earth’s climate. Human actions that, both directly and indirectly, damage the integrity of the biosphere risk undermining the biosphere’s capacity to maintain this store of carbon. Here, we investigate the risk that degradation of the biosphere will trigger catastrophic climate change, even if future fossil emissions are kept to low levels. Whether terrestrial carbon stores can be maintained depends critically on the speed and strength of feedbacks involving the global carbon cycle, climate change, and the dynamics of the biosphere. Many of the interactions that comprise these feedbacks are highly uncertain, such as the vulnerability of the biosphere to the magnitude and rate of temperature changes and how changes to the biosphere affect its ability to store carbon, and they are therefore rarely implemented in climate models. We extend a previous stylised dynamical model of the global carbon cycle to include interactions with biosphere integrity. We use this model to integrate the range of current knowledge on climate-biosphere interactions and study its possible consequences. Our model constitutes a study of the interactions between the two core planetary boundaries: climate change and biosphere integrity.
Steven Lade
127 Opinion dynamics with public preference falsification: how much is the dynamics modified? [abstract]
Abstract: In many contexts, people do not speak their mind but falsify their private opinions (what Timur Kuran calls “preference falsification”). This ranges from complimenting your boss on his ugly tie, or not disclosing one’s homosexual orientation, to publicly agreeing with public policies in a totalitarian country. Opinion falsification has yet to be analyzed by means of precise multi-agent models. To this effect, we extend the bounded confidence (BC) model of opinion dynamics to analyze how much opinion falsification affects well-entrenched results. In the initial BC model (Deffuant et al. 2002), agents update their opinions in random encounters with agents whose opinion differs from theirs by less than a common threshold d. One of the main results is the 1/2d rule: the number of final opinion clusters is the integer part of 1/2d. We keep the BC updating mechanism for private opinions but, in our model, the public opinion of an agent is a compromise between its private opinion and what it previously heard from other agents. Overall, opinion falsification is characterized by two parameters: beta, the weight given to the opinion of others in the compromise mechanism, and memory size, the number of past interactions remembered by agents. We find that, even with tiny values of beta, important features of the BC model dynamics are altered. Total consensus is obtained for thresholds d much lower than the one needed in the BC model, and all the more so as beta (and opinion falsification) increases. The shift in the 1/2d curves is stronger when the model runs longer, and these results are robust under parameter variations. Overall, there seems to be a dynamics on two time scales: after converging to a meta-stable state following the 1/2d rule, all clusters end up merging into one. We quantitatively analyze the kinematics of these dynamics.
Margot Calbrix, Cyrille Imbert, Vincent Chevrier and Christine Bourjot
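The extended model can be sketched as follows: agents voice a public opinion that mixes their private opinion (weight 1 − beta) with the mean of the last few public opinions they heard (weight beta), and the bounded-confidence comparison and updates operate on the public signals. This is a plausible reading of the abstract's description, with parameter values chosen for illustration, not the authors' exact specification.

```python
import random
from collections import deque
from statistics import mean

def bc_with_falsification(n=100, d=0.2, mu=0.5, beta=0.1, mem=5,
                          steps=20000, seed=1):
    # Bounded-confidence dynamics with preference falsification: agents
    # compare public opinions, and on agreement (|difference| < d) move
    # their private opinions toward the partner's public one.
    rng = random.Random(seed)
    private = [rng.random() for _ in range(n)]
    heard = [deque(maxlen=mem) for _ in range(n)]   # memory of past signals

    def public(i):
        # Public opinion: compromise between private view and what was heard.
        if not heard[i]:
            return private[i]
        return (1 - beta) * private[i] + beta * mean(heard[i])

    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        pi, pj = public(i), public(j)
        if abs(pi - pj) < d:
            private[i] += mu * (pj - private[i])
            private[j] += mu * (pi - private[j])
        heard[i].append(pj)
        heard[j].append(pi)
    return private
```

With beta = 0 this reduces to the standard Deffuant BC model and its 1/2d clustering; increasing beta pulls public signals toward the heard average, which is the mechanism the authors link to cluster merging.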
162 Structure and dynamics of the online climate-change debate [abstract]
Abstract: People shape opinions about common topics based on their beliefs, prior knowledge, peer influence, and personal involvement or interest in a certain topic. Individuals, social groups, or companies get particularly active in proliferating their ideas if they find any benefit in it, whether moral, personal, collective, spiritual or material. Online social networks provide a rich source of user-generated content and of the direct or indirect interactions between users. It has been shown that in such a complex system different groups of tightly connected users can share their opinions on some topics, but can also differ considerably in their preferences towards selected controversial topics [1]. In our work we study various aspects of the online debate on climate change and the associated environmental policies. We use Twitter data as the source of public opinion and construct a dynamic content-sharing network between 7 million users, created from over 40 million tweets acquired during the last two and a half years. We apply various techniques for temporal network mining to track the evolution of the different communities participating in the climate change debate, and to understand their content-sharing patterns. Using text mining and sentiment analysis we detect the communities’ opinions and preferences on relevant issues and policies. We show that the climate change debate on Twitter is dominated, at the highest level of partitioning, by two contrarian groups of users. We observe distinctive discourse and use of vocabulary in the reactions to relevant news events or policy announcements. Furthermore, we confirm the engagement of the climate change contrarians detected in [2] also in our data. [1] Sluban, B. et al.: Sentiment leaning of influential communities in social networks, Computational Social Networks, 2:9, 2015 [2] Farrell, J.: Network structure and influence of the climate change counter-movement, Nature Climate Change, 2015
Borut Sluban, Igor Mozetic and Stefano Battiston

Foundations & ICT & Physics  (FIP) Session 2

Time and Date: 16:00 - 17:20 on 22nd Sep 2016

Room: J - Derkinderen kamer

Chair: Philip Rutten

534 Enumerating Possible Dynamics of Complex Networks in Open and Closed Environments [abstract]
Abstract: We study the problem of determining all possible asymptotic dynamics of Boolean Networks (BNs) such as Discrete Hopfield Nets, Sequential and Synchronous Dynamical Systems, and (finite) Cellular Automata. Viewing BNs as an abstraction for a broad variety of decentralized cyber-physical, biological, social and socio-technical systems, we discuss similarities and differences between such open vs. closed decentralized systems, in an admittedly simplified but rigorous mathematical setting. We revisit the problem of enumerating all possible dynamical evolutions of a large-scale decentralized complex system abstracted as a BN. We show that, in general, the problem of enumerating possible dynamics is provably computationally hard for both "open" and "closed" variants of BNs, even when all of the following restrictions hold simultaneously: i) the local behaviors (that is, node update rules) are very simple, monotone Boolean-valued functions; ii) the network topology is sparse; and iii) either there is no external environment impact on the system modeled as a BN (this case captures "closed" systems), or the model of the environment, and of how it influences individual nodes in the BN, is of a rather simple, deterministic nature. Our results should be viewed as lower bounds on the complexity of the possible behaviors of "the real" large-scale cyber-physical, biological, social and other decentralized systems, with far-reaching implications for the (un)predictability of the possible dynamics of such systems.
Predrag Tosic
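The hardness results concern enumeration at scale, but for a tiny network the possible dynamics can still be enumerated directly. A minimal sketch (the three monotone update rules below are invented for illustration): iterate the synchronous dynamics of a small Boolean network from every initial state and collect the attractors, i.e. the fixed points and cycles.

```python
from itertools import product

# Hypothetical 3-node Boolean network with monotone (AND/OR) update rules,
# updated synchronously; we enumerate all attractors by exhaustive search.
rules = [
    lambda s: s[1] or s[2],   # node 0: OR of nodes 1 and 2
    lambda s: s[0] and s[2],  # node 1: AND of nodes 0 and 2
    lambda s: s[0] or s[1],   # node 2: OR of nodes 0 and 1
]

def step(state):
    return tuple(int(f(state)) for f in rules)

def attractors(n_nodes):
    found = set()
    for state in product((0, 1), repeat=n_nodes):
        trajectory, visited = [], {}
        while state not in visited:
            visited[state] = len(trajectory)
            trajectory.append(state)
            state = step(state)
        cycle = trajectory[visited[state]:]
        # canonical form: rotate the cycle to start at its minimal state
        i = cycle.index(min(cycle))
        found.add(tuple(cycle[i:] + cycle[:i]))
    return found

atts = attractors(3)
```

For this toy network the search finds two fixed points and one 2-cycle; the exponential cost of the exhaustive loop is exactly what the abstract's hardness results say cannot be avoided in general.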
189 Dynamics on networks: competition of temporal and topological correlations [abstract]
Abstract: Networks are the skeleton that supports dynamical processes in complex systems. Links in many real-world networks activate and deactivate in correspondence with the sporadic interactions between the elements of the system. Activation patterns may be irregular or bursty and play an important role in the dynamics of processes taking place in the network. Most recent results point towards a delay of these processes due to the interplay between topology and link activation. Social networks and information or disease spreading processes are paradigmatic examples of this situation. Besides burstiness, several correlations may appear in the process of link activation: memory effects imply temporal correlations, and the existence of communities in the network may mediate the activation patterns of internal and external links. Here, we study how these different types of correlations influence dynamical systems on networks. As paradigmatic examples, we consider SI spreading and the voter model on networks. As noted in the literature, the relation between topology and activation leads to a delay in the dynamics. However, we find that memory effects can notably accelerate the models' arrival at the absorbing states. A theoretical explanation of how this phenomenon occurs is provided. Furthermore, we show that when both types of correlations are present, the final dynamics crucially depends on the mix. The characteristic times of the dynamics diverge for some particular correlation combinations. Some mixes of topology and memory notably speed up the dynamics, while others strongly slow it down. Mixed correlations, topological and memory effects, are commonly present in any real system, so understanding their non-trivial competition is of great importance. In this sense, the SI and voter models are simple benchmark dynamics, but we expect our results to generalize to more elaborate dynamical processes.
The complete work is available at https://arxiv.org/abs/1604.04155
Oriol Artime, José J. Ramasco and Maxi San Miguel
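As a toy illustration of how temporal link activation shapes spreading (a sketch under invented parameters, not the authors' correlated model): SI dynamics on a memoryless activity-driven network, where each step every active node fires a few links to uniformly random targets and infection passes along any link that touches an infected node.

```python
import random

# Sketch: SI spreading on an activity-driven temporal network.
# n nodes, each active with probability `activity` per step, firing m links
# to random targets; infected status is permanent (SI, no recovery).
def si_activity_driven(n=500, activity=0.1, m=2, steps=200, seed=1):
    rng = random.Random(seed)
    infected = [False] * n
    infected[0] = True                      # single seed
    coverage = []                           # infected fraction over time
    for _ in range(steps):
        for i in range(n):
            if rng.random() < activity:     # node i activates this step
                for _ in range(m):
                    j = rng.randrange(n)    # uncorrelated random target
                    if infected[i] or infected[j]:
                        infected[i] = infected[j] = True
        coverage.append(sum(infected) / n)
    return coverage

cov = si_activity_driven()
```

Adding memory (preferentially re-contacting previous partners) or community-dependent activation to the target choice is precisely the kind of modification whose accelerating or delaying effect the abstract analyses.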
120 Modeling Complex Systems with Differential Equations of Time Dependent Order [abstract]
Abstract: We introduce a new type of evolution equation for 1-dimensional complex systems where the order of differentiation is itself one of the variables. We show that such ultra-fast growing systems, with evolution determined by the variable order of differentiation, can be mapped into a fractional differential equation and further into a Volterra integral equation. We elaborate on the existence and stability of the evolution solutions for various initial conditions and we present several case studies. The core of this approach is the observational connection between the evolution of the degree of complexity and the rate of accelerated change on one hand, and the degree of time non-locality (history dependence) of the model equation on the other hand. Since the latter quantity is connected to the number of neighbors or steps taken into account in discretized models, the need arises for a new type of equation whose order of differentiation changes in a dynamical way. We present applications of this approach to: nonlinear evolution equations for long-term memory systems [1], fast-growing computer/internet systems (e.g. Kryder’s or Nielsen's laws) [2], and accelerating-change systems like populations (e.g. Reed’s and Carlson’s laws, the Ribeiro model). We also present some novel applications of this model to cell growth, phase transitions, and avalanches. References: [1] Spectral decomposition of nonlinear systems with memory, A. Svenkeson, B. Glaz, S. Stanton, and B. J. West, Phys. Rev. E 93 (2016) 022211. [2] A. Ludu, Boundaries of a Complex World (Springer-Verlag, Heidelberg 2016).
Andrei Ludu
267 Family Business. Kin of co-authorship in five decades of health science literature [abstract]
Abstract: In academia, nepotism has been blamed for poor graduate career support, gender inequality, and emigration of the intelligentsia. In support of this idea, Allesina reported an unnatural scarcity of distinct surnames among tenured faculty in Italy, while Ferlazzo and Sdoia repeated the same analysis in the UK, finding a more objective expression of social capital. Albeit with very careful consideration of surname distributions across regions and time, surname clustering can be used to reflect family ties or kinship, and interpreted in relation to measures of social capital (including corruption, income inequality, and scientific output). Here, we examine co-authorship patterns in the health science literature over five decades, by country, using over 21 million papers indexed by the MEDLINE®/PubMed® database. Our analysis shows that kinship in the health literature has increased over the past fifty years, with substantial differences between nations. For instance, Italy and Poland exhibited a dramatic increase in kinship, starting from very low values and crossing the overall trend in the early eighties. We also observed low kinship among countries with low perceived corruption, and an association with income inequality. Investigating the co-authorship network of the top publishing countries, we found that authors who are part of a kin tend to have a larger degree and occupy central positions in the network. We interpret this as increased information flows, and allied activities such as grant applications, emanating from influential individuals who are more commonly kin co-authors. Our results also highlight that the local structure of collaborations of a kin co-author is usually very centralized, while authors who are not part of a kin tend to create ‘democratic’ structures.
Finally, the analysis of mixing patterns strongly supports the idea that important kin authors form robust collaborations among their peers while not collaborating with scientists who are not part of a kin.
Sandro Meloni, Mattia C.F. Prosperi, Iain E. Buchan, Iuri Fanti, Pietro Palladino and Vetle I. Torvik
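The kinship signal itself reduces to a simple count. A minimal sketch with invented author lists: the share of papers whose byline contains at least two authors with the same surname (the real analysis additionally controls for surname frequency across regions and time).

```python
from itertools import combinations

# Invented toy author lists, "Surname Initial" format.
papers = [
    ["Rossi A", "Bianchi L", "Rossi M"],
    ["Smith J", "Jones P"],
    ["Kowalski A", "Kowalski B", "Nowak C"],
    ["Garcia R", "Lopez T", "Chen W"],
]

def surname(author):
    return author.split()[0]

def kin_share(papers):
    # fraction of papers with at least one same-surname author pair
    kin = sum(
        any(surname(a) == surname(b) for a, b in combinations(p, 2))
        for p in papers
    )
    return kin / len(papers)

share = kin_share(papers)
```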

Foundations & Urban  (FU) Session 2

Time and Date: 16:00 - 17:20 on 22nd Sep 2016

Room: R - Raadzaal

Chair: Jorge Hidalgo

149 FOUNTAIN: An Agent Based Model for Simulating Upscaling Transitions and Innovations [abstract]
Abstract: The key to understanding processes of transition and innovation lies in understanding the behavioural change of individuals. Here, we present FOUNTAIN, an agent-based approach to modelling transitions and innovations. Its basis is a cognitive model for the agents which relies on behavioural change theories. FOUNTAIN can be used to simulate scenarios in which individuals show habitual behaviour. We can analyse the effects of interventions aimed at influencing the behaviour. These effects can be evaluated not only at the macro level: as FOUNTAIN models heterogeneous individuals, it can also provide insights into the properties and motivations of agents that show specific behaviour. Thereby, target groups can be identified that are more likely to react to interventions. These insights are very valuable for policy makers defining new policy measures. The agents have a bounded-rational cognitive model. It contains a model for habitual behaviour: agents develop a routine behaviour which they unconsciously follow, unless they are triggered to make a deliberate choice. The choice mechanism includes a trade-off between rational, utility-based choices and affective choices. The agents are placed in a social network, which facilitates social influence via peer pressure. Agents also influence each other via environmental aspects, such as resource limitations. Simulating an agent community results in a complex system that shows the propagating effects of behavioural change. We have developed an extended FOUNTAIN model for commuter behaviour, where agents select their work location, modality for travelling and travel time on a daily basis. We have configured the environment for the city of Utrecht in the Netherlands, and validated the commuter model by comparing the simulation results against a real-world experiment in which car drivers were paid to avoid rush hour.
As a follow-up, FOUNTAIN analyses were used by policy makers when planning new interventions on commuter behaviour.
Bob van der Vecht and Tanja Vonk
135 Location based interconnection design for interdependent networks [abstract]
Abstract: An interdependent network is a network consisting of different types of networks that interact with each other via interconnections between them. The design of the interconnection is one of the main challenges in achieving a robust interdependent network. Due to cost considerations, network providers are inclined to interconnect nodes that are geographically close. Accordingly, we propose two topologies, the random geographic graph and the relative neighbourhood graph, for the design of interconnections in interdependent networks that incorporate the geographic location of nodes. Unlike the one-to-one interconnections studied in most papers, the two topologies generalize to multiple-to-multiple interconnections, meaning that a node in one network can have an arbitrary number of dependent nodes in the other network. Moreover, the two topologies are applicable when the sizes of the coupled networks are not equal. We derive the average number of interdependent links (the interlink density) for the two topologies, which enables us to compare simulations performed on the two topologies at the same interlink density. For the two proposed topologies, we evaluate the impact of the interconnection structure on the robustness of interdependent networks against cascading failures. Finally, we propose, as a robustness metric, the decrease of the largest functioning component after cascading failures triggered by a small fraction of failed nodes. This metric quantifies the damage introduced by a small fraction of initial failures, well before the critical fraction of failures that collapses the whole network.
Xiangrong Wang, Robert E. Kooij and Piet Van Mieghem
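The random geographic interconnection rule can be sketched in a few lines (node positions and the radius r below are invented; the abstract derives the interlink density analytically rather than by sampling): draw an interlink between every cross-network pair of nodes closer than r, and report the number of interlinks per node.

```python
import numpy as np

# Sketch of the "random geographic" interconnection: nodes of two coupled
# networks are scattered in the unit square, and an interlink joins every
# cross-network pair at distance below r. Note the multiple-to-multiple
# nature: one node may gain several dependents, or none.
def interlink_density(n1, n2, r, seed=0):
    rng = np.random.default_rng(seed)
    a = rng.random((n1, 2))                 # positions, network 1
    b = rng.random((n2, 2))                 # positions, network 2 (n2 != n1 ok)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    links = np.count_nonzero(d < r)
    return links / (n1 + n2)                # interlinks per node

dens_small = interlink_density(200, 300, 0.05)
dens_large = interlink_density(200, 300, 0.20)
```

Ignoring boundary effects, the expected density is roughly n1*n2*pi*r^2/(n1+n2), so it grows quadratically with the radius; this is the kind of closed-form interlink density the comparison in the abstract relies on.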
158 Equilibria distribution of Cyclic Power Grids [abstract]
Abstract: The use of renewable energy sources will lead to enhanced distributed energy generation and an accompanying modified power grid topology. How to keep the network synchronized under these circumstances is a major issue. We study synchronization on cyclic power grids and determine the stability of the synchronous state. To this end we first calculate all stable equilibria and type-1 saddles (with one unstable direction) of the network using an efficient algorithm. Next, both linear and nonlinear stability are investigated, for the case in which generators and consumers are periodically distributed as well as for random distributions. We find that the stability of the synchronous state decreases with network size, and that the most regular network topology is the most stable. Nonlinear stability is assessed using direct methods and measured by the potential energy of the saddles. We determine the potential energy for the ring network and demonstrate that this topology always leads to improved stability of the synchronous state compared to tree networks. We interpret our stability results in terms of the probability that certain connections between nodes break. We show that heterogeneity in the distribution of generators and consumers leads to an increased fragility of the network.
Kaihua Xi, Johan Dubbeldam and Haixiang Lin
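For a homogeneous ring the equilibria and their linear stability can be checked directly. A sketch (not the authors' algorithm, and with zero power injections assumed): the winding states theta_j = 2*pi*q*j/N are equilibria of sinusoidally coupled oscillators on a cycle, and the eigenvalues of the coupling Jacobian show that the state with winding number q is linearly stable iff cos(2*pi*q/N) > 0, i.e. |q| < N/4.

```python
import numpy as np

# Linear stability of winding-number equilibria on a homogeneous ring of N
# sinusoidally coupled oscillators (zero net power injection assumed).
def is_stable(N, q):
    theta = 2 * np.pi * q * np.arange(N) / N
    # Jacobian of the coupling term sum_j sin(theta_j - theta_i) over
    # ring neighbours; for this state it equals -cos(2*pi*q/N) * L,
    # with L the cycle Laplacian.
    J = np.zeros((N, N))
    for i in range(N):
        for j in ((i - 1) % N, (i + 1) % N):
            w = np.cos(theta[j] - theta[i])
            J[i, j] += w
            J[i, i] -= w
    eig = np.linalg.eigvalsh(J)
    # one zero mode from global phase rotation; stable if no positive modes
    return np.sum(eig > 1e-8) == 0

stable_q0 = is_stable(10, 0)   # synchronous state
stable_q1 = is_stable(10, 1)   # small winding number, 1 < 10/4
stable_q3 = is_stable(10, 3)   # 3 > 10/4, expected unstable
```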
375 Scaling analysis of urban agglomeration observed in Japanese phone directory data [abstract]
Abstract: How do different urban properties, such as the numbers of hospitals, shops, patents, and crimes, depend on city size? It has been demonstrated that most urban properties Y follow the allometric scaling law: Y is proportional to N^b, where N is the population size of a city and b the scaling exponent. Urban infrastructure has been shown to scale sub-linearly (b<1), reflecting the fact that large cities need proportionally less infrastructure, whereas output and income have been shown to scale super-linearly (b>1), reflecting higher per-capita output in large cities. Here we empirically analyze the urban scaling observed in Japanese phone directory (Yellow Pages) data collected from Telepoint, provided by Zenrin Co., Ltd. This data contains comprehensive individual listings of about 7 million shops or facilities (nearly all shops, firms, hospitals, schools, parks, etc.). The name, address, latitude and longitude, phone number, and industrial sector of each shop or facility are also included. We can count the number of stores or facilities in each city. The industrial sector is divided into 39 categories, each of which is further divided into 785 subcategories. This allows us to study and discuss systematically the scaling exponents associated with various aspects of urban agglomeration. We show that the obtained scaling exponents help to characterize urban properties.
Takaaki Ohnishi, Takayuki Mizuno and Tsutomu Watanabe
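The allometric exponent b is typically estimated by ordinary least squares on log-transformed data. A sketch on synthetic cities (all numbers invented, not the Telepoint data):

```python
import numpy as np

# Fit Y ~ N^b: log Y = b * log N + const, by least squares on synthetic data.
rng = np.random.default_rng(42)
N = rng.integers(10_000, 5_000_000, size=200)        # city populations
b_true = 1.15                                        # super-linear, e.g. output
Y = 0.01 * N.astype(float) ** b_true * np.exp(rng.normal(0, 0.1, size=200))

slope, intercept = np.polyfit(np.log(N), np.log(Y), 1)
```

With one such fit per industrial subcategory, the 785-subcategory breakdown in the abstract yields a whole spectrum of exponents to compare against the sub-linear/super-linear dichotomy.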

Foundations & Physics  (FP) Session 4

Time and Date: 16:00 - 17:20 on 22nd Sep 2016

Room: L - Grote Zaal

Chair: Vlatko Vedral

353 A Machian Functional Relations Perspective on Complex Systems [abstract]
Abstract: The paper discusses two related questions: where to ‘cut’ system definitions and systemic relations, based on the perspective of the involved stakeholders. Both are historically related to the genetic, historical-critical, monist approach of the psychophysicist Ernst Mach. For the analysis of (causal) interactions in complex systems (Auyang 1998), Simon and Ando (Ando and Simon 1961, see also Shpak et al. 2004) developed the concept of (near) decomposability, where interactions in systems are separated into groups according to the strength of the interactions between elements of a system. The danger in this assumption is that interactions between groups of variables may be neglected, such that microstate variables can be aggregated into macro-state variables. This assumption may work in the short run under normal conditions, but may fail over longer terms and under unusual conditions. From a complexity / non-linear mathematics perspective, ‘small’ effects may, under positive feedback, lead to the crossing of thresholds and to phase transitions, and may then be observed as increased stress, risk and catastrophes in a system’s development (cp. Thom 1989, Jain and Krishna 2002, Sornette 2003). In order to tackle the question of where to ‘cut’ system definition, decomposition and system aggregation, the paper proposes to employ the physicist-psychologist-philosopher Ernst Mach’s genetic perspective on the evolution of knowledge, based on his research in the history of science (Mach 1888, 1905, 1883). Mach suggests replacing causality with functional relations, which describe the relationship between the elements of the measured item and the standard of measurement (Mach 1905, Heidelberger 2010) as functional dependencies of one appearance on another.
The paper sketches the links between Mach’s and Simon’s approaches to derive requirements for ‘tools’ to converse about system definition, decomposition, and aggregation (modularization), interrelated with and dependent on scientists’ worldviews.
Carl Henning Reschke
86 Using quantum mechanics to simplify input-output processes [abstract]
Abstract: The black-box behavior of all natural things can be characterized by their input-output response. For example, neural networks can be considered devices that transform sensory inputs into electrical impulses known as spike trains. A goal of quantitative science is then to build mathematical models of such input-output processes, such that they can be used to construct devices that simulate this input-output behavior. In this talk we discuss the simplest such models, the ones that can perform these simulations with the least memory – as measured by the device's internal entropy. Such constructions serve a dual purpose. On the one hand, they allow us to engineer devices that replicate the desired operational behaviour with minimal memory. On the other, the memory such a model requires tells us the minimal amount of structure any process exhibiting the same input-output behaviour must possess – and is therefore adopted as a way of quantifying the structural complexity of such processes [1]. Here, we demonstrate that the simplest models of general input-output processes are generally quantum mechanical – even if the inputs and outputs are described purely by classical information. Specifically, we first review the provably simplest classical devices that exhibit a particular input-output behaviour, known as epsilon transducers [1]. We then outline recent work on modifying these devices to take advantage of quantum information processing, such that they can enact statistically identical input-output behaviour with reduced memory requirements [2]. This opens the potential for quantum information to be relevant both in the simulation of input-output processes and in the study of their structural complexity. [1] Barnett and Crutchfield, Journal of Statistical Physics, 161, 404 (2015). [2] J. Thompson et al., Using quantum theory to reduce the complexity of input-output processes, arXiv:1601.05420 (2016).
Jayne Thompson, Andrew Garner, Vlatko Vedral and Mile Gu
450 An Algebraic Formulation of Quivers, Networks and Multiplexes [abstract]
Abstract: An alternative description of complex networks can be given in terms of quivers. These are objects in abstract algebra (they also have a category-theoretic definition). Using this formal machinery, we provide an alternative definition of multiplex networks. Then, identifying the path algebra of a multiplex, we find a gradation in the algebra that leads to the adjacency matrix of the multiplex. In fact, this formulation reveals two types of multiplex networks: the simple-multiplex, where the nodes are as usual, but there is an additional product map in the algebra; and the supra-multiplex, where nodes are replaced by supra-nodes. A supra-node is a collection of nodes belonging to an equivalence class with respect to a given equivalence relation. Though these equivalence classes can themselves be represented as connected graphs, the edges within a supra-node are of a type distinct from those between supra-nodes. By this classification, we identify all the tensorial adjacency matrices usually discussed in the multiplex literature as corresponding to supra-multiplexes, whereas the original definition of multiplexes, with nodes as the basic entities, corresponds to simple-multiplexes. The benefit of an algebraic approach is that it helps in parsing these two types of multiplex networks in a precise way, leading to distinct path algebras and adjacency matrices for each. We show that the adjacency matrix of a simple-multiplex requires the construction of a new color-product map. To the best of our knowledge, this is the first formal derivation of this matrix.
Xerxes Arsiwalla, Ricardo Garcia and Paul Verschure
326 Network structure of multivariate time series [abstract]
Abstract: Our understanding of a variety of phenomena in physics, biology and economics crucially depends on the analysis of multivariate time series. While a wide range of tools and techniques for time series analysis already exist, the increasing availability of massive data structures calls for new approaches to multidimensional signal processing. We present here a non-parametric method to analyse multivariate time series, based on the mapping of a multidimensional time series into a multilayer network, which allows one to extract information on a high-dimensional dynamical system through the analysis of the structure of the associated multiplex network. The method is simple to implement, general, scalable, does not require ad-hoc phase space partitioning, and is thus suitable for the analysis of large, heterogeneous and non-stationary time series. We show that simple structural descriptors of the associated multiplex networks allow one to extract and quantify nontrivial properties of coupled chaotic maps, including the transition between different dynamical phases and the onset of various types of synchronization. As a concrete example we then study financial time series, showing that a multiplex network analysis can efficiently discriminate crises from periods of financial stability, where standard methods based on time-series symbolization often fail.
Vincenzo Nicosia, Lucas Lacasa and Vito Latora

Foundations  (F) Session 9

Time and Date: 16:00 - 17:20 on 22nd Sep 2016

Room: M - Effectenbeurszaal

Chair: Siew Ann Cheong

51 Centrality in interconnected multilayer networks: mathematical formulation of node versatility and its applications [abstract]
Abstract: The determination of the most central agents in complex networks is important because they are responsible for a faster propagation of information, epidemics, failures and congestion, among others. A challenging problem is to identify them in networked systems characterized by different types of interactions, forming interconnected multilayer networks. Here we describe a mathematical framework that allows us to calculate centrality in such networks and to rank nodes accordingly, finding the ones that play the most central roles in the cohesion of the whole structure, bridging together different types of relations. These nodes are the most versatile in the multilayer network. We then present two applications. First, we propose a method, based on the analysis of bipartite interconnected multilayer networks of citations and disciplines, to assess the interdisciplinary importance of scholars, institutions and countries. Using data on physics publications and US patents, we show that our method allows us to reward, in a quantitative way, scholars and institutions that have carried out interdisciplinary work and have had an impact in different scientific areas. Second, we investigate the diffusion of microfinance within rural Indian villages, accounting for the whole multilayer structure of the underlying social networks. We define a new measure of node centrality on multilayer networks, diffusion versatility, and show that this is a better predictor of the microfinance participation rate than previously introduced measures defined on aggregated single-layer social networks. Moreover, we untangle the role played by each social dimension and find that the most prominent role is played by the nodes that are central on the layer representing medical help ties, shedding new light on the key triggers of the diffusion of microfinance.
Elisa Omodei
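A versatility-style centrality can be sketched via the supra-adjacency matrix of a two-layer multiplex (the toy layers and coupling strength below are invented; the paper defines its measures differently for each application): take the leading eigenvector of the supra-adjacency matrix and sum each node's entries across layers.

```python
import numpy as np

# Two invented layers on 4 nodes: a star (layer 1) and a chain (layer 2).
n = 4
A1 = np.zeros((n, n))
A2 = np.zeros((n, n))
for leaf in (1, 2, 3):                 # layer 1: star centred on node 0
    A1[0, leaf] = A1[leaf, 0] = 1
for i in range(n - 1):                 # layer 2: chain 0-1-2-3
    A2[i, i + 1] = A2[i + 1, i] = 1

w = 1.0                                # interlayer coupling strength
supra = np.block([[A1, w * np.eye(n)],
                  [w * np.eye(n), A2]])

lam, vec = np.linalg.eigh(supra)       # eigenvalues in ascending order
v = np.abs(vec[:, -1])                 # Perron vector (largest eigenvalue)
versatility = v[:n] + v[n:]            # aggregate each node across layers
```

Here node 0 is central in both layers (star hub, chain end coupled to the hub), so it comes out as the most versatile; on real multiplexes the ranking can differ sharply from any single-layer or aggregated centrality, which is the point of the abstract.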
274 The noisy voter model on complex networks [abstract]
Abstract: We propose a new analytical method to study stochastic, binary-state models on complex networks. Moving beyond the usual mean-field theories, this alternative approach is based on the introduction of an annealed approximation for uncorrelated networks, allowing us to treat the network structure as parametric heterogeneity. As an illustration, we study the noisy voter model, a modification of the original voter model including random changes of state. The proposed method is able to unfold the dependence of the model not only on the mean degree (the mean-field prediction) but also on more complex averages over the degree distribution. In particular, we find that the degree heterogeneity—the variance of the underlying degree distribution—has a strong influence on the location of the critical point of a noise-induced, finite-size transition occurring in the model, on the local ordering of the system, and on the functional form of its temporal correlations. Finally, we show how this latter point opens the possibility of inferring the degree heterogeneity of the underlying network by observing only the aggregate behavior of the system as a whole, an issue of interest for systems where only macroscopic, population-level variables can be measured.
Adrián Carro, Raul Toral and Maxi San Miguel
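The model itself is easy to simulate. A minimal sketch (all parameters invented; the paper's contribution is the analytical treatment, not the simulation): with probability a, a randomly chosen node flips to a random state (noise); otherwise it copies a random neighbour. The interface density rho, the fraction of links joining unequal states, is the usual order parameter.

```python
import random

# Noisy voter model on a network given as an adjacency list.
def noisy_voter(neighbours, a=0.05, steps=20_000, seed=7):
    rng = random.Random(seed)
    n = len(neighbours)
    state = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        if rng.random() < a:                       # noise: random state
            state[i] = rng.randint(0, 1)
        else:                                      # voter update: copy
            state[i] = state[rng.choice(neighbours[i])]
    # interface density: fraction of active (discordant) links
    edges = {(i, j) for i in range(n) for j in neighbours[i] if i < j}
    active = sum(state[i] != state[j] for i, j in edges)
    return active / len(edges)

# complete graph on 50 nodes as a stand-in for an uncorrelated network
nbrs = [[j for j in range(50) if j != i] for i in range(50)]
rho = noisy_voter(nbrs)
```

Replacing `nbrs` by a heterogeneous-degree network is exactly the setting where, per the abstract, the degree variance shifts the critical noise and the local ordering.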
546 On the Definition of Complex Systems – A Mathematical Perspective [abstract]
Abstract: In this talk I discuss the definition(s) of complex systems from a mathematical perspective. The basic question, ever since complex systems theory was established, is what properties we should expect, so that we can develop analytical tools specific to this set of systems. In phrases like ‘the whole (system) is more than the sum of its parts’ one implicitly assumes a system can be arranged into components and their interactions. On this level, a system can very well be described inside network theory, with components projected onto vertices and interactions illustrated by arrows between vertices. Typically, the interactions then should be highly nonlinear, in order to guarantee potentially ‘surprising’ behaviour. This idea leads to the discussion of feedback loops, and of how they can be analysed mathematically. The existence or absence of feedback loops is essential in modern scientific theory; as an example we mention climate models. However, such considerations do not take into account the multi-scale structure of most systems. Mathematically, the description on the micro-scale is often stochastic in nature, whereas the macroscopic scales are often described deterministically, for example with the help of partial differential equations. In order to understand this relationship, two operations need to be investigated: first, discretisation (and the resulting transition from deterministic to stochastic descriptions, and vice versa), and secondly up-scaling, typically done in the form of a continuum limit. This discussion is also relevant for data science, as data from different scales of a system are increasingly available. The data question will end the talk.
Markus Kirkilionis
27 P-test for comparison among different distributions [abstract]
Abstract: Since the turn of the millennium, there has been a shift towards big data. This makes the plotting and testing of empirical distributions more important. However, many continue to rely on visual cues (e.g., classifying a seemingly straight line on a log-log plot as a power law). In 2009, Clauset, Shalizi and Newman published a series of statistical methods to test empirical data sets and classify them into various distributions. The paper has been cited over a thousand times in journals across various disciplines. However, in many of these papers, the exponential distribution always scores poorly. To understand why there is this apparent systematic bias against the exponential distribution, we sample data points from such a distribution before adding additive and multiplicative noise of different amplitudes. We then perform p-testing on these noisy exponentially-distributed data, to see how the confidence p decreases with increasing noise amplitude. We do the same for the power-law distribution. Based on these results, we discuss how to perform p-tests in an unbiased fashion across different distributions.
Boon Kin Teh, Darrell Tay and Siew Ann Cheong
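The essence of the experiment can be sketched without any fitting library (the noise amplitude and sample size below are invented): sample exponential data, corrupt it with multiplicative noise, and watch the Kolmogorov-Smirnov distance to the best-fit exponential grow (a larger distance means a smaller p-value).

```python
import numpy as np

rng = np.random.default_rng(0)

def ks_distance_to_exponential(data):
    """KS distance between the empirical CDF and the exponential CDF
    with the scale fitted by the sample mean."""
    data = np.sort(data)
    n = data.size
    cdf = 1.0 - np.exp(-data / data.mean())      # fitted exponential CDF
    ecdf_hi = np.arange(1, n + 1) / n
    ecdf_lo = np.arange(0, n) / n
    return max(np.max(ecdf_hi - cdf), np.max(cdf - ecdf_lo))

x = rng.exponential(scale=2.0, size=2000)
d_clean = ks_distance_to_exponential(x)
d_noisy = ks_distance_to_exponential(x * rng.lognormal(0.0, 1.0, size=x.size))
```

Converting the distance into a p-value (and correcting for the fitted parameter, as in the Clauset et al. bootstrap) is the step where, per the abstract, the exponential hypothesis can be unfairly penalized once noise is present.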

Foundations  (F) Session 10

Time and Date: 16:00 - 17:20 on 22nd Sep 2016

Room: N - Graanbeurszaal

Chair: Yamir Moreno

328 Irreducibility of multilayer network dynamics: the case of the voter model [abstract]
Abstract: We address the issue of the reducibility of the dynamics on a multilayer network to an equivalent process on an aggregated single-layer network. As a typical example of models for opinion formation in social networks, we implement the voter model on a two-layer multiplex network, and we study its dynamics as a function of two control parameters, namely the fraction of edges simultaneously existing in both layers of the network (edge overlap), and the fraction of nodes participating in both layers (interlayer connectivity or degree of multiplexity). We compute the asymptotic value of the number of active links (interface density) in the thermodynamic limit, and the time to reach an absorbing state for finite systems, and we compare the numerical results with the analytical predictions on equivalent single-layer networks obtained through various possible aggregation procedures. We find a large region of parameters where the interface density of large multiplexes shows systematic deviations from that of the aggregates. We show that neither of the standard unweighted aggregation procedures is able to capture the highly nonlinear increase in the lifetime of a finite-size multiplex at small interlayer connectivity. These results indicate that multiplexity should be appropriately taken into account when studying voter model dynamics, and that, in general, single-layer approximations might not be accurate enough to properly understand processes occurring on multiplex networks, since they might flatten out relevant dynamical details.
Marina Diakonova, Vincenzo Nicosia, Vito Latora and Maxi San Miguel
552 New dimensions for network science [abstract]
Abstract: Network science makes a major contribution to understanding complexity but has focused on relations between two entities and has hardly embraced the generality of n-ary relations for any value of n. A set of elements is n-ary related if, on removing one of them, the relation ceases to hold, e.g., the notes {C, E, G} forming the chord of C major (3-ary relation), four people playing bridge (4-ary relation), and the characters of the word {w,h,o,l,e} (5-ary relation). N-ary relations are ubiquitous in complex systems. Hypergraphs, whose edges are sets of vertices, provide a powerful first step. Generally vertices need to be ordered to avoid, e.g., {g, r, o, w, n} = {w, r, o, n, g}. An ordered set of vertices is a ‘simplex’, e.g. ⟨g, r, o, w, n⟩ ≠ ⟨w, r, o, n, g⟩. Simplices take network edges to new dimensions: ⟨a, b⟩ is a 1-dimensional edge, ⟨a, b, c⟩ is a 2-dimensional triangle, ⟨a, b, c, d⟩ is a 3-dimensional tetrahedron, and a simplex with (p+1) vertices is a p-dimensional polyhedron. Polyhedra are q-connected through their q-dimensional faces, leading to higher-dimensional q-connectivity. Generally the same set of vertices can support many relations, e.g. the right-facing smiling face :-) and the left-facing frowning face )-: are both assembled from the same three symbols. The ‘explicit relation’ notation of multiplex hypersimplices allows different structures with the same vertices to be discriminated. It will be shown that multilevel multiplex hypernetworks, formed from hypersimplices, are part of a natural family integrating graphs, networks and multiplex networks with their higher-dimensional extensions to hypergraphs, simplicial complexes and multiplex hypernetworks. Illustrative analyses will be presented. Embracing higher dimensions can make network science even more powerful.
Jeffrey Johnson
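The distinction between unordered vertex sets and ordered simplices, and the idea of q-connectivity through shared faces, can be illustrated in a few lines. Encoding a simplex as a Python tuple and the function name `q_near` are illustrative choices, not the author's notation:

```python
def q_near(s1, s2):
    """Largest q such that two simplices share a q-dimensional face,
    i.e. (q+1) common vertices; returns -1 if they share none."""
    return len(set(s1) & set(s2)) - 1

# Ordered simplices (tuples) discriminate what plain vertex sets cannot:
same_set = set("grown") == set("wrong")        # same vertices
same_simplex = tuple("grown") == tuple("wrong")  # different ordered simplices

# Two triangles sharing an edge are connected via a 1-dimensional face:
shared = q_near(("a", "b", "c"), ("b", "c", "d"))
```

Chains of such q-near simplices are what gives rise to the higher dimensional q-connectivity the abstract mentions.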
100 The diffusion manifold of complex networks [abstract]
Abstract: Complex networks are special mathematical objects lacking an appropriate definition of distance between their units. In fact, the shortest path between two nodes is not metric because it does not satisfy the triangle inequality. One possible alternative is to introduce a metric distance based on a dynamical process, more specifically random walks. This "diffusion distance" between two nodes, already used in recent machine learning and image processing applications, quantifies how easily a walker can diffuse between them, and it is a true metric. We show that the diffusion distance allows us to identify a metric tensor g whose properties depend on the eigenvalue spectrum of the Laplacian matrix of the underlying network G. We introduce a Riemannian manifold M endowed with such a metric tensor and the Levi-Civita connection as the natural affine connection. By requiring that distances on G are preserved on M, this choice allows us to embed the network into a metric space that we call the "diffusion manifold". The embedding of a complex network into a metric manifold provides several advantages for the analysis. For instance, it is possible to exploit the mathematical properties of the manifold to better understand the diffusion landscape of the network: we discuss the cases of several real-world networks, from physical ones (such as transportation networks, which are naturally embedded in a Euclidean space) to non-physical ones (such as online social networks). More intriguingly, we show that the diffusion manifold allows us to naturally define a dynamical renormalization of the underlying complex network that can be exploited to better understand its structure and its critical properties.
Manlio De Domenico and Alex Arenas
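A minimal numerical sketch of such a distance, assuming the common heat-kernel form D_t(i,j)² = Σ_k e^(−2·λ_k·t) (v_k(i) − v_k(j))² built from the Laplacian spectrum (the exact definition used by the authors may differ in normalization):

```python
import numpy as np

def diffusion_distance(L, t=1.0):
    """Pairwise diffusion distances from a symmetric graph Laplacian L.

    Embeds node i as Phi(i) = (e^{-lam_k t} v_k[i])_k, so that
    D_t(i, j) = ||Phi(i) - Phi(j)|| is a genuine (Euclidean) metric.
    """
    lam, V = np.linalg.eigh(L)        # eigenvalues and orthonormal eigenvectors
    Phi = V * np.exp(-lam * t)        # scale each eigenvector column by e^{-lam t}
    diff = Phi[:, None, :] - Phi[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

# Laplacian of the 3-node path a - b - c
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
D = diffusion_distance(L, t=0.5)
```

Unlike shortest-path lengths, this distance comes from a Euclidean embedding, so symmetry and the triangle inequality hold by construction; on the path graph the two endpoints come out farther apart than adjacent nodes, as expected.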
208 Nonlinear resonances for discrete cycles and chains [abstract]
Abstract: The graph wave equation arises naturally from conservation laws on a network; there, the usual continuum Laplacian is replaced by the graph Laplacian. We consider such a wave equation with a cubic defocusing non-linearity on a general network. The model is well-posed. It is close to the φ4 model in condensed matter physics. Using the normal modes of the graph Laplacian as a basis, we derive the amplitude equations and define resonance conditions that relate the graph structure to the dynamics. For cycles and chains, the spectrum of the Laplacian is known; for these we analyze the amplitude equations in detail. The results are validated by comparison to the numerical solutions. This study could help understand other dynamical systems on networks from biology, physics or engineering.
Imene Khames, Jean Guy Caputo and Arnaud Knippel
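As an illustration, the graph wave equation with a cubic defocusing nonlinearity, ü = −L u − u³, can be integrated on a cycle with a standard velocity-Verlet scheme (all parameter values below are illustrative, not the authors' setup); approximate energy conservation gives a basic sanity check:

```python
import numpy as np

def cycle_laplacian(n):
    """Graph Laplacian of the n-cycle."""
    L = 2.0 * np.eye(n)
    for i in range(n):
        L[i, (i + 1) % n] = -1.0
        L[i, (i - 1) % n] = -1.0
    return L

def simulate(n=16, dt=0.01, steps=2000):
    """Integrate u'' = -L u - u^3 on a cycle; return the energy trace."""
    L = cycle_laplacian(n)
    u = 0.1 * np.cos(2 * np.pi * np.arange(n) / n)   # excite one normal mode
    v = np.zeros(n)
    accel = lambda u: -L @ u - u ** 3
    a = accel(u)
    energies = []
    for _ in range(steps):                           # velocity-Verlet step
        u = u + dt * v + 0.5 * dt * dt * a
        a_new = accel(u)
        v = v + 0.5 * dt * (a + a_new)
        a = a_new
        # E = kinetic + graph-elastic + quartic potential
        energies.append(0.5 * v @ v + 0.5 * u @ (L @ u) + 0.25 * (u ** 4).sum())
    return np.array(energies)

E = simulate()
```

Projecting u onto the Laplacian's normal modes (known in closed form for cycles and chains) is what turns such a simulation into the amplitude equations the abstract analyzes.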

Cognition  (C) Session 5

Schedule Top Page

Time and Date: 16:00 - 17:20 on 22nd Sep 2016

Room: P - Keurzaal

Chair: Gaoxi Xiao

531 Dynamics of disagreement and editorial conflict in Wikipedia; from data to model [abstract]
Abstract: Disagreement and conflict are a fact of social life. However, negative interactions are rarely explicitly declared and recorded, and this makes them hard for scientists to study. We use complex network methods to investigate patterns in the timing and configuration of contributions to collaboration communities in order to find evidence for negative interactions. We analyze sequences of reverts of article edits to Wikipedia, the largest online encyclopedia, and investigate how often and how fast they occur compared to a null model that randomizes the order of actions to remove any systematic clustering. Our results suggest that Wikipedia editors systematically revert the same person, revert back their reverter, and come to defend a reverted editor. We further relate these interactions to the status of the involved editors. Our findings reveal that certain social dynamics that have not been previously explored might underlie the knowledge collection practice conducted on Wikipedia. We also devise an agent-based model of common value production. Our opinion dynamics model is capable of explaining the empirical observations and also allows us to go beyond them and test different scenarios. For instance, we particularly study the role of extremist editors and show that consensus can only be reached if extremist groups can take an active part in the discussion and if their views are also represented in the common product, at least temporarily. We also consider the effects of banning editors with unconventional opinions and show that banning problematic editors mostly hinders consensus, as it delays discussion and thus the whole consensus-building process.
Taha Yasseri
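The timing analysis described above can be sketched as follows: given a time-ordered list of (reverter, reverted) events, measure how quickly an editor "reverts back" their reverter, and compare against a null model that shuffles the order of actions. This is a simplified stand-in for the authors' randomization, with illustrative function names:

```python
import random

def revert_back_lags(events):
    """For each revert (a, b), the lag (in number of events) until the
    first later revert-back (b, a), if any."""
    lags = []
    for i, (a, b) in enumerate(events):
        for j in range(i + 1, len(events)):
            if events[j] == (b, a):
                lags.append(j - i)
                break
    return lags

def null_mean_lag(events, n_shuffles=200, seed=0):
    """Mean revert-back lag after randomizing the order of actions."""
    rng = random.Random(seed)
    shuffled = list(events)
    total, count = 0, 0
    for _ in range(n_shuffles):
        rng.shuffle(shuffled)
        lags = revert_back_lags(shuffled)
        total += sum(lags)
        count += len(lags)
    return total / count if count else float("nan")

# Toy event log: A reverts B, C reverts D, then each victim reverts back
events = [("A", "B"), ("C", "D"), ("B", "A"), ("D", "C")]
lags = revert_back_lags(events)
```

If real revert-backs occur with systematically shorter lags than in the shuffled null, that is evidence of the retaliatory dynamics the abstract reports.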
363 A Model of Zealot-Influenced Conflict Dynamics in Wikipedia Editing [abstract]
Abstract: The underlying mechanisms for the conflict and coordination in Wikipedia editing have attracted enormous research attention. A notable model is proposed by Török and colleagues, which shows that the random renewal of agents or editors is the key source of persistent controversy during the editing of a Wikipedia article. In this work, a modified model is proposed, based on the hypothesis that the contingent activation of zealots with extremist opinions can substitute for the renewal of agents in generating controversies. Numerical simulations reveal that the proposed model can basically reproduce the three identified regimes of conflict in Wikipedia editing, as well as the transitions between them. In the presence of a small number of contingently-activated zealots, the system gradually transits from the “single conflict” regime to the “plateaus of consensus” and then to the “uninterrupted controversy” regime, as the agents’ tolerance threshold to the medium opinion decreases and the zealots’ activation rate increases. Moreover, richer phenomena can be observed in the proposed model. Especially, the inclusion of contingently-activated zealots significantly influences the conflict dynamics. At different fractions of zealots, the combination of tolerance threshold and zealot activation rate may have different modes of influence on the density of conflict. When the fraction of zealots increases, it can surprisingly be observed that the regime of "uninterrupted controversy" vanishes, while the system has two remaining phases, i.e. “single conflict” and “plateaus of consensus”. Thus the overall dynamics depicted in the proposed model is quite different from that in the original collective Wikipedia editing model.
Haoxiang Xia, Ruixin Wang, Pei Ma and Shuangling Luo
588 Uncovering the Dynamic of Twitter Opinion Leaders in the US 2016 elections [abstract]
Abstract: The role of social media such as Twitter in today’s political elections has become crucial. However, the ever-increasing amount of data available has rendered the task of identifying the real opinion leaders and understanding their impact on the social community extremely difficult. Using a unique large-scale dataset of tweets concerning the US 2016 election primaries, we investigate the temporal social network formed by the interactions among millions of Twitter users. Using the Collective Influence (CI) algorithm introduced by Morone & Makse, Nature, 524, 65 (2015), we are able to identify the most influential users of the social network, who are able to spread information the most efficiently to the whole network. The CI algorithm finds the minimal set of influencers by solving the optimal percolation problem on the network. The political opinion of Twitter influencers is determined using a combination of natural language processing of the tweet contents, machine learning classification and analysis of the hashtag co-occurrence network. Using this framework we are able to follow the dynamics of the influencers and to understand their role in the diffusion of opinion. The influencers tend to have stronger opinions than average Twitter users, and shifts in their sentiment appear to predict election results in primaries.
Alexandre Bovet, George Furbish, Flaviano Morone and Hernan Makse
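The CI score behind the algorithm is CI_ℓ(i) = (k_i − 1) · Σ_{j ∈ ∂Ball(i,ℓ)} (k_j − 1), a sum over the frontier of the ball of radius ℓ around node i, with the top-scoring node removed adaptively. A simplified greedy sketch (not the authors' optimized implementation, which scales to millions of users) could look like:

```python
from collections import deque

def collective_influence(adj, node, ell=2):
    """CI_l(i) = (k_i - 1) * sum of (k_j - 1) over nodes exactly l steps away."""
    dist = {node: 0}
    frontier_sum = 0
    q = deque([node])
    while q:
        u = q.popleft()
        if dist[u] == ell:                # on the ball's frontier: accumulate
            frontier_sum += len(adj[u]) - 1
            continue
        for v in adj[u]:                  # inside the ball: keep expanding
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return (len(adj[node]) - 1) * frontier_sum

def top_influencers(adj, n_remove, ell=2):
    """Greedy adaptive removal: repeatedly take the current top-CI node."""
    adj = {u: set(vs) for u, vs in adj.items()}   # work on a private copy
    removed = []
    for _ in range(n_remove):
        best = max(adj, key=lambda u: collective_influence(adj, u, ell))
        removed.append(best)
        for v in adj[best]:
            adj[v].discard(best)
        del adj[best]
    return removed

# Two hubs A and E joined by an edge, each with three leaves
adj = {"A": {"B", "C", "D", "E"}, "B": {"A"}, "C": {"A"}, "D": {"A"},
       "E": {"A", "F", "G", "H"}, "F": {"E"}, "G": {"E"}, "H": {"E"}}
top = top_influencers(adj, 1, ell=1)
```

Recomputing CI after every removal is what makes the set "minimal" in the optimal-percolation sense; efficient implementations only update scores within radius ℓ of the removed node.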
400 Opinion Leader in Social Network as a Complex Network Structure Property [abstract]
Abstract: We propose a new model that captures the main difference between information and opinion spreading in complex networks. In the case of information spreading, additional exposure to certain information has a small effect. In contrast, when an actor is exposed to two opinionated actors, the probability of adopting the opinion is significantly higher than in the case of contact with one such actor (called by J. Kleinberg "the 0-1-2 effect"). In each time step, if an actor does not have an opinion, we randomly choose two of his network neighbors. If one of them has an opinion, the actor adopts the opinion with some low probability; if two, with a higher probability. Opinion spreading was simulated on different real-world social networks and similar random scale-free networks. The results show that the small-world structure has a crucial impact on the tipping point time. The "0-1-2" effect causes a significant difference in the ability of actors to start opinion spreading. An actor is an opinion leader according to his topological position in the network. Known characteristics of an actor in a network cannot indicate whether he or she is a potential opinion leader. It is clear that an opinion leader must not have a low degree and must have a high clustering coefficient value. To become an opinion leader, a special position of an actor in the network is needed, and this position is not a local property of the actor.
Igor Kanovsky
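The update rule can be sketched as follows; the parameter values p1 and p2 and the sequential sweep over actors are illustrative assumptions, the only essential ingredient being the "0-1-2" jump in adoption probability when both sampled neighbors already hold the opinion:

```python
import random

def spread_012(adj, seeds, p1=0.02, p2=0.5, steps=200, seed=0):
    """'0-1-2 effect' opinion spreading: each step an undecided actor samples
    two neighbours and adopts with probability p1 if one holds the opinion
    and p2 >> p1 if both do. Returns the final adopter fraction."""
    rng = random.Random(seed)
    opinion = {u: (u in seeds) for u in adj}
    for _ in range(steps):
        for u in adj:
            if opinion[u] or len(adj[u]) < 2:
                continue
            a, b = rng.sample(adj[u], 2)
            k = opinion[a] + opinion[b]     # 0, 1 or 2 opinionated neighbours
            if k and rng.random() < (p1 if k == 1 else p2):
                opinion[u] = True
    return sum(opinion.values()) / len(opinion)

# Complete graph on 10 actors with two initial opinion holders
K10 = {u: [v for v in range(10) if v != u] for u in range(10)}
frac = spread_012(K10, seeds={0, 1})
```

Seeding the process from different actors in a clustered network, and comparing tipping point times, is the kind of experiment that identifies the topologically privileged opinion leaders the abstract describes.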