Foundations (F) Session 9


Time and Date: 16:00 - 17:20 on 22nd Sep 2016

Room: M - Effectenbeurszaal

Chair: Siew Ann Cheong

51 Centrality in interconnected multilayer networks: mathematical formulation of node versatility and its applications [abstract]
Abstract: The determination of the most central agents in complex networks is important because they are responsible for a faster propagation of information, epidemics, failures and congestion, among other processes. A challenging problem is to identify them in networked systems characterized by different types of interactions, which form interconnected multilayer networks. Here we describe a mathematical framework that allows us to calculate centrality in such networks and rank nodes accordingly, finding the ones that play the most central roles in the cohesion of the whole structure, bridging together different types of relations. These nodes are the most versatile in the multilayer network. We then present two applications. First, we propose a method, based on the analysis of bipartite interconnected multilayer networks of citations and disciplines, to assess the interdisciplinary importance of scholars, institutions and countries. Using data about physics publications and US patents, we show that our method provides a quantitative way to reward scholars and institutions that have carried out interdisciplinary work and have had an impact in different scientific areas. Second, we investigate the diffusion of microfinance within rural Indian villages, accounting for the whole multilayer structure of the underlying social networks. We define a new measure of node centrality on multilayer networks, diffusion versatility, and show that it is a better predictor of the microfinance participation rate than previously introduced measures defined on aggregated single-layer social networks. Moreover, we untangle the role played by each social dimension and find that the most prominent role is played by nodes that are central on the layer representing medical-help ties, shedding new light on the key triggers of the diffusion of microfinance.
Elisa Omodei
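The versatility measures described above rest on centrality defined over the full multilayer structure rather than on each layer separately. As a rough illustration only (not the authors' exact formulation), the sketch below computes an eigenvector-style centrality on the supra-adjacency matrix of a two-layer network, where inter-layer couplings let a node's score in one layer feed into its rank in the other; the toy adjacency matrices and the coupling strength omega are assumptions.

```python
# Illustrative sketch: eigenvector-style centrality on a supra-adjacency
# matrix of a two-layer network (toy data, not the authors' formulation).
import numpy as np

n = 4                              # nodes per layer (same node set in both layers)
A1 = np.array([[0, 1, 1, 0],       # layer 1 adjacency (assumed toy example)
               [1, 0, 1, 0],
               [1, 1, 0, 1],
               [0, 0, 1, 0]], dtype=float)
A2 = np.array([[0, 0, 1, 1],       # layer 2 adjacency (assumed toy example)
               [0, 0, 0, 1],
               [1, 0, 0, 1],
               [1, 1, 1, 0]], dtype=float)
omega = 1.0                        # inter-layer coupling strength (assumed)

# Supra-adjacency: block-diagonal layers plus identity coupling between
# the two replicas of each node.
supra = np.block([[A1, omega * np.eye(n)],
                  [omega * np.eye(n), A2]])

# Leading eigenvector of the (symmetric) supra-adjacency matrix.
eigvals, eigvecs = np.linalg.eigh(supra)
v = np.abs(eigvecs[:, -1])

# Aggregate each node's two layer-specific scores into one ranking.
score = v[:n] + v[n:]
print(np.argsort(-score))          # nodes ranked from most to least central
```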
274 The noisy voter model on complex networks [abstract]
Abstract: We propose a new analytical method to study stochastic, binary-state models on complex networks. Moving beyond the usual mean-field theories, this alternative approach is based on the introduction of an annealed approximation for uncorrelated networks, which allows us to treat the network structure as parametric heterogeneity. As an illustration, we study the noisy voter model, a modification of the original voter model that includes random changes of state. The proposed method is able to unfold the dependence of the model not only on the mean degree (the mean-field prediction) but also on more complex averages over the degree distribution. In particular, we find that the degree heterogeneity (the variance of the underlying degree distribution) has a strong influence on the location of the critical point of a noise-induced, finite-size transition occurring in the model, on the local ordering of the system, and on the functional form of its temporal correlations. Finally, we show how this latter point opens the possibility of inferring the degree heterogeneity of the underlying network by observing only the aggregate behavior of the system as a whole, an issue of interest for systems where only macroscopic, population-level variables can be measured.
Adrián Carro, Raul Toral and Maxi San Miguel
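For readers unfamiliar with the model, the noisy voter model itself is straightforward to simulate: each node holds a binary state and, at each update, either adopts a random state (noise) or copies a randomly chosen neighbour. The sketch below is a minimal Monte Carlo illustration of these dynamics on a degree-heterogeneous network; the choice of network, noise rate and system size are assumptions, and the talk's analytical annealed-approximation method is not reproduced here.

```python
# Minimal Monte Carlo sketch of the noisy voter model on a network
# (parameters and the Barabasi-Albert graph are illustrative assumptions).
import random
import networkx as nx

N = 1000                  # number of nodes
a = 0.01                  # noise rate: spontaneous random change of state
steps = 200 * N           # number of single-node updates

G = nx.barabasi_albert_graph(N, 3, seed=1)      # heterogeneous degrees
state = {i: random.randint(0, 1) for i in G}    # random initial binary states

for _ in range(steps):
    i = random.randrange(N)
    if random.random() < a:
        # noise: adopt a random state regardless of the neighbourhood
        state[i] = random.randint(0, 1)
    else:
        # voter update: copy the state of a uniformly chosen neighbour
        j = random.choice(list(G.neighbors(i)))
        state[i] = state[j]

# Aggregate (macroscopic) observable: fraction of nodes in state 1.
m = sum(state.values()) / N
print(f"fraction of nodes in state 1: {m:.3f}")
```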
546 On the Definition of Complex Systems – A Mathematical Perspective [abstract]
Abstract: In this talk I discuss the definition(s) of complex systems from a mathematical perspective. The basic question, ever since complex systems theory was established, is what properties we should expect of such systems, so that we can then develop analytical tools specific to this class of systems. In phrases like ‘the sum (system) is more than its parts’ one implicitly assumes that a system can be decomposed into components and their interactions. At this level, a system can be described well within network theory, with components mapped onto vertices and interactions represented by arrows between vertices. Typically, the interactions should then be highly nonlinear, in order to guarantee potentially ‘surprising’ behaviour. This idea leads to the discussion of feedback loops, and how they can be analysed mathematically. The existence or absence of such feedback loops is essential in modern scientific theory; as an example we mention climate models. However, such considerations do not take into account the multi-scale structure of most systems. Mathematically, the description on the micro-scale is often stochastic in nature, whereas the macroscopic scales are often described deterministically, for example with the help of partial differential equations. In order to understand this relationship, two operations need to be investigated: first, discretisation (and the resulting transition from deterministic to stochastic descriptions, and vice versa); and secondly, up-scaling, typically done in the form of a continuum limit. This discussion is also relevant for data science, as data from different scales of the system are increasingly available. The data question will end the talk.
Markus Kirkilionis
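The micro/macro point made in the abstract (stochastic description at small scales, deterministic description at large scales) can be illustrated with a toy example not taken from the talk itself: a stochastic logistic birth-death process whose large-population limit is the deterministic logistic equation. All rates and sizes below are assumptions.

```python
# Toy micro vs. macro comparison: stochastic logistic birth-death process
# against its deterministic (ODE) limit. Parameters are assumed.
import random

b, d, K = 1.0, 0.1, 1000      # birth rate, death rate, carrying capacity
n = 10                        # initial number of individuals
t, T = 0.0, 20.0              # current time and time horizon

# Gillespie-style simulation of the micro-scale (stochastic) description.
while t < T and n > 0:
    birth = b * n * max(1 - n / K, 0.0)
    death = d * n
    total = birth + death
    if total <= 0:
        break
    t += random.expovariate(total)              # waiting time to next event
    n += 1 if random.random() < birth / total else -1

# Euler integration of the macro-scale (deterministic) description.
x, dt = 10.0, 0.01
for _ in range(int(T / dt)):
    x += dt * (b * x * (1 - x / K) - d * x)

print(f"stochastic n(T) = {n},  deterministic x(T) = {x:.1f}")
```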
27 P-test for comparison among different distributions [abstract]
Abstract: Since the turn of the millennium, there has been a shift towards big data. This makes the plotting and testing of empirical distributions more important. However, many continue to rely on visual cues (e.g. classifying a seemingly straight line on a log-log plot as a power law). In 2009, Clauset, Shalizi and Newman published a series of statistical methods to test empirical data sets and classify them into various distributions. The paper has been cited over a thousand times in journals across various disciplines. However, in many of these papers the exponential distribution consistently scores poorly. To understand why there is this apparent systematic bias against the exponential distribution, we sample data points from such a distribution before adding additive and multiplicative noise of different amplitudes. We then perform p-testing on these noisy exponentially distributed data, to see how the p-value decreases with increasing noise amplitude. We do the same for the power-law distribution. Based on these results, we discuss how to perform p-tests in an unbiased fashion across different distributions.
Boon Kin Teh, Darrell Tay and Siew Ann Cheong
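A minimal sketch of the kind of numerical experiment described above: draw an exponentially distributed sample, corrupt it with additive noise of increasing amplitude, refit the distribution, and track a Kolmogorov-Smirnov goodness-of-fit p-value. Note that the full Clauset-Shalizi-Newman procedure estimates the p-value with a semi-parametric bootstrap; the direct KS p-value used here is a simplified stand-in, and the sample size and noise amplitudes are assumptions.

```python
# Sketch: how a KS goodness-of-fit p-value for an exponential fit degrades
# as additive noise is applied to exponentially distributed data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=5000)        # clean exponential sample

for sigma in [0.0, 0.05, 0.1, 0.2, 0.5]:
    noisy = x + rng.normal(0.0, sigma, size=x.size)
    noisy = noisy[noisy > 0]                     # keep the support positive
    scale_mle = noisy.mean()                     # MLE of the exponential scale
    stat, p = stats.kstest(noisy, "expon", args=(0, scale_mle))
    print(f"noise sigma = {sigma:4.2f}  KS p-value = {p:.3f}")
```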