Foundations & Physics  (FP) Session 1

Time and Date: 14:15 - 15:45 on 19th Sep 2016

Room: E - Mendes da Costa kamer

Chair: John Mahoney

141 Human vibrations: the modes of crowd disasters [abstract]
Abstract: Empirically, we observe that in concerts, pilgrimages, parades, Black Friday sales, football matches and other similar social gatherings, the density of people becomes exceptionally high and might give rise to unusual and occasionally tragic collective motions. All these situations are characterised by a high degree of dynamic fluctuations, but little overall motility. While active particle simulations have demonstrated the ability to reproduce much of the phenomenology of human collective motion, high-density scenarios call for a rethinking of conventional analysis approaches. Here, we take inspiration from jammed granular media and eigenmode analysis to understand the mechanisms underlying human collective motion at extreme densities. Vibrational eigenmodes predict the emergence of long-range correlated motions and unstable areas in simulations of high-density crowds. By introducing agitated individuals to account for behavioural heterogeneity, we find that perturbing the eigenmodes enhances the propagation of long-range correlated motions such as, for example, shock waves. If found in real crowds, these emergent mechanisms would provide a simple explanation of how crowd disasters could arise from purely physical and structural considerations. Our approach could provide powerful tools for predicting the onset of dangerous collective motions with applications in crowd management and in the design of public venues.
Arianna Bottinelli, David Sumpter and Jesse Silverberg
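
A minimal sketch of the kind of vibrational eigenmode analysis described in the abstract above (an illustration under simplifying assumptions, not the authors' code): given particle positions and a contact list, build the Hessian of an unstressed harmonic-contact network and diagonalize it; the low-frequency eigenvectors indicate soft, long-range correlated displacement patterns. The toy positions and contacts below are placeholders, and the pre-stress terms present in a real jammed packing are omitted.

import numpy as np

def harmonic_hessian(positions, contacts, k=1.0):
    """Hessian of a 2D network of harmonic springs at their rest length.

    positions : (N, 2) array of particle coordinates
    contacts  : list of (i, j) index pairs in contact
    Returns the (2N, 2N) Hessian (dynamical matrix for unit masses).
    """
    n = len(positions)
    H = np.zeros((2 * n, 2 * n))
    for i, j in contacts:
        d = positions[j] - positions[i]
        nij = d / np.linalg.norm(d)           # unit vector along the contact
        block = k * np.outer(nij, nij)        # stiffness projected onto the bond
        for a, b, sign in ((i, i, +1), (j, j, +1), (i, j, -1), (j, i, -1)):
            H[2*a:2*a+2, 2*b:2*b+2] += sign * block
    return H

# Toy "crowd": a small rigid cluster of 4 agents (two triangles sharing an edge).
pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.87], [1.5, 0.87]])
contacts = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]

H = harmonic_hessian(pos, contacts)
freq2, modes = np.linalg.eigh(H)              # eigenvalues = squared mode frequencies
# The first three modes are the trivial zero modes (translations + rotation).
print("lowest non-trivial mode frequency:", np.sqrt(max(freq2[3], 0.0)))
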
80 Quantum Simplicity: How quantum theory can change what we perceive to be complex [abstract]
Abstract: Computational mechanics describes a sophisticated toolset for understanding the structure and complexity of observational phenomena [1]. It captures the idea that we understand nature through cause and effect – the more complex a process, the more causes one must postulate to model its behaviour. This view motivated statistical complexity – the minimal amount of causal information one needs to record about a phenomenon’s past to model its future statistics – as a popular measure of its intrinsic complexity. The standard framework has generally assumed that we understand nature through classical means, processing classical bits. Nature, however, is intrinsically quantum mechanical, allowing quantum bits that exist in a superposition of 0 and 1. Can such uniquely quantum behaviour unveil more refined views of structure and complexity? In this presentation, I review our work in pioneering quantum models that require provably less causal information than any classical counterpart [2] and describe our ongoing experiments in realizing these models within photonic systems [3]. I then outline recent advances in constructing provably optimal quantum models, and how they demonstrate that quantum statistical complexity can exhibit drastically different qualitative behaviour – falling, for example, when its classical counterpart rises. Thus many observed phenomena could be significantly simpler than classically possible should quantum effects be involved, and existing notions of structure and complexity may ultimately depend on the type of information theory we use. [1] J.P. Crutchfield and K. Young, Phys. Rev. Lett. 63, 105. [2] M. Gu, K. Wiesner, E. Rieper, V. Vedral, Nature Communications 3, 762. [3] M. Palsson, M. Gu, J. Ho, H.M. Wiseman, and G.J. Pryde, arXiv:1602.05683.
Mile Gu, Andrew Garner, Joseph Ho, Mathew Palsson, Geoff Pryde, Elisabeth Rieper, Jayne Thompson, Vlatko Vedral, Karoline Wiesner and Howard Wiseman
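
For readers unfamiliar with the classical quantity discussed above, a toy calculation (our own illustration, not the authors' code): for a two-state "perturbed coin" process of the kind considered in Ref. [2], the causal states coincide with the two Markov states, so the classical statistical complexity is simply the Shannon entropy of their stationary distribution; the quantum models discussed in the talk store provably less.

import numpy as np

def stationary(T):
    """Stationary distribution of a row-stochastic transition matrix T."""
    vals, vecs = np.linalg.eig(T.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

def shannon_entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Perturbed coin: the coin flips its state with probability p at each step.
p = 0.3
T = np.array([[1 - p, p],
              [p, 1 - p]])

pi = stationary(T)
print("classical statistical complexity C_mu =", shannon_entropy(pi), "bits")  # -> 1.0
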
527 The Weak Giant: phase transition in directed graphs with prescribed degree distribution [abstract]
Abstract: From the reactions fueling cells in our bodies to the internet links binding our society into a small world, networks are at the basis of every structure, and random graph theory is the common language widely used to discuss them. Network science is full of empirical data, yet an observer collecting such data is either embedded in the network him/herself, thus viewing it locally, or is distanced far from it, thus observing only the global properties. Indeed, one may study individual servers of the Internet, but the question of the global structure is far less trivial. Or a physicist may observe global properties of a complex material without knowing much about how individual molecules are interconnected. This research shows how one, being at one extreme of this dichotomy, may transit to the other: converting local information into global information, and back. If we take the ‘normal’ notion of a network and replace all (or a portion of) the links with arrows, we obtain a directed network. In such a network every node has a certain probability of having N outgoing arrows and M ingoing arrows – thus a bivariate degree distribution. There are a few generalizations of connected components in this case. The most controversial one is the weak component – the set of nodes one may reach when ignoring the direction of the arrows. It is controversial precisely because it sounds so simple: one may be tempted to say that in this case there is no direction and the task degenerates to a classical problem. But when we possess only a snapshot of local properties – the bivariate degree distribution – we cannot ignore the directional data anymore. This work presents, for the first time, the correct formulation of, and the answer to, the weak-component problem in directed graphs identified by a degree distribution.
Ivan Kryven
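
An illustrative numerical companion to the abstract above (not the paper's analytical result): sample a directed graph with a prescribed in/out-degree sequence using networkx's directed configuration model and measure the fraction of nodes in the largest weakly connected component, the quantity the theory predicts from the degree distribution alone. The Poissonian degrees are an arbitrary choice for the example.

import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n = 10_000

# Prescribed degrees: Poissonian in- and out-degrees with equal means.
out_deg = rng.poisson(1.5, n)
in_deg = rng.poisson(1.5, n)
# The configuration model needs matching totals; patch the difference onto one node.
diff = out_deg.sum() - in_deg.sum()
if diff > 0:
    in_deg[0] += diff
else:
    out_deg[0] -= diff

G = nx.DiGraph(nx.directed_configuration_model(in_deg.tolist(), out_deg.tolist(), seed=0))
largest_wcc = max(nx.weakly_connected_components(G), key=len)
print("weak giant component fraction:", len(largest_wcc) / n)
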
48 Complexity at multivalent receptor interfaces [abstract]
Abstract: Multivalency is the phenomenon that describes the interaction between multivalent receptors and multivalent ligands. It is well known to play a pivotal role in biochemistry, particularly in protein-carbohydrate interactions, both in solution (e.g. for pentavalent cholera toxins) and at interfaces (e.g. in the infection of cells by the attachment of viruses or bacteria to cell membranes). In particular in the latter case, multivalency is often poorly understood in a quantitative sense. Supramolecular host-guest chemistry has been well established in solution, but its use at interfaces remains limited to, for example, sensor development for specific guest compounds. In order to build assemblies at surfaces through supramolecular interactions for nanotechnological applications, other demands have to be met, such as larger thermodynamic and kinetic stabilities of the assemblies. For many supramolecular motifs, this inevitably leads to the use of multivalent interactions. We employ the concept of molecular printboards, which are self-assembled monolayers functionalized with receptor groups suitable for nanofabrication. The design of guest molecules allows precise control over the number of interacting sites and, therefore, over their (un)binding strength and kinetics. A recent focus is on heterotropic multivalency, which is the use of multiple interaction motifs. This has been applied to the controlled, selective, and specific binding of metal-ligand coordination complexes, proteins, antibodies, and even cells. The current paper will focus on less obvious, emergent properties of such assemblies, such as supramolecular expression, non-linear amplification, coherent energy transfer, and multivalent surface diffusion.
Jurriaan Huskens
259 Sequential visibility graph motifs [abstract]
Abstract: Visibility algorithms transform time series into graphs and encode dynamical information in their topology, paving the way for graph-theoretical time series analysis as well as building a bridge between non-linear dynamics and network science. In this work we present sequential visibility graph motifs, small substructures of n consecutive nodes that appear with characteristic frequencies inside visibility graphs. We show that the motif frequency profile is a highly informative feature which can be treated analytically for several classes of deterministic and stochastic processes and is in general computationally efficient to extract. In particular, we have found that this graph feature is surprisingly robust, in the sense that it is still able to distinguish amongst different dynamics even when the signals are polluted with large amounts of observational noise, which enables its use in practical problems such as the classification of empirical time series. As an application, we have tackled the problem of disentangling meditative from general relaxation states using the horizontal visibility graph motif profiles of heartbeat time series of different subjects performing different activities. We have been able to provide a positive, unsupervised solution to this question by applying standard clustering algorithms to this simple feature. Our results suggest that visibility graph motifs provide a mathematically sound, computationally efficient and highly informative feature which can be extracted from any kind of time series and used to describe complex signals and dynamics from a new viewpoint. In direct analogy with the role played by standard motifs in biological networks, further work should evaluate whether visibility graph motifs can be seen as the building blocks of time series. References: 1) Lacasa L. et al. "From time series to complex networks: The visibility graph." PNAS 105.13 (2008). 2) Iacovacci J. and Lacasa L. "Sequential visibility graph motifs." PRE 93 (2016).
Jacopo Iacovacci and Lucas Lacasa
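
A minimal sketch of the procedure described in the abstract above, under the standard horizontal visibility rule (two data points see each other if every value in between is strictly lower than both); the induced subgraph on each window of four consecutive nodes gives one size-4 sequential motif, and the normalized frequency profile is the feature used for classification. This is an illustration, not the authors' code.

import numpy as np
from collections import Counter

def hvg_adjacency(x):
    """Horizontal visibility graph of a 1D series x (dense adjacency matrix)."""
    n = len(x)
    A = np.zeros((n, n), dtype=int)
    for i in range(n - 1):
        running_max = -np.inf                  # largest value strictly between i and j
        for j in range(i + 1, n):
            if running_max < min(x[i], x[j]):  # nothing in between blocks the view
                A[i, j] = A[j, i] = 1
            running_max = max(running_max, x[j])
            if x[j] >= x[i]:                   # everything beyond j is blocked by x[j]
                break
    return A

def sequential_motif_profile(x, size=4):
    """Frequencies of the induced subgraphs on windows of `size` consecutive nodes."""
    A = hvg_adjacency(x)
    counts = Counter()
    for i in range(len(x) - size + 1):
        window = A[i:i + size, i:i + size]
        key = tuple(window[np.triu_indices(size, k=1)])   # upper triangle -> motif label
        counts[key] += 1
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

x = np.random.default_rng(1).random(2000)      # white-noise test series
profile = sequential_motif_profile(x)
print(len(profile), "distinct size-4 motifs observed")
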

Foundations & Physics  (FP) Session 2

Time and Date: 10:45 - 12:45 on 22nd Sep 2016

Room: E - Mendes da Costa kamer

Chair: Ioannis Anagnostou

411 Controllability Criteria for Discrete-Time Non-Linear Dynamical Networks [abstract]
Abstract: Controllability of networked systems with non-linear dynamics remains an interesting challenge with widespread applications to problems ranging from engineering to biology. As a step in this direction, this paper explores global controllability criteria for discrete-time non-linear networks. We identify two classes of non-linear networks: those with non-linear edge dynamics and those with non-linear node dynamics. For each of these classes, we formulate the global controllability matrix and discuss corresponding controllability conditions. In the first case, we obtain a time-dependent controllability matrix, whereas, in the second, we obtain a non-linear operator. We point to a network interpretation of controllability associated with the linear independence of sets of paths from driver nodes to every node of the network and comment on possible applications of our formalism.
Xerxes Arsiwalla, Baruch Barzel and Paul Verschure
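
For context, a sketch of the linear baseline that the abstract above generalizes: for discrete-time linear dynamics x(t+1) = A x(t) + B u(t) on a network, global controllability is equivalent to the Kalman matrix [B, AB, ..., A^(n-1)B] having full rank. The small network and driver-node choice below are illustrative only; the paper's contribution concerns the non-linear extensions of this construction.

import numpy as np

def kalman_controllability_matrix(A, B):
    """Stack [B, AB, A^2 B, ..., A^(n-1) B] for x(t+1) = A x(t) + B u(t)."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# A 3-node directed chain 1 -> 2 -> 3 with a single driver signal at node 1.
A = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
B = np.array([[1.0], [0.0], [0.0]])

C = kalman_controllability_matrix(A, B)
print("controllable:", np.linalg.matrix_rank(C) == A.shape[0])  # True for the chain
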
270 Hamiltonian control to Kuramoto model of synchronization [abstract]
Abstract: Synchronization phenomena have attracted the interest of scientific communities in different fields for a long time. They play a decisive role especially as a self-organizing mechanism, manifest in biology, e.g. the synchronous flashing of fireflies, and in physics, e.g. Josephson junction arrays [1]. In fact, the resonance effect associated with such behaviour is in many cases of vital importance in Nature [1]. However, synchronization is not always the desirable outcome of a physical process. This is the case, for example, of the Millennium Bridge in London [2]. Due to the strong coupling of the bridge's mechanical parts, it started swaying once a certain number of pedestrians attempted to cross it. In this paper we propose a completely unconventional and novel control method for the synchronization problem. The idea is to prevent a set of weakly coupled nonlinear oscillators from phase-synchronizing. Based on a recent work [3] in which a Hamiltonian formulation of the seminal Kuramoto model [4] was presented, we were able to construct a control technique making use of Hamiltonian control methods. Adding a control term of magnitude O(ε²) (where ε is the size of the coupling strength) to the Hamiltonian of the Kuramoto model, the system not only does not synchronize but is also robust against resonance phenomena, which never occur. The results we obtained using a simple paradigmatic model of synchronization show that it is possible to design complex systems, e.g. mechanical structures, immune to any resonance effect by simply making a small modification to the original system. References: [1] S.H. Strogatz, Sync: The Emerging Science of Spontaneous Order, Hyperion (2003). [2] Strogatz, Steven et al., Nature 438, 43–44 (2005). [3] D. Witthaut, M. Timme, Phys. Rev. E 90, 032917 (2014). [4] Y. Kuramoto, Chemical Oscillations, Waves, and Turbulence, New York, Springer-Verlag (1984).
Oltiana Gjata, Malbor Asllani and Timoteo Carletti
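
For reference, a minimal uncontrolled Kuramoto simulation (Euler integration) with the usual order parameter r = |<exp(iθ)>| used to quantify synchronization; the O(ε²) Hamiltonian control term proposed in the talk is not reproduced here, and all parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(42)
N, K, dt, steps = 200, 2.0, 0.01, 5000        # oscillators, coupling, time step, steps

omega = rng.normal(0.0, 1.0, N)               # natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, N)        # initial phases

def order_parameter(theta):
    return np.abs(np.exp(1j * theta).mean())

for _ in range(steps):
    mean_field = np.exp(1j * theta).mean()
    # Kuramoto in mean-field form: dtheta_i/dt = omega_i + K * r * sin(psi - theta_i)
    theta += dt * (omega + K * np.abs(mean_field)
                   * np.sin(np.angle(mean_field) - theta))

print("order parameter r =", order_parameter(theta))   # r near 1 => synchronized
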
493 Concurrent enhancement of percolation and synchronization in adaptive networks [abstract]
Abstract: Co-evolutionary adaptive mechanisms are not only ubiquitous in nature, but also beneficial for the functioning of a variety of systems. We here consider an adaptive network of oscillators with a stochastic, fitness-based rule of connectivity, and show that it self-organizes from fragmented and incoherent states to connected and synchronized ones. The synchronization and percolation are associated with abrupt transitions, and they are concurrently (and significantly) enhanced as compared to the non-adaptive case. Finally, we provide evidence that even partial adaptation is sufficient to produce these enhancements. Our study, therefore, indicates that the inclusion of simple adaptive mechanisms can efficiently describe some emergent features of networked systems' collective behaviors, and also suggests self-organized ways to control synchronization and percolation in natural and social systems.
Guido Caldarelli, Young-Ho Eom and Stefano Boccaletti
21 Modelling the Air-Water Interface [abstract]
Abstract: The air-water interface is of huge importance to a wide range of environmental, biological and industrial chemistry. It shows complex behaviour and continues to surprise both the experimental and theoretical communities. For many years the biological physical chemistry community has highlighted the different behaviour of water in and close to hydrophobic surfaces, such as proteins or lipid membranes. Recent work on ellipsometry at the air-water interface has suggested that the refractive index of the surface region may be significantly higher than that of the bulk water. This higher refractive index would not only imply a significant change of interactions in water at a hydrophobic region, but also impact the interpretation of many non-linear spectroscopic studies, as they rely on the linear optical properties being understood. We attempt to investigate this behaviour using the Amber 12 molecular mechanics software. However, classical molecular mechanics simulations are generally parameterised to accurately recreate bulk mechanical, electronic and thermodynamic properties; the interfacial and surface regions of atomistic and molecular systems tend to be neglected. In order to ensure accurate surface behaviour we have implemented ways to deal with long-range Lennard-Jones corrections in systems containing interfaces, based on the methodology of Janecek. We show how these corrections are important for replicating surface behaviour in water, and present a novel way to thermodynamically estimate surface energetic and entropic terms.
Frank Longford, Jeremy Frey, Jonathan Essex and Chris-Kriton Skylaris
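
As background for the long-range correction discussed above, the standard homogeneous-fluid Lennard-Jones tail correction to the potential energy per particle is sketched below; the slab-adapted scheme of Janecek used in the work resolves this correction as a function of distance from the interface and is not reproduced here. The numerical values in the example are rough, water-like placeholders.

import numpy as np

def lj_energy_tail_correction(rho, epsilon, sigma, r_cut):
    """Standard per-particle LJ tail correction for a homogeneous fluid:
    u_tail = (8/3) * pi * rho * epsilon * sigma^3 * [ (1/3)(sigma/r_cut)^9 - (sigma/r_cut)^3 ]
    """
    x = sigma / r_cut
    return (8.0 / 3.0) * np.pi * rho * epsilon * sigma**3 * (x**9 / 3.0 - x**3)

# Illustrative, roughly water-like numbers in nm and kJ/mol:
# rho ~ 33.4 molecules/nm^3, sigma ~ 0.3166 nm, epsilon ~ 0.650 kJ/mol, cutoff 1.0 nm.
print(lj_energy_tail_correction(rho=33.4, epsilon=0.650, sigma=0.3166, r_cut=1.0), "kJ/mol")
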
177 On the Collatz conjecture: a contracting Markov walk on a directed graph. [abstract]
Abstract: The Collatz conjecture is named after Lothar Collatz, who first proposed it in 1937. The conjecture is also known as the (3x+1) conjecture, the Ulam conjecture, Kakutani's problem, the Thwaites conjecture, Hasse's algorithm or the Syracuse problem. It can be formulated as an innocent problem of arithmetic. Take any positive integer n. If n is even, divide it by 2 to get n/2. If n is odd, multiply it by 3 and add 1 to obtain 3n+1. Repeating the process iteratively, the map is believed to converge to a period-3 orbit formed by the triad {1,2,4}. Equivalently, the conjecture states that the Collatz map will always reach 1, no matter what integer number one starts with. Numerical experiments have confirmed the validity of the conjecture for extraordinarily large values of the starting integer n. The beauty of the conjecture emanates indeed from its apparent, tantalising simplicity, which however hides formidable challenges when one tries to place it on solid ground. In this paper, we provide a novel argument to support the validity of the Collatz conjecture, which, to the best of our knowledge, constitutes the first proof of the claim. The proof exploits the formalism of stochastic maps defined on directed graphs. More specifically, the proof articulates along the following lines: (i) define the (forward) third iterate of the Collatz map and consider the equivalence classes of integer numbers modulo 8; (ii) employ a stochastic approach based on a Markov process to prove the contracting property of such a map on generic orbits; (iii) demonstrate that diverging orbits are not allowed because they would not be compatible with the stationary equilibrium distribution of the Markov process. The proof will be illustrated with emphasis on the methodological aspects that require resorting to the concept of a directed graph.
Timoteo Carletti and Duccio Fanelli
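
For concreteness, the map described above can be iterated directly; the small check below (an illustration only, and obviously not a substitute for the argument discussed in the abstract) verifies that sample starting values reach the {1, 2, 4} cycle and counts the steps needed.

def collatz_reaches_one(n, max_iter=10_000):
    """Iterate the Collatz map from n; return the number of steps to reach 1."""
    steps = 0
    while n != 1 and steps < max_iter:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps if n == 1 else None

for n in (7, 27, 97, 871, 6171):
    print(n, "->", collatz_reaches_one(n), "steps")
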
254 Nanoscale artificial intelligence: creating artificial neural networks using autocatalytic reactions [abstract]
Abstract: A typical feature of many biological and ecological complex systems is their capability to be highly sensitive and responsive to small changes of the values of specific key variables, while being at the same time extremely resilient to a large class of disturbances. The possibility of building artificial systems with these characteristics is of extreme importance for the development of nanomachines and biological circuits with potential medical and environmental applications. The main theoretical difficulty toward the realisation of these devices lies in the lack of a mathematical methodology to design the blueprint of a self-controlled system composed of a large number of microscopic interacting constituents that should operate in a prescribed fashion. Here a general methodology is proposed to engineer a system of interacting components (particles) that is able to self-regulate the concentrations of its constituents in order to produce any prescribed output in response to a particular input. The methodology is based on the mathematical equivalence between artificial neurons in neural networks and species in autocatalytic reactions, and it specifies the relationship between the artificial neural network’s parameters and the rate coefficients of the reactions between particle species. Such systems are characterised by a high degree of robustness, as they are able to reach the desired output despite disturbances and perturbations of the concentrations of the various species. By relating concepts from artificial intelligence to dynamical systems, the results presented here demonstrate the possibility of employing approaches and techniques developed in one field in the other, bringing potential advancements in both disciplines and related applications. Preprint: https://arxiv.org/abs/1602.09070
Filippo Simini

Foundations & Physics  (FP) Session 3

Time and Date: 10:45 - 12:45 on 22nd Sep 2016

Room: F - Rode kamer

Chair: Louis Dijkstra

116 Onset of anomalous diffusion from local motion rules [abstract]
Abstract: Anomalous diffusion processes, in particular superdiffusive ones, are known to be powerful strategies for searching and navigation by animals, and they also appear in human mobility. One way to create such regimes is Lévy flights, in which the walkers are allowed to perform jumps, the “flights”, that can eventually be very long, as their lengths are asymptotically power-law distributed. In our work, we present a model in which walkers are allowed to perform, on a 1D lattice, “cascades” of n unitary steps instead of a single jump as in the Lévy case. In analogy with the Lévy approach, the size of such cascades is distributed according to a power-law-tailed PDF P(n); on the other hand, in contrast with Lévy flights, we do not require a priori knowledge of the jump length since, in our model, the walker follows strictly local rules. We thus show that this local mechanism for the walk indeed gives rise to superdiffusion or normal diffusion according to the power-law exponent of P(n). We also investigate the interplay with the possibility of being stuck on a node, introducing waiting times that are power-law distributed as well. In this case, the competition between the two processes extends the palette of reachable diffusion regimes and, again, this switch relies on the power-law exponents of the two PDFs. As a perspective, our approach may engender a possible generalization of anomalous diffusion in contexts where distances are difficult to define, as in the case of complex networks.
Timoteo Carletti, Sarah de Nigris and Renaud Lambiotte
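
A minimal simulation in the spirit of the model described above, under one reading of the rule (each cascade consists of n unit steps taken in the same random direction, with n drawn from a power-law tail P(n) ∝ n^(-α)); the mean-squared displacement exponent then interpolates between normal and superdiffusive behaviour depending on α. The cascade direction rule, the symbol α and all parameter values are assumptions of this sketch, not the authors' specification.

import numpy as np

rng = np.random.default_rng(0)

def cascade_walk(alpha, n_steps, n_max=10_000):
    """1D walker taking `n_steps` unit steps grouped into cascades whose sizes
    follow P(n) ~ n^(-alpha); each cascade keeps a single random direction."""
    sizes_support = np.arange(1, n_max + 1)
    probs = sizes_support ** (-alpha)
    probs /= probs.sum()
    steps = []
    while len(steps) < n_steps:
        n = rng.choice(sizes_support, p=probs)
        direction = rng.choice([-1, 1])
        steps.extend([direction] * n)
    return np.cumsum(steps[:n_steps])             # trajectory x(t), t = 1..n_steps

def msd_exponent(alpha, n_walks=200, n_steps=2000):
    """Fit <x(t)^2> ~ t^gamma over an ensemble of walks."""
    traj = np.array([cascade_walk(alpha, n_steps) for _ in range(n_walks)])
    msd = (traj ** 2).mean(axis=0)
    t = np.arange(1, n_steps + 1)
    gamma, _ = np.polyfit(np.log(t[10:]), np.log(msd[10:]), 1)
    return gamma

for alpha in (1.5, 2.5, 3.5):
    print(f"alpha={alpha}: MSD exponent ~ {msd_exponent(alpha):.2f}")
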
485 Dynamics on multiplex networks [abstract]
Abstract: We will show some of the recent results from our group concerning dynamics on multiplex networks. On the one hand, we consider multiplex networks as sets of nodes in different layers. In each layer the set of nodes is the same, but the connections among the nodes can differ between layers. Furthermore, the connections among the layers are described by a “network of layers”. We have studied different processes across the layers (diffusion) and between the layers (reaction) [1]. In this case Turing patterns appear as an effect of different average connectivities in different layers [2]. We also show that a multiplex construction in which the layers correspond to contexts in which agents make different sets of connections can make a model of opinion formation show stationary states of coexistence that are not observed in single layers [3]. Finally, as a particular case of a multiplex network, one can also analyze networks that change in time, since in this case each layer of the multiplex corresponds to a snapshot of the interaction pattern. For this situation, we have shown that there are different mechanisms that dominate the diffusion of information in the system, depending on the relative effect of mobility and diffusion among the nodes [4]. [1] Replicator dynamics with diffusion on multiplex networks. R.J. Requejo, A. Diaz-Guilera. arXiv:1601.05658 (2016). [2] Pattern formation in multiplex networks. N.E. Kouvaris, S. Hata & A. Diaz-Guilera. Scientific Reports 5, 10840 (2015). [3] Agreement and disagreement on multiplex networks. R. Amato, N.E. Kouvaris, M. San Miguel and A. Díaz-Guilera, in preparation. [4] Tuning Synchronization of Integrate-and-Fire Oscillators through Mobility. L. Prignano, O. Sagarra, and A. Díaz-Guilera. Phys. Rev. Lett. 110, 114101 (2013).
Albert Diaz-Guilera
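
A compact illustration of a standard ingredient behind the diffusion results mentioned above (a generic sketch, not the group's code): for a two-layer multiplex with intra-layer Laplacians L1, L2 and inter-layer diffusion constant Dx, diffusion is governed by the supra-Laplacian built below; its smallest non-zero eigenvalue sets the relaxation time of the coupled system. The Erdős–Rényi layers and parameter values are placeholders.

import numpy as np
import networkx as nx

n, Dx = 50, 1.0                                    # nodes per layer, inter-layer coupling

L1 = nx.laplacian_matrix(nx.erdos_renyi_graph(n, 0.10, seed=1)).toarray()
L2 = nx.laplacian_matrix(nx.erdos_renyi_graph(n, 0.20, seed=2)).toarray()
I = np.eye(n)

# Supra-Laplacian: intra-layer diffusion on the block diagonal,
# inter-layer diffusion coupling each node to its replica in the other layer.
supra_L = np.block([[L1 + Dx * I, -Dx * I],
                    [-Dx * I,      L2 + Dx * I]])

eigvals = np.sort(np.linalg.eigvalsh(supra_L))
print("algebraic connectivity (lambda_2) of the multiplex:", eigvals[1])
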
284 Promiscuity of nodes in multilayer networks [abstract]
Abstract: The interplay of multiple types of interactions has been of interest in the social sciences for decades. Recent advances in the complexity sciences allow the analysis of such multilayer networks in a quantitative way. The question of to what extent nodes are similarly important in all layers arises naturally. We define the promiscuity of a node as a measure of the variability of its degree across layers. This builds on similar frameworks that investigate such questions in networks with modular structure, while taking into account that different layers can themselves vary in their importance. Using these tools on a range of empirical networks from a variety of disciplines, including transportation, economic and social interactions, and biological regulation, we show that the observed promiscuity distributions differ between networks of different origins. Transportation networks, for example, where the layers represent different modes of transportation, tend to have a majority of low-promiscuity nodes. A few hub nodes with high promiscuity enable transit between different modes of transportation. The representation of global trade as a multilayer network reveals that countries' imports are often very diverse, whereas the exports of some countries depend heavily on a single commodity. Applying the promiscuity to transcription-factor interactions in multiple cell types reveals proteins that are potential biomarkers of cell fate. Despite its simplicity, the presented framework gives novel insights into numerous types of multilayer networks and expands the available toolbox for multilayer network analysis.
Florian Klimm, Gorka Zamora-López, Jonny Wray, Charlotte Deane, Jürgen Kurths and Mason Porter
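
The abstract above does not spell out the exact formula, so the sketch below uses an illustrative proxy of our own choosing: the coefficient of variation of a node's degree across layers (high values roughly meaning the degree is concentrated in few layers, low values meaning it is spread uniformly). The measure actually defined in the talk may differ.

import numpy as np

def promiscuity_proxy(degree_matrix):
    """degree_matrix: (n_nodes, n_layers) array of per-layer degrees.
    Returns, per node, the coefficient of variation of its degree across layers
    (an illustrative stand-in for the promiscuity measure, not the paper's formula)."""
    mean = degree_matrix.mean(axis=1)
    std = degree_matrix.std(axis=1)
    with np.errstate(invalid="ignore", divide="ignore"):
        cv = np.where(mean > 0, std / mean, 0.0)
    return cv

# Toy 3-layer multiplex: node 0 is active in every layer, node 2 in only one.
degrees = np.array([[4, 5, 4],
                    [3, 0, 2],
                    [6, 0, 0]])
print(promiscuity_proxy(degrees))
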
188 Coarse analysis of collective behaviors: Bifurcation analysis of traffic jam formation [abstract]
Abstract: Collective phenomena have been investigated in various fields of science, such as materials science, biological science and social science. Examples of such phenomena are the slacking of granular media, group formation of organisms, jam formation in traffic flow and lane formation of pedestrians. Scientists usually investigate them using only the equations of motion of individuals directly. It is generally difficult to derive the macroscopic laws of collective behaviors from such microscopic models. We take up the challenge of developing a new approach to analyze the macroscopic laws of these phenomena. In this paper, we describe collective behaviors in a low-dimensional space of macroscopic states obtained by dimensionality reduction. Such a space is constructed using Diffusion Maps, a pattern-classification technique. We obtain a few appropriate coarse-grained variables that distinguish the macroscopic states by the similarity of patterns, and we construct the low-dimensional space. The time development of a collective behavior is represented as a trajectory in this space. We apply this method to the optimal velocity model to analyze the macroscopic properties of traffic jam formation. The phenomenon is regarded as a dynamical phase transition of a non-equilibrium system. An important property of the transition is the bistability of jammed flow and free flow. This property has been investigated by many researchers using the model; however, their analyses do not explain it satisfactorily. Using our method, we clearly reveal a bifurcation structure that features the bistability.
Yasunari Miura and Yuki Sugiyama
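
A bare-bones version of the dimensionality-reduction step mentioned above (a generic Diffusion Maps sketch, not the authors' pipeline): build a Gaussian affinity matrix between state snapshots, normalize it into a Markov transition matrix, and use its leading non-trivial eigenvectors as coarse-grained coordinates. The kernel width and the circle-shaped toy data are arbitrary choices for illustration.

import numpy as np

def diffusion_map(snapshots, epsilon, n_coords=2):
    """snapshots: (n_samples, n_features) array of system states.
    Returns (n_samples, n_coords) diffusion-map coordinates."""
    # Pairwise squared distances and Gaussian kernel
    d2 = ((snapshots[:, None, :] - snapshots[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / epsilon)
    # Row-normalize into a Markov matrix and take its leading eigenvectors
    P = K / K.sum(axis=1, keepdims=True)
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # Skip the trivial constant eigenvector (eigenvalue 1)
    coords = vecs[:, order[1:n_coords + 1]].real * vals[order[1:n_coords + 1]].real
    return coords

# Illustrative data: noisy points on a circle, embedded in 2D.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 2 * np.pi, 300))
data = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.normal(size=(300, 2))
print(diffusion_map(data, epsilon=0.5).shape)    # -> (300, 2)
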
352 Design Principles for Self-Assembling Polyomino Tilings [abstract]
Abstract: The self-assembly of simple molecular units into regular 2d (monolayer) lattice patterns continues to provide an exciting intersection between experiment, theory and computational simulation. We study a simple model of polyominoes with edge specific interactions and introduce a visualisation of the configuration space that allows us to identify all possible ground states and the interactions which stabilise them. By considering temperature induced phase transitions away from ground states, we demonstrate kinetic robustness of particular configurations with respect to local rearrangements. We also present a rigorous sampling algorithm for larger lattices where complete enumeration is computationally intractable and discuss common features of the configuration space across different polyomino shapes.
Joel Nicholls, Gareth Alexander and David Quigley

Foundations & Physics  (FP) Session 4

Time and Date: 16:00 - 17:20 on 22nd Sep 2016

Room: L - Grote Zaal

Chair: Vlatko Vedral

353 A Machian Functional Relations Perspective on Complex Systems [abstract]
Abstract: The paper discusses two related questions: where to ‘cut’ system definitions and systemic relations, based on the perspective of the involved stakeholders. Both are historically related to the genetic, historical-critical, monist approach of the psychophysicist Ernst Mach. For the analysis of (causal) interactions in complex systems (Auyang 1998), Simon and Ando (Ando and Simon 1961; see also Shpak et al. 2004) developed the concept of (near) decomposability, where interactions in systems are separated into groups according to the strength of the interactions between the elements of a system. The danger in this assumption is that interactions between groups of variables may be neglected, so that microstate variables can be aggregated into macro-state variables. This assumption may work in the short run under normal conditions, but may fail over longer terms and under unusual conditions. From a ‘complexity / non-linear mathematics’ perspective, ‘small’ effects may, under positive feedback, lead to the crossing of thresholds and to phase transitions, and may then be observed as increased stress, risk and catastrophes in a system’s development (cp. Thom 1989, Jain and Krishna 2002, Sornette 2003). In order to tackle the question of where to ‘cut’ system definition, decomposition and system aggregation, the paper proposes to employ the physicist-psychologist-philosopher Ernst Mach’s genetic perspective on the evolution of knowledge, based on his research in the history of science (Mach 1888, 1905, 1883). Mach suggests replacing causality with functional relations, which describe the relationship between the elements of the measured item and the standard of measurement (Mach 1905, Heidelberger 2010) as functional dependencies of one appearance on another. The paper sketches the links between Mach’s and Simon’s approaches to derive requirements for ‘tools’ to converse about system definition, decomposition, and aggregation (modularization), interrelated with and dependent on scientists’ worldviews.
Carl Henning Reschke
86 Using quantum mechanics to simplify input-output processes [abstract]
Abstract: The black-box behavior of all natural things can be characterized by their input-output response. For example, neural networks can be considered devices that transform sensory inputs into electrical impulses known as spike trains. A goal of quantitative science is then to build mathematical models of such input-output processes, such that they can be used to construct devices that simulate such input-output behavior. In this talk we discuss the simplest models, the ones that can perform such simulations with the least memory – as measured by the device's internal entropy. Such constructions serve a dual purpose. On the one hand they allow us to engineer devices that replicate the desired operational behaviour with minimal memory. On the other, the memory such a model requires tells us the minimal amount of structure any process exhibiting the same input-output behaviour must possess – and is therefore adopted as a way of quantifying the structural complexity of such processes [1]. Here, we demonstrate that the simplest models of general input-output processes are generally quantum mechanical – even if the inputs and outputs are described purely by classical information. Specifically, we first review the provably simplest classical devices that exhibit a particular input-output behaviour, known as epsilon transducers [1]. We then outline recent work on modifying these devices to take advantage of quantum information processing, such that they can enact statistically identical input-output behaviour with reduced memory requirements [2]. This opens the potential for quantum information to be relevant in both the simulation of input-output processes and the study of their structural complexity. [1] Barnett and Crutchfield, Journal of Statistical Physics 161, 404 (2015). [2] J. Thompson et al., Using quantum theory to reduce the complexity of input-output processes, arXiv:1601.05420 (2016).
Jayne Thompson, Andrew Garner, Vlatko Vedral and Mile Gu
450 An Algebraic Formulation of Quivers, Networks and Multiplexes [abstract]
Abstract: An alternative description of complex networks can be given in terms of quivers. These are objects in abstract algebra (they also have a category-theoretic definition). Using this formal machinery, we provide an alternative definition of multiplex networks. Then, identifying the path algebra of a multiplex, we find a gradation in the algebra that leads to the adjacency matrix of the multiplex. In fact, this formulation reveals two types of multiplex networks: the simple-multiplex, where the nodes are as usual, but there is an additional product map in the algebra; and the supra-multiplex, where nodes are replaced by supra-nodes. A supra-node is a collection of nodes belonging to an equivalence class with respect to a given equivalence relation. Though these equivalence classes can themselves be represented as connected graphs, the edges within a supra-node are of a distinct type from those between supra-nodes. By this classification, we identify all the tensorial adjacency matrices that are usually discussed in the multiplex literature as corresponding to supra-multiplexes, whereas the original definition of multiplexes with nodes as basic entities corresponds to simple-multiplexes. The benefit of an algebraic approach is that it helps in parsing these two types of multiplex networks in a precise way, leading to distinct path algebras and adjacency matrices for each. We show that the adjacency matrix of a simple-multiplex requires the construction of a new color-product map. To the best of our knowledge, this is the first formal derivation of this matrix.
Xerxes Arsiwalla, Ricardo Garcia and Paul Verschure
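
To make the gradation mentioned above concrete, here is the standard textbook relationship between a quiver's path algebra and its adjacency matrix (our paraphrase in generic notation, not the authors' multiplex-specific construction):

For a quiver $Q=(Q_0,Q_1)$ with vertex set $Q_0$ and arrow set $Q_1$, the path algebra over a field $k$ is graded by path length,
\[ kQ \;=\; \bigoplus_{\ell \ge 0} (kQ)_\ell , \]
where $(kQ)_0$ is spanned by the trivial paths $e_i$ (one per vertex) and $(kQ)_1$ by the arrows. Writing $A_{ij}$ for the number of arrows $i \to j$, multiplication (concatenation of paths) gives
\[ \dim (kQ)_\ell \;=\; \sum_{i,j} \bigl(A^{\ell}\bigr)_{ij}, \]
so the adjacency matrix $A$, and its powers counting paths of length $\ell$, are read off from the degree-one component of the gradation.
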
326 Network structure of multivariate time series [abstract]
Abstract: Our understanding of a variety of phenomena in physics, biology and economics crucially depends on the analysis of multivariate time series. While a wide range of tools and techniques for time series analysis already exist, the increasing availability of massive data structures calls for new approaches for multidimensional signal processing. We present here a non-parametric method to analyse multivariate time series, based on the mapping of a multidimensional time series into a multilayer network, which allows one to extract information on a high-dimensional dynamical system through the analysis of the structure of the associated multiplex network. The method is simple to implement, general, scalable, does not require ad-hoc phase space partitioning, and is thus suitable for the analysis of large, heterogeneous and non-stationary time series. We show that simple structural descriptors of the associated multiplex networks allow us to extract and quantify nontrivial properties of coupled chaotic maps, including the transition between different dynamical phases and the onset of various types of synchronization. As a concrete example we then study financial time series, showing that a multiplex network analysis can efficiently discriminate crises from periods of financial stability, where standard methods based on time-series symbolization often fail.
Vincenzo Nicosia, Lucas Lacasa and Vito Latora