Rahul Biswas, SuryaNarayana Sripada, Somabha Mukherjee
et al.
Identifying causal interactions in complex dynamical systems is a fundamental challenge across the computational sciences. Existing functional connectivity methods capture correlations but not causation. Popular causal inference tools such as Granger causality and the Peter-Clark (PC) algorithm address directionality but rely on restrictive assumptions that limit their applicability to high-resolution time-series data, such as the large-scale recordings now standard in neuroscience. Here, we introduce CITS (Causal Inference in Time Series), a nonparametric framework for inferring statistically causal structure from multivariate time series. CITS models dynamics with a structural causal model of arbitrary Markov order and uses statistical tests for lagged conditional independence. We prove consistency under mild assumptions and demonstrate superior accuracy over state-of-the-art baselines across simulated linear, nonlinear, and recurrent neural network benchmarks. Applying CITS to large-scale neuronal recordings from the mouse visual cortex, thalamus, and hippocampus, we uncover stimulus-specific causal pathways and inter-regional hierarchies that align with known anatomy while revealing new functional insights. We further highlight CITS's ability to accurately identify conditional dependencies within small inferred neuronal motifs. These results establish CITS as a theoretically grounded and empirically validated method for discovering interpretable statistically causal networks in neural time series. Beyond neuroscience, the framework is broadly applicable to causal discovery in complex temporal systems across domains.
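This is not the CITS implementation, but a minimal sketch of the kind of lagged conditional-independence test such a framework performs for a Markov order of 1: regress both variables on the conditioning set and correlate the residuals. All variable names, coefficients, and the toy bivariate system below are our own illustration.

```python
import numpy as np

def partial_corr(x, y, z):
    """Partial correlation of x and y given z: correlate OLS residuals."""
    Z = np.column_stack([np.ones_like(z), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

# Toy order-1 system in which X drives Y but not vice versa.
rng = np.random.default_rng(1)
T = 5000
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

# Is Y(t) dependent on X(t-1) given Y(t-1)?  (true causal link: large)
r_causal = partial_corr(y[1:], x[:-1], y[:-1])
# Is X(t) dependent on Y(t-1) given X(t-1)?  (absent by construction: near zero)
r_null = partial_corr(x[1:], y[:-1], x[:-1])
```

A full search of this kind would run such tests over all ordered pairs and lags up to the Markov order, with a calibrated significance threshold in place of these raw correlations.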
Samir Damji, Simrut Kurry, Shazia'Ayn Babul
et al.
Creativity is a core cognitive capacity underlying innovation and adaptive problem solving, yet how it is represented in the brain's intrinsic functional architecture is not fully understood. While resting-state fMRI studies have identified large-scale network correlates associated with differences in creativity, EEG provides the temporal resolution for examining oscillatory dynamics contributing to intrinsic network organization. We examined whether resting-state EEG connectivity patterns are associated with individual differences across multiple creativity-related measures. Thirty healthy young adults completed a multidimensional creativity battery comprising the Inventory of Creative Activities and Achievements (ICAA), the Divergent Association Task (DAT), the Matchstick Arithmetic Puzzles Task (MAPT) and a Self-rating (SR) of creative ability. Graph-theoretical analyses of alpha-band functional connectivity revealed two participant groups, each with distinct patterns of neural activity: Cluster 1 showed reduced global connectivity with relatively preserved left frontal connectivity and greater network modularity; Cluster 0 exhibited stronger overall connectivity strength, reduced modularity and higher local clustering. Notably, Cluster 1 reported higher self-rated creative ability and more frequent engagement in real-world creative activities. These findings suggest that resting-state EEG connectivity patterns are associated with variation in creative self-efficacy and creative engagement, highlighting characteristic patterns of alpha-band network organization observed at rest.
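As a concrete reference point for the graph-theoretical measures mentioned above, the local clustering coefficient of a node is the fraction of its neighbour pairs that are themselves connected. A minimal sketch on a toy undirected graph (the graph is our own, not the study's connectivity data):

```python
import itertools

def local_clustering(adj, node):
    """Fraction of neighbour pairs of `node` that are themselves linked."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for u, v in itertools.combinations(nbrs, 2) if v in adj[u])
    return 2.0 * links / (k * (k - 1))

# Toy graph as symmetric adjacency sets: a triangle 0-1-2 plus a pendant node 3.
adj = {
    0: {1, 2, 3},
    1: {0, 2},
    2: {0, 1},
    3: {0},
}
c0 = local_clustering(adj, 0)  # one of three neighbour pairs is connected
c3 = local_clustering(adj, 3)  # fewer than two neighbours, so 0.0
```

In connectivity analyses this is computed per electrode (node) on a thresholded or weighted functional-connectivity graph and averaged or compared across groups.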
Valentina Giunchiglia, Sharon Curtis, Stephen Smith
et al.
Automated online and app-based cognitive assessment tasks are becoming increasingly popular in large-scale cohorts and biobanks due to advantages in affordability, scalability and repeatability. However, the summary scores that such tasks generate typically conflate the cognitive processes that are the intended focus of assessment with basic visuomotor speeds, testing-device latencies and speed-accuracy tradeoffs. This lack of precision is a fundamental limitation when studying brain-behaviour associations. Previously, we developed a novel modelling approach that leverages continuous performance recordings from large-cohort studies to achieve an iterative decomposition of cognitive tasks (IDoCT), which outputs data-driven estimates of cognitive abilities and of device and visuomotor latencies, whilst recalibrating trial-difficulty scales. Here, we further validate the IDoCT approach with UK Biobank imaging data. First, we examine whether IDoCT can improve ability distributions and trial-difficulty scales from an adaptive picture-vocabulary task (PVT). Then, we confirm that the resultant visuomotor and cognitive estimates associate more robustly with age and education than the original PVT scores. Finally, we conduct a multimodal brain-wide association study with free-text analysis to test whether the brain regions that predict the IDoCT estimates show the expected differential relationships with visuomotor vs. language and memory labels within the broader imaging literature. Our results support the view that the rich performance timecourses recorded during computerised cognitive assessments can be leveraged with modelling frameworks like IDoCT to provide estimates of human cognitive abilities with superior distributions, retest reliabilities and brain-wide associations.
Fahmy Zuhda Bahtiar, Fahmy Fatra, Heru Sugiantoro
et al.
This study aims to determine the effect of hardener volume and NC thinner type (super and ND) on coating results, namely adhesion, gloss, hardness, and gasoline resistance of nitrocellulose (NC) paints. Hardener volumes of 5 ml, 10 ml, and 15 ml were tested with both super and ND thinner types. The object of this research is nitrocellulose paint. The best adhesion was obtained at all hardener volumes (5 ml, 10 ml, and 15 ml) with both super and ND thinners, classified as 5B, i.e. 0% of the coated area removed. The best gloss was obtained at a hardener volume of 15 ml with super thinner, with an average of 33.3 GU (gloss units). The best hardness was produced at all three volumes using super and ND thinners, and the best gasoline resistance was likewise produced at all three volumes with both thinners.
Roberto D. Pascual-Marqui, Kieko Kochi, Toshihiko Kinoshita
Brain function as measured by multichannel EEG recordings can be described to a high level of accuracy by microstates, characterized as a sequence of time intervals within which the sign-invariant normalized scalp electric potential field remains quasi-stable, concatenated by fast transitions. Filtering the EEG has a small effect on the spatial microstate scalp maps, but a large effect on the dynamics (e.g. duration, frequency of occurrence, and transition rates). In addition, spectral power has been found to be strongly correlated with microstate dynamics. Yet the nature of the relation between spectra and microstates remains poorly understood. Here we show that the multivariate EEG cross-spectrum contains sufficient generative information for estimating the microstate scalp maps and their dynamics, demonstrating an underlying fundamental link between the microstate model and the multivariate cross-spectrum. Empirically, based on EEG recordings from 203 participants in eyes-closed resting state, their cross-spectral matrices were computed, from which stochastic EEG was generated. No significant differences were found between the microstate model (maps and dynamics) estimated from the actual EEG and from the stochastic EEG based solely on the cross-spectra. In addition, with the aim of quantifying the spatio-cross-spectral properties of the microstate model, we introduce here the topographic likelihood spectrum, based on the Watson distribution, which provides a frequency-by-frequency account of the contribution of a normalized microstate map to the normalized EEG cross-spectrum, independent of power. The topographic likelihood spectra are distinct for the different microstate maps. In a comparison between eyes-closed and eyes-open conditions, they are shown to differ significantly in frequency-specific patterns.
Axonal growth and guidance at the ventral floor plate is here followed $\textit{in vivo}$ in real time at high resolution by light-sheet microscopy along several hundred micrometers of the zebrafish spinal cord. The recordings show the strikingly stereotyped spatio-temporal control that governs midline crossing. Commissural axons are observed crossing the ventral floor plate midline perpendicularly at about 20 microns/h, in a manner dependent on the Robo3 receptor and with a growth rate minimum around the midline, confirming previous observations. At guidance points, commissural axons decrease their growth rate and their growth cones increase in size. Commissural filopodia appear to interact with the nascent neural network, and thereby trigger immediate plastic and reversible sinusoidal-shaped bending movements of neighboring commissural shafts. Ipsilateral axons extend concurrently, but straight and without bends, at three to six times higher growth rates than commissurals, indicating that they project their path on a substrate-bound surface rather than relying on diffusible guidance cues. Growing axons appeared to be under stretch, an observation relevant for tension-based models of cortical morphogenesis. The $\textit{in vivo}$ observations motivate a discussion of the current distinction between substrate-bound and diffusible guidance cues. The study uses the transparent zebrafish, an experimental model system in which to explore further the cellular, molecular and physical mechanisms involved in axonal growth, guidance and midline crossing through a combination of $\textit{in vitro}$ and $\textit{in vivo}$ approaches.
Cesar C. Ceballos, Rodrigo F. O. Pena, Antonio C. Roque
The temporal dynamics of membrane voltage changes in neurons are controlled by ionic currents. These currents are characterized by two main properties: conductance and kinetics. The hyperpolarization-activated current ($I_{\rm h}$) strongly modulates subthreshold potential changes by shortening excitatory postsynaptic potentials and decreasing their temporal summation. Whereas the shortening of synaptic potentials caused by the $I_{\rm h}$ conductance is well understood, the role of the $I_{\rm h}$ kinetics remains unclear. Here, we use a model of the $I_{\rm h}$ current with either fast or slow kinetics to determine its influence on the membrane time constant ($\tau_m$) of a CA1 pyramidal cell model. Our simulation results show that $I_{\rm h}$ with fast kinetics decreases $\tau_m$ and attenuates and shortens excitatory postsynaptic potentials (EPSPs) more than the slow $I_{\rm h}$. We conclude that the $I_{\rm h}$ activation kinetics can modulate $\tau_m$ and the temporal properties of EPSPs in CA1 pyramidal cells. To elucidate the mechanism by which the $I_{\rm h}$ kinetics controls $\tau_m$, we propose a new concept called the "time scaling factor". Our main finding is that the $I_{\rm h}$ kinetics influences $\tau_m$ by modulating the contribution of the $I_{\rm h}$ derivative conductance to $\tau_m$.
Subjective experience (SE) is part of the ancient mind-body problem, which remains one of the deepest mysteries of science. Despite major advances in many fields, there is still no plausible causal link between SE and its realization in the body. The core issue is the incompatibility of objective (third-person) public science with subjective (first-person) private experience. Any scientific approach to SE assumes that it arose from extended evolutionary processes and that examining evolutionary history should help us understand it. While the core mystery remains, converging evidence from theoretical, experimental, and computational studies yields strong constraints on SE and some suggestions for further research. All animals confront many of the same fitness challenges: they all need some kind of internal model to relate their life goals and actionable sensed information to action. We understand the evolution of the bodily aspects of human perception and emotion, but not of SE. The first evolutionary evidence for SE appears in vertebrates, and much of its neural substrate and simulation mechanism is preserved in mammals and humans. People exhibit the same phenomena, but there remain mysteries of everyday experience that are demonstrably incompatible with current neuroscience. In spite of this limitation, there is considerable progress in understanding the role of SE in the success of prostheses.
Biological neural networks are often modeled as systems of coupled, nonlinear, ordinary or partial differential equations. The number of differential equations used to model a network increases with the size of the network and the level of detail used to model individual neurons and synapses. As one scales up the size of the simulation, it becomes essential to utilize powerful computing platforms. While many tools exist that solve these equations numerically, they are often platform-specific. Further, there is a high barrier of entry to developing flexible platform-independent general-purpose code that supports hardware acceleration on modern computing architectures such as GPUs/TPUs and Distributed Platforms. TensorFlow is a Python-based open-source package designed for machine learning algorithms. However, it is also a scalable environment for a variety of computations, including solving differential equations using iterative algorithms such as Runge-Kutta methods. In this article and the accompanying tutorials, we present a simple exposition of numerical methods to solve ordinary differential equations using Python and TensorFlow. The tutorials consist of a series of Python notebooks that, over the course of five sessions, will lead novice programmers from writing programs to integrate simple one-dimensional ordinary differential equations using Python to solving a large system (1000's of differential equations) of coupled conductance-based neurons using a highly parallelized and scalable framework. Embedded with the tutorial is a physiologically realistic implementation of a network in the insect olfactory system. This system, consisting of multiple neuron and synapse types, can serve as a template to simulate other networks.
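The iterative scheme underlying the tutorials can be stated in a few lines of plain Python before introducing TensorFlow. Below is a minimal sketch of a fixed-step fourth-order Runge-Kutta (RK4) integrator; the exponential-decay equation is our own illustrative example, not taken from the tutorials.

```python
import math

def rk4_step(f, t, y, h):
    """One fixed-size RK4 step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, y0, t0, t1, n):
    """Integrate from t0 to t1 in n equal RK4 steps."""
    h = (t1 - t0) / n
    y, t = y0, t0
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# Exponential decay dy/dt = -y with y(0) = 1; exact solution is exp(-t).
y_end = integrate(lambda t, y: -y, 1.0, 0.0, 1.0, 100)
```

In the TensorFlow versions, the scalar `y` becomes a tensor holding thousands of state variables (voltages, gating variables), so each `rk4_step` advances the whole network in one vectorized, hardware-accelerated operation.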
Ai Wern Chung, Rebekah Mannix, Henry A. Feldman
et al.
The diffuse nature of mild traumatic brain injury (mTBI) impacts brain white-matter pathways with potentially long-term consequences, even after initial symptoms have resolved. To understand post-mTBI recovery in adolescents, longitudinal studies are needed to determine the interplay between highly individualised recovery trajectories and ongoing development. To capture the distributed nature of mTBI and recovery, we employ connectomes to probe the brain's structural organisation. We present a diffusion MRI study of adolescent mTBI subjects scanned one day, two weeks and one year after injury, together with controls. Longitudinal global network changes over time suggest an altered and more 'diffuse' network topology post-injury (specifically, lower transitivity and global efficiency). Stratifying the connectome by its backbone, known as the 'rich-club', we found these network changes were driven by the 'peripheral' local subnetwork by way of increased network density and fractional anisotropy and decreased diffusivities. This increased structural integrity of the local subnetwork may compensate for an injured network, or it may be robust to mTBI and simply exhibit a normal developmental trend. The rich-club also showed lower diffusivities over time relative to controls, potentially indicative of longer-term structural ramifications. Our results show evolving, diffuse alterations in adolescent mTBI connectomes beginning acutely and continuing to one year.
Martina Chiacchiaretta, Mattia Bramini, Anna Rocchi
et al.
Graphene-based materials are the focus of intense research efforts to devise novel theranostic strategies for targeting the central nervous system. In this work, we have investigated the consequences of long-term exposure of primary rat astrocytes to pristine graphene (GR) and graphene oxide (GO) flakes. We demonstrate that GR/GO interfere with a variety of intracellular processes as a result of their internalization through the endo-lysosomal pathway. Graphene-exposed astrocytes acquire a more differentiated morphological phenotype associated with extensive cytoskeletal rearrangements. Profound functional alterations are induced by GO internalization, including the upregulation of inward-rectifying K+ channels and of Na+-dependent glutamate uptake, which are linked to the astrocyte capacity to control extracellular homeostasis. Interestingly, GO-pretreated astrocytes promote the functional maturation of co-cultured primary neurons by inducing an increase in intrinsic excitability and in the density of GABAergic synapses. The results indicate that graphene nanomaterials profoundly affect astrocyte physiology in vitro, with consequences for neuronal network activity. This work supports the view that GO-based materials could be of great interest for addressing pathologies of the central nervous system associated with astrocyte dysfunction.
William G. P. Mayner, William Marshall, Larissa Albantakis
et al.
Integrated information theory provides a mathematical framework to fully characterize the cause-effect structure of a physical system. Here, we introduce PyPhi, a Python software package that implements this framework for causal analysis and unfolds the full cause-effect structure of discrete dynamical systems of binary elements. The software allows users to easily study these structures, serves as an up-to-date reference implementation of the formalisms of integrated information theory, and has been applied in research on complexity, emergence, and certain biological questions. We first provide an overview of the main algorithm and demonstrate PyPhi's functionality in the course of analyzing an example system, and then describe details of the algorithm's design and implementation. PyPhi can be installed with Python's package manager via the command 'pip install pyphi' on Linux and macOS systems equipped with Python 3.4 or higher. PyPhi is open-source and licensed under the GPLv3; the source code is hosted on GitHub at https://github.com/wmayner/pyphi . Comprehensive and continually-updated documentation is available at https://pyphi.readthedocs.io/ . The pyphi-users mailing list can be joined at https://groups.google.com/forum/#!forum/pyphi-users . A web-based graphical interface to the software is available at http://integratedinformationtheory.org/calculate.html .
Konstantin Mergenthaler, Franziska Oschmann, Jeremy Petravicz
et al.
Astrocytes affect neural transmission through tight control, via glutamate transporters, of glutamate concentrations in the direct vicinity of the synaptic cleft and of ambient extracellular glutamate. Their relevance for information representation has been supported by in-vivo studies in ferret and mouse primary visual cortex. In ferret, pharmacologically blocking glutamate transport broadened tuning curves and enhanced the response at the preferred orientation; in knock-out mice with reduced expression of glutamate transporters, sharpened tuning was observed. It is, however, unclear how focal and ambient changes in glutamate concentration affect stimulus representation. Here we develop a computational framework that allows the investigation of synaptic and extrasynaptic effects of glutamate uptake on orientation tuning in recurrently connected network models with pinwheel-domain (ferret) or salt-and-pepper (mouse) organization. The model suggests that glutamate uptake shapes information representation when it affects the relative contributions of excitatory and inhibitory neurons to network activity: strengthening the contribution of excitatory neurons generally broadens tuning and elevates the response, whereas strengthening the contribution of inhibitory neurons can sharpen tuning. Local representational topology also plays a role: in the pinwheel-domain model, effects were strongest within domains, regions where neighboring neurons share preferred orientations, and weaker around pinwheels as well as within salt-and-pepper networks. Our model proposes that the pharmacological intervention in ferret increases the contribution of excitatory cells, while the reduced transporter expression in mouse increases the contribution of inhibitory cells to network activity.
The loss of neuronal cells in the central nervous system occurs in numerous neurodegenerative diseases. Alzheimer's disease (AD) is a complex, irreversible, progressive neurodegenerative disorder. It is the main cause of age-related dementia, affecting roughly 5.3 million individuals in the United States alone. AD is a common debilitating condition in people over 65, causing disability characterized by memory decline, inability to learn and perform daily activities, cognitive impairment, and reduced quality of life. Pathologic hallmarks of AD are abnormal accumulations of specific proteins, beta-amyloid "plaques" and tau "tangles", in the brain. However, current treatments for AD only relieve symptoms and are palliative rather than curative, and several promising drug candidates have failed in recent clinical trials. There is therefore a critical need to improve our understanding of the pathogenesis of this disease and to create new and innovative predictive models with effective treatments. Recently, stem cell therapy has been shown to be a potential approach for various illnesses, including neurodegenerative disorders. Given the widespread nature of AD pathology, stem cell replacement strategies have been viewed as an extraordinarily difficult, perhaps infeasible, treatment approach. Stem cells may also offer an effective new way to model and study AD. Patient-derived induced pluripotent stem cells (iPSCs), for instance, may advance our understanding of disease mechanisms. In this review we examine the potential of stem cells to aid in these challenging endeavors.
Functions of brain areas in complex animals are believed to rely on the dynamics of networks of neurons rather than on single neurons. Conversely, network dynamics reflect and arise from the integration and coordination of the activity of populations of single neurons. Understanding how single-neuron and neural-circuit dynamics complement each other to produce brain function is thus of paramount importance. LFPs and EEGs are good indicators of the dynamics of mesoscopic and macroscopic populations of neurons, while microscopic-level activity can be documented by measuring the membrane potential, synaptic currents or spiking activity of individual neurons. In this thesis we develop mathematical modelling and analysis tools that can help the interpretation of joint measures of neural activity at microscopic and mesoscopic or macroscopic scales. In particular, we develop network models of recurrent cortical circuits that can clarify the impact of several aspects of single-neuron (i.e., microscopic-level) dynamics on the activity of the whole neural population (as measured by the LFP). We then develop statistical tools to characterize the relationship between the action-potential firing of single neurons and mass signals, and apply these analysis techniques to joint recordings of the firing activity of individual, cell-type-identified neurons and of mesoscopic (LFP) and macroscopic (EEG) signals in the mouse neocortex. We identify several general aspects of the relationship between cell-specific neural firing and mass circuit activity, providing, for example, general and robust mathematical rules that infer single-neuron firing activity from mass measures such as the LFP and the EEG.
Piotr Słowiński, Chao Zhai, Francesco Alderisio
et al.
Human movement has been studied for decades, and dynamic laws of motion common to all humans have been derived. Yet every individual moves differently from everyone else (faster/slower, harder/smoother, etc.). We propose here an index of such variability, the individual motor signature (IMS), which captures the subtle differences in the way each of us moves. We show that a person's IMS is time-invariant and that it significantly differs from those of other individuals. This allows us to quantify dynamic similarity, a measure of rapport between the dynamics of different individuals' movements, and to demonstrate that it facilitates coordination during interaction. We use our measure to confirm a key prediction of the theory of similarity: that coordination between two individuals performing a joint-action task is higher if their motions share similar dynamic features. Furthermore, we use a virtual avatar, driven by an interactive cognitive architecture based on feedback control theory, to explore the effects of different kinematic features of the avatar's motion on coordination with human players.
Bingni W. Brunton, Lise A. Johnson, Jeffrey G. Ojemann
et al.
There is a broad need in the neuroscience community to understand and visualize large-scale recordings of neural activity, big data acquired by tens or hundreds of electrodes simultaneously recording dynamic brain activity over minutes to hours. Such dynamic datasets are characterized by coherent patterns across both space and time, yet existing computational methods are typically restricted to analysis either in space or in time separately. Here we report the adaptation of dynamic mode decomposition (DMD), an algorithm originally developed for the study of fluid physics, to large-scale neuronal recordings. DMD is a modal decomposition algorithm that describes high-dimensional dynamic data using coupled spatial-temporal modes; the resulting analysis combines key features of performing principal components analysis (PCA) in space and power spectral analysis in time. The algorithm scales easily to very large numbers of simultaneously acquired measurements. We validated the DMD approach on sub-dural electrode array recordings from human subjects performing a known motor activation task. Next, we leveraged DMD in combination with machine learning to develop a novel method to extract sleep spindle networks from the same subjects. We suggest that DMD is generally applicable as a powerful method in the analysis and understanding of large-scale recordings of neural activity.
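As a concrete illustration, exact DMD can be written in a few lines of NumPy: stack time-shifted snapshot matrices, project the one-step linear operator onto a truncated SVD basis, and read spatial modes and oscillation frequencies off its eigendecomposition. The toy data below (channel count, mixing matrix, frequencies) are our own, not the study's recordings.

```python
import numpy as np

# Toy spatio-temporal data: 6 "channels" mixing oscillations at 2 and 5 rad/s.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)
dt = t[1] - t[0]
mixing = rng.standard_normal((6, 4))
X = mixing @ np.vstack([np.cos(2 * t), np.sin(2 * t),
                        np.cos(5 * t), np.sin(5 * t)])

# Exact DMD: best-fit linear map advancing each snapshot to the next.
X1, X2 = X[:, :-1], X[:, 1:]
U, s, Vh = np.linalg.svd(X1, full_matrices=False)
r = 4                                              # rank truncation
U, s, Vh = U[:, :r], s[:r], Vh[:r]
Atilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1 / s)
eigvals, W = np.linalg.eig(Atilde)
modes = X2 @ Vh.conj().T @ np.diag(1 / s) @ W      # coupled spatial modes

omegas = np.abs(np.angle(eigvals)) / dt            # recovered angular frequencies
```

Each eigenvalue encodes a frequency and a growth/decay rate, and its column of `modes` gives the corresponding spatial pattern across channels, which is the sense in which DMD couples PCA-like spatial structure with spectral temporal structure.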
The connectome, or the entire connectivity of a neural system represented as a network, spans scales from synaptic connections between individual neurons to fibre-tract connections between brain regions. Although the modularity such networks commonly show has been extensively studied, it is unclear whether their connection specificity can be fully explained by modularity alone. To answer this question, we study two networks: the neuronal network of C. elegans and the fibre-tract network of human brains obtained through diffusion spectrum imaging (DSI). We compare them to benchmark networks with varying modularities, generated by link swapping to have the desired modularity values but to be otherwise maximally random. We find several network properties that are specific to the neural networks and cannot be fully explained by modularity alone. First, the clustering coefficient and the characteristic path length of the C. elegans and human connectomes are both higher than those of benchmark networks with similar modularity; a high clustering coefficient indicates efficient local information distribution, while a high characteristic path length suggests reduced global integration. Second, the total wiring length is smaller than for alternative configurations with similar modularity. This is due to a lower dispersion of connections, meaning that each neuron in the C. elegans connectome, or each region of interest (ROI) in the human connectome, reaches fewer ganglia or cortical areas, respectively. Third, both neural networks show lower algorithmic entropy than the alternative arrangements, implying that fewer rules are needed to encode the organisation of these neural systems.
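The link swapping used to generate such benchmarks can be sketched as a plain degree-preserving double-edge swap: pick two edges, rewire them crosswise, and reject any swap that would create a self-loop or multi-edge. The study's procedure additionally steers the swaps toward target modularity values, which we omit here; the toy ring graph is our own.

```python
import random
from collections import Counter

def double_edge_swap(edges, n_swaps, seed=0):
    """Rewire an undirected edge set while preserving every node's degree."""
    rng = random.Random(seed)
    edges = {tuple(sorted(e)) for e in edges}
    done = 0
    while done < n_swaps:
        (a, b), (c, d) = rng.sample(sorted(edges), 2)
        if len({a, b, c, d}) < 4:            # swap would create a self-loop
            continue
        e1, e2 = tuple(sorted((a, c))), tuple(sorted((b, d)))
        if e1 in edges or e2 in edges:       # swap would create a multi-edge
            continue
        edges -= {(a, b), (c, d)}
        edges |= {e1, e2}
        done += 1
    return edges

def degree_sequence(edges):
    deg = Counter()
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return deg

# Toy graph: a 12-node ring; after rewiring, every node still has degree 2.
ring = {(i, (i + 1) % 12) for i in range(12)}
rewired = double_edge_swap(ring, 20)
```

Repeating many such swaps while accepting only those that move a modularity score toward a target value yields benchmark networks that are matched in degree sequence and modularity but otherwise randomized, which is the comparison the study relies on.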