Real-Time Physiological Activity and Sleep State Monitoring System Using TS2Vec Embeddings and DBSCAN Clustering for Heart Rate and Motor Response Analysis in IoMT
Arifin Arifin, Harmiati Harbi, Andi Silvia Indriani
et al.
Monitoring physiological activity and sleep states in real time is challenging, particularly for continuous assessment in daily life settings using wearable IoMT devices. We developed a 24 h wearable system that integrates electrocardiogram (ECG) electrodes for heart rate measurement and a glove-mounted flex sensor for motor responses, connected through an Internet of Medical Things (IoMT) platform. Flex signals were combined using principal component analysis (PCA) to generate a single kinematic channel, then standardized with heart rate. Time-series windows were embedded using TS2Vec and clustered with DBSCAN, while t-SNE was applied only for visualization. The framework identified four physiologically coherent states: (i) nocturnal sleep with the lowest heart rate and minimal motion, (ii) evening pre-sleep with low movement and moderately higher heart rate, (iii) daytime activity with variable motion and mid-range heart rate, and (iv) late-day high-intensity activity with the highest heart rate and increased motor responses. A few outliers were observed during transient body movements or sensor readjustments, which were identified and excluded during preprocessing to ensure stable clustering results. Across 24 h, heart rate ranged from 52 to 96 bpm (mean 77.4), while flexion spanned 0 to 165° (mean 52.5°), showing alignment between movement intensity and cardiac response. This integrated sensing and analytics pipeline provides an interpretable, subject-specific state map that enables continuous remote monitoring of physiological activity and sleep patterns.
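The pipeline described above (PCA fusion of the flex channels, joint standardization with heart rate, windowing, and density-based clustering) can be sketched with scikit-learn. This is a minimal sketch under stated assumptions: the TS2Vec embedding step is replaced by simple per-window mean features, and all signals, window sizes, and DBSCAN parameters are illustrative stand-ins rather than the authors' data or settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# synthetic stand-ins: five flex-sensor channels and heart rate, 1 Hz for 1 h
flex = rng.normal(size=(3600, 5))
hr = 70 + 10 * np.sin(np.linspace(0, 6 * np.pi, 3600)) + rng.normal(size=3600)

# PCA collapses the flex channels into a single kinematic channel
kinematic = PCA(n_components=1).fit_transform(flex).ravel()
features = StandardScaler().fit_transform(np.column_stack([kinematic, hr]))

# 60 s windows; each window is summarized by its mean features here --
# in the paper the windows are embedded with TS2Vec before clustering
windows = features.reshape(60, 60, 2).mean(axis=1)

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(windows)
print(sorted(set(labels)))
```

Windows labeled -1 by DBSCAN are noise points, which corresponds to the transient-movement outliers the paper excludes during preprocessing.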
Applied mathematics. Quantitative methods
Applying the BF method on the DESI evidence for dynamical dark energy models
Ziad Sakr
Recent baryon acoustic oscillation measurements from DESI, when combined with CMB data and Type Ia supernovae observations, indicate a preference for dynamical dark energy under the Chevallier-Polarski-Linder (CPL) model over the standard ΛCDM or wCDM models. However, the Bayes factor, a key metric for model comparison, remains inconclusive about which model is preferred. This paper applies the BF method, which integrates Bayesian and frequentist approaches, to DESI data to address the limitations of purely frequentist or purely Bayesian analyses. The method treats the Bayes factor as a random variable and computes its distribution from values obtained in a frequentist fashion after perturbing the data according to the model under consideration. We apply this hybrid method to DESI data, comparing the CPL and wCDM models under various prior conditions, including weak, strong, and theory-informed priors. We find that, when the traditional Bayes factor is considered, weak priors favor the wCDM model over CPL, while strong priors favor CPL; theory-informed priors further enhance the preference for wCDM. When we apply the BF method instead, the preference for CPL over wCDM is seen in all cases, albeit with a similar but reduced impact of the different prior choices on the p-value. We also attempted a further generalization by perturbing the covariance matrix as well, following the model considered, and found that in this case the current data are generally not stringent enough to disentangle the two models. Our results demonstrate that treating the Bayes factor as a random variable, provided that the covariance matrix is kept model independent, yields a robust model comparison, reducing the impact of prior dependence and offering a quantitative assessment of the preferences between the competing models. (abridged)
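The core of the BF method, treating the Bayes factor as a random variable and building its frequentist distribution by perturbing the data under a reference model, can be illustrated on a toy nested Gaussian comparison. This is not the DESI likelihood: the two models, the prior width, and the sample size below are illustrative assumptions with analytic marginal likelihoods.

```python
import numpy as np

def log_bf10(y, sigma=1.0, tau=1.0):
    # analytic log Bayes factor for M1 (mu ~ N(0, tau^2)) vs M0 (mu = 0),
    # Gaussian data with known noise level sigma
    n, ybar = len(y), np.mean(y)
    s2, nt2 = sigma**2, n * tau**2
    return 0.5 * np.log(s2 / (s2 + nt2)) + n**2 * ybar**2 * tau**2 / (2 * s2 * (s2 + nt2))

rng = np.random.default_rng(1)
y_obs = rng.normal(0.3, 1.0, size=50)       # "observed" toy data
bf_obs = log_bf10(y_obs)

# BF method: perturb (resample) the data under the reference model M0 and
# build the frequentist distribution of the log Bayes factor
bf_null = np.array([log_bf10(rng.normal(0.0, 1.0, size=50)) for _ in range(2000)])
p_value = np.mean(bf_null >= bf_obs)
print(f"observed log BF = {bf_obs:.2f}, p = {p_value:.3f}")
```

The p-value locates the observed Bayes factor within its sampling distribution, which is the quantitative assessment the paper argues is more robust to the choice of prior than the raw Bayes factor.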
astro-ph.CO, astro-ph.IM
OmicsQ: A User-Friendly Platform for Interactive Quantitative Omics Data Analysis
Xuan-Tung Trinh, André Abrantes da Costa, David Bouyssié
et al.
Motivation: High-throughput omics technologies generate complex datasets with thousands of features that are quantified across multiple experimental conditions, but often suffer from incomplete measurements, missing values and individually fluctuating variances. This requires sophisticated analytical methods for accurate, deep and insightful biological interpretation, capable of dealing with a large variety of data properties and different degrees of completeness. Software to handle such data complexity is rare and mostly relies on programming-based environments, limiting accessibility for researchers without computational expertise. Results: We present OmicsQ, an interactive, web-based platform designed to streamline quantitative omics data analysis. OmicsQ integrates established statistical processing tools with an intuitive, browser-based visualization interface. It provides robust batch correction, automated experimental design annotation, and missing-data handling without imputation, which ensures data integrity and avoids artifacts from a priori assumptions. OmicsQ seamlessly interacts with external applications for statistical testing, clustering, analysis of protein complex behavior, and pathway enrichment, offering a comprehensive and flexible workflow from data import to biological interpretation that is broadly applicable to data from different domains. Availability and Implementation: OmicsQ is implemented in R and R Shiny and is available at https://computproteomics.bmb.sdu.dk/app_direct/OmicsQ. Source code and installation instructions can be found at https://github.com/computproteomics/OmicsQ
Optimal control strategies for infectious disease management: Integrating differential game theory with the SEIR model
Awad Talal Alabdala, Yasmin Adel, Waleed Adel
The rapid spread of infectious diseases poses a critical threat to global public health. Traditional frameworks, such as the Susceptible–Exposed–Infectious–Recovered (SEIR) model, have been crucial in elucidating disease dynamics. Nonetheless, these models frequently overlook the strategic interactions between public health authorities and individuals. This research extends the classic SEIR model by incorporating differential game theory to analyze optimal control strategies. By modeling the conflicting objectives of public health authorities aiming to minimize infection rates and intervention costs, and individuals seeking to reduce their infection risk and inconvenience, we derive a Nash equilibrium that provides a balanced approach to disease management. Using Picard’s iterative method, we solve the extended model to determine dynamic, optimal control strategies, revealing oscillatory behavior in public health interventions and individual preventive measures. This comprehensive approach offers valuable insights into the dynamic interactions essential for effective infectious disease control.
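A minimal forward simulation of a controlled SEIR model can clarify the setting. The oscillatory control u(t) below is an assumed stand-in for the Nash-equilibrium strategies derived in the paper (which reports oscillatory interventions), and all parameter values are illustrative, not the authors' calibration.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, sigma, gamma = 0.5, 1 / 5.2, 1 / 10     # illustrative SEIR rates
u = lambda t: 0.15 * (1 + np.sin(t / 10))     # assumed oscillatory control in [0, 0.3]

def seir(t, x):
    S, E, I, R = x
    lam = (1 - u(t)) * beta * S * I           # control damps transmission
    return [-lam, lam - sigma * E, sigma * E - gamma * I, gamma * I]

sol = solve_ivp(seir, (0, 200), [0.99, 0.0, 0.01, 0.0], max_step=1.0)
S, E, I, R = sol.y
print(f"peak infectious fraction: {I.max():.3f}")
```

In the paper the control is not prescribed but emerges from solving the differential game via Picard iteration; this sketch only shows how a given control feeds back into the SEIR dynamics.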
Applied mathematics. Quantitative methods
STATE OF THE ART OF INCLUSIVE MATHEMATICS EDUCATION IN GRADUATE PROGRAMS OF THE FEDERAL UNIVERSITY OF RONDÔNIA
Walber Christiano Lima da Costa, Idemar Vizolli
This article aims to present the state of the art of research addressing topics in Inclusive Mathematics Education across four graduate programs of the Universidade Federal de Rondônia (UNIR), as part of a study within the project "Mathematics Education in the Brazilian Legal Amazon: a mapping of the research produced between 1992 and 2022 as a basis for public policy", funded under call CNPq/MCTI Nº 10/2023 - Universal. To this end, a bibliographic study was carried out, grounded in the premises of a qualitative state-of-the-art review and based on a survey of the scientific works available on the portals of UNIR's graduate programs: Educação, Educação Escolar, Educação Matemática, and Ensino de Ciências da Natureza. After refinement, 14 master's theses dedicated to Inclusive Mathematics Education were identified. The studies found were organized into three categories: "Policies and Teacher Education from the perspective of Inclusive Mathematics Education", "Methodologies and Pedagogical Practices in Inclusive Mathematics Education", and "Professionals of Inclusive Mathematics Education". The results highlight the scarcity of doctoral and master's theses addressing Inclusive Mathematics Education, particularly concerning people with disabilities, pervasive developmental disorders, and high abilities or giftedness. Encouraging research of this nature will certainly improve teaching and learning conditions in educational processes.
Special aspects of education, Applied mathematics. Quantitative methods
Editorial: Data driven modeling in mathematical biology
Jacques Demongeot, Pierre Magal
Applied mathematics. Quantitative methods, Probabilities. Mathematical statistics
Dynamical Analysis and Electronic Circuit Implementation of Fractional-order Chen System
Abdullah Gökyıldırım
In recent years, there has been a significant surge in interest in studies related to fractional calculus and its applications. Fractional-order analysis holds the potential to enhance the dynamic structure of chaotic systems. This study focuses on the dynamic analysis of the Chen system with low fractional-order values and its fractional-order electronic circuit. Notably, there is a lack of studies about chaotic electronic circuits in the literature with a fractional-order parameter value equal to 0.8, which makes this study pioneering in this regard. Moreover, various numerical analyses are presented to investigate the system's dynamic characteristics and complexity, such as chaotic phase planes and bifurcation diagrams. As anticipated, the voltage outputs obtained from PSpice simulations demonstrated good agreement with the numerical analysis.
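A standard Grünwald-Letnikov discretization gives a minimal numerical sketch of a fractional-order Chen system. The order q = 0.9, step size, horizon, and initial state below are illustrative choices (the paper's circuit realizes q = 0.8), and this is not the authors' implementation.

```python
import numpy as np

# classic Chen parameters; q is the fractional order (illustrative value)
a, b, c = 35.0, 3.0, 28.0
q, h, N = 0.9, 0.002, 2000

def chen(x, y, z):
    return np.array([a * (y - x), (c - a) * x - x * z + c * y, x * y - b * z])

# Grunwald-Letnikov binomial weights: w_0 = 1, w_j = w_{j-1} * (1 - (1 + q) / j)
w = np.ones(N + 1)
for j in range(1, N + 1):
    w[j] = w[j - 1] * (1.0 - (1.0 + q) / j)

X = np.zeros((N + 1, 3))
X[0] = [-3.0, 2.0, 20.0]
for k in range(1, N + 1):
    memory = w[1:k + 1] @ X[k - 1::-1]      # fractional memory over the history
    X[k] = h**q * chen(*X[k - 1]) - memory
print("final state:", np.round(X[-1], 3))
```

The memory term is what distinguishes the fractional-order system from its integer-order counterpart: every past state contributes through the slowly decaying binomial weights.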
Electronic computers. Computer science, Applied mathematics. Quantitative methods
A computationally effective time-restricted stability preserving H2-optimal model order reduction approach
Xin Du, Kife I. Bin Iqbal, M. Monir Uddin
et al.
Approaches for model order reduction restricted to definite time segments have been investigated in a series of papers, but they remain challenging to apply in large-scale settings. The subject of this paper is a computationally efficient time-restricted H2-optimal model order reduction method for high-dimensional sparse systems, which requires the solutions of time-restricted Lyapunov and Sylvester equations. We focus on developing algorithms to solve these matrix equations, whose main difficulty lies in computing the matrix exponential of large-scale matrices; an efficient remedy for computing the matrix exponential is therefore also proposed. Our ideas are evaluated for index-1 descriptor systems in addition to the generalized structure. Numerical experiments on several benchmark examples illustrate the accuracy and efficiency of the suggested approaches in comparison with existing methods.
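The time-restricted Gramian at the heart of such methods can be sketched for a small dense system: it solves a Lyapunov equation whose right-hand side is modified by the matrix exponential, which is exactly the ingredient that becomes expensive at large scale. The matrices below are random illustrative stand-ins, not one of the paper's benchmarks.

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

rng = np.random.default_rng(0)
n, m, T = 6, 2, 2.0
A = rng.normal(size=(n, n)) - 3.0 * np.eye(n)   # shifted to make a stable test matrix
B = rng.normal(size=(n, m))

# time-restricted controllability Gramian P = int_0^T e^{At} B B' e^{A't} dt
# satisfies A P + P A' + B B' - F B B' F' = 0 with F = e^{AT}
F = expm(A * T)
Q = B @ B.T - F @ B @ B.T @ F.T
P = solve_continuous_lyapunov(A, -Q)

# sanity check against direct trapezoidal quadrature of the integral
ts = np.linspace(0.0, T, 2001)
vals = np.array([expm(A * t) @ B @ B.T @ expm(A.T * t) for t in ts])
P_quad = ((vals[0] + vals[-1]) / 2 + vals[1:-1].sum(axis=0)) * (ts[1] - ts[0])
err = np.abs(P - P_quad).max()
print(f"max deviation from quadrature: {err:.2e}")
```

At large scale one cannot form e^{AT} densely as done here, which is precisely the bottleneck the paper's matrix-exponential remedy addresses.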
Applied mathematics. Quantitative methods
Can we infer microscopic financial information from the long memory in market-order flow?: a quantitative test of the Lillo-Mike-Farmer model
Yuki Sato, Kiyoshi Kanazawa
In financial markets, the market order sign exhibits strong persistence, widely known as the long-range correlation (LRC) of order flow; specifically, the sign correlation function displays long memory with power-law exponent $\gamma$, such that $C(\tau) \propto \tau^{-\gamma}$ for large time lag $\tau$. One of the most promising microscopic hypotheses is order-splitting behaviour at the level of individual traders. Indeed, Lillo, Mike, and Farmer (LMF) introduced in 2005 a simple microscopic model of order-splitting behaviour, which predicts that the macroscopic sign correlation is quantitatively associated with the microscopic distribution of metaorders. While this hypothesis has been a central issue of debate in econophysics, a direct quantitative validation has been missing because it requires large, high-resolution microscopic datasets revealing the order-splitting behaviour of all individual traders. Here we present the first quantitative validation of the LMF prediction by analysing a large microscopic dataset from the Tokyo Stock Exchange covering more than nine years. Classifying all traders as either order-splitting traders or random traders via statistical clustering, we directly measured the metaorder-length distributions $P(L) \propto L^{-\alpha-1}$ as the microscopic parameter of the LMF model and examined the theoretical prediction for the macroscopic order correlation: $\gamma \approx \alpha - 1$. We discover that the LMF prediction agrees with the actual data even at the quantitative level. Our work provides the first solid support for the microscopic model and directly solves a long-standing problem in the fields of econophysics and market microstructure.
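A minimal simulation in the spirit of the LMF model shows how trader-level order splitting generates persistent order signs. The trader count, the heavy-tailed length distribution, and the lags below are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = 1.5                          # metaorder-length tail exponent

def draw_length():
    # discrete heavy-tailed metaorder length with P(L) ~ L^(-alpha-1)
    return int(rng.pareto(alpha)) + 1

# LMF-style market: at each step a randomly chosen splitting trader submits
# the next child order of its current metaorder
K, T = 5, 100_000
signs = np.empty(T, dtype=np.int64)
remaining = np.zeros(K, dtype=np.int64)
sign_of = np.ones(K, dtype=np.int64)
for t in range(T):
    i = rng.integers(K)
    if remaining[i] == 0:            # start a new metaorder with a random sign
        remaining[i] = draw_length()
        sign_of[i] = rng.choice([-1, 1])
    signs[t] = sign_of[i]
    remaining[i] -= 1

# the sign autocorrelation decays slowly; LMF theory predicts gamma ~ alpha - 1
s = signs - signs.mean()
acf = np.array([np.mean(s[:-k] * s[k:]) for k in (1, 10, 100)]) / np.mean(s * s)
print("sign ACF at lags 1, 10, 100:", np.round(acf, 3))
```

Only orders from the same metaorder are correlated here, so the persistence is driven entirely by the tail of the length distribution, which is the mechanism the paper validates empirically.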
q-fin.TR, cond-mat.stat-mech
Rician likelihood loss for quantitative MRI using self-supervised deep learning
Christopher S. Parker, Anna Schroder, Sean C. Epstein
et al.
Purpose: Previous quantitative MR imaging studies using self-supervised deep learning have reported biased parameter estimates at low SNR. Such systematic errors arise from the choice of Mean Squared Error (MSE) loss function for network training, which is incompatible with Rician-distributed MR magnitude signals. To address this issue, we introduce the negative log Rician likelihood (NLR) loss. Methods: A numerically stable and accurate implementation of the NLR loss was developed to estimate quantitative parameters of the apparent diffusion coefficient (ADC) model and intra-voxel incoherent motion (IVIM) model. Parameter estimation accuracy, precision and overall error were evaluated in terms of bias, variance and root mean squared error and compared against the MSE loss over a range of SNRs (5 - 30). Results: Networks trained with NLR loss show higher estimation accuracy than MSE for the ADC and IVIM diffusion coefficients as SNR decreases, with minimal loss of precision or total error. At high effective SNR (high SNR and small diffusion coefficients), both losses show comparable accuracy and precision for all parameters of both models. Conclusion: The proposed NLR loss is numerically stable and accurate across the full range of tested SNRs and improves parameter estimation accuracy of diffusion coefficients using self-supervised deep learning. We expect the development to benefit quantitative MR imaging techniques broadly, enabling more accurate parameter estimation from noisy data.
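A numerically stable negative log Rician likelihood can be written with the exponentially scaled Bessel function, using log I0(x) = log(ive(0, x)) + x to avoid overflow. The grid-search ADC fit below is a self-contained illustration of the loss on synthetic Rician data under stated assumptions (b-values, noise level, pooled voxels); it is not the authors' network training setup.

```python
import numpy as np
from scipy.special import ive

def rician_nll(m, nu, sigma):
    # negative log Rician likelihood of magnitude m given true signal nu;
    # log I0(x) = log(ive(0, x)) + x keeps the Bessel term finite for large x
    x = m * nu / sigma**2
    log_i0 = np.log(ive(0, x)) + x
    return -(np.log(m) - np.log(sigma**2)
             - (m**2 + nu**2) / (2 * sigma**2) + log_i0)

# ADC model S(b) = S0 exp(-b * ADC), fitted by grid search over pooled voxels
rng = np.random.default_rng(0)
b = np.array([0.0, 200.0, 500.0, 1000.0])            # s/mm^2 (illustrative)
true_adc, s0, sigma = 1.5e-3, 1.0, 0.15              # low-SNR regime
clean = s0 * np.exp(-b * true_adc)
noise = rng.normal(0, sigma, (2000, 4)) + 1j * rng.normal(0, sigma, (2000, 4))
m = np.abs(clean + noise)                            # Rician magnitudes

adcs = np.linspace(0.5e-3, 3.0e-3, 251)
preds = s0 * np.exp(-b[None, :] * adcs[:, None])     # (251, 4)
nll = rician_nll(m[:, None, :], preds[None, :, :], sigma).sum(axis=(0, 2))
adc_hat = adcs[int(np.argmin(nll))]
print(f"NLR-estimated ADC = {adc_hat:.2e} (true {true_adc:.1e})")
```

An MSE fit on the same magnitudes would be biased upward at low SNR because the Rician noise floor inflates the low-signal measurements; the likelihood above accounts for that floor explicitly.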
Internet of Spacecraft for Multi-Planetary Defense and Prosperity
Yiming Huo
Recent years have seen unprecedentedly fast-growing prosperity in the commercial space industry. Several privately funded aerospace manufacturers, such as Space Exploration Technologies Corporation (SpaceX) and Blue Origin, have transformed what we used to know about this capital-intensive industry and gradually reshaped the future of human civilization. As private spaceflight and multi-planetary immigration gradually become realities rather than science fiction (sci-fi) and theory, both opportunities and challenges will be presented. In this article, we first review the progress in space exploration and the underlying space technologies. Next, we revisit the K-Pg extinction event and the Chelyabinsk event and discuss prospects for extra-terrestrialization, terraformation, and planetary defense, including the emerging near-Earth object (NEO) observation and NEO impact avoidance technologies and strategies. Furthermore, a framework for the Solar Communication and Defense Networks (SCADN) with advanced algorithms and high efficacy is proposed to enable an Internet of distributed deep-space sensing, communications, and defense to cope with disastrous incidents such as asteroid/comet impacts. Finally, perspectives on the legislation, management, and supervision of founding the proposed SCADN are discussed in depth.
Applied mathematics. Quantitative methods
Numerical study for a second order Fredholm integro-differential equation by applying Galerkin-Chebyshev-wavelets method
Youcef Henka, Samir Lemita, Mohamed Zine Aissaoui
Applied mathematics. Quantitative methods
Local stability analysis of two density-dependent semelparous species in two age classes
Arjun Hasibuan, Asep K. Supriatna, Ema Carnia
It is crucial to take the dynamics of a species into account when investigating how it may survive in an environment. A species can be classified as either semelparous or iteroparous depending on how it reproduces. In this article, we present a model of two semelparous species with two age classes. We specifically discuss the effects of density dependence on the interaction between the two semelparous species and examine the equilibria of the system in the absence and presence of harvesting. The local stability of the equilibria is also investigated. We use a modified Leslie matrix population model with a density-dependent term added to the equations, and analyze it both with and without competition between the species. We assume that density dependence acts only in the first age class of both species and that harvesting occurs only in the second age class. Competition is likewise assumed to occur only in the first age class, in the form of interspecific and intraspecific competition; this assumption is intended to keep the model tractable. Our results show that there are three equilibria in the model without competition and four equilibria in the model with competition, so the presence of competition influences the number of equilibria. We also investigate the relation between the stability of the equilibria and the net reproduction rate of the system. Furthermore, we find a condition for the local stability of the coexistence equilibrium point, which is related to the degree of interspecific and intraspecific competition. This theory may be applied to investigate the dynamics of natural resources, both in the absence of human exploitation and under various strategies for managing the exploitation of resources, such as in the fisheries industry.
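The role of the net reproduction rate as a persistence threshold can be illustrated with a small two-age-class map for a single species. The Ricker-type density dependence in the first age class and proportional harvesting of the second age class are illustrative modeling choices, not the paper's exact equations.

```python
import numpy as np

def simulate(f, s=0.5, c=0.02, h=0.1, steps=400, n1=10.0, n2=5.0):
    # two-age-class semelparous map: density-dependent survival in the first
    # age class and proportional harvesting h of the second (reproductive)
    # age class -- an illustrative parameterization
    for _ in range(steps):
        n1, n2 = f * (1 - h) * n2, s * n1 * np.exp(-c * n1)
    return n1, n2

# the net reproduction rate R0 = f * s * (1 - h) separates extinction
# (R0 < 1) from persistence (R0 > 1)
low = simulate(f=2.0)    # R0 = 0.9
high = simulate(f=4.0)   # R0 = 1.8
print("R0=0.9 ->", low, "  R0=1.8 ->", high)
```

For R0 > 1 the trajectory settles at the positive equilibrium n1* = ln(R0)/c, mirroring the paper's link between the net reproduction rate and the stability of the equilibria.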
Applied mathematics. Quantitative methods, Probabilities. Mathematical statistics
A library of quantitative markers of seizure severity
Sarah J. Gascoigne, Leonard Waldmann, Mariella Panagiotopoulou
et al.
Purpose: Understanding fluctuations of seizure severity within individuals is important for defining treatment outcomes and response to therapy, as well as developing novel treatments for epilepsy. Current methods for grading seizure severity rely on qualitative interpretations from patients and clinicians. Quantitative measures of seizure severity would complement existing approaches for EEG monitoring, outcome monitoring, and seizure prediction. Therefore, we developed a library of quantitative electroencephalographic (EEG) markers that assess the spread and intensity of abnormal electrical activity during and after seizures. Methods: We analysed intracranial EEG (iEEG) recordings of 1056 seizures from 63 patients. For each seizure, we computed 16 markers of seizure severity that capture the signal magnitude, spread, duration, and post-ictal suppression of seizures. Results: Quantitative EEG markers of seizure severity distinguished focal vs. subclinical seizures and focal vs. focal-to-bilateral tonic-clonic (FTBTC) seizures across patients. In individual patients, 71% had a moderate to large difference (rank-sum r > 0.3) between focal and subclinical seizures in three or more markers. Circadian and longer-term changes in severity were found for 67% and 53% of patients, respectively. Conclusion: We demonstrate the feasibility of using quantitative iEEG markers to measure seizure severity. Our quantitative markers distinguish between seizure types and are therefore sensitive to established qualitative differences in seizure severity. Our results also suggest that seizure severity is modulated over different timescales. We envisage that our proposed seizure severity library will be expanded and updated in collaboration with the epilepsy research community to include more measures and modalities.
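Two markers in the spirit of such a library, a per-channel intensity measure (line length) and a spread measure (fraction of recruited channels), can be sketched on synthetic multichannel data. The definitions and thresholds below are illustrative stand-ins, not the authors' sixteen markers.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, n_ch = 256, 16                           # synthetic iEEG: 16 channels, 10 s
x = rng.normal(size=(n_ch, fs * 10))
t = np.arange(fs * 4) / fs
x[:6, fs * 3:fs * 7] += 6 * np.sin(2 * np.pi * 8 * t)   # "ictal" burst, 6 channels

def severity_markers(x, onset, offset):
    # illustrative markers (not the paper's exact definitions):
    # line length as intensity, variance-based recruitment as spread
    ictal, baseline = x[:, onset:offset], x[:, :onset]
    line_length = np.abs(np.diff(ictal, axis=1)).mean(axis=1)
    recruited = ictal.std(axis=1) > 2 * baseline.std(axis=1)
    return {"mean_line_length": float(line_length.mean()),
            "spread": float(recruited.mean())}            # fraction of channels

m = severity_markers(x, fs * 3, fs * 7)
print(m)
```

Here the spread marker recovers the 6 of 16 channels carrying the burst; across many seizures, such per-seizure marker vectors are what allow severity to be compared within and between patients.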
Quantitative Imaging Principles Improves Medical Image Learning
Lambert T. Leong, Michael C. Wong, Yannik Glaser
et al.
Fundamental differences between natural and medical images have recently favored the use of self-supervised learning (SSL) over ImageNet transfer learning for medical image applications. The differences between the image types arise primarily from the imaging modality: medical images utilize a wide range of physics-based techniques, whereas natural images are captured using only visible light. While many have demonstrated that SSL on medical images results in better downstream task performance, our work suggests that more performance can be gained. The scientific principles used to acquire medical images are rarely considered when constructing learning problems. For this reason, we propose incorporating quantitative imaging principles during generative SSL to improve image quality and quantitative biological accuracy. We show that this training schema results in better starting states for downstream supervised training on limited data. Our model also generates images that validate on clinical quantitative analysis software.
Fuzzy Optimization Model for Decision-Making in Supply Chain Management
Jui-Fang Chang, Chao-Jung Lai, Chia-Nan Wang
et al.
Choosing a supplier is a complex decision-making process that can reduce the total cost of production inputs and increase profits without increasing the price or sacrificing product quality. However, supplier selection processes usually involve multiple quantitative and qualitative criteria, which increase the complexity of the problem and may decrease the accuracy and effectiveness of the process. Such complex decision-making problems can be supported by multicriteria decision-making (MCDM) models. While multiple MCDM models exist to support supplier selection in different industries and sectors, only a few have been developed for supplier selection in the garment industry, especially under an uncertain decision-making environment. This paper presents an integrated mathematical model under a fuzzy environment and applies it to the supplier selection process in the garment industry. In this research, the authors combine the Buckley extension based fuzzy Analytical Hierarchy Process (FAHP) method with the linear normalization based fuzzy Grey Relational Analysis (F-GRA) method to develop an MCDM approach to supplier selection under a fuzzy environment. As a result, supplier 08 (SA08) is identified as the optimal supplier. The contribution of this work is an MCDM model for ranking potential suppliers in the garment industry under a fuzzy environment. The proposed approach can also be applied to support complex decision-making processes under fuzzy environments in other industries.
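The grey relational analysis step can be sketched in crisp (non-fuzzy) form: alternatives are normalized, compared against the ideal sequence, and ranked by a weighted grey relational grade. The scores and weights below are illustrative, and the paper's actual method uses fuzzy numbers in both the AHP weighting and the GRA ranking.

```python
import numpy as np

# illustrative decision matrix: three suppliers scored on three benefit criteria
scores = np.array([[7, 5, 9],     # supplier A: cost score, quality, delivery
                   [6, 8, 7],     # supplier B
                   [9, 6, 6]])    # supplier C
weights = np.array([0.5, 0.3, 0.2])   # e.g. from an AHP pairwise-comparison step

# min-max normalization per criterion, then distance to the ideal sequence (1s)
norm = (scores - scores.min(0)) / (scores.max(0) - scores.min(0))
delta = np.abs(1.0 - norm)

# grey relational coefficients with the usual distinguishing coefficient 0.5
xi = (delta.min() + 0.5 * delta.max()) / (delta + 0.5 * delta.max())
grade = xi @ weights                   # weighted grey relational grade
best = int(np.argmax(grade))
print("grey relational grades:", np.round(grade, 3), "-> best supplier index:", best)
```

The fuzzy variant replaces each crisp score and weight with a triangular fuzzy number and defuzzifies the resulting grades before ranking.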
13 citations
Computer Science
Presolving linear bilevel optimization problems
Thomas Kleinert, Julian Manns, Martin Schmidt
et al.
Linear bilevel optimization problems are known to be strongly NP-hard, and the computational techniques for solving them are often motivated by techniques from single-level mixed-integer optimization. Thus, over the last years and decades, many branch-and-bound methods, cutting planes, and heuristics have been proposed. On the other hand, there is almost no literature on presolving linear bilevel problems, although presolve is a very important ingredient in state-of-the-art mixed-integer optimization solvers. In this paper, we carry over standard presolve techniques from single-level optimization to bilevel problems and show that this needs to be done with great caution, since a naive application of well-known techniques often does not lead to correctly presolved bilevel models. Our numerical study shows that presolve can be very beneficial for bilevel problems as well, but also highlights that these methods have a more heterogeneous effect on the solution process compared to what is known from single-level optimization. As a side result, our numerical experiments reveal an urgent need for better and more heterogeneous test instance libraries to further propel the field of computational bilevel optimization.
Applied mathematics. Quantitative methods, Electronic computers. Computer science
Application of Quantitative Systems Pharmacology to guide the optimal dosing of COVID-19 vaccines
Mario Giorgi, Rajat Desikan, Piet H. van der Graaf
et al.
Optimal use and distribution of COVID-19 vaccines involves adjustments of dosing. Due to the rapidly evolving pandemic, such adjustments often need to be introduced before full efficacy data are available. As demonstrated in other areas of drug development, quantitative systems pharmacology (QSP) is well placed to guide such extrapolation in a rational and timely manner. Here we propose for the first time how QSP can be applied in real time in the context of COVID-19 vaccine development.
Soil water content estimation using ground penetrating radar data via group intelligence optimization algorithms: An application in the Northern Shaanxi Coal Mining Area
Fan Cui, Jianyu Ni, Yunfei Du
et al.
The determination of the quantitative relationship between the soil dielectric constant and water content is an important basis for measuring soil water content with ground penetrating radar (GPR) technology. The calculation of soil volumetric water content from GPR data is usually based on the classic Topp formula. However, there are large errors between measured and calculated values when using this formula, and it cannot be flexibly applied to different media. To solve these problems, a combination of GPR and shallow drilling is first used to calibrate the wave velocity and thereby obtain an accurate dielectric constant. Then, combined with experimentally measured moisture content, group intelligence optimization algorithms are applied to build accurate mathematical models relating the relative dielectric constant to volumetric water content, and the Topp formula is revised for sand and clay media. Compared with the classic Topp formula, the average error rate is decreased by nearly 15.8% for sand and by 31.75% for clay, greatly improving the calculation accuracy of the formula. This demonstrates that the revised model is accurate and, at the same time, confirms the soundness of using GPR wave-velocity calibration to compute volumetric water content accurately.
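The classic Topp polynomial and the wave-velocity route to the dielectric constant (eps = (c/v)^2, with v calibrated against a borehole-verified reflector depth) can be sketched directly. The depth and travel-time numbers below are illustrative values, not the paper's field data.

```python
def topp_vwc(eps):
    # classic Topp et al. (1980) relation: volumetric water content
    # (cm^3/cm^3) from the relative dielectric constant
    return -5.3e-2 + 2.92e-2 * eps - 5.5e-4 * eps**2 + 4.3e-6 * eps**3

# wave-velocity calibration: v from the two-way travel time t over a
# borehole-verified depth d, then eps = (c / v)^2
c = 0.3            # speed of light in m/ns
d, t = 1.2, 16.0   # reflector depth (m) and two-way travel time (ns), illustrative
v = 2 * d / t
eps = (c / v) ** 2
print(f"eps = {eps:.1f}, Topp VWC = {topp_vwc(eps):.3f}")
```

The paper's contribution is to replace the fixed Topp coefficients with media-specific coefficients fitted by swarm-type optimization against calibrated (eps, moisture) pairs.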
17 citations
Environmental Science
Measuring Adsorption Capacity of Supported Catalysts with a Novel Quasi‐Continuous Pulse Chemisorption Method
Jens Friedland, Bjarne Kreitz, Heiner Grimm
et al.
An improved pulse‐chemisorption technique is presented, which is proven to be independent of the strength of interaction between adsorptive and catalyst material. The methodology is based on the transient mass balance of the adsorptive, allowing for quantitative evaluation of the pulse‐signal obtained from the experiment. Two experimental strategies for determination of the adsorption capacity are introduced, with and without using an internal standard. The methodology is illustratively discussed based on simulation results and verified by chemisorption experiments of hydrogen and carbon dioxide on a supported nickel catalyst. In addition, temperature and dosing effects are examined and benchmarked with volumetric measurements of the adsorption capacity. The proposed mathematical evaluation method of the measured data can be applied directly to experiments performed in standard equipment for pulse‐chemisorption or in modified catalyst test rigs to measure adsorption capacity before and after reaction experiments. The technique shows potential for determination of sorption kinetics and combination with operando spectroscopy.
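The transient mass balance underlying pulse chemisorption reduces, pulse by pulse, to "dosed minus eluted": uptake stops once consecutive pulses elute completely. The sketch below uses illustrative normalized pulse areas, not the paper's measured signals.

```python
import numpy as np

# illustrative pulse-chemisorption evaluation: each pulse doses n_dose of
# adsorptive; the detected pulse area, normalized to a fully eluted
# (saturated or inert) pulse, gives the fraction that was NOT adsorbed
n_dose = 1.0                                           # umol dosed per pulse
elution = np.array([0.05, 0.2, 0.55, 0.9, 1.0, 1.0])   # normalized pulse areas

uptake_per_pulse = n_dose * (1 - elution)              # mass balance per pulse
capacity = uptake_per_pulse.sum()                      # total adsorption capacity
print(f"adsorption capacity = {capacity:.2f} umol")
```

The internal-standard variant in the paper normalizes each pulse area against a co-dosed inert tracer instead of a saturated reference pulse, which removes the dependence on detector calibration.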
12 citations
Materials Science