Paul M. Matao, Jumanne Mng’ang’a, B. Prabhakar Reddy
This study investigates the effect of thermal radiation on the fractional magnetohydrodynamic (MHD) Couette flow of a Jeffrey fluid in a vertical channel, incorporating the influences of activation energy and Joule heating. The mathematical model is derived from governing equations that account for the non-Newtonian behavior of the Jeffrey fluid, combined with the effects of thermal radiation, the magnetic field, and activation energy mechanisms. The classical mathematical framework is transformed into a system of fractal fractional-order derivatives using the Caputo–Fabrizio derivative operator, and the resulting system is solved with a finite difference technique. The behavior of the flow fields in response to several significant parameters was analyzed and represented graphically. The velocity distribution rises as the Hall current parameter increases, while a stronger Jeffrey fluid parameter reduces the velocity field. The thermal field exhibits higher values for increased thermal radiation and Joule heating parameters, whereas the temperature distribution declines with increasing Hall current parameter values. The concentration field improves with higher activation energy parameter values, in contrast to the opposite trend observed for the temperature difference and chemical reaction parameters. Furthermore, the fractal fractional-order derivative operator produces a more pronounced boundary layer than both the fractional and classical models. The Nusselt number showed a 15.7% improvement in thermal efficiency as the thermal radiation parameter varied from 2 to 4. These findings are important for applications in geothermal energy extraction and biomedical engineering.
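As a numerical companion to the abstract above, the following is a minimal sketch of how the Caputo–Fabrizio fractional derivative can be approximated, assuming the normalization M(α) = 1 and using the illustrative function f(t) = t (the paper's actual finite difference solver for the coupled flow equations is not reproduced here):

```python
import math

def cf_derivative(f, t, alpha, n=2000):
    """Caputo-Fabrizio fractional derivative of f at time t,
    D^alpha f(t) = 1/(1-alpha) * int_0^t f'(tau) exp(-alpha(t-tau)/(1-alpha)) dtau,
    approximated with a trapezoidal rule (normalization M(alpha) = 1 assumed)."""
    h = t / n
    def integrand(tau):
        # central-difference estimate of f'(tau)
        d = 1e-6
        fprime = (f(tau + d) - f(tau - d)) / (2 * d)
        return fprime * math.exp(-alpha * (t - tau) / (1 - alpha))
    s = 0.5 * (integrand(0.0) + integrand(t))
    for k in range(1, n):
        s += integrand(k * h)
    return s * h / (1 - alpha)

# For f(t) = t the exact CF derivative is (1/alpha)*(1 - exp(-alpha*t/(1-alpha))).
approx = cf_derivative(lambda t: t, 1.0, 0.5)
exact = (1 / 0.5) * (1 - math.exp(-0.5 * 1.0 / 0.5))
```

The closed-form value for f(t) = t provides a convenient check on any discretization of the non-singular exponential kernel.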
This work focuses on assessing the ECG signal quality of data collected with wearable devices specifically tailored for firefighters using machine learning techniques. Firefighters are at heightened cardiac risk due to their challenging working conditions, making wearable sensors crucial for ongoing health monitoring. However, environmental factors such as temperature, radiation, and moisture significantly impact the performance of these sensors and the quality of the collected data. To address these challenges, this work explored supervised learning to classify ECG signals into acceptable and unacceptable segments using only eight cardiac features. Leveraging the ScientISST MOVE dataset, which contains biosignals recorded during various daily activities, the model achieved promising results, namely 88% accuracy and an 87% F1 score. In addition, a case study was performed on ECG data gathered from firefighters under real-world conditions to further corroborate the proposed method. This validation demonstrated that the model performs well for signal-quality assessment in such dynamic, high-stress scenarios.
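The abstract does not specify the eight features or the classifier, so the following is only a minimal sketch of the supervised-classification idea: two hypothetical features (signal-to-noise ratio and baseline-wander amplitude, with made-up values) and a simple perceptron standing in for the actual model:

```python
# Hypothetical feature vectors: (signal-to-noise ratio, baseline-wander amplitude).
# Labels: 1 = acceptable segment, 0 = unacceptable. All values are illustrative.
segments = [
    ((2.0, 1.0), 1), ((3.0, 0.5), 1), ((2.5, 1.2), 1),
    ((0.2, 3.0), 0), ((0.5, 2.5), 0), ((0.1, 4.0), 0),
]

def train_perceptron(data, epochs=100, lr=0.1):
    """Classic perceptron update: adjust weights only on misclassified samples."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

w, b = train_perceptron(segments)
preds = [predict(w, b, x) for x, _ in segments]
```

Because the toy data are linearly separable, the perceptron converges and classifies every training segment correctly; the study's real model would be trained and evaluated on held-out ScientISST MOVE segments instead.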
Pedro Eusebio Alvarado-Méndez, Carlos M. Astorga-Zaragoza, Gloria L. Osorio-Gordillo
et al.
A <inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><msub><mi mathvariant="script">H</mi><mo>∞</mo></msub></semantics></math></inline-formula> robust adaptive nonlinear observer for state and parameter estimation of a class of Lipschitz nonlinear systems with disturbances is presented in this work. The objective is to estimate parameters and monitor the performance of nonlinear processes with model uncertainties. The behavior of the observer in the presence of disturbances is analyzed using Lyapunov stability theory and by considering an <inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><msub><mi mathvariant="script">H</mi><mo>∞</mo></msub></semantics></math></inline-formula> performance criterion. Numerical simulations were carried out to demonstrate the applicability of this observer for a semi-active car suspension. The adaptive observer performed well in estimating the tire rigidity (as an unknown parameter) and induced disturbances representing damage to the damper. The main contribution is the proposal of an alternative methodology for simultaneous parameter and actuator disturbance estimation for a more general class of nonlinear systems.
This study examines the global literature on spatial–visual abilities (SVA), considering the numerous differential studies, the methods of evaluation designed over a century, and the multiple external influences on SVA development. The dataset was retrieved from Google Scholar and publisher databases such as Elsevier, Taylor & Francis, and Springer. Only factual reports and bibliographic reviews were included, for a total of 87 documents analyzed. Each SVA study is classified by source, country, year, and age group. SVA has been studied most extensively in relation to “STEM (Science, Technology, Engineering and Mathematics) fields”, “demographic factors” and “other activities”. “Spatial visualisation” or “visual ability” is the term employed for the cognitive ability that allows one to comprehend, mentally process, and manipulate three-dimensional visuospatial shapes. Spatial aptitude is one of the most crucial distinct abilities involved, aiding in the understanding of numerous aspects of everyday and academic life. It is especially vital for comprehending scientific concepts and has been extensively studied; nearly all multiple-aptitude assessments include spatial ability. Over the past two decades, the study of SVA has gained momentum, most likely because of the digitisation of information. Within the vast reservoir of spatial-cognition research, the majority of the studies examined here originate from the United States of America, with less than a quarter based in the Asia–Pacific region and the Middle East. This paper presents a comprehensive review of the literature on the assessment of SVA with respect to sector, year, country, age and socio-economic factors. It also offers a detailed examination of the use of spatial interventions in educational environments to integrate spatial abilities with training in architecture and interior design.
Endothelial cells form the linchpin of vascular and lymphatic systems, creating intricate networks that are pivotal for angiogenesis, controlling vessel permeability, and maintaining tissue homeostasis. Despite their critical roles, there is no rigorous mathematical framework to represent the connectivity structure of endothelial networks. Here, we develop a pioneering mathematical formalism called $π$-graphs to model the multi-type junction connectivity of endothelial networks. We define $π$-graphs as abstract objects consisting of endothelial cells and their junction sets, and introduce the key notion of $π$-isomorphism that captures when two $π$-graphs have the same connectivity structure. We prove several propositions relating the $π$-graph representation to traditional graph-theoretic representations, showing that $π$-isomorphism implies isomorphism of the corresponding unnested endothelial graphs, but not vice versa. We also introduce a temporal dimension to the $π$-graph formalism and explore the evolution of topological invariants in spatial embeddings of $π$-graphs. Finally, we outline a topological framework to represent the spatial embedding of $π$-graphs into geometric spaces. The $π$-graph formalism provides a novel tool for quantitative analysis of endothelial network connectivity and its relation to function, with the potential to yield new insights into vascular physiology and pathophysiology.
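Only the abstract of the π-graph formalism is given here, so the following is a hypothetical encoding, not the paper's definitions: a π-graph is represented as a mapping from cells to their junction sets, the "unnested" graph joins cells sharing a junction, and a toy counterexample illustrates the claimed one-way implication (π-isomorphism implies unnested-graph isomorphism, but not conversely):

```python
from itertools import permutations

# Hypothetical encoding: a pi-graph maps each endothelial cell to the set of
# junctions it participates in; two cells are adjacent in the "unnested" graph
# when they share at least one junction.
def unnested_edges(pi):
    cells = sorted(pi)
    return {frozenset((u, v)) for i, u in enumerate(cells)
            for v in cells[i + 1:] if pi[u] & pi[v]}

def graphs_isomorphic(edges1, nodes1, edges2, nodes2):
    """Brute-force isomorphism test, adequate for tiny unnested graphs."""
    if len(nodes1) != len(nodes2) or len(edges1) != len(edges2):
        return False
    nodes1, nodes2 = sorted(nodes1), sorted(nodes2)
    for perm in permutations(nodes2):
        mapping = dict(zip(nodes1, perm))
        if {frozenset(mapping[x] for x in e) for e in edges1} == edges2:
            return True
    return False

# Two pi-graphs with the same unnested connectivity (a single a-b edge): the
# pair shares one junction in G1 but two in G2, so any junction-preserving
# notion of pi-isomorphism should distinguish them.
G1 = {"a": {"j1"}, "b": {"j1"}}
G2 = {"a": {"j1", "j2"}, "b": {"j1", "j2"}}

same_unnested = graphs_isomorphic(unnested_edges(G1), set(G1),
                                  unnested_edges(G2), set(G2))
junction_profile = lambda pi: sorted(len(s) for s in pi.values())
same_pi_structure = junction_profile(G1) == junction_profile(G2)
```

The example shows unnested graphs that are isomorphic while the junction-level structure differs, matching the direction of the implication stated in the abstract.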
Modeling-Simulation-Optimization workflows play a fundamental role in applied mathematics. The Mathematical Research Data Initiative, MaRDI, responded to this by developing a FAIR and machine-interpretable template for a comprehensive documentation of such workflows. MaRDMO, a Plugin for the Research Data Management Organiser, enables scientists from diverse fields to document and publish their workflows on the MaRDI Portal seamlessly using the MaRDI template. Central to these workflows are mathematical models. MaRDI addresses them with the MathModDB ontology, offering a structured formal model description. Here, we showcase the interaction between MaRDMO and the MathModDB Knowledge Graph through an algebraic modeling workflow from the Digital Humanities. This demonstration underscores the versatility of both services beyond their original numerical domain.
Samundra Regmi, Ioannis K. Argyros, Santhosh George
et al.
Developments are presented for the semi-local convergence of Newton’s method for solving Banach space-valued nonlinear equations. By utilizing a new methodology, we provide a finer convergence analysis with no additional conditions compared to earlier results. In particular, this is done by introducing a center-Lipschitz condition, by which we construct a stricter domain than the original domain of the operator. The Lipschitz constants on the new domain are then at least as small as the original constants, leading to weaker sufficient convergence criteria, tighter bounds on the error distances involved, and better information on the location of the solution. These benefits are obtained at the same computational cost, since in practice the computation of the original constants requires the computation of the new constants as special cases. The same benefits are obtained if the Lipschitz conditions are replaced by Hölder conditions or even more general ω-continuity conditions. This methodology can be applied along the same lines to other methods, such as the Secant method, Stirling’s method, and Newton-like methods. Numerical examples indicate that the new results can be utilized to solve nonlinear equations in cases where the earlier ones cannot.
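A minimal scalar sketch of the objects involved, assuming the classical Kantorovich-type criterion h = L·β·η ≤ 1/2 (the paper's refined center-Lipschitz constants would replace L with a smaller value on a stricter domain, weakening this criterion):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Plain Newton iteration x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: f(x) = x^2 - 2 near x0 = 1.5, with f'(x) = 2x and f'' = 2,
# so L = 2 is a Lipschitz constant for f'.
f = lambda x: x * x - 2.0
fp = lambda x: 2.0 * x

x0 = 1.5
beta = 1.0 / abs(fp(x0))    # bound on the inverse derivative at x0
eta = abs(f(x0) / fp(x0))   # length of the first Newton step
h = 2.0 * beta * eta        # Kantorovich quantity; h <= 1/2 guarantees convergence
root = newton(f, fp, x0)
```

Here h ≈ 0.056 ≤ 1/2, so semi-local convergence is guaranteed before iterating; smaller (center-)Lipschitz constants make such sufficient criteria easier to satisfy.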
This research is motivated by a real-world industry problem. Environmental concerns caused by the increasing number of internal combustion engine vehicles have generated growing interest in the study and development of electric vehicle (EV) batteries. The cost of EV batteries is critical for the market growth of electric vehicles. As the cell is the most essential component of the EV battery, the cost-effective manufacturing of battery cells is a popular topic in industry and academia. Manufacturers invest billions of dollars in battery cell factories based on predicted EV growth rates. However, these manufacturers require information on total manufacturing costs, plant area, total capital equipment costs, and their cost drivers to run a profitable firm. Motivated by these concerns, an economic production quantity (EPQ) model with a process-based cost modeling (PBCM) technique is developed for the large-scale manufacturing of EV battery cells. The goal of this research is to provide a precise framework for the manufacturer to maximize its profit in cell manufacturing. The data used in this model are collected from the BatPac model (version 4) developed by Argonne National Laboratory. This study considers two types of battery cells used in electric vehicles, and the firm produces 5% defective cells. The PBCM cost estimation method and the EPQ model are combined to generate the total profit function. The production rate and selling price for both cells are treated as decision variables. The profit function is maximized using a genetic algorithm, and graphs are provided to show the relation between the decision variables and the profit function. According to the findings, the production rate of the cells has a significant impact on the overall profit, and to maximize profit, the production rate and selling price of cell 1 must be lower than those of cell 2.
A detailed cost analysis has been provided to identify which process steps and cost aspects significantly impact the total cost. Finally, managerial implications and conclusions are presented that support manufacturers in increasing the firm’s profit.
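The actual EPQ/PBCM profit function is not given in the abstract, so the following sketch maximizes a toy concave stand-in (known maximum 100 at rate 5, price 10) with a minimal genetic algorithm; population size, crossover, and mutation settings are illustrative:

```python
import random

random.seed(42)

# Toy stand-in for the total profit function: concave in production rate r and
# selling price p, with a known maximum of 100 at (r, p) = (5, 10).
def profit(r, p):
    return 100.0 - (r - 5.0) ** 2 - (p - 10.0) ** 2

BOUNDS = [(0.0, 10.0), (0.0, 20.0)]  # (rate, price) search ranges

def genetic_maximize(fitness, bounds, pop_size=40, generations=80):
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(*ind), reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]       # averaging crossover
            for i, (lo, hi) in enumerate(bounds):             # Gaussian mutation
                if random.random() < 0.2:
                    child[i] = min(hi, max(lo, child[i] + random.gauss(0, 0.5)))
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda ind: fitness(*ind))

best_rate, best_price = genetic_maximize(profit, BOUNDS)
best_profit = profit(best_rate, best_price)
```

The same loop applies unchanged to the study's two-cell profit function, with a four-dimensional decision vector (two rates, two prices).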
Yago Fontenla-Seco, Alberto Bugarín-Diz, Manuel Lama
In this paper, we propose a series of fuzzy temporal protoforms within the framework of the automatic generation of quantitative and qualitative natural language descriptions of processes. The model includes temporal and causal information from processes and their attributes, quantifies attributes in time during the process life-span, and recalls causal relations and temporal distances between events, among other features. By integrating process mining techniques and fuzzy sets within the usual Data-to-Text architecture, our framework is able to extract relevant quantitative temporal and structural information from a process and describe it in natural language involving uncertain terms. A real use case in the cardiology domain is presented, showing the potential of our model for providing natural language explanations addressed to domain experts.
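A minimal sketch of evaluating one fuzzy quantified temporal protoform in the Zadeh style, of the kind such a Data-to-Text system might verbalize; the membership functions for "quick" and "most" and the duration data are illustrative, not the paper's definitions:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function with support [a, d] and core [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Fuzzy term "quick" over activity durations (minutes): fully quick up to 10,
# not quick at all above 30.
quick = lambda t: trapezoid(t, -1.0, 0.0, 10.0, 30.0)

# Fuzzy quantifier "most" over proportions: rises from 0.3 to full truth at 0.8.
most = lambda p: trapezoid(p, 0.3, 0.8, 1.0, 1.01)

# Truth of the protoform "Most activity durations were quick":
durations = [5.0, 8.0, 20.0, 40.0, 2.0]
proportion = sum(quick(t) for t in durations) / len(durations)
truth = most(proportion)
```

The resulting truth degree (here 0.8) is what drives the choice of wording in the generated description, e.g. whether "most" is an adequate quantifier for the observed proportion.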
Kristina Razminienė, Irina Vinogradova, Manuela Tvaronavičienė
et al.
DOI: https://doi.org/10.46544/AMS.v26i3.06
Researchers tend to develop cluster studies when ways of turning to the circular economy are considered. Clusters are viewed as networks in which different institutions, enterprises, and research centres are connected to share their knowledge and resources for better performance. Efficient use of resources can be achieved in such networks through involvement in the circular economy. This paper analyses clusters, with their resources and knowledge, as contributors to the transition to a circular economy. The paper begins with a literature analysis in which clusters and the circular economy are overviewed. The links between these two notions are traced, and the relation of clusters to the transition to a circular economy is verified through the application of several multi-criteria decision-making and mathematics-based information analysis methods. Scientific literature analysis serves to identify the main concepts and define the object. The qualitative and quantitative analysis employs multi-criteria decision-making (MCDM) methods (SAW, TOPSIS) and regression analysis. A tool that enables verification of the relation between clusters and the transition to the circular economy was built on these methods. The findings suggest that this tool can be applied when tracing the relation of clusters to the transition to a circular economy. The paper suggests selecting experts based on their work experience with clusters and/or the circular economy, and having them evaluate specific clusters against a set of criteria for the transition to a circular economy. Zero values of some indicators were eliminated by mathematically recalculating the weights, so that distortion of the results after the application of the MCDM methods is avoided. Applying regression analysis to the results of the MCDM methods shows a possible relationship between clusters and the transition to a circular economy.
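A minimal pure-Python sketch of the two MCDM methods named above, applied to a toy decision matrix; the clusters, criteria, weights, and scores are illustrative (both criteria treated as benefit-type), not the study's data:

```python
import math

# Toy decision matrix: three clusters scored on two benefit criteria.
alternatives = ["cluster_A", "cluster_B", "cluster_C"]
matrix = [[3.0, 4.0],
          [5.0, 2.0],
          [4.0, 4.0]]
weights = [0.5, 0.5]

def saw(matrix, weights):
    """Simple Additive Weighting with max-normalization (benefit criteria)."""
    col_max = [max(col) for col in zip(*matrix)]
    return [sum(w * x / m for w, x, m in zip(weights, row, col_max))
            for row in matrix]

def topsis(matrix, weights):
    """TOPSIS with vector normalization (benefit criteria): closeness to the
    ideal solution relative to the anti-ideal."""
    norms = [math.sqrt(sum(x * x for x in col)) for col in zip(*matrix)]
    v = [[w * x / n for w, x, n in zip(weights, row, norms)] for row in matrix]
    ideal = [max(col) for col in zip(*v)]
    anti = [min(col) for col in zip(*v)]
    scores = []
    for row in v:
        d_plus = math.sqrt(sum((x - i) ** 2 for x, i in zip(row, ideal)))
        d_minus = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, anti)))
        scores.append(d_minus / (d_plus + d_minus))
    return scores

saw_scores = saw(matrix, weights)
topsis_scores = topsis(matrix, weights)
best_saw = alternatives[saw_scores.index(max(saw_scores))]
best_topsis = alternatives[topsis_scores.index(max(topsis_scores))]
```

On this toy matrix both methods agree on the top alternative; in practice rank agreement between SAW and TOPSIS is itself a useful robustness check.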
Taking the traditional fort-type settlements in Shaanxi as the research object, quantitative methods such as the K-means clustering algorithm, correlation analysis, density analysis, and the nearest neighbor index are used to study their spatial distribution, formation causes, and cluster characteristics. The objective of the study is to move beyond the geographical limitations of fort-type settlement research and to explore scientific methods for classifying and analyzing traditional fort-type settlements. The conclusions are: (1) cluster analysis shows that the fort-type settlements in Shaanxi can be divided into three categories; (2) the overall distribution of fort-type settlements in Shaanxi shows multi-point aggregation, containing both point and linear aggregation patterns; (3) there are four typical cluster systems among the traditional fort-type settlements in Shaanxi; (4) the factors with the greatest influence on the distribution of settlements are construction force, wall masonry, age, fortification purpose, and topographic environment. The article innovatively proposes the "cluster system" perspective and introduces mathematical algorithms and quantitative research methods to study the cluster systems of fort-type settlements. This approach is feasible and can be applied to other settlement-related studies. The cluster-system perspective can also be used in heritage conservation, contributing to the restoration of architectural relics and to systemic conservation on a larger scale.
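A minimal sketch of the K-means step named above, on hypothetical 2D settlement coordinates (two clearly separated groups); the real study clusters surveyed fort-type settlements on many more attributes:

```python
import math
import random

random.seed(0)

# Hypothetical settlement coordinates forming two well-separated groups.
points = [(1.0, 1.0), (1.2, 0.8), (0.8, 1.1),
          (8.0, 8.0), (8.3, 7.9), (7.8, 8.2)]

def kmeans(points, k, iters=20):
    """Lloyd's algorithm: alternate nearest-center assignment and mean update."""
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[i].append(p)
        centers = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    labels = [min(range(k), key=lambda c: math.dist(p, centers[c]))
              for p in points]
    return labels, centers

labels, centers = kmeans(points, 2)
```

With k = 2 the two spatial groups are recovered regardless of the random initialization; choosing k (three categories in the study) is typically guided by inertia or silhouette diagnostics.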
A. Alanazi, G. Muhiuddin, Doha A. Al-Balawi
et al.
Natural genetic material may shed light on gene expression mechanisms and aid in the detection of genetic disorders. Single nucleotide polymorphisms (SNPs), small insertions and deletions (indels), and major chromosomal anomalies are all forms of genetic variation associated with disorders. As a result, several methods have been applied to analyze DNA sequences, which constitutes one of the most critical aspects of biological research, and numerous mathematical and algorithmic contributions have been made to DNA analysis and computing. Cost minimization, deployment, and sensitivity analysis with respect to many factors are all components of sequencing platforms built on a quantitative framework and of their operating mechanisms. This study investigates the role of DNA sequencing, and its representation in the form of graphs, in the analysis of different diseases.
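One common way to represent a DNA sequence as a graph, offered here as an illustrative sketch (the abstract does not specify which graph representation the study uses), is the de Bruijn-style k-mer graph used throughout sequence analysis:

```python
def kmer_graph(sequence, k):
    """Build a de Bruijn-style graph: nodes are (k-1)-mers, and each k-mer in
    the sequence contributes a directed edge from its prefix to its suffix."""
    edges = []
    for i in range(len(sequence) - k + 1):
        kmer = sequence[i:i + k]
        edges.append((kmer[:-1], kmer[1:]))
    nodes = {n for e in edges for n in e}
    return nodes, edges

nodes, edges = kmer_graph("ACGTAC", 3)
```

Walks in this graph correspond to reconstructions of the original sequence, which is why such representations underpin assembly and variant-detection pipelines.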
In the present mathematical configuration, the self-propelled movement of gyrotactic swimming microorganisms in the generalized slip flow of an MHD nanoliquid past a stretching cylinder is discussed. Convective heat transfer is assumed, along with Nield conditions on the boundary. The formulation of this biomathematical model yields a boundary value problem of nonlinear partial differential equations. First, the modelled mathematical system is transformed into non-dimensional form with the aid of suitable scaling variables, and then a shooting technique (along with the Runge-Kutta-Fehlberg (R-K-F) method) is applied to obtain the numerical solution of the governing system. The computed numerical solutions are presented in figures and tables, and these results are critically analyzed in both quantitative and qualitative terms.
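A minimal sketch of the shooting idea named above, assuming the simple linear test problem y'' = −y with y(0) = 0, y(1) = 1 (exact unknown slope y'(0) = 1/sin 1); classical RK4 plus bisection stands in for the R-K-F integrator and the study's coupled nonlinear system:

```python
import math

def rk4_second_order(f, y0, yp0, x0, x1, n=200):
    """Integrate y'' = f(x, y, y') with classical RK4 on the system (y, y')."""
    h = (x1 - x0) / n
    x, y, yp = x0, y0, yp0
    for _ in range(n):
        def deriv(x, y, yp):
            return yp, f(x, y, yp)
        k1 = deriv(x, y, yp)
        k2 = deriv(x + h / 2, y + h / 2 * k1[0], yp + h / 2 * k1[1])
        k3 = deriv(x + h / 2, y + h / 2 * k2[0], yp + h / 2 * k2[1])
        k4 = deriv(x + h, y + h * k3[0], yp + h * k3[1])
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        yp += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        x += h
    return y

def shoot(f, y0, target, x0, x1, s_lo, s_hi, tol=1e-10):
    """Bisection on the unknown slope s = y'(x0) until y(x1) hits the target."""
    F = lambda s: rk4_second_order(f, y0, s, x0, x1) - target
    lo, hi = s_lo, s_hi
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if F(lo) * F(mid) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# BVP: y'' = -y, y(0) = 0, y(1) = 1.
slope = shoot(lambda x, y, yp: -y, 0.0, 1.0, 0.0, 1.0, 0.5, 2.0)
```

For the nonlinear nanoliquid system the same structure applies, with Newton's method usually replacing bisection on several shooting parameters at once.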
In recent years, the exchange of goods around the world has mostly been carried out by sea, which has increased pollution coming from port areas. Activities connected with the shipping and handling of goods in ports may harm both human health and the environment. These activities rely on different (mostly diesel-fueled) machinery used in ports, resulting in air emissions including GHG, NOx, SOx, PM, etc. Besides air pollution, port activities cause noise, light, and odor emissions, waste accumulation, and water pollution. Existing methodologies for estimating the environmental impacts of port activities are mostly qualitative and include self-assessment methods, which can often lead to biased results. There is therefore a need for a quantitative, industry-validated, and cohesive method that would give more accurate results. In this article, the Port Environmental Index (PEI), which has all the attributes described above, is presented. The PEI's mission is to integrate all of the main environmental aspects of a port, such as air emissions, waste production, water pollution, and noise, light, and odor pollution, into one metric that can then be used to assess port performance and compare ports. The PEI is designed as a quantitative composite index based on aggregations of individual indicators for significant aspects of port operations. It includes different indices according to the source of the emission: the Ship Environmental Index (SEI), the Terminal Environmental Index (TEI), and the Port Authority Environmental Index (PAEI). When designing the PEI, correctly choosing the environmental impacts is paramount, so port activities and their associated environmental aspects must be properly identified. After their identification, a set of representative environmental key performance indicators (eKPIs) is identified for each significant aspect. Afterwards, a series of mathematical operations is applied: normalization, weighting, and aggregation.
In this short communication, those methods are outlined but not definitively chosen. The main idea behind the PEI is to use quantitative, data-based information collected automatically by leveraging Internet of Things (IoT) techniques, making it possible to assess the environmental impacts of port operations in real time. The advantages of having such a metric in the environmental management plan of a port are numerous. Most notably, it allows inter-port comparison, and it can be used in decision making to estimate impacts with one single metric rather than many disparate values. Moreover, ports can use it to track their environmental performance and progress, and since it is based on information collected in real time using IoT technologies, they can make immediate corrections to their activities.
Nour El Houda Bouaicha, Farid Chighoub, Ishak Alia
et al.
The paper presents a characterization of equilibrium in a game-theoretic formulation of a discounted conditional stochastic linear-quadratic (LQ) optimal control problem, in which the controlled state process evolves according to a multidimensional linear stochastic differential equation whose noise is driven by a Poisson process and an independent Brownian motion under the effect of Markovian regime-switching. The running and terminal costs in the objective functional depend explicitly on several quadratic terms of the conditional expectation of the state process, as well as on a non-exponential discount function, which creates the time-inconsistency of the considered model. Open-loop Nash equilibrium controls are described through necessary and sufficient equilibrium conditions. A state feedback equilibrium strategy is obtained via a certain differential-difference system of ODEs. As an application, we study investment–consumption and equilibrium reinsurance/new-business strategies under mean-variance utility for insurers whose risk aversion is a function of the current wealth level. The financial market consists of one riskless asset and one risky asset whose price process is modeled by a geometric Lévy process, and the surplus of the insurers is assumed to follow a jump-diffusion model in which the parameter values change according to a continuous-time Markov chain. A numerical example is provided to demonstrate the efficacy of the theoretical results.
In this paper, we investigate the performance of the mathematical software program Maple and the programming language MATLAB when using these platforms to compute the method of steps (MoS) and Laplace transform (LT) solutions of neutral and retarded linear delay differential equations (DDEs). We computed the analytical solutions obtained with the Laplace transform method and the method of steps, and assessed the accuracy of the Laplace solutions by comparing them with those obtained by the method of steps. The Laplace transform method requires, among other mathematical tools, the use of the Cauchy residue theorem and the computation of an infinite series. Symbolic computation facilitates the whole process, providing solutions that would be unmanageable by hand. The results obtained here emphasize that symbolic computation is a powerful tool for computing analytical solutions of linear delay differential equations. From a computational viewpoint, we found that the computation time depends on the complexity of the history function, the number of terms used in the LT solution, the number of intervals used in the MoS solution, and the parameters of the DDE. Finally, we found that, for linear non-neutral DDEs, MATLAB symbolic computations were faster than Maple's; however, for linear neutral DDEs, which are often more complex to solve, Maple was faster. Regarding the accuracy of the LT solutions, Maple was in a few cases slightly better than MATLAB, but both were highly reliable.
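As a small numerical companion to the method of steps (the paper's computations are symbolic, in Maple and MATLAB), the following sketch integrates the classic retarded DDE y'(t) = −y(t−1) with constant history y ≡ 1 on [−1, 0]; MoS gives closed-form values y(1) = 0 and y(2) = −1/2 against which the grid solution can be checked:

```python
def method_of_steps_euler(history, t_end, tau=1.0, h=0.0005):
    """Integrate y'(t) = -y(t - tau) on [0, t_end] with constant history
    y(t) = history for t <= 0, on a grid aligned with the delay so the delayed
    value is always a stored sample (explicit Euler; MoS views each delay
    interval as an ODE with a known forcing term)."""
    n_per_tau = round(tau / h)
    steps = round(t_end / h)
    y = [history] * (n_per_tau + 1)   # samples on [-tau, 0]
    for _ in range(steps):
        y.append(y[-1] - h * y[len(y) - 1 - n_per_tau])
    return y

y = method_of_steps_euler(1.0, 2.0)
n = round(1.0 / 0.0005)               # grid points per delay interval
y_at_1, y_at_2 = y[2 * n], y[3 * n]   # samples at t = 1 and t = 2
```

On the first interval the exact MoS solution is y(t) = 1 − t, on the second y(t) = t²/2 − 2t + 3/2, and the Euler grid values reproduce them to first order in h.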
Low Earth orbit radiometers allow the monitoring of nighttime anthropogenic light emissions over wide areas of the planet. In this work we describe a simple model for assessing significant outdoor lighting changes at the municipality level using on-orbit measurements complemented with ground-truth information. We apply it to evaluate the transformation carried out in the municipality of Ribeira (42°33′23″N, 8°59′32″W) in Galicia, which in 2015 reduced the amount of installed lumen in its publicly owned outdoor lighting system from 93.2 to 28.7 Mlm. This significant cutback, with the help of additional controls, reduced the lumen emission density averaged across the territory from 0.768 to 0.208 Mlm/km2. In combination with the VIIRS-DNB annual composite readings, these data allow us to estimate that the relative weight of the emissions of the public streetlight system with respect to the total light emissions of the municipality changed from an initial value of 74.86% to 44.68% after the transformation. The effects of the sources' spectral shift and of the photon calibration factor on the radiance reported by the VIIRS-DNB are also evaluated.
Two mathematical aspects of the centuries-old Japanese sashiko stitching form hitomezashi are discussed: the encoding of designs using words from a binary alphabet, and duality. Traditional hitomezashi designs are analysed using these two ideas. Self-dual hitomezashi designs related to Fibonacci snowflakes, which we term Pell persimmon polyomino patterns, are proposed. Both these designs and the binary words used to generate them appear to be new to their respective literatures.
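A small sketch of the binary-word encoding described above, under one common convention (which may differ in detail from the paper's): each row bit offsets that row's alternating horizontal stitches, each column bit offsets that column's vertical stitches, and a stitch occupies a unit cell when the cell index plus the word bit is even:

```python
def hitomezashi(row_bits, col_bits):
    """Generate a hitomezashi stitch pattern from two binary words: bit a_i
    offsets the alternating stitches in row i, bit b_j those in column j.
    Returns sets of horizontal and vertical unit stitches as (row, col) cells."""
    horizontal = {(i, j) for i, a in enumerate(row_bits)
                  for j in range(len(col_bits)) if (j + a) % 2 == 0}
    vertical = {(i, j) for j, b in enumerate(col_bits)
                for i in range(len(row_bits)) if (i + b) % 2 == 0}
    return horizontal, vertical

def render(row_bits, col_bits):
    """Rough ASCII preview: '_' marks a horizontal stitch, '|' a vertical one."""
    h, v = hitomezashi(row_bits, col_bits)
    lines = []
    for i in range(len(row_bits)):
        lines.append("".join("_" if (i, j) in h else " "
                             for j in range(len(col_bits))))
        lines.append("".join("|" if (i, j) in v else " "
                             for j in range(len(col_bits))))
    return "\n".join(lines)

h, v = hitomezashi([0, 1], [0, 1, 0, 1])
```

Flipping every bit of both words yields the dual design in which stitches and gaps swap, which is the duality studied in the article.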