Abstract In this article, we focus on hyperconvex metric spaces and survey the existence of best proximity points and optimal pairs of fixed points for cyclic and noncyclic relatively u-continuous mappings that are r-condensing, by applying a suitable measure of noncompactness. The proof of our main results relies on the fact that every hyperconvex metric space $(\mathcal{M}, d)$ can be isometrically embedded into the Banach space $\ell^{\infty}(\mathcal{M})$. Another important tool used in the proof of the existence theorems is the fact that the proximal pair of every nonempty admissible pair in a hyperconvex metric space $\mathcal{M}$ is itself nonempty and admissible.
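The isometric embedding invoked here is, in its standard form, the Kuratowski embedding; a brief sketch of the classical construction (standard background material, not taken from the paper itself):

```latex
% Kuratowski embedding of a metric space (M, d) into \ell^{\infty}(M):
% fix a base point x_0 \in \mathcal{M} and set
\varphi(x) \;=\; d(x, \cdot) - d(x_0, \cdot) \;\in\; \ell^{\infty}(\mathcal{M}).
% Boundedness: |d(x,z) - d(x_0,z)| \le d(x, x_0) for all z (triangle inequality).
% Isometry:
\|\varphi(x) - \varphi(y)\|_{\infty}
  = \sup_{z \in \mathcal{M}} |d(x,z) - d(y,z)| = d(x,y),
% the supremum being attained at z = x (or z = y).
```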
Pshtiwan Othman Mohammed, Hari Mohan Srivastava, Nejmeddine Chorfi
et al.
We propose a novel delta fractional model with general boundary conditions to compare functions with their fractional operators. This allows us to construct the Green's functions (GFs), which we interpret as delta fractional differences in the Riemann–Liouville setting. Next, positivity results for the GFs together with the delta function are derived. The case of two distinct functions sharing the same linear operator is investigated, which further motivates the comparison of functions corresponding to the same GF. Two examples are provided to validate the theoretical findings.
This paper investigates the volatility dynamics and underlying long memory features of four major cryptocurrencies (Bitcoin, Ethereum, Litecoin, and Ripple), selected for their high liquidity, large trading volumes, and historical significance in the digital asset market. The long-range dependence exhibited in cryptocurrency markets is often overlooked. Based on the strong evidence of persistent dependence in the return series, we adopt advanced volatility models capable of accommodating high volatility and heavy tails, as well as the long memory properties of cryptocurrencies. Specifically, we employ long-memory extensions of the GAS (long-memory GAS) and GARCH (Fractionally Integrated Asymmetric Power ARCH, FIAPARCH) models, integrating heavy-tailed innovation distributions: the Generalized Hyperbolic Distribution (GHD) and the Generalized Lambda Distribution (GLD). Standard GARCH and GAS models are included as benchmarks. The performance of the models is assessed using Value-at-Risk (VaR) estimation, backtesting (in-sample and out-of-sample), and volatility forecasting metrics. The results indicate that long memory models, particularly the FIAPARCH model, consistently outperform the standard GAS and GARCH models in capturing tail risk and volatility persistence. These findings emphasize the critical role of long memory in modeling the risk of cryptocurrencies, indicating that accounting for volatility persistence can significantly enhance the accuracy of risk estimates and strengthen risk management practices.
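As a concrete illustration of the VaR backtesting step mentioned above, the following minimal sketch computes a one-day historical VaR and counts backtesting exceptions on a hypothetical return series; it is a plain empirical-quantile estimate, not the FIAPARCH/GAS model-based VaR used in the paper:

```python
import random

def historical_var(returns, alpha=0.95):
    """One-day historical Value-at-Risk: the loss level exceeded with
    probability roughly (1 - alpha), read off the empirical distribution."""
    losses = sorted(-r for r in returns)          # losses as positive numbers
    k = int(alpha * len(losses))                  # index of the alpha-quantile
    return losses[min(k, len(losses) - 1)]

def count_exceptions(returns, var):
    """Backtesting exceptions: days whose realized loss exceeded the VaR."""
    return sum(1 for r in returns if -r > var)

# Hypothetical daily returns with occasional fat-tail shocks (toy data).
random.seed(42)
sample = [random.gauss(0.0, 0.02 if random.random() < 0.9 else 0.06)
          for _ in range(1000)]

var95 = historical_var(sample, 0.95)
exceptions = count_exceptions(sample, var95)
print(f"95% VaR: {var95:.4f}  exceptions: {exceptions}/1000")
```

With a 95% VaR one expects roughly 50 exceptions per 1000 days; a well-specified volatility model keeps the realized count close to that nominal rate, which is exactly what Kupiec-style backtests check.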
Non-invasive blood analysis has the power to completely change how doctors identify and track illnesses. This study presents a novel approach for the non-invasive monitoring of red blood cell (RBC) mobility and concentration within capillaries, using photon absorption as a key diagnostic tool. The research combines optical modeling with the diffusion equation for light propagation, leveraging COMSOL simulations to create a comprehensive framework for understanding RBC dynamics. A two-dimensional geometric model of capillaries with RBCs is developed, where blood flow is modeled as a laminar, incompressible fluid. The Arbitrary Lagrangian–Eulerian (ALE) formulation is employed to account for the fluid–structure interactions, while photon attenuation by the RBCs is analyzed to investigate wavelength-dependent absorption characteristics. The methodology is implemented through a workflow developed with MATLAB’s S-Function builder, consisting of three main components: mesh generation, fluence computing, and Software-in-the-Loop (SIL) verification. The mesh generation process adapts to the target architecture using COMSOL Multiphysics for fluid–structure interaction (FSI) modeling. The fluence computing function solves the diffusion equation to model light intensity attenuation due to RBCs, and the SIL function compares computed results with real-time measurements, ensuring accuracy for potential real-time embedded system applications. The results demonstrate significant wavelength-dependent variations in photon absorption by RBCs, providing insights into the optical behavior of blood in microvascular structures. The findings have important implications for medical imaging, photodynamic therapy, and diagnostic tools, emphasizing the potential of integrating computational models with real-time systems for enhanced performance in biomedical applications.
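The wavelength-dependent photon attenuation discussed above can be illustrated at its simplest with the Beer-Lambert law; the absorption coefficients and path length below are hypothetical placeholders, not values from the COMSOL model:

```python
import math

def transmitted_fraction(mu_a, path_mm):
    """Beer-Lambert attenuation I/I0 = exp(-mu_a * L) for absorption
    coefficient mu_a (1/mm) and optical path L (mm)."""
    return math.exp(-mu_a * path_mm)

# Hypothetical absorption coefficients at two common pulse-oximetry
# wavelengths (illustrative orders of magnitude only, not measured data).
mu_a = {660: 0.15, 940: 0.65}     # 1/mm, assumed values
path = 0.25                       # mm, assumed cumulative RBC path

for wavelength, mu in sorted(mu_a.items()):
    frac = transmitted_fraction(mu, path)
    print(f"{wavelength} nm: transmitted fraction {frac:.4f}")
```

A higher RBC concentration or a longer intracapillary path increases the effective mu_a * L product, so the measured transmitted fraction encodes both mobility and concentration, the quantities the diffusion-equation model resolves spatially.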
Abstract High-energy laser technology is expected to be applied in deep drilling and tunnel excavation owing to its attractive prospects for enhanced hard rock breakage. However, there is still a lack of proper experimental methods and numerical tools to quantitatively evaluate the influence of laser radiation on rock fracture. Combined with our specially designed experimental tests, this work aims to provide a numerical tool to quantitatively investigate laser-induced rock damage. Using a specially designed experimental platform, we conducted direct tensile tests on dog-bone-shaped rock samples to determine rock damage under laser radiation with different parameters. The experimental results show that a high-energy laser beam can cause a significant loss of rock tensile strength. A mathematical model was then established to describe the temporal and spatial evolution of rock damage along a given laser scanning path, and it was implemented in a four-dimensional lattice spring model (4D-LSM) to simulate our experimental tests. A comparison between the numerical and experimental results showed the multiparameter damage model to be more suitable for the numerical simulation of laser-induced rock damage. Finally, using the laser damage model, an orthogonal experimental design was applied to analyse the significance of different factors for rock-breaking efficiency and cutter-tool wear in laser-assisted rock cutting, demonstrating the prospects for applying the proposed numerical model to laser-assisted tunnel boring machine (TBM) tunnelling in the future.
A methodological approach was developed for an integrated estimation of the socioeconomic parameters of sustainable development based on the UN's current information base. The article proposes a methodology and tools for economic and mathematical modelling to estimate the degree of development of international trade and investment relations, life expectancy, and the standard of living and prosperity of international entities under the influence of sources of economic growth. Based on the simulation results, an analysis of the general status of 189 world countries according to the sources of economic growth was carried out. To obtain scientifically grounded results, the paper used general scientific and special research methods: analysis and synthesis, the systems approach and abstraction, modelling (a fuzzy logic model, Saaty's method of hierarchies, the Mamdani algorithm), quantitative and qualitative comparison methods, and theoretical generalization. The approach proposed in this article can be applied when developing a country's national economic development strategy aimed at achieving sustainable development.
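The Mamdani algorithm named among the methods can be sketched minimally as follows; the membership functions, rules, and scales are hypothetical illustrations, not the fuzzy model of the article:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mamdani(growth):
    """Mamdani inference with two hypothetical rules on a 0-10 scale:
       R1: if growth is LOW  then development is LOW
       R2: if growth is HIGH then development is HIGH
    Rule outputs are clipped at the firing strength, aggregated by max,
    and defuzzified by the discrete centroid."""
    w_low  = tri(growth, -5, 0, 5)    # firing strength of R1
    w_high = tri(growth, 5, 10, 15)   # firing strength of R2
    num = den = 0.0
    for i in range(101):              # discretize output universe [0, 10]
        y = i / 10
        mu = max(min(w_low,  tri(y, -5, 0, 5)),
                 min(w_high, tri(y, 5, 10, 15)))
        num += y * mu
        den += mu
    return num / den if den else 5.0  # midpoint if no rule fires

print(round(mamdani(8.0), 2), round(mamdani(2.0), 2))
```

A strong growth input fires the HIGH rule and pulls the centroid toward the top of the development scale; the full model in the article works the same way, with many more inputs and rules.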
Shubhendu Mandal, Kamal Hossain Gazi, Soheil Salahshour
et al.
The selection of a Ph.D. (Doctor of Philosophy) supervisor is a vital and interesting problem in academia, especially for students who intend to pursue a Ph.D. Selecting a supervisor in a scientific manner has become a challenge for any student because of the variety of options available. In this context, the present study formulates a model for Ph.D. supervisor selection from the alternatives offered in an academic institute. A hybrid multi-criteria decision making (MCDM) framework is applied to select a suitable supervisor according to the student's preferred criteria under an interval-valued intuitionistic fuzzy (IVIF) scenario. The IVIF Analytic Hierarchy Process (AHP) is employed to prioritize the criteria, whereas the IVIF Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) is engaged to rank the available supervisors based on the criteria weights. A set of eight criteria and five alternatives is considered for modeling the problem. Moreover, the potential criteria are weighted and ranked by multiple decision makers. To examine the consistency and robustness of the proposed integrated approach, sensitivity and comparative analyses were carried out. All the analyses indicate that the suggested approach is readily applicable in different decision-making scenarios.
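For orientation, here is a minimal sketch of the classical (crisp) TOPSIS procedure that underlies the IVIF extension used in the study; the decision matrix, weights, and criteria below are hypothetical, and the interval-valued intuitionistic fuzzy arithmetic of the actual method is omitted:

```python
import math

def topsis(matrix, weights, benefit):
    """Classical TOPSIS: vector-normalize, weight, then rank by relative
    closeness to the ideal and anti-ideal solutions."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m)))
             for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    cols = list(zip(*v))
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    anti  = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)   # distance to the ideal solution
        d_neg = math.dist(row, anti)    # distance to the anti-ideal
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Hypothetical data: 3 candidate supervisors x 3 criteria
# (publications, weekly availability in hours, current group size).
matrix  = [[40, 5, 12], [25, 10, 6], [30, 8, 4]]
weights = [0.5, 0.3, 0.2]
benefit = [True, True, False]   # a smaller group counts as better

scores = topsis(matrix, weights, benefit)
best = max(range(len(scores)), key=scores.__getitem__)
print([round(s, 3) for s in scores], "best alternative:", best)
```

In the paper, the criteria weights come from the IVIF-AHP stage rather than being fixed a priori as they are here.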
In this paper, we investigate a post-processing technique designed for one-dimensional singularly perturbed parabolic convection-diffusion problems that exhibit a regular boundary layer. We use a backward Euler scheme for the time derivatives on a uniform mesh in the temporal direction, and a simple upwind scheme for the spatial derivatives on a modified graded mesh in the spatial direction. We demonstrate the effectiveness of the Richardson extrapolation technique in enhancing the ε-uniform accuracy of simple upwinding in the discrete supremum norm, as evidenced by an improvement from $O(N^{-1}\ln(1/\varepsilon) + \Delta\theta)$ to $O(N^{-2}\ln^{2}(1/\varepsilon) + \Delta\theta^{2})$. Furthermore, to validate the theoretical findings, computational experiments are conducted for two test examples by applying the proposed technique.
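The accuracy gain from Richardson extrapolation can be illustrated on a toy problem: extrapolating a first-order one-sided difference (a stand-in for simple upwinding) raises the convergence order from one to two. This sketch is not the paper's layer-adapted scheme:

```python
import math

def forward_diff(f, x, h):
    """First-order one-sided difference, error O(h) (the spatial analogue
    of simple upwinding)."""
    return (f(x + h) - f(x)) / h

def richardson(f, x, h):
    """Richardson extrapolation of the first-order formula: combining the
    h and h/2 approximations cancels the leading error term, giving O(h^2)."""
    return 2.0 * forward_diff(f, x, h / 2) - forward_diff(f, x, h)

x, exact = 1.0, math.cos(1.0)          # d/dx sin(x) at x = 1
for h in (0.1, 0.05, 0.025):
    e1 = abs(forward_diff(math.sin, x, h) - exact)
    e2 = abs(richardson(math.sin, x, h) - exact)
    print(f"h={h:<6}  first-order err={e1:.3e}  extrapolated err={e2:.3e}")
```

Halving h roughly halves the first-order error but quarters the extrapolated one, mirroring the improvement from $O(N^{-1})$ to $O(N^{-2})$ (up to logarithmic factors) established in the paper.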
Javad Alikhani Koupaei, Mohammad Javad Ebadi, Majid Iran Pour
Purpose: This study aims to investigate the potential of chaotic optimization algorithms in improving performance compared to other optimization methods, focusing on determining the appropriate shape parameter of radial basis functions for solving partial differential equations. Methodology: In this research, a two-stage process is employed in which the Kansa method, based on meshless local techniques, is combined with the FCW method. In the first stage, the FCW algorithm is utilized to obtain the optimal shape parameter for radial basis functions, followed by the Kansa method in the second stage to estimate the Root Mean Square (RMS) error for approximate solutions. Findings: Numerical results indicate that approximately 95% of the results obtained from two partial differential equations using the PSO and FCW algorithms are similar. These results demonstrate the effectiveness and efficiency of this approach in estimating appropriate shape parameters for solving differential equations. Originality/Value: This study confirms the importance of chaos-based optimization algorithms in solving partial differential equations, which can contribute to future research in this field.
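To illustrate why the shape parameter matters, the following sketch fits a one-dimensional Gaussian RBF interpolant for several candidate shape parameters and reports the RMS error; it is a plain interpolation toy, not the Kansa collocation or the FCW optimization of the study:

```python
import math

def gauss_solve(a, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(m[r][k]))
        m[k], m[p] = m[p], m[k]
        for r in range(k + 1, n):
            fac = m[r][k] / m[k][k]
            for c in range(k, n + 1):
                m[r][c] -= fac * m[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (m[k][n] - sum(m[k][j] * x[j] for j in range(k + 1, n))) / m[k][k]
    return x

def rbf_rms(eps, f, centers, tests):
    """Fit a Gaussian-RBF interpolant with shape parameter eps and return
    the RMS error on held-out test points."""
    phi = lambda r: math.exp(-(eps * r) ** 2)
    A = [[phi(abs(xi - xj)) for xj in centers] for xi in centers]
    w = gauss_solve(A, [f(xi) for xi in centers])
    s = lambda x: sum(wi * phi(abs(x - xi)) for wi, xi in zip(w, centers))
    return math.sqrt(sum((s(t) - f(t)) ** 2 for t in tests) / len(tests))

centers = [i / 8 for i in range(9)]      # 9 interpolation nodes on [0, 1]
tests = [i / 40 for i in range(41)]      # evaluation grid
for eps in (2.0, 4.0, 16.0):             # candidate shape parameters
    print(f"eps={eps:<5} RMS error={rbf_rms(eps, math.sin, centers, tests):.3e}")
```

Too small a shape parameter makes the interpolation matrix nearly singular, while too large a value localizes the basis and degrades accuracy between the nodes; searching this trade-off is precisely the role the FCW stage plays in the proposed two-stage process.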
Sirasrete Phoosree, Nattinee Khongnual, Jiraporn Sanjun
et al.
In the field of mathematical physics, the Riccati sub-equation method is an important tool for finding analytical solutions of nonlinear space–time fractional equations. In this study, we apply the Riccati sub-equation method to the space–time fractional Ablowitz–Kaup–Newell–Segur (AKNS) equation and the generalized Zakharov–Kuznetsov–Benjamin–Bona–Mahony (GZK-BBM) equation, which describe the effects of flood waves and plasma physics, respectively, and obtain new analytical solutions containing free parameters. Each equation yields fifteen different solutions, in the form of generalized hyperbolic functions, generalized trigonometric functions, and rational functions. The resulting solutions of both equations take the form of kink waves and periodic waves, which are presented in 2D and 3D graphs.
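For reference, the Riccati sub-equation at the heart of the method, together with the solution families that generate hyperbolic, trigonometric, and rational solutions of the kind reported (standard background, not a result of the paper):

```latex
% Expand the solution in powers of \phi, u(\xi) = \sum_{i=0}^{n} a_i \phi^{i}(\xi),
% where \phi satisfies the Riccati equation
\phi'(\xi) = \sigma + \phi^{2}(\xi),
% with the well-known solution families
\sigma < 0: \quad \phi = -\sqrt{-\sigma}\,\tanh\!\left(\sqrt{-\sigma}\,\xi\right)
  \ \text{or}\ \ \phi = -\sqrt{-\sigma}\,\coth\!\left(\sqrt{-\sigma}\,\xi\right),
\sigma = 0: \quad \phi = -\,1/\xi,
\sigma > 0: \quad \phi = \sqrt{\sigma}\,\tan\!\left(\sqrt{\sigma}\,\xi\right).
```

Substituting the expansion into the governing equation and balancing powers of $\phi$ fixes $n$ and the coefficients $a_i$, which is how each equation produces its families of solutions.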
Björn Schembera, Frank Wübbeling, Hendrik Kleikamp
et al.
Mathematical models and algorithms are an essential part of mathematical research data, as they epistemically ground numerical data. To represent models and algorithms, as well as the relationships between them, semantically and thus make this research data FAIR, two previously distinct ontologies were merged and extended, becoming a living knowledge graph. The link between the two ontologies is established by introducing computational tasks, which occur in modeling and correspond to algorithmic tasks. Moreover, controlled vocabularies are incorporated, and a new class distinguishing base quantities from specific use-case quantities was introduced. Both models and algorithms can now also be enriched with metadata. Subject-specific metadata is particularly relevant here, such as the symmetry of a matrix or the linearity of a mathematical model. Only in this way can specific workflows with concrete models and algorithms be expressed, since a feasible solution algorithm can only be determined if the mathematical properties of a model are known. We demonstrate this using two examples from different application areas of applied mathematics. In addition, we have already integrated over 250 research assets from applied mathematics into our knowledge graph.
Peng-Hung Tsai, Daniel Berleant, Richard S. Segall
et al.
Quantitative technology forecasting uses quantitative methods to understand and project technological changes. It is a broad field encompassing many different techniques and has been applied to a vast range of technologies. A widely used approach in this field is trend extrapolation. Based on the publications available to us, there has been little or no attempt made to systematically review the empirical evidence on quantitative trend extrapolation techniques. This study attempts to close this gap by conducting a systematic review of technology forecasting literature addressing the application of quantitative trend extrapolation techniques. We identified 25 studies relevant to the objective of this research and classified the techniques used in the studies into different categories, among which growth curves and time series methods were shown to remain popular over the past decade, while newer methods, such as machine learning-based hybrid models, have emerged in recent years. As more effort and evidence are needed to determine if hybrid models are superior to traditional methods, we expect to see a growing trend in the development and application of hybrid models to technology forecasting.
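Growth-curve extrapolation, one of the recurring techniques in the reviewed studies, can be illustrated with the logistic S-curve; the parameters below are simply assumed for illustration, not fitted to data from any reviewed study:

```python
import math

def logistic(t, L, k, t0):
    """Logistic S-curve L / (1 + exp(-k (t - t0))): capacity L, growth
    rate k, inflection point t0."""
    return L / (1.0 + math.exp(-k * (t - t0)))

# Assumed illustrative parameters: a technology approaching 100%
# market penetration, with its inflection point in 2020.
L, k, t0 = 100.0, 0.8, 2020.0

for year in (2010, 2015, 2020, 2025, 2030):
    print(year, round(logistic(year, L, k, t0), 1))
```

In practice the parameters are fitted to historical adoption or performance data and the curve is read forward; hybrid approaches combine such parametric curves with machine learning corrections.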
A multilevel programming problem is an optimization problem that involves multiple decision makers, whose decisions are made in a sequential (or hierarchical) order. If all objective functions and constraints are linear and some decision variables at any level are restricted to take on integral or discrete values, then the problem is called a multilevel mixed integer linear programming problem (ML-MILP). Such problems are known to have disconnected feasible regions (called inducible regions), making the task of constructing an optimal solution challenging. As a result, existing solution approaches rely on strict assumptions in the model formulation and lack universality. This paper presents a branch-and-cut (B&C) algorithm for the global solution of such problems with any finite number of hierarchical levels, containing both continuous and discrete variables at each level of the decision-making hierarchy. Finite convergence of the proposed algorithm to a global solution is established. Numerical examples are used to illustrate the detailed procedure and to demonstrate the performance of the algorithm. Additionally, the computational performance of the proposed method is studied by comparing it with existing methods through some selected numerical examples.
Leandro Oliveira, Edilene Costa dos Santos, Luiz Carlos Pais
This study discusses the professional expertise of Rubens de Carvalho, a normal-school teacher from São Paulo hired by the government of Mato Grosso to lead the Escola Normal de Cuiabá in the mid-1920s. The theoretical and methodological framework was based on Hofstetter et al. (2017), Hofstetter and Schneuwly (2017), Hofstetter and Schneuwly (2020), and Hofstetter and Valente (2017), adopting a socio-historical perspective on teaching knowledge, in view of the pedagogical mobilization promoted by teaching experts in the production and mobilization of knowledge for primary instruction, specifically, in this investigation, of elementary mathematics for primary education. The sources mobilized to substantiate the professional expertise of this figure are held by the Arquivo Público do Mato Grosso (APMT), along with others portraying the educator's trajectory, which were retrieved from the Hemeroteca Digital Brasileira. Thus, considering the adopted frame of reference, the normal-school teacher Rubens de Carvalho can be considered an expert in teaching and in elementary mathematics for that instructional level, since, as our analysis shows, the State hired him to solve problems of instruction and, in doing so, he produced knowledge that was systematized in report records (Atas de Relatórios), in the 1924 Teaching Program (Programa de Ensino), and in the 1927 Teaching Regulation (Regulamento de Ensino).
Björn Schembera, Frank Wübbeling, Hendrik Kleikamp
et al.
In applied mathematics and related disciplines, the modeling-simulation-optimization workflow is a prominent scheme, with mathematical models and numerical algorithms playing a crucial role. For these types of mathematical research data, the Mathematical Research Data Initiative has developed, merged and implemented ontologies and knowledge graphs. This contributes to making mathematical research data FAIR by introducing semantic technology and documenting the mathematical foundations accordingly. Using the concrete example of microfracture analysis of porous media, it is shown how the knowledge of the underlying mathematical model and the corresponding numerical algorithms for its solution can be represented by the ontologies.
Zeeshan Asim, Ibrahim Rashid Al Shamsi, Mariam Wahaj
et al.
This case study-based research provides insights into current packaging practices from a supply chain perspective and proposes sustainable packaging options that would reduce the environmental impact of supply chain operations at Midas Safety. The case study is based on qualitative research that used semi-structured, open-ended interviews and observations to understand the current processes of the packaging and supply chain department of Midas Safety and how it plans to incorporate sustainability into its processes. Considering the current packaging practices, the study aimed to develop improved sustainable packaging practices with a supply chain aspect in order to reduce negative environmental impacts, through measures such as standardization of packaging for all customers, elimination of wood pallets, development of local suppliers, changes in packaging design, making packaging more compact and lightweight, and reducing the carbon footprint and fuel consumption by encouraging trade by sea instead of air. The results concluded that internal factors, such as alternative packaging materials (like Mondi's Aegispaper, Arjowiggins' and Corrugated Bubble Wrap) along with the suggested sustainable packaging practices discussed above, and external factors, such as the availability of local vendors, are important requirements for successful sustainable packaging development.
Bilinearization of nonlinear partial differential equations (PDEs) is essential in the Hirota method, a widely used and robust mathematical tool for finding soliton solutions of nonlinear PDEs in a variety of fields, including nonlinear dynamics, mathematical physics, and the engineering sciences. In this article, we present a novel systematic computational approach for determining the bilinear form of a class of nonlinear PDEs. Because of its simplicity, it can be easily implemented in symbolic software such as Mathematica, Matlab, and Maple. The results are obtained by implementing the developed method in Mathematica and applying a logarithmic transformation to the dependent variable. The findings validate the competence, productivity, and dependability of the implemented technique. The approach is a useful, authentic, and simple mathematical tool for calculating multiple soliton solutions to nonlinear evolution equations encountered in the nonlinear sciences, plasma physics, ocean engineering, applied mathematics, and fluid dynamics.
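The logarithmic transformation mentioned above is standard; the classic worked example is the KdV equation (textbook material, not the class of PDEs treated in the article):

```latex
% KdV equation and the logarithmic (Hirota) transformation:
u_t + 6\,u\,u_x + u_{xxx} = 0, \qquad u = 2\,(\ln f)_{xx},
% which, after one integration in x, yields the bilinear form
\left(D_x D_t + D_x^{4}\right) f \cdot f = 0,
% with the Hirota D-operator defined by
D_x^{m} D_t^{n}\, f \cdot g
  = \left(\partial_x - \partial_{x'}\right)^{m}
    \left(\partial_t - \partial_{t'}\right)^{n}
    f(x,t)\, g(x',t') \Big|_{x'=x,\; t'=t}.
```

Once the bilinear form is known, multi-soliton solutions follow from a perturbation expansion of $f$ in exponentials, which is what makes automating the bilinearization step valuable.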
In this paper, we develop a theory of $\top$-nets and study their relation to $\top$-filters. We show that convergence in strong $L$-topological spaces can be described by both $\top$-nets and $\top$-filters and both concepts are equivalent in the sense that definitions and proofs that are given using $\top$-filters can also be given using $\top$-nets and vice versa.