B. Cano, M.J. Moreta
Results for "Applied mathematics. Quantitative methods"
Showing 20 of ~6,512,604 results · from DOAJ, Semantic Scholar, CrossRef
Miguel Afonso Sellitto, Ismael Cristofer Baierle, Marta Rinaldi
Entropy is a foundational concept across scientific domains, playing a role in understanding disorder, randomness, and uncertainty within systems. This study applies Shannon’s entropy from information theory to evaluate and manage complexity in industrial supply chain (SC) management. The purpose of the study is to propose a quantitative modeling method that employs Shannon’s entropy model to assess the complexity of SCs, under the assumption that information entropy serves as a proxy for SC complexity. The research method is quantitative modeling, applied to four focal companies from the agrifood and metalworking industries in Southern Brazil. The results showed that companies prioritizing cost and quality exhibit lower complexity than those emphasizing flexibility and dependability. Additionally, information flows related to specially engineered products and deliveries show significant differences in average entropies, indicating that organizational complexities vary according to competitive priorities. These findings suggest that a focus on cost and quality in SC management may lead to lower complexity than a focus on flexibility and dependability, influencing strategic decision making in industrial contexts. This research introduces a novel application of information entropy to assess and control complexity within industrial SCs. Future studies can explore and validate these insights, contributing to the evolving field of supply chain management.
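As a concrete illustration of the entropy proxy, the following minimal Python sketch computes Shannon’s entropy over observed information-flow states; the state categories and counts are hypothetical, not the study’s data:

```python
import math
from collections import Counter

def shannon_entropy(states):
    """Shannon entropy H = -sum(p_i * log2 p_i) of observed states, in bits."""
    counts = Counter(states)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical delivery-status flows for two supply chains: a predictable
# flow (low entropy, low complexity) vs. a varied one (high entropy).
predictable = ["on-time"] * 18 + ["late"] * 2
varied = ["on-time"] * 8 + ["late"] * 6 + ["partial"] * 4 + ["cancelled"] * 2
print(shannon_entropy(predictable))  # ~0.47 bits
print(shannon_entropy(varied))       # ~1.85 bits
```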
Blair D. Hall
Metrological traceability is essential for ensuring the accuracy of measurement results and enabling a comparison of results to support decision-making in society. This paper explores a structured approach to modelling traceability chains, focusing on the role of residual measurement errors and their impact on measurement accuracy. This work emphasises a scientific description of these errors as physical quantities. By adopting a simple modelling framework grounded in physical principles, the paper offers a formal way to account for the effects of errors through an entire traceability chain, from primary reference standards to end users. Real-world examples from microwave and optical metrology highlight the effectiveness of this rigorous modelling approach. Additionally, to further advance digital systems development in metrology, the paper advocates a formal semantic structure for modelling, based on principles of Model-Driven Architecture. This architectural approach will enhance the clarity of metrological practices and support ongoing efforts toward the digital transformation of international metrology infrastructure.
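The abstract does not reproduce the model itself; a minimal sketch of the chain-error bookkeeping it describes, under the simplifying assumption of additive, independent residual errors at each stage, would be

$$
x_N = Y + \sum_{i=1}^{N} E_i, \qquad u^2(x_N) = \sum_{i=1}^{N} u^2(E_i),
$$

where $Y$ is the quantity realized by the primary reference standard, $E_i$ is the residual error introduced at stage $i$ of the chain, and $u(E_i)$ its standard uncertainty; correlated errors would add covariance terms.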
Arak M. Mathai, Hans J. Haubold
Usually, convolution refers to Laplace convolution in the literature, but Mellin convolutions can yield very useful results, as illustrated in the following sections. This study deals with Mellin convolutions of products and ratios. Functions belonging to the pathway family of functions are considered. Several types of integral representations, their equivalent representations in terms of G- and H-functions, and their equivalent computable series representations are examined. Mathematics Subject Classification 2010: 26A33, 44A10, 33C60, 35J10.
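For orientation, the standard Mellin convolutions for the product $x = x_1 x_2$ and the ratio $x = x_1/x_2$ of independent positive random variables with densities $f_1, f_2$ are

$$
g_1(x) = \int_0^{\infty} \frac{1}{v}\, f_1(v)\, f_2\!\left(\frac{x}{v}\right) dv, \qquad M_{g_1}(s) = M_{f_1}(s)\, M_{f_2}(s),
$$

$$
g_2(x) = \int_0^{\infty} v\, f_1(xv)\, f_2(v)\, dv, \qquad M_{g_2}(s) = M_{f_1}(s)\, M_{f_2}(2-s),
$$

where $M_f(s) = \int_0^{\infty} x^{s-1} f(x)\, dx$ denotes the Mellin transform.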
Giulia Di Teodoro, Marta Monaci, Laura Palagi
The interpretability of models has become a crucial issue in Machine Learning because of the growing impact of algorithmic decisions on real-world applications. Tree-ensemble methods, such as Random Forests or XGBoost, are powerful learning tools for classification tasks. However, while combining multiple trees may provide higher prediction quality than a single one, it sacrifices interpretability, resulting in “black-box” models. In light of this, we aim to develop an interpretable representation of a tree-ensemble model that can provide valuable insights into its behavior. First, given a target tree-ensemble model, we develop a hierarchical visualization tool based on a heatmap representation of the forest’s feature use, considering the frequency of a feature and the level at which it is selected as indicators of importance. Next, we propose a mixed-integer linear programming (MILP) formulation for constructing a single optimal multivariate tree that accurately mimics the target model’s predictions. The goal is to provide an interpretable surrogate model based on oblique hyperplane splits, which uses only the most relevant features according to the defined forest importance indicators. The MILP model includes a penalty on feature selection based on feature frequency in the forest, to further induce sparsity of the splits. The natural formulation has been strengthened to improve the computational performance of mixed-integer software. Computational experiments are carried out on benchmark datasets from the UCI repository using a state-of-the-art off-the-shelf solver. Results show that the proposed model is effective in yielding a shallow interpretable tree that approximates the tree-ensemble decision function.
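As a sketch of the kind of forest-usage statistic such a heatmap can be built from, the following Python code counts, for a scikit-learn random forest, how often each feature is split on at each depth; the authors’ exact importance indicator may differ:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Count how often each feature is split on at each depth across the forest.
X, y = load_breast_cancer(return_X_y=True)
forest = RandomForestClassifier(n_estimators=50, max_depth=4, random_state=0).fit(X, y)

usage = np.zeros((X.shape[1], 4))        # rows: features, cols: depths 0..3
for est in forest.estimators_:
    tree = est.tree_
    stack = [(0, 0)]                     # (node_id, depth), starting at the root
    while stack:
        node, depth = stack.pop()
        if tree.children_left[node] != tree.children_right[node]:  # internal node
            usage[tree.feature[node], depth] += 1
            stack.append((tree.children_left[node], depth + 1))
            stack.append((tree.children_right[node], depth + 1))

# Features used often and near the root receive higher importance.
print(usage.sum(axis=1).argsort()[::-1][:5])  # top-5 most frequently used features
```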
Lars Carlsen
Decision-making that brings in the opinions of several stakeholders may be a rather time- and resource-demanding process. Partial order-based methods like generalized linear aggregation (GLA) and average ranking appear to be advantageous tools for considering several stakeholders’ opinions simultaneously. The present study presents an approach where stakeholders’ opinions/weights are substituted by a series of randomly generated weight regimes, leading to virtually identical rankings, as demonstrated through a study on food sustainability in which true stakeholder opinions were available for comparison. The study showed a high degree of agreement between the ranking based on random data and that based on real stakeholder data. The method, which is a top-down approach to the decision process, appears to be a highly resource-reducing decision-support process. However, the method by default excludes the possibility of incorporating specific knowledge from, e.g., employees or other stakeholders in the decision process.
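A minimal Python sketch of the random-weight-regime idea (generic weighted aggregation for illustration, not the GLA method itself; scores are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)
scores = np.array([[0.8, 0.3, 0.5],   # alternative A on 3 indicators
                   [0.6, 0.7, 0.4],   # alternative B
                   [0.2, 0.9, 0.9]])  # alternative C

# Substitute real stakeholder weights with many random weight regimes.
n_regimes = 1000
w = rng.random((n_regimes, 3))
w /= w.sum(axis=1, keepdims=True)           # normalize each regime to sum to 1

agg = scores @ w.T                          # aggregated score per alternative/regime
ranks = (-agg).argsort(axis=0).argsort(axis=0) + 1  # rank 1 = best, per regime
print(ranks.mean(axis=1))                   # average rank of each alternative
```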
Fatih Tank, Serkan Eryilmaz
Ori Becher, Mira Marcus-Kalish, David M. Steinberg
The age of big data has fueled expectations for accelerated learning. The availability of large data sets enables researchers to perform more powerful statistical analyses and enhances the reliability of conclusions, which can be based on a broad collection of subjects. Often such data sets can be assembled only with access to diverse sources; for example, medical research that combines data from multiple centers in a federated analysis. However, these hopes must be balanced against data privacy concerns, which hinder sharing raw data among centers. Consequently, federated analyses typically resort to sharing data summaries from each center. This limitation to summaries carries the risk of impairing the efficiency of statistical analysis procedures. In this work, we take a close look at the effects of federated analysis on two very basic problems: non-parametric comparison of two groups, and quantile estimation to describe the corresponding distributions. We also propose a specific privacy-preserving data release policy for federated analysis with the K-anonymity criterion, which has been adopted by the Medical Informatics Platform of the European Human Brain Project. Our results show that, for our tasks, there is only a modest loss of statistical efficiency.
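As an illustration of a K-anonymity-style summary release, the sketch below merges adjacent histogram bins until every released bin holds at least K subjects; this is a hypothetical policy for illustration, not the Medical Informatics Platform’s actual one:

```python
import numpy as np

def k_anonymous_histogram(values, bin_edges, k=10):
    """Merge adjacent histogram bins until every released bin holds >= k subjects."""
    counts, edges = np.histogram(values, bins=bin_edges)
    merged_counts, merged_edges = [], [edges[0]]
    running = 0
    for count, right_edge in zip(counts, edges[1:]):
        running += count
        if running >= k:                 # bin is now safe to release
            merged_counts.append(running)
            merged_edges.append(right_edge)
            running = 0
    if running > 0 and merged_counts:    # fold any small remainder into the last bin
        merged_counts[-1] += running
        merged_edges[-1] = edges[-1]
    return merged_counts, merged_edges

rng = np.random.default_rng(0)
print(k_anonymous_histogram(rng.normal(size=200), np.linspace(-3, 3, 13), k=10))
```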
Özlem Ak Gümüş
This article examines the dynamic behavior of a prey-predator model exposed to a harvesting effect on the prey. First, the existence and stability of the fixed points of the model are obtained, and then the presence and direction of Neimark-Sacker bifurcation are examined. Using bifurcation theory, we show that the system undergoes a Neimark-Sacker bifurcation. A hybrid control strategy is applied to control the chaos caused by the Neimark-Sacker bifurcation. In addition, numerical simulations are given to verify the theoretical results.
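For context, the hybrid control strategy commonly applied to discrete maps $X_{n+1} = F(X_n)$ in this setting combines state feedback and parameter perturbation as

$$
X_{n+1} = \alpha\, F(X_n) + (1-\alpha)\, X_n, \qquad 0 < \alpha < 1,
$$

which leaves the fixed points unchanged while shrinking the moduli of the Jacobian’s eigenvalues there, thereby delaying or suppressing the Neimark-Sacker bifurcation; the paper’s controller may differ in detail.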
Himanshukumar R. Patel, Vipul A. Shah
In recent years, various metaheuristic algorithms have shown significant results in control engineering problems; moreover, fuzzy sets (FSs) and their theories have frequently been used for dynamic parameter adaptation in metaheuristic algorithms. The primary reason for this is that fuzzy inference systems (FISs) can be designed using human knowledge, allowing for intelligent dynamic adaptation of metaheuristic parameters. To accomplish these tasks, we propose shadowed type-2 fuzzy inference systems (ST2FISs) for two metaheuristic algorithms, namely cuckoo search (CS) and flower pollination (FP). With the advent of shadowed type-2 fuzzy logic, its uncertainty-handling abilities offer appealing performance improvements for dynamic parameter adaptation in metaheuristic methods; moreover, recent works have shown that ST2FISs provide better results than type-1 fuzzy inference systems (T1FISs). As a result, ST2FISs are proposed for adjusting the Lévy flight (P) and switching probability (P′) parameters in the original CS and FP algorithms, respectively. Our approach uses trapezoidal membership functions (MFs) for the ST2FSs. The proposed method was used to optimize the antecedent and consequent parameters of an interval type-2 fuzzy logic controller (IT2FLC) for a two-tank non-interacting conical frustum tank level (TTNCFTL) process. To verify that the implementation is efficient compared with the original CS and FP algorithms, simulation results were obtained without and then with uncertainty in the main actuator (CV1) and a system component (a leak) at the bottom of frustum tank two of the TTNCFTL process. In addition, the statistical z-test and the non-parametric Friedman test are performed to identify the best metaheuristic algorithm. The reported findings highlight the benefits of employing this approach over traditional general type-2 fuzzy inference systems, since we obtain superior performance in the majority of cases while using minimal computational resources.
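To make the idea of fuzzy dynamic parameter adaptation concrete, here is an illustrative type-1, single-input Python sketch (deliberately simpler than the authors’ ST2FIS; the rule base and output values are assumed):

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def adapt_switch_probability(progress):
    """Map normalized iteration progress (0..1) to FP's switching probability.

    Illustrative type-1 rule base (the paper uses shadowed type-2 sets):
      progress LOW  -> p HIGH (favor global pollination early),
      progress MED  -> p MEDIUM,
      progress HIGH -> p LOW  (favor local refinement late).
    """
    low = tri(progress, -0.5, 0.0, 0.5)
    med = tri(progress, 0.0, 0.5, 1.0)
    high = tri(progress, 0.5, 1.0, 1.5)
    p_low, p_med, p_high = 0.2, 0.5, 0.8   # assumed output singletons
    den = low + med + high
    return (low * p_high + med * p_med + high * p_low) / den if den > 0 else p_med

print([round(adapt_switch_probability(t / 10), 3) for t in range(11)])
```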
Rui Cong, Yukai Li, Kun Li et al.
In pedestrian traffic, individuals’ behaviors at crossings are significantly influenced by their mentality. In this paper, we consider the spread of anxiety among pedestrians based on the SIS model. An evolutionary game dynamics model of pedestrian crossing strategy is established on a complex network, and the evolutionary mechanism of pedestrian crossing strategy selection and the control strategy for anxiety are analyzed. The results show that individuals can effectively control the spread of anxiety by choosing waiting strategies, indicating that improving pedestrian safety awareness can effectively reduce pedestrian–vehicle conflict when crossing the street. Moreover, introducing a third-party reward mechanism can effectively enhance the effect of waiting behavior.
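For reference, the classic SIS dynamics underlying the anxiety-spreading layer read, in mean-field form,

$$
\frac{d\rho}{dt} = \beta\,\rho\,(1-\rho) - \mu\,\rho,
$$

where $\rho$ is the fraction of anxious (infected) pedestrians, $\beta$ the transmission rate along contacts, and $\mu$ the recovery rate; on a network, persistence requires roughly $\beta/\mu > 1/\lambda_{\max}(A)$ in the quenched mean-field approximation, with $\lambda_{\max}(A)$ the largest adjacency eigenvalue. The paper’s network variant may differ in detail.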
S. Khanra, S.K. Ghosh, C. Pathak
The paper describes an integrated/centralised supply chain model consisting of one supplier, one manufacturer, and one retailer within a finite time horizon. The manufacturer produces each lot at a finite rate. The lot production rate increases at a rate λ in successive batches, and the produced items are supplied to the retailer. The objective of the proposed model is to optimize the average total profit, considering a proportional increase in the size of successive shipments within a batch production run and the production time of the supplier. The corresponding average profits of the supplier, the manufacturer, and the retailer, and the average total profit of the integrated model, are obtained. The numerical examples clearly establish that it is always beneficial in terms of profit when the sizes of successive shipments are variable. Therefore, successive shipment sizes should be variable in order to obtain more profit. A sensitivity analysis of the optimal solution with respect to changes in the parameter values is also carried out to strengthen the proposed model.
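One common reading of "proportional increase" is geometric growth of shipment sizes; under that assumption (notation hypothetical), the $j$-th of $n$ shipments and the total lot are

$$
q_j = \lambda^{\,j-1} q_1, \qquad Q = \sum_{j=1}^{n} q_j = q_1\, \frac{\lambda^{n} - 1}{\lambda - 1} \quad (\lambda \neq 1),
$$

so fixing the total lot $Q$ and the growth rate $\lambda$ determines the first shipment size $q_1$.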
Jingming Hu, Tuan Tran Chu, Seok-Hee Hong et al.
Graph sampling methods have been used to reduce the size and complexity of big complex networks for graph mining and visualization. However, existing graph sampling methods often fail to preserve the connectivity and important structures of the original graph. This paper introduces a new divide-and-conquer approach to spectral graph sampling based on graph connectivity, namely the BC tree (a decomposition of a connected graph into biconnected components) and spectral sparsification. Specifically, we present two methods, spectral vertex sampling (BC_SV) and spectral edge sampling (BC_SS), which compute effective resistance values of vertices and edges for each connected component. Furthermore, we present DBC_SS and DBC_GD, graph connectivity-based distributed algorithms for spectral sparsification and graph drawing, respectively, aiming to further improve the runtime efficiency of spectral sparsification and graph drawing by integrating connectivity-based graph decomposition and distributed computing. Experimental results demonstrate that BC_SV and BC_SS are significantly faster than previous spectral graph sampling methods while preserving the same sampling quality. DBC_SS and DBC_GD obtain further significant runtime improvements over sequential approaches, and DBC_GD additionally achieves significant improvements in quality metrics over sequential graph drawing layouts.
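The effective resistance values driving the sampling are the standard spectral-sparsification quantities (Spielman-Srivastava): for an edge $e = (u, v)$,

$$
R_{uv} = (\mathbf{e}_u - \mathbf{e}_v)^{\top} L^{+} (\mathbf{e}_u - \mathbf{e}_v), \qquad p_e \propto w_e R_{uv},
$$

where $L^{+}$ is the Moore-Penrose pseudoinverse of the graph Laplacian and each edge is sampled with probability proportional to its weighted effective resistance.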
Danilo Delpini, Stefano Battiston, Guido Caldarelli et al.
Networks of portfolio holdings exemplify how interdependence between agents and their assets can be a source of systemic vulnerability. We study a real-world holdings network and compare it with alternative scenarios obtained by randomizing and rebalancing the original investments. Scenario generation relies on algorithms that satisfy the global constraints imposed by the numbers of outstanding shares in the market. We also consider fixed-diversification models and diversification-maximizing replicas. We extensively analyze the interplay between portfolio diversification and differentiation, and how the outreach of exogenous shocks depends on these factors as well as on the type of shock and the size of the network relative to the market. We find that real portfolios are poorly diversified but highly similar, that portfolio similarity correlates with systemic fragility, and that rebalancing can increase similarity depending on the initial network configuration. We show that a large diversification gain is achieved through rebalancing but, notably, this makes the network vulnerable to unselective shocks. Also, while the network is riskier in the presence of targeted shocks, it is safer than its random counterparts when stressed by widespread price downturns.
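One standard way to quantify the two notions the abstract contrasts (assumed here for illustration; the paper’s exact metrics may differ) is, for portfolio weight vectors $w_i$ over assets,

$$
s_{ij} = \frac{w_i \cdot w_j}{\lVert w_i \rVert\, \lVert w_j \rVert} \quad \text{(pairwise similarity)}, \qquad D_i = \Big(\sum_k w_{ik}^2\Big)^{-1} \quad \text{(effective diversification, the inverse Herfindahl index)}.
$$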
Adelmo Carvalho Silva, Cristina Maria D'Ávila
This text is an excerpt from the authors' postdoctoral research, carried out at UFBA/FACED from March 2018 to March 2019. The general objective of the study was to analyze the understanding, organization, and development of the Playful Pedagogical Practice of teachers who teach mathematics to students in the early years of elementary school (Ensino Fundamental). It posed the following question: how do elementary school mathematics teachers understand, interpret, and organize their playful pedagogical practices? Qualitative research, case study, and interpretive-method approaches were used to analyze the information produced by four teachers from a public school covering the 1st and 2nd cycles. The instruments used for data production and collection were observation, interviews, and document analysis. It is concluded that the teachers understand the concept investigated and strive to develop it in their mathematics classes.
Ross Schuchard, Andrew T. Crooks, Anthony Stefanidis et al.
This study presents a novel approach to expand the emergent area of social bot research. We employ a methodological framework that aggregates and fuses data from multiple global Twitter conversations with an available bot detection platform, and ultimately classifies the relative importance and persistence of social bots in online social networks (OSNs). In testing this methodology across three major global-event OSN conversations in 2016, we confirmed the hyper-social nature of bots: on average, suspected social bot accounts make far more attempts than accounts attributed to human users to initiate contact with other accounts via retweets. Social network analysis centrality measurements reveal that social bots, while comprising less than 0.3% of the total corpus user population, display a disproportionately high level of structural network influence, ranking particularly high among the top users across multiple centrality measures within the OSN conversations of interest. Further, we show that social bots exhibit temporal persistence in centrality ranking density when examining these same OSN conversations over time.
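A minimal sketch of the multi-measure centrality ranking used to compare accounts, on a toy directed retweet graph (node names hypothetical), using networkx:

```python
import networkx as nx

# Toy directed retweet network: an edge u -> v means u retweeted v.
G = nx.DiGraph([("bot1", "userA"), ("bot1", "userB"), ("bot1", "userC"),
                ("userA", "userB"), ("userB", "userC"), ("bot2", "userA"),
                ("bot2", "userC"), ("userC", "userA")])

measures = {
    "in-degree": nx.in_degree_centrality(G),
    "out-degree": nx.out_degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "pagerank": nx.pagerank(G),
}
for name, scores in measures.items():
    top = sorted(scores, key=scores.get, reverse=True)[:3]
    print(f"{name:>11}: {top}")   # accounts ranking high across measures
```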
Takayasu Fushimi, Kazumi Saito, Tetsuo Ikeda et al.
Dividing a geographical region into subregions with common characteristics is an important research topic and has been studied in many research fields such as urban planning and transportation planning. In this paper, using a network analysis approach, we attempt to extract functionally similar regions, each of which consists of functionally similar nodes of a road network. For this purpose, we previously proposed the Functional Cluster Extraction method, which takes a large amount of computation time to output clustering results because it treats too many high-dimensional vectors. To overcome this difficulty, we also previously proposed a transfer-learning-based clustering method that selects approximate medoids from the target network using the K medoids of a previously clustered network and divides all the nodes into K clusters. If an appropriate network with similar structural characteristics is selected, this method produces highly accurate clustering results. However, it is difficult to know in advance which network is appropriate. In this paper, we extend this method to ensure accuracy by using the K medoids of multiple networks rather than of a single specific network. Using actual urban streets, we evaluate our proposed method in terms of the improvement in clustering accuracy and computation time.
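A sketch of the transfer step in Python/NumPy (feature vectors and source medoids assumed given; the paper’s similarity measure may differ):

```python
import numpy as np

def transfer_cluster(target_vecs, source_medoid_vecs):
    """Assign target nodes to clusters seeded by another network's K medoids.

    For each source medoid, pick the most similar target node as an
    approximate medoid, then assign every target node to its nearest one.
    """
    # Approximate medoids: target nodes closest to each source medoid.
    d = np.linalg.norm(target_vecs[:, None, :] - source_medoid_vecs[None, :, :], axis=2)
    approx_medoids = d.argmin(axis=0)            # K indices into the target nodes
    # Assign all target nodes to the nearest approximate medoid.
    d2 = np.linalg.norm(target_vecs[:, None, :] - target_vecs[approx_medoids][None, :, :], axis=2)
    return d2.argmin(axis=1)                     # cluster label per target node

rng = np.random.default_rng(1)
labels = transfer_cluster(rng.random((100, 8)), rng.random((4, 8)))
print(np.bincount(labels))                       # cluster sizes
```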
Martin Seilmayer, Matthias Ratajczak
This paper provides an overview of the use of the Fourier transform and its related methods, focusing on the subtleties to which users must pay attention. Typical questions that are often addressed to data will be discussed: for example, the origin of frequency or band limitation of a signal, or the sources of artifacts arising when a Fourier transform is carried out. Another topic is the processing of fragmented data; here, the Lomb-Scargle method is explained with an illustrative example of how to deal with this special type of signal. Furthermore, time-dependent spectral analysis, with which one can evaluate the point in time at which a certain frequency appears in the signal, is of interest. The goal of this paper is to collect the important information about these common methods and give the reader a guide on how to apply them to one-dimensional data. The introduced methods are supported by the spectral package, which was published for the statistical environment R prior to this article.
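For reference, the Lomb-Scargle periodogram of unevenly sampled data $\{(t_j, x_j)\}$ with mean $\bar{x}$ and variance $\sigma^2$ is

$$
P(\omega) = \frac{1}{2\sigma^2} \left[ \frac{\Big(\sum_j (x_j - \bar{x}) \cos \omega (t_j - \tau)\Big)^2}{\sum_j \cos^2 \omega (t_j - \tau)} + \frac{\Big(\sum_j (x_j - \bar{x}) \sin \omega (t_j - \tau)\Big)^2}{\sum_j \sin^2 \omega (t_j - \tau)} \right],
$$

with the phase $\tau$ fixed by $\tan(2\omega\tau) = \sum_j \sin(2\omega t_j) \big/ \sum_j \cos(2\omega t_j)$, which makes the estimate invariant to time shifts.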
Vladimir Reinharz, Alexander Churkin, Harel Dahari et al.
The multiscale model of hepatitis C virus (HCV) dynamics, which includes intracellular viral RNA (vRNA) replication, has been formulated in recent years to provide a new conceptual framework for understanding the mechanism of action of a variety of agents for the treatment of HCV. We present a robust and efficient numerical method, an implicit, adaptive-stepsize scheme of Rosenbrock type, that is highly suited to solving this problem. We provide a graphical user interface that applies this method and is useful for simulating viral dynamics during treatment with anti-HCV agents that act against HCV on the molecular level.
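For orientation, an $s$-stage Rosenbrock method for $y' = f(y)$ has the general form (Hairer-Wanner notation; the paper’s specific coefficients are not given in the abstract)

$$
\left(I - h \gamma J\right) k_i = h\, f\!\Big(y_n + \sum_{j=1}^{i-1} \alpha_{ij} k_j\Big) + h J \sum_{j=1}^{i-1} \gamma_{ij} k_j, \qquad y_{n+1} = y_n + \sum_{i=1}^{s} b_i k_i,
$$

with $J = \partial f/\partial y\,(y_n)$: each step solves only linear systems with the single matrix $I - h\gamma J$, which is what makes the scheme efficient for stiff problems, while an embedded lower-order solution provides the error estimate for adaptive stepsize control.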
Page 48 of 325,631