Supply Chain Management (SCM) has received considerable attention from the industrial community in recent decades. SCM remains a relevant research topic in many business areas, including supply chain integration benefits, uncertainty and risk mitigation methods, and decision-making and optimization methodologies. In modern supply chains, huge volumes of data are generated every second, and emerging technologies such as Radio Frequency Identification (RFID) have amplified the availability of online data. Artificial Intelligence (AI) methods that go beyond simply collecting this data enable Supply Chain (SC) managers to monitor operations in a timely fashion, and AI, and specifically Artificial Neural Networks (ANNs), can be applied to several aspects of an SC to help manage and optimize it. This study reviews state-of-the-art ANNs and Deep Neural Networks (DNNs) in the field of supply chain management. One hundred high-quality research studies that applied ANNs in supply chain management are reviewed and categorized into four classes: performance optimization, supplier selection, forecasting, and inventory management. Our review shows that ANNs and DNNs offer significant potential for better supply chain management: across the reviewed studies, neural networks are frequently reported to improve predictive performance and support monitoring and control in complex, nonlinear supply chain settings, often complementing traditional operations research approaches. Finally, the limitations of ANN models and possibilities for future studies are presented at the end of this study.
Abdullah H. Sofiyev, Mahmure Avey, Nigar M. Aslanova
In this study, the solution of the buckling problem of axially loaded laminated cylindrical shells consisting of functionally graded (FG) nanocomposites in elastic and thermal environments is presented within extended first-order shear deformation theory (FOST) for the first time. The effective material properties and thermal expansion coefficients of nanocomposites in the layers are computed using the extended rule of mixture method and molecular dynamics simulation techniques. The governing relations and equations for laminated cylindrical shells consisting of FG nanocomposites on the two-parameter elastic foundation and in thermal environments are mathematically modeled and solved to find the expression for the axial buckling load. The numerical results of the current analytical approach agree well with the existing literature results obtained using a different methodology. Finally, some new results and interpretations are provided by investigating the influences of different parameters such as elastic foundations, thermal environments, FG nanocomposite models, shear stress, and stacking sequences on the axial buckling load.
We consider the radial spreading of an axisymmetric viscous gravity current, in which fluid released from a point source at a constant flux is confined vertically to a narrow gap between two horizontal plates. A grounding line forms where the free surface of the current intersects the top plate, creating two regions of flow: an inner, circular contact region near the source where the fluid fills the entire gap between the two plates; and an outer annular region where the free surface of the gravity current lies below the top plate. Mathematical models of such flows involve solving a partial differential equation for the height of the free surface, subject to appropriate boundary conditions at the grounding line and at the leading edge of the current. In many cases, these systems admit similarity solutions. I will present one such model in which the effects of surface tension are included locally at the grounding line and at the leading edge, leading to similarity solutions that depend on two dimensionless parameters, J and S, which measure the impact of confinement and the effects of surface tension, respectively. Introducing the surface tension parameter S is shown to provide better agreement between theory and experiment.
Maria Elizabete Rambo Kochhann, Marieli Vanessa Rediske de Almeida, Clesensia Mesquita Cassiano
This article is the product of initial teacher-education activities carried out with undergraduates of the Mathematics Teaching degree (Licenciatura em Matemática) at the Universidade Federal de Integração Latino-Americana (Unila). Among the projects are the Programa de Iniciação à Docência (Pibid) and the Programa de Apoio à Vivência de Componentes Curriculares (PVCC). These activities aim to immerse pre-service teachers in state public schools in the city of Foz do Iguaçu, PR, and neighboring cities, and to foster reflection on the teacher's role and the challenges to be encountered in teaching. The objective of the research was to understand the formative possibilities of Pibid and PVCC as perceived by the Unila Mathematics Teaching undergraduates who answered the form administered to them. Data were collected through online questionnaires (Google Forms) answered by 30 students in June 2024. To highlight the students' diversity, we describe aspects concerning the participants. The study is qualitative and interpretive, drawing on the participants' responses to the form about their perceptions of the program and their different roles within it. These perceptions were categorized through a content analysis process, from which two categories emerged: Trajectory and Motivation in Mathematics, and Engagement and Future Perspectives. We conclude that the formative activities examined are considered important and beneficial by the pre-service teachers, and that securing spaces in initial teacher education for experiencing the future teaching profession is fundamental as a way of encouraging students to persist in the program.
Special aspects of education, Applied mathematics. Quantitative methods
The presence of residual stress seriously affects the mechanical performance and reliability of engineering components. Here, the authors propose a novel method to determine residual stress through micro-hardness measurements of machined surfaces. In this study, a mathematical model of indentation under equal-biaxial stress is established. The correlation of micro-hardness with indentation and residual stress is then used to derive a prediction equation for residual stress. The material studied is the nickel-based superalloy GH4169. The residual stress prediction formula for GH4169 is ultimately determined through the finite element method by subjecting the indentation to residual stress and fitting the experimental test data. The relationships between indentation modulus and indentation depth, and between residual stress and hardness, are given quantitatively. The predictions show that compressive residual stress enhances material hardness and allows contact deformation to reach the fully plastic state at a shallower indentation depth. Conversely, tensile residual stress leads to a deeper depth and lower hardness at the initial stage of the fully plastic state. For materials that yield more easily (small ratio of elastic modulus to yield strength), the effect is more pronounced. The model presented in this paper can accurately predict residual stress from micro-hardness measurements of machined surfaces.
Owing to the expansion of non-face-to-face activities, security issues in video conferencing systems are becoming more critical. In this paper, we focus on the end-to-end encryption (E2EE) function among the security services of video conferencing systems. First, the E2EE-related protocols of Zoom and Secure Frame (SFrame), which are representative video conferencing systems, are thoroughly investigated, and the two systems are compared and analyzed from an overall viewpoint. Next, the E2EE protocol in a Government Public Key Infrastructure (GPKI)-based video conferencing system, in which the user authentication mechanism is fundamentally different from those used in commercial-sector systems such as Zoom and SFrame, is considered. In particular, among the E2EE-related protocols, we propose a detailed mechanism in which a post-quantum cryptography (PQC) key encapsulation mechanism (KEM) is applied to the user key exchange process. Since the session key is not disclosed to the central server, the proposed mechanism, which includes the PQC KEM, satisfies the E2EE security requirements even against future quantum computers. Moreover, our GPKI-based mechanism raises the security level of next-generation video conferencing systems to a quantum-safe level.
Sara Zergani, K. K. Viswanathan, D. S. Sankar
et al.
This mathematical model studies the dynamics of tumor growth, one of the most complex problems in dynamics, relating several interrelated processes over multiple spatial and temporal scales. To construct the tumor growth model, an angiogenesis model is used, with a focus on controlling tumor volume and preventing new establishment, dissemination, and growth. The lattice Boltzmann method (LBM) is applied to the Navier–Stokes equations to obtain numerical simulations of blood flow through the vasculature. The flow features are found to be extremely sensitive to stenosis severity, even at small strains and stresses, and a severe effect on flow patterns and wall shear stresses is observed in the tumor blood vessels. Owing to the nonlinear deformation of the vessel wall, the flow rate becomes unstable or distorted, affecting the complex vessel geometry and changing the blood flow pattern. When blood flows through a stenotic artery, the presence of moderate or severe stenosis can lead to insufficient blood supply to the downstream tissues. Consequently, highly disturbed flow occurs downstream of the stenosed artery, and plaque rupture may even occur when the flow pattern becomes very irregular and complex as it transitions to turbulence, which cannot be described without assumptions on the geometry. The numerical results predicted by the LBM-based code are found to be in good agreement with relevant established results in the literature.
The classical Box–Pierce and Ljung–Box tests for autocorrelation of residuals exhibit severe deviations from nominal type I error rates. Previous studies have attempted to address this issue either by revising existing tests or by designing new techniques. The adjusted Box–Pierce test achieves the best results with respect to attaining type I error rates close to nominal values. This paper proposes a further correction to the adjusted Box–Pierce test that attains near-perfect type I error rates. The approach is based on an inflation of the rejection region for all sample sizes and lags, calculated via a linear model fitted to simulated data covering a large range of data scenarios. Our results show that the new approach attains the best type I error rates of all goodness-of-fit time series statistics.
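As a reference point for what is being corrected, the two classical portmanteau statistics can be sketched in a few lines of NumPy (this illustrates the textbook Box–Pierce and Ljung–Box formulas only, not the paper's adjusted test; the function name and white-noise example are ours):

```python
import numpy as np

def portmanteau_stats(resid, h):
    """Classical Box-Pierce and Ljung-Box statistics for lags 1..h:
    Q_BP = n * sum(rho_k^2),  Q_LB = n(n+2) * sum(rho_k^2 / (n-k)).
    Under the white-noise null, both are asymptotically chi-square(h)."""
    x = np.asarray(resid, dtype=float)
    n = len(x)
    x = x - x.mean()
    denom = np.sum(x * x)
    # Sample autocorrelations rho_1 .. rho_h.
    rho = np.array([np.sum(x[k:] * x[:-k]) / denom for k in range(1, h + 1)])
    q_bp = n * np.sum(rho ** 2)
    q_lb = n * (n + 2) * np.sum(rho ** 2 / (n - np.arange(1, h + 1)))
    return q_bp, q_lb

rng = np.random.default_rng(0)
resid = rng.standard_normal(200)   # simulated white-noise residuals
q_bp, q_lb = portmanteau_stats(resid, h=10)
```

Since the Ljung–Box weight n(n+2)/(n−k) exceeds n for every lag k ≥ 1, Q_LB always exceeds Q_BP in finite samples, which is precisely the small-sample territory where the type I error distortions studied here arise.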
Microorganisms can contaminate food, causing food spoilage and health risks when the food is consumed. Foods are not sterile; they have a natural flora and a transient flora reflecting their environment. To ensure food is safe, we must destroy these microorganisms or prevent their growth. Recurring hazards due to lapses in the handling, processing, and distribution of foods cannot be resolved with obsolete methods and inadequate proposals; they require a positive approach and resolution through the pooling of accumulated knowledge. As the industrial domain evolves rapidly and producers face pressure to continually improve both products and processes, a considerable competitive advantage can be gained by introducing predictive modeling in the food industry. The industry's research and development investments are protected by investigating the many factors that can affect the final product. The presence of microorganisms in foods is critical for food quality. Microbial behavior, however, is closely related to the properties of the food itself, such as water activity and pH, and to storage conditions such as temperature and relative humidity. The combined effect of these factors on microbial growth in foods can be predicted by mathematical models derived from quantitative studies of microbial populations. Predictive models allow us to evaluate shifts in microbial numbers in foods from harvesting to production, providing a permanent and objective evaluation of the parameters involved. In this vein, predictive microbiology is the study of microbial behavior in relation to given environmental conditions, with the aim of assuring food quality and safety. Microbial responses are evaluated through mathematical models, which must be validated for each specific case. As a result, predictive microbiology modeling is a useful tool for quantitative risk assessment.
Herein, we review the predictive models that have been adapted to improve the food industry chain through virtual prototypes of the final product or of a process reflecting real-world conditions. Predictive models are, today, a useful and valuable tool in research as well as in industrial food preservation processes.
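To make the idea of a primary growth model concrete, one widely used form, the modified Gompertz curve of Zwietering et al., can be sketched as follows (parameter values are illustrative only, not taken from any specific dataset in this review):

```python
import numpy as np

def gompertz_log_count(t, a_max, mu_max, lam):
    """Modified Gompertz primary growth model:
    log10(N(t)/N0) = A * exp(-exp(mu_max * e / A * (lam - t) + 1)),
    where A (a_max) is the asymptotic increase in log10 count,
    mu_max the maximum specific growth rate (log10 units per hour),
    and lam the lag time (hours)."""
    return a_max * np.exp(-np.exp(mu_max * np.e / a_max * (lam - t) + 1.0))

t = np.linspace(0.0, 48.0, 97)                              # hours
y = gompertz_log_count(t, a_max=6.0, mu_max=0.5, lam=5.0)   # toy parameters
```

The sigmoid shape reproduces the familiar lag, exponential, and stationary phases: growth is negligible before the lag time and levels off near the asymptote A, which is exactly the kind of response surface that is re-fitted as storage temperature, pH, or water activity change.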
This paper presents a general framework for a class of notoriously difficult problems arising in optimal resource management, exploitation of natural reserves, pension fund valuation, environmental protection, and storage operation. Using common abstract features of this problem class, we present a technique that provides a significant reduction in the number of decision variables. As an application, we discuss battery storage control to show how a decision problem that is practically unsolvable in its original formulation can be treated by our method.
Alessandro Scuderi, Mariarita Cammarata, Giovanni La Via
et al.
The increasing micronutrient deficiency in the nutritional habits of the world's population and the growing need for healthy foods have driven the development of biofortified crops. In a context where consumers' attention is focused on a healthy lifestyle and respect for the environment, the cultivation of potatoes enriched with selenium offers a clear advantage in pursuing this twofold objective. The crop was analyzed using the life-cycle assessment (LCA) methodology in order to quantify the environmental burden generated by selenium (Se) potato cultivation and to compare it with potato grown under a conventional regime. The LCA shows that the biofortified product is more sustainable than the conventional one; this not only benefits the consumer but also opens new opportunities for farmers, who can implement more environmentally friendly practices.
One of the crucial challenges in image segmentation is intensity inhomogeneity. Most region-based models rely on intensity distributions and therefore struggle to completely segment images with severe intensity inhomogeneity and complex structure. In this work, we propose a novel hybrid model that blends kernel and Euclidean distance metrics. Experimental results on real and synthetic images suggest that the proposed model outperforms the models of Chan and Vese, Wu and He, and Salah et al.
The objective of this research is to identify differences in creative thinking ability between a class taught using mathematical problem posing based on Lesson Study for Learning Community (LSLC) and a class taught in a conventional fashion. The research applied a mixed-methods design, beginning with the development of LSLC-based instructional instruments to train students' creativity, collaboration, and communication. The quantitative part applied an experimental method with a quasi-experimental design. The results indicate: (1) the developed instruments meet validity, practicality, and effectiveness criteria; and (2) there is a difference in creative thinking ability in mathematical problem posing between the experimental class and the control class. Applying different instruction in the two sample classes affected the level of students' creative thinking skills. Mathematical problem posing based on Lesson Study for Learning Community can help students understand the lesson.
The paper deals with a generalization of the risk model with stochastic premiums in which dividends are paid according to a multi-layer dividend strategy. First, we derive piecewise integro-differential equations for the Gerber–Shiu function and the expected discounted dividend payments until ruin. In addition, we investigate the model in detail in the case of exponentially distributed claim and premium sizes and find explicit formulas for the ruin probability as well as for the expected discounted dividend payments. Lastly, numerical illustrations for some multi-layer dividend strategies are presented.
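For context on what such explicit formulas look like, the simplest member of this model family, the classical Cramér–Lundberg model with exponential claims, admits a well-known closed form, sketched below (this is the textbook special case, not the stochastic-premium, multi-layer model analyzed in the paper):

```python
import math

def ruin_prob_cramer_lundberg(u, theta, mu):
    """Ruin probability psi(u) in the classical compound-Poisson risk
    model with Exp(mean mu) claim sizes and safety loading theta > 0:
        psi(u) = 1/(1+theta) * exp(-theta * u / ((1+theta) * mu)).
    Textbook result, used here only to illustrate the shape of
    explicit ruin formulas under exponential claims."""
    return math.exp(-theta * u / ((1.0 + theta) * mu)) / (1.0 + theta)

psi0 = ruin_prob_cramer_lundberg(0.0, theta=0.2, mu=1.0)  # = 1/(1+theta)
psi5 = ruin_prob_cramer_lundberg(5.0, theta=0.2, mu=1.0)
```

The exponential decay in the initial surplus u is characteristic of light-tailed claims; the paper's contribution is obtaining analogues of such formulas when premiums arrive stochastically and dividends follow a multi-layer barrier structure.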
This paper investigates the optimal number of clusters for datasets modeled as a Gaussian mixture. For that purpose, an adaptive method based on a modified Expectation-Maximization (EM) algorithm is developed. The modification is made within the hidden variable of the standard EM algorithm. Assuming that the data are multivariate normally distributed, with each component of the Gaussian mixture corresponding to one cluster, the modification exploits the fact that the squared Mahalanobis distance of samples follows a chi-square distribution. In addition, a quantitative measure is constructed to determine the number of clusters. The proposed method is demonstrated in several numerical examples.
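The chi-square property that the modification relies on is easy to check numerically: for a d-variate normal sample, the squared Mahalanobis distance follows a chi-square distribution with d degrees of freedom (mean d, variance 2d). A minimal NumPy sketch with a single Gaussian component and toy parameters of our choosing:

```python
import numpy as np

rng = np.random.default_rng(42)
d, n = 3, 20000

# Draw n samples from N(mu, Sigma) with a non-trivial covariance.
mu = np.array([1.0, -2.0, 0.5])
A = np.array([[1.0, 0.0, 0.0],
              [0.5, 1.0, 0.0],
              [0.2, -0.3, 1.0]])
sigma = A @ A.T
x = rng.standard_normal((n, d)) @ A.T + mu

# Squared Mahalanobis distance w.r.t. the true parameters.
diff = x - mu
m2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(sigma), diff)

# For multivariate normal data, m2 ~ chi-square(d).
mean_m2, var_m2 = m2.mean(), m2.var()
```

In the method described above, departures of the cluster-wise distance distributions from this chi-square benchmark are what signal a mismatch between the assumed and the true number of components.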
The modeling of developable surfaces is a very important application in plate-metal-based industries. Accordingly, this discussion aims to obtain formulas for constructing regular developable Bézier patches in which each boundary curve lies in one of two parallel planes. The results are as follows: we derive the equation systems described by the constant, linear, and quadratic control parameters of the regular developable Bézier patch criteria. The new approach is numerically tested by constructing regular developable Bézier patches whose boundary curves are defined, respectively, by combinations of degrees four, five, and six.
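The boundary curves of such patches are ordinary Bézier curves, and their evaluation can be sketched with de Casteljau's algorithm (a generic illustration of planar boundary-curve evaluation, not the paper's patch construction; the control polygon is ours):

```python
def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] by repeated
    linear interpolation of the control points (de Casteljau's
    algorithm). Control points are (x, y, z) tuples; the curve's
    degree is len(control_points) - 1."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        # Interpolate each consecutive pair of points at parameter t.
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts[:-1], pts[1:])]
    return pts[0]

# A degree-four boundary curve lying in the plane z = 0.
ctrl = [(0, 0, 0), (1, 2, 0), (2, 3, 0), (3, 2, 0), (4, 0, 0)]
p0 = de_casteljau(ctrl, 0.0)   # first control point
p1 = de_casteljau(ctrl, 1.0)   # last control point
pm = de_casteljau(ctrl, 0.5)   # interior point of the curve
```

Keeping each boundary polygon in its own plane, as required above, guarantees the boundary curves themselves are planar, since Bézier curves lie in the affine hull of their control points.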
We describe a methodology for characterizing the relative structural importance of an arbitrary network edge by exploiting the properties of a k-shortest path algorithm. We introduce the metric Edge Gravity, measuring how often an edge occurs in any possible network path, as well as k-Gravity, a lower bound based on paths enumerated while solving the k-shortest path problem. The methodology is demonstrated using Granovetter's original strength-of-weak-ties network examples as well as the well-known Florentine families of the Italian Renaissance and the Krebs 2001 terrorist networks. The relationship to edge betweenness is established. It is shown that important edges, i.e., ones with a high Edge Gravity, are not necessarily adjacent to nodes of importance as identified by standard centrality metrics, and that key nodes, i.e., ones with high centrality, often have their importance bolstered by being adjacent to bridges to nowhere, i.e., edges with low Edge Gravity. It is also demonstrated that Edge Gravity distinguishes critically important bridges or local bridges from those of lesser structural importance.
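On a toy graph, the exhaustive quantity that Edge Gravity measures can be computed directly by enumerating every simple path between every pair of nodes (a brute-force pure-Python sketch for illustration; the example graph and function name are ours, and the paper's k-shortest-path machinery exists precisely because this enumeration explodes on real networks):

```python
from collections import defaultdict
from itertools import combinations

def edge_counts_all_simple_paths(adj):
    """For every undirected edge, count how often it occurs in any
    simple path between any unordered pair of nodes. adj maps each
    node to a list of its neighbors."""
    counts = defaultdict(int)

    def dfs(node, target, visited, path_edges):
        if node == target:
            for e in path_edges:          # record every edge on this path
                counts[e] += 1
            return
        for nxt in adj[node]:
            if nxt not in visited:
                visited.add(nxt)
                path_edges.append(frozenset((node, nxt)))
                dfs(nxt, target, visited, path_edges)
                path_edges.pop()
                visited.remove(nxt)

    for s, t in combinations(sorted(adj), 2):
        dfs(s, t, {s}, [])
    return dict(counts)

# Two triangles joined by a single bridge edge (2, 3).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
counts = edge_counts_all_simple_paths(adj)
bridge = frozenset((2, 3))
```

As the abstract suggests, the bridge dominates: every path between the two triangles must cross it, so it accumulates the highest count even though its endpoints need not top any node-centrality ranking.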