Machine learning models excel with abundant annotated data, but annotation is often costly and time-intensive. Active learning (AL) aims to improve the performance-to-annotation ratio by using query methods (QMs) to iteratively select the most informative samples. While AL research focuses mainly on QM development, the evaluation of this iterative process lacks appropriate performance metrics. This work reviews eight years of AL evaluation literature and formally introduces the speed-up factor, a quantitative multi-iteration QM performance metric that indicates the fraction of samples needed to match random sampling performance. Using four datasets from diverse domains and seven QMs of various types, we empirically evaluate the speed-up factor and compare it with state-of-the-art AL performance metrics. The results confirm the assumptions underlying the speed-up factor, demonstrate its accuracy in capturing the described fraction, and reveal its superior stability across iterations.
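As a rough illustration of how such a metric could be computed from recorded learning curves, the sketch below takes the accuracy-versus-labels curves of a query method and of random sampling and, for each random-sampling operating point, estimates the fraction of samples the query method needs to reach the same performance. This is a plausible formalization for illustration only; the paper's exact definition may differ, and the curves are hypothetical.

```python
import numpy as np

def speed_up_factor(n_qm, acc_qm, n_rand, acc_rand):
    """Estimate, per random-sampling operating point, the fraction of
    labeled samples a query method (QM) needs to match that accuracy.
    Assumes roughly monotone learning curves; a hypothetical
    formalization of the speed-up factor, not the paper's exact one."""
    fractions = []
    for n_r, a_r in zip(n_rand, acc_rand):
        # Smallest budget at which the QM curve reaches accuracy a_r.
        reached = np.where(np.asarray(acc_qm) >= a_r)[0]
        if len(reached) == 0:
            continue  # QM never matches this accuracy; skip the point
        fractions.append(n_qm[reached[0]] / n_r)
    return float(np.mean(fractions)) if fractions else float("nan")

# Hypothetical learning curves: labels annotated vs. test accuracy.
n = [100, 200, 300, 400, 500]
acc_random = [0.60, 0.68, 0.73, 0.76, 0.78]
acc_qm     = [0.66, 0.74, 0.77, 0.79, 0.80]

print(f"speed-up factor ~ {speed_up_factor(n, acc_qm, n, acc_random):.2f}")
```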
Effective distribution and transportation systems are crucial for enhancing a company's competitive advantage and productivity. This research evaluates the application of the Distribution Requirements Planning (DRP) method and a greedy algorithm to improve the distribution of crude palm oil at PT. Agrojaya Perdana. DRP is used to project distribution needs through a quantitative approach based on historical demand data. The results demonstrate that integrating the DRP method with the greedy algorithm is effective in increasing distribution efficiency, reducing delivery times, and optimizing distribution scheduling. Calculations using the integrated DRP method show that PT. Agrojaya Perdana will supply 12,501 tons of Crude Palm Kernel Oil from May to October 2025. Furthermore, the greedy algorithm minimizes the routes for each company and prioritizes shipments to PT. Musim Mas, resulting in a profit of IDR 17,851,743,750.
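As a hedged sketch of the greedy idea described above (not the study's actual data or implementation), the snippet below allocates a limited supply to buyers in descending order of profit per ton, so that the highest-paying buyer, such as PT. Musim Mas in the study, is served first. All figures and buyer names other than PT. Musim Mas are placeholders.

```python
# Minimal greedy allocation sketch: serve buyers in descending order of
# profit per ton until the supply runs out. Numbers are hypothetical
# placeholders, not the study's data.
buyers = [
    {"name": "PT. Musim Mas", "demand_tons": 7000, "profit_per_ton": 1_450_000},
    {"name": "Buyer B",       "demand_tons": 4000, "profit_per_ton": 1_300_000},
    {"name": "Buyer C",       "demand_tons": 5000, "profit_per_ton": 1_200_000},
]

supply = 12_501  # total tons available over the planning horizon
total_profit = 0
for b in sorted(buyers, key=lambda b: b["profit_per_ton"], reverse=True):
    shipped = min(b["demand_tons"], supply)
    supply -= shipped
    total_profit += shipped * b["profit_per_ton"]
    print(f"{b['name']}: ship {shipped} tons")

print(f"total profit: IDR {total_profit:,}")
```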
Using the ZA method, proposed for the first time in this paper, it is theoretically possible to obtain general or analytical solutions for an infinite number of ordinary and partial differential equations. These equations can be linear or nonlinear, and solutions for some of them are obtained here for the first time. From the general solutions, we obtain exact solutions to seven definite solution problems and identify two singularities. Using the ZA method, we also prove that some important equations do not admit certain types of solutions.
This study investigates the role of symbolic content, including social, cultural, and political imagery, in shaping algorithmic biases within YouTube’s recommendation system, using the 2024 Taiwanese presidential election as a case study. Leveraging classification methodology and a dataset of 15,600 videos collected via a rigorous multiphase keyword expansion, our research employs a novel combination of social network analysis, statistical metrics, and generative AI-based content evaluation to examine the propagation dynamics, community formation, and topic relevance of both symbolic and non-symbolic content. Our analysis reveals a dual dynamic: symbolic content fosters tightly integrated, cohesive communities characterized by strong thematic consistency and deeper topic relevance, yet exhibits limited network-wide visibility, while non-symbolic content achieves broader connectivity by often serving as crucial bridges within recommendation networks. Building on prior research documenting the influential role of symbols in political mobilization and online misinformation, we further assess how symbolic imagery interacts with algorithmic recommendation processes. Our findings underscore that algorithmic biases may inadvertently reinforce echo chambers and limit content diversity, highlighting the need for recommendation systems that balance content relevance with community-specific thematic coherence. These insights offer valuable guidance for policymakers, platform designers, and content creators striving for equitable content representation in the digital era.
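To make the bridging-versus-cohesion contrast concrete, here is a minimal sketch (with a hypothetical toy graph, not the paper's 15,600-video dataset) that uses networkx to compare betweenness centrality, a common proxy for bridging, and clustering coefficient, a common proxy for community cohesion, across two groups of nodes.

```python
import networkx as nx

# Toy recommendation graph: an edge means "video A recommends video B".
# Node labels are hypothetical; 's*' = symbolic, 'n*' = non-symbolic.
G = nx.Graph([
    ("s1", "s2"), ("s2", "s3"), ("s1", "s3"),   # tight symbolic triangle
    ("s4", "s5"), ("s5", "s6"), ("s4", "s6"),   # second symbolic cluster
    ("n1", "s1"), ("n1", "s4"), ("n1", "n2"),   # non-symbolic bridge
    ("n2", "s6"),
])

betweenness = nx.betweenness_centrality(G)
clustering = nx.clustering(G)

for group, prefix in [("symbolic", "s"), ("non-symbolic", "n")]:
    nodes = [v for v in G if v.startswith(prefix)]
    avg_b = sum(betweenness[v] for v in nodes) / len(nodes)
    avg_c = sum(clustering[v] for v in nodes) / len(nodes)
    print(f"{group}: mean betweenness={avg_b:.3f}, mean clustering={avg_c:.3f}")
```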
Mauricio Ayala-Rincón, Thaynara Arielly de Lima, Georg Ehling
et al.
The recently introduced framework of Graded Quantitative Rewriting is an innovative extension of traditional rewriting systems, in which rules are annotated with degrees drawn from a quantale. This framework provides a robust foundation for equational reasoning that incorporates metric aspects, such as the proximity between terms and the complexity of rewriting-based computations. Quantitative narrowing, introduced in this paper, generalizes quantitative rewriting by replacing matching with unification in reduction steps, enabling the reduction of terms even when they contain variables, through simultaneous instantiation and rewriting. In the standard (non-quantitative) setting, narrowing has been successfully applied in various domains, including functional logic programming, theorem proving, and equational unification. Here, we focus on quantitative narrowing to solve unification problems in quantitative equational theories over Lawverean quantales. We establish its soundness and discuss conditions under which completeness can be ensured. This approach allows us to solve quantitative equations in richer theories than those addressed by previous methods.
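As a very rough, hypothetical toy of the graded idea only (ground rewriting with matching, not the unification-based narrowing the paper develops), the sketch below annotates rules with degrees from the Lawvere quantale ([0, ∞], +, 0) and searches for the cheapest reachable terms, composing degrees additively along a rewrite sequence. Terms, rules, and degrees are all invented for illustration.

```python
import heapq

# Ground rewrite rules annotated with degrees from the Lawvere quantale
# ([0, inf], +, 0): each rewrite "costs" a distance, and costs compose
# by addition along a rewrite sequence. Symbols are hypothetical.
rules = [
    ("f(a)", "b", 0.0),   # exact rule: degree 0
    ("f(a)", "c", 0.3),   # approximate rule: f(a) is "close to" c
    ("c",    "d", 0.2),
]

def cheapest_forms(start):
    """Dijkstra-style search over rewrite sequences, returning the minimal
    accumulated degree at which each term is reachable from `start`."""
    best = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, term = heapq.heappop(heap)
        if d > best.get(term, float("inf")):
            continue
        for lhs, rhs, w in rules:
            if lhs in term:  # naive subterm matching on the string form
                new = term.replace(lhs, rhs, 1)
                if d + w < best.get(new, float("inf")):
                    best[new] = d + w
                    heapq.heappush(heap, (d + w, new))
    return best

print(cheapest_forms("g(f(a))"))
# e.g. {'g(f(a))': 0.0, 'g(b)': 0.0, 'g(c)': 0.3, 'g(d)': 0.5}
```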
This work is dedicated to a novel sampling method for accurately reconstructing elastic and electromagnetic sources from far field patterns. We show that the proposed indicators, in the form of integrals with full far field patterns, are exactly the source functions. These facts not only give constructive uniqueness proofs for the inverse source problems, but also establish the theoretical basis of the proposed sampling methods. Furthermore, we derive stability estimates for the corresponding discrete indicators using far field patterns with finitely many observations and frequencies. We also propose indicators with partial far field patterns and prove their validity for providing derivative information about the unknown sources. Numerical examples are presented to verify the accuracy and stability of the proposed quantitative sampling method.
Ana Paula Nascimento, Alexandra Oliveira, Brígida Mónica Faria
et al.
In various fields, such as economics, finance, bioinformatics, geology, and medicine, namely in the cases of the electroencephalogram, the electrocardiogram, and biotechnology, cluster analysis of time series is necessary. The first step in cluster applications is to establish a similarity/dissimilarity coefficient between time series. This article introduces an extension of the affinity coefficient to the autoregressive expansions of invertible autoregressive moving average models in order to measure the similarity between them. An application of the affinity coefficient between time series was developed and implemented in R. Cluster analysis is performed with the corresponding distance for estimated simulated autoregressive moving average processes of order one. The primary findings indicate that processes with similar forecast functions are grouped in the same cluster, as expected for the affinity coefficient. The affinity coefficient is also very sensitive to changes in the behavior of the forecast functions: processes with slightly different forecast functions are well separated into different clusters. Moreover, if two processes have an infinite number of π-weights of symmetric sign, the affinity value is also symmetric.
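For intuition, a minimal numerical sketch: the π-weights of an invertible ARMA(1,1) process (1 − φB)X_t = (1 − θB)ε_t are π_j = (φ − θ)θ^{j−1}, and a normalized inner-product affinity (used here as an assumption, since the authors' exact extension of the affinity coefficient may differ) compares two truncated weight sequences. Note how flipping the sign of every weight flips the sign of the affinity, matching the symmetry remark above.

```python
import numpy as np

def pi_weights(phi, theta, n=200):
    """pi-weights of the AR(infinity) expansion of an invertible
    ARMA(1,1) model (1 - phi*B) X_t = (1 - theta*B) eps_t:
    pi_j = (phi - theta) * theta**(j-1), j >= 1."""
    j = np.arange(1, n + 1)
    return (phi - theta) * theta ** (j - 1)

def affinity(p, q):
    """Normalized inner product of two weight sequences; an assumed,
    illustrative form of the affinity coefficient, not necessarily the
    exact extension proposed in the article."""
    return float(np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q)))

a = pi_weights(0.8, 0.3)
b = pi_weights(0.75, 0.3)  # similar process -> affinity close to 1
c = pi_weights(-0.2, 0.3)  # phi' = 2*theta - phi flips every weight's sign
print(affinity(a, b))      # close to 1
print(affinity(a, c))      # exactly -1: weights of symmetric sign
```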
Antoine Passemiers, Pietro Folco, Daniele Raimondi
et al.
Saliency Maps (SMs) have been extensively used to interpret deep learning models' decisions by highlighting the features deemed relevant by the model. They are used on highly nonlinear problems, where linear feature selection (FS) methods fail to highlight relevant explanatory variables. However, the reliability of gradient-based feature attribution methods such as SMs has mostly been assessed only qualitatively (visually), and quantitative benchmarks are currently missing, partially due to the lack of a definite ground truth on image data. Concerned about the apophenic biases introduced by the visual assessment of these methods, in this paper we propose a synthetic quantitative benchmark for Neural Network (NN) interpretation methods. For this purpose, we built synthetic datasets with nonlinearly separable classes and an increasing number of decoy (random) features, illustrating the challenge of FS in high-dimensional settings. We also compare these methods to conventional approaches such as mRMR or Random Forests. Our results show that our simple synthetic datasets are sufficient to challenge most of the benchmarked methods. TreeShap, mRMR and LassoNet are the best-performing FS methods. We also show that, when quantifying the relevance of a few nonlinearly entangled predictive features diluted in a large number of irrelevant noisy variables, neural-network-based FS and interpretation methods are still far from being reliable.
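A minimal sketch of such a benchmark protocol, using a hypothetical XOR-style dataset with two informative features buried among decoys and a Random Forest importance ranking as one of the benchmarked baselines (the paper also evaluates gradient-based saliency methods, omitted here for brevity):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n, n_decoys = 2000, 48

# Two informative features define a nonlinearly separable (XOR) class;
# the remaining features are pure decoys (random noise).
X = rng.normal(size=(n, 2 + n_decoys))
y = (np.sign(X[:, 0]) * np.sign(X[:, 1]) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# Benchmark criterion: do the top-2 ranked features recover the
# ground-truth informative ones (indices 0 and 1)?
top2 = np.argsort(rf.feature_importances_)[-2:]
print("top-2 features:", sorted(top2),
      "-> hit" if set(top2) == {0, 1} else "-> miss")
```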
The received Hilbert-style axiomatic foundations of mathematics were designed by Hilbert and his followers as a tool for meta-theoretical research. Foundations of mathematics of this type fail to satisfactorily perform the more basic and more practically oriented functions of theoretical foundations, such as the verification of mathematical constructions and proofs. Using alternative foundations of mathematics, such as the Univalent Foundations, is compatible with using the received set-theoretic foundations for meta-mathematical purposes, provided the two foundations are mutually interpretable. Changes in the foundations of mathematics do not, generally, disqualify mathematical theories based on older foundations but allow for the reconstruction of these theories on new foundations. Mathematics is one, but its foundations are many.
Salvador Bará, Carmen Bao-Varela, Miroslav Kocifaj
Contrary to a widespread intuitive belief, the night sky brightness perceived by the human eye or any other physical detector does not come (exclusively) from high in the sky. The detected brightness is built up from the scattered radiance contributed by all elementary atmospheric volumes along the line of sight, starting from the very first millimeter from the eye cornea or the entrance aperture of the measuring instrument. In artificially lit environments, nearby light sources may be responsible for a large share of the total perceived sky radiance. We present in this paper a quantitative analytical model for the sky radiance in the vicinity of outdoor light sources, free from singularities at the origin, which provides useful insights for the correct design of urban dark sky places. It is found that the artificial zenith sky brightness produced by a small ground-level source, as detected by a ground-level observer at short distances (from the typical dimension of the source up to several hundred meters), decays with the inverse of the distance to the source. This amounts to a reduction of 2.5 mag/arcsec² in sky brightness for every log10 unit increase of the distance. The effects of screening by obstacles are also discussed.
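The quoted 2.5 mag/arcsec² figure follows directly from the magnitude scale: if the artificial brightness B decays as 1/d, then

```latex
% Magnitude change per decade of distance, given B \propto 1/d:
\Delta m \;=\; -2.5\,\log_{10}\!\frac{B(d_2)}{B(d_1)}
         \;=\; -2.5\,\log_{10}\!\frac{d_1}{d_2}
         \;=\; 2.5\,\bigl(\log_{10} d_2 - \log_{10} d_1\bigr),
```

so each log10 unit of distance adds 2.5 mag/arcsec², i.e., a tenfold dimming of the artificial component.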
Magnetic Particle Imaging (MPI) is a promising noninvasive in vivo imaging modality that makes it possible to map the spatial distribution of superparamagnetic nanoparticles by exposing them to dynamic magnetic fields. In the Field-Free Line (FFL) scanner topology, the spatial encoding of the particle distribution is performed by applying magnetic fields that vanish on straight lines. The voltage induced in the receiving coils by the particles when exposed to the magnetic fields constitutes the signal from which the particle distribution is to be reconstructed. To avoid lengthy calibration, model-based reconstruction formulae have been developed for the 2D FFL scanning topology. In this work, we develop reconstruction formulae for 3D FFL. Moreover, we provide a model-based reconstruction algorithm for 3D FFL and validate it with a numerical experiment.
The optimization of experimental results has repeatedly posed major challenges for scientists and engineers. In this work, a systematic multi-layer optimization scheme is proposed in conjunction with the particle swarm optimization (PSO) algorithm to locate a global optimum of a cost function. The technique uses splines to form three-dimensional surface patches from experimental data in a multi-layer fashion and incorporates a multi-layer search using particle swarm optimization. The novel technique is illustrated and verified over two layers of experimental data to show its effectiveness.
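For reference, here is a minimal, generic particle swarm optimizer (standard inertia-weight PSO minimizing an illustrative 2-D test function; the paper's multi-layer, spline-surface search is not reproduced):

```python
import numpy as np

def pso(cost, dim=2, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Standard inertia-weight PSO (a generic sketch, not the paper's
    multi-layer scheme). Returns the best position and cost found."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))          # positions
    v = np.zeros_like(x)                                 # velocities
    pbest = x.copy()                                     # personal bests
    pbest_cost = np.apply_along_axis(cost, 1, x)
    g = pbest[pbest_cost.argmin()].copy()                # global best
    w, c1, c2 = 0.7, 1.5, 1.5                            # usual PSO constants
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(cost, 1, x)
        better = fx < pbest_cost
        pbest[better], pbest_cost[better] = x[better], fx[better]
        g = pbest[pbest_cost.argmin()].copy()
    return g, pbest_cost.min()

# Illustrative cost surface with a known global optimum at (3, 0.5).
best, val = pso(lambda p: (p[0] - 3) ** 2 + (p[1] - 0.5) ** 2)
print(best, val)
```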
Gundlapally Shiva Kumar Reddy, Nilam Venkata Koteswararao, Ragoju Ravi
et al.
This article studies the effect of vertical rotation and a magnetic field on dissolution-driven convection in a saturated porous layer with a first-order chemical reaction. The system's physical parameters depend on the Vadasz number, the Hartmann number, the Taylor number, and the Damköhler number, and we analyze their effects in depth. In addition, an artificial neural network (ANN) technique based on the Levenberg–Marquardt backpropagation algorithm is adopted to predict the distribution of the critical Rayleigh number for the linear stability analysis. The critical Rayleigh numbers obtained by the numerical study and those predicted by the ANN are compared and are in good agreement. The system becomes more stable as the Damköhler and Taylor numbers increase.
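A hedged sketch of the surrogate-modeling step: train a small feed-forward network to map governing parameters to a critical Rayleigh number. Scikit-learn's MLPRegressor is used here with a quasi-Newton solver, since Levenberg–Marquardt backpropagation (as used in the paper) is not available in scikit-learn, and the training data below are synthetic placeholders with an invented functional form, not solutions of the stability problem.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Placeholder samples: (Vadasz, Hartmann, Taylor, Damkohler) -> Ra_c.
# The functional form below is invented purely for illustration.
X = rng.uniform([1, 0, 0, 0], [100, 10, 50, 5], size=(500, 4))
Ra_c = 40 + 2.0 * X[:, 3] + 0.8 * X[:, 2] + 0.5 * X[:, 1] ** 2

model = MLPRegressor(hidden_layer_sizes=(16, 16), solver="lbfgs",
                     max_iter=5000, random_state=0).fit(X, Ra_c)
print("predicted Ra_c:", model.predict([[50, 5, 25, 2.5]])[0])
```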
This paper discusses extensively the time-dependent flow of a dusty, viscous, incompressible fluid in rotating horizontal annuli under the influence of an azimuthal pressure gradient. The momentum and continuity equations describing the flow system, together with the initial and boundary conditions, are non-dimensionalized and solved semi-analytically using the Laplace transform and the Riemann-sum approximation (RSA) method. The velocity, skin frictions, vorticity, and mass flow rates are obtained in the Laplace domain and then inverted back to the time domain with the aid of the RSA. Steady-state solutions for the velocity, skin frictions, vorticity, and mass flow rates are presented analytically to check the validity of the method at large values of time. The governing dimensionless parameters of the flow are examined with the aid of line graphs and tables for comparison. The numerical computations show that at large values of time t, the velocity, skin frictions, vorticity, and mass flow rates reach a steady state. Physically, as the angular velocity and the mass concentration of the dust particles increase, the velocity of the dust particles is reduced.
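The Riemann-sum approximation referred to above inverts a Laplace-domain solution F(s) numerically via f(t) ≈ (e^{εt}/t)[½F(ε) + Re Σ_{k=1}^{M} F(ε + ikπ/t)(−1)^k] with εt ≈ 4.7. The sketch below validates this formula on a transform with a known inverse; the paper's flow equations themselves are not reproduced.

```python
import numpy as np

def rsa_invert(F, t, M=500, eps_t=4.7):
    """Riemann-sum approximation of the inverse Laplace transform:
    f(t) ~ (exp(eps*t)/t) * [F(eps)/2 + Re(sum_k F(eps + i*k*pi/t)*(-1)^k)],
    with eps*t ~ 4.7, a commonly used value."""
    eps = eps_t / t
    k = np.arange(1, M + 1)
    series = np.sum(F(eps + 1j * k * np.pi / t) * (-1.0) ** k)
    return np.exp(eps * t) / t * (0.5 * F(eps) + series.real)

# Known pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t).
F = lambda s: 1.0 / (s + 1.0)
for t in (0.5, 1.0, 2.0):
    print(t, rsa_invert(F, t), np.exp(-t))
```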
In this article, we describe a special type of mathematical problem that may help develop teaching methods that motivate students to explore patterns, formulate conjectures, and find solutions without merely memorizing formulas and procedures. These are problems whose solutions either do not make sense in a real-life context or lead to a contradiction during the solution process. In this article, we call these problems incorrect problems. We show several examples that can be applied in undergraduate mathematics courses and provide possible ways these examples can be used to motivate critical mathematical thinking. We also discuss the results of exposing a group of 168 undergraduate engineering students to an incorrect problem in a Differential Equations course. This experience provided us with important insight into how well prepared our students are for "out of the box" thinking and into the importance of prior mathematical skills for mastering the further mathematical analysis needed to solve such a problem.
Guido Thömmes, Martin Oliver Sailer, Nicolas Bonnet
et al.
The pharmaceutical industry has experienced increasing costs and sustained high attrition rates in drug development in recent years. One proposal that addresses this challenge from a statistical perspective is the use of quantitative decision-making (QDM) methods to support a data-driven, objective appraisal of the evidence that forms the basis of decisions at different development levels. Growing awareness among statistical leaders in the industry has led to the creation of the European EFSPI/PSI special interest group (ESIG) on quantitative decision making to share experiences, collect best practices, and promote the use of QDM. In this paper, we introduce the key components of QDM and present examples of QDM methods at the trial, program, and portfolio levels. The ESIG created a questionnaire to learn how, and to what extent, QDM methods are currently used in the different development phases. We present the main questionnaire findings and show where QDM is already used today, but also where areas for future improvement can be identified. In particular, statisticians should increase their visibility, involvement, and leadership in cross-functional decision-making.
Hengameh Mohammadi Nejad Rashti, Alireza Amirteimoori, Sohrab Kordrostami
et al.
Objective: In resource allocation and target-setting problems, the managerial perspective of the central planner plays a fundamental role in managerial decision-making, especially when undesirable outputs such as greenhouse gas emissions are involved. Under these conditions, units must cooperate with one another to achieve the central planner's goals. Since managerial effort and technological innovation are ignored in most resource allocation models based on data envelopment analysis (DEA), this paper presents a DEA-based approach to resource allocation and target setting in which the managerial disposability assumption reflects the central planner's prospect of managerial success and the outlook for technological innovation in the process of allocating resources and setting targets. Research methodology: The use of the managerial disposability assumption in this paper provides a way to achieve correct and acceptable resource allocation and target setting together with a simultaneous improvement in unit performance. To analyze the proposed method, data on 29 well-known international airlines, representative of the global aviation industry, were selected and studied. Findings: The findings of this research show that, in this model, the decision-making units use managerial disposability to adjust undesirable outputs according to the prospect of cooperative strategies among units aimed at improving their environmental performance. In this approach, in addition to increasing the inputs and keeping the desirable outputs constant, the undesirable outputs are allowed to decrease. In fact, the model guarantees that the adjusted decision-making units enjoy improved efficiency in the following period, after resource allocation and target setting, and an improvement in overall efficiency is also observed in the results obtained by this approach. Originality/scientific added value: Using the managerial disposability assumption, this paper presents a new DEA-based approach to resource allocation and target setting that takes the effect of managerial effort and technological innovation into account.
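For readers unfamiliar with the underlying machinery, here is a minimal sketch of a standard input-oriented CCR DEA efficiency model solved with scipy. This is the textbook model only; the paper's centralized allocation model with managerial disposability and undesirable outputs is considerably richer, and the airline data below are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o.
    X: (n_units, n_inputs), Y: (n_units, n_outputs).
    min theta  s.t.  sum_j lam_j * x_j <= theta * x_o,
                     sum_j lam_j * y_j >= y_o,  lam >= 0."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                    # minimize theta
    A_in = np.c_[-X[o][:, None], X.T]              # inputs:  X^T lam <= theta*x_o
    A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T] # outputs: Y^T lam >= y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[1]), -Y[o]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# Hypothetical airline data: inputs (fleet, fuel), output (passenger-km).
X = np.array([[5.0, 8.0], [4.0, 6.0], [6.0, 10.0]])
Y = np.array([[100.0], [90.0], [110.0]])
for o in range(3):
    print(f"unit {o}: efficiency = {ccr_efficiency(X, Y, o):.3f}")
```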
Protection schemes are usually implemented in the planning of transmission line operations. These schemes are expected to protect not only the network of transmission lines but also the entire power system network during fault conditions. However, it is often a challenge for these schemes to differentiate accurately between various fault locations. This study analyses the deficiencies identified in existing protection schemes and investigates a different method that aims to overcome these shortcomings. The proposed scheme performs a wavelet transform on the fault-generated signal, decomposing it into frequency components. These components are then used as input data for a multilayer perceptron neural network with backpropagation that classifies the different fault locations in the system. The study uses the transient signal generated during fault conditions to identify faults. The scientific research paradigm was adopted for the study, together with the deductive research approach, as the study requires data collection via simulation using the Simscape Electrical sub-program of Simulink within Matrix Laboratory (MATLAB). The outcome of the study shows that the simulation correctly classifies 70.59% of the faults when tested. This implies that the majority of faults can be detected and accurately isolated using boundary protection of transmission lines with the help of wavelet transforms and a neural network. The outcome also shows that more accurate fault identification and classification are achievable using a neural network than with the conventional system currently in use.
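A hedged sketch of the pipeline, using PyWavelets and scikit-learn in place of the MATLAB/Simulink toolchain (the fault signals below are synthetic placeholders, not Simscape simulations): wavelet-decompose each transient, use per-band energies as features, and train an MLP classifier on the fault locations.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def fake_fault_signal(location, n=512):
    """Synthetic stand-in for a Simscape fault transient: a damped burst
    whose dominant frequency varies (hypothetically) with fault location."""
    t = np.arange(n) / n
    f = 20 + 30 * location                          # location-dependent tone
    return np.exp(-6 * t) * np.sin(2 * np.pi * f * t) + 0.1 * rng.normal(size=n)

def band_energies(sig, wavelet="db4", level=4):
    """Feature vector: energy of each wavelet decomposition band."""
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

X, y = [], []
for loc in range(3):                                # three fault locations
    for _ in range(100):
        X.append(band_energies(fake_fault_signal(loc)))
        y.append(loc)

Xtr, Xte, ytr, yte = train_test_split(np.array(X), np.array(y), random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                    random_state=0).fit(Xtr, ytr)
print(f"test accuracy: {clf.score(Xte, yte):.2%}")
```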