B. Francis
Results for "Mathematics"
Showing 20 of ~3,519,587 results · from CrossRef, DOAJ, Semantic Scholar, arXiv
Jianhong Shen, T. Chan
P. Dirac
The steady progress of physics requires for its theoretical formulation a mathematics that gets continually more advanced. This is only natural and to be expected. What, however, was not expected by the scientific workers of the last century was the particular form that the line of advancement of the mathematics would take, namely, it was expected that the mathematics would get more and more complicated, but would rest on a permanent basis of axioms and definitions, while actually the modern physical developments have required a mathematics that continually shifts its foundations and gets more abstract. Non-euclidean geometry and non-commutative algebra, which were at one time considered to be purely fictions of the mind and pastimes for logical thinkers, have now been found to be very necessary for the description of general facts of the physical world. It seems likely that this process of increasing abstraction will continue in the future and that advance in physics is to be associated with a continual modification and generalisation of the axioms at the base of the mathematics rather than with a logical development of any one mathematical scheme on a fixed foundation. There are at present fundamental problems in theoretical physics awaiting solution, e.g., the relativistic formulation of quantum mechanics and the nature of atomic nuclei (to be followed by more difficult ones such as the problem of life), the solution of which problems will presumably require a more drastic revision of our fundamental concepts than any that have gone before. Quite likely these changes will be so great that it will be beyond the power of human intelligence to get the necessary new ideas by direct attempts to formulate the experimental data in mathematical terms. The theoretical worker in the future will therefore have to proceed in a more indirect way.
The most powerful method of advance that can be suggested at present is to employ all the resources of pure mathematics in attempts to perfect and generalise the mathematical formalism that forms the existing basis of theoretical physics, and after each success in this direction, to try to interpret the new mathematical features in terms of physical entities (by a process like Eddington’s Principle of Identification).
L. Edelstein-Keshet
Richard Lesh, H. Doerr, G. Carmona et al.
C. Hartshorne, P. Weiss
Swathi Muthyala Ramesh, Kristen M. Donnell
Frequency selective surfaces (FSSs) are arrays of conductive elements or apertures that exhibit frequency-dependent reflection and transmission properties. Their electromagnetic response is influenced by geometry and environmental conditions, making them attractive for wireless strain-sensing applications. However, temperature variations can produce frequency shifts similar to those caused by strain, reducing measurement accuracy. This work investigates the effects of intrinsic temperature compensation on two common FSS unit cell geometries, loop and patch, through comprehensive simulation analysis. The results show that loop-based cells offer superior thermal stability, while patch-based cells provide greater strain sensitivity, illustrating the tradeoff between thermal robustness and mechanical responsiveness. A patch-type FSS strain sensor was designed, fabricated, and characterized under varying temperature and strain. The sensor achieves a strain sensitivity of ~150 MHz per 1% strain ($\varepsilon_l$), while temperature-induced drift is limited to ~12 MHz over a 200°C range, confirming the effectiveness of the intrinsic compensation strategy. The results provide valuable insights for optimizing FSS-based sensor design in structural health monitoring applications and balancing thermal stability with mechanical sensitivity to ensure reliable performance in thermally dynamic environments.
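As a quick sanity check on the figures reported in this abstract (a back-of-the-envelope ratio, not a calculation from the paper itself), the worst-case thermal drift can be expressed as an equivalent strain error:

```python
# Strain-equivalent temperature error for the reported patch-type FSS sensor.
# Both numbers come from the abstract above; the conversion is a simple ratio.
strain_sensitivity_mhz_per_pct = 150.0   # ~150 MHz shift per 1% strain
thermal_drift_mhz = 12.0                 # ~12 MHz drift over a 200 degC range

# Worst case: the full thermal drift is misread as a strain-induced shift.
equivalent_strain_error_pct = thermal_drift_mhz / strain_sensitivity_mhz_per_pct
print(f"{equivalent_strain_error_pct:.2f}% strain")  # 0.08% strain
```

So even if the entire 200°C drift were mistaken for strain, the reading error would stay under a tenth of a percent strain.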
Gheorghe Gorceag
To ensure effective management of medical devices, it is imperative that devices be safe and harmless and that their management be evidence-based. To help enhance the safety of medical devices, a new mechanism for their periodic compliance assessment has been developed. The mechanism involves the assessment of general safety, electrical safety, and performance parameters in line with international best practice. At the same time, effective management of medical devices requires data and information about the devices and their lifecycle events, which can be obtained through a medical device management information system. Establishing and implementing efficient management of medical devices involves strengthening management capacities so that they can respond to the current requirements of the health system and ensure the functionality of medical devices and their safe and efficient use. Accordingly, implementing efficient management of medical devices is fundamental to providing high-quality, safe, and efficient medical devices, which contributes to improving the quality of medical services.
Fei Gao, Julia Harz, Chandan Hati et al.
Abstract A large primordial lepton asymmetry can lead to successful baryogenesis by preventing the restoration of electroweak symmetry at high temperatures, thereby suppressing the sphaleron rate. This asymmetry can also lead to a first-order cosmic QCD transition, accompanied by detectable gravitational wave (GW) signals. By employing next-to-leading order dimensional reduction we determine that the necessary lepton asymmetry is approximately one order of magnitude smaller than previously estimated. Incorporating an updated QCD equation of state that harmonizes lattice and functional QCD outcomes, we pinpoint the range of lepton flavor asymmetries capable of inducing a first-order cosmic QCD transition. To maintain consistency with observational constraints from the Cosmic Microwave Background and Big Bang Nucleosynthesis, achieving the correct baryon asymmetry requires entropy dilution by approximately a factor of ten. However, the first-order QCD transition itself can occur independently of entropy dilution. We propose that the sphaleron freeze-in mechanism can be investigated through forthcoming GW experiments such as μAres.
Yang Cai, Yunli Hao, Yongfang Qi
The niche situation can reflect the advantages and disadvantages of biological individuals in the ecosystem environment as well as the overall operational status of the ecosystem. However, higher-order niche systems generally exhibit complex nonlinearities and parameter uncertainties, making it difficult for traditional Type-1 fuzzy control to accurately handle their inherent fuzziness and environmental disturbances in complex environments. To address this, the paper introduces a backstepping control method based on Type-2 T-S fuzzy control, incorporating the niche situation function as the consequent of the T-S backstepping fuzzy controller. The stability analysis of the system is completed by constructing a Lyapunov function, and the adaptive law for the parameters of the niche situation function is derived. This design reflects the tendency of biological individuals to always develop in a direction beneficial to themselves, highlighting the bio-inspired intelligent characteristics of the proposed method. Case simulations show that the Type-2 backstepping T-S fuzzy control has significantly superior comprehensive performance in dealing with the complexity and uncertainty of high-order niche situation systems compared with traditional Type-1 control and Type-2 T-S adaptive fuzzy control. These results not only verify the adaptive and self-development capabilities of biological individuals, as well as their efficiency in environmental utilization, but also give this control method a solid practical foundation.
Tomer Raz, Michael Shalyt, Elyasheev Leibtag et al.
The constant $\pi$ has fascinated scholars throughout the centuries, inspiring numerous formulas for its evaluation, such as infinite sums and continued fractions. Despite their individual significance, many of the underlying connections among formulas remain unknown, missing unifying theories that could unveil deeper understanding. The absence of a unifying theory reflects a broader challenge across math and science: knowledge is typically accumulated through isolated discoveries, while deeper connections often remain hidden. In this work, we present an automated framework for the unification of mathematical formulas. Our system combines Large Language Models (LLMs) for systematic formula harvesting, an LLM-code feedback loop for validation, and a novel symbolic algorithm for clustering and eventual unification. We demonstrate this methodology on the hallmark case of $\pi$, an ideal testing ground for symbolic unification. Applying this approach to 455,050 arXiv papers, we validate 385 distinct formulas for $\pi$ and prove relations between 360 (94%) of them, of which 166 (43%) can be derived from a single mathematical object, linking canonical formulas by Euler, Gauss, Brouncker, and newer ones from algorithmic discoveries by the Ramanujan Machine. Our method generalizes to other constants, including $e$, $\zeta(3)$, and Catalan's constant, demonstrating the potential of AI-assisted mathematics to uncover hidden structures and unify knowledge across domains.
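Two of the canonical formulas this abstract alludes to, with a classically known link between them (not taken from the paper): Euler's continued-fraction transformation turns the alternating Leibniz series into Brouncker's continued fraction, so the two are the same mathematical object in the sense described above.

```latex
% Leibniz series and Brouncker's continued fraction for pi; Euler's
% continued-fraction transformation identifies one with the other.
\[
\frac{\pi}{4} \;=\; \sum_{k=0}^{\infty} \frac{(-1)^k}{2k+1},
\qquad
\frac{4}{\pi} \;=\; 1 + \cfrac{1^2}{2 + \cfrac{3^2}{2 + \cfrac{5^2}{2 + \cdots}}}.
\]
```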
Haihua Qin, Jiafang Pan, Jian Li et al.
Intelligent fault diagnosis faces the challenges of varying working conditions and sample class imbalance, but very few approaches address both simultaneously. This article proposes an improved network model, named ICDAN-F, which deals with fault diagnosis scenarios involving class imbalance and working-condition variations in an integrated way. First, Focal Loss, originally designed for target detection, is introduced to alleviate the sample class imbalance problem and emphasize the key features. Second, the domain discriminator is improved by replacing the default ReLU activation function with Tanh, so that useful negative-value information can help extract transferable fault features. Extensive transfer experiments dealing with varying working conditions are conducted on two bearing fault datasets affected by class imbalance. The results show that ICDAN-F outperforms several other widely used domain adaptation methods, achieving fault diagnosis accuracies of 99.76% and 96.76% in Case 1 and Case 2, respectively, demonstrating that ICDAN-F can handle both challenges in a cohesive manner.
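The abstract cites Focal Loss (from object detection) as its class-imbalance remedy. A minimal NumPy sketch of the standard binary form (Lin et al.), not the paper's exact configuration or hyperparameters:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: down-weights well-classified examples by the
    factor (1 - p_t)**gamma so training focuses on hard, minority-class
    samples. p = predicted probability of the positive class, y = 0/1 label."""
    p = np.clip(p, 1e-7, 1 - 1e-7)                   # avoid log(0)
    pos = -alpha * (1 - p) ** gamma * np.log(p)       # applied where y == 1
    neg = -(1 - alpha) * p ** gamma * np.log(1 - p)   # applied where y == 0
    return np.where(y == 1, pos, neg)

# An easy (confident, correct) positive is penalized far less than a hard one:
easy = focal_loss(np.array([0.95]), np.array([1]))
hard = focal_loss(np.array([0.30]), np.array([1]))
```

With `gamma=0` and `alpha=1` the expression collapses to plain cross-entropy, which is a convenient way to check an implementation.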
Noor Hafizah Khairul Anuar, Mohd Amri Md Yunus, Muhammad Ariff Baharudin et al.
Environmental factors like temperature, solar irradiance, and rain may influence the health and productivity of stingless bees. This paper investigates the best approaches applied in meliponiculture to predict beehive health and products from environmental variables and bee activity data. The temperature, humidity, rain, beehive weight, and bee activity traffic data used in this project were monitored in real time and saved on the Google Spreadsheet platform. The dataset, extracted from 6 January 2024 to 5 February 2024 at 15-minute intervals and comprising 2577 data points, was analyzed using various deep learning approaches to find the best RMSE performance. A single-layer LSTM model with 50 units produced the best RMSE of 0.039, indicating that beehive weight was predicted accurately. This predictive capability can help farmers determine the optimum harvesting time based on weight forecasts, ensuring maximum yield and quality. Additionally, by providing early warnings of unwanted conditions such as swarming or potential attacks, this method significantly enhances the ability of beekeepers to take proactive measures to protect their colonies, safeguarding both bee populations and the livelihoods of farmers.
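The forecasting setup described above (15-minute readings, an LSTM predicting the next beehive weight) rests on framing the time series as supervised windows. A minimal NumPy sketch of that windowing step; the sample values and lookback length are illustrative, not from the paper:

```python
import numpy as np

def make_windows(series, lookback):
    """Frame a univariate series as (samples, lookback) inputs X with
    next-step targets y, the supervised form an LSTM consumes
    (reshape X to (samples, lookback, 1) before feeding a Keras LSTM)."""
    X = np.array([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X, y

# e.g. hive weight sampled every 15 minutes; lookback of 4 = one hour of history
weights = np.array([10.0, 10.1, 10.1, 10.2, 10.4, 10.3])
X, y = make_windows(weights, lookback=4)
print(X.shape, y.shape)  # (2, 4) (2,)
```

Each row of `X` is one hour of past weights and the matching entry of `y` is the reading 15 minutes later, which is what the RMSE of 0.039 would be measured against.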
Filip D. Jevtić, Jovana Kostić, Katarina Maksimović
Mathematical research is often motivated by the desire to reach a beautiful result or to prove it in an elegant way. A mathematician's work is thus strongly influenced by aesthetic judgments. However, the criteria these judgments are based on remain unclear. In this article, we focus on the concept of mathematical beauty as one of the central aesthetic concepts in mathematics. We argue that beauty in mathematics reveals connections between apparently unrelated problems or areas and allows a better and wider insight into mathematical reality as a whole. We also explain the close relationship between beauty and other important notions such as depth, elegance, simplicity, and fruitfulness.
Forrest Laine
Mathematical Program Networks (MPNs) are introduced in this work. An MPN is a collection of interdependent Mathematical Programs (MPs) which are to be solved simultaneously while respecting the connectivity pattern of the network defining their relationships. The network structure of an MPN determines which decision variables each constituent mathematical program can influence, either directly or indirectly via solution-graph constraints representing optimal decisions for their descendants. Many existing problems can be formulated as MPNs, including Nash Equilibrium problems, multilevel optimization problems, and Equilibrium Programs with Equilibrium Constraints (EPECs), among others. The equilibrium points of an MPN correspond to the equilibrium points or solutions of these other problems. By thinking of a collection of decision problems as an MPN, a common definition of equilibrium can be used regardless of the relationships between problems, and the same algorithms can be used to compute solutions. The presented framework facilitates modeling flexibility and analysis of various equilibrium points in problems involving multiple mathematical programs.
Daizhan Cheng
A new mathematical structure, called cross-dimensional mathematics (CDM), is proposed. The CDM considered in this paper consists of three parts: hyper algebra, hyper geometry, and hyper Lie group/Lie algebra. Hyper algebra proposes new algebraic structures such as the hyper group, hyper ring, and hyper module over matrices and vectors with mixed dimensions (MVMDs). These have sets of classical groups, rings, and modules as their components, with cross-dimensional connections among the components, and their basic properties are investigated. Hyper geometry starts from mixed-dimensional Euclidean space and hyper vector space; the hyper topological vector space, hyper inner product space, and hyper manifold are then constructed, sharing a joined cross-dimensional geometric structure. Hyper metric space, topological hyper group, and hyper Lie algebra are built step by step, and finally the corresponding hyper Lie group is introduced. All these concepts are built over MVMDs; to this end, in addition to the existing semi-tensor products (STPs) and semi-tensor additions (STAs), a most general STP and STA are introduced. Some existing structures/results about STPs/STAs are also revisited and integrated into this CDM.
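For readers unfamiliar with the semi-tensor product the abstract builds on, a minimal NumPy sketch of the standard left STP of matrices (the classical definition, not this paper's generalization): for $A$ of size $m \times n$ and $B$ of size $p \times q$, with $t = \mathrm{lcm}(n, p)$, one takes $A \ltimes B = (A \otimes I_{t/n})(B \otimes I_{t/p})$, which is defined for any pair of dimensions and reduces to the ordinary product when $n = p$.

```python
import numpy as np
from math import lcm

def stp(A, B):
    """Left semi-tensor product A |x| B: pad each factor with a Kronecker
    identity so the inner dimensions match at t = lcm(n, p), then multiply.
    Reduces to the ordinary matrix product when A.shape[1] == B.shape[0]."""
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

# Dimension-mismatched factors still compose: (1x2) |x| (4x1) -> (2x1)
C = stp(np.ones((1, 2)), np.ones((4, 1)))
```

This is the cross-dimensional multiplication that the hyper algebraic structures in the abstract are built over.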
Dirk De Bock
Courtney R. Butler
Matilde Marcolli, Noam Chomsky, Robert Berwick
The syntactic Merge operation of the Minimalist Program in linguistics can be described mathematically in terms of Hopf algebras, with a formalism similar to the one arising in the physics of renormalization. This mathematical formulation of Merge has good descriptive power, as phenomena empirically observed in linguistics can be justified from simple mathematical arguments. It also provides a possible mathematical model for externalization and for the role of syntactic parameters.
P. Benacerraf
Page 23 of 175,980