The concept of an epigenetic landscape describing potential cellular fates arising from pluripotent cells, first advanced by Conrad Waddington, has evolved in light of experiments showing nondeterministic outcomes of regulatory processes and mathematical methods for quantifying stochasticity. In this Review, we discuss modern approaches to epigenetic and gene regulation landscapes and the associated ideas of entropy and attractor states, illustrating how their definitions are both more precise and relevant to understanding cancer etiology and the plasticity of cancerous states. We address the interplay between different types of regulatory landscapes and how their changes underlie cancer progression. We also consider the roles of cellular aging and intrinsic and extrinsic stimuli in modulating cellular states and how landscape alterations can be quantitatively mapped onto phenotypic outcomes and thereby used in therapy development.

Description: Plasticity of cancer cell phenotypes
During differentiation, cells adopt phenotypic states of progressive specificity. Cancer cells violate this property, instead adopting increased plasticity of structure and function. Epigenetic change has been considered a developmental landscape that can channel specific differentiation events and define and constrain distinct phenotypic and gene expression states. In a Review, Feinberg and Levchenko discuss how cancer epigenetic landscapes can be defined quantitatively, borrowing from theory used in physical sciences to define potential energy and its relationship to physical or chemical states. This strategy has yielded new insights whereby stochastic changes in the epigenetic landscape of cancer cells drive oncogenic phenotypes. Such analyses can also reveal pathogenic signaling and therapeutic targets. —GKA

A review discusses how epigenetic stochasticity can connect mutations and environmental perturbations to cancer progression and treatment.

BACKGROUND
During differentiation, living cells within complex organisms adopt phenotypic states of progressive specificity. Cancerous cells and tissues violate this property, adopting increased plasticity of cell states, tissue structure, and function during their progression. The information about the repertoire of normal differentiation outcomes is genetically encoded, but the information about the particular realization of this potential and cell regulation in response to the environment is encoded epigenetically in DNA methylation and biochemical modification of chromatin. Dating back to Conrad Waddington's prescient work, epigenetic change has been viewed schematically as a developmental landscape that can channel specific differentiation events and define and constrain distinct phenotypic and gene expression states. More recently, cancer onset and progression have been viewed as a reversal or deformation of this landscape. In the physical sciences, potential energy landscapes and their relationships to the probability distribution of physical or chemical states have been developed and refined for decades, but they have only recently been applied to more quantitatively realize Waddington's classical landscape idea. Such approaches are particularly appealing in describing the cancer epigenetic landscape given that the plasticity of cell states realized on such a landscape lies at the functional core of the disease.
ADVANCES
Recent developments in experimental technologies, including single cell–resolution analysis of mRNA and protein expression as well as molecular assays of epigenetic modifications of DNA and histones, have enriched our understanding of the diversity of phenotypic states defined by genomic information and epigenetic control. In this work, we expand on the emerging view that there is considerable variability in the expression of biological molecules even within presumably isogenic cells in normal homeostatic tissues or in well-defined cell lines in cell culture. This revelation suggests that biological processes may be essentially stochastic and that biological variability on the cellular level can be indicative of—or even drive—important aspects of biological function. This analysis has also enabled assessment of the probability distributions of different cellular states, or quasipotential energy, and the use of these to determine the associated entropy, a measure of informational uncertainty. These measures, which we define in detail, enable a precise and quantitative definition of the underlying epigenetic landscapes, coordinately reflected by gene expression landscapes. Cancer-related genetic and epigenetic alterations can increase the entropy of the landscape as a whole and result in higher variability and occupancy of otherwise cryptic attractors. An increase in entropy and thus heterogeneity of the responses—rather than alteration of the average response—is emerging as a key and often overlooked feature of the landscape deformation in cancer pathogenesis. Changes in entropy can also accompany cell differentiation and aging in ways that further inform cancer etiology. They also permit distinguishing phenotypic plasticity from phenotypic heterogeneity. Using recent observations and landscape conceptualization, we outline several scenarios that can occur during precancerous and cancerous progression. We also discuss the molecular mechanisms enabling these scenarios, relating them to specific landscape transformations. We suggest how the relationship between the epigenetic landscape alterations and corresponding phenotypic changes can be quantitatively assessed and used to further understand the information transfer in signaling pathways and to develop new therapeutic interventions. This approach can also incorporate recently introduced ideas of the archetypical states of cells within normal and cancerous tissues.

OUTLOOK
New integrated theoretical and experimental methods in quantitative analyses of the cancer epigenetic landscape provide the tools to understand the connections between genetic and environmental drivers of cancer evolution and the relationships between epigenetic regulatory networks that mediate the landscape. Continued advances in single-cell measurements, including assessment of DNA methylation, genomic sequencing, and chromatin analysis, will allow further understanding of the dynamics of landscapes of progressively increasing complexity, accounting for tumor evolution, progression to invasive and metastatic spread, and associated alterations in anatomical organization and structure. Moreover, a greater understanding of biological stochasticity, defined mathematically as epigenetic and gene expression entropy, can uncover the cellular actors and mechanisms by which cancer plasticity enables escape from natural defenses and therapeutic interventions.

Figure: Epigenetic landscapes and phenotypic plasticity in cancer.
Regulatory networks can define the number and probabilities of stable cellular states adopted by a cell population, representing attractors in the epigenetic landscape. Diverse inputs can promote transitions (and corresponding phenotypic plasticity) between cellular states within landscapes corresponding to the normal tissue (fewer attractors) and cancerous tumors (emergence of new attractors), as defined by parameters P1 and P2 that correspond to effective concentrations of landscape-defining molecules.
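To make the entropy and quasipotential notions above concrete, the following minimal Python sketch computes the Shannon entropy S = -Σ p ln p and the quasipotential U = -ln p from occupancy probabilities of discrete cell states. The probabilities are purely hypothetical; it only illustrates how the emergence of new, more evenly occupied attractors raises landscape entropy:

```python
import numpy as np

# Hypothetical occupancy probabilities of discrete cell states (attractors),
# e.g. as might be estimated from clustering of single-cell expression data
p_normal = np.array([0.85, 0.10, 0.05])        # few dominant attractors
p_cancer = np.array([0.40, 0.25, 0.20, 0.15])  # new, more evenly occupied attractors

def entropy(p):
    """Shannon entropy S = -sum p ln p (nats); higher = more heterogeneity."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def quasipotential(p):
    """Quasipotential U = -ln p; deeper wells correspond to more probable states."""
    return -np.log(p)

print(entropy(p_normal), entropy(p_cancer))  # entropy rises in the cancer landscape
print(quasipotential(p_normal))              # well depths of the normal landscape
```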
Accurate sales prediction is crucial for inventory and marketing in e-commerce. Cross-border sales involve complex patterns that traditional models cannot capture. To address this, we propose an improved Bidirectional Long Short-Term Memory (BiLSTM) model, enhanced with an attention mechanism and Bayesian hyperparameter optimization. The attention mechanism focuses on key temporal features, improving trend identification. The BiLSTM captures both forward and backward dependencies, offering deeper insights into sales patterns. Bayesian optimization fine-tunes hyperparameters such as learning rate, hidden-layer size, and dropout rate to achieve optimal performance. Together, these innovations improve forecasting accuracy, making the model more adaptable and efficient for cross-border e-commerce sales. Experimental results show that the model achieves a Root Mean Square Error (RMSE) of 13.2, Mean Absolute Error (MAE) of 10.2, Mean Absolute Percentage Error (MAPE) of 8.7 percent, and a Coefficient of Determination (R²) of 0.92. It outperforms baseline models, including BiLSTM (RMSE 16.5, MAPE 10.9 percent), BiLSTM with Attention (RMSE 15.2, MAPE 10.1 percent), Temporal Convolutional Network (RMSE 15.0, MAPE 9.8 percent), and Transformer for Time Series (RMSE 14.8, MAPE 9.5 percent). These results highlight the model's superior performance in forecasting cross-border e-commerce sales, making it a valuable tool for inventory management and demand planning.
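For reference, the four evaluation metrics reported above have standard definitions. A minimal Python sketch (the sales figures are hypothetical, not the paper's data) computing RMSE, MAE, MAPE, and R² as they are typically used to score such forecasts:

```python
import numpy as np

# Hypothetical true and predicted daily sales
y_true = np.array([120.0, 135.0, 128.0, 150.0, 142.0])
y_pred = np.array([118.0, 140.0, 125.0, 147.0, 145.0])

rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))            # root mean square error
mae = np.mean(np.abs(y_true - y_pred))                     # mean absolute error
mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100   # mean absolute % error

ss_res = np.sum((y_true - y_pred) ** 2)                    # residual sum of squares
ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)           # total sum of squares
r2 = 1.0 - ss_res / ss_tot                                 # coefficient of determination

print(f"RMSE={rmse:.2f} MAE={mae:.2f} MAPE={mape:.1f}% R2={r2:.3f}")
```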
In particle physics experiments, fragment separators utilize dipole magnets to distinguish and isolate specific isotopes based on their mass-to-charge ratio as particles traverse the dipole's magnetic field. Accurate fragment selection relies on precise knowledge of the magnetic field generated by the dipole magnets, necessitating dedicated measurement instrumentation to characterize the field in the constructed magnets. This study presents measurements of the two first-of-series dipole magnets (Type II, with an 11-degree bending angle, and Type III, with a 9.5-degree bending angle) for the Superconducting Fragment Separator being built in Darmstadt, Germany. Stringent field quality requirements necessitated a novel measurement system, the so-called translating fluxmeter: a PCB coil array installed on a moving trolley that scans the field while passing through the magnet aperture. While previous publications have discussed the design of the moving fluxmeter and the characterization of its components, this article presents the results of a measurement campaign conducted using the new system. The testing campaign was supplemented with conventional methods, including integral field measurements using a single stretched-wire system and three-dimensional field mapping with a Hall probe. We provide an overview of the working principle of the translating fluxmeter system and validate its performance by comparing the results with those obtained using conventional magnetic measurement methods.
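The working principle of a translating fluxmeter reduces to Faraday's law: integrating the voltage induced in the moving coil recovers the flux change, and hence the field profile, along the trajectory. A minimal sketch on a synthetic Gaussian field profile; all coil and trolley parameters are hypothetical, not the actual system's:

```python
import numpy as np

# Synthetic demonstration of the translating-fluxmeter principle (values hypothetical)
N_TURNS, AREA = 100, 1e-4            # coil turns and effective area (m^2)
v_trolley, dt = 0.05, 1e-3           # trolley speed (m/s), sampling interval (s)

t = np.arange(0.0, 40.0, dt)         # time (s)
z = v_trolley * t                    # coil position along the aperture (m)
B_true = 1.6 * np.exp(-0.5 * ((z - 1.0) / 0.3) ** 2)  # dipole field profile (T)

# Faraday's law: induced voltage v = -N * A * dB/dt as the coil moves through B(z)
emf = -N_TURNS * AREA * np.gradient(B_true, dt)

# Integrating the voltage recovers the flux change, hence the field profile
B_rec = -np.cumsum(emf) * dt / (N_TURNS * AREA) + B_true[0]
print(np.max(np.abs(B_rec - B_true)))  # small numerical integration error
```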
Abstract Many different network models are in use for representing relational data, including homogeneous networks, heterogeneous networks, and multilayer networks. However, none of these models is general enough to represent both simple and complex relational data. The present paper provides a unified network model, namely the Hybrid Layered Network (HLN). We prove that the sets of all homogeneous, heterogeneous, and multilayer networks are subsets of the set of all HLNs, demonstrating the model's generality. The proposed HLN is more efficient in encoding different types of nodes and edges than representing the same information through heterogeneous or multilayer networks. We find experimentally that the HLN model, when used with GNNs, improves tasks such as link prediction. In addition, we present a novel parameterized algorithm (with complexity analysis) for generating synthetic HLNs. The networks generated by our proposed algorithm are more consistent in modelling the layer-wise degree distribution of a real-world Twitter network (represented as an HLN) than those generated by existing models. Moreover, we show that our algorithm is capable of generating various multilayer and homogeneous networks. Further, we define several structural measures for HLNs, namely multilayer neighborhood, degree centrality, closeness centrality, and betweenness centrality, and establish the equivalence of these measures with those of homogeneous, heterogeneous, and multilayer networks.
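As an illustration of layer-aware structural measures of the kind mentioned above, here is a minimal sketch of degree centrality on a toy network with typed nodes and edges. The edge-list representation and the Twitter-style node types are our own simplifying assumptions, not the paper's formal HLN definition:

```python
from collections import defaultdict

# Toy typed network: node-type prefixes stand in for layers (users, tweets, hashtags)
edges = [
    ("user:a", "user:b", "follows"),    # intra-layer edge (user layer)
    ("user:a", "tweet:1", "posts"),     # inter-layer edge
    ("user:b", "tweet:1", "likes"),     # inter-layer edge
    ("tweet:1", "tag:x", "mentions"),   # inter-layer edge
]

def degree_centrality(edges):
    """Degree centrality over the whole network, counting typed edges uniformly."""
    deg = defaultdict(int)
    for u, v, _ in edges:
        deg[u] += 1
        deg[v] += 1
    n = len(deg)                         # number of distinct nodes
    return {node: d / (n - 1) for node, d in deg.items()}

print(degree_centrality(edges))
```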
In this paper, we investigate the application of the combination of the Ramadan group transform and the accelerated Adomian polynomial method for solving integro-differential equations. Integro-differential equations arise in various fields such as physics, engineering, and biology, often modeling complex phenomena. The Ramadan group transform, known for its transformation properties and its ability to simplify computational complexities, is coupled with the accelerated Adomian polynomial method, which is an effective series expansion technique. This combination enhances the convergence and efficiency of solving nonlinear integro-differential equations that are difficult to handle using traditional methods. The paper demonstrates the utility of this hybrid approach through several test cases, comparing it with existing methods in terms of accuracy, computational efficiency, and convergence rate.
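For readers unfamiliar with Adomian polynomials, the sketch below computes the standard (non-accelerated) polynomials from their defining formula A_k = (1/k!) d^k/dλ^k N(Σ_i u_i λ^i) evaluated at λ = 0; the accelerated variant used in the paper modifies this construction, so this is orientation only:

```python
import sympy as sp

lam = sp.Symbol('lam')
n_terms = 4
u = sp.symbols(f'u0:{n_terms}')     # solution components u0, u1, u2, u3

def adomian_polynomials(N, n):
    """A_k = (1/k!) d^k/dlam^k N(sum_i u_i lam^i) evaluated at lam = 0."""
    series = sum(ui * lam**i for i, ui in enumerate(u[:n]))
    return [sp.expand(sp.diff(N(series), lam, k).subs(lam, 0) / sp.factorial(k))
            for k in range(n)]

# Example nonlinearity N(u) = u**2
for k, A in enumerate(adomian_polynomials(lambda x: x**2, n_terms)):
    print(f"A{k} =", A)
# A0 = u0**2, A1 = 2*u0*u1, A2 = 2*u0*u2 + u1**2, A3 = 2*u0*u3 + 2*u1*u2
```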
In the context of sample surveys, it is not always feasible to obtain complete and accurate information, and non-response represents a significant challenge. Because non-response is a common occurrence in estimation, various approaches are employed to mitigate it. This paper proposes a new class of estimators constructed by combining non-response and unbiased estimator approaches. The simulation study provides a comprehensive evaluation of the performance of various estimators across a wide range of scenarios, including different sample sizes, correlation coefficients, non-response rates, and z-values. This extensive simulation framework explores multiple conditions and variations to ensure a thorough assessment of estimator performance under different settings. The findings show that each member of the proposed family of estimators consistently exhibits a higher percent relative efficiency (PRE) than all other estimators under the scenarios tested.
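PRE (percent relative efficiency) compares the mean squared errors of two estimators of the same parameter. A minimal Monte Carlo sketch under an artificial 20% random non-response mechanism; the two estimators and the data model are illustrative stand-ins, not the proposed family:

```python
import numpy as np

rng = np.random.default_rng(0)

def mse(estimator, true_mean=10.0, n_reps=10_000, n=50):
    """Monte Carlo MSE of a mean estimator under ~20% random non-response."""
    errs = []
    for _ in range(n_reps):
        sample = rng.normal(true_mean, 2.0, size=n)
        responded = rng.random(n) > 0.2          # True for responding units
        errs.append(estimator(sample, responded) - true_mean)
    return np.mean(np.square(errs))

respondents_mean = lambda x, r: x[r].mean()      # ignores non-respondents
full_mean = lambda x, r: x.mean()                # hypothetical full-response benchmark

# PRE of the benchmark relative to the respondents-only estimator (>100 = better)
pre = 100.0 * mse(respondents_mean) / mse(full_mean)
print(f"PRE = {pre:.1f}")
```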
In this study, an Allee-type, single-species (prey), two-patch model with a nonlinear harvesting rate and species migration across the two patches has been developed and analyzed. Because the population of any species in an ecosystem depends strongly on the carrying capacity of the corresponding ecosystem, the main focus of our work is on how carrying capacity affects system dynamics in the presence and absence of randomness (the stochastic and deterministic cases, respectively). In the deterministic case, we find that the carrying capacity of both patches increases the number of interior equilibrium points, with a maximum of eight interior equilibrium points observed. We also observe some interesting dynamics, including bi-stability, tri-stability, and catastrophic bifurcations. On the other hand, we use the continuous-time Markov chain modeling approach to construct an equivalent stochastic model of the corresponding deterministic model based on deterministic assumptions. Based on the extinction or persistence of the species, we compare the dynamics of the deterministic and stochastic models in order to assess the impact of demographic stochasticity on the population of the species in the two patches. The stochastic model shows the possibility of species extinction in finite time, whereas the deterministic model shows the persistence of the species over the same horizon; this is the major difference between the two models. We also derive the implicit equation for the expected time to species extinction. Finally, a graphic illustrates how the patch's carrying capacity affects the expected time.
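The deterministic-versus-stochastic contrast described above can be reproduced in miniature with a Gillespie simulation of a single-patch birth-death chain with an Allee-damped birth rate. All rates below are hypothetical, not the paper's two-patch model; for this parameterization the deterministic counterpart persists near its upper equilibrium, while the CTMC can still reach extinction through demographic noise:

```python
import numpy as np

rng = np.random.default_rng(1)

def extinction_time(n0=8, K=30, A=5, b=1.0, d=0.4, t_max=200.0):
    """Gillespie walk of a birth-death chain with an Allee-damped logistic birth rate.
    Returns the extinction time, or infinity if the population survives past t_max."""
    n, t = n0, 0.0
    while n > 0 and t < t_max:
        birth = b * n * (n / (n + A)) * max(0.0, 1.0 - n / K)  # Allee + logistic
        death = d * n
        total = birth + death
        t += rng.exponential(1.0 / total)        # time to next event
        n += 1 if rng.random() < birth / total else -1
    return t if n == 0 else np.inf

times = [extinction_time() for _ in range(200)]
finite = [t for t in times if np.isfinite(t)]
msg = f"extinct in {len(finite)}/200 runs by t=200"
if finite:
    msg += f"; mean extinction time = {np.mean(finite):.1f}"
print(msg)
```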
This study compares two statistical approaches to image reconstruction in single-photon emission computed tomography (SPECT). We evaluated the widely used Ordered Subset Expectation Maximization (OSEM) algorithm and the newer Maximum a Posteriori with Entropy prior (MAP-Ent) approach in the context of quantifying radiopharmaceutical uptake in pathological lesions. Numerical experiments were performed using a digital twin of the standardized NEMA IEC phantom, which contains six spheres of varying diameters to simulate lesions. Quantitative accuracy was assessed using the maximum recovery coefficient (RCmax), defined as the ratio of the reconstructed maximum activity to the true value. The study shows that OSEM exhibits unstable convergence during iterations, leading to noise and edge artifacts in lesion images. Post-filtering stabilizes the reconstruction and ensures convergence, producing RCmax-size curves that could be used as correction factors in clinical evaluations. However, this approach significantly underestimates uptake in small lesions and may even lead to the complete loss of small lesions on reconstructed images. In contrast, MAP-Ent demonstrates fundamentally different behavior: it achieves stable convergence and preserves quantitative accuracy without post-filtering, while maintaining the contrast of even the smallest lesions. However, the iteration number at which accurate reconstruction is achieved depends strongly on the choice of a single global regularization parameter, which limits optimal performance across lesions of different sizes. These results demonstrate the need for locally adaptive regularization in MAP-Ent to improve quantitative accuracy in lesion reconstruction.
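A toy one-dimensional sketch of the MLEM update underlying OSEM (a single subset) and of the RCmax figure of merit; the Gaussian system matrix and the synthetic "lesion" are stand-ins for the NEMA phantom simulation, not the study's setup:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D imaging system: Gaussian blur matrix A maps activity x to expected counts
n = 32
idx = np.arange(n)
A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)
A /= A.sum(axis=0)                               # column-normalized system matrix

x_true = np.zeros(n)
x_true[12:16] = 10.0                             # a small high-uptake "lesion"
y = rng.poisson(A @ x_true).astype(float)        # noisy projection data

# MLEM update (OSEM with one subset): x <- x * A^T(y / Ax) / A^T 1
x = np.ones(n)
sens = A.T @ np.ones(n)                          # sensitivity image
for _ in range(50):
    x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens

rc_max = x.max() / x_true.max()                  # maximum recovery coefficient RCmax
print(f"RCmax = {rc_max:.2f}")
```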
Johanna L. Smith, Quenna Wong, Whitney Hornsby, et al.
Sharing diverse genomic and other biomedical datasets is critical to advance scientific discoveries and their equitable translation to improve human health. However, data sharing remains challenging in the context of legacy datasets, evolving policies, multi-institutional consortium science, and international stakeholders. The NIH-funded Polygenic Risk Methods in Diverse Populations (PRIMED) Consortium was established to improve the performance of polygenic risk estimates for a broad range of health and disease outcomes with global impacts. Improving polygenic risk score performance across genetically diverse populations requires access to large, diverse cohorts. We report on the design and implementation of data sharing policies and procedures developed in PRIMED to aggregate and analyze data from multiple, heterogeneous sources while adhering to existing data sharing policies for each integrated dataset. We describe two primary data sharing mechanisms: coordinated dbGaP applications and a Consortium Data Sharing Agreement, as well as provide alternatives when individual-level data cannot be shared within the Consortium (e.g., federated analyses). We also describe technical implementation of Consortium data sharing in the NHGRI Analysis Visualization and Informatics Lab-space (AnVIL) cloud platform, to share derived individual-level data, genomic summary results, and methods workflows with appropriate permissions. As a Consortium making secondary use of pre-existing data sources, we also discuss challenges and propose solutions for release of individual- and summary-level data products to the broader scientific community. We make recommendations for ongoing and future policymaking with the goal of informing future consortia and other research activities.
Menaha Dhanraj, Arul Joseph Gnanaprakasam, Santosh Kumar
Abstract In this paper, we initiate fixed point theorems for orthogonal hybrid interpolative Reich-Istrățescu-type contraction maps on orthogonal b-metric spaces, thereby refining this class of contractions. We also provide some examples supporting our main results. Finally, we give an application establishing the existence and uniqueness of a solution to an integral equation, together with numerical results.
In this effort, a new method called the improved residual power series method with Padé approximants (IRPSM-Padé) is introduced to solve boundary value problems (BVPs) on an unbounded domain. It is known from previous studies that the IRPSM is only suitable for solving BVPs on a finite domain for small values of the independent variable. To overcome this difficulty, the IRPSM has been combined with Padé approximants; pairing the series obtained by IRPSM with Padé approximants yields an effective tool for handling BVPs on an unbounded domain. Applications of IRPSM-Padé are presented with the help of the well-known boundary-layer Blasius problems over a stretching sheet arising in incompressible fluids. MATHEMATICA and Maple software were used for the computations in this analysis. In the first example, the results obtained by IRPSM-Padé are compared with the ADM-Padé and DTM-Padé solutions, and good agreement is observed among the three. In the second example, the results obtained by IRPSM and IRPSM-Padé are compared with the exact solution, and good agreement is shown between the IRPSM-Padé and exact solutions. Furthermore, it is verified that the plain IRPSM technique is not suitable for solving BVPs on an unbounded domain. The IRPSM-Padé requires less computational work, without linearization, discretization, or perturbation, confirming that it is a promising tool for solving BVPs on infinite domains in applied fields.
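The key idea of pairing a series method with Padé approximants is that the rational form remains useful where the truncated series fails. A minimal sketch using SciPy's pade on the Taylor series of ln(1+x), whose radius of convergence is 1 (this illustrates the Padé step only, not the IRPSM itself):

```python
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of ln(1+x) about 0: 0, 1, -1/2, 1/3, -1/4 (converges only for |x| < 1)
an = [0.0, 1.0, -0.5, 1/3, -0.25]
p, q = pade(an, 2)                               # [2/2] Pade approximant

x = 4.0                                          # far outside the series' domain of validity
print(p(x) / q(x))                               # ~1.565, close to the true value
print(sum(c * x**k for k, c in enumerate(an)))   # truncated series fails badly: ~ -46.7
print(np.log(1 + x))                             # ln(5) = 1.609...
```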
This article is a response to a recent opinion piece arguing that log concentration values should not be used in analytical chemistry. An essential aim in the development of analytical chemistry methods is to obtain more sensitive and accurate detection values. For a chemical analysis method to be applied, the experimental data must first be fitted with mathematical functions. Depending on the detection principle and analytical method, the data can be displayed in a coordinate system with two linear axes for linear function fitting, or first be subjected to a logarithmic transformation and then fitted. Whether raw data or logarithmically transformed data are used depends primarily on the analytical principle; there is no special rule about data formats. For example, ultraviolet-visible spectrophotometric data are more suitable for direct linear fitting, whereas enzyme-catalyzed reaction or electrochemical data in logarithmic form are more appropriate for function fitting. This transformation of data form does not affect the soundness of the fit statistics; rather, it simplifies the function analysis and calculation that are the essence of analytical chemistry. In this brief article, we provide justification for the application of logarithmic processing in various fields of quantitative analytical chemistry.
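A minimal sketch of the electrochemical case mentioned above: fitting hypothetical ion-selective-electrode readings against log10 of concentration linearizes a Nernst-type response E = a + b·log10(c) without affecting the least-squares statistics, since only the x-variable is transformed:

```python
import numpy as np

# Hypothetical ion-selective electrode data: potential (mV) vs concentration (mol/L)
conc = np.array([1e-5, 1e-4, 1e-3, 1e-2, 1e-1])
emf = np.array([35.1, 92.8, 151.2, 209.5, 267.9])   # roughly Nernstian response

# Regressing emf on log10(conc) linearizes the Nernst-type relation E = a + b*log10(c)
slope, intercept = np.polyfit(np.log10(conc), emf, 1)
print(f"slope = {slope:.1f} mV/decade, intercept = {intercept:.1f} mV")
# ~58 mV/decade, as expected for a monovalent ion at room temperature
```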
Purpose: The main objective of this study is to rank methods of improving debt-asset management at branches of Bank Sepah in Tehran.
Methodology: Questionnaires were used to collect data, and the statistical sample consists of 146 managers and experts of Bank Sepah in Tehran, selected by simple random sampling. Using the Multiple Criteria Decision-Making (MCDM) technique of fuzzy TOPSIS, we ranked the goals of debt-asset management in Bank Sepah.
Findings: Among the main criteria for Asset and Liability Management (ALM) goals, "interest rate risk management" with a weight of 3.83 is the first priority, "maintenance of adequate capital" with a weight of 3.67 is second, and "liquidity risk management" with a weight of 3.41 is third. Also, according to the Friedman test results, there are differences among the achievements for each of the major debt-asset management goals in Bank Sepah in Tehran.
Originality/Value: This study is a mixed-method study (Delphi (qualitative) and survey (quantitative)) in terms of execution and data collection. Using MCDM techniques, we ranked the major objectives of asset-debt management in Bank Sepah. The results can also be used in the planning processes of banks and financial institutions.
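For orientation, the crisp core of the TOPSIS procedure used (in its fuzzy form) in this study can be sketched in a few lines; the decision matrix and criteria weights below are hypothetical, and the fuzzy variant replaces the crisp scores with fuzzy numbers:

```python
import numpy as np

# Hypothetical decision matrix: 3 alternatives (rows) x 3 benefit criteria (columns)
X = np.array([[7.0, 8.0, 6.0],
              [8.0, 6.5, 7.0],
              [6.0, 7.0, 8.0]])
w = np.array([0.5, 0.3, 0.2])                   # criteria weights, summing to 1

V = w * X / np.linalg.norm(X, axis=0)           # weighted, vector-normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)      # positive / negative ideal solutions
d_pos = np.linalg.norm(V - ideal, axis=1)       # distance to the ideal
d_neg = np.linalg.norm(V - anti, axis=1)        # distance to the anti-ideal
closeness = d_neg / (d_pos + d_neg)             # relative closeness coefficient

print(np.argsort(-closeness))                   # alternatives ranked best to worst
```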
Mathematical disposition and resilience play an important role in mathematics learning. This study aims to describe the mathematical disposition and mathematical resilience of mathematics education students and the correlation between them. The sample consisted of 114 fourth-semester students of a mathematics education study program, selected at random. The results show that the students' mathematical disposition and resilience scores fall into the medium category. The correlational analysis indicates a weak correlation between the two variables. The regression analysis yielded a significance value below 0.05, indicating a significant effect of the mathematical disposition variable on mathematical resilience.
Timothy Williamson has recently argued that the applicability of classical mathematics in the natural and social sciences raises a problem for the endorsement, in non-mathematical domains, of a wide range of non-classical logics. We first reconstruct his argument and present its restriction to the case of quantum logic (QL). Then we show that there is no problematic tension between the applicability of classical mathematical models to quantum phenomena and the endorsement of QL in the reasoning about the latter. Once we identify the premise in Williamson's argument that turns out to be false when restricted to QL, we argue that the same premise fails for a wider variety of non-classical logics. In the end, we use our discussion to draw some general lessons concerning the relationship between applied logic and applied mathematics.
This manifesto has been written as a practical tool and aid for anyone carrying out, managing or influencing mathematical work. It provides insight into how to undertake and develop mathematically-powered products and services in a safe and responsible way. Rather than give a framework of objectives to achieve, we instead introduce a process that can be integrated into the common ways in which mathematical products or services are created, from start to finish. This process helps address the various issues and problems that can arise for the product, the developers, the institution, and for wider society. To do this, we break down the typical procedure of mathematical development into 10 key stages; our "10 pillars for responsible development", which follow a somewhat chronological ordering of the steps, and associated challenges, that frequently occur in mathematical work. Together these 10 pillars cover issues of the entire lifecycle of a mathematical product or service, including the preparatory work required to responsibly start a project, central questions of good technical mathematics and data science, and issues of communication, deployment and follow-up maintenance specifically related to mathematical systems. This manifesto, and the pillars within it, are the culmination of 7 years of work done by us as part of the Cambridge University Ethics in Mathematics Project. These are all tried-and-tested ideas that we have presented and used in both academic and industrial environments. In our work, we have directly seen that mathematics can be an incredible tool for good in society, but also that without careful consideration it can cause immense harm. We hope that following this manifesto will empower its readers to reduce the risk of undesirable and unwanted consequences of their mathematical work.
In this paper, we introduce a new framework for generating synthetic vascular trees, based on rigorous model-based mathematical optimization. Our main contribution is the reformulation of finding the optimal global tree geometry into a nonlinear optimization problem (NLP). This rigorous mathematical formulation accommodates efficient solution algorithms such as the interior point method and allows us to easily change boundary conditions and constraints applied to the tree. Moreover, it creates trifurcations in addition to bifurcations. A second contribution is the addition of an optimization stage for the tree topology. Here, we combine constrained constructive optimization (CCO) with a heuristic approach to search among possible tree topologies. We combine the NLP formulation and the topology optimization into a single algorithmic approach. Finally, we validate our new model-based optimization framework using a detailed corrosion cast of a human liver, which allows a quantitative comparison of the synthetic tree structure with the experimentally determined tree structure down to the fifth generation. The results show that our new framework is capable of generating asymmetric synthetic trees that match the available physiological corrosion cast data better than trees generated by the standard CCO approach.
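The essence of casting tree geometry as an NLP can be seen in a one-bifurcation toy problem: the branch-point coordinates are the decision variables and total lumen volume is the objective. The geometry and radii below are hypothetical (radii fixed to approximately satisfy Murray's law); the paper's framework optimizes far larger trees with physiological constraints:

```python
import numpy as np
from scipy.optimize import minimize

# One-bifurcation toy problem: where should the branch point go?
root = np.array([0.0, 0.0])                             # inlet
t1, t2 = np.array([1.0, 1.0]), np.array([1.0, -1.0])    # terminal points
r0, r1, r2 = 0.30, 0.24, 0.24    # radii, roughly r0^3 = r1^3 + r2^3 (Murray's law)

def lumen_volume(b):
    """Objective of the toy NLP: total vessel volume for branch point b."""
    segments = [(root, b, r0), (b, t1, r1), (b, t2, r2)]
    return sum(np.pi * r**2 * np.linalg.norm(q - p) for p, q, r in segments)

res = minimize(lumen_volume, x0=np.array([0.5, 0.1]), method="BFGS")
print(res.x, lumen_volume(res.x))   # optimum lies on the symmetry axis (y ~ 0)
```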
Aroldo Eduardo Athias Rodrigues, Lidinalva de Almada Coutinho, José Ricardo e Souza Mafra
This article presents a mapping of academic research, specifically theses and dissertations defended in Brazil between 2016 and 2021, aiming to locate those addressing the intersection or interface between the education of mathematics teachers and digital technologies. The initial survey was conducted in two repositories, the Capes Catalog of Theses and Dissertations and the Dados Abertos CAPES website, drawing on theoretical and methodological assumptions related to what has been called the State of Knowledge. The survey yielded a research corpus of 128 works from different regions of the country, with a noticeably higher concentration in the Southeast and South regions. The results point to a marked diversification of technological resources as the main drivers of formative discussions, both in initial teacher education and, more frequently, in the continuing education of mathematics teachers. They also highlight the need for more studies focused on the preparation of teachers working in the early grades with respect to digital technologies, as well as research on the education of teacher educators themselves, grounded in a need for foundation and meaning.