Mustafa Adil Albuhadeed, Aqilah Baseri Huddin, Fazida Hanim Hashim, et al.
Background: Breast cancer (BC) diagnosis remains challenging in medical imaging, as diagnostic interpretation varies among pathologists due to the complexity and variety of histopathological images, which contributes to subjectivity and inconsistency in clinical decision-making. To address this ongoing issue and give pathologists better tools for detecting BC from histopathological data, researchers have been exploring advanced computational methods, such as hybrid systems that combine machine learning and deep learning techniques.
Methodology: This systematic review was motivated by the need to assess the effectiveness, performance, data reliability, and real-world applicability of hybrid systems in histopathological image analysis of BC. Only studies on detecting BC that utilize hybrid artificial intelligence (AI) methods are considered, excluding simpler approaches and research focused on clinical data or on image types other than histopathological images. The studies were sourced from reputable databases such as PubMed, ScienceDirect, Scopus, IEEE Xplore Digital Library, MDPI, and PLOS One, and they span the period from January 2015 to April 2025. Data selection, measurement, reporting, and validation biases were assessed, along with the overall bias, in the 31 studies that met the inclusion and exclusion criteria.
Results: The qualitative and meta-analysis of the 31 included studies showed that the pooled accuracy of these methods in multiclass and binary classification reached 90% and 94%, respectively. In terms of image data types, the pooled accuracy for whole slide images and region-cropped images was 86% and 96%, respectively.
Conclusions: Although there was a noticeable validation risk, as most of the researchers did not test their models on multiple datasets, these methods represent a promising AI assistant for pathologists.
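As a rough illustration of the pooled-accuracy idea, a sample-size-weighted mean over per-study accuracies can be computed as below; the study accuracies and sample sizes here are invented for illustration and are not drawn from the review, whose meta-analysis is more formal.

```python
# Hypothetical illustration: pooling per-study accuracies by sample size.
# The (accuracy, n) pairs are invented, not taken from the 31 studies.

def pooled_accuracy(studies):
    """Sample-size-weighted mean accuracy across (accuracy, n) pairs."""
    total = sum(n for _, n in studies)
    return sum(acc * n for acc, n in studies) / total

binary = [(0.95, 400), (0.93, 250), (0.94, 600)]  # (accuracy, n images)
print(round(pooled_accuracy(binary), 3))  # prints 0.941
```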
Virginia Morejón, Ainhoa González Del Campo, Ibon Galparsoro, et al.
Abstract: With the increase in marine spatial planning efforts, the need for robust environmental assessments that account for the multiple pressures of human activities on marine ecosystems is more critical than ever. However, Cumulative Effects Assessment (CEA) practice, a requirement of Strategic Environmental Assessment (SEA) of marine spatial plans, remains insufficient. This paper explores the integration of ecosystem-based approaches into SEA stages for holistic environmental assessments of marine spatial plans that prioritize ecological integrity. It also reviews advancements in marine CEA research, focusing on risk-based approaches for assessing cumulative effects, and addresses the existing disconnection between CEA science and environmental assessment practice. Emphasis is placed on improving key SEA stages that are critical to CEA by identifying principles and approaches that systematically and spatially address the interactions of various pressures and ecosystem receptors across the four dimensions (4D) of marine environments to assess cumulative effects risks. This novel approach presents a holistic framework aimed at enhancing CEA practice within the SEA of marine spatial plans, for more sustainable and ecosystem-focused planning outcomes in marine environments.
Bayesian networks (BNs) are directed acyclic graph (DAG) models that have been adopted in many fields for their strengths in transparency, interpretability, probabilistic reasoning, and causal modeling. Given a set of data, one hurdle to using BNs is building from the data a network graph that properly handles dependencies, whether correlated or causal. In this paper, we propose an initial methodology for discovering network structures using Tsetlin Machines.
The use of a hypothetical generative model has been suggested for causal analysis of observational data. The very assumption of a particular model is a commitment to a certain set of variables and therefore to a certain set of possible causes. Estimating the joint probability distribution of these variables can be useful for predicting the values of some variables given the observed values of others, but it is not sufficient for inferring causal relationships. The model describes a single observable distribution and cannot describe a chain of effects of an intervention that deviates from the observed distribution.
We present a self-learning approach for synthesizing programs from integer sequences. Our method relies on a tree search guided by a learned policy. Our system is tested on the On-Line Encyclopedia of Integer Sequences. There, it discovers, on its own, solutions for 27987 sequences starting from basic operators and without human-written training examples.
This paper proposes a framework for representing and reasoning causality between geographic events by introducing the notion of Geo-Situation. This concept links to observational snapshots that represent sets of conditions, and either acts as the setting of a geo-event or influences the initiation of a geo-event. We envision the use of this framework within knowledge graphs that represent geographic entities will help answer the important question of why a geographic event occurred.
ConnectX is a two-player game that generalizes the popular game Connect 4. The objective is to place X coins in a row along a row, column, or diagonal of an M x N board. The first player to do so wins the game. The parameters (M, N, X) are allowed to change in each game, making ConnectX a novel and challenging problem. In this paper, we present our work on the implementation and modification of various reinforcement learning algorithms to play ConnectX.
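A minimal sketch of the ConnectX win condition, checking for X coins in a line on an M x N board, might look like the following; this is an illustrative check under an assumed list-of-lists board representation, not the paper's environment code.

```python
def wins(board, player, x):
    """Return True if `player` has x coins in a line (row, column, or
    diagonal). board: list of M rows, each a list of N cells holding a
    player id or None."""
    m, n = len(board), len(board[0])
    for r in range(m):
        for c in range(n):
            # Four line directions: right, down, down-right, down-left.
            for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
                cells = [(r + i * dr, c + i * dc) for i in range(x)]
                if all(0 <= rr < m and 0 <= cc < n and board[rr][cc] == player
                       for rr, cc in cells):
                    return True
    return False

b = [[None] * 4 for _ in range(3)]
b[2][0] = b[2][1] = b[2][2] = 1  # three coins of player 1 in a row
print(wins(b, 1, 3))  # True
```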
In this paper, we introduce an imagine network that can simulate itself through artificial association networks. Association, deduction, and memory networks are learned, and a network is created by combining them with a discriminator and reinforcement learning models. This model can learn various datasets or data samples generated in environments and can generate new data samples.
In this short paper, we introduce a simple model for quantifying philosophical vagueness. There is growing interest in such efforts to quantify vague concepts such as consciousness and agency. We then discuss some of the implications of this model, including the conditions under which the quantification of 'nifty' leads to pan-nifty-ism. Understanding this leads to an interesting insight: the reason a framework for quantifying consciousness like Integrated Information Theory implies (forms of) panpsychism is that favorable structure is already implicitly encoded in the construction of the quantification metric.
Logical forgetting may take exponential time in general, but it does not when its input is a single-head propositional definite Horn formula. Single-head means that no variable is the head of multiple clauses. An algorithm to make a formula single-head if possible is shown. It improves over a previous one by being complete: it always finds a single-head formula equivalent to the given one if any.
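The single-head property itself is easy to check mechanically; a minimal sketch, assuming definite Horn clauses are represented as (body, head) pairs (the harder problem the abstract addresses, rewriting a formula into an equivalent single-head one, is not attempted here):

```python
def is_single_head(clauses):
    """A definite Horn formula is single-head if no variable is the
    head of more than one clause. clauses: list of (body, head) pairs,
    where body is a set of variables and head is a single variable."""
    heads = [head for _, head in clauses]
    return len(heads) == len(set(heads))

f = [({"a", "b"}, "c"), ({"c"}, "d")]   # single-head
g = [({"a"}, "c"), ({"b"}, "c")]        # "c" heads two clauses
print(is_single_head(f), is_single_head(g))  # True False
```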
We study strategic similarity of game positions in two-player extensive games of perfect information, by looking at the structure of their local game trees, with the aim of improving the performance of game playing agents in detecting forcing continuations. We present a range of measures over the induced game trees and compare them against benchmark problems in chess, observing a promising level of accuracy in matching up trap states.
We review some practical and philosophical questions raised by the use of machine learning in creative practice. Beyond the obvious problems regarding plagiarism and authorship, we argue that the novelty in AI Art relies mostly on a narrow machine learning contribution: manifold approximation. Nevertheless, this contribution creates a radical shift in the way we have to consider this movement. Is this omnipotent tool a blessing or a curse for artists?
Committee selection with diversity or distributional constraints is a ubiquitous problem. However, many of the formal approaches proposed so far have certain drawbacks, including (1) computational intractability in general, and (2) the inability to suggest a solution for instances where the hard constraints cannot be met. We propose a practical, polynomial-time algorithm for diverse committee selection that draws on the idea of soft bounds and satisfies natural axioms.
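As a rough illustration of the soft-bound idea (this greedy heuristic is an invented sketch, not the algorithm or axioms from the paper), attribute targets can be treated as deficits to reduce rather than hard constraints that may be infeasible:

```python
def select_committee(candidates, targets, k):
    """Greedy sketch: pick k candidates, each time choosing the one that
    most reduces the remaining deficits of the soft attribute targets.
    Always returns a committee of size k even when targets cannot be met.
    candidates: {name: set of attributes}; targets: {attribute: count}."""
    remaining = dict(targets)
    pool = dict(candidates)
    chosen = []
    for _ in range(k):
        def gain(name):
            # How many still-unmet targets this candidate helps with.
            return sum(1 for a in pool[name] if remaining.get(a, 0) > 0)
        best = max(pool, key=gain)
        chosen.append(best)
        for a in pool.pop(best):
            if remaining.get(a, 0) > 0:
                remaining[a] -= 1
    return chosen

cands = {"x": {"F"}, "y": {"M"}, "z": {"F", "Jr"}}
print(select_committee(cands, {"F": 1, "Jr": 1}, 2))
```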
Constant-structure closed semantic systems are systems in which each element receives its definition through a corresponding, unchanging set of other elements of the system. Discrete time means here that the definitions of the elements change iteratively and simultaneously, based on the "neighbor portraits" from the previous iteration. I prove that the iterative redefinition process in this class of systems quickly degenerates into a series of pairwise isomorphic states, and I discuss some directions for further research.
This paper suggests a new interpretation of the Dempster-Shafer theory in terms of a probabilistic interpretation of plausibility. A new rule for combining independent evidence is presented, and it is shown to preserve this interpretation.
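For reference, the classical Dempster rule of combination, the baseline that the paper's new rule departs from, can be sketched as follows; the focal elements and masses below are invented for illustration:

```python
def dempster_combine(m1, m2):
    """Classical Dempster's rule: combine two mass functions whose keys
    are frozenset focal elements, normalizing away the mass assigned to
    conflicting (empty-intersection) pairs."""
    combined = {}
    conflict = 0.0
    for a, p in m1.items():
        for b, q in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + p * q
            else:
                conflict += p * q
    k = 1.0 - conflict  # normalization constant (assumes k > 0)
    return {s: v / k for s, v in combined.items()}

A, B = frozenset({"A"}), frozenset({"B"})
m1 = {A: 0.6, A | B: 0.4}
m2 = {B: 0.5, A | B: 0.5}
out = dempster_combine(m1, m2)  # masses renormalized after 0.3 conflict
```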
Julian Andres Cuellar, Camila Sanchez Sandoval, Sergio Alfonso Huertas
The objective of the article presented below is to generate a reflection on the epistemic postulates of the work carried out by (Cuellar Argote, 2007) and (Leyva & Ramirez, 2015), with the aim of updating the premises and advances that these authors contributed to the discussion on Political Science in Colombia and its process of consolidation. The article uses a documentary-analysis methodology, and its development reveals shared interests among the authors regarding the teaching and training of Political Science in Colombia.
In practical situations, computing approximations of sets is an important step in knowledge reduction of dynamic covering decision information systems. In this paper, we present incremental approaches to computing the type-1 and type-2 characteristic matrices of dynamic coverings whose cardinalities increase as more objects are added. We also present incremental algorithms for computing the second and sixth lower and upper approximations of sets in dynamic covering approximation spaces.
In this paper, the optimum decomposition of belief networks is discussed. Some methods of decomposition are examined, and a new method, the method of Minimum Total Number of States (MTNS), is proposed. The problem of optimum belief network decomposition under our framework, as under all the other frameworks, is shown to be NP-hard. Guided by this computational complexity analysis, an algorithm for belief network decomposition based on simulated annealing is proposed (Wee, 1990a).
A simplified description of Fuzzy TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) is presented. We have adapted the TOPSIS description from the existing fuzzy-theory literature and distilled the bare minimum of concepts required for understanding and applying TOPSIS. A worked example illustrates the application of TOPSIS to a multi-criteria group decision-making scenario.
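As background, the crisp (non-fuzzy) TOPSIS procedure that the fuzzy variant extends can be sketched as follows; the data shapes and equal criterion weights in the usage line are illustrative assumptions, not the paper's formulation:

```python
import math

def topsis(matrix, weights, benefit):
    """Crisp TOPSIS: rank alternatives by relative closeness to the ideal
    solution. matrix[i][j] = score of alternative i on criterion j;
    benefit[j] = True if larger is better for criterion j."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each criterion column, then apply weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m)))
             for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    # Ideal and anti-ideal solutions per criterion.
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col)
             for j, col in enumerate(zip(*v))]
    # Relative closeness: larger means closer to the ideal.
    return [math.dist(row, worst) / (math.dist(row, ideal) + math.dist(row, worst))
            for row in v]

scores = topsis([[1, 1], [2, 2]], [0.5, 0.5], [True, True])
```

An alternative dominated on every benefit criterion gets closeness 0 and the dominating one gets 1, matching the intuition behind the ideal-solution distance.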