The paper investigates whether and how AI systems can realize states of uncertainty. By adopting a functionalist and behavioral perspective, it examines how symbolic, connectionist and hybrid architectures make room for uncertainty. The paper distinguishes between epistemic uncertainty, or uncertainty inherent in the data or information, and subjective uncertainty, or the system's own attitude of being uncertain. It further distinguishes between distributed and discrete realizations of subjective uncertainty. A key contribution is the idea that some states of uncertainty are interrogative attitudes whose content is a question rather than a proposition.
We demonstrate how Monte Carlo Search (MCS) algorithms, namely Nested Monte Carlo Search (NMCS) and Nested Rollout Policy Adaptation (NRPA), can be used to build graphs and find counter-examples to spectral graph theory conjectures in minutes.
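The core of NMCS is simple enough to sketch: a level-0 search is a random playout, and a level-n search evaluates each legal move with a level-(n-1) search and plays the best one. The sketch below is a minimal, generic version (the function names and the toy domain in the usage example are ours, not the paper's; the paper applies such searches to graph construction, scoring states by how close they come to violating a conjecture).

```python
import random

def nmcs(state, level, legal_moves, play, score, terminal):
    """Nested Monte Carlo Search (simplified sketch).

    level 0: complete the state with a uniformly random playout.
    level n: at each step, evaluate every legal move with an
    (n-1)-level search and play the move whose sub-search
    reached the best-scoring final state."""
    if level == 0:
        while not terminal(state):
            state = play(state, random.choice(legal_moves(state)))
        return state
    while not terminal(state):
        best_final, best_move = None, None
        for move in legal_moves(state):
            final = nmcs(play(state, move), level - 1,
                         legal_moves, play, score, terminal)
            if best_final is None or score(final) > score(best_final):
                best_final, best_move = final, move
        state = play(state, best_move)
    return state

# Toy usage: build a length-5 bit string maximizing the number of ones.
legal = lambda s: [] if len(s) >= 5 else [0, 1]
best = nmcs([], 5, legal, lambda s, m: s + [m], sum, lambda s: len(s) >= 5)
```

For graph conjectures, `state` would be a partially built graph, `legal_moves` the candidate edges, and `score` the conjectured spectral bound minus the observed quantity, so a positive score certifies a counter-example.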
We propose a semantic tableau method, called an argumentation tableau, that enables the derivation of arguments. First, the derivation of arguments for standard propositional and predicate logic is addressed. Next, an extension that enables reasoning with defeasible rules is presented. Finally, reasoning by cases using an argumentation tableau is discussed.
The terminological landscape is rather cluttered when referring to autonomous driving or vehicles. A plethora of terms are used interchangeably, leading to misuse and confusion. As the field progresses technologically, socially, and legally, it is increasingly imperative to establish a clear terminology that assigns each concept its proper place.
We consider a multiagent network model in which each node hosts an agent holding priced Friddy coins, and each agent can buy or sell Friddy coins in the marketplace. Although nodes need not quote the same price at any given transaction time, at the macro level prices should reach equilibrium through iterated buy and sell transactions.
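The convergence-to-equilibrium idea can be illustrated with a minimal pairwise-trading model (this is our illustration, not the paper's protocol): whenever two neighboring agents trade, both move their quoted price to the midpoint, and repeated sweeps drive all quotes to a common equilibrium price while conserving the total.

```python
def trade_sweep(prices):
    """One sweep of pairwise trades along a line of nodes (toy model):
    each trade moves both agents' quoted coin price to the midpoint,
    which conserves the sum of all quotes."""
    for i in range(len(prices) - 1):
        mid = (prices[i] + prices[i + 1]) / 2.0
        prices[i] = prices[i + 1] = mid
    return prices

def spread(prices):
    """Gap between the highest and lowest quoted price."""
    return max(prices) - min(prices)

# Usage: quotes start unequal and converge to a single equilibrium price.
quotes = [1.0, 7.0, 3.0, 10.0, 4.0]
for _ in range(500):
    trade_sweep(quotes)
```

Each sweep strictly shrinks the price spread, so the quotes converge geometrically to the mean of the initial prices, mirroring the macro-level equilibrium described above.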
Carbon disulfide, an important sulfur-containing species, has strong absorption lines in the wavelength range of 188 nm to 215 nm. Its absorption cross sections are difficult to measure accurately because carbon disulfide is readily converted into carbon monosulfide when exposed to ultraviolet light. In this study, the absorption cross sections of carbon disulfide were measured while minimizing this conversion. The factors affecting the conversion, including gas flow rate, ultraviolet light intensity, and duration of illumination, were studied so that the conversion could be suppressed by controlling the experimental conditions. The absorption cross sections of carbon disulfide at room temperature and atmospheric pressure were then calculated from the absorption spectrum and the carbon disulfide concentration in the absence of conversion. The wavelengths of 16 absorption peaks, corresponding to vibrational transitions, were marked on the measured cross sections. Carbon disulfide has a maximum absorption cross section of 4.5 × 10⁻¹⁶ cm²/molecule at a wavelength of 198.10 nm.
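Absorption cross sections of this kind follow from the Beer-Lambert law, I = I₀·exp(−σNL). A minimal sketch of the inversion (the numbers in the usage example are illustrative, not the paper's measurements):

```python
import math

def cross_section(I0, I, N, L):
    """Beer-Lambert law: I = I0 * exp(-sigma * N * L), so
    sigma = ln(I0 / I) / (N * L).

    I0, I : incident and transmitted intensity (same units)
    N     : number density of the absorber, molecules/cm^3
    L     : optical path length, cm
    Returns sigma in cm^2/molecule."""
    return math.log(I0 / I) / (N * L)

# Illustrative round trip with sigma = 4.5e-16 cm^2/molecule,
# a dilute sample (N = 1e13 molecules/cm^3) and a 10 cm cell.
I0 = 1.0
I = I0 * math.exp(-4.5e-16 * 1.0e13 * 10.0)
sigma = cross_section(I0, I, 1.0e13, 10.0)
```

The measurement difficulty described above enters through N: if UV exposure converts part of the CS₂ before the spectrum is recorded, the assumed number density is too high and the inferred σ too low, which is why the study suppresses the conversion first.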
Aleksandar Kartelj, Vladimir Filipović, Siniša Vrećica, et al.
This paper proposes topologically sensitive metaheuristics, describing the conceptual design of a topologically sensitive Variable Neighborhood Search method (TVNS) and a topologically sensitive Electromagnetism Metaheuristic (TEM).
Navya Singh, Anshul Dhull, Barath Mohan S, et al.
Our game Pommerman is based on the console game Bomberman. The game starts on an 11 × 11 grid. Pommerman is a multi-agent environment comprising a set of different scenarios and four agents.
This paper describes how to carry out a feasibility study for a potential knowledge-based system (KBS) application. It discusses factors to be considered under three headings: the business case, the technical feasibility, and stakeholder issues. It concludes with a case study of a feasibility study for a KBS to guide surgeons in the diagnosis and treatment of thyroid conditions.
The paper presents a knowledge representation language $\mathcal{A}log$ which extends ASP with aggregates. The goal is to have a language based on simple syntax and clear intuitive and mathematical semantics. We give some properties of $\mathcal{A}log$, an algorithm for computing its answer sets, and comparison with other approaches.
The article describes a technique for designing a domain ontology, presents a flowchart of the design algorithm, and considers an example of constructing a fragment of an ontology for the subject area of Computer Science.
Many fields are now snowed under with an avalanche of data, which raises considerable challenges for computer scientists. Meanwhile, robotics (among other fields) can often only use a few dozen data points because acquiring them involves a process that is expensive or time-consuming. How can an algorithm learn with only a few data points?
In "Inverse subsumption for complete explanatory induction", Yamamoto et al. investigate which inductive logic programming systems can learn a correct hypothesis $H$ by using inverse subsumption instead of inverse entailment. We prove that the inductive logic programming system Imparo is complete under inverse subsumption for learning a correct definite hypothesis $H$ with respect to a definite background theory $B$ and ground atomic examples $E$, by establishing that there exists a connected theory $T$ for $B$ and $E$ such that $H$ subsumes $T$.
The noisy-or and its generalization noisy-max have been utilized to reduce the complexity of knowledge acquisition. In this paper, we present a new representation of noisy-max that allows for efficient inference in general Bayesian networks. Empirical studies show that our method is capable of computing queries in well-known large medical networks, QMR-DT and CPCS, for which no previous exact inference method has been shown to perform well.
This paper demonstrates a method for using belief-network algorithms to solve influence-diagram problems. In particular, both exact and approximate belief-network algorithms may be applied to solve influence-diagram problems. More generally, knowing the relationship between belief-network and influence-diagram problems may be useful in the design and development of more efficient influence-diagram algorithms.
This paper discusses a target tracking problem in which no dynamic mathematical model is explicitly assumed. A nonlinear filter based on fuzzy if-then rules is developed. A comparison with a Kalman filter is made, and empirical results show that the fuzzy filter performs better. Intensive simulations suggest that a theoretical justification of the empirical results is possible.
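To make the rule-based approach concrete, here is a toy one-dimensional filter step built from two fuzzy if-then rules (our illustration under simple assumed memberships, not the paper's filter): a large innovation pulls the estimate toward the measurement, a small one leaves it near the prediction, with no dynamic model assumed.

```python
def fuzzy_filter_step(estimate, measurement, scale=1.0):
    """One update of a toy fuzzy if-then filter (illustrative only).

    Rule 1: IF |innovation| is SMALL THEN keep the current estimate.
    Rule 2: IF |innovation| is LARGE THEN move to the measurement.

    The rules are blended Takagi-Sugeno style: the membership degree
    of LARGE, here |e| / (|e| + scale) in [0, 1), weights how far the
    estimate moves along the innovation."""
    innovation = measurement - estimate
    mu_large = abs(innovation) / (abs(innovation) + scale)
    return estimate + mu_large * innovation

# Usage: a small innovation is heavily damped, a large one is
# followed almost completely.
x_small = fuzzy_filter_step(0.0, 1.0)    # moves only halfway
x_large = fuzzy_filter_step(0.0, 100.0)  # moves nearly all the way
```

Unlike a Kalman gain, which is derived from an assumed linear-Gaussian model, the blending weight here comes directly from the rule memberships, which is what lets such a filter operate without an explicit dynamic model.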
In recent years there has been a spate of papers describing systems for probabilistic reasoning which do not use numerical probabilities. In some cases, the simple set of values used by these systems makes it impossible to predict how a probability will change or which hypothesis is most likely given certain evidence. This paper concentrates on such situations and suggests a number of ways in which they may be resolved by refining the representation.
Much artificial intelligence research focuses on the problem of deducing the validity of unobservable propositions or hypotheses from observable evidence. Many of the knowledge representation techniques designed for this problem encode the relationship between evidence and hypothesis in a directed manner. Moreover, the direction in which evidence is stored is typically from evidence to hypothesis.
The apparent failure of individual probabilistic expressions to distinguish uncertainty about truths from uncertainty about probabilistic assessments has prompted researchers to seek formalisms in which the two types of uncertainty are given notational distinction. This paper demonstrates that the desired distinction is already a built-in feature of classical probabilistic models; thus, specialized notations are unnecessary.