Results for "cs.AI"

Showing 20 of ~561626 results · from arXiv, DOAJ, CrossRef

arXiv Open Access 2021
Self-checking Logical Agents

Stefania Costantini

This paper presents a comprehensive framework for run-time self-checking of logical agents by means of temporal axioms to be dynamically checked. These axioms are specified using an agent-oriented interval temporal logic defined for this purpose. We define syntax, semantics and pragmatics for this new logic, specifically tailored for application to agents. In the resulting framework, we encompass and extend our past work.

en cs.AI
arXiv Open Access 2021
Budget-Constrained Coalition Strategies with Discounting

Lia Bozzone, Pavel Naumov

Discounting future costs and rewards is a common practice in accounting, game theory, and machine learning. In spite of this, existing logics for reasoning about strategies with cost and resource constraints do not account for discounting. The paper proposes a sound and complete logical system for reasoning about budget-constrained strategic abilities that incorporates discounting into its semantics.

en cs.AI, cs.GT
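As a minimal illustration of the discounting the abstract refers to (the standard notion from accounting, game theory and RL, not the paper's logical system), the discounted total of a cost stream can be computed as:

```python
# Minimal sketch of discounting; illustrative only, not the paper's logic.

def discounted_cost(costs, gamma):
    """Total cost of a sequence of per-step costs under discount factor gamma."""
    return sum(c * gamma ** t for t, c in enumerate(costs))

# A constant cost of 1 per step approaches 1 / (1 - gamma) as the horizon grows.
print(discounted_cost([1.0] * 1000, 0.9))  # close to 10.0
```

Discounting makes far-future costs matter less, which is why a logic for budget-constrained strategies must account for it in its semantics.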
arXiv Open Access 2021
Scheduling Plans of Tasks

Davide Andrea Guastella

We present a heuristic algorithm for solving the problem of scheduling plans of tasks. The plans are ordered vectors of tasks, and tasks are basic operations carried out by resources. Plans are tied by temporal, precedence and resource constraints that make the scheduling problem hard to solve in polynomial time. The proposed heuristic, which has a polynomial worst-case time complexity, searches for a feasible schedule that maximizes the number of plans scheduled within a fixed time window, subject to the temporal, precedence and resource constraints.

en cs.AI
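The heuristic itself is not given in the abstract; the following is a hypothetical greedy sketch of the problem setting (unit-duration tasks, one resource per task, invented names), showing how precedence and resource constraints interact within a fixed time window:

```python
# Hypothetical greedy sketch of scheduling plans of tasks; the paper's actual
# heuristic is more involved. Tasks have unit duration and each needs one resource.

def greedy_schedule(plans, resources, horizon):
    """Greedily schedule whole plans; a plan is an ordered list of resource names.
    Returns the set of plan indices scheduled within the time window."""
    busy = {r: set() for r in resources}  # resource -> occupied time slots
    scheduled = set()
    for i, plan in enumerate(plans):
        slots, t, ok = [], 0, True
        for res in plan:  # tasks in precedence order
            while t < horizon and t in busy[res]:
                t += 1
            if t >= horizon:
                ok = False
                break
            slots.append((res, t))
            t += 1  # the next task must start after this one (precedence)
        if ok:  # commit the whole plan or none of it
            for res, s in slots:
                busy[res].add(s)
            scheduled.add(i)
    return scheduled
```

With a horizon of 3 time slots, two plans competing for one resource can both fit; shrinking the window to 2 forces the heuristic to drop one plan.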
arXiv Open Access 2021
Aggregating Bipolar Opinions (With Appendix)

Stefan Lauren, Francesco Belardinelli, Francesca Toni

We introduce a novel method to aggregate Bipolar Argumentation (BA) Frameworks expressing opinions by different parties in debates. We use Bipolar Assumption-based Argumentation (ABA) as an all-encompassing formalism for BA under different semantics. By leveraging recent results on judgement aggregation in Social Choice Theory, we prove several preservation results, both positive and negative, for relevant properties of Bipolar ABA.

en cs.AI, cs.MA
arXiv Open Access 2020
Reannealing of Decaying Exploration Based On Heuristic Measure in Deep Q-Network

Xing Wang, Alexander Vinel

Existing exploration strategies in reinforcement learning (RL) often either ignore the history or feedback of search, or are complicated to implement. There is also very limited literature showing their effectiveness over diverse domains. We propose an algorithm based on the idea of reannealing that aims to encourage exploration only when it is needed, for example when the algorithm detects that the agent is stuck in a local optimum. The approach is simple to implement. We perform an illustrative case study showing that it has the potential to both accelerate training and obtain a better policy.

en cs.AI
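The reannealing idea can be sketched as follows (hypothetical parameters and stagnation heuristic, not the paper's exact rule): the exploration rate decays as usual, but is reset to a higher value when recent episode returns stagnate.

```python
# Sketch of reannealing for epsilon-greedy exploration. The reset value, decay
# rate and stagnation test are invented for illustration.

def reanneal_epsilon(epsilon, returns, decay=0.995, reset=0.5,
                     window=20, tol=1e-3):
    """Decay epsilon each call; reset it when recent returns have stagnated."""
    recent = returns[-window:]
    stuck = len(recent) == window and max(recent) - min(recent) < tol
    return reset if stuck else max(epsilon * decay, 0.01)
```

Calling this once per episode keeps exploration low while learning progresses and re-injects it only when the return history suggests a local optimum.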
arXiv Open Access 2020
Technical Report: The Policy Graph Improvement Algorithm

Joni Pajarinen

Optimizing a partially observable Markov decision process (POMDP) policy is challenging. The policy graph improvement (PGI) algorithm for POMDPs represents the policy as a fixed size policy graph and improves the policy monotonically. Due to the fixed policy size, computation time for each improvement iteration is known in advance. Moreover, the method allows for compact understandable policies. This report describes the technical details of the PGI [1] and particle based PGI [2] algorithms for POMDPs in a more accessible way than [1] or [2] allowing practitioners and students to understand and implement the algorithms.

en cs.AI
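A fixed-size policy graph is a finite-state controller: each node carries an action, and the received observation selects the successor node. A minimal data-structure sketch (illustrative only; the PGI improvement step is not shown, and the example domain is a toy):

```python
# Minimal fixed-size policy graph for a POMDP, as a finite-state controller.
# Node labels, actions and observations below are invented for illustration.

class PolicyGraph:
    def __init__(self, actions, edges):
        self.actions = actions  # node -> action taken at that node
        self.edges = edges      # (node, observation) -> successor node

    def run(self, start, observations):
        """Execute the controller on an observation sequence; return actions taken."""
        node, taken = start, []
        for obs in observations:
            taken.append(self.actions[node])
            node = self.edges[(node, obs)]
        taken.append(self.actions[node])
        return taken
```

Because the graph size is fixed, the memory and per-iteration computation of any improvement scheme over it are known in advance, which is the property the report highlights.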
arXiv Open Access 2020
Defeasible reasoning in Description Logics: an overview on DL^N

Piero A. Bonatti, Iliana M. Petrova, Luigi Sauro

DL^N is a recent approach that extends description logics with defeasible reasoning capabilities. In this paper we provide an overview of DL^N, illustrating the underlying knowledge engineering requirements as well as the characteristic features that shield DL^N from some recurrent semantic and computational drawbacks. We also compare DL^N with some alternative nonmonotonic semantics, highlighting the relationships between the KLM postulates and DL^N.

en cs.AI
arXiv Open Access 2020
Active Fairness Instead of Unawareness

Boris Ruf, Marcin Detyniecki

The possible risk that AI systems could promote discrimination by reproducing and enforcing unwanted bias in data has been broadly discussed in research and society. Many current legal standards demand the removal of sensitive attributes from data in order to achieve "fairness through unawareness". We argue that this approach is obsolete in the era of big data, where large datasets with highly correlated attributes are common. On the contrary, we propose the active use of sensitive attributes with the purpose of observing and controlling any kind of discrimination, thus leading to fair results.

en cs.AI
arXiv Open Access 2020
Belief Base Revision for Further Improvement of Unified Answer Set Programming

Kumar Sankar Ray, Sandip Paul, Diganta Saha

A belief base revision operator is developed. The belief base is represented using Unified Answer Set Programs, which are capable of representing imprecise and uncertain information and of performing nonmonotonic reasoning with them. The base revision operator is developed using the Removed Set Revision strategy and is characterized with respect to the postulates that a base revision operator satisfies.

en cs.AI
arXiv Open Access 2020
Choice functions based on sets of strict partial orders: an axiomatic characterisation

Jasper De Bock

Methods for choosing from a set of options are often based on a strict partial order on these options, or on a set of such partial orders. I here provide a very general axiomatic characterisation for choice functions of this form. It includes as special cases axiomatic characterisations for choice functions based on (sets of) total orders, (sets of) weak orders, (sets of) coherent lower previsions and (sets of) probability measures.

en cs.AI, cs.LO
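One concrete instance of the family characterised in the abstract (illustrative only, not the paper's general axiomatisation): choose the options that are maximal under the intersection of a given set of strict partial orders, i.e. options no other option dominates in every order.

```python
# Choice function from a set of strict partial orders: an option is rejected
# only if some other option is strictly better according to every order.

def choose(options, orders):
    """orders: list of sets of (better, worse) pairs, each a strict partial order."""
    dominated = {
        y for x in options for y in options
        if x != y and all((x, y) in order for order in orders)
    }
    return [o for o in options if o not in dominated]
```

With a single total order this reduces to picking the maximum; with conflicting orders several mutually incomparable options can survive, which is the hallmark of choice under sets of orders.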
arXiv Open Access 2020
Representing Pure Nash Equilibria in Argumentation

Bruno Yun, Srdjan Vesic, Nir Oren

In this paper we describe an argumentation-based representation of normal form games, and demonstrate how argumentation can be used to compute pure strategy Nash equilibria. Our approach builds on Modgil's Extended Argumentation Frameworks. We demonstrate its correctness, prove several theoretical properties it satisfies, and outline how it can be used to explain why certain strategies are Nash equilibria to a non-expert human user.

en cs.AI, cs.GT
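For reference, a pure strategy Nash equilibrium in a two-player normal form game is a strategy pair from which neither player can improve by deviating unilaterally. A brute-force check of this textbook definition (not the paper's argumentation-based method):

```python
# Enumerate pure strategy Nash equilibria of a two-player normal form game.

def pure_nash(payoffs):
    """payoffs[(i, j)] = (row payoff, column payoff) for strategies i, j."""
    rows = {i for i, _ in payoffs}
    cols = {j for _, j in payoffs}
    return [
        (i, j) for i in rows for j in cols
        # row player cannot gain by switching rows, given column j ...
        if payoffs[(i, j)][0] >= max(payoffs[(k, j)][0] for k in rows)
        # ... and column player cannot gain by switching columns, given row i
        and payoffs[(i, j)][1] >= max(payoffs[(i, k)][1] for k in cols)
    ]
```

On the prisoner's dilemma this returns only mutual defection, the game's unique pure equilibrium.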
arXiv Open Access 2020
Impact of Legal Requirements on Explainability in Machine Learning

Adrien Bibal, Michael Lognoul, Alexandre de Streel et al.

The requirements on explainability imposed by European laws and their implications for machine learning (ML) models are not always clear. In that perspective, our research analyzes explanation obligations imposed for private and public decision-making, and how they can be implemented by machine learning techniques.

en cs.AI, cs.CY
arXiv Open Access 2020
Nmbr9 as a Constraint Programming Challenge

Mikael Zayenz Lagerkvist

Modern board games are a rich source of interesting and new challenges for combinatorial problems. The game Nmbr9 is a solitaire style puzzle game using polyominoes. The rules of the game are simple to explain, but modelling the game effectively using constraint programming is hard. This abstract presents the game, contributes new generalized variants of the game suitable for benchmarking and testing, and describes a model for the presented variants. The question of the top possible score in the standard game is an open challenge.

en cs.AI
arXiv Open Access 2020
Probably Approximately Correct Explanations of Machine Learning Models via Syntax-Guided Synthesis

Daniel Neider, Bishwamittra Ghosh

We propose a novel approach to understanding the decision making of complex machine learning models (e.g., deep neural networks) using a combination of probably approximately correct (PAC) learning and a logic inference methodology called syntax-guided synthesis (SyGuS). We prove that our framework produces explanations that, with high probability, make only a few errors, and show empirically that it is effective in generating small, human-interpretable explanations.

en cs.AI
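The PAC guarantee mentioned in the abstract rests on standard sample bounds. For a finite hypothesis class H, the textbook bound (not the paper's SyGuS-specific analysis) says that n ≥ (1/ε)(ln|H| + ln(1/δ)) samples suffice for a consistent hypothesis to have error at most ε with probability at least 1 − δ:

```python
# Standard PAC sample-complexity bound for a finite hypothesis class.
import math

def pac_samples(h_size, eps, delta):
    """Samples sufficient for error <= eps with probability >= 1 - delta."""
    return math.ceil((math.log(h_size) + math.log(1 / delta)) / eps)
```

The bound grows only logarithmically in the class size and in 1/δ, which is why small explanation languages keep the required number of model queries modest.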
arXiv Open Access 2019
Memory Management in Resource-Bounded Agents

Valentina Pitoni

In artificial intelligence, multi-agent systems constitute an interesting typology of society modeling and have vast fields of application in this regard, extending to the human sciences. Logic is often used to model such kinds of systems because it eases verification, explainability and validation; for this reason, we manage agents' memory by extending a previous work with the concept of time.

en cs.AI, cs.LO
arXiv Open Access 2019
A Temporal Module for Logical Frameworks

Valentina Pitoni, Stefania Costantini

In artificial intelligence, multi-agent systems constitute an interesting typology of society modeling and have vast fields of application in this regard, extending to the human sciences. Logic is often used to model such kinds of systems, as it is easier to verify than other approaches and provides explainability and potential validation. In this paper we define a time module suitable for adding time to many logic representations of agents.

en cs.AI, cs.LO
arXiv Open Access 2018
Semantically Enhanced Models for Commonsense Knowledge Acquisition

Ikhlas Alhussien, Erik Cambria, Zhang NengSheng

Commonsense knowledge is paramount to enabling intelligent systems. Typically, it is characterized as implicit and ambiguous, thereby hindering the automation of its acquisition. To address these challenges, this paper presents semantically enhanced models that enable reasoning by resolving part of the commonsense ambiguity. The proposed models are incorporated into a knowledge graph embedding (KGE) framework for knowledge base completion. Experimental results show the effectiveness of the new semantic models in commonsense reasoning.

en cs.AI, cs.CL
arXiv Open Access 2018
Planning with Arithmetic and Geometric Attributes

David Folqué, Sainbayar Sukhbaatar, Arthur Szlam et al.

A desirable property of an intelligent agent is its ability to understand its environment to quickly generalize to novel tasks and compose simpler tasks into more complex ones. If the environment has geometric or arithmetic structure, the agent should exploit these for faster generalization. Building on recent work that augments the environment with user-specified attributes, we show that further equipping these attributes with the appropriate geometric and arithmetic structure brings substantial gains in sample complexity.

en cs.AI

Page 19 of 28082