Artificial intelligence (AI)-powered technology integration in social fintech has transformative potential to advance social responsibility and support sustainable development. This research examines a blockchain-based lending mechanism that integrates centralized exchanges (CEX) and decentralized exchanges (DEX) to facilitate seamless financial transactions and equitable resource allocation. AI-driven tools are utilized to enhance transparency, accuracy, and security, while smart contracts facilitate the efficient management and verification of loan distribution. The proposed system focuses on helping underserved communities, poor regions, and green businesses, promoting fair and sustainable finance in line with the Sustainable Development Goals (SDGs). The hybrid ecosystem combines the liquidity and regulatory compliance of centralized exchanges with the autonomy and reduced intermediary involvement of decentralized exchanges. AI enhances loan processing, reducing biases and inefficiencies. Combined with smart contracts, the framework provides scalable, auditable lending aligned with sustainability goals. Machine learning (ML) algorithms verified loan eligibility using the borrower dataset; the Random Forest algorithm performed well owing to its robustness and ensemble learning. Optuna was then used to tune the model, and SHapley Additive exPlanations (SHAP) identified the key features. Finally, smart contracts ensured secure, autonomous execution of green loans based on ML verification and sustainability criteria.
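As a minimal sketch of the eligibility-verification step described above, the snippet below wires together a Random Forest classifier, Optuna hyperparameter search, and SHAP feature attribution. The file name, column names, and parameter ranges are illustrative assumptions, not taken from the paper's dataset.

```python
# Hypothetical sketch: Random Forest + Optuna tuning + SHAP attribution
# for loan-eligibility verification. Data layout is assumed, not the paper's.
import optuna
import shap
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

df = pd.read_csv("borrowers.csv")                     # assumed file with an 'eligible' label
X, y = df.drop(columns="eligible"), df["eligible"]
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

def objective(trial):
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 100, 600),
        "max_depth": trial.suggest_int("max_depth", 3, 20),
        "min_samples_leaf": trial.suggest_int("min_samples_leaf", 1, 10),
    }
    model = RandomForestClassifier(**params, random_state=0)
    return cross_val_score(model, X_train, y_train, cv=5, scoring="roc_auc").mean()

study = optuna.create_study(direction="maximize")     # hyperparameter search
study.optimize(objective, n_trials=50)

best = RandomForestClassifier(**study.best_params, random_state=0).fit(X_train, y_train)
explainer = shap.TreeExplainer(best)                  # SHAP highlights key eligibility features
shap_values = explainer.shap_values(X_test)
```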
We analyse the expressiveness of the two-valued semantics of abstract argumentation frameworks, normal logic programs and abstract dialectical frameworks. By expressiveness we mean the ability to encode a desired set of two-valued interpretations over a given propositional signature using only atoms from that signature. While the computational complexity of the two-valued model existence problem for all these languages is (almost) the same, we show that the languages form a neat hierarchy with respect to their expressiveness.
Argumentation is one of the most popular approaches to defining a non-monotonic formalism, and several argumentation-based semantics have been proposed for defeasible logic programs. Recently, a new approach based on the notion of conflict resolution was proposed, but only with a declarative semantics. This paper provides a more procedural counterpart by developing skeptical and credulous argument games for the complete semantics, and soundness and completeness theorems are proved for both games. After that, the distribution of a defeasible logic program into several contexts is investigated, and both argument games are adapted to multi-context systems.
Space and time are two critical components of many real-world systems. For this reason, the analysis of anomalies in spatiotemporal data has been of great interest. In this work, the application of tensor decomposition and eigenspace techniques to spatiotemporal hotspot detection is investigated. An algorithm called SST-Hotspot is proposed, which accounts for spatiotemporal variations in the data and detects hotspots by matching eigenvector elements of the case and population tensors. The experimental results demonstrate the promising application of tensor decomposition and eigenvector-based techniques to hotspot analysis.
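As a loose illustration of the eigenvector-matching idea (not the SST-Hotspot algorithm itself), the sketch below uses a simplified matrix analogue: it compares the dominant spatial singular vectors of assumed location-by-time case and population matrices and flags locations whose case loading is disproportionately large relative to the population.

```python
# Simplified matrix analogue of eigenvector matching for hotspot scoring.
# The paper works with higher-order tensors; this sketch flattens to matrices.
import numpy as np

def leading_spatial_eigenvector(M):
    # leading left singular vector = dominant spatial pattern over locations
    U, _, _ = np.linalg.svd(M, full_matrices=False)
    return np.abs(U[:, 0])            # sign-invariant loading per location

def hotspot_scores(cases, population):
    u_case = leading_spatial_eigenvector(cases)
    u_pop = leading_spatial_eigenvector(population)
    return u_case / (u_pop + 1e-12)   # high ratio = case pattern exceeds population pattern

rng = np.random.default_rng(0)
pop = rng.poisson(50, size=(30, 12)).astype(float)    # 30 locations x 12 time steps
cases = rng.poisson(5, size=(30, 12)).astype(float)
cases[7] += 40                                        # inject a synthetic hotspot
print(hotspot_scores(cases, pop).argmax())            # expected to flag location 7
```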
We show how to find a small loop cutset in a Bayesian network. Finding such a loop cutset is the first step in the method of conditioning for inference. Our algorithm for finding a loop cutset, called MGA, finds a loop cutset which is guaranteed in the worst case to contain less than twice the number of variables contained in a minimum loop cutset. We test MGA on randomly generated graphs and find that the average ratio between the number of instances associated with the algorithm's output and the number of instances associated with a minimum solution is 1.22.
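For intuition only, the sketch below implements a generic greedy cycle-cutset heuristic on an undirected graph skeleton (prune leaves, then repeatedly remove the highest-degree vertex). It is not the MGA algorithm from the paper and ignores the refinement that a loop-cutset vertex must not be a sink of the loops it breaks.

```python
def greedy_cycle_cutset(adj):
    """Greedy heuristic: return vertices whose removal leaves the
    undirected graph acyclic. adj maps each vertex to its neighbour set."""
    adj = {v: set(ns) for v, ns in adj.items()}
    cutset = []

    def prune_leaves():
        # vertices of degree <= 1 cannot lie on any cycle
        changed = True
        while changed:
            changed = False
            for v in list(adj):
                if len(adj[v]) <= 1:
                    for u in adj[v]:
                        adj[u].discard(v)
                    del adj[v]
                    changed = True

    prune_leaves()
    while adj:                                    # a non-empty pruned graph still has a cycle
        v = max(adj, key=lambda u: len(adj[u]))   # break cycles at the highest-degree vertex
        cutset.append(v)
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
        prune_leaves()
    return cutset

# usage: two loops sharing vertex 'c'
graph = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b', 'd', 'e'},
         'd': {'c', 'e'}, 'e': {'c', 'd'}}
print(greedy_cycle_cutset(graph))   # ['c'] breaks both loops
```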
This paper discusses techniques for performing efficient decision-theoretic planning. We give an overview of the DRIPS decision-theoretic refinement planning system, which uses abstraction to efficiently identify optimal plans. We present techniques for automatically generating search control information, which can significantly improve the planner's performance. We evaluate the efficiency of DRIPS both with and without the search control rules on a complex medical planning problem and compare its performance to that of a branch-and-bound decision tree algorithm.
Much recent research in decision-theoretic planning has adopted Markov decision processes (MDPs) as the model of choice, and has attempted to make their solution more tractable by exploiting problem structure. One particular algorithm, structured policy construction, achieves this by means of a decision-theoretic analog of goal regression using action descriptions based on Bayesian networks with tree-structured conditional probability tables. The algorithm as presented is not able to deal with actions with correlated effects. We describe a new decision-theoretic regression operator that corrects this weakness. While conceptually straightforward, this extension requires a somewhat more complicated technical approach.
Bayesian approaches to learn the graphical structure of Bayesian Belief Networks (BBNs) from databases share the assumption that the database is complete, that is, no entry is reported as unknown. Attempts to relax this assumption involve the use of expensive iterative methods to discriminate among different structures. This paper introduces a deterministic method to learn the graphical structure of a BBN from a possibly incomplete database. Experimental evaluations show a significant robustness of this method and a remarkable independence of its execution time from the number of missing data.
The bat algorithm (BA) is a bio-inspired algorithm developed by Yang in 2010, and it has been found to be very efficient. As a result, the literature has expanded significantly in the last three years. This paper provides a timely review of the bat algorithm and its new variants. A wide range of diverse applications and case studies are also reviewed and summarized briefly here. Further research topics are also discussed.
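For readers unfamiliar with the method, the following is a minimal sketch of the standard bat algorithm applied to a toy sphere function; the parameter values (frequency range, loudness decay alpha, pulse-rate growth gamma) are illustrative defaults, not recommendations from the review.

```python
import numpy as np

def bat_algorithm(objective, dim=5, n_bats=20, n_iter=200,
                  f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9,
                  lower=-5.0, upper=5.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lower, upper, (n_bats, dim))   # bat positions
    v = np.zeros((n_bats, dim))                    # velocities
    A = np.ones(n_bats)                            # loudness
    r0 = rng.uniform(0, 1, n_bats)                 # initial pulse emission rates
    r = r0.copy()
    fit = np.array([objective(xi) for xi in x])
    best, best_f = x[fit.argmin()].copy(), fit.min()

    for t in range(1, n_iter + 1):
        for i in range(n_bats):
            freq = f_min + (f_max - f_min) * rng.random()
            v[i] += (x[i] - best) * freq
            cand = np.clip(x[i] + v[i], lower, upper)
            if rng.random() > r[i]:                # local random walk around the best bat
                cand = np.clip(best + 0.01 * A.mean() * rng.normal(size=dim),
                               lower, upper)
            f_cand = objective(cand)
            if rng.random() < A[i] and f_cand <= fit[i]:
                x[i], fit[i] = cand, f_cand
                A[i] *= alpha                              # loudness decreases
                r[i] = r0[i] * (1 - np.exp(-gamma * t))    # pulse rate increases
            if f_cand <= best_f:
                best, best_f = cand.copy(), f_cand
    return best, best_f

# toy usage: minimise the sphere function sum(z_i^2)
best_x, best_val = bat_algorithm(lambda z: float(np.sum(z ** 2)))
print(round(best_val, 6))
```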
In the canonical examples underlying Shafer-Dempster theory, beliefs over the hypotheses of interest are derived from a probability model for a set of auxiliary hypotheses. Beliefs are derived via a compatibility relation connecting the auxiliary hypotheses to subsets of the primary hypotheses. A belief function differs from a Bayesian probability model in that one does not condition on those parts of the evidence for which no probabilities are specified. The significance of this difference in conditioning assumptions is illustrated with two examples giving rise to identical belief functions but different Bayesian probability distributions.
Intercausal reasoning is a common inference pattern involving the probabilistic dependence of causes of an observed common effect. The sign of this dependence is captured by a qualitative property called product synergy. The current definition of product synergy is insufficient for intercausal reasoning where there are additional uninstantiated causes of the common effect. We propose a new definition of product synergy and prove its adequacy for intercausal reasoning with direct and indirect evidence for the common effect. The new definition is based on a new property, matrix half positive semi-definiteness, a weakened form of matrix positive semi-definiteness.
Tree structures have been shown to provide an efficient framework for propagating beliefs [Pearl, 1986]. This paper studies the problem of finding an optimal approximating tree. The star decomposition scheme for sets of three binary variables [Lazarsfeld, 1966; Pearl, 1986] is shown to enhance the class of probability distributions that can support tree structures; such structures are called tree-decomposable structures. The logarithm scoring rule is found to be an appropriate optimality criterion to evaluate different tree-decomposable structures. Characteristics of such structures closest to the actual belief network are identified using the logarithm rule, and greedy and exact techniques are developed to find the optimal approximation.
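The classic baseline for this problem is the Chow-Liu construction: the tree that maximizes the log-likelihood (equivalently, minimizes KL divergence, which is what the logarithm scoring rule measures) is a maximum-weight spanning tree over pairwise mutual information. The sketch below implements that baseline on binary data; it does not include the paper's star-decomposition extension with hidden variables.

```python
import numpy as np
from itertools import combinations

def mutual_information(xi, xj):
    """Empirical mutual information between two binary columns."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_ab = np.mean((xi == a) & (xj == b))
            p_a, p_b = np.mean(xi == a), np.mean(xj == b)
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

def chow_liu_tree(data):
    """Maximum-weight spanning tree over pairwise mutual information
    (Kruskal with union-find); data is an (n_samples, n_vars) 0/1 array."""
    n_vars = data.shape[1]
    edges = sorted(
        ((mutual_information(data[:, i], data[:, j]), i, j)
         for i, j in combinations(range(n_vars), 2)),
        reverse=True)
    parent = list(range(n_vars))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    tree = []
    for w, i, j in edges:                   # add heaviest edges that do not close a cycle
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j, w))
    return tree

# usage on synthetic binary data
rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(500, 6))
print(chow_liu_tree(data))
```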
The classical propositional assumption-based model is extended to incorporate probabilities for the assumptions. Then it is placed into the framework of evidence theory. Several authors, such as Laskey and Lehner (1989) and Provan (1990), have already proposed a similar point of view, but the first paper is not as much concerned with mathematical foundations, and Provan's paper develops in a different direction. Here we thoroughly develop and present the mathematical foundations of this theory, together with computational methods adapted from Reiter and De Kleer (1987) and Inoue (1992). Finally, recently proposed techniques for computing degrees of support are presented.
This paper examines the interdependence generated between two parent nodes with a common instantiated child node, such as two hypotheses sharing common evidence. The relation so generated has been termed "intercausal." It is shown by construction that intercausal independence is possible for binary distributions at one state of evidence. For such "CICI" distributions, the two measures of intercausal effect, "multiplicative synergy" and "additive synergy," are equal. The well-known "noisy-or" model is an example of such a distribution. This introduces novel semantics for the noisy-or, as a model of the degree of conflict among competing hypotheses of a common observation.
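A minimal numeric check of these two claims, using a leak-free noisy-or with illustrative cause strengths p1 and p2:

```python
# Noisy-or CPT for an effect E with two causes C1, C2 (no leak term):
#   P(E=1 | c1, c2) = 1 - (1 - p1)**c1 * (1 - p2)**c2
p1, p2 = 0.7, 0.4
P = lambda c1, c2: 1 - (1 - p1) ** c1 * (1 - p2) ** c2

# Additive synergy of {C1, C2} with respect to E=1
additive = P(1, 1) + P(0, 0) - P(1, 0) - P(0, 1)
# Multiplicative (product) synergy of {C1, C2} with respect to E=1
multiplicative = P(1, 1) * P(0, 0) - P(1, 0) * P(0, 1)

print(additive, multiplicative)   # both equal -p1 * p2 = -0.28

# At the evidence state E=0 the CPT factorises,
#   P(E=0 | c1, c2) = (1 - p1)**c1 * (1 - p2)**c2,
# so conditioning on E=0 leaves C1 and C2 independent: the intercausal
# independence ("CICI") situation the abstract refers to.
```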
This volume contains the papers presented at the fifth workshop on Answer Set Programming and Other Computing Paradigms (ASPOCP 2012) held on September 4th, 2012 in Budapest, co-located with the 28th International Conference on Logic Programming (ICLP 2012). It thus continues a series of previous events co-located with ICLP, aiming at facilitating the discussion about crossing the boundaries of current ASP techniques in theory, solving, and applications, in combination with or inspired by other computing paradigms.
The direct effect of one event on another can be defined and measured by holding constant all intermediate variables between the two. Indirect effects present conceptual and practical difficulties (in nonlinear models), because they cannot be isolated by holding certain variables constant. This paper shows a way of defining any path-specific effect that does not invoke blocking the remaining paths. This permits the assessment of a more natural type of direct and indirect effects, one that is applicable in both linear and nonlinear models. The paper establishes conditions under which such assessments can be estimated consistently from experimental and nonexperimental data, and thus extends path-analytic techniques to nonlinear and nonparametric models.
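To make the notion concrete, one widely used member of this family (stated here only as an illustration, in counterfactual notation; the precise identification conditions are given in the paper) is the natural direct effect of changing the treatment from x to x' while the mediators M respond as if the treatment were still x:

\[
\mathrm{DE}_{x,x'}(Y) \;=\; \mathbb{E}\!\left[Y_{x',\,M_x}\right] - \mathbb{E}\!\left[Y_x\right],
\qquad
\mathrm{DE}_{x,x'}(Y) \;=\; \sum_{m}\bigl[\mathbb{E}(Y \mid x', m) - \mathbb{E}(Y \mid x, m)\bigr]\,P(m \mid x),
\]

where the second expression holds under suitable no-confounding conditions; the natural indirect effect is obtained dually by fixing the treatment and letting the mediators change.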
Unaided human decision making appears to systematically violate consistency constraints imposed by normative theories; these biases in turn appear to justify the application of formal decision-analytic models. It is argued that both claims are wrong. In particular, we will argue that the "confirmation bias" is premised on an overly narrow view of how conflicting evidence is and ought to be handled. Effective decision aiding should focus on supporting the control processes by means of which knowledge is extended into novel situations and in which assumptions are adopted, utilized, and revised. The Non-Monotonic Probabilist represents initial work toward such an aid.
This paper presents a decision-theoretic approach to statistical inference that satisfies the likelihood principle (LP) without using prior information. Unlike the Bayesian approach, which also satisfies LP, we do not assume knowledge of the prior distribution of the unknown parameter. With respect to information that can be obtained from an experiment, our solution is more efficient than Wald's minimax solution. However, with respect to information assumed to be known before the experiment, our solution demands less input than the Bayesian solution.