The spread of plant diseases is influenced by a variety of environmental and pathogen factors that not only lower the production of fruits and grains but also cause quality deterioration. The timely detection and evaluation of diseases in food plants are crucial for the health of the agricultural industry and a country's capacity to produce enough food. This research focused on prevalent diseases of wheat and pea plants to explore the creation of an autonomous disease detection system tailored to the traits of the examined crops and adaptable to the particularities of other crops. The experiments with artificial intelligence (AI) models covered both traditional machine learning and sophisticated deep learning, including customized models and transfer learning models. Our framework employed Transformers, Random Forest, VGG16, a custom convolutional neural network (CNN), and two versions of You Only Look Once (YOLO), v5 and v8. Classical machine learning techniques were included to gauge the computational feasibility of advanced techniques such as transformers and YOLO. This work also developed a standard dataset by collecting healthy and diseased samples from various crop fields in Pakistan. The obtained results demonstrate the improved disease detection and classification accuracy of the fine-tuned VGG16 model, transformers, and the customized CNN compared with the Random Forest model and the YOLOv8 algorithm. Moreover, the YOLO versions and detection transformers have shown promising results for real-time disease identification, underscoring their prospects for plant disease monitoring systems.
Monte Carlo Tree Search and Monte Carlo Search achieve good results on many combinatorial problems. In this paper, we propose using Monte Carlo Search to design mathematical expressions that serve as exploration terms for Monte Carlo Tree Search algorithms. The optimized Monte Carlo Tree Search algorithms are PUCT and SHUSS, whose root exploration terms we design automatically. For small search budgets of 32 evaluations, the discovered root exploration terms make both algorithms competitive with standard PUCT.
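For orientation, the standard PUCT selection rule that such discovered exploration terms compete with can be sketched as follows. This is a minimal illustration, not code from the paper; the data layout (a dict mapping actions to Q-value, visit count, and prior) is an assumption for the example.

```python
import math

def puct_select(children, c_puct=1.0):
    """Pick the action maximizing the standard PUCT score:
    Q(a) + c * P(a) * sqrt(N) / (1 + n(a)),
    where N is the parent's total visit count.
    children: {action: (q_value, visit_count, prior)}"""
    total = sum(n for (_, n, _) in children.values())
    best_action, best_score = None, -math.inf
    for action, (q, n, prior) in children.items():
        # Exploitation term Q plus prior-weighted exploration bonus.
        score = q + c_puct * prior * math.sqrt(total) / (1 + n)
        if score > best_score:
            best_action, best_score = action, score
    return best_action
```

For example, a rarely visited action with a high prior can outrank a better-scoring but heavily visited one: `puct_select({"a": (0.5, 10, 0.3), "b": (0.4, 2, 0.7)})` returns `"b"`. The paper's approach replaces the exploration term in this formula with automatically discovered expressions.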
Rohan Bhambhoria, Samuel Dahan, Jonathan Li
et al.
This study evaluates the performance of general-purpose AI, like ChatGPT, in legal question-answering tasks, highlighting significant risks to legal professionals and clients. It suggests leveraging foundational models enhanced by domain-specific knowledge to overcome these issues. The paper advocates for creating open-source legal AI systems to improve accuracy, transparency, and narrative diversity, addressing general AI's shortcomings in legal contexts.
\emph{TxGraffiti} is a machine learning and heuristic-based artificial intelligence designed to automate the task of conjecturing in mathematics. Since its inception, TxGraffiti has generated many surprising conjectures that have led to publications in respected mathematical journals. In this paper, we outline the machine learning and heuristic techniques implemented by TxGraffiti. We also recall its contributions to the mathematical literature and announce a new online version of the program, available to anyone curious to explore conjectures in graph theory.
We present a toolkit for creating low-cost Mixture-of-Domain-Experts (MOE) from trained models. The toolkit can be used for creating a mixture from models or from adapters. We perform extensive tests and offer guidance on defining the architecture of the resulting MOE using the toolkit. A public repository is available.
In this paper we explore the possibility of using OpenAI's CLIP to perform logically coherent grounded visual reasoning. To that end, we formalize our terms and give a geometric analysis of how embeddings in CLIP's latent space would need to be configured in order for the system to be logically coherent. Our main conclusion is that, as usually configured, CLIP cannot perform such reasoning.
The paper discusses scientific and technological problems in the development of dynamic integrated expert systems. Extensions of a problem-oriented methodology for developing such systems are considered, with particular attention to temporal knowledge representation and processing.
We present a baseline approach for cross-modal knowledge fusion. Different basic fusion methods are evaluated on existing embedding approaches to show the potential of joining knowledge about certain concepts across modalities in a fused concept representation.
This work summarizes part of the current knowledge on high-level cognitive processes and their relation to biological hardware. On this basis, it is possible to identify paradoxes that could impact the development of future technologies and artificial intelligence: we may build a high-level cognitive machine only by sacrificing the principal attribute of a machine, its accuracy.
We give a non-FPT lower bound on the size of structured decision DNNF and OBDD with decomposable AND-nodes representing CNF-formulas of bounded incidence treewidth. Both models are known to be of FPT size for CNFs of bounded primal treewidth. To the best of our knowledge this is the first parameterized separation of primal treewidth and incidence treewidth for knowledge compilation models.
Humans display a tendency to pay more attention to bad outcomes, often in a way disproportionate to their statistical occurrence. They also display euphorism, as well as a preference for the current state of affairs (status quo bias). Based on the analysis of optimal solutions of infinite-horizon stationary optimization problems under imperfect state observation, we show that such human perception and decision biases can be grounded in a form of rationality. We also provide conditions (boundaries) for their possible occurrence and an analysis of their robustness. Thus, biases can be the product of rational behavior.
Saran Vardhanabhuti, Heather J. Ribaudo, Raphael J. Landovitz
et al.
Background. Some patients are not prescribed atazanavir because of concern about possible jaundice. Atazanavir-associated hyperbilirubinemia correlates with UGT1A1 rs887829 genotype. We examined bilirubin-related discontinuation of atazanavir in participants from AIDS Clinical Trials Group Study A5257. Methods. Discriminatory properties of UGT1A1 T/T genotype for predicting bilirubin-related atazanavir discontinuation through 96 weeks after antiretroviral initiation were estimated. Results. Genetic analyses involved 1450 participants, including 481 who initiated randomized atazanavir/ritonavir. Positive predictive values of rs887829 T/T for bilirubin-related discontinuation of atazanavir (with 95% confidence intervals [CIs]) were 20% (CI, 9%–36%) in Black, 60% (CI, 32%–84%) in White, and 29% (CI, 8%–58%) in Hispanic participants; negative predictive values were 97% (CI, 93%–99%), 95% (CI, 90%–98%), and 97% (CI, 90%–100%), respectively. Conclusions. Bilirubin-related discontinuation of atazanavir was rare in participants not homozygous for rs887829 T/T, regardless of race or ethnicity. We hypothesize that the higher rate of discontinuation among White participants homozygous for rs887829 T/T may reflect differences in physical manifestations of jaundice by race and ethnicity. Selective avoidance of atazanavir initiation among individuals with T/T genotypes would markedly reduce the likelihood of bilirubin-related discontinuation of atazanavir while allowing atazanavir to be prescribed to the majority of individuals. This genetic association will also affect atazanavir/cobicistat.
The paper introduces k-bounded MAP inference, a parameterization of MAP inference in Markov logic networks. k-Bounded MAP states are MAP states with at most k active ground atoms of hidden (non-evidence) predicates. We present a novel delayed column generation algorithm and provide empirical evidence that the algorithm efficiently computes k-bounded MAP states for meaningful real-world graph matching problems. The underlying idea is that, instead of solving one large optimization problem, it is often more efficient to tackle several small ones.
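As a rough formalization of the constraint described above (the notation here is ours, not the paper's): writing $H$ for the set of ground atoms of hidden predicates, and $w_i$, $n_i(x)$ for the weight and true-grounding count of formula $i$ in the Markov logic network, a $k$-bounded MAP state restricts the usual MAP objective to worlds with at most $k$ active hidden atoms:

```latex
\hat{x} \;=\; \operatorname*{arg\,max}_{x \,:\, \sum_{a \in H} \mathbb{1}[x_a = 1] \,\le\, k}
\;\; \sum_{i} w_i \, n_i(x)
```

The delayed column generation algorithm then exploits this cardinality bound by introducing active ground atoms into the optimization only as needed, rather than grounding the full network up front.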
Hannaneh Hajishirzi, Julia Hockenmaier, Erik T. Mueller
et al.
This paper presents an approach for learning to translate simple narratives, i.e., texts (sequences of sentences) describing dynamic systems, into coherent sequences of events without the need for labeled training data. Our approach incorporates domain knowledge in the form of preconditions and effects of events, and we show that it outperforms state-of-the-art supervised learning systems on the task of reconstructing RoboCup soccer games from their commentaries.
We introduce a new tractable temporal constraint language that strictly contains the Ord-Horn language of Bürckert and Nebel and the class of AND/OR precedence constraints. The algorithm we present for this language decides whether a given set of constraints is consistent in time quadratic in the input size. We also prove that (unlike Ord-Horn) this language cannot be solved by Datalog or by establishing local consistency.
This paper proposes a neuro-rough model based on a multi-layered perceptron (MLP) and rough sets. The neuro-rough model is then tested on modelling the risk of HIV from demographic data. The model is formulated within a Bayesian framework and trained using a Monte Carlo method with the Metropolis criterion. When tested to estimate the risk of HIV infection from the demographic data, the model achieved an accuracy of 62%. The proposed model combines the accuracy of the Bayesian MLP model with the transparency of the Bayesian rough set model.
In this paper we derive the equations for Loop Corrected Belief Propagation on a continuous variable Gaussian model. Using the exactness of the averages for belief propagation for Gaussian models, a different way of obtaining the covariances is found, based on Belief Propagation on cavity graphs. We discuss the relation of this loop correction algorithm to Expectation Propagation algorithms for the case in which the model is no longer Gaussian, but slightly perturbed by nonlinear terms.
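For reference, the exactness result alluded to above concerns the standard Gaussian belief propagation updates. In the scalar pairwise case with $p(x) \propto \exp(-\tfrac{1}{2} x^{\top} A x + b^{\top} x)$, the message precisions and marginal precisions (notation ours, following the usual formulation rather than the paper's) are

```latex
P_{i \to j} \;=\; -\,\frac{A_{ij}^{2}}{A_{ii} + \sum_{k \in N(i) \setminus \{j\}} P_{k \to i}},
\qquad
\hat{P}_{i} \;=\; A_{ii} + \sum_{k \in N(i)} P_{k \to i}
```

On tree-structured graphs these fixed-point equations yield the exact marginals; on loopy graphs the means remain exact at convergence while the precisions $\hat{P}_i$ are generally approximate, which is what the loop correction via cavity graphs addresses.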
A method is proposed for constructing approximate analytical expressions for the stationary marginal densities of general stochastic search processes. From the marginal densities, regions of the search space that contain the global optima with high probability can be readily identified. The density estimation procedure involves a controlled number of linear operations, with a computational cost per iteration that grows linearly with problem size.