Enhancing Angular Sensitivity of Segmented Antineutrino Detectors for Reactor Monitoring Applications
Brian C. Crow, Max A. A. Dornfest, John G. Learned
et al.
We present a potential improvement over the standard method developed to determine antineutrino directionality in inverse-beta-decay detectors. The previously developed method for quantifying directionality in monolithic and segmented detectors suffers from methodological ambiguities. Here we present a new directionality algorithm, with accompanying error analysis, based on a measure of ``distance'' between two matrices. We report findings for reactor-antineutrino directionality and emphasize that the algorithm applies broadly wherever computationally efficient 2D pattern matching is desired. Data from the detector segments are treated as a matrix, and validating the algorithm reduces to comparing a Monte Carlo-generated ``empirical'' data set, produced for a particular orientation of the neutrino beam, against a simulated data set. We identify an optimal segmentation scale in the low-count regime. We also discuss the shortcomings of the conventional method and how this knowledge can be applied to segmented detectors, hybrid designs, and generalized validation, agnostic to the physics of the detector design.
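The matrix-distance comparison described in this abstract can be sketched in a few lines. The Frobenius norm, the 4x4 grid size, and the orientation labels below are illustrative assumptions for a toy example, not the authors' actual choices:

```python
import numpy as np

def matrix_distance(observed, template):
    """Frobenius-norm "distance" between two detector-segment count matrices."""
    return np.linalg.norm(observed - template)

def best_orientation(observed, templates):
    """Pick the simulated orientation whose template is closest to the data."""
    return min(templates, key=lambda angle: matrix_distance(observed, templates[angle]))

# Toy 4x4 segment grids standing in for simulations of three beam orientations.
rng = np.random.default_rng(0)
templates = {angle: rng.poisson(5.0, (4, 4)).astype(float) for angle in (0, 45, 90)}
observed = templates[45] + rng.normal(0.0, 0.5, (4, 4))  # noisy copy of one template
print(best_orientation(observed, templates))
```

In this toy setup the noisy copy of the 45-degree template is recovered, since the noise norm is far smaller than the typical distance between independent templates.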
hep-ex, physics.ins-det
Measurement of solar neutrino interaction rate below 3.49 MeV in Super-Kamiokande-IV
Super-Kamiokande Collaboration, A. Yankelevich
et al.
Super-Kamiokande has observed $^{8}\text{B}$ solar neutrino elastic scattering at recoil electron kinetic energies ($E_{kin}$) as low as 3.49 MeV to study neutrino flavor conversion within the sun. At SK-observable energies, these conversions are dominated by the Mikheyev-Smirnov-Wolfenstein effect. An upturn in the electron neutrino survival probability, where vacuum neutrino oscillations become dominant, is predicted to occur at lower energies, but radioactive background increases exponentially with decreasing energy. New machine learning approaches provide substantial background reduction below 3.49 MeV, making statistical extraction of solar neutrino interactions feasible. This article presents an analysis of the solar neutrino interaction rate at $E_{kin}$ < 3.49 MeV with the full SK-IV period, using data from a wideband intelligent trigger when available and a boosted decision tree for event selection. A solar neutrino signal is observed in the range 2.99 MeV < $E_{kin}$ < 3.49 MeV with $2.76\sigma$ significance and a data-to-unoscillated-MC ratio of $0.307^{+0.112}_{-0.111}$. These additional low-energy data have a negligible effect on the $1\sigma$ intervals of the fits to the solar neutrino energy spectrum but a noticeable effect on the best fit when using the exponential parameterization.
Attitudes and perceptions towards the use of artificial intelligence chatbots in medical journal peer review: A protocol for a large-scale, international cross-sectional survey
Jeremy Y. Ng, Daivat Bhavsar, Neha Dhanvanthry
et al.
Background: Artificial intelligence (AI) chatbots are advanced conversational programmes capable of performing tasks such as identifying methodological flaws, verifying references, and improving language clarity in manuscripts. Their use in peer review has the potential to enhance efficiency, reduce reviewer workload, and address inconsistencies in review quality. However, concerns remain regarding their reliability, ethical implications, and transparency in decision-making, and little is known about how peer reviewers perceive these tools. Objectives: To assess peer reviewers' attitudes and perceptions towards the use of AI chatbots in the peer review process, including their familiarity with AI, perceived benefits and challenges, ethical considerations, and expectations for future roles. Methods: An international cross-sectional survey will be conducted among academic peer reviewers. The survey will collect data on participants' prior experience with AI, perceptions of the utility of chatbots in supporting peer review, concerns related to ethics and transparency, and anticipated future applications. Results: This study will report descriptive and comparative analyses of reviewers' responses, highlighting patterns in attitudes and perceptions by demographic and professional characteristics. Conclusions: The findings may offer evidence to inform the development of future policies and best practices for the ethical and effective integration of AI chatbots in peer review, with the goal of improving review quality while addressing potential risks.
Academies and learned societies, Bibliography. Library science. Information resources
Learned Compression for Compressed Learning
Dan Jacobellis, Neeraja J. Yadwadkar
Modern sensors produce increasingly rich streams of high-resolution data. Due to resource constraints, machine learning systems discard the vast majority of this information via resolution reduction. Compressed-domain learning allows models to operate on compact latent representations, yielding higher effective resolution for the same budget. However, existing compression systems are not ideal for compressed learning. Linear transform coding and end-to-end learned compression systems reduce bitrate, but do not uniformly reduce dimensionality; thus, they do not meaningfully increase efficiency. Generative autoencoders reduce dimensionality, but their adversarial or perceptual objectives lead to significant information loss. To address these limitations, we introduce WaLLoC (Wavelet Learned Lossy Compression), a neural codec architecture that combines linear transform coding with nonlinear dimensionality-reducing autoencoders. WaLLoC sandwiches a shallow, asymmetric autoencoder and entropy bottleneck between a forward and inverse invertible wavelet packet transform. Across several key metrics, WaLLoC outperforms the autoencoders used in state-of-the-art latent diffusion models. WaLLoC does not require perceptual or adversarial losses to represent high-frequency detail, providing compatibility with modalities beyond RGB images and stereo audio. WaLLoC's encoder consists almost entirely of linear operations, making it exceptionally efficient and suitable for mobile computing, remote sensing, and learning directly from compressed data. We demonstrate WaLLoC's capability for compressed-domain learning across several tasks, including image classification, colorization, document understanding, and music source separation. Our code, experiments, and pre-trained audio and image codecs are available at https://ut-sysml.org/walloc
Learned Data Compression: Challenges and Opportunities for the Future
Qiyu Liu, Siyuan Han, Jianwei Liao
et al.
Compressing integer keys is a fundamental operation across multiple communities, such as database management (DB), information retrieval (IR), and high-performance computing (HPC). Recent advances in \emph{learned indexes} have inspired the development of \emph{learned compressors}, which leverage simple yet compact machine learning (ML) models to compress large-scale sorted keys. The core idea behind learned compressors is to \emph{losslessly} encode sorted keys by approximating them with \emph{error-bounded} ML models (e.g., piecewise linear functions) and using a \emph{residual array} to guarantee accurate key reconstruction. While the concept of learned compressors remains in its early stages of exploration, our benchmark results demonstrate that an SIMD-optimized learned compressor can significantly outperform state-of-the-art CPU-based compressors. Drawing on our preliminary experiments, this vision paper explores the potential of learned data compression to enhance critical areas in DBMS and related domains. Furthermore, we outline the key technical challenges that existing systems must address when integrating this emerging methodology.
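The model-plus-residuals idea behind learned compressors can be sketched with a single linear model standing in for the paper's error-bounded piecewise functions; the function names and round-trip API here are illustrative, not from the paper:

```python
import numpy as np

def compress(keys):
    """Fit a line to sorted integer keys and keep the integer residuals.

    The (slope, intercept) pair plus the residual array permit exact
    reconstruction: this is the model-plus-residuals idea in miniature.
    """
    keys = np.asarray(keys, dtype=np.int64)
    positions = np.arange(len(keys))
    slope, intercept = np.polyfit(positions, keys, 1)
    predictions = np.rint(slope * positions + intercept).astype(np.int64)
    residuals = keys - predictions  # small, highly compressible integers
    return slope, intercept, residuals

def decompress(slope, intercept, residuals):
    """Rebuild the keys exactly from the model and the residual array."""
    positions = np.arange(len(residuals))
    predictions = np.rint(slope * positions + intercept).astype(np.int64)
    return predictions + residuals

keys = np.cumsum(np.random.default_rng(1).integers(1, 10, 1000))
model = compress(keys)
assert np.array_equal(decompress(*model), keys)  # lossless round-trip
```

A real learned compressor would bound the residual magnitude per segment and bit-pack the residual array; here a single unbounded model suffices to show why reconstruction is lossless by construction.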
High Visual-Fidelity Learned Video Compression
Meng Li, Yibo Shi, Jing Wang
et al.
With the growing demand for video applications, many advanced learned video compression methods have been developed, outperforming traditional methods in terms of objective quality metrics such as PSNR. Existing methods primarily focus on objective quality but tend to overlook perceptual quality. Directly incorporating perceptual loss into a learned video compression framework is nontrivial and raises several perceptual quality issues that need to be addressed. In this paper, we investigated these issues in learned video compression and propose a novel High Visual-Fidelity Learned Video Compression framework (HVFVC). Specifically, we design a novel confidence-based feature reconstruction method to address the issue of poor reconstruction in newly-emerged regions, which significantly improves the visual quality of the reconstruction. Furthermore, we present a periodic compensation loss to mitigate the checkerboard artifacts related to deconvolution operation and optimization. Extensive experiments have shown that the proposed HVFVC achieves excellent perceptual quality, outperforming the latest VVC standard with only 50% required bitrate.
Subspace Adaptation Prior for Few-Shot Learning
Mike Huisman, Aske Plaat, Jan N. van Rijn
Gradient-based meta-learning techniques aim to distill useful prior knowledge from a set of training tasks such that new tasks can be learned more efficiently with gradient descent. While these methods have achieved successes in various scenarios, they commonly adapt all parameters of trainable layers when learning new tasks. This neglects potentially more efficient learning strategies for a given task distribution and may be susceptible to overfitting, especially in few-shot learning where tasks must be learned from a limited number of examples. To address these issues, we propose Subspace Adaptation Prior (SAP), a novel gradient-based meta-learning algorithm that jointly learns good initialization parameters (prior knowledge) and layer-wise parameter subspaces in the form of operation subsets that should be adaptable. In this way, SAP can learn which operation subsets to adjust with gradient descent based on the underlying task distribution, simultaneously decreasing the risk of overfitting when learning new tasks. We demonstrate that this ability is helpful as SAP yields superior or competitive performance in few-shot image classification settings (gains between 0.1% and 3.9% in accuracy). Analysis of the learned subspaces demonstrates that low-dimensional operations often yield high activation strengths, indicating that they may be important for achieving good few-shot learning performance. For reproducibility purposes, we publish all our research code publicly.
Almost Tight Error Bounds on Differentially Private Continual Counting
Monika Henzinger, Jalaj Upadhyay, Sarvagya Upadhyay
The first large-scale deployment of private federated learning uses differentially private counting in the continual release model as a subroutine (Google AI blog titled "Federated Learning with Formal Differential Privacy Guarantees"). In this case, a concrete bound on the error is very relevant to reduce the privacy parameter. The standard mechanism for continual counting is the binary mechanism. We present a novel mechanism and show that its mean squared error is both asymptotically optimal and a factor 10 smaller than the error of the binary mechanism. We also show that the constants in our analysis are almost tight by giving non-asymptotic lower and upper bounds that differ only in the constants of lower-order terms. Our algorithm is a matrix mechanism for the counting matrix and takes constant time per release. We also use our explicit factorization of the counting matrix to give an upper bound on the excess risk of the private learning algorithm of Denisov et al. (NeurIPS 2022). Our lower bound for any continual counting mechanism is the first tight lower bound on continual counting under approximate differential privacy. It is achieved using a new lower bound on a certain factorization norm, denoted by $\gamma_F(\cdot)$, in terms of the singular values of the matrix. In particular, we show that for any complex matrix, $A \in \mathbb{C}^{m \times n}$, \[ \gamma_F(A) \geq \frac{1}{\sqrt{m}}\|A\|_1, \] where $\|\cdot\|_1$ denotes the Schatten-1 norm. We believe this technique will be useful in proving lower bounds for a larger class of linear queries. To illustrate the power of this technique, we show the first lower bound on the mean squared error for answering parity queries.
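For context, the binary mechanism that serves as the baseline in this abstract can be sketched as follows. The inverse-CDF Laplace sampling and the even budget split across tree levels are standard textbook choices, not details taken from this paper:

```python
import math
import random

def binary_mechanism(stream, epsilon, seed=0):
    """Differentially private continual counting via the binary-tree mechanism.

    Each prefix sum is assembled from O(log T) noisy partial sums ("p-sums"),
    each perturbed with Laplace noise of scale levels/epsilon, so the privacy
    budget is split evenly across the tree levels.
    """
    rng = random.Random(seed)
    T = len(stream)
    levels = max(1, math.ceil(math.log2(T + 1)))
    scale = levels / epsilon
    alpha = [0.0] * (levels + 1)  # exact p-sums per tree level
    noisy = [0.0] * (levels + 1)  # noisy counterparts actually released
    releases = []
    for t in range(1, T + 1):
        i = (t & -t).bit_length() - 1  # lowest set bit of t
        alpha[i] = sum(alpha[:i]) + stream[t - 1]
        for j in range(i):  # lower-level p-sums were merged into level i
            alpha[j] = noisy[j] = 0.0
        u = rng.random() - 0.5  # Laplace sample via inverse CDF
        noise = -scale * math.copysign(math.log(1 - 2 * abs(u)), u)
        noisy[i] = alpha[i] + noise
        # the prefix sum at time t is the sum of p-sums for t's set bits
        releases.append(sum(noisy[j] for j in range(levels + 1) if (t >> j) & 1))
    return releases
```

The factor-10 improvement claimed in the paper comes from replacing this tree-structured noise with a carefully chosen matrix factorization, which this sketch does not attempt.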
CL2R: Compatible Lifelong Learning Representations
Niccolo Biondi, Federico Pernici, Matteo Bruni
et al.
In this paper, we propose a method to partially mimic natural intelligence for the problem of lifelong learning representations that are compatible. We take the perspective of a learning agent that is interested in recognizing object instances in an open dynamic universe in a way in which any update to its internal feature representation does not render the features in the gallery unusable for visual search. We refer to this learning problem as Compatible Lifelong Learning Representations (CL2R) as it considers compatible representation learning within the lifelong learning paradigm. We identify stationarity as the property that the feature representation is required to hold to achieve compatibility and propose a novel training procedure that encourages local and global stationarity on the learned representation. Due to stationarity, the statistical properties of the learned features do not change over time, making them interoperable with previously learned features. Extensive experiments on standard benchmark datasets show that our CL2R training procedure outperforms alternative baselines and state-of-the-art methods. We also provide novel metrics to specifically evaluate compatible representation learning under catastrophic forgetting in various sequential learning tasks. Code at https://github.com/NiccoBiondi/CompatibleLifelongRepresentation.
Discovering Latent Concepts Learned in BERT
Fahim Dalvi, Abdul Rafae Khan, Firoj Alam
et al.
A large number of studies that analyze deep neural network models and their ability to encode various linguistic and non-linguistic concepts provide an interpretation of the inner mechanics of these models. The scope of the analyses is limited to pre-defined concepts that reinforce the traditional linguistic knowledge and do not reflect on how novel concepts are learned by the model. We address this limitation by discovering and analyzing latent concepts learned in neural network models in an unsupervised fashion and provide interpretations from the model's perspective. In this work, we study: i) what latent concepts exist in the pre-trained BERT model, ii) how the discovered latent concepts align or diverge from classical linguistic hierarchy and iii) how the latent concepts evolve across layers. Our findings show: i) a model learns novel concepts (e.g. animal categories and demographic groups), which do not strictly adhere to any pre-defined categorization (e.g. POS, semantic tags), ii) several latent concepts are based on multiple properties which may include semantics, syntax, and morphology, iii) the lower layers in the model dominate in learning shallow lexical concepts while the higher layers learn semantic relations and iv) the discovered latent concepts highlight potential biases learned in the model. We also release a novel BERT ConceptNet dataset (BCN) consisting of 174 concept labels and 1M annotated instances.
Provable Benefit of Multitask Representation Learning in Reinforcement Learning
Yuan Cheng, Songtao Feng, Jing Yang
et al.
While representation learning has become a powerful technique for reducing sample complexity in reinforcement learning (RL) in practice, theoretical understanding of its advantage is still limited. In this paper, we theoretically characterize the benefit of representation learning under the low-rank Markov decision process (MDP) model. We first study multitask low-rank RL (as upstream training), where all tasks share a common representation, and propose a new multitask reward-free algorithm called REFUEL. REFUEL learns both the transition kernel and the near-optimal policy for each task, and outputs a well-learned representation for downstream tasks. Our result demonstrates that multitask representation learning is provably more sample-efficient than learning each task individually, as long as the total number of tasks is above a certain threshold. We then study the downstream RL in both online and offline settings, where the agent is assigned a new task sharing the same representation as the upstream tasks. For both online and offline settings, we develop a sample-efficient algorithm, and show that it finds a near-optimal policy with the suboptimality gap bounded by the sum of the estimation error of the learned representation in upstream and a vanishing term as the number of downstream samples becomes large. Our downstream results of online and offline RL further capture the benefit of employing the learned representation from upstream as opposed to learning the representation of the low-rank model directly. To the best of our knowledge, this is the first theoretical study that characterizes the benefit of representation learning in exploration-based reward-free multitask RL for both upstream and downstream tasks.
A Closer Look at Learned Optimization: Stability, Robustness, and Inductive Biases
James Harrison, Luke Metz, Jascha Sohl-Dickstein
Learned optimizers -- neural networks that are trained to act as optimizers -- have the potential to dramatically accelerate training of machine learning models. However, even when meta-trained across thousands of tasks at huge computational expense, blackbox learned optimizers often struggle with stability and generalization when applied to tasks unlike those in their meta-training set. In this paper, we use tools from dynamical systems to investigate the inductive biases and stability properties of optimization algorithms, and apply the resulting insights to designing inductive biases for blackbox optimizers. Our investigation begins with a noisy quadratic model, where we characterize conditions in which optimization is stable, in terms of eigenvalues of the training dynamics. We then introduce simple modifications to a learned optimizer's architecture and meta-training procedure which lead to improved stability, and improve the optimizer's inductive bias. We apply the resulting learned optimizer to a variety of neural network training tasks, where it outperforms the current state of the art learned optimizer -- at matched optimizer computational overhead -- with regard to optimization performance and meta-training speed, and is capable of generalization to tasks far different from those it was meta-trained on.
Faster Fundamental Graph Algorithms via Learned Predictions
Justin Y. Chen, Sandeep Silwal, Ali Vakilian
et al.
We consider the question of speeding up classic graph algorithms with machine-learned predictions. In this model, algorithms are furnished with extra advice learned from past or similar instances. Given the additional information, we aim to improve upon the traditional worst-case run-time guarantees. Our contributions are the following: (i) We give a faster algorithm for minimum-weight bipartite matching via learned duals, improving the recent result by Dinitz, Im, Lavastida, Moseley and Vassilvitskii (NeurIPS, 2021); (ii) We extend the learned dual approach to the single-source shortest path problem (with negative edge lengths), achieving an almost linear runtime given sufficiently accurate predictions which improves upon the classic fastest algorithm due to Goldberg (SIAM J. Comput., 1995); (iii) We provide a general reduction-based framework for learning-based graph algorithms, leading to new algorithms for degree-constrained subgraph and minimum-cost $0$-$1$ flow, based on reductions to bipartite matching and the shortest path problem. Finally, we give a set of general learnability theorems, showing that the predictions required by our algorithms can be efficiently learned in a PAC fashion.
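The learned-dual speed-up for shortest paths with negative edges can be illustrated with a standard Johnson-style reweighting. This sketch assumes the predicted potentials happen to be feasible; it is not the paper's full algorithm, which also handles and repairs inaccurate predictions:

```python
import heapq

def shortest_paths_with_predicted_potentials(n, edges, source, y):
    """Dijkstra on a graph reweighted by predicted potentials y.

    If w(u, v) + y[u] - y[v] >= 0 for every edge (i.e. the prediction is a
    feasible potential), Dijkstra on the reweighted graph recovers exact
    shortest-path distances even when some original weights are negative.
    """
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        rw = w + y[u] - y[v]
        assert rw >= -1e-9, "prediction infeasible; a repair step would be needed"
        adj[u].append((v, rw))
    dist = [float("inf")] * n
    dist[source] = 0.0
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    # undo the reweighting to recover distances in the original graph
    return [d - y[source] + y[v] if d < float("inf") else d
            for v, d in enumerate(dist)]
```

The closer the learned potentials are to true shortest-path distances, the closer the reweighted edge costs are to zero, which is what drives the near-linear runtime under accurate predictions.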
Darwin in Brazil
Marcos Josephino
Like dozens of other travelers, Charles Darwin visited Brazil. While he was enchanted by the Brazilian fauna and flora, Darwin was shocked by the way enslaved Africans were treated, to the point of writing in his diary, as he departed Brazil, that he hoped never again to set foot in a slaveholding country. The aim of this article is to show the human side of this scientist, his opinions, and the accounts he left of a cruel system that was present in our country for three centuries.
Academies and learned societies, Natural history (General)
Towards Robust Graph Contrastive Learning
Nikola Jovanović, Zhao Meng, Lukas Faber
et al.
We study the problem of adversarially robust self-supervised learning on graphs. In the contrastive learning framework, we introduce a new method that increases the adversarial robustness of the learned representations through i) adversarial transformations and ii) transformations that not only remove but also insert edges. We evaluate the learned representations in a preliminary set of experiments, obtaining promising results. We believe this work takes an important step towards incorporating robustness as a viable auxiliary task in graph contrastive learning.
Application of a bamboo planing machine for irat-bamboo craftsmen as an effort to improve the quality of bamboo shavings
Ika Yuniwati, Anggra Fiveriaty, Ninik Sri Rahayu
et al.
Bamboo is a plant easily found in Indonesia. It grows readily and proliferates, especially in tropical climates, and is commonly found along river banks. One consequence of this abundance is bamboo's wide use in household goods and various crafts, including wickerwork, which craftsmen sell as a source of income. Gintangan is a village in Banyuwangi whose people largely specialize in wickerwork, which requires thin bamboo sheets as its woven raw material. A bamboo planing machine was developed to assist the partner in Gintangan Village, Sanggar Kerajinan Bambu Karya Nyata, by accelerating the process of slicing bamboo into thin sheets and replacing manual planing, which is very time-consuming. The machine can rapidly increase wickerwork production and raise the partner's profit by reducing the labor cost of bamboo planing.
Food processing and manufacture, Academies and learned societies
Implementing sustainability-infused learning in Adiwiyata-program schools to support sustainable development
Eny Hartadiyati Wasikin Haryanti, Fibria Kaswinarni
The Adiwiyata program is implemented to develop responsible school citizens in efforts to protect and manage the environment through good school governance, in support of Sustainable Development. SMAN 3 Demak is one of the schools that runs the Adiwiyata Program. Activities carried out to support Sustainable Development include environmental hygiene, waste management, and reforestation. However, classroom learning has not yet been directed toward Sustainable Development, so it is necessary to implement Learning Containing Sustainability. The purpose of this service activity is to provide teachers with: (1) insight into and understanding of Sustainable Development, and (2) skills in compiling Learning Containing Sustainability. The methods used are counseling, a workshop, and mentoring. The counseling activities on understanding Sustainable Development showed that 93.75% of participants achieved a minimum score of 75, meeting the completeness criterion. In the workshop activities on Learning Containing Sustainability (embedding the concept of Sustainable Development in learning materials), 87.5% of participants were at least moderately successful. These results meet the planned output targets. Mentoring of teachers has been carried out in accordance with the learning materials in their respective classes.
Food processing and manufacture, Academies and learned societies
Illiteracy eradication based on the "All Smart Children" approach for elementary school teachers
Awal Nur Kholifatur Rosyidah, Lalu Hamdian Affandi, Muhammad Erfan
et al.
This service activity aims to improve the literacy and numeracy skills of elementary school (SD) children in Karang Sidemen Village, North Batukliang District, Central Lombok Regency through the "All Smart Children" Program, which trains teachers on the urgency of basic literacy and numeracy learning for children and on how to teach basic literacy and numeracy using the TaRL (Teaching at the Right Level) method. The activity was carried out in stages: 1) a coordination meeting, 2) a simulation of the material presentation, 3) preparation of activity logistics, 4) implementation of the workshop in 2 waves, and 5) evaluation of success indicators based on the increase in pretest-posttest scores, grouped into 4 levels. Low-grade and high-grade teachers showed different characteristics during the training activities. Based on the pretest and posttest results, 24 of the 31 participants improved by at least one level. The workshop can be considered fairly successful, with an overall success rate of 77.41%.
Food processing and manufacture, Academies and learned societies
Understanding Learned Reward Functions
Eric J. Michaud, Adam Gleave, Stuart Russell
In many real-world tasks, it is not possible to procedurally specify an RL agent's reward function. In such cases, a reward function must instead be learned from interacting with and observing humans. However, current techniques for reward learning may fail to produce reward functions which accurately reflect user preferences. Absent significant advances in reward learning, it is thus important to be able to audit learned reward functions to verify whether they truly capture user preferences. In this paper, we investigate techniques for interpreting learned reward functions. In particular, we apply saliency methods to identify failure modes and predict the robustness of reward functions. We find that learned reward functions often implement surprising algorithms that rely on contingent aspects of the environment. We also discover that existing interpretability techniques often attend to irrelevant changes in reward output, suggesting that reward interpretability may need significantly different methods from policy interpretability.