Building on decades of success in digital infrastructure and biomedical innovation, we propose the Personal Care Utility (PCU) - a cybernetic system for lifelong health guidance. PCU is conceived as a global, AI-powered utility that continuously orchestrates multimodal data, knowledge, and services to assist individuals and populations alike. Drawing on multimodal agents, event-centric modeling, and contextual inference, it offers three essential capabilities: (1) trusted health information tailored to the individual, (2) proactive health navigation and behavior guidance, and (3) ongoing interpretation of recovery and treatment response after medical events. Unlike conventional episodic care, PCU functions as an ambient, adaptive companion - observing, interpreting, and guiding health in real time across daily life. By integrating personal sensing, experiential computing, and population-level analytics, PCU promises not only improved outcomes for individuals but also a new substrate for public health and scientific discovery. We describe the architecture, design principles, and implementation challenges of this emerging paradigm.
Recent guidance methods in diffusion models steer reverse sampling by perturbing the model to construct an implicit weak model and guide generation away from it. Among these approaches, attention perturbation has demonstrated strong empirical performance in unconditional scenarios where classifier-free guidance is not applicable. However, existing attention perturbation methods lack principled approaches for determining where perturbations should be applied, particularly in Diffusion Transformer (DiT) architectures where quality-relevant computations are distributed across layers. In this paper, we investigate the granularity of attention perturbations, ranging from the layer level down to individual attention heads, and discover that specific heads govern distinct visual concepts such as structure, style, and texture quality. Building on this insight, we propose "HeadHunter", a systematic framework for iteratively selecting attention heads that align with user-centric objectives, enabling fine-grained control over generation quality and visual attributes. In addition, we introduce SoftPAG, which linearly interpolates each selected head's attention map toward an identity matrix, providing a continuous knob to tune perturbation strength and suppress artifacts. Our approach not only mitigates the oversmoothing issues of existing layer-level perturbation but also enables targeted manipulation of specific visual styles through compositional head selection. We validate our method on modern large-scale DiT-based text-to-image models including Stable Diffusion 3 and FLUX.1, demonstrating superior performance in both general quality enhancement and style-specific guidance. Our work provides the first head-level analysis of attention perturbation in diffusion models, uncovering interpretable specialization within attention layers and enabling practical design of effective perturbation strategies.
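SoftPAG's interpolation toward the identity is a one-line operation. The sketch below assumes a row-stochastic per-head attention map; the function name and shapes are illustrative, not the paper's API:

```python
import numpy as np

def softpag_attention(attn: np.ndarray, lam: float) -> np.ndarray:
    """Linearly interpolate an attention map toward the identity matrix.

    attn: (n, n) row-stochastic attention map of one selected head.
    lam:  perturbation strength in [0, 1]; lam = 0 leaves the head
          untouched, lam = 1 replaces it with the identity.
    """
    n = attn.shape[-1]
    return (1.0 - lam) * attn + lam * np.eye(n)

# Tiny demo: a uniform attention map over 4 tokens, perturbed halfway.
A = np.full((4, 4), 0.25)
half = softpag_attention(A, 0.5)
```

Because both endpoints are row-stochastic, the interpolated map still has rows summing to one, so the perturbed head remains a valid attention distribution at every strength.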
Achieving long-term safety in uncertain or extreme environments while accounting for human preferences remains a fundamental challenge for autonomous systems. Existing methods often trade off long-term guarantees for fast real-time control and cannot adapt to variability in human preferences or risk tolerance. To address these limitations, we propose a language-guided adaptive probabilistic safety certificate (PSC) framework that guarantees long-term safety for stochastic systems under environmental uncertainty while accommodating diverse human preferences. The proposed framework integrates natural-language inputs from users and Bayesian estimators of the environment into adaptive safety certificates that explicitly account for user preferences, system dynamics, and quantified uncertainties. Our key technical innovation leverages probabilistic invariance--a generalization of forward invariance to a probability space--to obtain myopic safety conditions with long-term safety guarantees that integrate language guidance, model information, and quantified uncertainty. We validate the framework through numerical simulations of autonomous lane-keeping with human-in-the-loop guidance under uncertain and extreme road conditions, demonstrating enhanced safety-performance trade-offs, adaptability to changing environments, and personalization to different user preferences.
Deploying large, complex policies in the real world requires the ability to steer them to fit the needs of a situation. Most common steering approaches, like goal-conditioning, require training the robot policy with a distribution of test-time objectives in mind. To overcome this limitation, we present DynaGuide, a steering method for diffusion policies using guidance from an external dynamics model during the diffusion denoising process. DynaGuide separates the dynamics model from the base policy, which gives it multiple advantages, including the ability to steer towards multiple objectives, enhance underrepresented base policy behaviors, and maintain robustness on low-quality objectives. The separate guidance signal also allows DynaGuide to work with off-the-shelf pretrained diffusion policies. We demonstrate the performance and features of DynaGuide against other steering approaches in a series of simulated and real experiments, showing an average steering success of 70% on a set of articulated CALVIN tasks and outperforming goal-conditioning by 5.4x when steered with low-quality objectives. We also successfully steer an off-the-shelf real robot policy to express preference for particular objects and even create novel behavior. Videos and more can be found on the project website: https://dynaguide.github.io
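The idea of steering a frozen diffusion policy with a separate dynamics model can be sketched as a classifier-guidance-style update at each denoising step. All names, signatures, and the finite-difference gradient below are illustrative assumptions, not DynaGuide's implementation:

```python
import numpy as np

def guided_denoise_step(action, denoiser, dynamics, state, goal, scale):
    """One guided denoising step (sketch of the separate-guidance idea).

    The base policy's denoiser proposes a cleaner action; an external
    dynamics model then nudges it toward actions whose predicted outcome
    lands near the desired goal state.
    """
    base = denoiser(action)  # base-policy denoising, untouched by the guide
    # Gradient of -||dynamics(state, a) - goal||^2 w.r.t. the action,
    # via finite differences so any black-box dynamics model works.
    eps = 1e-4
    grad = np.zeros_like(action)
    err0 = np.sum((dynamics(state, action) - goal) ** 2)
    for i in range(action.size):
        a = action.copy()
        a[i] += eps
        grad[i] = -(np.sum((dynamics(state, a) - goal) ** 2) - err0) / eps
    return base + scale * grad  # steer the sample toward the objective

# Toy demo with linear dynamics s' = s + a and a goal one unit away.
state, goal = np.zeros(2), np.array([1.0, 0.0])
step = guided_denoise_step(np.zeros(2),
                           denoiser=lambda a: a,
                           dynamics=lambda s, a: s + a,
                           state=state, goal=goal, scale=0.1)
```

Keeping the dynamics model outside the policy is what lets the same pretrained denoiser be steered toward different objectives by swapping only the guidance term.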
Machine learning techniques have demonstrated their effectiveness in achieving autonomy and optimality for nonlinear and high-dimensional dynamical systems. However, traditional black-box machine learning methods often lack formal stability guarantees, which are critical for safety-sensitive aerospace applications. This paper proposes a comprehensive framework that combines control Lyapunov functions with supervised learning to provide certifiably stable, time- and fuel-optimal guidance for rendezvous maneuvers governed by Clohessy-Wiltshire dynamics. The framework is easily extensible to nonlinear control-affine systems. A novel neural candidate Lyapunov function is developed to ensure positive definiteness. Subsequently, a control policy is defined in which the thrust direction vector minimizes the Lyapunov function's time derivative and the thrust throttle is set to the minimal required value. This approach ensures that all loss terms related to the control Lyapunov function are either naturally satisfied or replaced by the derived control policy. To jointly supervise the Lyapunov function and the control policy, a simple loss function is introduced, leveraging optimal state-control pairs obtained by a polynomial-maps-based method. Consequently, the trained neural network not only certifies the Lyapunov function but also generates a near-optimal guidance policy, even for the bang-bang fuel-optimal problem. Extensive numerical simulations are presented to validate the proposed method.
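The thrust-direction rule described above has a closed form for control-affine systems: for xdot = f(x) + G(x)u with a norm bound on u, the direction minimizing Vdot is opposite to G(x)^T grad V(x). The toy below illustrates this with a hand-written quadratic standing in for the paper's neural Lyapunov candidate; every detail is an assumption, not the paper's code:

```python
import numpy as np

def clf_thrust(x, G, grad_V, u_max):
    """Thrust minimizing Vdot = grad_V . (f + G u) subject to ||u|| <= u_max."""
    g = G(x).T @ grad_V(x)
    if np.linalg.norm(g) < 1e-12:
        return np.zeros(G(x).shape[1])  # gradient vanishes: no thrust needed
    return -u_max * g / np.linalg.norm(g)

# Double-integrator toy: x = [position, velocity], thrust acts on velocity.
f = lambda x: np.array([x[1], 0.0])
G = lambda x: np.array([[0.0], [1.0]])
# V = p^2 + p*v + v^2 is positive definite; its gradient:
grad_V = lambda x: np.array([2 * x[0] + x[1], x[0] + 2 * x[1]])

x = np.array([1.0, 0.0])
u = clf_thrust(x, G, grad_V, u_max=1.0)
Vdot = grad_V(x) @ (f(x) + G(x) @ u)  # negative: V decreases along the flow
```

With the gradient-descent thrust direction, Vdot is as negative as the throttle bound allows, which is the sense in which the CLF loss terms are "naturally satisfied" by the policy's construction.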
Text-guided diffusion models have become essential for high-quality image synthesis, enabling dynamic image editing. In image editing, two crucial aspects are editability, which determines the extent of modification, and faithfulness, which reflects how well unaltered elements are preserved. However, achieving optimal results is challenging because of the inherent trade-off between editability and faithfulness. To address this, we propose Faithfulness Guidance and Scheduling (FGS), which enhances faithfulness with minimal impact on editability. FGS incorporates faithfulness guidance to strengthen the preservation of input image information and introduces a scheduling strategy to resolve misalignment between editability and faithfulness. Experimental results demonstrate that FGS achieves superior faithfulness while maintaining editability. Moreover, its compatibility with various editing methods enables precise, high-quality image edits across diverse tasks.
While diffusion-based T2I models have achieved remarkable image generation quality, they also enable easy creation of harmful content, raising social concerns and highlighting the need for safer generation. Existing inference-time guiding methods lack both adaptivity--adjusting guidance strength based on the prompt--and selectivity--targeting only unsafe regions of the image. Our method, SP-Guard, addresses these limitations by estimating prompt harmfulness and applying a selective guidance mask to guide only unsafe areas. Experiments show that SP-Guard generates safer images than existing methods while minimizing unintended content alteration. Beyond improving safety, our findings highlight the importance of transparency and controllability in image generation.
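The selective, adaptivity-aware guidance described above can be sketched as a masked update on the latent. Every name and shape below is an illustrative assumption, not SP-Guard's actual interface:

```python
import numpy as np

def masked_safety_guidance(latent, unsafe_dir, mask, strength):
    """Apply safety guidance only where the mask marks unsafe regions.

    latent:     current image latent, here an (H, W) toy array.
    unsafe_dir: guidance direction pushing away from unsafe content.
    mask:       binary map, 1 = unsafe region, 0 = leave untouched
                (selectivity).
    strength:   scalar estimated from prompt harmfulness (adaptivity).
    """
    return latent - strength * mask * unsafe_dir

lat = np.ones((2, 2))
mask = np.array([[1.0, 0.0], [0.0, 0.0]])  # only the top-left pixel is unsafe
out = masked_safety_guidance(lat, unsafe_dir=np.ones((2, 2)),
                             mask=mask, strength=0.5)
```

Masking the update is what minimizes unintended alteration: regions outside the mask pass through the guidance step unchanged.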
With the rapid development of information and communication technology (ICT), smart classrooms have become an important trend in education modernization. This article reviews the development of smart classrooms at both the hardware and software levels. On the hardware side, we describe the transformation from single-mode construction of basic ICT facilities to multi-modal information cloud platforms. On the software side, we trace the evolution of supporting algorithms and technologies, from platform construction techniques to the integration of advanced artificial intelligence (AI), from the perspectives of learning analytics and data mining. We conclude with guidance and suggestions for future educators, researchers, and policymakers on the future direction of smart classrooms.
In this paper, we address the problem of enclosing an arbitrarily moving target in three dimensions by a single pursuer while ensuring the pursuer's safety by preventing collisions with the target. The proposed guidance strategy steers the pursuer to a safe region of space surrounding and excluding the target, allowing it to maintain a certain distance from the latter while offering greater flexibility in positioning and converging to any orbit within this safe zone. We leverage the concept of the Lyapunov Barrier Function as a powerful tool to constrain the distance between the pursuer and the target within asymmetric bounds, thereby ensuring the pursuer's safety within the predefined region. Further, we demonstrate the effectiveness of the proposed guidance law in managing arbitrarily maneuvering targets and other uncertainties (such as vehicle/autopilot dynamics and external disturbances) by enabling the pursuer to consistently achieve stable global enclosing behaviors by switching between stable enclosing trajectories within the safe region whenever necessary, even in response to aggressive target maneuvers. To attest to the merits of our work, we conduct experimental tests with various plant models, including a high-fidelity quadrotor model within Software-in-the-loop (SITL) simulations, encompassing various challenging target maneuver scenarios and requiring only relative information for successful execution.
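One common way a Lyapunov barrier function enforces asymmetric distance bounds is a log-barrier that is finite only inside the band. The sketch below uses that standard form, which may differ from the paper's exact construction:

```python
import numpy as np

def asymmetric_barrier(d, d_min, d_max):
    """Log-barrier on the pursuer-target distance d.

    Finite only for d_min < d < d_max, blowing up at either bound, so any
    trajectory keeping the barrier bounded keeps the pursuer inside the
    safe band around (and excluding) the target. Shifted so the minimum
    at the band's midpoint is zero.
    """
    return (-np.log((d - d_min) * (d_max - d))
            + np.log(((d_max - d_min) / 2) ** 2))

mid = asymmetric_barrier(1.5, d_min=1.0, d_max=2.0)   # zero at the midpoint
near = asymmetric_barrier(1.01, d_min=1.0, d_max=2.0)  # large near a bound
```

Because the two factors (d - d_min) and (d_max - d) enter independently, the bounds need not be symmetric about any nominal distance, matching the asymmetric-bound requirement in the abstract.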
This paper introduces a landing guidance strategy for reusable launch vehicles (RLVs) using a model predictive approach based on sequential convex programming (SCP). The proposed approach devises two distinct optimal control problems (OCPs): planning a fuel-optimal landing trajectory that accommodates practical path constraints specific to RLVs, and determining real-time optimal tracking commands. This dual optimization strategy allows for reduced computational load through adjustable prediction horizon lengths in the tracking task, achieving near closed-loop performance. Enhancements in model fidelity for the tracking task are achieved through an alternative rotational dynamics representation, enabling a more stable numerical solution of the OCP and accounting for vehicle transient dynamics. Furthermore, modifications of aerodynamic force in both planning and tracking phases are proposed, tailored for thrust-vector-controlled RLVs, to reduce the fidelity gap without adding computational complexity. Extensive 6-DOF simulation experiments validate the effectiveness and improved guidance performance of the proposed algorithm.
Sebastien Origer, Dario Izzo, Giacomo Acciarini
et al.
We perform uncertainty propagation on an event manifold for Guidance & Control Networks (G&CNETs), aiming to enhance the certification tools for neural networks in this field. This work utilizes three previously solved optimal control problems with varying levels of dynamics nonlinearity and event manifold complexity. The G&CNETs are trained to represent the optimal control policies of a time-optimal interplanetary transfer, a mass-optimal landing on an asteroid and energy-optimal drone racing, respectively. For each of these problems, we describe analytically the terminal conditions on an event manifold with respect to initial state uncertainties. Crucially, this expansion does not depend on time but solely on the initial conditions of the system, thereby making it possible to study the robustness of the G&CNET at any specific stage of a mission defined by the event manifold. Once this analytical expression is found, we provide confidence bounds by applying the Cauchy-Hadamard theorem and perform uncertainty propagation using moment generating functions. While Monte Carlo-based (MC) methods can yield the results we present, this work is driven by the recognition that MC simulations alone may be insufficient for future certification of neural networks in guidance and control applications.
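For context, the standard way a moment generating function yields confidence bounds is Chernoff's inequality; the paper's exact construction on the event manifold may differ, but it illustrates how an MGF of the propagated uncertainty translates into a tail bound:

```latex
% Chernoff bound from the moment generating function M_X(t) = E[e^{tX}]:
% for any t > 0,
P(X \ge a) \le e^{-t a}\, M_X(t),
\qquad\text{hence}\qquad
P(X \ge a) \le \inf_{t > 0} e^{-t a}\, M_X(t).
```

Unlike Monte Carlo estimates, such bounds hold deterministically once the MGF is known, which is what makes them attractive for certification.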
Muhammad Gul Zain Ali Khan, Muhammad Ferjad Naeem, Luc Van Gool
et al.
Continual Learning aims to learn a single model on a sequence of tasks without having access to data from previous tasks. The biggest challenge in the domain still remains catastrophic forgetting: a loss in performance on seen classes of earlier tasks. Some existing methods rely on an expensive replay buffer to store a chunk of data from previous tasks. This, while promising, becomes expensive when the number of tasks becomes large or data cannot be stored for privacy reasons. As an alternative, prompt-based methods have been proposed that store the task information in a learnable prompt pool. This prompt pool instructs a frozen image encoder on how to solve each task. While the model faces a disjoint set of classes in each task in this setting, we argue that these classes can be encoded to the same embedding space of a pre-trained language encoder. In this work, we propose Language Guidance for Prompt-based Continual Learning (LGCL) as a plug-in for prompt-based methods. LGCL is model agnostic and introduces language guidance at the task level in the prompt pool and at the class level on the output feature of the vision encoder. We show with extensive experimentation that LGCL consistently improves the performance of prompt-based continual learning methods, setting a new state of the art. LGCL achieves these performance improvements without needing any additional learnable parameters.
Recent large-scale text-guided diffusion models provide powerful image-generation capabilities. Currently, a significant effort is given to enable the modification of these images using text only as means to offer intuitive and versatile editing. However, editing proves to be difficult for these generative models due to the inherent nature of editing techniques, which involves preserving certain content from the original image. Conversely, in text-based models, even minor modifications to the text prompt frequently produce an entirely different result, making it exceedingly challenging to attain a one-shot generation that accurately corresponds to the user's intent. In addition, to edit a real image using these state-of-the-art tools, one must first invert the image into the pre-trained model's domain, adding another factor affecting the edit quality, as well as latency. In this exploratory report, we propose LEDITS - a combined lightweight approach for real-image editing, incorporating the Edit Friendly DDPM inversion technique with Semantic Guidance, thus extending Semantic Guidance to real image editing, while harnessing the editing capabilities of DDPM inversion as well. This approach achieves versatile edits, both subtle and extensive, as well as alterations in composition and style, while requiring no optimization nor extensions to the architecture.
Diffusion models have a strong ability to generate in-the-wild images. However, with text guidance alone they often generate inaccurate images, which makes it very challenging to directly apply text-guided generative models to virtual try-on scenarios. Taking images as the guiding conditions of the diffusion model, this paper proposes a brand-new personalized virtual try-on model (PE-VITON), which uses two stages (shape control and texture guidance) to decouple the clothing attributes. Specifically, the proposed model adaptively matches the clothing to human body parts through the Shape Control Module (SCM) to mitigate misalignment between the clothing and the human body. The semantic information of the input clothing is parsed by the Texture Guided Module (TGM), and the corresponding texture is generated by directional guidance. Therefore, this model can effectively address the problems of poor reproduction of clothing folds, poor generation under complex human postures, blurred clothing edges, and unclear texture styles in traditional try-on methods. Meanwhile, the model can automatically enhance the generated clothing folds and textures according to the human posture, improving the authenticity of virtual try-on. In this paper, qualitative and quantitative experiments are carried out on high-resolution paired and unpaired datasets; the results show that the proposed model outperforms the state-of-the-art models.
Julie Hogan, Aneliya Karadzhinova-Ferrer, Sudhir Malik
The HEP faculty job market has stayed fairly flat over the years, not keeping up with the number of postdocs and PhDs produced who seek such employment. Physicists who seek jobs outside this realm face challenges. For example, those hired as faculty at predominantly undergraduate institutions (PUIs) and community colleges (CCs) face hurdles in keeping up with research due to higher teaching loads and funding challenges. At the same time, those who seek employment in industry may find themselves in unfamiliar territory despite having marketable skills. New job opportunities seeking HEP-developed skills have appeared in data science, machine learning, and quantum computing. Given that a vast majority transition to industry jobs, we must strengthen the existing paths for this transition and develop new ways to facilitate it. A strong engagement between HEP and its alumni would boost this process. At the same time, those who stay in academia but choose to work at PUIs or CCs must be enabled to continue pursuing research and to receive support and guidance for funding. PUIs and CCs serve as a gateway to opportunities for inclusiveness beyond national labs and academic research institutions, offering an early starting point in the pipeline that can mitigate the lack of diversity and the underrepresentation of different groups in HEP. This report summarises the study undertaken to investigate these issues in the HEP community and provides findings and recommendations.
Power dynamics influence every aspect of scientific collaboration. Team power dynamics can be measured by team power level and team power hierarchy. Team power level is conceptualized as the average level of the possession of resources, expertise, or decision-making authority in a team. Team power hierarchy represents the vertical differences in the possession of resources within a team. In the Science of Science, few studies have looked at scientific collaboration from the perspective of team power dynamics. This research examines how team power dynamics affect team impact, filling this gap. In this research, all co-authors of one publication are treated as one team. The team power level and team power hierarchy of a team are measured by the mean and Gini index, respectively, of the career ages of its co-authors. Team impact is quantified by citations of a paper authored by the team. By analyzing over 7.7 million teams from Science (e.g., Computer Science, Physics), Social Sciences (e.g., Sociology, Library & Information Science), and Arts & Humanities (e.g., Art), we find that a flat team structure is associated with higher team impact, especially when teams have a high team power level. These findings hold in all five disciplines except Art, and are consistent across various types of teams from Computer Science, including teams from industry or academia, teams with different gender compositions, teams with geographical contrast, and teams of distinct sizes.
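Both team-level measures are simple to compute. The sketch below follows the operationalization stated in the abstract (mean and Gini index of co-author career ages), with the standard discrete Gini formula as an assumption about the exact estimator used:

```python
import numpy as np

def team_power(career_ages):
    """Team power level (mean career age) and hierarchy (Gini index).

    Co-authors of one paper form one team; a Gini of 0 means a perfectly
    flat team, while values near 1 mean seniority is concentrated in a
    few members (a 'tall' hierarchy).
    """
    a = np.sort(np.asarray(career_ages, dtype=float))
    n = a.size
    level = a.mean()
    # Standard Gini estimator for sorted non-negative values.
    gini = (2 * np.arange(1, n + 1) - n - 1) @ a / (n * a.sum())
    return level, gini

flat = team_power([5, 5, 5, 5])    # equal seniority: hierarchy = 0
tall = team_power([1, 1, 1, 17])   # one senior PI: same mean, higher Gini
```

Note that both example teams have the same power level (mean career age 5), so the Gini index isolates the hierarchy dimension exactly as the study's design requires.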
An aesthetics evaluation model is at the heart of predicting users' aesthetic experience and developing user interfaces with higher quality. However, previous methods on aesthetic evaluation largely ignore the interpretability of the model and are consequently not suitable for many human-computer interaction tasks. We solve this problem by using a hyper-network to learn the overall aesthetic rating as a combination of individual aesthetic attribute scores. We further introduce a specially designed attentional mechanism in attribute score estimators to enable the users to know exactly which parts/elements of visual inputs lead to the estimated score. We demonstrate our idea by designing an intelligent photography guidance system. Computational results and user studies demonstrate the interpretability and effectiveness of our method.
To alleviate the data sparsity and cold-start problems of traditional recommender systems (RSs), incorporating knowledge graphs (KGs) to supplement auxiliary information has attracted considerable attention recently. However, simply integrating KGs into current KG-based RS models does not necessarily guarantee improved recommendation performance, and may even weaken the holistic model capability. This is because the construction of these KGs is independent of the collection of historical user-item interactions; hence, information in these KGs may not always be helpful for recommendation to all users. In this paper, we propose attentive Knowledge-aware Graph convolutional networks with Collaborative Guidance for personalized Recommendation (CG-KGR). CG-KGR is a novel knowledge-aware recommendation model that enables ample and coherent learning of KGs and user-item interactions, via our proposed Collaborative Guidance Mechanism. Specifically, CG-KGR first encapsulates historical interactions into an interactive information summarization. CG-KGR then utilizes it as guidance to extract information from the KGs, which ultimately provides more precise personalized recommendations. We conduct extensive experiments on four real-world datasets over two recommendation tasks, i.e., Top-K recommendation and Click-Through Rate (CTR) prediction. The experimental results show that the CG-KGR model significantly outperforms recent state-of-the-art models by 1.4-27.0% in terms of the Recall metric on Top-K recommendation.
Decades of research have studied how infants learning language come to discriminate speech sounds, segment words, and associate words with their meanings. While the gradual development of such capabilities is unquestionable, the exact nature of these skills and the underlying mental representations remains unclear. In parallel, computational studies have shown that basic comprehension of speech can be achieved by statistical learning between speech and concurrent, referentially ambiguous visual input. These models can operate without prior linguistic knowledge, such as representations of linguistic units, and without learning mechanisms specifically targeted at such units. This has raised the question of to what extent knowledge of linguistic units, such as phone(me)s, syllables, and words, could actually emerge as latent representations supporting the translation between speech and representations in other modalities, without the units being proximal learning targets for the learner. In this study, we formulate this idea as the so-called latent language hypothesis (LLH), connecting linguistic representation learning to general predictive processing within and across sensory modalities. We review the extent to which the audiovisual aspect of LLH is supported by existing computational studies. We then explore LLH further in extensive learning simulations with different neural network models for audiovisual cross-situational learning, comparing learning from both synthetic and real speech data. We investigate whether the latent representations learned by the networks reflect the phonetic, syllabic, or lexical structure of input speech by utilizing an array of complementary evaluation metrics related to linguistic selectivity and temporal characteristics of the representations. As a result, we find that representations associated...