Navigating Hype, Interdisciplinary Collaboration, and Industry Partnerships in Quantum Information Science and Technology: Perspectives from Leading Quantum Educators
Liam Doyle, Fargol Seifollahi, Chandralekha Singh
The rapid advancement of quantum information science and technology (QIST) has generated significant attention from people in academia, industry, and the public. Recent advances in QIST have led to both opportunities and challenges for students and researchers who are curious about the potential of the field amid hype, considering whether their skills are aligned with what the field needs, and contemplating how collaborating with industries may impact their research. This qualitative study presents perspectives from leading quantum researchers who are educators on three critical aspects shaping QIST's development: (1) the impact of hype in the field and strategies for managing expectations, (2) approaches to creating conducive environments that attract students and established researchers from non-physics disciplines, and (3) effective models for fostering university-industry partnerships that can be valuable for students and researchers alike. These aspects, along with several interconnected challenges, were explored through in-depth interviews with quantum educators. Our findings reveal nuanced perspectives on managing the hype cycle and its risks in creating unrealistic expectations. Regarding greater interdisciplinary engagement and attracting more non-physicists to QIST, educators emphasized the need to recognize and leverage existing expertise from other fields while developing educational pathways that meet diverse student backgrounds to prepare them for the QIST workforce. On university-industry partnerships, respondents highlighted successful models, while noting persistent challenges around intellectual property, confidentiality, and differing organizational goals. These insights provide valuable guidance for educators, policymakers, and industry leaders working to build a sustainable quantum workforce while maintaining realistic expectations about the field's trajectory.
Bioactive Compounds From Agri-Food By-Products: Advancements in Environmental Sustainability and Bioeconomic Progress
Payel Dhar, B. Jose Ravindra Raj, Amayappanallur Kannan Dasarathy
et al.
The rapid growth of agri-food industries has led to an alarming increase in waste generation, posing environmental, economic, and sustainability challenges. This review explores recent advancements in the valorization of agri-food by-products into value-added products through green extraction and biorefinery technologies. It emphasizes the recovery of bioactive compounds such as polyphenols, flavonoids, carotenoids, and dietary fibers from fruit, vegetable, dairy, meat, and seafood wastes, highlighting their potential applications in the food, pharmaceutical, cosmetic, and bioenergy sectors. Emerging eco-friendly extraction techniques, including supercritical and subcritical fluid extraction, enzyme-assisted extraction, microwave- and ultrasound-assisted methods, and pulsed electric field processing, offer improved yield, purity, and energy efficiency while reducing ecological impact. Despite technological progress, large-scale adoption remains constrained by high costs, lack of standardization, and limited industrial integration. Key research gaps include the need for techno-economic assessments, solvent recovery strategies, and life-cycle evaluations to ensure process scalability and sustainability. Future research should focus on developing hybrid extraction systems, AI-driven process optimization, and pilot-scale biorefineries supported by robust policy frameworks and industry-academia collaboration. Overall, agri-food waste valorization presents a viable pathway toward achieving environmental sustainability and circular bioeconomy goals, enabling a transition from waste-intensive practices to resource-efficient and climate-resilient production systems.
Grasping in Uncertain Environments: A Case Study For Industrial Robotic Recycling
Annalena Daniels, Sebastian Kerz, Salman Bari
et al.
Autonomous robotic grasping of uncertain objects in uncertain environments is an impactful open challenge for the industries of the future. One such industry is the recycling of Waste Electrical and Electronic Equipment (WEEE) materials, in which electric devices are disassembled and readied for the recovery of raw materials. Because devices may contain hazardous materials and their disassembly involves heavy manual labor, robotic disassembly is a promising avenue. However, devices may be damaged, dirty, and unidentified, which makes robotic disassembly challenging: object models are unavailable or cannot be relied upon. This case study explores grasping strategies for industrial robotic disassembly of WEEE devices with uncertain vision data. We propose three grippers and appropriate tactile strategies for force-based manipulation that improve grasping robustness. For each proposed gripper, we develop corresponding strategies that perform effectively in different grasping tasks and leverage the gripper's design and unique strengths. Through experiments conducted in lab and factory settings on four different WEEE devices, we demonstrate how object uncertainty can be overcome by tactile sensing and compliant techniques, significantly increasing grasping success rates.
FlakyGuard: Automatically Fixing Flaky Tests at Industry Scale
Chengpeng Li, Farnaz Behrang, August Shi
et al.
Flaky tests that non-deterministically pass or fail waste developer time and slow release cycles. While large language models (LLMs) show promise for automatically repairing flaky tests, existing approaches like FlakyDoctor fail in industrial settings due to the context problem: providing either too little context (missing critical production code) or too much (overwhelming the LLM with irrelevant information). We present FlakyGuard, which addresses this problem by treating code as a graph structure and using selective graph exploration to find only the most relevant context. Evaluation on real-world flaky tests from industrial repositories shows that FlakyGuard repairs 47.6% of reproducible flaky tests, with 51.8% of the fixes accepted by developers. Moreover, it outperforms state-of-the-art approaches by at least 22% in repair success rate. Developer surveys confirm that 100% find FlakyGuard's root-cause explanations useful.
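The "selective graph exploration" idea in the abstract can be illustrated with a small, hedged sketch: starting from a flaky test's node in a code-dependency graph, walk breadth-first but keep only neighbors that pass a relevance check, up to a context budget. The graph format, the relevance predicate, and the budget are illustrative assumptions here, not FlakyGuard's actual design.

```python
# Sketch of selective graph exploration over a code-dependency graph.
# The adjacency-dict graph, the relevance predicate, and the budget
# are illustrative assumptions, not FlakyGuard's real internals.
from collections import deque

def select_context(graph, start, is_relevant, budget=5):
    """BFS from `start`, collecting up to `budget` relevant neighbors."""
    seen, picked = {start}, []
    queue = deque([start])
    while queue and len(picked) < budget:
        node = queue.popleft()
        for nbr in graph.get(node, []):
            if nbr in seen or not is_relevant(nbr):
                continue  # skip already-visited or irrelevant nodes
            seen.add(nbr)
            picked.append(nbr)
            queue.append(nbr)
            if len(picked) >= budget:
                break
    return picked
```

The budget caps the context handed to the LLM, which is exactly the trade-off the abstract describes: enough production code to diagnose the flake, without drowning the model in irrelevant files.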
Agent-based Condition Monitoring Assistance with Multimodal Industrial Database Retrieval Augmented Generation
Karl Löwenmark, Daniel Strömbergsson, Chang Liu
et al.
Condition monitoring (CM) plays a crucial role in ensuring reliability and efficiency in the process industry. Although computerised maintenance systems effectively detect and classify faults, tasks such as fault severity estimation and maintenance decision making still largely depend on human expert analysis. The analysis and decision making performed automatically by current systems typically exhibit considerable uncertainty and high false alarm rates, leading to increased workload and reduced efficiency. This work integrates large language model (LLM)-based reasoning agents with CM workflows to address analyst and industry needs, namely reducing false alarms, enhancing fault severity estimation, improving decision support, and offering explainable interfaces. We propose MindRAG, a modular framework combining multimodal retrieval-augmented generation (RAG) with novel vector store structures designed specifically for CM data. The framework leverages existing annotations and maintenance work orders as surrogates for labels in a supervised learning protocol, addressing the common challenge of training predictive models on unlabelled and noisy real-world datasets. The primary contributions include: (1) an approach for structuring industry CM data into a semi-structured multimodal vector store compatible with LLM-driven workflows; (2) multimodal RAG techniques tailored for CM data; (3) practical reasoning agents capable of addressing real-world CM queries; and (4) an experimental framework for integrating and evaluating such agents in realistic industrial scenarios. Preliminary results, evaluated with the help of an experienced analyst, indicate that MindRAG provides meaningful decision support for more efficient management of alarms and improves the interpretability of CM systems.
Testbed and Software Architecture for Enhancing Security in Industrial Private 5G Networks
Song Son Ha, Florian Foerster, Thomas Robert Doebbert
et al.
In the era of Industry 4.0, the growing need for secure and efficient communication systems has driven the development of fifth-generation (5G) networks characterized by extremely low latency, massive device connectivity and high data transfer speeds. However, the deployment of 5G networks presents significant security challenges, requiring advanced and robust solutions to counter increasingly sophisticated cyber threats. This paper proposes a testbed and software architecture to strengthen the security of Private 5G Networks, particularly in industrial communication environments.
Enhancing failure prediction in nuclear industry: Hybridization of knowledge- and data-driven techniques
Amaratou Mahamadou Saley, Thierry Moyaux, Aïcha Sekhari
et al.
The convergence of the Internet of Things (IoT) and Industry 4.0 has significantly advanced data-driven methodologies within the nuclear industry, notably enhancing safety and economic efficiency. A key remaining challenge is the precise prediction of future maintenance needs for assets, which is crucial for reducing downtime and operational costs. However, the effectiveness of data-driven methodologies in the nuclear sector requires extensive domain knowledge due to the complexity of the systems involved. Thus, this paper proposes a novel predictive maintenance methodology that combines data-driven techniques with domain knowledge about nuclear equipment. The methodological originality of this paper lies at two levels: highlighting the limitations of purely data-driven approaches and demonstrating the importance of knowledge in enhancing the performance of the predictive models. The applicative novelty of this work lies in its use within the nuclear industry, a domain that is highly restricted and ultrasensitive due to security, economic, and environmental concerns. A detailed real-world case study, which compares the current state of equipment monitoring with two scenarios, demonstrates that the methodology significantly outperforms purely data-driven methods in failure prediction. While purely data-driven methods achieve only modest performance, with a prediction horizon limited to 3 h and an F1 score of 56.36%, the hybrid approach increases the prediction horizon to 24 h and achieves a higher F1 score of 93.12%.
Harmonizing Diverse Models: A Layer-wise Merging Strategy for Consistent Generation
Xujun Peng, Anoop Kumar, Jingyu Wu
et al.
Retrieval-Augmented Generation (RAG) systems leverage Large Language Models (LLMs) to generate accurate and reliable responses that are grounded in retrieved context. However, LLMs often generate inconsistent outputs for semantically equivalent inputs, a problem compounded by the scarcity of consistency-focused training data and the limitations of current fine-tuning techniques in enhancing output consistency. We propose a new approach combining systematic synthetic data generation, triplet loss for better embeddings, and a novel layer-wise model merging strategy. Using consistency-aware weights derived from intermediate-layer activations, our method effectively integrates knowledge from specialized models. Experimental results show that our merged model significantly enhances output consistency, achieving an approximately 47.5% improvement in response similarity over the baseline, thus offering a practical solution for increasing the reliability of an industrial RAG system.
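The layer-wise merging described above can be sketched minimally. In this illustration, models are plain {layer: parameters} dicts and the per-layer weights are supplied as inputs; the paper derives its consistency-aware weights from intermediate-layer activations, which is not reproduced here.

```python
# Minimal, library-free sketch of layer-wise model merging.
# Models are {layer_name: list-of-parameters} dicts; `layer_weights`
# maps each layer to an interpolation weight in [0, 1]. How those
# weights would be derived from activations is NOT shown here.

def merge_layerwise(model_a, model_b, layer_weights):
    """Return a model whose layer L equals w_L * A[L] + (1 - w_L) * B[L]."""
    merged = {}
    for name in model_a:
        w = layer_weights[name]
        merged[name] = [w * a + (1 - w) * b
                        for a, b in zip(model_a[name], model_b[name])]
    return merged
```

The same per-layer interpolation carries over directly to real checkpoints (e.g., tensors in a state dict), with the interesting part being how the weights are chosen rather than the arithmetic itself.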
Can industrial overcapacity enable seasonal flexibility in electricity use? A case study of aluminum smelting in China
Ruike Lyu, Anna Li, Jianxiao Wang
et al.
In many countries, declining demand in energy-intensive industries such as cement, steel, and aluminum is leading to industrial overcapacity. Although industrial overcapacity is traditionally viewed as problematic and resource-wasteful, it could unlock energy-intensive industries' flexibility in electricity use. Here, using China's aluminum smelting industry as a case study, we evaluate the system-level costs and benefits of retaining energy-intensive industries' overcapacity for flexible electricity use in decarbonized energy systems. We find that overcapacity can enable aluminum smelters to adopt a seasonal operation paradigm, ceasing production during winter load peaks that are exacerbated by heating electrification and renewable seasonality. This seasonal operation paradigm could reduce the investment and operational costs of China's decarbonized electricity system by 23-32 billion CNY/year (11-15% of the aluminum smelting industry's product value), sufficient to offset the increased smelter maintenance and product storage costs associated with overcapacity. It may also provide an opportunity for seasonally complementary labor deployment across the aluminum smelting and thermal power generation sectors, offering a potential pathway for mitigating socio-economic disruptions caused by industrial restructuring and energy decarbonization.
Gallic acid-based green corrosion inhibitor for mild steel in 1 M HCl: electrochemical and microbial assessment with theoretical validation
Ahmed. E. Suliman, Ahmed H. Mangood, Naema S. Yehia
et al.
The petroleum industry invests heavily in costly equipment and devices utilized in the extraction, production, and processing of crude oil, and corrosion can result in the loss of these valuable assets or of the crude itself. This research involved the synthesis of a Schiff base from substituted gallic acid derivatives via an intermediate, N-(2-{2-[2-(2-amino-ethylamino)-ethylamino]-ethylamino}-ethyl)-3,4,5-trihydroxy-benzamide (AEET). The synthesized compound was characterized using FTIR and 1H NMR spectroscopy, and its inhibition effectiveness was evaluated. The performance of the inhibitor was assessed through electrochemical measurements including Tafel polarization and electrochemical impedance spectroscopy (EIS). This evaluation was supported by theoretical mechanisms involving density functional theory (DFT) and molecular dynamics simulations (MDS). To validate the findings from the electrochemical studies, scanning electron microscopy (SEM) was employed to examine the topographic differences between treated and untreated mild steel samples. A serial dilution bioassay was used to assess AEET as an effective biocide for managing bacterial growth, including an analysis of its efficiency in inhibiting sulfate-reducing bacteria (SRB). Additionally, computational methods were applied to investigate the biological activity, reporting optimal scores, RMSD values, and binding interaction energies associated with hydrogen-bond formation with specific receptor residues.
Digital currencies of central banks: memories of the future
E. I. Dyudikova
The emergence of a new type of society in the wake of the information and digital revolution, Society 5.0, is accompanied by social polarization, which contributes to the appearance and spread of technocratic metaphors and their humanitarian interpretations, generally complicating the formation of new realities and hindering their perception. This became evident during a period of rapid transformation in monetary circulation, when cryptocurrency turned into a full-fledged innovative narrative, prompting the authorities to take countermeasures to curb the shadow sector of alternative finance. At the same time, there is no holistic, definite, and comprehensible vision of the vector of digital transformation of the payment and settlement space: the concept is not concretized, theoretical approaches are not detailed, no prototype of a tested solution exists, no White Paper has been published, and a high level of secrecy surrounds piloting results. Under these conditions, phrases applied to the sphere of money circulation that contain the word "digital" most often look like a technocratic metaphor designed to attract public attention. However, the formal enumeration of breakthrough technologies and the locally selective nature of their use as a criterion for digital transformation reveal neither the essence of the technologies themselves nor the "breakthrough" into Society 5.0. A historical perspective on the causal relationship between the cryptocurrency industry and central bank digital currencies, together with an interactive approach to improving the settlement and payment space that accounts for the shift from customer-oriented concepts to human-centricity, makes it possible to identify existing technocratic metaphors in the legislative interpretation of the digital ruble and to establish the dichotomy of the virtual settlement and payment space (its separation into digital and electronic).
Performance optimization of human factors and safety performance using an integrated DEA-TOPSIS approach: A case study in the process industry
Leila Omidi, Vahid Salehi, Seyed Abolfazl Zakerian
et al.
Stress, fatigue, and work situation awareness are key contributors to accidents and unsafe behaviors in process industries. Given the significance of these factors, this study aimed to assess employees' perceptions of the effects of stress, fatigue, and work situation awareness on safety performance in a process industry. The data were collected through a questionnaire, and their reliability was evaluated and confirmed. The Data Envelopment Analysis (DEA) method was used to identify and analyze the factors and sub-factors most influencing employees' perceptions of safety performance. Additionally, the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) was applied to rank alternatives and validate the DEA results. Sensitivity analysis revealed that work situation awareness affected safety performance more significantly than stress and fatigue did. Furthermore, the findings showed that distraction, chronic fatigue, and demands were the most influential sub-factors of work situation awareness, fatigue, and stress, respectively. The Pearson correlation test confirmed strong agreement between the DEA and TOPSIS results. Given these findings, stress, fatigue, and work situation awareness play an important role in the safety performance of employees in the process industries.
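For readers unfamiliar with TOPSIS, a compact sketch of the ranking step follows (benefit criteria only, equal weights by default): normalize the decision matrix, locate the ideal and anti-ideal solutions, and score each alternative by its relative closeness to the ideal. It is illustrative only; the study's actual criteria, weights, and questionnaire data differ.

```python
# Compact TOPSIS sketch: rank alternatives (rows) over benefit
# criteria (columns). Weights default to equal; the input matrix
# here is a toy example, not the study's data.
import math

def topsis(matrix, weights=None):
    m, n = len(matrix), len(matrix[0])
    weights = weights or [1.0 / n] * n
    # vector-normalize each column, then apply criterion weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n)] for row in matrix]
    ideal = [max(col) for col in zip(*v)]   # best value per criterion
    anti = [min(col) for col in zip(*v)]    # worst value per criterion
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)       # distance to ideal
        d_neg = math.dist(row, anti)        # distance to anti-ideal
        scores.append(d_neg / (d_pos + d_neg))
    return scores  # higher score = closer to the ideal solution
```

Cost criteria would flip max and min for the affected columns; the closeness score in [0, 1] is what gets compared against the DEA ranking in studies like this one.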
Nutrition and food science & technology: Vital symbiosis for sustainable health
Gert W. Meijer
Nutrition science and food science & technology are crucial for creating a healthier world through accessible nutrition and sustainable health practices. Examples of successful impact can be found in food fortification, foods with effective levels of bioactives, (re)formulation of foods to combat obesity and diet-related diseases, (re)formulation of foods to enable nutrition and health claims, and the activities of the European Technology Platform (ETP) for the food sector, 'Food for Life'. In preparing, maintaining, and promoting the Strategic Research and Innovation Agenda (SRIA), the ETP aims to identify scientific and technological actions towards the transformations that are needed to achieve more optimal outcomes of the food system. The four major food system outcomes are the environment, society, citizen health & wellbeing, and economy & competitiveness. The SRIA provides essential guidance to the European Commission, Member States and Regions, the food industry, and the wider research community interested in food, in the form of research and innovation needs that can make a real difference to the food and drink sector and society. The mutualism between nutritionists and food scientists and technologists is essential for achieving the transformations towards the outcomes that are needed for a more sustainable food system.
Research on a productivity prediction method for infilling wells based on an improved LSTM neural network: A case study of middle-deep shale gas in South Sichuan
GUAN Wenjie, PENG Xiaolong, ZHU Suyang, YANG Chen, PENG Zhen, MA Xiaoran
During the development of middle and deep gas reservoirs in South Sichuan, conventional reservoir engineering methods, such as fracture propagation analysis, stress-induced analysis, and numerical simulation, render productivity prediction of infilling wells laborious, are ineffective at capturing variations in production capacity across different production stages, and carry stringent application conditions. In order to quickly and accurately predict the production capacity of infilling wells, this study classifies the "three-stage" declining trend observed in the production pressure curves of existing wells into (1) a drastic decline period, regarded as the initial water production stage; (2) a rapid decline period; and (3) a slow decline period, the latter two considered part of the later gas production stage. The Grey Wolf Optimizer (GWO), a fast optimization algorithm with adaptive capabilities and an information feedback mechanism, is applied for hyperparameter optimization of the Long Short-Term Memory (LSTM) neural network. Two stage-specific models were constructed, with the number of hidden-layer neurons, dropout rate, and batch size set to the optimal solutions obtained via GWO. The number of iterations was selected based on the loss curve and performance metric curve, while a linear warm-up strategy was used to dynamically adjust the learning rate, enabling high-speed training and yielding a staged productivity prediction model. Example studies show that the GWO-optimized LSTM neural network model achieves rapid convergence with a preset learning rate of 0.002 and 450 iterations, ultimately reaching a performance index of 0.923. Compared to the conventional LSTM neural network model, the average absolute errors during the early and later stages are reduced by 1.290 m^3/d and 0.213 × 10^4 m^3/d, respectively. Compared with numerical simulation fitting results, the average absolute error in gas production prediction is reduced by 0.24 × 10^4 m^3/d. The improved LSTM neural network model therefore demonstrates excellent performance in capacity prediction across different production stages and captures the stage-specific productivity variations of infilling wells in middle and deep shale gas reservoirs in South Sichuan, providing a theoretical foundation for productivity prediction methods for infilling wells.
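The linear warm-up mentioned in the abstract can be written as a one-line schedule: the learning rate ramps linearly up to the preset peak (0.002 here, matching the abstract's value) over an initial span of iterations, then stays flat. The warm-up length is an assumed illustrative value, not one reported in the study.

```python
# Linear warm-up learning-rate schedule (illustrative sketch).
# `peak` matches the abstract's preset rate of 0.002; the warm-up
# length of 50 steps is an assumption for illustration.

def warmup_lr(step, peak=0.002, warmup=50):
    """Return the learning rate at a given 0-indexed training step."""
    if step < warmup:
        return peak * (step + 1) / warmup  # linear ramp from peak/warmup to peak
    return peak                            # flat after warm-up
```

In practice such a schedule is passed to the optimizer each iteration (or wrapped in a framework's scheduler API); warm-up avoids large, destabilizing updates while LSTM weights are still near their random initialization.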
Towards certification: A complete statistical validation pipeline for supervised learning in industry
Lucas Lacasa, Abel Pardo, Pablo Arbelo
et al.
Methods of machine and deep learning are gradually being integrated into industrial operations, albeit at different speeds in different industries. The aerospace and aeronautical industries have recently developed a roadmap for concepts of design assurance and the integration of neural-network-related technologies in the aeronautical sector. This paper aims to contribute to this paradigm of AI-based certification in the context of supervised learning by outlining a complete validation pipeline that integrates deep learning, optimization, and statistical methods. The pipeline is composed of a directed graphical model of ten steps. Each step is addressed by merging key concepts from different contributing disciplines (from machine learning and optimization to statistics) and adapting them to an industrial scenario, as well as by developing computationally efficient algorithmic solutions. We illustrate the application of this pipeline on a realistic supervised problem arising in aerostructural design: predicting the likelihood of different stress-related failure modes during different flight maneuvers based on a (large) set of features characterising the aircraft's internal loads and geometric parameters.
Exploring Modular Mobility: Industry Advancements, Research Trends, and Future Directions on Modular Autonomous Vehicles
Lanhang Ye, Toshiyuki Yamamoto
Modular autonomous vehicles (MAVs) represent a transformative paradigm in the rapidly advancing field of autonomous vehicle technology. The integration of modularity offers numerous advantages, poised to reshape urban mobility systems and foster innovation in this emerging domain. Although publications on MAVs have only gained traction in the past five years, these pioneering efforts are critical for envisioning the future of modular mobility. This work provides a comprehensive review of industry and academic contributions to MAV development up to 2024, encompassing conceptualization, design, and applications in both passenger and logistics transport. The review systematically defines MAVs and outlines their technical framework, highlighting groundbreaking efforts in vehicular conceptualization, system design, and business models by the automotive industry and emerging mobility service providers. It also synthesizes academic research on key topics, including passenger and logistics transport, and their integration within future mobility ecosystems. The review concludes by identifying challenges, summarizing the current state of the art, and proposing future research directions to advance the development of modular autonomous mobility systems.
Towards Understanding Provenance in Industry
Matthias Galster, Jens Dietrich
Context: Trustworthiness of software has become a first-class concern of users (e.g., to understand software-made decisions). Also, there is increasing demand to demonstrate regulatory compliance of software, and end users want to understand how software-intensive systems make decisions that affect them. Objective: We aim to provide a step towards understanding the provenance needs of the software industry to support trustworthy software. Provenance is information about the entities, activities, and people involved in producing data, software, or the output of software, and it is used to assess software quality, reliability, and the trustworthiness of digital products and services. Method: Based on data from in-person and questionnaire-based interviews with professionals in leadership roles, we develop an "influence map" to analyze who drives provenance, when provenance is relevant, what is impacted by provenance, and how provenance can be managed. Results: The influence map helps decision makers navigate concerns related to provenance. It can also act as a checklist for initial provenance analyses of systems. It is empirically grounded and designed bottom-up (based on the perceptions of practitioners) rather than top-down (from regulations or policies). Conclusion: We present an imperfect first step towards understanding provenance based on current perceptions and offer a preliminary view ahead.
Batch Prompting: Efficient Inference with Large Language Model APIs
Zhoujun Cheng, Jungo Kasai, Tao Yu
Performing inference on large volumes of samples with large language models (LLMs) can be computationally and financially costly in industry and real-world use. We propose batch prompting, a simple yet effective prompting approach that enables the LLM to run inference in batches instead of one sample at a time. Our method reduces both token and time costs while retaining downstream performance. We theoretically demonstrate that under a few-shot in-context learning setting, the inference costs decrease almost inverse-linearly with the number of samples in each batch. We extensively validate the effectiveness of batch prompting on ten datasets across commonsense QA, arithmetic reasoning, and NLI/NLU: batch prompting significantly reduces LLM (Codex) inference token and time costs (up to 5x with six samples per batch) while achieving better or comparable performance. For state-of-the-art chat-based LLMs, e.g., GPT-3.5 and GPT-4, we show that the benefits of batch prompting also hold. Further analysis shows that the number of samples in each batch and the complexity of tasks affect its performance. Moreover, batch prompting can be applied across different reasoning methods using LLMs. Our code is available at https://github.com/xlang-ai/batch-prompting.
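A minimal sketch of the batching idea: pack several samples into one prompt with positional markers, then split the model's numbered answers back out. The prompt template, marker format, and answer-parsing convention are illustrative assumptions here, not the paper's exact prompts.

```python
# Sketch of batch prompting: many samples in one prompt, one API call.
# The Q[i]/A[i] marker scheme is an illustrative convention, not the
# paper's exact template; the LLM call itself is left out.

def build_batch_prompt(samples, instruction="Answer each question."):
    """Combine several input samples into a single batched prompt."""
    lines = [instruction]
    for i, s in enumerate(samples, 1):
        lines.append(f"Q[{i}]: {s}")
    lines.append("Respond with one line per question, formatted 'A[i]: answer'.")
    return "\n".join(lines)

def parse_batch_response(text, n):
    """Split a batched completion back into per-sample answers."""
    answers = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("A[") and "]:" in line:
            idx = int(line[2 : line.index("]")])
            answers[idx] = line.split("]:", 1)[1].strip()
    return [answers.get(i, "") for i in range(1, n + 1)]
```

Because the few-shot exemplars and instruction are shared across all samples in the batch, their token cost is amortized, which is the source of the near inverse-linear cost reduction the abstract describes.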
ConaCLIP: Exploring Distillation of Fully-Connected Knowledge Interaction Graph for Lightweight Text-Image Retrieval
Jiapeng Wang, Chengyu Wang, Xiaodan Wang
et al.
Large-scale pre-trained text-image models with dual-encoder architectures (such as CLIP) are typically adopted for various vision-language applications, including text-image retrieval. However, these models remain impractical on edge devices and in real-time situations due to the substantial indexing and inference time and the large consumption of computational resources. Although knowledge distillation techniques have been widely utilized for uni-modal model compression, how to expand them to the situation where the numbers of modalities and teachers/students are doubled has rarely been studied. In this paper, we conduct comprehensive experiments on this topic and propose the fully-Connected knowledge interaction graph (Cona) technique for cross-modal pre-training distillation. Based on our findings, the resulting ConaCLIP achieves SOTA performance on the widely used Flickr30K and MSCOCO benchmarks under the lightweight setting. An industrial application of our method on an e-commerce platform further demonstrates the significant effectiveness of ConaCLIP.
C18 - Phytochemical analysis and evaluation of larvicidal activity against the dengue vector (Aedes aegypti) of leaf extracts of Calotropis procera R. br (Apocynaceae) for their use as a bio-insecticide
Daisy Damando, B. Gérard Josias Yaméogo, Hermine Zimé-Diawara
et al.
Mosquito resistance to conventional chemical insecticides remains a major concern for vector-control programs. Research is currently turning towards natural alternatives: bio-insecticides. The insecticidal activity of certain plants, such as Calotropis procera (Ait.) R.br (Apocynaceae), could be of great interest in this field. In this study, we screened C. procera leaf extracts (aqueous, hydro-ethanolic, methanolic, and ethanolic) by HPTLC, then measured their sterol, triterpene, and cardenolide content. We also studied the larvicidal activity of the aqueous and hydro-ethanolic extracts against the dengue vector Aedes aegypti following the methodology described by the WHO.
The chemical profile of the various extracts shows the presence of flavonoids, tannins, sterols and triterpenes, coumarins, alkaloids, and cardenolides. The greatest larvicidal activity was obtained with the hydro-ethanolic extract from leaves harvested in the dry season in the locality of Kombissiri (south-central Burkina Faso), with an LD50 of 1.58 mg/ml (1.51; 1.66 mg/ml).
These results indicate that the hydro-ethanolic extract of C. procera could serve in the formulation of an ecological, low-cost bio-insecticide to control Aedes aegypti larvae.