In recent years, various industrial activities have caused serious environmental pollution. Owing to its low operating costs and high flexibility, adsorption is considered one of the most effective technologies for pollutant management. Agricultural waste has a loose, porous structure and contains functional groups such as carboxyl and hydroxyl groups, so it can serve as a biological adsorption material. It also has the advantages of a wide range of sources, low cost, and renewability, giving it good prospects for the comprehensive utilization of resources in environmental pollution control. This article summarizes the current state of research on agricultural waste for adsorbing pollutants: it identifies the factors that influence adsorption, explains the mechanisms of biological adsorption, introduces the relevant adsorption parameters, and describes engineering applications of these adsorbents in both the liquid and gas phases, before outlining future prospects for agricultural waste as an adsorbent.
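The adsorption parameters mentioned above are commonly obtained by fitting equilibrium data to isotherm models such as the Langmuir isotherm. A minimal sketch of such a fit on synthetic data (the constants and concentrations are illustrative, not measured values from any study):

```python
import numpy as np

# Langmuir isotherm: q_e = q_max * K_L * C_e / (1 + K_L * C_e)
# Linearized form:   C_e / q_e = C_e / q_max + 1 / (q_max * K_L)
# Synthetic equilibrium data with illustrative constants (not real measurements).

q_max_true, K_L_true = 120.0, 0.05                       # mg/g, L/mg
C_e = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])    # equilibrium conc., mg/L
q_e = q_max_true * K_L_true * C_e / (1.0 + K_L_true * C_e)

# A least-squares fit of the linearized form recovers both parameters:
# slope = 1 / q_max, intercept = 1 / (q_max * K_L)
slope, intercept = np.polyfit(C_e, C_e / q_e, 1)
q_max_fit = 1.0 / slope
K_L_fit = slope / intercept
```

With noiseless data the fit recovers the true parameters exactly; with real measurements one would also report a goodness-of-fit statistic and compare against alternatives such as the Freundlich model.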
Nidhal Selmi, Jean-Michel Bruel, Sébastien Mosser
et al.
Decision-making is a core engineering design activity that conveys the engineer's knowledge and translates it into courses of action. Capturing this form of knowledge can reap potential benefits for the engineering teams and enhance development efficiency. Despite its clear value, traditional decision capture often requires a significant amount of effort and still falls short of capturing the necessary context for reuse. Model-based systems engineering (MBSE) can be a promising solution to address these challenges by embedding decisions directly within system models, which can reduce the capture workload while maintaining explicit links to requirements, behaviors, and architectural elements. This article discusses a lightweight framework for integrating decision capture into MBSE workflows by representing decision alternatives as system model slices. Using a simplified industry example from aircraft architecture, we discuss the main challenges associated with decision capture and propose preliminary solutions to address these challenges.
Mairieli Wessel, Daniel Feitosa, Sangeeth Kochanthara
Rising publication pressure and the routine use of generative AI tools are reshaping how software engineering research is produced, assessed, and taught. While these developments promise efficiency, they also raise concerns about skill degradation, responsibility, and trust in scholarly outputs. This vision paper employs Design Fiction as a methodological lens to examine how such concerns might materialise if current practices persist. Drawing on themes reported in a recent community survey, we construct a speculative artifact situated in a near future research setting. The fiction is used as an analytical device rather than a forecast, enabling reflection on how automated assistance might impede domain knowledge competence, verification, and mentoring practices. By presenting an intentionally unsettling scenario, the paper invites discussion on how the software engineering research community in the future will define proficiency, allocate responsibility, and support learning.
Massimiliano Di Penta, Kelly Blincoe, Marsha Chechik
et al.
As software engineering conferences grow in size, rising costs and outdated formats are creating barriers to participation for many researchers. These barriers threaten the inclusivity and global diversity that have contributed to the success of the SE community. Based on survey data, we identify concrete actions the ACM Special Interest Group on Software Engineering (SIGSOFT) can take to address these challenges, including improving transparency around conference funding, experimenting with hybrid poster presentations, and expanding outreach to underrepresented regions. By implementing these changes, SIGSOFT can help ensure the software engineering community remains accessible and welcoming.
Ana B. M. Bett, Thais S. Nepomuceno, Edson OliveiraJr
et al.
Context: The empirical software engineering (ESE) community has contributed to improving experimentation over the years. However, there is still a lack of rigor in describing controlled experiments, hindering reproducibility and transparency. Registered Reports (RR) have been discussed in the ESE community to address these issues. An RR registers a study's hypotheses, methods, and/or analyses before execution, involving peer review and potential acceptance before data collection. This helps mitigate problematic practices such as p-hacking, publication bias, and inappropriate post hoc analysis. Objective: This paper presents initial results toward establishing an RR template for Software Engineering controlled experiments using the Open Science Framework (OSF). Method: We analyzed templates of selected OSF RR types in light of documentation guidelines for controlled experiments. Results: The observed lack of rigor motivated our investigation of OSF-based RR types. Our analysis showed that, although one of the RR types aligned with many of the documentation suggestions contained in the guidelines, none of them covered the guidelines comprehensively. The study also highlights limitations in OSF RR template customization. Conclusion: Despite progress in ESE, planning and documenting experiments still lack rigor, compromising reproducibility. Adopting OSF-based RRs is proposed. However, no currently available RR type fully satisfies the guidelines. Establishing RR-specific guidelines for SE is deemed essential.
Cross-organizational collaboration in Model-Based Systems Engineering (MBSE) faces many challenges in achieving semantic alignment across independently developed system models. SysML v2 introduces enhanced structural modularity and formal semantics, offering a stronger foundation for interoperable modeling. Meanwhile, GPT-based Large Language Models (LLMs) provide new capabilities for assisting model understanding and integration. This paper proposes a structured, prompt-driven approach for LLM-assisted semantic alignment of SysML v2 models. The core contribution lies in the iterative development of an alignment approach and interaction prompts, incorporating model extraction, semantic matching, and verification. The approach leverages SysML v2 constructs such as alias, import, and metadata extensions to support traceable, soft alignment integration. It is demonstrated with a GPT-based LLM through an example of a measurement system. Benefits and limitations are discussed.
Software development relies heavily on text-based communication, making sentiment analysis a valuable tool for understanding team dynamics and supporting trustworthy AI-driven analytics in requirements engineering. However, existing sentiment analysis tools often perform inconsistently across datasets from different platforms, due to variations in communication style and content. In this study, we analyze linguistic and statistical features of 10 developer communication datasets from five platforms and evaluate the performance of 14 sentiment analysis tools. Based on these results, we propose a mapping approach and questionnaire that recommends suitable sentiment analysis tools for new datasets, using their characteristic features as input. Our results show that dataset characteristics can be leveraged to improve tool selection, as platforms differ substantially in both linguistic and statistical properties. While transformer-based models such as SetFit and RoBERTa consistently achieve strong results, tool effectiveness remains context-dependent. Our approach supports researchers and practitioners in selecting trustworthy tools for sentiment analysis in software engineering, while highlighting the need for ongoing evaluation as communication contexts evolve.
UI automation is a useful technique for UI testing, bug reproduction, and robotic process automation. Recording user actions with an application assists rapid development of UI automation scripts, but existing recording techniques are intrusive, rely on OS or GUI framework accessibility support, or assume specific app implementations. Reverse engineering user actions from screencasts is non-intrusive, but a key reverse-engineering step is currently missing: recognizing human-understandable structured user actions ([command] [widget] [location]) from action screencasts. To fill the gap, we propose a deep learning-based computer vision model that can recognize 11 commands and 11 widgets, and generate location phrases from action screencasts, through joint learning and multi-task learning. We label a large dataset of 7260 video-action pairs, which record user interactions with Word, Zoom, Firefox, Photoshop, and Windows 10 Settings. Through extensive experiments, we confirm the effectiveness and generality of our model, and demonstrate the usefulness of a screencast-to-action-script tool built upon our model for bug reproduction.
Decarbonization of the transport sector places increasingly strict demands on maximizing the thermal efficiency and minimizing the greenhouse gas emissions of Internal Combustion Engines. This has led to complex engines with a surge in the number of corresponding tunable parameters in actuator set points and control settings. Automated calibration is therefore essential to keep development time and costs at acceptable levels. In this work, an innovative self-learning calibration method is presented based on in-cylinder pressure curve shaping. This method combines Principal Component Decomposition with constrained Bayesian Optimization. To realize maximal thermal engine efficiency, the optimization problem aims at minimizing the difference between the actual in-cylinder pressure curve and an Idealized Thermodynamic Cycle. By continuously updating a Gaussian Process Regression model of the pressure curve's Principal Component weights using measurements of the actual operating conditions, the mean in-cylinder pressure curve as well as its uncertainty bounds are learned. This information drives the optimization of calibration parameters, which are automatically adapted while dealing with the risks and uncertainties associated with operational safety and combustion stability. This data-driven method does not require prior knowledge of the system. The proposed method is successfully demonstrated in simulation using a Reactivity Controlled Compression Ignition engine model. The difference between the Gross Indicated Efficiency of the optimal solution found and the true optimum is 0.017%. For this complex engine, the optimal solution was found after 64.4s, which is relatively fast compared to conventional calibration methods.
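The core loop of such a method, fitting a Gaussian Process surrogate to the observations, minimizing an uncertainty-aware acquisition function, measuring, and refitting, can be sketched on a toy one-dimensional objective. Everything here is an illustrative stand-in, not the authors' engine model or their exact algorithm:

```python
import numpy as np

def rbf(x1, x2, ls=0.2):
    # squared-exponential kernel on 1-D inputs
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_tr, y_tr, x_te, noise=1e-5):
    # standard GP regression equations via a Cholesky solve
    K = rbf(x_tr, x_tr) + noise * np.eye(len(x_tr))
    L = np.linalg.cholesky(K)
    Ks = rbf(x_tr, x_te)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_tr))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v * v, axis=0)  # RBF prior variance is 1
    return mu, np.maximum(var, 0.0)

def objective(x):
    # stand-in for the mismatch between the measured in-cylinder pressure
    # curve and the idealized thermodynamic cycle (NOT a real engine model)
    return (x - 0.62) ** 2 + 0.05 * np.sin(8.0 * x)

rng = np.random.default_rng(0)
x_obs = rng.uniform(0.0, 1.0, 4)          # initial calibration settings
y_obs = objective(x_obs)
grid = np.linspace(0.0, 1.0, 201)

for _ in range(15):
    mu, var = gp_posterior(x_obs, y_obs, grid)
    lcb = mu - 2.0 * np.sqrt(var)         # lower-confidence-bound acquisition
    x_next = grid[np.argmin(lcb)]         # most promising next measurement
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))

x_best = x_obs[np.argmin(y_obs)]
```

The acquisition function trades off exploiting the current best estimate against exploring uncertain regions; the paper's constrained variant additionally penalizes settings that risk violating safety or combustion-stability limits.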
The problem of simulation of efficient ducted fan type propulsors is considered. From operational experience with twin blades in helicopter fantails, it is known that this configuration creates less noise than a uniform arrangement of the blades around the circumference. However, the flow behind such a fan is less uniform than that of a conventional ducted fan. For multicopter-type unmanned aircraft and air taxis, the key problem is flight in takeoff and landing modes, as well as the acoustic and vortex fields created by propulsors in these modes. The decrease in noise level in propellers with twin blades can potentially be accompanied by an increase in non-stationary vortex effects on the aircraft as well as a decrease in specific thrust. The objectives were to develop a method for simulating ducted fan propellers in takeoff and landing modes, to determine the optimal angle between the blades, and to compare a ducted fan with twin X-shaped blades to a conventional blade arrangement. Turbulent flows were calculated using the transient Reynolds-averaged Navier-Stokes equations complemented by the SST turbulence model, and large eddy simulation with the WALE subgrid viscosity model. The calculations used the γ–Reθ Transition SST modification of the Langtry-Menter turbulence model, which includes relations for the intermittency criterion and thus makes it possible to capture the laminar-turbulent transition and the appearance of thin laminar separation bubbles that affect both the thrust of the propeller and the nonuniformity of the flow behind it. Testing was carried out on four-bladed propellers against the known results of the TsAGI reference experiments. Testing of the γ–Reθ Transition SST Langtry-Menter turbulence model showed that it reproduces the dependence of the thrust coefficient and power factor on the blade angle better than the standard SST model. Calculations showed that there is a clearly defined optimum angle between the paired blades.
A comparison of three-bladed propellers, six-bladed propellers with single blades, and six-bladed propellers with twin blades showed that the last option has slightly better thrust characteristics and creates a significantly lower noise level on the ground. The studied characteristics of ducted fans demonstrate the prospects for using propellers with twin blades in aircraft with vertical takeoff and landing. The developed numerical method can be used directly for industrial calculations of propellers and fans.
In this research, a proposed model aims to automatically identify patterns of spatial and temporal behavior of moving objects in video sequences. The moving objects are analyzed and characterized based on their shape and observable attributes in displacement. To quantify the moving objects over time and form a homogeneous database, a set of shape descriptors is introduced. Geometric measurements of shape, contrast, and connectedness are used to represent each moving object. The proposal uses Granger's theory to find causal relationships from the history of each moving object stored in a database. The model is tested in two scenarios: the first uses a public database, and the second a proprietary database from a real-world setting. The results show an average accuracy of 78% in the detection of atypical behaviors in positive and negative dependence relationships.
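Granger's idea is that a series x "causes" y if past values of x improve the prediction of y beyond what y's own past provides, which is tested by comparing the restricted and full autoregressive models with an F-statistic. A minimal illustration on synthetic trajectories (the toy data and variable names are ours, not the paper's):

```python
import numpy as np

def granger_f(x, y, lag=2):
    """F-statistic for 'x Granger-causes y': compare an AR model of y
    with and without lagged x terms (minimal textbook version)."""
    n = len(y)
    rows = [np.r_[1.0, y[t - lag:t][::-1], x[t - lag:t][::-1]]
            for t in range(lag, n)]
    A = np.array(rows)                 # intercept, y lags, x lags
    b = y[lag:]
    # full model: y's own lags plus x's lags
    beta_full = np.linalg.lstsq(A, b, rcond=None)[0]
    rss_full = np.sum((b - A @ beta_full) ** 2)
    # restricted model: y's own lags only
    Ar = A[:, :1 + lag]
    beta_r = np.linalg.lstsq(Ar, b, rcond=None)[0]
    rss_r = np.sum((b - Ar @ beta_r) ** 2)
    df1, df2 = lag, len(b) - A.shape[1]
    return ((rss_r - rss_full) / df1) / (rss_full / df2)

# synthetic example: x drives y with one step of delay
rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

f_xy = granger_f(x, y)   # large: past x clearly helps predict y
f_yx = granger_f(y, x)   # near 1: past y does not help predict x
```

In the paper's setting, x and y would be shape-descriptor histories of two moving objects, and a large F in one direction signals a positive or negative dependence relationship.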
In daily life, people rely heavily on the internet for communication, exchanging information through social media applications and browsers. Consistently fast internet speeds are incredibly beneficial for performing tasks and activities, particularly for students and professionals, while a sluggish connection is frustrating and, if it persists, interrupts online activities and tasks. Hence, this study presents a comparative evaluation of two load-balancing approaches, Per Connection Classifier (PCC) and Equal Cost Multi-Path (ECMP), through GNS3 simulation. Load balancing distributes traffic loads evenly, while failover provides a backup mechanism when the main connection experiences problems. GNS3 is a graphical network simulator that can model more complex network topologies than other simulators such as Cisco Packet Tracer. The primary aim of this study is to understand how efficiently both techniques distribute traffic loads, maintain smooth internet access, and increase reliability. The PCC method achieves better throughput, delay, and jitter than the ECMP method, even though the values for each QoS parameter differ only slightly. In the traffic distribution tests, the PCC method outperforms the ECMP method: it distributes traffic evenly across both ISP lines when downloading and uploading data packets, whereas the ECMP method carries download and upload activities on only one traffic path.
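The intuition behind PCC can be sketched in a few lines: hashing a connection's endpoints pins every packet of one connection to a single ISP link (avoiding reordering), while distinct connections spread across the available links. This is an illustrative toy, not RouterOS code or the study's configuration:

```python
import hashlib

links = ["ISP-1", "ISP-2"]

def pick_link(src_ip, src_port, dst_ip, dst_port):
    # hash the connection endpoints so one connection always maps to one link
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return links[digest % len(links)]

# many connections (here: 1000 client ports) split roughly evenly
counts = {"ISP-1": 0, "ISP-2": 0}
for port in range(20000, 21000):
    counts[pick_link("192.0.2.10", port, "198.51.100.7", 443)] += 1
```

ECMP, by contrast, balances at the route level rather than per classified connection, which is one reason the study observed traffic concentrating on a single path during sustained downloads and uploads.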
Natural Language Processing (NLP) is now a cornerstone of requirements automation. One compelling factor behind the growing adoption of NLP in Requirements Engineering (RE) is the prevalent use of natural language (NL) for specifying requirements in industry. NLP techniques are commonly used for automatically classifying requirements, extracting important information, e.g., domain models and glossary terms, and performing quality assurance tasks, such as ambiguity handling and completeness checking. With so many different NLP solution strategies available and the possibility of applying machine learning alongside, it can be challenging to choose the right strategy for a specific RE task and to evaluate the resulting solution in an empirically rigorous manner. In this chapter, we present guidelines for the selection of NLP techniques as well as for their evaluation in the context of RE. In particular, we discuss how to choose among different strategies such as traditional NLP, feature-based machine learning, and language-model-based methods. Our ultimate hope for this chapter is to serve as a stepping stone, assisting newcomers to NLP4RE in quickly initiating themselves into the NLP technologies most pertinent to the RE field.
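As a concrete instance of the feature-based machine learning strategy mentioned above, a bag-of-words Naive Bayes classifier separating functional from quality requirements fits in a few lines. The training sentences are invented for illustration and are not from any real requirements dataset:

```python
from collections import Counter
import math

# toy labelled requirements (hypothetical examples, not a real dataset)
train = [
    ("the system shall send a confirmation email after checkout", "functional"),
    ("the user shall be able to export reports as pdf", "functional"),
    ("the system shall respond to queries within two seconds", "quality"),
    ("the service shall be available 99.9 percent of the time", "quality"),
]

def tokens(s):
    return s.lower().split()

# per-class word counts for a multinomial Naive Bayes with add-one smoothing
counts = {"functional": Counter(), "quality": Counter()}
for text, label in train:
    counts[label].update(tokens(text))
vocab = {w for c in counts.values() for w in c}

def classify(text):
    scores = {}
    for label, c in counts.items():
        total = sum(c.values())
        score = math.log(0.5)  # uniform class prior
        for w in tokens(text):
            score += math.log((c[w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)
```

For example, `classify("the system shall respond within one second")` picks the quality class because words like "respond" and "within" only occur in quality examples; a real NLP4RE pipeline would add proper tokenization, larger training data, and evaluation against a held-out set.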
Jialiang Wei, Anne-Lise Courbis, Thomas Lambolais
et al.
Graphical User Interfaces (GUIs) are central to app development projects. App developers may use the GUIs of other apps as a means of requirements refinement and rapid prototyping or as a source of inspiration for designing and improving their own apps. Recent research has thus suggested retrieving relevant GUI designs that match a certain text query from screenshot datasets acquired through crowdsourced or automated exploration of GUIs. However, such text-to-GUI retrieval approaches only leverage the textual information of the GUI elements, neglecting visual information such as icons or background images. In addition, retrieved screenshots are not steered by app developers and lack app features that require particular input data. To overcome these limitations, this paper proposes GUing, a GUI search engine based on a vision-language model called GUIClip, which we trained specifically for the problem of designing app GUIs. For this, we first collected from Google Play app introduction images which display the most representative screenshots and are often captioned (i.e., labelled) by app vendors. Then, we developed an automated pipeline to classify, crop, and extract the captions from these images. This resulted in a large dataset, which we share with this paper, of 303k app screenshots, 135k of which have captions. We used this dataset to train a novel vision-language model, which is, to the best of our knowledge, the first of its kind for GUI retrieval. We evaluated our approach on various datasets from related work and in a manual experiment. The results demonstrate that our model outperforms previous approaches in text-to-GUI retrieval, achieving a Recall@10 of up to 0.69 and a HIT@10 of 0.91. We also explored the performance of GUIClip for other GUI tasks, including GUI classification and sketch-to-GUI retrieval, with encouraging results.
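The retrieval step in such CLIP-style systems follows a standard recipe: embed the text query and the screenshot gallery into a shared space, then rank screenshots by cosine similarity. A minimal sketch with random stand-in embeddings (this shows the generic mechanism only, not the trained GUIClip model):

```python
import numpy as np

def normalize(v):
    # unit-normalize along the embedding dimension
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def top_k(query_emb, screenshot_embs, k=10):
    # cosine similarity of the query against every screenshot embedding
    sims = normalize(screenshot_embs) @ normalize(query_emb)
    order = np.argsort(-sims)[:k]
    return order, sims[order]

rng = np.random.default_rng(0)
gallery = rng.normal(size=(1000, 512))            # stand-in screenshot embeddings
query = gallery[42] + 0.1 * rng.normal(size=512)  # query close to screenshot 42

idx, scores = top_k(query, gallery, k=10)
```

In the real system, `gallery` would come from the image encoder applied to the 303k screenshots and `query` from the text encoder, with both encoders trained jointly on the caption pairs so that matching text and screenshots land near each other.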
[Background.] Empirical research in requirements engineering (RE) is a constantly evolving topic, with a growing number of publications. Several papers address this topic using literature reviews to provide a snapshot of its "current" state and evolution. However, these papers have never built on or updated earlier ones, resulting in overlap and redundancy. The underlying problem is the unavailability of data from earlier works. Researchers need technical infrastructures to conduct sustainable literature reviews. [Aims.] We examine the use of the Open Research Knowledge Graph (ORKG) as such an infrastructure to build and publish an initial Knowledge Graph of Empirical research in RE (KG-EmpiRE) whose data is openly available. Our long-term goal is to continuously maintain KG-EmpiRE with the research community to synthesize a comprehensive, up-to-date, and long-term available overview of the state and evolution of empirical research in RE. [Method.] We conduct a literature review using the ORKG to build and publish KG-EmpiRE which we evaluate against competency questions derived from a published vision of empirical research in software (requirements) engineering for 2020 - 2025. [Results.] From 570 papers of the IEEE International Requirements Engineering Conference (2000 - 2022), we extract and analyze data on the reported empirical research and answer 16 out of 77 competency questions. These answers show a positive development towards the vision, but also the need for future improvements. [Conclusions.] The ORKG is a ready-to-use and advanced infrastructure to organize data from literature reviews as knowledge graphs. The resulting knowledge graphs make the data openly available and maintainable by research communities, enabling sustainable literature reviews.
The 'SAGARA MERENTE' Rural Agribusiness is an agribusiness that applies an integrated farming system, integrating business units in the agriculture, fisheries, and livestock sectors while empowering the local community. Its current management does not yet add value optimally and proportionally, so it has no significant effect on growing the business or on improving the welfare of its managers and the local community. This calls for an analysis of why the business has not performed optimally, using Fault Tree Analysis (FTA) to identify the true root causes of the failures of the 'SAGARA MERENTE' Rural Agribusiness. The research was conducted to obtain a strategy for making the best decision among alternative business development options to optimally increase added value, using Value Engineering (VE) and the Analytical Hierarchy Process (AHP) with Benefit, Cost, Opportunity, and Risk (BCOR) criteria. The results show that the appropriate strategy for the 'SAGARA MERENTE' Rural Agribusiness is a dynamic/aggressive strategy, with the best options for optimal decision-making in developing the business ranked as follows: first, rooster farming; second, fish farming (catfish/tilapia); third, livestock (cattle/goats); fourth, trigona honey cultivation; and fifth, organic farming.
Keywords: business development, Value Engineering (VE), Fault Tree Analysis (FTA), Analytical Hierarchy Process (AHP)-Benefit, Cost, Opportunity, and Risk (BCOR)
Embedding artificial intelligence into systems introduces significant challenges to modern engineering practices. Hazard analysis tools and processes have not yet been adequately adapted to the new paradigm. This paper describes initial research and findings regarding current practices in AI-related hazard analysis and on the tools used to conduct this work. Our goal with this initial research is to better understand the needs of practitioners and the emerging challenges of considering hazards and risks for AI-enabled products and services. Our primary research question is: Can we develop new structured thinking methods and systems engineering tools to support effective and engaging ways for preemptively considering failure modes in AI systems? The preliminary findings from our review of the literature and interviews with practitioners highlight various challenges around integrating hazard analysis into modern AI development processes and suggest opportunities for exploration of usable, human-centered hazard analysis tools.