This study examines how green finance influences high-quality economic development, with a particular focus on its spatial spillover mechanisms. Specifically, we investigate the competing roles of technology spillover and the pollution haven effect. Using provincial panel data from China (2010–2021) and applying a Spatial Durbin Model (SDM), we deconstruct the total effect of green finance into three distinct components: the local technological progress effect, the positive technology spillover effect, and the negative pollution haven effect. While acknowledging limitations related to the macro-level data granularity and the indirect nature of the mechanism tests, our analysis yields three main findings. First, green finance development shows significant regional disparities. It has progressed most rapidly in the eastern region, remained relatively stable in the central region, and declined in the western region. Second, green finance exerts a strong positive direct effect on local high-quality economic development. This promoting effect becomes even stronger in more developed regions. Third, green finance generates significant negative spatial spillovers on neighboring regions. These are primarily driven by the pollution haven effect, which involves the cross-regional relocation of polluting industries. However, local technological progress partially mitigates these adverse externalities. Overall, our findings reveal the dual nature of the spatial externalities associated with green finance. They also highlight the urgency of coordinated regional environmental governance to prevent “green leakage” and to promote balanced, high-quality economic development.
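The decomposition of a Spatial Durbin Model's total effect into direct and indirect (spillover) components follows the standard LeSage-Pace partial-derivative approach. The sketch below illustrates that arithmetic in pure Python; the weight matrix and coefficients are illustrative placeholders, not the values estimated in the study:

```python
# Illustrative SDM effect decomposition: S = (I - rho*W)^-1 (beta*I + theta*W).
# Direct effect = average diagonal of S; indirect (spillover) effect = average
# of each row's off-diagonal sum. All numbers here are made up for illustration.

def mat_inverse(a):
    """Invert a small square matrix by Gauss-Jordan elimination with pivoting."""
    n = len(a)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def sdm_effects(rho, beta, theta, W):
    """Average direct and indirect effects implied by an SDM's estimates."""
    n = len(W)
    I_minus = [[(1.0 if i == j else 0.0) - rho * W[i][j]
                for j in range(n)] for i in range(n)]
    rhs = [[beta * (1.0 if i == j else 0.0) + theta * W[i][j]
            for j in range(n)] for i in range(n)]
    S = mat_mul(mat_inverse(I_minus), rhs)
    direct = sum(S[i][i] for i in range(n)) / n
    indirect = sum(S[i][j] for i in range(n) for j in range(n) if i != j) / n
    return direct, indirect
```

With a row-standardized W and spatial autocorrelation switched off (rho = 0), the direct effect reduces to the own-region coefficient and the indirect effect to the spatial-lag coefficient, mirroring how a positive local effect can coexist with a negative spillover.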
Large Language Models, particularly decoder-only generative models such as GPT, are increasingly used to automate Software Engineering tasks. These models are primarily guided through natural language prompts, making prompt engineering a critical factor in system performance and behavior. Despite their growing role in SE research, prompt-related decisions are rarely documented in a systematic or transparent manner, hindering reproducibility and comparability across studies. To address this gap, we conducted a two-phase empirical study. First, we analyzed nearly 300 papers published at the top-3 SE conferences since 2022 to assess how prompt design, testing, and optimization are currently reported. Second, we surveyed 105 program committee members from these conferences to capture their expectations for prompt reporting in LLM-driven research. Based on the findings, we derived a structured guideline that distinguishes essential, desirable, and exceptional reporting elements. Our results reveal significant misalignment between current practices and reviewer expectations, particularly regarding version disclosure, prompt justification, and threats to validity. We present our guideline as a step toward improving transparency, reproducibility, and methodological rigor in LLM-based SE research.
With the advancement of Agentic AI, researchers are increasingly leveraging autonomous agents to address challenges in software engineering (SE). However, the large language models (LLMs) that underpin these agents often function as black boxes, making it difficult to justify the superiority of Agentic AI approaches over baselines. Furthermore, missing information in the evaluation design description frequently renders the reproduction of results infeasible. To synthesize current evaluation practices for Agentic AI in SE, this study analyzes 18 papers on the topic, published or accepted by ICSE 2026, ICSE 2025, FSE 2025, ASE 2025, and ISSTA 2025. The analysis identifies prevailing approaches and their limitations in evaluating Agentic AI for SE, both in current research and potential future studies. To address these shortcomings, this position paper proposes a set of guidelines and recommendations designed to empower reproducible, explainable, and effective evaluations of Agentic AI in software engineering. In particular, we recommend that Agentic AI researchers make their Thought-Action-Result (TAR) trajectories and LLM interaction data, or summarized versions of these artifacts, publicly accessible. Doing so will enable subsequent studies to more effectively analyze the strengths and weaknesses of different Agentic AI approaches. To demonstrate the feasibility of such comparisons, we present a proof-of-concept case study that illustrates how TAR trajectories can support systematic analysis across approaches.
The concept of life skills is rooted in a way of life that emphasises the mutual exchange of knowledge, attitudes, and interpersonal skills in education. Its objective is to develop diverse skills among students and to prepare them to face life's challenges with determination. The World Health Organization defines life skills as the abilities for adaptive and positive behaviour that enable a person to deal effectively with the problems and challenges of daily life. Life is a unique gift; equipping it with such skills fosters happiness, peace, and prosperity. In line with the study's objectives, this research presents an analytical examination of life skills among secondary-level students, examining the effects of living conditions, gender, and social class on students' life skills and reporting the findings. Future researchers can build on this work and explore other factors bearing on life skills.
The current study presents a steady, laminar, bioconvective magnetohydrodynamic (MHD) flow of a water (H2O)-based hybrid nanofluid containing silver (Ag) and aluminium oxide (Al2O3) nanoparticles. The flow passes over a thin moving slender needle embedded in a porous medium and carries gyrotactic motile microorganisms. The study examines how physical features such as Cattaneo-Christov heat and mass flux and viscous dissipation affect the flow, with the objective of determining the impact of the pertinent parameters on the velocity, temperature, concentration, and microorganism profiles. This type of flow problem is important for controlling heat and fluid flow around a needle, with applications in biotechnology (bioreactors, microbial fuel cells), biomedical engineering, microfluidics, and cooling systems; the investigation is motivated by both scientific curiosity and these practical applications. The governing equations are reduced to nonlinear ordinary differential equations and solved numerically with MATLAB's bvp4c solver. The effects of the parameters on the temperature, velocity, microorganism, and concentration profiles are depicted graphically, and their effects on the local microorganism density number, local Sherwood number, frictional drag coefficient, and local Nusselt number are tabulated. The novelty of this study is that it fills gaps left by Kandasamy et al. [31], with whose results it shows excellent agreement. The findings indicate that increasing the thermal and concentration relaxation parameters lowers the fluid temperature and concentration, respectively; increasing the bioconvection Lewis and Peclet numbers diminishes the microorganism profile; and raising the Dufour and Soret numbers enhances the temperature and concentration distributions, respectively.
Furthermore, adding 1% of aluminium oxide (Al2O3) and silver (Ag) nanoparticles to the base fluid increased frictional drag by 2.64% and 3.03%, respectively, compared to water.
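The solve-the-reduced-ODEs workflow above can be sketched with a shooting method. The study itself uses MATLAB's bvp4c; the boundary-value problem below (y'' = 1.5 y², y(0) = 4, y(1) = 1, with exact solution y = 4/(1+x)² and true initial slope y'(0) = -8) is a standard illustrative stand-in, not the needle-flow system:

```python
# Minimal shooting method for a two-point nonlinear BVP: integrate the ODE as
# an initial-value problem with RK4, then bisect on the unknown initial slope
# until the far boundary condition y(1) = 1 is met.

def derivs(y, v):
    """Right-hand side of the first-order system for y'' = 1.5*y**2."""
    return v, 1.5 * y * y

def rk4_integrate(slope, n_steps=100):
    """Integrate from x=0 to x=1 with y(0)=4, y'(0)=slope; return y(1)."""
    h = 1.0 / n_steps
    y, v = 4.0, slope
    for _ in range(n_steps):
        k1y, k1v = derivs(y, v)
        k2y, k2v = derivs(y + 0.5 * h * k1y, v + 0.5 * h * k1v)
        k3y, k3v = derivs(y + 0.5 * h * k2y, v + 0.5 * h * k2v)
        k4y, k4v = derivs(y + h * k3y, v + h * k3v)
        y += h * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return y

def shoot(target=1.0, lo=-9.0, hi=-7.0, tol=1e-12):
    """Bisect on the initial slope; y(1) increases with the slope here."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rk4_integrate(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The recovered slope converges to the exact value -8, the same verify-against-a-known-solution check the study performs against Kandasamy et al. [31].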
In the social media environment, fake news is a significant issue, whether it spreads online or offline across the field of journalism. Media and publishing houses have expressed concern and are looking for solutions to the problem, and blockchain is one the industry has to offer: applications include digital security trading, source or identity verification, and tracing quotes attached to a given news piece, photo, or video. A shared ledger records timely files associated with a specific article, video, or image, so that a bad actor cannot tamper with the details, while also providing documentation of the metadata generated at every phase. This raises the cost of disseminating false information by recording forwarding and by explicit disclosure to persons who have first-hand knowledge of the subject. The proposed framework for combating fake news is built on blockchain technology, which allows news organizations to deliver their content to their subscribers transparently. The framework was created for journalists and can be integrated into any current platform to publish a news piece along with its asset statistics.
Mohammed Latif Siddiq, Arvin Islam-Gomes, Natalie Sekerak, et al.
Reproducibility is a cornerstone of scientific progress, yet its state in large language model (LLM)-based software engineering (SE) research remains poorly understood. This paper presents the first large-scale, empirical study of reproducibility practices in LLM-for-SE research. We systematically mined and analyzed 640 papers published between 2017 and 2025 across premier software engineering, machine learning, and natural language processing venues, extracting structured metadata from publications, repositories, and documentation. Guided by four research questions, we examine (i) the prevalence of reproducibility smells, (ii) how reproducibility has evolved over time, (iii) whether artifact evaluation badges reliably reflect reproducibility quality, and (iv) how publication venues influence transparency practices. Using a taxonomy of seven smell categories (Code and Execution, Data, Documentation, Environment and Tooling, Versioning, Model, and Access and Legal), we manually annotated all papers and associated artifacts. Our analysis reveals persistent gaps in artifact availability, environment specification, versioning rigor, and documentation clarity, despite modest improvements in recent years and increased adoption of artifact evaluation processes at top SE venues. Notably, we find that badges often signal artifact presence but do not consistently guarantee execution fidelity or long-term reproducibility. Motivated by these findings, we provide actionable recommendations to mitigate reproducibility smells and introduce a Reproducibility Maturity Model (RMM) to move beyond binary artifact certification toward multi-dimensional, progressive evaluation of reproducibility rigor.
The paper entitled "Qualitative Methods in Empirical Studies of Software Engineering" by Carolyn Seaman was published in TSE in 1999. It has been chosen as one of the most influential papers from the third decade of TSE's 50-year history. In this retrospective, the authors discuss the evolution of the use of qualitative methods in software engineering research, the impact it has had on research and practice, and reflections on what is coming and deserves attention.
Electroencephalography (EEG) datasets are characterized by low signal-to-noise ratios and unquantifiable noisy labels, which hinder the classification performance in rapid serial visual presentation (RSVP) tasks. Previous approaches primarily relied on supervised learning (SL), which may result in overfitting and reduced generalization performance. In this paper, we propose a novel multi-task collaborative network (MTCN) that integrates both SL and self-supervised learning (SSL) to extract more generalized EEG representations. The original SL task, i.e., the RSVP EEG classification task, is used to capture initial representations and establish classification thresholds for targets and non-targets. Two SSL tasks, including the masked temporal/spatial recognition task, are designed to enhance temporal dynamics extraction and capture the inherent spatial relationships among brain regions, respectively. The MTCN simultaneously learns from multiple tasks to derive a comprehensive representation that captures the essence of all tasks, thus mitigating the risk of overfitting and enhancing generalization performance. Moreover, to facilitate collaboration between SL and SSL, MTCN explicitly decomposes features into task-specific features and task-shared features, leveraging both label information with SL and feature information with SSL. Experiments conducted on THU, CAS, and GIST datasets illustrate the significant advantages of learning more generalized features in RSVP tasks. Our code is publicly accessible at <uri>https://github.com/Tammie-Li/MTCN</uri>.
For multi-agent reinforcement learning systems (MARLS), the problem formulation generally involves investing massive reward engineering effort specific to a given problem. However, this effort often cannot be translated to other problems; worse, it is wasted when the system dynamics change drastically. The problem is further exacerbated in sparse reward scenarios, where a meaningful heuristic can assist the policy convergence task. We propose GOVerned Reward Engineering Kernels (GOV-REK), which dynamically assign reward distributions to agents in a MARLS during its learning stage. We also introduce governance kernels, which exploit the underlying structure in the state or joint action space to assign meaningful agent reward distributions. During the agent learning stage, GOV-REK iteratively explores different reward distribution configurations with a Hyperband-like algorithm to learn ideal agent reward models in a problem-agnostic manner. Our experiments demonstrate that these meaningful reward priors robustly jumpstart the learning process for effectively learning different MARL problems.
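The Hyperband-like exploration of reward-distribution configurations can be sketched with its inner loop, successive halving: evaluate all candidates on a small budget, keep the best fraction, and repeat with a larger budget. The candidate "configurations" and scoring function below are illustrative placeholders, not the governance kernels from the paper:

```python
def successive_halving(configs, evaluate, min_budget=1, eta=2):
    """Keep the top 1/eta of configurations each rung while multiplying the
    training budget by eta, as in Hyperband's successive-halving inner loop."""
    budget = min_budget
    while len(configs) > 1:
        scores = {c: evaluate(c, budget) for c in configs}
        configs = sorted(configs, key=scores.get, reverse=True)
        configs = configs[:max(1, len(configs) // eta)]
        budget *= eta
    return configs[0]

# Toy stand-in for "train agents under reward config c for `budget` steps and
# report performance": quality peaks at c = 0.7 and is revealed more clearly
# as the budget grows. A real run would train MARL agents here instead.
def toy_evaluate(config, budget):
    return (1.0 - abs(config - 0.7)) * min(1.0, budget / 8.0)

best = successive_halving([0.1, 0.3, 0.5, 0.7, 0.9], toy_evaluate)
```

Cheap early rungs prune unpromising reward priors before the expensive full-budget training is spent on them, which is what makes the search problem-agnostic.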
Underwater object detection (UOD) has attracted widespread attention, being of great significance for marine resource management, underwater security and defense, underwater infrastructure inspection, etc. However, high-quality UOD tasks often encounter challenges such as image quality degradation, complex backgrounds, and occlusions between objects at different scales. This paper presents a collaborative framework for UOD via joint image enhancement and super-resolution to address the above problems. Specifically, a joint-oriented framework is constructed incorporating underwater image enhancement and super-resolution techniques. The proposed framework is capable of generating a detection-favoring appearance to provide more visual cues for UOD tasks. Furthermore, a plug-and-play self-attention mechanism, termed multihead blurpooling fusion network (MBFNet), is developed to capture sufficient contextual information by focusing on the dependencies between multiscale feature maps, so that the UOD performance of our proposed framework can be further facilitated. A comparative study on the popular URPC2020 and Brackish datasets demonstrates the superior performance of our proposed collaborative framework, and the ablation study also validates the effectiveness of each component within the framework.
B. Valarmathi, N. Srinivasa Gupta, G. Prakash, et al.
Deep learning and computer vision algorithms are applied to identify a dog's breed from an image: the user submits an image of a dog, and the model assigns it to one of the 120 breeds in the dataset. The proposed work uses various deep learning algorithms, namely Xception, VGG19, NASNetMobile, EfficientNetV2M, ResNet152V2, a hybrid of Inception-v3 & Xception, and a hybrid of EfficientNetV2M, NASNetMobile, Inception-v3 & Xception, to predict dog breeds. The existing system used ResNet101, ResNet50, InceptionResNetV2, and Inception-v3 on the Stanford Dogs standard dataset, applying transfer learning with data augmentation to increase accuracy and achieving validation accuracy scores of 71.63% for ResNet101, 63.78% for ResNet50, 40.72% for InceptionResNetV2, and 34.84% for Inception-v3. This paper compares the proposed algorithms with these existing ones; in the existing system, ResNet101 gave the highest accuracy at 71.63%. The proposed algorithms achieve validation accuracy scores of 91.9% for Xception, 55% for VGG19, 83.47% for NASNetMobile, 89.05% for EfficientNetV2M, 87.38% for ResNet152V2, 92.4% for the hybrid of Inception-v3 & Xception, and 89.00% for the hybrid of EfficientNetV2M, NASNetMobile, Inception-v3 & Xception. Among these, the hybrid of Inception-v3 & Xception gives the highest accuracy at 92.4%, outperforming single models such as Xception, VGG19, Inception-v3, ResNet50, and ResNet101.
Systems Thinking (ST) has become essential for practitioners and experts dealing with turbulent and complex environments. The Twitter medium harbors social capital, including systems thinkers; however, few studies in the extant literature investigate whether, and how, experts' systems thinking skills can be revealed through Twitter analysis. This study aims to reveal the systems thinking levels of experts from their Twitter accounts represented as a network. Latent Twitter network clusters are first unraveled, followed by a centrality analysis of the experts' follower networks interpreted in terms of systems thinking dimensions. COVID-19 serves as a relevant case for investigating the relationship between COVID-19 experts' Twitter networks and their systems thinking capabilities. A sample of 55 trusted expert Twitter accounts related to COVID-19 was selected based on lists from Forbes, Fortune, and Bustle, and the Twitter network was constructed from features extracted from these accounts. Community detection reveals three distinct groups of experts. To relate systems thinking qualities to each group, systems thinking dimensions are matched with follower network characteristics such as node-level metrics and centrality measures, including degree, betweenness, closeness, and eigenvector centrality. Comparison of the 55 experts' follower network characteristics elucidates three clusters with significant differences in centrality scores and node-level metrics. The clusters with higher, medium, and lower scores can be classified as the Twitter accounts of holistic thinkers, middle thinkers, and reductionist thinkers, respectively. In conclusion, systems thinking capabilities can be traced through unique network patterns in the follower network characteristics associated with systems thinking dimensions.
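The node-level centrality measures used to profile the clusters can be illustrated on a toy follower graph. The graph and functions below are a minimal sketch, not the study's 55-account network or its analysis pipeline:

```python
from collections import deque

def degree_centrality(adj):
    """Fraction of the other nodes each node is directly connected to."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def closeness_centrality(adj):
    """(n-1) / (sum of shortest-path distances), computed via BFS per node."""
    n = len(adj)
    result = {}
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        result[src] = (n - 1) / sum(dist[v] for v in dist if v != src)
    return result

# Toy undirected follower graph: "hub" is connected to every other account,
# the pattern one would expect from a highly central (holistic-thinker) node.
toy = {
    "hub": {"a", "b", "c"},
    "a": {"hub"},
    "b": {"hub"},
    "c": {"hub"},
}
```

Ranking accounts by such scores, and comparing score distributions across detected communities, is the mechanism by which the clusters are labeled holistic, middle, or reductionist.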
Fluorescent molecules are versatile nanoscale emitters that enable detailed observations of biophysical processes with nanoscale resolution. Because they are well-approximated as electric dipoles, imaging systems can be designed to visualize their 3D positions and 3D orientations, so-called dipole-spread function (DSF) engineering, for 6D super-resolution single-molecule orientation-localization microscopy (SMOLM). We review fundamental image-formation theory for fluorescent dipoles, as well as how phase and polarization modulation can be used to change the image of a dipole emitter produced by a microscope, called its DSF. We describe several methods for designing these modulations for optimum performance, as well as compare recently developed techniques, including the double-helix, tetrapod, crescent, and DeepSTORM3D learned point-spread functions (PSFs), in addition to the tri-spot, vortex, pixOL, raPol, CHIDO, and MVR DSFs. We also cover common imaging system designs and techniques for implementing engineered DSFs. Finally, we discuss recent biological applications of 6D SMOLM and future challenges for pushing the capabilities and utility of the technology.
Takeshi Yoshida, Yuki Onishi, Takuya Kawahara, et al.
In this study, we propose a method to automate fruit harvesting with a fruit harvesting robot equipped with robotic arms. Given the future growth of the world population, food shortages are expected to accelerate. Since much of Japan's agriculture depends on imports, Japan is expected to be greatly affected by this upcoming food shortage. In recent years, the number of agricultural workers in Japan has been decreasing and the workforce is aging; as a result, there is a need to automate and reduce labor in agricultural work using agricultural machinery. Fruit cultivation in particular requires a great deal of manual labor due to the variety of orchard conditions and tree shapes, causing mechanization and automation to lag behind. In this study, a dual-armed fruit harvesting robot was designed and fabricated to reach most of the fruits on a joint V-shaped trellis that was cultivated and adjusted for the robot. To harvest the fruit, the robot uses sensors and computer vision to detect and estimate the position of the fruit and then inserts end-effectors into the lower part of the fruit. During this process, there is a possibility of collision within the robot itself or with other fruits, depending on the position of the fruit to be harvested. In this study, inverse kinematics and a fast path planning method using random sampling are used to harvest fruits with the robot arms. This method makes it possible to control the robot arms without interfering with the fruit or the other robot arm by treating them as obstacles. Through experiments, this study showed that these methods can detect pears and apples outdoors and automatically harvest them using the robot arms.
Jean Pierre Uwiringiyimana, Umar Khayam, Suwarno, et al.
This article presents the design of an ultra-high-frequency (UHF), ultra-wideband (UWB) antenna for partial discharge (PD) detection on high-voltage and medium-voltage power system equipment. The proposed UHF antenna has a working frequency band of 1.2-4.5 GHz, covering a total bandwidth of 3.3 GHz with a return loss of less than −10 dB across the entire operating frequency range. The Computer Simulation Technology (CST) Microwave Studio software was used to design, simulate, and optimize the antenna. After the simulation and optimization process, the antenna prototype was fabricated on an FR-4 substrate of 1.6 mm thickness and dielectric permittivity 4.4. The antenna has a compact size of 100 mm <inline-formula> <tex-math notation="LaTeX">$\times100$ </tex-math></inline-formula> mm. The radiating patch and the ground plane are made of annealed copper of 0.035 mm thickness. The simulation and measurement results are in good agreement: the return loss is less than −10 dB, with a voltage standing wave ratio (VSWR) < 2, within the frequency range of interest. The antenna's PD-sensing performance is compared with that of a commercial high-frequency current transformer (HFCT). To validate the sensitivity of the designed antenna, experimental PD measurements were carried out using an epoxy slab inserted between a parallel-plate electrode model to generate surface discharge on the insulator, and a needle-plate electrode configuration to generate corona discharge in transformer oil. The PD measurement results show that the designed antenna has high sensitivity, making it a suitable candidate for UHF partial discharge monitoring on high-voltage and medium-voltage power assets.
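The two figures of merit quoted together, return loss below −10 dB and VSWR < 2, are linked by a standard identity: a return loss of RL dB corresponds to a reflection-coefficient magnitude |Γ| = 10^(−RL/20), and VSWR = (1 + |Γ|)/(1 − |Γ|). A minimal sketch of that conversion (only the thresholds come from the abstract; no antenna data is reproduced here):

```python
def return_loss_to_vswr(rl_db):
    """Map return loss in dB (given as a positive number) to VSWR.
    |Gamma| = 10**(-RL/20);  VSWR = (1 + |Gamma|) / (1 - |Gamma|)."""
    gamma = 10 ** (-rl_db / 20.0)
    return (1 + gamma) / (1 - gamma)

# A 10 dB return loss (S11 = -10 dB) gives |Gamma| ~ 0.316 and VSWR ~ 1.92,
# which is why meeting the -10 dB criterion also satisfies VSWR < 2.
```

Deeper matching, e.g. a 20 dB return loss, tightens the VSWR to about 1.22, so the two criteria are consistent rather than independent.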
Endpoint users are usually viewed as the highest-risk element in cybersecurity. At the same time, they need to be protected not only at the individual level but also from the state's perspective, to counter threats like botnets that harvest weakly secured endpoints and forge an army of so-called zombies, often used to attack critical infrastructure or other systems vital to the state. Measures aimed at citizens, such as the Israeli hotline for cybersecurity incidents or Estonian educational efforts, have already begun to be implemented; however, little effort has been made to understand the recipients of such measures. Our study uses the survey method to partly fill this gap and investigate how willing endpoint users (citizens) are to protect themselves against cyber threats. To strengthen validity, a unique comparison was made between cyber threats and physical threats of comparable impact. The results show statistically significant differences between comparable cyber-physical pairs, indicating that a large portion of the sample was unable to assess the threat environment appropriately and that state intervention with fitting countermeasures is required. The resultant matrix of answer frequencies denotes what portion of respondents are willing to invest a certain amount of time and money in countering given threats, enabling identification of the weak points where state investment is needed most.
The rapid advances in technology over the last decade have significantly altered the engineering knowledge and skills required in modern industry. In response to these changing professional requirements, engineering institutions have updated their curricula and pedagogical practices. However, most curricular changes have focused on the core engineering courses, with little consideration for the auxiliary courses in mathematics and the sciences. In this paper, we propose a new, augmented mathematics curriculum designed to meet the requirements of the modern, technology-based engineering workplace. The proposed updates require minimal resources and can be seamlessly integrated into the existing curriculum.