The Effect of Adding a Foam Configuration to a Nine-Cell Square Crash Box on Deformation Patterns and Energy Absorption
Prayogo Arie Bowo, Kholis Nur Faizin
A crash box is a crucial component in a vehicle's structure, designed to absorb impact energy, and crash box designs continue to be developed to improve energy absorption capability. In this study, a nine-cell square crash box is varied with foam configurations; foam was chosen as a filler because of its light weight. The research method was computer simulation using ANSYS software. The nine-cell square crash box model consists of three squares with side lengths of 30 mm, 50 mm, and 65 mm, with a connecting rib thickness of 2 mm. Crash box performance is evaluated based on energy absorption and deformation patterns. Frontal loading of the crash box was modeled at an impact speed of 12 m/s. The simulation results show that the foam configuration affects both energy absorption and deformation patterns: the CF-CB 3 model absorbs 34% more energy than the crash box without foam, owing to a more uniform deformation pattern.
Electrical engineering. Electronics. Nuclear engineering, Electronic computers. Computer science
Effects of brain-computer interface-based rehabilitation on lower limb function and activities of daily living after stroke: a systematic review and meta-analysis
Changshuo Liu, Jiaxu Han, Yuhui Wang
et al.
Background: Lower limb motor dysfunction is a common sequela of stroke that significantly impacts patients' walking safety and independence in daily living. Although brain-computer interface (BCI) technology has demonstrated efficacy in upper limb rehabilitation, its effects on lower limb recovery have not yet been systematically evaluated. Methods: A systematic literature search was conducted across seven databases (PubMed, Web of Science, Embase, China National Knowledge Infrastructure, SinoMed, VIP Database, and Wanfang Data) to identify studies investigating BCI for post-stroke lower limb dysfunction, encompassing records published up to September 2025. All statistical analyses were performed using Review Manager software (version 5.4.1). Results: Thirteen studies involving 582 participants were included. BCI training significantly improved scores on the Fugl-Meyer Assessment for Lower Extremity (FMA-LE, MD = 2.67, 95% CI: 2.31–3.03, P < 0.00001, I² = 0%), the Berg Balance Scale (BBS, MD = 7.04, 95% CI: 3.14–10.94, P = 0.0004), and the Modified Barthel Index (MBI, MD = 6.72, 95% CI: 1.74–11.69, P = 0.008). Furthermore, a single study reported significant improvement in functional mobility measured by the Timed Up and Go Test (TUGT). Subgroup analysis for activities of daily living (MBI) showed that a cumulative training time of ≥ 500 min was associated with greater improvement. Conclusion: BCI-based training is an effective approach for improving lower limb recovery after stroke, demonstrating benefits in motor function, balance, and functional mobility. While evidence for certain outcomes remains limited, the dose-dependent effect on daily living activities underscores the importance of sufficient training duration. Future research should validate these findings and clarify effects across a broader range of functional measures. Systematic review registration: https://www.crd.york.ac.uk/PROSPERO/view/CRD420251150558, identifier: CRD420251150558.
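The pooled mean differences above come from an inverse-variance meta-analysis in Review Manager. As an illustration only (with hypothetical study values, not the review's actual data), fixed-effect inverse-variance pooling of per-study mean differences can be sketched as:

```python
import math

def pooled_md_fixed(studies):
    """Fixed-effect inverse-variance pooling of mean differences.

    studies: list of (md, se) pairs, one per primary study.
    Returns the pooled MD and its 95% confidence interval.
    """
    weights = [1.0 / se ** 2 for _, se in studies]          # w_i = 1 / SE_i^2
    md = sum(w * m for (m, _), w in zip(studies, weights)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))                      # SE of pooled MD
    return md, (md - 1.96 * se, md + 1.96 * se)

# Two hypothetical studies with equal precision:
md, (lo, hi) = pooled_md_fixed([(2.0, 0.5), (3.0, 0.5)])
```

Random-effects pooling additionally inflates each study's variance by a between-study component; with I² = 0% (as for FMA-LE above), the two models coincide.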
Neurology. Diseases of the nervous system
Improving Malaria diagnosis through interpretable customized CNNs architectures
Md. Faysal Ahamed, Md Nahiduzzaman, Golam Mahmud
et al.
Abstract Malaria, which is spread via female Anopheles mosquitoes and is caused by the Plasmodium parasite, persists as a serious illness, especially in areas with high mosquito density. Traditional detection techniques, such as examining blood samples under a microscope, tend to be labor-intensive, unreliable, and dependent on specialized personnel. To address these challenges, we employed several customized convolutional neural networks (CNNs), including a Parallel Convolutional Neural Network (PCNN), a Soft Attention Parallel Convolutional Neural Network (SPCNN), and a Soft Attention after Functional Block Parallel Convolutional Neural Network (SFPCNN), to improve the effectiveness of malaria diagnosis. Among these, the SPCNN emerged as the most successful model, outperforming all others across evaluation metrics. The SPCNN achieved a precision of 99.38 ± 0.21%, recall of 99.37 ± 0.21%, F1 score of 99.37 ± 0.21%, accuracy of 99.37 ± 0.30%, and an area under the receiver operating characteristic curve (AUC) of 99.95 ± 0.01%, demonstrating its robustness in detecting malaria parasites. Furthermore, we evaluated various transfer learning (TL) algorithms, including VGG16, ResNet152, MobileNetV3Small, EfficientNetB6, EfficientNetB7, DenseNet201, Vision Transformer (ViT), Data-efficient Image Transformer (DeiT), ImageIntern, and Swin Transformer (versions v1 and v2). The proposed SPCNN model surpassed all these TL methods in every evaluation measure. The SPCNN model, with 2.207 million parameters and a size of 26 MB, is more complex than the PCNN but simpler than the SFPCNN. Despite this, the SPCNN exhibited the fastest testing time (0.00252 s), making it more computationally efficient than both the PCNN and SFPCNN. We assessed model interpretability using feature activation maps, Gradient-weighted Class Activation Mapping (Grad-CAM), and SHapley Additive exPlanations (SHAP) visualizations for all three architectures, illustrating why the SPCNN outperformed the others.
The findings from our experiments show a significant improvement in malaria parasite diagnosis. The proposed approach outperforms traditional manual microscopy in terms of both accuracy and speed. This study highlights the importance of utilizing cutting-edge technologies to develop robust and effective diagnostic tools for malaria prevention.
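The abstract does not give the SPCNN's exact attention block; as a hedged sketch of the general soft-attention idea it invokes (softmax weights over spatial positions rescaling a feature map):

```python
import numpy as np

def soft_attention(features):
    """Spatial soft attention over an (H, W, C) feature map.

    A per-position saliency score is turned into softmax weights, which
    rescale the features. This is a generic sketch of the mechanism; the
    SPCNN's actual attention design is not specified in the abstract.
    """
    scores = features.mean(axis=-1)            # (H, W) per-position score
    w = np.exp(scores - scores.max())          # numerically stable softmax
    w /= w.sum()
    return features * w[..., None]             # reweighted feature map
```

In a trained network the scores would come from learned parameters rather than a plain channel mean; the softmax reweighting step is the same.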
Applying auxiliary supervised depth-assisted transformer and cross modal attention fusion in monocular 3D object detection
Zhijian Wang, Jie Liu, Yixiao Sun
et al.
Monocular 3D object detection is the most widely applied and challenging solution for autonomous driving, because 2D images lack 3D information. Existing methods are limited by inaccurate depth estimates caused by inequivalent supervision targets, and the joint use of depth and visual features faces the problem of heterogeneous fusion. In this article, we propose the Depth Detection Transformer (Depth-DETR), applying an auxiliary supervised depth-assisted transformer and cross-modal attention fusion to monocular 3D object detection. Depth-DETR introduces two additional depth encoders alongside the visual encoder. The two depth encoders are supervised by ground-truth depth and bounding boxes respectively, working independently to complement each other's limitations and to predict more accurate target distances. Furthermore, Depth-DETR employs cross-modal attention mechanisms to effectively fuse the three different features: a parallel structure of two cross-modal transformers fuses the two depth features with the visual features. Avoiding early fusion between the two depth features enhances the final fused feature and yields better feature representations. Across multiple experimental validations, the Depth-DETR model achieved highly competitive results on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset, with an AP score of 17.49, demonstrating its strong performance in 3D object detection.
Electronic computers. Computer science
Design Obligations for Software, with Examples from Data Abstraction and Adaptive Systems
Mary Shaw
Producing a good software design involves not only writing a definition that satisfies the syntax of the chosen language or the structural constraints of a design paradigm. It also involves upholding a variety of expectations about the behavior of the system: the semantic expectations. These expectations may apply not only at the code level, but also to more abstract system structures such as software architectures. Such high-level design paradigms provide a vocabulary of components or other constructs and ways to compose those constructs, but not all expressible designs are well-formed, and even well-formed designs may fail to satisfy the expectations of the paradigm. Unfortunately, these expectations are often implicit or documented only informally, so they are challenging to discover, let alone uphold. They may, for example, require correct use of complex structures, internal consistency, compliance with external standards, and adherence to design principles. Further, the reasons for design decisions that uphold these expectations are often not explicit in the code or other representation of the system. I introduce the idea of 'design obligations', which are constraints on allowable designs within a given design paradigm that help to assure appropriate use of the paradigm. To illustrate this idea, I discuss design obligations for two paradigms: data abstraction and a class of adaptive systems based on feedback control.
Improving Deep Video Compression by Resolution-adaptive Flow Coding
Zhihao Hu, Zhenghao Chen, Dong Xu
et al.
In learning-based video compression approaches, compressing pixel-level optical flow maps by developing new motion vector (MV) encoders is an essential issue. In this work, we propose a new framework called Resolution-adaptive Flow Coding (RaFC) to effectively compress flow maps both globally and locally, using multi-resolution representations instead of single-resolution representations for both the input flow maps and the output motion features of the MV encoder. To handle complex or simple motion patterns globally, our frame-level scheme RaFC-frame automatically decides the optimal flow map resolution for each video frame. To cope with different types of motion patterns locally, our block-level scheme RaFC-block can also select the optimal resolution for each local block of motion features. In addition, the rate-distortion criterion is applied to both RaFC-frame and RaFC-block to select the optimal motion coding mode for effective flow coding. Comprehensive experiments on four benchmark datasets (HEVC, VTL, UVG, and MCL-JCV) clearly demonstrate the effectiveness of our overall RaFC framework combining RaFC-frame and RaFC-block for video compression.
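The rate-distortion criterion mentioned above chooses, among candidate coding modes (here, flow resolutions), the one minimizing a Lagrangian cost J = D + λR. A minimal sketch of this generic selection rule, with hypothetical distortion/rate values (the exact cost terms RaFC uses are not given in the abstract):

```python
def select_mode(candidates, lam):
    """Pick the coding mode minimizing the Lagrangian RD cost J = D + lam * R.

    candidates: list of dicts with distortion "D" and rate "R".
    Generic rate-distortion mode selection, not RaFC's actual implementation.
    """
    return min(candidates, key=lambda c: c["D"] + lam * c["R"])

# Hypothetical flow-resolution candidates for one frame or block:
modes = [
    {"name": "full-res", "D": 6.0, "R": 5.0},   # lower distortion, more bits
    {"name": "low-res", "D": 10.0, "R": 2.0},   # higher distortion, fewer bits
]
```

The trade-off parameter λ controls the operating point: a small λ favors low distortion, while a large λ penalizes rate and pushes the choice toward the cheaper mode.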
Computer Science
Advanced Model Consistency Restoration with Higher-Order Short-Cut Rules
Lars Fritsche, Jens Kosiol, Alexander Lauer
et al.
Sequential model synchronisation is the task of propagating changes from one model to another correlated one to restore consistency. It is challenging to perform this propagation in a least-changing way that avoids unnecessary deletions (which might cause information loss). From a theoretical point of view, so-called short-cut (SC) rules have been developed that enable provably correct propagation of changes while avoiding information loss. However, to be able to react to every possible change, an infinite set of such rules might be necessary. Practically, only small sets of pre-computed basic SC rules have been used, severely restricting the kinds of changes that can be propagated without loss of information. In this work, we close that gap by developing an approach to compute the more complex required SC rules on-the-fly during synchronisation. These higher-order SC rules allow us to cope with more complex scenarios in which multiple changes must be handled in one step. We implemented our approach in the model transformation tool eMoflon. An evaluation shows that the overhead of computing higher-order SC rules on-the-fly is tolerable and at times even improves overall performance. Moreover, completely new scenarios can be handled without loss of information.
Logic, Electronic computers. Computer science
A functional prototype for measuring and querying UV levels using the Internet of Things
Jesús Velázquez Macias, Claudia Guadalupe Lara Torres, José Alberto Vela Dávila
et al.
Access to advanced devices for measuring natural phenomena is increasingly affordable for personal projects; one example is sensors that work with microcontrollers. The general objective of this work is to develop a device that detects the level of solar radiation (UVA, UVB) present in the city of Zacatecas, whose readings can be received in real time via instant messaging, together with recommendations suggested by the WHO according to the detected intensity level, all supported by Internet of Things technology and following a descriptive methodology covering the tools and processes used. Prolonged exposure to these rays emitted by the sun is strongly linked to various effects on people's skin, such as burns or even cancer, so having information on radiation levels throughout the day is of utmost importance to prevent possible harmful health effects. The result is a hardware prototype that interacts with the different software segments through the Telegram platform, providing the detected UV level in real time.
Rango: Adaptive Retrieval-Augmented Proving for Automated Software Verification
Kyle Thompson, Nuno Saavedra, Pedro Carrott
et al.
Formal verification using proof assistants, such as Coq, enables the creation of high-quality software. However, the verification process requires significant expertise and manual effort to write proofs. Recent work has explored automating proof synthesis using machine learning and large language models (LLMs), and has shown that identifying relevant premises, such as lemmas and definitions, can aid synthesis. We present Rango, a fully automated proof synthesis tool for Coq that automatically identifies relevant premises as well as similar proofs from the current project and uses them during synthesis. Rango uses retrieval augmentation at every step of the proof to automatically determine which proofs and premises to include in the context of its fine-tuned LLM. In this way, Rango adapts to the project and to the evolving state of the proof. We create a new dataset, CoqStoq, of 2,226 open-source Coq projects and 196,929 theorems from GitHub, which includes both training data and a curated evaluation benchmark of well-maintained projects. On this benchmark, Rango synthesizes proofs for 32.0% of the theorems, which is 29% more theorems than the prior state-of-the-art tool Tactician. Our evaluation also shows that adding relevant proofs to its context leads to a 47% increase in the number of theorems Rango proves.
Action Research with Industrial Software Engineering -- An Educational Perspective
Yvonne Dittrich, Johan Bolmsten, Catherine Seidelin
Action research provides the opportunity to explore the usefulness and usability of software engineering methods in industrial settings, and makes it possible to develop methods, tools and techniques with software engineering practitioners. However, as the research moves beyond the observational approach, it requires a different kind of interaction with the software development organisation. This makes action research a challenging endeavour, and it makes it difficult to teach action research through a course that goes beyond explaining the principles. This chapter is intended to support learning and teaching action research, by providing a rich set of examples, and identifying tools that we found helpful in our action research projects. The core of this chapter focusses on our interaction with the participating developers and domain experts, and the organisational setting. This chapter is structured around a set of challenges that reoccurred in the action research projects in which the authors participated. Each section is accompanied by a toolkit that presents related techniques and tools. The exercises are designed to explore the topics, and practise using the tools and techniques presented. We hope the material in this chapter encourages researchers who are new to action research to further explore this promising opportunity.
Apples, Oranges, and Software Engineering: Study Selection Challenges for Secondary Research on Latent Variables
Marvin Wyrich, Marvin Muñoz Barón, Justus Bogner
Software engineering (SE) is full of abstract concepts that are crucial for both researchers and practitioners, such as programming experience, team productivity, code comprehension, and system security. Secondary studies aimed at summarizing research on the influences and consequences of such concepts would therefore be of great value. However, the inability to measure abstract concepts directly poses a challenge for secondary studies: primary studies in SE can operationalize such concepts in many ways. Standardized measurement instruments are rarely available, and even if they are, many researchers do not use them or do not even provide a definition for the studied concept. SE researchers conducting secondary studies therefore have to decide a) which primary studies intended to measure the same construct, and b) how to compare and aggregate vastly different measurements for the same construct. In this experience report, we discuss the challenge of study selection in SE secondary research on latent variables. We report on two instances where we found it particularly challenging to decide which primary studies should be included for comparison and synthesis, so as not to end up comparing apples with oranges. Our report aims to spark a conversation about developing strategies to address this issue systematically and pave the way for more efficient and rigorous secondary studies in software engineering.
Therapeutic Efficacy of a Formulation Prepared with <i>Linum usitatissimum</i> L., <i>Plantago ovata</i> Forssk., and Honey on Uncomplicated Pelvic Inflammatory Disease Analyzed with Machine Learning Techniques
Sana Qayyum, Arshiya Sultana, Md Belal Bin Heyat
et al.
A single-blind double-dummy randomized study was conducted in diagnosed patients (n = 66) to compare the efficacy of Linseeds (<i>Linum usitatissimum</i> L.), Psyllium (<i>Plantago ovata</i> Forssk.), and honey in uncomplicated pelvic inflammatory disease (uPID) with standard drugs using experimental and computational analysis. The pessary group received placebo capsules orally twice daily plus a per vaginum cotton pessary of powder from linseeds and psyllium seeds, each weighing 3 g, with honey (5 mL) at bedtime. The standard group received 100 mg of doxycycline twice daily and 400 mg of metronidazole three times daily orally plus a placebo cotton pessary per vaginum at bedtime for 14 days. The primary outcomes were the clinical features of uPID (vaginal discharge, lower abdominal pain (LAP), low backache (LBA), and pelvic tenderness). The secondary outcomes included leucocytes (WBCs) in vaginal discharge on saline microscopy and the SF-12 health questionnaire. In addition, we classified both the pessary and standard groups using machine learning models, namely Decision Tree (DT), Random Forest (RF), Logistic Regression (LR), and AdaBoost (AB). The pessary group showed a higher percentage reduction than the standard group in abnormal vaginal discharge (87.05% vs. 77.94%), Visual Analogue Scale (VAS)-LAP (80.57% vs. 77.09%), VAS-LBA (74.19% vs. 68.54%), McCormack pain scale (McPS) score for pelvic tenderness (75.39% vs. 67.81%), and WBC count of vaginal discharge (87.09% vs. 83.41%), as well as greater improvement in the SF-12 HRQoL score (94.25% vs. 86.81%). Additionally, our DT 5-fold model achieved the maximum accuracy (61.80%) in the classification. We propose that the pessary treatment is cost-effective, safer, and more effective than standard drugs for treating uPID and improving the HRQoL of women.
Aucubin, plantamajoside, herbacetin, secoisolariciresinol diglucoside, secoisolariciresinol monoglucoside, and various other natural bioactive molecules in psyllium and linseeds have beneficial effects, as they possess anti-inflammatory, antioxidant, antimicrobial, and immunomodulatory properties. The proposed intervention may therefore be a better alternative treatment for genital infections.
Pharmacy and materia medica
Localization and Classification of Gastrointestinal Tract Disorders Using Explainable AI from Endoscopic Images
Muhammad Nouman Noor, Muhammad Nazir, Sajid Ali Khan
et al.
Globally, gastrointestinal (GI) tract diseases are on the rise, and if left untreated, people may die from them. Early discovery and categorization of these diseases can reduce their severity and save lives. Automated procedures are necessary, since manual detection and categorization are laborious, time-consuming, and prone to mistakes. In this work, we present an automated system for the localization and classification of GI diseases from endoscopic images with the help of an encoder-decoder-based model, XceptionNet, and explainable artificial intelligence (AI). Data augmentation is performed at the preprocessing stage, followed by segmentation using the encoder-decoder-based model. Contours are then drawn around the diseased area based on the segmented regions. Finally, classification is performed on the segmented images by well-known classifiers, and results are generated for various train-to-test ratios for performance analysis. For segmentation, the proposed model achieved an 82.08% Dice score, 90.30% mIoU, 94.35% precision, and an 85.97% recall rate. The best-performing classifier achieved 98.32% accuracy, 96.13% recall, and 99.68% precision using the softmax classifier. Comparison with state-of-the-art techniques shows that the proposed model performed well on all the reported performance metrics. We explain this improvement in performance using heat maps with and without the proposed technique.
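The Dice and IoU figures above are standard overlap metrics for segmentation masks. As a small illustrative sketch (not the paper's code):

```python
import numpy as np

def dice_iou(pred, gt):
    """Dice coefficient and IoU for binary segmentation masks.

    pred, gt: arrays of the same shape; nonzero entries mark the region.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())   # 2|A∩B| / (|A| + |B|)
    iou = inter / union                            # |A∩B| / |A∪B|
    return dice, iou
```

Dice is always at least as large as IoU for the same masks (Dice = 2·IoU / (1 + IoU)), which is consistent with the 82.08% Dice vs. 90.30% mIoU above being averaged over different sets of images and classes.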
Technology, Engineering (General). Civil engineering (General)
Origin of superconductivity in hole doped SrBiO3 bismuth oxide perovskite from parameter-free first-principles simulations
Julien Varignon
Abstract The recent discovery of nickel oxide superconductors has highlighted the importance of first-principles simulations for understanding the formation of the bound electron pairs at the core of superconductivity. Nevertheless, superconductivity in oxides is often ascribed to strong electronic correlation effects that density functional theory (DFT) cannot properly take into account, thereby disqualifying this technique. Being isostructural to nickel oxides, Sr1-xKxBiO3 superconductors form an ideal testbed for unveiling the lowest theory level needed to model complex superconductors and the underlying pairing mechanism yielding superconductivity. Here I show that parameter-free DFT simulations capture all the experimental features and related quantities of Sr1-xKxBiO3 superconductors, encompassing the prediction of an insulator-to-metal phase transition upon increasing the K doping content and of an electron-phonon coupling constant of 1.22, in close agreement with the experimental value of 1.3 ± 0.2. The proximity of a disproportionated phase is further demonstrated to be a prerequisite for superconductivity in bismuthates.
Materials of engineering and construction. Mechanics of materials, Computer software
A Systematic Mapping of the Proposition of Benchmarks in the Software Testing and Debugging Domain
Deuslirio da Silva-Junior, Valdemar V. Graciano-Neto, Diogo M. de-Freitas
et al.
Software testing and debugging are standard practices of software quality assurance, since they enable the identification and correction of failures. Benchmarks have been used in that context as groups of programs to support the comparison of different techniques according to pre-established parameters. However, the reasons that inspire researchers to propose novel benchmarks are not fully understood. This article reports the investigation, identification, classification, and externalization of the state of the art on the proposition of benchmarks in the software testing and debugging domains. The study was carried out using systematic mapping procedures according to guidelines widely followed in the software engineering literature. The search identified 1674 studies, from which 25 were selected for analysis. A list of benchmarks is provided and descriptively mapped according to their characteristics, motivations, and scope of use. The lack of data to support comparison between available and novel software testing and debugging techniques is the main motivation for proposing benchmarks. Advancements in the standardization and prescription of benchmark structure and composition are still required; establishing such a standard could foster benchmark reuse, thereby saving time and effort in the engineering of benchmarks for software testing and debugging.
ANTASID: A Novel Temporal Adjustment to Shannon’s Index of Difficulty for Quantifying the Perceived Difficulty of Uncontrolled Pointing Tasks
Mohammad Ridwan Kabir, Mohammad Ishrak Abedin, Rizvi Ahmed
et al.
Shannon's Index of Difficulty (ID), reputable for quantifying the perceived difficulty of pointing tasks as a logarithmic relationship between movement-amplitude (A) and target-width (W), is used for modeling the corresponding observed movement-times (MT_O) in such tasks in controlled experimental setups. However, real-life pointing tasks are both spatially and temporally uncontrolled, being influenced by factors such as human aspects, subjective behavior, the context of interaction, and the inherent speed-accuracy trade-off, in which emphasizing accuracy compromises speed of interaction and vice versa. Effective target-width (W_e) is considered a spatial adjustment for compensating accuracy. However, no significant adjustment exists in the literature for compensating speed in different contexts of interaction in these tasks. As a result, without any temporal adjustment, the true difficulty of an uncontrolled pointing task may be inaccurately quantified using Shannon's ID. To verify this, we propose the ANTASID (A Novel Temporal Adjustment to Shannon's ID) formulation with detailed performance analysis.
We hypothesized a temporal adjustment factor (t) as a binary logarithm of MT_O, compensating for speed due to contextual differences and minimizing the non-linearity between movement-amplitude and target-width. Considering spatial and/or temporal adjustments to ID, we conducted regression analysis using our own and benchmark datasets in both controlled and uncontrolled pointing-task scenarios with a generic mouse. The ANTASID formulation showed significantly superior fitness values and throughput in all scenarios while reducing the standard error. Furthermore, the quantification of ID with ANTASID varied significantly compared to the classical formulations of Shannon's ID, validating the purpose of this study.
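For reference, Shannon's formulation is ID = log2(A/W + 1), the spatial adjustment replaces W with the effective width W_e, and the abstract describes the temporal factor as t = log2(MT_O); how ANTASID combines t with ID is not given here, so the sketch below computes the ingredients separately:

```python
import math

def shannon_id(A, W):
    """Shannon's index of difficulty (bits): ID = log2(A/W + 1)."""
    return math.log2(A / W + 1)

def effective_id(A, W_e):
    """Spatially adjusted ID using the effective target width W_e."""
    return math.log2(A / W_e + 1)

def temporal_factor(MT_O):
    """Temporal adjustment described in the abstract: t = log2(MT_O).
    The exact way ANTASID combines t with ID is not specified here."""
    return math.log2(MT_O)
```

For example, a 300 px movement to a 100 px target gives ID = log2(4) = 2 bits; widening the effective target to 150 px lowers the adjusted ID to log2(3) ≈ 1.585 bits.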
Electrical engineering. Electronics. Nuclear engineering
Mining detailed information from the description for App functions comparison
Huaxiao Liu, Xinglong Yin, Shanshan Song
et al.
Abstract The rapid development of Apps not only brings huge economic benefits but also causes increasingly fierce competition. In such a situation, developers must develop and update innovative functions to attract and retain users, and analysing the functions of similar products can help them formulate a well-designed plan at the beginning of development as well as devise update strategies during the version update process. However, although some existing methods can extract features from App descriptions for this purpose, the features they obtain do not cover the details of App functions. Therefore, to conduct in-depth research on App functions, a novel method is proposed to extract App features with detailed information, together with an approach to integrate the results to further help developers obtain valuable knowledge. A series of experiments was carried out to evaluate our method. The results reveal that the proposed method can mine features with detailed information from descriptions and integrate them effectively, and can assist developers in comparing their products with competitors' and developing a better competitive analysis scheme.
Contour information regularized tensor ring completion for realistic image restoration
Zhi Yu, Yihao Luo, Zhifa Liu
et al.
Abstract Tensor completion has gained considerable research interest in recent years and has frequently been applied to image restoration. These methods basically exploit the low-rank nature of images, implicitly requiring the whole picture to have globally consistent features. As a result, existing tensor completion algorithms often perform reasonably well when the target image has only random pixel-level missing entries. Unfortunately, pixel-level missing is rare in practice, and one often wants to restore an image with irregular hole-shaped missing regions, such as removing electricity poles from landscape photos or irrelevant people from tourist photos. This task is extremely difficult for traditional low-rank tensor completion methods. To overcome this drawback, a Contour Information regularized Tensor RIng Completion (CITRIC) method is proposed for practical image restoration. The contour information regularization captures significant local features, whereas the low-rank tensor ring structure captures as much global information as possible. The alternating direction method of multipliers (ADMM) is adopted to optimize the cost function. Extensive experimental results on real-world images show that CITRIC is more practical than existing methods and can restore real-world images with irregular hole-shaped missing regions.
Photography, Computer software
Naming the Identified Feature Implementation Blocks from Software Source Code
Ra'Fat Al-Msie'Deen, Hamzeh Eyal Salman, Anas H. Blasi
et al.
Identifying the software identifiers that implement a particular feature of a software product is known as feature identification. Feature identification is one of the most critical and popular processes performed by software engineers during software maintenance. However, a meaningful name must be assigned to the Identified Feature Implementation Block (IFIB) to complete the feature identification process. Feature naming remains a challenging task, and the majority of existing approaches assign names to IFIBs manually. In this paper, an approach called FeatureClouds is proposed, which software developers can exploit to name IFIBs from source code. FeatureClouds incorporates the word cloud visualization technique to name Feature Blocks (FBs) using the most frequent words across these blocks. FeatureClouds was evaluated by assessing its added benefit over current approaches in the literature, which supply software developers with only limited tool support for determining feature names of IFIBs. For validation, FeatureClouds was applied to the draw shapes and ArgoUML software. The findings show that the proposed approach achieves promising results according to the well-known Precision and Recall metrics.
A longitudinal case study on the effects of an evidence-based software engineering training
Sebastián Pizard, Diego Vallespir, Barbara Kitchenham
Context: Evidence-based software engineering (EBSE) can be an effective resource to bridge the gap between academia and industry by balancing research of practical relevance and academic rigor. To achieve this, it seems necessary to investigate EBSE training and its benefits for practice. Objective: We sought both to develop an EBSE training course for university students and to investigate its effects on the attitudes and behaviors of the trainees. Method: We conducted a longitudinal case study of our EBSE course and its effects. For this, we collected data at the end of each EBSE course (2017, 2018, and 2019) and in two follow-up surveys (one 7 months after the last course finished, and a second after 21 months). Results: Our EBSE courses seem to have taught students adequately and consistently. Half of the respondents to the surveys report making use of the new skills from the course. The most-reported effects in both surveys indicated that EBSE concepts increase awareness of the value of research and evidence, and that EBSE methods improve information-gathering skills. Conclusions: As suggested by research in other areas, training appears to play a key role in the adoption of evidence-based practice. Our results indicate that our training method provides an introduction to EBSE suitable for undergraduates. However, we believe it is necessary to continue investigating EBSE training and its impact on software engineering practice.