Evaluation of Parameters Affecting the Inelastic Acceleration Ratio
Emad Elhout
The Inelastic Acceleration Ratio (IAR) is a useful tool for determining the maximum inelastic acceleration from the corresponding elastic acceleration, yet it has received little attention in past research. This paper investigates IARs for single-degree-of-freedom (SDOF) systems with various structural parameters under thirty pairs of recorded earthquake ground motions. The SDOF systems are modeled with a linear elastic-perfectly plastic model. The parameters considered include the elastic vibration period (T), the displacement ductility ratio (μ, 2-8), the post-yield stiffness ratio (α, 0-15%), and the damping ratio (ξ, 3-20%). The results show that the IAR values decrease as the ductility ratio (μ) increases and increase as the damping ratio (ξ) increases, while the post-yield stiffness ratio (α) has little effect on the IAR. Analytical formulae are also proposed to estimate the IAR as a function of T, μ, α, and ξ.
Architectural engineering. Structural engineering of buildings, Structural engineering (General)
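As a rough illustration of the quantity studied above, the sketch below computes an IAR for one record and one period by dividing the peak absolute acceleration of an elastic-perfectly plastic SDOF oscillator by that of its elastic counterpart. It uses a simple explicit time-stepping scheme rather than the authors' procedure; the record file, time step, and yield strength are placeholders, and a constant-ductility study would additionally iterate the yield strength until the target μ is reached.

    import numpy as np

    def peak_abs_accel(ag, dt, T, zeta, fy=np.inf):
        # Peak absolute acceleration of a unit-mass SDOF oscillator under ground
        # acceleration ag; restoring force clamped at +/-fy (elastic-perfectly plastic).
        k = (2.0 * np.pi / T) ** 2           # stiffness (unit mass)
        c = 2.0 * zeta * (2.0 * np.pi / T)   # damping coefficient (unit mass)
        u = v = fs = peak = 0.0
        for a_g in ag:
            acc = -a_g - c * v - fs          # relative acceleration from u'' + c*u' + fs = -ag
            v += acc * dt                    # simple explicit stepping; small dt assumed
            u += v * dt
            fs = float(np.clip(fs + k * v * dt, -fy, fy))
            peak = max(peak, abs(acc + a_g)) # absolute (total) acceleration
        return peak

    # Placeholder record, time step, period, damping ratio, and yield strength.
    ag = np.loadtxt("record.txt")
    dt, T, zeta = 0.005, 1.0, 0.05
    iar = peak_abs_accel(ag, dt, T, zeta, fy=0.3) / peak_abs_accel(ag, dt, T, zeta)
    print("IAR =", iar)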
Guidelines for Empirical Studies in Software Engineering involving Large Language Models
Sebastian Baltes, Florian Angermeir, Chetan Arora
et al.
Large Language Models (LLMs) are now ubiquitous in software engineering (SE) research and practice, yet their non-determinism, opaque training data, and rapidly evolving models threaten the reproducibility and replicability of empirical studies. We address this challenge through a collaborative effort of 22 researchers, presenting a taxonomy of seven study types that organizes the landscape of LLM involvement in SE research, together with eight guidelines for designing and reporting such studies. Each guideline distinguishes requirements (must) from recommended practices (should) and is contextualized by the study types it applies to. Our guidelines recommend that researchers: (1) declare LLM usage and role; (2) report model versions, configurations, and customizations; (3) document the tool architecture beyond the model; (4) disclose prompts, their development, and interaction logs; (5) validate LLM outputs with humans; (6) include an open LLM as a baseline; (7) use suitable baselines, benchmarks, and metrics; and (8) articulate limitations and mitigations. We complement the guidelines with an applicability matrix mapping guidelines to study types and a reporting checklist for authors and reviewers. We maintain the study types and guidelines online as a living resource for the community to use and shape (llm-guidelines.org).
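As a hedged illustration only (the guidelines prescribe what to report, not a schema), a study following guidelines (2) and (4) might persist per-run metadata along the following lines; every field name and value below is illustrative rather than prescribed.

    import json

    # Illustrative (not prescribed) record of LLM usage for one experimental run,
    # covering model version/configuration (guideline 2) and prompts (guideline 4).
    run_metadata = {
        "model": "gpt-4o-2024-08-06",             # exact model identifier, incl. version/date
        "provider": "hosted API",
        "temperature": 0.0,
        "max_tokens": 1024,
        "system_prompt": "You are a code review assistant ...",
        "prompt_template": "Classify the following requirement: {requirement}",
        "prompt_development": "3 iterations, piloted on 20 held-out items",
        "interaction_log": "logs/run_042.jsonl",  # full prompt/response transcript
        "human_validation": "2 raters, agreement reported in the paper",
    }
    with open("llm_run_metadata.json", "w") as f:
        json.dump(run_metadata, f, indent=2)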
Re-interpreting the case study approach in architectural research
Purnama Salura, Stephanie Clarissa
The case study is a research approach carried out in a natural, holistic, and in-depth manner on a unique, best-practice, and bounded phenomenon. Although it is rooted in the social sciences, the case study has characteristics very similar to the process of studying past architectural works (precedent studies) that has long been carried out in architecture. Unfortunately, precedent studies are sometimes limited to documentation and descriptions of architectural works as physical objects, without critical, in-depth, and holistic analysis of the rationale behind the design decisions of a precedent. On the other hand, previous research on the case study approach in architecture has tended to treat it as just a social research method that is simply applied in architectural research. Therefore, it is crucial to understand the term 'case' itself. This research aims to provide a complete and in-depth understanding of the case study approach in order to filter and elaborate this approach for architectural research. It examines and elaborates on credible and recent literature on case studies in both social and architectural research. The elaboration from the literature review is used to formulate an operational framework based on architectural function-form-meaning. It is hoped that the results of this research will enrich architectural knowledge. This in-depth understanding is ultimately beneficial for improving existing architectural practices and becoming a source of knowledge for the general public.
Architecture, Architectural engineering. Structural engineering of buildings
Design and Driving Performance Study of Soft Actuators for Hand Rehabilitation Training
Zhang Z, Calderon AD, Huang X
et al.
Zhilin Zhang (1,3,4), Aldrin D Calderon (2), Xingyu Huang (5), Guixian Wu (6), Chuanjian Liang (1,3,4). 1 School of Physics and Telecommunications Engineering, Yulin Normal University, Yulin, People's Republic of China; 2 School of Mechanical, Manufacturing and Energy Engineering, Mapua University, Manila, Philippines; 3 Center for Applied Mathematics of Guangxi, Yulin Normal University, Yulin, People's Republic of China; 4 Guangxi Universities Key Laboratory of Complex System Optimization and Big Data Processing, Yulin Normal University, Yulin, People's Republic of China; 5 School of Information Technology, Mapua University, Manila, Philippines; 6 Yulin Health School of Guangxi Medical University, Yulin, People's Republic of China. Correspondence: Chuanjian Liang, Email lcj19@ylu.edu.cn
Purpose: To address the application requirements of soft actuators in rehabilitation training gloves, and in combination with ergonomic requirements, we designed a segmented soft actuator with bending and elongation modules. This actuator can achieve independent or coupled movements of the finger joints. Methods: A finite element model of the joint actuator was established to compare the driving performance of actuators with different structural forms. Numerical calculations were used to analyze the effects of structural size parameters on the bending characteristics and end output force of the actuator. The design was then refined based on these analyses. Results: The joint actuator designed in this study demonstrated a 71% increase in bending angle compared to the standard fast pneumatic network structure. Key factors affecting the driving performance include the thickness of the constraint layer, the inner wall thickness of the chamber, chamber height, chamber width, chamber spacing, chamber length, and the number of chambers. After improvements, the bending angle of the joint actuator increased by 60.6%, and the output force increased by 145.9%, indicating significant improvement. Conclusion: This study designed and improved a soft actuator for hand rehabilitation training, achieving independent and coupled joint movements. The bending angle, bending shape, and joint driving force of the soft actuator meet the requirements for finger rehabilitation training. Keywords: rehabilitation training, soft actuators, fast pneumatic networks, structural optimization, finite element method
Effect of solution treatment on the intergranular corrosion behavior of 316L stainless steel fabricated by selective laser melting
Baoshan WANG, Qiang SHANG, Cheng MAN
Selective laser melting (SLM) is a powder-bed metal additive manufacturing technology that is extensively employed in the fields of marine engineering, biomedicine, and nuclear power due to its high processing precision and wide range of applicable materials. 316L stainless steel is one of the earliest metal materials studied for SLM and has a relatively mature process in this field. Although SLM-processed 316L stainless steel parts (hereafter referred to as SLM-316L stainless steel) have been used in industrial applications, they are rarely employed under demanding conditions such as high temperatures, strong corrosion, and complex loads. Nonequilibrium solidification in the laser melt pool is an inherent feature of the SLM-316L stainless steel forming process; it produces a nonuniform microstructure and a high level of residual stress, both of which affect the reliability of SLM-316L stainless steel in long-term service. Heat treatment after fabrication is the most effective approach to optimizing the microstructure and reducing the residual stress of SLM-316L stainless steel, and solution treatment is often employed for this purpose to achieve good overall performance. The intergranular corrosion behavior of austenitic stainless steel depends strongly on its microstructure; thus, solution treatment is bound to influence the intergranular corrosion performance of SLM-316L stainless steel. However, the law and mechanism of the effect of solution treatment on the intergranular corrosion behavior of SLM-316L stainless steel remain unclear. On this basis, in this work, solution treatment of SLM-316L stainless steel is conducted at 1150 ℃; its microstructural characteristics and the morphology of the nanoscale oxide particles are examined by scanning electron microscopy (SEM), electron backscatter diffraction (EBSD), and transmission electron microscopy (TEM), and its intergranular corrosion behavior is investigated by double-loop electrochemical reactivation and ammonium persulfate electrolysis tests. The following conclusions can be drawn. (1) SLM-316L stainless steel recrystallizes after solution treatment, forming regularly shaped equiaxed grains and annealing twins. (2) The nanoscale oxide particles coarsen, and their maximum size at grain boundaries can reach the micrometer level; meanwhile, the oxide type transforms from MnSiO3 with a rhodonite structure to CrMn2O4 with a spinel structure. (3) Solution treatment reduces the intergranular corrosion resistance of SLM-316L stainless steel; the degradation becomes more pronounced as the sensitization time is extended, and the type of intergranular corrosion changes from step-like to groove-like.
Mining engineering. Metallurgy, Environmental engineering
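The double-loop reactivation test mentioned above is commonly summarized by a degree of sensitization, the ratio of the reactivation to activation peak current densities. A minimal sketch of that standard calculation, assuming a CSV with potential, current density, and a forward/reverse branch flag (file name and column layout are assumptions, not the authors' setup):

    import numpy as np

    # DL-EPR summary: degree of sensitization DOS (%) = 100 * Ir / Ia, where Ia and Ir
    # are the peak current densities of the forward (activation) and reverse
    # (reactivation) scans. File name and column layout are assumed.
    data = np.loadtxt("dlepr_scan.csv", delimiter=",", skiprows=1)  # potential, current, branch
    Ia = data[data[:, 2] == 0][:, 1].max()   # branch flag 0 = forward sweep
    Ir = data[data[:, 2] == 1][:, 1].max()   # branch flag 1 = reverse sweep
    dos = 100.0 * Ir / Ia
    print(f"Ia = {Ia:.3e} A/cm^2, Ir = {Ir:.3e} A/cm^2, DOS = {dos:.1f} %")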
Rubberized reinforced concrete columns under axial and cyclic loading
Heba A. Mohamed, Hilal Hassan, Mahmoud Zaghlal
et al.
The experimental study presented in this research was conducted to understand the performance of rubberized concrete columns under axial loads and to numerically analyze the behavior of rubberized reinforced concrete (RRC) columns under cyclic loads. Twelve large-scale columns with square and circular cross sections were used for the experimental testing. Under axial loading, fine aggregate was replaced with crumb rubber (CR) at percentages of 0%, 10%, and 15%. Square RRC columns were examined with a finite element program (ABAQUS) under cyclic loading. The experimental results indicated that the columns with crumb rubber had a lower load capacity than those without crumb rubber when exposed to axial loads. The numerical results were in good alignment with the experimental results, indicating that the model can reproduce the behavior of rubberized concrete columns under both axial and cyclic loads. According to the numerical analyses, the lateral displacement was significantly improved for rubberized reinforced concrete columns with 10% and 15% replacement of fine aggregate compared to columns without CR. Adding 10% and 15% crumb rubber to the fine aggregate in reinforced concrete columns increased the displacement ductility. The equivalent viscous damping ratio was enhanced by 33.67% when the crumb rubber (CR) content increased from 0% to 10%, and when the CR replacement reached 15%, the damping ratio increased to 44.02%. The rubberized reinforced concrete columns showed a more ductile response than the conventional reinforced concrete columns, as evidenced by their softer post-peak response.
Mechanical engineering and machinery, Structural engineering (General)
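The equivalent viscous damping ratio reported above is conventionally obtained from a hysteresis loop as the dissipated energy per cycle divided by 4π times the elastic strain energy at peak response. A minimal sketch of that standard computation, assuming a closed load-displacement cycle stored in a two-column file (not the authors' data or code):

    import numpy as np

    # Equivalent viscous damping from one closed load-displacement hysteresis loop:
    #   xi_eq = E_D / (4 * pi * E_S0)
    # E_D  = energy dissipated per cycle (enclosed loop area, shoelace formula)
    # E_S0 = elastic strain energy at peak response (0.5 * F_max * d_max)
    loop = np.loadtxt("cycle.csv", delimiter=",")   # columns assumed: displacement, force
    d, F = loop[:, 0], loop[:, 1]                   # loop assumed closed (last point = first)
    E_D = 0.5 * abs(np.sum(F[:-1] * np.diff(d) - d[:-1] * np.diff(F)))
    E_S0 = 0.5 * np.max(np.abs(F)) * np.max(np.abs(d))
    xi_eq = E_D / (4.0 * np.pi * E_S0)
    print(f"equivalent viscous damping = {100.0 * xi_eq:.2f} %")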
Multiscale method for identifying and marking the multiform fractures from visible-light rock-mass images
Yongbo Pan, Junzhi Cui, Zhenhao Xu
Multiform fractures have a direct impact on the mechanical performance of rock masses. To accurately identify multiform fractures, the distribution patterns of grayscale and the differential features of fractures in their neighborhoods are summarized, and a multiscale processing algorithm is proposed on this basis. The multiscale process is as follows. On the neighborhood of pixels, a grayscale continuous function is constructed using bilinear interpolation, the grayscale function is smoothed by Gaussian local filtering, and the grayscale gradient and Hessian matrix are calculated with high accuracy. On small-scale blocks, the pixels are classified by adaptively setting the grayscale threshold to identify potential line segments and mini-fillings. On the global image, potential line segments and mini-fillings are spliced together by progressing the block frontier layer by layer to identify and mark multiform fractures. The accuracy of identifying multiform fractures is improved by constructing a grayscale continuous function and adaptively setting the grayscale thresholds on small-scale blocks. The layer-by-layer splicing algorithm is performed only on the domain of two layers of small-scale blocks, which reduces the computational complexity. Using rock-mass images with different fracture types as examples, the identification results show that the proposed algorithm can accurately identify the multiform fractures, which lays the foundation for calculating the mechanical parameters of rock masses.
Engineering geology. Rock mechanics. Soil mechanics. Underground construction
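A minimal sketch of the per-block pixel-classification idea described above (Gaussian smoothing, Hessian-based valley detection, and an adaptive grayscale threshold); this is not the authors' implementation, and the threshold rule and filter scale are assumptions.

    import numpy as np
    from scipy import ndimage

    def classify_block(block, sigma=1.5):
        # One small-scale block: Gaussian smoothing, Hessian from Gaussian-derivative
        # filters, then flag dark, valley-shaped pixels (large positive principal
        # curvature of the grayscale surface) as candidate fracture pixels.
        g = ndimage.gaussian_filter(block.astype(float), sigma)
        gxx = ndimage.gaussian_filter(block.astype(float), sigma, order=(0, 2))
        gyy = ndimage.gaussian_filter(block.astype(float), sigma, order=(2, 0))
        gxy = ndimage.gaussian_filter(block.astype(float), sigma, order=(1, 1))
        lam_max = 0.5 * (gxx + gyy) + np.sqrt((0.5 * (gxx - gyy)) ** 2 + gxy ** 2)
        thr = g.mean() - 0.5 * g.std()       # adaptive grayscale threshold for this block
        return (g < thr) & (lam_max > 0.0)

    # usage on one block of a grayscale rock-mass image `img` (a NumPy array):
    # mask = classify_block(img[0:64, 0:64])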
Action Research with Industrial Software Engineering -- An Educational Perspective
Yvonne Dittrich, Johan Bolmsten, Catherine Seidelin
Action research provides the opportunity to explore the usefulness and usability of software engineering methods in industrial settings, and makes it possible to develop methods, tools and techniques with software engineering practitioners. However, as the research moves beyond the observational approach, it requires a different kind of interaction with the software development organisation. This makes action research a challenging endeavour, and it makes it difficult to teach action research through a course that goes beyond explaining the principles. This chapter is intended to support learning and teaching action research, by providing a rich set of examples, and identifying tools that we found helpful in our action research projects. The core of this chapter focusses on our interaction with the participating developers and domain experts, and the organisational setting. This chapter is structured around a set of challenges that reoccurred in the action research projects in which the authors participated. Each section is accompanied by a toolkit that presents related techniques and tools. The exercises are designed to explore the topics, and practise using the tools and techniques presented. We hope the material in this chapter encourages researchers who are new to action research to further explore this promising opportunity.
KG-EmpiRE: A Community-Maintainable Knowledge Graph for a Sustainable Literature Review on the State and Evolution of Empirical Research in Requirements Engineering
Oliver Karras
In the last two decades, several researchers provided snapshots of the "current" state and evolution of empirical research in requirements engineering (RE) through literature reviews. However, these literature reviews were not sustainable, as none built on or updated previous works due to the unavailability of the extracted and analyzed data. KG-EmpiRE is a Knowledge Graph (KG) of empirical research in RE based on scientific data currently extracted from 680 papers published at the IEEE International Requirements Engineering Conference (1994-2022). KG-EmpiRE is maintained in the Open Research Knowledge Graph (ORKG), making all data openly and long-term available according to the FAIR data principles. Our long-term goal is to constantly maintain KG-EmpiRE with the research community to synthesize a comprehensive, up-to-date, and long-term available overview of the state and evolution of empirical research in RE. Besides KG-EmpiRE, we provide its analysis with all supplementary materials in a repository. This repository contains all files with instructions for replicating and (re-)using the analysis locally or via executable environments and for repeating the research approach. Since its first release based on 199 papers (2014-2022), KG-EmpiRE and its analysis have been updated twice, currently covering over 650 papers. KG-EmpiRE and its analysis demonstrate how innovative infrastructures, such as the ORKG, can be leveraged to make data from literature reviews FAIR, openly available, and maintainable for the research community in the long term. In this way, we can enable replicable, (re-)usable, and thus sustainable literature reviews to ensure the quality, reliability, and timeliness of their research results.
Practical Guidelines for the Selection and Evaluation of Natural Language Processing Techniques in Requirements Engineering
Mehrdad Sabetzadeh, Chetan Arora
Natural Language Processing (NLP) is now a cornerstone of requirements automation. One compelling factor behind the growing adoption of NLP in Requirements Engineering (RE) is the prevalent use of natural language (NL) for specifying requirements in industry. NLP techniques are commonly used for automatically classifying requirements, extracting important information, e.g., domain models and glossary terms, and performing quality assurance tasks, such as ambiguity handling and completeness checking. With so many different NLP solution strategies available and the possibility of applying machine learning alongside, it can be challenging to choose the right strategy for a specific RE task and to evaluate the resulting solution in an empirically rigorous manner. In this chapter, we present guidelines for the selection of NLP techniques as well as for their evaluation in the context of RE. In particular, we discuss how to choose among different strategies such as traditional NLP, feature-based machine learning, and language-model-based methods. Our ultimate hope for this chapter is to serve as a stepping stone, assisting newcomers to NLP4RE in quickly initiating themselves into the NLP technologies most pertinent to the RE field.
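As one concrete instance of the feature-based machine-learning strategy discussed in the chapter, a requirements-classification baseline can be built from TF-IDF features and logistic regression. The toy sentences and labels below are placeholders, and the pipeline is a generic scikit-learn sketch rather than anything prescribed by the chapter.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    # Placeholder data: requirement sentences labelled functional (F) / non-functional (NF).
    texts = ["The system shall encrypt all data at rest.",
             "The user shall be able to export reports as PDF.",
             "The response time shall not exceed 2 seconds.",
             "The system shall allow administrators to add users."]
    labels = ["NF", "F", "NF", "F"]

    X_train, X_test, y_train, y_test = train_test_split(texts, labels, test_size=0.5, random_state=0)
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
    clf.fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test), zero_division=0))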
Review of high temperature materials
Fehim Findik
High-temperature materials play a significant role in sustainable engineering across various industries and applications. Sustainable engineering aims to design, develop, and implement solutions that minimize environmental impact, enhance resource efficiency, and promote long-term sustainability. The availability of materials that can perform efficiently at high temperatures makes it possible to push the limits of what can be demanded of a design. These materials include ceramics, polymers, and metals, and they are used in elevated-temperature applications, aircraft and space structures, and space exploration. In this study, high-temperature metals are classified, including superalloys, platinum, and refractory metals such as W, Nb, Mo, and Ta. Ceramics are also high-temperature materials; they are attractive for elevated-temperature use because of their high hardness, extraordinary compressive strength, excellent thermal stability, low thermal expansion, and very high melting temperatures. Ceramics that meet these criteria are the carbides and borides of Zr, Nb, Ta, Ti, and Hf. In addition, steel, nickel, and copper alloys used in aerospace applications such as aircraft engines, space shuttles, and turbine blades were investigated, and powder metallurgy and sintering, the most widely used production methods for high-temperature materials, are emphasized. This study also reviews important characterization techniques for analyzing surface and subsurface properties. The use of AES, XPS, SSIMS, and LEED for the chemical examination of surfaces is discussed, while optical, electron, and scanning probe microscopy are used for visual inspection of specimens and structures, providing data on surface, shape, color, and numerous additional physical properties; here, AFM, SEM, TEM, EDX, FIB, and EMP methods are discussed. Among materials analysis instruments, XRD, X-ray fluorescence spectrometry, low-energy electron diffraction, neutron diffraction, and electron microprobe devices were examined.
Architecture, Structural engineering (General)
"Software is the easy part of Software Engineering" -- Lessons and Experiences from A Large-Scale, Multi-Team Capstone Course
Ze Shi Li, Nowshin Nawar Arony, Kezia Devathasan
et al.
Capstone courses in undergraduate software engineering are a critical final milestone for students. These courses allow students to create a software solution and demonstrate the knowledge they accumulated in their degrees. However, a typical capstone project team is small, containing no more than 5 students, and functions independently of other teams. To better reflect real-world software development and meet industry demands, we introduce in this paper our novel capstone course. Each student was assigned to a large-scale, multi-team setting (i.e., a company) of up to 20 students that collaboratively built software. Students placed in a company gained first-hand experience with multi-team coordination, integration, communication, agile practices, and teamwork while building a microservices-based project. Furthermore, each company was required to implement plug-and-play so that their services would be compatible with those of another company, thereby sharing common APIs. Through developing the product in autonomous sub-teams, the students enhanced not only their technical abilities but also soft skills such as communication and coordination. More importantly, experiencing the challenges that arose from the multi-team project helped students recognize the pitfalls and advantages of organizational culture. Among the many lessons learned from this course experience, students learned the critical importance of building team trust. We provide detailed information about our course structure and lessons learned, and propose recommendations for other universities and programs. Our work concerns educators interested in launching similar capstone projects so that students in other institutions can reap the benefits of large-scale, multi-team development.
Vegetal-FRCM Failure under Partial Interaction Mechanism
Virginia Mendizabal, Borja Martínez, Luis Mercedes
et al.
FRCM is a strengthening system based on a composite material made of a cementitious matrix and fabrics. This strengthening system has been widely studied, and analytical predictive models have been obtained in which it is common to assume full composite action between components. When non-typical materials are used for these composites, it has been observed that, in some cases, this assumption does not hold. In this situation, traditional analytical models such as the ACK or tri-linear ones do not offer a reasonable prediction. This work investigates the behavior of synthetically and naturally coated vegetal-FRCM with partial interaction through the characterization of the materials by tensile tests. Yarns, meshes, and different FRCM coupons were manufactured and mechanically tested using different types of coatings and fabrics. The use of colophony and Arabic gum as natural coatings provided mechanical properties similar to those of the cotton and hemp yarns and meshes that were formed. Partial interaction was found when using epoxy resin to coat the reinforcement while maintaining the mechanical properties in the same order of magnitude. A new two-stage model is proposed to fit the stress–strain response from the mechanical tests, and it is reliable and accurate for cotton specimens.
Technology, Engineering (General). Civil engineering (General)
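A generic sketch of fitting a two-stage (bilinear) stress–strain curve to coupon tensile data by sweeping the transition strain and solving a least-squares problem for the two branch stiffnesses; this is not the authors' proposed model, and the data file and column layout are assumptions.

    import numpy as np

    # Two-stage (bilinear) stress-strain fit:
    #   sigma = E1*eps                        for eps <= eps_t   (first stage)
    #   sigma = E1*eps_t + E2*(eps - eps_t)   for eps >  eps_t   (second stage)
    data = np.loadtxt("coupon.csv", delimiter=",")   # columns assumed: strain, stress [MPa]
    eps, sig = data[:, 0], data[:, 1]

    best = None
    for eps_t in np.linspace(eps.min() + 1e-4, eps.max() - 1e-4, 200):  # sweep the breakpoint
        A = np.column_stack([np.minimum(eps, eps_t), np.maximum(eps - eps_t, 0.0)])
        (E1, E2), *_ = np.linalg.lstsq(A, sig, rcond=None)
        err = float(np.sum((A @ np.array([E1, E2]) - sig) ** 2))
        if best is None or err < best[0]:
            best = (err, E1, E2, eps_t)
    print("E1 = %.1f MPa, E2 = %.1f MPa, transition strain = %.4f" % best[1:])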
Shape and size optimization of truss structures by Chaos game optimization considering frequency constraints
Mahdi Azizi, Uwe Aickelin, Hadi A. Khorshidi
et al.
Introduction: An engineering system consists of properly established activities put together to achieve a predefined goal. These activities include analysis, design, construction, research, and development. The design and construction of structural systems, including buildings, bridges, highways, and other complex systems, have been developed over the centuries. However, the evolution of these systems has been slow because the overall process is very costly and time-consuming, requiring substantial human and material resources. One option for overcoming these shortcomings is the use of metaheuristic algorithms, which are recently developed intelligent techniques. These algorithms can be used as upper-level search techniques in optimization procedures to achieve better results. Objectives: Shape and size optimization of truss structures is considered in this paper using Chaos Game Optimization (CGO), one of the recently developed metaheuristic algorithms, whose inspirational concepts are the principles of chaos theory and fractal configuration. Methods: For numerical purposes, the 10-bar, 37-bar, 52-bar, 72-bar, and 120-bar truss structures, five benchmark problems in this field, are considered as design examples in which frequency constraints are the limits that have to be dealt with during the optimization procedure. Multiple optimization runs are conducted to provide a comprehensive statistical analysis, and a comparative investigation is carried out against other algorithms in the literature. Results: Based on the results of the CGO and other approaches from the literature, the CGO provides better and competitive results for the considered truss design problems. Conclusion: In summary, the CGO can provide better solutions to the considered real-size structural design problems with higher levels of complexity.
Medicine (General), Science (General)
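A hedged sketch of how the frequency-constrained sizing problem is typically posed as a penalized objective, with the structural eigen-analysis replaced by crude stand-ins so the code runs; the CGO update equations themselves are not reproduced here, only a generic population-based loop.

    import numpy as np

    # Frequency-constrained truss sizing: minimize weight subject to lower bounds on the
    # first natural frequencies. The FE analysis is replaced by crude stand-ins so the
    # sketch runs; a real study would call a structural eigen-solver for each benchmark.
    N_MEMBERS, A_MIN, A_MAX = 10, 0.5e-4, 50e-4            # member count, area bounds [m^2]
    LENGTHS = np.full(N_MEMBERS, 9.144)                    # placeholder member lengths [m]
    RHO = 2770.0                                           # aluminium density [kg/m^3]
    F_MIN = np.array([7.0, 15.0])                          # example frequency limits [Hz]

    def truss_weight(areas):
        return RHO * np.sum(LENGTHS * areas)

    def natural_frequencies(areas):
        return np.array([4.0, 9.0]) * np.sqrt(areas.mean() / A_MIN)   # crude proxy only

    def penalized_weight(areas, penalty=1e6):
        viol = np.sum(np.maximum(0.0, F_MIN - natural_frequencies(areas)[:2]))
        return truss_weight(areas) + penalty * viol

    # Generic population-based loop (stands in for the CGO seed / mean-group updates).
    rng = np.random.default_rng(0)
    pop = rng.uniform(A_MIN, A_MAX, size=(30, N_MEMBERS))
    for _ in range(200):
        fit = np.array([penalized_weight(x) for x in pop])
        best = pop[fit.argmin()]
        pop = np.clip(best + rng.normal(0.0, 0.05 * (A_MAX - A_MIN), pop.shape), A_MIN, A_MAX)
    print("best penalized weight [kg]:", penalized_weight(best))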
Deep Learning based Model Predictive Control for Compression Ignition Engines
Armin Norouzi, Saeid Shahpouri, David Gordon
et al.
Machine learning (ML) and a nonlinear model predictive controller (NMPC) are used in this paper to minimize the emissions and fuel consumption of a compression ignition engine. Machine learning is applied in two ways. In the first application, ML is used to identify a model for implementation in the model predictive control optimization problem. In the second application, ML is used as a replacement for the NMPC, where the ML controller learns the optimal control action by imitating or mimicking the behavior of the model predictive controller. In this study, a deep recurrent neural network including long short-term memory (LSTM) layers is used to model the emissions and performance of an industrial 4.5-liter 4-cylinder Cummins diesel engine. This model is then used for the model predictive controller implementation. A deep learning scheme is then deployed to clone the behavior of the developed controller. In the LSTM integration, a novel scheme is used in which the hidden and cell states of the network are augmented in the NMPC optimization problem. The developed LSTM-NMPC and the imitative NMPC are compared with the Cummins-calibrated Engine Control Unit (ECU) model in an experimentally validated engine simulation platform. Results show a significant reduction in nitrogen oxide (NOx) emissions and a slight decrease in the injected fuel quantity while maintaining the same load. In addition, the imitative NMPC has similar performance to the NMPC but with a two-orders-of-magnitude reduction in computation time.
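A minimal PyTorch sketch of the two ML roles described above: an LSTM plant model whose hidden and cell states could be exposed to an NMPC solver, and a small feed-forward network trained by behavior cloning to imitate the NMPC. Layer sizes, input/output channels, and the training data are illustrative placeholders, not the paper's architecture.

    import torch
    import torch.nn as nn

    # (1) LSTM plant model: maps a window of engine inputs (e.g. fuel quantity, injection
    #     timing, EGR rate) to predicted outputs (e.g. NOx, torque). Shapes are illustrative.
    class EngineLSTM(nn.Module):
        def __init__(self, n_in=4, n_hidden=64, n_out=2):
            super().__init__()
            self.lstm = nn.LSTM(n_in, n_hidden, batch_first=True)
            self.head = nn.Linear(n_hidden, n_out)
        def forward(self, u, state=None):
            y, state = self.lstm(u, state)   # hidden/cell state can be exposed to an NMPC solver
            return self.head(y[:, -1]), state

    plant = EngineLSTM()
    y_pred, state = plant(torch.randn(8, 20, 4))   # batch of 8 windows, 20 steps, 4 inputs

    # (2) Imitation controller: a small MLP trained by behavior cloning to reproduce the
    #     NMPC action from the measured state, replacing the online optimization at run time.
    imitator = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, 2))
    opt = torch.optim.Adam(imitator.parameters(), lr=1e-3)
    states = torch.randn(1024, 6)     # placeholder: logged plant states seen by the NMPC
    actions = torch.randn(1024, 2)    # placeholder: NMPC-computed control actions
    for _ in range(100):
        opt.zero_grad()
        loss = nn.functional.mse_loss(imitator(states), actions)
        loss.backward()
        opt.step()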
Towards a Conceptual Approach of Analytical Engineering for Big Data
Rogerio Rossi, Kechi Hirama
Analytics corresponds to a relevant and challenging phase of Big Data. The generation of knowledge from extensive data sets (the petabyte era) of varying types, at a speed able to serve decision makers, draws on multiple areas of knowledge, such as computing, statistics, and data mining, among others. In the Big Data domain, Analytics is also considered a process capable of adding value to organizations. Besides demonstrating value, Analytics should also consider operational tools and models to support decision making. To add value, Analytics is also presented as part of some Big Data value chains, such as the Information Value Chain presented by NIST, among others, which are detailed in this article. Some maturity models are also presented, since they represent important structures that favor the continuous implementation of Analytics for Big Data using specific technologies, techniques, and methods. Hence, through in-depth research using specific literature references and use cases, we seek to outline an approach to Analytical Engineering for Big Data Analytics based on four pillars (Data, Models, Tools, and People) and three process groups (Acquisition, Retention, and Revision), in order to make feasible and to define an organization, possibly designated as an Analytics Organization, responsible for generating knowledge from data in the field of Big Data Analytics.
Towards Ontology-Based Requirements Engineering for IoT-Supported Well-Being, Aging and Health
Hrvoje Belani, Petar Solic, Toni Perkovic
Ontologies serve as one of the formal means to represent and model knowledge in computer science, electrical engineering, systems engineering, and other related disciplines. Within requirements engineering, ontologies may be used for the formal representation of system requirements. In the Internet of Things, ontologies may be used to represent sensor knowledge and describe the semantics of acquired data. Designing an ontology that is comprehensive enough, with an appropriate level of knowledge expressiveness, and that serves multiple purposes, from system requirements specification to modeling knowledge based on data from IoT sensors, is a great challenge. This paper proposes an approach towards ontology-based requirements engineering for well-being, aging and health supported by the Internet of Things. The ontology design does not aim at creating a new ontology, but at extending an appropriate existing one, SAREF4EHAW, in order to align it with the well-being, aging and health concepts and to structure the knowledge within the domain. Other contributions include a conceptual formulation for Well-Being, Aging and Health and a related taxonomy, as well as a concept of One Well-Being, Aging and Health. New attributes and relations are proposed for the new ontology extension, along with an updated list of use cases and particular ontological requirements not covered by the original ontology. Future work envisions the full specification of the new ontology extension, as well as structuring system requirements and sensor measurement parameters to follow description logic.
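A minimal rdflib sketch of the kind of extension described, adding a well-being class and datatype property under an existing ontology; the SAREF4EHAW namespace URI, the parent class, and all new term names are assumptions for illustration, not the proposed extension itself.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    # Hypothetical extension terms; namespace URIs, the parent class, and the new names
    # below are assumptions for illustration, not the published SAREF4EHAW vocabulary.
    S4EHAW = Namespace("https://saref.etsi.org/saref4ehaw/")
    EXT = Namespace("https://example.org/wah-ext#")

    g = Graph()
    g.bind("s4ehaw", S4EHAW)
    g.bind("wah", EXT)
    g.add((EXT.WellBeingIndicator, RDF.type, OWL.Class))
    g.add((EXT.WellBeingIndicator, RDFS.subClassOf, S4EHAW.HealthActor))   # assumed parent class
    g.add((EXT.hasDailyActivityScore, RDF.type, OWL.DatatypeProperty))
    g.add((EXT.hasDailyActivityScore, RDFS.domain, EXT.WellBeingIndicator))
    g.add((EXT.hasDailyActivityScore, RDFS.comment, Literal("Aggregated activity score from IoT sensors")))
    g.serialize("wah_extension.ttl", format="turtle")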
A multidisciplinary design optimization for conceptual design of hybrid-electric aircraft
H. L. Silva, G. J. Resende, R. C. Neto
et al.
Computer Science
Code Calibration of the Eurocodes
Tuomo Poutanen
This article addresses the process of optimally selecting safety factors and characteristic values for the Eurocodes. Five amendments to the present codes are proposed: (1) The load factors are fixed, γ_G = γ_Q, by making the characteristic load of the variable load changeable; this simplifies the codes and lessens the calculation work. (2) Currently, the characteristic load of the variable load is the same for all variable loads, which creates excess safety and material waste for variable loads with low variation; this deficiency can be avoided by applying the same amendment as above. (3) Various materials fit the reliability model with different accuracy, and this article explains two options to reduce this difficulty. (4) A method to avoid rounding errors in the safety factors is explained. (5) The current safety factors are usually set by minimizing the reliability indexes with respect to the target; the resulting codes include considerable safe and unsafe design cases, with a variability ratio (high reliability/low reliability) of about 1.4. The proposed three code models match the target β_50 = 3.2 with high accuracy, with no unsafe design cases and insignificant safe design cases, with variability ratios of 1.07, 1.03 and 1.04.
Technology, Engineering (General). Civil engineering (General)
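A hedged Monte Carlo sketch of the reliability check that code calibration repeats across design cases: estimate the failure probability of a simple limit state g = R - (G + Q) and convert it to a reliability index β = -Φ^(-1)(Pf). The distributions and parameters below are illustrative only, not the article's calibration model.

    import numpy as np
    from scipy.stats import gumbel_r, lognorm, norm

    # Limit state g = R - (G + Q): resistance vs permanent + variable load.
    # Distributions and parameters are illustrative only.
    rng = np.random.default_rng(1)
    n = 2_000_000
    R = lognorm(s=0.10, scale=1.7).rvs(n, random_state=rng)      # resistance, median 1.7, CoV ~10 %
    G = norm(loc=1.0, scale=0.05).rvs(n, random_state=rng)       # permanent load
    Q = gumbel_r(loc=0.35, scale=0.07).rvs(n, random_state=rng)  # 50-year variable load
    pf = np.mean(R - (G + Q) < 0.0)
    beta = -norm.ppf(pf)
    print(f"Pf = {pf:.2e}, beta_50 = {beta:.2f}")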
Improving Software Engineering Research through Experimentation Workbenches
Klaus Schmid, Sascha El-Sharkawy, Christian Kröher
Experimentation with software prototypes plays a fundamental role in software engineering research. In contrast to many other scientific disciplines, however, explicit support for this key activity in software engineering is relatively limited. While some approaches to improve this situation have been proposed by the software engineering community, experiments are still very difficult and sometimes impossible to replicate. In this paper, we propose the concept of an experimentation workbench as a means of explicit support for experimentation in software engineering research. In particular, we discuss core requirements that an experimentation workbench should satisfy in order to qualify as such and to offer a real benefit for researchers. Beyond their core benefits for experimentation, we stipulate that experimentation workbenches will also have benefits in regard to the reproducibility and repeatability of software engineering research. Further, we illustrate this concept with a scenario and a case study, and describe relevant challenges as well as our experience with experimentation workbenches.