The article examines two problems that emerge at the intersection of data collection, processing and visualization with participatory public inquiries. In these contexts, the use of data visualization makes it possible to represent (part of) phenomena of collective interest, and design techniques allow the activation of spaces for mediation with diversified publics. Starting from a critical re-examination of experimental research experiences conducted by the médialab Sciences Po and the Advanced Design Unit at the Università di Bologna, the article proposes a reflection on participatory data practices from a dual perspective: on the one hand, data that is not part of a visualization process and therefore does not enter into collective reflection; on the other hand, how some visualised data is interpreted not only in relation to its content but also to the format through which it is conveyed. The proposed reflections, which connect issues arising from the concept of data and from limitations linked to the concept of civic participation, aim to invite further experiments and lines of research to identify modalities and strategies for mitigating the problems that have been brought to light.
In today’s society, data shapes how states and corporations treat people. It is closely connected to questions of economic, cultural and political representation and equality. Marginalised groups in particular depend on adequate representation in the data that shapes social and political life. At the same time, as the current pressure on democracy reminds us, data embodies a paradox of exposure: it can offer both recognition and oppression. Drawing on theories of social and data justice, this article frames ‘missing data’ as a political force and proposes an analytical framework to deal with it. Building on Bonnycastle’s continuum of social justice and Fraser’s abnormal justice, it shows how missing data lies between social equality and oppression and can be analysed in terms of (mis)distribution, (mis)recognition, and (mis)representation. It demonstrates how missing data operates as a driver of inequality, while emphasising the need for digital sovereignty, democratic data work, counterdata practices, and political action.
Classic problem-space theory models problem solving as a navigation through a structured space of states, operators, goals, and constraints. Systems Engineering (SE) employs analogous constructs (functional analysis, operational analysis, scenarios, trade studies), yet still lacks a rigorous systems-theoretic representation of the problem space itself. In current practice, reasoning often proceeds directly from stakeholder goals to prescriptive artifacts. This makes foundational assumptions about the operational environment, admissible interactions, and contextual conditions implicit or prematurely embedded in architectures or requirements. This paper addresses that gap by formalizing the problem space as an explicit semantic world model containing theoretical constructs that are defined prior to requirements and solution commitments. These constructs, along with the developed axioms, theorems, and corollaries, establish a rigorous criterion for unambiguous boundary semantics, context-dependent interaction traceability to successful stakeholder goal satisfaction, and sufficiency of problem-space specification over which disciplined reasoning can occur independent of solution design. It offers a clear distinction between what is true of the problem domain and what is chosen as a solution. The paper concludes by discussing the significance of the theory for practitioners and provides a dialogue-based hypothetical case study between a stakeholder and an engineer, demonstrating how the theory guides problem framing before designing any prescriptive artifacts.
Junça craftsmanship is an iconic artisanal technique linked to Beselga, a small district within the municipality of Penedono. This tradition was born as an economic supplement to local agricultural production. The plant, which is the main element of handcrafted products, grows abundantly and spontaneously in the areas surrounding the hamlet. The fiber is harvested exclusively by hand, left to dry and then woven to create various types of products.
Using this technique, artisan Catarina Martins designs and creates not only utilitarian objects but also decorative masks, translating these traditional crafts into contemporary aesthetics. These artifacts are then enhanced and disseminated through the 'Origem Comum' platform, an initiative that aims to highlight traditional arts and crafts related to rural areas through design, research, sales, and educational activities. The project aims to emphasize how vernacular knowledge contributes to a balance between human activities and environmental conditions. Artisanal production becomes a critical activity that promotes equitable and sustainable consumption, opposing automated mass-production processes while enhancing local cultural identity.
Muhammad Tayyab Khan, Zane Yong, Lequn Chen
et al.
Accurate extraction of key information from 2D engineering drawings is crucial for high-precision manufacturing. Manual extraction is slow and labor-intensive, while traditional Optical Character Recognition (OCR) techniques often struggle with complex layouts and overlapping symbols, resulting in unstructured outputs. To address these challenges, this paper proposes a novel hybrid deep learning framework for structured information extraction by integrating an Oriented Bounding Box (OBB) detection model with a transformer-based document parsing model (Donut). An in-house annotated dataset is used to train YOLOv11 for detecting nine key categories: Geometric Dimensioning and Tolerancing (GD&T), General Tolerances, Measures, Materials, Notes, Radii, Surface Roughness, Threads, and Title Blocks. Detected OBBs are cropped into images and labeled to fine-tune Donut for structured JSON output. Fine-tuning strategies include a single model trained across all categories and category-specific models. Results show that the single model consistently outperforms category-specific ones across all evaluation metrics, achieving higher precision (94.77% for GD&T), recall (100% for most categories), and F1 score (97.3%), while reducing hallucinations (5.23%). The proposed framework improves accuracy, reduces manual effort, and supports scalable deployment in precision-driven industries.
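The final assembly step such a pipeline implies — merging per-crop parser outputs into one structured JSON record per drawing — can be sketched as follows. The function name, the record shape, and the sample field values are all hypothetical illustrations, not taken from the paper:

```python
import json
from collections import defaultdict

def assemble_drawing_record(parsed_crops):
    """Merge per-crop parser outputs into one structured record.

    parsed_crops: iterable of (category, fields) pairs, where `fields`
    is the dict a Donut-style parser returned for one cropped OBB.
    """
    record = defaultdict(list)
    for category, fields in parsed_crops:
        record[category].append(fields)
    return dict(record)

# Hypothetical parser outputs for three detected regions of one drawing.
crops = [
    ("GD&T", {"symbol": "flatness", "tolerance": "0.05"}),
    ("Threads", {"designation": "M8x1.25"}),
    ("GD&T", {"symbol": "position", "tolerance": "0.1"}),
]
doc = assemble_drawing_record(crops)
print(json.dumps(doc, sort_keys=True))
```

Grouping by detected category keeps the output machine-readable even when a drawing contains several annotations of the same kind.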
Vladyslav Bulhakov, Giordano d'Aloisio, Claudio Di Sipio
et al.
The introduction of large language models (LLMs) has enhanced automation in software engineering tasks, including in Model Driven Engineering (MDE). However, using general-purpose LLMs for domain modeling has its limitations. One approach is to adopt fine-tuned models, but this requires significant computational resources and can lead to issues like catastrophic forgetting. This paper explores how hyperparameter tuning and prompt engineering can improve the accuracy of the Llama 3.1 model for generating domain models from textual descriptions. We use search-based methods to tune hyperparameters for a specific medical data model, resulting in a notable quality improvement over the baseline LLM. We then test the optimized hyperparameters across ten diverse application domains. While the solutions were not universally applicable, we demonstrate that combining hyperparameter tuning with prompt engineering can enhance results across nearly all examined domain models.
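The search-based tuning the abstract describes can be sketched, in heavily simplified form, as a random search over a small hyperparameter space. The parameter ranges and the stand-in objective below are assumptions for illustration; the paper's objective would score generated domain models against a reference:

```python
import random

def random_search(score_fn, n_trials=50, seed=0):
    """Random search over a small LLM hyperparameter space.

    score_fn maps a config dict to a quality score (higher is better).
    """
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {
            "temperature": rng.uniform(0.0, 1.5),
            "top_p": rng.uniform(0.5, 1.0),
            "max_tokens": rng.choice([256, 512, 1024]),
        }
        s = score_fn(cfg)
        if s > best_score:
            best_cfg, best_score = cfg, s
    return best_cfg, best_score

# Stand-in objective: pretend quality peaks at low temperature, high top_p.
toy_score = lambda c: -abs(c["temperature"] - 0.2) + c["top_p"]
cfg, score = random_search(toy_score)
print(cfg)
```

More informed search strategies (evolutionary or Bayesian) plug into the same loop by replacing how each candidate configuration is proposed.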
Sentiment analysis is an essential technique for investigating the emotional climate within developer teams, contributing to both team productivity and project success. Existing sentiment analysis tools in software engineering primarily rely on English or non-German gold-standard datasets. To address this gap, our work introduces a German dataset of 5,949 unique developer statements, extracted from the German developer forum Android-Hilfe.de. Each statement was annotated with one of six basic emotions, based on the emotion model by Shaver et al., by four German-speaking computer science students. Evaluation of the annotation process showed high interrater agreement and reliability. These results indicate that the dataset is sufficiently valid and robust to support sentiment analysis in the German-speaking software engineering community. Evaluation with existing German sentiment analysis tools confirms the lack of domain-specific solutions for software engineering. We also discuss approaches to optimize annotation and present further use cases for the dataset.
Paris Avgeriou, Nauman bin Ali, Marcos Kalinowski
et al.
Increasingly, courses on Empirical Software Engineering research methods are being offered in higher education institutes across the world, mostly at the M.Sc. and Ph.D. levels. While the need for such courses is evident and in line with modern software engineering curricula, educators designing and implementing such courses have so far been reinventing the wheel; every course is designed from scratch with little to no reuse of ideas or content across the community. Due to the nature of the topic, it is rather difficult to get it right the first time when defining the learning objectives, selecting the material, compiling a reader, and, more importantly, designing relevant and appropriate practical work. This leads to substantial effort (through numerous iterations) and poses risks to the course quality. This chapter attempts to support educators in the first and most crucial step in their course design: creating the syllabus. It does so by consolidating the collective experience of the authors as well as of members of the Empirical Software Engineering community; the latter was mined through two working sessions and an online survey. Specifically, it offers a list of the fundamental building blocks for a syllabus, namely course aims, course topics, and practical assignments. The course topics are also linked to the subsequent chapters of this book, so that readers can dig deeper into those chapters and get support on teaching specific research methods or cross-cutting topics. Finally, we guide educators on how to take these building blocks as a starting point and consider a number of relevant aspects to design a syllabus to meet the needs of their own program, students, and curriculum.
The rapid advancement of AI-assisted software engineering has brought transformative potential to the field of software engineering, but existing tools and paradigms remain limited by cognitive overload, inefficient tool integration, and the narrow capabilities of AI copilots. In response, we propose Compiler.next, a novel search-based compiler designed to enable the seamless evolution of AI-native software systems as part of the emerging Software Engineering 3.0 era. Unlike traditional static compilers, Compiler.next takes human-written intents and automatically generates working software by searching for an optimal solution. This process involves dynamic optimization of cognitive architectures and their constituents (e.g., prompts, foundation model configurations, and system parameters) while finding the optimal trade-off between several objectives, such as accuracy, cost, and latency. This paper outlines the architecture of Compiler.next and positions it as a cornerstone in democratizing software development by lowering the technical barrier for non-experts, enabling scalable, adaptable, and reliable AI-powered software. We present a roadmap to address the core challenges in intent compilation, including developing quality programming constructs, effective search heuristics, reproducibility, and interoperability between compilers. Our vision lays the groundwork for fully automated, search-driven software development, fostering faster innovation and more efficient AI-driven systems.
While mastered by some, good scientific writing practices within Empirical Software Engineering (ESE) research appear to be seldom discussed and documented. Despite this, these practices are implicit or even explicit evaluation criteria of typical software engineering conferences and journals. In this pragmatic, educational-first document, we want to provide guidance to those who may feel overwhelmed or confused by writing ESE papers, but also those more experienced who still might find an opinionated collection of writing advice useful. The primary audience we had in mind for this paper were our own BSc, MSc, and PhD students, but also students of others. Our documented advice therefore reflects a subjective and personal vision of writing ESE papers. By no means do we claim to be fully objective, generalizable, or representative of the whole discipline. With that being said, writing papers in this way has worked pretty well for us so far. We hope that this guide can at least partially do the same for others.
Node‐link diagrams are a widely used metaphor for creating visualizations of relational data. Most frequently, such techniques address creating 2D graph drawings, which are easy to use on computer screens and in print. In contrast, 3D node‐link graph visualizations are far less used, as they have many known limitations and comparatively few well‐understood advantages. A key issue here is that such 3D visualizations require users to select suitable viewpoints. We address this limitation by studying the ability of layout techniques to produce high‐quality views of 3D graph drawings. For this, we perform a thorough experimental evaluation, comparing 3D graph drawings, rendered from a covering sampling of all viewpoints, with their 2D counterparts across various state‐of‐the‐art node‐link drawing algorithms, graph families, and quality metrics. Our results show that, depending on the graph family, 3D node‐link diagrams can contain many viewpoints that yield 2D visualizations of higher quality than those created by directly using 2D node‐link diagrams. This not only sheds light on the potential of 3D node‐link diagrams but also yields a simple approach to producing high‐quality 2D node‐link diagrams.
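The covering sampling of viewpoints the abstract mentions can be sketched as: sample candidate viewpoints near-uniformly on a sphere, orthographically project the 3D layout for each, and keep the viewpoint maximizing a quality metric. Everything here is an illustrative stand-in — the toy metric (minimum pairwise node distance) substitutes for the study's established quality metrics:

```python
import math

def fibonacci_sphere(n):
    """Near-uniform covering sample of candidate viewpoints on the unit sphere."""
    pts, golden = [], math.pi * (3.0 - math.sqrt(5.0))
    for i in range(n):
        y = 1.0 - 2.0 * (i + 0.5) / n
        r = math.sqrt(max(0.0, 1.0 - y * y))
        t = golden * i
        pts.append((r * math.cos(t), y, r * math.sin(t)))
    return pts

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def norm(a):
    l = math.sqrt(sum(x * x for x in a))
    return tuple(x / l for x in a)

def project(nodes, view):
    """Orthographically project 3D nodes onto the plane facing `view`."""
    helper = (1.0, 0.0, 0.0) if abs(view[0]) < 0.9 else (0.0, 1.0, 0.0)
    u = norm(cross(view, helper))
    v = cross(view, u)  # unit length: view and u are orthonormal
    return [(sum(p * q for p, q in zip(n, u)),
             sum(p * q for p, q in zip(n, v))) for n in nodes]

def min_pair_dist(pts2d):
    """Toy quality metric: minimum pairwise node distance (higher = better)."""
    return min(math.dist(p, q) for i, p in enumerate(pts2d)
               for q in pts2d[i + 1:])

def best_viewpoint(nodes, n_views=100):
    return max(fibonacci_sphere(n_views),
               key=lambda v: min_pair_dist(project(nodes, v)))

# A tiny 3D layout: the four vertices of a tetrahedron.
layout = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
v = best_viewpoint(layout)
print(v)
```

Scoring every sampled viewpoint against the full metric set is what makes the evaluation a "covering" one rather than a heuristic pick.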
Written within the scope of a PhD research project that is being developed at the University of Aveiro (Portugal), this article is based on research for design, through design. The goal is to develop a proposal for a non-disposable modular packaging system for ceramic products, which can organize interior spaces, thus reducing waste. Its development relies on the partnerships of two companies in the distinctive areas of ceramics and textiles, namely: Grestel-Produtos Cerâmicos S.A. and Tintex Textiles S.A.
A historical study of ceramic ware packaging was developed through a documentary analysis of relevant moments in the history of long-distance transport, before disposable consumption habits. By gathering and analysing information that provides significant contributions based on historical facts, this study targets a contemporary solution for a reusable packaging project.
A proposal for a modular packaging system has already been designed and validated by the partner companies.
The article aims to trace the transition of material design approaches, identifying an evolutionary model with major principles and contributions to health improvement through material innovations. The analysis follows a bottom-up approach, using a case-based reasoning methodology to study the evolution of the well-being concept and to identify new frontiers of material experimentation within three approaches: Imitative, Augmentative, and Mutational. The article reflects on and suggests the introduction of new material design approaches to manage the design of advanced materials’ behaviours in their future development, highlighting their potential impact on user satisfaction and acceptance, collective welfare improvement, performance optimization, and environmental concerns.
This study aims to enhance user experience on online learning platforms by investigating design principles, usability evaluation techniques, and redesign processes. A total of 150 participants, divided equally among students, educators, and professionals, were stratified by age, gender, education level, and familiarity with online learning. Various evaluation methods, including heuristic evaluation, guideline reviews, and cognitive walkthroughs, were employed. Metrics such as task success rate, time-on-task, and Net Promoter Score (NPS) were used to quantify user satisfaction and effectiveness. Additionally, five qualitative interviews were conducted for deeper insights. The results revealed specific usability issues and demonstrated the effectiveness of the applied evaluation techniques. Post-redesign metrics indicated significant improvements in user satisfaction and engagement. The study underscores the importance of a multi-faceted approach to design and evaluation in online learning platforms and suggests avenues for future research.
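Of the metrics listed, the Net Promoter Score has a standard formula worth making concrete: the percentage of promoters (ratings 9-10) minus the percentage of detractors (ratings 0-6) on a 0-10 likelihood-to-recommend scale. The sample ratings below are invented for illustration, not the study's data:

```python
def net_promoter_score(ratings):
    """NPS: % promoters (9-10) minus % detractors (0-6) on 0-10 ratings."""
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / n

# Ten hypothetical post-redesign ratings: 5 promoters, 2 detractors.
print(net_promoter_score([10, 9, 8, 7, 6, 10, 9, 3, 8, 9]))  # -> 30.0
```

Passives (ratings 7-8) count toward the denominator but neither add nor subtract, which is why NPS can range from -100 to +100.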
The foundation for the EIT-KIC Culture and Creativity was laid in many steps, through a series of EU policy developments. Moving from the seminal report on The Economy of Culture promoted by the acting European Commissioner for Culture Jan Figel in 2006, we have witnessed a gradual development of the idea that cultural and creative sectors are a main driver of socio-economic development in Europe. This trajectory can be reconstructed in the sequence of the Work Plans for Culture that have spanned the last decade and in a few milestones such as the European Year of Cultural Heritage 2018 and its major legacies, the European Framework for Action on Cultural Heritage and the New European Agenda for Culture. Let’s briefly consider how all such components have been instrumental to the birth of the EIT-KIC Culture and Creativity, and how an understanding of such process is still fundamental today to fully appreciate the potential and criticalities of this new, ambitious endeavor.
Daniel Moreno, Katherine Mollenhauer, Arturo Orellana
The territory is configured as a space that is limited, occupied, and used by different actors. From these actors, diverse relationships of complementation and reciprocity, but also of conflict and confrontation, are generated in the form of complex systems, making the territory a “system of systems” (Lurås, 2016; Jackson & Keys, 2019). This evokes the concept of intersectionality, as it is used to designate the perception of power relations, calling into question the empowerment of certain less favoured actors. Systemic design is a practical, methodological, and theoretical field where systems thinking and design thinking and practice converge to address the complexity of citizen participation as relationships among multiple elements, with multiple foci, considering these elements as human or non-human from a territorial point of view. Service design proposes that the actors of the ecosystem are considered both “users” and “co-producers” of the service, since all of them interact with the DRA at some level. Through co-production and co-creation techniques, around 30 community citizen online workshops and 16 participatory webinars were carried out. These workshops, in addition to contributing to the elaboration and adjustment of the diagnosis and design of the Regional Development Strategy (ERD) of Los Lagos, Chile, sought to generate social innovation. The use of databases and various media made it possible to design workshops in remote mode, achieving effective participation of more than 2,000 people.
Sonja Hyrynsalmi, Ella Peltonen, Fanny Vainionpää
et al.
In the extant literature, there has been discussion of the drivers and motivations of minorities to enter the software industry. For example, universities have invested in more diverse imagery for years to attract a more diverse pool of students. However, in our research, we consider whether we understand why students choose their current major and how they initially decided to apply to study software engineering. We were also interested in whether there are signals that could inform marketing aimed at attracting more women into tech. We approached the topic via an online survey (N = 78) sent to university students of software engineering in Finland. Our results show that, on average, women apply later to software engineering studies than men, with statistically significant differences between genders. Additionally, we found that marketing actions have different impacts based on gender: personal guidance in live events or on platforms is most influential for women, whereas teachers and social media have a more significant impact on men. The results also indicate two main paths into the field: the traditional linear educational pathway and the adult career-change pathway, each varying significantly by gender.
Large Language Models (LLMs) have recently shown remarkable capabilities in various software engineering tasks, spurring the rapid growth of the Large Language Models for Software Engineering (LLM4SE) area. However, limited attention has been paid to developing efficient LLM4SE techniques that demand minimal computational cost, time, and memory resources, as well as green LLM4SE solutions that reduce energy consumption, water usage, and carbon emissions. This paper aims to redirect the focus of the research community towards the efficiency and greenness of LLM4SE, while also sharing potential research directions to achieve this goal. It commences with a brief overview of the significance of LLM4SE and highlights the need for efficient and green LLM4SE solutions. Subsequently, the paper presents a vision for a future where efficient and green LLM4SE revolutionizes the LLM-based software engineering tool landscape, benefiting various stakeholders, including industry, individual practitioners, and society. The paper then delineates a roadmap for future research, outlining specific research paths and potential solutions for the research community to pursue. While not intended to be a definitive guide, the paper aims to inspire further progress, with the ultimate goal of establishing efficient and green LLM4SE as a central element in the future of software engineering.
Ethnography has become one of the established methods for empirical research on software engineering. Although there is a wide variety of introductory books available, there has been no material targeting software engineering students particularly, until now. In this chapter we provide an introduction to teaching and learning ethnography for faculty teaching ethnography to software engineering graduate students and for the students themselves of such courses. The contents of the chapter focus on what we consider the core basic knowledge for newcomers to ethnography as a research method. We complement the text with proposals for exercises, tips for teaching, and pitfalls that we and our students have experienced. The chapter is designed to support part of a course on empirical software engineering and provides pointers and literature for further reading.
In the existing research, polygonal-shaped dies were manufactured based on the dimensions of the design. These dies were then assembled on the deep-drawing device to create polygonal cups (specifically heptagonal cups) for experimental work tests. The process was achieved via a deep-drawing operation to create heptagonal cups with diameter D = 40 mm and height L = 31.5 mm. For both the experimental work and the finite element analysis, the heptagonal cups were formed from flat circular blanks of low-carbon steel (thickness t0 = 0.7 mm, diameter D = 80 mm). The commercial software ANSYS was employed to perform the finite element analysis. The research aims to study the effect of different radial clearances (RC = 1.1 t0, 1.2 t0, and 1.3 t0) between die and punch for heptagonal-shape deep-drawing operations on the forming load, the height of the cup, the thickness distribution, and the strain and stress distribution along the cup wall (side wall and wall curvature). The analysis outcomes indicate that, at RC = 1.1 t0, the maximum forming force was 51.250 kN at the wall curvature, the maximum effective strain was 0.8231, and the maximum stress was 676 MPa. Additionally, the squeezing process in the cup wall occurred at RC = 1.1 t0.