B2C e-commerce companies are increasing in number due to the significant benefits e-logistics provides, boosting profits and improving customer satisfaction. Moreover, the COVID-19 pandemic increased demand for online shopping due to lockdowns and safety concerns. Despite this, several B2C companies entering the field eventually leave due to inefficient handling of returned products and ineffective reverse e-logistics systems. This article identifies recent methods to improve reverse logistics (RL) performance for B2C e-commerce firms. The methodology is based on literature review, synthesis, Delphi, and multi-scoring survey methods. The results showed that management, quality management, organisational structure and culture, IT, customer satisfaction and service, employees, and infrastructure are factors with a direct effect on RL performance, and this study suggests methods to improve these factors.
Though much attention has been paid to the intentional adoption of Responsible Innovation (RI), the drivers of de facto RI (responsible innovation framed through organisations' existing practices, independent of RI principles) remain underexplored. We investigate the influence of socially/environmentally oriented (impact) investors on early-stage business ventures whose innovation practices align with RI principles, despite their being unaware of RI discourse. Based on our study of a social finance-practising venture capital fund, its portfolio ventures, investors and peers, we develop a process model that captures impact investors' reinforcing influence on de facto RI practices among the ventures they fund. We theorise that the selection, incentive and accountability systems deployed by impact investors serve to align the financial and non-financial interests of innovation value chain actors, creating conditions that promote and enable ventures' simultaneous pursuit of commercial and social/environmental performance.
Organizations across finance, healthcare, transportation, content moderation, and critical infrastructure are rapidly deploying highly automated AI systems, yet they lack principled methods to quantify how increasing automation amplifies harm when failures occur. We propose a parsimonious Bayesian risk decomposition expressing expected loss as the product of three terms: the probability of system failure, the conditional probability that a failure propagates into harm given the automation level, and the expected severity of harm. This framework isolates a critical quantity -- the conditional probability that failures propagate into harm -- which captures execution and oversight risk rather than model accuracy alone. We develop complete theoretical foundations: formal proofs of the decomposition, a harm propagation equivalence theorem linking the harm propagation probability to observable execution controls, risk elasticity measures, efficient frontier analysis for automation policy, and optimal resource allocation principles with second-order conditions. We motivate the framework with an illustrative case study of the 2012 Knight Capital incident ($440M loss) as one instantiation of a broadly applicable failure pattern, and characterize the research design required to empirically validate the framework at scale across deployment domains. This work provides the theoretical foundations for a new class of deployment-focused risk governance tools for agentic and automated AI systems.
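The three-term decomposition described in this abstract can be written out as a worked equation. The notation here is illustrative, since the abstract summarizes rather than quotes the paper's formalism:

```latex
\mathbb{E}[L \mid a] \;=\;
\underbrace{P(F)}_{\text{failure probability}}
\;\times\;
\underbrace{P(H \mid F, a)}_{\substack{\text{harm propagation at}\\\text{automation level } a}}
\;\times\;
\underbrace{\mathbb{E}[S \mid H]}_{\text{expected severity}}
```

Here $F$ denotes a system failure, $a$ the automation level, $H$ the event that the failure propagates into harm, and $S$ the severity of that harm. The middle factor is the quantity the abstract isolates: it reflects execution and oversight risk rather than model accuracy alone.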
Technological advancements, such as data analytics, artificial intelligence (AI), and robotic process automation (RPA), are reshaping internal audit practices. These innovations have driven significant improvements in efficiency, effectiveness, and performance. Traditional internal audit processes are evolving with the integration of advanced technologies. The 2024 Global Internal Audit Standards emphasize performance as a key factor in the success of modern internal audit functions (IAFs), which underscores the growing need to integrate advanced technologies into audit processes. However, adoption poses challenges, including data privacy concerns, cybersecurity risks, and the demand for specialized expertise. This paper reviews existing literature on technology-driven auditing, explores the impact of the 2024 Global Internal Audit Standards, and identifies key challenges in implementing different technologies.
Business, Business mathematics. Commercial arithmetic. Including tables, etc.
Alida Vallejo-López, Cesar Noboa-Terán, Juana Kou-Guzmán
et al.
Technology has become a global tool that allows us to obtain information, analyze data, streamline communication, and share images, data, videos, texts, and more. Daily activities have shifted from traditional to digital, and today it is nearly impossible to live without an electronic device. In this context, changes in people's health have been observed, with complaints ranging from visual, neurological, and concentration problems to muscular, hearing, and sleep disorders. Society must be aware of the importance of using technological devices responsibly to protect people's health in general. Keywords: technology, activities, protection, electronic devices, radiation, health.
For modeling the serial dependence in time series of counts, various approaches have been proposed in the literature. In particular, models based on a recursive, autoregressive-type structure such as the well-known integer-valued autoregressive (INAR) models are very popular in practice. The distribution of such INAR models is fully determined by a vector of autoregressive binomial thinning coefficients and the discrete innovation distribution. While fully parametric estimation techniques for these models are mostly covered in the literature, a semi-parametric approach allows for consistent and efficient joint estimation of the model coefficients and the innovation distribution without imposing any parametric assumptions. Although the limiting distribution of this estimator is known, which, in principle, enables asymptotic inference and INAR model diagnostics on the innovations, it is cumbersome to apply in practice. In this paper, we consider a corresponding semi-parametric INAR bootstrap procedure and show its joint consistency for the estimation of the INAR coefficients and for the estimation of the innovation distribution. We discuss different application scenarios that include goodness-of-fit testing, predictive inference and joint dispersion index analysis for count time series. In simulations, we illustrate the finite sample performance of the semi-parametric INAR bootstrap using several innovation distributions and provide real-data applications.
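The recursive INAR structure mentioned above can be made concrete with a small simulation. The sketch below, a minimal illustration not drawn from the paper itself, simulates an INAR(1) process X_t = α ∘ X_{t-1} + ε_t, where ∘ is binomial thinning (each of the X_{t-1} counts survives independently with probability α) and ε_t is a discrete innovation; function and parameter names are hypothetical:

```python
import numpy as np

def simulate_inar1(alpha, innov_sampler, n, burn_in=100, seed=0):
    """Simulate an INAR(1) series X_t = alpha ∘ X_{t-1} + eps_t,
    where ∘ is binomial thinning and eps_t ~ innov_sampler."""
    rng = np.random.default_rng(seed)
    x = np.empty(n + burn_in, dtype=int)
    x[0] = innov_sampler(rng)
    for t in range(1, n + burn_in):
        survivors = rng.binomial(x[t - 1], alpha)  # binomial thinning of previous count
        x[t] = survivors + innov_sampler(rng)      # add a fresh innovation
    return x[burn_in:]

# Example: Poisson(2) innovations with thinning coefficient alpha = 0.5;
# the stationary mean of this INAR(1) is E[eps] / (1 - alpha) = 4.
series = simulate_inar1(0.5, lambda rng: rng.poisson(2.0), n=500)
print(series.mean())
```

A semi-parametric bootstrap of the kind the paper studies would resample such a series using estimated thinning coefficients together with a nonparametrically estimated innovation distribution, rather than the parametric Poisson sampler used here for illustration.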
A memristor, a two-terminal nanodevice, has garnered substantial attention in recent years due to its distinctive properties and versatile applications. These nanoscale components, characterized by their simplicity of manufacture, scalability to small dimensions, nonvolatile memory capabilities, and suitability for low-power platforms, offer a wealth of opportunities for technological innovation. Memristors hold great promise in diverse fields, ranging from advanced memory devices and neuromorphic computing to energy-efficient circuits and beyond. In this report, we aim to provide a succinct but thorough exploration of the expanding landscape of memristor applications. Through a meticulous examination of the scholarly literature, we systematically document pivotal research milestones. By tracing this history chronologically, we aim to unveil the broad spectrum of possibilities through which memristors can revolutionize and enhance various domains of electronics and computing.
This paper investigates the economic feasibility of replacing human labor with robotics and automation in Qatar's manufacturing and service sectors. By analyzing labor costs, productivity gains, and implementation expenses, the study assesses the potential financial impact and return on investment of robotic integration. Results indicate the sectors where automation is economically viable and identify challenges related to workforce adaptation, policy, and infrastructure. These insights provide guidance for policymakers and industry stakeholders considering automation strategies in Qatar.
The proliferation of generative AI and deceptive synthetic media threatens the global information ecosystem, especially across the Global Majority. This report from WITNESS highlights the limitations of current AI detection tools, which often underperform in real-world scenarios due to challenges related to explainability, fairness, accessibility, and contextual relevance. In response, WITNESS introduces the Truly Innovative and Effective AI Detection (TRIED) Benchmark, a new framework for evaluating detection tools based on their real-world impact and capacity for innovation. Drawing on frontline experiences, deceptive AI cases, and global consultations, the report outlines how detection tools must evolve to become truly innovative and relevant by meeting diverse linguistic, cultural, and technological contexts. It offers practical guidance for developers, policy actors, and standards bodies to design accountable, transparent, and user-centered detection solutions, and to incorporate sociotechnical considerations into future AI standards, procedures, and evaluation frameworks. By adopting the TRIED Benchmark, stakeholders can drive innovation, safeguard public trust, strengthen AI literacy, and contribute to a more resilient global information ecosystem.
Emerging solid-state additive manufacturing (AM) technologies have recently garnered significant interest because they can prevent the defects that other metal AM processes may have due to sintering or melting. Additive friction stir deposition (AFSD), also known as MELD, is a solid-state AM technology that utilises bar feedstocks as the input material and frictional–deformational heat as the energy source. AFSD offers high deposition rates and is a promising technique for producing defect-free parts with wrought-like material properties in aluminium, magnesium, steel, and titanium alloys. While it offers benefits in terms of productivity and material properties, its low technology readiness level prevents widespread adoption. Academics and engineers are conducting research across various subfields to better understand the process parameters, material properties, process monitoring, and modelling of the AFSD technology. Yet, it is also crucial to compile and compare the research findings from past studies on this new technology to gain a comprehensive understanding and pinpoint future research paths. This paper aims to present a comprehensive review of AFSD focusing on process parameters, material properties, monitoring, and modelling. In addition to examining data from existing studies, this paper identifies areas where research is lacking and suggests paths for future research efforts.
Engineering machinery, tools, and implements, Technological innovations. Automation
Mihai Christodorescu, Ryan Craven, Soheil Feizi
et al.
The rise of Generative AI (GenAI) brings about transformative potential across sectors, but its dual-use nature also amplifies risks. Governments globally are grappling with the challenge of regulating GenAI, balancing innovation against safety. China, the United States (US), and the European Union (EU) are at the forefront with initiatives like the Management of Algorithmic Recommendations, the Executive Order, and the AI Act, respectively. However, the rapid evolution of GenAI capabilities often outpaces the development of comprehensive safety measures, creating a gap between regulatory needs and technical advancements. A workshop co-organized by Google, University of Wisconsin, Madison (UW-Madison), and Stanford University aimed to bridge this gap between GenAI policy and technology. The diverse stakeholders of the GenAI space -- from the public and governments to academia and industry -- make any safety measures under consideration more complex, as both technical feasibility and regulatory guidance must be realized. This paper summarizes the discussions during the workshop, which addressed questions such as: How can regulation be designed without hindering technological progress? How can technology evolve to meet regulatory standards? The interplay between legislation and technology is a vast topic, and we do not claim that this paper is a comprehensive treatment of it. This paper is meant to capture findings from the workshop and, hopefully, to guide further discussion on this topic.
Driven by the rapid ascent of artificial intelligence (AI), organizations are at the epicenter of a seismic shift, facing a crucial question: How can AI be successfully integrated into existing operations? To help answer it, manage expectations, and mitigate frustration, this article introduces Computational Management, a systematic approach to task automation for enhancing the ability of organizations to harness AI's potential within existing workflows. Computational Management acts as a bridge between the strategic insights of management science and the analytical rigor of computational thinking. The article offers three step-by-step procedures to begin the process of implementing AI within a workflow. These procedures focus on task (re)formulation, on the assessment of the automation potential of tasks, and on the completion of task specification templates for AI selection and adaptation. The article includes manual and automated methods, with prompt suggestions for publicly available LLMs, to complete these three procedures. The first procedure, task (re)formulation, breaks work activities down into basic units that can be completed by one agent, involve a single well-defined action, and produce a distinct outcome. The second assesses each granular task's suitability for automation, using the Task Automation Index to rank tasks based on whether they have standardized input, well-defined rules, repetitiveness, data dependency, and objective outputs. The third provides a task specification template that details 16 critical components of a task and can be used as a checklist to select or adapt the most suitable AI solution for integration into existing workflows. Computational Management provides a roadmap and a toolkit for humans and AI to thrive together, while enhancing organizational efficiency and innovation.
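The ranking step in the second procedure can be sketched in a few lines of code. This is a minimal illustration, not the article's actual implementation: the five criteria come from the abstract, but the equal weighting, the 0-to-1 scoring scale, and all names below are assumptions made for the example.

```python
from dataclasses import dataclass

# The five criteria named in the abstract; equal weighting is an
# assumption for illustration -- the article's actual scheme may differ.
CRITERIA = ("standardized_input", "well_defined_rules", "repetitiveness",
            "data_dependency", "objective_outputs")

@dataclass
class Task:
    name: str
    scores: dict  # criterion -> score in [0, 1]

    def automation_index(self) -> float:
        """Mean of the five criterion scores (hypothetical formula)."""
        return sum(self.scores[c] for c in CRITERIA) / len(CRITERIA)

# Two hypothetical work activities scored against the criteria.
tasks = [
    Task("invoice data entry", dict.fromkeys(CRITERIA, 0.9)),
    Task("strategy workshop", dict.fromkeys(CRITERIA, 0.2)),
]
ranked = sorted(tasks, key=lambda t: t.automation_index(), reverse=True)
print([(t.name, round(t.automation_index(), 2)) for t in ranked])
```

Tasks with standardized inputs and objective outputs rise to the top of the ranking, which is the behavior the Task Automation Index is described as producing.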
Doron Yeverechyahu, Raveesh Mayya, Gal Oestreicher-Singer
Large Language Models (LLMs) have been shown to enhance individual productivity in guided settings. Whereas LLMs are likely to also transform innovation processes in a collaborative work setting, it is unclear what trajectory this transformation will follow. Innovation in these contexts encompasses both capability innovation that explores new possibilities by acquiring new competencies in a project and iterative innovation that exploits existing foundations by enhancing established competencies and improving project quality. Whether LLMs affect these two aspects of collaborative work, and to what extent, is an open empirical question. Open-source development provides an ideal setting to examine LLM impacts on these innovation types, as the voluntary, open, and collaborative nature of contributions provides the greatest opportunity for technological augmentation. We focus on open-source projects on GitHub by leveraging a natural experiment around the selective rollout of GitHub Copilot (a programming-focused LLM) in October 2021, when GitHub Copilot supported programming languages like Python or Rust, but not R or Haskell. We observe a significant jump in overall contributions, suggesting that LLMs effectively augment collaborative innovation in an unguided setting. Interestingly, Copilot's launch increased iterative innovation focused on maintenance-related or feature-refining contributions significantly more than it did capability innovation through code-development or feature-introducing commits. This disparity was more pronounced after the model upgrade in June 2022 and was evident in active projects with extensive coding activity, suggesting that as LLM capabilities and/or available contextual information improve, the gap between capability and iterative innovation may widen. We discuss practical and policy implications to incentivize high-value innovative solutions.
Context: The rapid evolution of Large Language Models (LLMs) has sparked significant interest in leveraging their capabilities for automating code review processes. Prior studies often focus on developing LLMs for code review automation, yet such development requires expensive resources, which is infeasible for organizations with limited budgets and resources. Thus, fine-tuning and prompt engineering are the two common approaches to leveraging LLMs for code review automation. Objective: We aim to investigate the performance of LLM-based code review automation in two contexts, i.e., when LLMs are leveraged by fine-tuning and by prompting. Fine-tuning involves training the model on a specific code review dataset, while prompting involves providing explicit instructions to guide the model's generation process without requiring a specific code review dataset. Method: We leverage model fine-tuning and inference techniques (i.e., zero-shot learning, few-shot learning and persona) on LLM-based code review automation. In total, we investigate 12 variations of two LLMs for code review automation (i.e., GPT-3.5 and Magicoder), and compare them with Guo et al.'s approach and three existing code review automation approaches. Results: Fine-tuning GPT-3.5 with zero-shot learning helps GPT-3.5 achieve 73.17%-74.23% higher EM than Guo et al.'s approach. In addition, when GPT-3.5 is not fine-tuned, GPT-3.5 with few-shot learning achieves 46.38%-659.09% higher EM than GPT-3.5 with zero-shot learning. Conclusions: Based on our results, we recommend that (1) LLMs for code review automation should be fine-tuned to achieve the highest performance; and (2) when data is not sufficient for model fine-tuning (e.g., a cold-start problem), few-shot learning without a persona should be used for LLM-based code review automation.
The idea of an extension of life for CubeSats is proposed to reduce space debris in low-Earth orbit. In this work, a gripper is designed for geometry-based grasping in berthing tasks. The grasping operation is outlined for square- and rectangle-shaped CubeSats. Equilibrium conditions are formulated to design the fingertips' shape and parameters for grasping CubeSat bodies. A design scheme is proposed to provide the required accuracy. The design concept is developed into a lab prototype using low-cost 3D printing, and a mock-up grasping task representative of the berthing operation is evaluated with the prototype. A center-of-mass hanging setup for the prototype and the grasped body is used to evaluate the impact of grasping, partially replicating the conditions in space by reducing the effect of gravity on the system.
Engineering machinery, tools, and implements, Technological innovations. Automation
This paper investigates middle managers’ resistance to digital transformation initiatives and suggests strategies for overcoming such resistance using the example of a major Russian transportation company. This study employed a mixed-methods approach to assess middle managers’ values and to identify patterns of resistance behavior. The case studies further illustrate the resistance of middle managers and how the company under study responded to these incidents. The findings reveal a significant relationship between employees’ attitudes toward routine and their resistance to digital transformation. Managers with high scores in tradition, conformity, security, and power values, as well as a strong positive attitude toward routine, were more resistant to change. Conversely, those with high scores in universalism, self-direction, and stimulation values were more open to change. By addressing the values and concerns driving middle managers’ attitudes, organizations can better support them in overcoming resistance to digital transformation. The study also offers practical strategies for aligning digital transformation efforts with middle managers’ values, thereby fostering a more positive attitude toward change and facilitating successful implementation.
Information systems are used by banking companies to process and store financial data and customer transactions. Companies strive to continuously improve their technological innovations in order to support internal business processes. By implementing information technology that makes internal business processes faster, more efficient, and more effective, companies are expected to become more competitive. The purpose of this study is to determine the main factors in the implementation of the existing front-end system at bank XYZ, so that models can then be built to evaluate the system's performance. Data were collected from respondents, namely customers directly affected by the front-end system's service, and then analyzed using the factor analysis method. The results show that clusters of indicators form new factors that reflect the expectations of respondents in particular and bank customers in general. The conclusion is that the following new factors were formed: Automation, Information, Management Information Systems, Business Process, and Performance.