Modern digital services have evolved into indispensable tools that drive today's large-scale information systems. Yet the prevailing platform-centric model, in which services are optimized for platform-driven metrics such as engagement and conversion, often fails to align with users' true needs. While platform technologies have advanced significantly, especially with the integration of large language models (LLMs), we argue that improvements in platform service quality do not necessarily translate into genuine user benefit. Instead, platform-centric services prioritize provider objectives over user welfare, resulting in conflicts with user interests. This paper argues that the future of digital services should shift from platform-centric services to user-centric agents. Such agents prioritize privacy, align with user-defined goals, and grant users control over their preferences and actions. With advances in LLMs and on-device intelligence, this vision is now feasible. This paper explores the opportunities and challenges in transitioning to user-centric intelligence, presents a practical device-cloud pipeline for its implementation, and discusses the governance and ecosystem structures necessary for its adoption.
In restaurants, many aspects of customer service, such as greeting customers, taking orders, and processing payments, are automated. Because each restaurant differs in cuisine, required services, and standards, one challenging part of automating the entire process is monitoring the table during a meal and providing the appropriate services. In this paper, we demonstrate an approach for automatically checking on and serving the table. We first construct a base model that recognizes common information needed to comprehend the table's context, such as object category, remaining food quantity, and meal progress. We then add a service recognition classifier and retrain the model using a small amount of local restaurant data. To find a suitable service recognition classifier, we gathered data capturing restaurant tables during meals and carried out a variety of tests with different inputs, combinations, time series, and data choices. Through these tests, we found that when the retraining data are sparse and redundant, a model with a few significant data points and few trainable parameters is more effective.
Itamar Cohen, Paolo Giaccone, Carla Fabiana Chiasserini
In an edge-cloud multi-tier network, datacenters provide services to mobile users, with each service having specific latency constraints and computational requirements. Deploying such a variety of services while matching their requirements with the available computing resources is challenging. In addition, time-critical services may have to be migrated as users move, to keep fulfilling their latency constraints. Unlike previous work relying on an orchestrator with an always-updated global view of the available resources and the users' locations, this work envisions a distributed solution to the above problems. In particular, we propose a distributed asynchronous framework for service deployment in the edge-cloud that increases the system resilience by avoiding a single point of failure, as in the case of a central orchestrator. Our solution ensures cost-efficient feasible placement of services, while using negligible bandwidth. Our results, obtained through trace-driven, large-scale simulations, show that the proposed solution provides performance very close to that of state-of-the-art centralized solutions, at the cost of only a small communication overhead.
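The delegation idea behind such orchestrator-free placement can be illustrated with a minimal sketch. The `Node` and `Service` classes, capacities, and latencies below are invented for illustration; the actual framework is asynchronous and far more elaborate. Each tier hosts a service locally if it has spare capacity and meets the latency constraint, and otherwise delegates one tier toward the cloud.

```python
# Sketch of orchestrator-free placement in an edge-cloud hierarchy.
# Node/Service and all numbers are hypothetical, for illustration only.

class Service:
    def __init__(self, cpu, max_latency_ms):
        self.cpu = cpu                        # required CPU units
        self.max_latency_ms = max_latency_ms  # latency constraint

class Node:
    def __init__(self, name, capacity, latency_ms, parent=None):
        self.name = name
        self.capacity = capacity      # spare CPU units at this datacenter
        self.latency_ms = latency_ms  # user-to-tier latency
        self.parent = parent          # next tier toward the cloud

    def place(self, service):
        """Decide locally: host the service, or delegate one tier up."""
        if (self.capacity >= service.cpu
                and self.latency_ms <= service.max_latency_ms):
            self.capacity -= service.cpu
            return self.name
        if self.parent is not None:
            return self.parent.place(service)
        return None  # no feasible placement on this path

cloud = Node("cloud", capacity=1000, latency_ms=80)
edge = Node("edge", capacity=4, latency_ms=5, parent=cloud)
print(edge.place(Service(cpu=2, max_latency_ms=10)))   # hosted at the edge
print(edge.place(Service(cpu=8, max_latency_ms=100)))  # delegated to the cloud
```

No node needs a global view: each decision uses only local capacity and the pointer to the next tier, which is what removes the single point of failure of a central orchestrator.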
Service meshes play a central role in the modern application ecosystem by providing an easy and flexible way to connect different services that form a distributed application. However, because of the way they interpose on application traffic, they can substantially increase application latency and resource consumption. We develop a decompositional approach and a tool, called MeshInsight, to systematically characterize the overhead of service meshes and to help developers quantify overhead in deployment scenarios of interest. Using MeshInsight, we confirm that service meshes can have high overhead (up to 185% higher latency and up to 92% more virtual CPU cores for our benchmark applications), but the severity is intimately tied to how they are configured and the application workload. The primary contributors to overhead vary based on the configuration too. IPC (inter-process communication) and socket writes dominate when the service mesh operates as a TCP proxy, but protocol parsing dominates when it operates as an HTTP proxy. MeshInsight also enables us to study the end-to-end impact of optimizations to service meshes. We show that not all seemingly promising optimizations lead to a notable overhead reduction in realistic settings.
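The decompositional idea can be sketched as a simple additive model: per-message sidecar overhead is the sum of per-component costs, and the set of active components depends on the proxy mode. The component names mirror the abstract; the microsecond figures below are invented placeholders, not MeshInsight measurements.

```python
# Additive per-message overhead model in the spirit of MeshInsight's
# decomposition. Component costs (microseconds) are invented placeholders.

COMPONENT_US = {
    "ipc": 10.0,               # inter-process communication
    "socket_write": 8.0,       # kernel socket writes
    "protocol_parsing": 15.0,  # HTTP parsing in the sidecar
}

# Which components are active depends on how the sidecar proxy is configured.
MODE_COMPONENTS = {
    "tcp_proxy": ["ipc", "socket_write"],
    "http_proxy": ["ipc", "socket_write", "protocol_parsing"],
}

def sidecar_overhead_us(mode):
    """Per-message overhead of one sidecar traversal, as a sum of parts."""
    return sum(COMPONENT_US[c] for c in MODE_COMPONENTS[mode])

def request_latency_us(base_us, mode, hops=2):
    """End-to-end latency: a request crosses a sidecar on both sides."""
    return base_us + hops * sidecar_overhead_us(mode)
```

Under this toy model, switching the sidecar from TCP to HTTP proxying raises per-traversal overhead from 18 to 33 microseconds, mirroring the abstract's observation that protocol parsing dominates in HTTP mode.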
Gil Einziger, Gabriel Scalosub, Carla Fabiana Chiasserini, et al.
Deploying services efficiently while satisfying their quality requirements is a major challenge in network slicing. Effective solutions place instances of the services' virtual network functions (VNFs) at different locations of the cellular infrastructure and manage such instances by scaling them as needed. In this work, we address the above problem and the very relevant aspect of sub-slice reuse among different services. Further, unlike prior art, we account for the services' finite lifetime and time-varying traffic load. We identify two major sources of inefficiency in service management: (i) the overspending of computing resources due to traffic of multiple services with different latency requirements being processed by the same virtual machine (VM), and (ii) the poor packing of traffic processing requests in the same VM, leading to opening more VMs than necessary. To cope with the above issues, we devise an algorithm, called REShare, that can dynamically adapt to the system's operational conditions and find an optimal trade-off between the aforementioned opposing requirements. We prove that REShare has low algorithmic complexity and is asymptotically 2-competitive under a non-decreasing load. Numerical results, leveraging real-world scenarios, show that our solution outperforms alternatives, swiftly adapting to time-varying conditions and reducing service cost by over 25%.
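Inefficiency (ii), the poor packing of processing requests, is essentially an online bin-packing problem. A first-fit sketch (a toy baseline, not the REShare algorithm) shows how packing requests into already-open VMs reduces the number of VMs opened:

```python
def first_fit_pack(requests, vm_capacity):
    """Pack traffic-processing requests (in CPU units) into VMs with the
    classic first-fit heuristic. A toy baseline for the packing problem the
    abstract describes, not the REShare algorithm."""
    vms = []  # residual capacity of each open VM
    for r in requests:
        for i, free in enumerate(vms):
            if free >= r:          # fits into an already-open VM
                vms[i] = free - r
                break
        else:
            vms.append(vm_capacity - r)  # otherwise open a new VM
    return len(vms)

# One-VM-per-request would open 5 VMs; first-fit packing needs only 2.
print(first_fit_pack([5, 5, 4, 3, 3], vm_capacity=10))
```

REShare's extra difficulty, absent from this sketch, is that mixing latency classes in one VM causes the overspending of inefficiency (i), so tight packing and latency separation must be traded off dynamically.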
Recent years have witnessed the rapid development of service-oriented computing technologies. The boom of Web services increases software developers' selection burden in developing new service-based systems such as mashups. Timely recommendation of appropriate component services for developers building new mashups has become a fundamental problem in service-oriented software engineering. Existing service recommendation approaches are mainly designed for mashup development in the single-round scenario. It is hard for them to effectively update recommendation results according to developers' requirements and behaviours (e.g., instant service selection). To address this issue, the authors propose a deep-learning-based service bundle recommendation framework, DLISR, which aims to capture the interactions among the target mashup to build, the selected (component) services, and the next service to recommend. Moreover, an attention mechanism is employed in DLISR to weigh the selected services when recommending a candidate service. The authors also design two separate models for learning interactions from the perspectives of content and invocation history, respectively, and combine them into a hybrid model called HISR. Experiments on a real-world dataset indicate that HISR can outperform several state-of-the-art service recommendation methods in iteratively developing new mashups.
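The attention mechanism described above can be sketched in a simplified form, with plain dot-product similarity standing in for the paper's learned networks: each already-selected service receives a softmax-normalized weight with respect to the candidate, and the candidate's score is the attention-weighted sum of similarities.

```python
import math

def attention_score(candidate, selected):
    """Score a candidate service against the already-selected services.
    A simplified stand-in for DLISR's attention: dot-product similarity,
    softmax weights over selected services, weighted sum as the score."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    sims = [dot(candidate, s) for s in selected]   # candidate vs each pick
    exps = [math.exp(x) for x in sims]
    attn = [e / sum(exps) for e in exps]           # softmax attention weights
    return sum(w * s for w, s in zip(attn, sims))  # attention-weighted score
```

In the real model the similarities come from learned content and invocation-history representations; only the softmax weighting of selected services is sketched here.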
The IoT paradigm exploits the cloud computing platform to extend its scope and service provisioning capabilities. However, because the underlying IoT devices are located far away from the cloud, some services cannot tolerate the resulting latency. To overcome the latency consequences that might affect the functionality of IoT services and applications, fog computing has been proposed. The fog computing paradigm utilizes local computing resources located at the network edge, instead of those residing in the cloud, to process data collected from sensors linked to physical devices in an IoT platform. The major benefits of this paradigm include low latency, real-time decision making, and optimal utilization of the available bandwidth. In this paper, we offer a review of the fog computing paradigm and, in particular, its impact on the IoT application development process. We also propose an architecture for fog-computing-based IoT services and applications.
Giuseppe Inturri, Nadia Giuffrida, Matteo Ignaccolo, et al.
Demand Responsive Shared Transport (DRST) services take advantage of Information and Communication Technologies (ICT) to provide on-demand transport services, booking a ride on a shared vehicle in real time. In this paper, an agent-based model (ABM) is presented to test the feasibility of different service configurations in a real context. First results show the impact of the route choice strategy on system performance.
Xiangbo Li, Mohsen Amini Salehi, Yamini Joshi, et al.
High-quality video streaming, either in the form of Video-On-Demand (VOD) or live streaming, usually requires converting (i.e., transcoding) video streams to match the characteristics of viewers' devices (e.g., in terms of spatial resolution or supported formats). Considering the computational cost of the transcoding operation and the surge in video streaming demands, Streaming Service Providers (SSPs) are becoming reliant on cloud services to guarantee the Quality of Service (QoS) of streaming for their viewers. Cloud providers offer heterogeneous computational services in the form of different types of Virtual Machines (VMs) with diverse prices. Effective utilization of cloud services for video transcoding requires a detailed performance analysis of different video transcoding operations on the heterogeneous cloud VMs. In this research, for the first time, we provide a thorough analysis of the performance of video stream transcoding on heterogeneous cloud VMs. Such an analysis is crucial for efficient prediction of transcoding time on heterogeneous VMs and for the functionality of any scheduling method tailored for video transcoding. Based on the findings of this analysis, and considering the cost difference of heterogeneous cloud VMs, we also provide a model to quantify the degree of suitability of each cloud VM type for various transcoding tasks. The provided model can supply resource (VM) provisioning methods with accurate performance and cost trade-offs to efficiently utilize cloud services for video streaming.
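A suitability model of this kind can be sketched as a weighted trade-off between predicted transcoding time and monetary cost. The VM types, times, and prices below are invented placeholders, and the scoring rule is a toy stand-in, not the paper's model, which is derived from its measured performance data.

```python
# Toy suitability ranking for heterogeneous VM types. VM names, predicted
# transcoding times, and prices are invented, not measurements.

vm_types = {
    # name: (predicted transcoding time in seconds, price in $/hour)
    "general": (120.0, 0.10),
    "cpu_optimized": (60.0, 0.25),
    "gpu": (20.0, 0.90),
}

def best_vm(alpha=0.5):
    """Pick the VM type minimizing a weighted sum of normalized predicted
    time and normalized monetary cost. alpha=1 favors speed (QoS),
    alpha=0 favors cheapness."""
    times = {v: t for v, (t, _) in vm_types.items()}
    costs = {v: t / 3600.0 * p for v, (t, p) in vm_types.items()}
    t_max, c_max = max(times.values()), max(costs.values())
    score = {v: alpha * times[v] / t_max + (1 - alpha) * costs[v] / c_max
             for v in vm_types}
    return min(score, key=score.get)
```

A provisioning method can sweep `alpha` per task: deadline-critical live streams push it toward 1, while batch VOD transcoding can tolerate values near 0.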
Digital Financial Services (DFS) continue to expand and replace the delivery of traditional banking services to customers through innovative technologies that meet growing, complex needs and globalization challenges. These diversified digital products help organizations (service providers) improve their firm performance and remain competitive in the market. They also assist in increasing market share, growing profitability, and improving financial position. There is a growing literature on Digital Financial Services and firm performance. At this point in the field's development, this paper systematically reviews the literature of the last decade investigating the impact of DFS on firm performance, and analyzes and identifies the research gaps. We identify 39 works that have appeared in a wide range of peer-reviewed scientific journals. We classify the methodologies and approaches that researchers have used to predict the effect of such services on financial growth and profitability. We observe that, despite rapid technological advancement in DFS during the last ten years, Digital Financial Services as a factor affecting firm performance has not received reasonable attention in the academic literature. One of the reasons is that almost all authors limit their research to the banking sector while ignoring others, particularly mobile network operators (providing branchless banking) and new non-banking entrants. We also notice that newer researchers often ignore past research and investigate the same issues. This study also makes several recommendations and suggests directions for future research in this still-emerging field.
Manabu Tsukada, Keiko Ogawa, Masahiro Ikeda, et al.
Internet-native audio-visual services are witnessing rapid development. Among these services, object-based audio-visual services are gaining importance. In 2014, we established the Software Defined Media (SDM) consortium to target new research areas and markets involving object-based digital media and Internet-by-design audio-visual environments. In this paper, we introduce the SDM architecture that virtualizes networked audio-visual services along with the development of smart buildings and smart cities using Internet of Things (IoT) devices and smart building facilities. Moreover, we design the SDM architecture as a layered architecture to promote the development of innovative applications on the basis of rapid advancements in software-defined networking (SDN). Then, we implement a prototype system based on the architecture, present the system at an exhibition, and provide it as an SDM API to application developers at hackathons. Various types of applications are developed using the API at these events. An evaluation of SDM API access shows that the prototype SDM platform effectively provides 3D audio reproducibility and interactivity for SDM applications.
Ingo Friese, Rebecca Copeland, Sebastian Göndör, et al.
The upcoming WebRTC-based browser-to-browser communication services present new challenges for user discovery in peer-to-peer mode, even more so if we wish to enable different web communication services to interact. This paper presents the Identity Mapping and Discovery Service (IMaDS), a global, scalable, service-independent discovery service that enables users of web-based peer-to-peer applications to discover other users with whom to communicate. It also provides reachability and presence information. For that, user identities need to be mapped to any compatible service identity as well as to a globally unique, service-independent identity. This mapping and discovery process is suitable for multiple identifier formats and personal identifying properties, while supporting user-determined privacy options. IMaDS operates across different service domains dynamically, using context information. Users and devices have profiles containing context and other specific information that can be discovered by a search engine. The search results reveal the user's allocated globally unique identifier (GUID), which is then resolved to a list of the user's service-domain identities using a DHT-based directory service. Service-specific directories allow tracking of active endpoints where users are currently logged on and can be contacted.
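The two-step resolution, GUID first, then per-service identities and presence, can be sketched with plain dictionaries standing in for the DHT-based directory and the service-specific endpoint directories. All identifiers, GUIDs, and endpoint URLs below are invented examples.

```python
# Sketch of IMaDS-style resolution. Dicts stand in for the DHT-based
# directory and the service-specific presence directories; every key and
# value here is an invented example.

directory = {  # GUID -> the user's service-domain identities
    "guid-7f3a": ["alice@webrtc.example.org", "sip:alice@voip.example.net"],
}

presence = {  # service identity -> currently active endpoint, if logged on
    "alice@webrtc.example.org": "wss://ep1.example.org/session/42",
}

def resolve(guid):
    """Resolve a GUID to (identity, endpoint) pairs; endpoint is None when
    the user is not currently logged on at that service."""
    return [(ident, presence.get(ident)) for ident in directory.get(guid, [])]
```

In the real system the first lookup is a DHT query and the second hits the service-specific directory of each domain; the sketch only shows the shape of the indirection.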
Modularity and decontextualisation are core principles of a service-oriented architecture. However, the principles are often lost when it comes to an implementation of services, as a result of a rigidly defined service interface. The interface, which defines a data format, is typically specific to a particular context, and changing it entails significant redevelopment costs. This paper focuses on a two-fold problem. On the one hand, the interface description language must be flexible enough to maintain service compatibility in a variety of different contexts without modification of the service itself. On the other hand, the composition of interfaces in a distributed environment must be provably consistent. The existing approaches for checking compatibility of service choreographies are either inflexible (WS-CDL and WSCI) or require a behaviour specification associated with each service, which is often impossible to provide in practice. We present a novel approach for automatic interface configuration in distributed stream-connected components operating as closed-source services (i.e., the behavioural protocol is unknown). We introduce a Message Definition Language (MDL), which can extend existing interface description languages, such as WSDL, with support for subtyping, inheritance and polymorphism. The MDL supports configuration variables that link input and output interfaces of a service and propagate requirements over an application graph. We present an algorithm that solves the interface reconciliation problem using constraint satisfaction that relies on Boolean satisfiability as a subproblem.
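The subtyping that the MDL adds can be illustrated with a toy structural check. The record-style interfaces and the one-element coercion lattice below are invented; the MDL itself also handles inheritance, polymorphism, and configuration variables. The idea: an output interface is compatible with an input interface if it supplies every required field with a compatible type.

```python
def is_subtype(provided, required):
    """Toy structural check: a provided (output) interface is compatible
    with a required (input) interface if it supplies every required field
    with a compatible type. Width subtyping on records; the only coercion
    in this toy lattice is int <: float."""
    for field, rtype in required.items():
        ptype = provided.get(field)
        if ptype is None:
            return False                      # required field missing
        if ptype != rtype and (ptype, rtype) != ("int", "float"):
            return False                      # incompatible field type
    return True
```

Extra provided fields are simply ignored, which is what lets one service keep working in contexts that consume only part of its output.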
Wireless network virtualization has been well recognized as a way to improve the flexibility of wireless networks by decoupling the functionality of the system and implementing infrastructure and spectrum as services. Recent studies have shown that caching provides better performance in serving content requests from mobile users. In this paper, we propose that \emph{caching can be applied as a service} in mobile networks, i.e., different service providers (SPs) cache their contents in the storage of wireless facilities owned by mobile network operators (MNOs). Specifically, we focus on the scenario of \emph{small-cell networks}, where cache-enabled small-cell base stations (SBSs) are the facilities that cache contents. To deal with the competition for storage among multiple SPs, we design a mechanism based on multi-object auctions, where the time-dependent feature of system parameters and the frequency of content replacement are both taken into account. Simulation results show that our solution leads to a satisfactory outcome.
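The storage competition can be illustrated with a toy multi-object auction, uniform-price and single-round; the paper's mechanism additionally accounts for time-dependent system parameters and content-replacement frequency.

```python
def allocate_cache_slots(bids, slots):
    """Toy uniform-price multi-object auction for SBS cache slots: the
    `slots` highest-bidding SPs win, and every winner pays the highest
    losing bid (0 if there is no losing bid). A sketch of the competition
    the abstract describes, not the paper's mechanism."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winners = ranked[:slots]
    clearing = ranked[slots][1] if len(ranked) > slots else 0.0
    return {sp: clearing for sp, _ in winners}

# Two cache slots at one SBS, three competing service providers.
print(allocate_cache_slots({"sp_a": 5.0, "sp_b": 3.0, "sp_c": 4.0}, slots=2))
```

Charging the highest losing bid rather than each winner's own bid is the standard way to discourage SPs from shading their valuations of the cache storage.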
Liang Wang, Mario Almeida, Jeremy Blackburn, et al.
There is an obvious trend that more and more data and computation are migrating into networks nowadays. Combining mature virtualization technologies with service-centric networking, we are entering an era where countless services reside in an ISP network to provide low-latency access. Such services are often computation intensive and are dynamically created and destroyed on demand everywhere in the network to perform various tasks. Consequently, these ephemeral in-network services introduce a new type of congestion, which we refer to as "computation congestion". The service load needs to be effectively distributed across different nodes in order to maintain the functionality and responsiveness of the network, which calls for a new design rather than reusing the centralised scheduler designed for cloud-based services. In this paper, we study both passive and proactive control strategies; building on the proactive control, we further propose a fully distributed solution that is low in complexity, adaptive, and responsive to network dynamics.
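A minimal sketch of the proactive idea (hypothetical, and much simpler than the strategies studied in the paper): each node spreads incoming service load over itself and its neighbours in proportion to spare capacity, using purely local information and no central scheduler.

```python
def proactive_split(load, spare_capacity):
    """Split incoming service load across a node and its neighbours in
    proportion to spare capacity. Purely local information, no central
    scheduler; a toy stand-in for the paper's proactive strategy."""
    total = sum(spare_capacity.values())
    if total <= 0:
        raise ValueError("no spare capacity anywhere")
    return {node: load * cap / total for node, cap in spare_capacity.items()}
```

Because the split tracks spare capacity, a node whose neighbours fill up automatically keeps more load local, which is the kind of adaptivity a centralised cloud scheduler would otherwise have to compute globally.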
Christoph Erath, Günther Of, Francisco-Javier Sayas
As a model problem, we consider the prototype for flow and transport of a concentration in porous media in an interior domain and couple it with a diffusion process in the corresponding unbounded exterior domain. To solve the problem, we develop a new non-symmetric coupling between the vertex-centered finite volume method and the boundary element method. This discretization naturally provides conservation of local fluxes and, with an upwind option, also stability in the convection-dominated case. We aim to provide a first rigorous analysis of the system for different model parameters: stability, convergence, and a priori estimates. This includes the use of an implicit stabilization, known from the coupling of finite element and boundary element methods. Some numerical experiments conclude the work and confirm the theoretical results.
The role of the services described in this paper is to support decisions in the Critical Infrastructure Protection (CIP) domain. These services are perceived as the most fundamental functionalities that will serve as a basis for the planned European simulation centre for modelling the behaviour of Critical Infrastructures (CI). The proposed services are: CI-related data access and gathering, threat forecasting and visualisation, consequence analysis, crowd management, as well as resources and capability management. In general, the services proposed in this paper will help reduce the problem of decision makers being overwhelmed by too much information. In a crisis, their decisions are made on the basis of a large amount of data related to the current situation, such as the status of CI, the localisation of capabilities, weather and threat forecasts, etc. The design of the services has been established with the help of future end-users. The work presented in this paper is the result of preliminary activities performed in the FP7 project CIPRNet.
Often the hardest job is to get business representatives to look at security as something that makes managing their risks and achieving their objectives easier, with security compliance as just part of that journey. This paper addresses that by making planning for security services a 'business tool'.
This paper describes the Automated Reasoning for Mizar (MizAR) service, which integrates several automated reasoning, artificial intelligence, and presentation tools with Mizar and its authoring environment. The service provides ATP assistance to Mizar authors in finding and explaining proofs, and offers generation of Mizar problems as challenges to ATP systems. The service is based on a sound translation from the Mizar language to that of first-order ATP systems, and relies on the recent progress in application of ATP systems in large theories containing tens of thousands of available facts. We present the main features of MizAR services, followed by an account of initial experiments in finding proofs with the ATP assistance. Our initial experience indicates that the tool offers substantial help in exploring the Mizar library and in preparing new Mizar articles.