The emerging pinching antenna (PA) technology offers high flexibility to reconfigure wireless channels and combat line-of-sight blockage, and thus holds transformative potential for indoor immersive applications in 6G. This paper investigates pinching-antenna systems (PASS) for indoor immersive communications. Our contributions are threefold: (1) we construct a 3D model that characterizes the distribution of users, waveguides, and PAs in the PASS; (2) we develop a general theoretical model of the downlink performance of PASS that captures PA-user relationships and the impact of system parameters; and (3) we conduct comprehensive numerical evaluations of the theoretical model and provide implementation guidelines for PASS deployments.
Giulia Giovanelli, Mauro Borghi, Alessandro Lodi
et al.
The realization of novel electronic devices based on 2D materials, e.g., field-effect transistors, has recently stimulated renewed interest in ultrathin fluoride epitaxial films. Thanks to their chemical and dielectric properties, ionic fluorides have the potential to be used as insulators in many applications that require high processing control down to the nanoscale. Here we review some of the principal results achieved in the past decades on the controlled growth of epitaxial fluorides on different types of materials relevant for electronics. The aim is to provide a concise summary of the growth modes, crystallinity, film morphologies, and chemical interactions of different types of fluorides on different types of substrates, highlighting possible applications and future perspectives.
The rapid development of Large Language Models (LLMs) and Generative Pre-Trained Transformers (GPTs) in the field of Generative Artificial Intelligence (AI) can significantly impact task automation in the modern economy. We anticipate that the field of Probabilistic Risk Assessment (PRA) will inevitably be affected by this technology. Thus, the main goal of this paper is to engage the risk assessment community in a discussion of the benefits and drawbacks of this technology for PRA. We present a preliminary analysis of possible applications of LLMs in the PRA modeling context, drawing on ongoing experience in the software engineering field. We explore potential application scenarios and the necessary conditions for controlled LLM usage in PRA modeling (whether static or dynamic). Additionally, we consider the potential impact of this technology on PRA modeling tools.
Flora Kluge, Tilman Hüneke, Christophe Lerot
et al.
Abstract. We report on airborne Limb and Nadir measurements of vertical profiles and total vertical column densities (VCDs) of glyoxal (C2H2O2) in the troposphere, which were performed aboard the German research aircraft HALO (High Altitude and Long Range) in different regions and seasons around the globe between 2014 and 2019. The airborne Nadir and integrated Limb profiles agree excellently with each other. Our airborne observations are further compared to collocated glyoxal measurements of the TROPOspheric Monitoring Instrument (TROPOMI), with good agreement between both data sets for glyoxal observations in (1) pristine terrestrial, (2) pristine marine, (3) mixed polluted, and (4) biomass burning affected air masses with high glyoxal concentrations. Exceptions to the overall good agreement are observations of (1) faint and aged biomass burning plumes over the oceans, (2) low-lying biomass burning or anthropogenic plumes in the terrestrial or marine boundary layer, and (3) plumes detected under heavy aerosol load, all of which contain elevated glyoxal that is mostly not captured by TROPOMI. These differences between airborne and satellite-detected glyoxal are most likely caused by the overall small contribution of plumes of limited extent to the total atmospheric absorption by glyoxal, and by the difficulty of remotely detecting weak absorbers located close to weakly reflective surfaces (e.g. the ocean in the visible wavelength range) or within dense aerosol layers. Observations of glyoxal in aged biomass burning plumes (e.g. over the Tropical Atlantic off the coast of West Africa in summer 2018, off the coast of Brazil by the end of the dry season 2019, and over the East China Sea in spring 2018) could be traced back to the related wildfires, including a plume crossing the Drake Passage that originated from the Australian bushfires in late 2019. Our observations of glyoxal in these biomass burning plumes, aged over several days, thus confirm recent findings of enhanced glyoxal and presumably secondary organic aerosol (SOA) formation in aged wildfire plumes from yet-to-be-identified longer-lived organic precursor molecules (e.g. aromatics, acetylene, or aliphatic compounds) co-emitted in the fires. Further, elevated glyoxal (median 44 ppt), as compared to other marine regions (median 10–19 ppt), is observed in the boundary layer over the tropical oceans, in good agreement with previous reports. The airborne data sets are further compared to glyoxal simulations performed with the global atmosphere-chemistry model EMAC (ECHAM/MESSy Atmospheric Chemistry). When using an EMAC setup that resembles recent EMAC studies focusing on complex chemistry, reasonable agreement is found for pristine air masses (e.g. the unperturbed free and upper troposphere), but notable differences exist for regions with high emissions of glyoxal and glyoxal-producing volatile organic compounds (VOCs) from the biosphere (e.g. the Amazon), mixed emissions from anthropogenic activities (e.g. continental Europe, the Mediterranean and East China Sea), and potentially from the sea (e.g. the tropical oceans). The model also tends to largely under-predict glyoxal in city plumes and aged biomass burning plumes. The potential causes of these differences are likely multifaceted, but they all point to missing glyoxal sources from the degradation of the cocktail of (potentially longer-chained) organic compounds emitted by anthropogenic activities and biomass burning, and from the organic micro-layer of the sea.
Michael Alan Chang, Aurojit Panda, Hantao Wang
et al.
Most large web-scale applications are now built by composing collections (from a few up to 100s or 1000s) of microservices. Operators need to decide how many resources are allocated to each microservice, and these allocations can have a large impact on application performance. Manually determining allocations that are both cost-efficient and meet performance requirements is challenging, even for experienced operators. In this paper we present AutoTune, an end-to-end tool that automatically minimizes resource utilization while maintaining good application performance.
Computing accurate deterministic performance bounds is a pressing need for communication technologies with stringent latency and reliability requirements. Beyond new scheduling protocols such as TSN, the FIFO policy remains at work within each class of communication. In this paper, we focus on computing deterministic performance bounds in FIFO networks within the network calculus framework. We propose a new algorithm based on linear programming that offers a trade-off between accuracy and tractability. The algorithm is first presented for tree networks. We then generalize our approach and present a linear program for computing performance bounds for arbitrary topologies, including those with cyclic dependencies. Finally, we provide numerical results, on both toy examples and real topologies, to assess the interest of our approach.
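For reference, the single-node bound that such analyses build on is classical network calculus: a FIFO server offering a rate-latency service curve $\beta(t) = R\,[t-T]^+$ to a flow constrained by a token-bucket arrival curve $\alpha(t) = b + rt$, with $r \le R$, guarantees the delay bound $D \le T + b/R$ (the horizontal deviation between $\alpha$ and $\beta$); the paper's linear programs aim to compute tighter bounds over entire FIFO networks rather than node by node.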
In order to automate actions, such as defences against network attacks, one needs to quantify their efficiency, which can subsequently be used in post-evaluation, learning, etc. To quantify defence efficiency as a function of the impact of the defence and its total cost, we present several natural requirements for such a definition of efficiency and provide a natural definition that complies with these requirements. Next, we precisely characterize our definition of efficiency using an axiomatic approach; namely, we strengthen the original requirements and prove that the given definition is the unique definition satisfying them. Finally, we generalize the definition to the case of any number of input variables in two natural ways, and compare these generalizations.
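For intuition only (a natural first candidate, not necessarily the definition the paper characterizes axiomatically): with impact $I \ge 0$ and total cost $C > 0$, the simplest efficiency notion is the ratio $E(I, C) = I/C$, i.e., impact gained per unit of cost spent; the axiomatic treatment pins down which such candidates actually satisfy all the stated requirements.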
We study the performance of non-adaptive scheduling policies in computing systems with multiple servers. Compute jobs are mostly regular, with modest service requirements. However, there are sporadic data-intensive jobs whose expected service time is much higher than that of the regular jobs. For this model, we are interested in the effect of scheduling policies on the average time a job spends in the system. To this end, we introduce two performance indicators in a simplified, arrival-only system. We believe that these performance indicators are good predictors of the relative performance of the policies in the queuing system, a belief supported by simulation results.
Major chip manufacturers have all introduced multithreaded processors, which are used for running a variety of workloads. Efficient resource utilization is an important design aspect in such processors; in particular, it is important to take advantage of available memory-level parallelism (MLP). In this paper, we propose an MLP-aware operating system (OS) scheduling algorithm for multithreaded multi-core processors. By observing the MLP available in each thread and balancing it against the MLP resources available in the system, the OS derives a new schedule of threads for the next quantum that can potentially improve overall performance. We provide a qualitative comparison of our solution with other hardware and software techniques. This work can be extended with a quantitative evaluation and by further refining the scheduling optimization.
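A minimal sketch of the balancing idea described above (illustrative only; the thread names and the pairing heuristic are assumptions, not the paper's exact algorithm): threads measured to have high MLP are co-scheduled with low-MLP threads so that the outstanding-miss resources on each core are neither oversubscribed nor idle.

```python
# Minimal sketch of MLP-aware co-scheduling (illustrative, not the paper's
# exact algorithm): pair high-MLP threads with low-MLP threads on each core
# so that outstanding-miss resources (e.g., MSHRs) are balanced per core.

def mlp_aware_schedule(threads, num_cores):
    """threads: list of (thread_id, measured_mlp) pairs from the last quantum.
    Returns a mapping core -> list of thread_ids for the next quantum."""
    # Sort by observed MLP (average outstanding long-latency misses).
    ranked = sorted(threads, key=lambda t: t[1])
    schedule = {core: [] for core in range(num_cores)}
    lo, hi = 0, len(ranked) - 1
    core = 0
    # Greedily pair the lowest-MLP thread with the highest-MLP thread.
    while lo <= hi:
        schedule[core % num_cores].append(ranked[lo][0])
        if lo != hi:
            schedule[core % num_cores].append(ranked[hi][0])
        lo, hi, core = lo + 1, hi - 1, core + 1
    return schedule

print(mlp_aware_schedule([("A", 0.5), ("B", 3.9), ("C", 1.2), ("D", 2.8)], 2))
# {0: ['A', 'B'], 1: ['C', 'D']}
```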
Louis-Claude Canon, Mohamad El Sayah, Pierre-Cyrille Héam
In high-performance computing, the scheduling of tasks and their allocation to machines is critical, especially when execution costs are heterogeneous. Simulations can be performed with a large variety of environments and application models. However, this technique is sensitive to bias when it relies on random instances with an uncontrolled distribution. We use methods from the literature to provide formal guarantees on the distribution of the instances. In particular, it is desirable to ensure a uniform distribution among the instances with a given task and machine heterogeneity. In this article, we propose a method that generates instances (cost matrices) with a known distribution, where tasks are scheduled on machines with heterogeneous execution costs.
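To make the object concrete, here is one simple way to generate a cost matrix with separately tunable task and machine heterogeneity (an illustrative construction under assumed gamma-distributed factors, not the controlled-distribution method the article proposes):

```python
import numpy as np

# Illustrative cost-matrix generator (not the authors' exact method): each
# entry combines a task weight, a machine speed factor, and multiplicative
# noise, so task and machine heterogeneity can be tuned independently.

def random_cost_matrix(n_tasks, n_machines, task_cv, machine_cv, noise_cv, rng):
    """CV = coefficient of variation; gamma(shape=1/cv^2, scale=cv^2) has mean 1."""
    def gamma(cv, size):
        return rng.gamma(shape=1.0 / cv**2, scale=cv**2, size=size)
    w = gamma(task_cv, n_tasks)          # task weights
    s = gamma(machine_cv, n_machines)    # machine cost factors
    noise = gamma(noise_cv, (n_tasks, n_machines))
    return np.outer(w, s) * noise        # cost of task i on machine j

rng = np.random.default_rng(seed=0)
print(random_cost_matrix(4, 3, task_cv=0.5, machine_cv=0.3, noise_cv=0.1, rng=rng))
```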
The Softmax function is ubiquitous in machine learning, and multiple previous works have suggested faster alternatives for it. In this paper, we propose a way to compute the classical Softmax with fewer memory accesses and hypothesize that this reduction in memory accesses should improve Softmax performance on actual hardware. The benchmarks confirm this hypothesis: Softmax accelerates by up to 1.3x, and Softmax combined and fused with TopK by up to 5x.
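A sketch of the kind of memory-access reduction at stake, assuming it is the single-pass ("online") normalizer idea: the classical Softmax reads its input three times (one pass for the max, one for the sum, one to normalize), whereas fusing the max and sum passes cuts this to two reads.

```python
import math

# Single-pass ("online") softmax normalizer sketch: the running maximum m and
# the running sum d of exp(x_i - m) are maintained together, rescaling d
# whenever the maximum changes, so the max pass and the sum pass are fused.

def online_softmax(x):
    m = float("-inf")  # running maximum
    d = 0.0            # running sum of exp(x_i - m)
    for v in x:        # pass 1: fused max + rescaled partial sum
        m_new = max(m, v)
        d = d * math.exp(m - m_new) + math.exp(v - m_new)
        m = m_new
    return [math.exp(v - m) / d for v in x]  # pass 2: normalize

print(online_softmax([1.0, 2.0, 3.0]))  # ~[0.0900, 0.2447, 0.6652]
```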
Miguel Cárdenas-Montes, Iván Méndez-Jiménez, Juan José Rodríguez-Vázquez
et al.
In this report, some cosmological correlation functions are used to evaluate the performance difference between the C2075 and P100 GPU cards. The correlation functions used in this work have previously been widely studied and exploited on earlier GPU architectures. The analysis of the performance indicates that a speedup in the range of 13 to 15 is achieved for the P100 card without any additional optimization process.
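The report does not name the specific functions here; a common example of such a GPU-friendly cosmological statistic is the two-point angular correlation function with the Landy-Szalay estimator, $\hat{w}(\theta) = \big(DD(\theta) - 2\,DR(\theta) + RR(\theta)\big)/RR(\theta)$, where $DD$, $DR$, and $RR$ are normalized pair counts within the data and random catalogues; this is a brute-force pair-counting computation that maps naturally onto GPU threads.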
We consider a multi-class G/G/1 queue with a finite shared buffer. Task admission and server scheduling are controlled with the aim of minimizing a cost consisting of holding and rejection components. We construct a policy that is asymptotically optimal in the heavy traffic limit. The policy stems from the solution to the Harrison-Taksar (HT) free boundary problem and is expressed by a single free boundary point. We show that the HT problem solution, translated into the queue-length processes, follows a specific {\it triangular} form. This form implies a queue-length control policy that differs from the known $c\mu$ priority rule and has a novel structure. We demonstrate that the probabilistic methods we exploit can be successfully applied to scheduling and admission problems in cloud computing.
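For reference, the classical $c\mu$ rule that the new policy is contrasted with gives priority, at every decision epoch, to the nonempty class $i$ with the largest product $c_i \mu_i$ of holding cost rate $c_i$ and service rate $\mu_i$.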
This paper treats power-aware throughput maximization in a multi-user file downloading system. Each user can receive a new file only after its previous file is finished. The file state processes for each user act as coupled Markov chains that form a generalized restless bandit system. First, an optimal algorithm is derived for the case of one user. The algorithm maximizes throughput subject to an average power constraint. Next, the one-user algorithm is extended to a low complexity heuristic for the multi-user problem. The heuristic uses a simple online index policy and its effectiveness is shown via simulation. For simple 3-user cases where the optimal solution can be computed offline, the heuristic is shown to be near-optimal for a wide range of parameters.
This paper explores the performance of three packet scheduling algorithms, namely the Proportional Fair (PF), Exponential/Proportional Fair (EXP/PF), and Maximum Largest Weighted Delay First (MLWDF) algorithms, from a real-time traffic perspective. Simulation results show that in the downlink of the 3GPP LTE system, MLWDF outperforms the PF and EXP/PF algorithms in terms of packet throughput, packet-loss ratio, packet latency, fairness index, and total cell spectral efficiency.
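For context, the per-user metrics of these three schedulers, as commonly stated in the LTE literature, can be sketched as follows (notation and numbers are illustrative, not taken from the paper's simulator):

```python
import math

# Per-user scheduling metrics as commonly stated in the LTE literature.
# For user i: r = instantaneous achievable rate, R = exponentially averaged
# past throughput, W = head-of-line packet delay, and a = -log(delta)/tau,
# with delay budget tau and target delay-violation probability delta.

def pf_metric(r, R, **_):
    return r / R

def mlwdf_metric(r, R, W, a):
    return a * W * (r / R)

def exp_pf_metric(r, R, W, a, aW_mean):
    return math.exp((a * W - aW_mean) / (1.0 + math.sqrt(aW_mean))) * (r / R)

# Each resource block goes to the user maximizing the chosen metric:
users = {"u1": dict(r=2.0, R=1.0, W=0.02, a=50.0),
         "u2": dict(r=1.0, R=0.5, W=0.08, a=50.0)}
aW_mean = sum(u["a"] * u["W"] for u in users.values()) / len(users)
print(max(users, key=lambda k: mlwdf_metric(**users[k])))                    # 'u2'
print(max(users, key=lambda k: exp_pf_metric(**users[k], aW_mean=aW_mean)))  # 'u2'
```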
This paper presents a model of NAND flash SSD utilization and write amplification when the ATA/ATAPI SSD Trim command is incorporated into object-based storage, under a variety of user workloads, including a uniform random workload with objects of fixed size and a uniform random workload with objects of varying sizes. We first summarize the existing models for write amplification in SSDs for workloads with and without the Trim command, and then propose an alteration of the models that utilizes a framework of object-based storage. The utilization of objects and pages in the SSD is derived, and the analytic results are compared to simulation. Finally, the effect of objects on write amplification and its computation is discussed, along with a potential application to optimizing SSD usage through object-storage metadata servers that allocate object classes of distinct object sizes.
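As background (the standard definition, not the paper's specific model): write amplification is the ratio $\mathrm{WA} = W_{\mathrm{flash}} / W_{\mathrm{host}} \ge 1$, where $W_{\mathrm{flash}}$ counts all data physically written to flash (host writes plus garbage-collection copies) and $W_{\mathrm{host}}$ counts data written by the host; Trim reduces WA by letting the controller skip copying invalidated pages during garbage collection.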
We develop a generalized loss network framework for capacity planning of a perinatal network in the UK. Decomposing the network by hospitals, each unit is analyzed with a GI/G/c/0 overflow loss network model. A two-moment approximation is performed to obtain the steady-state solution of the GI/G/c/0 loss systems, and expressions for the rejection probability and overflow probability are derived. Using the model framework, the number of required cots can be estimated from the rejection probability at each level of care of the neonatal units in a network. The generalization ensures that the model can be applied to any perinatal network with renewal arrival and discharge processes.
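A sketch of the kind of two-moment reduction involved (a Hayward-style approximation under an assumed peakedness $z$; the paper derives its own expressions): blocking in a GI/G/c/0 system is approximated by the classical Erlang B formula with both the server count and the offered load scaled by $z$.

```python
# Hayward-style two-moment approximation for a GI/G/c/0 loss system
# (illustrative): blocking ~ ErlangB(c/z, a/z), where z is the peakedness
# (variance-to-mean ratio of the offered traffic).

def erlang_b(c, a):
    """Erlang B blocking for integer servers c and offered load a, via the
    stable recursion B(0) = 1, B(k) = a*B(k-1) / (k + a*B(k-1))."""
    b = 1.0
    for k in range(1, c + 1):
        b = a * b / (k + a * b)
    return b

def hayward_blocking(c, a, z):
    """Two-moment approximation: peakedness z scales both servers and load.
    c/z is rounded to the nearest integer here for simplicity; a continuous
    Erlang B (via the incomplete gamma function) would avoid the rounding."""
    return erlang_b(max(1, round(c / z)), a / z)

print(erlang_b(10, 8.0))               # ~0.1217 (M/M/10/0 at 8 Erlangs)
print(hayward_blocking(10, 8.0, 1.5))  # bursty traffic blocks more per Erlang
```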
Due to the large performance gap between computer memory and the processor, virtual memory management plays a vital role in system performance. A cache is a fast memory used to compensate for the speed difference between memory and processor. This paper presents an adaptive replacement policy that improves on traditional policies, with low overhead and better performance, and is easy to implement. Simulations show that our algorithm performs better than Least-Recently-Used (LRU), First-In-First-Out (FIFO), and Clock with Adaptive Replacement (CAR).
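As a baseline for comparison (this sketches the classical LRU policy the paper improves on, not the proposed adaptive algorithm):

```python
from collections import OrderedDict

# Classical LRU replacement: on a hit, move the page to the most-recently-used
# end; on a miss with a full cache, evict the least-recently-used page.

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # key -> page, ordered oldest -> newest

    def access(self, key):
        """Returns True on a cache hit, False on a miss."""
        if key in self.pages:
            self.pages.move_to_end(key)     # refresh recency
            return True
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)  # evict least recently used
        self.pages[key] = True
        return False

cache = LRUCache(2)
hits = [cache.access(k) for k in ["a", "b", "a", "c", "b"]]
print(hits)  # [False, False, True, False, False]: "b" was evicted by "c"
```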
Ananth Narayan S, Somsubhra Sharangi, Alexandra Fedorova
The Intel Core i7 processor, code-named Nehalem, provides a feature named Turbo Boost which opportunistically varies the frequencies of the processor's cores. The frequency of a core is determined by the core temperature, the number of active cores, the estimated power consumption, the estimated current consumption, and operating system frequency scaling requests. For a chip multiprocessor (CMP) with a small number of physical cores and a small set of performance states, deciding the Turbo Boost frequency to use on a given core might not be difficult. However, the complexity of this decision-making process is unknown in the context of a large number of cores, scaling to the hundreds, as predicted by researchers in the field.
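A toy model of the decision just described (all numbers are hypothetical placeholders, not Intel's actual tables): the attainable turbo frequency is the base frequency plus a number of fixed-size frequency "bins" that shrinks as more cores become active, and it is granted only while the power, current, and thermal estimates stay within limits.

```python
# Toy Turbo Boost model (hypothetical numbers, not Intel's actual tables):
# fewer active cores leave more headroom, allowing more 133 MHz turbo bins.

BASE_MHZ = 2667
BIN_MHZ = 133
MAX_BINS_BY_ACTIVE_CORES = {1: 2, 2: 2, 3: 1, 4: 1}  # hypothetical table

def turbo_frequency(active_cores, power_ok, current_ok, thermal_ok):
    """Frequency granted to a core given activity and headroom estimates."""
    if not (power_ok and current_ok and thermal_ok):
        return BASE_MHZ  # no headroom: stay at the base frequency
    bins = MAX_BINS_BY_ACTIVE_CORES.get(active_cores, 0)
    return BASE_MHZ + bins * BIN_MHZ

print(turbo_frequency(1, True, True, True))  # 2933 with one active core
print(turbo_frequency(4, True, True, True))  # 2800 with all cores active
```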