Line Planning Based on Passenger Perceived Satisfaction at Different Travel Distances
Xiaoqing Qiao, Li Xie, Yun Yang
et al.
The rapid development of China’s high-speed railways (HSRs) and the implementation of revenue management policies have promoted the marketization of railway passenger transport, which is mainly reflected in the gradual transformation from a seller’s market dominated by operating companies to a buyer’s market dominated by passenger demand. Passenger travel needs are becoming increasingly diverse. In order to improve the quality of HSR services and attract more passengers, this paper takes passenger satisfaction as its starting point and considers the heterogeneity of travel preferences among passengers with different travel distances. Based on passenger travel data from the Nanning-Guangzhou (NG) HSR line, the K-means clustering method was used to classify passengers into three categories: short-distance, medium-distance, and long-distance travel. A structural equation modeling–multinomial logit (SEM-MNL) model integrating both explicit and latent variables was constructed to analyze passenger travel origin-destination (OD) choices. Stata software was used to estimate passenger preferences in the perceived-satisfaction functions for the different travel distances. Finally, considering constraints such as load factor, departure capacity, and spatiotemporal passenger flow demand, a line planning optimization model was constructed with the goal of minimizing train operating costs and maximizing passenger travel satisfaction. An improved subtraction optimizer algorithm was designed to solve the model. Using the NG Line as a case study, the proposed method achieved a reduction in train operating costs while enhancing overall passenger satisfaction.
Mechanical engineering and machinery, Machine design and drawing
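The distance-based segmentation step described above can be illustrated with a minimal, self-contained sketch. The snippet below runs a plain 1-D k-means over made-up trip distances; the data, the `kmeans_1d` helper, and the three band labels are all hypothetical stand-ins, not the NG-line dataset or the authors' implementation.

```python
import random

def kmeans_1d(values, k=3, iters=50, seed=0):
    """Plain 1-D k-means: enough to split trips into distance bands."""
    rng = random.Random(seed)
    centers = sorted(rng.sample(values, k))
    for _ in range(iters):
        # Assign each value to its nearest center.
        clusters = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda j: abs(v - centers[j]))
            clusters[i].append(v)
        # Recompute centers as cluster means; keep old center if empty.
        new = [sum(c) / len(c) if c else centers[i]
               for i, c in enumerate(clusters)]
        if new == centers:
            break
        centers = new
    return centers, clusters

# Hypothetical trip distances in km (illustrative only).
trips = [60, 75, 90, 110, 230, 250, 270, 300, 450, 480, 520, 560]
centers, clusters = kmeans_1d(trips, k=3)
for name, c in zip(("short", "medium", "long"), clusters):
    print(name, c)
```

Because the initial centers are sorted and the data is one-dimensional, the returned clusters stay ordered by distance, so the first, second, and third clusters can be read directly as the short-, medium-, and long-distance segments.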
Dynamic planning approach of facility layout from industry perspectives: A systematic literature review
Isnaini Wildanul, Masruroh Nur Aini, Dharma IGB Budi
Numerous reviews of facility layout have produced a general classification of facility layout criteria, including the planning approach, material handling configuration, department area, layout generation approach, metaheuristic approach, and layout evaluation approach. Research on the dynamic planning approach shows that companies and industries, as the principal users of facility layouts, need a more detailed and exhaustive review of layout optimization (re-layout) strategies; however, such a review has so far been lacking. This paper aims to fill the gap between the industry’s practical needs and existing research on dynamic planning of facility layouts by conducting a literature review that identifies facility layout criteria and factors categorized by industry layout type, providing companies with clearer guidance for their layout decisions. A reference offering a comprehensive analysis of the relevant characteristics, methods, and factors in determining layout types will help decision-makers formulate a facility layout strategy. This literature review analyzed 44 articles from the Scopus database published between 2014 and 2024. These articles were selected from 1278 candidates through a screening process using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method, which has proven effective in obtaining key articles on specific research topics. The results present a classification of facility layout criteria based on layout type in industry, complemented by a checklist developed as an initial screening tool for industry to optimize layouts. Further, the review advances the theory of the dynamic planning approach by identifying areas for future investigation.
Machine design and drawing, Engineering machinery, tools, and implements
Joint Optimization of Full-Length and Short-Turning Plan and Schedule: Case Study of Nanchang Metro Airport Line
Jian Peng, Cong Huang, Hui Fei
et al.
This study addresses the joint optimization of full-length and short-turning operations for the Nanchang Metro Airport Line, aiming to balance operational efficiency and passenger service quality. A novel mathematical model is proposed, which integrates train schedule design, capacity allocation, and passenger flow assignment into a linear programming framework. The model features three key innovations: (1) precise calculation of passenger waiting times under strict capacity constraints by incorporating dynamic passenger flow distribution and train occupancy thresholds; (2) implicit treatment of train numbers as decision variables, enabling flexible adjustments to service frequency based on time-varying demand patterns; and (3) a linear formulation for direct optimal solution computation, avoiding the complexity of nonlinear constraints through variable substitution and constraint relaxation. The model is validated through a case study of the Nanchang Metro Line 1 (Airport Line), where passenger demand is derived from historical data and flight schedules. Numerical experiments demonstrate that the optimized strategy reduces the number of full-length trains by 53%, achieves a 22% power cost saving, and decreases the waiting time for all passengers by 3.4%. The relevant findings and recommendations can offer valuable guidance to metro companies in making operational decisions related to the full-length and short-turning service plans and schedules.
Mechanical engineering and machinery, Machine design and drawing
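The capacity trade-off between full-length and short-turning services described above can be conveyed with a toy frequency search. All numbers below (train capacity, segment demand, relative costs) are invented for illustration; the paper's actual model is a linear program with passenger-flow assignment and waiting-time terms, which this brute-force sketch does not attempt to reproduce.

```python
# Hypothetical two-service example: full-length trains cover segments A-B-C,
# short-turning trains cover only the busy inner segment A-B.
CAP = 1200                             # passengers per train (assumed)
demand = {"A-B": 9000, "B-C": 3000}    # peak-hour passengers per segment
cost = {"full": 3.0, "short": 2.0}     # relative operating cost per train

best = None
for n_full in range(0, 11):
    for n_short in range(0, 11):
        # Only full-length trains serve B-C; both serve A-B.
        feasible = (n_full * CAP >= demand["B-C"] and
                    (n_full + n_short) * CAP >= demand["A-B"])
        if feasible:
            c = n_full * cost["full"] + n_short * cost["short"]
            if best is None or c < best[0]:
                best = (c, n_full, n_short)
print(best)  # → (19.0, 3, 5)
```

The search keeps full-length frequency at the minimum required by the outer segment and covers the rest of the peak demand with cheaper short-turning trains, which is the intuition behind the paper's reported reduction in full-length trains.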
Formation of Human-Machine Trust in Smart Construction: Influencing Factors and Mechanisms
Yongliang Deng, Kewei Li, Wenhui Hu
et al.
With the rapid advancement of digital technologies, smart construction has emerged as a transformative approach within the construction industry. Central to the success of human-machine collaboration is human-machine trust, which plays a critical role in safety, performance, and the adoption of intelligent systems. This study develops and empirically tests a comprehensive structural equation model to explore the formation mechanism of human-machine trust in smart construction. Drawing on the three-domain framework, five primary constructs—role cognition; controllability; technology attachment; equipment reliability; and autonomy—are identified across individual and system dimensions. The model also incorporates trust propensity and task complexity as contextual moderators. A questionnaire survey of 288 construction professionals in China was conducted, and partial least squares structural equation modelling (PLS-SEM) was employed to analyze the data. The results confirm that all five constructs significantly and positively influence human-machine trust, with role cognition and autonomy having the strongest effects. Furthermore, trust propensity positively moderates the impact of individual traits, while task complexity negatively moderates the effect of equipment characteristics on trust formation. These findings provide valuable theoretical insights and practical guidance for the design of trustworthy intelligent systems, which can foster safer and more effective human-machine collaboration in smart construction.
Human strategic innovation against AI systems - analyzing how humans develop and implement novel strategies that exploit AI limitations
Abdullahi Dattijo, Sungbae Jo
This paper systematically analyzes documented cases and examines human strategic innovation against artificial intelligence systems. Drawing from peer-reviewed research and verified instances in strategic domains including complex games such as Go (Wang et al. in Proceedings of the 40th International Conference on Machine Learning, 2023), chess (McIlroy-Young et al. in Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2020), Dota 2 (Berner et al., Dota 2 with large-scale deep reinforcement learning, arXiv preprint arXiv:1912.06680, 2019), and poker (Brown and Sandholm in Science 359:418–424, 2017), as well as real-world applications including cybersecurity (Comiter, Attacking artificial intelligence: AI's security vulnerability and what policymakers can do about it, Belfer Center for Science and International Affairs, Harvard Kennedy School, 2019) and finance (Zhang et al., 2024), we identify patterns in human innovation when confronting AI opponents. Our analysis reveals that humans can achieve notable successes by developing novel strategies that operate outside AI training distributions, exploiting specific AI limitations (Gleave et al. in International Conference on Machine Learning, 2020). Key findings demonstrate several critical mechanisms. First, pattern-breaking innovations enable humans to force AI systems into unfamiliar decision spaces where their training becomes insufficient (Comiter, 2019). Second, exploiting AI's bounded rationality allows strategic actors to leverage artificial systems' inherent computational and representational limitations (Simon, 1972). Third, adaptive resource distribution strategies permit dynamic reallocation of capabilities based on real-time assessment of AI behavioral patterns (Fatima and Wooldridge in Proceedings of the Fifth International Conference on Autonomous Agents, 2001). In Go, adversarial policies have achieved win rates exceeding 97% against superhuman AI by forcing the system into unfamiliar game states it cannot correctly evaluate (Wang et al., 2023). These attacks succeed not through superior Go play but by exploiting fundamental vulnerabilities in how AI systems process information outside their training distributions. Chess analysis indicates that human strategic choices often diverge from AI preferences, with models like Maia, specifically designed to predict human moves, achieving accuracies of 46–52% for targeted skill levels, highlighting fundamental differences in strategic evaluation between human and artificial intelligence (McIlroy-Young et al., 2020). While AI systems like OpenAI Five have demonstrated overwhelming dominance in Dota 2, achieving a 99.4% win rate in public games under restricted rule sets (Berner et al., 2019), and Libratus significantly outperformed top poker professionals in heads-up no-limit Texas Hold'em (Brown and Sandholm, 2017), human approaches in these contexts reveal ongoing attempts to identify and exploit AI behavioral patterns. These efforts demonstrate the persistent potential for strategic innovation even against seemingly dominant artificial systems. The implications of these findings extend beyond gaming applications to broader strategic contexts. They suggest fundamental considerations for AI system design, particularly regarding the need for enhanced strategic flexibility and adaptation capabilities when facing novel adversarial approaches (Wang et al., 2023). We propose that these insights should inform next-generation AI system development, emphasizing robust strategic frameworks that can better anticipate and respond to human innovations that operate outside conventional training paradigms. Our research contributes to the theoretical understanding of human-AI strategic interaction and provides practical frameworks for developing more resilient AI systems. The broader implications span multiple domains, including AI safety research (Russell, Human compatible: Artificial intelligence and the problem of control, Viking Press, 2019), human-AI collaboration frameworks (Vaccaro et al. in Nat Hum Behav 8:1869–1886, 2024), and strategic decision-making system design (Chen and Kumar in J Artif Intel Res 79:245–278, 2024).
Computational linguistics. Natural language processing, Electronic computers. Computer science
Active Learning for Machine Learning Driven Molecular Dynamics
Kevin Bachelor, Sanya Murdeshwar, Daniel Sabo
et al.
Machine-learned coarse-grained (CG) potentials are fast, but degrade over time when simulations reach under-sampled biomolecular conformations, and generating widespread all-atom (AA) data to combat this is computationally infeasible. We propose a novel active learning (AL) framework for CG neural network potentials in molecular dynamics (MD). Building on the CGSchNet model, our method employs root mean squared deviation (RMSD)-based frame selection from MD simulations in order to generate data on-the-fly by querying an oracle during the training of a neural network potential. This framework preserves CG-level efficiency while correcting the model at precise, RMSD-identified coverage gaps. By training CGSchNet, a coarse-grained neural network potential, we empirically show that our framework explores previously unseen configurations and trains the model on unexplored regions of conformational space. Our active learning framework enables a CGSchNet model trained on the Chignolin protein to achieve a 33.05% improvement in the Wasserstein-1 (W1) metric in Time-lagged Independent Component Analysis (TICA) space on an in-house benchmark suite.
cs.LG, physics.atm-clus
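The RMSD-based frame selection at the core of the active-learning loop described above can be sketched generically. The coordinates below are made up, no structural alignment is performed, and `select_frames` is a hypothetical name; the real framework operates on CG beads inside the CGSchNet training loop.

```python
import math

def rmsd(a, b):
    """Root mean squared deviation between two coordinate sets (no alignment)."""
    n = len(a)
    return math.sqrt(sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
                         for (ax, ay, az), (bx, by, bz) in zip(a, b)) / n)

def select_frames(traj, reference, threshold):
    """Flag frames far from the reference: candidates for oracle (AA) labeling."""
    return [i for i, frame in enumerate(traj) if rmsd(frame, reference) > threshold]

# Two-atom toy structures (x, y, z), purely illustrative.
ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
traj = [
    [(0.1, 0.0, 0.0), (1.1, 0.0, 0.0)],   # near the reference
    [(2.0, 1.0, 0.0), (3.0, 1.5, 0.0)],   # far: an under-sampled region
]
print(select_frames(traj, ref, threshold=0.5))  # → [1]
```

Only the distant frame is flagged for oracle labeling, which is how such a loop concentrates expensive all-atom data generation on the coverage gaps it discovers.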
DALI-PD: Diffusion-based Synthetic Layout Heatmap Generation for ML in Physical Design
Bing-Yue Wu, Vidya A. Chhabria
Machine learning (ML) has demonstrated significant promise in various physical design (PD) tasks. However, model generalizability remains limited by the availability of high-quality, large-scale training datasets. Creating such datasets is often computationally expensive and constrained by IP. While very few public datasets are available, they are typically static, slow to generate, and require frequent updates. To address these limitations, we present DALI-PD, a scalable framework for generating synthetic layout heatmaps to accelerate ML in PD research. DALI-PD uses a diffusion model to generate diverse layout heatmaps via fast inference in seconds. The heatmaps include power, IR drop, congestion, macro placement, and cell density maps. Using DALI-PD, we created a dataset comprising over 20,000 layout configurations with varying macro counts and placements. These heatmaps closely resemble real layouts and improve accuracy on downstream ML tasks such as IR drop or congestion prediction.
Using Multimodal Large Language Models (MLLMs) for Automated Detection of Traffic Safety-Critical Events
Mohammad Abu Tami, Huthaifa I. Ashqar, Mohammed Elhenawy
et al.
Traditional approaches to safety event analysis in autonomous systems have relied on complex machine and deep learning models and extensive datasets for high accuracy and reliability. However, the emergence of multimodal large language models (MLLMs) offers a novel approach by integrating textual, visual, and audio modalities. Our framework leverages the logical and visual reasoning power of MLLMs, directing their output through object-level question–answer (QA) prompts to ensure accurate, reliable, and actionable insights for investigating safety-critical event detection and analysis. By incorporating models like Gemini-Pro-Vision 1.5, we aim to automate safety-critical event detection and analysis along with mitigating common issues such as hallucinations in MLLM outputs. The results demonstrate the framework’s potential in different in-context learning (ICL) settings such as zero-shot and few-shot learning methods. Furthermore, we investigate other settings such as self-ensemble learning and a varying number of frames. The results show that a few-shot learning model consistently outperformed other learning models, achieving the highest overall accuracy of about 79%. The comparative analysis with previous studies on visual reasoning revealed that previous models showed moderate performance in driving safety tasks, while our proposed model significantly outperformed them. To the best of our knowledge, our proposed MLLM model stands out as the first of its kind, capable of handling multiple tasks for each safety-critical event. It can identify risky scenarios, classify diverse scenes, determine car directions, categorize agents, and recommend the appropriate actions, setting a new standard in safety-critical event management. This study shows the significance of MLLMs in advancing the analysis of naturalistic driving videos to improve safety-critical event detection and understanding the interactions in complex environments.
Mechanical engineering and machinery, Machine design and drawing
Highly Discriminative Driver Distraction Detection Method Based on Swin Transformer
Ziyang Zhang, Lie Yang, Chen Lv
Driver distraction detection not only helps to improve road safety and prevent traffic accidents, but also promotes the development of intelligent transportation systems, which is of great significance for creating a safer and more efficient transportation environment. Since deep learning algorithms have very strong feature learning abilities, more and more deep learning-based driver distraction detection methods have emerged in recent years. However, the majority of existing deep learning-based methods are optimized only through the constraint of classification loss, making it difficult to obtain features with high discrimination, so the performance of these methods is very limited. In this paper, to improve the discrimination between features of different classes of samples, we propose a high-discrimination feature learning strategy and design a driver distraction detection model based on Swin Transformer and the highly discriminative feature learning strategy (ST-HDFL). Firstly, the features of input samples are extracted through the powerful feature learning ability of Swin Transformer. Then, the intra-class distance of samples of the same class in the feature space is reduced through the constraint of sample center distance loss (SC loss), and the inter-class distance of samples of different classes is increased through the center vector shift strategy, which can greatly improve the discrimination of different class samples in the feature space. Finally, we have conducted extensive experiments on two publicly available datasets, AUC-DD and State-Farm, to demonstrate the effectiveness of the proposed method. The experimental results show that our method can achieve better performance than many state-of-the-art methods, such as Drive-Net, MobileVGG, Vanilla CNN, and so on.
Mechanical engineering and machinery, Machine design and drawing
Designing Poisson Integrators Through Machine Learning
Miguel Vaquero, David Martín de Diego, Jorge Cortés
This paper presents a general method to construct Poisson integrators, i.e., integrators that preserve the underlying Poisson geometry. We assume the Poisson manifold is integrable, meaning there is a known local symplectic groupoid for which the Poisson manifold serves as the set of units. Our constructions build upon the correspondence between Poisson diffeomorphisms and Lagrangian bisections, which allows us to reformulate the design of Poisson integrators as solutions to a certain PDE (Hamilton-Jacobi). The main novelty of this work is to understand the Hamilton-Jacobi PDE as an optimization problem, whose solution can be easily approximated using machine learning-related techniques. This research direction aligns with the current trend in the PDE and machine learning communities, as initiated by Physics-Informed Neural Networks, advocating for designs that combine both physical modeling (the Hamilton-Jacobi PDE) and data.
Design-o-meter: Towards Evaluating and Refining Graphic Designs
Sahil Goyal, Abhinav Mahajan, Swasti Mishra
et al.
Graphic designs are an effective medium for visual communication. They range from greeting cards to corporate flyers and beyond. Of late, machine learning techniques have become able to generate such designs, accelerating the rate of content production. An automated way of evaluating their quality becomes critical. Towards this end, we introduce Design-o-meter, a data-driven methodology to quantify the goodness of graphic designs. Further, our approach can suggest modifications to these designs to improve their visual appeal. To the best of our knowledge, Design-o-meter is the first approach that scores and refines designs in a unified framework despite the inherent subjectivity and ambiguity of the setting. Our exhaustive quantitative and qualitative analysis of our approach against baselines adapted for the task (including recent Multimodal LLM-based approaches) brings out the efficacy of our methodology. We hope our work will usher more interest in this important and pragmatic problem setting.
Machine Learning in Short-Reach Optical Systems: A Comprehensive Survey
Chen Shao, Elias Giacoumidis, Syed Moktacim Billah
et al.
In recent years, extensive research has been conducted to explore the utilization of machine learning algorithms in various direct-detected and self-coherent short-reach communication applications. These applications encompass a wide range of tasks, including bandwidth request prediction, signal quality monitoring, fault detection, traffic prediction, and digital signal processing (DSP)-based equalization. As a versatile approach, machine learning demonstrates the ability to address stochastic phenomena in optical systems and networks where deterministic methods may fall short. However, when it comes to DSP equalization algorithms, their performance improvements are often marginal, and their complexity is prohibitively high, especially in cost-sensitive short-reach communications scenarios such as passive optical networks (PONs). Time-series machine learning methods, by contrast, excel in capturing temporal dependencies, handling irregular or nonlinear patterns effectively, and accommodating variable time intervals. Within this extensive survey, we outline the application of machine learning techniques in short-reach communications, specifically emphasizing their utilization in high-bandwidth demanding PONs. Notably, we introduce a novel taxonomy for time-series methods employed in machine learning signal processing, providing a structured classification framework. Our taxonomy categorizes current time series methods into four distinct groups: traditional methods, Fourier convolution-based methods, transformer-based models, and time-series convolutional networks. Finally, we highlight prospective research directions within this rapidly evolving field and outline specific solutions to mitigate the complexity associated with hardware implementations. We aim to pave the way for more practical and efficient deployment of machine learning approaches in short-reach optical communication systems by addressing complexity concerns.
Combining Machine Learning Defenses without Conflicts
Vasisht Duddu, Rui Zhang, N. Asokan
Machine learning (ML) defenses protect against various risks to security, privacy, and fairness. Real-life models need simultaneous protection against multiple different risks which necessitates combining multiple defenses. But combining defenses with conflicting interactions in an ML model can be ineffective, incurring a significant drop in the effectiveness of one or more defenses being combined. Practitioners need a way to determine if a given combination can be effective. Experimentally identifying effective combinations can be time-consuming and expensive, particularly when multiple defenses need to be combined. We need an inexpensive, easy-to-use combination technique to identify effective combinations. Ideally, a combination technique should be (a) accurate (correctly identifies whether a combination is effective or not), (b) scalable (allows combining multiple defenses), (c) non-invasive (requires no change to the defenses being combined), and (d) general (is applicable to different types of defenses). Prior works have identified several ad-hoc techniques but none satisfy all the requirements above. We propose a principled combination technique, DefCon, to identify effective defense combinations. DefCon meets all requirements, achieving 90% accuracy on eight combinations explored in prior work and 81% in 30 previously unexplored combinations that we empirically evaluate in this paper.
Understanding the Needs of Nonhuman Stakeholders in Design Process: An Overview of and Reflection on Methods
Berre Su Yanlic, Aykut Coskun
Design practice has traditionally focused on human concerns, either overlooking the various effects of climate issues on nonhuman stakeholders or treating them as resources to address these problems. The climate crisis's urgency demands a design shift towards sustainability and inclusivity. This shift is happening through an emerging theme in design, More-Than-Human (MTH), which expands the notion of the user to animals, things, nature, and microbes. Such an expansion creates a requirement for designers to consider nonhuman perspectives during the design process. This paper investigates the methods used in MTH Design studies to explore and synthesize the perspectives of nonhuman users. Reviewing 30 papers, it highlights a predominant focus on animals and things over plants and microbes in MTH studies, along with a scarcity of synthesis methods. It identifies the necessity of tools that represent nonhumans with their relationships within larger ecosystems, and calls for increased attention to plants and microbes, emphasizing their vital role in sustainable environments and urging researchers to develop methods for understanding these species. By highlighting method strengths and weaknesses, it aims to guide designers and design researchers who plan to work with nonhuman users in selecting appropriate methods.
An in-silico framework for modeling optimal control of neural systems
Bodo Rueckauer, Marcel van Gerven
Introduction: Brain-machine interfaces have reached an unprecedented capacity to measure and drive activity in the brain, allowing restoration of impaired sensory, cognitive or motor function. Classical control theory is pushed to its limit when aiming to design control laws that are suitable for large-scale, complex neural systems. This work proposes a scalable, data-driven, unified approach to study brain-machine-environment interaction using established tools from dynamical systems, optimal control theory, and deep learning.
Methods: To unify the methodology, we define the environment, neural system, and prosthesis in terms of differential equations with learnable parameters, which effectively reduce to recurrent neural networks in the discrete-time case. Drawing on tools from optimal control, we describe three ways to train the system: direct optimization of an objective function, oracle-based learning, and reinforcement learning. These approaches are adapted to different assumptions about knowledge of system equations, linearity, differentiability, and observability.
Results: We apply the proposed framework to train an in-silico neural system to perform tasks in a linear and a nonlinear environment, namely particle stabilization and pole balancing. After training, this model is perturbed to simulate impairment of sensor and motor function. We show how a prosthetic controller can be trained to restore the behavior of the neural system under increasing levels of perturbation.
Discussion: We expect that the proposed framework will enable rapid and flexible synthesis of control algorithms for neural prostheses that reduce the need for in-vivo testing. We further highlight implications for sparse placement of prosthetic sensor and actuator components.
Neurosciences. Biological psychiatry. Neuropsychiatry
Application of the DMD Approach to High-Reynolds-Number Flow over an Idealized Ground Vehicle
Adit Misar, Nathan A. Tison, Vamshi M. Korivi
et al.
This paper attempts to develop a Dynamic Mode Decomposition (DMD)-based Reduced Order Model (ROM) that can quickly but accurately predict the forces and moments experienced by a road vehicle so that they can be used by an on-board controller to determine the vehicle’s trajectory. DMD can linearize a large dataset of high-dimensional measurements by decomposing them into low-dimensional coherent structures and associated time dynamics. This ROM can then also be applied to predict the future state of the fluid flow. Existing literature on DMD is limited to low-Reynolds-number applications. This paper presents DMD analyses of the flow around an idealized road vehicle, called the Ahmed body, at a Reynolds number of 2.7 × 10^6. The high-dimensional dataset used in this paper was collected from a computational fluid dynamics (CFD) simulation performed using Menter’s Shear Stress Transport (SST) turbulence model within the context of Improved Delayed Detached Eddy Simulations (IDDES). The DMD algorithm, as available in the literature, was found to suffer nonphysical dampening of the medium-to-high frequency modes. Enhancements to the existing algorithm were explored, and a modified DMD approach is presented in this paper, which includes: (a) a requirement of a higher sampling rate to obtain a higher resolution of data, and (b) a custom filtration process to remove spurious modes. The modified DMD algorithm thus developed was applied to the high-Reynolds-number, separation-dominated flow past the idealized ground vehicle. The effectiveness of the modified algorithm was tested by comparing future predictions of force and moment coefficients as predicted by the DMD-based ROM to the reference CFD simulation data, and they were found to offer significant improvement.
Mechanical engineering and machinery, Machine design and drawing
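The baseline exact-DMD computation that the paper modifies (paired snapshot matrices, SVD, reduced operator, eigenvalues and modes) can be sketched on a toy linear system whose spectrum is known in advance. This is a generic sketch, not the authors' enhanced algorithm; the rank cutoff below is only a loose stand-in for their custom spurious-mode filtering.

```python
import numpy as np

# Snapshots of a known linear system x_{k+1} = A_true x_k, so DMD should
# recover A_true's eigenvalues (a toy stand-in for the CFD flow-field data).
A_true = np.array([[0.9, 0.2], [0.0, 0.8]])
x = np.array([1.0, 1.0])
snapshots = [x]
for _ in range(10):
    x = A_true @ x
    snapshots.append(x)
S = np.column_stack(snapshots)

X, Y = S[:, :-1], S[:, 1:]               # paired snapshot matrices
U, s, Vh = np.linalg.svd(X, full_matrices=False)
r = int((s > 1e-10 * s[0]).sum())        # rank truncation (crude mode filter)
U, s, Vh = U[:, :r], s[:r], Vh[:r]
A_tilde = U.T @ Y @ Vh.T @ np.diag(1.0 / s)   # reduced linear operator
eigvals, W = np.linalg.eig(A_tilde)           # DMD eigenvalues (time dynamics)
modes = Y @ Vh.T @ np.diag(1.0 / s) @ W       # exact DMD modes
print(np.sort(eigvals.real))                  # ≈ [0.8, 0.9]
```

Because the snapshots come from a genuinely linear system, the recovered eigenvalues match the true dynamics exactly; on turbulent CFD data the same pipeline yields approximate modes, which is where sampling-rate and mode-filtering enhancements like the paper's become necessary.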
Ortho-Radial Drawing in Near-Linear Time
Yi-Jun Chang
An orthogonal drawing is an embedding of a plane graph into a grid. In a seminal work of Tamassia (SIAM Journal on Computing 1987), a simple combinatorial characterization of angle assignments that can be realized as bend-free orthogonal drawings was established, thereby allowing an orthogonal drawing to be described combinatorially by listing the angles of all corners. The characterization reduces the need to consider certain geometric aspects, such as edge lengths and vertex coordinates, and simplifies the task of graph drawing algorithm design. Barth, Niedermann, Rutter, and Wolf (SoCG 2017) established an analogous combinatorial characterization for ortho-radial drawings, which are a generalization of orthogonal drawings to cylindrical grids. The proof of the characterization is existential and does not result in an efficient algorithm. Niedermann, Rutter, and Wolf (SoCG 2019) later addressed this issue by developing quadratic-time algorithms for both testing the realizability of a given angle assignment as an ortho-radial drawing without bends and constructing such a drawing. In this paper, we further improve the time complexity of these tasks to near-linear time. We establish a new characterization for ortho-radial drawings based on the concept of a good sequence. Using the new characterization, we design a simple greedy algorithm for constructing ortho-radial drawings.
Auctions with Tokens: Monetary Policy as a Mechanism Design Choice
Andrea Canidio
I study a repeated auction in which payments are made with a blockchain token created and initially owned by the auction designer. Unlike the ``virtual money'' previously examined in mechanism design, such tokens can be saved and traded outside the mechanism. I show that the present-discounted value of expected revenues equals that of a conventional dollar auction, but revenues accrue earlier and are less volatile. The optimal monetary policy burns the tokens used for payment, a practice common in blockchain-based protocols. I also show that the same outcome can be reproduced in a dollar auction if the auctioneer issues a suitable dollar-denominated security. This equivalence breaks down with moral hazard and contracting frictions: with severe contracting frictions the token auction dominates, whereas with mild contracting frictions the dollar auction combined with a dollar-denominated financial instrument is preferred.
COLE: A Hierarchical Generation Framework for Multi-Layered and Editable Graphic Design
Peidong Jia, Chenxuan Li, Yuhui Yuan
et al.
Graphic design, which has been evolving since the 15th century, plays a crucial role in advertising. The creation of high-quality designs demands design-oriented planning, reasoning, and layer-wise generation. Unlike the recent CanvaGPT, which integrates GPT-4 with existing design templates to build a custom GPT, this paper introduces COLE, a hierarchical generation framework designed to comprehensively address these challenges. The COLE system can transform a vague intention prompt into a high-quality multi-layered graphic design, while also supporting flexible editing based on user input. Examples of such input might include directives like ``design a poster for Hisaishi's concert.'' The key insight is to dissect the complex task of text-to-design generation into a hierarchy of simpler sub-tasks, each addressed by specialized models working collaboratively. The results from these models are then consolidated to produce a cohesive final output. This hierarchical task decomposition streamlines the complex process and significantly enhances generation reliability. The COLE system comprises multiple fine-tuned Large Language Models (LLMs), Large Multimodal Models (LMMs), and Diffusion Models (DMs), each specifically tailored for design-aware layer-wise captioning, layout planning, reasoning, and the generation of images and text. Furthermore, we construct the DESIGNINTENTION benchmark to demonstrate the superiority of the COLE system over existing methods in generating high-quality graphic designs from user intent. Lastly, we present a Canva-like multi-layered image editing tool to support flexible editing of the generated multi-layered graphic design images. We perceive our COLE system as an important step towards addressing more complex and multi-layered graphic design generation tasks in the future.
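The hierarchical decomposition described above can be sketched as a pipeline in which each stage consumes the previous stage's output. The stage names, data fields, and placeholder logic below are hypothetical stand-ins for the fine-tuned models the abstract describes, not the actual COLE implementation.

```python
from dataclasses import dataclass, field

@dataclass
class DesignState:
    """Accumulates intermediate results as the design flows through the stages."""
    intention: str
    caption: str = ""
    layout: list = field(default_factory=list)
    layers: list = field(default_factory=list)

def expand_caption(state):
    # Stand-in for an LLM turning a vague prompt into a detailed design caption.
    state.caption = f"detailed caption for: {state.intention}"
    return state

def plan_layout(state):
    # Stand-in for the layout-planning model emitting per-layer bounding boxes.
    state.layout = [("background", (0, 0, 1024, 1024)), ("title", (64, 64, 896, 192))]
    return state

def render_layers(state):
    # Stand-in for diffusion/text models producing one layer per planned box.
    state.layers = [name for name, _box in state.layout]
    return state

def run_pipeline(intention, stages=(expand_caption, plan_layout, render_layers)):
    state = DesignState(intention)
    for stage in stages:  # each simpler sub-task feeds the next
        state = stage(state)
    return state

result = run_pipeline("design a poster for Hisaishi's concert")
```

The design choice the abstract argues for is visible in the structure: each stage is individually simple and swappable, and reliability comes from consolidating their outputs rather than from one monolithic model.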
Internet of Vehicles and Real-Time Optimization Algorithms: Concepts for Vehicle Networking in Smart Cities
Ferran Adelantado, Majsa Ammouriova, Erika Herrera
et al.
Achieving sustainable freight transport and citizens’ mobility in modern cities is becoming a critical issue for many governments. By analyzing the big data streams generated by IoT devices, city planners now have the opportunity to optimize traffic and mobility patterns. IoT, combined with innovative transport concepts and emerging mobility modes (e.g., ridesharing and carsharing), constitutes a new paradigm for sustainable, optimized traffic operations in smart cities. Still, these scenarios are highly dynamic and subject to a high degree of uncertainty. Hence, factors such as real-time optimization and re-optimization of routes, stochastic travel times, and evolving customer requirements and traffic conditions must also be considered. This paper discusses the main challenges associated with Internet of Vehicles (IoV) and vehicle-networking scenarios, identifies the underlying optimization problems that need to be solved in real time, and proposes an approach that combines IoV with parallelization techniques. To this end, agile optimization and distributed machine learning are envisaged as the best candidate algorithms for developing efficient transport and mobility systems.
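The real-time re-optimization idea can be sketched minimally: a route is planned, a travel-time update arrives (e.g., a congestion report from an IoT sensor), and the plan is recomputed against the new data. The brute-force solver and the three-node network below are toy stand-ins, not the agile-optimization algorithms the paper discusses.

```python
import itertools

def best_route(depot, stops, travel_time):
    """Brute-force cheapest open route from the depot through all stops.

    travel_time maps an unordered node pair (as a frozenset) to a travel time,
    a toy stand-in for a live street-network model.
    """
    def cost(order):
        path = (depot, *order)
        return sum(travel_time[frozenset(pair)] for pair in zip(path, path[1:]))
    return min(itertools.permutations(stops), key=cost)

# Hypothetical network: depot "D" and two customer stops "A" and "B".
times = {
    frozenset(("D", "A")): 5,
    frozenset(("D", "B")): 9,
    frozenset(("A", "B")): 4,
}
route = best_route("D", ["A", "B"], times)   # initial plan: visit A first

# Re-optimization: a congestion update makes the D-A edge expensive,
# so the route is recomputed and the visiting order flips.
times[frozenset(("D", "A"))] = 20
route = best_route("D", ["A", "B"], times)
```

In a real IoV deployment the update loop would run continuously over streamed sensor data, with the exhaustive search replaced by the fast heuristics that make real-time response feasible.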