Modelling the relationship between hotel perceived value, customer satisfaction, and customer loyalty
Mohammed El-Adly
Abstract Using structural equation modelling (SEM), this study investigates the relationships among the dimensions of customer perceived value, customer satisfaction, and customer loyalty in the hotel context. The study first conceptualises hotel perceived value as a multidimensional construct of seven dimensions spanning both cognitive and affective aspects. Five of these seven dimensions (self-gratification, price, quality, transaction, and hedonic) were found to have a significant direct positive effect on customer satisfaction and/or customer loyalty. The remaining two dimensions (aesthetics and prestige) showed no significant direct positive effect on either customer satisfaction or customer loyalty. Four dimensions (hedonic, price, quality, and transaction) also had a significant indirect positive effect on customer loyalty through customer satisfaction as a mediator. Finally, customer satisfaction was found to have a direct positive effect on customer loyalty.
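The mediation structure described above (a value dimension influencing loyalty indirectly through satisfaction) can be illustrated with a toy calculation; the path coefficients below are hypothetical placeholders, not the study's estimates:

```python
# Toy illustration of a mediated (indirect) effect, as in the SEM model:
# value dimension -> satisfaction (path a) -> loyalty (path b),
# plus a direct path from the value dimension to loyalty.
a, b, direct = 0.40, 0.50, 0.15   # hypothetical standardized path coefficients

indirect = a * b                  # indirect effect via satisfaction
total = direct + indirect         # total effect on loyalty

print(indirect, total)
```

In this framing, a dimension like hedonic value can matter for loyalty even when its direct path is weak, because the product a * b carries its influence through satisfaction.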
OmicStudio: A composable bioinformatics cloud platform with real‐time feedback that can generate high‐quality graphs for publication
Fengye Lyu, Feiran Han, Changli Ge
et al.
OmicStudio focuses on speed, quality, and flexibility. It not only meets users' needs for routine bioinformatics data analysis, statistics, and visualization, but also gives them the freedom to mine data beyond the developers' framework. Moreover, because users are not limited to the developers' aesthetic choices, they can produce more elegant graphs through customization. Available online at https://www.omicstudio.cn.
Modeling Art Evaluations from Comparative Judgments: A Deep Learning Approach to Predicting Aesthetic Preferences
Manoj Reddy Bethi, Sai Rupa Jhade, Pravallika Yaganti
et al.
Modeling human aesthetic judgments in visual art presents significant challenges due to individual preference variability and the high cost of obtaining labeled data. To reduce the cost of acquiring such labels, we apply a comparative learning framework based on pairwise preference assessments rather than direct ratings. This approach leverages the Law of Comparative Judgment, which posits that relative choices impose less cognitive burden and yield greater consistency than direct scoring. We extract deep convolutional features from painting images using ResNet-50 and develop both a deep neural network regression model and a dual-branch pairwise comparison model. We explore four research questions: (RQ1) How does the proposed deep neural network regression model with CNN features compare to the baseline linear regression model using hand-crafted features? (RQ2) How does pairwise comparative learning compare to regression-based prediction when lacking access to direct rating values? (RQ3) Can we predict individual rater preferences through within-rater and cross-rater analysis? (RQ4) What is the annotation cost trade-off between direct ratings and comparative judgments in terms of human time and effort? Our results show that the deep regression model substantially outperforms the baseline, achieving up to $328\%$ improvement in $R^2$. The comparative model approaches regression performance despite having no access to direct rating values, validating the practical utility of pairwise comparisons. However, predicting individual preferences remains challenging, with both within-rater and cross-rater performance significantly lower than average rating prediction. Human subject experiments reveal that comparative judgments require $60\%$ less annotation time per item, demonstrating superior annotation efficiency for large-scale preference modeling.
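As a rough illustration of the comparative-judgment idea (not the paper's dual-branch network), latent item scores can be recovered from pairwise preferences with a Bradley-Terry-style fit; the function name, learning rate, and iteration count below are illustrative assumptions:

```python
import numpy as np

def fit_scores(pairs, n_items, iters=200, lr=0.1):
    """Gradient ascent on the Bradley-Terry log-likelihood:
    P(i preferred over j) = sigmoid(s_i - s_j), pairs = [(winner, loser), ...]."""
    s = np.zeros(n_items)
    for _ in range(iters):
        grad = np.zeros(n_items)
        for i, j in pairs:
            p_lose = 1.0 / (1.0 + np.exp(s[i] - s[j]))  # = sigmoid(s_j - s_i)
            grad[i] += p_lose   # push the preferred item's score up
            grad[j] -= p_lose   # push the other item's score down
        s += lr * grad
        s -= s.mean()           # fix the arbitrary offset (identifiability)
    return s

# Three pairwise judgments imply the ordering item 0 > item 1 > item 2.
scores = fit_scores([(0, 1), (1, 2), (0, 2)], n_items=3)
```

Only relative comparisons enter the fit, which is exactly why no direct rating values are needed.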
Effects of previous orthodontic treatment on periodontal status of patients in long-term supportive periodontal care
Sarah K. Sonnenschein, Alexander-Nicolaus Spies, Christopher Büsch
et al.
Abstract Background While many German children and young adults receive orthodontic treatment (OTx), the number of patients requiring periodontal treatment is increasing due to demographic changes. Investigating the long-term effects of orthodontic treatment on periodontal health, particularly in patients developing periodontitis, is therefore of public health interest. The primary aim was to evaluate whether an anamnestic history of OTx affects the progression of periodontal parameters over a ten-year period of supportive periodontal care (SPC). Additionally, the study aimed to determine whether orthodontic treatment need in SPC patients correlates with changes in periodontal and dental parameters during the preceding ten years of SPC. Methods Sixty periodontitis patients with ten years (± six months) of SPC received digital intraoral scans during a cross-sectional SPC follow-up examination (T1). Patients’ previous orthodontic treatment (POT) or no treatment (NOT) was recorded. The Index of Orthodontic Treatment Need (IOTN) at T1 was assessed. Dental and periodontal parameters were recorded and compared with retrospective data from ten years (± six months) earlier (T0). The association between changes in clinical attachment levels (CAL T0-T1) and treatment group (POT/NOT) was analysed (multiple linear regression). Spearman correlations between IOTN and changes in clinical parameters were assessed. Results The change in parameters from T0 to T1 was as follows (POT: n = 24 patients, NOT: n = 36 patients): Mean tooth loss: 0.92 ± 1.74 vs. 0.64 ± 0.90; Mean probing pocket depth: -0.03 ± 0.33 mm vs. 0.05 ± 0.51 mm; Mean CAL: 0.11 ± 0.59 mm vs. 0.09 ± 0.66 mm. No association was found between CAL change and treatment group. Only a negligible correlation between IOTN and changes in dental, periodontal, and oral hygiene parameters was found.
Conclusions Patients with successfully treated periodontitis, both with and without a history of orthodontic treatment, show a high level of periodontal stability during long-term SPC and comparable orthodontic conditions. Trial registration Clinical trial registration number on the German clinical trials register: DRKS00011316 (Registration date 17th November 2016).
Automated Assessment of Aesthetic Outcomes in Facial Plastic Surgery
Pegah Varghaei, Kiran Abraham-Aggarwal, Manoj T. Abraham
et al.
We introduce a scalable, interpretable computer-vision framework for quantifying aesthetic outcomes of facial plastic surgery using frontal photographs. Our pipeline leverages automated landmark detection, geometric facial symmetry computation, deep-learning-based age estimation, and nasal morphology analysis. To perform this study, we first assemble the largest curated dataset of paired pre- and post-operative facial images to date, encompassing 7,160 photographs from 1,259 patients. This dataset includes a dedicated rhinoplasty-only subset consisting of 732 images from 366 patients, 96.2% of whom showed improvement in at least one of the three nasal measurements with statistically significant group-level change. Among these patients, the greatest statistically significant improvements (p < 0.001) occurred in the alar width to face width ratio (77.0%), nose length to face height ratio (41.5%), and alar width to intercanthal ratio (39.3%). Among the broader frontal-view cohort, comprising 989 rigorously filtered subjects, 71.3% exhibited significant enhancements in global facial symmetry or perceived age (p < 0.01). Importantly, our analysis shows that patient identity remains consistent post-operatively, with True Match Rates of 99.5% and 99.6% at a False Match Rate of 0.01% for the rhinoplasty-specific and general patient cohorts, respectively. Additionally, we analyze inter-practitioner variability in improvement rates. By providing reproducible, quantitative benchmarks and a novel dataset, our pipeline facilitates data-driven surgical planning, patient counseling, and objective outcome evaluation across practices.
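The geometric symmetry computation can be sketched as follows; this mirrored-landmark distance is a generic illustration of landmark-based symmetry scoring, not the paper's exact metric, and the function name and midline convention are assumptions:

```python
import numpy as np

def asymmetry(left_pts, right_pts, midline_x):
    """Toy landmark-based asymmetry score: mirror each right-side landmark
    across a vertical facial midline and measure its distance to the paired
    left-side landmark. Lower mean distance means a more symmetric face."""
    L = np.asarray(left_pts, dtype=float)
    R = np.asarray(right_pts, dtype=float)
    R_mirror = R.copy()
    R_mirror[:, 0] = 2 * midline_x - R_mirror[:, 0]  # reflect x across midline
    return float(np.linalg.norm(L - R_mirror, axis=1).mean())
```

A perfectly mirrored landmark pair yields 0; comparing pre- and post-operative scores for the same patient would then give a per-case symmetry change.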
Improving Perceptual Audio Aesthetic Assessment via Triplet Loss and Self-Supervised Embeddings
Dyah A. M. G. Wisnu, Ryandhimas E. Zezario, Stefano Rini
et al.
We present a system for automatic multi-axis perceptual quality prediction of generative audio, developed for Track 2 of the AudioMOS Challenge 2025. The task is to predict four Audio Aesthetic Scores--Production Quality, Production Complexity, Content Enjoyment, and Content Usefulness--for audio generated by text-to-speech (TTS), text-to-audio (TTA), and text-to-music (TTM) systems. A main challenge is the domain shift between natural training data and synthetic evaluation data. To address this, we combine BEATs, a pretrained transformer-based audio representation model, with a multi-branch long short-term memory (LSTM) predictor and use a triplet loss with buffer-based sampling to structure the embedding space by perceptual similarity. Our results show that this improves embedding discriminability and generalization, enabling domain-robust audio quality assessment without synthetic training data.
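The triplet objective used to structure the embedding space by perceptual similarity has the standard form below; the margin value and the plain-numpy formulation are illustrative assumptions, not the system's exact configuration:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss on embedding vectors: pull the anchor toward a
    perceptually similar (positive) clip and push it at least `margin`
    farther from a dissimilar (negative) clip."""
    d_ap = np.linalg.norm(anchor - positive)   # anchor-positive distance
    d_an = np.linalg.norm(anchor - negative)   # anchor-negative distance
    return max(0.0, d_ap - d_an + margin)

# Satisfied triplet (negative already far): loss is zero.
zero = triplet_loss(np.array([0.0, 0.0]), np.array([0.0, 0.0]), np.array([10.0, 0.0]))
# Violated triplet (negative closer than positive): positive loss.
pos = triplet_loss(np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.5, 0.0]))
```

Buffer-based sampling, as described above, would draw the positive/negative items from a stored pool of past embeddings rather than only the current batch.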
Aesthetic Matters in Music Perception for Image Stylization: An Emotion-driven Music-to-Visual Manipulation
Junjie Xu, Xingjiao Wu, Tanren Yao
et al.
Emotional information is essential for enhancing human-computer interaction and deepening image understanding. However, while deep learning has advanced image recognition, the intuitive understanding and precise control of emotional expression in images remain challenging. Similarly, music research largely focuses on theoretical aspects, with limited exploration of its emotional dimensions and their integration with visual arts. To address these gaps, we introduce EmoMV, an emotion-driven music-to-visual manipulation method that manipulates images based on musical emotions. EmoMV combines bottom-up processing of musical elements, such as pitch and rhythm, with top-down application of these emotions to visual aspects like color and lighting. We evaluate EmoMV using a multi-scale framework that includes image quality metrics, aesthetic assessments, and EEG measurements to capture real-time emotional responses. Our results demonstrate that EmoMV effectively translates music's emotional content into visually compelling images, advancing multimodal emotional integration and opening new avenues for creative industries and interactive technologies.
JAM: A Tiny Flow-based Song Generator with Fine-grained Controllability and Aesthetic Alignment
Renhang Liu, Chia-Yu Hung, Navonil Majumder
et al.
Diffusion and flow-matching models have revolutionized automatic text-to-audio generation in recent times. These models are increasingly capable of generating high-quality, faithful audio outputs capturing speech and acoustic events. However, there is still much room for improvement in creative audio generation, which primarily involves music and songs. Recent open lyrics-to-song models, such as DiffRhythm, ACE-Step, and LeVo, have set an acceptable standard in automatic song generation for recreational use. However, these models lack the fine-grained word-level controllability often desired by musicians in their workflows. To the best of our knowledge, our flow-matching-based JAM is the first effort toward endowing word-level timing and duration control in song generation, allowing fine-grained vocal control. To enhance the quality of generated songs and better align them with human preferences, we implement aesthetic alignment through Direct Preference Optimization, which iteratively refines the model using a synthetic dataset, eliminating the need for manual data annotation. Furthermore, we aim to standardize the evaluation of such lyrics-to-song models through our public evaluation dataset JAME. We show that JAM outperforms existing models in terms of music-specific attributes.
MajutsuCity: Language-driven Aesthetic-adaptive City Generation with Controllable 3D Assets and Layouts
Zilong Huang, Jun He, Xiaobin Huang
et al.
Generating realistic 3D cities is fundamental to world models, virtual reality, and game development, where an ideal urban scene must offer both stylistic diversity and fine-grained controllability. However, existing methods struggle to balance the creative flexibility offered by text-based generation with the object-level editability enabled by explicit structural representations. We introduce MajutsuCity, a natural-language-driven and aesthetically adaptive framework for synthesizing structurally consistent and stylistically diverse 3D urban scenes. MajutsuCity represents a city as a composition of controllable layouts, assets, and materials, and operates through a four-stage pipeline. To extend controllability beyond initial generation, we further integrate MajutsuAgent, an interactive language-grounded editing agent that supports five object-level operations. To support photorealistic and customizable scene synthesis, we also construct MajutsuDataset, a high-quality multimodal dataset containing 2D semantic layouts and height maps, diverse 3D building assets, and curated PBR materials and skyboxes, each accompanied by detailed annotations. Meanwhile, we develop a practical set of evaluation metrics covering key dimensions such as structural consistency, scene complexity, material fidelity, and lighting atmosphere. Extensive experiments demonstrate that MajutsuCity reduces layout FID by 83.7% compared with CityDreamer and by 20.1% over CityCraft. Our method ranks first across all AQS and RDR scores, outperforming existing methods by a clear margin. These results confirm MajutsuCity as a new state of the art in geometric fidelity, stylistic adaptability, and semantic controllability for 3D city generation. We expect our framework to inspire new avenues of research in 3D city generation. Our project page: https://longhz140516.github.io/MajutsuCity/.
Sharing Frissons among Online Video Viewers: Exploring the Design of Affective Communication for Aesthetic Chills
Zeyu Huang, Xinyi Cao, Yuanhao Zhang
et al.
On online video platforms, viewers often lack a channel to sense others' affective states and express their own on the fly, in contrast to co-located group viewing. This study explored the design of complementary affective communication specifically for effortless, spontaneous sharing of frissons during video watching. Also known as aesthetic chills, frissons are instant psycho-physiological reactions, such as goosebumps and shivers, to arousing stimuli. We proposed an approach that unobtrusively detects viewers' frissons using skin electrodermal activity sensors and presents the aggregated data alongside online videos. Following a design process of brainstorming, a focus group interview (N=7), and design iterations, we proposed three different designs to encode viewers' frisson experiences: ambient light, icon, and vibration. A mixed-methods within-subject study (N=48) suggested that our approach offers a non-intrusive and efficient way to share viewers' frisson moments, increases the social presence of others as if watching together, and can create affective contagion among viewers.
Computational Modeling of Artistic Inspiration: A Framework for Predicting Aesthetic Preferences in Lyrical Lines Using Linguistic and Stylistic Features
Gaurav Sahu, Olga Vechtomova
Artistic inspiration remains one of the least understood aspects of the creative process. It plays a crucial role in producing works that resonate deeply with audiences, but the complexity and unpredictability of the aesthetic stimuli that evoke inspiration have eluded systematic study. This work proposes a novel framework for computationally modeling artistic preferences in different individuals through key linguistic and stylistic properties, with a focus on lyrical content. In addition to the framework, we introduce EvocativeLines, a dataset of annotated lyric lines, categorized as either "inspiring" or "not inspiring," to facilitate the evaluation of our framework across diverse preference profiles. Our computational model leverages the proposed linguistic and poetic features and applies a calibration network on top of them to accurately forecast artistic preferences among different creative individuals. Our experiments demonstrate that our framework outperforms an out-of-the-box LLaMA-3-70b, a state-of-the-art open-source language model, by nearly 18 points. Overall, this work contributes an interpretable and flexible framework that can be adapted to analyze any type of inherently subjective artistic preference across a wide spectrum of skill levels.
QPT V2: Masked Image Modeling Advances Visual Scoring
Qizhi Xie, Kun Yuan, Yunpeng Qu
et al.
Quality assessment and aesthetics assessment aim to evaluate the perceived quality and aesthetics of visual content. Current learning-based methods suffer greatly from the scarcity of labeled data and usually perform sub-optimally in terms of generalization. Although masked image modeling (MIM) has achieved noteworthy advancements across various high-level tasks (e.g., classification, detection), its capabilities in terms of quality- and aesthetics-awareness remain unexplored. In this work, we take a novel perspective to investigate exactly these capabilities. To this end, we propose Quality- and aesthetics-aware pretraining (QPT V2), the first pretraining framework based on MIM that offers a unified solution to quality and aesthetics assessment. To perceive high-level semantics and fine-grained details, pretraining data is curated. To comprehensively cover quality- and aesthetics-related factors, degradation is introduced. To capture multi-scale quality and aesthetic information, the model structure is modified. Extensive experimental results on 11 downstream benchmarks clearly show the superior performance of QPT V2 in comparison with current state-of-the-art approaches and other pretraining paradigms.
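A minimal sketch of the random patch masking at the heart of MIM (generic, not QPT V2's curated pipeline); the 75% ratio and 14x14-patch grid are common defaults assumed here for illustration:

```python
import numpy as np

def random_mask(n_patches, mask_ratio, seed=0):
    """Toy MIM-style masking: pick a random subset of image-patch indices
    to hide; the model is then trained to reconstruct the hidden patches
    from the visible ones."""
    rng = np.random.default_rng(seed)
    n_mask = int(n_patches * mask_ratio)
    idx = rng.permutation(n_patches)   # random ordering of all patches
    return np.sort(idx[:n_mask])       # indices of the masked patches

# e.g. a 14x14 patch grid (196 patches) with 75% masked
masked = random_mask(196, 0.75)
```

Making reconstruction depend on fine-grained local detail is what motivates adapting such pretraining toward quality- and aesthetics-awareness.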
Fuzzy Logic Approach For Visual Analysis Of Websites With K-means Clustering-based Color Extraction
Tamiris Abildayeva, Pakizar Shamoi
Websites form the foundation of the Internet, serving as platforms for disseminating information and accessing digital resources. They allow users to engage with a wide range of content and services, enhancing the Internet's utility for all. The aesthetics of a website play a crucial role in its overall effectiveness and can significantly impact user experience, engagement, and satisfaction. This paper examines the importance of website design aesthetics in enhancing user experience, given the increasing number of internet users worldwide. It emphasizes the significant impact of first impressions, often formed within 50 milliseconds, on users' perceptions of a website's appeal and usability. We introduce a novel method for measuring website aesthetics based on color harmony and font popularity, using fuzzy logic to predict aesthetic preferences. We collected our own dataset, consisting of nearly 200 popular and frequently used website designs, to ensure relevance and adaptability to the dynamic nature of web design trends. Dominant colors from website screenshots were extracted using k-means clustering. The findings aim to improve understanding of the relationship between aesthetics and usability in website design.
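The dominant-color extraction step can be sketched with a tiny k-means over RGB pixels; this is a generic illustration with a simple deterministic initialization, and the paper's actual pipeline and parameters may differ:

```python
import numpy as np

def dominant_colors(pixels, k=3, iters=20):
    """Toy k-means over an (N, 3) array of RGB pixels, returning k dominant
    colors, as a sketch of clustering screenshot pixels by color."""
    X = np.asarray(pixels, dtype=float)
    # simple deterministic init: evenly spaced pixels as starting centers
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # assign each pixel to its nearest center (Euclidean in RGB space)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned pixels
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers.round().astype(int)

# Two clearly separated pixel clusters recover the two source colors.
colors = dominant_colors([[255, 0, 0], [0, 0, 255]] * 5, k=2)
```

The resulting palette could then feed a color-harmony score such as the fuzzy-logic assessment described above.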
When Researching Means Making Waves: In and from Algorithmic Images
Gaëtan Robillard
In Search of the Wave is a computer-generated film made in 2013, highlighting the computation of images through computer simulation, and through text and voice. Originating from a screening of the film at the Gustave Eiffel University, the article presents a reflection on research-creation in and from algorithmic images. Fundamentally, what is it in this research-creation -- especially in research on algorithmic imagery -- that can be set in motion? Without fully distinguishing between what would be research on one hand and creation on the other, we focus on characterizing forms, aesthetics, or theories that contribute to possible shifts. The inventory of these possibilities is precisely the challenge of the text: from mathematics to image and visualization, from the birth of generative aesthetics to the coding related to pioneering works (recoding), or from indexing new aesthetics to new forms of critical production.
Everyday Aesthetics - Review
Thomas Froy
This article reviews the volume 'Everydayness: Contemporary Aesthetic Approaches', edited by Lisa Gombini and Adrián Kvokačka. Thomas Froy assesses the relation, explored by the contributing authors, between the notion of the 'everyday' and the field of aesthetics, focussing on questions about the 'who' and the 'what' of everyday aesthetics, and its place in the modern world.
The fracture resistance of 3D-printed versus milled provisional crowns: An in vitro study
Ahmed Othman, Maximillian Sandmair, Vasilios Alevizakos
et al.
<h4>Background</h4> CAD/CAM has considerably transformed the clinical practice of dentistry. In particular, advanced dental materials produced via digital technologies offer unquestionable benefits, such as ideal mechanical stability, outstanding aesthetics and reliable high precision. Additive manufacturing (AM) technology has promoted new innovations, especially in the field of biomedicine. <h4>Aims</h4> The aim of this study is to analyze the fracture resistance of implant-supported 3D-printed temporary crowns relative to milled crowns by compression testing. <h4>Methods</h4> The study sample included 32 specimens of temporary crowns, which were divided into 16 specimens per group. Each group consisted of eight maxillary central incisor crowns (tooth 11) and eight maxillary molar crowns (tooth 16). The first group (16 specimens) was 3D printed by a mask printer (Varseo, BEGO, Bremen, Germany) with a temporary material (VarseoSmile Temp A3, BEGO, Bremen, Germany). The second group was milled with a millable temporary material (VitaCAD Temp mono-color, Vita, Bad Säckingen, Germany). The two groups were compression tested until failure to estimate their fracture resistance. The loading forces and travel distance until failure were measured. The statistical analysis was performed using SPSS Version 24.0. We performed multiple t-tests and considered a significance level of p < 0.05. <h4>Results</h4> The mean fracture force of the printed molars was 1189.50 N (±250.85) with a deformation of 1.75 mm (±0.25). The milled molars reached a mean fracture force of 1817.50 N (±258.22) with a deformation of 1.75 mm (±0.20). The printed incisors fractured at 321.63 N (±145.90) with a deformation of 1.94 mm (±0.40), while the milled incisors fractured at 443.38 N (±113.63) with a deformation of 2.26 mm (±0.40). The milled molar group revealed significantly higher mechanical fracture strength than the 3D-printed molar group (p < 0.001).
However, no significant differences between the 3D-printed incisors and the milled incisors were found (p = 0.084). There was no significant difference in the travel distance until fracture for both the molar group (p = 1.000) and the incisor group (p = 0.129). <h4>Conclusion</h4> Within the limits of this in vitro investigation, printed and milled temporary crowns withstood masticatory forces and were safe for clinical use.
FORUM on B. Bégout, Le concept d’ambiance
Germana Alberti
Philosophy. Psychology. Religion, Aesthetics
Enhancement by Your Aesthetic: An Intelligible Unsupervised Personalized Enhancer for Low-Light Images
Naishan Zheng, Jie Huang, Qi Zhu
et al.
Low-light image enhancement is an inherently subjective process whose targets vary with the user's aesthetic. Motivated by this, several personalized enhancement methods have been investigated. However, the enhancement process based on user preferences in these techniques is invisible, i.e., a "black box". In this work, we propose an intelligible unsupervised personalized enhancer (iUP-Enhancer) for low-light images, which establishes correlations between the low-light and the unpaired reference images with regard to three user-friendly attributes (brightness, chromaticity, and noise). The proposed iUP-Enhancer is trained with the guidance of these correlations and the corresponding unsupervised loss functions. Rather than a "black box" process, our iUP-Enhancer presents an intelligible enhancement process built on the above attributes. Extensive experiments demonstrate that the proposed algorithm produces competitive qualitative and quantitative results while maintaining excellent flexibility and scalability. This can be validated by personalization with single/multiple references, cross-attribute references, or merely adjusting parameters.
An Experience-based Direct Generation approach to Automatic Image Cropping
Casper Christensen, Aneesh Vartakavi
Automatic image cropping is a challenging task with many practical downstream applications. The task is often divided into sub-problems - generating cropping candidates, finding the visually important regions, and determining aesthetics to select the most appealing candidate. Prior approaches model one or more of these sub-problems separately, and often combine them sequentially. We propose a novel convolutional neural network (CNN) based method to crop images directly, without explicitly modeling image aesthetics, evaluating multiple crop candidates, or detecting visually salient regions. Our model is trained on a large dataset of images cropped by experienced editors and can simultaneously predict bounding boxes for multiple fixed aspect ratios. We consider the aspect ratio of the cropped image to be a critical factor that influences aesthetics. Prior approaches for automatic image cropping did not enforce the aspect ratio of the outputs, likely due to a lack of datasets for this task. We therefore benchmark our method on public datasets for two related tasks - first, aesthetic image cropping without regard to aspect ratio, and second, thumbnail generation that requires fixed aspect ratio outputs but where aesthetics are not crucial. We show that our strategy is competitive with or performs better than existing methods in both of these tasks. Furthermore, our one-stage model is easier to train and significantly faster than existing two-stage or end-to-end methods at inference. We present a qualitative evaluation study and find that our model is able to generalize to diverse images from unseen datasets, often retaining the compositional properties of the original images after cropping. Our results demonstrate that explicitly modeling image aesthetics or visual attention regions is not necessarily required to build a competitive image cropping algorithm.
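Enforcing a fixed aspect ratio on a predicted crop box can be sketched with a small post-processing helper; this function is hypothetical, not from the paper, and simply shows what "fixed aspect ratio outputs" requires geometrically:

```python
def fit_aspect(x, y, w, h, ratio, img_w, img_h):
    """Hypothetical helper: resize an (x, y, w, h) crop box to an exact
    target aspect ratio (width / height), keeping its center fixed and
    clamping the result to the image bounds."""
    cx, cy = x + w / 2, y + h / 2
    if w / h > ratio:            # box too wide for the target: grow height
        h = w / ratio
    else:                        # box too tall: grow width
        w = h * ratio
    w, h = min(w, img_w), min(h, img_h)
    if w / h > ratio:            # re-enforce ratio after clamping
        w = h * ratio
    else:
        h = w / ratio
    x = min(max(cx - w / 2, 0), img_w - w)   # shift box back inside image
    y = min(max(cy - h / 2, 0), img_h - h)
    return x, y, w, h

# A wide 100x50 box snapped to a square crop inside a 200x200 image.
bx, by, bw, bh = fit_aspect(10, 10, 100, 50, 1.0, 200, 200)
```

A direct-generation model that predicts one box per target ratio, as described above, avoids needing such a correction at all; the helper only illustrates the constraint being learned.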
“The act of reading is a bodily experience”: an Interview with Mia Gallagher
Hedwig Schwall
With Shift Mia Gallagher put together a collection of short stories which have been in the making for about thirty years. As many stories had been published separately in journals, they were given an overhaul to fit the new context: narrative perspectives were rewritten, layers added, so that the fifteen stories formed a new composition, variations on a theme. The collection forms a fugue building towards increasing weirdness, using faery tale techniques and magic realism to illustrate different shades of the uncanny. Emotions, originating in the protagonists’ unconscious, in a family’s past, whirl around yet are tightly structured. Gallagher’s prose is physical but focusing on the in-between: people’s perceptions shift, gender is fluid, objects metamorphose constantly. Her aesthetics are inspired by Baroque theatre, David Lynch and Francis Bacon.
History of Great Britain, Language and Literature