This contribution proposes an analysis of the tale of the noble lie that Socrates, in dialogue with Glaucon, tells in Book III of the Republic at the close of the reflection on falsehood in stories, on expressive form, and on the norms for the literary production admitted into the ideal city. Having established the possibility of producing a false tale that tends toward the truth concerning events of the past, Socrates offers the γενναῖον ψεῦδος, signalling its relationship to the poetry of the tradition in terms of both continuity and distance. In the noble lie Plato in fact reworks both the myth of Cadmus about the men born from the earth and the myth of the five races that Hesiod develops in the Erga. By considering the differences and innovations, and by briefly surveying the ancient exegesis, it will be possible to regard the noble lie as a rewriting of the literary memory of the Greek tradition, and thus as a ψεῦδος that is useful and admissible because directed toward the ethical end of παίδεια.
Content-preserving style transfer, i.e., generating stylized outputs from a content input and a style reference, remains a significant challenge for Diffusion Transformers (DiTs) due to the inherent entanglement of content and style features in their internal representations. In this technical report, we present TeleStyle, a lightweight yet effective model for both image and video stylization. Built upon Qwen-Image-Edit, TeleStyle leverages the base model's robust capabilities in content preservation and style customization. To facilitate effective training, we curated a high-quality dataset of distinct styles and further synthesized triplets spanning thousands of diverse, in-the-wild style categories. We introduce a Curriculum Continual Learning framework to train TeleStyle on this hybrid dataset of clean (curated) and noisy (synthetic) triplets. This approach enables the model to generalize to unseen styles without compromising precise content fidelity. Additionally, we introduce a video-to-video stylization module to enhance temporal consistency and visual quality. TeleStyle achieves state-of-the-art performance across three core evaluation metrics: style similarity, content consistency, and aesthetic quality. Code and pre-trained models are available at https://github.com/Tele-AI/TeleStyle.
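The curriculum schedule itself is not specified in the abstract; one simple reading is a ramp that starts training mostly on the clean curated triplets and progressively mixes in the noisy synthetic ones. A minimal sketch under that assumption (the function names and the linear ramp are illustrative, not the authors' implementation):

```python
import random

def curriculum_mix(epoch, total_epochs, noisy_start=0.1, noisy_end=0.7):
    """Fraction of noisy (synthetic) triplets to sample at a given epoch.

    Linear ramp: begin mostly on the clean curated triplets, then
    gradually expose the model to the noisier synthetic styles.
    """
    t = epoch / max(total_epochs - 1, 1)
    return noisy_start + t * (noisy_end - noisy_start)

def sample_triplet(clean, noisy, epoch, total_epochs, rng=random):
    """Draw one training triplet from the hybrid dataset."""
    if rng.random() < curriculum_mix(epoch, total_epochs):
        return rng.choice(noisy)
    return rng.choice(clean)
```

Any monotone schedule (cosine, staged) would fit the same curriculum reading; the point is only that the clean/noisy ratio is a function of training progress.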
Chanda Grover Kamra, Indra Deep Mastan, Debayan Gupta
We propose ObjMST, an object-focused multimodal style transfer framework that provides separate style supervision for salient objects and surrounding elements while addressing alignment issues in multimodal representation learning. Existing image-text multimodal style transfer methods face the following challenges: (1) generating non-aligned and inconsistent multimodal style representations; and (2) content mismatch, where identical style patterns are applied to both salient objects and their surrounding elements. Our approach mitigates these issues by: (1) introducing a Style-Specific Masked Directional CLIP Loss, which ensures consistent and aligned style representations for both salient objects and their surroundings; and (2) incorporating a salient-to-key mapping mechanism for stylizing salient objects, followed by image harmonization to seamlessly blend the stylized objects with their environment. We validate the effectiveness of ObjMST through experiments, using both quantitative metrics and qualitative visual evaluations of the stylized outputs. Our code is available at: https://github.com/chandagrover/ObjMST.
We present Style Matching Score (SMS), a novel optimization method for image stylization with diffusion models. Balancing effective style transfer with content preservation is a long-standing challenge. Unlike existing efforts, our method reframes image stylization as a style distribution matching problem. The target style distribution is estimated from off-the-shelf style-dependent LoRAs via carefully designed score functions. To preserve content information adaptively, we propose Progressive Spectrum Regularization, which operates in the frequency domain to guide stylization progressively from low-frequency layouts to high-frequency details. In addition, we devise a Semantic-Aware Gradient Refinement technique that leverages relevance maps derived from diffusion semantic priors to selectively stylize semantically important regions. The proposed optimization formulation extends stylization from pixel space to parameter space, and is readily applicable to lightweight feedforward generators for efficient one-step stylization. SMS effectively balances style alignment and content preservation, outperforming state-of-the-art approaches, as verified by extensive experiments.
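Progressive Spectrum Regularization, as described, gates the content constraint from low to high frequencies. A minimal NumPy sketch of that idea, assuming a radial low-pass mask whose radius grows with stylization progress (the exact schedule and loss weighting in SMS may differ):

```python
import numpy as np

def lowpass_mask(h, w, progress):
    """Radial low-pass mask in the centered FFT domain.

    progress in [0, 1]: small values keep only low frequencies
    (coarse layout); progress = 1 keeps the full spectrum (details).
    """
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot(yy - cy, xx - cx)
    return (r <= progress * r.max()).astype(np.float64)

def spectrum_content_loss(img, content, progress):
    """L2 distance between image and content spectra, restricted to
    the frequencies admitted at the current stylization stage."""
    mask = lowpass_mask(*img.shape, progress)
    f_img = np.fft.fftshift(np.fft.fft2(img))
    f_ref = np.fft.fftshift(np.fft.fft2(content))
    return float(np.mean(np.abs(mask * (f_img - f_ref)) ** 2))
```

Early in optimization only the low-frequency layout is constrained, so style can reshape textures freely; as progress grows, high-frequency content details are held as well.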
The aim of this work is to review two important monographs on Athenian democracy that offer an opportunity to reflect on the crisis of democracy at the end of the fifth century BC and on the character of the subsequent institutional reforms that allowed the recovery and survival of a much-debated 'moderate democracy' throughout the fourth century BC.
This text explores the impact of social media on interactions and social relationships, highlighting their role as echo chambers and sources of polarization. It emphasizes the unique characteristics of social networks, such as their affordances – meaning the incentives they offer for action – and their influence on user behavior. By leveraging post-digital theory, which removes the distinction between online and offline worlds, the article proposes to examine the diversity of online discourses. This approach aims to better understand the dynamics of discourse fragmentation and their implications for the reconfiguration of social relationships.
Stylized Text-to-Image Generation (STIG) aims to generate images from text prompts and style reference images. In this paper, we present ArtWeaver, a novel framework that leverages pretrained Stable Diffusion (SD) to address challenges such as misinterpreted styles and inconsistent semantics. Our approach introduces two innovative modules: the mixed style descriptor and the dynamic attention adapter. The mixed style descriptor enhances SD by combining content-aware and frequency-disentangled embeddings from CLIP with additional sources that capture global statistics and textual information, thus providing a richer blend of style-related and semantic-related knowledge. To achieve a better balance between adapter capacity and semantic control, the dynamic attention adapter is integrated into the diffusion UNet, dynamically calculating adaptation weights based on the style descriptors. Additionally, we introduce two objective functions to optimize the model alongside the denoising loss, further enhancing semantic and style consistency. Extensive experiments demonstrate the superiority of ArtWeaver over existing methods, producing images with diverse target styles while maintaining the semantic integrity of the text prompts.
Visual text rendering is widespread in real-world applications and requires careful font selection and typographic choices. Recent progress in diffusion transformer (DiT)-based text-to-image (T2I) models shows promise in automating these processes. However, these methods still encounter challenges such as inconsistent fonts, style variation, and limited fine-grained control, particularly at the word level. This paper proposes a two-stage DiT-based pipeline that addresses these problems by enhancing controllability over typography and style in text rendering. We introduce typography control fine-tuning (TC-FT), a parameter-efficient fine-tuning method (tuning only $5\%$ of key parameters) with enclosing typography control tokens (ETC-tokens), which enables precise word-level application of typographic features. To further address style inconsistency in text rendering, we propose a text-agnostic style control adapter (SCA) that prevents content leakage while enhancing style consistency. To implement TC-FT and SCA effectively, we incorporate HTML rendering into the data synthesis pipeline and construct the first word-level controllable dataset. Through comprehensive experiments, we demonstrate the effectiveness of our approach in achieving superior word-level typographic control, font consistency, and style consistency in text rendering tasks. The datasets and models will be available for academic use.
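The abstract does not spell out the ETC-token syntax; purely for illustration, enclosing tokens can be pictured as tags wrapped around the words whose typography should be controlled. A hypothetical sketch (the tag format here is invented, not the paper's tokenization):

```python
def wrap_with_etc_tokens(prompt, word_styles):
    """Enclose selected words with hypothetical typography control tokens.

    word_styles maps a word in the prompt to a style tag, e.g.
    {"SALE": "bold-serif"}; the <tag>word</tag> syntax is illustrative.
    """
    out = []
    for word in prompt.split():
        style = word_styles.get(word)
        if style is not None:
            out.append(f"<{style}>{word}</{style}>")
        else:
            out.append(word)
    return " ".join(out)
```

The enclosing structure is what allows word-level scope: the model sees exactly which span a typographic attribute applies to, rather than a prompt-global style instruction.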
4D style transfer aims at transferring arbitrary visual style to the synthesized novel views of a dynamic 4D scene with varying viewpoints and times. Existing efforts on 3D style transfer can effectively combine the visual features of style images and neural radiance fields (NeRF) but fail to handle 4D dynamic scenes because they rely on a static-scene assumption. We therefore address the novel and challenging problem of 4D style transfer for the first time, which further requires consistency of the stylized results on dynamic objects. In this paper, we introduce StyleDyRF, a method that represents the 4D feature space by deforming a canonical feature volume and learns a linear style transformation matrix on the feature volume in a data-driven fashion. To obtain the canonical feature volume, the rays at each time step are deformed with the geometric prior of a pre-trained dynamic NeRF to render the feature map under the supervision of pre-trained visual encoders. With the content and style cues in the canonical feature volume and the style image, we can learn the style transformation matrix from their covariance matrices with lightweight neural networks. The learned style transformation matrix reflects a direct matching of feature covariance from the content volume to the given style pattern, in analogy with the optimization of the Gram matrix in traditional 2D neural style transfer. The experimental results show that our method not only renders 4D photorealistic style transfer results in a zero-shot manner but also outperforms existing methods in terms of visual quality and consistency.
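The covariance matching described above has a closed-form linear analogue: whiten the content features, then color them with the style covariance, the direct counterpart of Gram-matrix matching. A NumPy sketch of that classical whitening-coloring step (StyleDyRF learns the transform with lightweight networks rather than computing it in closed form):

```python
import numpy as np

def _sqrt_inv_sqrt(cov, eps=1e-8):
    """Matrix square root and inverse square root via eigendecomposition."""
    vals, vecs = np.linalg.eigh(cov)
    vals = np.clip(vals, eps, None)
    sqrt = vecs @ np.diag(np.sqrt(vals)) @ vecs.T
    inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
    return sqrt, inv_sqrt

def covariance_style_transform(content_feat, style_feat):
    """Whiten content features, then color them with the style covariance.

    content_feat, style_feat: (C, N) arrays of C-dim features at N
    locations. Returns features whose covariance matches the style's.
    """
    c_mu = content_feat.mean(axis=1, keepdims=True)
    s_mu = style_feat.mean(axis=1, keepdims=True)
    c_cov = np.cov(content_feat)
    s_cov = np.cov(style_feat)
    _, c_inv_sqrt = _sqrt_inv_sqrt(c_cov)
    s_sqrt, _ = _sqrt_inv_sqrt(s_cov)
    return s_sqrt @ c_inv_sqrt @ (content_feat - c_mu) + s_mu
```

After the transform, the second-order statistics (covariance) of the content features equal those of the style features, which is exactly the matching the learned transformation matrix approximates.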
Humour, a fundamental aspect of human communication, manifests itself in various styles that significantly impact social interactions and mental health. Recognising different humour styles poses challenges due to the lack of established datasets and machine learning (ML) models. To address this gap, we present a new text dataset for humour style recognition, comprising 1463 instances across four styles (self-enhancing, self-deprecating, affiliative, and aggressive) and non-humorous text, with lengths ranging from 4 to 229 words. Our research employs various computational methods, including classic machine learning classifiers, text embedding models, and DistilBERT, to establish baseline performance. Additionally, we propose a two-model approach to enhance humour style recognition, particularly in distinguishing between affiliative and aggressive styles. Our method yields an 11.61% improvement in F1-score for affiliative humour classification, with consistent gains across the 14 models tested. Our findings contribute to the computational analysis of humour in text, offering new tools for studying humour in literature, social media, and other textual sources.
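The two-model approach can be read as a routing scheme: a coarse five-way classifier plus a specialist that re-decides only the frequently confused affiliative/aggressive pair. A minimal sketch of that routing, with the classifiers passed in as callables (function names and routing details are illustrative, not the paper's exact pipeline):

```python
def two_stage_predict(text, stage1, stage2):
    """Route a text through a two-model humour-style pipeline.

    stage1: classifier over all five labels ('self-enhancing',
    'self-deprecating', 'affiliative', 'aggressive', 'non-humorous').
    stage2: specialist binary classifier that re-decides only the
    frequently confused affiliative/aggressive pair.
    """
    label = stage1(text)
    if label in ("affiliative", "aggressive"):
        return stage2(text)  # specialist overrides the coarse decision
    return label
```

The design choice is that the second model sees only the hard two-class boundary, so its capacity is spent entirely on the distinction the coarse model gets wrong most often.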
José Watanabe is one of the most notable poets of Peruvian and, more broadly, Spanish-American literature. His work is marked by a desacralizing tone, often tied to humour through the use of irony and satire, though at other times he opts for a solemn or serious tone, as the present study will show. Demythification thus unfolds as a process of both distancing and updating that makes it possible to reinterpret the meaning of a text from the present. Accordingly, the analysis of the poem "El otro Asterión," from the book Banderas detrás de la niebla (2006), reveals a break with Western mythological discourse that is linked to the theme of modernity, which in turn runs through the whole of Watanabe's work.
Andrea Testa, Hendrik T. Spanke, Etienne Jambon-Puillet
et al.
Solutions of macromolecules can undergo liquid-liquid phase separation to form droplets with ultra-low surface tension. Droplets with such low surface tension wet and spread over common surfaces such as test tubes and microscope slides, complicating \textit{in vitro} experiments. Development of a universal super-repellent surface for macromolecular droplets has remained elusive because their ultra-low surface tension requires low surface energies. Furthermore, nonwetting of droplets containing proteins poses additional challenges because the surface must remain inert to the wide range of chemistries presented by the various amino-acid side-chains at the droplet surface. Here, we present a method to coat microscope slides with a thin transparent hydrogel that exhibits complete dewetting (contact angles $\theta \approx 180^\circ$) and minimal pinning of phase-separated droplets in aqueous solution. The hydrogel is based on a swollen matrix of chemically crosslinked polyethylene glycol diacrylate of molecular weight 12 kDa (PEGDA), and can be prepared with basic chemistry lab equipment. The PEGDA hydrogel is a powerful tool for \textit{in vitro} studies of weak interactions, dynamics, and internal organization of phase-separated droplets in aqueous solutions.
In this study, we address the importance of modeling behavior style in virtual agents for personalized human-agent interaction. We propose a machine learning approach to synthesize gestures, driven by prosodic features and text, in the style of different speakers, even those unseen during training. Our model incorporates zero-shot multimodal style transfer using multimodal data from the PATS database, which contains videos of diverse speakers. We recognize style as a pervasive element during speech, influencing the expressivity of communicative behaviors, while content is conveyed through multimodal signals and text. By disentangling content and style, we directly infer the style embedding, even for speakers not included in the training phase, without the need for additional training or fine-tuning. Objective and subjective evaluations are conducted to validate our approach and compare it against two baseline methods.
Image Style Transfer (IST) is an interdisciplinary topic of computer vision and art that continuously attracts researchers' interest. Different from traditional Image-guided Image Style Transfer (IIST) methods that require a style reference image as input to define the desired style, recent works start to tackle the problem in a text-guided manner, i.e., Text-guided Image Style Transfer (TIST). Compared to IIST, such approaches provide more flexibility with text-specified styles, which are useful in scenarios where the style is hard to define with reference images. Unfortunately, many TIST approaches produce undesirable artifacts in the transferred images. To address this issue, we present a novel method to achieve much improved style transfer based on text guidance. Meanwhile, to offer more flexibility than IIST and TIST, our method allows style inputs from multiple sources and modalities, enabling MultiModality-guided Image Style Transfer (MMIST). Specifically, we realize MMIST with a novel cross-modal GAN inversion method, which generates style representations consistent with specified styles. Such style representations facilitate style transfer and in principle generalize any IIST method to MMIST. Large-scale experiments and user studies demonstrate that our method achieves state-of-the-art performance on the TIST task. Furthermore, comprehensive qualitative results confirm the effectiveness of our method on the MMIST task and on cross-modal style interpolation.
This article argues that Nietzsche advanced a rhetorical theory that enacted an attitude of creative destruction by subverting the norms of traditional, yet effective, Greco-Roman rhetoric with a dizzying, distasteful, untimely, unteachable, and impractical mad eloquence. The argument draws particular attention to two aphorisms from The Gay Science. Nietzsche partially described what I call mad eloquence in the obscure aphorism “Two Speakers” (Zwei Redner) and exemplified it with the performance of the madman in the infamous aphorism of the same name (Der tolle Mensch). The first speaker affirmed and then questioned the Greco-Roman rhetorical tradition, but the second blew it up. After establishing how and why Nietzsche confirmed the utility of traditional rhetorical theory, this article demonstrates how he redefined rhetoric as the art of discovering the available means to cultivate confusion and alienate audiences with reconceived parrhēsia, obnoxious delivery and ethos, impropriety, uncommonplaces, logical irrationality, unteachable inimitability, a very delayed persuasive effect, and an insufferable style.
The speech by the Jesuit Girolamo Lagomarsini (1698-1773), In adventu Francisci III Lotharingiae, Barri et Magni Etruriae Ducis ad florentinos (Florence, 1739), is an excellent example of the epideictic oratory of its time. It is an address centred specifically on the figure of the man who later became Francis I, Holy Roman Emperor. This study offers a critical edition of the speech, a Spanish translation, and a general contextualization within its historical and rhetorical framework, taking into account the author's oratorical output.
Few-shot font generation (FFG), which aims to generate a new font from a few examples, is gaining increasing attention due to the significant reduction in labor cost. A typical FFG pipeline considers characters in a standard font library as content glyphs and transfers them to a new target font by extracting style information from the reference glyphs. Most existing solutions explicitly disentangle content and style of reference glyphs globally or component-wise. However, the style of glyphs mainly lies in the local details, i.e., the styles of radicals, components, and strokes together depict the style of a glyph. Therefore, even a single character can contain different styles distributed over spatial locations. In this paper, we propose a new font generation approach by learning 1) the fine-grained local styles from references, and 2) the spatial correspondence between the content and reference glyphs, so that each spatial location in the content glyph can be assigned the right fine-grained style. To this end, we adopt cross-attention with the representations of the content glyphs as the queries and the representations of the reference glyphs as the keys and values. Instead of explicit global or component-wise disentanglement, the cross-attention mechanism can attend to the right local styles in the reference glyphs and aggregate the reference styles into a fine-grained style representation for the given content glyphs. The experiments show that the proposed method outperforms the state-of-the-art methods in FFG. In particular, user studies also demonstrate that the style consistency of our approach significantly outperforms that of previous methods.
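The cross-attention step described above, content-glyph queries against reference-glyph keys and values, can be sketched in a few lines of NumPy (single head, no learned projections, purely illustrative):

```python
import numpy as np

def cross_attention(content_q, ref_k, ref_v):
    """Single-head cross-attention: content-glyph locations query the
    reference glyphs for matching local styles.

    content_q: (Nq, d) queries from content-glyph locations.
    ref_k, ref_v: (Nk, d) keys/values from reference-glyph locations.
    Returns (Nq, d): a per-location aggregate of reference styles.
    """
    d = content_q.shape[-1]
    scores = content_q @ ref_k.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ ref_v
```

Each row of the attention weights is a distribution over reference locations, so every content location pulls in its own mixture of local reference styles rather than one global style vector.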