Results for "Paints, pigments, varnishes, etc."

Showing 15 of ~92,648 results · from CrossRef, DOAJ, arXiv, Semantic Scholar

arXiv Open Access 2026
IMPASTO: Integrating Model-Based Planning with Learned Dynamics Models for Robotic Oil Painting Reproduction

Yingke Wang, Hao Li, Yifeng Zhu et al.

Robotic reproduction of oil paintings using soft brushes and pigments requires force-sensitive control of deformable tools, prediction of brushstroke effects, and multi-step stroke planning, often without human step-by-step demonstrations or faithful simulators. Given only a sequence of target oil painting images, can a robot infer and execute the stroke trajectories, forces, and colors needed to reproduce it? We present IMPASTO, a robotic oil-painting system that integrates learned pixel dynamics models with model-based planning. The dynamics models predict canvas updates from image observations and parameterized stroke actions; a receding-horizon model predictive control optimizer then plans trajectories and forces, while a force-sensitive controller executes strokes on a 7-DoF robot arm. IMPASTO integrates low-level force control, learned dynamics models, and high-level closed-loop planning, learns solely from robot self-play, and approximates human artists' single-stroke datasets and multi-stroke artworks, outperforming baselines in reproduction accuracy. Project website: https://impasto-robopainting.github.io/
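
A minimal sketch of the kind of receding-horizon planning over a learned dynamics model described above, using simple random-shooting MPC; the toy predict_canvas stand-in, the action parameterization, and all names are illustrative assumptions, not the IMPASTO implementation.

```python
import numpy as np

def predict_canvas(canvas, action):
    """Toy stand-in for the learned dynamics model: the real system uses a neural
    network to predict the canvas update from the image and the stroke parameters."""
    x, y, r, shade = action[0], action[1], action[2], action[3]
    h, w = canvas.shape[:2]
    cx = int((x + 1) / 2 * (w - 1))
    cy = int((y + 1) / 2 * (h - 1))
    rad = max(1, int(abs(r) * 8))
    out = canvas.copy()
    out[max(0, cy - rad):cy + rad, max(0, cx - rad):cx + rad] = (shade + 1) / 2
    return out

def cost(canvas, target):
    """Pixel-wise reproduction error between a predicted canvas and the target image."""
    return float(np.mean((canvas - target) ** 2))

def plan_stroke(canvas, target, action_dim=8, horizon=3, n_samples=256, rng=None):
    """Random-shooting MPC: sample candidate stroke sequences, roll each one through
    the dynamics model, and return the first action of the lowest-cost sequence."""
    rng = rng or np.random.default_rng()
    best_cost, best_first = np.inf, None
    for _ in range(n_samples):
        seq = rng.uniform(-1.0, 1.0, size=(horizon, action_dim))
        sim = canvas
        for a in seq:
            sim = predict_canvas(sim, a)
        c = cost(sim, target)
        if c < best_cost:
            best_cost, best_first = c, seq[0]
    return best_first   # execute with the force controller, observe the canvas, replan

canvas, target = np.ones((64, 64)), np.zeros((64, 64))
first_action = plan_stroke(canvas, target)
```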

en cs.RO, cs.AI
arXiv Open Access 2025
Every Painting Awakened: A Training-free Framework for Painting-to-Animation Generation

Lingyu Liu, Yaxiong Wang, Li Zhu et al.

We introduce a training-free framework specifically designed to bring real-world static paintings to life through image-to-video (I2V) synthesis, addressing the persistent challenge of aligning the generated motion with textual guidance while preserving fidelity to the original artworks. Existing I2V methods, primarily trained on natural video datasets, often struggle to generate dynamic outputs from static paintings. It remains challenging to generate motion while maintaining visual consistency with real-world paintings. This results in two distinct failure modes: either static outputs due to limited text-based motion interpretation or distorted dynamics caused by inadequate alignment with real-world artistic styles. We leverage the advanced text-image alignment capabilities of pre-trained image models to guide the animation process. Our approach introduces synthetic proxy images through two key innovations: (1) Dual-path score distillation: We employ a dual-path architecture to distill motion priors from both real and synthetic data, preserving static details from the original painting while learning dynamic characteristics from synthetic frames. (2) Hybrid latent fusion: We integrate hybrid features extracted from real paintings and synthetic proxy images via spherical linear interpolation in the latent space, ensuring smooth transitions and enhancing temporal consistency. Experimental evaluations confirm that our approach significantly improves semantic alignment with text prompts while faithfully preserving the unique characteristics and integrity of the original paintings. Crucially, by achieving enhanced dynamic effects without requiring any model training or learnable parameters, our framework enables plug-and-play integration with existing I2V methods, making it an ideal solution for animating real-world paintings. More animated examples can be found on our project website.
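
The "hybrid latent fusion" step names spherical linear interpolation in latent space; below is the standard slerp formulation for two latent vectors, not the authors' code, with z_real and z_synthetic as hypothetical inputs.

```python
import torch

def slerp(z0: torch.Tensor, z1: torch.Tensor, t: float, eps: float = 1e-7) -> torch.Tensor:
    """Spherical linear interpolation between two latent vectors z0 and z1."""
    z0n = z0 / (z0.norm() + eps)
    z1n = z1 / (z1.norm() + eps)
    omega = torch.acos((z0n * z1n).sum().clamp(-1 + eps, 1 - eps))   # angle between latents
    so = torch.sin(omega)
    if so.abs() < eps:                      # nearly parallel: fall back to linear blend
        return (1 - t) * z0 + t * z1
    return (torch.sin((1 - t) * omega) / so) * z0 + (torch.sin(t * omega) / so) * z1

# e.g. fuse the latent of the real painting with that of a synthetic proxy frame
z_real, z_synthetic = torch.randn(512), torch.randn(512)
z_fused = slerp(z_real, z_synthetic, t=0.5)
```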

en cs.CV, cs.MM
arXiv Open Access 2024
AI vs. Human Paintings? Deciphering Public Interactions and Perceptions towards AI-Generated Paintings on TikTok

Jiajun Wang, Xiangzhe Yuan, Siying Hu et al.

With the development of generative AI technology, a vast array of AI-generated paintings (AIGP) have gone viral on social media like TikTok. However, some negative news about AIGP has also emerged. For example, in 2022, numerous painters worldwide organized a large-scale anti-AI movement because of infringement in generative AI model training. This event reflects a broader social issue: with the development and application of generative AI, public feedback and feelings towards it may be overlooked. Therefore, to investigate public interactions and perceptions towards AIGP on social media, we analyzed user engagement levels and comment sentiment scores of AIGP using human painting videos as a baseline. In analyzing user engagement, we also considered the possible moderating effect of the aesthetic quality of the paintings. Utilizing topic modeling, we identified seven reasons, including hyperrealistic quality, ambivalent reactions, and perceived theft of art, among others, leading to negative public perceptions of AIGP. Our work may provide instructive suggestions for future generative AI technology development and help avoid potential crises in human-AI collaboration.
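
A minimal sketch of the kind of comment topic modeling described above, using scikit-learn's LDA; this is not the authors' pipeline, and the example comments are invented placeholders.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

comments = [
    "this looks hyperrealistic, almost like a photo",
    "feels like theft of real artists' work",
    "amazing and unsettling at the same time",
]
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(comments)            # bag-of-words counts per comment

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top)}")         # inspect topics to label reasons
```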

arXiv Open Access 2024
Mixing Paint: An analysis of color value transformations in multiple coordinate spaces using multivariate linear regression

Alexander Messick

I explore the mathematical transformation that occurs in color coordinate space when physically mixing paints of two different colors. I tested 120 pairs of 16 paint colors and used a linear regression to find the most accurate combination of input parameters, both in RGB space and several other color spaces. I found that the fit with the strongest coefficient of determination was a geometrically symmetrized linear combination of the colors in CIEXYZ space, while this same mapping in RGB space returns a better mean squared error.
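
A sketch of fitting such a regression with scikit-learn; the swap-invariant features below (per-channel arithmetic and geometric means) are one plausible reading of "geometrically symmetrized", not necessarily the paper's exact construction, and the data arrays are placeholders.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
c1 = rng.uniform(0, 1, size=(120, 3))      # first paint color in each pair (e.g. CIEXYZ)
c2 = rng.uniform(0, 1, size=(120, 3))      # second paint color in each pair
mixed = 0.5 * (c1 + c2)                    # placeholder for the measured mixture colors

# Features invariant to swapping the two input colors
features = np.hstack([c1 + c2, np.sqrt(c1 * c2)])

model = LinearRegression().fit(features, mixed)
print("R^2:", model.score(features, mixed))                        # coefficient of determination
print("MSE:", np.mean((model.predict(features) - mixed) ** 2))     # mean squared error
```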

en physics.optics, cs.CV
DOAJ Open Access 2023
A Review of the Application of Various Adsorbents for the Removal of the Dye Rhodamine B

طاهره نوایی دیوا

Today, environmental pollution has increased due to the growing production of dyes. Recent studies have shown that many usable adsorbents, including banana peel, potato, algae, and others, are widely available. The Food and Drug Administration has banned the use of Rhodamine B because of its toxicity and harmful effects. This study therefore presents a wide range of unconventional but low-cost alternative adsorbents for removing the dye Rhodamine B from wastewater. Observations show that studies of the adsorption mechanism of this dye focus on kinetic, isotherm, and thermodynamic models, and that adsorption also depends on the chemical nature of the materials and on various physical and chemical conditions such as solution pH, initial dye concentration, adsorbent dose, and temperature. Kinetic data for the adsorption of Rhodamine B usually follow pseudo-first-order and pseudo-second-order kinetic models. Several studies have shown that the Langmuir and Freundlich adsorption isotherm models are often used to evaluate the adsorption capacity of adsorbents. In addition, thermodynamic analyses indicate that the adsorption of Rhodamine B is endothermic and unrestricted in nature. Both photocatalytic degradation and adsorption therefore show good potential for removing the dye Rhodamine B from industrial wastewater. Further research is underway to evaluate the feasibility of using other modified waste biomass for industrial pollution control.
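
For reference, the standard forms of the kinetic and isotherm models named in the review, written as Python functions and fitted with scipy; the data points below are placeholders, not values from the cited studies.

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_first_order(t, qe, k1):
    return qe * (1.0 - np.exp(-k1 * t))

def pseudo_second_order(t, qe, k2):
    return (k2 * qe**2 * t) / (1.0 + k2 * qe * t)

def langmuir(Ce, qmax, KL):
    return (qmax * KL * Ce) / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    return KF * Ce**(1.0 / n)

t = np.array([5, 10, 20, 40, 80, 120], dtype=float)        # contact time, min (placeholder)
qt = np.array([12, 20, 28, 33, 35, 36], dtype=float)       # adsorbed amount, mg/g (placeholder)

params, _ = curve_fit(pseudo_second_order, t, qt, p0=[40, 0.01])
print("pseudo-second-order fit: qe = %.1f mg/g, k2 = %.4f" % tuple(params))
```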

Building construction, Textile bleaching, dyeing, printing, etc.
DOAJ Open Access 2023
Adsorption of Chromium(III) Compounds from Colored Leather-Industry Wastewater by Silica Nanoparticles

سیده زهرا حسینی امیرهنده, امین سالم, شیوا سالم

In the present study, silica nanoparticles prepared from rice husk by conventional and ultrasound-assisted precipitation were used to treat wastewater from a leather production unit. The effects of sodium hydroxide concentration, precipitation pH, and aging time on the adsorption efficiency of the silica were studied using a central composite design. The experimental results showed that the rice husk ash used as the precursor for silica preparation should be dissolved in a dilute 0.5 M sodium hydroxide solution, and the precipitation pH should be controlled at about 9.0, in order to obtain an adsorbent that performs well in removing chromium(III) compounds. Although the specific surface area of the silica obtained by conventional precipitation is larger than that of the adsorbent obtained by precipitation under ultrasonic waves, the pore size distribution plays an essential role in the adsorption of chromium compounds. Precipitation under ultrasound produces large pores, distributed in the range of 4-38 nm, which facilitate the penetration of chromium compounds even in acidic media.
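
A minimal sketch of constructing a central composite design in coded units for the three factors studied (NaOH concentration, precipitation pH, aging time); the axial distance and the coded-to-real mapping are illustrative assumptions (only the 0.5 M NaOH and pH 9.0 centre values come from the abstract).

```python
import itertools
import numpy as np

alpha = (2 ** 3) ** 0.25   # rotatable-design axial distance for 3 factors (~1.682)
factorial = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)     # 8 corner runs
axial = np.vstack([v * alpha * np.eye(3)[i] for i in range(3) for v in (-1, 1)])  # 6 star runs
center = np.zeros((4, 3))                        # 4 replicated centre runs
design = np.vstack([factorial, axial, center])   # 18 coded runs in total

# Illustrative mapping from coded levels to real settings (aging-time values are hypothetical)
centers = np.array([0.5, 9.0, 12.0])   # NaOH (M), precipitation pH, aging time (h)
spans = np.array([0.25, 1.0, 4.0])     # change in each factor per coded unit
runs = centers + design * spans
print(runs.round(2))
```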

Building construction, Textile bleaching, dyeing, printing, etc.
arXiv Open Access 2023
Collaborative Neural Painting

Nicola Dall'Asen, Willi Menapace, Elia Peruzzo et al.

The process of painting fosters creativity and rational planning. However, existing generative AI mostly focuses on producing visually pleasant artworks, without emphasizing the painting process. We introduce a novel task, Collaborative Neural Painting (CNP), to facilitate collaborative art painting generation between humans and machines. Given any number of user-input brushstrokes as the context or just the desired object class, CNP should produce a sequence of strokes supporting the completion of a coherent painting. Importantly, the process can be gradual and iterative, allowing users to make modifications at any phase until completion. Moreover, we propose to solve this task using a painting representation based on a sequence of parametrized strokes, which makes both editing and composition operations easy. These parametrized strokes are processed by a Transformer-based architecture with a novel attention mechanism to model the relationship between the input strokes and the strokes to complete. We also propose a new masking scheme to reflect the interactive nature of CNP and adopt diffusion models as the basic learning process for their effectiveness and diversity in the generative field. Finally, to develop and validate methods on the novel task, we introduce a new dataset of painted objects and an evaluation protocol to benchmark CNP both quantitatively and qualitatively. We demonstrate the effectiveness of our approach and the potential of the CNP task as a promising avenue for future research.
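
A minimal sketch of a parameterized stroke sequence and a random masking scheme of the kind a stroke-completion model could be trained with; the 8-value stroke parameterization and the masking ratio are assumptions, not the paper's specification.

```python
import torch

def make_strokes(n=20, stroke_dim=8):
    # each stroke: e.g. (x, y, width, height, angle, r, g, b), all in [0, 1]
    return torch.rand(n, stroke_dim)

def mask_strokes(strokes, keep_ratio=0.3):
    """Keep a random subset of strokes as user-provided context; the rest become the
    targets the model must complete (replaced here by a constant mask token)."""
    n = strokes.shape[0]
    keep = torch.rand(n) < keep_ratio
    mask_token = torch.zeros(strokes.shape[1])
    inputs = torch.where(keep.unsqueeze(1), strokes, mask_token)
    return inputs, strokes, keep      # (context + masks, ground truth, context mask)

inputs, targets, keep = mask_strokes(make_strokes())
print(inputs.shape, int(keep.sum()), "context strokes kept")
```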

en cs.CV
arXiv Open Access 2023
Segmentation-Based Parametric Painting

Manuel Ladron de Guevara, Matthew Fisher, Aaron Hertzmann

We introduce a novel image-to-painting method that facilitates the creation of large-scale, high-fidelity paintings with human-like quality and stylistic variation. To process large images and gain control over the painting process, we introduce a segmentation-based painting process and a dynamic attention map approach inspired by human painting strategies, allowing optimization of brush strokes to proceed in batches over different image regions, thereby capturing both large-scale structure and fine details, while also allowing stylistic control over detail. Our optimized batch processing and patch-based loss framework enable efficient handling of large canvases, ensuring our painted outputs are both aesthetically compelling and functionally superior compared to previous methods, as confirmed by rigorous evaluations. Code available at: https://github.com/manuelladron/semantic_based_painting.git
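
A minimal sketch of a patch-based reconstruction loss computed over batched image regions, in the spirit of the framework described above; the patch size and loss choice are assumptions, not the authors' exact settings.

```python
import torch
import torch.nn.functional as F

def patch_loss(canvas: torch.Tensor, target: torch.Tensor, patch: int = 64) -> torch.Tensor:
    """canvas, target: (1, 3, H, W) tensors; compares corresponding non-overlapping patches."""
    c = F.unfold(canvas, kernel_size=patch, stride=patch)   # (1, 3*patch*patch, n_patches)
    t = F.unfold(target, kernel_size=patch, stride=patch)
    return F.mse_loss(c, t)

canvas = torch.rand(1, 3, 256, 256, requires_grad=True)     # stand-in for a rendered canvas
target = torch.rand(1, 3, 256, 256)
loss = patch_loss(canvas, target)
loss.backward()    # gradients would drive the differentiable stroke parameters
```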

en cs.CV, cs.LG
arXiv Open Access 2023
Paint it Black: Generating paintings from text descriptions

Mahnoor Shahid, Mark Koch, Niklas Schneider

Two distinct tasks, generating photorealistic pictures from given text prompts and transferring the style of a painting to a real image so that it appears to have been made by an artist, have each been addressed many times, and several approaches have been proposed to accomplish them. However, the intersection of these two, i.e., generating paintings from a given caption, is a relatively unexplored area with little data available. In this paper, we explore two distinct strategies and integrate them. The first strategy is to generate photorealistic images and then apply style transfer; the second is to train an image generation model on real images with captions and then fine-tune it on captioned paintings. These two models are evaluated using different metrics, and a user study is conducted to gather human feedback on the produced results.

en cs.CV, cs.AI
arXiv Open Access 2023
Interactive Neural Painting

Elia Peruzzo, Willi Menapace, Vidit Goel et al.

In the last few years, Neural Painting (NP) techniques have become capable of producing extremely realistic artworks. This paper advances the state of the art in this emerging research domain by proposing the first approach for Interactive NP. Considering a setting where a user looks at a scene and tries to reproduce it on a painting, our objective is to develop a computational framework to assist the user's creativity by suggesting the next strokes to paint, which can possibly be used to complete the artwork. To accomplish such a task, we propose I-Paint, a novel method based on a conditional transformer Variational AutoEncoder (VAE) architecture with a two-stage decoder. To evaluate the proposed approach and stimulate research in this area, we also introduce two novel datasets. Our experiments show that our approach provides good stroke suggestions and compares favorably to the state of the art. Additional details, code and examples are available at https://helia95.github.io/inp-website.
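
A rough sketch (not I-Paint itself) of how a conditional VAE-style model could suggest the next strokes from the strokes painted so far: encode the context, sample a latent, and decode K candidate continuations; all modules and dimensions are placeholder choices.

```python
import torch
import torch.nn as nn

class StrokeSuggester(nn.Module):
    def __init__(self, stroke_dim=8, hidden=128, latent=32, n_next=4):
        super().__init__()
        self.encoder = nn.GRU(stroke_dim, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.decoder = nn.Sequential(
            nn.Linear(latent + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_next * stroke_dim),
        )
        self.n_next, self.stroke_dim = n_next, stroke_dim

    def forward(self, context):                      # context: (B, T, stroke_dim)
        _, h = self.encoder(context)                 # h: (1, B, hidden)
        h = h.squeeze(0)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        out = self.decoder(torch.cat([z, h], dim=-1))
        return out.view(-1, self.n_next, self.stroke_dim)         # suggested next strokes

suggestions = StrokeSuggester()(torch.rand(2, 10, 8))
print(suggestions.shape)    # (2, 4, 8): four suggested strokes per example
```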

arXiv Open Access 2022
Semantic Segmentation in Art Paintings

Nadav Cohen, Yael Newman, Ariel Shamir

Semantic segmentation is a difficult task even when trained in a supervised manner on photographs. In this paper, we tackle the problem of semantic segmentation of artistic paintings, an even more challenging task because of a much larger diversity in colors, textures, and shapes and because there are no ground truth annotations available for segmentation. We propose an unsupervised method for semantic segmentation of paintings using domain adaptation. Our approach creates a training set of pseudo-paintings in specific artistic styles by using style-transfer on the PASCAL VOC 2012 dataset, and then applies domain confusion between PASCAL VOC 2012 and real paintings. These two steps build on a new dataset we gathered called DRAM (Diverse Realism in Art Movements) composed of figurative art paintings from four movements, which are highly diverse in pattern, color, and geometry. To segment new paintings, we present a composite multi-domain adaptation method that trains on each sub-domain separately and composes their solutions during inference time. Our method provides better segmentation results not only on the specific artistic movements of DRAM, but also on other, unseen ones. We compare our approach to alternative methods and show applications of semantic segmentation in art paintings. The code and models for our approach are publicly available at: https://github.com/Nadavc220/SemanticSegmentationInArtPaintings.
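
A sketch of the gradient-reversal trick commonly used to train features that confuse a domain classifier; the paper's exact domain-confusion mechanism may differ, and the networks here are placeholders.

```python
import torch
from torch import nn
from torch.autograd import Function

class GradReverse(Function):
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None      # reverse (and scale) the gradient

features = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU())
domain_head = nn.Linear(64, 2)                    # pseudo-painting vs. real painting

x = torch.rand(8, 3, 32, 32)
f = features(x)
domain_logits = domain_head(GradReverse.apply(f, 1.0))
loss = nn.functional.cross_entropy(domain_logits, torch.randint(0, 2, (8,)))
loss.backward()    # the feature extractor receives reversed gradients, pushing it
                   # toward domain-invariant (confused) features
```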

en cs.CV, cs.GR
arXiv Open Access 2021
Recovery of underdrawings and ghost-paintings via style transfer by deep convolutional neural networks: A digital tool for art scholars

Anthony Bourached, George Cann, Ryan-Rhys Griffiths et al.

We describe the application of convolutional neural network style transfer to the problem of improved visualization of underdrawings and ghost-paintings in fine art oil paintings. Such underdrawings and hidden paintings are typically revealed by x-ray or infrared techniques which yield images that are grayscale, and thus devoid of color and full style information. Past methods for inferring color in underdrawings have been based on physical x-ray fluorescence spectral imaging of pigments in ghost-paintings and are thus expensive, time consuming, and require equipment not available in most conservation studios. Our algorithmic methods do not need such expensive physical imaging devices. Our proof-of-concept system, applied to works by Pablo Picasso and Leonardo, reveals colors and designs that respect the natural segmentation in the ghost-painting. We believe the computed images provide insight into the artist and associated oeuvre not available by other means. Our results strongly suggest that future applications based on larger corpora of paintings for training will display color schemes and designs that even more closely resemble works of the artist. For these reasons refinements to our methods should find wide use in art conservation, connoisseurship, and art analysis.
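
For context, the Gram-matrix style loss at the core of CNN style transfer, the general technique the paper builds on; this is the standard formulation, not the authors' full system.

```python
import torch

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    """feat: (B, C, H, W) activations from a CNN layer; returns channel correlations."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)     # (B, C, C)

def style_loss(feat_generated, feat_style):
    return torch.nn.functional.mse_loss(gram_matrix(feat_generated),
                                        gram_matrix(feat_style))

# e.g. compare features of the colorized output against features of a reference work
loss = style_loss(torch.rand(1, 64, 32, 32), torch.rand(1, 64, 32, 32))
print(float(loss))
```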

en cs.CV
arXiv Open Access 2021
Paint by Word

Alex Andonian, Sabrina Osmany, Audrey Cui et al.

We investigate the problem of zero-shot semantic image painting. Instead of painting modifications into an image using only concrete colors or a finite set of semantic concepts, we ask how to create semantic paint based on open full-text descriptions: our goal is to be able to point to a location in a synthesized image and apply an arbitrary new concept such as "rustic" or "opulent" or "happy dog." To do this, our method combines a state-of-the-art generative model of realistic images with a state-of-the-art text-image semantic similarity network. We find that, to make large changes, it is important to use non-gradient methods to explore latent space, and it is important to relax the computations of the GAN to target changes to a specific region. We conduct user studies to compare our methods to several baselines.
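
A minimal sketch of the non-gradient latent search described above: perturb the latent, score each candidate image by a text-image similarity restricted to the edited region, and keep the best. The generator and similarity function are crude stand-ins, not the GAN or text-image network used in the paper.

```python
import numpy as np

def generate(z):                              # stand-in generator: latent -> HxWx3 image
    rng = np.random.default_rng(int(abs(z.sum()) * 1e6) % (2**32))
    return rng.uniform(0, 1, size=(64, 64, 3))

def text_image_similarity(image, prompt):     # stand-in for a CLIP-like score
    return float(image.mean())                # placeholder objective

def masked_score(image, base_image, mask, prompt):
    blended = np.where(mask[..., None], image, base_image)   # only change inside the mask
    return text_image_similarity(blended, prompt)

def search(z0, mask, prompt, iters=200, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    base = generate(z0)
    best_z, best_s = z0, masked_score(base, base, mask, prompt)
    for _ in range(iters):                    # simple hill-climbing, no gradients
        z = best_z + sigma * rng.standard_normal(best_z.shape)
        s = masked_score(generate(z), base, mask, prompt)
        if s > best_s:
            best_z, best_s = z, s
    return best_z

mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True                     # region the user points at
z_edited = search(np.zeros(128), mask, "rustic")
```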

en cs.CV, cs.AI
arXiv Open Access 2021
Paint Transformer: Feed Forward Neural Painting with Stroke Prediction

Songhua Liu, Tianwei Lin, Dongliang He et al.

Neural painting refers to the procedure of producing a series of strokes for a given image and non-photo-realistically recreating it using neural networks. While reinforcement learning (RL) based agents can generate a stroke sequence step by step for this task, it is not easy to train a stable RL agent. On the other hand, stroke optimization methods search for a set of stroke parameters iteratively in a large search space; such low efficiency significantly limits their prevalence and practicality. Different from previous methods, in this paper, we formulate the task as a set prediction problem and propose a novel Transformer-based framework, dubbed Paint Transformer, to predict the parameters of a stroke set with a feed forward network. This way, our model can generate a set of strokes in parallel and obtain the final painting of size 512 * 512 in near real time. More importantly, since there is no dataset available for training the Paint Transformer, we devise a self-training pipeline such that it can be trained without any off-the-shelf dataset while still achieving excellent generalization capability. Experiments demonstrate that our method achieves better painting performance than previous ones with cheaper training and inference costs. Code and models are available.
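
A minimal sketch of feed-forward stroke-set prediction: one forward pass maps image features to a fixed-size set of stroke parameters, with no per-stroke iteration; dimensions and modules are placeholders, not the Paint Transformer architecture.

```python
import torch
import torch.nn as nn

class StrokeSetPredictor(nn.Module):
    def __init__(self, n_strokes=64, stroke_dim=8, d_model=128):
        super().__init__()
        # shared encoder over the concatenated current canvas and target image
        self.backbone = nn.Sequential(
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, d_model, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.queries = nn.Parameter(torch.randn(n_strokes, d_model))   # one query per stroke slot
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        self.head = nn.Linear(d_model, stroke_dim)

    def forward(self, canvas, target):                # both (B, 3, H, W)
        ctx = self.backbone(torch.cat([canvas, target], dim=1))        # (B, d_model)
        q = self.queries.unsqueeze(0) + ctx.unsqueeze(1)               # condition the queries
        return self.head(self.decoder(q))                              # (B, n_strokes, stroke_dim)

strokes = StrokeSetPredictor()(torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128))
print(strokes.shape)    # (1, 64, 8): the whole stroke set in one forward pass
```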

en cs.CV
arXiv Open Access 2020
Painting Many Pasts: Synthesizing Time Lapse Videos of Paintings

Amy Zhao, Guha Balakrishnan, Kathleen M. Lewis et al.

We introduce a new video synthesis task: synthesizing time lapse videos depicting how a given painting might have been created. Artists paint using unique combinations of brushes, strokes, and colors. There are often many possible ways to create a given painting. Our goal is to learn to capture this rich range of possibilities. Creating distributions of long-term videos is a challenge for learning-based video synthesis methods. We present a probabilistic model that, given a single image of a completed painting, recurrently synthesizes steps of the painting process. We implement this model as a convolutional neural network, and introduce a novel training scheme to enable learning from a limited dataset of painting time lapses. We demonstrate that this model can be used to sample many time steps, enabling long-term stochastic video synthesis. We evaluate our method on digital and watercolor paintings collected from video websites, and show that human raters find our synthetic videos to be similar to time lapse videos produced by real artists. Our code is available at https://xamyzhao.github.io/timecraft.
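
A minimal sketch of the recurrent sampling loop implied above: start from a blank canvas and repeatedly sample the next intermediate frame conditioned on the completed painting; the step model is a trivial stand-in, not the paper's probabilistic network.

```python
import numpy as np

def sample_step(canvas, final_painting, rng):
    """Stand-in for one stochastic step of the painting-process model: move part of the
    canvas toward the finished painting, with random 'stroke' placement."""
    h, w, _ = canvas.shape
    y, x = rng.integers(0, h - 16), rng.integers(0, w - 16)
    out = canvas.copy()
    out[y:y + 16, x:x + 16] = final_painting[y:y + 16, x:x + 16]
    return out

def synthesize_time_lapse(final_painting, n_frames=50, seed=0):
    rng = np.random.default_rng(seed)
    frames = [np.ones_like(final_painting)]            # start from a blank (white) canvas
    for _ in range(n_frames):
        frames.append(sample_step(frames[-1], final_painting, rng))
    return frames   # different seeds yield different plausible "pasts" of the same painting

frames = synthesize_time_lapse(np.random.rand(64, 64, 3))
```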

en cs.GR, cs.CV

Page 4 of 4633