Results for "Acoustics. Sound"

Showing 20 of ~1,126,643 results · from DOAJ, arXiv, Semantic Scholar

arXiv Open Access 2026
MoXaRt: Audio-Visual Object-Guided Sound Interaction for XR

Tianyu Xu, Sieun Kim, Qianhui Zheng et al.

In Extended Reality (XR), complex acoustic environments often overwhelm users, compromising both scene awareness and social engagement due to entangled sound sources. We introduce MoXaRt, a real-time XR system that uses audio-visual cues to separate these sources and enable fine-grained sound interaction. MoXaRt's core is a cascaded architecture that performs coarse, audio-only separation in parallel with visual detection of sources (e.g., faces, instruments). These visual anchors then guide refinement networks to isolate individual sources, separating complex mixes of up to 5 concurrent sources (e.g., 2 voices + 3 instruments) with ~2 second processing latency. We validate MoXaRt through a technical evaluation on a new dataset of 30 one-minute recordings featuring concurrent speech and music, and a 22-participant user study. Empirical results indicate that our system significantly enhances speech intelligibility, yielding a 36.2% (p < 0.01) increase in listening comprehension within adversarial acoustic environments while substantially reducing cognitive load (p < 0.001), thereby paving the way for more perceptive and socially adept XR experiences.

en cs.SD, cs.CV
DOAJ Open Access 2025
Dynamical analysis of a four-degree-of-freedom vibratory structure: Bifurcation, stability, and resonance exploration

TS Amer, Galal M. Moatimid, SK Zakria et al.

This study introduces a novel approach to analyzing a four-degree-of-freedom (DoF) nonlinear system by leveraging advanced numerical and analytical techniques to comprehensively examine its dynamic behavior. The system’s nonlinear differential equations (DEs) are obtained through the application of Lagrange’s equations (LE). The solutions are obtained using the fourth-order Runge–Kutta method (4-RKM). The investigation involves analyzing the relationships between the angular solutions and their corresponding first-order derivatives, commonly referred to as phase plane analysis. The study aims to examine bifurcation diagrams and Lyapunov exponent spectra to reveal the various modes of motion within the system and visualize Poincaré maps. These tools are used to analyze a unique system configuration. Lastly, the conditions for solvability and the characteristic exponents are identified by examining resonance scenarios. The examination of resonance scenarios through characteristic exponents and solvability conditions, coupled with the application of Routh-Hurwitz criteria (RHC) for stability evaluation, provides an innovative framework for understanding frequency response and nonlinear stability across stable and unstable ranges. By exploring both theoretical and practical aspects of vibrational dynamics in applications like aviation, robotics, and underwater exploration, this work offers a significant advancement in analyzing complex systems, with wide-ranging implications for various engineering fields, including aerospace, structural mechanics, and energy harvesting.
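
The 4-RKM integration described above can be sketched in a few lines. The damped-pendulum equation below is a generic illustrative nonlinear oscillator, not the paper's actual four-DoF system derived from Lagrange's equations; the step size and damping coefficient are likewise assumptions.

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def damped_pendulum(t, y):
    # Illustrative nonlinear oscillator: theta'' = -sin(theta) - 0.1 * theta'
    theta, omega = y
    return np.array([omega, -np.sin(theta) - 0.1 * omega])

# Integrate and collect the phase-plane trajectory (theta vs. theta')
y = np.array([1.0, 0.0])
trajectory = [y]
h = 0.01
for i in range(1000):
    y = rk4_step(damped_pendulum, i * h, y, h)
    trajectory.append(y)
trajectory = np.array(trajectory)
```

Plotting the second column of `trajectory` against the first gives the phase-plane portrait the abstract refers to; bifurcation diagrams and Poincaré maps are built by sampling such trajectories over a parameter sweep.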

Control engineering systems. Automatic machinery (General), Acoustics. Sound
DOAJ Open Access 2025
Comparison of speech intelligibility in a real and virtual living room using loudspeaker and headphone presentations

Schütze Julia, Kirsch Christoph, Kollmeier Birger et al.

Virtual acoustics enables hearing research and audiology in ecologically relevant and realistic acoustic environments, while offering the experimental control and reproducibility of classical psychoacoustics and speech intelligibility tests. Indoor environments are particularly relevant here, as listening and speech communication frequently involve multiple targets and interferers, as well as connected adjacent spaces that may create challenging acoustics. Hence, a controllable laboratory environment is evaluated here (by room acoustical parameters and speech intelligibility) which closely resembles a typical German living room with an adjacent kitchen. Target and interferer positions were permuted over four different locations, including an acoustically challenging position of a target in the kitchen with interrupted line of sight. Speech intelligibility was compared in the real room, in virtual acoustic representations, and in standard anechoic audiological configurations. Three presentation modes were tested: headphones, loudspeaker rendering on a small-scale, four-channel loudspeaker array in a sound-attenuated listening booth, and a three-dimensional 86-channel loudspeaker array in an anechoic chamber. The results showed that the target talker in the coupled room requires higher signal-to-noise ratios (SNRs) at threshold than typical indoor conditions. Moreover, for the stationary speech-shaped interferer, effects of room acoustics were negligible. For a majority of target positions, no difference between the four-channel and the large-scale loudspeaker array was found, with an overall good agreement with the real room. This indicates that ecologically valid testing is feasible using a clinically applicable small-scale loudspeaker array.

Acoustics in engineering. Acoustical engineering, Acoustics. Sound
DOAJ Open Access 2025
Sonication-assisted emulsification: Analyzing different polymers in aqueous systems for microparticle preparation by the double emulsion technique

Muhaimin Muhaimin, Anis Yohana Chaerunisaa, Roland Bodmeier

Sonication-assisted emulsification has emerged as a powerful technique for the preparation of microparticles in various fields, including pharmaceuticals, cosmetics, and food science. This study aims to investigate the impact of different polymers in an aqueous system on the preparation of microparticles by the double emulsion technique. By understanding the factors that affect emulsification and stability, we can optimize the production of microparticles with desired characteristics. This study discusses the mechanism behind sonication-assisted emulsification, the various polymers used, and the analysis of particle size, morphology, and stability. The microparticles were prepared with a water-in-oil-in-water (W/O/W) solvent evaporation method for various polymers (including EC 4 cp, Eudragit® RS 100, Eudragit® RL 100, PLGA (RG503H) and PCL), using dichloromethane as the solvent. The particle size/distribution of the emulsion droplets/hardened microparticles was monitored using FBRM. The morphology of the polymeric microparticles was characterized using scanning electron microscopy (SEM). The transformation of the emulsion droplets into solid microparticles occurred within the first 11.5, 20, 26, 30.5 and 56 min when EC 4 cp, Eudragit® RS 100, Eudragit® RL 100, PLGA (RG503H) and PCL were used, respectively. The square-weighted mean chord length of PCL microparticles was the smallest, but the chord count was not the highest. The chord length distribution (CLD) measured by FBRM showed that a larger mean particle size gave a longer CLD and a lower peak particle number. SEM data revealed that the morphology of the microparticles was influenced by the type of polymer. Sonication aided the emulsification of the polymeric systems in the aqueous phase. FBRM can be employed for online monitoring of the shift in the microparticle CLD and for detecting the transformation of emulsion droplets into solid microparticles during the solvent evaporation process. The microparticle CLD and transformation process were strongly influenced by polymer type.

Chemistry, Acoustics. Sound
arXiv Open Access 2025
ReelWave: Multi-Agentic Movie Sound Generation through Multimodal LLM Conversation

Zixuan Wang, Chi-Keung Tang, Yu-Wing Tai

Current audio generation conditioned by text or video focuses on aligning audio with text/video modalities. Despite excellent alignment results, these multimodal frameworks still cannot be directly applied to compelling movie storytelling involving multiple scenes, where "on-screen" sounds require temporally-aligned audio generation, while "off-screen" sounds contribute to appropriate environment sounds accompanied by background music when applicable. Inspired by professional movie production, this paper proposes a multi-agentic framework for audio generation supervised by an autonomous Sound Director agent, engaging multi-turn conversations with other agents for on-screen and off-screen sound generation through multimodal LLM. To address on-screen sound generation, after detecting any talking humans in videos, we capture semantically and temporally synchronized sound by training a prediction model that forecasts interpretable, time-varying audio control signals: loudness, pitch, and timbre, which are used by a Foley Artist agent to condition a cross-attention module in the sound generation. The Foley Artist works cooperatively with the Composer and Voice Actor agents, and together they autonomously generate off-screen sound to complement the overall production. Each agent takes on specific roles similar to those of a movie production team. To temporally ground audio language models, in ReelWave, text/video conditions are decomposed into atomic, specific sound generation instructions synchronized with visuals when applicable. Consequently, our framework can generate rich and relevant audio content conditioned on video clips extracted from movies.

en cs.SD, cs.CV
arXiv Open Access 2025
Audio Flamingo Sound-CoT Technical Report: Improving Chain-of-Thought Reasoning in Sound Understanding

Zhifeng Kong, Arushi Goel, Joao Felipe Santos et al.

Chain-of-thought reasoning has demonstrated significant improvements in large language models and vision language models, yet its potential for audio language models remains largely unexplored. In this technical report, we take a preliminary step towards closing this gap. For better assessment of sound reasoning, we propose AF-Reasoning-Eval, a benchmark targeting common-sense reasoning and the ability to discriminate among closely related choices. To prepare training corpus for sound reasoning abilities, we propose automatic pipelines that transform existing audio question answering and classification data into explicit reasoning chains, yielding AF-CoT-Train with 1.24M samples. We study the effect of finetuning Audio Flamingo series on AF-CoT-Train and observe considerable improvements on several reasoning benchmarks, validating the effectiveness of chain-of-thought finetuning on advanced sound understanding.

en cs.SD, cs.LG
arXiv Open Access 2025
Deep, data-driven modeling of room acoustics: literature review and research perspectives

Toon van Waterschoot

Our everyday auditory experience is shaped by the acoustics of the indoor environments in which we live. Room acoustics modeling is aimed at establishing mathematical representations of acoustic wave propagation in such environments. These representations are relevant to a variety of problems ranging from echo-aided auditory indoor navigation to restoring speech understanding in cocktail party scenarios. Many disciplines in science and engineering have recently witnessed a paradigm shift powered by deep learning (DL), and room acoustics research is no exception. The majority of deep, data-driven room acoustics models are inspired by DL-based speech and image processing, and hence lack the intrinsic space-time structure of acoustic wave propagation. More recently, DL-based models for room acoustics that include either geometric or wave-based information have delivered promising results, primarily for the problem of sound field reconstruction. In this review paper, we will provide an extensive and structured literature review on deep, data-driven modeling in room acoustics. Moreover, we position these models in a framework that allows for a conceptual comparison with traditional physical and data-driven models. Finally, we identify strengths and shortcomings of deep, data-driven room acoustics models and outline the main challenges for further research.
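
The room impulse response (RIR) framing above lends itself to a one-line auralization sketch: a reverberant signal is the dry signal convolved with the RIR. The sample rate, decay constant, and synthetic noise-tail RIR below are illustrative assumptions, not outputs of any model discussed in the review.

```python
import numpy as np

fs = 16000  # sample rate in Hz (assumed)

# Synthetic RIR: exponentially decaying noise tail, a crude stand-in
# for a simulated or measured room impulse response
rng = np.random.default_rng(0)
t = np.arange(int(0.3 * fs)) / fs
rir = rng.standard_normal(t.size) * np.exp(-t / 0.05)
rir /= np.abs(rir).max()

# Dry source signal: a short tone burst standing in for speech
dry = np.sin(2 * np.pi * 440 * np.arange(int(0.1 * fs)) / fs)

# Auralization: the reverberant signal is the convolution of the
# dry signal with the room impulse response
wet = np.convolve(dry, rir)
```

Data-driven RIR models, whatever their architecture, ultimately feed this same convolution step.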

en eess.AS, cs.SD
DOAJ Open Access 2024
Improving complexation of puerarin with kudzu starch by various ultrasonic pretreatment: Interaction mechanism analysis

Yuheng Li, Chao Zhang, Shuyi Li et al.

The industrial preparation of kudzu starch (KS) significantly reduces the amount of flavonoids such as puerarin (PU) remaining in the product, weakening its biological activity and making pretreatment of kudzu crucial. The ultrasonic technique, widely used for modifying biomolecules, can enhance nutrient interactions such as those between starch and polyphenols in foods. Thus, a puerarin-kudzu starch (PKS) complex was prepared with the introduction of ultrasonic pretreatment. The results indicated that sonication increased the binding of PU to KS from 0.399 ± 0.01 to 0.609 ± 0.05 mg/g. Particle size analysis and SEM revealed that the particles of the ultrasonic puerarin-kudzu starch complex (UPKS) were larger than those of the untreated complexes. XRD, UV–vis, and FT-IR spectroscopic analyses indicated that hydrogen bonding primarily governs the interaction between PU and KS. Additionally, incorporating PU decreased the orderliness of the starch structure, while ultrasonic treatment altered the helical configuration of straight-chain starch, leading to the formation of a new, ordered structure through the creation of new hydrogen bonds. Gels formed from UPKS also exhibited higher viscosity, elasticity, and shear stress, suggesting that ultrasound significantly altered the intermolecular interactions within PKS. In conclusion, the use of ultrasound under optimal conditions has demonstrated its effectiveness in preparing PKS complexes, highlighting its significant potential to produce high value-added kudzu-based products.

Chemistry, Acoustics. Sound
DOAJ Open Access 2024
Facile room temperature synthesis of size-controlled spherical silica from silicon metal via simple sonochemical process

Ren Zushi, Yamato Hayashi, Toshiki Yamanaka et al.

The waterglass or Stöber method is commonly used to synthesize spherical colloidal silica; however, these methods have some disadvantages, such as complicated processes for the removal of sodium ions and expensive and energy-consuming raw materials such as tetraethoxysilane (TEOS). In this study, size-controlled spherical colloidal silica was synthesized from silicon metal at room temperature using an ultrasound process with hydrazine monohydrate as the solvent. Silicon metal dissolves easily in hydrazine monohydrate under ultrasound irradiation, and spherical colloidal silica can be synthesized by adding alcohol to this precursor solution. By changing the concentration or type of alcohol, size-controlled colloidal silica 20–200 nm in size could be easily obtained. In addition, finer and more monodisperse particles were produced by low-frequency ultrasound irradiation, which had a higher stirring effect at the particle formation stage. The present method is effective because size-controlled colloidal silica can be synthesized at room temperature using only silicon metal, hydrazine, and alcohol as raw materials, without complicated processes or expensive and energy-consuming raw materials such as TEOS or tetramethoxysilane (TMOS).

Chemistry, Acoustics. Sound
DOAJ Open Access 2024
Interpolation of scheduled simulation results for real-time auralization of moving sources

Schäfer Philipp, Fatela João, Vorländer Michael

A central part of auralization is the consideration of realistic sound propagation effects. This can be achieved using computationally efficient physics-based simulations based on the principles of geometrical acoustics. When considering complex effects, e.g. curved propagation due to atmospheric refraction, those simulations can be computationally demanding. This can become the bottleneck for real-time auralizations, as the run-time exceeds the duration of one audio block even for large block sizes. A solution is to schedule the simulations into a separate thread. However, this leads to an irregular update rate which is lower than the rate of the audio blocks. Consequently, the output signal can contain audible artifacts. This especially holds when considering the Doppler effect for dynamic scenarios with fast-moving sources, such as aircraft. This paper introduces a method for interpolating, and thereby upsampling, the results of scheduled simulations in an auralization context in order to avoid such artifacts. The method is applied to an aircraft flyover auralization considering curved sound propagation in an inhomogeneous, moving atmosphere. Using this method, it is possible to auralize such scenarios in real time.
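
The upsampling idea can be sketched with plain linear interpolation from irregular simulation timestamps to a regular audio-block grid. The timestamps, delay values, and block size below are invented for illustration; the paper's actual interpolation scheme may differ.

```python
import numpy as np

# Irregular timestamps at which a scheduled simulation delivered, e.g.,
# a propagation delay in seconds (values are illustrative)
sim_times = np.array([0.00, 0.21, 0.38, 0.65, 0.90, 1.12])
sim_delay = np.array([0.100, 0.098, 0.095, 0.091, 0.088, 0.086])

# Regular audio-block timestamps (e.g., 1024-sample blocks at 48 kHz)
block_rate = 48000 / 1024
block_times = np.arange(0.0, 1.1, 1.0 / block_rate)

# Upsample the simulation results to the block rate by linear
# interpolation, giving a smooth parameter trajectory per audio block
# instead of audible jumps at each irregular simulation update
delay_per_block = np.interp(block_times, sim_times, sim_delay)
```

For a source closing in on the listener, this smoothly decreasing delay is exactly what produces an artifact-free Doppler shift in the rendered signal.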

Acoustics in engineering. Acoustical engineering, Acoustics. Sound
DOAJ Open Access 2024
Pre-transplant kidney quality evaluation using photoacoustic imaging during normothermic machine perfusion

Anton V. Nikolaev, Yitian Fang, Jeroen Essers et al.

Due to the shortage of kidneys donated for transplantation, surgeons are forced to use organs with an elevated risk of poor function or even failure. Although the existing methods for pre-transplant quality evaluation have been validated over decades in population cohort studies across the world, new methods are needed as long as delayed graft function or failure in a kidney transplant occurs. In this study, we explored the potential of utilizing photoacoustic (PA) imaging during normothermic machine perfusion (NMP) as a means of evaluating kidney quality. We closely monitored twenty-two porcine kidneys using 3D PA imaging during a two-hour NMP session. Based on biochemical analyses of perfusate and produced urine, the kidneys were categorized into ‘non-functional’ and ‘functional’ groups. Our primary focus was to quantify oxygenation (sO2) within the kidney cortical layer at depths of 2 mm, 4 mm, and 6 mm using two-wavelength PA imaging. Next, receiver operating characteristic (ROC) analysis was performed to determine an optimal cortical layer depth and time point for the quantification of sO2 to discriminate between functional and non-functional organs. Finally, for each depth, we assessed the correlation between sO2 and creatinine clearance (CrCl), oxygen consumption (VO2), and renal blood flow (RBF). We found that hypoxia of the renal cortex is associated with poor renal function. In addition, the determination of sO2 within the 2 mm depth of the renal cortex after 30 min of NMP effectively distinguishes between functional and non-functional kidneys. The non-functional kidneys can be detected with a sensitivity and specificity of 80% and 85%, respectively, using the cut-off point of sO2 < 39%. Oxygenation significantly correlates with RBF and VO2 in all kidneys. In functional kidneys, sO2 correlated with CrCl, which is not the case for non-functional kidneys. We conclude that the presented technique has high potential for supporting organ selection for kidney transplantation.
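
The reported cut-off can be illustrated with a small sensitivity/specificity computation. The sO2 readings and labels below are synthetic stand-ins, not the study's porcine data, so the resulting numbers do not reproduce the reported 80%/85%.

```python
import numpy as np

# Synthetic sO2 readings (%) -- illustrative only, not the study's data
so2 = np.array([25, 30, 33, 36, 38, 41, 45, 50, 55, 60, 42, 37, 62, 48])
# True labels: 1 = non-functional kidney, 0 = functional
label = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0])

# Classify as non-functional when sO2 falls below the cut-off
cutoff = 39.0
pred = (so2 < cutoff).astype(int)

tp = np.sum((pred == 1) & (label == 1))
fn = np.sum((pred == 0) & (label == 1))
tn = np.sum((pred == 0) & (label == 0))
fp = np.sum((pred == 1) & (label == 0))

sensitivity = tp / (tp + fn)  # non-functional kidneys correctly flagged
specificity = tn / (tn + fp)  # functional kidneys correctly passed
```

ROC analysis, as used in the study, amounts to sweeping `cutoff` over all observed values and plotting sensitivity against one minus specificity.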

Physics, Acoustics. Sound
arXiv Open Access 2024
Efficient learning-based sound propagation for virtual and real-world audio processing applications

Anton Jeran Ratnarajah

Sound propagation is the process by which sound energy travels through a medium, such as air, to the surrounding environment as sound waves. The room impulse response (RIR) describes this process and is influenced by the positions of the source and listener, the room's geometry, and its materials. Physics-based acoustic simulators have been used for decades to compute accurate RIRs for specific acoustic environments. However, we have encountered limitations with existing acoustic simulators. To address these limitations, we propose three novel solutions. First, we introduce a learning-based RIR generator that is two orders of magnitude faster than an interactive ray-tracing simulator. Our approach can be trained to input both statistical and traditional parameters directly, and it can generate both monaural and binaural RIRs for both reconstructed and synthetic 3D scenes. Our generated RIRs outperform interactive ray-tracing simulators in speech-processing applications, including ASR, Speech Enhancement, and Speech Separation. Secondly, we propose estimating RIRs from reverberant speech signals and visual cues without a 3D representation of the environment. By estimating RIRs from reverberant speech, we can augment training data to match test data, improving the word error rate of the ASR system. Our estimated RIRs achieve a 6.9% improvement over previous learning-based RIR estimators in far-field ASR tasks. We demonstrate that our audio-visual RIR estimator aids tasks like visual acoustic matching, novel-view acoustic synthesis, and voice dubbing, validated through perceptual evaluation. Finally, we introduce IR-GAN to augment accurate RIRs using real RIRs. IR-GAN parametrically controls acoustic parameters learned from real RIRs to generate new RIRs that imitate different acoustic environments, outperforming Ray-tracing simulators on the far-field ASR benchmark by 8.95%.

en cs.SD, eess.AS
arXiv Open Access 2024
Sound Event Bounding Boxes

Janek Ebbers, Francois G. Germain, Gordon Wichern et al.

Sound event detection is the task of recognizing sounds and determining their extent (onset/offset times) within an audio clip. Existing systems commonly predict sound presence confidence in short time frames. Then, thresholding produces binary frame-level presence decisions, with the extent of individual events determined by merging consecutive positive frames. In this paper, we show that frame-level thresholding degrades the prediction of the event extent by coupling it with the system's sound presence confidence. We propose to decouple the prediction of event extent and confidence by introducing SEBBs, which format each sound event prediction as a tuple of a class type, extent, and overall confidence. We also propose a change-detection-based algorithm to convert legacy frame-level outputs into SEBBs. We find the algorithm significantly improves the performance of DCASE 2023 Challenge systems, boosting the state of the art from .644 to .686 PSDS1.
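
The legacy frame-level pipeline the paper argues against — thresholding confidences, then merging consecutive positive frames into events — can be sketched as follows; the confidence values, threshold, and hop size are illustrative.

```python
import numpy as np

def frames_to_events(confidence, threshold, hop_s):
    """Threshold frame-level confidences and merge runs of consecutive
    positive frames into (onset, offset) events in seconds -- the
    legacy postprocessing that couples event extent to confidence."""
    active = confidence >= threshold
    events = []
    start = None
    for i, a in enumerate(active):
        if a and start is None:
            start = i  # run of positive frames begins
        elif not a and start is not None:
            events.append((start * hop_s, i * hop_s))  # run ends
            start = None
    if start is not None:  # close a run that reaches the clip end
        events.append((start * hop_s, active.size * hop_s))
    return events

conf = np.array([0.1, 0.7, 0.8, 0.2, 0.9, 0.9, 0.9, 0.1])
events = frames_to_events(conf, threshold=0.5, hop_s=0.02)
```

Note how a single low-confidence frame (0.2) splits what may be one event into two: this coupling of extent and confidence is exactly what SEBBs, which carry one overall confidence per event, are designed to avoid.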

en eess.AS, cs.SD
arXiv Open Access 2024
Producer vs. Rapper: Who Dominates the Hip Hop Sound? A Case Study

Tim Ziemer, Nikita Kudakov, Christoph Reuter

In hip-hop music, rappers and producers play important, but rather different roles. However, both contribute to the overall sound, as rappers bring in their voice, while producers are responsible for the music composition and mix. In this case report, we trained Self-Organizing Maps (SOMs) with songs produced by Dr. Dre, Rick Rubin and Timbaland using the goniometer and Mel Frequency Cepstral Coefficients (MFCCs). With these maps, we investigate whether hip hop producers have a unique sound profile. Then, we test whether collaborations with the rappers Eminem, Jay-Z, LL Cool J and Nas stick to, or break out of, this sound profile. As these rappers are also producers of some songs, we investigate how much their sound profile is influenced by the producers who introduced them to beat making. The results are clear: producers have their own sound profile that is unique concerning the goniometer, and less distinct concerning MFCCs. They dominate the sound of hip hop music over rappers, who emulate the sound profile of the producers who introduced them to beat making.
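
A minimal SOM training step can be sketched in NumPy. Synthetic 3-D features stand in for the goniometer/MFCC descriptors, and the grid size, learning rate, and Gaussian neighbourhood below are generic illustrative choices, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(42)

# A tiny self-organizing map: a 4x4 grid of weight vectors in feature
# space. Synthetic 3-D features stand in for per-song audio descriptors.
grid_h, grid_w, dim = 4, 4, 3
weights = rng.random((grid_h, grid_w, dim))
grid_y, grid_x = np.mgrid[0:grid_h, 0:grid_w]

def som_update(weights, x, lr=0.5, sigma=1.0):
    """One SOM step: find the best-matching unit (BMU), then pull the
    BMU and its grid neighbours toward the input sample x."""
    dists = np.linalg.norm(weights - x, axis=2)
    by, bx = np.unravel_index(np.argmin(dists), dists.shape)
    # Gaussian neighbourhood around the BMU, measured on the map grid
    d2 = (grid_y - by) ** 2 + (grid_x - bx) ** 2
    h = np.exp(-d2 / (2 * sigma ** 2))[..., None]
    return weights + lr * h * (x - weights)

data = rng.random((200, dim))  # stand-in for per-song feature vectors
for x in data:
    weights = som_update(weights, x)
```

After training, songs by different producers would be mapped to their BMUs; a producer with a distinct sound profile occupies a compact region of the grid.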

en cs.SD, cs.DC
arXiv Open Access 2024
OmniSep: Unified Omni-Modality Sound Separation with Query-Mixup

Xize Cheng, Siqi Zheng, Zehan Wang et al.

Scaling up has brought tremendous success in the fields of vision and language in recent years. When it comes to audio, however, researchers encounter a major challenge in scaling up the training data, as most natural audio contains diverse interfering signals. To address this limitation, we introduce Omni-modal Sound Separation (OmniSep), a novel framework capable of isolating clean soundtracks based on omni-modal queries, encompassing both single-modal and multi-modal composed queries. Specifically, we introduce the Query-Mixup strategy, which blends query features from different modalities during training. This enables OmniSep to optimize multiple modalities concurrently, effectively bringing all modalities under a unified framework for sound separation. We further enhance this flexibility by allowing queries to influence sound separation positively or negatively, facilitating the retention or removal of specific sounds as desired. Finally, OmniSep employs a retrieval-augmented approach known as Query-Aug, which enables open-vocabulary sound separation. Experimental evaluations on the MUSIC, VGGSOUND-CLEAN+, and MUSIC-CLEAN+ datasets demonstrate the effectiveness of OmniSep, achieving state-of-the-art performance in text-, image-, and audio-queried sound separation tasks. For samples and further information, please visit the demo page at https://omnisep.github.io/.
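
The Query-Mixup idea — blending query features from different modalities during training — can be sketched as a weighted combination of embeddings. The dimensionality, random stand-in embeddings, and exact mixing rule here are assumptions for illustration; the paper's formulation may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for query embeddings from two modalities (e.g. text and
# image); a real system would obtain these from modality-specific
# encoders projected into a shared query space.
q_text = rng.standard_normal(128)
q_image = rng.standard_normal(128)

def query_mixup(queries, lam):
    """Blend per-modality query features with mixing weights `lam`,
    normalised to sum to one (a Query-Mixup-style combination)."""
    lam = np.asarray(lam, dtype=float)
    lam = lam / lam.sum()
    return sum(l * q for l, q in zip(lam, queries))

# One mixed query conditions the separation network during training,
# so all modalities are optimized under a single framework
q_mixed = query_mixup([q_text, q_image], lam=[0.7, 0.3])
```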

en cs.SD, cs.CV
arXiv Open Access 2023
Generating Realistic Images from In-the-wild Sounds

Taegyeong Lee, Jeonghun Kang, Hyeonyu Kim et al.

Representing wild sounds as images is an important but challenging task due to the lack of paired datasets between sound and images and the significant differences in the characteristics of these two modalities. Previous studies have focused on generating images from sound in limited categories or music. In this paper, we propose a novel approach to generate images from in-the-wild sounds. First, we convert sound into text using audio captioning. Second, we propose audio attention and sentence attention to represent the rich characteristics of sound and visualize the sound. Lastly, we propose direct sound optimization with CLIPscore and AudioCLIP and generate images with a diffusion-based model. Experiments show that our model is able to generate high-quality images from wild sounds and outperforms baselines in both quantitative and qualitative evaluations on wild audio datasets.

en cs.CV, cs.SD
arXiv Open Access 2023
Characterization of cough sounds using statistical analysis

Naveenkumar Vodnala, Pratap Reddy Lankireddy, Padmasai Yarlagadda

Cough is a primary symptom of most respiratory diseases, and changes in cough characteristics provide valuable information for diagnosing respiratory diseases. The characterization of cough sounds still lacks concrete evidence, which makes it difficult to accurately distinguish between different types of coughs and other sounds. The objective of this research work is to characterize cough sounds with voiced content and cough sounds without voiced content. Further, the cough sound characteristics are compared with the characteristics of speech. The proposed method to achieve this goal utilized spectral roll-off, spectral entropy, spectral flatness, spectral flux, zero crossing rate, spectral centroid, and spectral bandwidth attributes which describe the cough sounds related to the respiratory system, glottal information, and voice model. These attributes are then subjected to statistical analysis using the measures of minimum, maximum, mean, median, and standard deviation. The experimental results show that the mean and frequency distribution of spectral roll-off, spectral centroid, and spectral bandwidth are found to be higher for cough sounds than for speech signals. Spectral flatness levels in cough sounds will rise to 0.22, whereas spectral flux varies between 0.3 and 0.6. The Zero Crossing Rate (ZCR) of most frames of cough sounds is between 0.05 and 0.4. These attributes contribute significant information while characterizing cough sounds.
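
Two of the attributes used above, zero crossing rate and spectral centroid, can be sketched directly in NumPy. The 440 Hz sine is a stand-in signal, not cough data, so its feature values are only a sanity check of the definitions.

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs  # one second of signal
x = np.sin(2 * np.pi * 440 * t)  # stand-in for a cough/speech frame

# Zero crossing rate: fraction of adjacent sample pairs whose sign
# differs; a pure tone crosses zero twice per cycle
zcr = np.mean(np.abs(np.diff(np.signbit(x).astype(int))))

# Spectral centroid: magnitude-weighted mean frequency of the spectrum
mag = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(x.size, d=1 / fs)
centroid = np.sum(freqs * mag) / np.sum(mag)
```

Spectral roll-off, flatness, flux, and bandwidth are computed from the same magnitude spectrum `mag`; the statistics in the paper (min, max, mean, median, standard deviation) are then taken over many such frames.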

en cs.SD, eess.AS
arXiv Open Access 2023
Sound Terminology Describing Production and Perception of Sonification

Tim Ziemer

Sonification research is intrinsically interdisciplinary. Consequently, a proper documentation of, and interdisciplinary discourse about, a sonification is often hindered by terminology discrepancies between the involved disciplines, i.e., the lack of a common sound terminology in sonification research. Without a common ground, a researcher from one discipline may have trouble understanding the implementation and imagining the resulting sound perception of a sonification, if the sonification is described by a researcher from another discipline. To find a common ground, I consulted literature on interdisciplinary research and discourse, identified problems that occur in sonification, and applied the recommended solutions. As a result, I recommend considering three aspects of sonification individually, namely 1.) Sound Design Concept, 2.) Objective and 3.) Method, clarifying which discipline is involved in which aspect, and sticking to this discipline's terminology. As two requirements of sonifications are that they are a) reproducible and b) interpretable, I recommend documenting and discussing every sonification design once using audio engineering terminology, and once using psychoacoustic terminology. The appendix provides comprehensive lists of sound terms from both disciplines, together with relevant literature and a clarification of often misunderstood and misused terms.

en cs.SD, eess.AS
DOAJ Open Access 2022
Understanding the Effects of Ultrasound (408 kHz) on the Hydrogen Evolution Reaction (HER) and the Oxygen Evolution Reaction (OER) on Raney-Ni in Alkaline Media

Faranak Foroughi, Christian Immanuel Bernäcker, Lars Röntzsch et al.

The hydrogen evolution reaction (HER) and the oxygen evolution reaction (OER) occurring at the Raney-Ni mesh electrode in 30 wt.-% aqueous KOH solution were studied in the absence (silent) and presence of ultrasound (408 kHz, ∼54 W, 100% acoustic amplitude) at different electrolyte temperatures (T = 25, 40 and 60 °C). Linear sweep voltammetry (LSV) and electrochemical impedance spectroscopy (EIS) experiments were performed to analyse the electrochemical behaviour of the Raney-Ni electrode under these conditions. Under silent conditions, it was found that the electrocatalytic activity of Raney-Ni towards the HER and the OER depends upon the electrolyte temperature, and higher current densities at lower overpotentials were achieved at elevated temperatures. It was also observed that the HER activity of Raney-Ni under ultrasonic conditions increased at low temperatures (e.g., 25 °C) while the ultrasonic effect on the OER was found to be insignificant. In addition, it was observed that the ultrasonic effect on both the HER and OER decreases by elevating the temperature. In our conditions, it is suggested that ultrasound enhances the electrocatalytic performance of Raney-Ni towards the HER due to principally the efficient gas bubble removal from the electrode surface and the dispersion of gas bubbles into the electrolyte, and this effect depends upon the behaviour of the hydrogen and oxygen gas bubbles in alkaline media.

Chemistry, Acoustics. Sound

Page 24 of 56333