Several studies highlight the relevance of considering polar winter stratospheric information such as the occurrence of Sudden Stratospheric Warmings (SSWs) for skillful Subseasonal to Seasonal (S2S) surface climate predictions. However, current S2S forecast systems can only predict these events about two weeks in advance. A potential way of increasing their predictability is to improve the models' representation of the triggering mechanisms of SSWs. Traditional theories indicate that SSWs follow sustained wave dissipation in the stratosphere, but the relative role of tropospheric versus stratospheric conditions in the enhancement of stratospheric wave activity remains unclear.
This study aims to quantify the role of the stratospheric state in the wave activity preceding SSWs by analyzing three recent events: the boreal SSWs of 2018 and 2019 and the austral minor SSW of 2019, using dedicated sets of S2S experiments. These ensembles follow the SNAPSI (Stratospheric Nudging And Predictable Surface Impacts) guidelines and include free-evolving atmospheric runs and nudged simulations, in which the zonally symmetric stratospheric state is relaxed toward either the observed evolution of a given SSW or a climatological state. Our results show that the models struggle to capture the strong enhancement of wave activity preceding the 2018 SSW, limiting predictability beyond 10 d. In contrast, both SSWs of 2019 are better predicted, consistent with a more accurate simulation of the wave activity. Nudging the zonal-mean stratospheric state does not drastically influence the upward wave activity flux or the tropospheric circulation anomalies prior to these SSWs, but it does modulate the stratospheric wave activity, with the modulation depending on the event characteristics. The boreal 2019 SSW appears to be primarily driven by tropospheric processes, whereas stratospheric contributions may also have played an important role in triggering the boreal 2018 SSW and the austral 2019 SSW. Understanding these differences is key to improving SSW predictability in S2S models.
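For orientation, nudged runs of this kind typically add a Newtonian-relaxation term to the model tendency; the generic form below is only a sketch, with the relaxed fields and the timescale τ left as assumptions rather than the SNAPSI protocol's exact settings.

```latex
% Generic Newtonian-relaxation (nudging) tendency for a zonal-mean field \overline{X}
% (e.g., zonal-mean wind or temperature): \mathcal{M} is the free-running model
% tendency and \overline{X}_{\mathrm{target}} the prescribed (observed SSW or
% climatological) state toward which the stratosphere is relaxed over timescale \tau.
\frac{\partial \overline{X}}{\partial t}
  = \mathcal{M}(X) - \frac{\overline{X} - \overline{X}_{\mathrm{target}}}{\tau}
```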
Archana Dixit, Saurabh Verma, Anirudh Pradhan et al.
In this study, we explored the cosmological implications of the modified gravity framework $f(R, L_m)$, taking the specific form $f(R, L_m) = \frac{R}{2} + L_m^n$, where $n$ denotes the model parameter. The analysis was carried out within a spatially flat FLRW background by adopting the Barboza–Alcaniz (BA) parametrization for the dark energy equation of state, expressed as $\omega(z) = w_0 + w_1 \frac{z(1+z)}{1+z^2}$. Based on this setup, an expression for the Hubble parameter $H(z)$ was derived. The parameters $(H_0, n, w_0, w_1)$ were estimated using a Bayesian Markov Chain Monte Carlo (MCMC) technique, implemented via the emcee package, with Cosmic Chronometers (CC), Pantheon Plus & SH0ES (PPS), and DESI BAO datasets. For the CC+PPS+DESI BAO combination, the best-fit Hubble constant was obtained as $H_0 = 72.08^{+0.30}_{-0.24}\ \mathrm{km\,s^{-1}\,Mpc^{-1}}$, which shows better consistency with the local SH0ES measurement than with the Planck $\Lambda$CDM result, thereby reducing the Hubble tension. Furthermore, the dynamical evolution of the equation of state parameter $\omega$, the deceleration parameter, the impact of various energy conditions, and the optimal model parameters were thoroughly examined.
The study also investigated the behavior of the $O_m$ diagnostic and determined the present age of the universe predicted by this model.
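As a reproducibility aid, the sketch below codes the BA equation of state, an H(z) built from its analytic dark-energy density factor, and an emcee fit over H(z) points. It assumes a standard flat-FLRW Friedmann form with an effective matter fraction Om0 in place of the f(R, L_m) parameter n, and it uses toy data; the paper's actual H(z) and its CC/PPS/DESI BAO likelihoods differ.

```python
# Hedged sketch: BA dark-energy parametrization and an emcee likelihood over H(z).
# Assumes a standard flat-FLRW form with an effective matter fraction Om0; the
# paper's actual f(R, L_m) expression for H(z) depends on the model parameter n.
import numpy as np
import emcee

def w_BA(z, w0, w1):
    """Barboza-Alcaniz equation of state w(z) = w0 + w1 * z(1+z)/(1+z^2)."""
    return w0 + w1 * z * (1.0 + z) / (1.0 + z**2)

def H_of_z(z, H0, Om0, w0, w1):
    """H(z) using the analytic dark-energy density factor implied by w_BA."""
    de = (1.0 + z)**(3.0 * (1.0 + w0)) * (1.0 + z**2)**(1.5 * w1)
    return H0 * np.sqrt(Om0 * (1.0 + z)**3 + (1.0 - Om0) * de)

def log_prob(theta, z, Hobs, sigma):
    H0, Om0, w0, w1 = theta
    if not (50 < H0 < 90 and 0.05 < Om0 < 0.6 and -3 < w0 < 1 and -5 < w1 < 5):
        return -np.inf  # flat priors
    resid = (Hobs - H_of_z(z, H0, Om0, w0, w1)) / sigma
    return -0.5 * np.sum(resid**2)

# Toy stand-ins for the observed H(z) points used in the real analysis.
z = np.array([0.1, 0.5, 1.0, 1.5])
Hobs = np.array([72.0, 88.0, 120.0, 160.0])
sigma = np.array([5.0, 6.0, 10.0, 15.0])

nwalkers, ndim = 32, 4
p0 = np.array([70.0, 0.3, -1.0, 0.0]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(z, Hobs, sigma))
sampler.run_mcmc(p0, 2000, progress=False)
```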
We present ALMA CO observations of 14 H I-detected galaxies from the COSMOS H I Large Extragalactic Survey (CHILES) found in a cosmic over-density at z ∼ 0.12. This is the largest collection of spatially resolved CO + H I observations beyond the local Universe (z > 0.05) to date. While the H I-detected parent sample spans a range of stellar masses, star formation rates (SFRs), and environments, we only directly detect CO in the highest stellar mass galaxies, log(M*/M⊙) > 10.0, with SFRs greater than ∼2 M⊙ yr⁻¹. The detected CO has the kinematic signature of a rotating disk, consistent with the H I. We stacked the CO non-detections and find a mean H2 mass of log(M_H2/M⊙) = 8.46 in galaxies with a mean stellar mass of log(M*/M⊙) = 9.35. In addition to high stellar masses and SFRs, the systems detected in CO are spatially larger, have redder overall colors, and exhibit broader (stacked) line widths. The CO emission is spatially coincident with both the highest stellar mass surface density and the star-forming region of the galaxies, as revealed by the 1.4 GHz continuum emission from CHILES Con Pol. We interpret the redder colors as the molecular gas being coincident with dusty regions of obscured star formation. The 14 H I detections show a range of morphologies, but the H I reservoir is always more extended than the CO. Finally, we compare with samples in the literature and find mild evidence for evolution in the molecular gas reservoir and the H2-to-H I gas ratio with redshift in H I flux-limited samples. We also show that the scatter in the H I mass and the H I-to-stellar mass ratio is too great to conclusively measure evolution below z = 0.2, and would remain extremely difficult even below z = 0.4. The CHILES detections are likely to be the only individual galaxies detected in H I between 0.1 < z < 0.23 for the foreseeable future, owing to the severity of satellite radio frequency interference and its preferential impact on the short baselines that dominate contemporary H I surveys.
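For context on how H2 masses like the stacked value above are typically derived, here is a minimal sketch using the standard Solomon & Vanden Bout (2005) line-luminosity relation; the flux, distance, and alpha_CO values are illustrative assumptions, not CHILES measurements.

```python
# Hedged sketch: converting an integrated CO(1-0) line flux into a line luminosity
# L'_CO and an H2 mass via the standard Solomon & Vanden Bout (2005) relation and
# an assumed alpha_CO. The flux, distance, and alpha_CO below are stand-ins.
import math

def co_line_luminosity(Sdv_Jy_kms, nu_obs_GHz, DL_Mpc, z):
    """L'_CO in K km/s pc^2 for an integrated line flux S*dv in Jy km/s."""
    return 3.25e7 * Sdv_Jy_kms * nu_obs_GHz**-2 * DL_Mpc**2 * (1.0 + z)**-3

def h2_mass(Lprime_co, alpha_co=4.3):
    """M_H2 in solar masses; alpha_CO = 4.3 is a Milky-Way-like conversion factor."""
    return alpha_co * Lprime_co

# A z ~ 0.12 galaxy: CO(1-0) is redshifted from 115.271 GHz to ~102.9 GHz.
Lp = co_line_luminosity(Sdv_Jy_kms=0.5, nu_obs_GHz=115.271 / 1.12, DL_Mpc=560.0, z=0.12)
print(f"log L'_CO = {math.log10(Lp):.2f}, log M_H2 = {math.log10(h2_mass(Lp)):.2f}")
```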
We present the first high-resolution XRISM/Resolve view of the relativistically broadened Fe K line in Cygnus X-1. The data clearly separate the relativistic broad line from the underlying continuum and from narrow emission and absorption features in the Fe band. The unprecedented spectral resolution in the Fe K band clearly demonstrates that the flux excess can be attributed to a single, broad feature, as opposed to a superposition of previously unresolved narrow features. This broad feature can be best interpreted as emission consistent with an origin near the innermost stable circular orbit around a rapidly rotating black hole. By modeling the shape of the broad line, we find a black hole spin of a ≃ 0.98 and an inclination of the inner accretion disk of θ ≃ 63°. The spin is consistent with prior reflection studies, reaffirming the robustness of past spin measurements using the relativistic reflection method. The measured inclination provides reinforcing evidence of a disk-orbit misalignment in Cygnus X-1. These results highlight the unique abilities of XRISM in separating overlapping spectral features and providing constraints on the geometry of accretion in X-ray binaries.
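To illustrate why a line extending to the innermost stable circular orbit implies a high spin, the snippet below evaluates the standard Bardeen, Press & Teukolsky (1972) prograde ISCO radius as a function of spin; it is a back-of-the-envelope aid, not the reflection-model fit used in the paper.

```python
# Hedged sketch: prograde ISCO radius vs. dimensionless spin a (Bardeen, Press &
# Teukolsky 1972). A broad Fe K line whose red wing reaches very small radii
# requires the ISCO to sit close to the horizon, i.e., a high spin.
import math

def r_isco(a):
    """Prograde ISCO radius in gravitational radii (GM/c^2)."""
    z1 = 1 + (1 - a**2)**(1/3) * ((1 + a)**(1/3) + (1 - a)**(1/3))
    z2 = math.sqrt(3 * a**2 + z1**2)
    return 3 + z2 - math.sqrt((3 - z1) * (3 + z1 + 2 * z2))

print(r_isco(0.0))   # ~6.0 r_g for a non-rotating (Schwarzschild) black hole
print(r_isco(0.98))  # ~1.6 r_g for the spin reported here
```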
We address the scientific “time” concept in the context of more general relaxation processes toward the Wärmetod of thermodynamic equilibrium. More specifically, we sketch the construction of a conceptual ladder of chemical reaction steps that can rigorously bridge a description from the microscopic domain of molecular quantum chemistry to the macroscopic materials domain of Gibbsian thermodynamics. This conceptual reformulation follows the pioneering work of Kenichi Fukui (Nobel 1981) in rigorously formulating the intrinsic reaction coordinate (IRC) pathway for the controlled description of non-equilibrium passages between reactant and product equilibrium states of an overall material transformation. Elementary chemical reaction steps are thereby identified as the logical building blocks of an integrated mathematical framework that seamlessly spans the gulf between classical (pre-1925) and quantal (post-1925) scientific conceptions and encompasses both static and dynamic aspects of material change. All modern chemical reaction rate studies build on the apparent infallibility of quantum-chemical solutions of Schrödinger’s wave equation and its Dirac-type relativistic corrections. This infallibility may now be properly accepted as an added “inductive law” of Gibbsian chemical thermodynamics, the only component of 19th-century physics that passed intact through the revolutionary quantum upheavals of 1925.
Yoichi Tamura, Akio Taniguchi, Tom J. L. C. Bakx et al.
We report the Australia Telescope Compact Array and Nobeyama 45 m telescope detection of a remarkably bright ($S_{1.1\,\mathrm{mm}}$ = 44 mJy) submillimeter galaxy, MM J154506.4−344318, in emission lines at 48.5 and 97.0 GHz, respectively. We also identify part of an emission line at ≈218.3 GHz using the Atacama Large Millimeter/submillimeter Array (ALMA). Together with photometric redshift estimates and the ratio between the line and infrared luminosities, we conclude that the emission lines are most likely the J = 2–1, 4–3, and 9–8 transitions of $^{12}$CO at redshift z = 3.753 ± 0.001. ALMA 1.3 mm continuum imaging reveals an arc and a spot separated by an angular distance of 1.″6, indicative of a strongly lensed dusty star-forming galaxy with respective molecular and dust masses of $\log M_{\mathrm{mol}}/M_{\odot} \approx 11.5$ and $\log M_{\mathrm{dust}}/M_{\odot} \approx 9.4$ after correction for the ≈6.6× gravitational magnification. The inferred dust-to-gas mass ratio is found to be high (≈0.0083) among coeval dusty star-forming galaxies, implying the presence of a massive, chemically enriched reservoir of cool interstellar medium at z ≈ 4, or 1.6 Gyr after the Big Bang.
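The line identification can be sanity-checked with standard 12CO rest frequencies: at z = 3.753 the J = 2–1, 4–3, and 9–8 transitions land at roughly the observed frequencies, as the short sketch below shows.

```python
# Hedged sketch: checking that the observed frequencies are consistent with
# 12CO J = 2-1, 4-3, and 9-8 at z = 3.753, using standard CO rest frequencies.
rest_GHz = {"CO(2-1)": 230.538, "CO(4-3)": 461.041, "CO(9-8)": 1036.912}
z = 3.753
for line, nu_rest in rest_GHz.items():
    print(f"{line}: nu_obs = {nu_rest / (1 + z):.2f} GHz")
# -> ~48.5, ~97.0, and ~218.2 GHz, matching the detections described above.
```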
To address the issues of insufficient feature utilization in high-entropy regions (such as complex textures and edges), difficulty in detail recovery, and excessive model parameters with high computational complexity in existing remote sensing image super-resolution networks, a novel dual-branch hybrid-scale feature aggregation network (HSFAN) is proposed. The design of this network aims to achieve an optimal balance between model complexity and reconstruction quality. The main branch of the HSFAN effectively expands the receptive field through a multi-scale parallel large convolution kernel (MSPLCK) module, enhancing the ability to model global structures that contain rich information, while maintaining consistency constraints in the feature space. Meanwhile, an enhanced parallel attention (EPA) module is incorporated, optimizing feature allocation by prioritizing high-entropy feature channels and spatial locations, thereby improving the expression of key details. The auxiliary branch is designed with a multi-scale large-kernel attention (MSLA) module, employing depthwise separable convolutions to significantly reduce the computational overhead in the feature processing path, while adaptive attention weighting strengthens the capture and reconstruction of local high-frequency information. Experimental results show that, for the ×4 super-resolution task on the UC Merced dataset, the proposed algorithm achieves a PSNR of 27.91 dB and an SSIM of 0.7616, outperforming most current mainstream super-resolution algorithms, while maintaining a low computational cost and model parameter count. This provides a new research approach and technical route for remote sensing image super-resolution reconstruction.
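As a concrete illustration of the kind of lightweight operator described for the auxiliary branch, the sketch below implements a depthwise-separable large-kernel convolution in PyTorch; the channel count and kernel size are illustrative assumptions, not the HSFAN configuration.

```python
# Hedged sketch: a depthwise-separable large-kernel convolution of the kind the
# MSLA branch is described as using to cut computation; sizes are illustrative.
import torch
import torch.nn as nn

class DepthwiseSeparableLargeKernel(nn.Module):
    def __init__(self, channels, kernel_size=7):
        super().__init__()
        # Depthwise: one large spatial kernel per channel (groups=channels).
        self.depthwise = nn.Conv2d(channels, channels, kernel_size,
                                   padding=kernel_size // 2, groups=channels)
        # Pointwise: 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 64, 48, 48)            # low-resolution feature map
y = DepthwiseSeparableLargeKernel(64)(x)  # same spatial size, far fewer FLOPs
```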
Dieu D. Nguyen, Michele Cappellari, Hai N. Ngo et al.
Understanding the demographics of intermediate-mass black holes (IMBHs, M_BH ≈ 10²–10⁵ M_⊙) in low-mass galaxies is key to constraining black hole seed formation models, but detecting them is challenging due to their small gravitational sphere of influence (SOI). The upcoming Extremely Large Telescope (ELT) High Angular Resolution Monolithic Optical and Near-infrared Integral Field Spectrograph (HARMONI) instrument, with its high angular resolution, offers a promising solution. We present simulations assessing HARMONI’s ability to measure IMBH masses in nuclear star clusters (NSCs) of nearby dwarf galaxies. We selected a sample of 44 candidates within 10 Mpc. For two representative targets, NGC 300 and NGC 3115 dw01, we generated mock HARMONI integral-field data cubes using realistic inputs derived from Hubble Space Telescope imaging, stellar population models, and Jeans anisotropic models (JAM), assuming IMBH masses up to 1% of the NSC mass. We simulated observations across six near-infrared gratings at 10 mas resolution. Analyzing the mock data with standard kinematic extraction and JAM models in a Bayesian framework, we demonstrate that HARMONI can resolve the IMBH SOI and accurately recover masses down to ≈0.5% of the NSC mass within feasible exposure times. These results highlight HARMONI’s potential to revolutionize IMBH studies.
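A quick way to see why 10 mas resolution matters is to estimate the SOI angular size directly; the sketch below uses r_SOI = G M_BH / σ² with illustrative numbers for a nearby nuclear star cluster, which are assumptions rather than the paper's specific targets.

```python
# Hedged sketch: the black-hole sphere of influence and its angular size, which
# sets the resolution an IFU must reach; the numbers below are illustrative.
import math

G = 4.301e-3  # gravitational constant in pc (km/s)^2 / Msun

def soi_angular_size_mas(M_bh_msun, sigma_kms, distance_Mpc):
    r_soi_pc = G * M_bh_msun / sigma_kms**2           # r_SOI = G M_BH / sigma^2
    theta_rad = r_soi_pc / (distance_Mpc * 1e6)       # small-angle approximation
    return theta_rad * 180 / math.pi * 3600 * 1000    # radians -> milliarcsec

# A 10^4 Msun IMBH in an NSC with sigma ~ 20 km/s at ~2 Mpc: SOI ~ 11 mas,
# comparable to the 10 mas HARMONI resolution quoted above.
print(soi_angular_size_mas(1e4, 20.0, 2.0))
```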
The CMS collaboration, A. Hayrapetyan, A. Tumasyan et al.
Measurements of inclusive and normalized differential cross sections of the associated production of top quark-antiquark and bottom quark-antiquark pairs, $\mathrm{t}\bar{\mathrm{t}}\mathrm{b}\bar{\mathrm{b}}$, are presented. The results are based on data from proton-proton collisions collected by the CMS detector at a centre-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 138 fb⁻¹. The cross sections are measured in the lepton+jets decay channel of the top quark pair, using events containing exactly one isolated electron or muon and at least five jets. Measurements are made in four fiducial phase space regions, targeting different aspects of the $\mathrm{t}\bar{\mathrm{t}}\mathrm{b}\bar{\mathrm{b}}$ process. Distributions are unfolded to the particle level through maximum likelihood fits, and compared with predictions from several event generators. The inclusive cross section measurements of this process in the fiducial phase space regions are the most precise to date. In most cases, the measured inclusive cross sections exceed the predictions with the chosen generator settings. The only exception is when using a particular choice of dynamic renormalization scale, $\mu_{\mathrm{R}} = \frac{1}{2}\prod_{i=\mathrm{t},\bar{\mathrm{t}},\mathrm{b},\bar{\mathrm{b}}} m_{\mathrm{T},i}^{1/4}$, where $m_{\mathrm{T},i}^2 = m_i^2 + p_{\mathrm{T},i}^2$ are the transverse masses of the top and bottom quarks. The differential cross sections show varying degrees of compatibility with the theoretical predictions, and none of the tested generators with the chosen settings simultaneously describe all the measured distributions.
The Low Energy X-ray Telescope (LE) is one of the main instruments of the Insight-Hard X-ray Modulation Telescope (Insight-HXMT), the first x-ray astronomical satellite of China. The scientific objectives of the LE focus on scanning and pointed observations of x-ray sources in the soft x-ray band (1–13 keV). In order to complete these observation tasks and accurately analyze the LE background, it is essential to obtain background data for the detectors. We therefore designed an LE background experiment. It began with an underground measurement in the China Jinping Underground Laboratory, beneath a rock overburden of about 2400 m, followed by a ground-level measurement. Both experiments ran for extended periods, and comparison of the background data showed that the underground laboratory effectively shields cosmic rays: the detector background underground was more than one order of magnitude lower than at ground level. The experiments also revealed multiple x-ray fluorescence peaks of various elements in the background, including silicon from the detector itself, erbium in the ceramic substrate, and copper in the mounting plate. The anti-coincidence design of the detectors was observed to reduce the silicon x-ray fluorescence peaks. Comparing the measured background flux with the in-orbit background flux shows that the background generated by radioactive substances inside the LE detector is very low. Below 7.5 keV, the in-orbit flux is about 0.012 counts/s/keV, the ground flux is ∼3 × 10⁻³ counts/s/keV, and the underground flux is about 1.5 × 10⁻⁴ counts/s/keV. However, the in-orbit flux increases significantly above 7.5 keV, which does not occur in either the ground or the underground background experiments. These results provide reference and guidance for the scientific and instrument teams in analyzing LE data.
We analyze Spitzer spectra of 140 active galactic nuclei (AGN) detected in the hard X-rays (14–195 keV) by the Burst Alert Telescope on board Swift. This sample allows us to probe several orders of magnitude in black hole masses (10⁶–10⁹ M_⊙), Eddington ratios (10⁻³–1), X-ray luminosities (10⁴²–10⁴⁵ erg s⁻¹), and X-ray column densities (10²⁰–10²⁴ cm⁻²). The AGN emission is expected to be the dominant source of ionizing photons with energies ≳50 eV, and therefore, high-ionization mid-infrared (MIR) emission lines such as [Ne V] 14.32, 24.32 μm and [O IV] 25.89 μm are predicted to be good proxies of AGN activity, and robust against obscuration effects. We find high detection rates (≳85%–90%) for the MIR coronal emission lines in our AGN sample. The luminosities of these lines are correlated with the 14–150 keV luminosity (with a typical scatter of σ ∼ 0.4–0.5 dex), strongly indicating that the MIR coronal line (CL) emission is driven by AGN activity. CLs are also tightly correlated with the bolometric luminosity (σ ∼ 0.2–0.3 dex), calculated from a careful analysis of the spectral energy distribution. We find that the relationship between the CL strengths and L_14–150 keV is independent of black hole mass, AGN luminosity, and Eddington ratio, and mostly not affected by high X-ray column densities. This confirms that the MIR CLs can be used as unbiased tracers of the AGN power for X-ray luminosities in the 10⁴²–10⁴⁵ erg s⁻¹ range.
Gerrit Schellenberger, Ewan O’Sullivan, Simona Giacintucci et al.
The galaxy group NGC 6338 is one of the most violent group–group mergers known to date. While the central dominant galaxies rush at each other at 1400 km s⁻¹ along the line of sight, with dramatic gas heating and shock fronts detected, the central gas in the BCGs remains cool. There are also indications of feedback from active galactic nuclei, and neither subcluster core has been disrupted. With our deep radio uGMRT data at 383 and 650 MHz, we clearly detect a set of large, old lobes in the southern BCG coinciding with the X-ray cavities, while the northern and smaller BCG appears slightly extended in the radio. The southern BCG also hosts a smaller, younger set of lobes perpendicular to the larger lobes, but also coinciding with the inner X-ray cavities and matching the jet direction in the parsec-resolution VLBA image. Our spectral analysis confirms the history of two feedback cycles. The high radio frequency analysis classifies the compact source in the southern BCG with a power law, while ruling out a significant contribution from accretion. The radio lightcurve over three decades shows a change about 10 yr ago, which might be related to ongoing feedback in the core. The southern BCG in the NGC 6338 merger remains another prominent case where the direction of jet-mode feedback between two cycles changed dramatically.
Theodore Kareta, Cristina Thomas, Jian-Yang Li et al.
The impact of the Double Asteroid Redirection Test spacecraft into Dimorphos, moon of the asteroid Didymos, changed Dimorphos’s orbit substantially, largely from the ejection of material. We present results from 12 Earth-based facilities involved in a world-wide campaign to monitor the brightness and morphology of the ejecta in the first 35 days after impact. After an initial brightening of ∼1.4 mag, we find consistent dimming rates of 0.11–0.12 mag day⁻¹ in the first week, and 0.08–0.09 mag day⁻¹ over the entire study period. The system returned to its pre-impact brightness 24.3–25.3 days after impact, though the primary ejecta tail remained. The dimming paused briefly eight days after impact, near in time to the appearance of the second tail. This was likely due to a secondary release of material after re-impact of a boulder released in the initial impact, though movement of the primary ejecta through the aperture likely played a role.
Crowd evacuation has gained increasing attention due to its importance in the day-to-day management of public areas. During an emergency evacuation, a variety of factors need to be considered when designing a practical evacuation model. For example, relatives tend to move together or look for each other. Such behaviors increase the degree of disorder in an evacuating crowd and make evacuations harder to model. In this paper, we propose an entropy-based combined behavior model to better analyze the influence of these behaviors on the evacuation process. Specifically, we use the Boltzmann entropy to quantify the degree of chaos in the crowd. The evacuation behavior of heterogeneous people is simulated through a series of behavior rules. Moreover, we devise a velocity adjustment method to ensure that evacuees follow a more orderly direction. Extensive simulation results demonstrate the effectiveness of the proposed evacuation model and provide useful insights into the design of practical evacuation strategies.
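One common way to attach a Boltzmann-style entropy to a crowd is to count the microstates of N pedestrians distributed over grid cells; the sketch below follows that convention (S = ln W with W = N!/∏ nᵢ!), which may differ in detail from the paper's own definition.

```python
# Hedged sketch: a Boltzmann-style entropy for crowd occupancy over grid cells,
# S = ln W with W = N! / prod(n_i!). The paper's exact formulation may differ.
from math import lgamma

def boltzmann_entropy(cell_counts):
    """ln W for occupancy numbers n_i; log-gamma avoids huge factorials."""
    n_total = sum(cell_counts)
    return lgamma(n_total + 1) - sum(lgamma(n + 1) for n in cell_counts)

print(boltzmann_entropy([25, 25, 25, 25]))  # evenly spread crowd: high entropy
print(boltzmann_entropy([97, 1, 1, 1]))     # crowd jammed in one cell: low entropy
```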
Magdiel Jiménez-Guarneros, Jonas Grande-Barreto, Jose de Jesus Rangel-Magdaleno
Early detection of fault events during the operation of electromechanical systems is one of the most attractive and critical data challenges in modern industry. Although these electromechanical systems tend to experience typical faults, unexpected and unknown faults can also arise during operation. However, current models for automatic detection can learn new faults only at the cost of forgetting concepts previously learned. This article presents a multiclass incremental learning (MCIL) framework based on a 1D convolutional neural network (CNN) for fault detection in induction motors (IMs). The presented framework tackles the forgetting problem by storing a representative exemplar set from past data (known faults) in memory. The 1D CNN is then fine-tuned over the selected exemplar set and data from new faults. Test samples are classified with a nearest centroid classifier (NCC) in the feature space of the 1D CNN. The proposed framework was evaluated and validated on two public datasets for fault detection in induction motors: asynchronous motor common fault (AMCF) and Case Western Reserve University (CWRU). Experimental results show that the proposed framework is an effective solution for incorporating new induction motor faults alongside those already known and detecting them with high accuracy across the different incremental phases.
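The classification step can be illustrated with a minimal nearest-centroid routine operating on feature vectors; the random features below stand in for the 1D-CNN embeddings and are not derived from the AMCF or CWRU datasets.

```python
# Hedged sketch: nearest-centroid classification in a learned feature space, as
# used for the incremental fault classes; features are random stand-ins.
import numpy as np

def nearest_centroid_predict(features, class_centroids):
    """Assign each feature vector to the class with the closest centroid."""
    labels = list(class_centroids)
    centroids = np.stack([class_centroids[c] for c in labels])        # (C, D)
    dists = np.linalg.norm(features[:, None, :] - centroids, axis=2)  # (N, C)
    return [labels[i] for i in dists.argmin(axis=1)]

rng = np.random.default_rng(0)
centroids = {"healthy": rng.normal(0, 1, 16), "bearing_fault": rng.normal(3, 1, 16)}
test_feats = rng.normal(3, 1, size=(5, 16))
print(nearest_centroid_predict(test_feats, centroids))
```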
The aim of multi-agent reinforcement learning systems is to provide interacting agents with the ability to collaboratively learn and adapt to the behavior of other agents. Typically, an agent receives private observations that provide only a partial view of the true state of the environment. In realistic settings, however, a harsh environment may cause one or more agents to exhibit arbitrarily faulty or malicious behavior, which can be enough to make current coordination mechanisms fail. In this paper, we study a practical scenario of multi-agent reinforcement learning systems, considering the security issues that arise in the presence of agents with arbitrarily faulty or malicious behavior. The previous state-of-the-art work on coping with extremely noisy environments was designed under the assumption that the noise intensity in the environment is known in advance. When the noise intensity changes, the existing method has to adjust the model configuration to learn in the new environment, which limits its practical applications. To overcome these difficulties, we present an Attention-based Fault-Tolerant (FT-Attn) model, which selects not only correct but also relevant information for each agent at every time step in noisy environments. The multi-head attention mechanism enables the agents to learn effective communication policies through experience, concurrently with the action policies. Empirical results show that FT-Attn beats previous state-of-the-art methods in some extremely noisy environments in both cooperative and competitive scenarios, coming much closer to the upper-bound performance. Furthermore, FT-Attn exhibits a more general fault-tolerance ability and does not rely on prior knowledge about the noise intensity of the environment.
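For readers unfamiliar with the mechanism, the snippet below runs multi-head attention over per-agent embeddings, the generic building block FT-Attn is described as using; the dimensions and the use of PyTorch's built-in layer are assumptions, not the authors' implementation.

```python
# Hedged sketch: multi-head attention over per-agent observation embeddings; the
# learned attention weights can down-weight faulty or irrelevant senders.
import torch
import torch.nn as nn

n_agents, embed_dim, n_heads = 6, 32, 4
attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)

# One embedding per agent; each agent attends over all agents' messages.
agent_embeddings = torch.randn(1, n_agents, embed_dim)  # (batch, agents, dim)
attended, weights = attn(agent_embeddings, agent_embeddings, agent_embeddings)
print(attended.shape, weights.shape)  # (1, 6, 32), (1, 6, 6)
```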
Battery energy storage technology is an important part of industrial parks' efforts to ensure a stable power supply, but its coarse charging and discharging modes struggle to meet the application requirements of energy saving, emission reduction, cost reduction, and efficiency improvement. As a classic deep reinforcement learning method, the deep Q-network is widely used to solve the problem of user-side battery energy storage charging and discharging, and in some scenarios its performance has reached the level of a human expert. However, the update of sample priorities in the experience memory often lags behind the update of the Q-network parameters. In response to the need for lean management of battery charging and discharging, this paper proposes an improved deep Q-network that updates the priorities of sequence samples and improves the training performance of the deep neural network, reducing the cost of charging and discharging actions and the energy consumption of the park. The proposed method considers factors such as real-time electricity price, battery status, and time. The energy consumption state, charging and discharging behavior, reward function, and neural network structure are designed to support flexible scheduling of charging and discharging strategies and ultimately optimize the benefits of battery energy storage. The proposed method solves the problem of lagging priority updates and improves the sample utilization efficiency of the experience pool and the learning performance. The paper selects electricity price data from the United States and some regions of China for simulation experiments. Experimental results show that, compared with the traditional algorithm, the proposed approach achieves better performance under both electricity price systems, thereby greatly reducing the cost of battery energy storage and providing a stronger guarantee for the safe and stable operation of battery energy storage systems in industrial parks.
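The core idea of refreshing sample priorities immediately after each Q-network update can be sketched with a toy prioritized replay buffer, as below; this is a generic prioritized-replay illustration, not the paper's exact scheme.

```python
# Hedged sketch: refreshing sample priorities from the latest TD errors right
# after a Q-network update, so stored priorities do not lag behind the network.
import numpy as np

class PrioritizedReplay:
    def __init__(self, capacity, alpha=0.6, eps=1e-3):
        self.priorities = np.zeros(capacity)
        self.alpha, self.eps, self.size = alpha, eps, 0

    def add(self, idx, td_error):
        self.priorities[idx] = (abs(td_error) + self.eps) ** self.alpha
        self.size = max(self.size, idx + 1)

    def sample(self, batch_size):
        p = self.priorities[:self.size]
        return np.random.choice(self.size, batch_size, p=p / p.sum())

    def update(self, indices, td_errors):
        """Called after every Q-network update with freshly computed TD errors."""
        self.priorities[indices] = (np.abs(td_errors) + self.eps) ** self.alpha

buf = PrioritizedReplay(capacity=1000)
for i, err in enumerate([0.5, 2.0, 0.1, 1.2]):
    buf.add(i, err)
batch = buf.sample(2)
buf.update(batch, np.array([0.05, 0.9]))  # refresh priorities for sampled items
```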
Panagiotis Sismanidis, Iphigenia Keramitsoglou, Stefano Barberis et al.
The urban heat island (UHI) effect influences the heating and cooling (H&C) energy demand of buildings and should be taken into account in H&C energy demand simulations. To provide information about this effect, the PLANHEAT integrated tool, a GIS-based, open-source software tool for selecting, simulating, and comparing alternative low-carbon and economically sustainable H&C scenarios, includes a dataset of 1 × 1 km hourly heating and cooling degrees (HD and CD, respectively). HD and CD are energy demand proxies defined as the deviation of the outdoor surface air temperature from a base temperature, below or above which a building is assumed to need heating or cooling, respectively. PLANHEAT's HD and CD are calculated from a dataset of gridded surface air temperatures derived using satellite thermal data from the Meteosat-10 Spinning Enhanced Visible and InfraRed Imager (SEVIRI). This article describes the method for producing this dataset and presents the results for Antwerp (Belgium), one of the three PLANHEAT validation cities. The results demonstrate the spatial and temporal information content of PLANHEAT's HD and CD dataset, while the accuracy assessment shows that they agree well with reference values retrieved from in situ surface air temperatures. This dataset is an example of application-oriented research that provides location-specific results with practical utility.
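The HD/CD definition above translates directly into code; the sketch below uses illustrative base temperatures (15.5 °C for heating, 22 °C for cooling), which are assumptions rather than PLANHEAT's configured values.

```python
# Hedged sketch of the heating/cooling-degree definition given above: the hourly
# deviation of surface air temperature from a base temperature. The base values
# here are illustrative assumptions.
def heating_degrees(t_air_c, t_base_heat=15.5):
    return max(0.0, t_base_heat - t_air_c)

def cooling_degrees(t_air_c, t_base_cool=22.0):
    return max(0.0, t_air_c - t_base_cool)

hourly_temps = [4.2, 9.8, 17.0, 26.5]  # example hourly values for one grid cell
print([heating_degrees(t) for t in hourly_temps])  # [11.3, 5.7, 0.0, 0.0]
print([cooling_degrees(t) for t in hourly_temps])  # [0.0, 0.0, 0.0, 4.5]
```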