E.A. Egorov, A.V. Kuroyedov, P.Ch. Zavadskiy
et al.
<p>
E.A. Egorov<sup>1</sup>, A.V. Kuroyedov<sup>1,2</sup>, P.Ch. Zavadskiy<sup>3</sup>, A.A. Vitkov<sup>4</sup>, N.A. Bakunina<sup>5,6</sup>, D.A. Baryshnikova<sup>7</sup>, I.A. Bulakh<sup>8</sup>, O.G. Zvereva<sup>9,10</sup>, S.A. Zubasheva<sup>11</sup>, A.B. Galimova<sup>12</sup>, O.V. Gaponko<sup>1,2</sup>, A.M. Getmanova<sup>13</sup>, A.A. Gusarevich<sup>14,15</sup>, V.E. Korelina<sup>16</sup>, S.N. Lanin<sup>17</sup>, E.A. Stepanova<sup>18</sup>, T.V. Chernyakova<sup>1,19</sup>, A.P. Shakhalova<sup>20</sup>, Yu.I. Razhko<sup>21</sup>, H. Yuan<sup>22</sup>, X. Sun<sup>23</sup>, L. Wu<sup>24</sup>, M. Bozic<sup>25,26</sup>, S.L. Ferkova<sup>27,28</sup>, A.B. Zakhidov<sup>29</sup>
</p>
<p>
<sup>1</sup>Pirogov Russian National Research Medical University, Moscow, Russian Federation
</p>
<p>
<sup>2</sup>Mandryka Military Clinical Hospital, Moscow, Russian Federation
</p>
<p>
<sup>3</sup>SP LLC, St. Petersburg, Russian Federation
</p>
<p>
<sup>4</sup>M.M. Krasnov Research Institute of Eye Diseases, Moscow, Russian Federation
</p>
<p>
<sup>5</sup>RUDN University, Moscow, Russian Federation
</p>
<p>
<sup>6</sup>Pirogov City Clinical Hospital No.1, Moscow, Russian Federation
</p>
<p>
<sup>7</sup>Sectoral Clinical Diagnostic Center of the Gazprom PJSC, Moscow, Russian Federation
</p>
<p>
<sup>8</sup>Ivastramed Medical Center LLC, Ivanovo, Russian Federation
</p>
<p>
<sup>9</sup>Kazan State Medical Academy — Branch of the Russian Medical Academy of Continuous Professional Education, Kazan, Russian Federation
</p>
<p>
<sup>10</sup>E.V. Adamyuk Republican Clinical Ophthalmological Hospital, Kazan, Russian Federation
</p>
<p>
<sup>11</sup>Treatment and Diagnostic Center No. 9 of the Ministry of Defense, Moscow, Russian Federation
</p>
<p>
<sup>12</sup>Bashkir State Medical University, Ufa, Russian Federation
</p>
<p>
<sup>13</sup>Bryansk Regional Hospital No. 1, Bryansk, Russian Federation
</p>
<p>
<sup>14</sup>Novosibirsk State Medical University, Novosibirsk, Russian Federation
</p>
<p>
<sup>15</sup>Clinical Hospital "RZD-Medicine" of the city of Novosibirsk, Novosibirsk, Russian Federation
</p>
<p>
<sup>16</sup>North-Western State Medical University named after I.I. Mechnikov, St. Petersburg, Russian Federation
</p>
<p>
<sup>17</sup>Professor P.G. Makarov Krasnoyarsk Regional Ophthalmological Clinical Hospital, Krasnoyarsk, Russian Federation
</p>
<p>
<sup>18</sup>Omsk State Medical University, Omsk, Russian Federation
</p>
<p>
<sup>19</sup>52 Consulting and Diagnostic Center of the Ministry of Defense, Moscow, Russian Federation
</p>
<p>
<sup>20</sup>Tonus Amaris Medical Clinical Center, Nizhny Novgorod, Russian Federation
</p>
<p>
<sup>21</sup>Republican Scientific Practical Center of Radiation Medicine and Human Ecology, Gomel, Republic of Belarus
</p>
<p>
<sup>22</sup>The Second Affiliated Hospital, Harbin Medical University, Harbin, People's Republic of China
</p>
<p>
<sup>23</sup>Fudan University, Shanghai, People's Republic of China
</p>
<p>
<sup>24</sup>Peking University, Beijing, People's Republic of China
</p>
<p>
<sup>25</sup>Faculty of Medicine, University of Belgrade, Belgrade, Serbia
</p>
<p>
<sup>26</sup>University Clinical Center of Serbia, Belgrade, Serbia
</p>
<p>
<sup>27</sup>Ophthalmocenter Euromedix Betliarska, Bratislava, Slovakia
</p>
<p>
<sup>28</sup>Malacky Hospital, Malacky, Slovakia
</p>
<p>
<sup>29</sup>Eye Clinic SAIF-OPTIMA, Tashkent, Uzbekistan
</p>
<p>
<b>Aim: </b>to determine the "target" intraocular pressure (IOP) level in patients with normal-tension glaucoma (NTG) depending on the disease stage.
</p>
<p>
<b>Patients and Methods: </b>the final protocol of a multicenter, analytical, selective, combined study conducted at 25 clinical centers in six countries (Russia, China, Belarus, Serbia, Slovakia, and Uzbekistan) included data from 269 patients. The glaucoma stage was confirmed by ophthalmoscopy and/or fundus photography and/or optical coherence tomography and/or Heidelberg retinal tomography, together with standard automated perimetry (SAP). Visual acuity was examined, clinical refraction was determined, and the tonometric IOP level was measured (Maklakov tonometry with a 10 g load in the Caucasian population; Goldmann applanation tonometry in the Asian population).
</p>
<p>
<b>Results: </b>at diagnosis, patients with NTG were several years younger than those with primary open-angle glaucoma (POAG). The greatest increase in glaucoma progression was observed in patients with advanced disease (+91.4%), while the number of patients with stage I and stage II disease decreased by 5.5% and 23.4%, respectively. In patients with mild NTG (stage I), the IOP level decreased by 19.9% and 16.6% from the initial values when measured by the Maklakov and Goldmann methods, respectively. In patients with stages II and III, the values decreased by 17.1% and 9.4% (on average, by 14.1%) and by 21.9% and 9.3% (on average, by 15.3%), respectively. Only among patients with advanced NTG was the progression rate approximately 7 times higher in individuals with a positive family history (p=0.016; U=2,416).
</p>
<p>
<b>Conclusion: </b>regardless of the disease stage, the lower the initial IOP level, the smaller its reduction by the final examination. Over the entire follow-up period, the average decrease in IOP was 16.5% regardless of the NTG stage. With the treatment provided, IOP in NTG patients was on average 2–4 mm Hg lower than in individuals with POAG. Thus, current guidelines for determining "target" IOP values in this group of patients should be amended.
</p>
<p>
<b>Keywords: </b>normal-tension glaucoma, "target" intraocular pressure, glaucoma progression, tonometry.
</p>
<p>
<b>For citation: </b>Egorov E.A., Kuroyedov A.V., Zavadskiy P.Ch., Vitkov A.A., Bakunina N.A., Baryshnikova D.A., Bulakh I.A., Zvereva O.G., Zubasheva S.A., Galimova A.B., Gaponko O.V., Getmanova A.M., Gusarevich A.A., Korelina V.E., Lanin S.N., Stepanova E.A., Chernyakova T.V., Shakhalova A.P., Razhko Yu.I., Yuan H., Sun X., Wu L., Bozic M., Ferkova S.L., Zakhidov A.B. Target intraocular pressure level in patients with different stages of low tension glaucoma. Russian Journal of Clinical Ophthalmology. 2025;25(1):9–19 (in Russ.). DOI: 10.32364/2311-7729-2025-25-1-2
</p>
Text-guided diffusion models have greatly advanced image editing and generation. However, achieving physically consistent image retouching with precise parameter control (e.g., exposure, white balance, zoom) remains challenging. Existing methods either rely solely on ambiguous and entangled text prompts, which hinders precise camera control, or train separate heads/weights for parameter adjustment, which compromises scalability, multi-parameter composition, and sensitivity to subtle variations. To address these limitations, we propose CameraMaster, a unified camera-aware framework for image retouching. The key idea is to explicitly decouple the camera directive and then coherently integrate two critical information streams: a directive representation that captures the photographer's intent, and a parameter embedding that encodes precise camera settings. CameraMaster first uses the camera parameter embedding to modulate both the camera directive and the content semantics. The modulated directive is then injected into the content features via cross-attention, yielding a strongly camera-sensitive semantic context. In addition, the directive and camera embeddings are injected as conditioning and gating signals into the time embedding, enabling unified, layer-wise modulation throughout the denoising process and enforcing tight semantic-parameter alignment. To train and evaluate CameraMaster, we construct a large-scale dataset of 78K image-prompt pairs annotated with camera parameters. Extensive experiments show that CameraMaster produces monotonic and near-linear responses to parameter variations, supports seamless multi-parameter composition, and significantly outperforms existing methods.
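To make the conditioning scheme concrete, the sketch below shows one way the described modulation could be wired up in PyTorch: a camera-parameter embedding FiLM-modulates the directive and content tokens, the modulated directive is injected via cross-attention, and the camera embedding conditions and gates the diffusion time embedding. All module names, dimensions, and the exact gating form are illustrative assumptions, not the released CameraMaster code.

```python
# Hedged sketch of the conditioning scheme described in the abstract, in PyTorch.
# Module names and dimensions are illustrative assumptions, not the released code.
import torch
import torch.nn as nn

class CameraConditioning(nn.Module):
    def __init__(self, dim=768, n_params=3):
        super().__init__()
        self.param_mlp = nn.Sequential(nn.Linear(n_params, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.to_scale_shift = nn.Linear(dim, 2 * dim)       # FiLM-style modulation
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.time_gate = nn.Linear(dim, dim)

    def forward(self, content_tokens, directive_tokens, camera_params, time_emb):
        # 1) Encode precise camera settings (e.g. exposure, white balance, zoom).
        cam = self.param_mlp(camera_params)                  # (B, dim)
        scale, shift = self.to_scale_shift(cam).chunk(2, dim=-1)
        # 2) Modulate both the directive and the content semantics with the camera embedding.
        directive = directive_tokens * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
        content = content_tokens * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
        # 3) Inject the modulated directive into the content features via cross-attention.
        context, _ = self.cross_attn(query=content, key=directive, value=directive)
        # 4) Condition and gate the diffusion time embedding for layer-wise modulation.
        time_emb = time_emb * torch.sigmoid(self.time_gate(cam)) + cam
        return context, time_emb
```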
Astronauts take thousands of photos of Earth per day from the International Space Station, which, once localized on Earth's surface, are used for a multitude of tasks, ranging from climate change research to disaster management. The localization process, which has been performed manually for decades, has recently been approached through image retrieval solutions: given an astronaut photo, find its most similar match among a large database of geo-tagged satellite images, in a task called Astronaut Photography Localization (APL). Yet, existing APL approaches are trained only using satellite images, without taking advantage of the millions of open-source astronaut photos. In this work we present the first APL pipeline capable of leveraging astronaut photos for training. We first produce full localization information for 300,000 manually weakly labeled astronaut photos through an automated pipeline, and then use these images to train a model, called AstroLoc. AstroLoc learns a robust representation of Earth's surface features through two losses: astronaut photos paired with their matching satellite counterparts in a pairwise loss, and a second loss on clusters of satellite imagery weighted by their relevance to astronaut photography via unsupervised mining. We find that AstroLoc achieves a staggering 35% average improvement in recall@1 over previous SOTA, pushing the limits of existing datasets with a recall@100 consistently over 99%. Finally, we note that AstroLoc, without any fine-tuning, provides excellent results for related tasks like the lost-in-space satellite problem and historical space imagery localization.
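The two training signals described above can be sketched as follows: the pairwise term is written as an InfoNCE-style loss and the cluster term as a relevance-weighted classification loss. The exact loss forms and weighting used by AstroLoc are not specified in the abstract, so this is an assumed reconstruction.

```python
# Minimal sketch of the two losses described for AstroLoc (pairwise photo-satellite loss
# plus a relevance-weighted loss over satellite clusters). Names and weighting are
# illustrative assumptions based only on the abstract.
import torch
import torch.nn.functional as F

def pairwise_loss(photo_emb, sat_emb, temperature=0.07):
    """InfoNCE-style loss: each astronaut photo should match its paired satellite tile."""
    photo_emb = F.normalize(photo_emb, dim=-1)
    sat_emb = F.normalize(sat_emb, dim=-1)
    logits = photo_emb @ sat_emb.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(photo_emb.size(0), device=photo_emb.device)
    return F.cross_entropy(logits, targets)

def weighted_cluster_loss(sat_emb, cluster_logits_head, cluster_ids, relevance):
    """Classification over satellite-image clusters, weighted by how relevant each
    cluster is to astronaut photography (relevance obtained by unsupervised mining)."""
    logits = cluster_logits_head(sat_emb)                    # (B, n_clusters)
    per_sample = F.cross_entropy(logits, cluster_ids, reduction="none")
    return (relevance * per_sample).mean()
```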
AIM: To observe the retinal and choroidal circulations in patients with non-arteritic permanent central retinal artery occlusion (NA-CRAO) via optical coherence tomography angiography (OCTA) and analyze their correlation with visual acuity. METHODS: Sixty-two eyes with clinically confirmed acute NA-CRAO were included in the study and divided into type A (mild, n=29), type B (moderate, n=27), and type C (severe, n=6) based on the degree of visual loss, retinal edema, and arterial blood flow delay on fundus fluorescein angiography (FFA). Contralateral healthy eyes served as the control group. Best-corrected visual acuity (BCVA), slit-lamp microscopy, indirect ophthalmoscopy, fundus color photography, OCTA, and FFA were performed. Spearman's correlation analysis was used to determine the correlations between retinal and choroidal vessel parameters and visual acuity. RESULTS: There were no statistically significant differences in age, gender, or intraocular pressure among the three types and the control group (P>0.05). Vessel density in the deep capillary plexus (VD-DCP) was significantly decreased (P<0.05) in all three NA-CRAO types compared with the control group. Vessel density in the superficial vascular plexus (VD-SVP) was significantly decreased (P<0.05) in type A patients, and the choriocapillaris flow area was significantly decreased (P<0.05) in type B and type C patients compared with the control group, while the outer retinal flow area was significantly increased in type A (P<0.05) and decreased in type C patients (P<0.05). Retinal thickness was significantly increased in the type C group (P<0.05). The VD-SVP at the fovea in type A was significantly lower than in both type B and type C. The VD-SVP at the nasal parafovea in types A and B was significantly lower than in type C (P<0.05). The logMAR BCVA of type A was significantly better than that of the type B and C groups (P<0.05). Spearman's correlation analysis showed that logMAR BCVA was positively correlated with VD-SVP at the fovea (r=0.679, P=0.031) and the nasal parafovea (r=0.826, P=0.013). CONCLUSION: OCTA is valuable for assessing retinal ischemia and evaluating visual impairment. The deep retinal vasculature is commonly affected in all NA-CRAO types. VD-SVP at the fovea and nasal parafovea can serve as reliable markers of visual impairment in NA-CRAO.
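For readers who want to reproduce the correlation analysis, a minimal example with SciPy is given below; the input arrays are hypothetical placeholders, not the study data.

```python
# Illustrative sketch of the Spearman analysis reported above, assuming per-eye arrays
# of logMAR BCVA and foveal superficial vessel density (VD-SVP); values are placeholders.
import numpy as np
from scipy.stats import spearmanr

logmar_bcva = np.array([1.0, 1.3, 0.8, 1.7, 2.0, 1.5])      # hypothetical example values
vd_svp_fovea = np.array([18.2, 20.1, 15.4, 24.0, 26.3, 22.5])

rho, p_value = spearmanr(logmar_bcva, vd_svp_fovea)
print(f"Spearman rho={rho:.3f}, p={p_value:.3f}")
```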
Fundus photography, in combination with ultra-wide-angle fundus (UWF) techniques, has become an indispensable diagnostic tool in clinical settings by offering a more comprehensive view of the retina. However, unlike UWF scanning laser ophthalmoscopy (UWF-SLO), UWF fluorescein angiography (UWF-FA) requires the administration of a fluorescent dye via injection into the patient's hand or elbow. To mitigate the potential adverse effects associated with injections, researchers have proposed cross-modality medical image generation algorithms capable of converting UWF-SLO images into their UWF-FA counterparts. Current image generation techniques applied to fundus photography have difficulty producing high-resolution retinal images and, in particular, capturing minute vascular lesions. To address these issues, we introduce a novel conditional generative adversarial network (UWAFA-GAN) to synthesize UWF-FA from UWF-SLO. This approach employs multi-scale generators and an attention transmit module to efficiently extract both global structures and local lesions. Additionally, to counteract the image blurriness that arises from training with misaligned data, a registration module is integrated within this framework. Our method performs well in terms of inception scores and detail generation. Clinical user studies further indicate that the UWF-FA images generated by UWAFA-GAN are clinically comparable to authentic images in terms of diagnostic reliability. Empirical evaluations on our proprietary UWF image datasets show that UWAFA-GAN outperforms existing methods. The code is accessible at https://github.com/Tinysqua/UWAFA-GAN.
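The following is a minimal, hedged skeleton of one conditional-GAN training step for this kind of UWF-SLO to UWF-FA translation (pix2pix-style). UWAFA-GAN's actual multi-scale generators, attention transmit module, and registration module are omitted, and all class names and loss weights are assumptions.

```python
# Hedged skeleton of one conditional-GAN training step for UWF-SLO -> UWF-FA translation.
# Model classes and loss weights are assumptions; the paper's architecture additionally
# uses multi-scale generators, an attention transmit module, and a registration module.
import torch
import torch.nn.functional as F

def gan_train_step(G, D, opt_g, opt_d, slo, fa, l1_weight=100.0):
    # Discriminator: real pairs vs. generated pairs (conditional setup).
    fake_fa = G(slo)
    d_real = D(torch.cat([slo, fa], dim=1))
    d_fake = D(torch.cat([slo, fake_fa.detach()], dim=1))
    d_loss = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator and stay close to the target angiogram.
    d_fake = D(torch.cat([slo, fake_fa], dim=1))
    g_loss = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake)) + \
             l1_weight * F.l1_loss(fake_fa, fa)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```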
When an ($A$+2) nucleus is formed from a nucleus $A$ via a two-neutron transfer reaction, the constructive interference of the many possible reaction channels favors significant pairing enhancement through the continuum of the intermediate ($A$+1) nucleus [Phys. Lett. B \textbf{834}, 137413 (2022)]. I analyse this situation more generally, from the point of view of a varying pairing field and of the different continua leading to the formation of the ($A$+2) nucleus. I consider $^6$He and $^{22}$C, described as housing two neutrons in orbitals of $^5$He and $^{21}$C, respectively. The different possible situations show that continuum correlations are crucial to the extent of the pairing enhancement observed in these systems.
Autonomous mobile robots (AMRs) equipped with high-quality cameras have revolutionized the field of inspections by providing efficient and cost-effective means of conducting surveys. The use of autonomous inspection is becoming more widespread in a variety of contexts, yet it is still challenging to acquire the best inspection information autonomously. In situations where objects may block a robot's view, it is necessary to use reasoning to determine the optimal points for collecting data. Although researchers have explored cloud-based applications to store inspection data, these applications may not operate optimally under network constraints, and parsing these datasets can be manually intensive. Instead, there is an emerging requirement for AMRs to autonomously capture the most informative views efficiently. To address this challenge, we present an autonomous Next-Best-View (NBV) framework that maximizes the inspection information while reducing the number of pictures needed during operations. The framework consists of a formalized evaluation metric using ray-tracing and Gaussian process interpolation to estimate information reward based on the current understanding of the partially-known environment. A derivative-free optimization (DFO) method is used to sample candidate views in the environment and identify the NBV point. The proposed approach's effectiveness is shown by comparing it with existing methods and further validated through simulations and experiments with various vehicles.
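A minimal sketch of the view-selection step is given below: a Gaussian process interpolates an information map from already-observed viewpoints, and a simple derivative-free search (random candidate sampling) picks the view with the highest predicted reward. The reward definition and the ray-tracing visibility model from the paper are replaced by placeholders.

```python
# Hedged sketch of a Next-Best-View loop: interpolate an information map with a Gaussian
# process over already-observed points, then use a simple derivative-free search (random
# sampling) to pick the candidate view with the highest predicted reward.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def next_best_view(observed_xy, observed_info, candidate_bounds, n_candidates=500, rng=None):
    rng = rng or np.random.default_rng(0)
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    gp.fit(observed_xy, observed_info)                      # model of the partially-known environment

    lo, hi = candidate_bounds                               # e.g. ((0, 0), (10, 10))
    candidates = rng.uniform(lo, hi, size=(n_candidates, len(lo)))
    mean, std = gp.predict(candidates, return_std=True)
    reward = mean + std                                     # favor informative and uncertain views
    return candidates[np.argmax(reward)]

# Example: three observed viewpoints with measured information scores.
xy = np.array([[1.0, 2.0], [4.0, 5.0], [7.0, 1.0]])
info = np.array([0.2, 0.8, 0.4])
print(next_best_view(xy, info, ((0.0, 0.0), (10.0, 10.0))))
```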
Bokeh rendering is one of the most popular techniques in photography. It can make photographs visually appealing, drawing the viewer's attention to particular areas of the image. However, achieving a satisfactory bokeh effect is usually challenging, since mobile cameras are constrained by restricted optical systems, while expensive high-end DSLR lenses with large apertures are otherwise required. Therefore, many deep learning-based computational photography methods have been developed in recent years to mimic the bokeh effect. Nevertheless, most of these methods are limited to rendering the bokeh effect at a single fixed aperture. There is a lack of user-friendly bokeh rendering methods that provide precise focal plane control and customised bokeh generation, as well as a lack of authentic, realistic bokeh datasets that could promote bokeh learning across variable apertures. To address these two issues, in this paper we propose an effective controllable bokeh rendering method and contribute a Variable Aperture Bokeh Dataset (VABD). In the proposed method, the user can customize the focal plane to accurately locate the subjects of interest and select target aperture information for bokeh rendering. Experimental results on the public EBB! benchmark dataset and our constructed VABD dataset demonstrate that the customized focal plane, together with the aperture prompt, can bootstrap the model to simulate realistic bokeh effects. The proposed method achieves competitive state-of-the-art performance with only 4.4M parameters, which is much lighter than mainstream computational bokeh models. The dataset and source code will be released on GitHub at https://github.com/MoTong-AI-studio/VABM.
Artificial Intelligence (AI) tools have become incredibly powerful in generating synthetic images. Of particular concern are generated images that resemble photographs as they aspire to represent real world events. Synthetic photographs may be used maliciously by a broad range of threat actors, from scammers to nation-state actors, to deceive, defraud, and mislead people. Mitigating this threat usually involves answering a basic analytic question: Is the photograph real or synthetic? To address this, we have examined the capabilities of recent generative diffusion models and have focused on their flaws: visible artifacts in generated images which reveal their synthetic origin to the trained eye. We categorize these artifacts, provide examples, discuss the challenges in detecting them, suggest practical applications of our work, and outline future research directions.
Nicolas Chahine, Sira Ferradans, Javier Vazquez-Corral
et al.
Automated and robust portrait quality assessment (PQA) is of paramount importance in high-impact applications such as smartphone photography. This paper presents FHIQA, a learning-based approach to PQA that introduces a simple but effective quality score rescaling method based on image semantics, to enhance the precision of fine-grained image quality metrics while ensuring robust generalization to various scene settings beyond the training dataset. The proposed approach is validated by extensive experiments on the PIQ23 benchmark and comparisons with the current state of the art. The source code of FHIQA will be made publicly available on the PIQ23 GitHub repository at https://github.com/DXOMARK-Research/PIQ2023.
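The abstract describes the rescaling only at a high level; the snippet below illustrates the general idea of mapping a raw quality score into a per-scene-category range. The category ranges and mapping rule are hypothetical placeholders, not FHIQA's actual formulation.

```python
# Illustrative sketch of rescaling a raw quality prediction by image semantics
# (scene category). The per-category ranges are hypothetical, not the paper's rule.
SCENE_SCORE_RANGE = {            # assumed min/max quality observed per scene category
    "indoor": (0.20, 0.85),
    "outdoor": (0.10, 0.95),
    "night": (0.05, 0.70),
}

def rescale_quality(raw_score: float, scene: str) -> float:
    """Map a raw model score in [0, 1] into the score range of its scene category."""
    lo, hi = SCENE_SCORE_RANGE.get(scene, (0.0, 1.0))
    return lo + raw_score * (hi - lo)

print(rescale_quality(0.5, "night"))   # 0.375
```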
All-in-Focus (AIF) photography is expected to be a commercial selling point for modern smartphones. Standard AIF synthesis requires manual, time-consuming operations such as focal stack compositing, which is unfriendly to ordinary people. To achieve point-and-shoot AIF photography with a smartphone, we expect that an AIF photo can be generated from one shot of the scene, instead of from multiple photos captured by the same camera. Benefiting from the multi-camera module in modern smartphones, we introduce a new task of AIF synthesis from main (wide) and ultra-wide cameras. The goal is to recover sharp details from defocused regions in the main-camera photo with the help of the ultra-wide-camera one. The camera setting poses new challenges such as parallax-induced occlusions and inconsistent color between cameras. To overcome the challenges, we introduce a predict-and-refine network to mitigate occlusions and propose dynamic frequency-domain alignment for color correction. To enable effective training and evaluation, we also build an AIF dataset with 2686 unique scenes. Each scene includes two photos captured by the main camera, one photo captured by the ultrawide camera, and a synthesized AIF photo. Results show that our solution, termed EasyAIF, can produce high-quality AIF photos and outperforms strong baselines quantitatively and qualitatively. For the first time, we demonstrate point-and-shoot AIF photo synthesis successfully from main and ultra-wide cameras.
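As an illustration of frequency-domain color alignment between two cameras, the sketch below transfers the low-frequency Fourier amplitudes of the main-camera photo onto the ultra-wide-camera photo. This is a simple stand-in for the paper's dynamic frequency-domain alignment, whose exact form is not given in the abstract.

```python
# Hedged sketch of frequency-domain color alignment between two cameras: transfer the
# low-frequency Fourier amplitude of the main-camera photo onto the ultra-wide one.
import numpy as np

def align_color_fft(ultra_wide, main, beta=0.02):
    """Both inputs are float arrays of shape (H, W, 3) in [0, 1] and of equal size."""
    out = np.empty_like(ultra_wide)
    h, w = ultra_wide.shape[:2]
    ch, cw = int(h * beta), int(w * beta)                  # size of the low-frequency band
    for c in range(3):
        f_uw = np.fft.fftshift(np.fft.fft2(ultra_wide[..., c]))
        f_main = np.fft.fftshift(np.fft.fft2(main[..., c]))
        amp_uw, phase_uw = np.abs(f_uw), np.angle(f_uw)
        amp_main = np.abs(f_main)
        # Replace only the centred low-frequency amplitudes, which carry global color/illumination.
        amp_uw[h//2-ch:h//2+ch, w//2-cw:w//2+cw] = amp_main[h//2-ch:h//2+ch, w//2-cw:w//2+cw]
        out[..., c] = np.real(np.fft.ifft2(np.fft.ifftshift(amp_uw * np.exp(1j * phase_uw))))
    return np.clip(out, 0.0, 1.0)
```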
Image sensors, most notably the Charge Coupled Device (CCD), have revolutionized observational astronomy as perhaps the most important innovation since photography. With the 50th anniversary of the invention of the CCD having passed in 2019, it is time to review the development of detectors for the visible wavelength range, starting with the discovery of the photoelectric effect and the first experiments to utilize it for the photometry of stars at Sternwarte Babelsberg in 1913, through the invention of the CCD and its development at the Jet Propulsion Laboratory, to the high-performance CCD and CMOS imagers that are available off the shelf today.
David Komorowicz, Lu Sang, Ferdinand Maiwald
et al.
Historical buildings are a treasure and a milestone of human cultural heritage. Reconstructing 3D models of these buildings holds significant value. The rapid development of neural rendering methods makes it possible to recover 3D shape based only on archival photographs. However, this task presents considerable challenges due to the limitations of such datasets. Historical photographs are often limited in number, and the scenes in these photos might have changed over time. The radiometric quality of these images is also often sub-optimal. To address these challenges, we introduce an approach to reconstruct the geometry of historical buildings employing volumetric rendering techniques. We leverage dense point clouds as a geometric prior and introduce a color appearance embedding loss to recover the color of the building given the limited available color images. We aim for our work to spark increased interest in and focus on preserving historical buildings. Thus, we also introduce a new historical dataset of the Hungarian National Theater, providing a new benchmark for reconstruction methods.
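A minimal sketch of a per-image color appearance embedding, of the kind used to absorb radiometric differences between archival photos, is shown below; layer sizes, the embedding dimension, and the loss form are illustrative assumptions rather than the paper's implementation.

```python
# Hedged sketch of a per-image color appearance embedding for recovering building color
# from a handful of archival photos with varying radiometry. Sizes are illustrative.
import torch
import torch.nn as nn

class AppearanceColorHead(nn.Module):
    def __init__(self, n_images, feat_dim=256, emb_dim=32):
        super().__init__()
        self.appearance = nn.Embedding(n_images, emb_dim)    # one latent per training photo
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + emb_dim, 128), nn.ReLU(), nn.Linear(128, 3), nn.Sigmoid()
        )

    def forward(self, point_features, image_ids):
        emb = self.appearance(image_ids)                      # (N, emb_dim)
        return self.mlp(torch.cat([point_features, emb], dim=-1))

def color_loss(pred_rgb, gt_rgb, has_color_mask):
    """Supervise color only on rays drawn from photos with usable color information."""
    return ((pred_rgb - gt_rgb) ** 2)[has_color_mask].mean()
```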
Davide Tore, Riccardo Faletti, Andrea Biondo
et al.
Atrial fibrillation (AF) is the most common arrhythmia, and its prevalence is growing over time. Since the introduction of catheter ablation procedures for the treatment of AF, cardiovascular magnetic resonance (CMR) has played an increasingly important role in the management of this pathology, both in clinical practice and as a research tool providing insight into the arrhythmic substrate. The most common applications of CMR for AF catheter ablation are the angiographic study of the pulmonary veins, the sizing of the left atrium (LA), and the evaluation of the left atrial appendage (LAA) for stroke risk assessment. Moreover, CMR may provide useful information about the anatomical relationship of the esophagus to the LA to help prevent thermal injuries during ablation procedures. Late gadolinium enhancement (LGE) imaging allows evaluation of the burden of atrial fibrosis before the ablation procedure and assessment of procedure-induced scarring. Recently, the possibility of assessing atrial function, strain, and the burden of cardiac adipose tissue with CMR has provided additional elements for risk stratification and clinical decision making in the setting of catheter ablation planning for AF. The purpose of this review is to provide a comprehensive overview of the potential applications of CMR in the workup of ablation procedures for atrial fibrillation.
Purpose: To investigate the 2-year effectiveness of reduced-fluence photodynamic therapy (rf-PDT) for chronic central serous chorioretinopathy (cCSC). Design: Retrospective cohort study. Participants: A total of 223 consecutive patients with newly diagnosed cCSC with active serous retinal detachment (SRD) were included from May 2007 to June 2017 and followed up for at least 2 years. Patients who underwent ocular treatment other than cataract surgery before the beginning of recruitment and those who had macular neovascularization at baseline were excluded. Methods: All patients underwent a comprehensive ophthalmic evaluation, including measurements of best-corrected visual acuity (BCVA), slit-lamp examination, dilated fundus examination, color fundus photography, fundus autofluorescence, fluorescein angiography, indocyanine green angiography, and spectral-domain OCT. An inverse probability of treatment weighting (IPTW) methodology was applied to balance 18 baseline characteristics between patients who received rf-PDT (rf-PDT group) and those who did not receive treatment (controls). Inverse probability of treatment weighting survival analysis and regression were performed. Main Outcome Measures: The proportion of patients whose BCVA at 24 months was the same or improved compared with the baseline visual acuity (VA) (VA maintenance rate). Results: A total of 155 eyes (rf-PDT group: 74; controls: 81) were analyzed. The patients' backgrounds were well balanced after IPTW with standardized differences of < 0.10. An IPTW regression analysis revealed that the VA maintenance rate was significantly higher in the rf-PDT group than in the controls (93.6% vs. 70.9%, P < 0.001, 12 months; 85.7% vs. 69.8%, P = 0.019, 24 months). The rf-PDT group tended to show better VA improvement, but the difference was not statistically significant (–0.06 vs. –0.008, P = 0.07, 12 months; –0.06 vs. –0.03, P = 0.32, 24 months). An IPTW Cox regression showed a significantly higher rate of complete SRD remission in the rf-PDT group (hazard ratio, 5.05; 95% confidence interval, 3.24–7.89; P < 0.001). Conclusions: The study suggests a beneficial effect of rf-PDT for cCSC in terms of both VA maintenance and a higher proportion of complete SRD remission in the clinical setting.
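For orientation, the core IPTW computation can be sketched as follows: fit a propensity model on the baseline covariates, form stabilized weights, and compare weighted outcomes between groups. Variable names and the propensity model are assumptions; the study's full survival and regression analyses are not reproduced here.

```python
# Minimal sketch of inverse probability of treatment weighting (IPTW): estimate
# propensity scores from baseline covariates, form stabilized weights, and compare a
# weighted binary outcome between groups. Names and the outcome model are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def iptw_weights(X_baseline, treated):
    """treated: 0/1 numpy array (1 = rf-PDT, 0 = control); X_baseline: (n, p) covariates."""
    ps = LogisticRegression(max_iter=1000).fit(X_baseline, treated).predict_proba(X_baseline)[:, 1]
    # Stabilized weights: P(treated)/ps for treated, P(control)/(1 - ps) for controls.
    p_treat = treated.mean()
    return np.where(treated == 1, p_treat / ps, (1 - p_treat) / (1 - ps))

def weighted_rate(outcome, weights, mask):
    """Weighted proportion of a binary outcome (e.g. VA maintained at 24 months)."""
    return np.average(outcome[mask], weights=weights[mask])
```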
Purpose: To develop an artificial intelligence (AI) system that can predict optical coherence tomography (OCT)-derived high myopia grades based on fundus photographs. Methods: In this retrospective study, 1,853 qualified fundus photographs obtained from the Zhongshan Ophthalmic Center (ZOC) were selected to develop an AI system. Three retinal specialists assessed the corresponding OCT images to label the fundus photographs. We developed a novel deep learning model to detect and predict myopic maculopathy according to the atrophy (A), traction (T), and neovascularisation (N) classification and grading system. Furthermore, we compared the performance of our model with that of ophthalmologists. Results: When evaluated on the test set, the deep learning model showed an area under the receiver operating characteristic curve (AUC) of 0.969 for category A, 0.895 for category T, and 0.936 for category N. The average accuracy for each category was 92.38% (A), 85.34% (T), and 94.21% (N). Moreover, the performance of our AI system was superior to that of attending ophthalmologists and comparable to that of retinal specialists. Conclusion: Our AI system achieved performance comparable to that of retinal specialists in predicting vision-threatening conditions in high myopia from simple fundus photographs instead of combined fundus and OCT images. The application of this system can reduce the cost of patient follow-up and is well suited to less developed areas where only fundus photography is available.
Salvador García-Delpech, Patricia Udaondo, Alex Samir Fernández-Santodomingo
et al.
The authors report the use of topical recombinant human nerve growth factor cenegermin 0.02% in 5 patients diagnosed with neurotrophic keratopathy (NK) in a real-life setting. These 5 patients affected with stage II and III NK mainly of herpetic cause received cenegermin six times daily for 8 weeks. It was initiated upon refractoriness to prior conventional topical treatment. Visual acuity, corneal sensitivity test at four corneal quadrants, fluorescein staining, OCT, and photography were performed weekly during 9 weeks of follow-up from the completion of treatment. At the ninth week of follow-up, corneal sensitivity improvement and healing of corneal ulcers were found in all patients. No adverse events were reported, and no corneal ulcer recurrence was observed over a 4-year follow-up period. Cenegermin should be used in combination with conventional therapy for advanced NK, as it is an effective treatment for healing corneal ulcers, improving the corneal surface homeostasis and avoiding surgery.
Andy Goldschmidt, James Kunert-Graf, Adrian C. Scott
et al.
Baker's yeast (Saccharomyces cerevisiae) is a model organism for studying the morphology that emerges at the scale of multi-cell colonies. To look at how morphology develops, we collect a dataset of time-lapse photographs of the growth of different strains of S. cerevisiae. We discuss the general statistical challenges that arise when using time-lapse photographs to extract time-dependent features. In particular, we show how texture-based feature engineering and representative clustering can be successfully applied to categorize the development of yeast colony morphology using our dataset. The local binary pattern (LBP) from image processing is used to score the surface texture of colonies. This texture score develops along a smooth trajectory during growth. The path taken depends on how the morphology emerges. A hierarchical clustering of the colonies is performed according to their texture development trajectories. The clustering method is designed for practical interpretability; it obtains the best representative colony image for any hierarchical sub-cluster.
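A compact sketch of the described pipeline is given below: each frame is scored with a uniform local binary pattern histogram, per-colony trajectories are built from these scores, and the trajectories are hierarchically clustered. The LBP parameters, distance, and linkage choices are illustrative, not necessarily those used in the paper.

```python
# Hedged sketch of the texture-trajectory pipeline: score each time-lapse frame with a
# local binary pattern (LBP) histogram, stack scores into a per-colony trajectory, and
# hierarchically cluster the trajectories. Parameters (P, R, bins) are illustrative.
import numpy as np
from skimage.feature import local_binary_pattern
from scipy.cluster.hierarchy import linkage, fcluster

def lbp_histogram(gray_frame, P=8, R=1.0, bins=10):
    codes = local_binary_pattern(gray_frame, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=bins, range=(0, P + 2), density=True)
    return hist

def cluster_trajectories(colony_frames, n_clusters=4):
    """colony_frames: list of per-colony lists of grayscale frames (equal length)."""
    trajectories = np.array([
        np.concatenate([lbp_histogram(f) for f in frames]) for frames in colony_frames
    ])
    Z = linkage(trajectories, method="ward")
    return fcluster(Z, t=n_clusters, criterion="maxclust")
```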
Simon Pinzek, Alex Gustschin, Tobias Neuwirth
et al.
Grating-based phase-contrast and dark-field imaging systems create intensity modulations that are usually modeled with sinusoidal functions to extract transmission, differential-phase shift, and scatter information. Under certain system-related conditions, the modulations become non-sinusoidal and cause artifacts in conventional processing. To account for that, we introduce a piecewise-defined periodic polynomial function that resembles the physical signal formation process, modeling convolutions of binary periodic functions. Additionally, we extend the model with an iterative expectation-maximization algorithm that can account for imprecise grating positions during phase-stepping. We show that this approach can process a higher variety of simulated and experimentally acquired data, avoiding most artifacts.
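For context, the conventional sinusoidal processing that the proposed model generalizes can be sketched as follows: the mean, first-harmonic amplitude, and first-harmonic phase of the phase-stepping curve yield the transmission, dark-field, and differential-phase signals. This is the standard baseline, not the paper's piecewise polynomial model or its expectation-maximization algorithm.

```python
# Sketch of conventional sinusoidal phase-stepping processing: the mean, first-harmonic
# amplitude, and first-harmonic phase of the stepping curve give transmission, dark-field,
# and differential phase. This baseline is what the piecewise polynomial model generalizes.
import numpy as np

def process_stepping_curve(sample_steps, reference_steps):
    """Each input: (n_steps, H, W) intensities acquired over one grating period."""
    def fourier_components(steps):
        f = np.fft.fft(steps, axis=0)
        a0 = np.abs(f[0]) / steps.shape[0]                  # mean intensity
        a1 = 2 * np.abs(f[1]) / steps.shape[0]              # first-harmonic amplitude
        phi = np.angle(f[1])                                # first-harmonic phase
        return a0, a1, phi

    a0_s, a1_s, phi_s = fourier_components(sample_steps)
    a0_r, a1_r, phi_r = fourier_components(reference_steps)

    transmission = a0_s / a0_r
    differential_phase = np.angle(np.exp(1j * (phi_s - phi_r)))   # wrapped to [-pi, pi]
    dark_field = (a1_s / a0_s) / (a1_r / a0_r)                    # visibility ratio
    return transmission, differential_phase, dark_field
```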