R. Barthes
Results for "Photography"
Showing 20 of ~170631 results · from DOAJ, Semantic Scholar
C. Bock, G. Poole, P. Parker et al.
Liang Gao, Jinyang Liang, Chiye Li et al.
The capture of transient scenes at high imaging speed has been long sought by photographers, with early examples being the well-known recording in 1878 of a horse in motion and the 1887 photograph of a supersonic bullet. However, not until the late twentieth century were breakthroughs achieved in demonstrating ultrahigh-speed imaging (more than 10⁵ frames per second). In particular, the introduction of electronic imaging sensors based on the charge-coupled device (CCD) or complementary metal–oxide–semiconductor (CMOS) technology revolutionized high-speed photography, enabling acquisition rates of up to 10⁷ frames per second. Despite these sensors’ widespread impact, further increasing frame rates using CCD or CMOS technology is fundamentally limited by their on-chip storage and electronic readout speed. Here we demonstrate a two-dimensional dynamic imaging technique, compressed ultrafast photography (CUP), which can capture non-repetitive time-evolving events at up to 10¹¹ frames per second. Compared with existing ultrafast imaging techniques, CUP has the prominent advantage of measuring an x–y–t (x, y, spatial coordinates; t, time) scene with a single camera snapshot, thereby allowing observation of transient events with temporal resolution of tens of picoseconds. Furthermore, akin to traditional photography, CUP is receive-only, and so does not need the specialized active illumination required by other single-shot ultrafast imagers. As a result, CUP can image a variety of luminescent—such as fluorescent or bioluminescent—objects. Using CUP, we visualize four fundamental physical phenomena with single laser shots only: laser pulse reflection and refraction, photon racing in two media, and faster-than-light propagation of non-information (that is, motion that appears faster than the speed of light but cannot convey information). Given CUP’s capability, we expect it to find widespread applications in both fundamental and applied sciences, including biomedical research.
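The CUP measurement chain (spatial encoding with a pseudo-random binary mask, temporal shearing, and time integration onto a single 2D snapshot) can be illustrated with a toy forward model. The sketch below is a minimal NumPy illustration only: the grid sizes, the random mask, and the plain projected gradient descent are assumptions chosen for brevity, whereas the published method uses a TV-regularized solver.

```python
# A minimal, illustrative sketch of the CUP forward model (spatial encoding,
# temporal shearing, time integration) and a naive least-squares inversion.
# Shapes, the random mask, and plain gradient descent are illustrative
# assumptions; the published reconstruction uses a TV-regularized solver.
import numpy as np

rng = np.random.default_rng(0)
NX, NY, NT = 32, 32, 8                                   # x, y, t grid of the scene
mask = rng.integers(0, 2, size=(NY, NX)).astype(float)   # pseudo-random code

def forward(scene):
    """Encode each frame, shear it by t pixels along y, and integrate over t."""
    ny, nx, nt = scene.shape
    meas = np.zeros((ny + nt, nx))
    for t in range(nt):
        meas[t:t + ny, :] += mask * scene[:, :, t]
    return meas

def adjoint(meas):
    """Adjoint operator: un-shear and re-apply the code."""
    scene = np.zeros((NY, NX, NT))
    for t in range(NT):
        scene[:, :, t] = mask * meas[t:t + NY, :]
    return scene

# Toy transient scene: a bright spot moving across the field of view.
truth = np.zeros((NY, NX, NT))
for t in range(NT):
    truth[10 + t, 5 + 2 * t, t] = 1.0

snapshot = forward(truth)              # the single 2D camera measurement

# Naive reconstruction: projected gradient descent on ||forward(x) - snapshot||^2.
est, step = np.zeros_like(truth), 0.1
for _ in range(300):
    est -= step * adjoint(forward(est) - snapshot)
    est = np.clip(est, 0, None)        # non-negativity of intensity

print("relative error:", np.linalg.norm(est - truth) / np.linalg.norm(truth))
```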
O. Sonnentag, K. Hufkens, Cory Teshera-Sterne et al.
Talita Santos de Arruda, Rayssa Bruna Holanda Lima, Karla Luciana Magnani Seki et al.
Ultrasound has become an important tool that offers clinical and practical benefits in the intensive care unit (ICU). Its real-time imaging provides immediate information to support prognostic evaluation and clinical decision-making. This study used ultrasound assessment to investigate the impact of hospitalization on muscle properties in neurocritical patients and analyze the relationship between peripheral muscle changes and motor sequelae. A total of 43 neurocritical patients admitted to the ICU were included. The inclusion criteria were patients with acute brain injuries with or without motor sequelae. Muscle ultrasonography assessments were performed during ICU admission and hospital discharge. Measurements included muscle thickness, cross-sectional area, and echogenicity of the biceps brachii, quadriceps femoris, and rectus femoris. Statistical analyses were used to compare muscle properties between time points (hospital admission vs. discharge) and between groups (patients with vs. without motor sequelae). Significance was set at 5%. Hospitalization had a significant effect on muscle thickness, cross-sectional area, and echogenicity in patients with and without motor sequelae (p < 0.05, effect sizes between 0.104 and 0.475). Patients with motor sequelae exhibited greater alterations in muscle echogenicity than those without (p < 0.05, effect sizes between 0.182 and 0.211). Changes in muscle thickness and cross-sectional area were similar between the groups (p > 0.05). Neurocritical patients experience significant muscle deterioration during hospitalization. Future studies should explore why echogenicity is more markedly affected than muscle thickness and cross-sectional area in patients with motor sequelae compared to those without.
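The admission-versus-discharge comparison described above can be illustrated with a small paired analysis. The abstract does not state which test or effect-size definition the study used, so the paired t-test and Cohen's d below are illustrative assumptions on simulated numbers.

```python
# A minimal sketch of an admission-vs-discharge comparison: a paired test plus
# an effect size. The specific test and effect-size definition used in the
# study are not stated in the abstract; the choices here are assumptions, and
# the data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
admission = rng.normal(2.0, 0.3, size=43)          # e.g. muscle thickness (cm)
discharge = admission - rng.normal(0.2, 0.1, 43)   # simulated loss during stay

t_stat, p_value = stats.ttest_rel(admission, discharge)
diff = admission - discharge
cohens_d = diff.mean() / diff.std(ddof=1)          # paired-samples effect size

print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```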
Mohamed Rowaizak, Ahmad Farhat, Reem Khalil
Neuroscience education must convey 3D structure with clarity and accuracy. Traditional 2D renderings are limited as they lose depth information and hinder spatial understanding. High-resolution resources now exist, yet many are difficult to use in class. Therefore, we developed an educational brain video that moves from gross to microanatomy using MRI-based models and the published literature. The pipeline used Fiji for preprocessing, MeshLab for mesh cleanup, Rhino 6 for target fixes, Houdini FX for materials, lighting, and renders, and Cinema4D for final refinement of the video. We had our brain models validated by two neuroscientists for educational fidelity. We tested the video in a class with 96 undergraduates randomized to video and lecture or lecture only. Students completed the same pretest and posttest questions. Student feedback revealed that comprehension and motivation to learn increased significantly in the group that watched the video, suggesting its potential as a useful supplement to traditional lectures. A short, well-produced 3D video can supplement lectures and improve learning in this setting. We share software versions and key parameters to support reuse.
Daniel Wang, BA, Bonnie Sklar, MD, James Tian, MD et al.
Objective: We developed a novel slit-lamp photography (SLP) generative adversarial network (GAN) model using limited data to supplement and improve the performance of an artificial intelligence (AI)–based microbial keratitis (MK) screening model. Design: Cross-sectional study. Subjects: Slit-lamp photographs of 67 healthy and 36 MK eyes were prospectively and retrospectively collected at a tertiary care ophthalmology clinic at a large academic institution. Methods: We trained the GAN model StyleGAN2-ADA on healthy and MK SLPs to generate synthetic images. To assess synthetic image quality, we performed a visual Turing test. Three cornea fellows tested their ability to identify 20 images each of (1) real healthy, (2) real diseased, (3) synthetic healthy, and (4) synthetic diseased. We also used Kernel Inception Distance (KID) to quantitatively measure realism and variation of synthetic images. Using the same dataset used to train the GAN model, we trained 2 DenseNet121 AI models to grade SLP images as healthy or MK with (1) only real images and (2) real supplemented with GAN-generated images. Main Outcome Measures: Classification performance of MK screening models trained with only real images compared to a model trained with both limited real and supplemented synthetic GAN images. Results: For the visual Turing test, the fellows on average rated synthetic images as good quality (83.3% ± 12.0% of images), and synthetic and real images were found to depict pertinent anatomy and pathology for accurate classification (96.3% ± 2.19% of images). These experts could distinguish between real and synthetic images (accuracy: 92.5% ± 9.01%). Analysis of KID score for synthetic images indicated realism and variation. The MK screening model trained on both limited real and supplemented synthetic data (area under the receiver–operator characteristic curve: 0.93, bootstrapping 95% CI: 0.77–1.0) outperformed the model trained with only real data (area under the receiver–operator characteristic curve: 0.76, 95% CI: 0.50–1.0), with an improvement of 0.17 (95% CI: 0–0.4; 2-tailed t test P = 0.076). Conclusions: Artificial intelligence–based MK classification may be improved by supplementation of limited real training data with synthetic data generated by GANs. Financial Disclosure(s): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
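The Kernel Inception Distance used above has a standard definition: an unbiased squared maximum mean discrepancy with a cubic polynomial kernel computed on Inception feature vectors. The sketch below implements that estimator on placeholder feature arrays; extracting Inception features from the real and GAN-generated slit-lamp photographs is assumed to have happened upstream.

```python
# Self-contained sketch of the Kernel Inception Distance: the unbiased MMD^2
# with the standard cubic polynomial kernel, applied to feature vectors.
# The feature arrays below are random placeholders, not Inception outputs.
import numpy as np

def polynomial_kernel(x, y):
    d = x.shape[1]
    return (x @ y.T / d + 1.0) ** 3

def kid(real_feats, fake_feats):
    """Unbiased MMD^2 estimate between two sets of feature vectors."""
    k_rr = polynomial_kernel(real_feats, real_feats)
    k_ff = polynomial_kernel(fake_feats, fake_feats)
    k_rf = polynomial_kernel(real_feats, fake_feats)
    m, n = len(real_feats), len(fake_feats)
    term_rr = (k_rr.sum() - np.trace(k_rr)) / (m * (m - 1))
    term_ff = (k_ff.sum() - np.trace(k_ff)) / (n * (n - 1))
    return term_rr + term_ff - 2.0 * k_rf.mean()

rng = np.random.default_rng(0)
real = rng.normal(size=(100, 2048))              # placeholder "Inception" features
fake = rng.normal(loc=0.05, size=(100, 2048))    # placeholder synthetic features
print("KID estimate:", kid(real, fake))
```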
Zhenzhen Song, Mingqiang Guo, Liang Wu et al.
While most existing advanced large-scale point cloud semantic segmentation methods can accurately identify most large-scale objects, there is still room for improvement in the recognition accuracy of small-scale, low-proportion objects. Compared to point clouds, digital orthophoto maps (DOMs) have a more structured data format, allowing for better recognition of small-scale surface features. However, in existing projection-based methods, directly mapping images onto point clouds leads to occlusion issues. If image and point cloud features are simply concatenated, it results in feature blurring. Based on this observation, this article proposes a DAPSS network for point cloud semantic segmentation, assisted by prior knowledge constructed from DOM. The pretrained DOM features can provide a broader receptive field as guidance for learning the local context features of point clouds. Vertical occlusion is an issue that makes ray-based mapping methods unsuitable. We propose a method that searches for the nearest mapped point in spherical space to fill in occluded points based on the already mapped points. The traditional approach of directly concatenating point cloud features with image features often leads to feature blurring. Therefore, we propose a plug-and-play multimodal feature adaptive fusion module, which can adaptively select and aggregate features from different modalities to further reduce redundant information. In addition, we designed a cascaded multimodal feature deep fusion module to promote deep fusion between different modal features. Experiments on two large datasets demonstrate that DAPSS outperforms current mainstream methods, achieving mean Intersection-over-Union scores of 65.9% and 82.9% on the SensatUrban and SUM-Helsinki datasets, respectively. DAPSS not only effectively addresses the recognition of small-scale surface features, but also resolves the occlusion problems associated with projection-based methods.
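The occlusion-filling idea described above, letting points that received no DOM feature borrow the feature of the nearest mapped point, can be sketched very simply. The paper searches in a spherical space; the sketch below substitutes a plain Euclidean nearest-neighbour query as a stand-in, with all arrays randomly generated.

```python
# Simplified sketch of occlusion filling: occluded points copy the DOM feature
# of the nearest point that was successfully mapped. Euclidean nearest-neighbour
# search stands in for the paper's spherical-space search; data are synthetic.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.uniform(0, 50, size=(10_000, 3))       # x, y, z coordinates
dom_feats = rng.normal(size=(10_000, 16))           # per-point DOM features
mapped = rng.random(10_000) > 0.3                   # False = occluded point
dom_feats[~mapped] = 0.0                            # occluded points start empty

tree = cKDTree(points[mapped])
_, nearest = tree.query(points[~mapped], k=1)       # index into the mapped subset
dom_feats[~mapped] = dom_feats[mapped][nearest]     # copy the neighbour's feature
print("filled", (~mapped).sum(), "occluded points")
```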
Kshitij Marwah, Gordon Wetzstein, Yosuke Bando et al.
Light field photography has gained significant research interest in the last two decades; today, commercial light field cameras are widely available. Nevertheless, most existing acquisition approaches either multiplex a low-resolution light field into a single 2D sensor image or require multiple photographs to be taken for acquiring a high-resolution light field. We propose a compressive light field camera architecture that allows for higher-resolution light fields to be recovered than previously possible from a single image. The proposed architecture comprises three key components: light field atoms as a sparse representation of natural light fields, an optical design that allows for capturing optimized 2D light field projections, and robust sparse reconstruction methods to recover a 4D light field from a single coded 2D projection. In addition, we demonstrate a variety of other applications for light field atoms and sparse coding, including 4D light field compression and denoising.
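The sparse-recovery step behind such a compressive camera can be written as recovering a sparse coefficient vector from a single coded projection. The sketch below models the measurement as y = Phi @ D @ alpha and recovers alpha with plain ISTA; the sizes, random dictionary, random sensing matrix, and choice of solver are assumptions for illustration, not the paper's actual dictionary or optimizer.

```python
# Toy sketch of sparse light field recovery from one coded 2D measurement:
# y = Phi @ D @ alpha with sparse alpha, solved here by plain ISTA.
# Dictionary, sensing matrix, sizes, and solver are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_atoms, n_lf, n_meas = 256, 128, 64
D = rng.normal(size=(n_lf, n_atoms)) / np.sqrt(n_lf)       # light field "atoms"
Phi = rng.normal(size=(n_meas, n_lf)) / np.sqrt(n_meas)    # optical projection

alpha_true = np.zeros(n_atoms)
alpha_true[rng.choice(n_atoms, 8, replace=False)] = rng.normal(size=8)
y = Phi @ D @ alpha_true                  # the single coded measurement (flattened)

A = Phi @ D
step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L step size for ISTA
lam = 0.01
alpha = np.zeros(n_atoms)
for _ in range(500):
    z = alpha - step * (A.T @ (A @ alpha - y))
    alpha = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold

rel_err = np.linalg.norm(D @ alpha - D @ alpha_true) / np.linalg.norm(D @ alpha_true)
print(f"relative reconstruction error: {rel_err:.3f}")
```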
Fengming Sun, Junjie Cui, Xia Yuan et al.
Fully convolutional neural network-based salient object detection has recently achieved great success, with its performance benefiting from the effective use of multi-layer features. Based on this, most existing saliency detectors design complex network structures to fuse the multi-level features generated by the backbone network. However, the variable scale and complex shape of the target are always a great challenge for saliency detection tasks. In this paper, the authors propose a Rich-scale Feature Fusion Network (RFFNet) for salient object detection. The authors design a rich-scale feature interactive fusion module to obtain more efficient features from the multi-scale features. Moreover, a global feature enhancement module is used to extract features with better characterization for the final saliency prediction. Extensive experiments performed on five benchmark datasets demonstrate that the proposed method can achieve satisfactory results on different evaluation metrics compared to other state-of-the-art salient object detection approaches.
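The multi-scale fusion that RFFNet builds on follows a common pattern: backbone features at different resolutions are resized to a shared size and merged. The sketch below shows only that generic pattern in PyTorch; the channel sizes and the 1x1-convolution fusion are assumptions, not the RFFNet modules themselves.

```python
# Generic multi-scale feature fusion: resize backbone features to a common
# resolution and merge with a 1x1 convolution. Not the RFFNet modules, only
# the basic pattern such detectors refine.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleScaleFusion(nn.Module):
    def __init__(self, channels=(64, 128, 256), out_channels=64):
        super().__init__()
        self.fuse = nn.Conv2d(sum(channels), out_channels, kernel_size=1)

    def forward(self, features):
        # Upsample every feature map to the spatial size of the largest one.
        target = features[0].shape[-2:]
        resized = [features[0]] + [
            F.interpolate(f, size=target, mode="bilinear", align_corners=False)
            for f in features[1:]
        ]
        return self.fuse(torch.cat(resized, dim=1))

feats = [torch.randn(1, 64, 56, 56), torch.randn(1, 128, 28, 28), torch.randn(1, 256, 14, 14)]
print(SimpleScaleFusion()(feats).shape)   # torch.Size([1, 64, 56, 56])
```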
Audrey Doussot
The invention and popularization of photography in the nineteenth century revolutionized portraiture. From its beginnings, many writers posed in front of the camera, their portraits captured by the successive developments of the daguerreotype, the carte-de-visite and other cheaper as well as more practical and portable photographic processes that brought portraiture outside the professional studio. A simultaneous growing interest in literary celebrities and the places related to them and their works led photographers to produce pictures of writers in their habitat, including pictures that were disseminated among the public through collectibles or publications. The representation of interiors in most photographic portraits of Victorian and Edwardian writers appears as a key element contributing to constructing the writer as a sociocultural type and a public figure. What can be perceived, at first, as a mere backdrop to the representation of a human being can actually reveal much about the fashioning of an author’s literary identity through images. Portraits of Charles Dickens or George Bernard Shaw, for instance, testify to the importance of staging and accessories when seeking to construct authors’ images and to depict their universe as a materialization of their character and psychological interiority.
K. Nakagawa, Atsushi Iwasaki, Y. Oishi et al.
Tianfan Xue, Michael Rubinstein, Ce Liu et al.
We present a unified computational approach for taking photos through reflecting or occluding elements such as windows and fences. Rather than capturing a single image, we instruct the user to take a short image sequence while slightly moving the camera. Differences that often exist in the relative position of the background and the obstructing elements from the camera allow us to separate them based on their motions, and to recover the desired background scene as if the visual obstructions were not there. We show results on controlled experiments and many real and practical scenarios, including shooting through reflections, fences, and raindrop-covered windows.
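The core intuition above, that the background and the obstruction move differently when the camera is shifted slightly, can be sketched in a deliberately simplified form: once the frames are aligned to the background, a robust per-pixel statistic suppresses the moving obstruction. The published method instead jointly estimates dense motion and both layers; the alignment step and the fence-like occluder below are assumptions for illustration only.

```python
# Deliberately simplified sketch: frames already aligned to the background,
# obstruction moves between frames, per-pixel median removes it. Not the
# authors' joint motion/layer optimization, only the underlying intuition.
import numpy as np

rng = np.random.default_rng(0)
background = rng.uniform(0.2, 0.8, size=(120, 160))

frames = []
for shift in range(5):                              # 5 frames, camera slightly moved
    frame = background.copy()
    # Fence-like occluder whose apparent position differs per frame
    # after background alignment.
    frame[:, (10 + 7 * shift)::30] = 1.0
    frames.append(frame)

recovered = np.median(np.stack(frames), axis=0)     # obstruction-free estimate
print("max abs error vs. clean background:", np.abs(recovered - background).max())
```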
Rahul Shrivastava, Vivek Tiwari, Swati Jain et al.
Recognizing entities and their corresponding roles is important in human activity recognition. In light of recent advancements, the primary emphasis is on recognizing abstract activities involving person-person interaction. The contribution of this work is an architecture that utilizes knowledge of human body part coordinates to detect the role of each individual. The network preprocesses the coordinates to build intra-body and inter-body features. The extracted features capture the relationship between the interacting bodies and learn the temporal relation corresponding to each role using the human memory-inspired hierarchical temporal memory. The model is tested on ambiguous samples of mutual actions in the experimental work. The model is found to be robust in action and role recognition tasks and performs as expected.
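The kind of coordinate-based intra-body and inter-body features described above can be illustrated with pairwise joint distances within each person and between the two interacting people. The paper's actual preprocessing is richer; the joint count and the distance features below are assumptions used only to show how such features can be derived from keypoints.

```python
# Small sketch of coordinate-based interaction features: pairwise joint
# distances within each skeleton (intra-body) and across skeletons (inter-body).
# Illustrative only; not the paper's exact preprocessing.
import numpy as np

def pairwise_distances(a, b):
    """Euclidean distances between every joint in a and every joint in b."""
    return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)

rng = np.random.default_rng(0)
person_a = rng.uniform(0, 1, size=(17, 2))     # 17 joints, (x, y) coordinates
person_b = rng.uniform(0, 1, size=(17, 2))

intra_a = pairwise_distances(person_a, person_a)[np.triu_indices(17, k=1)]
intra_b = pairwise_distances(person_b, person_b)[np.triu_indices(17, k=1)]
inter = pairwise_distances(person_a, person_b).ravel()

features = np.concatenate([intra_a, intra_b, inter])   # per-frame feature vector
print(features.shape)                                   # (136 + 136 + 289,) = (561,)
```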
Ellen Handy
The Harrison Horblit Collection at Harvard University’s Houghton Library contains a remarkable daguerreotype plate by the Boston firm Southworth & Hawes. It reproduces an engraving after Raphael’s Transfiguration. Whereas reproductive printmaking normally seeks to produce multiples of a unique original, daguerreotype reproductions open a space of ambiguity between the categories of original and reproduction since daguerreotypes are unique objects. Much is lost in this translation, but what is gained? If reproduction of paintings normally renders the singular multiple, what happens when a painting is reproduced as a unique image? Why was this daguerreotype created? Southworth & Hawes specialized in portraits of celebrities and considered themselves artists. Why then did they make a daguerreotype of an engraving of a painting? And why this painting? Their image of an image of an image is at once simply duplicative and a meditation on photography itself – an expanded conception of photography that figures it as spiritual and conceptual practice, as is suggested in other conflations of image reproduction and transfiguration within Southworth & Hawes’ oeuvre as well. The logic of Southworth & Hawes’ Transfiguration becomes less of a conundrum when considered in relation to two of their other images, one of the branded hand of abolitionist Jonathan Walker, the other a self-portrait representing Southworth’s torso as a classical sculpture. Translation, transfiguration, body, soul and image are closely imbricated in all three of these daguerreotypes, each produced during the height of New England Transcendentalism. While Raphael’s Transfiguration epitomizes the intersection of the human and a divine being as Scriptural drama, The Branded Hand and Southworth as a Classical Bust allude to the spiritual realm through representation of the soul’s transcendence of the suffering body rather than direct reference to scripture. The Branded Hand detaches its subject from the context of the body as a whole; Walker’s wound appears in the image as the silvery trace of the price paid for his abolitionist conviction. The portrait of Southworth separates an individual man’s identity from the more allegorical presence, while presenting suggestions of sorrow as emblems of spiritual elevation. But beyond this, the transmedial daguerreotype of the print of the Raphael announces itself as visual metonymy; the transfiguration of Christ in the painting also conveys the transfigurative power of the photographic medium itself.
P. Frosh
Iris Sheungting Lo, B. Mckercher
Corby K. Martin, T. Nicklas, B. Gunturk et al.
Sebastián Gómez Ruiz
This article is a case study of the exchanges and relationships that arose from the circulation of images in the Arhuaco community of Kutunzama in the Sierra Nevada de Santa Marta. The circulation of photographs, which show the present and past history of the Arhuaco people, led to an exchange between the mamos, the community and myself, as ethnographer. The purpose of this text is to show how the circulation of these photographs can be understood as a gift in a process of redistribution based on the evocative, material and sensual capacities of the images. This article consists of an ethnographic approach, developed between 2017 and 2019, on the basis of which the community’s forms of organization and kinship are understood via the elicitation of photographs and films, interviews and participant observation. This circulation also made it possible to activate narratives of the recent history of the village, related to the settlement and the context of territorial disputes in the area. Ethnographic photography allowed us to approach notions of time and space, and to broaden the Arhuaco notion of makruma. Rather than conceiving of this notion as a gift, it is understood as a process of redistribution in which the given object possesses the characteristics of the person who gives it, and the return of the object (in this case the images) becomes a way of settling a spiritual debt. The text shows the image as a means of interaction, a means of encounter, a place of circulation and sensory experience. It approaches the image as a multisensory object and the photograph not only as a representational object, but also as one that traces social relations based on the indigenous notion of makruma.
R. Rajalakshmi, Subramanian Arulmalar, M. Usha et al.
Aim To evaluate the sensitivity and specificity of the "fundus on phone" (FOP) camera, a smartphone-based retinal imaging system, as a screening tool for diabetic retinopathy (DR) detection and DR severity in comparison with 7-standard field digital retinal photography. Design Single-site, prospective, comparative, instrument validation study. Methods 301 patients (602 eyes) with type 2 diabetes underwent standard seven-field digital fundus photography with both the Carl Zeiss fundus camera and the indigenous FOP at a tertiary care diabetes centre in South India. Grading of DR was performed by two independent retina specialists using the modified Early Treatment of Diabetic Retinopathy Study grading system. Sight-threatening DR (STDR) was defined by the presence of proliferative DR (PDR) or diabetic macular edema. The sensitivity, specificity and image quality were assessed. Results The mean age of the participants was 53.5 ± 9.6 years and the mean duration of diabetes 12.5 ± 7.3 years. The Zeiss camera showed that 43.9% had non-proliferative DR (NPDR) and 15.3% had PDR, while the FOP camera showed that 40.2% had NPDR and 15.3% had PDR. The sensitivity and specificity for detecting any DR by FOP were 92.7% (95% CI 87.8–96.1) and 98.4% (95% CI 94.3–99.8) respectively, and the kappa (κ) agreement was 0.90 (95% CI 0.85–0.95, p<0.001), while for STDR, the sensitivity was 87.9% (95% CI 83.2–92.9), specificity 94.9% (95% CI 89.7–98.2) and κ agreement was 0.80 (95% CI 0.71–0.89, p<0.001), compared to conventional photography. Conclusion Retinal photography using the FOP camera is effective for screening and diagnosis of DR and STDR with high sensitivity and specificity and has substantial agreement with conventional retinal photography.
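The screening metrics reported above follow directly from a 2x2 agreement table between the FOP grading and the reference standard. The sketch below works through sensitivity, specificity, and Cohen's kappa; the counts are illustrative values chosen to roughly reproduce the reported sensitivity and specificity, not the study's actual table.

```python
# Worked sketch of sensitivity, specificity, and Cohen's kappa from a 2x2
# agreement table. Counts are illustrative, not the study data.

# rows: reference standard (DR present / absent), cols: FOP grading
tp, fn = 280, 22      # DR on reference, detected / missed by FOP
fp, tn = 5, 295       # no DR on reference, flagged / correctly cleared by FOP

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

# Cohen's kappa: observed agreement vs. agreement expected by chance.
n = tp + fn + fp + tn
p_observed = (tp + tn) / n
p_expected = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / n ** 2
kappa = (p_observed - p_expected) / (1 - p_expected)

print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} kappa={kappa:.3f}")
```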
Page 4 of 8532