Results for "Photography"

Showing 20 of ~223,541 results · from DOAJ, CrossRef, Semantic Scholar

DOAJ Open Access 2026
Influence of ammonia content on ammonia-hydrogen-air premixed gas duct-vented explosions

Yu GE, Quan WANG, Wenyan ZHU et al.

Renewable energy is addressing some of the key challenges facing global society today, and zero-carbon energy systems are the fundamental way to achieve carbon neutrality. Hydrogen and ammonia have therefore gained great attention as zero-carbon energy sources. To further study the combustion characteristics of ammonia-hydrogen-air premixed flames inside and outside a duct, the influence of the ammonia doping fraction (φ) on flame morphology and on the evolution of pressure inside and outside the duct at the stoichiometric ratio was explored with the help of high-speed photography and pressure sensors in a 2000-mm-long stainless steel duct with a 400-mm-long, 70-mm-wide observation window. The results show that φ significantly affects the pressure inside and outside the duct, and the time to the onset of the backflow phenomenon caused by the secondary explosion increases with φ. Pressure measuring point PS1 was set 400 mm from the explosion vent inside the duct to collect data. The pressure curves in the duct under each working condition exhibit a three-peak structure, with peaks named p1, p2, and p3, caused respectively by the rupture of the explosion vent film, the gas venting in the duct, and the gas backflow generated by the secondary explosion outside the duct. The magnitude of p1 depends on the tensile strength of the explosion venting membrane, and its amplitude is almost independent of φ. Both p2 and p3 increase with φ, and the growth rate of p3 is largest when φ is in the range 50%–65%. With increasing φ, p2 changes from a single peak to a fluctuating pressure plateau in the pressure curves, and the duration of the plateau extends as φ increases. Pressure measurement point PS2 was set on the horizontal central axis, 500 mm from the explosion vent outside the duct, to collect data. The peak pressure of the secondary explosion outside the duct (pout) decreases as φ increases, while the time to reach pout increases. This study provides a theoretical basis for the utilization of ammonia and hydrogen energy.

Explosives and pyrotechnics
DOAJ Open Access 2026
Multimodal imaging features of non-proliferative and proliferative diabetic retinopathy based on SD-OCT and fundus autofluorescence

Zixun Wang, Chenxi Ji et al.

Purpose: To characterize and compare multimodal imaging features of non-proliferative diabetic retinopathy (NPDR) and proliferative diabetic retinopathy (PDR) using spectral-domain optical coherence tomography (SD-OCT) and fundus autofluorescence (FAF). Methods: This cross-sectional observational study included 132 patients (219 eyes) with DR (120 NPDR eyes and 99 PDR eyes) and 73 healthy controls (129 eyes). All participants underwent comprehensive ophthalmic examinations, including fundus photography, fundus fluorescein angiography (FFA), FAF, and SD-OCT. OCT biomarkers, including hyperreflective foci (HRF), intraretinal cystic cavities (IRC), diabetic macular edema (DME), disorganization of retinal inner layers (DRIL), epiretinal membrane (ERM), posterior vitreous detachment (PVD), subretinal fluid (SRF), and disruption of the external limiting membrane (ELM) and ellipsoid zone (EZ), were systematically evaluated and compared between groups. Results: Hyperreflective foci, IRC, DME, PVD, and ERM were significantly more frequent in PDR eyes than in NPDR eyes (all P < 0.05). ERM was highly prevalent in both NPDR (80.0%) and PDR (84.8%) eyes. FAF effectively demonstrated intraretinal and preretinal hemorrhages, DME, and fibroproliferative membranes, while SD-OCT provided superior visualization of microstructural retinal alterations. Subfoveal choroidal thickness was significantly increased in both NPDR and PDR compared with healthy controls (P < 0.05), but did not differ considerably between NPDR and PDR. Conclusion: Spectral-domain optical coherence tomography and FAF provide complementary information for evaluating structural and functional retinal alterations in DR. FAF is particularly useful for visualizing hemorrhage, DME, and fibroproliferative membranes. In contrast, SD-OCT enables detailed assessment of multiple retinal biomarkers. The high prevalence of epiretinal membranes highlights their potential role in DR progression.

Medicine (General)
DOAJ Open Access 2025
Sustainable reuse evaluation framework for coastal industrial living preservation of heritage buildings based on visual perception driven

Xiang Meng, Jiang Chang

The protection and sustainable reuse of global industrial heritage has long been a topic of wide concern in the international community; its surviving structures are evidence of industrial development in various countries throughout history. Many historically significant hydraulic industrial heritage buildings in the eastern coastal region of China are currently underutilized or deteriorating, and traditional evaluation methods often overlook the role of public visual perception in guiding their sustainable revitalization. This study proposes an "objective + subjective" comprehensive framework for evaluating the visual perception and reuse potential of 32 coastal water heritage sites in eastern China, with a focus on informing the living preservation of historical buildings. Objective analysis employed drone photography, digital twin modeling (to address occluded elements), and semantic segmentation (DeepLabV3+ model) to extract six key visual indicators: Green Vegetation Index (GVI), Water Surface Index (WVI), Sky Coverage (SKVI), Hard Surface (HVI), Building Visibility (BVI), and Other Artificial Structures (OVI). Subjective data on perceptual dimensions (space, color, texture, uniqueness, culture, history, aesthetics, and pleasure) were collected via video-loop surveys (120 students) and online questionnaires (3,840 respondents) using a 5-point Likert scale. Multiple linear regression revealed that scenic beauty scores were most strongly predicted by GVI (β = 0.28, p < .001), WVI (β = 0.34, p < .001), and BVI (indicative of preserved heritage character). In contrast, SKVI, HVI, and OVI had limited influence. Among the sites, 14 were classified as high visual quality (≥ 35/40), 7 as medium, and 11 as low. The findings provide quantitative evidence to support the preservation of hydraulic heritage buildings through visually centered living preservation, promoting their continued cultural use through the integration of blue-green infrastructure and improved spatial aesthetics. The proposed framework offers a scalable and practical tool for policymakers and designers to enhance both the visual quality and the sustainable reuse potential of linear industrial heritage, contributing to broader cultural sustainability goals.
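As an illustration of the regression step described in this abstract, the sketch below shows one way standardized β coefficients like those reported could be estimated. The file name, column names, and choice of statsmodels are assumptions for illustration, not the authors' actual pipeline.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical per-site table: six visual indicators plus a mean scenic beauty score.
df = pd.read_csv("sites.csv")  # assumed columns: GVI, WVI, SKVI, HVI, BVI, OVI, beauty
cols = ["GVI", "WVI", "SKVI", "HVI", "BVI", "OVI", "beauty"]

# Standardize all variables so the fitted coefficients are comparable β weights.
z = (df[cols] - df[cols].mean()) / df[cols].std()

model = sm.OLS(z["beauty"], sm.add_constant(z[cols[:-1]])).fit()
print(model.params)   # standardized β per indicator (the paper reports e.g. β = 0.34 for WVI)
print(model.pvalues)  # significance levels, e.g. p < .001 for GVI and WVI
```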

Medicine, Science
DOAJ Open Access 2025
How Safe Are Oxygen–Ozone Therapy Procedures for Spine Disc Herniation? The SIOOT Protocols for Treating Spine Disorders

Marianno Franzini, Salvatore Chirumbolo, Francesco Vaiano et al.

Oxygen–ozone (O₂–O₃) therapy is widely used for treating lumbar disc herniation. However, controversy remains regarding the safest and most effective route of administration. While intradiscal injection is purported to show clinical efficacy, it has also been associated with serious complications. In contrast, the intramuscular route can exhibit a more favourable safety profile and comparable pain outcomes, suggesting its potential as a safer alternative in selected patient populations. This mixed-method study combined computed tomography (CT) imaging, biophysical diffusion modelling, and a meta-analysis of clinical trials to evaluate whether intramuscular O₂–O₃ therapy can achieve disc penetration and therapeutic efficacy comparable to intradiscal nucleolysis, while minimizing procedural risk. Literature searches across PubMed, Scopus, and Cochrane databases identified seven eligible studies (four randomized controlled trials and three cohort studies), encompassing a total of 120 patients. Statistical analyses included Hedges’ g, odds ratios, and number needed to harm (NNH). CT imaging demonstrated gas migration into the intervertebral disc within minutes after intramuscular injection, confirming the plausibility of diffusion through annular micro-fissures. The meta-analysis revealed substantial pain reduction with intramuscular therapy (Hedges’ g = −1.55) and very high efficacy with intradiscal treatment (g = 2.87), though the latter was associated with significantly greater heterogeneity and higher complication rates. The relative risk of severe adverse events was 6.57 times higher for intradiscal procedures (NNH ≈ 1180). Intramuscular O₂–O₃ therapy offers a biologically plausible, safer, and effective alternative to intradiscal injection, supporting its adoption as a first-line, minimally invasive strategy for managing lumbar disc herniation.
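For readers unfamiliar with the metrics cited above, their standard textbook definitions are restated below; these are general formulas, not values derived from the study's data. With group means x̄₁ and x̄₂, standard deviations s₁ and s₂, and sample sizes n₁ and n₂:

```latex
g = J \cdot \frac{\bar{x}_1 - \bar{x}_2}{s_p}, \qquad
s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}, \qquad
J = 1 - \frac{3}{4(n_1 + n_2) - 9}
```

The number needed to harm is the reciprocal of the absolute risk increase between treatment arms; an NNH of roughly 1180 therefore corresponds to an absolute risk increase of about 0.085% per patient:

```latex
\mathrm{NNH} = \frac{1}{\mathrm{AR}_{\text{intradiscal}} - \mathrm{AR}_{\text{intramuscular}}}
```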

Photography, Computer applications to medicine. Medical informatics
DOAJ Open Access 2025
Combined CCTA and Stress CTP for Anatomical and Functional Assessment of Myocardial Bridges

Marco Fogante, Paolo Esposto Pirani, Fatjon Cela et al.

Myocardial bridging (MB) is a congenital coronary anomaly whose clinical impact remains controversial. Coronary computed tomography angiography (CCTA) combined with CT myocardial perfusion imaging (CT-MPI) enables a comprehensive anatomical and functional assessment of MB. This study aimed to investigate whether specific high-risk anatomical features of MB are independently associated with myocardial hypoperfusion, using combined CCTA and CT-MPI. We retrospectively analyzed 81 patients with MB showing high-risk anatomical features (depth ≥ 2.0 mm and length ≥ 25 mm) identified by CCTA, all of whom underwent stress dynamic CT-MPI between May 2022 and December 2025. Patients were classified according to the presence or absence of hypoperfusion in MB-related myocardial segments. Clinical and anatomical variables were compared between the two groups using non-parametric tests, and multivariable logistic regression was performed to identify independent predictors of hypoperfusion. Among the 81 patients (mean age, 59.3 ± 11.7 years; 54 males), 26 (32.1%) demonstrated perfusion defects. All MBs were located in the left anterior descending artery (LAD). No significant differences were observed in clinical variables between groups. Bridges associated with hypoperfusion were significantly deeper (p < 0.001) and were more frequently located in the mid-LAD (73.1% vs. 38.2%, p = 0.01). In multivariable analysis, bridge depth and mid-LAD location remained independent predictors of hypoperfusion. In patients with MB, greater depth and mid-LAD location are independently associated with myocardial hypoperfusion. The combined use of CCTA and CT-MPI may enhance risk stratification and help guide clinical decision-making in this patient population.
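A minimal sketch of the multivariable step described above, assuming a per-patient table with a binary hypoperfusion outcome; the variable names, file, and use of statsmodels are illustrative, not the study's actual analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical table: one row per patient, with bridge depth (mm), length (mm),
# a mid-LAD location dummy (0/1), and the binary hypoperfusion outcome.
df = pd.read_csv("mb_patients.csv")  # assumed columns: depth_mm, length_mm, mid_lad, hypoperfusion

X = sm.add_constant(df[["depth_mm", "length_mm", "mid_lad"]])
fit = sm.Logit(df["hypoperfusion"], X).fit()

print(fit.summary())       # per-predictor coefficients and p-values
print(np.exp(fit.params))  # odds ratios: "independent predictor" means OR ≠ 1 after adjustment
```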

Photography, Computer applications to medicine. Medical informatics
DOAJ Open Access 2025
Inspection of Defective Glass Bottle Mouths Using Machine Learning

Daiki Tomita, Yue Bao

In this study, we propose a method for detecting chips in the mouths of glass bottles using machine learning. In recent years, Japanese cosmetic glass bottles have gained attention for their advanced manufacturing technology and eco-friendliness through the use of recycled glass, leading to an increase in the volume of glass bottles exported overseas. Although cosmetic bottles are subject to strict quality inspections for safety, the complicated shape of the bottle mouth makes automated inspection difficult, and visual inspection has been the norm. Visual inspection by workers is problematic because the standard of judgment differs from worker to worker and inspection accuracy deteriorates after long hours of work. To address these issues, the development of inspection systems for glass bottles using image processing and machine learning has been actively pursued. Conventional image processing methods can detect chips in glass bottles, but they target bottles without screw threads; for the bottles in this study, light from the light source is diffusely reflected by the screw threads, resulting in a loss of accuracy. Additionally, machine learning-based inspection methods are generally limited to the body and bottom of the bottle, excluding the mouth from analysis. To overcome these challenges, this study proposes a method that extracts only the screw-thread regions from the bottle image and performs defect detection using a dedicated machine learning model. To evaluate the effectiveness of the proposed approach, accuracy was assessed by training models on images of both the entire mouth and just the screw threads. Experimental results showed that the accuracy of the model trained on images of the entire mouth was 98.0%, while the accuracy of the model trained on images of the screw threads was 99.7%, indicating that the proposed method improves accuracy by 1.7 percentage points. In a demonstration experiment using data obtained at a factory, the accuracy of the model trained on images of the entire mouth was 99.7%, whereas the accuracy of the model trained on images of screw threads was 100%, indicating that the proposed system can be used to detect chips in factories.
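The core idea, training on cropped screw-thread regions rather than on the whole mouth image, can be sketched as follows. The crop coordinates, network size, and tensor shapes are placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn

def crop_thread_region(mouth_img: torch.Tensor, box: tuple) -> torch.Tensor:
    # Keep only the screw-thread band so the model sees thread reflections as
    # signal rather than background clutter (box assumed known, e.g. from a
    # fixed camera geometry or a separate detector).
    y0, y1, x0, x1 = box
    return mouth_img[..., y0:y1, x0:x1]

# Small binary classifier: chip vs. no chip on the cropped region.
classifier = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 2),
)

img = torch.randn(1, 1, 256, 256)                 # dummy grayscale mouth image
roi = crop_thread_region(img, (64, 192, 0, 256))  # hypothetical thread band
print(classifier(roi).shape)                      # torch.Size([1, 2])
```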

Photography, Computer applications to medicine. Medical informatics
DOAJ Open Access 2024
Measurement and Analysis of Gonadal Irradiation Dose During Single-shot X-ray Exposure of Both Lower Limbs

Ce WANG, Fengyun ZHOU, Yanjiao XUAN et al.

Objective: To investigate the relationship between the entrance surface dose (ESD) to the gonads and physiological parameters such as sex, age, height, and weight in digital X-ray exposure of both lower limbs, and to explore the feasibility of full-width long-plate imaging with a single radiation exposure. Methods: A total of 300 patients undergoing weight-bearing radiography of both lower limbs were prospectively enrolled in a hospital in Beijing, including 129 males and 171 females. TLD detectors were placed at the gonads on both sides of the patient during scanning. After exposure, the TLD detectors were read out with an RGD-3B thermoluminescence dosimeter, and the corresponding dose values were obtained and analyzed. Results: The average triple-exposure dose (AP + bilateral views) in the full-length plate adult mode was (1281 ± 202) µGy, approximately three times the single-exposure dose of (429 ± 99) µGy. The triple-exposure dose (AP + bilateral views) was significantly lower than the radiation dose recommended by international standards. Multiple linear regression analysis showed that only height and weight affected the gonadal radiation dose. Under pediatric scanning parameters, the ESD was (359 ± 27) µGy and (627 ± 155) µGy for heights below and above 160 cm, respectively, a statistically significant difference. In adult mode, height and weight did not affect the ESD. Conclusions: This study showed that the gonadal exposure dose during single-exposure full-length scanning of both lower limbs in a weight-bearing position was affected by height and weight. The dose received by the gonads was within the safe range, and single-exposure full-length plate imaging is feasible in clinical practice.

Geophysics. Cosmic physics, Medicine (General)
DOAJ Open Access 2024
INTEGRATION OF 3D SPATIAL INFORMATION FOR MULTI-MODAL EXPERIENCE OF THE URBAN ARCHIVE

J. Kim, J. Hwang

This paper focuses on the case of Semal Village, a region in Paju that will be demolished under recent redevelopment. We experimented with utilizing 3D spatial information to capture the precise urban morphology of the village's present state before redevelopment and to transform such data into a multi-modal experience of the urban archive. We investigated feasible techniques of photogrammetry and hybrid 3D modelling that could render the situated reality of the village communicatively in various media, such as a 3D-printed scale model and augmented reality (AR) content. Through drone photography, the entire Semal village was captured, and three-dimensional data were obtained using photogrammetry. Along with information from on-site surveys, the mesh model was segmented into buildings, terrain, and vegetation for focused work. Each model was reconstructed and retextured using images from the on-site surveys. All data were compiled for full-colour 3D printing and assembly. The 3D-printed scale model replicating Semal is on display at the Paju Central Library. Additionally, AR content was created using the 3D-printed scale model and ethnographic data, aiming to archive and share people's memories, thereby continuously building a sustainable archive of the village.

Technology, Engineering (General). Civil engineering (General)
DOAJ Open Access 2024
Plants, Water, Salt, Coal: The Archival Strata of the Victorian Photographic Book

Ann Garascia

This essay interprets the special issue’s theme, “Bibliophilia: Book Matters”, through the curious and interconnected bodies of 19th-century books and plants: more specifically, the experimental photographic book objects inspired by Pteridomania, or the “Fern Craze”, a collecting fad hinging on the desire for ferns in prehistoric and contemporary forms. Botanical collecting is typically figured as an extractive process that removes living plants from their native environs, placing them within the dried, enclosed spaces of different book objects, ranging from institutional herbaria to domestic albums. Tapping into the preservative potentials of ecological extraction, I argue that the photographic book advances a model of botanical collecting that memorializes, rather than effaces, the environs of the extracted plants. Taking Cecilia Glaisher’s photographic book, The British Ferns (1855), as my primary subject, I map out processes of photographic creation, focusing on the material condition of Glaisher’s prints and their composition techniques, to demonstrate how different environmental milieux write themselves into and linger unseen within Glaisher’s book: the “wild states” of England’s fern-collecting cultures, Glaisher’s own regional ecosystem of Kent, and finally England’s deep-time stratigraphic layers. To access these spaces and times, my readings advance a theoretical framework that entwines eco-materialisms, media studies, and book history through their shared interest in more-than-human storytelling. In simultaneously preserving vegetal, geological, and human histories, the photographic book forms a multi-layered node of Victorian environmental thought that recognizes how extractive ecologies challenge standard, human-centered histories of the book.

DOAJ Open Access 2021
Cinema as a form of composition

Michele Guerra

Technique and creativity

Having been called upon to provide a contribution to a publication dedicated to “Techne”, I feel it is fitting to start from the theme of technique, given that for too many years now, we have fruitlessly attempted to understand the inner workings of cinema whilst disregarding the element of technique. And this has posed a significant problem in our field of study, as it would be impossible to gain a true understanding of what cinema is without immersing ourselves in the technical and industrial culture of the 19th century. It was within this culture that a desire was born: to mould the imaginary through the new techniques of reproduction and transfiguration of reality through images. Studying the development of the so-called “pre-cinema” – i.e. the period up to the conventional birth of cinema on 28 December 1895 with the presentation of the Cinématographe Lumière – we discover that the technical history of cinema is almost more enthralling than its artistic and cultural history, and that it contains all the great theoretical, philosophical and scientific insights that we need to help us understand the social, economic and cultural impact that cinema had on the culture of the 20th century. At the 1900 Paris Exposition, when cinema had already existed in some form for a few years, when the first few short films of narrative fiction also already existed, the cinematograph was placed in the Pavilion of Technical Discoveries, to emphasise the fact that the first wonder, this element of unparalleled novelty and modernity, was still there, in technique, in this marvel of innovation and creativity. I would like to express my idea through the words of Franco Moretti, who claims in one of his most recent works that it is only possible to understand form through the forces that pulsate through it and press on it from beneath, finally allowing the form itself to come to the surface and make itself visible and comprehensible to our senses. As such, the cinematic form – that which appears on the screen, that which is now so familiar to us, that which each of us has now internalised, that has even somehow become capable of configuring our way of thinking, imagining, dreaming – that form is underpinned by forces that allow it to eventually make its way onto the screen and become artistic and narrative substance. And those forces are the forces of technique, the forces of industry, the economic, political and social forces without which we could never hope to understand cinema. One of the issues that I always make a point of addressing in the first few lessons with my students is that if they think that the history of cinema is made up of films, directors, narrative plots to be understood, perhaps even retold in some way, then they are entirely on the wrong track; if, on the other hand, they understand that it is the story of an institution with economic, political and social drivers within it that can, in some way, allow us to come to the great creators, the great titles, but that without a firm grasp of those drivers, there is no point in even attempting to explore it, then they are on the right track.
As I see it, cinema in the twentieth century was a great democratic, interclassist laboratory such as no other art has ever been, and this occurred thanks to the fact that what underpinned it was an industrial reasoning: it had to respond to the capital invested in it, it had to make money, and as such, it had to reach the largest possible number of people, immersing them in a wholly unprecedented relational situation. The aim was to be as inclusive as possible, ultimately giving rise to the idea that cinema could not be autonomous, as other forms of art could be, but that it must instead be able to negotiate all the various forces acting upon it, pushing it in every direction. This concept of negotiation is one which has been explored in great detail by one of the greatest film theorists of our modern age, Francesco Casetti. In a 2005 book entitled “Eye of the Century”, which I consider to be a very important work, Casetti actually argues that cinema has proven itself to be the art form most capable of adhering to the complexity and fast pace of the short century, and that it is for this very reason that its golden age (in the broadest sense) can be contained within the span of just a hundred years. The fact that cinema was the true epistemological driving force of 20th-century modernity – a position now usurped by the Internet – is not, in my opinion, something that diminishes the strength of cinema, but rather an element of even greater interest. Casetti posits that cinema was the great negotiator of new cultural needs, of the need to look at art in a different way, of the willingness to adapt to technique and technology: indeed, the form of cinema has always changed according to the techniques and technologies that it has brought to the table or established a dialogue with on a number of occasions. Barry Salt, whose background is in physics, wrote an important book – publishing it at his own expense, as a mark of how difficult it is to work in certain fields – entitled “Film Style and Technology”, in which he calls upon us to stop writing the history of cinema starting from the creators, from the spirit of the time, from the great cultural and historical questions, and instead to start afresh by following the techniques available over the course of its development. Throughout the history of cinema, the creation of certain films has been the result of a particular set of technical conditions: having a certain type of film, a certain type of camera, only being able to move in a certain way, needing a certain level of lighting, having an entire arsenal of equipment that was very difficult to move and handle; and as the equipment, medium and techniques changed and evolved over the years, so too did the type of cinema that we were able to make. This means framing the history of cinema and film theory in terms of the techniques that were available, and starting from there: of course, whilst Barry Salt’s somewhat provocative suggestion by no means cancels out the entire cultural, artistic and aesthetic discourse in cinema – which remains fundamental – it nonetheless raises an interesting point, as if we fail to consider the methods and techniques of production, we will probably never truly grasp what cinema is. These considerations also help us to understand just how vast the “construction site” of cinema is – the sort of “factory” that lies behind the production of any given film.
Erwin Panofsky wrote a single essay on cinema in the 1930s entitled “Style and Medium in the Motion Pictures” – a very intelligent piece, as one would expect from Panofsky – in which at a certain point, he compares the construction site of the cinema to those of Gothic cathedrals, which were also under an immense amount of pressure from different forces, namely religious ones, but also socio-political and economic forces which ultimately shaped – in the case of the Gothic cathedral and its development – an idea of the relationship between the earth and the otherworldly. The same could be said for cinema, because it also involves starting with something very earthly, very grounded, which is then capable of unleashing an idea of imaginary metamorphosis. Some scholars, such as Edgar Morin, will say that cinema is increasingly becoming the new supernatural, the world of contemporary gods, as religion gradually gives way to other forms of deification. Panofsky’s image is a very focused one: by making film production into a construction site, which to all intents and purposes it is, he leads us to understand that there are different forces at work, represented by a producer, a scriptwriter, a director, but also a workforce, the simple labourers, as is always the case in large construction sites, calling into question the idea of who the “creator” truly is. So much so that cinema, now more than ever before, is reconsidering the question of authorship, moving towards a “history of cinema without names” in an attempt to combat the “policy of the author” which, in the 1950s, especially in France, identified the director as the de facto author of the film. Today, we are still in that position, with the director still considered the author of the film, but that was not always so: back in the 1910s, in the United States, the author of the film was the scriptwriter, the person who wrote it (as is now the case for TV series, where they have once again taken pride of place as the showrunner, the creator, the true author of the series, and nobody remembers the names of the directors of the individual episodes); or at times, it can be the producer, as was the case for a long time when the Oscar for Best Picture, for example, was accepted by the producer in their capacity as the commissioner, as the “owner” of the work. As such, the theme of authorship is a very controversial one indeed, but one which helps us to understand the great meeting of minds that goes into the production of a film, starting with the technicians, of course, but also including the actors. Occasionally, a film is even attributed to the name of a star, almost as if to declare that that film is theirs, in that it is their body and their talent as an actor lending it a signature that provides far more of a draw to audiences than the name of the director does. In light of this, the theme of authorship, which Panofsky raised in the 1930s through the example of the Gothic cathedral, which ultimately does not have a single creator, is one which uses the image of the construction site to also help us to better understand what kind of development a film production can go through and to what extent this affects its critical and historical reception; as such, grouping films together based on their director means doing something that, whilst certainly not incorrect in itself, precludes other avenues of interpretation and analysis which could have favoured or could still favour a different reading of the “cinematographic construction site”.   
Design and execution

The great classic Hollywood film industry was a model that, although it no longer exists in the same form today, unquestionably made an indelible mark at a global level on the history not only of cinema, but more broadly, of the culture of the 20th century. The industry involved a very strong vertical system resembling an assembly line, revolving around producers, who had a high level of decision-making autonomy and a great deal of expertise, often inclined towards a certain genre of film and therefore capable of bringing together the exact kinds of skills and visions required to make that particular film. The history of classic American cinema is one that can also be reconstructed around the units that these producers would form. The “majors”, along with the so-called “minors”, were put together like football teams, with a chairman flanked by figures whom we would nowadays refer to as a sporting director and a managing director, who built the team based on specific ideas, “buying” directors, scriptwriters, scenographers, directors of photography, and even actors and actresses who generally worked almost exclusively for their major – although they could occasionally be “loaned out” to other studios. This system led to a very marked characterisation and allowed for the film to be designed in a highly consistent, recognisable way in an age when genres reigned supreme and there was the idea that in order to keep the audience coming back, it was important to provide certain reassurances about what they would see: anyone going to see a Western knew what sorts of characters and storylines to expect, with the same applying to a musical, a crime film, a comedy, a melodrama, and so on. The star system served to fuel this working method, with these major actors also representing both forces and materials in the hands of an approach to filmmaking which had the ultimate objective of constructing the perfect film, in which everything had to function according to a rule rooted in both the aesthetic and the economic. Gore Vidal wrote that from 1939 onwards, Hollywood did not produce a single “wrong” film: indeed, whilst certainly hyperbolic, this claim confirms that that system produced films that were never wrong, never off-key, but instead always perfectly in tune with what the studios wished to achieve. Whilst this long-entrenched system of yesteryear ultimately imploded due to certain historical phenomena that rendered it outdated, the way of thinking about production has not changed all that much, with film design remaining tied to a professional approach that is still rooted within it. The overwhelming majority of productions still start from a system which analyses the market and the possible economic impact of the film, before even starting to tackle the various steps that lead up to the creation of the film itself. Following production systems and the ways in which they have changed, in terms of both the technology and the cultural contexts, also involves taking stock of the still considerable differences that exist between approaches to filmmaking in different countries, or indeed the similarities linking highly disparate economic systems (consider, for example, India’s “Bollywood” or Nigeria’s “Nollywood”: two incredibly strong film industries that we are not generally familiar with as they lack global distribution, although they are built very solidly).
In other words, any attempt to study Italian cinema and American cinema – to stay within this double field – with the same yardstick is unthinkable, precisely because the context of their production and design is completely different.

Composition and innovation

Studying the publications on cinema in the United States in the early 1900s – which, from about 1911 to 1923, offer us a revealing insight into the attempts made to garner an in-depth understanding of how this new storytelling machine worked and the development of the first real cultural industry of the modern age – casts light on the centrality of the issues of design and composition. I remain convinced that without reading and understanding that debate, it is very difficult to understand why cinema is as we have come to be familiar with it today. Many educational works investigated the inner workings of cinema, and some, having understood them, suggested that they were capable of teaching others to do so. These publications have almost never been translated into Italian and remain seldom studied even in the US, and yet they are absolutely crucial for understanding how cinema established itself on an industrial and aesthetic level. There are two key words that crop up time and time again in these books, the first being “action”, one of the first words uttered when a film starts rolling: “lights, camera, action” (in Italian, “luci, motore, azione”). This collection of terms is interesting in that “motore” highlights the presence of a machine that has to be started up, followed by “action”, which expresses that something must happen at that moment in front of that machine, otherwise the film will not exist. As such, “action” – a term to which I have devoted some of my studies – is a fundamental word here in that it represents a sort of moment of birth of the film that is very clear – tangible, even. The other word is “composition”, and this is an even more interesting word with a history that deserves a closer look: the first professor of cinema in history, Victor Oscar Freeburg (I edited the Italian translation of his textbook “The Art of Photoplay Making”, published in 1918), took up his position at Columbia University in 1915 and, in doing so, took on the task of teaching the first ever university course in cinema. Whilst Freeburg was, for his time, a very well-educated and highly-qualified person, having studied at Yale and then obtained his doctorate in theatre at Columbia, cinema was not entirely his field of expertise. He was asked to teach a course entitled “Photoplay Writing”. At the time, a film was known as a “photoplay”, in that it was a photographed play of sorts, and the fact that the central topic of the course was photoplay writing makes it clear that back then, the scriptwriter was considered the main author of the work. From this point of view, it made sense to entrust the teaching of cinema to an expert in theatre, based on the idea that it was useful to first and foremost teach a sort of photographable dramaturgy. However, upon arriving at Columbia, Freeburg soon realised whilst preparing his course that “photoplay writing” risked misleading the students, as it is not enough to simply write a story in order to make a film; as such, he decided to change the title of his course to “photoplay composition”. This apparently minor alteration, from “writing” to “composition”, in fact marked a decisive conceptual shift in that it highlighted that it was no longer enough to merely write: one had to “compose”.
So it was that the author of a film became, according to Freeburg, not the scriptwriter or director, but the “cinema composer” (a term of his own coinage), thus directing and broadening the concept of composition towards music, on the one hand, and architecture, on the other. We are often inclined to think that cinema has inherited expressive modules that come partly from literature, partly from theatre and partly from painting, but in actual fact, what Freeburg helps us to understand is that there are strong elements of music and architecture in a film, emphasising the lofty theme of the project. In his book, he explores at great length the relationship between static and dynamic forms in cinema, a topic that few have ever addressed in that way and that again, does not immediately spring to mind as applicable to a film. I believe that those initial intuitions were the result of a reflection unhindered by all the prejudices and preconceived notions that subsequently began to condition film studies as a discipline, and I feel that they are of great use to us today because they guide us, on the one hand, towards a symphonic idea of filmmaking, and on the other, towards an idea that preserves the fairly clear imprint of architecture.

Space-Time

In cinema as in architecture, the relationship between space and time is a crucial theme: in every textbook, space and time are amongst the first chapters to be studied precisely because in cinema, they undergo a process of metamorphosis – as Edgar Morin would say – which is vital to constructing the intermediate world of film. Indeed, from both a temporal and a spatial point of view, cinema provides a kind of ubiquitous opportunity to overlap different temporalities and spatialities, to move freely from one space to another, but above all, to construct new systems of time. The rules of film editing – especially so-called “invisible editing”, i.e. classical editing that conceals its own presence – are rules built upon specific and precise connections that hold together different spaces – even distant ones – whilst nonetheless giving the impression of unity, of contiguity, of everything that cinema never is in reality, because cinema is constantly fragmented and interrupted, even though we very often perceive it in continuity. As such, from both a spatial and a temporal perspective, there are technical studies that explain the rules of how to edit so as to give the idea of spatial continuity, as well as theoretical studies that explain how cinema has transformed our sense of space and time. To mark the beginning of Parma’s run as Italy’s Capital of Culture, an exhibition was organised entitled “Time Machine. Seeing and Experiencing Time”, curated by Antonio Somaini, with the challenge of demonstrating how cinema, from its earliest experiments to the digital age, has managed to manipulate and transform time, profoundly affecting our way of engaging with it. The themes of time and space are vital to understanding cinema, including from a philosophical point of view: in two of Gilles Deleuze’s seminal volumes, “The Movement-Image” and “The Time-Image”, the issues of space and time become the two great paradigms not only for explaining cinema, but also – as Deleuze himself says – for explaining a certain 20th-century philosophy.
Deleuze succeeds in a truly impressive endeavour, namely linking cinema to philosophical reflection – indeed, making cinema into an instrument of philosophical thought; this heteronomy of filmmaking is then also transferred to its ability to become an instrument that goes beyond its own existence to become a reflection on the century that saw it as a protagonist of sorts. Don Ihde argues that every era has a technical discovery that somehow becomes what he calls an “epistemological engine”: a tool that opens up a system of thought that would never have been possible without that discovery. One of the many examples of this over the centuries is the camera obscura, but we could also name cinema as the defining discovery for 20th-century thought: indeed, cinema is indispensable for understanding the 20th century, just as the Internet is for understanding our way of thinking in the 21st century.

Real-virtual

Nowadays, the film industry is facing the crisis of cinema closures, ultimately caused by ever-spreading media platforms and the power of the economic competition that they are exerting by aggressively entering the field of production and distribution, albeit with a different angle on the age-old desire to garner audiences. Just a few days ago, Martin Scorsese was lamenting the fact that on these platforms, the artistic project is in danger of foundering, as excellent projects are placed in a catalogue alongside a series of products of varying quality, thus confusing the viewer. A few years ago, during the opening ceremony of the academic year at the University of Southern California, Steven Spielberg and George Lucas expressed the same concept about the future of cinema in a different way. Lucas argued that cinemas would soon have to become incredibly high-tech places where people can have an experience that is impossible to reproduce elsewhere, with a ticket price that takes into account the expanded and increased experiential value on offer thanks to the new technologies used. Spielberg, meanwhile, observed that cinemas will manage to survive if they manage to transform the cinemagoer from a simple viewer into a player, an actor of sorts. The history of cinema has always been marked by continuous adaptation to technological evolutions. I do not believe that cinema will ever end. Jean-Luc Godard, one of the great masters of the Nouvelle Vague, once said in an interview: «I am very sorry not to have witnessed the birth of cinema, but I am sure that I will witness its death». Godard, who was born in 1930, is still alive. Since its origins, cinema has always transformed rather than dying. Raymond Bellour says that cinema is an art that never finishes finishing, a phrase that encapsulates the beauty and the secret of cinema: an art that never quite finishes finishing is an art that is always on the very edge of the precipice but never falls off, although it leans farther and farther over that edge. This is undoubtedly down to cinema’s ability to continually keep up with technique and technology, and in doing so to move – even to a different medium – to relocate, as contemporary theorists say, even finally moving out of cinemas themselves to shift onto platforms and tablets, yet all without ever ceasing to be cinema. That said, we should give everything we’ve got to ensure that cinemas survive.

Aesthetics of cities. City planning and beautifying, Architectural drawing and design
DOAJ Open Access 2021
Generative adversarial network for low‐light image enhancement

Fei Li, Jiangbin Zheng, Yuan‐fang Zhang

Low-light image enhancement is rapidly gaining research attention due to the increasing demands of extreme visual tasks in various applications. Although numerous methods exist to enhance image quality in low light, it remains an open question how to trade off between human observation and computer vision processing. In this work, an effective generative adversarial network structure is proposed, comprising both a densely residual block (DRB) and an enhancing block (EB) for low-light image enhancement. Specifically, the proposed end-to-end image enhancement method, consisting of a generator and a discriminator, is trained using a hyper loss function. The DRB adopts residual and dense skip connections to connect and enhance the features extracted from different depths in the network, while the EB receives unique multi-scale features to ensure feature diversity. Additionally, increasing the feature sizes allows the discriminator to further distinguish between fake and real images at the patch level. The merits of the loss function are also studied to recover both contextual and local details. Extensive experimental results show that our method is capable of dealing with extremely low-light scenes, and the realistic feature generator outperforms several state-of-the-art methods in a number of qualitative and quantitative evaluation tests.
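The block structure described above can be sketched roughly as follows; the layer count, growth width, and activation are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DenselyResidualBlock(nn.Module):
    """Sketch of a DRB: dense skip connections inside the block,
    plus a residual connection around it (layout assumed)."""
    def __init__(self, channels: int, growth: int = 32, n_layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, 3, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
            ))
            in_ch += growth  # dense: each layer sees all previous feature maps
        self.fuse = nn.Conv2d(in_ch, channels, 1)  # 1x1 conv back to input width

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return x + self.fuse(torch.cat(feats, dim=1))  # residual connection

# Smoke test on a dummy feature map.
out = DenselyResidualBlock(64)(torch.randn(1, 64, 32, 32))
print(out.shape)  # torch.Size([1, 64, 32, 32])
```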

Photography, Computer software
DOAJ Open Access 2021
The Last Stop of the Photographic Journey: Mobile Photography

Gökhan Demirel, Alahattin Kanlıoğlu

The journey of photography began with humankind's effort to transform what it sees into concrete, re-viewable data. Recording an image and making it viewable again, printable, and reproducible only became possible thanks to successive technological developments. Viewed in this context, photography stands out as a technological tool. Initially positioned as a means of collecting memories, photography gradually acquired the qualities of document and evidence, gaining a voice in the formation of social memory. The increasing use of photography also paved the way for its development as a technology. This development has mostly taken the direction of improving photographic production practices, making photography accessible to everyone, and improving the means of storing and sharing images. Seen in this light, photography has moved from darkrooms to analog cameras and from there to digital cameras. While digital cameras are today giving way to the more advanced technology of mirrorless cameras, information technology has begun to create mobile phone photography. In particular, the ever-growing popularity of cameras integrated into mobile phones, advances in optical technologies, and the digitalization of societies have directly shaped the emergence of mobile photography as a new field. Touching on the stages photography passed through in its transformation from a means of collecting memories into a mass communication medium, this study aims to explain the definition, scope, technological context, and areas of use of mobile photography. As a literature review, the study investigates and discusses, in chronological order, the past of mobile photography, its present development, and its future in its technological context.

Journalism. The periodical press, etc.
DOAJ Open Access 2021
Investigating the Potential of Network Optimization for a Constrained Object Detection Problem

Tanguy Ophoff, Cédric Gullentops, Kristof Van Beeck et al.

Object detection models are usually trained and evaluated on highly complicated, challenging academic datasets, which results in deep networks requiring lots of computations. However, many operational use cases involve more constrained situations: a limited number of classes to be detected, less intra-class variance, less lighting and background variance, constrained or even fixed camera viewpoints, etc. In these cases, we hypothesize that smaller networks could be used without deteriorating the accuracy. However, there are multiple reasons why this does not happen in practice: firstly, overparameterized networks tend to learn better, and secondly, transfer learning is usually used to reduce the necessary amount of training data. In this paper, we investigate how much we can reduce the computational complexity of a standard object detection network in such constrained object detection problems. As a case study, we focus on a well-known single-shot object detector, YoloV2, and combine three different techniques to reduce the computational complexity of the model without reducing its accuracy on our target dataset. To investigate the influence of problem complexity, we compare two datasets: a prototypical academic one (Pascal VOC) and a real-life operational one (LWIR person detection). The three optimization steps we exploit are swapping all convolutions for depth-wise separable convolutions, pruning, and weight quantization. The results of our case study substantiate our hypothesis that the more constrained a problem is, the more the network can be optimized. On the constrained operational dataset, combining these optimization techniques allowed us to reduce the computational complexity by a factor of 349, compared to only a factor of 9.8 on the academic dataset. When running a benchmark on an Nvidia Jetson AGX Xavier, our fastest model runs more than 15 times faster than the original YoloV2 model, whilst increasing the accuracy by 5% Average Precision (AP).
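The first of the three optimization steps is easy to illustrate. A depth-wise separable convolution replaces one k×k convolution with a per-channel spatial convolution followed by a 1×1 pointwise convolution; a minimal sketch (not the authors' code) is shown below.

```python
import torch.nn as nn

def depthwise_separable(in_ch: int, out_ch: int, k: int = 3) -> nn.Sequential:
    # Per-pixel cost drops from k*k*in_ch*out_ch multiply-accumulates
    # to k*k*in_ch + in_ch*out_ch.
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch),  # depthwise
        nn.Conv2d(in_ch, out_ch, 1),                               # pointwise
    )

# Example: a 3x3 convolution with 256 -> 512 channels costs about
# 3*3*256*512 ≈ 1.18M MACs per pixel; the separable version costs
# 3*3*256 + 256*512 ≈ 0.13M, roughly a 9x reduction before pruning
# and weight quantization are applied on top.
```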

Photography, Computer applications to medicine. Medical informatics
DOAJ Open Access 2021
Effects of vertical and horizontal plyometric exercises on explosive capacity and kinetic variables in professional long jump athletes

Amir Vazini Taher, Ratko Pavlović, Shahram Ahanjan et al.

Background and Study Aim. Athletic jumps are specific cyclic-acyclic movements that, despite good technical performance, require from competitors a high level of motor, specific-motor and functional abilities. The aim of this study was to examine the effect of vertical and horizontal plyometric training on explosive capacity and kinetic variables in long jump athletes. Material and Methods. The participants were twenty professional jumpers (22.5 ± 4.2 years; 178.4 ± 9.8 cm; 70.3 ± 7.6 kg) who were divided into two groups: experimental (plyometric training) and control (standard training). They had participated in the country's most recent track and field championship; moreover, three of them had participated in the most recent Asian Games, and one athlete had participated in the world track and field championship. The experiments were conducted in June-July 2019. All tests were performed after a standard warm-up protocol. The camera position around the jumping field was always chosen carefully to obtain the best footage. The imaging and motion-analysis processes were organized and supervised by a biomechanics expert. Results. Post-training results in the experimental group showed greater improvement in the 30 m sprint, vertical jump, horizontal velocity at take-off, and long jump performance compared with the control group. Significant between-group differences in all variables were detected post-training. No significant post-training improvements in flight time and take-off duration were reported in the control group. Conclusion. The vertical and horizontal plyometric training protocol was shown to be more effective in promoting improvement in explosive capacity than in kinetic variables.

Special aspects of education, Sports
DOAJ Open Access 2020
Structure-from-Motion-Derived Digital Surface Models from Historical Aerial Photographs: A New 3D Application for Coastal Dune Monitoring

Edoardo Grottoli, Mélanie Biausque, David Rogers et al.

Recent advances in structure-from-motion (SfM) techniques have proliferated the use of unmanned aerial vehicles (UAVs) in the monitoring of coastal landform changes, particularly when applied to the reconstruction of 3D surface models from historical aerial photographs. Here, we explore a number of depth-map filtering and point-cloud cleaning methods using the commercial software Agisoft Metashape Pro to determine the optimal methodology for building reliable digital surface models (DSMs). Twelve different aerial photography-derived DSMs are validated and compared against light detection and ranging (LiDAR)- and UAV-derived DSMs of a vegetated coastal dune system that has undergone several decades of coastline retreat. The studied methods showed an average vertical error (root mean square error, RMSE) of approximately 1 m, with the best method yielding an error of 0.93 m. In our case, the best method resulted from removing points with confidence values in the range 0–3 from the dense point cloud (DPC), with no filter applied to the depth maps. Differences among the methods examined were associated with the reconstruction of the dune slipface. The application of modern SfM methodology to the analysis of historical aerial (vertical) photography is a novel and reliable approach that can be used to better quantify coastal dune volume changes. DSMs derived from suitable historical aerial photographs therefore represent dependable sources of 3D data that can be used to better analyse long-term geomorphic changes in coastal dune areas that have undergone retreat.
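For reference, the vertical RMSE used to rank the twelve DSMs is the standard checkpoint error, with the LiDAR- or UAV-derived surface as the reference elevation:

```latex
\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(z_i^{\mathrm{DSM}} - z_i^{\mathrm{ref}}\right)^{2}}
```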

DOAJ Open Access 2020
Hyperboles in Newspaper Photographs: A Case Study of Khalmg Ünn’s (‘The Kalmyk Pravda’) Issues, 1957–1961

Viktoriya V. Kukanova, Aleksandra T. Bayanova, Larisa B. Mandzhikova

Introduction. Photography is a visual source of information, and its unique character has been recognized by numerous researchers. Newspaper photographs tend to mirror both a historical era proper and the daily life of its inhabitants. Goals. The paper aims to analyze the ‘essential messages’ of photographs published by the Khalmg Ünn (‘The Kalmyk Pravda’) newspaper in 1957–1961. The periodical is an ethnically oriented print medium that has published, and still publishes, materials in the Kalmyk language. Materials and Methods. The continuous sampling method was employed to extract photographs from newspaper issues of 1957–1961. A total of 4,000 units were analyzed, but the study primarily focuses on pictures taken by local photographers in the territory of the Kalmyk ASSR. Photographs by TASS were included to trace similar trends through comparison with regional photographic images. Conclusions. The study shows that the photographic materials of Khalmg Ünn (‘The Kalmyk Pravda’) highlight different artistic trends manifested in eclectic patterns compiled from both Socialist realism and the ‘severe style’ (the latter characterized by romantic heroification of strenuous laborers). In just two years, the newspaper's images rapidly evolved from mere snapshots to photographic pictures created through the use of diverse means and methods, e.g., hyperbolization achieved via different camera angles and glass-prism techniques. Newspaper photographers turned to common laborers to show their joys and hardships, and the everyday life of citizens not involved in party or other administrative activities. The Khrushchev era gave rise to the most essential changes in newspaper photography and the images examined. Further analysis of newspaper materials should facilitate the development of both regional print media studies and anthropological studies at large.

History (General), Oriental languages and literatures
DOAJ Open Access 2018
3-D RECONSTRUCTION OF DIGITAL OUTCROP MODEL BASED ON MULTIPLE VIEW IMAGES AND TERRESTRIAL LASER SCANNING

Reginaldo Macedonio da Silva, Maurício Roberto Veronez, Luiz Gonzaga Júnior et al.

This paper presents a comparative study of 3D reconstruction based on active and passive sensors, namely LiDAR terrestrial laser scanning (TLS) and raster images (photography), respectively. An accuracy analysis has been performed on the positioning of outcrop point clouds obtained by both techniques. To make the comparison feasible, the datasets are composed of point clouds generated from multiple images in different poses, using a consumer digital camera, and directly by terrestrial laser scanner. After preprocessing stages to obtain these point clouds, the two are compared in terms of positional discrepancies and standard deviation. A preliminary analysis has shown the feasibility of employing digital images jointly with 3D reconstruction methods for digital outcrop modelling, enabling data acquisition at low cost without significant loss of accuracy compared with LiDAR.
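A minimal sketch of one common way to quantify such positional discrepancies between two point clouds (nearest-neighbour cloud-to-cloud distances); the file names are placeholders, and this is not necessarily the authors' exact metric.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical N x 3 arrays of XYZ points from photogrammetry and from TLS.
sfm = np.loadtxt("sfm_points.xyz")
tls = np.loadtxt("tls_points.xyz")

# For each SfM point, the distance to its nearest TLS neighbour gives a
# simple cloud-to-cloud discrepancy distribution.
dist, _ = cKDTree(tls).query(sfm)
print(f"mean discrepancy: {dist.mean():.3f} m, std: {dist.std():.3f} m")
```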

Geography. Anthropology. Recreation, Cartography

Page 29 of 11178