Muhammad Asghar, Muhammad Ayaz, Sharafat Ali
Results for "Land use"
Showing 20 of ~2,073,471 results · from arXiv, CrossRef
Zaber Al Hassan Ayon, Gulam Husain, Roshankumar Bisoi et al.
This paper presents a novel approach to represent enterprise web application structures using Large Language Models (LLMs) to enable intelligent quality engineering at scale. We introduce a hierarchical representation methodology that optimizes the few-shot learning capabilities of LLMs while preserving the complex relationships and interactions within web applications. The approach encompasses five key phases: comprehensive DOM analysis, multi-page synthesis, test suite generation, execution, and result analysis. Our methodology addresses existing challenges in applying Generative AI techniques to automated software testing by developing a structured format that enables LLMs to understand web application architecture through in-context learning. We evaluated our approach using two distinct web applications: an e-commerce platform (Swag Labs) and a healthcare application (MediBox) deployed within the Atalgo engineering environment. The results demonstrate success rates of 90% and 70%, respectively, in achieving automated testing, with high relevance scores for test cases across multiple evaluation criteria. The findings suggest that our representation approach significantly enhances LLMs' ability to generate contextually relevant test cases and provide better quality assurance overall, while reducing the time and effort required for testing.
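The hierarchical representation the abstract describes can be pictured as a nested structure over pages, elements, and page transitions, flattened into text for few-shot prompting. The sketch below is a hypothetical illustration: the schema, field names, and the `summarize` helper are assumptions, not the paper's actual format.

```python
# Hypothetical hierarchical page representation of the kind an LLM
# prompt could consume (schema and field names are illustrative).
page = {
    "name": "login",
    "url": "/login",
    "elements": [
        {"id": "user-name", "type": "input", "role": "username"},
        {"id": "password", "type": "input", "role": "password"},
        {"id": "login-button", "type": "button", "action": "submit"},
    ],
    "transitions": [{"on": "login-button", "to": "inventory"}],
}

def summarize(page):
    """Flatten the nested structure into a compact one-line summary
    suitable for inclusion in a few-shot prompt."""
    els = ", ".join(e["id"] for e in page["elements"])
    return f"Page {page['name']} ({page['url']}): elements [{els}]"
```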
Jintao Li, Haoran Dong, Shaoxing Li
Piotr Kryczka
Kedi Zheng, Qixin Chen, Yi Wang et al.
Having a better understanding of how locational marginal prices (LMPs) change helps in price forecasting and market strategy making. This paper investigates the fundamental distribution of the congestion part of LMPs in high-dimensional Euclidean space using an unsupervised approach. LMP models based on the lossless and lossy DC optimal power flow (DC-OPF) are analyzed to show the overlapping subspace property of the LMP data. The congestion part of LMPs is spanned by certain row vectors of the power transfer distribution factor (PTDF) matrix, and the subspace attributes of an LMP vector are found to uniquely reflect the instantaneous congestion status of all the transmission lines. The proposed method searches hierarchically for the basis vectors that span the subspaces of congestion LMP data. In the bottom-up search, the data belonging to 1-dimensional subspaces are detected, and the remaining data are projected onto the orthogonal subspaces. This procedure is repeated until all the basis vectors are found or a basis gap appears. Top-down searching is used to address the basis gap by hyperplane detection with outliers. Once all the basis vectors are detected, the congestion status can be identified. Numerical experiments based on the IEEE 30-bus system, IEEE 118-bus system, Illinois 200-bus system, and Southwest Power Pool are conducted to show the performance of the proposed method.
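The bottom-up step described above can be sketched in a few lines: vectors lying in the same 1-D subspace are scalar multiples of one another, and the residual data are projected onto the orthogonal complement before the next level of the search. This is a minimal numpy sketch of that idea, not the paper's full algorithm (which also handles noise, the basis gap, and the top-down hyperplane search).

```python
import numpy as np

def find_1d_subspaces(X, tol=1e-6):
    """Group row vectors of X by direction: vectors in the same 1-D
    subspace are parallel up to sign (simplified bottom-up search)."""
    bases = []
    labels = -np.ones(len(X), dtype=int)
    for i, x in enumerate(X):
        n = np.linalg.norm(x)
        if n < tol:
            continue  # zero vector carries no direction
        u = x / n
        for k, b in enumerate(bases):
            if abs(abs(u @ b) - 1.0) < tol:  # parallel up to sign
                labels[i] = k
                break
        else:
            bases.append(u)
            labels[i] = len(bases) - 1
    return np.array(bases), labels

def project_orthogonal(X, basis):
    """Project the data onto the orthogonal complement of a detected
    basis vector, so the next search level works on the residual."""
    b = basis / np.linalg.norm(basis)
    return X - np.outer(X @ b, b)
```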
Prajit KrisshnaKumar, Jhoel Witter, Steve Paul et al.
The majority of aircraft under the Urban Air Mobility (UAM) concept are expected to be of the electric vertical takeoff and landing (eVTOL) vehicle type, which will operate out of vertiports. While this is akin to the relationship between general aviation aircraft and airports, the conceived location of vertiports within dense urban environments presents unique challenges in managing the air traffic served by a vertiport. This challenge becomes pronounced with increasing frequency of scheduled landings and take-offs. This paper assumes a centralized air traffic controller (ATC) to explore the performance of a new AI-driven ATC approach to manage the eVTOLs served by the vertiport. Minimum-separation-driven safety and delays are the two important considerations in this case. The ATC problem is modeled as a task allocation problem, and uncertainties due to communication disruptions (e.g., poor link quality) and inclement weather (e.g., high gust effects) are added as a small probability of action failures. To learn the vertiport ATC policy, a novel graph-based reinforcement learning (RL) solution called "Urban Air Mobility - Vertiport Schedule Management (UAM-VSM)" is developed. This approach uses graph convolutional networks (GCNs) to abstract the vertiport space and eVTOL space as graphs and to aggregate information for a centralized ATC agent, helping it generalize over the environment. Unreal Engine combined with AirSim is used as the simulation environment over which training and testing occur. Uncertainties are considered only during testing, due to the high cost of Monte Carlo sampling over such realistic simulations. The proposed graph RL method demonstrates significantly better performance on the test scenarios when compared against a feasible random decision-making baseline and a first-come-first-served (FCFS) baseline, including the ability to generalize to unseen scenarios and under uncertainties.
Håvard Kjellmo Arnestad, Gábor Geréb, Tor Inge Birkenes Lønmo et al.
Over the past decade, interval arithmetic (IA) has been utilized to determine tolerance bounds of phased array beampatterns. IA only requires that the errors of the array elements are bounded, and can provide reliable beampattern bounds even when a statistical model is missing. However, previous research has not explored the use of IA to find the error realizations responsible for achieving specific bounds. In this study, the capabilities of IA are extended by introducing the concept of "backtracking", which provides a direct way of addressing how specific bounds can be attained. Backtracking allows for the recovery of both the specific error realization and the corresponding beampattern, enabling the study and verification of which errors result in the worst-case array performance in terms of the peak sidelobe level. Moreover, IA is made applicable to a wider range of arrays by adding support for arbitrary array geometries with directive elements and mutual coupling, in addition to element amplitude, phase, and positioning errors. Lastly, a simple formula for approximate bounds of uniformly bounded errors is derived and numerically verified. This formula gives insights into how array size and apodization cannot reduce the worst-case peak sidelobe level beyond a certain limit.
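The core IA idea above can be made concrete with a toy example: for a unit-weight array with bounded phase errors, each term of the array factor contributes an interval, and the sum of those intervals is a guaranteed bound. This sketch bounds only the real part of the array factor under phase errors; the paper's method additionally handles amplitude and position errors, directive elements, and mutual coupling.

```python
import numpy as np

def cos_range(lo, hi):
    """Tight range of cos over [lo, hi] (assumes hi - lo <= 2*pi):
    endpoints plus any interior extrema at multiples of pi."""
    vals = [np.cos(lo), np.cos(hi)]
    k = np.ceil(lo / np.pi)
    while k * np.pi <= hi:
        vals.append(np.cos(k * np.pi))
        k += 1
    return min(vals), max(vals)

def beampattern_bounds(psi, delta):
    """Interval bound on Re{AF} = sum_n cos(psi_n + phi_n) where each
    phase error satisfies |phi_n| <= delta (toy IA beampattern bound,
    unit amplitudes, phase errors only)."""
    lo = hi = 0.0
    for p in psi:
        a, b = cos_range(p - delta, p + delta)
        lo += a
        hi += b
    return lo, hi
```

With zero error the interval collapses to the nominal value, and it widens monotonically with `delta`, which is the sense in which IA bounds are guaranteed rather than statistical.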
Timothy Chase, Chris Gnam, John Crassidis et al.
The detection of hazardous terrain during the planetary landing of spacecraft plays a critical role in assuring vehicle safety and mission success. A cheap and effective way of detecting hazardous terrain is through the use of visual cameras, which ensure operational ability from atmospheric entry through touchdown. Plagued by resource constraints and limited computational power, traditional techniques for visual hazardous terrain detection focus on template matching and registration to pre-built hazard maps. Although successful on previous missions, this approach is restricted to the specificity of the templates and limited by the fidelity of the underlying hazard map, which both require extensive pre-flight cost and effort to obtain and develop. Terrestrial systems that perform a similar task in applications such as autonomous driving utilize state-of-the-art deep learning techniques to successfully localize and classify navigation hazards. Advancements in spacecraft co-processors aimed at accelerating deep learning inference enable the application of these methods in space for the first time. In this work, we introduce You Only Crash Once (YOCO), a deep learning-based visual hazardous terrain detection and classification technique for autonomous spacecraft planetary landings. Through the use of unsupervised domain adaptation we tailor YOCO for training by simulation, removing the need for real-world annotated data and expensive mission surveying phases. We further improve the transfer of representative terrain knowledge between simulation and the real world through visual similarity clustering. We demonstrate the utility of YOCO through a series of terrestrial and extraterrestrial simulation-to-real experiments and show substantial improvements toward the ability to both detect and accurately classify instances of planetary terrain.
Gyu Seon Kim, JaeHyun Chung, Soohyun Park
The advent of reusable rockets has heralded a new era in space exploration, reducing the costs of launching satellites by a significant factor. Traditional rockets were disposable, but the design of reusable rockets for repeated use has revolutionized the financial dynamics of space missions. The most critical phase of reusable rockets is the landing stage, which involves managing the tremendous speed and attitude for safe recovery. The complexity of this task presents new challenges for control systems, specifically in terms of precision and adaptability. Classical control systems like the proportional-integral-derivative (PID) controller lack the flexibility to adapt to dynamic system changes, making controller redesign costly and time-consuming. This paper explores the integration of quantum reinforcement learning into the control systems of reusable rockets as a promising alternative. Unlike classical reinforcement learning, quantum reinforcement learning uses quantum bits that can exist in superposition, allowing for more efficient information encoding and reducing the number of parameters required. This leads to increased computational efficiency, reduced memory requirements, and more stable and predictable performance. Because reusable rockets must be lightweight, they cannot carry heavy onboard computers. In the reusable rocket scenario, quantum reinforcement learning, with its reduced memory requirements due to fewer parameters, is therefore a good fit.
Hyolim Kang, Hanjung Kim, Joungbin An et al.
Temporal Action Localization (TAL) methods typically operate on top of feature sequences from a frozen snippet encoder that is pretrained with the Trimmed Action Classification (TAC) task, resulting in a task discrepancy problem. While existing TAL methods mitigate this issue either by retraining the encoder with a pretext task or by end-to-end fine-tuning, they commonly require prohibitively high memory and computation. In this work, we introduce the Soft-Landing (SoLa) strategy, an efficient yet effective framework to bridge the transferability gap between the pretrained encoder and the downstream tasks by incorporating a light-weight neural network, i.e., a SoLa module, on top of the frozen encoder. We also propose an unsupervised training scheme for the SoLa module; it learns with inter-frame Similarity Matching, which uses the frame interval as its supervisory signal, eliminating the need for temporal annotations. Experimental evaluation on various benchmarks for downstream TAL tasks shows that our method effectively alleviates the task discrepancy problem with remarkable computational efficiency.
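The frame-interval supervision described above can be sketched as a loss: pairwise similarities between projected frame features are pushed toward a target that depends only on how far apart the frames are. The exponential decay and the `tau` scale below are assumptions for illustration, not the paper's exact objective.

```python
import numpy as np

def similarity_matching_loss(feats, tau=4.0):
    """Toy inter-frame Similarity Matching objective: cosine similarity
    between frames i and j is matched to a target decaying with the
    frame interval |i - j|, so no temporal annotations are needed.
    feats: (num_frames, dim) array of projected snippet features."""
    F = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = F @ F.T                                  # pairwise cosine sims
    idx = np.arange(len(feats))
    target = np.exp(-np.abs(idx[:, None] - idx[None, :]) / tau)
    return float(np.mean((sim - target) ** 2))
```

In a full training loop this loss would be minimized over the parameters of the light-weight SoLa module that produces `feats` from the frozen encoder's output.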
Simiao Ren, Jordan Malof, T. Robert Fetter et al.
Solar home systems (SHS), a cost-effective solution for rural communities far from the grid in developing countries, are small solar panels and associated equipment that provide power to a single household. A crucial resource for targeting further investment of public and private resources, as well as tracking the progress of universal electrification goals, is shared access to high-quality data on individual SHS installations, including information such as location and power capacity. Though recent studies utilizing satellite imagery and machine learning to detect solar panels have emerged, they struggle to accurately locate many SHS due to limited image resolution (some small solar panels occupy only a few pixels in satellite imagery). In this work, we explore the viability and cost-performance tradeoff of using automatic SHS detection on unmanned aerial vehicle (UAV) imagery as an alternative to satellite imagery. More specifically, we explore three questions: (i) what is the detection performance of SHS using drone imagery; (ii) how expensive is the drone data collection, compared to satellite imagery; and (iii) how well does drone-based SHS detection perform in real-world scenarios. We collect and publicly release a dataset of high-resolution drone imagery encompassing SHS imaged under real-world conditions and use this dataset and a dataset from Rwanda to evaluate the capabilities of deep learning models to recognize SHS, including those that are too small to be reliably recognized in satellite imagery. The results suggest that UAV imagery may be a viable alternative to identify very small SHS from the perspectives of both detection accuracy and financial costs of data collection. UAV-based data collection may be a practical option for supporting electricity access planning strategies for achieving sustainable development goals and for monitoring the progress towards those goals.
Cyril Gadal, Pauline Delorme, Clément Narteau et al.
Emergence and growth of sand dunes result from the dynamic interaction between topography, wind flow and sediment transport. While feedbacks between these variables are well studied at the scale of a single and relatively small dune, the average effect of a periodic large-scale dune pattern on atmospheric flows remains poorly constrained, due to a pressing lack of data in major sand seas. Here, we compare local measurements of surface winds to the predictions of the ERA5-Land climate reanalysis at four locations in Namibia, both within and outside the giant linear dune field of the Namib Sand Sea. In the desert plains to the north of the sand sea, observations and predictions agree well. This is also the case in the interdune areas of the sand sea during the day. During the night, however, an additional wind component aligned with the giant dune orientation is measured, in contrast to the easterly wind predicted by the ERA5-Land reanalysis. For the given dune orientation and measured wind regime, we link the observed wind deviation (over 50°) to the daily cycle of the turbulent atmospheric boundary layer. During the night, a shallow boundary layer induces a flow confinement above the giant dunes, resulting in large flow deviations, especially for the slower easterly winds. During the day, the feedback of the giant dunes on the atmospheric flow is much weaker due to the thicker boundary layer and higher wind speeds. Finally, we propose that the confinement mechanism and the associated wind deflections induced by giant dunes could explain the development of smaller-scale secondary dunes, which elongate obliquely in the interdune areas of the primary dune pattern.
Cigdem Beyan, Alessandro Vinciarelli, Alessio Del Bue
Automated co-located human-human interaction analysis has been addressed by the use of nonverbal communication as measurable evidence of social and psychological phenomena. We survey the computing studies (since 2010) detecting phenomena related to social traits (e.g., leadership, dominance, personality traits), social roles/relations, and interaction dynamics (e.g., group cohesion, engagement, rapport). Our target is to identify the nonverbal cues and computational methodologies resulting in effective performance. This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings (free-standing conversations, meetings, dyads, and crowds). We also present a comprehensive summary of the related datasets and outline future research directions regarding the implementation of artificial intelligence, dataset curation, and privacy-preserving interaction analysis. Some major observations are: the most often used nonverbal cue, computational method, interaction environment, and sensing approach are, respectively, speaking activity, support vector machines, meetings composed of 3-4 persons, and microphones combined with cameras; multimodal features consistently perform better; deep learning architectures show improved overall performance, but there exist many phenomena whose detection has never been implemented through deep models. We also identified several limitations, such as the lack of scalable benchmarks, annotation reliability tests, cross-dataset experiments, and explainability analysis.
Yong Xie
Jennie L. Durant, Clint R.V. Otto
Carlos S. Ciria, Carlos M. Sastre, Juan Carrasco et al.
To meet the expected increase in demand for energy crops without creating land-use sustainability conflicts, farmers need reliable alternatives for marginal agricultural areas, where food production is rarely economically and environmentally sustainable. The purpose of this work was to study the viability of introducing new non-food crops in marginal areas of real farms. This study compares the profit margin and the energy and environmental performance of growing tall wheatgrass in the marginal area of a rainfed farm versus rye, the annual crop traditionally sown in that area. The farm owned 300 ha, of which about 13 percent was marginal. The methodology used the profit margin of the crops as the indicator for the economic assessment and Life Cycle Assessment (LCA) as the technique for the energy and environmental evaluations. The economic analysis showed a slight enhancement of the profit margin for tall wheatgrass (156 Euro ha-1 y-1) compared to rye (145 Euro ha-1 y-1). The environmental LCA was driven by CO2 fixation due to soil organic matter increase and the reduced input consumption of tall wheatgrass, which produced a Global Warming Potential (GWP) of -1.9 Mg CO2 eq ha-1 y-1 versus 1.6 Mg CO2 eq ha-1 y-1 obtained for rye. Tall wheatgrass cultivation consumed less than 40 percent of the primary energy of rye. According to these results, it was concluded that tall wheatgrass is a better option than rye from the energy and environmental points of view, and a slightly better option economically. Considering these results, monetization of the CO2 eq reductions of tall wheatgrass compared to rye is essential to improve its profit margin and promote the adoption of this new crop in marginal areas of farms.
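The abstract's closing point about monetizing CO2-eq reductions can be made concrete with its own reported figures. The carbon price below is a made-up assumption purely for illustration; the margins and GWP values are those stated in the abstract.

```python
# Figures reported in the abstract.
margin = {"tall_wheatgrass": 156, "rye": 145}   # Euro ha-1 y-1
gwp = {"tall_wheatgrass": -1.9, "rye": 1.6}     # Mg CO2-eq ha-1 y-1

# Hypothetical carbon price used only for illustration (assumption).
CARBON_PRICE = 50                               # Euro per Mg CO2-eq

def margin_with_carbon(crop):
    """Profit margin after crediting (negative GWP) or debiting
    (positive GWP) the crop's emissions at the assumed carbon price."""
    return margin[crop] - gwp[crop] * CARBON_PRICE
```

Under this assumed price, the carbon credit widens the gap between the two crops considerably, which is the sense in which monetization would "improve the profit margin" of tall wheatgrass.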
Kirk Y. W. Scheper, Guido C. H. E. de Croon
Automatic optimization of robotic behavior has been the long-standing goal of Evolutionary Robotics. Allowing the problem at hand to be solved by automation often leads to novel approaches and new insights. A common problem encountered with this approach is that when this optimization occurs in a simulated environment, the optimized policies are subject to the reality gap when implemented in the real world. This often results in sub-optimal behavior, if it works at all. This paper investigates the automatic optimization of neurocontrollers to perform quick but safe landing maneuvers for a quadrotor micro air vehicle using the divergence of the optical flow field of a downward-looking camera. The optimized policies showed that a piece-wise linear control scheme is more effective than the simple linear scheme commonly used, something not yet considered by human designers. Additionally, we show the utility of using abstraction on the input and output of the controller as a tool to improve the robustness of the optimized policies to the reality gap, by testing our policies optimized in simulation on real-world vehicles. We tested the neurocontrollers using two different methods to generate and process the visual input, one using a conventional CMOS camera and one a dynamic vision sensor, both of which perform significantly differently than the simulated sensor. The use of the abstracted input resulted in near seamless transfer to the real world, with the controllers showing high robustness to a clear reality gap.
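The piece-wise linear control scheme mentioned above can be illustrated with a minimal sketch: a thrust command that responds gently to small divergence errors and more steeply to large ones, while staying continuous at the break point. The setpoint, gains, and knee location below are made-up values for illustration, not the parameters evolved in the paper.

```python
def thrust_command(divergence, setpoint=0.5, gains=(0.3, 1.2), knee=0.2):
    """Toy piece-wise linear controller on optical-flow divergence error
    (illustrative of the control-scheme *shape* the evolution found;
    all numeric values are assumptions)."""
    err = divergence - setpoint
    g_low, g_high = gains
    if abs(err) <= knee:
        return g_low * err
    # Steeper response for large errors, continuous at the knee.
    sign = 1.0 if err > 0 else -1.0
    return sign * (g_low * knee + g_high * (abs(err) - knee))
```

The knee gives a stronger correction when the vehicle is far from the desired divergence without introducing a discontinuity in the command, which is the qualitative advantage of a piece-wise linear scheme over a single linear gain.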
Saket Mishra, Piyush Tagade
Future advancement of engineering applications depends on the design of novel materials with desired properties. The enormous size of the known chemical space necessitates the use of automated high-throughput screening to search for the desired material. High-throughput screening uses quantum chemistry calculations to predict material properties; however, the computational complexity of these calculations often imposes a prohibitively high cost on the search for the desired material. This critical bottleneck is resolved by using deep machine learning to emulate the quantum computations. However, deep learning algorithms require a large training dataset to ensure acceptable generalization, which is often unavailable a priori. In this paper, we propose a deep Gaussian process based approach to develop an emulator for quantum calculations. We further propose a novel molecular descriptor that enables implementation of the proposed approach. As demonstrated in this paper, the proposed approach can be implemented using a small dataset. We demonstrate the efficacy of our approach for prediction of the formation energy of inorganic molecules.
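The emulation idea can be sketched with a single-layer Gaussian process regressor: train on a few (descriptor, property) pairs from expensive quantum calculations, then predict the property cheaply elsewhere. The paper uses a *deep* GP and its own molecular descriptor, so this numpy sketch is a simplified stand-in, not the proposed method.

```python
import numpy as np

def rbf_kernel(A, B, length=1.0):
    """Squared-exponential kernel on descriptor vectors (rows)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

def gp_predict(X_train, y_train, X_test, noise=1e-6, length=1.0):
    """Posterior mean of a GP regressor acting as a cheap emulator of
    a quantum-chemistry property (e.g., formation energy)."""
    K = rbf_kernel(X_train, X_train, length) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_test, X_train, length)
    return Ks @ np.linalg.solve(K, y_train)
```

With near-zero noise the posterior mean interpolates the training data, which is the property that makes GPs attractive when only a small dataset of expensive calculations is available.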
SNO Collaboration, B. Aharmim, S. N. Ahmed et al.
The long baseline between the Earth and the Sun makes solar neutrinos an excellent test beam for exploring possible neutrino decay. The signature of such decay would be an energy-dependent distortion of the traditional survival probability which can be fit for using well-developed and high precision analysis methods. Here a model including neutrino decay is fit to all three phases of $^8$B solar neutrino data taken by the Sudbury Neutrino Observatory. This fit constrains the lifetime of neutrino mass state $\nu_2$ to be ${>8.08\times10^{-5}}$ s/eV at $90\%$ confidence. An analysis combining this SNO result with those from other solar neutrino experiments results in a combined limit for the lifetime of mass state $\nu_2$ of ${>1.04\times10^{-3}}$ s/eV at $99\%$ confidence.
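The energy-dependent distortion mentioned above is often written in a schematic textbook form, assuming $^8$B neutrinos exit the Sun essentially as the $\nu_2$ mass state (a high-energy MSW approximation, not SNO's full three-flavor fit):

```latex
% Schematic nu_2-decay suppression of the solar survival probability;
% L is the Earth-Sun distance, E the neutrino energy, and tau_2/m_2
% (in s/eV) is the combination the fit bounds from below.
P_{ee}(E) \;\approx\; \sin^2\theta_{12}\,
          \exp\!\left(-\frac{m_2}{\tau_2}\,\frac{L}{E}\right)
```

Because the suppression weakens as $E$ grows, a finite $\tau_2/m_2$ produces exactly the kind of energy-dependent distortion that the high-precision spectral fits can constrain.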
Lyn E. Pleger
Page 51 of 103674