Large language model for post‐earthquake structural damage assessment of buildings
Abstract
A rapid and accurate assessment of structural damage to buildings in the aftermath of earthquakes is critical for emergency response and engineering retrofit decisions. However, current in situ building damage assessment relies primarily on visual inspections by engineering professionals and on deep learning techniques that use single‐modal information, which are time‐consuming and cannot effectively integrate visual and textual information. In recent years, multimodal learning methods and large language models (LLMs), which can process both visual and linguistic information, have emerged as viable alternatives for building damage assessment. In this study, a visual question–answering model for structural damage assessment (SDA‐Chat) is developed that automatically generates professional textual interpretations of structural damage images through multi‐round visual question–answering (VQA) interactions. A three‐stage training strategy that includes instruction fine‐tuning is designed to improve the model's VQA accuracy. A cross‐modality projector based on dimension reshaping and a parallel network architecture is developed to improve the accuracy and speed of multimodal feature alignment. Comparative experiments against various advanced LLMs are conducted on a self‐constructed dataset containing 8195 pairs of structural damage images and corresponding damage description texts. The results show that SDA‐Chat can handle seven different tasks simultaneously, demonstrating the effectiveness of the proposed method. The model's highest question–answering accuracy and efficiency reach 83.04% and 435.31 tokens/s, respectively. In addition, high‐precision and lightweight variants are provided for different application scenarios.
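The abstract mentions a cross-modality projector built from dimension reshaping and a parallel network architecture, but gives no implementation details. A minimal illustrative sketch in NumPy of how such a projector might work is shown below; all shapes, the merge factor, and the two-branch design are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def parallel_projector(vision_feats, d_model=64, merge=4):
    """Hypothetical cross-modality projector sketch.

    Dimension reshaping: merge `merge` adjacent vision tokens into one,
    trading sequence length for channel width (fewer tokens to align).
    Parallel architecture: two projection branches run on the merged
    features and their outputs are summed. Weights are random stand-ins
    for learned parameters.
    """
    n_tokens, d_vis = vision_feats.shape
    assert n_tokens % merge == 0, "token count must be divisible by merge factor"
    # Reshape (n_tokens, d_vis) -> (n_tokens/merge, d_vis*merge)
    merged = vision_feats.reshape(n_tokens // merge, d_vis * merge)
    # Two parallel linear branches, one with a nonlinearity, summed at the end.
    w1 = rng.standard_normal((d_vis * merge, d_model)) * 0.02
    w2 = rng.standard_normal((d_vis * merge, d_model)) * 0.02
    return merged @ w1 + np.tanh(merged @ w2)

# Example: 16 vision tokens of width 32 become 4 language-space tokens of width 64.
tokens = parallel_projector(rng.standard_normal((16, 32)))
print(tokens.shape)  # (4, 64)
```

Merging tokens before projection is one common way such projectors cut the number of visual tokens the LLM must attend to, which is consistent with the abstract's claim of faster feature alignment.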
Authors (4)
Yongqing Jiang
Jianze Wang
Xinyi Shen
Kaoshan Dai
Quick Access
- Year Published
- 2025
- Language
- en
- Total Citations
- 9
- Source Database
- Semantic Scholar
- DOI
- 10.1111/mice.70010
- Access
- Open Access ✓