arXiv Open Access 2023

Optimizing and Fine-tuning Large Language Model for Urban Renewal

Xi Wang, Xianyao Ling, Tom Zhang, Xuecao Li, Shaolan Wang, +3 others

Abstract

This study explores adaptive applications of large language models (LLMs) in urban renewal, aiming to improve their performance and text-generation quality on knowledge question-answering (QA) tasks. Based on ChatGLM, we automatically generate QA datasets from a corpus of urban-renewal scientific literature in a self-instruct manner, and then jointly fine-tune the model using the Prefix and LoRA fine-tuning methods to create an LLM for urban renewal. By guiding the LLM to generate QA pairs automatically from prompt words and given text, datasets in the urban renewal domain can be obtained quickly, providing data support for fine-tuning LLMs. The experimental results show that the proposed joint fine-tuning method significantly improves the model's performance on QA tasks: compared with LoRA fine-tuning alone, it improves the BLEU and ROUGE metrics on the test set by about 5%, and compared with the model before fine-tuning, by about 15%-20%. This study demonstrates the effectiveness and superiority of joint Prefix and LoRA fine-tuning of ChatGLM for urban renewal knowledge QA tasks, and provides a new approach for fine-tuning LLMs on urban renewal-related tasks.
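The abstract names LoRA as one of the two fine-tuning methods but does not spell out its mechanics. As a rough illustration of the LoRA idea only (not the paper's implementation, and with hypothetical toy values), the low-rank update W' = W + (α/r)·B·A over a frozen weight matrix W can be sketched in plain Python:

```python
# Illustrative sketch of the LoRA update rule, not the paper's code.
# The pretrained weight W stays frozen; only the low-rank factors
# B (d x r) and A (r x d) are trained, and their product, scaled by
# alpha / r, is added to W at inference time.

def matmul(X, Y):
    """Naive matrix multiply for small illustrative matrices."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_update(W, A, B, alpha, r):
    """Return W' = W + (alpha / r) * (B @ A)."""
    scale = alpha / r
    BA = matmul(B, A)
    return [[w + scale * ba for w, ba in zip(w_row, ba_row)]
            for w_row, ba_row in zip(W, BA)]

# Toy example with d = 2 and rank r = 1 (hypothetical values).
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen pretrained weight
B = [[1.0], [2.0]]             # trainable low-rank factors
A = [[0.5, 0.5]]
W_prime = lora_update(W, A, B, alpha=2.0, r=1)
print(W_prime)  # [[2.0, 1.0], [2.0, 3.0]]
```

Because r is much smaller than d in practice, only a tiny fraction of the model's parameters are trained, which is what makes LoRA (and its combination with prefix tuning, as in this study) attractive for adapting a large model such as ChatGLM to a narrow domain.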


Authors (8)

Xi Wang
Xianyao Ling
Tom Zhang
Xuecao Li
Shaolan Wang
Zhixing Li
Liang Zhang
Peng Gong

Citation Format

Wang, X., Ling, X., Zhang, T., Li, X., Wang, S., Li, Z. et al. (2023). Optimizing and Fine-tuning Large Language Model for Urban Renewal. https://arxiv.org/abs/2311.15490

Journal Information
Publication Year
2023
Language
en
Source Database
arXiv
Access
Open Access ✓