DOAJ Open Access 2025

Attention is All Large Language Model Need

Liu Yuxin

Abstract

With the advent of the Transformer, the attention mechanism has been applied to Large Language Models (LLMs), which have evolved from the initial single-modal large models to today's multi-modal large models. This has greatly propelled the development of Artificial Intelligence (AI) and ushered humanity into the era of large models. Single-modal large models can be broadly categorized into three types based on their application domains: text LLMs for Natural Language Processing (NLP), image LLMs for Computer Vision (CV), and audio LLMs for speech interaction. Multi-modal large models, on the other hand, can leverage multiple data sources simultaneously to optimize the model. This article also introduces the training process of the GPT series. Large models have had a significant impact on industry and society, while also bringing with them a number of unresolved problems. The purpose of this article is to help researchers understand the various forms of LLMs, as well as their development, pre-training architectures, difficulties, and future objectives.
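The abstract centers on the attention mechanism underlying the Transformer. As an illustrative sketch (not taken from the article itself), the following shows scaled dot-product attention, softmax(QK&#8239;ᵀ&#8239;/&#8239;√d_k)·V, in NumPy using toy random matrices:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # numerically stable row-wise softmax
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output row is a weighted sum of value rows

# toy example: 3 tokens, head dimension d_k = 4
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4)
```

In a full Transformer this operation is applied per attention head, with Q, K, and V produced by learned linear projections of the token embeddings; the sketch above omits those projections for brevity.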

Topics & Keywords

Author (1)


Liu Yuxin

Citation Format

Yuxin, L. (2025). Attention is All Large Language Model Need. https://doi.org/10.1051/itmconf/20257302025

Quick Access

View at Source doi.org/10.1051/itmconf/20257302025
Journal Information
Publication Year
2025
Database Source
DOAJ
DOI
10.1051/itmconf/20257302025
Access
Open Access ✓