Baichuan 2: Open Large-scale Language Models
Abstract
Large language models (LLMs) have demonstrated remarkable performance on a variety of natural language tasks based on just a few examples of natural language instructions, reducing the need for extensive feature engineering. However, most powerful LLMs are closed-source or limited in their capability for languages other than English. In this technical report, we present Baichuan 2, a series of large-scale multilingual language models containing 7 billion and 13 billion parameters, trained from scratch on 2.6 trillion tokens. Baichuan 2 matches or outperforms other open-source models of similar size on public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan 2 excels in vertical domains such as medicine and law. We will release all pre-training model checkpoints to benefit the research community in better understanding the training dynamics of Baichuan 2.
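Since the abstract announces public checkpoint releases, a minimal sketch of loading one follows, assuming the models are published on the Hugging Face Hub under the baichuan-inc organization (e.g. baichuan-inc/Baichuan2-7B-Base); the repo id and generation settings here are illustrative, not taken from the paper.

```python
# Minimal sketch (assumption): load a released Baichuan 2 checkpoint with
# Hugging Face Transformers. The repo id below is the commonly listed hub
# name; verify it against the official release before relying on it.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "baichuan-inc/Baichuan2-7B-Base"  # 7B base model; a 13B variant also exists

# Baichuan 2 ships custom modeling code on the Hub, so trust_remote_code=True
# is needed for both the tokenizer and the model.
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

# Greedy generation from a short prompt as a smoke test.
inputs = tokenizer("Large language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```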
Topics & Keywords
Authors (55)
Ai Ming Yang
Bin Xiao
Bingning Wang
Borong Zhang
Ce Bian
Chao Yin
Chenxu Lv
Da Pan
Dian Wang
Dong Yan
Fan Yang
Fei Deng
Feng Wang
Feng Liu
Guangwei Ai
Guosheng Dong
Hai Zhao
Hang Xu
Hao-Lun Sun
Hongda Zhang
Hui Liu
Jiaming Ji
Jian Xie
Juntao Dai
Kuncheng Fang
Lei Su
Liang Song
Lifeng Liu
Liyun Ru
Luyao Ma
Mang Wang
Mickel Liu
Mingan Lin
Nuolan Nie
Pei Guo
Ruiyang Sun
Tao Zhang
Tianpeng Li
Tianyu Li
Wei Cheng
Weipeng Chen
Xiangrong Zeng
Xiaochuan Wang
Xiaoxi Chen
Xin Men
Xing Yu
Xuehai Pan
Yan-Bin Shen
Yiding Wang
Yiyun Li
Youxin Jiang
Yuchen Gao
Yupeng Zhang
Zenan Zhou
Zhiying Wu
Quick Access
- Year Published
- 2023
- Language
- en
- Total Citations
- 962
- Database Source
- Semantic Scholar
- DOI
- 10.48550/arXiv.2309.10305
- Access
- Open Access ✓