Construction of Domain-specified Japanese Large Language Model for Finance through Continual Pre-training

Masanori Hirano, Kentaro Imajo

16th IIAI International Congress on Advanced Applied Informatics, pp. 273-279, July 6, 2024


Conference

15th International Conference on Smart Computing and Artificial Intelligence (SCAI 2024) in 16th IIAI International Congress on Advanced Applied Informatics (IIAI AAI 2024)

Abstract

Large language models (LLMs) are now widely used in various fields, including finance. However, no Japanese financial-specific LLM has been proposed yet. Hence, this study aims to construct a Japanese financial-specific LLM through continual pre-training. Before tuning, we constructed Japanese financial-focused datasets for continual pre-training. As the base model, we employed a Japanese LLM that achieved state-of-the-art performance on Japanese financial benchmarks among 10-billion-parameter-class models. After continual pre-training using the datasets and the base model, the tuned model outperformed the original model on the Japanese financial benchmarks. Moreover, a comparison of outputs reveals that the tuned model's answers tend to be better than the original model's in terms of both quality and length. These findings indicate that domain-specific continual pre-training is also effective for LLMs. The tuned model is publicly available on Hugging Face.
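
For readers unfamiliar with the procedure named in the title, the following is a minimal sketch of continual pre-training with the Hugging Face transformers Trainer. It is not the paper's actual training code: the model identifier, corpus path, sequence length, and hyperparameters are illustrative placeholders only.

# Minimal sketch of domain-specific continual pre-training (placeholders, not the paper's setup).
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model_id = "your-org/japanese-base-llm"  # placeholder: a 10B-class Japanese base model
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(base_model_id)

# Japanese financial-domain corpus as plain text files (placeholder path).
raw = load_dataset("text", data_files={"train": "ja_finance_corpus/*.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

# Causal language-modeling objective (mlm=False), i.e., the same objective as ordinary pre-training.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="ckpt-ja-fin",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=5e-6,  # a small learning rate to limit forgetting of general knowledge
    num_train_epochs=1,
    bf16=True,
    logging_steps=50,
)

Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator).train()

After training, the checkpoint in output_dir can be evaluated on the Japanese financial benchmarks and pushed to Hugging Face, which mirrors the overall workflow described in the abstract.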

Keywords

large language model; continual pre-training; domain-specific tuning; Japanese; finance


Paper

arXiv:2404.10555 (doi.org/10.48550/arXiv.2404.10555), ssrn.com/abstract=4796245 (doi.org/10.2139/ssrn.4796245)

doi

10.1109/IIAI-AAI63651.2024.00059


bibtex

@inproceedings{Hirano2024-iiai,
  title={{Construction of Domain-specified Japanese Large Language Model for Finance through Continual Pre-training}},
  author={Masanori Hirano and Kentaro Imajo},
  booktitle={16th IIAI International Congress on Advanced Applied Informatics},
  isbn={979-8-3503-7790-3},
  pages={273--279},
  publisher={IEEE},
  doi={10.1109/IIAI-AAI63651.2024.00059},
  archivePrefix={arXiv},
  arxivId={2404.10555},
  year={2024}
}