Revolutionizing Cryptocurrency Operations: The Role of Domain-Specific Large Language Models (LLMs)
© 2024 by IJCTT Journal
Volume-72 Issue-6
Year of Publication : 2024
Authors : Hao Qin
DOI : 10.14445/22312803/IJCTT-V72I6P114
How to Cite?
Hao Qin, "Revolutionizing Cryptocurrency Operations: The Role of Domain-Specific Large Language Models (LLMs)," International Journal of Computer Trends and Technology, vol. 72, no. 6, pp. 101-113, 2024. Crossref, https://doi.org/10.14445/22312803/IJCTT-V72I6P114
Abstract
The rapid dynamics of cryptocurrency markets and the technical complexity of blockchain technology present both challenges and opportunities for applying Large Language Models (LLMs) in this area. In the present research, we examine the process of fine-tuning and applying LLMs in the cryptocurrency sector to meet its specific needs. Through a comprehensive analysis of dataset rationale and model preparation, together with multiple practical applications in cryptocurrency workflows, we demonstrate that LLMs contribute significantly to cryptocurrency analytics, fraud identification, smart contract processing, and customer interaction. The paper also addresses sector-specific issues such as security, privacy, and regulation, and offers recommendations for further research and practical implementation.
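The core technique the abstract describes is domain-specific fine-tuning of a pretrained language model on cryptocurrency text, for example for fraud identification. The snippet below is a minimal illustrative sketch of that kind of workflow using the Hugging Face transformers and datasets libraries; the base model (bert-base-uncased), the toy labeled examples, the binary fraud/legitimate labels, and the output path are assumptions for illustration only, not the dataset, model, or configuration used in the paper.

```python
# Minimal sketch of domain-specific fine-tuning for a cryptocurrency task
# (binary fraud/legitimate classification of transaction descriptions).
# Base model, labels, toy examples, and output path are illustrative
# assumptions, not the paper's actual setup.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

BASE_MODEL = "bert-base-uncased"  # assumed base encoder

# Toy labeled corpus standing in for a curated cryptocurrency dataset.
train_data = Dataset.from_dict({
    "text": [
        "Wallet drained after signing an unverified smart contract approval.",
        "Routine BTC transfer between two exchange accounts.",
        "Airdrop site requests seed phrase to release locked tokens.",
        "Scheduled ETH staking reward payout to a hardware wallet.",
    ],
    "label": [1, 0, 1, 0],  # 1 = suspected fraud, 0 = legitimate
})

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)

def tokenize(batch):
    # Convert raw text into token IDs and attention masks.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=64)

train_data = train_data.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    BASE_MODEL, num_labels=2)

args = TrainingArguments(
    output_dir="crypto-fraud-model",   # hypothetical output directory
    num_train_epochs=3,
    per_device_train_batch_size=2,
    logging_steps=1,
)

# Fine-tune the pretrained encoder on the domain-specific examples.
Trainer(model=model, args=args, train_dataset=train_data).train()
```

In practice, a workflow like the one the paper describes would replace the toy corpus with a large curated cryptocurrency dataset and extend the same pattern to the other tasks mentioned in the abstract, such as smart contract processing and customer interaction.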
Keywords
Artificial Intelligence, Computer science and engineering, Data and information systems, Data and web mining, Scientific and engineering computing.