ISSN: 2229-371X


Review Article Open Access

Solving Bias Issues in Large Language Models through SDRT

Abstract

Since the advent of the transformer architecture and recent advancements in Large Language Models (LLMs), the field has taken the world by storm. However, LLMs such as GPT-3, GPT-4, and the various open-source models come with their own set of challenges. The development of Natural Language Processing (NLP) with transformers commenced in 2017, initiated by Google and Facebook. Since then, large language models have emerged as formidable tools in both natural language and artificial intelligence research. These models can learn and predict, enabling them to generate coherent and contextually relevant text for a diverse array of applications. Large language models have also made a significant impact on various industries, including healthcare, finance, customer service, and content generation. They have the potential to automate tasks, improve language understanding, and enhance user experiences when deployed effectively. Along with these benefits, however, come major risks and challenges, including those that arise during pre-training and fine-tuning, such as bias. To address these challenges, we propose applying SDRT (Segmented Discourse Representation Theory) to make the models more conversational and thereby overcome some of the toughest obstacles.

Aarush1*, Chandhu2
