How Language Models Are Transforming Information Access
Before search engines became popular, people accessed information through conversations with librarians or subject experts. Search engines replaced this interactive model with keyword queries that return a list of results. Recently, AI-based information access systems such as Microsoft’s Bing Chat (built on OpenAI’s ChatGPT technology), Google’s Bard, and Meta’s LLaMA have emerged. These systems accept queries phrased as full sentences or even paragraphs and generate personalized natural-language responses, combining customized answers with access to the vast knowledge available on the internet.
However, these systems are built on large language models that have real limitations. Although the models work effectively and generate personalized responses, they often parrot patterns from their training data without genuine understanding of the query. They are also known to make up answers, or “hallucinate,” which can lead to incorrect responses. The lack of transparency in these systems means that users cannot easily validate the responses, and creators of original content receive neither attribution nor compensation. The absence of cited sources also takes away users’ opportunity to explore alternatives, learn, and make serendipitous discoveries.
Therefore, while large language models represent a significant advance in information access, they are constrained by the way they learn and construct responses. Their answers may be incorrect, and users cannot easily verify them. Furthermore, the lack of transparency is unfair to the creators of original content, and it removes opportunities for people to learn and to make discoveries by chance.