Chatbots are here to stay, but what’s the hype all about?


“ChatGPT is changing the world!”, “With ChatGPT vs Google Bard the battle of AI has begun”, “Legal professionals are being replaced by AI” … those headlines have made waves in recent weeks, as many of you may have witnessed yourselves.

But what is the hype all about? And what relevance and impact will these AI (artificial intelligence) chatbots really have on the future of legal professionals?

The development of AI language models, the foundation of AI chatbots, has been ongoing for many years and has long been a widely discussed field among AI researchers and practitioners. However, since the launch of ChatGPT by OpenAI, AI language models have been taken to the global stage, because the chat interface suddenly makes them accessible to anyone without sophisticated skills in training and using AI models. As such, they are now being discussed worldwide by other fields of academic research, a wide range of business professionals, and most strikingly, by regular citizens and even high school students.

Many questions have been raised about the relevance and impact of these new AI chatbots on the future of daily and professional life. Many see great advantages, some have doubts, others are wary and see many risks and threats. In any case, AI chatbots have likely come to stay, and as such, it is important to evaluate them.

The AI chatbots race

Generally, a chatbot is a text-based dialog system that acts as an interface to a technical service and automatically processes text or audio input. An AI-based chatbot is, as the name suggests, equipped with artificial intelligence, typically in the form of machine learning algorithms. AI chatbots expand their knowledge base by learning from past interactions. They are often used as digital assistants, well-known examples being Siri, Amazon Alexa and Google Assistant. AI chatbots are also frequently integrated into websites and messenger apps as automated customer support that generates simple answers. Overall, AI chatbots offer an effective way to automate and improve various processes.

Specifically, ChatGPT (the GPT stands for Generative Pre-trained Transformer) is an artificial intelligence language model with a chatbot interface, built on a transformer neural network architecture. ChatGPT was developed by the company OpenAI and has been trained on a vast amount of data. Thanks to the chatbot interface and the technology behind it, it is able to generate responses that simulate human-like conversation in response to questions or requests.

Besides ChatGPT, there are many more generative AI models. Generative AI describes any type of artificial intelligence that can be used to create new text, images, video, audio, code or synthetic data. Google’s Bard is currently the second most well-known AI chatbot. There are also Microsoft’s Bing Chat, Jasper, YouChat, Socratic and many more. Given the big hype around ChatGPT, Microsoft and Google are racing to integrate AI chatbot functionalities into their product suites. Microsoft plans to add ChatGPT capabilities to its Bing search engine and Edge browser, while Bing Chat is already integrated into Skype and Teams. At the same time, Google plans to merge Bard into products like Android, Chrome and Gmail.

OpenAI’s ChatGPT vs. Google’s Bard

Currently, the most discussed AI chatbots are OpenAI’s ChatGPT and Google’s Bard. As such, understanding the similarities and differences between them may be useful.

Firstly, both AI chatbots are based on large language models that have been trained on large data sets, and both provide natural language responses via a chatbot interface.

Secondly, the different language models that the AI chatbots are built on should be considered. Google’s Bard uses Google’s internal LaMDA (Language Model for Dialogue Applications), while ChatGPT works with GPT-3.5, a refined version of the older GPT-3 model. In general, there are several leading language models currently used in academia and business, an analysis of which would go beyond the scope of this article.
Thirdly, the data sources that the AI chatbots have been trained on also differ significantly. Bard can answer questions based on more recent data, since it has access to the latest information on the World Wide Web through Google Search. ChatGPT, on the other hand, can only draw on its training data, which extends to 2021 and earlier, leaving it limited with regard to the latest research and information.

Fourthly, the two AI chatbots diverge in their chatbot functionality and use cases. While Bard was introduced exclusively in connection with Google Search and accordingly primarily provides answers to search queries, ChatGPT can also generate its own detailed texts and content, such as functional programming code.

Fifthly, the accessibility, and as such the pricing, of the AI chatbots varies greatly. Currently, Google’s Bard can be used for free, but only by a selected group of early beta testers. As such, access to Google’s Bard, and consequently the ability to evaluate its use cases, reliability and quality, is currently very limited. In contrast, ChatGPT is accessible to anyone and free to use, subject to certain limitations. First, the length of each request is limited: currently about 2,048 tokens, which corresponds to roughly 1,500 English words, since a token equals about three quarters of a word on average. In addition, the number of requests a user can make in a certain amount of time is capped, to ensure that the servers are not overloaded. Second, the free version of ChatGPT may not always be accessible, because when the ChatGPT website reaches its capacity limit, free users are moved to a waiting list. Paid users of the ChatGPT Plus version can skip the waiting list.
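For readers who want a feel for what such a token limit means in practice, the conversion can be sketched in a few lines of Python. The 0.75 words-per-token figure is a widely cited rule of thumb, not an exact value; actual counts vary by tokenizer, language and text style.

```python
# Rough sketch: convert a token budget into an approximate English word count.
# Assumption: ~0.75 words per token (a common rule of thumb, not an exact ratio).
def approx_words(tokens: int) -> int:
    return round(tokens * 0.75)

# A 2,048-token request limit works out to roughly 1,500 words:
print(approx_words(2048))  # prints 1536
```

Legal texts with many citations and abbreviations tend to use more tokens per word, so in practice the usable word count may be lower.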

Finally, let us ask ChatGPT itself the question of all questions:


LECARE:

Which AI chatbot is better? Google’s Bard or OpenAI’s ChatGPT?

 

ChatGPT:

As an AI language model, I cannot provide an objective answer to which chatbot is better, as it ultimately depends on individual needs and preferences. However, I can provide some information about both Google’s Bard and ChatGPT to help you make an informed decision.

 

Google’s Bard is a chatbot specifically designed to generate poetry in response to user prompts. It is based on machine learning algorithms and trained on a large corpus of poetry, allowing it to create poems that are often sophisticated and intricate. Bard can generate poems in a variety of styles, including sonnets, haikus, and free verse.

 

On the other hand, ChatGPT is a more general-purpose language model trained to understand and generate human-like text across a wide range of topics. It can provide answers to questions, engage in conversations on a wide range of topics, and even write short stories or essays.

 

Ultimately, the choice between Bard and ChatGPT depends on your specific needs and what you are looking for in a chatbot. If you are interested in generating poetry, Bard may be the better option. If you are looking for a more general-purpose chatbot that can engage in a wide range of conversations, provide information, and generate text on a variety of topics, ChatGPT may be the better choice.

ChatGPT is definitely chatty, but this answer is not correct. Google’s Bard may be named after a poet, but it can do far more than generate poetry. This example perfectly illustrates the core of the discussion around ChatGPT: on the one hand, the sophistication of the natural language response and linguistic style is impressive; on the other hand, it is daunting, as it may mislead readers into believing that the content is factual, trustworthy and reliable.

In order to properly evaluate the opportunities and threats of specifically using ChatGPT, it is important to better understand both its strengths and weaknesses, and its practical use cases.

ChatGPT: the knight in shining armor?

As already detailed, the most impressive attributes of ChatGPT are its accessibility, ease of use and the sophistication of its natural language responses, especially when compared to previously available AI solutions, which provided less natural, more robotic responses and were much more difficult to access and use.

Use cases of ChatGPT include research as well as creating drafts, outlines or summaries for a wide range of topics, which provides assistance when starting a project, looking for inspiration or overcoming writer’s block. In any case, the responses of ChatGPT should always be checked and, where necessary, corrected. Instead of viewing ChatGPT as a flawless silver bullet, it is helpful to regard it as an assistance system for daily work.

Finally, on a funny note to bring some amusement to dry topics and tasks, ChatGPT can be asked to rewrite answers in the style and tonality of famous people (whose writings are stored in the data that ChatGPT has been trained on). A fun gimmick for sure, but not necessarily a convincing argument with respect to content quality for many critics.

ChatGPT: can it be trusted?

As shown in the example above, the highly sophisticated linguistic style of ChatGPT’s writing may lead to misconceptions about its content. At first sight, the generated responses may seem coherent and professional, but on closer examination they can contain unverified and incorrect information. In fact, the quality of ChatGPT responses may be called into question in terms of truthfulness, trustworthiness and factual reliability.

First, ChatGPT can currently only access an existing, limited body of data and generates its answers based on the patterns it has learned during training. Within a conversation, ChatGPT also takes previous questions and answers into account, which allows it to generate new and appropriate responses to follow-up requests. However, ChatGPT does not possess text comprehension abilities or the capacity for reflection. Nor does it have human-like intelligence or consciousness. It can only create its responses according to the information it receives and the model on which it is based. It cannot draw on actual experience or a deep understanding of concepts beyond the trained data. Furthermore, it lacks the competence to critically evaluate the literature or content it uses. Accordingly, the true intelligence of the bot is limited.

Second, the possibility of plagiarism cannot be ruled out, because ChatGPT may generate responses phrased very similarly to the original data sources it draws on. Moreover, there have been many concerns about the misuse of ChatGPT with respect to potential breaches of intellectual property rights. High school and university students, among others, have discovered ChatGPT as a practical aid for their homework and assignments. As a result, educators and editors are turning to tools like GPTZero that try to identify whether texts have been written by or with the help of ChatGPT.

Third, the opacity of the data sources that ChatGPT is trained on, to which the public has no access, makes it impossible to verify the reliability of ChatGPT responses. Therefore, especially from a legal perspective, the answers may be considered of little use, as the sources cannot be verified reliably unless a thorough independent check is performed. This may end up requiring more time and resources than working without ChatGPT altogether.

In summary, several limitations must be considered when using ChatGPT. Can ChatGPT be fully trusted? Probably not yet, because ChatGPT is still in the development phase and its answers are far from being mature or error-free. In order to be able to judge the quality of ChatGPT responses independently, profound knowledge and domain expertise is required. Consequently, ChatGPT will not be able to replace the expertise of professional domain experts overnight.

Will AI chatbots replace legal professionals?

With regard to domain expertise, what is the relevance and impact of AI chatbots for legal professionals?

AI chatbots may be a useful tool for legal professionals, offering benefits such as quick, easy access to information and increased efficiency. AI chatbots can rapidly access a range of sources to provide information on specific legal issues. They can summarize judgments or legal references from commentaries, or draft simple contracts and contract clauses. Further, emails to clients or the opposing party may be drafted more quickly with the help of AI chatbots. The value of AI chatbots for legal professionals clearly lies in easy assistance with simple tasks that currently take up a lot of their valuable time.

However, for the reasons described above, the level of sophistication of this kind of assistance is probably not yet acceptable to legal professionals, given their typically very high expectations of accuracy, reliability, truthfulness and completeness. Further, AI chatbots may have difficulty understanding context and linguistic nuances, leading to inaccurate or misleading responses. Finally, AI chatbots may lack the specialized legal domain knowledge of a qualified lawyer and thus may not be able to handle specific legal questions or complex cases at all.

Another consideration when using AI chatbots in the context of legal work is data privacy, confidentiality and liability. Lawyers must take care to ensure that sensitive legal information is kept private and confidential when using an AI chatbot. Trust concerns are not unfounded; even the developers of AI chatbots issue disclaimers that no personal or confidential data should be entered into their chatbots. The obvious reason is that confidential data may automatically be absorbed by the artificial intelligence and used for future responses. Finally, under current legislation, an AI chatbot is not considered a legal entity or person. As such, users of chatbots will for now continue to be solely liable for the content they use and provide, regardless of whether that content originally came from an AI chatbot response. Thus, verifying the content will also remain a necessity for legal professionals from a liability point of view.

Overall, legal professionals should not dismiss AI chatbots as flawed, imperfect and unreliable tools or as a threat to their profession. Instead, they should see the emergence of AI chatbots as an opportunity and view them as a gateway to shaping their future way of working. Chatbots can assist with tasks that legal professionals would rather delegate anyway, freeing them to focus on bespoke legal challenges and to manage their increased workload.

Generally, it is not advisable for anyone to take AI chatbot responses in the legal domain at face value. Instead, users should always evaluate the responses given by AI chatbots against profound legal knowledge before using them, just as they should with legal information provided by human beings who are not legal professionals. As such, legal professionals can be relieved, for now: AI chatbots will not replace human judgment or legal expertise, and legal professionals will not be replaced as the guardians of that judgment and expertise any time soon.

As a final note, many people rely on Google, Wikipedia and legal databases in their daily private and working lives, implicitly trusting that the content they contain is largely accurate. Given the expected progress in large language models over the coming decades, one can imagine that AI chatbot responses may one day receive that level of trust. The remaining question is in which context. Legal professionals can actively shape that context by reimagining their roles so that users continue to seek trusted advice from legal professionals, especially for bespoke matters, while AI chatbots provide assistance on basic legal matters, also to legal professionals themselves.

 

z.andreae@lecare.com

a.ovalioglu@lecare.com
