In the wake of ChatGPT's popularity, businesses are eager to enhance their chatbots with large language models (LLMs). At the AI Summit London, Alex Choi, an AI chatbot specialist from Vodafone, shared his expertise on the advantages and pitfalls of integrating AI models into customer-facing chatbots.
Choi began by questioning the necessity of language models for all businesses, cautioning that not every company requires a sophisticated LLM. "Do you really need the latest GPT-4? Probably not," he stated. For businesses primarily dealing with straightforward queries like password resets, advanced models may be unnecessary.
For those looking to leverage LLMs, Choi emphasized key considerations:
Data Quality
Personalization
Cost-Effectiveness
He suggested that businesses might consider using open-source models hosted internally, though this approach could pose security risks. Instead, he recommended Retrieval Augmented Generation (RAG) as a practical solution. RAG allows an AI model to access information from connected sources, enhancing chatbot intelligence.
“Companies like ours have loads of help and support articles,” Choi explained. By storing these articles in a vector database, chatbots can retrieve relevant information when users ask questions. This context, combined with the user's query, enables the LLM to generate personalized responses.
Choi highlighted the benefits of RAG, particularly for creating a fluid conversational experience. Personalizing information from company FAQs and tailoring it to specific customer needs can significantly improve user satisfaction.
He also offered practical advice for implementing chatbots:
Limit User Input: Restricting input to a few sentences prevents users from overwhelming the system and reduces compute costs.
Ensure Data Quality: The quality of data fed into the model is crucial; poor input will yield poor output.
Rigorous Testing: Regular and thorough testing is essential to ensure bots are resilient to attempts to break them. Automated testing can help maintain performance consistency.
Choi warned that any generative AI chatbot launched today would likely face attempts to break it. Therefore, businesses must cover all test cases and frequently update their testing processes.
Regarding cost, Choi noted that expenses vary widely from proof of concept to full deployment, with no one-size-fits-all formula.
Finally, Choi stressed the importance of aligning chatbot communication with the brand's identity to avoid sounding "robotic." For instance, Vodafone's chatbot on its Voxi platform, which targets younger users, adopts a more "edgy" response style.
“A lot of large language model chatbots that I've seen out there simply sound like ChatGPT,” he observed. “The last thing we want is for the chatbots to sound like a restricted ChatGPT.”
By considering these factors, businesses can enhance their chatbots with LLMs, providing better customer experiences while maintaining cost efficiency and security.