“Hey AI, what is …?”
You might have started a conversation like this with a chatbot before. Having a machine answer your questions instantly sounds impressive, but have you ever noticed that sometimes the response doesn’t quite address what you asked?
On 26 June, Brian Har, Better.sg’s Deputy CTO, conducted a chatbot masterclass, giving participants valuable insights into the limitations of LLM-powered chatbots and how we can make them better.
Behind the scenes, these chatbots are powered by LLMs, which first convert your question into numbers in a process called tokenisation, then use those numbers to generate a coherent response. Although they are trained on huge amounts of text (think of them as having knowledge equivalent to several PhDs), they perform best when questions are broad and general. When it comes to more specific or niche topics, they often struggle to provide accurate answers.
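To make tokenisation concrete, here is a toy sketch of the round trip from text to token IDs and back. Real LLMs use subword schemes such as byte-pair encoding rather than whole words, and the vocabulary below is made up for illustration, but the key point is the same: the model never sees raw text, only numbers.

```python
def build_vocab(corpus: str) -> dict[str, int]:
    """Assign a unique ID to each word seen in the corpus."""
    vocab: dict[str, int] = {}
    for word in corpus.lower().split():
        vocab.setdefault(word, len(vocab))
    return vocab

def encode(text: str, vocab: dict[str, int]) -> list[int]:
    """Convert text into the numeric token IDs a model actually processes."""
    return [vocab[w] for w in text.lower().split()]

def decode(ids: list[int], vocab: dict[str, int]) -> str:
    """Map token IDs back to words."""
    inverse = {i: w for w, i in vocab.items()}
    return " ".join(inverse[i] for i in ids)

vocab = build_vocab("hey ai what is better.sg")
ids = encode("what is better.sg", vocab)
print(ids)                  # [2, 3, 4]
print(decode(ids, vocab))   # what is better.sg
```

Everything downstream — the model’s training, its context window, even its pricing — is counted in these tokens rather than in words or characters.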
Brian highlighted the importance of clearly defining your problem space. By setting boundaries and providing targeted information sources, you can help guide the chatbot to give more precise answers. For example, if someone asks, “What is Better.sg?”, the chatbot can refer to our website (which has details about our team, projects and articles), and our Notion page (which lists volunteer opportunities).
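One simple way to set those boundaries is in the system prompt itself: name the sources the chatbot may draw on and tell it what to do when they don’t cover the question. The sketch below assembles such a prompt from the two Better.sg sources mentioned above; the exact wording is an assumption, not the prompt used in the masterclass.

```python
def build_system_prompt(sources: dict[str, str]) -> str:
    """Compose a system prompt that bounds the chatbot's problem space."""
    source_list = "\n".join(f"- {name}: {desc}" for name, desc in sources.items())
    return (
        "You answer questions about Better.sg only.\n"
        "Base every answer on these sources:\n"
        f"{source_list}\n"
        "If the sources do not cover the question, say you don't know."
    )

SOURCES = {
    "website": "details about our team, projects and articles",
    "Notion page": "volunteer opportunities",
}
print(build_system_prompt(SOURCES))
```

Explicitly telling the model what to do outside its boundaries is as important as listing the sources: it gives the model a safe answer instead of an invitation to guess.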
It is also important to consider the maximum amount of text an LLM can handle at once, known as its context window. If your conversation is too long, the model may truncate earlier parts of it. This creates an awkward problem: we tend to put the most important information at the beginning of a conversation, which is exactly the part that gets cut first.
He then demonstrated how to build a simple workflow using the open-source platform n8n.io, showing how to add various nodes and adjust the prompt to shape the chatbot’s responses more effectively.
LLMs are here to stay. As experts in our respective fields, we should look for ways to use these tools to enhance our work. One practical approach is to augment the generative capability of LLMs with our specialised knowledge, allowing us to improve the usefulness of the chatbot’s answers.
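In practice, “augmenting” often means retrieving the most relevant piece of your own knowledge and placing it in the prompt before the question. The bare-bones sketch below scores notes by word overlap with the question; production systems typically use embeddings and vector search instead, and the example notes are made up.

```python
def words(text: str) -> set[str]:
    """Lowercase and strip trailing punctuation for crude matching."""
    return {w.strip("?.!,") for w in text.lower().split()}

def score(question: str, doc: str) -> int:
    """Count words shared between the question and a document."""
    return len(words(question) & words(doc))

def augment_prompt(question: str, docs: list[str]) -> str:
    """Prepend the best-matching note as context for the LLM."""
    best = max(docs, key=lambda d: score(question, d))
    return f"Context: {best}\n\nQuestion: {question}"

notes = [
    "Better.sg is a community of tech-for-good volunteers.",
    "The masterclass workflow connected a chat trigger to an LLM node.",
]
print(augment_prompt("What is Better.sg?", notes))
```

Even this crude overlap scoring illustrates the principle: the model’s general fluency does the writing, while your specialised notes supply the facts.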
As Brian explained, “This is a journey of experimentation. This technology is so new; we are all still learning about it. The more people are invested in learning and figuring out what these things do, the better for everybody.” He suggested starting with one model and iterating as you go! You might discover that certain models work better for your specific needs.
Missed this session? Follow us on social media to stay updated on future opportunities! A special thank you to Brian for making this masterclass possible.