SINGAPORE – Artificial intelligence has cemented its place in the public lexicon since ChatGPT’s advent in 2022, but its roots can be traced back decades. Before there were intelligent chatbots that could generate travel itineraries and images within seconds, AI was being used to filter spam e-mails, power Siri on iPhones and flag fraud cases among bank transactions.

The Straits Times explains the technology behind the evolution of AI and the new possibilities it can unlock.

Traditional AI systems analyse large sets of labelled data to make classifications and predict outcomes based on fixed, pre-defined rules.

One of the earliest examples of this is a checkers program developed by British computer scientist Christopher Strachey in 1952, which pitted humans against AI that could evaluate its own position on the board and identify the strongest possible next move.

Early fraud detection tools adopted by banks flagged suspicious transactions based on unusual patterns, such as excessive spending or transactions with an unlikely merchant. Banks also use traditional AI tools to evaluate a borrower’s creditworthiness based on data from past borrowing and other statistical patterns.

In another example, hospitals use traditional AI tools to analyse population electronic health records, medical images and genetic data to identify disease risks for individual patients and predict future health events.

Recommendation engines are also products of traditional AI. Streaming platform Netflix recommends movies and shows that users might like based on data collected, such as each user’s viewing history and the titles watched by other users with similar preferences.

When Apple’s voice-activated virtual assistant Siri was first introduced in 2011, it was trained on a pre-defined set of rules to perform specific basic tasks such as setting alarms, checking the weather or managing calendar events.
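To make the idea of fixed, pre-defined rules concrete, here is a minimal, hypothetical sketch in Python of the kind of rule-based fraud screening described above. The thresholds and merchant list are invented for illustration; real bank systems combine far richer rules with statistical models.

# A hypothetical, simplified rule-based fraud screen.
# Thresholds and the merchant list are invented for illustration.

SPENDING_LIMIT = 5_000                     # flag unusually large single transactions
SUSPICIOUS_MERCHANTS = {"unregistered-overseas-vendor"}

def flag_transaction(amount: float, merchant: str, daily_total: float) -> bool:
    """Return True if a transaction matches any pre-defined rule."""
    if amount > SPENDING_LIMIT:
        return True                        # excessive single spend
    if merchant in SUSPICIOUS_MERCHANTS:
        return True                        # transaction with an unlikely merchant
    if daily_total + amount > 2 * SPENDING_LIMIT:
        return True                        # excessive spending over the day
    return False

print(flag_transaction(8_200, "electronics-store", 300.0))   # True
print(flag_transaction(45.50, "coffee-shop", 120.0))         # False

Every rule here is written out in advance by a human, so the system cannot flag a pattern it was never told about – the key limitation that later generations of AI moved beyond.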
Generative AI uses its understanding of patterns in massive datasets to create new text, images and code in response to prompts in natural language.

Powered by large language models, Gen AI excels in deciphering and responding to such prompts. This ability lets AI chatbots such as OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude hold human-like conversations with users.

These systems have been used to improve the effectiveness and efficiency of customer service chatbots which, in the past, were mostly able to give only fixed responses.

An example of this is Singapore-based digital lender Trust Bank’s customer service chatbot, launched in late 2025. The chatbot draws on the bank’s internal knowledge base, conversation history and limited access to a customer’s portfolio to act on requests such as raising credit limits and replacing a lost card. But more complex scenarios, such as locking an account over suspected fraudulent activity, still trigger a handover to human agents.

Today, nearly half of Trust Bank’s customer queries are handled by the AI chatbot without involving a human agent.

In 2025, the Government and the Singapore Academy of Law launched an AI-powered search engine that accelerates legal research by answering lawyers’ queries asked in natural language. The tool is trained on Singapore’s legal context and supported by data such as judgments, reports, legislation and books.

In education, teachers are also using Gen AI tools to create lesson plans, presentation slides and quizzes. For instance, learning platform Kahoot! has an AI generator that allows educators to create teaching materials based on a topic, an uploaded file or a webpage. Students can also use it to turn their study notes into interactive flashcards or practice tests.

In the creative sector, those without traditional training have been able to create artwork, music and stories using Gen AI tools like Midjourney, Adobe Firefly and Suno.

Despite its acclaim, Gen AI has drawn significant backlash.

Critics have questioned whether the technology, which can generate essays and answers on demand, is hampering students’ learning and critical thinking. Users are also often urged to spend extra time double-checking responses generated by AI, due to its tendency to “hallucinate” and make up facts and figures.

In 2025, the internet was flooded with AI-generated images of users done in the style of Studio Ghibli, a popular Japanese animation studio. The trend drew fresh debate over potential copyright infringement, as AI models had been trained on original content without the permission of artists, writers and musicians.

Billionaire Elon Musk’s AI chatbot Grok also came under fire earlier in 2026, after it began complying with user requests to churn out non-consensual, sexually explicit and violent content that often depicted women and children.

And in March, the family of 36-year-old Jonathan Gavalas filed a lawsuit against Google, claiming that Gemini had encouraged the Florida man to kill himself by fuelling a delusional spiral.

Governments around the world are studying the need for safeguards to curb the harms perpetrated by AI chatbots. Experts have suggested that these could include mandating that chatbot operators create mechanisms for users to flag harmful chatbot responses, and be transparent in their annual reports about how their bots handle sensitive topics such as self-harm.

Unlike earlier AI systems, agentic AI proactively figures out the sequence of steps needed to achieve a goal defined by its user – such as making an online purchase or performing lending checks and risk assessments – with minimal to no human intervention in the process.

The leap forward stems from advances in natural language processing and reasoning, allowing AI to comprehend human language and respond dynamically. And unlike traditional AI models, whose answers are limited to the data they were trained on, agentic AI is able to use tools on a computer as a human user would.
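Here is a minimal, hypothetical sketch in Python of what such an agentic loop might look like. The planner below is a hard-coded stand-in used purely for illustration; a real agent would consult a large language model at each step to decide which tool to invoke next.

# A hypothetical, simplified agentic loop for a flight-booking goal.
# mock_model stands in for the LLM planner a real agent would query.

def mock_model(goal, done):
    """Stand-in planner: pick the next step that has not been taken yet."""
    plan = ["search_flights", "select_cheapest", "enter_passenger_details", "pay"]
    for step in plan:
        if step not in done:
            return step
    return "finish"

TOOLS = {
    "search_flights": lambda: "found 12 flights",
    "select_cheapest": lambda: "selected the $310 fare",
    "enter_passenger_details": lambda: "details filled in",
    "pay": lambda: "payment submitted",
}

def run_agent(goal):
    done = []
    while True:
        step = mock_model(goal, done)       # the model decides the next action
        if step == "finish":
            break
        print(f"{step}: {TOOLS[step]()}")   # the agent executes the chosen tool
        done.append(step)

run_agent("book the cheapest flight to Tokyo next Friday")

The user supplies only the goal; the sequence of steps is chosen by the system itself, which is what separates agentic AI from a chatbot that merely answers one prompt at a time.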
Some early examples include Jules, a coding agent developed by Google that can fix bugs, run tests and build new features autonomously based on its understanding of the user’s project.

In September 2025, professional networking site LinkedIn rolled out its Hiring Assistant AI agent, which helps recruiters search the platform’s database for potential candidates and evaluate talent autonomously, based on the specified hiring criteria. It also creates outreach messages tailored to each candidate’s job history, and can take over pre-interview screening questions such as a candidate’s expected start date, commute from home to the workplace and job search status.

AI agent tool OpenClaw has become the latest frenzy, as it allows users to connect it to AI models of their choice and prompt them, through messaging apps such as WhatsApp and Telegram, to complete tasks autonomously, such as sending e-mails, organising files and booking flight tickets.

This open-source AI tool, created by Austrian software developer Peter Steinberger, inspired droves of people to gather outside Baidu’s headquarters in Beijing in March to brave the wait for help with installation. Similar crowds were seen in Shanghai and Shenzhen.

But the capacity for such systems to draw sensitive data from multiple databases and act independently has also introduced new risks, such as unauthorised payments and personal data theft.

To get ahead of such risks while organisations develop their own agent architectures, Singapore launched the Model AI Governance Framework for Agentic AI in January, giving organisations recommendations on possible guardrails to prevent unintended outcomes.

The Infocomm Media Development Authority (IMDA) also built on this framework to release an advisory on OpenClaw, warning local users and organisations against giving the AI tool unrestricted access to files and applications, or running it on personal or work devices that contain sensitive data.

OpenClaw requires autonomy and broad access to data to be helpful, but this comes with higher risks of unpredictable actions and data leakage, said IMDA.

It added: “Accepting the risks associated with granting OpenClaw broader capabilities should be an intentional decision, and not the result of default configurations that were overlooked.”
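To illustrate the kind of guardrail such frameworks and advisories point towards, here is a minimal, hypothetical sketch in Python in which low-risk agent actions run automatically while sensitive ones require explicit human sign-off. The action names and policy are invented for illustration.

# A hypothetical guardrail: sensitive agent actions need human approval.
# Real deployments would layer on logging, authentication and audits.

LOW_RISK = {"read_statement", "answer_faq"}
NEEDS_APPROVAL = {"raise_credit_limit", "lock_account", "send_payment"}

def execute(action, approved_by_human=False):
    if action in LOW_RISK:
        return f"{action}: executed automatically"
    if action in NEEDS_APPROVAL:
        if approved_by_human:
            return f"{action}: executed after human sign-off"
        return f"{action}: blocked and escalated to a human agent"
    return f"{action}: refused (not on any allowlist)"

print(execute("answer_faq"))
print(execute("lock_account"))                          # escalated
print(execute("lock_account", approved_by_human=True))  # runs with sign-off

The point of such a policy is that broad access is granted deliberately, action by action, rather than left to default configurations – the very risk IMDA’s advisory warns against.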