By incorporating generative AI, robots are evolving from scripted machines to adaptive systems that interpret context, learn from demonstrations, and adjust their behaviors in real time. Advances in large language models and related technologies are helping robots deliver consistent, personalized outcomes at scale. Autonomous vehicles and humanoid factory assistants have shown that robots can handle complex instructions and collaborate with people in physical environments. But to successfully deploy gen-AI-powered robots, companies must choose use cases tied to real labor constraints, design interactions that feel natural, position robots as partners to—rather than replacements for—employees, match robots’ capabilities to task variability, and define success metrics. Privacy, transparency, and safety must also remain priorities as robots gather data and influence decisions.
They converse, physically interact, and learn in real time.

by Jochen Wirtz
If you’ve had the chance to ride in a Waymo, you’ve likely emerged from the vehicle amazed by its abilities. Since Alphabet launched the project in 2009, Waymo has developed a fleet of 2,500 driverless robotaxis now on the road in San Francisco, Miami, Phoenix, and other cities, where they’ve completed more than 20 million trips. The vehicles do more than zip passengers around at up to 65 miles an hour. They can respond to verbal instructions (“Play some ’80s rap on Spotify”) or answer questions (“What time is the Giants game?”) while switching lanes to avoid a double-parked delivery van. Early customer service data and comments on the Waymo One app show that riders are thrilled by the experience.