Google's new generation of Tensor AI chips is actually two chips, one for inference and one for training.

Google is packing large amounts of static random-access memory (SRAM) into a dedicated chip for running artificial intelligence models, echoing Nvidia's approach.

Google's newest TPUs are faster and cheaper than their predecessors. But the company is still embracing Nvidia in its cloud — for now.