Google has just unveiled one of its newest and most impressive hardware innovations. Called the Tensor Processing Unit (TPU), it is a custom-built ASIC designed specifically for machine learning. Built for TensorFlow, Google's open-source machine learning library, it promises performance far beyond that of current chips. The TPU is already in use at Google, and represents roughly a seven-year leap forward in ASIC chip technology.
We’ve been running TPUs inside our data centers for more than a year, and have found them to deliver an order of magnitude better-optimized performance per watt for machine learning. This is roughly equivalent to fast-forwarding technology about seven years into the future (three generations of Moore’s Law). – Google
Because it is tailored to machine learning applications, the TPU requires fewer transistors per operation. This allows more operations per second to be squeezed into the silicon, and more sophisticated and powerful machine learning models to be used. Those models can then be applied more quickly, in turn giving users more intelligent results more rapidly.
Because of this, we can squeeze more operations per second into the silicon, use more sophisticated and powerful machine learning models and apply these models more quickly, so users get more intelligent results more rapidly. A board with a TPU fits into a hard disk drive slot in our data center racks. – Google
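The article does not say how the TPU gets away with fewer transistors per operation, but one widely used trick in machine learning accelerators is reduced-precision arithmetic: storing and multiplying weights as small integers instead of 32-bit floats. The NumPy sketch below illustrates that idea in software. It is only an illustration of the general technique, not a description of the TPU's actual internals.

```python
import numpy as np

# Illustration only: reduced-precision (quantized) arithmetic is one common
# way accelerators pack more operations into the same silicon. This is NOT
# a description of the TPU's real design.

rng = np.random.default_rng(0)
weights_f32 = rng.standard_normal((256, 256)).astype(np.float32)

# Linear quantization to signed 8-bit integers: q = round(x / scale)
scale = np.abs(weights_f32).max() / 127.0
weights_i8 = np.clip(np.round(weights_f32 / scale), -127, 127).astype(np.int8)

# The int8 copy needs a quarter of the storage of the float32 original...
print(weights_f32.nbytes // weights_i8.nbytes)  # -> 4

# ...while dequantizing recovers each weight to within one quantization step.
max_err = np.abs(weights_i8.astype(np.float32) * scale - weights_f32).max()
print(bool(max_err <= scale))  # -> True
```

Fewer bits per value means smaller multipliers and less data movement, which is the hardware-level intuition behind "fewer transistors per operation."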
Already powering RankBrain (the system that helps rank Google Search results) and Street View, the TPU is an important part of Google's infrastructure and shows incredible performance.
The end goal of the project is to gain an edge in the machine learning industry and to make the technology available to Google's customers. By deploying TPUs in its infrastructure stack, Google can offer developers this advanced acceleration through software such as TensorFlow and Cloud Machine Learning.