Tech giants make machine learning software open source while GPUs take center stage
Recently, Google and Microsoft released their machine learning frameworks (TensorFlow and DMTK, the Distributed Machine Learning Toolkit, respectively) as open-source software. They are not the first major software companies to do this: Facebook released its deep-learning modules for the Torch framework as open source in January and noted that Google’s artificial intelligence (AI) subsidiary DeepMind was also using Torch at the time.
This may seem a counterintuitive business decision, but there is more to it than first meets the eye. Exploiting these tools to their full extent requires a large team of experts to tune the algorithms, large datasets and extensive hardware infrastructure. Both Microsoft’s and Google’s frameworks are still incomplete, and the companies intend to improve and extend them. By open-sourcing them, they hope to draw more researchers into the field and thereby improve the frameworks themselves.
As a recent Wired article points out, however, there are also changes afoot in the hardware used for machine learning. According to the article, "Facebook uses GPUs to train its face recognition services", and "as Google seeks an ever greater level of efficiency, there are cases where the company both trains and executes its AI models on GPUs inside the data center". This suggests that a paradigm shift is taking place within these companies, a point also made by Andrew Ng, chief scientist of Baidu.
This is, of course, great news for GPU vendors such as NVIDIA. Graphics card makers previously focused on graphics and media processing tasks, but since the advent of general-purpose computing on GPUs (GPGPU) and more programmable hardware, these chips have been put to increasingly broad use. Today, NVIDIA’s Tesla accelerators, such as its Maxwell-based chips, target exactly these kinds of compute workloads.
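What makes GPUs attractive for machine learning is data parallelism: the same arithmetic applied independently to millions of array elements, so each element can be computed by its own hardware thread. As an illustration (not drawn from the article), the sketch below shows SAXPY, a classic data-parallel kernel, in plain Python; the function name and code are hypothetical, and on a GPU the per-element work inside the loop would run on thousands of threads at once.

```python
def saxpy(a, x, y):
    """Compute a*x[i] + y[i] for every index i.

    Each element is independent of the others, which is exactly the
    property a GPU exploits: one thread per element, all in parallel.
    Here we express the same computation as a sequential Python loop.
    """
    assert len(x) == len(y), "x and y must have the same length"
    return [a * xi + yi for xi, yi in zip(x, y)]

# Small demonstration: scale x by 2 and add y, element by element.
x = [1.0, 2.0, 3.0]
y = [10.0, 20.0, 30.0]
print(saxpy(2.0, x, y))  # [12.0, 24.0, 36.0]
```

Deep-learning workloads such as the face-recognition training mentioned above consist largely of operations with this same independent-per-element (or per-matrix-block) structure, which is why they map so well onto GPUs.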
Is this just another trend or the beginning of a hardware revolution? We will be keeping a close eye on developments in this area.