IBM Created New Software to Accelerate AI Learning

IBM's new technology accelerates AI training by up to four times


The computational efficiency of artificial intelligence is, in a sense, a double-edged sword. A neural network should train quickly, but the more its training is accelerated, the more energy it consumes, to the point where training can simply become uneconomical. IBM may have a solution: it has demonstrated new methods for training AI that allow models to learn several times faster at the same level of resource and energy cost.

To achieve these results, IBM moved beyond conventional 32-bit and 16-bit computation, developing an 8-bit training technique along with a new chip designed to run it.

   “The coming generation of AI applications will need faster response times, bigger AI workloads, and the ability to work with multiple data streams. To unlock the full potential of AI, we are redesigning hardware completely. Scaling AI with new hardware solutions is part of IBM Research’s transition from narrow AI, often used to solve specific, well-defined tasks, to broad AI that reaches across disciplines,” said Jeffrey Welser, vice president and lab director at IBM Research.

IBM presented these developments at NeurIPS 2018 in Montreal, where the company’s engineers described two pieces of work. The first, “Training Deep Neural Networks with 8-bit Floating Point Numbers,” describes how they reduced the arithmetic precision used in training from 32 and 16 bits down to 8 bits while preserving model accuracy. The researchers say the technique speeds up training of deep neural networks by 2-4 times compared to 16-bit systems. The second, “8-bit Precision In-Memory Multiplication with Projected Phase-Change Memory,” presents a method that compensates for the low precision of analog AI chips, allowing them to consume 33 times less energy than comparable digital AI systems.
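To give a rough sense of what reducing numeric precision means, the sketch below rounds ordinary float values onto an 8-bit floating-point grid, assuming a 1-5-2 bit layout (1 sign, 5 exponent, 2 mantissa bits) like the FP8 format described in IBM's paper. The function and its simplifications (clamped exponents, no special handling of denormals or overflow) are illustrative only, not IBM's implementation.

```python
# Illustrative FP8 quantizer: snaps values to the nearest number
# representable with 5 exponent bits and 2 mantissa bits.
import numpy as np

def quantize_fp8(x, exp_bits=5, man_bits=2):
    """Round float values to the nearest representable FP8 value."""
    x = np.asarray(x, dtype=np.float64)
    bias = 2 ** (exp_bits - 1) - 1          # exponent bias (15 for 5 bits)
    out = np.zeros_like(x)
    nonzero = x != 0
    # exponent of each nonzero value, clamped to the representable range
    e = np.floor(np.log2(np.abs(x[nonzero])))
    e = np.clip(e, 1 - bias, bias)
    scale = 2.0 ** (e - man_bits)           # spacing of the FP8 grid here
    out[nonzero] = np.round(x[nonzero] / scale) * scale
    return out

w = np.array([0.1, -0.333, 1.7, 250.0])
print(quantize_fp8(w))   # each value snapped to a coarse 8-bit grid
```

With only two mantissa bits, 0.1 becomes 0.09375 and 1.7 becomes 1.75; the research challenge IBM addresses is keeping training stable despite rounding errors of this size.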

“The improved accuracy achieved by our research team indicates that in-memory computing can deliver high-performance deep learning in low-power environments. As with our digital accelerators, our analog chips are designed to scale AI training and inference across visual, speech, and text data sets, extending toward broad AI.”