11:44, 06-11-2018


Toshiba Memory Corporation Develops High-Speed and High-Energy-Efficiency Algorithm and Hardware Architecture for Deep Learning Processor

TOKYO–(BUSINESS WIRE)– Toshiba Memory Corporation, the world leader in memory solutions, today announced the development of a high-speed, high-energy-efficiency algorithm and hardware architecture for deep learning processing with less degradation of recognition accuracy. The new deep learning processor, implemented on an FPGA [1], achieves 4 times the energy efficiency of conventional processors. The advance was announced at the IEEE Asian Solid-State Circuits Conference 2018 (A-SSCC 2018) in Taiwan on November 6.

Deep learning calculations generally require large numbers of multiply-accumulate (MAC) operations, which has led to long calculation times and high energy consumption. Techniques that reduce the number of bits used to represent parameters (bit precision) have been proposed to reduce the total amount of calculation; one proposed algorithm reduces bit precision down to one or two bits, but such techniques degrade recognition accuracy. Toshiba Memory developed a new algorithm that reduces MAC operations by optimizing the bit precision of MAC operations for individual filters [2] in each layer of a neural network. With the new algorithm, MAC operations can be reduced with less degradation of recognition accuracy.
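The per-filter precision idea can be sketched as follows. This is an illustrative example only, not Toshiba Memory's published algorithm: the uniform quantizer, the error tolerance `tol`, and the candidate bit widths are all assumptions made for the sketch.

```python
import numpy as np

def quantize_filter(weights, bits):
    """Uniformly quantize one filter's weights to the given bit precision."""
    levels = 2 ** bits - 1
    scale = np.abs(weights).max() / (levels / 2) if weights.any() else 1.0
    q = np.round(weights / scale)
    q = np.clip(q, -(levels // 2) - 1, levels // 2)
    return q * scale

def choose_bits(weights, tol=0.05, candidates=(1, 2, 4, 8)):
    """Pick the smallest candidate bit width whose relative quantization
    error for this filter stays below the tolerance."""
    norm = np.linalg.norm(weights) or 1.0
    for b in candidates:
        err = np.linalg.norm(weights - quantize_filter(weights, b)) / norm
        if err <= tol:
            return b
    return candidates[-1]
```

Filters whose weights tolerate coarse quantization get a low bit width (cheap MACs), while sensitive filters keep more bits, which is how per-filter optimization can cut the total MAC work with little accuracy loss.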

Furthermore, Toshiba Memory developed a new hardware architecture, called the bit-parallel method, which is suited to MAC operations of differing bit precision. The method decomposes operands of each bit precision into individual bits and executes the resulting 1-bit operations on numerous MAC units in parallel. This significantly improves the utilization efficiency of the MAC units in the processor compared to conventional MAC architectures, which execute operations serially.
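As a rough illustration of the bit-parallel idea, a multi-bit dot product can be decomposed into shifted 1-bit dot products, each of which maps onto a trivial 1-bit MAC unit. The function below is a hypothetical software sketch (unsigned integer weights only), not the actual FPGA datapath:

```python
import numpy as np

def mac_bit_parallel(activations, weights, bits):
    """Compute dot(activations, weights) by splitting each unsigned integer
    weight into its bits and summing shifted 1-bit partial products."""
    total = 0
    for b in range(bits):
        # Extract bit b of every weight: a vector of 0/1 "1-bit weights".
        bit_plane = (weights >> b) & 1
        # A 1-bit MAC is just a masked add; in hardware, all of these
        # lanes could run on separate 1-bit MAC units in parallel.
        total += int(np.dot(activations, bit_plane)) << b
    return total
```

A 2-bit filter then occupies only two 1-bit lanes while an 8-bit filter occupies eight, so filters of mixed precision can share the same array of 1-bit units without leaving units idle.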

Toshiba Memory implemented ResNet50 [3], a deep neural network, on an FPGA using the variable bit precision and the bit-parallel MAC architecture. For image recognition on the ImageNet [4] dataset, the technique reduces both the operation time and the energy consumption of recognizing image data to 25% of the conventional method's, with little degradation of recognition accuracy.

Artificial intelligence (AI) is forecast to be implemented in a wide range of devices. The high-speed, low-energy-consumption techniques developed for deep learning processors are expected to be used in edge devices such as smartphones and HMDs [5], as well as in data centers, which require low energy consumption. High-performance processors such as GPUs are important devices for high-speed operation of AI. Memory and storage are also among the most important components for AI, which inevitably relies on big data. Toshiba Memory Corporation continues to focus on research and development of AI technologies, as well as on innovating memory and storage, to lead data-oriented computing.

[1] FPGA: Field Programmable Gate Array, an integrated circuit designed to be configured by a customer or a designer after manufacturing.
[2] filter: Generally, one layer of a neural network contains many filters, up to several thousand.
[3] ResNet50: A deep neural network commonly used to benchmark deep learning for image recognition.
[4] ImageNet: A large image database commonly used to benchmark image recognition; it contains more than 14,000,000 images.
[5] HMD: Head Mounted Display

About Toshiba Memory Corporation

Toshiba Memory Corporation, a world leader in memory solutions, is dedicated to the development, production and sales of flash memory and SSDs. In June 2018, Toshiba Memory was acquired by an industry consortium led by Bain Capital. Toshiba Memory pioneers cutting-edge memory solutions and services that enrich people’s lives and expand society’s horizons. The company’s innovative 3D flash memory technology, BiCS FLASH™, is shaping the future of storage in high density applications including advanced smartphones, PCs, SSDs, automotive and data centers. For more information on Toshiba Memory, please visit https://business.toshiba-memory.com/en-apac/top.html

 

Contacts

Toshiba Memory Corporation
Kota Yamaji, +81-3-3457-3473
Business Planning Division
semicon-NR-mailbox@ml.toshiba.co.jp

 

This announcement is officially authoritative in its original source language. Translations are provided solely as a reading aid and should be compared with the source-language text, which is the only legally valid version.