
Preferred Networks develops MN-Core, a custom deep learning processor, for use in MN-3, a new large-scale cluster planned for spring 2020

Dec. 12, 2018, Tokyo, Japan – Preferred Networks, Inc. (“PFN”, Head Office: Tokyo, President & CEO: Toru Nishikawa) announces that it is developing MN-Core (TM), a processor dedicated to deep learning, and that it will exhibit this independently developed deep learning hardware, including the MN-Core chip, board, and server, at SEMICON Japan 2018, held at Tokyo Big Sight.


With the aim of applying deep learning in the real world, PFN has developed the open-source deep learning framework Chainer (TM) and built the powerful GPU clusters MN-1 and MN-1b, which support its research and development activities. By using these clusters together with innovative software to conduct large-scale distributed deep learning, PFN is accelerating R&D in areas such as autonomous driving, intelligent robots, and cancer diagnosis, and is increasing efforts to put these R&D results to practical use.

To speed up the training phase of deep learning, PFN is currently developing the MN-Core chip, which is dedicated to and optimized for matrix operations, the computation that characterizes deep learning workloads. MN-Core is expected to achieve a world top-class performance per watt of 1 TFLOPS/W (half precision). Floating-point operations per second per watt is today one of the most important benchmarks to consider when developing a chip. By focusing on a minimal set of functions, the dedicated chip can boost effective performance in deep learning as well as bring down costs.

  • Specifications of the MN-Core chip
    • Fabrication process: TSMC 12 nm
    • Estimated power consumption (W): 500
    • Peak performance (TFLOPS): 32.8 (DP) / 131 (SP) / 524 (HP)
    • Estimated performance per watt (TFLOPS/W): 0.066 (DP) / 0.26 (SP) / 1.0 (HP)

(Note) DP: double precision, SP: single precision, HP: half precision
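The per-watt figures follow directly from the peak performance numbers divided by the estimated 500 W power draw. A quick sketch, using only the values from the specification table above:

```python
# Performance-per-watt implied by the MN-Core spec table:
# peak TFLOPS divided by estimated power consumption.
POWER_W = 500  # estimated power consumption in watts

peak_tflops = {"DP": 32.8, "SP": 131, "HP": 524}

for precision, tflops in peak_tflops.items():
    print(f"{precision}: {tflops / POWER_W:.3f} TFLOPS/W")
```

This reproduces the quoted estimates once rounded: 0.066 (DP), 0.26 (SP), and roughly 1.0 TFLOPS/W at half precision.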

https://projects.preferred.jp/mn-core/en/

 

Further improvement in the accuracy and computation speed of pre-trained deep learning models is an essential prerequisite for PFN to work on more complex problems that remain unsolved. It is therefore important to keep increasing computing resources and making them more efficient. PFN plans to build MN-3, a new large-scale cluster equipped with MN-Core chips, and to begin operating it in the spring of 2020. MN-3 will comprise more than 1,000 dedicated server nodes, and PFN intends eventually to raise its computation speed to a target of 2 EFLOPS.
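As a rough back-of-the-envelope check, the 2 EFLOPS target can be related to the half-precision peak from the specification table. The node count of exactly 1,000 and the resulting chips-per-node figure are simplifying assumptions for illustration, not published details of MN-3:

```python
# Back-of-the-envelope: what the 2 EFLOPS target implies per node,
# assuming half-precision peak and exactly 1,000 nodes (both are
# simplifications; actual MN-3 configuration is not specified here).
TARGET_EFLOPS = 2.0
NODES = 1000           # "more than 1,000 dedicated server nodes"
CHIP_HP_TFLOPS = 524   # MN-Core half-precision peak per chip

tflops_per_node = TARGET_EFLOPS * 1e6 / NODES  # 1 EFLOPS = 1e6 TFLOPS
chips_per_node = tflops_per_node / CHIP_HP_TFLOPS
print(f"~{tflops_per_node:.0f} TFLOPS per node, ~{chips_per_node:.1f} chips per node")
```

Under these assumptions, each node would need about 2 PFLOPS of half-precision peak, i.e. roughly four MN-Core chips per node.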

For MN-3 and subsequent clusters, PFN aims to build more efficient computing environments by using MN-Core and GPGPU (general-purpose computing on GPUs) according to their respective strengths.

Furthermore, PFN will advance the development of the Chainer deep learning framework so that MN-Core can be selected as a backend, thus utilizing both software and hardware approaches to drive innovations based on deep learning.

 

PFN’s self-developed deep learning hardware, including MN-Core, will be showcased at its exhibition booth at SEMICON Japan 2018.

  • PFN exhibition booth at SEMICON Japan 2018
    • Dates/Time: Dec. 12–14, 2018, 10:00–17:00
    • Venue: Booth #3538, Smart Applications Zone, East Hall 3, Tokyo Big Sight
    • Exhibits:
      (1) Deep Learning Processor MN-Core, Board, Server
      (2) Preferred Networks Visual Inspection
      (3) Preferred Networks plug&pick robot

 

*MN-Core (TM) and Chainer (TM) are trademarks or registered trademarks of Preferred Networks, Inc. in Japan and elsewhere.