Posts from December 2018

Preferred Networks develops MN-Core, a custom deep learning processor, for use in MN-3, a new large-scale cluster planned for spring 2020

Dec. 12, 2018, Tokyo Japan – Preferred Networks, Inc. (“PFN”, Head Office: Tokyo, President & CEO: Toru Nishikawa) announces that it is developing MN-Core™, a processor dedicated to deep learning, and will exhibit this independently developed deep learning hardware, including the MN-Core chip, board, and server, at SEMICON Japan 2018, held at Tokyo Big Sight.


With the aim of applying deep learning in the real world, PFN has developed the Chainer™ open-source deep learning framework and built the powerful GPU clusters MN-1 and MN-1b, which support its research and development activities. By using these clusters with innovative software to conduct large-scale distributed deep learning, PFN is accelerating R&D in areas such as autonomous driving, intelligent robots, and cancer diagnosis, and is increasing efforts to put these R&D results to practical use.

To speed up the training phase of deep learning, PFN is currently developing the MN-Core chip, which is dedicated to and optimized for matrix operations, the computation characteristic of deep learning. MN-Core is expected to achieve a world top-class performance per watt of 1 TFLOPS/W (half precision). Today, floating-point operations per second per watt is one of the most important benchmarks to consider when developing a chip. By focusing on minimal functionality, a dedicated chip can boost effective performance in deep learning as well as bring down costs.

  • Specifications of the MN-Core chip
    • Fabrication process: TSMC 12 nm
    • Estimated power consumption: 500 W
    • Peak performance: 32.8 TFLOPS (DP) / 131 TFLOPS (SP) / 524 TFLOPS (HP)
    • Estimated performance per watt: 0.066 TFLOPS/W (DP) / 0.26 TFLOPS/W (SP) / 1.0 TFLOPS/W (HP)

(Note) DP: double precision, SP: single precision, HP: half precision
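
As a rough consistency check (an illustrative sketch only, using the figures listed above), dividing peak performance by the estimated power consumption reproduces the per-watt estimates:

    # Back-of-the-envelope check: peak TFLOPS / estimated watts = TFLOPS/W
    peak_tflops = {"DP": 32.8, "SP": 131.0, "HP": 524.0}
    power_w = 500.0

    for precision, tflops in peak_tflops.items():
        print(f"{precision}: {tflops / power_w:.3f} TFLOPS/W")
    # DP: 0.066, SP: 0.262, HP: 1.048 -- consistent with the rounded estimates above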

https://projects.preferred.jp/mn-core/en/


Further improvement in the accuracy and computation speed of pre-trained deep learning models is an essential prerequisite for PFN to work on more complex problems that remain unsolved. It is therefore important to continue increasing computing resources and making them more efficient. PFN plans to build MN-3, a new large-scale cluster loaded with MN-Cores, and to begin operating it in the spring of 2020. MN-3 will comprise more than 1,000 dedicated server nodes, and PFN intends to eventually raise its computation speed to a target of 2 EFLOPS.
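
For a rough sense of scale (an illustrative sketch only, assuming the 2 EFLOPS target refers to half-precision peak performance), the target can be related to the per-chip figure above:

    # Rough illustration: how many MN-Core chips would a 2 EFLOPS target imply?
    target_flops = 2e18        # 2 EFLOPS
    chip_hp_flops = 524e12     # 524 TFLOPS (HP) per MN-Core chip
    nodes = 1000               # "more than 1,000 dedicated server nodes"

    chips = target_flops / chip_hp_flops
    print(f"~{chips:.0f} chips in total, ~{chips / nodes:.1f} chips per node")
    # ~3817 chips in total, ~3.8 chips per node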

For MN-3 and subsequent clusters, PFN aims to build more efficient computing environments by using MN-Core and GPGPU (general-purpose computing on GPUs) according to their respective strengths.

Furthermore, PFN will advance the development of the Chainer deep learning framework so that MN-Core can be selected as a backend, thus utilizing both software and hardware approaches to drive innovations based on deep learning.


PFN’s self-developed deep learning hardware, including MN-Core, will be showcased at its exhibition booth at SEMICON Japan 2018.

  • PFN exhibition booth at SEMICON Japan 2018
    • Dates/Time: Dec. 12–14, 2018, 10:00–17:00
    • Venue: Booth #3538, Smart Applications Zone, East Hall 3, Tokyo Big Sight
    • Exhibits:
      (1) Deep learning processor MN-Core: chip, board, and server
      (2) Preferred Networks Visual Inspection
      (3) Preferred Networks plug & pick robot


*MN-Core™ and Chainer™ are trademarks or registered trademarks of Preferred Networks, Inc. in Japan and elsewhere.

Preferred Networks releases ChainerX, a C++ implementation of automatic differentiation of N-dimensional arrays, integrated into Chainer v6 (beta version) for higher computing performance

Dec. 3, 2018, Tokyo Japan – Preferred Networks, Inc. (“PFN”, Head Office: Tokyo, President & CEO: Toru Nishikawa) releases ChainerX, a C++ implementation of automatic differentiation of N-dimensional arrays, for the Chainer™ v6 open-source deep learning framework. Chainer v6 will run most code written for previous versions without changes.

Since its source code was released in 2015, Chainer, known as a pioneer of flexible and intuitive deep learning frameworks, has been developed very actively and has attracted many users. Many other deep learning frameworks have since adopted Chainer’s Define-by-Run method, demonstrating the foresight of its design. Chainer’s pure-Python implementation policy has contributed to the legibility and simplicity of its code, but as performance improved, the overhead of the Python runtime came to account for a growing share of the total execution time and was becoming a bottleneck.

Therefore, the release of ChainerX, which is written in C++ and integrated into the main Chainer codebase, is a first step toward higher performance without losing much of Chainer’s flexibility or backward compatibility for many users.


Main features of ChainerX are:

  • C++ implementation in close connection with Python – the functionality of NumPy, CuPy™, and automatic differentiation (autograd), previously written mostly in Python, has been reimplemented in C++

The logic of matrix calculation, convolution operations, and error backpropagation is now implemented in C++, reducing the CPU overhead caused by Python by up to 87% (comparison of overhead measurements only).


  • Easy to work with CPU, GPU, and other hardware backends

Replaceable backends increase portability between devices (see the sketch after this list).
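
As a minimal sketch of what this looks like from Python (based on ChainerX’s NumPy-like API as integrated into Chainer v6; exact names may evolve), note that only the device string changes when switching backends:

    import chainerx as chx

    # Arrays live on a named device; 'native' is the CPU backend,
    # while 'cuda:0' would select the first GPU.
    x = chx.ones((2, 3), dtype=chx.float32, device='native')
    x.require_grad()           # mark x for automatic differentiation

    y = (x * x + 2 * x).sum()  # Define-by-Run: the graph is built as this executes
    chx.backward(y)            # backpropagation, implemented in C++

    print(x.grad)              # dy/dx = 2x + 2, evaluated elementwise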


Figure: In addition to a multidimensional array implementation corresponding to NumPy/CuPy, ChainerX covers the Define-by-Run style automatic differentiation function.


As well as improving ChainerX’s performance and expanding its backend support, PFN plans to enable models written in ChainerX to be called from non-Python environments.

For more details on ChainerX, developer Seiya Tokui is scheduled to give a presentation this month at NeurIPS (formerly called NIPS), a top conference in machine learning, in Montreal, Canada.

Dec. 7, 12:50–2:55 p.m., Open Source Software Showcase:

http://learningsys.org/nips18/schedule.html


Chainer has adopted a number of development proposals from external contributors. PFN will continue to quickly adopt the results of the latest deep learning research and promote the development and popularization of Chainer in collaboration with supporting companies and the OSS community.


  • About the Chainer™ Open Source Deep Learning Framework

Chainer is a Python-based deep learning framework developed and provided by PFN. Its unique features and powerful performance allow complex neural networks to be designed easily and intuitively, thanks to its “Define-by-Run” approach. Since it was open-sourced in June 2015, Chainer has become one of the most popular frameworks, attracting not only the academic community but also many industrial users who need a flexible framework to harness the power of deep learning in their research and real-world applications.
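
As a brief illustration of the Define-by-Run approach, a network in Chainer is ordinary Python code, and the computational graph is recorded as the forward pass executes (a minimal sketch using Chainer’s standard API):

    import chainer
    import chainer.functions as F
    import chainer.links as L
    import numpy as np

    class MLP(chainer.Chain):
        def __init__(self):
            super().__init__()
            with self.init_scope():
                self.l1 = L.Linear(None, 100)  # input size inferred at first call
                self.l2 = L.Linear(100, 10)

        def forward(self, x):
            # Ordinary Python control flow is allowed here; the graph
            # is built dynamically as these operations execute.
            h = F.relu(self.l1(x))
            return self.l2(h)

    model = MLP()
    x = np.random.rand(8, 784).astype(np.float32)
    loss = F.sum(model(x))  # the forward pass defines the graph
    loss.backward()         # gradients via backpropagation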

Chainer quickly incorporates the results of the latest deep learning research. With additional packages such as ChainerRL (reinforcement learning), ChainerCV (computer vision), and Chainer Chemistry (a deep learning library for chemistry and biology), and through the support of Chainer development partner companies, PFN aims to promote the most advanced research and development activities of researchers and practitioners in each field. (http://chainer.org/)

Preferred Networks releases the beta version of Optuna, an automatic hyperparameter optimization framework for machine learning, as open-source software

Dec. 3, 2018, Tokyo Japan – Preferred Networks, Inc. (“PFN”, Head Office: Tokyo, President & CEO: Toru Nishikawa) has released the beta version of Optuna™, an open-source automatic hyperparameter optimization framework.

In deep learning and machine learning, it is essential to tune hyperparameters, since they control how an algorithm behaves, and the accuracy of a model largely depends on how they are tuned. The number of hyperparameters tends to be especially high in deep learning; they include the number of training iterations, the number of neural network layers and channels, the learning rate, and the batch size, among others. Nevertheless, many deep learning researchers and engineers tune these hyperparameters manually and spend a significant amount of their time doing so.

Optuna automates the trial-and-error process of optimizing the hyperparameters. It automatically finds optimal hyperparameter values that enable the algorithm to give excellent performance. Optuna can be used not only with the Chainer™ open-source deep learning framework, but also with other machine learning software.
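
A minimal sketch of the basic workflow follows; the quadratic objective is a toy stand-in for a real training run:

    import optuna

    # Toy objective: Optuna searches for the x that minimizes (x - 2)^2.
    # In practice the body would train a model and return a validation score.
    def objective(trial):
        x = trial.suggest_uniform('x', -10, 10)
        return (x - 2) ** 2

    study = optuna.create_study()
    study.optimize(objective, n_trials=100)
    print(study.best_params)  # e.g. {'x': 2.0017}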


Main features of Optuna are:

  • Define-by-Run style API

Optuna’s Define-by-Run style API lets users construct the hyperparameter search space dynamically in ordinary program code, so it can optimize complex search spaces while maintaining high modularity.

  • Pruning of trials based on learning curves

Optuna predicts the outcome of training an iterative algorithm from its learning curve and halts unpromising trials early, enabling an efficient optimization process (see the sketch after this list).

  • Parallel distributed optimization

Optuna supports asynchronous distributed optimization, performing multiple trials simultaneously across multiple nodes.
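
The sketch below shows how pruning hooks into a training loop; the inner loop is a toy stand-in for real training, and the call names follow Optuna’s documentation (the beta-era API may differ slightly):

    import optuna

    def objective(trial):
        lr = trial.suggest_loguniform('lr', 1e-5, 1e-1)
        error = 1.0
        for step in range(100):
            error *= (1.0 - lr)              # toy stand-in for one epoch of training
            trial.report(error, step)        # hand the learning curve to Optuna
            if trial.should_prune():         # learning curve looks unpromising?
                raise optuna.TrialPruned()   # halt this trial early
        return error

    # Pointing create_study at shared storage (e.g. an RDB URL) lets multiple
    # processes or nodes optimize the same study asynchronously.
    study = optuna.create_study(pruner=optuna.pruners.MedianPruner())
    study.optimize(objective, n_trials=50)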


Optuna is already used in PFN projects with good results; one example is the second-place award in the Google AI Open Images 2018 – Object Detection Track competition. PFN will continue to develop Optuna while prototyping and implementing advanced functionalities.


* Chainer™ and Optuna™ are trademarks or registered trademarks of Preferred Networks, Inc. in Japan and elsewhere.