Posts in September 2017

Preferred Networks Launches one of Japan’s Most Powerful Private Sector Supercomputers

Features the NTT Com Group’s cloud-based GPU platform

TOKYO, JAPAN — Preferred Networks, Inc. (PFN), a provider of IoT-centric deep learning systems; NTT Communications Corporation (NTT Com), the ICT solutions and international communications business within the NTT Group; and NTT Com subsidiary NTT PC Communications Incorporated (NTTPC) today announced the launch of a private supercomputer designed to facilitate research and development in deep learning, including applications such as autonomous driving and cancer diagnosis.

The new supercomputer is one of the most powerful developed by the private sector in Japan. Built on NTT Com and NTTPC’s Graphics Processing Unit (GPU) platform, it contains 1,024 NVIDIA(R) Tesla(R) P100 GPUs in a multi-node configuration. Its theoretical peak performance is 4.7 PetaFLOPS (4,700 trillion floating point operations per second), placing it among the fastest computing environments in Japan.
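As a rough sanity check on these figures (an illustration added here, not part of the original announcement), dividing the quoted peak by the number of GPUs gives roughly 4.6 TFLOPS per GPU, which is in the range of the Tesla P100’s published double-precision peak; whether the quoted total refers to double precision is an assumption, since the release does not state it.

    # Back-of-the-envelope check of the quoted peak performance.
    # Assumption: the 4.7 PFLOPS figure is a theoretical peak for a single
    # precision class; the release does not state which precision it refers to.
    total_peak_pflops = 4.7        # quoted theoretical peak, in PFLOPS
    num_gpus = 1024                # Tesla P100 GPUs in the system

    per_gpu_tflops = total_peak_pflops * 1000 / num_gpus
    print("~{:.2f} TFLOPS per GPU".format(per_gpu_tflops))   # prints ~4.59 TFLOPS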

Overview of the private supercomputer

 

PFN’s deep learning research demands an ultra-high-speed, high-capacity, state-of-the-art computing environment. Existing GPU platforms require massive electricity supplies, generate excessive heat, and offer inadequate network speed. To address these issues, PFN adopted the NTT Com Group’s proven GPU platform, which boasts significantly advanced technology. PFN also leveraged the latest data center design, building a large-scale multi-node platform using ChainerMN, PFN’s technology that significantly accelerates deep learning by parallelizing calculations over multiple nodes.

The NTT Com Group has developed and released a multi-node GPU platform on its Enterprise Cloud and Nexcenter(TM), a world-leading data center service, incorporating the group’s extensive know-how in maximizing GPU performance.

Following the supercomputer launch, PFN plans to increase the processing speed of Chainer, its open source deep learning framework, and to further accelerate its research and development in fields that require huge amounts of computing resources, such as transportation systems, manufacturing, and the bio/healthcare industry. PFN will additionally consider deploying NVIDIA(R) Tesla(R) V100 GPUs, which are based on the next-generation Volta GPU architecture. The NTT Com Group will continue to support PFN’s research and the commercialization of the solutions it develops with AI-related technologies and platforms.

“NVIDIA is excited to see the launch of Preferred Networks’ private supercomputer, built in partnership with NTT Com Group. Computing power is the source of competitive advantage for deep learning, the core technology of modern AI. We have high expectations that the new system will accelerate Preferred Networks’ business and contribute to Japan’s economic growth.”

Masataka Osaki
NVIDIA Japan Country Manager, Vice President of Corporate Sales

 

Related links:

Chainer
Enterprise Cloud
Nexcenter

 

◆ About Preferred Networks, Inc.
Founded in March 2014 with the aim of promoting the business utilization of deep learning technology focused on IoT, PFN advocates Edge Heavy Computing, a way of handling the enormous amounts of data generated by devices in a distributed and collaborative manner at the edge of the network, and drives innovation in three priority business areas: transportation, manufacturing, and bio/healthcare. PFN develops and provides Chainer, an open source deep learning framework, and promotes advanced initiatives through collaboration with world-leading organizations such as Toyota Motor Corporation, Fanuc Corporation, and the National Cancer Center.
https://www.preferred-networks.jp/en/

◆ About NTT Communications Corporation
NTT Communications provides consultancy, architecture, security and cloud services to optimize the information and communications technology (ICT) environments of enterprises. These offerings are backed by the company’s worldwide infrastructure, including the leading global tier-1 IP network, the Arcstar Universal One™ VPN network reaching over 190 countries/regions, and 140 secure data centers worldwide. NTT Communications’ solutions leverage the global resources of NTT Group companies including Dimension Data, NTT DOCOMO and NTT DATA.
www.ntt.com | Twitter@NTT Com | Facebook@NTT Com | LinkedIn@NTT Com

◆ About NTT PC Communications Incorporated
NTT PC Communications Incorporated (NTTPC), established in 1985, is a subsidiary of NTT Communications and a network service and communication solution provider in the Japanese telecom market. Throughout the years, it has been one of the group’s most strategic technology companies. NTTPC launched the NTT Group’s first ISP service, “InfoSphere,” in 1995, and Japan’s first Internet data center and server hosting service, “WebARENA,” in 1997. NTTPC has consistently pioneered new offerings in the ICT market.
http://www.nttpc.co.jp/english/

 

 

Notes
1. Chainer(R) is a trademark or registered trademark of Preferred Networks, Inc. in Japan and other countries.

2. Other company and product names mentioned in this release are trademarks or registered trademarks of their respective companies.

Call for applications for the PFN 2018 summer internship in Tokyo

Preferred Networks (PFN) will organize internship programs in Tokyo next summer. To make the process smoother for students from outside Japan, we are opening an early-bird application for them.

We are a growing startup with about 100 members based in Tokyo, Japan, focusing on applying deep learning to industrial problems such as autonomous driving, manufacturing, and bio-healthcare. We are actively developing the deep learning framework Chainer.

We are looking for brilliant students with expertise in various topics, such as deep learning, reinforcement learning, computer vision, bioinformatics, natural language processing, distributed computing, and simulation.

In previous years, we selected highly capable interns and encouraged them to tackle challenging and important problems; some of the resulting work has been published at top conferences such as ICML and at workshops at ICRA and ICCV.

During the internship, you will have a unique opportunity to collaborate with highly motivated experts on real-world applications of deep learning, while staying in Tokyo, one of the most attractive cities in the world.

We look forward to receiving your application; please follow the instructions below.

 

● Target of this program

  • Students outside of Japan

 

Work time & Location:

  • Business hours:
    8 hours/day, 5 days/week (excluding national holidays)
  • Location: Central Tokyo
    Preferred Networks Tokyo office: Otemachi Bldg. 2F, 1-6-1, Otemachi, Chiyoda-ku, Tokyo, Japan 100-0004
    https://www.preferred-networks.jp/en/about

 

Period & Compensation:

  • The internship period can be arranged flexibly, though it usually starts in June and finishes by the end of August
  • We require a minimum of two months (40 business days) so that interns can tackle a challenging task
  • Interns are paid a competitive salary
  • We will cover accommodation and travel costs

 

Requirements:

  • Experience beyond coursework in at least one of the technology areas listed below
    e.g., a published paper, a competition win, part-time work, or open source contributions
  • Strong programming skills (any programming language)
  • Formally enrolled in a university or research institute outside of Japan during the 2018-2019 school year
  • Fluent in either English or Japanese
  • Able to work full-time on weekdays at our Tokyo office during the internship period

 

Preferred experience & skills:

  • Machine learning and deep learning
  • Experience with NumPy / SciPy / deep learning frameworks
  • Experience with software & service development
  • Experience working with shared codebases (e.g., GitHub / Bitbucket, etc.)
  • Contribution to open source projects

 

Candidate themes (subject to change)

1. Technology areas: Sub-fields of machine learning, such as

     a. Deep learning theory

     b. Reinforcement learning

     c. Computer vision

     d. Natural language processing

     e. Parallel / distributed computing

 

2. Application areas: Advanced applications, such as

     a. Object detection / tracking / segmentation from image / video

     b. Robotics / factory automation / predictive maintenance

     c. Life science / healthcare / medicine

     d. Human machine interaction

     e. Design / content creation / visualization

     f. Deep learning software (Chainer, CuPy, ChainerMN/CV/RL, etc)

     g. Optimization for deep learning hardware

 

Application information:

  • Resume / CV (PDF format only. Please DO NOT include any personal or private information [e.g., age, race, nationality, religion, personal address, phone number] other than your name, email address, and affiliation)
  • GitHub account (optional)

 

How to apply:

  • Please fill in and submit the Google Form
  • Deadline: Friday, September 29th, 11:59 pm (PST)

  • No late submissions will be accepted (we plan to open a second call for applications by January, and another call for students in Japan by May)
  • The review process takes about 6-8 weeks after submission
  • Obtaining a work visa for Japan usually takes up to 3 months

 

Interview process:

  1. Document review
  2. One-way video interview (recorded via webcam)
  3. Skype interview in English or Japanese (multiple times if necessary)

 

If you have questions, please contact us at hr-pfn@preferred.jp. (Sorry, but no late applications will be accepted, for fairness.)

Preferred Networks officially released ChainerMN version 1.0.0, a multi-node distributed deep learning package, now even faster and with stabilized data-parallel core functions

Tokyo, Japan, September 1, 2017 – Preferred Networks, Inc. (PFN, Headquarters: Chiyoda-ku, Tokyo, President and CEO: Toru Nishikawa) has released the official version 1.0.0 of ChainerMN※1, a package that adds multi-GPU distributed learning functionality to Chainer, the open source deep learning framework developed by PFN.

For the practical application of machine learning and deep learning technologies, the ever-increasing complexity of neural network models, with larger numbers of parameters and much larger training datasets, requires more and more computational power for training.

ChainerMN is a multi-node extension to Chainer that realizes large-scale distributed deep learning through high-speed intra- and inter-node communication. PFN released the beta version of ChainerMN on May 9, 2017, and this is the first official release. The following features have been added in ChainerMN v1.0.0.

● Features of ChainerMN v1.0.0

1. Increased stability in core functions during data parallelization

This improved stability makes ChainerMN easier to use with confidence.

2. Compatibility with NVIDIA Collective Communications Library (NCCL) 2.0.

Support for the latest version of NCCL makes ChainerMN even faster.

3. More sample code (machine translation, DCGAN) is available.

These examples will help users learn more advanced ways of using ChainerMN.

4. Expansion of supported environments (non-CUDA-Aware MPI).

The beta version required a CUDA-aware MPI implementation such as Open MPI or MVAPICH, but ChainerMN is now also compatible with non-CUDA-aware MPI (see the communicator note in the sketch further below).

5. Initial implementation of model parallelism functions.

Model parallelism, in which multiple GPUs cooperate on a single model, makes more complex forms of distributed learning possible.
The conventional data-parallel approach is known to limit the usable batch size as the number of nodes increases, if accuracy is to be maintained. To overcome this, we have implemented the initial part of the more challenging model-parallel approach, aiming for greater speed than is possible with data parallelism alone.

 

These features make deep learning with ChainerMN more stable and faster than ever, as well as easier to use.
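To give a concrete sense of how ChainerMN plugs into an ordinary Chainer training script, the following is a minimal, hedged data-parallel sketch. The model class, dataset loader, and hyperparameters are placeholders, and the communicator names and exact keyword arguments should be confirmed against the ChainerMN documentation for the version in use.

    import chainer
    import chainermn

    # One MPI process per GPU, launched e.g. with:  mpiexec -n 4 python train.py
    # 'hierarchical' uses NCCL within each node; a plain-MPI communicator such as
    # 'naive' is an option when CUDA-aware MPI is not available (cf. feature 4 above).
    comm = chainermn.create_communicator('hierarchical')
    device = comm.intra_rank                        # GPU id within the local node

    model = MyModel()                               # placeholder: any chainer.Chain
    chainer.cuda.get_device_from_id(device).use()
    model.to_gpu()

    # Wrapping the optimizer makes it exchange (all-reduce) gradients
    # across all processes after each backward pass.
    optimizer = chainermn.create_multi_node_optimizer(
        chainer.optimizers.Adam(), comm)
    optimizer.setup(model)

    # Rank 0 loads the dataset; scatter_dataset splits it across processes.
    train = load_training_dataset() if comm.rank == 0 else None   # placeholder loader
    train = chainermn.scatter_dataset(train, comm)

    train_iter = chainer.iterators.SerialIterator(train, batch_size=32)
    updater = chainer.training.StandardUpdater(train_iter, optimizer, device=device)
    trainer = chainer.training.Trainer(updater, (10, 'epoch'))
    trainer.run()

Apart from creating the communicator, wrapping the optimizer, and scattering the dataset, the training loop is unchanged from single-GPU Chainer, which is the main appeal of the data-parallel approach.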

The following are the results of a performance measurement of ChainerMN on the ImageNet image classification dataset. ChainerMN v1.0.0 is about 1.4 times faster than the version first announced in January this year, and 1.1 times faster than the beta version released in May. Please see the following Chainer Blog post for details of the experimental settings:

https://chainer.org/general/2017/02/08/Performance-of-Distributed-Deep-Learning-Using-ChainerMN.html

 

In addition, from October 2017, ChainerMN will become available on “XTREME DNA”, an unmanned cloud-based supercomputer deployment and operation service provided by XTREME Design Inc. (Head office: Shinagawa-ku, Tokyo; CEO: Naoki Shibata).

ChainerMN will be added to the distributed parallel environment templates for GPU instances on the pay-as-you-go public cloud Microsoft Azure. This not only eliminates the need to build the infrastructure required for large-scale distributed deep learning, but also makes it easy to manage research and development costs.

ChainerMN aims to provide an environment in which deep learning researchers and developers can concentrate on the core parts of research and development, such as the design of neural networks. PFN will continue to improve ChainerMN by adding more features and expanding its supported environments.

 

◆ The Open Source Deep Learning Framework Chainer (http://chainer.org)
Chainer is a Python-based deep learning framework developed by PFN. Thanks to its “Define-by-Run” approach, it offers unique features and powerful performance that enable users to design complex neural networks easily and intuitively. Since it was open-sourced in June 2015, Chainer has become one of the most popular frameworks, attracting not only the academic community but also many industrial users who need a flexible framework to harness the power of deep learning in their research and real-world applications.
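For readers unfamiliar with the term, the short sketch below illustrates the general idea of Define-by-Run: the network is defined implicitly by running ordinary Python code in the forward pass, so its structure can depend on data or flags at run time (the model and layer sizes here are illustrative, not taken from this release).

    import chainer
    import chainer.functions as F
    import chainer.links as L

    class DynamicMLP(chainer.Chain):
        # The computational graph is built on the fly each time the forward
        # pass runs, so ordinary Python control flow shapes the network.
        def __init__(self, n_units=100, n_out=10):
            super(DynamicMLP, self).__init__()
            with self.init_scope():
                self.l1 = L.Linear(None, n_units)    # input size inferred at first call
                self.l2 = L.Linear(n_units, n_units)
                self.l3 = L.Linear(n_units, n_out)

        def __call__(self, x, extra_layer=False):
            h = F.relu(self.l1(x))
            if extra_layer:                          # data- or flag-dependent branching
                h = F.relu(self.l2(h))
            return self.l3(h)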

 

※1: MN in ChainerMN stands for Multi-Node. https://github.com/pfnet/chainermn