Posts in April 2017

A member of Preferred Networks temporarily assigned to OpenAI

On April 10th, 2017, Yasuhiro Fujita, an engineer at Preferred Networks, began working at OpenAI as a visiting research scientist in San Francisco, California.

OpenAI is a non-profit artificial intelligence research company where many renowned researchers work on fundamental problems in AI and make their intellectual property and research open to the public.

Yasuhiro Fujita is a passionate researcher of game-playing AI and the creator of ChainerRL, a deep reinforcement learning library built on top of Chainer.
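For a sense of what ChainerRL provides, the snippet below is a minimal sketch, loosely based on the ChainerRL quickstart, of setting up and training a DQN agent on OpenAI Gym's CartPole-v0; the hyperparameter values are illustrative, and exact argument names may vary between ChainerRL versions.

```python
import chainer
import chainerrl
import gym
import numpy as np

# A toy control task from OpenAI Gym.
env = gym.make('CartPole-v0')
obs_size = env.observation_space.shape[0]
n_actions = env.action_space.n

# A fully connected Q-function provided by ChainerRL.
q_func = chainerrl.q_functions.FCStateQFunctionWithDiscreteAction(
    obs_size, n_actions, n_hidden_channels=50, n_hidden_layers=2)

optimizer = chainer.optimizers.Adam(eps=1e-2)
optimizer.setup(q_func)

# Epsilon-greedy exploration and an experience replay buffer.
explorer = chainerrl.explorers.ConstantEpsilonGreedy(
    epsilon=0.3, random_action_func=env.action_space.sample)
replay_buffer = chainerrl.replay_buffer.ReplayBuffer(capacity=10 ** 6)

agent = chainerrl.agents.DQN(
    q_func, optimizer, replay_buffer, gamma=0.99, explorer=explorer,
    replay_start_size=500,
    phi=lambda x: x.astype(np.float32, copy=False))

# One training episode: act, observe the reward, and learn online.
obs = env.reset()
reward = 0.0
done = False
while not done:
    action = agent.act_and_train(obs, reward)
    obs, reward, done, _ = env.step(action)
agent.stop_episode_and_train(obs, reward, done)
```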

During his assignment, which runs until September, Fujita will contribute to OpenAI’s research projects and later make his results public.

As an active member of the research community, Preferred Networks will maintain its strong commitment to advancing research in this field through similar collaborations and initiatives.

Intel and Preferred Networks collaborate to jointly develop Chainer, an open source deep learning framework

The companies aim to significantly accelerate CPU performance for Chainer running on Intel Architecture.

Intel Corporation and Preferred Networks, Inc. (PFN) announced today that the companies will collaborate on the development of Chainer(R) (http://chainer.org/), PFN’s open source framework for deep learning, with the aim of accelerating out-of-the-box deep learning performance on general-purpose infrastructure powered by Intel.


Advanced technologies including IoT (Internet of Things), 5G (fifth-generation mobile networks), and AI (artificial intelligence) are expected to be used across a range of industries in the years ahead, giving rise to data-driven business opportunities and user experiences. Advances in AI and deep learning technologies, in particular, will accelerate the creation of applications that further enhance the intrinsic value of data.

The use of special-purpose computing environments for developing and implementing AI applications and deep learning frameworks poses challenges for the developer community, including development complexity, time and cost.

PFN, the developer of Chainer, an advanced deep learning framework with a reputation for ease of use among application developers in various industries, and Intel Corporation, a provider of general-purpose computing technologies and industry-leading AI/deep learning accelerators, will collaborate in an effort to make AI development easier and more affordable. The collaboration will bring both companies’ technologies to bear, with the aim of optimizing the development and execution of applications that use advanced AI and deep learning frameworks and accelerating the performance of image and voice recognition.

Chainer is a Python-based deep learning framework developed by PFN. Its “Define-by-Run” approach, which builds the computational graph on the fly as the forward pass executes, lets users easily and intuitively design complex neural networks. Since it was open-sourced in June 2015, Chainer has become one of the most popular frameworks, attracting not only the academic community but also many industrial users who need a flexible framework to harness the power of deep learning in their research and real-world applications.
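As a minimal sketch of what “Define-by-Run” means in practice: the network below is an ordinary Python class, and its computational graph is built anew each time the forward pass runs, so plain Python control flow can change the structure from one call to the next (the layer sizes and toy loss here are illustrative, not taken from any PFN example).

```python
import numpy as np
import chainer
import chainer.functions as F
import chainer.links as L


class MLP(chainer.Chain):
    """A small multi-layer perceptron written as a plain Python class."""

    def __init__(self):
        super(MLP, self).__init__(
            l1=L.Linear(784, 100),
            l2=L.Linear(100, 10),
        )

    def __call__(self, x, use_dropout=True):
        # The graph is recorded while this code executes, so ordinary
        # Python branching decides the network's structure at run time.
        h = F.relu(self.l1(x))
        if use_dropout:
            h = F.dropout(h)
        return self.l2(h)


model = MLP()
x = np.random.rand(8, 784).astype(np.float32)

y = model(x)           # the forward pass builds the computational graph
loss = F.sum(y ** 2)   # any differentiable expression can serve as a loss
loss.backward()        # gradients flow through the graph just recorded
```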

Intel Corporation, a technology leader uniquely poised to drive the AI computing era, will help Chainer deliver breakthrough deep learning throughput across the industry’s most comprehensive compute portfolio for AI, which includes Intel(R) Xeon(R) processors, Intel(R) Xeon Phi™ processors, Intel(R) Arria(R) 10 FPGAs, Intel(R) Nervana™ technology, and other products. The framework will employ Intel’s highly optimized open source libraries, the Intel(R) Math Kernel Library (MKL) and the Intel(R) Math Kernel Library for Deep Neural Networks (MKL-DNN), as fundamental building blocks.
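Since Chainer’s CPU path runs on NumPy, one simple way to see whether an MKL-backed BLAS is already present on a given machine is to inspect NumPy’s build configuration. Note that this only reflects the BLAS linkage, not the MKL-DNN integration described above, and the exact output depends on how NumPy was built (an Anaconda build, for example, is typically linked against MKL).

```python
import numpy as np
import chainer

# An MKL-linked NumPy build lists "mkl" libraries in its BLAS/LAPACK
# configuration; a reference-BLAS build does not.
np.show_config()

print('Chainer version:', chainer.__version__)
```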


Through the collaboration, Intel and PFN will undertake the following.

  •  Continuously optimize the performance of Chainer on Intel architecture
  •  Continuously align to Chainer updates
  •  Continuously optimize Chainer for updates to Intel architectures, including general-purpose processors, accelerators, libraries, and so on
  •  Share the results of the companies’ collaboration with the community on Intel’s GitHub repository
  •  Collaborate on marketing activities designed to accelerate AI/deep learning market growth


*Intel, Xeon, Xeon Phi, Arria and Nervana are trademarks or registered trademarks of Intel Corporation in the United States and other countries.

*Chainer and DIMo are trademarks or registered trademarks of Preferred Networks, Inc. in Japan and other countries.

PFN 2017 Summer Internship Program

As is tradition, Preferred Networks (PFN) will be organizing its internship program again this summer. Starting this year, we are also looking for interns in front-end/back-end development and chip development in addition to machine learning. We welcome applications not only from the machine learning field but also from people with many other backgrounds. We look forward to hearing from students who want to join us in creating new technologies and services. Students who have applied before are also welcome to try again this year.
(Applications from overseas candidates who need a visa are already closed for this year.)

Application Guidelines


●Period


August 1st – September 30th 2017
(Negotiable.)


●Time & Place

8 hours/day, 5 days/week (excluding holidays)
Otemachi Bldg. 2F, 1-6-1 Otemachi, Chiyoda-ku, Tokyo, 100-1004


●Salary


  • High school: 1,500 yen/hour
  • Technical college/Undergraduate/Graduate: 1,800 yen/hour
  • Transportation expenses (up to 10,000 yen/month) are also covered.


●Why join the PFN internship program?


  • You will collaborate with and be mentored by experts in various fields, including deep learning, computer vision, natural language processing, reinforcement learning, algorithms, distributed processing, etc.
  • You can make the results of your work during the internship public, e.g., as open-source software or a paper. (Some restrictions may apply.)


●Qualification requirements

We are looking for highly motivated people with development skills. Expertise in the fields listed below or prior development experience will be taken into consideration but is not a must. The application requirements are as follows:

  • Currently a student (high school, technical college, university, or graduate school; other cases can also be discussed)
  • Able to communicate in English or Japanese
  • Able to communicate on one’s own initiative
  • Have programming skills (regardless of the programming language)
  • Able to work full-time on weekdays at our Tokyo office

# We will prepare accommodation for those who live far from Tokyo.
# You can still apply even if you are not a fully-fledged application developer.


●How to apply

Please submit the application form below.
https://docs.google.com/forms/d/e/1FAIpQLSevjHAtBhq9380kzDLXQ1dySoWa_p7N_VhgTHZnC4pcJa75hw/viewform

Questions about the internship program are also accepted at intern2017@preferred.jp.

Application form note

Proof of skills: following the steps below, upload a document (Microsoft Word or Google Docs, one A4 page) that explains your strengths, fields of expertise, etc.
E.g., a list of papers, awards received, software and services you have developed or used, programming contest participation history, personal website/blog, Twitter account, etc.
https://www.preferred-networks.jp/wp-content/uploads/2017/03/intern2017_GoogleUpload_3.pdf

Themes you want to work on: please describe your interest in the selected themes and your expectations for the internship in less than 400 characters.

# This is very important for both the admission process and the internship theme selection.


●Application Deadline


May 7th, 2017, 23:59 (JST)


●Selection process

Document screening
# Takes around one week before results are returned.

Pre-interview task screening
# The task will be announced to those who pass the document screening.

Interview (generally once)
# Skype interview for remote applicants

Acceptance notice (Late June)


●Themes


[Machine Learning / Mathematics Fields]

Applications

  • Chainer development
  • Image recognition
  • Video analysis
  • Content generation (Generation of images, videos, sounds, etc.)
  • Natural language processing
  • Speech recognition
  • Anomaly detection
  • IoT
  • Data compression
  • Robotics (Robot arms, bipedal walking, self-driving cars, path planning)
  • Genomics, epigenomics, proteomics
  • Deep Learning on embedded systems
  • LSI design optimization


Research

  • Distributed algorithms, distributed deep learning
  • Reinforcement learning
  • Optimization
  • Deep generative models
  • Model compression
  • Neural network quantization
  • Machine learning with limited labels (One-shot learning, Weakly supervised learning, Semi-supervised learning, Meta learning)
  • Machine learning using simulators
  • Interpretability in machine learning
  • Differential privacy
  • Emergence of communication or collaboration


[Front-end or Back-end Development]

  • Chainer development
  • SensorBee
  • PaintsChainer
  • Stream processing
  • Tools development
  • Web development
  • Networking
  • High-performance computing
  • 3DCG
  • Unity development
  • AR or VR


[Chip Development]

  • FPGA design