Posts on Oct 2018

Preferred Networks and PFDeNA launch joint research project to develop a deep learning-based system to detect 14 types of cancers with a small amount of blood

Aim to bring to market by 2021 to extend healthy life expectancy with early cancer detection

Oct. 29, 2018, Tokyo Japan – Preferred Networks, Inc. (PFN) and PFDeNA Inc. (PFDeNA) will start research and development to create a blood test system that utilizes deep learning technology to detect 14 types of cancers*1 in their early stages.

In this R&D initiative, PFN and PFDeNA will use blood samples (DNA repository samples) and clinical information, both collected by the National Cancer Center Japan (NCC) for research purposes with donor consent. PFDeNA will measure the expression levels of ExRNA*2 in the DNA repository samples by using a next-generation sequencer*3 in a manner that does not identify individuals. PFN will apply deep learning technology to learn from, evaluate, and analyze the measurements together with the clinical data. The aim is to put into practical use a system that can accurately determine the presence or absence, and the type, of cancer based on ExRNA expression levels in the blood.
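The classification task described above, mapping measured ExRNA expression levels to the presence and type of cancer, can be sketched in highly simplified form as a multi-class classifier over expression vectors. Everything below is a hypothetical illustration on synthetic data; the release does not disclose the actual model architecture or data, and a simple softmax classifier stands in for whatever deep model the real system uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 200 blood samples, each with 50 synthetic
# "miRNA expression levels", labeled with one of 3 classes
# (e.g. healthy / cancer type A / cancer type B). Purely illustrative.
n_samples, n_features, n_classes = 200, 50, 3
X = rng.normal(size=(n_samples, n_features))
true_W = rng.normal(size=(n_features, n_classes))
y = np.argmax(X @ true_W, axis=1)  # noiseless synthetic labels

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Softmax (multinomial logistic) regression trained by gradient descent,
# a stand-in for the deep learning model mentioned in the release.
W = np.zeros((n_features, n_classes))
Y_onehot = np.eye(n_classes)[y]
for _ in range(1000):
    P = softmax(X @ W)
    W -= 0.5 * X.T @ (P - Y_onehot) / n_samples

train_accuracy = float((np.argmax(X @ W, axis=1) == y).mean())
```

In the real system, the inputs would be sequencer-measured expression levels and the labels would come from NCC clinical records; the training loop and evaluation protocol would be correspondingly more involved.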


Social background

Cancer is the leading cause of death among Japanese people, with about one in two developing cancer in their lifetime. More than 370,000 Japanese die from cancer each year, and the number continues to rise. This amounts to one out of every 3.6 deaths being caused by cancer*.

Even though it is critical to detect cancer at an early stage, screening rates for various types of cancers remain at roughly 30%, one of the lowest among developed countries. Each type of cancer has its own screening methods and requires different areas and organs of the body to be tested, and the level of accuracy differs from one test to another. The burden of taking these tests needs to be reduced, both physically and financially, in order to improve screening rates.

Against this backdrop, many studies have recently been reported on the gene expression of ExRNAs, which include miRNAs*4, bringing to light miRNA expression patterns that are unique biomarkers of cancer in each organ. Because the types and amounts of miRNAs expressed in bodily fluids change once a person develops cancer, researchers have high expectations that cancers will become easier to diagnose using easily collectible bodily fluids, such as blood.


Going forward

After PMDA’s*5 review and approval, PFN and PFDeNA aim to develop the results of this research into a business by 2021 and promote its widespread use in Japan.

The high-precision, low-impact screening system will require only a small amount of blood to detect 14 types of cancers in their early stages and is expected to become a common cancer test in the future. Through early cancer detection, PFN and PFDeNA will contribute to efforts to decrease the mortality rate, reduce medical costs, extend healthy life expectancy, and increase cancer screening rates in Japan.


*1 The 14 types of cancers covered in this research are stomach cancer, colon cancer, esophageal cancer, pancreatic cancer, liver cancer, bile duct cancer, lung cancer, breast cancer, ovarian cancer, cervical cancer, uterine cancer, prostate cancer, bladder cancer, and kidney cancer.

*2 ExRNA is RNA present in the blood and other bodily fluids; in this research it mainly refers to miRNA (microRNA). miRNA helps regulate a variety of biological activities and is expected to be used as a diagnostic biomarker.

*3 A next-generation sequencer is a piece of equipment used to sequence the base pairs of human genes in parallel at high speed.

*4 miRNA is a ribonucleic acid that is about 20 bases long and plays a role in regulating gene expression.

*5 PMDA is an acronym for the Pharmaceuticals and Medical Devices Agency of Japan, which is an organization that conducts the scientific review for quality, efficacy, and safety of pharmaceuticals and medical equipment.

*Source: “Summary of Vital Statistics for 2017” (Ministry of Health, Labour and Welfare)

Preferred Networks releases version 5 of the open source deep learning framework Chainer and the general-purpose array calculation library CuPy

Preferred Networks, Inc. (PFN, President and CEO: Toru Nishikawa) has released Chainer(TM) v5 and CuPy(TM) v5, major updates of PFN’s open source deep learning framework and general-purpose array calculation library, respectively.

In this major upgrade, which comes six months after the previous one, Chainer has become easier to use by integrating ChainerMN, which had previously been provided as a separate distributed deep learning package for Chainer. Most code written for previous versions will run as-is on v5.


The main features of Chainer v5 and CuPy v5 are:

  • Integrated with the ChainerMN distributed deep learning package

・With ChainerMN incorporated in Chainer, fast distributed deep learning on multiple GPUs can be conducted more easily.

  • Support for the NVIDIA(R) DALI data augmentation library

・Chainer v5 performs faster data preprocessing by decoding and resizing of JPEG images on GPUs.

  • Support for FP16

・Changing to half-precision floating-point (FP16) format is possible with minimal code changes.

・Reduced memory consumption, which allows larger batch sizes.

・Further speed increases with the use of NVIDIA(R) Volta GPU Tensor Cores.

  • Latest Intel(R) Architecture compatibility

・Chainer v5 supports version 2 of the Chainer Backend for Intel(R) Architecture (formerly iDeep, which was added in Chainer v4) for faster training and inference on Intel(R) processors.

  • High-speed computing and memory saving for static graphs

・Chainer v5 optimizes computation and memory usage by caching static graphs that do not change throughout training. This speeds up training by 20-60%.

  • Enhanced interoperability with Anaconda Numba and PyTorch, enabling mutual exchange of GPU array data

・CuPy arrays can now be passed directly to functions JIT-compiled by Anaconda Numba.

・DLPack: array data can be exchanged with PyTorch and other frameworks.

  • CuPy basic operations are 50% faster

・Performance of basic operations such as memory allocation and array initialization has improved.
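Among the features listed above, the memory effect of the FP16 support is easy to illustrate. The sketch below uses NumPy on the CPU; CuPy deliberately mirrors the NumPy API, so the same one-line cast applies to GPU arrays. The array shapes are hypothetical.

```python
import numpy as np

# A hypothetical batch of activations: 256 samples x 1024 features.
batch_fp32 = np.ones((256, 1024), dtype=np.float32)

# The kind of minimal code change the release refers to: one dtype cast.
batch_fp16 = batch_fp32.astype(np.float16)

# FP16 stores 2 bytes per element instead of 4, so memory use halves,
# which is what makes larger batch sizes possible.
assert batch_fp16.nbytes * 2 == batch_fp32.nbytes

# Caveat: FP16 has roughly 3 significant decimal digits and overflows
# above 65504, so FP16 training typically also needs loss scaling.
```

On NVIDIA(R) Volta GPUs, the same half-precision arrays additionally let matrix multiplications run on Tensor Cores, which is where the further speed increases come from.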


Chainer and CuPy have incorporated a number of development results from external contributors. PFN will continue to quickly adopt the results of the latest deep learning research and promote the development and popularization of Chainer and CuPy in collaboration with supporting companies and the OSS community.


◆ About the Chainer(TM) Open Source Deep Learning Framework

Chainer is a Python-based deep learning framework developed and provided by PFN. Its “Define-by-Run” approach, unique features, and strong performance make it easy and intuitive to design complex neural networks. Since it was open-sourced in June 2015, Chainer has become one of the most popular frameworks, attracting not only the academic community but also many industrial users who need a flexible framework to harness the power of deep learning in their research and real-world applications.

Chainer quickly incorporates the results of the latest deep learning research. With additional packages such as ChainerRL (reinforcement learning), ChainerCV (computer vision), and Chainer Chemistry (a deep learning library for chemistry and biology), and through the support of Chainer development partner companies, PFN aims to promote the most advanced research and development activities of researchers and practitioners in each field.

Preferred Networks unveils a personal robot system at CEATEC Japan 2018, exhibiting fully-autonomous tidying-up robots

Oct. 15, 2018, Tokyo Japan – Preferred Networks, Inc. (PFN, Headquarters: Chiyoda-ku, Tokyo, President and CEO: Toru Nishikawa) will unveil a fully-autonomous tidying-up robot system, which is currently under development, at the CEATEC Japan 2018 exhibition held in Makuhari Messe near Tokyo. A technical demonstration of the system will be given at the event.

PFN is developing technology to create a society where robots can actively support our daily living activities. Unlike in controlled and regulated environments like factories, robots in the home need to respond flexibly to dynamic and complex situations and communicate naturally with humans.

At its exhibition booth (A060), PFN will demonstrate the new robot system using HSRs (Human Support Robots) developed by Toyota Motor Corporation and showcase their ability to keep a cluttered room neat and tidy, something that has been difficult to achieve with conventional technologies based on object recognition and robot control. The deep learning-based robots can recognize scattered household items such as clothes, toys, and stationery, then grasp them and place them in their designated locations. In the demonstration, PFN will also show that these tidying-up robots can be controlled intuitively through verbal and gestural instructions.

For technical details, please visit the website.


The fully-autonomous tidying-up robot system has been awarded the Semi-Grand Prix in the Industry/Market category at CEATEC Award 2018, which recognizes innovative technologies, products, and services from among a large number of exhibits at CEATEC JAPAN 2018.


  • PFN exhibition booth

・Dates/times: 10:00–17:00, Oct. 16–19, 2018

・Location: Booth A060, Total Solutions Area, International Exhibition Hall 2

・Exhibit: Technical demonstration of the personal robot system, a “fully-autonomous tidying-up robot system” (first public exhibition)


In addition, PFN President and CEO Toru Nishikawa will make a keynote speech entitled “Robots for Everyone” on the opening day of the CEATEC exhibition. As well as introducing the outlook on future technologies, he will explain how PFN is applying cutting-edge technologies of machine learning, deep learning, and robotics to solve real-world problems.

  • CEATEC Keynote

・Date/time: 12:30–13:15, Tuesday, Oct. 16, 2018

・Location: Convention Hall, International Conference Hall, Makuhari Messe

・Speaker: Preferred Networks President and CEO Toru Nishikawa

・Title/outline: “Robots for Everyone”
The possibilities of robots are rapidly expanding thanks to the advancement of machine learning technology. Fusing machine learning technology with robotics is essential for making robots that can flexibly respond to unexpected situations and execute various tasks like a human. Soon, we will begin to see an increasing number of robots helping to perform tasks in many places, working alongside humans. As well as introducing the current technology and PFN’s new initiative, President Nishikawa will provide insight into how we can leverage technology in the new era of these kinds of robots and the outlook on future technologies.

Preferred Networks hired Professor Takeo Igarashi of The University of Tokyo as a Technical Advisor

On August 1st, 2018, Preferred Networks, Inc. (PFN, HQ: Chiyoda-ku, Tokyo, President & CEO: Toru Nishikawa) hired Takeo Igarashi (Professor at The University of Tokyo) as a technical advisor.

Professor Igarashi is a pioneer in human-computer interaction (HCI), user interface, and computer graphics research. He has published many research papers at international conferences, notably on Teddy, a ground-breaking technique for creating 3D models from 2D sketches. Currently, he is the project leader of the JST CREST research project “HCI for Machine Learning”, which aims to make machine learning techniques more accessible and easier to use.

In this role as a technical advisor, Professor Igarashi will provide technical advice and guidance on HCI and HRI (Human-Robot Interaction) research at PFN, with a view to accelerating the development and adoption of PFN’s technology.

For more information, please refer to the research blog.

Prof. Takeo Igarashi

  • Biography

Takeo Igarashi is a Professor at the Computer Science Department of The University of Tokyo. He received a Ph.D. from the Department of Information Engineering at The University of Tokyo in 2000 and then worked as a postdoctoral research associate at Brown University (2000–2002). He joined the University of Tokyo as an Assistant Professor in 2002 and became a Full Professor in 2011. He also served as director of the JST ERATO Igarashi Design Interface project (2007–2013). His research interests are user interfaces and interactive computer graphics. He is known for the development of a sketch-based 3D modeling system (Teddy) and a performance-driven animation authoring system (MovingSketch). He has received several awards, including the IBM Science Prize, the JSPS Prize, the ACM SIGGRAPH 2006 Significant New Researcher Award, and the Katayanagi Prize in Computer Science. He served as conference co-chair for ACM UIST 2016, program co-chair for ACM UIST 2013, technical papers program chair for ACM SIGGRAPH Asia 2018, associate editor for ACM Transactions on Graphics, and program committee member for various international conferences including ACM CHI, UIST, and SIGGRAPH.

Preferred Networks releases deep learning-based, high-precision, visual inspection software


The software will make it possible to construct an inspection system quickly and inexpensively with minimal training datasets


Oct. 11, 2018, Tokyo Japan – Preferred Networks, Inc. (PFN, Headquarters: Chiyoda-ku, Tokyo, President and CEO: Toru Nishikawa) has developed Preferred Networks Visual Inspection, high-precision visual inspection software based on deep learning technology. PFN will start licensing the software to partner companies in December 2018 and will announce the new software at a new product seminar (N3-5) on Thursday, Oct. 18, during the CEATEC Japan 2018 exhibition held in Makuhari Messe near Tokyo.

The use of machine learning and deep learning technologies is spreading rapidly in many areas, including the manufacturing floor. However, existing visual inspection systems based on deep learning require as many as several thousand images for training, as well as engineers to annotate this considerable number of images for the training process. Poorly explained inspection results are another long-standing issue.

In order to solve these problems, PFN has developed Preferred Networks Visual Inspection, utilizing the technical know-how acquired through the development of the deep learning framework Chainer(TM) and through applications of deep learning to its main business domains – transportation systems, manufacturing, and bio-healthcare.

  • The main features of Preferred Networks Visual Inspection:
  1. An inspection line can be set up with a small amount of training data (as few as 100 images of normal products and 20 images of defective products)
  2. Plastic, metal, cloth, food, and other materials with various shapes can be handled
  3. Results are well-explained through visualized anomalies such as scratches, foreign objects, and stains
  4. Training is made easy even for non-engineers with intuitive user interfaces


Preferred Networks Visual Inspection consists of a training support tool and CPU-based defect detection software. Depending on requirements, licensed partners will install a combination of system components, including training workstations, inspection PCs, imaging equipment, and UIs for visualization and operation. GPU-based, fast detection software is also available as an option.

The new product will enable users to build an easy-to-use and highly reliable automatic inspection system at low cost and in a short period of time. It can be introduced with ease to manufacturing lines that have been difficult to automate with existing products due to their high costs and inflexibility. In addition, defects are visualized so that the results can be easily explained, which is useful for passing down inspection skills and sharing knowledge within a company.

Comparison of Preferred Networks Visual Inspection and existing solutions

  • New product announcement

PFN will announce Preferred Networks Visual Inspection at a New Technologies and Products Seminar (N3-5) entitled “Visual inspection system and picking robot solution based on deep learning” at CEATEC Japan in Makuhari Messe.


PFN will continue to promote practical applications of machine learning and deep learning technologies in the real world.