Google’s DeepVariant, IBM’s DuHL machine learning algorithm, desktop compatibility for the Nvidia GPU Cloud, Google AutoML’s “child” AI NASNet, and a new tool for FPGA programming lead today’s trending stories in data science news.
Google’s DeepVariant to make sense out of your genome
Google has announced DeepVariant, a new deep neural network that calls genetic variants from next-generation DNA sequencing data. Released as open-source software, DeepVariant uses the latest deep learning techniques to build a more accurate picture of a person’s genome from sequencing data. It is available on GitHub.
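DeepVariant’s central trick is to recast variant calling as image classification: the read pileup around a candidate site is encoded as an image-like tensor and classified by a convolutional network. The sketch below illustrates only that encoding step, using a simplified single-channel scheme invented for this example; the real tool encodes base identity, base quality, strand, and more across multiple channels.

```python
import numpy as np

# Illustrative base-to-value mapping; DeepVariant's actual pileup images
# use a richer multi-channel encoding.
BASE_TO_VAL = {"A": 0.25, "C": 0.5, "G": 0.75, "T": 1.0}

def pileup_to_tensor(reads, width):
    """Encode a list of read strings, aligned to a window of `width`
    bases, as a (num_reads, width) float array a CNN could classify."""
    img = np.zeros((len(reads), width))
    for r, read in enumerate(reads):
        for c, base in enumerate(read[:width]):
            img[r, c] = BASE_TO_VAL.get(base, 0.0)  # gaps and N stay 0
    return img
```

Framing the problem this way lets standard image-classification networks, with all their recent accuracy gains, be applied directly to genomics data.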
Nvidia democratizes AI development
In a move that could make developing artificial intelligence models easier for hundreds of thousands of researchers worldwide, Nvidia has updated its GPU Cloud to support everyday desktops. In addition to the new desktop compatibility, the chipmaker has added support for two new deep learning frameworks. The first is the PaddlePaddle engine that Chinese search giant Baidu released last year, which lets developers implement certain models with far less code than some alternatives. The other is the 1.0 release of MXNet, the AI framework backed by Amazon’s cloud division.
IBM’s new algorithm for machine learning
In collaboration with EPFL researchers, IBM has created a new method for training machine learning algorithms on large data sets. The new algorithm, called Duality-gap based Heterogeneous Learning (DuHL), can push through 30GB of data every 60 seconds, a 10x improvement over previous methods. During preliminary testing, IBM used an Nvidia Quadro M4000 with 8GB of GDDR5 memory. With that modestly priced professional graphics card, IBM demonstrated that DuHL could train support vector machines over 10 times faster than a standard sequential approach.
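The idea behind DuHL is to use each training example’s contribution to the duality gap to decide which examples deserve space in the GPU’s limited memory: examples far from optimality are kept close to the accelerator, while already-satisfied ones stay in slower storage. The following is a rough numpy sketch of that selection rule for a linear SVM; the function names and the specific memory-budget interface are illustrative, not IBM’s implementation.

```python
import numpy as np

def duality_gap_scores(X, y, w, alpha, C=1.0):
    """Per-example duality-gap contribution for a linear SVM with hinge
    loss: C*max(0, 1 - y_i<x_i, w>) + alpha_i*(y_i<x_i, w> - 1).
    The score is zero for examples that satisfy the optimality
    conditions and grows with their distance from optimality."""
    margins = y * (X @ w)
    primal = C * np.maximum(0.0, 1.0 - margins)
    dual = alpha * (margins - 1.0)
    return primal + dual

def select_for_fast_memory(X, y, w, alpha, budget):
    """Pick the `budget` examples with the largest gap contributions;
    these are the ones worth keeping in GPU memory this round."""
    scores = duality_gap_scores(X, y, w, alpha)
    return np.argsort(scores)[::-1][:budget]
```

Re-scoring and re-selecting periodically as `w` and `alpha` change is what lets a small, fast memory hold only the data that currently matters to training.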
New tools for FPGA programming
New product from Falcon Computing lets software programmers design FPGA accelerators without any knowledge of FPGA
New startup Falcon Computing Solutions Inc. has developed automated compilation tools that focus on streamlining FPGA-based acceleration. Its principal product is Merlin, a compiler that provides push-button C/C++ programming to optimize FPGA implementations and works in a fully integrated fashion with Intel’s own development tools. “It’s a pure C/C++ flow that enables software programmers to design FPGA accelerators without any knowledge of FPGA,” said Jim Wu, director of consumer experience at Falcon Computing. “We want to put the tool in the hands of all software programmers.” The company is making the product available in a 14-day trial for use in the enterprise data center or in the cloud, and general availability is planned for the first quarter of 2018. Falcon Computing already has agreements with Amazon Web Services Inc. and Alibaba Cloud, and is working to bring the tool to other public cloud providers as well.
AutoML gives birth to NASNet
Google’s AutoML project, designed to make AI build other AIs, has now produced a computer vision system that vastly outperforms state-of-the-art models. The new project, NASNet, could improve how autonomous vehicles and next-generation AI robots ‘see.’ Dubbed AutoML’s “child” AI, NASNet recognizes objects — people, cars, traffic lights, handbags, backpacks, etc. — in video in real time. Google researchers acknowledge that NASNet could prove useful for a wide range of applications and have open-sourced the AI for inference on image classification and object detection. “We hope that the larger machine learning community will be able to build on these models to address multitudes of computer vision problems we have not yet imagined,” Google said in a blog post.
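The released NASNet models are TensorFlow checkpoints, and loading them is beyond a short snippet, but the final step of classification inference — turning the network’s raw logits into ranked labels — can be sketched generically. The label names below are illustrative, not NASNet’s actual class list.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def top_k_labels(logits, labels, k=3):
    """Return the k most likely (label, probability) pairs; this
    post-processing step is common to image classifiers in general."""
    probs = softmax(np.asarray(logits, dtype=float))
    order = np.argsort(probs)[::-1][:k]
    return [(labels[i], float(probs[i])) for i in order]
```

In a real pipeline the logits would come from running the open-sourced checkpoint on an input image; everything after that point looks like the function above.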