6th Dec '17 – Headlines

In Data Science, Analytics, Machine Learning, AI and more

PyTorch v0.3.0, IBM’s Power Systems servers, Core ML support in TensorFlow Lite, Microsoft using AMD’s EPYC processors, and Google’s new machine learning services for text and video among today’s top data science news.

PyTorch removes Stochastic functions

PyTorch 0.3.0 released with performance improvements, ONNX/CUDA 9/cuDNN 7 support, and bug fixes

PyTorch has released version 0.3.0 with several performance improvements, new layers, the ability to ship models to other frameworks via ONNX, CUDA 9 and cuDNN v7 support, and “lots of bug fixes.” Among the most important changes, PyTorch has removed stochastic functions, i.e. Variable.reinforce(), because of their limited functionality and broad performance implications. “The motivation for stochastic functions was to avoid book-keeping of sampled values. In practice, users were still book-keeping in their code for various reasons. We constructed an alternative, equally effective API, but did not have a reasonable deprecation path to the new API. Hence this removal is a breaking change,” the PyTorch team said, adding that it has introduced the torch.distributions package to replace stochastic functions. Among the other changes, in v0.3.0 some loss functions can compute per-sample losses in a mini-batch, and more loss functions will be covered in the next release. There is also a built-in profiler in the autograd engine that works for both CPU and CUDA models. In addition to the API changes, PyTorch 0.3.0 brings a big reduction in framework overhead and 4x to 256x faster Softmax/LogSoftmax, along with new tensor features. PyTorch models that are ConvNet-like and RNN-like (static graphs) can now be exported to the ONNX format, a common model interchange format that can be executed in Caffe2, Core ML, CNTK, MXNet, and TensorFlow.
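As a concrete illustration of the replacement pattern (a minimal sketch, not taken from the release notes — the logits and reward values here are made up), sampling through torch.distributions and building a REINFORCE-style surrogate loss from log_prob:

```python
import torch
from torch.distributions import Categorical

# Policy logits over 4 actions (made-up values, just for illustration).
logits = torch.zeros(4, requires_grad=True)
dist = Categorical(logits=logits)

action = dist.sample()                   # replaces the removed Variable.reinforce() pattern
reward = 1.0                             # stand-in reward signal from the environment
loss = -dist.log_prob(action) * reward   # REINFORCE surrogate loss
loss.backward()                          # gradients flow into the policy logits
```

The snippet is written against the current-style PyTorch API; in 0.3 itself tensors would be wrapped in Variable, but the sampling/log_prob pattern is the same.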

AMD processors coming to Azure machines

Microsoft Azure is first global cloud provider to deploy AMD EPYC processors

Microsoft is the first global cloud provider to use AMD’s EPYC platform to power its data centers. In an official announcement, Microsoft said it has worked closely with AMD to develop the next generation of storage-optimized VMs, called the Lv2-Series, powered by AMD’s EPYC processors. The Lv2-Series is designed to support customers with demanding workloads like MongoDB, Cassandra, and Cloudera that are storage intensive and demand high levels of I/O. Lv2-Series VMs use the AMD EPYC 7551 processor, featuring a core frequency of 2.2GHz and a maximum single-core turbo frequency of 3.0GHz. Lv2-Series VMs will come in sizes ranging up to 64 vCPUs and 15TB of local resource disk.

IBM’s Power Systems servers speed up deep learning training by 4x

Power System AC922: IBM takes deep learning to next level with first Power9-based systems

In its quest to be the AI-workload leader for data centers, IBM unveiled its first Power9 server, the Power System AC922, at the AI Summit in New York. It runs a version of the Power9 chip tuned for Linux, with the four-way multithreading variant SMT4. Power9 chips with SMT4 can offer up to 24 cores, though the chips in the AC922 top out at 22 cores. The fastest Power9 in the AC922 runs at 3.3GHz. The air-cooled AC922 model 8335-GTG, set for release in mid-December, as well as two other models (one air-cooled and one water-cooled) scheduled to ship in the second quarter of next year, offer two Power9 chips each and run Red Hat and Ubuntu Linux. In 2018, IBM plans to release servers with a version of the Power9 tuned for AIX and IBM i, with SMT8 eight-way multithreading and PowerVM virtualization, topping out at 12 cores but likely running at faster clock speeds. The Power9 family is the first processor line to support a range of new I/O technologies, including PCI-Express 4.0 and NVLink 2.0, as well as OpenCAPI. IBM claims that the Power Systems servers can make the training of deep learning frameworks four times faster. The U.S. Department of Energy’s Summit and Sierra supercomputers, at Oak Ridge National Laboratory and Lawrence Livermore National Laboratory, respectively, are also based on Power9.

AI perfects imperfect-information game
Study on AI’s win in heads-up no-limit Texas hold’em poker wins Best Paper award at NIPS 2017

A detailed paper on how the AI Libratus defeated the best human players at heads-up no-limit Texas hold’em poker earlier this year has won a Best Paper award at NIPS 2017. The paper delves deep into the differences between AI for imperfect-information games and for perfect-information games such as chess and Go, and expounds the ideas used to defeat top humans in heads-up no-limit Texas hold’em. Earlier this year, in January, the artificial intelligence system Libratus, developed by a team at Carnegie Mellon University, beat four professional poker players. The complete paper is available on arXiv.
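The paper’s contribution concerns subgame solving in imperfect-information games; as general background (a minimal sketch of my own, not code from the paper), regret matching is the strategy-update rule at the heart of the counterfactual regret minimization (CFR) family of algorithms on which poker bots like Libratus build their blueprint strategies:

```python
def regret_matching(regrets):
    """Map cumulative regrets to a mixed strategy (CFR's core update rule).

    Actions with positive cumulative regret are played in proportion to that
    regret; if no action has positive regret, play uniformly at random.
    """
    positives = [max(r, 0.0) for r in regrets]
    total = sum(positives)
    n = len(regrets)
    if total <= 0:
        return [1.0 / n] * n          # no positive regret: uniform strategy
    return [p / total for p in positives]
```

For example, regret_matching([2.0, -1.0, 3.0]) returns [0.4, 0.0, 0.6]: the negative-regret action is dropped and the remaining regrets are normalized into a probability distribution.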

No more “versus” between Core ML and Tensorflow Lite

Google announces Apple’s Core ML support in TensorFlow Lite

In November, Google announced the developer preview of TensorFlow Lite. Now, Google has collaborated with Apple to add support for Core ML in TensorFlow Lite. With this announcement, iOS developers can leverage the strengths of Core ML for deploying TensorFlow models. In addition, TensorFlow Lite will continue to support cross-platform deployment, including iOS, through the TensorFlow Lite format (.tflite) as described in the original announcement. Support for Core ML is provided through a tool that takes a TensorFlow model and converts it to the Core ML Model Format (.mlmodel). For more information, users can check out the TensorFlow Lite documentation pages and the Core ML converter; a pip-installable package is available on PyPI.
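The converter can be invoked roughly as follows — a sketch assuming the tfcoreml pip package is installed, with hypothetical file paths and tensor names as placeholders:

```python
def convert_frozen_graph_to_coreml(tf_model_path, mlmodel_path, output_names):
    """Convert a frozen TensorFlow graph (.pb) to a Core ML model (.mlmodel).

    Thin wrapper around tfcoreml.convert; requires `pip install tfcoreml`
    and the Core ML tooling, which is only available on macOS.
    """
    import tfcoreml  # imported lazily so the sketch stays self-contained
    return tfcoreml.convert(
        tf_model_path=tf_model_path,        # path to the frozen TensorFlow graph
        mlmodel_path=mlmodel_path,          # where to write the .mlmodel file
        output_feature_names=output_names,  # e.g. ['softmax:0'] (hypothetical name)
    )

# Hypothetical usage:
# convert_frozen_graph_to_coreml('model.pb', 'model.mlmodel', ['softmax:0'])
```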

Google launches new machine learning services for analyzing video and text content

Google announces Cloud Video Intelligence and Cloud Natural Language Content Classification are now generally available

Google has announced the general availability of two new machine learning services: Cloud Video Intelligence and Cloud Natural Language Content Classification. Cloud Video Intelligence is a machine learning application programming interface designed to analyze video content, while Cloud Natural Language Content Classification is an API that classifies content into more than 700 different categories. Google Cloud Video Intelligence was launched in beta in March this year, and has since been fine-tuned for greater accuracy and deeper analysis. “We’ve been working closely with our beta users to improve the model’s accuracy and discover new ways to index, search, recommend and moderate video content. Cloud Video Intelligence is now capable of deeper analysis of your videos — everything from shot change detection, to content moderation, to the detection of 20,000 labels,” Google said. Sample code is available on GitHub. Google’s Content Classification for the Cloud Natural Language service, on the other hand, is designed for text-based content. Announced in September, its main job is to read through text and categorize it appropriately, sorting documents into categories such as arts and entertainment, hobbies and leisure, law and government, news, and many more.
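For readers who want to try the two services, a rough sketch using the Google Cloud Python client libraries (google-cloud-videointelligence and google-cloud-language). The function names are my own, and running either call requires a GCP project with application-default credentials configured:

```python
def label_video(gcs_uri):
    """Run label detection on a video stored in Google Cloud Storage."""
    from google.cloud import videointelligence
    client = videointelligence.VideoIntelligenceServiceClient()
    features = [videointelligence.enums.Feature.LABEL_DETECTION]
    operation = client.annotate_video(input_uri=gcs_uri, features=features)
    return operation.result(timeout=300)  # annotation is a long-running operation

def classify_text(text):
    """Classify a text document into the 700+ content categories."""
    from google.cloud import language
    client = language.LanguageServiceClient()
    document = language.types.Document(
        content=text,
        type=language.enums.Document.Type.PLAIN_TEXT,
    )
    response = client.classify_text(document)
    return [(c.name, c.confidence) for c in response.categories]
```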

