Amazon’s new Deep Learning AMI, Mapbox’s acquisition of Fitness AR, and a new Home.me tool in today’s data science news.
Amazon Web Services is now offering the AWS Deep Learning AMI for Microsoft Windows Server 2012 R2 and 2016. The new AMIs (Amazon Machine Images) contain the pre-built packages, libraries, and frameworks needed to start building AI systems using deep learning on Microsoft Windows. They include popular deep learning frameworks such as Apache MXNet, Caffe, and TensorFlow, as well as packages that ease integration with AWS, including launch configuration tools and many popular AWS libraries and tools. The AMIs come prepackaged with Nvidia CUDA 9, cuDNN 7, and Nvidia 385.54 drivers, and contain the Anaconda platform, which supports Python 2.7 and 3.5. Amazon Web Services said the AWS Deep Learning AMIs for Microsoft Windows are provided at no additional cost beyond the Amazon EC2 instance hours used, and are available in all public regions. The AMIs can be launched directly from the AWS EC2 console or the AWS Marketplace; the EC2 Windows user guide offers step-by-step instructions for launching an instance, and the documentation has more resources for Windows.
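For readers who prefer the API to the console, the launch parameters for such an AMI can also be assembled programmatically, for example with boto3. The sketch below is illustrative only: the AMI ID and instance type are placeholders, not values from the announcement, so look up the actual Windows Deep Learning AMI ID for your region in the EC2 console or Marketplace before launching.

```python
def build_launch_params(ami_id: str, instance_type: str = "p3.2xlarge") -> dict:
    """Assemble keyword arguments for EC2's run_instances call.

    ami_id        -- placeholder; substitute the real Deep Learning AMI ID
                     for your region (e.g. from the AWS Marketplace listing)
    instance_type -- a GPU instance type is assumed here for deep learning
    """
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
    }

# Build the parameters with a placeholder AMI ID.
params = build_launch_params("ami-0123456789abcdef0")

# With boto3 installed and AWS credentials configured, the launch itself
# would look like this (left commented out, since it incurs EC2 charges):
# import boto3
# ec2 = boto3.client("ec2", region_name="us-east-1")
# ec2.run_instances(**params)
```

Billing is per instance hour, so remember to stop or terminate the instance when you are done experimenting.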
Open-source mapping platform Mapbox has acquired activity-tracking app Fitness AR, which lets users view runs, hikes, or cycling routes from Strava superimposed on a 3D map of the terrain. The announcement was made by Mapbox VP Paul Veugen in a Medium post. Mapbox will continue to deliver updates to the app, which drops its $2.99 price and goes free in the App Store starting today. Fitness AR was among the first ARKit-enabled apps and was featured by Apple early on; it uses Mapbox’s Unity Maps SDK to visualize the terrain along each route. Fitness AR co-founders Adam Debreczeni and Eric Florenzano will join Mapbox as part of the acquisition to work on AR technology in verticals including “travel, weather, fitness, sports and gaming,” according to the company.
Charles Wong and Aravind Kandiah, both students at the Singapore University of Technology and Design, have built Home.me, a tool that turns 2D drawings into 3D renderings. It uses computer vision to capture a hand-drawn or printed floorplan and convert it into a 3D rendering. Home.me asks for your location and a few more details about the building you are trying to render, and can also estimate the square footage and price of what you’re drawing. Once the tool has parsed the floorplan and rendered it, the next step is to visualize the result in augmented reality, using the floorplan as its anchor. The Home.me team said they considered using deep learning but ran out of time; instead, like many other computer vision projects, they built Home.me in Python, using the popular OpenCV library to power the vision features. The tool exports its 3D renderings as Unity models, Wong and Kandiah said, adding that they plan to keep working on the project despite their current focus on academics.