
3 Key Machine Learning Trends To Watch Out For In 2018


2017 witnessed the meteoric rise of Artificial Intelligence and Machine Learning. From large platform vendors to early-stage startups, AI and ML became key focus areas. VCs poured billions of dollars into AI-related startups. Platform companies increased their R&D budgets to accelerate research in the AI and ML domains. The number of online courses offering self-paced learning soared. Finally, hardly an industry vertical remains untouched by AI.


Though it has become a cliché, ‘democratizing machine learning’ took off in 2017. Amazon, Apple, IBM, Google, Facebook, and Microsoft are competing with one another to make ML accessible to developers. The number of available tools and frameworks doubled in just one year. 2017 also saw the beginning of AI being infused into business applications.

With the hype at its peak, what’s in store for AI and ML in 2018?

Here are three key trends for 2018 that will take AI and ML to the next level.

DevOps for Data Science

A data scientist is often defined as someone who is better at statistics than the average programmer and a better programmer than the average statistician. Data scientists focus squarely on finding hidden patterns in data sets. They apply proven statistical models to modern data sets to solve business problems.

Though data scientists work in Python, R, and Julia to create machine learning models, they are rarely equipped to deal with the infrastructure and environments required for developing and deploying those models. During the development phase, ML models move back and forth between local development environments and cloud-based training environments, where GPU-backed VMs provide scale. Data scientists need a simple mechanism for this round trip between the local and cloud environments.
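
To make that round trip concrete, here is a minimal sketch of handing a locally trained model off to cloud object storage with boto3 and pulling it back down; the bucket name and object key are hypothetical placeholders.

    import boto3
    import joblib
    from sklearn.linear_model import LogisticRegression

    # Train a toy model in the local development environment.
    model = LogisticRegression()
    model.fit([[0, 0], [1, 1], [0, 1], [1, 0]], [0, 1, 1, 0])  # placeholder data

    # Serialize the model and push it to cloud storage, where a GPU-backed
    # training VM can pick it up. "ml-artifacts" is a hypothetical bucket.
    joblib.dump(model, "model.joblib")
    s3 = boto3.client("s3")
    s3.upload_file("model.joblib", "ml-artifacts", "models/model-v1.joblib")

    # The return leg: pull the artifact back into the local environment.
    s3.download_file("ml-artifacts", "models/model-v1.joblib", "model.joblib")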

Ultimately, a trained model is yet another piece of code that needs to be treated like any other mission-critical application. It must be deployed in a secure, available, scalable, and reliable environment.

During training and inference, data scientists need quite a bit of infrastructure plumbing. This plumbing includes setting up the right development environment, packaging code as container images, scaling containers during training and inference, versioning existing models, configuring a pipeline to seamlessly upgrade models to newer versions, and many other typical DevOps tasks.
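
To illustrate just one of those tasks, here is a minimal sketch of model versioning, assuming a simple convention of version-stamped artifacts with a JSON metadata sidecar; a real pipeline would wire this into a model registry and CI/CD tooling.

    import json
    import time
    import joblib

    def publish_model(model, version, metrics):
        """Save a version-stamped artifact plus metadata so a deployment
        pipeline can promote a specific version or roll one back."""
        artifact = "model-v{}.joblib".format(version)
        joblib.dump(model, artifact)
        with open("model-v{}.json".format(version), "w") as f:
            json.dump({
                "version": version,
                "trained_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
                "metrics": metrics,  # e.g. validation accuracy
            }, f)
        return artifact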

As data science goes mainstream, DevOps for data science grows in importance. 2018 will witness mature, streamlined DevOps processes defined exclusively for data science.

Amazon SageMaker and Azure ML Workbench are early indicators of this trend. Both platforms focus on the DevOps aspects of data science.
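
As a hedged sketch of what this looks like in practice, the SageMaker Python SDK of that era let a data scientist launch managed training and deployment in a few calls; the container image, IAM role, and instance types below are placeholders, and parameter names differ across SDK versions.

    from sagemaker.estimator import Estimator

    # Placeholder training image and IAM role; both are account-specific.
    estimator = Estimator(
        image_name="123456789012.dkr.ecr.us-east-1.amazonaws.com/train:latest",
        role="arn:aws:iam::123456789012:role/SageMakerRole",
        train_instance_count=1,
        train_instance_type="ml.p2.xlarge",  # GPU-backed training
    )

    # SageMaker provisions the training cluster, runs the job, and tears
    # the infrastructure down afterwards.
    estimator.fit("s3://ml-artifacts/training-data")

    # A single call turns the trained model into a managed HTTPS endpoint.
    predictor = estimator.deploy(initial_instance_count=1,
                                 instance_type="ml.m4.xlarge")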

 

Inference at the Edge

Edge Computing takes compute closer to the applications. Each edge location mimics the public cloud by exposing a compatible set of services and endpoints that the applications can consume. It is all set to redefine enterprise infrastructure.

The edge computing layer exposes compute, storage, and network services to developers. Typically, edge computing runs on constrained infrastructure that may not be powerful enough to run VMs or containers. This is where serverless plays a crucial role in delivering compute services.

After virtualization and containerization, serverless is emerging as the next wave of compute services. Functions as a Service (FaaS), a serverless delivery model, attempts to simplify the developer experience by minimizing the operational overhead of deploying and managing code.
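
For readers new to FaaS, here is a minimal sketch of the model: every operational concern collapses into a single handler that the platform invokes on demand. The signature follows AWS Lambda's Python convention.

    import json

    def handler(event, context):
        """Entry point the FaaS platform calls on each invocation; there are
        no servers, containers, or scaling policies for the developer to manage."""
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": "Hello, " + name}),
        }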


Machine learning models that are fully trained in the cloud are deployed at the edge for inference. The heavy lifting takes place in the public cloud, while the optimized model runs at the edge. The inference model is exposed as a function deployed within the serverless compute environment.

If the edge computing layer runs on powerful hardware capable of running containers, the ML inference models are packaged and deployed as containers.

AWS DeepLens and Azure IoT Edge are examples of how inference models are deployed at the edge. AWS DeepLens runs ML models written in Python as Lambda functions. In Azure IoT Edge, ML modules are packaged as containers and pushed to the edge layer.
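
A hedged sketch of the pattern: the function below assumes a cloud-trained model has already been shipped to the device as a joblib artifact at a hypothetical path, and it exposes inference as a Lambda-style handler. The actual DeepLens and IoT Edge APIs differ in their details.

    import json
    import joblib

    # Load the cloud-trained, optimized model once at cold start so that
    # subsequent invocations pay only the prediction cost.
    MODEL = joblib.load("/opt/ml/model.joblib")  # hypothetical artifact path

    def handler(event, context):
        # The event payload carries the feature vector captured at the edge.
        features = [event["features"]]
        prediction = MODEL.predict(features)[0]
        return {"statusCode": 200,
                "body": json.dumps({"prediction": float(prediction)})}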

Machine learning will become the key driver for accelerating the adoption of edge computing.

AI for IT Operations   

Modern applications and infrastructure generate log data that is captured for indexing, searching, and analytics. The massive data sets obtained from hardware, operating systems, server software, and application software can be aggregated and correlated to find insights and patterns. When machine learning models are applied to these data sets, IT operations transform from reactive to predictive.
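
A minimal sketch of that shift, using scikit-learn's IsolationForest on synthetic stand-ins for aggregated log metrics; a production AIOps pipeline would train on real telemetry and stream new time windows through the detector.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Synthetic stand-in for aggregated log metrics: each row is a time
    # window, with columns for mean request latency (ms) and error count.
    rng = np.random.RandomState(42)
    normal = rng.normal(loc=[120, 2], scale=[15, 1], size=(500, 2))
    spikes = np.array([[480.0, 40.0], [520.0, 35.0]])  # simulated incidents
    windows = np.vstack([normal, spikes])

    # Flag the rarest ~1% of windows as anomalies worth an operator's look.
    detector = IsolationForest(contamination=0.01, random_state=42)
    labels = detector.fit_predict(windows)  # -1 marks anomalous windows

    print("anomalous windows:", np.where(labels == -1)[0])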

When the power of AI is applied to operations, it will redefine the way infrastructure is managed.

The application of ML and AI to IT operations and DevOps will deliver intelligence to organizations. It will help ops teams perform precise and accurate root cause analysis. Advanced models can help prevent IT disruptions and outages through predictive analytics. Intrusion detection can be augmented with ML for enhanced security. There are many scenarios where applying ML to IT will lead to intelligent operations.

Amazon Macie and Azure Log Analytics are early examples of AIOps. Apart from AWS and Azure, many startups are investing in AI-driven ops.
