Is 2018 when the machines take over? Predictions for machine learning, the data center, and beyond

To learn more on this topic, join us February 6 for a webinar, “Machine Learning and AI in the Datacenter,” hosted by the Cloud Native Computing Foundation.

It’s hard to believe, but 2017 is almost over and 2018 is in sight. This year’s technology advances have largely been simmering under the surface — but look a bit more closely, and it’s all there, just waiting to be noticed.

Here are the seeds being sown in 2017 that you can expect to bloom in 2018 and beyond.

Machine Learning

Our co-founder, Boris Renski, also shared his own view of 2018 here.

Machine learning takes many different forms, but the important thing to understand about it is that it enables a program to react to a situation that was not explicitly anticipated by its developers.

It’s easy to think that robots and self-learning machines are the stuff of science fiction — unless you’ve been paying attention to the technical news.  Not a day goes by without at least a few stories about some advance in machine learning and other forms of artificial intelligence.  Companies and products based on it launch daily. Your smartphone increasingly uses it. So what does that mean for the future?

Although today’s machine learning algorithms have already surpassed anything we thought possible even a few years ago, it is still a nascent field.  The important thing happening right now is that machine learning has reached the point where it’s accessible to non-PhDs through toolkits such as TensorFlow and scikit-learn — and that is going to make all the difference over the next 18-24 months.
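To make the core idea concrete, here is a toy sketch in plain Python — not anything from TensorFlow or scikit-learn, just an illustration of what those toolkits package behind a two-line API: a nearest-neighbor classifier that labels an input it was never explicitly programmed to handle, generalizing from examples instead.

```python
# A toy 1-nearest-neighbor classifier: the program was never told how to
# label the query point; it generalizes from the examples it has seen.
def nearest_neighbor(training_data, query):
    """Return the label of the training point closest to `query`."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    closest_point, label = min(training_data, key=lambda item: distance(item[0], query))
    return label

# Points labeled "small" cluster near the origin; "large" points do not.
training = [
    ((1, 1), "small"), ((2, 1), "small"), ((1, 2), "small"),
    ((8, 9), "large"), ((9, 8), "large"), ((9, 9), "large"),
]

# (2, 2) appears nowhere in the training data, yet it gets a sensible label.
print(nearest_neighbor(training, (2, 2)))  # → small
```

In scikit-learn, the same idea is a `KNeighborsClassifier` with `fit()` and `predict()` calls — which is exactly why the field has opened up to non-specialists.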

Here’s what you can expect to see in the reasonably near future.

Hardware vendors jump into machine learning

Although machine learning generally works better and faster on Graphics Processing Units (GPUs) — the same chips used for cryptocurrency mining — one of the advances that’s made it accessible is the fact that software such as TensorFlow and scikit-learn can run on normal CPUs. But that doesn’t mean hardware vendors aren’t trying to take things to the next level.

These efforts run from Nvidia’s focus on GPUs to Intel’s Nervana Neural Network Processor (NNP) to Google’s Tensor Processing Unit (TPU). Google, Intel, and IBM are also working on quantum computers, which use a completely different architecture from traditional digital chips and are particularly well suited to machine learning tasks.  IBM has even announced that it will make a 20-qubit version of its quantum computer available through its cloud. It’s likely that 2018 will see these quantum computers reach the level of “quantum supremacy,” meaning that they can solve problems that are effectively impossible to solve on traditional hardware. That doesn’t mean they’ll be generally accessible the way machine learning is now — the technical and physical requirements are still quite complex — but they’ll be on their way.

Machine learning in the data center

Data center operations are already reaching a point where manually managing hardware and software is difficult, if not impossible. The solution has been DevOps — scripting operations to create “Infrastructure as Code” — providing a way to create verifiable, testable, repeatable operations. Look for this process to add machine learning to improve operational outcomes.
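As a hedged sketch of what that could look like, the toy monitor below (plain Python, invented metric values) learns a baseline from recent CPU readings and flags deviations — a simple statistical threshold rather than a full ML model, but the same kind of learned-from-data signal an ML-assisted operations pipeline might feed into an automated remediation playbook.

```python
from statistics import mean, stdev

def detect_anomalies(readings, window=5, threshold=3.0):
    """Flag readings more than `threshold` standard deviations away
    from the mean of the preceding `window` readings."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(readings[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies

# Simulated per-minute CPU utilization (%): steady traffic, then a spike.
cpu = [22, 24, 23, 25, 24, 23, 24, 91, 25, 24]
print(detect_anomalies(cpu))  # → [7] (the index of the spike)
```

A real deployment would replace the fixed threshold with a model trained on historical telemetry, but the workflow — learn normal, flag abnormal, trigger a scripted response — is the same.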

IoT bringing additional intelligence into operations

Machine learning is at its best when it has enough data to make intelligent decisions, so look for the flood of data coming from IoT devices to be put to work improving operations.  This applies both to consumer devices, which will improve understanding of and interaction with consumers, and to industrial devices, which will improve manufacturing operations.

Ethics and transparency

As we increasingly rely on machine learning for decisions that affect our lives, the fact that most people don’t know how those decisions are made — and have no way of finding out — can lead to major injustices. Think it’s not possible? Machine learning is already used for mortgage lending decisions, which, while important, aren’t life or death.  But it’s also used for things like criminal sentencing and parole decisions. And it’s still early.

One good illustration of this “tyranny of the algorithm” involves two people up for a promotion. One is a man, one is a woman. To prevent the appearance of bias, the company uses a machine learning algorithm to determine which candidate will be more successful in the new position. The algorithm chooses the man.  Why?  Because it has more examples of successful men in the role. But it doesn’t take into account that there are simply fewer women who have been promoted into it.
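This failure mode is easy to reproduce. The sketch below uses invented numbers to “train” a naive frequency-based predictor on historical promotions: because men simply appear more often in the history, raw success counts favor the male candidate — while the success *rate*, which corrects for how few women were ever promoted, tells the opposite story.

```python
from collections import Counter

# Invented historical data: (gender, was_successful_in_role)
history = [("man", True)] * 80 + [("man", False)] * 20 \
        + [("woman", True)] * 9 + [("woman", False)] * 1

successes = Counter(g for g, ok in history if ok)
totals = Counter(g for g, _ in history)

# A naive model scoring candidates by raw count of past successes
# favors men purely by volume of examples...
print("successes:", dict(successes))  # → {'man': 80, 'woman': 9}

# ...while the success rate per promotion tells the opposite story.
rates = {g: successes[g] / totals[g] for g in totals}
print("rates:", rates)  # → {'man': 0.8, 'woman': 0.9}
```

The bias isn’t in any single line of code — it’s baked into the training data, which is exactly why it’s so hard to spot from the outside.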

This kind of unintentional bias can be hard to spot, but companies and governments are going to have to begin looking at greater transparency as to how decisions are made.

The changing focus of data center infrastructures

All of this added intelligence is going to have massive effects on data center infrastructures.

For years now, the focus has been on virtualizing hardware — moving from physical servers to virtual ones, enabling a single physical host to serve as multiple “computers.”  The next step was cloud computing, in which workloads didn’t know or care where in the cloud they resided; they just specified what they needed, and the cloud provided it.  The rise of containers accelerated this trend; because containers package an application together with its dependencies, they are even easier to schedule across connected infrastructure using tools such as Kubernetes.

The natural progression from here is the de-emphasis on the cloud itself.  Workloads will run wherever needed, and whereas before you didn’t worry about where in the cloud that wound up being, now you won’t even worry about what cloud you’re using, and eventually, the architecture behind that cloud will become irrelevant to you as an end user.  All of this will be facilitated by changes in philosophy.

APIs make architecture irrelevant

We can’t call microservices new for 2018, but the march to decompose monolithic applications into multiple microservices will continue and accelerate in 2018 as developers and businesses try to gain the flexibility that this architecture provides. Multiple APIs will exist for many common features, and we’ll see “API brokers” that provide a common interface for similar functions.
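The “API broker” idea is essentially the adapter pattern at service scale. As a hedged sketch — the provider names and interfaces below are invented — a broker can expose one common interface over interchangeable backends, so callers never need to know which provider actually serves the request:

```python
# Hypothetical geocoding providers with different native interfaces.
class ProviderA:
    def lookup(self, address):
        return {"lat": 40.7, "lng": -74.0, "source": "A"}

class ProviderB:
    def geocode(self, *, query):
        return (40.7, -74.0)

# The broker presents one common interface; callers never see the backend.
class GeocodeBroker:
    def __init__(self, backend):
        self.backend = backend

    def geocode(self, address):
        if isinstance(self.backend, ProviderA):
            result = self.backend.lookup(address)
            return (result["lat"], result["lng"])
        return self.backend.geocode(query=address)

# Either backend yields the same answer through the same call.
for backend in (ProviderA(), ProviderB()):
    print(GeocodeBroker(backend).geocode("30 Rockefeller Plaza"))
```

Swapping providers — for price, latency, or availability — becomes a one-line change behind the broker rather than a rewrite of every caller.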

This reliance on APIs will mean that developers will worry less about actual architectures. After all, when you’re a developer making an API call, do you care what the server is running on?  Probably not.

The application might be running on a VM, or in containers, or even in a so-called serverless environment. As developers lean more heavily on composing applications out of APIs, they’ll reach the point where the architecture of the server is irrelevant to them.

That doesn’t mean that the providers of those APIs won’t have to worry about it, of course.

Multi-cloud infrastructures

Server application developers such as API providers will have to think about architecture, but increasingly they will host their applications in multi-cloud environments, where workloads run where it’s most efficient — and most cost-effective. Like their users, they will be building against APIs — in this case, cloud platform APIs — and functionality is all that will matter; the specific cloud will be irrelevant.

Intelligent cloud orchestration

In order to achieve this flexibility, application designers will need to be able to do more than simply spread their applications among multiple clouds. In 2018 look for the maturation of systems that enable application developers and operators to easily deploy workloads to the most advantageous cloud system.

All of this will become possible because of the ubiquity of open source systems and orchestrators such as Kubernetes. Amazon and other vendors that thrive on lock-in will hold out for a bit longer, but the tide will begin to turn, and even they will start to compete on other merits so that developers are more willing to include them as deployment options.

Again, this is also a place where machine learning and artificial intelligence will begin to make themselves known as a way to optimize workload placement.

Continuous Delivery becomes crucial as tech can’t keep up

Remember when you bought software and used it for years without doing an update?  Your kids won’t.

Even Microsoft has admitted that it’s impossible to keep up with advances in technology by doing specific releases of software.  Instead, new releases are pushed to Windows 10 machines on a regular basis.

Continuous Delivery (CD) will become the de facto standard for keeping software up to date as it becomes impossible to keep up with the rate of development in any other way.  As such, companies will learn to build workflows that take advantage of this new software without giving up human control over what’s going on in their production environment.

At a more tactical level, technologies to watch are:

  • Service meshes such as Istio, which abstract away many of the complexities of working with multiple services.
  • Serverless/event-driven programming, which reduces an API to its most basic form of call-response.
  • Policy agents such as the Open Policy Agent (OPA), which enable developers to easily control access to and behavior of their applications in a manageable, repeatable, and granular way.
  • Cloud service brokers such as the Open Service Broker (OSB) API, which provide a way for enterprises to curate and provide access to additional services their developers may need when working with the cloud.
  • Workflow management tools such as Spinnaker, which make it possible to create and manage repeatable workflows, particularly for the purposes of intelligent continuous delivery.
  • Identity services such as SPIFFE and SPIRE, which make it possible to uniquely identify workloads so that they can be given the proper access and workflow.

Beyond the data center

None of this happens in a vacuum, of course; in addition to these technical changes, we’ll also see the rise of social issues they create, such as privacy concerns, strain on human infrastructure when dealing with the accelerating rate of development, and perhaps most important, the potential for cyber-war.

But when it comes to indirect effects of the changes we’re talking about, perhaps the biggest is the question of what our reliance on fault-tolerant programming will create.  Will it lead to architectures that are essentially foolproof, or such an increased level of sloppiness that eventually, the entire house of cards will come crashing down?

Either outcome is possible; make sure you know which side you’re on.

Getting ready for 2018 and beyond

The important thing is to realize that, whether we like it or not, the world is changing — but we don’t have to be held hostage by it. Here at Mirantis we have big plans, and we’re looking forward to talking more about them in the new year!

 
