Why do organizations need to start taking hybrid cloud and multi-cloud into account?
The growth of public cloud services has created an environment in which enterprise IT departments are scrambling to keep up with their customers. The ease of consumption and the sheer number of resources available to enterprise users and developers make it hard for traditional IT departments to compete to provide services, and many would argue that they shouldn’t bother to try.
Cloud providers can deliver resources faster, with less red tape and fewer restrictions, using an on-demand, consumption-based model (OPEX vs. CAPEX). This model enables faster delivery times and supports more agile development methodologies, enabling companies to go to market with new ideas faster.
This is fantastic, and all of these points are laudable and necessary for companies to keep up in the modern multi-cloud world.
The challenge comes in when we look at what those traditional IT departments do beyond just “providing infrastructure resources”. The reality is that IT departments deliver a host of services to their customers that go far beyond provisioning.
IT departments ensure that reliance can be placed upon the data in a company’s systems.
The term “reliance” is generally only heard during the annual financial audit, where it is used to assess the likelihood that the data in a company’s systems has been tampered with. This is where your IT department usually helps to ensure both the reliability and standardization of your data.
To ensure the reliability of the data, the IT department will have had influence over the deployment of applications, including:
- Ensuring that the security controls necessary to protect data have been put in place
- Ensuring that the underlying systems and application designs provide high availability
- Ensuring that backups and other disaster recovery solutions are in place
- Providing technical support to help ensure suitable performance
- Determining the cost of the infrastructure and helping with strategies to prevent sprawl and achieve economies of scale
- Providing operational support for the underlying systems, such as operating system patching, database administration, and other seemingly unimportant tasks that don’t get much attention (well, until they are not done and things go wrong)
My point here, though, is not to argue for an IT department, but to discuss what is necessary to protect the digital assets of the organization in this new multi-cloud world, where users and developers are going to start using public cloud services, and in fact should.
The challenges can be grouped into a small number of areas that each contain a vast amount of complexity, and an equally large number of potential solutions. They are:
- Data management and ownership
- Compliance and legislative requirements
- Quality assurance, reliability, and consistency
- Access control and auditing
- Cost management and containment
So what do we need to do to enable developers to build applications rapidly and utilize agile mechanisms in a hybrid cloud environment while still catering to the need for appropriate controls to protect the users’ and organization’s data? Some areas to consider include:
- Visibility: Developers and business owners need to have a clear understanding of what resources they are consuming, where their data is located, and who is responsible for that data.
- Predictability: Business owners need to know that an application will function and perform in the same way regardless of where it’s deployed.
- Deployment controls: Operations teams need to have a clear understanding that the appropriate testing is being done before applications are pushed to production, and that developers are not making on-the-fly changes to applications, potentially compromising performance or security.
- Policy auditing: Operations teams, cost managers, and application owners need to know that they can set, enforce, and report on the application of necessary policies.
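To make the last of these concrete: policy auditing in a multi-cloud setting often comes down to checking deployment metadata against a set of declared rules and reporting the violations. The sketch below is a minimal, hypothetical illustration; the field names (`encrypted`, `owner`, `region`) and the policies themselves are invented for the example, not drawn from any particular tool.

```python
# Minimal policy-audit sketch. All record fields and policy rules
# here are hypothetical examples of the kinds of controls an
# operations team might want to set, enforce, and report on.

deployments = [
    {"name": "billing-api", "cloud": "aws", "region": "eu-west-1",
     "encrypted": True, "owner": "finance"},
    {"name": "dev-sandbox", "cloud": "gcp", "region": "us-central1",
     "encrypted": False, "owner": None},
]

policies = {
    "data_must_be_encrypted": lambda d: d["encrypted"],
    "owner_must_be_assigned": lambda d: d["owner"] is not None,
    "data_must_stay_in_eu":   lambda d: d["region"].startswith("eu-"),
}

def audit(deployments, policies):
    """Return (deployment, policy) pairs for every violation found."""
    return [(d["name"], name)
            for d in deployments
            for name, check in policies.items()
            if not check(d)]

for name, policy in audit(deployments, policies):
    print(f"VIOLATION: {name} fails {policy}")
```

The point of keeping the policies as data rather than scattering them through deployment scripts is exactly the reporting requirement above: the same rule set can be evaluated and reported on across every cloud the organization uses.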
It is highly unlikely that a single system will provide all of these controls all of the time, especially given the fast pace of innovation and changing market requirements. As a result, to achieve these controls, we are going to need a layered set of systems, each providing the best set of controls for its environment while still providing the required reporting and visibility.
Achieving this control requires a few core tools to tie all these components together. These tools make up the following layers:
- Common configuration storage model
- Common orchestration framework
- Common data store
- Data correlation and analysis engine
- Shared authentication and authorization framework
In other words, organizations need to use the best tools for the job at the deployment level, but those tools must center on a standardized core that holds the configuration metadata and provides the overall orchestration, in order to ensure the required level of performance, predictability, and reliability.
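One way to picture that common configuration layer is as a provider-neutral record that every deployment tool reads from and reports into. The sketch below is a hypothetical illustration of the idea, not a reference to any real product or schema; all the field names are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class AppConfig:
    """Provider-neutral configuration record shared across clouds.
    Field names are illustrative assumptions, not a standard schema."""
    name: str
    environment: str                                # e.g. "dev", "prod"
    providers: list = field(default_factory=list)   # clouds it may run on
    data_classification: str = "internal"           # drives policy checks
    cost_center: str = ""                           # for cost reporting

def render_for(provider: str, cfg: AppConfig) -> dict:
    """Translate the common record into provider-specific settings.
    A real orchestration layer would emit full provider configs; this
    sketch only emits tags, so every cloud reports the same metadata."""
    if provider not in cfg.providers:
        raise ValueError(f"{cfg.name} is not approved for {provider}")
    return {
        "provider": provider,
        "tags": {
            "app": cfg.name,
            "env": cfg.environment,
            "classification": cfg.data_classification,
            "cost-center": cfg.cost_center,
        },
    }

cfg = AppConfig(name="billing-api", environment="prod",
                providers=["aws", "azure"], cost_center="FIN-042")
print(render_for("aws", cfg)["tags"])
```

Because every provider-specific deployment is rendered from the same record, the visibility, cost-management, and auditing layers described earlier all see consistent metadata regardless of where the application actually runs.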