The Importance of Multi-Cloud Thinking
Why do organizations need to start taking hybrid cloud and multi-cloud into account? The growth of public cloud services has created an environment in which enterprise IT departments are scrambling to keep up with their customers. The ease of consumption and the sheer number of resources available to enterprise users and developers make it hard for traditional IT departments to compete as service providers, and many would argue that they shouldn't bother to try.
Cloud providers can deliver resources faster, with less red tape and fewer restrictions, using an on-demand, consumption-based model (OPEX vs. CAPEX). This model shortens delivery times and supports more agile development methodologies, enabling companies to bring new ideas to market faster.
This is fantastic, and all of these points are laudable and necessary for companies to keep up in the modern multi-cloud world.
The challenge arises when we look at what those traditional IT departments actually do beyond "providing infrastructure resources". The reality is that IT departments provide a host of services to their customers that go far beyond provisioning resources.
IT departments ensure that reliance can be placed upon the data in a company's systems.
The term “reliance” is generally heard only during the annual financial audit, where it is used to assess the likelihood that the data in the systems has been tampered with. This is where your IT department usually helps to ensure both the reliability and standardization of your data.
To ensure the reliability of the data, the IT department influences how applications are deployed, including:
- Ensuring that the security controls necessary to protect data have been put in place.
- Ensuring that the underlying systems and application designs provide high availability.
- Ensuring that backups and other disaster recovery solutions are in place.
- Providing technical support to help ensure suitable performance.
- Determining the cost of the infrastructure and helping with strategies to prevent sprawl and achieve economies of scale.
- Providing operational support for the underlying systems, such as operating system patching, database administration, and other seemingly unimportant tasks that don’t get much attention. (Well, until they are not done and things go wrong.)
The challenges can be grouped into a small number of areas, each containing a vast amount of complexity and an equally large number of potential solutions. They are:
- Data management and ownership
- Compliance and legislative requirements
- Quality assurance, reliability, and consistency
- Access control and auditing
- Cost management and containment
Addressing these challenges across clouds means meeting a few core stakeholder requirements:

- Visibility: Developers and business owners need a clear understanding of what resources they are consuming, where their data is located, and who is responsible for that data.
- Predictability: Business owners need to know that an application will function and perform in the same way regardless of where it's deployed.
- Deployment controls: Operations teams need assurance that appropriate testing is done before applications are pushed to production, and that developers are not making on-the-fly changes to applications that could compromise performance or security.
- Policy auditing: Operations teams, cost managers, and application owners need to know that they can set, enforce, and report on the application of necessary policies.
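To make the policy-auditing requirement concrete, here is a minimal sketch of a tag-compliance check. The resource records, field names, and the policy itself are hypothetical illustrations, not any cloud provider's real API or schema:

```python
# Minimal policy-auditing sketch: verify that every resource carries
# the tags the organization requires, regardless of which cloud it
# lives in. All fields and rules here are illustrative assumptions.

REQUIRED_TAGS = {"owner", "cost-center", "environment"}

resources = [
    {"name": "web-vm-01", "cloud": "aws",
     "tags": {"owner": "team-a", "cost-center": "1001", "environment": "prod"}},
    {"name": "db-vm-07", "cloud": "azure",
     "tags": {"owner": "team-b"}},
]

def audit(resources):
    """Return (resource name, sorted missing tags) for each violation."""
    violations = []
    for r in resources:
        missing = REQUIRED_TAGS - set(r["tags"])
        if missing:
            violations.append((r["name"], sorted(missing)))
    return violations

for name, missing in audit(resources):
    print(f"{name}: missing required tags {missing}")
```

A real implementation would pull inventory from each provider's API and feed the report into whatever enforcement or ticketing process the operations team uses; the point is that the same policy check runs identically against every cloud.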
Achieving this control requires a few core tools to tie all of these components together. These tools make up the following layers:
- Common configuration storage model
- Common orchestration framework
- Common data store
- Data correlation and analysis engine
- Shared authentication and authorization framework
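To illustrate the first of these layers, here is a minimal sketch of what a common configuration storage model might look like: a single cloud-agnostic record format that every provider's resources are normalized into, so the other layers (orchestration, cost analysis, auditing) can work from one consistent view. All field names are illustrative assumptions, not any real product's schema:

```python
from dataclasses import dataclass, field

# Hypothetical cloud-agnostic resource record. Field names are
# illustrative; a real model would be far richer.
@dataclass
class ResourceRecord:
    name: str
    provider: str        # e.g. "aws", "azure", "on-prem"
    region: str          # where the data physically lives
    owner: str           # who is responsible for the resource
    monthly_cost: float  # normalized cost figure for reporting
    tags: dict = field(default_factory=dict)

# With one shared store, every layer answers the same questions the
# same way, regardless of which cloud a resource lives in.
inventory = [
    ResourceRecord("web-vm-01", "aws", "us-east-1", "team-a", 72.0),
    ResourceRecord("db-vm-07", "azure", "westeurope", "team-b", 310.5),
]

# Visibility: who owns what, and where is it?
by_owner = {}
for r in inventory:
    by_owner.setdefault(r.owner, []).append((r.name, r.provider, r.region))

# Cost containment: total spend per provider.
cost_by_provider = {}
for r in inventory:
    cost_by_provider[r.provider] = cost_by_provider.get(r.provider, 0.0) + r.monthly_cost
```

The design choice that matters here is normalization: once every provider's resources are expressed in one record format, visibility, cost, and policy questions become simple queries over a single store rather than per-cloud integrations.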