The siloed nature of traditional data center architectures has produced “you-can’t-get-there-from-here” IT environments. Too often applications, data, and storage devices don’t interact, resources are wasted (e.g., one workload per server), and complex management hassles lead to risky administrative lapses that result in security vulnerabilities.
The result: IT infrastructures that are too unwieldy, too expensive, and too slow at a time when agility and responsiveness are essential for success.
In contrast, Cloud Computing infrastructures are built on a new kind of IT architecture (one you can rent rather than build!). This architecture begins with virtualization — e.g., multiple workloads per server rather than just one, as in traditional IT architectures — and then integrates server, network, and storage access resources into a physically distributed but centrally managed system.
Virtualization separates (abstracts) application and/or service layers from the underlying infrastructure and resource layers, and Cloud Computing leverages this to provide a much more scalable, efficient, and elastic model for delivering IT services.
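To make the efficiency gain concrete, here is a minimal, purely illustrative Python sketch (not any vendor's tooling) comparing the traditional one-workload-per-server model against first-fit consolidation of workloads onto virtualized hosts. The workload numbers and the 100% capacity figure are made up for the example.

```python
# Illustrative sketch: why "multiple workloads per server" saves hardware.
# Workload demands are expressed as a percentage of one server's CPU capacity.

def servers_needed_traditional(workloads):
    """Traditional silo model: one workload per physical server."""
    return len(workloads)

def servers_needed_virtualized(workloads, capacity=100):
    """First-fit packing of workloads onto virtualized hosts."""
    hosts = []  # remaining capacity on each provisioned host
    for demand in workloads:
        for i, free in enumerate(hosts):
            if demand <= free:
                hosts[i] = free - demand  # place on an existing host
                break
        else:
            hosts.append(capacity - demand)  # provision a new host
    return len(hosts)

# Ten lightly loaded workloads (hypothetical CPU demands)
workloads = [15, 20, 10, 30, 25, 5, 40, 10, 20, 15]
print(servers_needed_traditional(workloads))  # 10 servers
print(servers_needed_virtualized(workloads))  # 2 servers
```

The same ten workloads that would occupy ten siloed servers fit on two virtualized hosts — the resource waste called out above, eliminated by abstraction.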
Specifically, Cloud Computing…
- Abstracts the configuration, connectivity, and personality of server and I/O resources from the underlying infrastructure so these can be automatically programmed,
- Unifies model configuration with system resources to consistently align policy, server personality, and workloads,
- Decouples scale from complexity and accelerates reliable, secure, end-to-end provisioning and migration support, and
- Implements a unified fabric technology that reduces costs by eliminating the need for multiple sets of network adapters, cables, and switches.
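The first two bullets above can be sketched in code. Below is a hypothetical (not any real product's API) Python model of the idea: a server's "personality" — its identity, network, and boot policy — lives in a profile template rather than in the hardware, so it can be programmed onto, or migrated between, stateless blades. All names, identifiers, and the `apply_profile` helper are invented for illustration.

```python
# Hypothetical sketch of policy-driven server "personality" abstraction.
from dataclasses import dataclass

@dataclass
class ServiceProfile:
    name: str
    mac_address: str   # network identity travels with the profile
    wwn: str           # storage (Fibre Channel) identity
    vlan: int
    boot_policy: str

def apply_profile(profile, blade_id):
    """Program a stateless blade with the profile's identity (simulated)."""
    return {"blade": blade_id, "assigned": profile.name,
            "mac": profile.mac_address, "vlan": profile.vlan}

web_tier = ServiceProfile("web-tier", "00:25:b5:00:00:01",
                          "20:00:00:25:b5:00:00:01", 10, "san-boot")

# Migration becomes re-applying the same identity to different hardware:
print(apply_profile(web_tier, "chassis1/blade3"))
print(apply_profile(web_tier, "chassis2/blade7"))
```

Because the workload's identity is decoupled from any particular blade, moving it is a configuration operation rather than a physical re-cabling exercise — the essence of decoupling scale from complexity.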
And why go through all this? Because it pays off. Which is what I’ll be writing about next time.