Don’t look now, but the data centre is transforming. Seeking greater efficiencies, enterprises of all sizes have steadily evolved from application-based silos to virtualized environments. Now this evolution is taking the next leap forward as innovative enterprises move toward next-generation cloud computing models that deliver IT as a service (ITaaS) — via both internal and external cloud services.
Delivering on-demand cloud services through a multi-tenant architecture offers attractive benefits, including greater agility, data mobility and scalability, along with reduced capital expenditures and greater control over service costs. But it also presents significant management challenges, especially when it comes to managing the dynamic storage resources that ITaaS infrastructures require.
To tap the full efficiency potential of cloud computing, IT managers must maintain, and in many cases improve, service quality and availability. Yet managing storage resources in these highly complex, dynamic and heterogeneous environments is a major challenge in itself.
One challenge is that many customers simply lack the necessary insight into their storage environments. A recent customer visit offers a prime example: the customer was reengineering virtual machines within the data centre to reduce server costs, unaware of the impact the change would have on storage growth over time.
A quick analysis showed that almost all of the gains from the move to a dynamic server infrastructure were being negated by increased storage infrastructure costs, a direct result of the customer lacking the requisite optimization, capacity planning and visibility in the storage environment.
Meeting these challenges requires a next-generation approach to storage management focused on reducing the risk of data centre transformation. This approach should address the following five critical success factors:
1. Global storage visibility: How can you manage your storage environment if you can’t see it all? You need tools that provide a single, end-to-end view of storage, both physical and virtual, across the entire multivendor, multiprotocol environment, including host-to-storage access paths. Extending this visibility beyond the hosts to the applications themselves is essential to fully understanding the impact of storage service levels on your business applications. The result is a “service-level” view of your storage infrastructure: the foundation for ensuring service performance and availability according to defined service-level policies while managing storage resources for maximum efficiency (a minimal sketch of such a topology model follows this list).
2. Comprehensive, continuous monitoring: Periodically checking the health of individual storage systems won’t do in a dynamic environment. You must be able to monitor all shared resources continuously and store that information in a single data warehouse. Proactively monitoring for service violations and latency issues is crucial to identifying potential problems before they impact service quality or availability. And the ability to generate reports on your entire end-to-end storage infrastructure goes beyond administrative convenience, giving you powerful management insight for optimizing shared resources and controlling costs (see the monitoring sketch after this list).
3. Centralized change management: In the evolving data centre, nothing is so constant as change. Yet every change introduces additional complexity and potential risk. You need a solution for planning configuration changes that encompasses the entire infrastructure. The ability to perform “what if” configuration simulations is especially important, letting you see through the complexity and clearly understand the impact of proposed changes. You can then identify potential service-level violations to your business applications and make informed decisions before changes are made. This is critical for managing risk, whether you’re deploying a single new storage array, planning a major migration or simply upgrading firmware (a “what if” sketch follows this list).
4. Global capacity management: Optimizing utilization of shared storage is crucial to achieving the full efficiency benefits of virtualization and cloud computing. IT managers need tools that provide continuous, global visibility into storage resource allocation across the heterogeneous data centre. Leveraging a centralized data warehouse for all storage utilization information enables meaningful analysis of consumption trends, capacity forecasting and chargeback. When new capacity is needed, near-real-time visibility of available service tiers helps accelerate provisioning and avoid potential conflicts (a simple forecasting sketch follows this list).
5. Elegant simplicity: Tools designed to help you manage complexity shouldn’t add more complexity to your environment. Investing in technologies that monitor, measure and analyze equipment from any vendor, based on any protocol, helps avoid a “patchwork” approach of storage management tools that don’t provide a unified view of your entire infrastructure. Also, given the dynamic and non-transparent nature of virtual infrastructure environments, it’s important to use agent-less solutions that don’t require a lot of care and feeding.
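To make these factors concrete, the short Python sketches below illustrate the first four. None of them reflects any particular vendor’s product; every class, field, threshold and number is an illustrative assumption. First, global storage visibility (factor 1) comes down to a single model that ties applications to hosts, access paths and arrays, so a question like “which applications depend on a non-redundant path?” has one authoritative answer:

```python
from dataclasses import dataclass, field

# Illustrative topology model: application -> host -> access path -> array/LUN.
# A real tool would discover these objects across vendors and protocols;
# every class and field name here is a made-up placeholder.

@dataclass
class AccessPath:
    host: str
    array: str
    lun: str
    protocol: str        # e.g. "FC", "iSCSI", "NFS"
    redundant: bool      # does an independent second path exist?

@dataclass
class Application:
    name: str
    service_tier: str    # e.g. "gold", "silver"
    paths: list = field(default_factory=list)

def single_points_of_failure(apps):
    """List every host-to-storage path that leaves an application exposed."""
    findings = []
    for app in apps:
        for p in app.paths:
            if not p.redundant:
                findings.append(f"{app.name} ({app.service_tier}): "
                                f"{p.host} -> {p.array}/{p.lun} via {p.protocol} "
                                "has no redundant path")
    return findings

apps = [Application("billing", "gold",
                    [AccessPath("esx01", "array-a", "lun-12", "FC", redundant=False)])]
for finding in single_points_of_failure(apps):
    print(finding)
```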
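For comprehensive, continuous monitoring (factor 2), the core pattern is a collector that samples every shared resource on a fixed interval, lands each sample in one store (SQLite stands in for the central data warehouse here) and raises an alert the moment a latency target is breached, rather than waiting for a periodic health check:

```python
import sqlite3
import time
import random

# Illustrative continuous collector. collect_latency_ms() is a stand-in for
# whatever probe a real device exposes; the per-tier targets are assumptions.

SLO_MS = {"gold": 5.0, "silver": 20.0}
DEVICES = [("array-a", "gold"), ("array-b", "silver")]

def collect_latency_ms(device):
    return random.uniform(1.0, 30.0)   # placeholder for a real measurement

db = sqlite3.connect("storage_metrics.db")   # SQLite stands in for the warehouse
db.execute("CREATE TABLE IF NOT EXISTS samples "
           "(ts REAL, device TEXT, tier TEXT, latency_ms REAL)")

for _ in range(3):                     # a real collector would run forever
    for device, tier in DEVICES:
        latency = collect_latency_ms(device)
        db.execute("INSERT INTO samples VALUES (?, ?, ?, ?)",
                   (time.time(), device, tier, latency))
        if latency > SLO_MS[tier]:     # proactive alert, not a periodic check
            print(f"ALERT: {device} ({tier}) at {latency:.1f} ms "
                  f"exceeds the {SLO_MS[tier]:.0f} ms target")
    db.commit()
    time.sleep(1)
```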
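For centralized change management (factor 3), a “what if” simulation can be as simple as applying a proposed change to the topology model and re-checking service-level policies before anything is touched. This hypothetical example evaluates taking one array port offline, say for a firmware upgrade:

```python
# Hypothetical "what if" check: remove a port from the current topology and
# re-test each application's path-count policy before touching anything.

paths = {   # application -> (host, array port) pairs currently serving it
    "billing":   [("esx01", "array-a:p1"), ("esx02", "array-a:p2")],
    "reporting": [("esx03", "array-a:p1")],
}
min_paths = {"billing": 2, "reporting": 1}   # assumed per-application policy

def simulate_port_offline(port):
    """Return the service-level violations a proposed port outage would cause."""
    violations = []
    for app, app_paths in paths.items():
        surviving = [p for p in app_paths if p[1] != port]
        if len(surviving) < min_paths[app]:
            violations.append(f"{app}: {len(surviving)} path(s) left, "
                              f"needs {min_paths[app]}")
    return violations

# Planned firmware upgrade takes array-a:p1 offline; check the impact first.
for violation in simulate_port_offline("array-a:p1"):
    print("WOULD VIOLATE:", violation)
```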
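For global capacity management (factor 4), trend analysis over warehoused utilization data is what turns raw samples into a forecast. This sketch fits a least-squares line to a week of invented consumption figures and estimates when a shared pool would fill at the current growth rate:

```python
# Illustrative capacity forecast: fit a straight line to daily consumption
# samples (one per day, invented numbers) and estimate days until a shared
# pool is exhausted. Real history would come from the central warehouse.

POOL_TB = 100.0
history_tb = [62.0, 63.1, 64.4, 65.2, 66.8, 67.9, 69.5]

def days_until_full(history, capacity):
    """Least-squares growth rate in TB/day, projected to pool exhaustion."""
    n = len(history)
    x_mean = (n - 1) / 2
    y_mean = sum(history) / n
    slope = (sum((x - x_mean) * (y - y_mean)
                 for x, y in enumerate(history))
             / sum((x - x_mean) ** 2 for x in range(n)))
    if slope <= 0:
        return None            # flat or shrinking: no exhaustion in sight
    return (capacity - history[-1]) / slope

days = days_until_full(history_tb, POOL_TB)
if days is not None:
    print(f"Pool full in roughly {days:.0f} days at the current growth rate")
```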
Embracing change without compromising stability or control is the goal of any IT organization driving the adoption of next-generation ITaaS and cloud computing models. The key to managing the risk of data centre transformation is having a clear picture of what’s really happening across the entire dynamic, complex environment and how it affects the service levels and performance of your business applications.
Taking advantage of storage management technologies that adhere to the five success factors described above can help you achieve this visibility — and realize the promise of an optimized, efficient, next-generation data centre.