When IT Performance Monitoring Gets a “Big Data” Boost

No doubt you’ve heard about “big data.” It sounds intimidating, invasive, and, well, way too big. But don’t be fooled — “big data” is going to save your IT infrastructure.

The first thing you need to understand about how this is happening (oh yes, it’s already well underway) is that “big data” is something of a misnomer. What we’re really talking about is analytics — automated mathematical tools that work in real time to sift through untold amounts of regular old data, in this case IT performance data, and produce actionable results that go far beyond legacy monitoring capabilities.

And just in time, too, because thanks to the complexity spawned by virtualization, cloud computing and today’s widely distributed IT infrastructures, we’re facing a whole new swarm of IT performance challenges that typical legacy monitoring tools have no hope of addressing.

Too much unusable data

The trouble with traditional performance monitoring tools is threefold:

  • They collect a great deal of data, but too much of that data is unusable (and too often generates overwhelming noise, i.e., impossibly high volumes of alerts)
  • They’re siloed solutions that don’t play well together and make difficult work of cross-correlating data, thus seriously restricting visibility into IT service performance (notably after deploying virtualization, according to some research)
  • They’re reactive rather than proactive, so you find out about a problem after the fact instead of before an issue becomes critical

Such limitations cost plenty: performance degradation, unplanned downtime (priced at an average of $72,000 per minute by one research firm), and new IT projects sabotaged by unrecognized problems and risks in availability, performance and capacity.

Enter IT operations analytics

Unlike legacy performance monitoring, analytics-driven solutions — called IT operations analytics — expose patterns hidden in your IT performance data. And these patterns can be used to reveal existing issues (say, misconfigurations) as well as anticipate nascent ones.

This means you get visibility into your complete IT environment so issues can be resolved proactively before they impact business services, end-users or customers. What’s more, this sort of analytics can be used to predict future capacity and provisioning requirements.

You’ll end up with actionable insight into every aspect of your IT network so you know what’s going on and why. And, you’ll be able to establish a performance baseline built from multiple metrics, so planning and deploying new projects will cost less time, money and hassle.
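The notion of a baseline built from multiple metrics can be sketched in a few lines: compute a per-metric mean and standard deviation from history, then flag readings that stray too far from it. The metric names, sample values and threshold below are all invented for illustration.

```python
from statistics import mean, stdev

# Hypothetical historical samples per metric (e.g., from a monitoring feed)
history = {
    "cpu_pct":    [31, 35, 30, 33, 36, 32, 34],
    "latency_ms": [110, 120, 105, 115, 118, 112, 109],
    "iops":       [840, 910, 870, 890, 860, 900, 880],
}

# Baseline: per-metric (mean, standard deviation)
baseline = {m: (mean(v), stdev(v)) for m, v in history.items()}

def anomalies(reading: dict, z_threshold: float = 3.0) -> list:
    """Return metrics whose current value strays beyond z_threshold sigmas."""
    flagged = []
    for metric, value in reading.items():
        mu, sigma = baseline[metric]
        if sigma and abs(value - mu) / sigma > z_threshold:
            flagged.append(metric)
    return flagged

# A 290 ms latency reading is far outside its baseline; the others are normal
print(anomalies({"cpu_pct": 33, "latency_ms": 290, "iops": 875}))
```

Because the check compares each reading against its own learned baseline rather than a fixed threshold, one cross-metric routine replaces a pile of hand-tuned per-device alert rules — which is precisely how analytics cuts alert noise.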

In my next post, I’ll drill down into just what you should look for in IT operations analytics capabilities.
