The ability to stay several steps ahead and identify anomalous network behavior before it becomes an issue is everything. And having the right tools for the job is the key.
Imagine. You start work at 8 am, check your NPM for outages and network stress points, and notice unusual spikes on the CPU for that time of day. You run an average CPU load report, and everything looks fine. But by 9 am, when the office is full of critical staff and the rest are working from home, dozens of complaints about poor service start flowing in. And one of those complaints is from the CEO whose video call to a regional office is constantly dropping out. Recognize this scenario?
Why didn’t the report show a problem? It’s all down to polling frequency. The longer the polling interval, the less accurate an average report will be.
Take the scenario above. A server is providing software as a service to an office. For five minutes the device runs at full CPU load, and for anyone using the service, performance is terrible. It then drops back to 30% load for the next five minutes, and the cycle repeats. CPU load spikes are not unusual, but if devices are polled only once every ten minutes, the poor performance can be missed entirely. This is the case with SolarWinds’ default interface performance polling.
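A toy simulation makes the point concrete. This is a sketch of the spiking pattern described above, not any vendor’s actual polling code; the 10-minute poller is assumed to land on the quiet phase of each cycle:

```python
# Simulate one hour of CPU load: 5 minutes at 100%, 5 minutes at 30%, repeating.
# One sample per minute is what a 60-second poller would record.
per_minute = ([100] * 5 + [30] * 5) * 6  # 60 one-minute samples

# A poller sampling once every 10 minutes happens to hit the quiet phase each time.
ten_min_samples = per_minute[5::10]  # minutes 5, 15, 25, ... all read 30%

print(max(per_minute))       # 100 -> the 60-second data exposes the spikes
print(max(ten_min_samples))  # 30  -> the 10-minute data misses them entirely
```

The coarse poller reports a healthy 30% load all hour, even though the server spent half of it pegged at 100%.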
Statseeker, on the other hand, polls node and interface performance every 60 seconds.
Statseeker vs SolarWinds Polling Intervals
| Object | Statseeker status | SolarWinds status | Statseeker statistics | SolarWinds statistics |
|---|---|---|---|---|
| Device | 15 sec | 120 sec | 60 sec | 10 min |
| Interface | 60 sec | 120 sec | 60 sec | 9 min |
| Volume | 60 sec | 120 sec | 60 sec | 15 min |
These shorter polling intervals help you identify, and rectify, problems sooner. More frequent polling also means more data, which requires more storage, right? For most network performance monitoring products, that is exactly the case, and it’s also why traditional NPM providers do not retain all polling data. It gets averaged, rolled up, and after a certain period – as short as a year for SolarWinds – deleted.
Try comparing last month’s bandwidth utilization to the same time last year. Chances are you won’t be able to. Even tracing events back a few months is unlikely to return precise information.
Statseeker, on the other hand, retains customers’ complete historical network data without averaging or rolling up: every single piece of 60-second polling data is compressed and stored. It requires only 37.5 GB of storage per year for up to 25,000 interfaces.
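A quick back-of-envelope check shows what that figure implies. The per-sample byte cost below is derived from the quoted numbers, not a published Statseeker specification:

```python
# Implied storage cost per compressed sample, from the quoted 37.5 GB/year figure.
interfaces = 25_000
samples_per_year = 365 * 24 * 60           # one 60-second sample per minute
total_samples = interfaces * samples_per_year

quoted_gb = 37.5
bytes_per_sample = quoted_gb * 1e9 / total_samples
print(round(bytes_per_sample, 1))  # ~2.9 bytes per interface sample after compression
```

At roughly three bytes per sample, retaining a full year of 60-second data is cheaper than it sounds.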
Statseeker vs SolarWinds Data Statistic Retention
| Data age | Statseeker | SolarWinds |
|---|---|---|
| 0–7 days | 60 sec | 10 min |
| 8–30 days | 60 sec | 60 min |
| 31–365 days | 60 sec | 24 hours |
| 366+ days | 60 sec | deleted |
Do you really need to keep all that data? Is it worth it? For one of our North American customers – a food manufacturer – it was.
They were receiving intermittent but consistent reports of poor network performance from their staff. The network team went to work, running detailed reports against their historical data, stored at 60-second granularity. They recorded every instance of poor network performance over the previous twelve months (since Statseeker was implemented) – all the proof they needed. Armed with this data, they presented conclusive evidence to their network carrier showing the duration, to the minute, of each instance of service loss. As compensation, the network carrier refunded our customer US$30,000. You can read more here.
Accurate network behavior analysis requires complete historical, granular-level data. Without that, it’s just guesswork.