I've often said this about reporting information: "If you measure it, then you should report it. If you report it, then someone needs to make decisions with it." That is, the only data worth having is data that helps you make decisions. This is generally true, whether you are capturing performance statistics on a web server ("Are we overloading the server? Should we upgrade the server, or continue as we are?") or reviewing usage on a database system ("Are we running out of disk allocation? Do we need to add table space?"). If you are logging data that never gets reviewed by anyone, then I generally consider that "low value" data.
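To make that concrete, here is a minimal sketch (in Python, with a hypothetical mount point and thresholds) of the "measure it, report it, decide with it" loop for the disk-allocation example: the script measures how full the database volume is, reports the number, and attaches a decision to it.

```python
#!/usr/bin/env python3
# Minimal sketch of "measure it, report it, decide with it" for the
# disk-allocation example. The path and thresholds are hypothetical;
# adjust them for your own database volume and growth rate.

import shutil

DB_VOLUME = "/var/lib/db"   # hypothetical mount point for the database
WARN_AT = 0.80              # start planning for more table space
ACT_AT = 0.90               # add table space (or disk) now

def check_disk(path):
    """Measure: how full is the volume that holds the database?"""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

def report_and_decide(fraction_used):
    """Report the measurement, and tie it to a decision someone can act on."""
    print(f"Database volume is {fraction_used:.0%} full.")
    if fraction_used >= ACT_AT:
        print("Decision: add table space (or disk) before we run out.")
    elif fraction_used >= WARN_AT:
        print("Decision: budget for more storage in the next cycle.")
    else:
        print("Decision: no action needed; keep collecting the numbers.")

if __name__ == "__main__":
    report_and_decide(check_disk(DB_VOLUME))
```

The script itself isn't the point; the point is that the number being measured maps directly to a decision someone will make.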
I consider this data-informed decision-making "management by fact." An article in EDUCAUSE covered the same topic: management by fact. From the article:
"We sit on a ton of data and just don't use it," said Chris Handley, CIO at Stanford University, in reference to the university's databases supporting operations. Similarly, in 2002, the Massachusetts Institute of Technology also saw a glaring need for meaningful operational data when a visiting committee reviewed its IT services. The committee lamented, "The absence of detailed cost data for IT activities and useful benchmark data from peer institutions ... became an obstacle to completing the full scope of the review."1 Broadly, both universities needed data about costs, customer satisfaction, process performance, project performance, and employee performance and satisfaction.
In effect, these organizations had been making decisions based on hearsay and anecdotes; there was no data to back up their decisions. Leading through guesswork is no way to advance an organization. So both organizations began similar improvement efforts. From the outset, both institutions knew (again from the article) that several aspects of the project would be critical:
- Define data clearly. This would ensure meaningful "apples to apples" comparisons.
- Capture costs consistently for the services under study. Complex accounting structures in most university settings hampered previous benchmarking efforts, obscuring valid cost comparisons. We wanted to overcome this problem.
- Understand each other's processes in depth. To interpret comparative data, we needed to understand the factors behind the performance.
- Tackle issues of a manageable scope. Rather than develop broad metrics for IT overall, we sought metrics to inform decisions and compel action. The area of study needed to be broad reaching and visible, yet also well contained and data rich. These criteria led to the selection of IT help-desk services as the first area of study.
The efforts have been well rewarded. Each campus has seen
- process improvements, such as reduced hand-offs and a broader range of topics supported;
- new abilities to handle spikes in workload due to crises (such as viruses) or plans (such as new system rollouts); and
- marked improvement in performance, such as an increased rate of cases resolved on first contact and a higher number of cases handled per employee.