Sunday, August 2, 2009

Learn to count - both Blessings and Failures

In my previous posting, “E = MC Squared”, I had written about the three drivers of operational excellence that we focus on: Measurement, Continuous improvement and Customer focus. This posting goes into a little more detail on the framework of measurement that we try to institutionalize. There is no rocket science here; it is just one of the ways of structuring and prioritizing various metrics.

The Information Technology (IT) and Services industries, in spite of their exposure to data handling and data mining and their familiarity with the money value of transactions, often mess up the discipline of systematic measurement for operational efficiency. Their operational measurement discipline has not yet matured because the glamour and focus in these industries are still on ‘cool functions’, ‘exciting features’ and ‘latest gadgets’ rather than on the boring pursuit of efficiency gains.

So we turned to the process industry, especially the hazardous chemicals industry, for some lessons. The reasons were manifold and can be summarised as below.

1. The process industry has been around for hundreds of years and has matured over a period of time, whereas the IT and Services industries are quite young and still evolving.

2. Many of its product lines have been commoditized with very low operating margins, unlike the IT and Services industries, which still enjoy significant margins arising out of novelty and innovation. This means that the process industry has to squeeze out efficiency wherever possible.

3. In the chemical process industry the consequences of process breakdowns and safety breaches are often fatal, and therefore the extent of public attention, scrutiny and audits is quite severe.

4. In the process industry the processes are mostly integrated end-to-end, with less control over and knowledge of what is happening inside the pipes. This necessitates strong monitoring and control.

The framework of measurement we evolved has thus been helped a lot by learning from the process industry. The three key elements of this framework are the following.

1. Flow management (Micromanagement)

These are metrics that keep track of each of the inputs to ensure that it passes through each process/sub-process it was meant to traverse, and that too without error. This becomes even more critical when some of the processes are quite new and evolving, because in such cases the exceptions become the norm and tend to get ignored, especially in computerised processes.

These errors lead to customer discontent and even revenue leakage. I remember, long ago, when many people received a letter from one of the large banks admitting that it had not tracked credit card transactions correctly and asking the recipients to make a payment on the basis of their own bills. (I am not sure how many actually paid. This is not a made-up story either.)

One of my friends, who provides transaction billing solutions to a large number of banks and telcos, shared with me the extent of leakage he is able to unearth when his system is introduced for the first time.
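To make the flow idea a little more concrete, here is a minimal sketch of what such a flow metric could look like, assuming each transaction carries a record of the sub-processes it has cleared. The stage names, the flow_report helper and the sample data are all illustrative, not a description of any real system.

```python
# Illustrative flow metric: every input must clear every sub-process it is
# meant to traverse; anything stuck is counted against the first stage it
# failed to clear. Stage names and sample data are hypothetical.

EXPECTED_STAGES = ["capture", "validate", "post", "reconcile"]

def flow_report(transactions):
    """transactions: dict of id -> set of stages the item has cleared."""
    stuck = {stage: [] for stage in EXPECTED_STAGES}
    for txn_id, cleared in transactions.items():
        for stage in EXPECTED_STAGES:
            if stage not in cleared:
                stuck[stage].append(txn_id)  # first stage the item missed
                break
    return stuck

if __name__ == "__main__":
    sample = {
        "T001": {"capture", "validate", "post", "reconcile"},
        "T002": {"capture", "validate"},  # never posted
        "T003": {"capture"},              # failed validation
    }
    for stage, items in flow_report(sample).items():
        print(f"{stage:10s} pending: {len(items)} {items}")
```

The point of a report like this is that exceptions stop being invisible: every input that did not make it all the way through shows up somewhere, every day.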

2. Capacity Management (Macromanagement)

These are metrics that track the capacity of processes, people, service providers and machines. We try to establish and evolve measures for trends in the capacity utilisation of each element, to avoid surprise bottlenecks.

This continuous monitoring is even more critical in computer-based operations, as many of the computer systems and network equipment are shared resources across multiple processes, and the utilisation build-up for each of the processes would be different. Very often the process managers pay scant attention to the capacity requirements of their processes and to how this capacity utilisation varies with volume.
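By way of illustration, the sketch below shows one simple way such a capacity trend could be watched: compare the recent average utilisation of a shared resource against its longer-run average and raise an alert before the build-up becomes a bottleneck. The utilisation_trend helper, the window size, the 0.80 threshold and the sample readings are all assumptions made for the example, not our actual thresholds.

```python
# Illustrative capacity-trend check for a shared resource (e.g. a server's
# CPU): flag when the recent average utilisation is both rising and above an
# alert level. Window, threshold and readings are made up for the example.

def utilisation_trend(readings, recent_window=5, alert_level=0.80):
    """readings: list of utilisation fractions (0.0 - 1.0), oldest first."""
    recent = readings[-recent_window:]
    recent_avg = sum(recent) / len(recent)
    overall_avg = sum(readings) / len(readings)
    rising = recent_avg > overall_avg
    return {
        "recent_avg": round(recent_avg, 2),
        "overall_avg": round(overall_avg, 2),
        "alert": rising and recent_avg >= alert_level,
    }

if __name__ == "__main__":
    cpu = [0.55, 0.58, 0.60, 0.62, 0.70, 0.74, 0.78, 0.81, 0.83, 0.85]
    print(utilisation_trend(cpu))  # flags the creeping build-up
```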

In computerised processes there is another often neglected consumer of capacity: the queries needed by the business or the regulator. On one side, these queries use the same production capacity. On the other, very often they are prepared by the junior-most programmers, who develop quite inefficient queries that hog resources. Mostly these one-time queries become the norm and end up quietly eating up resources.

Another area which eats up capacity is weakness in the design itself. For most programmers, the kick is in developing features. Once the features are up and running they lose interest and are keen to move on to the next project. At the time of feature development these cool cats seldom give importance to program efficiency, and once the features are developed they are too lazy (and often do not even know how) to spend the time and effort to fine-tune them. In my experience there is scope for at least 100% improvement in process efficiency in most bespoke programs.

3. Service Levels

The best way to ensure focus in measurement and improvement is to have clearly defined service levels that we are willing to commit to our customers. We call these the Customer Service Commitments (CSC). These involve commitments on Turn-Around-Time (TAT), Quality and Cost.

We try to build time-series data for these parameters at every key process and service-provider level. This time series provides early triggers in terms of trend shifts and unusual volatility.
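As a rough illustration of what such early triggers could look like, the sketch below keeps a time series of one committed parameter (turn-around time, in hours) and flags both a shift in the recent mean and unusual volatility relative to a baseline. The tat_triggers helper, the window sizes and the thresholds are hypothetical; they are not the actual CSC rules we use.

```python
# Illustrative early-trigger check on a service-level time series: compare the
# most recent window against the earlier baseline and flag a mean shift or a
# jump in volatility. Thresholds and sample data are made up.
import statistics

def tat_triggers(series, window=10, shift_sigmas=2.0, vol_ratio=1.5):
    baseline, recent = series[:-window], series[-window:]
    base_mean = statistics.mean(baseline)
    base_sd = statistics.stdev(baseline)
    recent_mean = statistics.mean(recent)
    recent_sd = statistics.stdev(recent)
    return {
        "trend_shift": abs(recent_mean - base_mean) > shift_sigmas * base_sd,
        "volatility": recent_sd > vol_ratio * base_sd,
    }

if __name__ == "__main__":
    tat_hours = [24, 25, 23, 26, 24, 25, 24, 23, 25, 24,
                 26, 27, 29, 31, 30, 33, 35, 34, 36, 38]
    print(tat_triggers(tat_hours))  # both triggers fire on this drift
```

The value of the trigger is that it fires on the trend, well before any single month actually breaches the committed service level.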

We also try to have a separate team (other than the team responsible for operations) track the commitments to the ultimate customer.

What Next?

Once the data is in place, the next thing is how we use it. I believe that is the more difficult part, because it involves changing the habits and practices of human beings. And that is not easy! Very often the data-tracking reports are seen and used merely as measures of compliance, or to satisfy the whims of the bosses.

The excitement is in troubleshooting, exception handling and heroism. Not in the prevention of trouble. It is more interesting to fix people than systems.

What we continuously try to do is inculcate, in everybody at all levels, the spirit of looking at data as a tool for continuous improvement. To make this a culture and not a ritual. This is because we believe that we can scale up and excel only when each of us learns the art of decision making founded on data.

“Give the people the facts, and they will do the right thing.”
