
Friday, July 3, 2020

Enabling a Paradigm Shift in Social Protection through BECKN

Delivery of social protection measures is a big challenge for any government, especially in a developing country. A social protection program has three key components.

The first component is uniquely identifying beneficiaries and enrolling them into the program. Digital ID has revolutionised this part. The pandemic demonstrated that countries which have established a foundation of digital ID can manage the outreach much better than those lacking in this area. Countries which have gone one step further and established a reasonably comprehensive registry of uniquely identified deserving beneficiaries did even better.

The second component is the payment system. Building on the foundation of digital ID, in India we have introduced the Unified Payment Interface (UPI), a completely open and interoperable protocol that has transformed the payment system. As of now, UPI handles more than 1.25 billion transactions per month, which is more than four times the volume handled by the credit/debit card networks every month. We are now building the next layer on top of it: a digital voucher using a completely open protocol. This could be a straightforward money voucher, or a voucher tied to a specific product if the government wants to ensure that the benefit is provided in kind for certain goods and services.

The third element is the market, including the supply chain, and it is still a challenge. That is why governments are forced to run the whole physical operation of procurement, supply chain management and retail shops, often very inefficiently.

The BECKN foundation, established by Nandan Nilekani, has come out with a brilliant solution to this problem. It has published an open protocol specification which can completely revolutionise the marketplace.

What is the challenge it is attempting to solve? The digital marketplace is a big boon, but the market is still heavily siloed. It is increasingly becoming the walled gardens of a few players. Take retail, for example. There are a few large aggregators like Amazon, BigBasket and Alibaba. They are doing a great job, but they are walled gardens. The same is true of delivery, mobility and health services. It is a pain for the consumer, who must look at multiple apps to make an ideal choice. This limits choice and makes markets inefficient.

The service providers are also constrained. Unless you are a part of the aggregator platform you are at a big disadvantage in terms of discoverability.

The BECKN protocol addresses this issue for both the service provider and the consumer. It is a paradigm shift: we are making the internet small-business friendly, rather than forcing small businesses to become internet friendly.

A simple BECKN-based adapter to the payment system or inventory management system can give any service provider, big or small, equal access to the market. Any frontend app the consumer is comfortable with can help him or her access all the options seamlessly if it has a BECKN-based connector to the network. It may be WhatsApp, Telegram, Google Maps or a special app provided by the government.
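The connector idea can be sketched concretely. BECKN defines discovery in terms of a context-plus-intent message sent over the network; the Python sketch below builds a minimal search request in that general shape. Treat it as an illustrative sketch, not the authoritative schema: the domain code, endpoint URI and field values are assumptions for the example.

```python
import uuid
from datetime import datetime, timezone

def build_search_request(product, city_code, bap_id, bap_uri):
    """Build a BECKN-style /search request.

    The structure (context + message.intent) follows the general shape of
    the BECKN protocol; all concrete values here are illustrative.
    """
    return {
        "context": {
            "domain": "retail",            # e.g. retail, mobility, logistics
            "action": "search",
            "city": city_code,
            "bap_id": bap_id,              # ID of the buyer-side app
            "bap_uri": bap_uri,            # callback endpoint for on_search replies
            "transaction_id": str(uuid.uuid4()),
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
        "message": {
            "intent": {
                "item": {"descriptor": {"name": product}},
            }
        },
    }

# A government benefits app (buyer side) asking the whole network, not one
# aggregator, who can supply a product:
req = build_search_request("rice 5kg", "std:044",
                           "gov-benefits-app", "https://bap.example.org/beckn")
```

The point of the sketch is that the same small payload reaches every seller-side platform on the network, which is what levels the field between large aggregators and small providers.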

This means simplification of service delivery (e.g. one-stop payment, personalised workflows combining multiple services, a granular view of events as and when they occur), freedom of choice for consumers, stitching together of suppliers to fulfil a comprehensive service, and a much better (network-driven) view of consumer demand along with the suppliers' ability to fulfil that demand.

Such a marketplace also has the potential to bring dramatic innovation in the options available to the beneficiary. Once such an Open Benefit Delivery Network is in place, governments can focus on scheme design, enrolling the beneficiaries and delivering the subsidy or financial assistance to them, and then leave it to the market to make the best offer.

The beauty of this solution is that we are not talking about a huge investment in a new platform or software solution. Service providers can be enabled to plug into the market at very low cost. We are democratising the market just as HTTP democratised the internet. This is not wishful thinking. We are helping the first solution built around BECKN go live in India next month, and a few more are on the way.

It is time to give this a serious try and be an innovator to solve fundamental problems so that we can manage emergencies like the pandemic much better.

“He who is best prepared can best serve his moment of inspiration and desperation.” ― Samuel Taylor Coleridge


Saturday, April 11, 2020

COVID-19 and Foundations of Social Security Delivery

 

COVID-19 has turned into a pandemic in a very short time, causing significant disruption of normal life and economic activity around the world. This has increased the stress on the financially marginalised and has also pushed many who would not otherwise be counted among the financially challenged over the precipice, at least in the short term. Therefore, financial assistance by the government to a larger cross section of society has become a necessity.

One of the foundational elements for targeting benefits to deserving citizens is the ability to uniquely identify every person. Many developing countries do not even have reliable functional IDs that cover the majority of the population; hence, a national ID becomes even more important. In a world of increasing mobile penetration, if this ID database is also seeded with a mobile number and/or email ID, the whole targeting process becomes more efficient. 'Aadhaar', in a country as large as India, has demonstrated the benefit of a platform that issues a unique ID to every resident and enables easy authentication. Many experts have observed significant variation in the targeting of benefits between countries that have well-established infrastructure, institutions and processes to uniquely identify and authenticate their residents, and countries that lack such facilities.

The other two important foundational elements for targeted delivery are digital social registries and digital financial inclusion.

A well-established social registry consolidates the list of residents along with the key attributes essential for determining the nature and extent of assistance needed. This registry can be built by linking multiple databases against the resident's unique ID, subject to good-practice personal data sharing rules. Such a registry, maintained digitally and updated dynamically, will go a long way in correctly identifying and targeting deserving persons and in avoiding leakage through fraud, double-dipping or processing errors. A social registry built on a unique national ID is more efficient and less error prone.
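As a minimal illustration of why keying the registry on a unique ID matters, the sketch below merges beneficiary lists from two hypothetical schemes and flags IDs already enrolled elsewhere. All scheme names and record fields are invented for the example, and a flag is only a prompt for review, since enrolment in multiple schemes can be legitimate.

```python
def merge_into_registry(registry, scheme_name, beneficiaries):
    """Merge one scheme's beneficiary list into a registry keyed by unique ID.

    Returns the IDs that were already enrolled under a different scheme,
    i.e. candidates for a double-dipping review.  Field names are
    illustrative, not from any real registry.
    """
    flagged = []
    for b in beneficiaries:
        uid = b["unique_id"]
        entry = registry.setdefault(uid, {"name": b["name"], "schemes": set()})
        if entry["schemes"] and scheme_name not in entry["schemes"]:
            flagged.append(uid)          # same person, a second scheme
        entry["schemes"].add(scheme_name)
    return flagged

registry = {}
merge_into_registry(registry, "food-subsidy",
                    [{"unique_id": "1001", "name": "A"}])
dups = merge_into_registry(registry, "cash-transfer",
                           [{"unique_id": "1001", "name": "A"},
                            {"unique_id": "1002", "name": "B"}])
```

Without a shared unique ID the two lists cannot be joined reliably at all, which is exactly the leakage problem the text describes.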

An efficient, low-cost payment system that is interoperable across all categories of institutions holding customer accounts that store value is another critical element that encourages and enables financial inclusion. Digitalisation of holdings and transactions is key to reducing the cost of account maintenance and transactions. The Unified Payment Interface (UPI) introduced in India, which enables interoperability across all banking transaction networks at practically nil cost, has revolutionised the payment ecosystem. High-volume, low-value transactions, hitherto discouraged in the conventional card-driven ecosystem, have found exponential growth in this regime. In January 2020, UPI transactions crossed 1.3 billion, amounting to close to 30 billion USD, which is about 3.5 times the number of transactions in the Visa/Mastercard network. In addition to interoperability, UPI replaces plastic cards and the relatively expensive-to-maintain POS network with conventional smartphones for both the account holder and the merchant. It also enables USSD transactions on feature phones, widening the serviceable segment of the population and thus giving a boost to financial inclusion. This has been endorsed both by large multinational private sector players like Google and by the banker to the central banks, the BIS. The National Payments Corporation of India (NPCI), established by the Reserve Bank of India, which introduced UPI, is expected to set up and promote an entity to take UPI to countries around the world and help them launch it.

The COVID-19 pandemic brings to the forefront the importance of mature systems for national ID, social registry and interoperable payments, as discussed above. As an immediate response, many countries are quickly establishing beneficiary registries. Given the urgency of the short-term need, these are being built with fewer safeguards against ghosts, double-dipping and similar avenues for leakage. In the short term, this may be acceptable. However, instead of treating this as a one-time exercise, it is advisable to consider it a big step towards a comprehensive social registry, and to establish mechanisms to refine it over the next few months so that it becomes the foundation for all social service delivery programs in the future. For example, the Philippines is undertaking a door-to-door enumeration of 18 million households (out of 21 million in total) to create a list of beneficiaries. One might question whether this is the best possible way to build the registry in this period. However, since the plan is already rolled out, the way forward could be as follows. The Philippines is also expected to roll out its national ID project by the end of this year. It may be a good idea to use the digitised database from the current enumeration as the starting point for national ID enrolment, providing it as a pre-populated form that can be refined during enrolment. The Philippines would then soon have a social registry seeded with a biometric national ID. In this fashion, every country needs to evolve appropriate strategies to enrol deserving residents in the short term using existing infrastructure, and to use this in the medium to long term to build a robust social registry.

A large number of residents are being provided cash transfers to tide over the crisis. While each country will roll out this cash distribution using existing infrastructure, this is the opportune time to encourage, enable and nudge the unbanked in this group to open at least a limited-purpose account operated online, with a cap on the amount held in the account. This can be allowed with limited KYC processes; eventually, customers can opt to upgrade to a regular account after the necessary due diligence. In this process, it is important to ensure that infrastructure and policies are in place at the earliest to enable seamless transactions across accounts, at a cost that makes sense for the small-value transactions of customers and merchants, irrespective of which entity holds the account storing value. Governments may have to consider a regulatory nudge for interoperability and for containing the cost of low-value transactions. It is also essential that all account-holding entities (including mobile wallets) are regulated uniformly with respect to the safety and security of these accounts. Mobile platforms can also speed up the proliferation of low-cost micro-ATMs, making cash-in/cash-out (CICO) centres widely available through the mobile industry's distributor network.

While each government will have its own approach to the most pressing and emergent issues of today, a robust national ID integrated with a social registry, combined with digital financial inclusion, provides the building blocks that will enable governments to deliver benefits in good times as well as in crises.

It is hoped that governments and development agencies take these lessons into their design of policy interventions in the social sector.

“Those who cannot remember the past are condemned to repeat it” ― George Santayana

Saturday, September 27, 2014

Digital India 2.0: Hoping for the best

[Excerpts of my speech at Conference on Digital India as a part of 37th SKOCH Summit at India Habitat Center, New Delhi September 2014]

If somebody asks me my opinion about the Indian Movie Industry my reply would be; “It is big and thriving, some of its productions are world class, exciting and memorable. But it has also created a lot of crap.”  My reply would be the same if anybody asks for my opinion on e-Governance in India.

Why do I say so? Let me give two metrics in support of my statement. Firstly, as per the DeitY tracking system for electronic transactions, we had just about 2 billion electronic transactions in 2013, and in 2014 this number has already been reached in the first 8 months. Impressive, in a way. But for a country with more than a billion people and an internet penetration of 100 million (which itself is quite low, reflecting poor internet infrastructure and the lack of relevant and convenient service offerings, compared to around 900 million mobile connections), it works out to an average of only about 20 transactions per internet user per year. When we consider that this covers both central and state services, including online utility payments, it is not something to shout from the rooftops.

Secondly, let us take a look at the e-Government Development Index (EGDI) that the UN publishes every year. In the last few years, India's ranking has hovered in the range of 110-120. In spite of India being a powerhouse in the world of Information Technology, we are this low in the world EGDI ranking; we have even managed a drop from 113 in 2008 to 118 in 2014. South Korea, on the other hand, has held the top rank for many years.

It is in this context that we should look at the recently announced Digital India 2.0 program. It has all the right intentions, great ideas and enough resource allocation. But then, is it much different from the NeGP of yesteryears? NeGP also had the right intentions, the right ideas and enough resources when it was launched. So why did it fail to deliver what it should have? The fundamental problem was in the implementation. Let us take a look at some key lessons from our experience of yesteryears. The weakness was in the very foundation of most of the projects: stable citizen focus, leadership, procurement and contracting strategy, continuous improvement and business models. Let us go a little deeper.

Most e-Gov initiatives are "transformational projects", not automation of existing stable processes. Transformational projects require a re-look at the existing processes and a re-engineering of them with a citizen focus, taking full advantage of what technology can offer. Citizen focus means that the workflows and processes should make it easy and convenient for the citizen to access services online. When citizens ask for a service from one government department, they should not be asked to fetch a document or certificate from another government department. The departmental systems should be able to talk to each other electronically to source and verify the information available with the latter. This, in addition to the facility for online application, could significantly reduce the multiple interactions citizens have to make with government departments, and the related rent seeking and harassment.
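One hypothetical shape for such department-to-department sourcing and verification: the issuing department exposes a verification service keyed on the person's ID and answers only yes/no, so the requesting department never asks the citizen for a paper certificate and the full record is never exposed. The record fields, income limit and the in-memory "database" below are all illustrative assumptions.

```python
# Stand-in for the issuing department's database (illustrative only).
INCOME_RECORDS = {
    "1001": {"annual_income": 180_000, "year": 2014},
}

def verify_income(unique_id, claimed_max_income):
    """Issuing department's service: confirm whether the person's recorded
    income is within the requesting department's eligibility limit,
    without revealing the income itself (data minimisation)."""
    record = INCOME_RECORDS.get(unique_id)
    if record is None:
        return {"status": "not_found"}
    return {"status": "verified",
            "within_limit": record["annual_income"] <= claimed_max_income}

# Requesting department: grant a benefit only after electronic verification,
# instead of demanding an income certificate from the citizen.
result = verify_income("1001", claimed_max_income=200_000)
```

The yes/no design is a deliberate choice: it removes the citizen's trips between departments while sharing the minimum data needed for the decision.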

One of the challenges in integrating the systems of different departments was the absence of a mechanism for uniquely identifying a person. Now that the penetration of UID in India has reached significant levels, this problem has nearly been solved. It is therefore important for every departmental automation project to incorporate UID tracking of beneficiaries to the extent possible.

Any transformational project will succeed only with stable and visionary leadership, especially during conceptualisation and roll-out. Whenever the government managed to place such people in leadership roles, the projects benefited. In many cases, however, the government's selection and transfer policies do not take this into account at all. The result is a poor choice of mission leaders, and/or key people getting transferred frequently, especially at critical stages of the project. Such changes not only break the momentum of the project but are downright counterproductive. This is one of the key reasons for the failure of many e-Gov projects.

The establishment of a transformational project cannot be handled like event management. It is an evolutionary process: a journey, not a destination. But many government departments treat all projects like events. There would be a study of the current processes, some kind of process re-engineering, development of specifications for the software application, choice of a technology solution, getting the solution developed or customised, and rolling it out. During this phase we do see the involvement of senior officers.

Once it is rolled out, it is left to the operating staff. But what is needed for success is continuous improvement: continuous tracking and refinement of the processes to strengthen validations, communication and clarification for users on the basis of feedback from the field. This review and refinement should be led by the senior officers responsible for the mission. Only then will the system reach adoption and acceptance by a broader cross section of clients. Often, feedback from the field is not given due importance, and by the time the normal governmental process gets these amendments approved it is too late: so much negative image has been created that adoption has suffered and many users have written the system off. The few projects that have demonstrated significant acceptance, such as the Tax Information Network of CBDT, MCA 21 of the Ministry of Corporate Affairs, the Passport Seva Project of the Ministry of External Affairs and UIDAI, were those which took this philosophy of continuous improvement seriously.

In addition to these efforts at continuous improvement, there is also a need for education and handholding of users so that they get used to the new system and make fewer mistakes. Even in this dimension, many projects do badly.

The other major area of failure is the government's procurement and contracting process. Many departments select the lowest-cost bidder, referred to as L1. This often significantly affects quality, especially for services, because quality is difficult to quantify and monitor. Some departments attempted Quality cum Cost Based Selection (QCBS). But very often the bidders are given technical scores that are very close to each other, usually because the evaluators would like to play safe. In effect, the bid again becomes L1.

In the selection process, it is important to have experts in the field who complement the departmental officials, who often do not have sufficient subject matter expertise. Many departments do engage SMEs (subject matter experts). But the SMEs are often selected based on their willingness to provide free or low-cost service and their readiness to say yes, not on the quality of their expertise. We cannot expect much value addition in such cases.

Some departments then started experimenting with outcome-oriented contracts with service providers. It is a brilliant idea in theory. Even here, most departments mess up. Let us see how. To make an outcome-oriented project a success, there are two critical preconditions: (i) there should be clear articulation of, and agreement on, specific outcomes and milestones; (ii) achieving the outcome requires that both the client and the service provider play their roles on an agreed-upon schedule.

In most government projects, neither precondition is met. Firstly, the outcomes are not clearly articulated; often they are described so broadly that they can be interpreted in different ways. Such vague definitions lead to so much scope creep that the service providers lose significantly. Secondly, the departments fail badly in meeting their part of the deal: there is significant delay or outright failure in giving timely approvals, providing user input, giving feedback on proposed solutions, signing off on specifications and so on. This leads to significant delay, scope creep and an increase in the required effort. Taking both together, government projects are loss-making for most service providers. Because of this, many good and reputed IT service providers are very cagey about bidding for government projects and keep away from them in most cases.

It is evident that unless Digital India 2.0 addresses these implementation issues, it will not be able to do any better. It can be seen from the above that normal government processes are not capable of handling transformational projects; they were never designed to support the innovation, flexibility and agility that are essential ingredients for the successful implementation of such projects.

One obvious solution to this limitation is to carve these out as independent projects and hand them over to a Special Purpose Vehicle (SPV), suitably structured with sufficient flexibility to take nimble decisions and make relevant mid-course corrections when needed. It is also important to have the right kind of leadership, a team with relevant expertise and experience, and a supervisory body that recognises the different approach these SPVs need to succeed. The leadership's vision and courage to take decisions will be especially critical. We have seen examples of such successes. Unfortunately, we also see that even when SPVs are established, the bureaucracy involved in setting them up builds in structural limitations that restrict or prevent the nimbleness of the organisation, and/or installs unimaginative bureaucrats in leadership, completely vitiating the ability of these SPVs to make any significant difference.

I hope the focus the new PMO has on implementation will galvanise the various departments towards better performance.

Related Reading

If wishes were horses


“Exogenous and blind interpretation of statutes, topped with hustled implementation of laws, leads only to more turmoil and less productivity.” ― Henrietta Newton Martin

Sunday, August 28, 2011

Outcome or Process, Choose One


IT-enabled Governance (ITeG) is gaining acceptance and momentum. The Electronic Service Delivery Bill, now under consideration by the government, is expected to give a further thrust to e-enablement. With Aadhaar (the unique ID) getting better traction, Aadhaar integration in service delivery is also providing a stimulus to ITeG initiatives.

The complexity and challenges of e-enablement vary across departments. At one end of the spectrum, the processes are quite mature and well defined, and computerisation is primarily a means to improve productivity. At the other end, what is required is not just automation of existing processes but a total transformation: significant process re-engineering, re-alignment of the roles of various stakeholders (which may render some stakeholders' roles redundant or less important), innovative use of technology, phased implementation, continuous monitoring and many mid-course corrections.

In both cases, Information Technology is a critical component; but it is often forgotten that IT is a means to an end and not an end in itself. I do not intend to discuss the other dimensions of ITeG in this post (take a look at 'An amateur's guide to e-Governance'). This note is about how the IT component is managed by government departments. Most departments have limited in-house skills for these activities and therefore outsource them to private sector service providers. Most of the IT service providers who take up these jobs as System Integrators do not have a total-solution orientation, or are not capable of offering one, so they end up being suppliers of staff.

Their predominant skill is writing software to the specifications given by the solution architects, because our domestic software industry is often drunk on the dollars earned as technicians and cyber-coolies rather than as architects and engineers. The user requirement study is "Tell me what exactly you want me to automate, and I will program it", not "Tell me what your dream is, and we will work together on how technology can make it happen". Their approach is that of a manpower provider rather than a solution provider. They do not share the risk of a failed system (the limitation-of-liability clause adequately takes care of this), nor is the government comfortable sharing the reward in the form of outcome-based payments.

Often, service providers influence how the RFP is written, and it ends up being an enquiry for the supply of bodies rather than outcomes. The RFP gives more importance to the CVs of the team than to making the provider responsible for the outcome. Subsequently, these contracts make the client pay for rectifying bugs in the programs and spend on much more hardware than needed. Compounding the problem is the purchase decision based on the lowest price quoted by the vendors. No proper metrics for measuring the outcome are defined, and the department attempts to micromanage the CVs, the attendance, the hardware specifications and so on. Then either a shoddy service provider takes the contract, or the selected vendor develops shoddy solutions. That is why many ITeG initiatives do not live up to expectations.

When people are told exactly what and how to do something, they stop thinking for themselves, and they can't learn and grow. ~ Unknown

Monday, June 27, 2011

Caveat Emptor ?

The bank was taken aback by the order of the IT Secretary of Tamil Nadu who, as the adjudicator under the Information Technology Act, directed it to pay Rs 12.85 lakh as compensation to an Abu Dhabi based NRI for the loss he suffered in a "phishing" fraud. (1)

"Phishing" is a security risk that many computer users are becoming familiar with. In this case, the account holder received an email that appeared genuine, asking him to provide the user ID and password for his internet banking account to avoid the account being closed. He parted with these details and lost about Rs 7,00,000 from his account.

The bank took the view that the loss was on account of the account holder's carelessness and refused any compensation, as it had advised all account holders not to part with their user ID and password to anybody. How could he be so irresponsible as to give away his password and then claim that the bank should compensate him? But the adjudicator did not agree with this point of view.

This is an interesting dilemma for all entities, like banks and depositories, that provide online services to their clients. Where does their liability for a loss suffered by their clients end? Under what circumstances do they remain liable even if the loss was the outcome of a failure by the account holder?

Similar issues have existed with offline transactions too: for example, the loss suffered by credit card holders from the misuse of their card details, or the loss of money from a bank account through fraudulent instructions.

One of the accepted doctrines in legal theory is that the entity best equipped to manage a risk should be liable for the loss arising from it. This doctrine has been followed not just for online transactions; it has been used in fixing liability for workplace accidents, accidents in amusement parks and so on.

To many, this may look very unfair. Shouldn't the responsibility of the service provider end once it has put risk-containment measures in place and warned users about the potential risks? Why should it bear the brunt of a fraud when ingenious souls manage to find a way around the protective walls?

There are studies comparing the incidence of banking fraud between countries that place the primary responsibility on the service provider and those that place it on the customer. These studies showed that where the legal structure supported the above doctrine, the extent of fraud was much lower. The service providers continuously upgraded their risk-containment systems because they could not hide behind the fine print in the forms they make their clients sign, or behind the disclaimers they publish. They cannot limit their risk with firewalls and demilitarised zones in their data centres. They have no option but to be innovative in saving their clients from their own foolishness. They are forced to look for patterns, trends and exceptions, track the locations from which transactions originate, raise alerts when exceptions occur, and so on. And if a client still suffers a loss, he is compensated unless the provider can prove the customer's connivance or involvement. They can't just declare "caveat emptor".
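A toy version of such pattern-based monitoring might look like the sketch below: each transaction is checked against the account's own history and flagged for step-up verification rather than silently blocked. The rules and thresholds are made up for illustration; real systems use far richer signals.

```python
def flag_suspicious(txn, history):
    """Flag a transaction for step-up verification using two simple rules:
    an unfamiliar location, or an amount far above the account's average.
    Both rules and the 5x threshold are illustrative assumptions."""
    reasons = []
    if history:
        known_locations = {t["location"] for t in history}
        if txn["location"] not in known_locations:
            reasons.append("new_location")
        avg = sum(t["amount"] for t in history) / len(history)
        if txn["amount"] > 5 * avg:
            reasons.append("amount_spike")
    return reasons

# An account that normally makes small local payments suddenly sends a
# large amount from an unseen location: both rules fire.
history = [{"amount": 2000, "location": "Chennai"},
           {"amount": 1500, "location": "Chennai"}]
alerts = flag_suspicious({"amount": 700_000, "location": "Lagos"}, history)
```

Flagging for extra verification, instead of refusing outright, is what lets the provider protect careless customers without punishing the honest ones.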

Risk management therefore becomes a managerial decision, perhaps even more than a technical one. (Read "Digital Security – Business, People and Economics" for some more thoughts on this.)

Ref: (1) https://indialawnews.org/tag/human-rights/

"Customers don't expect you to be perfect. They do expect you to fix things when they go wrong." Donald Porter, British Airways



Sunday, June 12, 2011

Matter of Right

The Government of India has put out a draft bill on Electronic Service Delivery (ESD) for public comments. The key features of this scheme are:

i) Every department of the government should mandatorily make its services to citizens available through electronic mode.

ii) ESD should be made operational within five years of the enactment of the bill. An extension of another three years will be allowed if there are valid reasons for the delay.

iii) Within six months of the enactment of the bill, every department should publish the list of services which will come within the ESD commitment.

iv) As in the case of the Right to Information Act (RTI), the proposed ESD act provides for Commissions at the central and state levels to ensure that the expectations under the act are delivered and failures are met with punitive action.

This right to ESD, proposed to be guaranteed under an act of parliament, can be seen as the maturing of the various e-Governance initiatives the government has been taking in a variety of fields for more than a decade. India is considered a powerhouse in the field of ICT, and we practically run the back-office operations of the whole world. With this background, ESD should be an easy target. But is it?

The World e-Government ranking by the United Nations gives India a rank of 119 out of the 192 countries it surveyed in 2010. As my friend Neel pointed out, "How come, even after more than a decade of e-Gov initiatives at the highest level, we still want six months for all departments just to publish the list of services they can make available electronically, and we need five to eight years for this to fructify?" The reasons are many, but the following appear to be the most fundamental:

i) Many of the e-Gov initiatives are computerisation of the departments' existing operations, heavily accented towards MIS reports for internal consumption and upward reporting.

ii) Processes were not fine-tuned with a citizen focus. Committed service levels or actual performance levels were seldom benchmarked or published.

iii) Solution implementation was more activity based than outcome based. Often vendors saw their role as software developers or as hardware suppliers and not as service providers.

iv) More attention was given to automating the front end without streamlining and automating the back end. In many instances, sufficient consideration was not given to building an electronic repository of records and masters, or to ensuring high data quality, which are the foundation blocks of electronic service delivery.

This problem is not unique to government computerization efforts. In many private sector companies too, computerization took this route. To begin with, the computer was a perk and a status symbol for the boss. Then it became a departmental initiative. It was much later that an integrated corporate-wide strategy evolved in progressive companies.

Similarly, in government it started as a privilege for the big bosses. Then it became a departmental initiative, left to the interest of the head of the department. Integrated service delivery is still a dream. (Read "India gets a CIO - Part II")

Now, when we attempt to make electronic service delivery a matter of right, we have to give more attention to the lacunae highlighted above; else we will not be able to live up to our promise or the expectations of our citizens, and the commissioners will end up inundated with grievances.

Picard: Come back! Make a difference!
Kirk: I take it the odds are against us and the situation’s grim.
Picard: You could say that.
Kirk: If Spock were here, he’d say that we are irrational, illogical human beings for going on a mission like this... Sounds like fun!

-Star Trek: Generations

If you like this post, share it with your friends





Sunday, April 10, 2011

Give us the Facts

When India got its independence in 1947, as an outcome of partition there was large-scale migration across the India-Pakistan border. From West Pakistan, more than 300,000 refugees (this does not include the thousands who moved to Delhi, Mumbai etc.) migrated to eastern Punjab, leaving behind about 2.7 million hectares, and they were looking for land to settle down and farm. As against this, the total land left behind by Muslims who migrated to Pakistan was only 1.9 million hectares. The government had the huge challenge of allocating land to these refugee farmers equitably.

Sardar Tarlok Singh of the Indian Civil Service, as the director general of rehabilitation in this region, managed this process with clear and simple guidelines which were enforced quite well. To address the variation in productivity of land across regions, he defined a ‘standard acre’: the area that could yield a specified quantity of rice. To address the lesser area available for allotment compared to what was left behind, he introduced a ‘graded cut’, under which each party received only a specified fraction of the land area left behind by them. This haircut, administered in a stepped fashion, was lowest for the smallest holdings and highest for the highest slab.
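The stepped ‘graded cut’ works like a progressive tax in reverse, and can be sketched in a few lines. The slab boundaries and cut percentages below are purely hypothetical, for illustration; the actual rates used in 1948 were different:

```python
# Hypothetical graded-cut allotment: each slab of the claimed area
# (in "standard acres") suffers a progressively deeper cut.
# Slab boundaries and cut fractions here are illustrative only.
SLABS = [
    (10, 0.25),           # first 10 standard acres: 25% cut
    (30, 0.30),           # next 20 (up to 30): 30% cut
    (float("inf"), 0.40), # everything above 30: 40% cut
]

def allot(claimed: float) -> float:
    """Return the area allotted after applying the stepped graded cut."""
    allotted, lower = 0.0, 0.0
    for upper, cut in SLABS:
        portion = min(claimed, upper) - lower
        if portion <= 0:
            break
        allotted += portion * (1 - cut)
        lower = upper
    return allotted
```

With these illustrative rates, a refugee who left behind 10 standard acres would be allotted 7.5, while one who left behind 40 would get 27.5, so the larger the claim, the deeper the overall haircut.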

The biggest challenge was verifying the authenticity of the claims. He addressed the same through open assemblies of refugees from the same village. False claims were punished by reducing allocation and even imprisonment which were strictly enforced. He was able to make allotment of 250,000 properties within 18 months from March 1948, when he started collecting claim applications. [1] (Compare this with the efficiency of administration of various development schemes run by the government these days even with availability of more people and better technology.)

Thus the most difficult problem, verification of claims, was addressed through transparency. Peer verification, social audits and village assemblies are mechanisms that have been used for generations. But as society got larger, government procedures more complex and often opaque, and exception handling ad hoc, various government approvals, benefits and programs became inefficient and turned into avenues for misappropriation.

The Right to Information Act (RTI) is a good beginning. But the resources available for it are so limited that it will be practically impossible to scale up the transparency drive. The resources get clogged in meaningless queries, which often is the very intention of those raising them. RTI is a good tool to dig deeper; but not a tool that can scale up easily.

To give momentum to the march towards more transparency, we need a system in place under which various government departments continuously publish time-series data on their expenditures, programs, exceptions, benefits, sanctions and approvals. At present, most of the information dissemination by government departments is nothing more than a public relations exercise. There are certain departments in certain states taking excellent initiatives. But often they remain individual efforts which die down after the initiator has left, or remain islands of excellence.

What is needed is an institutional framework for publishing granular data in electronic form that can be queried and analyzed. The progress in information technology and better connectivity make this eminently possible and affordable. Various agencies can then access this data, make observations and draw conclusions. Some people may make simple queries for clarification. Some will undertake extensive analysis of the data to identify trends or patterns, or to measure the efficacy of various schemes. The Transparency Portal of Brazil is an excellent example of such an initiative.

This may be inconvenient for many, and such people will always try to object and raise excuses. Some try to hide behind so-called ‘privacy issues’. I agree privacy is important. But privacy is for private matters. When it comes to most government expenditure and government benefits, the public has the right to know how the money has been spent and who the recipients are. Sometimes we hide behind strategic and security concerns. Certain information may have to remain confidential forever; but some can be released after a time delay. We have to be very selective when we classify information on this basis, and it should not become a means to obfuscate. We should have mechanisms in place that dispassionately evaluate the sensitivity of information before classifying it as confidential.

This is a trend that we see around the world. An interesting example is how the data relating to government support during the recent financial market crisis in the USA has been released. The central bank and industry lobbies resisted tooth and nail releasing this data. In December 2010, the Dodd-Frank financial law forced the central bank to release the data relating to the trillions of dollars of loans it extended to various banks in trouble. However, it did not release the details of the loans under the discount window. The Supreme Court has now rejected the objection by the banking industry and has forced the central bank to release this data as well [2].

It makes sense for the government to mount a coordinated effort, with the help of experts, to study the functioning of each department and develop an institutional framework and a time-bound plan for defining the scope of data release. Let us publish time-series data at the most granular level: details of individuals and entities who receive any kind of government patronage, input, aid or subsidy, set against the quality and quantity of their output; details of companies found to have engaged in corrupt practices; details of funds transferred to each department and how they have been spent; and so on. Mário Vinícius Claussen Spinelli, Secretary of Corruption Prevention and Strategic Information at the Brazilian Office of the Comptroller General, has beautifully described the Three Laws of Open Government Data:

• If it can’t be spidered or indexed, it doesn’t exist
• If it isn’t available in open and machine readable format, it can’t engage
• If a legal framework doesn’t allow it to be repurposed, it doesn’t empower


In the beginning there was nothing. God said, "Let there be light!" And there was light. There was still nothing, but you could see it a whole lot better. ~Ellen DeGeneres

[1] India after Gandhi, Ramachandra Guha
[2] Fed To Disclose Discount Window Crisis-Lending Data Thursday



Monday, March 28, 2011

If wishes were horses…

I recently read an interesting observation about the growth prospects for India. During the first millennium AD, and even before, India was an evolved society. It had world-class educational institutions (Nalanda, Taxila etc.) which attracted students and scholars from around the world; it had world-renowned commercial centers which had trade relationships with many continents; and it demonstrated leadership in the areas of philosophy, mathematics, literature and astronomy. It was the era of knowledge and reasoning.

The second millennium was the era of engineering and the industrial revolution, which practically bypassed India. Colonial suppression would definitely have contributed to this. But, as per the above observation, it was also a manifestation of how the Indian brain is wired, which makes it more comfortable with knowledge and logic than with technology and applied science.

The third millennium again is that of knowledge and learning, which are comfort areas for the Indian brain. In fact, the Planning Commission in the early 2000s had set up a task force under the chairmanship of the Deputy Chairman of the Planning Commission to evolve strategies for becoming a knowledge superpower.

I don’t know how correct this analysis is with respect to the competitive edge of Indian society in this knowledge economy. But there can be no argument about the fact that the key drivers of today’s growth are information and collaboration. The most important infrastructural requirements for these key drivers are connectivity to link people and the capability to use the modern tools that facilitate information flow and collaboration.

Today India is acknowledged as a powerhouse in the area of information technology. We are the back-end development center for the whole world. Graduates in every field of science, whether engineering, physics, chemistry or mathematics, appear to be drifting into computer programming, and many more into IT-enabled services.

Therefore, it appears that we have the aptitude, the infrastructure and the human resources necessary in this most important field and we are well poised to build on this. But when we go a little deeper, we see some underlying weaknesses.

A global ICT index called the “Connectivity Scorecard”, based on a study by Professor Leonard Waverman of London Business School and the economic consulting firm LECG, commissioned by Nokia Siemens Networks, has been tracking the level of sophistication of ICT infrastructure across the world for the last few years. It is a broad-based index taking into account the availability of infrastructure and the usage and skills in ICT among the consumer, business and government sectors. The study ranks countries segregated into two groups, innovation-driven and resource-driven, as per the categorization of the World Economic Forum. The former contains mostly developed economies and the latter more of the developing economies. India forms part of the resource-driven group, and with a score of 1.82 out of 10 it ranks 21st among 25. The only four countries ranked lower than India are Kenya, Nigeria, Bangladesh and Pakistan. Malaysia, with a score of 7.14, holds the top place among the resource-driven economies.

What do we learn from this contradiction? We have outstanding strengths in the field of ICT, which is one of the key requirements for a knowledge economy. But these skills and strengths are concentrated in a few islands of excellence. Therefore, we need a focused strategy and attention (a little more than what gets wasted in telecom scams) on wider availability of ICT infrastructure if we are to exploit this opportunity. Somebody once asked Dr R A Mashelkar what would be his ultimate wish for India. He had no difficulty in responding quickly: “High quality connectivity to every citizen at affordable cost and the skill to use it effortlessly and meaningfully.” Then, as Matt Ridley would say, ideas will have more sex and multiply.

The number one benefit of information technology is that it empowers people to do what they want to do. It lets people be creative. It lets people be productive. It lets people learn things they didn't think they could learn before, and so in a sense it is all about potential. ~ Steve Ballmer



Wednesday, March 9, 2011

Software and Hard choices

In this world there are many countries which are endowed with natural resources. Some of them were content to extract these resources in their most basic form and sell them, while others built up industrial bases that add value to these natural resources. The latter prospered and the former often stagnated, especially because abundance in one area acted as a disincentive to growth in other areas. The stagnation might have come when the resources ran out, or when prices dropped, demand fell or more competitive suppliers arrived.

We have the potential for a somewhat similar problem in our software industry, which is growing into a significant sector of strength and opportunity for India. This could be on account of some factors that are holding us back from rising above mediocrity. If we don’t address these, we may eventually end up paying a price for this as a nation.

Demand Growth: There is an increasing acceptance of the use of information technology in most sectors of the economy in India: e-Governance, hospital administration, educational institutions, manufacturing industry, banking and financial services. This offers huge potential for the IT industry. In India, our focus and strength lie in application development, more so in building bespoke applications, and less in hardware and system software. This, in addition to the outsourcing opportunities for undertaking development for international clients, creates a burgeoning demand for the software industry in India.

Customer Awareness: However, the in-house appreciation of IT among this large consuming sector within India is quite primitive, and it is therefore unable to demand sophistication and quality from its vendors. On the other hand, a significant part of our outsourcing contracts are for relatively low-end programming as per solutions defined by the in-house CIO and his team, or based on solutions architected by high-end international consulting companies. Thus there is very limited incentive for the programmers to worry about the performance of their output; they are encouraged to focus only on functionality and features.

Increase in Diversity: The fast-paced introduction of new tools, more sophisticated databases and diverse programming languages encourages programmers to stay familiar with this diversity rather than develop deeper expertise in any of the systems to extract performance efficiency. They are happy to be "Jack of all trades but master of none".

Leaping hardware technology: Hardware is progressing in sophistication so fast that it keeps reducing the cost of processing the same volume. When shoddy system design and poor program quality strain performance as volume increases, the developers simply recommend more iron. Since the new machines process more volume, the senior management of customers gets lulled by the apparent reduction in cost and asks few questions, because in most other areas they are used to costs increasing with volume.

Impact of IT cost: The major consuming industries for our software developers are manufacturing, financial services and government. For manufacturing, IT cost as a fraction of total cost is relatively low. As they earn their revenue from selling products (cars to drugs to chemicals to consumer durables and non-durables), their attention is more on the technology for making and selling those products. IT is seen as an enabling component, or a fad, and its performance gets less attention.

Similarly, in financial services, with revenue being a function of the value of transactions rather than their number, they pay less attention to cost per transaction. In e-governance applications the situation is the same: less attention to performance, more to functionality.

Measurement of Performance Efficiency: In the software industry there are hardly any good measures of performance efficiency that are widely used, and fewer benchmark values against which performance can be measured, especially when it comes to cost per transaction. With nobody taking ownership of the total solution, when problems occur the providers of each component (hardware, system software, networks) point fingers at each other. Very few system integrators own the performance of the total solution; instead they shift performance responsibility to the components. (Read “Learn to count - both Blessings and Failures” for some more thoughts on this)
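Computing such a measure is trivial once the numbers are collected; what is missing is the discipline of collecting, benchmarking and publishing them. A minimal sketch, with all figures hypothetical:

```python
def cost_per_transaction(annual_it_cost: float, annual_transactions: int) -> float:
    """Total cost of ownership divided by transaction volume."""
    if annual_transactions <= 0:
        raise ValueError("transaction volume must be positive")
    return annual_it_cost / annual_transactions

# Hypothetical comparison of two releases of the same system: even though the
# tuned release costs more in absolute terms, it is cheaper per transaction.
baseline = cost_per_transaction(50_000_000, 100_000_000)  # 0.50 per transaction
tuned    = cost_per_transaction(55_000_000, 160_000_000)  # ~0.34 per transaction
```

The point of the sketch is that absolute IT spend (which is what senior management usually sees) can rise while the efficiency measure improves, which is exactly the comparison that goes unmade today.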

So What?

The more discerning users, for whom cost per transaction is critical, like eBay, Google and Facebook, have their own in-house teams whose focus is to squeeze out efficiency and reduce cost; hardly any of our software service providers do meaningful work in this kind of high-end computing or develop unique solutions.

There are many smart Indians in the development teams out there working on such solutions. But our domestic software industry is often drunk with the dollars earned from being technicians and cybercoolies rather than architects and engineers. The user requirement study is "Tell me what exactly you want me to automate, I will program it", not "Tell me what your dream is, and we will work together on how technology can make it happen".

But if there is a large demand for low- and middle-end computing, should we not supply it, earn our dollars and be happy? Of course yes. So what is the risk?

The transaction volume in our consuming industries is growing by leaps and bounds. Shoddy designs will soon show their weakness in handling this ballooning volume. The users will ask for more performance. They will ask for sophisticated analytics, pattern recognition, trend analysis and statistical modeling on the goldmine of data that has been amassed. Then our conventional solution providers will have nothing to offer, as against those companies which have given more attention to high-end computing and more sophisticated model building, often having outsourced the run-of-the-mill programming to us.

That is why, in this time of plenty, there is a need to invest in building high-performance solutions, develop a culture of fine-tuning systems for performance, learn to offer solutions to a client's problems and not just code what he asks for, and develop capabilities for building models. Even in our public policy we should start factoring this in, and shift the incentives from the profits of software export to investment in, and profits from, high-end products and solutions developed for global markets.

Talent without discipline is like an octopus on roller skates. There's plenty of movement, but you never know if it's going to be forward, backwards, or sideways. ~ H. Jackson Brown, Jr.



Thursday, October 14, 2010

Aadhaar – The First Milestone

“Aadhaar”, the Unique Id (UID) for Indian residents, was inaugurated by the Prime Minister of India on September 29, 2010 at Nandurbar district of Maharashtra, at a function attended by a large contingent of political bigwigs including Ms. Sonia Gandhi; the Chief Minister of Maharashtra, Mr. Ashok Chavan; his deputy, Mr. Chhagan Bhujbal; and the UID chief, Mr. Nandan Nilekani.

This ceremony sent out two messages. Firstly, it demonstrated that we have been able to keep the promise of rolling out this project within 18 months. Secondly, by commencing the project at Tembhali, a tribal village in Maharashtra, it showed our commitment to focus on the poorer segments, who today suffer the most on account of the lack of a broadly acceptable identity.

Aadhaar is a new tool which could find applications in a variety of areas. It can help prevent ghost claimants, repeat claimants and proxy claimants of the various benefits offered by government and aid agencies, which in turn can reduce leakage. It can also help deliver benefits directly to deserving candidates, instead of the carpet bombing of benefits which is often cornered by unscrupulous people.
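The deduplication benefit is conceptually simple: once every beneficiary record carries a verified unique id, detecting repeat and ghost claims reduces to a set-membership check. A toy sketch (the record layout and `uid` field name are my own, not any UIDAI format):

```python
def detect_duplicates(claims):
    """Split beneficiary records, each carrying a verified 'uid' field,
    into first-time claims and repeat claims on the same uid."""
    seen, unique, duplicates = set(), [], []
    for claim in claims:
        uid = claim["uid"]
        if uid in seen:
            duplicates.append(claim)  # same person claiming again
        else:
            seen.add(uid)
            unique.append(claim)
    return unique, duplicates
```

Without a unique id the hard part is establishing that two records are the same person at all; the id reduces a fuzzy identity-matching problem to a trivial lookup.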

But this does not mean it is a panacea for all problems, even those related to targeted social protection measures. It just means that we have a stronger tool which, if properly employed, can significantly reduce leakage and improve targeted delivery. UIDAI has come out with papers on how the UID can be of help in different fields. Some of these ideas will fructify and some will not. But there is no doubt that such a tool can be truly transformational. The transformation we have seen in financial markets, especially the capital market, on account of the sensible use of technology to improve efficiency and reduce fraud, helped us build one of the best settlement infrastructures in the world from one of the worst in less than a decade.

We see in the press, from the so-called intelligentsia, concerns about the cost-benefit balance of this initiative and issues of privacy. Sometimes it appears to me that these issues are blown out of proportion. Aadhaar is not unique in the world. Many other countries have already attempted this exercise. America has been using the Social Security Number as a unique id for its residents. What is unique about our Aadhaar is the magnitude of the challenge of issuing a unique id to a billion people, and using technology and processes to prevent duplicates or keep them to an absolute minimum.

Nobody is claiming that the Aadhaar project can reduce duplicates to absolute zero. But I am confident that, if properly implemented, technology and processes are available that can keep uniqueness at levels no other method can match. With such a powerful identity verification tool, the large number of agencies providing services to millions of people (banks, ration shops, insurance companies, etc.) can save the enormous cost of identity verification.

The other concern is about privacy. Let us look at this a little more deeply. Aadhaar takes only very few demographic details (name, gender, date of birth, address, parents’ names, etc.) along with biometric details. In a true sense, it need have taken only the biometric details of an individual to issue a unique number. But today’s level of technology needs a few more fields for exception handling, and, more importantly, the users of this identity have not reached the level of technological sophistication to map each of their clients on the basis of only a number with a biometric mapping. Therefore, Aadhaar requires a few critical demographic details.

The list of demographic data insisted upon at the time of issuance of Aadhaar is so general that it is even less than the details taken for KYC verification by most service providers. There is practically nothing in there that can be used for racial profiling or similar measures. The UIDAI act specifically provides for this.

Aadhaar does not make this information available to anybody, even for verification. Its verification service is limited to a “Yes/No” response to an enquiry of whether a biometric reading (fingerprint) taken from a person and the Aadhaar number claimed by that person match the Aadhaar database. This does not compromise any private data.
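The shape of such a “Yes/No” service can be pictured with a toy sketch. This is not the real Aadhaar authentication API: the database, number format and matching step below are invented, and real biometric matching is a fuzzy score against a threshold rather than the exact hash comparison shown here for illustration:

```python
import hashlib

# Toy Aadhaar-style registry: number -> hash of the enrolled biometric template.
# Contents and number format are entirely hypothetical.
ENROLLED = {
    "9999-0001": hashlib.sha256(b"template-ramesh").hexdigest(),
}

def verify(number: str, biometric_template: bytes) -> bool:
    """Answer only Yes/No: does this biometric match this number?
    No demographic or biometric data is ever returned to the caller."""
    stored = ENROLLED.get(number)
    if stored is None:
        return False
    return hashlib.sha256(biometric_template).hexdigest() == stored
```

The design point the post makes is visible in the signature: the caller supplies both the number and the biometric, and receives a single boolean, so the service reveals nothing it was not already shown.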

The next concern is that once this number is widely prevalent among various service providers, it will be easy to integrate their data to develop a total profile of people. If profiling is the concern to be prevented, fighting Aadhaar is not the solution. Today most service providers already hold so much personal information (name, date of birth, even cell number) that it is sufficient to map one person across multiple databases with the current level of technology. What Aadhaar prevents is the ability of one person to fake multiple identities among multiple service providers. I don’t think this is a right we need to offer to any person or protect. Though it sounds a bit harsh, the opinion expressed by Richard Posner (judge and legal expert from the USA) raises an interesting point: “As a social good, I think privacy is greatly overrated because privacy basically means concealment. People conceal things in order to fool other people about them. They want to appear healthier than they are, smarter, more honest and so forth.”

If we are concerned about the misuse of profiling, we need to establish legal frameworks that prohibit such actions, mechanisms to protect those who blow the whistle on violations, and rules on the extent to which data can be shared across agencies. There is no point in preventing the issuance of a UID which comes with a host of other merits; that is barking up the wrong tree. But it is fashionable to fight the establishment, which I think is one of the strengths of a democratic society; with so many people barking at so many trees, some may just hit the target!

“When it comes to privacy and accountability, people always demand the former for themselves and the latter for everyone else.” ~ David Brin, American science-fiction writer (b. 1950)

Read Also

Privacy Fantasies

Why no one cares about privacy anymore

Tuesday, October 5, 2010

Privacy Fantasies

The technological leap in the integration of varied sources of data raises a number of questions on privacy. I have attempted a time travel of 200 years into the future to take a look at these concerns. I would consider the 200-year period I have allowed for these developments more likely an overestimation than an underestimation.

New Delhi, January 15, 2210: Society is grappling to come to terms with the impact of the recent invention and proliferation of the Mind X-Ray Vision (Mind-X), a tool that lets us read and feel with ease the thoughts and feelings of the people around us. The ultimate tool for transparency; its impact on human relationships, family lives, corporate strategies and matters of governance is unimaginable.

For a society that for years has progressed with the right, the option and the capability for privacy of thoughts and fantasies, this new invention is a totally disruptive development. A person who wears this tool like a watch on his arm can now read the array of emotions that passes through the mind of the person with whom he is conversing. No more suspense about what the other is thinking, whether he is happy, sad, suspicious or aroused. It is no longer a matter of guessing.

Truly scary and a loss of control for some, and a feeling of freedom and power for others! It could allow us to be honest about our feelings and misgivings, or it could make us self-conscious about what bubbles at the bottom of our hearts.

Strategies for competition can no longer be built on secrecy, obscurity or obfuscation, but only on open manoeuvres. It is no longer a game of poker, but a game of chess.

Isn’t this the next transition in the long journey to transparency? Some years ago we got the gadget that could search googols of digital information to find the answer to a question that popped up in our mind and transmit the answer back to our brain. The googols of information also contained so much about each of us, from the day we were born, that there was practically no private life. This was made possible by the tremendous growth of the internet, Google and social networking in the early 21st century.

In a way we have come full circle from the small village life we lived a few thousand years ago, when there were practically no secrets and everybody knew everything about everybody else in the village. The world has become one big village.

The worries and concerns about Mind-X remind me of the privacy concerns that were raised when the internet, Google and then Facebook became popular, laying bare information that was once considered private. It was a scary proposition then. With exploding computing power and the sophistication of data mining tools, it became practically possible to develop individual profiles from publicly available databases. The government, with its right to access more confidential data, had much more detailed data.

It taught us not to be judgemental about another person based on a few incidents of indiscretion, and to accept the fact that most people become responsible over a period of time. It also taught us to be more sensible in our conduct and in what we publish. There were worries that this increased transparency could be misused by the government and its agencies, and there were indeed such incidents. Then many of these transgressions also became matters of public knowledge. But we learned to address these issues. It brought about stronger checks and balances on how such interlinked data could be used, even by the government.


Similar sentiments were expressed when photography became popular in the late 19th century. Louis Brandeis, one of the most renowned legal experts, who later became a justice of the Supreme Court of the United States, and his partner Samuel Warren discussed snapshot photography, a (then) recent innovation in journalism that allowed newspapers to publish photographs and statements of individuals without obtaining their consent. They argued that private individuals were being continually injured and that the practice weakened the "moral standards of society as a whole". {1}

But then, can we protect privacy by arresting the growth of technology? Can we stop the usage and proliferation of new technologies that benefit our society because they can also be used to harm it? Tools are nothing but tools, and it is for us to decide how to use them. If we want a government that is fair, we need to elect one, and we need to be willing to play an active role in making it one. We also have to strengthen governance structures and their oversight of how information is used. If we are concerned about our reputation, we have to learn how to manage it.

We can’t fight an idea whose time has come. Mind You, Mind-X is here to stay!

"Sunlight is the best disinfectant." — William O Douglas

{1} Source: Wikipedia

Monday, July 26, 2010

Competitive Advantage - A case for blogs and wikis

Matt Ridley, in his seminal article “Humans: Why They Triumphed”, has put forward an interesting argument that the dramatic progress of Homo sapiens in the recent past is not primarily on account of an increase in brain size or a dramatic increase in human intelligence. Rather, it has been achieved through the collective intelligence of society, arising out of the continuous exchange of ideas. We have managed to build on what others have built. Sir Isaac Newton also expressed this view when he said, “If I have seen further, it is only by standing on the shoulders of giants.”

The progress in transport and in communication has enlarged opportunities for people of different cultures and experiences to contact each other and exchange their ideas. This has further accelerated the rate of progress. As Ridley put it brilliantly, “The process of cumulative innovation that has doubled life span, cut child mortality by three-quarters and multiplied per capita income nine fold - world-wide - in little more than a century, is driven by ideas having sex.”

Books, radio, TV and even the internet (Web 1.0), while helping to distribute thoughts and ideas across very long distances, enabled mostly one-way interaction, a sort of broadcast. Email brought fast and cheap two-way communication, and it exploded the opportunities for human collaboration.

The recent innovations in information technology (Web 2.0, supported also by progress in mobile technologies) have brought about dramatic changes in communication by making it truly two-way, enabling seamless collaboration.

Very often these tools for two-way collaboration, like Facebook, Twitter, wikis and blogs, are seen by many as geeky, as a non-serious pastime, a juvenile indulgence, or even a waste of time. Therefore many companies and organisations prohibit access to such tools, seeing them as risky distractions.

Because these tools are dismissed as distractions, senior management does not give due attention to how they can alter the way we work and the way we collaborate. With so little interest (or so much ignorance), we are unable to harness their power.

The study by the American sociologist Mark S. Granovetter on the strength of weak ties is quite relevant in this context. According to this study, for most people the network of friends with whom they enjoy strong relationships is quite small, limited, and almost culturally and intellectually incestuous in nature. Therefore it is the weak ties between groups that enable us to collaborate with a more divergent set of people.

It is in this area that collaboration tools like blogs, wikis and social networks offer powerful, intuitive and convenient means. They can help us build a larger network of strong ties, and build and maintain a much larger network of weak ties. Wikis help in collaborative development, while Facebook-like tools help us keep links with a large number of friends.

Many organisations have woken up to these challenges and have established innovative ways of harnessing the power of this collaboration. The book by Andrew McAfee, Principal Research Scientist at MIT’s Center for Digital Business, titled Enterprise 2.0: New Collaborative Tools for Your Organization’s Toughest Challenges, provides excellent insights into the why and how of these tools and is worth reading. I have drawn on insights from this book in writing this post.

At present this is a relatively new concept and not widely adopted. Therefore, those who exploit it early will be able to build a significant competitive advantage. Once the idea gets commoditized and becomes the norm for most players, the extent of competitive differentiation it offers may come down.

‘If you have an apple and I have an apple and we exchange these apples then you and I will still each have one apple. But if you have an idea and I have an idea and we exchange these ideas, then each of us will have two ideas.’ — George Bernard Shaw

Saturday, February 27, 2010

India gets a CIO

“Who Says Elephants Can’t Dance?” is a book by Louis Gerstner, the CEO who turned around IBM, the ailing giant that it was in 1992. I am sure many of us feel the same about India. We used to have a growth rate of around 3%, termed the Hindu rate of growth. That is history, and we are now demonstrating to the world that Indian elephants can dance too.

One of the key strengths that any country, corporation or individual needs to compete in this new world is strength in information and communication technology. And it is an area in which India has clearly demonstrated comfort across a broad cross-section of society. Today we have a booming IT industry ranging from cottage industries to international giants. But we have not yet capitalised on this strength to strengthen our governance infrastructure on a national scale.

We have patches of brilliant execution across the country. The modernisation spearheaded by NSDL and NSE, with support from SEBI, has catapulted the Indian capital market to world standards. There are many other examples, like the Bhoomi project in Karnataka, VAT computerisation in Kerala, and more. The tragedy is that this learning is almost quarantined and has not yet become part of the DNA of governance. The concern is not just this isolation; there is significant duplication of effort and investment, which benefits the vendors more than the users.

For example, GST is one of the most critical national initiatives India is embarking on. The empowered committee, after protracted deliberations, brought out a Discussion Paper on GST in November 2009. The Finance Minister has announced that the target for implementation will be 2011, revised from the original target of 2010. A project of such magnitude and transaction intensity can and will succeed only with the help of a powerful ICT infrastructure. But we hardly see sufficient focus on computerisation in all these discourses and deliberations. Each of the interested parties and states is on its own trip, trying to push its own agenda.

What we need now is a framework and guidelines to facilitate IT-enabled Governance (ITeG) on a national scale. The Government has woken up to this challenge. It has asked Nandan Nilekani, the Chairman of the Unique Identification Authority of India (UIDAI), to head a newly constituted Technical Advisory Group for Unique Projects (TAGUP). This is a welcome development and can surely contribute towards integration of the divergent initiatives that are going around (often in circles).

I remember attending a party soon after Nandan was appointed as the chairman of UIDAI (read It Made Sense – 3; Nandan and the Unique ID*). I remarked that it looked like we had a CIO for the government. A London-based investment banker attending the party quipped, “I now have one more story to sell India.”

But the remark looked a bit preposterous at that point in time. Today it is a reality. I have been keenly watching the development of UIDAI. There appears to be a clear strategy that it is following. The concept note was world class. The RFP that recently came out for application development can easily be termed one of the best RFPs brought out by any government agency. He has also managed to bring together a world-class talent pool to support him.

India looks good, India looks corporate, and India now has a CIO... Read it aloud; it sounds nice, almost musical.

“Part of the inhumanity of the computer is that, once it is competently programmed and working smoothly, it is completely honest.” — Isaac Asimov

Monday, August 24, 2009

Digital Security – Business, People and Economics

We live in a digital world. The extent to which our lives are exposed to this ‘digitization’ is increasing exponentially. Whether we like it or not, and whether we are involved in information-technology-related activities or not, our lives are getting more and more dependent on ‘digits’. Health records, tax records, savings and investment records, records of buying habits; practically everything that affects our lives, including how we are governed, is going digital.

With the way our lives are increasingly getting dependent upon information systems, the Internet being one of the most prominent examples, there is a growing concern about the security and reliability of this fragile infrastructure.[1]

In this digital world, whatever business we are in, we cannot afford to ignore the impact of information security. To begin with, computer security was left in the hands of “computer security experts,” chiefly technologists whose technical understanding qualified them to shoulder the responsibility of keeping computers and their valuable information safe. With stand-alone computers, the key security issues were how to protect data from being lost, corrupted or stolen. [1]

But today information security is not just a technological problem, although technology forms an important component. For a business it is like any other problem of managing risk and the cost associated with it. As in any domain, security experts can find a solution to address most of the risks (except those of cosmic proportions, like a tsunami or a starburst) that a person or organization faces. It is a question of the resources you can throw behind the security risk and the extent of abstinence or isolation you are willing to suffer. Thus it becomes more of a managerial issue of identifying the risk areas, their probabilities, their impact, and the cost-benefit ratio of mitigation.
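As a rough illustration of this managerial calculus (with entirely hypothetical risks and figures, not data from any real organization), the decision can be sketched as an annualized-loss-expectancy comparison: mitigate a risk only when the reduction in expected loss exceeds the cost of mitigating it.

```python
# A minimal sketch of cost-benefit risk management, assuming hypothetical
# probabilities, impacts, and mitigation costs for illustration only.

def ale(annual_probability, impact):
    """Annualized Loss Expectancy: expected loss per year = probability x impact."""
    return annual_probability * impact

# Hypothetical risk register:
# (risk, annual probability, impact, mitigation cost, residual probability)
risks = [
    ("laptop theft",      0.20,    50_000,  2_000, 0.05),
    ("ransomware",        0.05,   500_000, 15_000, 0.01),
    ("insider data leak", 0.02, 1_000_000, 40_000, 0.01),
]

for name, p, impact, cost, p_residual in risks:
    before = ale(p, impact)            # expected annual loss without mitigation
    after = ale(p_residual, impact)    # expected annual loss with mitigation
    net_benefit = (before - after) - cost
    decision = "mitigate" if net_benefit > 0 else "accept the risk"
    print(f"{name}: ALE {before:,.0f} -> {after:,.0f}, "
          f"mitigation cost {cost:,}, net benefit {net_benefit:,.0f}: {decision}")
```

The point of the sketch is not the arithmetic but the managerial stance it encodes: some risks are rationally accepted rather than mitigated, and the inputs (probabilities and impacts) are judgment calls, which is exactly where the human biases discussed below enter.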

Organizations optimize themselves to minimize their risk, and understanding those motivations is key to understanding computer security today. However, none of the above elements of risk management is amenable to straightforward computation. Each depends heavily on human idiosyncrasies, mental make-up, domain knowledge, and so on.

So when we look at information security management we have to use a larger framework; a framework that takes into account business compulsions, the nature of people, and the economics of incentives.

The span of managerial responses ranges from apathy resulting from ignorance or indifference, to paranoia resulting from ignorance or spinelessness. This post is a broad survey of that spectrum, meant to provoke some thought; I don’t expect it to be prescriptive or comprehensive.

At one end, some business managers are unable to see risk in the right perspective. The reflexive ‘fight or flight’ response to risk is an elementary component of any living organism, but many of the risks modern man is exposed to do not call for such a response. There is thus an evolutionary advantage in being able to hold off the reflexive fight-or-flight response while you work out a more sophisticated analysis of the situation and your options for dealing with it. Human beings have a completely different pathway for analyzing risk: the neocortex, a more advanced part of the brain that developed very recently and appears only in mammals. It is intelligent and analytic. It can reason. It can make more nuanced trade-offs. It is also much slower, and it is hard for the neocortex to contradict the primary response from the amygdala.[2]

Psychologist Daniel Gilbert has brilliantly explained this conflict: “The brain is a beautifully engineered get-out-of-the-way machine that constantly scans the environment for things out of whose way it should right now get. That’s what brains did for several hundred million years—and then, just a few million years ago, the mammalian brain learned a new trick: to predict the timing and location of dangers before they actually happened.

Our ability to duck that which is not yet coming is one of the brain’s most stunning innovations, and we wouldn’t have dental floss or 401(k) plans without it. But this innovation is in the early stages of development. The application that allows us to respond to visible baseballs is ancient and reliable, but the add-on utility that allows us to respond to threats that loom in an unseen future is still in beta testing.”

This gets compounded by what psychologists term ‘optimism bias’: we often think that accidents happen only to the other fellow, and we end up taking extreme risks. [2] Therefore, unless the manager consciously holds off the primitive response and analyzes the risk, the response may often be sub-optimal.

This situation gets compounded further when it comes to information security and software products. Most business managers are used to the incentive structure in the production of physical goods. If Honda produces a car with a systemic flaw, it is liable; but Microsoft can produce an operating system with multiple systemic flaws per week and not be liable. Software companies have managed to institute a framework that denies them liability for faulty products. [3]

Many business managers don’t appreciate this different paradigm of the digital world when they take decisions with respect to information security. With corporate assets increasingly becoming digital, this is a critical issue.

We also see behavior at the other end of the spectrum. The security engineering community has, like the environmental science community, built-in incentives to overstate the problem. This could be a firewall vendor struggling to meet quarterly sales targets, a professor trying to mine the ‘cyber-terrorism’ industry for grants, or an information security division lobbying for more funds and more power.

I still remember some of the overselling that happened around Y2K compliance. Our consultant wanted practically every vendor who supplied any electronic good to certify that their product was Y2K compliant.

Such a business manager gets totally taken in by the scaremongers of digital fraud. In the name of information security he ends up overspending on every latest gadget and every vocal consultant. The human mind has a tendency to react to recent and highly visible events, and with the volume of sound bites on information security we are exposed to, we naturally end up overreacting. I still remember our Government grounding the entire A320 fleet of Indian Airlines for a long time after the accident in Bengaluru.

What we need to develop is a balance between these two extremes. As Andrew Odlyzko noted in a paper titled Economics, Psychology, and Sociology of Security, “The natural resilience of human society suggests yet again the natural analogies between biological defense systems and technological ones. An immune system does not provide absolute protection in the face of constantly evolving adversaries, but it provides adequate defense most of the time. In a society composed of people who are unsuited to formally secure systems, the best we can hope to do is to provide “speed bumps” that will reduce the threat of cyber attacks to that we face from more traditional sources.” [4]

In a digital world, this balanced view has to be continuously re-balanced, because the environment changes at an extremely quick pace, unlike in conventional technology areas. You hardly get any time to relax in the comfort of whatever equilibrium you seem to have reached.

Summary

The key thoughts from the above discussion can be summarized as below:
1. Information security is not just about technology; it is about the managerial choices that are exercised.
2. Whether a gadget or a process, it should justify its merit. The idea is not to be foolproof, but to strike the appropriate balance.
3. Managing this risk should be an inherent part of the total organizational process, not the functional responsibility of an expert group.
4. It is not a one-time or periodic activity. It is a continuous game of ‘cops and robbers’.
5. Security should not, and need not, always compromise convenience; where it has to, make it as bearable as possible.

References

1. Kevin J. Soo Hoo, How Much Is Enough? A Risk Management Approach to Computer Security
2. Bruce Schneier, The Psychology of Security, January 21, 2008
3. William Yurcik and David Doss, Illinois State University, Department of Applied Computer Science, “CyberInsurance: A Market Solution to the Internet Security Market Failure”
4. Andrew Odlyzko, Economics, Psychology, and Sociology of Security
5. Ross Anderson and Tyler Moore, Information Security Economics – and Beyond
6. Ross Anderson, Why Information Security is Hard – An Economic Perspective
The above references include the sources of specific observations as well as the articles that provided ideas for my talk.

This post is extracted from a talk I delivered at a conference held at IIMA.