Systems disparity: The implications of data proliferation for business decisions

The increasing reliance on data-based knowledge, and on its creation, is rapidly changing how business is done. However, change always brings both positive and negative effects.

It is generally acknowledged that information is now one of the most valuable resources available to help drive performance improvements. Companies like Facebook and Google are seen as role models due to their Big Data capabilities. But the current discussion tends to neglect the often poor quality of insights gained through the use of data warehouses by other companies, which in many cases have not adequately maintained their data storage solutions for decades.

If truth be told, the many small problems that exist with deploying Big Data get worse as you expand its use. And the experiences of Big Data giants like Google are not all good. In a recent Financial Times article, for example, Tim Harford highlighted the high-profile failure of Google’s attempt to use a theory-free Big Data solution to forecast trends in flu outbreaks.

Furthermore, as noted in the Wall Street Journal article, “The risks of Big Data for companies,” the negative side of the Big Data trend includes “a greater potential for privacy invasion, greater financial exposure in fast-moving markets, greater potential for mistaking noise for true insight, and a greater risk of spending lots of money and time chasing poorly defined problems or opportunities.”

In this paper, we focus on the noise generated by proliferated data systems and its impact on decision making in large and complex financial services institutions.

 

PROLIFERATION AND DISPARITY

Most large- and medium-sized enterprises have a multiplicity of computerized application systems. A typical too-big-to-fail organization will have between 1,500 and 2,000 applications. These applications have proliferated for various reasons, including shifts in strategy, the desire to align with industry practice, M&A activity, internal factors and even malpractice. They are often stand-alone in origin and departmental in scope. Typically acquired or built to address niche aspects of the business, they can run on a variety of hardware platforms and technologies (each with its own operating system). The applications in question are designed according to a variety of paradigms and methodologies and are developed using a vast range of tools. These tools, which are generally aligned to a technology platform, encompass, among other things, programming languages, database-management systems and user-interface builders. And it is in the context of this variety and the resulting heterogeneous environment that the term disparate applies.

A greatly simplified schematic of the systems landscape, illustrated in Figure 1, helps explain why organizations have accumulated so many application systems and why the term disparate is appropriate.

FIGURE 1: IT SYSTEMS LANDSCAPE OVER TIME


In Figure 1, a time axis indicates the appearance of technology platforms, from mainframes to the Network Computer, Internet technologies and cloud-based computing. The Back Office/Front Office division that particularly characterizes financial institutions is shown along with the general direction of information flow. The boxes in Figure 1 represent ‘systems’ or ‘applications’ that may themselves be suites of disparate systems or applications.

The real degree of disorder in organizations is not reflected in Figure 1. Indeed, the figure presents the applications with a semblance of order to assist the explanation that follows. In reality, most system diagrams are so complex that it is difficult to believe any strategy was involved. Certainly, such a level of chaos is unappealing, especially if one considers IT to be a major instrument in defining and executing the policy of an organization. But sadly, the fact that the system schematics of industry competitors share this problem is viewed as an endorsement of the status quo rather than a condemnation. Like individuals, enterprises can draw comfort from being “no worse than average”; relative rather than absolute measures of performance have long been in vogue.

With the high incidence of M&A activity in financial services, not to mention the massive reorganization of firms in this sector that followed the financial crisis, it is worth exposing the myth that these events created the chaos revealed in system schematics. One has only to draw the system schematic of merger participants immediately prior to the merger to see that each party independently presided over a multiplicity of disparate systems. To suggest otherwise is disingenuous. In a merger, candidate systems often compete on a head-to-head basis; alternatively, an entire suite of systems is chosen from one of the contenders in order to preserve existing workflow. The result may still be chaotic, but the merger or acquisition is not the primary cause.

 

EMERGENCE OF SYSTEMS DISPARITIES

The various factors that lead to the emergence of systems disparity are summarized as follows:

Product and Service Diversity: Any relatively large organization is in fact an umbrella covering many internal businesses that have been aligned to specific markets. The markets themselves have developed as a result of enterprises and individuals requiring ever more sophisticated products with which to manage their own financial affairs as they seek to participate in a global economy. Deregulation, globalization, increased competition, the disappearance of traditional industries and tax efficiency have all contributed to the increased range of the financial product set. As new financial products emerge, a business within the firm has either purchased or built systems to handle them. Separate, and sometimes niche, boutique businesses are created to address a market or product set. The new business operates under the umbrella of the parent organization, which provides a good name and common support functions such as HR, legal and compliance. Again, separate systems are either purchased or built to support the new business area. This practice is particularly common within financial services institutions, but is not restricted to them. Market and product alignment has caused the number of systems to increase dramatically. These systems are said to be “vertically aligned” (by product set in this case). Rarely, of course, is an existing system enhanced to cater to a new product set. Occasionally, systems enhancement occurs when a new product is a close relative of an existing product. More often, however, new systems are developed. The reasons for this are technical, political and cultural, and are discussed below. In the meantime, we have identified one cause of the increase in the number of applications, namely the increase in the number of products or services, although this cause is not in itself responsible for the disparity in the applications.

Back Office/Front Office Division: Perhaps the single most significant reason why disparity persists, particularly within financial institutions, is found in the very structure of the concerns themselves, where the business/operations/financial control triumvirate is common. The business side is generally responsible for revenue, client relationships, products and services. Operations deals with processing, workflow and ensuring that contractual obligations are met. Among other things, financial control personnel are responsible for bookkeeping, management accounting and regulatory reporting functions. Front Office is a generic name for the business or businesses. Back Office is the general name for operations, human resources and risk control. In this sense, one envisages a very ordered schema of vertical business lines supported by common horizontal operations, accounting and regulatory reporting services. While this is generally true of Financial Control, a vertical structure often extends through operations as well. These three broad divisions each tend to sponsor or own, and are serviced by, different computer applications. Figure 1 illustrates this. The reasons for the Front Office/Back Office divide are historical, but essentially stem from notions such as “conflict of interest” and “division of responsibility.” In a more automated environment, these concepts, which relate to the character of (and control over) human beings, are increasingly called into question. Nevertheless, for the time being, the fact that systems are owned and sponsored by different groups remains a further cause of proliferation (and possibly of the disparity itself).

Technology Advance: New systems implementations for new products or markets are almost always made using new technologies. Indeed, anyone familiar with the rate of change in the computer industry at large knows that new systems are rarely built on the same platform or platforms as previous systems. The motivation to deploy technologies and platforms that are in vogue at the time stems from the inertia found in existing systems (regardless of age). The sponsors of these systems are forever conscious of the increasing cost of maintaining and enhancing their applications. However, the next “silver bullet” solution always appears to be at hand in the IT industry. Paradigm shifts and new methodologies, tools and languages emerge at a phenomenal rate, and they almost always promise greater efficiency and flexibility. Even system sponsors who are cynical about the next panacea inevitably become resigned to trying new technology out of frustration over the increasing costs of their inert systems. And they are aided and abetted in reaching these conclusions by technology managers who are excited and motivated more by how new technology can solve problems than by the problem to be solved. The acceptance of new technologies is further endorsed by IT professionals, since they need to keep their skills marketable and therefore current. Systems have proliferated as the products and services provided by organizations have grown. And they have multiplied across a whole range of technologies, not to mention generations of technologies. These technologies have been, and continue to be, disparate. As a result, the product set spans most, if not all, mainstream technologies of the last 30 years. Figure 1 not only illustrates the increase in the number of systems (vertical alignment) as the product range grows, but also provides an indication of the major operating system platforms that have prevailed during the period, together with the major software trends that have gained commercial acceptance. In this way we can observe both multiplication and disparity.

Natural Development: In order to survive in a competitive market, organizations must improve both the functionality that their services provide (internally and externally) and the efficiency with which they provide it. Much of this drive for improvement is manifest in the continued development of computer systems. More often than not, however, this development takes effect not through the enhancement of existing applications but through the construction of new ones. The main reason for this, repeated ad nauseam, is that existing systems, no matter how new, demonstrate a stubborn inability to accommodate change of any significance. So the very natural and simple desire to improve leads to an increase in the number of systems. And this leads to both proliferation and systems disparity, since new systems typically take advantage of new technologies.

Geography: Enterprises that operate in many geographical locations have seen further increases in the number of systems deployed due to alternative practices, a disease common to all multinationals. The “we-have-different-ways-of-doing-things-here” attitude often originates with the authorities, since regulations differ from country to country, as do working practices and patterns. But it is also frequently self-inflicted. Significant change must often be visited on a system to enable it to function in a different geographical location. Often the changes required are sufficient to raise the question of whether the separate implementations can reasonably bear the same name. Even those who disagree with that sentiment may well agree with the conclusion not to replace the original version with the new one. So the reasonable policy of running the same system in a number of regional locations is undermined by the divergence that accompanies the rollout: from a single source system, multiple disparate versions are spawned. Furthermore, when a foreign location hosts a smaller operation than the domestic location, or the foreign location is not a market hub in its own right, operating the domestic system solution may not be affordable there. In these cases, systems providing similar functionality, priced accordingly, are often installed at the site of the smaller operation. These are not scaled-down versions; generally, they are genuinely dissimilar to the system supporting the larger operation. There are further geographic reasons for proliferation and disparity. In the past, for example, the same technology could not always be guaranteed in remote locations, and the required technical support was not always available.

Expediency: Windows of opportunity are often small. The financial services industry is particularly aware of this fact, because ignoring or missing opportunities in this sector can substantially affect income, not to mention morale. As a result, the most expedient system solution is often implemented instead of the optimum architectural solution, which may take longer and represent opportunity risk. Once again, the inertia in an existing system will often prevent it from providing what would otherwise be the optimum architectural solution. The expedient solution is often a separate and, in these cases, necessarily disparate system.

Budget and Downsizing: Legacy mainframe and large-scale data storage systems were (and still are) expensive. In general, the innovations in computing that have emerged share one common characteristic: they provide greater processor throughput for the same cost. More recently, the provision of greater network throughput for the same cost has also been a common trait. This phenomenon, known as downsizing, is experienced to greatest effect when a new hardware and operating system architecture surfaces. These architectures are disparate, so the cost of downsizing is disparity and divergence. Over past decades, diversification into new markets required more computer power. Business functional enhancements such as transaction processing, decision support and office automation also all required greater power. Downsizing provided a solution, but the downside of downsizing was increased disparity. Even where only a single application existed, it has usually been too inert to modify to run on downsized technology, at least not within the required time. Subjecting critical applications to this process has been considered too risky.

 

OPERATIONAL COST & RISK

Cost and risk in disparate systems are not simply a matter of spending more to maintain the status quo (at the expense of new systems development). Real operational cost and risk is generated by disparity itself. After all, the more moving parts a machine has, the greater the probability of a breakdown. And when a machine has been thrown together using parts made from different materials and engineered to different specifications, not to mention parts originally intended for quite different purposes, the machine will require continual intervention. Disparate systems in the enterprise are analogous to the parts of such a machine.

The level of intervention required by disparate systems can be measured in terms of the number of people employed. Indeed, despite huge expenditures on IT, the enterprise still depends on people in large numbers. For example, procedures are often developed that require a new customer to be made known to all disparate systems. The number of people needed to accomplish this new customer entry increases with the degree of proliferation, while the difficulty of the exercise increases with the degree of disparity. And as the number of people employed in the enterprise increases, the quality of the service that they collectively provide must inevitably decrease.
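
To make the customer-entry example concrete, the sketch below shows why onboarding one client in a disparate environment is really one task per system, each with its own record layout and identifiers. The system names, formats and fields are hypothetical illustrations under assumed conditions, not a description of any real installation.

```python
# Minimal sketch: one new customer must be made known to every disparate system,
# each expecting its own identifiers and record layout (all names are hypothetical).

customer = {"name": "Acme Trading Ltd", "country": "GB", "ref": "HYPOTHETICAL-REF-001"}

def to_mainframe_record(c):
    # Fixed-width record for a (hypothetical) legacy settlement system.
    return f"{c['name'][:30]:<30}{c['country']:<2}"

def to_crm_payload(c):
    # Dictionary payload for a (hypothetical) front-office CRM.
    return {"legalName": c["name"], "jurisdiction": c["country"], "clientRef": c["ref"]}

def to_ledger_row(c):
    # CSV row for a (hypothetical) general-ledger static-data load.
    return ",".join([c["ref"], c["name"], c["country"]])

adapters = {
    "settlement (mainframe)": to_mainframe_record,
    "CRM (client/server)": to_crm_payload,
    "general ledger (batch CSV)": to_ledger_row,
}

# Every additional disparate system adds another adapter, another procedure,
# and another opportunity for the copies of the customer record to drift apart.
for system, adapter in adapters.items():
    print(f"{system:<28} -> {adapter(customer)!r}")
```

Each adapter here is trivial; in practice each one is a procedure carried out by people, which is why headcount grows with both the number and the dissimilarity of the systems involved.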

Furthermore, since the flow of data in disparate environments is mainly from front to back, many systems are serially linked. A system positioned towards the back end of the operation may be dependent on the successful operation of many other systems in the data supply chain. Failure at any point in the chain adversely affects every stage beyond the point of failure. This can lead to serious problems. For example, when the end-of-day batch cycle of one system produces output that is the input for a downstream system, any failure in the former can delay the schedule for the remaining chain of systems. If next-day payments, or critical regulatory reporting deadlines, are downstream dependencies, data supply chain disruptions can be very serious indeed. Simply put, in a disparate environment, the number of dependencies is greater than the number of disparate systems. And the greater the degree of divergence, the greater the operational risk to the enterprise.
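
The knock-on effect of a single upstream failure can be illustrated with a small simulation. The sketch below models an end-of-day run as a chain of serially dependent jobs; the job names, durations and deadline are hypothetical and chosen only to show how one failure delays every stage downstream of it.

```python
# Minimal sketch: delay propagation through a serially linked batch chain.
# All job names, durations and the deadline are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class BatchJob:
    name: str
    duration_hours: float     # normal run time
    failed: bool = False      # did the first attempt fail?
    rerun_hours: float = 0.0  # time lost to diagnosis and rerun

def run_chain(jobs, start_hour=18.0, deadline_hour=31.0):
    """Run jobs serially; each job starts only when its predecessor finishes.
    Hours count from midnight of day one, so 31.0 means 07:00 the next morning."""
    clock = start_hour
    for job in jobs:
        clock += job.duration_hours
        if job.failed:
            clock += job.rerun_hours  # every downstream job absorbs this delay
        status = "LATE" if clock > deadline_hour else "ok"
        print(f"{job.name:<22} finishes at {clock % 24:05.2f}h  [{status}]")

chain = [
    BatchJob("trade capture feed", 2.0),
    BatchJob("position keeping", 3.0, failed=True, rerun_hours=4.0),
    BatchJob("general ledger post", 2.5),
    BatchJob("regulatory extract", 1.5),
    BatchJob("payments release", 1.0),
]
run_chain(chain)
```

Under these illustrative numbers, a four-hour failure in the second job pushes the payments release past the notional 07:00 deadline even though every system downstream of the failure behaved correctly; it is the number of dependencies, not the number of failures, that drives the operational risk.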

In a global economy, mobility and agility are important attributes for both the enterprise and individual employees. But any enterprise wishing to re-site a business or an operation will find the disparate environment difficult to overcome. Likewise, in a disparate environment, an individual transferred to a different location within the enterprise will almost certainly find themselves in unfamiliar territory, with an attendant, albeit temporary, increase in operational risk.

Finally, in a disparate environment, even one with more than enough total system capacity, there is never enough capacity for any one application. Hardware and infrastructure are constantly being updated, and these operations in themselves constitute a significant risk. In a disparate environment, hardware improvements can address only two issues: more throughput and faster throughput. (For business expansion, more throughput is required; in serially chained, batch-oriented scenarios, faster throughput is necessary.) Unfortunately, the use of extra processing power to abstract and simplify is never an option afforded by disparate systems. Testing across applications, where an error in one system is only exposed in another after an elapsed time, is risk-laden and costly. And the inability to meet regulatory objectives as a matter of course is not a risk to trifle with.

 

CONCLUSION

This article has emphasized the chaotic state of enterprise systems. But it is important to note that the cost of not moving forward is another issue of great gravity.

After all, everything eventually moves to a more demanding performance standard. Not so long ago, for example, customer credit was a back-office issue. Dealers were told retrospectively that they should not have dealt with such and such a client. Today, credit is the most front-office of issues.

Nearer and nearer approximations to real-time will increase the degree of confidence in Big Data analytics and knowledge creation. Nevertheless, the overriding consequence of disparate systems is not simply that “multi-currency, cross-product, global and real-time” questions are difficult to address in a cost-effective manner. The consequence is that these questions are impossible to address appropriately within the current scenario, where near horizons, parochialism and self-interest are necessary survival aids and where the organization itself (and the role of IT within it) naturally precludes their solution.

When it comes to disparate systems, the social and cultural cost to the organization is that those who strive for abstraction, generality and excellence are either marginalized or look elsewhere. We all need to remember that the systems in question, disparate and chaotic as they are, did not create this mess.

——

The views expressed in this article are those of the authors and not necessarily those of UBS.