A dialog on the importance of data quality really is a dialog about the importance of risk management. Data drives the assessment of market risk, credit risk, operational risk, and regulatory/reputational risk. To measure their exposure to market conditions, companies need settlement data for their realized profit and loss (PNL) and forward prices for their mark-to-market exposure. They also might need volatilities and correlations to value options and to estimate potential changes in market value using techniques such as value at risk.

Similarly, to understand exposure to counterparties, particularly where a company has long-term contracts or trades, the same types of data plus credit ratings are critical. In addition, proper controls on the market data feeding the valuation of trades, contracts, and assets are central to minimizing both operational and reputational risk.

How serious is an organization about risk management?

It is easy to agree conceptually to the importance of market data quality. In reality, as conversations with market participants get deeper, a range of commitments on the topic emerges. There are three broad categories:

1. Alimony – Some companies have a clear business case for investing in market data quality; they truly understand the risks because they have experienced them (e.g., taken a loss on a misvalued trade or asset, overextended themselves with a counterparty that went bankrupt, or been subjected to other punitive costs of a failure, such as fines). These companies, for at least as long as their collective corporate memory allows, have a clear return-on-investment case for investing in data validation and automation processes. These investments sometimes are perceived as fees for past transgressions.

2. Marriage – Companies that truly have embraced the new religion of risk avoidance and management know that quality market data is the fuel that drives the engine of trading and hedging decisions, market risk management, credit risk management, and operational risk management. These companies understand that few other situations better fit the adage "garbage in, garbage out" than valuations and risk reports based on market data that has been given little thought or energy. These firms make the relatively low-cost investments in independent benchmarks, automation of market data, and data validation and correction processes.

3. Lip service – The organizations that remain deeply at risk are those that have not yet, or not recently enough, experienced a market, credit, or operational loss due to a market data error. These companies view any investment in market data as a hit to their operational budget and invest their dollars elsewhere. With luck, this approach may work out, but if and when the hurricane hits, it can be devastating to a company's financials and reputation.

The balancing act of speed versus quality

Formula One racing is a balance between maximum speed and quality driving. For E&P companies closing out their books, the same balance between speed and quality is essential. Market data is a critical input to valuation and financial reports, which are increasingly being run one or more times a day by companies with active trading or hedging desks. The departments responsible for market data need to balance the demand for timely reporting with the need for accuracy. They can either run reports immediately with no validation or spend days validating and correcting data, leaving decision-makers waiting for information. Both options create the risk of a bad decision, whether from erroneous data or from changing market conditions.

Automation is the secret to achieving both timeliness and quality. Almost all E&P companies rely on hundreds to thousands of data points from tens to hundreds of market data sources for decision-making and financial reporting. The first step on the road to data quality is to use automated feeds wherever possible instead of manual entry or Web scraping. Manual entry is prone to error, and Web scrapes are intrinsically flawed: they offer only the promise of automation over a manual source that remains subject to unexpected mistakes.

The second step is to use validation rules to screen data, such as high/low checks, unchanged-data checks, and checks for missing, overly large, or overly small files. The third step is to provide proper visual tools that facilitate the correction of any flagged data. The fourth step is to use automated alerts that notify the appropriate staff when a process should not be run or when data has been flagged, so that market data errors do not lead to bad decisions, or worse.
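To make the screening step concrete, the following Python sketch flags missing curves, overly large day-over-day moves, and stale (unchanged) values. The column names, threshold, and pandas layout are assumptions for illustration only, not a prescribed implementation.

```python
import pandas as pd

def screen_prices(today: pd.DataFrame, yesterday: pd.DataFrame,
                  max_move: float = 0.15) -> pd.DataFrame:
    """Return the data points that fail basic validation checks."""
    merged = today.merge(yesterday, on="curve_id", how="outer",
                         suffixes=("", "_prev"), indicator=True)

    # Missing-data check: curves present yesterday but absent from today's file.
    merged["missing"] = merged["_merge"] == "right_only"

    # High/low check: a day-over-day move larger than the allowed fraction.
    pct_move = (merged["price"] - merged["price_prev"]).abs() / merged["price_prev"].abs()
    merged["large_move"] = pct_move > max_move

    # Unchanged-data check: values identical to yesterday's may signal a stale feed.
    merged["stale"] = merged["price"] == merged["price_prev"]

    merged["flagged"] = merged[["missing", "large_move", "stale"]].any(axis=1)
    return merged.loc[merged["flagged"]]
```

In practice, the flagged rows would feed the visual correction tools and automated alerts described above, and dependent valuation runs would be held until the issues are resolved.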

Adequate staffing is another requisite component, even with automation. When automated checks find a potential discrepancy, trained staff must determine whether outliers are correct and, where appropriate, work with data source providers to determine or obtain the correct values. Companies relying heavily on proprietary data specific to their own operations must invest in dedicated staff to feed and validate these data. For publicly available data, the best choice is a shared service bureau. Companies should select a provider that has made investments in the intellectual property around data quality, such as automated checks and automated updates that take in new data as soon as it is available from data sources.

Ideally, the provider also should employ a follow-the-sun model with 24-hour, specially trained staff to check each dataset as soon as it is made available and continuously work each issue through to resolution. Taking this type of work in-house does not differentiate an E&P company. In fact, it is a disadvantage, since doing so will be either more expensive or less effective than relying on a provider that specializes in doing this work as a collective bureau for multiple firms. A collective bureau has the tools and skills to ensure the quality of publicly available data, and its third-party validation provides insurance against potential regulatory and reputational risk. The dollars saved by outsourcing can be better invested in determining the best course of action based on the end results achieved.

Centralization is another key part of the solution. A centralized market data repository reduces the time required to validate data, increases the availability of data for decision-making, and reduces both the time to reconcile data and the chance of error. Centralization also can greatly reduce the cost of data by making it easier to control who is accessing which specific data and thereby manage the costs of the underlying data sources.

Are there cowboys or cowgirls in the data?

Keeping market data under control has become increasingly difficult because of the globalization of markets and the diversification of products and trades that most E&P companies are involved in. The owners of market data in a company need proper tools to control access so they know not only who can use data, but who actually is using data. Most firms do not charge back for data use, even when expensive, so employees often sign up for more than they actually need. Detailed usage reports are essential to reduce ever-rising market data spend and to comply with market data vendor contracts.
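To illustrate, a basic usage report could be assembled from access logs along the lines of the sketch below; the event fields and report structure are hypothetical, not a specific vendor's reporting format.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class AccessEvent:
    user: str      # who pulled the data
    source: str    # which vendor or exchange feed was accessed
    dataset: str   # the specific curve, index, or settlement series

def usage_by_source(events: list[AccessEvent]) -> dict[str, Counter]:
    """Count accesses per data source, broken down by user."""
    report: dict[str, Counter] = {}
    for event in events:
        report.setdefault(event.source, Counter())[event.user] += 1
    return report

# Users entitled to a source but absent from its usage counts are
# candidates for de-provisioning at the next contract renewal.
```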

Equally important are adequate controls on who can change the official data used for centralized decision-making and financial reporting, along with an audit trail showing the who, what, when, why, and how of any data change. If there is a discrepancy in a valuation or PNL, a company needs to be able to determine quickly how and why a change in the data came about.
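One way to picture such an audit trail is an append-only log of change records, as in the sketch below. The field names are illustrative; a real system would persist these records in a database controlled by the data owners.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class DataChange:
    curve_id: str    # what was changed
    old_value: float
    new_value: float
    changed_by: str  # who made the change
    reason: str      # why, e.g., "vendor correction" or "validation override"
    method: str      # how, e.g., manual edit versus automated reload
    changed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))  # when

audit_log: list[DataChange] = []

def record_change(change: DataChange) -> None:
    """Append the change record; entries are never edited or deleted."""
    audit_log.append(change)
```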

A long-term commitment to quality saves time and money and protects assets and reputation. Or you can roll the dice and see how it goes, but that might not keep you on the Formula One track.