So often in conversations about data quality we hear that “close enough” is “good enough,” right? How much damage can a missed match really cause? The answer may surprise you.

First you need to understand that data quality is not an IT problem – it’s a business problem, and the impact is measurable in terms like customer acquisition cost (CAC), lifetime value (LTV), customer retention rate (CRR), average and compound annual growth rates (AAGR, CAGR), and loyalty. Data quality is a shareholder issue—but the C-Suite can’t see the extent of it. Only those on the front lines of working with the data can see it.

So often I sit in meetings where I hear things like: “We’re only wasting an average of $1.60 per duplicate per mailing”; or “We’re okay with a 20% dip in matching performance because of financial pressures”; or “We’re doing great: we only have an 8% duplication rate, the national average for an MPI database above one million records is 9.4%, and that’s good enough.”

Good enough? Seriously!? Think about the cost of “good enough” while considering these scenarios:

Example 1: Healthcare

You walk into a hospital emergency room with your neighbor, who fell off his roof stringing Christmas lights. The hospital cannot locate him in the Master Patient Index. What is the significance of this one record?

It’s well illustrated by a consulting project completed for Children’s Medical Center of Dallas to remove duplicate records from their system. The project found nearly 250,000 duplicate records, representing 22% of their Master Patient Index. On average, each duplicate medical record cost the organization more than $96 (you do the math – they had 250,000 of them). And in 4 percent of cases involving confirmed duplicate records there was an impact on critical care, with the costs of repeat tests or treatment delays averaging about $1,100 each. That 4 percent of 250,000 records, at $1,100 each, equals $11 million!
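The arithmetic behind those totals can be sketched in a few lines. The figures are the ones cited above; since the per-record and per-incident costs are averages, treat the results as rough estimates:

```python
# Rough cost model using the Children's Medical Center figures cited above.
duplicates = 250_000          # duplicate records found (22% of the MPI)
cost_per_duplicate = 96       # average cost per duplicate record ($)
critical_care_rate = 0.04     # share of confirmed duplicates affecting care
cost_per_incident = 1_100     # average cost of repeat tests / delays ($)

admin_cost = duplicates * cost_per_duplicate
care_cost = duplicates * critical_care_rate * cost_per_incident

print(f"Record-keeping cost: ${admin_cost:,}")      # $24,000,000
print(f"Critical-care cost:  ${care_cost:,.0f}")    # $11,000,000
```

Even before counting the clinical impact, the routine cost of carrying the duplicates alone runs to $24 million.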

Example 2: Retail

You walk into your local big-box electronics retailer with your 55” HDTV that decided, during Game 7 of the World Series, that it didn’t want to turn on any longer. When you first purchased the TV, the teenaged kid who entered your contact details for your warranty didn’t enter them correctly. Big surprise, right? Now you’ve stood there for 50 minutes, waiting in pure frustration, while three more teenagers and the manager are on the phone with corporate trying to find you and your warranty in their system. The result: you leave frustrated, and your next $2,500 TV purchase will surely be with another retailer.

So the impact is $2,500 times the number of occurrences, right? Wrong. It’s not just the one purchase. Think about the lifetime value of that customer (CLTV). Take that one step further: it’s not just that one customer, it’s the purchases of the whole family (holidays, birthdays, etc.). And it’s not just the family; it’s the review that customer writes and the word of mouth through their social circle that shapes how others see the brand.
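A back-of-the-envelope CLTV calculation makes the point. A common simplified formula is average order value × purchases per year × years retained; every input below except the $2,500 sale is a hypothetical illustration value, not a figure from this article:

```python
# Simplified customer lifetime value: avg order value x orders/year x years retained.
# All inputs except avg_order_value are hypothetical illustration values.
avg_order_value = 2_500   # the lost TV sale ($)
orders_per_year = 0.5     # one big-ticket purchase every two years (assumed)
years_retained = 10       # expected retention if kept happy (assumed)

cltv = avg_order_value * orders_per_year * years_retained
print(f"Lost lifetime value: ${cltv:,.0f}")  # $12,500, five times the single sale
```

And that still ignores the family’s purchases and the word-of-mouth effect, which only multiply the loss.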

Final thought…

Data permeates almost every business process and impacts decision making at every organizational level—whether you’re prepared to acknowledge it or not. If you’re willing to accept a lower performing matching solution, you have to be willing to accept bad data. If you’re willing to accept bad data you have to accept that your data will fail you. If you accept that your data will fail you, you better brace yourself for the organization- and function-wide implications. Follow the data and ask yourself how it impacts your processes, how it touches the people that matter to your business. Now, ask yourself again: What’s the real cost of bad data?

Remember this the next time someone tells you your data quality is “good enough.” “Good enough” is the battle cry of mediocrity. “Good enough” will cost you time and money! “Good enough” suggests that you’re happy to be average. “Good enough” will guarantee you a “C,” but it may cost you your job.