Right now there are countless “emperors” boldly making decisions that are clothed in “data.” The terrible truth is that those decisions aren’t wearing anything at all. As a marketer, I live and die by the data, and have found marketing automation platforms like SharpSpring to be the best tailors possible to avoid getting called out for indecent exposure in meetings.

There’s been an explosion in how easy it is to collect mountains of data, so everyone’s making better decisions as a result, right? Unfortunately, no. We’re collecting more data than ever before, but that doesn’t mean we’re using it effectively. The solution is simple: quality over quantity. Remember that with data, more is not always better.

As a data-driven marketer, I regularly present to various groups and my goal is to focus on impactful, real-world data that moves our company forward. In my experience, quality data comes down to three key factors:

  •       Relevancy
  •       Accuracy
  •       Digestibility

Relevancy: focus on data that actually matters

Just because data can be collected and analyzed doesn’t mean that it should be. Start by determining what really matters, and list the key performance indicators (KPIs) for whatever you’re evaluating. Take a sales process, for example: the KPIs might be the number of leads the sales team gets, how many of those leads convert into sales, and the average resulting sale. Depending on complexity and which part of the process you’re analyzing, you may also want to segment your data further, such as by type of customer (SMB vs. enterprise), region (North America vs. Europe), or salesperson.

Don’t stop there. Give your metrics context by establishing goals for each of them. Set these goals as high as you think is achievable, not just at the level of current performance. If you find you aren’t meeting those goals, step back and examine why for each KPI. It may turn out that the goals were too ambitious, but before you lower them, take a hard look and make sure the problem is actually the goals, and not the processes.
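To make this concrete, here’s a minimal sketch of checking segmented KPIs against goals. All segment names, figures, and goal levels are hypothetical, and a real setup would pull these numbers from your CRM or marketing platform rather than hard-coding them:

```python
# A minimal sketch of tracking segmented KPIs against ambitious goals.
# All segments, figures, and goals below are hypothetical examples.

# Monthly results broken out by customer type
results = {
    "SMB":        {"leads": 800, "sales": 160, "avg_sale": 2_500},
    "Enterprise": {"leads": 200, "sales": 30,  "avg_sale": 40_000},
}

# Goals set above current performance, not at it
goals = {
    "SMB":        {"conversion_rate": 0.25, "avg_sale": 3_000},
    "Enterprise": {"conversion_rate": 0.20, "avg_sale": 45_000},
}

def kpi_report(results, goals):
    """Compute conversion rate per segment and flag missed goals."""
    report = {}
    for segment, r in results.items():
        rate = r["sales"] / r["leads"]
        report[segment] = {
            "conversion_rate": round(rate, 3),
            "hit_conversion_goal": rate >= goals[segment]["conversion_rate"],
            "hit_avg_sale_goal": r["avg_sale"] >= goals[segment]["avg_sale"],
        }
    return report

print(kpi_report(results, goals))
```

When a segment shows a missed goal, that’s your cue to step back and ask whether the goal or the process is the problem before adjusting either.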

Accuracy: because anyone can just make up numbers

Today’s data is collected from many different sources. That opens up opportunity, but it also opens up gaps. It’s critical that data from different sources is matched up correctly, and that there aren’t large, unaccounted-for gaps. One solution is to use a fully integrated platform that pulls all the relevant numbers into one cohesive picture. If that’s not an option, you’ll have to make sure your data isn’t full of potholes (or at least know where they are so you can swerve around them).

To figure out where you need to take extra care, first map out the process you’re analyzing, paying particular attention to transitions (a.k.a. potential black holes where data disappears), and note any areas where you have little to no visibility. Are those gaps going to derail you, or can you work around them? Sometimes you won’t really know the answer, so you may have to make some approximations and press on. For example, if you have a hard time confirming that prospects showed up to meetings with your sales team, but your sales team keeps telling you how great attendance is, you can approximate that number as relatively high (say, 85%-100%), since attendance evidently isn’t a significant leak in your sales funnel. Don’t let gaps hold you back if they don’t need to.
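One way to sanity-check whether a visibility gap actually matters is to bracket it: run the numbers with a low and a high guess for the unknown step and see whether your conclusion changes. A quick sketch, with purely hypothetical figures:

```python
# Bracketing an unknown funnel step (meeting attendance) with a low and
# a high estimate to see whether the gap changes the picture.
# All numbers below are hypothetical.

def expected_sales(meetings_booked, close_rate, attendance_rate):
    """Expected sales given what share of booked meetings are attended."""
    return meetings_booked * attendance_rate * close_rate

meetings_booked = 120
close_rate = 0.30  # assumed close rate after an attended meeting

for label, attendance in [("low (85%)", 0.85), ("high (100%)", 1.00)]:
    est = expected_sales(meetings_booked, close_rate, attendance)
    print(f"{label} estimate: {est:.0f} sales")
```

If the low and high estimates point to the same decision, the gap is safe to approximate around; if they diverge, that’s the gap worth closing first.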

There’s another big inaccuracy in how a surprising number of people look at data. It comes up so often, and can be so impactful, that it gets its own special mention (and it’s near and dear to my heart): data needs to be looked at from a cohort perspective! At its simplest, cohort analysis means grouping the base units of a process by time (at least), and then grouping everything that flows from those base units under that same time period. Sounds simple, right? There are two main challenges: first, data is almost never reported this way, and second, it can be genuinely hard to assemble cohort data.

Take, for instance, leads and sales. It’s easy to show that in April you got 1,000 leads and 200 sales. It’s tempting to conclude that your lead-to-sales conversion rate is 20% (200/1,000), but that’s probably not accurate. If you have a moderate-to-long sales cycle, the sales in April likely came from leads that arrived in the months prior. A more illuminating approach is to track the leads that came in during each month and report the sales generated by April’s leads under April. If 250 of the 1,000 leads that came in during April eventually turned into sales, your real lead-to-sales conversion rate is 25% (250/1,000). This takes some extra work, since cohort sales for April will continue to accumulate in May, June, and beyond, so updates will be necessary, but the value of the insight far outweighs the extra time.
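The attribution step above can be sketched in a few lines: credit each sale to the month its lead came in, not the month the sale closed. The records here are made-up toy data, and a real version would read from your CRM export:

```python
# A minimal sketch of cohort-based conversion rates: sales are credited
# to the month their *lead* arrived, not the month the sale closed.
# All records below are hypothetical.
from collections import defaultdict

# (lead_id, month the lead came in)
leads = [(1, "Mar"), (2, "Mar"), (3, "Apr"), (4, "Apr"), (5, "Apr"), (6, "May")]

# (lead_id, month the sale closed) -- sales often close months later
sales = [(1, "Apr"), (2, "May"), (3, "Jun")]

lead_month = dict(leads)

cohort_leads = defaultdict(int)
cohort_sales = defaultdict(int)

for _, month in leads:
    cohort_leads[month] += 1
for lead_id, _ in sales:
    cohort_sales[lead_month[lead_id]] += 1  # credit the lead's cohort

conversion = {m: cohort_sales[m] / cohort_leads[m] for m in cohort_leads}
print(conversion)
```

Note that March’s cohort converts at 100% here even though both of its sales closed in later months, which is exactly the signal a month-by-month tally would hide.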

Personally, I cut the time required way down by using SharpSpring every day to get the full cohort picture, since it ties all the pieces of the process together for me. As someone who used to spend hours piecing together data from different systems, I cannot overstate how much of a game-changer a single fully integrated platform is.

Digestibility: your data doesn’t matter if no one understands it

Collecting and crunching data is only half the battle. After that, you need to put it all together in a way that can be easily understood. Once you have a deep understanding of the data, you’ll need to package it effectively for others. There are two main challenges that can cause indigestion when it comes to data: volume and clarity.

The line between too little data and too much data can be tough to find. Think about the problems you’re trying to solve, the decisions you’re trying to make, and the questions the stakeholders will have. Walk through all three of these pieces, noting the questions involved in each, and see how well the data is able to answer them. If there is data that doesn’t answer any of the questions, does it provide necessary context for data that does? If not, that’s probably data you can omit.

For the data that does answer the above questions, how well does it answer them? If you have to add qualifiers while you’re answering, such as why certain factors are obscuring the data, that’s a good sign that you don’t have enough data, and/or that you may need to break it down more granularly. Anticipate follow-up questions and see if the data can answer those, too.

Clarity is pretty easy to address, but it’s often overlooked. The two things that contribute the most to clarity are order/grouping and labeling. Ordering/grouping is easy: just make sure everything flows in a logical order (from stage to stage, chronologically, etc.). Labeling is deceptive. It seems easy, but that’s an illusion. The biggest clarity issue I see is a lack of specificity in labels. Remember that if you designed the report, dashboard, etc., you have an advantage: your labels make more sense to you than they will to someone seeing the data for the first time. Take a step back and look at it objectively. Then, do a test run with a colleague and address any points of confusion.

Keep reaching

Better, clearer data means better decisions, better outcomes, and ultimately, bigger bottom lines. The above is a good starting point, but commit to constantly improving the data you collect, utilize, and share. The effort has a huge ROI for your operation. Plus, your emperor is getting chilly, so it’s time to put some clothes on.

Joel Garland