4 Major Ways to Use Big Data Right—and it’s not about Ethics


Big data has vast potential to change the way we run businesses. Last year, over 27% of Fortune 1000 executives said they started seeing a range of benefits from their big data initiatives, from decreasing expenses to creating a data-driven culture.

Big data and its business impacts can be seen everywhere in internal processes—they can define how decisions are made, strategies are created, and customer relationships are sustained. Sometimes, though, just adopting some marketed BI solution is not enough. It’s also about a growing set of obligations that arise from launching a big data project—like having to keep your data governance impeccable and workflows well-defined for such initiatives to pay off. Still, there’s more to it than that.

Our big data consulting team outlined four major ways in which businesses can use big data to reap the benefits above. Although ethics of collecting and using customer data is a big issue now, especially with GDPR coming into force, this guide doesn’t deal with ethics and internal data collection policies per se but rather with how businesses need to integrate and use business intelligence technologies in their daily operations.

But first, let’s look at how today’s CIOs perceive the importance of big data to understand its role for enterprises.

Big data: a blessing or a curse?

When IDG surveyed 186 Heads of IT in its CIO Tech Poll: Tech Priorities 2018, predictive analytics was on the priority list of budget allocation for 47% of the respondents, with 37% having it on their radar. As we know, predictive analytics is only possible with large data sets on hand, especially when it’s aided by machine learning algorithms.

This predisposition for big data projects comes up again in the report: for 57% of respondents, new technology partnerships in big data and analytics are on the horizon. It also shows that 60% of the surveyed executives are planning to increase their spending on BI and analytics in general.

This upward trend shows no sign of stopping, but adopters still need to be well-equipped for such data-centered transformations. For example, the sheer scale of big data makes regulatory compliance harder for at least 68% of organizations, according to Experian’s 2018 Global Data Management Research.

Big data does need to be harnessed, so we came up with this checklist for you to see how prepared you are before embarking on a big data journey.

#1 Keep it clean

This may sound trite by now, but poor data quality is still taking its toll on businesses worldwide. In their combined research, Experian, Experience Matters and Strategic IT Partners consultants together with Thomas Redman of Data Quality Solutions estimated the cost of bad data to be between 15% and 25% of revenues.

With big data, mistakes just get bigger. Irrelevant, duplicated, missing, incorrect, mistyped, or poorly integrated data affects every side of the business that relies on data: human resource management, customer relationships, supply chains, finances, compliance, and more.

Poor data quality may result from human error, malfunctioning machines, or data being transferred incorrectly and becoming corrupted. Even minor errors in big data processing can have a detrimental impact if they influence decision-making or affect customer satisfaction. In one notorious case in 2014, Bank of America sent an envelope to a customer addressed to “Lisa Is A Slut McIntire,” a result of human error that a big data screening tool seemingly failed to both detect and fix.

One way or another, poor-quality data will show its teeth, and there’s no way around it except investing in master data management (MDM) and data cleansing.

As an umbrella term, MDM includes a range of automated processes that all serve the goal of delivering a single point of truth and reference. In the organizational setting, it is a central hub that accumulates data from a range of sources and then shares it between other internal systems without creating duplicates.
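To make the hub idea concrete, here is a minimal sketch of an MDM-style “golden record” merge. All names here (source systems, fields, the shared `id` key) are hypothetical, and a real MDM platform would add matching, survivorship rules, and auditing on top of this:

```python
def merge_sources(*sources):
    """Consolidate records from several systems into one record per ID.

    Later sources fill in fields the earlier ones left empty, so the hub
    holds a single point of reference without creating duplicate entries."""
    golden = {}
    for source in sources:
        for record in source:
            entry = golden.setdefault(record["id"], {})
            for field, value in record.items():
                if value not in (None, ""):   # keep the first non-empty value seen
                    entry.setdefault(field, value)
    return golden

# Hypothetical extracts from two internal systems sharing a customer ID
crm = [{"id": 1, "name": "Jane Smith", "email": ""}]
billing = [{"id": 1, "name": "Jane Smith", "email": "jane@example.com"},
           {"id": 2, "name": "John Doe", "email": "john@example.com"}]

records = merge_sources(crm, billing)
# ID 1 appears once, with the email filled in from billing; no duplicates
```

The design choice worth noting is that the hub, not the consuming systems, decides how conflicting or missing fields are resolved, which is exactly what makes it a single point of truth.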

You can see the potential problem here: since everyone works with a single version of the data, any error in it will snowball from one data user to another. To prevent this, data cleansing and validation processes should be in place, and it’s not only about having software to perform this task. It’s also about data governance policies and user training sessions on how to fill in, audit, and process data correctly.

Be critical of your data and don’t just assume it’s correct. If an analysis result seems unexpected or suspicious, check the data’s validity by searching for errors and fixing them. Consult data analysts for an explanation, or visualize the data to find extreme outliers and their causes. When data is transferred, be extra alert to possible errors and corruption.
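One of those sanity checks can be automated in a few lines. This sketch flags extreme outliers using a modified z-score based on the median absolute deviation, which is a common rule of thumb for small samples (the 3.5 cutoff is conventional, not a standard); the sample numbers are invented:

```python
from statistics import median

def find_outliers(values, threshold=3.5):
    """Flag values whose modified z-score (based on the median absolute
    deviation) exceeds `threshold` -- robust even in small samples,
    where a single huge value can mask itself in a plain z-score."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:                      # all values identical around the median
        return []
    return [v for v in values if 0.6745 * abs(v - med) / mad > threshold]

daily_orders = [102, 98, 110, 95, 105, 99, 4500]   # 4500 looks like a typo
print(find_outliers(daily_orders))   # → [4500]
```

A check like this won’t tell you *why* the value is wrong, but it tells you where to start asking your analysts questions.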

#2 When you migrate, do it wisely

In some cases, poor data quality is the direct consequence of a migration project gone wrong. For example, now migrating to the cloud is one of the biggest trends, yet adoption of cloud computing poses a few risks in its own right.

Data migration failures can result from poorly documented legacy systems, lack of well-described project requirements or thorough testing practices, but the outcome is likely to be the same. That is fractured, lost or incorrect data, which sends us back to the first point above.

To avoid this scenario, data migration projects need to involve specialists from all departments, including real data users. These user groups should collaborate with the data migration team of engineers and executives who will guide the project toward meeting the business requirements.

Strategy is also important. That’s why it’s necessary to allocate powerful system resources to support data migration and map out steps for each stage: before, during, and after the migration.

Before you migrate, check the quality of existing data and validate business rules, redefining them if necessary. Choose a fitting strategy and draw a careful roadmap based on the migration scope and budget. Migrate in logical iterations, sticking to a realistic timeline. Add ongoing data testing and evaluation to each of the project lifecycle stages. Don’t skip having both tech specialists and your data users validate the migration results.
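Part of that validation step can be scripted. Here’s a minimal sketch of a post-migration check, assuming both systems can dump a table as rows of tuples (the table contents and names are illustrative; a real pipeline would also compare schemas and spot-check individual records):

```python
import hashlib

def table_fingerprint(rows):
    """Order-independent fingerprint of a table: row count plus a combined
    hash, so source and target can be compared without a row-by-row diff.
    XOR-combining is a sketch, not a cryptographic guarantee: duplicate
    rows cancel out, so pair it with the row count."""
    digest = 0
    for row in rows:
        h = hashlib.sha256(repr(row).encode()).hexdigest()
        digest ^= int(h, 16)          # XOR makes the result order-independent
    return len(rows), digest

source_rows = [(1, "Alice"), (2, "Bob"), (3, "Carol")]
target_rows = [(2, "Bob"), (1, "Alice"), (3, "Carol")]  # same data, new order

assert table_fingerprint(source_rows) == table_fingerprint(target_rows)
print("row counts and checksums match")
```

Running a check like this per table, per iteration, turns “the migration looks fine” into something you can actually sign off on.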

#3 Don’t leave self-service BI users alone

Self-service BI tools bring enormous opportunities to run ad-hoc big data analysis, minimize requests to IT departments, and give business users better visibility into performance and productivity.

It is these benefits that drive enterprises to make self-service BI applications one of their highest strategic priorities, as reported in the Advanced and Predictive Analytics Market Study by Dresner Advisory Services.

However, some of the users of advanced and predictive analytics, like business, financial and marketing analysts, won’t necessarily succeed when dealing with such applications.

The first pitfall is the lack of role-tailored customization. Part of self-service BI success is creating readily comprehensible interfaces and dashboards based on end users’ needs and preferences. Yet these users can be the ones left out of the picture when the actual system is developed and deployed. Skipping the step of interviewing end users and incorporating their feedback upfront can keep user buy-in low simply because they find the reports and layouts irrelevant.

Another potential pitfall is the low technical proficiency of self-service BI users. While analysts may know their job well, it’s unlikely they can start using a completely overhauled analytical platform right away with no hassle. To counterbalance this risk, invest in user training—in the form of typical demos and ongoing learning sessions aimed at educating the BI workforce about the technical implications of self-service BI systems. In short, it’s better not to overestimate your users’ technical skills, and to prevent productivity losses upfront.

#4 Overcome internal bottlenecks

At large enterprises, different data types are scattered across different departments, stored there for different purposes and owned by different teams that often don’t communicate as efficiently as they should. Sometimes, one department stays unaware of the data held by another department, and therefore can’t act on it. This can mean missing out on the tremendous value that such cooperation could bring.

Such internal bottlenecks can be caused both by the lack of cross-departmental collaboration and by poor system interoperability. Either way, remember that departmental divisions and information silos should not interfere with data dissemination throughout the enterprise.

A good starting point would be to bring together employees with the authority to enforce shared data policies in their respective teams. Here, you can rely on your most data-savvy professionals to help the rest of the organization use data for their goals and to minimize bottlenecks and lags in data delivery.

System interoperability, in turn, should be addressed at the technical level, providing access that satisfies security and efficiency standards. First, map out data consumers and their respective roles to create a role-based access model with well-defined user rights. This helps you avoid data privacy issues and cases of intentional or unintentional data misuse. Second, maintaining common data formats is key to using this data across internal systems.

The steps to overcoming internal bottlenecks include but are not limited to the following:

  • Filter your data according to users’ roles, areas of expertise, interests, and responsibilities to limit the volume of data they have to see and deal with.
  • Prioritize valuable data by relevance to users within these specific roles.
  • Make sure you distribute the latest data as soon as it becomes available, preferably through an alert system urging users to check their dashboards and analytical tools at once.
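The first two steps above can be sketched in a few lines. This is a toy illustration of role-based filtering—the roles, field names, and records are all hypothetical, and a production system would enforce this in the database or BI layer rather than in application code:

```python
# Hypothetical mapping from roles to the fields each role may see
ROLE_FIELDS = {
    "finance_analyst": {"order_id", "amount", "payment_status"},
    "support_agent":   {"order_id", "customer_name", "ticket_status"},
}

def filter_for_role(records, role):
    """Return records trimmed down to the fields the role is entitled to see,
    so each user gets relevant data without exposing everything."""
    allowed = ROLE_FIELDS.get(role, set())   # unknown roles see nothing
    return [{k: v for k, v in r.items() if k in allowed} for r in records]

orders = [{"order_id": 7, "amount": 120.0, "payment_status": "paid",
           "customer_name": "J. Doe", "ticket_status": "open"}]

print(filter_for_role(orders, "finance_analyst"))
# finance analysts see payment data but not the support-side fields
```

Defining the role-to-fields mapping in one place, as here, also gives you a single artifact to review when data privacy rules change.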

When technology meets strategy

Like any enterprise innovation, big data projects come at a cost, yet this cost can be well-balanced with an appropriate degree of preparation. Big data and its business impacts are tremendous, and over a quarter of Fortune 1000 executives have already started enjoying the benefits. The secret sauce for this success is likely to be a well-thought-out strategy for developing and adopting big data management solutions.

This guide summed up four areas where big data will demand your attention—from data quality and migration precautions to self-service BI proficiency and internal bottlenecks. Just remember that all of them can affect the value of big data initiatives in their own way, and addressing them is much easier before you set off.