Market experts, Itransition among them, agree that data-driven decision making has become one of the key drivers of BI market growth in recent years, which also makes big data and its business impact more prominent. Why, then, do executives still get caught in the dilemma of relying on data analysis versus their gut feel when making strategic decisions?
We believe that it shouldn’t be a choice between one or the other. As our BI consulting experience shows, a solid combination of gut feel and hard evidence is how managers can make effective decisions, garner strategic advantages, and position their companies at the forefront of competition. It also makes the case for data democratization as a crucial step in the evolution of data science, one that has made data-driven decision making possible.
After we’ve made our case, we will present five ways Itransition can help you support gut feel with big data to make better business decisions. With a customized solution built by a reliable provider, you get faster updates, faster fixes, less time spent on support, and more time dedicated to new features.
In search of a data-driven decision making culture
Data-driven decision making and gut feel often come up as two opposing extremes, with managers’ preferences often swinging toward the latter. The BI Survey research showed that 58% of the surveyed companies based half or more of their decisions on gut feel or experience rather than data. Intuition most likely remains the key tool of business decision making simply because it has been the best evaluator available for centuries, while computer-based data analysis has only recently challenged it.
Gut feel: effective yet imperfect
Gut feel, or interoception as it’s known in academic circles, is the ability to sense bodily signals such as heart rate and pain. Research connects this ability to decision making: the higher one’s interoceptive sensitivity, the better one’s decisions tend to be.
In one interesting experiment, a team of scientists measured 18 traders’ ability to correctly count their own heartbeats and correlated the results with their trading floor performance. The findings clearly show that those who were more in tune with their bodies and guessed the rate accurately were more successful at making risky decisions and more profitable than those with lower accuracy scores.
However, relying on one’s intuition also has its downsides. Following the gut feel alone makes way for cognitive biases that can distort and undermine effective decision making.
The following biases are the most common:
- Confirmation bias: looking for information that confirms pre-existing beliefs and assumptions, while discarding evidence that contradicts the decision-maker’s opinion.
- Anchoring bias: focusing intensely on one particular piece of information (usually the very first bit received) and overlooking others.
- Selective perception: failing to consider the entire range of impactful factors by arbitrarily prioritizing some and ignoring others.
- Optimism bias: overestimating positive outcomes and underestimating the probability of negative ones.
In a Forbes op-ed, Bernard Marr also warns against the so-called HiPPO effect, which means over-relying on the highest paid person’s opinion when making decisions. This effect is mostly found in larger companies that maintain a typical top-down decision making culture and, according to Marr, is one of the biggest barriers to data-driven decision making that’s backed by evidence, as opposed to a single person’s beliefs.
On occasion, the HiPPO effect can wreck a business when data analysis findings go unheeded. In an infamous example cited in CIO magazine, IBM decided to sell 50% of its ROLM unit to Siemens despite research, commissioned but then abandoned by IBM’s executives, which firmly showed the deal would be a certain failure for the entire unit. Five years later, IBM was still absorbing the financial losses that resulted from the decision.
What is data-driven decision making, and does it have any disadvantages?
So, should decision makers run with data or with their gut feel? The short answer is: both.
In the best situations, gut-based decision making should always be backed by data. In fact, both approaches share a similar mechanism, which is about deriving an insight based on information from the past. While data-driven decision making is impossible without a slew of data at hand from the past, gut feel is derived from the outcomes of similar situations in the past, which all accumulate to suggest the way toward a better result.
As described by Paolo Gaudiano, intuition becomes crucial when making sense of complex or conflicting outputs and in turn navigating the ocean of data insights. Experience needs to come into play as well. Overlooking a manager’s expertise in favor of machine-generated recommendations could lead to a mishap, since managers can often easily identify inconsistencies and unusual trends in data analysis outcomes.
In summary, truly effective decision making is only possible when data is included in the picture, though this should never devalue managerial gut feel and experience. With this in mind, the importance of data in decision making can’t be overlooked in any case.
Benefits of data-driven decision making
The numbers are telling: the BI Survey cited above shows that 60% of top-performing companies rely on data when making the majority of their business decisions. This is further reinforced by Deloitte’s 2019 survey on data-driven culture and the pervasiveness of analytics in decision making. The survey shows quite a broad scope of analytics use cases, making the future of big data in enterprises look bright:
Data analysis and decision making work perfectly together if you consider the multitude of strategic advantages that come from using them in tandem:
1. Growing business by quickly detecting new opportunities and acting on them, for example, by entering new markets or launching new products or services based on identified niches.
2. Achieving cost-efficiency by decreasing expenses and optimizing business processes, like deciding on prolonging a contract with a vendor, or cutting off poorly performing SKUs and replacing them with items that are in higher demand.
3. Boosting customer engagement through a stronger focus on the customer experience and journey. Using multiple data sources makes this possible by building detailed profiles of customers, then acting on insights to decide how best to personalize their service.
4. Improving a product or service based on customer feedback and sentiment analysis across channels in order to identify areas for improvement.
5. Refining a marketing strategy by applying predictive analytics in marketing to forecast the best-performing channels and trim or increase advertising budgets as necessary.
6. Taking HR management to a whole new level through big data analytics of employee surveys and candidate profiles in order to create a healthier internal culture and make more effective hiring decisions.
Data democratization makes it possible
Until recently, pairing data analysis and decision making was a tall order that could only be accomplished by people with advanced data science skills. However, now that two essential factors—the proliferation of data sources and the adequate data processing technology—have fallen into place, data democratization has put data-driven decision making on every executive’s agenda.
Essentially, data democratization means having the right tools for mining, visualizing, and analyzing data—all with minimum adoption barriers for non-tech users. After decades of data analysis being confined to specialized IT departments, today’s data democratization heralds a breakthrough that eliminates the need for the gatekeepers of these valuable insights. According to Chad Bocklus, President and Chief Product Officer for CarStory, analytics is what brings democratization to decision making by opening crucial data to employees and making it transparent across an organization.
While corporate policy dictates opening up data to employees, technological innovations minimize the barriers to embracing data analytics. From simple data visualization tools like Excel charts to data fabric architecture solutions, data analysis software has come a long way. Now it can finally serve up digestible and usable information, provided there is a workable BI implementation project plan in the first place.
Advances in data virtualization, master data management, and cloud computing are all responsible for this paradigm shift. The previous barriers to data-driven decision making, such as data quality, fragmentation by departmental silos, integration of multiple sources, and complex dealings with unstructured and semi-structured data, are now giving way. Techopedia even compares this evolution to the age of literacy, when common people finally gained access to the Bible and books, eventually leading to dramatic societal changes. While data democratization is unlikely to bring such a tremendous impact, it will certainly cause incremental effects in decision making cultures around the world.
How Itransition can help: data-driven decision making examples
Many enterprises struggle when it comes to choosing a strategy for working with big data. Some opt for ready-made solutions for each data source, snowballing their support for each unstructured source into a state of utter chaos, where the same bugs have to be fixed 15 times, adding to costs and stretching out schedules.
Below are five cornerstones of using big data to make business decisions, and more on how Itransition can help you implement them.
1. Get data anywhere you can
All of the data you can find (customer information, sales, follower trends, subscriptions, returning customers, the frequency of repeated purchases, and even word of mouth) should be used to drive decision making. Don’t be discouraged by the fact that you will have to deal with big data from within and outside the enterprise that comes in a variety of forms.
2. Know your data types before you start processing
We work with all types of data. Above all, knowing how to work with different data sources is crucial, and the first step in data processing is determining the type of data source. Our task as IT consultants is to create a driver for each type of data. These drivers let us work with any data type, regardless of where it came from, who provided it, or its initial storage format. By developing drivers and enabling subsequent processing, we deliver easy-to-digest results in a universally accepted format.
- CSV. Most often, we utilize a data management tool that maps out the data path and informs us of its type. We don’t expect the type to change in the process. For example, when we work with CSV data, the CSV driver launches to load the data in a convenient format for us before the next step starts.
- Binary data. We receive data from clients in binary format. This requires a custom driver since binary data is hard to parse; when we accept binary data, we already know its structure beforehand.
- Databases. Getting an entire database to work with is similar to CSV data processing, except that we don’t load data from a CSV; instead, we load it from other database tables.
- Custom formats. Sometimes business owners purchase data in custom formats. By knowing the structure of the received files, we can develop custom drivers for data processing.
- Data from APIs. In theory, we can also load data from APIs. Data processing is similar: we write a driver that converts data into convenient formats. In turn, we can build reports, merge this data with other data, and give clients processed data for further steps (such as data verification and validation, inconsistency reporting, and so on).
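To make the driver idea concrete, here is a minimal sketch of two such drivers in Python. The function names, the CSV columns, and the binary record layout are all illustrative assumptions, not Itransition’s actual implementation; the point is only that each driver turns one source format into the same uniform row structure.

```python
import csv
import io
import struct

def csv_driver(raw: bytes) -> list[dict]:
    """CSV driver: load delimited text into uniform records."""
    return list(csv.DictReader(io.StringIO(raw.decode("utf-8"))))

def binary_driver(raw: bytes) -> list[dict]:
    """Binary driver: the record layout must be known in advance.
    Here we assume each record is a 4-byte little-endian int id
    followed by a 4-byte float value."""
    records = []
    for offset in range(0, len(raw), 8):
        rec_id, value = struct.unpack_from("<if", raw, offset)
        records.append({"id": rec_id, "value": value})
    return records

# Both drivers emit the same shape of output: a list of dicts,
# ready for the next processing step.
rows = csv_driver(b"country,sales\nUK,100\n")
```

Whatever the source format, downstream steps (verification, reporting, merging) only ever see the uniform record list, which is what keeps the pipeline maintainable.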
3. Visualize and centralize your data
At this stage, we build a frontend where data processing output is uniform, easily consumable for the average user, and readily available to the right players who have the power to transform business decisions into accomplishable goals. Make sure decision-makers have real-time access to this frontend via any device and mobile network, for example by adopting mobile BI. The more you visualize, the faster the data can be consumed.
Below is an image describing big data processing done by Itransition for one of our clients. The client used different data sources (from data vendors 1, 2, 3, etc.) in varying formats. Itransition utilized a custom automated decision-maker tool called Driver Selector. The Driver Selector directed data to the appropriate Driver. From there, data was centralized in a Microsoft SQL Server database, verified (as explained below), prepared for output, and finally delivered to the frontend.
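The Driver Selector step can be sketched as a simple dispatch: inspect the incoming file and route it to the matching driver. This is a hypothetical illustration; the real selector would also consult vendor metadata rather than just file extensions, and the parser names are our own.

```python
import csv
import io
import json

def parse_csv(raw: bytes) -> list[dict]:
    return list(csv.DictReader(io.StringIO(raw.decode("utf-8"))))

def parse_json(raw: bytes) -> list[dict]:
    return json.loads(raw.decode("utf-8"))

# Registry mapping a recognizable format marker to its driver.
DRIVERS = {".csv": parse_csv, ".json": parse_json}

def select_driver(filename: str):
    """Pick the driver for a vendor file, or fail loudly."""
    for suffix, driver in DRIVERS.items():
        if filename.endswith(suffix):
            return driver
    raise ValueError(f"no driver registered for {filename!r}")

records = select_driver("vendor1_feed.csv")(b"sku,qty\nA1,3\n")
```

Adding support for a new vendor format then means registering one more driver, rather than rewriting the pipeline.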
4. Verify data
When data is not coming from a reliable source, it needs to be verified. This is a partially automated process, where we have ready-made algorithms for different types of data sources. For example, if you take in data by country, one way to catch inconsistencies is by paying attention to each particular country’s geographical divisions. If we are supposedly working with data related to the United Kingdom yet receive tables containing information on federal states, we reject this data, since it most likely relates to the USA.
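A geography check of this kind is easy to sketch. The region lists below are truncated and the function name is our own, purely for illustration: data labeled as UK data is rejected if its regions look like US federal states.

```python
# Truncated, illustrative lists of administrative divisions.
US_STATES = {"Texas", "California", "Ohio", "New York"}
UK_REGIONS = {"England", "Scotland", "Wales", "Northern Ireland"}

def check_country_data(declared_country: str, regions: list[str]) -> str:
    """Reject UK-labeled data whose divisions look like US states."""
    if declared_country == "United Kingdom" and set(regions) & US_STATES:
        return "rejected"  # most likely mislabeled US data
    return "accepted"

check_country_data("United Kingdom", ["Texas", "Ohio"])  # -> "rejected"
```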
The system of approval consists of the following levels:
- Level 1. All data is loaded into a database, but each type of data has a different approval level. First, it is brought into the system, checked by our automated tests, and, if verified, passes the first level of approval. This data is not yet ready for the frontend.
- Level 2. Reports are made and checked by a responsible party who confirms the validity of the data.
- Level 3. The data goes to the analyst who works with this type of data source, and if they legitimize it, we change the approval type for this data and show it to the end users. At this stage, it becomes available for further action.
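The tiered approval flow above can be modeled as a record carrying its current approval level, promoted one level at a time and hidden from end users until the final sign-off. This is a minimal sketch with made-up level names; the actual checks at each level (automated tests, report review, analyst sign-off) would live elsewhere.

```python
from dataclasses import dataclass

# Illustrative level names; 0 = just loaded into the database.
LEVELS = ("loaded", "auto_verified", "report_confirmed", "analyst_approved")

@dataclass
class Record:
    data: dict
    level: int = 0  # index into LEVELS

    @property
    def visible_to_end_users(self) -> bool:
        return LEVELS[self.level] == "analyst_approved"

    def promote(self) -> None:
        """Advance one approval level after the relevant check passes."""
        if self.level < len(LEVELS) - 1:
            self.level += 1

rec = Record({"country": "UK", "sales": 100})
rec.promote()  # Level 1: automated tests passed
rec.promote()  # Level 2: report confirmed by a responsible party
rec.promote()  # Level 3: analyst sign-off; now shown to end users
```

Keeping the level on the record itself means the frontend can simply filter on `visible_to_end_users` and never accidentally surface unverified data.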
Let’s describe data verification carried out by Itransition in one of our previous projects.
The client provided raw data, which we then processed with our Automated Data Verification System. The system performs three functions: checking for consistency, updating the algorithm, and adjusting the data. At the consistency check stage, data can be either validated and immediately prepared for the output, or analyzed for inconsistencies. A blatant inconsistency can lead to automatic data rejection. When further information is needed, Itransition consults the client, tweaks the algorithm, relaunches it, and receives adjusted data that has been prepared for the output. The next time that same inconsistency is discovered, it will be included in the algorithm, thereby speeding up the entire data verification cycle.
It’s essential to know which data verification challenges are likely to come up. We are often faced with text and numerical changes that need to be verified or rejected. If the configuration changes dramatically, we alert the client that the structure has changed, which is sometimes approved but can also lead to algorithm rewriting.
Below is an image describing an example of data verification with a client when it comes to a country name inconsistency. Data sources 1 and 2 have country names in the correct format, and are therefore labeled as valid data and sent to the output. Data source 3 needs to be verified. Once the client confirms that “S. Afr.” means “South Africa,” this information is added to the system, and the data gets adjusted and prepared for the output.
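This learn-once verification loop can be sketched as a lookup against known country names plus a client-confirmed alias map. The country list, function names, and statuses below are illustrative assumptions; the key behavior is that an inconsistency confirmed once is resolved automatically on every later run.

```python
# Illustrative, truncated list of canonical country names.
VALID_COUNTRIES = {"South Africa", "United Kingdom", "Germany"}

# Aliases confirmed by the client are remembered here.
alias_map: dict[str, str] = {}

def verify_country(name: str) -> tuple[str, str]:
    """Return a (status, canonical_name) pair for one country value."""
    if name in VALID_COUNTRIES:
        return "valid", name
    if name in alias_map:
        return "adjusted", alias_map[name]
    return "needs_review", name

def confirm_alias(alias: str, canonical: str) -> None:
    """Client confirms an alias once; future runs adjust it automatically."""
    alias_map[alias] = canonical

status, _ = verify_country("S. Afr.")      # -> "needs_review"
confirm_alias("S. Afr.", "South Africa")   # client confirms once
status, name = verify_country("S. Afr.")   # -> ("adjusted", "South Africa")
```

Each confirmed alias shortens the next verification cycle, which matches the speedup described above.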
Master your data-driven decision making tools
Decision making is one of the most critical business functions delegated to managers. With today’s upsurge in data availability, the question of whether to use data for this purpose or to resort to good old intuition is more pressing than ever.
The answer is, neither one should exclude the other. Although gut feel is subject to cognitive biases, it can serve as a solid double-check for analytical findings based on the manager’s experience and contextual awareness. At the same time, advanced data analysis, previously reserved for scientists and now available to the general public thanks to data democratization, allows decision-makers to mine petabytes of data for valuable insights and make sense of the outputs through big data visualization techniques.
When combined, both approaches are likely to bring in competitive advantages by helping managers refine and validate their strategies—in product management, marketing, human resources, customer experience, and other domains.
And even though managing big data is never a walk in the park, using the tips mentioned above and teaming up with a reliable business intelligence consultant can really help you reap all of these benefits.
Technology has arrived, now it’s up to business decision-makers to embrace this cultural shift for competitive gains.