Generating Payback from Your Analytics Investments
Data has been the bedrock of all analytics investments. The pursuit of organized data management has veered off into large investments, first in data warehouses and then in data lakes. A survey by Gartner suggests that 85% of big data or data warehouse projects have failed. More astonishingly, only 8% of the projects that did succeed seem to have generated business value. In substance, the de facto path to superior analytics, leading through the data warehouse or data lake, has left organizations lost in a maze of concepts relating to data relationships, data lineage, data models, metadata management, and indexing. All these investments were meant to create a solid foundation from which organizations could vault into future-proof analytics use cases. Instead, they have largely delivered academic enhancements to data frameworks and methods without meeting the on-the-ground objectives of the end use case.
If the above sounds familiar, you have probably also seen this script play out over the years. You have either been a part of, or heard whispers in the corridors about, the ubiquitous data warehouse/lake project that would be the key to delivering on the promise of superior analytics. The project receives a large budget and a large team comprising business team members, business analysts, and developers, all geared up to deliver this outcome. Showstoppers keep emerging as the project goes along and are mostly addressed with more budget and more people. A year or two goes by: the showstoppers persist, the original objective is forgotten, interest from the business wanes, people drop out, and budgetary questions mount. This often ends in a decision to 'shelve it for the time being', which is code for 'sweep it under the carpet and move on'. All the while, other projects were paused or put on hold on the promise of a centralized data management solution.
Supporters of the data warehouse approach still argue that superior analytics was never its only true purpose. Data unification, a central data repository, metadata cataloging, timely availability of data, and organization-wide data security are also reasons for a data warehouse to exist. But with the advent of API-based open standards, systems can now talk directly to each other in real time. Streaming technology, cloud-enabled data management services, and the accelerated movement to purpose-specific software-as-a-service (SaaS) models have made a central data repository approach difficult for organizations to sustain. In the new paradigm, the churn and movement of data are rapid, and data pipelines need to be built across these systems to consume, transform, and deliver usable analytics to business users on a near-real-time, on-demand basis.
To make your analytics investments pay back, it is important to revisit and re-imagine the strategy. The strategy needs to follow a 'do-small-think-big' approach, with analytics working on a 'scale-out' model. This ensures that immediate analytical needs are delivered rapidly while also allowing the organization to scale out to meet the ever-changing needs of the business in the future.
What does an organization need to do?
1. Focus on the business question: The first step in delivering successful superior analytics is a well-articulated business objective and outcome. Look at the analytics currently being produced in spreadsheets to understand how they are used and the outcomes they need to achieve. This front-to-back strategy ensures that the project has a well-defined, realistic goal and that the end outcome is clear and well understood by all stakeholders. If you aren't delivering each outcome within four weeks, the project is too big to succeed.
2. Create a distributed data management strategy: Build a data strategy around the underlying data formats and the latency requirements of the data elements needed for each specific outcome. Do not attempt to unify data by moving it; unify data only as far as the end outcome requires. Data can remain distributed and be called upon on demand to meet the analytical objective. Focus on data streaming, flexible and modifiable data transformations, and building on-the-fly data relationships for the specific analytical objective; the first sketch after this list illustrates such on-demand unification.
3. Invest in an application with a strong data management layer: Most analytics initiatives focus on the visualization capabilities of the application at the time of evaluation and pay very little attention to the underlying data management layer. While the importance of good UI/UX built on powerful visual capabilities cannot be argued, it is equally if not more important to have a data management layer architected for size and compute, so that analytics can scale as required without compromising functionality or performance. A data management layer does not mean a data warehouse. It is simply an application that allows you to write dynamic rules, create dynamic data relationships, undertake data streaming, and manage APIs to support on-demand analytics at a rapid pace; the second sketch after this list illustrates such a layer.
4. Architect your data structures to be flexible: Traditional approaches to data modeling need to be replaced with a modern approach in which data tables can be created and modified at will. When starting with a specific use case, make sure that data elements likely to be re-used in future use cases are clearly identified and earmarked. Ensure that a visual dictionary and search of all data elements is available whenever new data elements are added, so that the same data is not present in multiple unrelated tables in the data layer. Focus on data discovery within the data layer rather than on defining a universal data set or universal data model; the third sketch after this list shows a minimal data-element dictionary.
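To make the distributed strategy in point 2 concrete, here is a minimal sketch in Python using pandas. The file names, column names, and join key are hypothetical; the point is that the two sources stay where they live and are pulled together only at query time, for one specific outcome.

```python
# A minimal sketch of on-demand data unification: the sources stay
# distributed, and data is pulled and joined only when an outcome needs it.
# File names and column names below are hypothetical.
import pandas as pd

def customer_revenue_report() -> pd.DataFrame:
    """Answer one business question by unifying two sources on the fly."""
    # Source 1: an export from the CRM system, left where it lives.
    customers = pd.read_csv("crm_customers.csv")    # columns: customer_id, region
    # Source 2: an order feed from the billing system.
    orders = pd.read_csv("billing_orders.csv")      # columns: customer_id, amount

    # Build the data relationship on the fly, only for this outcome;
    # nothing is copied into a central repository.
    joined = customers.merge(orders, on="customer_id", how="inner")
    return joined.groupby("region", as_index=False)["amount"].sum()

if __name__ == "__main__":
    print(customer_revenue_report())
```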
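Point 3 describes a data management layer as an application rather than a warehouse. The sketch below uses FastAPI as one possible (not prescribed) way to expose dynamic rules over an API; the rule name, endpoint path, and sample data are hypothetical.

```python
# A minimal sketch of a data management layer as an application, not a
# warehouse: dynamic rules registered at runtime, served over an API.
# FastAPI is one possible choice; rule and endpoint names are hypothetical.
from fastapi import FastAPI, HTTPException

app = FastAPI()

# Dynamic rules: plain functions registered by name, so new analytics
# can be added or changed without rebuilding a central data model.
RULES = {}

def rule(name):
    def register(fn):
        RULES[name] = fn
        return fn
    return register

@rule("average_order_value")
def average_order_value():
    orders = [120.0, 80.0, 200.0]   # stand-in for an on-demand data fetch
    return sum(orders) / len(orders)

@app.get("/analytics/{rule_name}")
def run_rule(rule_name: str):
    """Compute a metric on demand instead of serving a pre-aggregated copy."""
    if rule_name not in RULES:
        raise HTTPException(status_code=404, detail="Unknown rule")
    return {"rule": rule_name, "value": RULES[rule_name]()}

# Run with: uvicorn this_module:app --reload
```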
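For point 4, a searchable dictionary of data elements can start as simply as the registry below; all element names and fields are illustrative assumptions, not a prescribed schema. The idea is that searching the catalog before adding an element keeps the same data from landing in multiple unrelated tables.

```python
# A minimal sketch of a searchable data-element dictionary: before adding
# a new element, search for an existing one so the same data is not
# duplicated across unrelated tables. All element names are illustrative.
from dataclasses import dataclass

@dataclass
class DataElement:
    name: str
    table: str
    description: str

CATALOG: list[DataElement] = []

def register(element: DataElement) -> DataElement:
    # Refuse duplicates: hand back the existing element for re-use instead.
    for existing in CATALOG:
        if existing.name == element.name:
            return existing
    CATALOG.append(element)
    return element

def search(term: str) -> list[DataElement]:
    """Discover existing elements before creating new tables or columns."""
    term = term.lower()
    return [e for e in CATALOG
            if term in e.name.lower() or term in e.description.lower()]

register(DataElement("customer_id", "crm_customers", "Unique customer identifier"))
register(DataElement("order_amount", "billing_orders", "Gross order value in USD"))
print([e.name for e in search("customer")])   # -> ['customer_id']
```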
As per Gartner, barely 20% of analytics projects make it into production. Data may be the new oil, but data warehouses often end up hampering its flow. A scale-out analytics strategy therefore helps convert data into actionable intelligence. Implementing it requires shifting the focus away from idealistic data models to flexible data structures, replacing data centralization with virtual data unification, and replacing latency-heavy data aggregation with data streaming and APIs.
Author
Ms. Viraj Shah, Leader, Business Solutions, Acies
Viraj leads Acies' Business Solutions practice, enabling clients to leverage the power of new-age technologies to drive efficiencies in process & workflow digitization, performance management, data management, and data analytics. In her current role, Viraj works closely with leading financial institutions and corporates to support their digital innovation programs. She specializes in redesigning technology and data architectures that leverage next-generation technologies to deliver timely and cost-effective automation across a range of use cases.
Over her career, Viraj has led treasury, data management, system selection, and process automation engagements for multiple financial services and corporate clients. She is passionate about harnessing the power of modern technologies to solve complex business challenges and has worked with global developmental agencies to redesign credit operating models to bring greater transparency to credit sanction for microfinance institutions.