Maintaining a successful enterprise technology strategy can be a challenge for many organizations. With an abundance of data entering the enterprise every day, businesses can find themselves struggling to differentiate between irrelevant and relevant data, inconsistent and consistent data, and inaccurate and accurate data.
Unfortunately for most, the irrelevant, inconsistent and inaccurate data, categorized as “poor data,” is often the reason businesses lose their competitive edge. The combination of poor data along with not knowing what’s in an operating environment can lead to cyberattacks, software audits and overall bad business decisions. It’s concerning that poor, irrelevant data could be clogging up your enterprise.
However, with the right tools and strategies, enterprises can take a powerful, proactive and protective stance by identifying “accurate data,” data that is up-to-date, doesn’t have duplicates, and is consistent and complete with additional relevant information. Accurate, relevant data enables enterprises to enrich their data with relevant market data points and helps in the identification and maintenance of assets in an enterprise.
Here are three ways to identify and address relevant data from poor data:
Discover and Separate Irrelevant and Relevant Data
You can’t manage enterprise technology data if you don’t know what’s in your ecosystem inventory. The first step to getting accurate, up-to-date data entails gaining complete visibility throughout your enterprise. Through analysis and discovery, organizations get clear insight into their hardware and software assets, as well as assets brought in by shadow IT, BYOD and consumerization.
Next, create a common universal language and categorize the assets as relevant or irrelevant to establish a single source of truth. By discovering and then reconciling, organizations will not only be able to understand what data is being collected and stored, but also identify irrelevant data. Research shows that only 5 percent of raw data is relevant, leaving the remaining 95 percent irrelevant. Not only is irrelevant data not useful, but it can also increase your risk of non-compliance.
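As a minimal sketch of this categorization step, a reconciled inventory can be split into relevant and irrelevant buckets by checking each discovered record against a catalog of known, business-relevant titles. The catalog entries and record fields below are illustrative assumptions, not a specific product's schema:

```python
# Hypothetical catalog of titles the business actually cares about.
CATALOG = {"microsoft office", "oracle database", "adobe acrobat"}

def is_relevant(record):
    """A record is relevant if its title matches a catalog entry."""
    return record.get("title", "").strip().lower() in CATALOG

raw_records = [
    {"title": "Microsoft Office", "host": "pc-001"},
    {"title": "temp_build_artifact", "host": "pc-001"},
    {"title": "Oracle Database", "host": "srv-042"},
]

relevant = [r for r in raw_records if is_relevant(r)]
irrelevant = [r for r in raw_records if not is_relevant(r)]
```

In practice the catalog would come from a curated reference source rather than a hard-coded set, but the principle is the same: a record is only as relevant as the source of truth it can be matched against.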
For instance, when companies receive an audit notice, it usually entails teams scrambling to compile the relevant data (an extremely time-intensive process) and then, if they are ultimately found to be out of compliance, paying the price. By separating relevant from irrelevant data, organizations can save themselves from a vicious audit cycle and ultimately save time, money and resources.
A common problem enterprises face is duplicated data resulting from multiple entries for a single piece of software. Creating and maintaining records for software licenses, license agreements and cumulative licenses is not an easy task for any organization. Other activities, such as mergers or acquisitions, further compound the challenge of preserving data integrity and relevancy.
One might consider a different tack: examining the data to understand what should stay and what should go, rather than assuming the data is flawed. Organizations can also create a single data management process for software licenses based on their existing data, an approach used with IT Asset Management (ITAM). Furthermore, existing and new records from disparate systems can be matched and then merged to help create a consolidated report.
This process is called normalization. It takes data in all its iterations within the enterprise environment (differing vendor names, variable ways of referring to software and hardware components, along with devices) and creates a universal language that can be used throughout the enterprise. Not only can organizations identify and delete duplicate, inconsistent data; normalization also saves the time and money otherwise spent manually updating and tracking assets as they are added to or removed from the network.
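The normalization step described above can be sketched in a few lines: map the many ways a vendor or product appears in raw records to one canonical form, and duplicates that the variants were hiding become trivially detectable. The alias table and record fields are assumptions for illustration; a real ITAM catalog would be far larger:

```python
# Hypothetical alias table mapping raw vendor strings to canonical names.
VENDOR_ALIASES = {
    "microsoft corp.": "Microsoft",
    "microsoft corporation": "Microsoft",
    "msft": "Microsoft",
    "oracle corp": "Oracle",
}

def normalize(record):
    """Rewrite a raw inventory record into canonical vendor/product form."""
    raw_vendor = record["vendor"].strip().lower()
    vendor = VENDOR_ALIASES.get(raw_vendor, record["vendor"].strip())
    return {"vendor": vendor, "product": record["product"].strip().title()}

raw = [
    {"vendor": "Microsoft Corp.", "product": "office 2019"},
    {"vendor": "MSFT", "product": "Office 2019"},
    {"vendor": "Oracle Corp", "product": "database 19c"},
]

# After normalization, the two Microsoft Office variants collapse into one.
normalized = {(r["vendor"], r["product"]) for r in (normalize(x) for x in raw)}
```

The design point is that deduplication is done on the normalized keys, never on the raw strings, which is why establishing the universal language has to come first.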
Address Incomplete Data
A huge part of the information needed for enterprise decision-making is not discoverable because it originates outside IT, or because it lacks key data points such as release dates, end-of-life dates or version numbers. For instance, the purchase order (PO) is often the only business record containing information about what was procured, and most POs tend to be incomplete.
Additionally, organizations have multiple data sources of inventory, including discovery solutions such as HPE Discovery and Dependency Mapping Inventory (DDMI) or HPE Universal Discovery (UD) and client management solutions such as Microsoft System Center Configuration Manager (SCCM), making it hard for IT, Purchasing and other departments to reconcile the data. If the reconciled data doesn't contain all the relevant information required, enterprises become vulnerable to attack.
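A simple way to see the reconciliation problem is to compare the host lists reported by two sources. In this sketch the two inventories and host names are invented placeholders standing in for, say, a discovery tool's output and a client-management tool's output; basic set arithmetic surfaces exactly where the sources disagree:

```python
# Placeholder inventories from two hypothetical sources.
discovery_inventory = {"pc-001", "pc-002", "srv-042"}  # e.g. network discovery
client_mgmt_inventory = {"pc-001", "pc-003"}           # e.g. a managed-client tool

in_both = discovery_inventory & client_mgmt_inventory
# Seen on the network but not managed: possible shadow IT or BYOD.
only_discovered = discovery_inventory - client_mgmt_inventory
# Managed but not discovered: possibly offline, retired, or unreachable.
only_managed = client_mgmt_inventory - discovery_inventory
```

Each of the three buckets calls for a different follow-up, which is why reconciling sources, rather than trusting any single one, is the prerequisite for complete data.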
A recent BDNA report found that old software and hardware assets past their support dates, living invisibly within the enterprise, have been the cause of some of the most notorious and harmful data breaches in recent years.
To create truly reliable and complete data, look to discover these often-overlooked market intelligence points, including:

- Product end-of-life dates
- Non-discoverable software-licensing information
- Hardware specifications
- Accurate reports on products from vendor accounts
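The first of these points, end-of-life dates, lends itself to a short enrichment sketch: join each inventory record against an external reference table and flag anything past its support date. The lookup table structure and field names here are assumptions; the Windows 7 end-of-support date (January 14, 2020) is public record:

```python
from datetime import date

# Hypothetical market-intelligence table keyed by (vendor, product, version).
EOL_DATES = {
    ("Microsoft", "Windows", "7"): date(2020, 1, 14),
    ("Microsoft", "Windows", "10"): date(2025, 10, 14),
}

def enrich(asset, today):
    """Attach the end-of-life date and an 'unsupported' flag to an asset."""
    key = (asset["vendor"], asset["product"], asset["version"])
    eol = EOL_DATES.get(key)  # None if the reference table has no entry
    return {**asset, "eol": eol, "unsupported": bool(eol and eol < today)}

asset = {"vendor": "Microsoft", "product": "Windows", "version": "7"}
enriched = enrich(asset, today=date(2024, 1, 1))
# enriched["unsupported"] is True: the asset is past its support date,
# exactly the kind of invisible breach risk described above.
```

Note that enrichment only works on normalized keys; a record whose vendor is still recorded as "MSFT" would silently miss the lookup, which ties this step back to normalization.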
Establishing an automated process for discovering and categorizing enterprise asset inventories is a crucial step in identifying what exists in the enterprise. Taking the time to differentiate relevant data from poor data will go a long way toward ensuring that enterprises can make better business decisions. Whether it is in the form of better understanding relevant data or completing incomplete data, identifying accurate data can be the difference in ensuring business success.
(About the author: Walker White is president of BDNA Corporation. Joining the company in 2002, he originally served as chief technology officer before becoming president in 2014. Prior to BDNA, Walker had a 13-year career at Oracle, working as chief technologist in Oracle Service Industries.)