Data Gravity: No Industry is Immune from Its Disruption
Data Gravity refers to the pull that data exerts on its surroundings: large data sets attract the rollout of new applications and services, which in turn cause even more data to accumulate. Ultimately the data grows so quickly that it becomes unmanageable and difficult to understand and use.
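To make the feedback loop concrete, here is a minimal Python sketch of the dynamic (an illustration with made-up constants, not McCrory's formulation): stored data attracts applications in proportion to its size, and each attached application generates more data.

```python
# Illustrative sketch of the data gravity feedback loop:
# larger data sets attract more applications, and each attached
# application generates more data. All constants are arbitrary
# assumptions chosen only to show the compounding effect.

ATTRACTION_RATE = 0.02   # apps attracted per TB of data per year (assumed)
DATA_PER_APP = 5.0       # TB of new data each attached app adds per year (assumed)

data_tb = 100.0          # starting data set size in terabytes
apps = 10                # applications already attached to the data

for year in range(1, 6):
    new_apps = int(ATTRACTION_RATE * data_tb)   # growing data attracts services
    apps += new_apps
    data_tb += apps * DATA_PER_APP              # attached services add more data
    print(f"Year {year}: {apps} apps, {data_tb:,.0f} TB stored")
```

Even with small constants, the loop compounds: more data attracts more applications, which add more data, which is why detaching workloads from the data set gets harder every year.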
“Workloads with the largest volumes of stored data exhibit the largest mass within their universe, attracting applications, services, and other infrastructure resources into their orbit. As these massive stored data sets grow, it can become harder to detach applications and services from the data on which they rely. As a result, applications and services end up having to move to the data sets, or remain near the data sets, in order to fully operate. In some cases, these massive data sets risk becoming black holes, trapping stored data, applications, and services in a single location,” according to an IDC report.
Curtis Breville, Field CTO at Ahead, wrote in RTInsights that “data gravity planning must also ask the right questions. What different kinds of data are we going to keep at this new location? What are we expecting from it? Who needs answers, and how fast are those answers needed? How much data can and should be housed at this location? What is the cost/penalty for slower answers or the cost of managing, protecting, copying, and storing that growing amount of data?”
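Questions like these can be grounded with a simple back-of-the-envelope model. The sketch below is hypothetical throughout (the function, parameter names, and every figure are assumptions, not benchmarks); it weighs storage cost against the penalty of slower answers when choosing where to house a data set.

```python
# Hypothetical model for "what is the cost/penalty for slower
# answers" at a candidate data location. All figures are
# illustrative assumptions, not real prices or benchmarks.

def monthly_cost(data_tb: float, storage_per_tb: float,
                 latency_ms: float, queries_per_month: int,
                 penalty_per_ms_query: float) -> float:
    """Total monthly cost = storage cost + latency penalty across queries."""
    storage = data_tb * storage_per_tb
    latency_penalty = latency_ms * queries_per_month * penalty_per_ms_query
    return storage + latency_penalty

# Compare keeping 500 TB close to its consumers vs. a cheaper,
# more distant location with a higher round-trip latency.
near = monthly_cost(500, storage_per_tb=25.0, latency_ms=5,
                    queries_per_month=1_000_000, penalty_per_ms_query=0.0001)
far = monthly_cost(500, storage_per_tb=15.0, latency_ms=80,
                   queries_per_month=1_000_000, penalty_per_ms_query=0.0001)
print(f"Near location: ${near:,.0f}/month, far location: ${far:,.0f}/month")
```

With these illustrative numbers the cheaper, distant location loses once the latency penalty across a million monthly queries is counted, which is exactly the trade-off Breville's questions are meant to surface.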
David McCrory, the engineer who coined the term ‘Data Gravity’, said that “no industry is immune from the disruptions and barriers created by Data Gravity. For example, when you think about what security and latency issues can do to the healthcare and financial industries, it’s not hard to imagine the amount of damage it causes.”