Traditional Databases Aren't Ready for Big Data in the Cloud

By Robert Greene, VP of Technology, Versant Corporation

Big Data and cloud computing are on a clear collision course, bringing with them a wealth of opportunities if approached in the right way. As mobility becomes the embodiment of the digital lifestyle, data is flowing out of every crack and crevice, making the cloud's accessibility ideal for retrieving this information. But to handle the scale of all this data effectively, serious shortcomings in the underlying technology must be addressed.

Why is reconciling Big Data so important? To put it bluntly, because in doing so enterprises can realize an incredible amount of monetary and residual value. McKinsey & Company probably best measured everyone's excitement in a report earlier this year noting, for example, that the U.S. health care system could realize $300 billion in potential annual value by leveraging data more effectively. The same applies to the freight business, where time literally is money: fuel consumption and on-time delivery are dictated largely by the capacity of an aging rail infrastructure. Yet there is money to be made by those who can master the details and effect change in real time, rather than relying on predicted outcomes.

In fact, with rail freight traffic expected to double by 2020, the U.S. Federal Railroad Administration created the RailEdge Movement Planner to organize minute details and readings from a massive network of sensors and physical assets: the number of engines and cars per train, weight and payload, rail traffic, congestion at depots, and so on, all against the backdrop of time. The new system improved fuel efficiency and average train velocity enough to save about $200 million annually in capital and operating expenses. Reconciling Big Data, indeed, pays.

For other industries, the cloud will play a more crucial role in leveraging this value. As mobile technologies grow more ubiquitous, accessing information through the cloud will be key in providing the kind of professional and personal mobile lifestyles we expect to characterize the future.

This immense business opportunity, though, comes with a matching challenge: managing the scale and accessibility the cloud makes possible. Serving up that much data on demand requires moving away from some traditional supporting technologies, especially the database.

Such a massive amount of information can't be managed and accessed effectively with a traditional relational database (RDBMS). It must instead live in a database that can both handle the complexity of the data and scale to the cloud. And while many assume its heyday has come and gone, the object-oriented database (OODB) has already proven equal to the task in many such circumstances and is experiencing renewed interest.
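To make that distinction concrete, here is a minimal sketch in Java using the vendor-neutral JDO API (javax.jdo), a standard Versant's database implements. The Train and Car classes and the configuration file name are illustrative assumptions, not Versant's actual schema, and a real deployment would also need the classes enhanced as persistence-capable and a JDO implementation on the classpath.

    // A sketch of what "handling data complexity" means in practice: the
    // application's object graph is stored as-is, with no object-relational
    // mapping layer, no join tables, and no impedance mismatch.
    import javax.jdo.JDOHelper;
    import javax.jdo.PersistenceManager;
    import javax.jdo.PersistenceManagerFactory;
    import java.util.ArrayList;
    import java.util.List;

    class Car {                        // one physical rail car
        double payloadTons;
    }

    class Train {                      // a train is a graph, not a set of rows
        String id;
        List<Car> cars = new ArrayList<>();

        double totalPayload() {        // in-memory traversal replaces a SQL join
            return cars.stream().mapToDouble(c -> c.payloadTons).sum();
        }
    }

    public class OodbSketch {
        public static void main(String[] args) {
            // "jdo.properties" names the JDO implementation and datastore;
            // its contents are vendor-specific and omitted here.
            PersistenceManagerFactory pmf =
                    JDOHelper.getPersistenceManagerFactory("jdo.properties");
            PersistenceManager pm = pmf.getPersistenceManager();

            Train train = new Train();
            train.id = "NS-4401";
            Car car = new Car();
            car.payloadTons = 95.5;
            train.cars.add(car);

            pm.currentTransaction().begin();
            pm.makePersistent(train);  // persists the entire reachable graph
            pm.currentTransaction().commit();
            pm.close();
        }
    }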

Take, for example, Sabre's Sonic Inventory System, the most heavily used ticketing inventory system in the world, serving online ticketing clients such as Travelocity.com. The sheer amount of data it must process makes relational database technology impractical. Given the Big Data needs of each airline feeding into the system (30 at last count), an object-based data model was the only way to keep performance high while controlling the cost of an unnecessarily bloated IT infrastructure. Harnessing Big Data to process millions of transactions per day quickly and accurately has created value for Sabre in cost savings and in a stronger reputation for delivering high-quality service. The system allowed Sabre to move from multi-million-dollar, high-end mainframe hardware to relatively low-cost commodity infrastructure without sacrificing performance or availability, and it is truly "always on," with zero downtime since it was switched on more than three years ago.
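Sabre's actual inventory model is not public, so the following hypothetical sketch only illustrates the data-modeling point: an availability check is a direct traversal of an object graph the application already holds, where the relational equivalent would join flight, cabin, and fare-class tables on every one of those millions of daily transactions.

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative classes only; Sabre's real schema is not public.
    class FareClass {
        int seatsAvailable;
        FareClass(int seats) { seatsAvailable = seats; }
    }

    class Flight {
        String number;
        Map<String, FareClass> fareClasses = new HashMap<>();

        // Availability is one hash lookup plus a field read on the object
        // graph; no joins, no result-set mapping.
        boolean hasSeat(String fareCode) {
            FareClass fc = fareClasses.get(fareCode);
            return fc != null && fc.seatsAvailable > 0;
        }
    }

    public class InventorySketch {
        public static void main(String[] args) {
            Flight f = new Flight();
            f.number = "AA100";
            f.fareClasses.put("Y", new FareClass(12));
            System.out.println(f.hasSeat("Y"));   // prints: true
        }
    }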

For value beyond dollars and cents, harnessing Big Data and making it available via the cloud has empowered science to operate on a timescale of minutes rather than years. Tracking the effect of Arctic ice sheets on the world's climate is an intricate process that requires analyzing an immense amount of both historical and contemporary data; scientists must monitor petabytes' worth of minutely detailed observations.

The National Snow and Ice Data Center (NSIDC) did just that. Its scientists needed to process billions of complex data objects, with full database functionality, to run a time-centric change analysis of the Greenland ice sheet. Handling time-series data sets at Big Data scale required an object-oriented model, one driven down into the object database's architectural implementation. NSIDC was then able to publish the data through an online portal, giving scientists around the world access to an immense body of data and knowledge about the global cryosphere and how it affects our lives. Digging through this amount of information without a powerful object-based data model would take years, rendering the results a matter of historical record rather than actionable intelligence.
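NSIDC's data model likewise is not public, but a hypothetical sketch shows why time-series analysis maps naturally onto objects: each region owns its own time-ordered index of observations, so a change analysis over a date range is a single ordered traversal rather than a scatter of rows. The dates and extent figures below are made-up illustrative values.

    import java.time.LocalDate;
    import java.util.TreeMap;

    // Hypothetical model; the real NSIDC schema is not public.
    class IceSheetRegion {
        String name;
        // TreeMap keeps observations sorted by date, so range scans over
        // time (the core of a change analysis) are ordered traversals.
        TreeMap<LocalDate, Double> extentSqKm = new TreeMap<>();

        // Change between the earliest reading at or after 'from' and the
        // latest reading at or before 'to' (null checks omitted for brevity).
        double changeBetween(LocalDate from, LocalDate to) {
            return extentSqKm.floorEntry(to).getValue()
                 - extentSqKm.ceilingEntry(from).getValue();
        }
    }

    public class ChangeAnalysisSketch {
        public static void main(String[] args) {
            IceSheetRegion g = new IceSheetRegion();
            g.name = "Greenland";
            g.extentSqKm.put(LocalDate.of(2000, 7, 1), 1_710_000.0);
            g.extentSqKm.put(LocalDate.of(2010, 7, 1), 1_690_000.0);
            System.out.println(g.changeBetween(
                    LocalDate.of(2000, 1, 1), LocalDate.of(2010, 12, 31)));
            // prints: -20000.0 (illustrative loss in square kilometers)
        }
    }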

It's clear that the power of Big Data combined with that of the cloud can yield truly impressive business results, both for the bottom line and for operational processes. The key is being able to scale to the cloud in order to take full advantage. An object-oriented database brings the necessary storage efficiency, scalability, and functionality to make this possible.

About the Author

Versant Corporation's VP of Technology, Robert Greene, has over 15 years of experience working on high-performance, mission-critical software systems. He provides the technical direction for Versant's database technology, which is used by Fortune 1000 companies such as Dow Jones, Ericsson, and China Telecom.
