There’s much ado about how Big Data – data sets so large and complex that they require exceptional technologies to process within a reasonable time-frame – will be the next frontier in computing.
The trouble is that while a lot of talk about big data infrastructure focuses on capturing and storing such large amounts of data, far less attention goes to processing all that data and converting it into actionable insights.
“People and machines are producing valuable information that could enrich our lives in so many ways, from pinpoint accuracy in predicting severe weather to developing customized treatments for terminal diseases,” says Boyd Davis, vice president and general manager of Intel’s Datacenter Software Division. Davis adds that the company hopes it can bring “all of the computing horsepower available to the open source community to provide the industry with a better foundation from which it can push the limits of innovation and realize the transformational opportunity of big data”.
That’s probably why the computer chipmaker – also the world’s seventh largest software manufacturer – entered the Hadoop distribution market late last month with the announcement of its own Intel Distribution for Apache Hadoop Software. With this move, Intel seemingly bypasses existing Hadoop distribution vendors to create its own offering, partnering with system integrators, independent software vendors, original equipment manufacturers and training partners (such as Cisco, 1degreenorth, Savvis and Revolution Analytics, amongst others) to integrate its software into a number of next-generation platforms and solutions in both private and public cloud environments.
Hadoop, an open source framework for storing and processing large volumes of diverse data on a scalable cluster of servers, has pretty much emerged as the preferred platform for managing big data and promises to keep pace with its rapid evolution, so it’s no surprise to see Intel leap onto the bandwagon. The new software offering expands Intel’s extensive portfolio of datacenter computing, networking, storage and intelligent system products. Company executives confirm that the company – which began various Hadoop research projects back in 2009 – will continue to invest both research effort and capital to advance the big data ecosystem.
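For readers unfamiliar with how Hadoop processes data across a cluster, its core programming model is MapReduce: a map step that emits key–value pairs, a shuffle that groups pairs by key, and a reduce step that aggregates each group. The following is a minimal, purely illustrative Python sketch that simulates that pipeline in memory with a word count (the canonical MapReduce example) – all function names here are hypothetical, not part of any Hadoop distribution.

```python
from itertools import groupby
from operator import itemgetter

def mapper(line):
    # Map step: emit a (word, 1) pair for every word in a line of input.
    for word in line.split():
        yield word.lower(), 1

def reducer(word, counts):
    # Reduce step: sum all the counts emitted for a single word.
    return word, sum(counts)

def run_job(lines):
    # Simulate Hadoop's map -> shuffle/sort -> reduce pipeline locally.
    mapped = [kv for line in lines for kv in mapper(line)]
    mapped.sort(key=itemgetter(0))  # the "shuffle": bring identical keys together
    return dict(reducer(key, (v for _, v in group))
                for key, group in groupby(mapped, key=itemgetter(0)))

counts = run_job(["big data gets bigger", "data moves faster"])
# counts -> {'big': 1, 'bigger': 1, 'data': 2, 'faster': 1, 'gets': 1, 'moves': 1}
```

On a real cluster, each of these stages runs in parallel across many machines, with Hadoop's distributed file system feeding the mappers and the framework handling the shuffle over the network.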
Big data is going to get even bigger – and faster – yet.