Are you Hadooping?



This was the key question asked by Gartner analysts Merv Adrian and Nick Heudecker during their insightful webinar, Hadoop 2015: The Road Ahead, which you can register to watch.

Their research shows that nearly 40% of all respondents have either deployed Hadoop into production or are a long way into a deployment.

This is significantly up from a year ago.

The focus of big data implementations has also shifted: most respondents now see enhancing the customer experience as the overwhelming opportunity.

The conclusion is clear – if you have not already started with big data in Hadoop then your competitors have a substantial head start over you.

It gets worse:

Gartner warns that average Hadoop implementation times using SQL or other code-driven approaches are between 18 and 24 months to reach production. As discussed in Big Data Quality, data wrangling – extracting, transforming, filtering and cleaning big data so that it is fit for purpose – will take most of this time. In fact, the Gartner analysts suggest that this may account for up to 80% of the cost and time.
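To make the wrangling steps concrete, here is a minimal, illustrative Python sketch of that extract–transform–filter–clean pipeline. The input data, field names and cleaning rules are invented for illustration; real Hadoop-scale wrangling would apply the same shape of logic across a cluster.

```python
import csv
import io

# Hypothetical raw extract: a messy CSV with stray whitespace,
# inconsistent casing and incomplete records.
raw = """name,country,spend
 Alice ,US,120.50
bob,,15
CAROL,uk,99.99
,US,10
"""

def wrangle(text):
    """Extract, transform, filter and clean rows so they are fit for purpose."""
    rows = csv.DictReader(io.StringIO(text))
    cleaned = []
    for row in rows:
        # Transform: normalise whitespace and casing.
        name = (row["name"] or "").strip().title()
        country = (row["country"] or "").strip().upper()
        # Filter: drop records missing a key field.
        if not name or not country:
            continue
        # Clean: coerce the spend column to a numeric type.
        cleaned.append({"name": name, "country": country,
                        "spend": float(row["spend"])})
    return cleaned

result = wrangle(raw)
print(result)  # two clean rows survive out of four raw ones
```

Even in this toy form, the cleaning rules dominate the code – which is exactly why wrangling consumes so much of a real project's budget.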

One approach is to catch up by using visual big data tools running directly against Hadoop. Tools that simplify the process of wrangling data can cut development time by an order of magnitude.

Another option is to consider Hadoop as a Service (HaaS) offerings, such as the one recently announced by our partner Datameer.

The research suggests that the cloud is playing an increasingly important role in the management of big data.

Bandwidth and privacy concerns mean that a hybrid cloud solution is more likely to work for most companies than an outright cloud offering.

However, the cloud can be a great place to quick-start your Hadoop journey without worrying about skills and capital expenses.

Future-proof your big data architecture

The Hadoop framework is evolving rapidly. New components, such as Apache Spark, promise significant benefits over older components, such as MapReduce.
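The pattern both components implement can be sketched in plain Python. Below is an illustrative word count – the canonical MapReduce example – with the map and reduce phases spelled out; the input lines are invented. Spark expresses the same shape through its RDD API (operations like `flatMap` and `reduceByKey`), but keeps intermediate results in memory rather than writing them to disk between stages, which is where its speed advantage comes from.

```python
from collections import defaultdict

# Illustrative input "documents".
lines = ["big data on hadoop", "hadoop and spark", "spark runs in memory"]

# Map phase: emit a (word, 1) pair for every word in every line.
mapped = [(word, 1) for line in lines for word in line.split()]

# Shuffle/reduce phase: group the pairs by key and sum the counts.
counts = defaultdict(int)
for word, n in mapped:
    counts[word] += n

print(dict(counts))  # "hadoop" and "spark" each appear twice
```

In a real cluster the map and reduce phases run on different nodes; the framework's job is distributing this simple logic and the data behind it.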

At the same time, Spark has yet to achieve meaningful adoption among enterprise customers. A recent SearchDataManagement article points out that in-memory processing can be a very expensive option, that the Spark APIs are still beyond most enterprise players, and that the technology may yet fail to win over the market.

Your big data analytics platform architecture makes all the difference. Hadoop enables all data – from small amounts to petabytes, both structured and unstructured – to be stored and analysed to generate breakthrough insights. Using the correct component at the correct time can make a massive difference to both the cost and speed of these analyses.

A future-proof architecture will adapt to take advantage of newer components, such as Spark, when they become enterprise ready – without redevelopment.

It is time to get in on the Hadoop act.

Contact us for more information
