
Hadoop on OpenStack: Elastic Data Processing (EDP) with Savanna 0.3

Now that version 0.2 of Project Savanna is out, it’s time to start looking at what will be coming up in version 0.3. The goal for this next development phase is to provide elastic data processing (EDP) capabilities, creating a Savanna component that enables data analysis and transformation in an easy and resource-effective way, running a Hadoop cluster only when it’s needed.

To provide this functionality, Savanna needs you to give it these three things:

  • The data for processing
  • A program that specifies how the data will be processed
  • The location where that program will be executed
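As a rough illustration of how these three inputs fit together, here is a sketch of a job request. The field names and URLs below are hypothetical, chosen only to show the shape of the data; they do not correspond to a published Savanna API.

```python
# Hypothetical sketch only: the keys and URLs are illustrative, not an
# actual Savanna API payload.
job_request = {
    # 1. The data for processing (an input path in Swift, for example)
    "input": "swift://data-container/logs/2013-08/",
    # 2. A program that specifies how the data will be processed
    "job": {
        "type": "jar",
        "location": "swift://job-container/wordcount.jar",
    },
    # 3. The location where that program will be executed
    "cluster": {
        "name": "analytics-cluster",
        "provision_new_if_missing": True,
    },
}
```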

These three items define the architecture for the EDP part of Savanna. You can see the high-level architecture in this figure:

Savanna Architecture

As you can see from the image, EDP is made up of the following components:

  • Data discovery component—Enables pulling data from various data sources. Data can be pulled from Swift, GlusterFS, or a NoSQL database such as Cassandra or HBase.
  • Job origin—Supplies the task to be processed and executed on the cluster, and allows users to run several types of jobs: jar files, Pig and Hive scripts, and Oozie job flows.
  • Dispatcher component—Responsible for scheduling the job on a new or existing cluster, provisioning new clusters, resizing clusters, and gathering information from clusters about current jobs and utilization.
  • UI component—Enables integration with the OpenStack Dashboard (Horizon). Its future intent is to provide instruments for job creation, monitoring, and so on. Hue already provides part of this functionality: submitting jobs (jar file, Hive, Pig, Impala), viewing job status, and viewing job output.

We’ll circle back to how EDP uses the three inputs listed at the beginning of this post, but first let’s consider in more detail how Savanna interacts with OpenStack as a whole:

EDP Interactions

The Savanna product communicates with the following OpenStack components:

  • Horizon—Provides a GUI with the ability to use all of Savanna’s features.
  • Keystone—Authenticates users and provides a security token that is used to work with the rest of OpenStack, hence limiting a user’s abilities in Savanna to his or her OpenStack privileges.
  • Nova—Used to provision VMs for the Hadoop cluster.
  • Glance—Stores Hadoop VM images, each of which contains an installed OS and Hadoop.
  • Swift—Used as storage for data that will be processed by Hadoop jobs.
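To make these interactions concrete, here is a minimal Python sketch of how a client might authenticate against Keystone (v2.0) and then call the Savanna REST API with the resulting token. The endpoint URLs, credentials, and the /clusters path are assumptions for illustration, not guaranteed to match any particular deployment.

```python
import requests

# Placeholder endpoints; a real deployment exposes its own URLs.
KEYSTONE_URL = "http://openstack.example.com:5000/v2.0"
SAVANNA_URL = "http://openstack.example.com:8386/v1.0"

# Authenticate against Keystone and obtain a scoped token.
auth_body = {
    "auth": {
        "tenantName": "demo",
        "passwordCredentials": {"username": "demo", "password": "secret"},
    }
}
resp = requests.post(KEYSTONE_URL + "/tokens", json=auth_body)
resp.raise_for_status()
access = resp.json()["access"]
token = access["token"]["id"]
tenant_id = access["token"]["tenant"]["id"]

# Every subsequent Savanna call carries the token, so the user's abilities
# in Savanna are limited to his or her OpenStack privileges.
clusters = requests.get(
    "%s/%s/clusters" % (SAVANNA_URL, tenant_id),
    headers={"X-Auth-Token": token},
)
print(clusters.json())
```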

Let’s consider how Savanna performs elastic data processing using the three inputs we mentioned at the beginning of this post.

Data for processing

The data for processing can be stored in various locations and have various representations. Let’s take a look at the different use cases for data locations that Savanna plans to support:

  • Raw files stored in distributed storage such as Swift, Gluster, Ceph, or an existing Hadoop cluster. Hadoop was originally created to address this use case, and it remains the strongest option when we need to process huge amounts of data quickly, when the order of data processing is not critical, and when we expect to process all of the data, as opposed to just parts of it.
  • Data stored in a NoSQL database such as HBase, Cassandra, or MongoDB. Data is placed in this type of storage for random access. Placing data in NoSQL can increase processing speed when only a subset of the data needs to be processed. For example, if a user needs to process only 10 percent of the data according to some criteria, they can reduce the data processing time by putting the data in NoSQL and building an index according to the chosen filter.
  • Data stored in an RDBMS, as is common in many legacy applications. One way Savanna can support processing this data is by integrating with Apache Sqoop, which was designed for transferring bulk data between Hadoop and structured data stores such as relational databases. So after starting cluster services and before submitting a job, you would populate the cluster using Sqoop (see the sketch after this list).
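As a sketch of the Sqoop step mentioned in the last bullet, the snippet below shells out to the standard sqoop import command to copy a relational table into the cluster’s HDFS before a job is submitted. The JDBC URL, credentials, table name, and target directory are all placeholders.

```python
import subprocess

# Hypothetical example: pull a table from an RDBMS into HDFS with Sqoop
# before submitting the Hadoop job. All connection details are placeholders.
sqoop_cmd = [
    "sqoop", "import",
    "--connect", "jdbc:mysql://db.example.com/sales",  # source RDBMS
    "--username", "etl_user",
    "--password", "secret",
    "--table", "orders",             # table to transfer
    "--target-dir", "/data/orders",  # destination directory in HDFS
    "--num-mappers", "4",            # parallel transfer tasks
]
subprocess.check_call(sqoop_cmd)
```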

Processing the data: job workflow

In its simplest form, a job is defined by a single jar file. It contains all of the necessary code—in other words, implementation of the map and reduce functions. This means that writing a data processing procedure typically requires knowledge of Java. Another option is to use the Hadoop Streaming API. In that case, you can implement the map and reduce functions in any language supported by the operating system on which Hadoop is running, for example, Python or C++.
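For example, a classic word count can be written for the Hadoop Streaming API entirely in Python: a mapper and a reducer that read lines from stdin and write tab-separated key/value pairs to stdout. This is a minimal, generic sketch rather than Savanna-specific code.

```python
#!/usr/bin/env python
# mapper.py -- emits "word<TAB>1" for every word read from stdin.
import sys

for line in sys.stdin:
    for word in line.split():
        print("%s\t%d" % (word, 1))
```

```python
#!/usr/bin/env python
# reducer.py -- sums the counts for each word. Hadoop Streaming delivers the
# mapper output sorted by key, so identical words arrive on consecutive lines.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print("%s\t%d" % (current_word, current_count))
        current_word, current_count = word, int(count)

if current_word is not None:
    print("%s\t%d" % (current_word, current_count))
```

Scripts like these are typically submitted with the hadoop-streaming jar, passing the two files as the -mapper and -reducer arguments along with the input and output paths.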

For those who are not familiar with Java or other programming languages, Savanna lets you use a high-level scripting language such as Pig or Hive. These languages can easily be learned by users who are not professional programmers.

Looking ahead, an interesting possibility is support for Mahout as a service. Mahout contains sophisticated algorithms that are difficult to implement in a scripting language like Hive or Pig, and often the end user doesn’t want to know the implementation details. In the future, we hope that Savanna will enable running Mahout jobs in an elastic way.

Currently, Savanna supports users who are professional Hadoop programmers and have to run complex data processing algorithms involving several tasks. The abstraction that describes these general workflows is a directed acyclic graph (DAG): vertices correspond to individual tasks, and edges represent dependencies between steps. In Hadoop, this process is usually managed via Oozie, a Hadoop workflow engine and job coordinator. The next version of Savanna, 0.3, will likely include a mechanism for scheduling Oozie workflows for elastic data processing.
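To give a flavor of what such a DAG looks like in Oozie terms, here is a minimal two-action workflow definition embedded in a Python string. The action names, script name, and parameters are placeholders; the element names follow Oozie’s workflow-app schema, but a real workflow would carry full job configuration.

```python
# Minimal sketch of an Oozie workflow: a two-vertex DAG in which the
# "prepare-data" action must finish before "aggregate" starts.
WORKFLOW_XML = """
<workflow-app name="sample-dag" xmlns="uri:oozie:workflow:0.2">
    <start to="prepare-data"/>
    <action name="prepare-data">
        <map-reduce>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
        </map-reduce>
        <ok to="aggregate"/>
        <error to="fail"/>
    </action>
    <action name="aggregate">
        <pig>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <script>aggregate.pig</script>
        </pig>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Workflow failed</message>
    </kill>
    <end name="end"/>
</workflow-app>
"""

# Write the definition to a file that would be placed in the workflow's
# application directory alongside the referenced Pig script.
with open("workflow.xml", "w") as f:
    f.write(WORKFLOW_XML)
```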

Now let’s talk about how Savanna can get the job code needed to begin data processing.

The simplest way to provide a program for data processing is to store it in a distributed object store such as Swift. Savanna uses the Horizon dashboard to provide a user-friendly interface for file upload. This works great when you want a quick, one-time execution, for example, for an ad-hoc query. On the other hand, Savanna should also be able to process code from a version control system (VCS) repository—for example, git or mercurial. This is critical for users who have a continuous delivery process and need a way to propagate code from development and testing environments to production.
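As an illustration of the first option, the sketch below uses the python-swiftclient library to upload a job jar into a Swift container; the auth URL, credentials, container, and object names are placeholders.

```python
from swiftclient.client import Connection

# Connect to Swift through Keystone; all credentials and URLs are placeholders.
conn = Connection(
    authurl="http://openstack.example.com:5000/v2.0/",
    user="demo",
    key="secret",
    tenant_name="demo",
    auth_version="2.0",
)

# Create a container for job binaries and upload the jar that implements
# the data processing routine.
conn.put_container("job-binaries")
with open("wordcount.jar", "rb") as jar:
    conn.put_object("job-binaries", "wordcount.jar", contents=jar)

# The job could then reference the uploaded code, for example via a
# swift://job-binaries/wordcount.jar style URL.
```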

We’ll now drill down and explain job execution in more detail.

Job Execution

The first question when it comes to job execution is whether Savanna should start a new cluster or run the job on one that already exists. To answer that, Savanna takes into account various aspects, such as the current load on the existing clusters, proximity to the data location, and the required speed. Savanna provides this information to the end user, so they can decide where to schedule the job.

One very useful feature is cluster autoscaling during a job execution. Often, at different stages of data processing, you require different resource quantities. Autoscaling allows you to have only what you need at the time you need it. The Savanna team is still deciding how to handle autoscaling; it is likely that we will support it first within Savanna itself, then integrate with Heat.

Summary

Plans for version 0.3 of Savanna include both analytics as a service and elastic data processing (EDP). Making this possible requires providing Savanna with the data, the implementation of the processing routine, and guidance on where to process the data. Once these capabilities are in place, Savanna’s EDP features allow you to choose where and how to process your data, with resource autoscaling.

To follow progress on the Savanna project, or even to contribute, please visit the Savanna wiki.

Comments

  1. Xiaobin Zhang

    Hey,

    I have a question: suppose that my OpenStack deployment is made up of an admin network and a data (or VM) network.

    The Swift cluster has to be on the same network as the OpenStack admin network so it can communicate with Keystone for authentication.

    On the other hand, the Savanna/Hadoop cluster (VMs) has to connect to Swift to fetch data to process.

    Thus, the situation is: the Swift cluster has to be connected to both the OpenStack admin network and the Hadoop VM network to work correctly.

    Is my assumption correct? Or what’s the best network design when deploying OpenStack, Savanna, and Swift together?

    Thanks a lot!

    January 14, 2014 02:39
    • Dmitry Mescheryakov

      Hello Xiaobin Zhang,

      Only swift-proxy does authentication via Keystone, so only the proxy nodes need access to Keystone. Swift storage nodes do not perform authentication, so they can live on a private network.

      If you want Savanna to work with Swift, then the VMs indeed need access to the Swift proxy nodes. The VMs do not need access to the storage nodes (with one exception, see below).

      As far as I know, both requirements are fulfilled on most OpenStack deployments. You can install OpenStack with Fuel and see how the network is set up there as an example.

      The above is true with one caveat: if you want to enable data locality in Savanna, the VMs need access to the Swift storage nodes. That is not a common setup and, for instance, Fuel currently cannot install OpenStack in such a mode.

      January 21, 2014 04:51
