## Test environment setup

Our test environment had 4 compute servers with the following configuration:
- CPU: Intel Xeon E3, 4 cores
- RAM: 8 GB DDR3 ECC
- Storage: 1 TB HDD
- Two 1 Gb/s network interfaces:
  - eth0: physical interface used for the Fuel PXE setup
  - eth1: interface used for OpenStack needs such as Neutron and Ironic PXE
## Preparing images for bare metal

While you can provision and boot a virtual machine using only a disk image, bare metal servers cannot boot directly from a disk image. They also require two additional images (a build-and-upload sketch follows this list):
- Kernel image appropriate for the Linux distribution being used
- Initramfs image
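As a concrete illustration, one common way to produce this trio at the time was diskimage-builder, with the results registered in Glance. This is a minimal sketch rather than the exact procedure used here; the image names and the $KERNEL_ID/$RAMDISK_ID placeholders are ours:

```
# Build a user image; the "baremetal" element also emits the matching
# kernel (my-image.vmlinuz) and ramdisk (my-image.initrd)
disk-image-create ubuntu baremetal -o my-image

# Register the kernel and ramdisk, noting the IDs Glance returns
glance image-create --name my-kernel --disk-format aki \
  --container-format aki --file my-image.vmlinuz
glance image-create --name my-initrd --disk-format ari \
  --container-format ari --file my-image.initrd

# Register the root image, linking it to the kernel and ramdisk by ID
glance image-create --name my-image --disk-format qcow2 --container-format bare \
  --property kernel_id=$KERNEL_ID --property ramdisk_id=$RAMDISK_ID \
  --file my-image.qcow2

# Separately, build the deploy kernel/ramdisk pair used by Ironic's PXE
# driver (referenced below as pxe_deploy_kernel and pxe_deploy_ramdisk)
ramdisk-image-create ubuntu deploy-ironic -o deploy
```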
## Preparing your OpenStack cloud for bare metal

Ironic does not detect servers automatically, so you have to add them manually, referencing their IPMI addresses so that Ironic can manage the servers’ power and network. For example:
```
ironic node-create -d pxe_ipmitool \
  -i ipmi_address=$IP_ADDRESS \
  -i ipmi_username=$USERNAME \
  -i ipmi_password=$PASSWORD \
  -i pxe_deploy_kernel=$DEPLOY_KERNEL_ID \
  -i pxe_deploy_ramdisk=$DEPLOY_RAMDISK_ID
ironic port-create -n $NODE_ID -a "$MAC_eth1"
```
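At this point it can be worth confirming that Ironic can actually talk to the node's management interfaces with the credentials you just supplied; a quick check, assuming $NODE_ID is the UUID returned by node-create:

```
# Any failed interface in the output usually points at a typo
# in the IPMI settings above
ironic node-validate $NODE_ID
```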
You can also add hardware information:

```
ironic node-update $NODE_ID add \
  properties/cpus=$CPU \
  properties/memory_mb=$RAM \
  properties/local_gb=$ROOT_GB \
  properties/cpu_arch='x86_64'
```

You must also add a special flavor for bare metal instances with an `arch` meta parameter set to match the real architecture of the server’s CPU. For example:
```
nova flavor-create baremetal auto $RAM $DISK_GB $CPU
nova flavor-key baremetal set cpu_arch=x86_64
```

The vCPU and vRAM parameters won’t be applied, because the operating system has access to the real CPU cores and RAM; in our case, the arch was set to x86_64. Only the root disk parameter takes effect: Ironic resizes the root disk partition accordingly. Ironic supports only a flat network topology for bare metal provisioning, so you must configure that network in Neutron.
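The exact Neutron setup depends on your deployment, but with the ML2 flat type driver enabled, the shared provisioning network might be created along these lines; the network name, physnet label, and addresses below are placeholders, not values from our environment:

```
# Flat provider network shared with tenants for bare metal provisioning
neutron net-create baremetal --shared \
  --provider:network_type flat --provider:physical_network physnet1

# Subnet that the bare metal nodes will boot and deploy from
neutron subnet-create baremetal 10.0.0.0/24 \
  --name baremetal-subnet --gateway 10.0.0.1
```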
## Starting a Sahara cluster

From the Sahara perspective, switching to bare metal provisioning does not change anything, and Sahara can start a cluster from a cluster template as usual. You just have to make sure you are using the special bare metal flavor and network described above. In our case, we set up the test cluster with the Cloudera provisioning plugin version 5.3.0, using the following topology (a CLI launch sketch follows the list):
- 1 Master/Manager node, containing Cloudera Manager along with the Hadoop master processes (the HDFS NameNode and the YARN ResourceManager), as well as Oozie, which backs the Data Processing service.
- 3 worker nodes, each running an HDFS DataNode and a YARN NodeManager.
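For reference, the same launch can be scripted with the sahara CLI of that era; this is a sketch under the assumption that cluster_create.json is a file you populate with the cluster template ID, the bare metal flavor, and the provisioning network described above:

```
# List available cluster templates to find the template ID
sahara cluster-template-list

# Launch the cluster from a JSON description referencing that template
sahara cluster-create --json cluster_create.json
```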
## Running a test job on bare metal

With the cluster started, Sahara’s Elastic Data Processing (EDP) facility allows you to run Hadoop jobs on it. EDP supports different job types, including MapReduce, Pig, and Hive. To check that the new cluster works as expected, you can run the DFSIO test provided with the Hadoop distribution. The DFSIO test is a set of MapReduce jobs that lets you measure both the read and write speeds of the HDFS service, which provides high-throughput access to large data sets. Because the HDFS service is configured to keep 3 replicas of each stored block, the write test also puts the cluster network under load. You can download the test jar file from the Maven repository. Setting up a DFSIO job with EDP is straightforward; all the operations can be done through the Data Processing panel in the OpenStack Dashboard:
- Upload the test jar file with the DFSIO benchmark to Sahara as a job binary. Go to the Job Binaries page of the Data Processing panel and click “Create Job Binary”. Choose a name for the test jar file (something like test.jar) and save it to Sahara internal storage.
- Create the job template with the “Java” job type and the uploaded binary attached as a library. Go to the Job Templates panel and click “Create Job”. Your test.jar should be added as a library.
- Launch the job from the newly created template with the appropriate arguments (typical DFSIO arguments are sketched below). This can be done on the Job Templates panel by clicking the “Launch on existing Cluster” button next to the job you have created.
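To make the last step concrete: when run outside of EDP, DFSIO is typically driven with arguments like the ones below; in the EDP Java job you would set the main class to org.apache.hadoop.fs.TestDFSIO and pass the remaining tokens as job arguments. The file count and size here are illustrative, not the values we used:

```
# Write test: 10 files of 1,000 MB each, stressing HDFS writes and replication
hadoop jar test.jar org.apache.hadoop.fs.TestDFSIO -write -nrFiles 10 -fileSize 1000

# Read test: read the same files back to measure read throughput
hadoop jar test.jar org.apache.hadoop.fs.TestDFSIO -read -nrFiles 10 -fileSize 1000

# Clean up the generated test data afterwards
hadoop jar test.jar org.apache.hadoop.fs.TestDFSIO -clean
```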