Elasticity Task
Task description
Integrating LVM with Hadoop and providing Elasticity to DataNode Storage
Increase or Decrease the Size of Static Partition in Linux.
I use the AWS cloud to set up my Hadoop cluster with one master node and one slave node. To the slave node, I attach one more hard disk of 8 GB for this practical. I want to give Hadoop only 4 GB of that 8 GB hard disk to store the data.

Now we have one physical hard drive…
First step: Creating Physical Volume(PV) from the hard drive…
For this, run command…
pvcreate /dev/xvdf
(Here /dev/xvdf is my disk name; replace it with yours.)

Second Step: Creating Volume Groups(VG) from the physical volumes
For this, run command…
vgcreate vg_name PV_name
(For example: vgcreate myvg /dev/xvdf)

Third Step: Creating a logical volume from the volume group and assigning it a mount point…
For this, run command…
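The command itself did not survive in this draft; a minimal sketch, assuming a volume group named myvg and a 4 GB logical volume named mylv (both example names):

```shell
# Carve a 4 GB logical volume named "mylv" out of the volume group "myvg"
# (myvg and mylv are example names; substitute your own)
lvcreate --size 4G --name mylv myvg

# Verify the new logical volume
lvdisplay /dev/myvg/mylv
```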


I need to mount this partition on some directory so the Hadoop slave node can use it…
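A minimal sketch of formatting and mounting the volume, assuming the example names myvg/mylv and a mount directory /dn (all assumptions, not from the original):

```shell
# Put an ext4 filesystem on the logical volume (example names)
mkfs.ext4 /dev/myvg/mylv

# Create a mount point and mount the volume there
mkdir /dn
mount /dev/myvg/mylv /dn

# Confirm the mount and its size
df -h /dn
```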



First, we have to install Hadoop on both the master and the slave node.
Now we need to edit some files on the master and slave nodes…
- Master Node:
Make one directory using command…
mkdir /nn
Then go to /etc/hadoop and edit hdfs-site.xml first…
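The file contents appeared here as a screenshot; a typical hdfs-site.xml for the master, assuming the /nn directory created above (the property name is the Hadoop 1.x one):

```xml
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/nn</value>
  </property>
</configuration>
```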

Now edit core-site.xml…
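Again the contents were a screenshot; a typical master core-site.xml, listening on all interfaces (the port 9001 is an example):

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://0.0.0.0:9001</value>
  </property>
</configuration>
```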

After this, I need to format the NameNode using the command
hadoop namenode -format
- Slave Node:
In the slave node, we also need to edit both files…
Go to /etc/hadoop and then edit core-site.xml…
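The screenshots are missing here too; typical slave-node settings, assuming the master's IP is MASTER_IP (a placeholder) and the DataNode stores blocks in the mounted directory /dn. In core-site.xml:

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://MASTER_IP:9001</value>
  </property>
</configuration>
```

And in hdfs-site.xml:

```xml
<configuration>
  <property>
    <name>dfs.data.dir</name>
    <value>/dn</value>
  </property>
</configuration>
```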

That’s it.
Now let’s start the Hadoop services in both Master and Slave Node…
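A minimal sketch of starting the daemons, using the Hadoop 1.x helper scripts:

```shell
# On the master node: start the NameNode daemon
hadoop-daemon.sh start namenode

# On the slave node: start the DataNode daemon
hadoop-daemon.sh start datanode

# Check the cluster's reported capacity
hadoop dfsadmin -report
```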
You can see that now my HADOOP Cluster is only able to use 4GB from my slave node.

Now I want to increase my slave storage from 4 GB to 6 GB without formatting or recreating the partition, and this elasticity (in terms of storage) is provided only by LVM.
To extend your logical volume (LV), run…
lvextend --size +2G /partition
(Here +2G grows the LV by 2 GB, taking it from 4 GB to 6 GB.)

Your partition is extended, but the filesystem does not show the new space yet… So to resize the filesystem, run this…
resize2fs /partition

Now you can see that my volume has increased dynamically.
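The increase can be double-checked from both the OS and Hadoop sides (assuming the example mount point /dn):

```shell
# The filesystem now reports the larger size
df -h /dn

# HDFS reports the increased DataNode capacity
hadoop dfsadmin -report
```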
