
Tuesday, September 1, 2015

Creating a 2-node GlusterFS volume for OpenStack Nova/KVM

Add the repository and install the packages on both nodes


# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo


# yum -y install glusterfs glusterfs-fuse glusterfs-server
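
If firewalld is active, the two nodes also need to reach each other on the Gluster ports, or the peer probe below will fail. A minimal sketch, assuming GlusterFS 3.x defaults (24007-24008/tcp for management, 49152 and up for bricks); run on both nodes and adjust to your environment:

# firewall-cmd --permanent --add-port=24007-24008/tcp
# firewall-cmd --permanent --add-port=49152-49251/tcp
# firewall-cmd --reload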




From the first node, start and enable the service:
 # systemctl start glusterd.service
 # systemctl enable glusterd.service
 # gluster peer probe Node2


From the second node, start and enable the service:
 # systemctl start glusterd.service
 # systemctl enable glusterd.service
 # gluster peer probe Node1
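
Both nodes should now see each other as peers; gluster peer status gives a quick sanity check (output abbreviated, exact wording varies by version):

 # gluster peer status
 Number of Peers: 1

 Hostname: Node2
 State: Peer in Cluster (Connected)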




Create the replicated volume from the bricks (run these commands once, from either node)


# gluster volume create glustervmstore replica 2 Node1:/vmstore  Node2:/vmstore
# gluster volume start glustervmstore
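
If the create step errors out, make sure the brick directory exists on both nodes first (mkdir -p /vmstore); GlusterFS 3.4 and later also refuse a brick on the root filesystem unless force is appended to the create command. Once started, the volume can be verified with:

# gluster volume info glustervmstore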




Since Nova instances are created under /var/lib/nova/instances, mount the Gluster volume there by executing the following command on both nodes


# mount -t glusterfs Node1:/glustervmstore /var/lib/nova/instances


(Replace Node1 with Node2 when executing on the second node)
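
To make the mount persistent across reboots, an fstab entry along these lines works (the _netdev option defers mounting until the network is up):

Node1:/glustervmstore /var/lib/nova/instances glusterfs defaults,_netdev 0 0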


Test it


# cd /var/lib/nova/instances
Touch a file here and it will be replicated to the second node as well.
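
For example:

Node1# touch /var/lib/nova/instances/repl-test
Node2# ls /var/lib/nova/instances
repl-test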


Now create an instance in OpenStack and you will see its disk files on both nodes. (If you get an error, check the permissions on the /var/lib/nova/instances directory.)
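
Nova runs as the nova user, so the mount point typically needs its ownership restored on both nodes after mounting; a sketch, assuming the stock nova user and group:

# chown nova:nova /var/lib/nova/instances
# restorecon -R /var/lib/nova/instances    (only if SELinux is enforcing)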



Tuesday, June 2, 2015

Zpool root mirror

If you created a mirrored ZFS root pool during the initial installation, you can boot from an alternate disk automatically. First, identify the alternate disk in the mirrored pool; in the example output below it is c1t1d0s0.
# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:
        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0
Then boot from the alternate disk's device path at the SPARC OpenBoot prompt:
ok boot /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/disk@1
If instead you used the zpool attach command to create the mirror after installation, you must run the installboot (SPARC) or installgrub (x86) command on the alternate disk before it is bootable. For example:
sparc# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
x86# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
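
For reference, the attach itself looks like this; it adds a second disk to an existing single-disk root pool and resilvers automatically (device names here are examples):

# zpool attach rpool c0t0d0s0 c0t1d0s0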

Add Memory to a Red Hat VM under VMware

Dynamically add memory to a running Red Hat VM under VMware ESX:

1) Enable memory hot-add for the VM (this can only be done while the VM is powered off). The setting is under Edit Settings -> Options -> Enable CPU/Memory Hotplug -> Enable Memory Hotplug.
2) Add the desired memory in VMware under Edit Settings on the VM.
3) Run this script on the RHEL 5.x guest to bring the new memory online:
#!/bin/bash
# Bring online any memory sections that were hot-added but are still offline.

if [ "$UID" -ne "0" ]
   then
    echo -e "You must be root to run this script.\nYou can 'sudo' to get root access"
    exit 1
fi

# Each hot-pluggable memory section shows up as /sys/devices/system/memory/memoryN
for MEMORY in $(ls /sys/devices/system/memory/ | grep memory)
do
   SPARSEMEM_DIR="/sys/devices/system/memory/${MEMORY}"
   echo "Found sparsemem: \"${SPARSEMEM_DIR}\" ..."
   SPARSEMEM_STATE_FILE="${SPARSEMEM_DIR}/state"
   STATE=$(cat "${SPARSEMEM_STATE_FILE}" | grep -i online)
   if [ "${STATE}" == "online" ]; then
       echo -e "\t${MEMORY} already online"
   else
       echo -e "\t${MEMORY} is new memory, onlining memory ..."
       # Writing "online" to the state file activates the section
       echo online > "${SPARSEMEM_STATE_FILE}"
   fi
done


Double-check with free -m; the added memory should now be visible.
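
On newer kernels this can also be automated with a udev rule, so hot-added memory is onlined without running a script; a sketch (the file name is arbitrary):

# cat /etc/udev/rules.d/99-hotplug-mem.rules
SUBSYSTEM=="memory", ACTION=="add", ATTR{state}=="offline", ATTR{state}="online"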
 