Mirantis | The #1 Pure Play OpenStack Company

How to Migrate an Instance with Zero Downtime: OpenStack Live Migration with KVM Hypervisor and NFS Shared Storage

Editor’s note:  We will be talking briefly about live migration in the What’s New in OpenStack Havana webcast next week, but Damian had such a great explanation of how to actually do it that we wanted to put it out here so you can see it in action.

Live migration is the movement of a live instance from one compute node to another. Hugely sought after by cloud administrators, it's used primarily to achieve zero downtime during cloud maintenance, and it can also help with performance, since live instances can be moved from a heavily loaded compute node to a less loaded one.

Planning for live migration has to be done at the initial stage of planning and designing an OpenStack deployment. Some things to take into consideration are as follows:

  • At the moment, not all hypervisors support live migration in OpenStack; therefore, it's best to check the HypervisorSupportMatrix to see if your hypervisor supports it. KVM, QEMU, XenServer/XCP, and Hyper-V are among the currently supported hypervisors.

  • In a typical OpenStack deployment, every compute node manages its instances locally in a dedicated directory (for example, /var/lib/nova/instances/), but for live migration, this folder has to be in a centralized location and shared across all the compute nodes. Hence, a shared file system or block storage is an important requirement for enabling live migration. A shared file system such as GlusterFS or NFS needs to be properly configured and running before live migration can be performed. SAN storage protocols such as Fibre Channel (FC) and iSCSI can also be used for the shared storage.

  • To avoid file permission problems when accessing the centralized storage, you must ensure that the UID and GID of the Compute (nova) user are the same on the controller node and on all of the compute nodes (the assumption here is that the shared storage is on the controller node). Also, the UID and GID of libvirt-qemu must be the same on all compute nodes.

  • It's important to specify vncserver_listen=0.0.0.0 so that the VNC server can accept connections from all of the compute nodes, regardless of where the instances are running. If this is not set, accessing migrated instances through VNC can be a problem, because the destination compute node's IP address does not match that of the source compute node.

The following instructions enable live migration on an OpenStack multi-node deployment using the KVM hypervisor running on Ubuntu 12.04 LTS with NFS shared storage. This tutorial assumes that a working multi-node deployment has already been configured using a deployment tool such as Mirantis Fuel. The lab used for this tutorial consists of a cloud controller node, a network node utilizing neutron networking, and two compute nodes.

Please note that this tutorial does not consider the security aspects of live migration. You will need to research that area yourself, so do not take this tutorial as production-ready from a security standpoint.

This tutorial is presented in two steps: first, the NFS shared storage implementation procedures, and, then, a demo of live migration.

Part 1: Implementing NFS shared storage

The cloud controller node is the NFS server. The aim is to share /var/lib/nova/instances across all of the compute nodes in your OpenStack cluster. This directory contains the libvirt KVM file-based disk images for the instances hosted on that compute node. If you are not running your cloud in a shared storage environment, this directory will be unique across all compute nodes. Note that if you already have instances running in your cloud before configuring live migration, you need to take precautions so that the existing instances are not overwritten.

On the NFS server/controller node, take the following steps:

  1. Install the NFS server.
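    For example, on Ubuntu 12.04 (package names from the stock Ubuntu repositories):

    ```shell
    # Install the NFS server package
    apt-get install nfs-kernel-server
    ```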
  2. IDMAPD provides functionality to the NFSv4 kernel client and server, by translating user and group IDs to names, and vice versa. Edit /etc/default/nfs-kernel-server and set the indicated option to yes. This file must be the same on both the client and NFS server.
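    On Ubuntu 12.04, the relevant option is NEED_IDMAPD:

    ```
    NEED_IDMAPD=yes
    ```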

  3. Ensure that the file /etc/idmapd.conf has the following:
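    On Ubuntu, the defaults look like this (the nobody/nogroup values are the Ubuntu defaults; the Domain value, if set, must match on server and clients):

    ```
    [Mapping]

    Nobody-User = nobody
    Nobody-Group = nogroup
    ```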

  4. To share /var/lib/nova/instances, add the following to /etc/exports:
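    For example, an entry along these lines (the export options shown are a typical choice for this setup, not necessarily the article's exact ones; no_root_squash lets root on the compute nodes write the disk images):

    ```
    /var/lib/nova/instances 192.168.122.0/24(rw,sync,no_root_squash)
    ```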

    Where 192.168.122.0/24 is the network address of your compute nodes (usually called the data network) for your OpenStack cluster.

  5. Set the ‘execute’ bit on your shared directory as follows, so that qemu can use the images within the directories when exported to the compute nodes.
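    For example:

    ```shell
    # Let other users (such as qemu) traverse into the shared directory
    chmod o+x /var/lib/nova/instances
    ```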

  6. Restart the services.
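    For example (service names as on Ubuntu 12.04):

    ```shell
    # Restart NFS and idmapd so the new export and ID mapping take effect
    service nfs-kernel-server restart
    service idmapd restart
    ```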

On each of the compute nodes, take the following steps:

  1. Install the NFS client services.
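    For example, on Ubuntu 12.04:

    ```shell
    # Install the NFS client utilities (includes idmapd)
    apt-get install nfs-common
    ```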
  2. Edit /etc/default/nfs-common and set the indicated option to yes:
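    As on the server, the option to enable is NEED_IDMAPD:

    ```
    NEED_IDMAPD=yes
    ```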
  3. Mount the shared file system from the NFS server.
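    For example, with 192.168.122.1 as a placeholder for the NFS server's data-network address (substitute your own):

    ```shell
    # Mount the controller's instances directory over NFS
    mount 192.168.122.1:/var/lib/nova/instances /var/lib/nova/instances
    ```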
  4. To save from retyping this after every reboot, add the following line to /etc/fstab:
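    For example, again with 192.168.122.1 as a placeholder for the NFS server's address:

    ```
    192.168.122.1:/var/lib/nova/instances /var/lib/nova/instances nfs defaults 0 0
    ```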
  5. Check all of the compute nodes and ensure the permissions are set as listed below. This confirms that the correct permissions were set on the controller node with the chmod command above.
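    For example (ownership and timestamps shown are illustrative):

    ```shell
    ls -ld /var/lib/nova/instances
    # drwxr-xr-x 2 nova nova 4096 Oct 22 12:35 /var/lib/nova/instances
    ```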

  6. Ensure that the exported directory can be mounted and check that it’s mounted.
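    For example (the sizes and server address shown are illustrative):

    ```shell
    # Mount everything listed in /etc/fstab, then verify
    mount -a -v
    df -k
    # The last line of the df output should be the NFS share, e.g.:
    # 192.168.122.1:/var/lib/nova/instances 921515008 101921792 772783104 12% /var/lib/nova/instances
    ```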

    Ensure that the last line above is as indicated. This line indicates that /var/lib/nova/instances is correctly exported from the NFS server. If this line is missing, your NFS share may not be working properly, and you need to fix it before you proceed.
  7. Update the libvirt configurations. Modify /etc/libvirt/libvirtd.conf. To see all of the available options, please see libvirtd configurations.
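    The settings that matter here allow libvirt to accept unauthenticated TCP connections from the other compute nodes (remember the security caveat in the introduction):

    ```
    listen_tls = 0
    listen_tcp = 1
    auth_tcp = "none"
    ```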
  8. Modify /etc/init/libvirt-bin.conf.
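    Add the listen flag to the daemon options in the upstart job, for example:

    ```
    env libvirtd_opts="-d -l"
    ```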

    -l is short for --listen
  9. Modify /etc/default/libvirt-bin.
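    Here, too, the daemon options should include the listen flag:

    ```
    libvirtd_opts="-d -l"
    ```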
  10. Restart libvirt. After executing the command, ensure that libvirt is successfully restarted.
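    For example, on Ubuntu 12.04, where libvirt-bin is an upstart job:

    ```shell
    stop libvirt-bin && start libvirt-bin
    # Verify that libvirtd is running with the -d -l flags
    ps -ef | grep libvirtd
    ```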

Miscellaneous configurations

You may skip the steps below if live migration was planned from the start, and hence the basic requirements stated in the introduction are already in place. These steps ensure that the nova UID and GID are the same on the controller node and on all of the compute nodes, and that the libvirt-qemu UID and GID are the same on all compute nodes. This involves manually changing the GIDs and UIDs to ensure that they're uniform across the compute and controller nodes.

The steps are as follows:

  1. On the controller node, check the nova id and then implement the same on all of the compute nodes:
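    For example (the IDs shown are placeholders; yours will differ):

    ```shell
    id nova
    # uid=110(nova) gid=117(nova) groups=117(nova),128(libvirtd)
    ```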
  2. Now that we know the nova UIDs and GIDs, we can change them on all of the compute nodes as follows:
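    Using placeholder values that match the controller's id nova output above:

    ```shell
    usermod -u 110 nova
    groupmod -g 117 nova
    ```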

    Follow the same procedures for all of the compute nodes.
  3. Repeat the same for libvirt-qemu but keep in mind that the controller node does not have this user because the controller node does not run a hypervisor. Ensure that all of the compute nodes have the same UID and GID for user libvirt-qemu.
  4. Since we have changed the UIDs and GIDs of the nova and libvirt-qemu users, we need to ensure that this is reflected across all of the files owned by them. We achieve this in the next step.
    Stop the nova-api and libvirt-bin services on the compute node. Change all of the files owned by nova and nova group to the new UID and GID, respectively. For example:
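    A sketch of the sequence, with illustrative old IDs (match them to the old IDs on your own nodes):

    ```shell
    service nova-api stop
    service libvirt-bin stop
    find / -uid 106 -exec chown nova {} \;          # 106 = old nova UID
    find / -uid 104 -exec chown libvirt-qemu {} \;  # 104 = old libvirt-qemu UID
    find / -gid 107 -exec chgrp nova {} \;          # 107 = old nova GID
    find / -gid 104 -exec chgrp libvirt-qemu {} \;  # 104 = old libvirt-qemu GID
    service nova-api restart
    service libvirt-bin restart
    ```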

Part 2: Live migration of an OpenStack virtual machine

Now that the OpenStack cluster and the NFS shared file system have been properly set up, it's time to attempt a live migration. Perform the following steps on the controller node:

  1. Check the running instances to determine their IDs.
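    For example:

    ```shell
    nova list
    # Note each instance's ID; in this lab, vm1's ID is
    # 0bb04bc1-5535-49e2-8769-53fa42e184c8
    ```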
  2. Check to see the compute nodes where the instances are running.
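    For example:

    ```shell
    nova-manage vm list
    # Check the host column for each instance
    ```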

    Here we observe that vm1 is running on compute 2 (vmcom2-mn) and vm2 is running on compute 1 (vmcom1-mn).
  3. Perform live migration.
    We will migrate vm1 with id 0bb04bc1-5535-49e2-8769-53fa42e184c8 (obtained using the nova list above) running on compute node 2 to compute node 1 (see command: nova-manage vm list above), vmcom1-mn.
    Note that this is an administrative function, so typically you first want to export the variables or source an admin credentials file.
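    For example (the credential values are placeholders; use your own admin credentials):

    ```shell
    export OS_TENANT_NAME=admin
    export OS_USERNAME=admin
    export OS_PASSWORD=secret
    export OS_AUTH_URL=http://192.168.122.1:5000/v2.0/
    nova live-migration 0bb04bc1-5535-49e2-8769-53fa42e184c8 vmcom1-mn
    ```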

    If successful, the nova live-migration command produces no output.
  4. Verify that migration has been performed by running:
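    For example:

    ```shell
    nova-manage vm list
    # vm1's host should now read vmcom1-mn, the same as vm2's
    ```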

    We can see that both instances are now running on the same node.

Conclusion

Live migration is an indispensable feature for achieving zero downtime during OpenStack cloud maintenance, when some compute nodes need to be shut down. The steps above, implementing shared storage and migrating a live instance, were followed to get a working live migration on an OpenStack Grizzly cloud running Ubuntu 12.04, using NFS shared storage.


15 Responses

  1. Nabil

    Step 3 doesn’t actually include the syntax of the command to do the live migration??

    October 28, 2013 06:29
    • Nick Chase

      Thanks for spotting that, Nabil! Actually the command was there but it was at the end of the previous line due to a missing line feed. Fixed now. Thanks again!

      October 28, 2013 10:34
  2. Abdi

    Hello Damian,
    if I have an NFS server outside of the controller (as per your setup) would I need to create a nova user in the storage node as well where NFS server is running and adjust the UID and GID to match the ones on the compute nodes?
    Can you elaborate on the user configuration requirement for the controller if it is not housing the NFS server?

    Thank you in advance,
    Abdi

    November 1, 2013 09:08
  3. Lingxian Kong

    IMHO, there's one additional key point: you must consider the CPU compatibility issue even if all of the nodes have the same hypervisor type. Right?

    November 5, 2013 09:54
  4. Kashyap

    While setting up NFS on the compute node (client node), step 3 is giving an error.
    When mounting, it gives an error like "mount.nfs: access denied by server while mounting 10.100.64.24:/var/lib/nova/instances".

    what should I do?

    January 22, 2014 22:53
    • Vero

      Hi Kashyap,

      I am having the same issue, did you resolve it?

      Thanks!

      February 12, 2014 06:09
      • Lucio

        Could be many things:

        echo > /etc/sysconfig/iptables
        service iptables restart
        /etc/init.d/rpcbind start

        And don’t forget to reload exportfs with ‘exportfs -a’

        February 13, 2014 11:38
        • Lucio

          Adding to my previous comment, here is my iptables fragment that allows the shared folder with NFS on CentOS (according to this post: http://www.cyberciti.biz/faq/centos-fedora-rhel-iptables-open-nfs-server-ports/):

          -A INPUT -p gre -j ACCEPT
          -A INPUT -s 192.168.0.0/24 -m state --state NEW -p udp --dport 111 -j ACCEPT
          -A INPUT -s 192.168.0.0/24 -m state --state NEW -p tcp --dport 111 -j ACCEPT
          -A INPUT -s 192.168.0.0/24 -m state --state NEW -p tcp --dport 2049 -j ACCEPT
          -A INPUT -s 192.168.0.0/24 -m state --state NEW -p tcp --dport 32803 -j ACCEPT
          -A INPUT -s 192.168.0.0/24 -m state --state NEW -p udp --dport 32769 -j ACCEPT
          -A INPUT -s 192.168.0.0/24 -m state --state NEW -p tcp --dport 892 -j ACCEPT
          -A INPUT -s 192.168.0.0/24 -m state --state NEW -p udp --dport 892 -j ACCEPT
          -A INPUT -s 192.168.0.0/24 -m state --state NEW -p tcp --dport 875 -j ACCEPT
          -A INPUT -s 192.168.0.0/24 -m state --state NEW -p udp --dport 875 -j ACCEPT
          -A INPUT -s 192.168.0.0/24 -m state --state NEW -p tcp --dport 662 -j ACCEPT
          -A INPUT -s 192.168.0.0/24 -m state --state NEW -p udp --dport 662 -j ACCEPT
          #-A INPUT -p tcp -m comment --comment "999 drop all other requests" -j DROP
          COMMIT

          Hope this helps.

          February 17, 2014 12:32
          • Lucio

            Hi,
            I’m sending my complete iptables rules that are working.


            Compute nodes

            # Generated by iptables-save v1.4.7 on Mon Feb 17 14:49:00 2014
            *raw
            :PREROUTING ACCEPT [2989:4238873]
            :OUTPUT ACCEPT [1245:70574]
            -A PREROUTING -p gre -m comment --comment "333 accept gre" -j NOTRACK
            COMMIT
            # Completed on Mon Feb 17 14:49:00 2014
            # Generated by iptables-save v1.4.7 on Mon Feb 17 14:49:00 2014
            *filter
            :INPUT ACCEPT [0:0]
            :FORWARD ACCEPT [0:0]
            :OUTPUT ACCEPT [1:124]
            -A INPUT -p icmp -m comment --comment "000 accept all icmp requests" -j ACCEPT
            -A INPUT -i lo -m comment --comment "001 accept all to lo interface" -j ACCEPT
            -A INPUT -m comment --comment "002 accept related established rules" -m state --state RELATED,ESTABLISHED -j ACCEPT
            -A INPUT -s 10.20.0.2/32 -p tcp -m multiport --sports 4369,5672,41055,55672,61613 -m comment --comment "003 remote rabbitmq" -j ACCEPT
            -A INPUT -p tcp -m multiport --sports 8140 -m comment --comment "004 remote puppet" -j ACCEPT
            -A INPUT -p tcp -m multiport --ports 22 -m comment --comment "020 ssh" -j ACCEPT
            -A INPUT -p tcp -m multiport --ports 80,443 -m comment --comment "100 http" -j ACCEPT
            -A INPUT -p tcp -m multiport --ports 3306,3307,4567,4568 -m comment --comment "101 mysql" -j ACCEPT
            -A INPUT -p tcp -m multiport --ports 5000,35357 -m comment --comment "102 keystone" -j ACCEPT
            -A INPUT -p tcp -m multiport --ports 8080,6000,6001,6002 -m comment --comment "103 swift" -j ACCEPT
            -A INPUT -p tcp -m multiport --ports 9292,9191,8773 -m comment --comment "104 glance" -j ACCEPT
            -A INPUT -p tcp -m multiport --ports 8774,8775,8776,6080 -m comment --comment "105 nova" -j ACCEPT
            -A INPUT -p tcp -m multiport --ports 4369,5672,5673,41055 -m comment --comment "106 rabbitmq" -j ACCEPT
            -A INPUT -p tcp -m multiport --ports 11211 -m comment --comment "107 memcached tcp" -j ACCEPT
            -A INPUT -p udp -m multiport --ports 11211 -m comment --comment "107 memcached udp" -j ACCEPT
            -A INPUT -p tcp -m multiport --ports 873 -m comment --comment "108 rsync" -j ACCEPT
            -A INPUT -p tcp -m multiport --ports 3260 -m comment --comment "109 iscsi" -j ACCEPT
            -A INPUT -p tcp -m multiport --ports 9696 -m comment --comment "110 neutron" -j ACCEPT
            -A INPUT -p udp -m multiport --ports 67 -m comment --comment "111 dhcp-server" -j ACCEPT
            -A INPUT -p udp -m multiport --ports 53 -m comment --comment "111 dns-server" -j ACCEPT
            -A INPUT -p udp -m multiport --ports 123 -m comment --comment "112 ntp-server" -j ACCEPT
            -A INPUT -p udp -m multiport --ports 5404 -m comment --comment "113 corosync-input" -j ACCEPT
            -A INPUT -p udp -m multiport --ports 5405 -m comment --comment "114 corosync-output" -j ACCEPT
            -A INPUT -p udp -m multiport --ports 58882 -m comment --comment "115 openvswitch db" -j ACCEPT
            -A INPUT -p tcp -m multiport --ports 5666 -m comment --comment "116 nrpe-server" -j ACCEPT
            -A INPUT -p tcp -m multiport --ports 16509 -m comment --comment "117 libvirt" -j ACCEPT
            -A INPUT -s 192.168.0.0/24 -p tcp -m multiport --ports 5900:6100 -m comment --comment "118 vnc ports" -j ACCEPT
            -A INPUT -p tcp -m multiport --ports 8777 -m comment --comment "119 ceilometer" -j ACCEPT
            -A INPUT -p gre -j ACCEPT
            -A INPUT -s 192.168.0.0/24 -m state --state NEW -p udp --dport 111 -j ACCEPT
            -A INPUT -s 192.168.0.0/24 -m state --state NEW -p tcp --dport 111 -j ACCEPT
            -A INPUT -s 192.168.0.0/24 -m state --state NEW -p tcp --dport 2049 -j ACCEPT
            -A INPUT -s 192.168.0.0/24 -m state --state NEW -p tcp --dport 32803 -j ACCEPT
            -A INPUT -s 192.168.0.0/24 -m state --state NEW -p udp --dport 32769 -j ACCEPT
            -A INPUT -s 192.168.0.0/24 -m state --state NEW -p tcp --dport 892 -j ACCEPT
            -A INPUT -s 192.168.0.0/24 -m state --state NEW -p udp --dport 892 -j ACCEPT
            -A INPUT -s 192.168.0.0/24 -m state --state NEW -p tcp --dport 875 -j ACCEPT
            -A INPUT -s 192.168.0.0/24 -m state --state NEW -p udp --dport 875 -j ACCEPT
            -A INPUT -s 192.168.0.0/24 -m state --state NEW -p tcp --dport 662 -j ACCEPT
            -A INPUT -s 192.168.0.0/24 -m state --state NEW -p udp --dport 662 -j ACCEPT
            #-A INPUT -p tcp -m comment --comment "999 drop all other requests" -j DROP
            COMMIT
            # Completed on Mon Feb 17 14:49:00 2014
            # Generated by iptables-save v1.4.7 on Mon Feb 17 14:49:00 2014
            *mangle
            :PREROUTING ACCEPT [5744:7988973]
            :INPUT ACCEPT [5744:7988973]
            :FORWARD ACCEPT [0:0]
            :OUTPUT ACCEPT [2162:159573]
            :POSTROUTING ACCEPT [2162:159573]
            -A POSTROUTING -p udp -m udp --dport 514 -j CHECKSUM --checksum-fill
            COMMIT
            # Completed on Mon Feb 17 14:49:00 2014


            Controller:

            # Generated by iptables-save v1.4.7 on Mon Feb 17 11:47:00 2014
            *raw
            :PREROUTING ACCEPT [104268:148207803]
            :OUTPUT ACCEPT [52151:3264763]
            -A PREROUTING -p gre -m comment --comment "333 accept gre" -j NOTRACK
            COMMIT
            # Completed on Mon Feb 17 11:47:00 2014
            # Generated by iptables-save v1.4.7 on Mon Feb 17 11:47:00 2014
            *filter
            :INPUT ACCEPT [0:0]
            :FORWARD ACCEPT [0:0]
            :OUTPUT ACCEPT [1:127]
            -A INPUT -p icmp -m comment --comment "000 accept all icmp requests" -j ACCEPT
            -A INPUT -i lo -m comment --comment "001 accept all to lo interface" -j ACCEPT
            -A INPUT -m comment --comment "002 accept related established rules" -m state --state RELATED,ESTABLISHED -j ACCEPT
            -A INPUT -s 10.20.0.2/32 -p tcp -m multiport --sports 4369,5672,41055,55672,61613 -m comment --comment "003 remote rabbitmq" -j ACCEPT
            -A INPUT -p tcp -m multiport --sports 8140 -m comment --comment "004 remote puppet" -j ACCEPT
            -A INPUT -s 10.20.0.2/32 -p tcp -m multiport --dports 8888 -m comment --comment "007 tinyproxy" -j ACCEPT
            -A INPUT -p tcp -m multiport --ports 22 -m comment --comment "020 ssh" -j ACCEPT
            -A INPUT -p tcp -m multiport --ports 80,443 -m comment --comment "100 http" -j ACCEPT
            -A INPUT -p tcp -m multiport --ports 3306,3307,4567,4568 -m comment --comment "101 mysql" -j ACCEPT
            -A INPUT -p tcp -m multiport --ports 5000,35357 -m comment --comment "102 keystone" -j ACCEPT
            -A INPUT -p tcp -m multiport --ports 8080,6000,6001,6002 -m comment --comment "103 swift" -j ACCEPT
            -A INPUT -p tcp -m multiport --ports 9292,9191,8773 -m comment --comment "104 glance" -j ACCEPT
            -A INPUT -p tcp -m multiport --ports 8774,8775,8776,6080 -m comment --comment "105 nova" -j ACCEPT
            -A INPUT -p tcp -m multiport --ports 4369,5672,5673,41055 -m comment --comment "106 rabbitmq" -j ACCEPT
            -A INPUT -p tcp -m multiport --ports 11211 -m comment --comment "107 memcached tcp" -j ACCEPT
            -A INPUT -p udp -m multiport --ports 11211 -m comment --comment "107 memcached udp" -j ACCEPT
            -A INPUT -p tcp -m multiport --ports 873 -m comment --comment "108 rsync" -j ACCEPT
            -A INPUT -p tcp -m multiport --ports 3260 -m comment --comment "109 iscsi" -j ACCEPT
            -A INPUT -p tcp -m multiport --ports 9696 -m comment --comment "110 neutron" -j ACCEPT
            -A INPUT -p udp -m multiport --ports 67 -m comment --comment "111 dhcp-server" -j ACCEPT
            -A INPUT -p udp -m multiport --ports 53 -m comment --comment "111 dns-server" -j ACCEPT
            -A INPUT -p udp -m multiport --ports 123 -m comment --comment "112 ntp-server" -j ACCEPT
            -A INPUT -p udp -m multiport --ports 5404 -m comment --comment "113 corosync-input" -j ACCEPT
            -A INPUT -p udp -m multiport --ports 5405 -m comment --comment "114 corosync-output" -j ACCEPT
            -A INPUT -p udp -m multiport --ports 58882 -m comment --comment "115 openvswitch db" -j ACCEPT
            -A INPUT -p tcp -m multiport --ports 5666 -m comment --comment "116 nrpe-server" -j ACCEPT
            -A INPUT -p tcp -m multiport --ports 16509 -m comment --comment "117 libvirt" -j ACCEPT
            -A INPUT -s 192.168.0.0/24 -p tcp -m multiport --ports 5900:6100 -m comment --comment "118 vnc ports" -j ACCEPT
            -A INPUT -p tcp -m multiport --ports 8777 -m comment --comment "119 ceilometer" -j ACCEPT
            -A INPUT -p tcp -m multiport --dports 8004 -m comment --comment "204 heat-api" -j ACCEPT
            -A INPUT -p tcp -m multiport --dports 8000 -m comment --comment "205 heat-api-cfn" -j ACCEPT
            -A INPUT -p tcp -m multiport --dports 8003 -m comment --comment "206 heat-api-cloudwatch" -j ACCEPT
            -A INPUT -p gre -j ACCEPT
            -A INPUT -s 192.168.0.0/24 -m state --state NEW -p udp --dport 111 -j ACCEPT
            -A INPUT -s 192.168.0.0/24 -m state --state NEW -p tcp --dport 111 -j ACCEPT
            -A INPUT -s 192.168.0.0/24 -m state --state NEW -p tcp --dport 2049 -j ACCEPT
            -A INPUT -s 192.168.0.0/24 -m state --state NEW -p tcp --dport 32803 -j ACCEPT
            -A INPUT -s 192.168.0.0/24 -m state --state NEW -p udp --dport 32769 -j ACCEPT
            -A INPUT -s 192.168.0.0/24 -m state --state NEW -p tcp --dport 892 -j ACCEPT
            -A INPUT -s 192.168.0.0/24 -m state --state NEW -p udp --dport 892 -j ACCEPT
            -A INPUT -s 192.168.0.0/24 -m state --state NEW -p tcp --dport 875 -j ACCEPT
            -A INPUT -s 192.168.0.0/24 -m state --state NEW -p udp --dport 875 -j ACCEPT
            -A INPUT -s 192.168.0.0/24 -m state --state NEW -p tcp --dport 662 -j ACCEPT
            -A INPUT -s 192.168.0.0/24 -m state --state NEW -p udp --dport 662 -j ACCEPT
            #-A INPUT -p tcp -m comment --comment "999 drop all other requests" -j DROP
            COMMIT
            # Completed on Mon Feb 17 11:47:00 2014
            # Generated by iptables-save v1.4.7 on Mon Feb 17 11:47:00 2014
            *mangle
            :PREROUTING ACCEPT [107038:151961372]
            :INPUT ACCEPT [107032:151956578]
            :FORWARD ACCEPT [0:0]
            :OUTPUT ACCEPT [53676:3385228]
            :POSTROUTING ACCEPT [53676:3385228]
            -A POSTROUTING -p udp -m udp --dport 514 -j CHECKSUM --checksum-fill
            COMMIT
            # Completed on Mon Feb 17 11:47:00 2014

            February 19, 2014 02:36
  5. Srinivas Avasarala

    How popular is NFS shared storage in OpenStack KVM deployments?

    March 27, 2014 14:32
    • Guilherme Russi

      Hello, here at this part:

      [root@vmcom1-mn ~]#service nova-api stop
      [root@vmcom1-mn ~]#service libvirt-bin stop
      [root@vmcom1-mn ~]#find / -uid 106 -exec chown nova {} \; # note the 106 here is the old nova uid before the change
      [root@vmcom1-mn ~]#find / -uid 104 -exec chown libvirt-qemu {} \; # note the 104 here is the old nova uid before the change
      [root@vmcom1-mn ~]# find / -gid 107 -exec chgrp nova {} \; #note the 107 here is the old nova uid before the change
      [root@vmcom1-mn ~]#find / -gid 104 -exec chgrp libvirt-qemu {} \; #note the 104 here is the old nova uid before the change
      [root@vmcom1-mn ~]#service nova-api restart
      [root@vmcom1-mn ~]#service libvirt-bin restart

      steps 5 and 6 are about the old GID right? Or is the old UID to change the group too?

      Thank you.

      April 2, 2014 04:58

