=========
Migration
=========

QEMU has code to load/save the state of the guest that it is running.
These are two complementary operations.  Saving the state just does
that: it saves the state of each device that the guest is running.
Restoring a guest is the opposite operation: we need to load the
state of each device.

For this to work, QEMU has to be launched with the same arguments both
times.  I.e. it can only restore the state into a guest that has the
same devices as the one it was saved from (this last requirement can
be relaxed a bit, but for now we can consider that the configuration
has to be exactly the same).

Once we are able to save/restore a guest, a new piece of functionality
is requested: migration.  This means that QEMU is able to start on one
machine and be "migrated" to another machine, i.e. be moved to another
machine.

Next came the "live migration" functionality.  This is important
because some guests run with a lot of state (especially RAM), and it
can take a while to move all of that state from one machine to
another.  Live migration allows the guest to continue running while
the state is transferred; the guest only has to be stopped while the
last part of the state is transferred.  Typically the time that the
guest is unresponsive during live migration is in the low hundreds of
milliseconds (though this depends on a lot of things).

Types of migration
==================

Now that we have talked about live migration, there are several ways
to do migration:

- tcp migration: do the migration using tcp sockets
- unix migration: do the migration using unix sockets
- exec migration: do the migration using the stdin/stdout of a process.
- fd migration: do the migration using a file descriptor that is
  passed to QEMU.  QEMU doesn't care how this file descriptor is opened.

All four of these migration protocols use the same infrastructure to
save/restore device state.  This infrastructure is shared with the
savevm/loadvm functionality.
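
For example, a tcp migration can be driven like this (an illustrative
invocation; the port, hostname and guest arguments are placeholders):

::

  # Destination: same guest configuration, waiting for the stream on port 4444
  qemu-system-x86_64 <same arguments as the source> -incoming tcp:0:4444

  # Source monitor: start the migration in the background
  (qemu) migrate -d tcp:dest.example.org:4444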

State Live Migration
====================

This is used for RAM and block devices.  It is not yet ported to vmstate.
<Fill more information here>

Common infrastructure
=====================

The files, sockets or fds that carry the migration stream are abstracted by
the ``QEMUFile`` type (see `migration/qemu-file.h`).  In most cases this
is connected to a subtype of ``QIOChannel`` (see `io/`).

Saving the state of one device
==============================

The state of a device is saved using intermediate buffers.  There are
some helper functions to assist with this saving.

There is a new concept that we have to explain here: the device state
version.  When we migrate a device, we save/load its state as a series
of fields.  Sometimes, due to bugs or new functionality, we need to
change the state to store more/different information.  We use the
version to identify each time we make such a change.  Each version is
associated with a series of saved fields.  `save_state` always saves
the state in the newest version.  But `load_state` is sometimes able
to load state from an older version.

Legacy way
----------

This way is going to disappear as soon as all current users are ported
to VMSTATE.

Each device has to register two functions, one to save the state and
another to load the state back.

.. code:: c

  int register_savevm(DeviceState *dev,
                      const char *idstr,
                      int instance_id,
                      int version_id,
                      SaveStateHandler *save_state,
                      LoadStateHandler *load_state,
                      void *opaque);

  typedef void SaveStateHandler(QEMUFile *f, void *opaque);
  typedef int LoadStateHandler(QEMUFile *f, void *opaque, int version_id);

The important functions for the device state format are `save_state`
and `load_state`.  Notice that `load_state` receives a version_id
parameter, so it knows what state format it is receiving.  `save_state`
doesn't have a version_id parameter because it always uses the latest
version.
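
As a sketch of what such a pair of handlers might look like, consider a
hypothetical device whose state is two 32-bit fields (``MyDeviceState``
and its fields are invented for this example; ``extra`` was added in
version 2):

.. code:: c

  static void mydev_save(QEMUFile *f, void *opaque)
  {
      MyDeviceState *s = opaque;

      /* Always write the newest format (version 2). */
      qemu_put_be32(f, s->counter);
      qemu_put_be32(f, s->extra);
  }

  static int mydev_load(QEMUFile *f, void *opaque, int version_id)
  {
      MyDeviceState *s = opaque;

      if (version_id < 1 || version_id > 2) {
          return -EINVAL;
      }
      s->counter = qemu_get_be32(f);
      /* 'extra' is only present in the stream from version 2 onwards. */
      s->extra = version_id >= 2 ? qemu_get_be32(f) : 0;
      return 0;
  }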

VMState
-------

The legacy way of saving/loading the state of a device had the problem
that we had to keep two functions in sync.  If we made a change in one
of them and not in the other, we would get a failed migration.

VMState changed the way that state is saved/loaded.  Instead of using
one function to save the state and another to load it, it uses a
declarative description of what the state consists of.  VMState then
interprets that definition in order to load/save the state.  As the
state is declared only once, it can't go out of sync between the
save and load paths.

An example (from hw/input/pckbd.c):

.. code:: c

  static const VMStateDescription vmstate_kbd = {
      .name = "pckbd",
      .version_id = 3,
      .minimum_version_id = 3,
      .fields = (VMStateField[]) {
          VMSTATE_UINT8(write_cmd, KBDState),
          VMSTATE_UINT8(status, KBDState),
          VMSTATE_UINT8(mode, KBDState),
          VMSTATE_UINT8(pending, KBDState),
          VMSTATE_END_OF_LIST()
      }
  };

We are declaring the state with name "pckbd".  The `version_id` is 3,
and the fields are four uint8_t members of a KBDState structure.
We registered this with:

.. code:: c

    vmstate_register(NULL, 0, &vmstate_kbd, s);

Note: talk about how vmstate <-> qdev interact, and what the instance ids mean.
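
For qdev devices, the usual pattern is to hang the description off the
``DeviceClass`` instead, so that registration happens automatically when
the device is realized (a minimal sketch):

.. code:: c

  static void kbd_class_init(ObjectClass *klass, void *data)
  {
      DeviceClass *dc = DEVICE_CLASS(klass);

      /* qdev registers vmstate_kbd for each instance of the device. */
      dc->vmsd = &vmstate_kbd;
  }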

You can search for ``VMSTATE_*`` macros for lots of types used in QEMU in
include/hw/hw.h.

More about versions
-------------------

Version numbers are intended for major incompatible changes to the
migration of a device, and using them breaks backwards-migration
compatibility; in general most changes can be made by adding Subsections
(see below) or _TEST macros (see below) which won't break compatibility.

You can see that there are several version fields:

- `version_id`: the maximum version_id supported by VMState for that device.
- `minimum_version_id`: the minimum version_id that VMState is able to
  understand for that device.
- `minimum_version_id_old`: for devices that could not be fully ported to
  vmstate, we can assign a function that knows how to read the old state.
  This field is ignored if there is no `load_state_old` handler.

So, VMState is able to read versions from minimum_version_id to
version_id.  And the function ``load_state_old()`` (if present) is able to
load state from minimum_version_id_old to minimum_version_id.  This
function is deprecated and will be removed when no more users are left.

Saving state will always create a section with the 'version_id' value
and thus can't be loaded by any older QEMU.
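
For instance, a description whose current format is version 2 but which
can still read version 1 streams might look like this (a sketch; the
device and field names are invented, and ``VMSTATE_UINT32_V`` is the
versioned variant of the field macro):

.. code:: c

  static const VMStateDescription vmstate_mydev = {
      .name = "mydev",
      .version_id = 2,          /* state is always saved as version 2 */
      .minimum_version_id = 1,  /* version 1 streams can still be loaded */
      .fields = (VMStateField[]) {
          VMSTATE_UINT32(counter, MyDeviceState),
          /* 'extra' is only present in the stream from version 2 onwards */
          VMSTATE_UINT32_V(extra, MyDeviceState, 2),
          VMSTATE_END_OF_LIST()
      }
  };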

Massaging functions
-------------------

Sometimes it is not enough to be able to save the state directly from
one structure; we first need to fill in the correct values.  One
example is when we are using kvm.  Before saving the cpu state, we
need to ask kvm to copy the state that it is using into QEMU.  And the
opposite when we are loading the state: we need a way to tell kvm to
load the state for the cpu that we have just loaded from the QEMUFile.

The functions to do that are inside a vmstate definition, and are called:

- ``int (*pre_load)(void *opaque);``

  This function is called before we load the state of one device.

- ``int (*post_load)(void *opaque, int version_id);``

  This function is called after we load the state of one device.

- ``int (*pre_save)(void *opaque);``

  This function is called before we save the state of one device.

Example: you can look at hpet.c, which uses these functions to
massage the state that is transferred.
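
A sketch of how such callbacks are wired up, for a hypothetical device
that has to synchronise with an accelerator (the ``mydev_*`` helpers are
invented for this example):

.. code:: c

  static int mydev_pre_save(void *opaque)
  {
      MyDeviceState *s = opaque;

      /* Pull the live state out of the accelerator before saving it. */
      mydev_sync_state_from_kvm(s);
      return 0;
  }

  static int mydev_post_load(void *opaque, int version_id)
  {
      MyDeviceState *s = opaque;

      /* Push the freshly loaded state back to where it is actually used. */
      mydev_sync_state_to_kvm(s);
      return 0;
  }

  static const VMStateDescription vmstate_mydev = {
      .name = "mydev",
      .version_id = 1,
      .minimum_version_id = 1,
      .pre_save = mydev_pre_save,
      .post_load = mydev_post_load,
      .fields = (VMStateField[]) {
          /* ... fields ... */
          VMSTATE_END_OF_LIST()
      }
  };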

If you use memory API functions that update memory layout outside
initialization (i.e., in response to a guest action), this is a strong
indication that you need to call these functions in a `post_load` callback.
Examples of such memory API functions are:

  - memory_region_add_subregion()
  - memory_region_del_subregion()
  - memory_region_set_readonly()
  - memory_region_set_enabled()
  - memory_region_set_address()
  - memory_region_set_alias_offset()

Subsections
-----------

The use of version_id allows migration from older versions of a device
to newer ones, but not the other way around.  This makes it very
complicated to fix bugs in stable branches.  If we need to add anything
to the state to fix a bug, we have to disable migration to older
versions that don't have that bug-fix (i.e. that don't have the new
field).

But sometimes that bug-fix is only needed in some cases, not always.
For instance, it may only matter when the device is in the middle of a
DMA operation, or is using some specific functionality, and so on.

It is impossible to make migration from any version to any other
version work.  But we can do better than only allowing migration from
older versions to newer ones.  For fields that are only needed
sometimes, we add the idea of subsections.  A subsection is "like" a
device vmstate, but with one particularity: it has a Boolean function
that tells whether its values need to be sent or not.  If this function
returns false, the subsection is not sent.

On the receiving side, if we find a subsection for a device that we
don't understand, we just fail the migration.  If we understand all
the subsections, then we load the state successfully.

One important note is that the post_load() function is called "after"
loading all subsections, because a newer subsection could change the
same value that it uses.

Example:

.. code:: c

  static bool ide_drive_pio_state_needed(void *opaque)
  {
      IDEState *s = opaque;

      return ((s->status & DRQ_STAT) != 0)
          || (s->bus->error_status & BM_STATUS_PIO_RETRY);
  }

  const VMStateDescription vmstate_ide_drive_pio_state = {
      .name = "ide_drive/pio_state",
      .version_id = 1,
      .minimum_version_id = 1,
      .pre_save = ide_drive_pio_pre_save,
      .post_load = ide_drive_pio_post_load,
      .needed = ide_drive_pio_state_needed,
      .fields = (VMStateField[]) {
          VMSTATE_INT32(req_nb_sectors, IDEState),
          VMSTATE_VARRAY_INT32(io_buffer, IDEState, io_buffer_total_len, 1,
                               vmstate_info_uint8, uint8_t),
          VMSTATE_INT32(cur_io_buffer_offset, IDEState),
          VMSTATE_INT32(cur_io_buffer_len, IDEState),
          VMSTATE_UINT8(end_transfer_fn_idx, IDEState),
          VMSTATE_INT32(elementary_transfer_size, IDEState),
          VMSTATE_INT32(packet_transfer_size, IDEState),
          VMSTATE_END_OF_LIST()
      }
  };

  const VMStateDescription vmstate_ide_drive = {
      .name = "ide_drive",
      .version_id = 3,
      .minimum_version_id = 0,
      .post_load = ide_drive_post_load,
      .fields = (VMStateField[]) {
          /* ... several fields ... */
          VMSTATE_END_OF_LIST()
      },
      .subsections = (const VMStateDescription*[]) {
          &vmstate_ide_drive_pio_state,
          NULL
      }
  };

Here we have a subsection for the pio state.  We only need to
save/send this state when we are in the middle of a pio operation
(that is what ``ide_drive_pio_state_needed()`` checks).  If DRQ_STAT is
not enabled, the values in those fields are garbage and don't need to
be sent.

Using a condition function that checks a 'property' to determine whether
to send a subsection allows backwards migration compatibility when
new subsections are added, as sketched below.

For example:

   a) Add a new property using ``DEFINE_PROP_BOOL`` - e.g. support-foo -
      and default it to true.
   b) Add an entry to the ``HW_COMPAT_`` table for the previous version
      that sets the property to false.
   c) Add a static bool support_foo function that tests the property.
   d) Add a subsection with a .needed set to the support_foo function.
   e) (potentially) Add a pre_load that sets up a default value for 'foo'
      to be used if the subsection isn't loaded.

Now that subsection will not be generated when using an older
machine type and the migration stream will be accepted by older
QEMU versions.  pre_load functions can be used to initialise state
on the newer version so that it defaults to suitable values
when loading streams created by older QEMU versions that do not
generate the subsection.
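
A sketch of steps (a), (c) and (d) for a hypothetical ``support-foo``
property (all names are invented; step (b) lives in the machine-type
compat tables and is omitted here):

.. code:: c

  /* (a) New property, defaulting to true on new machine types. */
  static Property mydev_properties[] = {
      DEFINE_PROP_BOOL("support-foo", MyDeviceState, support_foo, true),
      DEFINE_PROP_END_OF_LIST(),
  };

  /* (c) The condition function simply tests the property. */
  static bool mydev_foo_needed(void *opaque)
  {
      MyDeviceState *s = opaque;

      return s->support_foo;
  }

  /* (d) The subsection is only generated when the property is set. */
  static const VMStateDescription vmstate_mydev_foo = {
      .name = "mydev/foo",
      .version_id = 1,
      .minimum_version_id = 1,
      .needed = mydev_foo_needed,
      .fields = (VMStateField[]) {
          VMSTATE_UINT32(foo, MyDeviceState),
          VMSTATE_END_OF_LIST()
      }
  };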

In some cases subsections are added for data that had been accidentally
omitted by earlier versions; if the missing data causes the migration
process to succeed but the guest to behave badly, then it may be better
to send the subsection and cause the migration to explicitly fail
with the unknown subsection error.  If the bad behaviour only happens
with certain data values, making the subsection conditional on
the data value (rather than the machine type) allows migrations to succeed
in most cases.  In general the preference is to tie the subsection to
the machine type, and allow reliable migrations, unless the behaviour
from omission of the subsection is really bad.

Not sending existing elements
-----------------------------

Sometimes members of the VMState are no longer needed:

  - removing them will break migration compatibility

  - making them version dependent and bumping the version will break
    backwards migration compatibility.

The best way is to:

  a) Add a new property/compatibility/function in the same way as for
     subsections above.
  b) replace the VMSTATE macro with the _TEST version of the macro, e.g.:

   ``VMSTATE_UINT32(foo, barstruct)``

   becomes

   ``VMSTATE_UINT32_TEST(foo, barstruct, pre_version_baz)``

   Sometime in the future, when we no longer care about the ancient
   versions, these can be killed off.
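
The test function receives the device state and the incoming section's
version, so a sketch of ``pre_version_baz`` might be (assuming a compat
flag set up in the same way as the subsection example above; the names
are invented):

.. code:: c

  static bool pre_version_baz(void *opaque, int version_id)
  {
      MyDeviceState *s = opaque;

      /* Only put 'foo' on the wire for machine types that expect it. */
      return s->compat_pre_baz;
  }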

Return path
-----------

In most migration scenarios there is only a single data path that runs
from the source VM to the destination, typically along a single fd (although
possibly with another fd or similar for some fast way of throwing pages
across).

However, some uses need two-way communication; in particular the Postcopy
destination needs to be able to request pages on demand from the source.

For these scenarios there is a 'return path' from the destination to the
source; ``qemu_file_get_return_path(QEMUFile* fwdpath)`` gives the QEMUFile*
for the return path.
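
A minimal sketch of obtaining the back channel from an established
forward stream (the error handling here is illustrative):

.. code:: c

  QEMUFile *rp = qemu_file_get_return_path(fwdpath);

  if (!rp) {
      /* The underlying channel doesn't support a return path. */
      error_report("this migration channel has no return path");
  }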

  Source side

     Forward path - written by migration thread
     Return path  - opened by main thread, read by return-path thread

  Destination side

     Forward path - read by main thread
     Return path  - opened by main thread, written by main thread AND postcopy
     thread (protected by rp_mutex)

Postcopy
========

'Postcopy' migration is a way to deal with migrations that refuse to converge
(or take too long to converge).  Its plus side is that there is an upper bound
on the amount of migration traffic and the time the migration takes; the down
side is that during the postcopy phase, a failure of *either* side or of the
network connection causes the guest to be lost.

In postcopy the destination CPUs are started before all the memory has been
transferred, and accesses to pages that are yet to be transferred cause
a fault that's translated by QEMU into a request to the source QEMU.

Postcopy can be combined with precopy (i.e. normal migration) so that if
precopy doesn't finish in a given time the switch is made to postcopy.

Enabling postcopy
-----------------

To enable postcopy, issue this command on the monitor (both source and
destination) prior to the start of migration:

``migrate_set_capability postcopy-ram on``

The normal commands are then used to start a migration, which is still
started in precopy mode.  Issuing:

``migrate_start_postcopy``

will now cause the transition from precopy to postcopy.
It can be issued immediately after migration is started or any
time later on.  Issuing it after the end of a migration is harmless.

Blocktime is a postcopy live migration metric, intended to show how
long a vCPU was in a state of interruptible sleep due to a page fault.
The metric is calculated both for all vCPUs together, as an overlapped
value, and separately for each vCPU.  These values are calculated on
the destination side.  To enable postcopy blocktime calculation, enter
the following command on the destination monitor:

``migrate_set_capability postcopy-blocktime on``

Postcopy blocktime can be retrieved with the ``query-migrate`` QMP
command: the ``postcopy-blocktime`` value shows the overlapped blocking
time for all vCPUs, and ``postcopy-vcpu-blocktime`` shows the list of
blocking times per vCPU.
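
The shape of the QMP exchange is roughly as follows (abridged; the
numeric values are invented):

::

  -> { "execute": "query-migrate" }
  <- { "return": { "status": "completed",
                   "postcopy-blocktime": 3245,
                   "postcopy-vcpu-blocktime": [320, 23, 254, 12],
                   ... } }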

.. note::
  During the postcopy phase, the bandwidth limits set using
  ``migrate_set_speed`` are ignored (to avoid delaying requested pages that
  the destination is waiting for).

Postcopy device transfer
------------------------

Loading of device data may cause the device emulation to access guest RAM,
which may trigger faults that have to be resolved by the source; as such
the migration stream has to be able to respond with page data *during* the
device load, and hence the device data has to be read from the stream
completely before the device load begins, to free the stream up.  This is
achieved by 'packaging' the device data into a blob that's read in one go.

Source behaviour
----------------

Until postcopy is entered the migration stream is identical to normal
precopy, except for the addition of a 'postcopy advise' command at
the beginning, to tell the destination that postcopy might happen.
When postcopy starts the source sends the page discard data and then
forms the 'package' containing:

   - Command: 'postcopy listen'
   - The device state

     A series of sections, identical to the precopy stream's device state
     stream, containing everything except postcopiable devices (i.e. RAM)
   - Command: 'postcopy run'

The 'package' is sent as the data part of a Command: ``CMD_PACKAGED``, and the
contents are formatted in the same way as the main migration stream.

During postcopy the source scans the list of dirty pages and sends them
to the destination without being requested (in much the same way as precopy);
however, when a page request is received from the destination, the dirty page
scanning restarts from the requested location.  This causes requested pages
to be sent quickly, and also causes pages directly after the requested page
to be sent quickly, in the hope that those pages are likely to be used
by the destination soon.

Destination behaviour
---------------------

Initially the destination looks the same as precopy, with a single thread
reading the migration stream; the 'postcopy advise' and 'discard' commands
are processed to change the way RAM is managed, but don't affect the stream
processing.

::

  ------------------------------------------------------------------------------
                          1      2   3     4 5                      6   7
  main -----DISCARD-CMD_PACKAGED ( LISTEN  DEVICE     DEVICE DEVICE RUN )
  thread                             |       |
                                     |     (page request)
                                     |        \___
                                     v            \
  listen thread:                     --- page -- page -- page -- page -- page --

                                     a   b        c
  ------------------------------------------------------------------------------

- On receipt of ``CMD_PACKAGED`` (1)

   All the data associated with the package - the ( ... ) section in the
   diagram - is read into memory, and the main thread recurses into
   ``qemu_loadvm_state_main`` to process the contents of the package (2),
   which contains commands (3,6) and devices (4...)

- On receipt of 'postcopy listen' (3, i.e. the 1st command in the package)

   a new thread (a) is started that takes over servicing the migration stream,
   while the main thread carries on loading the package.  It loads normal
   background page data (b), but if a fault happens during a device load (5),
   the page returned by the source (c) is loaded by the listen thread,
   allowing the main thread's device load to carry on.

- The last thing in the ``CMD_PACKAGED`` is a 'RUN' command (6)

   letting the destination CPUs start running.  At the end of the
   ``CMD_PACKAGED`` (7) the main thread returns to normal running behaviour and
   is no longer used by migration, while the listen thread carries on servicing
   page data until the end of migration.

Postcopy states
---------------

Postcopy moves through a series of states (see postcopy_state) from
ADVISE->DISCARD->LISTEN->RUNNING->END

 - Advise

    Set at the start of migration if postcopy is enabled, even
    if it hasn't had the start command; here the destination
    checks that its OS has the support needed for postcopy, and performs
    setup to ensure the RAM mappings are suitable for later postcopy.
    The destination will fail early in migration at this point if the
    required OS support is not present.
    (Triggered by reception of the POSTCOPY_ADVISE command)

 - Discard

    Entered on receipt of the first 'discard' command; prior to
    the first Discard being performed, hugepages are switched off
    (using madvise) to ensure that no new huge pages are created
    during the postcopy phase, and to cause any huge pages that
    have discards on them to be broken.

 - Listen

    The first command in the package, POSTCOPY_LISTEN, switches
    the destination state to Listen, and starts a new thread
    (the 'listen thread') which takes over the job of receiving
    pages off the migration stream, while the main thread carries
    on processing the blob.  With this thread able to process page
    reception, the destination now 'sensitises' the RAM to detect
    any access to missing pages (on Linux using the 'userfault'
    system).

 - Running

    POSTCOPY_RUN causes the destination to synchronise all
    state and start the CPUs and IO devices running.  The main
    thread now finishes processing the migration package and
    then carries on as it would for normal precopy migration
    (although it can't do the cleanup it would do as it
    finishes a normal migration).

 - End

    The listen thread can now quit, and perform the cleanup of migration
    state; the migration is now complete.

Source side page maps
---------------------

The source side keeps two bitmaps during postcopy: the 'migration bitmap'
and the 'unsent map'.  The 'migration bitmap' is basically the same as in
the precopy case, and holds a bit to indicate that a page is 'dirty' -
i.e. needs sending.  During the precopy phase this is updated as the CPU
dirties pages; during postcopy, however, the CPUs are stopped and nothing
should dirty anything any more.

The 'unsent map' is used for the transition to postcopy.  It is a bitmap
that has a bit cleared whenever a page is sent to the destination; during
the transition to postcopy mode it is combined with the migration bitmap
to form a set of pages that:

   a) Have been sent but then redirtied (which must be discarded)
   b) Have not yet been sent - which also must be discarded to cause any
      transparent huge pages built during precopy to be broken.

Note that the contents of the unsent map are sacrificed during the
calculation of the discard set and thus aren't valid once in postcopy.
The migration bitmap is still valid and is used to ensure that no page is
sent more than once.  Any request for a page that has already been sent is
ignored.  Duplicate requests such as this can happen as a page is sent at
about the same time the destination accesses it.

Postcopy with hugepages
-----------------------

Postcopy now works with hugetlbfs backed memory:

  a) The Linux kernel on the destination must support userfault on hugepages.
  b) The huge-page configuration on the source and destination VMs must be
     identical; i.e. RAMBlocks on both sides must use the same page size.
  c) Note that ``-mem-path /dev/hugepages`` will fall back to allocating
     normal RAM if it doesn't have enough hugepages, causing (b) to fail.
     Using ``-mem-prealloc`` enforces the allocation using hugepages.
  d) Care should be taken with the size of hugepage used; postcopy with 2MB
     hugepages works well, however 1GB hugepages are likely to be problematic
     since it takes ~1 second to transfer a 1GB hugepage across a 10Gbps link,
     and until the full page is transferred the destination thread is blocked.

Postcopy with shared memory
---------------------------

Postcopy migration with shared memory needs explicit support from the other
processes that share memory and from QEMU.  There are restrictions on the
types of memory that userfault can support shared.

The Linux kernel userfault support works on `/dev/shm` memory and on
`hugetlbfs` (although the kernel doesn't provide an equivalent to
`madvise(MADV_DONTNEED)` for hugetlbfs, which may be a problem in some
configurations).

The vhost-user code in QEMU supports clients that have Postcopy support,
and the `vhost-user-bridge` (in `tests/`) and the DPDK package have changes
to support postcopy.

The client needs to open a userfaultfd and register the areas
of memory that it maps with userfault.  The client must then pass the
userfaultfd back to QEMU together with a mapping table that allows
fault addresses in the client's address space to be converted back to
RAMBlock/offsets.  The client's userfaultfd is added to the postcopy
fault-thread and page requests are made on behalf of the client by QEMU.
QEMU performs 'wake' operations on the client's userfaultfd to allow it
to continue after a page has arrived.

.. note::
  There are two future improvements that would be nice:

    a) Some way to make QEMU ignorant of the addresses in the client's
       address space
    b) Avoiding the need for QEMU to perform ufd-wake calls after the
       pages have arrived

Retro-fitting postcopy to existing clients is possible:

  a) A mechanism is needed for the registration with userfault as above,
     and the registration needs to be coordinated with the phases of
     postcopy.  In vhost-user extra messages are added to the existing
     control channel.
  b) Any thread that can block due to guest memory accesses must be
     identified and the implication understood; for example if the
     guest memory access is made while holding a lock then all other
     threads waiting for that lock will also be blocked.