
* Run docker probe only if docker or podman are available

  The docker probe uses "sudo -n" which can cause an e-mail with a
  security warning each time configure is run. Therefore run the docker
  probe only if either docker or podman are available. That avoids the
  problematic "sudo -n" on build environments which have neither docker
  nor podman installed.

  Fixes: c4575b59155e2e00 ("configure: store container engine in config-host.mak")
  Signed-off-by: Stefan Weil <sw@weilnetz.de>
  Message-Id: <20221030083510.310584-1-sw@weilnetz.de>
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Thomas Huth <thuth@redhat.com>
  Message-Id: <20221117172532.538149-2-alex.bennee@linaro.org>

* tests/avocado/machine_aspeed.py: Reduce noise on the console for SDK tests

  The Aspeed SDK images are based on OpenBMC which starts a lot of
  services. The output noise on the console can occasionally break the
  test waiting for the logging prompt. Change the U-Boot bootargs
  variable to add "quiet" to the kernel command line and reduce the
  output volume. This also drops the test on the CPU id, which was nice
  to have but not essential.

  Signed-off-by: Cédric Le Goater <clg@kaod.org>
  Message-Id: <20221104075347.370503-1-clg@kaod.org>
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Message-Id: <20221117172532.538149-3-alex.bennee@linaro.org>

* tests/docker: allow user to override check target

  This is useful when trying to bisect a particular failing test behind
  a docker run. For example:

    make docker-test-clang@fedora \
      TARGET_LIST=arm-softmmu \
      TEST_COMMAND="meson test qtest-arm/qos-test" \
      J=9 V=1

  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Message-Id: <20221117172532.538149-4-alex.bennee@linaro.org>

* docs/devel: add a maintainers section to development process

  We don't currently have a clear place in the documentation to
  describe the roles and responsibilities of a maintainer. Let's create
  one so we can.
  I've moved a few small bits out of other files to try and keep
  everything in one place.

  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
  Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Message-Id: <20221117172532.538149-5-alex.bennee@linaro.org>

* docs/devel: make language a little less code centric

  We welcome all sorts of patches.

  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
  Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Message-Id: <20221117172532.538149-6-alex.bennee@linaro.org>

* docs/devel: simplify the minimal checklist

  The bullet points are quite long and contain process tips. Move those
  bits of the bullets to the relevant sections and link to them. Use a
  table for nicer formatting of the checklist.

  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
  Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Message-Id: <20221117172532.538149-7-alex.bennee@linaro.org>

* docs/devel: try and improve the language around patch review

  It is important that contributors take the review process seriously
  and we collaborate in a respectful way while avoiding personal
  attacks. Try and make this clear in the language.
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
  Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Message-Id: <20221117172532.538149-8-alex.bennee@linaro.org>

* tests/avocado: Raise timeout for boot_linux.py:BootLinuxPPC64.test_pseries_tcg

  On my machine, a debug build of QEMU takes about 260 seconds to
  complete this test, so with the current timeout value of 180 seconds
  it always times out. Double the timeout value to 360 so the test
  definitely has enough time to complete.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Message-Id: <20221110142901.3832318-1-peter.maydell@linaro.org>
  Message-Id: <20221117172532.538149-9-alex.bennee@linaro.org>

* tests/avocado: introduce alpine virt test for CI

  The boot_linux tests download and run a full cloud image boot and
  start a full distro. While the ability to test the full boot chain is
  worthwhile, it is perhaps a little too heavyweight and causes issues
  in CI. Fix this by introducing a new alpine linux ISO boot in
  machine_aarch64_virt. This boots a fully loaded -cpu max with all the
  bells and whistles in 31s on my machine. A full debug build takes
  around 180s on my machine so we set a more generous timeout to cover
  that.

  We don't add a test for lesser GIC versions although there is some
  coverage for that already in the boot_xen.py tests. If we want to
  introduce more comprehensive testing we can do it with a custom
  kernel and initrd rather than a full distro boot.

  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Message-Id: <20221117172532.538149-10-alex.bennee@linaro.org>

* tests/avocado: skip aarch64 cloud TCG tests in CI

  We now have a much lighter weight test in machine_aarch64_virt which
  tests the full boot chain in less time.
  Rename the tests while we are at it to make it clear it is a Fedora
  cloud image.

  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Message-Id: <20221117172532.538149-11-alex.bennee@linaro.org>

* gitlab: integrate coverage report

  This should hopefully give us nice coverage information about what
  our tests (or at least the subset we are running) have hit. Ideally
  we would want a way to trigger coverage on tests likely to be
  affected by the current commit.

  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
  Message-Id: <20221117172532.538149-12-alex.bennee@linaro.org>

* vhost: mask VIRTIO_F_RING_RESET for vhost and vhost-user devices

  Commit 69e1c14aa2 ("virtio: core: vq reset feature negotation
  support") enabled VIRTIO_F_RING_RESET by default for all virtio
  devices. This feature is not currently emulated by QEMU, so for vhost
  and vhost-user devices we need to make sure it is supported by the
  offloaded device emulation (in-kernel or in another process). To do
  this we need to add VIRTIO_F_RING_RESET to the features bitmap passed
  to vhost_get_features(). This way it will be masked if the device
  does not support it.

  This issue was initially discovered with vhost-vsock and
  vhost-user-vsock, and then also tested with vhost-user-rng which
  confirmed the same issue. They fail when sending features through the
  VHOST_SET_FEATURES ioctl or VHOST_USER_SET_FEATURES message, since
  VIRTIO_F_RING_RESET is negotiated by the guest (Linux >= v6.0), but
  not supported by the device.

  Fixes: 69e1c14aa2 ("virtio: core: vq reset feature negotation support")
  Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1318
  Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
  Message-Id: <20221121101101.29400-1-sgarzare@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
  Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
  Acked-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
  Acked-by: Jason Wang <jasowang@redhat.com>

* tests: acpi: whitelist DSDT before moving PRQx to _SB scope

  Signed-off-by: Igor Mammedov <imammedo@redhat.com>
  Message-Id: <20221121153613.3972225-2-imammedo@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

* acpi: x86: move RPQx field back to _SB scope

  Commit 47a373faa6b2 (acpi: pc/q35: drop ad-hoc PCI-ISA bridge AML
  routines and let bus ennumeration generate AML) moved ISA bridge AML
  generation to respective devices and was using aml_alias() to provide
  PRQx fields in _SB. scope. However, it turned out that SeaBIOS was
  not able to process the Alias opcode when parsing the DSDT, resulting
  in a lack of keyboard during boot (SeaBIOS console, grub, FreeDOS).

  While a fix for SeaBIOS has been posted
  https://mail.coreboot.org/hyperkitty/list/seabios@seabios.org/thread/RGPL7HESH5U5JRLEO6FP77CZVHZK5J65/
  the fixed SeaBIOS might not make it into QEMU-7.2 in time. Hence this
  workaround that puts PRQx back into _SB scope and gets rid of aliases
  in the ISA bridge description, so the DSDT will be parsable by broken
  SeaBIOS.

  That brings back hardcoded references to the ISA bridge
  PCI0.S08.P40C/PCI0.SF8.PIRQ where the middle part now is auto
  generated based on the slot it's plugged in, but it should be fine as
  bridge initialization also hardcodes the PCI address of the bridge so
  it can't ever move.

  Once the QEMU tree has a fixed SeaBIOS blob, we should be able to
  drop this part and revert back to the alias based approach.

  Reported-by: Volker Rümelin <vr_qemu@t-online.de>
  Signed-off-by: Igor Mammedov <imammedo@redhat.com>
  Message-Id: <20221121153613.3972225-3-imammedo@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

* tests: acpi: x86: update expected DSDT after moving PRQx fields in _SB scope

  Expected DSDT changes, pc:

    -    Field (P40C, ByteAcc, NoLock, Preserve)
    +    Scope (\_SB)
         {
    -        PRQ0,   8,
    -        PRQ1,   8,
    -        PRQ2,   8,
    -        PRQ3,   8
    +        Field (PCI0.S08.P40C, ByteAcc, NoLock, Preserve)
    +        {
    +            PRQ0,   8,
    +            PRQ1,   8,
    +            PRQ2,   8,
    +            PRQ3,   8
    +        }
         }
    -    Alias (PRQ0, \_SB.PRQ0)
    -    Alias (PRQ1, \_SB.PRQ1)
    -    Alias (PRQ2, \_SB.PRQ2)
    -    Alias (PRQ3, \_SB.PRQ3)

  q35:

    -    Field (PIRQ, ByteAcc, NoLock, Preserve)
    -    {
    -        PRQA,   8,
    -        PRQB,   8,
    -        PRQC,   8,
    -        PRQD,   8,
    -        Offset (0x08),
    -        PRQE,   8,
    -        PRQF,   8,
    -        PRQG,   8,
    -        PRQH,   8
    +    Scope (\_SB)
    +    {
    +        Field (PCI0.SF8.PIRQ, ByteAcc, NoLock, Preserve)
    +        {
    +            PRQA,   8,
    +            PRQB,   8,
    +            PRQC,   8,
    +            PRQD,   8,
    +            Offset (0x08),
    +            PRQE,   8,
    +            PRQF,   8,
    +            PRQG,   8,
    +            PRQH,   8
    +        }
         }
    -    Alias (PRQA, \_SB.PRQA)
    -    Alias (PRQB, \_SB.PRQB)
    -    Alias (PRQC, \_SB.PRQC)
    -    Alias (PRQD, \_SB.PRQD)
    -    Alias (PRQE, \_SB.PRQE)
    -    Alias (PRQF, \_SB.PRQF)
    -    Alias (PRQG, \_SB.PRQG)
    -    Alias (PRQH, \_SB.PRQH)

  Signed-off-by: Igor Mammedov <imammedo@redhat.com>
  Message-Id: <20221121153613.3972225-4-imammedo@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

* MAINTAINERS: add mst to list of biosbits maintainers

  Adding Michael's name to the list of bios bits maintainers so that
  all changes and fixes into the biosbits framework can go through his
  tree and he is notified.

  Suggested-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Ani Sinha <ani@anisinha.ca>
  Message-Id: <20221111151138.36988-1-ani@anisinha.ca>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

* tests/avocado: configure acpi-bits to use avocado timeout

  Instead of using a hardcoded timeout, just rely on Avocado's built-in
  test case timeout. This helps avoid timeout issues on machines where
  60 seconds is not sufficient.
  Signed-off-by: John Snow <jsnow@redhat.com>
  Message-Id: <20221115212759.3095751-1-jsnow@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
  Reviewed-by: Ani Sinha <ani@anisinha.ca>

* acpi/tests/avocado/bits: keep the work directory when BITS_DEBUG is set in env

  Debugging bits issues often involves running the QEMU command line
  manually outside of the avocado environment with the generated ISO.
  Hence, it's inconvenient if the ISO gets cleaned up after the test
  has finished. This change makes sure that the work directory is kept
  after the test finishes if the test is run with BITS_DEBUG=1 in the
  environment, so that the ISO is available for use with the QEMU
  command line.

  CC: Daniel P. Berrangé <berrange@redhat.com>
  Signed-off-by: Ani Sinha <ani@anisinha.ca>
  Message-Id: <20221117113630.543495-1-ani@anisinha.ca>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

* virtio: disable error for out of spec queue-enable

  Virtio 1.0 is pretty clear that features have to be negotiated before
  enabling VQs. Unfortunately Seabios has ignored this ever since
  gaining 1.0 support (UEFI is ok). Comment the error out for now, and
  add a TODO.

  Fixes: 3c37f8b8d1 ("virtio: introduce virtio_queue_enable()")
  Cc: "Kangjie Xu" <kangjie.xu@linux.alibaba.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
  Message-Id: <20221121200339.362452-1-mst@redhat.com>

* hw/loongarch: Add default stdout uart in fdt

  Add a "chosen" subnode into the LoongArch fdt, and set its
  "stdout-path" prop to the uart node.

  Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
  Reviewed-by: Song Gao <gaosong@loongson.cn>
  Message-Id: <20221115114923.3372414-1-yangxiaojuan@loongson.cn>
  Signed-off-by: Song Gao <gaosong@loongson.cn>

* hw/loongarch: Fix setprop_sized method in fdt rtc node.

  Fix setprop_sized method in fdt rtc node.
  Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Song Gao <gaosong@loongson.cn>
  Message-Id: <20221116040300.3459818-1-yangxiaojuan@loongson.cn>
  Signed-off-by: Song Gao <gaosong@loongson.cn>

* hw/loongarch: Replace the value of uart info with macro

  Use macros to replace the uart info values such as addr and size in
  the acpi_build method.

  Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
  Reviewed-by: Song Gao <gaosong@loongson.cn>
  Message-Id: <20221115115008.3372489-1-yangxiaojuan@loongson.cn>
  Signed-off-by: Song Gao <gaosong@loongson.cn>

* target/arm: Don't do two-stage lookup if stage 2 is disabled

  In get_phys_addr_with_struct(), we call get_phys_addr_twostage() if
  the CPU supports EL2. However, we don't check here that stage 2 is
  actually enabled; instead we only check that inside
  get_phys_addr_twostage() to skip stage 2 translation. This means that
  even if stage 2 is disabled we still tell the stage 1 lookup to do
  its page table walks via stage 2.

  This works by luck for normal CPU accesses, but it breaks for debug
  accesses, which are used by the disassembler and also by semihosting
  file reads and writes, because the debug case takes a different code
  path inside S1_ptw_translate(). This means that setups that use
  semihosting for file loads are broken (a regression since 7.1,
  introduced in the recent ptw refactoring), and that sometimes
  disassembly in debug logs reports "unable to read memory" rather than
  showing the guest insns.

  Fix the bug by hoisting the "is stage 2 enabled?" check up to
  get_phys_addr_with_struct(), so that we handle S2 disabled the same
  way we do the "no EL2" case, with a simple single stage lookup.
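  The hoisted check can be illustrated with a toy model (a minimal
  sketch under hypothetical names; the real logic lives in QEMU's
  target/arm/ptw.c, and `choose_lookup` here is purely illustrative):

  ```c
  #include <stdbool.h>

  /* Toy model of the hoisted check: before the fix, supporting EL2 was
   * enough to select the two-stage path; after it, stage 2 must also
   * be enabled, otherwise we fall back to a plain stage 1 lookup. */
  typedef enum { LOOKUP_SINGLE_STAGE, LOOKUP_TWO_STAGE } LookupPath;

  static LookupPath choose_lookup(bool has_el2, bool stage2_enabled)
  {
      if (has_el2 && stage2_enabled) {
          return LOOKUP_TWO_STAGE;
      }
      /* "no EL2" and "stage 2 disabled" are now handled the same way */
      return LOOKUP_SINGLE_STAGE;
  }
  ```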
  Reported-by: Jens Wiklander <jens.wiklander@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Message-id: 20221121212404.1450382-1-peter.maydell@linaro.org

* target/arm: Use signed quantity to represent VMSAv8-64 translation level

  The LPA2 extension implements 52-bit virtual addressing for 4k and
  16k translation granules, and for the former, this means an
  additional level of translation is needed. This means we start
  counting at -1 instead of 0 when doing a walk, and so 'level' is now
  a signed quantity, and should be typed as such. So turn it from
  uint32_t into int32_t.

  This avoids a level of -1 getting misinterpreted as being >= 3, and
  terminating a page table walk prematurely with a bogus output
  address.

  Cc: Peter Maydell <peter.maydell@linaro.org>
  Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Cc: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* Update VERSION for v7.2.0-rc2

  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

* tests/avocado: Update the URLs of the advent calendar images

  The qemu-advent-calendar.org server will be decommissioned soon. I've
  mirrored the images that we use for the QEMU CI to gitlab, so update
  their URLs to point to the new location.

  Message-Id: <20221121102436.78635-1-thuth@redhat.com>
  Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Signed-off-by: Thomas Huth <thuth@redhat.com>

* tests/qtest: Decrease the amount of output from the qom-test

  The logs in the gitlab-CI have a size constraint, and sometimes we
  already hit this limit.
  The biggest part of the log then seems to be filled by the qom-test,
  so we should decrease the size of the output - which can be done
  easily by not printing the path for each property, since the path has
  already been logged at the beginning of each node that we handle
  here.

  However, if we omit the path, we should make sure to not recurse into
  child nodes in between, so that it is clear to which node each
  property belongs. Thus store the children and links in a temporary
  list and recurse only at the end of each node, when all properties
  have already been printed.

  Message-Id: <20221121194240.149268-1-thuth@redhat.com>
  Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
  Signed-off-by: Thomas Huth <thuth@redhat.com>

* tests/avocado: use new rootfs for orangepi test

  The old URL wasn't stable. I suspect the current URL will only be
  stable for a few months, so maybe we need another strategy for
  hosting rootfs snapshots?

  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Message-Id: <20221118113309.1057790-1-alex.bennee@linaro.org>
  Signed-off-by: Thomas Huth <thuth@redhat.com>

* Revert "usbredir: avoid queuing hello packet on snapshot restore"

  Run state is also in RUN_STATE_PRELAUNCH while "-S" is used.

  This reverts commit 0631d4b448454ae8a1ab091c447e3f71ab6e088a

  Signed-off-by: Joelle van Dyne <j@getutm.app>
  Reviewed-by: Ján Tomko <jtomko@redhat.com>

  The original commit broke the usage of usbredir with libvirt, which
  starts every domain with "-S". This workaround is no longer needed
  because the usbredir behavior has been fixed in the meantime:
  https://gitlab.freedesktop.org/spice/usbredir/-/merge_requests/61

  Signed-off-by: Ján Tomko <jtomko@redhat.com>
  Message-Id: <1689cec3eadcea87255e390cb236033aca72e168.1669193161.git.jtomko@redhat.com>
  Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>

* gtk: disable GTK Clipboard with a new meson option

  The GTK Clipboard implementation may cause guest hangs.
  Therefore implement a new configure switch: --enable-gtk-clipboard,
  as a meson option disabled by default, which warns in the help text
  about the experimental nature of the feature. Regenerate the meson
  build options to include it.

  The initialization of the clipboard in gtk.c, as well as the
  compilation of gtk-clipboard.c, are now conditional on this new
  option being set.

  Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1150
  Signed-off-by: Claudio Fontana <cfontana@suse.de>
  Acked-by: Gerd Hoffmann <kraxel@redhat.com>
  Reviewed-by: Jim Fehlig <jfehlig@suse.com>
  Message-Id: <20221121135538.14625-1-cfontana@suse.de>
  Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>

* hw/usb/hcd-xhci.c: spelling: tranfer

  Fixes: effaf5a240e03020f4ae953e10b764622c3e87cc
  Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
  Reviewed-by: Thomas Huth <thuth@redhat.com>
  Reviewed-by: Stefan Weil <sw@weilnetz.de>
  Message-Id: <20221105114851.306206-1-mjt@msgid.tls.msk.ru>
  Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>

* ui/gtk: prevent ui lock up when dpy_gl_update called again before current draw event occurs

  A warning, "qemu: warning: console: no gl-unblock within", followed
  by a guest scanout lockup can happen if dpy_gl_update is called in a
  row and the second call is made before the gd_draw_event scheduled by
  the first call has taken place. This is because the draw call returns
  without decrementing the gl_block ref count if the dmabuf was already
  submitted, as shown below:

    (gd_gl_area_draw/gd_egl_draw)

    if (dmabuf) {
        if (!dmabuf->draw_submitted) {
            return;
        } else {
            dmabuf->draw_submitted = false;
        }
    }

  So it should not schedule any redundant draw event in case
  draw_submitted is already set in
  gd_egl_flush/gd_gl_area_scanout_flush.
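  The flush-side guard can be sketched with a small model (hypothetical
  names; the actual change is in the GTK UI flush handlers, and
  `DmabufState`/`gl_flush` here are illustrative only):

  ```c
  #include <stdbool.h>

  /* Toy model: schedule a draw event only when none is pending, so
   * back-to-back dpy_gl_update calls cannot queue a redundant draw
   * that would leave the gl_block reference count unbalanced. */
  typedef struct {
      bool draw_submitted;   /* a draw event is already queued */
      int scheduled_draws;   /* stands in for scheduled draw events */
  } DmabufState;

  static void gl_flush(DmabufState *d)
  {
      if (d->draw_submitted) {
          return;            /* skip the redundant draw event */
      }
      d->draw_submitted = true;
      d->scheduled_draws++;
  }
  ```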
  Cc: Gerd Hoffmann <kraxel@redhat.com>
  Cc: Vivek Kasireddy <vivek.kasireddy@intel.com>
  Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
  Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
  Message-Id: <20221021192315.9110-1-dongwon.kim@intel.com>
  Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>

* hw/usb/hcd-xhci: Reset the XHCIState with device_cold_reset()

  Currently the hcd-xhci-pci and hcd-xhci-sysbus devices are mostly
  wrappers around the TYPE_XHCI device, which is a direct subclass of
  TYPE_DEVICE. Since TYPE_DEVICE devices are not on any qbus and do not
  get automatically reset, the wrapper devices both reset the TYPE_XHCI
  device in their own reset functions. However, they do this using
  device_legacy_reset(), which will reset the device itself but not any
  bus it has. Switch to device_cold_reset(), which avoids using a
  deprecated function and also propagates reset along any child buses.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Message-Id: <20221014145423.2102706-1-peter.maydell@linaro.org>
  Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>

* hw/audio/intel-hda: don't reset codecs twice

  Currently the intel-hda device has a reset method which manually
  resets all the codecs by calling device_legacy_reset() on them. This
  means they get reset twice, once because child devices on a qbus get
  reset before the parent device's reset method is called, and then
  again because we're manually resetting them.

  Drop the manual reset call, and ensure that codecs are still reset
  when the guest does a reset via ICH6_GCTL_RESET by using
  device_cold_reset() (which resets all the devices on the qbus as well
  as the device itself) instead of a direct call to the reset function.

  This is a slight ordering change because the (only) codec reset now
  happens before the controller registers etc are reset, rather than
  once before and then once after, but the codec reset function
  hda_audio_reset() doesn't care.
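  The reset ordering can be sketched with a toy model (hypothetical
  types, not QEMU's qdev/qbus API): with device_cold_reset(), children
  on the qbus reset before the parent device, so dropping the manual
  call leaves each codec reset exactly once per controller reset.

  ```c
  /* Toy model of qbus reset propagation for intel-hda: children (the
   * codecs) reset first, then the parent (the controller). */
  typedef struct {
      int codec_resets;
      int controller_resets;
  } HdaModel;

  static void hda_cold_reset(HdaModel *m)
  {
      m->codec_resets++;      /* codecs on the qbus reset first... */
      m->controller_resets++; /* ...then the controller itself */
  }
  ```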
  This lets us drop a use of device_legacy_reset(), which is
  deprecated.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Message-Id: <20221014142632.2092404-2-peter.maydell@linaro.org>
  Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>

* hw/audio/intel-hda: Drop unnecessary prototype

  The only use of intel_hda_reset() is after its definition, so we
  don't need to separately declare its prototype at the top of the
  file; drop the unnecessary line.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Message-Id: <20221014142632.2092404-3-peter.maydell@linaro.org>
  Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>

* add syx snapshot extras

* it compiles!

* virtiofsd: Add `sigreturn` to the seccomp whitelist

  The virtiofsd currently crashes on s390x. This is because of a
  `sigreturn` system call. See audit log below:

    type=SECCOMP msg=audit(1669382477.611:459): auid=4294967295 uid=0
    gid=0 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023
    pid=6649 comm="virtiofsd" exe="/usr/libexec/virtiofsd" sig=31
    arch=80000016 syscall=119 compat=0 ip=0x3fff15f748a
    code=0x80000000AUID="unset" UID="root" GID="root" ARCH=s390x
    SYSCALL=sigreturn

  Signed-off-by: Marc Hartmayer <mhartmay@linux.ibm.com>
  Reviewed-by: German Maglione <gmaglione@redhat.com>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Message-Id: <20221125143946.27717-1-mhartmay@linux.ibm.com>

* libvhost-user: Fix wrong type of argument to formatting function

  (reported by LGTM)

  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Stefan Weil <sw@weilnetz.de>
  Message-Id: <20220422070144.1043697-2-sw@weilnetz.de>
  Signed-off-by: Laurent Vivier <laurent@vivier.eu>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Message-Id: <20221126152507.283271-2-sw@weilnetz.de>

* libvhost-user: Fix format strings

  Signed-off-by: Stefan Weil <sw@weilnetz.de>
  Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
  Message-Id: <20220422070144.1043697-3-sw@weilnetz.de>
  Signed-off-by: Laurent Vivier <laurent@vivier.eu>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Message-Id: <20221126152507.283271-3-sw@weilnetz.de>

* libvhost-user: Fix two more format strings

  This fix is required for 32 bit hosts. The bug was detected by CI for
  arm-linux, but is also relevant for i386-linux.

  Reported-by: Stefan Hajnoczi <stefanha@gmail.com>
  Signed-off-by: Stefan Weil <sw@weilnetz.de>
  Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Message-Id: <20221126152507.283271-4-sw@weilnetz.de>

* libvhost-user: Add format attribute to local function vu_panic

  Signed-off-by: Stefan Weil <sw@weilnetz.de>
  Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
  Message-Id: <20220422070144.1043697-4-sw@weilnetz.de>
  Signed-off-by: Laurent Vivier <laurent@vivier.eu>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Message-Id: <20221126152507.283271-5-sw@weilnetz.de>

* MAINTAINERS: Add subprojects/libvhost-user to section "vhost"

  Signed-off-by: Stefan Weil <sw@weilnetz.de>
  [Michael agreed to act as maintainer for libvhost-user via email in
  https://lore.kernel.org/qemu-devel/20221123015218-mutt-send-email-mst@kernel.org/.
  --Stefan]
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Message-Id: <20221126152507.283271-6-sw@weilnetz.de>

* Add G_GNUC_PRINTF to function qemu_set_info_str and fix related issues

  With the G_GNUC_PRINTF function attribute the compiler detects two
  potential insecure format strings:

    ../../../net/stream.c:248:31: warning: format string is not a string literal (potentially insecure) [-Wformat-security]
        qemu_set_info_str(&s->nc, uri);
                                  ^~~
    ../../../net/stream.c:322:31: warning: format string is not a string literal (potentially insecure) [-Wformat-security]
        qemu_set_info_str(&s->nc, uri);
                                  ^~~

  There are also two other warnings:

    ../../../net/socket.c:182:35: warning: zero-length gnu_printf format string [-Wformat-zero-length]
      182 |     qemu_set_info_str(&s->nc, "");
          |                               ^~
    ../../../net/stream.c:170:35: warning: zero-length gnu_printf format string [-Wformat-zero-length]
      170 |     qemu_set_info_str(&s->nc, "");

  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Signed-off-by: Stefan Weil <sw@weilnetz.de>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Message-Id: <20221126152507.283271-7-sw@weilnetz.de>

* del ramfile

* update seabios source from 1.16.0 to 1.16.1

  git shortlog rel-1.16.0..rel-1.16.1
  ===================================

  Gerd Hoffmann (3):
        malloc: use variable for ZoneHigh size
        malloc: use large ZoneHigh when there is enough memory
        virtio-blk: use larger default request size

  Igor Mammedov (1):
        acpi: parse Alias object

  Volker Rümelin (2):
        pci: refactor the pci_config_*() functions
        reset: force standard PCI configuration access

  Xiaofei Lee (1):
        virtio-blk: Fix incorrect type conversion in virtio_blk_op()

  Xuan Zhuo (2):
        virtio-mmio: read/write the hi 32 features for mmio
        virtio: finalize features before using device

  Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>

* update seabios binaries to 1.16.1

  Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>

* fix for non i386 archs

* replay: Fix declaration of replay_read_next_clock

  Fixes the build with gcc 13:

    replay/replay-time.c:34:6: error: conflicting types for \
        'replay_read_next_clock' due to enum/integer mismatch; \
        have 'void(ReplayClockKind)' [-Werror=enum-int-mismatch]
       34 | void replay_read_next_clock(ReplayClockKind kind)
          |      ^~~~~~~~~~~~~~~~~~~~~~
    In file included from ../qemu/replay/replay-time.c:14:
    replay/replay-internal.h:139:6: note: previous declaration of \
        'replay_read_next_clock' with type 'void(unsigned int)'
      139 | void replay_read_next_clock(unsigned int kind);
          |      ^~~~~~~~~~~~~~~~~~~~~~

  Fixes: 8eda206e090 ("replay: recording and replaying clock ticks")
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
  Reviewed-by: Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Message-Id: <20221129010547.284051-1-richard.henderson@linaro.org>

* hw/display/qxl: Have qxl_log_command Return early if no log_cmd handler

  Only 3 command types are logged: no need to call qxl_phys2virt() for
  the other types. Using different cases will help to pass different
  structure sizes to qxl_phys2virt() in a pair of commits.

  Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
  Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Message-Id: <20221128202741.4945-2-philmd@linaro.org>

* hw/display/qxl: Document qxl_phys2virt()

  Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
  Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Message-Id: <20221128202741.4945-3-philmd@linaro.org>

* hw/display/qxl: Pass requested buffer size to qxl_phys2virt()

  Currently qxl_phys2virt() doesn't check for buffer overrun. In order
  to do so in the next commit, pass the buffer size as argument.
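  The kind of bounds check this enables can be sketched as follows (an
  illustrative helper under assumed names, not the actual
  hw/display/qxl.c code):

  ```c
  #include <stdbool.h>
  #include <stdint.h>

  /* Sketch: with the requested buffer size passed in, the slot lookup
   * can reject any guest offset whose buffer would overrun the slot's
   * memory region. Comparing against slot_size - size (after checking
   * size <= slot_size) avoids unsigned wrap-around. */
  static bool qxl_offset_in_slot(uint64_t offset, uint64_t size,
                                 uint64_t slot_size)
  {
      if (size > slot_size) {
          return false;
      }
      return offset <= slot_size - size;
  }
  ```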
For QXLCursor in qxl_render_cursor() -> qxl_cursor() we verify the size of the chunked data ahead, checking we can access 'sizeof(QXLCursor) + chunk->data_size' bytes. Since in the SPICE_CURSOR_TYPE_MONO case the cursor is assumed to fit in one chunk, no change are required. In SPICE_CURSOR_TYPE_ALPHA the ahead read is handled in qxl_unpack_chunks(). Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Acked-by: Gerd Hoffmann <kraxel@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20221128202741.4945-4-philmd@linaro.org> * hw/display/qxl: Avoid buffer overrun in qxl_phys2virt (CVE-2022-4144) Have qxl_get_check_slot_offset() return false if the requested buffer size does not fit within the slot memory region. Similarly qxl_phys2virt() now returns NULL in such case, and qxl_dirty_one_surface() aborts. This avoids buffer overrun in the host pointer returned by memory_region_get_ram_ptr(). Fixes: CVE-2022-4144 (out-of-bounds read) Reported-by: Wenxu Yin (@awxylitol) Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1336 Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20221128202741.4945-5-philmd@linaro.org> * hw/display/qxl: Assert memory slot fits in preallocated MemoryRegion Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20221128202741.4945-6-philmd@linaro.org> * block-backend: avoid bdrv_unregister_buf() NULL pointer deref bdrv_*() APIs expect a valid BlockDriverState. Calling them with bs=NULL leads to undefined behavior. Jonathan Cameron reported this following NULL pointer dereference when a VM with a virtio-blk device and a memory-backend-file object is terminated: 1. qemu_cleanup() closes all drives, setting blk->root to NULL 2. qemu_cleanup() calls user_creatable_cleanup(), which results in a RAM block notifier callback because the memory-backend-file is destroyed. 3. 
blk_unregister_buf() is called by virtio-blk's BlockRamRegistrar notifier callback and undefined behavior occurs. Fixes: baf422684d73 ("virtio-blk: use BDRV_REQ_REGISTERED_BUF optimization hint") Co-authored-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20221121211923.1993171-1-stefanha@redhat.com> * target/arm: Set TCGCPUOps.restore_state_to_opc for v7m This setting got missed, breaking v7m. Fixes: 56c6c98df85c ("target/arm: Convert to tcg_ops restore_state_to_opc") Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1347 Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Evgeny Ermakov <evgeny.v.ermakov@gmail.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20221129204146.550394-1-richard.henderson@linaro.org> * Update VERSION for v7.2.0-rc3 Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> * hooks are now post mem access * tests/qtests: override "force-legacy" for gpio virtio-mmio tests The GPIO device is a VIRTIO_F_VERSION_1 devices but running with a legacy MMIO interface we miss out that feature bit causing confusion. For the GPIO test force the mmio bus to support non-legacy so we can properly test it. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1333 Message-Id: <20221130112439.2527228-2-alex.bennee@linaro.org> Acked-by: Thomas Huth <thuth@redhat.com> Signed-off-by: Michael S. 
Tsirkin <mst@redhat.com> * vhost: enable vrings in vhost_dev_start() for vhost-user devices Commit 02b61f38d3 ("hw/virtio: incorporate backend features in features") properly negotiates VHOST_USER_F_PROTOCOL_FEATURES with the vhost-user backend, but we forgot to enable vrings as specified in docs/interop/vhost-user.rst: If ``VHOST_USER_F_PROTOCOL_FEATURES`` has not been negotiated, the ring starts directly in the enabled state. If ``VHOST_USER_F_PROTOCOL_FEATURES`` has been negotiated, the ring is initialized in a disabled state and is enabled by ``VHOST_USER_SET_VRING_ENABLE`` with parameter 1. Some vhost-user front-ends already did this by calling vhost_ops.vhost_set_vring_enable() directly: - backends/cryptodev-vhost.c - hw/net/virtio-net.c - hw/virtio/vhost-user-gpio.c But most didn't do that, so we would leave the vrings disabled and some backends would not work. We observed this issue with the rust version of virtiofsd [1], which uses the event loop [2] provided by the vhost-user-backend crate where requests are not processed if vring is not enabled. Let's fix this issue by enabling the vrings in vhost_dev_start() for vhost-user front-ends that don't already do this directly. Same thing also in vhost_dev_stop() where we disable vrings. [1] https://gitlab.com/virtio-fs/virtiofsd [2] https://github.com/rust-vmm/vhost/blob/240fc2966/crates/vhost-user-backend/src/event_loop.rs#L217 Fixes: 02b61f38d3 ("hw/virtio: incorporate backend features in features") Reported-by: German Maglione <gmaglione@redhat.com> Tested-by: German Maglione <gmaglione@redhat.com> Signed-off-by: Stefano Garzarella <sgarzare@redhat.com> Acked-by: Raphael Norwitz <raphael.norwitz@nutanix.com> Message-Id: <20221123131630.52020-1-sgarzare@redhat.com> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Message-Id: <20221130112439.2527228-3-alex.bennee@linaro.org> Signed-off-by: Michael S. 
Tsirkin <mst@redhat.com> * hw/virtio: add started_vu status field to vhost-user-gpio As per the fix to vhost-user-blk in f5b22d06fb (vhost: recheck dev state in the vhost_migration_log routine) we really should track the connection and starting separately. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Message-Id: <20221130112439.2527228-4-alex.bennee@linaro.org> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> * hw/virtio: generalise CHR_EVENT_CLOSED handling ..and use for both virtio-user-blk and virtio-user-gpio. This avoids the circular close by deferring shutdown due to disconnection until a later point. virtio-user-blk already had this mechanism in place so generalise it as a vhost-user helper function and use for both blk and gpio devices. While we are at it we also fix up vhost-user-gpio to re-establish the event handler after close down so we can reconnect later. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Raphael Norwitz <raphael.norwitz@nutanix.com> Message-Id: <20221130112439.2527228-5-alex.bennee@linaro.org> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> * include/hw: VM state takes precedence in virtio_device_should_start The VM status should always preempt the device status for these checks. This ensures the device is in the correct state when we suspend the VM prior to migrations. This restores the checks to the order they were in before the refactoring moved things around. While we are at it let's improve our documentation of the various fields involved and document the two functions. Fixes: 9f6bcfd99f (hw/virtio: move vm_running check to virtio_device_started) Fixes: 259d69c00b (hw/virtio: introduce virtio_device_should_start) Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Tested-by: Christian Borntraeger <borntraeger@linux.ibm.com> Reviewed-by: Michael S.
Tsirkin <mst@redhat.com> Message-Id: <20221130112439.2527228-6-alex.bennee@linaro.org> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> * hw/nvme: fix aio cancel in format There are several bugs in the async cancel code for the Format command. Firstly, cancelling a format operation neglects to set iocb->ret as well as clearing the iocb->aiocb after cancelling the underlying aiocb, which causes the aio callback to ignore the cancellation. Trivial fix. Secondly, and worse, because the request is queued up for posting to the CQ in a bottom half, if the cancellation is due to the submission queue being deleted (which calls blk_aio_cancel), the req structure is deallocated in nvme_del_sq prior to the bottom half being scheduled. Fix this by simply removing the bottom half; there is no reason to defer it anyway. Fixes: 3bcf26d3d619 ("hw/nvme: reimplement format nvm to allow cancellation") Reported-by: Jonathan Derrick <jonathan.derrick@linux.dev> Reviewed-by: Keith Busch <kbusch@kernel.org> Signed-off-by: Klaus Jensen <k.jensen@samsung.com> * hw/nvme: fix aio cancel in flush Make sure that iocb->aiocb is NULL'ed when cancelling. Fix a potential use-after-free by removing the bottom half and enqueuing the completion directly. Fixes: 38f4ac65ac88 ("hw/nvme: reimplement flush to allow cancellation") Reviewed-by: Keith Busch <kbusch@kernel.org> Signed-off-by: Klaus Jensen <k.jensen@samsung.com> * hw/nvme: fix aio cancel in zone reset If the zone reset operation is cancelled but the block unmap operation completes normally, the callback will continue resetting the next zone since it neglects to check iocb->ret which will have been set to -ECANCELED. Make sure that this is checked and bail out if an error is present. Secondly, fix a potential use-after-free by removing the bottom half and enqueuing the completion directly.
Fixes: 63d96e4ffd71 ("hw/nvme: reimplement zone reset to allow cancellation") Reviewed-by: Keith Busch <kbusch@kernel.org> Signed-off-by: Klaus Jensen <k.jensen@samsung.com> * hw/nvme: fix aio cancel in dsm When the DSM operation is cancelled asynchronously, we set iocb->ret to -ECANCELED. However, the callback function only checks the return value of the completed aio, which may have completed successfully prior to the cancellation and thus the callback ends up continuing the dsm operation instead of bailing out. Fix this. Secondly, fix a potential use-after-free by removing the bottom half and enqueuing the completion directly. Fixes: d7d1474fd85d ("hw/nvme: reimplement dsm to allow cancellation") Reviewed-by: Keith Busch <kbusch@kernel.org> Signed-off-by: Klaus Jensen <k.jensen@samsung.com> * hw/nvme: remove copy bh scheduling Fix a potential use-after-free by removing the bottom half and enqueuing the completion directly. Fixes: 796d20681d9b ("hw/nvme: reimplement the copy command to allow aio cancellation") Reviewed-by: Keith Busch <kbusch@kernel.org> Signed-off-by: Klaus Jensen <k.jensen@samsung.com> * target/i386: allow MMX instructions with CR4.OSFXSR=0 MMX state is saved/restored by FSAVE/FRSTOR so the instructions are not illegal opcodes even if CR4.OSFXSR=0. Make sure that validate_vex takes into account the prefix and only checks HF_OSFXSR_MASK in the presence of an SSE instruction.
Fixes: 20581aadec5e ("target/i386: validate VEX prefixes via the instructions' exception classes", 2022-10-18) Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1350 Reported-by: Helge Konetzka (@hejko on gitlab.com) Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> * target/i386: Always completely initialize TranslateFault In get_physical_address, the canonical address check failed to set TranslateFault.stage2, which resulted in an uninitialized read from the struct when reporting the fault in x86_cpu_tlb_fill. Adjust all error paths to use structure assignment so that the entire struct is always initialized. Reported-by: Daniel Hoffman <dhoff749@gmail.com> Fixes: 9bbcf372193a ("target/i386: Reorg GET_HPHYS") Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20221201074522.178498-1-richard.henderson@linaro.org> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1324 Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> * hw/loongarch/virt: Add cfi01 pflash device Add cfi01 pflash device for LoongArch virt machine Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <20221130100647.398565-1-yangxiaojuan@loongson.cn> Signed-off-by: Song Gao <gaosong@loongson.cn> * Sync pc on breakpoints * tests/qtest/migration-test: Fix unlink error and memory leaks When running the migration test compiled with Clang from Fedora 37 and sanitizers enabled, there is an error complaining about unlink(): ../tests/qtest/migration-test.c:1072:12: runtime error: null pointer passed as argument 1, which is declared to never be null /usr/include/unistd.h:858:48: note: nonnull attribute specified here SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior ../tests/qtest/migration-test.c:1072:12 in (test program exited with status code 1) TAP parsing error: Too few tests run (expected 33, got 20) The data->clientcert and 
data->clientkey pointers can indeed be unset in some tests, so we have to check them before calling unlink() with those. While we're at it, I also noticed that the code is only freeing some but not all of the allocated strings in this function, and indeed, valgrind is also complaining about memory leaks here. So let's call g_free() on all allocated strings to avoid leaking memory here. Message-Id: <20221125083054.117504-1-thuth@redhat.com> Tested-by: Bin Meng <bmeng@tinylab.org> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com> * target/s390x/tcg: Fix and improve the SACF instruction The SET ADDRESS SPACE CONTROL FAST instruction is not privileged, it can be used from problem space, too. Just the switching to the home address space is privileged and should still generate a privilege exception. This bug is e.g. causing programs like Java that use the "getcpu" vdso kernel function to crash (see https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=990417#26 ). While we're at it, also check if DAT is not enabled. In that case the instruction is supposed to generate a special operation exception. 
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/655 Message-Id: <20221201184443.136355-1-thuth@redhat.com> Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com> Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Thomas Huth <thuth@redhat.com> * hw/display/next-fb: Fix comment typo Signed-off-by: Evgeny Ermakov <evgeny.v.ermakov@gmail.com> Message-Id: <20221125160849.23711-1-evgeny.v.ermakov@gmail.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Thomas Huth <thuth@redhat.com> * fix dev snapshots * working syx snaps * Revert "hw/loongarch/virt: Add cfi01 pflash device" This reverts commit 14dccc8ea6ece7ee63273144fb55e4770a05e0fd. Signed-off-by: Song Gao <gaosong@loongson.cn> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20221205113007.683505-1-gaosong@loongson.cn> * Update VERSION for v7.2.0-rc4 Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Stefan Weil <sw@weilnetz.de> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Stefano Garzarella <sgarzare@redhat.com> Signed-off-by: Michael S. 
Tsirkin <mst@redhat.com> Signed-off-by: Igor Mammedov <imammedo@redhat.com> Signed-off-by: Ani Sinha <ani@anisinha.ca> Signed-off-by: John Snow <jsnow@redhat.com> Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn> Signed-off-by: Song Gao <gaosong@loongson.cn> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Thomas Huth <thuth@redhat.com> Signed-off-by: Ján Tomko <jtomko@redhat.com> Signed-off-by: Gerd Hoffmann <kraxel@redhat.com> Signed-off-by: Claudio Fontana <cfontana@suse.de> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru> Signed-off-by: Dongwon Kim <dongwon.kim@intel.com> Signed-off-by: Marc Hartmayer <mhartmay@linux.ibm.com> Signed-off-by: Laurent Vivier <laurent@vivier.eu> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Evgeny Ermakov <evgeny.v.ermakov@gmail.com> Signed-off-by: Klaus Jensen <k.jensen@samsung.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Co-authored-by: Stefan Weil <sw@weilnetz.de> Co-authored-by: Cédric Le Goater <clg@kaod.org> Co-authored-by: Alex Bennée <alex.bennee@linaro.org> Co-authored-by: Peter Maydell <peter.maydell@linaro.org> Co-authored-by: Stefano Garzarella <sgarzare@redhat.com> Co-authored-by: Igor Mammedov <imammedo@redhat.com> Co-authored-by: Ani Sinha <ani@anisinha.ca> Co-authored-by: John Snow <jsnow@redhat.com> Co-authored-by: Michael S. 
Tsirkin <mst@redhat.com> Co-authored-by: Xiaojuan Yang <yangxiaojuan@loongson.cn> Co-authored-by: Stefan Hajnoczi <stefanha@redhat.com> Co-authored-by: Ard Biesheuvel <ardb@kernel.org> Co-authored-by: Thomas Huth <thuth@redhat.com> Co-authored-by: Joelle van Dyne <j@getutm.app> Co-authored-by: Claudio Fontana <cfontana@suse.de> Co-authored-by: Michael Tokarev <mjt@tls.msk.ru> Co-authored-by: Dongwon Kim <dongwon.kim@intel.com> Co-authored-by: Marc Hartmayer <mhartmay@linux.ibm.com> Co-authored-by: Stefan Weil via <qemu-devel@nongnu.org> Co-authored-by: Gerd Hoffmann <kraxel@redhat.com> Co-authored-by: Richard Henderson <richard.henderson@linaro.org> Co-authored-by: Philippe Mathieu-Daudé <philmd@linaro.org> Co-authored-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Co-authored-by: Evgeny Ermakov <evgeny.v.ermakov@gmail.com> Co-authored-by: Klaus Jensen <k.jensen@samsung.com> Co-authored-by: Paolo Bonzini <pbonzini@redhat.com> Co-authored-by: Song Gao <gaosong@loongson.cn>
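The vhost-user vring-enable fix described in the log above follows the spec rule that rings start disabled once ``VHOST_USER_F_PROTOCOL_FEATURES`` is negotiated, so the front-end must enable them at start. A miniature, self-contained model of that behaviour (the `mini_vhost_dev` struct and helpers are hypothetical stand-ins for QEMU's `vhost_dev` and `VhostOps`, not the real API):

```c
#include <stdbool.h>
#include <stddef.h>

#define MINI_NVQS 2

/* Hypothetical miniature of vhost_dev: protocol-feature negotiation is
 * reduced to a single flag, per-ring enable state to a bool array. */
struct mini_vhost_dev {
    bool protocol_features;          /* VHOST_USER_F_PROTOCOL_FEATURES? */
    bool vring_enabled[MINI_NVQS];
};

/* Models sending VHOST_USER_SET_VRING_ENABLE to the backend. */
static void mini_set_vring_enable(struct mini_vhost_dev *dev, bool enable)
{
    for (size_t i = 0; i < MINI_NVQS; i++) {
        dev->vring_enabled[i] = enable;
    }
}

/* Models vhost_dev_start(): rings start enabled unless protocol features
 * were negotiated, in which case they start disabled and the front-end
 * must enable them explicitly (the bug was skipping this last step). */
static void mini_dev_start(struct mini_vhost_dev *dev)
{
    for (size_t i = 0; i < MINI_NVQS; i++) {
        dev->vring_enabled[i] = !dev->protocol_features;
    }
    if (dev->protocol_features) {
        mini_set_vring_enable(dev, true);
    }
}

/* Models vhost_dev_stop(): the symmetric disable. */
static void mini_dev_stop(struct mini_vhost_dev *dev)
{
    if (dev->protocol_features) {
        mini_set_vring_enable(dev, false);
    }
}
```

With the explicit enable in place, a backend such as the rust virtiofsd sees its rings enabled regardless of which path negotiated the feature.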
1943 lines · 59 KiB · C
/*
 * vhost support
 *
 * Copyright Red Hat, Inc. 2010
 *
 * Authors:
 *  Michael S. Tsirkin <mst@redhat.com>
 *
 * This work is licensed under the terms of the GNU GPL, version 2.  See
 * the COPYING file in the top-level directory.
 *
 * Contributions after 2012-01-13 are licensed under the terms of the
 * GNU GPL, version 2 or (at your option) any later version.
 */

#include "qemu/osdep.h"
#include "qapi/error.h"
#include "hw/virtio/vhost.h"
#include "qemu/atomic.h"
#include "qemu/range.h"
#include "qemu/error-report.h"
#include "qemu/memfd.h"
#include "standard-headers/linux/vhost_types.h"
#include "hw/virtio/virtio-bus.h"
#include "hw/virtio/virtio-access.h"
#include "migration/blocker.h"
#include "migration/qemu-file-types.h"
#include "sysemu/dma.h"
#include "trace.h"

/* enabled until disconnected backend stabilizes */
#define _VHOST_DEBUG 1

#ifdef _VHOST_DEBUG
#define VHOST_OPS_DEBUG(retval, fmt, ...) \
    do { \
        error_report(fmt ": %s (%d)", ## __VA_ARGS__, \
                     strerror(-retval), -retval); \
    } while (0)
#else
#define VHOST_OPS_DEBUG(retval, fmt, ...) \
    do { } while (0)
#endif

static struct vhost_log *vhost_log;
static struct vhost_log *vhost_log_shm;

static unsigned int used_memslots;
static QLIST_HEAD(, vhost_dev) vhost_devices =
    QLIST_HEAD_INITIALIZER(vhost_devices);

bool vhost_has_free_slot(void)
{
    unsigned int slots_limit = ~0U;
    struct vhost_dev *hdev;

    QLIST_FOREACH(hdev, &vhost_devices, entry) {
        unsigned int r = hdev->vhost_ops->vhost_backend_memslots_limit(hdev);
        slots_limit = MIN(slots_limit, r);
    }
    return slots_limit > used_memslots;
}

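The slot check above takes the minimum of every registered backend's memslot limit and compares it with the global `used_memslots` count. A standalone sketch of that logic, with a plain array standing in for the `vhost_devices` list and the `vhost_backend_memslots_limit` callback:

```c
#include <stdbool.h>
#include <stddef.h>

/* Model of vhost_has_free_slot(): the effective limit is the strictest
 * backend limit; with no devices registered it stays at ~0U. */
static bool has_free_slot(const unsigned int *limits, size_t ndevs,
                          unsigned int used_memslots)
{
    unsigned int slots_limit = ~0U;

    for (size_t i = 0; i < ndevs; i++) {
        if (limits[i] < slots_limit) {
            slots_limit = limits[i];   /* MIN(slots_limit, r) */
        }
    }
    return slots_limit > used_memslots;
}
```

The strictest backend wins: a single vhost-kernel device with a small memslot limit caps the whole VM even if other backends allow more.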
static void vhost_dev_sync_region(struct vhost_dev *dev,
                                  MemoryRegionSection *section,
                                  uint64_t mfirst, uint64_t mlast,
                                  uint64_t rfirst, uint64_t rlast)
{
    vhost_log_chunk_t *log = dev->log->log;

    uint64_t start = MAX(mfirst, rfirst);
    uint64_t end = MIN(mlast, rlast);
    vhost_log_chunk_t *from = log + start / VHOST_LOG_CHUNK;
    vhost_log_chunk_t *to = log + end / VHOST_LOG_CHUNK + 1;
    uint64_t addr = QEMU_ALIGN_DOWN(start, VHOST_LOG_CHUNK);

    if (end < start) {
        return;
    }
    assert(end / VHOST_LOG_CHUNK < dev->log_size);
    assert(start / VHOST_LOG_CHUNK < dev->log_size);

    for (; from < to; ++from) {
        vhost_log_chunk_t log;
        /* We first check with non-atomic: much cheaper,
         * and we expect non-dirty to be the common case. */
        if (!*from) {
            addr += VHOST_LOG_CHUNK;
            continue;
        }
        /* Data must be read atomically. We don't really need barrier semantics
         * but it's easier to use atomic_* than roll our own. */
        log = qatomic_xchg(from, 0);
        while (log) {
            int bit = ctzl(log);
            hwaddr page_addr;
            hwaddr section_offset;
            hwaddr mr_offset;
            page_addr = addr + bit * VHOST_LOG_PAGE;
            section_offset = page_addr - section->offset_within_address_space;
            mr_offset = section_offset + section->offset_within_region;
            memory_region_set_dirty(section->mr, mr_offset, VHOST_LOG_PAGE);
            log &= ~(0x1ull << bit);
        }
        addr += VHOST_LOG_CHUNK;
    }
}

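The loop above walks the log in word-sized chunks: a cheap non-atomic zero test skips clean words, a dirty word is atomically exchanged for zero, and each set bit maps to one dirty page. A minimal single-threaded model of that scan (plain assignment stands in for `qatomic_xchg`, and collected page indices replace the `memory_region_set_dirty()` call):

```c
#include <stdint.h>
#include <stddef.h>

/* Scan a dirty bitmap stored as 64-bit words, clearing words as we go.
 * Returns the number of dirty pages found; the first max_pages page
 * indices are written to pages_out. */
static size_t scan_dirty_words(uint64_t *log, size_t nwords,
                               size_t *pages_out, size_t max_pages)
{
    size_t count = 0;

    for (size_t i = 0; i < nwords; i++) {
        uint64_t word;

        if (!log[i]) {               /* cheap non-atomic pre-check */
            continue;
        }
        word = log[i];
        log[i] = 0;                  /* models qatomic_xchg(from, 0) */
        while (word) {
            int bit = __builtin_ctzll(word);   /* models ctzl() */
            if (count < max_pages) {
                pages_out[count] = i * 64 + bit;
            }
            count++;
            word &= word - 1;        /* clear lowest set bit */
        }
    }
    return count;
}
```

In the real code the exchange must be atomic because the vhost backend writes the log concurrently; clearing the word as it is read is what makes each sync consume the dirty state exactly once.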
static int vhost_sync_dirty_bitmap(struct vhost_dev *dev,
                                   MemoryRegionSection *section,
                                   hwaddr first,
                                   hwaddr last)
{
    int i;
    hwaddr start_addr;
    hwaddr end_addr;

    if (!dev->log_enabled || !dev->started) {
        return 0;
    }
    start_addr = section->offset_within_address_space;
    end_addr = range_get_last(start_addr, int128_get64(section->size));
    start_addr = MAX(first, start_addr);
    end_addr = MIN(last, end_addr);

    for (i = 0; i < dev->mem->nregions; ++i) {
        struct vhost_memory_region *reg = dev->mem->regions + i;
        vhost_dev_sync_region(dev, section, start_addr, end_addr,
                              reg->guest_phys_addr,
                              range_get_last(reg->guest_phys_addr,
                                             reg->memory_size));
    }
    for (i = 0; i < dev->nvqs; ++i) {
        struct vhost_virtqueue *vq = dev->vqs + i;

        if (!vq->used_phys && !vq->used_size) {
            continue;
        }

        vhost_dev_sync_region(dev, section, start_addr, end_addr, vq->used_phys,
                              range_get_last(vq->used_phys, vq->used_size));
    }
    return 0;
}

static void vhost_log_sync(MemoryListener *listener,
                           MemoryRegionSection *section)
{
    struct vhost_dev *dev = container_of(listener, struct vhost_dev,
                                         memory_listener);
    vhost_sync_dirty_bitmap(dev, section, 0x0, ~0x0ULL);
}

static void vhost_log_sync_range(struct vhost_dev *dev,
                                 hwaddr first, hwaddr last)
{
    int i;
    /* FIXME: this is N^2 in number of sections */
    for (i = 0; i < dev->n_mem_sections; ++i) {
        MemoryRegionSection *section = &dev->mem_sections[i];
        vhost_sync_dirty_bitmap(dev, section, first, last);
    }
}

static uint64_t vhost_get_log_size(struct vhost_dev *dev)
{
    uint64_t log_size = 0;
    int i;
    for (i = 0; i < dev->mem->nregions; ++i) {
        struct vhost_memory_region *reg = dev->mem->regions + i;
        uint64_t last = range_get_last(reg->guest_phys_addr,
                                       reg->memory_size);
        log_size = MAX(log_size, last / VHOST_LOG_CHUNK + 1);
    }
    return log_size;
}

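`vhost_get_log_size()` sizes the log so that the highest guest-physical address of any memory region falls inside the last chunk. The arithmetic can be checked in isolation; the chunk size is a parameter here rather than QEMU's `VHOST_LOG_CHUNK` constant:

```c
#include <stdint.h>
#include <stddef.h>

struct mem_region {
    uint64_t gpa;       /* guest_phys_addr */
    uint64_t size;      /* memory_size, assumed non-zero */
};

/* Model of vhost_get_log_size(): the log must have enough chunks to
 * cover the last byte of every region. */
static uint64_t log_size_chunks(const struct mem_region *regs, size_t n,
                                uint64_t chunk)
{
    uint64_t log_size = 0;

    for (size_t i = 0; i < n; i++) {
        uint64_t last = regs[i].gpa + regs[i].size - 1;  /* range_get_last() */
        uint64_t need = last / chunk + 1;
        if (need > log_size) {
            log_size = need;                             /* MAX() */
        }
    }
    return log_size;
}
```

Only the region with the highest end address matters, which is why `vhost_commit()` below can grow the log before a table update and shrink it only afterwards.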
static int vhost_set_backend_type(struct vhost_dev *dev,
                                  VhostBackendType backend_type)
{
    int r = 0;

    switch (backend_type) {
#ifdef CONFIG_VHOST_KERNEL
    case VHOST_BACKEND_TYPE_KERNEL:
        dev->vhost_ops = &kernel_ops;
        break;
#endif
#ifdef CONFIG_VHOST_USER
    case VHOST_BACKEND_TYPE_USER:
        dev->vhost_ops = &user_ops;
        break;
#endif
#ifdef CONFIG_VHOST_VDPA
    case VHOST_BACKEND_TYPE_VDPA:
        dev->vhost_ops = &vdpa_ops;
        break;
#endif
    default:
        error_report("Unknown vhost backend type");
        r = -1;
    }

    return r;
}

static struct vhost_log *vhost_log_alloc(uint64_t size, bool share)
{
    Error *err = NULL;
    struct vhost_log *log;
    uint64_t logsize = size * sizeof(*(log->log));
    int fd = -1;

    log = g_new0(struct vhost_log, 1);
    if (share) {
        log->log = qemu_memfd_alloc("vhost-log", logsize,
                                    F_SEAL_GROW | F_SEAL_SHRINK | F_SEAL_SEAL,
                                    &fd, &err);
        if (err) {
            error_report_err(err);
            g_free(log);
            return NULL;
        }
        memset(log->log, 0, logsize);
    } else {
        log->log = g_malloc0(logsize);
    }

    log->size = size;
    log->refcnt = 1;
    log->fd = fd;

    return log;
}

static struct vhost_log *vhost_log_get(uint64_t size, bool share)
{
    struct vhost_log *log = share ? vhost_log_shm : vhost_log;

    if (!log || log->size != size) {
        log = vhost_log_alloc(size, share);
        if (share) {
            vhost_log_shm = log;
        } else {
            vhost_log = log;
        }
    } else {
        ++log->refcnt;
    }

    return log;
}

static void vhost_log_put(struct vhost_dev *dev, bool sync)
{
    struct vhost_log *log = dev->log;

    if (!log) {
        return;
    }

    --log->refcnt;
    if (log->refcnt == 0) {
        /* Sync only the range covered by the old log */
        if (dev->log_size && sync) {
            vhost_log_sync_range(dev, 0, dev->log_size * VHOST_LOG_CHUNK - 1);
        }

        if (vhost_log == log) {
            g_free(log->log);
            vhost_log = NULL;
        } else if (vhost_log_shm == log) {
            qemu_memfd_free(log->log, log->size * sizeof(*(log->log)),
                            log->fd);
            vhost_log_shm = NULL;
        }

        g_free(log);
    }

    dev->log = NULL;
    dev->log_size = 0;
}

static bool vhost_dev_log_is_shared(struct vhost_dev *dev)
{
    return dev->vhost_ops->vhost_requires_shm_log &&
           dev->vhost_ops->vhost_requires_shm_log(dev);
}

static inline void vhost_dev_log_resize(struct vhost_dev *dev, uint64_t size)
{
    struct vhost_log *log = vhost_log_get(size, vhost_dev_log_is_shared(dev));
    uint64_t log_base = (uintptr_t)log->log;
    int r;

    /* inform backend of log switching, this must be done before
       releasing the current log, to ensure no logging is lost */
    r = dev->vhost_ops->vhost_set_log_base(dev, log_base, log);
    if (r < 0) {
        VHOST_OPS_DEBUG(r, "vhost_set_log_base failed");
    }

    vhost_log_put(dev, true);
    dev->log = log;
    dev->log_size = size;
}

static bool vhost_dev_has_iommu(struct vhost_dev *dev)
{
    VirtIODevice *vdev = dev->vdev;

    /*
     * For vhost, VIRTIO_F_IOMMU_PLATFORM means the backend supports the
     * incremental memory mapping API via the IOTLB API. For platforms
     * that do not have an IOMMU, there is no need to enable this
     * feature, which may cause unnecessary IOTLB miss/update
     * transactions.
     */
    if (vdev) {
        return virtio_bus_device_iommu_enabled(vdev) &&
               virtio_host_has_feature(vdev, VIRTIO_F_IOMMU_PLATFORM);
    } else {
        return false;
    }
}

static void *vhost_memory_map(struct vhost_dev *dev, hwaddr addr,
                              hwaddr *plen, bool is_write)
{
    if (!vhost_dev_has_iommu(dev)) {
        return cpu_physical_memory_map(addr, plen, is_write);
    } else {
        return (void *)(uintptr_t)addr;
    }
}

static void vhost_memory_unmap(struct vhost_dev *dev, void *buffer,
                               hwaddr len, int is_write,
                               hwaddr access_len)
{
    if (!vhost_dev_has_iommu(dev)) {
        cpu_physical_memory_unmap(buffer, len, is_write, access_len);
    }
}

static int vhost_verify_ring_part_mapping(void *ring_hva,
                                          uint64_t ring_gpa,
                                          uint64_t ring_size,
                                          void *reg_hva,
                                          uint64_t reg_gpa,
                                          uint64_t reg_size)
{
    uint64_t hva_ring_offset;
    uint64_t ring_last = range_get_last(ring_gpa, ring_size);
    uint64_t reg_last = range_get_last(reg_gpa, reg_size);

    if (ring_last < reg_gpa || ring_gpa > reg_last) {
        return 0;
    }
    /* check that the whole ring is mapped */
    if (ring_last > reg_last) {
        return -ENOMEM;
    }
    /* check that ring's MemoryRegion wasn't replaced */
    hva_ring_offset = ring_gpa - reg_gpa;
    if (ring_hva != reg_hva + hva_ring_offset) {
        return -EBUSY;
    }

    return 0;
}

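`vhost_verify_ring_part_mapping()` distinguishes three outcomes: the ring and region do not overlap (fine, return 0), the ring spills past the end of the region (-ENOMEM), or the region was remapped so the ring's host address no longer matches (-EBUSY). The same checks restated over plain integers, for illustration:

```c
#include <errno.h>
#include <stdint.h>

/* 0 = disjoint or fully and consistently mapped;
 * -ENOMEM = ring extends past the region;
 * -EBUSY  = region's HVA moved out from under the ring. */
static int verify_ring_part(uintptr_t ring_hva, uint64_t ring_gpa,
                            uint64_t ring_size,
                            uintptr_t reg_hva, uint64_t reg_gpa,
                            uint64_t reg_size)
{
    uint64_t ring_last = ring_gpa + ring_size - 1;   /* range_get_last() */
    uint64_t reg_last = reg_gpa + reg_size - 1;

    if (ring_last < reg_gpa || ring_gpa > reg_last) {
        return 0;                                    /* no overlap */
    }
    if (ring_last > reg_last) {
        return -ENOMEM;                              /* ring not fully mapped */
    }
    if (ring_hva != reg_hva + (ring_gpa - reg_gpa)) {
        return -EBUSY;                               /* MemoryRegion replaced */
    }
    return 0;
}
```

The caller below runs this once per ring part (descriptor table, available ring, used ring) for every virtqueue against the changed region.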
static int vhost_verify_ring_mappings(struct vhost_dev *dev,
                                      void *reg_hva,
                                      uint64_t reg_gpa,
                                      uint64_t reg_size)
{
    int i, j;
    int r = 0;
    const char *part_name[] = {
        "descriptor table",
        "available ring",
        "used ring"
    };

    if (vhost_dev_has_iommu(dev)) {
        return 0;
    }

    for (i = 0; i < dev->nvqs; ++i) {
        struct vhost_virtqueue *vq = dev->vqs + i;

        if (vq->desc_phys == 0) {
            continue;
        }

        j = 0;
        r = vhost_verify_ring_part_mapping(
                vq->desc, vq->desc_phys, vq->desc_size,
                reg_hva, reg_gpa, reg_size);
        if (r) {
            break;
        }

        j++;
        r = vhost_verify_ring_part_mapping(
                vq->avail, vq->avail_phys, vq->avail_size,
                reg_hva, reg_gpa, reg_size);
        if (r) {
            break;
        }

        j++;
        r = vhost_verify_ring_part_mapping(
                vq->used, vq->used_phys, vq->used_size,
                reg_hva, reg_gpa, reg_size);
        if (r) {
            break;
        }
    }

    if (r == -ENOMEM) {
        error_report("Unable to map %s for ring %d", part_name[j], i);
    } else if (r == -EBUSY) {
        error_report("%s relocated for ring %d", part_name[j], i);
    }
    return r;
}

/*
 * vhost_section: identify sections needed for vhost access
 *
 * We only care about RAM sections here (where virtqueue and guest
 * internals accessed by virtio might live). If we find one we still
 * allow the backend to potentially filter it out of our list.
 */
static bool vhost_section(struct vhost_dev *dev, MemoryRegionSection *section)
{
    MemoryRegion *mr = section->mr;

    if (memory_region_is_ram(mr) && !memory_region_is_rom(mr)) {
        uint8_t dirty_mask = memory_region_get_dirty_log_mask(mr);
        uint8_t handled_dirty;

        /*
         * Kernel based vhost doesn't handle any block which is doing
         * dirty-tracking other than migration for which it has
         * specific logging support. However for TCG the kernel never
         * gets involved anyway so we can also ignore its
         * self-modifying code detection flags. However a vhost-user
         * client could still confuse a TCG guest if it re-writes
         * executable memory that has already been translated.
         */
        handled_dirty = (1 << DIRTY_MEMORY_MIGRATION) |
                        (1 << DIRTY_MEMORY_CODE);

        if (dirty_mask & ~handled_dirty) {
            trace_vhost_reject_section(mr->name, 1);
            return false;
        }

        if (dev->vhost_ops->vhost_backend_mem_section_filter &&
            !dev->vhost_ops->vhost_backend_mem_section_filter(dev, section)) {
            trace_vhost_reject_section(mr->name, 2);
            return false;
        }

        trace_vhost_section(mr->name);
        return true;
    } else {
        trace_vhost_reject_section(mr->name, 3);
        return false;
    }
}

static void vhost_begin(MemoryListener *listener)
{
    struct vhost_dev *dev = container_of(listener, struct vhost_dev,
                                         memory_listener);
    dev->tmp_sections = NULL;
    dev->n_tmp_sections = 0;
}

static void vhost_commit(MemoryListener *listener)
{
    struct vhost_dev *dev = container_of(listener, struct vhost_dev,
                                         memory_listener);
    MemoryRegionSection *old_sections;
    int n_old_sections;
    uint64_t log_size;
    size_t regions_size;
    int r;
    int i;
    bool changed = false;

    /* Note we can be called before the device is started, but then
     * starting the device calls set_mem_table, so we need to have
     * built the data structures.
     */
    old_sections = dev->mem_sections;
    n_old_sections = dev->n_mem_sections;
    dev->mem_sections = dev->tmp_sections;
    dev->n_mem_sections = dev->n_tmp_sections;

    if (dev->n_mem_sections != n_old_sections) {
        changed = true;
    } else {
        /* Same size, let's check the contents */
        for (int i = 0; i < n_old_sections; i++) {
            if (!MemoryRegionSection_eq(&old_sections[i],
                                        &dev->mem_sections[i])) {
                changed = true;
                break;
            }
        }
    }

    trace_vhost_commit(dev->started, changed);
    if (!changed) {
        goto out;
    }

    /* Rebuild the regions list from the new sections list */
    regions_size = offsetof(struct vhost_memory, regions) +
                   dev->n_mem_sections * sizeof dev->mem->regions[0];
    dev->mem = g_realloc(dev->mem, regions_size);
    dev->mem->nregions = dev->n_mem_sections;
    used_memslots = dev->mem->nregions;
    for (i = 0; i < dev->n_mem_sections; i++) {
        struct vhost_memory_region *cur_vmr = dev->mem->regions + i;
        struct MemoryRegionSection *mrs = dev->mem_sections + i;

        cur_vmr->guest_phys_addr = mrs->offset_within_address_space;
        cur_vmr->memory_size = int128_get64(mrs->size);
        cur_vmr->userspace_addr =
            (uintptr_t)memory_region_get_ram_ptr(mrs->mr) +
            mrs->offset_within_region;
        cur_vmr->flags_padding = 0;
    }

    if (!dev->started) {
        goto out;
    }

    for (i = 0; i < dev->mem->nregions; i++) {
        if (vhost_verify_ring_mappings(dev,
                       (void *)(uintptr_t)dev->mem->regions[i].userspace_addr,
                       dev->mem->regions[i].guest_phys_addr,
                       dev->mem->regions[i].memory_size)) {
            error_report("Verify ring failure on region %d", i);
            abort();
        }
    }

    if (!dev->log_enabled) {
        r = dev->vhost_ops->vhost_set_mem_table(dev, dev->mem);
        if (r < 0) {
            VHOST_OPS_DEBUG(r, "vhost_set_mem_table failed");
        }
        goto out;
    }
    log_size = vhost_get_log_size(dev);
    /* We allocate an extra 4K bytes to log,
     * to reduce the number of reallocations. */
#define VHOST_LOG_BUFFER (0x1000 / sizeof *dev->log)
    /* To log more, must increase log size before table update. */
    if (dev->log_size < log_size) {
        vhost_dev_log_resize(dev, log_size + VHOST_LOG_BUFFER);
    }
    r = dev->vhost_ops->vhost_set_mem_table(dev, dev->mem);
    if (r < 0) {
        VHOST_OPS_DEBUG(r, "vhost_set_mem_table failed");
    }
    /* To log less, can only decrease log size after table update. */
    if (dev->log_size > log_size + VHOST_LOG_BUFFER) {
        vhost_dev_log_resize(dev, log_size);
    }

out:
    /* Deref the old list of sections, this must happen _after_ the
     * vhost_set_mem_table to ensure the client isn't still using the
     * section we're about to unref.
     */
    while (n_old_sections--) {
        memory_region_unref(old_sections[n_old_sections].mr);
    }
    g_free(old_sections);
    return;
}

/* Adds the section data to the tmp_section structure.
 * It relies on the listener calling us in memory address order
 * and for each region (via the _add and _nop methods) to
 * join neighbours.
 */
static void vhost_region_add_section(struct vhost_dev *dev,
                                     MemoryRegionSection *section)
{
    bool need_add = true;
    uint64_t mrs_size = int128_get64(section->size);
    uint64_t mrs_gpa = section->offset_within_address_space;
    uintptr_t mrs_host = (uintptr_t)memory_region_get_ram_ptr(section->mr) +
                         section->offset_within_region;
    RAMBlock *mrs_rb = section->mr->ram_block;

    trace_vhost_region_add_section(section->mr->name, mrs_gpa, mrs_size,
                                   mrs_host);

    if (dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_USER) {
        /* Round the section to its page size */
        /* First align the start down to a page boundary */
        size_t mrs_page = qemu_ram_pagesize(mrs_rb);
        uint64_t alignage = mrs_host & (mrs_page - 1);
        if (alignage) {
            mrs_host -= alignage;
            mrs_size += alignage;
            mrs_gpa -= alignage;
        }
        /* Now align the size up to a page boundary */
        alignage = mrs_size & (mrs_page - 1);
        if (alignage) {
            mrs_size += mrs_page - alignage;
        }
        trace_vhost_region_add_section_aligned(section->mr->name, mrs_gpa,
                                               mrs_size, mrs_host);
    }

    if (dev->n_tmp_sections) {
        /* Since we already have at least one section, let's see if
         * this extends it; since we're scanning in order, we only
         * have to look at the last one, and the FlatView that calls
         * us shouldn't have overlaps.
         */
        MemoryRegionSection *prev_sec = dev->tmp_sections +
                                        (dev->n_tmp_sections - 1);
        uint64_t prev_gpa_start = prev_sec->offset_within_address_space;
        uint64_t prev_size = int128_get64(prev_sec->size);
        uint64_t prev_gpa_end = range_get_last(prev_gpa_start, prev_size);
        uint64_t prev_host_start =
            (uintptr_t)memory_region_get_ram_ptr(prev_sec->mr) +
            prev_sec->offset_within_region;
        uint64_t prev_host_end = range_get_last(prev_host_start, prev_size);

        if (mrs_gpa <= (prev_gpa_end + 1)) {
            /* OK, looks like overlapping/intersecting - it's possible that
             * the rounding to page sizes has made them overlap, but they should
             * match up in the same RAMBlock if they do.
             */
            if (mrs_gpa < prev_gpa_start) {
                error_report("%s:Section '%s' rounded to %"PRIx64
                             " prior to previous '%s' %"PRIx64,
                             __func__, section->mr->name, mrs_gpa,
                             prev_sec->mr->name, prev_gpa_start);
                /* A way to cleanly fail here would be better */
                return;
            }
            /* Offset from the start of the previous GPA to this GPA */
            size_t offset = mrs_gpa - prev_gpa_start;

            if (prev_host_start + offset == mrs_host &&
                section->mr == prev_sec->mr &&
                (!dev->vhost_ops->vhost_backend_can_merge ||
                 dev->vhost_ops->vhost_backend_can_merge(dev,
                     mrs_host, mrs_size,
                     prev_host_start, prev_size))) {
                uint64_t max_end = MAX(prev_host_end, mrs_host + mrs_size);
                need_add = false;
                prev_sec->offset_within_address_space =
                    MIN(prev_gpa_start, mrs_gpa);
                prev_sec->offset_within_region =
                    MIN(prev_host_start, mrs_host) -
                    (uintptr_t)memory_region_get_ram_ptr(prev_sec->mr);
                prev_sec->size = int128_make64(max_end - MIN(prev_host_start,
|
|
mrs_host));
|
|
trace_vhost_region_add_section_merge(section->mr->name,
|
|
int128_get64(prev_sec->size),
|
|
prev_sec->offset_within_address_space,
|
|
prev_sec->offset_within_region);
|
|
} else {
|
|
/* adjoining regions are fine, but overlapping ones with
|
|
* different blocks/offsets shouldn't happen
|
|
*/
|
|
if (mrs_gpa != prev_gpa_end + 1) {
|
|
error_report("%s: Overlapping but not coherent sections "
|
|
"at %"PRIx64,
|
|
__func__, mrs_gpa);
|
|
return;
|
|
}
|
|
}
|
|
}
|
|
}
|
|
|
|
if (need_add) {
|
|
++dev->n_tmp_sections;
|
|
dev->tmp_sections = g_renew(MemoryRegionSection, dev->tmp_sections,
|
|
dev->n_tmp_sections);
|
|
dev->tmp_sections[dev->n_tmp_sections - 1] = *section;
|
|
/* The flatview isn't stable and we don't use it, making it NULL
|
|
* means we can memcmp the list.
|
|
*/
|
|
dev->tmp_sections[dev->n_tmp_sections - 1].fv = NULL;
|
|
memory_region_ref(section->mr);
|
|
}
|
|
}
|
|
|
|
/* Used for both add and nop callbacks */
|
|
static void vhost_region_addnop(MemoryListener *listener,
|
|
MemoryRegionSection *section)
|
|
{
|
|
struct vhost_dev *dev = container_of(listener, struct vhost_dev,
|
|
memory_listener);
|
|
|
|
if (!vhost_section(dev, section)) {
|
|
return;
|
|
}
|
|
vhost_region_add_section(dev, section);
|
|
}
|
|
|
|
static void vhost_iommu_unmap_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
|
|
{
|
|
struct vhost_iommu *iommu = container_of(n, struct vhost_iommu, n);
|
|
struct vhost_dev *hdev = iommu->hdev;
|
|
hwaddr iova = iotlb->iova + iommu->iommu_offset;
|
|
|
|
if (vhost_backend_invalidate_device_iotlb(hdev, iova,
|
|
iotlb->addr_mask + 1)) {
|
|
error_report("Fail to invalidate device iotlb");
|
|
}
|
|
}
|
|
|
|
static void vhost_iommu_region_add(MemoryListener *listener,
|
|
MemoryRegionSection *section)
|
|
{
|
|
struct vhost_dev *dev = container_of(listener, struct vhost_dev,
|
|
iommu_listener);
|
|
struct vhost_iommu *iommu;
|
|
Int128 end;
|
|
int iommu_idx;
|
|
IOMMUMemoryRegion *iommu_mr;
|
|
int ret;
|
|
|
|
if (!memory_region_is_iommu(section->mr)) {
|
|
return;
|
|
}
|
|
|
|
iommu_mr = IOMMU_MEMORY_REGION(section->mr);
|
|
|
|
iommu = g_malloc0(sizeof(*iommu));
|
|
end = int128_add(int128_make64(section->offset_within_region),
|
|
section->size);
|
|
end = int128_sub(end, int128_one());
|
|
iommu_idx = memory_region_iommu_attrs_to_index(iommu_mr,
|
|
MEMTXATTRS_UNSPECIFIED);
|
|
iommu_notifier_init(&iommu->n, vhost_iommu_unmap_notify,
|
|
IOMMU_NOTIFIER_DEVIOTLB_UNMAP,
|
|
section->offset_within_region,
|
|
int128_get64(end),
|
|
iommu_idx);
|
|
iommu->mr = section->mr;
|
|
iommu->iommu_offset = section->offset_within_address_space -
|
|
section->offset_within_region;
|
|
iommu->hdev = dev;
|
|
ret = memory_region_register_iommu_notifier(section->mr, &iommu->n, NULL);
|
|
if (ret) {
|
|
/*
|
|
* Some vIOMMUs do not support dev-iotlb yet. If so, try to use the
|
|
* UNMAP legacy message
|
|
*/
|
|
iommu->n.notifier_flags = IOMMU_NOTIFIER_UNMAP;
|
|
memory_region_register_iommu_notifier(section->mr, &iommu->n,
|
|
&error_fatal);
|
|
}
|
|
QLIST_INSERT_HEAD(&dev->iommu_list, iommu, iommu_next);
|
|
/* TODO: can replay help performance here? */
|
|
}
|
|
|
|
static void vhost_iommu_region_del(MemoryListener *listener,
|
|
MemoryRegionSection *section)
|
|
{
|
|
struct vhost_dev *dev = container_of(listener, struct vhost_dev,
|
|
iommu_listener);
|
|
struct vhost_iommu *iommu;
|
|
|
|
if (!memory_region_is_iommu(section->mr)) {
|
|
return;
|
|
}
|
|
|
|
QLIST_FOREACH(iommu, &dev->iommu_list, iommu_next) {
|
|
if (iommu->mr == section->mr &&
|
|
iommu->n.start == section->offset_within_region) {
|
|
memory_region_unregister_iommu_notifier(iommu->mr,
|
|
&iommu->n);
|
|
QLIST_REMOVE(iommu, iommu_next);
|
|
g_free(iommu);
|
|
break;
|
|
}
|
|
}
|
|
}
|
|
|
|
static int vhost_virtqueue_set_addr(struct vhost_dev *dev,
|
|
struct vhost_virtqueue *vq,
|
|
unsigned idx, bool enable_log)
|
|
{
|
|
struct vhost_vring_addr addr;
|
|
int r;
|
|
memset(&addr, 0, sizeof(struct vhost_vring_addr));
|
|
|
|
if (dev->vhost_ops->vhost_vq_get_addr) {
|
|
r = dev->vhost_ops->vhost_vq_get_addr(dev, &addr, vq);
|
|
if (r < 0) {
|
|
VHOST_OPS_DEBUG(r, "vhost_vq_get_addr failed");
|
|
return r;
|
|
}
|
|
} else {
|
|
addr.desc_user_addr = (uint64_t)(unsigned long)vq->desc;
|
|
addr.avail_user_addr = (uint64_t)(unsigned long)vq->avail;
|
|
addr.used_user_addr = (uint64_t)(unsigned long)vq->used;
|
|
}
|
|
addr.index = idx;
|
|
addr.log_guest_addr = vq->used_phys;
|
|
addr.flags = enable_log ? (1 << VHOST_VRING_F_LOG) : 0;
|
|
r = dev->vhost_ops->vhost_set_vring_addr(dev, &addr);
|
|
if (r < 0) {
|
|
VHOST_OPS_DEBUG(r, "vhost_set_vring_addr failed");
|
|
}
|
|
return r;
|
|
}
|
|
|
|
static int vhost_dev_set_features(struct vhost_dev *dev,
|
|
bool enable_log)
|
|
{
|
|
uint64_t features = dev->acked_features;
|
|
int r;
|
|
if (enable_log) {
|
|
features |= 0x1ULL << VHOST_F_LOG_ALL;
|
|
}
|
|
if (!vhost_dev_has_iommu(dev)) {
|
|
features &= ~(0x1ULL << VIRTIO_F_IOMMU_PLATFORM);
|
|
}
|
|
if (dev->vhost_ops->vhost_force_iommu) {
|
|
if (dev->vhost_ops->vhost_force_iommu(dev) == true) {
|
|
features |= 0x1ULL << VIRTIO_F_IOMMU_PLATFORM;
|
|
}
|
|
}
|
|
r = dev->vhost_ops->vhost_set_features(dev, features);
|
|
if (r < 0) {
|
|
VHOST_OPS_DEBUG(r, "vhost_set_features failed");
|
|
goto out;
|
|
}
|
|
if (dev->vhost_ops->vhost_set_backend_cap) {
|
|
r = dev->vhost_ops->vhost_set_backend_cap(dev);
|
|
if (r < 0) {
|
|
VHOST_OPS_DEBUG(r, "vhost_set_backend_cap failed");
|
|
goto out;
|
|
}
|
|
}
|
|
|
|
out:
|
|
return r;
|
|
}
|
|
|
|
static int vhost_dev_set_log(struct vhost_dev *dev, bool enable_log)
|
|
{
|
|
int r, i, idx;
|
|
hwaddr addr;
|
|
|
|
r = vhost_dev_set_features(dev, enable_log);
|
|
if (r < 0) {
|
|
goto err_features;
|
|
}
|
|
for (i = 0; i < dev->nvqs; ++i) {
|
|
idx = dev->vhost_ops->vhost_get_vq_index(dev, dev->vq_index + i);
|
|
addr = virtio_queue_get_desc_addr(dev->vdev, idx);
|
|
if (!addr) {
|
|
/*
|
|
* The queue might not be ready for start. If this
|
|
* is the case there is no reason to continue the process.
|
|
* The similar logic is used by the vhost_virtqueue_start()
|
|
* routine.
|
|
*/
|
|
continue;
|
|
}
|
|
r = vhost_virtqueue_set_addr(dev, dev->vqs + i, idx,
|
|
enable_log);
|
|
if (r < 0) {
|
|
goto err_vq;
|
|
}
|
|
}
|
|
return 0;
|
|
err_vq:
|
|
for (; i >= 0; --i) {
|
|
idx = dev->vhost_ops->vhost_get_vq_index(dev, dev->vq_index + i);
|
|
addr = virtio_queue_get_desc_addr(dev->vdev, idx);
|
|
if (!addr) {
|
|
continue;
|
|
}
|
|
vhost_virtqueue_set_addr(dev, dev->vqs + i, idx,
|
|
dev->log_enabled);
|
|
}
|
|
vhost_dev_set_features(dev, dev->log_enabled);
|
|
err_features:
|
|
return r;
|
|
}
|
|
|
|
static int vhost_migration_log(MemoryListener *listener, bool enable)
|
|
{
|
|
struct vhost_dev *dev = container_of(listener, struct vhost_dev,
|
|
memory_listener);
|
|
int r;
|
|
if (enable == dev->log_enabled) {
|
|
return 0;
|
|
}
|
|
if (!dev->started) {
|
|
dev->log_enabled = enable;
|
|
return 0;
|
|
}
|
|
|
|
r = 0;
|
|
if (!enable) {
|
|
r = vhost_dev_set_log(dev, false);
|
|
if (r < 0) {
|
|
goto check_dev_state;
|
|
}
|
|
vhost_log_put(dev, false);
|
|
} else {
|
|
vhost_dev_log_resize(dev, vhost_get_log_size(dev));
|
|
r = vhost_dev_set_log(dev, true);
|
|
if (r < 0) {
|
|
goto check_dev_state;
|
|
}
|
|
}
|
|
|
|
check_dev_state:
|
|
dev->log_enabled = enable;
|
|
/*
|
|
* vhost-user-* devices could change their state during log
|
|
* initialization due to disconnect. So check dev state after
|
|
* vhost communication.
|
|
*/
|
|
if (!dev->started) {
|
|
/*
|
|
* Since device is in the stopped state, it is okay for
|
|
* migration. Return success.
|
|
*/
|
|
r = 0;
|
|
}
|
|
if (r) {
|
|
/* An error occurred. */
|
|
dev->log_enabled = false;
|
|
}
|
|
|
|
return r;
|
|
}
|
|
|
|
static void vhost_log_global_start(MemoryListener *listener)
|
|
{
|
|
int r;
|
|
|
|
r = vhost_migration_log(listener, true);
|
|
if (r < 0) {
|
|
abort();
|
|
}
|
|
}
|
|
|
|
static void vhost_log_global_stop(MemoryListener *listener)
|
|
{
|
|
int r;
|
|
|
|
r = vhost_migration_log(listener, false);
|
|
if (r < 0) {
|
|
abort();
|
|
}
|
|
}
|
|
|
|
static void vhost_log_start(MemoryListener *listener,
|
|
MemoryRegionSection *section,
|
|
int old, int new)
|
|
{
|
|
/* FIXME: implement */
|
|
}
|
|
|
|
static void vhost_log_stop(MemoryListener *listener,
|
|
MemoryRegionSection *section,
|
|
int old, int new)
|
|
{
|
|
/* FIXME: implement */
|
|
}
|
|
|
|
/* The vhost driver natively knows how to handle the vrings of non
|
|
* cross-endian legacy devices and modern devices. Only legacy devices
|
|
* exposed to a bi-endian guest may require the vhost driver to use a
|
|
* specific endianness.
|
|
*/
|
|
static inline bool vhost_needs_vring_endian(VirtIODevice *vdev)
|
|
{
|
|
if (virtio_vdev_has_feature(vdev, VIRTIO_F_VERSION_1)) {
|
|
return false;
|
|
}
|
|
#if HOST_BIG_ENDIAN
|
|
return vdev->device_endian == VIRTIO_DEVICE_ENDIAN_LITTLE;
|
|
#else
|
|
return vdev->device_endian == VIRTIO_DEVICE_ENDIAN_BIG;
|
|
#endif
|
|
}
|
|
|
|
static int vhost_virtqueue_set_vring_endian_legacy(struct vhost_dev *dev,
|
|
bool is_big_endian,
|
|
int vhost_vq_index)
|
|
{
|
|
int r;
|
|
struct vhost_vring_state s = {
|
|
.index = vhost_vq_index,
|
|
.num = is_big_endian
|
|
};
|
|
|
|
r = dev->vhost_ops->vhost_set_vring_endian(dev, &s);
|
|
if (r < 0) {
|
|
VHOST_OPS_DEBUG(r, "vhost_set_vring_endian failed");
|
|
}
|
|
return r;
|
|
}
|
|
|
|
static int vhost_memory_region_lookup(struct vhost_dev *hdev,
|
|
uint64_t gpa, uint64_t *uaddr,
|
|
uint64_t *len)
|
|
{
|
|
int i;
|
|
|
|
for (i = 0; i < hdev->mem->nregions; i++) {
|
|
struct vhost_memory_region *reg = hdev->mem->regions + i;
|
|
|
|
if (gpa >= reg->guest_phys_addr &&
|
|
reg->guest_phys_addr + reg->memory_size > gpa) {
|
|
*uaddr = reg->userspace_addr + gpa - reg->guest_phys_addr;
|
|
*len = reg->guest_phys_addr + reg->memory_size - gpa;
|
|
return 0;
|
|
}
|
|
}
|
|
|
|
return -EFAULT;
|
|
}
|
|
|
|
int vhost_device_iotlb_miss(struct vhost_dev *dev, uint64_t iova, int write)
|
|
{
|
|
IOMMUTLBEntry iotlb;
|
|
uint64_t uaddr, len;
|
|
int ret = -EFAULT;
|
|
|
|
RCU_READ_LOCK_GUARD();
|
|
|
|
trace_vhost_iotlb_miss(dev, 1);
|
|
|
|
iotlb = address_space_get_iotlb_entry(dev->vdev->dma_as,
|
|
iova, write,
|
|
MEMTXATTRS_UNSPECIFIED);
|
|
if (iotlb.target_as != NULL) {
|
|
ret = vhost_memory_region_lookup(dev, iotlb.translated_addr,
|
|
&uaddr, &len);
|
|
if (ret) {
|
|
trace_vhost_iotlb_miss(dev, 3);
|
|
error_report("Fail to lookup the translated address "
|
|
"%"PRIx64, iotlb.translated_addr);
|
|
goto out;
|
|
}
|
|
|
|
len = MIN(iotlb.addr_mask + 1, len);
|
|
iova = iova & ~iotlb.addr_mask;
|
|
|
|
ret = vhost_backend_update_device_iotlb(dev, iova, uaddr,
|
|
len, iotlb.perm);
|
|
if (ret) {
|
|
trace_vhost_iotlb_miss(dev, 4);
|
|
error_report("Fail to update device iotlb");
|
|
goto out;
|
|
}
|
|
}
|
|
|
|
trace_vhost_iotlb_miss(dev, 2);
|
|
|
|
out:
|
|
return ret;
|
|
}
|
|
|
|
int vhost_virtqueue_start(struct vhost_dev *dev,
|
|
struct VirtIODevice *vdev,
|
|
struct vhost_virtqueue *vq,
|
|
unsigned idx)
|
|
{
|
|
BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev)));
|
|
VirtioBusState *vbus = VIRTIO_BUS(qbus);
|
|
VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
|
|
hwaddr s, l, a;
|
|
int r;
|
|
int vhost_vq_index = dev->vhost_ops->vhost_get_vq_index(dev, idx);
|
|
struct vhost_vring_file file = {
|
|
.index = vhost_vq_index
|
|
};
|
|
struct vhost_vring_state state = {
|
|
.index = vhost_vq_index
|
|
};
|
|
struct VirtQueue *vvq = virtio_get_queue(vdev, idx);
|
|
|
|
a = virtio_queue_get_desc_addr(vdev, idx);
|
|
if (a == 0) {
|
|
/* Queue might not be ready for start */
|
|
return 0;
|
|
}
|
|
|
|
vq->num = state.num = virtio_queue_get_num(vdev, idx);
|
|
r = dev->vhost_ops->vhost_set_vring_num(dev, &state);
|
|
if (r) {
|
|
VHOST_OPS_DEBUG(r, "vhost_set_vring_num failed");
|
|
return r;
|
|
}
|
|
|
|
state.num = virtio_queue_get_last_avail_idx(vdev, idx);
|
|
r = dev->vhost_ops->vhost_set_vring_base(dev, &state);
|
|
if (r) {
|
|
VHOST_OPS_DEBUG(r, "vhost_set_vring_base failed");
|
|
return r;
|
|
}
|
|
|
|
if (vhost_needs_vring_endian(vdev)) {
|
|
r = vhost_virtqueue_set_vring_endian_legacy(dev,
|
|
virtio_is_big_endian(vdev),
|
|
vhost_vq_index);
|
|
if (r) {
|
|
return r;
|
|
}
|
|
}
|
|
|
|
vq->desc_size = s = l = virtio_queue_get_desc_size(vdev, idx);
|
|
vq->desc_phys = a;
|
|
vq->desc = vhost_memory_map(dev, a, &l, false);
|
|
if (!vq->desc || l != s) {
|
|
r = -ENOMEM;
|
|
goto fail_alloc_desc;
|
|
}
|
|
vq->avail_size = s = l = virtio_queue_get_avail_size(vdev, idx);
|
|
vq->avail_phys = a = virtio_queue_get_avail_addr(vdev, idx);
|
|
vq->avail = vhost_memory_map(dev, a, &l, false);
|
|
if (!vq->avail || l != s) {
|
|
r = -ENOMEM;
|
|
goto fail_alloc_avail;
|
|
}
|
|
vq->used_size = s = l = virtio_queue_get_used_size(vdev, idx);
|
|
vq->used_phys = a = virtio_queue_get_used_addr(vdev, idx);
|
|
vq->used = vhost_memory_map(dev, a, &l, true);
|
|
if (!vq->used || l != s) {
|
|
r = -ENOMEM;
|
|
goto fail_alloc_used;
|
|
}
|
|
|
|
r = vhost_virtqueue_set_addr(dev, vq, vhost_vq_index, dev->log_enabled);
|
|
if (r < 0) {
|
|
goto fail_alloc;
|
|
}
|
|
|
|
file.fd = event_notifier_get_fd(virtio_queue_get_host_notifier(vvq));
|
|
r = dev->vhost_ops->vhost_set_vring_kick(dev, &file);
|
|
if (r) {
|
|
VHOST_OPS_DEBUG(r, "vhost_set_vring_kick failed");
|
|
goto fail_kick;
|
|
}
|
|
|
|
/* Clear and discard previous events if any. */
|
|
event_notifier_test_and_clear(&vq->masked_notifier);
|
|
|
|
/* Init vring in unmasked state, unless guest_notifier_mask
|
|
* will do it later.
|
|
*/
|
|
if (!vdev->use_guest_notifier_mask) {
|
|
/* TODO: check and handle errors. */
|
|
vhost_virtqueue_mask(dev, vdev, idx, false);
|
|
}
|
|
|
|
if (k->query_guest_notifiers &&
|
|
k->query_guest_notifiers(qbus->parent) &&
|
|
virtio_queue_vector(vdev, idx) == VIRTIO_NO_VECTOR) {
|
|
file.fd = -1;
|
|
r = dev->vhost_ops->vhost_set_vring_call(dev, &file);
|
|
if (r) {
|
|
goto fail_vector;
|
|
}
|
|
}
|
|
|
|
return 0;
|
|
|
|
fail_vector:
|
|
fail_kick:
|
|
fail_alloc:
|
|
vhost_memory_unmap(dev, vq->used, virtio_queue_get_used_size(vdev, idx),
|
|
0, 0);
|
|
fail_alloc_used:
|
|
vhost_memory_unmap(dev, vq->avail, virtio_queue_get_avail_size(vdev, idx),
|
|
0, 0);
|
|
fail_alloc_avail:
|
|
vhost_memory_unmap(dev, vq->desc, virtio_queue_get_desc_size(vdev, idx),
|
|
0, 0);
|
|
fail_alloc_desc:
|
|
return r;
|
|
}
|
|
|
|
void vhost_virtqueue_stop(struct vhost_dev *dev,
|
|
struct VirtIODevice *vdev,
|
|
struct vhost_virtqueue *vq,
|
|
unsigned idx)
|
|
{
|
|
int vhost_vq_index = dev->vhost_ops->vhost_get_vq_index(dev, idx);
|
|
struct vhost_vring_state state = {
|
|
.index = vhost_vq_index,
|
|
};
|
|
int r;
|
|
|
|
if (virtio_queue_get_desc_addr(vdev, idx) == 0) {
|
|
/* Don't stop the virtqueue which might have not been started */
|
|
return;
|
|
}
|
|
|
|
r = dev->vhost_ops->vhost_get_vring_base(dev, &state);
|
|
if (r < 0) {
|
|
VHOST_OPS_DEBUG(r, "vhost VQ %u ring restore failed: %d", idx, r);
|
|
/* Connection to the backend is broken, so let's sync internal
|
|
* last avail idx to the device used idx.
|
|
*/
|
|
virtio_queue_restore_last_avail_idx(vdev, idx);
|
|
} else {
|
|
virtio_queue_set_last_avail_idx(vdev, idx, state.num);
|
|
}
|
|
virtio_queue_invalidate_signalled_used(vdev, idx);
|
|
virtio_queue_update_used_idx(vdev, idx);
|
|
|
|
/* In the cross-endian case, we need to reset the vring endianness to
|
|
* native as legacy devices expect so by default.
|
|
*/
|
|
if (vhost_needs_vring_endian(vdev)) {
|
|
vhost_virtqueue_set_vring_endian_legacy(dev,
|
|
!virtio_is_big_endian(vdev),
|
|
vhost_vq_index);
|
|
}
|
|
|
|
vhost_memory_unmap(dev, vq->used, virtio_queue_get_used_size(vdev, idx),
|
|
1, virtio_queue_get_used_size(vdev, idx));
|
|
vhost_memory_unmap(dev, vq->avail, virtio_queue_get_avail_size(vdev, idx),
|
|
0, virtio_queue_get_avail_size(vdev, idx));
|
|
vhost_memory_unmap(dev, vq->desc, virtio_queue_get_desc_size(vdev, idx),
|
|
0, virtio_queue_get_desc_size(vdev, idx));
|
|
}
|
|
|
|
static void vhost_eventfd_add(MemoryListener *listener,
|
|
MemoryRegionSection *section,
|
|
bool match_data, uint64_t data, EventNotifier *e)
|
|
{
|
|
}
|
|
|
|
static void vhost_eventfd_del(MemoryListener *listener,
|
|
MemoryRegionSection *section,
|
|
bool match_data, uint64_t data, EventNotifier *e)
|
|
{
|
|
}
|
|
|
|
static int vhost_virtqueue_set_busyloop_timeout(struct vhost_dev *dev,
|
|
int n, uint32_t timeout)
|
|
{
|
|
int vhost_vq_index = dev->vhost_ops->vhost_get_vq_index(dev, n);
|
|
struct vhost_vring_state state = {
|
|
.index = vhost_vq_index,
|
|
.num = timeout,
|
|
};
|
|
int r;
|
|
|
|
if (!dev->vhost_ops->vhost_set_vring_busyloop_timeout) {
|
|
return -EINVAL;
|
|
}
|
|
|
|
r = dev->vhost_ops->vhost_set_vring_busyloop_timeout(dev, &state);
|
|
if (r) {
|
|
VHOST_OPS_DEBUG(r, "vhost_set_vring_busyloop_timeout failed");
|
|
return r;
|
|
}
|
|
|
|
return 0;
|
|
}
|
|
|
|
static void vhost_virtqueue_error_notifier(EventNotifier *n)
|
|
{
|
|
struct vhost_virtqueue *vq = container_of(n, struct vhost_virtqueue,
|
|
error_notifier);
|
|
struct vhost_dev *dev = vq->dev;
|
|
int index = vq - dev->vqs;
|
|
|
|
if (event_notifier_test_and_clear(n) && dev->vdev) {
|
|
VHOST_OPS_DEBUG(-EINVAL, "vhost vring error in virtqueue %d",
|
|
dev->vq_index + index);
|
|
}
|
|
}
|
|
|
|
static int vhost_virtqueue_init(struct vhost_dev *dev,
|
|
struct vhost_virtqueue *vq, int n)
|
|
{
|
|
int vhost_vq_index = dev->vhost_ops->vhost_get_vq_index(dev, n);
|
|
struct vhost_vring_file file = {
|
|
.index = vhost_vq_index,
|
|
};
|
|
int r = event_notifier_init(&vq->masked_notifier, 0);
|
|
if (r < 0) {
|
|
return r;
|
|
}
|
|
|
|
file.fd = event_notifier_get_wfd(&vq->masked_notifier);
|
|
r = dev->vhost_ops->vhost_set_vring_call(dev, &file);
|
|
if (r) {
|
|
VHOST_OPS_DEBUG(r, "vhost_set_vring_call failed");
|
|
goto fail_call;
|
|
}
|
|
|
|
vq->dev = dev;
|
|
|
|
if (dev->vhost_ops->vhost_set_vring_err) {
|
|
r = event_notifier_init(&vq->error_notifier, 0);
|
|
if (r < 0) {
|
|
goto fail_call;
|
|
}
|
|
|
|
file.fd = event_notifier_get_fd(&vq->error_notifier);
|
|
r = dev->vhost_ops->vhost_set_vring_err(dev, &file);
|
|
if (r) {
|
|
VHOST_OPS_DEBUG(r, "vhost_set_vring_err failed");
|
|
goto fail_err;
|
|
}
|
|
|
|
event_notifier_set_handler(&vq->error_notifier,
|
|
vhost_virtqueue_error_notifier);
|
|
}
|
|
|
|
return 0;
|
|
|
|
fail_err:
|
|
event_notifier_cleanup(&vq->error_notifier);
|
|
fail_call:
|
|
event_notifier_cleanup(&vq->masked_notifier);
|
|
return r;
|
|
}
|
|
|
|
static void vhost_virtqueue_cleanup(struct vhost_virtqueue *vq)
|
|
{
|
|
event_notifier_cleanup(&vq->masked_notifier);
|
|
if (vq->dev->vhost_ops->vhost_set_vring_err) {
|
|
event_notifier_set_handler(&vq->error_notifier, NULL);
|
|
event_notifier_cleanup(&vq->error_notifier);
|
|
}
|
|
}
|
|
|
|
int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
|
|
VhostBackendType backend_type, uint32_t busyloop_timeout,
|
|
Error **errp)
|
|
{
|
|
uint64_t features;
|
|
int i, r, n_initialized_vqs = 0;
|
|
|
|
hdev->vdev = NULL;
|
|
hdev->migration_blocker = NULL;
|
|
|
|
r = vhost_set_backend_type(hdev, backend_type);
|
|
assert(r >= 0);
|
|
|
|
r = hdev->vhost_ops->vhost_backend_init(hdev, opaque, errp);
|
|
if (r < 0) {
|
|
goto fail;
|
|
}
|
|
|
|
r = hdev->vhost_ops->vhost_set_owner(hdev);
|
|
if (r < 0) {
|
|
error_setg_errno(errp, -r, "vhost_set_owner failed");
|
|
goto fail;
|
|
}
|
|
|
|
r = hdev->vhost_ops->vhost_get_features(hdev, &features);
|
|
if (r < 0) {
|
|
error_setg_errno(errp, -r, "vhost_get_features failed");
|
|
goto fail;
|
|
}
|
|
|
|
for (i = 0; i < hdev->nvqs; ++i, ++n_initialized_vqs) {
|
|
r = vhost_virtqueue_init(hdev, hdev->vqs + i, hdev->vq_index + i);
|
|
if (r < 0) {
|
|
error_setg_errno(errp, -r, "Failed to initialize virtqueue %d", i);
|
|
goto fail;
|
|
}
|
|
}
|
|
|
|
if (busyloop_timeout) {
|
|
for (i = 0; i < hdev->nvqs; ++i) {
|
|
r = vhost_virtqueue_set_busyloop_timeout(hdev, hdev->vq_index + i,
|
|
busyloop_timeout);
|
|
if (r < 0) {
|
|
error_setg_errno(errp, -r, "Failed to set busyloop timeout");
|
|
goto fail_busyloop;
|
|
}
|
|
}
|
|
}
|
|
|
|
hdev->features = features;
|
|
|
|
hdev->memory_listener = (MemoryListener) {
|
|
.name = "vhost",
|
|
.begin = vhost_begin,
|
|
.commit = vhost_commit,
|
|
.region_add = vhost_region_addnop,
|
|
.region_nop = vhost_region_addnop,
|
|
.log_start = vhost_log_start,
|
|
.log_stop = vhost_log_stop,
|
|
.log_sync = vhost_log_sync,
|
|
.log_global_start = vhost_log_global_start,
|
|
.log_global_stop = vhost_log_global_stop,
|
|
.eventfd_add = vhost_eventfd_add,
|
|
.eventfd_del = vhost_eventfd_del,
|
|
.priority = 10
|
|
};
|
|
|
|
hdev->iommu_listener = (MemoryListener) {
|
|
.name = "vhost-iommu",
|
|
.region_add = vhost_iommu_region_add,
|
|
.region_del = vhost_iommu_region_del,
|
|
};
|
|
|
|
if (hdev->migration_blocker == NULL) {
|
|
if (!(hdev->features & (0x1ULL << VHOST_F_LOG_ALL))) {
|
|
error_setg(&hdev->migration_blocker,
|
|
"Migration disabled: vhost lacks VHOST_F_LOG_ALL feature.");
|
|
} else if (vhost_dev_log_is_shared(hdev) && !qemu_memfd_alloc_check()) {
|
|
error_setg(&hdev->migration_blocker,
|
|
"Migration disabled: failed to allocate shared memory");
|
|
}
|
|
}
|
|
|
|
if (hdev->migration_blocker != NULL) {
|
|
r = migrate_add_blocker(hdev->migration_blocker, errp);
|
|
if (r < 0) {
|
|
error_free(hdev->migration_blocker);
|
|
goto fail_busyloop;
|
|
}
|
|
}
|
|
|
|
hdev->mem = g_malloc0(offsetof(struct vhost_memory, regions));
|
|
hdev->n_mem_sections = 0;
|
|
hdev->mem_sections = NULL;
|
|
hdev->log = NULL;
|
|
hdev->log_size = 0;
|
|
hdev->log_enabled = false;
|
|
hdev->started = false;
|
|
memory_listener_register(&hdev->memory_listener, &address_space_memory);
|
|
QLIST_INSERT_HEAD(&vhost_devices, hdev, entry);
|
|
|
|
if (used_memslots > hdev->vhost_ops->vhost_backend_memslots_limit(hdev)) {
|
|
error_setg(errp, "vhost backend memory slots limit is less"
|
|
" than current number of present memory slots");
|
|
r = -EINVAL;
|
|
goto fail_busyloop;
|
|
}
|
|
|
|
return 0;
|
|
|
|
fail_busyloop:
|
|
if (busyloop_timeout) {
|
|
while (--i >= 0) {
|
|
vhost_virtqueue_set_busyloop_timeout(hdev, hdev->vq_index + i, 0);
|
|
}
|
|
}
|
|
fail:
|
|
hdev->nvqs = n_initialized_vqs;
|
|
vhost_dev_cleanup(hdev);
|
|
return r;
|
|
}
|
|
|
|
void vhost_dev_cleanup(struct vhost_dev *hdev)
|
|
{
|
|
int i;
|
|
|
|
trace_vhost_dev_cleanup(hdev);
|
|
|
|
for (i = 0; i < hdev->nvqs; ++i) {
|
|
vhost_virtqueue_cleanup(hdev->vqs + i);
|
|
}
|
|
if (hdev->mem) {
|
|
/* those are only safe after successful init */
|
|
memory_listener_unregister(&hdev->memory_listener);
|
|
QLIST_REMOVE(hdev, entry);
|
|
}
|
|
if (hdev->migration_blocker) {
|
|
migrate_del_blocker(hdev->migration_blocker);
|
|
error_free(hdev->migration_blocker);
|
|
}
|
|
g_free(hdev->mem);
|
|
g_free(hdev->mem_sections);
|
|
if (hdev->vhost_ops) {
|
|
hdev->vhost_ops->vhost_backend_cleanup(hdev);
|
|
}
|
|
assert(!hdev->log);
|
|
|
|
memset(hdev, 0, sizeof(struct vhost_dev));
|
|
}
|
|
|
|
/* Stop processing guest IO notifications in qemu.
|
|
* Start processing them in vhost in kernel.
|
|
*/
|
|
int vhost_dev_enable_notifiers(struct vhost_dev *hdev, VirtIODevice *vdev)
|
|
{
|
|
BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev)));
|
|
int i, r, e;
|
|
|
|
/* We will pass the notifiers to the kernel, make sure that QEMU
|
|
* doesn't interfere.
|
|
*/
|
|
r = virtio_device_grab_ioeventfd(vdev);
|
|
if (r < 0) {
|
|
error_report("binding does not support host notifiers");
|
|
goto fail;
|
|
}
|
|
|
|
for (i = 0; i < hdev->nvqs; ++i) {
|
|
r = virtio_bus_set_host_notifier(VIRTIO_BUS(qbus), hdev->vq_index + i,
|
|
true);
|
|
if (r < 0) {
|
|
error_report("vhost VQ %d notifier binding failed: %d", i, -r);
|
|
goto fail_vq;
|
|
}
|
|
}
|
|
|
|
return 0;
|
|
fail_vq:
|
|
while (--i >= 0) {
|
|
e = virtio_bus_set_host_notifier(VIRTIO_BUS(qbus), hdev->vq_index + i,
|
|
false);
|
|
if (e < 0) {
|
|
error_report("vhost VQ %d notifier cleanup error: %d", i, -r);
|
|
}
|
|
assert (e >= 0);
|
|
virtio_bus_cleanup_host_notifier(VIRTIO_BUS(qbus), hdev->vq_index + i);
|
|
}
|
|
virtio_device_release_ioeventfd(vdev);
|
|
fail:
|
|
return r;
|
|
}
|
|
|
|
/* Stop processing guest IO notifications in vhost.
|
|
* Start processing them in qemu.
|
|
* This might actually run the qemu handlers right away,
|
|
* so virtio in qemu must be completely setup when this is called.
|
|
*/
|
|
void vhost_dev_disable_notifiers(struct vhost_dev *hdev, VirtIODevice *vdev)
|
|
{
|
|
BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev)));
|
|
int i, r;
|
|
|
|
for (i = 0; i < hdev->nvqs; ++i) {
|
|
r = virtio_bus_set_host_notifier(VIRTIO_BUS(qbus), hdev->vq_index + i,
|
|
false);
|
|
if (r < 0) {
|
|
error_report("vhost VQ %d notifier cleanup failed: %d", i, -r);
|
|
}
|
|
assert (r >= 0);
|
|
virtio_bus_cleanup_host_notifier(VIRTIO_BUS(qbus), hdev->vq_index + i);
|
|
}
|
|
virtio_device_release_ioeventfd(vdev);
|
|
}
|
|
|
|
/* Test and clear event pending status.
|
|
* Should be called after unmask to avoid losing events.
|
|
*/
|
|
bool vhost_virtqueue_pending(struct vhost_dev *hdev, int n)
|
|
{
|
|
struct vhost_virtqueue *vq = hdev->vqs + n - hdev->vq_index;
|
|
assert(n >= hdev->vq_index && n < hdev->vq_index + hdev->nvqs);
|
|
return event_notifier_test_and_clear(&vq->masked_notifier);
|
|
}
|
|
|
|
/* Mask/unmask events from this vq. */
|
|
void vhost_virtqueue_mask(struct vhost_dev *hdev, VirtIODevice *vdev, int n,
|
|
bool mask)
|
|
{
|
|
struct VirtQueue *vvq = virtio_get_queue(vdev, n);
|
|
int r, index = n - hdev->vq_index;
|
|
struct vhost_vring_file file;
|
|
|
|
/* should only be called after backend is connected */
|
|
assert(hdev->vhost_ops);
|
|
|
|
if (mask) {
|
|
assert(vdev->use_guest_notifier_mask);
|
|
file.fd = event_notifier_get_wfd(&hdev->vqs[index].masked_notifier);
|
|
} else {
|
|
file.fd = event_notifier_get_wfd(virtio_queue_get_guest_notifier(vvq));
|
|
}
|
|
|
|
file.index = hdev->vhost_ops->vhost_get_vq_index(hdev, n);
|
|
r = hdev->vhost_ops->vhost_set_vring_call(hdev, &file);
|
|
if (r < 0) {
|
|
VHOST_OPS_DEBUG(r, "vhost_set_vring_call failed");
|
|
}
|
|
}
|
|
|
|
uint64_t vhost_get_features(struct vhost_dev *hdev, const int *feature_bits,
|
|
uint64_t features)
|
|
{
|
|
const int *bit = feature_bits;
|
|
while (*bit != VHOST_INVALID_FEATURE_BIT) {
|
|
uint64_t bit_mask = (1ULL << *bit);
|
|
if (!(hdev->features & bit_mask)) {
|
|
features &= ~bit_mask;
|
|
}
|
|
bit++;
|
|
}
|
|
return features;
|
|
}
|
|
|
|
void vhost_ack_features(struct vhost_dev *hdev, const int *feature_bits,
|
|
uint64_t features)
|
|
{
|
|
const int *bit = feature_bits;
|
|
while (*bit != VHOST_INVALID_FEATURE_BIT) {
|
|
uint64_t bit_mask = (1ULL << *bit);
|
|
if (features & bit_mask) {
|
|
hdev->acked_features |= bit_mask;
|
|
}
|
|
bit++;
|
|
}
|
|
}
|
|
|
|
int vhost_dev_get_config(struct vhost_dev *hdev, uint8_t *config,
|
|
uint32_t config_len, Error **errp)
|
|
{
|
|
assert(hdev->vhost_ops);
|
|
|
|
if (hdev->vhost_ops->vhost_get_config) {
|
|
return hdev->vhost_ops->vhost_get_config(hdev, config, config_len,
|
|
errp);
|
|
}
|
|
|
|
error_setg(errp, "vhost_get_config not implemented");
|
|
return -ENOSYS;
|
|
}
|
|
|
|
int vhost_dev_set_config(struct vhost_dev *hdev, const uint8_t *data,
|
|
uint32_t offset, uint32_t size, uint32_t flags)
|
|
{
|
|
assert(hdev->vhost_ops);
|
|
|
|
if (hdev->vhost_ops->vhost_set_config) {
|
|
return hdev->vhost_ops->vhost_set_config(hdev, data, offset,
|
|
size, flags);
|
|
}
|
|
|
|
return -ENOSYS;
|
|
}
|
|
|
|
void vhost_dev_set_config_notifier(struct vhost_dev *hdev,
|
|
const VhostDevConfigOps *ops)
|
|
{
|
|
hdev->config_ops = ops;
|
|
}
|
|
|
|
void vhost_dev_free_inflight(struct vhost_inflight *inflight)
|
|
{
|
|
if (inflight && inflight->addr) {
|
|
qemu_memfd_free(inflight->addr, inflight->size, inflight->fd);
|
|
inflight->addr = NULL;
|
|
inflight->fd = -1;
|
|
}
|
|
}
|
|
|
|
static int vhost_dev_resize_inflight(struct vhost_inflight *inflight,
|
|
uint64_t new_size)
|
|
{
|
|
Error *err = NULL;
|
|
int fd = -1;
|
|
void *addr = qemu_memfd_alloc("vhost-inflight", new_size,
|
|
F_SEAL_GROW | F_SEAL_SHRINK | F_SEAL_SEAL,
|
|
&fd, &err);
|
|
|
|
if (err) {
|
|
error_report_err(err);
|
|
return -ENOMEM;
|
|
}
|
|
|
|
vhost_dev_free_inflight(inflight);
|
|
inflight->offset = 0;
|
|
inflight->addr = addr;
|
|
inflight->fd = fd;
|
|
inflight->size = new_size;
|
|
|
|
return 0;
|
|
}
|
|
|
|
void vhost_dev_save_inflight(struct vhost_inflight *inflight, QEMUFile *f)
|
|
{
|
|
if (inflight->addr) {
|
|
qemu_put_be64(f, inflight->size);
|
|
qemu_put_be16(f, inflight->queue_size);
|
|
qemu_put_buffer(f, inflight->addr, inflight->size);
|
|
} else {
|
|
qemu_put_be64(f, 0);
|
|
}
|
|
}
|
|
|
|
int vhost_dev_load_inflight(struct vhost_inflight *inflight, QEMUFile *f)
|
|
{
|
|
uint64_t size;
|
|
|
|
size = qemu_get_be64(f);
|
|
if (!size) {
|
|
return 0;
|
|
}
|
|
|
|
if (inflight->size != size) {
|
|
int ret = vhost_dev_resize_inflight(inflight, size);
|
|
if (ret < 0) {
|
|
return ret;
|
|
}
|
|
}
|
|
inflight->queue_size = qemu_get_be16(f);
|
|
|
|
qemu_get_buffer(f, inflight->addr, size);
|
|
|
|
return 0;
|
|
}
|
|
|
|
int vhost_dev_prepare_inflight(struct vhost_dev *hdev, VirtIODevice *vdev)
{
    int r;

    if (hdev->vhost_ops->vhost_get_inflight_fd == NULL ||
        hdev->vhost_ops->vhost_set_inflight_fd == NULL) {
        return 0;
    }

    hdev->vdev = vdev;

    r = vhost_dev_set_features(hdev, hdev->log_enabled);
    if (r < 0) {
        VHOST_OPS_DEBUG(r, "vhost_dev_prepare_inflight failed");
        return r;
    }

    return 0;
}

int vhost_dev_set_inflight(struct vhost_dev *dev,
                           struct vhost_inflight *inflight)
{
    int r;

    if (dev->vhost_ops->vhost_set_inflight_fd && inflight->addr) {
        r = dev->vhost_ops->vhost_set_inflight_fd(dev, inflight);
        if (r) {
            VHOST_OPS_DEBUG(r, "vhost_set_inflight_fd failed");
            return r;
        }
    }

    return 0;
}

int vhost_dev_get_inflight(struct vhost_dev *dev, uint16_t queue_size,
                           struct vhost_inflight *inflight)
{
    int r;

    if (dev->vhost_ops->vhost_get_inflight_fd) {
        r = dev->vhost_ops->vhost_get_inflight_fd(dev, queue_size, inflight);
        if (r) {
            VHOST_OPS_DEBUG(r, "vhost_get_inflight_fd failed");
            return r;
        }
    }

    return 0;
}

static int vhost_dev_set_vring_enable(struct vhost_dev *hdev, int enable)
{
    if (!hdev->vhost_ops->vhost_set_vring_enable) {
        return 0;
    }

    /*
     * For vhost-user devices, if VHOST_USER_F_PROTOCOL_FEATURES has not
     * been negotiated, the rings start directly in the enabled state, and
     * the .vhost_set_vring_enable callback will fail since
     * VHOST_USER_SET_VRING_ENABLE is not supported.
     */
    if (hdev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_USER &&
        !virtio_has_feature(hdev->backend_features,
                            VHOST_USER_F_PROTOCOL_FEATURES)) {
        return 0;
    }

    return hdev->vhost_ops->vhost_set_vring_enable(hdev, enable);
}

/* Host notifiers must be enabled at this point. */
int vhost_dev_start(struct vhost_dev *hdev, VirtIODevice *vdev, bool vrings)
{
    int i, r;

    /* should only be called after backend is connected */
    assert(hdev->vhost_ops);

    trace_vhost_dev_start(hdev, vdev->name, vrings);

    vdev->vhost_started = true;
    hdev->started = true;
    hdev->vdev = vdev;

    r = vhost_dev_set_features(hdev, hdev->log_enabled);
    if (r < 0) {
        goto fail_features;
    }

    if (vhost_dev_has_iommu(hdev)) {
        memory_listener_register(&hdev->iommu_listener, vdev->dma_as);
    }

    r = hdev->vhost_ops->vhost_set_mem_table(hdev, hdev->mem);
    if (r < 0) {
        VHOST_OPS_DEBUG(r, "vhost_set_mem_table failed");
        goto fail_mem;
    }
    for (i = 0; i < hdev->nvqs; ++i) {
        r = vhost_virtqueue_start(hdev,
                                  vdev,
                                  hdev->vqs + i,
                                  hdev->vq_index + i);
        if (r < 0) {
            goto fail_vq;
        }
    }

    if (hdev->log_enabled) {
        uint64_t log_base;

        hdev->log_size = vhost_get_log_size(hdev);
        hdev->log = vhost_log_get(hdev->log_size,
                                  vhost_dev_log_is_shared(hdev));
        log_base = (uintptr_t)hdev->log->log;
        r = hdev->vhost_ops->vhost_set_log_base(hdev,
                                                hdev->log_size ? log_base : 0,
                                                hdev->log);
        if (r < 0) {
            VHOST_OPS_DEBUG(r, "vhost_set_log_base failed");
            goto fail_log;
        }
    }
    if (vrings) {
        r = vhost_dev_set_vring_enable(hdev, true);
        if (r) {
            goto fail_log;
        }
    }
    if (hdev->vhost_ops->vhost_dev_start) {
        r = hdev->vhost_ops->vhost_dev_start(hdev, true);
        if (r) {
            goto fail_start;
        }
    }
    if (vhost_dev_has_iommu(hdev) &&
        hdev->vhost_ops->vhost_set_iotlb_callback) {
        hdev->vhost_ops->vhost_set_iotlb_callback(hdev, true);

        /*
         * Update used ring information for IOTLB to work correctly;
         * the vhost-kernel code requires this.
         */
        for (i = 0; i < hdev->nvqs; ++i) {
            struct vhost_virtqueue *vq = hdev->vqs + i;
            vhost_device_iotlb_miss(hdev, vq->used_phys, true);
        }
    }
    return 0;
fail_start:
    if (vrings) {
        vhost_dev_set_vring_enable(hdev, false);
    }
fail_log:
    vhost_log_put(hdev, false);
fail_vq:
    while (--i >= 0) {
        vhost_virtqueue_stop(hdev,
                             vdev,
                             hdev->vqs + i,
                             hdev->vq_index + i);
    }

fail_mem:
fail_features:
    vdev->vhost_started = false;
    hdev->started = false;
    return r;
}

/* Host notifiers must be enabled at this point. */
void vhost_dev_stop(struct vhost_dev *hdev, VirtIODevice *vdev, bool vrings)
{
    int i;

    /* should only be called after backend is connected */
    assert(hdev->vhost_ops);

    trace_vhost_dev_stop(hdev, vdev->name, vrings);

    if (hdev->vhost_ops->vhost_dev_start) {
        hdev->vhost_ops->vhost_dev_start(hdev, false);
    }
    if (vrings) {
        vhost_dev_set_vring_enable(hdev, false);
    }
    for (i = 0; i < hdev->nvqs; ++i) {
        vhost_virtqueue_stop(hdev,
                             vdev,
                             hdev->vqs + i,
                             hdev->vq_index + i);
    }

    if (vhost_dev_has_iommu(hdev)) {
        if (hdev->vhost_ops->vhost_set_iotlb_callback) {
            hdev->vhost_ops->vhost_set_iotlb_callback(hdev, false);
        }
        memory_listener_unregister(&hdev->iommu_listener);
    }
    vhost_log_put(hdev, true);
    hdev->started = false;
    vdev->vhost_started = false;
    hdev->vdev = NULL;
}

int vhost_net_set_backend(struct vhost_dev *hdev,
                          struct vhost_vring_file *file)
{
    if (hdev->vhost_ops->vhost_net_set_backend) {
        return hdev->vhost_ops->vhost_net_set_backend(hdev, file);
    }

    return -ENOSYS;
}