
* Run docker probe only if docker or podman are available

  The docker probe uses "sudo -n", which can trigger an e-mail with a
  security warning each time configure is run. Therefore run the docker
  probe only if either docker or podman is available. That avoids the
  problematic "sudo -n" on build environments which have neither docker
  nor podman installed.

  Fixes: c4575b59155e2e00 ("configure: store container engine in config-host.mak")
  Signed-off-by: Stefan Weil <sw@weilnetz.de>
  Message-Id: <20221030083510.310584-1-sw@weilnetz.de>
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Thomas Huth <thuth@redhat.com>
  Message-Id: <20221117172532.538149-2-alex.bennee@linaro.org>

* tests/avocado/machine_aspeed.py: Reduce noise on the console for SDK tests

  The Aspeed SDK images are based on OpenBMC, which starts a lot of
  services. The output noise on the console can occasionally break the
  test waiting for the logging prompt. Change the U-Boot bootargs
  variable to add "quiet" to the kernel command line and reduce the
  output volume. This also drops the test on the CPU id, which was nice
  to have but not essential.

  Signed-off-by: Cédric Le Goater <clg@kaod.org>
  Message-Id: <20221104075347.370503-1-clg@kaod.org>
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Message-Id: <20221117172532.538149-3-alex.bennee@linaro.org>

* tests/docker: allow user to override check target

  This is useful when trying to bisect a particular failing test behind
  a docker run. For example:

      make docker-test-clang@fedora \
           TARGET_LIST=arm-softmmu \
           TEST_COMMAND="meson test qtest-arm/qos-test" \
           J=9 V=1

  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Message-Id: <20221117172532.538149-4-alex.bennee@linaro.org>

* docs/devel: add a maintainers section to development process

  We don't currently have a clear place in the documentation to describe
  the roles and responsibilities of a maintainer. Let's create one so we
  can.
  I've moved a few small bits out of other files to try and keep
  everything in one place.

  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
  Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Message-Id: <20221117172532.538149-5-alex.bennee@linaro.org>

* docs/devel: make language a little less code centric

  We welcome all sorts of patches.

  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
  Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Message-Id: <20221117172532.538149-6-alex.bennee@linaro.org>

* docs/devel: simplify the minimal checklist

  The bullet points are quite long and contain process tips. Move those
  bits of the bullet to the relevant sections and link to them. Use a
  table for nicer formatting of the checklist.

  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
  Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Message-Id: <20221117172532.538149-7-alex.bennee@linaro.org>

* docs/devel: try and improve the language around patch review

  It is important that contributors take the review process seriously
  and we collaborate in a respectful way while avoiding personal
  attacks. Try and make this clear in the language.
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
  Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Message-Id: <20221117172532.538149-8-alex.bennee@linaro.org>

* tests/avocado: Raise timeout for boot_linux.py:BootLinuxPPC64.test_pseries_tcg

  On my machine, a debug build of QEMU takes about 260 seconds to
  complete this test, so with the current timeout value of 180 seconds
  it always times out. Double the timeout value to 360 so the test
  definitely has enough time to complete.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Message-Id: <20221110142901.3832318-1-peter.maydell@linaro.org>
  Message-Id: <20221117172532.538149-9-alex.bennee@linaro.org>

* tests/avocado: introduce alpine virt test for CI

  The boot_linux tests download and run a full cloud image boot and
  start a full distro. While the ability to test the full boot chain is
  worthwhile, it is perhaps a little too heavyweight and causes issues
  in CI. Fix this by introducing a new alpine linux ISO boot in
  machine_aarch64_virt. This boots a fully loaded -cpu max with all the
  bells and whistles in 31s on my machine. A full debug build takes
  around 180s on my machine, so we set a more generous timeout to cover
  that.

  We don't add a test for lesser GIC versions, although there is some
  coverage for that already in the boot_xen.py tests. If we want to
  introduce more comprehensive testing we can do it with a custom kernel
  and initrd rather than a full distro boot.

  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Message-Id: <20221117172532.538149-10-alex.bennee@linaro.org>

* tests/avocado: skip aarch64 cloud TCG tests in CI

  We now have a much lighter weight test in machine_aarch64_virt which
  tests the full boot chain in less time.
  Rename the tests while we are at it to make it clear it is a Fedora
  cloud image.

  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Message-Id: <20221117172532.538149-11-alex.bennee@linaro.org>

* gitlab: integrate coverage report

  This should hopefully give us nice coverage information about what our
  tests (or at least the subset we are running) have hit. Ideally we
  would want a way to trigger coverage on tests likely to be affected by
  the current commit.

  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
  Message-Id: <20221117172532.538149-12-alex.bennee@linaro.org>

* vhost: mask VIRTIO_F_RING_RESET for vhost and vhost-user devices

  Commit 69e1c14aa2 ("virtio: core: vq reset feature negotation
  support") enabled VIRTIO_F_RING_RESET by default for all virtio
  devices. This feature is not currently emulated by QEMU, so for vhost
  and vhost-user devices we need to make sure it is supported by the
  offloaded device emulation (in-kernel or in another process). To do
  this we need to add VIRTIO_F_RING_RESET to the features bitmap passed
  to vhost_get_features(). This way it will be masked if the device does
  not support it.

  This issue was initially discovered with vhost-vsock and
  vhost-user-vsock, and then also tested with vhost-user-rng, which
  confirmed the same issue. They fail when sending features through the
  VHOST_SET_FEATURES ioctl or VHOST_USER_SET_FEATURES message, since
  VIRTIO_F_RING_RESET is negotiated by the guest (Linux >= v6.0), but
  not supported by the device.

  Fixes: 69e1c14aa2 ("virtio: core: vq reset feature negotation support")
  Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1318
  Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
  Message-Id: <20221121101101.29400-1-sgarzare@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
  Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
  Acked-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
  Acked-by: Jason Wang <jasowang@redhat.com>

* tests: acpi: whitelist DSDT before moving PRQx to _SB scope

  Signed-off-by: Igor Mammedov <imammedo@redhat.com>
  Message-Id: <20221121153613.3972225-2-imammedo@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

* acpi: x86: move RPQx field back to _SB scope

  Commit 47a373faa6b2 ("acpi: pc/q35: drop ad-hoc PCI-ISA bridge AML
  routines and let bus ennumeration generate AML") moved ISA bridge AML
  generation to the respective devices and used aml_alias() to provide
  PRQx fields in _SB scope. However, it turned out that SeaBIOS was not
  able to process the Alias opcode when parsing the DSDT, resulting in a
  lack of keyboard during boot (SeaBIOS console, grub, FreeDOS).

  While a fix for SeaBIOS has been posted
  (https://mail.coreboot.org/hyperkitty/list/seabios@seabios.org/thread/RGPL7HESH5U5JRLEO6FP77CZVHZK5J65/),
  the fixed SeaBIOS might not make it into QEMU-7.2 in time. Hence this
  workaround, which puts PRQx back into _SB scope and gets rid of the
  aliases in the ISA bridge description, so the DSDT will be parsable by
  the broken SeaBIOS.

  That brings back hardcoded references to the ISA bridge
  (PCI0.S08.P40C/PCI0.SF8.PIRQ, where the middle part is now
  auto-generated based on the slot it's plugged into), but that should
  be fine as bridge initialization also hardcodes the PCI address of the
  bridge, so it can't ever move. Once the QEMU tree has a fixed SeaBIOS
  blob, we should be able to drop this part and revert back to the
  alias-based approach.

  Reported-by: Volker Rümelin <vr_qemu@t-online.de>
  Signed-off-by: Igor Mammedov <imammedo@redhat.com>
  Message-Id: <20221121153613.3972225-3-imammedo@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

* tests: acpi: x86: update expected DSDT after moving PRQx fields in _SB scope

  Expected DSDT changes, pc:

    -    Field (P40C, ByteAcc, NoLock, Preserve)
    +    Scope (\_SB)
         {
    -        PRQ0,   8,
    -        PRQ1,   8,
    -        PRQ2,   8,
    -        PRQ3,   8
    +        Field (PCI0.S08.P40C, ByteAcc, NoLock, Preserve)
    +        {
    +            PRQ0,   8,
    +            PRQ1,   8,
    +            PRQ2,   8,
    +            PRQ3,   8
    +        }
         }
    -    Alias (PRQ0, \_SB.PRQ0)
    -    Alias (PRQ1, \_SB.PRQ1)
    -    Alias (PRQ2, \_SB.PRQ2)
    -    Alias (PRQ3, \_SB.PRQ3)

  q35:

    -    Field (PIRQ, ByteAcc, NoLock, Preserve)
    -    {
    -        PRQA,   8,
    -        PRQB,   8,
    -        PRQC,   8,
    -        PRQD,   8,
    -        Offset (0x08),
    -        PRQE,   8,
    -        PRQF,   8,
    -        PRQG,   8,
    -        PRQH,   8
    +    Scope (\_SB)
    +    {
    +        Field (PCI0.SF8.PIRQ, ByteAcc, NoLock, Preserve)
    +        {
    +            PRQA,   8,
    +            PRQB,   8,
    +            PRQC,   8,
    +            PRQD,   8,
    +            Offset (0x08),
    +            PRQE,   8,
    +            PRQF,   8,
    +            PRQG,   8,
    +            PRQH,   8
    +        }
         }
    -    Alias (PRQA, \_SB.PRQA)
    -    Alias (PRQB, \_SB.PRQB)
    -    Alias (PRQC, \_SB.PRQC)
    -    Alias (PRQD, \_SB.PRQD)
    -    Alias (PRQE, \_SB.PRQE)
    -    Alias (PRQF, \_SB.PRQF)
    -    Alias (PRQG, \_SB.PRQG)
    -    Alias (PRQH, \_SB.PRQH)

  Signed-off-by: Igor Mammedov <imammedo@redhat.com>
  Message-Id: <20221121153613.3972225-4-imammedo@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

* MAINTAINERS: add mst to list of biosbits maintainers

  Adding Michael's name to the list of bios bits maintainers so that all
  changes and fixes in the biosbits framework can go through his tree
  and he is notified.

  Suggested-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Ani Sinha <ani@anisinha.ca>
  Message-Id: <20221111151138.36988-1-ani@anisinha.ca>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

* tests/avocado: configure acpi-bits to use avocado timeout

  Instead of using a hardcoded timeout, just rely on Avocado's built-in
  test case timeout. This helps avoid timeout issues on machines where
  60 seconds is not sufficient.
  Signed-off-by: John Snow <jsnow@redhat.com>
  Message-Id: <20221115212759.3095751-1-jsnow@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
  Reviewed-by: Ani Sinha <ani@anisinha.ca>

* acpi/tests/avocado/bits: keep the work directory when BITS_DEBUG is set in env

  Debugging bits issues often involves running the QEMU command line
  manually outside of the avocado environment with the generated ISO.
  Hence, it's inconvenient if the ISO gets cleaned up after the test has
  finished. This change makes sure that the work directory is kept after
  the test finishes if the test is run with BITS_DEBUG=1 in the
  environment, so that the ISO is available for use with the QEMU
  command line.

  CC: Daniel P. Berrangé <berrange@redhat.com>
  Signed-off-by: Ani Sinha <ani@anisinha.ca>
  Message-Id: <20221117113630.543495-1-ani@anisinha.ca>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

* virtio: disable error for out of spec queue-enable

  Virtio 1.0 is pretty clear that features have to be negotiated before
  enabling VQs. Unfortunately SeaBIOS has ignored this ever since
  gaining 1.0 support (UEFI is ok). Comment the error out for now, and
  add a TODO.

  Fixes: 3c37f8b8d1 ("virtio: introduce virtio_queue_enable()")
  Cc: "Kangjie Xu" <kangjie.xu@linux.alibaba.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
  Message-Id: <20221121200339.362452-1-mst@redhat.com>

* hw/loongarch: Add default stdout uart in fdt

  Add a "chosen" subnode to the LoongArch fdt, and set its "stdout-path"
  prop to the uart node.

  Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
  Reviewed-by: Song Gao <gaosong@loongson.cn>
  Message-Id: <20221115114923.3372414-1-yangxiaojuan@loongson.cn>
  Signed-off-by: Song Gao <gaosong@loongson.cn>

* hw/loongarch: Fix setprop_sized method in fdt rtc node.
  Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Song Gao <gaosong@loongson.cn>
  Message-Id: <20221116040300.3459818-1-yangxiaojuan@loongson.cn>
  Signed-off-by: Song Gao <gaosong@loongson.cn>

* hw/loongarch: Replace the value of uart info with macro

  Use macros for the uart info values, such as the address and size, in
  the acpi_build method.

  Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
  Reviewed-by: Song Gao <gaosong@loongson.cn>
  Message-Id: <20221115115008.3372489-1-yangxiaojuan@loongson.cn>
  Signed-off-by: Song Gao <gaosong@loongson.cn>

* target/arm: Don't do two-stage lookup if stage 2 is disabled

  In get_phys_addr_with_struct(), we call get_phys_addr_twostage() if
  the CPU supports EL2. However, we don't check here that stage 2 is
  actually enabled. Instead we only check that inside
  get_phys_addr_twostage() to skip stage 2 translation. This means that
  even if stage 2 is disabled we still tell the stage 1 lookup to do its
  page table walks via stage 2.

  This works by luck for normal CPU accesses, but it breaks for debug
  accesses, which are used by the disassembler and also by semihosting
  file reads and writes, because the debug case takes a different code
  path inside S1_ptw_translate(). This means that setups that use
  semihosting for file loads are broken (a regression since 7.1,
  introduced in recent ptw refactoring), and that sometimes disassembly
  in debug logs reports "unable to read memory" rather than showing the
  guest insns.

  Fix the bug by hoisting the "is stage 2 enabled?" check up to
  get_phys_addr_with_struct(), so that we handle S2 disabled the same
  way we do the "no EL2" case, with a simple single stage lookup.
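  The shape of that fix can be sketched in a few lines. This is a
  hypothetical, heavily simplified model (the `CPUModel` struct,
  `get_phys_addr_kind()`, and the `LookupKind` enum are invented for
  illustration, not QEMU's actual types): the two-stage path is taken
  only when EL2 exists AND stage 2 is enabled, and everything else falls
  back to the same single-stage walk.

  ```c
  #include <assert.h>
  #include <stdbool.h>

  /* Hypothetical stand-ins for the CPU translation state described above. */
  typedef struct {
      bool has_el2;          /* CPU implements EL2 */
      bool stage2_enabled;   /* stage 2 translation is actually on */
  } CPUModel;

  typedef enum { LOOKUP_SINGLE, LOOKUP_TWOSTAGE } LookupKind;

  /* Sketch of the hoisted check: treat "stage 2 disabled" exactly like
   * "no EL2" and use a plain single-stage lookup, instead of routing
   * stage 1 page-table walks through the stage 2 machinery. */
  static LookupKind get_phys_addr_kind(const CPUModel *cpu)
  {
      if (cpu->has_el2 && cpu->stage2_enabled) {
          return LOOKUP_TWOSTAGE;
      }
      return LOOKUP_SINGLE;
  }
  ```

  Before the fix, the decision was effectively made on `has_el2` alone,
  which is what sent disabled-stage-2 debug accesses down the wrong path.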
  Reported-by: Jens Wiklander <jens.wiklander@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Message-id: 20221121212404.1450382-1-peter.maydell@linaro.org

* target/arm: Use signed quantity to represent VMSAv8-64 translation level

  The LPA2 extension implements 52-bit virtual addressing for 4k and 16k
  translation granules, and for the former, this means an additional
  level of translation is needed. This means we start counting at -1
  instead of 0 when doing a walk, and so 'level' is now a signed
  quantity, and should be typed as such. So turn it from uint32_t into
  int32_t. This avoids a level of -1 getting misinterpreted as being
  >= 3, and terminating a page table walk prematurely with a bogus
  output address.

  Cc: Peter Maydell <peter.maydell@linaro.org>
  Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Cc: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* Update VERSION for v7.2.0-rc2

  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

* tests/avocado: Update the URLs of the advent calendar images

  The qemu-advent-calendar.org server will be decommissioned soon. I've
  mirrored the images that we use for the QEMU CI to gitlab, so update
  their URLs to point to the new location.

  Message-Id: <20221121102436.78635-1-thuth@redhat.com>
  Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Signed-off-by: Thomas Huth <thuth@redhat.com>

* tests/qtest: Decrease the amount of output from the qom-test

  The logs in the gitlab-CI have a size constraint, and sometimes we
  already hit this limit.
  The biggest part of the log then seems to be filled by the qom-test,
  so we should decrease the size of the output - which can be done
  easily by not printing the path for each property, since the path has
  already been logged at the beginning of each node that we handle here.
  However, if we omit the path, we should make sure to not recurse into
  child nodes in between, so that it is clear to which node each
  property belongs. Thus store the children and links in a temporary
  list and recurse only at the end of each node, when all properties
  have already been printed.

  Message-Id: <20221121194240.149268-1-thuth@redhat.com>
  Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
  Signed-off-by: Thomas Huth <thuth@redhat.com>

* tests/avocado: use new rootfs for orangepi test

  The old URL wasn't stable. I suspect the current URL will only be
  stable for a few months, so maybe we need another strategy for hosting
  rootfs snapshots?

  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Message-Id: <20221118113309.1057790-1-alex.bennee@linaro.org>
  Signed-off-by: Thomas Huth <thuth@redhat.com>

* Revert "usbredir: avoid queuing hello packet on snapshot restore"

  Run state is also in RUN_STATE_PRELAUNCH while "-S" is used.

  This reverts commit 0631d4b448454ae8a1ab091c447e3f71ab6e088a

  Signed-off-by: Joelle van Dyne <j@getutm.app>
  Reviewed-by: Ján Tomko <jtomko@redhat.com>

  The original commit broke the usage of usbredir with libvirt, which
  starts every domain with "-S". This workaround is no longer needed
  because the usbredir behavior has been fixed in the meantime:
  https://gitlab.freedesktop.org/spice/usbredir/-/merge_requests/61

  Signed-off-by: Ján Tomko <jtomko@redhat.com>
  Message-Id: <1689cec3eadcea87255e390cb236033aca72e168.1669193161.git.jtomko@redhat.com>
  Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>

* gtk: disable GTK Clipboard with a new meson option

  The GTK Clipboard implementation may cause guest hangs.
  Therefore implement a new configure switch, --enable-gtk-clipboard, as
  a meson option disabled by default, which warns in the help text about
  the experimental nature of the feature. Regenerate the meson build
  options to include it. The initialization of the clipboard in gtk.c,
  as well as the compilation of gtk-clipboard.c, are now conditional on
  this new option being set.

  Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1150
  Signed-off-by: Claudio Fontana <cfontana@suse.de>
  Acked-by: Gerd Hoffmann <kraxel@redhat.com>
  Reviewed-by: Jim Fehlig <jfehlig@suse.com>
  Message-Id: <20221121135538.14625-1-cfontana@suse.de>
  Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>

* hw/usb/hcd-xhci.c: spelling: tranfer

  Fixes: effaf5a240e03020f4ae953e10b764622c3e87cc
  Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
  Reviewed-by: Thomas Huth <thuth@redhat.com>
  Reviewed-by: Stefan Weil <sw@weilnetz.de>
  Message-Id: <20221105114851.306206-1-mjt@msgid.tls.msk.ru>
  Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>

* ui/gtk: prevent ui lock up when dpy_gl_update called again before current draw event occurs

  A warning, "qemu: warning: console: no gl-unblock within", followed by
  a guest scanout lockup can happen if dpy_gl_update is called twice in
  a row and the second call is made before the gd_draw_event scheduled
  by the first call takes place. This is because the draw call returns
  without decrementing the gl_block ref count if the dmabuf was already
  submitted, as shown below.

  (gd_gl_area_draw/gd_egl_draw)

      if (dmabuf) {
          if (!dmabuf->draw_submitted) {
              return;
          } else {
              dmabuf->draw_submitted = false;
          }
      }

  So it should not schedule any redundant draw event in case
  draw_submitted is already set in gd_egl_flush/gd_gl_area_scanout_flush.
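  The fix can be modelled with a tiny reference-count sketch. The names
  here (`DmaBuf`, `scanout_flush()`, `draw_event()`, `gl_block_refcount`)
  are invented stand-ins for the GTK display state discussed above, not
  QEMU's actual code; the point is only that the flush side must check
  `draw_submitted` before taking another gl-block reference.

  ```c
  #include <assert.h>
  #include <stdbool.h>

  /* Hypothetical stand-in for the scanout dmabuf state. */
  typedef struct {
      bool draw_submitted;      /* a draw event for this dmabuf is queued */
  } DmaBuf;

  static int gl_block_refcount; /* models the console gl-block ref count */

  /* Flush path sketch: schedule a draw (and block the guest) only when
   * no draw event is pending yet, so back-to-back updates cannot leak
   * a gl-block reference that no draw event will ever release. */
  static void scanout_flush(DmaBuf *dmabuf)
  {
      if (dmabuf->draw_submitted) {
          return;               /* redundant update before the draw ran */
      }
      dmabuf->draw_submitted = true;
      gl_block_refcount++;
  }

  /* The scheduled draw event finally runs and releases the reference. */
  static void draw_event(DmaBuf *dmabuf)
  {
      dmabuf->draw_submitted = false;
      gl_block_refcount--;
  }
  ```

  Without the `draw_submitted` check, two flushes before a single draw
  would leave the refcount at 1 forever, which is the "no gl-unblock
  within" lockup the commit describes.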
  Cc: Gerd Hoffmann <kraxel@redhat.com>
  Cc: Vivek Kasireddy <vivek.kasireddy@intel.com>
  Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
  Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
  Message-Id: <20221021192315.9110-1-dongwon.kim@intel.com>
  Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>

* hw/usb/hcd-xhci: Reset the XHCIState with device_cold_reset()

  Currently the hcd-xhci-pci and hcd-xhci-sysbus devices are mostly
  wrappers around the TYPE_XHCI device, which is a direct subclass of
  TYPE_DEVICE. Since TYPE_DEVICE devices are not on any qbus and do not
  get automatically reset, the wrapper devices both reset the TYPE_XHCI
  device in their own reset functions. However, they do this using
  device_legacy_reset(), which will reset the device itself but not any
  bus it has. Switch to device_cold_reset(), which avoids using a
  deprecated function and also propagates reset along any child buses.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Message-Id: <20221014145423.2102706-1-peter.maydell@linaro.org>
  Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>

* hw/audio/intel-hda: don't reset codecs twice

  Currently the intel-hda device has a reset method which manually
  resets all the codecs by calling device_legacy_reset() on them. This
  means they get reset twice, once because child devices on a qbus get
  reset before the parent device's reset method is called, and then
  again because we're manually resetting them.

  Drop the manual reset call, and ensure that codecs are still reset
  when the guest does a reset via ICH6_GCTL_RESET by using
  device_cold_reset() (which resets all the devices on the qbus as well
  as the device itself) instead of a direct call to the reset function.
  This is a slight ordering change because the (only) codec reset now
  happens before the controller registers etc are reset, rather than
  once before and then once after, but the codec reset function
  hda_audio_reset() doesn't care.
  This lets us drop a use of device_legacy_reset(), which is deprecated.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Message-Id: <20221014142632.2092404-2-peter.maydell@linaro.org>
  Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>

* hw/audio/intel-hda: Drop unnecessary prototype

  The only use of intel_hda_reset() is after its definition, so we don't
  need to separately declare its prototype at the top of the file; drop
  the unnecessary line.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Message-Id: <20221014142632.2092404-3-peter.maydell@linaro.org>
  Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>

* add syx snapshot extras

* it compiles!

* virtiofsd: Add `sigreturn` to the seccomp whitelist

  The virtiofsd currently crashes on s390x. This is because of a
  `sigreturn` system call. See audit log below:

    type=SECCOMP msg=audit(1669382477.611:459): auid=4294967295 uid=0
    gid=0 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023
    pid=6649 comm="virtiofsd" exe="/usr/libexec/virtiofsd" sig=31
    arch=80000016 syscall=119 compat=0 ip=0x3fff15f748a
    code=0x80000000AUID="unset" UID="root" GID="root" ARCH=s390x
    SYSCALL=sigreturn

  Signed-off-by: Marc Hartmayer <mhartmay@linux.ibm.com>
  Reviewed-by: German Maglione <gmaglione@redhat.com>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Message-Id: <20221125143946.27717-1-mhartmay@linux.ibm.com>

* libvhost-user: Fix wrong type of argument to formatting function

  (reported by LGTM)

  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Stefan Weil <sw@weilnetz.de>
  Message-Id: <20220422070144.1043697-2-sw@weilnetz.de>
  Signed-off-by: Laurent Vivier <laurent@vivier.eu>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Message-Id: <20221126152507.283271-2-sw@weilnetz.de>

* libvhost-user: Fix format strings

  Signed-off-by: Stefan Weil <sw@weilnetz.de>
  Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
  Message-Id: <20220422070144.1043697-3-sw@weilnetz.de>
  Signed-off-by: Laurent Vivier <laurent@vivier.eu>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Message-Id: <20221126152507.283271-3-sw@weilnetz.de>

* libvhost-user: Fix two more format strings

  This fix is required for 32 bit hosts. The bug was detected by CI for
  arm-linux, but is also relevant for i386-linux.

  Reported-by: Stefan Hajnoczi <stefanha@gmail.com>
  Signed-off-by: Stefan Weil <sw@weilnetz.de>
  Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Message-Id: <20221126152507.283271-4-sw@weilnetz.de>

* libvhost-user: Add format attribute to local function vu_panic

  Signed-off-by: Stefan Weil <sw@weilnetz.de>
  Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
  Message-Id: <20220422070144.1043697-4-sw@weilnetz.de>
  Signed-off-by: Laurent Vivier <laurent@vivier.eu>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Message-Id: <20221126152507.283271-5-sw@weilnetz.de>

* MAINTAINERS: Add subprojects/libvhost-user to section "vhost"

  Signed-off-by: Stefan Weil <sw@weilnetz.de>
  [Michael agreed to act as maintainer for libvhost-user via email in
  https://lore.kernel.org/qemu-devel/20221123015218-mutt-send-email-mst@kernel.org/.
  --Stefan]
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Message-Id: <20221126152507.283271-6-sw@weilnetz.de>

* Add G_GNUC_PRINTF to function qemu_set_info_str and fix related issues

  With the G_GNUC_PRINTF function attribute the compiler detects two
  potential insecure format strings:

    ../../../net/stream.c:248:31: warning: format string is not a string literal (potentially insecure) [-Wformat-security]
        qemu_set_info_str(&s->nc, uri);
                                  ^~~
    ../../../net/stream.c:322:31: warning: format string is not a string literal (potentially insecure) [-Wformat-security]
        qemu_set_info_str(&s->nc, uri);
                                  ^~~

  There are also two other warnings:

    ../../../net/socket.c:182:35: warning: zero-length gnu_printf format string [-Wformat-zero-length]
      182 |     qemu_set_info_str(&s->nc, "");
          |                               ^~
    ../../../net/stream.c:170:35: warning: zero-length gnu_printf format string [-Wformat-zero-length]
      170 |     qemu_set_info_str(&s->nc, "");

  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Signed-off-by: Stefan Weil <sw@weilnetz.de>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Message-Id: <20221126152507.283271-7-sw@weilnetz.de>

* del ramfile

* update seabios source from 1.16.0 to 1.16.1

  git shortlog rel-1.16.0..rel-1.16.1
  ===================================

  Gerd Hoffmann (3):
      malloc: use variable for ZoneHigh size
      malloc: use large ZoneHigh when there is enough memory
      virtio-blk: use larger default request size

  Igor Mammedov (1):
      acpi: parse Alias object

  Volker Rümelin (2):
      pci: refactor the pci_config_*() functions
      reset: force standard PCI configuration access

  Xiaofei Lee (1):
      virtio-blk: Fix incorrect type conversion in virtio_blk_op()

  Xuan Zhuo (2):
      virtio-mmio: read/write the hi 32 features for mmio
      virtio: finalize features before using device

  Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>

* update seabios binaries to 1.16.1

  Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>

* fix for non i386 archs

* replay: Fix declaration of replay_read_next_clock

  Fixes the build with gcc 13:

    replay/replay-time.c:34:6: error: conflicting types for \
      'replay_read_next_clock' due to enum/integer mismatch; \
      have 'void(ReplayClockKind)' [-Werror=enum-int-mismatch]
       34 | void replay_read_next_clock(ReplayClockKind kind)
          |      ^~~~~~~~~~~~~~~~~~~~~~
    In file included from ../qemu/replay/replay-time.c:14:
    replay/replay-internal.h:139:6: note: previous declaration of \
      'replay_read_next_clock' with type 'void(unsigned int)'
      139 | void replay_read_next_clock(unsigned int kind);
          |      ^~~~~~~~~~~~~~~~~~~~~~

  Fixes: 8eda206e090 ("replay: recording and replaying clock ticks")
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
  Reviewed-by: Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Message-Id: <20221129010547.284051-1-richard.henderson@linaro.org>

* hw/display/qxl: Have qxl_log_command Return early if no log_cmd handler

  Only 3 command types are logged: no need to call qxl_phys2virt() for
  the other types. Using different cases will help to pass different
  structure sizes to qxl_phys2virt() in a pair of commits.

  Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
  Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Message-Id: <20221128202741.4945-2-philmd@linaro.org>

* hw/display/qxl: Document qxl_phys2virt()

  Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
  Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Message-Id: <20221128202741.4945-3-philmd@linaro.org>

* hw/display/qxl: Pass requested buffer size to qxl_phys2virt()

  Currently qxl_phys2virt() doesn't check for buffer overrun. In order
  to do so in the next commit, pass the buffer size as argument.
  For QXLCursor in qxl_render_cursor() -> qxl_cursor() we verify the
  size of the chunked data ahead, checking we can access
  'sizeof(QXLCursor) + chunk->data_size' bytes. Since in the
  SPICE_CURSOR_TYPE_MONO case the cursor is assumed to fit in one chunk,
  no changes are required. In SPICE_CURSOR_TYPE_ALPHA the read-ahead is
  handled in qxl_unpack_chunks().

  Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Acked-by: Gerd Hoffmann <kraxel@redhat.com>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Message-Id: <20221128202741.4945-4-philmd@linaro.org>

* hw/display/qxl: Avoid buffer overrun in qxl_phys2virt (CVE-2022-4144)

  Have qxl_get_check_slot_offset() return false if the requested buffer
  size does not fit within the slot memory region. Similarly
  qxl_phys2virt() now returns NULL in such case, and
  qxl_dirty_one_surface() aborts. This avoids buffer overrun in the host
  pointer returned by memory_region_get_ram_ptr().

  Fixes: CVE-2022-4144 (out-of-bounds read)
  Reported-by: Wenxu Yin (@awxylitol)
  Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1336
  Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Message-Id: <20221128202741.4945-5-philmd@linaro.org>

* hw/display/qxl: Assert memory slot fits in preallocated MemoryRegion

  Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Message-Id: <20221128202741.4945-6-philmd@linaro.org>

* block-backend: avoid bdrv_unregister_buf() NULL pointer deref

  bdrv_*() APIs expect a valid BlockDriverState. Calling them with
  bs=NULL leads to undefined behavior. Jonathan Cameron reported this
  following NULL pointer dereference when a VM with a virtio-blk device
  and a memory-backend-file object is terminated:

  1. qemu_cleanup() closes all drives, setting blk->root to NULL
  2. qemu_cleanup() calls user_creatable_cleanup(), which results in a
     RAM block notifier callback because the memory-backend-file is
     destroyed.
  3. blk_unregister_buf() is called by virtio-blk's BlockRamRegistrar
     notifier callback and undefined behavior occurs.

  Fixes: baf422684d73 ("virtio-blk: use BDRV_REQ_REGISTERED_BUF optimization hint")
  Co-authored-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Reviewed-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Message-Id: <20221121211923.1993171-1-stefanha@redhat.com>

* target/arm: Set TCGCPUOps.restore_state_to_opc for v7m

  This setting got missed, breaking v7m.

  Fixes: 56c6c98df85c ("target/arm: Convert to tcg_ops restore_state_to_opc")
  Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1347
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Evgeny Ermakov <evgeny.v.ermakov@gmail.com>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Message-Id: <20221129204146.550394-1-richard.henderson@linaro.org>

* Update VERSION for v7.2.0-rc3

  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

* hooks are now post mem access

* tests/qtests: override "force-legacy" for gpio virtio-mmio tests

  The GPIO device is a VIRTIO_F_VERSION_1 device, but running with a
  legacy MMIO interface we miss out on that feature bit, causing
  confusion. For the GPIO test force the mmio bus to support non-legacy
  so we can properly test it.

  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1333
  Message-Id: <20221130112439.2527228-2-alex.bennee@linaro.org>
  Acked-by: Thomas Huth <thuth@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

* vhost: enable vrings in vhost_dev_start() for vhost-user devices

  Commit 02b61f38d3 ("hw/virtio: incorporate backend features in
  features") properly negotiates VHOST_USER_F_PROTOCOL_FEATURES with the
  vhost-user backend, but we forgot to enable vrings as specified in
  docs/interop/vhost-user.rst:

    If ``VHOST_USER_F_PROTOCOL_FEATURES`` has not been negotiated, the
    ring starts directly in the enabled state.

    If ``VHOST_USER_F_PROTOCOL_FEATURES`` has been negotiated, the ring
    is initialized in a disabled state and is enabled by
    ``VHOST_USER_SET_VRING_ENABLE`` with parameter 1.

  Some vhost-user front-ends already did this by calling
  vhost_ops.vhost_set_vring_enable() directly:

  - backends/cryptodev-vhost.c
  - hw/net/virtio-net.c
  - hw/virtio/vhost-user-gpio.c

  But most didn't do that, so we would leave the vrings disabled and
  some backends would not work. We observed this issue with the rust
  version of virtiofsd [1], which uses the event loop [2] provided by
  the vhost-user-backend crate where requests are not processed if the
  vring is not enabled.

  Let's fix this issue by enabling the vrings in vhost_dev_start() for
  vhost-user front-ends that don't already do this directly. Same thing
  also in vhost_dev_stop() where we disable vrings.

  [1] https://gitlab.com/virtio-fs/virtiofsd
  [2] https://github.com/rust-vmm/vhost/blob/240fc2966/crates/vhost-user-backend/src/event_loop.rs#L217

  Fixes: 02b61f38d3 ("hw/virtio: incorporate backend features in features")
  Reported-by: German Maglione <gmaglione@redhat.com>
  Tested-by: German Maglione <gmaglione@redhat.com>
  Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
  Acked-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
  Message-Id: <20221123131630.52020-1-sgarzare@redhat.com>
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Message-Id: <20221130112439.2527228-3-alex.bennee@linaro.org>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

* hw/virtio: add started_vu status field to vhost-user-gpio

  As per the fix to vhost-user-blk in f5b22d06fb (vhost: recheck dev
  state in the vhost_migration_log routine) we really should track the
  connection and starting separately.

  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Message-Id: <20221130112439.2527228-4-alex.bennee@linaro.org>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

* hw/virtio: generalise CHR_EVENT_CLOSED handling

  ...and use for both virtio-user-blk and virtio-user-gpio. This avoids
  the circular close by deferring shutdown due to disconnection until a
  later point. virtio-user-blk already had this mechanism in place, so
  generalise it as a vhost-user helper function and use it for both blk
  and gpio devices.

  While we are at it we also fix up vhost-user-gpio to re-establish the
  event handler after close down so we can reconnect later.

  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Reviewed-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
  Message-Id: <20221130112439.2527228-5-alex.bennee@linaro.org>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

* include/hw: VM state takes precedence in virtio_device_should_start

  The VM status should always preempt the device status for these
  checks. This ensures the device is in the correct state when we
  suspend the VM prior to migrations. This restores the checks to the
  order they were in before the refactoring moved things around. While
  we are at it let's improve our documentation of the various fields
  involved and document the two functions.

  Fixes: 9f6bcfd99f (hw/virtio: move vm_running check to virtio_device_started)
  Fixes: 259d69c00b (hw/virtio: introduce virtio_device_should_start)
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Tested-by: Christian Borntraeger <borntraeger@linux.ibm.com>
  Reviewed-by: Michael S.
Tsirkin <mst@redhat.com> Message-Id: <20221130112439.2527228-6-alex.bennee@linaro.org> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> * hw/nvme: fix aio cancel in format There are several bugs in the async cancel code for the Format command. Firstly, cancelling a format operation neglects to set iocb->ret as well as clearing the iocb->aiocb after cancelling the underlying aiocb which causes the aio callback to ignore the cancellation. Trivial fix. Secondly, and worse, because the request is queued up for posting to the CQ in a bottom half, if the cancellation is due to the submission queue being deleted (which calls blk_aio_cancel), the req structure is deallocated in nvme_del_sq prior to the bottom half being schedulued. Fix this by simply removing the bottom half, there is no reason to defer it anyway. Fixes: 3bcf26d3d619 ("hw/nvme: reimplement format nvm to allow cancellation") Reported-by: Jonathan Derrick <jonathan.derrick@linux.dev> Reviewed-by: Keith Busch <kbusch@kernel.org> Signed-off-by: Klaus Jensen <k.jensen@samsung.com> * hw/nvme: fix aio cancel in flush Make sure that iocb->aiocb is NULL'ed when cancelling. Fix a potential use-after-free by removing the bottom half and enqueuing the completion directly. Fixes: 38f4ac65ac88 ("hw/nvme: reimplement flush to allow cancellation") Reviewed-by: Keith Busch <kbusch@kernel.org> Signed-off-by: Klaus Jensen <k.jensen@samsung.com> * hw/nvme: fix aio cancel in zone reset If the zone reset operation is cancelled but the block unmap operation completes normally, the callback will continue resetting the next zone since it neglects to check iocb->ret which will have been set to -ECANCELED. Make sure that this is checked and bail out if an error is present. Secondly, fix a potential use-after-free by removing the bottom half and enqueuing the completion directly. 
Fixes: 63d96e4ffd71 ("hw/nvme: reimplement zone reset to allow cancellation") Reviewed-by: Keith Busch <kbusch@kernel.org> Signed-off-by: Klaus Jensen <k.jensen@samsung.com> * hw/nvme: fix aio cancel in dsm When the DSM operation is cancelled asynchronously, we set iocb->ret to -ECANCELED. However, the callback function only checks the return value of the completed aio, which may have completed succesfully prior to the cancellation and thus the callback ends up continuing the dsm operation instead of bailing out. Fix this. Secondly, fix a potential use-after-free by removing the bottom half and enqueuing the completion directly. Fixes: d7d1474fd85d ("hw/nvme: reimplement dsm to allow cancellation") Reviewed-by: Keith Busch <kbusch@kernel.org> Signed-off-by: Klaus Jensen <k.jensen@samsung.com> * hw/nvme: remove copy bh scheduling Fix a potential use-after-free by removing the bottom half and enqueuing the completion directly. Fixes: 796d20681d9b ("hw/nvme: reimplement the copy command to allow aio cancellation") Reviewed-by: Keith Busch <kbusch@kernel.org> Signed-off-by: Klaus Jensen <k.jensen@samsung.com> * target/i386: allow MMX instructions with CR4.OSFXSR=0 MMX state is saved/restored by FSAVE/FRSTOR so the instructions are not illegal opcodes even if CR4.OSFXSR=0. Make sure that validate_vex takes into account the prefix and only checks HF_OSFXSR_MASK in the presence of an SSE instruction. 
Fixes: 20581aadec5e ("target/i386: validate VEX prefixes via the instructions' exception classes", 2022-10-18) Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1350 Reported-by: Helge Konetzka (@hejko on gitlab.com) Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> * target/i386: Always completely initialize TranslateFault In get_physical_address, the canonical address check failed to set TranslateFault.stage2, which resulted in an uninitialized read from the struct when reporting the fault in x86_cpu_tlb_fill. Adjust all error paths to use structure assignment so that the entire struct is always initialized. Reported-by: Daniel Hoffman <dhoff749@gmail.com> Fixes: 9bbcf372193a ("target/i386: Reorg GET_HPHYS") Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20221201074522.178498-1-richard.henderson@linaro.org> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1324 Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> * hw/loongarch/virt: Add cfi01 pflash device Add cfi01 pflash device for LoongArch virt machine Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <20221130100647.398565-1-yangxiaojuan@loongson.cn> Signed-off-by: Song Gao <gaosong@loongson.cn> * Sync pc on breakpoints * tests/qtest/migration-test: Fix unlink error and memory leaks When running the migration test compiled with Clang from Fedora 37 and sanitizers enabled, there is an error complaining about unlink(): ../tests/qtest/migration-test.c:1072:12: runtime error: null pointer passed as argument 1, which is declared to never be null /usr/include/unistd.h:858:48: note: nonnull attribute specified here SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior ../tests/qtest/migration-test.c:1072:12 in (test program exited with status code 1) TAP parsing error: Too few tests run (expected 33, got 20) The data->clientcert and 
data->clientkey pointers can indeed be unset in some tests, so we have to check them before calling unlink() with those. While we're at it, I also noticed that the code is only freeing some but not all of the allocated strings in this function, and indeed, valgrind is also complaining about memory leaks here. So let's call g_free() on all allocated strings to avoid leaking memory here. Message-Id: <20221125083054.117504-1-thuth@redhat.com> Tested-by: Bin Meng <bmeng@tinylab.org> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com> * target/s390x/tcg: Fix and improve the SACF instruction The SET ADDRESS SPACE CONTROL FAST instruction is not privileged, it can be used from problem space, too. Just the switching to the home address space is privileged and should still generate a privilege exception. This bug is e.g. causing programs like Java that use the "getcpu" vdso kernel function to crash (see https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=990417#26 ). While we're at it, also check if DAT is not enabled. In that case the instruction is supposed to generate a special operation exception. 
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/655 Message-Id: <20221201184443.136355-1-thuth@redhat.com> Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com> Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Thomas Huth <thuth@redhat.com> * hw/display/next-fb: Fix comment typo Signed-off-by: Evgeny Ermakov <evgeny.v.ermakov@gmail.com> Message-Id: <20221125160849.23711-1-evgeny.v.ermakov@gmail.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Thomas Huth <thuth@redhat.com> * fix dev snapshots * working syx snaps * Revert "hw/loongarch/virt: Add cfi01 pflash device" This reverts commit 14dccc8ea6ece7ee63273144fb55e4770a05e0fd. Signed-off-by: Song Gao <gaosong@loongson.cn> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20221205113007.683505-1-gaosong@loongson.cn> * Update VERSION for v7.2.0-rc4 Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Stefan Weil <sw@weilnetz.de> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Stefano Garzarella <sgarzare@redhat.com> Signed-off-by: Michael S. 
Tsirkin <mst@redhat.com> Signed-off-by: Igor Mammedov <imammedo@redhat.com> Signed-off-by: Ani Sinha <ani@anisinha.ca> Signed-off-by: John Snow <jsnow@redhat.com> Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn> Signed-off-by: Song Gao <gaosong@loongson.cn> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Thomas Huth <thuth@redhat.com> Signed-off-by: Ján Tomko <jtomko@redhat.com> Signed-off-by: Gerd Hoffmann <kraxel@redhat.com> Signed-off-by: Claudio Fontana <cfontana@suse.de> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru> Signed-off-by: Dongwon Kim <dongwon.kim@intel.com> Signed-off-by: Marc Hartmayer <mhartmay@linux.ibm.com> Signed-off-by: Laurent Vivier <laurent@vivier.eu> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Evgeny Ermakov <evgeny.v.ermakov@gmail.com> Signed-off-by: Klaus Jensen <k.jensen@samsung.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Co-authored-by: Stefan Weil <sw@weilnetz.de> Co-authored-by: Cédric Le Goater <clg@kaod.org> Co-authored-by: Alex Bennée <alex.bennee@linaro.org> Co-authored-by: Peter Maydell <peter.maydell@linaro.org> Co-authored-by: Stefano Garzarella <sgarzare@redhat.com> Co-authored-by: Igor Mammedov <imammedo@redhat.com> Co-authored-by: Ani Sinha <ani@anisinha.ca> Co-authored-by: John Snow <jsnow@redhat.com> Co-authored-by: Michael S. 
Tsirkin <mst@redhat.com> Co-authored-by: Xiaojuan Yang <yangxiaojuan@loongson.cn> Co-authored-by: Stefan Hajnoczi <stefanha@redhat.com> Co-authored-by: Ard Biesheuvel <ardb@kernel.org> Co-authored-by: Thomas Huth <thuth@redhat.com> Co-authored-by: Joelle van Dyne <j@getutm.app> Co-authored-by: Claudio Fontana <cfontana@suse.de> Co-authored-by: Michael Tokarev <mjt@tls.msk.ru> Co-authored-by: Dongwon Kim <dongwon.kim@intel.com> Co-authored-by: Marc Hartmayer <mhartmay@linux.ibm.com> Co-authored-by: Stefan Weil via <qemu-devel@nongnu.org> Co-authored-by: Gerd Hoffmann <kraxel@redhat.com> Co-authored-by: Richard Henderson <richard.henderson@linaro.org> Co-authored-by: Philippe Mathieu-Daudé <philmd@linaro.org> Co-authored-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Co-authored-by: Evgeny Ermakov <evgeny.v.ermakov@gmail.com> Co-authored-by: Klaus Jensen <k.jensen@samsung.com> Co-authored-by: Paolo Bonzini <pbonzini@redhat.com> Co-authored-by: Song Gao <gaosong@loongson.cn>
2292 lines
75 KiB
C
/*
|
|
* QEMU ARM CPU
|
|
*
|
|
* Copyright (c) 2012 SUSE LINUX Products GmbH
|
|
*
|
|
* This program is free software; you can redistribute it and/or
|
|
* modify it under the terms of the GNU General Public License
|
|
* as published by the Free Software Foundation; either version 2
|
|
* of the License, or (at your option) any later version.
|
|
*
|
|
* This program is distributed in the hope that it will be useful,
|
|
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
|
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
|
* GNU General Public License for more details.
|
|
*
|
|
* You should have received a copy of the GNU General Public License
|
|
* along with this program; if not, see
|
|
* <http://www.gnu.org/licenses/gpl-2.0.html>
|
|
*/
|
|
|
|
#include "qemu/osdep.h"
|
|
#include "qemu/qemu-print.h"
|
|
#include "qemu/timer.h"
|
|
#include "qemu/log.h"
|
|
#include "exec/page-vary.h"
|
|
#include "target/arm/idau.h"
|
|
#include "qemu/module.h"
|
|
#include "qapi/error.h"
|
|
#include "qapi/visitor.h"
|
|
#include "cpu.h"
|
|
#ifdef CONFIG_TCG
|
|
#include "hw/core/tcg-cpu-ops.h"
|
|
#endif /* CONFIG_TCG */
|
|
#include "internals.h"
|
|
#include "exec/exec-all.h"
|
|
#include "hw/qdev-properties.h"
|
|
#if !defined(CONFIG_USER_ONLY)
|
|
#include "hw/loader.h"
|
|
#include "hw/boards.h"
|
|
#endif
|
|
#include "sysemu/tcg.h"
|
|
#include "sysemu/qtest.h"
|
|
#include "sysemu/hw_accel.h"
|
|
#include "kvm_arm.h"
|
|
#include "disas/capstone.h"
|
|
#include "fpu/softfloat.h"
|
|
#include "cpregs.h"
|
|
|
|
static void arm_cpu_set_pc(CPUState *cs, vaddr value)
|
|
{
|
|
ARMCPU *cpu = ARM_CPU(cs);
|
|
CPUARMState *env = &cpu->env;
|
|
|
|
if (is_a64(env)) {
|
|
env->pc = value;
|
|
env->thumb = false;
|
|
} else {
|
|
env->regs[15] = value & ~1;
|
|
env->thumb = value & 1;
|
|
}
|
|
}
|
|
|
|
static vaddr arm_cpu_get_pc(CPUState *cs)
|
|
{
|
|
ARMCPU *cpu = ARM_CPU(cs);
|
|
CPUARMState *env = &cpu->env;
|
|
|
|
if (is_a64(env)) {
|
|
return env->pc;
|
|
} else {
|
|
return env->regs[15];
|
|
}
|
|
}
|
|
|
|
#ifdef CONFIG_TCG
|
|
void arm_cpu_synchronize_from_tb(CPUState *cs,
|
|
const TranslationBlock *tb)
|
|
{
|
|
/* The program counter is always up to date with TARGET_TB_PCREL. */
|
|
if (!TARGET_TB_PCREL) {
|
|
CPUARMState *env = cs->env_ptr;
|
|
/*
|
|
* It's OK to look at env for the current mode here, because it's
|
|
* never possible for an AArch64 TB to chain to an AArch32 TB.
|
|
*/
|
|
if (is_a64(env)) {
|
|
env->pc = tb_pc(tb);
|
|
} else {
|
|
env->regs[15] = tb_pc(tb);
|
|
}
|
|
}
|
|
}
|
|
|
|
void arm_restore_state_to_opc(CPUState *cs,
|
|
const TranslationBlock *tb,
|
|
const uint64_t *data)
|
|
{
|
|
CPUARMState *env = cs->env_ptr;
|
|
|
|
if (is_a64(env)) {
|
|
if (TARGET_TB_PCREL) {
|
|
env->pc = (env->pc & TARGET_PAGE_MASK) | data[0];
|
|
} else {
|
|
env->pc = data[0];
|
|
}
|
|
env->condexec_bits = 0;
|
|
env->exception.syndrome = data[2] << ARM_INSN_START_WORD2_SHIFT;
|
|
} else {
|
|
if (TARGET_TB_PCREL) {
|
|
env->regs[15] = (env->regs[15] & TARGET_PAGE_MASK) | data[0];
|
|
} else {
|
|
env->regs[15] = data[0];
|
|
}
|
|
env->condexec_bits = data[1];
|
|
env->exception.syndrome = data[2] << ARM_INSN_START_WORD2_SHIFT;
|
|
}
|
|
}
|
|
#endif /* CONFIG_TCG */
|
|
|
|
static bool arm_cpu_has_work(CPUState *cs)
|
|
{
|
|
ARMCPU *cpu = ARM_CPU(cs);
|
|
|
|
return (cpu->power_state != PSCI_OFF)
|
|
&& cs->interrupt_request &
|
|
(CPU_INTERRUPT_FIQ | CPU_INTERRUPT_HARD
|
|
| CPU_INTERRUPT_VFIQ | CPU_INTERRUPT_VIRQ | CPU_INTERRUPT_VSERR
|
|
| CPU_INTERRUPT_EXITTB);
|
|
}
|
|
|
|
void arm_register_pre_el_change_hook(ARMCPU *cpu, ARMELChangeHookFn *hook,
|
|
void *opaque)
|
|
{
|
|
ARMELChangeHook *entry = g_new0(ARMELChangeHook, 1);
|
|
|
|
entry->hook = hook;
|
|
entry->opaque = opaque;
|
|
|
|
QLIST_INSERT_HEAD(&cpu->pre_el_change_hooks, entry, node);
|
|
}
|
|
|
|
void arm_register_el_change_hook(ARMCPU *cpu, ARMELChangeHookFn *hook,
|
|
void *opaque)
|
|
{
|
|
ARMELChangeHook *entry = g_new0(ARMELChangeHook, 1);
|
|
|
|
entry->hook = hook;
|
|
entry->opaque = opaque;
|
|
|
|
QLIST_INSERT_HEAD(&cpu->el_change_hooks, entry, node);
|
|
}
|
|
|
|
static void cp_reg_reset(gpointer key, gpointer value, gpointer opaque)
|
|
{
|
|
/* Reset a single ARMCPRegInfo register */
|
|
ARMCPRegInfo *ri = value;
|
|
ARMCPU *cpu = opaque;
|
|
|
|
if (ri->type & (ARM_CP_SPECIAL_MASK | ARM_CP_ALIAS)) {
|
|
return;
|
|
}
|
|
|
|
if (ri->resetfn) {
|
|
ri->resetfn(&cpu->env, ri);
|
|
return;
|
|
}
|
|
|
|
/* A zero offset is never possible as it would be regs[0]
|
|
* so we use it to indicate that reset is being handled elsewhere.
|
|
* This is basically only used for fields in non-core coprocessors
|
|
* (like the pxa2xx ones).
|
|
*/
|
|
if (!ri->fieldoffset) {
|
|
return;
|
|
}
|
|
|
|
if (cpreg_field_is_64bit(ri)) {
|
|
CPREG_FIELD64(&cpu->env, ri) = ri->resetvalue;
|
|
} else {
|
|
CPREG_FIELD32(&cpu->env, ri) = ri->resetvalue;
|
|
}
|
|
}
|
|
|
|
static void cp_reg_check_reset(gpointer key, gpointer value, gpointer opaque)
|
|
{
|
|
/* Purely an assertion check: we've already done reset once,
|
|
* so now check that running the reset for the cpreg doesn't
|
|
* change its value. This traps bugs where two different cpregs
|
|
* both try to reset the same state field but to different values.
|
|
*/
|
|
ARMCPRegInfo *ri = value;
|
|
ARMCPU *cpu = opaque;
|
|
uint64_t oldvalue, newvalue;
|
|
|
|
if (ri->type & (ARM_CP_SPECIAL_MASK | ARM_CP_ALIAS | ARM_CP_NO_RAW)) {
|
|
return;
|
|
}
|
|
|
|
oldvalue = read_raw_cp_reg(&cpu->env, ri);
|
|
cp_reg_reset(key, value, opaque);
|
|
newvalue = read_raw_cp_reg(&cpu->env, ri);
|
|
assert(oldvalue == newvalue);
|
|
}
|
|
|
|
static void arm_cpu_reset(DeviceState *dev)
|
|
{
|
|
CPUState *s = CPU(dev);
|
|
ARMCPU *cpu = ARM_CPU(s);
|
|
ARMCPUClass *acc = ARM_CPU_GET_CLASS(cpu);
|
|
CPUARMState *env = &cpu->env;
|
|
|
|
acc->parent_reset(dev);
|
|
|
|
memset(env, 0, offsetof(CPUARMState, end_reset_fields));
|
|
|
|
g_hash_table_foreach(cpu->cp_regs, cp_reg_reset, cpu);
|
|
g_hash_table_foreach(cpu->cp_regs, cp_reg_check_reset, cpu);
|
|
|
|
env->vfp.xregs[ARM_VFP_FPSID] = cpu->reset_fpsid;
|
|
env->vfp.xregs[ARM_VFP_MVFR0] = cpu->isar.mvfr0;
|
|
env->vfp.xregs[ARM_VFP_MVFR1] = cpu->isar.mvfr1;
|
|
env->vfp.xregs[ARM_VFP_MVFR2] = cpu->isar.mvfr2;
|
|
|
|
cpu->power_state = s->start_powered_off ? PSCI_OFF : PSCI_ON;
|
|
|
|
if (arm_feature(env, ARM_FEATURE_IWMMXT)) {
|
|
env->iwmmxt.cregs[ARM_IWMMXT_wCID] = 0x69051000 | 'Q';
|
|
}
|
|
|
|
if (arm_feature(env, ARM_FEATURE_AARCH64)) {
|
|
/* 64 bit CPUs always start in 64 bit mode */
|
|
env->aarch64 = true;
|
|
#if defined(CONFIG_USER_ONLY)
|
|
env->pstate = PSTATE_MODE_EL0t;
|
|
/* Userspace expects access to DC ZVA, CTL_EL0 and the cache ops */
|
|
env->cp15.sctlr_el[1] |= SCTLR_UCT | SCTLR_UCI | SCTLR_DZE;
|
|
/* Enable all PAC keys. */
|
|
env->cp15.sctlr_el[1] |= (SCTLR_EnIA | SCTLR_EnIB |
|
|
SCTLR_EnDA | SCTLR_EnDB);
|
|
/* Trap on btype=3 for PACIxSP. */
|
|
env->cp15.sctlr_el[1] |= SCTLR_BT0;
|
|
/* and to the FP/Neon instructions */
|
|
env->cp15.cpacr_el1 = FIELD_DP64(env->cp15.cpacr_el1,
|
|
CPACR_EL1, FPEN, 3);
|
|
/* and to the SVE instructions, with default vector length */
|
|
if (cpu_isar_feature(aa64_sve, cpu)) {
|
|
env->cp15.cpacr_el1 = FIELD_DP64(env->cp15.cpacr_el1,
|
|
CPACR_EL1, ZEN, 3);
|
|
env->vfp.zcr_el[1] = cpu->sve_default_vq - 1;
|
|
}
|
|
/* and for SME instructions, with default vector length, and TPIDR2 */
|
|
if (cpu_isar_feature(aa64_sme, cpu)) {
|
|
env->cp15.sctlr_el[1] |= SCTLR_EnTP2;
|
|
env->cp15.cpacr_el1 = FIELD_DP64(env->cp15.cpacr_el1,
|
|
CPACR_EL1, SMEN, 3);
|
|
env->vfp.smcr_el[1] = cpu->sme_default_vq - 1;
|
|
if (cpu_isar_feature(aa64_sme_fa64, cpu)) {
|
|
env->vfp.smcr_el[1] = FIELD_DP64(env->vfp.smcr_el[1],
|
|
SMCR, FA64, 1);
|
|
}
|
|
}
|
|
/*
|
|
* Enable 48-bit address space (TODO: take reserved_va into account).
|
|
* Enable TBI0 but not TBI1.
|
|
* Note that this must match useronly_clean_ptr.
|
|
*/
|
|
env->cp15.tcr_el[1] = 5 | (1ULL << 37);
|
|
|
|
/* Enable MTE */
|
|
if (cpu_isar_feature(aa64_mte, cpu)) {
|
|
/* Enable tag access, but leave TCF0 as No Effect (0). */
|
|
env->cp15.sctlr_el[1] |= SCTLR_ATA0;
|
|
/*
|
|
* Exclude all tags, so that tag 0 is always used.
|
|
* This corresponds to Linux current->thread.gcr_incl = 0.
|
|
*
|
|
* Set RRND, so that helper_irg() will generate a seed later.
|
|
* Here in cpu_reset(), the crypto subsystem has not yet been
|
|
* initialized.
|
|
*/
|
|
env->cp15.gcr_el1 = 0x1ffff;
|
|
}
|
|
/*
|
|
* Disable access to SCXTNUM_EL0 from CSV2_1p2.
|
|
* This is not yet exposed from the Linux kernel in any way.
|
|
*/
|
|
env->cp15.sctlr_el[1] |= SCTLR_TSCXT;
|
|
#else
|
|
/* Reset into the highest available EL */
|
|
if (arm_feature(env, ARM_FEATURE_EL3)) {
|
|
env->pstate = PSTATE_MODE_EL3h;
|
|
} else if (arm_feature(env, ARM_FEATURE_EL2)) {
|
|
env->pstate = PSTATE_MODE_EL2h;
|
|
} else {
|
|
env->pstate = PSTATE_MODE_EL1h;
|
|
}
|
|
|
|
/* Sample rvbar at reset. */
|
|
env->cp15.rvbar = cpu->rvbar_prop;
|
|
env->pc = env->cp15.rvbar;
|
|
#endif
|
|
} else {
|
|
#if defined(CONFIG_USER_ONLY)
|
|
/* Userspace expects access to cp10 and cp11 for FP/Neon */
|
|
env->cp15.cpacr_el1 = FIELD_DP64(env->cp15.cpacr_el1,
|
|
CPACR, CP10, 3);
|
|
env->cp15.cpacr_el1 = FIELD_DP64(env->cp15.cpacr_el1,
|
|
CPACR, CP11, 3);
|
|
#endif
|
|
}
|
|
|
|
#if defined(CONFIG_USER_ONLY)
|
|
env->uncached_cpsr = ARM_CPU_MODE_USR;
|
|
/* For user mode we must enable access to coprocessors */
|
|
env->vfp.xregs[ARM_VFP_FPEXC] = 1 << 30;
|
|
if (arm_feature(env, ARM_FEATURE_IWMMXT)) {
|
|
env->cp15.c15_cpar = 3;
|
|
} else if (arm_feature(env, ARM_FEATURE_XSCALE)) {
|
|
env->cp15.c15_cpar = 1;
|
|
}
|
|
#else
|
|
|
|
/*
|
|
* If the highest available EL is EL2, AArch32 will start in Hyp
|
|
* mode; otherwise it starts in SVC. Note that if we start in
|
|
* AArch64 then these values in the uncached_cpsr will be ignored.
|
|
*/
|
|
if (arm_feature(env, ARM_FEATURE_EL2) &&
|
|
!arm_feature(env, ARM_FEATURE_EL3)) {
|
|
env->uncached_cpsr = ARM_CPU_MODE_HYP;
|
|
} else {
|
|
env->uncached_cpsr = ARM_CPU_MODE_SVC;
|
|
}
|
|
env->daif = PSTATE_D | PSTATE_A | PSTATE_I | PSTATE_F;
|
|
|
|
/* AArch32 has a hard highvec setting of 0xFFFF0000. If we are currently
|
|
* executing as AArch32 then check if highvecs are enabled and
|
|
* adjust the PC accordingly.
|
|
*/
|
|
if (A32_BANKED_CURRENT_REG_GET(env, sctlr) & SCTLR_V) {
|
|
env->regs[15] = 0xFFFF0000;
|
|
}
|
|
|
|
env->vfp.xregs[ARM_VFP_FPEXC] = 0;
|
|
#endif
|
|
|
|
if (arm_feature(env, ARM_FEATURE_M)) {
|
|
#ifndef CONFIG_USER_ONLY
|
|
uint32_t initial_msp; /* Loaded from 0x0 */
|
|
uint32_t initial_pc; /* Loaded from 0x4 */
|
|
uint8_t *rom;
|
|
uint32_t vecbase;
|
|
#endif
|
|
|
|
if (cpu_isar_feature(aa32_lob, cpu)) {
|
|
/*
|
|
* LTPSIZE is constant 4 if MVE not implemented, and resets
|
|
* to an UNKNOWN value if MVE is implemented. We choose to
|
|
* always reset to 4.
|
|
*/
|
|
env->v7m.ltpsize = 4;
|
|
/* The LTPSIZE field in FPDSCR is constant and reads as 4. */
|
|
env->v7m.fpdscr[M_REG_NS] = 4 << FPCR_LTPSIZE_SHIFT;
|
|
env->v7m.fpdscr[M_REG_S] = 4 << FPCR_LTPSIZE_SHIFT;
|
|
}
|
|
|
|
if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
|
|
env->v7m.secure = true;
|
|
} else {
|
|
/* This bit resets to 0 if security is supported, but 1 if
|
|
* it is not. The bit is not present in v7M, but we set it
|
|
* here so we can avoid having to make checks on it conditional
|
|
* on ARM_FEATURE_V8 (we don't let the guest see the bit).
|
|
*/
|
|
env->v7m.aircr = R_V7M_AIRCR_BFHFNMINS_MASK;
|
|
/*
|
|
* Set NSACR to indicate "NS access permitted to everything";
|
|
* this avoids having to have all the tests of it being
|
|
* conditional on ARM_FEATURE_M_SECURITY. Note also that from
|
|
* v8.1M the guest-visible value of NSACR in a CPU without the
|
|
* Security Extension is 0xcff.
|
|
*/
|
|
env->v7m.nsacr = 0xcff;
|
|
}
|
|
|
|
/* In v7M the reset value of this bit is IMPDEF, but ARM recommends
|
|
* that it resets to 1, so QEMU always does that rather than making
|
|
* it dependent on CPU model. In v8M it is RES1.
|
|
*/
|
|
env->v7m.ccr[M_REG_NS] = R_V7M_CCR_STKALIGN_MASK;
|
|
env->v7m.ccr[M_REG_S] = R_V7M_CCR_STKALIGN_MASK;
|
|
if (arm_feature(env, ARM_FEATURE_V8)) {
|
|
/* in v8M the NONBASETHRDENA bit [0] is RES1 */
|
|
env->v7m.ccr[M_REG_NS] |= R_V7M_CCR_NONBASETHRDENA_MASK;
|
|
env->v7m.ccr[M_REG_S] |= R_V7M_CCR_NONBASETHRDENA_MASK;
|
|
}
|
|
if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
|
|
env->v7m.ccr[M_REG_NS] |= R_V7M_CCR_UNALIGN_TRP_MASK;
|
|
env->v7m.ccr[M_REG_S] |= R_V7M_CCR_UNALIGN_TRP_MASK;
|
|
}
|
|
|
|
if (cpu_isar_feature(aa32_vfp_simd, cpu)) {
|
|
env->v7m.fpccr[M_REG_NS] = R_V7M_FPCCR_ASPEN_MASK;
|
|
env->v7m.fpccr[M_REG_S] = R_V7M_FPCCR_ASPEN_MASK |
|
|
R_V7M_FPCCR_LSPEN_MASK | R_V7M_FPCCR_S_MASK;
|
|
}
|
|
|
|
#ifndef CONFIG_USER_ONLY
|
|
/* Unlike A/R profile, M profile defines the reset LR value */
|
|
env->regs[14] = 0xffffffff;
|
|
|
|
env->v7m.vecbase[M_REG_S] = cpu->init_svtor & 0xffffff80;
|
|
env->v7m.vecbase[M_REG_NS] = cpu->init_nsvtor & 0xffffff80;
|
|
|
|
/* Load the initial SP and PC from offset 0 and 4 in the vector table */
|
|
vecbase = env->v7m.vecbase[env->v7m.secure];
|
|
rom = rom_ptr_for_as(s->as, vecbase, 8);
|
|
if (rom) {
|
|
/* Address zero is covered by ROM which hasn't yet been
|
|
* copied into physical memory.
|
|
*/
|
|
initial_msp = ldl_p(rom);
|
|
initial_pc = ldl_p(rom + 4);
|
|
} else {
|
|
/* Address zero not covered by a ROM blob, or the ROM blob
|
|
* is in non-modifiable memory and this is a second reset after
|
|
* it got copied into memory. In the latter case, rom_ptr
|
|
* will return a NULL pointer and we should use ldl_phys instead.
|
|
*/
|
|
initial_msp = ldl_phys(s->as, vecbase);
|
|
initial_pc = ldl_phys(s->as, vecbase + 4);
|
|
}
|
|
|
|
qemu_log_mask(CPU_LOG_INT,
|
|
"Loaded reset SP 0x%x PC 0x%x from vector table\n",
|
|
initial_msp, initial_pc);
|
|
|
|
env->regs[13] = initial_msp & 0xFFFFFFFC;
|
|
env->regs[15] = initial_pc & ~1;
|
|
env->thumb = initial_pc & 1;
|
|
#else
|
|
/*
|
|
* For user mode we run non-secure and with access to the FPU.
|
|
* The FPU context is active (ie does not need further setup)
|
|
* and is owned by non-secure.
|
|
*/
|
|
env->v7m.secure = false;
|
|
env->v7m.nsacr = 0xcff;
|
|
env->v7m.cpacr[M_REG_NS] = 0xf0ffff;
|
|
env->v7m.fpccr[M_REG_S] &=
|
|
~(R_V7M_FPCCR_LSPEN_MASK | R_V7M_FPCCR_S_MASK);
|
|
env->v7m.control[M_REG_S] |= R_V7M_CONTROL_FPCA_MASK;
|
|
#endif
|
|
}
|
|
|
|
/* M profile requires that reset clears the exclusive monitor;
|
|
* A profile does not, but clearing it makes more sense than having it
|
|
* set with an exclusive access on address zero.
|
|
*/
|
|
arm_clear_exclusive(env);
|
|
|
|
if (arm_feature(env, ARM_FEATURE_PMSA)) {
|
|
if (cpu->pmsav7_dregion > 0) {
|
|
if (arm_feature(env, ARM_FEATURE_V8)) {
|
|
memset(env->pmsav8.rbar[M_REG_NS], 0,
|
|
sizeof(*env->pmsav8.rbar[M_REG_NS])
|
|
* cpu->pmsav7_dregion);
|
|
memset(env->pmsav8.rlar[M_REG_NS], 0,
|
|
sizeof(*env->pmsav8.rlar[M_REG_NS])
|
|
* cpu->pmsav7_dregion);
|
|
if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
|
|
memset(env->pmsav8.rbar[M_REG_S], 0,
|
|
sizeof(*env->pmsav8.rbar[M_REG_S])
|
|
* cpu->pmsav7_dregion);
|
|
memset(env->pmsav8.rlar[M_REG_S], 0,
|
|
sizeof(*env->pmsav8.rlar[M_REG_S])
|
|
* cpu->pmsav7_dregion);
|
|
}
|
|
} else if (arm_feature(env, ARM_FEATURE_V7)) {
|
|
memset(env->pmsav7.drbar, 0,
|
|
sizeof(*env->pmsav7.drbar) * cpu->pmsav7_dregion);
|
|
memset(env->pmsav7.drsr, 0,
|
|
sizeof(*env->pmsav7.drsr) * cpu->pmsav7_dregion);
|
|
memset(env->pmsav7.dracr, 0,
|
|
sizeof(*env->pmsav7.dracr) * cpu->pmsav7_dregion);
|
|
}
|
|
}
|
|
env->pmsav7.rnr[M_REG_NS] = 0;
|
|
env->pmsav7.rnr[M_REG_S] = 0;
|
|
env->pmsav8.mair0[M_REG_NS] = 0;
|
|
env->pmsav8.mair0[M_REG_S] = 0;
|
|
env->pmsav8.mair1[M_REG_NS] = 0;
|
|
env->pmsav8.mair1[M_REG_S] = 0;
|
|
}
|
|
|
|
if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
|
|
if (cpu->sau_sregion > 0) {
|
|
memset(env->sau.rbar, 0, sizeof(*env->sau.rbar) * cpu->sau_sregion);
|
|
memset(env->sau.rlar, 0, sizeof(*env->sau.rlar) * cpu->sau_sregion);
|
|
}
|
|
env->sau.rnr = 0;
|
|
/* SAU_CTRL reset value is IMPDEF; we choose 0, which is what
|
|
* the Cortex-M33 does.
|
|
*/
|
|
env->sau.ctrl = 0;
|
|
}
|
|
|
|
set_flush_to_zero(1, &env->vfp.standard_fp_status);
|
|
set_flush_inputs_to_zero(1, &env->vfp.standard_fp_status);
|
|
set_default_nan_mode(1, &env->vfp.standard_fp_status);
|
|
set_default_nan_mode(1, &env->vfp.standard_fp_status_f16);
|
|
set_float_detect_tininess(float_tininess_before_rounding,
|
|
&env->vfp.fp_status);
|
|
set_float_detect_tininess(float_tininess_before_rounding,
|
|
&env->vfp.standard_fp_status);
|
|
set_float_detect_tininess(float_tininess_before_rounding,
|
|
&env->vfp.fp_status_f16);
|
|
set_float_detect_tininess(float_tininess_before_rounding,
|
|
&env->vfp.standard_fp_status_f16);
|
|
#ifndef CONFIG_USER_ONLY
|
|
if (kvm_enabled()) {
|
|
kvm_arm_reset_vcpu(cpu);
|
|
}
|
|
#endif
|
|
|
|
hw_breakpoint_update_all(cpu);
|
|
hw_watchpoint_update_all(cpu);
|
|
arm_rebuild_hflags(env);
|
|
}

#ifndef CONFIG_USER_ONLY

static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
                                     unsigned int target_el,
                                     unsigned int cur_el, bool secure,
                                     uint64_t hcr_el2)
{
    CPUARMState *env = cs->env_ptr;
    bool pstate_unmasked;
    bool unmasked = false;

    /*
     * Don't take exceptions if they target a lower EL.
     * This check should catch any exceptions that would not be taken
     * but left pending.
     */
    if (cur_el > target_el) {
        return false;
    }

    switch (excp_idx) {
    case EXCP_FIQ:
        pstate_unmasked = !(env->daif & PSTATE_F);
        break;

    case EXCP_IRQ:
        pstate_unmasked = !(env->daif & PSTATE_I);
        break;

    case EXCP_VFIQ:
        if (!(hcr_el2 & HCR_FMO) || (hcr_el2 & HCR_TGE)) {
            /* VFIQs are only taken when hypervised. */
            return false;
        }
        return !(env->daif & PSTATE_F);
    case EXCP_VIRQ:
        if (!(hcr_el2 & HCR_IMO) || (hcr_el2 & HCR_TGE)) {
            /* VIRQs are only taken when hypervised. */
            return false;
        }
        return !(env->daif & PSTATE_I);
    case EXCP_VSERR:
        if (!(hcr_el2 & HCR_AMO) || (hcr_el2 & HCR_TGE)) {
            /* Virtual SErrors are only taken when hypervised. */
            return false;
        }
        return !(env->daif & PSTATE_A);
    default:
        g_assert_not_reached();
    }

    /*
     * Use the target EL, current execution state and SCR/HCR settings to
     * determine whether the corresponding CPSR bit is used to mask the
     * interrupt.
     */
    if ((target_el > cur_el) && (target_el != 1)) {
        /* Exceptions targeting a higher EL may not be maskable */
        if (arm_feature(env, ARM_FEATURE_AARCH64)) {
            switch (target_el) {
            case 2:
                /*
                 * According to ARM DDI 0487H.a, an interrupt can be masked
                 * when HCR_E2H and HCR_TGE are both set regardless of the
                 * current Security state. Note that we need to revisit this
                 * part again once we need to support NMI.
                 */
                if ((hcr_el2 & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
                    unmasked = true;
                }
                break;
            case 3:
                /* Interrupt cannot be masked when the target EL is 3 */
                unmasked = true;
                break;
            default:
                g_assert_not_reached();
            }
        } else {
            /*
             * The old 32-bit-only environment has a more complicated
             * masking setup. HCR and SCR bits not only affect interrupt
             * routing but also change the behaviour of masking.
             */
            bool hcr, scr;

            switch (excp_idx) {
            case EXCP_FIQ:
                /*
                 * If FIQs are routed to EL3 or EL2 then there are cases where
                 * we override the CPSR.F in determining if the exception is
                 * masked or not. If neither of these are set then we fall back
                 * to the CPSR.F setting otherwise we further assess the state
                 * below.
                 */
                hcr = hcr_el2 & HCR_FMO;
                scr = (env->cp15.scr_el3 & SCR_FIQ);

                /*
                 * When EL3 is 32-bit, the SCR.FW bit controls whether the
                 * CPSR.F bit masks FIQ interrupts when taken in non-secure
                 * state. If SCR.FW is set then FIQs can be masked by CPSR.F
                 * when non-secure but only when FIQs are only routed to EL3.
                 */
                scr = scr && !((env->cp15.scr_el3 & SCR_FW) && !hcr);
                break;
            case EXCP_IRQ:
                /*
                 * When EL3 execution state is 32-bit, if HCR.IMO is set then
                 * we may override the CPSR.I masking when in non-secure state.
                 * The SCR.IRQ setting has already been taken into consideration
                 * when setting the target EL, so it does not have a further
                 * effect here.
                 */
                hcr = hcr_el2 & HCR_IMO;
                scr = false;
                break;
            default:
                g_assert_not_reached();
            }

            if ((scr || hcr) && !secure) {
                unmasked = true;
            }
        }
    }

    /*
     * The PSTATE bits only mask the interrupt if we have not overridden the
     * ability above.
     */
    return unmasked || pstate_unmasked;
}

static bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
{
    CPUClass *cc = CPU_GET_CLASS(cs);
    CPUARMState *env = cs->env_ptr;
    uint32_t cur_el = arm_current_el(env);
    bool secure = arm_is_secure(env);
    uint64_t hcr_el2 = arm_hcr_el2_eff(env);
    uint32_t target_el;
    uint32_t excp_idx;

    /* The prioritization of interrupts is IMPLEMENTATION DEFINED. */

    if (interrupt_request & CPU_INTERRUPT_FIQ) {
        excp_idx = EXCP_FIQ;
        target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
        if (arm_excp_unmasked(cs, excp_idx, target_el,
                              cur_el, secure, hcr_el2)) {
            goto found;
        }
    }
    if (interrupt_request & CPU_INTERRUPT_HARD) {
        excp_idx = EXCP_IRQ;
        target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
        if (arm_excp_unmasked(cs, excp_idx, target_el,
                              cur_el, secure, hcr_el2)) {
            goto found;
        }
    }
    if (interrupt_request & CPU_INTERRUPT_VIRQ) {
        excp_idx = EXCP_VIRQ;
        target_el = 1;
        if (arm_excp_unmasked(cs, excp_idx, target_el,
                              cur_el, secure, hcr_el2)) {
            goto found;
        }
    }
    if (interrupt_request & CPU_INTERRUPT_VFIQ) {
        excp_idx = EXCP_VFIQ;
        target_el = 1;
        if (arm_excp_unmasked(cs, excp_idx, target_el,
                              cur_el, secure, hcr_el2)) {
            goto found;
        }
    }
    if (interrupt_request & CPU_INTERRUPT_VSERR) {
        excp_idx = EXCP_VSERR;
        target_el = 1;
        if (arm_excp_unmasked(cs, excp_idx, target_el,
                              cur_el, secure, hcr_el2)) {
            /* Taking a virtual abort clears HCR_EL2.VSE */
            env->cp15.hcr_el2 &= ~HCR_VSE;
            cpu_reset_interrupt(cs, CPU_INTERRUPT_VSERR);
            goto found;
        }
    }
    return false;

 found:
    cs->exception_index = excp_idx;
    env->exception.target_el = target_el;
    cc->tcg_ops->do_interrupt(cs);
    return true;
}
#endif /* !CONFIG_USER_ONLY */

void arm_cpu_update_virq(ARMCPU *cpu)
{
    /*
     * Update the interrupt level for VIRQ, which is the logical OR of
     * the HCR_EL2.VI bit and the input line level from the GIC.
     */
    CPUARMState *env = &cpu->env;
    CPUState *cs = CPU(cpu);

    bool new_state = (env->cp15.hcr_el2 & HCR_VI) ||
        (env->irq_line_state & CPU_INTERRUPT_VIRQ);

    if (new_state != ((cs->interrupt_request & CPU_INTERRUPT_VIRQ) != 0)) {
        if (new_state) {
            cpu_interrupt(cs, CPU_INTERRUPT_VIRQ);
        } else {
            cpu_reset_interrupt(cs, CPU_INTERRUPT_VIRQ);
        }
    }
}

void arm_cpu_update_vfiq(ARMCPU *cpu)
{
    /*
     * Update the interrupt level for VFIQ, which is the logical OR of
     * the HCR_EL2.VF bit and the input line level from the GIC.
     */
    CPUARMState *env = &cpu->env;
    CPUState *cs = CPU(cpu);

    bool new_state = (env->cp15.hcr_el2 & HCR_VF) ||
        (env->irq_line_state & CPU_INTERRUPT_VFIQ);

    if (new_state != ((cs->interrupt_request & CPU_INTERRUPT_VFIQ) != 0)) {
        if (new_state) {
            cpu_interrupt(cs, CPU_INTERRUPT_VFIQ);
        } else {
            cpu_reset_interrupt(cs, CPU_INTERRUPT_VFIQ);
        }
    }
}

void arm_cpu_update_vserr(ARMCPU *cpu)
{
    /*
     * Update the interrupt level for VSERR, which is the HCR_EL2.VSE bit.
     */
    CPUARMState *env = &cpu->env;
    CPUState *cs = CPU(cpu);

    bool new_state = env->cp15.hcr_el2 & HCR_VSE;

    if (new_state != ((cs->interrupt_request & CPU_INTERRUPT_VSERR) != 0)) {
        if (new_state) {
            cpu_interrupt(cs, CPU_INTERRUPT_VSERR);
        } else {
            cpu_reset_interrupt(cs, CPU_INTERRUPT_VSERR);
        }
    }
}

#ifndef CONFIG_USER_ONLY
static void arm_cpu_set_irq(void *opaque, int irq, int level)
{
    ARMCPU *cpu = opaque;
    CPUARMState *env = &cpu->env;
    CPUState *cs = CPU(cpu);
    static const int mask[] = {
        [ARM_CPU_IRQ] = CPU_INTERRUPT_HARD,
        [ARM_CPU_FIQ] = CPU_INTERRUPT_FIQ,
        [ARM_CPU_VIRQ] = CPU_INTERRUPT_VIRQ,
        [ARM_CPU_VFIQ] = CPU_INTERRUPT_VFIQ
    };

    if (!arm_feature(env, ARM_FEATURE_EL2) &&
        (irq == ARM_CPU_VIRQ || irq == ARM_CPU_VFIQ)) {
        /*
         * The GIC might tell us about VIRQ and VFIQ state, but if we don't
         * have EL2 support we don't care. (Unless the guest is doing something
         * silly this will only be calls saying "level is still 0".)
         */
        return;
    }

    if (level) {
        env->irq_line_state |= mask[irq];
    } else {
        env->irq_line_state &= ~mask[irq];
    }

    switch (irq) {
    case ARM_CPU_VIRQ:
        arm_cpu_update_virq(cpu);
        break;
    case ARM_CPU_VFIQ:
        arm_cpu_update_vfiq(cpu);
        break;
    case ARM_CPU_IRQ:
    case ARM_CPU_FIQ:
        if (level) {
            cpu_interrupt(cs, mask[irq]);
        } else {
            cpu_reset_interrupt(cs, mask[irq]);
        }
        break;
    default:
        g_assert_not_reached();
    }
}

static void arm_cpu_kvm_set_irq(void *opaque, int irq, int level)
{
#ifdef CONFIG_KVM
    ARMCPU *cpu = opaque;
    CPUARMState *env = &cpu->env;
    CPUState *cs = CPU(cpu);
    uint32_t linestate_bit;
    int irq_id;

    switch (irq) {
    case ARM_CPU_IRQ:
        irq_id = KVM_ARM_IRQ_CPU_IRQ;
        linestate_bit = CPU_INTERRUPT_HARD;
        break;
    case ARM_CPU_FIQ:
        irq_id = KVM_ARM_IRQ_CPU_FIQ;
        linestate_bit = CPU_INTERRUPT_FIQ;
        break;
    default:
        g_assert_not_reached();
    }

    if (level) {
        env->irq_line_state |= linestate_bit;
    } else {
        env->irq_line_state &= ~linestate_bit;
    }
    kvm_arm_set_irq(cs->cpu_index, KVM_ARM_IRQ_TYPE_CPU, irq_id, !!level);
#endif
}

static bool arm_cpu_virtio_is_big_endian(CPUState *cs)
{
    ARMCPU *cpu = ARM_CPU(cs);
    CPUARMState *env = &cpu->env;

    cpu_synchronize_state(cs);
    return arm_cpu_data_is_big_endian(env);
}

#endif

static void arm_disas_set_info(CPUState *cpu, disassemble_info *info)
{
    ARMCPU *ac = ARM_CPU(cpu);
    CPUARMState *env = &ac->env;
    bool sctlr_b;

    if (is_a64(env)) {
        info->cap_arch = CS_ARCH_ARM64;
        info->cap_insn_unit = 4;
        info->cap_insn_split = 4;
    } else {
        int cap_mode;
        if (env->thumb) {
            info->cap_insn_unit = 2;
            info->cap_insn_split = 4;
            cap_mode = CS_MODE_THUMB;
        } else {
            info->cap_insn_unit = 4;
            info->cap_insn_split = 4;
            cap_mode = CS_MODE_ARM;
        }
        if (arm_feature(env, ARM_FEATURE_V8)) {
            cap_mode |= CS_MODE_V8;
        }
        if (arm_feature(env, ARM_FEATURE_M)) {
            cap_mode |= CS_MODE_MCLASS;
        }
        info->cap_arch = CS_ARCH_ARM;
        info->cap_mode = cap_mode;
    }

    sctlr_b = arm_sctlr_b(env);
    if (bswap_code(sctlr_b)) {
#if TARGET_BIG_ENDIAN
        info->endian = BFD_ENDIAN_LITTLE;
#else
        info->endian = BFD_ENDIAN_BIG;
#endif
    }
    info->flags &= ~INSN_ARM_BE32;
#ifndef CONFIG_USER_ONLY
    if (sctlr_b) {
        info->flags |= INSN_ARM_BE32;
    }
#endif
}

#ifdef TARGET_AARCH64

static void aarch64_cpu_dump_state(CPUState *cs, FILE *f, int flags)
{
    ARMCPU *cpu = ARM_CPU(cs);
    CPUARMState *env = &cpu->env;
    uint32_t psr = pstate_read(env);
    int i;
    int el = arm_current_el(env);
    const char *ns_status;
    bool sve;

    qemu_fprintf(f, " PC=%016" PRIx64 " ", env->pc);
    for (i = 0; i < 32; i++) {
        if (i == 31) {
            qemu_fprintf(f, " SP=%016" PRIx64 "\n", env->xregs[i]);
        } else {
            qemu_fprintf(f, "X%02d=%016" PRIx64 "%s", i, env->xregs[i],
                         (i + 2) % 3 ? " " : "\n");
        }
    }

    if (arm_feature(env, ARM_FEATURE_EL3) && el != 3) {
        ns_status = env->cp15.scr_el3 & SCR_NS ? "NS " : "S ";
    } else {
        ns_status = "";
    }
    qemu_fprintf(f, "PSTATE=%08x %c%c%c%c %sEL%d%c",
                 psr,
                 psr & PSTATE_N ? 'N' : '-',
                 psr & PSTATE_Z ? 'Z' : '-',
                 psr & PSTATE_C ? 'C' : '-',
                 psr & PSTATE_V ? 'V' : '-',
                 ns_status,
                 el,
                 psr & PSTATE_SP ? 'h' : 't');

    if (cpu_isar_feature(aa64_sme, cpu)) {
        qemu_fprintf(f, " SVCR=%08" PRIx64 " %c%c",
                     env->svcr,
                     (FIELD_EX64(env->svcr, SVCR, ZA) ? 'Z' : '-'),
                     (FIELD_EX64(env->svcr, SVCR, SM) ? 'S' : '-'));
    }
    if (cpu_isar_feature(aa64_bti, cpu)) {
        qemu_fprintf(f, " BTYPE=%d", (psr & PSTATE_BTYPE) >> 10);
    }
    if (!(flags & CPU_DUMP_FPU)) {
        qemu_fprintf(f, "\n");
        return;
    }
    if (fp_exception_el(env, el) != 0) {
        qemu_fprintf(f, " FPU disabled\n");
        return;
    }
    qemu_fprintf(f, " FPCR=%08x FPSR=%08x\n",
                 vfp_get_fpcr(env), vfp_get_fpsr(env));

    if (cpu_isar_feature(aa64_sme, cpu) && FIELD_EX64(env->svcr, SVCR, SM)) {
        sve = sme_exception_el(env, el) == 0;
    } else if (cpu_isar_feature(aa64_sve, cpu)) {
        sve = sve_exception_el(env, el) == 0;
    } else {
        sve = false;
    }

    if (sve) {
        int j, zcr_len = sve_vqm1_for_el(env, el);

        for (i = 0; i <= FFR_PRED_NUM; i++) {
            bool eol;
            if (i == FFR_PRED_NUM) {
                qemu_fprintf(f, "FFR=");
                /* It's last, so end the line. */
                eol = true;
            } else {
                qemu_fprintf(f, "P%02d=", i);
                switch (zcr_len) {
                case 0:
                    eol = i % 8 == 7;
                    break;
                case 1:
                    eol = i % 6 == 5;
                    break;
                case 2:
                case 3:
                    eol = i % 3 == 2;
                    break;
                default:
                    /* More than one quadword per predicate. */
                    eol = true;
                    break;
                }
            }
            for (j = zcr_len / 4; j >= 0; j--) {
                int digits;
                if (j * 4 + 4 <= zcr_len + 1) {
                    digits = 16;
                } else {
                    digits = (zcr_len % 4 + 1) * 4;
                }
                qemu_fprintf(f, "%0*" PRIx64 "%s", digits,
                             env->vfp.pregs[i].p[j],
                             j ? ":" : eol ? "\n" : " ");
            }
        }

        for (i = 0; i < 32; i++) {
            if (zcr_len == 0) {
                qemu_fprintf(f, "Z%02d=%016" PRIx64 ":%016" PRIx64 "%s",
                             i, env->vfp.zregs[i].d[1],
                             env->vfp.zregs[i].d[0], i & 1 ? "\n" : " ");
            } else if (zcr_len == 1) {
                qemu_fprintf(f, "Z%02d=%016" PRIx64 ":%016" PRIx64
                             ":%016" PRIx64 ":%016" PRIx64 "\n",
                             i, env->vfp.zregs[i].d[3], env->vfp.zregs[i].d[2],
                             env->vfp.zregs[i].d[1], env->vfp.zregs[i].d[0]);
            } else {
                for (j = zcr_len; j >= 0; j--) {
                    bool odd = (zcr_len - j) % 2 != 0;
                    if (j == zcr_len) {
                        qemu_fprintf(f, "Z%02d[%x-%x]=", i, j, j - 1);
                    } else if (!odd) {
                        if (j > 0) {
                            qemu_fprintf(f, " [%x-%x]=", j, j - 1);
                        } else {
                            qemu_fprintf(f, " [%x]=", j);
                        }
                    }
                    qemu_fprintf(f, "%016" PRIx64 ":%016" PRIx64 "%s",
                                 env->vfp.zregs[i].d[j * 2 + 1],
                                 env->vfp.zregs[i].d[j * 2],
                                 odd || j == 0 ? "\n" : ":");
                }
            }
        }
    } else {
        for (i = 0; i < 32; i++) {
            uint64_t *q = aa64_vfp_qreg(env, i);
            qemu_fprintf(f, "Q%02d=%016" PRIx64 ":%016" PRIx64 "%s",
                         i, q[1], q[0], (i & 1 ? "\n" : " "));
        }
    }
}

#else

static inline void aarch64_cpu_dump_state(CPUState *cs, FILE *f, int flags)
{
    g_assert_not_reached();
}

#endif

static void arm_cpu_dump_state(CPUState *cs, FILE *f, int flags)
{
    ARMCPU *cpu = ARM_CPU(cs);
    CPUARMState *env = &cpu->env;
    int i;

    if (is_a64(env)) {
        aarch64_cpu_dump_state(cs, f, flags);
        return;
    }

    for (i = 0; i < 16; i++) {
        qemu_fprintf(f, "R%02d=%08x", i, env->regs[i]);
        if ((i % 4) == 3) {
            qemu_fprintf(f, "\n");
        } else {
            qemu_fprintf(f, " ");
        }
    }

    if (arm_feature(env, ARM_FEATURE_M)) {
        uint32_t xpsr = xpsr_read(env);
        const char *mode;
        const char *ns_status = "";

        if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
            ns_status = env->v7m.secure ? "S " : "NS ";
        }

        if (xpsr & XPSR_EXCP) {
            mode = "handler";
        } else {
            if (env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_NPRIV_MASK) {
                mode = "unpriv-thread";
            } else {
                mode = "priv-thread";
            }
        }

        qemu_fprintf(f, "XPSR=%08x %c%c%c%c %c %s%s\n",
                     xpsr,
                     xpsr & XPSR_N ? 'N' : '-',
                     xpsr & XPSR_Z ? 'Z' : '-',
                     xpsr & XPSR_C ? 'C' : '-',
                     xpsr & XPSR_V ? 'V' : '-',
                     xpsr & XPSR_T ? 'T' : 'A',
                     ns_status,
                     mode);
    } else {
        uint32_t psr = cpsr_read(env);
        const char *ns_status = "";

        if (arm_feature(env, ARM_FEATURE_EL3) &&
            (psr & CPSR_M) != ARM_CPU_MODE_MON) {
            ns_status = env->cp15.scr_el3 & SCR_NS ? "NS " : "S ";
        }

        qemu_fprintf(f, "PSR=%08x %c%c%c%c %c %s%s%d\n",
                     psr,
                     psr & CPSR_N ? 'N' : '-',
                     psr & CPSR_Z ? 'Z' : '-',
                     psr & CPSR_C ? 'C' : '-',
                     psr & CPSR_V ? 'V' : '-',
                     psr & CPSR_T ? 'T' : 'A',
                     ns_status,
                     aarch32_mode_name(psr), (psr & 0x10) ? 32 : 26);
    }

    if (flags & CPU_DUMP_FPU) {
        int numvfpregs = 0;
        if (cpu_isar_feature(aa32_simd_r32, cpu)) {
            numvfpregs = 32;
        } else if (cpu_isar_feature(aa32_vfp_simd, cpu)) {
            numvfpregs = 16;
        }
        for (i = 0; i < numvfpregs; i++) {
            uint64_t v = *aa32_vfp_dreg(env, i);
            qemu_fprintf(f, "s%02d=%08x s%02d=%08x d%02d=%016" PRIx64 "\n",
                         i * 2, (uint32_t)v,
                         i * 2 + 1, (uint32_t)(v >> 32),
                         i, v);
        }
        qemu_fprintf(f, "FPSCR: %08x\n", vfp_get_fpscr(env));
        if (cpu_isar_feature(aa32_mve, cpu)) {
            qemu_fprintf(f, "VPR: %08x\n", env->v7m.vpr);
        }
    }
}

uint64_t arm_cpu_mp_affinity(int idx, uint8_t clustersz)
{
    uint32_t Aff1 = idx / clustersz;
    uint32_t Aff0 = idx % clustersz;
    return (Aff1 << ARM_AFF1_SHIFT) | Aff0;
}

static void arm_cpu_initfn(Object *obj)
{
    ARMCPU *cpu = ARM_CPU(obj);

    cpu_set_cpustate_pointers(cpu);
    cpu->cp_regs = g_hash_table_new_full(g_direct_hash, g_direct_equal,
                                         NULL, g_free);

    QLIST_INIT(&cpu->pre_el_change_hooks);
    QLIST_INIT(&cpu->el_change_hooks);

#ifdef CONFIG_USER_ONLY
# ifdef TARGET_AARCH64
    /*
     * The Linux kernel defaults to 512-bit for SVE, and 256-bit for SME.
     * These values were chosen to fit within the default signal frame.
     * See documentation for /proc/sys/abi/{sve,sme}_default_vector_length,
     * and our corresponding cpu property.
     */
    cpu->sve_default_vq = 4;
    cpu->sme_default_vq = 2;
# endif
#else
    /* Our inbound IRQ and FIQ lines */
    if (kvm_enabled()) {
        /* VIRQ and VFIQ are unused with KVM but we add them to maintain
         * the same interface as non-KVM CPUs.
         */
        qdev_init_gpio_in(DEVICE(cpu), arm_cpu_kvm_set_irq, 4);
    } else {
        qdev_init_gpio_in(DEVICE(cpu), arm_cpu_set_irq, 4);
    }

    qdev_init_gpio_out(DEVICE(cpu), cpu->gt_timer_outputs,
                       ARRAY_SIZE(cpu->gt_timer_outputs));

    qdev_init_gpio_out_named(DEVICE(cpu), &cpu->gicv3_maintenance_interrupt,
                             "gicv3-maintenance-interrupt", 1);
    qdev_init_gpio_out_named(DEVICE(cpu), &cpu->pmu_interrupt,
                             "pmu-interrupt", 1);
#endif

    /* DTB consumers generally don't in fact care what the 'compatible'
     * string is, so always provide some string and trust that a hypothetical
     * picky DTB consumer will also provide a helpful error message.
     */
    cpu->dtb_compatible = "qemu,unknown";
    cpu->psci_version = QEMU_PSCI_VERSION_0_1; /* By default assume PSCI v0.1 */
    cpu->kvm_target = QEMU_KVM_ARM_TARGET_NONE;

    if (tcg_enabled() || hvf_enabled()) {
        /* TCG and HVF implement PSCI 1.1 */
        cpu->psci_version = QEMU_PSCI_VERSION_1_1;
    }
}

static Property arm_cpu_gt_cntfrq_property =
    DEFINE_PROP_UINT64("cntfrq", ARMCPU, gt_cntfrq_hz,
                       NANOSECONDS_PER_SECOND / GTIMER_SCALE);

static Property arm_cpu_reset_cbar_property =
    DEFINE_PROP_UINT64("reset-cbar", ARMCPU, reset_cbar, 0);

static Property arm_cpu_reset_hivecs_property =
    DEFINE_PROP_BOOL("reset-hivecs", ARMCPU, reset_hivecs, false);

#ifndef CONFIG_USER_ONLY
static Property arm_cpu_has_el2_property =
    DEFINE_PROP_BOOL("has_el2", ARMCPU, has_el2, true);

static Property arm_cpu_has_el3_property =
    DEFINE_PROP_BOOL("has_el3", ARMCPU, has_el3, true);
#endif

static Property arm_cpu_cfgend_property =
    DEFINE_PROP_BOOL("cfgend", ARMCPU, cfgend, false);

static Property arm_cpu_has_vfp_property =
    DEFINE_PROP_BOOL("vfp", ARMCPU, has_vfp, true);

static Property arm_cpu_has_neon_property =
    DEFINE_PROP_BOOL("neon", ARMCPU, has_neon, true);

static Property arm_cpu_has_dsp_property =
    DEFINE_PROP_BOOL("dsp", ARMCPU, has_dsp, true);

static Property arm_cpu_has_mpu_property =
    DEFINE_PROP_BOOL("has-mpu", ARMCPU, has_mpu, true);

/* This is like DEFINE_PROP_UINT32 but it doesn't set the default value,
 * because the CPU initfn will have already set cpu->pmsav7_dregion to
 * the right value for that particular CPU type, and we don't want
 * to override that with an incorrect constant value.
 */
static Property arm_cpu_pmsav7_dregion_property =
    DEFINE_PROP_UNSIGNED_NODEFAULT("pmsav7-dregion", ARMCPU,
                                   pmsav7_dregion,
                                   qdev_prop_uint32, uint32_t);

static bool arm_get_pmu(Object *obj, Error **errp)
{
    ARMCPU *cpu = ARM_CPU(obj);

    return cpu->has_pmu;
}

static void arm_set_pmu(Object *obj, bool value, Error **errp)
{
    ARMCPU *cpu = ARM_CPU(obj);

    if (value) {
        if (kvm_enabled() && !kvm_arm_pmu_supported()) {
            error_setg(errp, "'pmu' feature not supported by KVM on this host");
            return;
        }
        set_feature(&cpu->env, ARM_FEATURE_PMU);
    } else {
        unset_feature(&cpu->env, ARM_FEATURE_PMU);
    }
    cpu->has_pmu = value;
}

unsigned int gt_cntfrq_period_ns(ARMCPU *cpu)
{
    /*
     * The exact approach to calculating guest ticks is:
     *
     *     muldiv64(qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL), cpu->gt_cntfrq_hz,
     *              NANOSECONDS_PER_SECOND);
     *
     * We don't do that. Rather we intentionally use integer division
     * truncation below and in the caller for the conversion of host monotonic
     * time to guest ticks to provide the exact inverse for the semantics of
     * the QEMUTimer scale factor. QEMUTimer's scale factor is an integer, so
     * it loses precision when representing frequencies where
     * `(NANOSECONDS_PER_SECOND % cpu->gt_cntfrq) > 0` holds. Failing to
     * provide an exact inverse leads to scheduling timers with negative
     * periods, which in turn leads to sticky behaviour in the guest.
     *
     * Finally, CNTFRQ is effectively capped at 1GHz to ensure our scale factor
     * cannot become zero.
     */
    return NANOSECONDS_PER_SECOND > cpu->gt_cntfrq_hz ?
        NANOSECONDS_PER_SECOND / cpu->gt_cntfrq_hz : 1;
}

void arm_cpu_post_init(Object *obj)
{
    ARMCPU *cpu = ARM_CPU(obj);

    /* M profile implies PMSA. We have to do this here rather than
     * in realize with the other feature-implication checks because
     * we look at the PMSA bit to see if we should add some properties.
     */
    if (arm_feature(&cpu->env, ARM_FEATURE_M)) {
        set_feature(&cpu->env, ARM_FEATURE_PMSA);
    }

    if (arm_feature(&cpu->env, ARM_FEATURE_CBAR) ||
        arm_feature(&cpu->env, ARM_FEATURE_CBAR_RO)) {
        qdev_property_add_static(DEVICE(obj), &arm_cpu_reset_cbar_property);
    }

    if (!arm_feature(&cpu->env, ARM_FEATURE_M)) {
        qdev_property_add_static(DEVICE(obj), &arm_cpu_reset_hivecs_property);
    }

    if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
        object_property_add_uint64_ptr(obj, "rvbar",
                                       &cpu->rvbar_prop,
                                       OBJ_PROP_FLAG_READWRITE);
    }

#ifndef CONFIG_USER_ONLY
    if (arm_feature(&cpu->env, ARM_FEATURE_EL3)) {
        /* Add the has_el3 state CPU property only if EL3 is allowed. This will
         * prevent "has_el3" from existing on CPUs which cannot support EL3.
         */
        qdev_property_add_static(DEVICE(obj), &arm_cpu_has_el3_property);

        object_property_add_link(obj, "secure-memory",
                                 TYPE_MEMORY_REGION,
                                 (Object **)&cpu->secure_memory,
                                 qdev_prop_allow_set_link_before_realize,
                                 OBJ_PROP_LINK_STRONG);
    }

    if (arm_feature(&cpu->env, ARM_FEATURE_EL2)) {
        qdev_property_add_static(DEVICE(obj), &arm_cpu_has_el2_property);
    }
#endif

    if (arm_feature(&cpu->env, ARM_FEATURE_PMU)) {
        cpu->has_pmu = true;
        object_property_add_bool(obj, "pmu", arm_get_pmu, arm_set_pmu);
    }

    /*
     * Allow user to turn off VFP and Neon support, but only for TCG --
     * KVM does not currently allow us to lie to the guest about its
     * ID/feature registers, so the guest always sees what the host has.
     */
    if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)
        ? cpu_isar_feature(aa64_fp_simd, cpu)
        : cpu_isar_feature(aa32_vfp, cpu)) {
        cpu->has_vfp = true;
        if (!kvm_enabled()) {
            qdev_property_add_static(DEVICE(obj), &arm_cpu_has_vfp_property);
        }
    }

    if (arm_feature(&cpu->env, ARM_FEATURE_NEON)) {
        cpu->has_neon = true;
        if (!kvm_enabled()) {
            qdev_property_add_static(DEVICE(obj), &arm_cpu_has_neon_property);
        }
    }

    if (arm_feature(&cpu->env, ARM_FEATURE_M) &&
        arm_feature(&cpu->env, ARM_FEATURE_THUMB_DSP)) {
        qdev_property_add_static(DEVICE(obj), &arm_cpu_has_dsp_property);
    }

    if (arm_feature(&cpu->env, ARM_FEATURE_PMSA)) {
        qdev_property_add_static(DEVICE(obj), &arm_cpu_has_mpu_property);
        if (arm_feature(&cpu->env, ARM_FEATURE_V7)) {
            qdev_property_add_static(DEVICE(obj),
                                     &arm_cpu_pmsav7_dregion_property);
        }
    }

    if (arm_feature(&cpu->env, ARM_FEATURE_M_SECURITY)) {
        object_property_add_link(obj, "idau", TYPE_IDAU_INTERFACE, &cpu->idau,
                                 qdev_prop_allow_set_link_before_realize,
                                 OBJ_PROP_LINK_STRONG);
        /*
         * M profile: initial value of the Secure VTOR. We can't just use
         * a simple DEFINE_PROP_UINT32 for this because we want to permit
         * the property to be set after realize.
         */
        object_property_add_uint32_ptr(obj, "init-svtor",
                                       &cpu->init_svtor,
                                       OBJ_PROP_FLAG_READWRITE);
    }
    if (arm_feature(&cpu->env, ARM_FEATURE_M)) {
        /*
         * Initial value of the NS VTOR (for cores without the Security
         * extension, this is the only VTOR)
         */
        object_property_add_uint32_ptr(obj, "init-nsvtor",
                                       &cpu->init_nsvtor,
                                       OBJ_PROP_FLAG_READWRITE);
    }

    /* Not DEFINE_PROP_UINT32: we want this to be settable after realize */
    object_property_add_uint32_ptr(obj, "psci-conduit",
                                   &cpu->psci_conduit,
                                   OBJ_PROP_FLAG_READWRITE);

    qdev_property_add_static(DEVICE(obj), &arm_cpu_cfgend_property);

    if (arm_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER)) {
        qdev_property_add_static(DEVICE(cpu), &arm_cpu_gt_cntfrq_property);
    }

    if (kvm_enabled()) {
        kvm_arm_add_vcpu_properties(obj);
    }

#ifndef CONFIG_USER_ONLY
    if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64) &&
        cpu_isar_feature(aa64_mte, cpu)) {
        object_property_add_link(obj, "tag-memory",
                                 TYPE_MEMORY_REGION,
                                 (Object **)&cpu->tag_memory,
                                 qdev_prop_allow_set_link_before_realize,
                                 OBJ_PROP_LINK_STRONG);

        if (arm_feature(&cpu->env, ARM_FEATURE_EL3)) {
            object_property_add_link(obj, "secure-tag-memory",
                                     TYPE_MEMORY_REGION,
                                     (Object **)&cpu->secure_tag_memory,
                                     qdev_prop_allow_set_link_before_realize,
                                     OBJ_PROP_LINK_STRONG);
        }
    }
#endif
}

static void arm_cpu_finalizefn(Object *obj)
{
    ARMCPU *cpu = ARM_CPU(obj);
    ARMELChangeHook *hook, *next;

    g_hash_table_destroy(cpu->cp_regs);

    QLIST_FOREACH_SAFE(hook, &cpu->pre_el_change_hooks, node, next) {
        QLIST_REMOVE(hook, node);
        g_free(hook);
    }
    QLIST_FOREACH_SAFE(hook, &cpu->el_change_hooks, node, next) {
        QLIST_REMOVE(hook, node);
        g_free(hook);
    }
#ifndef CONFIG_USER_ONLY
    if (cpu->pmu_timer) {
        timer_free(cpu->pmu_timer);
    }
#endif
}

void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp)
{
    Error *local_err = NULL;

#ifdef TARGET_AARCH64
    if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
        arm_cpu_sve_finalize(cpu, &local_err);
        if (local_err != NULL) {
            error_propagate(errp, local_err);
            return;
        }

        arm_cpu_sme_finalize(cpu, &local_err);
        if (local_err != NULL) {
            error_propagate(errp, local_err);
            return;
        }

        arm_cpu_pauth_finalize(cpu, &local_err);
        if (local_err != NULL) {
            error_propagate(errp, local_err);
            return;
        }

        arm_cpu_lpa2_finalize(cpu, &local_err);
        if (local_err != NULL) {
            error_propagate(errp, local_err);
            return;
        }
    }
#endif

    if (kvm_enabled()) {
        kvm_arm_steal_time_finalize(cpu, &local_err);
        if (local_err != NULL) {
            error_propagate(errp, local_err);
            return;
        }
    }
}

static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
{
    CPUState *cs = CPU(dev);
    ARMCPU *cpu = ARM_CPU(dev);
    ARMCPUClass *acc = ARM_CPU_GET_CLASS(dev);
    CPUARMState *env = &cpu->env;
    int pagebits;
    Error *local_err = NULL;
    bool no_aa32 = false;

    /* If we needed to query the host kernel for the CPU features
     * then it's possible that might have failed in the initfn, but
     * this is the first point where we can report it.
     */
    if (cpu->host_cpu_probe_failed) {
        if (!kvm_enabled() && !hvf_enabled()) {
            error_setg(errp, "The 'host' CPU type can only be used with KVM or HVF");
        } else {
            error_setg(errp, "Failed to retrieve host CPU features");
        }
        return;
    }

#ifndef CONFIG_USER_ONLY
    /* The NVIC and M-profile CPU are two halves of a single piece of
     * hardware; trying to use one without the other is a command line
     * error and will result in segfaults if not caught here.
     */
    if (arm_feature(env, ARM_FEATURE_M)) {
        if (!env->nvic) {
            error_setg(errp, "This board cannot be used with Cortex-M CPUs");
            return;
        }
    } else {
        if (env->nvic) {
            error_setg(errp, "This board can only be used with Cortex-M CPUs");
            return;
        }
    }

    if (!tcg_enabled() && !qtest_enabled()) {
        /*
         * We assume that no accelerator except TCG (and the "not really an
         * accelerator" qtest) can handle these features, because Arm hardware
         * virtualization can't virtualize them.
         *
         * Catch all the cases which might cause us to create more than one
         * address space for the CPU (otherwise we will assert() later in
         * cpu_address_space_init()).
         */
        if (arm_feature(env, ARM_FEATURE_M)) {
            error_setg(errp,
                       "Cannot enable %s when using an M-profile guest CPU",
                       current_accel_name());
            return;
        }
        if (cpu->has_el3) {
            error_setg(errp,
                       "Cannot enable %s when guest CPU has EL3 enabled",
                       current_accel_name());
            return;
        }
        if (cpu->tag_memory) {
            error_setg(errp,
                       "Cannot enable %s when guest CPU has MTE enabled",
                       current_accel_name());
            return;
        }
    }

    {
        uint64_t scale;

        if (arm_feature(env, ARM_FEATURE_GENERIC_TIMER)) {
            if (!cpu->gt_cntfrq_hz) {
                error_setg(errp, "Invalid CNTFRQ: %"PRId64"Hz",
                           cpu->gt_cntfrq_hz);
                return;
            }
            scale = gt_cntfrq_period_ns(cpu);
        } else {
            scale = GTIMER_SCALE;
        }

        cpu->gt_timer[GTIMER_PHYS] = timer_new(QEMU_CLOCK_VIRTUAL, scale,
                                               arm_gt_ptimer_cb, cpu);
        cpu->gt_timer[GTIMER_VIRT] = timer_new(QEMU_CLOCK_VIRTUAL, scale,
                                               arm_gt_vtimer_cb, cpu);
        cpu->gt_timer[GTIMER_HYP] = timer_new(QEMU_CLOCK_VIRTUAL, scale,
                                              arm_gt_htimer_cb, cpu);
        cpu->gt_timer[GTIMER_SEC] = timer_new(QEMU_CLOCK_VIRTUAL, scale,
                                              arm_gt_stimer_cb, cpu);
        cpu->gt_timer[GTIMER_HYPVIRT] = timer_new(QEMU_CLOCK_VIRTUAL, scale,
                                                  arm_gt_hvtimer_cb, cpu);
    }
#endif

    cpu_exec_realizefn(cs, &local_err);
    if (local_err != NULL) {
        error_propagate(errp, local_err);
        return;
    }

    arm_cpu_finalize_features(cpu, &local_err);
    if (local_err != NULL) {
        error_propagate(errp, local_err);
        return;
    }

    if (arm_feature(env, ARM_FEATURE_AARCH64) &&
        cpu->has_vfp != cpu->has_neon) {
        /*
         * This is an architectural requirement for AArch64; AArch32 is
         * more flexible and permits VFP-no-Neon and Neon-no-VFP.
         */
        error_setg(errp,
                   "AArch64 CPUs must have both VFP and Neon or neither");
        return;
    }

    if (!cpu->has_vfp) {
        uint64_t t;
        uint32_t u;

        t = cpu->isar.id_aa64isar1;
        t = FIELD_DP64(t, ID_AA64ISAR1, JSCVT, 0);
        cpu->isar.id_aa64isar1 = t;

        t = cpu->isar.id_aa64pfr0;
        t = FIELD_DP64(t, ID_AA64PFR0, FP, 0xf);
        cpu->isar.id_aa64pfr0 = t;

        u = cpu->isar.id_isar6;
        u = FIELD_DP32(u, ID_ISAR6, JSCVT, 0);
        u = FIELD_DP32(u, ID_ISAR6, BF16, 0);
        cpu->isar.id_isar6 = u;

        u = cpu->isar.mvfr0;
        u = FIELD_DP32(u, MVFR0, FPSP, 0);
        u = FIELD_DP32(u, MVFR0, FPDP, 0);
        u = FIELD_DP32(u, MVFR0, FPDIVIDE, 0);
        u = FIELD_DP32(u, MVFR0, FPSQRT, 0);
        u = FIELD_DP32(u, MVFR0, FPROUND, 0);
        if (!arm_feature(env, ARM_FEATURE_M)) {
            u = FIELD_DP32(u, MVFR0, FPTRAP, 0);
            u = FIELD_DP32(u, MVFR0, FPSHVEC, 0);
        }
        cpu->isar.mvfr0 = u;

        u = cpu->isar.mvfr1;
        u = FIELD_DP32(u, MVFR1, FPFTZ, 0);
        u = FIELD_DP32(u, MVFR1, FPDNAN, 0);
        u = FIELD_DP32(u, MVFR1, FPHP, 0);
        if (arm_feature(env, ARM_FEATURE_M)) {
            u = FIELD_DP32(u, MVFR1, FP16, 0);
        }
        cpu->isar.mvfr1 = u;

        u = cpu->isar.mvfr2;
        u = FIELD_DP32(u, MVFR2, FPMISC, 0);
        cpu->isar.mvfr2 = u;
    }
|
|
|
|
    if (!cpu->has_neon) {
        uint64_t t;
        uint32_t u;

        unset_feature(env, ARM_FEATURE_NEON);

        t = cpu->isar.id_aa64isar0;
        t = FIELD_DP64(t, ID_AA64ISAR0, AES, 0);
        t = FIELD_DP64(t, ID_AA64ISAR0, SHA1, 0);
        t = FIELD_DP64(t, ID_AA64ISAR0, SHA2, 0);
        t = FIELD_DP64(t, ID_AA64ISAR0, SHA3, 0);
        t = FIELD_DP64(t, ID_AA64ISAR0, SM3, 0);
        t = FIELD_DP64(t, ID_AA64ISAR0, SM4, 0);
        t = FIELD_DP64(t, ID_AA64ISAR0, DP, 0);
        cpu->isar.id_aa64isar0 = t;

        t = cpu->isar.id_aa64isar1;
        t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 0);
        t = FIELD_DP64(t, ID_AA64ISAR1, BF16, 0);
        t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 0);
        cpu->isar.id_aa64isar1 = t;

        t = cpu->isar.id_aa64pfr0;
        t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 0xf);
        cpu->isar.id_aa64pfr0 = t;

        u = cpu->isar.id_isar5;
        u = FIELD_DP32(u, ID_ISAR5, AES, 0);
        u = FIELD_DP32(u, ID_ISAR5, SHA1, 0);
        u = FIELD_DP32(u, ID_ISAR5, SHA2, 0);
        u = FIELD_DP32(u, ID_ISAR5, RDM, 0);
        u = FIELD_DP32(u, ID_ISAR5, VCMA, 0);
        cpu->isar.id_isar5 = u;

        u = cpu->isar.id_isar6;
        u = FIELD_DP32(u, ID_ISAR6, DP, 0);
        u = FIELD_DP32(u, ID_ISAR6, FHM, 0);
        u = FIELD_DP32(u, ID_ISAR6, BF16, 0);
        u = FIELD_DP32(u, ID_ISAR6, I8MM, 0);
        cpu->isar.id_isar6 = u;

        if (!arm_feature(env, ARM_FEATURE_M)) {
            u = cpu->isar.mvfr1;
            u = FIELD_DP32(u, MVFR1, SIMDLS, 0);
            u = FIELD_DP32(u, MVFR1, SIMDINT, 0);
            u = FIELD_DP32(u, MVFR1, SIMDSP, 0);
            u = FIELD_DP32(u, MVFR1, SIMDHP, 0);
            cpu->isar.mvfr1 = u;

            u = cpu->isar.mvfr2;
            u = FIELD_DP32(u, MVFR2, SIMDMISC, 0);
            cpu->isar.mvfr2 = u;
        }
    }

    if (!cpu->has_neon && !cpu->has_vfp) {
        uint64_t t;
        uint32_t u;

        t = cpu->isar.id_aa64isar0;
        t = FIELD_DP64(t, ID_AA64ISAR0, FHM, 0);
        cpu->isar.id_aa64isar0 = t;

        t = cpu->isar.id_aa64isar1;
        t = FIELD_DP64(t, ID_AA64ISAR1, FRINTTS, 0);
        cpu->isar.id_aa64isar1 = t;

        u = cpu->isar.mvfr0;
        u = FIELD_DP32(u, MVFR0, SIMDREG, 0);
        cpu->isar.mvfr0 = u;

        /* Despite the name, this field covers both VFP and Neon */
        u = cpu->isar.mvfr1;
        u = FIELD_DP32(u, MVFR1, SIMDFMAC, 0);
        cpu->isar.mvfr1 = u;
    }

    if (arm_feature(env, ARM_FEATURE_M) && !cpu->has_dsp) {
        uint32_t u;

        unset_feature(env, ARM_FEATURE_THUMB_DSP);

        u = cpu->isar.id_isar1;
        u = FIELD_DP32(u, ID_ISAR1, EXTEND, 1);
        cpu->isar.id_isar1 = u;

        u = cpu->isar.id_isar2;
        u = FIELD_DP32(u, ID_ISAR2, MULTU, 1);
        u = FIELD_DP32(u, ID_ISAR2, MULTS, 1);
        cpu->isar.id_isar2 = u;

        u = cpu->isar.id_isar3;
        u = FIELD_DP32(u, ID_ISAR3, SIMD, 1);
        u = FIELD_DP32(u, ID_ISAR3, SATURATE, 0);
        cpu->isar.id_isar3 = u;
    }

    /* Some features automatically imply others: */
    if (arm_feature(env, ARM_FEATURE_V8)) {
        if (arm_feature(env, ARM_FEATURE_M)) {
            set_feature(env, ARM_FEATURE_V7);
        } else {
            set_feature(env, ARM_FEATURE_V7VE);
        }
    }

    /*
     * There exist AArch64 cpus without AArch32 support.  When KVM
     * queries ID_ISAR0_EL1 on such a host, the value is UNKNOWN.
     * Similarly, we cannot check ID_AA64PFR0 without AArch64 support.
     * As a general principle, we also do not make ID register
     * consistency checks anywhere unless using TCG, because only
     * for TCG would a consistency-check failure be a QEMU bug.
     */
    if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
        no_aa32 = !cpu_isar_feature(aa64_aa32, cpu);
    }

    if (arm_feature(env, ARM_FEATURE_V7VE)) {
        /* v7 Virtualization Extensions. In real hardware this implies
         * EL2 and also the presence of the Security Extensions.
         * For QEMU, for backwards-compatibility we implement some
         * CPUs or CPU configs which have no actual EL2 or EL3 but do
         * include the various other features that V7VE implies.
         * Presence of EL2 itself is ARM_FEATURE_EL2, and of the
         * Security Extensions is ARM_FEATURE_EL3.
         */
        assert(!tcg_enabled() || no_aa32 ||
               cpu_isar_feature(aa32_arm_div, cpu));
        set_feature(env, ARM_FEATURE_LPAE);
        set_feature(env, ARM_FEATURE_V7);
    }
    if (arm_feature(env, ARM_FEATURE_V7)) {
        set_feature(env, ARM_FEATURE_VAPA);
        set_feature(env, ARM_FEATURE_THUMB2);
        set_feature(env, ARM_FEATURE_MPIDR);
        if (!arm_feature(env, ARM_FEATURE_M)) {
            set_feature(env, ARM_FEATURE_V6K);
        } else {
            set_feature(env, ARM_FEATURE_V6);
        }

        /* Always define VBAR for V7 CPUs even if it doesn't exist in
         * non-EL3 configs. This is needed by some legacy boards.
         */
        set_feature(env, ARM_FEATURE_VBAR);
    }
    if (arm_feature(env, ARM_FEATURE_V6K)) {
        set_feature(env, ARM_FEATURE_V6);
        set_feature(env, ARM_FEATURE_MVFR);
    }
    if (arm_feature(env, ARM_FEATURE_V6)) {
        set_feature(env, ARM_FEATURE_V5);
        if (!arm_feature(env, ARM_FEATURE_M)) {
            assert(!tcg_enabled() || no_aa32 ||
                   cpu_isar_feature(aa32_jazelle, cpu));
            set_feature(env, ARM_FEATURE_AUXCR);
        }
    }
    if (arm_feature(env, ARM_FEATURE_V5)) {
        set_feature(env, ARM_FEATURE_V4T);
    }
    if (arm_feature(env, ARM_FEATURE_LPAE)) {
        set_feature(env, ARM_FEATURE_V7MP);
    }
    if (arm_feature(env, ARM_FEATURE_CBAR_RO)) {
        set_feature(env, ARM_FEATURE_CBAR);
    }
    if (arm_feature(env, ARM_FEATURE_THUMB2) &&
        !arm_feature(env, ARM_FEATURE_M)) {
        set_feature(env, ARM_FEATURE_THUMB_DSP);
    }

    /*
     * We rely on no XScale CPU having VFP so we can use the same bits in the
     * TB flags field for VECSTRIDE and XSCALE_CPAR.
     */
    assert(arm_feature(&cpu->env, ARM_FEATURE_AARCH64) ||
           !cpu_isar_feature(aa32_vfp_simd, cpu) ||
           !arm_feature(env, ARM_FEATURE_XSCALE));

    if (arm_feature(env, ARM_FEATURE_V7) &&
        !arm_feature(env, ARM_FEATURE_M) &&
        !arm_feature(env, ARM_FEATURE_PMSA)) {
        /* v7VMSA drops support for the old ARMv5 tiny pages, so we
         * can use 4K pages.
         */
        pagebits = 12;
    } else {
        /* For CPUs which might have tiny 1K pages, or which have an
         * MPU and might have small region sizes, stick with 1K pages.
         */
        pagebits = 10;
    }
    if (!set_preferred_target_page_bits(pagebits)) {
        /* This can only ever happen for hotplugging a CPU, or if
         * the board code incorrectly creates a CPU which it has
         * promised via minimum_page_size that it will not.
         */
        error_setg(errp, "This CPU requires a smaller page size than the "
                   "system is using");
        return;
    }

    /* This cpu-id-to-MPIDR affinity is used only for TCG; KVM will override it.
     * We don't support setting cluster ID ([16..23]) (known as Aff2
     * in later ARM ARM versions), or any of the higher affinity level fields,
     * so these bits always RAZ.
     */
    if (cpu->mp_affinity == ARM64_AFFINITY_INVALID) {
        cpu->mp_affinity = arm_cpu_mp_affinity(cs->cpu_index,
                                               ARM_DEFAULT_CPUS_PER_CLUSTER);
    }

    if (cpu->reset_hivecs) {
        cpu->reset_sctlr |= (1 << 13);
    }

    if (cpu->cfgend) {
        if (arm_feature(&cpu->env, ARM_FEATURE_V7)) {
            cpu->reset_sctlr |= SCTLR_EE;
        } else {
            cpu->reset_sctlr |= SCTLR_B;
        }
    }

    if (!arm_feature(env, ARM_FEATURE_M) && !cpu->has_el3) {
        /* If the has_el3 CPU property is disabled then we need to disable the
         * feature.
         */
        unset_feature(env, ARM_FEATURE_EL3);

        /*
         * Disable the security extension feature bits in the processor
         * feature registers as well.
         */
        cpu->isar.id_pfr1 = FIELD_DP32(cpu->isar.id_pfr1, ID_PFR1, SECURITY, 0);
        cpu->isar.id_dfr0 = FIELD_DP32(cpu->isar.id_dfr0, ID_DFR0, COPSDBG, 0);
        cpu->isar.id_aa64pfr0 = FIELD_DP64(cpu->isar.id_aa64pfr0,
                                           ID_AA64PFR0, EL3, 0);
    }

    if (!cpu->has_el2) {
        unset_feature(env, ARM_FEATURE_EL2);
    }

    if (!cpu->has_pmu) {
        unset_feature(env, ARM_FEATURE_PMU);
    }
    if (arm_feature(env, ARM_FEATURE_PMU)) {
        pmu_init(cpu);

        if (!kvm_enabled()) {
            arm_register_pre_el_change_hook(cpu, &pmu_pre_el_change, 0);
            arm_register_el_change_hook(cpu, &pmu_post_el_change, 0);
        }

#ifndef CONFIG_USER_ONLY
        cpu->pmu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, arm_pmu_timer_cb,
                                      cpu);
#endif
    } else {
        cpu->isar.id_aa64dfr0 =
            FIELD_DP64(cpu->isar.id_aa64dfr0, ID_AA64DFR0, PMUVER, 0);
        cpu->isar.id_dfr0 = FIELD_DP32(cpu->isar.id_dfr0, ID_DFR0, PERFMON, 0);
        cpu->pmceid0 = 0;
        cpu->pmceid1 = 0;
    }

    if (!arm_feature(env, ARM_FEATURE_EL2)) {
        /*
         * Disable the hypervisor feature bits in the processor feature
         * registers if we don't have EL2.
         */
        cpu->isar.id_aa64pfr0 = FIELD_DP64(cpu->isar.id_aa64pfr0,
                                           ID_AA64PFR0, EL2, 0);
        cpu->isar.id_pfr1 = FIELD_DP32(cpu->isar.id_pfr1,
                                       ID_PFR1, VIRTUALIZATION, 0);
    }

#ifndef CONFIG_USER_ONLY
    if (cpu->tag_memory == NULL && cpu_isar_feature(aa64_mte, cpu)) {
        /*
         * Disable the MTE feature bits if we do not have tag-memory
         * provided by the machine.
         */
        cpu->isar.id_aa64pfr1 =
            FIELD_DP64(cpu->isar.id_aa64pfr1, ID_AA64PFR1, MTE, 0);
    }
#endif

    if (tcg_enabled()) {
        /*
         * Don't report the Statistical Profiling Extension in the ID
         * registers, because TCG doesn't implement it yet (not even a
         * minimal stub version) and guests will fall over when they
         * try to access the non-existent system registers for it.
         */
        cpu->isar.id_aa64dfr0 =
            FIELD_DP64(cpu->isar.id_aa64dfr0, ID_AA64DFR0, PMSVER, 0);
    }

    /* MPU can be configured out of a PMSA CPU either by setting has-mpu
     * to false or by setting pmsav7-dregion to 0.
     */
    if (!cpu->has_mpu) {
        cpu->pmsav7_dregion = 0;
    }
    if (cpu->pmsav7_dregion == 0) {
        cpu->has_mpu = false;
    }

    if (arm_feature(env, ARM_FEATURE_PMSA) &&
        arm_feature(env, ARM_FEATURE_V7)) {
        uint32_t nr = cpu->pmsav7_dregion;

        if (nr > 0xff) {
            error_setg(errp, "PMSAv7 MPU #regions invalid %" PRIu32, nr);
            return;
        }

        if (nr) {
            if (arm_feature(env, ARM_FEATURE_V8)) {
                /* PMSAv8 */
                env->pmsav8.rbar[M_REG_NS] = g_new0(uint32_t, nr);
                env->pmsav8.rlar[M_REG_NS] = g_new0(uint32_t, nr);
                if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
                    env->pmsav8.rbar[M_REG_S] = g_new0(uint32_t, nr);
                    env->pmsav8.rlar[M_REG_S] = g_new0(uint32_t, nr);
                }
            } else {
                env->pmsav7.drbar = g_new0(uint32_t, nr);
                env->pmsav7.drsr = g_new0(uint32_t, nr);
                env->pmsav7.dracr = g_new0(uint32_t, nr);
            }
        }
    }

    if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
        uint32_t nr = cpu->sau_sregion;

        if (nr > 0xff) {
            error_setg(errp, "v8M SAU #regions invalid %" PRIu32, nr);
            return;
        }

        if (nr) {
            env->sau.rbar = g_new0(uint32_t, nr);
            env->sau.rlar = g_new0(uint32_t, nr);
        }
    }

    if (arm_feature(env, ARM_FEATURE_EL3)) {
        set_feature(env, ARM_FEATURE_VBAR);
    }

    register_cp_regs_for_features(cpu);
    arm_cpu_register_gdb_regs_for_features(cpu);

    init_cpreg_list(cpu);

#ifndef CONFIG_USER_ONLY
    MachineState *ms = MACHINE(qdev_get_machine());
    unsigned int smp_cpus = ms->smp.cpus;
    bool has_secure = cpu->has_el3 || arm_feature(env, ARM_FEATURE_M_SECURITY);

    /*
     * We must set cs->num_ases to the final value before
     * the first call to cpu_address_space_init.
     */
    if (cpu->tag_memory != NULL) {
        cs->num_ases = 3 + has_secure;
    } else {
        cs->num_ases = 1 + has_secure;
    }

    if (has_secure) {
        if (!cpu->secure_memory) {
            cpu->secure_memory = cs->memory;
        }
        cpu_address_space_init(cs, ARMASIdx_S, "cpu-secure-memory",
                               cpu->secure_memory);
    }

    if (cpu->tag_memory != NULL) {
        cpu_address_space_init(cs, ARMASIdx_TagNS, "cpu-tag-memory",
                               cpu->tag_memory);
        if (has_secure) {
            cpu_address_space_init(cs, ARMASIdx_TagS, "cpu-tag-memory",
                                   cpu->secure_tag_memory);
        }
    }

    cpu_address_space_init(cs, ARMASIdx_NS, "cpu-memory", cs->memory);

    /* No core_count specified, default to smp_cpus. */
    if (cpu->core_count == -1) {
        cpu->core_count = smp_cpus;
    }
#endif

    if (tcg_enabled()) {
        int dcz_blocklen = 4 << cpu->dcz_blocksize;

        /*
         * We only support DCZ blocklen that fits on one page.
         *
         * Architecturally this is always true.  However TARGET_PAGE_SIZE
         * is variable and, for compatibility with -machine virt-2.7,
         * is only 1KiB, as an artifact of legacy ARMv5 subpage support.
         * But even then, while the largest architectural DCZ blocklen
         * is 2KiB, no cpu actually uses such a large blocklen.
         */
        assert(dcz_blocklen <= TARGET_PAGE_SIZE);

        /*
         * We only support DCZ blocksize >= 2*TAG_GRANULE, which is to say
         * both nibbles of each byte storing tag data may be written at once.
         * Since TAG_GRANULE is 16, this means that blocklen must be >= 32.
         */
        if (cpu_isar_feature(aa64_mte, cpu)) {
            assert(dcz_blocklen >= 2 * TAG_GRANULE);
        }
    }

    qemu_init_vcpu(cs);
    cpu_reset(cs);

    acc->parent_realize(dev, errp);
}

static ObjectClass *arm_cpu_class_by_name(const char *cpu_model)
{
    ObjectClass *oc;
    char *typename;
    char **cpuname;
    const char *cpunamestr;

    cpuname = g_strsplit(cpu_model, ",", 1);
    cpunamestr = cpuname[0];
#ifdef CONFIG_USER_ONLY
    /* For backwards compatibility usermode emulation allows "-cpu any",
     * which has the same semantics as "-cpu max".
     */
    if (!strcmp(cpunamestr, "any")) {
        cpunamestr = "max";
    }
#endif
    typename = g_strdup_printf(ARM_CPU_TYPE_NAME("%s"), cpunamestr);
    oc = object_class_by_name(typename);
    g_strfreev(cpuname);
    g_free(typename);
    if (!oc || !object_class_dynamic_cast(oc, TYPE_ARM_CPU) ||
        object_class_is_abstract(oc)) {
        return NULL;
    }
    return oc;
}

static Property arm_cpu_properties[] = {
    DEFINE_PROP_UINT64("midr", ARMCPU, midr, 0),
    DEFINE_PROP_UINT64("mp-affinity", ARMCPU,
                       mp_affinity, ARM64_AFFINITY_INVALID),
    DEFINE_PROP_INT32("node-id", ARMCPU, node_id, CPU_UNSET_NUMA_NODE_ID),
    DEFINE_PROP_INT32("core-count", ARMCPU, core_count, -1),
    DEFINE_PROP_END_OF_LIST()
};

static gchar *arm_gdb_arch_name(CPUState *cs)
{
    ARMCPU *cpu = ARM_CPU(cs);
    CPUARMState *env = &cpu->env;

    if (arm_feature(env, ARM_FEATURE_IWMMXT)) {
        return g_strdup("iwmmxt");
    }
    return g_strdup("arm");
}

#ifndef CONFIG_USER_ONLY
#include "hw/core/sysemu-cpu-ops.h"

static const struct SysemuCPUOps arm_sysemu_ops = {
    .get_phys_page_attrs_debug = arm_cpu_get_phys_page_attrs_debug,
    .asidx_from_attrs = arm_asidx_from_attrs,
    .write_elf32_note = arm_cpu_write_elf32_note,
    .write_elf64_note = arm_cpu_write_elf64_note,
    .virtio_is_big_endian = arm_cpu_virtio_is_big_endian,
    .legacy_vmsd = &vmstate_arm_cpu,
};
#endif

#ifdef CONFIG_TCG
static const struct TCGCPUOps arm_tcg_ops = {
    .initialize = arm_translate_init,
    .synchronize_from_tb = arm_cpu_synchronize_from_tb,
    .debug_excp_handler = arm_debug_excp_handler,
    .restore_state_to_opc = arm_restore_state_to_opc,

#ifdef CONFIG_USER_ONLY
    .record_sigsegv = arm_cpu_record_sigsegv,
    .record_sigbus = arm_cpu_record_sigbus,
#else
    .tlb_fill = arm_cpu_tlb_fill,
    .cpu_exec_interrupt = arm_cpu_exec_interrupt,
    .do_interrupt = arm_cpu_do_interrupt,
    .do_transaction_failed = arm_cpu_do_transaction_failed,
    .do_unaligned_access = arm_cpu_do_unaligned_access,
    .adjust_watchpoint_address = arm_adjust_watchpoint_address,
    .debug_check_watchpoint = arm_debug_check_watchpoint,
    .debug_check_breakpoint = arm_debug_check_breakpoint,
#endif /* !CONFIG_USER_ONLY */
};
#endif /* CONFIG_TCG */

static void arm_cpu_class_init(ObjectClass *oc, void *data)
{
    ARMCPUClass *acc = ARM_CPU_CLASS(oc);
    CPUClass *cc = CPU_CLASS(acc);
    DeviceClass *dc = DEVICE_CLASS(oc);

    device_class_set_parent_realize(dc, arm_cpu_realizefn,
                                    &acc->parent_realize);

    device_class_set_props(dc, arm_cpu_properties);
    device_class_set_parent_reset(dc, arm_cpu_reset, &acc->parent_reset);

    cc->class_by_name = arm_cpu_class_by_name;
    cc->has_work = arm_cpu_has_work;
    cc->dump_state = arm_cpu_dump_state;
    cc->set_pc = arm_cpu_set_pc;
    cc->get_pc = arm_cpu_get_pc;
    cc->gdb_read_register = arm_cpu_gdb_read_register;
    cc->gdb_write_register = arm_cpu_gdb_write_register;
#ifndef CONFIG_USER_ONLY
    cc->sysemu_ops = &arm_sysemu_ops;
#endif
    cc->gdb_num_core_regs = 26;
    cc->gdb_core_xml_file = "arm-core.xml";
    cc->gdb_arch_name = arm_gdb_arch_name;
    cc->gdb_get_dynamic_xml = arm_gdb_get_dynamic_xml;
    cc->gdb_stop_before_watchpoint = true;
    cc->disas_set_info = arm_disas_set_info;

#ifdef CONFIG_TCG
    cc->tcg_ops = &arm_tcg_ops;
#endif /* CONFIG_TCG */
}

static void arm_cpu_instance_init(Object *obj)
{
    ARMCPUClass *acc = ARM_CPU_GET_CLASS(obj);

    acc->info->initfn(obj);
    arm_cpu_post_init(obj);
}

static void cpu_register_class_init(ObjectClass *oc, void *data)
{
    ARMCPUClass *acc = ARM_CPU_CLASS(oc);

    acc->info = data;
}

void arm_cpu_register(const ARMCPUInfo *info)
{
    TypeInfo type_info = {
        .parent = TYPE_ARM_CPU,
        .instance_size = sizeof(ARMCPU),
        .instance_align = __alignof__(ARMCPU),
        .instance_init = arm_cpu_instance_init,
        .class_size = sizeof(ARMCPUClass),
        .class_init = info->class_init ?: cpu_register_class_init,
        .class_data = (void *)info,
    };

    type_info.name = g_strdup_printf("%s-" TYPE_ARM_CPU, info->name);
    type_register(&type_info);
    g_free((void *)type_info.name);
}

static const TypeInfo arm_cpu_type_info = {
    .name = TYPE_ARM_CPU,
    .parent = TYPE_CPU,
    .instance_size = sizeof(ARMCPU),
    .instance_align = __alignof__(ARMCPU),
    .instance_init = arm_cpu_initfn,
    .instance_finalize = arm_cpu_finalizefn,
    .abstract = true,
    .class_size = sizeof(ARMCPUClass),
    .class_init = arm_cpu_class_init,
};

static void arm_cpu_register_types(void)
{
    type_register_static(&arm_cpu_type_info);
}

type_init(arm_cpu_register_types)