288 Commits

Author SHA1 Message Date
Romain Malmain
5682a6d841 v10.0.0 release
-----BEGIN PGP SIGNATURE-----
 
 iQEzBAABCgAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmgHmpAACgkQnKSrs4Gr
 c8h82wf/fVN/ZlYKLX7VJz0z+u3UB5MKuDUd+7LUwSGse9uIOH3K8PITkMyYgIti
 Sh8EKg9rhVzBEpiL9ZJfqCJjQTgJFk0O4xt3dPSGNsI2pZZcDwvQXFit7e/fafrY
 tUaTPdGuZ+i7s8Ooa+Z5tacI7n8KniQQkgf90oTnKhatmDmUbsVE0fma/2EmgqdI
 fO2mJKp5YiDsRf3vmuVKx/ltHYfL2tOvBOojeWBk9Zwr+czI2ku6Fy1Suu+tWeZ5
 setxSOCfY3G+qVsTm3n0d9OW/GPoQBsSVbSYua/74nQneNivTDAncndLFbFdj60g
 Q9n4t7tHN35Nh4XqkE0DhMGqPsQ3Og==
 =CFYe
 -----END PGP SIGNATURE-----
gpgsig -----BEGIN PGP SIGNATURE-----
 
 iHUEABYKAB0WIQSq9xYmtep25y1RrMYC5KE/dBVGigUCaBCxXQAKCRAC5KE/dBVG
 imXmAP0WaWyc2kmipvGyhdGor7F4PlG9LRHL0jM4Om5SM4lkzAD/WnyFAXtErEwl
 eK0c2d980jdVHS5h9tVDK5TpzcPCRA0=
 =Zk18
 -----END PGP SIGNATURE-----

Merge tag 'v10.0.0' into update_qemu_v10_0_0

v10.0.0 release
2025-04-29 13:00:44 +02:00
Romain Malmain
2a676d9cd8 v9.2.2 release
-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEZKoqtTHVaQM2a/75gqpKJDselHgFAme8B8gACgkQgqpKJDse
 lHjzqxAAl9+xkHoXtgsnMhENO8dNznCPFh3AGKacxrahv1/XP/ghjPF8NNV0tGDK
 us73n0rNJG88dW2RIQVTjZJ5WYXaMwFBYrPBD2F0MROpiLmjXkHTr/fuH9Z7GkXI
 DOAfzf9Hf2BgKlolLAxvL55LckolAM7C87DNE0gtg/OT+d+XXfFcCpQf6wn+v+B7
 vAj5v7ir96rBffjjbRm2wItIsBDhzSxUxdaSnefC3CT8O2hbD6OcPa9o8WH2fLIR
 HHBLsW+2JTxv01iKRwPLfA00RIbxvC9QaaxTdkyBcnWIwbJy7LIWDvy37pnfHOHS
 XBp/AXEiQ7CXWat2451CAx2WPA/Vbcz4ekNSlBFk4tGNAZTJc9gL/doTXaAOl1SM
 8URJpe/gIUVENICkZe17UXG1L2zdMclAUCrFwgzPv6Ljth8ctFC8Gdk2xvYw5etY
 wQaILuXtzl0RgGVHrVLRL3q1w51YKv7aii6v+czHjwgDRDchc1h3m2+33UPERVZe
 ymSs1R5Vvmh8kE7v0coJDtR2BLRb4++AvBKiJ6ty6UqHA/F5JLCSE7dwwUuim9YY
 7E2jI2cNX+HO8yfwNoqZQ2cr2gAtMIm4hHE4hs0iqamfi/RGk8xw9HrRPlXorj9y
 +KWDYTqYAXOtd+qZyQtbppHKGOEAKXjg9qdYNy9N5KyAe5jrd/8=
 =06yL
 -----END PGP SIGNATURE-----
gpgsig -----BEGIN PGP SIGNATURE-----
 
 iHUEABYKAB0WIQSq9xYmtep25y1RrMYC5KE/dBVGigUCZ9mEEAAKCRAC5KE/dBVG
 isziAP9tS6m4jKmDiYyLoYHT5tQ8+gI0R3kMl5U8VNGOx+/kfgD/X11dFM7VaVDo
 fecgc4U1dVPRguh5WO1cjEL3k8IDQAU=
 =RdqL
 -----END PGP SIGNATURE-----

Merge tag 'v9.2.2' into update_qemu_v9_2_2

v9.2.2 release
2025-03-18 15:32:47 +01:00
Philippe Mathieu-Daudé
bcde46f57d exec: Declare tlb_hit*() in 'exec/cputlb.h'
Move CPU TLB related methods to "exec/cputlb.h".
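A minimal sketch of what callers now do, assuming they previously picked up the declaration transitively:

  #include "exec/cputlb.h"   /* tlb_hit() and related helpers now live here */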

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20241114011310.3615-20-philmd@linaro.org>
2025-03-08 07:56:14 -08:00
Philippe Mathieu-Daudé
1501743654 accel/tcg: Rename 'hw/core/tcg-cpu-ops.h' -> 'accel/tcg/cpu-ops.h'
The TCGCPUOps structure makes more sense in the accelerator context
than in hardware emulation. Move it under the accel/tcg/ scope.

Mechanical change doing:

 $  sed -i -e 's,hw/core/tcg-cpu-ops.h,accel/tcg/cpu-ops.h,g' \
   $(git grep -l hw/core/tcg-cpu-ops.h)

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20250123234415.59850-11-philmd@linaro.org>
2025-03-06 15:46:17 +01:00
Richard Henderson
bf455ec50b include/exec: Use uintptr_t in CPUTLBEntry
Since we no longer support 64-bit guests on 32-bit hosts,
we can use a 32-bit type on a 32-bit host.  This shrinks
the size of the structure to 16 bytes on a 32-bit host.
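A simplified sketch of the resulting layout (field names taken from the QEMU source, padding and array views trimmed); four pointer-sized members give 4 x 4 = 16 bytes on a 32-bit host:

  typedef union CPUTLBEntry {
      struct {
          uintptr_t addr_read;   /* guest address for reads, plus flag bits */
          uintptr_t addr_write;  /* guest address for writes, plus flag bits */
          uintptr_t addr_code;   /* guest address for code fetch, plus flag bits */
          uintptr_t addend;      /* host address = guest address + addend */
      };
      uintptr_t addr_idx[4];     /* indexed view, as used by tlb_read_idx() */
  } CPUTLBEntry;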

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2025-02-18 08:29:02 -08:00
Richard Henderson
252394c95b accel/tcg: Fix tlb_set_page_with_attrs, tlb_set_page
The declarations use vaddr for size.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2025-02-18 07:33:42 -08:00
Richard Henderson
f441b4d19b tcg: Remove TCG_OVERSIZED_GUEST
This is now prohibited in configuration.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2025-02-18 07:33:42 -08:00
Philippe Mathieu-Daudé
3e6bfabfbb accel/tcg: Move TranslationBlock declarations to 'tb-internal.h'
Move declarations related to TranslationBlock out of the
generic "internal-target.h" to "tb-internal.h".

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20241212185341.2857-11-philmd@linaro.org>
2024-12-20 17:44:57 +01:00
Philippe Mathieu-Daudé
93ef2c2f15 accel/tcg: Move 'exec/translate-all.h' -> 'tb-internal.h'
"exec/translate-all.h" is only useful to TCG accelerator,
so move it to accel/tcg/, after renaming it 'tb-internal.h'.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20241212185341.2857-9-philmd@linaro.org>
2024-12-20 17:44:57 +01:00
Philippe Mathieu-Daudé
9c6e54f475 accel/tcg: Have tlb_vaddr_to_host() use vaddr type
abi_ptr is meant for user emulation. tlb_vaddr_to_host()
uses it, yet the function can also be called in system
emulation. Replace the type with 'vaddr', which is
equivalent in user emulation but also works for system
emulation.
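The resulting prototype, roughly (the exact parameter list is assumed):

  void *tlb_vaddr_to_host(CPUArchState *env, vaddr addr,
                          MMUAccessType access_type, int mmu_idx);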

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20241114011310.3615-13-philmd@linaro.org>
2024-12-20 17:44:56 +01:00
Romain Malmain
67dabac1ed v9.1.1 release
-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEZKoqtTHVaQM2a/75gqpKJDselHgFAmcScB0ACgkQgqpKJDse
 lHgQ7g/7BIWV/LC7MqFmHlXl9S0S7ZHVsDc2x6Bx97Sk4sKAUKLvRsLFMa5F40Fn
 xY8v/aLsqOTmzWz38hdtgJR0rrv8DykWw9ft9nta2tFg20tilL/LaakT8TLKmjK2
 StZFzk7iijnY78Z3RcVliBTStLoPbOx9WCUs2evCV/qTxQDec1A7u4ukG9cAztGn
 ea8pNnKNgk+BN805w1uMMZ1wnh3FTVs9kdXVh7CzXlRAHHkVHQ47C9ZN6vh6N3xs
 3qj/Obi4k1N81NNRJFA4gR02t82LdPhg/WV33/q9TxSmHyZEmNXg0lRlDyIeSbpw
 bqYY+dsBbGyMJgN/LUZMNjPAfQL4S5VicFJcfKTXr6xYtkhqtlCun1kmI7O+ZIY5
 kGQYbAAhyPkFIOU6XedyKxM+0eUDqrr9fyzyn5NfISzETQiGFccYjfk/4fsHGfS8
 nOBTNtYBpnEXFeUk/jvv6OPOsh2L+K0PKbGefFbCjNng9Ix3Kz5zEY8xhtlv7C6m
 9YyGGAS1zwcWapwq8URy01GWkiKT2Ia/gD7c89oGY1bJmQKYf9lrLX5YtP+d/NYs
 UqWmk046ViapiKDF7VXWtF0f5axYpeaMMhkNM5RtkOq57nez4LuKPaKs1emRC6W9
 LE2om+28dyGJqHeJp5fqigM+wPxRJlecR57sDIuq4n0bJcvzLEA=
 =240n
 -----END PGP SIGNATURE-----
gpgsig -----BEGIN PGP SIGNATURE-----
 
 iHUEABYKAB0WIQSq9xYmtep25y1RrMYC5KE/dBVGigUCZxv7TAAKCRAC5KE/dBVG
 isCPAP43SCLPw/W/su5jPShfNn4fvHHiY1f0a6t3Kf6414aqvQD/XKmYGFGl4V5k
 XYnW/9D6Bp/k8gBSjKzYeIt0+Mt/AAQ=
 =cRil
 -----END PGP SIGNATURE-----

Merge tag 'v9.1.1' into update_qemu_9_1_0

v9.1.1 release
2024-10-25 22:10:51 +02:00
Richard Henderson
795592fef7 accel/tcg: Use the alignment test in tlb_fill_align
When we have a tlb miss, defer the alignment check to
the new tlb_fill_align hook.  Move the existing alignment
check so that we only perform it with a tlb hit.

Reviewed-by: Helge Deller <deller@gmx.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-10-13 11:27:05 -07:00
Richard Henderson
f168808d7d accel/tcg: Add TCGCPUOps.tlb_fill_align
Add a new callback to handle softmmu paging.  Return the page
details directly, instead of passing them indirectly to
tlb_set_page.  Handle alignment simultaneously with paging so
that faults are handled with target-specific priority.

Route all calls of the two hooks through a tlb_fill_align
function local to cputlb.c.

As yet no targets implement the new hook.
As yet cputlb.c does not use the new alignment check.
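A hedged sketch of the callback shape (parameter list and return-value semantics are assumptions based on the description above; the hook fills a CPUTLBEntryFull directly instead of calling tlb_set_page):

  /* Member of TCGCPUOps (sketch): returns false only for non-faulting probes. */
  bool (*tlb_fill_align)(CPUState *cpu, CPUTLBEntryFull *out, vaddr addr,
                         MMUAccessType access_type, int mmu_idx,
                         MemOp memop, int size, bool probe, uintptr_t ra);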

Reviewed-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-10-13 11:27:05 -07:00
Richard Henderson
e5b063e81f include/exec/memop: Introduce memop_atomicity_bits
Split out of mmu_lookup.

Reviewed-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-10-13 11:27:03 -07:00
Richard Henderson
c5809eee45 include/exec/memop: Rename get_alignment_bits
Rename to use "memop_" prefix, like other functions
that operate on MemOp.
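For example (the new spelling is assumed from the "memop_" convention):

  /* before */
  unsigned a_bits = get_alignment_bits(memop);
  /* after */
  unsigned a_bits = memop_alignment_bits(memop);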

Reviewed-by: Helge Deller <deller@gmx.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-10-13 11:27:03 -07:00
Richard Henderson
49d1866a6e accel/tcg: Assert noreturn from write-only page for atomics
There should be no "just in case"; the page is already
in the tlb, and known to be not readable.

Reviewed-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-10-13 11:27:03 -07:00
Nicholas Piggin
30933c4fb4 tcg/cputlb: remove other-cpu capability from TLB flushing
Some TLB flush operations can flush other CPUs. The problem is that
they used the non-synced flush variants (i.e., ones that return
before the destination has completed the flush). Since all such
users need the _synced variants, and the last user (ppc) of the
non-synced flush was buggy, this is a footgun waiting to go off. There
do not seem to be any remaining callers that flush other CPUs, so remove
the capability.
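Callers that genuinely need to flush other vCPUs use the _synced variants, which complete before the source vCPU runs further guest code, e.g.:

  tlb_flush_all_cpus_synced(cpu);            /* flush every vCPU's TLB */
  tlb_flush_page_all_cpus_synced(cpu, addr); /* flush one page on every vCPU */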

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-05-24 08:57:50 +10:00
Nicholas Piggin
99cd12ced1 tcg/cputlb: Remove non-synced variants of global TLB flushes
These are no longer used.

  tlb_flush_all_cpus: removed by previous commit.
  tlb_flush_page_all_cpus: removed by previous commit.

  tlb_flush_by_mmuidx_all_cpus: never used.
  tlb_flush_page_by_mmuidx_all_cpus: never used.
  tlb_flush_page_bits_by_mmuidx_all_cpus: never used, thus:
    tlb_flush_range_by_mmuidx_all_cpus: never used.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-05-24 08:57:50 +10:00
Philippe Mathieu-Daudé
74781c0888 exec/cpu: Extract page-protection definitions to page-protection.h
Extract page-protection definitions from "exec/cpu-all.h"
to "exec/page-protection.h".

The list of files requiring the new header was generated
using:

$ git grep -wE \
  'PAGE_(READ|WRITE|EXEC|RWX|VALID|ANON|RESERVED|TARGET_.|PASSTHROUGH)'

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Acked-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240427155714.53669-3-philmd@linaro.org>
2024-05-06 11:17:15 +02:00
Romain Malmain
7c3c7877d8 Update to QEMU 9.0.0 (#67)
* Update to QEMU v9.0.0

---------

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cédric Le Goater <clg@redhat.com>
Signed-off-by: Zheyu Ma <zheyuma97@gmail.com>
Signed-off-by: Ido Plat <ido.plat@ibm.com>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Gregory Price <gregory.price@memverge.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Lorenz Brun <lorenz@brun.one>
Signed-off-by: Yao Xingtao <yaoxt.fnst@fujitsu.com>
Signed-off-by: Arnaud Minier <arnaud.minier@telecom-paris.fr>
Signed-off-by: Inès Varhol <ines.varhol@telecom-paris.fr>
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Benjamin Gray <bgray@linux.ibm.com>
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Joonas Kankaala <joonas.a.kankaala@gmail.com>
Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
Signed-off-by: Oleg Sviridov <oleg.sviridov@red-soft.ru>
Signed-off-by: Artem Chernyshev <artem.chernyshev@red-soft.ru>
Signed-off-by: Yajun Wu <yajunw@nvidia.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Pierre-Clément Tosi <ptosi@google.com>
Signed-off-by: Lei Wang <lei4.wang@intel.com>
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
Signed-off-by: Martin Hundebøll <martin@geanix.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>
Signed-off-by: Wafer <wafer@jaguarmicro.com>
Signed-off-by: Yuxue Liu <yuxue.liu@jaguarmicro.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Nguyen Dinh Phi <phind.uet@gmail.com>
Signed-off-by: Zack Buhman <zack@buhman.org>
Signed-off-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Yuquan Wang <wangyuquan1236@phytium.com.cn>
Signed-off-by: Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
Signed-off-by: Cindy Lu <lulu@redhat.com>
Co-authored-by: Peter Maydell <peter.maydell@linaro.org>
Co-authored-by: Fabiano Rosas <farosas@suse.de>
Co-authored-by: Peter Xu <peterx@redhat.com>
Co-authored-by: Thomas Huth <thuth@redhat.com>
Co-authored-by: Cédric Le Goater <clg@redhat.com>
Co-authored-by: Zheyu Ma <zheyuma97@gmail.com>
Co-authored-by: Ido Plat <ido.plat@ibm.com>
Co-authored-by: Ilya Leoshkevich <iii@linux.ibm.com>
Co-authored-by: Markus Armbruster <armbru@redhat.com>
Co-authored-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Co-authored-by: Paolo Bonzini <pbonzini@redhat.com>
Co-authored-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Co-authored-by: David Hildenbrand <david@redhat.com>
Co-authored-by: Kevin Wolf <kwolf@redhat.com>
Co-authored-by: Stefan Reiter <s.reiter@proxmox.com>
Co-authored-by: Fiona Ebner <f.ebner@proxmox.com>
Co-authored-by: Gregory Price <gregory.price@memverge.com>
Co-authored-by: Lorenz Brun <lorenz@brun.one>
Co-authored-by: Yao Xingtao <yaoxt.fnst@fujitsu.com>
Co-authored-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Co-authored-by: Arnaud Minier <arnaud.minier@telecom-paris.fr>
Co-authored-by: BALATON Zoltan <balaton@eik.bme.hu>
Co-authored-by: Igor Mammedov <imammedo@redhat.com>
Co-authored-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Co-authored-by: Richard Henderson <richard.henderson@linaro.org>
Co-authored-by: Sven Schnelle <svens@stackframe.org>
Co-authored-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Co-authored-by: Helge Deller <deller@kernel.org>
Co-authored-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Co-authored-by: Benjamin Gray <bgray@linux.ibm.com>
Co-authored-by: Nicholas Piggin <npiggin@gmail.com>
Co-authored-by: Avihai Horon <avihaih@nvidia.com>
Co-authored-by: Michael Tokarev <mjt@tls.msk.ru>
Co-authored-by: Joonas Kankaala <joonas.a.kankaala@gmail.com>
Co-authored-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Co-authored-by: Stefan Weil <sw@weilnetz.de>
Co-authored-by: Dayu Liu <liu.dayu@zte.com.cn>
Co-authored-by: Zhao Liu <zhao1.liu@intel.com>
Co-authored-by: Glenn Miles <milesg@linux.vnet.ibm.com>
Co-authored-by: Artem Chernyshev <artem.chernyshev@red-soft.ru>
Co-authored-by: Yajun Wu <yajunw@nvidia.com>
Co-authored-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Co-authored-by: Pierre-Clément Tosi <ptosi@google.com>
Co-authored-by: Wei Wang <wei.w.wang@intel.com>
Co-authored-by: Martin Hundebøll <martin@geanix.com>
Co-authored-by: Michael S. Tsirkin <mst@redhat.com>
Co-authored-by: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>
Co-authored-by: Wafer <wafer@jaguarmicro.com>
Co-authored-by: lyx634449800 <yuxue.liu@jaguarmicro.com>
Co-authored-by: Gerd Hoffmann <kraxel@redhat.com>
Co-authored-by: Nguyen Dinh Phi <phind.uet@gmail.com>
Co-authored-by: Zack Buhman <zack@buhman.org>
Co-authored-by: Keith Packard <keithp@keithp.com>
Co-authored-by: Yuquan Wang <wangyuquan1236@phytium.com.cn>
Co-authored-by: Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
Co-authored-by: Cindy Lu <lulu@redhat.com>
2024-05-01 16:10:20 +02:00
Philippe Mathieu-Daudé
aacfd8bbaf exec: Move CPUTLBEntry helpers to cputlb.c
The following CPUTLBEntry helpers are only used in accel/tcg/cputlb.c:
  - tlb_index()
  - tlb_entry()
  - tlb_read_idx()
  - tlb_addr_write()

Move them to this file, allowing us to remove the huge "cpu.h" header
inclusion from "exec/cpu_ldst.h".

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240418192525.97451-13-philmd@linaro.org>
2024-04-26 17:03:05 +02:00
Philippe Mathieu-Daudé
51579d40f9 exec: Reduce tlb_set_dirty() declaration scope
tlb_set_dirty() is only used in accel/tcg/cputlb.c,
where it is defined. Declare it statically, removing
the stub.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240418192525.97451-11-philmd@linaro.org>
2024-04-26 15:28:11 +02:00
Richard Henderson
49fa457ca5 accel/tcg: Add TLB_CHECK_ALIGNED
This creates a per-page method for checking of alignment.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240301204110.656742-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-03-05 13:22:56 +00:00
Richard Henderson
a0ff4a879c accel/tcg: Add tlb_fill_flags to CPUTLBEntryFull
Allow the target to set tlb flags to apply to all of the
comparators.  Remove MemTxAttrs.byte_swap, as the bit is
not relevant to memory transactions, only the page mapping.
Adjust target/sparc to set TLB_BSWAP directly.
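A sketch of the sparc-style usage, with the surrounding field names assumed; the TLB_CHECK_ALIGNED flag from the entry above is applied through the same field:

  /* In a target's tlb_fill path (sketch): */
  CPUTLBEntryFull full = {
      .phys_addr    = paddr,
      .attrs        = MEMTXATTRS_UNSPECIFIED,
      .prot         = prot,
      .lg_page_size = TARGET_PAGE_BITS,
  };
  full.tlb_fill_flags |= TLB_BSWAP;   /* replaces MemTxAttrs.byte_swap */
  tlb_set_page_full(cs, mmu_idx, vaddr, &full);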

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240301204110.656742-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-03-05 13:22:56 +00:00
Jonathan Cameron
6aba908d2b tcg: Avoid double lock if page tables happen to be in mmio memory.
On i386, after fixing the page walking code to work with pages in
MMIO memory (specifically CXL emulated interleaved memory),
a crash was seen in an interrupt handling path.

Useful part of backtrace

7  0x0000555555ab1929 in bql_lock_impl (file=0x555556049122 "../../accel/tcg/cputlb.c", line=2033) at ../../system/cpus.c:524
8  bql_lock_impl (file=file@entry=0x555556049122 "../../accel/tcg/cputlb.c", line=line@entry=2033) at ../../system/cpus.c:520
9  0x0000555555c9f7d6 in do_ld_mmio_beN (cpu=0x5555578e0cb0, full=0x7ffe88012950, ret_be=ret_be@entry=0, addr=19595792376, size=size@entry=8, mmu_idx=4, type=MMU_DATA_LOAD, ra=0) at ../../accel/tcg/cputlb.c:2033
10 0x0000555555ca0fbd in do_ld_8 (cpu=cpu@entry=0x5555578e0cb0, p=p@entry=0x7ffff4efd1d0, mmu_idx=<optimized out>, type=type@entry=MMU_DATA_LOAD, memop=<optimized out>, ra=ra@entry=0) at ../../accel/tcg/cputlb.c:2356
11 0x0000555555ca341f in do_ld8_mmu (cpu=cpu@entry=0x5555578e0cb0, addr=addr@entry=19595792376, oi=oi@entry=52, ra=0, ra@entry=52, access_type=access_type@entry=MMU_DATA_LOAD) at ../../accel/tcg/cputlb.c:2439
12 0x0000555555ca5f59 in cpu_ldq_mmu (ra=52, oi=52, addr=19595792376, env=0x5555578e3470) at ../../accel/tcg/ldst_common.c.inc:169
13 cpu_ldq_le_mmuidx_ra (env=0x5555578e3470, addr=19595792376, mmu_idx=<optimized out>, ra=ra@entry=0) at ../../accel/tcg/ldst_common.c.inc:301
14 0x0000555555b4b5fc in ptw_ldq (ra=0, in=0x7ffff4efd320) at ../../target/i386/tcg/sysemu/excp_helper.c:98
15 ptw_ldq (ra=0, in=0x7ffff4efd320) at ../../target/i386/tcg/sysemu/excp_helper.c:93
16 mmu_translate (env=env@entry=0x5555578e3470, in=0x7ffff4efd3e0, out=0x7ffff4efd3b0, err=err@entry=0x7ffff4efd3c0, ra=ra@entry=0) at ../../target/i386/tcg/sysemu/excp_helper.c:174
17 0x0000555555b4c4b3 in get_physical_address (ra=0, err=0x7ffff4efd3c0, out=0x7ffff4efd3b0, mmu_idx=0, access_type=MMU_DATA_LOAD, addr=18446741874686299840, env=0x5555578e3470) at ../../target/i386/tcg/sysemu/excp_helper.c:580
18 x86_cpu_tlb_fill (cs=0x5555578e0cb0, addr=18446741874686299840, size=<optimized out>, access_type=MMU_DATA_LOAD, mmu_idx=0, probe=<optimized out>, retaddr=0) at ../../target/i386/tcg/sysemu/excp_helper.c:606
19 0x0000555555ca0ee9 in tlb_fill (retaddr=0, mmu_idx=0, access_type=MMU_DATA_LOAD, size=<optimized out>, addr=18446741874686299840, cpu=0x7ffff4efd540) at ../../accel/tcg/cputlb.c:1315
20 mmu_lookup1 (cpu=cpu@entry=0x5555578e0cb0, data=data@entry=0x7ffff4efd540, mmu_idx=0, access_type=access_type@entry=MMU_DATA_LOAD, ra=ra@entry=0) at ../../accel/tcg/cputlb.c:1713
21 0x0000555555ca2c61 in mmu_lookup (cpu=cpu@entry=0x5555578e0cb0, addr=addr@entry=18446741874686299840, oi=oi@entry=32, ra=ra@entry=0, type=type@entry=MMU_DATA_LOAD, l=l@entry=0x7ffff4efd540) at ../../accel/tcg/cputlb.c:1803
22 0x0000555555ca3165 in do_ld4_mmu (cpu=cpu@entry=0x5555578e0cb0, addr=addr@entry=18446741874686299840, oi=oi@entry=32, ra=ra@entry=0, access_type=access_type@entry=MMU_DATA_LOAD) at ../../accel/tcg/cputlb.c:2416
23 0x0000555555ca5ef9 in cpu_ldl_mmu (ra=0, oi=32, addr=18446741874686299840, env=0x5555578e3470) at ../../accel/tcg/ldst_common.c.inc:158
24 cpu_ldl_le_mmuidx_ra (env=env@entry=0x5555578e3470, addr=addr@entry=18446741874686299840, mmu_idx=<optimized out>, ra=ra@entry=0) at ../../accel/tcg/ldst_common.c.inc:294
25 0x0000555555bb6cdd in do_interrupt64 (is_hw=1, next_eip=18446744072399775809, error_code=0, is_int=0, intno=236, env=0x5555578e3470) at ../../target/i386/tcg/seg_helper.c:889
26 do_interrupt_all (cpu=cpu@entry=0x5555578e0cb0, intno=236, is_int=is_int@entry=0, error_code=error_code@entry=0, next_eip=next_eip@entry=0, is_hw=is_hw@entry=1) at ../../target/i386/tcg/seg_helper.c:1130
27 0x0000555555bb87da in do_interrupt_x86_hardirq (env=env@entry=0x5555578e3470, intno=<optimized out>, is_hw=is_hw@entry=1) at ../../target/i386/tcg/seg_helper.c:1162
28 0x0000555555b5039c in x86_cpu_exec_interrupt (cs=0x5555578e0cb0, interrupt_request=<optimized out>) at ../../target/i386/tcg/sysemu/seg_helper.c:197
29 0x0000555555c94480 in cpu_handle_interrupt (last_tb=<synthetic pointer>, cpu=0x5555578e0cb0) at ../../accel/tcg/cpu-exec.c:844

Peter identified this as being due to the BQL already being
held when the page table walker encounters MMIO memory and attempts
to take the lock again.  There are other examples of similar paths in
TCG, so this follows the approach taken there: check whether the lock
is already held and, if it is, don't take it again.
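The pattern described amounts to the following sketch (rth's final version uses BQL_LOCK_GUARD instead):

  bool locked = bql_locked();
  if (!locked) {
      bql_lock();
  }
  /* ... perform the MMIO access for the page-table load ... */
  if (!locked) {
      bql_unlock();
  }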

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Message-Id: <20240219173153.12114-4-Jonathan.Cameron@huawei.com>
[rth: Use BQL_LOCK_GUARD]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-02-29 11:35:36 -10:00
Richard Henderson
3b91614004 include/exec: Change cpu_mmu_index argument to CPUState
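Call sites change along these lines (a sketch; env_cpu() is assumed as the usual CPUArchState-to-CPUState conversion):

  /* before */
  int mmu_idx = cpu_mmu_index(env, false);
  /* after */
  int mmu_idx = cpu_mmu_index(env_cpu(env), false);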
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-02-03 16:46:10 +10:00
Stefan Hajnoczi
a4a411fbaf Replace "iothread lock" with "BQL" in comments
The term "iothread lock" is obsolete. The APIs use Big QEMU Lock (BQL)
in their names. Update the code comments to use "BQL" instead of
"iothread lock".

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Paul Durrant <paul@xen.org>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Message-id: 20240102153529.486531-5-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2024-01-08 10:45:43 -05:00
Stefan Hajnoczi
195801d700 system/cpus: rename qemu_mutex_lock_iothread() to bql_lock()
The Big QEMU Lock (BQL) has many names and they are confusing. The
actual QemuMutex variable is called qemu_global_mutex but it's commonly
referred to as the BQL in discussions and some code comments. The
locking APIs, however, are called qemu_mutex_lock_iothread() and
qemu_mutex_unlock_iothread().

The "iothread" name is historic and comes from when the main thread was
split into KVM vcpu threads and the "iothread" (now called the main
loop thread). I have contributed to the confusion myself by introducing
a separate --object iothread, a separate concept unrelated to the BQL.

The "iothread" name is no longer appropriate for the BQL. Rename the
locking APIs to:
- void bql_lock(void)
- void bql_unlock(void)
- bool bql_locked(void)

There are more APIs with "iothread" in their names. Subsequent patches
will rename them. There are also comments and documentation that will be
updated in later patches.
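In code the change is a mechanical substitution:

  /* before */
  qemu_mutex_lock_iothread();
  /* ... critical section ... */
  qemu_mutex_unlock_iothread();

  /* after */
  bql_lock();
  /* ... critical section ... */
  bql_unlock();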

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Acked-by: Fabiano Rosas <farosas@suse.de>
Acked-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Acked-by: Peter Xu <peterx@redhat.com>
Acked-by: Eric Farman <farman@linux.ibm.com>
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Acked-by: Hyman Huang <yong.huang@smartx.com>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-id: 20240102153529.486531-2-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2024-01-08 10:45:43 -05:00
Romain Malmain
aa67fcae61 Syx Snapshot rework
- Most of the tables are now GHashTable instances
- Snapshot correctness checking
- Simplified API
- More callbacks to catch more dirty pages
2023-11-21 10:39:42 +01:00
Andrea Fioraldi
b0c8272465 Fix translation but not execution of edge TB 2023-11-17 14:48:04 +01:00
Jessica Clarke
e2faabee78 accel/tcg: Forward probe size on to notdirty_write
Without this, we just dirty a single byte, and so if the caller writes
more than one byte to the host memory then we won't have invalidated any
translation blocks that start after the first byte and overlap those
writes. In particular, AArch64's DC ZVA implementation uses probe_access
(via probe_write), and so we don't invalidate the entire block, only the
TB overlapping the first byte (and, in the unusual case an unaligned VA
is given to the instruction, we also probe that specific address in
order to get the right VA reported on an exception, so will invalidate a
TB overlapping that address too). Since our IC IVAU implementation is a
no-op for system emulation that relies on the softmmu already having
detected self-modifying code via this mechanism, this means we have
observably wrong behaviour when jumping to code that has been DC ZVA'ed.
In practice this is an unusual thing for software to do, as in reality
the OS will DC ZVA the page and the application will go and write actual
instructions to it that aren't UDF #0, but you can write a test that
clearly shows the faulty behaviour.

For functions other than probe_access it's not clear what size to use
when 0 is passed in. Arguably a size of 0 shouldn't dirty at all, since
if you want to actually write then you should pass in a real size, but I
have conservatively kept the implementation as dirtying the first byte
in that case so as to avoid breaking any assumptions about that
behaviour.
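The AArch64 path referenced above looks roughly like this (a sketch, not the exact helper_dc_zva code); with this fix the full block length reaches notdirty_write, so every overlapping TB is invalidated:

  /* Probe the whole block as writable, then zero it through the host pointer. */
  void *host = probe_write(env, vaddr, blocklen, mmu_idx, retaddr);
  if (host) {
      memset(host, 0, blocklen);
  }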

Signed-off-by: Jessica Clarke <jrtc27@jrtc27.com>
Message-Id: <20231104031232.3246614-1-jrtc27@jrtc27.com>
[rth: Move the dirtysize computation next to notdirty_write.]
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-11-14 10:40:54 -08:00
Philippe Mathieu-Daudé
f4f826c0e0 accel/tcg: Declare tcg_flush_jmp_cache() in 'exec/tb-flush.h'
"exec/cpu-common.h" is meant to contain the declarations
related to CPU usable with any accelerator / target
combination.

tcg_flush_jmp_cache() is specific to TCG, so restrict its
declaration by moving it to "exec/tb-flush.h".
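Users of the function now include the narrower header:

  #include "exec/tb-flush.h"   /* tcg_flush_jmp_cache() */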

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230918104153.24433-2-philmd@linaro.org>
2023-11-07 12:13:27 +01:00
Richard Henderson
6046f6e94d accel/tcg: Fix condition for store_atom_insert_al16
Storing bytes under a mask is fundamentally a cmpxchg, not a straight store.
Use HAVE_CMPXCHG128 instead of HAVE_ATOMIC128_RW.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230916220151.526140-8-richard.henderson@linaro.org>
2023-11-06 08:27:21 -08:00
Richard Henderson
24a4d59aa7 accel/tcg: Move HMP info jit and info opcount code
Move all of it into accel/tcg/monitor.c.  This puts everything
about tcg that is only used by the monitor in the same place.

Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-11-06 08:27:21 -08:00
Romain Malmain
bbd52db5f6 Fix wrong dirty address tracking for MMIO accesses. The assert is only triggered in debug builds. 2023-10-30 17:37:07 +01:00
Andrea Fioraldi
d86aae8ed9 Merge 2023-10-25 11:36:37 +02:00
Philippe Mathieu-Daudé
43e7a2d3f9 accel/tcg: Make cpu-exec-common.c a target agnostic unit
cpu_in_serial_context() is not target specific, so
move its declaration to "internal-common.h" (which
we include in the 4 modified source files).

Remove the unused "exec/exec-all.h" header from
cpu-exec-common.c.  There is no more target specific
code in this file: make it target agnostic.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230914185718.76241-12-philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-10-04 11:03:54 -07:00
Philippe Mathieu-Daudé
4c268d6d03 accel/tcg: Rename target-specific 'internal.h' -> 'internal-target.h'
accel/tcg/internal.h contains target-specific declarations.
Unit files including it become "target tainted": they cannot
be compiled as target agnostic. Rename it with the '-target'
suffix to make this explicit.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230914185718.76241-9-philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-10-04 11:03:54 -07:00
Anton Johansson
27c46fadf6 accel/tcg: move ld/st helpers to ldst_common.c.inc
A large chunk of ld/st functions are moved from cputlb.c and user-exec.c
to ldst_common.c.inc as their implementation is the same between both
modes.

Eventually, ldst_common.c.inc could be compiled into a separate
target-specific compilation unit, and be linked in with the targets.
This keeps CPUArchState usage out of cputlb.c (CPUArchState is primarily
used to access the mmu index in these functions).

Signed-off-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230912153428.17816-12-anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-10-04 11:03:54 -07:00
Anton Johansson
e20f73fba5 accel/tcg: Unify user and softmmu do_[st|ld]*_mmu()
The prototype of do_[st|ld]*_mmu() is unified between system and
user mode, allowing a large chunk of the helper_[st|ld]*() and cpu_[st|ld]*()
functions to be expressed in the same manner in both modes. These
functions will be moved to ldst_common.c.inc in a following commit.

Signed-off-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230912153428.17816-11-anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-10-04 11:03:54 -07:00
Anton Johansson
73fda56f33 accel/tcg: Use CPUState in atomicity helpers
Makes ldst_atomicity.c.inc almost target-independent, with the exception
of TARGET_PAGE_MASK, which will be addressed in a future patch.

Signed-off-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230912153428.17816-8-anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-10-04 11:03:54 -07:00
Anton Johansson
d560225fc4 accel/tcg: Modify atomic_mmu_lookup() to use CPUState
The goal is to eventually allow per-target compilation of the
functions in atomic_template.h while atomic_mmu_lookup() and cputlb.c
are compiled once per user- or system-mode build.

Signed-off-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230912153428.17816-7-anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
[rth: Use cpu->neg.tlb instead of cpu_tlb()]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-10-04 11:03:54 -07:00
Anton Johansson
d50ef4467c accel/tcg: Modify memory access functions to use CPUState
do_[ld|st]*() and mmu_lookup*() are changed to use CPUState over
CPUArchState, moving the target dependence to the target-facing
cpu_[ld|st] functions.

Signed-off-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230912153428.17816-6-anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
[rth: Use cpu->neg.tlb instead of cpu_tlb; cpu_env instead of env_ptr.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-10-04 11:03:54 -07:00
Anton Johansson
5afec1c63b accel/tcg: Modify probe_access_internal() to use CPUState
probe_access_internal() is changed to instead take the generic CPUState
over CPUArchState, in order to lessen the target-specific coupling of
cputlb.c. Note: probe_access*() also don't need the full CPUArchState,
but aren't touched in this patch as they are target-facing.

Signed-off-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230912153428.17816-5-anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
[rth: Use cpu->neg.tlb instead of cpu_tlb()]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-10-04 11:03:54 -07:00
Anton Johansson
10b32e2cd9 accel/tcg: Modify tlb_*() to use CPUState
Changes tlb_*() functions to take CPUState instead of CPUArchState, as
they don't require the full CPUArchState. This makes it easier to
decouple target-(in)dependent code.

Signed-off-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230912153428.17816-4-anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
[rth: Use cpu->neg.tlb instead of cpu_tlb()]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-10-04 11:03:54 -07:00
Richard Henderson
b77af26e97 accel/tcg: Replace CPUState.env_ptr with cpu_env()
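The mechanical form of the replacement:

  /* before */
  CPUArchState *env = cpu->env_ptr;
  /* after */
  CPUArchState *env = cpu_env(cpu);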
Reviewed-by: Anton Johansson <anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-10-04 11:03:54 -07:00
Richard Henderson
464dacf609 accel/tcg: Move can_do_io to CPUNegativeOffsetState
Minimize the displacement to can_do_io, since it may
be touched at the start of each TranslationBlock.
It fits into other padding within the substructure.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-10-03 08:01:02 -07:00
Andrea Fioraldi
09e79261f6 Merge 2023-09-25 11:50:05 +02:00
Richard Henderson
1f9823cea2 accel/tcg: Introduce do_st16_mmio_leN
Split out int_st_mmio_leN, to be used by both do_st_mmio_leN
and do_st16_mmio_leN.  Move the locks down into the two
functions, since each one now covers all accesses to one page.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-09-16 14:57:15 +00:00
Richard Henderson
8bf6726741 accel/tcg: Introduce do_ld16_mmio_beN
Split out int_ld_mmio_beN, to be used by both do_ld_mmio_beN
and do_ld16_mmio_beN.  Move the locks down into the two
functions, since each one now covers all accesses to one page.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-09-16 14:57:15 +00:00