929 Commits

Author SHA1 Message Date
Romain Malmain
5682a6d841 v10.0.0 release
-----BEGIN PGP SIGNATURE-----
 
 iQEzBAABCgAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmgHmpAACgkQnKSrs4Gr
 c8h82wf/fVN/ZlYKLX7VJz0z+u3UB5MKuDUd+7LUwSGse9uIOH3K8PITkMyYgIti
 Sh8EKg9rhVzBEpiL9ZJfqCJjQTgJFk0O4xt3dPSGNsI2pZZcDwvQXFit7e/fafrY
 tUaTPdGuZ+i7s8Ooa+Z5tacI7n8KniQQkgf90oTnKhatmDmUbsVE0fma/2EmgqdI
 fO2mJKp5YiDsRf3vmuVKx/ltHYfL2tOvBOojeWBk9Zwr+czI2ku6Fy1Suu+tWeZ5
 setxSOCfY3G+qVsTm3n0d9OW/GPoQBsSVbSYua/74nQneNivTDAncndLFbFdj60g
 Q9n4t7tHN35Nh4XqkE0DhMGqPsQ3Og==
 =CFYe
 -----END PGP SIGNATURE-----
gpgsig -----BEGIN PGP SIGNATURE-----
 
 iHUEABYKAB0WIQSq9xYmtep25y1RrMYC5KE/dBVGigUCaBCxXQAKCRAC5KE/dBVG
 imXmAP0WaWyc2kmipvGyhdGor7F4PlG9LRHL0jM4Om5SM4lkzAD/WnyFAXtErEwl
 eK0c2d980jdVHS5h9tVDK5TpzcPCRA0=
 =Zk18
 -----END PGP SIGNATURE-----

Merge tag 'v10.0.0' into update_qemu_v10_0_0

v10.0.0 release
2025-04-29 13:00:44 +02:00
Romain Malmain
2a676d9cd8 v9.2.2 release
-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEZKoqtTHVaQM2a/75gqpKJDselHgFAme8B8gACgkQgqpKJDse
 lHjzqxAAl9+xkHoXtgsnMhENO8dNznCPFh3AGKacxrahv1/XP/ghjPF8NNV0tGDK
 us73n0rNJG88dW2RIQVTjZJ5WYXaMwFBYrPBD2F0MROpiLmjXkHTr/fuH9Z7GkXI
 DOAfzf9Hf2BgKlolLAxvL55LckolAM7C87DNE0gtg/OT+d+XXfFcCpQf6wn+v+B7
 vAj5v7ir96rBffjjbRm2wItIsBDhzSxUxdaSnefC3CT8O2hbD6OcPa9o8WH2fLIR
 HHBLsW+2JTxv01iKRwPLfA00RIbxvC9QaaxTdkyBcnWIwbJy7LIWDvy37pnfHOHS
 XBp/AXEiQ7CXWat2451CAx2WPA/Vbcz4ekNSlBFk4tGNAZTJc9gL/doTXaAOl1SM
 8URJpe/gIUVENICkZe17UXG1L2zdMclAUCrFwgzPv6Ljth8ctFC8Gdk2xvYw5etY
 wQaILuXtzl0RgGVHrVLRL3q1w51YKv7aii6v+czHjwgDRDchc1h3m2+33UPERVZe
 ymSs1R5Vvmh8kE7v0coJDtR2BLRb4++AvBKiJ6ty6UqHA/F5JLCSE7dwwUuim9YY
 7E2jI2cNX+HO8yfwNoqZQ2cr2gAtMIm4hHE4hs0iqamfi/RGk8xw9HrRPlXorj9y
 +KWDYTqYAXOtd+qZyQtbppHKGOEAKXjg9qdYNy9N5KyAe5jrd/8=
 =06yL
 -----END PGP SIGNATURE-----
gpgsig -----BEGIN PGP SIGNATURE-----
 
 iHUEABYKAB0WIQSq9xYmtep25y1RrMYC5KE/dBVGigUCZ9mEEAAKCRAC5KE/dBVG
 isziAP9tS6m4jKmDiYyLoYHT5tQ8+gI0R3kMl5U8VNGOx+/kfgD/X11dFM7VaVDo
 fecgc4U1dVPRguh5WO1cjEL3k8IDQAU=
 =RdqL
 -----END PGP SIGNATURE-----

Merge tag 'v9.2.2' into update_qemu_v9_2_2

v9.2.2 release
2025-03-18 15:32:47 +01:00
Peter Maydell
39ec3fc030 target/arm: HCR_EL2.RW should be RAO/WI if EL1 doesn't support AArch32
When EL1 doesn't support AArch32, the HCR_EL2.RW bit is supposed to
be RAO/WI. Enforce the RAO/WI behaviour.

Note that we handle "reset value should honour RES1 bits" in the same
way that SCR_EL3 does, via a reset function.
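
A minimal sketch of the enforcement, assuming the existing
aa64_aa32_el1 feature test (the exact hook point in hcr_write() and
in the reset function may differ):

    if (!cpu_isar_feature(aa64_aa32_el1, env_archcpu(env))) {
        value |= HCR_RW;    /* EL1 is AArch64-only: RW is RAO/WI */
    }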

We do already have some CPU types which don't implement AArch32
above EL0, so this is technically a bug; it doesn't seem worth
backporting to stable because no sensible guest code will be
deliberately attempting to set the RW bit to a value corresponding
to an unimplemented execution state and then checking that we
did the right thing.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2025-03-14 12:54:33 +00:00
Peter Maydell
5d71c6820f target/arm: SCR_EL3.RW should be treated as 1 if EL2 doesn't support AArch32
The definition of SCR_EL3.RW says that its effective value is 1 if:
 - EL2 is implemented and does not support AArch32, and SCR_EL3.NS is 1
 - the effective value of SCR_EL3.{EEL2,NS} is {1,0} (i.e. we are
   Secure and Secure EL2 is disabled)

We implement the second of these in arm_el_is_aa64(), but forgot the
first.

Provide a new function arm_scr_rw_eff() to return the effective
value of SCR_EL3.RW, and use it in arm_el_is_aa64() and the other
places that currently look directly at the bit value.

(scr_write() enforces that the RW bit is RAO/WI if neither EL1 nor
EL2 have AArch32 support, but if EL1 does but EL2 does not then the
bit must still be writeable.)
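
A sketch of the new helper as described above; the feature-test name
for "EL2 supports AArch32" is an assumption, and the real function may
be structured differently:

    static bool arm_scr_rw_eff(CPUARMState *env)
    {
        uint64_t scr = env->cp15.scr_el3;

        if (scr & SCR_RW) {
            return true;
        }
        if (scr & SCR_NS) {
            /* Non-secure: effectively 1 if EL2 is AArch64-only */
            return arm_feature(env, ARM_FEATURE_EL2) &&
                   !cpu_isar_feature(aa64_aa32_el2, env_archcpu(env));
        }
        /* Secure: effectively 1 if Secure EL2 is enabled */
        return scr & SCR_EEL2;
    }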

This will mean that if code at EL3 attempts to perform an exception
return to AArch32 EL2 when EL2 is AArch64-only we will correctly
handle this as an illegal exception return: it will be caught by the
"return to an EL which is configured for a different register width"
check in HELPER(exception_return).

We do already have some CPU types which don't implement AArch32
above EL0, so this is technically a bug; it doesn't seem worth
backporting to stable because no sensible guest code will be
deliberately attempting to set the RW bit to a value corresponding
to an unimplemented execution state and then checking that we
did the right thing.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2025-03-14 10:49:20 +00:00
Stefan Hajnoczi
d9a4282c4b include/qemu: Tidy atomic128 headers.
include/exec: Split out cpu-interrupt.h
 include/exec: Split many tlb_* declarations to cputlb.h
 include/accel/tcg: Split out getpc.h
 accel/tcg: system: Compile some files once
 linux-user/main: Allow setting tb-size
 -----BEGIN PGP SIGNATURE-----
 
 iQFRBAABCgA7FiEEekgeeIaLTbaoWgXAZN846K9+IV8FAmfMyz8dHHJpY2hhcmQu
 aGVuZGVyc29uQGxpbmFyby5vcmcACgkQZN846K9+IV9D/Af/Vh5PMtYjL+Mw2NQn
 Vmqbv+joiqswAxI8PmZZzEBJ06j4pCLXn+r/2nr+sEwLmrI4BI40Vxx5c5puftoZ
 GDGGclskF/pId5TE96TCEr8AoJgeNSSv4WxbINFTZRsRP4voZFHpU6mTz6B0Nnq5
 GS/k6c7+VcYbHIPD0RcIWwBlQv11uUAcnaygkNSsy+theUseOzTPTN/XGfTprf/6
 1sxlmtt6QcQ88bBJJbiNwqbjWGxANcSUspRo0sstpVr8ApkXNl7WSkWYRBhBa5oc
 iu0tixdCIoqqcCJy9/YVyIkmmwWeRUkbQqBeKf0o5xPnhmO3kfeezvERSDvDViAH
 K9BVBw==
 =7vra
 -----END PGP SIGNATURE-----

Merge tag 'pull-tcg-20250308' of https://gitlab.com/rth7680/qemu into staging

include/qemu: Tidy atomic128 headers.
include/exec: Split out cpu-interrupt.h
include/exec: Split many tlb_* declarations to cputlb.h
include/accel/tcg: Split out getpc.h
accel/tcg: system: Compile some files once
linux-user/main: Allow setting tb-size

# -----BEGIN PGP SIGNATURE-----
#
# iQFRBAABCgA7FiEEekgeeIaLTbaoWgXAZN846K9+IV8FAmfMyz8dHHJpY2hhcmQu
# aGVuZGVyc29uQGxpbmFyby5vcmcACgkQZN846K9+IV9D/Af/Vh5PMtYjL+Mw2NQn
# Vmqbv+joiqswAxI8PmZZzEBJ06j4pCLXn+r/2nr+sEwLmrI4BI40Vxx5c5puftoZ
# GDGGclskF/pId5TE96TCEr8AoJgeNSSv4WxbINFTZRsRP4voZFHpU6mTz6B0Nnq5
# GS/k6c7+VcYbHIPD0RcIWwBlQv11uUAcnaygkNSsy+theUseOzTPTN/XGfTprf/6
# 1sxlmtt6QcQ88bBJJbiNwqbjWGxANcSUspRo0sstpVr8ApkXNl7WSkWYRBhBa5oc
# iu0tixdCIoqqcCJy9/YVyIkmmwWeRUkbQqBeKf0o5xPnhmO3kfeezvERSDvDViAH
# K9BVBw==
# =7vra
# -----END PGP SIGNATURE-----
# gpg: Signature made Sun 09 Mar 2025 06:57:03 HKT
# gpg:                using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg:                issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A  05C0 64DF 38E8 AF7E 215F

* tag 'pull-tcg-20250308' of https://gitlab.com/rth7680/qemu: (23 commits)
  accel/tcg: Build tcg-runtime-gvec.c once
  accel/tcg: Build tcg-runtime.c once
  qemu/atomic128: Include missing 'qemu/atomic.h' header
  qemu/atomic: Rename atomic128-ldst.h headers using .h.inc suffix
  qemu/atomic: Rename atomic128-cas.h headers using .h.inc suffix
  accel/tcg: Split out getpc.h
  accel/tcg: Restrict GETPC_ADJ to 'tb-internal.h'
  accel/tcg: Build tcg-accel-ops-mttcg.c once
  accel/tcg: Build tcg-accel-ops-rr.c once
  accel/tcg: Build tcg-accel-ops-icount.c once
  accel/tcg: Build tcg-accel-ops.c once
  system: Build watchpoint.c once
  exec: Declare tlb_flush*() in 'exec/cputlb.h'
  exec: Declare tlb_hit*() in 'exec/cputlb.h'
  exec: Declare tlb_set_page() in 'exec/cputlb.h'
  exec: Declare tlb_set_page_with_attrs() in 'exec/cputlb.h'
  exec: Declare tlb_set_page_full() in 'exec/cputlb.h'
  exec: Declare tlb_reset_dirty*() in 'exec/cputlb.h'
  accel/tcg: Compile watchpoint.c once
  include/exec: Split out exec/cpu-interrupt.h
  ...

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2025-03-09 11:45:00 +08:00
Philippe Mathieu-Daudé
6ff5da1600 exec: Declare tlb_flush*() in 'exec/cputlb.h'
Move CPU TLB related methods to "exec/cputlb.h".

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Message-ID: <20241114011310.3615-19-philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2025-03-08 07:56:14 -08:00
Alex Bennée
f9f99d7ca5 target/arm: Implement SEL2 physical and virtual timers
When FEAT_SEL2 was implemented the SEL2 timers were missed. This
shows up when building the latest Hafnium with SPMC_AT_EL=2. The
actual implementation utilises the same logic as the rest of the
timers so all we need to do is:

  - define the timers and their access functions
  - conditionally add the correct system registers
  - create a new accessfn as the rules are subtly different to the
    existing secure timer

Fixes: e9152ee91c (target/arm: add ARMv8.4-SEL2 system registers)
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20250204125009.2281315-7-peter.maydell@linaro.org
Cc: qemu-stable@nongnu.org
Cc: Andrei Homescu <ahomescu@google.com>
Cc: Arve Hjønnevåg <arve@google.com>
Cc: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
[PMM: CP_ACCESS_TRAP_UNCATEGORIZED -> CP_ACCESS_UNDEFINED;
 offset logic now in gt_{indirect,direct}_access_timer_offset() ]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2025-03-07 10:08:21 +00:00
Peter Maydell
02c648a0a1 target/arm: Refactor handling of timer offset for direct register accesses
When reading or writing the timer registers, sometimes we need to
apply one of the timer offsets.  Specifically, this happens for
direct reads of the counter registers CNTPCT_EL0 and CNTVCT_EL0 (and
their self-synchronized variants CNTVCTSS_EL0 and CNTPCTSS_EL0).  It
also applies for direct reads and writes of the CNT*_TVAL_EL*
registers that provide the 32-bit downcounting view of each timer.

We currently do this with duplicated code in gt_tval_read() and
gt_tval_write() and a special-case in gt_virt_cnt_read() and
gt_cnt_read().  Refactor this so that we handle it all in a single
function gt_direct_access_timer_offset(), to parallel how we handle
the offset for indirect accesses.
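
A sketch of the shape of the new function (the per-counter offset
helpers named here are assumptions, and the E2H special case described
above is folded into the virtual-counter helper):

    static uint64_t gt_direct_access_timer_offset(CPUARMState *env, int timeridx)
    {
        switch (timeridx) {
        case GTIMER_VIRT:
            /* CNTVOFF_EL2 applies, except at EL2 with HCR_EL2.E2H == 1 */
            return gt_virt_cnt_offset(env);
        case GTIMER_PHYS:
            /* physical-counter offset handling (FEAT_ECV etc.) elided */
            return gt_phys_cnt_offset(env);
        default:
            /* the other timers take no offset on direct accesses */
            return 0;
        }
    }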

The call in the WFIT helper previously to gt_virt_cnt_offset() is
now to gt_direct_access_timer_offset(); this is the correct
behaviour, but it's not immediately obvious that it shouldn't be
considered an indirect access, so we add an explanatory comment.

This commit should make no behavioural changes.

(Cc to stable because the following bugfix commit will
depend on this one.)

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20250204125009.2281315-6-peter.maydell@linaro.org
2025-03-07 10:08:21 +00:00
Peter Maydell
4aecd4b442 target/arm: Always apply CNTVOFF_EL2 for CNTV_TVAL_EL02 accesses
Currently we handle CNTV_TVAL_EL02 by calling gt_tval_read() for the
EL1 virt timer.  This is almost correct, but the underlying
CNTV_TVAL_EL0 register behaves slightly differently.  CNTV_TVAL_EL02
always applies the CNTVOFF_EL2 offset; CNTV_TVAL_EL0 doesn't do so if
we're at EL2 and HCR_EL2.E2H is 1.

We were getting this wrong, because we ended up in
gt_virt_cnt_offset() and did the E2H check.

Factor out the tval read/write calculation from the selection of the
offset, so that we can special case gt_virt_tval_read() and
gt_virt_tval_write() to unconditionally pass CNTVOFF_EL2.

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20250204125009.2281315-5-peter.maydell@linaro.org
2025-03-07 10:08:20 +00:00
Peter Maydell
bdd641541f target/arm: Make CNTPS_* UNDEF from Secure EL1 when Secure EL2 is enabled
When we added Secure EL2 support, we missed that this needs an update
to the access code for the EL3 physical timer registers.  These are
supposed to UNDEF from Secure EL1 when Secure EL2 is enabled.

(Note for stable backporting: for backports to branches where
CP_ACCESS_UNDEFINED is not defined, the old name to use instead
is CP_ACCESS_TRAP_UNCATEGORIZED.)
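
A hypothetical accessfn fragment for the new check (the function name
is made up for illustration, and the pre-existing EL2/NS EL1 cases of
the real access function are omitted):

    static CPAccessResult cntps_access(CPUARMState *env,
                                       const ARMCPRegInfo *ri, bool isread)
    {
        if (arm_current_el(env) == 1 && arm_is_secure_below_el3(env) &&
            arm_is_el2_enabled(env)) {
            /* Secure EL2 enabled: CNTPS_* UNDEF from Secure EL1 */
            return CP_ACCESS_UNDEFINED;
        }
        return CP_ACCESS_OK;
    }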

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20250204125009.2281315-4-peter.maydell@linaro.org
2025-03-07 10:08:20 +00:00
Peter Maydell
5709038aa8 target/arm: Don't apply CNTVOFF_EL2 for EL2_VIRT timer
The CNTVOFF_EL2 offset register should only be applied for accesses
to CNTVCT_EL0 and for the EL1 virtual timer (CNTV_*).  We were
incorrectly applying it for the EL2 virtual timer (CNTHV_*).

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20250204125009.2281315-3-peter.maydell@linaro.org
2025-03-07 10:08:20 +00:00
Peter Maydell
db6c219283 target/arm: Apply correct timer offset when calculating deadlines
When we are calculating timer deadlines, the correct definition of
whether or not to apply an offset to the physical count is described
in the Arm ARM DDI0487 rev L.a section D12.2.4.1.  This is different
from when the offset should be applied for a direct read of the
counter sysreg.

We got this right for the EL1 physical timer and for the EL1 virtual
timer, but got all the rest wrong: they should be using a zero offset
always.

Factor the offset calculation out into a function that has a comment
documenting exactly which offset it is calculating and which gets the
HYP, SEC, and HYPVIRT cases right.
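
A sketch of what that function boils down to (the EL1 physical-timer
case is simplified and its helper name is an assumption):

    static uint64_t gt_indirect_access_timer_offset(CPUARMState *env, int timeridx)
    {
        switch (timeridx) {
        case GTIMER_VIRT:
            return env->cp15.cntvoff_el2;    /* EL1 virtual timer */
        case GTIMER_PHYS:
            return gt_phys_cnt_offset(env);  /* EL1 physical timer */
        default:
            return 0;                        /* HYP, SEC, HYPVIRT: always zero */
        }
    }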

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20250204125009.2281315-2-peter.maydell@linaro.org
2025-03-07 10:08:19 +00:00
Peter Maydell
86d44c215e target/arm: Rename CP_ACCESS_TRAP_UNCATEGORIZED to CP_ACCESS_UNDEFINED
CP_ACCESS_TRAP_UNCATEGORIZED is technically an accurate description
of what this return value from a cpreg accessfn does, but it's liable
to confusion because it doesn't match how the Arm ARM pseudocode
indicates this case. What it does is an EXCP_UDEF with a zero
("uncategorized") syndrome value, which is what an UNDEFINED instruction
does. The pseudocode uses "UNDEFINED" to show this; rename our
constant to CP_ACCESS_UNDEFINED to make the parallel clearer.

Commit created with
sed -i -e 's/CP_ACCESS_TRAP_UNCATEGORIZED/CP_ACCESS_UNDEFINED/' $(git grep -l CP_ACCESS_TRAP_UNCATEGORIZED)

plus manual editing of the comment.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250130182309.717346-14-peter.maydell@linaro.org
2025-02-20 14:20:29 +00:00
Peter Maydell
2d60f1acdb target/arm: Use CP_ACCESS_TRAP_EL1 for traps that are always to EL1
We currently use CP_ACCESS_TRAP in a number of access functions where
we know we're currently at EL0; in this case the "usual target EL"
is EL1, so CP_ACCESS_TRAP and CP_ACCESS_TRAP_EL1 behave the same.
Use CP_ACCESS_TRAP_EL1 to more closely match the pseudocode for
this sort of check.

Note that in the case of the access functions for cacheops to
PoC or PoU, the code was correct but the comment was wrong:
SCTLR_EL1.UCI traps for DC CVAC, DC CIVAC, DC CVAP, DC CVADP,
DC CVAU and IC IVAU should be system access traps, not UNDEFs.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250130182309.717346-11-peter.maydell@linaro.org
2025-02-20 14:20:28 +00:00
Peter Maydell
4d436fb05c target/arm: Honour SDCR.TDCC and SCR.TERR in AArch32 EL3 non-Monitor modes
There are not many traps in AArch32 which should trap to Monitor
mode, but these trap bits should trap not just lower ELs to Monitor
mode but also the non-Monitor modes running at EL3 (i.e.  Secure
System, Secure Undef, etc).

We get this wrong because the relevant access functions implement the
AArch64-style logic of
   if (el < 3 && trap_bit_set) {
       return CP_ACCESS_TRAP_EL3;
   }
which won't trap the non-Monitor modes at EL3.

Correct this error by using arm_is_el3_or_mon() instead, which
returns true when the CPU is at AArch64 EL3 or AArch32 Monitor mode.
(Since the new callsites are compiled also for the linux-user mode,
we need to provide a dummy implementation for CONFIG_USER_ONLY.)
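
The corrected shape of the check is therefore (sketch):
   if (!arm_is_el3_or_mon(env) && trap_bit_set) {
       return CP_ACCESS_TRAP_EL3;
   }
which traps lower ELs and the non-Monitor AArch32 modes at EL3, but
not Monitor mode or AArch64 EL3.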

This affects only:
 * trapping of ERRIDR via SCR.TERR
 * trapping of the debug channel registers via SDCR.TDCC
 * trapping of GICv3 registers via SCR.IRQ and SCR.FIQ
   (which we already used arm_is_el3_or_mon() for)

This patch changes the handling of SCR.TERR and SDCR.TDCC. It only
changes guest-visible behaviour for "-cpu max" on the qemu-system-arm
binary, because SCR.TERR and SDCR.TDCC (and indeed the entire SDCR
register) only arrived in Armv8, and the only guest CPU we support
which has any v8 features and also starts in AArch32 EL3 is the
32-bit 'max'.

Other uses of CP_ACCESS_TRAP_EL3 don't need changing:

 * uses in code paths that can't happen when EL3 is AArch32:
   access_trap_aa32s_el1, cpacr_access, cptr_access, nsacr_access
 * uses which are in accessfns for AArch64-only registers:
   gt_stimer_access, gt_cntpoff_access, access_hxen, access_tpidr2,
   access_smpri, access_smprimap, access_lor_ns, access_pauth,
   access_mte, access_tfsr_el2, access_scxtnum, access_fgt
 * trap bits which exist only in the AArch64 version of the
   trap register, not the AArch32 one:
   access_tpm, pmreg_access, access_dbgvcr32, access_tdra,
   access_tda, access_tdosa (TPM, TDA and TDOSA exist only in
   MDCR_EL3, not in SDCR, and we enforce this in sdcr_write())

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250130182309.717346-8-peter.maydell@linaro.org
2025-02-20 14:20:28 +00:00
Peter Maydell
4cf4948651 target/arm: Make CP_ACCESS_TRAPs to AArch32 EL3 be Monitor traps
In system register access pseudocode the common pattern for
AArch32 registers with access traps to EL3 is:

at EL1 and EL2:
  if HaveEL(EL3) && !ELUsingAArch32(EL3) && (SCR_EL3.TERR == 1) then
     AArch64.AArch32SystemAccessTrap(EL3, 0x03);
  elsif HaveEL(EL3) && ELUsingAArch32(EL3) && (SCR.TERR == 1) then
     AArch32.TakeMonitorTrapException();
at EL3:
  if (PSTATE.M != M32_Monitor) && (SCR.TERR == 1) then
     AArch32.TakeMonitorTrapException();

(taking as an example the ERRIDR access pseudocode).

This implements the behaviour of (in this case) SCR.TERR that
"Accesses to the specified registers from modes other than Monitor
mode generate a Monitor Trap exception" and of SCR_EL3.TERR that
"Accesses of the specified Error Record registers at EL2 and EL1
are trapped to EL3, unless the instruction generates a higher
priority exception".

In QEMU we don't implement this pattern correctly in two ways:
 * in access_check_cp_reg() we turn the CP_ACCESS_TRAP_EL3 into
   an UNDEF, not a trap to Monitor mode
 * in the access functions, we check trap bits like SCR.TERR
   only when arm_current_el(env) < 3 -- this is correct for
   AArch64 EL3, but misses the "trap non-Monitor-mode execution
   at EL3 into Monitor mode" case for AArch32 EL3

In this commit we fix the first of these two issues, by
making access_check_cp_reg() handle CP_ACCESS_TRAP_EL3
as a Monitor trap. This is a kind of exception that we haven't
yet implemented(!), so we need a new EXCP_MON_TRAP for it.

This diverges from the pseudocode approach, where every access check
function explicitly checks for "if EL3 is AArch32" and takes a
monitor trap; if we wanted to be closer to the pseudocode we could
add a new CP_ACCESS_TRAP_MONITOR and make all the accessfns use it
when appropriate.  But because there are no non-standard cases in the
pseudocode (i.e.  where either it raises a Monitor trap that doesn't
correspond to an AArch64 SystemAccessTrap or where it raises a
SystemAccessTrap that doesn't correspond to a Monitor trap), handling
this all in one place seems less likely to result in future bugs
where we forgot again about this special case when writing an
accessor.

(The cc of stable here is because "hw/intc/arm_gicv3_cpuif: Don't
downgrade monitor traps for AArch32 EL3" which is also cc:stable
will implicitly use the new EXCP_MON_TRAP code path.)

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250130182309.717346-6-peter.maydell@linaro.org
2025-02-20 14:20:28 +00:00
Peter Maydell
707d478ed8 target/arm: Report correct syndrome for UNDEFINED LOR sysregs when NS=0
The pseudocode for the accessors for the LOR sysregs says they
are UNDEFINED if SCR_EL3.NS is 0. We were reporting the wrong
syndrome value here; use CP_ACCESS_TRAP_UNCATEGORIZED.

Cc: qemu-stable@nongnu.org
Fixes: 2d7137c10faf ("target/arm: Implement the ARMv8.1-LOR extension")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250130182309.717346-5-peter.maydell@linaro.org
2025-02-20 14:20:28 +00:00
Peter Maydell
ccda792945 target/arm: Report correct syndrome for UNDEFINED S1E2 AT ops at EL3
The pseudocode for AT S1E2R and AT S1E2W says that they should be
UNDEFINED if executed at EL3 when EL2 is not enabled. We were
incorrectly using CP_ACCESS_TRAP and reporting the wrong exception
syndrome as a result. Use CP_ACCESS_TRAP_UNCATEGORIZED.

Cc: qemu-stable@nongnu.org
Fixes: 2a47df953202e1 ("target-arm: Wire up AArch64 EL2 and EL3 address translation ops")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250130182309.717346-4-peter.maydell@linaro.org
2025-02-20 14:20:28 +00:00
Peter Maydell
1960d9701e target/arm: Report correct syndrome for UNDEFINED AT ops with wrong NSE, NS
R_NYXTL says that these AT insns should be UNDEFINED if they
would operate on an EL lower than EL3 and SCR_EL3.{NSE,NS} is
set to the Reserved {1, 0}. We were incorrectly reporting
them with the wrong syndrome; use CP_ACCESS_TRAP_UNCATEGORIZED
so they are reported as UNDEFINED.

Cc: qemu-stable@nongnu.org
Fixes: 1acd00ef1410 ("target/arm/helper: Check SCR_EL3.{NSE, NS} encoding for AT instructions")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250130182309.717346-3-peter.maydell@linaro.org
2025-02-20 14:20:28 +00:00
Peter Maydell
b819fd6994 target/arm: Report correct syndrome for UNDEFINED CNTPS_*_EL1 from EL2 and NS EL1
The access pseudocode for the CNTPS_TVAL_EL1, CNTPS_CTL_EL1 and
CNTPS_CVAL_EL1 secure timer registers says that they are UNDEFINED
from EL2 or NS EL1.  We incorrectly return CP_ACCESS_TRAP from the
access function in these cases, which means that we report the wrong
syndrome value to the target EL.

Use CP_ACCESS_TRAP_UNCATEGORIZED, which reports the correct syndrome
value for an UNDEFINED instruction.

Cc: qemu-stable@nongnu.org
Fixes: b4d3978c2fd ("target-arm: Add the AArch64 view of the Secure physical timer")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250130182309.717346-2-peter.maydell@linaro.org
2025-02-20 14:20:28 +00:00
Peter Maydell
731528d35e target/arm: Add FPCR.AH to tbflags
We are going to need to generate different code in some cases when
FPCR.AH is 1.  For example:
 * Floating point neg and abs must not flip the sign bit of NaNs
 * some insns (FRECPE, FRECPS, FRECPX, FRSQRTE, FRSQRTS, and various
   BFCVT and BFM bfloat16 ops) need to use a different float_status
   to the usual one

Encode FPCR.AH into the A64 tbflags, so we can refer to it at
translate time.

Because we now have a bit in FPCR that affects codegen, we can't mark
the AArch64 FPCR register as being SUPPRESS_TB_END any more; writes
to it will now end the TB and trigger a regeneration of hflags.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2025-02-11 16:22:07 +00:00
Peter Maydell
c806bbe8c1 target/arm: arm_reset_sve_state() should set FPSR, not FPCR
The pseudocode ResetSVEState() does:
    FPSR = ZeroExtend(0x0800009f<31:0>, 64);
but QEMU's arm_reset_sve_state() called vfp_set_fpcr() by accident.

Before the advent of FEAT_AFP, this was only setting a collection of
RES0 bits, which vfp_set_fpsr() would then ignore, so the only effect
was that we didn't actually set the FPSR the way we are supposed to
do.  Once FEAT_AFP is implemented, setting the bottom bits of FPSR
will change the floating point behaviour.

Call vfp_set_fpsr(), as we ought to.
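
In other words the one-line shape of the fix, following the
ResetSVEState() pseudocode quoted above, is:

    vfp_set_fpsr(env, 0x0800009f);   /* previously: vfp_set_fpcr(env, 0x0800009f) */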

(Note for stable backports: commit 7f2a01e7368f9 moved this function
from sme_helper.c to helper.c, but it had the same bug before the
move too.)

Cc: qemu-stable@nongnu.org
Fixes: f84734b87461 ("target/arm: Implement SMSTART, SMSTOP")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250124162836.2332150-4-peter.maydell@linaro.org
(cherry picked from commit 1edc3d43f20df0d04f8d00b906ba19fed37512a5)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2025-01-29 22:25:22 +03:00
Peter Maydell
1edc3d43f2 target/arm: arm_reset_sve_state() should set FPSR, not FPCR
The pseudocode ResetSVEState() does:
    FPSR = ZeroExtend(0x0800009f<31:0>, 64);
but QEMU's arm_reset_sve_state() called vfp_set_fpcr() by accident.

Before the advent of FEAT_AFP, this was only setting a collection of
RES0 bits, which vfp_set_fpsr() would then ignore, so the only effect
was that we didn't actually set the FPSR the way we are supposed to
do.  Once FEAT_AFP is implemented, setting the bottom bits of FPSR
will change the floating point behaviour.

Call vfp_set_fpsr(), as we ought to.

(Note for stable backports: commit 7f2a01e7368f9 moved this function
from sme_helper.c to helper.c, but it had the same bug before the
move too.)

Cc: qemu-stable@nongnu.org
Fixes: f84734b87461 ("target/arm: Implement SMSTART, SMSTOP")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250124162836.2332150-4-peter.maydell@linaro.org
2025-01-28 18:40:19 +00:00
Peter Maydell
538b764d34 target/arm: Move minor arithmetic helpers out of helper.c
helper.c includes some small TCG helper functions used for mostly
arithmetic instructions.  These are TCG only and there's no need for
them to be in the large and unwieldy helper.c.  Move them out to
their own source file in the tcg/ subdirectory, together with the
op_addsub.h multiply-included template header that they use.

Since we are moving op_addsub.h, we take the opportunity to
give it a name which matches our convention for files which
are not true header files but which are #included from other
C files: op_addsub.c.inc.

(Ironically, this means that helper.c no longer contains
any TCG helper function definitions at all.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250110131211.2546314-1-peter.maydell@linaro.org
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2025-01-13 12:35:34 +00:00
Philippe Mathieu-Daudé
68df8c8dba accel/tcg: Include missing 'exec/translation-block.h' header
TB compile flags, tb_page_addr_t type, tb_cflags() and a few
other methods are defined in "exec/translation-block.h".

None of these files includes "exec/translation-block.h" directly;
they include "exec/exec-all.h", which includes it.  Explicitly include
"exec/translation-block.h" so that it can later be removed from
"exec/exec-all.h" once it is no longer necessary.  Otherwise
we'd get errors such as:

  accel/tcg/internal-target.h:59:20: error: a parameter list without types is only allowed in a function definition
     59 | void tb_lock_page0(tb_page_addr_t);
        |                    ^
  accel/tcg/tb-hash.h:64:23: error: unknown type name 'tb_page_addr_t'
     64 | uint32_t tb_hash_func(tb_page_addr_t phys_pc, vaddr pc,
        |                       ^
  accel/tcg/tcg-accel-ops.c:62:36: error: use of undeclared identifier 'CF_CLUSTER_SHIFT'
     62 |     cflags = cpu->cluster_index << CF_CLUSTER_SHIFT;
        |                                    ^
  accel/tcg/watchpoint.c:102:47: error: use of undeclared identifier 'CF_NOIRQ'
    102 |                     cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(cpu);
        |                                               ^
  target/i386/helper.c:536:28: error: use of undeclared identifier 'CF_PCREL'
    536 |     if (tcg_cflags_has(cs, CF_PCREL)) {
        |                            ^
  target/rx/cpu.c:51:21: error: incomplete definition of type 'struct TranslationBlock'
     51 |     cpu->env.pc = tb->pc;
        |                   ~~^
  system/physmem.c:2977:9: error: call to undeclared function 'tb_invalidate_phys_range'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
   2977 |         tb_invalidate_phys_range(addr, addr + length - 1);
        |         ^
  plugins/api.c:96:12: error: call to undeclared function 'tb_cflags'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
     96 |     return tb_cflags(tcg_ctx->gen_tb) & CF_MEMI_ONLY;
        |            ^

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20241114011310.3615-5-philmd@linaro.org>
2024-12-20 17:44:57 +01:00
Philippe Mathieu-Daudé
487a31e0ac accel/tcg: Declare mmap_[un]lock() in 'exec/page-protection.h'
Move mmap_lock(), mmap_unlock() declarations and the
WITH_MMAP_LOCK_GUARD() definition to 'exec/page-protection.h'.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20241212185341.2857-5-philmd@linaro.org>
2024-12-20 17:44:57 +01:00
Philippe Mathieu-Daudé
32cad1ffb8 include: Rename sysemu/ -> system/
Headers in include/sysemu/ are not only related to system
*emulation*, they are also used by virtualization. Rename
as system/ which is clearer.

Files renamed manually then mechanical change using sed tool.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Lei Yang <leiyang@redhat.com>
Message-Id: <20241203172445.28576-1-philmd@linaro.org>
2024-12-20 17:44:56 +01:00
Peter Maydell
a2508d0e29 target/arm: Add ARM_CP_ADD_TLBI_NXS type flag for NXS insns
All of the TLBI insns with an NXS variant put that variant at the
same encoding but with a CRn field that is one greater than for the
original TLBI insn.  To avoid having to define every TLBI insn
effectively twice, once in the normal way and once in a set of cpreg
arrays that are only registered when FEAT_XS is present, we define a
new ARM_CP_ADD_TLBI_NXS type flag for cpregs.  When this flag is set
in a cpreg struct and FEAT_XS is present,
define_one_arm_cp_reg_with_opaque() will automatically add a second
cpreg to the hash table for the TLBI NXS insn with:
 * the crn+1 encoding
 * an FGT field that indicates that it should honour HCR_EL2.FGTnXS
 * a name with the "NXS" suffix

(If there are future TLBI NXS insns that don't use this same
encoding convention, it is also possible to define them manually.)
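
A simplified sketch of what the registration hook does when it sees
the flag (everything except the flag name itself is abbreviated or
assumed; the FGT marking in particular is only hinted at):

    if ((r->type & ARM_CP_ADD_TLBI_NXS) && cpu_isar_feature(aa64_xs, cpu)) {
        ARMCPRegInfo nxs_ri = *r;

        nxs_ri.name = g_strdup_printf("%sNXS", r->name);  /* "NXS" suffix */
        nxs_ri.crn++;                                      /* nXS encoding: CRn + 1 */
        /* plus an FGT marking so that HCR_EL2.FGTnXS is honoured */
        define_one_arm_cp_reg(cpu, &nxs_ri);
    }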

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211144440.2700268-3-peter.maydell@linaro.org
2024-12-17 15:17:46 +00:00
Peter Maydell
2b745c8f91 target/arm: Implement fine-grained-trap handling for FEAT_XS
FEAT_XS introduces a set of new TLBI maintenance instructions with an
"nXS" qualifier.  These behave like the stardard ones except that
they do not wait for memory accesses with the XS attribute to
complete.  They have an interaction with the fine-grained-trap
handling: the FGT bits that a hypervisor can use to trap TLBI
maintenance instructions normally trap also the nXS variants, but the
hypervisor can elect to not trap the nXS variants by setting
HCRX_EL2.FGTnXS to 1.

Add support to our FGT mechanism for these TLBI bits. For each
TLBI-trapping FGT bit we define, for example:
 * FGT_TLBIVAE1 -- the same value we do at present for the
   normal variant of the insn
 * FGT_TLBIVAE1NXS -- for the nXS qualified insn; the value of
   this enum has an NXS bit ORed into it

In access_check_cp_reg() we can then ignore the trap bit for an
access where ri->fgt has the NXS bit set and HCRX_EL2.FGTnXS is 1.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211144440.2700268-2-peter.maydell@linaro.org
2024-12-17 15:17:46 +00:00
Peter Maydell
0b7aefb9eb target/arm: Move RME TLB insns to tlb-insns.c
Move the FEAT_RME specific TLB insns across to tlb-insns.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241210160452.2427965-10-peter.maydell@linaro.org
2024-12-13 15:41:09 +00:00
Peter Maydell
27fb860fd4 target/arm: Move small helper functions to tlb-insns.c
The remaining functions that we temporarily made global are now
used only from callsites in tlb-insns.c; move them across and
make them file-local again.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241210160452.2427965-9-peter.maydell@linaro.org
2024-12-13 15:41:09 +00:00
Peter Maydell
b0f7cd3572 target/arm: Move the TLBI OS insns to tlb-insns.c.
Move the TLBI OS insns across to tlb-insns.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241210160452.2427965-8-peter.maydell@linaro.org
2024-12-13 15:41:09 +00:00
Peter Maydell
6559379957 target/arm: Move TLBI range insns
Move the TLBI invalidate-range insns across to tlb-insns.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241210160452.2427965-7-peter.maydell@linaro.org
2024-12-13 15:41:09 +00:00
Peter Maydell
5991e5abe3 target/arm: Move AArch64 EL3 TLBI insns
Move the AArch64 EL3 TLBI insns from el3_cp_reginfo[] across
to tlb-insns.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241210160452.2427965-6-peter.maydell@linaro.org
2024-12-13 15:41:09 +00:00
Peter Maydell
7cadf1139d target/arm: Move the AArch64 EL2 TLBI insns
Move the AArch64 EL2 TLBI insn definitions that were
in el2_cp_reginfo[] across to tlb-insns.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241210160452.2427965-5-peter.maydell@linaro.org
2024-12-13 15:41:09 +00:00
Peter Maydell
abbb82646a target/arm: Move AArch64 TLBI insns from v8_cp_reginfo[]
Move the AArch64 TLBI insns that are declared in v8_cp_reginfo[]
into tlb-insns.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241210160452.2427965-4-peter.maydell@linaro.org
2024-12-13 15:41:09 +00:00
Peter Maydell
d6b6da1fc8 target/arm: Move TLBI insns for AArch32 EL2 to tlbi_insn_helper.c
Move the AArch32 TLBI insns for AArch32 EL2 to tlbi_insn_helper.c.
To keep this as an obviously pure code-movement, we retain the
same condition for registering tlbi_el2_cp_reginfo that we use for
el2_cp_reginfo. We'll be able to simplify this condition later,
since the need to define the reginfo for EL3-without-EL2 doesn't
apply for the TLBI ops specifically.

This move brings all the uses of tlbimva_hyp_write() and
tlbimva_hyp_is_write() back into a single file, so we can move those
also, and make them file-local again.

The helper alle1_tlbmask() is an exception to the pattern that we
only need to make these functions global temporarily, because once
this refactoring is complete it will be called by both code in
helper.c (vttbr_write()) and by code in tlb-insns.c.  We therefore
put its prototype in a permanent home in internals.h.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241210160452.2427965-3-peter.maydell@linaro.org
2024-12-13 15:41:09 +00:00
Peter Maydell
1e32ee23cd target/arm: Move some TLBI insns to their own source file
target/arm/helper.c is very large and unwieldy.  One subset of code
that we can pull out into its own file is the cpreg arrays and
corresponding functions for the TLBI instructions.

Because these are instructions they are only relevant for TCG and we
can make the new file only be built for CONFIG_TCG.

In this commit we move the AArch32 instructions from:
 not_v7_cp_reginfo[]
 v7_cp_reginfo[]
 v7mp_cp_reginfo[]
 v8_cp_reginfo[]
into a new file target/arm/tcg/tlb-insns.c.

A few small functions are used both by functions we haven't yet moved
across and by functions we have already moved.  We temporarily make
these global with a prototype in cpregs.h; when the move of all TLBI
insns is complete these will return to being file-local.

For CONFIG_TCG, this is just moving code around.  For a KVM only
build, these cpregs will no longer be added to the cpregs hashtable
for the CPU.  However this should not be a behaviour change, because:
 * we never try to migration sync or otherwise include
   ARM_CP_NO_RAW cpregs
 * for migration we treat the kernel's list of system registers
   as the authoritative one, so these TLBI insns were never
   in it anyway
The no-tcg stub of define_tlb_insn_regs() therefore does nothing.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241210160452.2427965-2-peter.maydell@linaro.org
2024-12-13 15:41:09 +00:00
Gustavo Romero
374cdc8efe target/arm: Enable FEAT_CMOW for -cpu max
FEAT_CMOW introduces support for controlling cache maintenance
instructions executed in EL0/1 and is mandatory from Armv8.8.

On real hardware, the main use for this feature is to prevent processes
from invalidating or flushing cache lines for addresses they only have
read permission, which can impact the performance of other processes.

QEMU implements all cache instructions as NOPs, and, according to rule
[1], which states that generating any Permission fault when a cache
instruction is implemented as a NOP is implementation-defined, no
Permission fault is generated for any cache instruction, even when the
address lacks read and write permissions.

QEMU does not model any cache topology, so the PoU and PoC are before
any cache, and rules [2] apply. These rules state that generating any
MMU fault for cache instructions in this topology is also
implementation-defined. Therefore, for FEAT_CMOW, we do not generate any
MMU faults either, instead, we only advertise it in the feature
register.
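
The enabling step for '-cpu max' is then essentially a one-field
ID-register update, roughly (a sketch in the style of the other
feature-enable patches; 't' is the usual scratch copy of the register):

    t = cpu->isar.id_aa64mmfr1;
    t = FIELD_DP64(t, ID_AA64MMFR1, CMOW, 1);   /* FEAT_CMOW */
    cpu->isar.id_aa64mmfr1 = t;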

[1] Rule R_HGLYG of section D8.14.3, Arm ARM K.a.
[2] Rules R_MZTNR and R_DNZYL of section D8.14.3, Arm ARM K.a.

Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241104142606.941638-1-gustavo.romero@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-11-05 10:10:00 +00:00
Peter Maydell
efbe180ad2 target/arm: Add new MMU indexes for AArch32 Secure PL1&0
Our current usage of MMU indexes when EL3 is AArch32 is confused.
Architecturally, when EL3 is AArch32, all Secure code runs under the
Secure PL1&0 translation regime:
 * code at EL3, which might be Mon, or SVC, or any of the
   other privileged modes (PL1)
 * code at EL0 (Secure PL0)

This is different from when EL3 is AArch64, in which case EL3 is its
own translation regime, and EL1 and EL0 (whether AArch32 or AArch64)
have their own regime.

We claimed to be mapping Secure PL1 to our ARMMMUIdx_EL3, but didn't
do anything special about Secure PL0, which meant it used the same
ARMMMUIdx_EL10_0 that NonSecure PL0 does.  This resulted in a bug
where arm_sctlr() incorrectly picked the NonSecure SCTLR as the
controlling register when in Secure PL0, which meant we were
spuriously generating alignment faults because we were looking at the
wrong SCTLR control bits.

The use of ARMMMUIdx_EL3 for Secure PL1 also resulted in the bug that
we wouldn't honour the PAN bit for Secure PL1, because there's no
equivalent _PAN mmu index for it.

Fix this by adding two new MMU indexes:
 * ARMMMUIdx_E30_0 is for Secure PL0
 * ARMMMUIdx_E30_3_PAN is for Secure PL1 when PAN is enabled
The existing ARMMMUIdx_E3 is used to mean "Secure PL1 without PAN"
(and would be named ARMMMUIdx_E30_3 in an AArch32-centric scheme).

These extra two indexes bring us up to the maximum of 16 that the
core code can currently support.
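
A hypothetical helper showing how the three Secure PL1&0 cases map
onto the indexes (the function name is made up for illustration; the
real patch adjusts the existing mmu_idx selection code instead):

    static ARMMMUIdx secure_pl10_mmu_idx(CPUARMState *env)
    {
        if (arm_current_el(env) == 0) {
            return ARMMMUIdx_E30_0;        /* Secure PL0 */
        }
        if (env->uncached_cpsr & CPSR_PAN) {
            return ARMMMUIdx_E30_3_PAN;    /* Secure PL1, PAN enabled */
        }
        return ARMMMUIdx_E3;               /* Secure PL1 without PAN */
    }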

This commit:
 * adds the new MMU index handling to the various places
   where we deal in MMU index values
 * adds assertions that we aren't AArch32 EL3 in a couple of
   places that currently use the E10 indexes, to document why
   they don't also need to handle the E30 indexes
 * documents in a comment why regime_has_2_ranges() doesn't need
   updating

Notes for backporting: this commit depends on the preceding revert of
4c2c04746932; that revert and this commit should probably be
backported to everywhere that we originally backported 4c2c04746932.

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2326
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2588
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241101142845.1712482-3-peter.maydell@linaro.org
2024-11-05 10:09:58 +00:00
Peter Maydell
056c5c90c1 Revert "target/arm: Fix usage of MMU indexes when EL3 is AArch32"
This reverts commit 4c2c0474693229c1f533239bb983495c5427784d.

This commit tried to fix a problem with our usage of MMU indexes when
EL3 is AArch32, using what it described as a "more complicated
approach" where we share the same MMU index values for Secure PL1&0
and NonSecure PL1&0. In theory this should work, but the change
didn't account for (at least) two things:

(1) The design change means we need to flush the TLBs at any point
where the CPU state flips from one to the other.  We already flush
the TLB when SCR.NS is changed, but we don't flush the TLB when we
take an exception from NS PL1&0 into Mon or when we return from Mon
to NS PL1&0, and the commit didn't add any code to do that.

(2) The ATS12NS* address translate instructions allow Mon code (which
is Secure) to do a stage 1+2 page table walk for NS.  I thought this
was OK because do_ats_write() does a page table walk which doesn't
use the TLBs, so because it can pass both the MMU index and also an
ARMSecuritySpace argument we can tell the table walk that we want NS
stage1+2, not S.  But that means that all the code within the ptw
that needs to find e.g.  the regime EL cannot do so only with an
mmu_idx -- all these functions like regime_sctlr(), regime_el(), etc
would need to pass both an mmu_idx and the security_space, so they
can tell whether this is a translation regime controlled by EL1 or
EL3 (and so whether to look at SCTLR.S or SCTLR.NS, etc).

In particular, because regime_el() wasn't updated to look at the
ARMSecuritySpace it would return 1 even when the CPU was in Monitor
mode (and the controlling EL is 3).  This meant that page table walks
in Monitor mode would look at the wrong SCTLR, TCR, etc and would
generally fault when they should not.

Rather than trying to make the complicated changes needed to rescue
the design of 4c2c04746932, we revert it in order to instead take the
route that that commit describes as "the most straightforward" fix,
where we add new MMU indexes EL30_0, EL30_3, EL30_3_PAN to correspond
to "Secure PL1&0 at PL0", "Secure PL1&0 at PL1", and "Secure PL1&0 at
PL1 with PAN".

This revert will re-expose the "spurious alignment faults in
Secure PL0" issue #2326; we'll fix it again in the next commit.

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Thomas Huth <thuth@redhat.com>
Message-id: 20241101142845.1712482-2-peter.maydell@linaro.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2024-11-05 10:09:58 +00:00
Romain Malmain
67dabac1ed v9.1.1 release
-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEZKoqtTHVaQM2a/75gqpKJDselHgFAmcScB0ACgkQgqpKJDse
 lHgQ7g/7BIWV/LC7MqFmHlXl9S0S7ZHVsDc2x6Bx97Sk4sKAUKLvRsLFMa5F40Fn
 xY8v/aLsqOTmzWz38hdtgJR0rrv8DykWw9ft9nta2tFg20tilL/LaakT8TLKmjK2
 StZFzk7iijnY78Z3RcVliBTStLoPbOx9WCUs2evCV/qTxQDec1A7u4ukG9cAztGn
 ea8pNnKNgk+BN805w1uMMZ1wnh3FTVs9kdXVh7CzXlRAHHkVHQ47C9ZN6vh6N3xs
 3qj/Obi4k1N81NNRJFA4gR02t82LdPhg/WV33/q9TxSmHyZEmNXg0lRlDyIeSbpw
 bqYY+dsBbGyMJgN/LUZMNjPAfQL4S5VicFJcfKTXr6xYtkhqtlCun1kmI7O+ZIY5
 kGQYbAAhyPkFIOU6XedyKxM+0eUDqrr9fyzyn5NfISzETQiGFccYjfk/4fsHGfS8
 nOBTNtYBpnEXFeUk/jvv6OPOsh2L+K0PKbGefFbCjNng9Ix3Kz5zEY8xhtlv7C6m
 9YyGGAS1zwcWapwq8URy01GWkiKT2Ia/gD7c89oGY1bJmQKYf9lrLX5YtP+d/NYs
 UqWmk046ViapiKDF7VXWtF0f5axYpeaMMhkNM5RtkOq57nez4LuKPaKs1emRC6W9
 LE2om+28dyGJqHeJp5fqigM+wPxRJlecR57sDIuq4n0bJcvzLEA=
 =240n
 -----END PGP SIGNATURE-----
gpgsig -----BEGIN PGP SIGNATURE-----
 
 iHUEABYKAB0WIQSq9xYmtep25y1RrMYC5KE/dBVGigUCZxv7TAAKCRAC5KE/dBVG
 isCPAP43SCLPw/W/su5jPShfNn4fvHHiY1f0a6t3Kf6414aqvQD/XKmYGFGl4V5k
 XYnW/9D6Bp/k8gBSjKzYeIt0+Mt/AAQ=
 =cRil
 -----END PGP SIGNATURE-----

Merge tag 'v9.1.1' into update_qemu_9_1_0

v9.1.1 release
2024-10-25 22:10:51 +02:00
Richard Henderson
29b4d7dbd2 target/arm: Pass MemOp to get_phys_addr_with_space_nogpc
Zero is the safe do-nothing value for callers to use.

Reviewed-by: Helge Deller <deller@gmx.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-10-13 11:27:06 -07:00
Michael Tokarev
5691f4778e mark <zlib.h> with for-crc32 in a consistent manner
In many cases, <zlib.h> is only included for the crc32 function,
and in some of them there's a comment saying so, but phrased in
a different way.  In one place (hw/net/rtl8139.c), another
#include had been added between the comment and the <zlib.h> include.

Make all such comments sit on the same line as the #include, make
them consistent, and also add a few missing comments, including in
hw/nvram/mac_nvram.c, which uses adler32 instead.

There are no code changes.

Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-09-20 08:06:56 +03:00
Peter Maydell
4c2c047469 target/arm: Fix usage of MMU indexes when EL3 is AArch32
Our current usage of MMU indexes when EL3 is AArch32 is confused.
Architecturally, when EL3 is AArch32, all Secure code runs under the
Secure PL1&0 translation regime:
 * code at EL3, which might be Mon, or SVC, or any of the
   other privileged modes (PL1)
 * code at EL0 (Secure PL0)

This is different from when EL3 is AArch64, in which case EL3 is its
own translation regime, and EL1 and EL0 (whether AArch32 or AArch64)
have their own regime.

We claimed to be mapping Secure PL1 to our ARMMMUIdx_EL3, but didn't
do anything special about Secure PL0, which meant it used the same
ARMMMUIdx_EL10_0 that NonSecure PL0 does.  This resulted in a bug
where arm_sctlr() incorrectly picked the NonSecure SCTLR as the
controlling register when in Secure PL0, which meant we were
spuriously generating alignment faults because we were looking at the
wrong SCTLR control bits.

The use of ARMMMUIdx_EL3 for Secure PL1 also resulted in the bug that
we wouldn't honour the PAN bit for Secure PL1, because there's no
equivalent _PAN mmu index for it.

We could fix this in one of two ways:
 * The most straightforward is to add new MMU indexes EL30_0,
   EL30_3, EL30_3_PAN to correspond to "Secure PL1&0 at PL0",
   "Secure PL1&0 at PL1", and "Secure PL1&0 at PL1 with PAN".
   This matches how we use indexes for the AArch64 regimes, and
   preserves properties like being able to determine the privilege
   level from an MMU index without any other information. However
   it would add two MMU indexes (we can share one with ARMMMUIdx_EL3),
   and we are already using 14 of the 16 that the core TLB code permits.

 * The more complicated approach is the one we take here. We use
   the same MMU indexes (E10_0, E10_1, E10_1_PAN) for Secure PL1&0
   as we do for NonSecure PL1&0. This saves on MMU indexes, but
   means we need to check in some places whether we're in the
   Secure PL1&0 regime or not before we interpret an MMU index.

The changes in this commit were created by auditing all the places
where we use specific ARMMMUIdx_ values, and checking whether they
needed to be changed to handle the new index value usage.

Note for potential stable backports: taking also the previous
(comment-change-only) commit might make the backport easier.

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2326
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Bernhard Beschow <shentey@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240809160430.1144805-3-peter.maydell@linaro.org
2024-08-13 11:44:53 +01:00
Peter Maydell
f573ac059e target/arm: Ignore SMCR_EL2.LEN and SVCR_EL2.LEN if EL2 is not enabled
When determining the current vector length, the SMCR_EL2.LEN and
SVCR_EL2.LEN settings should only be considered if EL2 is enabled
(compare the pseudocode CurrentSVL and CurrentNSVL which call
EL2Enabled()).

We were checking against ARM_FEATURE_EL2 rather than calling
arm_is_el2_enabled(), which meant that we would look at
SMCR_EL2/SVCR_EL2 when in Secure EL1 or Secure EL0 even if Secure EL2
was not enabled.

Use the correct check in sve_vqm1_for_el_sm().
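
The shape of the change in sve_vqm1_for_el_sm() is (sketch; the
surrounding vector-length calculation is elided):

   /* before: considers the EL2 length limit whenever EL2 merely exists */
   if (el <= 2 && arm_feature(env, ARM_FEATURE_EL2)) { ... }

   /* after: only when EL2 is actually enabled in the current security state */
   if (el <= 2 && arm_is_el2_enabled(env)) { ... }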

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240722172957.1041231-5-peter.maydell@linaro.org
2024-07-29 16:56:46 +01:00
Peter Maydell
a96edb687e target/arm: Implement FEAT WFxT and enable for '-cpu max'
FEAT_WFxT introduces new instructions WFIT and WFET, which are like
the existing WFI and WFE but allow the guest to pass a timeout value
in a register.  The instructions will wait for an interrupt/event as
usual, but will also stop waiting when the value of CNTVCT_EL0 is
greater than or equal to the specified timeout value.

We implement WFIT by setting up a timer to expire at the right
point; when the timer expires it sets the EXITTB interrupt, which
will cause the CPU to leave the halted state. If we come out of
halt for some other reason, we unset the pending timer.

We implement WFET as a nop, which is architecturally permitted and
matches the way we currently make WFE a nop.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240430140035.3889879-3-peter.maydell@linaro.org
2024-05-30 16:35:17 +01:00
Romain Malmain
7c3c7877d8 Update to QEMU 9.0.0 (#67)
* Update to QEMU v9.0.0

---------

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cédric Le Goater <clg@redhat.com>
Signed-off-by: Zheyu Ma <zheyuma97@gmail.com>
Signed-off-by: Ido Plat <ido.plat@ibm.com>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Gregory Price <gregory.price@memverge.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Lorenz Brun <lorenz@brun.one>
Signed-off-by: Yao Xingtao <yaoxt.fnst@fujitsu.com>
Signed-off-by: Arnaud Minier <arnaud.minier@telecom-paris.fr>
Signed-off-by: Inès Varhol <ines.varhol@telecom-paris.fr>
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Benjamin Gray <bgray@linux.ibm.com>
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Joonas Kankaala <joonas.a.kankaala@gmail.com>
Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
Signed-off-by: Oleg Sviridov <oleg.sviridov@red-soft.ru>
Signed-off-by: Artem Chernyshev <artem.chernyshev@red-soft.ru>
Signed-off-by: Yajun Wu <yajunw@nvidia.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Pierre-Clément Tosi <ptosi@google.com>
Signed-off-by: Lei Wang <lei4.wang@intel.com>
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
Signed-off-by: Martin Hundebøll <martin@geanix.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>
Signed-off-by: Wafer <wafer@jaguarmicro.com>
Signed-off-by: Yuxue Liu <yuxue.liu@jaguarmicro.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Nguyen Dinh Phi <phind.uet@gmail.com>
Signed-off-by: Zack Buhman <zack@buhman.org>
Signed-off-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Yuquan Wang <wangyuquan1236@phytium.com.cn>
Signed-off-by: Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
Signed-off-by: Cindy Lu <lulu@redhat.com>
Co-authored-by: Peter Maydell <peter.maydell@linaro.org>
Co-authored-by: Fabiano Rosas <farosas@suse.de>
Co-authored-by: Peter Xu <peterx@redhat.com>
Co-authored-by: Thomas Huth <thuth@redhat.com>
Co-authored-by: Cédric Le Goater <clg@redhat.com>
Co-authored-by: Zheyu Ma <zheyuma97@gmail.com>
Co-authored-by: Ido Plat <ido.plat@ibm.com>
Co-authored-by: Ilya Leoshkevich <iii@linux.ibm.com>
Co-authored-by: Markus Armbruster <armbru@redhat.com>
Co-authored-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Co-authored-by: Paolo Bonzini <pbonzini@redhat.com>
Co-authored-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Co-authored-by: David Hildenbrand <david@redhat.com>
Co-authored-by: Kevin Wolf <kwolf@redhat.com>
Co-authored-by: Stefan Reiter <s.reiter@proxmox.com>
Co-authored-by: Fiona Ebner <f.ebner@proxmox.com>
Co-authored-by: Gregory Price <gregory.price@memverge.com>
Co-authored-by: Lorenz Brun <lorenz@brun.one>
Co-authored-by: Yao Xingtao <yaoxt.fnst@fujitsu.com>
Co-authored-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Co-authored-by: Arnaud Minier <arnaud.minier@telecom-paris.fr>
Co-authored-by: BALATON Zoltan <balaton@eik.bme.hu>
Co-authored-by: Igor Mammedov <imammedo@redhat.com>
Co-authored-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Co-authored-by: Richard Henderson <richard.henderson@linaro.org>
Co-authored-by: Sven Schnelle <svens@stackframe.org>
Co-authored-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Co-authored-by: Helge Deller <deller@kernel.org>
Co-authored-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Co-authored-by: Benjamin Gray <bgray@linux.ibm.com>
Co-authored-by: Nicholas Piggin <npiggin@gmail.com>
Co-authored-by: Avihai Horon <avihaih@nvidia.com>
Co-authored-by: Michael Tokarev <mjt@tls.msk.ru>
Co-authored-by: Joonas Kankaala <joonas.a.kankaala@gmail.com>
Co-authored-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Co-authored-by: Stefan Weil <sw@weilnetz.de>
Co-authored-by: Dayu Liu <liu.dayu@zte.com.cn>
Co-authored-by: Zhao Liu <zhao1.liu@intel.com>
Co-authored-by: Glenn Miles <milesg@linux.vnet.ibm.com>
Co-authored-by: Artem Chernyshev <artem.chernyshev@red-soft.ru>
Co-authored-by: Yajun Wu <yajunw@nvidia.com>
Co-authored-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Co-authored-by: Pierre-Clément Tosi <ptosi@google.com>
Co-authored-by: Wei Wang <wei.w.wang@intel.com>
Co-authored-by: Martin Hundebøll <martin@geanix.com>
Co-authored-by: Michael S. Tsirkin <mst@redhat.com>
Co-authored-by: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>
Co-authored-by: Wafer <wafer@jaguarmicro.com>
Co-authored-by: lyx634449800 <yuxue.liu@jaguarmicro.com>
Co-authored-by: Gerd Hoffmann <kraxel@redhat.com>
Co-authored-by: Nguyen Dinh Phi <phind.uet@gmail.com>
Co-authored-by: Zack Buhman <zack@buhman.org>
Co-authored-by: Keith Packard <keithp@keithp.com>
Co-authored-by: Yuquan Wang <wangyuquan1236@phytium.com.cn>
Co-authored-by: Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
Co-authored-by: Cindy Lu <lulu@redhat.com>
2024-05-01 16:10:20 +02:00
Peter Maydell
bd8e9ddf6f target/arm: Refactor default generic timer frequency handling
The generic timer frequency is settable by board code via a QOM
property "cntfrq", but otherwise defaults to 62.5MHz.  The way this
is done includes some complication resulting from how this was
originally a fixed value with no QOM property.  Clean it up:

 * always set cpu->gt_cntfrq_hz to some sensible value, whether
   the CPU has the generic timer or not, and whether it's system
   or user-only emulation
 * this means we can always use gt_cntfrq_hz, and never need
   the old GTIMER_SCALE define
 * set the default value in exactly one place, in the realize fn
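
A sketch of that single point, in the realize function (the 62.5MHz
constant is written out inline here; the actual constant name used is
an assumption):

    if (cpu->gt_cntfrq_hz == 0) {
        /* board code didn't set the "cntfrq" property: use the default */
        cpu->gt_cntfrq_hz = 62500000;    /* legacy 62.5 MHz */
    }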

The aim here is to pave the way for handling the ARMv8.6 requirement
that the generic timer frequency is always 1GHz.  We're going to do
that by having old CPU types keep their legacy-in-QEMU behaviour and
having the default for any new CPU types be a 1GHz rather than 62.5MHz
cntfrq, so we want the point where the default is decided to be in
one place, and in code, not in a DEFINE_PROP_UINT64() initializer.

This commit should have no behavioural changes.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240426122913.3427983-2-peter.maydell@linaro.org
2024-04-30 15:14:15 +01:00
Peter Maydell
f7ddd7b6a1 target/arm: Implement ID_AA64MMFR3_EL1
Newer versions of the Arm ARM (e.g.  rev K.a) now define fields for
ID_AA64MMFR3_EL1.  Implement this register, so that we can set the
fields if we need to.  There's no behaviour change here since we
don't currently set the register value to non-zero.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240418152004.2106516-5-peter.maydell@linaro.org
2024-04-30 15:01:07 +01:00