627 Commits

Author SHA1 Message Date
Paolo Bonzini
44d58e938b target/i386: add a note about gen_jcc1
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-10-31 18:28:33 +01:00
Paolo Bonzini
cea677e821 target/i386: add a few more trivial CCPrepare cases
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-10-31 18:28:33 +01:00
Paolo Bonzini
37df7c4d57 target/i386: optimize TEST+Jxx sequences
Mostly used for TEST+JG and TEST+JLE, but it is easy to also cover
JBE/JA and JL/JGE; this shaves about 0.5% of TCG ops.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-10-31 18:28:33 +01:00
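The identities behind this optimization can be checked in isolation. The sketch below is standalone C, not the actual TCG lowering: after TEST, OF = CF = 0 and SF/ZF reflect the (sign-extended) result of the AND, so each of the listed conditions collapses to a single comparison of that result.

```c
#include <stdbool.h>
#include <stdint.h>

/* result = sign-extended value computed by TEST (x & y) */
typedef enum { JG, JLE, JL, JGE, JA, JBE } Cond;

static bool test_jcc(int64_t result, Cond c)
{
    switch (c) {
    case JG:  return result > 0;   /* ZF == 0 && SF == OF */
    case JLE: return result <= 0;  /* ZF == 1 || SF != OF */
    case JL:  return result < 0;   /* SF != OF */
    case JGE: return result >= 0;  /* SF == OF */
    case JA:  return result != 0;  /* CF == 0 && ZF == 0 */
    case JBE: return result == 0;  /* CF == 1 || ZF == 1 */
    }
    return false;
}
```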
Paolo Bonzini
ae14b33de8 target/i386: optimize computation of ZF from CC_OP_DYNAMIC
Most uses of CC_OP_DYNAMIC are for CMP/JB/JE or similar sequences.
We can optimize many of them to avoid computation of the flags.
This eliminates both the TCG ops that set up the new cc_op, and
instructions in the helpers, because evaluating just ZF is much cheaper.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-10-31 18:28:33 +01:00
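The core of the optimization can be modeled in a few lines of standalone C (names are illustrative, not QEMU's): for cc_ops whose result is already held in cc_dst, ZF is just a comparison with zero, so no flags-computation helper is needed.

```c
#include <stdbool.h>
#include <stdint.h>

typedef enum { CC_SUB, CC_ADD, CC_LOGIC } CCOpKind;

/* ZF for result-carrying cc_ops is simply "result == 0". */
static bool zf_from_cc(CCOpKind op, uint64_t cc_dst)
{
    switch (op) {
    case CC_SUB:   /* after CMP/SUB, cc_dst holds the difference */
    case CC_ADD:   /* after ADD, cc_dst holds the sum */
    case CC_LOGIC: /* after AND/OR/XOR/TEST, cc_dst holds the result */
        return cc_dst == 0;
    }
    return false;  /* anything else still needs the full helper */
}
```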
Richard Henderson
1f7f72bdc4 target/i386: Wrap cc_op_live with a validity check
Assert that op is known and that cc_op_live_ is populated.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-10-31 18:28:33 +01:00
Richard Henderson
f359b2fb71 target/i386: Introduce cc_op_size
Replace arithmetic on cc_op with a helper function.
Assert that the op has a size and that it is valid
for the configuration.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Link: https://lore.kernel.org/r/20240701025115.1265117-6-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-10-31 18:28:33 +01:00
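A minimal sketch of such a helper, assuming a cut-down CCOp enum (QEMU's real enum is much larger and the helper returns a MemOp; a byte count stands in for it here):

```c
#include <assert.h>

typedef enum { CC_OP_DYNAMIC, CC_OP_ADDB, CC_OP_ADDW, CC_OP_ADDL, CC_OP_ADDQ } CCOp;

static int cc_op_size(CCOp op)
{
    /* assert that the op actually encodes a size... */
    assert(op >= CC_OP_ADDB && op <= CC_OP_ADDQ);
    int size = 1 << (op - CC_OP_ADDB);  /* B=1, W=2, L=4, Q=8 */
    /* ...and that the size is valid for the configuration
     * (in QEMU, e.g., no 8-byte cc_ops on 32-bit targets) */
    assert(size <= 8);
    return size;
}
```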
Paolo Bonzini
e09447c39f target/i386: remove CC_OP_CLR
Just use CC_OP_EFLAGS; it is not that likely that the flags computed by
CC_OP_CLR survive the end of the basic block, in which case there is no
need to spill cc_op_src.

cc_op_src now does need spilling if the XOR is followed by a memory
operation, but this only costs 0.2% extra TCG ops.  They will be recouped
by simplifications in how QEMU evaluates ZF at runtime, which are even
greater with this change.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-10-31 18:28:33 +01:00
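For reference, the flags state after XOR r,r is a compile-time constant, which is why CC_OP_EFLAGS plus a constant can stand in for CC_OP_CLR. A standalone check, using the EFLAGS bit positions (the mapping to cc_src is an inference from the commit, not quoted from the patch):

```c
#include <assert.h>
#include <stdio.h>

enum { CC_C = 0x001, CC_P = 0x004, CC_Z = 0x040, CC_S = 0x080, CC_O = 0x800 };

int main(void)
{
    /* XOR r,r yields 0: ZF = PF = 1, CF = OF = SF = 0 */
    unsigned flags_after_xor_same = CC_Z | CC_P;
    assert(flags_after_xor_same == 0x44);
    printf("constant flags after xor r,r: %#x\n", flags_after_xor_same);
    return 0;
}
```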
Paolo Bonzini
a635390f05 target/i386: use tcg_gen_ext_tl when applicable
Prefer it to gen_ext_tl in the common case where the destination is known.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-10-31 18:28:33 +01:00
Romain Malmain
b01a0bc334 Fix helper function calls & support for new x86 decoder (#92)
* fix helper function calls

* cmp hooks: support for new x86 decoder
2024-10-31 16:31:54 +01:00
Paolo Bonzini
14bde8cd76 target/i386: fix CPUID check for LFENCE and SFENCE
LFENCE and SFENCE were introduced with the original SSE instruction set;
marking them incorrectly as cpuid(SSE2) causes failures for CPU models
that lack SSE2, for example pentium3.

Reported-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-10-30 16:30:56 +01:00
Romain Malmain
67dabac1ed v9.1.1 release
Merge tag 'v9.1.1' into update_qemu_9_1_0

v9.1.1 release
2024-10-25 22:10:51 +02:00
Richard Henderson
e46fbc7d50 target/i386: Remove ra parameter from ptw_translate
This argument is no longer used.

Suggested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20241013184733.1423747-4-richard.henderson@linaro.org>
2024-10-22 13:45:03 -07:00
Richard Henderson
115ade42d5 target/i386: Use probe_access_full_mmu in ptw_translate
The probe_access_full_mmu function was designed for this purpose,
and does not report the memory operation event to plugins.

Cc: qemu-stable@nongnu.org
Fixes: 6d03226b422 ("plugins: force slow path when plugins instrument memory ops")
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20241013184733.1423747-3-richard.henderson@linaro.org>
2024-10-22 13:45:03 -07:00
Alexander Graf
b56617bbcb target/i386: Walk NPT in guest real mode
When translating virtual to physical address with a guest CPU that
supports nested paging (NPT), we need to perform every page table walk
access indirectly through the NPT, which we correctly do.

However, we treat real mode (no page table walk) specially: in that
case, we currently just skip any walks and translate VA -> PA directly.
With NPT enabled, we then also need to perform an NPT walk to do
GVA -> GPA -> HPA, which we have failed to do so far.

The net result is that TCG VMs with NPT enabled that execute real mode
code (like SeaBIOS) end up with GPA==HPA mappings, which means the guest
accesses host code and data. This typically shows up as a failure to
boot the guest.

This patch changes the page walk logic for NPT enabled guests so that we
always perform a GVA -> GPA translation and then skip any logic that
requires an actual PTE.

That way, all remaining logic to walk the NPT stays and we successfully
walk the NPT in real mode.

Cc: qemu-stable@nongnu.org
Fixes: fe441054bb3f0 ("target-i386: Add NPT support")
Signed-off-by: Alexander Graf <graf@amazon.com>
Reported-by: Eduard Vlad <evlad@amazon.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240921085712.28902-1-graf@amazon.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-10-22 13:45:03 -07:00
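The shape of the corrected flow can be modeled in standalone C (a sketch of the control flow, not QEMU's actual walker):

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t (*Walk)(uint64_t);

static uint64_t translate(uint64_t gva, bool paging_enabled, bool use_npt,
                          Walk pte_walk, Walk npt_walk)
{
    /* Real mode: the GVA -> GPA stage is the identity mapping. */
    uint64_t gpa = paging_enabled ? pte_walk(gva) : gva;

    /* The buggy path effectively returned gva here when paging was off,
     * handing the guest host-physical addresses (GPA == HPA). */
    return use_npt ? npt_walk(gpa) : gpa;
}
```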
Richard Henderson
15d955975b target/i386: Use only 16 and 32-bit operands for IN/OUT
The REX.W prefix is ignored for these instructions.
Mirror the solution already used for INS/OUTS: X86_SIZE_z.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2581
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Cc: qemu-stable@nongnu.org
Link: https://lore.kernel.org/r/20241015004144.2111817-1-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-10-17 19:41:30 +02:00
Paolo Bonzini
e136648c5c target/i386/tcg: Use DPL-level accesses for interrupts and call gates
Stack accesses should be explicit and use the privilege level of the
target stack.  This ensures that SMAP is not applied when the target
stack is in ring 3.

This fixes a bug wherein i386/tcg assumed that an interrupt return, or a
far call using the CALL or JMP instruction, was always going from kernel
or user mode to kernel mode when using a call gate. This assumption is
violated if the call gate has a DPL that is greater than 0.

Analyzed-by: Robert R. Henry <rrh.henry@gmail.com>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/249
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-10-17 19:41:30 +02:00
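A sketch of the underlying rule, with assumed names (QEMU's real MMU-index scheme has more cases): the stack access uses the privilege of the target stack, so a DPL-3 stack is accessed as a user access and SMAP does not apply.

```c
enum { MMU_KERNEL_IDX, MMU_USER_IDX };

/* Use the target stack's DPL, not the current CPL, to pick the MMU index
 * for pushes during interrupt delivery and call-gate transfers. */
static int stack_mmu_index(int dpl)
{
    return dpl == 3 ? MMU_USER_IDX : MMU_KERNEL_IDX;
}
```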
Paolo Bonzini
ac92afd19e target/i386: assert that cc_op* and pc_save are preserved
Now all decoding has been done before any code generation.
There is no longer any need to save and restore cc_op* and
pc_save, but, for the time being, assert that this is indeed
the case.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-10-17 19:41:30 +02:00
Paolo Bonzini
7e62a554af target/i386: list instructions still in translate.c
Group them so that it is easier to figure out which two-byte opcodes to
tackle together.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-10-17 19:41:30 +02:00
Paolo Bonzini
f091a3f324 target/i386: do not check PREFIX_LOCK in old-style decoder
It is already checked before getting there.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-10-17 19:41:30 +02:00
Paolo Bonzini
fcd16539eb target/i386: convert CMPXCHG8B/CMPXCHG16B to new decoder
The gen_cmpxchg8b and gen_cmpxchg16b functions even have the correct
prototype already; the only thing that needs to be done is removing the
gen_lea_modrm() call.

This moves the last LOCK-enabled instructions to the new decoder.  It is
now possible to assume that gen_multi0F is called only after checking
that PREFIX_LOCK was not specified.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-10-17 19:41:29 +02:00
Paolo Bonzini
a2e2c78d2a target/i386: decode address before going back to translate.c
There are now relatively few unconverted opcodes in translate.c (there
are 13 of them including 8 for x87), and all of them have the same
format with a mod/rm byte and no immediate.  A good next step is
to remove the early bail out to disas_insn_x87/disas_insn_old,
instead giving these legacy translator functions the same prototype
as the other gen_* functions.

To do this, the X86DecodeInsn can be passed down to the places that
used to fetch address bytes from the instruction stream.  To make
sure that everything is done cleanly, the CPUX86State* argument is
removed.

As part of the unification, the gen_lea_modrm() name is now free,
so rename gen_load_ea() to gen_lea_modrm().  This is as good a name
and it makes the changes to translate.c easier to review.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-10-17 19:41:29 +02:00
Paolo Bonzini
10eae89937 target/i386: convert bit test instructions to new decoder
Code generation was rewritten; it reuses the same trick of using the
CC_OP_SAR values for cc_op, but it tries to use CC_OP_ADCX or CC_OP_ADCOX
instead of CC_OP_EFLAGS.  This is a tiny bit more efficient in the
common case where only CF is checked in the resulting flags.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-10-17 19:41:29 +02:00
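The reason only CF matters can be seen from the architectural effect of BT, modeled here in standalone C: the instruction produces a single bit, so keeping that bit in a cc register under a "CF comes from here" cc_op is cheaper than materializing full EFLAGS.

```c
#include <stdint.h>

/* CF after BT/BTS/BTR/BTC: the selected bit of the operand.
 * opsize_bits is 16, 32 or 64; the other flags are left untouched. */
static int bt_cf(uint64_t val, unsigned pos, unsigned opsize_bits)
{
    return (val >> (pos & (opsize_bits - 1))) & 1;
}
```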
Richard Henderson
ded1db48c9 target/i386: Fix tss access size in switch_tss_ra
The two limit_max variables represent size - 1, just like the
encoding in the GDT, thus the 'old' access was off by one.
Access only the minimal size of the new tss: the complete tss contains
the iopb, which may be a larger block than the access api expects, and
which is irrelevant because the iopb is not accessed during the
switch itself.

Fixes: 8b131065080a ("target/i386/tcg: use X86Access for TSS access")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2511
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240819074052.207783-1-richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
2024-08-21 09:11:26 +10:00
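Both points can be illustrated with a standalone sketch (constants per the SDM, not QEMU code): a descriptor limit encodes size - 1, and only the fixed part of the TSS, without the trailing iopb, has to be mapped for the switch.

```c
#include <stdint.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

static uint32_t tss_access_size(uint32_t limit_max, int tss_32bit)
{
    uint32_t fixed_size = tss_32bit ? 0x68 : 0x2c;  /* TSS without iopb */
    return MIN(limit_max + 1, fixed_size);          /* limit_max is size - 1 */
}
```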
Richard Henderson
83a3a20e59 target/i386: Fix carry flag for BLSI
BLSI has inverted semantics for C as compared to the other two
BMI1 instructions, BLSMSK and BLSR.  Introduce CC_OP_BLSI* for
this purpose.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2175
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20240801075845.573075-3-richard.henderson@linaro.org>
2024-08-21 09:11:26 +10:00
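For reference, the BMI1 carry semantics in standalone C, per the architectural definitions (QEMU encodes these in cc_ops rather than computing booleans):

```c
#include <stdbool.h>
#include <stdint.h>

static uint64_t blsr(uint64_t s, bool *cf)   { *cf = (s == 0); return s & (s - 1); }
static uint64_t blsmsk(uint64_t s, bool *cf) { *cf = (s == 0); return s ^ (s - 1); }
/* BLSI is the odd one out: CF is set when the source is NON-zero. */
static uint64_t blsi(uint64_t s, bool *cf)   { *cf = (s != 0); return s & (0 - s); }
```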
Richard Henderson
266d6dddbd target/i386: Split out gen_prepare_val_nz
Split out the TCG_COND_TSTEQ logic from gen_prepare_eflags_z,
and use it for CC_OP_BMILG* as well.  Prepare for requiring
both zero and non-zero senses.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240801075845.573075-2-richard.henderson@linaro.org>
2024-08-21 09:11:26 +10:00
Alex Bennée
cf584a908a target/i386: allow access_ptr to force slow path on failed probe
When we are using TCG plugin memory callbacks, probe_access_internal
will return TLB_MMIO to force the slow path for memory access. This
results in probe_access returning NULL, but the x86 access_ptr function
happily accepts an empty haddr, resulting in segfault hilarity.

Check for an empty haddr to prevent the segfault and enable plugins to
track all the memory operations for the x86 save/restore helpers. As
we also want to run the slow path when instrumenting *-user, we should
also drop the short-cutting test_ptr macro.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2489
Fixes: 6d03226b42 (plugins: force slow path when plugins instrument memory ops)
Reviewed-by: Alexandre Iooss <erdnaxe@crans.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20240813202329.1237572-8-alex.bennee@linaro.org>
2024-08-16 14:04:19 +01:00
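The guard can be modeled in standalone C (a sketch of the pattern, not the patch itself): when the probe yields no host pointer, fall back to per-element slow-path accesses instead of dereferencing NULL, which also keeps the accesses visible to plugins.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

static void copy_from_guest(uint8_t *dst, const uint8_t *haddr, size_t len,
                            uint8_t (*slow_ldb)(uintptr_t), uintptr_t gaddr)
{
    if (haddr) {                /* fast path: direct host memory */
        memcpy(dst, haddr, len);
        return;
    }
    for (size_t i = 0; i < len; i++) {
        dst[i] = slow_ldb(gaddr + i);  /* slow path, instrumentable */
    }
}
```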
Richard Henderson
7700d2293c target/i386: Assert MMX and XMM registers in range
The mmx assert would fire without the fix for #2495.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Link: https://lore.kernel.org/r/20240812025844.58956-4-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-08-13 16:35:43 +02:00
Richard Henderson
45230bca85 target/i386: Use unit not type in decode_modrm
Rather than enumerating the types that can produce
MMX operands, examine the unit.  No functional change.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Link: https://lore.kernel.org/r/20240812025844.58956-3-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-08-13 11:33:34 +02:00
Richard Henderson
416f2b16c0 target/i386: Do not apply REX to MMX operands
Cc: qemu-stable@nongnu.org
Fixes: b3e22b2318a ("target/i386: add core of new i386 decoder")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2495
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Link: https://lore.kernel.org/r/20240812025844.58956-2-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-08-13 11:33:34 +02:00
Richard Henderson
ac63755b20 target/i386: Fix VSIB decode
With normal SIB, index == 4 indicates no index.
With VSIB, there is no exception for VR4/VR12.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2474
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Link: https://lore.kernel.org/r/20240805003130.1421051-3-richard.henderson@linaro.org
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-08-05 14:14:47 +02:00
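The decode rule can be stated as a standalone helper (a model of the rule, not QEMU's decoder): the "no index" special case applies only to a plain SIB byte with a combined index of 4; under VSIB every encoding, including 4 and 12, names a vector register.

```c
/* index_field is bits 5:3 of the SIB byte; rex_x is the REX.X/VEX.X bit.
 * Returns -1 when there is no index register. */
static int sib_index(int rex_x, int index_field, int is_vsib)
{
    int index = (rex_x << 3) | index_field;
    if (!is_vsib && index == 4) {
        return -1;
    }
    return index;
}
```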
Peter Maydell
bde8adb808 target/i386: Remove dead assignment to ss in do_interrupt64()
Coverity points out that in do_interrupt64() in the "to inner
privilege" codepath we set "ss = 0", but because we also set
"new_stack = 1" there, later in the function we will always override
that value of ss with "ss = 0 | dpl".

Remove the unnecessary initialization of ss, which allows us to
reduce the scope of the variable to only where it is used.  Borrow a
comment from helper_lcall_protected() that explains what "0 | dpl"
means here.

Resolves: Coverity CID 1527395
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240723162525.1585743-1-peter.maydell@linaro.org
2024-07-29 16:59:44 +01:00
Romain Malmain
5782c014d1 v9.0.2 release
Merge tag 'v9.0.2' into update_qemu_9_0_2

v9.0.2 release
2024-07-23 16:40:04 +02:00
Paolo Bonzini
6a079f2e68 target/i386/tcg: save current task state before loading new one
This is how the steps are ordered in the manual.  EFLAGS.NT is
overwritten after the fact in the saved image.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 18:18:25 +02:00
Paolo Bonzini
8b13106508 target/i386/tcg: use X86Access for TSS access
This takes care of probing the vaddr range in advance, and is also faster
because it avoids repeated TLB lookups.  It also matches the Intel manual
better, as it says "Checks that the current (old) TSS, new TSS, and all
segment descriptors used in the task switch are paged into system memory";
note however that it's not clear how the processor checks for segment
descriptors, and this check is not included in the AMD manual.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 18:18:25 +02:00
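The probe-once pattern can be modeled in standalone C (QEMU's X86Access API differs in detail): translate and validate the whole range up front, then perform every load against the cached host pointer with no further lookups, so a fault cannot strike midway through the switch.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef struct {
    uint8_t *host;  /* host address of the probed guest range */
    size_t size;
} Access;

static void access_prepare(Access *ac, uint8_t *host, size_t size)
{
    ac->host = host;  /* in QEMU, any fault is raised here, up front */
    ac->size = size;
}

static uint32_t access_ldl(const Access *ac, size_t off)
{
    uint32_t val;
    assert(off + sizeof(val) <= ac->size);  /* range was probed already */
    memcpy(&val, ac->host + off, sizeof(val));
    return val;
}
```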
Paolo Bonzini
05d41bbcb3 target/i386/tcg: check for correct busy state before switching to a new task
This step is listed in the Intel manual: "Checks that the new task is available
(call, jump, exception, or interrupt) or busy (IRET return)".

The AMD manual lists the same operation under the "Preventing recursion"
paragraph of "12.3.4 Nesting Tasks", though it is not clear if the processor
checks the busy bit in the IRET case.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 18:18:24 +02:00
Paolo Bonzini
8053862af9 target/i386/tcg: Compute MMU index once
Add the MMU index to the StackAccess struct, so that it can be cached
or (in the next patch) computed from information that is not in
CPUX86State.

Co-developed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 18:18:24 +02:00
Richard Henderson
059368bcf5 target/i386/tcg: Reorg push/pop within seg_helper.c
Interrupts and call gates should use accesses with the DPL as
the privilege level.  While computing the applicable MMU index
is easy, the harder thing is how to plumb it in the code.

One possibility could be to add a single argument to the PUSH* macros
for the privilege level, but this is repetitive and risks confusion
between the involved privilege levels.

Another possibility is to pass both CPL and DPL, and adjusting both
PUSH* and POP* to use specific privilege levels (instead of using
cpu_{ld,st}*_data). This makes the code more symmetric.

However, a more complicated but much nicer approach is to use a structure
to contain the stack parameters, env, unwind return address, and rewrite
the macros into functions.  The struct provides an easy home for the MMU
index as well.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Link: https://lore.kernel.org/r/20240617161210.4639-4-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 18:18:24 +02:00
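A standalone sketch of the chosen approach (field names are assumptions): the struct carries the stack parameters and the MMU index, and the former PUSH macros become ordinary functions.

```c
#include <stdint.h>
#include <string.h>

typedef struct StackAccess {
    uint8_t *stack;   /* stands in for env + segment base */
    uint32_t sp;
    uint32_t sp_mask;
    int mmu_index;    /* derived from the DPL, computed once */
} StackAccess;

static void pushl(StackAccess *sa, uint32_t val)
{
    sa->sp = (sa->sp - 4) & sa->sp_mask;
    memcpy(sa->stack + sa->sp, &val, 4);  /* an mmu_index store in QEMU */
}

static void pushw(StackAccess *sa, uint16_t val)
{
    sa->sp = (sa->sp - 2) & sa->sp_mask;
    memcpy(sa->stack + sa->sp, &val, 2);
}
```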
Paolo Bonzini
312ef3243e target/i386/tcg: use PUSHL/PUSHW for error code
Do not pre-decrement esp; let the macros subtract the appropriate
operand size.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 18:18:24 +02:00
Paolo Bonzini
0bd385e7e3 target/i386/tcg: Allow IRET from user mode to user mode with SMAP
This fixes a bug wherein i386/tcg assumed an interrupt return using
the IRET instruction was always returning from kernel mode to either
kernel mode or user mode. This assumption is violated when IRET is used
as a clever way to restore thread state, as for example in the dotnet
runtime. There, IRET returns from user mode to user mode.

The bug is that stack accesses from IRET and RETF, as well as accesses
to the parameters in a call gate, are normal data accesses using the
current CPL.  This manifested itself as a page fault in the guest Linux
kernel due to SMAP preventing the access.

This bug appears to have been in QEMU since the beginning.

Analyzed-by: Robert R. Henry <rrh.henry@gmail.com>
Co-developed-by: Robert R. Henry <rrh.henry@gmail.com>
Signed-off-by: Robert R. Henry <rrh.henry@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 18:18:24 +02:00
Richard Henderson
a7cf494993 target/i386/tcg: Remove SEG_ADDL
This truncation is now handled by MMU_*32_IDX.  The introduction of
MMU_*32_IDX in fact applied correct 32-bit wraparound to 16-bit accesses
with a high segment base (e.g.  big real mode or vm86 mode), which did
not use SEG_ADDL.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Link: https://lore.kernel.org/r/20240617161210.4639-3-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 18:18:24 +02:00
Paolo Bonzini
3afc6539a8 target/i386/tcg: fix POP to memory in long mode
In long mode, POP to memory will write a full 64-bit value.  However,
the call to gen_writeback() in gen_POP will use MO_32 because the
decoding table is incorrect.

The bug was latent until commit aea49fbb01a ("target/i386: use gen_writeback()
within gen_POP()", 2024-06-08), and then became visible because gen_op_st_v
now receives op->ot instead of the "ot" returned by gen_pop_T0.

Analyzed-by: Clément Chigot <chigot@adacore.com>
Fixes: 5e9e21bcc4d ("target/i386: move 60-BF opcodes to new decoder", 2024-05-07)
Tested-by: Clément Chigot <chigot@adacore.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 18:18:24 +02:00
Paolo Bonzini
74f73c2918 target/i386: remove unused enum
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-28 19:26:54 +02:00
Paolo Bonzini
460231ad36 target/i386: give CC_OP_POPCNT low bits corresponding to MO_TL
Handle it like the other arithmetic cc_ops.  This simplifies the
implementation of the bit test instructions a bit.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-28 14:44:52 +02:00
Paolo Bonzini
944f400134 target/i386: use cpu_cc_dst for CC_OP_POPCNT
It is the only CCOp, among those that compute ZF from one of the cc_op_*
registers, that uses cpu_cc_src.  Do not make it the odd one out;
instead, use cpu_cc_dst like the others.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-28 14:44:52 +02:00
Romain Malmain
2c7a79e826 v9.0.1 release
Merge tag 'v9.0.1' into update_qemu_9_0_1

v9.0.1 release
2024-06-18 23:42:08 +02:00
Paolo Bonzini
0c4da54883 target/i386: convert CMPXCHG to new decoder
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-17 09:47:39 +02:00
Paolo Bonzini
7b1f25ac3a target/i386: convert XADD to new decoder
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-17 09:47:39 +02:00
Paolo Bonzini
11ffaf8c73 target/i386: convert LZCNT/TZCNT/BSF/BSR/POPCNT to new decoder
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-17 09:47:39 +02:00
Paolo Bonzini
6476902740 target/i386: convert SHLD/SHRD to new decoder
Use the same flag generation code as SHL and SHR, but use
the existing gen_shiftd_rm_T1 function to compute the result
as well as CC_SRC.

Decoding-wise, SHLD/SHRD by immediate count as a 4-operand
instruction, because s->T0 and s->T1 actually occupy three op
slots.  The infrastructure used by opcodes in the 0F 3A table
works fine.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-17 09:47:39 +02:00
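For reference, the operation whose result and CC_SRC gen_shiftd_rm_T1 computes, modeled for 32-bit SHLD in standalone C (a sketch of the semantics, not the TCG code):

```c
#include <stdint.h>

/* count is masked mod 32; a count of 0 leaves result and flags alone
 * (*cf is not written in that case).  CF is the last bit shifted out. */
static uint32_t shld32(uint32_t dst, uint32_t src, unsigned count, int *cf)
{
    count &= 31;
    if (count == 0) {
        return dst;
    }
    *cf = (dst >> (32 - count)) & 1;
    return (dst << count) | (src >> (32 - count));
}
```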
Paolo Bonzini
e4e5981daf target/i386: adapt gen_shift_count for SHLD/SHRD
SHLD/SHRD can have 3 operands - s->T0, s->T1 and either
an immediate count or CL - and therefore decode->op[2] is taken by the low part
of the register being shifted.  Pass X86_OP_* to gen_shift_count
from its current callers and hardcode cpu_regs[R_ECX] as the
shift count.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-17 09:47:39 +02:00