When running on 32 bit TCG backends a wide unaligned load ends up
truncating data before returning to the guest. We specifically have
the return type as uint64_t to avoid any premature truncation so we
should use the same for the interim types.

Hopefully fixes #1830872

Signed-off-by: Alex Bennée <email address hidden>
---
 accel/tcg/cputlb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index cdcc3771020..b796ab1cbea 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1303,7 +1303,7 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
         && unlikely((addr & ~TARGET_PAGE_MASK) + size - 1
                     >= TARGET_PAGE_SIZE)) {
         target_ulong addr1, addr2;
-        tcg_target_ulong r1, r2;
+        uint64_t r1, r2;
         unsigned shift;
     do_unaligned_access:
         addr1 = addr & ~(size - 1);
-- 
2.20.1