[PATCH v1] ARM64: mmu: fix mmu_early_enable VA->PA mapping
From: Lior Weintraub @ 2023-12-18 14:12 UTC
To: barebox; +Cc: Ahmad Fatoum
From 34dac7e73e486e864cfba5cee0e503a9641a502d Mon Sep 17 00:00:00 2001
From: Lior Weintraub <liorw@pliops.com>
Date: Mon, 18 Dec 2023 16:09:28 +0200
Subject: [PATCH v1] ARM64: mmu: fix mmu_early_enable VA->PA mapping
Fix the mmu_early_enable function to correctly map 40 bits of virtual address space to physical addresses with a 1:1 mapping.
It uses the new init_range function to set two table entries at TTB level 0 and then fills level 1 with the correct 1:1 mapping.
Signed-off-by: Lior Weintraub <liorw@pliops.com>
Tested-by: Ahmad Fatoum <a.fatoum@pengutronix.de> # Qemu ARM64 Virt
---
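Note: the "two level-0 tables" follow from the translation granule
geometry. With a 4 KiB granule and 48-bit VAs (BITS_PER_VA), each
level-0 entry maps 512 GiB, so a 40-bit (1 TiB) physical address space
needs exactly two of them. A minimal standalone sketch of that
arithmetic, reusing the patch's macro names (the program itself is
illustrative, not barebox code):

#include <stdio.h>
#include <stdint.h>

#define BITS_PER_PA   40              /* 1 TiB physical address space */
#define L0_XLAT_SIZE  (1ULL << 39)    /* 4K granule: one level-0 entry
                                         maps 512 GiB */

int main(void)
{
	uint64_t pa_space = 1ULL << BITS_PER_PA;

	/* number of level-0 entries for a 1:1 map of the PA space */
	printf("level-0 entries: %llu\n",
	       (unsigned long long)(pa_space / L0_XLAT_SIZE)); /* -> 2 */
	return 0;
}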
arch/arm/cpu/mmu_64.c | 16 +++++++++++++++-
arch/arm/cpu/mmu_64.h | 19 +++++++++++++++++--
arch/arm/include/asm/pgtable64.h | 1 +
3 files changed, 33 insertions(+), 3 deletions(-)
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index c6ea63e655..c47a744323 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -294,6 +294,18 @@ void dma_flush_range(void *ptr, size_t size)
v8_flush_dcache_range(start, end);
}
+static void init_range(size_t total_level0_tables)
+{
+ uint64_t *ttb = get_ttb();
+ uint64_t addr = 0;
+ while (total_level0_tables--) {
+ early_remap_range(addr, L0_XLAT_SIZE, MAP_UNCACHED);
+ split_block(ttb, 0);
+ addr += L0_XLAT_SIZE;
+ ttb++;
+ }
+}
+
void mmu_early_enable(unsigned long membase, unsigned long memsize)
{
int el;
@@ -308,7 +320,9 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize)
memset((void *)ttb, 0, GRANULE_SIZE);
- early_remap_range(0, 1UL << (BITS_PER_VA - 1), MAP_UNCACHED);
+ /* Assume a maximum BITS_PER_PA of 40 bits. With a 1:1 VA->PA
+ * mapping, covering the full 1 TiB range needs two level-0 tables. */
+ init_range(2);
early_remap_range(membase, memsize - OPTEE_SIZE, MAP_CACHED);
early_remap_range(membase + memsize - OPTEE_SIZE, OPTEE_SIZE, MAP_FAULT);
early_remap_range(PAGE_ALIGN_DOWN((uintptr_t)_stext), PAGE_ALIGN(_etext - _stext), MAP_CACHED);
diff --git a/arch/arm/cpu/mmu_64.h b/arch/arm/cpu/mmu_64.h
index e4d81dace4..e3959e4407 100644
--- a/arch/arm/cpu/mmu_64.h
+++ b/arch/arm/cpu/mmu_64.h
@@ -105,12 +105,27 @@ static inline uint64_t level2mask(int level)
return mask;
}
+/**
+ * calc_tcr() - compute the TCR (Translation Control Register) value
+ * @el: exception level
+ * @va_bits: number of virtual address bits
+ *
+ * Return: the TCR value for the given exception level
+ */
static inline uint64_t calc_tcr(int el, int va_bits)
{
- u64 ips;
- u64 tcr;
+ u64 ips; /* Intermediate Physical Address Size */
+ u64 tcr; /* Translation Control Register */
+#if (BITS_PER_PA == 40)
ips = 2;
+#elif (BITS_PER_PA == 36)
+ ips = 1;
+#elif (BITS_PER_PA == 32)
+ ips = 0;
+#else
+#error "Unsupported BITS_PER_PA"
+#endif
if (el == 1)
tcr = (ips << 32) | TCR_EPD1_DISABLE;
diff --git a/arch/arm/include/asm/pgtable64.h b/arch/arm/include/asm/pgtable64.h
index 21dac30cfe..b88ffe6be5 100644
--- a/arch/arm/include/asm/pgtable64.h
+++ b/arch/arm/include/asm/pgtable64.h
@@ -8,6 +8,7 @@
#define VA_START 0x0
#define BITS_PER_VA 48
+#define BITS_PER_PA 40 /* use 40 physical address bits */
/* Granule size of 4KB is being used */
#define GRANULE_SIZE_SHIFT 12
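Note: the BITS_PER_PA -> ips values in calc_tcr() follow the
TCR_ELx.IPS field encoding defined by the ARMv8 architecture
(0 = 32-bit/4 GiB, 1 = 36-bit/64 GiB, 2 = 40-bit/1 TiB). A runtime
sketch of the same lookup, extended with the remaining architectural
encodings; pa_bits_to_ips() is a hypothetical helper for illustration,
not part of the patch:

#include <stdint.h>

/* Map a physical address width to the ARMv8 TCR.IPS encoding. */
static inline uint64_t pa_bits_to_ips(int pa_bits)
{
	switch (pa_bits) {
	case 32: return 0;	/* 4 GiB */
	case 36: return 1;	/* 64 GiB */
	case 40: return 2;	/* 1 TiB */
	case 42: return 3;	/* 4 TiB */
	case 44: return 4;	/* 16 TiB */
	case 48: return 5;	/* 256 TiB */
	default: return 2;	/* unsupported width; the patch #errors here */
	}
}

/* e.g. pa_bits_to_ips(40) == 2, matching the 40-bit case above */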
--
2.40.0