* [PATCH 00/27] ARM: MMU rework
@ 2023-05-12 11:09 Sascha Hauer
2023-05-12 11:09 ` [PATCH 01/27] ARM: fix scratch mem position with OP-TEE Sascha Hauer
` (26 more replies)
0 siblings, 27 replies; 41+ messages in thread
From: Sascha Hauer @ 2023-05-12 11:09 UTC (permalink / raw)
To: Barebox List
The goal of this series is to properly map the SDRAM used for OP-TEE
non-executable, because otherwise the instruction prefetcher might
speculate into the OP-TEE area. This is currently not possible because
we use 1MiB (AArch32) or 1GiB (AArch64) sections, which are too coarse
for that.
With this series we start using two level page tables also in the early
MMU setup.
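To make the granularity point concrete, here is a small standalone sketch (illustrative only, not code from the series; the helper name is made up): with 1MiB sections, an attribute change for a small region spills over the whole enclosing section, while 4KiB pages bound the region exactly.

```c
#include <stdint.h>

#define SZ_1M (1024u * 1024u)
#define SZ_4K 4096u

/* Bytes forced to share a mapping attribute when a region of 'size'
 * bytes at 'base' is covered with 'granule'-sized translation entries. */
static uint64_t mapped_span(uint64_t base, uint64_t size, uint64_t granule)
{
	uint64_t start = base & ~(granule - 1);                        /* round down */
	uint64_t end = (base + size + granule - 1) & ~(granule - 1);   /* round up */

	return end - start;
}
```

A 64KiB OP-TEE region that is not section-aligned drags a full 1MiB along with it, while 4KiB pages cover exactly the 64KiB.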
Overall the MMU code is more consolidated now: we no longer
differentiate between early and non-early MMU setup.
Consequently the CONFIG_MMU_EARLY option is gone and early MMU setup is
always done when the MMU is enabled.
One nice side effect of this series is that the Rockchip RK3568 boards
now start about a second faster. On these boards the early MMU setup
used to be skipped because of insufficient memory start alignment.
Sascha
Sascha Hauer (27):
ARM: fix scratch mem position with OP-TEE
ARM: drop cache function initialization
ARM: Add _32 suffix to aarch32 specific filenames
ARM: cpu.c: remove unused include
ARM: mmu-common.c: use common mmu include
ARM: mmu32: rename mmu.h to mmu_32.h
ARM: mmu: implement MAP_FAULT
ARM: mmu64: Use arch_remap_range where possible
ARM: mmu32: implement zero_page_*()
ARM: i.MX: Drop HAB workaround
ARM: Move early MMU after malloc initialization
ARM: mmu: move dma_sync_single_for_device to extra file
ARM: mmu: merge mmu-early_xx.c into mmu_xx.c
ARM: mmu: alloc 64k for early page tables
ARM: mmu32: create alloc_pte()
ARM: mmu64: create alloc_pte()
ARM: mmu: drop ttb argument
ARM: mmu: always do MMU initialization early when MMU is enabled
ARM: mmu32: Assume MMU is on
ARM: mmu32: Fix pmd_flags_to_pte() for ARMv4/5/6
ARM: mmu32: Add pte_flags_to_pmd()
ARM: mmu32: add get_pte_flags, get_pmd_flags
ARM: mmu32: move functions into c file
ARM: mmu32: read TTB value from register
ARM: mmu32: Use pages for early MMU setup
ARM: mmu32: Skip reserved ranges during initialization
ARM: mmu64: Use two level pagetables in early code
arch/arm/Makefile | 5 +-
arch/arm/cpu/Kconfig | 3 +-
arch/arm/cpu/Makefile | 21 +-
arch/arm/cpu/{cache.c => cache_32.c} | 85 +++--
arch/arm/cpu/cache_64.c | 5 -
arch/arm/cpu/cpu.c | 2 -
arch/arm/cpu/dma_32.c | 20 ++
arch/arm/cpu/dma_64.c | 16 +
arch/arm/cpu/{entry_ll.S => entry_ll_32.S} | 0
.../arm/cpu/{exceptions.S => exceptions_32.S} | 0
.../arm/cpu/{interrupts.c => interrupts_32.c} | 0
arch/arm/cpu/{lowlevel.S => lowlevel_32.S} | 0
arch/arm/cpu/mmu-common.c | 13 +-
arch/arm/cpu/mmu-early.c | 71 ----
arch/arm/cpu/mmu-early_64.c | 93 ------
arch/arm/cpu/{mmu.c => mmu_32.c} | 304 +++++++++++-------
arch/arm/cpu/{mmu.h => mmu_32.h} | 20 --
arch/arm/cpu/mmu_64.c | 109 ++++---
arch/arm/cpu/{setupc.S => setupc_32.S} | 0
arch/arm/cpu/sm.c | 3 +-
.../arm/cpu/{smccc-call.S => smccc-call_32.S} | 0
arch/arm/cpu/start.c | 17 +-
arch/arm/cpu/uncompress.c | 7 +-
arch/arm/include/asm/barebox-arm.h | 10 +-
arch/arm/include/asm/cache.h | 2 -
arch/arm/include/asm/mmu.h | 3 +-
common/Kconfig | 9 -
drivers/hab/habv4.c | 9 +-
include/mmu.h | 1 +
29 files changed, 380 insertions(+), 448 deletions(-)
rename arch/arm/cpu/{cache.c => cache_32.c} (89%)
create mode 100644 arch/arm/cpu/dma_32.c
create mode 100644 arch/arm/cpu/dma_64.c
rename arch/arm/cpu/{entry_ll.S => entry_ll_32.S} (100%)
rename arch/arm/cpu/{exceptions.S => exceptions_32.S} (100%)
rename arch/arm/cpu/{interrupts.c => interrupts_32.c} (100%)
rename arch/arm/cpu/{lowlevel.S => lowlevel_32.S} (100%)
delete mode 100644 arch/arm/cpu/mmu-early.c
delete mode 100644 arch/arm/cpu/mmu-early_64.c
rename arch/arm/cpu/{mmu.c => mmu_32.c} (66%)
rename arch/arm/cpu/{mmu.h => mmu_32.h} (75%)
rename arch/arm/cpu/{setupc.S => setupc_32.S} (100%)
rename arch/arm/cpu/{smccc-call.S => smccc-call_32.S} (100%)
--
2.39.2
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH 01/27] ARM: fix scratch mem position with OP-TEE
2023-05-12 11:09 [PATCH 00/27] ARM: MMU rework Sascha Hauer
@ 2023-05-12 11:09 ` Sascha Hauer
2023-05-12 17:17 ` Ahmad Fatoum
2023-05-12 11:09 ` [PATCH 02/27] ARM: drop cache function initialization Sascha Hauer
` (25 subsequent siblings)
26 siblings, 1 reply; 41+ messages in thread
From: Sascha Hauer @ 2023-05-12 11:09 UTC (permalink / raw)
To: Barebox List
OP-TEE is placed right below the end of memory, so the scratch space
has to go below it.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/include/asm/barebox-arm.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm/include/asm/barebox-arm.h b/arch/arm/include/asm/barebox-arm.h
index 0cf4549cd7..711ccd2510 100644
--- a/arch/arm/include/asm/barebox-arm.h
+++ b/arch/arm/include/asm/barebox-arm.h
@@ -75,7 +75,7 @@ void *barebox_arm_boot_dtb(void);
static inline const void *arm_mem_scratch_get(void)
{
- return (const void *)__arm_mem_scratch(arm_mem_endmem_get());
+ return (const void *)__arm_mem_scratch(arm_mem_endmem_get() - OPTEE_SIZE);
}
#define arm_mem_stack_top(membase, endmem) ((endmem) - SZ_64K - OPTEE_SIZE)
--
2.39.2
* [PATCH 02/27] ARM: drop cache function initialization
2023-05-12 11:09 [PATCH 00/27] ARM: MMU rework Sascha Hauer
2023-05-12 11:09 ` [PATCH 01/27] ARM: fix scratch mem position with OP-TEE Sascha Hauer
@ 2023-05-12 11:09 ` Sascha Hauer
2023-05-12 17:19 ` Ahmad Fatoum
2023-05-12 11:09 ` [PATCH 03/27] ARM: Add _32 suffix to aarch32 specific filenames Sascha Hauer
` (24 subsequent siblings)
26 siblings, 1 reply; 41+ messages in thread
From: Sascha Hauer @ 2023-05-12 11:09 UTC (permalink / raw)
To: Barebox List
We need a call to arm_set_cache_functions() before the cache maintenance
functions can be used. Drop this call and just pick the correct
functions on the first call.
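The lazy selection described above follows a common C pattern: a static pointer resolved on first use instead of by an explicit init call. A rough standalone sketch (hypothetical names and stand-in implementations, not the actual barebox ops table):

```c
struct cache_ops {
	int (*flush)(void);
};

static int v7_flush(void) { return 7; }        /* stand-in implementation */
static int detect_cpu_arch(void) { return 7; } /* stand-in for cpu_architecture() */

/* Resolve the function table once, on first call. */
static const struct cache_ops *cache_ops_get(void)
{
	static const struct cache_ops v7_ops = { .flush = v7_flush };
	static const struct cache_ops *ops;

	if (ops)
		return ops;

	switch (detect_cpu_arch()) {
	case 7:
	default:
		ops = &v7_ops;
		break;
	}

	return ops;
}

int cache_flush(void)
{
	return cache_ops_get()->flush();
}
```

Callers no longer need to know whether initialization already happened; the first use does it.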
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/cache.c | 83 +++++++++++++++++-------------------
arch/arm/cpu/cache_64.c | 5 ---
arch/arm/cpu/mmu-early.c | 2 -
arch/arm/cpu/mmu.c | 2 -
arch/arm/cpu/start.c | 4 +-
arch/arm/include/asm/cache.h | 2 -
6 files changed, 41 insertions(+), 57 deletions(-)
diff --git a/arch/arm/cpu/cache.c b/arch/arm/cpu/cache.c
index 24a02c68f3..4202406d0d 100644
--- a/arch/arm/cpu/cache.c
+++ b/arch/arm/cpu/cache.c
@@ -17,8 +17,6 @@ struct cache_fns {
void (*mmu_cache_flush)(void);
};
-struct cache_fns *cache_fns;
-
#define DEFINE_CPU_FNS(arch) \
void arch##_dma_clean_range(unsigned long start, unsigned long end); \
void arch##_dma_flush_range(unsigned long start, unsigned long end); \
@@ -41,50 +39,13 @@ DEFINE_CPU_FNS(v5)
DEFINE_CPU_FNS(v6)
DEFINE_CPU_FNS(v7)
-void __dma_clean_range(unsigned long start, unsigned long end)
-{
- if (cache_fns)
- cache_fns->dma_clean_range(start, end);
-}
-
-void __dma_flush_range(unsigned long start, unsigned long end)
-{
- if (cache_fns)
- cache_fns->dma_flush_range(start, end);
-}
-
-void __dma_inv_range(unsigned long start, unsigned long end)
-{
- if (cache_fns)
- cache_fns->dma_inv_range(start, end);
-}
-
-#ifdef CONFIG_MMU
-
-void __mmu_cache_on(void)
-{
- if (cache_fns)
- cache_fns->mmu_cache_on();
-}
-
-void __mmu_cache_off(void)
+static struct cache_fns *cache_functions(void)
{
- if (cache_fns)
- cache_fns->mmu_cache_off();
-}
+ static struct cache_fns *cache_fns;
-void __mmu_cache_flush(void)
-{
if (cache_fns)
- cache_fns->mmu_cache_flush();
- if (outer_cache.flush_all)
- outer_cache.flush_all();
-}
-
-#endif
+ return cache_fns;
-int arm_set_cache_functions(void)
-{
switch (cpu_architecture()) {
#ifdef CONFIG_CPU_32v4T
case CPU_ARCH_ARMv4T:
@@ -113,9 +74,45 @@ int arm_set_cache_functions(void)
while(1);
}
- return 0;
+ return cache_fns;
+}
+
+void __dma_clean_range(unsigned long start, unsigned long end)
+{
+ cache_functions()->dma_clean_range(start, end);
+}
+
+void __dma_flush_range(unsigned long start, unsigned long end)
+{
+ cache_functions()->dma_flush_range(start, end);
+}
+
+void __dma_inv_range(unsigned long start, unsigned long end)
+{
+ cache_functions()->dma_inv_range(start, end);
+}
+
+#ifdef CONFIG_MMU
+
+void __mmu_cache_on(void)
+{
+ cache_functions()->mmu_cache_on();
+}
+
+void __mmu_cache_off(void)
+{
+ cache_functions()->mmu_cache_off();
}
+void __mmu_cache_flush(void)
+{
+ cache_functions()->mmu_cache_flush();
+ if (outer_cache.flush_all)
+ outer_cache.flush_all();
+}
+
+#endif
+
/*
* Early function to flush the caches. This is for use when the
* C environment is not yet fully initialized.
diff --git a/arch/arm/cpu/cache_64.c b/arch/arm/cpu/cache_64.c
index cb7bc0945c..3a30296128 100644
--- a/arch/arm/cpu/cache_64.c
+++ b/arch/arm/cpu/cache_64.c
@@ -6,11 +6,6 @@
#include <asm/cache.h>
#include <asm/system_info.h>
-int arm_set_cache_functions(void)
-{
- return 0;
-}
-
/*
* Early function to flush the caches. This is for use when the
* C environment is not yet fully initialized.
diff --git a/arch/arm/cpu/mmu-early.c b/arch/arm/cpu/mmu-early.c
index 0d528b9b9c..4895911cdb 100644
--- a/arch/arm/cpu/mmu-early.c
+++ b/arch/arm/cpu/mmu-early.c
@@ -28,8 +28,6 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize,
{
ttb = (uint32_t *)_ttb;
- arm_set_cache_functions();
-
set_ttbr(ttb);
/* For the XN bit to take effect, we can't be using DOMAIN_MANAGER. */
diff --git a/arch/arm/cpu/mmu.c b/arch/arm/cpu/mmu.c
index 6388e1bf14..78dd05577a 100644
--- a/arch/arm/cpu/mmu.c
+++ b/arch/arm/cpu/mmu.c
@@ -414,8 +414,6 @@ void __mmu_init(bool mmu_on)
{
struct memory_bank *bank;
- arm_set_cache_functions();
-
if (cpu_architecture() >= CPU_ARCH_ARMv7) {
pte_flags_cached = PTE_FLAGS_CACHED_V7;
pte_flags_wc = PTE_FLAGS_WC_V7;
diff --git a/arch/arm/cpu/start.c b/arch/arm/cpu/start.c
index be303514c2..bcfc630f3b 100644
--- a/arch/arm/cpu/start.c
+++ b/arch/arm/cpu/start.c
@@ -170,9 +170,7 @@ __noreturn __no_sanitize_address void barebox_non_pbl_start(unsigned long membas
if (IS_ENABLED(CONFIG_MMU_EARLY)) {
unsigned long ttb = arm_mem_ttb(membase, endmem);
- if (IS_ENABLED(CONFIG_PBL_IMAGE)) {
- arm_set_cache_functions();
- } else {
+ if (!IS_ENABLED(CONFIG_PBL_IMAGE)) {
pr_debug("enabling MMU, ttb @ 0x%08lx\n", ttb);
arm_early_mmu_cache_invalidate();
mmu_early_enable(membase, memsize - OPTEE_SIZE, ttb);
diff --git a/arch/arm/include/asm/cache.h b/arch/arm/include/asm/cache.h
index b63776a74a..261c30129a 100644
--- a/arch/arm/include/asm/cache.h
+++ b/arch/arm/include/asm/cache.h
@@ -18,8 +18,6 @@ static inline void icache_invalidate(void)
#endif
}
-int arm_set_cache_functions(void);
-
void arm_early_mmu_cache_flush(void);
void arm_early_mmu_cache_invalidate(void);
--
2.39.2
* [PATCH 03/27] ARM: Add _32 suffix to aarch32 specific filenames
2023-05-12 11:09 [PATCH 00/27] ARM: MMU rework Sascha Hauer
2023-05-12 11:09 ` [PATCH 01/27] ARM: fix scratch mem position with OP-TEE Sascha Hauer
2023-05-12 11:09 ` [PATCH 02/27] ARM: drop cache function initialization Sascha Hauer
@ 2023-05-12 11:09 ` Sascha Hauer
2023-05-12 17:21 ` Ahmad Fatoum
2023-05-12 11:09 ` [PATCH 04/27] ARM: cpu.c: remove unused include Sascha Hauer
` (23 subsequent siblings)
26 siblings, 1 reply; 41+ messages in thread
From: Sascha Hauer @ 2023-05-12 11:09 UTC (permalink / raw)
To: Barebox List
Several files in arch/arm/cpu/ have 32-bit and 64-bit versions. The
64-bit versions have a _64 suffix, but the 32-bit versions have none.
This can be confusing, as one cannot tell whether a file is
32-bit specific or common code.
Add a _32 suffix to the 32-bit files to avoid this confusion.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/Makefile | 5 ++++-
arch/arm/cpu/Makefile | 20 +++++++++----------
arch/arm/cpu/{cache.c => cache_32.c} | 0
arch/arm/cpu/{entry_ll.S => entry_ll_32.S} | 0
.../arm/cpu/{exceptions.S => exceptions_32.S} | 0
.../arm/cpu/{interrupts.c => interrupts_32.c} | 0
arch/arm/cpu/{lowlevel.S => lowlevel_32.S} | 0
arch/arm/cpu/{mmu-early.c => mmu-early_32.c} | 0
arch/arm/cpu/{mmu.c => mmu_32.c} | 0
arch/arm/cpu/{setupc.S => setupc_32.S} | 0
.../arm/cpu/{smccc-call.S => smccc-call_32.S} | 0
11 files changed, 14 insertions(+), 11 deletions(-)
rename arch/arm/cpu/{cache.c => cache_32.c} (100%)
rename arch/arm/cpu/{entry_ll.S => entry_ll_32.S} (100%)
rename arch/arm/cpu/{exceptions.S => exceptions_32.S} (100%)
rename arch/arm/cpu/{interrupts.c => interrupts_32.c} (100%)
rename arch/arm/cpu/{lowlevel.S => lowlevel_32.S} (100%)
rename arch/arm/cpu/{mmu-early.c => mmu-early_32.c} (100%)
rename arch/arm/cpu/{mmu.c => mmu_32.c} (100%)
rename arch/arm/cpu/{setupc.S => setupc_32.S} (100%)
rename arch/arm/cpu/{smccc-call.S => smccc-call_32.S} (100%)
diff --git a/arch/arm/Makefile b/arch/arm/Makefile
index a506f1e3a3..cb88c7b330 100644
--- a/arch/arm/Makefile
+++ b/arch/arm/Makefile
@@ -78,10 +78,13 @@ endif
ifeq ($(CONFIG_CPU_V8), y)
KBUILD_CPPFLAGS += $(CFLAGS_ABI) $(arch-y) $(tune-y)
KBUILD_AFLAGS += -include asm/unified.h
-export S64 = _64
+export S64_32 = 64
+export S64 = 64
else
KBUILD_CPPFLAGS += $(CFLAGS_ABI) $(arch-y) $(tune-y) $(CFLAGS_THUMB2)
KBUILD_AFLAGS += -include asm/unified.h -msoft-float $(AFLAGS_THUMB2)
+export S64_32 = 32
+export S32 = 32
endif
# Machine directory name. This list is sorted alphanumerically
diff --git a/arch/arm/cpu/Makefile b/arch/arm/cpu/Makefile
index 7674c1464c..fef2026da5 100644
--- a/arch/arm/cpu/Makefile
+++ b/arch/arm/cpu/Makefile
@@ -2,15 +2,15 @@
obj-y += cpu.o
-obj-$(CONFIG_ARM_EXCEPTIONS) += exceptions$(S64).o interrupts$(S64).o
-obj-$(CONFIG_MMU) += mmu$(S64).o mmu-common.o
-obj-pbl-y += lowlevel$(S64).o
-obj-pbl-$(CONFIG_MMU) += mmu-early$(S64).o
+obj-$(CONFIG_ARM_EXCEPTIONS) += exceptions_$(S64_32).o interrupts_$(S64_32).o
+obj-$(CONFIG_MMU) += mmu_$(S64_32).o mmu-common.o
+obj-pbl-y += lowlevel_$(S64_32).o
+obj-pbl-$(CONFIG_MMU) += mmu-early_$(S64_32).o
obj-pbl-$(CONFIG_CPU_32v7) += hyp.o
AFLAGS_hyp.o :=-Wa,-march=armv7-a -Wa,-mcpu=all
AFLAGS_hyp.pbl.o :=-Wa,-march=armv7-a -Wa,-mcpu=all
-obj-y += start.o entry.o entry_ll$(S64).o
+obj-y += start.o entry.o entry_ll_$(S64_32).o
KASAN_SANITIZE_start.o := n
pbl-$(CONFIG_CPU_64) += head_64.o
@@ -18,7 +18,7 @@ pbl-$(CONFIG_CPU_64) += head_64.o
pbl-$(CONFIG_BOARD_ARM_GENERIC_DT) += board-dt-2nd.o
pbl-$(CONFIG_BOARD_ARM_GENERIC_DT_AARCH64) += board-dt-2nd-aarch64.o
-obj-pbl-y += setupc$(S64).o cache$(S64).o
+obj-pbl-y += setupc_$(S64_32).o cache_$(S64_32).o
obj-$(CONFIG_ARM_PSCI_CLIENT) += psci-client.o
@@ -35,9 +35,9 @@ endif
obj-$(CONFIG_ARM_PSCI) += psci.o
obj-$(CONFIG_ARM_PSCI_OF) += psci-of.o
-obj-pbl-$(CONFIG_ARM_SMCCC) += smccc-call$(S64).o
-AFLAGS_smccc-call$(S64).o :=-Wa,-march=armv$(if $(S64),8,7)-a
-AFLAGS_smccc-call$(S64).pbl.o :=-Wa,-march=armv$(if $(S64),8,7)-a
+obj-pbl-$(CONFIG_ARM_SMCCC) += smccc-call_$(S64_32).o
+AFLAGS_smccc-call_$(S64_32).o :=-Wa,-march=armv$(if $(S64),8,7)-a
+AFLAGS_smccc-call_$(S64_32).pbl.o :=-Wa,-march=armv$(if $(S64),8,7)-a
obj-$(CONFIG_ARM_SECURE_MONITOR) += sm.o sm_as.o
AFLAGS_sm_as.o :=-Wa,-march=armv7-a
@@ -52,7 +52,7 @@ obj-pbl-$(CONFIG_CPU_64v8) += cache-armv8.o
AFLAGS_cache-armv8.o :=-Wa,-march=armv8-a
AFLAGS-cache-armv8.pbl.o :=-Wa,-march=armv8-a
-pbl-y += entry.o entry_ll$(S64).o
+pbl-y += entry.o entry_ll_$(S64_32).o
pbl-y += uncompress.o
pbl-$(CONFIG_ARM_ATF) += atf.o
diff --git a/arch/arm/cpu/cache.c b/arch/arm/cpu/cache_32.c
similarity index 100%
rename from arch/arm/cpu/cache.c
rename to arch/arm/cpu/cache_32.c
diff --git a/arch/arm/cpu/entry_ll.S b/arch/arm/cpu/entry_ll_32.S
similarity index 100%
rename from arch/arm/cpu/entry_ll.S
rename to arch/arm/cpu/entry_ll_32.S
diff --git a/arch/arm/cpu/exceptions.S b/arch/arm/cpu/exceptions_32.S
similarity index 100%
rename from arch/arm/cpu/exceptions.S
rename to arch/arm/cpu/exceptions_32.S
diff --git a/arch/arm/cpu/interrupts.c b/arch/arm/cpu/interrupts_32.c
similarity index 100%
rename from arch/arm/cpu/interrupts.c
rename to arch/arm/cpu/interrupts_32.c
diff --git a/arch/arm/cpu/lowlevel.S b/arch/arm/cpu/lowlevel_32.S
similarity index 100%
rename from arch/arm/cpu/lowlevel.S
rename to arch/arm/cpu/lowlevel_32.S
diff --git a/arch/arm/cpu/mmu-early.c b/arch/arm/cpu/mmu-early_32.c
similarity index 100%
rename from arch/arm/cpu/mmu-early.c
rename to arch/arm/cpu/mmu-early_32.c
diff --git a/arch/arm/cpu/mmu.c b/arch/arm/cpu/mmu_32.c
similarity index 100%
rename from arch/arm/cpu/mmu.c
rename to arch/arm/cpu/mmu_32.c
diff --git a/arch/arm/cpu/setupc.S b/arch/arm/cpu/setupc_32.S
similarity index 100%
rename from arch/arm/cpu/setupc.S
rename to arch/arm/cpu/setupc_32.S
diff --git a/arch/arm/cpu/smccc-call.S b/arch/arm/cpu/smccc-call_32.S
similarity index 100%
rename from arch/arm/cpu/smccc-call.S
rename to arch/arm/cpu/smccc-call_32.S
--
2.39.2
* [PATCH 04/27] ARM: cpu.c: remove unused include
2023-05-12 11:09 [PATCH 00/27] ARM: MMU rework Sascha Hauer
` (2 preceding siblings ...)
2023-05-12 11:09 ` [PATCH 03/27] ARM: Add _32 suffix to aarch32 specific filenames Sascha Hauer
@ 2023-05-12 11:09 ` Sascha Hauer
2023-05-12 17:22 ` Ahmad Fatoum
2023-05-12 11:09 ` [PATCH 05/27] ARM: mmu-common.c: use common mmu include Sascha Hauer
` (22 subsequent siblings)
26 siblings, 1 reply; 41+ messages in thread
From: Sascha Hauer @ 2023-05-12 11:09 UTC (permalink / raw)
To: Barebox List
cpu.c doesn't use anything from mmu.h, so drop its inclusion.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/cpu.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/arch/arm/cpu/cpu.c b/arch/arm/cpu/cpu.c
index 5b79dd2a8f..cacd442b28 100644
--- a/arch/arm/cpu/cpu.c
+++ b/arch/arm/cpu/cpu.c
@@ -18,8 +18,6 @@
#include <asm/cache.h>
#include <asm/ptrace.h>
-#include "mmu.h"
-
/**
* Enable processor's instruction cache
*/
--
2.39.2
* [PATCH 05/27] ARM: mmu-common.c: use common mmu include
2023-05-12 11:09 [PATCH 00/27] ARM: MMU rework Sascha Hauer
` (3 preceding siblings ...)
2023-05-12 11:09 ` [PATCH 04/27] ARM: cpu.c: remove unused include Sascha Hauer
@ 2023-05-12 11:09 ` Sascha Hauer
2023-05-12 17:23 ` Ahmad Fatoum
2023-05-12 11:09 ` [PATCH 06/27] ARM: mmu32: rename mmu.h to mmu_32.h Sascha Hauer
` (21 subsequent siblings)
26 siblings, 1 reply; 41+ messages in thread
From: Sascha Hauer @ 2023-05-12 11:09 UTC (permalink / raw)
To: Barebox List
mmu-common.c needs things from mmu-common.h, but not from mmu.h, so
include the former instead.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu-common.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm/cpu/mmu-common.c b/arch/arm/cpu/mmu-common.c
index 488a189f1c..e6cc3b974f 100644
--- a/arch/arm/cpu/mmu-common.c
+++ b/arch/arm/cpu/mmu-common.c
@@ -11,7 +11,7 @@
#include <asm/system.h>
#include <asm/barebox-arm.h>
#include <memory.h>
-#include "mmu.h"
+#include "mmu-common.h"
void dma_sync_single_for_cpu(dma_addr_t address, size_t size,
enum dma_data_direction dir)
--
2.39.2
* [PATCH 06/27] ARM: mmu32: rename mmu.h to mmu_32.h
2023-05-12 11:09 [PATCH 00/27] ARM: MMU rework Sascha Hauer
` (4 preceding siblings ...)
2023-05-12 11:09 ` [PATCH 05/27] ARM: mmu-common.c: use common mmu include Sascha Hauer
@ 2023-05-12 11:09 ` Sascha Hauer
2023-05-12 17:23 ` Ahmad Fatoum
2023-05-12 11:09 ` [PATCH 07/27] ARM: mmu: implement MAP_FAULT Sascha Hauer
` (20 subsequent siblings)
26 siblings, 1 reply; 41+ messages in thread
From: Sascha Hauer @ 2023-05-12 11:09 UTC (permalink / raw)
To: Barebox List
mmu.h is 32bit specific, so rename it to mmu_32.h to match the C files,
which have already been renamed.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/cache_32.c | 2 +-
arch/arm/cpu/mmu-early_32.c | 2 +-
arch/arm/cpu/mmu_32.c | 2 +-
arch/arm/cpu/{mmu.h => mmu_32.h} | 0
arch/arm/cpu/sm.c | 3 +--
5 files changed, 4 insertions(+), 5 deletions(-)
rename arch/arm/cpu/{mmu.h => mmu_32.h} (100%)
diff --git a/arch/arm/cpu/cache_32.c b/arch/arm/cpu/cache_32.c
index 4202406d0d..0ac50c4d9a 100644
--- a/arch/arm/cpu/cache_32.c
+++ b/arch/arm/cpu/cache_32.c
@@ -6,7 +6,7 @@
#include <asm/cache.h>
#include <asm/system_info.h>
-#include "mmu.h"
+#include "mmu_32.h"
struct cache_fns {
void (*dma_clean_range)(unsigned long start, unsigned long end);
diff --git a/arch/arm/cpu/mmu-early_32.c b/arch/arm/cpu/mmu-early_32.c
index 4895911cdb..07c5917e6a 100644
--- a/arch/arm/cpu/mmu-early_32.c
+++ b/arch/arm/cpu/mmu-early_32.c
@@ -9,7 +9,7 @@
#include <asm/cache.h>
#include <asm-generic/sections.h>
-#include "mmu.h"
+#include "mmu_32.h"
static uint32_t *ttb;
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 78dd05577a..8ec21ee1d2 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -18,7 +18,7 @@
#include <asm/system_info.h>
#include <asm/sections.h>
-#include "mmu.h"
+#include "mmu_32.h"
#define PTRS_PER_PTE (PGDIR_SIZE / PAGE_SIZE)
#define ARCH_MAP_WRITECOMBINE ((unsigned)-1)
diff --git a/arch/arm/cpu/mmu.h b/arch/arm/cpu/mmu_32.h
similarity index 100%
rename from arch/arm/cpu/mmu.h
rename to arch/arm/cpu/mmu_32.h
diff --git a/arch/arm/cpu/sm.c b/arch/arm/cpu/sm.c
index f5a1edbd4f..53f5142b63 100644
--- a/arch/arm/cpu/sm.c
+++ b/arch/arm/cpu/sm.c
@@ -19,8 +19,7 @@
#include <linux/arm-smccc.h>
#include <asm-generic/sections.h>
#include <asm/secure.h>
-
-#include "mmu.h"
+#include "mmu_32.h"
static unsigned int read_id_pfr1(void)
{
--
2.39.2
* [PATCH 07/27] ARM: mmu: implement MAP_FAULT
2023-05-12 11:09 [PATCH 00/27] ARM: MMU rework Sascha Hauer
` (5 preceding siblings ...)
2023-05-12 11:09 ` [PATCH 06/27] ARM: mmu32: rename mmu.h to mmu_32.h Sascha Hauer
@ 2023-05-12 11:09 ` Sascha Hauer
2023-05-12 11:09 ` [PATCH 08/27] ARM: mmu64: Use arch_remap_range where possible Sascha Hauer
` (19 subsequent siblings)
26 siblings, 0 replies; 41+ messages in thread
From: Sascha Hauer @ 2023-05-12 11:09 UTC (permalink / raw)
To: Barebox List
MAP_FAULT can be used for the zero page to make accesses to it fault.
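MAP_FAULT works on both architectures by writing an all-zero descriptor, which the MMU treats as invalid, so any access traps. A minimal sketch of such a flags-to-attributes switch (the attribute values here are placeholders, not real ARM descriptor bits):

```c
#define MAP_UNCACHED 0
#define MAP_CACHED   1
#define MAP_FAULT    2

/* Placeholder attribute values for illustration only. */
#define PTE_CACHED_ATTRS   0x4cu
#define PTE_UNCACHED_ATTRS 0x02u

/* Return the attribute bits for a mapping type, or -1 for unknown
 * types. A descriptor of 0 is invalid on ARM, so MAP_FAULT pages
 * fault on access. */
static long pte_attrs_for(unsigned flags)
{
	switch (flags) {
	case MAP_CACHED:
		return PTE_CACHED_ATTRS;
	case MAP_UNCACHED:
		return PTE_UNCACHED_ATTRS;
	case MAP_FAULT:
		return 0; /* invalid descriptor => faults */
	default:
		return -1;
	}
}
```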
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu_32.c | 4 ++++
arch/arm/cpu/mmu_64.c | 3 +++
include/mmu.h | 1 +
3 files changed, 8 insertions(+)
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 8ec21ee1d2..a1ecc49f03 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -178,6 +178,10 @@ int arch_remap_range(void *start, size_t size, unsigned flags)
pte_flags = pte_flags_uncached;
pgd_flags = pgd_flags_uncached;
break;
+ case MAP_FAULT:
+ pte_flags = 0x0;
+ pgd_flags = 0x0;
+ break;
case ARCH_MAP_WRITECOMBINE:
pte_flags = pte_flags_wc;
pgd_flags = pgd_flags_wc;
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index f43ac9a121..a22e0c81ab 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -154,6 +154,9 @@ int arch_remap_range(void *_start, size_t size, unsigned flags)
case MAP_UNCACHED:
attrs = attrs_uncached_mem();
break;
+ case MAP_FAULT:
+ attrs = 0x0;
+ break;
default:
return -EINVAL;
}
diff --git a/include/mmu.h b/include/mmu.h
index 2e23853df3..2326cb215a 100644
--- a/include/mmu.h
+++ b/include/mmu.h
@@ -4,6 +4,7 @@
#define MAP_UNCACHED 0
#define MAP_CACHED 1
+#define MAP_FAULT 2
/*
* Depending on the architecture the default mapping can be
--
2.39.2
* [PATCH 08/27] ARM: mmu64: Use arch_remap_range where possible
2023-05-12 11:09 [PATCH 00/27] ARM: MMU rework Sascha Hauer
` (6 preceding siblings ...)
2023-05-12 11:09 ` [PATCH 07/27] ARM: mmu: implement MAP_FAULT Sascha Hauer
@ 2023-05-12 11:09 ` Sascha Hauer
2023-05-12 17:40 ` Ahmad Fatoum
2023-05-12 11:09 ` [PATCH 09/27] ARM: mmu32: implement zero_page_*() Sascha Hauer
` (18 subsequent siblings)
26 siblings, 1 reply; 41+ messages in thread
From: Sascha Hauer @ 2023-05-12 11:09 UTC (permalink / raw)
To: Barebox List
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu_64.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index a22e0c81ab..0639d0f1ce 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -174,12 +174,12 @@ static void mmu_enable(void)
void zero_page_access(void)
{
- create_sections(0x0, 0x0, PAGE_SIZE, CACHED_MEM);
+ arch_remap_range(0x0, PAGE_SIZE, MAP_CACHED);
}
void zero_page_faulting(void)
{
- create_sections(0x0, 0x0, PAGE_SIZE, 0x0);
+ arch_remap_range(0x0, PAGE_SIZE, MAP_FAULT);
}
/*
@@ -201,17 +201,17 @@ void __mmu_init(bool mmu_on)
pr_debug("ttb: 0x%p\n", ttb);
/* create a flat mapping */
- create_sections(0, 0, 1UL << (BITS_PER_VA - 1), attrs_uncached_mem());
+ arch_remap_range(0, 1UL << (BITS_PER_VA - 1), MAP_UNCACHED);
/* Map sdram cached. */
for_each_memory_bank(bank) {
struct resource *rsv;
- create_sections(bank->start, bank->start, bank->size, CACHED_MEM);
+ arch_remap_range((void *)bank->start, bank->size, MAP_CACHED);
for_each_reserved_region(bank, rsv) {
- create_sections(resource_first_page(rsv), resource_first_page(rsv),
- resource_count_pages(rsv), attrs_uncached_mem());
+ arch_remap_range((void *)resource_first_page(rsv),
+ resource_count_pages(rsv), MAP_UNCACHED);
}
}
--
2.39.2
* [PATCH 09/27] ARM: mmu32: implement zero_page_*()
2023-05-12 11:09 [PATCH 00/27] ARM: MMU rework Sascha Hauer
` (7 preceding siblings ...)
2023-05-12 11:09 ` [PATCH 08/27] ARM: mmu64: Use arch_remap_range where possible Sascha Hauer
@ 2023-05-12 11:09 ` Sascha Hauer
2023-05-12 11:09 ` [PATCH 10/27] ARM: i.MX: Drop HAB workaround Sascha Hauer
` (17 subsequent siblings)
26 siblings, 0 replies; 41+ messages in thread
From: Sascha Hauer @ 2023-05-12 11:09 UTC (permalink / raw)
To: Barebox List
We have functions to grant access to the zero page and to make it fault
again. Implement them for AArch32.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/Kconfig | 3 ++-
arch/arm/cpu/mmu-common.c | 11 +++++++++++
arch/arm/cpu/mmu_32.c | 5 ++---
arch/arm/cpu/mmu_64.c | 10 ----------
4 files changed, 15 insertions(+), 14 deletions(-)
diff --git a/arch/arm/cpu/Kconfig b/arch/arm/cpu/Kconfig
index 26f07043fe..40dd35833a 100644
--- a/arch/arm/cpu/Kconfig
+++ b/arch/arm/cpu/Kconfig
@@ -11,6 +11,7 @@ config CPU_32
select HAVE_MOD_ARCH_SPECIFIC
select HAS_DMA
select HAVE_PBL_IMAGE
+ select ARCH_HAS_ZERO_PAGE
config CPU_64
bool
@@ -19,6 +20,7 @@ config CPU_64
select HAVE_PBL_MULTI_IMAGES
select HAS_DMA
select ARCH_WANT_FRAME_POINTERS
+ select ARCH_HAS_ZERO_PAGE
# Select CPU types depending on the architecture selected. This selects
# which CPUs we support in the kernel image, and the compiler instruction
@@ -92,7 +94,6 @@ config CPU_V8
select ARM_EXCEPTIONS
select GENERIC_FIND_NEXT_BIT
select ARCH_HAS_STACK_DUMP
- select ARCH_HAS_ZERO_PAGE
config CPU_XSC3
bool
diff --git a/arch/arm/cpu/mmu-common.c b/arch/arm/cpu/mmu-common.c
index e6cc3b974f..02f512c2c6 100644
--- a/arch/arm/cpu/mmu-common.c
+++ b/arch/arm/cpu/mmu-common.c
@@ -11,6 +11,7 @@
#include <asm/system.h>
#include <asm/barebox-arm.h>
#include <memory.h>
+#include <zero_page.h>
#include "mmu-common.h"
void dma_sync_single_for_cpu(dma_addr_t address, size_t size,
@@ -57,6 +58,16 @@ void dma_free_coherent(void *mem, dma_addr_t dma_handle, size_t size)
free(mem);
}
+void zero_page_access(void)
+{
+ arch_remap_range(0x0, PAGE_SIZE, MAP_CACHED);
+}
+
+void zero_page_faulting(void)
+{
+ arch_remap_range(0x0, PAGE_SIZE, MAP_FAULT);
+}
+
static int mmu_init(void)
{
if (list_empty(&memory_banks)) {
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index a1ecc49f03..7b31938ecd 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -9,6 +9,7 @@
#include <init.h>
#include <mmu.h>
#include <errno.h>
+#include <zero_page.h>
#include <linux/sizes.h>
#include <asm/memory.h>
#include <asm/barebox-arm.h>
@@ -362,7 +363,6 @@ static int set_vector_table(unsigned long adr)
static void create_zero_page(void)
{
struct resource *zero_sdram;
- u32 *zero;
zero_sdram = request_sdram_region("zero page", 0x0, PAGE_SIZE);
if (zero_sdram) {
@@ -372,8 +372,7 @@ static void create_zero_page(void)
*/
pr_debug("zero page is in SDRAM area, currently not supported\n");
} else {
- zero = arm_create_pte(0x0, pte_flags_uncached);
- zero[0] = 0;
+ zero_page_faulting();
pr_debug("Created zero page\n");
}
}
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index 0639d0f1ce..c7c16b527b 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -172,16 +172,6 @@ static void mmu_enable(void)
set_cr(get_cr() | CR_M | CR_C | CR_I);
}
-void zero_page_access(void)
-{
- arch_remap_range(0x0, PAGE_SIZE, MAP_CACHED);
-}
-
-void zero_page_faulting(void)
-{
- arch_remap_range(0x0, PAGE_SIZE, MAP_FAULT);
-}
-
/*
* Prepare MMU for usage enable it.
*/
--
2.39.2
* [PATCH 10/27] ARM: i.MX: Drop HAB workaround
2023-05-12 11:09 [PATCH 00/27] ARM: MMU rework Sascha Hauer
` (8 preceding siblings ...)
2023-05-12 11:09 ` [PATCH 09/27] ARM: mmu32: implement zero_page_*() Sascha Hauer
@ 2023-05-12 11:09 ` Sascha Hauer
2023-05-12 18:09 ` Ahmad Fatoum
2023-05-12 11:09 ` [PATCH 11/27] ARM: Move early MMU after malloc initialization Sascha Hauer
` (16 subsequent siblings)
26 siblings, 1 reply; 41+ messages in thread
From: Sascha Hauer @ 2023-05-12 11:09 UTC (permalink / raw)
To: Barebox List
The i.MX HAB code on i.MX6 has to jump into the ROM, which happens to
start at 0x0. To make that possible we used to map the ROM cached and
jump into it before the MMU was initialized. Instead, remap the ROM as
needed in the HAB code so that we can safely jump into it with the MMU
enabled.
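The remap-call-restore pattern this patch moves into the HAB code can be sketched as follows (stand-in helpers; the real code uses arch_remap_range() and zero_page_faulting()):

```c
enum { ROM_FAULTING, ROM_CACHED };

static int rom_state = ROM_FAULTING; /* stand-in for the zero-page mapping */
static int rom_call_ok;

static void remap_rom(int state)
{
	rom_state = state;
}

static void rom_routine(void)
{
	/* would fault if the ROM were still mapped faulting */
	rom_call_ok = (rom_state == ROM_CACHED);
}

/* Map the ROM only around the call that needs it, then make the
 * zero page fault again afterwards. */
static int call_into_rom(void)
{
	remap_rom(ROM_CACHED);
	rom_routine();
	remap_rom(ROM_FAULTING);

	return rom_call_ok;
}
```

The ROM is accessible only for the duration of the call, so the zero page stays faulting for the rest of runtime.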
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu-early_32.c | 7 -------
drivers/hab/habv4.c | 9 ++++++++-
2 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/arch/arm/cpu/mmu-early_32.c b/arch/arm/cpu/mmu-early_32.c
index 07c5917e6a..94bde44c9b 100644
--- a/arch/arm/cpu/mmu-early_32.c
+++ b/arch/arm/cpu/mmu-early_32.c
@@ -58,12 +58,5 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize,
/* maps main memory as cachable */
map_region(membase, memsize, PMD_SECT_DEF_CACHED);
- /*
- * With HAB enabled we call into the ROM code later in imx6_hab_get_status().
- * Map the ROM cached which has the effect that the XN bit is not set.
- */
- if (IS_ENABLED(CONFIG_HABV4) && IS_ENABLED(CONFIG_ARCH_IMX6))
- map_region(0x0, SZ_1M, PMD_SECT_DEF_CACHED);
-
__mmu_cache_on();
}
diff --git a/drivers/hab/habv4.c b/drivers/hab/habv4.c
index 252e38f655..d2494db114 100644
--- a/drivers/hab/habv4.c
+++ b/drivers/hab/habv4.c
@@ -11,6 +11,9 @@
#include <hab.h>
#include <init.h>
#include <types.h>
+#include <mmu.h>
+#include <zero_page.h>
+#include <linux/sizes.h>
#include <linux/arm-smccc.h>
#include <asm/cache.h>
@@ -613,12 +616,16 @@ static int init_imx6_hab_get_status(void)
/* can happen in multi-image builds and is not an error */
return 0;
+ arch_remap_range(0x0, SZ_1M, MAP_CACHED);
+
/*
* Nobody will check the return value if there were HAB errors, but the
* initcall will fail spectaculously with a strange error message.
*/
imx6_hab_get_status();
+ zero_page_faulting();
+
return 0;
}
@@ -627,7 +634,7 @@ static int init_imx6_hab_get_status(void)
* which will no longer be accessible when the MMU sets the zero page to
* faulting.
*/
-postconsole_initcall(init_imx6_hab_get_status);
+postmmu_initcall(init_imx6_hab_get_status);
int imx28_hab_get_status(void)
{
--
2.39.2
* [PATCH 11/27] ARM: Move early MMU after malloc initialization
2023-05-12 11:09 [PATCH 00/27] ARM: MMU rework Sascha Hauer
` (9 preceding siblings ...)
2023-05-12 11:09 ` [PATCH 10/27] ARM: i.MX: Drop HAB workaround Sascha Hauer
@ 2023-05-12 11:09 ` Sascha Hauer
2023-05-12 18:10 ` Ahmad Fatoum
2023-05-12 11:09 ` [PATCH 12/27] ARM: mmu: move dma_sync_single_for_device to extra file Sascha Hauer
` (15 subsequent siblings)
26 siblings, 1 reply; 41+ messages in thread
From: Sascha Hauer @ 2023-05-12 11:09 UTC (permalink / raw)
To: Barebox List
Initialize the MMU only after malloc has been set up so that we can use
malloc in the MMU code, for example to allocate memory for page tables.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/start.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/arch/arm/cpu/start.c b/arch/arm/cpu/start.c
index bcfc630f3b..9d788eba2b 100644
--- a/arch/arm/cpu/start.c
+++ b/arch/arm/cpu/start.c
@@ -167,16 +167,6 @@ __noreturn __no_sanitize_address void barebox_non_pbl_start(unsigned long membas
arm_barebox_size = barebox_size;
malloc_end = barebox_base;
- if (IS_ENABLED(CONFIG_MMU_EARLY)) {
- unsigned long ttb = arm_mem_ttb(membase, endmem);
-
- if (!IS_ENABLED(CONFIG_PBL_IMAGE)) {
- pr_debug("enabling MMU, ttb @ 0x%08lx\n", ttb);
- arm_early_mmu_cache_invalidate();
- mmu_early_enable(membase, memsize - OPTEE_SIZE, ttb);
- }
- }
-
if (boarddata) {
uint32_t totalsize = 0;
const char *name;
@@ -226,6 +216,16 @@ __noreturn __no_sanitize_address void barebox_non_pbl_start(unsigned long membas
mem_malloc_init((void *)malloc_start, (void *)malloc_end - 1);
+ if (IS_ENABLED(CONFIG_MMU_EARLY)) {
+ unsigned long ttb = arm_mem_ttb(membase, endmem);
+
+ if (!IS_ENABLED(CONFIG_PBL_IMAGE)) {
+ pr_debug("enabling MMU, ttb @ 0x%08lx\n", ttb);
+ arm_early_mmu_cache_invalidate();
+ mmu_early_enable(membase, memsize - OPTEE_SIZE, ttb);
+ }
+ }
+
if (IS_ENABLED(CONFIG_BOOTM_OPTEE))
of_add_reserve_entry(endmem - OPTEE_SIZE, endmem - 1);
--
2.39.2
* [PATCH 12/27] ARM: mmu: move dma_sync_single_for_device to extra file
2023-05-12 11:09 [PATCH 00/27] ARM: MMU rework Sascha Hauer
` (10 preceding siblings ...)
2023-05-12 11:09 ` [PATCH 11/27] ARM: Move early MMU after malloc initialization Sascha Hauer
@ 2023-05-12 11:09 ` Sascha Hauer
2023-05-12 18:30 ` Ahmad Fatoum
2023-05-12 11:09 ` [PATCH 13/27] ARM: mmu: merge mmu-early_xx.c into mmu_xx.c Sascha Hauer
` (14 subsequent siblings)
26 siblings, 1 reply; 41+ messages in thread
From: Sascha Hauer @ 2023-05-12 11:09 UTC (permalink / raw)
To: Barebox List
The next patch merges the mmu.c files with their corresponding
mmu-early.c files. Before doing that, move the functions that cannot
be compiled for PBL out into separate files.
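The dispatch in the moved dma_sync_single_for_device() follows one rule:
transfers from the device need a cache invalidate (so the CPU does not read
stale lines), everything else needs a clean/flush (so the device sees the
CPU's writes). A minimal standalone model of that decision — the enum values
mirror the Linux convention, and the function and cache_op names are
illustrative, not barebox API:

```c
#include <assert.h>

/* Direction values follow the Linux/barebox dma-dir convention */
enum dma_data_direction { DMA_BIDIRECTIONAL = 0, DMA_TO_DEVICE = 1, DMA_FROM_DEVICE = 2 };
enum cache_op { CACHE_CLEAN, CACHE_INVALIDATE };

/* Model of the branch in dma_sync_single_for_device():
 * DMA_FROM_DEVICE invalidates (discard CPU lines before the device
 * writes the buffer); all other directions clean (push CPU writes out). */
static enum cache_op sync_for_device_op(enum dma_data_direction dir)
{
	return dir == DMA_FROM_DEVICE ? CACHE_INVALIDATE : CACHE_CLEAN;
}
```

The 32-bit and 64-bit files differ only in which primitives implement these
two operations (__dma_inv_range/__dma_clean_range plus the outer cache on
AArch32, v8_inv_dcache_range/v8_flush_dcache_range on AArch64).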
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/Makefile | 1 +
arch/arm/cpu/dma_32.c | 20 ++++++++++++++++++++
arch/arm/cpu/dma_64.c | 16 ++++++++++++++++
arch/arm/cpu/mmu_32.c | 18 ------------------
arch/arm/cpu/mmu_64.c | 13 -------------
5 files changed, 37 insertions(+), 31 deletions(-)
create mode 100644 arch/arm/cpu/dma_32.c
create mode 100644 arch/arm/cpu/dma_64.c
diff --git a/arch/arm/cpu/Makefile b/arch/arm/cpu/Makefile
index fef2026da5..cd5f36eb49 100644
--- a/arch/arm/cpu/Makefile
+++ b/arch/arm/cpu/Makefile
@@ -4,6 +4,7 @@ obj-y += cpu.o
obj-$(CONFIG_ARM_EXCEPTIONS) += exceptions_$(S64_32).o interrupts_$(S64_32).o
obj-$(CONFIG_MMU) += mmu_$(S64_32).o mmu-common.o
+obj-$(CONFIG_MMU) += dma_$(S64_32).o
obj-pbl-y += lowlevel_$(S64_32).o
obj-pbl-$(CONFIG_MMU) += mmu-early_$(S64_32).o
obj-pbl-$(CONFIG_CPU_32v7) += hyp.o
diff --git a/arch/arm/cpu/dma_32.c b/arch/arm/cpu/dma_32.c
new file mode 100644
index 0000000000..a66aa26b9b
--- /dev/null
+++ b/arch/arm/cpu/dma_32.c
@@ -0,0 +1,20 @@
+#include <dma.h>
+#include <asm/mmu.h>
+
+void dma_sync_single_for_device(dma_addr_t address, size_t size,
+ enum dma_data_direction dir)
+{
+ /*
+ * FIXME: This function needs a device argument to support non 1:1 mappings
+ */
+
+ if (dir == DMA_FROM_DEVICE) {
+ __dma_inv_range(address, address + size);
+ if (outer_cache.inv_range)
+ outer_cache.inv_range(address, address + size);
+ } else {
+ __dma_clean_range(address, address + size);
+ if (outer_cache.clean_range)
+ outer_cache.clean_range(address, address + size);
+ }
+}
diff --git a/arch/arm/cpu/dma_64.c b/arch/arm/cpu/dma_64.c
new file mode 100644
index 0000000000..b4ae736c9b
--- /dev/null
+++ b/arch/arm/cpu/dma_64.c
@@ -0,0 +1,16 @@
+#include <dma.h>
+#include <asm/mmu.h>
+#include <asm/cache.h>
+
+void dma_sync_single_for_device(dma_addr_t address, size_t size,
+ enum dma_data_direction dir)
+{
+ /*
+ * FIXME: This function needs a device argument to support non 1:1 mappings
+ */
+
+ if (dir == DMA_FROM_DEVICE)
+ v8_inv_dcache_range(address, address + size - 1);
+ else
+ v8_flush_dcache_range(address, address + size - 1);
+}
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 7b31938ecd..10f447874c 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -494,21 +494,3 @@ void *dma_alloc_writecombine(size_t size, dma_addr_t *dma_handle)
{
return dma_alloc_map(size, dma_handle, ARCH_MAP_WRITECOMBINE);
}
-
-void dma_sync_single_for_device(dma_addr_t address, size_t size,
- enum dma_data_direction dir)
-{
- /*
- * FIXME: This function needs a device argument to support non 1:1 mappings
- */
-
- if (dir == DMA_FROM_DEVICE) {
- __dma_inv_range(address, address + size);
- if (outer_cache.inv_range)
- outer_cache.inv_range(address, address + size);
- } else {
- __dma_clean_range(address, address + size);
- if (outer_cache.clean_range)
- outer_cache.clean_range(address, address + size);
- }
-}
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index c7c16b527b..9150de1676 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -241,16 +241,3 @@ void dma_flush_range(void *ptr, size_t size)
v8_flush_dcache_range(start, end);
}
-
-void dma_sync_single_for_device(dma_addr_t address, size_t size,
- enum dma_data_direction dir)
-{
- /*
- * FIXME: This function needs a device argument to support non 1:1 mappings
- */
-
- if (dir == DMA_FROM_DEVICE)
- v8_inv_dcache_range(address, address + size - 1);
- else
- v8_flush_dcache_range(address, address + size - 1);
-}
--
2.39.2
* [PATCH 13/27] ARM: mmu: merge mmu-early_xx.c into mmu_xx.c
2023-05-12 11:09 [PATCH 00/27] ARM: MMU rework Sascha Hauer
` (11 preceding siblings ...)
2023-05-12 11:09 ` [PATCH 12/27] ARM: mmu: move dma_sync_single_for_device to extra file Sascha Hauer
@ 2023-05-12 11:09 ` Sascha Hauer
2023-05-12 11:09 ` [PATCH 14/27] ARM: mmu: alloc 64k for early page tables Sascha Hauer
` (13 subsequent siblings)
26 siblings, 0 replies; 41+ messages in thread
From: Sascha Hauer @ 2023-05-12 11:09 UTC (permalink / raw)
To: Barebox List
The code will be consolidated further, so move it into the same files
to make code sharing easier.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/Makefile | 4 +-
arch/arm/cpu/mmu-early_32.c | 62 -------------------------
arch/arm/cpu/mmu-early_64.c | 93 -------------------------------------
arch/arm/cpu/mmu_32.c | 50 ++++++++++++++++++++
arch/arm/cpu/mmu_64.c | 76 ++++++++++++++++++++++++++++++
5 files changed, 128 insertions(+), 157 deletions(-)
delete mode 100644 arch/arm/cpu/mmu-early_32.c
delete mode 100644 arch/arm/cpu/mmu-early_64.c
diff --git a/arch/arm/cpu/Makefile b/arch/arm/cpu/Makefile
index cd5f36eb49..0e4fa69229 100644
--- a/arch/arm/cpu/Makefile
+++ b/arch/arm/cpu/Makefile
@@ -3,10 +3,10 @@
obj-y += cpu.o
obj-$(CONFIG_ARM_EXCEPTIONS) += exceptions_$(S64_32).o interrupts_$(S64_32).o
-obj-$(CONFIG_MMU) += mmu_$(S64_32).o mmu-common.o
+obj-$(CONFIG_MMU) += mmu-common.o
+obj-pbl-$(CONFIG_MMU) += mmu_$(S64_32).o
obj-$(CONFIG_MMU) += dma_$(S64_32).o
obj-pbl-y += lowlevel_$(S64_32).o
-obj-pbl-$(CONFIG_MMU) += mmu-early_$(S64_32).o
obj-pbl-$(CONFIG_CPU_32v7) += hyp.o
AFLAGS_hyp.o :=-Wa,-march=armv7-a -Wa,-mcpu=all
AFLAGS_hyp.pbl.o :=-Wa,-march=armv7-a -Wa,-mcpu=all
diff --git a/arch/arm/cpu/mmu-early_32.c b/arch/arm/cpu/mmu-early_32.c
deleted file mode 100644
index 94bde44c9b..0000000000
--- a/arch/arm/cpu/mmu-early_32.c
+++ /dev/null
@@ -1,62 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-
-#include <common.h>
-#include <asm/mmu.h>
-#include <errno.h>
-#include <linux/sizes.h>
-#include <asm/memory.h>
-#include <asm/system.h>
-#include <asm/cache.h>
-#include <asm-generic/sections.h>
-
-#include "mmu_32.h"
-
-static uint32_t *ttb;
-
-static inline void map_region(unsigned long start, unsigned long size,
- uint64_t flags)
-
-{
- start = ALIGN_DOWN(start, SZ_1M);
- size = ALIGN(size, SZ_1M);
-
- create_sections(ttb, start, start + size - 1, flags);
-}
-
-void mmu_early_enable(unsigned long membase, unsigned long memsize,
- unsigned long _ttb)
-{
- ttb = (uint32_t *)_ttb;
-
- set_ttbr(ttb);
-
- /* For the XN bit to take effect, we can't be using DOMAIN_MANAGER. */
- if (cpu_architecture() >= CPU_ARCH_ARMv7)
- set_domain(DOMAIN_CLIENT);
- else
- set_domain(DOMAIN_MANAGER);
-
- /*
- * This marks the whole address space as uncachable as well as
- * unexecutable if possible
- */
- create_flat_mapping(ttb);
-
- /*
- * There can be SoCs that have a section shared between device memory
- * and the on-chip RAM hosting the PBL. Thus mark this section
- * uncachable, but executable.
- * On such SoCs, executing from OCRAM could cause the instruction
- * prefetcher to speculatively access that device memory, triggering
- * potential errant behavior.
- *
- * If your SoC has such a memory layout, you should rewrite the code
- * here to map the OCRAM page-wise.
- */
- map_region((unsigned long)_stext, _etext - _stext, PMD_SECT_DEF_UNCACHED);
-
- /* maps main memory as cachable */
- map_region(membase, memsize, PMD_SECT_DEF_CACHED);
-
- __mmu_cache_on();
-}
diff --git a/arch/arm/cpu/mmu-early_64.c b/arch/arm/cpu/mmu-early_64.c
deleted file mode 100644
index d1f4a046bb..0000000000
--- a/arch/arm/cpu/mmu-early_64.c
+++ /dev/null
@@ -1,93 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-
-#include <common.h>
-#include <dma-dir.h>
-#include <init.h>
-#include <mmu.h>
-#include <errno.h>
-#include <linux/sizes.h>
-#include <asm/memory.h>
-#include <asm/pgtable64.h>
-#include <asm/barebox-arm.h>
-#include <asm/system.h>
-#include <asm/cache.h>
-#include <memory.h>
-#include <asm/system_info.h>
-
-#include "mmu_64.h"
-
-static void create_sections(void *ttb, uint64_t virt, uint64_t phys,
- uint64_t size, uint64_t attr)
-{
- uint64_t block_size;
- uint64_t block_shift;
- uint64_t *pte;
- uint64_t idx;
- uint64_t addr;
- uint64_t *table;
-
- addr = virt;
-
- attr &= ~PTE_TYPE_MASK;
-
- table = ttb;
-
- while (1) {
- block_shift = level2shift(1);
- idx = (addr & level2mask(1)) >> block_shift;
- block_size = (1ULL << block_shift);
-
- pte = table + idx;
-
- *pte = phys | attr | PTE_TYPE_BLOCK;
-
- if (size < block_size)
- break;
-
- addr += block_size;
- phys += block_size;
- size -= block_size;
- }
-}
-
-#define EARLY_BITS_PER_VA 39
-
-void mmu_early_enable(unsigned long membase, unsigned long memsize,
- unsigned long ttb)
-{
- int el;
-
- /*
- * For the early code we only create level 1 pagetables which only
- * allow for a 1GiB granularity. If our membase is not aligned to that
- * bail out without enabling the MMU.
- */
- if (membase & ((1ULL << level2shift(1)) - 1))
- return;
-
- memset((void *)ttb, 0, GRANULE_SIZE);
-
- el = current_el();
- set_ttbr_tcr_mair(el, ttb, calc_tcr(el, EARLY_BITS_PER_VA), MEMORY_ATTRIBUTES);
- create_sections((void *)ttb, 0, 0, 1UL << (EARLY_BITS_PER_VA - 1),
- attrs_uncached_mem());
- create_sections((void *)ttb, membase, membase, memsize, CACHED_MEM);
- tlb_invalidate();
- isb();
- set_cr(get_cr() | CR_M);
-}
-
-void mmu_early_disable(void)
-{
- unsigned int cr;
-
- cr = get_cr();
- cr &= ~(CR_M | CR_C);
-
- set_cr(cr);
- v8_flush_dcache_all();
- tlb_invalidate();
-
- dsb();
- isb();
-}
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 10f447874c..12fe892400 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -494,3 +494,53 @@ void *dma_alloc_writecombine(size_t size, dma_addr_t *dma_handle)
{
return dma_alloc_map(size, dma_handle, ARCH_MAP_WRITECOMBINE);
}
+
+static uint32_t *ttb;
+
+static inline void map_region(unsigned long start, unsigned long size,
+ uint64_t flags)
+
+{
+ start = ALIGN_DOWN(start, SZ_1M);
+ size = ALIGN(size, SZ_1M);
+
+ create_sections(ttb, start, start + size - 1, flags);
+}
+
+void mmu_early_enable(unsigned long membase, unsigned long memsize,
+ unsigned long _ttb)
+{
+ ttb = (uint32_t *)_ttb;
+
+ set_ttbr(ttb);
+
+ /* For the XN bit to take effect, we can't be using DOMAIN_MANAGER. */
+ if (cpu_architecture() >= CPU_ARCH_ARMv7)
+ set_domain(DOMAIN_CLIENT);
+ else
+ set_domain(DOMAIN_MANAGER);
+
+ /*
+ * This marks the whole address space as uncachable as well as
+ * unexecutable if possible
+ */
+ create_flat_mapping(ttb);
+
+ /*
+ * There can be SoCs that have a section shared between device memory
+ * and the on-chip RAM hosting the PBL. Thus mark this section
+ * uncachable, but executable.
+ * On such SoCs, executing from OCRAM could cause the instruction
+ * prefetcher to speculatively access that device memory, triggering
+ * potential errant behavior.
+ *
+ * If your SoC has such a memory layout, you should rewrite the code
+ * here to map the OCRAM page-wise.
+ */
+ map_region((unsigned long)_stext, _etext - _stext, PMD_SECT_DEF_UNCACHED);
+
+ /* maps main memory as cachable */
+ map_region(membase, memsize, PMD_SECT_DEF_CACHED);
+
+ __mmu_cache_on();
+}
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index 9150de1676..55ada960c5 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -241,3 +241,79 @@ void dma_flush_range(void *ptr, size_t size)
v8_flush_dcache_range(start, end);
}
+
+static void early_create_sections(void *ttb, uint64_t virt, uint64_t phys,
+ uint64_t size, uint64_t attr)
+{
+ uint64_t block_size;
+ uint64_t block_shift;
+ uint64_t *pte;
+ uint64_t idx;
+ uint64_t addr;
+ uint64_t *table;
+
+ addr = virt;
+
+ attr &= ~PTE_TYPE_MASK;
+
+ table = ttb;
+
+ while (1) {
+ block_shift = level2shift(1);
+ idx = (addr & level2mask(1)) >> block_shift;
+ block_size = (1ULL << block_shift);
+
+ pte = table + idx;
+
+ *pte = phys | attr | PTE_TYPE_BLOCK;
+
+ if (size < block_size)
+ break;
+
+ addr += block_size;
+ phys += block_size;
+ size -= block_size;
+ }
+}
+
+#define EARLY_BITS_PER_VA 39
+
+void mmu_early_enable(unsigned long membase, unsigned long memsize,
+ unsigned long ttb)
+{
+ int el;
+
+ /*
+ * For the early code we only create level 1 pagetables which only
+ * allow for a 1GiB granularity. If our membase is not aligned to that
+ * bail out without enabling the MMU.
+ */
+ if (membase & ((1ULL << level2shift(1)) - 1))
+ return;
+
+ memset((void *)ttb, 0, GRANULE_SIZE);
+
+ el = current_el();
+ set_ttbr_tcr_mair(el, ttb, calc_tcr(el, EARLY_BITS_PER_VA), MEMORY_ATTRIBUTES);
+ early_create_sections((void *)ttb, 0, 0, 1UL << (EARLY_BITS_PER_VA - 1),
+ attrs_uncached_mem());
+ early_create_sections((void *)ttb, membase, membase, memsize, CACHED_MEM);
+ tlb_invalidate();
+ isb();
+ set_cr(get_cr() | CR_M);
+}
+
+void mmu_early_disable(void)
+{
+ unsigned int cr;
+
+ cr = get_cr();
+ cr &= ~(CR_M | CR_C);
+
+ set_cr(cr);
+ v8_flush_dcache_all();
+ tlb_invalidate();
+
+ dsb();
+ isb();
+}
--
2.39.2
* [PATCH 14/27] ARM: mmu: alloc 64k for early page tables
2023-05-12 11:09 [PATCH 00/27] ARM: MMU rework Sascha Hauer
` (12 preceding siblings ...)
2023-05-12 11:09 ` [PATCH 13/27] ARM: mmu: merge mmu-early_xx.c into mmu_xx.c Sascha Hauer
@ 2023-05-12 11:09 ` Sascha Hauer
2023-05-12 11:09 ` [PATCH 15/27] ARM: mmu32: create alloc_pte() Sascha Hauer
` (12 subsequent siblings)
26 siblings, 0 replies; 41+ messages in thread
From: Sascha Hauer @ 2023-05-12 11:09 UTC (permalink / raw)
To: Barebox List
This is a preparation for using two level page tables in the PBL.
To do that we need a way to allocate page tables in PBL. As malloc
is not available in PBL, increase the area we use for the TTB to
make some space available for page tables.
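The size arithmetic behind this: the AArch32 first-level table needs one
4-byte descriptor per 1 MiB section of the 4 GiB address space, i.e. 16 KiB
(the old ARM_TTB_SIZE). Growing the reserved area to 64 KiB leaves room for
second-level tables. A standalone check of those numbers — the 48 KiB of
leftover space is derived from the patch's constants, not stated in the
commit message:

```c
#include <assert.h>
#include <stdint.h>

#define SZ_1M  (1UL << 20)
#define SZ_4G  (1ULL << 32)
#define SZ_64K (64UL * 1024)

/* One 4-byte first-level descriptor per 1 MiB section of the 4 GiB
 * address space -> 16 KiB (the old ARM_TTB_SIZE). */
static uint64_t arm_ttb_size(void)
{
	return SZ_4G / SZ_1M * sizeof(uint32_t);
}

/* Space left in the 64 KiB early area after the first-level table,
 * available for second-level page tables. */
static uint64_t early_pte_space(void)
{
	return SZ_64K - arm_ttb_size();
}
```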
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu_32.c | 6 ++++++
arch/arm/include/asm/barebox-arm.h | 8 ++------
2 files changed, 8 insertions(+), 6 deletions(-)
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 12fe892400..4050d96846 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -24,6 +24,12 @@
#define PTRS_PER_PTE (PGDIR_SIZE / PAGE_SIZE)
#define ARCH_MAP_WRITECOMBINE ((unsigned)-1)
+/*
+ * We have a 4GiB address space split into 1MiB sections, with each
+ * section header taking 4 bytes
+ */
+#define ARM_TTB_SIZE (SZ_4G / SZ_1M * sizeof(u32))
+
static uint32_t *ttb;
/*
diff --git a/arch/arm/include/asm/barebox-arm.h b/arch/arm/include/asm/barebox-arm.h
index 711ccd2510..f3398e3902 100644
--- a/arch/arm/include/asm/barebox-arm.h
+++ b/arch/arm/include/asm/barebox-arm.h
@@ -23,11 +23,7 @@
#include <asm/reloc.h>
#include <linux/stringify.h>
-/*
- * We have a 4GiB address space split into 1MiB sections, with each
- * section header taking 4 bytes
- */
-#define ARM_TTB_SIZE (SZ_4G / SZ_1M * sizeof(u32))
+#define ARM_EARLY_PAGETABLE_SIZE SZ_64K
void __noreturn barebox_arm_entry(unsigned long membase, unsigned long memsize, void *boarddata);
@@ -90,7 +86,7 @@ static inline unsigned long arm_mem_ttb(unsigned long membase,
unsigned long endmem)
{
endmem = arm_mem_stack(membase, endmem);
- endmem = ALIGN_DOWN(endmem, ARM_TTB_SIZE) - ARM_TTB_SIZE;
+ endmem = ALIGN_DOWN(endmem, ARM_EARLY_PAGETABLE_SIZE) - ARM_EARLY_PAGETABLE_SIZE;
return endmem;
}
--
2.39.2
* [PATCH 15/27] ARM: mmu32: create alloc_pte()
2023-05-12 11:09 [PATCH 00/27] ARM: MMU rework Sascha Hauer
` (13 preceding siblings ...)
2023-05-12 11:09 ` [PATCH 14/27] ARM: mmu: alloc 64k for early page tables Sascha Hauer
@ 2023-05-12 11:09 ` Sascha Hauer
2023-05-12 11:09 ` [PATCH 16/27] ARM: mmu64: " Sascha Hauer
` (11 subsequent siblings)
26 siblings, 0 replies; 41+ messages in thread
From: Sascha Hauer @ 2023-05-12 11:09 UTC (permalink / raw)
To: Barebox List
This is a preparation for using two level page tables in the PBL.
To do that we need a way to allocate page tables in PBL. As malloc
is not available in PBL, implement a function to allocate a page table
from the area we also place the TTB.
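The PBL variant of alloc_pte() introduced here is a simple bump allocator
over the fixed early page-table area: a static index advances through
PTE_SIZE chunks and returns NULL once the area is exhausted. A standalone
model of the scheme, with the pool as a plain array standing in for the TTB
area; sizes mirror the patch's ARM_EARLY_PAGETABLE_SIZE and PTE_SIZE, and as
in the patch the index starts at 3 and is pre-incremented, so the first
table is handed out at slot 4:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define POOL_SIZE (64 * 1024)               /* ARM_EARLY_PAGETABLE_SIZE */
#define PTE_SIZE  (256 * sizeof(uint32_t))  /* 256 entries covering one 1 MiB section */

static uint8_t pool[POOL_SIZE];  /* stands in for the memory at the TTB */
static unsigned int idx = 3;     /* first slots are reserved, as in the patch */

/* Hand out the next PTE_SIZE chunk; NULL when the area is used up */
static void *alloc_pte(void)
{
	idx++;

	if (idx * PTE_SIZE >= POOL_SIZE)
		return NULL;

	return pool + idx * PTE_SIZE;
}
```

There is no free(): early page tables live for the whole PBL lifetime, which
is what makes this trivial allocator sufficient.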
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu_32.c | 24 ++++++++++++++++++++++--
1 file changed, 22 insertions(+), 2 deletions(-)
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 4050d96846..a82382ad1e 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -76,6 +76,27 @@ static bool pgd_type_table(u32 pgd)
return (pgd & PMD_TYPE_MASK) == PMD_TYPE_TABLE;
}
+#define PTE_SIZE (PTRS_PER_PTE * sizeof(u32))
+
+#ifdef __PBL__
+static uint32_t *alloc_pte(void)
+{
+ static unsigned int idx = 3;
+
+ idx++;
+
+ if (idx * PTE_SIZE >= ARM_EARLY_PAGETABLE_SIZE)
+ return NULL;
+
+ return (void *)ttb + idx * PTE_SIZE;
+}
+#else
+static uint32_t *alloc_pte(void)
+{
+ return xmemalign(PTE_SIZE, PTE_SIZE);
+}
+#endif
+
static u32 *find_pte(unsigned long adr)
{
u32 *table;
@@ -125,8 +146,7 @@ static u32 *arm_create_pte(unsigned long virt, uint32_t flags)
virt = ALIGN_DOWN(virt, PGDIR_SIZE);
- table = xmemalign(PTRS_PER_PTE * sizeof(u32),
- PTRS_PER_PTE * sizeof(u32));
+ table = alloc_pte();
if (!ttb)
arm_mmu_not_initialized_error();
--
2.39.2
* [PATCH 16/27] ARM: mmu64: create alloc_pte()
2023-05-12 11:09 [PATCH 00/27] ARM: MMU rework Sascha Hauer
` (14 preceding siblings ...)
2023-05-12 11:09 ` [PATCH 15/27] ARM: mmu32: create alloc_pte() Sascha Hauer
@ 2023-05-12 11:09 ` Sascha Hauer
2023-05-12 11:09 ` [PATCH 17/27] ARM: mmu: drop ttb argument Sascha Hauer
` (10 subsequent siblings)
26 siblings, 0 replies; 41+ messages in thread
From: Sascha Hauer @ 2023-05-12 11:09 UTC (permalink / raw)
To: Barebox List
This is a preparation for using two level page tables in the PBL.
To do that we need a way to allocate page tables in PBL. As malloc
is not available in PBL, implement a function to allocate a page table
from the area we also place the TTB.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu_64.c | 20 +++++++++++++++++---
1 file changed, 17 insertions(+), 3 deletions(-)
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index 55ada960c5..3cc5b14a46 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -32,7 +32,20 @@ static void set_table(uint64_t *pt, uint64_t *table_addr)
*pt = val;
}
-static uint64_t *create_table(void)
+#ifdef __PBL__
+static uint64_t *alloc_pte(void)
+{
+ static unsigned int idx;
+
+ idx++;
+
+ if (idx * GRANULE_SIZE >= ARM_EARLY_PAGETABLE_SIZE)
+ return NULL;
+
+ return (void *)ttb + idx * GRANULE_SIZE;
+}
+#else
+static uint64_t *alloc_pte(void)
{
uint64_t *new_table = xmemalign(GRANULE_SIZE, GRANULE_SIZE);
@@ -41,6 +54,7 @@ static uint64_t *create_table(void)
return new_table;
}
+#endif
static __maybe_unused uint64_t *find_pte(uint64_t addr)
{
@@ -81,7 +95,7 @@ static void split_block(uint64_t *pte, int level)
/* level describes the parent level, we need the child ones */
levelshift = level2shift(level + 1);
- new_table = create_table();
+ new_table = alloc_pte();
for (i = 0; i < MAX_PTE_ENTRIES; i++) {
new_table[i] = old_pte | (i << levelshift);
@@ -183,7 +197,7 @@ void __mmu_init(bool mmu_on)
if (mmu_on)
mmu_disable();
- ttb = create_table();
+ ttb = alloc_pte();
el = current_el();
set_ttbr_tcr_mair(el, (uint64_t)ttb, calc_tcr(el, BITS_PER_VA),
MEMORY_ATTRIBUTES);
--
2.39.2
* [PATCH 17/27] ARM: mmu: drop ttb argument
2023-05-12 11:09 [PATCH 00/27] ARM: MMU rework Sascha Hauer
` (15 preceding siblings ...)
2023-05-12 11:09 ` [PATCH 16/27] ARM: mmu64: " Sascha Hauer
@ 2023-05-12 11:09 ` Sascha Hauer
2023-05-12 11:09 ` [PATCH 18/27] ARM: mmu: always do MMU initialization early when MMU is enabled Sascha Hauer
` (9 subsequent siblings)
26 siblings, 0 replies; 41+ messages in thread
From: Sascha Hauer @ 2023-05-12 11:09 UTC (permalink / raw)
To: Barebox List
There is no need to pass the ttb to the MMU code; the MMU code can
call arm_mem_ttb() itself to get the desired base address.
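arm_mem_ttb() derives the table base purely from the memory bounds, which is
why the callers no longer need to compute and pass it. A standalone model of
the address arithmetic — in barebox the stack reservation below endmem is
accounted for first via arm_mem_stack(); that step is folded into the
endmem argument here for brevity:

```c
#include <assert.h>

#define ARM_EARLY_PAGETABLE_SIZE (64UL * 1024)
#define ALIGN_DOWN(x, a) ((x) & ~((a) - 1))

/* Place the 64 KiB early page-table area at the aligned top of RAM */
static unsigned long arm_mem_ttb(unsigned long endmem)
{
	endmem = ALIGN_DOWN(endmem, ARM_EARLY_PAGETABLE_SIZE) - ARM_EARLY_PAGETABLE_SIZE;

	return endmem;
}
```

Because the result depends only on the memory bounds, PBL and barebox proper
arrive at the same location independently.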
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu_32.c | 9 +++++----
arch/arm/cpu/mmu_64.c | 8 +++++---
arch/arm/cpu/start.c | 11 +++--------
arch/arm/cpu/uncompress.c | 7 ++-----
arch/arm/include/asm/mmu.h | 3 +--
5 files changed, 16 insertions(+), 22 deletions(-)
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index a82382ad1e..bef4a01670 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -533,10 +533,11 @@ static inline void map_region(unsigned long start, unsigned long size,
create_sections(ttb, start, start + size - 1, flags);
}
-void mmu_early_enable(unsigned long membase, unsigned long memsize,
- unsigned long _ttb)
+void mmu_early_enable(unsigned long membase, unsigned long memsize)
{
- ttb = (uint32_t *)_ttb;
+ ttb = (uint32_t *)arm_mem_ttb(membase, membase + memsize);
+
+ pr_debug("enabling MMU, ttb @ 0x%p\n", ttb);
set_ttbr(ttb);
@@ -566,7 +567,7 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize,
map_region((unsigned long)_stext, _etext - _stext, PMD_SECT_DEF_UNCACHED);
/* maps main memory as cachable */
- map_region(membase, memsize, PMD_SECT_DEF_CACHED);
+ map_region(membase, memsize - OPTEE_SIZE, PMD_SECT_DEF_CACHED);
__mmu_cache_on();
}
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index 3cc5b14a46..4b75be621d 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -292,10 +292,12 @@ static void early_create_sections(void *ttb, uint64_t virt, uint64_t phys,
#define EARLY_BITS_PER_VA 39
-void mmu_early_enable(unsigned long membase, unsigned long memsize,
- unsigned long ttb)
+void mmu_early_enable(unsigned long membase, unsigned long memsize)
{
int el;
+ unsigned long ttb = arm_mem_ttb(membase, membase + memsize);
+
+ pr_debug("enabling MMU, ttb @ 0x%08lx\n", ttb);
/*
* For the early code we only create level 1 pagetables which only
@@ -311,7 +313,7 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize,
set_ttbr_tcr_mair(el, ttb, calc_tcr(el, EARLY_BITS_PER_VA), MEMORY_ATTRIBUTES);
early_create_sections((void *)ttb, 0, 0, 1UL << (EARLY_BITS_PER_VA - 1),
attrs_uncached_mem());
- early_create_sections((void *)ttb, membase, membase, memsize, CACHED_MEM);
+ early_create_sections((void *)ttb, membase, membase, memsize - OPTEE_SIZE, CACHED_MEM);
tlb_invalidate();
isb();
set_cr(get_cr() | CR_M);
diff --git a/arch/arm/cpu/start.c b/arch/arm/cpu/start.c
index 9d788eba2b..0b08af0176 100644
--- a/arch/arm/cpu/start.c
+++ b/arch/arm/cpu/start.c
@@ -216,14 +216,9 @@ __noreturn __no_sanitize_address void barebox_non_pbl_start(unsigned long membas
mem_malloc_init((void *)malloc_start, (void *)malloc_end - 1);
- if (IS_ENABLED(CONFIG_MMU_EARLY)) {
- unsigned long ttb = arm_mem_ttb(membase, endmem);
-
- if (!IS_ENABLED(CONFIG_PBL_IMAGE)) {
- pr_debug("enabling MMU, ttb @ 0x%08lx\n", ttb);
- arm_early_mmu_cache_invalidate();
- mmu_early_enable(membase, memsize - OPTEE_SIZE, ttb);
- }
+ if (IS_ENABLED(CONFIG_MMU_EARLY) && !IS_ENABLED(CONFIG_PBL_IMAGE)) {
+ arm_early_mmu_cache_invalidate();
+ mmu_early_enable(membase, memsize);
}
if (IS_ENABLED(CONFIG_BOOTM_OPTEE))
diff --git a/arch/arm/cpu/uncompress.c b/arch/arm/cpu/uncompress.c
index 65de87f109..7c85f5a1fe 100644
--- a/arch/arm/cpu/uncompress.c
+++ b/arch/arm/cpu/uncompress.c
@@ -81,11 +81,8 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
pr_debug("memory at 0x%08lx, size 0x%08lx\n", membase, memsize);
- if (IS_ENABLED(CONFIG_MMU_EARLY)) {
- unsigned long ttb = arm_mem_ttb(membase, endmem);
- pr_debug("enabling MMU, ttb @ 0x%08lx\n", ttb);
- mmu_early_enable(membase, memsize - OPTEE_SIZE, ttb);
- }
+ if (IS_ENABLED(CONFIG_MMU_EARLY))
+ mmu_early_enable(membase, memsize);
free_mem_ptr = arm_mem_early_malloc(membase, endmem);
free_mem_end_ptr = arm_mem_early_malloc_end(membase, endmem);
diff --git a/arch/arm/include/asm/mmu.h b/arch/arm/include/asm/mmu.h
index fd8e93f7a3..9d2fdcf365 100644
--- a/arch/arm/include/asm/mmu.h
+++ b/arch/arm/include/asm/mmu.h
@@ -56,8 +56,7 @@ void __dma_clean_range(unsigned long, unsigned long);
void __dma_flush_range(unsigned long, unsigned long);
void __dma_inv_range(unsigned long, unsigned long);
-void mmu_early_enable(unsigned long membase, unsigned long memsize,
- unsigned long ttb);
+void mmu_early_enable(unsigned long membase, unsigned long memsize);
void mmu_early_disable(void);
#endif /* __ASM_MMU_H */
--
2.39.2
* [PATCH 18/27] ARM: mmu: always do MMU initialization early when MMU is enabled
2023-05-12 11:09 [PATCH 00/27] ARM: MMU rework Sascha Hauer
` (16 preceding siblings ...)
2023-05-12 11:09 ` [PATCH 17/27] ARM: mmu: drop ttb argument Sascha Hauer
@ 2023-05-12 11:09 ` Sascha Hauer
2023-05-12 11:10 ` [PATCH 19/27] ARM: mmu32: Assume MMU is on Sascha Hauer
` (8 subsequent siblings)
26 siblings, 0 replies; 41+ messages in thread
From: Sascha Hauer @ 2023-05-12 11:09 UTC (permalink / raw)
To: Barebox List
Drop CONFIG_MMU_EARLY and make early MMU initialization the default.
Doing so allows for some simplifications in the MMU code, as there are
fewer code paths to care and think about.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/start.c | 2 +-
arch/arm/cpu/uncompress.c | 2 +-
common/Kconfig | 9 ---------
3 files changed, 2 insertions(+), 11 deletions(-)
diff --git a/arch/arm/cpu/start.c b/arch/arm/cpu/start.c
index 0b08af0176..4ce4579903 100644
--- a/arch/arm/cpu/start.c
+++ b/arch/arm/cpu/start.c
@@ -216,7 +216,7 @@ __noreturn __no_sanitize_address void barebox_non_pbl_start(unsigned long membas
mem_malloc_init((void *)malloc_start, (void *)malloc_end - 1);
- if (IS_ENABLED(CONFIG_MMU_EARLY) && !IS_ENABLED(CONFIG_PBL_IMAGE)) {
+ if (IS_ENABLED(CONFIG_MMU) && !IS_ENABLED(CONFIG_PBL_IMAGE)) {
arm_early_mmu_cache_invalidate();
mmu_early_enable(membase, memsize);
}
diff --git a/arch/arm/cpu/uncompress.c b/arch/arm/cpu/uncompress.c
index 7c85f5a1fe..0bfce8853d 100644
--- a/arch/arm/cpu/uncompress.c
+++ b/arch/arm/cpu/uncompress.c
@@ -81,7 +81,7 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
pr_debug("memory at 0x%08lx, size 0x%08lx\n", membase, memsize);
- if (IS_ENABLED(CONFIG_MMU_EARLY))
+ if (IS_ENABLED(CONFIG_MMU))
mmu_early_enable(membase, memsize);
free_mem_ptr = arm_mem_early_malloc(membase, endmem);
diff --git a/common/Kconfig b/common/Kconfig
index ac3df75acb..c6008f125b 100644
--- a/common/Kconfig
+++ b/common/Kconfig
@@ -185,15 +185,6 @@ config MMU
to enable the data cache which depends on the MMU. See Documentation/mmu.txt
for further information.
-config MMU_EARLY
- bool "Enable MMU early"
- depends on ARM
- depends on MMU
- default y
- help
- This enables the MMU during early startup. This speeds up things during startup
- of barebox, but may lead to harder to debug code. If unsure say yes here.
-
config HAVE_CONFIGURABLE_TEXT_BASE
bool
--
2.39.2
* [PATCH 19/27] ARM: mmu32: Assume MMU is on
2023-05-12 11:09 [PATCH 00/27] ARM: MMU rework Sascha Hauer
` (17 preceding siblings ...)
2023-05-12 11:09 ` [PATCH 18/27] ARM: mmu: always do MMU initialization early when MMU is enabled Sascha Hauer
@ 2023-05-12 11:10 ` Sascha Hauer
2023-05-12 11:10 ` [PATCH 20/27] ARM: mmu32: Fix pmd_flags_to_pte() for ARMv4/5/6 Sascha Hauer
` (7 subsequent siblings)
26 siblings, 0 replies; 41+ messages in thread
From: Sascha Hauer @ 2023-05-12 11:10 UTC (permalink / raw)
To: Barebox List
As we now always enable the MMU during early initialization, we can
safely assume that the MMU is already enabled in __mmu_init() and drop
the code path that enables the MMU.
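With the MMU already on, __mmu_init() recovers the table base by reading it
back from TTBR0 and masking off the low bits, as the patch does with
`& ~0x3fff`. A standalone check of that masking — the sample TTBR values
are made up for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* On AArch32 the 16 KiB-aligned translation table base lives in
 * TTBR0[31:14]; bits [13:0] hold attribute/unpredictable values and
 * must be cleared before using the register content as a pointer. */
static uint32_t ttbr_to_ttb(uint32_t ttbr)
{
	return ttbr & ~0x3fffu;
}
```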
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu_32.c | 41 ++++++++++-------------------------------
1 file changed, 10 insertions(+), 31 deletions(-)
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index bef4a01670..63e1acdcfa 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -457,38 +457,19 @@ void __mmu_init(bool mmu_on)
pte_flags_uncached = PTE_FLAGS_UNCACHED_V4;
}
- if (mmu_on) {
+ /* Clear unpredictable bits [13:0] */
+ ttb = (uint32_t *)(get_ttbr() & ~0x3fff);
+
+ if (!request_sdram_region("ttb", (unsigned long)ttb, SZ_16K))
/*
- * Early MMU code has already enabled the MMU. We assume a
- * flat 1:1 section mapping in this case.
+ * This can mean that:
+ * - the early MMU code has put the ttb into a place
+ * which we don't have inside our available memory
+ * - Somebody else has occupied the ttb region which means
+ * the ttb will get corrupted.
*/
- /* Clear unpredictable bits [13:0] */
- ttb = (uint32_t *)(get_ttbr() & ~0x3fff);
-
- if (!request_sdram_region("ttb", (unsigned long)ttb, SZ_16K))
- /*
- * This can mean that:
- * - the early MMU code has put the ttb into a place
- * which we don't have inside our available memory
- * - Somebody else has occupied the ttb region which means
- * the ttb will get corrupted.
- */
- pr_crit("Critical Error: Can't request SDRAM region for ttb at %p\n",
+ pr_crit("Critical Error: Can't request SDRAM region for ttb at %p\n",
ttb);
- } else {
- ttb = xmemalign(ARM_TTB_SIZE, ARM_TTB_SIZE);
-
- set_ttbr(ttb);
-
- /* For the XN bit to take effect, we can't be using DOMAIN_MANAGER. */
- if (cpu_architecture() >= CPU_ARCH_ARMv7)
- set_domain(DOMAIN_CLIENT);
- else
- set_domain(DOMAIN_MANAGER);
-
- create_flat_mapping(ttb);
- __mmu_cache_flush();
- }
pr_debug("ttb: 0x%p\n", ttb);
@@ -499,8 +480,6 @@ void __mmu_init(bool mmu_on)
PMD_SECT_DEF_CACHED);
__mmu_cache_flush();
}
-
- __mmu_cache_on();
}
/*
--
2.39.2
* [PATCH 20/27] ARM: mmu32: Fix pmd_flags_to_pte() for ARMv4/5/6
2023-05-12 11:09 [PATCH 00/27] ARM: MMU rework Sascha Hauer
` (18 preceding siblings ...)
2023-05-12 11:10 ` [PATCH 19/27] ARM: mmu32: Assume MMU is on Sascha Hauer
@ 2023-05-12 11:10 ` Sascha Hauer
2023-05-12 11:10 ` [PATCH 21/27] ARM: mmu32: Add pte_flags_to_pmd() Sascha Hauer
` (6 subsequent siblings)
26 siblings, 0 replies; 41+ messages in thread
From: Sascha Hauer @ 2023-05-12 11:10 UTC (permalink / raw)
To: Barebox List
pmd_flags_to_pte() assumed the ARMv7 page table format. On ARMv4/5/6 this
has the effect that random bit values end up in the access permission bits.
It happens to work because the domain is configured as manager in the DACR,
so the access permissions are ignored by the MMU.
Nevertheless, fix this and take the CPU architecture into account when
translating the bits. Don't bother to translate the access permission
bits, though; just hardcode them as PTE_SMALL_AP_UNO_SRW.
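As a sanity check, the ARMv7 branch of this translation can be exercised host-side. The bit positions below follow the hunk in this patch; the macro values are hypothetical stand-ins modeled on the usual ARM short-descriptor hardware definitions, not copied from barebox headers:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for the short-descriptor bit definitions */
#define PMD_SECT_BUFFERABLE  (1u << 2)
#define PMD_SECT_CACHEABLE   (1u << 3)
#define PMD_SECT_XN          (1u << 4)
#define PMD_SECT_nG          (1u << 17)

#define PTE_BUFFERABLE       (1u << 2)
#define PTE_CACHEABLE        (1u << 3)
#define PTE_EXT_XN           (1u << 0)
#define PTE_EXT_NG           (1u << 11)
#define PTE_EXT_TEX(x)       ((x) << 6)
#define PTE_SMALL_AP_UNO_SRW (0x55u << 4)

/* Section descriptor flags -> small-page descriptor flags, ARMv7 path */
static uint32_t pmd_flags_to_pte_v7(uint32_t pmd)
{
	uint32_t pte = 0;

	if (pmd & PMD_SECT_BUFFERABLE)
		pte |= PTE_BUFFERABLE;
	if (pmd & PMD_SECT_CACHEABLE)
		pte |= PTE_CACHEABLE;
	if (pmd & PMD_SECT_nG)
		pte |= PTE_EXT_NG;
	if (pmd & PMD_SECT_XN)
		pte |= PTE_EXT_XN;

	pte |= PTE_EXT_TEX((pmd >> 12) & 7); /* TEX[2:0]: bits 14:12 -> 8:6 */
	pte |= ((pmd >> 10) & 0x3) << 4;     /* AP[1:0]:  bits 11:10 -> 5:4 */
	pte |= ((pmd >> 15) & 0x1) << 9;     /* AP[2]:    bit  15    -> 9   */

	return pte;
}

/* Pre-ARMv7 path: only B/C carry over, the AP bits are hardcoded */
static uint32_t pmd_flags_to_pte_v4(uint32_t pmd)
{
	uint32_t pte = PTE_SMALL_AP_UNO_SRW;

	if (pmd & PMD_SECT_BUFFERABLE)
		pte |= PTE_BUFFERABLE;
	if (pmd & PMD_SECT_CACHEABLE)
		pte |= PTE_CACHEABLE;

	return pte;
}
```

With these stand-in values, a cacheable XN section with TEX=1 translates to `PTE_CACHEABLE | PTE_EXT_XN | PTE_EXT_TEX(1)`, while the pre-v7 path always carries the hardcoded AP bits.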
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu_32.c | 27 ++++++++++++++++-----------
1 file changed, 16 insertions(+), 11 deletions(-)
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 63e1acdcfa..3939f60758 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -173,17 +173,22 @@ static u32 pmd_flags_to_pte(u32 pmd)
pte |= PTE_BUFFERABLE;
if (pmd & PMD_SECT_CACHEABLE)
pte |= PTE_CACHEABLE;
- if (pmd & PMD_SECT_nG)
- pte |= PTE_EXT_NG;
- if (pmd & PMD_SECT_XN)
- pte |= PTE_EXT_XN;
-
- /* TEX[2:0] */
- pte |= PTE_EXT_TEX((pmd >> 12) & 7);
- /* AP[1:0] */
- pte |= ((pmd >> 10) & 0x3) << 4;
- /* AP[2] */
- pte |= ((pmd >> 15) & 0x1) << 9;
+
+ if (cpu_architecture() >= CPU_ARCH_ARMv7) {
+ if (pmd & PMD_SECT_nG)
+ pte |= PTE_EXT_NG;
+ if (pmd & PMD_SECT_XN)
+ pte |= PTE_EXT_XN;
+
+ /* TEX[2:0] */
+ pte |= PTE_EXT_TEX((pmd >> 12) & 7);
+ /* AP[1:0] */
+ pte |= ((pmd >> 10) & 0x3) << 4;
+ /* AP[2] */
+ pte |= ((pmd >> 15) & 0x1) << 9;
+ } else {
+ pte |= PTE_SMALL_AP_UNO_SRW;
+ }
return pte;
}
--
2.39.2
* [PATCH 21/27] ARM: mmu32: Add pte_flags_to_pmd()
2023-05-12 11:09 [PATCH 00/27] ARM: MMU rework Sascha Hauer
` (19 preceding siblings ...)
2023-05-12 11:10 ` [PATCH 20/27] ARM: mmu32: Fix pmd_flags_to_pte() for ARMv4/5/6 Sascha Hauer
@ 2023-05-12 11:10 ` Sascha Hauer
2023-05-12 11:10 ` [PATCH 22/27] ARM: mmu32: add get_pte_flags, get_pmd_flags Sascha Hauer
` (5 subsequent siblings)
26 siblings, 0 replies; 41+ messages in thread
From: Sascha Hauer @ 2023-05-12 11:10 UTC (permalink / raw)
To: Barebox List
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu_32.c | 35 +++++++++++++++++++++++++++++------
1 file changed, 29 insertions(+), 6 deletions(-)
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 3939f60758..fd1f429398 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -193,30 +193,53 @@ static u32 pmd_flags_to_pte(u32 pmd)
return pte;
}
+static u32 pte_flags_to_pmd(u32 pte)
+{
+ u32 pmd = 0;
+
+ if (pte & PTE_BUFFERABLE)
+ pmd |= PMD_SECT_BUFFERABLE;
+ if (pte & PTE_CACHEABLE)
+ pmd |= PMD_SECT_CACHEABLE;
+
+ if (cpu_architecture() >= CPU_ARCH_ARMv7) {
+ if (pte & PTE_EXT_NG)
+ pmd |= PMD_SECT_nG;
+ if (pte & PTE_EXT_XN)
+ pmd |= PMD_SECT_XN;
+
+ /* TEX[2:0] */
+ pmd |= ((pte >> 6) & 7) << 12;
+ /* AP[1:0] */
+ pmd |= ((pte >> 4) & 0x3) << 10;
+ /* AP[2] */
+ pmd |= ((pte >> 9) & 0x1) << 15;
+ } else {
+ pmd |= PMD_SECT_AP_WRITE | PMD_SECT_AP_READ;
+ }
+
+ return pmd;
+}
+
int arch_remap_range(void *start, size_t size, unsigned flags)
{
u32 addr = (u32)start;
u32 pte_flags;
- u32 pgd_flags;
BUG_ON(!IS_ALIGNED(addr, PAGE_SIZE));
switch (flags) {
case MAP_CACHED:
pte_flags = pte_flags_cached;
- pgd_flags = PMD_SECT_DEF_CACHED;
break;
case MAP_UNCACHED:
pte_flags = pte_flags_uncached;
- pgd_flags = pgd_flags_uncached;
break;
case MAP_FAULT:
pte_flags = 0x0;
- pgd_flags = 0x0;
break;
case ARCH_MAP_WRITECOMBINE:
pte_flags = pte_flags_wc;
- pgd_flags = pgd_flags_wc;
break;
default:
return -EINVAL;
@@ -234,7 +257,7 @@ int arch_remap_range(void *start, size_t size, unsigned flags)
* replace it with a section
*/
chunk = PGDIR_SIZE;
- *pgd = addr | pgd_flags;
+ *pgd = addr | pte_flags_to_pmd(pte_flags) | PMD_TYPE_SECT;
dma_flush_range(pgd, sizeof(*pgd));
} else {
unsigned int num_ptes;
--
2.39.2
* [PATCH 22/27] ARM: mmu32: add get_pte_flags, get_pmd_flags
2023-05-12 11:09 [PATCH 00/27] ARM: MMU rework Sascha Hauer
` (20 preceding siblings ...)
2023-05-12 11:10 ` [PATCH 21/27] ARM: mmu32: Add pte_flags_to_pmd() Sascha Hauer
@ 2023-05-12 11:10 ` Sascha Hauer
2023-05-12 11:10 ` [PATCH 23/27] ARM: mmu32: move functions into c file Sascha Hauer
` (4 subsequent siblings)
26 siblings, 0 replies; 41+ messages in thread
From: Sascha Hauer @ 2023-05-12 11:10 UTC (permalink / raw)
To: Barebox List
The MMU code has several variables containing the pte/pmd flag values for
the different mapping types. These variables only hold the correct values
once they have been initialized, which makes the code a bit hard to follow
when it is used in both PBL and barebox proper.
Instead of using variables, calculate the values when they are needed.
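The pattern can be sketched as a pure lookup function; the flag values here are hypothetical stand-ins for `PTE_FLAGS_CACHED_V7` and friends, not the real barebox encodings:

```c
#include <assert.h>
#include <stdint.h>

enum map_type { MAP_CACHED, MAP_UNCACHED, MAP_FAULT };

/* Hypothetical stand-in encodings */
#define PTE_FLAGS_CACHED_V7   0x04eu
#define PTE_FLAGS_UNCACHED_V7 0x482u

/*
 * A pure lookup: correct on the very first call, with no dependency on
 * an init hook having filled in global variables beforehand. That is
 * what makes it safe to share between PBL and barebox proper.
 */
static uint32_t get_pte_flags(enum map_type t)
{
	switch (t) {
	case MAP_CACHED:
		return PTE_FLAGS_CACHED_V7;
	case MAP_UNCACHED:
		return PTE_FLAGS_UNCACHED_V7;
	case MAP_FAULT:
	default:
		return 0;
	}
}
```

The small per-call cost of the switch is irrelevant here; the win is that there is no hidden ordering requirement between initialization and first use.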
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu_32.c | 82 +++++++++++++++++++++----------------------
1 file changed, 41 insertions(+), 41 deletions(-)
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index fd1f429398..01c168e8c8 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -63,11 +63,6 @@ static inline void tlb_invalidate(void)
* PTE flags to set cached and uncached areas.
* This will be determined at runtime.
*/
-static uint32_t pte_flags_cached;
-static uint32_t pte_flags_wc;
-static uint32_t pte_flags_uncached;
-static uint32_t pgd_flags_wc;
-static uint32_t pgd_flags_uncached;
#define PTE_MASK ((1 << 12) - 1)
@@ -221,29 +216,48 @@ static u32 pte_flags_to_pmd(u32 pte)
return pmd;
}
-int arch_remap_range(void *start, size_t size, unsigned flags)
+static uint32_t get_pte_flags(int map_type)
+{
+ if (cpu_architecture() >= CPU_ARCH_ARMv7) {
+ switch (map_type) {
+ case MAP_CACHED:
+ return PTE_FLAGS_CACHED_V7;
+ case MAP_UNCACHED:
+ return PTE_FLAGS_UNCACHED_V7;
+ case ARCH_MAP_WRITECOMBINE:
+ return PTE_FLAGS_WC_V7;
+ case MAP_FAULT:
+ default:
+ return 0x0;
+ }
+ } else {
+ switch (map_type) {
+ case MAP_CACHED:
+ return PTE_FLAGS_CACHED_V4;
+ case MAP_UNCACHED:
+ case ARCH_MAP_WRITECOMBINE:
+ return PTE_FLAGS_UNCACHED_V4;
+ case MAP_FAULT:
+ default:
+ return 0x0;
+ }
+ }
+}
+
+static uint32_t get_pmd_flags(int map_type)
+{
+ return pte_flags_to_pmd(get_pte_flags(map_type));
+}
+
+int arch_remap_range(void *start, size_t size, unsigned map_type)
{
u32 addr = (u32)start;
- u32 pte_flags;
+ u32 pte_flags, pmd_flags;
BUG_ON(!IS_ALIGNED(addr, PAGE_SIZE));
- switch (flags) {
- case MAP_CACHED:
- pte_flags = pte_flags_cached;
- break;
- case MAP_UNCACHED:
- pte_flags = pte_flags_uncached;
- break;
- case MAP_FAULT:
- pte_flags = 0x0;
- break;
- case ARCH_MAP_WRITECOMBINE:
- pte_flags = pte_flags_wc;
- break;
- default:
- return -EINVAL;
- }
+ pte_flags = get_pte_flags(map_type);
+ pmd_flags = pte_flags_to_pmd(pte_flags);
while (size) {
const bool pgdir_size_aligned = IS_ALIGNED(addr, PGDIR_SIZE);
@@ -257,7 +271,7 @@ int arch_remap_range(void *start, size_t size, unsigned flags)
* replace it with a section
*/
chunk = PGDIR_SIZE;
- *pgd = addr | pte_flags_to_pmd(pte_flags) | PMD_TYPE_SECT;
+ *pgd = addr | pmd_flags | PMD_TYPE_SECT;
dma_flush_range(pgd, sizeof(*pgd));
} else {
unsigned int num_ptes;
@@ -315,7 +329,7 @@ void *map_io_sections(unsigned long phys, void *_start, size_t size)
unsigned long start = (unsigned long)_start, sec;
for (sec = start; sec < start + size; sec += PGDIR_SIZE, phys += PGDIR_SIZE)
- ttb[pgd_index(sec)] = phys | pgd_flags_uncached;
+ ttb[pgd_index(sec)] = phys | get_pmd_flags(MAP_UNCACHED);
dma_flush_range(ttb, 0x4000);
tlb_invalidate();
@@ -356,9 +370,9 @@ static void create_vector_table(unsigned long adr)
vectors = xmemalign(PAGE_SIZE, PAGE_SIZE);
pr_debug("Creating vector table, virt = 0x%p, phys = 0x%08lx\n",
vectors, adr);
- arm_create_pte(adr, pte_flags_uncached);
+ arm_create_pte(adr, get_pte_flags(MAP_UNCACHED));
pte = find_pte(adr);
- *pte = (u32)vectors | PTE_TYPE_SMALL | pte_flags_cached;
+ *pte = (u32)vectors | PTE_TYPE_SMALL | get_pte_flags(MAP_CACHED);
}
arm_fixup_vectors();
@@ -471,20 +485,6 @@ void __mmu_init(bool mmu_on)
{
struct memory_bank *bank;
- if (cpu_architecture() >= CPU_ARCH_ARMv7) {
- pte_flags_cached = PTE_FLAGS_CACHED_V7;
- pte_flags_wc = PTE_FLAGS_WC_V7;
- pgd_flags_wc = PGD_FLAGS_WC_V7;
- pgd_flags_uncached = PGD_FLAGS_UNCACHED_V7;
- pte_flags_uncached = PTE_FLAGS_UNCACHED_V7;
- } else {
- pte_flags_cached = PTE_FLAGS_CACHED_V4;
- pte_flags_wc = PTE_FLAGS_UNCACHED_V4;
- pgd_flags_wc = PMD_SECT_DEF_UNCACHED;
- pgd_flags_uncached = PMD_SECT_DEF_UNCACHED;
- pte_flags_uncached = PTE_FLAGS_UNCACHED_V4;
- }
-
/* Clear unpredictable bits [13:0] */
ttb = (uint32_t *)(get_ttbr() & ~0x3fff);
--
2.39.2
* [PATCH 23/27] ARM: mmu32: move functions into c file
2023-05-12 11:09 [PATCH 00/27] ARM: MMU rework Sascha Hauer
` (21 preceding siblings ...)
2023-05-12 11:10 ` [PATCH 22/27] ARM: mmu32: add get_pte_flags, get_pmd_flags Sascha Hauer
@ 2023-05-12 11:10 ` Sascha Hauer
2023-05-12 11:10 ` [PATCH 24/27] ARM: mmu32: read TTB value from register Sascha Hauer
` (3 subsequent siblings)
26 siblings, 0 replies; 41+ messages in thread
From: Sascha Hauer @ 2023-05-12 11:10 UTC (permalink / raw)
To: Barebox List
Move create_flat_mapping() and create_sections() into the C file
rather than having them as static inline functions in the header file.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu_32.c | 19 +++++++++++++++++++
arch/arm/cpu/mmu_32.h | 20 --------------------
2 files changed, 19 insertions(+), 20 deletions(-)
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 01c168e8c8..2b2013a8b5 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -324,6 +324,25 @@ int arch_remap_range(void *start, size_t size, unsigned map_type)
return 0;
}
+static void create_sections(uint32_t *ttb, unsigned long first,
+ unsigned long last, unsigned int flags)
+{
+ unsigned long ttb_start = pgd_index(first);
+ unsigned long ttb_end = pgd_index(last) + 1;
+ unsigned int i, addr = first;
+
+ for (i = ttb_start; i < ttb_end; i++) {
+ ttb[i] = addr | flags;
+ addr += PGDIR_SIZE;
+ }
+}
+
+static void create_flat_mapping(uint32_t *ttb)
+{
+ /* create a flat mapping using 1MiB sections */
+ create_sections(ttb, 0, 0xffffffff, attrs_uncached_mem());
+}
+
void *map_io_sections(unsigned long phys, void *_start, size_t size)
{
unsigned long start = (unsigned long)_start, sec;
diff --git a/arch/arm/cpu/mmu_32.h b/arch/arm/cpu/mmu_32.h
index 1499b70dd6..607d9e8608 100644
--- a/arch/arm/cpu/mmu_32.h
+++ b/arch/arm/cpu/mmu_32.h
@@ -56,20 +56,6 @@ static inline void set_domain(unsigned val)
asm volatile ("mcr p15,0,%0,c3,c0,0" : : "r"(val) /*:*/);
}
-static inline void
-create_sections(uint32_t *ttb, unsigned long first,
- unsigned long last, unsigned int flags)
-{
- unsigned long ttb_start = pgd_index(first);
- unsigned long ttb_end = pgd_index(last) + 1;
- unsigned int i, addr = first;
-
- for (i = ttb_start; i < ttb_end; i++) {
- ttb[i] = addr | flags;
- addr += PGDIR_SIZE;
- }
-}
-
#define PMD_SECT_DEF_UNCACHED (PMD_SECT_AP_WRITE | PMD_SECT_AP_READ | PMD_TYPE_SECT)
#define PMD_SECT_DEF_CACHED (PMD_SECT_WB | PMD_SECT_DEF_UNCACHED)
@@ -83,10 +69,4 @@ static inline unsigned long attrs_uncached_mem(void)
return flags;
}
-static inline void create_flat_mapping(uint32_t *ttb)
-{
- /* create a flat mapping using 1MiB sections */
- create_sections(ttb, 0, 0xffffffff, attrs_uncached_mem());
-}
-
#endif /* __ARM_MMU_H */
--
2.39.2
* [PATCH 24/27] ARM: mmu32: read TTB value from register
2023-05-12 11:09 [PATCH 00/27] ARM: MMU rework Sascha Hauer
` (22 preceding siblings ...)
2023-05-12 11:10 ` [PATCH 23/27] ARM: mmu32: move functions into c file Sascha Hauer
@ 2023-05-12 11:10 ` Sascha Hauer
2023-05-12 11:10 ` [PATCH 25/27] ARM: mmu32: Use pages for early MMU setup Sascha Hauer
` (2 subsequent siblings)
26 siblings, 0 replies; 41+ messages in thread
From: Sascha Hauer @ 2023-05-12 11:10 UTC (permalink / raw)
To: Barebox List
Instead of relying on a variable for the location of the TTB, which we
have to initialize in both PBL and barebox proper, just read the value
back from the hardware register.
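On AArch32 the low bits of TTBR0 carry table-walk attributes rather than address bits, so the base has to be masked before use. A host-runnable sketch with a mocked register read (on hardware, `get_ttbr()` is an MRC of cp15 c2, c0, 0; the register value below is hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Mocked register read; on hardware this would be
 *   asm volatile("mrc p15, 0, %0, c2, c0, 0" : "=r"(val));
 * Bits [13:0] hold walk attributes (or are unpredictable), not address.
 */
static uint32_t mock_ttbr0 = 0x80004000u | 0x6au; /* hypothetical value */

static uint32_t get_ttbr(void)
{
	return mock_ttbr0;
}

/* The 16KiB-aligned translation table base lives in bits [31:14] */
static uintptr_t get_ttb(void)
{
	return (uintptr_t)(get_ttbr() & ~0x3fffu);
}
```

Masking with `~0x3fff` recovers the 16KiB-aligned table base regardless of which attribute bits happen to be set.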
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu_32.c | 41 ++++++++++++++++++++---------------------
1 file changed, 20 insertions(+), 21 deletions(-)
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 2b2013a8b5..5ebceed89f 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -30,7 +30,11 @@
*/
#define ARM_TTB_SIZE (SZ_4G / SZ_1M * sizeof(u32))
-static uint32_t *ttb;
+static inline uint32_t *get_ttb(void)
+{
+ /* Clear unpredictable bits [13:0] */
+ return (uint32_t *)(get_ttbr() & ~0x3fff);
+}
/*
* Do it the simple way for now and invalidate the entire
@@ -83,7 +87,7 @@ static uint32_t *alloc_pte(void)
if (idx * PTE_SIZE >= ARM_EARLY_PAGETABLE_SIZE)
return NULL;
- return (void *)ttb + idx * PTE_SIZE;
+ return get_ttb() + idx * PTE_SIZE;
}
#else
static uint32_t *alloc_pte(void)
@@ -95,9 +99,7 @@ static uint32_t *alloc_pte(void)
static u32 *find_pte(unsigned long adr)
{
u32 *table;
-
- if (!ttb)
- arm_mmu_not_initialized_error();
+ uint32_t *ttb = get_ttb();
if (!pgd_type_table(ttb[pgd_index(adr)]))
return NULL;
@@ -136,6 +138,7 @@ void dma_inv_range(void *ptr, size_t size)
*/
static u32 *arm_create_pte(unsigned long virt, uint32_t flags)
{
+ uint32_t *ttb = get_ttb();
u32 *table;
int i, ttb_idx;
@@ -143,9 +146,6 @@ static u32 *arm_create_pte(unsigned long virt, uint32_t flags)
table = alloc_pte();
- if (!ttb)
- arm_mmu_not_initialized_error();
-
ttb_idx = pgd_index(virt);
for (i = 0; i < PTRS_PER_PTE; i++) {
@@ -253,6 +253,7 @@ int arch_remap_range(void *start, size_t size, unsigned map_type)
{
u32 addr = (u32)start;
u32 pte_flags, pmd_flags;
+ uint32_t *ttb = get_ttb();
BUG_ON(!IS_ALIGNED(addr, PAGE_SIZE));
@@ -324,9 +325,10 @@ int arch_remap_range(void *start, size_t size, unsigned map_type)
return 0;
}
-static void create_sections(uint32_t *ttb, unsigned long first,
- unsigned long last, unsigned int flags)
+static void create_sections(unsigned long first, unsigned long last,
+ unsigned int flags)
{
+ uint32_t *ttb = get_ttb();
unsigned long ttb_start = pgd_index(first);
unsigned long ttb_end = pgd_index(last) + 1;
unsigned int i, addr = first;
@@ -337,15 +339,16 @@ static void create_sections(uint32_t *ttb, unsigned long first,
}
}
-static void create_flat_mapping(uint32_t *ttb)
+static inline void create_flat_mapping(void)
{
/* create a flat mapping using 1MiB sections */
- create_sections(ttb, 0, 0xffffffff, attrs_uncached_mem());
+ create_sections(0, 0xffffffff, attrs_uncached_mem());
}
void *map_io_sections(unsigned long phys, void *_start, size_t size)
{
unsigned long start = (unsigned long)_start, sec;
+ uint32_t *ttb = get_ttb();
for (sec = start; sec < start + size; sec += PGDIR_SIZE, phys += PGDIR_SIZE)
ttb[pgd_index(sec)] = phys | get_pmd_flags(MAP_UNCACHED);
@@ -503,9 +506,7 @@ static void vectors_init(void)
void __mmu_init(bool mmu_on)
{
struct memory_bank *bank;
-
- /* Clear unpredictable bits [13:0] */
- ttb = (uint32_t *)(get_ttbr() & ~0x3fff);
+ uint32_t *ttb = get_ttb();
if (!request_sdram_region("ttb", (unsigned long)ttb, SZ_16K))
/*
@@ -523,7 +524,7 @@ void __mmu_init(bool mmu_on)
vectors_init();
for_each_memory_bank(bank) {
- create_sections(ttb, bank->start, bank->start + bank->size - 1,
+ create_sections(bank->start, bank->start + bank->size - 1,
PMD_SECT_DEF_CACHED);
__mmu_cache_flush();
}
@@ -547,8 +548,6 @@ void *dma_alloc_writecombine(size_t size, dma_addr_t *dma_handle)
return dma_alloc_map(size, dma_handle, ARCH_MAP_WRITECOMBINE);
}
-static uint32_t *ttb;
-
static inline void map_region(unsigned long start, unsigned long size,
uint64_t flags)
@@ -556,12 +555,12 @@ static inline void map_region(unsigned long start, unsigned long size,
start = ALIGN_DOWN(start, SZ_1M);
size = ALIGN(size, SZ_1M);
- create_sections(ttb, start, start + size - 1, flags);
+ create_sections(start, start + size - 1, flags);
}
void mmu_early_enable(unsigned long membase, unsigned long memsize)
{
- ttb = (uint32_t *)arm_mem_ttb(membase, membase + memsize);
+ uint32_t *ttb = (uint32_t *)arm_mem_ttb(membase, membase + memsize);
pr_debug("enabling MMU, ttb @ 0x%p\n", ttb);
@@ -577,7 +576,7 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize)
* This marks the whole address space as uncachable as well as
* unexecutable if possible
*/
- create_flat_mapping(ttb);
+ create_flat_mapping();
/*
* There can be SoCs that have a section shared between device memory
--
2.39.2
* [PATCH 25/27] ARM: mmu32: Use pages for early MMU setup
2023-05-12 11:09 [PATCH 00/27] ARM: MMU rework Sascha Hauer
` (23 preceding siblings ...)
2023-05-12 11:10 ` [PATCH 24/27] ARM: mmu32: read TTB value from register Sascha Hauer
@ 2023-05-12 11:10 ` Sascha Hauer
2023-05-12 11:10 ` [PATCH 26/27] ARM: mmu32: Skip reserved ranges during initialization Sascha Hauer
2023-05-12 11:10 ` [PATCH 27/27] ARM: mmu64: Use two level pagetables in early code Sascha Hauer
26 siblings, 0 replies; 41+ messages in thread
From: Sascha Hauer @ 2023-05-12 11:10 UTC (permalink / raw)
To: Barebox List
Up to now we use 1MiB sections to set up the page tables in PBL. There
are two places where this leads to problems. The first is OP-TEE: we
have to map the OP-TEE area with PTE_EXT_XN to prevent the instruction
prefetcher from speculating into that area, and with the current section
mapping we have to align OPTEE_SIZE to 1MiB boundaries. The second
problem is the SRAM the PBL might be running from. This SRAM has to be
mapped executable, but at the same time the surrounding areas should be
mapped non-executable, which is not always possible with 1MiB mapping
granularity.
We now have everything in place to use two level page tables from PBL,
so use arch_remap_range() for the problematic cases.
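The granularity cost is easy to quantify; the 1.5MiB OPTEE_SIZE below is a hypothetical example value chosen to not be a 1MiB multiple:

```c
#include <assert.h>
#include <stdint.h>

#define SZ_4K 0x00001000u
#define SZ_1M 0x00100000u

/* Round x up to the next multiple of a (a must be a power of two) */
static uint32_t align_up(uint32_t x, uint32_t a)
{
	return (x + a - 1) & ~(a - 1);
}

/* Hypothetical OP-TEE carve-out that is not a multiple of 1MiB */
#define OPTEE_SIZE (SZ_1M + SZ_1M / 2) /* 1.5 MiB */
```

With section mapping, the carve-out has to be rounded up to 2MiB, wasting half a megabyte (or forcing OPTEE_SIZE itself to be 1MiB-aligned); with 4KiB pages it maps exactly.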
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu_32.c | 31 +++++++------------------------
1 file changed, 7 insertions(+), 24 deletions(-)
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 5ebceed89f..c52b6d3a8b 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -117,8 +117,10 @@ void dma_flush_range(void *ptr, size_t size)
unsigned long end = start + size;
__dma_flush_range(start, end);
+#ifndef __PBL__
if (outer_cache.flush_range)
outer_cache.flush_range(start, end);
+#endif
}
void dma_inv_range(void *ptr, size_t size)
@@ -126,8 +128,10 @@ void dma_inv_range(void *ptr, size_t size)
unsigned long start = (unsigned long)ptr;
unsigned long end = start + size;
+#ifndef __PBL__
if (outer_cache.inv_range)
outer_cache.inv_range(start, end);
+#endif
__dma_inv_range(start, end);
}
@@ -548,16 +552,6 @@ void *dma_alloc_writecombine(size_t size, dma_addr_t *dma_handle)
return dma_alloc_map(size, dma_handle, ARCH_MAP_WRITECOMBINE);
}
-static inline void map_region(unsigned long start, unsigned long size,
- uint64_t flags)
-
-{
- start = ALIGN_DOWN(start, SZ_1M);
- size = ALIGN(size, SZ_1M);
-
- create_sections(start, start + size - 1, flags);
-}
-
void mmu_early_enable(unsigned long membase, unsigned long memsize)
{
uint32_t *ttb = (uint32_t *)arm_mem_ttb(membase, membase + memsize);
@@ -578,21 +572,10 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize)
*/
create_flat_mapping();
- /*
- * There can be SoCs that have a section shared between device memory
- * and the on-chip RAM hosting the PBL. Thus mark this section
- * uncachable, but executable.
- * On such SoCs, executing from OCRAM could cause the instruction
- * prefetcher to speculatively access that device memory, triggering
- * potential errant behavior.
- *
- * If your SoC has such a memory layout, you should rewrite the code
- * here to map the OCRAM page-wise.
- */
- map_region((unsigned long)_stext, _etext - _stext, PMD_SECT_DEF_UNCACHED);
-
/* maps main memory as cachable */
- map_region(membase, memsize - OPTEE_SIZE, PMD_SECT_DEF_CACHED);
+ arch_remap_range((void *)membase, memsize - OPTEE_SIZE, MAP_CACHED);
+ arch_remap_range((void *)membase + memsize - OPTEE_SIZE, OPTEE_SIZE, MAP_UNCACHED);
+ arch_remap_range(_stext, PAGE_ALIGN(_etext - _stext), MAP_CACHED);
__mmu_cache_on();
}
--
2.39.2
* [PATCH 26/27] ARM: mmu32: Skip reserved ranges during initialization
2023-05-12 11:09 [PATCH 00/27] ARM: MMU rework Sascha Hauer
` (24 preceding siblings ...)
2023-05-12 11:10 ` [PATCH 25/27] ARM: mmu32: Use pages for early MMU setup Sascha Hauer
@ 2023-05-12 11:10 ` Sascha Hauer
2023-05-12 11:10 ` [PATCH 27/27] ARM: mmu64: Use two level pagetables in early code Sascha Hauer
26 siblings, 0 replies; 41+ messages in thread
From: Sascha Hauer @ 2023-05-12 11:10 UTC (permalink / raw)
To: Barebox List
The early MMU code now uses pages to map the OP-TEE area non-executable.
This mapping is overwritten with sections in barebox proper. Refrain
from doing so by using arch_remap_range() and bypassing reserved areas.
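The gap-walking loop added below can be exercised host-side with a mocked bank and a recording remap function; the struct layout and addresses are illustrative, not barebox's actual types:

```c
#include <assert.h>
#include <stdint.h>

struct resource { uint32_t start, end; }; /* end is inclusive, as in barebox */

/* Record of remapped [start, start+size) ranges for inspection */
static struct { uint32_t start, size; } mapped[8];
static int nmaps;

static void remap_cached(uint32_t start, uint32_t size)
{
	mapped[nmaps].start = start;
	mapped[nmaps].size = size;
	nmaps++;
}

/*
 * Map a bank cached while skipping reserved regions: the same shape as
 * the for_each_reserved_region() loop this patch adds to __mmu_init().
 * Assumes the reserved list is sorted and contained in the bank.
 */
static void map_bank(uint32_t bank_start, uint32_t bank_size,
		     const struct resource *rsv, int nrsv)
{
	uint32_t pos = bank_start;
	int i;

	for (i = 0; i < nrsv; i++) {
		remap_cached(pos, rsv[i].start - pos);
		pos = rsv[i].end + 1;
	}

	remap_cached(pos, bank_start + bank_size - pos);
}
```

For a 1GiB bank at 0x40000000 with a single 1MiB reservation at 0x50000000, this produces exactly two cached mappings, neither of which touches the reserved megabyte.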
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu_32.c | 14 +++++++++++---
1 file changed, 11 insertions(+), 3 deletions(-)
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index c52b6d3a8b..dc4b0e414d 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -528,9 +528,17 @@ void __mmu_init(bool mmu_on)
vectors_init();
for_each_memory_bank(bank) {
- create_sections(bank->start, bank->start + bank->size - 1,
- PMD_SECT_DEF_CACHED);
- __mmu_cache_flush();
+ struct resource *rsv;
+ resource_size_t pos;
+
+ pos = bank->start;
+
+ for_each_reserved_region(bank, rsv) {
+ arch_remap_range((void *)pos, rsv->start - pos, MAP_CACHED);
+ pos = rsv->end + 1;
+ }
+
+ arch_remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
}
}
--
2.39.2
* [PATCH 27/27] ARM: mmu64: Use two level pagetables in early code
2023-05-12 11:09 [PATCH 00/27] ARM: MMU rework Sascha Hauer
` (25 preceding siblings ...)
2023-05-12 11:10 ` [PATCH 26/27] ARM: mmu32: Skip reserved ranges during initialization Sascha Hauer
@ 2023-05-12 11:10 ` Sascha Hauer
2023-05-16 10:55 ` Sascha Hauer
26 siblings, 1 reply; 41+ messages in thread
From: Sascha Hauer @ 2023-05-12 11:10 UTC (permalink / raw)
To: Barebox List
So far we used 1GiB sized sections in the early MMU setup. This has
the disadvantage that we can't use the MMU in early code when we
require a finer granularity. Rockchip, for example, keeps TF-A code
in low memory, so the early code just skipped MMU initialization
there. Also, we can't properly map the OP-TEE space at the end of
SDRAM non-executable.
With this patch we now use two level page tables and can map with 4KiB
granularity.
The MMU setup in barebox proper changes as well. Instead of disabling
the MMU for reconfiguration we can now keep the MMU enabled and just
add the mappings for SDRAM banks not known to the early code.
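With 4KiB granules, each AArch64 table level resolves 9 more address bits, so the mappable block size shrinks from 1GiB at level 1 to 2MiB at level 2 and 4KiB at level 3. A sketch of the shift arithmetic, mirroring the shape of barebox's `level2shift()` (the membase value in the usage note is hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Shift for the block/page size mapped at a given table level with a
 * 4KiB granule: level 1 -> 30 (1GiB), level 2 -> 21 (2MiB),
 * level 3 -> 12 (4KiB).
 */
static unsigned int level2shift(int level)
{
	return 12 + 9 * (3 - level);
}

/* Is addr aligned to the block size of the given level? */
static int aligned_to_level(uint64_t addr, int level)
{
	return (addr & ((1ULL << level2shift(level)) - 1)) == 0;
}
```

A membase such as 0x00a00000 (a hypothetical example) is 2MiB-aligned but not 1GiB-aligned, which is why the old level-1-only early code had to bail out on such layouts while the two-level setup can map them.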
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu_64.c | 97 ++++++++++---------------------------------
1 file changed, 21 insertions(+), 76 deletions(-)
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index 4b75be621d..3f9b52bbdb 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -22,7 +22,10 @@
#include "mmu_64.h"
-static uint64_t *ttb;
+static uint64_t *get_ttb(void)
+{
+ return (uint64_t *)get_ttbr(current_el());
+}
static void set_table(uint64_t *pt, uint64_t *table_addr)
{
@@ -42,7 +45,7 @@ static uint64_t *alloc_pte(void)
if (idx * GRANULE_SIZE >= ARM_EARLY_PAGETABLE_SIZE)
return NULL;
- return (void *)ttb + idx * GRANULE_SIZE;
+ return get_ttb() + idx * GRANULE_SIZE;
}
#else
static uint64_t *alloc_pte(void)
@@ -63,7 +66,7 @@ static __maybe_unused uint64_t *find_pte(uint64_t addr)
uint64_t idx;
int i;
- pte = ttb;
+ pte = get_ttb();
for (i = 0; i < 4; i++) {
block_shift = level2shift(i);
@@ -112,6 +115,7 @@ static void split_block(uint64_t *pte, int level)
static void create_sections(uint64_t virt, uint64_t phys, uint64_t size,
uint64_t attr)
{
+ uint64_t *ttb = get_ttb();
uint64_t block_size;
uint64_t block_shift;
uint64_t *pte;
@@ -121,9 +125,6 @@ static void create_sections(uint64_t virt, uint64_t phys, uint64_t size,
uint64_t type;
int level;
- if (!ttb)
- arm_mmu_not_initialized_error();
-
addr = virt;
attr &= ~PTE_TYPE_MASK;
@@ -192,37 +193,25 @@ static void mmu_enable(void)
void __mmu_init(bool mmu_on)
{
struct memory_bank *bank;
- unsigned int el;
-
- if (mmu_on)
- mmu_disable();
-
- ttb = alloc_pte();
- el = current_el();
- set_ttbr_tcr_mair(el, (uint64_t)ttb, calc_tcr(el, BITS_PER_VA),
- MEMORY_ATTRIBUTES);
- pr_debug("ttb: 0x%p\n", ttb);
+ reserve_sdram_region("OP-TEE", 0xf0000000 - OPTEE_SIZE, OPTEE_SIZE);
- /* create a flat mapping */
- arch_remap_range(0, 1UL << (BITS_PER_VA - 1), MAP_UNCACHED);
-
- /* Map sdram cached. */
for_each_memory_bank(bank) {
struct resource *rsv;
+ resource_size_t pos;
- arch_remap_range((void *)bank->start, bank->size, MAP_CACHED);
+ pos = bank->start;
for_each_reserved_region(bank, rsv) {
- arch_remap_range((void *)resource_first_page(rsv),
- resource_count_pages(rsv), MAP_UNCACHED);
+ arch_remap_range((void *)pos, rsv->start - pos, MAP_CACHED);
+ pos = rsv->end + 1;
}
+
+ arch_remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
}
/* Make zero page faulting to catch NULL pointer derefs */
zero_page_faulting();
-
- mmu_enable();
}
void mmu_disable(void)
@@ -256,42 +245,6 @@ void dma_flush_range(void *ptr, size_t size)
v8_flush_dcache_range(start, end);
}
-static void early_create_sections(void *ttb, uint64_t virt, uint64_t phys,
- uint64_t size, uint64_t attr)
-{
- uint64_t block_size;
- uint64_t block_shift;
- uint64_t *pte;
- uint64_t idx;
- uint64_t addr;
- uint64_t *table;
-
- addr = virt;
-
- attr &= ~PTE_TYPE_MASK;
-
- table = ttb;
-
- while (1) {
- block_shift = level2shift(1);
- idx = (addr & level2mask(1)) >> block_shift;
- block_size = (1ULL << block_shift);
-
- pte = table + idx;
-
- *pte = phys | attr | PTE_TYPE_BLOCK;
-
- if (size < block_size)
- break;
-
- addr += block_size;
- phys += block_size;
- size -= block_size;
- }
-}
-
-#define EARLY_BITS_PER_VA 39
-
void mmu_early_enable(unsigned long membase, unsigned long memsize)
{
int el;
@@ -299,24 +252,16 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize)
pr_debug("enabling MMU, ttb @ 0x%08lx\n", ttb);
- /*
- * For the early code we only create level 1 pagetables which only
- * allow for a 1GiB granularity. If our membase is not aligned to that
- * bail out without enabling the MMU.
- */
- if (membase & ((1ULL << level2shift(1)) - 1))
- return;
+ el = current_el();
+ set_ttbr_tcr_mair(el, ttb, calc_tcr(el, BITS_PER_VA), MEMORY_ATTRIBUTES);
memset((void *)ttb, 0, GRANULE_SIZE);
- el = current_el();
- set_ttbr_tcr_mair(el, ttb, calc_tcr(el, EARLY_BITS_PER_VA), MEMORY_ATTRIBUTES);
- early_create_sections((void *)ttb, 0, 0, 1UL << (EARLY_BITS_PER_VA - 1),
- attrs_uncached_mem());
- early_create_sections((void *)ttb, membase, membase, memsize - OPTEE_SIZE, CACHED_MEM);
- tlb_invalidate();
- isb();
- set_cr(get_cr() | CR_M);
+ arch_remap_range(0, 1UL << (BITS_PER_VA - 1), MAP_UNCACHED);
+ arch_remap_range((void *)membase, memsize - OPTEE_SIZE, MAP_CACHED);
+ arch_remap_range((void *)membase + memsize - OPTEE_SIZE, OPTEE_SIZE, MAP_FAULT);
+
+ mmu_enable();
}
void mmu_early_disable(void)
--
2.39.2
* Re: [PATCH 01/27] ARM: fix scratch mem position with OP-TEE
2023-05-12 11:09 ` [PATCH 01/27] ARM: fix scratch mem position with OP-TEE Sascha Hauer
@ 2023-05-12 17:17 ` Ahmad Fatoum
0 siblings, 0 replies; 41+ messages in thread
From: Ahmad Fatoum @ 2023-05-12 17:17 UTC (permalink / raw)
To: Sascha Hauer, Barebox List
Hi,
On 12.05.23 13:09, Sascha Hauer wrote:
> OP-TEE is placed right below the end of memory, so the scratch space
> has to go below it.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
> ---
> arch/arm/include/asm/barebox-arm.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/arm/include/asm/barebox-arm.h b/arch/arm/include/asm/barebox-arm.h
> index 0cf4549cd7..711ccd2510 100644
> --- a/arch/arm/include/asm/barebox-arm.h
> +++ b/arch/arm/include/asm/barebox-arm.h
> @@ -75,7 +75,7 @@ void *barebox_arm_boot_dtb(void);
>
> static inline const void *arm_mem_scratch_get(void)
> {
> - return (const void *)__arm_mem_scratch(arm_mem_endmem_get());
> + return (const void *)__arm_mem_scratch(arm_mem_endmem_get() - OPTEE_SIZE);
PBL uses __arm_mem_scratch, so you would have to add OPTEE_SIZE there.
I hadn't done it because I wrote arm_mem_scratch_get() with the expectation
that OPTEE_SIZE would be omitted from the RAM bank size, so it gets mapped
uncacheable by simply being considered non-RAM; see for example the usage
sites of CONFIG_FIRMWARE_IMX8M*_OPTEE. You'll have to fix these sites
too to avoid reserving the memory twice.
> }
>
> #define arm_mem_stack_top(membase, endmem) ((endmem) - SZ_64K - OPTEE_SIZE)
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
* Re: [PATCH 02/27] ARM: drop cache function initialization
2023-05-12 11:09 ` [PATCH 02/27] ARM: drop cache function initialization Sascha Hauer
@ 2023-05-12 17:19 ` Ahmad Fatoum
0 siblings, 0 replies; 41+ messages in thread
From: Ahmad Fatoum @ 2023-05-12 17:19 UTC (permalink / raw)
To: Sascha Hauer, Barebox List
On 12.05.23 13:09, Sascha Hauer wrote:
> We need a call to arm_set_cache_functions() before the cache maintenance
> functions can be used. Drop this call and just pick the correct
> functions on the first call.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
> ---
> arch/arm/cpu/cache.c | 83 +++++++++++++++++-------------------
> arch/arm/cpu/cache_64.c | 5 ---
> arch/arm/cpu/mmu-early.c | 2 -
> arch/arm/cpu/mmu.c | 2 -
> arch/arm/cpu/start.c | 4 +-
> arch/arm/include/asm/cache.h | 2 -
> 6 files changed, 41 insertions(+), 57 deletions(-)
>
> diff --git a/arch/arm/cpu/cache.c b/arch/arm/cpu/cache.c
> index 24a02c68f3..4202406d0d 100644
> --- a/arch/arm/cpu/cache.c
> +++ b/arch/arm/cpu/cache.c
> @@ -17,8 +17,6 @@ struct cache_fns {
> void (*mmu_cache_flush)(void);
> };
>
> -struct cache_fns *cache_fns;
> -
> #define DEFINE_CPU_FNS(arch) \
> void arch##_dma_clean_range(unsigned long start, unsigned long end); \
> void arch##_dma_flush_range(unsigned long start, unsigned long end); \
> @@ -41,50 +39,13 @@ DEFINE_CPU_FNS(v5)
> DEFINE_CPU_FNS(v6)
> DEFINE_CPU_FNS(v7)
>
> -void __dma_clean_range(unsigned long start, unsigned long end)
> -{
> - if (cache_fns)
> - cache_fns->dma_clean_range(start, end);
> -}
> -
> -void __dma_flush_range(unsigned long start, unsigned long end)
> -{
> - if (cache_fns)
> - cache_fns->dma_flush_range(start, end);
> -}
> -
> -void __dma_inv_range(unsigned long start, unsigned long end)
> -{
> - if (cache_fns)
> - cache_fns->dma_inv_range(start, end);
> -}
> -
> -#ifdef CONFIG_MMU
> -
> -void __mmu_cache_on(void)
> -{
> - if (cache_fns)
> - cache_fns->mmu_cache_on();
> -}
> -
> -void __mmu_cache_off(void)
> +static struct cache_fns *cache_functions(void)
> {
> - if (cache_fns)
> - cache_fns->mmu_cache_off();
> -}
> + static struct cache_fns *cache_fns;
>
> -void __mmu_cache_flush(void)
> -{
> if (cache_fns)
> - cache_fns->mmu_cache_flush();
> - if (outer_cache.flush_all)
> - outer_cache.flush_all();
> -}
> -
> -#endif
> + return cache_fns;
>
> -int arm_set_cache_functions(void)
> -{
> switch (cpu_architecture()) {
> #ifdef CONFIG_CPU_32v4T
> case CPU_ARCH_ARMv4T:
> @@ -113,9 +74,45 @@ int arm_set_cache_functions(void)
> while(1);
> }
>
> - return 0;
> + return cache_fns;
> +}
> +
> +void __dma_clean_range(unsigned long start, unsigned long end)
> +{
> + cache_functions()->dma_clean_range(start, end);
> +}
> +
> +void __dma_flush_range(unsigned long start, unsigned long end)
> +{
> + cache_functions()->dma_flush_range(start, end);
> +}
> +
> +void __dma_inv_range(unsigned long start, unsigned long end)
> +{
> + cache_functions()->dma_inv_range(start, end);
> +}
> +
> +#ifdef CONFIG_MMU
> +
> +void __mmu_cache_on(void)
> +{
> + cache_functions()->mmu_cache_on();
> +}
> +
> +void __mmu_cache_off(void)
> +{
> + cache_functions()->mmu_cache_off();
> }
>
> +void __mmu_cache_flush(void)
> +{
> + cache_functions()->mmu_cache_flush();
> + if (outer_cache.flush_all)
> + outer_cache.flush_all();
> +}
> +
> +#endif
> +
> /*
> * Early function to flush the caches. This is for use when the
> * C environment is not yet fully initialized.
> diff --git a/arch/arm/cpu/cache_64.c b/arch/arm/cpu/cache_64.c
> index cb7bc0945c..3a30296128 100644
> --- a/arch/arm/cpu/cache_64.c
> +++ b/arch/arm/cpu/cache_64.c
> @@ -6,11 +6,6 @@
> #include <asm/cache.h>
> #include <asm/system_info.h>
>
> -int arm_set_cache_functions(void)
> -{
> - return 0;
> -}
> -
> /*
> * Early function to flush the caches. This is for use when the
> * C environment is not yet fully initialized.
> diff --git a/arch/arm/cpu/mmu-early.c b/arch/arm/cpu/mmu-early.c
> index 0d528b9b9c..4895911cdb 100644
> --- a/arch/arm/cpu/mmu-early.c
> +++ b/arch/arm/cpu/mmu-early.c
> @@ -28,8 +28,6 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize,
> {
> ttb = (uint32_t *)_ttb;
>
> - arm_set_cache_functions();
> -
> set_ttbr(ttb);
>
> /* For the XN bit to take effect, we can't be using DOMAIN_MANAGER. */
> diff --git a/arch/arm/cpu/mmu.c b/arch/arm/cpu/mmu.c
> index 6388e1bf14..78dd05577a 100644
> --- a/arch/arm/cpu/mmu.c
> +++ b/arch/arm/cpu/mmu.c
> @@ -414,8 +414,6 @@ void __mmu_init(bool mmu_on)
> {
> struct memory_bank *bank;
>
> - arm_set_cache_functions();
> -
> if (cpu_architecture() >= CPU_ARCH_ARMv7) {
> pte_flags_cached = PTE_FLAGS_CACHED_V7;
> pte_flags_wc = PTE_FLAGS_WC_V7;
> diff --git a/arch/arm/cpu/start.c b/arch/arm/cpu/start.c
> index be303514c2..bcfc630f3b 100644
> --- a/arch/arm/cpu/start.c
> +++ b/arch/arm/cpu/start.c
> @@ -170,9 +170,7 @@ __noreturn __no_sanitize_address void barebox_non_pbl_start(unsigned long membas
> if (IS_ENABLED(CONFIG_MMU_EARLY)) {
> unsigned long ttb = arm_mem_ttb(membase, endmem);
>
> - if (IS_ENABLED(CONFIG_PBL_IMAGE)) {
> - arm_set_cache_functions();
> - } else {
> + if (!IS_ENABLED(CONFIG_PBL_IMAGE)) {
> pr_debug("enabling MMU, ttb @ 0x%08lx\n", ttb);
> arm_early_mmu_cache_invalidate();
> mmu_early_enable(membase, memsize - OPTEE_SIZE, ttb);
> diff --git a/arch/arm/include/asm/cache.h b/arch/arm/include/asm/cache.h
> index b63776a74a..261c30129a 100644
> --- a/arch/arm/include/asm/cache.h
> +++ b/arch/arm/include/asm/cache.h
> @@ -18,8 +18,6 @@ static inline void icache_invalidate(void)
> #endif
> }
>
> -int arm_set_cache_functions(void);
> -
> void arm_early_mmu_cache_flush(void);
> void arm_early_mmu_cache_invalidate(void);
>
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
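The pattern this patch introduces — dropping the explicit `arm_set_cache_functions()` setup call in favour of lazily picking the function table on first use — can be sketched in plain, self-contained C. Everything below (`v7_dma_clean_range`, the stubbed `cpu_architecture()`, the single v7 entry) is a hypothetical stand-in for illustration, not the real barebox code:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for barebox's per-architecture callback table. */
struct cache_fns {
	void (*dma_clean_range)(unsigned long start, unsigned long end);
};

static unsigned long last_cleaned_start;

static void v7_dma_clean_range(unsigned long start, unsigned long end)
{
	(void)end;
	last_cleaned_start = start;
}

static struct cache_fns v7_fns = { .dma_clean_range = v7_dma_clean_range };

/* Stub: in barebox this reads the CPU ID registers. */
static int cpu_architecture(void)
{
	return 7;
}

/*
 * Lazy selection: the function-local static pointer stays NULL until
 * the first call, after which every caller gets the same table. No
 * separate initialization call is needed anymore.
 */
static struct cache_fns *cache_functions(void)
{
	static struct cache_fns *fns;

	if (fns)
		return fns;

	switch (cpu_architecture()) {
	case 7:
		fns = &v7_fns;
		break;
	default:
		fns = NULL; /* the real code loops forever here */
		break;
	}

	return fns;
}

void __dma_clean_range(unsigned long start, unsigned long end)
{
	cache_functions()->dma_clean_range(start, end);
}
```

The wrappers become unconditional calls through the table, which is why the `if (cache_fns)` guards disappear from the diff above.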
* Re: [PATCH 03/27] ARM: Add _32 suffix to aarch32 specific filenames
2023-05-12 11:09 ` [PATCH 03/27] ARM: Add _32 suffix to aarch32 specific filenames Sascha Hauer
@ 2023-05-12 17:21 ` Ahmad Fatoum
0 siblings, 0 replies; 41+ messages in thread
From: Ahmad Fatoum @ 2023-05-12 17:21 UTC (permalink / raw)
To: Sascha Hauer, Barebox List
On 12.05.23 13:09, Sascha Hauer wrote:
> Several files in arch/arm/cpu/ have 32bit and 64bit versions. The
> 64bit versions have a _64 suffix, but the 32bit versions have none.
> This can be confusing sometimes as one doesn't know if a file is
> 32bit specific or common code.
>
> Add a _32 suffix to the 32bit files to avoid this confusion.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
> ---
> arch/arm/Makefile | 5 ++++-
> arch/arm/cpu/Makefile | 20 +++++++++----------
> arch/arm/cpu/{cache.c => cache_32.c} | 0
> arch/arm/cpu/{entry_ll.S => entry_ll_32.S} | 0
> .../arm/cpu/{exceptions.S => exceptions_32.S} | 0
> .../arm/cpu/{interrupts.c => interrupts_32.c} | 0
> arch/arm/cpu/{lowlevel.S => lowlevel_32.S} | 0
> arch/arm/cpu/{mmu-early.c => mmu-early_32.c} | 0
> arch/arm/cpu/{mmu.c => mmu_32.c} | 0
> arch/arm/cpu/{setupc.S => setupc_32.S} | 0
> .../arm/cpu/{smccc-call.S => smccc-call_32.S} | 0
> 11 files changed, 14 insertions(+), 11 deletions(-)
> rename arch/arm/cpu/{cache.c => cache_32.c} (100%)
> rename arch/arm/cpu/{entry_ll.S => entry_ll_32.S} (100%)
> rename arch/arm/cpu/{exceptions.S => exceptions_32.S} (100%)
> rename arch/arm/cpu/{interrupts.c => interrupts_32.c} (100%)
> rename arch/arm/cpu/{lowlevel.S => lowlevel_32.S} (100%)
> rename arch/arm/cpu/{mmu-early.c => mmu-early_32.c} (100%)
> rename arch/arm/cpu/{mmu.c => mmu_32.c} (100%)
> rename arch/arm/cpu/{setupc.S => setupc_32.S} (100%)
> rename arch/arm/cpu/{smccc-call.S => smccc-call_32.S} (100%)
>
> diff --git a/arch/arm/Makefile b/arch/arm/Makefile
> index a506f1e3a3..cb88c7b330 100644
> --- a/arch/arm/Makefile
> +++ b/arch/arm/Makefile
> @@ -78,10 +78,13 @@ endif
> ifeq ($(CONFIG_CPU_V8), y)
> KBUILD_CPPFLAGS += $(CFLAGS_ABI) $(arch-y) $(tune-y)
> KBUILD_AFLAGS += -include asm/unified.h
> -export S64 = _64
> +export S64_32 = 64
> +export S64 = 64
> else
> KBUILD_CPPFLAGS += $(CFLAGS_ABI) $(arch-y) $(tune-y) $(CFLAGS_THUMB2)
> KBUILD_AFLAGS += -include asm/unified.h -msoft-float $(AFLAGS_THUMB2)
> +export S64_32 = 32
> +export S32 = 32
> endif
>
> # Machine directory name. This list is sorted alphanumerically
> diff --git a/arch/arm/cpu/Makefile b/arch/arm/cpu/Makefile
> index 7674c1464c..fef2026da5 100644
> --- a/arch/arm/cpu/Makefile
> +++ b/arch/arm/cpu/Makefile
> @@ -2,15 +2,15 @@
>
> obj-y += cpu.o
>
> -obj-$(CONFIG_ARM_EXCEPTIONS) += exceptions$(S64).o interrupts$(S64).o
> -obj-$(CONFIG_MMU) += mmu$(S64).o mmu-common.o
> -obj-pbl-y += lowlevel$(S64).o
> -obj-pbl-$(CONFIG_MMU) += mmu-early$(S64).o
> +obj-$(CONFIG_ARM_EXCEPTIONS) += exceptions_$(S64_32).o interrupts_$(S64_32).o
> +obj-$(CONFIG_MMU) += mmu_$(S64_32).o mmu-common.o
> +obj-pbl-y += lowlevel_$(S64_32).o
> +obj-pbl-$(CONFIG_MMU) += mmu-early_$(S64_32).o
> obj-pbl-$(CONFIG_CPU_32v7) += hyp.o
> AFLAGS_hyp.o :=-Wa,-march=armv7-a -Wa,-mcpu=all
> AFLAGS_hyp.pbl.o :=-Wa,-march=armv7-a -Wa,-mcpu=all
>
> -obj-y += start.o entry.o entry_ll$(S64).o
> +obj-y += start.o entry.o entry_ll_$(S64_32).o
> KASAN_SANITIZE_start.o := n
>
> pbl-$(CONFIG_CPU_64) += head_64.o
> @@ -18,7 +18,7 @@ pbl-$(CONFIG_CPU_64) += head_64.o
> pbl-$(CONFIG_BOARD_ARM_GENERIC_DT) += board-dt-2nd.o
> pbl-$(CONFIG_BOARD_ARM_GENERIC_DT_AARCH64) += board-dt-2nd-aarch64.o
>
> -obj-pbl-y += setupc$(S64).o cache$(S64).o
> +obj-pbl-y += setupc_$(S64_32).o cache_$(S64_32).o
>
> obj-$(CONFIG_ARM_PSCI_CLIENT) += psci-client.o
>
> @@ -35,9 +35,9 @@ endif
>
> obj-$(CONFIG_ARM_PSCI) += psci.o
> obj-$(CONFIG_ARM_PSCI_OF) += psci-of.o
> -obj-pbl-$(CONFIG_ARM_SMCCC) += smccc-call$(S64).o
> -AFLAGS_smccc-call$(S64).o :=-Wa,-march=armv$(if $(S64),8,7)-a
> -AFLAGS_smccc-call$(S64).pbl.o :=-Wa,-march=armv$(if $(S64),8,7)-a
> +obj-pbl-$(CONFIG_ARM_SMCCC) += smccc-call_$(S64_32).o
> +AFLAGS_smccc-call_$(S64_32).o :=-Wa,-march=armv$(if $(S64),8,7)-a
> +AFLAGS_smccc-call_$(S64_32).pbl.o :=-Wa,-march=armv$(if $(S64),8,7)-a
> obj-$(CONFIG_ARM_SECURE_MONITOR) += sm.o sm_as.o
> AFLAGS_sm_as.o :=-Wa,-march=armv7-a
>
> @@ -52,7 +52,7 @@ obj-pbl-$(CONFIG_CPU_64v8) += cache-armv8.o
> AFLAGS_cache-armv8.o :=-Wa,-march=armv8-a
> AFLAGS-cache-armv8.pbl.o :=-Wa,-march=armv8-a
>
> -pbl-y += entry.o entry_ll$(S64).o
> +pbl-y += entry.o entry_ll_$(S64_32).o
> pbl-y += uncompress.o
> pbl-$(CONFIG_ARM_ATF) += atf.o
>
> diff --git a/arch/arm/cpu/cache.c b/arch/arm/cpu/cache_32.c
> similarity index 100%
> rename from arch/arm/cpu/cache.c
> rename to arch/arm/cpu/cache_32.c
> diff --git a/arch/arm/cpu/entry_ll.S b/arch/arm/cpu/entry_ll_32.S
> similarity index 100%
> rename from arch/arm/cpu/entry_ll.S
> rename to arch/arm/cpu/entry_ll_32.S
> diff --git a/arch/arm/cpu/exceptions.S b/arch/arm/cpu/exceptions_32.S
> similarity index 100%
> rename from arch/arm/cpu/exceptions.S
> rename to arch/arm/cpu/exceptions_32.S
> diff --git a/arch/arm/cpu/interrupts.c b/arch/arm/cpu/interrupts_32.c
> similarity index 100%
> rename from arch/arm/cpu/interrupts.c
> rename to arch/arm/cpu/interrupts_32.c
> diff --git a/arch/arm/cpu/lowlevel.S b/arch/arm/cpu/lowlevel_32.S
> similarity index 100%
> rename from arch/arm/cpu/lowlevel.S
> rename to arch/arm/cpu/lowlevel_32.S
> diff --git a/arch/arm/cpu/mmu-early.c b/arch/arm/cpu/mmu-early_32.c
> similarity index 100%
> rename from arch/arm/cpu/mmu-early.c
> rename to arch/arm/cpu/mmu-early_32.c
> diff --git a/arch/arm/cpu/mmu.c b/arch/arm/cpu/mmu_32.c
> similarity index 100%
> rename from arch/arm/cpu/mmu.c
> rename to arch/arm/cpu/mmu_32.c
> diff --git a/arch/arm/cpu/setupc.S b/arch/arm/cpu/setupc_32.S
> similarity index 100%
> rename from arch/arm/cpu/setupc.S
> rename to arch/arm/cpu/setupc_32.S
> diff --git a/arch/arm/cpu/smccc-call.S b/arch/arm/cpu/smccc-call_32.S
> similarity index 100%
> rename from arch/arm/cpu/smccc-call.S
> rename to arch/arm/cpu/smccc-call_32.S
* Re: [PATCH 04/27] ARM: cpu.c: remove unused include
2023-05-12 11:09 ` [PATCH 04/27] ARM: cpu.c: remove unused include Sascha Hauer
@ 2023-05-12 17:22 ` Ahmad Fatoum
0 siblings, 0 replies; 41+ messages in thread
From: Ahmad Fatoum @ 2023-05-12 17:22 UTC (permalink / raw)
To: Sascha Hauer, Barebox List
On 12.05.23 13:09, Sascha Hauer wrote:
> cpu.c doesn't use anything from mmu.h, so drop its inclusion.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
> ---
> arch/arm/cpu/cpu.c | 2 --
> 1 file changed, 2 deletions(-)
>
> diff --git a/arch/arm/cpu/cpu.c b/arch/arm/cpu/cpu.c
> index 5b79dd2a8f..cacd442b28 100644
> --- a/arch/arm/cpu/cpu.c
> +++ b/arch/arm/cpu/cpu.c
> @@ -18,8 +18,6 @@
> #include <asm/cache.h>
> #include <asm/ptrace.h>
>
> -#include "mmu.h"
> -
> /**
> * Enable processor's instruction cache
> */
* Re: [PATCH 05/27] ARM: mmu-common.c: use common mmu include
2023-05-12 11:09 ` [PATCH 05/27] ARM: mmu-common.c: use common mmu include Sascha Hauer
@ 2023-05-12 17:23 ` Ahmad Fatoum
0 siblings, 0 replies; 41+ messages in thread
From: Ahmad Fatoum @ 2023-05-12 17:23 UTC (permalink / raw)
To: Sascha Hauer, Barebox List
On 12.05.23 13:09, Sascha Hauer wrote:
> mmu-common.c needs things from mmu-common.h, but not from mmu.h, so
> include the former instead.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Acked-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
> ---
> arch/arm/cpu/mmu-common.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/arm/cpu/mmu-common.c b/arch/arm/cpu/mmu-common.c
> index 488a189f1c..e6cc3b974f 100644
> --- a/arch/arm/cpu/mmu-common.c
> +++ b/arch/arm/cpu/mmu-common.c
> @@ -11,7 +11,7 @@
> #include <asm/system.h>
> #include <asm/barebox-arm.h>
> #include <memory.h>
> -#include "mmu.h"
> +#include "mmu-common.h"
>
> void dma_sync_single_for_cpu(dma_addr_t address, size_t size,
> enum dma_data_direction dir)
* Re: [PATCH 06/27] ARM: mmu32: rename mmu.h to mmu_32.h
2023-05-12 11:09 ` [PATCH 06/27] ARM: mmu32: rename mmu.h to mmu_32.h Sascha Hauer
@ 2023-05-12 17:23 ` Ahmad Fatoum
0 siblings, 0 replies; 41+ messages in thread
From: Ahmad Fatoum @ 2023-05-12 17:23 UTC (permalink / raw)
To: Sascha Hauer, Barebox List
On 12.05.23 13:09, Sascha Hauer wrote:
> mmu.h is 32bit specific, so rename it to mmu32.h like the C files
Nitpick: mmu_32.h
> have been renamed already.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
> ---
> arch/arm/cpu/cache_32.c | 2 +-
> arch/arm/cpu/mmu-early_32.c | 2 +-
> arch/arm/cpu/mmu_32.c | 2 +-
> arch/arm/cpu/{mmu.h => mmu_32.h} | 0
> arch/arm/cpu/sm.c | 3 +--
> 5 files changed, 4 insertions(+), 5 deletions(-)
> rename arch/arm/cpu/{mmu.h => mmu_32.h} (100%)
>
> diff --git a/arch/arm/cpu/cache_32.c b/arch/arm/cpu/cache_32.c
> index 4202406d0d..0ac50c4d9a 100644
> --- a/arch/arm/cpu/cache_32.c
> +++ b/arch/arm/cpu/cache_32.c
> @@ -6,7 +6,7 @@
> #include <asm/cache.h>
> #include <asm/system_info.h>
>
> -#include "mmu.h"
> +#include "mmu_32.h"
>
> struct cache_fns {
> void (*dma_clean_range)(unsigned long start, unsigned long end);
> diff --git a/arch/arm/cpu/mmu-early_32.c b/arch/arm/cpu/mmu-early_32.c
> index 4895911cdb..07c5917e6a 100644
> --- a/arch/arm/cpu/mmu-early_32.c
> +++ b/arch/arm/cpu/mmu-early_32.c
> @@ -9,7 +9,7 @@
> #include <asm/cache.h>
> #include <asm-generic/sections.h>
>
> -#include "mmu.h"
> +#include "mmu_32.h"
>
> static uint32_t *ttb;
>
> diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
> index 78dd05577a..8ec21ee1d2 100644
> --- a/arch/arm/cpu/mmu_32.c
> +++ b/arch/arm/cpu/mmu_32.c
> @@ -18,7 +18,7 @@
> #include <asm/system_info.h>
> #include <asm/sections.h>
>
> -#include "mmu.h"
> +#include "mmu_32.h"
>
> #define PTRS_PER_PTE (PGDIR_SIZE / PAGE_SIZE)
> #define ARCH_MAP_WRITECOMBINE ((unsigned)-1)
> diff --git a/arch/arm/cpu/mmu.h b/arch/arm/cpu/mmu_32.h
> similarity index 100%
> rename from arch/arm/cpu/mmu.h
> rename to arch/arm/cpu/mmu_32.h
> diff --git a/arch/arm/cpu/sm.c b/arch/arm/cpu/sm.c
> index f5a1edbd4f..53f5142b63 100644
> --- a/arch/arm/cpu/sm.c
> +++ b/arch/arm/cpu/sm.c
> @@ -19,8 +19,7 @@
> #include <linux/arm-smccc.h>
> #include <asm-generic/sections.h>
> #include <asm/secure.h>
> -
> -#include "mmu.h"
> +#include "mmu_32.h"
>
> static unsigned int read_id_pfr1(void)
> {
* Re: [PATCH 08/27] ARM: mmu64: Use arch_remap_range where possible
2023-05-12 11:09 ` [PATCH 08/27] ARM: mmu64: Use arch_remap_range where possible Sascha Hauer
@ 2023-05-12 17:40 ` Ahmad Fatoum
0 siblings, 0 replies; 41+ messages in thread
From: Ahmad Fatoum @ 2023-05-12 17:40 UTC (permalink / raw)
To: Sascha Hauer, Barebox List
On 12.05.23 13:09, Sascha Hauer wrote:
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
LGTM.
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
> ---
> arch/arm/cpu/mmu_64.c | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
> index a22e0c81ab..0639d0f1ce 100644
> --- a/arch/arm/cpu/mmu_64.c
> +++ b/arch/arm/cpu/mmu_64.c
> @@ -174,12 +174,12 @@ static void mmu_enable(void)
>
> void zero_page_access(void)
> {
> - create_sections(0x0, 0x0, PAGE_SIZE, CACHED_MEM);
> + arch_remap_range(0x0, PAGE_SIZE, MAP_CACHED);
> }
>
> void zero_page_faulting(void)
> {
> - create_sections(0x0, 0x0, PAGE_SIZE, 0x0);
> + arch_remap_range(0x0, PAGE_SIZE, MAP_FAULT);
> }
>
> /*
> @@ -201,17 +201,17 @@ void __mmu_init(bool mmu_on)
> pr_debug("ttb: 0x%p\n", ttb);
>
> /* create a flat mapping */
> - create_sections(0, 0, 1UL << (BITS_PER_VA - 1), attrs_uncached_mem());
> + arch_remap_range(0, 1UL << (BITS_PER_VA - 1), MAP_UNCACHED);
>
> /* Map sdram cached. */
> for_each_memory_bank(bank) {
> struct resource *rsv;
>
> - create_sections(bank->start, bank->start, bank->size, CACHED_MEM);
> + arch_remap_range((void *)bank->start, bank->size, MAP_CACHED);
>
> for_each_reserved_region(bank, rsv) {
> - create_sections(resource_first_page(rsv), resource_first_page(rsv),
> - resource_count_pages(rsv), attrs_uncached_mem());
> + arch_remap_range((void *)resource_first_page(rsv),
> + resource_count_pages(rsv), MAP_UNCACHED);
> }
> }
>
* Re: [PATCH 10/27] ARM: i.MX: Drop HAB workaround
2023-05-12 11:09 ` [PATCH 10/27] ARM: i.MX: Drop HAB workaround Sascha Hauer
@ 2023-05-12 18:09 ` Ahmad Fatoum
2023-05-16 8:23 ` Sascha Hauer
0 siblings, 1 reply; 41+ messages in thread
From: Ahmad Fatoum @ 2023-05-12 18:09 UTC (permalink / raw)
To: Sascha Hauer, Barebox List
On 12.05.23 13:09, Sascha Hauer wrote:
> The i.MX HAB code on i.MX6 has to jump into ROM which happens to start
> at 0x0. To make that possible we used to map the ROM cached and jumped
> to it before the MMU is initialized. Instead, remap the ROM as needed
> in the HAB code so that we can safely jump into ROM with MMU enabled.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
> ---
> arch/arm/cpu/mmu-early_32.c | 7 -------
> drivers/hab/habv4.c | 9 ++++++++-
> 2 files changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/arch/arm/cpu/mmu-early_32.c b/arch/arm/cpu/mmu-early_32.c
> index 07c5917e6a..94bde44c9b 100644
> --- a/arch/arm/cpu/mmu-early_32.c
> +++ b/arch/arm/cpu/mmu-early_32.c
> @@ -58,12 +58,5 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize,
> /* maps main memory as cachable */
> map_region(membase, memsize, PMD_SECT_DEF_CACHED);
>
> - /*
> - * With HAB enabled we call into the ROM code later in imx6_hab_get_status().
> - * Map the ROM cached which has the effect that the XN bit is not set.
> - */
> - if (IS_ENABLED(CONFIG_HABV4) && IS_ENABLED(CONFIG_ARCH_IMX6))
> - map_region(0x0, SZ_1M, PMD_SECT_DEF_CACHED);
> -
> __mmu_cache_on();
> }
> diff --git a/drivers/hab/habv4.c b/drivers/hab/habv4.c
> index 252e38f655..d2494db114 100644
> --- a/drivers/hab/habv4.c
> +++ b/drivers/hab/habv4.c
> @@ -11,6 +11,9 @@
> #include <hab.h>
> #include <init.h>
> #include <types.h>
> +#include <mmu.h>
> +#include <zero_page.h>
> +#include <linux/sizes.h>
> #include <linux/arm-smccc.h>
> #include <asm/cache.h>
>
> @@ -613,12 +616,16 @@ static int init_imx6_hab_get_status(void)
> /* can happen in multi-image builds and is not an error */
> return 0;
>
> + arch_remap_range(0x0, SZ_1M, MAP_CACHED);
This affects SZ_1M bytes.
> +
> /*
> * Nobody will check the return value if there were HAB errors, but the
> * initcall will fail spectaculously with a strange error message.
> */
> imx6_hab_get_status();
>
> + zero_page_faulting();
This affects only 4K. The rest of the 1M can now be speculated into :/
> +
> return 0;
> }
>
> @@ -627,7 +634,7 @@ static int init_imx6_hab_get_status(void)
> * which will no longer be accessible when the MMU sets the zero page to
> * faulting.
> */
> -postconsole_initcall(init_imx6_hab_get_status);
> +postmmu_initcall(init_imx6_hab_get_status);
>
> int imx28_hab_get_status(void)
> {
* Re: [PATCH 11/27] ARM: Move early MMU after malloc initialization
2023-05-12 11:09 ` [PATCH 11/27] ARM: Move early MMU after malloc initialization Sascha Hauer
@ 2023-05-12 18:10 ` Ahmad Fatoum
0 siblings, 0 replies; 41+ messages in thread
From: Ahmad Fatoum @ 2023-05-12 18:10 UTC (permalink / raw)
To: Sascha Hauer, Barebox List
On 12.05.23 13:09, Sascha Hauer wrote:
> Initialize the MMU after malloc so that we can use malloc in the
> MMU code, for example to allocate memory for page tables.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Acked-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
> ---
> arch/arm/cpu/start.c | 20 ++++++++++----------
> 1 file changed, 10 insertions(+), 10 deletions(-)
>
> diff --git a/arch/arm/cpu/start.c b/arch/arm/cpu/start.c
> index bcfc630f3b..9d788eba2b 100644
> --- a/arch/arm/cpu/start.c
> +++ b/arch/arm/cpu/start.c
> @@ -167,16 +167,6 @@ __noreturn __no_sanitize_address void barebox_non_pbl_start(unsigned long membas
> arm_barebox_size = barebox_size;
> malloc_end = barebox_base;
>
> - if (IS_ENABLED(CONFIG_MMU_EARLY)) {
> - unsigned long ttb = arm_mem_ttb(membase, endmem);
> -
> - if (!IS_ENABLED(CONFIG_PBL_IMAGE)) {
> - pr_debug("enabling MMU, ttb @ 0x%08lx\n", ttb);
> - arm_early_mmu_cache_invalidate();
> - mmu_early_enable(membase, memsize - OPTEE_SIZE, ttb);
> - }
> - }
> -
> if (boarddata) {
> uint32_t totalsize = 0;
> const char *name;
> @@ -226,6 +216,16 @@ __noreturn __no_sanitize_address void barebox_non_pbl_start(unsigned long membas
>
> mem_malloc_init((void *)malloc_start, (void *)malloc_end - 1);
>
> + if (IS_ENABLED(CONFIG_MMU_EARLY)) {
> + unsigned long ttb = arm_mem_ttb(membase, endmem);
> +
> + if (!IS_ENABLED(CONFIG_PBL_IMAGE)) {
> + pr_debug("enabling MMU, ttb @ 0x%08lx\n", ttb);
> + arm_early_mmu_cache_invalidate();
> + mmu_early_enable(membase, memsize - OPTEE_SIZE, ttb);
> + }
> + }
> +
> if (IS_ENABLED(CONFIG_BOOTM_OPTEE))
> of_add_reserve_entry(endmem - OPTEE_SIZE, endmem - 1);
>
* Re: [PATCH 12/27] ARM: mmu: move dma_sync_single_for_device to extra file
2023-05-12 11:09 ` [PATCH 12/27] ARM: mmu: move dma_sync_single_for_device to extra file Sascha Hauer
@ 2023-05-12 18:30 ` Ahmad Fatoum
2023-05-16 9:09 ` Sascha Hauer
0 siblings, 1 reply; 41+ messages in thread
From: Ahmad Fatoum @ 2023-05-12 18:30 UTC (permalink / raw)
To: Sascha Hauer, Barebox List
On 12.05.23 13:09, Sascha Hauer wrote:
> The next patch merges the mmu.c files with their corresponding
> mmu-early.c files. Before doing that move functions which can't
> be compiled for PBL out to extra files.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
> ---
> arch/arm/cpu/Makefile | 1 +
> arch/arm/cpu/dma_32.c | 20 ++++++++++++++++++++
> arch/arm/cpu/dma_64.c | 16 ++++++++++++++++
> arch/arm/cpu/mmu_32.c | 18 ------------------
> arch/arm/cpu/mmu_64.c | 13 -------------
> 5 files changed, 37 insertions(+), 31 deletions(-)
> create mode 100644 arch/arm/cpu/dma_32.c
> create mode 100644 arch/arm/cpu/dma_64.c
>
> diff --git a/arch/arm/cpu/Makefile b/arch/arm/cpu/Makefile
> index fef2026da5..cd5f36eb49 100644
> --- a/arch/arm/cpu/Makefile
> +++ b/arch/arm/cpu/Makefile
> @@ -4,6 +4,7 @@ obj-y += cpu.o
>
> obj-$(CONFIG_ARM_EXCEPTIONS) += exceptions_$(S64_32).o interrupts_$(S64_32).o
> obj-$(CONFIG_MMU) += mmu_$(S64_32).o mmu-common.o
> +obj-$(CONFIG_MMU) += dma_$(S64_32).o
> obj-pbl-y += lowlevel_$(S64_32).o
> obj-pbl-$(CONFIG_MMU) += mmu-early_$(S64_32).o
> obj-pbl-$(CONFIG_CPU_32v7) += hyp.o
> diff --git a/arch/arm/cpu/dma_32.c b/arch/arm/cpu/dma_32.c
> new file mode 100644
> index 0000000000..a66aa26b9b
> --- /dev/null
> +++ b/arch/arm/cpu/dma_32.c
> @@ -0,0 +1,20 @@
> +#include <dma.h>
> +#include <asm/mmu.h>
> +
> +void dma_sync_single_for_device(dma_addr_t address, size_t size,
> + enum dma_data_direction dir)
> +{
> + /*
> + * FIXME: This function needs a device argument to support non 1:1 mappings
> + */
> +
> + if (dir == DMA_FROM_DEVICE) {
> + __dma_inv_range(address, address + size);
> + if (outer_cache.inv_range)
> + outer_cache.inv_range(address, address + size);
I know this is unrelated to your series, but this is wrong. The outermost
cache must be invalidated before L1. Otherwise we could have this
unlucky constellation:
- CPU is invalidating L1
- HW prefetcher wants to load something into L1
- Stale data in L2 is loaded into L1
- Only now CPU invalidates L2
Could you send a fix? :-)
> + } else {
> + __dma_clean_range(address, address + size);
> + if (outer_cache.clean_range)
> + outer_cache.clean_range(address, address + size);
This is fine though.
> + }
> +}
> diff --git a/arch/arm/cpu/dma_64.c b/arch/arm/cpu/dma_64.c
> new file mode 100644
> index 0000000000..b4ae736c9b
> --- /dev/null
> +++ b/arch/arm/cpu/dma_64.c
> @@ -0,0 +1,16 @@
> +#include <dma.h>
> +#include <asm/mmu.h>
> +#include <asm/cache.h>
> +
> +void dma_sync_single_for_device(dma_addr_t address, size_t size,
> + enum dma_data_direction dir)
> +{
> + /*
> + * FIXME: This function needs a device argument to support non 1:1 mappings
> + */
> +
> + if (dir == DMA_FROM_DEVICE)
> + v8_inv_dcache_range(address, address + size - 1);
> + else
> + v8_flush_dcache_range(address, address + size - 1);
> +}
> diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
> index 7b31938ecd..10f447874c 100644
> --- a/arch/arm/cpu/mmu_32.c
> +++ b/arch/arm/cpu/mmu_32.c
> @@ -494,21 +494,3 @@ void *dma_alloc_writecombine(size_t size, dma_addr_t *dma_handle)
> {
> return dma_alloc_map(size, dma_handle, ARCH_MAP_WRITECOMBINE);
> }
> -
> -void dma_sync_single_for_device(dma_addr_t address, size_t size,
> - enum dma_data_direction dir)
> -{
> - /*
> - * FIXME: This function needs a device argument to support non 1:1 mappings
> - */
> -
> - if (dir == DMA_FROM_DEVICE) {
> - __dma_inv_range(address, address + size);
> - if (outer_cache.inv_range)
> - outer_cache.inv_range(address, address + size);
> - } else {
> - __dma_clean_range(address, address + size);
> - if (outer_cache.clean_range)
> - outer_cache.clean_range(address, address + size);
> - }
> -}
> diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
> index c7c16b527b..9150de1676 100644
> --- a/arch/arm/cpu/mmu_64.c
> +++ b/arch/arm/cpu/mmu_64.c
> @@ -241,16 +241,3 @@ void dma_flush_range(void *ptr, size_t size)
>
> v8_flush_dcache_range(start, end);
> }
> -
> -void dma_sync_single_for_device(dma_addr_t address, size_t size,
> - enum dma_data_direction dir)
> -{
> - /*
> - * FIXME: This function needs a device argument to support non 1:1 mappings
> - */
> -
> - if (dir == DMA_FROM_DEVICE)
> - v8_inv_dcache_range(address, address + size - 1);
> - else
> - v8_flush_dcache_range(address, address + size - 1);
> -}
* Re: [PATCH 10/27] ARM: i.MX: Drop HAB workaround
2023-05-12 18:09 ` Ahmad Fatoum
@ 2023-05-16 8:23 ` Sascha Hauer
0 siblings, 0 replies; 41+ messages in thread
From: Sascha Hauer @ 2023-05-16 8:23 UTC (permalink / raw)
To: Ahmad Fatoum; +Cc: Barebox List
On Fri, May 12, 2023 at 08:09:06PM +0200, Ahmad Fatoum wrote:
> On 12.05.23 13:09, Sascha Hauer wrote:
> > The i.MX HAB code on i.MX6 has to jump into ROM which happens to start
> > at 0x0. To make that possible we used to map the ROM cached and jumped
> > to it before the MMU is initialized. Instead, remap the ROM as needed
> > in the HAB code so that we can safely jump into ROM with MMU enabled.
> >
> > Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
> > ---
> > arch/arm/cpu/mmu-early_32.c | 7 -------
> > drivers/hab/habv4.c | 9 ++++++++-
> > 2 files changed, 8 insertions(+), 8 deletions(-)
> >
> > diff --git a/arch/arm/cpu/mmu-early_32.c b/arch/arm/cpu/mmu-early_32.c
> > index 07c5917e6a..94bde44c9b 100644
> > --- a/arch/arm/cpu/mmu-early_32.c
> > +++ b/arch/arm/cpu/mmu-early_32.c
> > @@ -58,12 +58,5 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize,
> > /* maps main memory as cachable */
> > map_region(membase, memsize, PMD_SECT_DEF_CACHED);
> >
> > - /*
> > - * With HAB enabled we call into the ROM code later in imx6_hab_get_status().
> > - * Map the ROM cached which has the effect that the XN bit is not set.
> > - */
> > - if (IS_ENABLED(CONFIG_HABV4) && IS_ENABLED(CONFIG_ARCH_IMX6))
> > - map_region(0x0, SZ_1M, PMD_SECT_DEF_CACHED);
> > -
> > __mmu_cache_on();
> > }
> > diff --git a/drivers/hab/habv4.c b/drivers/hab/habv4.c
> > index 252e38f655..d2494db114 100644
> > --- a/drivers/hab/habv4.c
> > +++ b/drivers/hab/habv4.c
> > @@ -11,6 +11,9 @@
> > #include <hab.h>
> > #include <init.h>
> > #include <types.h>
> > +#include <mmu.h>
> > +#include <zero_page.h>
> > +#include <linux/sizes.h>
> > #include <linux/arm-smccc.h>
> > #include <asm/cache.h>
> >
> > @@ -613,12 +616,16 @@ static int init_imx6_hab_get_status(void)
> > /* can happen in multi-image builds and is not an error */
> > return 0;
> >
> > + arch_remap_range(0x0, SZ_1M, MAP_CACHED);
>
> This affects SZ_1M bytes.
>
> > +
> > /*
> > * Nobody will check the return value if there were HAB errors, but the
> > * initcall will fail spectaculously with a strange error message.
> > */
> > imx6_hab_get_status();
> >
> > + zero_page_faulting();
>
> This affects only 4K. The rest of the 1M can now be speculated into :/
Ok, I'll add a
arch_remap_range((void *)PAGE_SIZE, SZ_1M - PAGE_SIZE, MAP_UNCACHED);
to remap the remaining space as uncached.
Sascha
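The remap sequence agreed on in this exchange — map the first MiB cached so the ROM call is executable, do the call, then restore the zero page to faulting and the remainder to uncached — can be modeled with a toy page-attribute array. `arch_remap_range()` below is a hypothetical stand-in that only tracks attributes, not a real page-table walker:

```c
#include <assert.h>
#include <string.h>

/* Toy model: one attribute byte per 4 KiB page of the first MiB. */
#define PAGE_SIZE 0x1000u
#define SZ_1M     0x100000u
#define NPAGES    (SZ_1M / PAGE_SIZE)

enum { MAP_FAULT, MAP_UNCACHED, MAP_CACHED };

static unsigned char attrs[NPAGES];

/* Hypothetical stand-in for barebox's arch_remap_range(). */
static void arch_remap_range(unsigned long start, unsigned long size, int flags)
{
	memset(&attrs[start / PAGE_SIZE], flags, size / PAGE_SIZE);
}

static void zero_page_faulting(void)
{
	arch_remap_range(0, PAGE_SIZE, MAP_FAULT);
}

/* The full sequence for the i.MX6 HAB status readout, as discussed. */
static void hab_get_status_sequence(void)
{
	arch_remap_range(0, SZ_1M, MAP_CACHED);	/* make ROM executable */
	/* ... imx6_hab_get_status() jumps into ROM here ... */
	zero_page_faulting();			/* re-trap NULL derefs */
	arch_remap_range(PAGE_SIZE, SZ_1M - PAGE_SIZE, MAP_UNCACHED);
}
```

The final `arch_remap_range()` call is the addition Sascha proposes above, closing the window where pages 1..255 of the first MiB would otherwise stay cached and speculatable.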
* Re: [PATCH 12/27] ARM: mmu: move dma_sync_single_for_device to extra file
2023-05-12 18:30 ` Ahmad Fatoum
@ 2023-05-16 9:09 ` Sascha Hauer
0 siblings, 0 replies; 41+ messages in thread
From: Sascha Hauer @ 2023-05-16 9:09 UTC (permalink / raw)
To: Ahmad Fatoum; +Cc: Barebox List
On Fri, May 12, 2023 at 08:30:01PM +0200, Ahmad Fatoum wrote:
> On 12.05.23 13:09, Sascha Hauer wrote:
> > The next patch merges the mmu.c files with their corresponding
> > mmu-early.c files. Before doing that move functions which can't
> > be compiled for PBL out to extra files.
> >
> > Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
> > ---
> > arch/arm/cpu/Makefile | 1 +
> > arch/arm/cpu/dma_32.c | 20 ++++++++++++++++++++
> > arch/arm/cpu/dma_64.c | 16 ++++++++++++++++
> > arch/arm/cpu/mmu_32.c | 18 ------------------
> > arch/arm/cpu/mmu_64.c | 13 -------------
> > 5 files changed, 37 insertions(+), 31 deletions(-)
> > create mode 100644 arch/arm/cpu/dma_32.c
> > create mode 100644 arch/arm/cpu/dma_64.c
> >
> > diff --git a/arch/arm/cpu/Makefile b/arch/arm/cpu/Makefile
> > index fef2026da5..cd5f36eb49 100644
> > --- a/arch/arm/cpu/Makefile
> > +++ b/arch/arm/cpu/Makefile
> > @@ -4,6 +4,7 @@ obj-y += cpu.o
> >
> > obj-$(CONFIG_ARM_EXCEPTIONS) += exceptions_$(S64_32).o interrupts_$(S64_32).o
> > obj-$(CONFIG_MMU) += mmu_$(S64_32).o mmu-common.o
> > +obj-$(CONFIG_MMU) += dma_$(S64_32).o
> > obj-pbl-y += lowlevel_$(S64_32).o
> > obj-pbl-$(CONFIG_MMU) += mmu-early_$(S64_32).o
> > obj-pbl-$(CONFIG_CPU_32v7) += hyp.o
> > diff --git a/arch/arm/cpu/dma_32.c b/arch/arm/cpu/dma_32.c
> > new file mode 100644
> > index 0000000000..a66aa26b9b
> > --- /dev/null
> > +++ b/arch/arm/cpu/dma_32.c
> > @@ -0,0 +1,20 @@
> > +#include <dma.h>
> > +#include <asm/mmu.h>
> > +
> > +void dma_sync_single_for_device(dma_addr_t address, size_t size,
> > + enum dma_data_direction dir)
> > +{
> > + /*
> > + * FIXME: This function needs a device argument to support non 1:1 mappings
> > + */
> > +
> > + if (dir == DMA_FROM_DEVICE) {
> > + __dma_inv_range(address, address + size);
> > + if (outer_cache.inv_range)
> > + outer_cache.inv_range(address, address + size);
>
> I know this is unrelated to your series, but this is wrong. The outermost
> cache must be invalidated before L1. Otherwise we could have this
> unlucky sequence of events:
>
> - CPU is invalidating L1
> - HW prefetcher wants to load something into L1
> - Stale data in L2 is loaded into L1
> - Only now CPU invalidates L2
L1 is invalidated after the DMA transfer in dma_sync_single_for_cpu(),
so stale data in L1 shouldn't be a problem.
However, the prefetcher could cause stale entries in L2 during the DMA
transfer, so we have to invalidate that as well after the transfer.
Sascha
* Re: [PATCH 27/27] ARM: mmu64: Use two level pagetables in early code
2023-05-12 11:10 ` [PATCH 27/27] ARM: mmu64: Use two level pagetables in early code Sascha Hauer
@ 2023-05-16 10:55 ` Sascha Hauer
0 siblings, 0 replies; 41+ messages in thread
From: Sascha Hauer @ 2023-05-16 10:55 UTC (permalink / raw)
To: Barebox List
On Fri, May 12, 2023 at 01:10:08PM +0200, Sascha Hauer wrote:
> So far we used 1GiB sized sections in the early MMU setup. This has
> the disadvantage that we can't use the MMU in early code when we
> require a finer granularity. Rockchip for example keeps TF-A code
> in the lower memory, so up to now the code just skipped MMU
> initialization there. Also we can't properly map the OP-TEE space at
> the end of SDRAM as non-executable.
>
> With this patch we now use two level page tables and can map with 4KiB
> granularity.
>
> The MMU setup in barebox proper changes as well. Instead of disabling
> the MMU for reconfiguration we can now keep the MMU enabled and just
> add the mappings for SDRAM banks not known to the early code.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
> ---
> arch/arm/cpu/mmu_64.c | 97 ++++++++++---------------------------------
> 1 file changed, 21 insertions(+), 76 deletions(-)
>
> diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
> index 4b75be621d..3f9b52bbdb 100644
> --- a/arch/arm/cpu/mmu_64.c
> +++ b/arch/arm/cpu/mmu_64.c
> @@ -192,37 +193,25 @@ static void mmu_enable(void)
> void __mmu_init(bool mmu_on)
> {
> struct memory_bank *bank;
> - unsigned int el;
> -
> - if (mmu_on)
> - mmu_disable();
> -
> - ttb = alloc_pte();
> - el = current_el();
> - set_ttbr_tcr_mair(el, (uint64_t)ttb, calc_tcr(el, BITS_PER_VA),
> - MEMORY_ATTRIBUTES);
>
> - pr_debug("ttb: 0x%p\n", ttb);
> + reserve_sdram_region("OP-TEE", 0xf0000000 - OPTEE_SIZE, OPTEE_SIZE);
This line shouldn't be here. I only used that for testing.
Sascha
Thread overview: 41+ messages
2023-05-12 11:09 [PATCH 00/27] ARM: MMU rework Sascha Hauer
2023-05-12 11:09 ` [PATCH 01/27] ARM: fix scratch mem position with OP-TEE Sascha Hauer
2023-05-12 17:17 ` Ahmad Fatoum
2023-05-12 11:09 ` [PATCH 02/27] ARM: drop cache function initialization Sascha Hauer
2023-05-12 17:19 ` Ahmad Fatoum
2023-05-12 11:09 ` [PATCH 03/27] ARM: Add _32 suffix to aarch32 specific filenames Sascha Hauer
2023-05-12 17:21 ` Ahmad Fatoum
2023-05-12 11:09 ` [PATCH 04/27] ARM: cpu.c: remove unused include Sascha Hauer
2023-05-12 17:22 ` Ahmad Fatoum
2023-05-12 11:09 ` [PATCH 05/27] ARM: mmu-common.c: use common mmu include Sascha Hauer
2023-05-12 17:23 ` Ahmad Fatoum
2023-05-12 11:09 ` [PATCH 06/27] ARM: mmu32: rename mmu.h to mmu_32.h Sascha Hauer
2023-05-12 17:23 ` Ahmad Fatoum
2023-05-12 11:09 ` [PATCH 07/27] ARM: mmu: implement MAP_FAULT Sascha Hauer
2023-05-12 11:09 ` [PATCH 08/27] ARM: mmu64: Use arch_remap_range where possible Sascha Hauer
2023-05-12 17:40 ` Ahmad Fatoum
2023-05-12 11:09 ` [PATCH 09/27] ARM: mmu32: implement zero_page_*() Sascha Hauer
2023-05-12 11:09 ` [PATCH 10/27] ARM: i.MX: Drop HAB workaround Sascha Hauer
2023-05-12 18:09 ` Ahmad Fatoum
2023-05-16 8:23 ` Sascha Hauer
2023-05-12 11:09 ` [PATCH 11/27] ARM: Move early MMU after malloc initialization Sascha Hauer
2023-05-12 18:10 ` Ahmad Fatoum
2023-05-12 11:09 ` [PATCH 12/27] ARM: mmu: move dma_sync_single_for_device to extra file Sascha Hauer
2023-05-12 18:30 ` Ahmad Fatoum
2023-05-16 9:09 ` Sascha Hauer
2023-05-12 11:09 ` [PATCH 13/27] ARM: mmu: merge mmu-early_xx.c into mmu_xx.c Sascha Hauer
2023-05-12 11:09 ` [PATCH 14/27] ARM: mmu: alloc 64k for early page tables Sascha Hauer
2023-05-12 11:09 ` [PATCH 15/27] ARM: mmu32: create alloc_pte() Sascha Hauer
2023-05-12 11:09 ` [PATCH 16/27] ARM: mmu64: " Sascha Hauer
2023-05-12 11:09 ` [PATCH 17/27] ARM: mmu: drop ttb argument Sascha Hauer
2023-05-12 11:09 ` [PATCH 18/27] ARM: mmu: always do MMU initialization early when MMU is enabled Sascha Hauer
2023-05-12 11:10 ` [PATCH 19/27] ARM: mmu32: Assume MMU is on Sascha Hauer
2023-05-12 11:10 ` [PATCH 20/27] ARM: mmu32: Fix pmd_flags_to_pte() for ARMv4/5/6 Sascha Hauer
2023-05-12 11:10 ` [PATCH 21/27] ARM: mmu32: Add pte_flags_to_pmd() Sascha Hauer
2023-05-12 11:10 ` [PATCH 22/27] ARM: mmu32: add get_pte_flags, get_pmd_flags Sascha Hauer
2023-05-12 11:10 ` [PATCH 23/27] ARM: mmu32: move functions into c file Sascha Hauer
2023-05-12 11:10 ` [PATCH 24/27] ARM: mmu32: read TTB value from register Sascha Hauer
2023-05-12 11:10 ` [PATCH 25/27] ARM: mmu32: Use pages for early MMU setup Sascha Hauer
2023-05-12 11:10 ` [PATCH 26/27] ARM: mmu32: Skip reserved ranges during initialization Sascha Hauer
2023-05-12 11:10 ` [PATCH 27/27] ARM: mmu64: Use two level pagetables in early code Sascha Hauer
2023-05-16 10:55 ` Sascha Hauer