mail archive of the barebox mailing list
 help / color / mirror / Atom feed
* [PATCH v4 00/13] ARM: Map sections RO/XN
@ 2025-08-04 17:22 Ahmad Fatoum
  2025-08-04 17:22 ` [PATCH v4 01/13] mmu: explicitly map executable non-SDRAM regions with MAP_CODE Ahmad Fatoum
                   ` (13 more replies)
  0 siblings, 14 replies; 15+ messages in thread
From: Ahmad Fatoum @ 2025-08-04 17:22 UTC (permalink / raw)
  To: barebox

This series replaces 7 patches that are in next to fix a barebox hang
when used together with OP-TEE.

The root cause is that we need to consider both the reserved memory
entries and the text area at once; otherwise, mapping the non-reserved
regions cached would map the text area eXecute Never.
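
For illustration, here is a condensed sketch of the per-bank remapping
loop in question (the full version is part of the MMU code touched by
this series). Once MAP_CACHED implies eXecute Never, the last call can
cover the barebox text segment, e.g. when a reserved region sits just
below the end of RAM:

  for_each_memory_bank(bank) {
          pos = bank->start;

          /* Skip reserved regions */
          for_each_reserved_region(bank, rsv) {
                  remap_range((void *)pos, rsv->start - pos, MAP_CACHED);
                  pos = rsv->end + 1;
          }

          /*
           * With a reserved region just below the end of RAM, this span
           * includes the barebox text and would now map it eXecute Never.
           */
          remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
  }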

Changes in v4:
- skip TLB invalidation if remapping zero bytes
- share common memory bank remapping code
- fix reserved memory at end of RAM mapping barebox text
  eXecute Never
- add range helpers to make especially v4 code clearer
- pass map type not pte flags to early_remap_range

Changes in v3:
- rework create_sections() per Ahmad's comments
- mention CR_S bit and DOMAIN_CLIENT in commit message
- Link to v2: https://lore.barebox.org/20250617-mmu-xn-ro-v2-0-3c7aa9046b67@pengutronix.de

Changes in v2:
- Tested and fixed for ARMv5
- merge create_pages() and create_sections() into one function (ahmad)
- introduce function to create mapping flags based on CONFIG_ARM_MMU_PERMISSIONS
- Link to v1: https://lore.barebox.org/20250606-mmu-xn-ro-v1-0-7ee6ddd134d4@pengutronix.de

Ahmad Fatoum (8):
  mmu: explicitly map executable non-SDRAM regions with MAP_CODE
  ARM: mmu: skip TLB invalidation if remapping zero bytes
  ARM: mmu: provide setup_trap_pages for both 32- and 64-bit
  ARM: mmu: share common memory bank remapping code
  ARM: mmu: make mmu_remap_memory_banks clearer with helper
  partition: rename region_overlap_end to region_overlap_end_inclusive
  partition: define new region_overlap_end_exclusive helper
  ARM: mmu64: map text segment ro and data segments execute never

Sascha Hauer (5):
  ARM: pass barebox base to mmu_early_enable()
  ARM: mmu: move ARCH_MAP_WRITECOMBINE to header
  ARM: mmu: map memory for barebox proper pagewise
  ARM: mmu: map text segment ro and data segments execute never
  ARM: mmu64: map memory for barebox proper pagewise

 arch/arm/Kconfig                 |  12 ++++
 arch/arm/cpu/lowlevel_32.S       |   1 +
 arch/arm/cpu/mmu-common.c        |  69 +++++++++++++++++++++
 arch/arm/cpu/mmu-common.h        |  21 +++++++
 arch/arm/cpu/mmu_32.c            | 101 ++++++++++++++++++-------------
 arch/arm/cpu/mmu_64.c            |  88 ++++++++++++++++-----------
 arch/arm/cpu/uncompress.c        |   9 ++-
 arch/arm/include/asm/mmu.h       |   2 +-
 arch/arm/include/asm/pgtable64.h |   1 +
 arch/arm/lib32/barebox.lds.S     |   3 +-
 arch/arm/lib64/barebox.lds.S     |   5 +-
 arch/arm/mach-imx/romapi.c       |   3 +-
 commands/iomemport.c             |   2 +-
 common/memory.c                  |   7 ++-
 common/partitions.c              |   6 +-
 drivers/firmware/socfpga.c       |   4 ++
 drivers/hab/habv4.c              |   2 +-
 include/mmu.h                    |   1 +
 include/range.h                  |  30 +++++++--
 19 files changed, 268 insertions(+), 99 deletions(-)

-- 
2.39.5




^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH v4 01/13] mmu: explicitly map executable non-SDRAM regions with MAP_CODE
  2025-08-04 17:22 [PATCH v4 00/13] ARM: Map sections RO/XN Ahmad Fatoum
@ 2025-08-04 17:22 ` Ahmad Fatoum
  2025-08-04 17:22 ` [PATCH v4 02/13] ARM: pass barebox base to mmu_early_enable() Ahmad Fatoum
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Ahmad Fatoum @ 2025-08-04 17:22 UTC (permalink / raw)
  To: barebox; +Cc: Ahmad Fatoum

So far we have been setting eXecute Never on MAP_UNCACHED regions and
left it out for the default MAP_CACHED region.

We have at least three places that depend on this to remap non-SDRAM
regions executable, so ROM code or newly uploaded code can be run.

Switch them over to use a new MAP_CODE mapping type. For now, this is
equivalent to MAP_CACHED, but with the addition of W^X support in
barebox, this will be required to avoid a prefetch abort when MMU
attributes are used.
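
As a rough usage sketch (buf here is a made-up example buffer; the
socfpga hunk below follows this pattern):

  /* make the uploaded code cacheable and executable */
  remap_range(buf, PAGE_SIZE, MAP_CODE);
  sync_caches_for_execution();

  ((void (*)(void))buf)();	/* run it */

  /* drop execute permission again when done */
  remap_range(buf, PAGE_SIZE, MAP_UNCACHED);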

Signed-off-by: Ahmad Fatoum <a.fatoum@barebox.org>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Ahmad Fatoum <a.fatoum@barebox.org>
---
 arch/arm/mach-imx/romapi.c | 3 ++-
 drivers/firmware/socfpga.c | 4 ++++
 drivers/hab/habv4.c        | 2 +-
 include/mmu.h              | 1 +
 4 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/arm/mach-imx/romapi.c b/arch/arm/mach-imx/romapi.c
index 10af42f28a76..a4143d372ae8 100644
--- a/arch/arm/mach-imx/romapi.c
+++ b/arch/arm/mach-imx/romapi.c
@@ -299,7 +299,8 @@ void imx93_bootsource(void)
 		goto out;
 	}
 
-	arch_remap_range((void *)rom.start, rom.start, resource_size(&rom), MAP_CACHED);
+	/* TODO: restore uncached mapping once we no longer need this? */
+	arch_remap_range((void *)rom.start, rom.start, resource_size(&rom), MAP_CODE);
 
 	OPTIMIZER_HIDE_VAR(rom_api);
 
diff --git a/drivers/firmware/socfpga.c b/drivers/firmware/socfpga.c
index 0f7d11abb588..13ddbe32c30c 100644
--- a/drivers/firmware/socfpga.c
+++ b/drivers/firmware/socfpga.c
@@ -361,11 +361,15 @@ static int socfpga_fpgamgr_program_finish(struct firmware_handler *fh)
 			   &socfpga_sdram_apply_static_cfg,
 			   socfpga_sdram_apply_static_cfg_sz);
 
+	remap_range((void *)CYCLONE5_OCRAM_ADDRESS, PAGE_SIZE, MAP_CODE);
+
 	sync_caches_for_execution();
 
 	ocram_func((void __iomem *) (CYCLONE5_SDR_ADDRESS +
 				     SDR_CTRLGRP_STATICCFG_ADDRESS));
 
+	remap_range((void *)CYCLONE5_OCRAM_ADDRESS, PAGE_SIZE, MAP_UNCACHED);
+
 	return 0;
 }
 
diff --git a/drivers/hab/habv4.c b/drivers/hab/habv4.c
index 4945e8930acb..83ee06a6aa46 100644
--- a/drivers/hab/habv4.c
+++ b/drivers/hab/habv4.c
@@ -689,7 +689,7 @@ int imx8m_hab_print_status(void)
 
 int imx6_hab_print_status(void)
 {
-	remap_range(0x0, SZ_1M, MAP_CACHED);
+	remap_range(0x0, SZ_1M, MAP_CODE);
 
 	imx6_hab_get_status();
 
diff --git a/include/mmu.h b/include/mmu.h
index 84ec6c5efb3e..669959050194 100644
--- a/include/mmu.h
+++ b/include/mmu.h
@@ -8,6 +8,7 @@
 #define MAP_UNCACHED	0
 #define MAP_CACHED	1
 #define MAP_FAULT	2
+#define MAP_CODE	MAP_CACHED	/* until support added */
 
 /*
  * Depending on the architecture the default mapping can be
-- 
2.39.5




^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH v4 02/13] ARM: pass barebox base to mmu_early_enable()
  2025-08-04 17:22 [PATCH v4 00/13] ARM: Map sections RO/XN Ahmad Fatoum
  2025-08-04 17:22 ` [PATCH v4 01/13] mmu: explicitly map executable non-SDRAM regions with MAP_CODE Ahmad Fatoum
@ 2025-08-04 17:22 ` Ahmad Fatoum
  2025-08-04 17:22 ` [PATCH v4 03/13] ARM: mmu: move ARCH_MAP_WRITECOMBINE to header Ahmad Fatoum
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Ahmad Fatoum @ 2025-08-04 17:22 UTC (permalink / raw)
  To: barebox; +Cc: Ahmad Fatoum, Ahmad Fatoum

From: Sascha Hauer <s.hauer@pengutronix.de>

We'll need the barebox base in the next patches to map the barebox
area differently.

Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Ahmad Fatoum <a.fatoum@barebox.org>
---
 arch/arm/cpu/mmu_32.c      | 2 +-
 arch/arm/cpu/mmu_64.c      | 2 +-
 arch/arm/cpu/uncompress.c  | 9 ++++-----
 arch/arm/include/asm/mmu.h | 2 +-
 4 files changed, 7 insertions(+), 8 deletions(-)

diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 40462f6fa5cf..2c2144327380 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -614,7 +614,7 @@ void *dma_alloc_writecombine(struct device *dev, size_t size, dma_addr_t *dma_ha
 	return dma_alloc_map(dev, size, dma_handle, ARCH_MAP_WRITECOMBINE);
 }
 
-void mmu_early_enable(unsigned long membase, unsigned long memsize)
+void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned long barebox_start)
 {
 	uint32_t *ttb = (uint32_t *)arm_mem_ttb(membase + memsize);
 
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index d72ff020f485..cb0803400cfd 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -435,7 +435,7 @@ static void init_range(size_t total_level0_tables)
 	}
 }
 
-void mmu_early_enable(unsigned long membase, unsigned long memsize)
+void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned long barebox_start)
 {
 	int el;
 	u64 optee_membase;
diff --git a/arch/arm/cpu/uncompress.c b/arch/arm/cpu/uncompress.c
index 4529ef5e3821..b9fc1d04db96 100644
--- a/arch/arm/cpu/uncompress.c
+++ b/arch/arm/cpu/uncompress.c
@@ -65,11 +65,6 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
 
 	arm_pbl_init_exceptions();
 
-	if (IS_ENABLED(CONFIG_MMU))
-		mmu_early_enable(membase, memsize);
-	else if (IS_ENABLED(CONFIG_ARMV7R_MPU))
-		set_cr(get_cr() | CR_C);
-
 	/* Add handoff data now, so arm_mem_barebox_image takes it into account */
 	if (boarddata)
 		handoff_data_add_dt(boarddata);
@@ -85,6 +80,10 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
 #ifdef DEBUG
 	print_pbl_mem_layout(membase, endmem, barebox_base);
 #endif
+	if (IS_ENABLED(CONFIG_MMU))
+		mmu_early_enable(membase, memsize, barebox_base);
+	else if (IS_ENABLED(CONFIG_ARMV7R_MPU))
+		set_cr(get_cr() | CR_C);
 
 	pr_debug("uncompressing barebox binary at 0x%p (size 0x%08x) to 0x%08lx (uncompressed size: 0x%08x)\n",
 			pg_start, pg_len, barebox_base, uncompressed_len);
diff --git a/arch/arm/include/asm/mmu.h b/arch/arm/include/asm/mmu.h
index ebf1e096c682..5538cd3558e8 100644
--- a/arch/arm/include/asm/mmu.h
+++ b/arch/arm/include/asm/mmu.h
@@ -64,7 +64,7 @@ void __dma_clean_range(unsigned long, unsigned long);
 void __dma_flush_range(unsigned long, unsigned long);
 void __dma_inv_range(unsigned long, unsigned long);
 
-void mmu_early_enable(unsigned long membase, unsigned long memsize);
+void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned long barebox_base);
 void mmu_early_disable(void);
 
 #endif /* __ASM_MMU_H */
-- 
2.39.5




^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH v4 03/13] ARM: mmu: move ARCH_MAP_WRITECOMBINE to header
  2025-08-04 17:22 [PATCH v4 00/13] ARM: Map sections RO/XN Ahmad Fatoum
  2025-08-04 17:22 ` [PATCH v4 01/13] mmu: explicitly map executable non-SDRAM regions with MAP_CODE Ahmad Fatoum
  2025-08-04 17:22 ` [PATCH v4 02/13] ARM: pass barebox base to mmu_early_enable() Ahmad Fatoum
@ 2025-08-04 17:22 ` Ahmad Fatoum
  2025-08-04 17:22 ` [PATCH v4 04/13] ARM: mmu: map memory for barebox proper pagewise Ahmad Fatoum
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Ahmad Fatoum @ 2025-08-04 17:22 UTC (permalink / raw)
  To: barebox; +Cc: Ahmad Fatoum, Ahmad Fatoum

From: Sascha Hauer <s.hauer@pengutronix.de>

ARCH_MAP_WRITECOMBINE is defined equally for both mmu_32 and mmu_64 and
we'll add more mapping types later, so move it to a header file to be
shared by both mmu implementations.

Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Ahmad Fatoum <a.fatoum@barebox.org>
---
 arch/arm/cpu/mmu-common.h | 2 ++
 arch/arm/cpu/mmu_32.c     | 1 -
 arch/arm/cpu/mmu_64.c     | 2 --
 3 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/arm/cpu/mmu-common.h b/arch/arm/cpu/mmu-common.h
index d0b50662570a..0f11a4b73d11 100644
--- a/arch/arm/cpu/mmu-common.h
+++ b/arch/arm/cpu/mmu-common.h
@@ -9,6 +9,8 @@
 #include <linux/kernel.h>
 #include <linux/sizes.h>
 
+#define ARCH_MAP_WRITECOMBINE	((unsigned)-1)
+
 struct device;
 
 void dma_inv_range(void *ptr, size_t size);
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 2c2144327380..104780ff6b98 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -23,7 +23,6 @@
 #include "mmu_32.h"
 
 #define PTRS_PER_PTE		(PGDIR_SIZE / PAGE_SIZE)
-#define ARCH_MAP_WRITECOMBINE	((unsigned)-1)
 
 static inline uint32_t *get_ttb(void)
 {
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index cb0803400cfd..121dd136af33 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -24,8 +24,6 @@
 
 #include "mmu_64.h"
 
-#define ARCH_MAP_WRITECOMBINE  ((unsigned)-1)
-
 static uint64_t *get_ttb(void)
 {
 	return (uint64_t *)get_ttbr(current_el());
-- 
2.39.5




^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH v4 04/13] ARM: mmu: map memory for barebox proper pagewise
  2025-08-04 17:22 [PATCH v4 00/13] ARM: Map sections RO/XN Ahmad Fatoum
                   ` (2 preceding siblings ...)
  2025-08-04 17:22 ` [PATCH v4 03/13] ARM: mmu: move ARCH_MAP_WRITECOMBINE to header Ahmad Fatoum
@ 2025-08-04 17:22 ` Ahmad Fatoum
  2025-08-04 17:22 ` [PATCH v4 05/13] ARM: mmu: skip TLB invalidation if remapping zero bytes Ahmad Fatoum
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Ahmad Fatoum @ 2025-08-04 17:22 UTC (permalink / raw)
  To: barebox; +Cc: Ahmad Fatoum, Ahmad Fatoum

From: Sascha Hauer <s.hauer@pengutronix.de>

Map the remainder of the memory explicitly with two-level page tables. This is
the area where barebox proper ends up. In barebox proper we'll remap the code
segments readonly/executable and the ro segments readonly/execute never. For
this, we need the memory to be mapped pagewise. We can't do the split from
section-wise mapping to pagewise mapping later, because that would require a
break-before-make sequence, which we can't do while barebox proper is running
at the location being remapped.
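
Assuming OP-TEE at the end of RAM, the resulting early mapping looks
roughly like this:

  membase          barebox_start           optee_start     membase + memsize
     |                   |                      |                  |
     |  MAP_CACHED       |  MAP_CACHED          |  MAP_UNCACHED    |
     |  (sections)       |  (pagewise)          |  (OPTEE_SIZE)    |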

Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Ahmad Fatoum <a.fatoum@barebox.org>
---
 arch/arm/cpu/mmu_32.c | 37 +++++++++++++++++++++++++++++--------
 1 file changed, 29 insertions(+), 8 deletions(-)

diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 104780ff6b98..b21fc75f0ceb 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -247,7 +247,8 @@ static uint32_t get_pmd_flags(int map_type)
 	return pte_flags_to_pmd(get_pte_flags(map_type));
 }
 
-static void __arch_remap_range(void *_virt_addr, phys_addr_t phys_addr, size_t size, unsigned map_type)
+static void __arch_remap_range(void *_virt_addr, phys_addr_t phys_addr, size_t size,
+			       unsigned map_type, bool force_pages)
 {
 	u32 virt_addr = (u32)_virt_addr;
 	u32 pte_flags, pmd_flags;
@@ -268,7 +269,7 @@ static void __arch_remap_range(void *_virt_addr, phys_addr_t phys_addr, size_t s
 
 		if (size >= PGDIR_SIZE && pgdir_size_aligned &&
 		    IS_ALIGNED(phys_addr, PGDIR_SIZE) &&
-		    !pgd_type_table(*pgd)) {
+		    !pgd_type_table(*pgd) && !force_pages) {
 			u32 val;
 			/*
 			 * TODO: Add code to discard a page table and
@@ -339,14 +340,15 @@ static void __arch_remap_range(void *_virt_addr, phys_addr_t phys_addr, size_t s
 
 	tlb_invalidate();
 }
-static void early_remap_range(u32 addr, size_t size, unsigned map_type)
+
+static void early_remap_range(u32 addr, size_t size, unsigned map_type, bool force_pages)
 {
-	__arch_remap_range((void *)addr, addr, size, map_type);
+	__arch_remap_range((void *)addr, addr, size, map_type, force_pages);
 }
 
 int arch_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size, unsigned map_type)
 {
-	__arch_remap_range(virt_addr, phys_addr, size, map_type);
+	__arch_remap_range(virt_addr, phys_addr, size, map_type, false);
 
 	if (map_type == MAP_UNCACHED)
 		dma_inv_range(virt_addr, size);
@@ -616,6 +618,7 @@ void *dma_alloc_writecombine(struct device *dev, size_t size, dma_addr_t *dma_ha
 void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned long barebox_start)
 {
 	uint32_t *ttb = (uint32_t *)arm_mem_ttb(membase + memsize);
+	unsigned long barebox_size, optee_start;
 
 	pr_debug("enabling MMU, ttb @ 0x%p\n", ttb);
 
@@ -637,9 +640,27 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
 	create_flat_mapping();
 
 	/* maps main memory as cachable */
-	early_remap_range(membase, memsize - OPTEE_SIZE, MAP_CACHED);
-	early_remap_range(membase + memsize - OPTEE_SIZE, OPTEE_SIZE, MAP_UNCACHED);
-	early_remap_range(PAGE_ALIGN_DOWN((uintptr_t)_stext), PAGE_ALIGN(_etext - _stext), MAP_CACHED);
+	optee_start = membase + memsize - OPTEE_SIZE;
+	barebox_size = optee_start - barebox_start;
+
+	/*
+	 * map the bulk of the memory as sections to avoid allocating too many page tables
+	 * at this early stage
+	 */
+	early_remap_range(membase, barebox_start - membase, MAP_CACHED, false);
+	/*
+	 * Map the remainder of the memory explicitly with two level page tables. This is
+	 * the place where barebox proper ends at. In barebox proper we'll remap the code
+	 * segments readonly/executable and the ro segments readonly/execute never. For this
+	 * we need the memory being mapped pagewise. We can't do the split up from section
+	 * wise mapping to pagewise mapping later because that would require us to do
+	 * a break-before-make sequence which we can't do when barebox proper is running
+	 * at the location being remapped.
+	 */
+	early_remap_range(barebox_start, barebox_size, MAP_CACHED, true);
+	early_remap_range(optee_start, OPTEE_SIZE, MAP_UNCACHED, false);
+	early_remap_range(PAGE_ALIGN_DOWN((uintptr_t)_stext), PAGE_ALIGN(_etext - _stext),
+			  MAP_CACHED, false);
 
 	__mmu_cache_on();
 }
-- 
2.39.5




^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH v4 05/13] ARM: mmu: skip TLB invalidation if remapping zero bytes
  2025-08-04 17:22 [PATCH v4 00/13] ARM: Map sections RO/XN Ahmad Fatoum
                   ` (3 preceding siblings ...)
  2025-08-04 17:22 ` [PATCH v4 04/13] ARM: mmu: map memory for barebox proper pagewise Ahmad Fatoum
@ 2025-08-04 17:22 ` Ahmad Fatoum
  2025-08-04 17:22 ` [PATCH v4 06/13] ARM: mmu: provide setup_trap_pages for both 32- and 64-bit Ahmad Fatoum
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Ahmad Fatoum @ 2025-08-04 17:22 UTC (permalink / raw)
  To: barebox; +Cc: Ahmad Fatoum

From: Ahmad Fatoum <a.fatoum@pengutronix.de>

The loop that remaps memory banks can end up calling remap_range with
zero size, when a reserved region is at the very start of the memory
bank.

This is handled correctly by the code, but it still performs an unnecessary
invalidation of the whole TLB. Let's exit early instead to skip that.
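
For example, with a reserved region starting right at the beginning of
a bank, the loop effectively does (sketch):

  pos = bank->start;
  /* rsv->start == bank->start, so this remaps zero bytes ... */
  remap_range((void *)pos, rsv->start - pos, MAP_CACHED);
  /* ... but so far still paid for a full TLB invalidation */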

Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Ahmad Fatoum <a.fatoum@barebox.org>
---
 arch/arm/cpu/mmu_32.c | 2 ++
 arch/arm/cpu/mmu_64.c | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index b21fc75f0ceb..80e302596890 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -261,6 +261,8 @@ static void __arch_remap_range(void *_virt_addr, phys_addr_t phys_addr, size_t s
 	pmd_flags = pte_flags_to_pmd(pte_flags);
 
 	size = PAGE_ALIGN(size);
+	if (!size)
+		return;
 
 	while (size) {
 		const bool pgdir_size_aligned = IS_ALIGNED(virt_addr, PGDIR_SIZE);
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index 121dd136af33..db312daafdd2 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -145,6 +145,8 @@ static void create_sections(uint64_t virt, uint64_t phys, uint64_t size,
 	attr &= ~PTE_TYPE_MASK;
 
 	size = PAGE_ALIGN(size);
+	if (!size)
+		return;
 
 	while (size) {
 		table = ttb;
-- 
2.39.5




^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH v4 06/13] ARM: mmu: provide setup_trap_pages for both 32- and 64-bit
  2025-08-04 17:22 [PATCH v4 00/13] ARM: Map sections RO/XN Ahmad Fatoum
                   ` (4 preceding siblings ...)
  2025-08-04 17:22 ` [PATCH v4 05/13] ARM: mmu: skip TLB invalidation if remapping zero bytes Ahmad Fatoum
@ 2025-08-04 17:22 ` Ahmad Fatoum
  2025-08-04 17:22 ` [PATCH v4 07/13] ARM: mmu: share common memory bank remapping code Ahmad Fatoum
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Ahmad Fatoum @ 2025-08-04 17:22 UTC (permalink / raw)
  To: barebox; +Cc: Ahmad Fatoum

From: Ahmad Fatoum <a.fatoum@pengutronix.de>

In preparation for moving the remapping of memory banks into the common
code, rename vectors_init to setup_trap_pages and export it for both
32-bit and 64-bit ARM, so it can be called from the common code as well.

This needs to happen after the remapping, because otherwise the trap
pages would be remapped cached again by the SDRAM remapping loop.
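
The intended ordering in the common mmu_init() thus becomes (sketch, see
the mmu-common.c hunk below):

  __mmu_init(get_cr() & CR_M);	/* remaps the memory banks cached */
  setup_trap_pages();		/* zero page faulting + guard page */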

Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Ahmad Fatoum <a.fatoum@barebox.org>
---
 arch/arm/cpu/mmu-common.c |  2 ++
 arch/arm/cpu/mmu-common.h |  1 +
 arch/arm/cpu/mmu_32.c     |  4 +---
 arch/arm/cpu/mmu_64.c     | 12 ++++++++----
 4 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/arch/arm/cpu/mmu-common.c b/arch/arm/cpu/mmu-common.c
index f3416ae7f7ca..a55dce72a22d 100644
--- a/arch/arm/cpu/mmu-common.c
+++ b/arch/arm/cpu/mmu-common.c
@@ -95,6 +95,8 @@ static int mmu_init(void)
 
 	__mmu_init(get_cr() & CR_M);
 
+	setup_trap_pages();
+
 	return 0;
 }
 mmu_initcall(mmu_init);
diff --git a/arch/arm/cpu/mmu-common.h b/arch/arm/cpu/mmu-common.h
index 0f11a4b73d11..8d90da8c86fe 100644
--- a/arch/arm/cpu/mmu-common.h
+++ b/arch/arm/cpu/mmu-common.h
@@ -16,6 +16,7 @@ struct device;
 void dma_inv_range(void *ptr, size_t size);
 void dma_flush_range(void *ptr, size_t size);
 void *dma_alloc_map(struct device *dev, size_t size, dma_addr_t *dma_handle, unsigned flags);
+void setup_trap_pages(void);
 void __mmu_init(bool mmu_on);
 
 static inline void arm_mmu_not_initialized_error(void)
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 80e302596890..3572fa70d13a 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -518,7 +518,7 @@ static void create_guard_page(void)
 /*
  * Map vectors and zero page
  */
-static void vectors_init(void)
+void setup_trap_pages(void)
 {
 	create_guard_page();
 
@@ -595,8 +595,6 @@ void __mmu_init(bool mmu_on)
 
 		remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
 	}
-
-	vectors_init();
 }
 
 /*
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index db312daafdd2..ba82528990fe 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -345,6 +345,14 @@ static void create_guard_page(void)
 	pr_debug("Created guard page\n");
 }
 
+void setup_trap_pages(void)
+{
+	/* Vectors are already registered by aarch64_init_vectors */
+	/* Make zero page faulting to catch NULL pointer derefs */
+	zero_page_faulting();
+	create_guard_page();
+}
+
 /*
  * Prepare MMU for usage enable it.
  */
@@ -380,10 +388,6 @@ void __mmu_init(bool mmu_on)
 
 		remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
 	}
-
-	/* Make zero page faulting to catch NULL pointer derefs */
-	zero_page_faulting();
-	create_guard_page();
 }
 
 void mmu_disable(void)
-- 
2.39.5




^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH v4 07/13] ARM: mmu: share common memory bank remapping code
  2025-08-04 17:22 [PATCH v4 00/13] ARM: Map sections RO/XN Ahmad Fatoum
                   ` (5 preceding siblings ...)
  2025-08-04 17:22 ` [PATCH v4 06/13] ARM: mmu: provide setup_trap_pages for both 32- and 64-bit Ahmad Fatoum
@ 2025-08-04 17:22 ` Ahmad Fatoum
  2025-08-04 17:22 ` [PATCH v4 08/13] ARM: mmu: make mmu_remap_memory_banks clearer with helper Ahmad Fatoum
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Ahmad Fatoum @ 2025-08-04 17:22 UTC (permalink / raw)
  To: barebox; +Cc: Ahmad Fatoum

From: Ahmad Fatoum <a.fatoum@pengutronix.de>

The code is identical between ARM32 and 64 and is going to get more
complex with the addition of finer grained MMU permissions.

Let's move it to a common code file in anticipation.

Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Ahmad Fatoum <a.fatoum@barebox.org>
---
 arch/arm/cpu/mmu-common.c | 31 +++++++++++++++++++++++++++++--
 arch/arm/cpu/mmu_32.c     | 22 ----------------------
 arch/arm/cpu/mmu_64.c     | 16 ----------------
 3 files changed, 29 insertions(+), 40 deletions(-)

diff --git a/arch/arm/cpu/mmu-common.c b/arch/arm/cpu/mmu-common.c
index a55dce72a22d..85cb7cb007b9 100644
--- a/arch/arm/cpu/mmu-common.c
+++ b/arch/arm/cpu/mmu-common.c
@@ -69,6 +69,34 @@ void zero_page_faulting(void)
 	remap_range(0x0, PAGE_SIZE, MAP_FAULT);
 }
 
+static void mmu_remap_memory_banks(void)
+{
+	struct memory_bank *bank;
+
+	/*
+	 * Early mmu init will have mapped everything but the initial memory area
+	 * (excluding final OPTEE_SIZE bytes) uncached. We have now discovered
+	 * all memory banks, so let's map all pages, excluding reserved memory areas,
+	 * cacheable and executable.
+	 */
+	for_each_memory_bank(bank) {
+		struct resource *rsv;
+		resource_size_t pos;
+
+		pos = bank->start;
+
+		/* Skip reserved regions */
+		for_each_reserved_region(bank, rsv) {
+			remap_range((void *)pos, rsv->start - pos, MAP_CACHED);
+			pos = rsv->end + 1;
+		}
+
+		remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
+	}
+
+	setup_trap_pages();
+}
+
 static int mmu_init(void)
 {
 	if (efi_is_payload())
@@ -94,8 +122,7 @@ static int mmu_init(void)
 	}
 
 	__mmu_init(get_cr() & CR_M);
-
-	setup_trap_pages();
+	mmu_remap_memory_banks();
 
 	return 0;
 }
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 3572fa70d13a..080e55a7ced6 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -555,7 +555,6 @@ void setup_trap_pages(void)
  */
 void __mmu_init(bool mmu_on)
 {
-	struct memory_bank *bank;
 	uint32_t *ttb = get_ttb();
 
 	// TODO: remap writable only while remapping?
@@ -574,27 +573,6 @@ void __mmu_init(bool mmu_on)
 					ttb);
 
 	pr_debug("ttb: 0x%p\n", ttb);
-
-	/*
-	 * Early mmu init will have mapped everything but the initial memory area
-	 * (excluding final OPTEE_SIZE bytes) uncached. We have now discovered
-	 * all memory banks, so let's map all pages, excluding reserved memory areas,
-	 * cacheable and executable.
-	 */
-	for_each_memory_bank(bank) {
-		struct resource *rsv;
-		resource_size_t pos;
-
-		pos = bank->start;
-
-		/* Skip reserved regions */
-		for_each_reserved_region(bank, rsv) {
-			remap_range((void *)pos, rsv->start - pos, MAP_CACHED);
-			pos = rsv->end + 1;
-		}
-
-		remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
-	}
 }
 
 /*
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index ba82528990fe..54d4a4e9c638 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -359,7 +359,6 @@ void setup_trap_pages(void)
 void __mmu_init(bool mmu_on)
 {
 	uint64_t *ttb = get_ttb();
-	struct memory_bank *bank;
 
 	// TODO: remap writable only while remapping?
 	// TODO: What memtype for ttb when barebox is EFI loader?
@@ -373,21 +372,6 @@ void __mmu_init(bool mmu_on)
 		 *   the ttb will get corrupted.
 		 */
 		pr_crit("Can't request SDRAM region for ttb at %p\n", ttb);
-
-	for_each_memory_bank(bank) {
-		struct resource *rsv;
-		resource_size_t pos;
-
-		pos = bank->start;
-
-		/* Skip reserved regions */
-		for_each_reserved_region(bank, rsv) {
-			remap_range((void *)pos, rsv->start - pos, MAP_CACHED);
-			pos = rsv->end + 1;
-		}
-
-		remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
-	}
 }
 
 void mmu_disable(void)
-- 
2.39.5




^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH v4 08/13] ARM: mmu: make mmu_remap_memory_banks clearer with helper
  2025-08-04 17:22 [PATCH v4 00/13] ARM: Map sections RO/XN Ahmad Fatoum
                   ` (6 preceding siblings ...)
  2025-08-04 17:22 ` [PATCH v4 07/13] ARM: mmu: share common memory bank remapping code Ahmad Fatoum
@ 2025-08-04 17:22 ` Ahmad Fatoum
  2025-08-04 17:22 ` [PATCH v4 09/13] partition: rename region_overlap_end to region_overlap_end_inclusive Ahmad Fatoum
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Ahmad Fatoum @ 2025-08-04 17:22 UTC (permalink / raw)
  To: barebox; +Cc: Ahmad Fatoum

From: Ahmad Fatoum <a.fatoum@pengutronix.de>

remap_range() operates on regions identified by start and size, but in
mmu_remap_memory_banks(), we operate on the (exclusive) region end instead.

Make the code easier to reason about by using a remap_range_end()
helper. This is intentionally not exported, as functions that operate on
an end are error-prone with respect to whether the range end is
inclusive or exclusive.

Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Ahmad Fatoum <a.fatoum@barebox.org>
---
 arch/arm/cpu/mmu-common.c | 17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/arch/arm/cpu/mmu-common.c b/arch/arm/cpu/mmu-common.c
index 85cb7cb007b9..4c30f98cbd79 100644
--- a/arch/arm/cpu/mmu-common.c
+++ b/arch/arm/cpu/mmu-common.c
@@ -69,6 +69,19 @@ void zero_page_faulting(void)
 	remap_range(0x0, PAGE_SIZE, MAP_FAULT);
 }
 
+/**
+ * remap_range_end - remap a range identified by [start, end)
+ *
+ * @start:    start of the range
+ * @end:      end of the range (exclusive)
+ * @map_type: mapping type to apply
+ */
+static inline void remap_range_end(unsigned long start, unsigned long end,
+				   unsigned map_type)
+{
+	remap_range((void *)start, end - start, map_type);
+}
+
 static void mmu_remap_memory_banks(void)
 {
 	struct memory_bank *bank;
@@ -87,11 +100,11 @@ static void mmu_remap_memory_banks(void)
 
 		/* Skip reserved regions */
 		for_each_reserved_region(bank, rsv) {
-			remap_range((void *)pos, rsv->start - pos, MAP_CACHED);
+			remap_range_end(pos, rsv->start, MAP_CACHED);
 			pos = rsv->end + 1;
 		}
 
-		remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
+		remap_range_end(pos, bank->start + bank->size, MAP_CACHED);
 	}
 
 	setup_trap_pages();
-- 
2.39.5




^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH v4 09/13] partition: rename region_overlap_end to region_overlap_end_inclusive
  2025-08-04 17:22 [PATCH v4 00/13] ARM: Map sections RO/XN Ahmad Fatoum
                   ` (7 preceding siblings ...)
  2025-08-04 17:22 ` [PATCH v4 08/13] ARM: mmu: make mmu_remap_memory_banks clearer with helper Ahmad Fatoum
@ 2025-08-04 17:22 ` Ahmad Fatoum
  2025-08-04 17:22 ` [PATCH v4 10/13] partition: define new region_overlap_end_exclusive helper Ahmad Fatoum
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Ahmad Fatoum @ 2025-08-04 17:22 UTC (permalink / raw)
  To: barebox; +Cc: Ahmad Fatoum

From: Ahmad Fatoum <a.fatoum@pengutronix.de>

While the new name is quite verbose, off-by-one errors are very annoying,
so it makes sense to be very explicit about the expected input. Rename the
function in preparation for adding region_overlap_end_exclusive.
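
A hypothetical caller makes the pitfall explicit: for a region described
by start and size, the inclusive end is start + size - 1 (part_start and
part_end below stand in for a partition's inclusive bounds):

  if (region_overlap_end_inclusive(start, start + size - 1,
				   part_start, part_end))
	  /* regions overlap */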

Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Ahmad Fatoum <a.fatoum@barebox.org>
---
 commands/iomemport.c |  2 +-
 common/partitions.c  |  6 +++---
 include/range.h      | 10 +++++-----
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/commands/iomemport.c b/commands/iomemport.c
index bb546e4a3ad9..d7c960d15b41 100644
--- a/commands/iomemport.c
+++ b/commands/iomemport.c
@@ -21,7 +21,7 @@ static void __print_resources(struct resource *res, int indent,
 	resource_size_t size = resource_size(res);
 	int i;
 
-	if (addr && !region_overlap_end(*addr, *addr, res->start, res->end))
+	if (addr && !region_overlap_end_inclusive(*addr, *addr, res->start, res->end))
 		return;
 
 	if ((flags & FLAG_VERBOSE) && !(flags & FLAG_IOPORT))
diff --git a/common/partitions.c b/common/partitions.c
index fdead333a40a..7563cb0e6767 100644
--- a/common/partitions.c
+++ b/common/partitions.c
@@ -251,9 +251,9 @@ int partition_create(struct partition_desc *pdesc, const char *name,
 	}
 
 	list_for_each_entry(part, &pdesc->partitions, list) {
-		if (region_overlap_end(part->first_sec,
-				       part->first_sec + part->size - 1,
-				       lba_start, lba_end)) {
+		if (region_overlap_end_inclusive(part->first_sec,
+						 part->first_sec + part->size - 1,
+						 lba_start, lba_end)) {
 			pr_err("new partition %llu-%llu overlaps with partition %s (%llu-%llu)\n",
 			       lba_start, lba_end, part->name, part->first_sec,
 				part->first_sec + part->size - 1);
diff --git a/include/range.h b/include/range.h
index f72849044d0b..f1b75f5fafd8 100644
--- a/include/range.h
+++ b/include/range.h
@@ -5,15 +5,15 @@
 #include <linux/types.h>
 
 /**
- * region_overlap_end - check whether a pair of [start, end] ranges overlap
+ * region_overlap_end_inclusive - check whether a pair of [start, end] ranges overlap
  *
  * @starta: start of the first range
  * @enda:   end of the first range (inclusive)
  * @startb: start of the second range
  * @endb:   end of the second range (inclusive)
  */
-static inline bool region_overlap_end(u64 starta, u64 enda,
-				      u64 startb, u64 endb)
+static inline bool region_overlap_end_inclusive(u64 starta, u64 enda,
+						u64 startb, u64 endb)
 {
 	if (enda < startb)
 		return false;
@@ -36,8 +36,8 @@ static inline bool region_overlap_size(u64 starta, u64 lena,
 	if (!lena || !lenb)
 		return false;
 
-	return region_overlap_end(starta, starta + lena - 1,
-				  startb, startb + lenb - 1);
+	return region_overlap_end_inclusive(starta, starta + lena - 1,
+					    startb, startb + lenb - 1);
 }
 
 /**
-- 
2.39.5




^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH v4 10/13] partition: define new region_overlap_end_exclusive helper
  2025-08-04 17:22 [PATCH v4 00/13] ARM: Map sections RO/XN Ahmad Fatoum
                   ` (8 preceding siblings ...)
  2025-08-04 17:22 ` [PATCH v4 09/13] partition: rename region_overlap_end to region_overlap_end_inclusive Ahmad Fatoum
@ 2025-08-04 17:22 ` Ahmad Fatoum
  2025-08-04 17:22 ` [PATCH v4 11/13] ARM: mmu: map text segment ro and data segments execute never Ahmad Fatoum
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Ahmad Fatoum @ 2025-08-04 17:22 UTC (permalink / raw)
  To: barebox; +Cc: Ahmad Fatoum

From: Ahmad Fatoum <a.fatoum@pengutronix.de>

Most ranges in barebox that are identified by start and end offsets are
exclusive at the end, for example the linker symbols for the sections.

To make handling them a bit easier while avoiding off-by-one errors, let's
add a region_overlap_end_exclusive helper as well.

Unlike inclusive ranges, exclusive ranges can be empty, so add a check that
treats empty (or otherwise invalid) ranges as non-overlapping.
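
A sketch of the intended use with [start, end) linker-symbol style
ranges, mirroring what a later patch does for the barebox text segment:

  unsigned long text_start = (unsigned long)&_stext;
  unsigned long text_end = (unsigned long)&_etext;

  if (region_overlap_end_exclusive(start, end, text_start, text_end))
	  /* range [start, end) intersects the text segment */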

Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Ahmad Fatoum <a.fatoum@barebox.org>
---
 include/range.h | 24 ++++++++++++++++++++++--
 1 file changed, 22 insertions(+), 2 deletions(-)

diff --git a/include/range.h b/include/range.h
index f1b75f5fafd8..eaeaaad3825c 100644
--- a/include/range.h
+++ b/include/range.h
@@ -22,6 +22,26 @@ static inline bool region_overlap_end_inclusive(u64 starta, u64 enda,
 	return true;
 }
 
+/**
+ * region_overlap_end_exclusive - check whether a pair of [start, end) ranges overlap
+ *
+ * @starta: start of the first range
+ * @enda:   end of the first range (exclusive)
+ * @startb: start of the second range
+ * @endb:   end of the second range (exclusive)
+ */
+static inline bool region_overlap_end_exclusive(u64 starta, u64 enda,
+						u64 startb, u64 endb)
+{
+	/* Empty ranges don't overlap */
+	if (starta >= enda || startb >= endb)
+		return false;
+
+	return region_overlap_end_inclusive(starta, enda - 1,
+					    startb, endb - 1);
+}
+
+
 /**
  * region_overlap_end - check whether a pair of [start, end] ranges overlap
  *
@@ -36,8 +56,8 @@ static inline bool region_overlap_size(u64 starta, u64 lena,
 	if (!lena || !lenb)
 		return false;
 
-	return region_overlap_end_inclusive(starta, starta + lena - 1,
-					    startb, startb + lenb - 1);
+	return region_overlap_end_exclusive(starta, starta + lena,
+					    startb, startb + lenb);
 }
 
 /**
-- 
2.39.5




^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH v4 11/13] ARM: mmu: map text segment ro and data segments execute never
  2025-08-04 17:22 [PATCH v4 00/13] ARM: Map sections RO/XN Ahmad Fatoum
                   ` (9 preceding siblings ...)
  2025-08-04 17:22 ` [PATCH v4 10/13] partition: define new region_overlap_end_exclusive helper Ahmad Fatoum
@ 2025-08-04 17:22 ` Ahmad Fatoum
  2025-08-04 17:22 ` [PATCH v4 12/13] ARM: mmu64: map memory for barebox proper pagewise Ahmad Fatoum
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Ahmad Fatoum @ 2025-08-04 17:22 UTC (permalink / raw)
  To: barebox; +Cc: Ahmad Fatoum, Ahmad Fatoum

From: Sascha Hauer <s.hauer@pengutronix.de>

With this all segments in the DRAM except the text segment are mapped
execute-never so that only the barebox code can actually be executed.
Also map the readonly data segment readonly so that it can't be
modified.

The mapping is only implemented in barebox proper. The PBL still maps
the whole DRAM rwx.

To make the protection work on ARMv5, we have to set the CR_S bit and
also have to use DOMAIN_CLIENT, as already done on ARMv7.
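
The resulting mapping of barebox proper (sketch; the PBL mapping is
unchanged):

  text          -> MAP_CODE            (cached, read-only, executable)
  rodata        -> ARCH_MAP_CACHED_RO  (cached, read-only, execute never)
  data, bss and
  rest of SDRAM -> MAP_CACHED          (cached, read-write, execute never)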

Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Ahmad Fatoum <a.fatoum@barebox.org>
---
 arch/arm/Kconfig             | 13 ++++++++++++
 arch/arm/cpu/lowlevel_32.S   |  1 +
 arch/arm/cpu/mmu-common.c    | 38 +++++++++++++++++++++++++++++++----
 arch/arm/cpu/mmu-common.h    | 18 +++++++++++++++++
 arch/arm/cpu/mmu_32.c        | 39 +++++++++++++++++++++++++++---------
 arch/arm/lib32/barebox.lds.S |  3 ++-
 common/memory.c              |  7 ++++++-
 include/mmu.h                |  2 +-
 8 files changed, 104 insertions(+), 17 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 9b67f823807f..18bd0ffa5bf4 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -397,6 +397,19 @@ config ARM_UNWIND
 	  the performance is not affected. Currently, this feature
 	  only works with EABI compilers. If unsure say Y.
 
+config ARM_MMU_PERMISSIONS
+	bool "Map with extended RO/X permissions"
+	depends on ARM32
+	default y
+	help
+	  Enable this option to map readonly sections as readonly, executable
+	  sections as readonly/executable and the remainder of the SDRAM as
+	  read/write/non-executable.
+	  Traditionally barebox maps the whole SDRAM as read/write/execute.
+	  You get this behaviour by disabling this option, which is meant as
+	  a debugging facility. It can go away once the extended permission
+	  settings are proven to work reliably.
+
 config ARM_SEMIHOSTING
 	bool "enable ARM semihosting support"
 	select SEMIHOSTING
diff --git a/arch/arm/cpu/lowlevel_32.S b/arch/arm/cpu/lowlevel_32.S
index 960a92b78c0a..5d524faf9cff 100644
--- a/arch/arm/cpu/lowlevel_32.S
+++ b/arch/arm/cpu/lowlevel_32.S
@@ -70,6 +70,7 @@ THUMB(	orr	r12, r12, #PSR_T_BIT	)
 	orr	r12, r12, #CR_U
 	bic	r12, r12, #CR_A
 #else
+	orr	r12, r12, #CR_S
 	orr	r12, r12, #CR_A
 #endif
 
diff --git a/arch/arm/cpu/mmu-common.c b/arch/arm/cpu/mmu-common.c
index 4c30f98cbd79..a8673d027d17 100644
--- a/arch/arm/cpu/mmu-common.c
+++ b/arch/arm/cpu/mmu-common.c
@@ -14,6 +14,7 @@
 #include <zero_page.h>
 #include "mmu-common.h"
 #include <efi/efi-mode.h>
+#include <range.h>
 
 void arch_sync_dma_for_cpu(void *vaddr, size_t size,
 			   enum dma_data_direction dir)
@@ -82,15 +83,37 @@ static inline void remap_range_end(unsigned long start, unsigned long end,
 	remap_range((void *)start, end - start, map_type);
 }
 
+static inline void remap_range_end_sans_text(unsigned long start, unsigned long end,
+					     unsigned map_type)
+{
+	unsigned long text_start = (unsigned long)&_stext;
+	unsigned long text_end = (unsigned long)&_etext;
+
+	if (region_overlap_end_exclusive(start, end, text_start, text_end)) {
+		remap_range_end(start, text_start, MAP_CACHED);
+		/* skip barebox segments here, will be mapped later */
+		start = text_end;
+	}
+
+	remap_range_end(start, end, MAP_CACHED);
+}
+
 static void mmu_remap_memory_banks(void)
 {
 	struct memory_bank *bank;
+	unsigned long code_start = (unsigned long)&_stext;
+	unsigned long code_size = (unsigned long)&__start_rodata - (unsigned long)&_stext;
+	unsigned long rodata_start = (unsigned long)&__start_rodata;
+	unsigned long rodata_size = (unsigned long)&__end_rodata - rodata_start;
 
 	/*
 	 * Early mmu init will have mapped everything but the initial memory area
 	 * (excluding final OPTEE_SIZE bytes) uncached. We have now discovered
-	 * all memory banks, so let's map all pages, excluding reserved memory areas,
-	 * cacheable and executable.
+	 * all memory banks, so let's map all pages, excluding reserved memory areas
+	 * and barebox text area cacheable.
+	 *
+	 * This code will become much less complex once we switch over to using
+	 * CONFIG_MEMORY_ATTRIBUTES for MMU as well.
 	 */
 	for_each_memory_bank(bank) {
 		struct resource *rsv;
@@ -100,14 +123,21 @@ static void mmu_remap_memory_banks(void)
 
 		/* Skip reserved regions */
 		for_each_reserved_region(bank, rsv) {
-			remap_range_end(pos, rsv->start, MAP_CACHED);
+			remap_range_end_sans_text(pos, rsv->start, MAP_CACHED);
 			pos = rsv->end + 1;
 		}
 
-		remap_range_end(pos, bank->start + bank->size, MAP_CACHED);
+		remap_range_end_sans_text(pos, bank->start + bank->size, MAP_CACHED);
 	}
 
+	/* Do this while interrupt vectors are still writable */
 	setup_trap_pages();
+
+	if (!IS_ENABLED(CONFIG_ARM_MMU_PERMISSIONS))
+		return;
+
+	remap_range((void *)code_start, code_size, MAP_CODE);
+	remap_range((void *)rodata_start, rodata_size, ARCH_MAP_CACHED_RO);
 }
 
 static int mmu_init(void)
diff --git a/arch/arm/cpu/mmu-common.h b/arch/arm/cpu/mmu-common.h
index 8d90da8c86fe..395a2f8d0f6f 100644
--- a/arch/arm/cpu/mmu-common.h
+++ b/arch/arm/cpu/mmu-common.h
@@ -3,6 +3,7 @@
 #ifndef __ARM_MMU_COMMON_H
 #define __ARM_MMU_COMMON_H
 
+#include <mmu.h>
 #include <printf.h>
 #include <linux/types.h>
 #include <linux/ioport.h>
@@ -10,6 +11,8 @@
 #include <linux/sizes.h>
 
 #define ARCH_MAP_WRITECOMBINE	((unsigned)-1)
+#define ARCH_MAP_CACHED_RWX	((unsigned)-2)
+#define ARCH_MAP_CACHED_RO	((unsigned)-3)
 
 struct device;
 
@@ -19,6 +22,21 @@ void *dma_alloc_map(struct device *dev, size_t size, dma_addr_t *dma_handle, uns
 void setup_trap_pages(void);
 void __mmu_init(bool mmu_on);
 
+static inline unsigned arm_mmu_maybe_skip_permissions(unsigned map_type)
+{
+	if (IS_ENABLED(CONFIG_ARM_MMU_PERMISSIONS))
+		return map_type;
+
+	switch (map_type) {
+	case MAP_CODE:
+	case MAP_CACHED:
+	case ARCH_MAP_CACHED_RO:
+		return ARCH_MAP_CACHED_RWX;
+	default:
+		return map_type;
+	}
+}
+
 static inline void arm_mmu_not_initialized_error(void)
 {
 	/*
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 080e55a7ced6..b7936141911f 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -47,11 +47,18 @@ static inline void tlb_invalidate(void)
 	);
 }
 
+#define PTE_FLAGS_CACHED_V7_RWX (PTE_EXT_TEX(1) | PTE_BUFFERABLE | PTE_CACHEABLE | \
+				 PTE_EXT_AP_URW_SRW)
 #define PTE_FLAGS_CACHED_V7 (PTE_EXT_TEX(1) | PTE_BUFFERABLE | PTE_CACHEABLE | \
-			     PTE_EXT_AP_URW_SRW)
+			     PTE_EXT_AP_URW_SRW | PTE_EXT_XN)
+#define PTE_FLAGS_CACHED_RO_V7 (PTE_EXT_TEX(1) | PTE_BUFFERABLE | PTE_CACHEABLE | \
+			     PTE_EXT_APX | PTE_EXT_AP0 | PTE_EXT_AP1 | PTE_EXT_XN)
+#define PTE_FLAGS_CODE_V7 (PTE_EXT_TEX(1) | PTE_BUFFERABLE | PTE_CACHEABLE | \
+			     PTE_EXT_APX | PTE_EXT_AP0 | PTE_EXT_AP1)
 #define PTE_FLAGS_WC_V7 (PTE_EXT_TEX(1) | PTE_EXT_AP_URW_SRW | PTE_EXT_XN)
 #define PTE_FLAGS_UNCACHED_V7 (PTE_EXT_AP_URW_SRW | PTE_EXT_XN)
 #define PTE_FLAGS_CACHED_V4 (PTE_SMALL_AP_UNO_SRW | PTE_BUFFERABLE | PTE_CACHEABLE)
+#define PTE_FLAGS_CACHED_RO_V4 (PTE_SMALL_AP_UNO_SRO | PTE_CACHEABLE)
 #define PTE_FLAGS_UNCACHED_V4 PTE_SMALL_AP_UNO_SRW
 #define PGD_FLAGS_WC_V7 (PMD_SECT_TEX(1) | PMD_SECT_DEF_UNCACHED | \
 			 PMD_SECT_BUFFERABLE | PMD_SECT_XN)
@@ -208,7 +215,9 @@ static u32 pte_flags_to_pmd(u32 pte)
 		/* AP[2] */
 		pmd |= ((pte >> 9) & 0x1) << 15;
 	} else {
-		pmd |= PMD_SECT_AP_WRITE | PMD_SECT_AP_READ;
+		pmd |= PMD_SECT_AP_READ;
+		if (pte & PTE_SMALL_AP_MASK)
+			pmd |= PMD_SECT_AP_WRITE;
 	}
 
 	return pmd;
@@ -218,10 +227,16 @@ static uint32_t get_pte_flags(int map_type)
 {
 	if (cpu_architecture() >= CPU_ARCH_ARMv7) {
 		switch (map_type) {
+		case ARCH_MAP_CACHED_RWX:
+			return PTE_FLAGS_CACHED_V7_RWX;
+		case ARCH_MAP_CACHED_RO:
+			return PTE_FLAGS_CACHED_RO_V7;
 		case MAP_CACHED:
 			return PTE_FLAGS_CACHED_V7;
 		case MAP_UNCACHED:
 			return PTE_FLAGS_UNCACHED_V7;
+		case MAP_CODE:
+			return PTE_FLAGS_CODE_V7;
 		case ARCH_MAP_WRITECOMBINE:
 			return PTE_FLAGS_WC_V7;
 		case MAP_FAULT:
@@ -230,6 +245,10 @@ static uint32_t get_pte_flags(int map_type)
 		}
 	} else {
 		switch (map_type) {
+		case ARCH_MAP_CACHED_RO:
+		case MAP_CODE:
+			return PTE_FLAGS_CACHED_RO_V4;
+		case ARCH_MAP_CACHED_RWX:
 		case MAP_CACHED:
 			return PTE_FLAGS_CACHED_V4;
 		case MAP_UNCACHED:
@@ -260,6 +279,8 @@ static void __arch_remap_range(void *_virt_addr, phys_addr_t phys_addr, size_t s
 	pte_flags = get_pte_flags(map_type);
 	pmd_flags = pte_flags_to_pmd(pte_flags);
 
+	pr_debug("%s: 0x%08x 0x%08x type %d\n", __func__, virt_addr, size, map_type);
+
 	size = PAGE_ALIGN(size);
 	if (!size)
 		return;
@@ -350,6 +371,8 @@ static void early_remap_range(u32 addr, size_t size, unsigned map_type, bool for
 
 int arch_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size, unsigned map_type)
 {
+	map_type = arm_mmu_maybe_skip_permissions(map_type);
+
 	__arch_remap_range(virt_addr, phys_addr, size, map_type, false);
 
 	if (map_type == MAP_UNCACHED)
@@ -605,11 +628,7 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
 
 	set_ttbr(ttb);
 
-	/* For the XN bit to take effect, we can't be using DOMAIN_MANAGER. */
-	if (cpu_architecture() >= CPU_ARCH_ARMv7)
-		set_domain(DOMAIN_CLIENT);
-	else
-		set_domain(DOMAIN_MANAGER);
+	set_domain(DOMAIN_CLIENT);
 
 	/*
 	 * This marks the whole address space as uncachable as well as
@@ -625,7 +644,7 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
 	 * map the bulk of the memory as sections to avoid allocating too many page tables
 	 * at this early stage
 	 */
-	early_remap_range(membase, barebox_start - membase, MAP_CACHED, false);
+	early_remap_range(membase, barebox_start - membase, ARCH_MAP_CACHED_RWX, false);
 	/*
 	 * Map the remainder of the memory explicitly with two level page tables. This is
 	 * the place where barebox proper ends at. In barebox proper we'll remap the code
@@ -635,10 +654,10 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
 	 * a break-before-make sequence which we can't do when barebox proper is running
 	 * at the location being remapped.
 	 */
-	early_remap_range(barebox_start, barebox_size, MAP_CACHED, true);
+	early_remap_range(barebox_start, barebox_size, ARCH_MAP_CACHED_RWX, true);
 	early_remap_range(optee_start, OPTEE_SIZE, MAP_UNCACHED, false);
 	early_remap_range(PAGE_ALIGN_DOWN((uintptr_t)_stext), PAGE_ALIGN(_etext - _stext),
-			  MAP_CACHED, false);
+			  ARCH_MAP_CACHED_RWX, false);
 
 	__mmu_cache_on();
 }
diff --git a/arch/arm/lib32/barebox.lds.S b/arch/arm/lib32/barebox.lds.S
index a52556a35696..dbfdd2e9c110 100644
--- a/arch/arm/lib32/barebox.lds.S
+++ b/arch/arm/lib32/barebox.lds.S
@@ -30,7 +30,7 @@ SECTIONS
 	}
 	BAREBOX_BARE_INIT_SIZE
 
-	. = ALIGN(4);
+	. = ALIGN(4096);
 	__start_rodata = .;
 	.rodata : {
 		*(.rodata*)
@@ -53,6 +53,7 @@ SECTIONS
 		__stop_unwind_tab = .;
 	}
 #endif
+	. = ALIGN(4096);
 	__end_rodata = .;
 	_etext = .;
 	_sdata = .;
diff --git a/common/memory.c b/common/memory.c
index 57f58026df8e..bee55bd647e1 100644
--- a/common/memory.c
+++ b/common/memory.c
@@ -125,9 +125,14 @@ static int mem_malloc_resource(void)
 			MEMTYPE_BOOT_SERVICES_DATA, MEMATTRS_RW);
 	request_barebox_region("barebox code",
 			(unsigned long)&_stext,
-			(unsigned long)&_etext -
+			(unsigned long)&__start_rodata -
 			(unsigned long)&_stext,
 			MEMATTRS_RX);
+	request_barebox_region("barebox RO data",
+			(unsigned long)&__start_rodata,
+			(unsigned long)&__end_rodata -
+			(unsigned long)&__start_rodata,
+			MEMATTRS_RO);
 	request_barebox_region("barebox data",
 			(unsigned long)&_sdata,
 			(unsigned long)&_edata -
diff --git a/include/mmu.h b/include/mmu.h
index 669959050194..20855e89eda3 100644
--- a/include/mmu.h
+++ b/include/mmu.h
@@ -8,7 +8,7 @@
 #define MAP_UNCACHED	0
 #define MAP_CACHED	1
 #define MAP_FAULT	2
-#define MAP_CODE	MAP_CACHED	/* until support added */
+#define MAP_CODE	3
 
 /*
  * Depending on the architecture the default mapping can be
-- 
2.39.5




^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH v4 12/13] ARM: mmu64: map memory for barebox proper pagewise
  2025-08-04 17:22 [PATCH v4 00/13] ARM: Map sections RO/XN Ahmad Fatoum
                   ` (10 preceding siblings ...)
  2025-08-04 17:22 ` [PATCH v4 11/13] ARM: mmu: map text segment ro and data segments execute never Ahmad Fatoum
@ 2025-08-04 17:22 ` Ahmad Fatoum
  2025-08-04 17:22 ` [PATCH v4 13/13] ARM: mmu64: map text segment ro and data segments execute never Ahmad Fatoum
  2025-08-05 10:18 ` [PATCH v4 00/13] ARM: Map sections RO/XN Sascha Hauer
  13 siblings, 0 replies; 15+ messages in thread
From: Ahmad Fatoum @ 2025-08-04 17:22 UTC (permalink / raw)
  To: barebox; +Cc: Ahmad Fatoum, Ahmad Fatoum

From: Sascha Hauer <s.hauer@pengutronix.de>

Map the remainder of the memory explicitly with two-level page tables. This is
the area where barebox proper ends up. In barebox proper we'll remap the code
segments readonly/executable and the ro segments readonly/execute never. For
this, we need the memory to be mapped pagewise. We can't do the split from
section-wise mapping to pagewise mapping later, because that would require a
break-before-make sequence, which we can't do while barebox proper is running
at the location being remapped.

Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Ahmad Fatoum <a.fatoum@barebox.org>
---
 arch/arm/cpu/mmu_64.c | 40 +++++++++++++++++++++++++++++-----------
 1 file changed, 29 insertions(+), 11 deletions(-)

diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index 54d4a4e9c638..6fd767d983b7 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -10,6 +10,7 @@
 #include <init.h>
 #include <mmu.h>
 #include <errno.h>
+#include <range.h>
 #include <zero_page.h>
 #include <linux/sizes.h>
 #include <asm/memory.h>
@@ -128,7 +129,7 @@ static void split_block(uint64_t *pte, int level)
 }
 
 static void create_sections(uint64_t virt, uint64_t phys, uint64_t size,
-			    uint64_t attr)
+			    uint64_t attr, bool force_pages)
 {
 	uint64_t *ttb = get_ttb();
 	uint64_t block_size;
@@ -151,14 +152,18 @@ static void create_sections(uint64_t virt, uint64_t phys, uint64_t size,
 	while (size) {
 		table = ttb;
 		for (level = 0; level < 4; level++) {
+			bool block_aligned;
 			block_shift = level2shift(level);
 			idx = (addr & level2mask(level)) >> block_shift;
 			block_size = (1ULL << block_shift);
 
 			pte = table + idx;
 
-			if (size >= block_size && IS_ALIGNED(addr, block_size) &&
-			    IS_ALIGNED(phys, block_size)) {
+			block_aligned = size >= block_size &&
+				        IS_ALIGNED(addr, block_size) &&
+				        IS_ALIGNED(phys, block_size);
+
+			if ((force_pages && level == 3) || (!force_pages && block_aligned)) {
 				type = (level == 3) ?
 					PTE_TYPE_PAGE : PTE_TYPE_BLOCK;
 
@@ -299,14 +304,14 @@ static unsigned long get_pte_attrs(unsigned flags)
 	}
 }
 
-static void early_remap_range(uint64_t addr, size_t size, unsigned flags)
+static void early_remap_range(uint64_t addr, size_t size, unsigned flags, bool force_pages)
 {
 	unsigned long attrs = get_pte_attrs(flags);
 
 	if (WARN_ON(attrs == ~0UL))
 		return;
 
-	create_sections(addr, addr, size, attrs);
+	create_sections(addr, addr, size, attrs, force_pages);
 }
 
 int arch_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size, unsigned flags)
@@ -319,7 +324,7 @@ int arch_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size, unsign
 	if (flags != MAP_CACHED)
 		flush_cacheable_pages(virt_addr, size);
 
-	create_sections((uint64_t)virt_addr, phys_addr, (uint64_t)size, attrs);
+	create_sections((uint64_t)virt_addr, phys_addr, (uint64_t)size, attrs, false);
 
 	return 0;
 }
@@ -416,7 +421,7 @@ static void init_range(size_t total_level0_tables)
 	uint64_t addr = 0;
 
 	while (total_level0_tables--) {
-		early_remap_range(addr, L0_XLAT_SIZE, MAP_UNCACHED);
+		early_remap_range(addr, L0_XLAT_SIZE, MAP_UNCACHED, false);
 		split_block(ttb, 0);
 		addr += L0_XLAT_SIZE;
 		ttb++;
@@ -427,6 +432,7 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
 {
 	int el;
 	u64 optee_membase;
+	unsigned long barebox_size;
 	unsigned long ttb = arm_mem_ttb(membase + memsize);
 
 	if (get_cr() & CR_M)
@@ -447,14 +453,26 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
 	 */
 	init_range(2);
 
-	early_remap_range(membase, memsize, MAP_CACHED);
+	early_remap_range(membase, memsize, MAP_CACHED, false);
 
-	if (optee_get_membase(&optee_membase))
+	if (optee_get_membase(&optee_membase)) {
                 optee_membase = membase + memsize - OPTEE_SIZE;
 
-	early_remap_range(optee_membase, OPTEE_SIZE, MAP_FAULT);
+		barebox_size = optee_membase - barebox_start;
 
-	early_remap_range(PAGE_ALIGN_DOWN((uintptr_t)_stext), PAGE_ALIGN(_etext - _stext), MAP_CACHED);
+		early_remap_range(optee_membase - barebox_size, barebox_size,
+			     ARCH_MAP_CACHED_RWX, true);
+	} else {
+		barebox_size = membase + memsize - barebox_start;
+
+		early_remap_range(membase + memsize - barebox_size, barebox_size,
+			     ARCH_MAP_CACHED_RWX, true);
+	}
+
+	early_remap_range(optee_membase, OPTEE_SIZE, MAP_FAULT, false);
+
+	early_remap_range(PAGE_ALIGN_DOWN((uintptr_t)_stext), PAGE_ALIGN(_etext - _stext),
+			  MAP_CACHED, false);
 
 	mmu_enable();
 }
-- 
2.39.5




^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH v4 13/13] ARM: mmu64: map text segment ro and data segments execute never
  2025-08-04 17:22 [PATCH v4 00/13] ARM: Map sections RO/XN Ahmad Fatoum
                   ` (11 preceding siblings ...)
  2025-08-04 17:22 ` [PATCH v4 12/13] ARM: mmu64: map memory for barebox proper pagewise Ahmad Fatoum
@ 2025-08-04 17:22 ` Ahmad Fatoum
  2025-08-05 10:18 ` [PATCH v4 00/13] ARM: Map sections RO/XN Sascha Hauer
  13 siblings, 0 replies; 15+ messages in thread
From: Ahmad Fatoum @ 2025-08-04 17:22 UTC (permalink / raw)
  To: barebox; +Cc: Ahmad Fatoum, Ahmad Fatoum

From: Ahmad Fatoum <a.fatoum@pengutronix.de>

With this all segments in the DRAM except the text segment are mapped
execute-never so that only the barebox code can actually be executed.
Also map the readonly data segment readonly so that it can't be
modified.

The mapping is only implemented in barebox proper. The PBL still maps
the whole DRAM rwx.

Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Ahmad Fatoum <a.fatoum@barebox.org>
---
 arch/arm/Kconfig                 |  1 -
 arch/arm/cpu/mmu-common.c        |  3 ---
 arch/arm/cpu/mmu_64.c            | 18 ++++++++++++++----
 arch/arm/include/asm/pgtable64.h |  1 +
 arch/arm/lib64/barebox.lds.S     |  5 +++--
 5 files changed, 18 insertions(+), 10 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 18bd0ffa5bf4..7a3952700aa8 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -399,7 +399,6 @@ config ARM_UNWIND
 
 config ARM_MMU_PERMISSIONS
 	bool "Map with extended RO/X permissions"
-	depends on ARM32
 	default y
 	help
 	  Enable this option to map readonly sections as readonly, executable
diff --git a/arch/arm/cpu/mmu-common.c b/arch/arm/cpu/mmu-common.c
index a8673d027d17..365d9c89ba7c 100644
--- a/arch/arm/cpu/mmu-common.c
+++ b/arch/arm/cpu/mmu-common.c
@@ -133,9 +133,6 @@ static void mmu_remap_memory_banks(void)
 	/* Do this while interrupt vectors are still writable */
 	setup_trap_pages();
 
-	if (!IS_ENABLED(CONFIG_ARM_MMU_PERMISSIONS))
-		return;
-
 	remap_range((void *)code_start, code_size, MAP_CODE);
 	remap_range((void *)rodata_start, rodata_size, ARCH_MAP_CACHED_RO);
 }
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index 6fd767d983b7..8621bcd26cf4 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -292,13 +292,19 @@ static unsigned long get_pte_attrs(unsigned flags)
 {
 	switch (flags) {
 	case MAP_CACHED:
-		return CACHED_MEM;
+		return attrs_xn() | CACHED_MEM;
 	case MAP_UNCACHED:
 		return attrs_xn() | UNCACHED_MEM;
 	case MAP_FAULT:
 		return 0x0;
 	case ARCH_MAP_WRITECOMBINE:
 		return attrs_xn() | MEM_ALLOC_WRITECOMBINE;
+	case MAP_CODE:
+		return CACHED_MEM | PTE_BLOCK_RO;
+	case ARCH_MAP_CACHED_RO:
+		return attrs_xn() | CACHED_MEM | PTE_BLOCK_RO;
+	case ARCH_MAP_CACHED_RWX:
+		return CACHED_MEM;
 	default:
 		return ~0UL;
 	}
@@ -316,7 +322,11 @@ static void early_remap_range(uint64_t addr, size_t size, unsigned flags, bool f
 
 int arch_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size, unsigned flags)
 {
-	unsigned long attrs = get_pte_attrs(flags);
+	unsigned long attrs;
+
+	flags = arm_mmu_maybe_skip_permissions(flags);
+
+	attrs = get_pte_attrs(flags);
 
 	if (attrs == ~0UL)
 		return -EINVAL;
@@ -453,7 +463,7 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
 	 */
 	init_range(2);
 
-	early_remap_range(membase, memsize, MAP_CACHED, false);
+	early_remap_range(membase, memsize, ARCH_MAP_CACHED_RWX, false);
 
 	if (optee_get_membase(&optee_membase)) {
                 optee_membase = membase + memsize - OPTEE_SIZE;
@@ -472,7 +482,7 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
 	early_remap_range(optee_membase, OPTEE_SIZE, MAP_FAULT, false);
 
 	early_remap_range(PAGE_ALIGN_DOWN((uintptr_t)_stext), PAGE_ALIGN(_etext - _stext),
-			  MAP_CACHED, false);
+			  ARCH_MAP_CACHED_RWX, false);
 
 	mmu_enable();
 }
diff --git a/arch/arm/include/asm/pgtable64.h b/arch/arm/include/asm/pgtable64.h
index b88ffe6be525..6f6ef22717b7 100644
--- a/arch/arm/include/asm/pgtable64.h
+++ b/arch/arm/include/asm/pgtable64.h
@@ -59,6 +59,7 @@
 #define PTE_BLOCK_NG            (1 << 11)
 #define PTE_BLOCK_PXN           (UL(1) << 53)
 #define PTE_BLOCK_UXN           (UL(1) << 54)
+#define PTE_BLOCK_RO            (UL(1) << 7)
 
 /*
  * AttrIndx[2:0] encoding (mapping attributes defined in the MAIR* registers).
diff --git a/arch/arm/lib64/barebox.lds.S b/arch/arm/lib64/barebox.lds.S
index 454ae3a95d8d..68cff9dacdeb 100644
--- a/arch/arm/lib64/barebox.lds.S
+++ b/arch/arm/lib64/barebox.lds.S
@@ -25,18 +25,19 @@ SECTIONS
 	}
 	BAREBOX_BARE_INIT_SIZE
 
-	. = ALIGN(4);
+	. = ALIGN(4096);
 	__start_rodata = .;
 	.rodata : {
 		*(.rodata*)
 		RO_DATA_SECTION
 	}
 
+	. = ALIGN(4096);
+
 	__end_rodata = .;
 	_etext = .;
 	_sdata = .;
 
-	. = ALIGN(4);
 	.data : { *(.data*) }
 
 	.barebox_imd : { BAREBOX_IMD }
-- 
2.39.5





* Re: [PATCH v4 00/13] ARM: Map sections RO/XN
  2025-08-04 17:22 [PATCH v4 00/13] ARM: Map sections RO/XN Ahmad Fatoum
                   ` (12 preceding siblings ...)
  2025-08-04 17:22 ` [PATCH v4 13/13] ARM: mmu64: map text segment ro and data segments execute never Ahmad Fatoum
@ 2025-08-05 10:18 ` Sascha Hauer
  13 siblings, 0 replies; 15+ messages in thread
From: Sascha Hauer @ 2025-08-05 10:18 UTC (permalink / raw)
  To: Ahmad Fatoum; +Cc: barebox

Hi Ahmad,

On Mon, Aug 04, 2025 at 07:22:20PM +0200, Ahmad Fatoum wrote:
> This series replaces 7 patches that are in next to fix a barebox hang
> when used together with OP-TEE.

Unfortunately I didn't see this in time, so the offending patches are
in master now. Could you resend your changes as follow-up patches instead?

Thanks

 Sascha

-- 
Pengutronix e.K.                           |                             |
Steuerwalder Str. 21                       | http://www.pengutronix.de/  |
31137 Hildesheim, Germany                  | Phone: +49-5121-206917-0    |
Amtsgericht Hildesheim, HRA 2686           | Fax:   +49-5121-206917-5555 |


