* [PATCH master 0/8] ARM: mmu: fix hang reserving memory after text area
@ 2025-08-05 17:45 Ahmad Fatoum
2025-08-05 17:45 ` [PATCH master 1/8] partition: rename region_overlap_end to region_overlap_end_inclusive Ahmad Fatoum
` (8 more replies)
0 siblings, 9 replies; 10+ messages in thread
From: Ahmad Fatoum @ 2025-08-05 17:45 UTC (permalink / raw)
To: barebox
The loop remapping the memory banks looks at reserved memory
regions and then maps everything eXecute Never up to the start of the
region. If the region happens to be in the same bank as the text area
and it comes after it, this means the text area is temporarily mapped
eXecute Never, while barebox is running from it, which results in a
hang.
Fix this by remapping only after both reserved memory regions and text
area have been considered.
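To make the failure mode concrete, here is a minimal self-contained model of the old remap ordering; remap_range_model, the addresses and the XN shorthand are illustrative stand-ins, not barebox code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

enum map_type { MAP_CACHED_XN, MAP_CODE }; /* MAP_CACHED implies XN here */

/* Illustrative layout: one bank with the text area before a reserved region */
static const uint64_t text_start = 0x1000, text_end = 0x2000;

static bool text_mapped_xn; /* set if an XN remap ever covers the text area */

static void remap_range_model(uint64_t start, uint64_t size, enum map_type type)
{
	if (type == MAP_CACHED_XN && start < text_end && start + size > text_start)
		text_mapped_xn = true; /* barebox would hang right here */
}

/* Old loop: remaps up to the start of each reserved region immediately,
 * covering the text area with an XN mapping if the region comes after it. */
static void old_remap_loop(void)
{
	const uint64_t bank_start = 0x0, bank_end = 0x4000;
	const uint64_t rsv_start = 0x3000, rsv_end = 0x3800; /* after text */
	uint64_t pos = bank_start;

	remap_range_model(pos, rsv_start - pos, MAP_CACHED_XN); /* spans text */
	pos = rsv_end;
	remap_range_model(pos, bank_end - pos, MAP_CACHED_XN);
	/* the text area is only remapped executable afterwards - too late */
	remap_range_model(text_start, text_end - text_start, MAP_CODE);
}
```

Running old_remap_loop() sets text_mapped_xn, which is exactly the window in which barebox executes from a non-executable mapping; the series closes it by deferring the remap until both reserved regions and the text area have been accounted for.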
This series is a rebase of
https://lore.barebox.org/barebox/aJHagEVHUpHjALa2@pengutronix.de/T/#t
Ahmad Fatoum (8):
partition: rename region_overlap_end to region_overlap_end_inclusive
partition: define new region_overlap_end_exclusive helper
ARM: mmu: skip TLB invalidation if remapping zero bytes
ARM64: mmu: pass map type not PTE flags to early_remap_range
ARM: mmu: provide setup_trap_pages for both 32- and 64-bit
ARM: mmu: setup trap pages before remapping R/O
ARM: mmu: share common memory bank remapping code
ARM: mmu: fix hang reserving memory after text area
arch/arm/cpu/mmu-common.c | 69 +++++++++++++++++++++++++++++++++++++++
arch/arm/cpu/mmu-common.h | 1 +
arch/arm/cpu/mmu_32.c | 44 ++-----------------------
arch/arm/cpu/mmu_64.c | 51 +++++++----------------------
commands/iomemport.c | 2 +-
common/partitions.c | 6 ++--
include/range.h | 30 ++++++++++++++---
7 files changed, 114 insertions(+), 89 deletions(-)
--
2.39.5
* [PATCH master 1/8] partition: rename region_overlap_end to region_overlap_end_inclusive
From: Ahmad Fatoum @ 2025-08-05 17:45 UTC (permalink / raw)
To: barebox; +Cc: Ahmad Fatoum
The new name is quite verbose, but off-by-one errors are very annoying, so
it makes sense to be very explicit about the expected input. Rename the
function in preparation for adding region_overlap_end_exclusive.
Signed-off-by: Ahmad Fatoum <a.fatoum@barebox.org>
---
commands/iomemport.c | 2 +-
common/partitions.c | 6 +++---
include/range.h | 10 +++++-----
3 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/commands/iomemport.c b/commands/iomemport.c
index bb546e4a3ad9..d7c960d15b41 100644
--- a/commands/iomemport.c
+++ b/commands/iomemport.c
@@ -21,7 +21,7 @@ static void __print_resources(struct resource *res, int indent,
resource_size_t size = resource_size(res);
int i;
- if (addr && !region_overlap_end(*addr, *addr, res->start, res->end))
+ if (addr && !region_overlap_end_inclusive(*addr, *addr, res->start, res->end))
return;
if ((flags & FLAG_VERBOSE) && !(flags & FLAG_IOPORT))
diff --git a/common/partitions.c b/common/partitions.c
index fdead333a40a..7563cb0e6767 100644
--- a/common/partitions.c
+++ b/common/partitions.c
@@ -251,9 +251,9 @@ int partition_create(struct partition_desc *pdesc, const char *name,
}
list_for_each_entry(part, &pdesc->partitions, list) {
- if (region_overlap_end(part->first_sec,
- part->first_sec + part->size - 1,
- lba_start, lba_end)) {
+ if (region_overlap_end_inclusive(part->first_sec,
+ part->first_sec + part->size - 1,
+ lba_start, lba_end)) {
pr_err("new partition %llu-%llu overlaps with partition %s (%llu-%llu)\n",
lba_start, lba_end, part->name, part->first_sec,
part->first_sec + part->size - 1);
diff --git a/include/range.h b/include/range.h
index 82c152f3f7c8..96e0f124d5d4 100644
--- a/include/range.h
+++ b/include/range.h
@@ -5,15 +5,15 @@
#include <linux/types.h>
/**
- * region_overlap_end - check whether a pair of [start, end] ranges overlap
+ * region_overlap_end_inclusive - check whether a pair of [start, end] ranges overlap
*
* @starta: start of the first range
* @enda: end of the first range (inclusive)
* @startb: start of the second range
* @endb: end of the second range (inclusive)
*/
-static inline bool region_overlap_end(u64 starta, u64 enda,
- u64 startb, u64 endb)
+static inline bool region_overlap_end_inclusive(u64 starta, u64 enda,
+ u64 startb, u64 endb)
{
if (enda < startb)
return false;
@@ -36,8 +36,8 @@ static inline bool region_overlap_size(u64 starta, u64 lena,
if (!lena || !lenb)
return false;
- return region_overlap_end(starta, starta + lena - 1,
- startb, startb + lenb - 1);
+ return region_overlap_end_inclusive(starta, starta + lena - 1,
+ startb, startb + lenb - 1);
}
/**
--
2.39.5
* [PATCH master 2/8] partition: define new region_overlap_end_exclusive helper
From: Ahmad Fatoum @ 2025-08-05 17:45 UTC (permalink / raw)
To: barebox; +Cc: Ahmad Fatoum
Most ranges in barebox identified by start and end offsets are exclusive
at the end, for example, the linker symbols for the sections.
To make handling them a bit easier while avoiding off-by-one errors, let's
add a region_overlap_end_exclusive helper as well.
Unlike inclusive ranges, exclusive ranges can be empty, so add a check
for invalid ranges.
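The semantics can be pinned down with a short standalone sketch mirroring the two helpers (the real definitions live in include/range.h):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* [start, end] with inclusive end: a single byte is [x, x], so an
 * empty range cannot be represented at all */
static bool region_overlap_end_inclusive(uint64_t starta, uint64_t enda,
					 uint64_t startb, uint64_t endb)
{
	if (enda < startb)
		return false;
	if (endb < starta)
		return false;
	return true;
}

/* [start, end) with exclusive end: start == end is a valid empty range,
 * and empty ranges never overlap anything */
static bool region_overlap_end_exclusive(uint64_t starta, uint64_t enda,
					 uint64_t startb, uint64_t endb)
{
	if (starta >= enda || startb >= endb)
		return false;
	return region_overlap_end_inclusive(starta, enda - 1,
					    startb, endb - 1);
}
```

So adjacent exclusive ranges like [0, 0x1000) and [0x1000, 0x2000) do not overlap, while the same bounds passed to the inclusive helper would.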
Signed-off-by: Ahmad Fatoum <a.fatoum@barebox.org>
---
include/range.h | 24 ++++++++++++++++++++++--
1 file changed, 22 insertions(+), 2 deletions(-)
diff --git a/include/range.h b/include/range.h
index 96e0f124d5d4..e7819017e473 100644
--- a/include/range.h
+++ b/include/range.h
@@ -22,6 +22,26 @@ static inline bool region_overlap_end_inclusive(u64 starta, u64 enda,
return true;
}
+/**
+ * region_overlap_end_exclusive - check whether a pair of [start, end) ranges overlap
+ *
+ * @starta: start of the first range
+ * @enda: end of the first range (exclusive)
+ * @startb: start of the second range
+ * @endb: end of the second range (exclusive)
+ */
+static inline bool region_overlap_end_exclusive(u64 starta, u64 enda,
+ u64 startb, u64 endb)
+{
+ /* Empty ranges don't overlap */
+ if (starta >= enda || startb >= endb)
+ return false;
+
+ return region_overlap_end_inclusive(starta, enda - 1,
+ startb, endb - 1);
+}
+
+
/**
* region_overlap_size - check whether a pair of [start, size] ranges overlap
*
@@ -36,8 +56,8 @@ static inline bool region_overlap_size(u64 starta, u64 lena,
if (!lena || !lenb)
return false;
- return region_overlap_end_inclusive(starta, starta + lena - 1,
- startb, startb + lenb - 1);
+ return region_overlap_end_exclusive(starta, starta + lena,
+ startb, startb + lenb);
}
/**
--
2.39.5
* [PATCH master 3/8] ARM: mmu: skip TLB invalidation if remapping zero bytes
From: Ahmad Fatoum @ 2025-08-05 17:45 UTC (permalink / raw)
To: barebox; +Cc: Ahmad Fatoum
The loop that remaps memory banks can end up calling remap_range with
zero size, when a reserved region is at the very start of the memory
bank.
This is handled correctly by the code, but incurs an unnecessary
invalidation of the whole TLB. Let's exit early instead to skip that.
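A toy model of the early exit (the function and counter are illustrative, not barebox APIs):

```c
#include <assert.h>
#include <stddef.h>

static int tlb_invalidations; /* counts whole-TLB flushes in this model */

#define PAGE_ALIGN_4K(s) (((s) + 4095) & ~(size_t)4095)

static void remap_model(size_t size)
{
	size = PAGE_ALIGN_4K(size);
	if (!size)
		return; /* nothing to remap: skip the costly TLB flush */

	/* ...page table updates would happen here... */

	tlb_invalidations++; /* stands in for tlb_invalidate() */
}
```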
Signed-off-by: Ahmad Fatoum <a.fatoum@barebox.org>
---
arch/arm/cpu/mmu_32.c | 2 ++
arch/arm/cpu/mmu_64.c | 2 ++
2 files changed, 4 insertions(+)
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 89a18d342b80..5f303ae1dc87 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -283,6 +283,8 @@ static void __arch_remap_range(void *_virt_addr, phys_addr_t phys_addr, size_t s
pr_debug("%s: 0x%08x 0x%08x type %d\n", __func__, virt_addr, size, map_type);
size = PAGE_ALIGN(size);
+ if (!size)
+ return;
while (size) {
const bool pgdir_size_aligned = IS_ALIGNED(virt_addr, PGDIR_SIZE);
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index a229e4cb5526..91b3cd76c24f 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -146,6 +146,8 @@ static void create_sections(uint64_t virt, uint64_t phys, uint64_t size,
attr &= ~PTE_TYPE_MASK;
size = PAGE_ALIGN(size);
+ if (!size)
+ return;
while (size) {
table = ttb;
--
2.39.5
* [PATCH master 4/8] ARM64: mmu: pass map type not PTE flags to early_remap_range
From: Ahmad Fatoum @ 2025-08-05 17:45 UTC (permalink / raw)
To: barebox; +Cc: Ahmad Fatoum
early_remap_range() already calls get_pte_attrs() internally on the
flags argument, so passing PTE flags here is incorrect.
Fixes: 59c1288698b4 ("ARM: MMU64: map memory for barebox proper pagewise")
Signed-off-by: Ahmad Fatoum <a.fatoum@barebox.org>
---
arch/arm/cpu/mmu_64.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index 91b3cd76c24f..0db95bceba1b 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -499,12 +499,12 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
barebox_size = optee_membase - barebox_start;
early_remap_range(optee_membase - barebox_size, barebox_size,
- get_pte_attrs(ARCH_MAP_CACHED_RWX), true);
+ ARCH_MAP_CACHED_RWX, true);
} else {
barebox_size = membase + memsize - barebox_start;
early_remap_range(membase + memsize - barebox_size, barebox_size,
- get_pte_attrs(ARCH_MAP_CACHED_RWX), true);
+ ARCH_MAP_CACHED_RWX, true);
}
early_remap_range(optee_membase, OPTEE_SIZE, MAP_FAULT, false);
--
2.39.5
* [PATCH master 5/8] ARM: mmu: provide setup_trap_pages for both 32- and 64-bit
From: Ahmad Fatoum @ 2025-08-05 17:45 UTC (permalink / raw)
To: barebox; +Cc: Ahmad Fatoum
In preparation for moving the remapping of memory banks into the common
code, rename vectors_init to setup_trap_pages and export it for both
32-bit and 64-bit ARM, so it can be called from the common code as well.
This needs to happen after the remapping, because otherwise the trap
pages would be remapped cached again by the SDRAM remapping loop.
Signed-off-by: Ahmad Fatoum <a.fatoum@barebox.org>
---
arch/arm/cpu/mmu-common.h | 1 +
arch/arm/cpu/mmu_32.c | 4 ++--
arch/arm/cpu/mmu_64.c | 12 +++++++++---
3 files changed, 12 insertions(+), 5 deletions(-)
diff --git a/arch/arm/cpu/mmu-common.h b/arch/arm/cpu/mmu-common.h
index 3bca5cc3b821..f76c7c4c38d6 100644
--- a/arch/arm/cpu/mmu-common.h
+++ b/arch/arm/cpu/mmu-common.h
@@ -18,6 +18,7 @@ struct device;
void dma_inv_range(void *ptr, size_t size);
void dma_flush_range(void *ptr, size_t size);
void *dma_alloc_map(struct device *dev, size_t size, dma_addr_t *dma_handle, unsigned flags);
+void setup_trap_pages(void);
void __mmu_init(bool mmu_on);
static inline unsigned arm_mmu_maybe_skip_permissions(unsigned map_type)
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 5f303ae1dc87..151e786c9b2d 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -542,7 +542,7 @@ static void create_guard_page(void)
/*
* Map vectors and zero page
*/
-static void vectors_init(void)
+void setup_trap_pages(void)
{
create_guard_page();
@@ -632,7 +632,7 @@ void __mmu_init(bool mmu_on)
remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
}
- vectors_init();
+ setup_trap_pages();
remap_range((void *)code_start, code_size, MAP_CODE);
remap_range((void *)rodata_start, rodata_size, ARCH_MAP_CACHED_RO);
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index 0db95bceba1b..7e6e89cb98c2 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -360,6 +360,14 @@ static void create_guard_page(void)
pr_debug("Created guard page\n");
}
+void setup_trap_pages(void)
+{
+ /* Vectors are already registered by aarch64_init_vectors */
+ /* Make zero page faulting to catch NULL pointer derefs */
+ zero_page_faulting();
+ create_guard_page();
+}
+
/*
* Prepare MMU for usage and enable it.
*/
@@ -412,9 +420,7 @@ void __mmu_init(bool mmu_on)
remap_range((void *)code_start, code_size, MAP_CODE);
remap_range((void *)rodata_start, rodata_size, ARCH_MAP_CACHED_RO);
- /* Make zero page faulting to catch NULL pointer derefs */
- zero_page_faulting();
- create_guard_page();
+ setup_trap_pages();
}
void mmu_disable(void)
--
2.39.5
* [PATCH master 6/8] ARM: mmu: setup trap pages before remapping R/O
From: Ahmad Fatoum @ 2025-08-05 17:45 UTC (permalink / raw)
To: barebox; +Cc: Ahmad Fatoum
The order matters on ARM32, because arm_fixup_vectors() actually rewrites
the vector table, which is in the text area.
On ARM64, the order doesn't matter. As we are going to share the memory
bank remapping code between ARM32 and ARM64, move setup_trap_pages so the
code becomes identical on both.
Signed-off-by: Ahmad Fatoum <a.fatoum@barebox.org>
---
arch/arm/cpu/mmu_64.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index 7e6e89cb98c2..a770be7ed611 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -417,10 +417,10 @@ void __mmu_init(bool mmu_on)
remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
}
+ setup_trap_pages();
+
remap_range((void *)code_start, code_size, MAP_CODE);
remap_range((void *)rodata_start, rodata_size, ARCH_MAP_CACHED_RO);
-
- setup_trap_pages();
}
void mmu_disable(void)
--
2.39.5
* [PATCH master 7/8] ARM: mmu: share common memory bank remapping code
From: Ahmad Fatoum @ 2025-08-05 17:45 UTC (permalink / raw)
To: barebox; +Cc: Ahmad Fatoum
The code is identical between ARM32 and 64 and is going to get more
complex with the addition of finer grained MMU permissions.
Let's move it to a common code file in anticipation.
Signed-off-by: Ahmad Fatoum <a.fatoum@barebox.org>
---
arch/arm/cpu/mmu-common.c | 46 +++++++++++++++++++++++++++++++++++++++
arch/arm/cpu/mmu_32.c | 40 ----------------------------------
arch/arm/cpu/mmu_64.c | 35 -----------------------------
3 files changed, 46 insertions(+), 75 deletions(-)
diff --git a/arch/arm/cpu/mmu-common.c b/arch/arm/cpu/mmu-common.c
index f3416ae7f7ca..575fb32282d1 100644
--- a/arch/arm/cpu/mmu-common.c
+++ b/arch/arm/cpu/mmu-common.c
@@ -12,6 +12,7 @@
#include <asm/barebox-arm.h>
#include <memory.h>
#include <zero_page.h>
+#include <range.h>
#include "mmu-common.h"
#include <efi/efi-mode.h>
@@ -69,6 +70,50 @@ void zero_page_faulting(void)
remap_range(0x0, PAGE_SIZE, MAP_FAULT);
}
+static void mmu_remap_memory_banks(void)
+{
+ struct memory_bank *bank;
+ unsigned long text_start = (unsigned long)&_stext;
+ unsigned long code_start = text_start;
+ unsigned long code_size = (unsigned long)&__start_rodata - (unsigned long)&_stext;
+ unsigned long text_size = (unsigned long)&_etext - text_start;
+ unsigned long rodata_start = (unsigned long)&__start_rodata;
+ unsigned long rodata_size = (unsigned long)&__end_rodata - rodata_start;
+
+ /*
+ * Early mmu init will have mapped everything but the initial memory area
+ * (excluding final OPTEE_SIZE bytes) uncached. We have now discovered
+ * all memory banks, so let's map all pages, excluding reserved memory areas,
+ * cacheable and executable.
+ */
+ for_each_memory_bank(bank) {
+ struct resource *rsv;
+ resource_size_t pos;
+
+ pos = bank->start;
+
+ /* Skip reserved regions */
+ for_each_reserved_region(bank, rsv) {
+ remap_range((void *)pos, rsv->start - pos, MAP_CACHED);
+ pos = rsv->end + 1;
+ }
+
+ if (region_overlap_size(pos, bank->start + bank->size - pos,
+ text_start, text_size)) {
+ remap_range((void *)pos, text_start - pos, MAP_CACHED);
+ /* skip barebox segments here, will be mapped below */
+ pos = text_start + text_size;
+ }
+
+ remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
+ }
+
+ setup_trap_pages();
+
+ remap_range((void *)code_start, code_size, MAP_CODE);
+ remap_range((void *)rodata_start, rodata_size, ARCH_MAP_CACHED_RO);
+}
+
static int mmu_init(void)
{
if (efi_is_payload())
@@ -94,6 +139,7 @@ static int mmu_init(void)
}
__mmu_init(get_cr() & CR_M);
+ mmu_remap_memory_banks();
return 0;
}
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 151e786c9b2d..985a063bbdda 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -19,7 +19,6 @@
#include <asm/system_info.h>
#include <asm/sections.h>
#include <linux/pagemap.h>
-#include <range.h>
#include "mmu_32.h"
@@ -579,14 +578,7 @@ void setup_trap_pages(void)
*/
void __mmu_init(bool mmu_on)
{
- struct memory_bank *bank;
uint32_t *ttb = get_ttb();
- unsigned long text_start = (unsigned long)&_stext;
- unsigned long code_start = text_start;
- unsigned long code_size = (unsigned long)&__start_rodata - (unsigned long)&_stext;
- unsigned long text_size = (unsigned long)&_etext - text_start;
- unsigned long rodata_start = (unsigned long)&__start_rodata;
- unsigned long rodata_size = (unsigned long)&__end_rodata - rodata_start;
// TODO: remap writable only while remapping?
// TODO: What memtype for ttb when barebox is EFI loader?
@@ -604,38 +596,6 @@ void __mmu_init(bool mmu_on)
ttb);
pr_debug("ttb: 0x%p\n", ttb);
-
- /*
- * Early mmu init will have mapped everything but the initial memory area
- * (excluding final OPTEE_SIZE bytes) uncached. We have now discovered
- * all memory banks, so let's map all pages, excluding reserved memory areas,
- * cacheable and executable.
- */
- for_each_memory_bank(bank) {
- struct resource *rsv;
- resource_size_t pos;
-
- pos = bank->start;
-
- /* Skip reserved regions */
- for_each_reserved_region(bank, rsv) {
- remap_range((void *)pos, rsv->start - pos, MAP_CACHED);
- pos = rsv->end + 1;
- }
-
- if (region_overlap_size(pos, bank->start + bank->size - pos, text_start, text_size)) {
- remap_range((void *)pos, code_start - pos, MAP_CACHED);
- /* skip barebox segments here, will be mapped below */
- pos = text_start + text_size;
- }
-
- remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
- }
-
- setup_trap_pages();
-
- remap_range((void *)code_start, code_size, MAP_CODE);
- remap_range((void *)rodata_start, rodata_size, ARCH_MAP_CACHED_RO);
}
/*
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index a770be7ed611..e7d2e9697a7e 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -10,7 +10,6 @@
#include <init.h>
#include <mmu.h>
#include <errno.h>
-#include <range.h>
#include <zero_page.h>
#include <linux/sizes.h>
#include <asm/memory.h>
@@ -374,13 +373,6 @@ void setup_trap_pages(void)
void __mmu_init(bool mmu_on)
{
uint64_t *ttb = get_ttb();
- struct memory_bank *bank;
- unsigned long text_start = (unsigned long)&_stext;
- unsigned long code_start = text_start;
- unsigned long code_size = (unsigned long)&__start_rodata - (unsigned long)&_stext;
- unsigned long text_size = (unsigned long)&_etext - text_start;
- unsigned long rodata_start = (unsigned long)&__start_rodata;
- unsigned long rodata_size = (unsigned long)&__end_rodata - rodata_start;
// TODO: remap writable only while remapping?
// TODO: What memtype for ttb when barebox is EFI loader?
@@ -394,33 +386,6 @@ void __mmu_init(bool mmu_on)
* the ttb will get corrupted.
*/
pr_crit("Can't request SDRAM region for ttb at %p\n", ttb);
-
- for_each_memory_bank(bank) {
- struct resource *rsv;
- resource_size_t pos;
-
- pos = bank->start;
-
- /* Skip reserved regions */
- for_each_reserved_region(bank, rsv) {
- remap_range((void *)pos, rsv->start - pos, MAP_CACHED);
- pos = rsv->end + 1;
- }
-
- if (region_overlap_size(pos, bank->start + bank->size - pos,
- text_start, text_size)) {
- remap_range((void *)pos, text_start - pos, MAP_CACHED);
- /* skip barebox segments here, will be mapped below */
- pos = text_start + text_size;
- }
-
- remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
- }
-
- setup_trap_pages();
-
- remap_range((void *)code_start, code_size, MAP_CODE);
- remap_range((void *)rodata_start, rodata_size, ARCH_MAP_CACHED_RO);
}
void mmu_disable(void)
--
2.39.5
* [PATCH master 8/8] ARM: mmu: fix hang reserving memory after text area
From: Ahmad Fatoum @ 2025-08-05 17:45 UTC (permalink / raw)
To: barebox; +Cc: Ahmad Fatoum
The loop in mmu_remap_memory_banks first looks at reserved memory
regions and then maps everything eXecute Never up to the start of the
region. If the region happens to be in the same bank as the text area
and it comes after it, this means the text area is temporarily mapped
eXecute Never, while barebox is running from it, which results in a
hang.
Fix this by remapping only after both reserved memory regions and text
area have been considered.
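The fix can be sketched as follows; the splitting helper mirrors remap_range_end_sans_text from the patch, while remap_cached_model and the addresses are illustrative:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

static const uint64_t text_start = 0x1000, text_end = 0x2000; /* illustrative */

static bool text_hit; /* set if a cached (XN) remap covers the text area */

static void remap_cached_model(uint64_t start, uint64_t end)
{
	if (start >= end)
		return; /* empty span, nothing to do */
	if (start < text_end && end > text_start)
		text_hit = true;
}

static bool overlap_end_exclusive(uint64_t sa, uint64_t ea,
				  uint64_t sb, uint64_t eb)
{
	return sa < ea && sb < eb && sa < eb && sb < ea;
}

/* Like the patch's remap_range_end_sans_text: punch the text segment out
 * of every [start, end) span before mapping it cached (and XN) */
static void remap_end_sans_text(uint64_t start, uint64_t end)
{
	if (overlap_end_exclusive(start, end, text_start, text_end)) {
		remap_cached_model(start, text_start);
		start = text_end; /* text is remapped MAP_CODE later */
	}
	remap_cached_model(start, end);
}
```

Whether a reserved region lands before or after the text area, every cached remap now skips the pages barebox executes from.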
Fixes: 5916385fae83 ("ARM: MMU: map text segment ro and data segments execute never")
Fixes: 03dfb3f142fb ("ARM: MMU64: map text segment ro and data segments execute never")
Signed-off-by: Ahmad Fatoum <a.fatoum@barebox.org>
---
arch/arm/cpu/mmu-common.c | 51 ++++++++++++++++++++++++++++-----------
1 file changed, 37 insertions(+), 14 deletions(-)
diff --git a/arch/arm/cpu/mmu-common.c b/arch/arm/cpu/mmu-common.c
index 575fb32282d1..1de20d931876 100644
--- a/arch/arm/cpu/mmu-common.c
+++ b/arch/arm/cpu/mmu-common.c
@@ -70,21 +70,50 @@ void zero_page_faulting(void)
remap_range(0x0, PAGE_SIZE, MAP_FAULT);
}
+/**
+ * remap_range_end - remap a range identified by [start, end)
+ *
+ * @start: start of the range
+ * @end: end of the range (exclusive)
+ * @map_type: mapping type to apply
+ */
+static inline void remap_range_end(unsigned long start, unsigned long end,
+ unsigned map_type)
+{
+ remap_range((void *)start, end - start, map_type);
+}
+
+static inline void remap_range_end_sans_text(unsigned long start, unsigned long end,
+ unsigned map_type)
+{
+ unsigned long text_start = (unsigned long)&_stext;
+ unsigned long text_end = (unsigned long)&_etext;
+
+ if (region_overlap_end_exclusive(start, end, text_start, text_end)) {
+ remap_range_end(start, text_start, MAP_CACHED);
+ /* skip barebox segments here, will be mapped later */
+ start = text_end;
+ }
+
+ remap_range_end(start, end, MAP_CACHED);
+}
+
static void mmu_remap_memory_banks(void)
{
struct memory_bank *bank;
- unsigned long text_start = (unsigned long)&_stext;
- unsigned long code_start = text_start;
+ unsigned long code_start = (unsigned long)&_stext;
unsigned long code_size = (unsigned long)&__start_rodata - (unsigned long)&_stext;
- unsigned long text_size = (unsigned long)&_etext - text_start;
unsigned long rodata_start = (unsigned long)&__start_rodata;
unsigned long rodata_size = (unsigned long)&__end_rodata - rodata_start;
/*
* Early mmu init will have mapped everything but the initial memory area
* (excluding final OPTEE_SIZE bytes) uncached. We have now discovered
- * all memory banks, so let's map all pages, excluding reserved memory areas,
- * cacheable and executable.
+ * all memory banks, so let's map all pages, excluding reserved memory areas
+ * and the barebox text area, cacheable.
+ *
+ * This code will become much less complex once we switch over to using
+ * CONFIG_MEMORY_ATTRIBUTES for MMU as well.
*/
for_each_memory_bank(bank) {
struct resource *rsv;
@@ -94,20 +123,14 @@ static void mmu_remap_memory_banks(void)
/* Skip reserved regions */
for_each_reserved_region(bank, rsv) {
- remap_range((void *)pos, rsv->start - pos, MAP_CACHED);
+ remap_range_end_sans_text(pos, rsv->start, MAP_CACHED);
pos = rsv->end + 1;
}
- if (region_overlap_size(pos, bank->start + bank->size - pos,
- text_start, text_size)) {
- remap_range((void *)pos, text_start - pos, MAP_CACHED);
- /* skip barebox segments here, will be mapped below */
- pos = text_start + text_size;
- }
-
- remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
+ remap_range_end_sans_text(pos, bank->start + bank->size, MAP_CACHED);
}
+ /* Do this while interrupt vectors are still writable */
setup_trap_pages();
remap_range((void *)code_start, code_size, MAP_CODE);
--
2.39.5
* Re: [PATCH master 0/8] ARM: mmu: fix hang reserving memory after text area
From: Sascha Hauer @ 2025-08-06 6:31 UTC (permalink / raw)
To: barebox, Ahmad Fatoum
On Tue, 05 Aug 2025 19:45:33 +0200, Ahmad Fatoum wrote:
> The loop remapping the memory banks looks at reserved memory
> regions and then maps everything eXecute Never up to the start of the
> region. If the region happens to be in the same bank as the text area
> and it comes after it, this means the text area is temporarily mapped
> eXecute Never, while barebox is running from it, which results in a
> hang.
>
> [...]
Applied, thanks!
[1/8] partition: rename region_overlap_end to region_overlap_end_inclusive
https://git.pengutronix.de/cgit/barebox/commit/?id=f6c2846933ca (link may not be stable)
[2/8] partition: define new region_overlap_end_exclusive helper
https://git.pengutronix.de/cgit/barebox/commit/?id=768fdb36f30e (link may not be stable)
[3/8] ARM: mmu: skip TLB invalidation if remapping zero bytes
https://git.pengutronix.de/cgit/barebox/commit/?id=b71103970c9b (link may not be stable)
[4/8] ARM64: mmu: pass map type not PTE flags to early_remap_range
https://git.pengutronix.de/cgit/barebox/commit/?id=fda2cc61ef68 (link may not be stable)
[5/8] ARM: mmu: provide setup_trap_pages for both 32- and 64-bit
https://git.pengutronix.de/cgit/barebox/commit/?id=5a4f19a47c21 (link may not be stable)
[6/8] ARM: mmu: setup trap pages before remapping R/O
https://git.pengutronix.de/cgit/barebox/commit/?id=b661f0519424 (link may not be stable)
[7/8] ARM: mmu: share common memory bank remapping code
https://git.pengutronix.de/cgit/barebox/commit/?id=5c4d167f5a54 (link may not be stable)
[8/8] ARM: mmu: fix hang reserving memory after text area
https://git.pengutronix.de/cgit/barebox/commit/?id=e91c073f4756 (link may not be stable)
Best regards,
--
Sascha Hauer <s.hauer@pengutronix.de>