From: Ahmad Fatoum <a.fatoum@barebox.org>
To: barebox@lists.infradead.org
Cc: Ahmad Fatoum <a.fatoum@pengutronix.de>,
Ahmad Fatoum <a.fatoum@barebox.org>
Subject: [PATCH v4 04/13] ARM: mmu: map memory for barebox proper pagewise
Date: Mon, 4 Aug 2025 19:22:24 +0200
Message-ID: <20250804172233.2158462-5-a.fatoum@barebox.org>
In-Reply-To: <20250804172233.2158462-1-a.fatoum@barebox.org>
From: Sascha Hauer <s.hauer@pengutronix.de>
Map the remainder of the memory explicitly with two-level page tables: this is
where barebox proper ends up. Later, barebox proper will remap its code
segments read-only/executable and its read-only data segments
read-only/execute-never. For this, the memory needs to be mapped pagewise. We
can't split the section-wise mapping into a pagewise mapping later, because
that would require a break-before-make sequence, which is impossible while
barebox proper is running at the location being remapped.
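To make the break-before-make constraint concrete, below is a minimal sketch
(not the barebox implementation) of what splitting an existing 1 MiB section
into a second-level page table would look like on a short-descriptor ARMv7
MMU. The descriptor bit positions follow the architecture; alloc_pte(),
tlb_invalidate() and dmb() are assumed helpers standing in for barebox
internals:

	#include <stdint.h>

	typedef uint32_t u32;

	#define SZ_4K		0x1000UL
	#define SZ_1M		0x100000UL
	#define PMD_TYPE_TABLE	(1 << 0)  /* L1 descriptor: coarse page table */
	#define PTE_TYPE_SMALL	(1 << 1)  /* L2 descriptor: 4 KiB small page */
	#define PTRS_PER_PTE	256	  /* 256 * 4 KiB == 1 MiB */

	extern u32 *alloc_pte(void);	  /* assumed: 1 KiB-aligned L2 table */
	extern void tlb_invalidate(void); /* assumed helpers */
	extern void dmb(void);

	static void split_section(u32 *pgd, u32 pte_flags)
	{
		u32 phys = *pgd & ~(SZ_1M - 1);	/* base the old section mapped */
		u32 *table = alloc_pte();
		int i;

		/* re-express the 1 MiB section as 256 identical 4 KiB entries */
		for (i = 0; i < PTRS_PER_PTE; i++)
			table[i] = (phys + i * SZ_4K) | PTE_TYPE_SMALL | pte_flags;

		/*
		 * Break-before-make: the old entry must be invalidated and the
		 * TLB flushed before the table entry goes live. A CPU executing
		 * from this very megabyte loses its mapping at this point, which
		 * is why barebox maps its own region pagewise from the start.
		 */
		*pgd = 0;
		dmb();
		tlb_invalidate();
		*pgd = (u32)table | PMD_TYPE_TABLE;
		dmb();
	}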
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Ahmad Fatoum <a.fatoum@barebox.org>
---
arch/arm/cpu/mmu_32.c | 37 +++++++++++++++++++++++++++++--------
1 file changed, 29 insertions(+), 8 deletions(-)
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 104780ff6b98..b21fc75f0ceb 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -247,7 +247,8 @@ static uint32_t get_pmd_flags(int map_type)
return pte_flags_to_pmd(get_pte_flags(map_type));
}
-static void __arch_remap_range(void *_virt_addr, phys_addr_t phys_addr, size_t size, unsigned map_type)
+static void __arch_remap_range(void *_virt_addr, phys_addr_t phys_addr, size_t size,
+ unsigned map_type, bool force_pages)
{
u32 virt_addr = (u32)_virt_addr;
u32 pte_flags, pmd_flags;
@@ -268,7 +269,7 @@ static void __arch_remap_range(void *_virt_addr, phys_addr_t phys_addr, size_t s
if (size >= PGDIR_SIZE && pgdir_size_aligned &&
IS_ALIGNED(phys_addr, PGDIR_SIZE) &&
- !pgd_type_table(*pgd)) {
+ !pgd_type_table(*pgd) && !force_pages) {
u32 val;
/*
* TODO: Add code to discard a page table and
@@ -339,14 +340,15 @@ static void __arch_remap_range(void *_virt_addr, phys_addr_t phys_addr, size_t s
tlb_invalidate();
}
-static void early_remap_range(u32 addr, size_t size, unsigned map_type)
+
+static void early_remap_range(u32 addr, size_t size, unsigned map_type, bool force_pages)
{
- __arch_remap_range((void *)addr, addr, size, map_type);
+ __arch_remap_range((void *)addr, addr, size, map_type, force_pages);
}
int arch_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size, unsigned map_type)
{
- __arch_remap_range(virt_addr, phys_addr, size, map_type);
+ __arch_remap_range(virt_addr, phys_addr, size, map_type, false);
if (map_type == MAP_UNCACHED)
dma_inv_range(virt_addr, size);
@@ -616,6 +618,7 @@ void *dma_alloc_writecombine(struct device *dev, size_t size, dma_addr_t *dma_ha
void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned long barebox_start)
{
uint32_t *ttb = (uint32_t *)arm_mem_ttb(membase + memsize);
+ unsigned long barebox_size, optee_start;
pr_debug("enabling MMU, ttb @ 0x%p\n", ttb);
@@ -637,9 +640,27 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
create_flat_mapping();
/* maps main memory as cachable */
- early_remap_range(membase, memsize - OPTEE_SIZE, MAP_CACHED);
- early_remap_range(membase + memsize - OPTEE_SIZE, OPTEE_SIZE, MAP_UNCACHED);
- early_remap_range(PAGE_ALIGN_DOWN((uintptr_t)_stext), PAGE_ALIGN(_etext - _stext), MAP_CACHED);
+ optee_start = membase + memsize - OPTEE_SIZE;
+ barebox_size = optee_start - barebox_start;
+
+ /*
+ * map the bulk of the memory as sections to avoid allocating too many page tables
+ * at this early stage
+ */
+ early_remap_range(membase, barebox_start - membase, MAP_CACHED, false);
+ /*
+ * Map the remainder of the memory explicitly with two-level page
+ * tables: this is where barebox proper ends up. Later, barebox proper
+ * will remap its code segments read-only/executable and its read-only
+ * data segments read-only/execute-never, so the memory must be mapped
+ * pagewise. We can't split the section-wise mapping into a pagewise
+ * one later, as that would require a break-before-make sequence,
+ * which is impossible while barebox proper runs in the region.
+ */
+ early_remap_range(barebox_start, barebox_size, MAP_CACHED, true);
+ early_remap_range(optee_start, OPTEE_SIZE, MAP_UNCACHED, false);
+ early_remap_range(PAGE_ALIGN_DOWN((uintptr_t)_stext), PAGE_ALIGN(_etext - _stext),
+ MAP_CACHED, false);
__mmu_cache_on();
}
--
2.39.5
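A back-of-the-envelope calculation, under the usual short-descriptor
assumption of one 1 KiB second-level table per 1 MiB of pagewise-mapped
memory, shows why only barebox proper is forced to pages while the bulk of
SDRAM keeps its section mapping (the 1 GiB and 4 MiB figures below are
illustrative, not taken from any particular board):

	#include <stdio.h>

	int main(void)
	{
		unsigned long memsize  = 1024UL << 20; /* e.g. 1 GiB of SDRAM */
		unsigned long barebox  = 4UL << 20;    /* e.g. 4 MiB barebox region */
		unsigned long l2_bytes = 1024UL;       /* 256 entries * 4 bytes */

		/* forcing pages for all of SDRAM: one L2 table per 1 MiB */
		printf("all pagewise:     %lu KiB of L2 tables\n",
		       (memsize >> 20) * l2_bytes >> 10);	/* 1024 KiB */

		/* forcing pages only for barebox proper, as this patch does */
		printf("barebox pagewise: %lu KiB of L2 tables\n",
		       (barebox >> 20) * l2_bytes >> 10);	/* 4 KiB */

		return 0;
	}

The resulting early layout is thus: [membase, barebox_start) section-mapped
cached, [barebox_start, optee_start) page-mapped cached, and
[optee_start, membase + memsize) section-mapped uncached for OP-TEE.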
Thread overview: 15+ messages
2025-08-04 17:22 [PATCH v4 00/13] ARM: Map sections RO/XN Ahmad Fatoum
2025-08-04 17:22 ` [PATCH v4 01/13] mmu: explicitly map executable non-SDRAM regions with MAP_CODE Ahmad Fatoum
2025-08-04 17:22 ` [PATCH v4 02/13] ARM: pass barebox base to mmu_early_enable() Ahmad Fatoum
2025-08-04 17:22 ` [PATCH v4 03/13] ARM: mmu: move ARCH_MAP_WRITECOMBINE to header Ahmad Fatoum
2025-08-04 17:22 ` Ahmad Fatoum [this message]
2025-08-04 17:22 ` [PATCH v4 05/13] ARM: mmu: skip TLB invalidation if remapping zero bytes Ahmad Fatoum
2025-08-04 17:22 ` [PATCH v4 06/13] ARM: mmu: provide setup_trap_pages for both 32- and 64-bit Ahmad Fatoum
2025-08-04 17:22 ` [PATCH v4 07/13] ARM: mmu: share common memory bank remapping code Ahmad Fatoum
2025-08-04 17:22 ` [PATCH v4 08/13] ARM: mmu: make mmu_remap_memory_banks clearer with helper Ahmad Fatoum
2025-08-04 17:22 ` [PATCH v4 09/13] partition: rename region_overlap_end to region_overlap_end_inclusive Ahmad Fatoum
2025-08-04 17:22 ` [PATCH v4 10/13] partition: define new region_overlap_end_exclusive helper Ahmad Fatoum
2025-08-04 17:22 ` [PATCH v4 11/13] ARM: mmu: map text segment ro and data segments execute never Ahmad Fatoum
2025-08-04 17:22 ` [PATCH v4 12/13] ARM: mmu64: map memory for barebox proper pagewise Ahmad Fatoum
2025-08-04 17:22 ` [PATCH v4 13/13] ARM: mmu64: map text segment ro and data segments execute never Ahmad Fatoum
2025-08-05 10:18 ` [PATCH v4 00/13] ARM: Map sections RO/XN Sascha Hauer