From: Ahmad Fatoum <a.fatoum@pengutronix.de>
To: barebox@lists.infradead.org
Cc: Ahmad Fatoum <a.fatoum@pengutronix.de>
Subject: [PATCH 19/22] ARM: mmu32: flush only cacheable pages on remap
Date: Wed, 6 Aug 2025 14:37:11 +0200
Message-ID: <20250806123714.2092620-20-a.fatoum@pengutronix.de>
In-Reply-To: <20250806123714.2092620-1-a.fatoum@pengutronix.de>
For the same reasons described in b8454cae3b1e ("ARM64: mmu: flush
cacheable regions prior to remapping"), we want to stop doing cache
maintenance operations on MMIO regions and only flush regions that were
actually mapped cacheable before.
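To sketch the intended behaviour (this is a condensed illustration, not
the verbatim code from flush_cacheable_pages.h; the helper signatures
are assumed, level is assumed to be set even when no PTE exists, and
the real loop additionally clamps the flush range to the requested
region):

    static void flush_only_cacheable_sketch(void *start, size_t size)
    {
    	unsigned long flush_start = ~0UL, flush_end = ~0UL;
    	unsigned long addr, end = (unsigned long)start + size;
    	size_t block_size;

    	for (addr = (unsigned long)start; addr < end; addr += block_size) {
    		int level;
    		uint32_t *pte = find_pte(addr, &level); /* assumed signature */

    		block_size = granule_size(level);

    		if (!pte || !pte_is_cacheable(*pte, level))
    			continue; /* unmapped or MMIO: skip cache maintenance */

    		if (flush_end == addr) {
    			/* adjacent to the previous cacheable block: grow the run */
    			flush_end = addr + block_size;
    			continue;
    		}

    		/* disjoint run: flush the recorded one, then start anew */
    		if (flush_start != ~0UL)
    			dma_flush_range_end(flush_start, flush_end - 1);

    		flush_start = addr;
    		flush_end = addr + block_size;
    	}

    	/* the loop leaves the last run open, so flush it here */
    	if (flush_start != ~0UL)
    		dma_flush_range_end(flush_start, flush_end - 1);
    }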
We already do that for ARM64, and the code has been prepared for reuse
by moving it into a dedicated header, so let's define the functions it
expects and put it to use.
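As a side note, the short-descriptor address split documented in the
new granule_size() helper can be sanity-checked with a tiny standalone
program (illustration only, not part of the patch; the example address
is arbitrary, the masks mirror the comment added below):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
    	uint32_t va = 0x12345678;                /* arbitrary example address */
    	unsigned l1  = (va & 0xFFF00000) >> 20;  /* bits 31-20: PGD/PMD index, 1M granule */
    	unsigned l2  = (va & 0x000FF000) >> 12;  /* bits 19-12: PTE index, 4K granule */
    	unsigned off =  va & 0x00000FFF;         /* bits 11-0: offset within the page */

    	/* prints: va=0x12345678 -> L1 291, L2 69, off 0x678 */
    	printf("va=0x%08x -> L1 %u, L2 %u, off 0x%03x\n",
    	       (unsigned)va, l1, l2, off);
    	return 0;
    }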
Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
---
arch/arm/cpu/flush_cacheable_pages.h | 2 +-
arch/arm/cpu/mmu_32.c | 57 ++++++++++++++++++++++++++++++++++++++++++++++++++++++---
arch/arm/cpu/mmu_64.c | 2 +-
3 files changed, 56 insertions(+), 5 deletions(-)
diff --git a/arch/arm/cpu/flush_cacheable_pages.h b/arch/arm/cpu/flush_cacheable_pages.h
index 85fde0122802..a03e10810dc7 100644
--- a/arch/arm/cpu/flush_cacheable_pages.h
+++ b/arch/arm/cpu/flush_cacheable_pages.h
@@ -47,7 +47,7 @@ static void flush_cacheable_pages(void *start, size_t size)
block_size = granule_size(level);
- if (!pte || !pte_is_cacheable(*pte))
+ if (!pte || !pte_is_cacheable(*pte, level))
continue;
if (flush_end == addr) {
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 521e5f3a5769..a76d403e3477 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -24,6 +24,35 @@
#define PTRS_PER_PTE (PGDIR_SIZE / PAGE_SIZE)
+static size_t granule_size(int level)
+{
+ /*
+ * With 4k page granule, a virtual address is split into 2 lookup parts.
+ * We don't do LPAE or large (64K) pages for ARM32.
+ *
+ * _______________________
+ * | | | |
+ * | Lv1 | Lv2 | off |
+ * |_______|_______|_______|
+ * 31-20 19-12 11-00
+ *
+ * mask page size term
+ *
+ * Lv1: FFF00000 1M PGD/PMD
+ * Lv2: FF000 4K PTE
+ * off: FFF
+ */
+
+ switch (level) {
+ case 1:
+ return PGDIR_SIZE;
+ case 2:
+ return PAGE_SIZE;
+ }
+
+ return 0;
+}
+
static inline uint32_t *get_ttb(void)
{
/* Clear unpredictable bits [13:0] */
@@ -142,6 +171,20 @@ void dma_flush_range(void *ptr, size_t size)
outer_cache.flush_range(start, end);
}
+/**
+ * dma_flush_range_end - Flush caches for address range
+ * @start: Starting virtual address of the range.
+ * @end: Last virtual address in range (inclusive)
+ *
+ * This function cleans and invalidates all cache lines in the specified
+ * range. Note that end is inclusive, meaning that it's the last address
+ * that is flushed (assuming both start and total size are cache line aligned).
+ */
+static void dma_flush_range_end(unsigned long start, unsigned long end)
+{
+ dma_flush_range((void *)start, end - start + 1);
+}
+
void dma_inv_range(void *ptr, size_t size)
{
unsigned long start = (unsigned long)ptr;
@@ -389,15 +432,23 @@ static void early_remap_range(u32 addr, size_t size, maptype_t map_type)
__arch_remap_range((void *)addr, addr, size, map_type);
}
+static bool pte_is_cacheable(uint32_t pte, int level)
+{
+ return (level == 2 && (pte & PTE_CACHEABLE)) ||
+ (level == 1 && (pte & PMD_SECT_CACHEABLE));
+}
+
+#include "flush_cacheable_pages.h"
+
int arch_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size, maptype_t map_type)
{
+ if (!maptype_is_compatible(map_type, MAP_CACHED))
+ flush_cacheable_pages(virt_addr, size);
+
map_type = arm_mmu_maybe_skip_permissions(map_type);
__arch_remap_range(virt_addr, phys_addr, size, map_type);
- if (maptype_is_compatible(map_type, MAP_UNCACHED))
- dma_inv_range(virt_addr, size);
-
return 0;
}
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index 50bb25b5373a..d8ba7a171c2d 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -254,7 +254,7 @@ static int __arch_remap_range(uint64_t virt, uint64_t phys, uint64_t size,
return 0;
}
-static bool pte_is_cacheable(uint64_t pte)
+static bool pte_is_cacheable(uint64_t pte, int level)
{
return (pte & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL);
}
--
2.39.5
Thread overview: 24+ messages
2025-08-06 12:36 [PATCH 00/22] ARM: mmu: refactor 32-bit and 64-bit code Ahmad Fatoum
2025-08-06 12:36 ` [PATCH 01/22] ARM: mmu: introduce new maptype_t type Ahmad Fatoum
2025-08-06 12:36 ` [PATCH 02/22] ARM: mmu: compare only lowest 16 bits for map type Ahmad Fatoum
2025-08-06 12:36 ` [PATCH 03/22] ARM: mmu: prefix pre-MMU functions with early_ Ahmad Fatoum
2025-08-06 12:36 ` [PATCH 04/22] ARM: mmu: panic when alloc_pte fails Ahmad Fatoum
2025-08-06 12:36 ` [PATCH 05/22] ARM: mmu32: introduce new mmu_addr_t type Ahmad Fatoum
2025-08-06 12:36 ` [PATCH 06/22] ARM: mmu: provide zero page control in PBL Ahmad Fatoum
2025-08-06 12:36 ` [PATCH 07/22] ARM: mmu: print map type as string Ahmad Fatoum
2025-08-06 12:37 ` [PATCH 08/22] ARM: mmu64: rename create_sections to __arch_remap_range Ahmad Fatoum
2025-08-06 12:37 ` [PATCH 09/22] ARM: mmu: move get_pte_attrs call into __arch_remap_range Ahmad Fatoum
2025-08-06 12:37 ` [PATCH 10/22] ARM: mmu64: print debug message in __arch_remap_range Ahmad Fatoum
2025-08-06 12:37 ` [PATCH 11/22] ARM: mmu: make force_pages a maptype_t flag Ahmad Fatoum
2025-08-06 12:37 ` [PATCH 12/22] ARM: mmu64: move granule_size to the top of the file Ahmad Fatoum
2025-08-06 12:37 ` [PATCH 13/22] ARM: mmu64: fix benign off-by-one in flush_cacheable_pages Ahmad Fatoum
2025-08-06 12:37 ` [PATCH 14/22] ARM: mmu64: make flush_cacheable_pages less 64-bit dependent Ahmad Fatoum
2025-08-06 12:37 ` [PATCH 15/22] ARM: mmu64: allow asserting last level page in __find_pte Ahmad Fatoum
2025-08-06 12:37 ` [PATCH 16/22] ARM: mmu64: rename __find_pte to find_pte Ahmad Fatoum
2025-08-06 12:37 ` [PATCH 17/22] ARM: mmu32: rework find_pte to have ARM64 find_pte semantics Ahmad Fatoum
2025-08-06 12:37 ` [PATCH 18/22] ARM: mmu64: factor out flush_cacheable_pages for reusability Ahmad Fatoum
2025-08-06 12:37 ` [PATCH 19/22] ARM: mmu32: flush only cacheable pages on remap Ahmad Fatoum [this message]
2025-08-06 12:37 ` [PATCH 20/22] ARM: mmu32: factor out set_pte_range helper Ahmad Fatoum
2025-08-06 12:37 ` [PATCH 21/22] ARM: mmu64: " Ahmad Fatoum
2025-08-06 12:37 ` [PATCH 22/22] ARM: mmu: define dma_alloc_writecombine in common code Ahmad Fatoum
2025-08-07 7:24 ` [PATCH 00/22] ARM: mmu: refactor 32-bit and 64-bit code Sascha Hauer