* [PATCH] ARM64: mmu: fix mmu_early_enable VA->PA mapping
@ 2023-12-18 12:05 Lior Weintraub
  2023-12-18 12:42 ` [PATCH] fixup! " Ahmad Fatoum
  2023-12-18 12:43 ` [PATCH] " Ahmad Fatoum
  0 siblings, 2 replies; 4+ messages in thread
From: Lior Weintraub @ 2023-12-18 12:05 UTC (permalink / raw)
To: barebox

From 1b8f4ee9e29e722bbb0b7d0f7fed0ae213ef8637 Mon Sep 17 00:00:00 2001
From: Lior Weintraub <liorw@pliops.com>
Date: Mon, 18 Dec 2023 14:01:16 +0200
Subject: [PATCH] ARM64: mmu: fix mmu_early_enable VA->PA mapping

Fix the mmu_early_enable function to correctly map 40 bits of virtual
address space to physical addresses with a 1:1 mapping. It uses the
init_range function to set 2 table entries on TTB level0 and then fills
level1 with the correct 1:1 mapping.

Signed-off-by: Lior Weintraub <liorw@pliops.com>
---
 arch/arm/cpu/mmu_64.c            | 17 ++++++++++++++++-
 arch/arm/cpu/mmu_64.h            | 19 +++++++++++++++++--
 arch/arm/include/asm/pgtable64.h |  1 +
 3 files changed, 34 insertions(+), 3 deletions(-)

diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index c6ea63e655..fe5babbfd9 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -294,6 +294,19 @@ void dma_flush_range(void *ptr, size_t size)
 	v8_flush_dcache_range(start, end);
 }
 
+void init_range(void *virt_addr, size_t size)
+{
+	uint64_t *ttb = get_ttb();
+	uint64_t addr = (uint64_t)virt_addr;
+	while(size) {
+		early_remap_range((void *)addr, L0_XLAT_SIZE, MAP_UNCACHED);
+		split_block(ttb,0);
+		size -= L0_XLAT_SIZE;
+		addr += L0_XLAT_SIZE;
+		ttb++;
+	}
+}
+
 void mmu_early_enable(unsigned long membase, unsigned long memsize)
 {
 	int el;
@@ -308,7 +321,9 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize)
 
 	memset((void *)ttb, 0, GRANULE_SIZE);
 
-	early_remap_range(0, 1UL << (BITS_PER_VA - 1), MAP_UNCACHED);
+	// Assume maximum BITS_PER_PA set to 40 bits.
+	// Set 1:1 mapping of VA->PA. So to cover the full 1TB range we need 2 tables.
+	init_range(0, 2*L0_XLAT_SIZE);
 	early_remap_range(membase, memsize - OPTEE_SIZE, MAP_CACHED);
 	early_remap_range(membase + memsize - OPTEE_SIZE, OPTEE_SIZE, MAP_FAULT);
 	early_remap_range(PAGE_ALIGN_DOWN((uintptr_t)_stext), PAGE_ALIGN(_etext - _stext), MAP_CACHED);
diff --git a/arch/arm/cpu/mmu_64.h b/arch/arm/cpu/mmu_64.h
index e4d81dace4..e3959e4407 100644
--- a/arch/arm/cpu/mmu_64.h
+++ b/arch/arm/cpu/mmu_64.h
@@ -105,12 +105,27 @@ static inline uint64_t level2mask(int level)
 	return mask;
 }
 
+/**
+ * @brief Returns the TCR (Translation Control Register) value
+ *
+ * @param el - Exception Level
+ * @param va_bits - Virtual Address bits
+ * @return uint64_t TCR
+ */
 static inline uint64_t calc_tcr(int el, int va_bits)
 {
-	u64 ips;
-	u64 tcr;
+	u64 ips; // Intermediate Physical Address Size
+	u64 tcr; // Translation Control Register
 
+#if (BITS_PER_PA == 40)
 	ips = 2;
+#elif (BITS_PER_PA == 36)
+	ips = 1;
+#elif (BITS_PER_PA == 32)
+	ips = 0;
+#else
+#error "Unsupported"
+#endif
 
 	if (el == 1)
 		tcr = (ips << 32) | TCR_EPD1_DISABLE;
diff --git a/arch/arm/include/asm/pgtable64.h b/arch/arm/include/asm/pgtable64.h
index 21dac30cfe..b88ffe6be5 100644
--- a/arch/arm/include/asm/pgtable64.h
+++ b/arch/arm/include/asm/pgtable64.h
@@ -8,6 +8,7 @@
 
 #define VA_START 0x0
 #define BITS_PER_VA 48
+#define BITS_PER_PA 40 // Use 40 Physical address bits
 
 /* Granule size of 4KB is being used */
 #define GRANULE_SIZE_SHIFT 12
-- 
2.40.0

^ permalink raw reply [flat|nested] 4+ messages in thread
* [PATCH] fixup! ARM64: mmu: fix mmu_early_enable VA->PA mapping
  2023-12-18 12:05 [PATCH] ARM64: mmu: fix mmu_early_enable VA->PA mapping Lior Weintraub
@ 2023-12-18 12:42 ` Ahmad Fatoum
  2023-12-18 12:43 ` [PATCH] " Ahmad Fatoum
  1 sibling, 0 replies; 4+ messages in thread
From: Ahmad Fatoum @ 2023-12-18 12:42 UTC (permalink / raw)
To: barebox; +Cc: Ahmad Fatoum

Fix warnings and reformat:

- fix whitespaces in accordance with kernel coding style
- mark locally-used function static to fix warning
- remove cast to fix -Wint-conversion warning
- use C-style comments as used elsewhere

Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
---
 arch/arm/cpu/mmu_64.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index e855ae4fa7d4..a392789ab696 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -294,12 +294,13 @@ void dma_flush_range(void *ptr, size_t size)
 	v8_flush_dcache_range(start, end);
 }
 
-void init_range(void *virt_addr, size_t size)
+static void init_range(void *virt_addr, size_t size)
 {
 	uint64_t *ttb = get_ttb();
 	uint64_t addr = (uint64_t)virt_addr;
-	while(size) {
-		early_remap_range((void *)addr, L0_XLAT_SIZE, MAP_UNCACHED);
+
+	while (size) {
+		early_remap_range(addr, L0_XLAT_SIZE, MAP_UNCACHED);
 		split_block(ttb,0);
 		size -= L0_XLAT_SIZE;
 		addr += L0_XLAT_SIZE;
@@ -324,9 +325,10 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize)
 
 	memset((void *)ttb, 0, GRANULE_SIZE);
 
-	// Assume maximum BITS_PER_PA set to 40 bits.
-	// Set 1:1 mapping of VA->PA. So to cover the full 1TB range we need 2 tables.
-	init_range(0, 2*L0_XLAT_SIZE);
+	/* Assuming maximum BITS_PER_PA set to 40 bits, set 1:1 mapping
+	 * of VA->PA. To cover the full 1TiB range we need 2 tables.
+	 */
+	init_range(0, 2 * L0_XLAT_SIZE);
 	early_remap_range(membase, memsize - OPTEE_SIZE, MAP_CACHED);
 	early_remap_range(membase + memsize - OPTEE_SIZE, OPTEE_SIZE, MAP_FAULT);
 	early_remap_range(PAGE_ALIGN_DOWN((uintptr_t)_stext), PAGE_ALIGN(_etext - _stext), MAP_CACHED);
-- 
2.39.2

^ permalink raw reply [flat|nested] 4+ messages in thread
* Re: [PATCH] ARM64: mmu: fix mmu_early_enable VA->PA mapping
  2023-12-18 12:05 [PATCH] ARM64: mmu: fix mmu_early_enable VA->PA mapping Lior Weintraub
  2023-12-18 12:42 ` [PATCH] fixup! " Ahmad Fatoum
@ 2023-12-18 12:43 ` Ahmad Fatoum
  2023-12-18 14:00   ` Lior Weintraub
  1 sibling, 1 reply; 4+ messages in thread
From: Ahmad Fatoum @ 2023-12-18 12:43 UTC (permalink / raw)
To: Lior Weintraub, barebox

On 18.12.23 13:05, Lior Weintraub wrote:
> From 1b8f4ee9e29e722bbb0b7d0f7fed0ae213ef8637 Mon Sep 17 00:00:00 2001
> From: Lior Weintraub <liorw@pliops.com>
> Date: Mon, 18 Dec 2023 14:01:16 +0200
> Subject: [PATCH] ARM64: mmu: fix mmu_early_enable VA->PA mapping
>
> Fix the mmu_early_enable function to correctly map 40 bits of virtual
> address space to physical addresses with a 1:1 mapping. It uses the
> init_range function to set 2 table entries on TTB level0 and then fills
> level1 with the correct 1:1 mapping.
>
> Signed-off-by: Lior Weintraub <liorw@pliops.com>

Tested-by: Ahmad Fatoum <a.fatoum@pengutronix.de> # Qemu ARM64 Virt

I ran into some warnings when building, for which I sent out a patch
just now. I think Sascha can squash them if there are no further
comments.

Cheers,
Ahmad

-- 
Pengutronix e.K.                           |                             |
Steuerwalder Str. 21                       | http://www.pengutronix.de/  |
31137 Hildesheim, Germany                  | Phone: +49-5121-206917-0    |
Amtsgericht Hildesheim, HRA 2686           | Fax:   +49-5121-206917-5555 |

^ permalink raw reply [flat|nested] 4+ messages in thread
* RE: [PATCH] ARM64: mmu: fix mmu_early_enable VA->PA mapping
  2023-12-18 12:43 ` [PATCH] " Ahmad Fatoum
@ 2023-12-18 14:00   ` Lior Weintraub
  0 siblings, 0 replies; 4+ messages in thread
From: Lior Weintraub @ 2023-12-18 14:00 UTC (permalink / raw)
To: Ahmad Fatoum, barebox

Thanks Ahmad,

I think we can make the init_range function clearer:

static void init_range(size_t total_level0_tables)
{
	uint64_t *ttb = get_ttb();
	uint64_t addr = 0;

	while (total_level0_tables--) {
		early_remap_range(addr, L0_XLAT_SIZE, MAP_UNCACHED);
		split_block(ttb, 0);
		addr += L0_XLAT_SIZE;
		ttb++;
	}
}

Let me know if you want me to prepare a new patch or you can include it
in your warning fix patch.

Cheers,
Lior.

> -----Original Message-----
> From: Ahmad Fatoum <a.fatoum@pengutronix.de>
> Sent: Monday, December 18, 2023 2:43 PM
> To: Lior Weintraub <liorw@pliops.com>; barebox@lists.infradead.org
> Subject: Re: [PATCH] ARM64: mmu: fix mmu_early_enable VA->PA mapping
>
> On 18.12.23 13:05, Lior Weintraub wrote:
> > From 1b8f4ee9e29e722bbb0b7d0f7fed0ae213ef8637 Mon Sep 17 00:00:00 2001
> > From: Lior Weintraub <liorw@pliops.com>
> > Date: Mon, 18 Dec 2023 14:01:16 +0200
> > Subject: [PATCH] ARM64: mmu: fix mmu_early_enable VA->PA mapping
> >
> > Fix the mmu_early_enable function to correctly map 40 bits of virtual
> > address space to physical addresses with a 1:1 mapping. It uses the
> > init_range function to set 2 table entries on TTB level0 and then
> > fills level1 with the correct 1:1 mapping.
> >
> > Signed-off-by: Lior Weintraub <liorw@pliops.com>
>
> Tested-by: Ahmad Fatoum <a.fatoum@pengutronix.de> # Qemu ARM64 Virt
>
> I ran into some warnings when building, for which I sent out a patch
> just now. I think Sascha can squash them if there are no further
> comments.
>
> Cheers,
> Ahmad
>
> --
> Pengutronix e.K.                           |                             |
> Steuerwalder Str. 21                       | http://www.pengutronix.de/  |
> 31137 Hildesheim, Germany                  | Phone: +49-5121-206917-0    |
> Amtsgericht Hildesheim, HRA 2686           | Fax:   +49-5121-206917-5555 |

^ permalink raw reply [flat|nested] 4+ messages in thread
end of thread, other threads:[~2023-12-18 15:17 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-12-18 12:05 [PATCH] ARM64: mmu: fix mmu_early_enable VA->PA mapping Lior Weintraub
2023-12-18 12:42 ` [PATCH] fixup! " Ahmad Fatoum
2023-12-18 12:43 ` [PATCH] " Ahmad Fatoum
2023-12-18 14:00   ` Lior Weintraub
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox