From: Marco Felsch <m.felsch@pengutronix.de>
To: Sascha Hauer <s.hauer@pengutronix.de>,
	 BAREBOX <barebox@lists.infradead.org>
Cc: Marco Felsch <m.felsch@pengutronix.de>
Subject: [PATCH 09/15] nvmem: core: create a header for internal sharing
Date: Mon, 04 Aug 2025 16:36:55 +0200
Message-ID: <20250804-v2025-06-0-topic-nvmem-v1-9-7603eaa4d2b0@pengutronix.de>
In-Reply-To: <20250804-v2025-06-0-topic-nvmem-v1-0-7603eaa4d2b0@pengutronix.de>

Port Linux commit:

| commit ec9c08a1cb8dc5e8e003f95f5f62de41dde235bb
| Author: Miquel Raynal <miquel.raynal@bootlin.com>
| Date:   Fri Dec 15 11:15:29 2023 +0000
|
|     nvmem: Create a header for internal sharing
|
|     Before adding all the NVMEM layout bus infrastructure to the core, let's
|     move the main nvmem_device structure in an internal header, only
|     available to the core. This way all the additional code can be added in
|     a dedicated file in order to keep the current core file tidy.
|
|     Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
|     Signed-off-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
|     Link: https://lore.kernel.org/r/20231215111536.316972-4-srinivas.kandagatla@linaro.org
|     Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

which is required for the upcoming nvmem-layout support.
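
Purely as an illustration (not part of this patch; the file and function
names below are made up): with the structure moved into internals.h, a
dedicated file inside drivers/nvmem/ can include the internal header and
work on struct nvmem_device directly, for example:

  #include "internals.h"

  /*
   * Hypothetical helper: read raw bytes through the provider callback
   * that is now reachable via the shared struct nvmem_device definition.
   */
  static int nvmem_layout_read_raw(struct nvmem_device *nvmem,
                                   unsigned int offset, void *buf,
                                   size_t bytes)
  {
          if (!nvmem->reg_read)
                  return -EOPNOTSUPP;

          return nvmem->reg_read(nvmem->priv, offset, buf, bytes);
  }

Code outside drivers/nvmem/ keeps seeing only the opaque type through
<linux/nvmem-consumer.h> and <linux/nvmem-provider.h>.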

Signed-off-by: Marco Felsch <m.felsch@pengutronix.de>
---
 drivers/nvmem/core.c      | 20 +-------------------
 drivers/nvmem/internals.h | 30 ++++++++++++++++++++++++++++++
 2 files changed, 31 insertions(+), 19 deletions(-)

diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index 29a18cbeb517bdf23a8620f19a0ec21a1ac1b4e2..86f4f43a3084bf0c0e71b9af002334975c5e5e0e 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -13,25 +13,7 @@
 #include <linux/nvmem-consumer.h>
 #include <linux/nvmem-provider.h>
 
-struct nvmem_device {
-	const char		*name;
-	struct device		dev;
-	struct list_head	node;
-	int			stride;
-	int			word_size;
-	int			ncells;
-	int			users;
-	size_t			size;
-	bool			read_only;
-	struct cdev		cdev;
-	void			*priv;
-	struct list_head	cells;
-	nvmem_cell_post_process_t cell_post_process;
-	int			(*reg_write)(void *ctx, unsigned int reg,
-					     const void *val, size_t val_size);
-	int			(*reg_read)(void *ctx, unsigned int reg,
-					    void *val, size_t val_size);
-};
+#include "internals.h"
 
 struct nvmem_cell_entry {
 	const char		*name;
diff --git a/drivers/nvmem/internals.h b/drivers/nvmem/internals.h
new file mode 100644
index 0000000000000000000000000000000000000000..4faa9a7f9e76e93cb67192ab11475f466df31473
--- /dev/null
+++ b/drivers/nvmem/internals.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _LINUX_NVMEM_INTERNALS_H
+#define _LINUX_NVMEM_INTERNALS_H
+
+#include <device.h>
+#include <linux/nvmem-consumer.h>
+#include <linux/nvmem-provider.h>
+
+struct nvmem_device {
+	const char		*name;
+	struct device		dev;
+	struct list_head	node;
+	int			stride;
+	int			word_size;
+	int			ncells;
+	int			users;
+	size_t			size;
+	bool			read_only;
+	struct cdev		cdev;
+	void			*priv;
+	struct list_head	cells;
+	nvmem_cell_post_process_t cell_post_process;
+	int			(*reg_write)(void *ctx, unsigned int reg,
+					     const void *val, size_t val_size);
+	int			(*reg_read)(void *ctx, unsigned int reg,
+					    void *val, size_t val_size);
+};
+
+#endif  /* ifndef _LINUX_NVMEM_INTERNALS_H */

-- 
2.39.5

Thread overview: 17+ messages
2025-08-04 14:36 [PATCH 00/15] NVMEM: Add support for layout drivers Marco Felsch
2025-08-04 14:36 ` [PATCH 01/15] of: sync of_*_phandle_with_args with Linux Marco Felsch
2025-08-04 14:36 ` [PATCH 02/15] of: base: add of_parse_phandle_with_optional_args() Marco Felsch
2025-08-04 14:36 ` [PATCH 03/15] of: device: Export of_device_make_bus_id() Marco Felsch
2025-08-04 14:36 ` [PATCH 04/15] nvmem: core: fix nvmem_register error path Marco Felsch
2025-08-04 14:36 ` [PATCH 05/15] nvmem: core: sync with Linux Marco Felsch
2025-08-04 14:36 ` [PATCH 06/15] nvmem: core: expose nvmem cells as cdev Marco Felsch
2025-08-04 14:36 ` [PATCH 07/15] nvmem: core: allow single and dynamic device ids Marco Felsch
2025-08-04 14:36 ` [PATCH 08/15] eeprom: at24: fix device name handling Marco Felsch
2025-08-04 14:36 ` Marco Felsch [this message]
2025-08-04 14:36 ` [PATCH 10/15] nvmem: core: add nvmem-layout support Marco Felsch
2025-08-04 14:36 ` [PATCH 11/15] nvmem: core: add an index parameter to the cell Marco Felsch
2025-08-04 14:36 ` [PATCH 12/15] nvmem: core: add per-cell post processing Marco Felsch
2025-08-04 14:36 ` [PATCH 13/15] nvmem: core: add cell based fixup logic Marco Felsch
2025-08-04 14:37 ` [PATCH 14/15] nvmem: core: provide own priv pointer in post process callback Marco Felsch
2025-08-04 14:37 ` [PATCH 15/15] nvmem: core: drop global cell_post_process Marco Felsch
2025-08-05 10:44 ` [PATCH 00/15] NVMEM: Add support for layout drivers Sascha Hauer
