
device-dax for 5.1

* Replace the /sys/class/dax device model with /sys/bus/dax, and include
   a compat driver so distributions can opt-in to the new ABI.
 
 * Allow for an alternative driver for the device-dax address-range
 
 * Introduce the 'kmem' driver to hotplug / assign a device-dax
   address-range to the core-mm.
 
 * Arrange for the device-dax target-node to be onlined so that the newly
   added memory range can be uniquely referenced by numa apis.
 -----BEGIN PGP SIGNATURE-----
 
 iQIcBAABAgAGBQJchWpGAAoJEB7SkWpmfYgCJk8P/0Q1DINszUDO/vKjJ09cDs9P
 Jw3it6GBIL50rDOu9QdcprSpwYDD0h1mLAV/m6oa3bVO+p4uWGvnxaxRx2HN2c/v
 vhZFtUDpHlqR63vzWMNVKRprYixCRJDUr6xQhhCcE3ak/ELN6w7LWfikKVWv15UL
 MfR96IQU38f+xRda/zSXnL9606Dvkvu/inEHj84lRcHIwj3sQAUalrE8bR3O32gZ
 bDg/l5kzT49o8ZXUo/TegvRSSSZpJmOl2DD0RW+ax5q3NI2bOXFrVDUKBKxf/hcQ
 E/V9i57TrqQx0GqRhnU7rN/v53cFZGGs31TEEIB/xs3bzCnADxwXcjL5b5K005J6
 vJjBA2ODBewHFK3uVx46Hy1iV4eCtZWj4QrMnrjdSrjXOfbF5GTbWOhPFgoq7TWf
 S7VqFEf3I2gDPaMq4o8Ej1kLH4HMYeor2NSOZjyvGn87rSZ3ZIQguwbaNIVl+itz
 gdDt0ZOU0BgOBkV+rZIeZDaGdloWCHcDPL15CkZaOZyzdWhfEZ7dod6ad+9udilU
 EUPH62RgzXZtfm5zpebYyjNVLbb9pLZ0nT+UypyGR6zqWx1SqU3mXi63NFXPco+x
 XA9j//edPeI6NHg2CXLEh8DLuCg3dG1zWRJANkiF+niBwyCR8CHtGWAoY6soXbKe
 2UrXGcIfXxyJ8V9v8v4q
 =hfa3
 -----END PGP SIGNATURE-----

Merge tag 'devdax-for-5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm

Pull device-dax updates from Dan Williams:
 "New device-dax infrastructure to allow persistent memory and other
  "reserved" / performance differentiated memories, to be assigned to
  the core-mm as "System RAM".

  Some users want to use persistent memory as additional volatile
  memory. They are willing to cope with potential performance
  differences, for example between DRAM and 3D Xpoint, and want to use
  typical Linux memory management apis rather than a userspace memory
  allocator layered over an mmap() of a dax file. The administration
  model is to decide how much Persistent Memory (pmem) to use as System
  RAM, create a device-dax-mode namespace of that size, and then assign
  it to the core-mm. The rationale for device-dax is that it is a
  generic memory-mapping driver that can be layered over any "special
  purpose" memory, not just pmem. On subsequent boots udev rules can be
  used to restore the memory assignment.
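   As a rough sketch of that administration flow using the ndctl tooling
   (the region name and size below are illustrative only, and exact flag
   spellings vary by ndctl version):

```shell
# Carve a 16G device-dax mode namespace out of pmem region0
# ('ndctl list -R' shows the regions actually present on a system)
ndctl create-namespace --mode=devdax --region=region0 --size=16G

# The resulting device-dax instance (e.g. dax0.0) is enumerated under
# the new bus ABI; a udev rule keyed on the instance name can redo the
# kmem assignment on subsequent boots
ls /sys/bus/dax/devices/
```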

  One implication of using pmem as RAM is that mlock() no longer keeps
  data off persistent media. For this reason it is recommended to enable
  NVDIMM Security (previously merged for 5.0) to encrypt pmem contents
  at rest. We considered making this recommendation an actively enforced
  requirement, but in the end decided to leave it as a distribution /
  administrator policy to allow for emulation and test environments that
  lack security capable NVDIMMs.

  Summary:

   - Replace the /sys/class/dax device model with /sys/bus/dax, and
     include a compat driver so distributions can opt-in to the new ABI.

   - Allow for an alternative driver for the device-dax address-range

   - Introduce the 'kmem' driver to hotplug / assign a device-dax
     address-range to the core-mm.

   - Arrange for the device-dax target-node to be onlined so that the
     newly added memory range can be uniquely referenced by numa apis"
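Per the DEV_DAX_KMEM help text further down, the hotplug assignment is a manual unbind/bind through sysfs on each boot. A hedged sketch (dax0.0 is an example instance name, and the paths assume the new /sys/bus/dax ABI rather than the deprecated /sys/class/dax):

```shell
# Release the instance from the device_dax (raw mmap) driver
echo dax0.0 > /sys/bus/dax/drivers/device_dax/unbind
# Register the id with kmem; per the "Auto-bind device after successful
# new_id" change in this pull, this also attaches the driver
echo dax0.0 > /sys/bus/dax/drivers/kmem/new_id
# The range is added as offline System RAM blocks; online them
for mem in /sys/devices/system/memory/memory*/state; do
	grep -q offline "$mem" && echo online > "$mem"
done
```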

NOTE! I'm not entirely happy with the whole "PMEM as RAM" model because
we currently have special - and very annoying rules in the kernel about
accessing PMEM only with the "MC safe" accessors, because machine checks
inside the regular repeat string copy functions can be fatal in some
(not described) circumstances.

And apparently the PMEM modules can cause that a lot more than regular
RAM.  The argument is that this happens because PMEM doesn't necessarily
get scrubbed at boot like RAM does, but that is planned to be added for
the user space tooling.

Quoting Dan from another email:
 "The exposure can be reduced in the volatile-RAM case by scanning for
  and clearing errors before it is onlined as RAM. The userspace tooling
  for that can be in place before v5.1-final. There's also runtime
  notifications of errors via acpi_nfit_uc_error_notify() from
  background scrubbers on the DIMM devices. With that mechanism the
  kernel could proactively clear newly discovered poison in the volatile
  case, but that would be additional development more suitable for v5.2.

  I understand the concern, and the need to highlight this issue by
  tapping the brakes on feature development, but I don't see PMEM as RAM
  making the situation worse when the exposure is also there via DAX in
  the PMEM case. Volatile-RAM is arguably a safer use case since it's
  possible to repair pages where the persistent case needs active
  application coordination"

* tag 'devdax-for-5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm:
  device-dax: "Hotplug" persistent memory for use like normal RAM
  mm/resource: Let walk_system_ram_range() search child resources
  mm/memory-hotplug: Allow memory resources to be children
  mm/resource: Move HMM pr_debug() deeper into resource code
  mm/resource: Return real error codes from walk failures
  device-dax: Add a 'modalias' attribute to DAX 'bus' devices
  device-dax: Add a 'target_node' attribute
  device-dax: Auto-bind device after successful new_id
  acpi/nfit, device-dax: Identify differentiated memory with a unique numa-node
  device-dax: Add /sys/class/dax backwards compatibility
  device-dax: Add support for a dax override driver
  device-dax: Move resource pinning+mapping into the common driver
  device-dax: Introduce bus + driver model
  device-dax: Start defining a dax bus model
  device-dax: Remove multi-resource infrastructure
  device-dax: Kill dax_region base
  device-dax: Kill dax_region ida
hifive-unleashed-5.1
Linus Torvalds 2019-03-16 13:05:32 -07:00
commit f67e3fb489
30 changed files with 1112 additions and 538 deletions


@@ -0,0 +1,22 @@
What:		/sys/class/dax/
Date:		May, 2016
KernelVersion:	v4.7
Contact:	linux-nvdimm@lists.01.org
Description:	Device DAX is the device-centric analogue of Filesystem
		DAX (CONFIG_FS_DAX). It allows memory ranges to be
		allocated and mapped without need of an intervening file
		system. Device DAX is strict, precise and predictable.
		Specifically this interface:

		1/ Guarantees fault granularity with respect to a given
		page size (pte, pmd, or pud) set at configuration time.

		2/ Enforces deterministic behavior by being strict about
		what fault scenarios are supported.

		The /sys/class/dax/ interface enumerates all the
		device-dax instances in the system. The ABI is
		deprecated and will be removed after 2020. It is
		replaced with the DAX bus interface /sys/bus/dax/ where
		device-dax instances can be found under
		/sys/bus/dax/devices/


@@ -239,6 +239,7 @@ static int papr_scm_nvdimm_init(struct papr_scm_priv *p)
 	memset(&ndr_desc, 0, sizeof(ndr_desc));
 	ndr_desc.attr_groups = region_attr_groups;
 	ndr_desc.numa_node = dev_to_node(&p->pdev->dev);
+	ndr_desc.target_node = ndr_desc.numa_node;
 	ndr_desc.res = &p->res;
 	ndr_desc.of_node = p->dn;
 	ndr_desc.provider_data = p;


@@ -2956,11 +2956,15 @@ static int acpi_nfit_register_region(struct acpi_nfit_desc *acpi_desc,
 	ndr_desc->res = &res;
 	ndr_desc->provider_data = nfit_spa;
 	ndr_desc->attr_groups = acpi_nfit_region_attribute_groups;
-	if (spa->flags & ACPI_NFIT_PROXIMITY_VALID)
+	if (spa->flags & ACPI_NFIT_PROXIMITY_VALID) {
 		ndr_desc->numa_node = acpi_map_pxm_to_online_node(
 				spa->proximity_domain);
-	else
+		ndr_desc->target_node = acpi_map_pxm_to_node(
+				spa->proximity_domain);
+	} else {
 		ndr_desc->numa_node = NUMA_NO_NODE;
+		ndr_desc->target_node = NUMA_NO_NODE;
+	}
 
 	/*
 	 * Persistence domain bits are hierarchical, if


@@ -84,6 +84,7 @@ int acpi_map_pxm_to_node(int pxm)
 	return node;
 }
+EXPORT_SYMBOL(acpi_map_pxm_to_node);
 
 /**
  * acpi_map_pxm_to_online_node - Map proximity ID to online node


@@ -88,6 +88,7 @@ unsigned long __weak memory_block_size_bytes(void)
 {
 	return MIN_MEMORY_BLOCK_SIZE;
 }
+EXPORT_SYMBOL_GPL(memory_block_size_bytes);
 
 static unsigned long get_memory_block_size(void)
 {


@@ -23,12 +23,38 @@ config DEV_DAX
 config DEV_DAX_PMEM
 	tristate "PMEM DAX: direct access to persistent memory"
 	depends on LIBNVDIMM && NVDIMM_DAX && DEV_DAX
+	depends on m # until we can kill DEV_DAX_PMEM_COMPAT
 	default DEV_DAX
 	help
 	  Support raw access to persistent memory. Note that this
 	  driver consumes memory ranges allocated and exported by the
 	  libnvdimm sub-system.
 
-	  Say Y if unsure
+	  Say M if unsure
+
+config DEV_DAX_KMEM
+	tristate "KMEM DAX: volatile-use of persistent memory"
+	default DEV_DAX
+	depends on DEV_DAX
+	depends on MEMORY_HOTPLUG # for add_memory() and friends
+	help
+	  Support access to persistent memory as if it were RAM. This
+	  allows easier use of persistent memory by unmodified
+	  applications.
+
+	  To use this feature, a DAX device must be unbound from the
+	  device_dax driver (PMEM DAX) and bound to this kmem driver
+	  on each boot.
+
+	  Say N if unsure.
+
+config DEV_DAX_PMEM_COMPAT
+	tristate "PMEM DAX: support the deprecated /sys/class/dax interface"
+	depends on DEV_DAX_PMEM
+	default DEV_DAX_PMEM
+	help
+	  Older versions of the libdaxctl library expect to find all
+	  device-dax instances under /sys/class/dax. If libdaxctl in
+	  your distribution is older than v58 say M, otherwise say N.
+
 endif


@@ -1,8 +1,10 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_DAX) += dax.o
 obj-$(CONFIG_DEV_DAX) += device_dax.o
-obj-$(CONFIG_DEV_DAX_PMEM) += dax_pmem.o
+obj-$(CONFIG_DEV_DAX_KMEM) += kmem.o
 
 dax-y := super.o
-dax_pmem-y := pmem.o
+dax-y += bus.o
 device_dax-y := device.o
+
+obj-y += pmem/

drivers/dax/bus.c (new file, mode 100644, 503 lines)

@@ -0,0 +1,503 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2017-2018 Intel Corporation. All rights reserved. */
#include <linux/memremap.h>
#include <linux/device.h>
#include <linux/mutex.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/dax.h>
#include "dax-private.h"
#include "bus.h"
static struct class *dax_class;
static DEFINE_MUTEX(dax_bus_lock);
#define DAX_NAME_LEN 30
struct dax_id {
struct list_head list;
char dev_name[DAX_NAME_LEN];
};
static int dax_bus_uevent(struct device *dev, struct kobj_uevent_env *env)
{
/*
* We only ever expect to handle device-dax instances, i.e. the
* @type argument to MODULE_ALIAS_DAX_DEVICE() is always zero
*/
return add_uevent_var(env, "MODALIAS=" DAX_DEVICE_MODALIAS_FMT, 0);
}
static struct dax_device_driver *to_dax_drv(struct device_driver *drv)
{
return container_of(drv, struct dax_device_driver, drv);
}
static struct dax_id *__dax_match_id(struct dax_device_driver *dax_drv,
const char *dev_name)
{
struct dax_id *dax_id;
lockdep_assert_held(&dax_bus_lock);
list_for_each_entry(dax_id, &dax_drv->ids, list)
if (sysfs_streq(dax_id->dev_name, dev_name))
return dax_id;
return NULL;
}
static int dax_match_id(struct dax_device_driver *dax_drv, struct device *dev)
{
int match;
mutex_lock(&dax_bus_lock);
match = !!__dax_match_id(dax_drv, dev_name(dev));
mutex_unlock(&dax_bus_lock);
return match;
}
enum id_action {
ID_REMOVE,
ID_ADD,
};
static ssize_t do_id_store(struct device_driver *drv, const char *buf,
size_t count, enum id_action action)
{
struct dax_device_driver *dax_drv = to_dax_drv(drv);
unsigned int region_id, id;
char devname[DAX_NAME_LEN];
struct dax_id *dax_id;
ssize_t rc = count;
int fields;
fields = sscanf(buf, "dax%d.%d", &region_id, &id);
if (fields != 2)
return -EINVAL;
sprintf(devname, "dax%d.%d", region_id, id);
if (!sysfs_streq(buf, devname))
return -EINVAL;
mutex_lock(&dax_bus_lock);
dax_id = __dax_match_id(dax_drv, buf);
if (!dax_id) {
if (action == ID_ADD) {
dax_id = kzalloc(sizeof(*dax_id), GFP_KERNEL);
if (dax_id) {
strncpy(dax_id->dev_name, buf, DAX_NAME_LEN);
list_add(&dax_id->list, &dax_drv->ids);
} else
rc = -ENOMEM;
} else
/* nothing to remove */;
} else if (action == ID_REMOVE) {
list_del(&dax_id->list);
kfree(dax_id);
} else
/* dax_id already added */;
mutex_unlock(&dax_bus_lock);
if (rc < 0)
return rc;
if (action == ID_ADD)
rc = driver_attach(drv);
if (rc)
return rc;
return count;
}
static ssize_t new_id_store(struct device_driver *drv, const char *buf,
size_t count)
{
return do_id_store(drv, buf, count, ID_ADD);
}
static DRIVER_ATTR_WO(new_id);
static ssize_t remove_id_store(struct device_driver *drv, const char *buf,
size_t count)
{
return do_id_store(drv, buf, count, ID_REMOVE);
}
static DRIVER_ATTR_WO(remove_id);
static struct attribute *dax_drv_attrs[] = {
&driver_attr_new_id.attr,
&driver_attr_remove_id.attr,
NULL,
};
ATTRIBUTE_GROUPS(dax_drv);
static int dax_bus_match(struct device *dev, struct device_driver *drv);
static struct bus_type dax_bus_type = {
.name = "dax",
.uevent = dax_bus_uevent,
.match = dax_bus_match,
.drv_groups = dax_drv_groups,
};
static int dax_bus_match(struct device *dev, struct device_driver *drv)
{
struct dax_device_driver *dax_drv = to_dax_drv(drv);
/*
* All but the 'device-dax' driver, which has 'match_always'
* set, requires an exact id match.
*/
if (dax_drv->match_always)
return 1;
return dax_match_id(dax_drv, dev);
}
/*
* Rely on the fact that drvdata is set before the attributes are
* registered, and that the attributes are unregistered before drvdata
* is cleared to assume that drvdata is always valid.
*/
static ssize_t id_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct dax_region *dax_region = dev_get_drvdata(dev);
return sprintf(buf, "%d\n", dax_region->id);
}
static DEVICE_ATTR_RO(id);
static ssize_t region_size_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct dax_region *dax_region = dev_get_drvdata(dev);
return sprintf(buf, "%llu\n", (unsigned long long)
resource_size(&dax_region->res));
}
static struct device_attribute dev_attr_region_size = __ATTR(size, 0444,
region_size_show, NULL);
static ssize_t align_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct dax_region *dax_region = dev_get_drvdata(dev);
return sprintf(buf, "%u\n", dax_region->align);
}
static DEVICE_ATTR_RO(align);
static struct attribute *dax_region_attributes[] = {
&dev_attr_region_size.attr,
&dev_attr_align.attr,
&dev_attr_id.attr,
NULL,
};
static const struct attribute_group dax_region_attribute_group = {
.name = "dax_region",
.attrs = dax_region_attributes,
};
static const struct attribute_group *dax_region_attribute_groups[] = {
&dax_region_attribute_group,
NULL,
};
static void dax_region_free(struct kref *kref)
{
struct dax_region *dax_region;
dax_region = container_of(kref, struct dax_region, kref);
kfree(dax_region);
}
void dax_region_put(struct dax_region *dax_region)
{
kref_put(&dax_region->kref, dax_region_free);
}
EXPORT_SYMBOL_GPL(dax_region_put);
static void dax_region_unregister(void *region)
{
struct dax_region *dax_region = region;
sysfs_remove_groups(&dax_region->dev->kobj,
dax_region_attribute_groups);
dax_region_put(dax_region);
}
struct dax_region *alloc_dax_region(struct device *parent, int region_id,
struct resource *res, int target_node, unsigned int align,
unsigned long pfn_flags)
{
struct dax_region *dax_region;
/*
* The DAX core assumes that it can store its private data in
* parent->driver_data. This WARN is a reminder / safeguard for
* developers of device-dax drivers.
*/
if (dev_get_drvdata(parent)) {
dev_WARN(parent, "dax core failed to setup private data\n");
return NULL;
}
if (!IS_ALIGNED(res->start, align)
|| !IS_ALIGNED(resource_size(res), align))
return NULL;
dax_region = kzalloc(sizeof(*dax_region), GFP_KERNEL);
if (!dax_region)
return NULL;
dev_set_drvdata(parent, dax_region);
memcpy(&dax_region->res, res, sizeof(*res));
dax_region->pfn_flags = pfn_flags;
kref_init(&dax_region->kref);
dax_region->id = region_id;
dax_region->align = align;
dax_region->dev = parent;
dax_region->target_node = target_node;
if (sysfs_create_groups(&parent->kobj, dax_region_attribute_groups)) {
kfree(dax_region);
return NULL;
}
kref_get(&dax_region->kref);
if (devm_add_action_or_reset(parent, dax_region_unregister, dax_region))
return NULL;
return dax_region;
}
EXPORT_SYMBOL_GPL(alloc_dax_region);
static ssize_t size_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct dev_dax *dev_dax = to_dev_dax(dev);
unsigned long long size = resource_size(&dev_dax->region->res);
return sprintf(buf, "%llu\n", size);
}
static DEVICE_ATTR_RO(size);
static int dev_dax_target_node(struct dev_dax *dev_dax)
{
struct dax_region *dax_region = dev_dax->region;
return dax_region->target_node;
}
static ssize_t target_node_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct dev_dax *dev_dax = to_dev_dax(dev);
return sprintf(buf, "%d\n", dev_dax_target_node(dev_dax));
}
static DEVICE_ATTR_RO(target_node);
static ssize_t modalias_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
/*
* We only ever expect to handle device-dax instances, i.e. the
* @type argument to MODULE_ALIAS_DAX_DEVICE() is always zero
*/
return sprintf(buf, DAX_DEVICE_MODALIAS_FMT "\n", 0);
}
static DEVICE_ATTR_RO(modalias);
static umode_t dev_dax_visible(struct kobject *kobj, struct attribute *a, int n)
{
struct device *dev = container_of(kobj, struct device, kobj);
struct dev_dax *dev_dax = to_dev_dax(dev);
if (a == &dev_attr_target_node.attr && dev_dax_target_node(dev_dax) < 0)
return 0;
return a->mode;
}
static struct attribute *dev_dax_attributes[] = {
&dev_attr_modalias.attr,
&dev_attr_size.attr,
&dev_attr_target_node.attr,
NULL,
};
static const struct attribute_group dev_dax_attribute_group = {
.attrs = dev_dax_attributes,
.is_visible = dev_dax_visible,
};
static const struct attribute_group *dax_attribute_groups[] = {
&dev_dax_attribute_group,
NULL,
};
void kill_dev_dax(struct dev_dax *dev_dax)
{
struct dax_device *dax_dev = dev_dax->dax_dev;
struct inode *inode = dax_inode(dax_dev);
kill_dax(dax_dev);
unmap_mapping_range(inode->i_mapping, 0, 0, 1);
}
EXPORT_SYMBOL_GPL(kill_dev_dax);
static void dev_dax_release(struct device *dev)
{
struct dev_dax *dev_dax = to_dev_dax(dev);
struct dax_region *dax_region = dev_dax->region;
struct dax_device *dax_dev = dev_dax->dax_dev;
dax_region_put(dax_region);
put_dax(dax_dev);
kfree(dev_dax);
}
static void unregister_dev_dax(void *dev)
{
struct dev_dax *dev_dax = to_dev_dax(dev);
dev_dbg(dev, "%s\n", __func__);
kill_dev_dax(dev_dax);
device_del(dev);
put_device(dev);
}
struct dev_dax *__devm_create_dev_dax(struct dax_region *dax_region, int id,
struct dev_pagemap *pgmap, enum dev_dax_subsys subsys)
{
struct device *parent = dax_region->dev;
struct dax_device *dax_dev;
struct dev_dax *dev_dax;
struct inode *inode;
struct device *dev;
int rc = -ENOMEM;
if (id < 0)
return ERR_PTR(-EINVAL);
dev_dax = kzalloc(sizeof(*dev_dax), GFP_KERNEL);
if (!dev_dax)
return ERR_PTR(-ENOMEM);
memcpy(&dev_dax->pgmap, pgmap, sizeof(*pgmap));
/*
* No 'host' or dax_operations since there is no access to this
* device outside of mmap of the resulting character device.
*/
dax_dev = alloc_dax(dev_dax, NULL, NULL);
if (!dax_dev)
goto err;
/* a device_dax instance is dead while the driver is not attached */
kill_dax(dax_dev);
/* from here on we're committed to teardown via dax_dev_release() */
dev = &dev_dax->dev;
device_initialize(dev);
dev_dax->dax_dev = dax_dev;
dev_dax->region = dax_region;
dev_dax->target_node = dax_region->target_node;
kref_get(&dax_region->kref);
inode = dax_inode(dax_dev);
dev->devt = inode->i_rdev;
if (subsys == DEV_DAX_BUS)
dev->bus = &dax_bus_type;
else
dev->class = dax_class;
dev->parent = parent;
dev->groups = dax_attribute_groups;
dev->release = dev_dax_release;
dev_set_name(dev, "dax%d.%d", dax_region->id, id);
rc = device_add(dev);
if (rc) {
kill_dev_dax(dev_dax);
put_device(dev);
return ERR_PTR(rc);
}
rc = devm_add_action_or_reset(dax_region->dev, unregister_dev_dax, dev);
if (rc)
return ERR_PTR(rc);
return dev_dax;
err:
kfree(dev_dax);
return ERR_PTR(rc);
}
EXPORT_SYMBOL_GPL(__devm_create_dev_dax);
static int match_always_count;
int __dax_driver_register(struct dax_device_driver *dax_drv,
struct module *module, const char *mod_name)
{
struct device_driver *drv = &dax_drv->drv;
int rc = 0;
INIT_LIST_HEAD(&dax_drv->ids);
drv->owner = module;
drv->name = mod_name;
drv->mod_name = mod_name;
drv->bus = &dax_bus_type;
/* there can only be one default driver */
mutex_lock(&dax_bus_lock);
match_always_count += dax_drv->match_always;
if (match_always_count > 1) {
match_always_count--;
WARN_ON(1);
rc = -EINVAL;
}
mutex_unlock(&dax_bus_lock);
if (rc)
return rc;
return driver_register(drv);
}
EXPORT_SYMBOL_GPL(__dax_driver_register);
void dax_driver_unregister(struct dax_device_driver *dax_drv)
{
struct device_driver *drv = &dax_drv->drv;
struct dax_id *dax_id, *_id;
mutex_lock(&dax_bus_lock);
match_always_count -= dax_drv->match_always;
list_for_each_entry_safe(dax_id, _id, &dax_drv->ids, list) {
list_del(&dax_id->list);
kfree(dax_id);
}
mutex_unlock(&dax_bus_lock);
driver_unregister(drv);
}
EXPORT_SYMBOL_GPL(dax_driver_unregister);
int __init dax_bus_init(void)
{
int rc;
if (IS_ENABLED(CONFIG_DEV_DAX_PMEM_COMPAT)) {
dax_class = class_create(THIS_MODULE, "dax");
if (IS_ERR(dax_class))
return PTR_ERR(dax_class);
}
rc = bus_register(&dax_bus_type);
if (rc)
class_destroy(dax_class);
return rc;
}
void __exit dax_bus_exit(void)
{
bus_unregister(&dax_bus_type);
class_destroy(dax_class);
}

drivers/dax/bus.h (new file, mode 100644, 61 lines)

@@ -0,0 +1,61 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2016 - 2018 Intel Corporation. All rights reserved. */
#ifndef __DAX_BUS_H__
#define __DAX_BUS_H__
#include <linux/device.h>
struct dev_dax;
struct resource;
struct dax_device;
struct dax_region;
void dax_region_put(struct dax_region *dax_region);
struct dax_region *alloc_dax_region(struct device *parent, int region_id,
struct resource *res, int target_node, unsigned int align,
unsigned long flags);
enum dev_dax_subsys {
DEV_DAX_BUS,
DEV_DAX_CLASS,
};
struct dev_dax *__devm_create_dev_dax(struct dax_region *dax_region, int id,
struct dev_pagemap *pgmap, enum dev_dax_subsys subsys);
static inline struct dev_dax *devm_create_dev_dax(struct dax_region *dax_region,
int id, struct dev_pagemap *pgmap)
{
return __devm_create_dev_dax(dax_region, id, pgmap, DEV_DAX_BUS);
}
/* to be deleted when DEV_DAX_CLASS is removed */
struct dev_dax *__dax_pmem_probe(struct device *dev, enum dev_dax_subsys subsys);
struct dax_device_driver {
struct device_driver drv;
struct list_head ids;
int match_always;
};
int __dax_driver_register(struct dax_device_driver *dax_drv,
struct module *module, const char *mod_name);
#define dax_driver_register(driver) \
__dax_driver_register(driver, THIS_MODULE, KBUILD_MODNAME)
void dax_driver_unregister(struct dax_device_driver *dax_drv);
void kill_dev_dax(struct dev_dax *dev_dax);
#if IS_ENABLED(CONFIG_DEV_DAX_PMEM_COMPAT)
int dev_dax_probe(struct device *dev);
#endif
/*
* While run_dax() is potentially a generic operation that could be
* defined in include/linux/dax.h we don't want to grow any users
* outside of drivers/dax/
*/
void run_dax(struct dax_device *dax_dev);
#define MODULE_ALIAS_DAX_DEVICE(type) \
MODULE_ALIAS("dax:t" __stringify(type) "*")
#define DAX_DEVICE_MODALIAS_FMT "dax:t%d"
#endif /* __DAX_BUS_H__ */


@@ -16,10 +16,17 @@
 #include <linux/device.h>
 #include <linux/cdev.h>
 
+/* private routines between core files */
+struct dax_device;
+struct dax_device *inode_dax(struct inode *inode);
+struct inode *dax_inode(struct dax_device *dax_dev);
+int dax_bus_init(void);
+void dax_bus_exit(void);
+
 /**
  * struct dax_region - mapping infrastructure for dax devices
  * @id: kernel-wide unique region for a memory range
- * @base: linear address corresponding to @res
+ * @target_node: effective numa node if this memory range is onlined
  * @kref: to pin while other agents have a need to do lookups
  * @dev: parent device backing this region
  * @align: allocation and mapping alignment for child dax devices
@@ -28,8 +35,7 @@
  */
 struct dax_region {
 	int id;
-	struct ida ida;
-	void *base;
+	int target_node;
 	struct kref kref;
 	struct device *dev;
 	unsigned int align;
@@ -38,20 +44,28 @@ struct dax_region {
 };
 
 /**
- * struct dev_dax - instance data for a subdivision of a dax region
+ * struct dev_dax - instance data for a subdivision of a dax region, and
+ * data while the device is activated in the driver.
  * @region - parent region
  * @dax_dev - core dax functionality
+ * @target_node: effective numa node if dev_dax memory range is onlined
  * @dev - device core
- * @id - child id in the region
- * @num_resources - number of physical address extents in this device
- * @res - array of physical address ranges
+ * @pgmap - pgmap for memmap setup / lifetime (driver owned)
+ * @ref: pgmap reference count (driver owned)
+ * @cmp: @ref final put completion (driver owned)
  */
 struct dev_dax {
 	struct dax_region *region;
 	struct dax_device *dax_dev;
+	int target_node;
 	struct device dev;
-	int id;
-	int num_resources;
-	struct resource res[0];
+	struct dev_pagemap pgmap;
+	struct percpu_ref ref;
+	struct completion cmp;
 };
+
+static inline struct dev_dax *to_dev_dax(struct device *dev)
+{
+	return container_of(dev, struct dev_dax, dev);
+}
 #endif


@@ -1,18 +0,0 @@
/*
* Copyright(c) 2016 - 2017 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#ifndef __DAX_H__
#define __DAX_H__
struct dax_device;
struct dax_device *inode_dax(struct inode *inode);
struct inode *dax_inode(struct dax_device *dax_dev);
#endif /* __DAX_H__ */


@@ -1,25 +0,0 @@
/*
* Copyright(c) 2016 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#ifndef __DEVICE_DAX_H__
#define __DEVICE_DAX_H__
struct device;
struct dev_dax;
struct resource;
struct dax_region;
void dax_region_put(struct dax_region *dax_region);
struct dax_region *alloc_dax_region(struct device *parent,
int region_id, struct resource *res, unsigned int align,
void *addr, unsigned long flags);
struct dev_dax *devm_create_dev_dax(struct dax_region *dax_region,
int id, struct resource *res, int count);
#endif /* __DEVICE_DAX_H__ */


@@ -1,15 +1,6 @@
-/*
- * Copyright(c) 2016 - 2017 Intel Corporation. All rights reserved.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- */
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2016-2018 Intel Corporation. All rights reserved. */
+#include <linux/memremap.h>
 #include <linux/pagemap.h>
 #include <linux/module.h>
 #include <linux/device.h>
@@ -21,161 +12,39 @@
#include <linux/mm.h>
#include <linux/mman.h>
#include "dax-private.h"
#include "dax.h"
#include "bus.h"
static struct class *dax_class;
/*
* Rely on the fact that drvdata is set before the attributes are
* registered, and that the attributes are unregistered before drvdata
* is cleared to assume that drvdata is always valid.
*/
static ssize_t id_show(struct device *dev,
struct device_attribute *attr, char *buf)
static struct dev_dax *ref_to_dev_dax(struct percpu_ref *ref)
{
struct dax_region *dax_region = dev_get_drvdata(dev);
return sprintf(buf, "%d\n", dax_region->id);
}
static DEVICE_ATTR_RO(id);
static ssize_t region_size_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct dax_region *dax_region = dev_get_drvdata(dev);
return sprintf(buf, "%llu\n", (unsigned long long)
resource_size(&dax_region->res));
}
static struct device_attribute dev_attr_region_size = __ATTR(size, 0444,
region_size_show, NULL);
static ssize_t align_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct dax_region *dax_region = dev_get_drvdata(dev);
return sprintf(buf, "%u\n", dax_region->align);
}
static DEVICE_ATTR_RO(align);
static struct attribute *dax_region_attributes[] = {
&dev_attr_region_size.attr,
&dev_attr_align.attr,
&dev_attr_id.attr,
NULL,
};
static const struct attribute_group dax_region_attribute_group = {
.name = "dax_region",
.attrs = dax_region_attributes,
};
static const struct attribute_group *dax_region_attribute_groups[] = {
&dax_region_attribute_group,
NULL,
};
static void dax_region_free(struct kref *kref)
{
struct dax_region *dax_region;
dax_region = container_of(kref, struct dax_region, kref);
kfree(dax_region);
return container_of(ref, struct dev_dax, ref);
}
void dax_region_put(struct dax_region *dax_region)
static void dev_dax_percpu_release(struct percpu_ref *ref)
{
kref_put(&dax_region->kref, dax_region_free);
}
EXPORT_SYMBOL_GPL(dax_region_put);
struct dev_dax *dev_dax = ref_to_dev_dax(ref);
static void dax_region_unregister(void *region)
{
struct dax_region *dax_region = region;
sysfs_remove_groups(&dax_region->dev->kobj,
dax_region_attribute_groups);
dax_region_put(dax_region);
dev_dbg(&dev_dax->dev, "%s\n", __func__);
complete(&dev_dax->cmp);
}
struct dax_region *alloc_dax_region(struct device *parent, int region_id,
struct resource *res, unsigned int align, void *addr,
unsigned long pfn_flags)
static void dev_dax_percpu_exit(void *data)
{
struct dax_region *dax_region;
struct percpu_ref *ref = data;
struct dev_dax *dev_dax = ref_to_dev_dax(ref);
/*
* The DAX core assumes that it can store its private data in
* parent->driver_data. This WARN is a reminder / safeguard for
* developers of device-dax drivers.
*/
if (dev_get_drvdata(parent)) {
dev_WARN(parent, "dax core failed to setup private data\n");
return NULL;
}
if (!IS_ALIGNED(res->start, align)
|| !IS_ALIGNED(resource_size(res), align))
return NULL;
dax_region = kzalloc(sizeof(*dax_region), GFP_KERNEL);
if (!dax_region)
return NULL;
dev_set_drvdata(parent, dax_region);
memcpy(&dax_region->res, res, sizeof(*res));
dax_region->pfn_flags = pfn_flags;
kref_init(&dax_region->kref);
dax_region->id = region_id;
ida_init(&dax_region->ida);
dax_region->align = align;
dax_region->dev = parent;
dax_region->base = addr;
if (sysfs_create_groups(&parent->kobj, dax_region_attribute_groups)) {
kfree(dax_region);
return NULL;
}
kref_get(&dax_region->kref);
if (devm_add_action_or_reset(parent, dax_region_unregister, dax_region))
return NULL;
return dax_region;
}
EXPORT_SYMBOL_GPL(alloc_dax_region);
static struct dev_dax *to_dev_dax(struct device *dev)
{
return container_of(dev, struct dev_dax, dev);
dev_dbg(&dev_dax->dev, "%s\n", __func__);
wait_for_completion(&dev_dax->cmp);
percpu_ref_exit(ref);
}
static ssize_t size_show(struct device *dev,
struct device_attribute *attr, char *buf)
static void dev_dax_percpu_kill(struct percpu_ref *data)
{
struct dev_dax *dev_dax = to_dev_dax(dev);
unsigned long long size = 0;
int i;
struct percpu_ref *ref = data;
struct dev_dax *dev_dax = ref_to_dev_dax(ref);
for (i = 0; i < dev_dax->num_resources; i++)
size += resource_size(&dev_dax->res[i]);
return sprintf(buf, "%llu\n", size);
dev_dbg(&dev_dax->dev, "%s\n", __func__);
percpu_ref_kill(ref);
}
static DEVICE_ATTR_RO(size);
static struct attribute *dev_dax_attributes[] = {
&dev_attr_size.attr,
NULL,
};
static const struct attribute_group dev_dax_attribute_group = {
.attrs = dev_dax_attributes,
};
static const struct attribute_group *dax_attribute_groups[] = {
&dev_dax_attribute_group,
NULL,
};
static int check_vma(struct dev_dax *dev_dax, struct vm_area_struct *vma,
const char *func)
@@ -226,21 +95,11 @@ static int check_vma(struct dev_dax *dev_dax, struct vm_area_struct *vma,
__weak phys_addr_t dax_pgoff_to_phys(struct dev_dax *dev_dax, pgoff_t pgoff,
unsigned long size)
{
	struct resource *res = &dev_dax->region->res;
	phys_addr_t phys;

	phys = pgoff * PAGE_SIZE + res->start;
	if (phys >= res->start && phys <= res->end) {
if (phys + size - 1 <= res->end)
return phys;
}
@@ -576,152 +435,100 @@ static const struct file_operations dax_fops = {
.mmap_supported_flags = MAP_SYNC,
};
static void dev_dax_release(struct device *dev)
{
	struct dev_dax *dev_dax = to_dev_dax(dev);
	struct dax_region *dax_region = dev_dax->region;
	struct dax_device *dax_dev = dev_dax->dax_dev;

	if (dev_dax->id >= 0)
		ida_simple_remove(&dax_region->ida, dev_dax->id);
	dax_region_put(dax_region);
	put_dax(dax_dev);
	kfree(dev_dax);
}

static void dev_dax_cdev_del(void *cdev)
{
	cdev_del(cdev);
}
static void dev_dax_kill(void *data)
{
	struct dev_dax *dev_dax = data;
	struct dax_device *dax_dev = dev_dax->dax_dev;
	struct inode *inode = dax_inode(dax_dev);

	kill_dax(dax_dev);
	unmap_mapping_range(inode->i_mapping, 0, 0, 1);
}

static void unregister_dev_dax(void *dev)
{
	struct dev_dax *dev_dax = to_dev_dax(dev);
	struct dax_device *dax_dev = dev_dax->dax_dev;
	struct inode *inode = dax_inode(dax_dev);
	struct cdev *cdev = inode->i_cdev;

	dev_dbg(dev, "trace\n");
	kill_dev_dax(dev_dax);
	cdev_device_del(cdev, dev);
	put_device(dev);
}
int dev_dax_probe(struct device *dev)
{
	struct dev_dax *dev_dax = to_dev_dax(dev);
	struct dax_device *dax_dev = dev_dax->dax_dev;
	struct resource *res = &dev_dax->region->res;
	struct inode *inode;
	struct cdev *cdev;
	void *addr;
	int rc;

	/* 1:1 map region resource range to device-dax instance range */
	if (!devm_request_mem_region(dev, res->start, resource_size(res),
				dev_name(dev))) {
		dev_warn(dev, "could not reserve region %pR\n", res);
		return -EBUSY;
	}

	init_completion(&dev_dax->cmp);
	rc = percpu_ref_init(&dev_dax->ref, dev_dax_percpu_release, 0,
			GFP_KERNEL);
	if (rc)
		return rc;

	rc = devm_add_action_or_reset(dev, dev_dax_percpu_exit, &dev_dax->ref);
	if (rc)
		return rc;

	dev_dax->pgmap.ref = &dev_dax->ref;
	dev_dax->pgmap.kill = dev_dax_percpu_kill;
	addr = devm_memremap_pages(dev, &dev_dax->pgmap);
	if (IS_ERR(addr)) {
		devm_remove_action(dev, dev_dax_percpu_exit, &dev_dax->ref);
		percpu_ref_exit(&dev_dax->ref);
		return PTR_ERR(addr);
	}

	inode = dax_inode(dax_dev);
	cdev = inode->i_cdev;
	cdev_init(cdev, &dax_fops);
	if (dev->class) {
		/* for the CONFIG_DEV_DAX_PMEM_COMPAT case */
		cdev->owner = dev->parent->driver->owner;
	} else
		cdev->owner = dev->driver->owner;
	cdev_set_parent(cdev, &dev->kobj);
	rc = cdev_add(cdev, dev->devt, 1);
	if (rc)
		return rc;

	rc = devm_add_action_or_reset(dev, dev_dax_cdev_del, cdev);
	if (rc)
		return rc;

	run_dax(dax_dev);
	return devm_add_action_or_reset(dev, dev_dax_kill, dev_dax);
}
EXPORT_SYMBOL_GPL(dev_dax_probe);
static int dev_dax_remove(struct device *dev)
{
/* all probe actions are unwound by devm */
return 0;
}
static struct dax_device_driver device_dax_driver = {
.drv = {
.probe = dev_dax_probe,
.remove = dev_dax_remove,
},
.match_always = 1,
};
static int __init dax_init(void)
{
	return dax_driver_register(&device_dax_driver);
}

static void __exit dax_exit(void)
{
	dax_driver_unregister(&device_dax_driver);
}
MODULE_AUTHOR("Intel Corporation");
MODULE_LICENSE("GPL v2");
module_init(dax_init);
module_exit(dax_exit);
MODULE_ALIAS_DAX_DEVICE(0);

drivers/dax/kmem.c (new file)
@@ -0,0 +1,108 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2016-2019 Intel Corporation. All rights reserved. */
#include <linux/memremap.h>
#include <linux/pagemap.h>
#include <linux/memory.h>
#include <linux/module.h>
#include <linux/device.h>
#include <linux/pfn_t.h>
#include <linux/slab.h>
#include <linux/dax.h>
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/mman.h>
#include "dax-private.h"
#include "bus.h"
int dev_dax_kmem_probe(struct device *dev)
{
struct dev_dax *dev_dax = to_dev_dax(dev);
struct resource *res = &dev_dax->region->res;
resource_size_t kmem_start;
resource_size_t kmem_size;
resource_size_t kmem_end;
struct resource *new_res;
int numa_node;
int rc;
/*
* Ensure good NUMA information for the persistent memory.
* Without this check, there is a risk that slow memory
* could be mixed in a node with faster memory, causing
* unavoidable performance issues.
*/
numa_node = dev_dax->target_node;
if (numa_node < 0) {
dev_warn(dev, "rejecting DAX region %pR with invalid node: %d\n",
res, numa_node);
return -EINVAL;
}
/* Hotplug starting at the beginning of the next block: */
kmem_start = ALIGN(res->start, memory_block_size_bytes());
kmem_size = resource_size(res);
/* Adjust the size down to compensate for moving up kmem_start: */
kmem_size -= kmem_start - res->start;
/* Align the size down to cover only complete blocks: */
kmem_size &= ~(memory_block_size_bytes() - 1);
kmem_end = kmem_start + kmem_size;
/* Region is permanently reserved. Hot-remove not yet implemented. */
new_res = request_mem_region(kmem_start, kmem_size, dev_name(dev));
if (!new_res) {
dev_warn(dev, "could not reserve region [%pa-%pa]\n",
&kmem_start, &kmem_end);
return -EBUSY;
}
/*
* Set flags appropriate for System RAM. Leave ..._BUSY clear
* so that add_memory() can add a child resource. Do not
* inherit flags from the parent since it may set new flags
* unknown to us that will break add_memory() below.
*/
new_res->flags = IORESOURCE_SYSTEM_RAM;
new_res->name = dev_name(dev);
rc = add_memory(numa_node, new_res->start, resource_size(new_res));
if (rc)
return rc;
return 0;
}
static int dev_dax_kmem_remove(struct device *dev)
{
/*
* Purposely leak the request_mem_region() for the device-dax
* range and return '0' to ->remove() attempts. The removal of
* the device from the driver always succeeds, but the region
* is permanently pinned as reserved by the unreleased
* request_mem_region().
*/
return 0;
}
static struct dax_device_driver device_dax_kmem_driver = {
.drv = {
.probe = dev_dax_kmem_probe,
.remove = dev_dax_kmem_remove,
},
};
static int __init dax_kmem_init(void)
{
return dax_driver_register(&device_dax_kmem_driver);
}
static void __exit dax_kmem_exit(void)
{
dax_driver_unregister(&device_dax_kmem_driver);
}
MODULE_AUTHOR("Intel Corporation");
MODULE_LICENSE("GPL v2");
module_init(dax_kmem_init);
module_exit(dax_kmem_exit);
MODULE_ALIAS_DAX_DEVICE(0);


@@ -1,153 +0,0 @@
/*
* Copyright(c) 2016 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#include <linux/percpu-refcount.h>
#include <linux/memremap.h>
#include <linux/module.h>
#include <linux/pfn_t.h>
#include "../nvdimm/pfn.h"
#include "../nvdimm/nd.h"
#include "device-dax.h"
struct dax_pmem {
struct device *dev;
struct percpu_ref ref;
struct dev_pagemap pgmap;
struct completion cmp;
};
static struct dax_pmem *to_dax_pmem(struct percpu_ref *ref)
{
return container_of(ref, struct dax_pmem, ref);
}
static void dax_pmem_percpu_release(struct percpu_ref *ref)
{
struct dax_pmem *dax_pmem = to_dax_pmem(ref);
dev_dbg(dax_pmem->dev, "trace\n");
complete(&dax_pmem->cmp);
}
static void dax_pmem_percpu_exit(void *data)
{
struct percpu_ref *ref = data;
struct dax_pmem *dax_pmem = to_dax_pmem(ref);
dev_dbg(dax_pmem->dev, "trace\n");
wait_for_completion(&dax_pmem->cmp);
percpu_ref_exit(ref);
}
static void dax_pmem_percpu_kill(struct percpu_ref *ref)
{
struct dax_pmem *dax_pmem = to_dax_pmem(ref);
dev_dbg(dax_pmem->dev, "trace\n");
percpu_ref_kill(ref);
}
static int dax_pmem_probe(struct device *dev)
{
void *addr;
struct resource res;
int rc, id, region_id;
struct nd_pfn_sb *pfn_sb;
struct dev_dax *dev_dax;
struct dax_pmem *dax_pmem;
struct nd_namespace_io *nsio;
struct dax_region *dax_region;
struct nd_namespace_common *ndns;
struct nd_dax *nd_dax = to_nd_dax(dev);
struct nd_pfn *nd_pfn = &nd_dax->nd_pfn;
ndns = nvdimm_namespace_common_probe(dev);
if (IS_ERR(ndns))
return PTR_ERR(ndns);
nsio = to_nd_namespace_io(&ndns->dev);
dax_pmem = devm_kzalloc(dev, sizeof(*dax_pmem), GFP_KERNEL);
if (!dax_pmem)
return -ENOMEM;
/* parse the 'pfn' info block via ->rw_bytes */
rc = devm_nsio_enable(dev, nsio);
if (rc)
return rc;
rc = nvdimm_setup_pfn(nd_pfn, &dax_pmem->pgmap);
if (rc)
return rc;
devm_nsio_disable(dev, nsio);
pfn_sb = nd_pfn->pfn_sb;
if (!devm_request_mem_region(dev, nsio->res.start,
resource_size(&nsio->res),
dev_name(&ndns->dev))) {
dev_warn(dev, "could not reserve region %pR\n", &nsio->res);
return -EBUSY;
}
dax_pmem->dev = dev;
init_completion(&dax_pmem->cmp);
rc = percpu_ref_init(&dax_pmem->ref, dax_pmem_percpu_release, 0,
GFP_KERNEL);
if (rc)
return rc;
rc = devm_add_action(dev, dax_pmem_percpu_exit, &dax_pmem->ref);
if (rc) {
percpu_ref_exit(&dax_pmem->ref);
return rc;
}
dax_pmem->pgmap.ref = &dax_pmem->ref;
dax_pmem->pgmap.kill = dax_pmem_percpu_kill;
addr = devm_memremap_pages(dev, &dax_pmem->pgmap);
if (IS_ERR(addr))
return PTR_ERR(addr);
/* adjust the dax_region resource to the start of data */
memcpy(&res, &dax_pmem->pgmap.res, sizeof(res));
res.start += le64_to_cpu(pfn_sb->dataoff);
rc = sscanf(dev_name(&ndns->dev), "namespace%d.%d", &region_id, &id);
if (rc != 2)
return -EINVAL;
dax_region = alloc_dax_region(dev, region_id, &res,
le32_to_cpu(pfn_sb->align), addr, PFN_DEV|PFN_MAP);
if (!dax_region)
return -ENOMEM;
/* TODO: support for subdividing a dax region... */
dev_dax = devm_create_dev_dax(dax_region, id, &res, 1);
/* child dev_dax instances now own the lifetime of the dax_region */
dax_region_put(dax_region);
return PTR_ERR_OR_ZERO(dev_dax);
}
static struct nd_device_driver dax_pmem_driver = {
.probe = dax_pmem_probe,
.drv = {
.name = "dax_pmem",
},
.type = ND_DRIVER_DAX_PMEM,
};
module_nd_driver(dax_pmem_driver);
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Intel Corporation");
MODULE_ALIAS_ND_DEVICE(ND_DEVICE_DAX_PMEM);


@@ -0,0 +1,7 @@
obj-$(CONFIG_DEV_DAX_PMEM) += dax_pmem.o
obj-$(CONFIG_DEV_DAX_PMEM) += dax_pmem_core.o
obj-$(CONFIG_DEV_DAX_PMEM_COMPAT) += dax_pmem_compat.o
dax_pmem-y := pmem.o
dax_pmem_core-y := core.o
dax_pmem_compat-y := compat.o


@@ -0,0 +1,73 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2016 - 2018 Intel Corporation. All rights reserved. */
#include <linux/percpu-refcount.h>
#include <linux/memremap.h>
#include <linux/module.h>
#include <linux/pfn_t.h>
#include <linux/nd.h>
#include "../bus.h"
/* we need the private definitions to implement compat support */
#include "../dax-private.h"
static int dax_pmem_compat_probe(struct device *dev)
{
struct dev_dax *dev_dax = __dax_pmem_probe(dev, DEV_DAX_CLASS);
int rc;
if (IS_ERR(dev_dax))
return PTR_ERR(dev_dax);
if (!devres_open_group(&dev_dax->dev, dev_dax, GFP_KERNEL))
return -ENOMEM;
device_lock(&dev_dax->dev);
rc = dev_dax_probe(&dev_dax->dev);
device_unlock(&dev_dax->dev);
devres_close_group(&dev_dax->dev, dev_dax);
if (rc)
devres_release_group(&dev_dax->dev, dev_dax);
return rc;
}
static int dax_pmem_compat_release(struct device *dev, void *data)
{
device_lock(dev);
devres_release_group(dev, to_dev_dax(dev));
device_unlock(dev);
return 0;
}
static int dax_pmem_compat_remove(struct device *dev)
{
device_for_each_child(dev, NULL, dax_pmem_compat_release);
return 0;
}
static struct nd_device_driver dax_pmem_compat_driver = {
.probe = dax_pmem_compat_probe,
.remove = dax_pmem_compat_remove,
.drv = {
.name = "dax_pmem_compat",
},
.type = ND_DRIVER_DAX_PMEM,
};
static int __init dax_pmem_compat_init(void)
{
return nd_driver_register(&dax_pmem_compat_driver);
}
module_init(dax_pmem_compat_init);
static void __exit dax_pmem_compat_exit(void)
{
driver_unregister(&dax_pmem_compat_driver.drv);
}
module_exit(dax_pmem_compat_exit);
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Intel Corporation");
MODULE_ALIAS_ND_DEVICE(ND_DEVICE_DAX_PMEM);


@@ -0,0 +1,71 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2016 - 2018 Intel Corporation. All rights reserved. */
#include <linux/memremap.h>
#include <linux/module.h>
#include <linux/pfn_t.h>
#include "../../nvdimm/pfn.h"
#include "../../nvdimm/nd.h"
#include "../bus.h"
struct dev_dax *__dax_pmem_probe(struct device *dev, enum dev_dax_subsys subsys)
{
struct resource res;
int rc, id, region_id;
resource_size_t offset;
struct nd_pfn_sb *pfn_sb;
struct dev_dax *dev_dax;
struct nd_namespace_io *nsio;
struct dax_region *dax_region;
struct dev_pagemap pgmap = { 0 };
struct nd_namespace_common *ndns;
struct nd_dax *nd_dax = to_nd_dax(dev);
struct nd_pfn *nd_pfn = &nd_dax->nd_pfn;
struct nd_region *nd_region = to_nd_region(dev->parent);
ndns = nvdimm_namespace_common_probe(dev);
if (IS_ERR(ndns))
return ERR_CAST(ndns);
nsio = to_nd_namespace_io(&ndns->dev);
/* parse the 'pfn' info block via ->rw_bytes */
rc = devm_nsio_enable(dev, nsio);
if (rc)
return ERR_PTR(rc);
rc = nvdimm_setup_pfn(nd_pfn, &pgmap);
if (rc)
return ERR_PTR(rc);
devm_nsio_disable(dev, nsio);
/* reserve the metadata area, device-dax will reserve the data */
pfn_sb = nd_pfn->pfn_sb;
offset = le64_to_cpu(pfn_sb->dataoff);
if (!devm_request_mem_region(dev, nsio->res.start, offset,
dev_name(&ndns->dev))) {
dev_warn(dev, "could not reserve metadata\n");
return ERR_PTR(-EBUSY);
}
rc = sscanf(dev_name(&ndns->dev), "namespace%d.%d", &region_id, &id);
if (rc != 2)
return ERR_PTR(-EINVAL);
/* adjust the dax_region resource to the start of data */
memcpy(&res, &pgmap.res, sizeof(res));
res.start += offset;
dax_region = alloc_dax_region(dev, region_id, &res,
nd_region->target_node, le32_to_cpu(pfn_sb->align),
PFN_DEV|PFN_MAP);
if (!dax_region)
return ERR_PTR(-ENOMEM);
dev_dax = __devm_create_dev_dax(dax_region, id, &pgmap, subsys);
/* child dev_dax instances now own the lifetime of the dax_region */
dax_region_put(dax_region);
return dev_dax;
}
EXPORT_SYMBOL_GPL(__dax_pmem_probe);
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Intel Corporation");


@@ -0,0 +1,40 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2016 - 2018 Intel Corporation. All rights reserved. */
#include <linux/percpu-refcount.h>
#include <linux/memremap.h>
#include <linux/module.h>
#include <linux/pfn_t.h>
#include <linux/nd.h>
#include "../bus.h"
static int dax_pmem_probe(struct device *dev)
{
return PTR_ERR_OR_ZERO(__dax_pmem_probe(dev, DEV_DAX_BUS));
}
static struct nd_device_driver dax_pmem_driver = {
.probe = dax_pmem_probe,
.drv = {
.name = "dax_pmem",
},
.type = ND_DRIVER_DAX_PMEM,
};
static int __init dax_pmem_init(void)
{
return nd_driver_register(&dax_pmem_driver);
}
module_init(dax_pmem_init);
static void __exit dax_pmem_exit(void)
{
driver_unregister(&dax_pmem_driver.drv);
}
module_exit(dax_pmem_exit);
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Intel Corporation");
#if !IS_ENABLED(CONFIG_DEV_DAX_PMEM_COMPAT)
/* For compat builds, don't load this module by default */
MODULE_ALIAS_ND_DEVICE(ND_DEVICE_DAX_PMEM);
#endif


@@ -22,6 +22,7 @@
#include <linux/uio.h>
#include <linux/dax.h>
#include <linux/fs.h>
#include "dax-private.h"
static dev_t dax_devt;
DEFINE_STATIC_SRCU(dax_srcu);
@@ -383,11 +384,15 @@ void kill_dax(struct dax_device *dax_dev)
spin_lock(&dax_host_lock);
hlist_del_init(&dax_dev->list);
spin_unlock(&dax_host_lock);
dax_dev->private = NULL;
}
EXPORT_SYMBOL_GPL(kill_dax);
void run_dax(struct dax_device *dax_dev)
{
set_bit(DAXDEV_ALIVE, &dax_dev->flags);
}
EXPORT_SYMBOL_GPL(run_dax);
static struct inode *dax_alloc_inode(struct super_block *sb)
{
struct dax_device *dax_dev;
@@ -602,6 +607,8 @@ EXPORT_SYMBOL_GPL(dax_inode);
void *dax_get_private(struct dax_device *dax_dev)
{
if (!test_bit(DAXDEV_ALIVE, &dax_dev->flags))
return NULL;
return dax_dev->private;
}
EXPORT_SYMBOL_GPL(dax_get_private);
@@ -615,7 +622,7 @@ static void init_once(void *_dax_dev)
inode_init_once(inode);
}
static int dax_fs_init(void)
{
int rc;
@@ -647,35 +654,45 @@ static int __dax_fs_init(void)
return rc;
}
static void dax_fs_exit(void)
{
kern_unmount(dax_mnt);
unregister_filesystem(&dax_fs_type);
kmem_cache_destroy(dax_cache);
}
static int __init dax_core_init(void)
{
	int rc;

	rc = dax_fs_init();
	if (rc)
		return rc;

	rc = alloc_chrdev_region(&dax_devt, 0, MINORMASK+1, "dax");
	if (rc)
		goto err_chrdev;

	rc = dax_bus_init();
	if (rc)
		goto err_bus;
	return 0;

err_bus:
	unregister_chrdev_region(dax_devt, MINORMASK+1);
err_chrdev:
	dax_fs_exit();
	return rc;
}
static void __exit dax_core_exit(void)
{
	unregister_chrdev_region(dax_devt, MINORMASK+1);
	ida_destroy(&dax_minor_ida);
	dax_fs_exit();
}
}
MODULE_AUTHOR("Intel Corporation");
MODULE_LICENSE("GPL v2");
subsys_initcall(dax_core_init);
module_exit(dax_core_exit);


@@ -47,6 +47,7 @@ static int e820_register_one(struct resource *res, void *data)
ndr_desc.res = res;
ndr_desc.attr_groups = e820_pmem_region_attribute_groups;
ndr_desc.numa_node = e820_range_to_nid(res->start);
ndr_desc.target_node = ndr_desc.numa_node;
set_bit(ND_REGION_PAGEMAP, &ndr_desc.flags);
if (!nvdimm_pmem_region_create(nvdimm_bus, &ndr_desc))
return -ENXIO;


@@ -153,7 +153,7 @@ struct nd_region {
u16 ndr_mappings;
u64 ndr_size;
u64 ndr_start;
int id, num_lanes, ro, numa_node;
int id, num_lanes, ro, numa_node, target_node;
void *provider_data;
struct kernfs_node *bb_state;
struct badblocks bb;


@@ -68,6 +68,7 @@ static int of_pmem_region_probe(struct platform_device *pdev)
memset(&ndr_desc, 0, sizeof(ndr_desc));
ndr_desc.attr_groups = region_attr_groups;
ndr_desc.numa_node = dev_to_node(&pdev->dev);
ndr_desc.target_node = ndr_desc.numa_node;
ndr_desc.res = &pdev->resource[i];
ndr_desc.of_node = np;
set_bit(ND_REGION_PAGEMAP, &ndr_desc.flags);


@@ -1072,6 +1072,7 @@ static struct nd_region *nd_region_create(struct nvdimm_bus *nvdimm_bus,
nd_region->flags = ndr_desc->flags;
nd_region->ro = ro;
nd_region->numa_node = ndr_desc->numa_node;
nd_region->target_node = ndr_desc->target_node;
ida_init(&nd_region->ns_ida);
ida_init(&nd_region->btt_ida);
ida_init(&nd_region->pfn_ida);


@@ -400,12 +400,17 @@ extern bool acpi_osi_is_win8(void);
#ifdef CONFIG_ACPI_NUMA
int acpi_map_pxm_to_online_node(int pxm);
int acpi_map_pxm_to_node(int pxm);
int acpi_get_node(acpi_handle handle);
#else
static inline int acpi_map_pxm_to_online_node(int pxm)
{
return 0;
}
static inline int acpi_map_pxm_to_node(int pxm)
{
return 0;
}
static inline int acpi_get_node(acpi_handle handle)
{
return 0;


@@ -130,6 +130,7 @@ struct nd_region_desc {
void *provider_data;
int num_lanes;
int numa_node;
int target_node;
unsigned long flags;
struct device_node *of_node;
};


@@ -382,7 +382,7 @@ static int __walk_iomem_res_desc(resource_size_t start, resource_size_t end,
int (*func)(struct resource *, void *))
{
struct resource res;
int ret = -1;
int ret = -EINVAL;
while (start < end &&
!find_next_iomem_res(start, end, flags, desc, first_lvl, &res)) {
@@ -452,6 +452,9 @@ int walk_mem_res(u64 start, u64 end, void *arg,
* This function calls the @func callback against all memory ranges of type
* System RAM which are marked as IORESOURCE_SYSTEM_RAM and IORESOURCE_BUSY.
* It is to be used only for System RAM.
*
* This will find System RAM ranges that are children of top-level resources
* in addition to top-level System RAM resources.
*/
int walk_system_ram_range(unsigned long start_pfn, unsigned long nr_pages,
void *arg, int (*func)(unsigned long, unsigned long, void *))
@@ -460,14 +463,14 @@ int walk_system_ram_range(unsigned long start_pfn, unsigned long nr_pages,
unsigned long flags;
struct resource res;
unsigned long pfn, end_pfn;
int ret = -1;
int ret = -EINVAL;
start = (u64) start_pfn << PAGE_SHIFT;
end = ((u64)(start_pfn + nr_pages) << PAGE_SHIFT) - 1;
flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;
while (start < end &&
!find_next_iomem_res(start, end, flags, IORES_DESC_NONE,
true, &res)) {
false, &res)) {
pfn = (res.start + PAGE_SIZE - 1) >> PAGE_SHIFT;
end_pfn = (res.end + 1) >> PAGE_SHIFT;
if (end_pfn > pfn)
@@ -1128,6 +1131,15 @@ struct resource * __request_region(struct resource *parent,
conflict = __request_resource(parent, res);
if (!conflict)
break;
/*
* mm/hmm.c reserves physical addresses which then
* become unavailable to other users. Conflicts are
* not expected. Warn to aid debugging if encountered.
*/
if (conflict->desc == IORES_DESC_DEVICE_PRIVATE_MEMORY) {
pr_warn("Unaddressable device %s %pR conflicts with %pR",
conflict->name, conflict, res);
}
if (conflict != parent) {
if (!(conflict->flags & IORESOURCE_BUSY)) {
parent = conflict;


@@ -101,28 +101,24 @@ u64 max_mem_size = U64_MAX;
/* add this memory to iomem resource */
static struct resource *register_memory_resource(u64 start, u64 size)
{
	struct resource *res;
	unsigned long flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;
	char *resource_name = "System RAM";

	if (start + size > max_mem_size)
		return ERR_PTR(-E2BIG);

	/*
	 * Request ownership of the new memory range. This might be
	 * a child of an existing resource that was present but
	 * not marked as busy.
	 */
	res = __request_region(&iomem_resource, start, size,
			       resource_name, flags);
	if (!res) {
		pr_debug("Unable to reserve System RAM region: %016llx->%016llx\n",
			 start, start + size);
		return ERR_PTR(-EEXIST);
	}
return res;


@@ -35,6 +35,8 @@ obj-$(CONFIG_DAX) += dax.o
endif
obj-$(CONFIG_DEV_DAX) += device_dax.o
obj-$(CONFIG_DEV_DAX_PMEM) += dax_pmem.o
obj-$(CONFIG_DEV_DAX_PMEM) += dax_pmem_core.o
obj-$(CONFIG_DEV_DAX_PMEM_COMPAT) += dax_pmem_compat.o
nfit-y := $(ACPI_SRC)/core.o
nfit-y += $(ACPI_SRC)/intel.o
@@ -57,6 +59,7 @@ nd_e820-y := $(NVDIMM_SRC)/e820.o
nd_e820-y += config_check.o
dax-y := $(DAX_SRC)/super.o
dax-y += $(DAX_SRC)/bus.o
dax-y += config_check.o
device_dax-y := $(DAX_SRC)/device.o
@@ -64,7 +67,9 @@ device_dax-y += dax-dev.o
device_dax-y += device_dax_test.o
device_dax-y += config_check.o
dax_pmem-y := $(DAX_SRC)/pmem/pmem.o
dax_pmem_core-y := $(DAX_SRC)/pmem/core.o
dax_pmem_compat-y := $(DAX_SRC)/pmem/compat.o
dax_pmem-y += config_check.o
libnvdimm-y := $(NVDIMM_SRC)/core.o


@@ -17,20 +17,11 @@
phys_addr_t dax_pgoff_to_phys(struct dev_dax *dev_dax, pgoff_t pgoff,
unsigned long size)
{
	struct resource *res = &dev_dax->region->res;
	phys_addr_t addr;

	addr = pgoff * PAGE_SIZE + res->start;
	if (addr >= res->start && addr <= res->end) {
		if (addr + size - 1 <= res->end) {
if (get_nfit_res(addr)) {
struct page *page;
@@ -44,6 +35,5 @@ phys_addr_t dax_pgoff_to_phys(struct dev_dax *dev_dax, pgoff_t pgoff,
return addr;
}
}
return -1;
}