lightnvm: Support for Open-Channel SSDs
Open-channel SSDs are devices that share responsibilities with the host
in order to implement and maintain features that typical SSDs keep
strictly in firmware. These include (i) the Flash Translation Layer
(FTL), (ii) bad block management, and (iii) hardware units such as the
flash controller, the interface controller, and large amounts of flash
chips. In this way, Open-channel SSDs expose direct access to their
physical flash storage, while keeping a subset of the internal features
of SSDs.
LightNVM is a specification that gives support to Open-channel SSDs.
LightNVM allows the host to manage data placement, garbage collection,
and parallelism. Device-specific responsibilities such as bad block
management, FTL extensions to support atomic IOs, or metadata
persistence are still handled by the device.
The implementation of LightNVM consists of two parts: core and
(multiple) targets. The core implements functionality shared across
targets: initialization, teardown and statistics. The targets
implement the interface that exposes physical flash to user-space
applications. Examples of such targets include key-value stores,
object stores, as well as traditional block devices, which can be
application-specific.
Contributions in this patch from:
Javier Gonzalez <jg@lightnvm.io>
Dongsheng Yang <yangds.fnst@cn.fujitsu.com>
Jesper Madsen <jmad@itu.dk>
Signed-off-by: Matias Bjørling <m@bjorling.me>
Signed-off-by: Jens Axboe <axboe@fb.com>
2015-10-29 01:54:55 +07:00
/*
 * Copyright (C) 2015 IT University of Copenhagen. All rights reserved.
 * Initial release: Matias Bjorling <m@bjorling.me>
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License version
 * 2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; see the file COPYING. If not, write to
 * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139,
 * USA.
 *
 */

#include <linux/list.h>
#include <linux/types.h>
#include <linux/sem.h>
#include <linux/bitmap.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/miscdevice.h>
#include <linux/lightnvm.h>
#include <linux/sched/sysctl.h>
static LIST_HEAD(nvm_tgt_types);
static DECLARE_RWSEM(nvm_tgtt_lock);
static LIST_HEAD(nvm_devices);
static DECLARE_RWSEM(nvm_lock);

/* Map between virtual and physical channel and lun */
struct nvm_ch_map {
        int ch_off;
        int num_lun;
        int *lun_offs;
};

struct nvm_dev_map {
        struct nvm_ch_map *chnls;
        int num_ch;
};

static struct nvm_target *nvm_find_target(struct nvm_dev *dev, const char *name)
{
        struct nvm_target *tgt;

        list_for_each_entry(tgt, &dev->targets, list)
                if (!strcmp(name, tgt->disk->disk_name))
                        return tgt;

        return NULL;
}

static bool nvm_target_exists(const char *name)
{
        struct nvm_dev *dev;
        struct nvm_target *tgt;
        bool ret = false;

        down_write(&nvm_lock);
        list_for_each_entry(dev, &nvm_devices, devices) {
                mutex_lock(&dev->mlock);
                list_for_each_entry(tgt, &dev->targets, list) {
                        if (!strcmp(name, tgt->disk->disk_name)) {
                                ret = true;
                                mutex_unlock(&dev->mlock);
                                goto out;
                        }
                }
                mutex_unlock(&dev->mlock);
        }

out:
        up_write(&nvm_lock);
        return ret;
}

static int nvm_reserve_luns(struct nvm_dev *dev, int lun_begin, int lun_end)
{
        int i;

        for (i = lun_begin; i <= lun_end; i++) {
                if (test_and_set_bit(i, dev->lun_map)) {
                        pr_err("nvm: lun %d already allocated\n", i);
                        goto err;
                }
        }

        return 0;
err:
        while (--i >= lun_begin)
                clear_bit(i, dev->lun_map);

        return -EBUSY;
}

static void nvm_release_luns_err(struct nvm_dev *dev, int lun_begin,
                                 int lun_end)
{
        int i;

        for (i = lun_begin; i <= lun_end; i++)
                WARN_ON(!test_and_clear_bit(i, dev->lun_map));
}

static void nvm_remove_tgt_dev(struct nvm_tgt_dev *tgt_dev, int clear)
{
        struct nvm_dev *dev = tgt_dev->parent;
        struct nvm_dev_map *dev_map = tgt_dev->map;
        int i, j;

        for (i = 0; i < dev_map->num_ch; i++) {
                struct nvm_ch_map *ch_map = &dev_map->chnls[i];
                int *lun_offs = ch_map->lun_offs;
                int ch = i + ch_map->ch_off;

                if (clear) {
                        for (j = 0; j < ch_map->num_lun; j++) {
                                int lun = j + lun_offs[j];
                                int lunid = (ch * dev->geo.num_lun) + lun;

                                WARN_ON(!test_and_clear_bit(lunid,
                                                        dev->lun_map));
                        }
                }

                kfree(ch_map->lun_offs);
        }

        kfree(dev_map->chnls);
        kfree(dev_map);

        kfree(tgt_dev->luns);
        kfree(tgt_dev);
}

static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev,
                                              u16 lun_begin, u16 lun_end,
                                              u16 op)
{
        struct nvm_tgt_dev *tgt_dev = NULL;
        struct nvm_dev_map *dev_rmap = dev->rmap;
        struct nvm_dev_map *dev_map;
        struct ppa_addr *luns;
        int num_lun = lun_end - lun_begin + 1;
        int luns_left = num_lun;
        int num_ch = num_lun / dev->geo.num_lun;
        int num_ch_mod = num_lun % dev->geo.num_lun;
        int bch = lun_begin / dev->geo.num_lun;
        int blun = lun_begin % dev->geo.num_lun;
        int lunid = 0;
        int lun_balanced = 1;
        int sec_per_lun, prev_num_lun;
        int i, j;

        num_ch = (num_ch_mod == 0) ? num_ch : num_ch + 1;

        dev_map = kmalloc(sizeof(struct nvm_dev_map), GFP_KERNEL);
        if (!dev_map)
                goto err_dev;

        dev_map->chnls = kcalloc(num_ch, sizeof(struct nvm_ch_map), GFP_KERNEL);
        if (!dev_map->chnls)
                goto err_chnls;

        luns = kcalloc(num_lun, sizeof(struct ppa_addr), GFP_KERNEL);
        if (!luns)
                goto err_luns;

        prev_num_lun = (luns_left > dev->geo.num_lun) ?
                                        dev->geo.num_lun : luns_left;
        for (i = 0; i < num_ch; i++) {
                struct nvm_ch_map *ch_rmap = &dev_rmap->chnls[i + bch];
                int *lun_roffs = ch_rmap->lun_offs;
                struct nvm_ch_map *ch_map = &dev_map->chnls[i];
                int *lun_offs;
                int luns_in_chnl = (luns_left > dev->geo.num_lun) ?
                                        dev->geo.num_lun : luns_left;

                if (lun_balanced && prev_num_lun != luns_in_chnl)
                        lun_balanced = 0;

                ch_map->ch_off = ch_rmap->ch_off = bch;
                ch_map->num_lun = luns_in_chnl;

                lun_offs = kcalloc(luns_in_chnl, sizeof(int), GFP_KERNEL);
                if (!lun_offs)
                        goto err_ch;

                for (j = 0; j < luns_in_chnl; j++) {
                        luns[lunid].ppa = 0;
                        luns[lunid].a.ch = i;
                        luns[lunid++].a.lun = j;

                        lun_offs[j] = blun;
                        lun_roffs[j + blun] = blun;
                }

                ch_map->lun_offs = lun_offs;

                /* when starting a new channel, lun offset is reset */
                blun = 0;
                luns_left -= luns_in_chnl;
        }

        dev_map->num_ch = num_ch;

        tgt_dev = kmalloc(sizeof(struct nvm_tgt_dev), GFP_KERNEL);
        if (!tgt_dev)
                goto err_ch;

        /* Inherit device geometry from parent */
        memcpy(&tgt_dev->geo, &dev->geo, sizeof(struct nvm_geo));

        /* Target device only owns a portion of the physical device */
        tgt_dev->geo.num_ch = num_ch;
        tgt_dev->geo.num_lun = (lun_balanced) ? prev_num_lun : -1;
        tgt_dev->geo.all_luns = num_lun;
        tgt_dev->geo.all_chunks = num_lun * dev->geo.num_chk;

        tgt_dev->geo.op = op;

        sec_per_lun = dev->geo.clba * dev->geo.num_chk;
        tgt_dev->geo.total_secs = num_lun * sec_per_lun;

        tgt_dev->q = dev->q;
        tgt_dev->map = dev_map;
        tgt_dev->luns = luns;
        tgt_dev->parent = dev;

        return tgt_dev;
err_ch:
        while (--i >= 0)
                kfree(dev_map->chnls[i].lun_offs);
        kfree(luns);
err_luns:
        kfree(dev_map->chnls);
err_chnls:
        kfree(dev_map);
err_dev:
        return tgt_dev;
}

static const struct block_device_operations nvm_fops = {
        .owner          = THIS_MODULE,
};

static struct nvm_tgt_type *__nvm_find_target_type(const char *name)
{
        struct nvm_tgt_type *tt;

        list_for_each_entry(tt, &nvm_tgt_types, list)
                if (!strcmp(name, tt->name))
                        return tt;

        return NULL;
}

static struct nvm_tgt_type *nvm_find_target_type(const char *name)
{
        struct nvm_tgt_type *tt;

        down_write(&nvm_tgtt_lock);
        tt = __nvm_find_target_type(name);
        up_write(&nvm_tgtt_lock);

        return tt;
}

static int nvm_config_check_luns(struct nvm_geo *geo, int lun_begin,
                                 int lun_end)
{
        if (lun_begin > lun_end || lun_end >= geo->all_luns) {
                pr_err("nvm: lun out of bound (%u:%u > %u)\n",
                        lun_begin, lun_end, geo->all_luns - 1);
                return -EINVAL;
        }

        return 0;
}

static int __nvm_config_simple(struct nvm_dev *dev,
                               struct nvm_ioctl_create_simple *s)
{
        struct nvm_geo *geo = &dev->geo;

        if (s->lun_begin == -1 && s->lun_end == -1) {
                s->lun_begin = 0;
                s->lun_end = geo->all_luns - 1;
        }

        return nvm_config_check_luns(geo, s->lun_begin, s->lun_end);
}

static int __nvm_config_extended(struct nvm_dev *dev,
                                 struct nvm_ioctl_create_extended *e)
{
        if (e->lun_begin == 0xFFFF && e->lun_end == 0xFFFF) {
                e->lun_begin = 0;
                e->lun_end = dev->geo.all_luns - 1;
        }

        /* op not set falls into target's default */
        if (e->op == 0xFFFF) {
                e->op = NVM_TARGET_DEFAULT_OP;
        } else if (e->op < NVM_TARGET_MIN_OP || e->op > NVM_TARGET_MAX_OP) {
                pr_err("nvm: invalid over provisioning value\n");
                return -EINVAL;
        }

        return nvm_config_check_luns(&dev->geo, e->lun_begin, e->lun_end);
}

static int nvm_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create)
{
        struct nvm_ioctl_create_extended e;
        struct request_queue *tqueue;
        struct gendisk *tdisk;
        struct nvm_tgt_type *tt;
        struct nvm_target *t;
        struct nvm_tgt_dev *tgt_dev;
        void *targetdata;
        int ret;

        switch (create->conf.type) {
        case NVM_CONFIG_TYPE_SIMPLE:
                ret = __nvm_config_simple(dev, &create->conf.s);
                if (ret)
                        return ret;

                e.lun_begin = create->conf.s.lun_begin;
                e.lun_end = create->conf.s.lun_end;
                e.op = NVM_TARGET_DEFAULT_OP;
                break;
        case NVM_CONFIG_TYPE_EXTENDED:
                ret = __nvm_config_extended(dev, &create->conf.e);
                if (ret)
                        return ret;

                e = create->conf.e;
                break;
        default:
                pr_err("nvm: config type not valid\n");
                return -EINVAL;
        }

        tt = nvm_find_target_type(create->tgttype);
        if (!tt) {
                pr_err("nvm: target type %s not found\n", create->tgttype);
                return -EINVAL;
        }

        if ((tt->flags & NVM_TGT_F_HOST_L2P) != (dev->geo.dom & NVM_RSP_L2P)) {
                pr_err("nvm: device is incompatible with target L2P type.\n");
                return -EINVAL;
        }

        if (nvm_target_exists(create->tgtname)) {
                pr_err("nvm: target name already exists (%s)\n",
                                                        create->tgtname);
                return -EINVAL;
        }

        ret = nvm_reserve_luns(dev, e.lun_begin, e.lun_end);
        if (ret)
                return ret;

        t = kmalloc(sizeof(struct nvm_target), GFP_KERNEL);
        if (!t) {
                ret = -ENOMEM;
                goto err_reserve;
        }

        tgt_dev = nvm_create_tgt_dev(dev, e.lun_begin, e.lun_end, e.op);
        if (!tgt_dev) {
                pr_err("nvm: could not create target device\n");
                ret = -ENOMEM;
                goto err_t;
        }

        tdisk = alloc_disk(0);
        if (!tdisk) {
                ret = -ENOMEM;
                goto err_dev;
        }

        tqueue = blk_alloc_queue_node(GFP_KERNEL, dev->q->node);
        if (!tqueue) {
                ret = -ENOMEM;
                goto err_disk;
        }
        blk_queue_make_request(tqueue, tt->make_rq);

        strlcpy(tdisk->disk_name, create->tgtname, sizeof(tdisk->disk_name));
        tdisk->flags = GENHD_FL_EXT_DEVT;
        tdisk->major = 0;
        tdisk->first_minor = 0;
        tdisk->fops = &nvm_fops;
        tdisk->queue = tqueue;

        targetdata = tt->init(tgt_dev, tdisk, create->flags);
        if (IS_ERR(targetdata)) {
                ret = PTR_ERR(targetdata);
                goto err_init;
        }

        tdisk->private_data = targetdata;
        tqueue->queuedata = targetdata;

        blk_queue_max_hw_sectors(tqueue,
                        (dev->geo.csecs >> 9) * NVM_MAX_VLBA);

        set_capacity(tdisk, tt->capacity(targetdata));
        add_disk(tdisk);

        if (tt->sysfs_init && tt->sysfs_init(tdisk)) {
                ret = -ENOMEM;
                goto err_sysfs;
        }

        t->type = tt;
        t->disk = tdisk;
        t->dev = tgt_dev;

        mutex_lock(&dev->mlock);
        list_add_tail(&t->list, &dev->targets);
        mutex_unlock(&dev->mlock);

        __module_get(tt->owner);

        return 0;
err_sysfs:
        if (tt->exit)
                tt->exit(targetdata, true);
err_init:
        blk_cleanup_queue(tqueue);
        tdisk->queue = NULL;
err_disk:
        put_disk(tdisk);
err_dev:
        nvm_remove_tgt_dev(tgt_dev, 0);
err_t:
        kfree(t);
err_reserve:
        nvm_release_luns_err(dev, e.lun_begin, e.lun_end);
        return ret;
}

static void __nvm_remove_target(struct nvm_target *t, bool graceful)
{
        struct nvm_tgt_type *tt = t->type;
        struct gendisk *tdisk = t->disk;
        struct request_queue *q = tdisk->queue;

        del_gendisk(tdisk);
        blk_cleanup_queue(q);

        if (tt->sysfs_exit)
                tt->sysfs_exit(tdisk);

        if (tt->exit)
                tt->exit(tdisk->private_data, graceful);

        nvm_remove_tgt_dev(t->dev, 1);
        put_disk(tdisk);
        module_put(t->type->owner);

        list_del(&t->list);
        kfree(t);
}

/**
 * nvm_remove_tgt - Removes a target from the media manager
 * @dev:        device
 * @remove:     ioctl structure with target name to remove.
 *
 * Returns:
 * 0: on success
 * 1: on not found
 * <0: on error
 */
static int nvm_remove_tgt(struct nvm_dev *dev, struct nvm_ioctl_remove *remove)
{
        struct nvm_target *t;

        mutex_lock(&dev->mlock);
        t = nvm_find_target(dev, remove->tgtname);
        if (!t) {
                mutex_unlock(&dev->mlock);
                return 1;
        }
        __nvm_remove_target(t, true);
        mutex_unlock(&dev->mlock);

        return 0;
}

static int nvm_register_map(struct nvm_dev *dev)
{
        struct nvm_dev_map *rmap;
        int i, j;

        rmap = kmalloc(sizeof(struct nvm_dev_map), GFP_KERNEL);
        if (!rmap)
                goto err_rmap;

        rmap->chnls = kcalloc(dev->geo.num_ch, sizeof(struct nvm_ch_map),
                                                                GFP_KERNEL);
        if (!rmap->chnls)
                goto err_chnls;

        for (i = 0; i < dev->geo.num_ch; i++) {
                struct nvm_ch_map *ch_rmap;
                int *lun_roffs;
                int luns_in_chnl = dev->geo.num_lun;

                ch_rmap = &rmap->chnls[i];

                ch_rmap->ch_off = -1;
                ch_rmap->num_lun = luns_in_chnl;

                lun_roffs = kcalloc(luns_in_chnl, sizeof(int), GFP_KERNEL);
                if (!lun_roffs)
                        goto err_ch;

                for (j = 0; j < luns_in_chnl; j++)
                        lun_roffs[j] = -1;

                ch_rmap->lun_offs = lun_roffs;
        }

        dev->rmap = rmap;

        return 0;
err_ch:
        while (--i >= 0)
                kfree(rmap->chnls[i].lun_offs);
err_chnls:
        kfree(rmap);
err_rmap:
        return -ENOMEM;
}

static void nvm_unregister_map(struct nvm_dev *dev)
{
        struct nvm_dev_map *rmap = dev->rmap;
        int i;

        for (i = 0; i < dev->geo.num_ch; i++)
                kfree(rmap->chnls[i].lun_offs);

        kfree(rmap->chnls);
        kfree(rmap);
}

static void nvm_map_to_dev(struct nvm_tgt_dev *tgt_dev, struct ppa_addr *p)
{
        struct nvm_dev_map *dev_map = tgt_dev->map;
        struct nvm_ch_map *ch_map = &dev_map->chnls[p->a.ch];
        int lun_off = ch_map->lun_offs[p->a.lun];

        p->a.ch += ch_map->ch_off;
        p->a.lun += lun_off;
}

static void nvm_map_to_tgt(struct nvm_tgt_dev *tgt_dev, struct ppa_addr *p)
{
        struct nvm_dev *dev = tgt_dev->parent;
        struct nvm_dev_map *dev_rmap = dev->rmap;
        struct nvm_ch_map *ch_rmap = &dev_rmap->chnls[p->a.ch];
        int lun_roff = ch_rmap->lun_offs[p->a.lun];

        p->a.ch -= ch_rmap->ch_off;
        p->a.lun -= lun_roff;
}

static void nvm_ppa_tgt_to_dev(struct nvm_tgt_dev *tgt_dev,
                               struct ppa_addr *ppa_list, int nr_ppas)
{
        int i;

        for (i = 0; i < nr_ppas; i++) {
                nvm_map_to_dev(tgt_dev, &ppa_list[i]);
                ppa_list[i] = generic_to_dev_addr(tgt_dev->parent, ppa_list[i]);
        }
}

static void nvm_ppa_dev_to_tgt(struct nvm_tgt_dev *tgt_dev,
                               struct ppa_addr *ppa_list, int nr_ppas)
{
        int i;

        for (i = 0; i < nr_ppas; i++) {
                ppa_list[i] = dev_to_generic_addr(tgt_dev->parent, ppa_list[i]);
                nvm_map_to_tgt(tgt_dev, &ppa_list[i]);
        }
}

static void nvm_rq_tgt_to_dev(struct nvm_tgt_dev *tgt_dev, struct nvm_rq *rqd)
{
        struct ppa_addr *ppa_list = nvm_rq_to_ppa_list(rqd);

        nvm_ppa_tgt_to_dev(tgt_dev, ppa_list, rqd->nr_ppas);
}

static void nvm_rq_dev_to_tgt(struct nvm_tgt_dev *tgt_dev, struct nvm_rq *rqd)
{
        struct ppa_addr *ppa_list = nvm_rq_to_ppa_list(rqd);

        nvm_ppa_dev_to_tgt(tgt_dev, ppa_list, rqd->nr_ppas);
}

int nvm_register_tgt_type(struct nvm_tgt_type *tt)
{
        int ret = 0;

        down_write(&nvm_tgtt_lock);
        if (__nvm_find_target_type(tt->name))
|
|
|
ret = -EEXIST;
|
|
|
|
else
|
2016-05-07 01:03:02 +07:00
|
|
|
list_add(&tt->list, &nvm_tgt_types);
|
2016-07-07 14:54:17 +07:00
|
|
|
up_write(&nvm_tgtt_lock);
|
lightnvm: Support for Open-Channel SSDs
Open-channel SSDs are devices that share responsibilities with the host
in order to implement and maintain features that typical SSDs keep
strictly in firmware. These include (i) the Flash Translation Layer
(FTL), (ii) bad block management, and (iii) hardware units such as the
flash controller, the interface controller, and large amounts of flash
chips. In this way, Open-channels SSDs exposes direct access to their
physical flash storage, while keeping a subset of the internal features
of SSDs.
LightNVM is a specification that gives support to Open-channel SSDs
LightNVM allows the host to manage data placement, garbage collection,
and parallelism. Device specific responsibilities such as bad block
management, FTL extensions to support atomic IOs, or metadata
persistence are still handled by the device.
The implementation of LightNVM consists of two parts: core and
(multiple) targets. The core implements functionality shared across
targets. This is initialization, teardown and statistics. The targets
implement the interface that exposes physical flash to user-space
applications. Examples of such targets include key-value store,
object-store, as well as traditional block devices, which can be
application-specific.
Contributions in this patch from:
Javier Gonzalez <jg@lightnvm.io>
Dongsheng Yang <yangds.fnst@cn.fujitsu.com>
Jesper Madsen <jmad@itu.dk>
Signed-off-by: Matias Bjørling <m@bjorling.me>
Signed-off-by: Jens Axboe <axboe@fb.com>
2015-10-29 01:54:55 +07:00
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
2016-05-07 01:03:02 +07:00
|
|
|
EXPORT_SYMBOL(nvm_register_tgt_type);

void nvm_unregister_tgt_type(struct nvm_tgt_type *tt)
{
	if (!tt)
		return;

	down_write(&nvm_tgtt_lock);
	list_del(&tt->list);
	up_write(&nvm_tgtt_lock);
}
EXPORT_SYMBOL(nvm_unregister_tgt_type);

void *nvm_dev_dma_alloc(struct nvm_dev *dev, gfp_t mem_flags,
							dma_addr_t *dma_handler)
{
	return dev->ops->dev_dma_alloc(dev, dev->dma_pool, mem_flags,
								dma_handler);
}
EXPORT_SYMBOL(nvm_dev_dma_alloc);

void nvm_dev_dma_free(struct nvm_dev *dev, void *addr, dma_addr_t dma_handler)
{
	dev->ops->dev_dma_free(dev->dma_pool, addr, dma_handler);
}
EXPORT_SYMBOL(nvm_dev_dma_free);

static struct nvm_dev *nvm_find_nvm_dev(const char *name)
{
	struct nvm_dev *dev;

	list_for_each_entry(dev, &nvm_devices, devices)
		if (!strcmp(name, dev->name))
			return dev;

	return NULL;
}

static int nvm_set_rqd_ppalist(struct nvm_tgt_dev *tgt_dev, struct nvm_rq *rqd,
			       const struct ppa_addr *ppas, int nr_ppas)
{
	struct nvm_dev *dev = tgt_dev->parent;
	struct nvm_geo *geo = &tgt_dev->geo;
	int i, plane_cnt, pl_idx;
	struct ppa_addr ppa;

	if (geo->pln_mode == NVM_PLANE_SINGLE && nr_ppas == 1) {
		rqd->nr_ppas = nr_ppas;
		rqd->ppa_addr = ppas[0];

		return 0;
	}

	rqd->nr_ppas = nr_ppas;
	rqd->ppa_list = nvm_dev_dma_alloc(dev, GFP_KERNEL, &rqd->dma_ppa_list);
	if (!rqd->ppa_list) {
		pr_err("nvm: failed to allocate dma memory\n");
		return -ENOMEM;
	}

	plane_cnt = geo->pln_mode;
	rqd->nr_ppas *= plane_cnt;

	for (i = 0; i < nr_ppas; i++) {
		for (pl_idx = 0; pl_idx < plane_cnt; pl_idx++) {
			ppa = ppas[i];
			ppa.g.pl = pl_idx;
			rqd->ppa_list[(pl_idx * nr_ppas) + i] = ppa;
		}
	}

	return 0;
}

static void nvm_free_rqd_ppalist(struct nvm_tgt_dev *tgt_dev,
				 struct nvm_rq *rqd)
{
	if (!rqd->ppa_list)
		return;

	nvm_dev_dma_free(tgt_dev->parent, rqd->ppa_list, rqd->dma_ppa_list);
}

static int nvm_set_flags(struct nvm_geo *geo, struct nvm_rq *rqd)
{
	int flags = 0;

	if (geo->version == NVM_OCSSD_SPEC_20)
		return 0;

	if (rqd->is_seq)
		flags |= geo->pln_mode >> 1;

	if (rqd->opcode == NVM_OP_PREAD)
		flags |= (NVM_IO_SCRAMBLE_ENABLE | NVM_IO_SUSPEND);
	else if (rqd->opcode == NVM_OP_PWRITE)
		flags |= NVM_IO_SCRAMBLE_ENABLE;

	return flags;
}

int nvm_submit_io(struct nvm_tgt_dev *tgt_dev, struct nvm_rq *rqd)
{
	struct nvm_dev *dev = tgt_dev->parent;
	int ret;

	if (!dev->ops->submit_io)
		return -ENODEV;

	nvm_rq_tgt_to_dev(tgt_dev, rqd);

	rqd->dev = tgt_dev;
	rqd->flags = nvm_set_flags(&tgt_dev->geo, rqd);

	/* In case of error, fail with right address format */
	ret = dev->ops->submit_io(dev, rqd);
	if (ret)
		nvm_rq_dev_to_tgt(tgt_dev, rqd);
	return ret;
}
EXPORT_SYMBOL(nvm_submit_io);

int nvm_submit_io_sync(struct nvm_tgt_dev *tgt_dev, struct nvm_rq *rqd)
{
	struct nvm_dev *dev = tgt_dev->parent;
	int ret;

	if (!dev->ops->submit_io_sync)
		return -ENODEV;

	nvm_rq_tgt_to_dev(tgt_dev, rqd);

	rqd->dev = tgt_dev;
	rqd->flags = nvm_set_flags(&tgt_dev->geo, rqd);

	/* In case of error, fail with right address format */
	ret = dev->ops->submit_io_sync(dev, rqd);
	nvm_rq_dev_to_tgt(tgt_dev, rqd);

	return ret;
}
EXPORT_SYMBOL(nvm_submit_io_sync);

void nvm_end_io(struct nvm_rq *rqd)
{
	struct nvm_tgt_dev *tgt_dev = rqd->dev;

	/* Convert address space */
	if (tgt_dev)
		nvm_rq_dev_to_tgt(tgt_dev, rqd);

	if (rqd->end_io)
		rqd->end_io(rqd);
}
EXPORT_SYMBOL(nvm_end_io);

static int nvm_submit_io_sync_raw(struct nvm_dev *dev, struct nvm_rq *rqd)
{
	if (!dev->ops->submit_io_sync)
		return -ENODEV;

	rqd->flags = nvm_set_flags(&dev->geo, rqd);

	return dev->ops->submit_io_sync(dev, rqd);
}

static int nvm_bb_chunk_sense(struct nvm_dev *dev, struct ppa_addr ppa)
{
	struct nvm_rq rqd = { NULL };
	struct bio bio;
	struct bio_vec bio_vec;
	struct page *page;
	int ret;

	page = alloc_page(GFP_KERNEL);
	if (!page)
		return -ENOMEM;

	bio_init(&bio, &bio_vec, 1);
	bio_add_page(&bio, page, PAGE_SIZE, 0);
	bio_set_op_attrs(&bio, REQ_OP_READ, 0);

	rqd.bio = &bio;
	rqd.opcode = NVM_OP_PREAD;
	rqd.is_seq = 1;
	rqd.nr_ppas = 1;
	rqd.ppa_addr = generic_to_dev_addr(dev, ppa);

	ret = nvm_submit_io_sync_raw(dev, &rqd);
	if (ret) {
		/* free the sense page on the submission error path too */
		__free_page(page);
		return ret;
	}

	__free_page(page);

	return rqd.error;
}

/*
 * Scans a 1.2 chunk's first and last page to determine its state.
 * If the chunk is found to be open, also scan it to update the write
 * pointer.
 */
static int nvm_bb_chunk_scan(struct nvm_dev *dev, struct ppa_addr ppa,
			     struct nvm_chk_meta *meta)
{
	struct nvm_geo *geo = &dev->geo;
	int ret, pg, pl;

	/* sense first page */
	ret = nvm_bb_chunk_sense(dev, ppa);
	if (ret < 0) /* io error */
		return ret;
	else if (ret == 0) /* valid data */
		meta->state = NVM_CHK_ST_OPEN;
	else if (ret > 0) {
		/*
		 * If empty page, the chunk is free, else it is an
		 * actual io error. In that case, mark it offline.
		 */
		switch (ret) {
		case NVM_RSP_ERR_EMPTYPAGE:
			meta->state = NVM_CHK_ST_FREE;
			return 0;
		case NVM_RSP_ERR_FAILCRC:
		case NVM_RSP_ERR_FAILECC:
		case NVM_RSP_WARN_HIGHECC:
			meta->state = NVM_CHK_ST_OPEN;
			goto scan;
		default:
			return -ret; /* other io error */
		}
	}

	/* sense last page */
	ppa.g.pg = geo->num_pg - 1;
	ppa.g.pl = geo->num_pln - 1;

	ret = nvm_bb_chunk_sense(dev, ppa);
	if (ret < 0) /* io error */
		return ret;
	else if (ret == 0) { /* chunk fully written */
		meta->state = NVM_CHK_ST_CLOSED;
		meta->wp = geo->clba;
		return 0;
	} else if (ret > 0) {
		switch (ret) {
		case NVM_RSP_ERR_EMPTYPAGE:
		case NVM_RSP_ERR_FAILCRC:
		case NVM_RSP_ERR_FAILECC:
		case NVM_RSP_WARN_HIGHECC:
			meta->state = NVM_CHK_ST_OPEN;
			break;
		default:
			return -ret; /* other io error */
		}
	}

scan:
	/*
	 * The chunk is open; scan it sequentially to update the write
	 * pointer. We assume that targets write data across all planes
	 * before moving to the next page.
	 */
	for (pg = 0; pg < geo->num_pg; pg++) {
		for (pl = 0; pl < geo->num_pln; pl++) {
			ppa.g.pg = pg;
			ppa.g.pl = pl;

			ret = nvm_bb_chunk_sense(dev, ppa);
			if (ret < 0) /* io error */
				return ret;
			else if (ret == 0) {
				meta->wp += geo->ws_min;
			} else if (ret > 0) {
				switch (ret) {
				case NVM_RSP_ERR_EMPTYPAGE:
					return 0;
				case NVM_RSP_ERR_FAILCRC:
				case NVM_RSP_ERR_FAILECC:
				case NVM_RSP_WARN_HIGHECC:
					meta->wp += geo->ws_min;
					break;
				default:
					return -ret; /* other io error */
				}
			}
		}
	}

	return 0;
}

/*
 * Folds a bad block list from its plane representation to its
 * chunk representation.
 *
 * If any plane's status is bad or grown bad, the chunk is marked
 * offline. Otherwise, the first plane's state acts as the chunk state.
 */
static int nvm_bb_to_chunk(struct nvm_dev *dev, struct ppa_addr ppa,
			   u8 *blks, int nr_blks, struct nvm_chk_meta *meta)
{
	struct nvm_geo *geo = &dev->geo;
	int ret, blk, pl, offset, blktype;

	for (blk = 0; blk < geo->num_chk; blk++) {
		offset = blk * geo->pln_mode;
		blktype = blks[offset];

		for (pl = 0; pl < geo->pln_mode; pl++) {
			if (blks[offset + pl] &
					(NVM_BLK_T_BAD|NVM_BLK_T_GRWN_BAD)) {
				blktype = blks[offset + pl];
				break;
			}
		}

		ppa.g.blk = blk;

		meta->wp = 0;
		meta->type = NVM_CHK_TP_W_SEQ;
		meta->wi = 0;
		meta->slba = generic_to_dev_addr(dev, ppa).ppa;
		meta->cnlb = dev->geo.clba;

		if (blktype == NVM_BLK_T_FREE) {
			ret = nvm_bb_chunk_scan(dev, ppa, meta);
			if (ret)
				return ret;
		} else {
			meta->state = NVM_CHK_ST_OFFLINE;
		}

		meta++;
	}

	return 0;
}

static int nvm_get_bb_meta(struct nvm_dev *dev, sector_t slba,
			   int nchks, struct nvm_chk_meta *meta)
{
	struct nvm_geo *geo = &dev->geo;
	struct ppa_addr ppa;
	u8 *blks;
	int ch, lun, nr_blks;
	int ret = 0;

	ppa.ppa = slba;
	ppa = dev_to_generic_addr(dev, ppa);

	if (ppa.g.blk != 0)
		return -EINVAL;

	if ((nchks % geo->num_chk) != 0)
		return -EINVAL;

	nr_blks = geo->num_chk * geo->pln_mode;

	blks = kmalloc(nr_blks, GFP_KERNEL);
	if (!blks)
		return -ENOMEM;

	for (ch = ppa.g.ch; ch < geo->num_ch; ch++) {
		for (lun = ppa.g.lun; lun < geo->num_lun; lun++) {
			struct ppa_addr ppa_gen, ppa_dev;

			if (!nchks)
				goto done;

			ppa_gen.ppa = 0;
			ppa_gen.g.ch = ch;
			ppa_gen.g.lun = lun;
			ppa_dev = generic_to_dev_addr(dev, ppa_gen);

			ret = dev->ops->get_bb_tbl(dev, ppa_dev, blks);
			if (ret)
				goto done;

			ret = nvm_bb_to_chunk(dev, ppa_gen, blks, nr_blks,
					      meta);
			if (ret)
				goto done;

			meta += geo->num_chk;
			nchks -= geo->num_chk;
		}
	}

done:
	kfree(blks);
	return ret;
}

int nvm_get_chunk_meta(struct nvm_tgt_dev *tgt_dev, struct ppa_addr ppa,
		       int nchks, struct nvm_chk_meta *meta)
{
	struct nvm_dev *dev = tgt_dev->parent;

	nvm_ppa_tgt_to_dev(tgt_dev, &ppa, 1);

	if (dev->geo.version == NVM_OCSSD_SPEC_12)
		return nvm_get_bb_meta(dev, (sector_t)ppa.ppa, nchks, meta);

	return dev->ops->get_chk_meta(dev, (sector_t)ppa.ppa, nchks, meta);
}
EXPORT_SYMBOL_GPL(nvm_get_chunk_meta);

int nvm_set_chunk_meta(struct nvm_tgt_dev *tgt_dev, struct ppa_addr *ppas,
		       int nr_ppas, int type)
{
	struct nvm_dev *dev = tgt_dev->parent;
	struct nvm_rq rqd;
	int ret;

	if (dev->geo.version == NVM_OCSSD_SPEC_20)
		return 0;

	if (nr_ppas > NVM_MAX_VLBA) {
		pr_err("nvm: unable to update all blocks atomically\n");
		return -EINVAL;
	}

	memset(&rqd, 0, sizeof(struct nvm_rq));

	nvm_set_rqd_ppalist(tgt_dev, &rqd, ppas, nr_ppas);
	nvm_rq_tgt_to_dev(tgt_dev, &rqd);

	ret = dev->ops->set_bb_tbl(dev, &rqd.ppa_addr, rqd.nr_ppas, type);
	nvm_free_rqd_ppalist(tgt_dev, &rqd);
	if (ret)
		return -EINVAL;

	return 0;
}
EXPORT_SYMBOL_GPL(nvm_set_chunk_meta);

static int nvm_core_init(struct nvm_dev *dev)
{
	struct nvm_geo *geo = &dev->geo;
	int ret;

	dev->lun_map = kcalloc(BITS_TO_LONGS(geo->all_luns),
			       sizeof(unsigned long), GFP_KERNEL);
	if (!dev->lun_map)
		return -ENOMEM;

	INIT_LIST_HEAD(&dev->area_list);
	INIT_LIST_HEAD(&dev->targets);
	mutex_init(&dev->mlock);
	spin_lock_init(&dev->lock);

	ret = nvm_register_map(dev);
	if (ret)
		goto err_fmtype;

	return 0;
err_fmtype:
	kfree(dev->lun_map);
	return ret;
}

static void nvm_free(struct nvm_dev *dev)
{
	if (!dev)
		return;

	if (dev->dma_pool)
		dev->ops->destroy_dma_pool(dev->dma_pool);

	nvm_unregister_map(dev);
	kfree(dev->lun_map);
	kfree(dev);
}

static int nvm_init(struct nvm_dev *dev)
{
	struct nvm_geo *geo = &dev->geo;
	int ret = -EINVAL;

	if (dev->ops->identity(dev)) {
		pr_err("nvm: device could not be identified\n");
		goto err;
	}

	pr_debug("nvm: ver:%u.%u nvm_vendor:%x\n",
			geo->major_ver_id, geo->minor_ver_id,
			geo->vmnt);

	ret = nvm_core_init(dev);
	if (ret) {
		pr_err("nvm: could not initialize core structures.\n");
		goto err;
	}

	pr_info("nvm: registered %s [%u/%u/%u/%u/%u]\n",
			dev->name, dev->geo.ws_min, dev->geo.ws_opt,
			dev->geo.num_chk, dev->geo.all_luns,
			dev->geo.num_ch);
	return 0;
err:
	pr_err("nvm: failed to initialize nvm\n");
	return ret;
}

struct nvm_dev *nvm_alloc_dev(int node)
{
	return kzalloc_node(sizeof(struct nvm_dev), GFP_KERNEL, node);
}
EXPORT_SYMBOL(nvm_alloc_dev);

int nvm_register(struct nvm_dev *dev)
{
	int ret, exp_pool_size;

	if (!dev->q || !dev->ops)
		return -EINVAL;

	ret = nvm_init(dev);
	if (ret)
		return ret;

	exp_pool_size = max_t(int, PAGE_SIZE,
			      (NVM_MAX_VLBA * (sizeof(u64) + dev->geo.sos)));
	exp_pool_size = round_up(exp_pool_size, PAGE_SIZE);

	dev->dma_pool = dev->ops->create_dma_pool(dev, "ppalist",
						  exp_pool_size);
	if (!dev->dma_pool) {
		pr_err("nvm: could not create dma pool\n");
		nvm_free(dev);
		return -ENOMEM;
lightnvm: Support for Open-Channel SSDs
Open-channel SSDs are devices that share responsibilities with the host
in order to implement and maintain features that typical SSDs keep
strictly in firmware. These include (i) the Flash Translation Layer
(FTL), (ii) bad block management, and (iii) hardware units such as the
flash controller, the interface controller, and large amounts of flash
chips. In this way, Open-channels SSDs exposes direct access to their
physical flash storage, while keeping a subset of the internal features
of SSDs.
LightNVM is a specification that gives support to Open-channel SSDs
LightNVM allows the host to manage data placement, garbage collection,
and parallelism. Device specific responsibilities such as bad block
management, FTL extensions to support atomic IOs, or metadata
persistence are still handled by the device.
The implementation of LightNVM consists of two parts: core and
(multiple) targets. The core implements functionality shared across
targets. This is initialization, teardown and statistics. The targets
implement the interface that exposes physical flash to user-space
applications. Examples of such targets include key-value store,
object-store, as well as traditional block devices, which can be
application-specific.
Contributions in this patch from:
Javier Gonzalez <jg@lightnvm.io>
Dongsheng Yang <yangds.fnst@cn.fujitsu.com>
Jesper Madsen <jmad@itu.dk>
Signed-off-by: Matias Bjørling <m@bjorling.me>
Signed-off-by: Jens Axboe <axboe@fb.com>
2015-10-29 01:54:55 +07:00
|
|
|
}
|
|
|
|
|
2015-12-06 17:25:49 +07:00
|
|
|
	/* register device with a supported media manager */
	down_write(&nvm_lock);
	list_add(&dev->devices, &nvm_devices);
	up_write(&nvm_lock);

	return 0;
}
EXPORT_SYMBOL(nvm_register);

void nvm_unregister(struct nvm_dev *dev)
{
	struct nvm_target *t, *tmp;

	mutex_lock(&dev->mlock);
	list_for_each_entry_safe(t, tmp, &dev->targets, list) {
		if (t->dev->parent != dev)
			continue;
		__nvm_remove_target(t, false);
	}
	mutex_unlock(&dev->mlock);

	down_write(&nvm_lock);
	list_del(&dev->devices);
	up_write(&nvm_lock);

	nvm_free(dev);
}
EXPORT_SYMBOL(nvm_unregister);

static int __nvm_configure_create(struct nvm_ioctl_create *create)
{
	struct nvm_dev *dev;

	down_write(&nvm_lock);
	dev = nvm_find_nvm_dev(create->dev);
	up_write(&nvm_lock);

	if (!dev) {
		pr_err("nvm: device not found\n");
		return -EINVAL;
	}

	return nvm_create_tgt(dev, create);
}

static long nvm_ioctl_info(struct file *file, void __user *arg)
{
	struct nvm_ioctl_info *info;
	struct nvm_tgt_type *tt;
	int tgt_iter = 0;

	info = memdup_user(arg, sizeof(struct nvm_ioctl_info));
	if (IS_ERR(info))
		return -EFAULT;

	info->version[0] = NVM_VERSION_MAJOR;
	info->version[1] = NVM_VERSION_MINOR;
	info->version[2] = NVM_VERSION_PATCH;

	down_write(&nvm_tgtt_lock);
	list_for_each_entry(tt, &nvm_tgt_types, list) {
		struct nvm_ioctl_info_tgt *tgt = &info->tgts[tgt_iter];

		tgt->version[0] = tt->version[0];
		tgt->version[1] = tt->version[1];
		tgt->version[2] = tt->version[2];
		strncpy(tgt->tgtname, tt->name, NVM_TTYPE_NAME_MAX);

		tgt_iter++;
	}

	info->tgtsize = tgt_iter;
	up_write(&nvm_tgtt_lock);

	if (copy_to_user(arg, info, sizeof(struct nvm_ioctl_info))) {
		kfree(info);
		return -EFAULT;
	}

	kfree(info);
	return 0;
}

static long nvm_ioctl_get_devices(struct file *file, void __user *arg)
{
	struct nvm_ioctl_get_devices *devices;
	struct nvm_dev *dev;
	int i = 0;

	devices = kzalloc(sizeof(struct nvm_ioctl_get_devices), GFP_KERNEL);
	if (!devices)
		return -ENOMEM;

	down_write(&nvm_lock);
	list_for_each_entry(dev, &nvm_devices, devices) {
		struct nvm_ioctl_device_info *info = &devices->info[i];

		strlcpy(info->devname, dev->name, sizeof(info->devname));

		/* kept for compatibility */
		info->bmversion[0] = 1;
		info->bmversion[1] = 0;
		info->bmversion[2] = 0;
		strlcpy(info->bmname, "gennvm", sizeof(info->bmname));
		i++;

		if (i > 31) {
			pr_err("nvm: max 31 devices can be reported.\n");
			break;
		}
	}
	up_write(&nvm_lock);

	devices->nr_devices = i;

	if (copy_to_user(arg, devices,
			 sizeof(struct nvm_ioctl_get_devices))) {
		kfree(devices);
		return -EFAULT;
	}

	kfree(devices);
	return 0;
}

static long nvm_ioctl_dev_create(struct file *file, void __user *arg)
{
	struct nvm_ioctl_create create;

	if (copy_from_user(&create, arg, sizeof(struct nvm_ioctl_create)))
		return -EFAULT;

	if (create.conf.type == NVM_CONFIG_TYPE_EXTENDED &&
	    create.conf.e.rsv != 0) {
		pr_err("nvm: reserved config field in use\n");
		return -EINVAL;
	}

	create.dev[DISK_NAME_LEN - 1] = '\0';
	create.tgttype[NVM_TTYPE_NAME_MAX - 1] = '\0';
	create.tgtname[DISK_NAME_LEN - 1] = '\0';

	if (create.flags != 0) {
		__u32 flags = create.flags;

		/* Check for valid flags */
		if (flags & NVM_TARGET_FACTORY)
			flags &= ~NVM_TARGET_FACTORY;

		if (flags) {
			pr_err("nvm: flag not supported\n");
			return -EINVAL;
		}
	}

	return __nvm_configure_create(&create);
}
static long nvm_ioctl_dev_remove(struct file *file, void __user *arg)
{
	struct nvm_ioctl_remove remove;
	struct nvm_dev *dev;
	int ret = 0;

	if (copy_from_user(&remove, arg, sizeof(struct nvm_ioctl_remove)))
		return -EFAULT;

	remove.tgtname[DISK_NAME_LEN - 1] = '\0';

	if (remove.flags != 0) {
		pr_err("nvm: no flags supported\n");
		return -EINVAL;
	}

	list_for_each_entry(dev, &nvm_devices, devices) {
		ret = nvm_remove_tgt(dev, &remove);
		if (!ret)
			break;
	}

	return ret;
}

/* kept for compatibility reasons */
static long nvm_ioctl_dev_init(struct file *file, void __user *arg)
{
	struct nvm_ioctl_dev_init init;

	if (copy_from_user(&init, arg, sizeof(struct nvm_ioctl_dev_init)))
		return -EFAULT;

	if (init.flags != 0) {
		pr_err("nvm: no flags supported\n");
		return -EINVAL;
	}

	return 0;
}

/* Kept for compatibility reasons */
static long nvm_ioctl_dev_factory(struct file *file, void __user *arg)
{
	struct nvm_ioctl_dev_factory fact;

	if (copy_from_user(&fact, arg, sizeof(struct nvm_ioctl_dev_factory)))
		return -EFAULT;

	fact.dev[DISK_NAME_LEN - 1] = '\0';

	if (fact.flags & ~(NVM_FACTORY_NR_BITS - 1))
		return -EINVAL;

	return 0;
}
static long nvm_ctl_ioctl(struct file *file, uint cmd, unsigned long arg)
{
	void __user *argp = (void __user *)arg;

	if (!capable(CAP_SYS_ADMIN))
		return -EPERM;

	switch (cmd) {
	case NVM_INFO:
		return nvm_ioctl_info(file, argp);
	case NVM_GET_DEVICES:
		return nvm_ioctl_get_devices(file, argp);
	case NVM_DEV_CREATE:
		return nvm_ioctl_dev_create(file, argp);
	case NVM_DEV_REMOVE:
		return nvm_ioctl_dev_remove(file, argp);
	case NVM_DEV_INIT:
		return nvm_ioctl_dev_init(file, argp);
	case NVM_DEV_FACTORY:
		return nvm_ioctl_dev_factory(file, argp);
	}
	return 0;
}

static const struct file_operations _ctl_fops = {
	.open = nonseekable_open,
	.unlocked_ioctl = nvm_ctl_ioctl,
	.owner = THIS_MODULE,
	.llseek = noop_llseek,
};

static struct miscdevice _nvm_misc = {
	.minor		= MISC_DYNAMIC_MINOR,
	.name		= "lightnvm",
	.nodename	= "lightnvm/control",
	.fops		= &_ctl_fops,
};
|
2016-10-30 03:38:41 +07:00
|
|
|
builtin_misc_device(_nvm_misc);
|