skb_copy_datagram_iter and skb_copy_and_csum_datagram are essentially
the same, with a couple of differences: the first is the copy operation
used, which is either a simple copy or a csum_and_copy; the second is
the behavior on the "short copy" path, where a simple copy needs to
return the number of bytes successfully copied, while csum_and_copy
needs to fault immediately as the checksum is partial.
Introduce __skb_datagram_iter, which additionally accepts:
1. a copy operation function pointer
2. private data that goes with the copy operation
3. a fault_short flag to indicate the action on a short copy
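A rough sketch of the resulting shape (hedged: the exact signature and
the simple_copy_to_iter wrapper are inferred from the description
above, not quoted from the final source):

	static int __skb_datagram_iter(const struct sk_buff *skb, int offset,
				       struct iov_iter *to, int len,
				       bool fault_short,
				       size_t (*cb)(const void *, size_t,
						    void *, struct iov_iter *),
				       void *data);

	/* plain copy: no private data, short copies just return the
	 * number of bytes copied (fault_short = false) */
	int skb_copy_datagram_iter(const struct sk_buff *skb, int offset,
				   struct iov_iter *to, int len)
	{
		return __skb_datagram_iter(skb, offset, to, len, false,
					   simple_copy_to_iter, NULL);
	}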
Suggested-by: David S. Miller <davem@davemloft.net>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sagi Grimberg <sagi@lightbitslabs.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
The single caller of csum_and_copy_to_iter is skb_copy_and_csum_datagram,
and we are trying to unify its logic with skb_copy_datagram_iter by
passing a callback for the copy operation we want to apply. Thus, we
need to make the checksum pointer private to the function.
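For illustration, the running checksum can then travel through the
opaque private-data pointer (a sketch based on the description above):

	static int skb_copy_and_csum_datagram(const struct sk_buff *skb,
					      int offset, struct iov_iter *to,
					      int len, __wsum *csump)
	{
		/* fault_short = true: a partial checksum is useless */
		return __skb_datagram_iter(skb, offset, to, len, true,
					   csum_and_copy_to_iter, csump);
	}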
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sagi Grimberg <sagi@lightbitslabs.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
This will be useful for consolidating skb_copy_and_hash_datagram_iter
and skb_copy_and_csum_datagram into a single code path.
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sagi Grimberg <sagi@lightbitslabs.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Prevent a namespace conflict: in the following patches, skbuff.h will
include the crypto API.
Acked-by: David S. Miller <davem@davemloft.net>
Cc: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Sagi Grimberg <sagi@lightbitslabs.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Merge tag 'for-4.20/dm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm
Pull device mapper fixes from Mike Snitzer:
- Fix DM cache metadata to verify that a cache has blocks before
continuing with operations that require them.
- Fix bio-based DM core's dm_make_request() to properly impose device
limits on individual bios by making use of blk_queue_split().
- Fix long-standing race with how DM thinp notified userspace of
thin-pool mode state changes before they were actually made.
- Fix the zoned target's bio completion handling; this is a fairly
invasive fix at this stage but it is localized to the zoned target.
Any zoned target users will benefit from this fix.
* tag 'for-4.20/dm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
dm thin: bump target version
dm thin: send event about thin-pool state change _after_ making it
dm zoned: Fix target BIO completion handling
dm: call blk_queue_split() to impose device limits on bios
dm cache metadata: verify cache has blocks in blocks_are_clean_separate_dirty()
Merge tag 'media/v4.20-5' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media
Pull media fixes from Mauro Carvalho Chehab:
- one regression at vsp1 driver
- some last-minute changes for the upcoming request API logic and for
stateless codec support. As the stateless codec "cedrus" driver is in
staging, don't apply the MPEG controls as part of the main V4L2 API,
as those may not be ready for production yet.
* tag 'media/v4.20-5' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media:
media: Add a Kconfig option for the Request API
media: extended-controls.rst: add note to the MPEG2 state controls
media: mpeg2-ctrls.h: move MPEG2 state controls to non-public header
media: vicodec: set state resolution from raw format
media: vivid: drop v4l2_ctrl_request_complete() from start_streaming
media: vb2: don't unbind/put the object when going to state QUEUED
media: vb2: keep a reference to the request until dqbuf
media: vb2: skip request checks for VIDIOC_PREPARE_BUF
media: vb2: don't call __vb2_queue_cancel if vb2_start_streaming failed
media: cedrus: Fix a NULL vs IS_ERR() check
media: vsp1: Fix LIF buffer thresholds
Merge tag 'ovl-fixes-4.20-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/vfs
Pull overlayfs fixes from Miklos Szeredi:
"Needed to revert a patch, because it possibly introduces a security
hole. Since the patch is basically a conceptual cleanup, not a bug
fix, it's safe to revert. I'm not giving up on this, and discussions
seemed to have reached an agreement over how to move forward, but that
can wait 'till the next release.
The other two patches are fixes for bugs introduced in recent
releases"
* tag 'ovl-fixes-4.20-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/vfs:
Revert "ovl: relax permission checking on underlying layers"
ovl: fix decode of dir file handle with multi lower layers
ovl: fix missing override creds in link of a metacopy upper
Merge tag 'fuse-fixes-4.20-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/fuse
Pull fuse fixes from Miklos Szeredi:
"There's one patch fixing a minor but long lived bug, the others are
fixing regressions introduced in this cycle"
* tag 'fuse-fixes-4.20-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/fuse:
fuse: continue to send FUSE_RELEASEDIR when FUSE_OPEN returns ENOSYS
fuse: Fix memory leak in fuse_dev_free()
fuse: fix revalidation of attributes for permission check
fuse: fix fsync on directory
fuse: Add bad inode check in fuse_destroy_inode()
Merge tag 'trace-v4.20-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Pull tracing fixes from Steven Rostedt:
"While running various ftrace tests on new development code, the
kmemleak detector found some allocations that were not freed
correctly.
This fixes a couple of leaks in the event trigger code as well as in
adding function trace filters in trace instances"
* tag 'trace-v4.20-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
tracing: Fix memory leak of instance function hash filters
tracing: Fix memory leak in set_trigger_filter()
tracing: Fix memory leak in create_filter()
Between v3 [1] and v4 [2] of the blkg association series, the
association point moved from generic_make_request_checks(), which is
called after the request enters the queue, to bio_set_dev(), which is when
the bio is formed before submit_bio(). When the request_queue goes away,
the blkgs supporting the request_queue are destroyed and then the
q->root_blkg is set to %NULL.
This patch adds a %NULL check to blkg_tryget_closest() to prevent the
NULL pointer dereference caused by the above. It also adds a guard to
see if the request_queue is dying when creating a blkg, to prevent
creating a blkg for a dead request_queue.
[1] https://lore.kernel.org/lkml/20180911184137.35897-1-dennisszhou@gmail.com/
[2] https://lore.kernel.org/lkml/20181126211946.77067-1-dennis@kernel.org/
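A sketch of the resulting guard (illustrative; blkg_tryget() and the
parent walk follow blk-cgroup conventions, not necessarily the final
diff):

	static inline struct blkcg_gq *blkg_tryget_closest(struct blkcg_gq *blkg)
	{
		/* q->root_blkg may already be NULL on a dying queue */
		while (blkg && !blkg_tryget(blkg))
			blkg = blkg->parent;

		return blkg;
	}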
Fixes: 5cdf2e3fea ("blkcg: associate blkg when associating a device")
Reported-and-tested-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Dennis Zhou <dennis@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lkml and Linus gained a CoC, and it's serious this time. Which means
my no 1 reason for declining to officially step up as drm maintainer
is gone, and I didn't find any new good excuse.
I chatted with a few people in private already, and the biggest
concern is that I mislay my community hat and start running around
with my intel hat only. Or some other convenient abuse of trust.
That's why this patch doesn't just need a lot of acks that mean "yeah
seems fine to me", but a lot of acks that mean "yeah we'll tell you
when you're over the line and usurp you from that comfy chair if you
don't get it". Which I think we've done a fairly good job at here at
dri-devel in general, but better to be clear.
Rough idea is that I'll do this for maybe 2-3 years, helping Dave
figure out a group model for drm overall. And getting the tooling and
infrastructure for that off the ground. Then step down again because
some other shiny thing will need chasing. Of course, as plans tend to
do, this one will probably pan out a bit differently in reality.
Cc: David Airlie <airlied@linux.ie>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Acked-by: Christian König <christian.koenig@amd.com>
Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Acked-by: Eric Anholt <eric@anholt.net>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Daniel Stone <daniels@collabora.com>
Signed-off-by: Daniel Vetter <daniel@ffwll.ch>
Acked-by: Neil Armstrong <narmstrong@baylibre.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Acked-by: Thomas Hellstrom <thellstrom@vmware.com>
Acked-by: Sean Paul <sean@poorly.run>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181210103001.30549-1-daniel.vetter@ffwll.ch
Since this is not needed any more on the latest SMC firmware.
Signed-off-by: Evan Quan <evan.quan@amd.com>
Acked-by: Feifei Xu <Feifei.Xu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
When support for bonding of RoCE devices was added, there was
necessarily a link between the RoCE device and the paired netdevice that
was part of the bond. If you remove the mlx4_en module, that paired
association is broken (the RoCE device is still present but the paired
netdevice has been released). We need to account for this in
is_upper_ndev_bond_master_filter() and filter out those links with a
broken pairing or else we later oops in netdev_next_upper_dev_rcu().
Fixes: 408f1242d9 ("IB/core: Delete lower netdevice default GID entries in bonding scenario")
Signed-off-by: Mark Zhang <markz@mellanox.com>
Reviewed-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Decoupled version bump from commit f6c367585d ("dm thin: send event
about thin-pool state change _after_ making it") because version bumps
just create conflicts when backporting to the stable trees.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
The return statement is redundant, as there is a return statement
immediately before it, so we have dead code that can be removed.
Also remove the now-unused declaration of ret.
Detected by CoverityScan, CID#1473793 ("Structurally dead code")
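The shape of the dead code, for illustration (not the actual vmwgfx
function; do_work() is a stand-in):

	int example(void)
	{
		int ret;		/* never assigned: remove        */

		return do_work();
		return ret;		/* unreachable: structurally dead */
	}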
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Adding an extra MI_STORE_DWORD_IMM to the gpu relocation path for gen3
was good, but still not good enough. To survive 24+ hours under test we
needed to perform not one, not two but three extra store-dw. Doing so
for each GPU relocation was a little unsightly and since we need to
worry about userspace hitting the same issues, we should apply the dummy
store-dw into the EMIT_FLUSH.
Fixes: 7dd4f6729f ("drm/i915: Async GPU relocation processing")
References: 7fa28e1469 ("drm/i915: Write GPU relocs harder with gen3")
Testcase: igt/gem_tiled_fence_blits # blb/pnv
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181207134037.11848-1-chris@chris-wilson.co.uk
(cherry picked from commit a889580c08)
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Currently we allocate a scratch page for each engine, but since we only
ever write into it for post-sync operations, it is not exposed to
userspace nor do we care for coherency. As we then do not care about its
contents, we can use one page for all, reducing our allocations and
avoid complications by not assuming per-engine isolation.
For later use, it simplifies engine initialisation (by removing the
allocation that required struct_mutex!) and means that we can always rely
on there being a scratch page.
v2: Check that we allocated a large enough scratch for I830 w/a
Fixes: 06e562e7f515 ("drm/i915/ringbuffer: Delay after EMIT_INVALIDATE for gen4/gen5") # v4.18.20
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=108850
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181204141522.13640-1-chris@chris-wilson.co.uk
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: <stable@vger.kernel.org> # v4.18.20+
(cherry picked from commit 5179749925)
[Joonas: Use new function in gen9_init_indirectctx_bb too]
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Braswell is really picky about having our writes posted to memory before
we execute, or else the GPU may see stale values. A wmb() is insufficient
as it only ensures the writes are visible to other cores; we need a full
mb() to ensure the writes are in memory and visible to the GPU.
The most frequent failure in flushing before execution is that we see
stale PTE values and execute the wrong pages.
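A minimal sketch of the distinction (illustrative function, not driver
code):

	static void update_context(u32 *context_image, u32 val)
	{
		context_image[0] = val;	/* store the GPU must observe */

		/*
		 * wmb() would only order this store against other CPU
		 * cores; Braswell needs the write actually posted to
		 * memory before the GPU executes, hence the full mb().
		 */
		mb();
	}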
References: 987abd5c62 ("drm/i915/execlists: Force write serialisation into context image vs execution")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: stable@vger.kernel.org
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181206084431.9805-3-chris@chris-wilson.co.uk
(cherry picked from commit 490b8c65b9)
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Noticed this while working on redoing the reference counting scheme in
the DP MST helpers. Nouveau doesn't attempt to call
drm_dp_mst_topology_mgr_destroy() at all, which leaves it leaking all of
the resources for drm_dp_mst_topology_mgr and its children mstbs+ports.
Fixes: f479c0ba4a ("drm/nouveau/kms/nv50: initial support for DP 1.2 multi-stream")
Signed-off-by: Lyude Paul <lyude@redhat.com>
Cc: <stable@vger.kernel.org> # v4.10+
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Should hopefully fix a regression some people have been seeing since EVO
push buffers were moved to VRAM by default on Pascal GPUs.
Fixes: d00ddd9da ("drm/nouveau/kms/nv50-: allocate push buffers in vidmem on pascal")
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Cc: <stable@vger.kernel.org> # 4.19+
We're missing a deferred clear off the shallow get, which can cause
a hang. Additionally, when we resize the sbitmap, we should also
flush deferred clears for good measure.
Ensure we have full coverage on batch clears, even for paths where
we would not be doing deferred clear. This makes it less error
prone for future additions.
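A conceptual sketch of the batch-clear rule (simplified; find_free_bit()
is a hypothetical helper, map->cleared holds the deferred-freed bits):

	static bool try_get_bit(struct sbitmap_word *map)
	{
		unsigned long cleared;

		if (find_free_bit(map))
			return true;

		/* flush the deferred clears into the live word... */
		cleared = xchg(&map->cleared, 0);
		if (!cleared)
			return false;
		atomic_long_andnot(cleared, (atomic_long_t *)&map->word);

		/* ...and retry once before declaring the word full */
		return find_free_bit(map);
	}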
Reported-by: Bart Van Assche <bvanassche@acm.org>
Tested-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Commit f149b31557 ("signal: Never allocate siginfo for SIGKILL or SIGSTOP")
means that the seccomp selftest cannot check si_pid under SIGSTOP anymore.
Since it's believed[1] there are no other userspace things depending on the
old behavior, this removes the behavioral check in the selftest, since it's
more a "extra" sanity check (which turns out, maybe, not to have been
useful to test).
[1] https://lkml.kernel.org/r/CAGXu5jJaZAOzP1qFz66tYrtbuywqb+UN2SOA1VLHpCCOiYvYeg@mail.gmail.com
Reported-by: Tycho Andersen <tycho@tycho.ws>
Suggested-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Shuah Khan <shuah@kernel.org>
null_blk_zoned creation fails if the number of zones specified is equal
to or smaller than 64, due to a memory allocation failure in
blk_alloc_zones(). With such a small number of zones, the required memory
size for all zone descriptors fits in a single page, and the page order
for alloc_pages_node() is zero. Allow this value in blk_alloc_zones() so
that the allocation succeeds.
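Back-of-envelope check (assuming struct blk_zone is padded to 64 bytes):

	size_t size = nr_zones * sizeof(struct blk_zone); /* 64 * 64 = 4096 */
	int order = get_order(size);                      /* 0: one 4K page */
	/* order 0 must be an accepted input of the allocation path */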
Fixes: bf50545696 ("block: Introduce blk_revalidate_disk_zones()")
Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
When FUSE_OPEN returns ENOSYS, the no_open bit is set on the connection.
Because the FUSE_RELEASE and FUSE_RELEASEDIR paths share code, this
incorrectly caused the FUSE_RELEASEDIR request to be dropped and never sent
to userspace.
Pass an isdir bool to distinguish between FUSE_RELEASE and FUSE_RELEASEDIR
inside of fuse_file_put.
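A simplified sketch of the distinction inside fuse_file_put (hedged:
the request plumbing is reduced to its essence):

	static void fuse_file_put(struct fuse_file *ff, bool sync, bool isdir)
	{
		struct fuse_req *req = ff->reserved_req;

		if (ff->fc->no_open && !isdir) {
			/* FUSE_OPEN returned ENOSYS: there is nothing
			 * on the server side to release */
			fuse_put_request(ff->fc, req);
		} else {
			/* FUSE_RELEASEDIR (and regular FUSE_RELEASE) is
			 * still delivered to userspace */
			fuse_request_send(ff->fc, req);
		}
	}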
Fixes: 7678ac5061 ("fuse: support clients that don't implement 'open'")
Cc: <stable@vger.kernel.org> # v3.14
Signed-off-by: Chad Austin <chadaustin@fb.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Sending a DM event before a thin-pool state change is about to happen is
a bug. It wasn't realized until it became clear that userspace's response
to the event raced with the actual state change that the event was meant
to notify about.
Fix this by first updating the internal thin-pool state to reflect what
the DM event is being issued about. This fixes a long-standing racy/buggy
userspace device-mapper-test-suite 'resize_io' test that would get an
event but not find the state it was looking for -- so it would just go on
to hang, because no other events caused the test to reevaluate the
thin-pool's state.
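The corrected ordering, sketched (simplified from dm-thin's
set_pool_mode; the notify helper's name and signature are illustrative):

	static void set_pool_mode(struct pool *pool, enum pool_mode new_mode)
	{
		enum pool_mode old_mode = get_pool_mode(pool);

		/* 1) make the state change... */
		pool->pf.mode = new_mode;

		/* 2) ...and only then send the event userspace acts on */
		if (old_mode != new_mode)
			notify_of_pool_mode_change(pool);
	}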
Cc: stable@vger.kernel.org
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
When using pblk with 0-sized metadata, both the ppa list and the meta
list point to the same memory, since pblk_dma_meta_size() returns 0 in
that case.
This patch fixes that issue by ensuring that pblk_dma_meta_size() always
returns space equal to sizeof(struct pblk_sec_meta), so that the ppa
list and the meta list point to different memory addresses.
Even though in that case the drive does not really care about the
meta_list pointer, this is the easiest way to fix the issue without
introducing changes in many places in the code just for the 0-sized
metadata case.
The same approach also needs to be taken for pblk_get_sec_meta(), since
we likewise cannot point to the same memory address in the meta buffer
when using it for the pblk recovery process.
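A minimal sketch of that fix, assuming the device's OOB size is cached
as pblk->oob_meta_size (field name illustrative):

	static inline int pblk_dma_meta_size(struct pblk *pblk)
	{
		/* never report zero: reserve at least one pblk_sec_meta
		 * per sector so meta_list gets its own DMA area,
		 * distinct from the PPA list */
		return max_t(int, pblk->oob_meta_size,
			     sizeof(struct pblk_sec_meta)) * NVM_MAX_VLBA;
	}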
Reported-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Tested-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
pblk performs recovery of open lines by storing the LBA in the per-LBA
metadata field. Recovery therefore only works for drives that have this
field.
This patch adds support for packed metadata, which stores the l2p
mapping for open lines in the last sector of every write unit and
enables drives without per-IO metadata to recover open lines.
After this patch, drives with an OOB size <16B will use packed metadata,
while drives with a metadata size larger than 16B will continue to use
the per-IO device metadata.
Reviewed-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Currently pblk only checks the size of the I/O metadata and does not
take into account whether this metadata is in a separate buffer or
interleaved in a single metadata buffer.
In reality only the first scenario is supported; the second mode would
break pblk functionality during any I/O operation.
This patch prevents pblk from being instantiated in case the device
only supports interleaved metadata.
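A hedged sketch of the init-time guard (assuming the geometry reports
interleaved/extended metadata via a flag such as geo->ext):

	if (geo->ext) {
		/* metadata interleaved with the data buffer: unsupported */
		pblk_err(pblk, "extended metadata not supported\n");
		return ERR_PTR(-EINVAL);
	}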
Reviewed-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Currently lightnvm and pblk use a single DMA pool, for which the entry
size is always equal to PAGE_SIZE. The contents of each entry allocated
from the DMA pool consist of a PPA list (8 bytes * 64), leaving
56 bytes * 64 of space for metadata. Since the metadata field can be
bigger, such as 128 bytes, the static size does not cover this use-case.
This patch adds support for I/O metadata above 56 bytes by changing the
DMA pool size based on the device meta size, and allows pblk to use OOB
metadata >=16B.
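The arithmetic, for illustration (geo->sos is lightnvm's per-sector OOB
size and NVM_MAX_VLBA is 64; treat the expression as a sketch):

	/* PPA list: 8 B * 64 = 512 B; a PAGE_SIZE (4K) entry leaves
	 * 4096 - 512 = 3584 B = 56 B * 64 for metadata.  With 128 B
	 * of OOB metadata per sector the entry must instead grow to
	 * 512 + 128 * 64 = 8704 B. */
	size_t entry_size = (sizeof(u64) + geo->sos) * NVM_MAX_VLBA;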
Reviewed-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
pblk currently assumes that the size of the OOB metadata on the drive
is always equal to the size of struct pblk_sec_meta. This commit adds
helpers that will allow handling different sizes of OOB metadata on the
drive in the future.
After this patch, only OOB metadata equal to 16 bytes is supported.
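One possible shape for such a helper (a sketch; pblk_sec_meta_size() is
a hypothetical per-device accessor standing in for whatever the series
introduces):

	static inline struct pblk_sec_meta *pblk_get_meta(struct pblk *pblk,
							  void *meta, int index)
	{
		/* step by the drive's actual OOB stride rather than by
		 * sizeof(struct pblk_sec_meta) */
		return meta + pblk_sec_meta_size(pblk) * index;
	}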
Reviewed-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Currently, DMA-allocated memory is reused on partial read for the
lba_list_mem and lba_list_media arrays. In preparation for dynamic DMA
pool sizes, we need to move these arrays into the pblk_pr_ctx
structures.
Reviewed-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
The current kref implementation around the pblk global caches triggers
a false positive on refcount_inc_checked() (when called), as the kref
is initialized to 0. Instead of using kref_get() on a 0 reference,
which is in principle correct, use kref_init() to avoid the check. This
is also more explicit about what actually happens on cache creation.
In the process, do a small refactoring to use kref helpers.
Fixes: 1864de94ec ("lightnvm: pblk: stop recreating global caches")
Signed-off-by: Javier González <javier@cnexlabs.com>
Reviewed-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Currently the geometry of an OCSSD is enumerated using a two-step
approach:
First, nvm_register is called and the OCSSD identify command is issued;
second, the geometry sos and csecs values are read, either from the
OCSSD identify data if it is a 1.2 drive, or from the NVMe namespace
data structure if it is a 2.0 device.
This patch combines these into a single step, such that nvm_register
can use the csecs and sos fields independent of which version is used.
This enables dynamically sizing the lightnvm subsystem DMA pool.
Reviewed-by: Igor Konopko <igor.j.konopko@intel.com>
Reviewed-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
pblk's recovery path is single threaded and therefore a number of
assumptions regarding concurrency can be made. To avoid confusion, make
this explicit with a couple of comments in the code.
Signed-off-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Protect the list_add on the pblk_line_init_bb() error
path in case this code is used for some other purpose
in the future.
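For instance, the error-path insertion might be wrapped as follows
(sketch; assuming l_mg->free_lock is the lock covering the free list):

	spin_lock(&l_mg->free_lock);
	list_add(&line->list, &l_mg->free_list);
	spin_unlock(&l_mg->free_lock);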
Signed-off-by: Hua Su <suhua.tanke@gmail.com>
Reviewed-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Remove the call to pblk_line_replace_data as it returns
directly because we have not set l_mg->data_next yet.
Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
The chunk metadata is allocated with vmalloc, so we need to use
vfree to free it.
Fixes: 090ee26fd5 ("lightnvm: use internal allocation for chunk log page")
Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
ADDR_POOL_SIZE is not used anymore, so remove the macro.
Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
In a worst-case scenario (random writes), OP% of the sectors in each
line will be invalid, and we will then need to move data out of 100/OP%
lines to free a single line.
So, to prevent the possibility of running out of lines, temporarily
block user writes when fewer than 100/OP% free lines remain.
Also ensure that pblk creation does not produce instances with
insufficient over-provisioning.
Insufficient over-provisioning is not a problem on real hardware, but
it is often an issue when running QEMU simulations (with few lines).
100 lines is enough to create a sane instance with the standard (11%)
over-provisioning.
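Worked example (illustrative; op_pct stands for the configured OP
percentage):

	/* With the standard OP = 11%, freeing one line can require
	 * moving valid data out of 100 / 11 ~= 9 other lines, so user
	 * writes are blocked once fewer than ~9 free lines remain. */
	unsigned int min_free_lines = DIV_ROUND_UP(100, op_pct);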
Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
If mapping fails (i.e. when running out of lines), handle the error
and stop writing.
Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Lines inflicted with write errors might be recovered if they have not
been recycled after write error garbage collection.
Ensure that the emeta accounting of valid lbas is correct
for such lines to avoid recovery inconsistencies.
Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Make sure we only look up valid lba addresses on the resubmission path.
If an lba is invalidated in the write buffer, that sector will be
submitted to disk (as it is already mapped to a ppa), and that write
might fail, resulting in a crash when trying to look up the lba in the
mapping table (as the lba is marked as invalid).
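Sketch of the guard on that path (hedged: the write-context walk is
simplified; ADDR_EMPTY is pblk's invalid-lba sentinel):

	u64 lba = w_ctx->lba;		/* per-entry write buffer context */

	if (lba == ADDR_EMPTY)		/* invalidated while buffered */
		continue;		/* do not look it up in the L2P */

	pblk_update_map(pblk, lba, ppa);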
Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>