amdgpu is MIT licensed.
Fixes: ec8f24b7fa ("treewide: Add SPDX license identifier - Makefile/Kconfig")
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Add KASAN_VMALLOC support, which enables access checks for the vmalloc
memory area as well as the use of VMAP_STACK under KASAN.
KASAN_VMALLOC changes the way shadow memory for the vmalloc and modules
areas is handled. With this new approach only the top-level page tables
are pre-populated and the lower levels are filled dynamically upon memory
allocation.
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
diag 0x44 is a voluntary undirected yield of a virtual CPU. This has
caused a lot of performance issues in the past.
There is only one caller left, and that one is only executed if diag
0x9c (directed yield) is not present. Given that all hypervisors
implement diag 0x9c anyway, remove the last diag 0x44 call so that no
more callers will be added.
The worst that can happen now, if diag 0x9c is not present, is that a
virtual CPU loops a bit instead of giving up its time slice.
The diag 0x44 statistics in debugfs are kept and will always be zero, so
that user space can tell that there are no calls.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
ENOTSUP is just an internal kernel error and should never reach
userspace. The return value of the share function is not exported to
userspace, but to avoid giving bad examples let us use EOPNOTSUPP.
Suggested-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Janosch Frank <frankja@linux.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
The s390 CPU Measurement sampling facility has an overflow condition
which fires when all entries in an SDB are used.
The measurement alert interrupt is triggered and reads out all samples
in this SDB. It then tests the successor SDB; if that SDB is not full,
the interrupt handler does not read any samples from it at all.
The design waits for the hardware to fill that SDB and then trigger
another measurement alert interrupt.
This scheme works nicely until
a perf_event_overflow() function call discards a sample due to
a too high sampling rate.
The interrupt handler has logic to read out a partially filled SDB
when the perf event overflow condition in Linux common code is met.
This causes the CPUM sampling measurement hardware and the PMU
device driver to operate on the same SDB's trailer entry.
This should not happen.
This can be seen in the following trace:
cpumsf_pmu_add: tear:0xb5286000
hw_perf_event_update: sdbt 0xb5286000 full 1 over 0 flush_all:0
hw_perf_event_update: sdbt 0xb5286008 full 0 over 0 flush_all:0
The above shows the first interrupt.
hw_perf_event_update: sdbt 0xb5286008 full 1 over 0 flush_all:0
hw_perf_event_update: sdbt 0xb5286008 full 0 over 0 flush_all:0
The above shows the second interrupt.
... this goes on fine until...
hw_perf_event_update: sdbt 0xb5286068 full 1 over 0 flush_all:0
perf_push_sample1: overflow
Here one or more samples read by the IRQ handler are rejected by
perf_event_overflow(), and the IRQ handler advances to the next SDB
and modifies the trailer entry of a partially filled SDB.
hw_perf_event_update: sdbt 0xb5286070 full 0 over 0 flush_all:1
timestamp: 14:32:52.519953
The next time the IRQ handler is called for this SDB, the trailer entry
shows an overflow count of 19 missed entries.
hw_perf_event_update: sdbt 0xb5286070 full 1 over 19 flush_all:1
timestamp: 14:32:52.970058
Remove the access to a follow-on SDB when an event overflow happened.
Signed-off-by: Thomas Richter <tmricht@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
The functions perf_event_ever_overflow() and perf_event_account_interrupt()
are called every time samples are processed by the interrupt handler.
However, perf_event_account_interrupt() has checks to avoid being
flooded with interrupts (more than 1000 samples received per
task_tick). Samples are then dropped and a PERF_RECORD_THROTTLE event is
added to the perf data. The perf subsystem limit calculation is:
  maximum sample frequency := 100000  --> 1 sample per 10 us
  task_tick = 10ms = 10000 us         --> 1000 samples per task_tick
The workflow is: measurement_alert() uses the SDBT head, and each SDBT
points to 511 SDB pages, each with 126 sample entries. After processing
8 SDBs and calling for each valid sample
  perf_event_overflow()
  perf_event_account_interrupt()
a considerable number of samples is dropped, especially when
the sample frequency is very high and near the 100000 limit.
To avoid a large number of samples being dropped near the end of a
task_tick time frame, increment the sampling interval in case of
dropped events. The CPU Measurement sampling facility on s390
supports only intervals, specifying how many CPU cycles have to be
executed before a sample is generated. Increase the interval when the
samples being generated hit the task_tick limit.
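For illustration, the adjustment amounts to something like the following
sketch (the helper name and the doubling policy are illustrative, not the
exact driver code):

    static unsigned long sfb_account_overflow(unsigned long interval,
                                              bool samples_dropped)
    {
            /* Widen the hardware sampling interval (counted in CPU cycles)
             * when perf starts dropping samples, so that fewer samples are
             * generated per task_tick.
             */
            if (samples_dropped)
                    interval *= 2;
            return interval;
    }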
Signed-off-by: Thomas Richter <tmricht@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
For super_90_load, we need to make sure 'desc_nr' is less
than MD_SB_DISKS, avoiding invalid memory access of 'sb->disks'.
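A minimal sketch of the intended check (illustrative; field names follow
the md v0.90 superblock layout):

    desc_nr = sb->this_disk.number;
    if (desc_nr < 0 || desc_nr >= MD_SB_DISKS)
            goto abort;     /* would otherwise index past sb->disks[] */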
Fixes: 228fc7d76d ("md: avoid invalid memory access for array sb->dev_roles")
Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
In the raid1_sync_request() function, rdev should be checked before it is dereferenced.
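For example, the usual pattern is to test the pointer first (a sketch only,
not the literal raid1 code):

    rdev = rcu_dereference(conf->mirrors[i].rdev);
    if (rdev == NULL || test_bit(Faulty, &rdev->flags))
            still_degraded = 1;  /* rdev only dereferenced after the NULL check */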
Signed-off-by: Zhiqiang Liu <liuzhiqiang26@huawei.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
With commit 6ce220dd2f ("raid5: don't set
STRIPE_HANDLE to stripe which is in batch list"), we don't want to set the
STRIPE_HANDLE flag for a stripe which is already in a batch list.
However, the stripe which is the head of the batch list should still set this
flag, otherwise a panic could happen inside init_stripe at BUG_ON(sh->batch_head);
it is reproducible with raid5 on top of nvdimm devices, as Xiao observed.
Thanks to Xiao for the effort to verify the change.
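The intended condition can be sketched roughly as (illustrative):

    if (!sh->batch_head || sh->batch_head == sh)
            set_bit(STRIPE_HANDLE, &sh->state);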
Fixes: 6ce220dd2f ("raid5: don't set STRIPE_HANDLE to stripe which is in batch list")
Reported-by: Xiao Ni <xni@redhat.com>
Tested-by: Xiao Ni <xni@redhat.com>
Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Show the name of each volume in /proc/net/afs/<cell>/volumes to make it
easier to work out which name corresponds to which volume ID, and hence
which mounts in /proc/mounts correspond to which volume.
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Marc Dionne <marc.dionne@auristor.com>
Fix the missing cell comparison in afs_test_super(). Without this, any pair
of volumes that have the same volume ID will share a superblock, no matter
the cell, unless they're in different network namespaces.
Normally, most users will only deal with a single cell and so they won't
see this. Even if they do look into a second cell, they won't see a
problem unless they happen to hit a volume with the same ID as one they've
already got mounted.
Before the patch:
# ls /afs/grand.central.org/archive
linuxdev/ mailman/ moin/ mysql/ pipermail/ stage/ twiki/
# ls /afs/kth.se/
linuxdev/ mailman/ moin/ mysql/ pipermail/ stage/ twiki/
# cat /proc/mounts | grep afs
none /afs afs rw,relatime,dyn,autocell 0 0
#grand.central.org:root.cell /afs/grand.central.org afs ro,relatime 0 0
#grand.central.org:root.archive /afs/grand.central.org/archive afs ro,relatime 0 0
#grand.central.org:root.archive /afs/kth.se afs ro,relatime 0 0
After the patch:
# ls /afs/grand.central.org/archive
linuxdev/ mailman/ moin/ mysql/ pipermail/ stage/ twiki/
# ls /afs/kth.se/
admin/ common/ install/ OldFiles/ service/ system/
bakrestores/ home/ misc/ pkg/ src/ wsadmin/
# cat /proc/mounts | grep afs
none /afs afs rw,relatime,dyn,autocell 0 0
#grand.central.org:root.cell /afs/grand.central.org afs ro,relatime 0 0
#grand.central.org:root.archive /afs/grand.central.org/archive afs ro,relatime 0 0
#kth.se:root.cell /afs/kth.se afs ro,relatime 0 0
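A minimal sketch of the missing comparison (member names follow fs/afs, but
treat this as illustrative rather than the literal patch):

    static int afs_test_super(struct super_block *sb, struct fs_context *fc)
    {
            struct afs_super_info *as1 = fc->s_fs_info;
            struct afs_super_info *as  = AFS_FS_S(sb);

            return (as->net_ns == fc->net_ns &&
                    as->volume &&
                    as->volume->vid == as1->volume->vid &&
                    as->volume->cell == as1->volume->cell &&  /* the missing check */
                    !as->dyn_root);
    }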
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Reported-by: Carsten Jacobi <jacobi@de.ibm.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Marc Dionne <marc.dionne@auristor.com>
Tested-by: Jonathan Billings <jsbillings@jsbillings.org>
cc: Todd DeSantis <atd@us.ibm.com>
Fix the lookup method on the dynamic root directory such that creation
calls, such as mkdir, open(O_CREAT), symlink, etc., fail with EOPNOTSUPP
rather than with some odd error (such as EEXIST).
lookup() itself tries to create automount directories when it is invoked.
These are cached locally in RAM and not committed to storage.
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Marc Dionne <marc.dionne@auristor.com>
Tested-by: Jonathan Billings <jsbillings@jsbillings.org>
On an old perl such as v5.10.1, `kselftest/prefix.pl` gives the error
message below:
Can't locate object method "autoflush" via package "IO::Handle" at kselftest/prefix.pl line 10.
This commit fixes the error by explicitly specifying the use of the
`IO::Handle` package.
Signed-off-by: SeongJae Park <sjpark@amazon.de>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
If a timeout failure occurs, kselftest kills the test process and prints
the timeout log. If the test process is killed in the middle of printing a
log line (before the terminating newline is written), the timeout log can be
printed in the middle of the test process output so that it looks like a
comment, as below:
# test_process_log not ok 3 selftests: timers: nsleep-lat # TIMEOUT
This commit avoids the problem by printing one more line before the
TIMEOUT failure log.
Signed-off-by: SeongJae Park <sjpark@amazon.de>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
Commit c78fd76f2b ("selftests: Move kselftest_module.sh into
kselftest/") moved kselftest_module.sh but missed updating a few
references to the path in documentation.
Fixes: c78fd76f2b ("selftests: Move kselftest_module.sh into kselftest/")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
Each AFS mountpoint has a string that defines the target to be mounted. This
is required to end in a dot that is supposed to be stripped off. The
string can include a ".readonly" or ".backup" suffix - which is
supposed to come before the terminal dot. To add to the confusion, the "fs
lsmount" afs utility does not show the terminal dot when displaying the
string.
The kernel mount source string parser, however, assumes that the terminal
dot marks the suffix and that the suffix is always "" and is thus ignored.
In most cases, there is no suffix and this is not a problem - but if there
is a suffix, it is lost and this affects the ability to mount the correct
volume.
The command line mount command, on the other hand, is expected not to
include a terminal dot - so the problem doesn't arise there.
Fix this by making sure that the dot exists and then stripping it when
passing the string to the mount configuration.
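For instance, a sketch of the approach (vfs_parse_fs_string() is the generic
fs_context helper; the surrounding names are illustrative):

    if (size < 2 || content[size - 1] != '.')
            return -EBADMSG;  /* mountpoint string must end in a dot */

    /* Hand over the string without the terminal dot, preserving any
     * ".readonly"/".backup" suffix.
     */
    ret = vfs_parse_fs_string(fc, "source", content, size - 1);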
Fixes: bec5eb6141 ("AFS: Implement an autocell mount capability [ver #2]")
Reported-by: Jonathan Billings <jsbillings@jsbillings.org>
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Marc Dionne <marc.dionne@auristor.com>
Tested-by: Jonathan Billings <jsbillings@jsbillings.org>
After the original patch was merged, there were additional comments/requests
from Rob Herring [1]. They are mostly related to json-schema usage,
and this patch addresses them. The SPDX-License-Identifier has also been
changed to (GPL-2.0-only OR BSD-2-Clause) as requested.
[1] https://lkml.org/lkml/2019/11/21/875
Fixes: ef63fe72f6 ("dt-bindings: net: ti: add new cpsw switch driver bindings")
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
[robh: Remove 2 more maxItems that aren't necessary]
Signed-off-by: Rob Herring <robh@kernel.org>
If the optional wdg interrupt is defined, then this property
may be defined.
Signed-off-by: Arnaud Pouliquen <arnaud.pouliquen@st.com>
Signed-off-by: Rob Herring <robh@kernel.org>
spin_unlock_irqrestore() might be called with stale flags after
reading the port status, possibly restoring interrupts to an incorrect
state.
If a usb2 port has just finished resuming while the port status is read,
the spin lock will be temporarily released and re-acquired in a separate
function. The flags parameter is passed by value instead of by pointer,
so the flags are not properly updated before the final
spin_unlock_irqrestore() is called.
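The fix can be sketched as passing the flags by reference so that the
helper's unlock/relock is reflected in the caller's final restore (names
are illustrative):

    static void xhci_port_resume_done(struct xhci_hcd *xhci,
                                      unsigned long *flags)
    {
            spin_unlock_irqrestore(&xhci->lock, *flags);
            msleep(USB_RESUME_TIMEOUT);
            spin_lock_irqsave(&xhci->lock, *flags);
    }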
Cc: <stable@vger.kernel.org> # v3.12+
Fixes: 8b3d45705e ("usb: Fix xHCI host issues on remote wakeup.")
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20191211142007.8847-7-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The xhci driver claims it needs the XHCI_TRUST_TX_LENGTH quirk for both
Broadcom/Cavium and Renesas xHC controllers.
The quirk was intended for handling false "success" completion events for
transfers that had data left untransferred.
These transfers should complete with "short packet" events instead.
In these two new cases the false "success" completion is reported
after a "short packet" if the TD consists of several TRBs.
xHCI specs 4.10.1.1.2 say remaining TRBs should report "short packet"
as well after the first short packet in a TD, but this issue seems so
common it doesn't make sense to add the quirk for all vendors.
Turn these events into short packets automatically instead.
This gets rid of the "WARN Successful completion on short TX for
slot 1 ep 1: needs XHCI_TRUST_TX_LENGTH quirk" warning in many cases.
Cc: <stable@vger.kernel.org>
Reported-by: Eli Billauer <eli.billauer@gmail.com>
Reported-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Eli Billauer <eli.billauer@gmail.com>
Tested-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20191211142007.8847-6-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The xhci driver cannot call pci_set_power_state() on non-PCI xhci host
controllers. For example, the NVIDIA Tegra XHCI host controller, which acts
as a platform device with the XHCI_SPURIOUS_WAKEUP quirk set on some
platforms, hits this issue during shutdown.
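Conceptually the shutdown path needs a guard along these lines (a sketch,
not the exact patch):

    /* Only PCI hosts may have their PCI power state forced here. */
    if ((xhci->quirks & XHCI_SPURIOUS_WAKEUP) &&
        dev_is_pci(hcd->self.sysdev))
            pci_set_power_state(to_pci_dev(hcd->self.sysdev), PCI_D3hot);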
Cc: <stable@vger.kernel.org>
Fixes: 638298dc66 ("xhci: Fix spurious wakeups after S5 on Haswell")
Signed-off-by: Henry Lin <henryl@nvidia.com>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20191211142007.8847-4-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
A race in xhci USB3 remote wake handling may force a device back to suspend
after it initiated resume signaling, causing a missed resume event or a warm
reset of the device.
When a USB3 link completes resume signaling and goes to the enabled (U0)
state, an interrupt is issued and the interrupt handler will clear the
bus_state->port_remote_wakeup resume flag, allowing bus suspend.
If the USB3 roothub thread just finished reading the port status before
the interrupt, finding ports still in the suspended (U3) state, but hasn't
yet started suspending the hub, then the xhci interrupt handler will clear
the flag that prevented roothub suspend and allow the bus to suspend, forcing
all port links back to the suspended (U3) state.
Example case:
  usb_runtime_suspend()            # because all ports still show suspended U3
   usb_suspend_both()
    hub_suspend();                 # successful as hub->wakeup_bits not set yet
  ==> INTERRUPT
   xhci_irq()
    handle_port_status()
     clear bus_state->port_remote_wakeup
     usb_wakeup_notification()
      sets hub->wakeup_bits;
      kick_hub_wq()
  <== END INTERRUPT
    hcd_bus_suspend()
     xhci_bus_suspend()            # success as port_remote_wakeup bits cleared
Fix this by increasing the roothub usage count during port resume to prevent
roothub autosuspend, and by making sure the bus_state->port_remote_wakeup
flag is only cleared after resume completion is visible, i.e.
after the xhci roothub has returned U0 or another non-U3 link state on a
get port status request.
Issue root-caused by Chiasheng Lee.
Cc: <stable@vger.kernel.org>
Cc: Lee, Hou-hsun <hou-hsun.lee@intel.com>
Reported-by: Lee, Chiasheng <chiasheng.lee@intel.com>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20191211142007.8847-3-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Merge tag 'fixes-for-v5.5-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/balbi/usb into usb-linus
Felipe writes:
USB: fixes for v5.5-rc2
Only four patches here this time around. Three of them are on dwc3
fixing some small bugs related to our 'started' flag.
None are major fixes but they're important nevertheless.
* tag 'fixes-for-v5.5-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/balbi/usb:
usb: gadget: fix wrong endpoint desc
usb: dwc3: ep0: Clear started flag on completion
usb: dwc3: gadget: Clear started flag for non-IOC
usb: dwc3: gadget: Fix logical condition
Since retirement may be running in a worker on another CPU, it may be
skipped in the local intel_gt_wait_for_idle(). To ensure the state is
consistent for our sanity checks upon load, serialise with the remote
retirer by waiting on the timeline->mutex.
Outside of this use case, e.g. on suspend or module unload, we expect the
slack to be picked up by intel_gt_pm_wait_for_idle() and so prefer to
put the special case serialisation with retirement in its single user,
for now at least.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191121071044.97798-2-chris@chris-wilson.co.uk
(cherry picked from commit 2d0fb25136)
Fixes: 093b922873 ("drm/i915: Split i915_active.mutex into an irq-safe spinlock for the rbtree")
Closes: https://gitlab.freedesktop.org/drm/intel/issues/754
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
We managed to get confused about the shift direction at least once.
Let's switch to division/multiplication instead. Add a number-of-pages
macro for this purpose. We still keep the order macro around too, since
that is what the page alloc/free functions want.
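For example, the relationship between the macros could look like this
(names are illustrative of the scheme, not necessarily the final ones):

    #define VIRTIO_BALLOON_HINT_BLOCK_ORDER  (MAX_ORDER - 1)
    #define VIRTIO_BALLOON_HINT_BLOCK_PAGES  (1 << VIRTIO_BALLOON_HINT_BLOCK_ORDER)

    /* size arithmetic now uses multiplication/division ... */
    unsigned long hint_bytes = VIRTIO_BALLOON_HINT_BLOCK_PAGES * PAGE_SIZE;
    /* ... while the page allocator still takes the order */
    struct page *page = alloc_pages(GFP_KERNEL, VIRTIO_BALLOON_HINT_BLOCK_ORDER);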
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Wei Wang <wei.w.wang@intel.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
free_page_order is a confusing name. It's not actually a page order;
it's the order of the block of memory we are hinting about.
Rename it to hint_block_order. Also, rename SIZE to BYTES
to make it clear it's the block size in bytes.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Wei Wang <wei.w.wang@intel.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Fix commit 7b81cb6bdd ("usb: add a HCD_DMA flag instead of
guestimating DMA capabilities") where local memory USB drivers
erroneously allocate DMA memory instead of pool memory, causing
OHCI Unrecoverable Error, disabled
HC died; cleaning up
The order between hcd_uses_dma() and hcd->localmem_pool is now
arranged as in hcd_buffer_alloc() and hcd_buffer_free(), with the
test for hcd->localmem_pool placed first.
As an alternative, one might consider adjusting hcd_uses_dma() with
  static inline bool hcd_uses_dma(struct usb_hcd *hcd)
  {
 -	return IS_ENABLED(CONFIG_HAS_DMA) && (hcd->driver->flags & HCD_DMA);
 +	return IS_ENABLED(CONFIG_HAS_DMA) &&
 +		(hcd->driver->flags & HCD_DMA) &&
 +		(hcd->localmem_pool == NULL);
  }
One can also consider unsetting HCD_DMA for local memory pool drivers.
Fixes: 7b81cb6bdd ("usb: add a HCD_DMA flag instead of guestimating DMA capabilities")
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Fredrik Noring <noring@nocrew.org>
Link: https://lore.kernel.org/r/20191210172905.GA52526@sx9
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
As a preparation for an API conversion, factor out something frequently
used in the media subsystem. As an improvement, it bails out on both
NULL and ERR_PTR values, to handle the old and the new API.
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
After applying the fixup ALC274_FIXUP_DELL_AIO_LINEOUT_VERB, the
Line-out jack works well. Instead of adding a new set of pin
definitions in the pin_fixup_tbl, we put a more generic matching entry
in the fallback_pin_fixup_tbl.
Cc: <stable@vger.kernel.org>
Signed-off-by: Hui Wang <hui.wang@canonical.com>
Link: https://lore.kernel.org/r/20191211051321.5883-1-hui.wang@canonical.com
Signed-off-by: Takashi Iwai <tiwai@suse.de>
In chasing a performance issue between using IORING_OP_RECVMSG and
IORING_OP_READV on sockets, tracing showed that we always punt the
socket reads to async offload. This is due to io_file_supports_async()
not checking for S_ISSOCK on the inode. Since sockets support the
O_NONBLOCK (or MSG_DONTWAIT) flag just fine, add sockets to the list
of file types that we can do a non-blocking issue to.
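Conceptually (a sketch, not the literal io_uring code):

    static bool io_file_supports_async(struct file *file)
    {
            umode_t mode = file_inode(file)->i_mode;

            return S_ISBLK(mode) || S_ISCHR(mode) ||
                   S_ISREG(mode) || S_ISSOCK(mode);
    }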
Signed-off-by: Jens Axboe <axboe@kernel.dk>
The socket read/write helpers only look at the file's O_NONBLOCK flag, not
the iocb IOCB_NOWAIT flag. This breaks users like preadv2/pwritev2
and io_uring that rely on not having the file itself marked nonblocking,
but rather the iocb itself.
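The effect is roughly the following (an illustrative helper, not an
existing kernel function):

    static bool sock_rw_may_block(struct file *file, struct kiocb *iocb)
    {
            /* Respect the per-iocb IOCB_NOWAIT flag in addition to the
             * file-wide O_NONBLOCK flag.
             */
            if (file->f_flags & O_NONBLOCK)
                    return false;
            return !(iocb->ki_flags & IOCB_NOWAIT);
    }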
Cc: netdev@vger.kernel.org
Acked-by: David Miller <davem@davemloft.net>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
We hash regular files to avoid having multiple threads hammer on the
inode mutex, but this is not needed for other types of files
(like sockets).
Signed-off-by: Jens Axboe <axboe@kernel.dk>
One major use case of linked commands is the ability to run the next
link inline, if at all possible. This is done correctly for async
offload, but somewhere along the line we lost the ability to do so when
we were able to complete a request without having to punt it. Ensure
that we do so correctly.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
This essentially reverts commit e944475e69. For high poll ops
workloads, like TAO, the dynamic allocation of the wait_queue
entry for IORING_OP_POLL_ADD adds considerable extra overhead.
Go back to embedding the wait_queue_entry, but keep the usage of
wait->private for the pointer stashing.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Don't just assign it from the main call path; that can miss the case
when we're called from issue deferral.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
We use the mutex to guard against registered file updates, for instance.
Ensure we're safe in accessing that state against concurrent updates.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
To avoid going to sleep only to get woken shortly thereafter, spin
briefly for new work upon completion of work.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
We only have one case of using the waitqueue to wake the worker, the
rest are using wake_up_process(). Since we can save some cycles by not
fiddling with the waitqueue in io_wqe_worker(), switch the work activation
to task wakeup and get rid of the now unused wait_queue_head_t in
struct io_worker.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Some commands will invariably end in a failure in the sense that the
completion result will be less than zero. One such example is timeouts
that don't have a completion count set; they will always complete with
-ETIME unless cancelled.
For linked commands, we sever links and fail the rest of the chain if
the result is less than zero. Since we have commands where we know that
will happen, add IOSQE_IO_HARDLINK as a stronger link that doesn't sever
regardless of the completion result. Note that the link will still sever
if we fail submitting the parent request, hard links are only resilient
in the presence of completion results for requests that did submit
correctly.
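From userspace the flag is used like any other sqe flag, e.g. (a
liburing-style sketch, assuming the usual helpers):

    struct io_uring_sqe *sqe;

    sqe = io_uring_get_sqe(&ring);
    io_uring_prep_timeout(sqe, &ts, 0, 0);
    sqe->flags |= IOSQE_IO_HARDLINK;  /* expected -ETIME must not sever the link */

    sqe = io_uring_get_sqe(&ring);
    io_uring_prep_nop(sqe);           /* still runs after the timeout "fails" */

    io_uring_submit(&ring);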
Cc: stable@vger.kernel.org # v5.4
Reviewed-by: Pavel Begunkov <asml.silence@gmail.com>
Reported-by: 李通洲 <carter.li@eoitek.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
It turns out that cpuidle_driver_state_disabled() can be called
before registering the cpuidle driver on some platforms, which
was not expected when it was introduced and which leads to a NULL
pointer dereference when trying to walk the CPUs associated with
the given cpuidle driver.
Fix the problem by making cpuidle_driver_state_disabled() check whether
the driver's mask of associated CPUs is present. If it is not, set
CPUIDLE_FLAG_UNUSABLE for the given idle state in the driver's states
list, which causes __cpuidle_register_device() to set
CPUIDLE_STATE_DISABLED_BY_DRIVER for that state for all cpuidle
devices registered by it later.
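A sketch of the guard described above (illustrative):

    if (!drv->cpumask) {
            /* Driver not registered yet: just record the disable in the state. */
            drv->states[idx].flags |= CPUIDLE_FLAG_UNUSABLE;
            return;
    }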
Fixes: cbda56d5fe ("cpuidle: Introduce cpuidle_driver_state_disabled() for driver quirks")
Reported-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Tested-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Calling kzalloc() and related functions requires the
linux/slab.h header to be included:
drivers/gpu/drm/amd/amdgpu/../display/dc/dcn21/dcn21_resource.c: In function 'dcn21_ipp_create':
drivers/gpu/drm/amd/amdgpu/../display/dc/dcn21/dcn21_resource.c:679:3: error: implicit declaration of function 'kzalloc'; did you mean 'd_alloc'? [-Werror=implicit-function-declaration]
kzalloc(sizeof(struct dcn10_ipp), GFP_KERNEL);
Many other headers are also missing a direct include in this file,
but this is the only one that causes a problem for now.
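The fix is simply to include the header directly in dcn21_resource.c:

    #include <linux/slab.h>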
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>