io_free_req_many() is used only for iopoll requests, i.e. reads/writes.
Hence no need to batch inflight unhooking. For safety, it'll be done by
io_dismantle_req(), which replaces __io_req_aux_free(), and looks more
solid and cleaner.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
We won't have a valid ring_fd or ring_file in task work. Grab the files early.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
No reason to mark the head of a link as for-async in
io_req_defer_prep() (grab_env(), etc.). That will be done later during
submission if necessary.
Mark for_async=false, saving an extra grab_env() in many cases.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
It's not enough to check for REQ_F_WORK_INITIALIZED and punt async
assuming that io_req_work_grab_env() was called; it may not have been.
E.g. io_close_prep() and the personality path set the flag without
further async init.
As a quick fix, always pass next work through io_req_task_queue().
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Fix build errors when CONFIG_NET is not set/enabled:
../fs/io_uring.c:5472:10: error: too many arguments to function ‘io_sendmsg’
../fs/io_uring.c:5474:10: error: too many arguments to function ‘io_send’
../fs/io_uring.c:5484:10: error: too many arguments to function ‘io_recvmsg’
../fs/io_uring.c:5486:10: error: too many arguments to function ‘io_recv’
../fs/io_uring.c:5510:9: error: too many arguments to function ‘io_accept’
../fs/io_uring.c:5518:9: error: too many arguments to function ‘io_connect’
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: io-uring@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Merge in changes that went into 5.8-rc3. git will silently do the
merge, but we still need a tweak on top of that, since
io_complete_rw_common() was modified to take an io_comp_state pointer.
The auto-merge doesn't catch that, and we end up with something that
doesn't compile.
* io_uring-5.8:
io_uring: fix current->mm NULL dereference on exit
io_uring: fix hanging iopoll in case of -EAGAIN
io_uring: fix io_sq_thread no schedule when busy
Signed-off-by: Jens Axboe <axboe@kernel.dk>
It's easier to return next work from ->do_work() than
having an in-out argument. Looks nicer and easier to compile.
Also, merge io_wq_assign_next() into its only user.
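A rough sketch of the interface change (illustrative only; bodies elided):

	/* before: next work came back through an in-out argument */
	static void io_wq_submit_work(struct io_wq_work **workptr);

	/* after: ->do_work() returns the next (e.g. linked) work item
	 * directly, or NULL when there is none */
	static struct io_wq_work *io_wq_submit_work(struct io_wq_work *work);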
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Currently links are always done in an async fashion, unless we catch them
inline after we successfully complete a request without having to resort
to blocking. This isn't necessarily the most efficient approach, it'd be
more ideal if we could just use the task_work handling for this.
Outside of saving an async jump, we can also do less prep work for these
kinds of requests.
Running dependent links from the task_work handler yields some nice
performance benefits. As an example, examples/link-cp from the liburing
repository uses read+write links to implement a copy operation. Without
this patch, a cache cold 4G file read from a VM runs in about 3
seconds:
$ time examples/link-cp /data/file /dev/null
real 0m2.986s
user 0m0.051s
sys 0m2.843s
and a subsequent cache hot run looks like this:
$ time examples/link-cp /data/file /dev/null
real 0m0.898s
user 0m0.069s
sys 0m0.797s
With this patch in place, the cold case takes about 2.4 seconds:
$ time examples/link-cp /data/file /dev/null
real 0m2.400s
user 0m0.020s
sys 0m2.366s
and the cache hot case looks like this:
$ time examples/link-cp /data/file /dev/null
real 0m0.676s
user 0m0.010s
sys 0m0.665s
As expected, the (mostly) cache hot case yields the biggest improvement,
running about 25% faster with this change, while the cache cold case
yields about a 20% increase in performance. Outside of the performance
increase, we're using less CPU as well, as we're not using the async
offload threads at all for this anymore.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
A bit more surgery is required here, as completions are generally done
through the kiocb->ki_complete() callback, even when they complete inline.
This enables the regular read/write path to use the io_comp_state
logic to batch inline completions.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Provide the completion state to the handlers that we know can complete
inline, so they can utilize this for batching completions.
Cap the max batch count at 32. This should be enough to provide a good
amortization of the cost of the lock+commit dance for completions, while
still being low enough not to cause any real latency issues for SQPOLL
applications.
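As a sketch of the batching rule described above (function and field
names here are illustrative, not necessarily the ones in the patch):

	/* hypothetical sketch: queue completions, flush at the cap of 32 */
	static void io_complete_batched(struct io_comp_state *cs,
					struct io_kiocb *req)
	{
		list_add_tail(&req->list, &cs->list);
		if (++cs->nr >= 32)
			io_submit_flush_completions(cs); /* lock, fill CQEs, commit once */
	}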
Xuan Zhuo <xuanzhuo@linux.alibaba.com> reports that this changes his
profile from:
17.97% [kernel] [k] copy_user_generic_unrolled
13.92% [kernel] [k] io_commit_cqring
11.04% [kernel] [k] __io_cqring_fill_event
10.33% [kernel] [k] udp_recvmsg
5.94% [kernel] [k] skb_release_data
4.31% [kernel] [k] udp_rmem_release
2.68% [kernel] [k] __check_object_size
2.24% [kernel] [k] __slab_free
2.22% [kernel] [k] _raw_spin_lock_bh
2.21% [kernel] [k] kmem_cache_free
2.13% [kernel] [k] free_pcppages_bulk
1.83% [kernel] [k] io_submit_sqes
1.38% [kernel] [k] page_frag_free
1.31% [kernel] [k] inet_recvmsg
to
19.99% [kernel] [k] copy_user_generic_unrolled
11.63% [kernel] [k] skb_release_data
9.36% [kernel] [k] udp_rmem_release
8.64% [kernel] [k] udp_recvmsg
6.21% [kernel] [k] __slab_free
4.39% [kernel] [k] __check_object_size
3.64% [kernel] [k] free_pcppages_bulk
2.41% [kernel] [k] kmem_cache_free
2.00% [kernel] [k] io_submit_sqes
1.95% [kernel] [k] page_frag_free
1.54% [kernel] [k] io_put_req
[...]
0.07% [kernel] [k] io_commit_cqring
0.44% [kernel] [k] __io_cqring_fill_event
Signed-off-by: Jens Axboe <axboe@kernel.dk>
No functional changes in this patch, just in preparation for having the
completion state be available on the issue side. Later on, this will
allow requests that complete inline to be completed in batches.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
No functional changes in this patch, just in preparation for passing back
pending completions to the caller and completing them in a batched
fashion.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
We have lots of callers of:
io_cqring_add_event(req, result);
io_put_req(req);
Provide a helper that does this for us. It helps clean up the code, and
also provides a more convenient location for us to change the completion
handling.
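The helper boils down to something like this (a sketch; the name here
is illustrative):

	static void io_req_complete(struct io_kiocb *req, long res)
	{
		io_cqring_add_event(req, res);
		io_put_req(req);
	}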
Signed-off-by: Jens Axboe <axboe@kernel.dk>
__io_queue_sqe() tries to handle all requests of a link, so it's not
enough to grab the mm in io_sq_thread_acquire_mm() based just on the
head request.
Don't check req->needs_mm; acquire the mm unconditionally.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
io_do_iopoll() won't do anything with a request unless
req->iopoll_completed is set. So io_complete_rw_iopoll() has to set
it, otherwise io_do_iopoll() will poll a file again and again even
though the request of interest was completed a long time ago.
Also, remove -EAGAIN check from io_issue_sqe() as it races with
the changed lines. The request will take the long way and be
resubmitted from io_iopoll*().
Fixes: bbde017a32 ("io_uring: add memory barrier to synchronize io_kiocb's result and iopoll_completed")
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
When the user consumes and generates sqes at a fast rate,
io_sqring_entries() can always get an sqe, and ret will not be equal to
-EBUSY, so io_sq_thread() will never call cond_resched() or schedule(),
and then we get the following system error prompt:
rcu: INFO: rcu_sched self-detected stall on CPU
or
watchdog: BUG: soft lockup - CPU#23 stuck for 112s! [io_uring-sq:1863]
This patch checks need_resched() on every loop iteration and calls
cond_resched() when a reschedule is due.
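Sketched against the loop in question (illustrative, not the literal diff):

	/* io_sq_thread() main loop, sketch: also take the idle path,
	 * which reaches cond_resched(), when a reschedule is due */
	to_submit = io_sqring_entries(ctx);
	if (!to_submit || ret == -EBUSY || need_resched()) {
		...
		cond_resched();
	}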
Suggested-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
After recent changes, io_submit_sqes() always passes a valid submit
state, so kill the leftover NULL checks.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
It's good practice to modify fields of a struct after, not before, it
has been initialised. Even though io_init_poll_iocb() doesn't touch
poll->file, call it first.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
REQ_F_MUST_PUNT may seem clear and useful, but it's the same as not
having REQ_F_NOWAIT set. That only creates more confusion. Moreover,
it doesn't even affect any behaviour (e.g. see the patch removing it
from io_{read,write}).
Kill the flag and update the already-outdated comments.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_{read,write}() {
	...
copy_iov: // prep async
	if (!(flags & REQ_F_NOWAIT) && !file_can_poll(file))
		flags |= REQ_F_MUST_PUNT;
}
Setting REQ_F_MUST_PUNT there is pointless: if it happens, REQ_F_NOWAIT
is known to be _not_ set, and the request will take the async path in
__io_queue_sqe() anyway. The file_can_poll() check is also repeated in
arm_poll*(), so it isn't needed here.
Remove the mentioned REQ_F_MUST_PUNT assignment in preparation for
killing the flag.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Pull in async buffered reads branch.
* async-buffered.8:
io_uring: support true async buffered reads, if file provides it
mm: add kiocb_wait_page_queue_init() helper
btrfs: flag files as supporting buffered async reads
xfs: flag files as supporting buffered async reads
block: flag block devices as supporting IOCB_WAITQ
fs: add FMODE_BUF_RASYNC
mm: support async buffered reads in generic_file_buffered_read()
mm: add support for async page locking
mm: abstract out wake_page_match() from wake_page_function()
mm: allow read-ahead with IOCB_NOWAIT set
io_uring: re-issue block requests that failed because of resources
io_uring: catch -EIO from buffered issue request failure
io_uring: always plug for any number of IOs
block: provide plug based way of signaling forced no-wait semantics
If the file is flagged with FMODE_BUF_RASYNC, then we don't have to punt
the buffered read to an io-wq worker. Instead we can rely on page
unlocking callbacks to support retry based async IO. This is a lot more
efficient than doing async thread offload.
The retry is done similarly to how we handle poll based retry. From
the unlock callback, we simply queue the retry to a task_work based
handler.
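In rough terms, the issue-side decision becomes (a sketch using the
flags from this series):

	/* sketch: buffered read would block */
	if (!(kiocb->ki_flags & IOCB_DIRECT) &&
	    (req->file->f_mode & FMODE_BUF_RASYNC)) {
		/* arm IOCB_WAITQ retry: the page-unlock callback
		 * queues the retry via task_work */
	} else {
		/* fall back to punting to an io-wq worker */
	}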
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Checks if the file supports it, and initializes the values that we need.
Caller passes in 'data' pointer, if any, and the callback function to
be used.
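Roughly, the helper looks like this (a sketch; see the patch for the
exact body):

	static inline int kiocb_wait_page_queue_init(struct kiocb *kiocb,
						     struct wait_page_queue *wait,
						     wait_queue_func_t func,
						     void *data)
	{
		/* async wakeups can't mix with polled IO */
		if (kiocb->ki_flags & IOCB_HIPRI)
			return -EINVAL;
		if (kiocb->ki_filp->f_mode & FMODE_BUF_RASYNC) {
			wait->wait.func = func;
			wait->wait.private = data;
			wait->wait.flags = 0;
			INIT_LIST_HEAD(&wait->wait.entry);
			kiocb->ki_flags |= IOCB_WAITQ;
			kiocb->ki_waitq = wait;
			return 0;
		}
		return -EOPNOTSUPP;
	}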
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Use the async page locking infrastructure, if IOCB_WAITQ is set in the
passed in iocb. The caller must expect an -EIOCBQUEUED return value,
which means that IO is started but not done yet. This is similar to how
O_DIRECT signals the same operation. Once the callback is received by
the caller for IO completion, the caller must retry the operation.
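From the caller's side, the contract is roughly (sketch):

	/* sketch: issue a buffered read with IOCB_WAITQ set */
	ret = call_read_iter(file, &kiocb, &iter);
	if (ret == -EIOCBQUEUED) {
		/* IO is started but not done, like O_DIRECT; retry the
		 * read once the ki_waitq callback fires on unlock */
	}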
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Normally waiting for a page to become unlocked, or locking the page,
requires waiting for IO to complete. Add support for lock_page_async()
and wait_on_page_locked_async(), which are callback based instead. This
allows a caller to get notified when a page becomes unlocked, rather
than wait for it.
We add a new iocb field, ki_waitq, to pass in the necessary data for this
to happen. We can union this with ki_cookie, since that is only used
for polled IO, and polled IO can never co-exist with async callbacks,
as it (by definition) uses polled completions. struct wait_page_key is
made public,
and we define struct wait_page_async as the interface between the caller
and the core.
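The union and the async lock helper look roughly like this (sketch):

	struct kiocb {
		...
		union {
			unsigned int		ki_cookie;	/* polled IO */
			struct wait_page_queue	*ki_waitq;	/* async buffered IO */
		};
	};

	static inline int lock_page_async(struct page *page,
					  struct wait_page_queue *wait)
	{
		if (!trylock_page(page))
			return __lock_page_async(page, wait);
		return 0;
	}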
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
No functional changes in this patch, just in preparation for allowing
more callers.
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
The read-ahead shouldn't block, so allow it to be done even if
IOCB_NOWAIT is set in the kiocb.
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Mark the plug with nowait == true, which will cause requests to avoid
blocking on request allocation. If they do, we catch them and reissue
them from a task_work based handler.
Normally we can catch -EAGAIN directly, but the hard case is for split
requests. As an example, the application issues a 512KB request. The
block core will split this into 128KB if that's the max size for the
device. The first request issues just fine, but we run into -EAGAIN on
some of the later splits of the same request. As the bio is split, we
don't get to see the -EAGAIN until one of the actual reads completes,
and hence we cannot handle it inline as part of submission.
This does potentially cause re-reads of parts of the range, as the whole
request is reissued. There's currently no better way to handle this.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-EIO bubbles up like -EAGAIN if we fail to allocate a request at the
lower level. Play it safe and treat it like -EAGAIN in terms of sync
retry, to avoid passing back an errant -EIO.
Catch some of these early for block-based files, as non-mq devices
generally do not support NOWAIT. That saves us some overhead by not
first trying, then retrying from async context; we can go straight to
the async punt instead.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Currently we only plug if we're doing more than two requests. We're going
to be relying on always having the plug there to pass down information,
so plug unconditionally.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Provide a way for the caller to specify that IO should be marked
with REQ_NOWAIT to avoid blocking on allocation.
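Usage from a submitter then looks something like this (a sketch; the
field is the one this patch adds to struct blk_plug):

	struct blk_plug plug;

	blk_start_plug(&plug);
	plug.nowait = true;	/* IO under this plug gets REQ_NOWAIT semantics */
	/* ... submit requests ... */
	blk_finish_plug(&plug);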
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ring pages are not pinned, so it is more appropriate to report them
as locked.
Signed-off-by: Bijan Mottahedeh <bijan.mottahedeh@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Applications can pass this flag in to avoid the accept thundering herd
problem.
Signed-off-by: Jiufei Xue <jiufei.xue@linux.alibaba.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
poll events should be 32 bits to cover EPOLLEXCLUSIVE.
Explicitly word-swap the poll32_events on big endian to make sure the
ABI is not changed. We call this feature IORING_FEAT_POLL_32BITS;
applications that want to use EPOLLEXCLUSIVE should check the feature
bit first.
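On the application side, usage is gated on the feature bit (raw-sqe
sketch):

	if (params.features & IORING_FEAT_POLL_32BITS) {
		sqe->opcode = IORING_OP_POLL_ADD;
		sqe->poll32_events = EPOLLIN | EPOLLEXCLUSIVE;
	}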
Signed-off-by: Jiufei Xue <jiufei.xue@linux.alibaba.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Merge tag 'selinux-pr-20200621' of git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/selinux
Pull SELinux fixes from Paul Moore:
"Three small patches to fix problems in the SELinux code, all found via
clang.
Two patches fix potential double-free conditions and one fixes an
undefined return value"
* tag 'selinux-pr-20200621' of git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/selinux:
selinux: fix undefined return of cond_evaluate_expr
selinux: fix a double free in cond_read_node()/cond_read_list()
selinux: fix double free
Merge tag 'pinctrl-v5.8-2' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl
Pull pin control fixes from Linus Walleij:
"Some early fixes collected during the first week after the merge
window, all pretty self-evident, with the details below. The revert is
the crucial thing.
- Fix a warning on the Qualcomm SPMI GPIO chip being instantiated
twice without a unique irqchip struct
- Use the noirq variants of the suspend and resume callbacks in the
Tegra driver
- Clean up the errorpath on the MCP23s08 driver
- Revert the use of devm_of_iomap() in the Freescale driver as it was
regressing the platform
- Add some missing pins in the Qualcomm IPQ6018 driver
- Fix a simple documentation bug in the pinctrl-single driver"
* tag 'pinctrl-v5.8-2' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl:
pinctrl: single: fix function name in documentation
pinctrl: qcom: ipq6018 Add missing pins in qpic pin group
Revert "pinctrl: freescale: imx: Use 'devm_of_iomap()' to avoid a resource leak in case of error in 'imx_pinctrl_probe()'"
pinctrl: mcp23s08: Split to three parts: fix ptr_ret.cocci warnings
pinctrl: tegra: Use noirq suspend/resume callbacks
pinctrl: qcom: spmi-gpio: fix warning about irq chip reusage
Merge tag 'kbuild-fixes-v5.8' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild
Pull Kbuild fixes from Masahiro Yamada:
- fix -gz=zlib compiler option test for CONFIG_DEBUG_INFO_COMPRESSED
- improve cc-option in scripts/Kbuild.include to clean up temp files
- improve cc-option in scripts/Kconfig.include for more reliable
compile option test
- do not copy modules.builtin by 'make install' because it would break
existing systems
- use 'userprogs' syntax for watch_queue sample
* tag 'kbuild-fixes-v5.8' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
samples: watch_queue: build sample program for target architecture
Revert "Makefile: install modules.builtin even if CONFIG_MODULES=n"
scripts: Fix typo in headers_install.sh
kconfig: unify cc-option and as-option
kbuild: improve cc-option to clean up all temporary files
Makefile: Improve compressed debug info support detection