The ALC225_FIXUP_HEADSET_JACK fixup can be merged into alc295_fixup_chromebook.
There are no users of ALC225_FIXUP_HEADSET_JACK other than
the Chromebook hardware.
Fixes: 10f5b1b85e ("ALSA: hda/realtek - Fixed Headset Mic JD not stable")
Cc: Kailang Yang <kailang@realtek.com>
Signed-off-by: Jaroslav Kysela <perex@perex.cz>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Merge tag 'drm-misc-next-fixes-2019-03-13' of git://anongit.freedesktop.org/drm/drm-misc into drm-next
- qxl: Remove the conflicting framebuffers earlier
- Split out some i915 code into the fb_helper to allow the above
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Maxime Ripard <maxime.ripard@bootlin.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190313192158.k3qssf733khsqodn@flea
sk_setup_caps() is called to set sk->sk_dst_cache in pptp_connect,
so we have to call dst_release() on sk->sk_dst_cache in pptp_sock_destruct;
otherwise the dst refcount will leak.
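A minimal sketch of the idea (not the exact patch), releasing the cached
dst in the socket destructor; rcu_dereference_protected() is used since no
concurrent users remain at destruct time:
  static void pptp_sock_destruct(struct sock *sk)
  {
          /* drop the dst reference taken via sk_setup_caps() in pptp_connect */
          dst_release(rcu_dereference_protected(sk->sk_dst_cache, 1));
          /* ... existing pptp cleanup continues here ... */
  }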
It can be reproduced by this syz log:
r1 = socket$pptp(0x18, 0x1, 0x2)
bind$pptp(r1, &(0x7f0000000100)={0x18, 0x2, {0x0, @local}}, 0x1e)
connect$pptp(r1, &(0x7f0000000000)={0x18, 0x2, {0x3, @remote}}, 0x1e)
Consecutive dmesg warnings will occur:
unregister_netdevice: waiting for lo to become free. Usage count = 1
v1->v2:
- use rcu_dereference_protected() instead of rcu_dereference_check(),
as suggested by Eric.
Fixes: 00959ade36 ("PPTP: PPP over IPv4 (Point-to-Point Tunneling Protocol)")
Reported-by: Xiumei Mu <xmu@redhat.com>
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
There is a patchwork instance behind bcm-kernel-feedback-list that is
helpful for tracking submissions; add this list for the Broadcom GENET and
SYSTEMPORT drivers.
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
A previous fix ("tls: Fix write space handling") assumed that the
user space application gets informed about socket send buffer
availability when tls_push_sg() gets called. However, if
do_tcp_sendpages() returns 0 inside tls_push_sg(), the function returns
without calling ctx->sk_write_space. Further, the new function
tls_sw_write_space() did not invoke ctx->sk_write_space either. As a
result, the user space application locks up, forever waiting for the
socket send buffer to become available.
Rather than calling ctx->sk_write_space from tls_push_sg(), it should be
called from tls_write_space. That way, whenever the TCP stack invokes
sk->sk_write_space after freeing socket send buffer space, we always
relay it to user space by invoking ctx->sk_write_space.
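A rough sketch of the resulting shape of tls_write_space() (illustrative,
not the exact hunk); the key point is that ctx->sk_write_space is invoked
unconditionally once the TLS-specific transmit work has been kicked:
  void tls_write_space(struct sock *sk)
  {
          struct tls_context *ctx = tls_get_ctx(sk);

          /* ... resume any pending TLS transmit work ... */

          /* always propagate the wakeup to the application */
          ctx->sk_write_space(sk);
  }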
Fixes: 7463d3a2db ("tls: Fix write space handling")
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Reviewed-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
It is confusing to use the return value of netlink_send()/
netlink_unicast() directly as the return value of *notify*, as it may
not be an error at all.
Example: in tc_del_tfilter(), after calling tfilter_del_notify(), it will
goto errout if (err). However, netlink_send()/netlink_unicast() return a
positive value even in the successful case, so tcf_chain_tp_remove() and
the rest of the cleanup may be skipped and, as a result, the resource is
leaked.
It may be easier to only check the return value of tfilter_del_notify(),
but it is cleaner to correct all related functions.
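The pattern used to correct the related functions is roughly the
following (a sketch, assuming the rtnetlink_send() call already used by
the notify helpers):
  err = rtnetlink_send(skb, net, portid, RTNLGRP_TC, flags & NLM_F_ECHO);
  if (err > 0)
          err = 0;        /* a positive value only means the message was delivered */
  return err;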
Co-developed-by: Zengmo Gao <gaozengmo@jd.com>
Signed-off-by: Zhike Wang <wangzhike@jd.com>
Acked-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
It has been observed that the tx queue may stall while downloading
from certain web sites (for example www.speedtest.net).
The cause has been tracked down to a corner case where
the tx interrupt vector was disabled automatically, but
was not re-enabled later.
The lan743x has two mechanisms to enable/disable individual
interrupts: interrupts can be enabled/disabled per individual
source, and they can also be enabled/disabled per individual
vector that has been mapped to the source. Both must be
enabled for interrupts to work properly.
The TX code path primarily uses the interrupt enable/disable of
the TX source bit, while leaving the vector enabled all the time.
However, while investigating this issue it was noticed that
the driver requested the use of the vector auto-clear feature.
The test above revealed a case where the vector enable was
cleared unintentionally.
This patch fixes the issue by deleting the lines that request
the vector auto-clear feature.
Fixes: 23f0703c12 ("lan743x: Add main source files for new lan743x driver")
Signed-off-by: Bryan Whitehead <Bryan.Whitehead@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
There is a plan to build the kernel with -Wimplicit-fallthrough and
this place in the code produced a warning (W=1).
This commit removes the following warning:
kernel/trace/blktrace.c:725:9: warning: this statement may fall through [-Wimplicit-fallthrough=]
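For context, the usual way such a warning is silenced when the
fall-through is intentional is a comment the compiler recognizes (the case
labels below are hypothetical, not the actual blktrace code):
  switch (cmd) {
  case EXAMPLE_SETUP:             /* hypothetical label */
          do_setup();
          /* fall through */
  case EXAMPLE_START:             /* hypothetical label */
          do_start();
          break;
  }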
Signed-off-by: Mathieu Malaterre <malat@debian.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
This is just a port of the ASoC Icelake HDMI codec code to the legacy
HDA driver with some cleanups.
ASoC commit 019033c854:
"ASoC: Intel: hdac_hdmi: add Icelake support"
Signed-off-by: Jaroslav Kysela <perex@perex.cz>
Cc: Bard liao <bard.liao@intel.com>
Cc: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
A C2HData PDU with the SUCCESS flag set indicates that the I/O was
completed by the controller successfully and means that a subsequent
completion response capsule PDU will be omitted.
If we see this flag, we first check that the LAST_PDU flag is set as well,
and then we complete the request when the data transfer (and data digest
verification, if it is on) is done.
While we're at it, reuse a bit of code with nvme_fail_request.
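A rough sketch of the flag handling (not the exact patch):
  if (pdu->hdr.flags & NVME_TCP_F_DATA_SUCCESS) {
          /* SUCCESS implies this is also the last C2HData PDU */
          if (!(pdu->hdr.flags & NVME_TCP_F_DATA_LAST))
                  return -EPROTO;
          /*
           * complete the request once the data transfer (and data
           * digest verification, if enabled) has finished
           */
  }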
Reported-by: Steve Blightman <steve.blightman@oracle.com>
Suggested-by: Oliver Smith-Denny <osmithde@cisco.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Oliver Smith-Denny <osmithde@cisco.com>
Tested-by: Oliver Smith-Denny <osmithde@cisco.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
NVMe DSM is a pure hint, so if the underlying device / file system
does not support discard-like operations we should not fail the
operation but rather return success.
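Conceptually the backend change amounts to the following (a sketch, not
the exact hunk):
  ret = __blkdev_issue_discard(bdev, sector, nr_sects, GFP_KERNEL, 0, &bio);
  if (ret == -EOPNOTSUPP)
          ret = 0;        /* DSM is only a hint, "not supported" is not an error */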
Fixes: 3b031d1599 ("nvmet: add error log support for bdev backend")
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Tested-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Add a gendisk argument to nvme_config_write_zeroes so that the call to
nvme_update_disk_info for the multipath device node updates the
proper request_queue.
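The resulting call shape is roughly (sketch):
  static void nvme_config_write_zeroes(struct gendisk *disk, struct nvme_ns *ns)
  {
          /* derive the limit from ns and apply it to disk->queue, so the
           * multipath node's request_queue gets updated as well */
  }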
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Tested-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Add a gendisk argument to nvme_config_discard so that the call to
nvme_update_disk_info for the multipath device node updates the
proper request_queue.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reported-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Tested-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Just opencode the two function calls in the caller.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Tested-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Qemu started out with a broken implementation of Write Zeroes written
by yours truly. Disable Write Zeroes on qemu for now; eventually
we need to go back and make all the qemu quirks version specific,
but that is left for another time.
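Mechanically this is a controller quirk checked where Write Zeroes
support is configured; a sketch (the quirk flag name is an assumption
here):
  /* quirk flag name assumed for illustration */
  if (ns->ctrl->quirks & NVME_QUIRK_DISABLE_WRITE_ZEROES)
          return;         /* leave the Write Zeroes limit at 0 */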
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Tested-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
The FC-NVME spec, when finally approved, modified the disconnect LS
such that the only scope available is the association.
Rework the Disconnect LS processing to be in accordance with the
change.
Signed-off-by: Nigel Kirkland <nigel.kirkland@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Ewan D. Milne <emilne@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
There are two changes:
1) The logic in the __nvmet_fc_free_assoc() routine is bad. It uses
"safe" routines assuming pointers will come back valid. However, the
intervening next structure being linked can be removed from the list and
the resulting safe pointers are bad, resulting in NULL ptrs being hit.
Correct by scheduling a work element to perform the association delete,
which can be done while under lock.
2) A prior patch that added the work element scheduling left a possible
reference on the object if the work element couldn't be scheduled.
Correct by doing the put on a failing schedule_work() call.
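The second fix is essentially (sketch, names following the existing
nvmet-fc code):
  if (!schedule_work(&assoc->del_work))
          /* work was already queued, drop the reference taken for it */
          nvmet_fc_tgt_a_put(assoc);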
Signed-off-by: Nigel Kirkland <nigel.kirkland@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Ewan D. Milne <emilne@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Merge tag 'selinux-pr-20190312' of git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/selinux
Pull selinux fixes from Paul Moore:
"Two small fixes for SELinux in v5.1: one adds a buffer length check to
the SELinux SCTP code, the other ensures that the SELinux labeling for
a NFS mount is not disabled if the filesystem is mounted twice"
* tag 'selinux-pr-20190312' of git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/selinux:
security/selinux: fix SECURITY_LSM_NATIVE_LABELS on reused superblock
selinux: add the missing walk_size + len check in selinux_sctp_bind_connect
Merge tag 'apparmor-pr-2019-03-12' of git://git.kernel.org/pub/scm/linux/kernel/git/jj/linux-apparmor
Pull apparmor fixes from John Johansen:
- fix double free when failing to unpack secmark rules in policy
- fix leak of dentry when profile is removed
* tag 'apparmor-pr-2019-03-12' of git://git.kernel.org/pub/scm/linux/kernel/git/jj/linux-apparmor:
apparmor: fix double free when unpack of secmark rules fails
apparmor: delete the dentry in aafs_remove() to avoid a leak
apparmor: Fix warning about unused function apparmor_ipv6_postroute
If:
- A successful connect has occurred with an io queue count greater than
zero and namespaces detected and running.
- An error or something occurs which causes a termination of the prior
association and then starts a reconnect,
- The reconnect then creates a new controller, but for whatever reason,
nvme_set_queue_count() results in io queue count set to zero. This
will skip io queue and tag set changes.
- But... the controller will transition to live, calling
nvme_start_ctrl, which calls nvme_start_queues(), which then releases
I/Os into the transport which then sends them to the driver.
As there are no queues, things eventually hit the driver looking for a
handle, which was cleared when the original controller was reset, and it
can't proceed. Worst case, things progress, but everything fails.
In the failing scenario, the nvme_set_features(NVME_FEAT_NUM_QUEUES)
command actually failed with an NVME_SC_INTERNAL error. For some reason,
although nvme_set_queue_count() saw the error and set io queue count to
zero, it doesn't return a failure status to the transport, which allows
the transport to continue using the controller.
Fix the problem by simply rejecting the new association if at least 1
I/O queue can't be created. The association reject will fail the
reconnect attempt and fall into the reconnect retry policy.
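In outline, the create/reconnect path now bails out when no I/O queues
could be obtained (a sketch; the exact error code is illustrative):
  ret = nvme_set_queue_count(&ctrl->ctrl, &nr_io_queues);
  if (ret)
          return ret;
  if (!nr_io_queues) {
          dev_err(ctrl->ctrl.device, "unable to create any I/O queues\n");
          return -ENOTCONN;       /* illustrative error code */
  }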
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
A recent change added a numa_node field to the nvme controller
and has the transport assign the node using dev_to_node().
However, fcloop registers with a NULL device struct, so the
dev_to_node() call oopses.
Revise the assignment to assign no node when the device struct is NULL.
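The guard is essentially the following (sketch; exact placement in the
transport differs):
  ctrl->numa_node = dev ? dev_to_node(dev) : NUMA_NO_NODE;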
Fixes: 103e515efa ("nvme: add a numa_node field to struct nvme_ctrl")
Reported-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Mike Snitzer <snitzer@redhat.com>
[hch: small coding style fixup]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
For some nvme commands issued by the nvme core layer, there
is an internal buffer which can cause blk_rq_payload_bytes() to
return a non-zero value even though there is no actual/real command
payload and sg list. An example is the WRITE ZEROES command.
To address this, when deciding whether to dma map an sgl,
use blk_rq_nr_phys_segments() instead of blk_rq_payload_bytes().
When there is an sgl, blk_rq_payload_bytes() will still return the
amount of data to be transferred by it.
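The mapping decision then becomes, roughly:
  if (!blk_rq_nr_phys_segments(rq))
          return 0;       /* no sg list to map, e.g. WRITE ZEROES */
  /* otherwise build and dma map the sg list as before */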
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
After commit a686ed75c0 ("nvme: introduce a helper function for
controller deletion"), nvme_delete_ctrl_sync no longer uses flush_work.
Update the comment accordingly.
Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
In case nvme_alloc_ns fails after we initialize ns_head but before we
add the ns to the controller's namespaces list, we need to explicitly put
the ns_head reference; otherwise, when we tear down the controller we
won't find it, and we eventually leak a dangling subsystem.
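The fix amounts to an explicit put on that failure path (sketch):
  /* on the failure path, before freeing the ns: */
  nvme_put_ns_head(ns->head);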
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
The field is defined to be a 24 byte array; we don't need to multiply
the sizeof() of that field by the number of dwords it covers.
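Illustration of the difference (struct and buffer names are
hypothetical):
  struct example_cmd {
          u8 cdw10[24];                           /* already sized in bytes */
  };

  memcpy(buf, cmd->cdw10, sizeof(cmd->cdw10));    /* copies 24 bytes */
  /* wrong: sizeof(cmd->cdw10) * 6 would copy 144 bytes */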
Signed-off-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
A write or flush IO passthrough command is expected to change the
logical block content, so don't warn on these as no additional handling
is necessary.
Signed-off-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
This will be a little more efficient since unset CONFIG options are
stripped away from auto.conf, and we can hard-code the path to auto.conf
since it is never overridden.
include/config/kernel.release is generated before %pkg is run.
So it is guaranteed that auto.conf is up-to-date.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
I think is_enabled() and if_enable_echo() in scripts/package/mkdebian
are useful.
builddeb also has many repetitive greps over the kernel config, so I
borrowed the idea to clean it up.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
This might be a kind of bike-shed, but I personally prefer grep'able
code.
I often do 'git grep CONFIG_FOO' instead of 'git grep FOO' when I
want to know where that CONFIG option is used.
This makes the code longer, but I hope it is at an acceptable level.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Since commit 3812b8c5c5 ("kbuild: make -r/-R effective in top
Makefile for old Make versions"), make-kpkg is not working.
make-kpkg directly includes the top Makefile of the Linux kernel, and
appends some debian_* targets.
/usr/share/kernel-package/ruleset/kernel_version.mk:
# Include the kernel makefile
override dot-config := 1
include Makefile
dot-config := 1
I did not know the kernel Makefile was used in that way, and it is
hard to guarantee the behavior when the kernel Makefile is included
by another Makefile from a different project.
It looks like Debian Stretch stopped providing make-kpkg. Maybe it is
obsolete and being replaced with 'make deb-pkg' etc., but it is still
widely used.
This commit adds a workaround; if the top Makefile is included by
another Makefile, skip sub-make in order to make the main part visible.
'MAKEFLAGS += -rR' does not become effective for GNU Make < 4.0, but
Debian/Ubuntu is already using newer versions.
The effect of this commit:
Debian 8 (Jessie) : Fixed
Debian 9 (Stretch) : make-kpkg (kernel-package) is not provided
Ubuntu 14.04 LTS : NOT Fixed
Ubuntu 16.04 LTS : Fixed
Ubuntu 18.04 LTS : Fixed
This commit cannot fix Ubuntu 14.04 because it installs GNU Make 3.81,
but its support will end in Apr 2019, which is before the Linux v5.1
release.
I added a warning so that nobody tries to include the top Makefile.
Fixes: 3812b8c5c5 ("kbuild: make -r/-R effective in top Makefile for old Make versions")
Reported-by: Liz Zhang <lizzha@microsoft.com>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Tested-by: Lili Deng <v-lide@microsoft.com>
Cc: Manoj Srivastava <srivasta@debian.org>
As commit 423a8155fa ("kbuild: Fix reading of .config in
link-vmlinux.sh") addressed, some shells fail to perform '.' if
${KCONFIG_CONFIG} does not contain a slash at all.
Instead, we can source include/config/auto.conf, which obviously
contains slashes, and whose file path we do not expect to be overridden by
a user. The performance might also be slightly better since
unset CONFIG options are stripped from include/config/auto.conf.
scripts/setlocalversion already works this way.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
When I was searching for unneeded $(KCONFIG_CONFIG) usages, I noticed
this strange build dependency.
It could use $(call if_changed,...) in case ZTEXTADDR and ZBSSADDR are
changed, but an even simpler way is to use the pattern rule in
scripts/Makefile.build. This is what arch/arm/boot/compressed/Makefile
does.
I only did a build test, and confirmed that an equivalent vmlinux.lds was generated.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
I believe it is a bad idea to hardcode a specific compiler prefix
that may or may not be installed on a user's system. It is annoying
when testing features that should not require compilers at all.
For example, mrproper, headers_install, etc. should work without
any compiler.
They look as follows on my machine:
$ make ARCH=h8300 mrproper
./scripts/gcc-version.sh: line 26: h8300-unknown-linux-gcc: command not found
./scripts/gcc-version.sh: line 27: h8300-unknown-linux-gcc: command not found
make: h8300-unknown-linux-gcc: Command not found
make: h8300-unknown-linux-gcc: Command not found
[ a bunch of the same error messages continue ]
$ make ARCH=h8300 headers_install
./scripts/gcc-version.sh: line 26: h8300-unknown-linux-gcc: command not found
./scripts/gcc-version.sh: line 27: h8300-unknown-linux-gcc: command not found
make: h8300-unknown-linux-gcc: Command not found
HOSTCC scripts/basic/fixdep
make: h8300-unknown-linux-gcc: Command not found
WRAP arch/h8300/include/generated/uapi/asm/kvm_para.h
[ snip ]
The solution is to delete this line, or to use cc-cross-prefix like
some architectures do. I chose the latter as a moderate fixup.
I added an alternative 'h8300-linux-' because it is available at:
https://mirrors.edge.kernel.org/pub/tools/crosstool/files/bin/x86_64/8.1.0/
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
scripts/Makefile.build and arch/s390/boot/Makefile use the same
command (thin archiving with symbol table creation).
Avoid the code duplication, and move it to scripts/Makefile.lib.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Unless CONFIG_DEBUG_SECTION_MISMATCH is enabled, modpost only shows
the number of section mismatches.
If you want to know the symbols causing the issue, you need to rebuild
with CONFIG_DEBUG_SECTION_MISMATCH. It is tedious.
I think it is fine to show the annoying warnings whenever a new section
mismatch comes in.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Currently, the Kbuild core manipulates header search paths in a crazy
way [1].
To fix this mess, I want all Makefiles to add explicit $(srctree)/ to
the search paths in the srctree. Some Makefiles are already written in
that way, but not all. The goal of this work is to make the notation
consistent, and finally get rid of the gross hacks.
Having whitespaces after -I does not matter since commit 48f6e3cf5b
("kbuild: do not drop -I without parameter").
I removed some header search paths because I was able to build ia64
without them.
[1]: https://patchwork.kernel.org/patch/9632347/
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Currently, the Kbuild core manipulates header search paths in a crazy
way [1].
To fix this mess, I want all Makefiles to add explicit $(srctree)/ to
the search paths in the srctree. Some Makefiles are already written in
that way, but not all. The goal of this work is to make the notation
consistent, and finally get rid of the gross hacks.
Having whitespaces after -I does not matter since commit 48f6e3cf5b
("kbuild: do not drop -I without parameter").
[1]: https://patchwork.kernel.org/patch/9632347/
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
bison/flex is now always needed for building kconfig. Some build
dependencies depend on the kernel configuration; enable them as needed:
- libelf-dev when UNWINDER_ORC is set
- libssl-dev for SYSTEM_TRUSTED_KEYRING
Since libssl-dev is needed for the extract_cert binary, denote it with
:native to install libssl-dev for the build machine's architecture,
rather than for the architecture of the kernel being built.
Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Reviewed-by: Ben Hutchings <ben@decadent.org.uk>
Acked-by: maximilian attems <maks@stro.at>
[masahiro.yamada: change 'flex' to 'flex | flex:native' ]
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Before this commit, i2c-designware-platdrv assumed that if the pdev
has an ACPI-companion it should use a dynamic adapter-nr and set
adapter->nr to -1; otherwise it used pdev->id as adapter->nr.
There are 3 ways how platform_device-s to which i2c-designware-platdrv
will bind can be instantiated:
1) Through of / devicetree
2) Through ACPI enumeration
3) Explicitly instantiated through platform_device_create + add
1) In case of devicetree instantiation, the drivers/of code always sets
pdev->id to PLATFORM_DEVID_NONE, which is -1, so in this case both paths
to set adapter->nr end up doing the same thing.
2) In case of ACPI instantiation the device will always have an
ACPI-companion, so we are already using dynamic adapter-nrs.
3) There are 2 places manually instantiating a designware_i2c platform_dev:
drivers/mfd/intel_quark_i2c_gpio.c
drivers/mfd/intel-lpss.c
In the intel_quark_i2c_gpio.c case pdev->id is always 0, so switching to
dynamic adapter-nrs here could lead to the bus-number no longer being
stable, but the quark X1000 only has 1 i2c-controller, which will also
be assigned bus-number 0 when using dynamic adapter-nrs.
In the intel-lpss.c case intel_lpss_probe() is called from either
intel-lpss-acpi.c in which case there always is an ACPI-companion, or
from intel-lpss-pci.c. In most cases devices handled by intel-lpss-pci.c
also have an ACPI-companion, so we use a dynamic adapter-nr. But in some
cases the ACPI-companion is missing and we would use pdev->id (allocated
from intel_lpss_devid_ida). Devices which use the intel-lpss-pci.c code
typically have many i2c busses, so using pdev->id in this case may lead
to a bus-number conflict, triggering a WARN(id < 0, "couldn't get idr")
in i2c-core-base.c, causing an oops and the adapter registration to fail.
So in this case using non-dynamic adapter-nrs is actually undesirable.
One machine on which this oops was triggering is the Apollo Lake based
Acer TravelMate Spin B118.
TL;DR: Switching to always using dynamic adapter-numbers does not make
any difference in most cases and in the one case where it does make a
difference the behavior change is desirable because the old behavior
caused an oops.
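The change itself is small: in dw_i2c_plat_probe the adapter number is
now always set up for dynamic allocation, roughly:
  adap->nr = -1;  /* always request a dynamically assigned bus number */
With nr == -1 the I2C core assigns the next free bus number dynamically.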
BugLink: https://bugzilla.redhat.com/show_bug.cgi?id=1687065
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
i2c-designware-platdrv assumes that if the pdev has an ACPI-companion
it should use a dynamic adapter-nr, and otherwise it will use pdev->id
as adapter-nr.
Before this commit the setting of the adapter-nr was somewhat convoluted:
in the acpi_companion case it was set from dw_i2c_acpi_configure; in the
non-acpi_companion case it was set from dw_i2c_set_fifo_size, based on
tx_fifo_depth not being set yet, which indicated that dw_i2c_acpi_configure
had not been executed.
This cleans things up by directly setting the adapter-nr from
dw_i2c_plat_probe for both cases.
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>