Commit Graph

Joe Perches
cc10f8712b xen-netback: Fix logging message with spurious period after newline
Using a period after a newline causes bad output.

Signed-off-by: Joe Perches <joe@perches.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-06 15:10:32 -05:00
Willem de Bruijn
cc166427dc tun: avoid unnecessary READ_ONCE in tun_net_xmit
The statement no longer serves a purpose.

Commit fa35864e0b ("tuntap: Fix for a race in accessing numqueues")
added the ACCESS_ONCE to avoid a race condition with skb_queue_len.

Commit 436accebb5 ("tuntap: remove unnecessary sk_receive_queue
length check during xmit") removed the affected skb_queue_len check.

Commit 96f8406162 ("tun: add eBPF based queue selection method")
split the function, reading the field a second time in the callee.
The temp variable is now only read once, so just remove it.

Signed-off-by: Willem de Bruijn <willemb@google.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-06 15:09:13 -05:00
Prashant Bhole
d9b8693783 rds: debug: fix null check on static array
t_name cannot be NULL since it is an array field of a struct. Replace
the NULL check on the static array with a string length check using
strnlen().
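
A minimal sketch of the pattern (the struct and names here are
illustrative, not the actual rds definitions):

struct transport {
	char t_name[16];	/* array member: &t->t_name[0] is never NULL */
};

static bool transport_has_name(const struct transport *t)
{
	/* "t->t_name != NULL" is always true; test the contents instead */
	return strnlen(t->t_name, sizeof(t->t_name)) != 0;
}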

Signed-off-by: Prashant Bhole <bhole_prashant_q7@lab.ntt.co.jp>
Acked-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-06 15:02:48 -05:00
Cong Wang
4bee00eb25 act_mirred: get rid of mirred_list_lock spinlock
TC actions are no longer freed in RCU callbacks and we should
always have RTNL lock, so this spinlock is no longer needed.

Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Jiri Pirko <jiri@mellanox.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-06 14:50:13 -05:00
Cong Wang
9f8a739e72 act_mirred: get rid of tcfm_ifindex from struct tcf_mirred
tcfm_dev always points to the correct netdev and we already
hold a refcnt, so no need to use tcfm_ifindex to lookup again.

If we were to support moving the target netdev across netns, using a
pointer would be better than an ifindex.

This also fixes dumping an obsolete ifindex: after the target device
is gone, we now just dump 0 as the ifindex.
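
Roughly, the change replaces an ifindex lookup with the pointer we
already hold (a sketch, not the exact tcf_mirred code):

/* before: re-resolve the device by index on every use */
dev = __dev_get_by_index(net, m->tcfm_ifindex);

/* after: use the netdev pointer we already hold a refcnt on */
dev = m->tcfm_dev;

/* dump: report 0 once the target device is gone */
ifindex = dev ? dev->ifindex : 0;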

Cc: Jiri Pirko <jiri@mellanox.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-06 14:50:13 -05:00
David S. Miller
3a9ab39328 Merge branch 'ipv6-add-ip6erspan-collect_md-mode'
William Tu says:

====================
ipv6: add ip6erspan collect_md mode

Similar to erspan collect_md mode in ipv4, the first patch adds
support for ip6erspan collect metadata mode.  The second patch
adds the test case using bpf_skb_[gs]et_tunnel_key helpers.

The corresponding iproute2 patch:
https://marc.info/?l=linux-netdev&m=151251545410047&w=2
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-06 14:45:29 -05:00
William Tu
d37e3bb774 samples/bpf: add ip6erspan sample code
Extend the existing tests for ip6erspan.

Signed-off-by: William Tu <u9012063@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-06 14:45:29 -05:00
William Tu
ef7baf5e08 ip6_gre: add ip6 erspan collect_md mode
Similar to ip6 gretap and ip4 gretap, this patch allows the erspan
tunnel to operate in collect metadata mode. The
bpf_skb_[gs]et_tunnel_key() helpers can make use of it right away.

Signed-off-by: William Tu <u9012063@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-06 14:45:29 -05:00
Florian Westphal
134059fd27 net: thunderx: Fix TCP/UDP checksum offload for IPv4 pkts
Offload the IP header checksum to the NIC.

This fixes a previous patch which disabled checksum offloading for both
IPv4 and IPv6 packets, so L3 checksum offload was also getting disabled
for IPv4 pkts, and the HW drops these pkts for some reason.

Without this patch, IPv4 TSO appears to be broken: I get ~16 kbyte/s
without it and close to 2 mbyte/s with it when copying files via scp
from a test box to my home workstation.

Looking at tcpdump on the sender, it looks like the hardware drops IPv4
TSO skbs. This patch restores performance for me; ipv6 looks good too.

Fixes: fa6d7cb5d7 ("net: thunderx: Fix TCP/UDP checksum offload for IPv6 pkts")
Cc: Sunil Goutham <sgoutham@cavium.com>
Cc: Aleksey Makarov <aleksey.makarov@auriga.com>
Cc: Eric Dumazet <edumazet@google.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-06 14:43:31 -05:00
Dan Carpenter
92425c4067 bnxt_en: Uninitialized variable in bnxt_tc_parse_actions()
Smatch warns that:

    drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c:160 bnxt_tc_parse_actions()
    error: uninitialized symbol 'rc'.

"rc" is either uninitialized or set to zero here so we can just remove
the check.

Fixes: 8c95f773b4 ("bnxt_en: add support for Flower based vxlan encap/decap offload")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-06 14:25:25 -05:00
Dave Martin
cb968afc78 arm64/sve: Avoid dereference of dead task_struct in KVM guest entry
When deciding whether to invalidate FPSIMD state cached in the cpu,
the backend function sve_flush_cpu_state() attempts to dereference
__this_cpu_read(fpsimd_last_state).  However, this is not safe:
there is no guarantee that this task_struct pointer is still valid,
because the task could have exited in the meantime.

This means that we need another way to get the appropriate value
of TIF_SVE for the associated task.

This patch solves this issue by adding a cached copy of the TIF_SVE
flag in fpsimd_last_state, which we can check without dereferencing
the task pointer.

In particular, although this patch is not a KVM fix per se, this
means that this check is now done safely in the KVM world switch
path (which is currently the only user of this code).
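A sketch of the shape of the fix (the field names are assumed from the
description, not copied from the patch):

struct fpsimd_last_state_struct {
	struct fpsimd_state *st;
	bool sve_in_use;	/* cached copy of the owning task's TIF_SVE */
};

static DEFINE_PER_CPU(struct fpsimd_last_state_struct, fpsimd_last_state);

/* the world-switch check can now avoid dereferencing a maybe-dead task */
if (__this_cpu_read(fpsimd_last_state.sve_in_use))
	fpsimd_flush_cpu_state();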

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-12-06 19:08:05 +00:00
Linus Torvalds
e56d565d67 IOMMU fixes for Linux v4.15-rc3
* Fix VT-d handling of scatterlists where sg->offset exceeds PAGE_SIZE

Merge tag 'iommu-v4.15-rc3' of git://github.com/awilliam/linux-vfio

Pull IOMMU fix from Alex Williamson:
 "Fix VT-d handling of scatterlists where sg->offset exceeds PAGE_SIZE"

* tag 'iommu-v4.15-rc3' of git://github.com/awilliam/linux-vfio:
  iommu/vt-d: Fix scatterlist offset handling
2017-12-06 10:53:02 -08:00
Linus Torvalds
f9efc94447 sound fixes for 4.15-rc3
All fixes are small and for stable:
 - A PCM ioctl race fix
 - Yet another USB-audio hardening for malicious descriptors
 - Realtek ALC257 codec support

Merge tag 'sound-4.15-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound

Pull sound fixes from Takashi Iwai:
 "All fixes are small and for stable:

   - a PCM ioctl race fix

   - yet another USB-audio hardening for malicious descriptors

   - Realtek ALC257 codec support"

* tag 'sound-4.15-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound:
  ALSA: pcm: prevent UAF in snd_pcm_info
  ALSA: hda/realtek - New codec support for ALC257
  ALSA: usb-audio: Add check return value for usb_string()
  ALSA: usb-audio: Fix out-of-bound error
  ALSA: seq: Remove spurious WARN_ON() at timer check
2017-12-06 10:49:14 -08:00
Colin Ian King
d553d03f70 x86: Fix Sparse warnings about non-static functions
Functions x86_vector_debug_show(), uv_handle_nmi() and uv_nmi_setup_common()
are local to the source and do not need to be in global scope, so make them
static.

Fixes up various sparse warnings.

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Mike Travis <mike.travis@hpe.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jiri Kosina <trivial@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russ Anderson <russ.anderson@hpe.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-janitors@vger.kernel.org
Cc: travis@sgi.com
Link: http://lkml.kernel.org/r/20171206173358.24388-1-colin.king@canonical.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-12-06 19:32:58 +01:00
Dave Young
0b02e448a2 efi: Add comment to avoid future expanding of sysfs systab
/sys/firmware/efi/systab shows several different values, which breaks
the sysfs one-file-one-value design. But userspace tools such as
kexec-tools already depend on it, so add a code comment to warn against
future expansion of this file.

Signed-off-by: Dave Young <dyoung@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/20171206095010.24170-4-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-12-06 19:32:23 +01:00
Pan Bian
89c5a2d34b efi/esrt: Use memunmap() instead of kfree() to free the remapping
The remapping result of memremap() must be freed with memunmap(), not kfree().
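
The rule in a nutshell (a generic sketch, not the esrt-specific code):

void *va;

va = memremap(phys, size, MEMREMAP_WB);
if (!va)
	return -ENOMEM;
/* ... use va ... */
memunmap(va);	/* not kfree(): the mapping is not slab memory */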

Signed-off-by: Pan Bian <bianpan2016@163.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: <stable@vger.kernel.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/20171206095010.24170-3-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-12-06 19:32:08 +01:00
Greg Kroah-Hartman
af97a77bc0 efi: Move some sysfs files to be read-only by root
Thanks to the scripts/leaking_addresses.pl script, it was found that
some EFI values should not be readable by non-root users.

So make them root-only. To do that, add a __ATTR_RO_MODE() macro to
make this easier, and use it in other places at the same time.
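
The helper follows the existing __ATTR_RO() pattern in
include/linux/sysfs.h; roughly (the efi usage line is a sketch):

#define __ATTR_RO_MODE(_name, _mode) {					\
	.attr	= { .name = __stringify(_name),				\
		    .mode = VERIFY_OCTAL_PERMISSIONS(_mode) },		\
	.show	= _name##_show,						\
}

/* usage sketch: systab readable by root only */
static struct kobj_attribute efi_attr_systab = __ATTR_RO_MODE(systab, 0400);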

Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Tested-by: Dave Young <dyoung@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Cc: stable <stable@vger.kernel.org>
Link: http://lkml.kernel.org/r/20171206095010.24170-2-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-12-06 19:31:39 +01:00
Vincent Guittot
a4c3c04974 sched/fair: Update and fix the runnable propagation rule
Unlike running, the runnable part can't be directly propagated through
the hierarchy when we migrate a task. The main reason is that runnable
time can be shared with other sched_entities that stay on the rq and
this runnable time will also remain on prev cfs_rq and must not be
removed.

Instead, we can estimate what the new runnable of the prev cfs_rq
should be and check that this estimation stays within a possible range.
The prop_runnable_sum is a good estimation when adding runnable_sum,
but fails most often when we remove it. We could use the formula below
instead:

  gcfs_rq's runnable_sum = gcfs_rq->avg.load_sum / gcfs_rq->load.weight

which assumes that tasks are equally runnable, which is not true but
is easy to compute.

Besides these estimates, we have several simple rules that help us
filter out wrong ones:

 - ge->avg.runnable_sum <= LOAD_AVG_MAX
 - ge->avg.runnable_sum >= ge->avg.running_sum (ge->avg.util_sum << LOAD_AVG_MAX)
 - ge->avg.runnable_sum can't increase when we detach a task

The effect of these fixes is better cgroups balancing.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Chris Mason <clm@fb.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Paul Turner <pjt@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Yuyang Du <yuyang.du@intel.com>
Link: http://lkml.kernel.org/r/1510842112-21028-1-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-12-06 19:30:50 +01:00
Omar Sandoval
c6b9d9a330 sched/wait: Fix add_wait_queue() behavioral change
The following cleanup commit:

  50816c4899 ("sched/wait: Standardize internal naming of wait-queue entries")

... unintentionally changed the behavior of add_wait_queue() from
inserting the wait entry at the head of the wait queue to the tail
of the wait queue.

Beyond a negative performance impact this change in behavior
theoretically also breaks wait queues which mix exclusive and
non-exclusive waiters, as non-exclusive waiters will not be
woken up if they are queued behind enough exclusive waiters.
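
The fix restores head insertion for the non-exclusive entry point; a
sketch of the corrected function in kernel/sched/wait.c:

void add_wait_queue(struct wait_queue_head *wq_head,
		    struct wait_queue_entry *wq_entry)
{
	unsigned long flags;

	wq_entry->flags &= ~WQ_FLAG_EXCLUSIVE;
	spin_lock_irqsave(&wq_head->lock, flags);
	__add_wait_queue(wq_head, wq_entry);	/* head, not tail */
	spin_unlock_irqrestore(&wq_head->lock, flags);
}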

Signed-off-by: Omar Sandoval <osandov@fb.com>
Reviewed-by: Jens Axboe <axboe@kernel.dk>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-team@fb.com
Fixes: ("sched/wait: Standardize internal naming of wait-queue entries")
Link: http://lkml.kernel.org/r/a16c8ccffd39bd08fdaa45a5192294c784b803a7.1512544324.git.osandov@fb.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-12-06 19:30:34 +01:00
Peter Zijlstra
5e351ad106 locking/lockdep: Fix possible NULL deref
We can't invalidate xhlocks when we've not yet allocated any.

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Fixes: f52be57080 ("locking/lockdep: Untangle xhlock history save/restore from task independence")
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-12-06 19:29:56 +01:00
Brendan Jackman
5b1ead6800 cpu/hotplug: Fix state name in takedown_cpu() comment
CPUHP_AP_SCHED_MIGRATE_DYING doesn't exist, it looks like this was
supposed to refer to CPUHP_AP_SCHED_STARTING's teardown callback,
i.e. sched_cpu_dying().

Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Quentin Perret <quentin.perret@arm.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20171206105911.28093-1-brendan.jackman@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-12-06 19:28:45 +01:00
Will Deacon
d96cc49bff arm64: SW PAN: Update saved ttbr0 value on enter_lazy_tlb
enter_lazy_tlb is called when a kernel thread rides on the back of
another mm, due to a context switch or an explicit call to unuse_mm
where a call to switch_mm is elided.

In these cases, it's important to keep the saved ttbr value up to date
with the active mm, otherwise we can end up with a stale value which
points to a potentially freed page table.

This patch implements enter_lazy_tlb for arm64, so that the saved ttbr0
is kept up-to-date with the active mm for kernel threads.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Vinayak Menon <vinmenon@codeaurora.org>
Cc: <stable@vger.kernel.org>
Fixes: 39bc88e5e3 ("arm64: Disable TTBR0_EL1 during normal kernel execution")
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Vinayak Menon <vinmenon@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-12-06 18:28:10 +00:00
Will Deacon
0adbdfde8c arm64: SW PAN: Point saved ttbr0 at the zero page when switching to init_mm
update_saved_ttbr0 mandates that mm->pgd is not swapper, since swapper
contains kernel mappings and should never be installed into ttbr0. However,
this means that callers must avoid passing the init_mm to update_saved_ttbr0
which in turn can cause the saved ttbr0 value to be out-of-date in the context
of the idle thread. For example, EFI runtime services may leave the saved ttbr0
pointing at the EFI page table, and kernel threads may end up with stale
references to freed page tables.

This patch changes update_saved_ttbr0 so that the init_mm points the saved
ttbr0 value to the empty zero page, which always exists and never contains
valid translations. EFI and switch can then call into update_saved_ttbr0
unconditionally.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Vinayak Menon <vinmenon@codeaurora.org>
Cc: <stable@vger.kernel.org>
Fixes: 39bc88e5e3 ("arm64: Disable TTBR0_EL1 during normal kernel execution")
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Vinayak Menon <vinmenon@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-12-06 18:28:10 +00:00
Dave Martin
8884b7bd7e arm64: fpsimd: Abstract out binding of task's fpsimd context to the cpu.
There is currently some duplicate logic to associate current's
FPSIMD context with the cpu when loading FPSIMD state into the cpu
regs.

Subsequent patches will update that logic, so in order to ensure it
only needs to be done in one place, this patch factors the relevant
code out into a new function fpsimd_bind_to_cpu().

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-12-06 18:28:10 +00:00
Dave Martin
071b6d4a5d arm64: fpsimd: Prevent registers leaking from dead tasks
Currently, loading of a task's fpsimd state into the CPU registers
is skipped if that task's state is already present in the registers
of that CPU.

However, the code relies on the struct fpsimd_state * (and by
extension struct task_struct *) to unambiguously identify a task.

There is a particular case in which this doesn't work reliably:
when a task exits, its task_struct may be recycled to describe a
new task.

Consider the following scenario:

 1) Task P loads its fpsimd state onto cpu C.
        per_cpu(fpsimd_last_state, C) := P;
        P->thread.fpsimd_state.cpu := C;

 2) Task X is scheduled onto C and loads its fpsimd state on C.
        per_cpu(fpsimd_last_state, C) := X;
        X->thread.fpsimd_state.cpu := C;

 3) X exits, causing X's task_struct to be freed.

 4) P forks a new child T, which obtains X's recycled task_struct.
	T == X.
	T->thread.fpsimd_state.cpu == C (inherited from P).

 5) T is scheduled on C.
	T's fpsimd state is not loaded, because
	per_cpu(fpsimd_last_state, C) == T (== X) &&
	T->thread.fpsimd_state.cpu == C.

        (This is the check performed by fpsimd_thread_switch().)

So, T gets X's registers because the last registers loaded onto C
were those of X, in (2).

This patch fixes the problem by ensuring that the sched-in check
fails in (5): fpsimd_flush_task_state(T) is called when T is
forked, so that T->thread.fpsimd_state.cpu == C cannot be true.
This relies on the fact that T is not schedulable until after
copy_thread() completes.

Once T's fpsimd state has been loaded on some CPU C there may still
be other cpus D for which per_cpu(fpsimd_last_state, D) ==
&X->thread.fpsimd_state.  But D is necessarily != C in this case,
and the check in (5) must fail.
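In sketch form, the fix hooks the fork path (assuming the helper
matches its description):

/* in copy_thread(): the new thread's state is never live in any CPU */
fpsimd_flush_task_state(p);

/* where the helper simply disassociates the task from every cpu: */
void fpsimd_flush_task_state(struct task_struct *t)
{
	t->thread.fpsimd_state.cpu = NR_CPUS;	/* matches no real CPU */
}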

An alternative fix would be to do refcounting on task_struct.  This
would result in each CPU holding a reference to the last task whose
fpsimd state was loaded there.  It's not clear whether this is
preferable, and it involves higher overhead than the fix proposed
in this patch.  It would also move all the task_struct freeing
work into the context switch critical section, or otherwise some
deferred cleanup mechanism would need to be introduced, neither of
which seems obviously justified.

Cc: <stable@vger.kernel.org>
Fixes: 005f78cd88 ("arm64: defer reloading a task's FPSIMD state to userland resume")
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[will: word-smithed the comment so it makes more sense]
Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-12-06 18:02:21 +00:00
Dan Carpenter
1ab134ca31 xen/pvcalls: Fix a check in pvcalls_front_remove()
bedata->ref can't be less than zero because it's unsigned.  This affects
certain error paths in probe.  We first set ->ref = -1 and then we set
it to a valid value later.

Fixes: 2196819099 ("xen/pvcalls: connect to the backend")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
2017-12-06 09:44:49 -05:00
Dan Carpenter
8c71fa88f7 xen/pvcalls: check for xenbus_read() errors
Smatch complains that "len" is uninitialized if xenbus_read() fails,
so let's add some error handling.
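
The usual shape of such a fix (a sketch; the node name here is
illustrative):

char *str;
unsigned int len;

str = xenbus_read(XBT_NIL, dev->nodename, "versions", &len);
if (IS_ERR(str))
	return PTR_ERR(str);	/* never look at len on failure */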

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
2017-12-06 09:44:43 -05:00
Christian König
e60bb46b57 drm/ttm: swap consecutive allocated pooled pages v4
When we detect consecutive allocation of pages, swap them to avoid
accidentally freeing them as a huge page.

v2: use swap
v3: check if it's really the first allocated page
v4: don't touch the loop variable

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Roger He <Hongbo.He@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2017-12-06 09:28:31 -05:00
Michael Ellerman
d810418208 powerpc/xmon: Don't print hashed pointers in xmon
Since commit ad67b74d24 ("printk: hash addresses printed with %p")
pointers printed with %p are hashed, ie. you don't see the actual
pointer value but rather a cryptographic hash of its value.

In xmon we want to see the actual pointer values, because xmon is a
debugger, so replace %p with %px which prints the actual pointer
value.

We justify doing this in xmon because 1) xmon is a kernel crash
debugger, accessible only via the console, and 2) xmon doesn't print to
dmesg, so the pointers it prints cannot be leaked that way.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-12-07 00:27:01 +11:00
Nicholas Piggin
371b80447f powerpc/64s: Initialize ISAv3 MMU registers before setting partition table
kexec can leave MMU registers set when booting into a new kernel,
the PIDR (Process Identification Register) in particular. The boot
sequence does not zero PIDR, so it only gets set when CPUs first
switch to a userspace processes (until then it's running a kernel
thread with effective PID = 0).

This leaves a window where a process table entry and page tables are
set up due to user processes running on other CPUs, that happen to
match with a stale PID. The CPU with that PID may cause speculative
accesses that address quadrant 0 (aka userspace addresses), which will
result in cached translations and PWC (Page Walk Cache) for that
process, on a CPU which is not in the mm_cpumask and so they will not
be invalidated properly.

The most common result is the kernel hanging in infinite page fault
loops soon after kexec (usually in schedule_tail, which is usually the
first non-speculative quadrant 0 access to a new PID) due to a stale
PWC. However being a stale translation error, it could result in
anything up to security and data corruption problems.

Fix this by zeroing out PIDR at boot and kexec.
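
The fix boils down to clearing the register during MMU init (a sketch;
the exact call sites live in the radix setup path):

/* run at boot and again on kexec entry */
mtspr(SPRN_PID, 0);
isync();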

Fixes: 7e381c0ff6 ("powerpc/mm/radix: Add mmu context handling callback for radix")
Cc: stable@vger.kernel.org # v4.7+
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-12-06 23:32:43 +11:00
Andy Lutomirski
5b06bbcfc2 x86/power: Fix some ordering bugs in __restore_processor_context()
__restore_processor_context() had a couple of ordering bugs.  It
restored GSBASE after calling load_gs_index(), and the latter can
call into tracing code.  It also tried to restore segment registers
before restoring the LDT, which is straight-up wrong.

Reorder the code so that we restore GSBASE, then the descriptor
tables, then the segments.

This fixes two bugs.  First, it fixes a regression that broke resume
under certain configurations due to irqflag tracing in
native_load_gs_index().  Second, it fixes resume when the userspace
process that initiated suspend had funny segments.  The latter can be
reproduced by compiling this:

// SPDX-License-Identifier: GPL-2.0
/*
 * ldt_echo.c - Echo argv[1] while using an LDT segment
 */

#define _GNU_SOURCE	/* for asprintf() */
#include <err.h>
#include <errno.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <asm/ldt.h>	/* struct user_desc */

int main(int argc, char **argv)
{
	int ret;
	int len;	/* int, not size_t: asprintf() returns -1 on failure */
	char *buf;

	const struct user_desc desc = {
		.entry_number    = 0,
		.base_addr       = 0,
		.limit           = 0xfffff,
		.seg_32bit       = 1,
		.contents        = 0, /* Data, grow-up */
		.read_exec_only  = 0,
		.limit_in_pages  = 1,
		.seg_not_present = 0,
		.useable         = 0
	};

	if (argc != 2)
		errx(1, "Usage: %s STRING", argv[0]);

	len = asprintf(&buf, "%s\n", argv[1]);
	if (len < 0)
		errx(1, "Out of memory");

	ret = syscall(SYS_modify_ldt, 1, &desc, sizeof(desc));
	if (ret < -1)
		errno = -ret;
	if (ret)
		err(1, "modify_ldt");

	/* selector 7 = LDT entry 0, RPL 3 */
	asm volatile ("movw %0, %%es" :: "rm" ((unsigned short)7));
	write(1, buf, len);
	return 0;
}

and running ldt_echo >/sys/power/mem

Without the fix, the latter causes a triple fault on resume.

Fixes: ca37e57bbe ("x86/entry/64: Add missing irqflags tracing to native_load_gs_index()")
Reported-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lkml.kernel.org/r/6b31721ea92f51ea839e79bd97ade4a75b1eeea2.1512057304.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-12-06 12:29:12 +01:00
Rafael J. Wysocki
ddec3bdee0 x86/PCI: Make broadcom_postcore_init() check acpi_disabled
acpi_os_get_root_pointer() may return a valid address even if
acpi_disabled is set, but the host bridge information from the ACPI
tables is not going to be used in that case, and the Broadcom host
bridge initialization should not be skipped then. So make
broadcom_postcore_init() check acpi_disabled too to avoid this issue.
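
The check then becomes, roughly (a sketch of the guard, not the full
function):

static int __init broadcom_postcore_init(void)
{
	/* only trust the ACPI root pointer when ACPI is actually in use */
	if (!acpi_disabled && acpi_os_get_root_pointer())
		return 0;
	/* ... probe the CNB20LE host bridge ... */
	return 0;
}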

Fixes: 6361d72b04 (x86/PCI: read Broadcom CNB20LE host bridge info before PCI scan)
Reported-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Linux PCI <linux-pci@vger.kernel.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/3186627.pxZj1QbYNg@aspire.rjw.lan
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-12-06 12:27:47 +01:00
Tom Lendacky
f4e9b7af0c x86/microcode/AMD: Add support for fam17h microcode loading
The size for the Microcode Patch Block (MPB) for an AMD family 17h
processor is 3200 bytes.  Add a #define for fam17h so that it does
not default to 2048 bytes and fail a microcode load/update.
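
Based on the description, the change amounts to a sizing constant plus
a family case; the helper below is an illustrative sketch:

#define F1XH_MPB_MAX_SIZE 2048	/* old default */
#define F17H_MPB_MAX_SIZE 3200	/* family 17h patches are larger */

static unsigned int mpb_max_size(u8 family)
{
	/* without the fam17h case, a 3200-byte patch fails the size check */
	return family == 0x17 ? F17H_MPB_MAX_SIZE : F1XH_MPB_MAX_SIZE;
}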

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@alien8.de>
Link: https://lkml.kernel.org/r/20171130224640.15391.40247.stgit@tlendack-t1.amdoffice.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-12-06 12:27:24 +01:00
Rudolf Marek
e3811a3f74 x86/cpufeatures: Make X86_BUG_FXSAVE_LEAK detectable in CPUID on AMD
The latest AMD AMD64 Architecture Programmer's Manual
adds a CPUID feature XSaveErPtr (CPUID_Fn80000008_EBX[2]).

If this feature is set, the FXSAVE, XSAVE, FXSAVEOPT, XSAVEC, XSAVES
/ FXRSTOR, XRSTOR, XRSTORS always save/restore error pointers,
thus making the X86_BUG_FXSAVE_LEAK workaround obsolete on such CPUs.

Signed-off-by: Rudolf Marek <r.marek@assembler.cz>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
Tested-by: Borislav Petkov <bp@suse.de>
Cc: Andy Lutomirski <luto@amacapital.net>
Link: https://lkml.kernel.org/r/bdcebe90-62c5-1f05-083c-eba7f08b2540@assembler.cz
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-12-06 12:27:13 +01:00
Daniel Vetter
a703c55004 drm: safely free connectors from connector_iter
In

commit 613051dac4
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date:   Wed Dec 14 00:08:06 2016 +0100

    drm: locking&new iterators for connector_list

we went to extreme lengths to make sure connector iteration works in
any context, without introducing any additional locking context. This
worked, except for a small fumble in the implementation:

When we actually race with a concurrent connector unplug event, and
our temporary connector reference turns out to be the final one, then
everything breaks: We call the connector release function from
whatever context we happen to be in, which can be an irq/atomic
context. And connector freeing grabs all kinds of locks and stuff.

Fix this by creating a specially safe put function for connector_iter,
which (in this rare case) punts the cleanup to a worker.
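
The idea, sketched generically (not the drm implementation; "conn" and
its fields are hypothetical):

static void conn_release(struct kref *kref)
{
	struct conn *c = container_of(kref, struct conn, ref);

	/* may run in irq/atomic context: defer the lock-taking teardown */
	schedule_work(&c->free_work);
}

/* safe put usable from any context */
kref_put(&c->ref, conn_release);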

Reported-by: Ben Widawsky <ben@bwidawsk.net>
Cc: Ben Widawsky <ben@bwidawsk.net>
Fixes: 613051dac4 ("drm: locking&new iterators for connector_list")
Cc: Dave Airlie <airlied@gmail.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sean Paul <seanpaul@chromium.org>
Cc: <stable@vger.kernel.org> # v4.11+
Reviewed-by: Dave Airlie <airlied@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171204204818.24745-1-daniel.vetter@ffwll.ch
2017-12-06 10:22:55 +01:00
Zhenyu Wang
11474e9091 drm/i915/gvt: set max priority for gvt context
This works around a guest driver hang regression seen after preemption
was enabled, since gvt hasn't yet enabled handling of preemption for
guest workloads. In effect this disables preemption for the gvt context
for now.

Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
(cherry picked from commit 1603660b33)
2017-12-06 11:38:21 +08:00
Zhenyu Wang
ac7688c039 drm/i915/gvt: Don't mark vgpu context as inactive when preempted
We shouldn't mark a vGPU context as inactive when it is preempted,
since it will still be re-scheduled later. So keep the active state.

Fixes: d6c0511300 ("drm/i915/execlists: Distinguish the incomplete context notifies")
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
(cherry picked from commit da5f99eacc)
2017-12-06 11:34:10 +08:00
Xiong Zhang
29f9e42597 drm/i915/gvt: Limit read hw reg to active vgpu
mmio_read_from_hw() lets a vGPU read hardware registers. If the vGPU's
workload is running on the hardware, that is fine; otherwise the vGPU
will read another vGPU's register value, which is unsafe.

This patch limits such hardware access to the active vGPU. If a vGPU
isn't running on the hardware, its register reads return the last
active value, saved at schedule_out.

v2: the ring timestamp advances continuously even when the ring is
    idle, so read the hardware directly. (Zhenyu)

Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
(cherry picked from commit 295764cd2f)
2017-12-06 11:33:30 +08:00
Zhi Wang
365ad5df9c drm/i915/gvt: Export intel_gvt_render_mmio_to_ring_id()
Since a lot of the emulation logic needs to convert the offset of ring
registers into a ring id, export it for other callers that might need it.

Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
(cherry picked from commit 62a6a53786)
2017-12-06 11:33:20 +08:00
Changbin Du
add7e4fc24 drm/i915/gvt: Emulate PCI expansion ROM base address register
Our vGPU doesn't have a device ROM, so we need to follow the PCI spec
to report this to drivers. Otherwise, we see the errors below.

Inspecting possible rom at 0xfe049000 (vd=8086:1912 bdf=00:10.0)
qemu-system-x86_64: vfio-pci: Cannot read device rom at 00000000-0000-0000-0000-000000000001
Device option ROM contents are probably invalid (check dmesg).
Skip option ROM probe with rombar=0, or load from file with romfile=No option rom signature (got 4860)

I will also send an improvement patch to the PCI subsystem related to
the PCI ROM. But there is no way to omit the error below, since there
is no pattern for detecting a vbios shadow without touching its content.
0000:00:10.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0x0000

Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
(cherry picked from commit c4270d122c)
2017-12-06 11:24:37 +08:00
Linus Torvalds
328b4ed93b x86: don't hash faulting address in oops printout
Things like this will probably keep showing up for other architectures
and other special cases.

I actually thought we already used %lx for this, and that is indeed
_historically_ the case, but we moved to %p when merging the 32-bit and
64-bit cases as a convenient way to get the formatting right (ie
automatically picking "%08lx" vs "%016lx" based on register size).

So just turn this %p into %px.

Reported-by: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-12-05 17:59:29 -08:00
Kees Cook
b562c171cf locking/refcounts: Do not force refcount_t usage as GPL-only export
The refcount_t protection on x86 was not intended to use the stricter
GPL export. This adjusts the linkage again to avoid a regression in
the availability of the refcount API.

Reported-by: Dave Airlie <airlied@gmail.com>
Fixes: 7a46ec0e2f ("locking/refcounts, x86/asm: Implement fast refcount overflow protection")
Cc: stable@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-12-05 17:14:31 -08:00
David S. Miller
b9f242047f Merge branch 'macb-rx-filter-cleanups'
Julia Cartwright says:

====================
macb rx filter cleanups

Here's a proper patchset based on net-next.

v1 -> v2:
  - Rebased on net-next
  - Add Nicolas's Acks
  - Reorder commits, putting the list_empty() cleanups prior to the
    others.
  - Added commit reverting the GFP_ATOMIC change.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-05 20:08:04 -05:00
Julia Cartwright
cc1674eeee net: macb: change GFP_ATOMIC to GFP_KERNEL
Now that the rx_fs_lock is no longer held across allocation, it's safe
to use GFP_KERNEL for allocating new entries.

This reverts commit 81da3bf6e3 ("net: macb: change GFP_KERNEL to
GFP_ATOMIC").

Cc: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Julia Cartwright <julia@ni.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-05 20:08:03 -05:00
Julia Cartwright
7038cdb7d6 net: macb: reduce scope of rx_fs_lock-protected regions
Commit ae8223de3d ("net: macb: Added support for RX filtering")
introduces a lock, rx_fs_lock which is intended to protect the list of
rx_flow items and synchronize access to the hardware rx filtering
registers.

However, the region protected by this lock is overscoped, unnecessarily
including things like slab allocation.  Reduce this lock scope to only
include operations which must be performed atomically: list traversal,
addition, and removal, and hitting the macb filtering registers.

This fixes the use of kmalloc w/ GFP_KERNEL in atomic context.
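
The resulting pattern, in sketch form (names follow the commit
description; macb_update_hw_filter() is a hypothetical helper standing
in for the register writes):

newfs = kmalloc(sizeof(*newfs), GFP_KERNEL);	/* outside the lock */
if (!newfs)
	return -ENOMEM;

spin_lock(&bp->rx_fs_lock);
list_add(&newfs->list, &bp->rx_fs_list);	/* atomic part only:    */
macb_update_hw_filter(bp, newfs);		/* list + filtering regs */
spin_unlock(&bp->rx_fs_lock);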

Fixes: ae8223de3d ("net: macb: Added support for RX filtering")
Cc: Rafal Ozieblo <rafalo@cadence.com>
Cc: Julia Lawall <julia.lawall@lip6.fr>
Acked-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Signed-off-by: Julia Cartwright <julia@ni.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-05 20:08:03 -05:00
Julia Cartwright
a3da8adcb5 net: macb: kill useless use of list_empty()
The list_for_each_entry() macro already handles the case where the list
is empty (by not executing the loop body).  It's not necessary to handle
this case specially, so stop doing so.

Cc: Rafal Ozieblo <rafalo@cadence.com>
Acked-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Signed-off-by: Julia Cartwright <julia@ni.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-05 20:08:03 -05:00
Al Viro
8e1611e235 make sock_alloc_file() do sock_release() on failures
This changes the calling conventions (and simplifies the hell out of
the callers).  New rules: once a struct socket has been passed to
sock_alloc_file(), it is consumed either by the struct file or by the
sock_release() done by sock_alloc_file().  Either way the caller must
not call sock_release() after that point.
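
Under the new convention a caller looks like this (sketch):

struct file *file = sock_alloc_file(sock, flags, NULL);

if (IS_ERR(file))
	return PTR_ERR(file);	/* sock already released; no sock_release() here */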

Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-05 18:39:29 -05:00
Al Viro
016a266bdf socketpair(): allocate descriptors first
simplifies failure exits considerably...

Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-05 18:39:28 -05:00
Al Viro
a5739435b5 fix kcm_clone()
1) it's fput() or sock_release(), not both
2) don't do fd_install() until the last failure exit.
3) not a bug per se, but... don't attach socket to struct file
   until it's set up.

Take reserving descriptor into the caller, move fd_install() to the
caller, sanitize failure exits and calling conventions.

Cc: stable@vger.kernel.org # v4.6+
Acked-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-05 18:39:28 -05:00
Mohamed Ghannam
69c64866ce dccp: CVE-2017-8824: use-after-free in DCCP code
Whenever the sock object is in the DCCP_CLOSED state,
dccp_disconnect() must free dccps_hc_tx_ccid and dccps_hc_rx_ccid and
set them to NULL.
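
Consistent with that description, the disconnect path gains (sketch):

ccid_hc_rx_delete(dp->dccps_hc_rx_ccid, sk);
ccid_hc_tx_delete(dp->dccps_hc_tx_ccid, sk);
dp->dccps_hc_rx_ccid = NULL;
dp->dccps_hc_tx_ccid = NULL;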

Signed-off-by: Mohamed Ghannam <simo.ghannam@gmail.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-05 18:08:53 -05:00